notes-alessiosignorini
Alessio's Notes
53 posts
Nerdy notes on things that took me a while to figure out and that I don't want to forget
notes-alessiosignorini · 4 months ago
IPv6 Pihole on Cloudkey v1
Over the past few months I noticed an increase in ads during my web browsing at home, especially after I enabled IPv6 support on my network. I have Pihole installed but set up only for IPv4, so the provider's IPv6 DNS server was added automatically to my devices and many DNS queries were going through it.
First thing, let's make sure the Cloudkey v1 has an IPv6 address. It was not the case for me. To fix that I did two things:
Enable IPv6 in the Unifi Controller (via Settings->Internet->WAN->IPv6 Connection->DHCPv6)
Enable IPv6 Prefix Delegation on the networks that need it (via Settings->Networks->Default->IPv6 Interface Type->Prefix Delegation). Remember to also enable "Router Advertisement (RA)". In "DHCPv6/RDNSS DNS Control" specify the IPv6 address of the DNS server (in my case, the Pihole one, "2600:6502:8792:4d00::8888").
With this, the Cloudkey should get an IPv6 address via DHCP. The problem is that the address needs to be fixed for us to use it as a DNS server.
So after rebooting the Cloudkey and logging into it (ssh ubnt@192.168.1.2) I used "ifconfig" to determine its IPv6 address. I did the same on my laptop. The first 4 groups (e.g., 2600:6502:8792:4d00) were the same, so I went into "/etc/systemd/network/eth0.network" and set it up this way
[Match]
Name = eth0

[Network]
DHCP = ipv6
Address = 2600:6502:8792:4d00::8888/64
IPv6SendRA=true
Address = 192.168.1.2/24

[Fallback]
Address = 192.168.1.2
Netmask = 255.255.255.0
In this way it will use DHCP to get an IPv6 address, but we also set a fixed one (the one that ends in ::8888). This will allow the gateway for IPv6 to be configured correctly and automatically. The "IPv6SendRA=true" will announce the address/route.
I tested this with
ping6 ipv6.google.com
ping6 2600:6502:8792:4d00::8888
from the Security Gateway and also various machines. I also used "https://whatismyipaddress.com" to make sure I had an IPv6 address.
Having a fixed IP address allows me to use it in the DNS configuration of my devices. Thanks to item (2) above, it is automatically added to the devices via DHCP.
What I have not been able to do yet is use NAT to intercept any IPv6 DNS request directed elsewhere and redirect it to this DNS server, like I do for IPv4. But since I am using IPv6 only on my personal machines and devices, I don't have to worry about it as much as I did for the rest of the IoT devices on my network (for which I have a NAT rule).
notes-alessiosignorini · 2 years ago
PiHole DNS not Responding, disk full
The Internet seemed unreachable at my house. After checking with the provider I determined it was due to my pihole being down.
Logging into the dashboard of my pihole showed "Lost Connection to API". This indicated an issue with the pihole-FTL service.
After logging into the UniFi CloudKey where I installed pihole, I used df -h to determine that the disk was full
root@UniFi-CloudKey:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
aufs-root       2.9G  2.9G     0 100% /
udev             10M     0   10M   0% /dev
It was due to 3 things:
apt cache at "/var/cache/apt/archives"
CloudKey backups at "/data/autobackup"
pihole-FTL database at "/etc/pihole/pihole-FTL.db"
You can clean up the first with "apt-get autoclean". For the second, you can manually delete some of the old backups, but perhaps you should also set a better backup retention policy in your CloudKey.
The third one accumulates every query ever made against your pihole (18M in the past 2 years for me) unless you set something like MAXDBDAYS=90 in /etc/pihole/pihole-FTL.conf. Mine was 1.4GB.
You can stop pihole-FTL with "service pihole-FTL stop", delete the file, and restart it, if you want. Or perform a more surgical cleanup by deleting old entries directly from the database before restarting it.
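Roughly, the cleanup looks like this. It is a sketch: the surgical DELETE assumes the standard pihole-FTL long-term database layout (a queries table with a Unix-epoch timestamp column), so double-check yours before running it.

# reclaim the apt cache
apt-get autoclean

# keep the query database bounded going forward
echo "MAXDBDAYS=90" >> /etc/pihole/pihole-FTL.conf

# blunt approach: stop FTL, drop the whole query database, restart
service pihole-FTL stop
rm /etc/pihole/pihole-FTL.db
service pihole-FTL start

# surgical alternative: drop only queries older than ~90 days, then compact the file
service pihole-FTL stop
sqlite3 /etc/pihole/pihole-FTL.db \
  "DELETE FROM queries WHERE timestamp < CAST(strftime('%s','now','-90 days') AS INTEGER); VACUUM;"
service pihole-FTL start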
notes-alessiosignorini · 2 years ago
Check Old/Dead Domains
The other day I took a look at my Route53 table and wow, so many domains, with TXT, CNAMEs, verifications, etc, accumulated over the last 10 years.
It could be that most of those were dead, that the machine behind them was gone, or that the verification was stale. But how to tackle 568 domains?
With a script, to start!
#!/bin/zsh

# reads domains from standard input, one per line
while IFS= read -r line; do
  ports=$(nmap -Pn -p22,80,443,8080,8888 $line 2>/dev/null | grep -c open)
  http=$(curl -m 3 -L -sIXGET "http://$line" | grep HTTP | tail -n 1)
  echo "$line\t$ports\t$http"
done
Given a file with one domain per line (it's OK if they have a DOT at the end), it checks for commonly open ports (e.g., HTTP, SSH, etc.) and looks at the HTTP response to a GET. Results are printed to standard output, tab-separated.
It uses Nmap and cURL.
notes-alessiosignorini · 3 years ago
Enable IPv6 on Unifi Gateway Networks
I was not able to SSH into any IPv6 servers. Simple test commands were not working
ping6 ipv6.google.com
ping6: connect: Network is unreachable
The issue was that I had not configured my home network to know what to do with IPv6. Here is how I solved it.
1) Connect to local UI of Unifi Controller - Go to its local address (e.g., 192.168.1.2:8443) and then into Settings.
2) Enable IPv6 Delegation on Controller - Go to Settings -> Internet. There edit the WAN, go to IPv6 Connection settings, and select "DHCPv6" as "IPv6 Connection" and try 56 for the "Prefix Delegation Size".
3) Enable IPv6 on Network - Go to Settings -> Networks and edit the network that you desire to make IPv6-enabled. Select "Prefix Delegation" for "IPv6 Interface Type". I left the other settings untouched.
Once you have done that you may need to restart the network on the devices you want to make IPv6-capable, but once things work both ping6 ipv6.google.com and ip -6 route should succeed.
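A quick verification sketch from a client on the IPv6-enabled network (exact addresses and routes will of course differ):

# should get replies once delegation and RA are working
ping6 -c 3 ipv6.google.com

# should list a default IPv6 route learned via router advertisement
ip -6 route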
notes-alessiosignorini · 3 years ago
Can’t disable Pihole, session expired
Today my Pihole was acting weird. It was not letting me disable it, the dashboard was not loading properly, and it was hard to look at the query log. A reboot helped but did not solve things.
Trying to restart the service from the Settings returned the message "Session expired! Please re-login on the Pi-hole dashboard.", which is strange since I did not set a username/password in my installation. I could also see that response from api.php when looking at the traffic while trying to temporarily disable Pihole.
I logged on the machine and discovered that the APT cache had used up all the disk space and nothing could write to /var/ anymore.
I cleaned up the disk by deleting old logs and running apt-get clean, recovering 1.3GB of space. I rebooted the machine (and pihole) and things started working again.
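The recovery was just standard disk housekeeping; a sketch of the commands (log file names vary, so check what du reports before deleting anything):

df -h                            # confirm the partition is full
du -xh /var | sort -h | tail     # find the biggest offenders
apt-get clean                    # drop cached .deb files
rm /var/log/*.1 /var/log/*.gz    # remove old rotated/compressed logs
reboot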
notes-alessiosignorini · 3 years ago
Cleaning up Street Addresses using Google Geocoding API
I had a bunch of addresses in a file, in random formats (e.g., "1 State St #3 SB CA 93101"), which I had to normalize so they could be parsed by a program that prints envelopes.
Instead of doing the work manually I decided to write some code. I used the Google Geocoding API for the normalization as it was easy to use and fast to work with (I just had to obtain credentials).
In a few minutes I was ready to write queries like
curl -s "https://maps.googleapis.com/maps/api/geocode/json?key=APIKEY&address=1+State+St+SB+CA+93101"
and obtain JSON formatted results. I then wrote a little bash script
while read p; do
  # URL-encode the spaces in the address
  t=`echo "$p" | sed 's/ /+/g'`
  # geocode it and extract the value of formatted_address from the JSON response
  curl -s "https://maps.googleapis.com/maps/api/geocode/json?key=APIKEY&address=$t" | grep -o 'formatted_address.*' | cut -c 23- | cut -d '"' -f 1
done
and ran it as
./clean-addresses.sh < addresses.txt
to obtain my normalized list of addresses. It probably took as much as doing all the parsing manually but it was definitely more fun! :)
notes-alessiosignorini · 4 years ago
Find the Fastest DNS Servers for your Wifi
DNS resolution is generally fast, but could be faster. With all the extra requests that modern web pages make, a 50~200ms hostname resolution adds up. Wifi and cable routers generally proxy DNS requests to the provider's servers obtained via DHCP. But is that the fastest setup?
A few years ago we all used Namebench to find the fastest DNS servers to use. The tool is outdated and does not work on modern computers. There is an effort to rewrite it in Go but that too seems to have stopped.
Yesterday I decided to hack together a quick script to do the majority of the work.
The script assumes you have a servers.csv file with the list of the DNS servers to test and a hosts.csv file with the list of hostnames to resolve. Every entry has to be on a new line.
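A sketch of what such a script can look like (assuming dig is installed; it mirrors the results/EPOCH/SERVER layout visible in the output below, and sums the ";; Query time: N msec" lines that dig prints):

#!/bin/bash
# time the resolution of every host in hosts.csv against every server in
# servers.csv, keep the raw dig output under results/EPOCH/SERVER, and print
# "total_ms results/EPOCH/SERVER", fastest first
epoch=$(date +%s)
mkdir -p "results/$epoch"

while IFS= read -r server; do
  outfile="results/$epoch/$server"
  while IFS= read -r host; do
    # +time/+tries keep a dead server from stalling the whole run
    dig +time=2 +tries=1 @"$server" "$host" >> "$outfile"
  done < hosts.csv
  total=$(grep 'Query time' "$outfile" | awk '{sum += $4} END {print sum}')
  echo "$total $outfile"
done < servers.csv | sort -n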
I used about 20 DNS servers (e.g., my router, my provider's, 1.1.1.1, 8.8.8.8, other well known ones) and about 100 hosts (i.e., top hosts in Italy). It ran in less than a minute and produced this output
2833 results/1625085538/213.205.32.70
2870 results/1625085538/1.1.1.1
3357 results/1625085538/212.216.112.112
3390 results/1625085538/212.216.172.62
3792 results/1625085538/8.8.8.8
3878 results/1625085538/8.8.4.4
5190 results/1625085538/192.168.1.1
5856 results/1625085538/4.2.2.6
6196 results/1625085538/4.2.2.3
7561 results/1625085538/4.2.2.5
7627 results/1625085538/4.2.2.4
7664 results/1625085538/213.205.36.70
This indicates that the fastest were 213.205.32.70 and 1.1.1.1 and that the current setup of my own router (192.168.1.1) was almost 2x slower. I configured the first two as DNS servers to use in the router (instead of accepting the provider's) and now all devices on the network enjoy faster DNS resolutions. :)
PS: in results/EPOCH/* you will find the detailed requests/responses for each server tested.
notes-alessiosignorini · 4 years ago
Fix MongoDB Issues on Unifi CloudKey Gen 1
Today I had to restart my network hardware several times due to some issues with the cable provider (COX) which they did not want to admit. When it was all finally resolved (by them!) I discovered I could no longer log in to my Cloud Key or my Gateway Controller.
I could connect to 192.168.1.2, where I was welcomed by the usual 2 buttons to log in to either interface, but they did not work right. On some occasions I was able to log in to the Cloud Key, but it would say that unifi was stopped. On other occasions I would get a blank screen when trying to log in to the Cloud Key panel. I was never able to log in to the Controller.
Luckily I could still SSH into the Cloud Key and, using its username/password (ubnt/...), I was given root access.
After much tribulation, while perusing /srv/unifi/logs/server.log I noticed some MongoDB issues that prevented the unifi process from starting
...
[2021-05-04T23:56:57,008] <db-server> INFO db - Tue May 4 23:56:56.998 [initandlisten] recover skipping application of section more...
[2021-05-04T23:56:57,051] <db-server> INFO db - Tue May 4 23:56:57.051 [initandlisten] recover /usr/lib/unifi/data/db/journal/j._74
[2021-05-04T23:56:57,083] <db-server> INFO db - Tue May 4 23:56:57.083 [initandlisten] couldn't uncompress journal section
[2021-05-04T23:56:57,084] <db-server> INFO db - Tue May 4 23:56:57.083 [initandlisten] Assertion: 15874:couldn't uncompress journal section
[2021-05-04T23:56:57,084] <db-server> INFO db - 0x48e13c 0x46f83a 0x45d190 0x27efdc 0x27f15e 0x27f4d4 0x27f718 0x27fe04 0x2800ba 0x273402 0x1681b0 0x1699dc 0x15004c 0x7678e632
[2021-05-04T23:56:57,097] <db-server> INFO db - bin/mongod(_ZN5mongo15printStackTraceERSo+0x17) [0x48e13c]
[2021-05-04T23:56:57,097] <db-server> INFO db - bin/mongod(_ZN5mongo10logContextEPKc+0xa9) [0x46f83a]
[2021-05-04T23:56:57,098] <db-server> INFO db - bin/mongod(_ZN5mongo11msgassertedEiPKc+0x67) [0x45d190]
[2021-05-04T23:56:57,098] <db-server> INFO db - bin/mongod(_ZN5mongo3dur11RecoveryJob14processSectionEPKNS0_11JSectHeaderEPKvjPKNS0_11JSectFooterE+0x613) [0x27efdc]
[2021-05-04T23:56:57,099] <db-server> INFO db - bin/mongod(_ZN5mongo3dur11RecoveryJob17processFileBufferEPKvj+0xe9) [0x27f15e]
[2021-05-04T23:56:57,099] <db-server> INFO db - bin/mongod(_ZN5mongo3dur11RecoveryJob11processFileEN5boost10filesystem4pathE+0x7f) [0x27f4d4]
[2021-05-04T23:56:57,099] <db-server> INFO db - bin/mongod(_ZN5mongo3dur11RecoveryJob2goERSt6vectorIN5boost10filesystem4pathESaIS5_EE+0xcf) [0x27f718]
[2021-05-04T23:56:57,100] <db-server> INFO db - bin/mongod(_ZN5mongo3dur8_recoverEv+0x4e3) [0x27fe04]
[2021-05-04T23:56:57,100] <db-server> INFO db - bin/mongod(_ZN5mongo3dur7recoverEv+0x15) [0x2800ba]
[2021-05-04T23:56:57,101] <db-server> INFO db - bin/mongod(_ZN5mongo3dur7startupEv+0x25) [0x273402]
[2021-05-04T23:56:57,101] <db-server> INFO db - bin/mongod(_ZN5mongo14_initAndListenEi+0x67f) [0x1681b0]
[2021-05-04T23:56:57,101] <db-server> INFO db - bin/mongod(_ZN5mongo13initAndListenEi+0xb) [0x1699dc]
[2021-05-04T23:56:57,102] <db-server> INFO db - bin/mongod(main+0x1d3) [0x15004c]
[2021-05-04T23:56:57,102] <db-server> INFO db - /lib/arm-linux-gnueabihf/libc.so.6(__libc_start_main+0x99) [0x7678e632]
[2021-05-04T23:56:57,105] <db-server> INFO db - Tue May 4 23:56:57.105 [initandlisten] dbexception during recovery: 15874 couldn't uncompress journal section
[2021-05-04T23:56:57,105] <db-server> INFO db - Tue May 4 23:56:57.105 [initandlisten] exception in initAndListen: 15874 couldn't uncompress journal section, terminating
[2021-05-04T23:56:57,106] <db-server> INFO db - Tue May 4 23:56:57.105 dbexit:
[2021-05-04T23:56:57,106] <db-server> INFO db - Tue May 4 23:56:57.105 [initandlisten] shutdown: going to close listening sockets...
[2021-05-04T23:56:57,107] <db-server> INFO db - Tue May 4 23:56:57.105 [initandlisten] shutdown: going to flush diaglog...
[2021-05-04T23:56:57,107] <db-server> INFO db - Tue May 4 23:56:57.105 [initandlisten] shutdown: going to close sockets...
[2021-05-04T23:56:57,107] <db-server> INFO db - Tue May 4 23:56:57.105 [initandlisten] shutdown: waiting for fs preallocator...
[2021-05-04T23:56:57,108] <db-server> INFO db - Tue May 4 23:56:57.105 [initandlisten] shutdown: lock for final commit...
[2021-05-04T23:56:57,108] <db-server> INFO db - Tue May 4 23:56:57.105 [initandlisten] shutdown: final commit...
[2021-05-04T23:56:57,152] <db-server> INFO db - Tue May 4 23:56:57.150 [initandlisten] shutdown: closing all files...
[2021-05-04T23:56:57,153] <db-server> INFO db - Tue May 4 23:56:57.151 [initandlisten] closeAllFiles() finished
[2021-05-04T23:56:57,153] <db-server> INFO db - Tue May 4 23:56:57.151 [initandlisten] shutdown: removing fs lock...
[2021-05-04T23:56:57,153] <db-server> INFO db - Tue May 4 23:56:57.152 dbexit: really exiting now
[2021-05-04T23:56:57,169] <db-server> INFO db - DbServer stopped
...
My guess is that the many restarts corrupted one of the journal files (/usr/lib/unifi/data/db/journal/j._74) and that it was unrecoverable.
To restore things, I simply deleted the file and restarted unifi with
systemctl restart unifi
It took a while (I monitored progress with tail -f /srv/unifi/logs/server.log in another window) but eventually it started cleanly and I was again able to log in to both the Cloud Key and the Gateway Controller.
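In short, the whole fix was roughly this (deleting a journal file discards any unflushed writes, so treat it as a last resort on an already-broken database):

rm /usr/lib/unifi/data/db/journal/j._74
systemctl restart unifi
tail -f /srv/unifi/logs/server.log    # wait until the controller reports it is up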
notes-alessiosignorini · 4 years ago
Installing Pi-Hole on CloudKey v1.1.19 (Debian 8 Jessie)
The Unifi CloudKey v1.1.19 runs Debian 8 (Jessie), which reached end of life in December 2020, so Pi-Hole's installer does not run out of the box. As of Pihole v5.3.1 you can still install it by pulling a few packages from Debian 9 (Stretch) to satisfy the missing dependencies.
1. Update the CloudKey to the latest firmware and controller
Right now those are v1.1.19 for the CloudKey and v6.1.71 for the Cloud Controller. I did it by SSHing into the CloudKey (ssh ubnt@192.168.1.2) and running apt update; apt upgrade, but you can do it via the web interface.
2. Add necessary packages from Debian 9 (Stretch)
Fetch (wget) and install (dpkg -i) the following packages; a sketch of the fetch-and-install loop follows the list.
http://ftp.us.debian.org/debian/pool/main/s/sqlite3/libsqlite3-0_3.16.2-5+deb9u1_armhf.deb
http://ftp.us.debian.org/debian/pool/main/n/ncurses/libtinfo5_6.0+20161126-1+deb9u2_armhf.deb
http://ftp.us.debian.org/debian/pool/main/r/readline/libreadline7_7.0-3_armhf.deb
http://ftp.us.debian.org/debian/pool/main/s/sqlite3/sqlite3_3.16.2-5+deb9u1_armhf.deb
http://ftp.us.debian.org/debian/pool/main/n/ncurses/libncurses5_6.0+20161126-1+deb9u2_armhf.deb
http://ftp.us.debian.org/debian/pool/main/n/ncurses/libncursesw5_6.0+20161126-1+deb9u2_armhf.deb
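Something like this does it in one go (run it from an empty scratch directory so the final dpkg -i only picks up these files; the directory name below is just an example):

mkdir -p /tmp/stretch-debs && cd /tmp/stretch-debs
for url in \
  http://ftp.us.debian.org/debian/pool/main/s/sqlite3/libsqlite3-0_3.16.2-5+deb9u1_armhf.deb \
  http://ftp.us.debian.org/debian/pool/main/n/ncurses/libtinfo5_6.0+20161126-1+deb9u2_armhf.deb \
  http://ftp.us.debian.org/debian/pool/main/r/readline/libreadline7_7.0-3_armhf.deb \
  http://ftp.us.debian.org/debian/pool/main/s/sqlite3/sqlite3_3.16.2-5+deb9u1_armhf.deb \
  http://ftp.us.debian.org/debian/pool/main/n/ncurses/libncurses5_6.0+20161126-1+deb9u2_armhf.deb \
  http://ftp.us.debian.org/debian/pool/main/n/ncurses/libncursesw5_6.0+20161126-1+deb9u2_armhf.deb
do
  wget "$url"
done
# installing them all in one dpkg call lets it sort out the ordering itself
dpkg -i *.deb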
3. Fetch the script and remove dependency on php5-xml
Fetch (wget) the install script in /tmp
wget -O basic-install.sh https://install.pi-hole.net
then manually edit basic-install.sh (e.g., via vim) removing the reference to "${phpVer}-xml".
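To find the spot quickly before editing (the removal itself I still did by hand in vim), a grep helps:

grep -n 'phpVer' basic-install.sh    # shows every line that mentions phpVer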
4. Install without OS checks
Since Debian 8 is not supported you need to run the installer with
PIHOLE_SKIP_OS_CHECK=true ./basic-install.sh
and remember to select eth0 and not eth0p.
5. Stop and Disable DNS daemon
You need to run
systemctl stop systemd-resolved
systemctl disable systemd-resolved
to stop and disable the current DNS daemon, then use
systemctl restart pihole-FTL
to start the one embedded with pihole.
6. Switch LigHTTPd to port 81
Edit the configuration file with vim /etc/lighttpd/lighttpd.conf, find and change the port number, save (:x), and then restart
systemctl restart lighttpd
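For reference, the only change needed in the file is the port directive, which in my setup now reads (a one-line excerpt; 81 is simply a port that does not clash with the CloudKey's own web server):

server.port = 81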
7. Reset or clear the login password
The password should be shown during install but I never saw it. You can change or reset it with pihole -a -p.
notes-alessiosignorini · 4 years ago
Setup NAT DNS rules on UniFi Security Gateway
For the past few years I have used Steven Black's hosts files on my machine to block adware, trackers, etc. Recently I was made aware of the Pi-Hole project and want to move to that solution.
While learning more I discovered that many IoT devices (e.g., WyzeCam v3, Withings Aura, ...) have hardcoded DNS settings (e.g., 8.8.8.8) and bypass whatever values the DHCP server recommended. Who knew! But I get it, from their perspective it's one less thing to worry about.
Thankfully, I can set up NAT rules in the Unifi Security Gateway (USG) to intercept all DNS requests (i.e., TCP and UDP on port 53) and reroute them wherever I want (the gateway, in my case).
There is lots of documentation and plenty of forum posts on how to do it, but since it took me a couple of hours to figure things out I decided to write it down here, hoping to save time for others (or my future self!).
IMPORTANT: CloudKey must be on separate VLAN
When I applied the approach below I had my CloudKey and laptops on the main LAN network, my IoT devices on VLAN 200, and guest devices on VLAN 300. Devices on the LAN network could not resolve domains, while those on the VLANs could.
In lots of forum threads people suggest adding a "masquerading" rule to make things work. That does solve the problem, but it also hides the origin of the DNS request from Pi-Hole. That was not good enough for me.
After various trials and errors I discovered that simply moving my devices to a separate VLAN (and adding/adjusting the NAT rules appropriately) solved the issue. Currently, my USG and CloudKey are on the LAN (192.168.1.x), laptops/phones on VLAN 100, IoT devices on VLAN 200, etc.
What you will need
You will need a few things:
1. The password of your Unifi CloudKey (e.g., ubnt/...)
2. Enable SSH access to your USG (Controller -> System Settings -> Device SSH Authentication) and set user/password
3. The list of the VLAN IDs that you want to affect (e.g., default one, 200, 300, etc)
4. The name of your site - you can find it in the URL of the Controller Dashboard after "site"; it is "default" for the main/first one
Inspect Controller's Traffic
The first thing to learn is how to look at the traffic going through your router. To do that, SSH into the USG (using the credentials you set in item 2 above) and run tail -f /var/log/messages. You will see lots of messages; to isolate the DNS ones you can run
tail -f /var/log/messages | grep 'DPT=53 '
it will display things like
Dec 30 20:32:46 ubnt kernel: [LAN_LOCAL-default-A]IN=eth1.200 OUT= MAC=cc:ee:dd:77:55:aa:55:55:33:ff:77:88:88:00:55:00:00:45 SRC=192.168.5.93 DST=8.8.8.8 LEN=64 TOS=0x00 PREC=0x00 TTL=64 ID=51477 DF PROTO=TCP SPT=65371 DPT=53 WINDOW=32768 RES=0x00 SYN URGP=0
In the case above, the interface is eth1, the VLAN ID is 200, and the request was made by 192.168.5.93 and directed to 8.8.8.8 (instead of the local 192.168.5.1).
Add the configuration in the CloudKey
Now SSH into the CloudKey (e.g., ssh ubnt@192.168.1.2) and edit the config.gateway.json file. The right one to edit is located in /srv/unifi/data/sites/<sitename>, where <sitename> is default for the main site created.
You can use vim config.gateway.json to edit the file (or create it if it does not exist). Remember to execute chown unifi:unifi config.gateway.json after editing and saving it.
Here is what to add to redirect all "foreign" DNS requests on eth1.200 (interface eth1, VLAN 200) to the internal DNS server (192.168.5.1)
{ "service": { "nat": { "rule": { "1": { "description": "Redirect all DNS requests to 192.168.5.1", "destination": { "address": "!192.168.5.1", "port": "53" }, "inbound-interface": "eth1.200", "inside-address": { "address": "192.168.5.1", "port": "53" }, "protocol": "tcp_udp", "type": "destination", "log": "enable" } } } } }
Once you have saved it (:x in vim), go to the Controller Dashboard and click on Devices -> USG -> Config -> Manage Device -> Provision to force the configuration to be propagated.
Check if it worked
To check if the configuration has been applied, SSH back into the USG and launch the following commands
configure
show service nat
If your rule(s) show up there, congratulations, they have been propagated correctly!
To see them in action, monitor for NAT- messages in the logs with tail -f /var/log/messages | grep 'NAT-'. To force one to happen, go to your computer and try to look up some domain using a specific DNS server, e.g.,
dig google.com @8.8.8.8
it should trigger the rule, e.g.,
Dec 30 20:34:16 ubnt kernel: [NAT-2-DNAT] IN=eth1.200 OUT= MAC=cc:ee:dd:77:55:aa:55:55:33:ff:77:88:88:00:55:00:00:45 SRC=192.168.5.93 DST=8.8.8.8 LEN=64 TOS=0x00 PREC=0x00 TTL=64 ID=51516 DF PROTO=TCP SPT=65362 DPT=53 WINDOW=32768 RES=0x00 SYN URGP=0
and it should be followed by an appropriate network connection, e.g.,
Dec 30 20:34:16 ubnt kernel: [LAN_LOCAL-default-A]IN=eth1.200 OUT= MAC=cc:ee:dd:77:55:aa:55:55:33:ff:77:88:88:00:55:00:00:45 SRC=192.168.5.93 DST=192.168.5.1 LEN=74 TOS=0x00 PREC=0x00 TTL=64 ID=37206 DF PROTO=UDP SPT=46620 DPT=53 LEN=54
If you see all this, you also know they are being applied!
notes-alessiosignorini · 5 years ago
Quick SQL on a CSV with SQLite
When I have to perform some SQL operations on a CSV file I find the SQLite CLI very useful. You just need a few commands:
.mode csv
.import filename.csv tablename
and you are good to go: run any SQL command against tablename. Once you close SQLite it will all be forgotten.
If you want to make sure the schema was inferred correctly you can use .schema tablename. Similarly, if the separator is not a comma, use .separator '|' to define it.
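A quick worked example, assuming a hypothetical sales.csv whose first row holds the column names (with .mode csv, .import creates the table and takes the columns from that header row):

sqlite3 <<'EOF'
.mode csv
.import sales.csv sales
.schema sales
SELECT COUNT(*) FROM sales;
EOF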
notes-alessiosignorini · 5 years ago
Transform JSON to CSV/TSV with JQ
I have been using jq for a while, mostly to prettify JSON in my console. It's lightweight and awesome; I highly recommend learning to use it.
Yesterday I found myself copying and pasting some values from a giant JSON structure into a spreadsheet. I had 50 more cells to copy&paste. I figured my time would be better invested in learning something new.
The JSON I had to work with contains some daily statistics about COVID positives, tests, deaths, etc., and can be found at this address
https://covidtracking.com/api/us/daily
I needed a TSV with the following columns
positive, hospitalized, death, total
Figuring out the right syntax required looking at the JQ manual and playing a bit with this online editor, but in the end I figured it out
sort_by(.date) | .[] | select(.date > 20200421) | [.positive, .hospitalized, .death, .total] | @tsv
The sort_by(.date) sorts the array of objects by the date field. Then I used .[] to let jq know that I will be treating the structure as an array of objects. I only needed the most recent ones so I used select(.date > 20200421) to filter out what I did not need. Finally, I created the array of fields I needed with [.positive, .hospitalized, .death, .total] and converted it into a TSV with @tsv.
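Put together with curl, the whole thing becomes a one-liner; the -r flag makes jq emit raw (unquoted) TSV lines, and the output file name is just an example:

curl -s https://covidtracking.com/api/us/daily \
  | jq -r 'sort_by(.date) | .[] | select(.date > 20200421) | [.positive, .hospitalized, .death, .total] | @tsv' \
  > stats.tsv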
notes-alessiosignorini · 5 years ago
Heroku, Webpacker and Failing Tests
In the past weeks I had trouble running Rails 6 tests on Heroku CI due to the following Webpacker error
ActionView::Template::Error: Webpacker can't find application in /app/public/packs-test/manifest.json. Possible causes:
1. You want to set webpacker.yml value of compile to true for your environment unless you are using the `webpack -w` or the webpack-dev-server.
2. webpack has not yet re-run to reflect updates.
3. You have misconfigured Webpacker's config/webpacker.yml file.
4. Your webpack configuration is not creating a manifest.
Your manifest contains:
{
}
Tests worked on my local machine but failed on Heroku CI and my webpacker.yml was properly configured with compile: true in the test section.
Turns out, the compile: true directive is ignored on Heroku while running tests. The issue is simply that webpacker:compile was/is never launched. Surprisingly, it is launched on production deploy.
To fix the error above, force Heroku CI to precompile your assets in the test-setup step of your app.json. Something like
{ "environments": { "test": { ... "scripts": { "test-setup": "bin/rails assets:precompile" } } } }
notes-alessiosignorini · 6 years ago
Truncate a Git Repository to Hide History
If you need to truncate a Git repository, for example to hide anything that happened before commit XXXXX, here is a way to do it:
# start a new, history-free branch whose first commit contains the tree at XXXXX
git checkout --orphan temp XXXXX
git commit -m "Truncate history"

# replay everything that happened on master after XXXXX on top of the new root
git rebase --onto temp XXXXX master
notes-alessiosignorini · 6 years ago
Get a Number from your City on Grasshopper
GrassHopper is one of my favorite virtual phone systems. We used it in the past at my company and I use it in my daily life.
Recently I wanted to open an account with a local Santa Barbara number. GrassHopper allows you to choose the area code of the number but does not let you pick the city it is from.
The first number I got was not from Santa Barbara and their support personnel was not able to help me. Time for some scripting!
The GrassHopper website uses a REST endpoint to get the list of available numbers you can choose from. The one below is the URL, but it requires a few more parameters and headers, which I recommend you grab from your Chrome Developer Tools
https://signup.grasshopper.com/api/LocalNumberCatalog/LocalNumbersbynpa?areaCode=805&count=10&numberLockSessionId=...
I then looked for a website that would give me the primary city associated with a number, and found this one
https://www.hocalls.com/name-and-address/8059551
It was then a matter of putting them together with some bash scripting
CURL="curl 'https://signup.grasshopper.com/api/LocalNumberCatalog/LocalNumbersbynpa?areaCode=805&count=10&numberLockSessionId=..." for i in {1..10}; do for number in `$CURL 2>/dev/null | jq . | grep E164 | egrep -o '+1805[0-9]*'`; do prefix=`echo $number | cut -c 2-8` city=`wget -q -O- "https://www.hocalls.com/name-and-address/$prefix" | egrep -o '<b style="color: #f7f7f7;">.*</b>' | sed 's/]*>//g'` echo "$number $city" done done
This scripts makes 10 requests to the GrassHopper API to retrieve 100 numbers and checks with HoCalls the primary city of each of them, printing the output on the screen. Here is an example:
...
18058743159 Oxnard, CA
18058453330 Santa Barbara, CA
18057699014 Paso Robles, CA
18052501568 San Luis Obispo, CA
18053183283 Santa Barbara, CA
18053929991 Santa Paula, CA
18059543056 Ventura, CA
18057794034 San Luis Obispo, CA
...
Once I identified some Santa Barbara numbers, I could confidently go ahead and buy them on GrassHopper.
notes-alessiosignorini · 6 years ago
Best .yardopts for your Ruby/Rails Project
I think YARD is a great tool for documenting Ruby/Rails projects. Used correctly, the documentation looks good both in your IDE and in your browser.
Like most systems YARD is highly configurable. It accepts command line parameters but for convenience these can be saved into a .yardopts file in the root directory of your project.
Here is my favorite list of options to add to your .yardopts
-o docs
--markup markdown
--private
--protected
--tag nodocs
--query '!@nodocs'
--tag http:"HTTP Status"
--tag url:"Example Endpoints"
--exclude test
--exclude vendor
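With that file in place the command line stays short; a typical loop looks like this (yard with no arguments builds the HTML into docs/ as configured by -o above, and yard server lets you browse it locally):

gem install yard    # once
yard                # build the docs into docs/
yard server         # preview them locally in the browser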
notes-alessiosignorini · 6 years ago
Log/Trace DNS Requests made by Apps in MacOS
To log and trace all the DNS requests made by apps on macOS, just use the built-in tcpdump by launching a Terminal and typing
sudo tcpdump port 53
you will get an output like
tcpdump: data link type PKTAP
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on pktap, link-type PKTAP (Apple DLT_PKTAP), capture size 262144 bytes
00:38:38.360343 IP 192.168.0.12.51329 > 192.168.0.1.domain: 33947+ A? www-google-analytics.l.google.com. (51)
00:38:38.365213 IP 192.168.0.12.58006 > 192.168.0.1.domain: 32878+ PTR? 1.0.168.192.in-addr.arpa. (42)
00:38:38.377238 IP 192.168.0.1.domain > 192.168.0.12.51329: 33947 1/0/0 A 216.58.205.206 (67)
00:38:38.382262 IP 192.168.0.1.domain > 192.168.0.12.58006: 32878 NXDomain 0/1/0 (119)
...
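A couple of variations I find handy (the interface name en0 is an assumption; adjust it for your machine):

# -n stops tcpdump from resolving names itself, which keeps the output cleaner
sudo tcpdump -n -i en0 port 53

# only the outgoing queries, over UDP
sudo tcpdump -n -i en0 'udp dst port 53'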