Computer and electronics tinkerer. Cheap restaurant enthusiast. Lives at www.joelw.id.au.
cPanel's WP Toolkit API
One feature I really like about WP Toolkit in cPanel is that it maintains a list of plugins/themes and integrates with a vulnerability database. The downside is that WP Toolkit doesn't always automatically find WordPress installations that were created outside of WP Toolkit; you have to periodically scan the server for new sites through the web interface. It would be nice to automate this. cPanel support denies that WP Toolkit has an API, instead pointing users to the command line tool, which is a bit limited. Yet within WP Toolkit's web interface, the API is even documented!
It's not entirely obvious how to authenticate to the API programmatically, but I figured it out with some fiddling. Here's a script that will run a whole-server scan.
In summary, it uses a WHM API token to open a session as root, fetching the /cpsess??? token and whostmgrsession cookie. With these you can call the WP Toolkit API endpoints.
Tumblr seems to have broken inline code, so you can find it here: https://gist.github.com/joelw/8397e8e78a40233226babe8b46ad16b6
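For reference, here is a rough sketch of that authentication flow. The create_user_session WHM API call is real, but the hostname, token, and especially the WP Toolkit endpoint path are placeholders - see the gist for the working version.

#!/bin/sh
# Sketch only: open a root WHM session using an API token, then reuse the
# session cookie and /cpsessNNNN token to hit WP Toolkit API endpoints.
WHM_HOST="https://server.example.com:2087"   # placeholder hostname
WHM_TOKEN="your-whm-api-token"               # placeholder token

# 1. Ask WHM for a root session URL (WHM API 1: create_user_session)
SESSION_URL=$(curl -sk -H "Authorization: whm root:${WHM_TOKEN}" \
  "${WHM_HOST}/json-api/create_user_session?api.version=1&user=root&service=whostmgrd" \
  | jq -r '.data.url')

# 2. Visiting that URL sets the whostmgrsession cookie; the URL itself contains the /cpsessNNNN token
CPSESS=$(echo "$SESSION_URL" | grep -o 'cpsess[0-9]*')
curl -sk -c /tmp/whm-cookies.txt -o /dev/null "$SESSION_URL"

# 3. Call a WP Toolkit API endpoint using the cookie jar (the endpoint path here is illustrative)
curl -sk -b /tmp/whm-cookies.txt \
  "${WHM_HOST}/${CPSESS}/3rdparty/wpt/index.php/api/..."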
WordPress automatic login link
I manage many WordPress sites for customers and need to log in to them occasionally, but don’t tend to record the passwords in my password manager and just reset them to something random each time using WP CLI.
I came up with a niftier solution - generate a one-time login link which will automatically log me in as an administrator!
The script is below.
Run it in the top-level directory of a WordPress installation. You can run it as the user who owns the files or as root (it will run WP-CLI as the appropriate user and fix the file ownership).
The script creates a file and then outputs the one-time login URL to the console, e.g. https://example.com/auto-login-abcdef1234.php . If you visit this link, WordPress will log you in as the first administrator it finds in the database and then delete the PHP file.
Further improvements that could be considered:
Check if you already have an admin user, perhaps by returning the entire list and searching for one matching your name/email. The script should use that account if possible, or else pick a random administrator or create a new admin for you. It doesn't really matter, but customers with Wordfence or similar plugins installed might get alerts that you have logged in as them, which could be confusing.
If you don't visit the link, it should auto-expire after some time. A system-wide cron job which finds and deletes these files would be a good idea. The script could also be extended with a self-expiry - i.e. the shell script can embed the creation timestamp in the PHP, and the PHP should refuse to log you in if more than 10 minutes have passed since the script was created (see the sketch after the script below).
#!/bin/bash

# Detect if wp-load.php exists in the current directory. If not, exit.
if [ ! -f "wp-load.php" ]; then
    echo "wp-load.php not found. Exiting."
    exit 1
fi

# Create a PHP file with a random name and .php extension
filename=auto-login-$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 16 | head -n 1).php

wpsu() {
    if [[ "$PWD" =~ ^/home/([^/]+)/.*$ ]]; then
        sudo -u ${BASH_REMATCH[1]} /usr/local/bin/php /usr/local/bin/wp "$@"
    else
        /usr/local/bin/php /usr/local/bin/wp "$@"
    fi
}

# Write the PHP script to the file
cat <<EOF > "$filename"
<?php
require_once('wp-load.php');
\$users = get_users([
    'role' => 'administrator',
    'number' => 1,
]);
if (!empty(\$users)) {
    \$user = \$users[0];
    \$username = \$user->user_login;
    wp_set_auth_cookie(\$user->ID);
    wp_redirect(admin_url());
    unlink(__FILE__);
    exit;
} else {
    echo "No administrator user found in the WordPress database.";
}
?>
EOF

# Get the current site's URL
siteurl=$(wpsu option get siteurl)

# Print the URL + / + the new file name
echo "$siteurl/$filename"

# Check if the current user is root
if [ $(id -u) -eq 0 ]; then
    # Get the uid and gid of wp-load.php
    uid=$(stat -c "%u" wp-load.php)
    gid=$(stat -c "%g" wp-load.php)

    # Change the ownership of the PHP file to match wp-load.php
    chown "$uid:$gid" "$filename"
fi
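As a rough, untested sketch of the self-expiry idea mentioned above: capture the creation time in the shell script and let the generated PHP compare it against the current time. These lines are additions I haven't run, not part of the script as published.

# In the shell script, before the heredoc is written:
created=$(date +%s)

# Then add something like this near the top of the generated PHP, inside the
# heredoc. Because only PHP variables have their \$ escaped, $created is
# expanded by the shell, so the literal creation timestamp is baked in:
#
#   if (time() - $created > 600) {
#       unlink(__FILE__);
#       die('Login link expired.');
#   }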
dhclient - DHCPOFFER rejected by rule
I have a Netcomm NF18ACV router and configured a static DHCP reservation for my NAS. Recently, the NAS began failing to get an IPv4 address from DHCP, with the following error:
nas dhclient[6588]: no expiry time on offered lease.
nas dhclient[6588]: Server added to list of rejected servers.
nas dhclient[6588]: DHCPOFFER from 192.168.20.1 rejected by rule 192.168.20.1 mask 255.255.255.255.
In this case the Netcomm DHCP server is not RFC compliant and dhclient is pedantic. The workaround was to add the following line to /etc/dhcp/dhclient.conf on my NAS:
supersede dhcp-lease-time 7200;
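After saving the file, restart networking or release and renew the lease so dhclient re-reads its configuration (the interface name here is just an example):

dhclient -r eth0 && dhclient eth0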
Adding up storage accounts in Azure
How much data are you storing in your Azure Storage accounts? Who wants to click through the Portal a thousand times when you can run this simple command:
az storage account list --query '[].id' -o tsv \
  | xargs -n 1 -I X az monitor metrics list --resource X --metrics "UsedCapacity" --interval PT1H \
  | jq '.value[0].timeseries[0].data[0].average' \
  | grep -v 'null' \
  | paste -sd+ - \
  | bc
This lists all storage accounts in your current subscription and returns their IDs, checks the UsedCapacity metric for each, removes any nulls, then adds everything up using paste and bc. The final result is in bytes, averaged over the last hour.
MySQL disaster recovery
Here are some steps for dumping and reloading a MySQL database if table repairs are not sufficient for getting the server to start without innodb_force_recovery set. There are some cPanel-specific steps, but the general concept is standard.
This is inspired by the excellent guide at https://forums.cpanel.net/resources/innodb-corruption-repair-guide.395/
Set the minimum innodb_force_recovery level that lets the server start without crashing. You can add this setting in /etc/my.cnf.
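For example (the level you need may vary - start at 1 and only increase it as far as necessary):

[mysqld]
innodb_force_recovery = 1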
Restart the server. On cPanel, use /scripts/restartsrv_mysql to do this. It seems like other methods (systemctl, /etc/init.d/mysql) MAY lead to the database running multiple times.
The database should now be up and running, albeit in read-only mode. Dump the contents of the mysql schema first, as this contains all user permissions, and then the entire database.
mysqldump -ER mysql > /root/mysql_db.sql
mysqldump -AER > /root/recovery_dump.sql
Now stop the database and move all the old files aside. This is also a good time to remove the innodb_force_recovery setting from /etc/my.cnf so the fresh server starts normally:
/scripts/restartsrv_mysql --stop
mkdir /var/lib/ARGH
mv /var/lib/mysql/* /var/lib/ARGH
su -c mysql_install_db mysql   # do NOT run this as root, as file permissions will be wrong
# Not sure if this is necessary - but wouldn't hurt
/scripts/restartsrv_mysql
Now you can start reloading databases from the dump you prepared earlier:
mysql -p < /root/mysql_db.sql    # press enter - blank password
mysql -p -e 'flush privileges'   # press enter - blank password
mysql < /root/recovery_dump.sql
If the reload of recovery_dump.sql fails with a 'MySQL server has gone away' error, do the following:
See how far the restore progressed by running mysql -e 'SHOW DATABASES'. Databases are restored in alphabetical order, so look at the last database in the list, ignoring mysql, test, and information_schema.
Duplicate the recovery_dump.sql file and edit the copy: delete all lines up to the CREATE DATABASE statement for the last database in the list (the one that was only partially restored).
Add the following as the first line of the script: SET FOREIGN_KEY_CHECKS=0;
Resume restoring from this new file
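For example, assuming you called the trimmed copy recovery_dump_trimmed.sql (the name is just an example):

mysql < /root/recovery_dump_trimmed.sql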
I'm not sure why the database goes away, but by bypassing databases that have already been restored and trimming the restore file down in this way, it's more likely to succeed.
If you cloned/trimmed the file you may get an error about the time zone being null - this can be ignored.
It is entirely possible that there are errors inside your recovery_dump.sql file (i.e. if there really are some corrupt tables and the data could not be extracted). In that case, you must restore the affected tables/databases from your most recent backup.
NodeJS, Azure MySQL, Managed Identity
I couldn’t find any good information on how to connect to Azure Database for MySQL using DefaultAzureCredential, which is a nice way of supporting Azure CLI auth and Managed Identity. I spent some time figuring it out!
Use the mysql2 library - this did not work with mysql
// Note: run this inside an async function (or an ES module) so that await works.
const { DefaultAzureCredential } = require("@azure/identity");
const mysql = require('mysql2');

const credential = new DefaultAzureCredential();
const token = await credential.getToken("https://ossrdbms-aad.database.windows.net");

const username = "yourdatabaseusername@youraaddomain@DATABASENAME";

var connection = mysql.createConnection({
    host: "DATABASENAME.mysql.database.azure.com",
    user: username,
    password: token.token,
    database: "databasename",
    ssl: { rejectUnauthorized: true },
    insecureAuth: true,
    authPlugins: {
        mysql_clear_password: () => () => {
            return Buffer.from(token.token + '\0');
        }
    }
});

connection.connect();
Instead of supplying your database username, if you're using az cli authentication (e.g. for local development) you could extract your AAD username from token.token, since it's a JWT token. This would probably be a nice convenience.
Unfortunately this doesn't work for managed identity, since the token doesn't contain a username. So this could be made a bit smarter: accept a database username (for managed identity), fall back to extracting it from the token (for local development), or else throw an error.
Weekly git diff reports
I follow some exciting Git repositories and wanted to have a way to get a weekly summary of what is happening in them. Here’s how I did it!
It’s easiest if you do this on a server which is always on, but I guess it could be run anywhere.
First, clone your git repository somewhere. Next, download the diff2html script from this article and put it somewhere: https://www.linuxjournal.com/content/convert-diff-output-colorized-html
Next, run a script like this from cron:
#!/bin/sh
cd /home/username/Src/excitingrepo
git pull > /dev/null
git whatchanged --since="7 days ago" -p \
  | /home/username/Src/diff2html.sh \
  | mutt -e "set content_type=text/html" [email protected] -s "Weekly Diff"
Now run this from cron every seven days, or whatever!
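For example, a crontab entry like this would send the report every Monday morning (the script path is a placeholder):

# min hour day-of-month month day-of-week command
0 7 * * 1 /home/username/Src/weekly-diff.sh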
Resetting a redash password
I set up Redash locally using docker-compose with PostgreSQL and then promptly forgot the password. Without email integration set up properly (which I didn't really want to do for a local installation), the password reset link method doesn't work.
Here’s how you can generate a new hash and reset it!
pip3 install passlib
python3
# then, in the Python interpreter:
from passlib.apps import custom_app_context as pwd_context
pwd_context.encrypt('your_new_password')
Now, connect to Postgres and update your hash:
docker exec -it redash_postgres_1 psql -U postgres
-- then, in psql:
select * from users;
update users set password_hash='???' where id=?;
macOS window arrangement shortcuts
I’ve used Spectacle for a long time for managing macOS window arrangement and it’s great. I recently got a 32″ 4K monitor and really wanted some extra arrangement options though - in particular a 3x2 grid. I had a look at a few other options like Rectangle and Amethyst but none of them quite did what I wanted, which is basically exactly the same as Spectacle but with some more options.
Then I found Hammerspoon! It wasn’t quite what I was expecting, but it’s even better - you can use it to script all sorts of things that I haven’t even started checking out, but I was immediately able to make it re-implement the Spectacle shortcuts I use, plus the 3x2 grid I wanted.
I bound Cmd-Alt plus the number pad for the grid. This only works when I have an external keyboard plugged in, which is fine - if I’m not using a keyboard I’m also probably not using the big monitor.
-- Helper: move the focused window to a rectangle expressed as fractions of the screen
local function moveFocused(xFrac, yFrac, wFrac, hFrac)
  local win = hs.window.focusedWindow()
  local f = win:frame()
  local max = win:screen():frame()
  f.x = max.x + (max.w * xFrac)
  f.y = max.y + (max.h * yFrac)
  f.w = max.w * wFrac
  f.h = max.h * hFrac
  win:setFrame(f)
end

-- 3x2 grid on the numeric keypad: 7/8/9 = top row, 1/2/3 = bottom row
hs.hotkey.bind({"cmd", "alt"}, "pad7", function() moveFocused(0,   0,   1/3, 1/2) end)
hs.hotkey.bind({"cmd", "alt"}, "pad8", function() moveFocused(1/3, 0,   1/3, 1/2) end)
hs.hotkey.bind({"cmd", "alt"}, "pad9", function() moveFocused(2/3, 0,   1/3, 1/2) end)
hs.hotkey.bind({"cmd", "alt"}, "pad1", function() moveFocused(0,   1/2, 1/3, 1/2) end)
hs.hotkey.bind({"cmd", "alt"}, "pad2", function() moveFocused(1/3, 1/2, 1/3, 1/2) end)
hs.hotkey.bind({"cmd", "alt"}, "pad3", function() moveFocused(2/3, 1/2, 1/3, 1/2) end)

-- Full-height thirds on 4/5/6
hs.hotkey.bind({"cmd", "alt"}, "pad4", function() moveFocused(0,   0, 1/3, 1) end)
hs.hotkey.bind({"cmd", "alt"}, "pad5", function() moveFocused(1/3, 0, 1/3, 1) end)
hs.hotkey.bind({"cmd", "alt"}, "pad6", function() moveFocused(2/3, 0, 1/3, 1) end)

-- Spectacle-style maximise and left/right halves
hs.hotkey.bind({"ctrl"}, "return",      function() moveFocused(0,   0, 1,   1) end)
hs.hotkey.bind({"cmd", "alt"}, "left",  function() moveFocused(0,   0, 1/2, 1) end)
hs.hotkey.bind({"cmd", "alt"}, "right", function() moveFocused(1/2, 0, 1/2, 1) end)
Archiving Dovecot mail stores
I wanted to archive and delete some old emails stored in a Dovecot mdbox mail store (on a cPanel server). This is pretty straightforward with Maildir - you can just move files to another folder. Dovecot’s mdbox mail store format is excellent, but makes this a bit trickier.
Here’s how you can do it, though!
List which folders the mailbox contains:
doveadm mailbox list -u [email protected]
Copy messages to a new Maildir. Here I specified an end date of 2019-01-01. This will copy messages from all folders.
mkdir /home/user/mailbackup
doveadm backup -u [email protected] -e 2019-01-01 \
    maildir:/home/user/mailbackup
Now delete the messages in the same date range. Note that expunge requires a specific folder to be specified (via the mailbox search term) - you fetched the list of folders earlier and can substitute one in here.
doveadm expunge -u [email protected] mailbox INBOX before 2019-01-01
Finally, reclaim the disk space:
doveadm purge -u [email protected]
I’m not sure if it’s possible to do a mailbox search wildcard, or if you’d need to loop through all folders, but this was a good solution for one customer with tens of thousands of emails in their inbox!
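If you did need to cover every folder, an untested loop along these lines should work, reusing the commands above:

USER="[email protected]"
# Expunge the same date range from every folder, then reclaim the space
doveadm mailbox list -u "$USER" | while read -r folder; do
    doveadm expunge -u "$USER" mailbox "$folder" before 2019-01-01
done
doveadm purge -u "$USER"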
Updating cPanel contact email without notification
Changing a cPanel account contact email address via the WHM web interface normally generates a notification email, which can be a bit confusing for recipients. There’s no straightforward way to disable these notifications, other than disabling the notification at an account level, which itself generates a notification!
Here’s how you can change contact addresses via the command line:
Edit /var/cpanel/users/<username> and change the CONTACTEMAIL field
Edit /home/<username>/.cpanel/contactinfo and change the “email” field
Run /usr/local/cpanel/scripts/updateuserdomains
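Putting the steps above together, a rough scripted sketch might look like this (the username and address are placeholders, and the contactinfo edit is left as a manual step; back up both files first):

USERNAME=exampleuser
NEWMAIL=new@example.com

# 1. Update the CONTACTEMAIL field in the cPanel user file
sed -i "s/^CONTACTEMAIL=.*/CONTACTEMAIL=${NEWMAIL}/" /var/cpanel/users/${USERNAME}

# 2. Edit the "email" field in /home/${USERNAME}/.cpanel/contactinfo by hand

# 3. Rebuild the user/domain caches
/usr/local/cpanel/scripts/updateuserdomains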
Using Splunk SmartStore with MinIO
Update - MinIO now provides a guide for setting up Splunk SmartStore with MinIO.
Your Splunk server is gobbling up disk space and you don’t want to upgrade your disks. What to do?
My initial solution was to use frozen buckets to limit data retention by setting the following in ~splunk/etc/system/local/indexes.conf:
[default]
coldToFrozenDir = /opt/frozen-archives
frozenTimePeriodInSecs = 39000000
Any buckets older than the frozen time period would be moved into this frozen-archives directory, from where I would periodically archive them to another machine. They were completely inaccessible there, but I felt a little better knowing that they hadn't just been deleted.
Eventually, due to increasing data volumes, I found that I was still running low on disk space. The next option was to split indexes up and apply different retention policies to them. Some log data is useful for short-term investigations but doesn't need much retention, so those indexes were set to simply erase data after the frozen time period. Eventually I started running out of disk space again!
A while back, Splunk added support for Amazon S3 storage using something called SmartStore, which copies warm buckets to remote storage. It can then evict them from the local index at its leisure, and then bring them back when they are needed. This sounds like just what I want! I had a heck of a time getting it working though.
The first problem was that I wanted to use MinIO instead of Amazon S3. S3 is pretty inexpensive, but in this case I had another 'storage' VPS at the same hosting provider with plenty of disk space and free gigabit intra-VPS transfer. Here's how I got it working!
On the storage server
Download minio
Create configuration for it:
MINIO_VOLUMES="/home/minio/data"
MINIO_OPTS="--address 127.0.0.1:9000 --compat"
MINIO_ACCESS_KEY=???
MINIO_SECRET_KEY=???
Set up a systemd unit file for it
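Something along these lines should work (the minio user, binary location, and EnvironmentFile path are assumptions - point EnvironmentFile at wherever you saved the configuration above):

[Unit]
Description=MinIO object storage
After=network-online.target
Wants=network-online.target

[Service]
User=minio
Group=minio
EnvironmentFile=/etc/default/minio
ExecStart=/usr/local/bin/minio server $MINIO_OPTS $MINIO_VOLUMES
Restart=always

[Install]
WantedBy=multi-user.target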
Set up nginx as a reverse proxy. This is mainly so that I can use certbot and Let's Encrypt:
server {
    listen 9001 ssl http2;
    listen [::]:9001 ssl http2;
    server_name minio.example.com;

    ssl_certificate /etc/letsencrypt/live/minio.example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/minio.example.com/privkey.pem; # managed by Certbot

    ignore_invalid_headers off;

    # Allow any size file to be uploaded.
    # Set to a value such as 1000m; to restrict file size to a specific value
    client_max_body_size 0;

    # To disable buffering
    proxy_buffering off;

    location / {
        proxy_set_header Host $http_host;
        proxy_pass http://localhost:9000;
        # health_check uri=/minio/health/ready;
    }
}
On the Splunk server
In indexes.conf, configure a remote store:
[volume:remote_store]
storageType = remote
path = s3://splunk/              # Replace splunk with whatever you want to call it
remote.s3.access_key =           # from minio's configuration
remote.s3.secret_key =           # from minio's configuration
remote.s3.endpoint = https://minio.example.com:9001/
remote.s3.auth_region = us-east-1
Add some file to the MinIO bucket (I think you can just copy something in there manually on the filesystem) and then confirm that Splunk can see the file:
~splunk/bin/splunk cmd splunkd rfs -- ls --starts-with volume:remote_store
Then you can add the following for a specific index
[fooindex]
remotePath = volume:remote_store/fooindex
# The following two lines are apparently required, but ignored
coldPath = $SPLUNK_DB/main/colddb
thawedPath = $SPLUNK_DB/main/thaweddb
Restart Splunk
Monitor splunkd.log for activity by S3Client, BucketMover, and CacheManager
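Something like this works for watching it (adjust the path for your Splunk install):

tail -f /opt/splunk/var/log/splunk/splunkd.log | grep -E 'S3Client|BucketMover|CacheManager'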
The configuration settings are a bit opaque and confusing. As far as I can tell, it will just start uploading warm buckets as it feels like it, and then evict them on a least recently used basis when you get close to your minimum free disk space.
Gotchas
Do not apply a remotePath to the [default] index. I did this (by mistake) and while data started uploading happily, it seemed like the frozen time retention policies of different indexes started to stomp over each other, so files were frozen (actually - deleted) from remote storage, probably according to the retention policy of the index with the smallest retention time. This was a bit of a disaster. It might work if you do remotePath=volume:remote_store/$_index_name.
It seems like Splunk requires MinIO to be run with the --compat parameter. I had a lot of trouble getting it to work without that.
MinIO isn't quite as full-featured as S3 - if you want different access keys for different users/purposes, I guess you're meant to run another instance of the server. I wasted a lot of time trying to set up different keys and access levels, all of which kind of look like they are supported but don't really work the way you expect.
Use Azure CLI (az) to generate Storage Blob SAS URIs en masse
You have many storage blobs and want to generate a SAS URI for all of them. You can do this through Storage Explorer, but it takes a million years. Here’s how you could do it with az, the Azure CLI!
I’ll assume that you are already logged in.
ACCOUNTNAME=acctname
CONTAINERNAME=containername

az storage blob list --account-name $ACCOUNTNAME --container-name $CONTAINERNAME --output json \
  | jq '.[] | .name' \
  | xargs -n 1 az storage blob generate-sas --account-name $ACCOUNTNAME --container-name $CONTAINERNAME \
      --full-uri --expiry 2019-09-30T00:00:00Z --permissions r --name
This extracts the list of all blobs in the container (you could filter it if necessary), pipes it to jq to pull out just the names, then pipes those back to az one at a time to generate each URI.
It would be even quicker to generate signatures using the root key, but this was also a fun command line exercise.
cPanel AutoSSL command line script
Because WHM is a bit slow, here’s a simple shell script which will see if any of a user’s vhosts don’t have SSL certificates, trigger an AutoSSL check if necessary, wait for it to complete, and then report on how it went.
#!/bin/sh

count=$(uapi SSL installed_hosts | grep issuer.organizationName | grep -c '~')

if [ $count -eq 0 ]; then
    echo "Nothing to do"
    exit
fi

uapi SSL start_autossl_check

loop=1
while [ $loop -gt 0 ]; do
    sleep 1
    loop=$(uapi SSL is_autossl_check_in_progress | grep -c 'data: 1')
    echo -n .
done
echo

count=$(uapi SSL installed_hosts | grep issuer.organizationName | grep -c '~')

if [ $count -eq 0 ]; then
    echo "Success!"
    exit
fi

echo "Still $count unprotected vhosts"
WordPress-Xdebug-Docker setup
Occasionally a WordPress problem gets serious enough that you need to bust out Xdebug to find the root cause. However, on macOS I found it especially annoying to set up Homebrew with Apache (and to manage the vhost configurations and document roots), different versions of PHP, and occasionally different versions of MySQL/MariaDB. And because Homebrew installs Apache and PHP in funny places, I could never remember which config files I needed to edit.
This is exactly what Docker’s for! Thus I have created a very simple Docker Compose config that will install PHP/Apache and WP-CLI in one container and MySQL in another. WP-CLI is great because you can easily import a database file and search-replace all URLs to point at http://localhost:8000 . You can easily adjust the PHP and MySQL versions by editing the Dockerfile. PHP is set up to connect to an Xdebug client running on the host - I use VSCode.
It may need some improvements, but so far it works pretty well for me.
https://github.com/joelw/wordpress-xdebug
Exim smarthost on cPanel DNSONLY
The problem: Your cPanel DNSONLY server needs to send system emails to you, but your hosting provider blocks outgoing port 25 so you want to use a smarthost on another port. DNSONLY’s WHM does not have the full Exim configuration module, and cPanel has a weird custom Exim configuration.
How to fix it:
This one’s pretty easy, but took me some time to figure out. Edit /etc/exim.conf.localopts and change the smarthost_routelist line to point at your smarthost and port:
smarthost_routelist=* example.com::587
Then run /scripts/buildeximconf.
Training SpamAssassin when using Dovecot mdbox
SpamAssassin’s sa-learn tool works well if you have an mbox or Maildir folder for spam, but it does not support Dovecot’s excellent though slightly less convenient mdbox database format.
Here’s one way I’ve found to work around it and to extract messages and train the filter. Unfortunately it is not very efficient, because it calls sa-learn once per message. Any suggestions for improvements would be welcome!
Paths are hard-coded for cPanel. su to the user that owns the mailbox so that the correct SpamAssassin database is trained, or run it as a superuser and specify the DB option to point to the correct place.
USER=[email protected]
DB=/home/username/.spamassassin/bayes

# Get the list of mailboxes, and then set the folder you want to train against.
doveadm mailbox list -u $USER
MAILBOX=INBOX.junk

doveadm search -u $USER mailbox $MAILBOX | while read guid uid; do
    doveadm fetch -u $USER text mailbox-guid $guid uid $uid \
      | grep -v '^text:$' \
      | /usr/local/cpanel/3rdparty/bin/sa-learn --dbpath $DB --spam --nosync
done

/usr/local/cpanel/3rdparty/bin/sa-learn --dbpath $DB --sync