vena
vena.net
8 posts
code, networking, nerdery
vena · 1 year ago
Overriding netplan config managed by cloud-init on an existing Digital Ocean Ubuntu droplet
(please excuse any bad formatting, still haven't moved this away from tumblr...)
I saw a bunch of suggestions out there for doing this which have you entirely disable cloud-init's management of the network settings. I didn't want to do that; I just want to change the DNS servers from DO's default (Google's DNS) to a local unbound service running on the droplet.
It's easy enough to edit /etc/systemd/resolved.conf and set DNS= there, but that still leaves Google's servers in the netplan config for the default interface which may be used as fallback.
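For reference, the resolved.conf half of the change is a one-line edit. Here's a sketch of it as a GNU sed command, run against a scratch copy so nothing real gets touched (the file contents below are a trimmed stand-in for a stock resolved.conf, and 127.0.0.1 assumes unbound is listening locally):

```shell
# On the droplet you'd edit /etc/systemd/resolved.conf directly and then
# restart systemd-resolved; this demonstrates the edit on a scratch copy.
conf=$(mktemp)
printf '[Resolve]\n#DNS=\n#FallbackDNS=\n' > "$conf"

# Uncomment DNS= and point it at the local unbound listener (GNU sed)
sed -i 's/^#\?DNS=.*/DNS=127.0.0.1/' "$conf"

grep '^DNS=' "$conf"   # prints: DNS=127.0.0.1
```

Against the real file you'd follow it with systemctl restart systemd-resolved.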
Here's the relevant part of the netplan config on an Ubuntu 24.04 droplet:
# /etc/netplan/50-cloud-init.yaml
network:
  version: 2
  ethernets:
    eth0:
      match:
        macaddress: ...
      addresses:
        - ...
      nameservers:
        addresses:
          - 8.8.8.8
          - 8.8.4.4
        search: []
It doesn't look like simply dropping a new network configuration in /etc/cloud/cloud.cfg.d/ will do what I want. It doesn't seem to do anything at all (local configs run too early/late?), but even if it did, I'm also concerned it might overwrite the whole netplan config I'm trying to preserve. So instead, I'm going to use the runcmd directive to call netplan directly.
I added the following to /etc/cloud/cloud.cfg.d/99-local-dns.cfg:
#cloud-config
runcmd:
  - |
    netplan set network.ethernets.eth0.nameservers=null \
      && netplan set network.ethernets.eth0.nameservers="{ \
           addresses: [127.0.0.1], \
           search: [] \
         }" \
      && netplan apply
The reason I'm setting nameservers to null first instead of setting nameservers.addresses directly is that "netplan set" simply appends when applied to a list.
Rebooted, and confirmed.
# /etc/netplan/50-cloud-init.yaml
network:
  version: 2
  ethernets:
    eth0:
      match:
        macaddress: ...
      addresses:
        - ...
      nameservers:
        addresses:
          - 127.0.0.1
        search: []
$ resolvectl status
...
Link 2 (eth0)
    Current Scopes: DNS
         Protocols: +DefaultRoute -LLMNR -mDNS -DNSOverTLS DNSSEC=yes/supported
Current DNS Server: 127.0.0.1
       DNS Servers: 127.0.0.1
...
vena · 2 years ago
Disabling WordPress Jetpack's AI Assistant
I didn't see this anywhere else, so here's what worked for me...
\add_action( 'jetpack_register_gutenberg_extensions', function () {
    if ( \is_callable( '\Jetpack_Gutenberg::set_extension_unavailable' ) ) {
        \Jetpack_Gutenberg::set_extension_unavailable(
            'jetpack/ai-assistant',
            'no_ai_allowed'
        );
    }
}, \PHP_INT_MAX );
(ick, i guess tumblr is no good for programming blogs anymore...)
vena · 3 years ago
Proxmox/ZFS Notes
The Proxmox installer makes it seem like ZFS is all about zRAIDs, but I routinely use it on single-NVMe installs for the benefit of features like snapshots and compression.
That's great, but ZFS on Proxmox can also come with some annoying disk thrashing (frequency of writes) which may have a negative effect on the drive's wear levelling.
Some of what I'm doing here is risky. Some folks (used to?) think ZFS on a single disk is risky already, but I'm disabling protections (where noted) which risk corruption from sudden power loss or crashes. Proceed at your own risk, keep thorough backups.
Basic ZFS stuff
This stuff is likely best done at creation, but if not, then immediately after installation. If you're handy and a little brave, installing in debug mode under advanced options from the initial installer menu may give you the opportunity to make these changes after the zpool is created, but before data is copied.
# Force writes to be asynchronous.
# Potentially unsafe!
zfs set sync=disabled rpool

# Disable writing access times
zfs set atime=off rpool

# Store extended attributes in inodes rather than files
zfs set xattr=sa rpool

# If not already set in the installer, use lz4 compression.
# Modern CPUs are plenty fast enough.
zfs set compression=lz4 rpool

# Sets block size for r/w, may need additional tuning.
zfs set recordsize=16k rpool

# Increase the time to wait before flushing transactions.
# Potentially unsafe!
echo "options zfs zfs_txg_timeout=30" >> /etc/modprobe.d/zfs.conf
Reducing rrdcached writes
rrdcached is essentially a memory buffer that flushes to disk occasionally. It also keeps a journal, so effectively it's always writing to disk anyway. Proxmox uses it for statistics, so it's throwing a lot of data at it, which means a lot of writes. I need to both increase the time rrdcached lets data queue up before writing, and disable its journal.
Edit /etc/default/rrdcached:
# Change this:
WRITE_TIMEOUT=3600

# Add this:
FLUSH_TIMEOUT=7200

...

# Comment out JOURNAL_PATH to disable it. Potentially unsafe!
# JOURNAL_PATH=...
By default, rrdcached isn't started with the -f switch necessary to use the FLUSH_TIMEOUT. Edit /etc/init.d/rrdcached and find RRDCACHED_OPTIONS a few lines down from the top. After the WRITE_TIMEOUT definition, add one for -f ${FLUSH_TIMEOUT} so it looks similar to this:
${WRITE_TIMEOUT:+-w ${WRITE_TIMEOUT}} \
${FLUSH_TIMEOUT:+-f ${FLUSH_TIMEOUT}} \
${WRITE_JITTER:+-z ${WRITE_JITTER}} \
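If you'd rather script that edit than open the file by hand, a GNU sed append can do it. This is a sketch against a scratch stand-in for the init script, not the real /etc/init.d/rrdcached (the two option lines here are just enough context for the insertion to land in the right place):

```shell
# Stand-in for the RRDCACHED_OPTIONS block in /etc/init.d/rrdcached
script=$(mktemp)
printf '%s\n' \
  '${WRITE_TIMEOUT:+-w ${WRITE_TIMEOUT}} \' \
  '${WRITE_JITTER:+-z ${WRITE_JITTER}} \' > "$script"

# Append the -f option line right after the WRITE_TIMEOUT line (GNU sed)
sed -i '/WRITE_TIMEOUT/a ${FLUSH_TIMEOUT:+-f ${FLUSH_TIMEOUT}} \\' "$script"

cat "$script"
```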
I've heard of people putting the journal on a ramdisk, but that's memory I'd actually like to use, and is no safer than just disabling the journal.
Restart rrdcached with systemctl restart rrdcached.service or reboot.
Bonus: Disable unneeded services
I also want this box to boot faster, so I look at the output of systemd-analyze blame for services I don't need and their effect on boot time.
This is a single node cluster, so I don’t need HA or corosync.
systemctl mask pve-ha-crm.service
systemctl mask pve-ha-lrm.service
systemctl mask corosync.service
Don't need console spam at login, and I don't need SPICE, so...
systemctl mask pvebanner.service
systemctl mask spiceproxy.service
I'm not using LVM, so let’s disable all of that...
systemctl mask lvm2-monitor.service
systemctl mask lvm2.service
systemctl mask e2scrub_reap.service
And along with not using LVM, my single NVMe drive is the boot disk. I don't need to have the boot process sit around waiting for non-existent block devices to be discovered or created.
systemctl mask systemd-udev-settle.service
With the number of disabled services, I typically reboot at this point.
vena · 11 years ago
Reload gulp when gulpfile.js changes
I'm that lazy.
var spawn = require('child_process').spawn;
var gulp = require('gulp');

gulp.task('auto-reload', function() {
  spawn('gulp', [], {stdio: 'inherit'});
  process.exit();
});

gulp.task('watch', function() {
  gulp.watch('gulpfile.js', ['auto-reload']);
});

gulp.task('default', ['watch']);
vena · 11 years ago
"Nothing to migrate."
Ran into an amusing issue with Laravel today. Artisan uses glob for searching the filesystem and doesn't escape paths. That means if you have a special character anywhere in your folder names, including folders above the project folder, glob will do strange things, fail, and otherwise go kablooey. In my case, a folder above the project had brackets in its name.
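PHP's glob() uses shell-style pattern matching, so the failure is easy to reproduce with plain shell globbing; the paths below are hypothetical, but the shape matches my situation:

```shell
# A path with brackets in its name, like the one that bit me
base=$(mktemp -d)
mkdir -p "$base/project[v2]/database/migrations"
touch "$base/project[v2]/database/migrations/001_create_users.php"

# Unquoted expansion globs the whole string: [v2] becomes a character
# class matching 'v' or '2', so the literal directory never matches
# and the pattern comes back unexpanded
pattern="$base/project[v2]/database/migrations/*.php"
set -- $pattern
if [ -e "$1" ]; then echo "found migrations"; else echo "Nothing to migrate."; fi
# prints: Nothing to migrate.
```

In the shell you'd fix this by quoting or escaping the brackets; glob() has no such escape hatch unless the caller escapes them, so the practical workaround is renaming the offending folder.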
vena · 11 years ago
Using an ASUS router as a Time Capsule
I acquired an old Drobo Gen2, but it's a pretty slow device, so I'd like to try using it as a networked Time Machine drive. Attaching a disk array to my home server seemed redundant (I'd rather that thing focus on streaming), and I want my macs to be able to sleep, so I figured I'd try out my ASUS RT-N16 router as a Time Capsule. It can run Linux-based firmware, and most of the necessary tools are available through entware.
It turned out to not be the best idea in the world, but guides I could find on doing this were thin and full of holes, so I decided to write my own for anyone who doesn't mind that it's really slow and could shorten the life of your router.
Only Step 4 is specific to using a Drobo, so technically this is just a guide to turning your router into a Time Capsule.
Software installed in this guide:
busybox
Provides various Linux system tools
drobo-utils
Drobo management for linux
dbus
Allows applications to talk to each other in a standard way
avahi
Enables zeroconf, allowing your router to advertise services to your macs in a way they can understand (Bonjour). The file server we create on the router will automatically show up in your Finder with a neat little icon and everything.
netatalk
Enables the router to serve files over the standard Apple File Protocol
Step 1: Install Merlin's ASUS Firmware
I generally choose Merlin's firmware for ASUS routers. It's close to stock, but frequently updated, a lot more secure, and opens up a lot more capability. Much of this guide may get you where you need to be with other linux-based firmware such as Tomato, but I'm focusing solely on the tools available in Merlin's.
After install, enable JFFS. This creates a partition on the router's flash memory which will survive reboots, letting us create scripts and configuration files that hook into and extend the base system which gets reflashed at every boot.
Enable SSH in Administration, System tab. Your username and password to connect via SSH will be the same as that for the router's web interface.
Step 2: Install entware
Note: ASUS's DownloadMaster is incompatible with entware, and the installer will remove it. That's fine for me, I don't use DownloadMaster. While DownloadMaster IS compatible with optware, optware is seemingly no longer maintained and no one should be using it.
I want the entware install to be portable, so I'm going to install it on a USB thumb drive. The drive must be formatted ext2 or ext3. Once you've got your drive plugged into one of your router's USB ports, simply install entware.
Step 3: Install busybox
BusyBox is a prebuilt library of unix tools that aren't generally available on embedded linux systems, such as something as simple as adduser. Installation is easy with entware:
opkg install busybox
Step 4: Setting Up the Drobo
Drobo doesn't provide first-party tools for Linux, so I'll be using drobo-utils. You may want to read the portions on LUN sizes, but I generally skip all of that by formatting the drive with the real, first-party Drobo Dashboard installed on another machine. When formatting, drobo-utils will use whatever LUN size is already set on the device. It doesn't matter what filesystem it's formatted with at that point, just that we're setting the LUN size to the maximum your particular Drobo supports (16TB for the consumer-level Drobos).
Note: I really recommend formatting the Drobo on a more powerful Linux machine. Ubuntu has drobo-utils in apt; spin up a virtual machine and do it there. The router is just too slow to do this in a reasonable amount of time, and even on a more powerful machine a 16TB LUN can take hours to format. That said, if you insist on doing it on the router, here are instructions specific to doing that.
On the router, you'll need to install drobo-utils' dependencies before it'll work:
opkg install git parted python
Once that's done, we'll fetch the latest from drobo-utils' git repo and put it on our thumb drive (assuming it's at /mnt/sda1):
cd /mnt/sda1
git clone git://drobo-utils.git.sourceforge.net/gitroot/drobo-utils/drobo-utils
It's not designed to run on entware, and assumes python is in the standard *nix bin path, so we have to change that by editing the drobom file in drobo-utils. Change the first line to read:
#!/opt/bin/python
Now just connect the drobo to the other USB port on the router, give it a few minutes to spin up, and drobo-utils should be able to find it when you run drobom status:
/dev/sdb - Drobo disk pack 00% full - ([], 0)
Now we can use drobom to format the array.
./drobom format ext3
It's going to take a while. Seriously, a really long while. You really might want to do this on a more powerful linux machine.
Once it's formatted, you can mount it to /mnt/Drobo01 (or whatever your Drobo is named), or reboot the router and it'll be mounted to the same place automatically.
Create a directory on the Drobo named TimeMachine
mkdir /mnt/Drobo01/TimeMachine
Step 5: Dealing With Reboots
As mentioned before, changes to the router's embedded Linux do not survive reboots. We're going to be adding some users and groups here, which go into the regular old /etc/passwd and /etc/group files. They will not survive a reboot, so we need to recreate them at every boot. To do this, we'll create a startup script that will contain each command we need to run at boot. Create /opt/etc/init.d/S00setup and add the following as the first line:
#!/bin/sh
Now make it executable
chmod +x /opt/etc/init.d/S00setup
Keep in mind, we will need to add every adduser and addgroup command we use in this guide to this file so they can be re-run on every boot.
Step 6: Install dbus
dbus would normally be installed as a dependency of the other packages we'll be installing, but since it requires some additional configuration, I install it separately and make sure it's working before moving on.
opkg install dbus
If you were to try and run dbus now, it would throw errors like this:
Failed to start message bus: Could not get UID and GID for username "root"
entware's dbus package does not come properly configured for the Merlin environment out of the box, and expects to run as a user named root, which does not exist here. To solve this, open /opt/etc/dbus-1/system.conf and change the user option to nobody:
<!-- Run as special user -->
<user>nobody</user>
Once this is done, dbus will start as normal.
/opt/etc/init.d/S20dbus restart
dbus also has a habit of leaving its PID file behind. Edit /opt/etc/init.d/S20dbus and add the following after #!/bin/sh
rm /opt/var/run/dbus.pid > /dev/null 2>&1
Step 7: Install avahi
avahi broadcasts services to the network using a system called zeroconf, known in the Apple world as Bonjour. It's going to let our Drobo show up nice and pretty in Finder with a Time Capsule icon and everything.
opkg install avahi-daemon avahi-utils
avahi needs to start with an unprivileged user and group, and though entware's package for it uses the nobody user, it expects a group named nogroup, which doesn't exist in Merlin's firmware. Create the group:
addgroup nogroup
Add it to the startup script
echo 'addgroup nogroup' >> /opt/etc/init.d/S00setup
Edit /opt/etc/avahi/avahi-daemon.conf, uncomment the host-name line and give it a name:
host-name=TimeCapsule
avahi uses XML files to describe services, so we're going to create one to tell it about the AFP (Apple File Protocol) service we're going to create with netatalk in a minute. Edit or create /opt/etc/avahi/services/afpd.service with the following content:
<?xml version="1.0" standalone='no'?><!--*-nxml-*-->
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
  <name replace-wildcards="yes">%h</name>
  <service>
    <type>_afpovertcp._tcp</type>
    <port>548</port>
  </service>
  <service>
    <type>_device-info._tcp</type>
    <port>0</port>
    <txt-record>model=TimeCapsule6</txt-record>
  </service>
</service-group>
You can start avahi now to check out how it'll appear in Finder:
/opt/etc/init.d/S42avahi-daemon start
You should see the TimeCapsule appear in your Finder sidebar.
Step 8: Install netatalk
netatalk is an AFP server which will let us share a folder natively with our macs. Once again, entware makes it easy to install:
opkg install netatalk
Edit /opt/etc/netatalk/afpd.conf and comment out the first (and only) line. Add this under it:
- -tcp -noddp -uamlist uams_dhx.so,uams_dhx2_passwd.so -nosavepassword
Edit /opt/etc/netatalk/AppleVolumes.default and comment out any existing lines. Add this at the end of the file:
/mnt/Drobo01/TimeMachine "TimeMachine" veto:"/lost+found/Network Trash Folder/Temporary Items/" allow:@timemachine cnidscheme:dbd options:usedots,upriv,tm
This tells netatalk to...
Make /mnt/Drobo01/TimeMachine available as a shared drive named "TimeMachine"
Hide unnecessary system volumes
Make it accessible only to users in the timemachine group (which we will create in a second)
With options to make dotfiles visible, use OS X AFP3 user privileges, and enable Time Machine for the volume.
Note: netatalk is going to report the maximum LUN size of the Drobo as its actual capacity, which can cause problems. You may want to set volsizelimit on the share.
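For example, a hedged sketch of what that could look like (I haven't needed this myself; per the netatalk docs the value is in MiB, so this would cap the reported size around 2TB):

```
/mnt/Drobo01/TimeMachine "TimeMachine" veto:"/lost+found/Network Trash Folder/Temporary Items/" allow:@timemachine cnidscheme:dbd options:usedots,upriv,tm volsizelimit:2000000
```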
Create the timemachine group:
addgroup -g 1000 timemachine
echo 'addgroup -g 1000 timemachine' >> /opt/etc/init.d/S00setup
This will allow us to have multiple users if we need them, each with their own login. Keep in mind if you create users for this, you will have to do the user creation AND password setting in the S00setup script:
echo 'adduser -u 1001 -G timemachine myusername' >> /opt/etc/init.d/S00setup
echo 'echo -e "password\npassword" | passwd myusername' >> /opt/etc/init.d/S00setup
Note you'll want to keep track of the UIDs for those users, since the drive may be mounted before the setup script runs and you'll want the UIDs on the drive data to match the users that created the backup files.
I will just be using the same username and password as the web interface, so I will simply add user admin to the timemachine group:
adduser -G timemachine admin
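Putting the pieces together, the finished S00setup ends up with one line per user/group command from this guide. Here's a sketch of its final shape, recreated in a scratch directory rather than at its real /opt/etc/init.d/S00setup path (the adduser/passwd lines for dedicated users would go in here too):

```shell
# Recreate the startup script in a scratch directory to show its final
# shape; on the router this lives at /opt/etc/init.d/S00setup.
dir=$(mktemp -d)
cat > "$dir/S00setup" <<'EOF'
#!/bin/sh
addgroup nogroup
addgroup -g 1000 timemachine
adduser -G timemachine admin
EOF
chmod +x "$dir/S00setup"

head -1 "$dir/S00setup"   # prints: #!/bin/sh
```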
Restart netatalk...
/opt/etc/init.d/S27afpd restart
Or just reboot the router entirely. You should now be able to connect to the TimeCapsule and TimeMachine share, and set up Time Machine on your mac.
Epilogue
This is EXTREMELY slow on the RT-N16. I let an encrypted backup run overnight and I only saw 80GB moved, the whole time running the router red hot. It's obvious the router is really not powerful enough to do this efficiently, and it doesn't exactly help that netatalk is severely outdated in entware. Pogoplugs are cheap and can run ArchLinux, so I may try that next.
vena · 12 years ago
Accessing MySQL servers from the host on Vagrant+Puppet VMs
On my Vagrant VMs, I wanted to be able to access a guest VM's MySQL server from the host when the guest has a dynamic IP, but simply giving a MySQL user a wildcard host wouldn't cut it as long as MySQL was bound to 127.0.0.1 (Ubuntu's default). I needed to bind MySQL to all interfaces (0.0.0.0), but augeas chokes on my.cnf for some reason, so out comes the hatchet:
exec { '/usr/bin/perl -pi -e "s/^.*bind-address.*$/bind-address = 0.0.0.0/" "/etc/mysql/my.cnf"':
  onlyif  => '/bin/grep "bind-address.*\=.*127\.0\.0\.1" /etc/mysql/my.cnf',
  require => Package["mysql-server"],
  notify  => Service["mysql"],
}
You'll still need to give your user a remote or wildcard host, of course.
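The grant itself looks something like this; a sketch using the old MySQL 5.x syntax current at the time, with placeholder user, password, and database names:

```sql
-- '%' is the host wildcard: allow connections from any host
GRANT ALL PRIVILEGES ON mydb.* TO 'vagrant'@'%' IDENTIFIED BY 'vagrant';
FLUSH PRIVILEGES;
```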
vena · 12 years ago
Remove personally identifying information from iTunes Match m4a files
Quick little bash script using AtomicParsley to recursively remove all personally identifying information from m4a files downloaded through iTunes Match:
#!/bin/bash

if [ -z "$1" ] || [ -z "$2" ]; then
  echo "Usage: $(basename "$0") [source directory] [output directory]"
  exit 0
fi

find "$1" -depth -name "*.m4a" -type f | while read file ; do
  basename=$(basename "$file")
  path=$(dirname "$file")
  newpath="$2/${path/$1/}"
  newfile="$newpath/$basename"

  # Create the new directory if it doesn't exist
  mkdir -p "$newpath"

  echo "Working on: ${file/$1/}"

  AtomicParsley \
    "$file" \
    --DeepScan \
    --manualAtomRemove "moov.trak.mdia.minf.stbl.mp4a.pinf" \
    --manualAtomRemove "moov.udta.meta.ilst.apID" \
    --manualAtomRemove "moov.udta.meta.ilst.atID" \
    --manualAtomRemove "moov.udta.meta.ilst.cnID" \
    --manualAtomRemove "moov.udta.meta.ilst.geID" \
    --manualAtomRemove "moov.udta.meta.ilst.plID" \
    --manualAtomRemove "moov.udta.meta.ilst.sfID" \
    --manualAtomRemove "moov.udta.meta.ilst.cprt" \
    --manualAtomRemove "moov.udta.meta.ilst.flvr" \
    --manualAtomRemove "moov.udta.meta.ilst.purd" \
    --manualAtomRemove "moov.udta.meta.ilst.rtng" \
    --manualAtomRemove "moov.udta.meta.ilst.soal" \
    --manualAtomRemove "moov.udta.meta.ilst.stik" \
    --manualAtomRemove "moov.udta.meta.ilst.xid" \
    --manualAtomRemove "moov.udta.meta.ilst.----.name:[iTunMOVI]" \
    -o "$newfile" > /dev/null
done
So far this does not seem to break the match on any of my files, it just changes them from "Matched AAC audio file" to simply, "AAC audio file."