LXD as developer isolation
I'm a developer. Loving it. What I hate is all the cruft my machine accumulates over the passing months. Have you ever tried NodeJS? Prepare for megabytes of JavaScript sprinkled over your system. Android development (or Java in general) is even worse.
Sure, there are ways of mitigating this problem somewhat. For Ruby you have rvm (or rbenv or ...), for Python there's virtualenv and for Node there's nvm. In my humble opinion these solutions are nothing more than dirty shell hacks. God forbid anything ever goes wrong. Or you have to compile native binaries. It all ends in tears.
For a long time now I've been working my way toward developer nirvana. I've created Ansible scripts to provision throwaway VPSes, I've explored the wondrous realm of Chromebook development (there's nothing to screw up!). And I have really tried to love Docker. The promise is significant: just spin up a new Docker image and your problems go away!
But first off, Docker is for production environments where all the experimentation has been done. Second, the Dockerfile is terrible: just some wonky DSL which really is a shell script. Third, starting several services is no fun. Sure, there's Compose, but who wants all that complexity on their development machine?
LXD
Enter LXD. Which really is an easier LXC. Which really is an easier libvirt. Which really is an easier cgroups. Or so I think.
LXD lets you start Linux containers right on your laptop. Shutting them down won't delete the changes you've made there. LXD is fast, networking recently became easy and sharing volumes never felt so relaxed. Copying files back and forth is easy too. I'll let you read the docs instead of explaining everything again here.
One confusing bit: there are two LXCs. The first is the project you find on linuxcontainers.org and the other is the CLI client which ships with LXD. Forget about the first one. You start the daemon, lxd, and interact with it through the CLI client, lxc.
My setup
I have a prj directory with all my source ever, which I share with each and every container I spin up (a typical session is sketched below this list).
I edit my code on my host machine and execute the code in the container.
If I mess up I delete the container.
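A typical session looks something like this (the container name, image alias and paths are my own choices, not prescribed by LXD):

lxc launch ubuntu:16.04 scratch   # fresh container
lxc config device add scratch prj disk source=/home/you/prj path=/home/ubuntu/prj   # share my source
lxc exec scratch -- /bin/bash     # hack away
lxc delete --force scratch        # messed up? gone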
Bliss
Wanna give Alpine Linux a try (you should)? Go for it. Really need that sweet Ubuntu packaged piece of kit? Spin up that container!
I've been using LXD for my experiments in machine learning (TensorFlow) and to follow the excellent Python video tutorials of sentdex. Crystal has its own container too. And all my clients nowadays are neatly separated as well! Good times.
Downsides
Obviously there are downsides. When doing stuff with hardware I've had mediocre success. My Arduino works, but adb can't find my Android phone in the container (perhaps fixed in the latest release).
And I imagine anything non-terminal is going to mess things up. IPython works great for me as it just runs on a port. But I doubt Android Studio would work (not that I care, since I edit my code on the host).
Hattip
Major props to the main developer of LXD, Stéphane Graber. I feel he and his team have made the right design choices with regard to the interface and API. He's also super helpful on his blog and on Twitter. Keep up the absolutely great work.
Predicting house prices
Our little country is blessed with a monopoly which sort of seems to work: the online real estate market is dominated by one player called Funda.
Recently I found myself with a little time on my hands, in which I decided to take another stab at my machine learning aspirations. The plan is really simple: scrape said website and try to determine, based on several features, the correct price of a house.
The website holds almost 200,000 houses up for sale, which I think is almost enough to do some proper learning on. I'm going to try several approaches from the scikit-learn toolkit, but I'm certainly also going to try some deep learning approach. Probably there's not enough data for that, but perhaps some data can be generated.
My scraper can be found here: https://github.com/haarts/funda_scraper
More coming soon(ish).
LUKS with key file on Kali/Debian Jessie
Today I managed to set up Kali (Debian Jessie) with LUKS and a key file on a USB stick. This took me considerable time, as much of the information on the process is either outdated or for other Linux flavors.
When you have figured it out it is, of course, pretty easy. Let me break it down.
First off, forget all the key generation bullshit. Carrying a USB drive with a file called 'secret_key.bin' is immediately suspicious. Carrying a USB drive with a whole bunch of pictures of your kids is significantly less so. For the TLA agencies to break your encryption they'd need to reproduce that specific picture of you at the beach. So just pick a nice picture and keep it under 8MB, as that's the maximum size of a key file. To make things look absolutely innocent, format the stick as VFAT too.
As the next step, make it easy on yourself and do an install with the guided partitioning option with encryption. Go ahead and use the entire disk, everything on one partition. Many opt for convoluted setups; I believe that's only more to mess up. Besides, doing a guided setup makes sure that the encryption works and that you only have to add the key file (and some other minor details).
Then there are a couple of things to know as background. You'll read everywhere that you're supposed to edit /etc/crypttab. This confused me, as this file is on the to-be-encrypted disk, so how would whatever decrypts the disk know what to do? Well, enter the initramfs (I fear that at some point I'll need to read Linux From Scratch). This is the tiny system booted into by Grub. So the boot sequence is:
Power on.
Grub (really the only bootloader which is relevant nowadays).
Initramfs.
Full-blown system.
This is where the outdated stuff confused me. There's talk of kernel parameters and Grub options and custom shell scripts. None of that is necessary. We'll be doing our business in the up-to-date initramfs. Edit the /etc/crypttab file to point to your key file. Replace 'none' with something like:
root UUID=<your-encrypted-disk-uuid> /dev/disk/by-uuid/<your-usb-disk-uuid>:/path/to/keyfile luks,keyscript=passdev
(I now believe that using the UUID is a risk: when your USB disk breaks there's no way to recover. by-label looks saner.)
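A by-label variant would look something like this (the label and the picture's path are hypothetical, substitute your own):

root UUID=<your-encrypted-disk-uuid> /dev/disk/by-label/HOLIDAY_PICS:/DCIM/beach.jpg luks,keyscript=passdev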
Add to /etc/initramfs-tools/modules:
nls_cp437
nls_utf8
This is to ensure the tiny system called initramfs knows how to read a VFAT disk. All the USB modules are included by default.
Also add your keyfile to the accepted keys:
cryptsetup luksAddKey /dev/<device> /path/to/keyfile
You'll need to type the initial encryption password.
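To check that the key file actually landed in a key slot, dump the LUKS header; a second key slot should now read ENABLED (the device name here is an assumption, use your encrypted partition):

cryptsetup luksDump /dev/sda5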
Now for the magic part:
$ update-initramfs -u
If you see the message 'target sda5_crypt uses a key file, skipped' you have forgotten to add keyscript=passdev. That script basically pipes the key file as the password into the decryption phase.
Reboot!
You'll be greeted by the somewhat strange 'a start job is running' message and a timer which counts up to 1m30. I'm unsure what to make of this, other than that it is annoyingly long. If you don't have your USB key slotted it just seems to hang ('oh no, my disk crashed!').
LXC, Cgroups and weird numbers
Running Docker inside of an LXC container is not as trivial as I expected. Here I would like to clarify one of the many things that puzzled me along the way: the enigmatic configuration lines in the LXC configuration file (/var/lib/lxc/your-container/config):
lxc.cgroup.devices.allow = b 7:* rwm
With some effort I figured out that b is a block device, c a character device (no idea what those are at the moment) and a is 'all', or both. r, w and m are relatively easy too: read, write and mknod (make node, I guess).
What remained unclear were the digits. In the man page they are referred to as 'major/minor'.
After some digging I found /var/lib/lxc/your-container/rootfs/dev, which lists the devices. There I couldn't find the devices Docker was complaining about missing. The command lxc-cgroup eventually cleared everything up. This command can be used to add devices to a running container. Where I should have been looking was /dev. The numbers there can be used in the container configuration file.
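To see those numbers for yourself (the loop device below just happens to match the b 7:* line above):

$ ls -l /dev/loop0
brw-rw---- 1 root disk 7, 0 Jan  1 12:00 /dev/loop0

The 'b' and the '7, 0' are the device type and the major/minor pair. And to allow such devices in a running container:

$ lxc-cgroup -n your-container devices.allow "b 7:* rwm"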
Helpful were the posts of Stéphane Graber on LXC.
Hope this helps someone.
Developer flow
An important, and often overlooked, aspect of developing code is setup.
Often I notice that it is not clear how one should proceed. The tasks are there, the concept is clear, but how do you get started on them?
I'm talking about getting into the write code/test loop. Which databases should be up and running? With what content? Should you use a debugger? What is the best directory structure for the project?
The last conference I went to was insightful. Not because of the talks or the interesting people or the great time I had with colleagues, but because I saw some very experienced programmers work their magic. I saw how they wrote tests and code.
Are you procrastinating? Perhaps you haven't established a flow yet. Do so. And write it down. Others have the same issues.
Someone else's code
Not too long ago I created a pull request for Fish. In that pull request I greatly enhanced Fish's ability to handle Git repositories. In my humble opinion of course.
In doing so I went all the way. I refactored all the functionality. Hell, I started all over again! I took the work of many programmers and threw it out.
All seemed dandy; I had fruitful discussions with other developers and we worked out a plan, which I implemented. But when I finally offered up the pull request, resistance arose from several senior developers in the community. I was puzzled. I incorporated many of their suggestions, but as it stands my pull request has yet to be merged. I have given up on that particular request.
Last month I worked hard and long on a piece of open source (Groovy this time). Together with a colleague I rewrote the way the application handled background jobs. The request got merged and promptly got refactored beyond recognition! All our careful design decisions and hard work down the drain. I felt angry!
On the bike home it struck me. The situation eerily resembled the botched Fish pull request only from the other perspective.
I'll be more observant of the ecosystem next time.
PHP is awesome, sometimes
I quit a job when my employer switched from a Java stack to a PHP environment. I don't like needles and haystacks. I'm not keen on many aspects of PHP. However, a developer and entrepreneur I greatly respect often works with PHP. He is a true hacker. Within minutes he is able to whip up a script to perform a small task and show the output on an HTML page. Massaging some JSON, showing a couple of pictures, interacting with an API. You name it. I stand corrected! PHP does have its uses sometimes.
Whitespace delimited formats
Recently I've had the opportunity to dabble with Python, the whitespace delimited variant of Ruby.
I always felt whitespace delimited formats were unwieldy; now I know they are. For one, your editor can't help you.
def sheep():
    a = 123
When pressing 'enter' just after the '3', is your method done? Your editor can't know and thus can't de-indent for you. Even typing a new 'def' won't help, as a nested def is valid Python. Stupid.
Then there are the no-op methods (granted: a small thing). But imagine sketching out a program. You can't leave the method bodies empty:
def sheep():
def herd():
This is an IndentationError. You'll have to use the language keyword 'pass' to make this valid Python. Ugh.
To end this on a positive note; I really do like the list comprehensions in Python. Nice and clean:
[ transform(item) for item in items ]
Writing functions in Fish
I really like Fish. No more awkward prompt definitions, no more tons of configuration till your shell is useful. So I made the switch.
For my work I use Git a lot. It's important to me that my shell plays nice with Git. My Zsh setup used a slightly modified version of this Git prompt and I wanted to continue using it with Fish.
With Fish everything is a function, your prompt as well. Fish also ships a function you can use in your prompt definition. This is a good place to start, but not everything I wanted could be done: the function provides some customization, but in a limited way. I decided to help out.
Below you'll find a guide I would have found useful when figuring out how to write functions in Fish. It's not hard, but I would have benefited from the following text. I assume you have read the function entry in the user documentation.
Creating functions in Fish
Variable declarations
Variables in Fish might be a bit counterintuitive for someone with a programming background. Assignment is done with the set builtin. A plain set galaxy_destroyers 42 does one of two things, depending on the global state of the shell.
If there is already a global variable with the name galaxy_destroyers, this invocation changes that variable to 42.
If there is no such variable, the invocation creates a local variable with said value.
To avoid this ambiguity it is wise to use set with the -l flag in your functions. This forces the variable to be local (with a twist...).
When following this best practice you might be tempted to do:
function galaxy_destroyer
    if test 1 -eq 1 # always true
        set -l destroyer 234
    end
    echo $destroyer
end

$ galaxy_destroyer
> 234
This works!
However:
function last_destroyer
    set -l destroyers dalek cylon borg
    for i in $destroyers
        set -l last $i
    end
    echo $last
end

$ last_destroyer
>
This returns nothing! That is because the if clause does not introduce a new scope but the for clause does. Using the -l flag makes the variable local to its enclosing scope.
To avoid these subtle errors you often see the variables declared in a function like so:
function herd
    set -l destroyers
    ...
end
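Applied to the loop example above, the pattern looks like this (my own reconstruction, so double check):

function last_destroyer
    set -l last # declared once, in the function's own scope
    set -l destroyers dalek cylon borg
    for i in $destroyers
        set last $i # a plain set now updates the existing local
    end
    echo $last
end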
This concludes part 1. In the next part I'll elucidate return types in Fish.
Chromebook development machine, 1 month in
I've been using a Chromebook lately as my primary machine. In my first post I explained the basic setup and the issues I was having. Having used my Chromebook for a while now, I think I have (even) more sensible things to say.
VPS
I think I have finally settled on a VPS. My Ansible configs have come a long way. With regard to RAM: 2GB is enough, 1GB is not, it's that simple. It also matters which virtualization the vendor uses. I started out with OpenVZ, which is what all the cheap VPSes use. The major downside is that you are unable to run kernel modules with it.
FUSE
And I wanted to use a kernel module: FUSE. This is used to mount external filesystems. I wanted a 'drive' backed by one of the big storage providers, a drive I could take with me as I moved from VPS to VPS. After much deliberation I settled on S3QL. It requires FUSE (as they all do). This meant choosing any other virtualization over OpenVZ. Currently I'm running a KVM VPS at FileMedia after trying the 1GB offering at DigitalOcean.
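For reference, the S3QL workflow boils down to this (bucket name and mount point are made up; the storage URL scheme depends on your backend):

$ mkfs.s3ql s3://my-s3ql-bucket   # create the encrypted filesystem once
$ mount.s3ql s3://my-s3ql-bucket /mnt/drive
$ umount.s3ql /mnt/drive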
Not settled yet
S3QL is nice, it encrypts my stuff on the backend, but that means I can only access it via S3QL. BitTorrent just released their Sync offering and I tried it out using my Raspberry Pi as a store. This seems to work well and I plan to start using it.
Music
This is still hard. I've sort of settled on Grooveshark and, surprisingly enough, my phone, which I use daily for tunes anyway.
Overall
I like my Chromebook, however it has some distinct disadvantages. For one, it is slow. It's not that I have to wait a lot, it's just not as snappy as my old Macbook (not that I expected it to be). I would love for it to be just a tad more powerful. The other downside is external screens; while they nearly always work, disconnecting one always leads to a frozen Chromebook. I now just power off the device; booting is quick enough. It seems that after 20 years, external monitors are still hard. Then there is printing and scanning. I still have some hope for printing, but scanning seems just not possible. My Canon LiDE 110, a device I absolutely adore, will sit idle. My aging Lexmark E120n network printer will also gather dust.
To end on a happier note: battery life is just great. It is also perfectly predictable; all the compiling I used to do now happens on my VPS!
Chromebook as a development machine
Update followup: One month in
I drank the Kool-Aid and bought a Samsung ARM Chromebook. My Macbook Pro needed repairing and I can't go without a laptop for two weeks. For a living I am a Ruby on Rails developer; most of my time is spent either on the command line and in Vim, or in a browser. I figured I might just as well get a Chromebook, which only has these two things. My plan was to use plain Chrome OS with a VPS. I really didn't want to install Ubuntu on the laptop. In the end the only thing I tweaked on the Chromebook itself was to run it in developer mode.
VPS
Recently I have gotten quite enthusiastic about the devops tool Ansible, which I now use to provision my VPS. I decided to do this not only because I like Ansible but also because there are so many different VPS vendors out there; I wanted to be able to quickly and painlessly switch from one vendor to another. Currently I rent a 2GB VPS from BudgetVM for $10 a month. Initially I was looking for a 4GB VPS but the price point was more important to me, and who knew how much RAM I would need? After having run this setup for a couple of days I think 2GB is more than enough. Actually, I now think the order of importance is: latency, disk IO, CPU, RAM. What helped me enormously was Serverbear.com; this site finally gave me insight into the overgrown VPS market (and they have coupons too!).
Ruby
Many people compile Ruby from source on their servers. I think that is silly, to say the least. Initially I set out to create my own Ubuntu Ruby packages; this is not for the faint of heart and I was struggling. Then I found Brightbox, bless them! These guys maintain a PPA with the latest and greatest Ruby builds (plus an experimental PPA). They even threw in some optimizations. This saved me hours.
Development
Shell
Lots of road bumps! I have tried crosh, the native shell of Chrome OS, and I have tried Secure Shell. The main downside of crosh is that it runs in a tab, which then eats your Ctrl-W. I really need that. I have twiddled with vim bindings and I might get back to it, but for now I have settled on Secure Shell. The latter makes it easier to store your ssh keys, which, by the way, I had to generate on a different machine (unless you run in Chrome OS developer mode).
Windows
I also struggled with a setup in which I could use my windows effectively. On my Macbook I used iTerm with several tabs. Running Secure Shell in a tab is not possible without it eating Ctrl-W, so I run it in a window. As a tabs substitute I now use tmux; I'm still getting used to it. So now I only have two windows, between which I can cycle with alt-tab: one running Chrome and the other running Secure Shell. I find it very important that with one particular keystroke I always get to the same spot.
Downsides
Music
I still haven't found a proper way of playing my tunes. I have a large iTunes library but I would be willing to switch to some online service for less than $10 per month. For now I resort to online radio which involves Flash players. These are no fun at all and cause the load on my Chromebook to steadily rise to around 3.5 at which point things start to slow down noticeably.
Downloads
I receive email, sometimes even with an attachment. This I can download but what to do with a .xsd file? What I would really like is for my Downloads folder to be mounted on my VPS.
My resources
Provisioning
In my vps-ansible-config repo all provisioning comes together (see the invocation sketch after the list). It:
installs Ruby
adds a new user
installs basic packages
fixes the locale
sets up my dev environment
links up my dotfiles
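Applying it all is then a single command (the playbook file name is illustrative; the inventory is the same kind of file I use elsewhere on this blog):

$ ansible-playbook -i ~/ansible_hosts site.yml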
Dotfiles
I have seriously spent time on my dotfiles. I have files for:
vim
zsh
git
tmux
Expectations in a RSpec before filter
Last week I found a piece of code which put expectations (for example `should_receive` or the likes) in before filters in a spec file. I believe this is not correct, in subtle ways.

- A before filter is meant to define the 'world' your spec is going to run in, not to set expectations. Those go into `it` blocks.
- Besides that, when putting expectations in a `before` filter, the expectation is needlessly checked every `it` block. Imagine the expectation _not_ being met. All of a sudden all your specs turn red. I'd rather see one specific spec fail so I immediately know where to look.

So I'm suggesting you put no expectations in your `before` filters, and if you feel the urge to do so, it probably needs its own `it` block. An example of how it should be done:

describe "foo" do
  before do
    some_object.stub(:some_method).and_return(:some_value)
  end

  it "manipulates the object" do
    some_object.should_receive(:some_method).with(:some_argument)
    some_invocation
  end
end
Go on a Raspberry Pi
Recently I found myself playing with both [Go](http://golang.org) and a [Raspberry Pi](http://raspberrypi.org). I've built some Go programs on my Mac, but copying over the executable to the Pi did not work (surprise!). It is actually fairly easy to cross-compile a Go binary:

```
GOARM=5 GOARCH=arm GOOS=linux go build your_program.go
```

It's all pretty self-explanatory; the one thing that confused me was the `GOARM` variable. I'd expect `GOARCH` to be enough, but apparently there are different ARM types out there. A full list can be found on the Go site under [Optional Environment Variables](http://golang.org/doc/install/source#environment).
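Getting the binary onto the Pi and running it is then plain scp/ssh (host name and paths are whatever yours happen to be):

```
scp your_program pi@raspberrypi:/home/pi/
ssh pi@raspberrypi ./your_program
```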
Here documents in Ansible
Sometimes you have a binary which _insists_ on asking you questions when installing it. I ran into that with the Linux [Crashplan](http://crashplan.com/) installer. Of course there is a solution to this, called ['here documents'](https://en.wikipedia.org/wiki/Here_document). This allows you to issue commands to a script or program by piping in a piece of preformatted text. Usually the CR/enter/return is represented as just that: you create the here document with enters at the appropriate places. Within an Ansible command that is not possible. Luckily you can use `\n` in these cases, ending up with something like this (the answers themselves depend on what the installer asks):

```
ansible hostname -i ~/ansible_hosts -m shell -a "/some_path/install.sh <<EOF\nyes\nyes\nEOF"
```
Why I'm returning my Sonos
I am going to return my Play:3. This saddens me a great deal and I would like to explain why I'm doing so.

First a little bit about my setup. My wife and I both have an (Android) smartphone with music. We also both have a Macbook with iTunes. Hers with a small library, mine 80GB. We have a Sonos Play:3 and a bridge.

First of all the good parts:

- I love the setup. Easy, simple, clean. Great, great work.
- I also love the sound the Play:3 produces. I couldn't believe it when I heard it in the shop. So small, such great sound.
- The ability to control the music with any device, whether it was streaming from my laptop or hers. This is how it should be.

But there are, sadly, still enough reasons to return it.

1. The controller is built for a touchscreen, which leads to a subpar experience in the desktop client. For example, pressing spacebar won't pause/start the music. Clicking on the volume bar (upper left) won't set the volume to where I clicked; it just takes a tiny step. Stuff easily fixed, I'm sure, but disappointing nonetheless.
2. This is really unacceptable: the podcasts I _view_ in iTunes are mixed into my Music Library. I can't believe it. Import podcasts for all I care, but don't mix them into my Music Library.
3. When I add music to my iTunes library (which I do regularly) I have to wait till the Sonos controller decides to sync. What I've done so far is to go to Preferences -> Music Library -> Advanced and set the automatic update to 1 minute in the future. Not even a 'sync now' button. Or better yet, just keep an eye on the file system (inotify anyone?).
4. With two libraries on two laptops it happens often that one of them is offline. Why does the controller let me select a track which is offline? It really isn't that hard to know if a share is up.
5. As mentioned earlier, my wife and I both added our libraries to Sonos. Our libraries overlap. What happens now when her laptop is offline is that I can't play songs which I _know_ I have on my disk. Sonos thinks the tracks are located solely on her laptop, which is not true. I can go to Music Library -> Folders and select them there, though.
6. Playback from an Android device is nigh impossible. doubleTwist sorta does it, Twonky is barely capable. Playto is bad, as is iMediashare. Besides, all these apps are large (~20MB), which makes them appropriate only for mid-high to high end smartphones.
7. Would it have killed anyone to put a line-in on the Play:3? I feel this is only done to upsell to the Play:5. (That almost worked, BTW.)
8. When a friend of mine came around last week he wanted to let me listen to a great new band he had discovered. Without a line-in, I mounted his phone on my computer. I then proceeded to import his songs into my iTunes library, went on to the Sonos preferences and did the '1 minute in the future' trick. Waited far more than 1 minute. In that time I had ample time to explain to him why I bought a €300 device which couldn't do this simple task.

This, in a nutshell, is why I'm (seriously considering) returning my unit.
A better Chef; Ansible
Lots of people still use Chef for their devops needs. I briefly used Chef about 6 months ago and got very frustrated. I find the process convoluted and confusing. For our Riak cluster we needed a deploy script to move an install script and resources to the remote host. The install script installed Ruby and some required packages; only _then_ was the actual Chef recipe run. Maybe I was doing it wrong, but then I'm of the opinion that the whole thing is too complicated.

Ansible

With [Ansible](http://ansible.cc) you can:

- Run one-off commands (want to know the load on your cluster? `ansible riak-nodes -a 'w'`)
- Provision servers with a simple YAML file
- Even replace Capistrano

Ansible is opinionated; questions I have asked on the mailing list were often quickly answered with a "you're doing it wrong". For example:

- Adding lines to a configuration file is considered a smell. This leads to a remote system of which the state is not clearly defined.
- Downloading packages from a remote source? You can't be sure this source is still up. Download the package yourself and copy it over.
- Downloading a tarball to compile it on the spot? Do it once and roll a package.

A "playbook" is the Ansible equivalent of a cookbook in Chef. It is a plain YAML file: God bless its simplicity. (Yes, I know the playbook format is still a work in progress.)
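To illustrate that simplicity, a minimal playbook might look like this (host group and task are illustrative, not from our actual setup):

```
---
- hosts: riak-nodes
  user: root
  tasks:
    - name: ensure ntp is installed
      action: apt pkg=ntp state=installed
```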
Using RSpec to monitor your apps
RSpec did it again. It blew my mind. Just when I was doubting how awesome it was.
At Skylines we run a lot of services. APIs, websites, apps. You name it. When something goes down we want to know about it. Not when a client calls, but the second after it breaks.
As a good developer you write a bunch of (unit/integration) tests. These tests are used when writing the code. But when the code is deployed all bets are off.
Our applications have a lot of moving parts and when one breaks, that impacts our user experience. We want to monitor the experience our users are getting. Is our API still up? Does it return valid JSON? Does the returned JSON contain the fields we expect it to hold?
These questions sounded eerily like your run-of-the-mill RSpec tests. I saw two problems with RSpec. Unlike regular tests, we wanted to run these tests continuously, against live applications. Also, RSpec just outputs to the console whether or not a test failed; we want to receive emails/texts when stuff breaks.
Turns out these aren't problems at all! Years ago David wrote about the metadata properties you could pass to a describe or it method. Turns out this is ideally suited for what we want. Flag an example (or example group) as being a live test:
describe "live tests", :run_live do it "should return a 200 status code" do response = Curl::Easy.http_get("http://google.com") example.metadata[:live][:time_to_first_byte] = response.start_transfer_time response.header_str.should =~ /200 OK/ end it "should return valid JSON" it "should have some fields in the JSON" end
And in spec_helper.rb:
config.after(:each, :run_live => true) do |example_group|
  # do whatever you like with the exception and possible metadata
  example_group.example.exception
  example_group.example.metadata[:live]
end
All we need now is a daemon on some random server which checks out every repo we have and periodically runs:
$ rspec -t run_live
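A cron entry could stand in for that daemon for now (schedule and path are illustrative):

*/5 * * * * cd /srv/checks/our-app && rspec -t run_live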
Happy.