I'm a software engineer, based out of San Francisco. I specialize in Ruby on Rails, iOS (Objective-C and Swift), and JavaScript development. Currently working at ReferralExchange.
Using extensions_loader to cleanly monkeypatch Ruby
One of the greatest features that Ruby has to offer is its excellent support for metaprogramming. This concept is certainly not unique to Ruby, but Ruby is unique in the sheer amount of support that it has for metaprogramming. And the Ruby community has definitely adopted it. You would be hard-pressed to find a major Ruby library that does not use metaprogramming to some extent. Whether it's a custom DSL, extensions to core Ruby classes, or dynamically created methods, metaprogramming is everywhere in Ruby code.
While it is powerful, metaprogramming can get messy. Keeping track of all your extensions, where they apply, and when they're loaded can introduce unnecessary complexity to an application. That's where extensions_loader comes into play.
extensions_loader is a lightweight Ruby gem that takes the complexity out of creating Ruby extensions and applying them to existing classes.
Let's try an example. Let's say that I want to add the #average method to the Array class. Without using extensions_loader, I might do the following:
class Array
  def average
    inject { |sum, num| sum + num } / count
  end
end
This would work as expected, but it's not exactly clear what's happening here. A non-Rubyist would have no idea that this code actually extends the core Array class rather than defining a new class.
Why don't we try that with extensions_loader?
module ArrayAverage
  def average
    inject { |sum, num| sum + num } / count
  end
end

ExtensionsLoader.load!(Array => ArrayAverage)
Now, instead of directly reopening the Array class, we are defining a module with the clear purpose of averaging an Array. Then, with extensions_loader, we're cleanly providing that functionality to the Array core class.
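Conceptually, a loader like this boils down to including each extension module into its target class. Here's a minimal sketch of the idea (this is not the gem's actual implementation, and SimpleExtensionsLoader is just an illustrative name):

# Minimal sketch of the idea behind ExtensionsLoader.load! -- not the gem's
# actual source. Each key is a class; each value is one module (or an array
# of modules) to mix into it.
module SimpleExtensionsLoader
  def self.load!(mapping)
    mapping.each do |klass, extensions|
      Array(extensions).each { |mod| klass.include(mod) }
    end
  end
end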
What if we wanted to add another method to Array, say #to_sentence?
module ArrayAverage
  def average
    inject { |sum, num| sum + num } / count
  end
end

module ArraySentence
  def to_sentence
    join(', ')
  end
end

ExtensionsLoader.load!(
  Array => [ArrayAverage, ArraySentence]
)
By providing an Array of Modules rather than a single Module, we can specify a number of extensions to load onto the Array class. It's very clear what's happening here, and it's super easy to remove specific pieces of functionality. Just remove the element!
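With those extensions loaded, the new methods read just like built-in Array methods:

[10, 20, 30, 40].average                 # => 25
['Ruby', 'Rails', 'Swift'].to_sentence   # => "Ruby, Rails, Swift"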
Hopefully extensions_loader can clean up your Ruby monkeypatching and make it easier to understand your Ruby metaprogramming code!
Generating nested OpenStructs from JSON
If you've parsed JSON with Ruby, you're undoubtedly familiar with the following code:
data = JSON.parse(File.read('path/to/my.json'))
This does exactly what you expect it to do. It reads the contents of path/to/my.json as text, and parses that JSON data into a Ruby Hash, which is fine for many use cases. However, you may be drawn to OpenStruct as an alternative (for any number of reasons).
Let's try it out. OpenStruct's initializer accepts a Hash, so we should simply be able to pass in data.
require 'json'
require 'ostruct'

data = JSON.parse(File.read('path/to/my.json'))
struct = OpenStruct.new(data)
This will work just fine... if you don't have nested objects. Consider the following JSON:
{ "name": "John Smith", "age": 50, "education": { "school": "Stanford University", "degree": "BS in Computer Science" } }
Running that through our previous code, we'd get an OpenStruct with the name, age, and education keys set. name would be a String and age would be a Fixnum, but education? Well, we'd expect it to be an OpenStruct. Unfortunately, it's a Hash. The OpenStruct initializer does not recursively convert nested Hashes to nested OpenStruct objects.
So, how do we get what we want? Enter object_class. We can get the desired effect in fewer lines of code with the following:
require 'json'
require 'ostruct'

data = JSON.parse(File.read('path/to/my.json'), object_class: OpenStruct)
The object_class option to JSON.parse specifies the dictionary-like class to use as the Ruby representation of a JSON object. In this example I used OpenStruct, but it can be a plain old Struct, or even an ActiveSupport::HashWithIndifferentAccess. You can also specify an array_class, which is an Enumerable-like class that JSON will use as the Ruby representation of a JSON array.
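Continuing with the JSON document above, the payoff is that nested objects come back as OpenStructs too, so you can chain method calls all the way down:

require 'json'
require 'ostruct'

struct = JSON.parse(File.read('path/to/my.json'), object_class: OpenStruct)
struct.name              # => "John Smith"
struct.education.school  # => "Stanford University"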
Tracking down bugs with git bisect
I came across this awesome, lesser known feature in git a while ago. git bisect allows you to efficiently track down the commit that introduced a bug using a binary search.
The Problem
Let's consider the following scenario. You have the following commit history:
A -> B -> C -> D -> E (HEAD)
Right now, in HEAD, you have a bug, and you have no idea why. But, you've been diligent about committing frequently, so you know that if you're able to track down the commit that introduced the bug, you would be able to easily determine the cause. So, what are your options?
Well, you could check out each individual commit and see if the bug is present. If the bug is present, you check out the prior commit, and so on, until the bug is no longer present. At that point, you have identified the first commit with the bug and you can begin working on resolving it.
This would work, but let's say you have a ton of commits and it's just not feasible to go through each one and test for the bug. Well, you could apply your algorithms knowledge and perform a binary search. Pick a commit from a while ago, when you're sure the bug wasn't present. Find the halfway point and see if the bug is present. If it is, perform the same action on the set of commits to the left. If not, perform the same action on the set of commits to the right. Repeat until you've narrowed it down to the single commit that introduced the bug. You've now accomplished this task in O(log n) worst-case complexity. This is a good solution.
So why not automate it?
The Solution
That's where git bisect helps us out. It simplifies the process of tracking down the first bad commit by walking us through a binary search.
Let's go back to our previous scenario:
A -> B -> C -> D -> E (HEAD)
The bug is present in HEAD (E). E is our known bad commit. We just checked out commit A, and the bug is not present. A is our known good commit. So, we know that the bug was introduced between commit A and E. Let's try out git bisect.
$ git bisect start
$ git bisect bad E
$ git bisect good A
Bisecting: 1 revision left to test after this (roughly 1 step)
[C] Commit C
We start by running git bisect start. Then, we tell git that commit E is bad, by running git bisect bad E. Alternatively, you could omit the revision and run git bisect bad to set HEAD as a bad commit.
Then, we run git bisect good A to tell git that A is a good commit. This kicks off the binary search. We move to the halfway point and git automatically checks out C. We can check for the presence of the bug at this point. Let's say that the bug is present. We run...
$ git bisect bad
Bisecting: 0 revisions left to test after this (roughly 0 steps)
[B] Commit B
We've told git that C is bad. Now we've narrowed the issue down to either C or B. Let's check B...
$ git bisect bad
B is the first bad commit
commit B
Author: Andrew Page <[email protected]>
Date:   Wed Dec 9 12:04:20 2015 -0800

    B
The bug is present in B, so B is the first bad commit. We've simplified our binary search process by using git bisect, and we've easily tracked down a bad commit. But, there's a way to make this better...
The Cooler Solution
We've come a long way since the first attempt at tracking this bug down, but we can always do better. git bisect offers an awesome way to automate this, provided that you're able to check for the presence of the bug by running a command that returns a non-zero exit code if the bug is present.
The vast majority of testing tools support this, so if you have a test that can check for the presence of the bug, you're golden. Back to our scenario: let's say we have an RSpec test suite that can accurately detect this bug.
$ git checkout A
$ rspec ; echo $?
0
$ git checkout E
$ rspec ; echo $?
1
We've verified that rspec will detect our bug and return a non-zero exit code when the bug is present. Let's automate this with git bisect run. This tool will run the script you pass it. If the script exits with 0, the commit is considered good. If not, it's considered bad.
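The contents of test-error.rb aren't shown in the output below, but a minimal version might look something like this (the spec path is hypothetical; the script just needs to exit non-zero when the bug is present):

#!/usr/bin/env ruby
# Run the spec that detects the bug and pass its result through to git bisect.
# Exit 0 = good commit, 1-127 (except 125) = bad commit, 125 = skip this commit.
puts 'running rspec'
success = system('rspec', 'spec/features/bug_spec.rb')  # hypothetical spec path
exit(success ? 0 : 1)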
$ git bisect start HEAD A
Bisecting: 1 revision left to test after this (roughly 1 step)
[C] Commit C
$ git bisect run test-error.rb
running rspec
Bisecting: 0 revisions left to test after this (roughly 0 steps)
[B] Commit B
running rspec
B is the first bad commit
commit B
Author: Andrew Page <[email protected]>
Date:   Wed Dec 9 12:03:59 2015 -0800

    Commit B
bisect run success
There we go: in two lines, we've completely automated the entire binary search! This will allow you to track down bad commits much more easily.
Sharing functionality using Rails Engines
I came across an interesting challenge a few days ago. We have a number of Rails applications in our system, architected differently, using different gems, doing different things. However, across the board, they all maintain a single piece of shared functionality – the ability to share their status.
What do we want?
The ability to share the fact that they're running
The ability to share the version of the code that they're running
No matter the purpose of each individual app, they each share this functionality. And this functionality is completely decoupled from the rest of the app. So why not DRY it up?
I tried a number of approaches to this problem. I tried monkeypatching ActionController, re-drawing Rails routes dynamically, and even copy-pasting the controller. None of these approaches worked well or felt like an acceptable solution to me. Eventually, I stumbled upon Rails Engines and realized they were the best solution to this problem.
What are Rails Engines?
Engines are mini-Rails apps (typically bundled as a Ruby gem) that provide functionality to the parent app. An engine's directory structure mimics the structure of a Rails app. Any controller you add to app/controllers, any model you add to app/models, any view you add to app/views will be available to the parent app. Chances are, you've used a Rails engine in your application before. devise is a popular one.
How do I create a Rails engine?
Creating a Rails engine is as simple as including the rails gem in your gemspec and creating a new class that inherits from Rails::Engine. However, Rails offers a great tool to automatically create an Engine gem and set up the directory layout.
rails plugin new v8 --mountable
This will create a number of directories and files, but these are the ones you need to know:
app - Just like the app directory in your Rails app. This is where you put your Engine controllers, models, views, helpers, mailers, etc.
config/routes.rb - Just like the routes.rb file in Rails. Here, you define your Engine specific routes. Later, in your Rails app's routes.rb, you can choose to mount this engine and your app will have all of the routes you define here.
lib - This is the lib folder like in any gem. Here, you can configure your Engine, and add any non-Rails code that you need.
Take a look at lib/v8/engine.rb. You'll see the line isolate_namespace V8. This line is extremely important because it namespaces everything in your gem under your gem name. Without this line, every model and controller would live in the top-level namespace, so you may experience collisions if your Rails app defines models or controllers with the same name.
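For reference, that engine class is only a few lines long. It should look roughly like this (as produced by the --mountable generator):

lib/v8/engine.rb

module V8
  class Engine < ::Rails::Engine
    isolate_namespace V8
  end
end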
How do I share functionality?
The purpose of this post is to teach you how to share functionality between Rails apps using Engines, so let's get to the point!
We want to define a simple controller, PingController, that accepts a GET request to /ping and returns "pong".
app/controllers/v8/ping_controller.rb
module V8
  class PingController < ActionController::Base
    def ping
      render text: 'pong'
    end
  end
end
Now that we have our controller and our action, we have to define a way to access it.
config/routes.rb
V8::Engine.routes.draw do
  get 'ping', to: 'ping#ping'
end
As you can see, we're defining a GET endpoint, /ping, pointing at our new controller action. Great! We should be good to go, right? Let's just throw our gem into our Rails application, run rake routes, and...
$ rake routes
You don't have any routes defined!

Please add some routes in config/routes.rb.

For more information about routes, see the Rails guide: http://guides.rubyonrails.org/routing.html.
...nothing? Oh, right! You'll need to mount your Rails engine in your routes.rb for any of your Engine routes to take effect. Keep in mind that you do not need to mount anything to gain access to your Engine's controllers, models, or views; mounting only affects routes.
Add this line to your routes.rb: mount V8::Engine => '/'
Now, run rake routes again...
$ rake routes
Prefix Verb URI Pattern     Controller#Action
    v8      /               V8::Engine

Routes for V8::Engine:
  ping GET  /ping(.:format) v8/ping#ping
Much better! Now we have a fully functional PingController and its route added to our Rails application, just by installing one gem and adding one line to routes.rb. Engines are an extremely powerful way to separate functionality and reuse code across applications.
Pre-commit static code analysis with Rubocop
On a medium-to-large development team, code consistency can become an issue. Every developer has their own style and text editor settings, so maintaining readability can be quite the effort. There were a number of options on the table:
An editor code linter, like SublimeLinter or Rubocop for RubyMine. These work well, but they do not enforce properly formatted code. This might be a good addition to another solution.
A pre-commit code linter. These are nice, but they can be a pain to configure on multiple dev environments (the .git directory is not checked in to source control, and is thus not shared by developers working on a shared remote repository).
A post-push code linter, like HoundCI. I've used HoundCI in the past, and I found it quite useful. However, if a developer hasn't been following best practices while writing a large feature, when it comes time to code review, it can simply be too much work to go back and refactor a large amount of code.
We eventually decided on option #2, a pre-commit code linter. Why? Well, it literally blocks developers from committing bad code (so it's effective), while remaining easy to bypass in an emergency (git commit --no-verify will not run pre-commit hooks). Also, it's very configurable, so you can customize it as you see fit.
I didn't find any existing scripts that were to my liking, so I decided to write my own. This is a very simple Ruby script, and I commented it so it should be understandable.
(Note: we're a Ruby shop, so this script uses a Ruby code linter, Rubocop. It should be easy to replace it with any other code linter.)
Link to the script (GitHub gist).
To use this script, you'll need to add it to your pre-commit hook. Drop it into .git/hooks/pre-commit in your git repository. pre-commit must be executable (chmod +x pre-commit). Just as a bit of background, if the execution of pre-commit results in a non-zero exit code, the commit will be blocked. This script returns a non-zero exit code if any of the Rubocop rules fail.
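Since the gist itself isn't reproduced here, here's a rough sketch of what a Rubocop pre-commit hook could look like. This is not the linked script; the flags and file filtering are just one reasonable approach:

#!/usr/bin/env ruby
# Sketch of a pre-commit hook: lint only the Ruby files staged for this commit.
# Not the script from the linked gist.

# Files that are added, copied, or modified in the index.
staged = `git diff --cached --name-only --diff-filter=ACM`.split("\n")
ruby_files = staged.select { |f| f.end_with?('.rb') }

exit 0 if ruby_files.empty?

puts "Running Rubocop on #{ruby_files.size} staged file(s)..."
# `system` returns false when Rubocop exits non-zero (i.e. offenses were found).
success = system('rubocop', '--force-exclusion', *ruby_files)

# A non-zero exit code from this hook blocks the commit.
exit(success ? 0 : 1)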
Remember that the .git directory is not checked into source control or pushed up to any remote repositories. It is completely local. If you want to share this pre-commit script between development machines or repositories, I would recommend storing it in a separate repository and symlinking it to the proper locations across your repositories.
Hope this helps you stop your codebase from being overrun with bad code!
Beta Entitlements with iTunes Connect
I encountered an issue deploying my app to iTunes Connect earlier today. Once I had worked out all the kinks with the Swift and WatchKit libraries, I was able to deploy a valid binary to iTunes Connect, but Apple was still complaining about an invalid beta entitlement within my app.
To use TestFlight Beta Testing, build 26 must contain the correct beta entitlement. For more information, see the FAQ.
After some research, I discovered that Apple recently snuck in a new entitlement to provisioning profiles, beta-reports-active. Luckily, all you need to do to resolve this issue is regenerate your App Store provisioning profiles.
In the Apple Developer Center, navigate to the "Certificates, Identifiers, & Profiles" section and click the "Provisioning Profiles" tab. From there, edit each one of your Provisioning Profiles and click the "Generate" button at the bottom of the page. This will regenerate the profile and will include the new entitlement.
To verify that your profile has this new entitlement, change the .mobileprovision extension to .xml and open it up in Xcode. Search for the following:
<key>beta-reports-active</key>
<true/>
Keep in mind that only App Store distribution profiles include this entitlement. Ad Hoc distribution profiles do not. If the entitlement is present, your profile is good to go. Configure your app to use this new provisioning profile and you should no longer encounter any beta entitlement issues with iTunes Connect.
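If you'd rather not rename the file, you can also inspect the profile from the command line. A quick sketch (the profile filename is hypothetical):

# Decode the signed provisioning profile and check for the entitlement.
profile = 'MyApp_AppStore.mobileprovision'  # hypothetical filename
decoded = `security cms -D -i #{profile}`
puts decoded.include?('beta-reports-active') ? 'beta entitlement present' : 'beta entitlement missing'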
Packaging WatchKit apps from the command line
Similarly to Swift apps, WatchKit apps are not properly bundled for iTunes Connect distribution by command line tools. Unfortunately, only Xcode itself (Xcode.app) creates the necessary directories containing the runtime dependencies that iTunes Connect requires for WatchKit apps.
This issue manifests as an Invalid Binary state for your app in iTunes Connect, accompanied by the following email from Apple.
Dear developer,
We have discovered one or more issues with your recent delivery for “MyWatchKitApp”. To process your delivery, the following issues must be corrected:
Invalid WatchKit Support - The bundle contains an invalid implementation of WatchKit. The app may have been built or signed with non-compliant or pre-release tools. Visit developer.apple.com for more information.
Once these issues have been corrected, you can then redeliver the corrected binary.
Regards,
The App Store team
Fortunately, there is a very simple workaround. Like Swift, WatchKit apps require a top-level directory in the .ipa. In this case, the directory is named WatchKitSupport.
The contents of this directory are simple: a single binary from within Xcode.app named WK. The exact path to this binary within Xcode.app is as follows:
Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS.sdk/Library/Application Support/WatchKit/WK
Start by unzipping your .ipa file and creating the WatchKitSupport directory.
$ unzip MyApp.ipa
$ rm MyApp.ipa
$ mkdir WatchKitSupport
Then, copy the WK binary into the WatchKitSupport directory.
$ cp "/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS.sdk/Library/Application Support/WatchKit/WK" WatchKitSupport
All that's left to do is to re-package your .ipa.
$ zip -r MyApp.ipa .
Your command-line built app is now ready for deployment to iTunes Connect!
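If you'd like to script these steps yourself, here's a rough Ruby sketch of the same process (the .ipa filename is a placeholder, and the WK path assumes a default Xcode install):

#!/usr/bin/env ruby
# Sketch: add WatchKitSupport/WK to an existing .ipa and re-zip it.
require 'fileutils'
require 'tmpdir'

ipa = File.expand_path('MyApp.ipa')  # placeholder filename
wk  = '/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/' \
      'Developer/SDKs/iPhoneOS.sdk/Library/Application Support/WatchKit/WK'

Dir.mktmpdir do |dir|
  # Unpack the .ipa, add the WatchKitSupport directory, then repackage it.
  system('unzip', '-q', ipa, '-d', dir) or abort('unzip failed')
  FileUtils.mkdir_p(File.join(dir, 'WatchKitSupport'))
  FileUtils.cp(wk, File.join(dir, 'WatchKitSupport'))
  File.delete(ipa)
  system('zip', '-qr', ipa, '.', chdir: dir) or abort('zip failed')
end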
We’ve completely automated this process at Ship.io. We offer continuous integration and delivery for iOS and Android apps that’s super simple to set up. We’re free to try out, so feel free to take a look!
Packaging Swift apps from the command line
If you’re like me, you want to build your iOS apps from the command line. Luckily, there are many tools to help you do that. You can use Apple’s xcodebuild, or you can use Facebook’s drop-in replacement for xcodebuild, xctool.
However, while these tools seem to offer the same functionality as Xcode itself, there are some differences, specifically with Swift apps. If you've attempted to compile and archive a Swift app via a command line tool and distribute it to iTunes Connect, you may have gotten an email from Apple like this:
Dear developer,
We have discovered one or more issues with your recent delivery for "MyiOSApp". To process your delivery, the following issues must be corrected:
Invalid Swift Support - The bundle contains an invalid implementation of Swift. The app may have been built or signed with non-compliant or pre-release tools. Visit developer.apple.com for more information.
Once these issues have been corrected, you can then redeliver the corrected binary.
Regards,
The App Store team
The key piece of information in this email is this:
Invalid Swift Support - The bundle contains an invalid implementation of Swift. The app may have been built or signed with non-compliant or pre-release tools. Visit developer.apple.com for more information.
After a bit of digging, I discovered that this is due to a change that Apple made to the internal structure of .ipa files for Swift apps. .ipa files (which are essentially just Zip archives) for Swift apps now contain a top-level directory named SwiftSupport. This directory contains all the runtime libraries that are required to run your Swift app.
Unfortunately, this folder is only automatically generated when the .ipa is packaged by Xcode.app itself. If you’re using a tool like xcodebuild or xcrun to package your .ipa, you’ll need to manually create this folder and copy over the runtime libraries.
Once you unzip your .ipa, create a folder alongside Payload named SwiftSupport. You can get a list of libraries to copy over by searching for all .dylib files within the Payload/*.app/Frameworks directory. It's not quite as simple as copying these files to SwiftSupport, though; you'll need to find the matching libraries within the Xcode.app package and copy those over instead. The libraries are located in the following directory within Xcode.app:
Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/swift/iphoneos/
For every .dylib in your Frameworks directory, copy over the corresponding .dylib from .../lib/swift/iphoneos to SwiftSupport. Then, zip the two folders back into an .ipa, and your app should be good to distribute to iTunes Connect.
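Here's a rough Ruby sketch of that copy step, assuming the .ipa has already been unzipped into the current directory and the toolchain lives at the default Xcode location (the output filename is a placeholder):

#!/usr/bin/env ruby
# Sketch: build the SwiftSupport directory from the dylibs the app bundles.
require 'fileutils'

toolchain = '/Applications/Xcode.app/Contents/Developer/Toolchains/' \
            'XcodeDefault.xctoolchain/usr/lib/swift/iphoneos'

FileUtils.mkdir_p('SwiftSupport')

# For each Swift runtime dylib in the app's Frameworks directory, copy the
# matching dylib from the Xcode toolchain (not the one inside the app).
Dir.glob('Payload/*.app/Frameworks/*.dylib').each do |dylib|
  source = File.join(toolchain, File.basename(dylib))
  FileUtils.cp(source, 'SwiftSupport') if File.exist?(source)
end

# Re-zip the two top-level folders into the .ipa.
system('zip', '-qr', 'MyApp.ipa', 'Payload', 'SwiftSupport') or abort('zip failed')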
We've completely automated this process at Ship.io. We offer continuous integration and delivery for iOS and Android apps that's super simple to set up. We're free to try out, so feel free to take a look! If you're just interested in automating this process locally, BQ has written a nice script to handle this. Take a look at ipa-packager on GitHub.