Getting started with standalone Kayenta + Referee for Automated Canary Analysis
Automated canary analysis is a deploy strategy wherein you direct a subset of your (production?) traffic to a canary version of your application. The idea is that your baseline application and your canary application both ship metrics off to your metrics provider, and Kayenta + Referee help you determine whether your new code degrades any of those metrics! This replaces developers manually watching dashboards & logs during deploys.
Kayenta is an open source Java microservice for performing automated canary analysis - it's just one microservice of many that make up Netflix's Spinnaker platform. Referee is an open-source React frontend that sits in front of Kayenta and allows for rapid iteration of your canary configurations. This blog post will outline the steps I took to get this set up at $WORK recently!
You can read through the details below in this blog post, or go straight to the kayenta-referee-quickstart repo to get started right away.
Configuring Kayenta
Starting up Kayenta consists mainly of configuring its three data stores in the kayenta.yml config: METRICS_STORE, CONFIGURATION_STORE, and OBJECT_STORE.
For the METRICS_STORE, choose the one that your applications are shipping their metrics to. Kayenta supports Atlas, Google, Datadog, Graphite, New Relic, Prometheus, SignalFx, and Wavefront. You'll need to obtain the appropriate API keys to allow Kayenta to query the metrics API; exactly which keys will depend on the provider you're using. For Datadog, the config excerpt looks like:
datadog:
  enabled: false
  metadataCachingIntervalMS: 900000
  accounts:
    - name: my-datadog-account
      apiKey: xxxx
      applicationKey: xxxx
      supportedTypes:
        - METRICS_STORE
      endpoint.baseUrl: https://app.datadoghq.com
For CONFIGURATION_STORE and OBJECT_STORE, you can start out with the built-in in-memory-store, and this is already configured in the quickstart repo. Later, you can switch that to S3/GCP/Azure if you need to persist your configs/objects.
memory:
  enabled: true
  accounts:
    - name: in-memory-store
      supportedTypes:
        - OBJECT_STORE
        - CONFIGURATION_STORE
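If you do later switch to S3 for persistence, the account config looks something like the sketch below. I'm writing this from memory, so treat the exact keys as assumptions and the account/bucket names as placeholders to be checked against the Kayenta docs:

```yaml
aws:
  enabled: true
  accounts:
    - name: my-s3-account          # placeholder name
      bucket: my-kayenta-bucket    # placeholder bucket
      region: us-east-1
      rootFolder: kayenta
      supportedTypes:
        - OBJECT_STORE
        - CONFIGURATION_STORE
```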
running kayenta + referee
Over in the kayenta-referee-quickstart GitHub repo, there are Dockerfiles and a docker-compose.yml that you can use to run both of the microservices locally. Fill out your metrics store API keys in the kayenta.yml config at the root of the repo, and then you can start it up:
docker-compose up -d
The Kayenta API documentation will be available at http://localhost:3001/swagger-ui.html, and Referee will be running at http://localhost:8090/dashboard. If the containers don't come up, check their logs for any telling messages:
docker-compose logs kayenta
docker-compose logs referee
If you decide you want to deploy Kayenta & Referee into your internal infrastructure, you'll need to provide a Redis instance for Kayenta to talk to. This will depend on your internal infrastructure, but in a pinch, you could use a Docker container running Redis even without a persistent data volume backing it, if you don't need to store the Canary configurations inside Kayenta.
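Pointing Kayenta at that Redis is then a small addition to kayenta.yml - the hostname below is a placeholder for wherever your Redis ends up running:

```yaml
redis:
  connection: redis://my-redis-host:6379
```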
integrating kayenta + referee into your production deploys
If you're embarking on the canary production deploy integration road, one of the most useful tips I came across was to start out with an asynchronous canary pipeline separate from your existing production pipeline. Have the canary pipeline run in parallel to your existing production deploy, and don't let the canary pipeline block or even fail deploys. Instead, the idea is to set up the canary pipeline with a few metrics and have it gather data on every production deploy until you can see whether the metrics you chose are appropriate and would catch real production issues.
canary configurations
Configuring a canary consists of constructing the POST body for the standalone canary endpoint; there's probably enough material there for its own blog post, so we'll defer that to a potential part two!
tramp & shell auto-complete slow-ness
I'm at a different $WORK from before! Now I get to (have to?) SSH directly into remote boxes again, which I haven't had to do in a while, as previously my workflow was based around Kubernetes and kubectl, (which meant I didn't have autocomplete at all...).
Anyway, I was noticing that (I thought) company-mode was triggering and taking upwards of 5 seconds to finish composing an autocomplete list for me to pick from. My company activation delay is also set very low, since I don't even have a key binding to trigger company's autocomplete manually - it kicks in on its own.
Turns out it was projectile that was creating most of the lag, as it purportedly is trying to figure out what directory to put in the modeline or something. Long story short, turning off projectile in shell mode made it much more bearable:
(add-hook 'shell-mode-hook
          (lambda ()
            (interactive)
            (projectile-mode -1)))
That being said, projectile is indispensable for the rest of my workflows! But just not with tramp in *shell-mode* :)
UPDATE: Huh, turns out (projectile-mode -1) turns it off globally, which actually isn't what I wanted - I thought it would've been a buffer local setting. Welp, some additional digging turned up a hook that short-circuits the expensive find-file-hook from projectile that was causing the issue:
(add-hook 'find-file-hook
          (lambda ()
            (when (file-remote-p default-directory)
              (setq-local projectile-mode-line "Projectile"))))
ht: https://www.reddit.com/r/emacs/comments/320cvb/projectile_slows_tramp_mode_to_a_crawl_is_there_a/ , https://github.com/bbatsov/projectile/issues/1232
Uninstalling Mersive's Solstice Audio Driver from MacOS Mojave
This was a minor annoyance, but since my Google searches were particularly unfruitful, I figured I'd make a note of it. Our WeWork uses the Mersive Solstice client for broadcasting our devices to the meeting room televisions. It has the option to install its own audio driver to try to mirror the audio on its own.
The client's built-in audio driver is called "Desktop Audio Streaming Device" and it shows up as an output and an input device. It also completely failed to work at all. The only way I managed to get sound on to the TV was by using AirPlay after connecting with the Solstice client - basically using it as little as possible.
Uninstalling audio drivers is apparently somewhat weird in macOS - the device was listed in the built-in Audio MIDI Setup app, but couldn't be uninstalled from there. Some wild googling directed me to /Library/Audio/Plug-Ins/HAL/, or /Library/Preferences, or their ~/Library counterparts. In the end, it was a kext in /System/Library/Extensions: SolsticeAudioDevice.kext. Trashing that and restarting the computer fixed the issue.
Connecting to Mongo with a self signed CA on a JVM in Kubernetes
At $WORK, we're creating an internal platform on top of Kubernetes for developers to deploy their apps. Our Ops people have graciously provided us with Mongo clusters that all use certificates signed by a self-signed certificate authority. So, all our clients need to know about the self-signed CA in order to connect to Mongo. For Node or Python, you can pass the self-signed CA file directly in application code.
But, things are a little more complicated for Java or Scala apps, because configuration of certificate authorities is done at the JVM level, not at the code level. And for an extra level of fun, we want to do it in Kubernetes, transparently to our developers, so they don't have to worry about it on their own.
err, wha? telling the JVM about our CA
First off, we had to figure out how to tell the JVM to use our CA. And luckily since all the JVM languages use the same JVM, it's the same steps for Scala, or Clojure, or whatever other JVM language you prefer. The native MongoDB Java driver docs tell us exactly what we need to do: use keytool to import the cert into a keystore that the JVM wants, and then use system properties to tell the JVM to use that keystore. The keytool command in the docs is:
$ keytool -importcert -trustcacerts -file <path to certificate authority file> \
    -keystore <path to trust store> -storepass <password>
The path to the existing keystore that the JVM uses by default is $JAVA_HOME/jre/lib/security/cacerts, and its default password is changeit. So if you wanted to add your self signed CA to the existing keystore, it'd be something like
$ keytool -importcert -trustcacerts -file ssca.cer \
    -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass changeit
(Even this very first step had complications. Our self-signed CA was a Version 1 cert with v3 extensions, and while no other language cared, keytool refused to create a keystore with it. We ended up having to create a new self-signed CA with the appropriate version. Some lucky googling led us to that conclusion, and of particular use was using openssl to examine the CA and check its version and extensions:)
$ openssl x509 -in ssca.cer -text -noout
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: ...
        ...
    X509v3 extensions:
        X509v3 Subject Key Identifier: ...
        X509v3 Key Usage: ...
        X509v3 Basic Constraints: ...
Another useful command was examining the keystore before and after we imported our self signed CA:
$ keytool -list -keystore /path/to/keystore/file
since you can look for your self-signed CA in there to see if you ran the command correctly.
Anyway, once you've created a keystore for the JVM, the next step is to set the appropriate system properties, again as outlined in the docs:
$ java \
    -Djavax.net.ssl.trustStore=/path/to/cacerts \
    -Djavax.net.ssl.trustStorePassword=changeit \
    -jar whatever.jar
Since the default password is changeit, you may want to change it... but if you keep the default, you don't need to specify the trustStorePassword system property at all.
handling this in kubernetes
The above steps aren't too complicated on their own. We just need to make sure we add our CA to the existing ones, and point the JVM towards our new and improved file. But, since we'll eventually need to rotate the self-signed CA, we can't just run keytool once and copy it everywhere. So, an initContainer it is! keytool is a Java utility, and it's handily available on the openjdk:8u121-alpine image, which means we can make an initContainer that runs keytool for us dynamically, as part of our Deployment.
Since seeing the entire manifest at once doesn't necessarily make it easy to see what's going on, I'm going to show the key bits piece by piece. All of the following chunks of yaml belong in the spec.template.spec object of a Deployment or StatefulSet.
spec:
  template:
    spec:
      volumes:
        - name: truststore
          emptyDir: {}
        - name: self-signed-ca
          secret:
            secretName: self-signed-ca
So, first things first, volumes: an empty volume called truststore into which we'll put our new and improved keystore-with-our-ssCA. Also, we'll need a volume for the self-signed CA itself. Our Ops provided it for us in a secret with a key ca.crt, but you can get it into your containers any way you want.
$ kubectl get secret self-signed-ca -o yaml --export
apiVersion: v1
data:
  ca.crt: ...
kind: Secret
metadata:
  name: self-signed-ca
type: Opaque
With the volumes in place, we need to set up init containers to do our keytool work. I assume (not actually sure) that we need to add our self-signed CA to the existing CAs, so we use one initContainer to copy the existing default cacerts file into our truststore volume, and another initContainer to run the keytool command. It's totally fine to combine these into one container, but I didn't feel like making a custom docker image with a shell script or having a super long command line. So:
spec:
  template:
    spec:
      initContainers:
        - name: copy
          image: openjdk:8u121-alpine
          command: [ cp, /usr/lib/jvm/java-1.8-openjdk/jre/lib/security/cacerts, /ssca/truststore/cacerts ]
          volumeMounts:
            - name: truststore
              mountPath: /ssca/truststore
        - name: import
          image: openjdk:8u121-alpine
          command: [ keytool, -importcert, -v, -noprompt, -trustcacerts, -file, /ssca/ca/ca.crt, -keystore, /ssca/truststore/cacerts, -storepass, changeit ]
          volumeMounts:
            - name: truststore
              mountPath: /ssca/truststore
            - name: self-signed-ca
              mountPath: /ssca/ca
Mount the truststore volume in the copy initContainer, grab the default cacerts file, and put it in our truststore volume. Note that while we'd like to use $JAVA_HOME in the copy initContainer, I couldn't figure out how to use environment variables in the command. Also, since we're using a tagged docker image, there is a pretty good guarantee that the filepath shouldn't change underneath us, even though it's hardcoded.
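(A possible workaround I've seen elsewhere, though I haven't verified it in this setup: wrap the copy in a shell, so that $JAVA_HOME is expanded by the shell from the image's environment rather than by Kubernetes:)

```yaml
# hypothetical alternative for the copy initContainer's command:
# the shell inherits JAVA_HOME from the image's ENV and expands it for us
command: [ sh, -c, 'cp $JAVA_HOME/jre/lib/security/cacerts /ssca/truststore/cacerts' ]
```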
Next, the import step! We need to mount the self-signed CA into this container as well. Run the keytool command as described above, referencing our copied cacerts file in our truststore volume and passing in our ssCA.
Two things to note here: the -noprompt argument to keytool is mandatory, or else keytool will prompt for interaction, but of course the initContainer isn't running in a shell for someone to hit yes in. Also, the mountPaths for these volumes should be separate folders! I know Kubernetes is happy to overwrite existing directories when a volume mountPath clashes with a directory on the image, and since we have different data in our volumes, they can't be in the same directory. (...probably, I didn't actually check)
The final step is telling the JVM where our new and improved trust store is. My first idea was just to add args to the manifest and set the system property in there, but if the Dockerfile ENTRYPOINT is something like
java -jar whatever.jar
then we'd get a command like
java -jar whatever.jar -Djavax.net.ssl.trustStore=...
which would pass the option to the jar instead of setting a system property. Plus, that wouldn't work at all if the ENTRYPOINT was a shell script or something that wasn't expecting arguments.
After some searching, StackOverflow taught us about the JAVA_OPTS and JAVA_TOOL_OPTIONS environment variables (JAVA_OPTS is a convention honored by many app start scripts, while JAVA_TOOL_OPTIONS is picked up by the JVM itself). We can append our trustStore to the existing value of these env vars, and we'd be good to go!
spec:
  template:
    spec:
      containers:
        - image: your-app-image
          env:
            # make sure not to overwrite this when composing the yaml
            - name: JAVA_OPTS
              value: -Djavax.net.ssl.trustStore=/ssca/truststore/cacerts
          volumeMounts:
            - name: truststore
              mountPath: /ssca/truststore
In our app that we use to construct the manifests, we check if the developer is already trying to set JAVA_OPTS to something, and make sure that we append to the existing value instead of overwriting it.
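The append-don't-clobber logic is just shell parameter expansion; here's a minimal sketch (our manifest-building app does the equivalent check in its own language, and the -Xmx flag stands in for whatever value a developer might have already set):

```shell
# pretend the developer already set some JAVA_OPTS of their own
JAVA_OPTS="-Xmx512m"

# append the trustStore flag, inserting a separating space only if
# JAVA_OPTS was already non-empty
JAVA_OPTS="${JAVA_OPTS:+$JAVA_OPTS }-Djavax.net.ssl.trustStore=/ssca/truststore/cacerts"

echo "$JAVA_OPTS"
# -Xmx512m -Djavax.net.ssl.trustStore=/ssca/truststore/cacerts
```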
a conclusion of sorts
Uh, so that got kind of long, but the overall idea is more or less straightforward. Add our self-signed CA to the existing cacerts file, and tell the JVM to use it as the truststore. (Note that it's the trustStore option you want, not the keyStore!). The entire Deployment manifest all together is also available, if that sounds useful...
Executing promises serially with [].reduce
Recently at $WORK, we were writing a data migration script in node that needed to make a couple hundred requests. The first attempt was just to wrap everything up in a Promise.all:
const rp = require('request-promise-native');

const urls = [
  'url1',
  'url2',
  // ...,
  'url300'
];

Promise.all(urls.map((url) => rp.get(url).then(sendRelatedRequests)));
However, the internal server we were talking to wasn't able to handle all of those requests concurrently. Since Promises start executing as soon as they're created, all of the requests (plus the extra requests from the subsequent logic) were fired off at roughly the same time, and we ended up taking down the server.
So, for our second pass, we decided we wanted to only send one request at a time, lining up all of our requests serially, since we know that when the server finishes responding to our nth request, it should be ready to handle the (n+1)th request. One way to accomplish this is with a big long chain of .thens, as by the time we're in a .then, we're guaranteed that its preceding promise is completed. And one way we can construct that chain is with a reduce:
urls.reduce(
  (acc, url) => acc.then(() => rp.get(url).then(sendRelatedRequests)),
  Promise.resolve()
);
[].reduce takes two arguments: the reducing function, and the initial value. We need to start with a Promise, because our reduce function assumes that the accumulator acc has a .then on it.
For the reducing function, we have an accumulator and a url. Each time through, acc is the existing serial chain of promises, and we add another .then on to it. The important part is that the function in the .then handler is not immediately creating the promise, because that would mean the request is immediately sent. Instead, passing the function expression means the Promise isn't created until the .then handler is invoked, and since the .then handler isn't invoked until its preceding Promise is complete, we get our serial behavior.
Also, since the requests don't care about each other, we don't need to use the arguments that are coming from the previous promise, so the function expression doesn't use its arguments.
The one last catch (hoho) is that if any of the rp.get(url) promises fail, then all of the subsequent .thens are skipped, as the promise flow dictates that it should jump to a .catch handler, if one exists. So, to guarantee that we do make all the requests that we wanted to, we need to add a catch handler to each of the promises in the chain.
urls.reduce(
  (acc, url) => acc.then(() => rp.get(url)
    .then(sendRelatedRequests)
    .catch(console.error)),
  Promise.resolve()
);
Proxying Safari network traffic on a real iOS device with Appium
At $WORK, we need to analyze our network traffic during our mobile web tests. In particular, we want to double check our analytics calls, since they're of utmost import. For a desktop browser, this is baked right into Webdriver with the proxy desiredCapability, but for a mobile test through Appium, things are a bit more complicated, and doubly so for a real iOS device. Note that unfortunately, using a proxy is not possible while using Appium with a native app - this method is only for inspecting Safari traffic[1].
So, you'll need a couple things to get this set up:
a real iOS device with which to test
the proper iOS certificate and provisioning profile
Appium
A proxy capable of on-the-fly SSL MITM. I use Browsermob Proxy with its Perl bindings, but you can of course choose your own. mitmproxy is a popular Python proxy that would also work.
Preparing the real iOS device
Get permission to install your own apps on your iOS device
There are pretty good instructions on how to do this in the Appium hybrid app testing docs, copied here for clarity, since it's difficult to link to this specific section of the docs:
Step 1: Create a new App Id and select the WildCard App ID option and set it to “*”
Step 2: Create a new Development Profile and for App Id select the one created in step 1.
Step 3: Select your certificate(s) and device(s) and click next.
Step 4: Set the profile name and generate the profile.
Note that you will need to pay $99 to sign up to do this in the Apple Developer center. At this point, download the provisioning profile you created and tell XCode about it by opening the profile in XCode.
The idea here is to indicate to Apple that we are an iOS developer, so that they let us install native applications on our real device, ostensibly for testing a native app we're developing. The provisioning profile is actually used to install SafariLauncher on the real device, which is a tiny app that launches Safari for us[2].
At some point before trying to run your test, you'll need to connect the iOS device to the computer via USB cable; now's as good a time as any.
Instruct your device to trust your MITM
As expected, if you try to MITM your own SSL traffic, your iOS device will realize that the SSL traffic has been intercepted and refuse to load it.
You need to install and trust the cert offered by your proxy. For Browsermob, this means you should use your iOS device to go to the browsermob cert on github (the link is valid at time of writing, but in case the link 404s, you'll want to search that github repository for ca-certificate-rsa.cer), and then click the Raw button there. This will open up the .cer file in Safari, and Safari figures out that it should try to install it as a profile. You'll need to click Verify or Trust a few times during this process, and afterwards you can check what certs your device trusts in Settings -> General -> Profiles.
Turn on Web Inspector for your iOS's Safari
This one is pretty straightforward - go into Settings -> Safari -> Advanced -> Web Inspector and make sure it's turned on[3].
Starting servers
Proxy Server
As mentioned, I use Browsermob Proxy to capture the network traffic into a HAR. You are free to use any proxy setup you want, noting that being able to programmatically create and delete proxies is very useful. So, I'd need to start the Browsermob server - after downloading the Browsermob binaries, that should simply be
# defaults to running a server on 8080
$ bin/browsermob-proxy
At this point, you can set up the proxy settings on the real iOS device. Open up Settings -> Wi-Fi, and then tap the connection you're using for the internet. This should open up the advanced settings for that connection, and at the bottom of that view you should choose to set a Manual proxy.
The HOST will be the address of your proxy server, in my case wherever I'm running Browsermob Proxy. You can use ifconfig/ipconfig to get the IP of the machine on the network. Note that you shouldn't use localhost/127.0.0.1 for this, since the iOS device will be searching on the network for the address, so it needs to be an IP that the iOS device can see.
The PORT needs to be selected at this point and manually put into the iOS device. Later, when running a test, we'll start our proxy on this port before starting the Appium test. So, put the port in the iOS device, and then press back in the top left to save the settings[4].
Putting in the proxy settings should end up breaking the internet on your iOS device, since we haven't created the proxy yet.
ios_webkit_debug_proxy
You'll also need to download and run another process: the ios_webkit_debug_proxy. This is not the same as the above proxy you'll use for analyzing traffic - it's for communicating with the Safari webviews through Appium. When starting IWDP, you'll need to specify the UDID of your device.
The ios webkit debug proxy must also be configured to run on port 27753: Appium expects that specific port, and IWDP doesn't default to it, so it must be set during IWDP startup.
For me, this ends up looking like
ios_webkit_debug_proxy -c <UDID>:27753
Running a Test
At this point, this fragile set up should be ready for you to run some code. You have a proxy server running for capturing HARs, IWDP running comms to/from the webviews, and Appium running so you can use the handy JSONWireProtocol to drive Safari around your website. Meanwhile, your iOS device is connected via USB, it trusts your MITM proxy, its network connection is configured to use the manual proxy of your choosing, and you bribed Apple to let you install your own apps on your own hardware. Excellent!
Your code only has to do a few things to get you going:
For BMP, start a proxy on the proper port. If you're not using BMP, this may look different[5]
(optional): Check your Appium server and kill any existing Appium sessions[6]
Pass the proper desiredCaps to tell Appium you want to run Safari
After the test is done, take down the proxy you created (so we can make it new again for the next test)
In perl, this can be done like
use strict;
use warnings;
use Browsermob::Proxy;
use Appium;

# must match the manual proxy settings on the iOS device
my $hardcoded_proxy_port = 9090;

# assumes BMP server is running on localhost:8080
my $proxy = Browsermob::Proxy->new(
    port => $hardcoded_proxy_port
);

# assumes Appium server is running on localhost:4723
my $ios_appium = Appium->new(caps => {
    browserName     => 'Safari',
    platformName    => 'iOS',
    deviceName      => 'iPhone 6',  # must match your device's name
    platformVersion => '9.2'        # must match your device's iOS version
});

$proxy->new_har;
$ios_appium->get('https://www.google.com');
my $google_har = $proxy->har;

use DDP;
p $google_har;

$ios_appium->quit;
After all the set up, the code is pretty straightforward: Start the proxy on the proper port, boot up Appium, start the HAR, use Appium to make Safari do some network traffic, and then analyze the HAR as you please.
Caveats
First off, this has no chance of scaling well. Each iOS device needs a dedicated OS X box to instrument it, and nothing can be run in parallel. If you run two jobs at once (and you're doing the optional auto-delete of existing sessions), you get nonsense results. We're working around this by having a dedicated queue in Resque with only a single worker, and all iOS jobs for a particular iOS device/OS X box pair go to the same queue, regardless of the job's source.
Also, as mentioned, the server set up is a little precarious. Three server apps need to be available, with appropriately matching versions - across iOS upgrades, I have a big enough headache just getting Appium back into shape. Some of this could be alleviated with a Docker set up or something like that, where the configuration of the different server apps can be locked down, but I haven't gotten that far.
Additionally, it's a bit of a hassle to have to interact with the physical device to set the port and accept the fake SSL cert. For server configs, you can do something like schedule Puppet to regularly reset the configs to the proper state to alleviate silly humans trying to change things. But, for an actual physical iOS device, I don't know how to restore a configuration, and we can't prevent people from fooling with it accidentally, either.
Although I didn't use the PAC option during my initial set up, it's definitely an option to write a PAC and use the Auto proxy option on the iOS device. However, I don't think it's possible to have the .pac file pick a different proxy port depending on which device is making the request, so you'd have to have a separate .pac file for each iOS device, and configure each iOS device separately to point at the proper .pac file ... it's possible this could be useful, but I haven't tried it yet.
Another point is that the proxy server you use needs to be able to do on the fly SSL MITM. There are definitely proxies that do this - LittleProxy (which backs Browsermob Proxy) and MITM Proxy are two open source options that both have successful implementations (CharlesProxy also does, but it's closed source and a paid product), so it's not impossible to find. But, you may find that the proxy setup you're using for your webdriver proxy tests doesn't yet have SSL MITM.
Finally, if you're not already developing an iOS mobile app (giving you another reason to pay $99 a year for the Apple Developer License thing), it's going to cost you $99 a year that you wouldn't have spent otherwise.
Conclusion
In summary, this is quite a house of cards. But, after having gotten everything set up and just trying not to touch anything, it's already started being pretty useful - I've used it a couple times when I needed to run a quick test on the physical device but wasn't able to physically get to it. Composing tests for Appium's Safari is often very similar to writing Webdriver tests, and we've got plenty of experience doing that.
As a bonus, you can set all of this up on a remote machine combined with a cool feature of Quicktime Player. Set up an unused OS X box on your network to accept Screen Sharing requests, connect an iOS device to it, start all the requisite servers on the box, and then open up Quicktime and do File -> New Mobile Recording. Choose your iOS device from the dropdown list near the record button, and then you can see the screen of the device from a remote machine. Leave that quicktime window open, and then you don't need to be physically near your iOS device to observe the tests - it will be visible when you use Screen Sharing to connect to the OS X box!
[1] For whatever reason (probably a very good reason), native apps do not respect the proxy settings on the device, and I'm not aware of any way to route a native app's HTTP traffic through a proxy. ↩︎
[2] I think Appium's usual flow is to install an app on the iOS simulator or device, and then use instruments to interact with it. However, (I think) we can't instrument Safari because we can't build the Safari app. Since IWDP lets us talk to Safari's WebViews, SafariLauncher bridges the gap: it's not built-in, so Appium can install and instrument it, and then SafariLauncher has a button that launches Safari, at which point the Safari WebViews are available for communication through IWDP. ↩︎
[3] Coincidentally, this also lets you use your OS X Safari's devTools on your iOS device (or simulator)'s instance of Safari, so you can see the console & network tabs and run javascript on your device (or simulator). ↩︎
[4] If, like me, you don't believe the proxy settings are saved, since you don't press a "Save" or "Confirm" button, and there's no UI change to indicate that the proxy settings were saved, you can open the connection again and voila, the proxy settings are still there :P ↩︎
[5] I chose to create & delete the BMP proxy around each Appium session because this more closely mimics my existing BMP server behavior for desktop Webdriver proxy tests. The only difference is that the port is hardcoded for the Appium tests. If you want to leave the proxy around all the time instead of re-creating/deleting it, that's fine as well, and it gives you the added advantage that your iOS device will have a working network connection outside of your Appium tests, since the proxy is available. With my set up, the iOS device's network connection is useless without its proxy. ↩︎
[6] I don't think there's any technical limitation on Appium's side that limits you to a single session at a time. But, I think iOS and OS X only let you instrument one app at a time. As a result, for iOS runs, Appium is set up to run only one session at a time. If there's an existing iOS session on the Appium server, it will get mad and refuse to start up. This limitation dovetails somewhat well with the fact that we're hardcoding the proxy port. If we were able to run two tests at once across the same proxy port, there'd be no way to separate what traffic came from which test. ↩︎
Emacs tip: If your instance hangs and won't respond to C-g, you can use `pkill -SIGUSR2 emacs` to force emacs to stop whatever it's doing.
— Wilfred Hughes (@_wilfredh) October 28, 2015
I recalled briefly that I had found another way to interrupt Emacs after accidentally sending it into a hang. I usually hang my Emacs in two ways:
An extra long line in the compilation buffer
Many lines of output in an M-x shell spawned via Tramp on a remote box.
I wasn't sure if it was on Twitter or somewhere else, but now that I have recovered it, I am attempting to put it in more places to make it easier to find the next time I freeze my Emacs.
Fixing a laggy compilation buffer
I use Emacs compilation mode to execute most of my scripts - it has a lot of nice built-in things, including error highlighting with automatic navigation. But, I've noticed that with really long lines, the compilation buffer gets so slow that it makes Emacs unresponsive to input, and I end up having to frantically Ctrl+G and hope that I can stop the compilation before I lose Emacs and have to Force Quit it.
One particular instance is when I'm using Webdriver to take screenshots - Webdriver takes the screenshot (in binary format?) and base64 encodes it, so that means I've got a very long string to work with, and if I accidentally print it out, Emacs gets quite overwhelmed.
As usual, I'm not the only one to run into this issue, and there's a gmane.emacs.bugs thread (cache) about disabling the Maven regular expression in compilation-error-regexp-alist:
(setq compilation-error-regexp-alist (delete 'maven compilation-error-regexp-alist))
I had thought it had to do with colorization, or perhaps the rainbow delimiters that I use, but that simple line of disabling the Maven regex made a huge difference. Hooray for correcting long-standing annoyances.
#emacs #elisp #compilation #buffer #compile.el #compilation.el #compilation-mode #compilation-error-regexp-alist #gmane
Text
I was originally thinking about this from a testing standpoint - we had a bit of a pain testing some simple geolocation features and ended up using a proxy to inject headers to trick Maxmind's GeoIP into testing out our different locations. Luckily for us, verifiable & trustworthy accuracy wasn't a core requirement for the feature.
From a trust standpoint, I bet the developers of Google's Ingress game had a somewhat similar challenge to overcome. Ingress is a native mobile app that displays a map based on your current location, along with its own Ingress-game overlay. GPS spoofing obviously provides an advantage, since you're no longer limited by where you can physically travel. One of the things the Ingress devs are suspected to have implemented to counter spoofing was a (complicated?) calculation of how fast it's possible for a person to physically move. For example, if a user's GPS resolved to India at t_0 and Greenland at t_0 + 30mins, they were obviously spoofing. Of course, this was all speculation, and the exact calculation details were never released, since it's a pretty simple countermeasure and spoofers would probably be able to reverse-engineer it to their benefit.
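Since the real details were never released, here's my own speculative sketch of what such a speed check might look like; the function names and the 1000 km/h threshold are my invention, not anything the Ingress devs published:

```javascript
const EARTH_RADIUS_KM = 6371;

// Great-circle distance between two lat/lng points (haversine formula).
function haversineKm(lat1, lng1, lat2, lng2) {
  const toRad = (deg) => (deg * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLng = toRad(lng2 - lng1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLng / 2) ** 2;
  return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a));
}

// Flag a pair of GPS fixes (each {lat, lng, t} with t in ms) as spoofed if
// the implied travel speed is faster than anything plausible - say, a
// 1000 km/h jet.
function looksSpoofed(fixA, fixB, maxKmPerHour = 1000) {
  const km = haversineKm(fixA.lat, fixA.lng, fixB.lat, fixB.lng);
  const hours = Math.abs(fixB.t - fixA.t) / (1000 * 60 * 60);
  return km / hours > maxKmPerHour;
}
```

The India-to-Greenland example above trips this check easily: the two fixes are thousands of kilometers apart with only 30 minutes between them.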
Geolocation, can we really trust it?
Recently our team has been working on a web app that uses geolocation. We use geolocation to identify the real-world geographic location of a user and allow them to check into an event. A fundamental requirement is the accuracy of the user location. Luckily, the HTML5 geolocation API is supported by most mobile browsers, provides an interface to query the device's location, and its accuracy is quite high.
The app rewards the user for participating in located events, so we need to be sure that the user's geolocation is not forged. The major problem is the sheer number of ways to forge geolocation, for example:
faking the location in Chrome via the Developer Tools
browser extensions that allow one to fake the location
external apps that allow one to fake the location
We tried to solve the problem by matching the results of HTML5 geolocation with several geolocation services such as Maxmind GeoIP and Akamai Edgescape, without success. Using Maxmind we've encountered problems with location accuracy. Maxmind offers an IP location service: when you query their database with an IP, the API returns a possible geolocation. We have found that some mobile carriers do not change the device's IP address for several days, so the Maxmind database answers with the same result even if you keep traveling.
Akamai Edgescape, instead, gives you the location of the PoP (point of presence) that served your request. The limit of this method is that the PoP may or may not be near the user's location.
In the end, we can say that requesting the user's location and integrating a geolocation service is quite simple; on the other hand, making sure that the location is not forged is pretty hard.
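A minimal sketch of the cross-check described above: accept the HTML5 fix only if an independent IP-based fix lands within some radius of it. The function name, the crude 111 km/degree flat-earth approximation, and the 150 km tolerance are all assumptions of mine, not values from any of the services mentioned:

```javascript
// Compare a browser geolocation fix against an IP-based fix (both {lat, lng}).
// The per-degree figures are a rough planar approximation, which is fine at
// this sanity-check granularity.
function roughlyAgree(gpsFix, ipFix, toleranceKm = 150) {
  const kmPerDegLat = 111;
  const kmPerDegLng = 111 * Math.cos((gpsFix.lat * Math.PI) / 180);
  const dKm = Math.hypot(
    (gpsFix.lat - ipFix.lat) * kmPerDegLat,
    (gpsFix.lng - ipFix.lng) * kmPerDegLng
  );
  return dKm <= toleranceKm;
}
```

As the post notes, a check like this falls over exactly when the IP-based fix is stale (a carrier reusing the same IP for days) or far away (a distant PoP), so it can only ever be a weak signal.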
Text
Stopping footnotes here from opening in a new tab
Tumblr's markdown formatting mode somewhat secretly supports footnotes. But, it seems like my settings or my theme or something generates footnote links with the target="_blank" attribute set, which is pretty odd. Who wants a footnote to pop them into a new tab? And furthermore, the return links in the footer also have the same target="_blank". Basically, the footnotes on this blog have been nigh unusable, since they keep spawning new tabs all over the place.
So, some quick javascript to get things sorted:
Array.prototype.slice.call(
    document.querySelectorAll('a[rel=footnote], a[rev=footnote]')
).forEach(function (node) {
    node.target = '';
});
find the impacted nodes with document.querySelectorAll
convert that NodeList to an Array
clear the target on each node
Honestly, I'm not entirely sure why this happens on this blog - other people don't seem to have the issue. In case anyone else is seeing this behavior on their tumblr footnotes, just add the code above to a <script></script> tag at the bottom of the HTML for the page :)
Text
Chromedriver and the Weak Ephemeral Diffie-Hellman Public Key
As of Chrome 45, there's a new error message about a weak ephemeral Diffie-Hellman public key that started showing up in our webdriver & chromedriver proxy tests. The intent of the block was to protect users from the Logjam vulnerability.
ERR_SSL_WEAK_SERVER_EPHEMERAL_DH_KEY
In our testing set up at $WORK, we use Browsermob Proxy to MitM our E2E test traffic so that we can analyze the network traffic. Using a proxy allows us to test things like Omniture & Google analytics, and also enables us to simulate XSS attacks against our website.
Our E2E test suite depends heavily on the proxy being allowed to MitM the traffic, and Chrome started noticing that the DH key our proxy presented was insecure. This is pretty valid for Chrome to want to block, since we are, after all, attacking ourselves1. Luckily, Chrome allows us to blacklist certain ciphers via a startup argument, and after some wild googling, I arrived at the following CLI argument for Chrome:
--cipher-suite-blacklist=0x0039,0x0033
Basically, we are blacklisting the cipher suites that we don't want to run, and this allows Browsermob::Proxy to go on its merry man-in-the-middle-attack way. Using the perl webdriver bindings to start up a proxy and then chromedriver, this ends up looking like:
use strict;
use warnings;
use feature qw/say/;
use Selenium::Remote::Driver;
use Browsermob::Proxy;

my $proxy = Browsermob::Proxy->new(
    server_addr => $server,
    server_port => 8080
);

my $d = Selenium::Remote::Driver->new(
    desired_capabilities => {
        browserName => 'chrome',
        proxy => $proxy->selenium_proxy,
        chromeOptions => {
            args => [ 'cipher-suite-blacklist=0x0039,0x0033' ],
        }
    }
);

$d->get('https://www.google.com');
say $d->get_body;
Probably the most interesting part2 is the particular data structure that chromeOptions requires. The same idea will work with any of the other webdriver language bindings as well - if you can figure out how to pass arguments to your chromedriver via your bindings of choice, just append the cipher-suite-blacklist argument to the list of args.
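To illustrate the shape independently of any one binding, here's roughly the same capabilities payload written out as a plain JavaScript object (the variable name is my own; note the flag loses its leading "--" inside the args list):

```javascript
// The desired-capabilities payload that the bindings ultimately serialize
// and send to the webdriver server: chromeOptions.args is a list of
// chromedriver CLI flags without the leading "--".
const desiredCapabilities = {
  browserName: 'chrome',
  chromeOptions: {
    args: ['cipher-suite-blacklist=0x0039,0x0033'],
  },
};
```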
Also of note - a lot of sources recommended blacklisting a longer list of ciphers:
--cipher-suite-blacklist=0x0088,0x0087,0x0039,0x0038,0x0044,0x0045,0x0066,0x0032,0x0033,0x0016,0x0013
After a little trial and error I was able to determine which particular ciphers I needed to blacklist, so I only chose 0x0039,0x0033, but others may be necessary; YMMV! For further reading, there were a couple very helpful links I found:
Frank Ehlis already figured out how to disable SSL ciphers in Dec 2013.
There's a chromium issue listing all the ciphers.
This Super User answer also helpfully covered disabling the SSL suites
A diagnostic page for determining which SSL cipher suites your browser supports!
Coincidentally, the security block also inadvertently caused issues for people with intranet websites that aren't properly secured, amongst other things. This has somewhat understandably ended up angering plenty of internet users who feel very entitled to their free browsers. ↩︎
The other gotcha that I ran into while troubleshooting this issue is that if the Browsermob proxy server was on the same machine as the browser in question, the issue didn't manifest itself at all. We only experienced the issue when the proxy server and the browser were operating on different machines. ↩︎
#webdriver #chromedriver #weakdh #diffie-hellman #chromium #disable #blacklist #cli #chromeoptions #ERR_SSL_WEAK_SERVER_EPHEMERAL_DH_KEY #browsermob proxy #perl #selenium-remote-driver #browsermob #security #logjam
Link
Today I want to talk about the concept of pages in GNU Emacs, how I use them, and why I use them. In my experience pages are often overlooked by Emacs users, most likely because they look ugly, but I will show you a package to deal with that. And along the way I will show you a package that I use to make it easier to navigate between pages.
Huh, the last time I thought about page breaks was when I was writing papers in Microsoft Word. I had no idea the concept even existed in emacs, not to mention that it could be useful.
Link
Argh, this reminds me of my struggles to get an Angular 2 app working alongside my existing Angular 1 app. I've tried three separate Fridays to configure some voodoo magic that satisfies the requirements and compilation steps to transpile Angular 2 and my app down to proper ES5, and failed three times, getting lost in es6-module-loader and System.JS and picking a proper traceur module format that would work for both my existing app and the new stuff. Although, I think I have just thought of another strategy to try (just take the ng2 stuff along its own build pipeline, separate from my existing app), so perhaps I'll dedicate my next Friday to that :).
Text
My own reference for flashing new android without losing root or user data
Humm, so my Nexus 6 has a persistent notification about wanting me to upgrade to 5.1.1. I was lazy this time and didn't bother doing it at the end of May when the stock images came out, but my phone doesn't know how to do it on its own since it's rooted and unencrypted and/or it's using TWRP recovery instead of the stock recovery. I don't do this frequently enough to remember, but too infrequently to want to write a script for it (especially since Wug's toolkit already exists). Anyway, here are my pretty straightforward steps for getting my N6 to 5.1.1, generic enough to apply to any update1:
start downloading whatever factory image that needs flashing
on the phone, make a backup via TWRP
copy the backup to a desktop, just in case of lightning strikes!
whenever the DL finishes, unzip the .tgz and go into that folder
in that folder unzip the .zip, cuz it has boot.img and system.img
wire the phone to a computer (probably already done in step 2)
reboot to bootloader (power off, hold vol down & power)
use fastboot to do some things:
fastboot flash bootloader bootloader.img
fastboot flash radio radio.img
fastboot reboot-bootloader
fastboot flash boot boot.img
fastboot flash system system.img
fastboot reboot-bootloader
head into TWRP recovery, supposedly I should wipe cache & Dalvik, but I forgot and it seems fine. anyway, reboot to system and it'll prompt about installing SU for us since we lost root (yay TWRP).
Make sure not to flash the recovery from the download since we'd like to keep TWRP, not the stock recovery. TWRP's reboot into system will spin the "optimizing app M of N" for a while, and then it should be gravy!
I used to go to theSiteThatShallNotBeNamed for this, but after seeing their recent behaviors and reading the thoughts of some industry thought leaders (heh, heh, thought leaders is a funny concept - but here I am following their thoughts...), I figure I'd ought to make a reference for myself so I can stop giving them traffic. ↩︎
Text
Bitlbee’s account xml file on os x
This is primarily for me to find next time I run into this issue! Having previously installed bitlbee on my OS X machine via homebrew & some elbow grease, I made a one-off password for the local bitlbee server I run to store my account credentials. Since I run the bitlbee server through Emacs and, like a well-behaved Emacs user, hardly ever restart Emacs, I'm sure to have forgotten my bitlbee credentials between server restarts. At those times it's pretty useful to be able to find the bitlbee XML file that houses my username and encoded password. This is also useful in case I need to change other account settings.
So, for bitlbee installed on OS X via homebrew, the account xml file is in the following folder on my machine:
/usr/local/var/bitlbee/lib/<username>.xml
and the bitlbee server wants the credentials like
> identify <password>
Photo
(via Designing Data-Driven Interfaces — Truth Labs — Medium)
Humm, earmarking this article for the next time I get motivated to do a dashboard re-design. We've got a lot of automated test report data at $work that I'm not doing a good job visualizing. My design process is typically "find something you like and copy it." Unfortunately, that doesn't leave much room for customizing the design to our specific task.
I'm looking forward to making a new dashboard using the guidelines and links from this article...perhaps I'll bump that priority up a bit :D
Link
It's really quite nifty seeing how less popular languages work in production environments, including logistics about personnel and hiring. Fancy!