The daily, sometimes weekly, ramblings of an iOS UI Engineer.
Video: https://www.youtube.com/watch?v=W2J6DCVWpRI
Developing an iOS app? Read this if you haven't already. If you have, read it again.
Developer: “We need fast messaging.”
Me: “Is it okay if messages get dropped occasionally?”
Developer: “What? Of course not! We need it to be reliable.”
Me: “Okay, we’ll add a delivery ack, but what happens if your application crashes before it processes the message?”
Developer: “We’ll ack after processing.”
Me: “What happens if you crash after processing but before acking?”
Developer: “We’ll just retry.”
Me: “So duplicate delivery is okay?”
Developer: “Well, it should really be exactly-once.”
Me: “But you want it to be fast?”
Developer: “Yep. Oh, and it should maintain message ordering.”
Me: “Here’s TCP.”
iPhone 6S camera: before and after, thanks to Filtron. My review here: https://youtu.be/wo-pyokUHCk
Peaceful negotiations with ATS
At Dezine Zync Studios, we've had a pretty solid week configuring our clients' and our own servers to work properly with Apple's App Transport Security (ATS) requirements.
Here's a high-level overview of these requirements:
1. The server must support at least Transport Layer Security (TLS) protocol version 1.2.
2. Certificates must be signed using a SHA256 or better signature hash algorithm, with either a 2048-bit or greater RSA key or a 256-bit or greater Elliptic-Curve (ECC) key.
3. Invalid certificates result in a hard failure and no connection.
4. The connection must use one of the accepted cipher suites listed below.
TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
If you've never seen those constants before, don't fear. They're simply identifiers for the cryptographic algorithm combinations TLS uses over the wire (that's a pretty good brief, isn't it?)
If you ran your app against the iOS 9 SDK and ended up with failing network requests to your HTTPS endpoint, you're probably left scratching your head. So you head over to a search engine and search for "iOS 9 ATS" or the like, and end up with results like this or this. Both of those articles show methods of disabling ATS in one way or another; thankfully, they do also mention the ramifications of doing so. There is also this article which actually goes in-depth into debugging this issue, with some very good tips. I strongly recommend reading the whole thing, but maybe later... not now... we have something else to do right now, remember?
Unless you've already had too much coffee, you'd be right to guess that disabling ATS is a terrible idea. Apple engineers have worked hard to give us developers a way to improve security for our users and customers; why would we, in our right minds, disable this very feature?
I vaguely remember, either from the WWDC 2015 session "Networking with NSURLSession" or from somewhere else on the interwebs, that Apple could, and possibly will, reject apps that opt to disable ATS completely. I doubt Apple will enforce this the moment iOS 9 drops for the masses, though.
I didn't want to get caught in either of those two situations. So I started digging into this ATS thing causing our apps not to connect to our APIs. Let's take this step by step.
The hard work
Let's look at point 1: The server must support at least Transport Layer Security (TLS) protocol version 1.2. Well, to support TLS at all, you'll need your server configured with SSL certificates. There's a lot of detail in there, but I'll keep this a fairly high-level overview, as the specifics will differ across infrastructures, application languages, front-facing proxies, load balancers and what have you.
For nginx, this is fairly straightforward: change the lines in the SSL conf file to this (I believe the location of this file differs across versions; ours was at /etc/nginx/conf.d/ssl.conf):
ssl_protocols TLSv1.2 TLSv1.1;
We opted to no longer support TLSv1.0, but your requirements may differ. Also note that if you're overriding this configuration line in other configuration files, please apply the patches there as well.
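Once nginx has reloaded, it's worth verifying the handshake from the outside. A quick sanity check, assuming OpenSSL is installed locally and using a placeholder hostname:

# Should succeed and report TLSv1.2 as the negotiated protocol:
openssl s_client -connect api.example.com:443 -tls1_2 < /dev/null 2>/dev/null | grep -E 'Protocol|Cipher'

# Should now fail, since we dropped TLSv1.0:
openssl s_client -connect api.example.com:443 -tls1 < /dev/null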
Certificates, 2048 bits, unicorns and thingy-majiggys
Yep, just consult your SSL certificate vendor about these. Let them know your requirements. If your certificate(s) already satisfy all of those points, you're good to go!
Invalid Certificates
This one is obvious: it's how invalid or expired certificates should be treated. You can override this behavior using NSURLSession delegates, but I would recommend against it. I personally haven't tested this behavior, but if ATS takes precedence over the delegate, you're seriously f***** if you've been relying on that.
Cipher Suites
All the cipher suites Apple has listed are very strong ciphers; all of them fall under the "modern" category. But that's only half the story if you want to get this right. There are plenty of weak ciphers and outdated protocol features out there, the kind that enabled attacks like POODLE and Heartbleed (who comes up with these names?), and we need to get rid of those as well. So here's a list of all the good ciphers, with the bad ones marked with a ! before their names:
ECDHE-RSA-AES128-GCM-SHA256, ECDHE-ECDSA-AES128-GCM-SHA256, ECDHE-RSA-AES256-GCM-SHA384, ECDHE-ECDSA-AES256-GCM-SHA384, DHE-RSA-AES128-GCM-SHA256, ECDHE-RSA-AES128-SHA256, DHE-RSA-AES128-SHA256, ECDHE-RSA-AES256-SHA384, DHE-RSA-AES256-SHA384, ECDHE-RSA-AES256-SHA256, DHE-RSA-AES256-SHA256, HIGH, !aNULL, !eNULL, !EXPORT, !DES, !RC4, !MD5, !PSK, !SRP, !CAMELLIA
Quite a list, huh! To set this in nginx, simply add/modify the following line in your conf file:
ssl_ciphers "..."; ssl_prefer_server_ciphers on;
Put in the above list, with each cipher separated by a :. Okay okay, here's the actual bit, all properly pieced together for your copy-pasta pleasures:
ssl_ciphers "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384:DHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA256:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!SRP:!CAMELLIA"; ssl_prefer_server_ciphers on;
With that piece in place, your nginx server should be able to handle all things properly.
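You can also ask OpenSSL to negotiate one specific suite to confirm the list took effect. Note that OpenSSL uses the short cipher names from the nginx config above, not Apple's TLS_* constants; the hostname is a placeholder:

# Should connect, since this suite is on the accepted list:
openssl s_client -connect api.example.com:443 -cipher 'ECDHE-RSA-AES128-GCM-SHA256' < /dev/null

# Should be refused, since RC4 is explicitly excluded:
openssl s_client -connect api.example.com:443 -cipher 'RC4-SHA' < /dev/null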
If you're directly using a NodeJS server on port 443, pass the cipher list when creating the server like so:
var fs = require('fs');
var https = require('https');

var options = {
  key: fs.readFileSync('/path/to/key.pem'),   // the PEM contents, not the path string
  cert: fs.readFileSync('/path/to/cert.pem'),
  ciphers: "...",                             // the same ':'-separated list as above
  honorCipherOrder: true                      // equivalent of ssl_prefer_server_ciphers on
};

https.createServer(options, app).listen(443);
By following these steps, we were able to successfully have all our apps talking to our backend services without any of the Info.plist edits. In my opinion, this is the best case scenario, for us (getting past the Apple Review 😅) as well as for our end users.
So before you go all out and wage war on App Transport Security, consider this peaceful way, will you?
[Zypher Log] 2015-09-02 Continuous Integration & Delivery
Continuous Delivery of web apps is a thing... we've had CI for arbitrary code for a while. We've had CI for mobile apps for a while. These further plug into delivery systems which deploy the apps. We've recently seen a plethora of such services for the web. But nothing comes close to the simplicity of bash scripts.
Background
The web app lives inside its own git repo. It has two remotes:
Github
a. All code merging, forking, etc. happens here. Each push to any branch other than develop or master automatically triggers npm test on Travis, which in turn handles letting us know if something failed.
b. Each push to the develop branch triggers the same actions as above; however, if all tests pass, it further pushes the repo to our staging server.
c. Pushes to the master branch are currently unconfigured. This will go in when the live servers are up.
Staging Server: The git repo on the staging server is created using git init --bare. That is, it's a bare repo. A hook then checks out HEAD to another folder with the same name, minus the .git extension. When code gets here, it's safe to assume it's working properly, as it has passed all the tests.
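For reference, a minimal sketch of that one-time setup, assuming the same /home/username paths used in the hook below:

# One-time setup on the staging server
git init --bare /home/username/zypher.git   # the bare repo that receives pushes
mkdir -p /home/username/zypher              # the work tree the hook checks HEAD out into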
Bash it
The staging server repo is configured with a post-receive hook which essentially does 3 things.
#!/bin/sh

# Step 1: update the source
GIT_WORK_TREE=/home/username/zypher git checkout -f

# Step 2: ensure the deployment scripts are executable
chmod +x /home/username/zypher/post-receive.sh
chmod +x /home/username/zypher/post-s3.sh

# Step 3: run the first deployment script
/bin/sh /home/username/zypher/post-receive.sh
Step 1 updates the source to the latest code. The staging repo has a single branch, master. All code pushed here comes from the develop branch.
Step 2 ensures that our deployment scripts can be executed.
Step 3 finally executes our first script. All other scripts should be run by the first one. This ensures you do error checking inside the scripts and call subsequent scripts accordingly. If they are not dependent on each other, you can safely call all of them from here itself.
Post Receive
As the name suggests, the post-receive.sh script does all the heavy lifting of deploying your app. In our case, we need to achieve the following (the assembled script is further below):
- Compile ES6 scripts to JS using babel
- Compile LESS files to CSS
- Sync the public folder inside our express app with S3
- Ensure correct Content-Type headers for public files (more on this in a bit)
But before we begin, we must do some housekeeping. So first of all, let's get all our NPM modules up to date:

npm update
Next, we update all our submodules (this is temporary; everything will move to NPM soon):

git --git-dir=/home/username/zypher.git --work-tree=. submodule update
We trigger npm run build just to be sure any steps included there are also run. Currently, our build part is empty.
Now that the housekeeping is complete, we can start compiling all our resources and syncing them up.
lessc main.less main.css takes care of creating a single css file from all the other less files I have.
babel ./public/js --out-dir ./public/js -x '.es6' compiles all the ES6 files to JS files. I load the ES6 files in browsers which support them. Currently, this is limited to the WebKit Nightlies, but soon, as other browsers implement ES6 features, Zypher will be ready. For older browsers, the JS files are loaded.
s3cmd sync --delete-removed --no-mime-magic ./ s3://bucketname/public/ syncs the public folder with the S3 bucket. This is a true lifesaver, as it only uploads files that have changed, by remembering the md5 hash of each file during its last sync and comparing it to the new file. Unfortunately, without the --no-mime-magic argument, s3cmd screws up the Content-Type header for most files. s3cmd isn't the one to blame here, as it relies on a python package to help it out. Not a deal breaker, as we'll tackle it in the next step.
Remember that post-s3.sh file? Yes, it's time to call that, like so:

/bin/bash /home/ec2-user/web/post-s3.sh *
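Pieced together, our post-receive.sh ends up looking roughly like this. This is a sketch assembled from the commands above; the paths and bucket name are ours, so adjust to taste:

#!/bin/sh
# Housekeeping
npm update
git --git-dir=/home/username/zypher.git --work-tree=. submodule update
npm run build

# Compile assets
lessc main.less main.css
babel ./public/js --out-dir ./public/js -x '.es6'

# Sync the public folder with S3, then fix headers and ACLs
s3cmd sync --delete-removed --no-mime-magic ./ s3://bucketname/public/
/bin/bash /home/ec2-user/web/post-s3.sh *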
Post Update
The post-s3.sh file has little to no magic inside it. It's really trivial code. At its head, it has a simple function:
execCmd () {
  echo "Running ${1}"
  eval ${1}
}
We'll use this throughout the script like so:
for f in $*
do
  FILE="${f}"
  FILE_EXT="${f##*.}"
  FILE_NAME="${f%.*}"
  # echo "${FILE_EXT}"

  # process JS files
  if [[ "${FILE_EXT}" == "es6" || "${FILE_EXT}" == "js" ]]; then
    CMD="s3cmd modify --add-header=\"Content-Type:application/javascript\" s3://bucketname/public/js/${FILE};"
    execCmd "${CMD}"
  fi
done
We check the file extension and then ask s3cmd to update its Content-Type header, so our browsers have no issues when loading these files up.
We then have the script make these scripts publicly accessible.
CMD="s3cmd setacl s3://bucketname/public/js/ --acl-public --recursive" execCmd "${CMD}";
We do the exact same for the images folder, and for a single file inside the css folder, main.css.
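For completeness, those bits are near copies of the JS handling above; a sketch, with the images' Content-Type assumed to be PNG for illustration:

# Inside the loop above, for image files:
CMD="s3cmd modify --add-header=\"Content-Type:image/png\" s3://bucketname/public/images/${FILE}"
execCmd "${CMD}"

# main.css is a single known file, so it's handled outside the loop:
CMD="s3cmd modify --add-header=\"Content-Type:text/css\" s3://bucketname/public/css/main.css"
execCmd "${CMD}"
CMD="s3cmd setacl s3://bucketname/public/css/ --acl-public --recursive"
execCmd "${CMD}"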
Taking it further
If you're reading this to adapt a similar Continuous Delivery method for your own app, you may want to take two extra steps.
We're still testing and building, so it helps having separate JS files. In production, however, you should avoid this: concatenate all of them into a single JS file and use that instead.
No ifs and buts for this one: GZIP the JS and CSS files. Do not skip this step. It vastly improves loading times, and I cannot stress this enough. Put it into the above post-receive script and then, and only then, run the sync command on S3. Don't let bare files get to your S3 bucket.
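Here's one way that step could look, right before the sync. A sketch, assuming s3cmd's --add-header option applies to sync, and using our placeholder bucket:

# Compress in place, keeping the original filenames
for f in ./public/js/*.js ./public/css/*.css; do
  gzip -9 "$f"     # produces $f.gz and removes the original
  mv "$f.gz" "$f"  # restore the name; the Content-Encoding header tells the browser it's gzipped
done

# Upload with the matching header so browsers decompress transparently
s3cmd sync --delete-removed --no-mime-magic --add-header="Content-Encoding:gzip" ./public/ s3://bucketname/public/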
Okay, I don't mean to scare you with that last one, but just imagine me saying it to you in a Darth Vader voice. You get the idea ;)
End notes
All in all, this has been pretty fun to work out. I love the simplicity and extensibility this method allows me to have without relying on external systems and giving them access to secure servers and private git repositories.
You may be thinking, though: where do these files live? Well, if you did ask yourself that, you haven't been paying attention. These files live inside your git repo. You can work on them locally, and when you push your code to your remote repo, they get updated too, before being executed. This opens up everything you can do with this system; think of chaining multiple files to create an entire pipeline workflow like Fastlane (that one is for iOS apps and uses ruby; we use it literally every day for CI and CD as well as App Store deployments).
Let me know your thoughts, suggest improvements or point out my errors. I'm @dezinezync on twitter.
[Zypher Log] 2015-09-01
Yes, yes... I said the backend log would be the only one of its kind. Unfortunately, the backend was all I worked on today. And it threw up some pretty interesting problems (at least I found them so...)
Thumbnails
Previously, the iOS app used to download the shot images from S3, create thumbnails locally and use them. This clearly isn't feasible for browsers, or at least not yet. Yes, I now realize how wasteful this really is. Nonetheless, I've realized my mistake, and it was time to correct it. However, I ran a quick ls (not the bash ls, but some magic ls I made over the weekend) on the S3 buckets that house these shot images, and there are over 120,000 of them. Whew!
Creating Thumbnails on the go
So before I embarked on a crazy journey of trying to figure out how to do this, I first set up a Lambda function which would handle generating thumbnails for all new shots coming in from S3. Since the shots are encrypted upon upload, I couldn't simply download them and run a quick command to resize them. This heavy lifting was done by Lambda and some funny-looking code. (By funny, I mean Lambda's reluctance to even process the NodeJS script which used promises, even with the help of an external module, Q. I had to go back and rewrite it using async.) It works, but I sure am not happy with the code.
This takes care of any new shots coming into the system. Now for handling what we currently have. I went back to my original script (thank you, Time Machine), put it on an m4.xlarge EC2 server and adapted it to utilize the instance's 4 vCPUs. And then I came across the quirks of the AWS NodeJS API: the S3 listObjects command does funny things with the NextMarker parameter.
The NextMarker parameter helps with pagination. Given that the command is limited to 1000 items per call, this makes sense. However, even with the upper limit set to 1000, each call yields no more than 284 items, which is super odd.
I refactored my code again into a "streaming" style module, such that each vCPU would process a quarter of the files coming in per call. If any files are left over, the first vCPU would handle those as well. Using Q.all(...), this was rather trivial to track and manage. Once all files are listed (and processed), listObjects simply returns an undefined NextMarker. At least they got that right.
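For the curious, the same pagination loop can be expressed with the AWS CLI. A sketch, assuming jq is installed and using a placeholder bucket name; it falls back to the last key of each page as the marker, since NextMarker isn't always populated:

#!/bin/sh
MARKER=""
while :; do
  PAGE=$(aws s3api list-objects --bucket shots-bucket --max-keys 1000 ${MARKER:+--marker "$MARKER"})
  echo "$PAGE" | jq -r '.Contents[].Key'                        # process this page of keys
  [ "$(echo "$PAGE" | jq -r '.IsTruncated')" = "true" ] || break
  MARKER=$(echo "$PAGE" | jq -r '.Contents[-1].Key')            # last key becomes the next marker
done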
This entire process took me 90 minutes, of which 30 were spent on the server installing NodeJS, ImageMagick, etc. The JS script itself ran in only 42s (hurray for multiple vCPUs), including downloading the shots, decrypting them, creating the thumbnails, encrypting them and uploading them.
42s is seriously fast; I was expecting at least 15 minutes. I did fail to consider the memory buildup from the file buffers, and the fact that the garbage collector wouldn't have enough time to clean up given the 42s runtime, which resulted in massive RAM usage. Nonetheless, this did not matter, as the server was immediately shut down. If you're building such a thing for long-running tasks, please do take into consideration the memory buildup from dangling buffers.
End Notes
The cost of creating thumbnails on the fly will become known over the coming few months as Zypher Web's private beta starts rolling out. However, the cost of creating thumbnails from the existing shots (not counting human hours spent) was $0.252. Yes, processing 120,000+ images (albeit small ones, each file being no more than 3-5MB in size) cost us not even a whole dollar. AWS keeps surprising me at every step.
-- Note 1: Data transferred between EC2 and S3 in the same region is free. Yes, free. As long as you stay within that one region, you'll save a lot of money simply by processing the images on an EC2 server.
[Zypher 2.0] 2015-08-31
In this post, I'll be documenting some of the architectural decisions we've taken to improve overall networking performance, which will in turn benefit the speed and responsiveness of the upcoming web app and related tasks.
Background
Zypher's API turned out to be rock solid. It suffered only 12 minutes of downtime (an AWS issue causing network packets to consistently drop). Other than that one isolated incident, it served me well for the 2.5 years it ran. It was recently shut down due to all the breaking changes in the API.
To accommodate this, we had to move all our existing users to an ad-hoc API service. This will eventually mutate into the 2.0 API, running alongside it and then finally merging into it.
The upgrade
To avoid this very situation, we decided to break the API down into some obvious components, namely:
- The API Service
- The Authentication Service
- The Internal Messaging Service (designed such that we can upgrade it to use disque once it attains at least a beta status)
- The Push Notifications Service (designed to currently work with the Apple Push Notification service for both iOS and desktop Safari, with Chrome and Firefox push notifications to follow)
- The Queue-based Email Service (transactional emails like reset-password links, support, etc.)
- The Cache Service (powered by Redis. I feel bad for putting Redis inside brackets)
Reinventing the wheel?
You must have noticed that a lot of these things already exist as services. The queue-based email service (Mandrill, Postmark) and the Push Notifications Service (I forgot the name of the SaaS company) are the items in question. However, it proved much more cost-effective for us to write these on top of Redis (the email service will move to Disque). It took us less than a week for both, and I believe it would have taken much longer with 3rd-party APIs. This also enabled us to tightly integrate multiple AWS services, thereby making accounting for Zypher a s*** ton easier.
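The queue pattern underneath these services is tiny, which is what made writing them ourselves so cheap. A minimal sketch of the idea using redis-cli; the key name and payload are made up for illustration:

# Producer: the API pushes a job onto a list
redis-cli LPUSH emails:outbound '{"to":"user@example.com","template":"reset-password"}'

# Consumer: the email worker blocks until a job arrives, then processes it
redis-cli BRPOP emails:outbound 0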
All of these services run on AWS, which allows us to interlink them over private IPs while only exposing the API and Web App publicly. This enables fast networking amongst the interdependent components. No one service talks to more than two other services. This vastly reduces interdependency and the number of single points of failure... but wait, you've heard all this before. Yes, you indeed have. This is no revolutionary methodology, just something that makes sense in our scenario.
The Guts
Because of the aforementioned breaking changes to the API, I had the opportunity (in some cases, a misfortune) to rewrite the API. Just around then, the HTTP/2 specification was finalized, and at WWDC 2015, Apple announced support for the new protocol. Adopting it was an absolute no-brainer, given the short and long term benefits it comes with. Browsers and devices which don't support H2 will fall back to SPDY, which will further fall back to the HTTP 1.1 specification if need be. This covers the vast majority of browser, iOS 8 and iOS 9 use cases.
It was like Christmas when I began working on the new API. H2 was finalized, and so was ECMAScript 6, a.k.a. JavaScript-Next, a.k.a.... This vastly reduced the code I had to write, simply because the syntax is so much more concise. ES6 also allowed us to create a tiny MVC architecture using Classes (this is possible without ES6 as well, but it's prototypal hell), which further boosted the speed of development.
As you can tell, one benefits the end user while the other benefits us. More maintainable code = more time spent with the customers. And this is something that's been in our foundation since the beginning.
Open Source
A lot of the things I mentioned above wouldn't be possible without some awesome Open Source Software: NodeJS, Express, Redis and more NPM modules than I can keep track of. To give back, we'll be open-sourcing a lot of the components used for the backend and front-end services.
By now, you must have realized I've mostly spoken about the backend systems. However, this will be the only post of its kind. All subsequent logs will mostly talk about the front-end, unless I work on something new and would like to talk about it. The subsequent posts will also be shorter and more frequent. Until then, I have a lot of material to write about and share with you folks. I hope I've piqued your interest.
PS: There are a lot of clues here to the two new major features coming in Zypher 2.0. ;) Let me know if you find them, I'm @dezinezync on twitter.
The Zypher 2.0 Log
Earlier today, I spoke about Zypher 2.0's status...
As you may have correctly guessed, I was talking about the iOS app I made 3.5 years ago (yes, it's been that long)... However, it has been missing a critical part of itself, which we've now addressed. Now that I have a team working on the product, I can distribute tasks more effectively amongst ourselves. This delegation of work has allowed us to do something we've been longing to do for a while.
Zypher for the Web
Yes, a lot of our users have been asking for this. We, unfortunately, lost quite a few prospective paying users due to the lack of this feature. And I personally did not like that.
So now that we have established that, I'll be keeping an active log of how things are moving and of the progress on both the iOS and web apps.
In the next post, I'll be writing about some of the architectural decisions we made, so if you're into that kind of thing, keep an eye out.
Lower prices, higher stakes.
Let's begin with the obvious. Apple introduced the Rs. 10 and Rs. 30 pricing tiers for India last week: https://mobile.twitter.com/dezinezync/status/618615866599546888

As I stated, this is excellent for music downloads, streaming services and VoIP services like Skype. I can see a lot of consumers (app users) rejoicing over this.

This presents a problem, however: with the new lower pricing tiers, and big corporations soon adapting to this change (which they obviously will, since the barrier to entry is now so low), users will soon start to expect these tiers as the de-facto standard. For quick, small or monthly consumable items, these tiers fit the bill perfectly. But for indie developers like myself, this is a big problem. Once these pricing tiers become the de-facto standard, users are going to complain about apps which don't conform to them. In India, the percentage of users who are actually comfortable paying for good software is very, very low. Almost negligible. You see where I'm going with this.

This might become a problem sooner rather than later, if bigger companies whose profits don't come from the sales and downloads of their apps adapt to this change. As a consumer, I welcome this, but as an indie developer, to be very honest, I'm a little scared of what's to come.
Thoughts on Platforms SotU
The WWDC Platforms State of the Union video just released inside the WWDC app for the iPhone & iPad. If you wish to watch it on your desktop or laptop in Safari, here are some references:
SD Links:
Design Awards: http://t.co/t1GgRfn8AM
Platforms State of the Union: http://t.co/tZfYPMCsJP
— Nikhil Nigade (@dezinezync) June 9, 2015
Design Awards 2015: http://t.co/fktW3N9VfD
Platforms State of the Union: http://t.co/xlcryBErON
— Nikhil Nigade (@dezinezync) June 9, 2015
I'm rounding up my thoughts & reactions below, somewhat like a TL;DW. (didn't watch 😉 )
⚠️ No more HTTP networking in iOS 9
Let that sink in. Yes, you can soft-enable it, but I doubt Apple will allow it on the App Store. I for one welcome this move. Moving to HTTPS is going to benefit everyone in the release chain, from the developers right through to the end users. However, I've already spoken to a client of ours, and they didn't sound happy about it. I hope this move from Apple pushes some of the Indian companies whose apps, per recent network inspections, transmit users' sensitive information over plain HTTP, to clean up their act.
✅ UI Testing in Xcode
A colleague had to hold me down when they announced this. I was jumping all over the place with extreme joy. The previous methods we had to employ for UI testing were archaic, time-consuming and costly. I'm sure a lot of developers already accustomed to writing unit tests are going to jump on this train right away. Quick note: you'll need to create a new UI Testing target if you're opening an existing Xcode project in Xcode 7.
🍮 StackView
I'm yet to personally explore this one, and even the demo for it was brief, but it nonetheless sounds like a great addition. Building UIs like the App Store's and the new Music app's should be really easy now.
⌚️ Native Watch Apps
They spoke about this in the keynote as well, so nothing new to say, but I still question whether merely moving the "extension" from the iPhone to the Watch is worthy of being called "native". The very existence of the "extension", in my opinion, prevents it from achieving that state. If these apps were truly native, the requirement for a host app wouldn't be there. When that happens, watchOS will have become truly powerful.
🌐 IPv6 becomes compulsory
Going forward, ensuring your app runs over an IPv6 network is compulsory. Apple is making this easier to test via a single setting in Internet Sharing on your Mac. You should be fine as long as you've been using the likes of NSURLSession and such. Related: NSURLConnection is now deprecated?
What were your favorite parts? Let me know on twitter, @dezinezync.