localStorage Design Patterns
I wrote a blog post about how we leveraged localStorage for our consumer web application. Take a look and share your thoughts there.
https://bytes.grubhub.com/2017/01/11/localstorage-design-patterns-and-passive-expiration/
GHS Typeahead
As people may have noticed, www.grubhub.com and www.seamless.com have received significant redesigns built on the AngularJS framework. Now that things have settled down, we want to take this time to contribute back to the open source community. One of the first projects we wanted to release is ghs-typeahead, an extension of the angular-bootstrap library's typeahead with three significant improvements:
When a user focuses on the input it will show the popup if there are results
Adds an ngDisabled field so the user can see results but not necessarily select them
Allows for the result set to be refreshed without opening/closing the popup again.
Refer below for a gif of it in action.
It's important to note that this is not functional by itself: it assumes you are using angular-bootstrap, so you'll have to set that up as well. We are currently running this library in production, along with a customized build of angular-bootstrap containing only the components that are important to us. I recommend you do the same for shorter page load times.
To get set up, add the dependencies to your app like so:
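A minimal sketch, assuming the module is registered under the name ghs.typeahead and your app is called myApp (check the project README for the exact names):

angular.module('myApp', [
  'ui.bootstrap',   // angular-bootstrap, which ghs-typeahead extends
  'ghs.typeahead'   // the typeahead extension itself
]);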
You can also pull down the latest code and run 'grunt watch' to play with it yourself. The typeahead will support all the attributes that the original supported. In addition, you will have access to the ngDisabled feature, which must be set on each individual result object. Refer to the code below for more details. If a user were to click/select the 'Toggle Saved Addresses' element, it would not modify ngModel because it is marked as disabled.
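A hypothetical result set illustrating the per-object flag (the field name ngDisabled comes from the library; the data is made up):

$scope.addresses = [
  { name: '123 Main St',            ngDisabled: false },
  { name: 'Toggle Saved Addresses', ngDisabled: true }  // shown in the popup, but selecting it won't touch ngModel
];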
Fixing AngularJS Forms #1
Angular's two-way binding is kind of magical, and for most cases it works as intended, but it does have the unfortunate side effect of creating an awful user experience in form data entry.
If I were a user I'd absolutely hate that. Of course it's not a valid email address; you haven't even let me finish typing it in yet. If you want to play around with the gif yourself, the jsFiddle is here. At Grubhub/Seamless we've opted to apply the following form UX patterns:
Only update the model upon blur event
All form submit buttons will never be disabled
Any form submit event will trigger blurs on all inputs within that form
#1 Only update the model upon blur event
This is a common pattern, and we should be able to safely assume that the user is done editing the field upon leaving it. The difficult part of this task is coding it in Angular. What we did was create a directive, update-on-blur, that uses jqLite to unbind the events Angular is listening on and bind the element to the blur event only. Refer to the code sample below.
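A minimal sketch of such a directive. (Angular 1.3 later added ng-model-options="{ updateOn: 'blur' }", which solves this natively.)

angular.module('myApp').directive('updateOnBlur', function() {
  return {
    restrict: 'A',
    require: 'ngModel',
    link: function(scope, element, attrs, ngModelCtrl) {
      // Stop Angular from pushing every keystroke into the model
      element.unbind('input').unbind('keydown').unbind('change');
      // Update the model only when the field loses focus
      element.bind('blur', function() {
        scope.$apply(function() {
          ngModelCtrl.$setViewValue(element.val());
        });
      });
    }
  };
});

Usage: <input type="text" ng-model="user.email" update-on-blur>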
Also I’ve provided a jsfiddle to try it yourself.
Limitations of $localStorage
At Grubhub/Seamless we started to notice that our application was becoming unwieldy, so we made the decision to break it down into separate sub-applications. This has the obvious advantage of reducing your page load time (PLT), since there is less JS and fewer template files to load, but it increases complexity: you now have to rely on localStorage to keep everything in sync. This is why it is important to choose the right local storage solution for you.
We began our app using ngStorage because it was the first thing that popped up on Google and it had the sexiness of not needing getters and setters. One could simply type:
$localStorage.account = 'moohooo';
// Instead of...
localStorage.setItem('account', 'moohooo');
Sexy, right? Well, we thought so too, until we started messing around with sub-apps. We found that localStorage was not getting updated fast enough when going from one sub-app to another, and if you look closely at the code you'll see why.
There's a 100 ms debounce before localStorage actually gets set. This means that if we modify localStorage in one sub-app and then immediately redirect the user to another sub-app, that change is lost. This forced us to use a different solution, and although angular-local-storage exists, we ended up creating a custom localStorageService, mostly because we wanted to expire any data after XX hours.
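A minimal sketch of what such a service can look like, with synchronous writes plus passive expiration. The names and the TTL are illustrative, not our actual production code:

var EXPIRY_HOURS = 12; // illustrative TTL

var localStorageService = {
  set: function(key, value) {
    // Write immediately (no debounce) and stamp the entry with a write time
    localStorage.setItem(key, JSON.stringify({ value: value, storedAt: Date.now() }));
  },
  get: function(key) {
    var raw = localStorage.getItem(key);
    if (!raw) { return null; }
    var entry = JSON.parse(raw);
    // Passive expiration: discard the entry if it is older than the TTL
    if (Date.now() - entry.storedAt > EXPIRY_HOURS * 60 * 60 * 1000) {
      localStorage.removeItem(key);
      return null;
    }
    return entry.value;
  }
};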
tl;dr: use ngStorage only if your application isn't composed of separate sub-apps.
Binding Events in AngularJS & JQlite
If you've been working with AngularJS you've no doubt run into jqLite, which gives you things like addClass(), css(), etc. I spent some time creating an affix directive (it makes an element fixed after a certain scroll position), and to do this I had to bind a scroll event on the window.
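A minimal sketch of the idea; the 'affix' class name (assumed to apply position: fixed) and the offset attribute are illustrative:

angular.module('myApp').directive('affix', function($window) {
  return {
    restrict: 'A',
    link: function(scope, element, attrs) {
      var offset = parseInt(attrs.affix, 10) || 0;
      var win = angular.element($window);

      function onScroll() {
        // Toggle the fixed-position class once we scroll past the offset
        if ($window.pageYOffset > offset) {
          element.addClass('affix');
        } else {
          element.removeClass('affix');
        }
      }

      win.bind('scroll', onScroll);
    }
  };
});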
This is a very basic version of my directive, but because AngularJS runs as a single page application (SPA) it technically never changes documents, so when you change routes you will find that those bound events are still firing even though the directive's scope has been trashed.
To remedy this you will have to listen for a route change and unbind the event, as shown below. This code snippet assumes that you are using ngRoute, but other routing libraries have a similar signal.
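Inside the directive's link function from the sketch above:

// Unbind the window handler when the route changes so it stops firing
// after the directive's scope is gone (assumes ngRoute)
scope.$on('$routeChangeStart', function() {
  win.unbind('scroll', onScroll);
});

Listening for the scope's $destroy event works just as well and doesn't tie the directive to a particular router.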
If you want a live demo click here
If you comment out the unbind call and switch from the first page to the second page, you will notice that the console log messages are still firing because the event is still bound.
grunt-ddescribe-iit
ProtractorJS end-to-end testing goes hand in hand with AngularJS development, but running the tests can be very time consuming. That's why it is crucial to know about "ddescribe" and "iit." They allow you to focus on only a few selected tests (if you have both iit and ddescribe, only the iit blocks will be run). Keep in mind that when you use iit and ddescribe, only those specific tests actually run and all other tests get an immediate pass, which can be very misleading.
You should be careful when committing code with these keywords, which is why a grunt task is perfect for preventing it. There is already one that does this, but it has several issues, so I forked it to address them:
It automatically throws an error for ddescribe, iit, xdescribe, xit, describe.only, and it.only, which may not be desirable; the keyword list should be editable through configuration.
The grunt task only shows the first error and stops running.
Try out this version, and if you want to incorporate it into your project simply install the npm module, or read the README here for more information.
npm install git://github.com/erictsai6/grunt-ddescribe-iit.git --save-dev
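A hypothetical Gruntfile entry; the option names here are assumptions, so check the fork's README for the exact configuration keys:

module.exports = function(grunt) {
  grunt.initConfig({
    'ddescribe-iit': {
      files: ['test/**/*.spec.js'],  // specs to scan
      options: {
        // With a configurable keyword list, you could flag only focused specs
        disallowed: ['iit', 'ddescribe', 'it.only', 'describe.only']
      }
    }
  });
  grunt.loadNpmTasks('grunt-ddescribe-iit');
};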
Rebuilding Your Development Environment
I just recently upgraded my development environment to Mavericks (10.9.5) and found that many of my globally installed tools were broken. If you are a frontend developer and you use tools such as Homebrew, npm, pip, etc., then you are in luck.
Let's go through a quick example with Homebrew and Node.js.
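A hypothetical repair session; the paths are typical Homebrew defaults and may differ on your machine:

brew doctor                          # reports broken symlinks and other issues
brew unlink node && brew link node   # re-creates Homebrew's node symlinks
ls -l /usr/local/bin/node            # verify it points back into the Cellar
brew reinstall node                  # last resort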
The lesson: figure out where your install tool keeps its executables and try its link/unlink features; if that fails, you'll have to manually symlink them into /usr/local/bin/ (most likely). Or you can simply try reinstalling them as a last resort.
Javascript Promise Chaining
As I transitioned over to using AngularJS I learned a lot of small nuances of JavaScript, specifically revolving around the concept of promises. This post will mostly focus on the conventions you should follow when chaining promises together. Please refer to the Q module for more details about promises in general.
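A sketch of the setup, assuming the Q library; promiseFn logs its argument and resolves or rejects based on a flag:

var Q = require('q'); // or the global Q in a browser

function promiseFn(v, shouldReject) {
  var deferred = Q.defer();
  console.log('Inside promiseFn v: ' + v);
  if (shouldReject) {
    deferred.reject('v: ' + v); // simulate a failure
  } else {
    deferred.resolve(v);
  }
  return deferred.promise;
}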
Refer to the above gist for the setup. I've created a promiseFn that will either resolve or reject based on a certain flag that is passed in. If you build your promise chain with an error handler in each "then" statement, you might expect an early caught rejection to stop the chain. Unfortunately, the last link in the chain still executes. Refer to the gist below.
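A sketch of that chain; the handler wiring is an assumption based on the output that follows:

promiseFn(0, false)
  .then(function() { return promiseFn(1, true); },
        function(err) { console.log('Error message ' + err); })
  .then(function() { return promiseFn(2, false); },
        function(err) { console.log('Error message ' + err); })
  .then(function() { return promiseFn(3, false); },
        function(err) { console.log('Error message ' + err); });

// The rejection from promiseFn(1, ...) skips the next "then"'s success
// handler and is logged by its error handler, which resolves the chain,
// so promiseFn(2, ...) never runs but promiseFn(3, ...) does.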
This is what is output from this code.
Inside promiseFn v: 0
Inside promiseFn v: 1
Error message v: 1
Inside promiseFn v: 3
If you want the error to stop the chain, you should instead append a single catch-all to the end of the promise chain. The following gist demonstrates this.
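The same chain with success-only handlers and one catch-all appended (using Q's fail, which is aliased as catch):

promiseFn(0, false)
  .then(function() { return promiseFn(1, true); })
  .then(function() { return promiseFn(2, false); })
  .then(function() { return promiseFn(3, false); })
  .fail(function(err) { console.log('Error message ' + err); });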
This is what is output from this code.
Inside promiseFn v: 0
Inside promiseFn v: 1
Error message v: 1
As you can see we do not hit promiseFn #2 or #3 with this scheme.
Git Workflow - Merge vs. Rebase
I've worked with Git for a few years now and have encountered two very different workflows. I was surprised at the flexibility of git and thought it was important to note some of the troubles I went through.
In general the accepted practice is to spawn a development branch off of your stable branch and to make your changes there (bug fixes, features, etc.). As mentioned before, there are two ways to maintain your code base, and the fundamental difference between the two is merge vs. rebase.
At my old job at AppFirst we moved at a very rapid pace, so we opted to take the merge route. This workflow is nice because it requires little management of the git branches, and it doesn't manipulate the git history. When merging, git essentially keeps the commits from both branches and creates a special merge commit where they meet. The merge commit is also where you deal with any merge conflicts, and there will only be one round of conflict resolution per merge. That last sentence may be a little confusing but will make sense when I discuss rebase.
When you move to a bigger team, however, merging can get unwieldy. Since you are keeping all the commits from every branched-off development branch, attempting to make sense of it all may leave you crying. Refer to the picture below and you'll see what I mean.
I forked 4 different branches off of master and committed to each one separately. I then merged them all into the 1234_merging branch, which results in the graph you see above. The image only shows 4 branches, but as you grow your team out it is pretty clear that the graph will become a mess.
Enter rebase. When I started at GrubHub it took me a while to understand the flow of rebasing and what it all actually meant. One of the key differences you have to know is that the truth should lie in your local branch - not the remote branch. Let's say you fork a development branch called moohooo and start coding and committing normally. Meanwhile at the same time the master branch is also getting regular commits.
If you wanted to get the latest master code into your development branch, you would do a simple git rebase origin/master (after fetching, of course). What this does is pop off the development commits up to the last commit that master and moohooo share (refer to the highlighted part of the image above). The branch then fast-forwards up to the latest master, and each of the popped-off commits is played back in order. What results is a nice linear log history that is easy to read, as opposed to the branching nightmare from merging. Refer to the image below.
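The whole flow, run from the development branch:

git checkout moohooo
git fetch origin          # update origin/master
git rebase origin/master  # replay moohooo's commits on top of it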
Please note that after rebasing, all the master commits come before the development branch's commits.
After a successful rebase you will find that the git history has been reworked and that your local and remote branches will likely be out of sync (moohooo vs. origin/moohooo). If you attempt to git push origin moohooo, you will run into an error complaining that the push is not a fast-forward. This is what I meant when I said that the truth lies in your local branch. To correct this you will have to force push your local branch up with git push --force origin moohooo, which will overwrite anything you had in origin/moohooo. Please note that you should only do this if you are sure that your local branch is more up to date than your remote branch.
If you remember, I mentioned before that with merge you only have to deal with one round of merge conflicts. With rebase, however, you may have to deal with as many merge conflicts as you have commits in your development branch, because of the way rebase works: each commit is played back one by one, so if there is a conflict you will have to deal with it at that point before continuing with the rebase process, as shown below. The development branch moohooo does a good job of demonstrating this disadvantage if you want to test it out yourself.
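When a played-back commit conflicts, the loop looks like this:

git add <resolved files>   # after fixing the conflict markers
git rebase --continue      # replay the next commit
git rebase --abort         # or bail out and restore the branch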
This problem could be dealt with if you squash your commits but that is for another time.
I created this public repo for demonstration purposes. If you want to test rebasing and better understand the process, feel free to clone it and switch to the moohooo branch. Once you are on moohooo, rebase off of master and you'll see each commit being played back one by one, because you'll have to resolve merge conflicts for each one.
https://github.com/erictsai6/git-demo
Rpmbuild and CentOS 6
I wanted to document this particular problem because it took me a few days to figure out and it was very frustrating.
So first, some background information. At the time I was working at AppFirst, which is essentially a monitoring SaaS solution, so we were in charge of collecting system, application, and log data. In order to do this we installed a collector program on our customers' servers, which was ultimately built and shipped as an RPM. Whenever a customer first requested a collector we built that custom package for them in real time and sent it back to them. Simple enough, right? Well, here's where things got out of whack.
We had recently made the switch from CentOS 5 to CentOS 6. Everything was more or less working as expected except (wouldn't you know it) rpmbuild. We had a shell script called "blah.sh" that ran rpmbuild and made sure it was pointing to all the correct files/directories, and it was run using Python's subprocess module. Before I continue, let me first list the system we were on.
CentOS 6.4 (previously CentOS 5.8)
python 2.6.6
rpmbuild 4.8 (previously rpmbuild 4.4.2)
Every time blah.sh was run with subprocess it kept exiting with status 1, so my first move was to just print out its stdout. Refer below for a code snippet on how to do this.
proc = subprocess.Popen(["./blah.sh"], stdout=subprocess.PIPE)
logger.error(proc.communicate())
Calling "communicate" will display all the stdout messages and any error message that may pop up as well in the following format: (<info stdout put>, <error message>). Unfortunately what was being displayed was utterly useless because it only showed the following two info messages and no errors...
('Building target platforms: x86_64\nBuilding for target x86_64\n', None)
Now what's really interesting is that this script ran fine from the command line, so I went through many steps to see what the actual difference was: I validated that the user was the same, checked that the environment variables matched, tested whether subprocess itself was the issue, and so on.
What it ultimately boiled down to was a permissions issue. rpmbuild 4.8 no longer uses /usr/src/redhat and instead defaults to /root/rpmbuild. And since this was being run from an apache server (assuming you are not running apache as the root user, which would be a huge security risk), the apache user has no permission to write under /root, which caused the entire script to fail out with exit 1. To fix this, modify the rpmbuild line to define _topdir as a folder you have permission to write to. Refer to this link to set up your system accordingly.
The link above tells you to update your .rpmmacros file, but those settings will unfortunately not get loaded in this specific scenario, so you'll have to either modify the .spec file or define it as a command-line argument. I chose the latter, so here is my solution. Note that this is the contents of blah.sh.
cwd=`pwd`
rpmbuild --define "_topdir /home/sample_home/rpmbuild/" --bb $cwd/sample.spec --target x86_64
And that should solve your problem. Hopefully this made your transition from CentOS 5 to 6 much smoother. Until next time.
Instant Search Bar
For our web application we knew we wanted results to appear instantly as soon as the user typed. Luckily it's pretty simple with jQuery. It's important to note that whatever API you hit for the results should limit the number of items returned; otherwise this will prove to be very expensive for your application.
var searchTimeout;
$("#main-search-bar").keyup(function() {
    var filterName = $(this).val();
    // Debounce: restart the timer on every keystroke so we only search
    // once the user has paused for 300 ms
    if (searchTimeout) {
        clearTimeout(searchTimeout);
    }
    searchTimeout = setTimeout(function() {
        $.ajax({
            url: "/search/",
            type: "GET",
            data: { filter: filterName },
            success: function(data) {
                // Do whatever you want with the data
            },
            error: function() { // $.ajax's error callback (there is no "failure" option)
                // Error handling
            }
        });
    }, 300);
});
Building a RoR Web App on AWS - Woes #1
As my friend and I build out a web application, I'll try documenting the mistakes I make so others do not fail where I did.
I'm developing a Ruby on Rails web application with MySQL coupled with Riak for data storage. These components will be hosted on Amazon Web Services free usage tier.
Ruby (1.9.3-p448)
Rails (4.0.0)
MySQL (5.1)
Riak (tbd)
My web application requires the use of a long-running database, so setting up Amazon Web Services (AWS) Relational Database Service (RDS) was the first thing I had to do.
Important Notes:
In order to get the free usage tier you must change your database class to micro. By default AWS has the large class selected.
There are two ways to start AWS instances. Through the GUI on the web management site or through the command-line tool found here. The link provided will only start up RDS instances. Please note that if you do start up an instance via the command line then you will not be able to see it on the AWS website. Don't ask me why, I do not know.
In order to access your database you'll have to set up your security groups in AWS. Don't worry as it is pretty straightforward.
You cannot SSH into an RDS instance even if your security group allows it. I've verified this with AWS support.
To gain access from your local computer to the production database, find your public IP address and select the rule for MySQL. Add the rule and apply the changes. PS: if you do not know the subnet mask, just put /32 (it matches your single IP).
You should be able to connect and access your AWS RDS database now.
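A quick connectivity check from your machine; the endpoint and user below are illustrative placeholders for your own RDS values:

mysql -h mydb.abc123xyz.us-east-1.rds.amazonaws.com -P 3306 -u awsuser -p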
More to come later.
Python Tutorial - Making REST API calls
For shits and giggles, I will write up a short entry about making REST API calls. It's easiest to work from a real-life example, so this one will refer specifically to HubSpot.
HubSpot is marketing software that we use to organize our customer information and create certain workflows for them so they get the best user experience. Each customer will henceforth be called a "contact" and the HubSpot's public Contact API can be found here.
There are four request types in REST:
GET - Most common one that you can call from the browser. Simply retrieves data from the web server
PUT - Usually has fields in the request body to update an already existing entry
POST - Has fields in the request body to create a new entry
DELETE - Deletes an existing entry
Luckily, if you look at the API docs above, you can see that only GET and POST are relevant for the Contact APIs.
Editing an Existing Contact
The first thing you have to do is search HubSpot's database to get the unique ID of the contact before actually editing it. This means we will have to make TWO API calls.
Retrieving the Contact
https://api.hubapi.com/contacts/v1/contact/email/EMAIL/profile?hapikey=HAPIKEY
Replace EMAIL with the email address of the contact you are searching for, and HAPIKEY with the HubSpot-specific key that allows you to make REST API calls to their server. The code snippet is below.
import urllib2
import json

email = "[email protected]"
hapikey = "aj28djbkciaos"
get_url = "https://api.hubapi.com/contacts/v1/contact/email/{0}/profile?hapikey={1}".format(email, hapikey)
vid = None
try:
    request = urllib2.Request(url=get_url)
    response = urllib2.urlopen(request)
    code = response.getcode()
    if code not in (200, 201, 202):
        print "Error: getting Contact information failed, returned code: %s" % code
    contact_obj = json.load(response)
    vid = str(contact_obj['vid'])  # the contact's unique ID
except Exception, e:
    print "Error: getting Contact information failed, reason: %s" % e
This will return a Contact object as JSON and set the variable vid to the unique ID of the contact.
Updating the Contact
https://api.hubapi.com/contacts/v1/contact/vid/VID/profile?hapikey=HAPIKEY
Replace VID with the contact's unique ID and HAPIKEY with the same key as before. If you want to update a field of that contact, you must follow this syntax:
{
  'properties': [
    { 'property': FIELD_NAME1, 'value': VALUE1 },
    { 'property': FIELD_NAME2, 'value': VALUE2 }
  ]
}
I'll demonstrate this in the following code snippet.
import urllib2
import json
import time

vid = "11211"
hapikey = "aj28djbkciaos"
update_url = "https://api.hubapi.com/contacts/v1/contact/vid/{0}/profile?hapikey={1}".format(vid, hapikey)
properties = []
properties.append({'property': 'tenant_activation', 'value': 'Yes'})
properties.append({'property': 'tenant_activation_date', 'value': int(time.time()) * 1000})
data = {'properties': properties}
try:
    # Supplying a request body makes urllib2 issue a POST
    request = urllib2.Request(url=update_url, data=json.dumps(data),
                              headers={'Content-Type': 'application/json'})
    response = urllib2.urlopen(request)
    code = response.getcode()
    if code not in (200, 201, 202):
        print "Error: updating Contact information failed, returned code: %s" % code
except Exception, e:
    print "Error: updating Contact information failed, reason: %s" % e
Node.js and node_modules directory
When I first started out using Node.js and npm (the Node.js package manager) I had no idea how my node processes found the node_modules directory. It turns out Node.js looks for node_modules hierarchically, so if any of your parent directories has one, it will be able to access your downloaded modules. If your node_modules directory is in some unrelated location, however, you can utilize the NODE_PATH environment variable, which can be checked by going to your shell and typing the command:
printenv
If NODE_PATH is not there or is set to a different directory, simply change it by running this command:
export NODE_PATH="/home/moohooo/node_modules"
I also highly recommend keeping your node modules in a single directory; that way they are easier to manage. And keep in mind that if you are running your Node.js process as root via sudo, then you must set the same environment variable for the root user as well.
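To make the hierarchical lookup concrete, here is a hypothetical layout and the order in which require() searches it:

/home/moohooo/projects/myapp/node_modules   <- checked first (next to the running script)
/home/moohooo/projects/node_modules         <- then each parent directory...
/home/moohooo/node_modules                  <- ...all the way up to /
$NODE_PATH                                  <- consulted last if nothing was found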
Mr. Donald Green
Nobody is Superman, but I can try to do my part in improving the world. Working in NYC you see many homeless people on the streets, and nothing is quite as tough as having to pass them by without giving them good news. It's financially impossible to donate to each homeless person I see, but I do what I can.
What happened last night surprised me. I passed someone on the street and gave him the remains of my loose change, which totaled somewhere around 65 cents. I expected him to say "Thank you and God bless" and be done with it, but he took it a step further: he gave me a poem and started telling me about himself.
His name was Donald Green and he seemed poised in sharing with me his proudest moments (one of his crowning achievements was being published by the New York Times). Our talk was only a few minutes long and although I could feel the nearby presence of my friends hinting at me to hurry up I knew that the right thing to do was to stay. Mr. Green had such a joyful look on his face, but I don't think it was about the money. Hell, I didn't even give him a dollar. I think it was more the fact that I took the time to stop and talk to him.
After our brief meeting I shook his hand and thanked him for the poem.
I got to thinking that night about Mr. Green and about homelessness in general. It goes without saying that seeing him in his situation I should feel grateful that I have all that I have, but I think there's another issue at hand. Think about when you are on a subway train, and a homeless man starts talking about his plight and starts asking for spare change. If you were a passenger witnessing this, what would you do? If you look at the other people on the train I guarantee you that the majority of the people won't even look at the unfortunate man... I should know, I was guilty of this as well. What is it that causes us to do this? Are we that selfish that we simply don't care about other people's issues or is it the fact that we are too cowardly to look him in the eye and say "sorry"? If we continue to pretend there are no problems then nothing will be solved. Next time hopefully you won't just ignore your fellow man and at least give him the decency to acknowledge his existence.
I hope I meet you again Mr. Donald Green and next time I will have a $20 bill with your name on it.
Django and PostgreSQL Database Reliability
What is the issue?
Creating a scalable architecture is never easy, and one of the most common issues people run into is making their database more reliable. In most cases the database is a single point of failure: if it is running too hot and crashes, your entire system is usually down as well.
Solutions?
HBase is a great solution since it is made for distributed systems and is horizontally scalable. Need more space? Simply add another node to your Hadoop cluster. Oh no, a node went down? That's okay, HBase replicates the data so nothing is ever lost. Obviously I am oversimplifying everything, but you get the idea.
If, however, your data is relational, then perhaps HBase is not the right answer. A relational database management system (RDBMS) is more for you, and Postgres is a great open source option. Best of all, it works well with Django.
PostgreSQL 9.0 and higher started to address the reliability issue by introducing a new feature called streaming replication. To explain this, assume there are two Postgres databases (one primary and one secondary). The primary database handles all of the CREATE/UPDATE/DELETE SQL calls and asynchronously ships any changes to the secondary database. This causes the secondary database to usually lag slightly behind the primary, but it comes with the advantage of faster database commits (in comparison to synchronously updating the secondary database).
The beauty of this architecture is that you can have one primary and as many secondary databases as you like, which essentially helps spread the load across multiple servers. And with Django 1.2 or higher, multiple databases are supported, so you can easily point all the reads to the secondary databases and all the writes to the primary.
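On the Django side, a minimal sketch of a database router that does this; the alias names ('default', 'replica1', 'replica2') are assumptions that must match your DATABASES setting:

import random

class PrimaryReplicaRouter(object):
    def db_for_read(self, model, **hints):
        # Spread reads across the streaming-replication secondaries
        return random.choice(['replica1', 'replica2'])

    def db_for_write(self, model, **hints):
        # All writes go to the primary
        return 'default'

Register it in settings.py with DATABASE_ROUTERS = ['myapp.routers.PrimaryReplicaRouter'] (the module path here is hypothetical).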
So how do you implement this?
If I can give you one piece of advice, it is that this is much easier to do on a Linux machine than a Windows machine. If you must stick with the Windows OS, then refer to this blog for more help, but I could not get it working for the life of me. Here are the Linux instructions.
Issues?
Like I mentioned before, you will have a syncing issue between the primary and secondary. If you try to read from the secondary database what you had just written to the primary database, you will most likely not be able to retrieve your recent changes. I've read that you could remedy this issue by pointing that specific read (a read right after a write) directly at the primary database. Of course there is not really any documentation on this, but I will be trying to implement it in the near future and follow up with another post.
Additional Resources
http://www.slideshare.net/OReillyOSCON/unbreaking-your-django-application