I post about mobile and web development. Also other things.
Share artifacts with an orphan branch
Over the last several years I've worked with a variety of great hosted services for team code integration:
For build automation - Jenkins, Bitrise, CircleCI, BuddyBuild, TeamCity.
For code analysis - SonarQube, Codacy, Code Climate, Veracode.
For peer review - Github, Bitbucket, GitLab, Gerrit.
When everything works in harmony (checking out files, compiling, running tests, analyzing code structure, assembling artifacts, posting clear results) the whole team sees steady feedback, early in the development cycle, that helps prevent defects, educate new team members, and nudge us all to better design habits.
But for various reasons we can't always set up the perfect recipe of hosted services for every project. If information that we've counted on in the past is missing, unreliable, or hard to access, the impact quickly fades. In the worst cases, the lack of feedback makes us falsely confident.
There are fancy ways to connect build automation services and code analysis services to deployment services and code review services. If these options are available, if you can quickly troubleshoot interruptions, and if your whole team can access the results, then do this. If not, here is a relatively independent alternative:
Most software platforms have standalone tools for code analysis (test execution, test coverage, linting, docgen, etc.) that can generate standalone reports (probably HTML). If you get familiar with how to generate these, you can port them to future projects without relying on an external hosted service or special IDE features. Make a one-step script that generates all of these artifacts.
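As a rough sketch of what that generation script might look like (assuming a Gradle project with the JaCoCo and Detekt plugins applied; the task names and report paths are defaults that vary by project):

#!/bin/bash
# generate-reports.sh - rough sketch of a one-step artifact generator.
# Assumes a Gradle project with the JaCoCo and Detekt plugins applied;
# swap in whatever test, coverage, and lint tasks your platform uses.
set -e

./gradlew clean test jacocoTestReport detekt

# collect the generated HTML reports into one folder for publishing
mkdir -p build/artifacts
cp -R build/reports/tests/test build/artifacts/tests
cp -R build/reports/jacoco     build/artifacts/coverage
cp -R build/reports/detekt     build/artifacts/lint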
Make another one-step script that does this: check out an orphan branch from your code repository into a temporary folder. Remove existing files. Copy in the generated files from your previous script. Commit and push. Print a download link for the commit.
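Here's a bash sketch of that publish step. The branch name, folder layout, and the GitHub-style archive link at the end are assumptions, and it expects the orphan branch to already exist:

#!/bin/bash
# push-artifacts.sh - rough sketch of publishing reports to an orphan branch.
# Assumes the "artifacts" branch already exists and the remote is an HTTPS GitHub URL.
set -e

BRANCH="artifacts"
SRC_DIR="$(pwd)/build/artifacts"
REMOTE_URL="$(git config --get remote.origin.url)"
SRC_SHA="$(git rev-parse --short HEAD)"
TMP_DIR="$(mktemp -d)"

# check out only the orphan branch into a temporary folder
git clone --branch "$BRANCH" --single-branch "$REMOTE_URL" "$TMP_DIR"
cd "$TMP_DIR"

# remove existing files and copy in the generated reports
git rm -r -q . || true
cp -R "$SRC_DIR"/. .
git add -A
git commit -m "Reports for $SRC_SHA"
git push origin "$BRANCH"

# print a download link for this commit (GitHub archive URL format)
echo "Download: ${REMOTE_URL%.git}/archive/$(git rev-parse HEAD).zip"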
Finally copy/paste the download link into your pull request description. Your team can use this link to easily download and browse results. One additional file I include is a root index.html with short descriptions and relative links for each report. This helps everyone know why each report is there and gives a single entry point for everything in the download.
For reference, I've set this up on one of my small, public Github projects. You can see an example pull request that includes a direct link to download reports. You can also see the shell script for pushing artifacts and the shell script for initializing the branch with an index.html. The goal for these scripts is to be relatively generic and portable so they can quickly be applied to future projects. The same goal applies to scripts that generate report artifacts, but this is inherently more platform-dependent. In this example, the script to generate artifacts is Gradle-based and the code analysis tools include JaCoCo, CPD, Detekt, BuildChecks, and JUnit.
Checklist yourself
In 1935, Boeing introduced a heavy bomber that outperformed any other aircraft of its kind. It crashed tragically during a demonstration flight because a routine step was missed by the well-trained and experienced flight crew. As a result, pilots everywhere began to adopt preflight checklists, and failures decreased significantly.
I've written previously about how we do continuous integration, which is essentially an automated checklist verified by a machine. CI provides a level of safety, consistency, and efficiency that's hard to match in any other way. The problem is some details are prohibitively hard to automate and too important to ignore. Our current solution for this is to manually review all code before it "takes flight". We use pull requests for this. What we've been missing, though, is a preflight checklist.
In The Checklist Manifesto, Atul Gawande describes ineptitude (not making use of what we already know) as a greater problem than ignorance (what we don't know). So our development group did a retrospective on code reviews. We made a list of what we already know we're looking for when reviewing code. The things we don't want to forget about in the future. Based on input from that discussion and this handy checklist for creating checklists, I started one:
A checklist for reviewing code
Integration
- [ ] Will merging this code create source conflicts?
- [ ] Is there a clear and concise description of the changes?
- [ ] Did all automated checks (build, test, lint) run and pass?
- [ ] Are there supporting metrics or reports (e.g. test coverage, fitness functions) that measure the impact?
- [ ] Are there obvious logic errors or incorrect behaviors that might break the software?

Readability
- [ ] Is the code self-documenting? Do we need secondary sources to understand it?
- [ ] Do the names of folders, objects, functions, and variables intuitively represent their responsibilities?
- [ ] Could comments be replaced by descriptive functions?
- [ ] Is there an excessively long object, method, function, pull request, parameter list, or property list? Would decomposing make it better?

Anti-patterns
- [ ] Does the code introduce any of the following anti-patterns?
  - [ ] Sequential coupling - a class that requires its methods to be called in order
  - [ ] Circular dependency - mutual dependencies between objects or software modules
  - [ ] Shotgun surgery - a change needs to be applied to multiple classes at the same time
  - [ ] Magic numbers - unexplained numbers in algorithms
  - [ ] Hard code - embedding assumptions about the environment in the implementation
  - [ ] Error hiding - catching an error and doing nothing or showing a meaningless message
  - [ ] Feature envy - a class that uses methods of another class excessively
  - [ ] Duplicate code - identical or very similar code exists in more than one location
  - [ ] Boat anchor - retaining a part of a system that no longer has any use
  - [ ] Cyclomatic complexity - a function contains too many branches or loops
  - [ ] Famous volatility - a class or module that many others depend on and is likely to change

Design principles
- [ ] Does the code align with the following principles?
  - [ ] Single Responsibility - an object should have only one reason to change
  - [ ] Open/Closed - objects should be open for extension, closed for modification
  - [ ] Liskov Substitution - subtypes should not alter the correctness of code that depends on a supertype
  - [ ] Interface Segregation - many client specific interfaces are better than one general purpose interface
  - [ ] Dependency Inversion - dependencies should run in the direction of abstraction; high level policy should be immune to low level details

Last updated: 8/15/2018

Note: This is not a checklist for *approving* or *merging* code, it is a checklist for *reviewing* code. It's a list of questions a reviewer should ask themselves as they review.
Put a motor on your code cycle
There's an old programmer joke:
> Show me a line of code and I'll tell you what's wrong with it,
> Show me five hundred lines of code and I'll say "looks ok to me"
Where I work, we use pair programming and pull requests in our development process. No code is committed to the main branch without peer review.
I'm convinced the investment in extra eyes can pay off for a project of any size. Human brains are uniquely capable of solving hard problems even when objectives are vaguely defined. When we do this together it increases ownership, cohesion, velocity and resilience across the team. This is magical.
But even the collective brainpower of a long-lived, high-functioning team isn't always reliable. The complexities, pressures, and context-switching of a typical day can wear us down. When we add the eye-glazing drudgery of a long pull request at 4 in the afternoon, we're in trouble.
Here's how we're using automation to optimize our review process:
Build checks
Before anyone reviews source code, a machine should do it first. It won't catch everything a human could see, but it's more consistent and much faster. Source control systems (e.g. Github, Bitbucket) usually have an API so build results can be tracked for each code commit. These are commonly called "status checks" or "build checks". As shown below, a green icon indicates a check posted from a build server was successful.
Reviewers see a green or red icon immediately and can avoid wasting time reading code with known problems. Even better, we can make these checks required for merging any code into our main code branch. Now the entire team, whether they review code or not, has confidence that the main branch is always protected.
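To make the mechanics less magical, here's roughly what posting one status looks like against GitHub's commit status API (Bitbucket has an equivalent build-status endpoint). BuildChecks handles this for us; the repo, commit SHA, and URLs below are placeholders:

# Post a "build" status for a commit via GitHub's commit status API
# (OWNER, REPO, the commit SHA, and the URLs are placeholders).
curl -s -X POST \
  -H "Authorization: token $GITHUB_TOKEN" \
  "https://api.github.com/repos/OWNER/REPO/statuses/abc1234" \
  -d '{
        "state": "success",
        "context": "build",
        "description": "Assembled in 3m 12s",
        "target_url": "https://ci.example.com/builds/42"
      }'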
We currently use 3 build checks: build, test, and lint. These can work for almost any project.
To process and post the checks, we use a Gradle plugin called BuildChecks. We use Gradle because it's free, fast, configurable, testable, well-supported, and portable. From one project to the next, we don't always get to use the same development languages, source control system, or build servers, but we want to preserve key processes. BuildChecks can work across multiple languages and source control systems. It can run anywhere Java 7+ is installed. In situations where we aren't able to use a dedicated build server, it can be run from a developer's workstation. We can have the same automated integration protection even on shoestring budgets and timelines.
Build
The "build" check tells us if a build finished successfully and how long it took. The process may be different for each project but is typically some variation of: compile source code, assemble artifacts, run tests, run lint, and deploy artifacts. Having this feedback alone for a pull request review will save considerable time and effort.
Test
The "test" check shows us the percentage of code that is covered by tests. BuildChecks parses output from coverage tools like JaCoCo, Cobertura, Istanbul, Slather, and OpenCover. A minimum threshold for coverage can be set that will cause the check to fail. This is optional. Even without a threshold, the check confirms that tests are running and shows whether coverage has changed between commits.
Lint
The "lint" check tells us if the code violates any predefined standards. BuildChecks parses output from linters like ESLint, TSLint, Detekt, Checkstyle, PMD, SwiftLint, and Android Lint. Each linter has different rule sets that span categories like correctness, security, performance, accessibility, formatting style, internationalization, etc. The lists can be overwhelming at first. Many are language or platform-specific, but one category that has some pretty universal rules is maintainability. If you don't know where to start, this is a good place.
Here's a list of maintainability rules from a multi-language code analysis platform called CodeClimate:
Argument count - Methods or functions defined with a high number of arguments
Complex logic - Boolean logic that may be hard to understand
Method complexity - Functions or methods that may be hard to understand
File length - Excessive lines of code within a single file
Method count - Classes defined with a high number of functions or methods
Method length - Excessive lines of code within a single function or method
Nested control flow - Deeply nested control structures like if or case
Return statements - Functions or methods with a high number of return statements
Similar blocks of code - Duplicate code which is not identical but shares the same structure
Enable any of these that your lint tool supports. Then use pull request conversations to discuss and identify additional patterns you want to add. If the pattern isn't already available, you can write your own custom rule.
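To give a concrete sense of how a lint check turns into a pass/fail signal, here's a minimal sketch using ESLint as an example (the source path and report location are assumptions); most linters exit non-zero when violations are found, which is exactly what a required check needs:

# Fail the check on any lint problem and keep an HTML report for reviewers
# (paths are placeholders; most linters exit non-zero on violations).
npx eslint src/ --max-warnings 0 \
  --format html --output-file build/reports/eslint.html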
Details
In the image above, each check has a hyperlink labelled "Details". This links back to detail on the build server. It's a great way to give reviewers access to the full generated reports from lint and coverage tools as well as build logs and other artifacts. The more context we can provide, the easier it will be for others to give real feedback.
Final thoughts
Automated checks help code reviews scale with consistency. Getting started is as easy as enabling the requirement in your source control system and using a tool like BuildChecks to report it. If you're not already, this puts you on a path to several important development practices: automated builds, unit tests, frequent integration, maintainability standards, and protected branches. Don't worry if you start with low test coverage and high lint violations. You will immediately have a better understanding of your current situation and a way to track your progress.
What automation tricks have you found for improving code review and integration?
Continuous integration for Firebase cloud code
In a previous post I showed how to add type-checking and unit tests to Firebase cloud code (cloud functions and database rules). Those tests are independent from any Firebase environment and independent from each other. We should be able to run them all quickly and consistently in a clean Node.js environment with a single command. We should also be able to chain that command with others so that we can build, test, and deploy each code commit directly into a live Firebase environment.
Here are the steps we want to automate:
Download project source
Download project dependencies
Compile project
Run all tests
Stop if any test fails, otherwise continue
Deploy (cloud functions and database rules) to a Firebase environment
Write a deployment summary to our Firebase environment (Git info, date, test results)
We'll assume we have a machine with Node.js and Git installed. This could be a developer machine or a dedicated continuous integration server (I try to make the execution identical for either if I can). The first three steps are pretty straightforward from the command line:
git clone [email protected]:whatever folder-name
npm install
tsc
Now that we have a compiled project environment, we can use Typescript/Javascript to handle the rest of our steps. From the command line, node can execute a function from a local file like this:
node -e 'require("./build.js").runAllTests()'
Now let's implement a function to run all our tests:
export async function runAllTests() : Promise<TestResultsEntity> {
    const Mocha = require('mocha');
    const mocha = new Mocha();
    mocha.addFile('./test/tests.functions.js');
    mocha.addFile('./test/tests.database.js');
    const results = new TestResultsEntity();
    return await new Promise<TestResultsEntity>((resolve, reject) => {
        mocha.run()
            .on('pass', (test) => { results.passed++; })
            .on('fail', (test, err) => { results.failed++; })
            .on('end', () => { resolve(results) });
    });
}
Here I'm using the Mocha API to point to our test files, run them, and keep track of how many pass and fail. Let's write a deploy function that grabs the test results and handles our last three steps:
export async function deploy() {
    const testResults = await runAllTests();
    const gitSummary = await getGitSummary();
    const summary = `${testResults.getSummary()} on ${getDateSummary()} from ${gitSummary}`;
    if (testResults.hasFailures()) {
        console.log('Deploy failed');
    } else {
        await asyncCommand(`firebase deploy --only functions,database`);
        await asyncCommand(`firebase database:set /lastDeploy -d '"${summary}"' -y`);
        console.log('Deploy succeeded');
    }
    console.log(summary);
}
In the else clause above we run two Firebase CLI commands. The first command deploys our code to the currently configured Firebase environment. The second command writes a record to the database of that environment with a summary of our deployment. This makes it easy for anyone on the team to see:
what code is deployed,
when it happened,
and the results of the tests.
To pull all of this together into a single command, we'll use the package.json file to set up an npm script:

"scripts": {
    "deploy": "npm install && tsc && node -e 'require(\"./build.js\").deploy()'"
}
Now we can run all steps with these two commands:
git clone [email protected]:whatever folder-name
npm run deploy
Finally, here's the ancillary code referenced by the deploy() and runAllTests() functions above:
export async function getGitSummary() : Promise<string> {
    const gitSha = await asyncCommand("git rev-parse --short HEAD");
    const gitBranch = await asyncCommand("git rev-parse --abbrev-ref HEAD");
    return Promise.resolve(`${gitSha.trim()}-${gitBranch.trim()}`);
}

function getDateSummary() : string {
    return new Date().toLocaleString("en-US", { timeZone: 'America/Chicago' }).trim();
}

const exec = require('child_process').exec;

function asyncCommand(command : string) : Promise<string> {
    return new Promise<string>((resolve, reject) => {
        exec(command, function(error, stdout, stderr){
            resolve(stdout);
        });
    })
}

export class TestResultsEntity {
    passed : number = 0;
    failed : number = 0;
    getSummary() : string {
        return `${this.passed}/${this.failed + this.passed} tests passed`
    }
    hasFailures() : boolean {
        return this.failed != 0
    }
}
Types and tests for Firebase cloud code
Firebase makes it cheap and easy to code custom functions and database access rules that run in the cloud along with their standard services. This is great because you can try things out very quickly in a real, shared environment that performs at scale. This simplicity makes it tempting just to verify each change manually in the cloud environment. I think static type-checking and independent unit tests are an even more appealing way to verify our code with confidence. Here's a recipe for eliminating dependencies on Firebase so we can run fast, automated tests before deploying code to the cloud.
Typescript + IDE
Typescript is a superset of Javascript that compiles to plain Javascript. It adds many useful language features like type annotations, interfaces, classes, and generics to Javascript. All features are optional, so you can always ignore types and interop with plain Javascript when you want.
Imagine we have an app where we want to archive posts if they've been flagged too many times. Let's use a Typescript interface to decouple the Firebase dependency from as much of our code as we can.
Say we have a PostEntity class like this:
class PostEntity {
    id : string;
    flags : number;
    maxFlags = 5;
    hasTooManyFlags() : boolean {
        return this.flags >= this.maxFlags
    }
}
...and a function that fetches a post by id, determines if it has too many flags, and archives it if so:
function archiveIfTooManyFlags(postId : string, datasource : PostDatasource) : Promise<void> {
    return datasource.getPost(postId)
        .then(post => {
            if (post.hasTooManyFlags()) return datasource.archivePost(post.id);
            else return Promise.resolve();
        })
}

interface PostDatasource {
    getPost(postId : string) : Promise<PostEntity>
    archivePost(postId : string) : Promise<void>
}
Notice that we defined a PostDatasource interface with methods getPost and archivePost. We can write an implementation of that interface that uses Firebase as our datasource or we could write an implementation that uses something completely different. All of the code we've written so far is independent of that implementation.
Here's what that implementation might look like using Firebase:
class PostDatasourceFb implements PostDatasource {
    db = require('firebase-admin').database();

    getPost(postId: string) : Promise<PostEntity> {
        return this.db.ref(`/posts/${postId}`).once('value')
            .then(snap => snap.val());
    }

    archivePost(postId: string): Promise<void> {
        return this.db.ref(`/posts/${postId}/isArchived`).set(true);
    }
}
To execute this as a cloud function when a post is flagged in our Firebase Database, we do this:
exports.onFlagged = functions.database.ref('/posts/{postId}/flags')
    .onWrite(event => {
        return archiveIfTooManyFlags(
            event.params.postId,
            new PostDatasourceFb()
        );
    });
In addition you can use a Typescript-aware IDE to make discovering the properties of your objects, language features and other javascript dependencies automatic as you write code. If you've used any other OO language with a good IDE you'll know what this means. WebStorm is my choice because I'm already familiar with the shortcuts and organization of JetBrains tools, but there are many others. They all support features like: instant type checking, code assist, inline refactoring, breakpoint debugging, object navigation, source control tracking, etc.
Mocha + Chai + Sinon
Now that we have the language tools to decouple our application code from dependencies, we can set up unit tests that run locally outside of the Firebase cloud environment. Our test libraries are:
Mocha - lets you describe and execute a set of unit tests.
Chai - lets you make assertions within each of those tests.
Sinon - lets you create spies, stubs, and mocks for your test dependencies.
Here's what a test for our archiveIfTooManyFlags function might look like:
describe('archiveIfTooManyFlags', () => {
    it('should call archivePost when there are too many flags', async () => {
        const sinon = require('sinon');
        const post = new PostEntity();
        post.flags = 7;

        const postDatasource = <PostDatasource>{};
        postDatasource.getPost = sinon.stub().resolves(post);
        postDatasource.archivePost = sinon.stub().resolves(null);

        await archiveIfTooManyFlags("123", postDatasource);

        sinon.assert.calledWith(<SinonStub>postDatasource.archivePost, post.id);
        sinon.assert.calledWith(<SinonStub>postDatasource.getPost, "123");
    });
});
First we set up Sinon, and declare a PostDatasource and a PostEntity. Since PostDatasource is an interface, we have to implement at least the methods we plan to use in our test. We don't want to use the PostDatasourceFb class from earlier, because it requires a Firebase environment. We could write a second implementation from scratch that returns some mock values and keeps track of method calls, or we could let Sinon do most of that work for us. Creating our PostDatasource as {} means it exists, but none of the methods have been implemented yet. We use sinon.stub() to stub the behavior of the methods we plan to use: getPost and archivePost. Finally we call archiveIfTooManyFlags and verify that getPost and archivePost were called with the expected arguments.
Now we can run this test and others like it locally without connecting to Firebase. This covers our cloud functions, but what about our database rules?
Targaryen
Targaryen lets us write the same kind of Mocha-based unit tests for our Firebase database rules.
Here's a rule for reading & writing to a post:
{
  "rules": {
    ".read": "false",
    ".write": "false",
    "posts": {
      ".read": "true",
      "$post": {
        ".read": "true",
        ".write": "auth != null && (data.child('userID').val() == auth.uid || root.child('users/' + auth.uid + '/isAdmin').val() == true)",
        "views": {
          ".write": "true"
        }
      }
    }
  }
}
Only admins and authors should be able to write to (edit) a post. Targaryen lets us test this rule locally like this:
describe('posts/aPostId', () => {
    it(`can write if admin or author`, () => {
        targaryen.setFirebaseData({
            users: { adminUser: { isAdmin: true } },
            posts: { aPostId: { userID: "authorUser" } }
        });
        expect({uid: 'adminUser'}).can.write.path('posts/aPostId');
        expect({uid: 'authorUser'}).can.write.path('posts/aPostId');
        expect({uid: 'randomUser'}).cannot.write.path('posts/aPostId');
        expect(null).cannot.write.path('posts/aPostId');
    });
});
Now we can cover all our cloud functions and all our database rules with fast, automated tests! Wouldn't it be great if we had a single command that could confirm all our tests are passing, then deploy the code and results to our Firebase environment? I'll write up my solution to this in a future post.
API Acceptance Tests with Cucumber and Rest-assured
On projects where multiple systems undergo development at the same time, it's crucial to maintain a clear picture of how they should interact. We commonly have a backend system providing a REST API to multiple frontends (browsers, mobile apps, chatbots, IoT, etc.). Because it will likely change over time, keeping the API picture clear and up to date can be a significant challenge. How can we efficiently describe the currently expected behavior and know if it's working as expected so teams don't spin their wheels due to miscommunication?
Cucumber
Cucumber helps us write readable requirements upfront that can be tied directly to executable tests. Here's an example for a guestbook REST API:
Scenario: Read a list of guestbook entries
  Given I'm using the staging API environment
  And the guestbook has at least "2" entries
  When I make a GET request to "/guestbook/entries"
  Then I get a response code of "200"
  And I get a response with at least "2" entries
  And each entry has a "name"
  And each entry has a "date" formatted as a Unix timestamp
Each line in the scenario above represents a discrete Cucumber step. A developer can now write a short block of code to fulfill each step.
There are options in various languages for fulfilling Cucumber step definitions (e.g. Ruby, Javascript, Python, .NET, Java). I chose the Java implementation, Cucumber-JVM, for these reasons:
Works with many out-of-the-box reporting and automation tools - because it's JUnit-based
Intuitive IDE support for code assist, breakpoints, debugging, output formatting, etc. (Intellij and Eclipse)
Easy-to-build HTTP request and response assertions using the Rest-assured library
Cucumber on Java
Using Cucumber-JVM and the Intellij IDE, I get automatically generated step definitions like this:
public GuestbookStepDefinitions() {
    Given("^I'm using the staging API environment$", () -> {
        //short block of code goes here
    });
    When("^I make a GET request to \"([^\"]*)\"$", (String path) -> {
        //another block here
    });
    Then("^I get a response code of \"([^\"]*)\"$", (Integer code) -> {
        //and another
    });
}
Next we fill in the implementations using Rest-assured...
Rest-assured
public class GuestbookStepDefinitions {

    private RequestSpecification request;
    private ValidatableResponse response;

    @Before
    public void before(Scenario scenario) {
        request = RestAssured.with();
    }

    public GuestbookStepDefinitions() {
        Given("^I'm using the staging API environment$", () -> {
            request.given()
                .contentType(ContentType.JSON)
                .baseUri("https://staging.mycompany.com");
        });
        When("^I make a GET request to \"([^\"]*)\"$", (String path) -> {
            response = request.get(path + ".json").then();
        });
        Then("^I get a response code of \"([^\"]*)\"$", (Integer code) -> {
            response.statusCode(code);
        });
    }
}
The Given and When steps are building a request with details for our REST API. The Then step calls response.statusCode(...) which is an assertion of the status code returned by the REST API. If any step fails we get targeted feedback like this:
java.lang.AssertionError: 1 expectation failed.
Expected status code <100> but was <200>.

...
io.restassured.internal.ValidatableResponseOptionsImpl.statusCode(ValidatableResponseOptionsImpl.java:117)
at GuestbookStepDefinitions.lambda$new$8(GuestbookStepDefinitions.java:66)
at ✽.Then I get a response code of "100"(guestbook-entries-read.feature:12)

Failed scenarios:
guestbook-entries-read.feature:9 # Scenario: Read a list of guestbook entries

1 Scenarios (1 failed)
3 Steps (1 failed, 2 passed)
0m1.370s
This output is a bit verbose (we'll worry about report formatting later), but contains important information about the failure.
The step: we see that the "Then I get a response code of 100" step of our "Read a list of guestbook entries" scenario is where we're failing. That means the previous two steps passed successfully.
The expectation: we see that we got a response code of 200 but expected a response code of 100. If we change the expected status code back to 200, we should get a passing test:
Scenario: Read all guestbook entries            # guestbook-entries-read.feature:9
  Given I'm using the staging API environment   # GuestbookStepDefinitions.java:89
  When I make a GET request to "/guestbook"     # GuestbookStepDefinitions.java:59
  Then I get a response code of "200"           # GuestbookStepDefinitions.java:65

1 Scenarios (1 passed)
3 Steps (3 passed)
0m1.937s
Since we're making HTTP calls it'd be nice to see the request and response details too. We can tell Rest-assured to print those along with our test results:
Request method: GET
Request URI:    https://staging.mycompany.com/guestbook/entries.json
Proxy:          <none>
Request params: <none>
Query params:   <none>
Form params:    <none>
Path params:    <none>
Multiparts:     <none>
Headers:        Accept=*/*
                Content-Type=application/json; charset=UTF-8
Cookies:        <none>
Body:           <none>

HTTP/1.1 200 OK
Server: nginx
Date: Mon, 10 Apr 2017 19:09:16 GMT
Content-Type: application/json; charset=utf-8
Content-Length: 4830
Connection: keep-alive
Access-Control-Allow-Origin: *
Cache-Control: no-cache
Strict-Transport-Security: max-age=31556926; includeSubDomains; preload

{
    "-KgBbHUcv2NWn2M6tzGp": {
        "comment": "Hello Guestbook",
        "name": "Test User",
        "timestamp": 1490565277672
    },
    "-KgBbzZE2WtRD9wz1t-D": {
        "comment": "Hello Guestbook",
        "name": "Test User",
        "timestamp": 1490565462287
    }
}
Reporting
Now when we run this test we immediately know three things:
1. What we expect to happen (the Given-When-Then statement)
2. How to make it happen (the printed request and response)
3. Is it currently working as expected (Pass or Fail)
Having this feedback continuously throughout development mitigates communication issues early before teams waste time heading in different directions. The Cucumber steps that everyone can read tie directly to the gritty HTTP definitions that developers need, and we can drop it all on a CI server to generate formatted reports visible to the whole team.
Here's an example of formatted results from the Intellij IDE:
On the left we have a collapsible, colored outline of our features, scenarios, and steps. We can select anything in the tree and see corresponding details on the right.
And here's a standalone HTML report:
Again we have a collapsible, colored outline that documents the expected behavior and HTTP details.
Process
This approach is designed to drive collaboration early in the process so it's a great chance to work in pairs. Pairing a frontend developer and a backend developer can help start the conversation about how systems should interact. Getting other roles like analysts, designers, and testers involved can level-set everyone's understanding of how the product is supposed to work. As soon as we have requirements for our first feature, we can start writing tests. The code required to fulfill step definitions should be easy enough for any developer to pick up quickly regardless of language choice. I prefer to put API acceptance tests in a separate repository apart from any other production code. This limits external dependencies from affecting our ability to write and run the tests.
Serverless apps with Firebase
Firebase is a set of backend platform services (owned by Google and closely integrated with the Google Cloud Platform) for building web and mobile apps. They have SDKs for Android, iOS, web, C++, Unity, Node.js, and Java. Their generous free tier makes it easy to launch fully functional apps to a modest user base without cost.
Free and unlimited features include: Authentication, Analytics, App Indexing, Cloud Messaging, Crash Reporting, Dynamic Links, Notifications, Remote Config
Free features with usage limits: Realtime Database, Cloud Functions, Hosting, Storage, Test Lab
There is way too much to cover here. If you want all the details, their website docs are some of the best I've encountered. You can also demo many features right from the web console. Here are a few of the highlights that I think can considerably reduce effort and improve quality for app development.
Sign in/up simply
Firebase Authentication provides email, social, anonymous, and custom sign in methods out of the box. Accounts can be managed and each method enabled or disabled from the Firebase web console. Access tokens are based on the JWT feature of OpenID Connect, which encodes signed, portable authorization data in each token. This means multi-system architectures can share and verify tokens without the expense of server-to-server callbacks on client requests.
Realtime apps are no longer a luxury
The Realtime Database is a NoSQL cloud database with REST and SDK (websocket) support. Data is synced across all clients in realtime, and remains available even when offline. Network calls, cache updates, device resources, and intermittent connectivity are managed automatically by the SDK. Clients simply listen for data changes and react with UI updates.
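The SDKs are the usual entry point, but the REST surface illustrates the model nicely: every database path is addressable as JSON, and the same URL can be held open as a stream so the server pushes changes as they happen. A minimal sketch (the project URL is a placeholder):

# read a path as JSON over the Realtime Database REST API
curl 'https://<your-project>.firebaseio.com/posts.json'

# stream the same path: the server pushes a Server-Sent Event on every change
curl -N -H "Accept: text/event-stream" \
  'https://<your-project>.firebaseio.com/posts.json'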
Free backend - if needed
For the most part Firebase requires no server-side code, but if you want something to be handled in a trusted backend environment (e.g. push notification logic), there are two relatively simple and free-tier options: Cloud Functions and App Engine. Cloud Functions is a hosted, private, and scalable Node.js environment where you can run JavaScript code and interact with Firebase. App Engine is a hosted, private, and scalable environment that supports Java, Python, PHP, and Go. If you're building an Android app, App Engine is a convenient option because your Java code and IDE tools can be shared between the two.
Deep links that survive installs
A Dynamic Link is a deep link that can survive the optional app installation step (on Android and iOS) or fall back to a web link if the user is on a desktop machine. Firebase will generate short links that contain all the details required. This works great for letting users invite their friends to an app or tracking referral codes.
Final thoughts
Serverless isn't the right approach for every situation, but the potential for reduced effort, early feedback, and tighter operational management is compelling - especially in the early life stages of an app. Firebase is one of the most complete Backend-as-a-Service platforms, but the field is still fairly new. The competition will evolve and so will your app. Design principles like dependency inversion can help minimize these future risks.
Further Reading
https://scotch.io/bar-talk/a-look-at-the-new-firebase-a-powerful-google-platform
https://firebase.google.com/docs/
https://martinfowler.com/articles/serverless.html
Reuse Android code without remote dependencies
You're starting to write similar code project after project and think it would be useful to establish some base components that can be reused across multiple projects. First you create a library module to isolate your code. Then you need to figure out how you're going to include it in each project.
If it's open source, hosting it in a public maven repository like JCenter/Maven Central is a good solution. If the code can't be shared publicly, the simplest way I've found is this:
Step 1
Copy the packaged library file:
MyReusableLibrary/build/outputs/aar/MyReusableLibrary-release.aar
and paste it into the app/libs directory of each project you want to use it in.
Step 2
Add the following to the app/build.gradle of each project
repositories {
    ...
    flatDir {
        dirs 'libs'
    }
}

dependencies {
    ...
    compile ':MyReusableLibrary-release.aar'
}
What I like about it
It's self-contained for library consumers. If a developer tries to download and build a project that uses this library, there are no additional commands to run and no extra credentials to track down.
It's low maintenance for library providers. You can host the aar files directly in your source repository or wherever else makes sense. You don't need to worry about maintaining a private maven server.
What I don't like about it
It doesn't include transitive dependencies automatically. Since there is no associated pom file, you need to describe them along with your library. If the example library above depended on the Android Support Library, the project dependencies would need to include both:
dependencies {
    ...
    compile ':MyReusableLibrary-release.aar'
    compile 'com.android.support:appcompat-v7:24.2.0'
}
Additional tips
Add the following to the build.gradle of your library module to store versioned aar files:
android {
    defaultConfig {
        versionName "1.0"
    }
    buildTypes {
        release {
            archivesBaseName = "${project.name}-${android.defaultConfig.versionName}"
        }
    }
}

task copyAar(type: Copy) {
    from('build/outputs/aar')
    into('../app/libs')
    include(archivesBaseName + '-release.aar')
}
copyAar.dependsOn assemble
Running ./gradlew copyAar assembles the file MyReusableLibrary-1.0-release.aar and copies it to the libs folder.
Have a sample app module in your library project so you can test your library before using it in other projects. See the Picasso library from Square (or many of the other Android library projects on Github) as an example. The /picasso directory is the library and /picasso-sample directory is the sample app. You can also use this sample app module to test the flatDir dependency approach described above.
Observable cache policies with Shelf
Mobile apps often depend on network data that needs to be stored locally, restructured, and periodically refreshed. You'll find plenty of implementation options for http calls, serialization, disk storage, encryption, etc. You'll also find plenty of changing requirements that make you want to replace the choices you made a month ago.
Wouldn't it be nice to minimize the dependencies your application code has on these persistence details?
Wouldn't it be extra nice to defer complex and highly optimized implementations you may never need?
I've applied these ideas to a storage library I wrote called Shelf. Shelf aims to be a simple but extendable interface for local storage in Java. Out of the box it uses Gson for serialization, flat files for storage, and RxJava for observable cache policies. Rx is great for asynchronous data operations. It helps separate what you want to do with data from how and when it becomes available.
Over and over I found myself writing code to do this:
Get cached data if available, then get new data if the cache is too old.
I also frequently repeated this:
Get cached data if available and not too old, else get new data.
Shelf wraps these policies (and others) into Rx Observables and Transformers:
cacheThenNew
cacheOrNew
newOnly
pollNew
cacheThenPollNew
A simple example
To demonstrate, here's a contrived example of printed strings. We start with a cached string value that's good for 1 millisecond and an observable to fetch a new value (imagine the new value call is slow, e.g. relies on the network):
shelf.defaultLifetime(1, MILLISECOND);
shelf.item("myString").put("cached value");

Observable<String> myObservable = Observable.fromCallable(() -> "new value");
Now we use the cacheThenNew policy:
//Prints "cached value" then prints "new value".
myObservable
    .compose(shelf.item("myString").cacheThenNew(String.class))
    .subscribe(s -> System.out.println(s));
Using the cacheOrNew policy looks like this:
//Prints "new value" if the cache is older than 1 millisecond,
//otherwise it prints "cached value".
myObservable
    .compose(shelf.item("myString").cacheOrNew(String.class))
    .subscribe(s -> System.out.println(s));
Network data example
Imagine we want to display a list of blog articles. We have a basic Article object like this,
public class Article {
    String title;
    String url;
    String category;
}
And a Retrofit service to fetch a list of articles from the web,
public interface ArticleService {
    @GET("/articles")
    Observable<Article[]> getArticles();
}

ArticleService service = retrofit.create(ArticleService.class);
And a view interface to display the articles once they're available,
public interface ArticlesView {
    void displayArticles(Article[] articles);
}
Now all we need to populate the view with articles is this:
service
    .getArticles()
    .compose(shelf.item("articles").cacheThenNew(Article[].class))
    .subscribe(articles -> view.displayArticles(articles));
Whether we have old cache, new cache, or no cache, Shelf has built-in policies that will react appropriately, update the cache, and emit results to the view. Best of all our view, cache, and network implementations are independent. RxJava makes it easy to replace any part of the equation in isolation.
Finally, let's imagine that our articles service actually returns thousands of articles from dozens of categories and we only want the ones about baseball. Assuming we have a filterBy(category) function like this,
public Func1<Article[], Article[]> filterBy(final String category) {
    return (articles) -> {
        List<Article> filteredArticles = new ArrayList<>();
        for (int i = 0; i < articles.length; i++) {
            if (articles[i].category.equals(category)) {
                filteredArticles.add(articles[i]);
            }
        }
        return filteredArticles.toArray(new Article[0]);
    };
}
Then our chain from above would look like this,
service
    .getArticles()
    .map(filterBy("baseball"))
    .compose(shelf.item("articles").cacheThenNew(Article[].class))
    .subscribe(articles -> view.displayArticles(articles));
Additional info
If you're interested in using the caching policies described above, but don't want to use Shelf, take a look at the RxCacheable class in the Shelf library. It should work for most caching implementations and only depends on RxJava.
If you want to learn the basics of RxJava, specifically for Android, Dan Lew has some great posts. In one particular article he talks about using it to manage data from multiple sources (memory, disk, and network) which was the inspiration for Shelf's cache policies.
3 ways to reach Android users beyond your app
If you have an app in the Google Play store and are looking for better ways to connect with users, there are some powerful Android utilities you should know about. Users don't need to have your app open or even installed to reap the benefits. Here are 3 tools for reaching Android users beyond your app.
Deep links from Google search results
By implementing Google's App Indexing API, users will see Google search results and autocomplete suggestions that take them directly to specific sections in your app. Even when the app isn't installed, users will see results in relevant searches and can walk through a few steps to install and land in your app.
Just as URLs are used to index pages of your website, deep links must be defined to index the sections of your app. Although not required, deep links should align with corresponding webpages when they exist. This will streamline traffic flows across web and app experiences. For example, when an Android user selects your content link in a search result (or marketing email, or device notification), the system can decide whether the link should direct them into the installed app or fall back to the website.
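A quick way to sanity-check that routing during development is to fire the link at a connected device with adb and see whether it opens your app or falls back to the browser (the URL below is a placeholder for one of your indexed links):

# open a content link on the device as if the user tapped it
adb shell am start -a android.intent.action.VIEW \
  -d "https://www.example.com/items/42"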
App installs without leaving your website
If you offer "Sign in with Google" on your website, users can not only skip a lengthy registration process, they'll have the option to install your Android app on their device instantly without leaving your website. This can significantly reduce barriers to user activation. Google reports that developers see a 40% acceptance rate from the app install prompt. Here's what the process looks like:
1. Website Sign in

2. Choose devices

3. App installed

Note: this install prompt appears only if: 1) The user signs in with a Google+ account tied to an Android device. 2) The app is in the Google Play store. 3) The app is free. 4) The app has a minimum of 10 ratings, with a minimum rating of three stars.
Interactive notifications
Android provides some special notification features that don't exist yet for iOS. This includes embedding images, controlling media playback, expandable details, continuous display updates, categorization, prioritization and more. Apps can leverage these features to create quick and easy user interactions without ever launching an app. Here are a few examples.
Expandable notifications allow a user to see more detail and perform convenient actions without leaving the notification list.

The Big Picture style is a special type of expandable notification that can contain images.

Notification display updates allow an app to provide immediate feedback in the notification when something changes. In the example below the "Check-in" button changes to a disabled "Checked-in" button when selected by the user.
Honorable mention - Google Now integration
This one doesn't technically have anything to do with your app and isn't exclusive to Android, but since many Android users interact with Google Now on a daily basis I'm giving it an honorable mention.
Full third-party integration is being slowly rolled out which will ultimately allow things like custom Now Cards and voice activated deep links into your app. Until then you can get limited integration for your products and services by adding markup to your user email notifications. Users who connect Google Now to their Gmail account will see Now Cards for the email notifications you send to users.
Final thoughts
The tools above will help extend the reach and discoverability of your products and services on the Android platform. The features they enable are already familiar to users. They're expecting them and will welcome the enhancements and conveniences they provide.
Android Material Transitions
This Android project samples some Material Design-ish transitions for list items and floating action buttons. It uses the shared element concept introduced in Android 5.0. I tried to pull it off with pure fragment transitions and ran into a few snags (see below), so my current solution uses an activity transition for each step.
Activity Transition tricks:
Generate a background bitmap immediately before the transition and pass it to the called activity
Suppress the view overlay (used by default for activity transitions) to keep shared elements behind the toolbar & system bars
Fall back to fade and scale activity transitions when shared element transitions aren't available (pre-Android 5.0 devices)
Mocking backend features in an Android app
Many of the apps we build rely on backend web services. During development these systems are often still in flux. Test environments go on and offline. Web services change. Data from the server is inconsistent or incomplete. This article outlines a few techniques to mock external network dependencies so development and user testing can keep moving.
Dev settings
Android's Setting/Preference API is a handy way for developers and testers to configure mock features directly in your app.
Jake Wharton's u2020 project wraps settings like these in a clever "Debug Drawer" that's completely hidden until the user swipes the app from the right bezel. You don't have to get this fancy, but the goal is to make them easily accessible from anywhere in the app without altering your interface layouts.
Local Web Service Stubs
The first mock setting shown in the image above selects a backend environment. Typical choices are: Dev environment, Test environment, and Prod environment which each point to a different set of external web services by changing a base URL. In addition to external environments like these, it's quite useful to have mock environment options that rely only on locally stored stub files. This isolates your app from flaky servers while still exercising most of the http and serialization logic used for real network requests. See Matt Swanson's article outlining this and 3 alternative approaches for mock data integration.
First we copy HTTP responses from real web service calls and paste them into files in the res/raw directory of our app:
MyAndroidProject
|_app
  |_src
    |_main
      |_res
        |_raw
          |_ get_some_web_service.stub
          |_ get_another_web_service.stub
          |_ empty_get_some_web_service.stub
          |_ empty_get_another_web_service.stub
The example above shows local stub files for two web service endpoints (/some/web/service and /another/web/service) and two scenarios - empty and default (where default stubs have no prefix and empty stubs are prefixed by "empty_"). The contents of these files are typically JSON.
Next we write a custom HTTP client to fetch from these local files instead of the network. The following example implements the Client object from Square's Retrofit library. If you're not familiar with Retrofit, the default Client implementation is OkClient. It makes a standard HTTP network request. We're writing a custom client to fake a network request and get the response data from a local file instead of a network server.
public class LocalStubClient implements Client {

    private Context context;
    private String scenario;

    public LocalStubClient(Context context, String scenario) {
        this.context = context;
        this.scenario = scenario;
    }

    public LocalStubClient(Context context) {
        this.context = context;
    }

    public void setScenario(String scenario) {
        this.scenario = scenario;
    }

    @Override
    public Response execute(Request request) throws IOException {
        //get resource id for local file
        String fileName = createFilename(request, scenario);
        int resourceId = getResourceId(fileName);
        if (resourceId == 0) {
            //fallback to default filename
            fileName = createFilename(request, null);
            resourceId = getResourceId(fileName);
            if (resourceId == 0) {
                throw new IOException("Could not find res/raw/" + fileName + ".stub");
            }
        }

        //get input stream & mime for local file
        InputStream inputStream = context.getResources().openRawResource(resourceId);
        String mimeType = URLConnection.guessContentTypeFromStream(inputStream);

        //wrap local stream in retrofit objects
        TypedInput body = new TypedInputStream(mimeType, inputStream.available(), inputStream);
        Response response = new Response(request.getUrl(), 200, "Stub from res/raw/" + fileName, new ArrayList<Header>(), body);
        return response;
    }

    private String createFilename(Request request, String scenario) throws IOException {
        URL requestedUrl = new URL(request.getUrl());
        String requestedMethod = request.getMethod();
        String prefix = scenario == null ? "" : scenario + "_";
        String filename = prefix + requestedMethod + requestedUrl.getPath();
        filename = filename.replace("/", "_").replace("-", "_").toLowerCase();
        return filename;
    }

    private int getResourceId(String filename) {
        return context.getResources().getIdentifier(filename, "raw", context.getPackageName());
    }

    private static class TypedInputStream implements TypedInput {
        private final String mimeType;
        private final long length;
        private final InputStream stream;

        private TypedInputStream(String mimeType, long length, InputStream stream) {
            this.mimeType = mimeType;
            this.length = length;
            this.stream = stream;
        }

        @Override
        public String mimeType() {
            return mimeType;
        }

        @Override
        public long length() {
            return length;
        }

        @Override
        public InputStream in() throws IOException {
            return stream;
        }
    }
}
Finally we check our "Backend Environment" setting to determine if we should use our LocalStubClient or the default OkClient:
RestAdapter.Builder builder = new RestAdapter.Builder();
String selectedBackend = getSharedPreference(R.string.pref_backend, "");

switch (Integer.parseInt(selectedBackend)) {
    case R.string.test_server:
    default:
        builder.setClient(new OkClient());
        builder.setEndpoint("http://test.api.mydomain.com");
        break;
    case R.string.demo_server:
        builder.setClient(new OkClient());
        builder.setEndpoint("http://demo.api.mydomain.com");
        break;
    case R.string.default_local_stubs:
        builder.setClient(new LocalStubClient(app));
        break;
    case R.string.empty_local_stubs:
        builder.setClient(new LocalStubClient(app, "empty"));
        break;
}

RestAdapter adapter = builder.build();
Notice we have two local stub options. One is "default" and one is "empty". This way we can quickly check how our app handles empty scenarios and then go right back to "default" responses.
Simulate network delays and failures
Retrofit also allows us to simulate network delays, variance, and failures. Here's code to enable/disable a mock network adapter based on our second Dev setting:
RestAdapter.Builder builder = new RestAdapter.Builder();
//...
RestAdapter adapter = builder.build();

if (getSharedPreference(R.string.pref_mock_network, false)) {
    MockRestAdapter mockRestAdapter = MockRestAdapter.from(adapter);
    mockRestAdapter.setDelay(1000);
    mockRestAdapter.setVariancePercentage(50);
    mockRestAdapter.setErrorPercentage(10);
    return mockRestAdapter.create(serviceType, adapter.create(serviceType));
}
return adapter.create(serviceType);
Conclusion
We're continuing to improve our techniques for mocking backend features but have found these to be useful and easy to apply (if you're using Retrofit). The extra work to build quick settings access into your app will pay for itself as your backend systems undergo varying levels of reliability and consistency. Anyone else have tips or techniques like these for managing app backends?
Automatically boot and unlock an Android emulator
For all you DRY-conscious developers out there who want to eliminate some repetitive manual steps when testing against an Android emulator, here's a simple bash script. It boots the emulator if it's not already running, waits for the full boot, and then unlocks the lock screen. It's pretty handy for running automated tests against an emulator with something like Calabash.
#!/bin/bash

# init variables
DEVICE_ID="@nexus5"
EMULATOR_CHECK_STATUS="adb shell getprop init.svc.bootanim"
EMULATOR_IS_BOOTED="stopped"

# see if emulator needs to be booted
OUT=$($EMULATOR_CHECK_STATUS)
if [[ ${OUT:0:7} == $EMULATOR_IS_BOOTED ]]
then
    # do nothing if already booted
    echo "Emulator already booted."
else
    # Start the emulator in the background
    emulator $DEVICE_ID &    # for Android SDK emulator
    # /Applications/Genymotion.app/Contents/MacOS/player --vm-name "$DEVICE_ID" &    # for genymotion emulator

    # wait for full emulator boot
    OUT=$($EMULATOR_CHECK_STATUS)
    while [[ ${OUT:0:7} != $EMULATOR_IS_BOOTED ]]; do
        OUT=$($EMULATOR_CHECK_STATUS)
        echo "Waiting for emulator to fully boot..."
        sleep 3
    done
    echo "Emulator booted!"
    adb shell input keyevent 82    # unlock lock screen
fi
DevAppsDirect is a great tool for Android designers and developers. Install it on any Android device and get quick access to demo UI patterns, custom views, widgets, game engines, and other libraries.
Easy local stack management with Vagrant
I’m using Vagrant to hand-off complete copies of my local development environment to other members of my team. This is a great way to lower setup time, isolate dependencies and eliminate inconsistencies. Frontend developers are able to work against a full local environment without wasting time on backend configuration. The following describes how Vagrant can make this possible without any additional provisioning tools. Those tools are powerful and offer even more efficiencies, but we’re leaving them out in the interest of simplicity.
A sample environment
On my MacBook Pro, I built a VM that runs the entire technology stack for a web application I’m currently working on called Jude. It’s a VirtualBox VM with things like Linux, Apache Web Server, MySQL, PHP, Memcache, APC, Drush, and Apache Solr installed and configured to work together. The codebase is checked out from a remote SVN code repository to a local directory on my Mac that’s also shared to the VM. I can use my regular Mac text editor (NetBeans) to edit code locally, and the changes are immediately available in the running VM. I can also use the command line (SSH), a database explorer (Sequel Pro), and a breakpoint debugger (NetBeans) to inspect the running web app.
Vagrant, Chef, and the Drupal Vagrant project made most of this configuration automatic, but manual configuration would have worked just as well. The point being it doesn't matter how the initial VM gets created or what the technology stack is. It just matters that we set it up once and we want an easy way to copy it to another machine.
Sample workflow for spinning up a new VM copy
Step 1: Package the source VM
First we need to package up the initial VM from the source machine and make it available for download. The following command packs up the VirtualBox VM called vagrant_1374870184 and creates a file on the source machine called jude.box.
vagrant package --base vagrant_1374870184 --output jude.box --vagrantfile box.Vagrantfile
The box file then needs to be copied to the target machine or uploaded to a publicly accessible URL.
Step 2: Install the target VM
On the target machine we need to install VirtualBox and Vagrant, then open up a terminal window and run the following commands.
mkdir <project-directory>
cd <project-directory>
svn co https://path/to/repo/trunk/htdocs public/dev-site.vbox.local/www/
echo -e '33.33.33.10\tdev-site.vbox.local' | sudo tee -a /etc/hosts
vagrant init jude https://path/to/file/jude.box
The first three lines create the project directory and checkout the codebase from a remote SVN repository. The public directory in the checkout location is the directory that will be mounted to the VM via NFS. The dev-site.vbox.local/www directory represents the web root of an Apache vhost on our VM.
Line four adds the site’s domain alias to our local hosts file. 33.33.33.10 is the IP address we defined in the Vagrantfile on our source VM and dev-site.vbox.local is the vhost we defined in the source VM's Apache conf.
Line five uses the source box file we packaged in the first step to initialize the target VM configuration. We now have a file called Vagrantfile in our project directory where we could override environment settings if we needed.
Step 3: Start the target VM
Now we're ready to start up the new VM by running:
vagrant up
The first start-up may take a few minutes, especially if the box file is remote and has not been downloaded yet. Future start-ups will be much faster. Once this is complete we can access our new local copy of Jude at http://dev-site.vbox.local.
Further possibilities
At my company, we use a similar technology stack across many of our projects. Vagrant can be used to manage reusable VM components across all these projects. In addition to developer workstation installations, these could be used to spin up identical development, testing and production environments. Say goodbye to "works on my machine" bugs.
If you're interested in using this approach on your next project and you're using Drupal, be sure to check out Drupal Vagrant. It made the setup of my initial VM really simple. The only piece that needed to be manually configured was Apache Solr.
Text
A clean, lightweight, pure JS alternative for anchoring file fields to Drupal WYSIWYGs
jquery.modalize.js is a lightweight, pure-javascript approach to automatically turn part of any web page into a modal overlay.
I originally wrote it as a simple alternative for associating file upload fields with WYSIWYGs in Drupal, but it can be used to modalize any chunk of HTML and significantly clean up overloaded pages (Drupal or non-Drupal).
The original dilemma
I've never loved any of the solutions for associating image and file uploads with WYSIWYGs in Drupal, yet I've had to do it on almost every project I've worked on. The Media module and the Wysiwyg Fields module are both ambitious projects that attempt this feature (among other things). Unfortunately I've run into issues with both. Because they're complex modules, they're difficult to troubleshoot and hard to move away from if they don't work out.
My usual solution
I normally end up sticking a multi-value image field and a multi-value file field underneath the WYSIWYG and then using the excellent Insert module to allow content editors to "send" HTML for the uploaded files to the WYSIWYG.
This is reliable and works nicely, but has a few drawbacks:
It takes up a lot of screen real estate - especially if you upload many files
The WYSIWYG associations aren't immediately intuitive to users - sometimes they have to scroll to see the whole picture
If you have more than one WYSIWYG on a page it's even harder to infer the associations.
The new modally-powered solution
With jquery.modalize.js, I start with my usual solution (as described above), add a single line of jQuery to turn my file fields into modals, and attach them to every WYSIWYG on the page.
To use it you just need jQuery, jquery.modalize.js, and a line of code like this:
$.modalize('#edit-field-image-attachments', '+ Attach images', '.field-type-text-with-summary');
This turns the #edit-field-image-attachments element into a hidden modal, replaces it with a link labeled + Attach images and prepends the link to all elements with the .field-type-text-with-summary class (covers WYSIWYG fields in Drupal). Clicking on the link will open the modal as a page overlay. The third argument is optional and if not provided, the modal link will be attached in the original element's DOM location.
As a bonus, Modalize is Insert module aware, so clicking the Insert button will automatically close the modal and show the user the WYSIWYG it was inserted into.
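If you'd rather wire the call up from the Drupal side instead of a theme script, a small form alter is one option. This is only an illustrative sketch; the module name, form ID, and file path below are assumptions, not part of jquery.modalize.js itself:

<?php
/**
 * Hypothetical example: attach jquery.modalize.js to article node forms.
 * "mymodule", the form ID, and the js path are placeholders.
 */
function mymodule_form_article_node_form_alter(&$form, &$form_state) {
  $path = drupal_get_path('module', 'mymodule');
  // Load the plugin (assumes it was copied into the module's js folder).
  drupal_add_js($path . '/js/jquery.modalize.js');
  // Modalize the attachments field and anchor the link to the WYSIWYG fields.
  drupal_add_js("jQuery(function ($) {
    $.modalize('#edit-field-image-attachments', '+ Attach images', '.field-type-text-with-summary');
  });", 'inline');
}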
Text
Single Page Interface with Drupal
We recently built a community app in Drupal. It has:
a fully abstracted (no Drupal), single page (no reloads) frontend
a web service enabled Drupal backend
an integrated Drupal overlay for edit and admin pages
Here’s how we did it:
Setting up Drupal web services
The Drupal Services module provides a standard framework for defining web services. This was our foundation for making Drupal data available to an external application. Out of the box it supports multiple interfaces like REST, XMLRPC, JSON, JSON-RPC, SOAP, and AMF, and provides functional CRUD services for core Drupal entities like nodes, users, and taxonomy terms. The response data is structured much like the Drupal entity objects you’re used to seeing in PHP and provides all the same data. We added our own group of “UI” services to clean up these objects and strip out some of the data that isn’t relevant to the UI layer (lots of [‘und’] arrays). I’m hoping to make this into a contrib module sometime soon.
A response to the standard Services resource /rest/user/<uid>.json looks something like this:
{ uid: "222", name: "tway", field_first_name: { und: [ { value: "Todd", format: null, safe_value: "Todd" } ] }, field_last_name: { und: [ { value: "Way", format: null, safe_value: "Way" } ] }, field_location: { und: [ { tid: "604" } ] }, field_department: { und: [ { tid: "614" } ] }, ... }
And a response to our UI resource /rest/user/<uid>/ui.json looks like this:
{ uid: "222", name: "tway", field_first_name: { label: "Todd" }, field_last_name: { label: "Way" }, field_location: { id: "604", label: "KC-Airport", type: "taxonomy_term" }, field_department: { id: "614", label: "Technology", type: "taxonomy_term" }, edit_link: "user/222/edit", display_name: "Todd Way", ... }
Making authenticated web service requests from your frontend app
The Services module comes with support for session-based authentication. This was essential for our app because we did not want any of our content or user data to be publicly available. Each request had to be associated with an authorized user. So basically, if a valid session key is set (as a request header cookie) on a service request, Drupal will load the user associated with that session key - just like any standard Drupal page request. There are two ways to accomplish this with an external frontend.
Option 1: Run your app on the same domain as Drupal
If your app can run on the same web domain as the Drupal services backend, you can use the built-in Drupal login form to handle authentication for you. It will automatically set the session key cookie and pass it on any service requests from the browser on that domain. So for example if your Drupal site is at http://mysite.com and your Drupal login is at http://mysite.com/user, your UI app will be at something like http://mysite.com/my-ui-path (more on how to set this up later).
To make a jQuery-based service request for a user object you would simply need to do this:
(function ($) {
  $.getJSON('/rest/user/1.json', function(data) {
    console.log(data);
  });
})(jQuery);
The response data, if the request was properly authenticated, would be:
{ "uid":"1", "name":"admin", "mail":"[email protected]", "created":"1354058561", "access":"1363899033", "login":"1363725854", "status":"1", "timezone":"America/Chicago", "roles":[ 2 ], }
and if unauthenticated would be:
[ "Access denied for user anonymous" ]
Option 2: Run your app on a separate domain
If your app will be on a separate domain, it will need its own server (e.g. node.js) to proxy all authenticated service requests. One reason for this is that web browsers do not allow the Cookie header to be set on XMLHttpRequest from the browser (see the W3C Spec). You can get around this on GET requests with JSONP if you do something like this:
$.ajax({
  type: 'GET',
  url: 'http://mysite.com/rest/uiglobals.jsonp?callback=jsonpCallback',
  async: false,
  jsonpCallback: 'jsonpCallback',
  contentType: "application/json",
  dataType: 'jsonp',
  success: function(json) {
    console.log(json);
  },
  error: function(e) {
    console.log(e.message);
  }
});
However JSONP does not allow POST requests, so this is not a complete solution. For more details, check out this article.
Your proxy server will need to call an initial login service request (already part of the services module) on behalf of the client browser that takes a username and password and, if valid, returns a session key. The server then needs to pass the session key in the Cookie header on all service requests. If you were using a second Drupal site for your proxy, the PHP would look something like this:
<?php
function example_service_request() {
  $server = 'http://example.com/rest/';

  // login request - we need to make an initial authentication request before requesting protected data
  $username = 'username';
  $password = 'password';
  $url = $server . 'user/login.json';
  $options = array(
    'method' => 'POST',
    'headers' => array('Content-Type' => 'application/json'),
    'data' => json_encode(array(
      'username' => $username,
      'password' => $password,
    )),
  );
  $result = drupal_http_request($url, $options['headers'], $options['method'], $options['data']); //d6
  //$result = drupal_http_request($url, $options); //d7
  if ($result->code != 200) {
    drupal_set_message(t('Authentication error: ') . $result->status_message, 'error');
  }
  $login_data = json_decode($result->data);

  // build the session cookie from our login response so we can pass it on subsequent requests
  $cookie = $login_data->session_name . "=" . $login_data->sessid . ";";

  // user search request
  //$url = $server . 'search/user_index/ui.json';
  $url = $server . 'search/user_index/ui.json?keys=joe&sort=field_anniversary:DESC';
  $options = array(
    'method' => 'GET',
    'headers' => array(
      'Content-Type' => 'application/json',
      'Cookie' => $cookie, // add our auth cookie to the header
    ),
    'data' => NULL,
  );
  $result = drupal_http_request($url, $options['headers'], $options['method'], $options['data']); //d6
  //$result = drupal_http_request($url, $options); //d7
  dpm(json_decode($result->data), 'result data');

  // log out request, since we are done now
  $url = $server . 'user/logout.json';
  $options = array(
    'method' => 'POST',
    'headers' => array('Cookie' => $cookie),
    'data' => NULL,
  );
  $result = drupal_http_request($url, $options['headers'], $options['method'], $options['data']); //d6
  //$result = drupal_http_request($url, $options); //d7
}
We didn't use this code for our UI app, but it came in handy for testing and we eventually used it to interact with data from another backend system.
For our UI app, we used option 1 for two main reasons: 1. No need for a separate frontend server or custom authentication handling. 2. Better integration with the Drupal overlay (more on this later).
Hosting the frontend app for local development
We didn’t want our frontend developers to need any Drupal knowledge or even a local Drupal install in order to develop the app. We set up a web proxy on our shared Drupal development environment so frontend developers could build locally against it while appearing to be on the same domain (to maintain the cookie-based authentication). We used a simplified version of PHP Simple Proxy for this and added it to the Drupal webroot, but Apache can be configured to handle this as well. I wouldn't recommend using a Drupal-based proxy since each request would perform unnecessary database calls during the Drupal bootstrap.
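For reference, the core of such a proxy is only a few lines. This is a minimal sketch of the idea, not the full PHP Simple Proxy, which handles headers, errors, and POST bodies more carefully:

<?php
// proxy.php - minimal pass-through sketch.
// Example: http://mysite.devserver.com/proxy.php?url=myfrontend.localtunnel.com
$url = isset($_GET['url']) ? $_GET['url'] : exit('Missing url parameter');

$ch = curl_init('http://' . $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, TRUE);
$body = curl_exec($ch);
$content_type = curl_getinfo($ch, CURLINFO_CONTENT_TYPE);
curl_close($ch);

// Echo the fetched page back on the Drupal domain so cookie-based
// authentication keeps working for the app's own service requests.
if ($content_type) {
  header('Content-Type: ' . $content_type);
}
echo $body;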
Our frontend developers used node.js and localtunnel, but other local dev tools could be used for this. As long as the Drupal development server can make requests to the frontend developer’s machine, the web proxy will work. Using this setup, the URL for frontend development looks something like this...
http://mysite.devserver.com/proxy.php?url=myfrontend.localtunnel.com
...where mysite.devserver.com is the domain alias of the dev server, proxy.php is the name of the PHP proxy script, and myfrontend.localtunnel.com is the domain alias for a frontend developer’s machine.
Hosting the frontend app in Drupal
To make the frontend app easy to deploy along with the Drupal backend, we set up a simple custom Drupal module to host it. Since the app is just a single HTML page (and some JS and CSS files), we define one custom menu item and point it to a custom TPL.
Here are the essential pieces for our judeui.module file:
<?php

/**
 * implements hook_menu
 */
function judeui_menu() {
  // define our empty ui menu item
  $items['ui'] = array(
    'page callback' => 'trim', // shortcut for empty menu callback
    'page arguments' => array(''),
    'access callback' => TRUE,
  );
  return $items;
}

/**
 * implements hook_theme
 */
function judeui_theme() {
  // point to our custom UI TPL for the 'ui' menu item
  return array(
    'html__ui' => array(
      'render element' => 'page',
      'template' => 'html__ui',
    ),
  );
}

/**
 * implements hook_preprocess_html
 * @param type $vars
 */
function judeui_preprocess_html(&$vars) {
  // if we're serving the ui page, add some extra ui variables for the tpl to use
  $item = menu_get_item();
  if ($item['path'] == 'ui') {
    $vars['judeui_path'] = url(drupal_get_path('module', 'judeui'));
    $vars['site_name'] = variable_get('site_name', '');
    // add js to a custom scope (judeui_scripts) so we can inject global settings into the UI TPL
    drupal_add_js(
      "var uiglobals = " . drupal_json_encode(_get_uiglobals()),
      array('type' => 'inline', 'scope' => 'judeui_scripts')
    );
    $vars['judeui_scripts'] = drupal_get_js('judeui_scripts');
  }
}
The hook_menu function defines our ui page, the hook_theme function points it at our custom TPL, and the hook_preprocess_html lets us add a few custom variables to the TPL. We use the judeui_scripts variable to get global settings from Drupal into the page - much like the Drupal.settings variable on a standard Drupal page. We also have a web service that the ui app could use for this, but adding it directly to the page saves an extra request when initially building the page. More on ui globals in the next section.
And here is our custom html__ui.tpl.php file:
<!DOCTYPE html>
<html>
<head>
  <title><?php echo $site_name ?></title>
  <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1">
  <?php echo $judeui_scripts ?>
  <script src="<?php echo $judeui_path ?>/app.js"></script>
  <link href="<?php echo $judeui_path ?>/app.css" rel="stylesheet"/>
</head>
<body>
</body>
</html>
It contains the few very basic PHP variables that we set in hook_preprocess_html and a small amount of HTML to set the page up. Frontend developers can build and deploy app updates simply by committing new app.js and app.css files to the module folder. Drupal serves the page at http://mysite.com/ui.
Global settings for the UI
We added a custom web service to pass global settings to the UI app. The frontend app can call http://mysite.com/rest/uiglobals.json to load this or use the uiglobals variable we added to the UI TPL in the section above. Both of these methods use a function that returns an array of settings that are useful to the UI app.
<?php
function _get_uiglobals() {
  return array(
    'basePath' => base_path(),
    'site-name' => variable_get('site_name', ''),
    'site-slogan' => variable_get('site_slogan', ''),
    'publicFilePath' => file_stream_wrapper_get_instance_by_uri('public://')->getDirectoryPath(),
    'privateFilePath' => 'system',
    'main-menu' => _get_uimenu('main-menu'),
    'user-menu' => _get_uimenu('user-menu'),
    'image-styles' => array_keys(image_styles()),
    'user' => $GLOBALS['user']->uid,
    'messages' => drupal_get_messages(NULL, TRUE),
  );
}
You can see it contains global data like base path, site name, currently logged in user, public file path, image styles, messages, etc. This is a handy way for the frontend to access data that shouldn't change during the browser session.
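Exposing that function through the Services module is a small amount of glue. The following sketch assumes the Services 3.x resource format and a logged-in-user access check; treat the exact keys as illustrative rather than copied from the production module:

<?php
/**
 * Sketch of a hook_services_resources() implementation for /rest/uiglobals.json.
 * The structure is an assumption based on Services 3.x conventions.
 */
function judeui_services_resources() {
  return array(
    'uiglobals' => array(
      'index' => array(
        'help' => 'Returns global settings for the UI app.',
        'callback' => '_get_uiglobals',
        'access callback' => 'user_is_logged_in',
        'args' => array(),
      ),
    ),
  );
}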
Integrating standard Drupal pages/URLs
In the early stages of frontend development it was quite useful to model the pages in a standard Drupal theme. For a while we thought we might still want some parts of the site to just be standard Drupal pages. Handling this incrementally was fairly simple.
We established some conventions for URL alias patterns in both Drupal and the frontend app. For example, one of our content types is post. The URL alias pattern for posts is post/[node:nid]. So we had a Drupal-themed URL for the post at http://mysite.com/post/123 and a frontend URL at http://mysite.com/ui#post/123.
Once the frontend app was ready to start handling posts, we used hook_url_inbound_alter to redirect http://mysite.com/post/123 to http://mysite.com/ui#post/123.
<?php
/**
 * implements hook_url_inbound_alter
 */
function judeui_url_inbound_alter(&$path, $original_path, $path_language) {
  //dpm($path, $original_path);
  if (variable_get('site_frontpage', 'node') == 'ui') {
    $oargs = explode('/', $original_path);
    if (in_array($oargs[0], array('post', 'user', 'group', 'tool')) && !isset($oargs[2]) && isset($oargs[1]) && is_numeric($oargs[1])) {
      drupal_goto('ui/' . $original_path);
    }
    if (strpos($original_path, 'user/edit') === 0) {
      $frag = 'modal/' . str_replace('user/edit', 'user/' . $GLOBALS['user']->uid . '/edit', $original_path);
      drupal_goto('ui/' . $frag);
    }
  }
}
This is incredibly handy for redirecting preconfigured links in Drupal views or email notifications to our abstracted UI URLs. And hook_url_inbound_alter can be expanded as more of the app moves to the abstracted UI.
Integrating the Drupal overlay
We wanted to use the standard content editing and admin pages that Drupal provides and have those pages open in an overlay just like any other Drupal 7+ site. To make it appear like part of the abstracted frontend, links to Drupal-rendered pages open in an iframe with 100% height and 100% width (just like the Drupal 7 admin overlay), and we made some minor CSS tweaks to the Drupal theme so that the page appears to be a modal window in front of the abstracted UI. Now edit and create links throughout our abstracted frontend can open the iframe overlay and display pure Drupal admin pages.
In addition, we needed to facilitate closing the modal when appropriate. Setting up a close link in the top right corner of the modal was a pretty straightforward JavaScript exercise, but we also wanted to close it automatically when a user completed a task in the modal (for example, when a user clicks save after editing a content item, we want the modal to close on its own). Drupal already has a way to handle a redirect after a task (usually a form submit) is complete: the destination query string parameter. So in our frontend app, we add a destination query parameter to all of the edit and create links. The frontend app listens to the onload event of the iframe, and if it redirects to a non-modal page (e.g. /ui), it closes the modal.
Finally, we want to pass Drupal messages back to the abstracted UI so the user can still see them even if the modal closes. Since the modal is redirecting to the ui callback when it closes, the messages variable of the uiglobals array will contain any related messages that should be displayed to the user.
Final thoughts
This was our first attempt at using this kind of site architecture with Drupal. Although there were new development challenges inherent to any single-page web application (and best saved for another post), the integration with Drupal as a backend and as an administrative frontend was surprisingly smooth. Our community site incorporates other contrib modules like Organic Groups, Search API Solr, Message Notify, and CAS without issue. Here are some additional benefits we discovered:
Full suite of reusable UI web services for other client apps (Android, iOS, etc).
Freedom to use a frontend development team with no Drupal knowledge.
Avoided many of the usual Drupal theming struggles and limitations.
Relatively seamless integration of Drupal UI and abstracted UI.
Progressive integration (You don't have to build the entire UI outside of Drupal - convert it later, if desired)