donohoe · 9 years
Text
How to be a publisher [▓ ▒ ▒ ▒ ▒ ▒ ▒ ▒ ▒ ▒] 1%
HTML: This is the easy bit. Except it’s never easy. But it’s the known quantity in that it’s HTML. And CSS to make it pretty. Typekit or Google Fonts. Or make and host your own web-fonts. It’s just typography. The CSS covers breakpoints if you must be responsive. And that adds HTML for the things that need to be very different on Mobile but not at all on Desktop. Not forgetting Tablet, so let’s just call it Non-Mobile. Not forgetting Javascript, not jQuery, and if you’re good you’ll even test that the basic functionality of your site works with it disabled. Oh, and 3rd party SCRIPT tags. Analytics, which one? Everyone needs wants something different. Google Analytics is free but sampled once you go over a million events. Chartbeat is trendy for the here and now. Parse.ly is great but some prefer Chartbeat. Some argue for both. And comScore. And Omniture. Optimizely for A/B testing, unless you’ve Test & Target via Adobe which integrates with Omniture. Some want need both because it’s the UI wars. And we’re all ready to drop everything below IE11 now. Make it all SEO friendly with canonical URLs, publish time strategies, headlines, and micro-formats. There is no god. And Google News friendly so there is that, and feeds, and setup and config in Google Webmaster Tools, and Google Plus, and likely some other obscurely linked control panel. Sitemaps. Robots.txt. Add in a Short URL service for spice. Do you still need short URLs? Twitter made all links count the same in length, so scratch that. Unless you use SocialFlow or Bitly or something else to hide tracking parameters when you share on Facebook vs Pinterest vs Twitter vs Reddit (but only organically share on Reddit which means you don’t really share at all) vs Email vs Google Plus vs LinkedIn vs Tumblr vs iMessage. Ignore that sometimes people will do things like email the Twitter link, but that shouldn’t mess with reporting numbers too much. Shout out to Dark Social. Sounds like something from Tron but less neon.
RSS: Too easy. A basic CMS has some sort of plugin, or config, or panel. Life is good. Except we may want to have the ability to use the SEO friendly headline. Sometimes. And append a parameter to the URL so we know you came from RSS. Isn’t it funny that a GUID looks like a URL but no one has told me a good reason to have a GUID. It’s in a spec from 1999 or thereabouts. And we should have Thumbnails. Let’s just call it namespace and XML bloat. Let’s be the Times and inject ads into our feeds. Beautiful, no? Let’s ignore that RSS is dead, except when it’s being used for purposes other than intended. And you know it will be, and you cry a little on the inside.
Email: Everyone loves email. Just keep it under 100KB to play nice with the message display gods. And that’s it, because it’s not hard to be just email, unless you need to look okay in more than one email client, and that couldn’t possibly be a testing nightmare because this is tech from 1993, isn’t it, and how hard or splintered can that be or get. For the love of all that is holy make sure your unsubscribe link works and doesn’t tell them to re-enter their email address, cos how in hell could you be expected to know something as personal as that when you just sent them an email. No one ever resells email addresses, so trust issues aren’t a problem. Unless I’ve missed something.
[▓ ▓ ▓ ▓ ▓ ▒ ▒ ▒ ▒ ▒] 50%
OpenGraph & Twitter Cards: All documented. Just more META tags in the HTML. Some logic to determine when best to do a Summary Card or a Picture Card. Ack. The right image size. Custom field for Twitter, and use the same or another override for Facebook? Need to get the Twitter handle of authors in there too where appropriate. That’s hard-coded for now. Ask nicely to be whitelisted.
Flipboard: Back to RSS again. Augmented fields. Mapping and some hacks to get odd but important taxonomies to appear in some shape or form. Say goodbye to some embedded content. Say goodbye to the notion of packaged content and hello to complete atomization of the article as a standard unit.
Google Newsstand: It’s Flipboard and RSS again. So far so good. Mind your embedded content.
FB Instant Articles: Much great. Many fast. Hmm. No paywall support. Yet. Maybe. Guess it’s fine for all the publishers that only ever want ad revenue, which is fine because advertising is so lucrative and there could never be a downturn there (Hello NYTimes!). Very specific view of what an Article is and isn’t. So you’re getting your users used to the idea that Facebook is the web. Fab.
Apple News: Like FBIA with no paywall support. Can’t really say much more.
Google AMP: It’s like HTML but it is WAP. Paywall support. Let’s all create a brand new site. It may or may not affect your search referral traffic, but you gotta go along with it because there is so much uncertainty and fear. One more thing to learn the nuances of to make sure you’re not shooting yourself in the foot, and it means you can’t actually spend time optimizing the existing HTML to be fast.
Ads: DFP is shiny. OpenX is forecasting hell. Forget HTTPS until ALL networks fall into line, say 2028. You could follow WaPo but it sounds like you’d need 2 full-time staff just to make sure it doesn’t fall over. All ads come back as iframes most of the time, unless you want to hack DFP to get JSON (hello Quartz) to do better, native units without a 3rd party that no one likes. Hello to native units and sponsored posts and a million flavors of tracking pixels and event tracking and untrusted parties spilling your entire life across the web. Put the NSA to shame. Chase down interstitial ads with broken close buttons or improper z-index values. Back-port your HTML-based design to have some basic Viewability and battle the trade-off of Impression-based campaigns against Viewability-based campaigns, cos you know you’re gonna have to support both. Explain lazy-loaded ads over ads that load on PV regardless. What does a PV even mean any more. Not a question. Enjoy weekends on Competitive Separation and Take-overs. Enjoy the agency model over the planned sprint. Enjoy trying to shield your team so they don’t quit. I love you all dearly. Never mind the year-on-year rise in ad-blocking and the half-apology from the IAB on behalf of advertising killing UX. Mobile was safe, but isn’t any longer because Apple is taking a swing at Google and you’re the collateral damage, but that is okay, you’ll be fine until the kids discover how much data they’re using each month, hitting the upper levels of the AT&T non-grandfathered limited 4G data plan, and paying for it. Never mind the confessions of ad execs over drinks that they’ve installed ad blockers on their kids’ phones out of genuine concern.
Just not going there.
[▓ ▓ ▓ ▓ ▓ ▓ ▓ ▓ ▓ ▓] 98%
So… Other than that, it’s not too hard to be a publisher.
donohoe · 9 years
Text
Tips for Those Who Insist on Poor Password Habits
This is terrible advice. It doesn't make you that much better, and don't think it gets you off the hook for poor security in an age where it is of growing importance. But you're still here. So I hope this can help. Your mileage will vary.

First... Never ever email someone a username and password combination. Ever. Dear god, just stop. If you do, you're not qualified to use a computer, let alone the internet.

Second... If you must share credentials then you should:
1. Look for a non-digital solution
2. Remove their connection
3. Employ separate mediums

What does this mean? If you can walk over to someone's desk and tell them verbally or, better yet, enter it yourself manually, then do that. Hate exercise? Email them the username. Then call them on the phone and tell them the password. Preferably while they enter it so they DON'T WRITE IT DOWN. Hate talking to people? Okay, email them the username. Write down the password. Take a photo and text it.

Third... No reason for this, other than to mess with people... Make up a complex BS password. Write it on a sticky note and hang it up by your computer.

Fourth... You're terrible at picking passwords and will always use your cat's name or some bland default-like password. Do this: Pick 1 special character (!#$%&*@) and 1 uppercase letter. Example: #K (pick your own). Done that? What did you pick? REMEMBER IT. That is 50% of this tip. Now that you have that, always begin your crummy regular password with this. The trick here is to be consistent.

Bonus round: Make that 2nd letter the 2nd letter of the domain of the service you are using. That way your password is always somewhat unique. Example: !M for gmail.com, !T for twitter.com, etc.

Bonus round 2: Double up & repeat twice! Example: !M!M for gmail.com, !T!I for twitter.com
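A minimal sketch of that last tip in JavaScript, purely for illustration: the function name is mine, the "!" is an arbitrary special character, and the examples above vary a little, so treat the exact letter choice as illustrative rather than a rule.

// Sketch only: build a per-site prefix to put in front of your usual password.
// Assumes the "2nd letter of the domain" version of the tip.
function sitePrefix(domain, special, doubled) {
  special = special || '!';                          // pick your own and stick with it
  var letter = domain.charAt(1).toUpperCase();       // "gmail.com" -> "M"
  var prefix = special + letter;
  return doubled ? prefix + prefix : prefix;         // bonus round 2: double it up
}

sitePrefix('gmail.com');            // "!M"
sitePrefix('gmail.com', '!', true); // "!M!M"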
donohoe · 9 years
Photo
It’s hard getting an article web page to 1s over 4G. It will be just as hard getting this live on production.
donohoe · 9 years
Text
Anatomy of a Simple Extension
I've always had a thing for browser extensions. Firefox almost killed that dream with the XUL approach, but Safari and Google Chrome have always been approachable.
We recently created an extension at The New Yorker based on our Photo Booth blog. The idea is to pull in photos and fill the "New Tab" window with something a bit more compelling. It's definitely not for everyone (many may prefer the cleaner, empty default), but those who enjoy rich curations of photography are in for a treat.
You can find it here and try for yourself here.
The idea isn't original. I recall a post on Hacker News that used Instagram photos. Here I'm going to show you the extension and how you can modify it to pull in collections of photos from any source.
The guide assumes you're familiar with the basics of Google Chrome extensions, specifically how to enable Developer Mode and get started with running local extensions. It may be helpful to do a bit of advance reading if this is all new...
Getting Started: Building a Chrome Extension
How can I set up Chrome for extension development?
The full source code for this extension is available on Github:
https://github.com/newyorker/photobooth-extension
It does the following:
manifest.json Tells Chrome to open "content/index.html" when a New Tab opens
index.html A simple web page, loads in CSS and JS.
app.js This handles all the heavy lifting (a rough sketch of this flow follows the list), but can be summarized as:
Check you're online, if not fail gracefully
Get the photo data
Check if the data is fresh, in which case use the cached version
Otherwise, pull in data (and cache it)
Build the page content
Insert HTML placeholders with photo data in the attributes
There is some extra work where I randomize the list of photos so that every New Tab is different. You can skip that.
Show the photos
Leveraging wow.js, show the photos (with random visual effects and timing) as they come into view
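Here is a minimal sketch of that flow. It is not the actual app.js: the cache key, the one-hour freshness window, and the FEED_URL placeholder are all assumptions of mine, and the wow.js rendering is skipped.

// Sketch only: online check, cache check, fetch, cache, render.
var FEED_URL = 'https://example.com/photobooth-feed.json'; // placeholder; the real URL lives in app.js
var ONE_HOUR = 60 * 60 * 1000;

function loadPhotos(render, fail) {
  if (!navigator.onLine) { return fail(); }                 // 1. check you're online

  var cached = JSON.parse(localStorage.getItem('photo-feed') || 'null');
  if (cached && Date.now() - cached.time < ONE_HOUR) {
    return render(shuffle(cached.items));                   // 2-3. data is fresh, use the cache
  }

  fetch(FEED_URL)                                           // 4. otherwise pull it in
    .then(function (res) { return res.json(); })
    .then(function (feed) {
      localStorage.setItem('photo-feed',
        JSON.stringify({ time: Date.now(), items: feed.items })); // ...and cache it
      render(shuffle(feed.items));                          // 5-6. build the page content
    })
    .catch(fail);
}

function shuffle(items) {                                    // so every New Tab is different
  return items.slice().sort(function () { return Math.random() - 0.5; });
}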
To modify this to work as-is with other sources you need to update both the feed URL and also the permissions on the manifest file:
https://github.com/newyorker/photobooth-extension/blob/master/chrome/content/js/app.js#L168
https://github.com/newyorker/photobooth-extension/blob/master/chrome/manifest.json#L16-L17
This will help, but only if you're reproducing the exact same feed format. In this case I didn't follow any standard since I had a very specific one-off need.
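For orientation, the manifest side of this boils down to something like the sketch below. The name and the permissions pattern here are placeholders (check the linked manifest.json for the real values); chrome_url_overrides is what points the New Tab at content/index.html.

{
  "manifest_version": 2,
  "name": "Photo New Tab (example)",
  "version": "1.0",
  "chrome_url_overrides": {
    "newtab": "content/index.html"
  },
  "permissions": [
    "https://example.com/your-feed/*"
  ]
}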
The feed looks like this:
{ title: "Photo Booth", link: "http://www.newyorker.com/culture/photo-booth/", updated: "Fri, 01 May 2015 12:36:35 +0000", items: [ { link: "https://www.newyorker.com/culture/photo-booth/exploring-the-new-whitney?mbid=ext_photobooth", headline: "A Night at the New Whitney", author: "The New Yorker", image: "http://www.newyorker.com/wp-content/uploads/2015/04/Whitney_TNY-edit01-320.jpg", orientation: "horizontal" }, ...
There is an example on Github
There are a number of ways to build a feed. In many cases you'll have an API or extended RSS feed that you can use as the start point.
In this case there were neither. The initial prototype relied on three things: scraping the web pages directly for data, a cron job to ensure the data was relatively current, and Dropbox to give it a public location and HTTPS (Google requires that external data and sources be available over HTTPS).
You can find an example here: https://github.com/newyorker/photobooth-extension/tree/master/service
Once the prototype was up and running, and people had a chance to play with it, we decided that since we had direct access to the content we didn't need to stay a second-class citizen. With that in mind here is the approximate WordPress implementation:
https://github.com/newyorker/photobooth-extension/tree/master/service
The clear difference between the two is that the "scraping" option publishes a simple text file on a CDN, while the WordPress option is direct and cached. With a little extra work it can ensure updates follow as soon as posts are added.
Build new experiences for fun NOT profit...
If I were a developer at another organization odds are I already have what I need to get started without having to first build an API service.
LENS blog - The New York Times
Quartz
Proof - National Geographic
LENS Blog
A great candidate is The New York Times photography blog, The Lens.
If you dig through the network activity you see that the "Load More" button at the bottom makes a request to a service to get more data. You can determine that the content of the blog is available under this URL structure:
http://lens.blogs.nytimes.com/more_posts_json/?apagenum={page-number}
Example: http://lens.blogs.nytimes.com/more_posts_json/?apagenum=3
This returns 15 posts at a time. Again, using the scraper example, it would be very easy to collect several pages of data. Since we already use phpQuery it would be easy to parse the HTML payload and get the data we need.
In general I'd do the following:
- Iterate through each item
- Create a new DOM fragment from the HTML using phpQuery
- Pull the innerText for the first SCRIPT tag
- Encode as JSON
- Directly pull data on the first slide and map to a new feed
That's 10 minutes of work, including basic testing.
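For what it's worth, here is a rough browser-JavaScript version of those steps (swapping phpQuery for DOMParser). Every name below other than the URL is a guess: the '.post' selector and the JSON shape inside the SCRIPT tag are assumptions for illustration, and I'm glossing over the exact response envelope.

// Sketch only: fetch a page of Lens posts and map them to the feed format above.
function fetchLensPage(pageNumber) {
  var url = 'http://lens.blogs.nytimes.com/more_posts_json/?apagenum=' + pageNumber;
  return fetch(url)
    .then(function (res) { return res.text(); })
    .then(function (payload) {
      var doc = new DOMParser().parseFromString(payload, 'text/html');
      // Iterate through each item ('.post' is a guessed selector)
      return Array.prototype.map.call(doc.querySelectorAll('.post'), function (post) {
        // Pull the innerText of the first SCRIPT tag and parse it as JSON
        var script = post.querySelector('script');
        var slideshow = script ? JSON.parse(script.textContent) : {};
        var first = (slideshow.slides || [])[0] || {};       // shape is a guess
        return {                                             // map to the feed format
          link: first.url || '',
          headline: first.title || '',
          author: 'The New York Times',
          image: first.image || '',
          orientation: 'horizontal'                          // would need real logic
        };
      });
    });
}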
Quartz
Even easier. They have a ready-to-go JSON API that makes life too easy:
http://qz.com/api/latest
National Geographic
When you look at the Network tab in Chrome's developer tools you can quickly see that this is a WordPress blog. They make use of "admin-ajax" and you can quickly determine through the URL structure how to get specific data.
Taking the POST request, and making it GET:
http://proof.nationalgeographic.com/wp-admin/admin-ajax.php?action=ngspb_ajax_archive&ngspb_archive_args%5Bposts_per_page%5D=10&ngspb_archive_args%5Bpaged%5D=2
Here I'd caution and say ensure you're caching responses. Anything through "admin-ajax.php" isn't necessarily cached. Prototypes should play nice.
{ "result" : "success", "markup" : "BIG HTML BLOB....." }
A word of caution: These are nothing more than examples. I often do this for personal use and fun. Many web sites that host photos from a photographer or agency on their domain have strict contractual restrictions. It's not that they mind people creating new and compelling experiences, it's that they've promised photographers that they will protect their work. It's a real legal obligation. In other words: don't create an unofficial LENS blog extension and be surprised when Google takes it down.
donohoe · 10 years
Text
I'm going to start using this again.
donohoe · 11 years
Text
Infinite ALL the Scrolls
With the relaunch of Time.com I’ve had a number of people comparing it to Quartz. This happened too when the Daily Beast made updates last year, but it’s been more pronounced this time. Either way, it’s something I feel is still mis-categorized - especially when people bemoan the “infinite scroll”. Well, call it “infinite scroll” if you like, but to me it’s the furthest thing from that. I fully admit the difference may be in my mind alone.

Let me explain a little bit of history and something I learned many years ago at The New York Times. When a reader makes it to the bottom of an article there is a high tendency to lose them. That’s a general observation and not just specific to the Times. The common way people have found to deal with this was to add more links. Between the obligatory ads (seriously, why!), recommended articles, callouts to compelling content, most popular, 3rd-party modules, and impassioned pleading to sign up for a newsletter, you eventually end up alienating the reader and they bounce. Gone. Disappeared. No conversion. No new tasty ad impression to help keep the lights on.

So it was at the Times. We’d see a high number of people bounce, and wouldn’t it be great if we could entice them to stay a little bit longer. In 2011 that gave rise to the oft-copied “Coming up next” module. It flew out from the bottom-right corner of the page as you approached the end of the article. It was conceived and built by Tahir Khan and it was very, very effective. The Bounce Rate dropped by a very significant percentage.
To me the main conclusion was: readers are being paralyzed by choice. They’re overwhelmed with options. This module was clear and different from everything else and it solved one big problem - do not make the reader think. It didn’t try to guess from a list of curated items, or present a recommendation based on browsing history, it just said “go here” (arbitrarily the next article in a section) and a significant number of people did, with one click. It was genius.
In building Quartz this was one lesson I took with me. In our execution the idea wasn’t to implement “infinite scroll”. It was about taking the “Coming up next” module to its logical conclusion - remove the need to click.

What most people do not know is that when you load a page on Quartz - be it the main page or an article - you load ALL the pages. That side-bar on the left side (we call it the “Queue”) typically has 18 to 26 articles present. When you open the page we load a big string of JSON and save it all locally. It’s actually quite inexpensive and it means when you click on any Article it’s there immediately. Only images, if not already cached, are left to load. So as you approach the end of an Article there is no cost to “preload” that next Article. It’s there. You don’t have to read it. It will never be counted as a Page View unless you scroll beyond a generous threshold (we were never ever sneaky about trying to abuse PVs - PVs are evil anyway but that’s a different story...).

So, in the end: Simple Reader Choice (effectively none) + Preloaded Data = Amazing Page Depth.

This is the part where I disappoint people and provide absolutely no numbers to back all this up. Sorry! It’s a combination of memory and also keeping promises. However I’ve seen the Page Depth metric of a number of different web sites. Ours was several times higher - typically in the ballpark of 3 to 5 times.

This isn’t going to be a magic bullet. Even at Quartz there are things we didn’t plan for (a large “Engage” ad at the end of every article instead of every 2 or 3). Also consider that if a reader is skipping through, that’s a lot of content to push into the DOM (see Medium’s approach, which avoids this).

So yeah, forgive me if I wince at the term Infinite Scroll. It’s not quite that simple.
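None of this is Quartz’s actual code, but as a sketch of the mechanics (made-up endpoint, field names, and thresholds) it amounts to roughly this:

// Sketch only: preload the whole Queue as JSON, append the next article as the
// reader nears the end, and only count a PV past a generous scroll threshold.
var queue = [];
var current = 0;

fetch('/api/queue.json')                              // one request, every article in the Queue
  .then(function (res) { return res.json(); })
  .then(function (articles) { queue = articles; });

window.addEventListener('scroll', function () {
  var nearBottom = window.innerHeight + window.scrollY >
                   document.body.offsetHeight - 800;  // "approaching the end" heuristic
  if (nearBottom && queue[current + 1]) {
    current += 1;
    var next = document.createElement('article');
    next.innerHTML = queue[current].html;             // already local, so no network cost
    document.body.appendChild(next);
    // A page view would only be recorded once the reader scrolls well into
    // the new article, not merely because it was appended.
  }
});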
donohoe · 11 years
Photo
Olaf says...
donohoe · 11 years
Photo
It’s been a very busy year at Quartz. Out of almost 20 code repositories we have on Github, there are 4 that handle the core functions of the site and consume most of our development time.
Our busiest repo is qzapp, which handles the core front-end application, styles, and other assets. It had 2,220 commits (as of this writing—still another week to go…), including the addition of 476,250 lines of code and the removal of 328,859.
Those numbers alone aren’t very meaningful (they cover the addition, movement, and removal of libraries and build files), but they reflect a high measure of effort and continuous iteration throughout the year that included:
- Refactoring of Javascript web app
- Offline mode, offline metrics
- Mobile navigation redesign
- Overhaul of analytics reporting
- Ongoing performance enhancing updates
- Twitter sign-in
- Annotations
- Highlighting
You can see more on our Version page.
And that was just one repository. It doesn’t cover all of our efforts spanning WordPress, ads, analytics, login and registration, internal tools, and backend.
There is even more in the pipeline for 2014. Speaking of which, we are hiring, so come talk to us.
donohoe · 11 years
Link
Earlier this year, we made it possible to leave annotations in the margins of Quartz. Today we’re adding the ability to highlight our text, too—and share those highlights with the rest of the world.
All you need to do is click on a paragraph to activate it. (You’ll see a grey background.)...
donohoe · 11 years
Photo
"Jeff Bezos presents Amazon Now"
donohoe · 11 years
Photo
The Day of the Doctor
donohoe · 11 years
Text
What a difference an "r" makes
So I realized that if I made this http://buzzfeedr.com then it should probably have this @buzzfeedr
donohoe · 11 years
Photo
"And Then Steve Said, ‘Let There Be an iPhone’"
Illustration by Matt Dorfman
donohoe · 11 years
Photo
Bots + Chat + Github make Quartz hum
donohoe · 11 years
Text
44% of @CNN followers are Fake (and another 34% Inactive)?
According to ten minutes playing with "Fake Follower Check" there are a lot of big names on Twitter that actually have much lower influence than you'd think.
The big surprise for me is how low the 'good follower' count is - many are well below 30%. @CNN has 10.175 million followers. This suggests that its true reach is 2.23 million followers. That's a drastic reduction. That leaves me asking: how inflated is Twitter?

So, reality check... take this with a BIG grain of salt. A few critical points to remember:
- Many fake Twitter accounts follow popular organizations so they blend in
- This is based on criteria from StatusPeople. I do not know what that criteria is or how accurate it is. It may be way, way off. No idea.

If you have a few minutes to spare I'd love to gather more data on other Twitter accounts (which I'll post here). Click here for more info or to help.

I include @qz, the company I work at, not because it has a healthier rate but rather because if I omitted it, it would seem like I'm being selective.
donohoe · 11 years
Text
Dave Winer says I'm rude
I'm wondering if it's like email. You need to watch for tone and voice before you hit send.

Anyway, Dave Winer says my comment on one of his posts was "one of the most rude messages i've seen in a while. congrats". Perhaps it's true? It's deleted now. But I do have a draft version. I no longer recall how close it is to the final version, but not too far off I imagine. I tried to quote from his post and be specific in replies so as to avoid broad generalizations etc. That didn't help. If it doesn't read as "respectful" (as per his comment guidelines) then so be it. I was simply disagreeing with him, not calling him names.

UPDATE: It's because I said "BS". Well, I still think it was appropriate to make the point. His point on "new ideas" was way off and unsubstantiated imho.

His post is here. My draft comment is as follows (pardon typos etc - it's a draft):

===

"NYT publisher Arthur Sulzberger said the mistake they made was not hiring more programmers sooner. I thought this was noteworthy -- I think the exact opposite is true. I think the Times should have tried to avoid hiring programmers as much as possible. Before they had a lot of programmers it was possible to do deals with them, after the programmers came on, they had yet another set of gatekeepers, who as a side-effect of doing their jobs, kept new ideas from penetrating the Times."

I have to disagree with you there - and Mr. Sulzberger. There were developers. Quite a lot of them (for the time) but we were kept away over at 700 7th Avenue. We were not gatekeepers in the sense you describe simply because we were not part of the conversation. What are these "deals" you speak of?

"kept new ideas from penetrating the Times"

Thats BS. Who introduced Twitter to the Times? Who pushed short-urls? New templating, TimesPeople (short lived social network of sorts), Times Wire, Times Skimmer, JSON feeds, Developer APIs, Recommendation Engine? Developers. Developers pushed an pioneered those initiatives.

"RSS. As such an unqualified success, it should have been used as the model for other good things that could have happened, but didn't, because of gatekeeping"

I'm not sure how - other than allowing full content in RSS feeds I'm not sure how newspapers acted as gatekeepers. I'd like to know more on your thoughts around that (apologies if you've gone in-depth on this elsewhere)

"For example, I had a very simple Blackberry river of news for the Times in 2006. This was before it was known widely that mobile was everything (another thing Sulzberger said last night). But it couldn't happen because the Times had its own internal effort to do a mobile app that was, imho, nowhere near as easy, fast or nice as the one I was able to whip up in a weekend because I didn't have the time to make it complicated."

Its worth noting that most people do not want a river of news from the Times (or most other news sources). In the Times case people wanted the Homepage. They wanted the editorial judged placement on the lead story and what else was news of the day. Also, the Mobile site of the time wasn't particularly slow as I recall and to be fair the aesthetics level it was better then the version you built. Thats not a criticism - yours is much simpler but because its pure chronological and plainly formatted not something I would say is consumer ready. Thats just opinion on my part though.