Twitter does not optimize user-uploaded images
A few months ago, a perf fail was published about Twitter’s unoptimized post-logout image. Today, I realized that Twitter doesn’t optimize user-provided cover images… which makes their site very slow.
In the above example, the cover image is a PNG file that weighs in at no less than 1.3MB and accounts for more than half of the total page size. Running OptiPNG on this image would save about 550kB, but the better fix is to use JPEG instead (and you can still consider WebP or MozJPEG). The same image as a JPEG weighs 130kB (with a q=15 compression setting) — a tenth of the original file size.
Lessons learned
Optimize user-provided content on upload!
If the above is not possible, provide clear guidance and pointers to relevant tools, to help users optimize the asset prior to uploading it.
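For example, here is a minimal sketch of the first lesson, assuming a Node upload pipeline and the sharp library; the dimensions and quality settings are illustrative placeholders, not Twitter's actual values:

```typescript
import sharp from "sharp";

// Re-encode an uploaded cover photo as a reasonably sized, compressed JPEG.
// Width cap and quality are illustrative defaults.
async function processCoverUpload(uploadPath: string, servePath: string): Promise<void> {
  await sharp(uploadPath)
    .resize({ width: 1500, withoutEnlargement: true }) // cap dimensions at the largest layout size
    .jpeg({ quality: 80, progressive: true })          // photographic content compresses far better as JPEG
    .toFile(servePath);
}

processCoverUpload("upload.png", "cover.jpg").catch(console.error);
```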
TheBlaze.com is stuck in a fetch loop
TheBlaze.com delivers a fast first render at ~1.7 seconds, but then goes on to open 1,000+ TCP connections and fetch 10MB+ of data over the next few minutes (yes, minutes!) due to continuous pings to various advertising servers.

WebPageTest gives up on the site after ~110 seconds and 12MB of data, but if you keep the page open in your own browser, you'll see that the page never stops firing background requests (some of which weigh in at 100KB+ and quickly add up to many megabytes), and after a few minutes the page forces a meta refresh to restart the full cycle.
The exact culprit is hard to pin down, but the endless stream of JavaScript errors showing various third-party scripts being blocked by the same-origin policy is a likely factor.
An endless fetch loop is never a good idea.
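For illustration, here is a minimal sketch of polling with exponential backoff and a hard cap instead of an unconditional fetch loop; the endpoint and limits are hypothetical:

```typescript
// Poll with exponential backoff and a hard cap, rather than hammering the server forever.
async function pollWithBackoff(url: string, maxAttempts = 10): Promise<void> {
  let delayMs = 5_000;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      const res = await fetch(url, { cache: "no-store" });
      if (res.ok) {
        delayMs = 5_000;                              // healthy response: reset the interval
      } else {
        delayMs = Math.min(delayMs * 2, 300_000);     // back off on errors, up to 5 minutes
      }
    } catch {
      delayMs = Math.min(delayMs * 2, 300_000);       // network failure: back off as well
    }
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  // After maxAttempts, stop polling entirely instead of forcing a page refresh.
}
```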
Lessons learned
Audit your JavaScript errors.
Audit your third party integrations.
Audit your HTTP overhead and minimize polling.
Evite's Gallery: ad targeting gone wrong
Evite’s gallery page makes over 400 requests (~200 redirects), over 180 distinct connections, and fetches over 2MB of data, ~1MB of which is in cookies!
A beacon here, a beacon there, and before you know it you’re incurring hundreds of requests and transferring megabytes of uncompressed cookies! In this particular case, check out the resulting waterfall:

The requests in yellow are redirects - yes, hundreds of them. The main offender appears to be rc.rlcdn.com, which belongs to TowerData (RapLeaf), and which triggers hundreds of fetches, each with a 302 redirect, sending over 700KB of cookies in the process!

Needless to say, there is a lot of room for improvement: if you need to log multiple metrics, batch them and make a single request instead of firing many individual requests; eliminate 302’s to speed up processing; reduce the cookie overhead.
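As a rough sketch of what batching could look like, here is a small queue that flushes metrics as a single beacon; the endpoint and payload shape are hypothetical:

```typescript
// Queue analytics events and flush them as one batched request,
// instead of firing an individual beacon per metric.
const queue: Array<{ name: string; value: number; ts: number }> = [];

function track(name: string, value: number): void {
  queue.push({ name, value, ts: Date.now() });
}

function flush(): void {
  if (queue.length === 0) return;
  const payload = JSON.stringify(queue.splice(0, queue.length));
  // sendBeacon survives page unload and doesn't block the UI thread.
  navigator.sendBeacon("/analytics/batch", payload);
}

setInterval(flush, 10_000);              // periodic flush
addEventListener("visibilitychange", () => {
  if (document.visibilityState === "hidden") flush();  // flush before the page goes away
});
```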
Lessons learned
Audit your third party dependencies! A simple tracker can add hundreds of requests and megabytes of transferred data.
Batch reports to reduce HTTP overhead.
Watch your cookie size; it all adds up over hundreds of requests.
LG G Watch site delivers an images-gone-wrong 54MB RWD experience
If you go to the website for the new LG G Watch, it will download over 54MB on the first pageview - 99% of this is due to unoptimized image use.
There are many issues with this site that are not related to performance, which I will not cover here. But I must point out one peculiar solution: if you open the site in a browser whose dimensions somehow do not “fit” the site, it tells you to tilt your device. That alone is not a good approach on a modern responsive site, but to make matters worse, this message is shown as a 656KB 1920x1080px background image. This is neither performant, responsive, nor accessible.
How does this site manage to download 54MB of images?
Client-side image resizing
The site reuses the same full resolution images regardless of screen size and RWD breakpoints - the assets are resized on the client. This is a classic mistake when making responsive websites (see delivering scaled image assets), and there are many techniques to avoid this.
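One such technique is srcset/sizes, which lets the browser pick an appropriately sized asset instead of downscaling a full-resolution image on the client; a minimal sketch with hypothetical asset names:

```typescript
// Let the browser choose the asset that fits the layout and DPR.
// Filenames, widths, and the container selector are hypothetical.
const img = document.createElement("img");
img.src = "watch-960.jpg";                                           // fallback
img.srcset = "watch-480.jpg 480w, watch-960.jpg 960w, watch-1920.jpg 1920w";
img.sizes = "(max-width: 600px) 100vw, 50vw";                        // layout width hints for the browser
img.alt = "LG G Watch";
document.querySelector(".hero")?.appendChild(img);
```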
All images, all the time
When you visit this site, it immediately loads all the images used by the site, including the ones you likely won't see. As a result, if you just want to see the specifications for the watch, your browser still downloads the 240 images that another part of the site uses to show a “VR” view of the watch. Each of these “VR view” images is around 150kB, which adds up to 28MB.
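A sketch of deferring those frames until the user actually opens the “VR” view; the frame count matches the post, but the URL pattern and trigger element are hypothetical:

```typescript
// Defer the 360° "VR view" frames until the user opens that section.
const FRAME_COUNT = 240;

function loadVrFrames(): Promise<HTMLImageElement[]> {
  return Promise.all(
    Array.from({ length: FRAME_COUNT }, (_, i) => {
      const img = new Image();
      img.src = `/img/vr/frame-${i}.jpg`;              // hypothetical URL pattern
      return new Promise<HTMLImageElement>((resolve, reject) => {
        img.onload = () => resolve(img);
        img.onerror = reject;
      });
    })
  );
}

// Only start the downloads when the VR section is opened.
document.querySelector("#vr-button")?.addEventListener(
  "click",
  () => {
    loadVrFrames().then((frames) => {
      /* wire frames into the viewer */
    });
  },
  { once: true }
);
```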
Background images
The other big bandwidth thief is background images. In addition to making the site impossible for blind people to use (due to text embedded in background images), these images are huge, adding another 24MB to the site. Further, because the images do not adapt to screen size, they also provide some humorous art direction at carefully selected screen sizes.
Lessons learned
Serve scaled image assets.
There is no "right" screen size for RWD; adapt your layout to the user's viewport to avoid embarrassing UX disasters.
Avoid encoding text in images - it's not selectable, searchable, or accessible.
Defer loading of optional assets - e.g. the VR experience images can/should be downloaded on demand.
-Sverre Nøkleby
TechCrunch's flash of styled content
The experience of loading a TC article: you see the text; the article image shifts the text; both disappear (???); then the image and text return to their original positions.
Layout reflows, in general, often lead to a suboptimal experience: there is nothing fun about trying to consume content (e.g. read text) while it is jumping all over the screen as the page is loading.
That said, the TechCrunch experience is particularly bad. Capturing a frame-by-frame view of the rendering steps shows that, for some reason, the page content is temporarily shifted way down the page by an empty element (step 3 from the left in the above diagram), which then disappears and the content returns to its original place - what's going on?
After reloading the page a few times and watching the DOM construction as the page loads, it turns out that the culprit is the Facebook share button! For some reason, when it's first instantiated, it creates a 1000x1000px iframe container, which is then resized to fit the actual button. The time gap between when the iframe is created and when the share button loads is what causes the content flash.
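One possible mitigation - a sketch, not TechCrunch's or Facebook's actual code - is to reserve the widget's final footprint up front so the oversized placeholder iframe can't push the article around:

```typescript
// Constrain the third-party widget's container so a temporarily oversized
// iframe can't reflow the article. Selector and dimensions are hypothetical.
const shareContainer = document.querySelector<HTMLElement>(".fb-share-container");
if (shareContainer) {
  shareContainer.style.width = "120px";     // final rendered size of the button
  shareContainer.style.height = "28px";
  shareContainer.style.overflow = "hidden"; // clip the 1000x1000px placeholder iframe
}
```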
Turns out, this is a known issue with some plausible workarounds. Let's hope that the Facebook team solves this problem soon. In the meantime, some lessons learned:
Audit your third party widgets for performance gotchas
Where possible, minimize layout reflows to improve UX
Adobe's Runtime CDN forgot to enable GZIP
GZIP compression is one of the easiest and highest-return optimizations: ~70-90% byte savings for text-based assets. But you do need to enable it for it to work!
The Adobe Runtime CDN assets are served through Akamai, but neither Akamai nor Adobe's origin servers are configured to apply GZIP compression - doh! As a result, requesting the edge JavaScript runtime results in a 100KB uncompressed fetch.
An easy way to spot this is to open DevTools and compare the Size and Content values: Size is the transfer size, Content is the decoded size. The fact that they're effectively the same (the transfer size is slightly larger due to HTTP header overhead) is an immediate red flag for any text-based resource!
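The same check is easy to script; a small sketch using fetch (Node 18+ or the browser console; the URL is an example):

```typescript
// Quick audit: flag text resources served without a Content-Encoding header.
async function checkCompression(url: string): Promise<void> {
  const res = await fetch(url);
  const encoding = res.headers.get("content-encoding"); // "gzip", "br", or null
  const bytes = (await res.arrayBuffer()).byteLength;   // decoded size
  console.log(`${url}: ${bytes} bytes decoded, content-encoding: ${encoding ?? "none"}`);
  if (!encoding) {
    console.warn("No compression applied - likely a misconfiguration for text assets.");
  }
}

checkCompression("https://example.com/runtime.js").catch(console.error);
```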
Enable GZIP compression on your servers (Apache example), regardless of whether you use a CDN or not.
Do not assume that your CDN will GZIP for you, check for the right configuration flags and toggles - e.g. Akamai can do it on your behalf, but you have to click the checkbox!
With GZIP compression in place, the edge runtime is 33KB, which translates to 67% savings in transfer size.
BusinessInsider webfont diet: trimming 1.9MB of unnecessary downloads
A visit to businessinsider.com with an empty cache results in 172 TCP connections, 365 requests, and ~8.5MB of downloads... including ~2.3MB of webfont assets, of which ~1.9MB is entirely unnecessary!
There are multiple problems here. Let's start with a simple one: why do we have duplicate downloads? Inspecting the HTML response shows that the page has a conditional CSS file for IE9 (which is the browser we're using for this test) that declares multiple font-face rules. Confusingly enough, these rules are identical to the ones provided in the main stylesheet. However, while the duplicate declaration results in unnecessary CSS bytes, it shouldn't be a problem - the browser is smart enough to dedupe requests, right?
Well, not quite: domain sharding breaks the dedupe logic. The IE9 CSS file is served from static4.businessinsider.com, while the main CSS file is served from static6.businessinsider.com. Both files use relative font paths, and we end up downloading each font file twice - once from each shard!
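A quick way to audit for this - a sketch using the Resource Timing API, run from the page's console:

```typescript
// Flag assets whose path was fetched from more than one shard hostname.
const byPath = new Map<string, Set<string>>();

for (const entry of performance.getEntriesByType("resource")) {
  const { hostname, pathname } = new URL(entry.name);
  if (!byPath.has(pathname)) byPath.set(pathname, new Set());
  byPath.get(pathname)!.add(hostname);
}

for (const [path, hosts] of byPath) {
  if (hosts.size > 1) {
    console.warn(`Duplicate download across shards: ${path} from ${[...hosts].join(", ")}`);
  }
}
```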
Next, duplicates aside, the site downloads ~841KB of EOTs, but the WOFF equivalent comes in at 319KB - why the difference? WOFF has built-in compression, but so does EOT (MTX). Problem is, the served EOT files are not compressed. In fact, MTX aside, applying regular HTTP compression (i.e. GZIP) to EOT files reduces their size to 313KB!
Finally, you may be wondering, why do we need both WOFF and EOT at all? We don't. IE9 supports WOFF, but has some quirks in how it processes the @font-face declarations, which results in unnecessary downloads of both formats - see fix here.
In summary, BI downloads ~2.3MB of fonts, but with a bit of optimization we can reduce that to ~320KB! That's 1.9MB of savings for the visitor:
Be careful with domain sharding, as it may result in unnecessary downloads and retransmission overhead - see Etsy fail.
Compress your font files! If you're serving EOTs, make sure they use MTX compression, or configure your server to apply GZIP - see Apache example.
Test your @font-face declarations. I'm willing to bet that BI has a large IE9 demographic, all of whom are fetching megabytes of unnecessary font files.
Serve scaled image assets, strip unnecessary meta-data, gzip, minify!
I visited The Norwegian National Security Authority website to get information about the “Shellshock bug”, and noticed the site loading slowly.
The first issue is a common one. The site acts as if it is responsive, but loads the same high-resolution image for all screen sizes and relies on client-side resizing: the image is 3888×2592 pixels and weighs in at 3.7MB, which is far too large even for large desktop screens!
The fix here is to serve scaled image assets. This would drastically reduce the number of unnecessary bytes and eliminate the expensive client-side resize! A great tool to see how much bandwidth you waste with oversized images is http://sizersoze.org:
The second perf fail is more subtle:
The three SVG icons add up to a total of 2.5MB of data! Peeking inside these files, we find many large CDATA chunks:
A bit of googling indicates that these CDATA segments are added by Adobe Illustrator and store Illustrator meta-data that is definitely not needed to render the icon in the browser. Removing the unnecessary CDATA reduces these SVG files to just a few KB in size!
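Stripping this automatically is straightforward; a sketch assuming a Node build step and the svgo package (filenames are examples):

```typescript
import { readFileSync, writeFileSync } from "node:fs";
import { optimize } from "svgo";

// Strip editor metadata (Illustrator CDATA blocks, comments, editor namespaces)
// from an exported SVG before publishing.
const source = readFileSync("icon.svg", "utf8");
const { data } = optimize(source, { multipass: true });
writeFileSync("icon.min.svg", data);
console.log(`before: ${source.length} bytes, after: ${data.length} bytes`);
```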
There are other issues with this site as well: lack of GZIP, missing minification and bundling of CSS/JS, and the above-mentioned SVGs could be combined into a sprite. But the 6MB spent on three icons and an image overshadows these issues.
The lesson learned from these errors is that you should always process files before serving them to the user, and ideally automate the process: scale images, strip unnecessary meta-data, minify. Don’t put the responsibility of resizing images and removing meta-information on the publisher/user, but build this into the system.
For example, make sure all images uploaded in the CMS by a user/publisher are processed server-side - e.g. imageresizing.net. Similarly, the issue with meta-data in the icon files is something that should be discovered by an automated or manual performance budget.
-Sverre Nøkleby
Strip unnecessary meta-data from your images!
Adobe Fireworks uses the PNG file format as its native format, storing metadata in “private chunks” of the file that aren’t used to render it. Unfortunately, if you publish your unoptimized images, the browser still has to download all of that metadata.
In the above case, the server sent down a 10x30 pixel asset with just 164 bytes of image data (the IDAT chunk) and over 46kb of Adobe Fireworks private data.
Fiddler’s ImageView Inspector will show you when your PNG file contains unneeded data, and if you’d like to explore further, check out this great free PNG chunk decoder: http://entropymine.com/jason/tweakpng/
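For a sense of what automated stripping involves, here is a minimal Node sketch that keeps only a whitelist of render-critical PNG chunks; the whitelist and filenames are illustrative, and a battle-tested optimizer is the better choice in practice:

```typescript
import { readFileSync, writeFileSync } from "node:fs";

// Keep only the chunks needed to render the image and drop private/editor data.
const KEEP = new Set(["IHDR", "PLTE", "tRNS", "gAMA", "IDAT", "IEND"]);

function stripPrivateChunks(input: string, output: string): void {
  const png = readFileSync(input);
  const parts: Buffer[] = [png.subarray(0, 8)];       // 8-byte PNG signature
  let offset = 8;
  while (offset < png.length) {
    const length = png.readUInt32BE(offset);          // chunk data length
    const type = png.toString("ascii", offset + 4, offset + 8);
    const chunkEnd = offset + 12 + length;            // length + type + data + CRC
    if (KEEP.has(type)) parts.push(png.subarray(offset, chunkEnd));
    offset = chunkEnd;
  }
  writeFileSync(output, Buffer.concat(parts));
}

stripPrivateChunks("fireworks-export.png", "clean.png");
```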
Lesson learned? If you don’t need it, optimize your images and strip unnecessary meta-data from your image assets! See suggested tools.
Learning from Apple's livestream perf fiasco
Apple's Sept 9th livestream got a lot of press, both good (great products), and bad: broken livestream, site downtime, and so on. We'll leave the products to the press, but let's dissect (some) of the perf problems...

Running the livestream page through WPT shows that even without the video stream the landing page weighs in at 5,790KB - not completely outrageous (we've all seen worse), but pretty hefty. Let's break it down.
348 image requests accounting for ~3.7MB! That's a lot of image requests. Digging deeper, most of them appear to come from the curated feed of tweets and images. More specifically, the page fetches feed.json from the www.apple.com origin, which provides the text and image URLs. At this point, we arrive at our first perf fail:
The JSON file is 531.7KB, and it's served uncompressed! Applying gzip to the file would reduce it to 57KB - that's 90% savings.
To make matters worse, the max-age is set to <10 seconds! On one hand, this is understandable: you want to expire old content and get new updates to your users. But this also significantly reduces the effectiveness of edge caching - apple.com is served via Akamai.
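A sketch of what compressed, edge-friendlier delivery of the feed could look like (the TTLs and paths are illustrative, not Apple's actual configuration):

```typescript
import { createServer } from "node:http";
import { readFileSync } from "node:fs";
import { gzipSync } from "node:zlib";

// Serve the JSON feed compressed and with a longer edge TTL.
// A real server should also honor the client's Accept-Encoding header.
const feed = readFileSync("feed.json");
const gzipped = gzipSync(feed); // roughly 90% smaller for text/JSON payloads

createServer((req, res) => {
  if (req.url === "/feed.json") {
    res.writeHead(200, {
      "Content-Type": "application/json",
      "Content-Encoding": "gzip",
      // Let browsers revalidate quickly, but let the CDN hold it longer.
      "Cache-Control": "public, max-age=10, s-maxage=60",
    });
    res.end(gzipped);
  } else {
    res.statusCode = 404;
    res.end();
  }
}).listen(8080);
```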
Moving on. The fetched JSON file contains 290+ entries, most of which contain some text and a set of image references documenting highlights from the keynote. The site immediately dispatches all of the requests... at once. No on demand loading, just give me all of it, please, and now.
But, 290 entries and 348 image requests? It still doesn't add up. Turns out, the site is fetching not one, but two image assets: a "lores" preview and a full replacement image. The site also appears to adapt to screen DPR, which means the ~5MB total page weight reported by WPT is not the upper bound either, since higher-resolution photos could have been downloaded. Yes, I'm looking at you, you lucky "Retina screen" users - you got the 10MB+ experience! Those image bytes sure add up quickly.

To be fair, the "lores" previews are scaled down and well compressed. That said, something tells me most of them were useless, since the visitor never sees the majority of them - even if you tried, you can't scroll fast enough to see all the previews. Hi, Jank. And now things take a turn for the worse: the max-age on these image assets is also <10s.
Why? No idea. All of them are served via images.apple.com, which is also fronted by Akamai, but once again, a short TTL really doesn't help with caching, which means there were a lot of requests hitting the Apple origin servers. Those poor Apache servers powering Apple's site must have been working really, really hard. I'm not surprised the site was experiencing intermittent outages.
Oh, and speaking of load on origin servers... Remember feed.json? Every 10 seconds the page makes a polling request to the server to fetch the latest version. Combine that with a really short max-age TTL and missing gzip compression, and you've just created a self-inflicted DDoS.
So, lessons learned? Well, there's a bunch:
Compress your JSON. No really, gzip helps.
An on-demand image loading strategy would have significantly reduced the number of requests for latecomers to the stream.
Your images are likely static, cache them with a long TTL!
The "lores" previews seems to have done more harm than good.
Providing a push feed instead of polling, or a "list of recent updates since X timestamp" endpoint, would have significantly reduced the amount of data in feed.json.
Perf fail happens to the best of us.
P.S. The actual media stream was delivered via a different origin (also via Akamai). Did the above site perf problems affect the streaming? Maybe, hard to say. Perf fail is often additive in bizarre and interesting ways.
Image carousel fail: use lazy-loading
I worked briefly as a front-end web developer at a certain Harvard-affiliated institution. These guys were very fond of displaying image carousels on certain pages, which they’d stuff with 40, 50, even 60 images apiece — there was no limit, even though the carousels could only display one to five images at a time; if you visited a page with a carousel, you were going to download a whole lot of images, whether you wanted to or not.

As a direct result of this heavy-handed approach, pages weighed anywhere from 3 MB (the home page) to upwards of 12 MB! Even more egregiously, the carousels were simply suppressed with CSS when viewed on phones, so you’d be wasting your data plan downloading images you’d never see!
The lesson here is that if you’re going to serve a large number of images that users may never see, use on-demand lazy loading! They could have easily prevented the excessive bandwidth consumption by writing a script to download a JSON list of the images in any given carousel, and then dynamically loading the required images as the user interacted with the carousel.
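A minimal sketch of that approach (the manifest URL and DOM structure are hypothetical):

```typescript
// Fetch the carousel manifest, render only the first slide, and load the rest on demand.
interface CarouselManifest {
  images: string[];
}

async function initCarousel(container: HTMLElement, manifestUrl: string): Promise<void> {
  const { images }: CarouselManifest = await (await fetch(manifestUrl)).json();
  let index = 0;

  const img = document.createElement("img");
  img.src = images[0];                 // only the visible slide is downloaded up front
  container.appendChild(img);

  container.querySelector(".next")?.addEventListener("click", () => {
    index = (index + 1) % images.length;
    img.src = images[index];           // subsequent slides load as the user navigates
  });
}
```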
Further reading resources:
Should I use a carousel?
Carousel Interaction Stats
Why Users Aren’t Clicking Your Home Page Carousel
Outdated UX Patterns And Alternatives
Twitter's unoptimized post-logout image (1.1MB)
After logging out from twitter.com we land on a page asking us to download Twitter’s mobile apps. The image used on the aforementioned site is a 1.1MB PNG!
Lossless optimization on Kraken.io could bring this image down to 254KB, and lossy optimization could save a whopping 93%, bringing it down to 71KB - learn more about lossy vs. lossless image optimization.
Lesson learned? Images often account for over 60% of total bytes on your pages! Optimize them all before pushing to production.
XHR CSS fetching breaks resource prioritization
Shubhie Panicker: "We (Google+) were using XHR to download CSS (this was to work around an issue in IE - exceeding the limit on the number of rules in a single stylesheet; for convenience we used XHR for all browsers). When it came to our attention that XHRs are requested at low priority, we decided to run an experiment to see its impact on G+ latency...
In SPDY capable browsers this resulted in a big latency improvement. In Chrome 27 we saw a 4x speedup at the median, and 5x at 25th percentile. In Firefox 21 we saw a 5x speedup at median, and 8x at 25th percentile. No significant impact was seen at the 99th percentile.
In conclusion, in a SPDY world, XHRs are requested at low priority (except sync XHRs), and are opaque to the browser. Using declarative markup like <link> enables the browser to do appropriate resource prioritization."
Good to know that simple (i.e. a <link> tag) works better! For more details, check out the discussion in this G+ thread.
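For reference, the declarative equivalent is a one-liner (the path is illustrative):

```typescript
// Declarative markup lets the browser schedule the stylesheet at the right priority;
// the same request made via XHR is treated as a low-priority, opaque fetch.
const link = document.createElement("link");
link.rel = "stylesheet";
link.href = "/css/main.css";
document.head.appendChild(link);
```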
Twitter's unoptimized profile images
Twitter doesn't optimize profile images submitted by users. Beyond the expected 73x73px JPEGs (~70kb, 13 bytes per pixel) there are images with embedded geolocation data and the truly bizarre...
For instance, one user's profile JPEG is 16kb: 5kb of image data and 10.5kb of 0x20 octets hanging off the end. Twitter appears to have mitigated this problem somewhat by enabling GZIP when serving images... which, by itself, is a topic for another perf fail post!
Lesson learned? If you don't need it, optimize your images and strip unnecessary meta-data from your image assets!
Zealous sharding hurts Etsy performance
Zealous sharding can cause congestion-related spurious retransmissions that degrade page load time - especially for mobile users!
What's a "spurious retransmission"? It's a subset of retransmissions where the bytes had already been received by the client - i.e. the server thinks the bytes were lost and retransmits them, causing the client to receive duplicate bytes. Common cause? Excessive sharding.
As an example, at one point Etsy.com sharded its image asset hostnames 4 ways (img[0-3].etsystatic.com), opening up ~30 connections in parallel. As a result, the servers flood the client with data on those 30 connections, which breaks congestion control and leads to unnecessary spurious retransmissions.
The fix? Simple: reduce the number of shards! There is no one magic number that will work in all cases, but as a rule of thumb, you probably don't need more than two shards.
For more details, see Will Chan's excellent writeup on congestion control, and Jonathan's writeup on how Etsy solved this problem.
Winner of worst time to first render? HBO.com
HBO.com desktop site takes 20s+ to load! At first, I thought it was broken, but as it turns out, it's not... it's just really slow.
First, the page is blocked on downloading a ton of tiny JS files, then the 3MB flash file comes in, followed by more JS and image downloads... The first (useful) render happens at ~20s - check out the video. Yikes.
I'm sure someone had good intentions when they made the decision to implement the whole site in Flash, but boy, did it ever go wrong. That said, Flash is not the only problem; even with Flash, there is no reason why the site can't be optimized to deliver a better loading experience.
On a positive note, their mobile site is much better - no flash!
Tumblr excels at perf fail!
Tumblr's static CDN does not compress, cache, or minify any of their resources! Lots of room for improvement.
Digging deeper, it looks like anytime you upload an asset to static.tumblr.com, the file is stashed directly into S3 and is then served via CloudFront, which doesn't gzip, minify, or provide any kind of sane caching policy for any of the files.
To make matters worse, Tumblr also doesn't play nice with third party CDNs: their custom domain checks fail, forcing you to use their own infrastructure only. So, even if you wanted to, you couldn't easily fix this, short of hosting the assets yourself on a third party domain. Hmm...
How do we solve this dilemma? CloudFront can serve compressed files, but to do that you need to upload the gzip'ed assets to S3 with the correct headers. Also, prior to uploading the compressed files, Tumblr should minify and version them, which would allow a far-future cache expiry. With that, we'd have gzip, minification, and caching in place!
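A sketch of what such an upload step could look like with the AWS SDK (bucket, key, and region are placeholders; this is not Tumblr's actual pipeline):

```typescript
import { readFileSync } from "node:fs";
import { gzipSync } from "node:zlib";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

// Compress and version the asset, then store it with the headers CloudFront will pass through.
const s3 = new S3Client({ region: "us-east-1" });
const body = gzipSync(readFileSync("app.min.js")); // minify before this step

s3.send(
  new PutObjectCommand({
    Bucket: "static-assets",
    Key: "app.min.abc123.js",                  // content-hashed filename enables far-future caching
    Body: body,
    ContentType: "application/javascript",
    ContentEncoding: "gzip",
    CacheControl: "public, max-age=31536000, immutable",
  })
)
  .then(() => console.log("uploaded"))
  .catch(console.error);
```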