Rails in the Wild: 5 Client-Side Performance Observations

We’re putting together the presentation materials for our five-city “Rails Performance in the Cloud” Roadshow at the end of the month (Boston, Austin, Seattle, LA and Chicago). We’ll be presenting findings on page load performance from an informal survey of 100 North American Rails websites. Some of the results are not what you’d expect, so we thought we’d share a few of them early (come to the Roadshow and be the first to hear the rest of the analysis!).

1. It’s easy to forget to compress your JavaScript and CSS

It seems like people are pretty good about compressing their HTML and images, but for whatever reason, a lot of people forget to tell mod_deflate to compress JavaScript and CSS files. JavaScript payloads are becoming a much bigger share of total downloads (even a majority of the payload for many sites), and since script downloads block the rest of the page from loading, serving them uncompressed is especially costly.
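If your JavaScript and CSS are actually served through the Rails stack (rather than straight off disk by Apache), there’s also an app-level fallback. A minimal sketch, assuming a Rack-based Rails app; for Apache-served static files the fix belongs in your mod_deflate configuration instead:

    # In your application config (Rails has been Rack-based since 2.3).
    # Rack::Deflater gzips responses from the app, including the JS and
    # CSS it serves, for any client that sends Accept-Encoding: gzip.
    config.middleware.use Rack::Deflater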

2. Watch out for slow third-party services

Some of the big outliers in page load performance are caused by poor response times from third-party services. Services like Google Ads and Analytics, DoubleClick, and Facebook Connect can kill your performance if you load them early, so almost all sites (sensibly) put them as late as possible in the page. Response times of up to eight seconds from Google Analytics are not uncommon, so a badly placed tag can put a big road-bump in your page load. Google Analytics, in particular, uses document.write, which blocks rendering wherever it runs; most people work around it, but a significant minority of sites don’t.
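In a Rails app the usual defense is structural: render third-party tags after the page content, just before the closing body tag, so a slow tracker only delays itself. A minimal layout sketch; “shared/analytics” is a hypothetical partial holding the GA snippet:

    <%# app/views/layouts/application.html.erb -- a minimal sketch. %>
    <body>
      <%= yield %>
      <%# Third-party tags load last, so a slow tracker can't block the page. %>
      <%= render "shared/analytics" %>
    </body>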

3. Using multiple image hosts doesn’t always mean higher performance

There’s nothing magical about using multiple image hosts. It should produce higher performance by allowing parallel downloads, but only if the hosts behind those names have the resources to keep up. An interesting performance outlier in the survey was a site that had configured multiple image hosts but was seeing multi-second response times from them, probably a good sign that the hosts were under-resourced.
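For reference, Rails makes spreading assets across hosts a one-line change, which is part of why it’s easy to flip on without provisioning the hosts behind it. A minimal sketch with placeholder hostnames:

    # config/environments/production.rb -- a minimal sketch
    # (assets%d.example.com is a placeholder). Rails expands %d to 0-3,
    # spreading asset URLs across four hostnames so the browser opens
    # more parallel connections -- which only helps if each of those
    # hosts can actually answer quickly.
    config.action_controller.asset_host = "http://assets%d.example.com"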

4. S3 is NOT a web server!

Amazon S3 is a reliable, cheap storage service, but don’t treat it like just another web server. Response times in the wild were regularly between 0.5 and 1.5 seconds, so make sure you’re not serving performance-sensitive content from it; if you must, try to use pre-loading to hide the latency. Unlike a regular web server, S3 (still) won’t gzip content on the fly, so compress your text assets yourself before uploading and set the Content-Encoding header. It won’t optimize images for you either, so run them through a tool like Yahoo’s Smush.it to reduce file sizes before you put them up there.
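Here’s a sketch of the text-asset half of that, using the aws-sdk-s3 gem (the bucket name and key are placeholders): gzip the file yourself before uploading, and set Content-Encoding so browsers know to inflate it.

    # A minimal sketch using the aws-sdk-s3 gem; bucket and key are
    # placeholders. S3 serves bytes verbatim, so we compress up front
    # and label the object so browsers decompress it transparently.
    require "stringio"
    require "zlib"
    require "aws-sdk-s3"

    css = File.read("public/stylesheets/application.css")

    # Gzip the stylesheet in memory.
    buffer = StringIO.new
    gz = Zlib::GzipWriter.new(buffer)
    gz.write(css)
    gz.close

    Aws::S3::Client.new.put_object(
      bucket:           "my-assets-bucket",             # placeholder
      key:              "stylesheets/application.css",
      body:             buffer.string,
      content_type:     "text/css",
      content_encoding: "gzip"                          # browsers will inflate
    )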

5. Most performance variability is NOT attributable to page factors

When we did the analysis, we found that less than half of the total page-load time in our sample was attributable to front-end factors like the number of HTTP requests the page makes, the size of the page payload, and whether images were being scaled in HTML. A majority of the variability (a little surprisingly) was attributable to things that had nothing to do with page construction, basically network and back-end factors.

To learn more about what we found on average page response time, page size targets, and HTTP chatter, as well as what the analysis said about front-end performance, sign up for the Rails Roadshow, coming to a city near you in two weeks’ time. We’ll be presenting along with our partners New Relic, Soasta, Amazon, CVSDude and more.

See you there!

posted on 2009-10-30 16:44 by only4agile