By Ilya Grigorik on March 07, 2013
At the risk of sounding repetitive... the average page is now over 1300 kB in size, and over 60% of that is images. Hence, if your time is limited, inspecting and optimizing your image assets will likely yield the highest rate of return.
Case in point: the new data compression proxy for Chrome applies dozens of different content optimizations, but image optimization almost invariably comes out on top. End result? On average, data usage is reduced by 50%! The strategy? Simple: transcode all images to WebP!
With that in mind, I had a chance to sit down with Stephen Konig (Product Manager on the WebP team) to chat about the latest news, the team's progress over the last two years, and where WebP is heading. You can scan through the slides, or watch the GDL on YouTube, but below are a few highlights and resources to help you get started.
The primary focus for the WebP team over the past two years has been on developing the necessary features of the format itself - i.e., the engineering part. The good news is that the end of 2012 marked an important milestone: support for lossy and lossless compression, alpha channel, animation, metadata, color profiles, and more. With all of these features in place, the focus is shifting towards tooling and driving adoption.
In fact, in the time-honored tradition of dogfooding your own products, there is already a large and growing list of Google properties (Gmail, Drive, Picasa, Instant Previews, Play Magazines, Image Search, YouTube, ...) with WebP support. Most recently, the Chrome Web Store switched to WebP, saw a ~30% byte reduction on average, and is now saving several terabytes of bandwidth per day!
In parallel, there are now over 300,000 sites using the open-source PageSpeed libraries, which enable transparent WebP transcoding on Apache (mod_pagespeed) and Nginx (ngx_pagespeed), and there is a growing list of commercial products (Torbit, EdgeCast) which can do similar optimizations - the bandwidth savings are hard to argue against!
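If you are already running one of the PageSpeed modules, turning on WebP transcoding amounts to enabling a filter. Here's a minimal ngx_pagespeed sketch - the exact filter names and availability depend on your PageSpeed version, so verify against the PageSpeed documentation before relying on it:

    # nginx.conf snippet for ngx_pagespeed (illustrative only; check the filter
    # names against your PageSpeed version's documentation)
    pagespeed on;
    pagespeed FileCachePath /var/ngx_pagespeed_cache;
    # Transcode eligible JPEG images to WebP for browsers that support it.
    pagespeed EnableFilters convert_jpeg_to_webp;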
WebP achieves better compression by spending more CPU cycles - that's an inherent tradeoff of any compression algorithm. Today, when compared to JPEG, encoding a WebP image is ~10x slower, and decoding is ~1.4x slower when done on the CPU. Is that a big deal? The answer depends on your application: if you are generating unique, dynamic images on every request, then the extra CPU overhead is something you'll notice. But if the files are (mostly) static, then the encoding time is a non-issue.
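For the static case, you pay the encoding cost once, at build or deploy time. As a quick illustration, converting a single image with the cwebp tool from libwebp looks like this (the file names and quality setting are just examples):

    # one-time conversion at build/deploy time with the libwebp encoder
    cwebp -q 80 photo.jpg -o photo.webp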
More likely, the concern is not over encoding, but over decoding speed. Is 1.4x going to hurt your performance? Once again, it depends on your application - as with any performance metric, measure it. The eBay tech team recently published a great overview of various image optimization techniques:
This test from webpagetest.org compares the page load time of WebP vs. JPEG. The test has one page with 50 images in the WebP format, and another page with the same 50 images in the JPEG format. Because the WebP page had to download fewer bytes (474484 vs. 757228), it completes loading much earlier compared to the JPEG page... If you track your web site's browser usage stats and find that Chrome/Opera users are a sizable chunk, using WebP images will improve the page load time for these users.
Despite the extra decoding time, the visual rendering time is much faster due to fewer bytes shipped. Further, for a significant segment of the population, bytes are (literally) expensive: bandwidth caps are a real constraint for many users, especially on mobile devices and in the developing world.
Deploying WebP in a native app, whether iOS or Android, is actually very straightforward: Android 4.0+ has native support for WebP, and there is a backport for earlier versions; on iOS you can use the official libraries provided by the WebP team (tutorial, demo app). Since you control the display logic and the platform, you can safely convert all assets to WebP and save on data transfers both to and from the device - WebP helps equally well for uploads!
On the web, things get a bit more difficult, but still manageable. Chrome and Opera have native support for WebP, and there is an open discussion with the Firefox team. There are also third-party plugins for Safari, and Chrome Frame provides support for IE - however, you can't rely on these plugins being present. Instead, for the time being you have to fall back to User-Agent detection on the server, or use a JavaScript check (a minimal sketch of such a check is below) - in fact, there is even a JS polyfill! Finally, there is work in progress to fix Accept negotiation to make this entire process much easier.
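If you would rather feature-detect on the client than parse User-Agent strings, the usual trick is to ask the browser to decode a tiny inline WebP image and check whether it succeeds. A minimal sketch of that approach - the function name and callback shape are just for illustration, and the data URI is the commonly circulated 1x1 lossy WebP test image:

    // Feature-detect WebP support by decoding a tiny inline WebP image.
    // Usage: checkWebP(function (supported) { /* swap image URLs here */ });
    function checkWebP(callback) {
      var img = new Image();
      img.onload = function () {
        // A successful decode reports natural dimensions greater than zero.
        callback(img.width > 0 && img.height > 0);
      };
      img.onerror = function () {
        callback(false);
      };
      // 1x1 lossy WebP, base64-encoded.
      img.src = 'data:image/webp;base64,UklGRiIAAABXRUJQVlA4IBYAAAAwAQCdASoBAAEADsD+JaQAA3AAAAAA';
    }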
The most popular technique today, and the one used by PageSpeed and other optimization products, is to rely on server-side detection: run a User-Agent check and serve the HTML with either WebP or non-WebP image links. What are the User-Agent rules? PageSpeed is open source, so the answer is right in the code. Further, to simplify this process, I've translated the rules into sample configuration files for Varnish and Nginx:
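The full files cover the edge cases, but here is a simplified Nginx sketch of the idea - flag WebP-capable clients with a User-Agent regex and forward that capability to the backend. The regex and backend address below are placeholders, not the exact PageSpeed rules:

    # Simplified sketch for nginx.conf (http context). The real User-Agent rules
    # are more nuanced about minimum Chrome/Opera versions; this regex is a
    # placeholder for illustration only.
    map $http_user_agent $webp_support {
        default                 "";
        "~*Chrome/2[3-9]"       "lossy, lossless";
        "~*Chrome/[3-9][0-9]"   "lossy, lossless";
    }

    server {
        listen 80;
        location / {
            # Report WebP capability to the application server.
            proxy_set_header WebP $webp_support;
            proxy_pass http://127.0.0.1:8080;  # hypothetical app server
        }
    }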
With the above detection in place, Varnish and Nginx will report an extra WebP: lossy, lossless header to the app server, allowing your application to generate customized HTML based on available WebP support. In turn, the static image assets can be safely cached either on the local file system or on any CDN service. The only other caveat is that the HTML must be marked with Cache-Control: private to ensure that an intermediate cache does not accidentally serve the wrong file to the wrong browser.
For more details on WebP, scan through the slides, and check out the GDL on YouTube for additional context.