Android’s UX architecture needs work. UI compositing and the view system are both primarily done in software. Garbage collection and async operations frequently block UI rendering.
Charles’s article sparked a wildfire of comments, both on the post itself and on Hacker News, along with a writeup by John Gruber, who like many others found amusement in the defensive comments of Jonathan Rockway (@jrockway) and other Android apologists. Rockway later admitted to turning off animations entirely, adding to the amusement.
Setting aside the arguments of people who seem content with hands over their eyes and fingers in their ears, Charles had a point. Technically, Android’s architecture was inferior, and it had a measurable effect on the user experience.
Things have come a long way since then, so I think it’s only fair to take another look at Android’s graphics pipeline to see if the issues have been addressed.
A short disclaimer
I write software exclusively for iOS, so you should consider what follows to be biased, at least by observation. I lack insight into the technical depths of Android development; however, I like to think I’ve made an attempt over the years to keep abreast of Android’s technologies & platform capabilities.
Google Developer Day Sydney
At the most recent GDD Sydney, Ankur Kotwal (@ankurkotwal) gave a talk on High-performance graphics for Android which is the catalyst for this article. I’m not sure if Ankur’s slides are online, but they’re near identical to the slides from Romain Guy’s (@romainguy) Google I/O 2011 talk on Android Accelerated Rendering.
Android pre-3.0, the problem restated
As Charles pointed out in his article, the 2D primitives (lines, boxes, text, gradients, images etc.) used to compose a view in an Android app prior to 3.0 are drawn/rasterised on the CPU by the Skia graphics engine. The contents of each view are then combined/composited by the GPU before being displayed on the screen.
Drawing on the CPU isn’t the problem, though. Quartz 2D / Core Graphics — Skia’s equivalent on iOS — performs the majority of rasterisation on the CPU too.
So if the issue isn’t drawing on the CPU, what is? It’s that on Android pre-3.0, the view hierarchy is redrawn on the CPU on each tick of the render loop if any view changes.
Scroll a list? Redraw. Pinch & zoom? Redraw.
That’s fine, if you can keep up.
You need to be able to draw on the CPU & copy the pixels to the GPU quickly enough that you don’t lag behind the display’s 60 Hz refresh rate.
Android phones can do this just fast enough, most of the time, so the solution was considered “good enough” by Google. Start running Android on a bigger screen with more pixels, though, and you’ve got yourself a pipeline problem. TVs & tablets mean bigger screens.
The problem is illustrated quite clearly in a chart from Romain Guy’s Google I/O 2011 talk on Android Accelerated Rendering comparing pixel count and memcpy performance on the HTC G1, Motorola Droid, Google Nexus One, Google Nexus S & Motorola’s XOOM tablet:
Interestingly, neither the Motorola Droid nor the Nexus One quite had enough memory bandwidth available to transfer screen-sized pixel buffers to the GPU. The Nexus S gained some speed, but the Motorola XOOM made it clear that something had to change.
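To put rough numbers on it: copying a full-screen 32-bit buffer every frame costs width × height × 4 bytes × 60 per second of memory bandwidth, before any drawing has even happened. A quick back-of-the-envelope calculation (the resolutions are approximate native resolutions for the devices in the chart; the figure ignores rasterisation cost entirely) shows why the XOOM tipped the scale:

```java
// Back-of-the-envelope: bytes/second needed just to copy a full-screen
// 32-bit RGBA buffer to the GPU at 60 frames per second.
public class BandwidthEstimate {
    static long bytesPerSecond(int width, int height) {
        return (long) width * height * 4 * 60; // 4 bytes/pixel, 60 Hz
    }

    public static void main(String[] args) {
        // Nexus One (800x480): roughly 87 MB/s just for the copy.
        System.out.printf("Nexus One: %d MB/s%n",
                bytesPerSecond(800, 480) / (1024 * 1024));
        // XOOM (1280x800): roughly 234 MB/s -- nearly triple the phones.
        System.out.printf("XOOM:      %d MB/s%n",
                bytesPerSecond(1280, 800) / (1024 * 1024));
    }
}
```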
When you’re scrolling a list, the pixels of the views within the list aren’t changing, they’re simply moving. The GPU already has the pixels of each view in the form of textures, so why not move the texture instead?
This is exactly what iOS does, and it’s why iOS has never suffered the consequences of CPU-based rasterisation.
In the case of scrolling a list we’re simply moving a view, but views can also be scaled, rotated, clipped, made translucent, transformed in 2D or 3D space, or even have pixel shaders applied to them. All these things (and others!) can be done to a view’s contents without needing to redraw & send a new texture to the GPU.
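The difference is easy to model in plain Java (the names Layer, rasterise and scrollTo are mine, not Android’s — this is a sketch of the idea, not framework code): the view’s pixels are rasterised once into a cached “texture”, and scrolling only updates a transform, so the expensive step never re-runs.

```java
// Toy model of texture-based compositing: rasterise a view once, then
// move the cached texture around with a transform instead of redrawing.
public class Layer {
    private int[] texture;   // cached pixels, as the GPU would hold them
    int offsetX, offsetY;    // compositing transform (here, a translation)
    int rasterCount;         // how many times we paid the expensive step

    int[] rasterise() {      // stands in for Skia drawing on the CPU
        rasterCount++;
        return new int[320 * 480]; // pretend screen-sized bitmap
    }

    void scrollTo(int x, int y) {
        if (texture == null) {
            texture = rasterise(); // CPU draw + GPU upload happens once
        }
        offsetX = x;               // later frames only move the texture
        offsetY = y;
    }
}
```

Scrolling through a hundred positions costs one rasterisation; the pre-3.0 model pays for a full redraw on every one of those frames.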
Android Catches Up
Android 3.0 (Honeycomb) bridged the gap between the iOS & Android rendering architectures, allowing developers targeting API level 11+ on tablets to opt in to the new hardware-accelerated pipeline.
Android 4.0 (Ice Cream Sandwich) has done the same, bringing the hardware accelerated pipeline to phones and turning it on by default for all applications built against Android API level 14.
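On API levels 11–13 the opt-in is a single attribute in the manifest, applied application-wide (it can also be set per activity, and individual views can fall back to software rendering via View.setLayerType where the GPU backend misbehaves):

```xml
<!-- AndroidManifest.xml fragment: opt in to GPU-accelerated drawing
     (API 11+); on API 14+ this is the default for apps targeting
     level 14. -->
<application android:hardwareAccelerated="true">
    <!-- activities, etc. -->
</application>
```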
Android Goes One Step Further
Rather than simply generating a CPU-rasterised bitmap ready for the GPU, the Skia 2D drawing APIs generate an ordered list of operations. Each view maintains a list of those operations in a DisplayList. Android 3.0 introduced a new OpenGL ES Skia backend, which takes each of those DisplayLists and runs the operations on the GPU. Pretty neat!
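The recording idea can be sketched in plain Java (the names here are mine — Android’s real DisplayList is internal to the framework): drawing calls are recorded as operations rather than executed immediately, and a backend replays them later, on the GPU in Android 3.0’s case.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a display list: record drawing operations, replay them later
// against whichever backend (CPU or GPU) is available.
public class DisplayListSketch {
    interface DrawOp { String replay(); } // one recorded operation

    static class DisplayList {
        private final List<DrawOp> ops = new ArrayList<>();

        void drawRect(int x, int y, int w, int h) { // record, don't draw
            ops.add(() -> "rect " + x + "," + y + " " + w + "x" + h);
        }
        void drawText(String s, int x, int y) {
            ops.add(() -> "text '" + s + "' @" + x + "," + y);
        }
        List<String> replayAll() { // the backend runs the recorded ops
            List<String> executed = new ArrayList<>();
            for (DrawOp op : ops) executed.add(op.replay());
            return executed;
        }
    }

    public static void main(String[] args) {
        DisplayList list = new DisplayList();
        list.drawRect(0, 0, 100, 50);
        list.drawText("hello", 10, 30);
        // Nothing has been rasterised yet; replay happens on demand.
        System.out.println(list.replayAll());
    }
}
```

Because the list describes *what* to draw rather than holding finished pixels, the same recording can be replayed by a CPU rasteriser or handed to a GPU backend.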
This technique of keeping an ordered list of drawing operations is similar to PDF’s object graph:
…each page in a PDF document has a page content stream—an ordered list of drawing operations that describes how to render the page-content when it is drawn.
Some of you may have noticed that quote’s from a book on Mac OS X’s Quartz framework. Here’s why:
Quartz’s internal imaging model correlates well with the PDF object graph, making it easy to output PDF to multiple devices.
i.e. Quartz & Skia have a similar internal model.
In fact, Apple released something similar to Skia’s OpenGL backend called Quartz 2D Extreme in Mac OS X 10.4 (renamed QuartzGL in 10.5), but never ported the technology to iOS.
I’d always thought QuartzGL wasn’t ported due to lacklustre performance, based on Matt Gallagher’s article comparing Quartz & QuartzGL performance on Mac OS X. However, it seems Google’s results have been rather positive, as can be seen in this chart from Romain Guy’s Google I/O 2011 talk on Android Accelerated Rendering:
These results seem rather impressive!
The changes to Android’s rendering pipeline in 3.0 & 4.0 bring it into the same league as iOS & Windows Phone 7. Hopefully the migration from 2.x to 4.x is smooth, given the minor incompatibilities between Skia’s CPU & GPU backends.
It’ll be interesting to see whether Apple decides to provide a GPU rasterisation backend to Quartz 2D / Core Graphics on iOS. It doesn’t seem to be a pain point, but you never know.
I’ll definitely be keeping my eye on the Nexus Prime. If it’s any good, I may need to stop spectating from the edges and get my hands dirty.