I decided to look into AJAX for a class I’m teaching at Interop in New York this week. And it was an interesting project. I’ve talked to some of the guys who popularized it, as well as people who’ve built applications with it, and everyone seems consumed with how great, how flexible, how versatile it is. But few people have looked into the impact it has on networks. (One notable exception is Jep Castelein of Backbase.)
Now, I’m a pretty pragmatic guy; and I can’t code to save my life. But I’ve talked to lots of folks about running large-scale web applications, and there are some fairly basic things that they worry about. Availability, for example. Simply making sure that the site works reasonably quickly for everyone. And maintainability—so that when it does break, we can fix it.
Here’s what I learned.
First of all: What is AJAX? Specifically, it’s a term coined by an expert in usability and application design named Jesse James Garrett. He was referring to a specific combination of client- and server-side technologies that developers use to make cool web sites. Okay, that’s a huge oversimplification. But AJAX is behind things like GMail and Google Maps. And a couple of years ago, people had pretty much decided that web-based mail (Hotmail) and mapping (Mapquest) were technical dead-ends. Now you’ve got e-mail that saves a backup in case you’re cut off; and maps that you can zoom and drag.
Big deal, you say. My PC can do that. Well, that’s not the point. When your web page can do that, it means you can make cool applications that don’t follow the click-and-wait approach we’re used to on the web.
The Internet Geeks (mea culpa) are up in arms about this. Some say it’s the greatest innovation since HTML, freeing us from the tyranny of clumsy forms and inhumane interfaces. Others say it breaks the nice, strong metaphor of pages and documents on the Internet and will result in hundreds of different user interface widgets that are cool to their inventors but incomprehensible to others. Adam Bosworth, formerly Chief Architect at BEA and now over at Google, is on my list of cool people to have dinner with someday, primarily because of some amazing stuff he wrote about the migration from relational databases to message-centric networking. He also has some excellent, hype-free guidelines on AJAX design.
I say, let the chips fall where they may. AJAX and its ilk—more broadly referred to as client-side scripting or rich client interfaces—are here to stay. Want proof? The underpinnings of this technology, which let web page scripts talk XML to servers, have been a part of Microsoft Exchange’s Outlook Web Access since 1998. So you probably used it this week, if you accessed your company’s e-mail from home.
Let’s dispense with the political fallout and look at what AJAX actually does. In my (admittedly unscientific) research, I looked at some popular sites, and compared them to their non-AJAX counterparts. I then talked to several people who spend their time working on these applications to get a sense of where they’re going. My conclusions (so you’ll be intrigued and keep reading) are:
- Many more HTTP requests (“hits”) per second
- A bigger first page, followed by many smaller updates
- In some applications, one hit per keystroke or mouseclick
- Pre-fetching 3-4 times as much data as the user actually needs
So why on earth would someone actually use this stuff? The answers are speed and usability. Applications written in AJAX-like approaches “feel” more responsive, and are therefore more engaging. They also tend to do nice things like checking forms and populating drop-down dialogs more intelligently.
To understand why AJAX has these effects, we need to spend a bit of time digging into networking. Two of the most important concepts in network performance are latency and throughput. They’re different, but related, and they both affect how much data you can pump across a network. It’s easy to use a highway analogy to understand this.
- Latency is how long the trip takes. Think of the speed limit on the highway. If it’s a 60 mph highway, it takes you an hour to go sixty miles. If you double the speed, you get there in half the time (derr, as Jess would say). It also means (on a highway) that more cars arrive each hour. The problem is, there’s a limit to speed. Just as bad things would happen if cars started driving a thousand miles an hour, networks can only move packets of data so fast. And it turns out there’s a good reason for that: the speed of light. A couple of years ago, I gave a presentation in Vegas on performance, and as part of it I did the math on the network latency from New York to Las Vegas. Turns out, it’s always going to take at least 13 milliseconds. So there.
- Throughput is how wide the highway is. A two-lane highway can carry twice as many cars, at the same speed, as a one-lane highway. And this is easy to grow—there’s really no limit to how many lanes a highway can have. The problem is, if you’re in one of the cars, you’re still going to take an hour to get where you’re going.
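That speed-of-light floor is easy to sanity-check yourself. Here’s a minimal sketch in Python; the New York–Las Vegas distance of roughly 3,600 km is my own round-number assumption, and real fiber routes are longer than the great-circle path:

```python
# Back-of-the-envelope minimum latency, New York to Las Vegas.
# Assumptions: ~3,600 km straight-line distance (a round figure),
# and light traveling about 2/3 as fast in fiber as in a vacuum.

SPEED_OF_LIGHT_KM_S = 299_792   # km/s in a vacuum
FIBER_FACTOR = 2 / 3            # refractive index slows light in glass

def one_way_latency_ms(distance_km, medium_factor=1.0):
    """Minimum one-way delay in milliseconds over the given distance."""
    return distance_km / (SPEED_OF_LIGHT_KM_S * medium_factor) * 1000

vacuum_ms = one_way_latency_ms(3600)                 # theoretical floor
fiber_ms = one_way_latency_ms(3600, FIBER_FACTOR)    # closer to reality

print(f"vacuum: {vacuum_ms:.1f} ms, fiber: {fiber_ms:.1f} ms")
```

At the speed of light alone that’s about 12 ms one way, so a 13-millisecond floor is the right order of magnitude, and no amount of extra bandwidth changes it.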
Okay, so we have latency (which we can’t change until the Star Trek days) and throughput (which is easy to fix; there’s already a glut of bandwidth out there and we really aren’t splitting the spectrum much yet anyway.)
Now, consider that latency is tightly related to usability and the productivity of a person. A number of studies into optimal worker productivity (IBM has done some; the underlying concept was pioneered by Mihaly Csikszentmihalyi) show that people are productive when they’re in a state of concentration called flow state. And flow state takes place when people feel like they’re interacting in real time. Here’s my unscientific table of response times and usability.
| Response time | Activities and user experience |
| --- | --- |
| Under 1 millisecond | Video games, virtual reality; browsing, exploration, affordances; UI devices (buttons, etc.), completing forms; voice conversations, moving from area to area |
| 100 milliseconds – 1 second | Instant messenger, authentication, completing a process; some distraction but continued engagement; getting search results, starting a video stream |
| 10 seconds or more | Waiting in a queue; loss of concentration (Alt-Tab to something else); batch-based computation, offline downloads |
So here’s the problem. UI designers want their users to be productive, and to feel engaged in the application. But to get to that “real-time” state, they need to get under 10 milliseconds’ latency—which is tough across the Internet. And if they’re ever going to shake the hold that the desktop has on the design of applications, they need to fix this.
Latency, as we’ve seen, is very expensive. At times it’s downright priceless. And there’s nothing we can do to shrink it. Or is there?
One trick designers can employ is pre-fetching. This is the act of grabbing several things you might need, so that they’re ready when you do need them. In our highway analogy, this is like putting a police car, an ambulance, and a fire truck on the highway as soon as an emergency call comes in. Once you know the nature of the emergency, the delay before the right vehicle arrives goes way down, since it’s already on its way.
“But wait,” you say. “Isn’t that ridiculously expensive, sending three times as many vehicles as you need?” It would be if it weren’t for the fact that latency is fantastically more expensive than throughput. Put another way: Emergency services vehicles are cheap; quick response is precious.
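The economics of that trade can be sketched in a few lines. In this toy model (every number is an illustrative assumption, not a measurement), we compare waiting a full round trip for each item against shipping all the likely candidates ahead of time:

```python
# Toy model: the latency cost of on-demand fetching vs. pre-fetching.
# All parameters below are assumptions for illustration.

ROUND_TRIP_MS = 80      # assumed client-server round trip
ITEM_KB = 15            # assumed size of each fetched item
CANDIDATES = 3          # items pre-fetched "just in case"

# On demand: the user waits one full round trip at the moment of need.
on_demand_wait_ms = ROUND_TRIP_MS
on_demand_kb = ITEM_KB

# Pre-fetched: the data is already local, so the perceived wait is ~0,
# but we shipped every candidate across the wire.
prefetch_wait_ms = 0
prefetch_kb = ITEM_KB * CANDIDATES

print(f"on demand: {on_demand_wait_ms} ms wait, {on_demand_kb} KB")
print(f"prefetch:  {prefetch_wait_ms} ms wait, {prefetch_kb} KB")
```

The throughput cost triples while the perceived latency drops to zero: exactly the ambulance-on-the-highway bargain.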
A second issue AJAX tries to tackle is form prepopulation. In this case, the application tries to anticipate what you want (for example, by showing the most popular search terms or a list of known e-mail recipients.) Google Suggest is one such application.
In fact, Adam Bosworth lists ten places where AJAX makes sense.
| Use case | Impact on the network |
| --- | --- |
| Form driven interaction | Smaller messages, but may trigger additional traffic if autosaved intermittently |
| Deep hierarchical tree navigation | Lazy loading of data is like pre-fetching; trading additional data for improved responsiveness |
| Rapid user-to-user communication | Many small messages; people tend to type shorter sentences |
| Voting, Yes/No boxes, ratings submissions | More, smaller messages |
| Filtering and involved data manipulation | Additional pre-fetching (draggable maps) and more transactions (one message per click-and-drag) |
| Commonly entered text hints/autocompletion | One transaction per keystroke; pre-fetching |
(yes, according to Adam, this is ten entries. But he’s smart so I’ll agree.)
Okay, so I looked at both these cases. For the first one, I put a sniffer onto a Google search and looked up “Coradiant” (where I work.) Then I did the same with Google Suggest. You can try it if you like.
In the first case, there are two web hits—one for the search page, and one for the results page:
Here’s the trace:
In the second, there are as many hits as there are letters (but the hits are much smaller.)
Here’s the second trace:
(notice all of those hits, one per character? Scary…) So we have a lot more back-and-forth HTTP transactions.
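One common way to tame that one-hit-per-keystroke pattern (I’m not claiming this is what Google does) is debouncing: only fire a request once the typist pauses. Here’s a sketch; the 200 ms threshold and the keystroke timings are assumptions I made up for illustration:

```python
# Debouncing keystrokes: send a request only after the typist pauses.
# The 200 ms threshold and the timings below are illustrative assumptions.

PAUSE_MS = 200

def requests_fired(keystroke_times_ms, pause_ms=PAUSE_MS):
    """Count requests sent if one fires whenever the gap to the next
    keystroke is at least pause_ms (the final keystroke always fires)."""
    count = 0
    for this_t, next_t in zip(keystroke_times_ms, keystroke_times_ms[1:]):
        if next_t - this_t >= pause_ms:
            count += 1
    return count + 1  # the last keystroke eventually fires too

# Nine quick keystrokes with one mid-word pause:
times = [0, 120, 240, 360, 900, 1020, 1140, 1260, 1380]
print(requests_fired(times))  # prints 2; naive per-keystroke AJAX sends 9
```

Two hits instead of nine, at the cost of suggestions arriving a fraction of a second later: the same latency-versus-traffic trade as everything else here.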
The second thing I looked at was a side-by-side comparison of a “traditional” mapping application (Mapquest) and an AJAX-enabled one (Google Maps.)
For Mapquest, I zoomed in, then recentered three times:
I did the same for Google Maps:
Here’s what I found.
| Mapquest | Google Maps |
| --- | --- |
| Pause between clicks | None, but repainting in background |
And we have a lot more data.
One of the big reasons for the higher bandwidth is the pre-fetching. In a maps application, you’re getting “tiles” of surrounding map in every direction, but the user’s probably only going to drag in one of them. If you think about this for a bit, you’ll see that we initially load 9 tiles of data, then drag into only one or two of them; so we’re downloading three to four times as many images as we actually need. This is a worst-case scenario; a lot depends on the size of the window and the tiles, etc.
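The three-to-four-times figure falls out of simple tile arithmetic. A sketch, with the tile counts assumed for illustration (a full 3×3 grid fetched, of which the user ends up viewing two or three tiles):

```python
# Worst-case overhead of tile pre-fetching in a draggable map.
# Tile counts are assumptions: the app fetches a 3x3 grid around the
# viewport, while the user actually looks at only a couple of tiles.

tiles_fetched = 3 * 3   # the full grid

for tiles_actually_viewed in (2, 3):
    overhead = tiles_fetched / tiles_actually_viewed
    print(f"viewed {tiles_actually_viewed} tiles: "
          f"fetched {overhead:.1f}x the data needed")
```

That’s 3x to 4.5x, which is where the three-to-four-times estimate comes from; the exact ratio depends on window and tile sizes.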
Here’s what you see:
And here’s what your browser downloaded (sort of):
So most AJAX implementations add more HTTP transactions and pre-fetch more data. They also burden the browser with a lot of processing (though most modern browsers are OK on a fairly current machine), and client-side performance degrades noticeably as the amount of data the browser has to process grows.
For those of us who run networks, this means we’ll get more hits, more often. Those hits will start with a larger object, then a series of small messages or a series of content updates whose size will vary by application.
Some of my conclusions from all this, then, are that the application should:
- Provide clear feedback to the user: Be smart about preloading data and handle the XMLHttpRequest object properly.
- Plan for more throughput and latency complaints. Remember that throughput is cheap but response time is expensive; some people think latency matters four times as much as download size.
- Recognize that the “hypertext” metaphor vanishes, taking the back button, the forward button, and bookmarks with it.
- Without feedback, users will complain. Background errors will seem inexplicable, and slow-downs will be difficult to troubleshoot.
- Demand some key numbers from your developers so you can understand traffic profiles: the message size, the user interaction that triggers a transaction, the amount of data being pre-fetched, and the expected network latency of typical users.
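Those numbers plug straight into a simple load-time model: total time is roughly (round trips × latency) plus (bytes ÷ bandwidth). A sketch, with all figures assumed for illustration, showing why latency rather than throughput tends to dominate a chatty application:

```python
# Simple model: time = round_trips * latency + bytes / bandwidth.
# All parameters are illustrative assumptions, not measurements.

LATENCY_S = 0.08            # assumed 80 ms round trip
BANDWIDTH_BPS = 1_000_000   # assumed ~1 Mbit/s effective throughput

def load_time_s(round_trips, total_bits):
    """Total transfer time for sequential, non-overlapping requests."""
    return round_trips * LATENCY_S + total_bits / BANDWIDTH_BPS

# Traditional page: a couple of big hits. AJAX screen: twenty small ones.
traditional = load_time_s(round_trips=2, total_bits=400_000)
ajax = load_time_s(round_trips=20, total_bits=200_000)

print(f"traditional: {traditional:.2f} s, ajax: {ajax:.2f} s")
```

Half the bytes, yet three times the wire time if the requests run one after another. AJAX still feels faster because those small hits happen in the background while the user keeps working, but every extra round trip is pure latency, which is why the factors above are the ones worth measuring.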
More to follow as I get feedback from Wednesday’s presentation.