Web performance management book

I’ve got the beginnings of a book on web performance that Sean Power and I are collaborating on. My last book is so ancient it’s practically fossilized; it also deals with lower-level network plumbing in a broader sense, rather than higher-level, protocol-specific performance.

Excerpt from the intro, after the jump.

To understand the problem, it helps to look back at the way computer networks are designed. By definition, a network means that several machines can talk to one another. Those machines won’t all come from one company, and the applications won’t all be written by one person. So we need standards: agreements on how the machines will communicate.

Before the modern Internet, networks were generally built by large companies. They convened committees and bashed out standards such as X.25, Frame Relay, and Ethernet. The standards were usually set in stone; once a switch was built, you didn’t change it much. Back then, computing was expensive—so you built a machine to send data in a specific way, efficiently, with very little waste. Many of the technologies we rely on today—ATM, for example, or IPSEC—were the result of committees.

In the 1960s and 1970s, there were two attempts to build global public networks. Both were government-backed; one was in Europe, and one was in the US. The European approach was to define everything in painstaking detail, and one of the key elements of their standard was the OSI model. This model laid out, across seven distinct layers of rules, the various functions a network could offer. It was ambitious; it was brilliant; and it failed.

At the same time, engineers in the US were taking a different approach. The US government’s Defense Advanced Research Projects Agency (DARPA) was specifically created to do work that traditional, mainstream government organizations couldn’t. Engineers at DARPA had an unwritten rule of “rough consensus and running code,” in which several different implementations were built, and the one that worked best won. This turned out to be to their advantage.

Neither the Europeans nor the developers at DARPA could foresee today’s web. They didn’t even know about the web back then—they were just building applications to let universities talk to one another, or to log into machines using a terminal. But where the OSI model tried to create standards for applications that hadn’t been invented yet, the DARPA model let people add things when necessary.

And add they did. From voice-over-IP to streaming video to peer-to-peer networks, the Internet protocols succeeded precisely because they encouraged innovation. While there’s a standards body that documents standards (the Internet Engineering Task Force, or IETF), services like YouTube and Skype managed to deliver streaming video or good-quality phone calls over the Internet without getting the IETF to agree on a standard first. In fact, most of the things we expect from the Internet today started without standards: rich clients, drag-and-drop, audio, video, real-time two-way chat.

We even created some simple standards in response to the complexity of committee-driven technology. SSL VPNs were a simple alternative to IPSEC networks; MPLS delivered most of what ATM promised. On the Internet, a rich underlying framework that could run most things, pretty well, won.

There’s another reason the Internet turned out the way it did. Modern computers are built on general-purpose processors that can be adapted to many tasks: the same CPU can play video games, forward packets, or stream voice. Processors are fast and processing is cheap, and because the behaviour lives in software rather than being burned into the silicon, we can patch it. The old model of defining something precisely and then etching it into single-purpose chips is gone, replaced by a world where, for the most part, it’s okay to guess, roll it out as a beta, and patch it later. This means we can innovate with a safety net, building first and asking questions later.

The downside to this innovation was the variety it gave us. It’s great to have lots of options, but while there’s one Microsoft operating system (in a few flavours), there are literally hundreds of competing Linux distributions.

Along the way, we got hooked on this innovation. It’s given us Attention Deficit Disorder: what was unthinkable a week ago and novel yesterday is expected today (and out of date a week from now).

The public web has fuelled this what-have-you-done-for-me-lately approach. Hotmail stole millions of users from desktop clients with a web interface; Google Maps revolutionized travel with a draggable approach to navigation. Today, these technologies are considered “table stakes” for any modern web application.

Public expectations are driving features in business software. We use a fancy visualization at home, and we expect it in our ERP or Business Intelligence application at work the next morning.

Software makers are listening. Oracle has announced that its entire Fusion platform will rely on AJAX (a set of techniques that let the browser exchange data with back-end servers asynchronously, improving usability and offering “desktop-like” behavior in a web application). Microsoft’s investment in Office Live and Windows Live embraces the web. As a result, companies are getting Web 2.0 whether they like it or not.
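To make the AJAX pattern concrete, here’s a minimal sketch in TypeScript of the kind of exchange it enables: the browser fetches data in the background and updates one part of the page, with no full reload. The /api/orders/latest endpoint and the order-status element are hypothetical placeholders, and a 2007-era application would have used XMLHttpRequest rather than fetch.

```typescript
// Minimal AJAX-style update (sketch): fetch data asynchronously and refresh
// one element in place instead of reloading the whole page.
// The endpoint and element id below are hypothetical.
async function refreshOrderStatus(): Promise<void> {
  const response = await fetch("/api/orders/latest", {
    headers: { Accept: "application/json" },
  });
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status}`);
  }
  const data: { status: string } = await response.json();

  // Only this element changes; the rest of the page stays put,
  // which is what produces the "desktop-like" feel.
  const el = document.getElementById("order-status");
  if (el) {
    el.textContent = data.status;
  }
}

// Poll every few seconds rather than making the user reload the page.
setInterval(() => {
  refreshOrderStatus().catch((err) => console.error(err));
}, 5000);
```

The point for performance management is that these background requests never show up as page loads, which is exactly why the traditional page-centric measurement model starts to break down.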

The web has always been an open platform. From its first days, the TCP/IP protocols on which the Internet relies were a quick-and-dirty alternative to complicated standards designed by committee. If you’re in web operations, it’s your job to try to run a mission-critical site using technologies that may well have been invented on a napkin.

And that’s what web performance management is about.

No publication details yet, but there’s lots planned. In particular, we’re going to propose a new performance model (rather than the traditional pages/objects and host/network times) that’s better suited to a Web 2.0 world of asynchronous, bi-directional client-server communications.