Strangeloop shared some fascinating data on web performance and Key Performance Indicators (KPIs) like visit duration and conversion rates. There’s a detailed overview on Watching Websites, and we’ll be sitting down with their VP of Products for a more in-depth look at the study and the results on October 8.
I’m at Velocity in San Jose. Just got in last night, and I wish I could have been here for the whole thing. It’s no exaggeration to say that this is the biggest congregation of people who make the Internet work, in one place, for one subject. Jesse Robbins and Steve Souders, along with O’Reilly, get an amazing group of people together. Even the chat in the speaker room this morning was over my head.
It actually feels like cloud computing and web monitoring are converging very quickly. It’s increasingly obvious that performance, user experience, and revenues are inextricably linked. Microsoft and Google covered this in a joint presentation yesterday, and by now, you’ve probably heard about the number of results Google shows. They asked users how many results should appear on the first results page, then put the answer to the test.
As Google’s VP of products Marissa Mayer points out, users wanted 30 results. But when they turned this on, they saw a 25% drop in searches on the site!
Today, I’m going to write about an equation. I’ll try to make it easy to follow, but it’s still stats and graphs. Stick with it; I’m convinced it will be worth your while, because in my opinion it’s the most important equation in cloud computing. It’s what drives your market, your customers, and your burn rate.
If you build a traditional data center platform for your application, you worry about three variables: the amount of traffic to your site, your capacity to handle that traffic, and the user experience those visitors get, measured in terms like latency. The equation looks like this:
User experience = Traffic / Capacity
As traffic increases, user experience gets worse and delay goes up. This is because each visit to your site consumes resources on your infrastructure, and some users wind up waiting for the app to respond. Networks get full; databases encounter record locking; message queues back up; and so on. Ultimately, some of your visitors have a lousy experience.
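The way delay climbs as traffic approaches capacity can be sketched with a simple queueing model. To be clear, the post doesn’t specify a model; a single-server M/M/1 queue is my assumption here, but it captures the shape of the curve:

```python
# A minimal sketch of the traffic/capacity/experience relationship using
# an M/M/1 queueing model (an assumption, not the post's own math).
# Mean response time = 1 / (capacity - traffic), both in requests/second.

def avg_response_time(traffic_rps: float, capacity_rps: float) -> float:
    """Mean time a request spends in the system, in seconds."""
    if traffic_rps >= capacity_rps:
        return float("inf")  # demand exceeds capacity: the queue grows without bound
    return 1.0 / (capacity_rps - traffic_rps)

# With a fixed capacity of 100 req/s, latency blows up as traffic nears it:
for load in (50, 90, 99):
    print(f"{load} req/s -> {avg_response_time(load, 100):.2f} s")
```

The point isn’t the specific formula; it’s that latency is nonlinear in load, so the last few percent of traffic growth does most of the damage to user experience.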
On-demand computing platforms fundamentally change how you deal with this, because as far as you’re concerned, they have infinite capacity.
Jennifer Bell and the folks at Visible Government took the covers off their much-needed I Believe In Open project. If you’re a Canadian, you should go sign up. Simply put: any elected official who isn’t willing to be transparent and accountable to their electorate has something to hide, and we now have the technology to track their record.
Which makes me wonder what Bitcurrent’s record is. Once upon a time, many of the folks behind Bitcurrent were part of Networkshop, a consulting firm that became Coradiant, a web performance company that helped create the end user experience management space.
Back then, Networkshop talked a lot of trash. We blew the whistle on SSL performance issues, and wrote a huge (250+ page) study on load balancing. We also prognosticated a lot.
Using the Internet Archive’s Wayback Machine, I decided to go scoop up some issues of Networkshop News and see how they stood up to scrutiny nine years later. Here’s one on how networks change if the PC is no longer the dominant client, from March 2000.
How do you think it stacks up?
Within hours of Chrome’s release, many companies were reporting operational issues. This might seem strange: Chrome is supposed to be leaner, faster, and better. But some of those improvements meant headaches for people running websites — and for those monitoring them. We sat down with Gomez CTO Imad Mouline to look at his company’s experience with the Chrome rollout.
Amazon has publicly released a new web service called Elastic Block Store, which provides up to a terabyte of persistent storage per volume and lets you run your database in their cloud, with the advantages of snapshots and flexible attachment to servers.
RightScale, which offers a management and automation system based on AWS, has an excellent article explaining how Amazon’s Elastic Block Store works. In testing, they report over 70 MB/s (that’s over half a gigabit per second) and over 1,000 IOPS (input/output operations per second), roughly the equivalent of a dozen 7200 RPM hard drives serving your data in tandem. They also report that “it is possible to mount multiple volumes on the same instance such that file systems of 10TB are practical.” No doubt much more detailed performance and feature analysis will ensue shortly.
Continue reading “SANs in the cloud”
As I think I’ve written before, Werner Vogels is a very sharp guy. His take on Twitter, after a few beers, was: “This is a hard problem. All the people who are smaller than them are telling them how to fix it. And all the people bigger than them are staying quiet, because they’ve been there before.”
The always-incisive Register opined about this beautifully in Hadoop is Real Software By Real Programmers, and it’s a great read. I try not to post about what other people write, because there’s enough copying on blogs. But this one’s worth the pointer.
“those who [use Hadoop in practice] don’t write about it. Why? Because they’re adults who don’t care about getting on the front page of Digg.”
Cloud computing is the hottest Internet insider buzzword since the technologies to which it owes its existence: Virtualization and Grid Computing.
At May’s Interop Unconference, we explored their intersection in an informal jam session with enthusiastic audience participation, starring Jinesh Varia (Amazon), Kirill Sheynkman (Elastra), Reuven Cohen (Enomaly), Jacob Farmer (Cambridge Computer), and Louis DiMeglio (ScienceLogic).
It’s taken some time to fully digest the results.
To many of us, the cloud is that amorphous blob of semicircular squiggles the IT crowd has been using on whiteboards to represent the internet since the mid-nineties. Clouds mean we don’t care what’s in them.
Once upon a time, that cloud in the middle of the whiteboard used to just represent the network — how to get from here to there. All the interesting stuff happened outside its borders. More recently, however, we’ve started moving the rest of the shapes on the whiteboard into the cloud. Applications and infrastructure are now drawn within the borders of that formerly ill-defined and anarchic etherspace.
If you listen to some overzealous cloudnuts, you’ll hear that pretty much everything is rushing headlong into the Internet’s troposphere. But the truth is much more complex, and rational opinions seem to favor a hybrid future of rich clients, hardware, and software. We’ll have a hugely diverse mix of private and public cloud-based services providing both a back-end and a matrix for device interaction.
Aside: I’ll leave defining cloud computing ad nauseam to other bloggers. For our purposes, it is the trend of outsourcing what you would normally run in your datacenter to an indefinitely flexible computing platform that is billed to you as a utility. Traditional hosting providers don’t count (for me) as cloud providers, but newer managed service providers might, depending on the level of automation and scalability they employ.
So what did the Interop crowd conclude?
We’re presenting at MeshU in a few hours. The subject is web monitoring — not just analytics, but things like synthetic testing, usability, and so on. The Toronto, Montreal, and Ottawa technology community is here in force, and the lineup of presenters is impressive (and intimidating). It’s a big deck, and Slideshare isn’t behaving well. But the presentation (in .pptx format) is available in Bitcurrent’s drop.io dropbox.
Update: Now Slideshare is working, so here’s the deck:
A couple of weeks ago, I was lucky enough to moderate a panel on next-generation databases at Web2Expo. Having database greats Brian Aker, Dave Campbell, and Matt Domo in one place made for great dialogue. In addition to finding out whether RDBMS is dead, we looked at the big challenges of data storage (synchronization, working offline, and a shift towards specialized data models).
We even found out how these three datascenti track their contacts (MySQL’s Aker uses scripts he wrote; Microsoft’s Campbell uses Outlook).
Then last week at Interop, I had folks from platform companies like Google, Amazon, and Opsource together with a number of startups and virtualization tool makers. Again, great dialogue, even on the five-person panel that ran over. This time, the consensus seemed to be that on-demand computing was great for bursty capacity and highly parallel tasks, but lacked the controls, management tools, and SLAs to be a production platform for enterprises at the moment.
But Structure promises to be the most compressed discussion yet. Om Malik, the guy behind the event, says it’s about two things: learning how the new web is built from the architects who built it, and networking with investors who “are looking to place their bets on cloud computing” and see it as a huge opportunity. “Structure 08 is about Getting Web Done,” says Malik.
I have two panels on the same day to moderate:
- Cloud Computing: Infrastructure for Entrepreneurs, featuring Geva Perry, CMO of GigaSpaces; Jason Hoffman, CTO of Joyent; Tony Lucas, CEO of XCalibre; Lew Moorman, SVP Strategy of Rackspace; Christophe Bisciglia, senior software engineer at Google; and Joseph Weinman, corporate development and strategy at AT&T.
- Scaling to Satiate Demand: Tactics from the pioneers, with Sandy Jen, co-founder and VP Engineering of Meebo; Akash Garg, CTO of Hi5; Jeremiah Robinson, CTO of Slide; and Jonathan Heiliger, VP Technical Operations of Facebook.
Each of these will be a fast-and-furious fifty-minute discussion around on-demand computing and the ability to scale. Time to come up with some pithy questions and awkward follow-ups.