Why I like my data near my logic

I use an email client on my iMac. When it can’t get to the server, it still works. But sometimes, on a slow day with an unreliable network like the one I’m on right now, I don’t realize that I have mail waiting for me. The disconnect between my client-side logic and the server-side data camouflages the fact that the network isn’t working.

By contrast, when I use GMail’s web interface to read my mail, I know when I have new messages. Because Google controls the processing (on its servers) and the data (right next to them), the two are connected. No camouflage there: If the network sucks, I know it.
Continue reading “Why I like my data near my logic”

Amazon's new CDN: More than just footprint in Asia

Amazon’s rolling out an extension to its S3 storage offering that will help move content closer to users, reducing WAN latency. “Using a global network of edge locations this new service can deliver popular data stored in Amazon S3 to customers around the globe through local access,” announced Amazon CTO Werner Vogels on his blog. Om beat me to the punch on this one and has a great writeup, too.

The service gives Amazon a much-needed footprint in Asia, but also serves notice to CDN companies that the days of long-term, minimum-rate, negotiated contracts and favored pricing are nearing their end.
Continue reading “Amazon's new CDN: More than just footprint in Asia”

Sitting on the frontlines of the Chrome rollout

Within hours of Chrome’s release, many companies were reporting operational issues. This might seem strange: Chrome is supposed to be leaner, faster, and better. But some of those improvements meant headaches for people running websites — and for those monitoring them. We sat down with Gomez CTO Imad Mouline to look at his company’s experience with the Chrome rollout.

Following its launch, Chrome rocketed to roughly 1% market share practically overnight, according to some sources, and although its use is tailing off a bit, this was a significant enough change in traffic to cause problems. “Small differences under the hood of the browser can lead to big issues in application delivery,” said Mouline. “For example, Chrome has a different connection profile with up to 6 connections per host,” which increases TCP session concurrency. “The use of millisecond timing for the Javascript setInterval function also causes issues.”
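To see why timer granularity matters, here’s a rough back-of-the-envelope sketch. The 10 ms clamp figure and the polling pattern are assumptions for illustration, not Gomez’s measurements; the point is simply that a browser honoring a 1 ms setInterval delay fires far more often than one that clamps it.

```javascript
// Illustrative sketch: browsers before Chrome typically clamped
// setInterval delays to roughly 10-15 ms, while Chrome honored
// millisecond granularity. A page that polls its server from a
// tight timer therefore generates many more requests in Chrome.
function pollsPerSecond(requestedDelayMs, clampMs) {
  // The effective delay is the requested delay, floored at the clamp.
  const effectiveDelay = Math.max(requestedDelayMs, clampMs);
  return Math.round(1000 / effectiveDelay);
}

const clampedBrowser = pollsPerSecond(1, 10); // older browsers: ~100 callbacks/s
const chromeLike = pollsPerSecond(1, 1);      // Chrome-like: ~1000 callbacks/s
```

An order-of-magnitude jump in callback frequency like this, multiplied across every visitor who switched browsers overnight, is the kind of “small difference under the hood” that shows up as load and concurrency problems on the server side.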

Continue reading “Sitting on the frontlines of the Chrome rollout”

Human 2.0 is the Next Big Thing

We’re about to upgrade the human race. It’s more than a technology shift; it’s a cultural one. And it’s perhaps the first step toward the singularity. This is most of what I’ve been thinking about lately. We’re sliding into it day by day, without noticing. I firmly believe it is the most significant change the human race faces, and it’s going to drive a tremendous amount of business and fuel wide-ranging ethical discussions. Most of the other technologies we cover here and elsewhere are simply building blocks for Human 2.0.

This is the first of many posts on the subject, and it sounds a bit muddy. Hopefully we can clarify that in the coming months. But if you’re willing to wade through some still-addled thinking, read on.

Continue reading “Human 2.0 is the Next Big Thing”

IT needs to stop being so Canadian

In modern companies, information drives everything from product planning to sales to finances. The flow of knowledge throughout a company is a critical asset.

There’s gold in that traffic—real-time business intelligence, risks and threats, customer insight. IT is custodian of that information, but most of the time it simply passes on raw data to the rest of the company. And that’s wrong.

If it is to remain relevant, IT must stop being a resource economy and become a producer of finished goods. This has happened before, and it’s a history lesson anyone in information technology needs to study.

Continue reading “IT needs to stop being so Canadian”

Never a truer word was spoken

As I think I’ve written before, Werner Vogels is a very sharp guy. His take on Twitter, after a few beers, was: “This is a hard problem. All the people who are smaller than them are telling them how to fix it. And all the people bigger than them are staying quiet, because they’ve been there before.”

The always-incisive Register opined about this beautifully in “Hadoop is Real Software By Real Programmers,” and it’s a great read. I try not to post about what other people write, because there’s enough copying on blogs. But this one’s worth the pointer.

“those who [use Hadoop in practice] don’t write about it. Why? Because they’re adults who don’t care about getting on the front page of Digg.”

What Kitchen-Aid taught me about cloud computing

If you’re even slightly interested in utility computing — the move towards on-demand, pay-as-you-go processing platforms — then Nick Carr’s The Big Switch is a must-read. You may not agree with everything he says, but his basic thesis is compelling: Just as we went from running our own generators to buying electricity from the power company, so we’re going to move from running our own computers to buying computing from a utility.

Because I spend a lot of my time writing, I’m constantly trying to out-guess the future. And something I’m obsessed with right now is appliances. Not virtual appliances, or network appliances, but simple appliances like pasta makers, bread machines, meat grinders, blenders, and so on.

If you look at the history of the electrical industry, the businesses that became interesting immediately after ubiquitous power was available were those you could plug into it. Generators were boring; but fans, irons, and fridges were really, really cool.

I touched on the topic back in May at Interop (there’s a Slideshare of the deck here on Bitcurrent), and I think it’s worthy of a lot more consideration because, well, Costco had a sale on Kitchen-Aid mixers.

My wife is an extraordinary cook and an even better baker. And she’s long lusted after a Kitchen-Aid. They’re something of a cult, with a powerful motor, a custom-fit bowl, and dozens of attachments. Most people are happy with a hand-mixer, or a whisk, but there’s an obsessed segment of the market, the Really Serious Home Baker, full of those who simply must have a Kitchen-Aid. So this grey, intimidating, vaguely Cylon-like appliance dominates our countertop.

The Kitchen-Aid is at its core a motor. Its most common use is as a mixer, whisk, or dough hook. But it has attachments that can grind sausage, make ice-cream, roll pasta, shuck peas, and so on. It was conceived in an era where motors were expensive, and attachments were cheap. Here’s a great photo of a precursor to the modern Kitchen-Aid.

Today, motors are cheap. We don’t even think about them. We build them into everything, which is why gift tables at weddings are festooned with single-purpose appliances. And the Kitchen-Aid is the workhorse of near-professionals who demand a 600-watt motor that can tug even the toughest foods into submission.

User interfaces are the modern equivalent of appliances. Until recently, the Internet’s user interface was a desktop computer. Connecting to the Internet was a lot of work for a device: Network signaling, properly rendered graphics, keyboard and mouse, a display with enough resolution, and so on. It required a dedicated machine. The “motor” was expensive, the attachments were cheap. So we put many applications on our PC: Mail, instant messaging, games, document viewers, file storage, mapping software, videoconferencing, and so on.

But all that has changed. We now have set-top boxes, game consoles, PDAs, cellphones, book readers, SANs — hundreds of devices, all able to access the Internet, all purpose built. That PC in the room is increasingly the jack of all trades, and master of none. The motor is cheap; the attachments matter now.

There are things the PC is still best for: Workstation tasks, like graphic design or software development. But if you want to understand the future of consumer electronics and user interfaces when CPUs are ubiquitous, consider what happened to kitchens when the motor was everywhere.

Self-powered appliances were all about convenience and portability: You don’t have to set up, dismantle, and clean your Kitchen-Aid every time you want to do something, and you can use an immersion blender single-handed over a hot stove-top. In other words, while many cooks crave a Kitchen-Aid, few use it to grind their morning coffee.

We still have to deal with gadget sprawl. Just as everyone has spare hand mixers and blenders secreted away at the back of their kitchen cupboards, so we’re struggling with multiple devices and seeking a way to reduce them. Certainly, high-end PDAs like the Blackberry, iPhone, Windows Mobile devices or the Nokia N95 are tackling this challenge.

It’s also important to remember we’re not just dealing with physical devices, we’re dealing with information. Having multiple blenders isn’t bad; it just wastes space. But having multiple gadgets, each with a part of your digital life on it, is horrible. Which is why synchronization and architectures like Microsoft’s Live Mesh, Google Apps/iGoogle, and Apple’s MobileMe are so important: It’s not just about decentralizing the physical interface, it’s about decentralizing the information.

When I talk with people about cloud computing and SaaS, I’m always surprised how little mention is made of mobility and ubiquitous computing. To me, these are as big a driver of on-demand platforms like Amazon Web Services or Google App Engine as any of the cost savings or fast development cycles that a cloud can offer.

The problem of monoculture

I wrote a piece a while back about how centralized computing makes a cloud a big target. I didn’t want to get into the biological origins of this stuff, but one commenter was right: Monoculture is a precursor to extinction.

In university (which seems a long, long time ago) I wrote my thesis on evolutionary theory and product life cycles. Admittedly, not a screamingly fun topic, but it did give me a chance to read up on the Burgess Shale and other such things.

Now comes word that Amazon’s EC2, by virtue of the independence it affords hosters, is being used by bad guys for nefarious deeds (thanks to Rachel Chalmers of The 451 for pointing it out). This presents an additional risk: Many of the Internet’s defense mechanisms involve black-holing specific hosters when the sites they operate do bad things.

Of course, when you’re hosting many applications, having one of them get blacklisted can be a nuisance for all the others. What’s interesting is the back-pressure we’re seeing arise against the popularity of cloud computing: At Structure, we debated the fear of lock-in; Stacey has a great piece on enterprise obstacles to adoption; and here, we’re seeing the downside of on-demand, easy-access platforms.

In other words, the bigger they are, the harder they fall. And that doesn’t just apply to dinosaurs.