Dispelling the myths of cloud lock-in

Over the course of six demos, the ECS team has shown how a single application can be moved between clouds with minimal code changes. Our video messaging application, which is fully detailed and can be tried for yourself at www.interopcloud.com, was first written with a back end and a front end on Amazon Web Services. We introduced load balancing of the front and back ends with RightScale, and tested this with SOASTA CloudTest.

We then showed how the back end can be moved in house with MongoDB, and today (pictured right) we looked at moving the front end to a Google App Engine application, again with a minimum of code changes. Later today we’ll be looking at how to handle multiple levels and versions of a cloud application to form a virtual development lab with Skytap.
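
To make the portability claim concrete, here’s a minimal sketch of the kind of storage abstraction that keeps cross-cloud moves cheap. The class names, the key scheme, and the boto3/pymongo calls are my own assumptions for illustration, not the demo’s actual code.

```python
# A minimal sketch (not the demo's actual code) of the pattern that keeps
# cross-cloud moves cheap: hide persistence behind one interface so that
# swapping AWS for an in-house MongoDB only touches the adapter.
from abc import ABC, abstractmethod


class MessageStore(ABC):
    """Storage interface the application codes against."""

    @abstractmethod
    def save(self, message_id: str, payload: bytes) -> None: ...

    @abstractmethod
    def load(self, message_id: str) -> bytes: ...


class S3MessageStore(MessageStore):
    """Back end on Amazon Web Services (bucket name is hypothetical)."""

    def __init__(self, bucket: str) -> None:
        import boto3  # AWS SDK for Python
        self._s3 = boto3.client("s3")
        self._bucket = bucket

    def save(self, message_id: str, payload: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=message_id, Body=payload)

    def load(self, message_id: str) -> bytes:
        return self._s3.get_object(Bucket=self._bucket, Key=message_id)["Body"].read()


class MongoMessageStore(MessageStore):
    """The same contract, served by an in-house MongoDB."""

    def __init__(self, uri: str) -> None:
        from pymongo import MongoClient
        self._messages = MongoClient(uri).app.messages

    def save(self, message_id: str, payload: bytes) -> None:
        self._messages.replace_one(
            {"_id": message_id}, {"_id": message_id, "payload": payload}, upsert=True
        )

    def load(self, message_id: str) -> bytes:
        return self._messages.find_one({"_id": message_id})["payload"]
```

The point of the pattern: the application only ever sees MessageStore, so moving the back end in house (or the front end to another cloud) means constructing a different adapter rather than rewriting application logic, which is the spirit of why the demos needed so few code changes.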

Is the future of IT managing scripts?

This is the question Alistair Croll first asked Werner Vogels in the fireside chat session at ECS. Werner admitted he’d been caught off guard by the question, but agreed that the future is certainly automation, and that scripts are powerful tools for achieving it.

An example of an enterprise use case for cloud computing

Werner related the case of NASDAQ, where a lack of capital was restricting innovation and making it difficult to solve the technical problems around handling complex historical stock queries.

They solved this by writing every ticker symbol’s activity, in ten-minute slices, to text files in Amazon S3. An Adobe AIR application was created which lets you specify a symbol and a date range; the app downloads the text files for that period, meaning you can do joins, queries, and so on. The computation is done on the customer’s desktop, which means there is no server-side resource investment. They were able to use cloud technology to keep things “nice and simple”.
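
Here’s a rough sketch of that client-side query pattern in Python. The bucket name, the ten-minute key scheme, and the boto3 usage are invented for illustration; the talk didn’t give NASDAQ’s actual layout.

```python
# Minimal sketch of the pattern described above: enumerate the ten-minute
# slice files for a symbol and date range, pull them from S3, and leave all
# joins and filtering to the customer's desktop. Names are hypothetical.
from datetime import datetime, timedelta

import boto3

BUCKET = "ticker-archive"  # hypothetical bucket name


def slice_keys(symbol: str, start: datetime, end: datetime):
    """Yield one S3 key per ten-minute slice in [start, end)."""
    t = start
    while t < end:
        yield f"{symbol}/{t:%Y-%m-%d}/{t:%H%M}.txt"
        t += timedelta(minutes=10)


def fetch_ticks(symbol: str, start: datetime, end: datetime) -> list[str]:
    """Download each slice file; the client, not a server, does the work."""
    s3 = boto3.client("s3")
    lines: list[str] = []
    for key in slice_keys(symbol, start, end):
        body = s3.get_object(Bucket=BUCKET, Key=key)["Body"]
        lines.extend(body.read().decode().splitlines())
    return lines


# One trading day of ten-minute slices for a single symbol.
ticks = fetch_ticks("AAPL", datetime(2009, 5, 18, 9, 30), datetime(2009, 5, 18, 16, 0))
```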

Cost savings can include people

Werner also argued that when assessing the cost of cloud computing versus in-house infrastructure, you have to think about the total cost of ownership, not just hardware. He gave the example of the Indy 500. They have a very nice website which offers a Flash environment with multiple video streams, including views from the cockpits of drivers’ cars with audio feeds and telemetry. It’s a high-load application, but it only runs three times a year. They found that they had to move a lot of engineers into data centers to keep their servers up. When they moved to cloud infrastructure they saw 75% cost savings, the majority of which was on the people side; now they can manage everything from their armchairs at home.

On Amazon direction and strategy

Continue reading “Is the future of IT managing scripts?”

Enterprise Cloud Summit is underway

Alistair Croll opens Enterprise Cloud Summit 2009

Alistair Croll today opened the Enterprise Cloud Summit in Las Vegas by drawing analogies between cloud computing and the early days of electricity generation.

It used to be that companies ran their own generators; they had generator rooms and technicians, much like we have server rooms and the techies that run them now.

What happened next was a major shift: the creation of a grid separated electricity generation from its use. Power didn’t have to be next to the work being done. This cut costs for all electricity users and meant that anyone, not just the largest companies, could make use of the grid to power their business.

The question is, is cloud computing like electricity? Are we moving to a utility model of computing?

Continue reading “Enterprise Cloud Summit is underway”

Keeping ourselves honest

Jennifer Bell and the folks at Visible Government took the covers off their much-needed I Believe In Open project. If you’re a Canadian, you should go sign up. Simply put: any elected official who isn’t willing to be transparent and accountable to their electorate has something to hide, and we now have the technology to track their record.

Which makes me wonder what Bitcurrent’s record is. Once upon a time, many of the folks behind Bitcurrent were part of Networkshop, a consulting firm that became Coradiant, a web performance company that helped create the end user experience management space.

Back then, Networkshop talked a lot of trash. We blew the whistle on SSL performance issues, and wrote a huge (250+ page) study on load balancing. We also prognosticated a lot.

Using the Internet Archive’s Wayback Machine, I decided to go scoop up some issues of Networkshop News and see how they stood up to scrutiny nine years later. Here’s one, from March 2000, on how networks change if the PC is no longer the dominant client.

How do you think it stacks up?

Continue reading “Keeping ourselves honest”

Do MSPs have a Cloudy Future?

Just read an interesting article on Forbes.com by Dan Woods entitled “Parsing the Cloud”. Dan makes a similar argument to our own Ian Rae, suggesting that specialized clouds will be required to meet the privacy, regulatory, geographic-latency and application-architecture demands of cloud consumers.

This raises the question: who will build all these specialized clouds? Are there incumbents who simply need to evolve, or will we see the birth of dozens or hundreds of new cloud providers?

Continue reading “Do MSPs have a Cloudy Future?”

IT needs to stop being so Canadian

In modern companies, information drives everything from product planning to sales to finances. The flow of knowledge throughout a company is a critical asset.

There’s gold in that traffic—real-time business intelligence, risks and threats, customer insight. IT is custodian of that information, but most of the time it simply passes on raw data to the rest of the company. And that’s wrong.

If it is to remain relevant, IT must stop being a resource economy and become a producer of finished goods. This has happened before, and it’s a history lesson anyone in information technology needs to study.

Continue reading “IT needs to stop being so Canadian”

What Kitchen-Aid taught me about cloud computing

If you’re even slightly interested in utility computing — the move towards on-demand, pay-as-you-go processing platforms — then Nick Carr’s The Big Switch is a must-read. You may not agree with everything he says, but his basic thesis is compelling: Just as we went from running our own generators to buying electricity from the power company, so we’re going to move from running our own computers to buying computing from a utility.

Because I spend a lot of my time writing, I’m constantly trying to out-guess the future. And something I’m obsessed with right now is appliances. Not virtual appliances, or network appliances, but simple appliances like pasta makers, bread machines, meat grinders, blenders, and so on.

If you look at the history of the electrical industry, the businesses that became interesting immediately after ubiquitous power was available were those you could plug into it. Generators were boring, but fans, irons, and fridges were really, really cool.

I touched on the topic back in May at Interop (there’s a Slideshare of the deck here on Bitcurrent.) And I think it’s worthy of a lot more consideration because, well, Costco had a sale on Kitchen-Aid mixers.

My wife is an extraordinary cook and an even better baker. And she’s long lusted after a Kitchen-Aid. They’re something of a cult, with a powerful motor, a custom-fit bowl, and dozens of attachments. Most people are happy with a hand-mixer, or a whisk, but there’s an obsessed segment of the market, the Really Serious Home Baker, full of those who simply must have a Kitchen-Aid. So this grey, intimidating, vaguely Cylon-like appliance dominates our countertop.

The Kitchen-Aid is at its core a motor. Its most common use is as a mixer, whisk, or dough hook. But it has attachments that can grind sausage, make ice cream, roll pasta, shuck peas, and so on. It was conceived in an era when motors were expensive and attachments were cheap. Here’s a great photo of a precursor to the modern Kitchen-Aid.

Today, motors are cheap. We don’t even think about them. We build them into everything, which is why gift tables at weddings are festooned with single-purpose appliances. And the Kitchen-Aid is the workhorse of near-professionals who demand a 600-watt motor that can tug even the toughest foods into submission.

User interfaces are the modern equivalent of appliances. Until recently, the Internet’s user interface was a desktop computer. Connecting to the Internet was a lot of work for a device: network signaling, properly rendered graphics, a keyboard and mouse, a display with enough resolution, and so on. It required a dedicated machine. The “motor” was expensive; the attachments were cheap. So we put many applications on our PC: mail, instant messaging, games, document viewers, file storage, mapping software, videoconferencing, and so on.

But all that has changed. We now have set-top boxes, game consoles, PDAs, cellphones, book readers, SANs — hundreds of devices, all able to access the Internet, all purpose built. That PC in the room is increasingly the jack of all trades, and master of none. The motor is cheap; the attachments matter now.

There are things the PC is still best for: Workstation tasks, like graphic design or software development. But if you want to understand the future of consumer electronics and user interfaces when CPUs are ubiquitous, consider what happened to kitchens when the motor was everywhere.

Self-powered appliances were all about convenience and portability: You don’t have to set up, dismantle, and clean your Kitchen-Aid every time you want to do something, and you can use an immersion blender single-handed over a hot stove-top. In other words, while many cooks crave a Kitchen-Aid, few use it to grind their morning coffee.

We still have to deal with gadget sprawl. Just as everyone has spare hand mixers and blenders secreted away at the back of their kitchen cupboards, so we’re struggling with multiple devices and seeking a way to reduce them. Certainly, high-end PDAs like the BlackBerry, iPhone, Windows Mobile devices, or the Nokia N95 are tackling this challenge.

It’s also important to remember we’re not just dealing with physical devices, we’re dealing with information. Having multiple blenders isn’t bad; it just wastes space. But having multiple gadgets, each with a part of your digital life on it, is horrible. Which is why synchronization and architectures like Microsoft’s Live Mesh, Google Apps/iGoogle, and Apple’s MobileMe are so important: it’s not just about decentralizing the physical interface, it’s about decentralizing the information.

When I talk with people about cloud computing and SaaS, I’m always surprised how little mention is made of mobility and ubiquitous computing. To me, these are as big a driver of on-demand platforms like Amazon Web Services or Google App Engine as any of the cost savings or fast development cycles that a cloud can offer.

Cloud panel at Web2Expo New York

The (apparently) slower pace of summer is giving way to a very hectic September, with Bitnorth, Unconference, Interop, and Web2Expo all happening in a two-week period.

I’m moderating a panel, “Scaling Web 2.0 applications by building in the clouds,” as part of the Performance and Scaling track. It’s a great lineup, with folks from Amazon, Bungee, Joyent, and 10gen.

Haven’t figured out all the questions yet, but it’s bound to be a good discussion with that many seasoned Web2 operators in one place. Bitcurrent has a $100 discount code for the conference: webny08mc23.

Structure08 roundup

GigaOm’s Structure08 event is a wrap. It was an amazing turnout of people in the next-generation infrastructure world, and a packed day of panels and discussions.
I had my computer and phone off most of the time, since I was announcing speakers and moderating panels for much of the day. So I’m now scouring the net to see how we did.

I was incredibly lucky to have great panelists for the panels on cloud platforms. While most of the discussions were fairly pragmatic, we did of course invent some new terms:

  • “Bare metal” clouds: Clouds that aspire to nothing more than giving you root more economically.
  • “Little Fluffy Clouds” (with thanks to The Orb), after Tony Lucas described “loving clouds” as those clouds which really want to do no evil.
  • A cloud user’s bill of rights, which would outline portability and so on.
  • “Cloudbursting”, the idea that an enterprise private cloud might burst into the public cloud temporarily (see the sketch below).
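
For a concrete picture of what cloudbursting means, here’s a toy sketch; the capacity figure and the routing logic are invented for illustration, not anything the panel specified.

```python
# Toy illustration of cloudbursting: serve load from the private cloud
# first, and overflow to a public cloud only when demand exceeds capacity.

PRIVATE_CAPACITY = 80  # requests/sec the in-house cluster can absorb (invented)


def route(request_rate: int) -> dict[str, int]:
    """Split incoming load between private and public infrastructure."""
    private = min(request_rate, PRIVATE_CAPACITY)
    burst = max(request_rate - PRIVATE_CAPACITY, 0)
    return {"private_cloud": private, "public_cloud": burst}


# Quiet day: everything stays in house. Peak: only the excess bursts out.
assert route(50) == {"private_cloud": 50, "public_cloud": 0}
assert route(130) == {"private_cloud": 80, "public_cloud": 50}
```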

I wish I’d known that Facebook was hiccuping while I was talking with the scaling panel, particularly since we had Jonathan Heiliger with us.

Then a dozen of us headed to an excellent pub on Haight and stayed up until far too late. Great end to a great couple of weeks in the Bay Area. Now, as the Spirit of the West say, it’s home for a rest. But I leave you with this.