Amazon’s rolling out an extension to its S3 storage offering that will help move content closer to users, reducing WAN latency. “Using a global network of edge locations this new service can deliver popular data stored in Amazon S3 to customers around the globe through local access,” announced Amazon CTO Werner Vogels on his blog. Om beat me to the punch on this one and has a great writeup, too.
The service gives Amazon a much-needed footprint in Asia, but also serves notice to CDN companies that the days of long-term, minimum-rate, negotiated contracts and favored pricing are nearing their end.
Continue reading “Amazon's new CDN: More than just footprint in Asia”
Amazon has publicly released a new web service, Elastic Block Store (EBS), which provides up to a terabyte of persistent storage per volume and lets you run your database in its cloud, with the added advantages of snapshots and flexible attachment to servers.
RightScale, which offers a management and automation system based on AWS, has an excellent article explaining how Amazon’s Elastic Block Store works. In testing, they report over 70 MB/s (more than half a gigabit per second) and over 1,000 IOPS (input/output operations per second), roughly the equivalent of a dozen 7200rpm hard drives serving your data in tandem. They also report that “it is possible to mount multiple volumes on the same instance such that file systems of 10TB are practical.” No doubt much more detailed performance and feature analysis will ensue shortly.
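Those headline figures are easy to sanity-check with some back-of-the-envelope arithmetic. The per-drive IOPS number below is my own ballpark assumption, not something from the RightScale report:

```python
# Back-of-the-envelope check of the reported EBS numbers.
# Assumed figure (not from the report): a 7200rpm SATA drive
# sustains very roughly 85 random IOPS.

reported_throughput_mb_s = 70      # reported sequential throughput
reported_iops = 1000               # reported I/O operations per second
iops_per_7200rpm_drive = 85        # assumed ballpark per-drive figure

# 70 MB/s is 560 Mbit/s -- indeed over half a gigabit per second.
throughput_mbit_s = reported_throughput_mb_s * 8
print(throughput_mbit_s)           # 560

# ~1000 IOPS divided by ~85 IOPS per drive is about a dozen drives.
equivalent_drives = round(reported_iops / iops_per_7200rpm_drive)
print(equivalent_drives)           # 12
```

So the numbers hang together: a single EBS volume performs roughly like a small striped array of commodity disks.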
Continue reading “SANs in the cloud”
Cloud computing is the hottest Internet insider buzzword since the technologies to which it owes its existence: virtualization and grid computing.
At May’s Interop Unconference, we explored the intersection of the two in an informal jam session with enthusiastic audience participation, starring Jinesh Varia (Amazon), Kirill Sheynkman (Elastra), Reuven Cohen (Enomaly), Jacob Farmer (Cambridge Computer), and Louis DiMeglio (ScienceLogic).
It’s taken some time to fully digest the results.
To many of us, the cloud is that amorphous blob of semicircular squiggles the IT crowd has been using on whiteboards to represent the internet since the mid-nineties. Clouds mean we don’t care what’s in them.
Once upon a time, that cloud in the middle of the whiteboard used to just represent the network — how to get from here to there. All the interesting stuff happened outside its borders. More recently, however, we’ve started moving the rest of the shapes on the whiteboard into the cloud. Applications and infrastructure are now drawn within the borders of that formerly ill-defined and anarchic etherspace.
If you listen to some overzealous cloudnuts, you’ll hear that pretty much everything is rushing headlong into the Internet’s troposphere. But the truth is much more complex, and rational opinions seem to favor a hybrid future of rich clients, hardware, and software. We’ll have a hugely diverse mix of private and public cloud-based services providing both a back-end and a matrix for device interaction.
Aside: I’ll leave defining cloud computing ad nauseam to other bloggers. For our purposes, it’s the trend of outsourcing what you would normally run in your datacenter to an indefinitely flexible computing platform billed to you as a utility. Traditional hosters don’t count (for me) as cloud providers, but newer managed service hosters might, depending on the level of automation and scalability they employ.
So what did the Interop crowd conclude?
Continue reading “Future of computing: Forecast calls for partly cloudy”
A couple of weeks ago, I was lucky enough to moderate a panel on next-generation databases at Web2Expo. Having database greats Brian Aker, Dave Campbell, and Matt Domo in one place made for great dialogue. In addition to finding out whether the RDBMS is dead, we looked at the big challenges of data storage (synchronization, working offline, and a shift toward specialized data models).
We even found out how these three datascenti track their contacts (MySQL’s Aker uses scripts he wrote; Microsoft’s Campbell uses Outlook).
Then last week at Interop, I brought together folks from platform companies like Google, Amazon, and OpSource with a number of startups and virtualization tool makers. Again, great dialogue, even on the five-person panel that ran over. This time, the consensus seemed to be that on-demand computing was great for bursty capacity and highly parallel tasks, but lacked the controls, management tools, and SLAs to be a production platform for enterprises at the moment.
But Structure promises to be the most compressed discussion yet. Om Malik, the guy behind the event, says it’s about two things: Learning how the new web is built from the architects that built it; and networking with investors who “are looking to place their bets on cloud computing” and see it as a huge opportunity. “Structure 08 is about Getting Web Done,” says Malik.
I have two panels on the same day to moderate:
- Cloud Computing: Infrastructure for Entrepreneurs, featuring Geva Perry, CMO of GigaSpaces; Jason Hoffman, CTO of Joyent; Tony Lucas, CEO of XCalibre; Lew Moorman, SVP Strategy of Rackspace; Christophe Bisciglia, senior software engineer at Google; and Joseph Weinman, corporate development and strategy at AT&T.
- Scaling to Satiate Demand: Tactics from the pioneers, with Sandy Jen, co-founder and VP Engineering of Meebo; Akash Garg, CTO of Hi5; Jeremiah Robinson, CTO of Slide; and Jonathan Heiliger, VP Technical Operations of Facebook.
Each of these will be a fast-and-furious fifty-minute discussion around on-demand computing and the ability to scale. Time to come up with some pithy questions and awkward follow-ups.