
Brier Dudley's blog

Brier Dudley offers a critical look at technology and business issues affecting the Northwest.

June 15, 2009 at 4:00 PM

Q&A: IBM cloud exec on launch, competition and blueprints from UW-Google

After spending five years and billions on development — including early research at the University of Washington — IBM today announced its enterprise cloud computing services.

IBM is starting with three offerings that further stretch the definition of cloud computing, a term loosely applied to big, scalable computing systems accessed on demand via the Internet.

Now IBM is offering to install “private clouds” for companies in-house, behind their firewalls.

One version is a “test cloud” that enables companies to develop and test applications in-house, instead of renting time from companies such as Microsoft or even startups such as Seattle’s Skytap.

IBM is also offering a bundle of development and test tools that can be used on IBM’s cloud — a network running on 13 datacenters located around the globe.

Or companies may now order a turnkey cloud computing system from IBM, called “CloudBurst.” It’s a 42-unit server cabinet that comes preloaded with hardware, storage, virtualization, networking and service management software.

(Here’s IBM Innovation Systems Engineer James Thoensen with a CloudBurst prototype — Cloud in a Box? — in an IBM-supplied image)


Some customers would prefer a more tailored, integrated cloud setup than a “smorgasbord of different siloed systems,” said Dennis Quan, IBM’s director of autonomic computing development in Raleigh, N.C.

“You have a bunch of systems that co-exist in datacenters, but they don’t act like a system, a single system, and enterprises spend a lot of time having to integrate the different software systems together,” he said.

There’s also going to be a need for more “fit to purpose” clouds, especially if datacenters are strained by the flood of new data. Some 15 petabytes (15 quadrillion bytes) of information are being created daily — mostly by consumers — but companies are responsible for maintaining 85 percent of it, according to IBM.

Blueprints for IBM’s cloud offerings came from a joint research project with Google. It initially explored business intelligence and large-scale analytics at big schools, and led to the creation of a cloud computing cluster at the UW and two run by IBM in 2007.

“The work that was done as part of that project really informed how we can put together large cloud datacenters that can efficiently process terabytes, petabytes, of information across thousands of machines,” he said.

The early clusters also “kind of provide the blueprints for the designs we base these new clouds on,” he said.

What’s crucial is the service management software that makes the cloud systems work. Quan likened it to an orchestra conductor, or “an operating system for the 21st century datacenter.”

Here’s an edited excerpt from the interview with Quan:

Q: So do public clouds offer inadequate security?

A: If you look across the various industries and workload types, there are some workloads that are better suited to public clouds today and some that are better suited for private clouds. Our approach has been a hybrid approach that leverages the strengths of both models.

Q: Will IBM acquire more companies to build its cloud stack?

A: IBM is always considering these things.

Q: Will there be a free or low-cost cloud offering for startups?

A: There are certainly offerings we have for different segments of the marketplace.

Q: Will IBM offer a simple pay-as-you-go approach like Amazon Web Services, or will users need annual agreements?

A: What we’ve discovered as we worked with clients around the world is they’re looking for that total solution. They’re looking for ways to solve their business problems.

Q: What is a private cloud? Is it still a cloud if it’s an internal system, behind a firewall?

A: We started in 2006 where we built our own cloud for our [400,000] employees. … Today folks are able to log in to what we call the IBM Innovation Portal.

The reason it’s a cloud is because it has all the key characteristics of a cloud computing environment: It is delivered over the network from a central location; it is elastic, meaning if you need to grow the amount of resources you can do that at any time; it is self-service — an employee goes and initiates these requests on their own.

There are companies that want to build their own clouds. They’ll be able to get a CloudBurst system and put that in their datacenter.

Q: That sounds kind of like an IBM mainframe …

A: It’s a 42U rack — a fairly large piece of equipment, with blades and storage — and it’s preloaded with service management software, virtualization capabilities and the self-service portal.

Q: How much does a CloudBurst cost?

A: The minimum price starts around $200,000.

Q: How will your offerings compete with cloud services from Amazon and Microsoft?

A: I think the key to what we’ve learned from working with our clients is they’re looking for that total solution. You need to have an environment that directly addresses the business problems people are facing.

Q: Will companies have a cloud specifically for, say, business analytics instead of using the same cloud for multiple functions?

A: You would have different parts of your cloud allocated to the different workload types but the processes and practices needed to support these different workload types will be different. The same pool of virtualized hardware could be used, though.

Q: Will IBM’s offerings win back customers now using Amazon and Microsoft clouds?

A: Many of our clients have experimented with different cloud services out there. They want to know how they can get a secure, robust cloud environment that is suited to the problems they are facing.

Q: Is this a natural evolution from what used to be IBM’s mainframe business?

A: Cloud computing is part of a broader trend in distributed computing.

You look at the Web 2.0 trend that has come about. It’s taught us that we need to support not only large-scale information processing, which is something grid computing was good at, but also large-scale, end-user-facing applications.

Cloud is kind of the latest step in this distributed computing evolution that is designed to support an extremely broad range of business computing applications. The mainframe plays a part in this as well.

People often talk about virtualization on x86 platforms and that being a key part of the cloud computing platform. Virtualization has existed on the mainframe for decades. This is one of the areas where our mainframe approach is bleeding into the mainstream computing approach.

Topics: cloud computing, Enterprise


The Seattle Times