Friday, June 27, 2008

CloudCamp San Francisco Pictures

CloudCamp SF Recap

What a week for cloud computing. Between Velocity and Structure08 on the west coast and Xconomy on the east coast, this week has truly been the week of the cloud. As many of you know, we threw the first-ever CloudCamp on Tuesday evening at the Microsoft offices in downtown San Francisco. The event was a smashing success, attracting more than 300 attendees who enjoyed heated technical cloud discussions and more than 6 hours of open bar and great food. Our signature drink, the Dark and Stormy, was a disruptive blend of dark rum and ginger beer that set an interesting ambiance for the evening.

Some CloudCamp highlights included an appearance by Werner Vogels, Amazon's CTO, who actively engaged in several of our discussions. One of the more interesting aspects of the event was our use of the Open Space format, in which participants define the agenda through a relatively on-the-fly process and adjust it as the event proceeds. A lot of people who were skeptical about the format at first quickly became fans. The format allowed for a much more in-depth look at some of the opportunities and challenges in cloud computing. The discussions ranged from "what is cloud computing" to how to scale a database to data integration and security. There seemed to be a little something for everyone.

Dave Nielsen, our MC for the evening, was masterful, composing a cloud event most will not soon forget. Our sponsors really came through, raising more than $15,000 in sponsorship in less than 3 weeks from the point we proposed the event on our cloud group to the event itself. I'd also like to thank Sara and Jesse from Sun, Sam from Appistry and Alexis from CohesiveFT for all their help in organizing the event.

I'm actually still amazed that in barely over 3 weeks CloudCamp has managed to go from an idea to an international phenomenon, with another event in London on the 16th of July as well as interest in doing cloud events in New York, Montreal, Chicago, St. Louis, Tel Aviv and Sydney. We're now actively organizing the London event, so if you're interested in sponsoring and/or attending I would invite you to join our mailing list at http://groups.google.com/group/cloudcamp or at http://london.cloudcamp.com

I hope to see some of you at the next event!

Sunday, June 22, 2008

More with Moore, more or less.

Recently I've been asked about the benefits of cloud computing in comparison to virtualization. Generally my answer has been that they are an ideal match. For the most part, virtualization has been about doing more with less (consolidation). VMware in particular positioned their products and pricing in a way that encourages you to use the least amount of servers possible. The interesting thing about cloud computing is that it's about doing more with more. Or if you're Intel, doing more with Moore.

At its core, Intel is a company driven by one singular mantra: "Moore's Law". According to Wikipedia, Moore's Law describes an important trend in the history of computer hardware: that the number of transistors that can be inexpensively placed on an integrated circuit is increasing exponentially, doubling approximately every two years. The observation was first made by Intel co-founder Gordon E. Moore in a 1965 paper.

Over the last couple of years we have been working very closely with Intel, specifically in the area of virtualization. During this time we have learned a lot about how they think and what drives them as an organization. In one of my early pitches we described our approach to virtualization as "doing more with Moore", a play on the common phrase "doing more with less" combined with some of the ideas behind Moore's Law, which is all about growth and greater efficiency. They loved the idea; for the first time someone was looking at virtualization not purely as a way to consolidate a data center but as a way to more effectively scale your overall capacity.

What is interesting about Moore's Law in regard to cloud computing is that it is no longer just about how many transistors you can get on a single CPU, but more about how effectively you spread your compute capacity across more than one CPU, be it multi-core chips or hundreds, even thousands, of connected servers. Historically, the faster the CPU gets, the more demanding the applications built for it become. I am curious whether we're on the verge of seeing a similar "Moore's Law" applied to the cloud. And if so, will it follow the same principles? Will we start to see a "Cloud Law" where every 18 months the amount of cloud capacity will double, or will we reach a point where there is never enough excess capacity to meet the demand?
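
As a back-of-the-envelope illustration of what such a "Cloud Law" would imply, here is a minimal sketch. The 18-month doubling period and starting capacity are purely illustrative assumptions, not a prediction:

```python
# Rough projection of a hypothetical "Cloud Law": capacity doubling every 18 months.
# The starting capacity and doubling period are illustrative assumptions, not data.

def projected_capacity(initial_units: float, years: float, doubling_months: float = 18.0) -> float:
    """Capacity after `years`, assuming it doubles every `doubling_months` months."""
    doublings = (years * 12.0) / doubling_months
    return initial_units * (2.0 ** doublings)

if __name__ == "__main__":
    for year in range(7):
        print(f"Year {year}: ~{projected_capacity(1000, year):,.0f} capacity units")
```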

Saturday, June 21, 2008

Describing the cloud

Recently on our Cloud Computing Group, Trevor asked how we describe cloud computing; below is my response.

Cloud computing is one of those catch-all buzzwords that tries to encompass a variety of aspects ranging from deployment, load balancing and provisioning to business model and architecture (much like Web 2.0). It's the next logical step in software (software 10.0). For me the simplest explanation of cloud computing is "internet-centric software". This new cloud computing software model is a shift from the traditional single-tenant approach to software development to one that is scalable, multi-tenant, multi-platform, multi-network and global. This could be as simple as your web-based email service or as complex as a globally distributed, load-balanced content delivery environment. I think drawing a distinction on whether it's PaaS, SaaS or HaaS is completely secondary; ultimately all these approaches are attempting to solve the same problem (scale).

As software transitions from a traditional desktop deployment model to a network- and data-centric one, "the cloud" will be the key way in which you develop, deploy and manage applications in this new computing paradigm.

IBM followup

Over the last few days I've received a number of emails about my recent IBM commentary. A lot of people seem ready to read quite a bit more into what I wrote. I am neither for nor against IBM. As one of the most dominant players in the data center space, how IBM approaches cloud computing affects everyone involved in the emerging space. My biggest complaint is that, because of the sheer volume of acquisitions, IBM is forced to use what they have (a suite of fairly random data center components) rather than invent something totally new and unique. In my opinion the days of centralized data center management are numbered, and from what I can see they have yet to embrace a decentralized approach to cloud management.

What I tried to do is paint the picture that I saw as an outsider. And as an outsider I saw a lot of opportunities available in the weaker areas of IBM's cloud strategy (dynamic provisioning and data-center-to-cloud migration, or D2C). I should also note this was a research-focused event and only showed a small part of their overall strategy.

At the end of the day I have no doubt IBM will become a dominant player in the space, but whether it's through acquisitions or R&D is still to be determined.

Wednesday, June 18, 2008

IBM Cloud Computing Day

Earlier this week I was invited to IBM's Cloud Computing Day at their Canadian headquarters in Markham, Ontario (just outside of Toronto). The invite-only event brought together key IBM employees and researchers from the Centre of Excellence for Research in Adaptive Systems (CERAS) and focused on emerging technologies, methods and architectures for cloud computing.

Before this event I had never heard of CERAS. They describe themselves as an innovative, collaborative virtual organization which explores and evolves promising new technologies, methods and techniques that enable dramatically more agile approaches to software development and evolution; approaches that enable delivery of software and computing resources on demand and on time, with less operational effort. Simply put, it's a joint partnership between IBM Research and a number of university CS research labs.

One of the more interesting aspects of the event was Andrew Trossman's introduction to cloud computing. His presentation looked more like a comedy routine and was very entertaining. (At one point he answered a phone call from the security guard at his home, which was hilarious.) One of the main points I took away from his presentation was that most people at IBM really have no idea what cloud computing is, and a few select early adopters such as Trossman are key to pushing the cloud agenda within IBM. I also found it interesting that they do seem to utilize a kind of internal "research cloud" for researchers within IBM, but appear to have no intention of offering this type of service commercially. They were vague on exactly what this cloud was or how it worked.

Other interesting presentations included "Self-Optimization in the Cloud" by Murray Woodside of Carleton University. He presented a compelling approach to what he called "autonomic computing", whereby resource levels are automatically adjusted based on application response times. His presentation also touched upon "self-healing" systems but did little to explain how they would actually function. Woodside's research seemed ideally suited for environments like Amazon EC2, where you may need to adjust your virtual resources for short periods of time. He was a little hazy on which technologies he used and whether they would ever be made available commercially (I can only assume his research was based on IBM's Tivoli suite). I look forward to seeing these features some day included in IBM's data center software.
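
To make the idea concrete, here is a minimal sketch of the kind of feedback loop Woodside was describing: response-time-driven resource adjustment. The thresholds, polling interval and hook functions are hypothetical placeholders of my own, not anything shown at the event:

```python
import time

# Sketch of response-time-driven ("autonomic") scaling.
# measure_avg_response_time(), add_instance() and remove_instance() are
# placeholders for whatever monitoring and provisioning hooks you actually have.

UPPER_MS = 300       # scale out above this average response time
LOWER_MS = 100       # scale in below this average response time
POLL_SECONDS = 60

def autonomic_loop(measure_avg_response_time, add_instance, remove_instance,
                   min_instances=2, max_instances=20):
    instances = min_instances
    while True:
        avg_ms = measure_avg_response_time()
        if avg_ms > UPPER_MS and instances < max_instances:
            add_instance()
            instances += 1
        elif avg_ms < LOWER_MS and instances > min_instances:
            remove_instance()
            instances -= 1
        time.sleep(POLL_SECONDS)
```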

The brief presentation by Christina Amza on dynamic provisioning was particularly interesting. She presented her work on the challenges of dynamically "packing" virtual machines into the cloud using a packing algorithm that helps determine the optimal location of each VM. Her questions to me about Enomalism were by far some of the most difficult I've ever had to answer. Dynamic provisioning is, in my opinion, one of the most difficult and potentially lucrative areas in the development of clouds for both private and utility use. The ability to effectively manage thousands of virtual and physical servers may mean the difference between a profit and a complete failure. Her research looks very promising, and I know that I could certainly use her work in our software if she ever decides to make it publicly available (I assume IBM is thinking along the same lines).
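
VM packing is, at its core, a bin-packing problem. The toy sketch below applies the classic first-fit-decreasing heuristic to placing VMs by memory footprint; it only illustrates the general idea and is not Amza's algorithm, which hasn't been published:

```python
# Toy illustration of VM "packing" as one-dimensional bin packing (by memory),
# using the classic first-fit-decreasing heuristic.

def pack_vms(vm_memory_gb, host_capacity_gb):
    """Return a list of hosts, each a list of (vm_id, memory_gb) placed on it."""
    hosts = []  # each entry: [remaining_capacity, [(vm_id, mem), ...]]
    # Place the largest VMs first, then drop each into the first host it fits.
    for vm_id, mem in sorted(enumerate(vm_memory_gb), key=lambda x: -x[1]):
        for host in hosts:
            if host[0] >= mem:
                host[0] -= mem
                host[1].append((vm_id, mem))
                break
        else:
            hosts.append([host_capacity_gb - mem, [(vm_id, mem)]])
    return [placements for _, placements in hosts]

# Packs six VMs onto two 16 GB hosts.
print(pack_vms([4, 8, 2, 6, 3, 1], host_capacity_gb=16))
```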

All in all, it was interesting to get a bird's-eye view of the cloud computing programs going on within IBM's research labs and their related technology groups. Although the event was fairly academic, it did give me a unique opportunity to see what IBM is up to. From my outsider's point of view, I can summarize IBM's "Blue Cloud" as a way for them to repackage their existing data center management tools to enable the creation of "private clouds" for IBM's enterprise customers. From what I saw, I don't think we'll be seeing anything like an Amazon EC2 or Google App Engine anytime in the near future. What I think we will see from them is active involvement in the development of cloud computing technologies as well as a number of select cloud technology acquisitions.

Friday, June 13, 2008

"The World Wide Cloud: Bridging the Data Center and The Cloud"

As cloud computing becomes more commonplace, creating a secure method to bridge the gap between existing data centers and remote sources of compute capacity is becoming more and more important. Reuven Cohen, Founder & Chief Technologist of Enomaly, will be giving a session on Bridging the Data Center and The Cloud at SYS-CON's 'Cloud Computing Summit' (November 20-21, 2008) - a brand new adjunct to the 4th International Virtualization Conference & Expo being held at The Fairmont Hotel in San Jose, CA.

"The ability to efficiently and securely tap into remote cloud resources is one of the most important opportunities in cloud computing today," Cohen notes.

Location, security, portability, and reliability all play critical roles in a scalable IT environment, he adds. In his session at the Summit, he will discuss some of the challenges and opportunities of deploying across a diverse global cloud infrastructure.

Cohen is the Founder & Chief Technologist of Toronto-based Enomaly Inc., a leading developer of cloud computing products and solutions focused on enterprise businesses. Enomaly enables enterprises to realize the benefits of cloud computing by delivering turn-key IT solutions that help in the use of, and migration to, remote cloud computing resources. Enomaly's products include Enomalism, an open source elastic computing platform that enables scalable enterprise IT and local cloud infrastructure.

At the Cloud Computing Summit he will be speaking alongside other top cloud luminaries, including:

  • Mike Eaton, CEO, Cloudworks
  • Willy Chiu, VP of High Performance on Demand Solutions, IBM
  • Dave Durkee, CEO, ENKI
  • John Janakiraman, CTO, Skytap
  • Billy Marshall, Founder & CEO, rPath
  • Dr Thorsten von Eicken, CTO & Founder, RightScale
  • Patrick Harr, CEO, Nirvanix

Thursday, June 12, 2008

Banking on the Cloud

I've spent the last few days hanging out with a bunch of bankers at the annual Morgan Stanley CTO Summit in San Francisco. The invite-only event mixes top Morgan Stanley technology personnel, emerging technology companies and key players in the venture capital world.

Cloud computing was a noticeably "hot topic" of conversation at this year's summit. My invitation to this year's event was a rare opportunity to pick the brains of true enterprise decision makers on the challenges as well as the opportunities for cloud computing within a large financial environment. This year was particularly interesting because of the downturn in the financial markets and the challenges associated with it.

I was surprised by just how informative this event actually was; I figured it would be just another bankers' tech get-together. I was wrong. Below are some of the key points I took away from the summit.

Cloud computing was front and center this year. One of the more interesting points that kept recurring was the need for better security. There is a definite desire to use "cloud infrastructure" internally for high-performance computing, trading platforms and various other software platform services, and a genuine desire to use external cloud resources such as Amazon's. The need to secure data in the cloud was their single biggest concern, and those who offer this kind of "bridge to the cloud" will be the ones who bring the most value to the banking industry. What is interesting is that, for the time being, they seem more interested in keeping their "compute resources" safely tucked under the mattress than putting them in the hands of a "book store". (Personally, I'd rather keep my money in the bank where it is safe and more easily managed, in the same way I'd rather keep my computing infrastructure in a well-managed cloud rather than in my office closet. Until the major banks realize this, I don't foresee a lot of movement toward the public cloud.)

Another interesting takeaway: the traditional enterprise sales model is dead. Getting in through the back door is the way of the future. SaaS, cloud and open source are all viable options and in some ways preferred. They provide a frictionless way for IT workers within Morgan Stanley to try new approaches, services and technologies. They were also quick to point out that whether the software was traditional or hosted was secondary to what "problem" it solved. The ability to solve a particular problem is the most important aspect of getting your product or service in the door, more important than any license applied to the technology. So don't focus on "it's SaaS"; focus on the problem.

Also interesting was the declaration that cost is not always a major part of the decision process when looking at software and related services. One example was provided by a top-level VP; his story involved a 2,000-server deployment used for some sort of risk analysis (he was vague). This deployment easily costs them several million dollars, yet they only use these servers for about an hour per month (if at all). But when they do use them, on that one day when the "market goes crazy", it could mean the difference between a 2-billion-dollar loss and a 1-billion-dollar profit. His numbers may have been exaggerated a bit, but the point hit home. (It's all about making money.)
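
Taking the VP's rough numbers at face value, the utilization math makes the case for on-demand capacity almost by itself. The figures below are illustrative assumptions, not actual Morgan Stanley data:

```python
# Back-of-the-envelope utilization, taking the VP's rough numbers at face value.
# All figures here are illustrative assumptions, not actual Morgan Stanley data.

servers = 2000
hours_used_per_month = 1
hours_in_month = 24 * 30

utilization = hours_used_per_month / hours_in_month
idle_server_hours = servers * (hours_in_month - hours_used_per_month)

print(f"Utilization: {utilization:.2%}")                       # ~0.14%
print(f"Idle server-hours per month: {idle_server_hours:,}")   # 1,438,000
```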

Another area that kept being mentioned was virtual desktops; desktop deployments are big business for the bank. VDI users now have the ability to work within their own "context" and have their personal desktop environment move with them. No longer do IT staff need to continually maintain desktops on site, saving the bank a lot of time and resources. They also mentioned that "human resources" are their biggest technology cost. If an employee changes position, moves to a new office or leaves altogether, it's now just a couple of clicks, saving the bank a lot of money.

Also notable was the number of data integration companies at the event. Based on their sheer volume, I would say the bank is looking seriously at this area, although my conversations didn't touch upon the topic. (I was way too busy pushing my cloud agenda.)

One of the biggest surprises was that, regardless of the downturn in the markets, Morgan Stanley is on track to spend more than ever on their IT budget. They seem to think that periods of lower economic activity give them a rare opportunity to establish themselves in new areas of emerging technology that may give them a competitive advantage down the road. They also seem to think that their use of technology will directly influence their ability to maintain their lead in the lucrative tech IPO market (which appears to be nonexistent this year). They went on to say that the companies that emerge during hard times tend to do better in the long term (think Google). Morgan Stanley is ready to apply this to their own business and I applaud them for it. If I ever go IPO, I know who will represent me!

Sunday, June 8, 2008

Vertebra: EngineYard's Cloud Computing Platform

I'm preparing for my trip to San Francisco this week to attend the Morgan Stanley CTO Summit, so you won't see or hear much from me. I did want to briefly share some details of a new open source cloud computing platform the guys over at EngineYard are currently working on. They call it Vertebra. Awesome name; it's almost as crazy as our Enomalism name, so I have to give them credit for having the guts to do something different, both technology-wise and with their naming conventions.

Ezra Zygmuntowicz, founder of EngineYard and Merb developer, presented his latest project—Vertebra—at RailsConf 2008. The presentation slides are available on Ezra's blog.

He describes Vertebra as
Vertebra is a fairly large scope project. It is best described as a next generation Cloud Computing Platform. Built with Erlang/Ruby and centered around Ejabberd and XMPP. Vertebra can be used for automating the cloud as well as for distributed real time application development. The whole idea of Vertebra is to democratize the cloud, abstracting the cloud interface API's and allowing folks to utilize multiple cloud providers based on a number of cost/benefit factors. It also has large potential for enterprise integration projects. If you have some old legacy service that needs to join a modern architecture, you can write a simple agent to get your legacy service on the Vertebra message bus where it can be addressed by anything else on the message bus in a standard way.

Vertebra itself is the 'backbone' of a new platform. We are using it to automate many, many servers, but it also has big implications for application developers working on the real time web. It is basically an integration system; any language with an XMPP library that implements our protocol can join the XMPP cloud and become part of a larger organism of machines and services.

So Vertebra will come with tools for automating deployment of applications and virtual servers in the cloud. But it will also be useful as a backend messaging and distributed computing system that runs behind web apps, giving them powerful tools for running compute-heavy jobs in parallel a la map/reduce. It will also allow for dispatching based on least loaded nodes. Say you get a request to your web app that includes some image processing and you have a farm of 20 backends that can process images. When you get the request to your web app, you make a call to Vertebra asking for the least loaded node that can service this particular request; Vertebra returns a list of least loaded nodes and allows you to dispatch based on this or many other factors.
I can't wait to see this in action, and best of all it will be available under an open source GPL license.
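
As a rough illustration of the least-loaded dispatch pattern Ezra describes (this is not Vertebra's actual API or protocol, which I haven't seen; node discovery and load reporting are faked), a sketch might look like this:

```python
import random

# Rough sketch of least-loaded dispatch, in the spirit of what Ezra describes.
# This is NOT Vertebra's API; node discovery and load reporting are faked here.

class Node:
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = set(capabilities)

    def current_load(self):
        # Placeholder: a real system would report load over the message bus.
        return random.random()

def dispatch(nodes, capability):
    """Pick the least loaded node able to handle `capability`."""
    candidates = [n for n in nodes if capability in n.capabilities]
    if not candidates:
        raise RuntimeError(f"no node can handle {capability!r}")
    return min(candidates, key=lambda n: n.current_load())

farm = [Node(f"img-{i}", ["image_processing"]) for i in range(20)]
print("dispatching to:", dispatch(farm, "image_processing").name)
```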

You can get more details here: http://brainspl.at/articles/2008/06/02/introducing-vertebra

Wednesday, June 4, 2008

The Business of Building Clouds

There is an old saying in the venture capital world that consulting doesn't scale. As an entrepreneur I'm continually walking the line between making the short-term buck (consulting revenue) and the long tail (recurring revenue from product-based licensing and support). Given our platform is open source, consulting is typically a major part of our revenue model. The dilemma is a fairly straightforward one: I'm in business to make money, in our case from as many different opportunities as possible.

Lately it seems everyone is in need of assistance with their clouds; from architecture to setup and deployment, there seems to be a real need for the "cloud consultant". For us these jobs range from dedicated hosting firms and large telecoms looking to create EC2-like utilities to software vendors and traditional enterprises looking to deploy their new "as a service" offerings in a scalable way. A lot of people talk about the cloud killing the traditional system administrator's job, but in my opinion there has never been a better time to be working in IT. Those who see this paradigm shift toward cloud computing will prosper.

Defining cloud computing is in itself a tough job, and the lack of common cloud methodologies and best practices makes the job even harder. Trying to find experienced people who know how to build out a 30,000-machine cloud is nearly impossible; finding someone who's deployed hundreds is proving to be almost as difficult. We, the pioneers in the cloud computing space, must take steps to create an open development ecosystem, one where we share our failures and successes so others can learn the trade.

One way may be to create a common cloud specification. David Young over at Joyent has attempted to do this, calling for a common cloud specification he dubs "Cloud Nine". In his modest proposal, he calls for an open specification based on nine core components:

1) Virtualization Layer Network Stability

2) API for Creation, Deletion, Cloning of Instances

3) Application Layer Interoperability

4) State Layer Interoperability

5) Application Services (e.g. email infrastructure, payments infrastructure)

6) Automatic Scale (deploy and forget about it)

7) Hardware Load Balancing

8) Storage as a Service

9) “Root”, If Required

Although I'm not sure about the need for root access or hardware-based load balancing, his post raises some interesting ideas. In particular he says "a developer should be able to move between Joyent, the Amazon Web Services, Google, Mosso, Slicehost, GoGrid, etc. by simply pointing the “deploy gun” at the cloud and go." I think he nailed it dead on with this statement.
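
To make the "deploy gun" idea a little more concrete, here is a minimal sketch of what a provider-agnostic instance API (Cloud Nine's point 2) might look like. The class and method names are purely illustrative assumptions; no such standard exists today:

```python
from abc import ABC, abstractmethod

# Illustrative only: a provider-agnostic instance API in the spirit of
# Cloud Nine's point 2. These names are hypothetical, not a real standard.

class CloudProvider(ABC):
    @abstractmethod
    def create_instance(self, image: str, size: str) -> str:
        """Launch an instance and return its provider-assigned ID."""

    @abstractmethod
    def clone_instance(self, instance_id: str) -> str:
        """Create a copy of an existing instance."""

    @abstractmethod
    def delete_instance(self, instance_id: str) -> None:
        """Terminate an instance."""

def deploy(provider: CloudProvider, image: str, count: int, size: str = "small"):
    """Point the 'deploy gun' at any provider that implements the interface."""
    return [provider.create_instance(image, size) for _ in range(count)]
```

Any provider (Joyent, Amazon, GoGrid and so on) would supply its own concrete subclass, and the same deploy() call would work against all of them.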

At the end of the day our job as cloud builders is about creating simplicity and making IT easier to manage and easier to scale.

CloudCamp San Francisco

Over the last few weeks a group of us have been working on organizing a Cloud Computing Event in San Francisco on June 24th.

CloudCamp San Francisco – Summer 2008

Call for Participants

CloudCamp was formed in order to provide a common ground for the introduction and advancement of cloud computing. Through a series of local CloudCamp events, attendees can exchange ideas, knowledge and information in a creative and supporting environment, advancing the current state of cloud computing and related technologies.

As an informal, member-supported gathering, we rely entirely on volunteers to help with meeting content, speakers, meeting locations, equipment and membership recruitment. We also depend on corporate sponsors who provide financial assistance with venues, software, books, discounts, and other valuable donations.

We invite you to participate in the first CloudCamp event, to be held Tuesday, June 24, 2008 from 5-9 pm in San Francisco, California.

There are a number of opportunities to get involved, including:

· ATTEND – Attending CloudCamp SF is both free and fun. Space is limited, so please visit http://upcoming.yahoo.com/event/759667 to register.

· PRESENT – CloudCamp SF will follow the popular Open Space format, which encourages an open exchange between presenters and participants. If you've got a cloud-related topic to discuss, visit the Cloud Camp SF wiki and post your ideas.

· DEMO – A separate track will be held for 10 minute cloud computing demonstrations. If you'd like to demo, please submit your topic to the wiki.

· SPONSOR – A number of sponsorship opportunities are available at the Platinum ($2,000), Gold ($1,000) and Silver ($500) levels. Sponsors receive recognition for their support and enhanced visibility at the Camp.

· ORGANIZE – CloudCamp is a non-profit, volunteer-driven organization. If you'd like to help plan CloudCamp SF or a future CloudCamp, join an organizing committee by joining the Cloud Computing Google Group and letting us know about your interest.

CloudCamp Around the Web

For additional information on CloudCamp, please visit www.cloudcamp.com. CloudCamp can also be found in various places around the Web.

Monday, June 2, 2008

Failure as a Service - Cloud Redundancy

This weekend we had one of those "I should have known better" moments. For the last few years we've hosted our primary and secondary DNS servers at The Planet. Around 5 pm on Saturday our data center literally blew up. Even though most of our application servers are hosted at Amazon EC2 or in-house, this one relatively minor point of failure managed to take down our entire IT infrastructure. We mistakenly assumed that the chances of both DNS servers going offline at the same time were slim, and up until now we had assumed correctly.

This disaster is especially difficult for me since I spend my days pitching the merits of geographically redundant cloud computing, which I call "failure as a service". The concept goes like this: if you assume you may lose any of your servers at any point in time, you'll design a more fault-tolerant environment. For us that means making sure our application components are always replicated on more than one machine, preferably geographically dispersed. This way we can lose groups of VMs, physical machines, data centers or whole geographic regions without taking down the overall cloud. In a lot of ways this approach is similar to the architecture of a P2P network or even a modern botnet, both of which rely heavily on decentralized command and control.
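
A toy check of that rule of thumb: every component should run in at least two distinct locations. The deployment map below is made up, but it reflects exactly the gap that bit us:

```python
# Toy check of the "failure as a service" rule of thumb: every component
# should run in at least two distinct locations. The map below is made up.

deployment = {
    "web":      ["us-east", "us-west"],
    "database": ["us-east", "eu-west"],
    "dns":      ["us-east"],            # the single point of failure in our case
}

def single_points_of_failure(deployment):
    return [component for component, locations in deployment.items()
            if len(set(locations)) < 2]

print(single_points_of_failure(deployment))  # ['dns']
```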

As early users of Amazon EC2 we quickly learned about failure: we would routinely lose EC2 instances, and it became almost second nature to design for this type of transient operating environment. To make matters worse, for a long time EC2 had no persistent storage available; if you lost an instance, the data was also lost. So we created our own Amazon S3-based disaster recovery system we call ElasticDrive.

ElasticDrive allows us to mount Amazon S3 as a logical block device, which looks and acts like a local storage system. This enables us to always have a "worst case scenario" remote backup for exactly this type of event, and luckily for us we lost no data because of it. What we did lose was time: a Sunday afternoon spent fixing something that shouldn't have even been an issue.

In our case our application servers, databases and content had been designed to be distributed, but our key point of failure was our use of a single data center to host both of our name servers. When the entire data center went offline, so did our DNS servers, and so did our 200+ domains. If we had made one small but critical change (adding a redundant remote name server), our entire IT infrastructure would have continued to work uninterrupted. Instead, when I awoke Sunday morning, to my surprise, everything from email to our web sites to even our network monitoring system failed to work.

I should also note that Amazon has recently worked to overcome some of the early limitations of EC2 with the inclusion of persistent storage options as well as something they call Amazon EC2 Availability Zones. They describe availability zones as: "The ability to place instances in multiple locations. Amazon EC2 locations are composed of regions and availability zones. Regions are geographically dispersed and will be in separate geographic areas or countries. Currently, Amazon EC2 exposes only a single region. Availability zones are distinct locations that are engineered to be insulated from failures in other availability zones and provide inexpensive, low latency network connectivity to other availability zones in the same region. Regions consist of one or more availability zones. By launching instances in separate availability zones, you can protect your applications from failure of a single location."
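
Spreading instances across zones can be as simple as round-robin placement. A minimal sketch (the zone names are just examples, and launch_instance() stands in for whatever provisioning call you actually use):

```python
from itertools import cycle

# Round-robin placement of instances across availability zones, so that no
# single zone (or data center) holds every copy of a component.
# launch_instance() is a placeholder for your actual provisioning call.

ZONES = ["us-east-1a", "us-east-1b", "us-east-1c"]

def spread_across_zones(count, launch_instance, zones=ZONES):
    """Launch `count` instances, alternating zones."""
    zone_cycle = cycle(zones)
    return [launch_instance(zone=next(zone_cycle)) for _ in range(count)]
```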

Well, Amazon, if you were looking for a use case, look no further, 'cause I'm your guy.

I've learned a valuable, if painful, lesson. No matter how much planning you do, nothing beats a geographically redundant configuration.
----
If anyone is interested in learning more about the issues at The Planet (9,000 servers offline):
http://tech.slashdot.org/article.pl?sid=08/06/01/1715247

Or EC2 Availability Zones

http://developer.amazonwebservices.com/connect/entry.jspa?externalID=1347

Sunday, June 1, 2008

Cloud Computing on Wikipedia

Is it just me, or is the Wikipedia article on cloud computing terrible? Considering it's the first result in a Google search on the topic, I think it might be worthwhile for a few of us to volunteer to improve it. For anyone who hasn't seen it, check it out at http://en.wikipedia.org/wiki/Cloud_computing
