Wednesday, December 31, 2008

Who invented the term cloud computing?

Interesting post by John Willis on the topic of "who coined the term cloud computing." In his post he asks who invented the term.

He points to an August 2006 search engine conference where Eric Schmidt of Google described their approach to SaaS as cloud computing. I think this was the first high-profile usage of the term, where not just "cloud" but "cloud computing" was used to refer to SaaS, and since it was in the context of Google, the term picked up the PaaS/IaaS connotations associated with the Google way of managing data centers and infrastructure. (One common story indicates that Schmidt took the opportunity to use the term "cloud computing" in an attempt to steal some of the thunder from the Amazon Elastic Compute Cloud, which was also launching later that same month in 2006, a classic example of Google "FUD".)

In my conversations with Amazon folks in the spring of 2006 they had already begun referring to their "secret" utility computing project as an elastic compute cloud. So the term was already being fairly broadly discussed privately within Amazon, as well as externally with people including me.

Michael Loukides at O'Reilly, my editor on the forthcoming "Cloud Computing: A Strategy Guide", says the use of "cloud" at O'Reilly as a metaphor for the Internet dates back to at least 1992, which is pretty close to the start of O'Reilly's publishing on networking topics. He goes on to say the idea of a "cloud" was already in common use then.

By 2006 "cloud" had become the catchphrase of the year; every blog, social widget, and web-based application seemed to have some kind of cloud angle, only back then it referred to a visual cloud of words, typically organized by size and frequency. So applying the term cloud to something other than the visual would have fit well into the hype cycle found within social applications in 2006.

In doing my research for the cloud guide, I think I have found the first public usage of the term "cloud" as a metaphor for the Internet in a paper published by MIT in 1996. As a side note, this article outlines most of the concepts which have become central to the ideas found within cloud computing. Certainly worth a read.

The Self-governing Internet: Coordination by Design

See Figure 1. The Internet's Confederation Approach
Massachusetts Institute of Technology

Sharon Eisner Gillett
Research Affiliate
Center for Coordination Science

Sloan School of Management
Mitchell Kapor
Adjunct Professor
Media Arts and Sciences

Prepared for:
Coordination and Administration of the Internet
Workshop at Kennedy School of Government, Harvard University
September 8-10, 1996

Appearing in:
Coordination of the Internet, edited by Brian Kahin and James Keller,
MIT Press, 1997

Tuesday, December 30, 2008

ElasticHosts Releases Extensive API

Richard Davies, over at ElasticHosts in the UK, has sent me details of their latest API release for the ElasticHosts cloud infrastructure service. Davies noted that they feel the API semantics are more important than syntax, and went on to say they intend to produce text/plain, application/xml, application/json and application/x-www-form-urlencoded variants of this same set of commands so that users can pick whichever seems easiest. (I should also note we use a similar approach with the Enomaly ECP API.) They put a lot of emphasis on the API; I guess we'll see if customers embrace their use of an extensive ReSTful API.

Here are some more details of the release:

ElasticHosts, the second European cloud infrastructure and the world's first public cloud based upon KVM*, today released its API and remote management tool. These enable users to automatically upload server images and control servers running within ElasticHosts' flexible infrastructure, supplementing the existing web management interface.

ElasticHosts provides flexible server capacity in the UK for scalable web hosting and on-demand burst computing uses such as development/test, batch compute, overflow capacity and disaster recovery.
The ElasticHosts HTTP API is described in detail at:

The API is implemented in a straightforward ReST style, and is accompanied by a simple command line tool enabling Unix users to control ElasticHosts infrastructure from their own scripts without writing any code.
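To illustrate the multi-representation idea, the same logical command can be rendered in each of the four media types ElasticHosts mention. This is just a sketch; the command name and fields below are hypothetical, not ElasticHosts' actual API.

```python
import json
import urllib.parse
from xml.etree import ElementTree as ET

# One logical command -- e.g. "start server X" -- rendered four ways.
# The field names are invented for illustration.
command = {"server": "a1b2c3", "action": "start"}

def as_plain(cmd):
    # text/plain: simple "key value" lines
    return "\n".join(f"{k} {v}" for k, v in cmd.items())

def as_json(cmd):
    # application/json
    return json.dumps(cmd)

def as_form(cmd):
    # application/x-www-form-urlencoded
    return urllib.parse.urlencode(cmd)

def as_xml(cmd):
    # application/xml
    root = ET.Element("command")
    for k, v in cmd.items():
        ET.SubElement(root, k).text = v
    return ET.tostring(root, encoding="unicode")
```

The point of keeping the semantics identical across representations is that a shell script, a browser form, and a JSON-speaking client can all drive the same operations.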

Monday, December 29, 2008

The United Federation of Cloud Providers

A fundamental challenge in creating and managing a globally decentralized cloud computing environment is maintaining consistent connectivity between various untrusted components that are capable of self-organization while remaining fault tolerant. In the next few years a key opportunity for the emerging cloud industry will be defining a federated cloud ecosystem by connecting multiple cloud computing providers using an agreed-upon standard or interface. In this post I will examine some of the work being done in cloud federation, ranging from adaptive authentication to modern P2P botnets.

Cloud computing is undoubtedly a hot topic these days; lately it seems just about everyone is claiming to be a cloud of some sort. At Enomaly our focus is on the supposed "cloud enablers", those daring enough to go out and create their very own computing clouds, either privately or publicly. In our work it has become obvious that the real problems are not in building these large clouds, but in maintaining them. Let me put it this way: deploying 50,000 machines is relatively straightforward; updating 50,000 machines, or worse yet taking back control after a security exploit, is not.

There are a number of organizations looking into solving the problem of cloud federation. Traditionally, a lot of this work has been done in the grid space. More recently, a notable research project being conducted by Microsoft, called the "Geneva Framework", has been focusing on some of the issues surrounding cloud federation. Geneva is described as a claims-based access platform and is said to help simplify access to applications and other systems with an open and interoperable claims-based model.

In case you're not familiar with the claims authentication model, the general idea is that claims about a user, such as age or group membership, are passed along to obtain access to the cloud environment and to systems integrated with that environment. Claims can be built dynamically, picking up information about users and validating existing claims via a trusted source as the user traverses multiple cloud environments. More simply, the concept allows multiple providers to seamlessly interact with one another. The model enables developers to incorporate various authentication models that work with any corporate identity system, including Active Directory, LDAPv3-based directories, application-specific databases and newer user-centric identity models such as LiveID, OpenID and InfoCard systems, including Microsoft's CardSpace and Novell's Digital Me. For Microsoft, authentication seems to be at the heart of their interoperability focus. For anyone more Microsoft inclined, Geneva is certainly worth a closer look.
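To make the model concrete, here's a minimal Python sketch of how claims from multiple issuers might be filtered against a trust list and checked against a policy. The issuer names, claim types, and policy shape are all made up for illustration; a real Geneva-style system exchanges signed security tokens rather than plain dictionaries.

```python
# Hypothetical trusted issuers -- the sources we accept claims from.
TRUSTED_ISSUERS = {"corp-ad", "openid-provider"}

def validate_claims(claims):
    """Keep only claims asserted by a trusted source."""
    return [c for c in claims if c["issuer"] in TRUSTED_ISSUERS]

def authorize(claims, required):
    """Grant access if any trusted claim satisfies the requirement."""
    trusted = validate_claims(claims)
    return any(c["type"] == required["type"] and c["value"] == required["value"]
               for c in trusted)

user_claims = [
    {"type": "group", "value": "cloud-admins", "issuer": "corp-ad"},
    {"type": "age",   "value": "over-18",      "issuer": "unknown-blog"},  # untrusted
]

policy = {"type": "group", "value": "cloud-admins"}
```

The key property for federation is that the relying cloud never queries the identity store directly; it only evaluates claims vouched for by an issuer it trusts.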

For the more academically focused, I recommend reading a recent paper titled "Decentralized Overlay for Federation of Enterprise Clouds", published by Rajiv Ranjan and Rajkumar Buyya at the University of Melbourne. The team outlines the need for cloud decentralization and federation to create a globalized cloud platform. In the paper they say that a distributed cloud configuration should be considered decentralized if none of the components in the system are more important than the others: if one component fails, it is neither more nor less harmful to the system than the failure of any other component. The paper also outlines the opportunities to use peer-to-peer (P2P) protocols as the basis for these decentralized systems.

The paper is very relevant given the latest discussions occurring in the cloud interoperability realm. It outlines several key problem areas:
  • Large scale – composed of distributed components (services, nodes, applications, users, virtualized computers) that combine to form a massive environment. These days enterprise clouds consisting of hundreds of thousands of computing nodes are common (Amazon EC2, Google App Engine, Microsoft Live Mesh), and hence federating them together leads to a massive-scale environment;
  • Resource contention – driven by the resource demand pattern and a lack of cooperation among end-users' applications, a particular set of resources can get swamped with excessive workload, which significantly undermines the overall utility delivered by the system;
  • Dynamic – the components can leave and join the system at will.
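The paper's decentralization property, that no component matters more than any other, can be sketched with a toy consistent-hashing overlay: every node is an equal peer, and when one leaves, only the keys it happened to own move to a successor. (This is my own illustrative sketch, not code from the paper; the node names are made up.)

```python
import bisect
import hashlib

def h(s):
    # Map any string onto a large integer ring.
    return int(hashlib.sha1(s.encode()).hexdigest(), 16)

class Overlay:
    def __init__(self, nodes):
        # Every node occupies a position on the ring; none is special.
        self.ring = sorted((h(n), n) for n in nodes)

    def owner(self, key):
        # A key is owned by the first node clockwise from its hash.
        i = bisect.bisect(self.ring, (h(key), ""))
        return self.ring[i % len(self.ring)][1]

    def leave(self, node):
        # A node may leave at will; only its keys shift to a successor.
        self.ring = [(p, n) for p, n in self.ring if n != node]
```

The failure of any single node is no more harmful than the failure of any other: only the departed node's own keys are remapped, which matches the equal-importance definition quoted above.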
Another topic of the paper is the challenges in the design and development of a decentralized, scalable, self-organizing, and federated cloud computing system, as well as applying the characteristics of peer-to-peer resource protocols, which they call Aneka-Federation. (I've tried to find other references to Aneka, but it seems to be a term used solely within the University of Melbourne; interesting nonetheless.)

Also interesting were the problems they outline with earlier distributed computing projects such as SETI@home, noting that these systems do not provide any support for multi-application and programming models, a major factor driving some of the more traditional users of grid technologies to cloud computing.

One of the questions large scale cloud computing opens is not how to manage a few thousand machines, but how to manage a few hundred thousand machines. A lot of the work being done in decentralized cloud computing can be traced back to the emergence of modern botnets. A recent paper titled "An Advanced Hybrid Peer-to-Peer Botnet" by Ping Wang, Sherri Sparks, and Cliff C. Zou at the University of Central Florida outlines some of the "opportunities" by examining the creation of a hybrid P2P botnet.

In the paper the UCF team outlines the problems encountered by P2P botnets, which appear surprisingly similar to the problems being encountered by the cloud computing community. The paper lays out the following practical challenges faced by botmasters:
  • How to generate a robust botnet capable of maintaining control of its remaining bots even after a substantial portion of the botnet population has been removed by defenders?
  • How to prevent significant exposure of the network topology when some bots are captured by defenders?
  • How to easily monitor and obtain the complete information of a botnet by its botmaster?
  • How to prevent (or make it harder for) defenders to detect bots via their communication traffic patterns?
In addition, the design should also consider many network related issues such as dynamic or private IP addresses and the diurnal online/offline property of bots. A very interesting read.

I am not condoning the use of botnets, but architecturally speaking we can learn a lot from our more criminally focused colleagues. Don't kid yourselves, they're already looking at ways to take control of your cloud and federation will be a key aspect in how you protect yourself and your users from being taken for a ride.

Tuesday, December 23, 2008

Cloud Computing For a Cause

Yesterday I had a great conversation with Romanus Berg of Ashoka, the world's largest network of social entrepreneurs and a long time customer of Enomaly. In the conversation we discussed some of the opportunities that cloud computing may offer as a social empowerment tool in emerging economies.

In case you've never heard of Ashoka: founded by Bill Drayton, it was one of the first groups to popularize the concept of social entrepreneurship. At its core, social entrepreneurship is found within businesses that recognize a social problem and use entrepreneurial principles to organize, create, and manage a venture to make social change. Whereas a business entrepreneur typically measures performance in profit and return, a social entrepreneur assesses success in terms of the impact s/he has on society.

Ashoka acts as a kind of people aggregator, finding the diamonds in the rough: those one-in-a-million individuals who effect major changes within their local societies. Ashoka believes that we are in the midst of a rare, fundamental structural change in society: citizens and citizen groups are beginning to operate with the same entrepreneurial and competitive skill that has driven business ahead over the last three centuries. People all around the world are no longer sitting passively idle; they are beginning to see that change can happen and that they can make it happen.

During the conversation it became clear that both Romanus and I shared a similar vision for cloud computing: not just a method of increasing IT productivity but an empowerment tool for "under-enabled" people, people who up until recently have never had the opportunities that modern information technology has afforded the western world.

This concept of a socially conscious cloud struck a chord with me. In many emerging economies, technology in particular can skip whole generations; for example, the move in China to mobile phones, skipping past more traditional forms of telephony. Similarly, cloud computing may represent a major opportunity to bring both knowledge and modern computing technology through the use of low cost wireless networks and mobile devices connected to regionalized clouds.

Cloud computing as a socially conscious enterprise may not be limited to emerging economies; it may also enable the latest eco-trend of green technology. Global cloud computing represents the opportunity to make adjustments based on your carbon footprint. Imagine being able to adjust your computing energy consumption based on which provider of electricity is using the best and greenest sources.

Like Ashoka, I believe we are in the midst of a rare, fundamental structural change. At the end of the day, cloud computing is about choice; mix in a social consciousness and we start to see one of the bigger socio-technological revolutions of our time, the information revolution.

Sunday, December 21, 2008

Cloud Interoperability and The Neutrality Paradox

Recently during some behind the scenes conversations, the question of neutrality within the cloud interoperability movement was raised.

The question of cloud interoperability opens an interesting point when looking at the concepts of neutrality, in particular for those in a position to influence its outcome. At the heart of this debate was my question of whether anyone or anything can be truly neutral. Or is the very act of neutrality in itself the basis for some other secondary agenda? (Think of Switzerland in the Second World War.) For this reason I have come to believe that the very idea of neutrality is in itself a paradox.

Let me begin by stating my obvious biases. I have been working toward the basic tenets of cloud computing for more than 5 years, something I originally referred to as elastic computing. As part of this vision, I saw the opportunity to connect a global network of computing capacity providers using common interfaces as well as (potentially) standardized interchange formats.

As many of you know, I am the founder of a Toronto based technology company, Enomaly Inc., which focuses on the creation of an "elastic computing" platform. The platform is intended to bridge the need for better utilization of enterprise compute capacity (private cloud) with the opportunities of a limitless, global, on-demand ecosystem of cloud computing providers. The idea is to enable a global hybrid data center environment. In a lot of ways, my mission of creating a consensus for the standardized exchange of compute capacity is driven by a fundamental vision for both my company and the greater cloud community. To say interoperable cloud computing is something I'm passionate about would be putting it mildly. Just ask my friends, family or colleagues and they will tell you I am obsessed.

Recently, I created a CCIF Mission & Goals page, a kind of constitution which outlines some of the group's core mission. As part of that constitution I included a paragraph stating what we're not. In the document I stated the following: "The CCIF will not condone any use of a particular technology for the purposes of market dominance and or advancement of any one particular vendor, industry or agenda. Whenever possible the CCIF will emphasize the use of open, patent free and or vendor neutral technical solutions." This statement directly addresses some of the concepts of vendor bias, but doesn't address bias within the organizational structure of the group dynamic.

Back to the concept of neutrality as a cloud vendor: as interest in cloud interoperability has begun to gain momentum, it has become clear that these activities have more to do with realpolitik and less to do with idealism. A question was posed: should a vendor (big or small) be in a position to lead the conversation on the topic of cloud interoperability? Or would a more impartial, neutral party be in a better position to drive the agenda forward?

The very fact that this question is being raised is indicative of the success of both the greater cloud computing industry and our efforts to drive some industry consensus around the topic of interoperability. So regardless of my future involvement, my objectives have been set into motion. Which is a good thing.

My next thought was whether there is really such a thing as a truly neutral entity. To be truly neutral would require a level of apathy that may ultimately result in a failed endeavour. Or to put it another way, to be neutral means being indifferent to the outcome, which also means there is nothing at stake to motivate an individual or group to work towards its stated goals. My more pragmatic self can't help but feel that even a potentially "more neutral" party could have some ulterior motives; we all have our agendas. And I'm ok with that.

I'm not ok with those who don't admit to them. The first step in creating a fair and balanced interoperable cloud ecosystem is to in fact state our biases and take steps to offset them by including a broad swath of the greater cloud community, big or small, vendor, analyst or journalist.

So my question is this, how should we handle the concept of neutrality and does it matter?

Friday, December 19, 2008

Stephen Pollack leaves PlateSpin but shares the love

Infoweek has a nice little article about Stephen Pollack, founder of PlateSpin and now a notable advisor to virtualization startups Embotics and Enomaly. David Marshall asks: will they be as successful as PlateSpin? I certainly hope so. Stephen's insights as a bootstrapper have been extremely useful and timely.

Pollack stated, "Embotics and Enomaly are examples of young companies yearning to explore some of the new areas emerging in systems management - if I can help them succeed in some way using my experiences, I'm happy to do so. If there are companies who might need some assistance from someone like myself, I'm happy to explore that with them."

Read Article Here

(Please Note, I originally, mistakenly posted that Stephen is an advisor at DynamicOps, this is not the case.)

Thursday, December 18, 2008

Keeping Grid and Cloud Computing Separate

There is nothing I enjoy more than stirring up controversy, and lately it seems to find me wherever I go. As some of you know, I've been attempting to organize a Wall Street Cloud Interoperability Forum for this March. In my planning, I seem to have stirred up some great debates among the more established high performance computing and grid computing organizations who have focused on the banking and finance industry.

In a conversation yesterday with a prominent Wall Street grid advocate, he bluntly said that no bank would use external (cloud) resources anytime in the near future and that a uniform cloud interface was a hopeless cause (I'm paraphrasing). This is pretty much exactly the opposite of what I'm hearing from several high level IT folks within the banking industry, and quite a different story from what Craig Lee, the President of the OGF, has said to me. According to Lee, the intersection with cloud computing is a critical area for the OGF and one of their main focuses going forward. I think the hardest part is going to be convincing the established HPC/grid community that cloud computing and grid are not one and the same.

(I should also note we're putting on a joint CloudCamp with the OGF at their OGF25/EGEE User Forum in Sicily this March)

To shed some light on whether banks are truly embracing cloud computing, I have had the opportunity of speaking to many IT managers on Wall Street over the last few months. In one such conversation earlier this week with a major German bank, I was told that most banks are being forced to do a lot more with a lot less money. They see the ability to outsource the so-called low hanging fruit, such as test/dev, performance testing, continuity, and various customer facing services, as immediate opportunities that will allow them to better utilize existing infrastructure while moving less important, yet capacity intensive, applications to the cloud. The concept of a unified cloud interface (UCI) that enables a single (standardized) programmatic interface to both internal and external resources is also of particular interest; I've received no fewer than a half dozen calls on the subject of UCI in the last two weeks alone.

The fact is, I've heard the same story from several of the largest banks on Wall Street: it isn't that cloud computing may happen, but that it is happening, and there is a lack of tools to enable this migration/transformation. The opportunity would seem to be in addressing the intersection of legacy IT infrastructures with the hybrid infrastructure of the future. I'm not alone in this thinking; notably, Cisco and Sun have pointed to similar opportunities. (I should note, both Cisco and Sun are involved in my cloud interop activities.)

I think what bugs me about the HPC/grid guys is that they seem to be driven by academic motivations of using distributed computing as a kind of universal problem solver. It's been almost 10 years since grid technology was first discussed, and yet we seem no closer to any kind of wide industry adoption. In a little over two years, cloud computing has been able to do something grid has never been able to accomplish: become the next big thing. Yes, I know hype doesn't make a sustainable industry, but it certainly helps.

The grid computing community's problem is that it doesn't appear to be driven by saving money or any broad, practical business benefits, but instead by solving very particular problem sets. I feel that one of the major benefits of cloud computing is that it addresses a much broader opportunity, one that touches upon just about every aspect of a modern IT environment. I also find it interesting that unlike the grid/HPC community, which is predominately driven by academia, cloud computing is being driven by business and, more importantly, actual business problems: businesses who for the first time are able to tap into near limitless opportunities driven by cheap and easy access to compute capacity.

Wednesday, December 17, 2008

Cloud Mining & Enterprise Social Messaging (Enterprise Twitter)

Got sent a few intriguing links today. The first touches upon a concept of data mining the "social web", a topic I wrote about a few weeks back in my post "The Industrial Revolution of Data". A new site called StockTwits is described as an open, community-powered idea and information service for investments. Users can eavesdrop on traders and investors, or contribute to the conversation and build their reputation as savvy market wizards. The service takes financial related data - using Twitter as the content production platform - and structures it by stock, user, reputation, etc.

StockTwits is a prime example of a concept I call "social knowledge discovery". These services will give anyone the ability to spot trends, breaking news, and threats (economic, physical or otherwise) in real time or even preemptively. Think of it as a social cloud mining platform.

Also launched today is a service called StatTweets, which uses Twitter as a notification mechanism. The service allows users to get news, live game scores, standings, rankings, point spread updates, and other stats for their favorite basketball or football team, all from Twitter.

Are you interested in creating your very own social knowledge discovery service? Now you can thanks to a new open source project by the Apache foundation called Apache ESME or Enterprise Social Messaging Experiment. ESME is described as a secure and highly scalable microsharing and micromessaging platform that allows people to discover and meet one another and get controlled access to other sources of information, all in a business process context.

Some of ESME's present features include an Adobe AIR client, a web client, an extensive set of built-in actions, and login via OpenID. Planned features include federation scenarios, groups, ERP notifications, prioritization, and showing local time for users. Most of these planned features don't currently exist in Twitter, so it will be interesting to see if Twitter follows suit.

Tuesday, December 16, 2008

Google Cloud Economics, The Quota

Big news coming out of Google today: they have announced some new upcoming features for the Google App Engine platform. There really hasn't been much news coming out of Google App Engine lately, so today's news is all the more exciting.

First up is a feature they're describing as a Downtime Notify Google Group, a dashboard used to announce scheduled downtime and explain any issues that affect App Engine applications. This is similar to other trouble dashboards such as Amazon's.

The more interesting feature is the Quota Details Dashboard, which enables a granular utilization view for each Google App Engine application. In case you've never used Google App Engine, one of the more unique aspects of the system is its use of a set of resource quotas that control how much CPU, bandwidth, and storage space a Google App can consume. Currently all of this usage is free, but in the future developers will be allowed to purchase additional usage beyond these free quotas. Until today, potential customers of the system haven't had a real pricing model outlined beyond some vague pricing details, so it's been rather difficult to actually model a business on Google App Engine.

Google shed some light on the subject today. According to a blog post:

"You'll be able to buy capacity based on a daily budget for your app, similar to the way AdWords spending works. You'll have fine-grained control over this daily budget so you can apply it across CPU, network bandwidth, disk storage, and email as you see fit. You'll only pay for the resources your app actually uses, not to exceed the budget you set."

What I find most interesting about all this news is Google's use of a quota system for tiered billing of cloud resources. In this quota model Google can attract a large user base by offering a "free" frictionless entry, while the more successful applications may choose to pay for enhanced services via fixed or per-day usage quotas. In my opinion this very well may be the first step in the creation of a true commodity-based exchange of computing services and capacity.
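To see how such a model might work in practice, here's a back-of-the-envelope sketch of the "free quota plus daily budget" billing described above. The free quotas and per-unit rates below are invented placeholders, since Google hadn't published actual prices at the time of writing:

```python
# Illustrative numbers only -- not Google's real quotas or rates.
FREE_QUOTA = {"cpu_hours": 46, "bandwidth_gb": 10}      # free allowance per day
RATE       = {"cpu_hours": 0.10, "bandwidth_gb": 0.12}  # dollars per unit beyond quota

def daily_charge(usage, budget):
    """Charge only for usage beyond the free quota, capped by the daily budget."""
    billable = sum(max(usage[r] - FREE_QUOTA[r], 0) * RATE[r] for r in usage)
    return min(billable, budget)
```

The interesting economic property is the cap: an application can never spend more than its owner budgeted for the day, which is what makes the model AdWords-like rather than open-ended metered billing.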

In a post on GigaOm earlier today, Alistair Croll said that "Google is carefully launching an ecosystem for developers to build and sell their cloud-based software." I did some digging on the subject, but other than Croll's comments, I couldn't find anything concrete coming from anyone at Google. But it does make sense: recent reports indicate that there is a market emerging around Google Apps, with more than 10 million active users and some 3,000 new companies signing up each day, according to Matthew Glotzbach, product management director of Google Enterprise. So it would seem Google App Engine might be the ideal tool to power Google's emerging cloud application ecosystem. Think along the lines of a Cloud Application Marketplace. It's the same basic concept Facebook did with its API or Salesforce did with AppExchange; in Google's case, users may now have a global turnkey channel that can reach small businesses easily. Very cool.

I for one am looking forward to seeing how this all plays out.

CCIF on Twitter (@cloudforum)

A lot of people have been asking whether we have a Twitter account set up for the Cloud Computing Interoperability Forum (CCIF). Well, I'm happy to say we now do.

If you use Twitter and are interested in interacting with other CCIF members, please follow @cloudforum.

Once you're following the @cloudforum twitter feed you can then add fellow CCIF members at

I'll try to post the first group of members using the cloudforum account.

You may also want to "retweet"
Announcing the @cloudforum twitter feed on Cloud Computing Interoperability. Please follow & Retweet.

Monday, December 15, 2008

Fire Eagle + XMPP realtime notifications

I've been busy working on my cloud interoperability chapter for the upcoming O'Reilly cloud computing guide, so I thought I'd take a break to share a cool new location-centric, XMPP based app I was sent today.

Seth Fitzsimmons over at Yahoo has published Fire Eagle's new XMPP PubSub endpoint, which is now up and running. It's a great XMPP implementation and does a nice job of showcasing a subset of the XEP-0060 (Publish-Subscribe) components, including:

* subscribe (signed using OAuth (XEP-0235) w/ User-specific access tokens)
* unsubscribe (signed using OAuth w/ User-specific access tokens)
* subscriptions (signed using OAuth w/ General Purpose access tokens)
* event notifications (location updates)
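For the curious, the subscribe request at the heart of XEP-0060 is just an IQ stanza wrapping a pubsub element. Here's a rough sketch of its shape built with Python's standard library; the JIDs and node path are made up for illustration, and the OAuth signature fields required by XEP-0235 are omitted:

```python
from xml.etree import ElementTree as ET

def subscribe_stanza(from_jid, to_jid, node):
    # Build an <iq type="set"> carrying a pubsub <subscribe> request,
    # per the XEP-0060 wire format.
    iq = ET.Element("iq", {"type": "set", "from": from_jid,
                           "to": to_jid, "id": "sub1"})
    pubsub = ET.SubElement(iq, "pubsub",
                           {"xmlns": "http://jabber.org/protocol/pubsub"})
    ET.SubElement(pubsub, "subscribe", {"node": node, "jid": from_jid})
    return ET.tostring(iq, encoding="unicode")

# Hypothetical JIDs and node path, just to show the shape.
stanza = subscribe_stanza("alice@example.com/client",
                          "fireeagle.example.net",
                          "/api/0.1/user/token")
```

Once the subscription is accepted, location updates arrive as PubSub event notifications pushed over the same XMPP stream, rather than being polled over HTTP.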

Seth also wrote some (Ruby) client code for it as well as some additional instructions for subscribing to individual users' nodes:

Event notifications contain XML-formatted location information, the same as you would get when querying for an individual user's location.

According to his post, you don't need to use Fire Hydrant to receive updates (use an XMPP library in your language of choice that supports PubSub event notifications), but you'll probably want to use Switchboard to create appropriate subscriptions, as it's the only tool he knows of that supports OAuth for XMPP requests.

Switchboard is here:

Original Post >

On-Demand Enterprise (formerly GRIDtoday) suspending operations

Sad day for cloud publications: On-Demand Enterprise (formerly GRIDtoday) is suspending operations. For anyone involved in any sort of distributed computing over the last few years, GRIDtoday has been a key source of information and insights into the industry. They will be sorely missed.

I wish Derrick Harris and the rest of the team over at On-Demand Enterprise the best of luck in any future endeavors.

Sunday, December 14, 2008

The 2009 Cloud Experience (Repost)

As part of an ongoing series of posts, David Marshall at VMblog asked me to prognosticate on some of the key IT/cloud trends for 2009. His question was simple: "What do virtualization executives think about 2009?" Below is my take on things for 2009.

For archival purposes, I'm reposting the article on ElasticVapor. To see the original post, please visit

The 2009 Cloud Experience

The year 2008 has been a big one for cloud computing. In a rather dramatic shift, we've seen the term "cloud" enter the collective IT consciousness. It seems almost every technology vendor, big or small, has embraced the movement to the cloud as a software and marketing philosophy. Generally, cloud computing can be viewed loosely as an Internet-centric software and services model. Specifically for the data center, cloud computing represents the opportunity to apply some of the characteristics of the decentralized, fault tolerant nature of the Internet.

Up until now, software companies didn't have to concern themselves with the concepts of "scale" or adaptive infrastructure capacity. In the traditional 90's desktop software model, users of a given application were responsible for the installation, administration and operation of that application, typically on a single computer. Each desktop formed a "capacity silo" somewhat separate from the greater world around it. Now, with the rising popularity of Internet-based applications, the need for remote capacity to handle an ever expanding, always connected online user base is increasingly becoming a crucial aspect of any modern software architecture. In 2008 we even witnessed the typically desktop-centric Microsoft jump into the fray, outlining a vision for Software + Services (S+S), which they describe as local software and Internet services interacting with one another.

In looking at 2009 I feel the greater opportunity will be in the merger of the next generation of "virtualized" data centers with a global pool of cloud providers to create scalable hybrid infrastructures geared toward an optimal user experience. Cisco's Chief Technology Officer, Padmasree Warrior recently referred to this merger as the "intra-cloud". Warrior outlined Cisco's vision for the cloud saying that cloud computing will evolve from private and stand-alone clouds to hybrid clouds, which allow movement of applications and services between clouds, and finally to a federated "intra-cloud". She elaborated on the concept at a conference in November "We will have to move to an 'intra-cloud,' with federation for application information to move around. It's not much different from the way the Internet evolved." With Cisco's hybrid vision, I believe the end-user experience will be the key factor driving the usage of cloud computing both internally and externally.

To enable this hybrid future, cloud interoperability will be front and center in 2009. Before we can create a truly global cloud environment, a set of unified cloud interfaces will need to be created. One such initiative, which I helped create, is called "The Cloud Computing Interoperability Forum (CCIF)". The CCIF was formed in order to enable a global cloud computing ecosystem whereby organizations work together for the purposes of wider industry adoption of cloud computing technology and related services. The forum is a little over 3 months old and has grown to almost 400 members encompassing almost every major cloud vendor. A key focus of the forum is the creation of a common, agreed-upon framework / ontology that enables two or more cloud platforms to exchange information in a unified manner. To help accomplish this, the CCIF in 2009 will be working on the creation of a Unified Cloud Interface (UCI), or cloud broker. The cloud broker will serve as an open interface for interaction with remote cloud platforms, systems, networks, data, identity, applications and services. A common set of cloud definitions will enable vendors to exchange management information between remote cloud providers. By enabling industry-wide participation, the CCIF is helping to create a truly interoperable global cloud, which should also improve the cloud-centric user experience.
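To make the broker idea concrete, here's a minimal sketch of the adapter pattern a unified cloud interface implies: one common call surface, with per-provider adapters translating behind it. The class and method names here are illustrative assumptions, not the actual UCI specification.

```python
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    """The common interface every provider adapter must implement."""
    @abstractmethod
    def provision(self, image_id, count):
        """Start `count` instances of `image_id`; return instance ids."""

class ExampleEC2Adapter(CloudProvider):
    """Stand-in adapter; a real one would issue EC2 API requests."""
    def provision(self, image_id, count):
        return [f"ec2-{image_id}-{i}" for i in range(count)]

class CloudBroker:
    """Dispatches one unified call to whichever provider is registered."""
    def __init__(self):
        self.providers = {}

    def register(self, name, provider):
        self.providers[name] = provider

    def provision(self, name, image_id, count):
        return self.providers[name].provision(image_id, count)

broker = CloudBroker()
broker.register("ec2", ExampleEC2Adapter())
print(broker.provision("ec2", "ami-123", 2))
```

The point of the pattern is that adding a second provider means writing one more adapter, not changing any caller.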

In the startup space, a number of notable companies have shown up to address the concept of "scale" by creating load-based cloud monitoring and performance tools. For the most part, these tools have been focused on cloud infrastructure environments such as Amazon's Elastic Compute Cloud. There have also been an increasing number of cloud providers appearing on a regional level in Europe and Asia, providing resources for the first time to scale on a geographical basis. Thanks in part to interop efforts as well as advancements in wide-area computing, we may soon be able to scale not only on superficial aspects such as load, but on practical questions like: how fast does my application load for users in the UK?

The quality of a user experience as the basis for scaling and managing your infrastructure will be a key metric in 2009. The general problem is that a given cloud vendor/provider may be living up to their SLA's contract language, thus rating high in quality of service, while the actual users are very unhappy because of a poor user experience. In a lot of ways the traditional SLA is becoming somewhat meaningless in a service-focused IT environment. With the emergence of global cloud computing, we have the opportunity to build an adaptive infrastructure environment focused on the key metric that matters most: the end user's experience while using your application. Whether servicing an internal business unit within an enterprise or a group of customers accessing a website, ensuring an optimal experience for those users will be the reason they keep coming back, and ultimately what will define a successful business.
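To make the idea concrete, here's a minimal sketch of scaling on measured user experience rather than raw load: watch what users in each region actually see (response time), and flag the regions whose experience breaches a threshold. The percentile choice, threshold, and sample numbers are all invented for the example.

```python
def percentile(samples, pct):
    """Crude percentile: the value at the pct-th position of the sorted list."""
    ordered = sorted(samples)
    index = min(len(ordered) - 1, int(len(ordered) * pct / 100))
    return ordered[index]

def regions_needing_capacity(latency_by_region, threshold_ms=800):
    """Return regions whose 95th-percentile latency exceeds the threshold."""
    return [
        region
        for region, samples in latency_by_region.items()
        if percentile(samples, 95) > threshold_ms
    ]

# Hypothetical observed response times (ms) per region.
observed = {
    "us-east": [120, 140, 180, 200, 210],
    "uk":      [650, 900, 1200, 1400, 1500],  # UK users are suffering
}
print(regions_needing_capacity(observed))
```

The same loop could feed a provisioning call instead of a report, which is exactly the experience-driven scaling the paragraph describes.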

A key player in the emerging cloud as a user-experience enabler is Microsoft with their Microsoft UC Quality of Experience (QoE) program. Although initially focused toward VOIP applications, I feel the core concepts work well for the cloud. The MS QoE program is described as a comprehensive, user-focused approach to perceived quality centered on the actual users, and incorporating all significant influencing parameters in optimizing the user experience. Real time metrics of the actual experience help in measuring, quantifying and monitoring at all times the actual experience using live analytical information of the user's perceived subjective quality of the experience. QoE may very well be a key component in Microsoft's plan to dominate the cloud. It resonates well with what I'm hearing in the community at large. As trusted enablers, companies like IBM, Cisco, Sun, and Microsoft are in the prime spot to address this current trend toward "trusted" cloud computing providers. They have the know-how, global networks and most importantly the budgets to make this a reality.

Possibly the biggest opportunity in the coming year will be for those who embrace the "cloud stack": a computing stack that takes a real-time look at a given user's experience, and a computing environment that can adapt, as well as autonomously take corrective actions, to continuously optimize the user's subjective experience on any network, anywhere in the world, at any time. Sun Microsystems' John Gage was right when he famously said, "The network is the computer." It's just taken us 25 years to realize that the "Internet" (the cloud) is the computer.

Friday, December 12, 2008

Enomaly ECP 2.1.1 Released

We're happy to announce that Enomaly ECP 2.1.1 has hit the shelves (or SourceForge, as it were...). This is a bug fix and security release, so don't expect to see a whole lot of new functionality - 2.2 is coming, hopefully shortly in the new year with lots of new features.

This maintenance release fixes a potential security exploit in the startup script's temporary file handling as well as the following bug fixes:

* Randomly generated MAC addresses are now written to the machine XML at provision time.
* The available system memory is now checked against the required memory for new machines at provision time.
* Fixed a bug regarding the valet extension module not properly checking the hypervisor type.
* Fixed a bug that prevented a machine's XML definition from being edited.
* Fixed several misc. bugs in the valet extension module.
* Added messages to the interface stating the required extension modules.

Thanks to everyone who provided feedback during the development process!

Grab the latest release today.

Wednesday, December 10, 2008

Enomaly Appoints PlateSpin Founder Stephen Pollack to Advisory Board

I'm happy to announce that Stephen Pollack, founder and CEO of PlateSpin, has joined the Enomaly Board of Advisors. Stephen led PlateSpin, a provider of data center workload management solutions, to a highly publicized $205M exit when it was acquired by Novell in February 2008. Before founding PlateSpin, Stephen held senior management positions at FloNetwork Inc., Fulcrum Technologies (now part of Hummingbird) and NCR. He brings over 25 years of IT experience in marketing, sales, development and lifecycle support of successful businesses.

I'm very excited to have Stephen advising us. With his extensive experience in launching and managing companies from startup to acquisition, he provides us with a wealth of knowledge that is second to none. I'm looking forward to working with Stephen over the coming months as we build Enomaly into a world-class organization. As a self-made and extremely successful Canadian entrepreneur, Stephen is the kind of role model you aspire to become. Needless to say, we are very fortunate to have him as part of our team.

Cloud Costing: Fixed Costs vs Variable Costs & CAPEX Vs OPEX

As I continue my world tour of the cloud, I keep having the same conversation about the cost advantages of cloud computing in a tough economic climate. The general consensus seems to be that cloud computing's real benefit in a bad economy is its ability to move so-called "fixed costs" to variable or operational costs. So I thought I'd take a moment to investigate some various cloud costing terms.

Starting at the bottom are fixed costs, which are basically business expenses that are not dependent on the level of production or sales. They tend to be time-related, such as salaries or rents paid per month. You can think of these costs as the baseline operational expenses of running your business, or in the case of IT, the cost of maintaining your core technical infrastructure.

In a sense, a key economic driver for the use of remote capacity (cloud computing) is its ability to convert a fixed cost into a variable cost: a cost which is volume-related (paid per use / quantity). This seems to be the industry-standard way to justify the use of cloud computing.

Then there is the question of capital expenditures (CAPEX), which are expenditures creating future benefits; for example, the money Amazon is spending on building out their web services infrastructure. One way to look at cloud computing is as a method of deferring CAPEX. The theory is generally stated like this: by utilizing someone else's infrastructure, you can choose to pay only if and when you need additional capacity, as a variable cost. It's also interesting to note that capital expenditures can have some tax benefits because they can be amortized or depreciated over the life of the assets in question. So potentially the ability to create a hybrid cloud model that uses both existing resources and remote resources may actually be very compelling to larger established businesses, mostly for tax benefits. For example, a large hosting firm may choose to recycle used dedicated servers into their cloud offering, thus making money on both the CAPEX and variable-cost sides. (This assumes the business is profitable.)
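As a rough illustration, here's a back-of-envelope sketch comparing an amortized CAPEX purchase with pay-per-use cloud capacity. Every figure is hypothetical; the point is only the shape of the comparison.

```python
def owned_monthly_cost(purchase_price, lifetime_months, monthly_opex):
    """Amortize the capital outlay over the asset's life, plus running costs."""
    return purchase_price / lifetime_months + monthly_opex

def cloud_monthly_cost(hourly_rate, hours_used):
    """Pure variable cost: pay only for the hours actually consumed."""
    return hourly_rate * hours_used

# Hypothetical numbers: a $3,000 server amortized over 3 years with $50/mo
# upkeep, versus renting capacity at $0.10/hour.
server    = owned_monthly_cost(purchase_price=3000, lifetime_months=36, monthly_opex=50)
part_time = cloud_monthly_cost(hourly_rate=0.10, hours_used=200)  # bursty workload
full_time = cloud_monthly_cost(hourly_rate=0.10, hours_used=720)  # always-on

print(f"owned: ${server:.2f}/mo, bursty cloud: ${part_time:.2f}/mo, 24x7 cloud: ${full_time:.2f}/mo")
```

With these made-up figures the variable-cost model wins for bursty usage; the crossover point shifts with the real rates, which is exactly why the fixed-versus-variable framing matters.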

The next motivating factor is that of the operational expenditure, or OPEX, which is an ongoing cost for running a product, business, or infrastructure. I like the Wikipedia OPEX example: the purchase of a photocopier is the CAPEX, and the annual paper and toner cost is the OPEX. In the cloud world OPEX is considered the great unknown variable; it may be cheaper upfront to use a cloud provider, thus reducing your CAPEX, but long term it may cost more to manage your remote assets because of increased software complexity, security concerns, and a variety of other reasons.

This brings us to total cost of ownership (TCO), which is a financial estimate designed to help consumers and enterprise managers assess direct and indirect costs. TCO is typically the way an IT vendor validates their particular costs over a competitor's, such as Microsoft vs. Linux. The argument Microsoft used was that Linux is cheaper upfront, but much more expensive to manage down the road; therefore Microsoft's TCO is much lower than that of Linux. TCO is very difficult to quantify and can be easily "gamed", and therefore should be viewed rather sceptically.

At the end of the day, for most businesses it's cheaper to use someone else's infrastructure than it is to use your own. At its heart, this is the key reason to use the cloud.

Tuesday, December 9, 2008

Botnets : Electronic Weapons of Mass Destruction

Slashdot is reporting on a paper in the magazine Policy Review titled "The botnet peril" by the recent Permanent Undersecretary of Defense for Estonia.

In the article, the authors say botnets should be designated as 'eWMDs' — electronic weapons of mass destruction. I personally couldn't agree more. With modern advancements in decentralized command and control systems combined with simplistic desktop vulnerability detection, creating a botnet of several thousand zombie PCs is a matter of sniffing your local ISP's network. (I actually wrote an article for Wired earlier this year on how to build your own botnet, but it was declined for some unknown reason.)

The article raises some great points, including the concept of cyber warfare as asymmetric warfare: more is at risk for us than for most of our potential adversaries (criminals, terrorists or rogue governments). Another asymmetric aspect is that the victims of cyber warfare may never be able to determine the identity of their actual attacker. Thus, America cannot meet this threat by relying solely upon a strategy of retaliation, or even offensive operations in general. (How do you attack a decentralized, multi-nation organism? Think digital Al-Qaeda.)

I also found this bit interesting "The U.S. government has a similar duty, but on a larger scale. Because botnets represent such a real threat to our domestic cyberspace and all the assets that those Internet-accessible computers control, it is a vital national interest to secure the domestic Internet." (Basically, the weakest link in almost all critical infrastructure is now IT connectivity. Those who control the network control the world.)

They give a pretty good account of the Russian botnet attack on Estonia last year. Interestingly, a similar two-phased tactic was used on Georgia this summer. The initial so-called hacktivist phase was apparently used as PR cover, or a diversion, for a later botnet phase. It took some time for the international media to realize that the actual nature of the attack was the ensuing, more sophisticated, organized, and devastating botnet assault, which brought down critical pieces of the government's ability to communicate (email, phones, etc).

As many readers of my blog are keenly aware, the only real way to deal with this sort of cyberwarfare is to create a proactive botnet defence system, one capable of adapting to prolonged digital bombardment. The next major opportunity for the governments and global enterprises will be in the implementation of custom "enterprise botnets". This is not me going out and prognosticating, this is happening today.

The Cloud Poser / Expert

Great post by Daryl Plummer at Gartner titled "Instant Experts Floating in the Cloud". In his post he outlines some of the issues in the advocacy of cloud computing and the rise of the so called "Cloud Expert". He describes this person as "The instant expert is one of those people who seemed to know nothing about a topic two days ago but now sounds like they invented it. It’s the woman who studied all night to learn the difference between the cloud, cloud computing, and cloud services because Daryl Plummer or Reuven Cohen was so eloquent about it. Get the picture? Instant Experts are all around us; and, oddly enough, we need them now more than ever."

He's dead on, and I'm the first to admit I fall into that category from time to time. I'm the cloud interoperability expert not so much because I have any real experience in cloud standards, or because I've participated in any other interoperability groups; it's simply because I have a vested interest in the subject, as well as the network of associates to make such an endeavour feasible. Basically, I'm learning as I go. As for elastic computing, aka cloud computing: I'm the expert because 5 years ago I was the nut trying to get people to use shared virtual resources. Simply put, I've put in my time.

In a recent Business Week article "Cloud Computing Is No Pipe Dream" Jeffrey Rayport said it well.

"Tech pundits and practitioners alike have spilled lots of ink to hype cloud computing. They'll encourage you to think of it as IT infrastructure on demand—like plugging into the power grid to get electricity, or turning on a faucet to get water, but getting raw computing power instead. To boot, you'll get access to storage capacity and software-based services—and it can scale infinitely, proponents point out.

If this all sounds too good to be true—like so much cold fusion, the now-debunked tabletop nuclear fusion reactor in a bottle—don't be fooled. This time, getting more for less is real. And with the economy of its currently parlous condition, businesses have never needed cloud computing more."

What struck me about what Rayport said was that it is difficult to tell the posers from the legitimate "experts". Moreover, I'm not even sure why we should consider Rayport an expert on the subject. With all the FUD, who do you trust? Random bloggers, providers, vendors?

Daryl Plummer has done a great job of outlining how to spot the pretenders from the contenders.

  • Pretenders want you to know how much they know. Contenders want you to know what you need to know.
  • Pretenders want you to believe they truly understand concepts. Contenders want you to know how concepts relate to other concepts in a specific context.
  • Pretenders spout facts. Contenders deliver insights.
  • Pretenders dismiss differences in meanings and definitions of concepts as “just semantics”. Contenders specify the context of their ideas and the meanings of concepts within that context.
  • Pretenders use their knowledge to reflect glory on their past accomplishments. Contenders use their past accomplishments to inject knowledge into their new ideas.
  • Pretenders can easily be tripped up by a well placed question. Contenders pose well-placed questions.
  • Pretenders can only slice an inch deep. Contenders cut straight to the bone.

Read the whole post here.

Monday, December 8, 2008

Cloud Scaling: Dynamic Vs Auto Scaling

There has been a rather interesting exchange on the O'Reilly blog on the topic of auto-scaling in the cloud. George Reese, founder of Valtira and enStratus (a couple of companies I've never heard of), argues that the concept of auto-scaling is stupid.

George describes auto-scaling as the ability to add and remove capacity in a cloud infrastructure based on actual usage, with no human intervention necessary. He goes on to say that he doesn't like this concept and vigorously talks his customers away from the idea of auto-scaling. All the more interesting considering enStratus describes their service / product as a cloud infrastructure management tool that automates the operation of your web sites and transactional applications in a cloud infrastructure. (Sounds like they specialize in auto-scaling.)

Instead he seems to prefer the concept of dynamic scaling, which he describes as the ability to add and remove capacity in your cloud infrastructure on a whim, ideally because you know your traffic patterns are about to change and you are adjusting accordingly. (If I knew my traffic was going to change, then why would I need an automated scaling system?)

If I am reading George's theory correctly, automation = bad, dynamic = good. But I think he misses the point to a certain degree. The two are not mutually exclusive.

For me the concept of dynamic scaling is simply the ability to change and adapt your infrastructure. This in itself is a major advancement; until recently, the ability to "change" or adapt your infrastructure in a reasonable amount of time was an extremely difficult process. Infrastructure has been a somewhat "static" resource for most traditional data centers. With the advancements in dynamic system management from the likes of Opsware, BladeLogic and even VMware, you now have the ability to create a fully dynamic IT environment. For a large portion of the world this is a very big step forward.

I think the real issue with George's post is in the word auto. Does he actually mean automatic (capable of operating without external control or intervention)? Or does he mean automation (the act or process of converting the control of a machine or device to a more automatic system)? I would say the latter. No scaling operation should be fully or completely automated; it should be a series of controls, rules, policies, quotas and monitors that are tailored to reduce the need for human operator involvement, or to achieve a set of requirements such as the quality of the user experience.
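A minimal sketch of that middle ground, automation bounded by human-set policy, might look like this. The rule names and thresholds are my own invention, not any vendor's API.

```python
def scaling_decision(current, cpu_pct, policy):
    """Suggest a new instance count, clamped to the policy's quota."""
    target = current
    if cpu_pct > policy["scale_up_above"]:
        target = current + policy["step"]
    elif cpu_pct < policy["scale_down_below"]:
        target = current - policy["step"]
    # The quota is the human-set guard rail the automation cannot cross.
    return max(policy["min_instances"], min(policy["max_instances"], target))

# Operator-defined policy: the human decides the rules, the system applies them.
policy = {
    "scale_up_above": 80, "scale_down_below": 20,
    "step": 2, "min_instances": 2, "max_instances": 10,
}
print(scaling_decision(current=9, cpu_pct=95, policy=policy))  # capped at 10
print(scaling_decision(current=3, cpu_pct=10, policy=policy))  # floors at 2
```

The decision function is fully automated, yet every number it can produce sits inside limits a person chose, which is the distinction between automation and an unsupervised automatic system.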

I'm personally all for a Dynamic Automated Infrastructure.

Businessweek: U.S. Is Losing Global Cyberwar

Well, I'm back from my whirlwind trip to Boston; I hope to get back to my regular posts this week.

In the meantime, I wanted to tell you about a great article in BusinessWeek magazine about how the U.S. is losing the global cyberwar. According to the article, "The U.S. faces a cybersecurity threat of such magnitude that the next President should move quickly to create a Center for Cybersecurity Operations and appoint a special White House advisor to oversee it."

Check out the article here

Saturday, December 6, 2008

Information Week: Master The Cloud

Charles Babcock at InformationWeek has written a nice post saying that if you're the master of virtualization at home, then you'll eventually master the cloud. He also included the work I've been doing with the Cloud Interoperability Forum.

Read the informationweek post here.

Wednesday, December 3, 2008

The Internet as a Cloud Infrastructure Model

In my never ending quest to speak to every venture capitalist on the planet, I find myself in Boston this evening and pondering something I said in a few of my investor meetings today.

In one of my famous off-topic VC rants, I described my vision for a unified cloud interface using an analogy of the Internet's self-governing model as the basis of an adaptive enterprise cloud. Funny as it may sound, this is the first time I've used this particular analogy.

The Internet itself would appear to be the perfect model for a "cloud platform". The Internet uses a self-governing model whereby there is no single administrative entity, and it must continue to operate in the event of critical failures. By design, the web's core architecture assumes there will be sporadic global failures and can fail gracefully without affecting the Internet as a whole. The Internet exists and functions as a result of the fact that millions of separate services and network providers work independently, using common data transfer protocols to exchange communications and information with one another (which in turn exchange communications and information with still other systems). There is no centralized storage location, control point, or communications channel for the Internet. This very decentralized architecture has allowed the Internet to adapt and evolve over time, almost like a living organism.

One of the reasons the Internet works is its open communication protocols; by their very nature they form the ideal model for a cloud coordination tool, exactly the kind of system that can automate the routine 99 percent of computer-to-computer interactions you'd want in a cloud platform. But there is a catch. Protocols automate interoperability only if all core Internet service providers agree to use the same ones. (Enter cloud interoperability.)

Open standards are key to the Internet's composition and are a core component of interoperability within a truly distributed command and control structure. The Internet works because anyone who wants to create a website can freely use the relevant document and network protocol formats (HTML, HTTP, etc.), secure in the knowledge that anyone who wants to look at that website will expect the same formats. An open standard serves as a common intermediate language: a simplifying approach to the complex coordination problem of allowing anyone to communicate successfully with anyone else.
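The point can be shown in miniature: once two parties implement the same wire format, they interoperate with no further coordination. Here is a toy sketch built around an HTTP-style request line; it is illustrative only, not a real protocol implementation.

```python
# The "standard": a trivial HTTP-style request line, e.g. "GET /path HTTP/1.1".
def serialize(method, path, version="HTTP/1.1"):
    """Produce a request line in the shared wire format."""
    return f"{method} {path} {version}"

def parse(request_line):
    """Consume a request line in the shared wire format."""
    method, path, version = request_line.split(" ")
    return {"method": method, "path": path, "version": version}

# A "client" from one vendor and a "server" from another interoperate
# purely because both implement the same format; neither needs to know
# anything else about the other.
wire = serialize("GET", "/index.html")
print(parse(wire))
```

Swap either side for any other implementation of the same format and the exchange still works, which is the whole argument for open standards as the basis of cloud interoperability.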

My random thought for this evening.

Forget about Artificial Intelligence, think Collective Intelligence

Amazon seems to be on roll these days. After announcing their Amazon Data Sets a few weeks ago they are back with a very cool new iPhone App. For me it isn't so much that Amazon has released yet another iphone app (YaiPa), but how they've enabled their customers to tap into what I'm calling the " Collective Intelligence".

Basically the app lets users take a photograph of any product they see in the real world. (Yes, the world outside the digital one, crazy I know.) The photos are then uploaded to Amazon and given to a global network of (low paid) workers using Amazon's Mechanical Turk crowdsourcing platform. For a small fee, a collection of "actual humans" will try to match the photos with products for sale on Amazon. According to a New York Times post, the results will not be instantaneous (between 5 minutes and 24 hours).
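The human-in-the-loop pattern behind this can be sketched in a few lines: work the software cannot do (recognize a photo) is queued for people, and their answers flow back into the system. The class and method names below are purely illustrative, not Amazon's actual Mechanical Turk API.

```python
import queue

class HumanTaskPool:
    """Toy model of a crowdsourced work queue with human-supplied answers."""
    def __init__(self):
        self.pending = queue.Queue()
        self.answers = {}

    def submit(self, task_id, payload):
        """Queue a task (e.g. an uploaded photo) for a human worker."""
        self.pending.put((task_id, payload))

    def work_one(self, human_matcher):
        """A 'worker' (person) takes one task and records an answer."""
        task_id, payload = self.pending.get()
        self.answers[task_id] = human_matcher(payload)

    def result(self, task_id):
        return self.answers.get(task_id)  # None until a human responds

pool = HumanTaskPool()
pool.submit("photo-1", "blurry picture of a kettle")
# The lambda stands in for the human judgment no algorithm provides.
pool.work_one(lambda photo: "Stainless Steel Kettle, $39.99")
print(pool.result("photo-1"))
```

The asynchronous gap between `submit` and `result` is why the real service quotes "between 5 minutes and 24 hours": the latency is human, not computational.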

So why is this cool? Let's revisit the idea of collective intelligence, that of a shared or group intelligence that emerges from the collaboration and competition of many individuals. Basically this is a subset of the concept of crowdsourcing, which has been around for a while. Until recently there really hasn't been an effective way to "programmatically" tap into the greater community, and more importantly there haven't been many good examples of this being used in the "real world".

What is interesting is that up until now, traditional applications haven't really been able to autonomically adjust themselves based on physical or digital demands. It has typically required human intervention (the website is slow, let's add another server). With the emergence of global cloud computing, these formerly static systems are beginning to become "infrastructure aware"; the capability of automatically adjusting as demands and requirements change in real time is quickly becoming a standard requirement of any modern IT environment. The next logical step as we move toward a technological singularity will be combining a somewhat or fully aware infrastructure with the collective intelligence of humans. I feel this represents a crucial bridge between how intelligent technology will interact with both the digital and physical world around it, and more importantly how it will benefit the people who use it.

As anyone familiar with my various schemes knows, from Enomaly's open source elastic computing platform to CloudCamp's community-driven events to, more recently, my Cloud Interoperability Forum, my guiding principle has always been that technology is changing faster than any single human can adapt; no single person or company is as valuable as the community that supports it. I feel the ability to draw on the power of mass collaboration as a point of participation is ultimately more valuable than any workforce I might be able to assemble.

Amazon appears to be one of the few companies that grasps this concept and best of all, applies it as a kind of corporate mantra, "We are not what we sell but rather the community we sell to".

Tuesday, December 2, 2008

The Cloud OS & The Future of the Operating System

Some exciting developments in the world of cloud operating systems today. Yes, I know, operating environments aren't usually associated with the adjective "exciting", but this news is different.

Good OS has introduced a new operating system for cloud computing, appropriately called "Cloud," which is the successor to the company's Linux-based gOS.

Unlike gOS, Cloud does not open up onto a desktop. Instead, it boots directly into a web browser; after booting up, you are greeted with a full-screen browser page which looks like a traditional OS, including shortcuts to cloud applications like Google Docs and Calendar, as well as Blogger and YouTube. Cloud's so-called "proprietary application framework" is said to allow you to run client applications, such as Skype or Media Player, opening them in new tabs just like in Windows or Linux. They don't really give any indication of how this is actually accomplished. Fear not, I have an email in to them; I'll let you know as I find out more.

Back to why this is exciting: Cloud is one of the first OSes to embrace the hybrid cloud OS model, one that combines the best of both worlds, the use of a local CPU for performance with the scale and infinite opportunities of the cloud. Together this creates a unique opportunity to take on the traditional OSes of the world.

You can also think of Cloud OS as being similar to the Apple iPhone, where certain applications are loaded directly on the phone (CPU) and other application components remain on a server somewhere in the cloud (the Internet). This hybrid model seems particularly well suited to emerging economies (think India, China, etc.), where software licensing can be prohibitively expensive. The One Laptop per Child (OLPC) project comes to mind. (OLPC is a non-profit association dedicated to research to develop a low-cost, connected laptop, a technology that could revolutionize how we educate the world's children.)

While I'm on that theme, I find this hybrid model particularly interesting for "virtual desktop" deployments, where a user may be given a netbook or thin client that contains the user's core identity, favorites, etc., while the majority of the functionality is loaded via the cloud (aka the Internet). I will also say this sounds an awful lot like Microsoft's new Software + Services philosophy: a combination of local software and Internet services interacting with one another.

We seem to be quickly moving toward a future where the operating system acts more like a portal and less like a traditional application stack. In the case of Good OS's Cloud, unlike other cloud frameworks such as Google's Chrome, Cloud is its own operating system that runs alongside an existing OS such as Windows or Linux while securely accessing the CPU. From what I can tell it currently cannot replace the main operating system, but in the future this may very well become the case.

ThinkGOS describes it this way: "Cloud uniquely integrates a web browser with a compressed Linux operating system kernel for immediate access to Internet, integration of browser and rich client applications, and full control of the computer from inside the browser."

They go on to say "Cloud features a beautifully designed browser with an icon dock for shortcuts to favorite apps, tabs for multi-tasking between web and rich client apps, and icons to switch to Windows, power off, and perform other necessary system functions. Users power on their computers, quickly boot into Cloud for Internet and basic applications, and then just power off or boot into Windows for more powerful desktop applications."

In some ways I think they're missing some key opportunities. Rather than switching to Windows for key applications such as Office, or even gaming, I think the opportunity may be to stream the applications "on demand" using remote desktop technologies. So if you'd like to use a more traditional app, you can do so directly within the browser-based OS, but in a fully quarantined VM or VNC connection. Another option would be to use a technology such as Wine, which is an open source implementation of the Windows API on top of X, OpenGL, and Unix. (That's a whole other post.)

Currently the Cloud operating space is in its infancy, but with companies like Microsoft entering with its Azure framework, it would seem that the future of the OS lies in the cloud.

Monday, December 1, 2008

EU launches Green Code of Conduct for Data Centers

I seem to have missed this major announcement last week that the EU launched a "Green" Code of Conduct for Data Centers. The Code of Conduct was created in response to increasing energy consumption in data centers and the need to reduce the related environmental, economic and energy supply impacts. It was developed with collaboration from the British Computer Society, AMD, APC, Dell, Fujitsu, Gartner, HP, IBM, Intel, and many others.

According to the PDF overview, "The Code of Conduct has been created in response to increasing energy consumption in data centres and the need to reduce the related environmental, economic and energy supply security impacts. The aim is to inform and stimulate data centre operators and owners to reduce energy consumption in a cost-effective manner without hampering the mission critical function of data centres. The Code of Conduct aims to achieve this by improving understanding of energy demand within the data centre, raising awareness, and recommending energy efficient best practice and targets."

Those who choose to abide by the voluntary Code of Conduct will have to implement energy efficiency best practices, meet minimum procurement standards, and report energy consumption every year.

Learn more here

CCIF Mission & Goals (Draft Outline)

Since I posted my "Unified Cloud Interface (UCI)" proposal last week, to say there has been a significant amount of interest in cloud interoperability would be putting it mildly. Over the last week I've received dozens of emails and calls from people looking to get involved with our initiative. So I figured it would make sense to outline some of our objectives for the forum before we discuss any technical implementation aspects.

(The following is a draft and should be looked at as a suggestion open to discussion)

The Cloud Computing Interoperability Forum
Mission and Goals (Draft Dec 1st 2008)

CCIF Goals
The Cloud Computing Interoperability Forum (CCIF) was formed in order to enable a global cloud computing ecosystem whereby organizations are able to seamlessly work together for the purpose of wider industry adoption of cloud computing technology and related services. A key focus will be placed on the creation of a common, agreed-upon framework / ontology that enables two or more cloud platforms to exchange information in a unified manner.

CCIF is an open, vendor-neutral, not-for-profit community of technology advocates and consumers dedicated to driving the rapid adoption of global cloud computing services. CCIF shall accomplish this through the use of open forums (physical and virtual) focused on building community consensus, exploring emerging trends, and advocating best practices / reference architectures for the purposes of standardized cloud computing.

Community Engagement
By bringing a global community of vendors, researchers, architects and end users together within an open forum, business and science requirements can be translated into best practices and, where appropriate, relevant and timely industry standards that enable interoperability and integration within and across organizational boundaries. This process is facilitated by online mailing lists, forums and regular "in person" meetings held several times a year that bring the broad community together in workshops and/or group meetings. All CCIF activity will be underpinned by a web presence that enables communication within the various CCIF working groups and the sharing of their work with the broader community.

What we're not
The CCIF will not condone the use of any particular technology for the purposes of market dominance and/or the advancement of any one particular vendor, industry or agenda. Whenever possible the CCIF will emphasize the use of open, patent-free and/or vendor-neutral technical solutions.

Board of Directors and Advisory Board
There has been some interest in the creation of both a CCIF board of directors and an advisory board. I'm open to the idea, but don't think it's necessary at this time.

If you are interested in getting involved in the discussion, please join our forum here.

New England Cloud User Group Meeting (Tuesday)

Word travels fast. I'll be in Boston this week for several meetings, but unfortunately I will miss the New England Cloud User Group Meeting on Tuesday evening; I don't arrive until Wednesday afternoon. If you're in the Boston area on Tuesday and are interested in learning more about cloud computing, you should go to the New England Cloud User Group!

They'll have Bret Hartman, the CTO of RSA, talking about security as it relates to the cloud, as well as Vikram Kumar and Prasad Thammaneni, CTO and CEO of Pixily, giving the start-up experience of leveraging S3 and EC2 exclusively.

Dec 2nd, 6-9pm
Papa Razzi restaurant
16 Washington St (on route 16 off of route 128)
Wellesley, MA 02481
(781) 235-4747

Friday, November 28, 2008

Cloud Wars: The US Department of Defense under Cyber Attack!

The big headline in today's Los Angeles Times reads "Cyber-attack on Defense Department computers raises concerns." Raises concerns? Are you kidding me? I've been saying this for a while: the current US network-centric defenses are in a complete state of disarray and everyone outside of the U.S. knows it. Saying it's a concern is putting it mildly.

In the LA Times article they had this to say about the current attack: "Senior military leaders took the exceptional step of briefing President Bush this week on a severe and widespread electronic attack on Defense Department computers that may have originated in Russia -- an incursion that posed unusual concern among commanders and raised potential implications for national security."

This should not come as a surprise to anyone involved in Network Centric Operations / Warfare. Since 9/11 the US Defense Information Systems Agency (DISA) has spent billions on various intelligence schemes, but has completely failed in the area of proactive network defense. (As a side note, DISA is a combat support agency responsible for planning, engineering, acquiring, fielding, and supporting global net-centric solutions to serve the needs of the President, Vice President, the Secretary of Defense, and other DoD components, under all conditions of peace and war.)

At DISA there seems to be a fixation on data mining communications networks in the vain attempt to find terrorists using the US phone system or unencrypted websites. The real issue is a complete lack of DoD network interoperability for joint, interagency, and multi-national network operations. One major step forward is the adoption of open standards and common processes. In the meantime, countries such as China and Russia in particular have built massive citizen botnets. In an instant, Russia can turn on a hundred thousand slave PCs and bring down the entire networks of Georgia, Ukraine or some other unsuspecting country before the US or other allies even know what's happening. (Look at Georgia this summer)

This current attack on the DoD is a relatively minor diversion in comparison to what an all-out, planned network-centric attack could actually do. Think about the potential fallout if the US electrical grid, cell / phone network and financial infrastructure were attacked in unison and taken offline all at once. Now imagine it happening in the midst of an actual "crisis" such as what we're currently seeing in India this week. The turmoil would be unprecedented.

DISA isn't alone in this new age of cyber warfare. Earlier this year the World Bank revealed a significant intrusion within the bank's highly restricted treasury network. It was revealed that the bank's core systems had been deeply penetrated with "spy software", and that the invaders also had full access to the rest of the bank's network for nearly a month in June and July of 2007. At least six major breaches have been detected at the World Bank since the summer of 2007, with the most recent breach occurring just last month. What's worse, this was "common knowledge" in the black hat security scene for more than six months before it was disclosed to the public.

In fairness to the World Bank and the US DoD, they are not alone; every single G7/G8 government has suffered similar breaches over the last couple of years. What's scary is the fact that most of these countries have not disclosed these breaches publicly. Lately most of these countries seem to be preoccupied with the current financial crisis while a far more dangerous crisis sits in waiting. As a conspiracy theorist, I can't help but think the two might be somewhat connected. Traditional terrorism doesn't work; in the new world order it's those who control the network who hold the power.

Recently I've been invited to speak at the Network Centric Operations Industry Consortium (NCOIC) on the topic of network centric operations and interoperability. Unfortunately, because my wife is in her 9th month of pregnancy, I will miss the next event on Dec 11th.

I think NCOIC's mission sums up the challenges nicely: "The deciding factor in any military conflict is not the weaponry, it is the network. The missing link in today's disaster recovery efforts is a working network. And the key to emergency response is accurate information that enables first-responders to know what happened, who's responded, and what is still required. From the warrior to emergency personnel to the modern day consumer, access to all information, without regard to hardware, software, or location of the user, is no longer attractive, it is imperative."

Thursday, November 27, 2008

The Industrial Revolution of Data

As I watch the reports from Mumbai come in across my various social feeds (blogs, Twitter, Facebook, Flickr, etc.), I can't help but think one of the biggest opportunities for the next generation of news providers is that of data mining the massive amount of information being fed through the Internet. When a big news story breaks, it is now much more likely that information will be delivered through an army of citizen journalists using mobile phones and social media services than by traditional means.

In a recent post on O'Reilly Radar, "The Commoditization of Massive Data Analysis", Joe Hellerstein described what he called "The Industrial Revolution of Data". His post went on to say, "we are starting to see the rise of automatic data generation 'factories' such as software logs, UPC scanners, RFID, GPS transceivers, video and audio feeds. These automated processes can stamp out data at volumes that will quickly dwarf the collective productivity of content authors worldwide. The last step of the revolution is the commoditization of data analysis software, to serve a broad class of users."

Although for most of the post Joe gets caught up in the finer technical details, I think he is onto a general trend toward the large-scale commoditization of data analysis. In some ways I also think he misses one of the bigger opportunities for large-scale data analysis: that of the social cloud.

Traditionally, business analysts have used data mining of large data sets such as credit card transactions to determine things like risk, fraud and credit scores. More recently, live Internet data feeds (Facebook, Twitter, FriendFeed, etc.) have become much more commonplace, enabling large-scale realtime knowledge discovery. For me this is a remarkable opportunity; think about how Google Maps revolutionized the geospatial industry by putting satellite imagery and geocentric information into the hands of everyday people. Similarly, we now have the ability to do this within several other industry verticals.

As the volume of social data increases, I feel we will soon see the emergence of "social knowledge discovery services". These services will enable anyone to spot trends, breaking news, as well as threats (economic, physical or otherwise) in real time or even preemptively.

One such example is Google Trends, a service that has taken this concept to the next level by aggregating statistical trends that actually matter. Google's Flu Trends service tracks certain search terms that are indicators of flu activity in a particular area of the US. It uses aggregated Google search data to estimate flu activity in "your" region up to two weeks faster than traditional systems. What's more, this data is freely available for you to download and remix. Think about it: a small local pharmacy could use this sort of data to stock up on flu-related products two weeks before a major outbreak occurs.
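To make the pharmacy idea concrete, here's a minimal Python sketch of the kind of decision a store could automate against a downloaded flu-activity CSV. Everything here is invented for illustration: the column names, region name and threshold are assumptions, not the actual Flu Trends download format.

```python
import csv
import io

# Hypothetical weekly flu-activity estimates for one region, loosely in
# the spirit of a downloadable Flu Trends CSV (names are made up).
SAMPLE_CSV = """week,region,flu_estimate
2008-11-02,New England,1.2
2008-11-09,New England,1.9
2008-11-16,New England,3.4
2008-11-23,New England,5.1
"""

def should_stock_up(csv_text, region, threshold=3.0):
    """Return True if the latest flu estimate for a region crosses the
    threshold -- the early-warning signal a pharmacy might act on."""
    rows = [r for r in csv.DictReader(io.StringIO(csv_text))
            if r["region"] == region]
    rows.sort(key=lambda r: r["week"])  # ISO dates sort chronologically
    return float(rows[-1]["flu_estimate"]) >= threshold

print(should_stock_up(SAMPLE_CSV, "New England"))  # True
```

The interesting part isn't the code, it's that the decision rule runs on public data that simply wasn't available to a corner pharmacy before.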

The next big opportunity will be in "data mining" all the unstructured data found within the "social web" of the Internet. The emergence of social knowledge discovery services will enable anyone to identify trends within publicly available social data sets. Through the use of programming models such as Google's MapReduce, we now have the opportunity to identify key social patterns and preemptively target those opportunities.
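As a toy illustration of the map/reduce idea applied to social data, here's a tiny Python sketch that counts term frequency across a handful of hypothetical feed messages. A real MapReduce job distributes exactly these two phases across a cluster; here both run in one process.

```python
from collections import Counter
from itertools import chain

# Hypothetical social-feed messages (invented for illustration).
messages = [
    "breaking: mumbai attacks reported near the harbour",
    "mumbai updates flooding in on twitter",
    "more mumbai eyewitness photos on flickr",
]

def map_phase(msg):
    # "Map": emit a (term, 1) pair for every term in the message.
    return ((term, 1) for term in msg.lower().split())

def reduce_phase(pairs):
    # "Reduce": sum the counts for each distinct term.
    counts = Counter()
    for term, n in pairs:
        counts[term] += n
    return counts

counts = reduce_phase(chain.from_iterable(map_phase(m) for m in messages))
print(counts.most_common(1))  # [('mumbai', 3)]
```

Swap the word counter for sentiment, location or link extraction and you have the skeleton of a trend-spotting service.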

Wednesday, November 26, 2008

The Battle for the future of the Internet

Since Khaz banned me from the Google Cloud Computing group back in September (a group I helped create), I've been missing a cloud community where I could share my ideas. Recently my cloud computing interoperability group has started to take off, and I finally have a community outlet for my ideas on cloud computing, standards and interoperability.

In a recent post I asked the question "If not a traditional XML Schema, what other approaches may give us equal or greater flexibility?" To this fairly generic and broad question I received some very thought-provoking responses.

Here is an excerpt from the discussion. (Join the group to engage in the conversation)

Thanks for all the great insights. I also agree that the last thing we need is more standards; if we do this right, the standards will organically emerge over time. My goals for a Unified Cloud Interface (UCI) are fairly simple, although my ambitions are much larger.

The mission is this: Cloud interoperability for the purposes of reducing cross cloud complexity.

I completely agree with Paul and others: let's not re-invent the wheel, boil the ocean (insert your own metaphor). Whether it's OWL, RDF, SNMP or whatever, we have a significant amount of material to use as the basis for what we're trying to accomplish.

We must focus on the core aspects of simplicity, extensibility and scalability / decentralization in looking at this opportunity.

Whether or not XMPP is powerful enough seems, at this point, somewhat secondary. I'd use TCP as an analogy for our dilemma. TCP is arguably not the most scalable, secure or efficient protocol, but its simplicity was its ultimate advantage. The Internet works because any piece of it can fail dramatically without affecting the Internet at large; this is because of a decentralized, fault tolerant architecture, an architecture that assumes failure. There are numerous messaging platforms and protocols to choose from, but none of them seem to address decentralization and extensibility to the extent that XMPP does. Is XMPP perfect? Probably not, but for our purposes it's more than adequate.

I envision a communication protocol that takes into consideration a future that may be vastly different from today's Internet landscape. In some ways my ambition for UCI is to enable a global computing environment that was never previously possible: a technology landscape where everything and anything is web enabled.

Yes, I have big ambitions; it is not often we find ourselves in the midst of a true paradigm shift. This is our opportunity to lose.

Trusting the Cloud

Ian Rae has written an interesting piece asking "Is cloud computing stable in bad weather?" Below are my comments.
Ian, your post reads like the doom and gloom forecasts we saw about cloud computing back in 2007. Cloud computing is as much about trust as it is about efficiency. The real question you should be asking is: do I trust Microsoft, Amazon or even AT&T to manage my infrastructure better than an internal data center team? In my rather biased opinion, the answer for the most part is yes.

Also, about your comment on outages: the question isn't will the cloud provider go down, because it certainly will. The real question is how do I enable a hybrid cloud environment that assumes failure and handles it gracefully. (This is one of my main motivators for a unified cloud interface standard)
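To sketch what "assumes failure" might look like in practice, here's a minimal Python example of a broker that tries a list of cloud endpoints in order and falls back gracefully. The provider records and launch function are purely hypothetical, not any vendor's API.

```python
# Minimal "assume failure" broker sketch: attempt each cloud in
# preference order, fall back to the next on failure.
class ProviderDown(Exception):
    pass

def launch_on(provider, workload):
    """Stand-in for a real provider API call (hypothetical)."""
    if provider["up"]:
        return f"{workload} running on {provider['name']}"
    raise ProviderDown(provider["name"])

def launch_with_failover(providers, workload):
    for p in providers:
        try:
            return launch_on(p, workload)
        except ProviderDown:
            continue  # assume failure; move on to the next cloud
    raise RuntimeError("all clouds unavailable")

providers = [{"name": "ec2", "up": False},       # external cloud is down
             {"name": "internal", "up": True}]   # fall back internally
print(launch_with_failover(providers, "web-tier"))
# web-tier running on internal
```

The point of a unified interface is that the loop above could speak to every provider the same way, instead of one adapter per vendor.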

Lastly, as someone who has built a business on the so-called private cloud platform, I'd say the idea of a quarantined, shared internal cloud, aka private cloud, is an oxymoron. Cloud computing is about using resources wherever and whenever, internally or externally.

Unified Cloud Interface: To Schema or not to Schema

My post the other day about creating an XMPP-based unified cloud interface has generated a lot of interest (thank you Dave @ Cnet). One point that has been raised by several people is in regards to the proposed usage of an XML schema and whether a predefined model makes any sense. A few of you also said to look at a more "RESTful" architecture, which in my opinion is not mutually exclusive with an XML schema. Several have pointed me to the SNMP protocol and its object model as a good example. SNMP uses a strict verb discipline in tandem with the protocol's small operator set, and the 'resources' are addressed with a uniform global scheme of object identifiers.
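For readers who haven't worked with SNMP, here's a toy Python sketch of what that discipline could look like applied to cloud resources: a tiny, fixed operator set (get/set) plus uniform hierarchical identifiers, instead of a sprawling per-vendor API. The identifier scheme is invented to show the idea, not a proposal.

```python
# Hypothetical "management information base" for a cloud, keyed by
# hierarchical, SNMP-OID-style identifiers (all names invented).
store = {
    "cloud.compute.instance.1.state": "running",
    "cloud.compute.instance.1.cpus": 2,
}

# The entire operator set: two verbs, uniformly applied to everything.
def get(oid):
    return store[oid]

def set_(oid, value):
    store[oid] = value
    return value

set_("cloud.compute.instance.1.cpus", 4)     # resize an instance
print(get("cloud.compute.instance.1.cpus"))  # 4
```

Adding a new resource type means adding identifiers, not verbs, which is exactly why the SNMP model keeps coming up in these discussions.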

Yet another suggestion was to look at the Resource Description Framework (RDF) as the basis for UCI. What I find interesting about the RDF data model is that it is based upon the idea of making statements about Web resources in the form of subject-predicate-object expressions. I also found its use of statement reification and context potentially very useful, although RDF brings us back to the usage of schemas.
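As a quick illustration of the subject-predicate-object model, here's a toy Python triple store describing cloud resources. The URIs and predicates are made up for illustration; a real system would use an RDF library and proper vocabularies.

```python
# A toy triple store: each statement is (subject, predicate, object),
# the core of the RDF data model. All identifiers here are invented.
triples = [
    ("ex:vm-42", "ex:hostedOn", "ex:amazon-ec2"),
    ("ex:vm-42", "ex:state", "running"),
    ("ex:amazon-ec2", "rdf:type", "ex:CloudProvider"),
]

def query(s=None, p=None, o=None):
    """Match triples on any combination of fields (None = wildcard)."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

print(query(s="ex:vm-42"))   # every statement about the VM
print(query(p="rdf:type"))   # every typed resource in the store
```

The appeal for cloud interoperability is that any provider can add new statements about a resource without anyone having agreed on a rigid record layout up front.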

So I'd like to propose a question. If not a traditional XML Schema, what other approaches may give us equal or greater flexibility?
Join the conversation at the Cloud Interoperability Forum on Google Groups

Tuesday, November 25, 2008

Cloud Computing: Weapons of mass disruption

I gave my lecture today at the University of Toronto, and like most of my events lately, I only made it through my first three slides. I seem to have a knack for creating an interactive discussion, and today's un-lecture was no different.

The group consisted mostly of computer science post-grad students, and they asked me some fairly intriguing questions. In one of my random off-topic rants, I described cloud computing as one of the biggest disruptive technologies to emerge this decade. Now I'm not sure if I'm starting to believe my own hype or if there was actually more to the story. I know one thing: I need to stop using the word "paradigm". But then again, it did spark some great dialog. Afterward, for some reason, the idea of cloud computing being a weapon of mass disruption came to mind, but unfortunately I didn't have a chance to use the term in my presentation today.

Another example I gave was in describing the movement away from the traditional single-tenant desktop or server environment to a decentralized, Internet-centric one. The idea of the network, or the Internet, as the computer also seemed to strike a chord with the audience. This is a concept I still truly believe in. The idea of a hybrid computing environment, where some software aspects remain on your desktop and others are farmed out to the cloud, seems to resonate with a lot of the people I've been talking to lately. Microsoft's Photosynth is a prime example; they refer to this as Software + Services, and it actually makes a lot of sense. Microsoft describes their Software + Services philosophy as "a combination of local software and Internet services interacting with one another. Software makes services better and services make software better. And by bringing together the best of both worlds, we maximize choice, flexibility and capabilities for our customers." Microsoft, at least from a "talk is cheap" point of view, seems to get it.

At the end of the day, I think I enjoy interacting with people, whether it's at a conference or in a more intimate university lecture. Now I see what I missed out on by not going to college or university: an interactive forum for discussion. If you're interested in getting me to speak at your school, please get in touch.

Monday, November 24, 2008

IBM gives Cloud Startups an Exit Strategy

IBM announced today a series of cloud computing related products and services. To be frank, I didn't see anything particularly groundbreaking in the announcement; it was what they didn't say that I found most interesting. For me, what IBM actually announced today was that we now have a major acquisition partner for the various cloud computing startups being created. More simply, on the business plan page titled "exit strategy", you can now include IBM.

Part of this strategy can be seen in IBM's cloud benchmarking program, which offers a Resilient Cloud logo that they describe as a confidence booster for enterprise customers who are interested in shifting services to the cloud but concerned about reliability. On the surface, this program would appear to be a nice, easy way for IBM to test out new cloud wares, acquiring the most promising startups before anyone else.

I certainly hope there is a secondary agenda, because on its own, IBM's Resilient Cloud logo is about as useful as lipstick on a pig.

Cloud Standardization: Unified Cloud Interface (UCI)

Today I submitted my first email to the XMPP standards list. I thought I'd share my post with the readers of my blog.
A few months ago a number of us came together to create "The Cloud Computing Interoperability Forum". The purpose of this group is to discuss the creation of a common cloud computing interface. The group is made up of some of the largest cloud-related vendors and startups, who all share the goal of cloud interoperability as well as reducing cross-cloud complexity.

I'd like to take a moment to explain my cloud interoperability ideas. After various conversations, our concept is starting to take shape and is based on what I'm calling the "unified cloud interface" (aka cloud broker). The cloud broker will serve as a common interface for interaction with remote platforms, systems, networks, data, identity, applications and services. A common set of cloud definitions will enable vendors to exchange management information between remote cloud providers.

The unified cloud interface (UCI) or cloud broker will be composed of a specification and a schema. The schema provides the actual model descriptions, while the specification defines the details for integration with other management models. UCI will be implemented as an extension to the Extensible Messaging and Presence Protocol (XMPP), specifically as an XMPP Extension Protocol, or XEP.
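To make the XEP idea a little more concrete, here's a Python sketch that builds a hypothetical UCI request as the payload of an XMPP <iq/> stanza. The "urn:uci:compute:0" namespace, the broker address and the element names are all invented for illustration; defining the real schema is precisely the work ahead of the forum.

```python
import xml.etree.ElementTree as ET

# Hypothetical UCI request: ask a cloud broker to describe an EC2
# instance, carried as an XMPP info/query ("iq") stanza payload.
iq = ET.Element("iq", {"type": "get",
                       "to": "broker.cloud.example.com",
                       "id": "uci-1"})
query = ET.SubElement(iq, "query", {"xmlns": "urn:uci:compute:0"})
ET.SubElement(query, "instance", {"provider": "ec2",
                                  "action": "describe"})

print(ET.tostring(iq, encoding="unicode"))
```

Because XMPP already handles identity, presence and asynchronous routing, the extension only has to define the vocabulary inside the stanza, which is what makes the XEP route attractive.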

The unified cloud model will address both platform-as-a-service offerings, such as Google App Engine and Azure, as well as infrastructure cloud platforms such as Amazon EC2. Ultimately this model will enable a decentralized yet extensible hybrid cloud computing environment with a focus on secure, global, asynchronous communication.

Once we are in general agreement on the draft proposal, it will be submitted to the XMPP Standards Foundation for approval as an XMPP Extension Protocol and presented at the IEEE International Workshop on Cloud Computing (Cloud 2009) being held May 18-21, 2009, in Shanghai, China.

My draft is based on a combination of work being done in conjunction with XMPP, CIM, XAM and several other standardization efforts.

Comments welcome.
