Wednesday, December 31, 2008

Who invented the term cloud computing?

Interesting post by John Willis over at johnmwillis.com on the topic of "who coined the term cloud computing?" In his post he asks who actually invented the term.

He points to an August 2006 search engine conference where Eric Schmidt of Google described their approach to SaaS as cloud computing. I think this was the first high-profile usage of the term, where not just "cloud" but "cloud computing" was used to refer to SaaS, and since it was in the context of Google, the term picked up the PaaS/IaaS connotations associated with the Google way of managing data centers and infrastructure. http://www.google.com/press/podium/ses2006.html (One common story indicates that Schmidt took the opportunity to use the term "cloud computing" in an attempt to steal some of the thunder from the Amazon Elastic Compute Cloud, which was also launching later that same month in 2006, a classic example of Google "FUD".)

In my conversations with Amazon folks in the spring of 2006, they had already begun referring to their "secret" utility computing project as an elastic compute cloud. So the term was already being discussed fairly broadly, both privately within Amazon and externally with people like me.

According to my editor on the forthcoming "Cloud Computing: A Strategy Guide," Michael Loukides at O'Reilly, the use of "cloud" at O'Reilly as a metaphor for the Internet dates back to at least 1992, which is pretty close to the start of O'Reilly's publishing on networking topics. He goes on to say the idea of a "cloud" was already in common use then.

By 2006 the term cloud had become something of a catchphrase of the year; every blog, social widget, and web-based application seemed to have some kind of cloud angle, only back then it referred to a visual cloud of words, typically organized by size and frequency. So applying the term cloud to something other than the visual would have fit well into the hype cycle found within social applications in 2006.

In doing my research for the cloud guide, I think I have found the first public usage of the term "cloud" as a metaphor for the Internet, in a paper published by MIT in 1996. As a side note, this article outlines most of the concepts that have become central to cloud computing. Certainly worth a read.

The Self-governing Internet: Coordination by Design
http://ccs.mit.edu/papers/CCSWP197/CCSWP197.html

See Figure 1. The Internet's Confederation Approach
(Search for "Figure 1" in your browser to see the image.)
--
Massachusetts Institute of Technology

Sharon Eisner Gillett
Research Affiliate
Center for Coordination Science

Sloan School of Management
Mitchell Kapor
Adjunct Professor
Media Arts and Sciences

Prepared for:
Coordination and Administration of the Internet
Workshop at Kennedy School of Government, Harvard University
September 8-10, 1996

Appearing in:
Coordination of the Internet, edited by Brian Kahin and James Keller,
MIT Press, 1997

Tuesday, December 30, 2008

ElasticHosts Releases Extensive API

Richard Davies, over at ElasticHosts in the UK, has sent me details of their latest API release for the ElasticHosts cloud infrastructure service. Davies noted that they feel the API semantics are more important than syntax, and went on to say they intend to produce text/plain, application/xml, application/json and application/x-www-form-urlencoded variants of the same set of commands so that users can pick whichever seems easiest. (I should also note we use a similar approach with the Enomaly ECP API.) They seem to put a lot of emphasis on the API; I guess we'll see if customers embrace their use of an extensive RESTful API.

Here are some more details of the release:

ElasticHosts, the second European cloud infrastructure and the world's first public cloud based upon KVM*, today released its API and remote management tool. These enable users to automatically upload server images and control servers running within ElasticHosts' flexible infrastructure, supplementing the existing web management interface.

ElasticHosts provides flexible server capacity in the UK for scalable web hosting and on-demand burst computing uses such as development/test, batch compute, overflow capacity and disaster recovery.
The ElasticHosts HTTP API is described in detail at:

http://www.elastichosts.com/products/api

The API is implemented in a straightforward ReST style, and is accompanied by a simple command line tool enabling Unix users to control ElasticHosts infrastructure from their own scripts without writing any code.
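To make the content-negotiation idea concrete, here's a minimal sketch (in Python) of what picking a representation via the Accept header can look like. The host, paths and Basic-auth scheme below are placeholders for illustration, not the actual ElasticHosts endpoints, which are documented at the URL above.

```python
import base64
import urllib.request

BASE = "https://api.example-cloud.test"   # placeholder host, not the real ElasticHosts endpoint

def fetch(path, media_type="application/json", user="USER-UUID", secret="API-KEY"):
    """Fetch the same logical resource in whichever representation the caller prefers."""
    req = urllib.request.Request(BASE + path)
    req.add_header("Accept", media_type)   # text/plain, application/xml or application/json
    # HTTP Basic auth is shown purely as an illustration of authenticated access
    token = base64.b64encode(f"{user}:{secret}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

# Same command, three different wire formats (paths are illustrative only):
# print(fetch("/servers", "text/plain"))
# print(fetch("/servers", "application/xml"))
# print(fetch("/servers", "application/json"))
```

The appeal of this approach is that the semantics stay identical across formats; only the serialization changes.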

Monday, December 29, 2008

The United Federation of Cloud Providers

A fundamental challenge in creating and managing a globally decentralized cloud computing environment is maintaining consistent connectivity between various untrusted components that are capable of self-organization while remaining fault tolerant. In the next few years a key opportunity for the emerging cloud industry will be defining a federated cloud ecosystem by connecting multiple cloud computing providers using an agreed-upon standard or interface. In this post I will examine some of the work being done in cloud federation, ranging from adaptive authentication to modern P2P botnets.

Cloud computing is undoubtedly a hot topic these days; lately it seems just about everyone is claiming to be a cloud of some sort. At Enomaly our focus is on the so-called "cloud enablers": those daring enough to go out and create their very own computing clouds, either privately or publicly. In our work it has become obvious that the real problems are not in building these large clouds, but in maintaining them. Let me put it this way: deploying 50,000 machines is relatively straightforward; updating 50,000 machines, or worse yet taking back control after a security exploit, is not.

There are a number of organizations looking into solving the problem of cloud federation. Traditionally, a lot of this work has been done in the grid space. More recently, a notable research project being conducted by Microsoft, called the "Geneva Framework," has been focusing on some of the issues surrounding cloud federation. Geneva is described as a claims-based access platform and is said to help simplify access to applications and other systems with an open and interoperable claims-based model.

In case you're not familiar with the claims authentication model, the general idea is to use claims about a user, such as age or group membership, that are passed along to obtain access to the cloud environment and to systems integrated with that environment. Claims could be built dynamically, picking up information about users and validating existing claims via a trusted source as the user traverses multiple cloud environments. More simply, the concept allows multiple providers to seamlessly interact with one another. The model enables developers to incorporate various authentication models that work with any corporate identity system, including Active Directory, LDAPv3-based directories, application-specific databases and newer user-centric identity models such as LiveID, OpenID and InfoCard systems, including Microsoft's CardSpace and Novell's Digital Me. For Microsoft, authentication seems to be at the heart of their interoperability focus. For anyone more Microsoft-inclined, Geneva is certainly worth a closer look.
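As a rough illustration of the claims idea (this is a generic sketch, not the Geneva API), the code below models a user as a set of attested claims and a relying cloud service that grants access based on those claims rather than on a local account. The issuer names and the group check are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    type: str     # e.g. "age", "group", "email"
    value: str
    issuer: str   # the identity provider that attested to this claim

TRUSTED_ISSUERS = {"idp.example.com", "openid.example.org"}   # invented for the example

def is_trusted(issuer: str) -> bool:
    # A real federation would verify a signed token against the identity provider
    # (Active Directory, OpenID, LiveID, ...) rather than check a name in a set.
    return issuer in TRUSTED_ISSUERS

def authorize(claims: list, required_group: str) -> bool:
    """Grant access if any trusted issuer asserts membership in the required group."""
    return any(
        c.type == "group" and c.value == required_group and is_trusted(c.issuer)
        for c in claims
    )

# Usage: the relying cloud never needs a local account, only the claims
token = [Claim("group", "cloud-admins", "idp.example.com"), Claim("age", "34", "idp.example.com")]
print(authorize(token, "cloud-admins"))   # True
```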

For the more academically focused, I recommend reading a recent paper titled Decentralized Overlay for Federation of Enterprise Clouds, published by Rajiv Ranjan and Rajkumar Buyya at the University of Melbourne. The team outlines the need for cloud decentralization and federation to create a globalized cloud platform. In the paper they say that a distributed cloud configuration should be considered decentralized if none of the components in the system is more important than the others: if one component fails, it is neither more nor less harmful to the system than the failure of any other component. The paper also outlines the opportunities to use peer-to-peer (P2P) protocols as the basis for these decentralized systems.

The paper is very relevant given the latest discussions occurring in the cloud interoperability realm. It outlines several key problem areas:
  • Large scale – composed of distributed components (services, nodes, applications, users, virtualized computers) that combine together to form a massive environment. These days enterprise Clouds consisting of hundreds of thousands of computing nodes are common (Amazon EC2, Google App Engine, Microsoft Live Mesh) and hence federating them together leads to a massive-scale environment;
  • Resource contention – driven by the resource demand pattern and a lack of cooperation among end-users' applications, a particular set of resources can get swamped with excessive workload, which significantly undermines the overall utility delivered by the system;
  • Dynamic – the components can leave and join the system at will.
Another topic of the paper is the challenge of designing and developing a decentralized, scalable, self-organizing, and federated cloud computing system, as well as applying the characteristics of peer-to-peer resource protocols, in what they call Aneka-Federation. (I've tried to find other references to Aneka, but it seems to be a term used solely within the University of Melbourne; interesting nonetheless.)
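To show what "no component is more important than any other" can look like in code, here is a minimal consistent-hashing sketch in the spirit of the P2P overlays the paper discusses; it is a generic illustration, not the Aneka-Federation implementation. Any provider can join or leave at will, and losing one node only remaps the keys that node owned.

```python
import hashlib
from bisect import bisect_right

def _h(key: str) -> int:
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class Ring:
    """Minimal consistent-hash ring: providers join and leave at will, with no coordinator."""
    def __init__(self):
        self._points = []                      # sorted (hash, node) pairs

    def join(self, node: str):
        self._points.append((_h(node), node))
        self._points.sort()

    def leave(self, node: str):
        self._points = [(h, n) for h, n in self._points if n != node]

    def owner(self, key: str) -> str:
        hashes = [h for h, _ in self._points]
        idx = bisect_right(hashes, _h(key)) % len(self._points)
        return self._points[idx][1]

# Usage: the failure of one provider is no worse than the failure of any other
ring = Ring()
for n in ("cloud-a", "cloud-b", "cloud-c"):
    ring.join(n)
print(ring.owner("workload-42"))
ring.leave("cloud-b")
print(ring.owner("workload-42"))   # only keys owned by cloud-b move
```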

Also interesting were the problems they outline with earlier distributed computing projects such as SETI@home, saying these systems do not provide any support for multi-application programming models, a major factor driving some of the more traditional users of grid technologies toward cloud computing.

One of the questions large-scale cloud computing raises is not how to manage a few thousand machines, but how to manage a few hundred thousand. A lot of the work being done in decentralized cloud computing can be traced back to the emergence of modern botnets. A recent paper titled "An Advanced Hybrid Peer-to-Peer Botnet" by Ping Wang, Sherri Sparks, and Cliff C. Zou at the University of Central Florida outlines some of the "opportunities" by examining the creation of a hybrid P2P botnet.

In the paper the UCF team outlines the problems encountered by P2P botnets, which appear surprisingly similar to those being encountered by the cloud computing community. The paper lays out the following practical challenges faced by botmasters: (1) How to generate a robust botnet capable of maintaining control of its remaining bots even after a substantial portion of the botnet population has been removed by defenders? (2) How to prevent significant exposure of the network topology when some bots are captured by defenders? (3) How to easily monitor and obtain the complete information of a botnet by its botmaster? (4) How to prevent (or make it harder for) defenders to detect bots via their communication traffic patterns? In addition, the design should also consider many network-related issues such as dynamic or private IP addresses and the diurnal online/offline property of bots. A very interesting read.

I am not condoning the use of botnets, but architecturally speaking we can learn a lot from our more criminally focused colleagues. Don't kid yourselves, they're already looking at ways to take control of your cloud and federation will be a key aspect in how you protect yourself and your users from being taken for a ride.

Tuesday, December 23, 2008

Cloud Computing For a Cause

Yesterday I had a great conversation with Romanus Berg of Ashoka, the world's largest network of social entrepreneurs and a long time customer of Enomaly. In the conversation we discussed some of the opportunities that cloud computing may offer as a social empowerment tool in emerging economies.

In case you've never heard of Ashoka: founded by Bill Drayton, it was one of the first groups to popularize the concept of social entrepreneurship. At its core, social entrepreneurship is about businesses that recognize a social problem and use entrepreneurial principles to organize, create, and manage a venture to make social change. Whereas a business entrepreneur typically measures performance in profit and return, a social entrepreneur assesses success in terms of the impact s/he has on society.

Ashoka acts as a kind of people aggregator, finding the diamonds in the rough, those one-in-a-million people who effect major changes within their local society. Ashoka believes that we are in the midst of a rare, fundamental structural change in society: citizens and citizen groups are beginning to operate with the same entrepreneurial and competitive skill that has driven business ahead over the last three centuries. People all around the world are no longer sitting passively idle; they are beginning to see that change can happen and that they can make it happen.

During the conversation it became clear that both Romanus and I shared a similar vision for cloud computing, not just as a method of increasing IT productivity but as an empowerment tool for "under-enabled" people, people who until recently have never had the opportunities that modern information technology has afforded the western world.

This concept of a socially conscious cloud struck a chord with me. In many emerging economies, technology can skip whole generations; for example, the move in China to mobile phones, skipping past more traditional forms of telephony. Similarly, cloud computing may represent a major opportunity to bring both knowledge and modern computing technology through the use of low-cost wireless networks and mobile devices connected to regionalized clouds.

Cloud computing as a socially conscious enterprise may not just be limited to emerging economies; it may also enable the latest eco-trend of green technology. Global cloud computing represents the opportunity to make adjustments based on your carbon footprint. Imagine being able to adjust your computing energy consumption based on which provider of electricity is using the best and greenest sources.

Like Ashoka, I believe we are in the midst of a rare, fundamental structural change. At the end of the day, cloud computing is about choice; mix in a social consciousness and we start to see one of the bigger socio-technological revolutions of our time, the information revolution.

Sunday, December 21, 2008

Cloud Interoperability and The Neutrality Paradox

Recently during some behind the scenes conversations, the question of neutrality within the cloud interoperability movement was raised.

The question of cloud interoperability does open an interesting point when looking at the concept of neutrality, in particular for those in a position to influence its outcome. At the heart of this debate was my question of whether anyone or anything can be truly neutral. Or is the very act of neutrality in itself the basis for some other secondary agenda? (Think of Switzerland in the Second World War.) For this reason I have come to believe that the very idea of neutrality is in itself a paradox.

Let me begin by stating my obvious biases. I have been working toward the basic tenets of cloud computing for more than 5 years, something I originally referred to as elastic computing. As part of this vision, I saw the opportunity to connect a global network of computing capacity providers using common interfaces as well as (potentially) standardized interchange formats.

As many of you know, I am the founder of a Toronto-based technology company, Enomaly Inc., which focuses on the creation of an "elastic computing" platform. The platform is intended to bridge the need for better utilization of enterprise compute capacity (private cloud) with the opportunities of a limitless, global, on-demand ecosystem of cloud computing providers. The idea is to enable a global hybrid data center environment. In a lot of ways, my mission of creating a consensus for the standardized exchange of compute capacity is driven by a fundamental vision for both my company and the greater cloud community. To say interoperable cloud computing is something I'm passionate about would be putting it mildly. Just ask my friends, family or colleagues and they will tell you I am obsessed.

Recently, I created a CCIF Mission & Goals page, a kind of constitution which outlines the group's core mission. As part of that constitution I included a paragraph stating what we're not. In the document I stated the following: "The CCIF will not condone any use of a particular technology for the purposes of market dominance and or advancement of any one particular vendor, industry or agenda. Whenever possible the CCIP will emphasis the use of open, patent free and or vendor neutral technical solutions." This statement directly addresses some of the concerns around vendor bias, but doesn't address bias within the organizational structure of the group dynamic.

Back to the concept of neutrality as a cloud vendor: as interest in cloud interoperability has begun to gain momentum, it has become clear that these activities have more to do with realpolitik and less to do with idealism. A question was posed: should a vendor (big or small) be in a position to lead the conversation on the topic of cloud interoperability? Or would a more impartial, neutral party be in a better position to drive the agenda forward?

The very fact that this question is being raised is indicative of the success of both the greater cloud computing industry and our efforts to drive some industry consensus around the topic of interoperability. So regardless of my future involvement, my objectives have been set into motion. Which is a good thing.

My next thought was whether there is really such a thing as a truly neutral entity. To be truly neutral would require a level of apathy that may ultimately result in a failed endeavour. Or to put it another way, to be neutral means being indifferent to the logical outcome, which also means there is nothing at stake to motivate an individual or group to work towards its stated goals. My more pragmatic self can't help but feel that even a potentially "more neutral" party could have some ulterior motives; we all have our agendas. And I'm ok with that.

I'm not ok with those who don't admit to them. The first step in creating a fair and balanced interoperable cloud ecosystem is to in fact state our biases and take steps to offset them by including a broad swath of the greater cloud community, big or small, vendor, analyst or journalist.

So my question is this, how should we handle the concept of neutrality and does it matter?

Friday, December 19, 2008

Stephen Pollack leaves PlateSpin but shares the love

Infoweek has a nice little article about Stephen Pollack, founder of PlateSpin and now a notable advisor to virtualization startups Embotics and Enomaly. David Marshall asks, will they be as successful as PlateSpin? I certainly hope so. Stephen's insights as a booter have been extremely useful and timely.

Pollack stated, "Embotics and Enomaly are examples of young companies yearning to explore some of the new areas emerging in systems management - if I can help them succeed in some way using my experiences, I'm happy to do so. If there are companies who might need some assistance from someone like myself, I'm happy to explore that with them."

Read Article Here

(Please Note, I originally, mistakenly posted that Stephen is an advisor at DynamicOps, this is not the case.)

Thursday, December 18, 2008

Keeping Grid and Cloud Computing Separate

There is nothing I enjoy more than stirring up controversy, and lately it seems to find me wherever I go. As some of you know, I've been attempting to organize a Wall Street Cloud Interoperability Forum for this March. In my planning, I seem to have stirred up some great debates among the more established high performance computing and grid computing organizations who have focused on the banking and finance industry.

In a conversation yesterday, a prominent Wall Street grid advocate bluntly said that no bank would use external (cloud) resources anytime in the near future and that a uniform cloud interface was a hopeless cause (I'm paraphrasing). This is pretty much the exact opposite of what I'm hearing from several high-level IT folks within the banking industry, and quite a different story from what Craig Lee, the president of the OGF, has said to me. According to Lee, the intersection of cloud and grid computing is a critical area for the OGF and one of their main focuses going forward. I think the hardest part is going to be convincing the established HPC/Grid community that cloud computing and grid are not one and the same.

(I should also note we're putting on a joint CloudCamp with the OGF at their OGF25/EGEE User Forum in Sicily this March)

To shed some light on whether banks are truly embracing cloud computing: I've had the opportunity to speak with many IT managers on Wall Street over the last few months. In one such conversation earlier this week with a major German bank, I was told that most banks are being forced to do a lot more with a lot less money. They see the ability to outsource the so-called low-hanging fruit such as test/dev, performance testing, continuity, as well as various customer-facing services as immediate opportunities that will allow them to better utilize existing infrastructure while moving less important, yet capacity-intensive applications to the cloud. The concept of a unified cloud interface (UCI) that enables a single (standardized) programmatic interface to both internal and external resources is also of particular interest; I've received no fewer than half a dozen calls on the subject of UCI in the last two weeks alone. The fact is, I've heard the same story from several of the largest banks on Wall Street: it isn't that cloud computing may happen, but that it is happening, and there is a lack of tools to enable this migration/transformation. The opportunity would seem to be in addressing the intersection of legacy IT infrastructures with the hybrid infrastructure of the future. I'm not alone in this thinking; notably, Cisco and Sun have pointed to similar opportunities. (I should note, both Cisco and Sun are involved in my cloud interop activities.)

I think what bugs me about the HPC/Grid guys is that they seem to be driven by the academic motivation of using distributed computing as a kind of universal problem solver. It's been almost 10 years since grid technology was first discussed and yet we seem no closer to any kind of wide industry adoption. In a little over two years, cloud computing has been able to do something grid has never been able to accomplish: become the next big thing. Yes, I know hype doesn't make a sustainable industry, but it certainly helps.

The grid computing community's problem is that it doesn't appear to be driven by saving money or any broad, practical business benefits, but instead by solving very particular problem sets. I feel that one of the major benefits of cloud computing is that it addresses a much broader opportunity, one that touches upon just about every aspect of a modern IT environment. I also find it interesting that unlike the Grid/HPC community, which is predominately driven by academia, cloud computing is being driven by business and, more importantly, actual business problems: businesses who for the first time are able to tap into near limitless opportunities driven by cheap and easy access to compute capacity.

Wednesday, December 17, 2008

Cloud Mining & Enterprise Social Messaging (Enterprise Twitter)

Got sent a few intriguing links today. The first touches upon the concept of data mining the "social web", a topic I wrote about a few weeks back in my post "The Industrial Revolution of Data". A new site called StockTwits is described as an open, community-powered idea and information service for investments. Users can eavesdrop on traders and investors, or contribute to the conversation and build their reputation as savvy market wizards. The service takes finance-related data - using Twitter as the content production platform - and structures it by stock, user, reputation, etc.

StockTwits is a prime example of the concept I call "social knowledge discovery". These services will give anyone the ability to spot trends, breaking news, as well as threats (economic, physical or otherwise) in real time or even preemptively. Think of it as a social cloud mining platform.
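A toy version of that kind of social mining is easy to sketch: pull in a stream of messages, extract the $TICKER "cashtags", and count which symbols are trending. The sample messages below are invented; a real service would read from the Twitter API and weigh results by user reputation.

```python
import re
from collections import Counter

CASHTAG = re.compile(r"\$([A-Z]{1,5})\b")

def trending(messages):
    """Count ticker-symbol mentions across a stream of short messages."""
    counts = Counter()
    for msg in messages:
        counts.update(CASHTAG.findall(msg))
    return counts.most_common()

# Usage with invented sample data
sample = [
    "Loading up on $AAPL before the holidays",
    "$GOOG and $AAPL both look oversold",
    "Anyone watching $C today?",
]
print(trending(sample))   # [('AAPL', 2), ('GOOG', 1), ('C', 1)]
```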

Also launched today is a service called StatTweets, which uses Twitter as a notification mechanism. The service allows users to get news, live game scores, standings, rankings, point spread updates, and other stats for their favorite basketball or football team, all from Twitter.

Are you interested in creating your very own social knowledge discovery service? Now you can thanks to a new open source project by the Apache foundation called Apache ESME or Enterprise Social Messaging Experiment. ESME is described as a secure and highly scalable microsharing and micromessaging platform that allows people to discover and meet one another and get controlled access to other sources of information, all in a business process context.

Some of ESME's present features include an Adobe AIR client, a web client, an extensive set of built-in actions, and login via OpenID. Planned features include federation scenarios, groups, ERP notifications, prioritization, and local time display for users. Most of these planned features don't currently exist in Twitter, so it will be interesting to see if Twitter follows suit.

Tuesday, December 16, 2008

Google Cloud Economics, The Quota

Big news coming out of Google today: they have announced some new upcoming features for the Google App Engine platform. There really hasn't been much news coming out of Google App Engine lately, so today's news is all the more exciting.

First up is a feature they're describing as a Downtime Notify Google Group, which is a dashboard to announce scheduled downtime and explain any issues that affect App Engine applications. This is similar to other trouble dashboards such as Amazon's.

The more interesting feature is the Quota Details Dashboard, which enables a granular utilization view for each Google App Engine application. In case you've never used Google App Engine, one of the more unique aspects of the system is its use of a set of resource quotas that control how much CPU, bandwidth, and storage space a Google App Engine application can consume. Currently all of this usage is free, but in the future developers will be allowed to purchase additional usage beyond these free quotas. Until today, potential customers of the system haven't had a real pricing model outlined beyond some vague pricing details, so it's been rather difficult to actually model a business on Google App Engine.

Google shed some light on the subject today. According to a blog post:

"You'll be able to buy capacity based on a daily budget for your app, similar to the way AdWords spending works. You'll have fine-grained control over this daily budget so you can apply it across CPU, network bandwidth, disk storage, and email as you see fit. You'll only pay for the resources your app actually uses, not to exceed the budget you set."

What I find most interesting about all this news is Google's use of a quota system for tiered billing of cloud resources. In this quota model Google can attract a large user base by offering a "free," frictionless entry point, while the more successful applications may choose to pay for enhanced services via fixed or per-day usage quotas. In my opinion this may very well be the first step in the creation of a true commodity-based exchange of computing services and capacity.
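Since Google hadn't published actual rates at the time of writing, here is a sketch with invented numbers of how a daily-budget quota model like the one described might work: usage within the free quota costs nothing, overage draws down the budget, and the budget acts as a hard ceiling.

```python
# All quotas and rates below are invented for illustration; Google had not
# published real pricing at the time this was written.
FREE_QUOTA = {"cpu_hours": 46.0, "bandwidth_gb": 10.0, "storage_gb": 1.0}
RATE = {"cpu_hours": 0.10, "bandwidth_gb": 0.12, "storage_gb": 0.15}   # dollars per unit

def daily_charge(usage: dict, budget: float) -> float:
    """Charge only for usage above the free quota, never exceeding the daily budget."""
    billable = 0.0
    for resource, used in usage.items():
        over = max(0.0, used - FREE_QUOTA.get(resource, 0.0))
        billable += over * RATE[resource]
    return min(billable, budget)

# Usage: an app that bursts past its free CPU quota on a busy day
print(daily_charge({"cpu_hours": 60.0, "bandwidth_gb": 8.0, "storage_gb": 0.5}, budget=5.00))   # 1.40
```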

In a post on GigaOm earlier today, Alistair Croll said that "Google is carefully launching an ecosystem for developers to build and sell their cloud-based software." I did some digging on the subject, but other than Croll's comments, I couldn't find anything concrete coming from anyone at Google. It does make sense, though: recent reports indicate that there is a market emerging around Google Apps, with more than 10 million active users and some 3,000 new companies signing up each day, according to Matthew Glotzbach, product management director of Google Enterprise. So it would seem Google App Engine might be the ideal tool to power Google's emerging cloud application ecosystem. Think along the lines of a Cloud Application Marketplace. It's the same basic concept Facebook pursued with its API or Salesforce did with AppExchange; in Google's case, users may now have a global turnkey channel that can reach small businesses easily. Very cool.

I for one am looking forward to seeing how this all plays out.

CCIF on Twitter (@cloudforum)

A lot of people have been asking whether or not we have a Twitter account set up for the Cloud Computing Interoperability Forum (CCIF). Well, I'm happy to say we do now.

If you use twitter and are interested in interacting with other CCIF members, please follow @cloudforum (http://twitter.com/cloudforum)

Once you're following the @cloudforum twitter feed you can then add fellow CCIF members at http://twitter.com/cloudforum/followers

I'll try to post the first group of members using the cloudforum account.

You may also want to "retweet"
Announcing the @cloudforum twitter feed on Cloud Computing Interoperability. Please follow & Retweet.

Monday, December 15, 2008

Fire Eagle + XMPP realtime notifications

I've been busy working on my cloud interoperability chapter for the upcoming O'Reilly cloud computing guide, so I thought I'd take a break to share a cool new location-centric, XMPP-based app I was sent today.

Seth Fitzsimmons over at Yahoo has published Fire Eagle's new XMPP PubSub endpoint, which is up and running at fireeagle.com. It's a great XMPP implementation and does a nice job of showcasing a subset of XEP-0060 (Publish-Subscribe) components, including:

* subscribe (signed using OAuth (XEP-0235) w/ User-specific access tokens)
* unsubscribe (signed using OAuth w/ User-specific access tokens)
* subscriptions (signed using OAuth w/ General Purpose access tokens)
* event notifications (location updates)

Seth also wrote some (Ruby) client code for it as well as some additional instructions for subscribing to individual users' nodes:
http://github.com/mojodna/fire-hydrant/tree/master

Event notifications contain XML-formatted location information, the same as you would get when querying for an individual user's location.
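For anyone who hasn't looked at XEP-0060 before, a subscription request is just a pubsub IQ stanza. The sketch below builds a generic one with Python's standard library; the JIDs and node name are placeholders, and a real Fire Eagle request would additionally carry the OAuth signature (XEP-0235) mentioned above.

```python
import xml.etree.ElementTree as ET

def subscribe_stanza(service_jid: str, node: str, subscriber_jid: str) -> str:
    """Build a generic XEP-0060 subscribe request (OAuth signing per XEP-0235 not shown)."""
    iq = ET.Element("iq", {"type": "set", "to": service_jid, "id": "sub1"})
    pubsub = ET.SubElement(iq, "pubsub", {"xmlns": "http://jabber.org/protocol/pubsub"})
    ET.SubElement(pubsub, "subscribe", {"node": node, "jid": subscriber_jid})
    return ET.tostring(iq, encoding="unicode")

# Usage with placeholder identifiers (not the real Fire Eagle node naming)
print(subscribe_stanza("fireeagle.com", "example-user-node", "me@example.com/hydrant"))
```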

According to his post, you don't need to use Fire Hydrant to receive updates (use an XMPP library in your language of choice that supports PubSub event notifications), but you'll probably want to use switchboard to create appropriate subscriptions, as it's the only tool he knows of that supports OAuth for XMPP requests.

Switchboard is here:
http://github.com/mojodna/switchboard/tree/master

Original Post > http://tech.groups.yahoo.com/group/fireeagle/message/946

On-Demand Enterprise (formerly GRIDtoday) suspending operations

A sad day for cloud publications: On-Demand Enterprise (formerly GRIDtoday) is suspending operations. For anyone involved in any sort of distributed computing over the last few years, GRIDtoday has been a key source of information and insights into the industry. They will be sorely missed.

I wish Derrick Harris and the rest of the team over at On-Demand Enterprise the best of luck in any future endeavors.

Sunday, December 14, 2008

The 2009 Cloud Experience (Repost)

As part of an ongoing series of posts, David Marshall at VMblog asked me to prognosticate on some of the key IT/cloud trends for 2009. His question was simple: "What do virtualization executives think about 2009?" Below is my take on things for 2009.

For archival purposes, I'm reposting the article on ElasticVapor. To see the original post, please visit VMblog.com
--------------

The 2009 Cloud Experience

The year 2008 has been a big one for cloud computing. In a rather dramatic shift, we've seen the term "cloud" enter the collective IT consciousness. It seems almost every technology vendor, big or small, has embraced the movement to the cloud as a software and marketing philosophy. Generally, cloud computing can be viewed loosely as an Internet-centric software and services model. Specifically for the data center, cloud computing represents the opportunity to apply some of the characteristics of the decentralized, fault tolerant nature of the Internet.

Up until now software companies didn't have to concern themselves with the concepts of "scale" or adaptive infrastructure capacity. In the traditional 90's desktop software model, users of a given software were responsible for the installation, administration and operation of a particular application, typically on a single computer. Each desktop formed a "capacity silo" somewhat separate from the greater world around it. Now with the rising popularity of Internet based applications, the need for remote capacity to handle an ever expanding, always connected online user-base is increasingly becoming a crucial aspect of any modern software architecture. In 2008 we even witnessed the typically desktop-centric Microsoft jump into the fray outlining a vision for Software + Services (S+S) which they describe as local software and Internet services interacting with one another.

In looking at 2009 I feel the greater opportunity will be in the merger of the next generation of "virtualized" data centers with a global pool of cloud providers to create scalable hybrid infrastructures geared toward an optimal user experience. Cisco's Chief Technology Officer, Padmasree Warrior, recently referred to this merger as the "intra-cloud". Warrior outlined Cisco's vision for the cloud, saying that cloud computing will evolve from private and stand-alone clouds to hybrid clouds, which allow movement of applications and services between clouds, and finally to a federated "intra-cloud". She elaborated on the concept at a conference in November: "We will have to move to an 'intra-cloud,' with federation for application information to move around. It's not much different from the way the Internet evolved." With Cisco's hybrid vision, I believe the end-user experience will be the key factor driving the usage of cloud computing both internally and externally.

To enable this hybrid future, cloud interoperability will be front and center in 2009. Before we can create a truly global cloud environment, a set of unified cloud interfaces will need to be created. One such initiative which I helped create is called "The Cloud Computing Interoperability Forum (CCIF)". The CCIF was formed in order to enable a global cloud computing ecosystem whereby organizations work together for the purposes of wider industry adoption of cloud computing technology and related services. The forum is a little over 3 months old and has grown to almost 400 members encompassing almost every major cloud vendor. A key focus of the forum is on the creation of a common, agreed-upon framework/ontology that enables two or more cloud platforms to exchange information in a unified manner. To help accomplish this, the CCIF in 2009 will be working on the creation of a Unified Cloud Interface (UCI), or cloud broker. The cloud broker will serve as an open interface for interaction with remote cloud platforms, systems, networks, data, identity, applications and services. A common set of cloud definitions will enable vendors to exchange management information between remote cloud providers. By enabling industry-wide participation, the CCIF is helping to create a truly interoperable global cloud, which should also improve the cloud-centric user experience.

In the startup space, a number of notable companies have shown up to address the concept of "scale" by creating load-based cloud monitoring and performance tools. For the most part, these tools have been focused on cloud infrastructure environments such as Amazon's Elastic Compute Cloud. There have also been an increasing number of cloud providers appearing on a regional level in Europe and Asia, providing resources for the first time to scale on a geographical basis. Thanks in part to interop efforts as well as advancements in wide-area computing, we may soon be able to scale not only on superficial aspects such as load, but on practical aspects like how fast my application loads for users in the UK.

The quality of a user experience as the basis for scaling & managing your infrastructure will be a key metric in 2009. The general problem is a given cloud vendor/provider may be living up to the terms of their SLA's contract language, thus rating high in quality of service, but the actual users may be very unhappy because of a poor user experience. In a lot of ways the traditional SLA is becoming somewhat meaningless in a service focused IT environment. With the emergence of global cloud computing, we have the opportunity to build an adaptive infrastructure environment focused on the key metric that matters most, the end user's experience while using your application. Whether servicing an internal business unit within an enterprise or a group of customers accessing a website, ensuring an optimal experience for those users will be the reason they will keep coming back and ultimately what will define a successful business.

A key player in the emerging cloud as a user-experience enabler is Microsoft with their Microsoft UC Quality of Experience (QoE) program. Although initially focused toward VOIP applications, I feel the core concepts work well for the cloud. The MS QoE program is described as a comprehensive, user-focused approach to perceived quality centered on the actual users, and incorporating all significant influencing parameters in optimizing the user experience. Real time metrics of the actual experience help in measuring, quantifying and monitoring at all times the actual experience using live analytical information of the user's perceived subjective quality of the experience. QoE may very well be a key component in Microsoft's plan to dominate the cloud. It resonates well with what I'm hearing in the community at large. As trusted enablers, companies like IBM, Cisco, Sun, and Microsoft are in the prime spot to address this current trend toward "trusted" cloud computing providers. They have the know-how, global networks and most importantly the budgets to make this a reality.

Possibly the biggest opportunity in the coming year will be for those who embrace the "cloud stack", a computing stack that takes a real-time look at a given user's experience: a computing environment that can adapt as well as autonomously take corrective actions to continuously optimize the user's subjective experience on any network, anywhere in the world, at any time. Sun Microsystems' John Gage was right when he famously said, "The Network Is the Computer." It's just taken us 25 years to realize that the "Internet" (the cloud) is the computer.

Friday, December 12, 2008

Enomaly ECP 2.1.1 Released

We're happy to announce that Enomaly ECP 2.1.1 has hit the shelves (or SourceForge, as it were...). This is a bug fix and security release, so don't expect to see a whole lot of new functionality - 2.2 is coming, hopefully shortly in the new year with lots of new features.

This maintenance release fixes a potential security exploit in the startup script's temporary file handling as well as the following bug fixes:

* Randomly generated MAC addresses are now written to the machine XML at provision time.
* The available system memory is now checked against the required memory for new machines at provision time.
* Fixed a bug regarding the valet extension module not properly checking the hypervisor type.
* Fixed a bug that prevented a machine's XML definition from being edited.
* Fixed several misc. bugs in the valet extension module.
* Added messages to the interface stating the required extension modules.

Thanks to everyone who provided feedback during the development process!

Grab the latest release today.

Wednesday, December 10, 2008

Enomaly Appoints PlateSpin Founder Stephen Pollack to Advisory Board

I'm happy to announce that Stephen Pollack, founder and CEO of Platespin has joined the Enomaly Board of Advisors. Stephen led PlateSpin, a provider of data center workload management solutions, to a highly publicized $205M exit when it was acquired by Novell in Feb 2008. Before joining PlateSpin, Stephen held senior management positions at FloNetwork Inc., Fulcrum Technologies (now part of Hummingbird) and NCR. He brings over 25 years of IT experience in marketing, sales, development and lifecycle support of successful businesses.

I'm very excited to have Stephen advising us. With his extensive experience in launching and managing companies from startup to acquisition, he provides us with a wealth of knowledge that is second to none. I'm looking forward to working with Stephen over the coming months as we build Enomaly into a world class organization. As a self made and extremely successful Canadian entrepreneur, Stephen is the kind of role model you aspire to become. Needless to say we are very fortunate to have Stephen as part of our team.

Cloud Costing: Fixed Costs vs Variable Costs & CAPEX Vs OPEX

As I continue my world tour of the cloud, I keep having the same conversation about the cost advantages of cloud computing in a tough economic climate. The general consensus seems to be that cloud computing's real benefit in a bad economy is its ability to move so-called "fixed costs" to variable or operational costs. So I thought I'd take a moment to investigate some various cloud costing terms.

Starting at the bottom are fixed costs, which are basically business expenses that are not dependent on the level of production or sales. They tend to be time-related, such as salaries or rents paid per month. You can think of these costs as the baseline operational expenses of running your business, or in the case of IT, the cost of maintaining your core technical infrastructure.

In a sense, a key economic driver for the use of remote capacity (cloud computing) is its ability to convert a fixed cost into a variable cost, one which is volume-related (paid per use/quantity). This seems to be the industry-standard way to justify the use of cloud computing.

Then there is the question of capital expenditures (CAPEX), which are expenditures creating future benefits. For example, the money Amazon is spending on building out their web services infrastructure. One way to look at cloud computing is as a method of deferring CAPEX. The theory is generally stated like this: by utilizing someone else's infrastructure, you can choose to pay only if and when you need additional capacity, as a variable cost. It's also interesting to note that capital expenditures can have some tax benefits because they can be amortized or depreciated over the life of the assets in question. So potentially the ability to create a hybrid cloud model that uses both existing resources and remote resources may actually be very compelling to larger established businesses, mostly for tax benefits. For example, a large hosting firm may choose to recycle used dedicated servers into their cloud offering, thus making money on both sides of CAPEX and variable costs. (This assumes the business is profitable.)

The next motivating factor is the operational expenditure, or OPEX, which is an ongoing cost for running a product, business, or infrastructure. I like the Wikipedia OPEX example: the purchase of a photocopier is the CAPEX, and the annual paper and toner cost is the OPEX. In the cloud world OPEX is considered the great unknown variable; it may be cheaper upfront to use a cloud provider, thus reducing your CAPEX, but long term it may cost more to manage your remote assets because of increased software complexity, security, as well as a variety of other reasons.
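A quick back-of-the-envelope comparison makes the trade-off concrete. All of the figures below are invented, but the structure is the point: owning capacity is a fixed cost you carry regardless of utilization, while cloud capacity is a variable cost that tracks actual usage.

```python
# Invented numbers purely for illustration
SERVER_CAPEX = 3000.0          # purchase price, amortized over 36 months
SERVER_OPEX_MONTHLY = 120.0    # power, space and admin time per month
CLOUD_RATE_HOURLY = 0.40       # pay-per-use rate for an equivalent instance

def owned_monthly_cost() -> float:
    return SERVER_CAPEX / 36 + SERVER_OPEX_MONTHLY

def cloud_monthly_cost(hours_used: float) -> float:
    return hours_used * CLOUD_RATE_HOURLY

# 720 hours means the machine runs flat out all month
for hours in (100, 300, 720):
    print(hours, round(cloud_monthly_cost(hours), 2), round(owned_monthly_cost(), 2))
```

With these made-up numbers the cloud wins at low utilization and owning wins once the machine runs flat out, which is exactly why the hybrid model discussed above is attractive.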

This brings us to total cost of ownership (TCO), which is a financial estimate designed to help consumers and enterprise managers assess direct and indirect costs. TCO is typically the way an IT vendor validates their particular costs over a competitor's, such as Microsoft vs. Linux. The argument Microsoft used was that Linux is cheaper upfront but much more expensive to manage down the road, and therefore the TCO of Microsoft is much lower than that of Linux. TCO is very difficult to quantify and can be easily "gamed", and therefore should be looked at rather sceptically.

At the end of the day, for most businesses it's cheaper to use someone else's infrastructure than it is to use your own. At its heart, this is the key reason to use the cloud.

Tuesday, December 9, 2008

Botnets : Electronic Weapons of Mass Destruction

Slashdot is reporting on a paper in the magazine Policy Review titled "The botnet peril" by the recent Permanent Undersecretary of Defense for Estonia.

In the article, the authors say botnets should be designated as 'eWMDs', electronic weapons of mass destruction. I personally couldn't agree more. With modern advancements in decentralized command and control systems combined with simplistic desktop vulnerability detection, creating a botnet of several thousand zombie PCs is a matter of sniffing your local ISP's network. (I actually wrote an article for Wired earlier this year on how to build your own botnet, but it was declined for some unknown reason.)

The article raises some great points, including the concept of cyber warfare as asymmetric warfare: more is at risk for us than for most of our potential adversaries (criminals, terrorists or rogue governments). Another asymmetric aspect is that the victims of cyber warfare may never be able to determine the identity of their actual attacker. Thus, America cannot meet this threat by relying solely upon a strategy of retaliation, or even offensive operations in general. (How do you attack a decentralized, multi-nation organism? Think digital Al-Qaeda.)

I also found this bit interesting "The U.S. government has a similar duty, but on a larger scale. Because botnets represent such a real threat to our domestic cyberspace and all the assets that those Internet-accessible computers control, it is a vital national interest to secure the domestic Internet." (Basically, the weakest link in almost all critical infrastructure is now IT connectivity. Those who control the network control the world.)

They give pretty good detail of the Russian botnet attack on Estonia last year. Interestingly, a similar two-phased tactic was used on Georgia this summer. The initial so-called hacktivist phase was apparently used as PR cover, or a diversion, for a later botnet phase. It took some time for the international media to realize that the actual nature of the attack was the ensuing, more sophisticated, organized, and devastating botnet attack, which brought down critical pieces of the government's ability to communicate (email, phones, etc.).

As many readers of my blog are keenly aware, the only real way to deal with this sort of cyberwarfare is to create a proactive botnet defence system, one capable of adapting to prolonged digital bombardment. The next major opportunity for governments and global enterprises will be in the implementation of custom "enterprise botnets". This is not me going out and prognosticating; this is happening today.

The Cloud Poser / Expert

Great post by Daryl Plummer at Gartner titled "Instant Experts Floating in the Cloud". In his post he outlines some of the issues in the advocacy of cloud computing and the rise of the so called "Cloud Expert". He describes this person as "The instant expert is one of those people who seemed to know nothing about a topic two days ago but now sounds like they invented it. It’s the woman who studied all night to learn the difference between the cloud, cloud computing, and cloud services because Daryl Plummer or Reuven Cohen was so eloquent about it. Get the picture? Instant Experts are all around us; and, oddly enough, we need them now more than ever."

He's dead on, and I'm the first to admit I do fall into that category from time to time. I'm the cloud interoperability expert not so much because I have any real experience in cloud standards or because I've participated in any other interoperability groups; it's simply because I have a vested interest in the subject as well as the network of associates to make such an endeavour feasible. Basically I'm learning as I go. As for elastic computing, aka cloud computing: I'm the expert because 5 years ago I was the nut trying to get people to use shared virtual resources. Simply put, I've put in my time.

In a recent Business Week article "Cloud Computing Is No Pipe Dream" Jeffrey Rayport said it well.

"Tech pundits and practitioners alike have spilled lots of ink to hype cloud computing. They'll encourage you to think of it as IT infrastructure on demand—like plugging into the power grid to get electricity, or turning on a faucet to get water, but getting raw computing power instead. To boot, you'll get access to storage capacity and software-based services—and it can scale infinitely, proponents point out.

If this all sounds too good to be true—like so much cold fusion, the now-debunked tabletop nuclear fusion reactor in a bottle—don't be fooled. This time, getting more for less is real. And with the economy of its currently parlous condition, businesses have never needed cloud computing more."

What struck me about what Rayport said was that it is difficult to tell the posers from the legitimate "experts". Moreover, I'm not even sure why we should consider Rayport an expert on the subject. With all the FUD, who do you trust? Random bloggers, providers, vendors?

Daryl Plummer has done a great job of outlining how to spot the pretenders from the contenders.

  • Pretenders want you to know how much they know. Contenders want you to know what you need to know.
  • Pretenders want you to believe they truly understand concepts. Contenders want you to know how concepts relate to other concepts in a specific context.
  • Pretenders spout facts. Contenders deliver insights.
  • Pretenders dismiss differences in meanings and definitions of concepts as “just semantics”. Contenders specify the context of their ideas and the meanings of concepts within that context.
  • Pretenders use their knowledge to reflect glory on their past accomplishments. Contenders use their past accomplishments to inject knowledge into their new ideas.
  • Pretenders can easily be tripped up by a well placed question. Contenders pose well-placed questions.
  • Pretenders can only slice an inch deep. Contenders cut straight to the bone.

Read the whole post here.

Monday, December 8, 2008

Cloud Scaling: Dynamic Vs Auto Scaling

There has been a rather interesting exchange on the O'Reilly blog on the topic of auto-scaling in the cloud. George Reese, founder of Valtira and enStratus (a couple of companies I've never heard of), argues that the concept of auto-scaling is stupid.

George describes auto-scaling as the ability to add and remove capacity in a cloud infrastructure based on actual usage, with no human intervention necessary. He goes on to say that he doesn't like this concept and vigorously talks his customers away from the idea of auto-scaling. All the more interesting considering enStratus describes their service/product as a cloud infrastructure management tool that automates the operation of your web sites and transactional applications in a cloud infrastructure. (Sounds like they specialize in auto-scaling.)

Instead he seems to prefer the concept of dynamic scaling, which he describes as the ability to add and remove capacity in your cloud infrastructure on a whim, ideally because you know your traffic patterns are about to change and you are adjusting accordingly. (If I knew my traffic was going to change, then why would I need an automated scaling system?)

If I am reading George's theory correctly, Automation = Bad, Dynamic = Good. But I think he misses the point to a certain degree. The two are not mutually exclusive.

For me the concept of dynamic scaling is simply the ability to change and adapt your infrastructure. This in itself is a major advancement; until recently the ability to "change" or adapt your infrastructure in a reasonable amount of time was an extremely difficult process. Infrastructure has been a somewhat "static" resource for most traditional data centers. With the advancements in dynamic systems management such as Opsware, BladeLogic and even VMware, you now have the ability to create a fully dynamic IT environment. For a large portion of the world this is a very big step forward.

I think the real issue with George's post is in the word auto. Does he actually mean automatic (capable of operating without external control or intervention)? Or does he mean automation (the act or process of converting the controlling of a machine or device to a more automatic system)? I would say the latter. No scaling operation should be fully or completely automated; it should be governed by a series of controls: rules, policies, quotas and monitors tailored to reduce the need for human operator involvement and to achieve a set of requirements, such as the quality of my users' experience.
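That middle ground, automation under explicit policy rather than a fully hands-off system, is easy to express. The sketch below is a generic illustration (not Enomaly's or enStratus's implementation): an operator-defined policy with thresholds and hard quotas decides how many instances to run, and a human still owns the policy.

```python
from dataclasses import dataclass

@dataclass
class ScalingPolicy:
    min_instances: int = 2        # baseline an operator sets; automation never goes below it
    max_instances: int = 20       # hard quota; automation never goes above it
    scale_up_load: float = 0.75   # add capacity above 75% average load
    scale_down_load: float = 0.25 # remove capacity below 25% average load

def desired_instances(current: int, avg_load: float, p: ScalingPolicy) -> int:
    """Return the instance count the policy asks for; a human still owns the policy."""
    target = current
    if avg_load > p.scale_up_load:
        target = current + 1
    elif avg_load < p.scale_down_load:
        target = current - 1
    return max(p.min_instances, min(p.max_instances, target))

# Usage
policy = ScalingPolicy()
print(desired_instances(current=4, avg_load=0.82, p=policy))   # 5
print(desired_instances(current=4, avg_load=0.10, p=policy))   # 3
```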

I'm personally all for a Dynamic Automated Infrastructure.

Businessweek: U.S. Is Losing Global Cyberwar

Well, I'm back from my whirlwind trip to Boston; I hope to get back to my regular posts this week.

In the meantime I wanted to tell you about a great article in BusinessWeek magazine about how the U.S. is losing the global cyberwar. According to the article, "The U.S. faces a cybersecurity threat of such magnitude that the next President should move quickly to create a Center for Cybersecurity Operations and appoint a special White House advisor to oversee it."

Check out the article here


Saturday, December 6, 2008

Information Week: Master The Cloud

Charles Babcock at Information Week has written a nice post saying that if you're the master of virtualization at home, then you'll eventually master the cloud. He also included the work I've been doing with the Cloud Interoperability Forum.

Read the informationweek post here.

Wednesday, December 3, 2008

The Internet as a Cloud Infrastructure Model

In my never-ending quest to speak to every venture capitalist on the planet, I find myself in Boston this evening, pondering something I said in a few of my investor meetings today.

In one of my famous off-topic VC rants, I described my vision for a unified cloud interface using an analogy of the Internet's self-governing model as the basis of an adaptive enterprise cloud. Funny as it may sound, this is the first time I've used this particular analogy.

The Internet itself would appear to be the perfect model for a "cloud platform". The Internet uses a self-governing model: there is no single administrative entity, and the network must continue to operate in the event of critical failures. By design, the web's core architecture assumes there will be sporadic global failures and can fail gracefully without affecting the Internet as a whole. The Internet exists and functions as a result of the fact that millions of separate services and network providers work independently, using common data transfer protocols to exchange communications and information with one another (which in turn exchange communications and information with still other systems). There is no centralized storage location, control point, or communications channel for the Internet. The very decentralized architecture of the Internet has allowed it to adapt and evolve over time, almost like a living organism.

One of the reasons the Internet works is its open communication protocols; by their very nature they form the ideal model for a cloud coordination tool, exactly the kind of system that can automate the routine 99 percent of computer-to-computer interactions you'd want in a cloud platform. But there is a catch: protocols automate interoperability only if all core Internet service providers agree to use the same ones. (Enter cloud interoperability.)

Open standards are key to the Internet's composition and are a core component of interoperability within a truly distributed command and control structure. The Internet works because anyone who wants to create a web site can freely use the relevant document and network protocol formats (HTML and HTTP, etc.), secure in the knowledge that anyone who wants to look at their web site will expect the same formats. An open standard serves as a common intermediate language, a simplifying approach to the complex coordination problem of allowing anyone to communicate successfully with anyone else.

My random thought for this evening.

Forget about Artificial Intelligence, think Collective Intelligence

Amazon seems to be on a roll these days. After announcing their Amazon Data Sets a few weeks ago, they are back with a very cool new iPhone app. For me it isn't so much that Amazon has released yet another iPhone app (YaiPa), but how they've enabled their customers to tap into what I'm calling the "Collective Intelligence".

Basically, the app lets users take a photograph of any product they see in the real world. (Yes, the world outside the digital one - crazy, I know.) The photos are then uploaded to Amazon and given to a global network of (low-paid) workers via Amazon's Mechanical Turk crowdsourcing platform. For a small fee, a collection of "actual humans" will try to match the photos with products for sale on Amazon.com. According to a New York Times post, the results will not be instantaneous (between 5 minutes and 24 hours).

So why is this cool? Let's revisit the idea of collective intelligence: a shared or group intelligence that emerges from the collaboration and competition of many individuals. Basically this is a subset of the concept of crowdsourcing, which has been around for a while. Until recently there really hasn't been an effective way to "programmatically" tap into the greater community, and more importantly there haven't been many good examples of this being used in the "real world".
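To give a flavour of what "programmatically" tapping into the crowd might look like, here is a hedged Python sketch that posts a task (say, a product photo to identify) to a hypothetical crowdsourcing endpoint and then polls for a human-supplied answer. The URL, fields and reward amount are illustrative placeholders, not Amazon's actual Mechanical Turk API:

    # A sketch of submitting work to a crowd of human workers over HTTP.
    # The endpoint and payload are hypothetical placeholders, not the real
    # Mechanical Turk API; they just illustrate the programmatic pattern.
    import json
    import time
    import urllib.request

    CROWD_API = "https://crowd.example.com/tasks"   # hypothetical service

    def submit_task(question, image_url, reward_cents=5):
        payload = json.dumps({
            "question": question,
            "image_url": image_url,
            "reward_cents": reward_cents,
        }).encode("utf-8")
        req = urllib.request.Request(CROWD_API, data=payload,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["task_id"]

    def wait_for_answer(task_id, poll_seconds=60):
        """Answers arrive on human time - minutes to hours, not milliseconds."""
        while True:
            with urllib.request.urlopen(CROWD_API + "/" + task_id) as resp:
                task = json.loads(resp.read())
            if task.get("status") == "answered":
                return task["answer"]
            time.sleep(poll_seconds)

The latency is the giveaway that there are people, not servers, on the other end - which is precisely what makes it interesting.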

What is interesting is that up until now, traditional applications haven't really been able to autonomically adjust themselves based on physical or digital demands. It has typically required human intervention ("The website is slow, let's add another server"). With the emergence of global cloud computing, these formerly static systems are beginning to become "infrastructure aware"; the capability of automatically adjusting as demands and requirements change in real time is quickly becoming a standard requirement of any modern IT environment. The next logical step as we move toward a technological singularity will be combining a somewhat or fully aware infrastructure with the collective intelligence of humans. I feel this represents a crucial bridge between how intelligent technology will interact with both the digital and physical world around it, and more importantly how it will benefit the people who use it.
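A tiny sketch of that bridge, again with invented names and thresholds: let the infrastructure act on the clear-cut signals by itself, and hand the ambiguous middle ground to a queue of humans of the Mechanical Turk variety:

    # Sketch: the infrastructure acts on unambiguous signals automatically
    # and escalates borderline cases to human judgement. All names and
    # thresholds here are illustrative, not any vendor's actual behaviour.

    def react_to_load(avg_cpu, add_server, ask_humans):
        if avg_cpu > 0.90:
            add_server()                  # unambiguous: act automatically
            return "scaled automatically"
        if 0.60 < avg_cpu <= 0.90:
            # borderline: queue a question for human judgement
            ask_humans("Load is elevated but not critical - scale now or wait?")
            return "escalated to humans"
        return "no action"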

As anyone familiar with my various schemes (from Enomaly's open source elastic computing platform to Cloud Camp's community-driven events to, more recently, my Cloud Interoperability Forum) will know, my guiding principle has always been that technology is changing faster than any single human can adapt, and that no single person or company is as valuable as the community that supports it. I feel the ability to draw on the power of mass collaboration as a point of participation is ultimately more valuable than any workforce I might be able to assemble.

Amazon appears to be one of the few companies that grasp this concept and, best of all, apply it as a kind of corporate mantra: "We are not what we sell but rather the community we sell to".

Tuesday, December 2, 2008

The Cloud OS & The Future of the Operating System

Some exciting developments in the world of cloud operating systems today. Yes, I know, operating environments aren't usually associated with the adjective "exciting," but this news is different.

Good OS has introduced a new operating system for cloud computing, appropriately called "Cloud," which is the successor to the company's Linux-based gOS.

Unlike gOS, Cloud does not open onto a desktop. Instead, it boots directly into a web browser; after booting up, you are greeted with a full-screen browser page that looks like a traditional OS, including shortcuts to cloud applications like Google Docs and Calendar as well as Blogger and YouTube. Cloud's so-called "proprietary application framework" is said to allow you to run client applications, such as Skype or Media Player, opening them in new tabs just like in Windows or Linux. They don't really give any indication of how this is actually accomplished. Fear not - I have an email in to them, and I'll let you know as I find out more.

Back to why this is exciting: Cloud is one of the first OSes to embrace the hybrid cloud OS model, a model that combines the best of both worlds - the use of a local CPU for performance with the scale and seemingly infinite capacity of the cloud. Together this creates a unique opportunity to take on the traditional OSes of the world.

You can also think of Cloud OS as being similar to the Apple iPhone, where certain applications are loaded directly on the phone (CPU) and other application components remain on a server somewhere in the cloud (Internet). This hybrid model seems particularly well suited to emerging economies (think India, China, etc.), where software licensing can be prohibitively expensive. The One Laptop per Child (OLPC) project comes to mind. (OLPC is a non-profit association dedicated to developing a low-cost, connected laptop - a technology that could revolutionize how we educate the world's children.)

While I'm on that theme, I find this hybrid model particularly interesting for "virtual desktop" deployments, where a user may be given a netbook or thin client that contains the user's core identity, favorites, etc., while the majority of the functionality is loaded via the cloud (aka the Internet). I will also say this sounds an awful lot like Microsoft's new Software + Services philosophy: a combination of local software and Internet services interacting with one another.

We seem to be quickly moving toward a future where the operating system acts more like a portal and less like a traditional application stack. In the case of Good OS's Cloud, unlike other cloud frameworks such as Google's Chrome, Cloud is its own operating system that runs alongside an existing OS such as Windows or Linux while securely accessing the CPU. From what I can tell it currently cannot replace the main operating system, but in the future this may very well become the case.

ThinkGOS describes it this way: "Cloud uniquely integrates a web browser with a compressed Linux operating system kernel for immediate access to Internet, integration of browser and rich client applications, and full control of the computer from inside the browser."

They go on to say "Cloud features a beautifully designed browser with an icon dock for shortcuts to favorite apps, tabs for multi-tasking between web and rich client apps, and icons to switch to Windows, power off, and perform other necessary system functions. Users power on their computers, quickly boot into Cloud for Internet and basic applications, and then just power off or boot into Windows for more powerful desktop applications."

In some ways I think they're missing some key opportunities. Rather than switching to Windows for key applications such as Office, or even gaming, I think the opportunity may be to stream the applications "on demand" using remote desktop technologies. If you'd like to use a more traditional app, you could do so directly within the browser-based OS, but in a fully quarantined VM or VNC connection. Another option would be to use a technology such as Wine, which is an open source implementation of the Windows API on top of X, OpenGL, and Unix. (That's a whole other post.)
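As a back-of-the-envelope sketch of that idea (the executable path and VNC host are placeholders, and wine and vncviewer are assumed to be installed), a launcher inside the browser-based OS could hand a "traditional" app either to Wine locally or to a quarantined remote session over VNC:

    # Sketch of an on-demand launcher for traditional apps from a browser OS.
    # The executable path and VNC host are placeholders; wine and vncviewer
    # are assumed to be available on the local system.
    import subprocess

    def launch_app(app, mode="remote"):
        if mode == "local":
            # Run the Windows binary locally through the Wine compatibility layer.
            subprocess.Popen(["wine", "/apps/" + app + ".exe"])
        else:
            # Stream a quarantined remote session instead of running it locally.
            subprocess.Popen(["vncviewer", "appserver.example.com:1"])

    # launch_app("office", mode="remote")

Either path keeps the browser-based OS as the single front door, which is really the whole point of the hybrid model.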

Currently the cloud operating system space is in its infancy, but with companies like Microsoft entering with their Azure framework, it would seem that the future of the OS lies in the cloud.

Monday, December 1, 2008

EU launches Green Code of Conduct for Data Centers

I seem to have missed this major announcement last week: the EU has launched a "Green" Code of Conduct for Data Centers. The Code of Conduct was created in response to increasing energy consumption in data centers and the need to reduce the related environmental, economic and energy supply impacts. It was developed in collaboration with the British Computer Society, AMD, APC, Dell, Fujitsu, Gartner, HP, IBM, Intel, and many others.

According to the PDF overview, "The Code of Conduct has been created in response to increasing energy consumption in data centres and the need to reduce the related environmental, economic and energy supply security impacts. The aim is to inform and stimulate data centre operators and owners to reduce energy consumption in a cost-effective manner without hampering the mission critical function of data centres. The Code of Conduct aims to achieve this by improving understanding of energy demand within the data centre, raising awareness, and recommending energy efficient best practice and targets."

Those who choose to abide by the voluntary Code of Conduct will have to implement energy efficiency best practices, meet minimum procurement standards, and report energy consumption every year.

Learn more here
--

CCIF Mission & Goals (Draft Outline)

Since I posted my "Unified Cloud Interface (UCI)" proposal last week, to say there has been a significant amount of interest in cloud interoperability would be putting it mildly. Over the last week I've received dozens of emails and calls from people looking to get involved with our initiative. So I figured it would make sense to outline some of our objectives for the forum before we discuss any technical implementation aspects.

(The following is a draft and should be looked at as a suggestion open to discussion)

The Cloud Computing Interoperability Forum
Mission and Goals (Draft Dec 1st 2008)

CCIF Goals
The Cloud Computing Interoperability Forum (CCIF) was formed to enable a global cloud computing ecosystem whereby organizations are able to work together seamlessly for the purpose of wider industry adoption of cloud computing technology and related services. A key focus will be placed on the creation of a commonly agreed-upon framework / ontology that enables two or more cloud platforms to exchange information in a unified manner.

Mission
CCIF is an open, vendor-neutral, not-for-profit community of technology advocates and consumers dedicated to driving the rapid adoption of global cloud computing services. CCIF shall accomplish this through the use of open forums (physical and virtual) focused on building community consensus, exploring emerging trends, and advocating best practices / reference architectures for the purposes of standardized cloud computing.

Community Engagement
By bringing a global community of vendors, researchers, architects and end users together within an open forum, business and science requirements can be translated into best practices and, where appropriate, relevant and timely industry standards that enable interoperability and integration within and across organizational boundaries. This process is facilitated by online mailing lists, forums and regular "in person" meetings held several times a year that bring the broad community together in workshops and/or group meetings. All CCIF activity will be underpinned by a web presence that enables communication within the various CCIF working groups and the sharing of their work with the broader community.

What we're not
The CCIF will not condone the use of any particular technology for the purposes of market dominance and/or the advancement of any one particular vendor, industry or agenda. Whenever possible the CCIF will emphasize the use of open, patent-free and/or vendor-neutral technical solutions.

Board of Directors and Advisory Board
There has been some interest in the creation of both a CCIF board of directors and an advisory board. I'm open to the idea, but I don't think it's necessary at this time.

If you are interested in getting involved in the discussion, please join our forum here.
---

New England Cloud User Group Meeting (Tuesday)

Word travels fast. I'll be in Boston this week for several meetings, but unfortunately I will be missing the New England Cloud User Group Meeting on Tuesday evening, as I don't arrive until Wednesday afternoon. If you're in the Boston area on Tuesday and are interested in learning more about cloud computing, you should go to the New England Cloud User Group!

They'll have Bret Hartman, the CTO of RSA, talking about security as it relates to the cloud, as well as Vikram Kumar and Prasad Thammaneni, CTO and CEO of Pixily, giving the start-up perspective on leveraging S3 and EC2 exclusively.

Dec 2nd, 6-9pm
Papa Razzi restaurant
16 Washington St (on route 16 off of route 128)
Wellesley, MA 02481
(781) 235-4747
