Friday, February 27, 2009

Examining Cloud Compatibility, Portability and Interoperability

Over the last few months there has been growing momentum around cloud interoperability. Lately it seems that a few in the industry have gotten a little confused about the differences between Cloud Compatibility, Portability and Interoperability. So I thought it might be time to take a closer look at the various terms, how they relate and how they differ.

First, let's start with Cloud Interoperability. As I've described before, Cloud Interoperability refers to the ability of multiple cloud platforms to work together, or inter-operate. A key driver of an interoperable cloud computing ecosystem is to eliminate what I call proprietary "API propagation", whereby each new cloud service provides its own unique set of web services and application programming interfaces. Simply put, the goal of Cloud Interoperability is to make it easier to use multiple cloud providers that share a common set of application interfaces as well as a consensus on the terminology / taxonomies that describe them.

The next logical question is "how?", which brings us to Cloud Compatibility & Portability, something I would describe as a subset of Interoperability. What I find interesting about Cloud Computing is that unlike traditional CPU-centric software development, in a cloud computing infrastructure the underlying hardware is typically abstracted to the point that it no longer matters what type of hardware is powering your cloud. Cloud Computing is about uniformity -- all your systems acting as one. Within this vision for the cloud we now have the opportunity to uniformly interact with a virtual representation of the infrastructure stack, one that can look and act any way we choose (see my Multiverse & Metaverse posts). Whether you're using a component-centric virtual environment (IaaS) or an application fabric (PaaS) is completely secondary. What matters now is ensuring that an application has a common method of programmatic interaction with the underlying resources & services. More simply, Cloud Compatibility means your application and data will always work the same way regardless of the cloud provider or platform, internally or externally, open or closed.

Lastly there is "Cloud Portability": the ability for application components to be easily moved and reused regardless of the provider, location, operating system, storage, format or API. The prerequisite for portability is a generalized abstraction between the application logic, data and system interfaces. When you're targeting several cloud platforms with the same application, portability is the key issue for development & operational cost reduction, as well as a critical requirement when trying to avoid cloud lock-in.

One example that addresses several of these concepts is the Open Virtualization Format (OVF). The format is an "open" standard proposed by the DMTF for packaging and distributing virtual appliances, or more generally software to be run in virtual machines. The proposed specification describes an "open, secure, portable, efficient and extensible format for the packaging and distribution of software to be run in virtual machines". The OVF standard is not tied to any particular hypervisor or processor architecture. Like most standards, the major problem is that it's only useful if all platforms / clouds actually support the OVF format. That's where a "semantic abstraction layer" comes in handy: with semantic cloud abstraction it doesn't matter whether every provider actually implements the standard, because the abstraction acts as a unifying agent, adapting to the similarities all cloud APIs share while abstracting away their unique aspects.
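To make the packaging side of this a little more concrete, below is a minimal Python sketch (standard library only) that reads the XML descriptor inside an OVF package and lists the virtual systems and referenced disk files it declares. The file name and the simplified namespace handling are my own assumptions for illustration; a real package also carries manifests, certificates and the disk images themselves.

    import xml.etree.ElementTree as ET

    def local(tag):
        # Strip the XML namespace so the sketch works regardless of the
        # exact OVF schema version the descriptor declares.
        return tag.split('}', 1)[-1]

    tree = ET.parse('appliance.ovf')   # hypothetical descriptor file name
    root = tree.getroot()

    for element in root.iter():
        name = local(element.tag)
        if name == 'VirtualSystem':
            ids = [v for k, v in element.attrib.items() if local(k) == 'id']
            print("Virtual system: %s" % (ids[0] if ids else '(unnamed)'))
        elif name == 'File':
            hrefs = [v for k, v in element.attrib.items() if local(k) == 'href']
            print("  referenced file: %s" % (hrefs[0] if hrefs else '?'))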

I believe that without a common consensus first, and eventually a set of cloud standards, the cloud ecosystem will likely devolve into a series of proprietary clouds giving cloud users little hope of ever leaving. Cloud customers will be locked in to a particular cloud vendor, unable to use another cloud without substantial switching costs. At the end of the day, this is the problem we're trying to solve at the CCIF.

Thursday, February 26, 2009

Draft Agenda for DC Cloud Computing Interoperability Workshop

March 23, Hyatt Regency Crystal City, Virginia

Morning Session: Standards Groups (8:00 - 12:00)

8:00 - 8:30 NIST - Tim Grance (NIST Cloud Program Manager), Peter Mell (Senior Computer Scientist)

8:30 - 9:00 Cloud Computing Interoperability Forum - Reuven Cohen (Leader; Founder & Chief Technologist, Enomaly Inc)
“Defining an End-to-End Cloud with Web Architecture"

9:00 - 9:30 Open Cloud Consortium - Robert Grossman (Leader)

"An Overview of the Open Cloud Consortium"

9:30 - 10:00 Open Grid Forum - Craig Lee (President)
"Distributed Computing Scenarios: What this Means for Interoperability"

10:30 - 11:00 Network Centric Operations Industry Consortium - Krishna Sankar (Leader of Cloud Computing Team)

"Vectors in Federal Cloud Computing"

11:00 - 12:00 Interactive Panel and Q&A with all Standards Group Speakers
=======================

Afternoon Session: Companies (1:00 - 6:00)


1:00 - 1:30 Cisco - Krishna Sankar (Distinguished Engineer)

"A Hitchhiker’s Guide to InterCloud”

1:30 - 2:00 IBM - TBD

TBD

2:00 - 2:30 Microsoft - Susie Adams (CTO of the Federal Sales organization)
"Interoperability: A Necessary Condition for Success in the Cloud"

2:30 - 3:00 Salesforce.com - Dan Burton (Senior Vice President, Global Public Policy)
"Salesforce.com and the Interoperable Cloud"

3:30 - 4:00 Sun - Scott Radeztsky (Chief Architect for Americas Systems Engineering)

"Real Clouds for Real People"

4:00 - 4:30 Elastra - Stuart Charlton (Chief Software Architect)
"Defining an End-to-End Cloud with Web Architecture"

4:30 - 5:30 Interactive Panel and Q&A with all Company Speakers

5:30 - 6:00 Wrap-up discussions



Did ZumoDrive trademark "Hybrid Cloud"?

I was just looking at a cloud storage product called ZumoDrive and noticed a little TM next to their usage of "Hybrid Cloud". I couldn't find any trademark filing to back up the claim.

According to their website, ZumoDrive is a product of Zecter, a Silicon Valley startup creating technology that transforms the way people use cloud storage.

They claim that "With HybridCloudTM storage solution, our product ZumoDrive brings reliable, scalable cloud storage to consumers. ZumoDrive brings cloud storage to computers and smartphones without sacrificing the benefits of local storage".

Zecter was founded in 2007. The company has raised capital from Y Combinator and Tandem Entrepreneurs.


Wednesday, February 25, 2009

Breaking News: Salesforce.com First Cloud Computing Company to Achieve Fiscal Year Revenue of One Billion Dollars

Congrats to Salesforce.com on being the first 1 billion dollar Cloud company!

Salesforce.com Announces Record Fiscal Fourth Quarter Results
  • First Enterprise Cloud Computing Company to Achieve Fiscal Year Revenue of One Billion Dollars
  • Record Quarterly Revenue of $290 Million, up 34% Year-Over-Year
  • GAAP EPS of $0.11, up 83% Year-Over-Year
  • Net Customers Increase 3,600 in the Quarter to 55,400
  • Net Paying Subscribers Increase 400K Year-Over-Year to Surpass 1.5 Million
  • Operating Cash Flow of $76 Million for Quarter; $230 Million for Fiscal Year
  • Total Cash and Marketable Securities of $883 Million, up $213 Million Year-Over-Year
  • Company Updates FY10 Revenue Guidance to $1.30 - $1.33 Billion

VMware's vCloud API still hazy, ambitions are clear

VMware keeps making noise around its forthcoming vCloud API initiative. According to an announcement yesterday, VMware has developed a new API aimed at offering service providers the ability to easily migrate between public and private VMware-based clouds. Like the previous announcement, details are sketchy, other than to say a "select group" of partners are using it. When asked to comment or share a copy of the vCloud API, the companies involved indicated they were covered by an NDA. Those companies include SAVVIS, SunGard, Telefonica, Telstra and Terremark.

According to my source, the vCloud API will be released "publicly very shortly". Funny, that same source said the same thing back in November.

Actually, what I found most interesting was the quote from VMware's Dan Chu, vice president of emerging products and markets, on the Network World website. In the post he outlines "that one of the drivers for the API was the lack of standardisation for cloud computing interoperability." He goes on to say that the company was looking to build on its work with the Distributed Management Task Force (DMTF) on the Open Virtualization Format (OVF). "The industry needs to take a big step towards interoperability. We hope to work with the appropriate bodies to move forward to establish a common standard."

As for being interoperable, VMware is saying that its various management tools will only work on top of the VMware hypervisor. In other words, physical servers and servers virtualised by Microsoft, Citrix or any other vendor will not be compatible with the vCloud initiative. Summarized: we're interoperable as long as it's VMware.

According to the Network World website, VMware has already submitted a draft of its vCloud API "to enable consistent mobility, provisioning, management, and service assurance of applications running in internal and external clouds." (What!? Did I miss something here?)

What concerns me about this is that Winston Bumpus is both President of the DMTF and Director of Standards Architecture at VMware. This would seem to mean that Bumpus has the ability to submit draft API specifications directly to the DMTF without any outside public review. He in effect has the ability to define cloud standards directly, giving VMware a "somewhat" unfair advantage in defining the future direction of standards-compliant cloud platforms, VMware-based or otherwise. If the DMTF accepts the vCloud API specification, that would mean VMware essentially owns the cloud API standard -- a standard that no one other than a select group of VMware's partners has ever actually had a chance to review.

I'll keep you updated as more details emerge.

Friday, February 20, 2009

Cloudy Standards & The Skinny on Cloud Lock-in

Great post over at the Computerworld blog on the subject of cloud API standardization. In the post, Jeff Boles says "there only needs to be standardization around a few core "activities" that are targeted more at interoperability than uniform services and structure." He goes on to say "what I care about, as an end user, is really carrying out a couple of key steps, in the same way, regardless of who the provider is."

I also found interesting his insight into what he describes as the ability to "crawl a web of potential services and see which APIs could be replaced or duplicated by other services. Conceptually again, this could mean distributing your application and data across many different providers."

Check out the whole article at http://blogs.computerworld.com/cloudy_clouds_and_standards

While I'm sharing interesting links, the folks over at RightScale have posted an article on cloud lock-in. In it they say "The higher the cloud layer you operate in, the greater the lock-in. Lock-in occurs with this vendor to the extent it is prohibitively expensive or time-consuming to run your application elsewhere or move your data elsewhere. Whether this "elsewhere" is another vendor or whether it is your own infrastructure is not important: if you can't move, or it costs a lot or takes a long time to do so, you're locked-in."

Check out the entire post at http://blog.rightscale.com/2009/02/19/the-skinny-on-cloud-lock-in/

Thursday, February 19, 2009

Joint CCIF / OMG Cloud Interoperability Workshop on March 23 in DC

We are pleased to announce the Cloud Computing Interoperability Forum's participation in an all day Workshop entitled "Strategies And Technologies for Cloud Computing Interoperability (SATCCI)" to be held in conjunction with the Object Management Group (OMG) March Technical Meeting on March 23, 2009, Hyatt Regency Crystal City, Arlington, VA, USA.

I'll personally be presenting my thoughts on the creation of an open unified cloud interface and the opportunity for unification between existing IT and cloud based infrastructures (aka hybrid computing).

The SATCCI Workshop will provide leaders in the Federal computing community with an overview of cloud interoperability/portability issues and possible solutions. The Workshop will increase attendees' understanding of this area, encourage ongoing participation from attendee organizations, and gather feedback on future requirements for open cloud computing deployments. This feedback can help guide future deliverables from cloud computing standardization organizations.

Representatives from groups working on Cloud interoperability and portability will be invited to present their approaches at interactive sessions. Invited presenters include the Cloud Computing Interoperability Forum, the Open Cloud Consortium, the Open Grid Forum, the Open Group, the Distributed Management Task Force, the Network Centric Operations Industry Consortium, the Object Management Group, and vendors with active interoperability and portability efforts.

One motivation for the SATCCI Workshop aimed at the Federal IT community is that the new administration will be making critical technology funding decisions in the next few months. The SATCCI organizers will invite representatives of major Federal computing organizations to attend and participate in making this a practical, productive, and timely Workshop.

We hope to see you at this special session. To register for this Special Event, click HERE

http://www.omg.org/registration/dc/

Wednesday, February 18, 2009

Unified Cloud Interface Project Mailing List Created

If you've been following the CCIF discussions, a group of us have come up with the concept of creating an open unified cloud interface specification and test harness. One of the key drivers of the unified cloud interface is to create an API about other APIs: a singular programmatic point of contact that can encompass the entire infrastructure stack as well as emerging cloud centric technologies, all through a unified interface.
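To illustrate the "API about other APIs" idea, here is a minimal Python sketch of how a unified interface might dispatch one common call to provider-specific adapters. The class and method names and the providers shown are hypothetical placeholders of my own, not the actual UCI specification, which is still being defined.

    class CloudAdapter(object):
        """One adapter per provider API; all expose the same operations."""
        def launch(self, image_id):
            raise NotImplementedError

    class EC2Adapter(CloudAdapter):
        def launch(self, image_id):
            # would call the Amazon EC2 web service here
            return "ec2 instance booted from %s" % image_id

    class ECPAdapter(CloudAdapter):
        def launch(self, image_id):
            # would call the Enomaly ECP REST API here
            return "ecp machine booted from %s" % image_id

    class UnifiedCloudInterface(object):
        """The singular point of contact; callers never touch provider APIs."""
        def __init__(self):
            self.providers = {'ec2': EC2Adapter(), 'ecp': ECPAdapter()}

        def launch(self, provider, image_id):
            return self.providers[provider].launch(image_id)

    uci = UnifiedCloudInterface()
    print(uci.launch('ec2', 'ami-12345678'))
    print(uci.launch('ecp', 'centos-base'))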

We've created a secondary UCI mailing list to help guide the creation of the unified cloud interface. If you are interested in shaping the future or just following our progress we invite you to join the UCI mailing list.

Testing The World Wide Cloud

For all the discussion about the potential opportunities of the world wide cloud, there have been very few real world examples of applications that can take advantage of this idea. Today, in possibly one of the first true killer applications for multiple global cloud providers, SOASTA, a provider of a cloud based testing platform, has announced its new CloudTest Global Platform.

SOASTA has unveiled an ambitious plan to utilize an interconnected series of regionalized cloud providers for global load and performance testing of Web applications and networks. They're calling this new service the CloudTest Global Platform, which is commercially available today and is said to enable companies of any size to simulate Web traffic and conditions by leveraging the elasticity and power of cloud computing.

In full disclosure, over the last several months I've gotten to know Tom Lounibos, the co-founder and CEO of SOASTA, quite well. At Enomaly, we've worked on several projects together, so I can honestly say Tom is the kind of guy a young technology entrepreneur (like myself) strives to become. Tom is a visionary and has an impressive resume to prove it. He brings more than 30 years of experience in building early stage software companies, and has led two companies to successful IPOs. (Remember when there was an IPO market?) Most recently, Tom was CEO of Dorado Corporation, focused on enterprise lending automation. In the world of cloud CEOs, Tom is one to keep your eye on.

Back to what I find interesting about this new scheme: traditionally, performance testing has been a kind of "best guess" scenario. There are many testing frameworks available, but most of them create a hypothetical experience using a set of static machines, typically limited to one or two geographic locations. With the emergence of a global supply of regional cloud providers, SOASTA is tapping into almost limitless capacity to test your application environment in a proactive fashion. Until the emergence of cloud based infrastructures, testing beyond a few hundred thousand users was impossible; now you can slap together a few regionalized clouds and realistically see how 3 million or more users around the globe will actually experience your application and infrastructure. This is especially important in emerging markets such as China and India where even a low usage site can routinely get millions of users.

I think the idea of a global testing platform is very intriguing for a number of other reasons as well. Although they're pitching "CloudTest" as a testing / performance tool, there is nothing saying it can't be used as part of a proactive monitoring / scaling environment where you periodically test performance thresholds. In this proactive scaling approach you may want to predefine when, where and how you scale your infrastructure based on the real world conditions your users are actually experiencing. When your application is in production you could use your performance tools for proactive analysis, allowing BPM and other performance based policies to be defined ahead of time, ensuring a consistent Quality of Experience.

In examining the opportunity for a world wide cloud, the quality of a user's experience as the basis for scaling & managing your infrastructure will be a key metric going forward. Scaling based solely on "load" is a relic of the past. The problem is that a given cloud vendor/provider may be living up to the terms of their SLA's contract language, thus rating high in quality of service, while the actual users are very unhappy because of a poor user experience. In a lot of ways the traditional SLA is becoming somewhat meaningless in a service focused IT environment. With the emergence of global cloud computing, we have the opportunity to build an adaptive, globalized infrastructure environment focused on the key metric that matters most: the end user's experience. Whether servicing an internal business unit within an enterprise or a group of customers accessing a website, ensuring an optimal experience for those users will be the reason they keep coming back and ultimately what will define a successful business.
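As a rough sketch of what experience-driven scaling could look like in code, here is a small Python example that adds capacity in any region where the measured 95th percentile response time crosses a target. The measurement and provisioning functions are hypothetical stand-ins for whatever testing/monitoring service (CloudTest or otherwise) and cloud API you actually use.

    import random

    REGIONS = ['us-east', 'eu-west', 'asia-east']
    TARGET_95TH_MS = 800          # the user experience we want to protect
    MAX_NODES_PER_REGION = 20

    def measure_95th_percentile_ms(region):
        # Placeholder: ask your monitoring / testing platform for the 95th
        # percentile response time users in this region actually see.
        return random.randint(200, 1500)

    def add_node(region):
        # Placeholder: call your cloud provider's API to provision one
        # more application node in the given region.
        print("scaling out in %s" % region)

    def scale_by_experience(current_nodes):
        for region in REGIONS:
            latency = measure_95th_percentile_ms(region)
            if latency > TARGET_95TH_MS and current_nodes[region] < MAX_NODES_PER_REGION:
                add_node(region)
                current_nodes[region] += 1
        return current_nodes

    nodes = dict((r, 2) for r in REGIONS)
    nodes = scale_by_experience(nodes)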

Monday, February 16, 2009

Red Hat Announces it's kinda Interoperable, sort of, maybe?

In a rather lackluster announcement today, Red Hat indicated they have signed a reciprocal agreement with Microsoft to enable increased "interoperability" for the companies' virtualization platforms. Both companies said they would offer a joint virtualization validation/certification program that will provide coordinated technical support for their mutual server virtualization customers.

Is it just me or does this Red Hat interop announcement seem a little misguided? Digging a little deeper, it appears that Red Hat and Microsoft don't fully grasp what interoperability actually is, or more to the point, who it benefits. Rather, they seem to be taking advantage of the buzz that interoperability has enjoyed in 2009. So now rather than slapping a "cloud" logo on your product, you slap an interoperable logo on there too.

Back to the announcement,
  • Red Hat will validate Windows Server guests to be supported on Red Hat Enterprise virtualization technologies.
  • Microsoft will validate Red Hat Enterprise Linux server guests to be supported on Windows Server Hyper-V and Microsoft Hyper-V Server.
  • Once each company completes testing, customers with valid support agreements will receive coordinated technical support for running Windows Server operating system virtualized on Red Hat Enterprise virtualization, and for running Red Hat Enterprise Linux virtualized on Windows Server Hyper-V and Microsoft Hyper-V Server.

My question to Red Hat is: since when do certification and technical support count as interoperability? Making this all the more confusing is that there was no mention of Red Hat's actual interoperability efforts, which traditionally have focused on its open source systems management API called libvirt.

In case you're not familiar with libvirt, it is a toolkit incubated under Red Hat's Emerging Technology projects group. The goal of the API is to create an interoperable systems API which serves as a central point of interaction with the virtualization capabilities of recent versions of Linux (and other OSes). Among its various features, the API also acts as a CIM provider for the DMTF virtualization schema as well as a QMF agent for the AMQP/Qpid messaging system. Libvirt is free and available under the GNU Lesser General Public License.
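For those who haven't played with it, here is a minimal sketch of what talking to a local hypervisor through libvirt's Python bindings looks like; the connection URI below is the usual local qemu/KVM one and would differ for Xen or a remote host.

    import libvirt

    # Open a read-only connection to the local qemu/KVM hypervisor.
    conn = libvirt.openReadOnly('qemu:///system')

    # List the domains (virtual machines) that are currently running.
    for dom_id in conn.listDomainsID():
        dom = conn.lookupByID(dom_id)
        state, max_mem, mem, vcpus, cpu_time = dom.info()
        print("%s: %d vCPUs, %d KiB memory" % (dom.name(), vcpus, mem))

    conn.close()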

Up until today it appeared that Red Hat was ready to lead the interoperability effort; this announcement casts some doubt on whether that is actually a true motivation of the company. If Red Hat is serious about being interoperable, they must do more than be MS certified. Partners and customers are smart enough to read beyond press releases. Actions speak louder than words, and Red Hat must actually take steps to enable an interoperable cross-vendor environment. The most logical first step is to provide direct support for Microsoft within libvirt. Unfortunately this is not the case.

Come on Red Hat we expected more from you.

Describing the Cloud MetaVerse

If you are offended by Theoretical Computer Science, stop reading this now

In describing my theory of the Cloud Multiverse, I may have missed a few obvious implications of using the prefix "multi", meaning consisting of more than one part or entity. Although the Cloud Multiverse thesis suggests there will be more than one internet based platform or cloud to choose from, it does little to describe how each of those clouds interact. For this we need another way to describe how each of these virtualized, interconnected environments interact with one another.

In place of "multi" I suggest we use the prefix "Meta" (from Greek: μετά = "after", "beyond", "with", "adjacent", "self").

I'd like to propose yet another term to describe the inter-relationships between the world of various internet connected technologies and platforms, what I'm calling "The Cloud MetaVerse". This concept was inspired in part by the suggestion of Scott Radeztsky at Sun to look at the problem of cloud interoperability as a meta-problem (problems describing other problems). In order to solve abstract problems, we need abstract solutions to those problems. This fits perfectly into my Semantic Cloud Abstraction thesis, loosely described as an API for other APIs.

Before I go any further, I'm not the first to use this term. According to Wikipedia, the Metaverse was first described in Neal Stephenson's 1992 science fiction novel Snow Crash, where humans, as avatars, interact with each other and software agents in a three-dimensional space that uses the metaphor of the real world.

Neal Stephenson's virtual reality Metaverse definition and my previous Cloud Multiverse theory both do little to define external attributes; instead they define how each virtual world or environment is internally governed. In the "multi"verse, everything that virtually exists does so as a private, segmented environment limited to the specific rules that internally govern it.

In contrast, the Cloud "Meta"verse is its logical inverse, describing everything that exists beyond the confines of a particular virtualized environment. The Cloud Metaverse could also be called a metacloud. At the core of this theory we are given the ability to define how multiple distributed clouds describe their interrelations between themselves (who, what, where, when, and so on).

To use a visual metaphor, Radeztsky's Cube describes a sort of Rubik's cube where each of the internal parts are connected but continually being rearranged. The Cloud MetaVerse describes how a series of these cubes can be externally arranged, like larger sets of Lego building blocks made of small self contained cubes (clouds) of capacity. In a sense, what is happening within a particular virtual environment is completely secondary to how these environments interact with one another. (Microsoft may have its own way of building a cloud that is completely different from Amazon's; it doesn't matter as long as we have a uniform meta-language to interact with each other.)

Although there is still a lot more work to be done, describing the terminology of our problem set is the first step to creating a true semantic abstraction of all Internet based systems. And yes, this may be my craziest idea yet.

Sunday, February 15, 2009

Comprehension of Cloud Subjectivity

I just read an interesting paper called "A Berkeley View of Cloud Computing" and had a random thought. In reading this paper, it occurred to me that the very nature of cloud computing, like the Internet itself, is based on subjectivity: a particular subject's perspective, particularly their feelings, beliefs and desires, drives their own viewpoint / agenda. This seems to be particularly true in the academic views of cloud computing recently.

Lately it seems that just about every "academic paper" I read on the topic of cloud computing pushes unjustified personal opinions, in contrast to knowledge and justified facts. I can't help but think the academic realm is quickly becoming a tool to justify particular vendors' market strategies rather than attempting to uncover original ideas and concepts.

Why are the most original computing concepts emerging from those who "on paper" are the least qualified? Has the Computer Science faculty at most major universities lost touch with the real world, or are they enslaved by their corporate benefactors, a symptom of a larger problem?

On the flip side, one of the best aspects of the term cloud computing is its complete lack of a uniform definition, giving us the ability to adapt the term for our own purposes. Within this nebulous definition is its true opportunity to transcend any one usage.

More broadly, cloud computing represents a new era in computing, one that is not limited to any one school, application, methodology or business case. Or to say it another way, cloud computing represents a fundamental shift, one that will allow anyone with an Internet connection to access a global cloud of opportunities previously only available to the largest companies who could afford the cost of building a global computing infrastructure. A little subjectivity of my own on a Sunday morning between diaper changes.

Saturday, February 14, 2009

Radeztsky's Cube & The Interoperability Metaverse

I wanted to bring to my readers' attention a diagram created by Scott Radeztsky, a Principal Engineer at Sun as well as one of our main cloud interoperability champions within the company. You can see the diagram posted at http://cloudforum.googlegroups.com/web/Metaverse+Decomposition.pdf

Radeztsky's diagram does a great job of visually representing the "semantic" similarities between the various technical pieces as well as the higher level taxonomies. Although the first draft is still in fairly rough form, it provides a nice 3-dimensional visual representation of the key aspects of the cloud, or what he calls the "Interoperability Metaverse" within "the cube".

axis 1: type: private vs. public vs. hybrid cloud
axis 2: layer: SaaS, PaaS, IaaS
axis 3: domain (known optimizations of HW/SW): HPC, Analytics, Healthcare, Telco

What is interesting is that there actually may be room for a 4th dimension, possibly an API dimension, where we can map not only the relations on the face of the cube but also within it.
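As a quick Python sketch of what points in such a cube might look like as data, with the hypothetical 4th API axis added (the first three axis values below come from the list above; the API values are placeholders of my own):

    from collections import namedtuple

    # One point in the (extended) Interoperability Metaverse cube.
    CloudPoint = namedtuple('CloudPoint', ['type', 'layer', 'domain', 'api'])

    points = [
        CloudPoint('public',  'IaaS', 'HPC',        'ec2-style'),
        CloudPoint('private', 'PaaS', 'Healthcare', 'app-engine-style'),
        CloudPoint('hybrid',  'IaaS', 'Telco',      'uci'),
    ]

    # Group points by layer to see which parts of the cube are populated.
    by_layer = {}
    for p in points:
        by_layer.setdefault(p.layer, []).append(p)

    for layer, pts in by_layer.items():
        print("%s: %d point(s)" % (layer, len(pts)))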

Radeztsky goes on to describe the constraints that he feels will help reduce the multiple scenarios in our spectrum where one size may not fit all.
  • app space contributing: my app can use dynamic elements and rediscover some things or it has hardwired/hardcoded things
  • infra space contributing: my infra contains dynamic things that can help apps or it does not (DDNS, identity)
  • NW space contributing: interop between the various cloud types, high rates of shuffling vs. lower, etc

There are pieces of technology and typical optimizations, and places in / pieces of the cube where solutions are perhaps simpler, and places where things are still not known and therefore must remain more abstract or academic in our definitions.

Personally I like the visual metaphor Radeztsky has chosen to use; I'm calling it "Radeztsky's Cube" (because I like naming things and it sounds cool). I'd also like to see a version 2 using a 4D approach. Keep up the good work.

Thursday, February 12, 2009

The State of the Cloud Computing Interoperability Forum

This message was posted on the Cloud Computing Interoperability Forum today. It's meant to be a kind of "state of the union" address outlining a vision and direction for the group.
------------------------
Considering the myriad of great discussions recently on the CCIF, we figured it was time to bring everyone up to speed on some of the things going on behind the scenes. First, we'd like to thank everyone for your enthusiasm, participation and support. It's been amazing to see how quickly the cloud interoperability movement has taken off.

A few have pointed out that the CCIF appears to be the act of one person; I'd like to assure you this is not the case. Although I am the "instigator" and most vocal member, the CCIF has an excellent leadership team made up of my partners from CloudCamp (Jesse Silver, Dave Neilsen, Sam Charrington) as well as a significant number of major industry sponsors (David Bernstein from Cisco, Craig Lee from OGF, Jake Smith from Intel, Steve Diamond from IEEE, Scott Radeztsky from Sun & Shishir Garg from France Telecom / Orange) who are working closely with us to secure a long and productive future for the CCIF. Although I can't go into details, we have been in discussions for some months now and are very close to making an exciting announcement.

As a community, let's continue working to develop our ideas and methods around cloud interop and related discussions. Our ultimate goal is to drive a consensus on the difficulties as well as the opportunities for cloud interoperability. As you may have noticed, unlike other discussion groups we strive to limit moderation; regardless of whether or not we agree, you should have the freedom to express your opinions. If you feel you have been unfairly moderated for any reason, I encourage you to contact one of us directly.

You may have also seen that we are forming a UCI working group. If you want to take a leadership role in the CCIF today, this is the best way to do so. This is an "Open Forum" and therefore it must be open to all, regardless of whether you're the CTO of a major corporation or a consultant with an interest in cloud computing.

If you are as passionate about cloud interoperability as we are, we'd like to personally invite you to join us for CloudCamp NY and CCIF Wall Street in April in New York City.

Looking forward to continuing to inspire.

Yours Truly

The Founders and Sponsors of CCIF
www.cloudforum.org

Tuesday, February 10, 2009

The Hybrid Cloud Multiverse (IPv6 VLANS)

Christofer Hoff proposed an interesting idea earlier today. He asked, "How many of the cloud providers (IaaS, PaaS) support IPv6 natively or support tunneling without breaking things like NAT and firewalls? As part of all this Infrastructure 2.0 chewy goodness, from a networking (and security) perspective, it's pretty important."

His post actually was a kind of epiphany which led me to think that one of the great things about cloud computing is its ability to virtualize everything. The cloud is a kind of "multiverse" where the rules of nature can continually be rewritten using quarantined virtual worlds within other virtual worlds (aka virtualization). A traditional physical piece of hardware is no longer a requirement.

For example, VLANs don't need to differentiate between IPv4 and IPv6; the deployment is just dual-stack, as Ethernet is without VLANs. So why not just use modern VLAN technology to "overlay" IPv6 links onto existing IPv4 links? This can be achieved without needing any changes to the IPv4 configuration and allows for seamless and secure cloud networking while also allowing for all the wonders that IPv6 brings. It's in a sense the best of both worlds, the old with the new.

A VLAN based IPv6 overlay offers several interesting aspects, such as network security that is directly integrated into the design of the IPv6 architecture. (Security being one of the biggest limitations to broader cloud adoption.) IPv6 also implements features that simplify address assignment (stateless address autoconfiguration) and network renumbering (prefix and router announcements) when changing Internet connectivity providers. It's almost like the designers of IPv6 envisioned the hybrid cloud model.

Thanks for the inspiration Hoff, looking forward to trying this out.

IBM & Juniper team up for Cloud Migration

Interesting news today on the interoperability front. (Yes, I know I am obsessed.) In a news conference, IBM and Juniper Networks jointly demonstrated what they describe as a means of seamlessly migrating workloads over private and public clouds, enabling enterprises' existing data centers to seamlessly interact with the public Internet.

Also interesting is the news that IBM has created a new group called the Enterprise Initiative Group, which will focus on accelerating adoption of cloud related technology. (Sounds like a great CCIF sponsor.) The unit will be headed by Erich Clementi as general manager, and he will report directly to IBM CEO Sam Palmisano.

What I find most telling about this news is the technical approach IBM and Juniper have chosen. In the announcement they outlined a plan to use a hardware based virtual private LAN, which allows any-to-any (multipoint) connectivity, in conjunction with a Multi-Protocol Label Switching (MPLS) system. In case you're unfamiliar with MPLS, it is a protocol agnostic, data-carrying mechanism.

I did a little further digging into what MPLS is, and from what I can tell it assigns labels to data packets as a kind of packet meta-data descriptor, enabling packet-forwarding decisions to be made solely on the contents of the label, without the need to examine the packet itself.
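A toy Python illustration (not real MPLS code) of that idea: the "router" below forwards purely on the top label of the stack and never inspects the payload. The labels, next hops and table entries are made up for the example.

    # Incoming label -> (outgoing label, next hop)
    forwarding_table = {
        100: (200, 'router-b'),
        101: (201, 'router-c'),
    }

    def forward(packet):
        label_stack, payload = packet        # payload is never examined
        out_label, next_hop = forwarding_table[label_stack[0]]
        new_stack = [out_label] + label_stack[1:]
        return next_hop, (new_stack, payload)

    packet = ([100], b'opaque IP packet bytes')
    print(forward(packet))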

Interestingly, this approach may also fit nicely with some of the concepts of my virtual private cloud, semantic cloud abstraction and unified cloud interface, where a virtual networking interface can create end-to-end circuits across any type of transport medium, using any protocol. Needless to say, a very interesting approach to secured cloud networking. To give some background, some of you may remember my concept of a "Virtual Private Cloud" or VPC which I described last year. The general idea was to provide a method for the partitioning of public & private clouds so they could encapsulate multiple local and remote resources to appear as a single homogeneous computing environment. This concept could allow for the bridging and secure utilization of remote resources as part of a seamless global compute infrastructure.

Within this vision for a virtual private cloud was a core component: a virtual private network (VPN) or a virtual LAN (VLAN) in which the links between nodes are encrypted and carried by virtual switches. MPLS may be an ideal basis for this concept. Another reason for the use of MPLS within the context of a VPC is its ability to virtualize the entire network stack, giving it the particular characteristics & appearance that match the demands and requirements of a given application regardless of where it's deployed.

Unfortunately there are very few implementations of MPLS outside of a few high end networking devices. If Juniper and IBM are serious about this plan, they will need to create something capable of running both on traditional hardware as well as in virtual machines. I should also note that Juniper unveiled its new MPLS based line of the technology late last year under their "E-series Broadband Service Routing Platforms" brand.

This new announcement clearly pits IBM against Cisco. As the propagation of cloud computing protocols & models continues, there has never been a better time to address the needs of an interoperable cloud ecosystem. I'd like to personally welcome IBM and Juniper to the party.

Monday, February 9, 2009

New Design for ElasticVapor.com

In my attempt to unify the branding among my various activities, CCIF, etc., I have implemented a new design on the ElasticVapor site. No more Wild West Ruv; I went for a brighter and more modern look, with a little bit of cloud but not too much. The design is supposed to represent the area just above the cloud deck, giving you a view of the entire planet, or something like that. I know a large portion of my readers use RSS, so check it out if you get a chance.

Please let me know if you see any specific design errors and I'll do my best to fix them. If you're using IE6 or below, upgrade your browser :)

r/c

Call to Action: UCI Working Group

I'd like to thank everyone who has recently joined the Cloud Computing Interoperability Forum's discussion group. I won't respond directly to all the messages, but I will state that I too believe that open standards are the only way the cloud will be truly successful, which IS exactly the purpose of this group.

In regards to the last few CCIF posts, the one thing I keep hearing over and over is that bits and pieces of what we're trying to do have already been created to some degree. I think one of the things that the CCIF as well as our events have provided to the broader cloud computing community is an open dialog between the "established powers" and the new generation of provocateurs (myself included). This group is a perfect example, consisting of a variety of companies and individuals from almost all aspects of the cloud / IT industry, big and small. I can't help but think that traditionally the development of standards and related protocols (open or closed) has been an exclusive club limited to only the largest companies who were able to pay to play. What the emergence of the cloud and related social technologies has given us is the ability to publicly iterate on the development of standards. What used to take years can now happen in days. (Jayshree Ullal's comments about the advancements made in 'cloud networking' are perfect -- "more has been achieved in the last 100 days of Cloud Networking than is possible in 100 weeks")

Jesse Silver, Dave Neilsen, John Willis, Sam Charrington and a few others suggested we create a working group to help drive the conversations forward. I agree. I feel we need to start working on something that may not be perfect, but that actually starts to take shape in a functional form. So my question to the group is: rather than continually talking about the opportunities (past, present and future), why don't we start to do something about it, something tangible, something we can achieve in the next 100 days?

If you're interested in getting involved, we've created a Google Code project at http://code.google.com/p/unifiedcloud/ for this very reason. I invite anyone who is interested to send me your email address and I'll make sure you're added as a contributor on our Google Code site.

CloudCamp Federal @ FOSE (Washington DC, March 10th)

Sign up now for CloudCamp Federal @ FOSE, March 10, 2009, 3pm - 8:30pm at the Walter E. Washington Convention Center, 801 Mount Vernon Place NW, Washington, DC. As a follow-up to last November's successful CloudCamp Federal, this event will also be held as an unconference where attendees can exchange ideas, knowledge and information in a creative and supporting environment, advancing the current state of cloud computing and related technologies.

Although focused on cloud computing within the Federal government context, CloudCamp Federal is an informal, member-supported gathering that relies entirely on volunteers to help with meeting content, speakers, meeting locations, equipment and membership recruitment.

Sunday, February 8, 2009

CCIF Slashdotted: Meta-data in the cloud

A nice spike in traffic to the CCIF today thanks to our friends at F5 (Lori MacVittie) and Slashdot. In MacVittie's post she asks "Who owns application delivery meta-data in the cloud?" I found this statement particularly interesting; in it she says "There is a very real danger, however, that cloud interoperability and portability specifications will fail to address the very real need to include all the relevant application and network infrastructure meta-data necessary to move an application from one cloud to another."

This is exactly the problem we're trying to solve with the CCIF and our unified cloud interface working group. In this effort we are attempting to create an open semantic specification (taxonomy, ontology, etc.) of the meta-data models and control structures applied across multiple cloud & infrastructure providers.

For those new to the CCIF, the goal of our unified cloud interface (UCI) is simple: an API for other APIs. A singular abstraction that can encompass the entire infrastructure stack as well as emerging cloud centric technologies through a unified interface. Actually implementing this vision will, of course, be far more difficult.

I think the real question about cloud interoperability has more to do with portability and vendor lock-in versus freedom from the confines of your traditional infrastructure. And yes, the two are not mutually exclusive. Whether it's internal or external, proprietary or open, for me the answer is choice. Interoperability gives you the freedom to choose the best services, providers and applications regardless of technology or market adoption.

Don't get me wrong, cloud portability is nice and should be a focal point. But there is already a significant amount of work being done in this space, at least from a VM point of view. In particular there is the work the Distributed Management Task Force and VMware have been doing on the Open Virtualization Format. I'd love to be able to take an OVF formatted virtual application and deploy it to Amazon EC2, GoGrid, Rackspace and Savvis and have it run & scale in the same way. But even with OVF, the question of how to uniformly interact with the "cloud" still remains. I think the broader opportunity for cloud interoperability and the CCIF isn't to create a new set of standards, but instead to advocate for those that have already been created or, like OVF, are in the midst of being adopted.

So who owns this meta-data that describes the cloud resources? I say those who run their applications in the cloud do. As for who owns the data you put in the cloud? You do.

Friday, February 6, 2009

CCIF Wall Street. RSVP

When we launched the new site today we received a lot more traffic than we were expecting, resulting in a large number of RSVPs for the CCIF Wall Street event. I want to make mention of a small error on the thank-you page, which mistakenly listed the previous CCIF event location in San Francisco. (Thank you JP Morgenthal for pointing it out.) The Wall St event is indeed in Manhattan, not SF.

Just to be clear, the time / date / location is as follows. (Limit 2 people per firm, unless you're a sponsor.)

April 2nd, 2009
New York City, New York (10am - 4pm)

Thomson Reuters
195 Broadway Street, 4th Floor
New York, NY 10007

You must RSVP in order to attend; space is limited to 150.
http://www.cloudforum.org/events/rsvp/

Announcing the Launch of CCIF website at CloudForum.org

I'm happy to announce the launch of the Cloud Computing Interoperability Forum's official website at http://www.cloudforum.org

The current site is fairly rudimentary. Over the next few weeks we will continue to improve the site and fix any errors. In the meantime please feel free to take a look and give us any feedback.

We've also posted the RSVP for the Wall St. Interoperability event on April 2nd at http://www.cloudforum.org/events/rsvp/

We're still looking for additional sponsors for both the site as well as the upcoming Wall St event, so please get in touch and we'll make sure to get your logo on the site and the Wall St invitations.

Thursday, February 5, 2009

Small Web Hosts turning to 'Mini Clouds'

It's been an interesting year so far for cloud computing "enablers" such as my firm Enomaly. It's certainly not what I was expecting. The appetite among big businesses for investing in large enterprise "cloud" infrastructures has all but disappeared. But there is another market segment quickly picking up steam. Recently I've seen a significant amount of interest from smaller traditional VPS style hosting firms looking to create what I call "mini clouds".

These mini clouds are similar to traditional virtual private servers, but instead of offering dedicated servers made up of container based virtual machines, they provide an EC2-like interface with a specific set of virtual applications. Unlike EC2, the focus is purely on the quick deployment and scaling of specialty applications such as Zimbra or SugarCRM, or specific niche industries such as not-for-profits, with the added bonus of a pay-per-use model.

To give a little background, a key focus of our Enomaly ECP platform is enabling the ability to partition clouds within larger cloud infrastructures, which we call a Virtual Private Cloud (VPC): a method for partitioning a public computing utility such as EC2 into quarantined virtual infrastructure. A VPC may encapsulate multiple local and remote resources to appear as a single homogeneous computing environment, bridging the ability to securely utilize remote resources as part of a seamless global compute infrastructure.

So it's interesting to see these smaller hosting firms, who have traditionally leased dedicated servers, starting to embrace the cloud model and in turn creating these so-called "mini clouds". The biggest players in the emerging "mini cloud" hosting space include Linode.com and Rackspace's Slicehost; both traditionally described themselves as VPS hosts and have recently moved into cloud focused offerings.

It's too soon to tell if this is just a short term trend or an indication of a larger move toward cloud computing by the smaller hosting firms. One thing is for certain: companies such as Parallels, who focus on this market, are going to have to start moving quickly because companies like Enomaly are going to start eating their lunch.

IT Cloud Computing Conference -- Feb 11th Toronto (Discount Code)

As some of you know, I'm helping put on a cloud conference & a free CloudCamp next week (Feb 11) in Toronto. The conference is called the IT Cloud Computing Conference and is happening earlier the same day (and same place) as CloudCamp. The organizers have agreed to give my blog readers a 50% discount off the normal price to attend their conference. And best of all you get to hear me talk about my crazy ideas.

REGISTER HERE >> Click Here

Discount Code (50% off): CCAL

This code saves you $299

Cloud Computing Conference: An IT Paradigm Shift (www.itcloudconference.com) is a one-day conference happening on the same day as CloudCamp, also at the Metro Toronto Convention Centre. We have arranged a special 50% discount.

The two events are co-located so you have the opportunity to experience a jam-packed Cloud Computing day.

+++ Use this code upon registering: CCAL

IT CLOUD COMPUTING CONFERENCE AGENDA:

February 11, 2009

8:00 am ~ 8:30 am

Registration & Continental Breakfast

8:30 am ~ 9:30 am

Opening Keynote sponsored by Info-Tech Research Group

9:30 am ~ 12:00 pm

Breakout Sessions and coffee break

12:00 pm ~ 1:30 pm

Lunch and Keynote sponsored by IBM Canada Inc.

1:30 pm ~ 3:15 pm

Breakout Sessions and coffee break

3:15 pm ~ 4:00 pm

Closing Keynote

4:00 pm ~ 4:30 pm

Refreshment Networking Break

4:30 pm ~ 8:00 pm

CLOUDCAMP TORONTO (light dinner and refreshments)

The conference will cover Strategic (why) and Tactical (how) sessions. Regular admission is $599, but it’s only $299 with your discount code and includes breakfast, lunch, a light dinner, networking, and refreshments.


OPENING KEYNOTE:

Understanding the Great Cloud Rush of ’09 – click here for more details on John Sloan’s opening keynote from Info-Tech Research Group.

Example breakout session:

>> The Intersection of Cloud and Grid Computing

…more details here

Save $299.99 and register today! (You must be attending CloudCamp to get this offer).

Questions?

Call Stephanie Cole, 905.695.0123 x211,

www.itcloudconference.com

Wednesday, February 4, 2009

Cloud Control with Google Talk (Botnets)

In digging through Google's XMPP and GTalk developer documentation, I may have stumbled upon the biggest threat/opportunity in Google's cloud toolbox. Before I go any deeper into this, I will warn you that this technology can also potentially be used for a host of illicit activities, none of which I condone.

Google Talk is very possibly the most advanced open communication platform on the globe. It's an extremely flexible instant messaging platform which utilizes an open protocol called XMPP. This protocol allows users of other XMPP clients to communicate securely with Google Talk users using Google's extensive global infrastructure. It also uses an innovative P2P approach within its VoIP library, based around the Jingle protocol. A major driver of the Google Talk service is interoperability, or what they describe as Open Communications.

The Google Talk network supports open interoperability with hundreds of other communications service providers through a process known as federation. This means that a user on one service can communicate with users on another service without needing to sign up for, or sign in with, each service. At its heart, this is where some of the biggest opportunities for Google Talk are. Users of the Google Talk platform or any other XMPP based platform just need a service that supports the XMPP standard for server-to-server federation, and they will be able to talk to Google users (Gmail & Google Apps).

Google Talk is by far one of the biggest deployments of an XMPP based cloud infrastructure anywhere. To help enable this "interoperable" and federated XMPP platform, Google offers a component called Libjingle. The component is described as an open source P2P interoperability library and is made available under an open source Berkeley-style license. More specifically, Libjingle is a set of components that provides the ability to interoperate with Google Talk's XMPP peer-to-peer file sharing and voice calling capabilities in a variety of amazing ways.

Unfortunately the closest real world analogue for Libjingle is a modern P2P botnet and its use of distributed command-and-control (C&C) systems, which are typically embedded directly into the botnet itself. Similarly, Libjingle can act as a dynamic updating component capable of traversing networks and proxies through built-in negotiation classes. Another interesting feature is its use of P2P and distributed architectures, which allows it to avoid any single point of failure. Through Google Talk's "Group Chat", an administrator can be identified solely through secure keys, and all data except the binary itself can be encrypted, making the communication channels extremely secure. Similarly, a shady individual could potentially create several anonymous Gmail accounts as the basis for a Google Talk based C&C. (Yup, kind of scary.)

I'm not going to get into all the finer details of the library except to say this software is extremely versatile.

There are a number of interesting use cases for Google Talk in the context of cloud based monitoring as well as command and control. The simplest example is that of a cloud system monitor & scaling agent. Google Talk, like many chat clients, lets a user display a custom status message to other users. The Google Talk server stores lists of recently used status messages, and you can request and modify these values. An XMPP XEP extension enables a client to retrieve and modify these stored message lists, and also provides notifications so that all resources can report an updated status (i.e. up, down, under heavy load, here's my IP, not responding, offline, etc.) which is visible to all other approved "group members". Whenever any resource changes its status, a message is sent to all other resources, which are notified of the new values and can adjust accordingly. For example, if an EC2 node goes down, every other node could be almost instantly notified using a gossip protocol which spreads the message that node ABC is no longer available over a huge number of interconnected application nodes in a very short period of time. So in a sense it's a perfect distributed command and control "cloud".
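To show roughly what the monitoring / heartbeat use case could look like, here is a minimal Python sketch using the xmpppy library to publish a node's state as an XMPP presence status and watch the status of its peers. The account, password, resource name and status text are hypothetical placeholders, and a real agent would handle reconnects, the roster and error cases far more carefully.

    import xmpp

    jid = xmpp.protocol.JID('[email protected]')  # hypothetical account
    client = xmpp.Client(jid.getDomain(), debug=[])
    client.connect(server=('talk.google.com', 5222))
    client.auth(jid.getNode(), 'not-a-real-password', resource='ec2-node-1')

    def on_presence(conn, presence):
        # Every approved peer sees status changes almost instantly.
        print("%s -> %s" % (presence.getFrom(), presence.getStatus()))

    client.RegisterHandler('presence', on_presence)
    client.sendInitPresence()

    # Advertise this node's state to the rest of the "cloud".
    client.send(xmpp.Presence(status='up: load=0.2 ip=10.0.1.5'))

    while True:
        client.Process(1)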

For me, I think the more interesting use case is a highly available cloud based communications platform. It is a fact that most cloud's can and will go down. One of the most highly available infrastructures with little doubt is Google's. So even if your Cloud goes down, your command and control will remain allowing you to rapidly adapt.

Arista Networks Drops "Cloud Networking" Trademark

In a follow up to my October 23rd post on Arista Networks attempting to copyright / trademark the term "Cloud Networking", the company's CEO Jayshree Ullal has stated that they will no longer be attempting to trademark the term. (Mission accomplished.)

In a recent post she shed some light on the topic, saying "It has been 100+ days since Arista Networks formally unveiled our new name, Cloud Networking mission and our much talked about leadership team. What is clear is more has been achieved in 100 days of Cloud Networking than is possible in 100 weeks."

She goes on to point out that several new Cloud Networking groups have been formed on Facebook (Lippis), LinkedIn, Yahoo and Google (to name a few) to provide common ground for the introduction and advancement of Cloud Networking. (I'm the guy behind the LinkedIn and Google groups, although the Google Group hasn't seen any real action yet.)

Ullal also describes the reasons for dropping the trademark efforts: "Arista Networks will therefore not pursue the filing of a trademark for Cloud Networking. The original goal of broad awareness has already been accomplished. The US Patent & Trademark Office has also advised us that the use of this term may be generic. We completely agree and prefer to focus on adoption of Cloud Networking rather than trade mark pursuits. Quite frankly, I could never have predicted the remarkable acceptance in such a short compressed time! Such things usually take years not days."

In all this I would say Arista Networks has proven to be an upstanding, honest and respectable company focused on what matters most: creating a larger and sustainable cloud networking industry and vision. Arista Networks is the kind of company I would highly encourage others to emulate. Jayshree Ullal, you did good!

PR: Enomaly Extends Cloud Computing Professional Services

Building upon the growing industry momentum toward cloud computing and elastic infrastructures, Enomaly is happy to announce that we are extending our suite of cloud computing professional services to better serve the Cloud Computing community. Enomaly's Cloud Computing leadership and expertise are now available in more ways to help your organization leverage the emerging Cloud Computing paradigm.

Contact us at [email protected] or visit
http://www.enomaly.com/services.html for more information

Enomaly's suite of Cloud Computing service offerings now includes:

Enterprise Cloud Computing Strategy and Architecture

Enomaly has worked with Fortune 500 organizations to architect and implement internal or enterprise cloud computing facilities. This experience and expertise is available to help your organization obtain higher levels of efficiency and business agility through the use of these new technologies within your datacenters and hosting environments.

Cloud-based Performance Testing and Optimization

A fully-integrated, one-stop scaling and testing solution. With Enomaly's CloudTest you can test any Web application or service, no matter how complex, at real-world scale and load. No need to resort to cobbling together point products from many vendors.

Cloud-based Business Continuity / Disaster Recovery

Business resilience extends beyond the continuity of business operations to include regulatory compliance, high availability, data protection and integrated risk management. Our services take you from planning and design through implementation and management, with a strong commitment to understanding your ever-changing business requirements spanning remote cloud resources and local data center resources.

Amazon CloudFront CDN Integration

Amazon CloudFront is a web service for content delivery. It integrates with other Amazon Web Services to give developers and businesses an easy way to distribute content to end users with low latency, high data transfer speeds, and no commitments.

Amazon Flexible Payment Service Integration

Amazon Flexible Payments Service (Amazon FPS) is the first payments service designed from the ground up specifically for developers. The set of web services APIs allows the movement of money between any two entities, humans or computers. It is built on top of Amazon's reliable and scalable payment infrastructure.

Google App Engine Integration

Google App Engine lets you run your web applications on Google's infrastructure. App Engine applications are easy to build, easy to maintain, and easy to scale as your traffic and data storage needs grow.

Emerging Technology Consulting Services and Integration

Whether it's Amazon EC2, AppNexus, GoGrid, Google App Engine or something else, our cloud experts can help with your cloud deployment, scaling and clustering issues.

Virtualization Architecture and Integration (VMware, Citrix/Xen, Microsoft Hyper-V)

Enomaly's extensive expertise in Cloud Computing extends to in-depth knowledge of the underlying virtualization layers, and this expertise is available to help with the architecture and integration of your virtualization infrastructure, based on any of the leading virtualization platforms.

Enomaly's global leadership in Cloud Computing will help you:

- Shorten the time required to plan, implement and deploy new virtual applications, enabling faster adoption of new technologies and improved end-user service levels.

- Optimize performance, reduce operational management costs and increase adaptability of cloud resources.

- Take advantage of the latest expertise and technologies in order to simplify systems management and increase system utilization.

Contact us at [email protected] or visit
http://www.enomaly.com/services.html for more information

--
Please note if you require free support, please visit our Google Group at http://groups.google.com/group/enomalism or grab the latest 2.2 ECP distro at www.enomaly.com

Tuesday, February 3, 2009

VMware Goes Open Source, VDI

In a rather sudden and bold move, VMware has open sourced its virtual desktop infrastructure (VDI) client, called the VMware View Open Client (no, I'm not dyslexic, that's the name). This announcement could have drastic ramifications within the VDI ecosystem. Also surprising is that it's hosted at Google Code, which could indicate something brewing between the two companies.

The VMware View Open Client lets you connect from a Linux desktop to remote Windows desktops managed by VMware View. It is available under the GNU Lesser General Public License version 2.1 (LGPL v2.1). (Personally I would have preferred to see GPLv3, but beggars can't be choosers.)

According to the release, the VDI client has been optimized for thin client devices and is intended for use in thin client partners' applications and devices. Partners are encouraged to use this open source software to develop clients for non-x86 platforms, or for operating systems other than Windows XP/e or Linux.

So what does all this mean? For one, it represents a shot across the bow of the Redmond giant Microsoft, which is already offering its Hyper-V platform free of charge. It also pits VMware directly against the other "open source" virtualization company -- Citrix, whose main money maker is its proprietary desktop virtualization platform. It will be interesting to see if this move forces Citrix to finally embrace open source for anything other than its Xen project. It will also be interesting to see if Red Hat, with its KVM, or Ericom follow suit and offer some level of "free" VDI. Until today the only real open source VDI platform was NoMachine's FreeNX. This certainly changes the playing field.

The move also seems to be an attempt to solidify VMware's position in the potentially huge "cloud" or thin client virtual desktop market. Gartner (not exactly an ideal source of prognostications) predicts:
  • That approximately 50 million user licenses for hosted virtual desktops will be purchased by 2013.
  • The thin-client terminal will account for about 40% of user devices for hosted virtual desktop deployment.
It is my opinion that as we move into the 4th generation of computing (thanks Cisco), two camps will emerge: those who use cloud applications and various as-a-service, Internet-centric software approaches, and those, such as large enterprises, who hold on to the traditional desktop-centric approach. I feel the key difference in this new computing era will be that desktops will start to look more like services, and VMware knows this all too well.

Many companies, including Verizon, already have active cloud desktop services under development; who knows, maybe in the near future your computer will be provided by your ISP. I know I can't wait for my Comcast Desktop, yikes.

Monday, February 2, 2009

Cloud Computing is for everyone -- except stupid people

I love reading stupid headlines from stupid articles; I won't point specifically to them because we've all read them. In particular, I've read a few lately that have pointed out that "cloud computing" isn't ready for primetime, that it's too new, that it's only for early adopters, trailblazers and upstarts who are willing to risk their business on this new thing called the cloud.

So you, the writers of stupid articles, are not going to like this next statement: the cloud, or the internet as it's known by some, is for everyone and anyone with a website or a modern software application, big or small. If you're looking to build a sustainable software/application business using the internet at any point in your application stack, then the cloud is for you. Whether you call it the cloud or the internet, whether you pay by the gulp or by the month, whether you're 2.0 or 3.0, *aaS or ASP -- it's time for you to get on the internets.

Cloud Interoperability Magazine

I'm happy to announce the CCIF now has its very own "Cloud Interoperability Magazine" at http://cloudinterop.ulitzer.com. The site will focus on Cloud Computing, standardization efforts, emerging technologies, infrastructure APIs and anything else I think is relevant to cloud interop.

I'd also like to note we're currently recruiting contributors to the magazine, so if you're interested in publishing or reposting your blog to the site, please sign up here > http://cloudinterop.ulitzer.com/user/register. After you've registered, send me an email with your username & email address and I'll make sure you're included.

Special thanks go out to the sys-con team for providing us with a little more visibility.

Sunday, February 1, 2009

Semantic Cloud Abstraction

The CCIF and its members have recently focused on creating a common cloud taxonomy and ontology. I find it's starting to sound a lot like semantics -- Cloud Semantics. We are, in a sense, defining what cloud computing is by describing its "components" and their relationships to one another, in a form capable of expressing cloud computing and its constituent parts in terms of a consensus data model.

So in this effort we may actually be defining a dynamic computing model that can, under certain conditions, be 'trained' to appropriately 'learn' the meaning of related cloud & infrastructure resources based on a common ontology / taxonomy. In a sense, we are talking about the Semantic Web applied to APIs or, more broadly, a unified cloud interface.

What is the Semantic Web and why does it matter for a unified cloud interface?

Wikipedia describes the Semantic Web as "a vision of information that is understandable by computers, so that they can perform more of the tedious work involved in finding, sharing and combining information on the web."

Why not apply those same philosophies to the underlying control structures of the web (i.e. the cloud) itself? A Semantic Cloud Infrastructure capable of adapting to a variety of methodologies / architectures and completely agnostic to any specific API or platform being described. A general abstraction that doesn't care if you're focusing on a platform (Google App Engine, Salesforce, Mosso, etc.), an application (SaaS, Web 2.0, email, id/auth) or an infrastructure model (EC2, VMware, CIM, etc.).

I'm the first to point out there are other groups attempting a similar approach for web services (e.g. Liberty Alliance), systems management (e.g. DMTF / CIM, OASIS) and others (e.g. OpenID/OAuth). However, I feel all of these groups lack a true consensus on how to describe "all" the aspects of the cloud using a unified semantic ontology.

I've been very impressed by the work the Distributed Management Task Force (DMTF) has achieved with its Common Information Model (CIM). It could be a key aspect of our unified cloud interface efforts. However, one of the problems with a lot of these system management standards is the lack of any real usage outside of traditional "enterprise" system management platforms; or, in the case of VMware or Microsoft, they are simply limited to their own platforms -- interoperable, but only if you're using our software.

The key driver of a unified cloud interface (UCI) is "one abstraction to rule them all" -- an API for other APIs: a singular abstraction that can encompass the entire infrastructure stack as well as emerging cloud-centric technologies through a unified interface. What a semantic model enables for UCI is the capability to bridge cloud-based APIs such as Amazon Web Services with existing protocols and standards, regardless of the level of adoption of the underlying APIs or technology. The goal is simple: develop your application once, deploy anywhere, at any time, for any reason.
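
To make the "API for other APIs" idea a little more concrete, here is a minimal Python sketch of what a provider-agnostic interface could look like. The class and method names (UnifiedCloud, Ec2Adapter, start_node and so on) are hypothetical illustrations for this post, not part of any actual UCI draft:

    class CloudAdapter:
        """Hypothetical adapter interface each provider driver would implement."""

        def start_node(self, image, size):
            raise NotImplementedError

        def stop_node(self, node_id):
            raise NotImplementedError


    class Ec2Adapter(CloudAdapter):
        """Illustrative stub; a real driver would call the EC2 API here."""

        def start_node(self, image, size):
            print("EC2: RunInstances image=%s type=%s" % (image, size))
            return "i-12345678"

        def stop_node(self, node_id):
            print("EC2: TerminateInstances %s" % node_id)


    class EnomalyAdapter(CloudAdapter):
        """Illustrative stub; a real driver would call the ECP REST API here."""

        def start_node(self, image, size):
            print("ECP: provision machine from package %s" % image)
            return "ecp-node-1"

        def stop_node(self, node_id):
            print("ECP: shut down %s" % node_id)


    class UnifiedCloud:
        """One abstraction over many providers: the application only ever
        sees the common verbs, never the provider-specific calls."""

        def __init__(self, adapters):
            self.adapters = adapters

        def start_node(self, provider, image, size="small"):
            return self.adapters[provider].start_node(image, size)


    # Develop once, deploy anywhere: the same call works on either cloud.
    cloud = UnifiedCloud({"ec2": Ec2Adapter(), "enomaly": EnomalyAdapter()})
    cloud.start_node("ec2", image="ami-1a2b3c4d")
    cloud.start_node("enomaly", image="ubuntu-appliance")

In a sketch like this, swapping providers becomes a configuration change rather than a rewrite, which is exactly the lock-in problem the unified interface is meant to address.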

The other benefit of a semantic model is that of future proofing: rather than implementing a static specification based on current technological limitations, we would create a model that assumes the industry keeps moving forward and can adapt as technology evolves.

The use of a Resource Description Framework (RDF) based language may be an ideal method to describe our semantic cloud data model. The benefit of these RDF-based ontology languages is that they act as a general method for the conceptual description or modeling of information implemented by actual web resources, and those web resources could just as easily be "cloud resources" or APIs. This approach will allow us to easily take an RDF-based cloud data model and adapt it to other ontology languages or web service formats, making it both platform and vendor agnostic. In applying this approach we're not so much defining how, but instead describing what: what is cloud computing?
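
As a rough sketch of what describing a cloud resource in RDF might look like, here is a tiny example using the Python rdflib library. The "uci" namespace, class and property names are made up purely for illustration and don't reflect any published taxonomy:

    from rdflib import Graph, Literal, Namespace, RDF, URIRef

    # Hypothetical namespace for a unified cloud ontology (illustration only).
    UCI = Namespace("http://example.org/uci#")

    g = Graph()
    g.bind("uci", UCI)

    node = URIRef("http://example.org/clouds/ec2/i-12345678")

    # Describe a compute node and its relationships, rather than an API call.
    g.add((node, RDF.type, UCI.ComputeNode))
    g.add((node, UCI.provider, Literal("Amazon EC2")))
    g.add((node, UCI.status, Literal("running")))
    g.add((node, UCI.memberOf, URIRef("http://example.org/clouds/ec2/cluster-web")))

    # The same graph can be re-serialized into other formats (RDF/XML, N3, ...).
    print(g.serialize(format="turtle"))

Because the description lives in the graph rather than in any one provider's API, the same statements can later be mapped onto other ontology languages or web service formats, which is the platform and vendor agnosticism described above.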

The next step in enabling a unified future for cloud computing will be to create a working group focused on an open semantic taxonomy and framework. This group will investigate the development of a functional unified cloud interface implementation across several cloud and infrastructure providers. To this end, we have created a Google Code project at http://code.google.com/p/unifiedcloud/. The current time frame for the public unveiling of an initial functional draft UCI implementation, taxonomy and ontology is April 2nd at the upcoming Wall Street Cloud Computing Interoperability Forum.

If you'd like to be a contributor to the UCI working group please get in touch with me directly. If you would like to be involved without joining the UCI Working Group, you are encouraged to review and comment on the working drafts as they become available. Updates will be posted to the Cloud Computing Interoperability Forum mailing list. Feedback from people & companies actually implementing the specifications will be especially valuable and appreciated.

-- Update --
Functional Implementation of UCI on Amazon EC2 and Enomaly ECP
