Tuesday, March 31, 2009

Introducing the Cloud Security Alliance (not by ruv)

I was just informed of an interesting cloud security alliance created by Nils Puhlmann & Jim Reavis. I can't help but wonder if anyone is going to complain that they did this without community involvement? (joke) Regardless, this seems like a great idea.

In a post, Chris Hoff takes steps to address any similarities to the Open Cloud Manifesto. "The key difference between the two efforts relates to the CSA’s engagement and membership by both providers and consumers of Cloud Services and the organized non-profit structure of the CSA. The groups are complementary in nature and goals."

Nice going guys!

According to the CSA site:

The CSA is a non-profit organization formed to promote the use of best practices for providing security assurance within Cloud Computing, and provide education on the uses of Cloud Computing to help secure all other forms of computing.

The Cloud Security Alliance is comprised of many subject matter experts from a wide variety of disciplines, united in our objectives:

  • Promote a common level of understanding between the consumers and providers of cloud computing regarding the necessary security requirements and attestation of assurance.
  • Promote independent research into best practices for cloud computing security.
  • Launch awareness campaigns and educational programs on the appropriate uses of cloud computing and cloud security solutions.
  • Create consensus lists of issues and guidance for cloud security assurance.

The Cloud Security Alliance will be launched at the RSA Conference 2009 in San Francisco, April 20-24, 2009.

An Update from New York

Yesterday representatives of CCIF, CloudCamp, Cisco, IBM, Intel, Microsoft, and the IEEE-ISTO met while attending the Cloud Computing Expo in New York. Other companies were invited but were unable to attend, generally due to the short notice. The companies agreed on a shared goal to promote use and awareness of open and interoperable cloud computing. The group brainstormed several ideas, including the possibility of building on the momentum created by CloudCamp. Another topic was how to enable participants, from individuals to companies both large and small, to contribute to and use the results of broad community collaboration. Additionally, the possibility of a trade association or marketing association for cloud computing was discussed, but no specific actions were agreed upon. The final topic was the need to have broader participation from the community in this discussion.

Next Steps

I have been invited to give a speech tomorrow morning at the Cloud Computing Expo in New York. At this time, I will publicly ask for the support of the greater community in the creation of a completely new kind of cloud computing trade association. This organization will be focused on the marketing and advancement of the cloud computing industry, a goal we all share. This organization must focus on bringing together all the various industry participants in a truly open and collaborative environment. The cloud community as well as companies big or small will have a voice. What I am asking for is not just another trade association but the opportunity to re-imagine how we, the cloud community, can jointly cooperate to advance the overall market opportunity for cloud computing through a neutral yet formalized and legal organization.

My question to you, the cloud community, is how should this new kind of community-centric trade association look? Your comments and suggestions will form the basis of my keynote. The eyes of the technology world are upon us; let's not miss the opportunity to enable a change for the better.

Thank you for your continuing support.

Sunday, March 29, 2009

OpenCloudManifesto.org Goes Live

Just a quick note that the http://opencloudmanifesto.org website is now live.

Direct Link to the manifesto is available here > http://opencloudmanifesto.org/opencloudmanifesto1.htm

Supporters include:
-----------------------------------------------
Akamai, AMD, Aptana, AT&T Corp., Boomi, Cast Iron, Cisco, CSC, The Eclipse Foundation, Elastra, EMC, EngineYard, Enomaly, F5, GoGrid, Hyperic, IBM, Juniper, LongJump, North Carolina State University, Nirvanix, Novell, Object Management Group, Open Cloud Consortium (OCC), Rackspace, Red Hat, The Reservoir Project, RightScale, rPath, SAP, SOASTA, Sogeti, Sun Microsystems, Telefónica, The Open Group, VMware

Key Principles:

1. Cloud providers must work together to ensure that the challenges to cloud adoption (security, integration, portability, interoperability, governance/management, metering/monitoring) are addressed through open collaboration and the appropriate use of standards.

2. Cloud providers must not use their market position to lock customers into their particular platforms and limit their choice of providers.

3. Cloud providers must use and adopt existing standards wherever appropriate. The IT industry has invested heavily in existing standards and standards organizations; there is no need to duplicate or reinvent them.

4. When new standards (or adjustments to existing standards) are needed, we must be judicious and pragmatic to avoid creating too many standards. We must ensure that standards promote innovation and do not inhibit it.

5. Any community effort around the open cloud should be driven by customer needs, not merely the technical needs of cloud providers, and should be tested or verified against real customer requirements.

6. Cloud computing standards organizations, advocacy groups, and communities should work together and stay coordinated, making sure that efforts do not conflict or overlap.

An Open Future for CCIF

It is with an eye toward an open future that we address the many apt criticisms levied at the Cloud Computing Interoperability Forum (CCIF) and the difficult circumstance in which this community finds itself.

As the organizers of the community, we would like to make our intentions clear. The following letter is not an edict or decree. It is a heartfelt attempt to reach out to our fellow community members so we might begin to move past recent events and together, discuss our options.

An Apology

While sifting through this week's enthusiastic and well-argued posts, one issue rose to painful clarity: there is not, and has never been, an agreed-upon definition of the CCIF. As organizers we have “announced,” at various times, conflicting statements on how “our members” should view this Forum. These definitions range from “cloud advocacy group,” which implies membership and organized offline activity, to the much narrower “email discussion group.” Due to our failure to better define our project, each community member has been left to his or her own devices, latching onto any number of definitions.

At some point over the last few months, the community began to feel a sense of ownership of and membership in the entity CCIF. Until this week, we had not fully appreciated that the CCIF had become the de facto membership organization for interoperability stakeholders. Under this new premise, it is clear that our direct and private engagement, in the name of the CCIF, vis-à-vis the Open Cloud Manifesto may be viewed as a breach of this community's norms. For this oversight, we take full responsibility.

Open Cloud Manifesto

To this end, when the Open Cloud Manifesto is officially released on Monday, March 30, the CCIF's name will not appear as a signatory. This decision comes with great pain as we fully endorse the document's contents and its principles of a truly open cloud. However, this community has issued a mandate of openness and fair process, loudly and clearly, and so the CCIF cannot in good faith endorse this document.

Knowing what we know now, we certainly would have lobbied harder to open the document to the forum before this uproar ensued.

Governance and the Future of the CCIF

Therein lies the problem. Consider this: even if we had secured the OK to open the Manifesto for discussion before signing in the name of CCIF, there would have been no mechanism by which to formally make changes or give approval. This is, or at least in our opinion ought to be, unacceptable to most of the community.

Therefore, though this is simply a proposal to get us started considering next steps, we feel that it is time for some degree of formalization. This means governance and, of course, some or all of the following components:

1. Formal mission statement, laws and articles
2. Formal membership structure
3. A board or other defined leadership structure
4. Formal decision making mechanism
5. Committees and/or formal interest groups
6. Goals, deliverables and activities
7. Wikis, websites and other properties governed by our laws and articles
8. Financial backing and/or formal associations with industry

If the community coalesces around formalization, CCIF's organizers will go to the greatest possible lengths to ensure the process unfolds openly and in the best interests of the cloud computing community at large, not for the benefit or self-aggrandizement of any specific member or interest group.

Regarding the specifics of the outcome, we are not prepared to propose or oppose any plan. If and when the time is right, we will create a wiki or other mechanism to hash out details. For now, let's start discussing whether this is the right direction for the CCIF.

Thank you and best wishes to all,

Sam Charrington, Reuven Cohen, Dave Nielsen, Jesse Silver (alphabetical)

Saturday, March 28, 2009

Microsoft / CCIF Update

By now most of you have seen the various headlines flying around about Microsoft vs. the Open Cloud Manifesto and their commitment to open cloud computing. Over the last couple of days we have been continuing to engage in active discussions with Microsoft around ways the CCIF and Microsoft can continue to work together. A few key points of clarification regarding the "Open Cloud Manifesto": although I had personally been speaking with Microsoft about the inclusion of some of their requested alterations to the document, we are dealing with several very large companies with numerous points of contact. Somewhere between the various conversations there seems to have been a miscommunication regarding both the publication date and whether or not there was time to make any final alterations. Going forward we will try harder to make sure there are clearer lines of communication in place to prevent this from happening again.

In our discussions it remains apparent that Microsoft was and still is committed to an open cloud ecosystem. On Monday I will be meeting with a team from Microsoft in New York to discuss how we can foster a stronger open relationship between our two organizations.

I'll keep you posted.

Friday, March 27, 2009

Clarification on the Open Cloud Manifesto

I want to make a quick point of clarification on the Open Cloud Manifesto and my involvement. I am not the "leader" or "instigator" of the manifesto. I am among a broader group of supporters & co-authors of this document, all of whom have had equal involvement. Some of the recent media reports have mischaracterized me as the sole creator or originator, which is not correct. This effort would never have come together without the wider industry support and backing of the dozens of companies involved.

This manifesto must not be about any one company or individual but instead about the opportunity we share collectively to enable an open, interoperable cloud ecosystem.

I am but one eager supporter like everyone else.

Repost: Cloud Neutrality

It might be time to revisit this post. This post was originally made December 22nd, 2008.

Yes, I said behind-the-scenes conversations. Like it or not, that seems to be the way the technology world operates. I'm just happy to have a seat at the table.

--
Recently during some behind the scenes conversations, the question of neutrality within the cloud interoperability movement was raised.

The question of cloud interoperability does open an interesting point when looking at the concept of neutrality, in particular for those in a position to influence its outcome. At the heart of this debate was my question of whether anyone or anything can be truly neutral. Or is the very act of neutrality in itself the basis for some other secondary agenda? (Think of Switzerland in the Second World War.) For this reason I have come to believe that the very idea of neutrality is in itself a paradox.

Let me begin by stating my obvious biases. I have been working toward the basic tenets of cloud computing for more than 5 years, something I originally referred to as elastic computing. As part of this vision, I saw the opportunity to connect a global network of computing capacity providers using common interfaces as well as (potentially) standardized interchange formats.

As many of you know, I am the founder of a Toronto-based technology company, Enomaly Inc, which focuses on the creation of an "elastic computing" platform. The platform is intended to bridge the need for better utilization of enterprise compute capacity (private cloud) with the opportunities of a limitless, global, on-demand ecosystem of cloud computing providers. The idea is to enable a global hybrid data center environment. In a lot of ways, my mission of creating a consensus for the standardized exchange of compute capacity is driven by a fundamental vision for both my company and the greater cloud community. To say interoperable cloud computing is something I'm passionate about would be putting it mildly. Just ask my friends, family or colleagues and they will tell you I am obsessed.

Recently, I created a CCIF Mission & Goals page, a kind of constitution which outlines some of the group's core mission. As part of that constitution I included a paragraph stating what we're not. In the document I stated the following: "The CCIF will not condone any use of a particular technology for the purposes of market dominance and or advancement of any one particular vendor, industry or agenda. Whenever possible the CCIF will emphasize the use of open, patent free and or vendor neutral technical solutions." This statement directly addresses some of the concepts of vendor bias, but doesn't address bias within the organizational structure of the group dynamic.

Back to the concept of neutrality as a cloud vendor: as interest in cloud interoperability has begun to gain momentum, it has become clear that these activities have more to do with realpolitik and less to do with idealism. A question was posed: should a vendor (big or small) be in a position to lead the conversation on the topic of cloud interoperability? Or would a more impartial, neutral party be in a better position to drive the agenda forward?

The very fact that this question is being raised is indicative of the success of both the greater cloud computing industry and our efforts to drive some industry consensus around the topic of interoperability. So regardless of my future involvement, my objectives have been set into motion. Which is a good thing.

My next thought was whether there is really such a thing as a truly neutral entity. To be truly neutral would require a level of apathy that may ultimately result in a failed endeavour. Or to put it another way, to be neutral means being indifferent to the outcome, which also means there is nothing at stake to motivate an individual or group to work towards its stated goals. My more pragmatic self also can't help but feel that even a potentially "more neutral" party could have some ulterior motives - we all have our agendas. And I'm ok with that.

I'm not ok with those who don't admit to them. The first step in creating a fair and balanced interoperable cloud ecosystem is to in fact state our biases and take steps to offset them by including a broad swath of the greater cloud community, big or small, vendor, analyst or journalist.

So my question is this, how should we handle the concept of neutrality and does it matter?

Thank you Microsoft

Let's put this into perspective for a moment, regardless of what the manifesto says (if you follow the CCIF forum or read my blog, you've got a pretty good idea). The goal of the open cloud manifesto is to bring wide-scale industry attention to open, interoperable cloud computing. In this goal we have succeeded even before the manifesto has been published.

In one move, Microsoft has provided more visibility to our cloud interoperability effort than all our previous efforts combined. For this reason alone we need to thank Microsoft. Moving forward I believe Microsoft will continue to be a major partner in our activities, including recently signing on as a global sponsor for our CloudCamps. Who knows, maybe they'll sign on to the manifesto too.

Thursday, March 26, 2009

Re: Microsoft Moving Toward an Open Process on Cloud Computing Interoperability

I have received no fewer than 100 emails and voicemails about the Microsoft blog post this morning. See > http://bit.ly/10UHgP

Let me say, we've been in active discussions with Microsoft about the open cloud manifesto, which has literally come together in the last couple of weeks. It is unfortunate they feel this way. Microsoft was among the first to review the manifesto, and their 2:28 AM pre-announcement of the manifesto was a complete surprise given our conversations. If Microsoft is truly committed to an open cloud ecosystem, this document provides a perfect opportunity to publicly state it.

Introducing the Open Cloud Manifesto

Over the last few weeks I have been working closely with several of the largest technology companies and organizations to help co-author the Open Cloud Manifesto. Our goal is to draft a document that clearly states that we (including dozens of supporting companies) believe that, like the Internet, the cloud itself should be open. The manifesto does not speak to application code or licensing but instead to the fundamental principles that the Internet was founded upon - an open platform available to all. It is a call to action for the worldwide cloud community to get involved and embrace the principles of the open cloud.

We are still working on the first version of the manifesto, which will be published Monday, March 30th with the goal of being ratified by the greater cloud community. Given the nature of this document we have attempted to be as inclusive as possible, inviting most of the major names in technology to participate in the initial draft. The intention of this first draft is to act as a line in the sand, a starting point for others to get involved. That being said, this manifesto is not specifically targeting any one company or industry but instead is intended to open a dialogue on the opportunities and benefits of fostering an open cloud ideology for everyone.

Many clouds will continue to be different in a number of important ways, providing unique value for organizations. It is not our intention to define standards for every capability in the cloud and create a single homogeneous cloud environment. Rather, as cloud computing matures, there are several key principles that we believe must be followed to ensure the cloud is open and delivers the choice, flexibility and agility organizations demand. This is just one of several initiatives and announcements we will be making in the coming weeks as we move to organize the Cloud Computing Interoperability Forum (CCIF) and CloudCamp into a formalized organization.

If you would like to be part of the discussion we invite you to get involved at our Open Cloud Manifesto Discussion Group or on the Cloud Computing Interoperability Forum (CCIF).

Wednesday, March 25, 2009

Cloud Computing Mergers & Acquisitions

One topic that keeps recurring in my various conversations is the sudden interest in cloud-centric M&A activity, thanks in part to the recent IBM / Sun rumors. Since I continuously find myself in the midst of a lot of the back-room, off-the-record type conversations, I thought I'd share some of my recent insights into cloud M&A.

Without going into detail, lately for one reason or another I have become aware that several of the largest names in technology have put together "cloud M&A teams" focused on buying up both established cloud-related companies and some of the latest crop of cloud startups. What is clear in these acquisition discussions is that the traditional models applied to M&A don't work in the current economy, and this is especially true in cloud computing.

Along with the financial crisis, we are facing what I can only describe as a technological crisis of identity. Suddenly every tech company worth its salt is claiming to be a cloud company or has a cloud computing strategy of some sort, much like the late nineties when every company suddenly had a web strategy. For most of the established tech companies this means buying your way into this technology revolution. Also interesting is that a lot of these companies are sitting on significant cash stockpiles. The problem is there are not nearly enough solid, revenue-generating cloud computing companies to match the demand and hype surrounding the cloud sector, which has caused some very interesting side effects.

One of those is the rise of the mini-acquisition, which I would describe as buying a company consisting of a couple of guys with a good idea or a rough "beta". To me this seems like a glorified hire with a signing bonus. Another is the R&D acquisition: basically, it's easier to buy a venture-funded company which is light on customers but strong on technology than to build it yourself. In the case of a VC-backed company it's certainly easier to value the startup since, in a sense, the VCs have already set the value. But this leaves the independent bootstrappers in an interesting spot. This group is in a great position because they don't have to worry about paying back their venture investors and have been operating successfully for some time. A 10 or 20 million dollar exit goes a lot further with a bootstrapped startup than a 60 million dollar exit does with a VC-backed startup.

From what I can see, the current cloud M&A market is focused on the fact that we're in the midst of a seismic shift in the IT world, one that the larger players are having trouble adapting to. No longer do you have to be a Microsoft or Google spending billions on infrastructure to gain significant market share; just look at Twitter or the dozens of successful startups using Amazon Web Services. What's more, the balance of power has shifted: a couple of guys with a good idea can now influence the direction of major corporations thanks to the emergence of social web technologies. Ideas have become among the most valuable assets you can have.

In the coming weeks you're going to hear about a variety of M&A deals. I think you will be surprised to see that these deals are not based on booked revenue or other "traditional" metrics but rather on market influence and speed to market. What is clear is that cloud computing has less to do with pay-per-use, on-demand, outsourced infrastructure and more to do with a term that has, in a very general sense, come to define the next generation of computing.

Free Expo Plus Passes for Cloud Computing Conference & Expo 2009 East, next week in New York City

I am happy to announce free Expo Passes to the Sys-con Cloud Computing Conference & Expo 2009 East, next week in New York City. The Free Expo Plus Registration includes access to Exhibits, Keynotes, Vendor Technology Presentations, and Power Panels.

To redeem a complimentary Expo Plus Pass, please visit https://www3.sys-con.com/cloud0309/registernew.cfm?a1=expoplus and type in the coupon code cloudspeakerexpo (case sensitive).

Don't forget to come by and see my presentation on Unified Cloud Computing at 9:25 - 10:10 am on Wednesday, April 1st.

Monday, March 23, 2009

Strategies and Technologies for Cloud Computing Interoperability (SATCCI)

Had a great day hanging out in Washington DC with a variety of federal government & cloud computing folks discussing Strategies and Technologies for Cloud Computing Interoperability, held in conjunction with the Object Management Group (OMG) March Technical Meeting.

The day was broken into two parts, with standards-related groups presenting in the morning and "cloud vendors" in the afternoon. A few highlights from the morning sessions included an announcement from NIST, a non-regulatory federal agency within the U.S. Department of Commerce whose mission is to promote U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life. They announced the creation of a "Cloud Interoperability Profile" for federal-compliant cloud infrastructures, with the goal of creating a standardized profile for cloud computing within the federal government. I'm told this is a big step for the agency.

Craig Lee, President of the Open Grid Forum, suggested that we need to take more time to examine the overlap between various standards groups, mapping the opportunities for collaboration. Something I think would be useful. Also, from a lot of the discussions it sounded like there was a lot of duplication among the various groups and we needed better insight into these efforts. I also found this quote in Lee's slide deck interesting; it was from Chris Smith, OGF's VP of Standards, who said "Grids are access models, clouds are business models." The quote makes a lot of sense. (I may use that myself)

Winston Bumpus, Director of Standards Architecture at VMware and President of the DMTF, also announced that OVF had reached a v1.0 release and suggested that it was an ideal cloud migration and deployment package. I suggested that OVF had the potential for many other uses, such as a platform-centric application deployment package like what you might use for deployment on a platform-as-a-service offering such as Google App Engine. He seemed to agree. I also had a chance to speak with Bumpus, and he did a great job of clarifying his position on potential vendor bias within the standards group, noting that like any political organization you need to wear many hats and do what's right for the community at large while balancing your responsibilities to your employer. He did a great job of putting my previous VMware / DMTF conflict-of-interest concerns to rest. You can tell he's been doing this for a while; I am not the first to raise this. Later in the day he indicated we still had work to do in defining a common cloud taxonomy, but also noted that it was a difficult proposition without wider industry support. He also recommended we create what he called a "scrum wiki" that might help organize our taxonomy efforts, and asked if the CCIF would take the lead on creating it. Something I am also open to.

In the afternoon the folks from Salesforce.com presented their thoughts on cloud interoperability, which were a kind of bizarre mix of confidence and blissful ignorance. To start it off they stated that there is no such thing as a private cloud, quoting a statement they claim was made by Joe Weinman at AT&T. (It looked like it was taken out of context.) Then they informed us they were indeed interoperable because you could export your data in a "text format" if you ever wanted to leave their "cloud". When I asked if they thought that exporting a 1 terabyte text file was a good idea, they said they thought it was, going on to say it was proof of their commitment to interoperability. Digging their hole even deeper, they then went on to state that their Apex programming language, built for force.com because no other multi-tenant languages existed, was a great example of interoperability, but admitted you could only run it on the force.com platform, noting that it had Facebook and Twitter integration, which was why it was interoperable.

For me one of the most interesting presentations was from Microsoft. They outlined their software + services strategy and commitment to open source and open standards. They also shed some light on their global data center deployments, saying that they're buying upwards of 20,000 servers a month around the globe. They also stated that Azure would be released later this year with a "dedicated" deployment option that could run in virtual machines bundled with Hyper-V, as well as leveraging a hybrid geotargeting option.

After the workshop I had a chance to go out for a few beers with Dirk Nicol from IBM and Scott Radeztsky, who both noted their commitment to the concepts of an Open Cloud. But that's a story for another time.

I want to thank Bob Marcus for putting on this great event!

Thursday, March 19, 2009

Guest Post on Cisco: Unified Computing Perspectives

The folks at Cisco have kindly asked if I would be a guest contributor to the Cisco Data Center Blog. Being a fan of the latest Cisco Unified Computing approach I was happy to help. My post is titled, Unified Computing Perspectives and examines the benefits of Cisco's unified approach to cloud computing.

Wednesday, March 18, 2009

10 Steps To Unified Cloud Computing: CloudExpo NYC - April 1st

I'm happy to announce I will be presenting 10 Steps To Unified Cloud Computing at the Sys-con Cloud Computing Expo on Wednesday, April 1st from 9:25 to 10:10 am.

This presentation will examine the opportunities for unified cloud computing: creating an open and standardized cloud interface for the unification of various cloud APIs, a singular programmatic point of contact that can encompass the entire infrastructure stack as well as emerging cloud-centric technologies, all through a unified interface.

In this vision for a unified cloud interface, the Resource Description Framework (RDF) is an ideal method to describe a semantic cloud data model (taxonomy & ontology). The benefit of RDF-based ontology languages is that they act as a general method for the conceptual description or modeling of information implemented by web resources. These web resources could just as easily be "cloud resources" or APIs. This approach may also allow us to easily take an RDF-based cloud data model and use it within other ontology languages or web services, making it both platform and vendor agnostic. Using this approach we're not so much defining how, but instead describing what.
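To make that a bit more concrete, here is a minimal sketch of what describing a cloud resource in RDF might look like, using the Python rdflib library. The "cloud" vocabulary and the resource URI below are hypothetical placeholders for illustration, not the actual UCI ontology:

# Minimal sketch: describing a hypothetical compute node as RDF triples.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

CLOUD = Namespace("http://example.org/uci/cloud#")  # hypothetical vocabulary

g = Graph()
g.bind("cloud", CLOUD)

node = URIRef("http://provider.example.com/api/nodes/42")  # hypothetical resource
g.add((node, RDF.type, CLOUD.ComputeNode))      # what the resource is
g.add((node, CLOUD.memoryMB, Literal(2048)))    # describe the "what", not the "how"
g.add((node, CLOUD.provider, Literal("ExampleCloud")))
g.add((node, CLOUD.state, Literal("running")))

# Any vendor's tooling could consume the same triples, which is the point
# of describing resources rather than prescribing a specific API call.
print(g.serialize(format="turtle"))

The same triples could later be mapped into other ontology languages or fed to a web service, which is exactly the platform- and vendor-agnostic quality described above.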


Reviewing Sun's Open Cloud Platform & API

Big news from Sun Microsystems today on several fronts: first, there is talk of an IBM / Sun merger, and second, Sun has officially unveiled their Open Cloud Platform & API. Exciting news on both fronts.

As a member of the Sun Cloud Computing Strategic Advisory Council, I have been working closely with Sun since last year in an effort to guide their direction in terms of openness, portability and interoperability in their cloud efforts. These are areas I believe Sun has done a tremendous job of addressing in their latest cloud offering. Lew Tucker, Tim Bray, Craig McClanahan and the rest of the team at Sun have clearly spent a lot of time developing the most feature-rich, powerful and open cloud on the market.

One of the first things I'd like to point out is their open source API. What you will immediately observe is that they've taken an open, extensible approach towards developing a cloud computing API. While fully RESTful, the API requires the consumer of the service to know only the starting URI; everything else is discoverable by retrieving the representations of the various entities. This, along with it being "free of any restrictions" and made available under the Creative Commons Attribution 3.0 License, are excellent components. My one concern about this particular license is its attribution requirement: "You must attribute the work in the manner specified by the author or licensor". I would prefer to see a less restrictive license. That being said, my first impression is that the API is very easy to understand and Sun has applied the RESTful principles beautifully.
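As a rough illustration of that discovery style (the base URL and JSON field names here are hypothetical, not Sun's actual endpoints), a client only needs the entry point and then follows the links it finds in each representation:

# Sketch of hypermedia-style discovery: start at one URI, follow links.
import requests

BASE = "https://cloud.example.com/api"  # assumed entry point, not Sun's real URL

# Fetch the root representation; it advertises everything else.
root = requests.get(BASE, headers={"Accept": "application/json"}).json()

# Follow the links found in the representation instead of hard-coding paths.
vdc_url = root["virtual_data_center"]["uri"]   # hypothetical link field
vdc = requests.get(vdc_url).json()

for cluster in vdc.get("clusters", []):        # hypothetical structure
    print(cluster["name"], cluster["uri"])

The design win is that providers can reorganize URLs without breaking clients, since clients never assume anything beyond the starting URI.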

The Sun Cloud API uses the concept of a Virtual Data Center for launching virtual machines, assigning public IP addresses, attaching storage volumes, etc. Lew Tucker, CTO, Cloud Computing, had this to say: "When looking at what would be needed to run Sun.com, eBay.com, or other large web properties, we learned that it was important to introduce abstractions for grouping machines, creating subnetworks, isolating resources, and support for teams in the virtual cloud environment. This is the basis of a Virtual Data Center, that every user gets upon joining Sun's cloud".

It's a perfect example of unified computing in action. Simply stated, when someone signs up with the Sun Cloud, they are given a Virtual Data Center or what I like to call a "Virtual Private Cloud". According to Sun, it was important to make it possible for people to build the equivalent of a physical datacenter in the cloud. That way customers are able to more effectively treat their cloud computing resources much as they would a traditional data center. Sun's Virtual Data Centers represent the collection of virtual machines, networks, and storage associated with each customer, isolated from others sharing the cloud computing service. Within a customer's data center, there may be teams of users fulfilling different roles, as well as multiple spaces, partitions, or clusters which can be created to house the resources applied to different applications or groups. This facilitates the dynamic creation and control over multi-instance applications and the formation of different environments for different teams such as production, staging, and test environments.

As an additional benefit they outlined that customers should be able to quickly clone or copy an application running on multiple instances, creating opportunities for new approaches to manage software releases and dynamically changing loads. This ability to model the system architecture of a real world application and its often complex set of instances, makes it possible for developers to explicitly describe and share entire architectural design patterns. They may then add others to this Virtual Data Center, or begin to architect their application.

To make it easy for even the individual developer to get started, everyone starts with a Virtual Data Center containing a single default cluster or partition. Using the GUI, they can simply drag and drop virtual machines, virtual subnets, and storage devices. Once an application is developed, the entire set of virtual machines can be stored, copied, or cloned to meet a myriad of needs.
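To illustrate the containment model being described (a toy sketch of my own in Python, not Sun's code), a Virtual Data Center holds clusters, which in turn hold VMs, subnets and volumes, and a whole cluster can be cloned to reproduce an architectural pattern:

# Toy model of the VDC -> cluster -> VM containment and cloning idea.
import copy
from dataclasses import dataclass, field
from typing import List

@dataclass
class VM:
    name: str
    cpus: int
    memory_mb: int

@dataclass
class Cluster:
    name: str
    vms: List[VM] = field(default_factory=list)
    subnets: List[str] = field(default_factory=list)
    volumes: List[str] = field(default_factory=list)

@dataclass
class VirtualDataCenter:
    owner: str
    clusters: List[Cluster] = field(default_factory=list)

    def clone_cluster(self, name: str, new_name: str) -> Cluster:
        # Cloning copies the whole architectural pattern, e.g. to spin
        # up a staging copy of production.
        src = next(c for c in self.clusters if c.name == name)
        dup = copy.deepcopy(src)
        dup.name = new_name
        self.clusters.append(dup)
        return dup

vdc = VirtualDataCenter(owner="acme")
vdc.clusters.append(Cluster("production", vms=[VM("web-1", 2, 2048)]))
staging = vdc.clone_cluster("production", "staging")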

I know that some of you may point out that Sun is late to the game, a game that is probably still in its first inning. What Sun has done with their Cloud is take steps to create one of the first truly open cloud infrastructure offerings on the market, geared specifically with the needs of enterprise users in mind. They may not be first, but they are certainly now among the best cloud providers.

--

If you're interested in discussing the inner workings of Sun's Cloud API, there is a message thread I started on Sun's kenai developer portal > http://kenai.com/projects/suncloudapis/forums/forum/topics/524-API-Discussion?

Also, read Sun’s official announcement here.

Sun has also released A Guide to Getting Started with Cloud Computing, which offers a useful overview of the basic issues whilst relegating most of the Sun pitch to a separate section.



Monday, March 16, 2009

Cisco's Grand Vision for Unified Computing

In probably one of the more interesting announcements to come out of the hardware space in quite some time, Cisco today announced that the computer is still the computer, just a virtual one inside a physical one. Actually, the more interesting aspect is that they've shed some more light on their Unified Computing vision. As anyone who has followed my blog or company knows, the unification of IT is a key area of interest to me, so this news is quite exciting.

First of all, what I find interesting are the similarities to the concepts within the Unified Cloud Interface project (UCI). Our stated goal is "One abstraction to rule them all" - an API for other APIs. Similarly, Cisco is attempting to provide a singular virtualized point of contact that can encompass the entire infrastructure stack through a unified computing platform, although today's announcement does little to actually demonstrate what this interface will look like other than to say it's a command line interface with web service APIs coming soon. They do promise it will use standardized APIs and will be based on open technologies -- wherever possible. I'm hoping to get more details on this API in the coming days and will give a full overview once I've had a chance to review it. Regardless of the API, I feel Cisco is in an ideal position to provide the interoperability glue that sits between the legacy data center and a cloud-centric future.

As I've written before in my verbosely titled post, Technological Universalism & The Unification of IT, Cisco's move into server hardware makes a lot of sense for the traditionally "networking" focused company, a company that derives most of its revenue from providing static "boxes" that sit in your data center doing one thing and one thing only. The recent trend in IT has been the move away from boxed appliances toward virtualizing everything. Whether networking gear or storage, everything is becoming a virtual machine (VM). The requirement to buy expensive "static" networking gear is quickly becoming a relic of the past. What is acting as an application server today may be a network switch or load balancer tomorrow. This is the opportunity I feel Cisco is going after. (A kind of grand unification of IT resources)

I should also note, the folks at Cisco have asked me to write a guest post on the Cisco blog which further details my views on the benefits of unified computing, hopefully it will be posted in the next couple days. I'll make sure to share the link when it's available.

Is Cloud Computing Becoming A National Security Risk?

I'm back from Stockholm and starting to get caught up on my various blog reading. In one of the more interesting posts, Ken Fischer asks a very thought-provoking question on his web 2.0 blog. His question is simple yet far reaching: "What would the economic impact of Google Mail going down be?"

Fischer says that "In the next 5 years or so, there will be a massive shift from single server to cloud computing as well as an increasing reliance on everything being always up because of the interwoven nature of the semantic web. Websites and webservers will no longer be individual and isolated but exist on the ‘cloud’..."

With the newly appointed / retired Federal Chief Information Officer Vivek Kundra in the US, my question is, will a key focus of his job description be addressing cloud-based availability & security? Could cloud security become so critical that it becomes a matter of national security?

In another recent post on the ITworld website, Meridith Levinson outlined the tough job facing the new US CIO, who is forced to balance openness with security. In the post she goes on to note "that Kundra will have to strike a balance between the President's drive for openness and transparency and the need for security. Cyber-threats against the country and the government are growing exponentially, and the desire to connect agencies and make government open, transparent and interoperable makes it easier for hackers to carry out their attacks." -- Will openness and interoperability make us as a nation less secure?

For me the bigger question is, assuming we are moving toward a fully outsourced computing future, what happens when crucial pieces of communication infrastructure are brought down, either by accident or on purpose? Are we moving toward a future where Gmail is considered critical to the national security of the country? And if this is a reality we may soon be facing, how can governments work to protect these key pieces of cloud infrastructure? Or should they?

Saturday, March 14, 2009

The Computer is the Operating System?

Random thought: with the rise of chip-level virtualization from Intel and AMD and the sudden interest in netbooks (just enough computer), is a full-blown OS required any longer?

Phoenix Technologies may have the right idea with their BIOS-level virtualization technology. Phoenix envisions multiple lightweight apps running inside the BIOS, enabled by its Xen-based HyperCore hypervisor. According to the company, you can check your email, browse the Web or launch a media player without burning through your battery life because the core application components sit directly on the CPU. Phoenix is banking on software vendors seeing the market potential and developing HyperSpace-friendly offerings, I'd imagine complete with an iPhone-style app store in the works. Imagine an iPhone-style netbook.

Phoenix has also made a strong move in embedded security, including firmware that leverages the Trusted Platform Module (TPM) to provide pre-boot device authentication. This will be particularly important in secured and auditable cloud computing.

Another interesting opportunity is using a light, cloud-centric OS such as Google's Android on a netbook PC or server. The concept is just enough OS (jeOS), with the majority of the heavy lifting done offsite, or in the cloud. My question is, is this the future of personal computing?

Comments?

CloudAve: CloudCamp gets big in London (800+ attendees)

Sounds like the latest CloudCamp London has become bigger than we could have ever imagined. According to a post by Paul Miller, more than 800 people attended, although he admits it may have been in the 500-600 range. An amazing turnout regardless.

Miller goes on to say, "The evening was ably compèred by Simon Wardley and James Governor, who did a great job of keeping the initial round of lightning talks to time."

Also interesting, he said, "So… an interesting event, and well worth attending, but possibly in danger of becoming a victim of its own success if expectations are not better managed for next time." Read his overview at CloudAve.com.

Maybe it's time to reevaluate our approach to CloudCamp, especially with the rapid rise of these events around the globe. If you have any comments or suggestions to help make the events better, I encourage you to get involved on our mailing list at Google Groups.

CloudCamp New York - April 1st

We have organized a CloudCamp for the night before the CCIF Wall Street event, on Wednesday, April 1st in New York, NY … No joking!

Sun Microsystems’ Main NYC Office
101 Park Ave.
New York, NY

If you are interested in sponsoring or submitting a proposal for a Lightning Talk, send an email to dave AT platformd DOT com.

Tentative Schedule:
6:00pm: Registration and Networking
6:30pm: Introductions
6:45pm: Lightning Talks
7:15pm: Lightning Panel
7:45pm: Begin Unconference
8:00pm: Unconference Session 1
9:00pm: Unconference Session 2
10:00pm: Networking with some food/drinks

RSVP Here

Friday, March 13, 2009

On-Demand Overspending

Daryl Plummer, an analyst at Gartner, has written a great piece on what he's calling On-Demand Overspending. I love the term. Plummer is quickly becoming one of my favorite Gartner analysts, a man who isn't afraid to speak his mind on the issues surrounding cloud computing. His previous Cloud Overdraft Protection concept also does a great job of outlining some of the problems in "cloud bursting" and how they can lead to runaway costs.

In his most recent post aptly titled "Cloud Elasticity Could Make You Go Broke" he says "Some research done by Onecompare.com was highlighted in a web article recently citing how mobile customers in the UK were overspending on their mobile plans because they had been sold plans smaller than their actual needs. These users either talked using more minutes than they expected they did, or talked beyond the number of planned minutes they had paid for. The same will happen, both intentionally and unintentionally with cloud services."

His post is worth the read.

Amazon reserves the right to host your applications

At first the concept of Amazon's Reserved Instances didn't really make any sense to me. But then it suddenly became clear what Amazon is doing with its new Reserved Instances feature. Simply, they are going after the web hosting space.

According to the Amazon EC2 email sent out earlier, EC2 Reserved Instances are "an option to make a low, one-time payment for an instance to reserve capacity and further reduce hourly usage charges. As with On-Demand Instances, you will still pay only for the compute capacity that you actually consume, and if and when you do not use an instance, you will not pay usage charges for it."

The pricing is as follows:

Standard Reserved Instances    1 yr Term   3 yr Term   Usage
Small (Default)                $325        $500        $0.03 per hour
Large                          $1300       $2000       $0.12 per hour
Extra Large                    $2600       $4000       $0.24 per hour

High CPU Reserved Instances    1 yr Term   3 yr Term   Usage
Medium                         $650        $1000       $0.06 per hour
Extra Large                    $2600       $4000       $0.24 per hour

What I find most interesting about this is that Amazon seems to be going after the service provider & web hosting space, with a pricing model that is compelling for services that may not need instant scale but instead require cheap, reliable web hosting.

Doing the math, at $0.03 an hour, a small reserved EC2 instance will cost you about $262 a year for the uptime plus $325 for the reservation, or roughly $49 a month. Compare that to about $876 a year, or $73 a month, using an on-demand instance (not including storage and bandwidth). This new pricing model brings EC2 in line with most VPS-style hosting providers and removes any doubt for those who may want to host their web applications completely on EC2.
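For those who want to check the math themselves, here is the back-of-the-envelope calculation using the prices quoted above (the $0.10/hour on-demand rate is inferred from the $876/year figure; storage and bandwidth are excluded):

# Rough cost comparison for a small instance running 24x7 for one year.
HOURS_PER_YEAR = 24 * 365  # 8760

# Small reserved instance: one-time fee spread over the 1-year term.
reserved_fee = 325.00
reserved_hourly = 0.03
reserved_year = reserved_fee + reserved_hourly * HOURS_PER_YEAR
print("Reserved:  $%.0f/yr (~$%.0f/mo)" % (reserved_year, reserved_year / 12))

# Small on-demand instance at the assumed $0.10/hour rate.
on_demand_hourly = 0.10
on_demand_year = on_demand_hourly * HOURS_PER_YEAR
print("On-demand: $%.0f/yr (~$%.0f/mo)" % (on_demand_year, on_demand_year / 12))

This prints roughly $588/yr (~$49/mo) reserved versus $876/yr (~$73/mo) on demand, which is where the VPS-style comparison above comes from.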

What is clear is that Amazon is a master of utility pricing and they are not afraid to keep innovating on it. As always, I'm blown away by their unique and cutting-edge approach to cloud costing. Nice work guys!

Inside access to Microsoft VSTS Invitation Only Event

Wanna feel like an insider? Well now you can thanks to yours truly.

Microsoft is holding a series of invitation-only events and I have the ultra-secret invitation codes available for you. (Well, I guess it's not an ultra-secret event, because I'm allowed to share them with you.) You'll be able to see the best of VSTS 2008 while also getting an early view of what is coming in VSTS 2010.

VSTS or Visual Studio Team System provides multi-disciplined team members with an integrated set of tools for architecture, design, development, database development and testing of applications. Team members can continuously collaborate and utilize a complete set of tools and guidance at every step of the application life cycle.

You'll find the one-day Team System "Big Event" in Denver CO, Mountain View CA, Irvine CA, Portland OR, and Phoenix AZ.

Sessions

  • Test Driven Development - Improving .NET Application Performance and Scalability.
  • "It Works on My Machine!" - Closing the Loop Between Development and Testing.
  • Treating Databases as First Class Citizens in Development.
  • Architecture without Big Design Up Front.
  • Development Best Practices & How Microsoft Helps
  • "Bang for Your Buck" - Getting the Most out of Team Foundation Server.

Registration + Invitation Codes

Denver, CO - April 22, 2009 - Click here to register with invitation code: DD1A7F
Mountain View, CA - April 28, 2009 - Click here to register with invitation code: 80D459
Irvine, CA - April 30, 2009 - Click here to register with invitation code: A86389
Portland, OR - May 5, 2009 - Click here to register with invitation code: 2DC0A9
Phoenix, AZ - May 7, 2009 - Click here to register with invitation code: 90BC47

US Government seeking 'game-changing' concepts against cyber attacks

I just received an interesting notice published in the Federal Register by the US National Coordination Office for Networking and Information Technology Research and Development seeking "industry submissions of research concepts that will be 'game-changing' in the efforts to protect government systems from cyber attacks." Submissions will be accepted through April 15.

The request, which is part of the Comprehensive National Cybersecurity Initiative, is the first stage of what the office has called the National Cyber Leap Year, in which the government identifies concepts that create a leap in technology that could bring about such a shift.

NITRD will collect responses until April 15, after which a working group of six to eight high-ranking IT officials will decide which ideas show the most promise. The government will then hold workshops to consider the research needed to make the concepts a reality. Government, academia and industry will work together on how to develop the concepts.

See announcement here > http://edocket.access.gpo.gov/2009/pdf/E9-4321.pdf

Thursday, March 12, 2009

Enomaly ECP 2.2.3 Released

ECP 2.2.3 has been released. Several bug fixes and enhancements have been made. This release should be considered a "maintenance" release.

  • New approach to handling an index error caused by SQLObject (the TupleIndexError people were seeing on 64-bit platforms with SQLObject).
  • Better handling of ECP clustering when the host machine record does not exist in the database.
  • Fixed a defect in the way remote eggs are installed with the vmfeed extension module.
  • Better exception handling when importing existing Libvirt domains.
  • Fixed several invalid javascript references.
  • Small CSS fix.

Download:
http://src.enomaly.com/wiki/Download

Changelog:
http://src.enomaly.com/wiki/Documentation/ChangeLog

Live From Stockholm, Ruv I Molnet

I've been hanging out the last couple of days in Stockholm, Sweden, the city built on stocks, an island (holm) in the midst of a marsh. It's been cold and snowing, but the conversations have been great. Because I live in Toronto, people here in Sweden keep asking me if I know Mats Sundin, formerly the captain of the Toronto Maple Leafs hockey team, whom, interestingly enough, I have met a few times. Sundin seems to be quite popular here, so my joke about him trying to pick up my twin sister got lots of applause.

I'm in town to keynote the IASA Cloud Architecture conference. In case you're not familiar with IASA, the International Association of Software Architects is the premier association focused on the architecture profession, advancing best practices and education while delivering programs and services to IT architects of all levels around the world. I'm here as a kind of goodwill ambassador for cloud computing. And before you say anything, I have no idea how I got that title. But if it means free trips to Europe, I'm happy to oblige.

Last evening I went out with Daniel Akenine, Teknikchef @ Microsoft AB (that's Swedish for CTO of Microsoft Sweden), Per Bjorkegren of Sogeti and Christer Berg from Dataföreningen Kompetens, who are also the main organizers of the conference. Over several bottles of wine, we had a very interesting conversation about the opportunities for cloud computing in Europe. One of the points they brought up contrasted cloud interoperability with database interoperability: although database interop (ODBC) is commonplace, data portability is still a major issue. The general consensus was that this could pose a similar issue for cloud computing.

During my (un)keynote this morning, I was asked a great question: "Is there an opportunity to certify a cloud as secure, similar to an e-commerce site?" This concept is something I've spoken about before, and in some ways I think it is what IBM is attempting to do with their cloud certification program. Also, companies like RSA are in a good position to provide an audit of a cloud provider's "security" and trust. The bigger question is, will a cloud certification change public perceptions around cloud computing? For that, I think the large software companies (IBM, Microsoft, Cisco, etc.) need to do a little proactive marketing of the benefits and concepts of cloud computing in the mainstream media.

I must say I am enjoying Stockholm and am happy to announce an upcoming CloudCamp Stockholm for this September. There also seems to be a great deal of interest in creating a "Swedish Molnet" (molnet is Swedish for cloud), something that I also fully support. Interestingly, one of the opportunities outlined for cloud computing in Sweden is a new initiative with the Swedish police force. Word is they're creating some sort of proactive cloud-centric website. My Swedish isn't great, so I'm sure I'm missing the big picture. More details to follow.

Hej då

Tuesday, March 10, 2009

Reaffirming the CCIF Goals and Mission

Over the last six months the Cloud Computing Interoperability Forum has grown from a handful of participants interested in cloud interoperability into one of the most vocal and visible cloud advocacy groups. In some of the recent conversations it seems to have become unclear what our goals and objectives are, so I thought I'd take a moment to restate our mission.

The goal of the CCIF is to be a thought forum and advocacy group for cloud interoperability and related standards; it is NOT to be a standards body.

To accomplish this goal, our mission is threefold:

1) Free, in-person events like CloudCamp and the Wall Street / Mountain View & Washington Cloud Interoperability Forums.
2) A community site for professional, open and unmoderated discussion relating to cloud computing interoperability.
3) A place to incubate actual working groups producing real technical proposals for specific areas of cloud computing interoperability. We should encourage these proposals to target submission for consideration to an existing standards or industry organization if one exists.

Recently some of you have gone out and created various secondary groups in an attempt to "get things done". We encourage this and hope to support the various efforts in any way we can. One of the benefits of CCIF and CloudCamp is the extensive network of corporate sponsors & industry organizations we bring to the forum. Simply put, we can help you and your technical proposal get in front of the right people when the time is right.

I also agree with the comments that we should think about the key challenges for cloud interoperability and identify "actionable" specific sub-areas to work on, and how they fit into supporting specific use cases of interoperability. To that end, let's identify use cases, and identify the technical areas that need to be addressed to enable those use cases. Details on both the use cases and the technical areas should be what we encourage the workgroups to progress, with an eye to heading this toward existing standards bodies.

If you are interested in getting involved in something other than "talk", we would suggest joining the Unified Cloud Interface Project at http://groups.google.com/group/unifiedcloud Our first group meeting is this Thursday, March 12th at 12 noon EDT (9 PDT, 4/5/6pm across Europe) and will be held in an XMPP ([email protected]) chat room. For anyone not able to attend, logs will be available at http://talk.cloudforum.org

To help illustrate some of the opportunities & challenges, we've put together a diagram outlining some of the interoperability areas we see within the wider cloud community. These challenges include Management, Endpoints, Communications, Infrastructure, Platform, and Physical. Please see the PDF document. We look forward to your comments and suggestions.

(P.S. I'm off to Stockholm for the rest of the week, so if I don't respond right away, you'll know why)

Monday, March 9, 2009

Appistry Bridging Application Clouds

Interesting developments in the world of unified multi-cloud computing. Appistry, an early pioneer in enterprise cloud computing, today announced a new product that gives companies the power to migrate their heterogeneous applications to cloud-based environments. Called Appistry CloudIQ Manager & Engine, the product is described as a single point of application-level management across public and private clouds. From what I can tell, it looks very promising.

Before I go any further, in full disclosure, Sam Charrington (VP Product Management and Marketing at Appistry) is one of the co-creators of CloudCamp as well as a key backer of both the Cloud Computing Interoperability Forum (CCIF) and the Unified Cloud Interface Project (UCI). I know Charrington well, and interoperability is a big area of interest for him both personally and professionally. Regardless of our previous work, this is a very interesting announcement and marks one of the first true hybrid cloud products on the market today.

In my conversation with Charrington this morning he said that "one of the problems we see at the infrastructure level is folks being forced to deploy and manage things at the granularity of the virtual machine--we think that that is a cumbersome approach and want people to be able to package, deploy and manage applications independent of the VMs themselves."

I couldn't agree more. The benefit for legacy applications is that a VM acts as a legacy container, something that bridges the old with the new. But in the very near future, the line between application and OS will quickly become blurred. The flavor of OS will no longer be a chief requirement; rather, the flavor of cloud provider (internal or external, open or closed) will become the new OS of choice. Appistry's approach is targeting this new reality: a seamless global application platform, or to put it another way, something like a hybrid "Google App Engine" for the enterprise.

What I like about this approach is that it's infrastructure & application agnostic. Although still focused largely on enterprise private clouds, they've realized the opportunity in providing tools to bridge public and hybrid clouds in a secure and efficient way. More simply, they've built a platform geared toward extensibility for existing application stacks, while enabling these existing applications to be packaged and deployed to a cloud without modification, simplifying migration and application portability. A tangible example of the hybrid cloud model.

Another interesting aspect is their approach to application portability across a wide variety of private and public cloud environments, allowing enterprises to choose the right cloud for the right job at the right time. According to Charrington, they will support Amazon, GoGrid and Skytap in the initial release due out this spring.

The folks at Appistry have heard me rant about this a few times, but I will mention it again. Appistry is a closed source product but they are taking steps to become an "open" and "interoperable" platform. To put it another way, you aren’t locked into a particular cloud provider’s infrastructure, but you're still somewhat locked into Appistry's. In response to my lock-in concerns, Charrington had this to say "With CloudIQ Manager we are packaging and managing the lifecycle of existing applications -- you are telling us how to manage your apps and bundling them up and we take it from there. There are absolutely no dependencies on us. CloudIQ Engine, that's an application framework for people building apps for extreme scale. We try to minimize lock-in by supporting existing Java and .NET components, but there is a little."

In the conversation, Charrington goes on to describe an open application packaging format for apps deployed to their engine called a FAR (fabric archive). Basically it's just a zip with your app code, files, media, etc., and some XML they call a Service Definition Template. You can think of a FAR as a kind of OVF (Open Virtualization Format) for application-centric cloud environments. Given the simplicity of the FAR spec, my recommendation is that they open up the FAR format under an open source license such as BSD.
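To make the packaging idea concrete, here's a minimal sketch in Python of what assembling a FAR-style archive might look like. The file layout, the XML fields, and the build_far helper are my own assumptions for illustration; they are not Appistry's published Service Definition Template schema.

```python
# Hypothetical sketch of building a FAR-style (fabric archive) package.
# The layout and XML fields below are assumptions for illustration only;
# they are not Appistry's actual Service Definition Template schema.
import zipfile
from pathlib import Path

SERVICE_DEFINITION = """<?xml version="1.0" encoding="UTF-8"?>
<serviceDefinition name="order-service" version="1.0">
  <runtime>java</runtime>                 <!-- assumed field -->
  <entryPoint>com.example.OrderService</entryPoint>
  <instances min="2" max="16"/>           <!-- assumed scaling hints -->
</serviceDefinition>
"""

def build_far(app_dir: str, far_path: str) -> None:
    """Zip the application directory plus a service definition into one archive."""
    root = Path(app_dir)
    with zipfile.ZipFile(far_path, "w", zipfile.ZIP_DEFLATED) as far:
        for path in root.rglob("*"):
            if path.is_file():
                far.write(path, arcname=path.relative_to(root))
        far.writestr("service-definition.xml", SERVICE_DEFINITION)

if __name__ == "__main__":
    build_far("order-service", "order-service.far")
```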

What is exciting about this news is the work we've been doing with Appistry on the Unified Cloud Interface project. Their platform uses a set of open and extensible APIs, allowing enterprises to integrate CloudIQ Manager with existing management tools such as the Enomaly ECP platform, VMware and others to create what they describe as next-generation "cloud management mashups." More specifically, CloudIQ will be among the first platforms to support the UCI (Unified Cloud Interface) specification from a platform-as-a-service point of view. Together this will allow users to universally bridge application workloads among a worldwide cloud of compute providers. (Stay tuned for more UCI news in the near future.) Combining UCI and FAR would make for a killer "open cloud" application stack!

Thursday, March 5, 2009

Cloud Warfare & Proactive Network Defenses

It is quickly becoming apparent that the new weapon of choice for modern conflicts is not found on the traditional battlefield. Instead, many nation-states are increasingly employing cyber warfare to attack other states or entities in an effort to disrupt or disable critical technological infrastructure. States like China and Russia, which remain inferior to the U.S. militarily, have identified the United States' cyberspace vulnerability and worked diligently to exploit it. I've long had a keen fascination with the military side of computing, so I thought I'd take a moment to give you an overview of the current state of cyber warfare.

For those new to military network computing and cyber warfare, there are two main groups in the US military publicly devoted to cyber warfare activities: the Air Force Cyber Command and the 67th Network Warfare Wing. The stated mission of the AF Cyber Command is to develop a major command that stands alongside Air Force Space Command and Air Combat Command as the provider of forces that the President, combatant commanders and the American people can rely on for preserving the freedom of access and commerce in air, space and now cyberspace.

Although less is known about it, the National Security Agency has recently become a major player in the emerging network defense segment of the US military. Generally speaking, the NSA's cyber mandate is to "help monitor" U.S. federal agency computer networks to protect them against attacks. Unofficially, they have been known to proactively engage in network defenses, including botnet-based activities. The NSA's official mission, as set forth in Executive Order 12333, is to collect information that constitutes "foreign intelligence or counterintelligence" while not "acquiring information concerning the domestic activities of United States persons". The NSA has declared that it relies on the FBI to collect information on foreign intelligence activities within the borders of the USA.

The topic of cyber warfare is not a new one; in 1996 Richard Harknett wrote a paper titled "Information Warfare & Deterrence". In this famous paper, Harknett appears to be advocating for what he describes as absolute deterrence. He notes, "The essence of the Information Age is the emergence of a new form of organization. The information technology network seamlessly connects all of its parts, creating shared situational awareness throughout an organization. High connectivity supports both enhanced sustainability and greater accessibility."

At its heart, Harknett's paper outlines the vision of state-sponsored offensive network-centric armies, both as an offensive tool and as a form of mutual deterrence. The problem with mutual deterrence is that it is quickly becoming difficult to tell the friendlies from the enemies. The recent Georgian war is a perfect example. A large part of the Georgian government's web infrastructure was brought down during the Russian / Georgian conflict. Although the Russians were thought to be behind the attack, the actual computers generating the traffic appeared to originate from civilian ISPs in the United States; in order for NATO or others to combat this barrage, they would in effect have been attacking American computing targets, or "friendlies".

Harknett goes on to say "Deterrence requires that the capability to inflict retaliatory costs be perceived as reliable. Deterrence weakens to the degree that the deterrent capability can be contested by a challenger through degradation or avoidance. The inherent accessibility of information technology invites challenges to a network's connectivity. Deterrent threats relying on such connectivity will be susceptible to technical, tactical, and operational contest. The contestability of connectivity will make deterrence of information warfare difficult. "

What is most interesting is the contrast he draws with nuclear deterrents, which he says "have a degree of 'reliability of effect' that makes the costs associated with a nuclear response seem incontestable."

Col. Charles W. Williamson III, the deputy staff judge advocate at U.S. Air Forces in Europe's military justice division, recently added his own opinion. He says, "If the standard is absolute deterrence, then I admit intellectual defeat. On the other hand, if the standard is the more conventional meaning — to discourage somebody from taking action — then most of the world is deterred from symmetrical attack on the U.S. because of our conventional weapons dominance."

In a recent post, Christofer Hoff added his opinion on "offensive computing":
"There's not been a war yet that has been won with defense alone, so why do we expect we can win this one by simply piling on more barbed wire when the enemy is dropping smart bombs? This is the definition of insanity and a behavior that we don't talk about changing.

"Don't spend money on AV because it's not effective" is an interesting behavioral change from the perspective of how you invest. Don't lay down and take it up the assets by only playing defense is another."

In the modern global computing environment, being a passive participant is no longer an option for most nations; if you are not taking proactive and sometimes offensive network measures, you run the risk that your critical infrastructure will be exploited. This very real risk could result in real-world casualties. The next big opportunity for the military contractors of the world will be in creating the next generation of distributed computing defense systems, ones that can potentially take over a network of civilian compute resources, both friendly and hostile. Like it or not, this is the reality we're now facing.

The Universal Amazon EC2 API Adapter (UEC2)

Over the last several weeks I have been having some interesting conversations about standardizing cloud-based APIs. As you know, I am a big proponent of the concept of semantic cloud abstraction. Our Unified Cloud Interface project (UCI) has attracted more than 350 members in a little over a month. Before I get into my latest scheme, I want to assure you I still feel that a singular cloud abstraction interface that can encompass the entire infrastructure stack as well as emerging cloud-centric technologies through a semantic application interface is truly the future of cloud computing. We hope to have a functional UCI demo ready for presentation at the upcoming Wall Street Interoperability Forum, so stay tuned for more news on that front.

I'm also realistic: most users who have deployed to the cloud have written their applications specifically for the Amazon Web Services API, making it the current de facto standard. So it occurred to me that a potentially big opportunity might be to create an open universal EC2 API adapter / abstraction layer (UEC2). Unlike EUCALYPTUS, the EC2 API adapter would work with your existing infrastructure tools and be completely platform agnostic.

At the heart of this concept would be a universal EC2 abstraction, similar to ODBC, the platform-independent database abstraction layer. As with ODBC, a user would install a specific EC2 API implementation, through which a cloned EC2 API is able to communicate with traditional virtual infrastructure platforms such as VMware using the standardized EC2 API. The user could then have their EC2-specific applications communicate directly with any infrastructure through this EC2 adapter, which relays results back and forth between the various infrastructure platforms & APIs.
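To make the ODBC analogy concrete, here's a rough sketch of the driver model I have in mind. Everything below, the InfrastructureDriver interface, the VMwareDriver stub, and the EC2-style parameter names, is hypothetical; a real adapter would translate calls into each platform's actual management API.

```python
# Minimal sketch of a universal EC2-style adapter layer (hypothetical, ODBC-like).
# The backend drivers and their method names are invented for illustration.
from abc import ABC, abstractmethod


class InfrastructureDriver(ABC):
    """Backend-specific driver, analogous to an ODBC driver."""

    @abstractmethod
    def start_instances(self, image_id: str, count: int) -> list: ...

    @abstractmethod
    def stop_instances(self, instance_ids: list) -> None: ...


class VMwareDriver(InfrastructureDriver):
    def start_instances(self, image_id, count):
        # Would call the platform's own API here; stubbed for the sketch.
        return [f"vmware-{image_id}-{i}" for i in range(count)]

    def stop_instances(self, instance_ids):
        pass


class EC2Adapter:
    """Accepts EC2-style calls and relays them to whichever driver is loaded."""

    def __init__(self, driver: InfrastructureDriver):
        self.driver = driver

    def run_instances(self, ImageId: str, MinCount: int = 1, MaxCount: int = 1):
        # EC2-style parameter names on the outside, driver calls on the inside.
        return self.driver.start_instances(ImageId, MaxCount)


if __name__ == "__main__":
    adapter = EC2Adapter(VMwareDriver())
    print(adapter.run_instances(ImageId="ami-12345", MaxCount=3))  # hypothetical image id
```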

I admit the downside of a universal EC2 abstraction layer is the increased overhead to transform statements into constructs understood by the target management platforms.

The Universal EC2 API adapter complements our current unified cloud interface efforts because in a sense it is a logical inverse. Where UCI is a semantic representation for all APIs ("One API to rule them all"), the EC2 API is very specific to an infrastructure-as-a-service environment. The EC2 adapter could easily utilize UCI as an interchange format, allowing for a one-to-many deployment methodology. An EC2 abstraction layer would reduce the amount of developer work by providing a consistent API. To put it another way, rather than coming at the problem from the top down, you're coming at it from the bottom up, with UCI in the middle.

To be clear, I don't have the time or resources to make this project happen myself; between my various cloud advocacy efforts and a new baby, I'm totally overwhelmed. So I'd like to propose we crowdsource this idea: make it an open source project governed by an enterprise-friendly open source license such as BSD.

Tuesday, March 3, 2009

Browser Based Distributed Computing

In possibly the coolest concept I've seen in a long time, Ilya Grigorik, founder and CTO of AideRSS, has come up with an intriguing idea: implement Google's MapReduce algorithm in a browser via HTTP & JavaScript.

(MapReduce is a framework for computing certain kinds of distributable problems using a large number of computers.)
In the post, Grigorik asks: "What if you could contribute to a computational (Map-Reduce) job by simply pointing your browser to a URL? Surely your social network wouldn't mind opening a background tab to help you crunch a dataset or two!

Instead of focusing on high-throughput proprietary protocols and high-efficiency data planes to distribute and deliver the data, we could use battle tested solutions: HTTP and your favorite browser. It just so happens that there are more Javascript processors around the world (every browser can run it) than for any other language out there - a perfect data processing platform."

Grigorik even includes some functional Ruby & JavaScript that actually works!

So in keeping with the concept of browser-based distributed computing, I thought I'd pitch in a few ideas. For instance, why not tie in a crowdsourcing aspect? For example, my ElasticVapor blog does about 2,000-3,000 pageviews a day. Each of those pageviews could potentially run a series of map/reduce jobs via the last script that loads on the page. My blog visitors would be completely unaware that they're actually helping run distributed computing jobs.
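For what it's worth, here's a minimal sketch (in Python, and emphatically not Grigorik's code) of the coordinator such a scheme would need: a small HTTP service that hands each visiting browser a chunk of work and merges the partial results it posts back. The endpoints, payload format and word-count job are all assumptions for illustration.

```python
# Hypothetical map/reduce coordinator for browser workers (word-count example).
# A script on each page would GET /job, run the map step in JavaScript,
# and POST the partial counts back to /result. Endpoints are invented here.
import json
from collections import Counter
from http.server import BaseHTTPRequestHandler, HTTPServer

PENDING = ["the cloud is the new os", "the browser is the new client"]  # toy dataset
RESULTS = Counter()


class Coordinator(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/job" and PENDING:
            body = json.dumps({"text": PENDING.pop()}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(204)  # no work left
            self.end_headers()

    def do_POST(self):
        if self.path == "/result":
            length = int(self.headers["Content-Length"])
            partial = json.loads(self.rfile.read(length))
            RESULTS.update(partial)  # reduce step: merge partial word counts
            self.send_response(200)
            self.end_headers()


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), Coordinator).serve_forever()
```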

Another idea could be to use a Google AdWords / AdSense approach whereby, instead of serving up a Google text ad on your blog or website, participating websites serve up a distributed batch job at the same cost as an ad click-through. The costs could be managed in an AdWords-like interface where users determine how much they are prepared to spend on their map/reduce jobs, say 5 cents per 10 map/reduce jobs. After the budget has been set, these jobs are then distributed to a series of partner websites that render the jobs in parallel with, or in place of, a Google AdWords/AdSense advertisement.
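The back-of-the-envelope math, using the hypothetical rates above, looks something like this:

```python
# Rough budget arithmetic for the AdWords-style model described above
# (rates and traffic figures are the hypothetical ones from the text).
daily_budget_dollars = 50.00
cost_per_job = 0.05 / 10            # 5 cents per 10 jobs -> $0.005 per job
jobs_per_day = int(daily_budget_dollars / cost_per_job)
pageviews_per_site = 2500           # roughly the traffic figure mentioned above
sites_needed = -(-jobs_per_day // pageviews_per_site)  # ceiling division

print(f"{jobs_per_day} jobs/day spread across ~{sites_needed} blogs of this size")
```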

Lots of potential with this idea! Great job Ilya.

CCIF Skype Discussion Group Created

For those interested in a real-time conversation on cloud computing and standards, we've created a CCIF Skype Discussion Group here:

For anyone interested in the Unified Cloud Interface Project, we also have a separate UCI Skype Discussion Group here:

Monday, March 2, 2009

(Updated) Breaking News: Denial of Service, The Pirate Bay is offline

I guess my timing on my previous Hacking the Cloud post could not have been better. I just got word that "someone" is currently DDoSing thepiratebay.org. Even more interesting, it may be a hijacked botnet causing the problem. More details as they come in.

-- Update 10pm EST - March 2/09 --

According to the TorrentFreak website:

A few hours ago The Pirate Bay website started to slow down, and eventually it became completely unresponsive. With the trial going on at the moment, the downtime instantly led to all kinds of rumors. However, there is nothing to worry about, the downtime is not related to the trial and people are on their way to bring the site back up.

At the moment there is no estimate for when the site will return. The problem can’t be fixed remotely we were told. However, people are on their way to the ’secret’ location where the Pirate Bay hardware is located to find out what the problem is.

When we receive additional information we’ll post an update here. The Pirate Bay’s trackers are still up so all the torrents that are downloaded already should work just fine.

For those who are interested in the trial coverage, a summary of the events of day 10 was posted earlier today.

One can’t help but think that if The Pirate Bay was a traditional business making lots of profits as the Prosecution in the case would have everyone believe, the site wouldn’t suffer anywhere near the amount of downtime it does. Of course, a torrent site that is fully operational all of the time would be no fun at all. Everyone knows that absence makes the heart grow fonder….


Cloud Jackin, Hacking the Cloud

Often, those who say the cloud is too early or not ready for wide-scale enterprise usage point to "security" as a key concern. Although they are quick to note that the security of a third-party provider is an obvious point of weakness, they typically lack specific examples of what these weak points actually are. So I thought I'd point out a few.

When looking at the potential vulnerabilities that cloud computing introduces, I typically recommend looking at the low-hanging fruit, the stuff that a novice user could exploit with little or no technical capability. Right now the simplest exploits involve something I call "cloud jacking" or "cloud hijacking". This is when an unscrupulous element takes either partial or complete control of your cloud infrastructure, typically by using a simple automated exploit script (a script-kiddie tool). An example of this in action is found within the world of botnets, in which an existing series of compromised computing resources is used to create an exploit map of the cloud.

The basic premise of "cloud exploit mapping" is to use a technique similar to celestial navigation, the positioning technique devised to help sailors cross featureless oceans without having to rely on dead reckoning to strike land. Similarly, cloud exploit mapping is used to navigate and locate the optimal targets for exploitation across the cloud. Once the potentially vulnerable machines have been mapped, all a would-be attacker needs to do is hijack a series of already exploited machines by crawling the structure of an existing botnet, basically using it as a guide to the easiest targets and replacing the previous command and control with a new set. Generally speaking, botnet controllers don't plug existing holes, so it's fairly easy to exploit the previous vulnerabilities.

When looking at security in the cloud, Richard Reiner, founder of Assurent Secure Technologies and an advisor to Enomaly, puts it another way.

"Securing the cloud doesn't present radically new challenges, although new technology may be required. For example, rather than implementing firewall and IPS functions exclusively in the physical network, some of these network security functions may need to be delivered within the virtual switch provided by a hypervisor, and products specifically adapted to this deployment will be required. Host-based security agents may also require some modification to run well in this environment, as they need to handle events such as migration of the guest instance form one host to another.

When an enterprise makes use of public cloud resources (e.g. Amazon EC2, or Rackspace's Mosso cloud services), additional issues arise. Here there is a new trust issue. The customer's compute tasks are now executing within the cloud provider's infrastructure, and the "servers" these tasks are operating on are guests under the cloud's hypervisors -- i.e. essentially fictions created by the hypervisor software. The hypervisor is software, so it is easily modified; and it is all-powerful with respect to the guest instances running under it -- the hypervisor can copy, modify, or delete data from within the guest at will. This is a new trust problem: the customer must trust that the cloud provider's hypervisors and management software are behaving appropriately and haven't been tampered with.

Unlike traditional hosting, the problem can't be solved by locking the physical servers in a cage that only the customer has access to, since these are virtual servers running on shared hardware."

For cloud providers, the next major issue may be addressing multi-tenant cloud federation and security. When a series of applications or machines has been exploited, the next generation of cloud platforms will need to provide a quick and secure way to quarantine those machines before they can do further harm or potentially bring down the entire cloud. Most security products were never made to handle the management of tens of thousands or more transient physical and virtual machines that could be used by anyone at any time for any reason. This is the new reality facing public cloud providers and their customers.
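To illustrate the quarantine idea, here's a rough sketch of the kind of control loop a cloud platform might run. The detection signal, the platform object and its methods are all invented for illustration; no vendor exposes exactly this interface.

```python
# Hypothetical quarantine loop for a multi-tenant cloud platform.
# Detection signals, the platform API, and the isolation policy are
# assumptions for illustration, not any vendor's actual interface.
from dataclasses import dataclass


@dataclass
class Instance:
    instance_id: str
    tenant: str
    network_group: str


def looks_compromised(instance: Instance, signals: dict) -> bool:
    # Stand-in heuristic: flag instances emitting command-and-control traffic.
    return signals.get(instance.instance_id, {}).get("c2_traffic", False)


def quarantine(platform, instance: Instance) -> None:
    """Isolate a suspect instance without destroying forensic evidence."""
    platform.move_to_network_group(instance.instance_id, "quarantine-vlan")
    platform.snapshot(instance.instance_id)   # preserve state for analysis
    platform.suspend(instance.instance_id)    # stop it doing further harm
    platform.notify_tenant(instance.tenant, instance.instance_id)


def sweep(platform, instances: list, signals: dict) -> None:
    for inst in instances:
        if looks_compromised(inst, signals):
            quarantine(platform, inst)
```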

Sunday, March 1, 2009

Navigating the Fog -- Billing, Metering & Measuring the Cloud

It's that dreaded time of the month again, the time that we, the 400,000+ Amazon Web Services consumers, await with great anticipation / horror. What I'm talking about is the Amazon Web Services billing statement sent at the beginning of each month. A surprise every time. In honor of this monthly event, I thought I'd take a minute to discuss some of the hurdles as well as opportunities for billing, metering & measuring the cloud.

I keep hearing that one of the biggest issues facing IaaS users currently is a lack of insight into costing, billing and metering. The AWS costing problem is straightforward enough: unlike other cloud services, Amazon has decided not to offer any kind of real-time reporting or API for their cloud billing (EC2, S3, etc). There are some reporting features for DevPay and the Flexible Payments Service (Amazon FPS), as well as an Account Activity page, but who has time for a dashboard when what we really want is a real-time API?

To give some background, when Amazon launched S3 and later EC2, the reasoning was fairly straightforward: they were new services still in beta. So without officially confirming it, the word was that a billing API was coming soon. But three years later, still no billing API. So I have to ask, what gives?

Other cloud services have done a great job of providing a real-time view of what the cloud is costing you. One of the best examples is GoGrid's myaccount.billing.get API and widget, which offer a variety of metrics through their open source GoGrid API.
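As a sketch of the kind of real-time billing API I'm asking for, the snippet below polls a hypothetical provider endpoint and flags runaway spend. The URL and JSON field name are invented; neither AWS nor GoGrid is guaranteed to expose this exact interface.

```python
# Sketch of the kind of real-time billing API I'd like to see.
# The endpoint URL and JSON fields are hypothetical.
import json
import time
from urllib.request import urlopen

BILLING_URL = "https://api.example-cloud.com/v1/billing/current"  # hypothetical endpoint
BUDGET_ALERT = 100.00  # dollars


def current_spend() -> float:
    with urlopen(BILLING_URL) as resp:
        usage = json.load(resp)
    return float(usage["month_to_date_charges"])  # assumed field name


if __name__ == "__main__":
    while True:
        spend = current_spend()
        if spend > BUDGET_ALERT:
            print(f"ALERT: month-to-date charges ${spend:.2f} exceed budget")
        time.sleep(300)  # poll every five minutes
```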

Billing APIs aside, another major problem still remains for most cloud users: a basis for comparing the quality & cost of cloud compute capacity between cloud providers. This brings us to the problem of metering the cloud, which Yi-Jian Ngo at Microsoft pointed out last year. In his post he stated that "Failing to come up with an appropriate yardstick could lead to hairy billing issues, savvy customers tinkering with clever arbitrage schemes and potentially the inability of cloud service providers to effectively predict how much to charge in order to cover their costs."

Yi-Jian Ngo couldn't have been more right in pointing to Wittgenstein's Rule: "Unless you have confidence in the ruler's reliability, if you use a ruler to measure a table, you may as well be using the table to measure the ruler."

A few companies have attempted to define cloud capacity. Notably, Amazon's Elastic Compute Cloud service uses the EC2 Compute Unit as the basis for its EC2 pricing scheme (along with bandwidth and storage). Amazon states that they use a variety of measurements to provide each EC2 instance with a consistent and predictable amount of CPU capacity. The amount of CPU allocated to a particular instance is expressed in terms of EC2 Compute Units, and Amazon explains that they use several benchmarks and tests to manage the consistency and predictability of the performance of an EC2 Compute Unit. One EC2 Compute Unit provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor, which they claim is equivalent to an early-2006 1.7 GHz Xeon processor. Amazon makes no mention of how they arrive at this benchmark, and users of the EC2 system are not given any real insight into how the numbers are derived. Currently there are no standards for cloud capacity, and therefore there is no effective way for users to compare one cloud provider with another in order to make the best decision for their application demands.

An idea I suggested in a post last year was to create an open Universal Compute Unit which could be used to enable an "apples-to-apples" comparison between cloud capacity providers. My rough concept was to create a Universal Compute Unit specification and benchmark test based on integer operations that can form an (approximate) indicator of the likely performance of a given virtual application within a given cloud such as Amazon EC2 or GoGrid, or even a virtualized data center such as VMware. One potential point of analysis could be a standard clock rate measured in hertz, derived by multiplying the instructions per cycle by the clock speed (measured in cycles per second). It can be more accurately defined within the context of both a virtual machine kernel and standard single and multicore processor types.

My other suggestion was to create a Universal Compute Cycle (UCC), the inverse of the Universal Compute Unit. The UCC would be used when direct access to the underlying system and/or operating system is not available; examples include Google's App Engine and Microsoft Azure. UCC could be based on clock cycles per instruction, the number of clock cycles that elapse while an instruction is being executed. This allows an inverse calculation to be performed to determine the UCU value, as well as providing a secondary level of performance evaluation / benchmarking.
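To spell out the arithmetic behind the two proposals: UCU is essentially instructions per cycle multiplied by clock speed, and UCC is the inverse view (cycles per instruction), so either can be derived from the other. A toy calculation with assumed benchmark numbers:

```python
# Toy arithmetic for the proposed UCU / UCC metrics (all numbers assumed).
# UCU ~ useful integer throughput: instructions-per-cycle x clock speed.
# UCC ~ cycles-per-instruction, the inverse view when the hardware is hidden.

clock_hz = 1.1e9               # ~1.0-1.2 GHz, the EC2 Compute Unit reference range
instructions_per_cycle = 2.0   # assumed IPC measured by an integer benchmark

ucu = instructions_per_cycle * clock_hz   # instructions per second
ucc = 1.0 / instructions_per_cycle        # cycles per instruction

# Inverse calculation: given a measured UCC and clock rate, recover the UCU value.
recovered_ucu = clock_hz / ucc
assert abs(recovered_ucu - ucu) < 1e-6 * ucu

print(f"UCU ~ {ucu / 1e9:.2f} billion integer instructions/sec, UCC = {ucc:.2f}")
```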

I'm not the only one thinking about this. One company trying to address this need is Satori Tech with their capacity measurement metric, which they call the Computing Resource Unit ("CRU"). They claim that the CRU allows for dynamic monitoring of available and used computing capacity on physical servers and virtual pools/instances. The CRU allows for uniform comparison of capacity, usage and cost efficiency in heterogeneous computing environments, and abstraction away from operating details for financial optimization. Unfortunately the format is patented and closed, available only to customers of Satori Tech.

And before you say it, I know that UCU, UCC or CRU could be "gamed" by unsavory cloud providers attempting to pull an "Enron"; this is why we would need to create an auditable specification which includes a "certified measurement" to address this kind of cloud benchmarking. A potential avenue is IBM's new "Resilient Cloud Validation" program, which I've come to appreciate lately. (Sorry about my previous lipstick-on-a-pig remarks.) The program will allow businesses who collaborate with IBM on a rigorous, consistent and proven program of benchmarking and design validation to use the IBM "Resilient Cloud" logo when marketing their services. These types of certification programs may serve as the basis for defining a level playing field among various cloud providers, although I feel that a more impartial trade group such as the IEEE may be a better entity to handle the certification process.
