Sunday, May 31, 2009

HTTP is Dead, Long Live The Realtime Cloud

Last week Google announced a new service called Google Wave. Loosely, it can be thought of as a realtime communication and collaboration platform & protocol. The platform is based on hosted XML documents (called waves) that support massive concurrency and low-latency updates on top of a decentralized XMPP architecture. It's taken me a few days to fully understand what this announcement really means and the importance it may have for the future of the Internet and how we use and consume it.

The Internet for all intents and purposes is a living organism, continually adapting and changing. It has evolved from a somewhat static medium, where content updates were typically pulled from fairly simple syndication and transfer sources, to a network of realtime data sources changing at an ever quickening pace. Combined with the ability to semantically describe millions of new data sources through powerful on-demand, cloud-based computing platforms -- we are in the midst of a realtime computing transformation. One that is fundamentally different than the hypertext-based Internet first described more than 26 years ago.

What I found most fascinating in Google's announcement was the protocol they chose as the basis of their new realtime vision. It wasn't HTTP; instead, XMPP was selected as the foundation for this decentralized and interoperable vision. What this means in very simple terms is that Google has declared the HTTP protocol dead, an inefficient relic of the past -- a protocol that was never designed for the realities of a global realtime cloud.

Among HTTP's numerous problems is its requirement that a user's machine poll a server periodically to see if any new information is available. For a few data sources this may seem like a small burden, but multiply that by millions or even billions of constantly changing sources and you have a major problem on your hands -- enter the wonders of decentralization & XMPP.
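
To make the polling burden concrete, here is a minimal sketch of what every HTTP client ends up doing -- the feed URL is made up, and the round trip repeats on a timer whether or not anything has changed:

    # Minimal sketch of HTTP polling (hypothetical feed URL).
    import time
    import urllib.request

    FEED_URL = "http://example.com/feed"  # hypothetical endpoint
    last_seen = None

    while True:
        with urllib.request.urlopen(FEED_URL) as resp:
            body = resp.read()
        if body != last_seen:
            last_seen = body
            print("new content received")
        # Every client repeats this round trip, changed or not.
        time.sleep(60)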

XMPP is the ultimate interoperability layer, letting one server tell any other XMPP server that it is available to receive new information. When another user sends new content through the XMPP server, the message is passed on immediately and automatically to all recipients who are marked as available. Building upon this core, Google's XMPP-based Wave federation protocol goes well beyond it by adding automatic discovery of IP addresses and ports using SRV records (a Service record is a type of record in the Internet Domain Name System specifying information on available services), as well as TLS authentication and encryption of connections. The great thing about this TLS authentication is that it is unilateral: only the server is authenticated (the client knows the server's identity), while the client remains unauthenticated or anonymous. Basically, Google's vision for XMPP is everything HTTP should be, but sadly isn't.
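
As a rough illustration of the SRV discovery step (not Google's actual implementation), here's a short Python sketch that looks up the standard _xmpp-server._tcp record for a domain, assuming the dnspython package is installed:

    # Sketch: discovering a domain's XMPP federation endpoint via DNS SRV.
    import dns.resolver

    def discover_xmpp_server(domain):
        # _xmpp-server._tcp is the standard SRV label for server-to-server XMPP.
        answers = dns.resolver.resolve("_xmpp-server._tcp." + domain, "SRV")
        # Prefer the record with the lowest priority value.
        best = min(answers, key=lambda r: r.priority)
        return str(best.target).rstrip("."), best.port

    host, port = discover_xmpp_server("example.com")
    print("federate with %s on port %d" % (host, port))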

Google's ambition with Wave goes far beyond the creation of a new kind of messaging or collaboration platform; it seems to be an effort to fundamentally reimagine how the Internet itself is managed and used.

Saturday, May 30, 2009

Intel & Open Cloud Standards – Where it Makes Most Sense

In a recent post on the Intel community blogs, Intel has for the first time publicly shed light on its strategy around Cloud Computing and the so-called "Open Cloud", and on how open standards and open source solutions fit into the cloud computing era.

In the post, Jackson He, a Senior Architect at Intel, said: "Intel has been a leader for HW standard building block for the last 30 years and has changed the industry. It is natural to assume that Intel should focus IaaS and PaaS building blocks as well as how these open standards could be applied at open datacenters (ODC) as 'adjacent' growth opportunities to embrace the booming cloud computing. Some conventional wisdom says that Intel is not relevant to cloud, as cloud computing by definition abstracts HW. I would say just the opposite – Intel will continue to play a critical role to define and promote open standards and open source solutions for IaaS and PaaS, so that the cloud can actually mushroom. There is a strong correlation between how fast cloud computing can proliferate and how well Intel plays its role to lead the open cloud solutions at IaaS and PaaS layers. What do you think?"

Read the rest of the post here.

(Disclosure: Enomaly has been working closely with Intel on their Elastic / Cloud strategy since 2006.)

Friday, May 29, 2009

Redux: The Universal Compute Unit & Compute Cycle

Recently I posed a question: is there an opportunity to create a common or standard Cloud Performance Rating System? And if so, how might it work? The feedback has been staggering. With all the interest in a Standardized Cloud Performance Rating System concept, I thought it was time to reintroduce the Universal Compute Unit & Compute Cycle concept.

Last year I proposed the need for a "standard unit of measurement" for cloud computing, similar to the International System of Units, better known as the metric system. This unit of cloud capacity is needed in order to ensure a level playing field as the demand for and use of cloud computing becomes commoditized.

Nick Carr famously pointed out that before the creation of a standardized electrical grid, large-scale sharing of electricity was nearly impossible. Cities and regions would have their own power plants limited to their particular area, and the energy itself was not reliable (especially during peak times). Then came the "universal system", which enabled electricity to be interchanged and shared using a common set of electrical standards. Generating stations and electrical loads using different frequencies could now be interconnected using this universal system.

Recently several companies have attempted to define cloud capacity; notably, Amazon's Elastic Compute Cloud service uses an EC2 Compute Unit. Amazon states they use a variety of measurements to provide each EC2 instance with a consistent and predictable amount of CPU capacity. The amount of CPU allocated to a particular instance is expressed in terms of EC2 Compute Units. Amazon explains that they use several benchmarks and tests to manage the consistency and predictability of the performance of an EC2 Compute Unit. One EC2 Compute Unit provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor. They claim this is the equivalent of an early-2006 1.7 GHz Xeon processor. Amazon makes no mention of how they achieve their benchmark, and users of the EC2 system are not given any insight into how they came to their conclusion. Currently there are no standards for cloud capacity, and therefore there is no effective way for users to compare one cloud provider with another in order to make the best decision for their application demands.

There have been attempts to do this type of benchmarking, in the grid and high performance computing space in particular, but these standards pose serious problems for non-scientific usage such as web applications. One of the more common methods has been the use of FLOPS (or flops or flop/s), an acronym meaning FLoating point Operations Per Second. FLOPS is a measure of a computer's performance, especially in fields of scientific calculation that make heavy use of floating point math, but it doesn't do much for applications outside the scientific realm because of that dependence on floating point calculations. Measuring floating point operation speed, therefore, does not accurately predict how the processor will perform on just any problem. For ordinary (non-scientific) applications, integer operations (measured in MIPS) are far more relevant and could form the basis for the Universal Compute Unit.

By basing the Universal Compute Unit on integer operations, it can form an (approximate) indicator of the likely performance of a given virtual machine within a given cloud such as Amazon EC2, or even a virtualized data center. One potential point of analysis may be a standard rate derived by multiplying instructions per cycle by the clock speed (measured in cycles per second). It can be more accurately defined within the context of both a virtual machine kernel and standard single and multicore processor types.

The Universal Compute Cycle (UCC) is the inverse of the Universal Compute Unit. The UCC would be used when direct access to the underlying cloud system or operating system is not available. One such example is Google's App Engine. UCC could be based on clock cycles per instruction -- the number of clock cycles consumed while an instruction is being executed. This allows an inverse calculation to be performed to determine the UcU value, as well as providing a secondary level of performance evaluation.
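
To make the relationship between the two proposed units concrete, here is a back-of-the-envelope sketch with invented numbers; treat these definitions as one possible interpretation of the proposal, not a finished specification:

    # Illustrative only: one possible reading of UcU and UCC.
    # UcU is taken as integer throughput: instructions per cycle x clock rate.
    # UCC is the inverse view: clock cycles consumed per instruction.

    def universal_compute_unit(instructions_per_cycle, clock_hz):
        """Approximate integer throughput of a VM (instructions per second)."""
        return instructions_per_cycle * clock_hz

    def universal_compute_cycle(instructions_per_cycle):
        """Cycles per instruction -- the inverse measure for opaque platforms."""
        return 1.0 / instructions_per_cycle

    # Hypothetical VM: 2.0 GHz virtual core averaging 1.5 integer instructions/cycle.
    ucu = universal_compute_unit(1.5, 2.0e9)   # ~3.0e9 integer instructions/sec
    ucc = universal_compute_cycle(1.5)         # ~0.67 cycles per instruction
    print(ucu, ucc)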

I am the first to admit there are a variety of ways to solve this problem and by no means am I claiming to have solved all the issues at hand. My goal at this point is to engage an open dialog to work toward this common goal.

To this end I propose the development of an open standard for cloud computing capacity called the Universal Compute Unit (UcU) and its inverse, the Universal Compute Cycle (UCC). An open standard unit of measurement (with benchmarking tools) will allow providers, enablers and consumers to easily, quickly and efficiently assess auditable compute capacity with the knowledge that 1 UcU is the same regardless of the cloud provider.

The cloud isn't about any single VM or process but about how many VMs or processes work together. Take, for example, AMD's PR (Performance Rating) system, which was used to compare their (underperforming) processors to those of the market leader, Intel. The problem was that it was built around a very particular use case, but it generally gave you the idea. (Anyone technical knew Intel was better at floating point, but most consumers didn't care or weren't technical enough to know the difference.)

Similarly, cloud providers may want to use some aggregate performance metrics as a basis for comparing themselves to other providers. For example, Cloud A (High End) has 1,000 servers and fibre channel, while Provider B (Commodity) has 50,000 servers but uses direct attached storage. Both are useful, but for different reasons. If I want performance I pick Cloud A; if I want massive scale I pick Cloud B. Think of it like the food guide on the back of your cereal box.


Introducing Government as a Service

With all the talk of cloud computing within the U.S. Federal Government lately, it seems to be rubbing off on other governments around the globe. I've recently had conversations with the Canadian, US, UK, UN, and EU governments asking about how they might be able to investigate the creation of "public cloud computing infrastructures" for both governmental use as well as for their citizens. I'm calling this new movement toward the governmental adoption of cloud computing -- Government as a Service.

Simply put, Government as a Service is a way for governments around the globe to offer enabling technical services to their population. These could range from simple web-based services to complex infrastructure and platforms made available through the web specifically to the citizens of a given country.

In a lot of ways, Government as a Service is the ultimate social program, an equalizer that gives the broader populace uniform access to emerging cutting-edge technology that may otherwise be out of reach for the average person. Combined with broadband initiatives, governmental cloud computing could truly be the information revolution we've all been waiting for. If information is power, cloud computing is the tool that gives it.

-- Update --
Giving credit where credit is due, it looks like Alexis Richardson beat me to the term. He posted it to CCIF in April. Here is the link.

Thursday, May 28, 2009

A Standardized Cloud Performance Rating System

Ever get one of those random phone calls in the midst of your work day that makes you think, huh -- interesting idea? Well, earlier today I had one from a guy looking to learn more about cloud computing platforms. Although it turned out he wasn't specifically looking for an elastic computing platform, he did ask a few very thought-provoking questions.

What he asked was whether there is a simple way to compare the performance, security and quality of various cloud computing providers. He went on to say that when comparing traditional hardware vendors it was easy for him to understand the standardized specifications (GHz, GB, etc.) as well as determine quality based on brand recognition, but in the cloud world there was no easy way for him to compare "apples to apples". In his words, "there is Amazon and then there is everyone else". Although overly simplified, he was kind of right. For a lot of people looking to get into the cloud, it's a bit of a mystery.

This got me thinking. With all the talk lately of cloud standards, is there an opportunity to create a common or standard Cloud Performance Rating System? And if so, how might it work?

Unlike CPU or Storage, Cloud Computing is significantly more complex involving many different moving parts (deployment approaches, architectures and operating models). Defining one common standardized basis of comparison would be practically impossible. But within the various aspects of cloud computing there certainly are distinct areas that we may be able to quantify. The most likely starting point would be infrastructure related offerings such as compute and storage clouds.

The next question is what would you rate? Quality, performance, security? And how might these be actually quantified?

I'm going to leave those answers for another time. But it does make you think. So thank you random guy for brightening up an otherwise rainy day.

A Bright Future for CloudCamp

With all the discussion about CloudCamp and its future, I thought I'd take a brief moment to review our history as well as look at our future.

It's been almost exactly one year since I held the first CloudCamp in Toronto, Ontario as part of the Mesh08 conference. Based on the success of that first event, I sent out an email to the Google Cloud Computing mailing list. Upon sending that email I was inundated with more than 100 responses, from companies interested in sponsoring as well as individuals looking to help organize. From those responses I selected a few key people who really put together the first "major" CloudCamp at the Microsoft office in downtown San Francisco, with Sun as our first paying sponsor. The initial team included Dave Nielsen, Jesse Silver, Sam Charrington, Sara Dornsife, Alexis Richardson and Whurley, with the goal of organizing just one hell of a Cloud event/party in San Francisco.

Shortly after this, Alexis Richardson and a team from the U.K. held another CloudCamp in London, at which point CloudCamp's popularity exploded. Never in my wildest imagination did I think CloudCamp would grow into the phenomenon it has become, with something like 25 events at last count and dozens more planned. To be honest I am humbled by the community's rapid and overwhelming embrace of CloudCamp over the last year. But I'd like to remind you that CloudCamp has grown organically thanks to a dedicated team of hard-working individuals (Dave, Alexis, Tom, George, Sam, Jesse, Sara as well as other regional organizers) who have gone out of their way to make CloudCamp a success, and I would like to personally thank you for this. This would never have happened without your hard work and dedication.

Recently a few people have suggested that my intentions have not been honorable. This is completely untrue and unfair. Please keep in mind that CloudCamp has never had any real strategy, plan or organization other than a bunch of like-minded folks who share a common passion for Cloud Computing.

As many of you know, we've been working to organize ourselves into a legal structure. This takes time; CloudCamp isn't our full-time job and we all have other responsibilities, but I assure you this will happen, soon. Dave has been actively working on a CloudCamp plan that we hope to share with the community in the near future. Once we have a solid plan and legal organization, I will happily transfer all related IP, domains, etc. to the control of the organization.

Going forward I believe in what we've created and will do everything in my power to ensure a sustainable long term future for CloudCamp.

Thank you for your continued support.

Google Jumps into the Cloud Wave (AJAX over XMPP)

Busy day for cloud interoperability related news. Google just announced a new service called Google Wave, described as an open communication and collaboration platform & protocol based on hosted XML documents (called waves) supporting concurrent modifications and low-latency updates. In simple terms, Google Wave can be thought of as an AJAX spreadsheet over XMPP.

According to Google, "The platform enables people to communicate and work together in new, convenient and effective ways. We will offer these benefits to users of Google Wave and we also want to share them with everyone else by making waves an open platform that everybody can share. We welcome others to run wave servers and become wave providers, for themselves or as services for their users, and to "federate" waves, that is, to share waves with each other and with Google Wave. In this way users from different wave providers can communicate and collaborate using shared waves. We are introducing the Google Wave Federation Protocol for federating waves between wave providers on the Internet."

A wave provider operates a wave service on one or more networked servers. The central pieces of the wave service are the wave store, which stores wavelet operations, and the wave server, which resolves wavelet operations by operational transformation and writes and reads wavelet operations to and from the wave store. Typically, the wave service serves waves to users of the wave provider who connect to the wave service frontend (see "Google Wave Data Model and Client-Server Protocol"). More importantly, for the purpose of federation, the wave service shares waves with participants from other providers by communicating with those wave providers' servers. The wave service uses two components for this, a federation gateway and a federation proxy, and is based on an open extension to the XMPP core protocol [RFC3920] to allow near real-time communication between two wave servers.
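
To give a feel for what "resolving wavelet operations by operational transformation" means, here's a deliberately tiny, illustrative Python sketch that transforms two concurrent text insertions so both sides converge. It is not the actual Wave algorithm or data model, just the general idea:

    # Toy operational transformation: two concurrent inserts into one string.
    def transform_insert(pos_a, pos_b, text_b):
        """Shift insert A's position if concurrent insert B lands at or before it."""
        if pos_b <= pos_a:
            return pos_a + len(text_b)
        return pos_a

    doc = "hello world"
    # Client 1 inserts "brave " at 6; client 2 concurrently inserts "oh " at 0.
    pos1, text1 = 6, "brave "
    pos2, text2 = 0, "oh "

    # The server applies client 2 first, then transforms client 1's op against it.
    doc = doc[:pos2] + text2 + doc[pos2:]
    pos1 = transform_insert(pos1, pos2, text2)
    doc = doc[:pos1] + text1 + doc[pos1:]
    print(doc)  # "oh hello brave world"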


Google has also released the Google Wave Federation Protocol as open source, with the canonical copy maintained in Subversion, hosted at: http://code.google.com/p/wave-protocol/. The intellectual property related to this protocol is licensed under a liberal patent license. If you'd like to contribute to the specification, please review the community principles.

It looks very cool. More details to follow.

RumorMill: Amazon to Open Source Web Services API's

I usually try to avoid posting rumors, but this one is particularly interesting. I first heard about it a few weeks back but recently had independent confirmation. Word is Amazon's legal team is currently "investigating" open sourcing their various web services APIs, including EC2, S3, etc. (The rumor has not been officially confirmed by Amazon, but my sources are usually pretty good.)

If true, this move makes a lot of sense for a number of reasons. First and foremost, it would help foster the adoption of Amazon's APIs, which are already the de facto standards used by hundreds of thousands of AWS customers around the globe, thus solidifying Amazon's position as the market leader.

By actually giving their stamp of approval, they would in a sense be officially giving other players the opportunity to embrace the interface methods while keeping the actual implementation (their secret sauce) a secret. If anything, this may really help Amazon win over Enterprise customers by enabling an ecosystem of compatible "private cloud" products and services that could seamlessly move between Amazon's Public Cloud and existing data center infrastructure.

This would also continue the momentum started by a number of competitors/partners who have begun adopting the various AWS API's including Sun Microsystems in their Open Cloud Platform and the EUCALYPTUS project.

From a legal standpoint, this would help negate some of the concerns around API liability. Amazon is known to have an extensive patent portfolio and in the past has not been afraid to enforce it. A clear policy regarding the use of their APIs would certainly help companies that until now have been reluctant to adopt them.

Lastly, this provides the opportunity for an ecosystem of API-driven applications to emerge (EUCALYPTUS is a perfect example). Another possible opportunity I wrote about a while back is the creation of a Universal EC2 API adapter (UEC2) that could plug into your existing infrastructure tools and is completely platform agnostic.

At the heart of this concept is a universal EC2 abstraction, similar to ODBC (a platform-independent database abstraction layer). Like ODBC, a user could install a specific EC2 API implementation, through which a lightweight EC2 API daemon (think libvirt) is able to communicate with traditional virtual infrastructure platforms such as VMware, Xen, Hyper-V, etc. using a standardized EC2 API. The user then has the ability to have their EC2-specific applications communicate directly with any infrastructure using this EC2 Adapter. The adapter then relays the results back and forth between the various infrastructure platforms & APIs. Maybe it's time for me to get moving on this concept.
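
To sketch what such an adapter might look like, here's a minimal Python outline that maps EC2-style calls onto a local hypervisor through the libvirt bindings. The class and method names are hypothetical, not an existing project:

    # Hypothetical Universal EC2 adapter: EC2-style calls backed by libvirt.
    import libvirt

    class UniversalEC2Adapter:
        def __init__(self, uri="qemu:///system"):
            self.conn = libvirt.open(uri)

        def run_instances(self, domain_xml):
            """EC2-style RunInstances, backed by libvirt domain creation."""
            return self.conn.createXML(domain_xml, 0)

        def describe_instances(self):
            """EC2-style DescribeInstances, listing the active libvirt domains."""
            return [self.conn.lookupByID(i).name() for i in self.conn.listDomainsID()]

        def terminate_instances(self, name):
            """EC2-style TerminateInstances, destroying the named domain."""
            self.conn.lookupByName(name).destroy()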

May a thousand Clouds Bloom.

---Update--
David Berlind on twitter asked a great question: Can Amazon actually "open source" an API? Or, with APIs, is Creative Commons or a Non-Assertion Covenant the way to go?

Wednesday, May 27, 2009

Monitoring Cloud based Revenue Erosion

Over the past few weeks we've been busy preparing for our upcoming Enomaly ECP Cloud Service Provider Edition launch happening June 1st. Recently we've had the chance to speak with a broad group of traditional web hosting and managed data center providers about the opportunities for cloud computing and infrastructure-as-a-service platforms within their existing environments, and it's interesting to see how our pitch has evolved.

Lately in order to prove our point, all we need to do is tell the web hosters to monitor any traffic to and from Amazon Web Services. What this AWS traffic represents is revenue leakage or lost revenue opportunities. It has become obvious that for hosting companies the cloud has little to do with efficiency or data center optimization but more to do with recapturing revenue lost to Amazon and other cloud infrastructure providers.

For a data center provider, taking an off-lease server and sticking it into a turnkey cloud deployment is low risk and allows for almost instant ROI from a heterogeneous infrastructure. Throw in our monthly per-core pricing model and our cloud platform almost magically pays for itself. Simply, we scale with our hosting customers. Throw in a Cloud App Store and the hosting provider now has several new incremental revenue opportunities that they had previously never been able to tap into.

While the media and industry analysts argue about whether cloud computing is real or ready for the enterprise -- what has become certain is that cloud computing is having a real effect on many web hosting firms' bottom lines today, and if they don't adapt, they're going to be left behind.

So if you're a hosting firm go ahead and start monitoring your network traffic to and from Amazon Web Services -- that's your revenue leaking.

Tuesday, May 26, 2009

Cloud Security: NIST Releases Guide to Enterprise IT Security

The National Institute of Standards and Technology (NIST) recently released a draft "Guide to Adopting and Using the Security Content Automation Protocol" (SCAP) for public review. The guide takes a close look at what they describe as "the need for a comprehensive, standardized approach to overcoming security challenges found within a modern enterprise IT environment". In case you're not familiar with SCAP, it comprises a suite of specifications for organizing and expressing security-related information in standardized ways, as well as related reference data, such as identifiers for software flaws and security configuration issues, mostly geared toward federal government agencies. That said, SCAP can also be used to maintain the security of enterprise systems, for example by automatically verifying the installation of patches, checking system security configuration settings, and examining systems for signs of compromise.

I haven't done too much digging through the specification, but at first glance a lot of the security concepts seem fairly well suited to both governmental and enterprise infrastructure-as-a-service deployments such as Amazon EC2.

Interesting to note, one of the major issues outlined in the guide is the lack of interoperability across system security tools; for example, the use of proprietary names for vulnerabilities or platforms creates inconsistencies in reports from multiple tools, which can cause delays in security assessment, decision-making, and vulnerability remediation. The guide also recommends that organizations use SCAP to demonstrate compliance with security requirements in mandates such as the Federal Information Security Management Act (FISMA).

The guide goes on to outline: "Many tools for system security, such as patch management and vulnerability management software, use proprietary formats, nomenclatures, measurements, terminology, and content. For example, when vulnerability scanners do not use standardized names for vulnerabilities, it might not be clear to security staff whether multiple scanners are referencing the same vulnerabilities in their reports. This lack of interoperability can cause delays and inconsistencies in security assessment, decision-making, and remediation."
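
As a toy illustration of why standardized identifiers matter, here's a short sketch that maps proprietary scanner names onto a common CVE-style identifier so that reports from two different tools can actually be compared. The scanner names and the mapping are invented for the example:

    # Illustrative only: normalizing per-tool vulnerability names to common IDs.
    SCANNER_TO_CVE = {
        "VendorA-SSL-0042": "CVE-2008-5161",     # hypothetical proprietary name
        "VendorB:openssh-cbc": "CVE-2008-5161",  # another tool's name for the same issue
    }

    def normalize(findings):
        """Collapse tool-specific names into a single set of standard identifiers."""
        return {SCANNER_TO_CVE.get(f, f) for f in findings}

    report_a = ["VendorA-SSL-0042"]
    report_b = ["VendorB:openssh-cbc"]
    # Without normalization the two reports look unrelated; with it they agree.
    print(normalize(report_a) == normalize(report_b))  # True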

Direct Link > http://csrc.nist.gov/publications/drafts/800-117/draft-sp800-117.pdf

NIST requests comments on the new publication, 800-117, "Guide to Adopting and Using the Security Content Automation Protocol." E-mail comments to [email protected] by Friday, June 12.

Saturday, May 23, 2009

NASA's NEBULA Space Cloud Computing Platform Launches

Earlier this week NASA took the wraps off a new Cloud Computing platform called NEBULA, or what I'm calling the "Space Cloud", described as a way to manage research-class computing capacity. NASA describes NEBULA as "a Cloud Computing environment integrating a set of open-source components into a seamless, self-service platform."

I found the location of the Space Cloud particularly interesting. The primary NEBULA data center is at Ames Research Center, in the Ames Internet Exchange (AIX). AIX was formerly "MAE-West", one of the original nodes of the Internet, and is still a major peering location for Tier 1 ISPs, as well as being the home of the "E" root name servers. Basically, you can't find a better location to put a cloud than the birthplace of the Internet.

NASA also put out a request for "Computonauts" through the TESS Community Observer program, which will allow teams of citizen scientists to propose, test, refine and rank algorithms for on-board analysis of image data to support serendipitous science.

NEBULA is currently in a limited beta; NASA is looking for beta testers who are interested in working with NASA projects to test drive the Cloud. They go on to state that "users desiring to utilize the underlying NEBULA components directly will be required to pass the necessary security reviews, content reviews and legal certifications themselves." Not sure what that means, but it sounds like an interesting project nonetheless.

Below is an architectural diagram of the NEBULA platform.

#TwitterData Proposal

I just stumbled upon an interesting proposal for a simple, open specification for embedding data in Twitter messages. The proposal was released earlier this week by Todd Fast (@toddfast), chief architect of Sun Microsystems' Platform as a Service (PaaS) cloud offerings and CTO and founding principal of zembly.com (Sun's cloud application platform and browser-based, extensible, social development environment), and Jiri Kopsa (@jirikops), a social platform developer at Sun Microsystems who also co-founded zembly.com.

The Twitter Data proposal is described as "a simple, open, semi-structured format for embedding machine-readable, yet human-friendly, data in Twitter messages. This data can then be transmitted, received, and interpreted in real time by powerful new kinds of applications built on the Twitter platform."

The proposal seems like an ideal basis for building realtime semantic applications on top of twitter or other micro-blogging services. The authors had this to say about the semantic aspects of the proposal, "We have designed Twitter Data with an eye to layering additional semantic structures on top. We will discuss these possibilities in a later version of this proposal."

Here is an example Twitter Data message:

I love the #twitterdata proposal! $vote +1
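
As a rough sketch of how an application might pull such name/value pairs out of a message -- the "$name value" convention follows the example above, while the exact grammar is defined by the proposal itself, not this snippet:

    # Toy parser for Twitter Data style "$name value" pairs in a tweet.
    import re

    def extract_twitter_data(tweet):
        """Return {name: value} for each "$name value" pair found in the message."""
        return dict(re.findall(r"\$(\w+)\s+(\S+)", tweet))

    print(extract_twitter_data("I love the #twitterdata proposal! $vote +1"))
    # {'vote': '+1'}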

See the proposal at http://twitterdata.org

Friday, May 22, 2009

Announcing The Cloud Computing Newswire Service

Over the last few weeks I have been inundated with cloud computing PR-related inquiries. So I had an idea: why not create a Cloud Computing Newswire Service as a conduit for Cloud Computing related news, media announcements and press releases? It's a place where you can get your cloud related products, services and various announcements out to the world.

The Cloud Computing Newswire Service is syndicated across the Sys-con & Ulitzer network of websites, magazines, newsletters, blogs and syndication partners. In April the Ulitzer network of sites saw 140,000 visitors who read 197,000 pages of articles, blogs and news stories. The syndicated page views (outside the Ulitzer domain) exceeded 1,369,983 page views with more than 3,000 Ulitzer stories appearing through "Google News" in addition to the 30,000 stories syndicated through various SYS-CON sites.

How To Contribute to The Cloud Computing Newswire Service
If you're interested in contributing or submitting your own press and news releases, you must first sign up for a Ulitzer account. After you've registered, go to your Account management page, select "topics" and add "The Cloud Computing Newswire" in the add topic field. At this point you'll be able to submit PR and news stories.

or Submit a PR or News Release Directly (No Ulitzer Registration Required)

See you on the Cloud Computing Newswire.

Thursday, May 21, 2009

ElasticVapor Disclosure Policy

This policy is valid from 21 May 2009 (last updated March 27/2012)

This blog is a personal blog written and edited by Reuven Cohen, the owner of the blog as well as associated copyrights, domain names, and content. For questions about this blog, please contact ruv (at) ruv dot net.

This blog does not accept any form of advertising, sponsorship, or paid insertions. It is written for Reuven Cohen's own purposes. However, he may be influenced by his background, occupation, religion, political affiliation or experience.

The owner(s) of this blog [may] receive compensation from this blog.

The owner(s) of this blog [might at times be] compensated to provide opinion on products, services, websites and various other topics with full disclosure provided. The views and opinions expressed on this blog are purely the blog owners. If we claim or appear to be experts on a certain topic or product or service area, we will only endorse products or services that we believe, based on our expertise, are worthy of such endorsement. Any product claim, statistic, quote or other representation about a product or service should be verified with the manufacturer or provider.

The owner(s) of this blog would like to disclose the following existing relationships. These are companies, organizations or individuals that may have a significant impact on the content of this blog. I am the former founder of Enomaly Inc and currently employed as Senior Vice President at Virtustream Inc, 4800 Montgomery Lane, Suite 1100, Bethesda, MD 20814

I am an unpaid co-host of the Intel sponsored Digital Nibbles Podcast as of November 2011 and have spoken publicly on behalf of Intel on several occasions on an unpaid and ongoing basis.

I serve on the following corporate or non profit advisory boards: TechStars Mentor, Strategic advisor for Sun Microsystems, CloudCamp Founder, Cloud Computing Interoperability Forum Founder, Information Technology Association of Canada (ITAC) Board of Governors, Seneca@York University Board of Advisors, Microsoft Windows Azure Product Planning Program Board of Advisors, IDG CloudWorld Advisory Board and the Canadian Youth Business Foundation Mentor.

Current government related activities include (but are not limited to) the governments of the United States, Canada, United Kingdom, Switzerland, Brasil, Israel, Japan, South Korea, Philippines, Taiwan, and China. I have also participated as a member of the Sino-EU, America Cooperation Roundtable forum sponsored by the European Economic Union in Beijing, China, May 2011

U.S Federal Government Launches Data.gov

I'm happy to announce that the U.S. Federal Government earlier today launched the new Data.Gov website. The primary goal of Data.Gov is to improve access to Federal data and expand creative use of those data beyond the walls of government by encouraging innovative ideas (e.g., web applications). Data.gov strives to make government more transparent and is committed to creating an unprecedented level of openness in Government. The openness derived from Data.gov will strengthen the Nation's democracy and promote efficiency and effectiveness in Government.

As a priority Open Government Initiative for President Obama's administration, Data.gov increases the ability of the public to easily find, download, and use datasets that are generated and held by the Federal Government. Data.gov provides descriptions of the Federal datasets (metadata), information about how to access the datasets, and tools that leverage government datasets. The data catalogs will continue to grow as datasets are added. Federal, Executive Branch data are included in the first version of Data.gov.

Public participation and collaboration will be one of the keys to the success of Data.gov. Data.gov enables the public to participate in government by providing downloadable Federal datasets to build applications, conduct analyses, and perform research. Data.gov will continue to improve based on feedback, comments, and recommendations from the public and is actively encouraging individuals to suggest datasets they'd like to see, and to rate and comment on current datasets.

In a recent interview on NextGov.com, Federal CIO Vivek Kundra shed some details on Data.Gov & Open Government Initiative. "We recognize the power of tapping into the ingenuity of the American people and recognize that government doesn't have a monopoly on the best ideas or always have the best idea on finding an innovative path to solving the toughest problems the country faces. By democratizing data and making it available to the public and private sector ... we can tap into that ingenuity."

One of the most telling aspects of the new Data.Gov website is the data policy, which outlines a broad usage policy stating that data accessed through Data.gov do not, and should not, include controls over their end use. In a sense the U.S. federal government has open sourced large portions of public information. It will be very interesting to see how people use this fountain of information.

Wednesday, May 20, 2009

Google's GDrive Cloud Storage Offering "Coming Soon"

There has been a lot of talk this week about a new cloud storage service being rolled out by Google as part of their Google App Engine offering. The announcement was part of a presentation at the Interop Conference in Las Vegas by Mike Repass, a Product Manager at Google, who indicated it would be made available 'within weeks'.

During the presentation, Repass said, "We started out Google AppEngine as an abstract virtualization service. Our static content solution is something we're shipping soon."

He didn't provide much detail on what interface or cost model would be used, but it is widely believed that it will be offered as part of Google's expanding App Engine platform -- building upon a quota-based cost model similar to the one currently in place for App Engine.

I find it interesting that a few industry pundits have been reporting the so-called Google "GDrive" as an "S3 killer", but in reality this may very well help strengthen S3 by providing yet another point of cloud redundancy. I think a potential opportunity is to think of GDrive as part of a larger Redundant Array of Cloud Storage Providers (RACS), or "Cloud RAID", where your storage is dispersed among a global pool of cloud providers.

Google isn't the only one getting into the cloud storage business. Recently Canonical, the commercial company behind the popular Ubuntu Linux distribution, announced plans to offer a cloud-based storage service called Ubuntu One, which can be loosely described as a desktop Dropbox where you can sync your files, share your work with others or work remotely.

Moving forward I believe we are about to see the emergence of a "Cloud RAID" model where you are able to connect multiple remote cloud storage services (S3, Nirvanix, CloudFS, GDrive) in a broader data cloud. One possible Cloud RAID mechanism is the new SNIA XAM Initiative, which aims to drive adoption of the eXtensible Access Method (XAM) specification as an interoperable data storage interface standard.
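
Here is a rough Python sketch of the Cloud RAID idea -- mirror every object across several independent providers and read from whichever one still answers. The provider classes are stand-ins for real SDK clients, invented for the example:

    # Illustrative "Cloud RAID" (RAID-1 style mirroring across providers).
    class CloudStore:
        """Minimal interface a real provider adapter would implement."""
        def put(self, key, data):
            raise NotImplementedError
        def get(self, key):
            raise NotImplementedError

    class DictStore(CloudStore):
        """In-memory stand-in for a real provider, just for the example."""
        def __init__(self):
            self.data = {}
        def put(self, key, data):
            self.data[key] = data
        def get(self, key):
            return self.data[key]

    class CloudRAID:
        def __init__(self, providers):
            self.providers = providers  # e.g. adapters for S3, Nirvanix, GDrive

        def put(self, key, data):
            # Write the same object to every provider.
            for p in self.providers:
                p.put(key, data)

        def get(self, key):
            # Read from the first provider that still has the object.
            for p in self.providers:
                try:
                    return p.get(key)
                except Exception:
                    continue
            raise KeyError(key)

    raid = CloudRAID([DictStore(), DictStore()])
    raid.put("backup.tgz", b"...")
    print(raid.get("backup.tgz"))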

XAM looks to solve key cloud storage problem spots, including:

  • Interoperability: Applications can work with any XAM conformant storage system; information can be migrated and shared
  • Compliance: Integrated record retention and disposition metadata
  • ILM Practices: Framework for classification, policy, and implementation
  • Migration: Ability to automate migration process to maintain long-term readability
  • Discovery: Application-independent structured discovery avoids application obsolescence

So back to GDrive: rather than describe this as an Amazon S3 killer, I would describe it as a CDN killer. Akamai probably has the most to lose, not Amazon.

Tuesday, May 19, 2009

Wolfram|Alpha's Computation as a Service

The idea of distributed computational computing isn't new; if anything it is one of the oldest concepts within computing. Recently there has been a renewed buzz within the world of computational computing with the release of Wolfram|Alpha, an on-demand computational service with the goal of making all systematic knowledge immediately computable and accessible to everyone.

Wolfram|Alpha states their aim is to collect and curate all objective data; implement every known model, method, and algorithm; and make it possible to compute whatever can be computed about anything. More simply their goal is to build on the achievements of science and other systematizations of knowledge to provide a single source that can be relied on by everyone for definitive answers to factual queries.

Among their short-term plans are the release of developer APIs for your very own computational mashups (Stock Market Analyst for Dummies, anyone?), professional and corporate versions, custom versions for internal data, connections with other forms of content, and deployment on emerging mobile and other platforms. In short, Wolfram|Alpha is attempting to build a computation-as-a-service platform where any information is instantly and easily understandable by any human or device. This is going to be huge, and the possibilities it enables are endless.

Although the current version of the site is admittedly somewhat limited, the future opportunities are certainly not. Imagine being able to instantly know the answer to any question at anytime, for free.

Gartner Includes Enomaly in New Cloud Reports

We seem to be on a roll lately at Enomaly. I'm happy to announce that Enomaly has been included in two new Gartner reports on Cloud Computing. I'd like to personally thank Daryl Plummer at Gartner for the honor of including our Elastic Computing Platform as an example.

I found the report on Cloud 'Capacity Overdraft' particularly interesting. In the report, Plummer does a great job of outlining one of the key value propositions of cloud computing -- the ability to increase or decrease service capacity on demand and to pay for only what you use. This is commonly referred to as "cloud service elasticity." Along with that idea is a complementary idea called "capacity overdrafting" or "cloudbursting": the ability to automatically get more capacity from a different cloud infrastructure when the primary cloud infrastructure is overloaded.
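
A toy sketch of the overdraft decision itself, with invented capacity numbers -- run on the primary infrastructure until it is full, then burst the remainder to a secondary cloud:

    # Illustrative cloudbursting / capacity-overdraft placement decision.
    PRIMARY_CAPACITY = 100   # VM slots in the primary (e.g. internal) cloud
    primary_in_use = 92

    def place_workload(vms_requested):
        free = PRIMARY_CAPACITY - primary_in_use
        if vms_requested <= free:
            return ("primary", vms_requested, 0)
        # Overdraft: whatever does not fit bursts to the secondary provider.
        return ("burst", free, vms_requested - free)

    print(place_workload(20))  # ('burst', 8, 12) -> 8 local VMs, 12 in the overflow cloud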

Below are links to the reports, which you need to be customer of Gartner in order to download.

Anatomy of a Cloud 'Capacity Overdraft': One Way Elasticity Happens
14 May 2009, by Daryl C. Plummer

"Cloudbursting" (capacity overdrafting) is automatically adding and subtracting compute capacity on demand to handle workloads in the cloud. We examine one way this technology works, and its implications.

Three Levels of Elasticity for Cloud Computing Expand Provider Options
13 May 2009, Daryl C. Plummer & David Mitchell Smith

Scaling capacity up and down for a cloud service is commonly called elasticity. However, the concept is more complex to implement than to describe. We examine the issues.

NetworkWorld Names Enomaly Top 10 Company to Watch

NetworkWorld has named Enomaly in their latest list of the Top 10 cloud computing companies to watch. What is exciting about this news is that Enomaly has been included in some very good company, including AT&T, Amazon, Microsoft, GoGrid, Google, Rackspace & Rightscale. Needless to say, to be included in such a who's-who list of cloud computing is a real honor. Thank you, Network World.

Monday, May 18, 2009

Keeping up with the Joneses -- Amazon Releases Elastic Load Balancing, Auto Scaling, and Amazon CloudWatch

I'm about to head out on vacation in beautiful British Columbia, but before I head to the airport, I wanted to let everyone know that Amazon Web Services continues to set a breakneck pace. Earlier today they released several new features for their cloud toolset. These include Elastic Load Balancing, Auto Scaling, and Amazon CloudWatch.

Amazon CloudWatch tracks and stores a number of per-instance performance metrics including CPU load, Disk I/O rates, and Network I/O rates. The metrics are rolled-up at one minute intervals and are retained for two weeks. Once stored, you can retrieve metrics across a number of dimensions including Availability Zone, Instance Type, AMI ID, or Auto Scaling Group. Because the metrics are measured inside Amazon EC2 you do not have to install or maintain monitoring agents on every instance that you want to monitor. You get real-time visibility into the performance of each of your Amazon EC2 instances and can quickly detect underperforming or underutilized instances.

Auto Scaling lets you define scaling policies driven by metrics collected by Amazon CloudWatch. Your Amazon EC2 instances will scale automatically based on actual system load and performance but you won't be spending money to keep idle instances running. The service maintains a detailed audit trail of all scaling operations. Auto Scaling uses a concept called an Auto Scaling Group to define what to scale, how to scale, and when to scale. Each group tracks the status of an application running across one or more EC2 instances. A set of rules or Scaling Triggers associated with each group define the system conditions under which additional EC2 instances will be launched or unneeded EC2 instances terminated. Each group includes an EC2 launch configuration to allow for specification of an AMI ID, instance type, and so forth.
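
As a rough illustration of the trigger idea (not the actual Auto Scaling API), here's a toy rule that looks at recent CPU samples and decides whether a group should grow or shrink. The thresholds and metric values are invented:

    # Toy scaling trigger: average recent CPU against upper/lower bounds.
    def scaling_decision(cpu_samples, upper=75.0, lower=25.0):
        avg = sum(cpu_samples) / len(cpu_samples)
        if avg > upper:
            return "launch instance"
        if avg < lower:
            return "terminate instance"
        return "no change"

    print(scaling_decision([82.0, 91.5, 78.3]))  # "launch instance"
    print(scaling_decision([12.0, 18.4, 9.9]))   # "terminate instance"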

Finally, the Elastic Load Balancing feature makes it easy for you to distribute web traffic across Amazon EC2 instances residing in one or more Availability Zones. You can create a new Elastic Load Balancer in minutes. Each one contains a list of EC2 instance IDs, a public-facing URL, and a port number. You will need to use a CNAME record in your site's DNS entry to associate this URL with your application. You can use Health Checks to ascertain the health of each instance via pings and URL fetches, and stop sending traffic to unhealthy instances until they recover.




Friday, May 15, 2009

Cloud Computing's Watershed Week

As someone who has been following cloud computing since the start, I've seen a few key moments in its progression from a fringe term to a mainstream concept. These include August 2006, with the private beta launch of Amazon EC2, and June 2008, with what is now described as the "Week of Cloud" in San Francisco, which included the launch of CloudCamp as well as several key cloud conferences. I believe this week is one of those times, with the U.S. Government's formal and vocal adoption of cloud computing.

I'd like to recap a few of the important things that have happened over the last week or so.

The U.S Federal Government has indicated they have hired Patrick Stingley as federal cloud CTO (Federal Cloud Czar).

The White House unveiled a Cross-Cutting Programs Document that outlines the administration's 2010 budget requests, including pilot projects that identify common services and solutions and that focus on using cloud computing.

The National Institute of Standards and Technology (NIST) revealed a draft of a formal government definition for cloud computing (including the definition of infrastructure as a service), as well as an upcoming NIST Special Publication on Cloud Computing and Security.

The GSA announced a cloud computing RFI which included provisions for cloud interoperability and portability.

As I've said previously, I believe this is a landmark moment for Cloud Computing, and I'm not the only one who thinks so. Earlier this week David Mihalchick, manager of Google's federal business development team, said in an InformationWeek interview that the "White House's budget addendum could be a watershed event for the federal cloud computing market".

I don't know about you, but I can't wait to see what happens next. I'll keep you posted.

Citrix Jumps on Cloud Hosting Bandwagon

The cloud hosting & service provider market seems to be becoming the key battleground for cloud enablers formerly known as virtualization vendors. Following recent announcements from VMware and Cisco, Citrix has announced a Service Provider Program aimed squarely at service/hosting providers who deliver software services and hosted applications to end-user customers on a rental, subscription or services basis, a.k.a. Cloud Service Providers (CSPs).

The most interesting aspect of the new program has to do with Citrix's approach to billing. The program is designed with cloud business goals in mind with no up-front license fee commitments. Cloud hosters need only submit monthly usage reports and are invoiced accordingly. The program is being offered as part of Citrix Cloud Center (C3) which they describe as designed to give cloud providers a complete set of service delivery infrastructure building blocks for hosting, managing and delivering cloud-based computing services.

I find this per-use approach to software billing particularly interesting. For Enomaly's forthcoming ECP Service Provider Edition, we also decided to use a similar approach to billing our hosting customers, who were asking to be billed on a monthly usage basis. Similar to Citrix, our approach is based on a per-core rate calculated from the maximum core count over the course of a particular month. In trying to determine the optimal billing approach, we found that this allows both the service provider and the cloud vendor to grow in unison, a kind of symbiotic relationship. Simply, the better they do, the better we do.
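
As a simplified illustration of that model (the rate and the usage figures below are invented), the invoice boils down to the peak core count for the month times the per-core rate:

    # Illustrative per-core, peak-usage monthly billing.
    PER_CORE_MONTHLY_RATE = 25.00  # hypothetical dollars per core per month

    def monthly_invoice(reported_core_counts):
        peak_cores = max(reported_core_counts)
        return peak_cores, peak_cores * PER_CORE_MONTHLY_RATE

    cores_reported = [120, 126, 140, 133, 128]  # usage samples over the month
    print(monthly_invoice(cores_reported))      # (140, 3500.0)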

Of course, the bigger question in this type of utility, usage-based billing is finding the right middle ground between self-reporting (trust) and automated system reporting. In Citrix's case this is still unclear. What is clear is that service providers are quickly embracing this new per-use enterprise software sales model. It will be interesting to see if the other players in the space start to use a similar approach when offering cloud enablement software & platforms.

Thursday, May 14, 2009

Has Google Gotten too Big to Fail?

Lately we've been on a roll within the cloud interoperability movement, with the recent inclusion of interoperability & portability guidelines in the Federal Government's new cloud computing mandate and a long list of supporters publicly speaking and advocating for interoperability. The topic has become a central issue within the emerging cloud technology scene. Yet one problem still remains; it's what I'm calling the "Fear Of God" (FOG) effect.

Today was one of those days. The folks at venturebeat.com said it best: "When Google fails, the Internet fails". If you need proof, just look at your web stats between 11am and 12pm (EST) this morning. If your sites are like mine, you'll notice a sharp drop in traffic across most of your websites during the outage. It was even worse if you rely on Google Apps or use a Google Blog -- you're dead in the water. I couldn't even complain on my blog because it wasn't responding.

The fact is Google has become the de facto way most internet users access information, whether it's search, mail or other various internet applications, like it or not Google has become the conduit for most people's internet experience.

So back to why this matters for cloud interoperability. My question is: has Google gotten too big to fail? Simply, Google's failure today perfectly illustrates why it is important to have an exit strategy, and this is especially true when relying on cloud computing services. Again, it's not that I want to leave; it's just nice to know it's possible, and a little fear in this particular case certainly helps shed light on why portability and interoperability are so important.

Wednesday, May 13, 2009

Federal Cloud Capability RFI Released by U.S. Government

I'm happy to be able to disclose today that the Federal Government of the United States released its cloud computing RFI earlier this afternoon. Enomaly was fortunate to have been included in recent consultations with the government, and we've been impressed by the vision of this administration and the speed with which they've taken action.

What makes this RFI especially exciting is that for the first time things are really starting to move very quickly toward the creation of a federal cloud capability, including an actual budget, which was part of the 2010 federal budget recommendation released earlier this week by the White House. In reviewing the Federal Cloud RFI it seems that a federal "elastic computing cloud" may soon be a reality.

To give you some background, the RFI comes from the GSA Office of the Chief Information Officer (OCIO), in concert with the IT Infrastructure Line of Business (ITI LoB), which has requested capability statements and responses examining cloud-related business models, pricing models, and Service Level Agreements (SLAs) from vendors who provide Infrastructure as a Service (IaaS) offerings.

The IT Infrastructure Line of Business (ITI LoB) is a government-wide initiative sponsored by the Office of Management and Budget (OMB). The ITI LoB focuses on the effective use of IT Infrastructure systems, services and operational practices in the federal government. The General Services Administration (GSA) has been designated by OMB as the Managing Partner for this initiative, but governance is shared across more than two dozen agencies.

For me the most important aspect of this RFI is the emphasis they've placed on Cloud Computing Interoperability and Portability, specifically in section 5 of the RFI document -- something I've been pushing for in my recent Washington meetings. I'm ecstatic they've included some of my recommendations, including an "exit strategy", prevention of vendor lock-in and multi-cloud ("cloud-to-cloud") support.

Below are the questions from Section 5 of the RFI (Interoperability and Portability):

5.1 Describe your recommendations regarding “cloud-to-cloud” communication and ensuring interoperability of cloud solutions.
5.2 Describe your experience in weaving together multiple different cloud computing services offered by you, if any, or by other vendors.
5.3 As part of your service offering, describe the tools you support for integrating with other vendors in terms of monitoring and managing multiple cloud computing services.
5.4 Please explain application portability; i.e. exit strategy for applications running in your cloud, should it be necessary to vacate.
5.5 Describe how you prevent vendor lock in.

Download the RFI Here.

Redux: The Rise of Dark Cloud Computing

Recently Andreas M. Antonopoulos wrote a story for Computer World / Network World titled "Dark cloud computing" which seems to borrow a key concept from a post I wrote almost a year ago. So I thought I'd go ahead and repost my original "The Rise of The Dark Cloud" from Saturday, July 26, 2008.
-----------.

The Rise of The Dark Cloud

For nearly as long as the internet has been around there have been private subnetworks called darknets. These private, covert and often secret networks were typically formed as decentralized groups of people engaged in the sharing of information, computing resources and communications, often for illegal activities.

Recently there has been a resurgence of interest in the darknet, ranging from the more unsavory, such as P2P filesharing and botnets, to more mainstream usages such as inter-government information sharing, bandwidth alliances or even offensive military botnets. All of these activities point to a growing interest in the form of covert computing I call "dark cloud computing", whereby a private computing alliance is formed. In this alliance, members are able to pool together computing resources to address the ever expanding need for capacity.

According to my favorite source of quick disinformation, the term Darknet was originally coined in the 1970s to designate networks which were isolated from ARPANET (which evolved into the Internet) for security purposes. Some darknets were able to receive data from ARPANET but had addresses which did not appear in the network lists and would not answer pings or other inquiries. More recently the term has been associated with the use of dark fiber networks, private file sharing networks and distributed criminal botnets.

The botnet is quickly becoming a tool of choice for governments around the globe. Col. Charles W. Williamson III, staff judge advocate, Air Force Intelligence, Surveillance and Reconnaissance Agency, recently wrote in the Armed Forces Journal about the need for botnets within the US DoD. In his report he writes, "The world has abandoned a fortress mentality in the real world, and we need to move beyond it in cyberspace. America needs a network that can project power by building an af.mil robot network (botnet) that can direct such massive amounts of traffic to target computers that they can no longer communicate and become no more useful to our adversaries than hunks of metal and plastic. America needs the ability to carpet bomb in cyberspace to create the deterrent we lack."

I highly doubt the US is alone in this thinking. The world is more than ever driven by information, and botnet usage is not limited to governments but extends to enterprises as well. In our modern information-driven economy the distinction between corporation and governmental organization has become increasingly blurred. Corporate entities are quickly realizing they need the same network protections. By covertly pooling resources in the form of a dark cloud or cloud alliance, members are able to counter or block network threats in a private, anonymous and quarantined fashion. This type of distributed network environment may act as an early warning and threat avoidance system. An anonymous cloud computing alliance would enable a network of decentralized nodes capable of neutralizing potential threats through a series of counter measures.

My question is: are we on the brink of seeing the rise of private corporate darknets, aka dark clouds? And if so, what are the legal ramifications, and do they outweigh the need to protect ourselves from criminals who can and will use these tactics against us?

Cloud Computing as a Zero Sum Game

From governments to the ordinary individual, the internet has become central to our personal technological identity. The term "cloud" has become an all encompassing buzzword, open to interpretation, meaning everything and nothing, meaningful and meaningless.

Recently I have begun to hear a common refrain propagate through the tech scene: the "big switch" made famous by Nick Carr, the idea that a wholesale shift is occurring within the technology world. Although the phrase is being thrown around frequently, one major issue remains -- the perception that cloud computing is an all or nothing option.

The problem with this all or nothing mentality is it places cloud computing as a binary option, where the payoff is absolute or nothing at all. The answer isn't that simple. Like most emerging technologies cloud computing isn't a zero sum game that hinges on the notion that "there must be one winner and one loser, for every gain there is a loss." In reality cloud computing is a nice way to describe an evolutionary step in technology. Simply, the next big thing, not some sort of final solution to an unknown and possibly infinite set of problems.

What is clear to me is that over the last 20 years the internet has become a key piece that binds our modern society together. It has become the information on-ramp to a limitless world of opportunities, both social and economic. What the cloud has enabled more than anything else is a blank slate, a new beginning, an opportunity to reimagine how we as a global community collaborate, learn and advance. The new reality is that the Internet has become a fundamental human right, and the cloud is how we enable it.

My random thought on this random day in a random year.

Tuesday, May 12, 2009

A World Wide Stream of Consciousness

Just finished reading a very enlightening post titled "Is The Stream What Comes After the Web?" by Nova Spivack, twine.com founder. In his post he describes a theory he calls "the stream."

He goes on to say "The Stream is what the Web is thinking. It is the dynamic activity of the Web: the conversations, the live streams of audio and video, the changes to Web sites that are happening, the ideas and trends that are rippling between millions of Web pages, applications, and human minds...

If the Internet is our collective nervous system, and the Web is our collective brain, then the Stream is our collective mind. These three layers are interconnected, yet are distinctly different aspects, of our emerging and increasingly awakened planetary intelligence.

The emergence of the Stream is an interesting paradigm shift that may turn out to characterize the next evolution of the Web, this coming third-decade of the Web's development. Even though the underlying data model may be increasingly like a graph, or even a semantic graph, the user experience will be increasingly stream oriented."

Continuing on Spivack's theory, I can't help but think of The Stream as a kind of World Wide Stream of Consciousness applied in real time to an endless supply of individually created information (Twitter, Facebook, etc.). In other words, think of it as a global narrative that seeks to portray an instant collection of various points of view. The technical data mining of one or many thought processes -- instantly.

The Future of Web Hosting is in The Cloud

Following VMWare and others, Cisco has announced a new cloud strategy focused on enabling cloud hosting & service providers. The program, called "Unified Service Delivery," focuses on service providers who want to offer cloud services publicly.

According to a press release, the new Cisco Unified Service Delivery solution is optimized to enable virtualization within the data center, between data centers and across Next-Generation Networks (NGN). Like VMWare, Cisco is trying to tap into its existing ecosystem of partners to build, run and deliver new cloud services, acting as a kind of intermediary rather than a direct cloud provider. In a recent GigaOM post, Simon Aspinall, senior director of service provider marketing at Cisco, described "an arms dealer role -- equipping its customers to fight that battle instead of offering a platform of its own." In my opinion a very smart place to be.

What is clear in this move is that the opportunity for cloud computing enablers such as Cisco, IBM, Sun, VMWare and smaller players like (warning: blatant self promotion ahead) Enomaly is that, in the short term, the "real money" will be made in converting the broad group of existing hosting providers into cloud computing providers. As a proof point, VMWare has also recently turned its focus to the so called cloud service provider market with its vCloud initiative. The VMWare pitch rests on the idea of VMware-specific interoperability. As the dominant virtualization platform, they don't have to worry about interoperability with anyone other than themselves, and for this reason it's a key part of their pitch. (We're interoperable with ourselves.) According to the vCloud site, they claim that your application will work in the cloud as it did on-premise, and will remain portable between your data center and other vCloud Service Providers.

Similarly at Enomaly, we have also begun to roll out a cloud service provider specific version of our ECP platform and have seen a significant level of interest from web hosting providers around the globe. What has really surprised me is how much interest we're seeing from overseas. It has become clear to me that the true opportunity for cloud hosting is in enabling a hybrid cloud -- a cloud that allows hosting providers to recapture revenue being lost to rival cloud providers. For these providers it's become very easy to measure lost revenue: all that is required is to monitor traffic to and from Amazon Web Services, along the lines of the sketch below.
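Here is a minimal, purely illustrative sketch of that kind of monitoring, assuming you already export flow records (destination IP, bytes) from your network edge and maintain a list of Amazon's address prefixes. The prefixes and flows below are placeholders I made up for the example, not an authoritative list.

# Tally outbound bytes whose destination falls inside known AWS address
# ranges -- a rough proxy for "revenue walking out the door."
import ipaddress

AWS_PREFIXES = [ipaddress.ip_network(p) for p in (
    "72.21.192.0/19",      # placeholder prefixes only
    "216.182.224.0/20",
)]

def is_aws(ip):
    """True if the address sits inside one of the listed AWS prefixes."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in AWS_PREFIXES)

def aws_bytes(flows):
    """Sum bytes for flows whose destination is (believed to be) AWS."""
    return sum(b for dst, b in flows if is_aws(dst))

if __name__ == "__main__":
    # (destination IP, bytes transferred) pairs, e.g. parsed from flow logs
    sample_flows = [("72.21.194.1", 5_000_000), ("198.51.100.9", 250_000)]
    print(f"Bytes headed to AWS: {aws_bytes(sample_flows):,}")

In practice you would feed this from NetFlow or similar exports; the point is that the measurement itself is almost trivial once a hosting provider decides to look.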

White House Leading Cloud Computing Charge

Very interesting developments today from the U.S. federal government on cloud computing. Bob Marcus at the OMG has sent me an overview of a White House Cross-Cutting Programs Document released earlier. The document outlines the administration's 2010 budget requests. According to the document White House officials want agencies to launch pilot projects that identify common services and solutions and that focus on using cloud computing. I think the most important aspect of this announcement is that "cloud computing" is now being mandated from the highest levels of the U.S. government.

Marcus has some great insights into the opportunity, saying: "I think the leadership from above offers an opportunity to put in place a coherent strategy. In my past industry experience, I have recommended setting up an enterprise Coordination Team to guide the introduction of emerging technologies. The basic idea is for this Team to provide expertise, recommend standardizations, and facilitate reuse for individual projects while documenting best practices, lessons learned from experience, and future directions."

He goes on to say "Without this coordination, there is a strong probability that many pilot projects will have successful local Cloud Deployments but it will not be possible to share resources across projects and agencies. (This was one of the downsides of federal SOA implementations.) Even worse possibilities are inappropriate Cloud Deployments that have publicized security, performance, reliability, and/or cost problems. (This often happens when inexperienced project planners deal directly only with vendors)."

I also believe a Coordination Team makes a lot of sense and is potentially another reason to create a cloud computing trade association, something I have been actively pushing for. The fact is that currently there is no unified way for the federal government to interface with the emerging cloud industry other than a series of invite-only insider "summits." I also think there is an opportunity for the federal government, through organizations such as NIST, to draft not only common definitions for cloud computing but also federal policies dictating its usage. Imagine a federal mandate advocating cloud interoperability among all federal cloud vendors.

---

Optimizing Common Services and Solutions/Cloud-Computing Platform

(From http://www.whitehouse.gov/omb/budget/fy2010/assets/crosscutting.pdf Page 157-158)

The Federal technology environment requires a fundamental reexamination of investments in technology infrastructure. The Infrastructure Modernization Program will be taking on new challenges and responsibilities. Pilot projects will be implemented to offer an opportunity to utilize more fully and broadly departmental and agency architectures to identify enterprise-wide common services and solutions, with a new emphasis on cloud-computing. The pilots will test a variety of services and delivery modes, provisioning approaches, options, and opportunities that cloud computing brings to Federal Government. Additionally, the multiple approaches will focus on measuring service, cost, and performance; refining and scaling pilots to full capabilities; and providing financial support to accelerate migration. These projects should lead to significant savings, achieved through basic changes in future Federal information infrastructure investment strategies and elimination of duplicative operations at the agency level.

Cloud-computing is a convenient, on-demand model for network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. The cloud element of cloud-computing derives from a metaphor used for the Internet, from the way it is often depicted in computer network diagrams. Conceptually it refers to a model of scalable, real-time, internet-based information technology services and resources, satisfying the computing needs of users, without the users incurring the costs of maintaining the underlying infrastructure. Examples in the private sector involve providing common business applications online, which are accessed from a web browser, with software and data stored on the “cloud” provider’s servers.

Implementing a cloud-computing platform incurs different risks than dedicated agency data centers. Risks associated with the implementation of a new technology service delivery model include policy changes, implementation of dynamic applications, and securing the dynamic environment. The mitigation plan for these risks depends on establishing a proactive program management office to implement industry best practices and government policies in the management of any program. In addition, the Federal community will need to actively put in place new security measures which will allow dynamic application use and information-sharing to be implemented in a secure fashion. In order to achieve these goals, pilot programs will provide a model for scaling across the Government.

Pilots supporting the implementation of a cloud-computing environment include:

— End-user communications and computing—secure provisioning, support (help desk), and operation of end-user applications across a spectrum of devices; addressing telework and a mobile workforce.

— Secure virtualized data centers, with Government-to-Government, Government-to-Contractor, and Contractor-to-Contractor modes of service delivery.

— Portals, collaboration and messaging —secure data dissemination, citizen and other stakeholder engagement, and workforce productivity.

— Content, information, and records management — delivery of services to citizens and workforce productivity.

— Workflow and case management—delivery of services to citizens and workforce productivity.

— Data analytics, visualization, and reporting— transparency and management.

— Enterprise Software-as-a-Service—for example, in financial management.

Cloud-computing will help to optimize the Federal data facility environment and create a platform to provide services to a broader audience of customers. Another new program, the “work-at-a-distance” initiative, will leverage modern technologies to allow Federal employees to work in real time from remote locations, reducing travel costs and energy consumption, and improving the Government’s emergency preparedness capabilities.

Cloud-computing and “work-at-a-distance” represent major new Government-wide initiatives, supported by the CIO Council under the auspices of the Federal CIO (OMB’s E-Government Administrator), and funded through the General Services Administration (GSA) as the service-provider.

Of the investments that will involve up-front costs to be recouped in outyear savings, cloud-computing is a prime case in point. The Federal Government will transform its Information Technology Infrastructure by virtualizing data centers, consolidating data centers and operations, and ultimately adopting a cloud-computing business model. Initial pilots conducted in collaboration with Federal agencies will serve as test beds to demonstrate capabilities, including appropriate security and privacy protection at or exceeding current best practices, developing standards, gathering data, and benchmarking costs and performance. The pilots will evolve into migrations of major agency capabilities from agency computing platforms to base agency IT processes and data in the cloud. Expected savings in the outyears, as more agencies reduce their costs of hosting systems in their own data centers, should be many times the original investment in this area.

Sunday, May 10, 2009

Are Trademarks Harming Cloud Computing?

A few blogs (slashdot) are reporting a story posted last week on PCworld.com titled "Trademarks: The Hidden Menace" in which Keir Thomas asks why open source advocates are keen to suggest patent and copyright reform, yet completely ignore the issue of trademarks. In his story, he says "Trademarking is just as dangerous as its two intellectual property brothers."

Thomas goes on to say "Trademarking encourages organizations to foster back-room deals, and negotiations to get permissions. It's almost exclusively a domain for lawyers. Does this sound familiar? That's right -- it's just like the kind of deals that go on over copyright and patents in the boardrooms of big corporations. And just like patents and traditional copyright, it's totally incompatible with the spirit and ethos of open source software."

Given the discussions around the potential trademarking of various cloud terms such as "netbook," "cloud networking" and "cloud computing," the topic seems equally relevant within cloud computing.

My biggest issue with Thomas's thesis is that he wrongly asserts that "Even within the Linux community, trademarking can be used as obstructively as copyright and patenting to further business ends."

As the founder of an open source cloud computing product company, my concern with combining open source and trademark law is that although we freely encourage the adoption of our open source software, we still want to control our brand and corporate identity. At the end of the day this is a key piece of the value of an open source company or community. The trademark or brand identity can in many cases be far more valuable than any direct revenue (MySQL or Apache, for example). Like most software, perception is what matters: Apple is perceived to be more secure and less prone to security exploits, not because it's true but because Apple's brand & identity is perceived that way. The same applies to open source.

Giving away free usage of your brand's trademarks would be on par with giving away the keys to the castle.

Friday, May 8, 2009

Federal Cloud Standards Summit (July 15th)

Building upon the Federal CIO Cloud Summit this week in Washington, I have been invited to provide a keynote presentation at the upcoming Cloud Standards Summit on July 15th in Washington. My keynote will focus on providing a summary of the Federal CIO's Cloud Summit as well as next steps for supporting adoption of cloud computing within the U.S. federal government.

The summit is being held as part of the OMG Standards in Government & NGOs Workshop, July 13-15, 2009, in Arlington, Virginia. The goal of the Cloud Standards Summit is to initiate a dialogue with government IT leaders on the theme of "Coordinating Standardization Activities to Remove Government Cloud Computing Roadblocks." Potential government implementers of Cloud Computing will supply their feedback on key issues that could delay federal Cloud Computing deployments.

These issues will include areas such as security, governance, SLAs, portability/interoperability across Clouds, compliance, legacy systems integration, APIs, and virtualized resource management. At the Summit, standards groups will be asked to describe their future work on Cloud standards and how it maps to the government concerns. Government attendees will be encouraged to provide feedback. Companies working on Cloud Computing are invited to be part of the audience and provide their perspective on the concerns and proposed standardization activities.

Announcement: http://www.omg.org/news/meetings/GOV-WS/css/index.htm
Draft Agenda: http://federalcloudcomputing.wik.is/July_15%2c_2009
Wiki: http://federalcloudcomputing.wik.is

Thursday, May 7, 2009

The US Federal Government defines Cloud Computing

I've just spent the last two days in Washington DC in conversations with various US government officials regarding the opportunities for cloud computing within the federal government. During these conversations it has become very clear that the topic of "Cloud Computing" is front and center within the various departmental IT strategies going forward. All the more interesting is that the term is being mandated from the highest levels, including the new federal CIO Vivek Kundra, who indicated cloud computing was one of the biggest revolutions technology has seen in a long time.

So why does it matter what the US federal government thinks of Cloud Computing? Simple: with an IT budget of more than $70 billion a year, the US government represents the largest IT consumer on the planet. With this kind of money at stake, the influence the US government wields is enormous and directly shapes how we as an industry both define and use the cloud.

Something I found particularly interesting was that, for the first time, the federal government is moving more quickly than the private sector in both its interest in and potential adoption of what has been referred to as the federal cloud. Making things even more interesting is the appointment of Patrick Stingley as what I would describe as the federal "Cloud Czar," or more formally the Federal Cloud CTO at the General Services Administration (GSA). GSA is the federal agency that provides goods and services to other federal agencies, and it will be the point of contact for any federal cloud services, whether offered directly or procured through various cloud providers. I should also note that Stingley is also the CTO for the Dept. of the Interior. One of Stingley's first tasks is creating a development plan for a federal cloud computing capability.

I've been saying this for a while: before Cloud Computing can be broadly adopted by various governmental bodies, we must have a clear definition of what it is. In yesterday's federal CIO cloud summit a draft definition for federal use of cloud computing was revealed. The purpose of this definition is to act as a kind of basic litmus test as the various government agencies move forward in selecting cloud related products and services.

The definition was prepared by Peter Mell and Tim Grance at the National Institute of Standards and Technology (NIST). For those unfamiliar with NIST, they are a non-regulatory agency of the United States Department of Commerce with a mission to promote U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve quality of life. Simply put, their definition of cloud computing will be the de facto standard definition that the entire US government will be given.

In creating this definition, NIST consulted extensively with the private sector, including a wide range of vendors, consultants and industry pundits, yours truly among them. Below is the draft NIST working definition of Cloud Computing. I should note that this definition is a work in progress and therefore is open to public ratification & comment. The initial feedback was very positive from the federal CIOs who were presented it yesterday in DC. Barring any last minute lobbying, I doubt we'll see many more major revisions.

-------------------------------

Draft NIST Working Definition of Cloud Computing

4-24-09

Peter Mell and Tim Grance - National Institute of Standards and Technology, Information Technology Laboratory

Note 1: Cloud computing is still an evolving paradigm. Its definitions, use cases, underlying technologies, issues, risks, and benefits will be refined in a spirited debate by the public and private sectors. These definitions, attributes, and characteristics will evolve and change over time.

Note 2: The cloud computing industry represents a large ecosystem of many models, vendors, and market niches. This definition attempts to encompass all of the various cloud approaches.

Definition of Cloud Computing:

Cloud computing is a pay-per-use model for enabling available, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model promotes availability and is comprised of five key characteristics, three delivery models, and four deployment models.

Key Characteristics:

      On-demand self-service. A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed without requiring human interaction with each service’s provider.

      Ubiquitous network access. Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).

      Location independent resource pooling. The provider’s computing resources are pooled to serve all consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. The customer generally has no control or knowledge over the exact location of the provided resources. Examples of resources include storage, processing, memory, network bandwidth, and virtual machines.

      Rapid elasticity. Capabilities can be rapidly and elastically provisioned to quickly scale up and rapidly released to quickly scale down. To the consumer, the capabilities available for rent often appear to be infinite and can be purchased in any quantity at any time.

      Pay per use. Capabilities are charged using a metered, fee-for-service, or advertising based billing model to promote optimization of resource use. Examples are measuring the storage, bandwidth, and computing resources consumed and charging for the number of active user accounts per month. Clouds within an organization accrue cost between business units and may or may not use actual currency.

      Note: Cloud software takes full advantage of the cloud paradigm by being service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability.

Delivery Models:

      Cloud Software as a Service (SaaS). The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure and accessible from various client devices through a thin client interface such as a Web browser (e.g., web-based email). The consumer does not manage or control the underlying cloud infrastructure, network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.

      Cloud Platform as a Service (PaaS). The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created applications using programming languages and tools supported by the provider (e.g., java, python, .Net). The consumer does not manage or control the underlying cloud infrastructure, network, servers, operating systems, or storage, but the consumer has control over the deployed applications and possibly application hosting environment configurations.

      Cloud Infrastructure as a Service (IaaS). The capability provided to the consumer is to rent processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly select networking components (e.g., firewalls, load balancers).

Deployment Models:

      Private cloud. The cloud infrastructure is owned or leased by a single organization and is operated solely for that organization.

      Community cloud. The cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations).

      Public cloud. The cloud infrastructure is owned by an organization selling cloud services to the general public or to a large industry group.

      Hybrid cloud. The cloud infrastructure is a composition of two or more clouds (internal, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting).

Each deployment model instance has one of two types: internal or external. Internal clouds reside within an organizations network security perimeter and external clouds reside outside the same perimeter.
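Purely as an illustration on my part (and not part of the NIST text), here is one way the draft's taxonomy could be captured in a small data structure, say for tagging or sanity-checking cloud offerings during procurement. The structure and function names are my own invention.

# Encode the draft NIST taxonomy so it can be used programmatically.
NIST_CLOUD_MODEL = {
    "key_characteristics": [
        "On-demand self-service",
        "Ubiquitous network access",
        "Location independent resource pooling",
        "Rapid elasticity",
        "Pay per use",
    ],
    "delivery_models": ["SaaS", "PaaS", "IaaS"],
    "deployment_models": ["Private", "Community", "Public", "Hybrid"],
    "deployment_types": ["Internal", "External"],   # each deployment is one or the other
}

def conforms(offering):
    """Check that an offering is described using the draft's vocabulary."""
    return (offering.get("delivery") in NIST_CLOUD_MODEL["delivery_models"]
            and offering.get("deployment") in NIST_CLOUD_MODEL["deployment_models"])

if __name__ == "__main__":
    print(conforms({"delivery": "IaaS", "deployment": "Community"}))   # True
    print(conforms({"delivery": "grid", "deployment": "Community"}))   # False

Nothing official here, just a convenient way to keep the vocabulary straight when comparing vendor claims against the definition.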
