Friday, July 31, 2009

Crowd-Sourced Cloud Computing Use Cases White Paper Published

Doug Tidwell from IBM has just sent out a message announcing that the final version of the crowd-sourced Cloud Computing use cases white paper has been published under an open source license (including images). (See the complete white paper below.)
Tidwell had this to say: "This is the synthesis of lots of hard work and insight from lots of people. I hope it will be a great resource for the community and a foundation for the hard work ahead of us. Our common goal is to keep cloud computing open, and the use cases and requirements we've developed will help us drive that forward. Whenever someone says, 'We don't need an open way of doing X,' we can respond, 'So tell us how to implement use case Y without being locked in to your platform.'"
According to the final document, "The Cloud Computing Use Case group brought together cloud consumers and cloud vendors to define common use case scenarios for cloud computing. The use case scenarios demonstrate the performance and economic benefits of cloud computing and are based on the needs of the widest possible range of consumers."

"The goal of this white paper is to highlight the capabilities and requirements that need to be standardized in a cloud environment to ensure interoperability, ease of integration and portability. It must be possible to implement all of the use cases described in this paper without using closed, proprietary technologies. Cloud computing must evolve as an open environment, minimizing vendor lock-in and increasing customer choice."

This is a great example of how we in the open cloud computing community can come together to create something meaningful. Congratulations on a job well done!
Cloud Computing Use Cases Whitepaper

US Federal Government Releases Cloud Computing Initiative RFQ

I wanted to quickly let everyone know that GSA has just published its request for quotation (RFQ) to vendors to set up the first piece of the GSA cloud computing storefront initiative. According to the RFQ document (see document here), the objective is to offer three key service offerings through IaaS providers for ordering activities. The requirements have been divided into three distinct Lots:

• Lot 1: Cloud Storage Services
• Lot 2: Virtual Machines
• Lot 3: Cloud Web Hosting

The RFQ also sheds some light on the potential usage of cloud-based infrastructure within the US federal government. The Federal Cloud Computing initiative is a services-oriented approach, whereby common infrastructure, information, and solutions can be shared and reused across the government. The overall objective is to create a more agile federal enterprise, where services can be reused and provisioned on demand to meet business needs.

The document also includes a Cloud Computing Framework diagram, which provides a high-level overview of the key functional components of cloud computing services for the government. The Cloud Computing Framework is described as "neither an architecture nor an operating model" but rather a functional view of the key capabilities required to enable cloud computing.

The framework consists of three major categories:

• Cloud Service Delivery Capabilities - Core capabilities required to deliver Cloud Services
• Cloud Services – Services delivered by the Cloud
• Cloud User Tools – Tools or capabilities that enable users to procure, manage, and use the Cloud services

GSA will facilitate the initial acquisition of these services through a common web portal, the Cloud Computing Storefront, which it will manage and maintain. The Storefront will enable government purchasers to buy IaaS service offerings as needed, using a credit card or other acceptable payment option.

US Federal Cloud Computing Initiative RFQ (GSA)


Thursday, July 30, 2009

A Cloud Service Rating System

Yesterday's post "Cloud Computing as a Commodity" received some very interesting feedback. Of particular interest were the comments suggesting the creation of a Cloud Service Provider Rating System, similar to a corporate "credit rating," that estimates the service worthiness of a cloud computing provider. Below is a collection of the comments posted.

Rodos said:
Rather than classifying clouds, I think the information needs to go into the workload descriptions or metadata, such as a service level.

Your example of disk storage relates. Does it matter if the storage is local or FC? It's cloud and should be abstracted. What does matter is the performance or service level of the storage, for which the workload should dictate a minimum.

What we will need then is some interfaces, standards and auditing for confirming those SLAs.
wllm said:

There is also a question of support quality, which can't fully be addressed with any number of standards. An auction site such as this would have to offer a reputation system, as well.

There was also a comment on Twitter from @grey_area:
"Something like a Standard & Poor's rating system for Cloud Providers? If so, that's not a half bad idea - assuming proper diligence."
I particularly like the S&P concept. Standard & Poor's, which began rating insurance companies in the mid-1980s, assesses a company's claims-paying ability, that is, its financial capacity to meet its insurance obligations. Similarly, a cloud provider could have a Cloud Performance Ability (CPA) rating that estimates its ability to meet certain service levels.

S&P forms its opinion by examining industry-specific risk, management factors, operating performance and capitalization. Industry-specific risk addresses the inherent risk in, and diversity of, the insurance business being underwritten. Management factors include how management defines its corporate strategy and the effectiveness of its operations and financial controls. For a cloud provider, an independent auditor might look at various aspects of the infrastructure, including operational history, physical security, networking, storage, platform maturity, peering relationships, customer support and satisfaction, uptime and financial structure, to determine the provider's overall rating or CPA. In turn, this rating could become a point of competitive differentiation among various commodity service providers.
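To make the idea a bit more concrete, here is a minimal sketch of how an aggregate CPA score might be computed from audited factor scores. The factor weights and letter-grade bands are entirely hypothetical assumptions for illustration; they are not part of any existing rating methodology.

```python
# Hypothetical Cloud Performance Ability (CPA) calculator.
# Factor names, weights and grade bands are illustrative assumptions only.

FACTOR_WEIGHTS = {
    "operational_history": 0.15,
    "physical_security":   0.10,
    "network_and_storage": 0.20,
    "platform_maturity":   0.15,
    "customer_support":    0.10,
    "uptime":              0.20,
    "financial_structure": 0.10,
}

GRADE_BANDS = [(90, "AAA"), (80, "AA"), (70, "A"), (60, "BBB"), (0, "B")]


def cpa_score(audited_scores):
    """Weighted average of audited factor scores (each 0-100)."""
    return sum(FACTOR_WEIGHTS[f] * audited_scores[f] for f in FACTOR_WEIGHTS)


def cpa_grade(score):
    """Map a numeric CPA score onto a hypothetical letter grade."""
    for threshold, grade in GRADE_BANDS:
        if score >= threshold:
            return grade


if __name__ == "__main__":
    provider = {
        "operational_history": 85, "physical_security": 90,
        "network_and_storage": 75, "platform_maturity": 70,
        "customer_support": 80, "uptime": 95, "financial_structure": 88,
    }
    score = cpa_score(provider)
    print(f"CPA score: {score:.1f} -> grade {cpa_grade(score)}")
```

The real work, of course, would be in the independent audit that produces the factor scores, not in the arithmetic.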

@rogerjenn also brings up a very good point:
You might want to reconsider any Wall Street rating firm based on their performance rating subprime mortgage-backed securities.
@Beaker suggests that the Audit, Assertion, Assessment, and Assurance API (A6) initiative could provide this. In a recent post on his blog, @beaker, aka Hoff, said:
This way you win two ways: automated audit and security management capability for the customer/consumer, and a streamlined, cost-effective, and responsive way of automating the validation of said controls in relation to compliance, SLA and legal requirements for service providers.
Regardless of the approach, it's an interesting idea nonetheless.

CloudCamp in the Cloud (A Virtual unConference)

An interesting idea has been floating around this afternoon on Twitter. After yesterday's two CloudCamps, I received a few messages asking whether there was any way to watch the CloudCamp proceedings streaming live online. In short, the answer is no, although a few attendees have posted video afterward.

This got me thinking: why not hold a monthly virtual CloudCamp unconference via WebEx? Cisco is already a sponsor of several CloudCamps, and I'm sure they would be more than happy to donate the WebEx account in return for some promotional consideration. By creating a virtual unconference we could include everyone, anywhere in the world, using the very medium we help promote. Another benefit would be an archive of audio and video from the monthly events posted to the CloudCamp.com website.

The virtual unconference could generally follow the same format as the physical one, with a main unconference session and several breakouts in the form of secondary WebEx sessions.

So far the feedback on Twitter has been very positive. If others agree, I suggest we aim to hold our first "CloudCamp in the Cloud" this August, with ongoing monthly events after that.

Thoughts?

Wednesday, July 29, 2009

The Rise of the Government App Store

In a recent post to the CCIF mailing list, Bob Marcus outlined the coming opportunities and challenges facing what he described as "Government Cloud Storefronts". In the post he described US Federal CIO Vivek Kundra's vision for the creation of a government cloud storefront. This storefront, run by GSA, will be launched September 9th and will make cloud resources (IaaS, PaaS, SaaS) available to agencies within the US federal government (an $80+ billion a year IT organization).

What's also interesting is that the US isn't alone in this vision of centralized access points for procuring cloud services and related applications. Several other governments, including the United Kingdom with its G-Cloud app store and Japan with its Kasumigaseki Cloud, are attempting to do the same, with Japan spending upwards of 250 million dollars on its initiative.

Kundra, speaking at a recent National Defense University conference on cloud computing, elaborated on his GovApp Store concept: "Any agency can go online and agencies will be able to say 'I'm interested in buying a specific technology' and we will abstract all the complexities for agencies. They don't have to worry about Federal Information Security Management Act compliance. They don't have to worry about certification and accreditation. And they don't have to worry about how the services are being provisioned. Literally, you'll be able to go in as an agency… and provision those on a real-time basis and that is where the government really needs to move as we look at standardization. This will be the storefront that will be simple."

According to Marcus, "There are strong initial efficiency benefits (reduced procurement time and costs) gained by providing government projects with controlled access to multiple Cloud resources. However unless a set of best practices are followed, there could be negative long-range results such as lack of portability and interoperability across Cloud deployments."

Ed Meagher, former deputy CIO at the Interior and Veterans Affairs departments, also sheds some light on the topic, saying, "The challenge will be working in both worlds and making those two worlds work together. There's going to be lot of pressure on the [federal] CIO community to help this administration do the things it wants to do, like making government more efficient, more accessible to citizens and more transparent."

I could not agree more. But I also don't think the US federal GovApp store requires standardization so much as transparency into the underlying processes that support the so-called "running" of the app store.

Some questions that come to mind: who exactly is building this app store, how will it be managed, what oversight will it have, and how can we prevent abuse (Halliburton-style contracts, anyone?) or even Apple iPhone App Store style "vendor lockout"? These are much more important questions that need to be addressed first.

To help address these issues, on September 21 the Network Centric Operations Industry Consortium (NCOIC) will host a free, open session on "Best Practices for Cloud Storefronts" at its Virginia Plenary. The focus will be on recommended minimal standardizations (and compliance tests) for cloud resources included in a storefront. Government IT leaders (e.g. GSA) will be invited to participate in the session.

Cloud Computing as a Commodity

-- Update ---
I'm happy to announce that we've launched SpotCloud, the first Cloud Capacity Clearinghouse and Marketplace. Check it out at www.spotcloud.com
--
I seem to keep coming back to the same question when discussing cloud computing: can cloud computing be treated as a commodity that could be brokered and/or exchanged? Recently a few companies have attempted to do this, notably a German firm called Zimory.

To give you a little background: before the development of the Enomaly ECP platform, I had the grand idea to create what I described as a "Distributed Exchange" (DX), circa 2004 (I've put the site online temporarily for demo purposes). This was actually one of the original motivations for the creation of the original ECP platform (aka Enomalism). The idea of DX was to create a platform and marketplace that would allow companies to buy and sell excess computing capacity, similar to a commodities exchange. Think Google AdWords and AdSense for compute capacity.

I actually put quite a bit of thought into the whole concept. Generally, the idea was to use a commodity-based approach to manage computational resources for consumers in a peer-to-peer style computing grid. Buyers and sellers would have access to an electronic trading environment where they could bid for unused compute cycles through a Google AdWords-style web interface: consumers could bid on computing power while capacity providers offered cost-effective computational capacity on demand, creating a competitive computing marketplace.

Although the concept may have been fairly well thought out, it was way too early. For one thing, there was no demand for the service; for another, compute capacity wasn't, and still isn't, an actual commodity that can be supplied without qualitative differentiation across the greater cloud computing market. Basically, there is no basis for an apples-to-apples comparison.

Wikipedia describes a commodity as "a product that is the same no matter who produces it, such as petroleum, notebook paper, or milk. In other words, copper is copper. The price of copper is universal, and fluctuates daily based on global supply and demand. Stereos, on the other hand, have many levels of quality. And, the better a stereo is [perceived to be], the more it will cost."

Another major limiting factor for treating cloud capacity as a commodity is that there are no standards for cloud capacity, and therefore no effective way for users to compare it among cloud providers. Without this standardization there is no way to determine optimal cloud capacity requirements for particular application demands, and thus no way to determine the optimal price. To overcome this, I believe a Standardized Cloud Performance Measurement and Rating System (SCPM) will need to be created, one which would form a basis of measurement through an aggregate performance benchmark.

As an example, a cloud provider may want to use some aggregate performance metrics as a basis for comparing itself to other providers. Say Cloud A (high end) has 1,000 servers and Fibre Channel storage, while Cloud B (commodity) has 50,000 servers but uses direct-attached storage. Both are useful, but for different reasons: if I want performance I pick Cloud A; if I want massive scale I pick Cloud B. Think of it like the nutrition guide on the back of your cereal box. This may provide the basis for determining the value, and therefore a cost, for the cloud capacity.

This has been one of the motivations behind the creation of an open standard for cloud computing capacity called the Universal Compute Unit (UcU) and its inverse, the Universal Compute Cycle (UCC): an open, standard unit of measurement (with benchmarking tools) that would allow providers, enablers and consumers to easily, quickly and efficiently access auditable compute capacity with the knowledge that 1 UcU is the same regardless of the cloud provider. (See my previous posts on the subject.)
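As a purely illustrative sketch of the idea, an aggregate score could normalize a provider's benchmark results against a fixed reference instance. The reference baseline, benchmark names and weights below are hypothetical assumptions, not a published UcU specification.

```python
# Illustrative sketch of an aggregate compute-capacity score in the spirit of
# the Universal Compute Unit (UcU). The reference baseline, benchmark names
# and weights are assumptions for illustration only.

REFERENCE_BASELINE = {      # results measured on a fixed reference instance
    "cpu_ops_per_sec": 1_000_000,
    "disk_mb_per_sec": 100,
    "net_mb_per_sec":  50,
}

WEIGHTS = {"cpu_ops_per_sec": 0.6, "disk_mb_per_sec": 0.25, "net_mb_per_sec": 0.15}


def ucu_score(measured):
    """Weighted geometric-mean style ratio of measured results to the baseline.

    A score of 1.0 means "equivalent to one reference unit", regardless of
    which provider produced the numbers.
    """
    score = 1.0
    for metric, weight in WEIGHTS.items():
        ratio = measured[metric] / REFERENCE_BASELINE[metric]
        score *= ratio ** weight
    return score


if __name__ == "__main__":
    cloud_a = {"cpu_ops_per_sec": 1_200_000, "disk_mb_per_sec": 400, "net_mb_per_sec": 60}  # high end
    cloud_b = {"cpu_ops_per_sec": 1_100_000, "disk_mb_per_sec": 80,  "net_mb_per_sec": 40}  # commodity
    print("Cloud A:", round(ucu_score(cloud_a), 2), "UcU per instance")
    print("Cloud B:", round(ucu_score(cloud_b), 2), "UcU per instance")
```

The interesting (and hard) part is agreeing on the benchmarks and making the results auditable; the scoring itself is trivial once that exists.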

I also believe that the creation of cloud exchanges and brokerage services has less to do with technology and more to do with trust and accountability. If I'm going to buy a certain amount of regional cloud capacity ahead of time for my Christmas rush, I want to rest assured that the capacity will actually be available with an agreed-upon quality and service level. I also need to be assured that the exchange is financially stable, adequately capitalized and will remain in business for the foreseeable future.

I've never been a particularly big fan of regulation, but given the potential for fraud, some oversight may be required to ensure a fair and balanced playing field. If we truly want to enable a cloud computing exchange/marketplace, another option might be to build upon an existing exchange platform with a proven history: a platform with an established level of trust, governance and compliance.

Tuesday, July 28, 2009

A Unified Theory for Cloud Computing

I just had an epiphany in my never-ending quest to answer one of life's most delicate questions -- "what the heck is cloud computing?" My sudden realization comes in the form of a grand unified theory for the term cloud computing. Its elegance is in its simplicity ;)

Cloud Computing is an Analogy for a Metaphor wrapped in a Euphemism.

CC = E(A<=M)

Another way to look at it, the cloud is everything and nothing all at once.

Leading Cloud Computing Security Expert Joins Enomaly

I'm pleased to announce that David Lie, a leader in the Cloud Security sector has joined Enomaly as our new Chief Security Architect.

Currently on sabbatical, David Lie is an Associate Professor in the Department of Electrical and Computer Engineering at the University of Toronto. Widely regarded as a leader in information and virtualization security, he received his B.S. from the University of Toronto in 1998, and his M.S. and Ph.D. from Stanford University in 2001 and 2004 respectively. While at Stanford, David founded and led the XOM (eXecute Only Memory) Processor Project, which supports the execution of tamper- and copy-resistant software. He received a best paper award at SOSP for this work.

David is also a recipient of the MRI Early Researcher Award and a CFI New Opportunities Fund Award, and serves on the Scientific Advisory Board of the NSERC ISSNet Strategic Network on Systems Security. He has served on various program committees, including OSDI, ASPLOS, USENIX Security and IEEE S&P, and has consulted for various Silicon Valley companies including HP and Compaq. Currently he focuses on securing commodity systems through low-level software such as virtual machine monitors and operating system kernels; architectural and hardware support to increase security; and security in cloud computing platforms.

As Enomaly's new Chief Security Architect, his focus will be to create a security infrastructure that will allow customers to gain control and trust over a virtual platform that is hosted by remote cloud providers. These cloud security concerns have become a key gating factor holding back enterprise adoption of cloud computing.

In this role he will lead the development of Enomaly's cloud-related security products and services, which will address what he describes as "net-new security risks" unique to cloud computing, namely the risks due to the execution of applications in a shared, virtual infrastructure. Shared infrastructure opens the possibility that hostile code is running alongside the customer's workload. Virtual infrastructure is software-based, and software is modifiable -- so a hostile agent might be able to modify layers of the execution environment, resulting in the customer's workload executing on a (virtual) CPU that is hostile to it, programmed to wait for secrets to be decrypted and then make off with the plaintext, or storing secrets in (virtual) RAM that is hostile to it, and so on.

On this new cloud-based security risk, David had this to say: "While cloud providers use virtualization to ensure isolation between customers, they face additional security challenges. Malicious customers may leverage the provider’s hardware to launch attacks, either from VMs they own or by compromising VMs from benign customers. These attacks can damage the provider’s reputation and ability to serve other customers. While cloud providers can use introspection to monitor customer VMs and detect malicious activity, it must be used with care since existing introspection techniques are based on assumptions that do not hold in cloud environments."

Welcome to the team!

Sunday, July 26, 2009

The Inter-Cloud and The Cloud of Clouds

I admit it, it's taken me a while to come around to the term inter-cloud, a concept being primarily promoted by Cisco as part of their Unified Computing platform. Lately the term seems to have been picking up some steam so I thought I'd take a moment to examine it a bit further.

My interpretation of the so-called "inter-cloud" is the abstract ability to exchange information between distinct computing clouds (storage, compute, messaging, etc.), whether public or private, in a uniform, unified way. I've come to think of it as a higher-level interconnected network atop the current World Wide Web, built on linked APIs and data sources. Greg Papadopoulos from Sun calls it a Cloud of Clouds.

In one of the more thought provoking posts I've read in a long time, Vint Cerf (the father of the Internet & Google's Chief Internet Evangelist) compares the emergence of cloud computing and more specifically the inter-cloud to "the networks of the 1960s -- each network was typically proprietary. These networks were specific to each manufacturer and did not interconnect nor even have a way to express the idea of connecting to another network." Exactly the same problem we now face with the current generation of cloud infrastructure and services.

Cerf goes on to state the problem with the current inter-cloud is that "each cloud is a system unto itself. There is no way to express the idea of exchanging information between distinct computing clouds because there is no way to express the idea of “another cloud.” Nor is there any way to describe the information that is to be exchanged. Moreover, if the information contained in one computing cloud is protected from access by any but authorized users, there is no way to express how that protection is provided and how information about it should be propagated to another cloud when the data is transferred."

Interestingly, he points to work being done by another father of the Internet, Sir Tim Berners-Lee (the inventor of the World Wide Web), who has been pursuing ideas that may solve these so-called “inter-cloud” problems. According to Cerf, Tim Berners-Lee's idea is that by semantically linking data, we are able to create "the missing part of the vocabulary needed to interconnect computing clouds. The semantics of data and of the actions one can take on the data, and the vocabulary in which these actions are expressed appear to constitute the beginning of an inter-cloud computing language."

I understand this next statement may cause some lively debate, but I will say it anyway. If I am interpreting his assertion correctly, what the world needs is not yet another API to control the finer nuances of a physical or virtual infrastructure, but instead a way for that infrastructure to communicate with the other clouds around it, regardless of what it is. The biggest hurdle to cloud interoperability appears to have very little to do with the willingness of cloud vendors to create open cloud APIs, and much more to do with their willingness to let these clouds effectively interoperate with one another; more simply, the capability to work alongside other cloud platforms in an open way.

Ever the visionary, Cerf says it best: the cloud represents a "new layer in the Internet architecture and, like the many layers that have been invented before, it is an open opportunity to add functionality to an increasingly global network." Amen, brotha.

Saturday, July 25, 2009

Japanese Government Launches Global Inter-Cloud Technology Forum

Over the last week I've been away at my cottage on vacation, so I'm trying to catch up on my massive backlog of emails (about 5k worth). One of the more interesting was a message sent earlier in the week by Masayuki Hyugaji in Japan. (I have a call with Hyugaji later this week, so I should be able to provide more details afterward.)

According to Hyugaji, the Ministry of Internal Affairs and Communications of Japan has embarked on a series of new research and development activities, including launching a Global Inter-Cloud Technology Forum. For those who don't speak Japanese (my wife does), the primary focus of the Global Inter-Cloud Technology Forum is cloud federation, and the forum currently includes several large Japanese companies. Its aim is to promote standardization of the network protocols and interfaces through which cloud systems "interwork" with each other, and to enable the provisioning of more reliable cloud services.

Main activities and goals
- Promote the development and standardization of technologies to build or use cloud systems;
- Propose standard interfaces that allow cloud systems to interwork with each other;
- Collect and disseminate proposals and requests regarding organization of technical exchange meetings and training courses;
- Establish liaison with counterparts in the U.S. and Europe, and promote exchange with relevant R&D teams.

Global Inter-Cloud Technology Forum Aims

The Global Inter-Cloud Technology Forum website sheds some light on the initiative.

"Cloud systems have not yet reached the level that would allow their application to mission-critical fields, such as e-government, medical care and finance, in terms of reliability, ability to respond quickly, data quality and security. To achieve reliability and quality high enough to meet the requirements in these fields, it is necessary to interconnect multiple cloud systems via a broadband network and provide a mechanism that would allow them to interwork with, and complement, each other.

"In light of the fact that each provider is currently building cloud systems based on its proprietary specifications, we propose the establishment of the “Global Inter-Cloud Technology Forum” to promote standardization of network protocols and the interfaces through which cloud systems interwork with each other, to promote international interworking of cloud systems, to enable global provision of highly reliable, secure and high-quality cloud services, and to contribute to the development Japan’s ICT industry and to the strengthening of its international competitiveness."

Learn more at http://www.gictf.jp

Wednesday, July 22, 2009

Enomaly Helps Best Buy Leverage the Cloud to Connect with Customers

This is a guest post written by Lars Forsberg, a partner and co-founder at Enomaly. Lars manages our professional services group, which focuses on cloud-related services for our enterprise customers, including Intel, Best Buy, and Orange / France Telecom among others. The post originally appeared on the bestbuyapps.com blog.
---
In late May, Gary Koelling and Steve Bendt came to Enomaly looking for our help to realize their latest brainchild, called "Connect". We'd previously worked with Best Buy to develop an internal social networking site called "Blueshirt Nation", and we were eager for another opportunity to collaborate with them.

Inspired by Ben Herington's ConnectTweet, the concept was simple: connect BlueShirts directly with customers via Twitter. In less than two months, Enomaly developed a Python application on Google App Engine to syndicate tweets from Best Buy employees and rebroadcast them, thereby allowing customers to ask questions via Twitter (@twelpforce) and have the answers crowdsourced to a vast network of participating Best Buy employees. Now the answer to "looking for a Helmet Cam I can use for snowboarding and road trips" is only a tweet away.
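For a rough sense of the plumbing involved (this is not the actual Twelpforce code; the endpoints, field names and credential handling below are hypothetical placeholders), a minimal syndicate-and-rebroadcast loop might look something like this:

```python
# Hypothetical sketch of a syndicate-and-rebroadcast loop; this is NOT the
# actual Twelpforce implementation. The endpoint URLs, field names and auth
# handling are placeholder assumptions for illustration only.
import json
import urllib.parse
import urllib.request

SEARCH_URL = "https://search.example.invalid/search.json"        # placeholder
UPDATE_URL = "https://api.example.invalid/statuses/update.json"  # placeholder
REGISTERED_EMPLOYEES = {"bby_employee_one", "bby_employee_two"}  # opted-in accounts


def fetch_employee_answers(since_id=0):
    """Pull recent tweets from registered employees tagged for rebroadcast."""
    query = urllib.parse.urlencode({"q": "#twelpforce", "since_id": since_id})
    with urllib.request.urlopen(f"{SEARCH_URL}?{query}") as resp:
        results = json.load(resp).get("results", [])
    return [t for t in results if t.get("from_user") in REGISTERED_EMPLOYEES]


def rebroadcast(tweet, opener):
    """Repost an employee's answer from the shared broadcast account.

    `opener` is assumed to be a urllib opener already configured with the
    broadcast account's credentials.
    """
    status = f"{tweet['text']} (via @{tweet['from_user']})"[:140]
    data = urllib.parse.urlencode({"status": status}).encode()
    opener.open(UPDATE_URL, data)
```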

The application, which has since come to be named Twelpforce, is a prime example of innovators like Best Buy leveraging cloud computing to enhance their business. The response to the service since it launched on Sunday has been very positive, and it's exciting to have played an integral part in bringing this project to life. We're interested in hearing your feedback on the service, so please feel free to leave a comment!

Tuesday, July 14, 2009

Is Microsoft Starting a Cloud Price War?

Earlier today Microsoft unveiled its pricing model for its forthcoming Windows Azure cloud services platform. What's interesting about the pricing is that Microsoft seems to be taking direct aim at Amazon Web Services.

To recap, Amazon charges 12.5 cents per hour for a basic Windows Server instance; in contrast, Microsoft stated that its price will be 12 cents per hour, and that the service will remain free until November. I should also point out that it still isn't clear whether comparing Windows Azure to Amazon's Windows EC2 is a fair comparison, given the rather drastic differences in functionality.

Microsoft calls Windows Azure a "cloud services operating system" that serves as the development, service hosting and service management environment for the Azure Platform. They've also said they will offer a private data center version of Azure capable of being hosted within a "private cloud" context, most likely as part of their upcoming virtual infrastructure platform, Hyper-V, possibly as a virtual machine image. Currently, to build applications and services on Windows Azure, developers can use their existing Microsoft Visual Studio 2008 expertise to easily run highly scalable ASP.NET web applications or .NET code in the Microsoft cloud.

In a post this afternoon, John Foley at InformationWeek did the math on Azure's new pricing scheme and had this to say: "Microsoft officials had previously indicated that Windows Azure pricing would be competitive, but the price differential may be more symbolic than material. At their published rates, if you ran a Windows server in the cloud every hour of the day for an entire year, you'd save a mere $43.80 going with Microsoft. Indeed, if penny pinching is important, Amazon Web Services actually has a cheaper alternative, though it's not Windows. Amazon charges 10 cents per hour for "small" virtualized Linux and Unix servers."

Personally, I believe that the half-cent price difference is material for those running larger cloud application deployments; a few cents can quickly add up. To me the move indicates that Microsoft is not afraid to subsidize its cloud pricing in order to take a larger piece of the cloud market, and with the large cash reserves Microsoft is said to have, it can certainly afford to engage in a price war. The bigger question is how other, more closely related cloud platform providers will adjust their pricing models. Right now all signs are starting to point to a cloud price war. I suppose time will tell whether I'm right.
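For a rough sense of scale, here is the arithmetic behind that half-cent gap, using the published per-hour Windows rates above and assuming 24x7 usage (storage, bandwidth and other charges are ignored):

```python
# Back-of-the-envelope comparison of the published per-hour Windows rates.
# Assumes 24x7 usage; storage, bandwidth and other charges are ignored.
HOURS_PER_YEAR = 24 * 365          # 8,760 hours

ec2_windows = 0.125                # USD per instance-hour (Amazon EC2, Windows)
azure_compute = 0.12               # USD per instance-hour (Windows Azure)

for instances in (1, 100, 1000):
    yearly_gap = (ec2_windows - azure_compute) * HOURS_PER_YEAR * instances
    print(f"{instances:>5} instances: ${yearly_gap:,.2f} per year difference")
```

A single always-on instance saves the $43.80 a year Foley mentions, but at a thousand instances the gap is over $43,000 a year, before any differences in storage or bandwidth pricing.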

(Disclosure, I am on the Microsoft Windows Azure Advisory Council)

Rackspace's Cloud Servers API is live

Erik Carlin from Rackspace sent me a heads up that the new Rackspace CS API is now live. In the email he also noted that they are planning to open source both the Servers and Files API specifications later in a separate announcement.

According to the CS API press release: The Rackspace Cloud solicited feedback and conducted intensive testing with its partners and cloud developers to help ensure that the community shaped the API. With today’s announcement, users now have control panel and programmatic access to the company’s cloud infrastructure services: Cloud Servers, Cloud Files and Slicehost.

The Cloud Servers API introduces four new features, described below (a rough usage sketch follows):
Server Metadata – Supply server-specific metadata, accessible via the API, when an instance is created.

Server Data Injection – Specify files when an instance is created that will be injected into the server file system before startup. This is useful, for example, for inserting SSH keys, setting configuration files, or storing data that you want to retrieve from within the Cloud Server itself.

Host Identification – The Cloud Servers provisioning algorithm has an anti-affinity property that attempts to spread out customer VMs across hosts. Under certain situations, Cloud Servers from the same customer may be placed on the same host. Host identification allows you to programmatically detect this condition and take appropriate action.

Shared IP Groups – While Rackspace has always supported shared IPs, it’s been made simpler with the creation of Shared IP Groups and the ability to enable high availability configurations.
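For illustration only, here is a rough sketch of what a create-server request using the metadata and data-injection features might look like. The field names, IDs and endpoint shown are assumptions loosely based on the feature descriptions above, so check the official API documentation for the real request format.

```python
# Hypothetical sketch of the request body for creating a Cloud Server with
# metadata and an injected file. Field names and endpoint are assumptions,
# not the official Rackspace request format.
import base64
import json

public_key = b"ssh-rsa AAAA...example-key... ops@example"   # placeholder key material

create_server_request = {
    "server": {
        "name": "web01",
        "imageId": 1,                                        # placeholder image id
        "flavorId": 1,                                       # placeholder flavor id
        "metadata": {"role": "frontend", "owner": "ops"},    # Server Metadata
        "personality": [{                                    # Server Data Injection
            "path": "/root/.ssh/authorized_keys",
            "contents": base64.b64encode(public_key).decode(),
        }],
    }
}

# You would POST this JSON to the provider's servers endpoint with your auth
# token. The response should include a host identifier, which lets you detect
# whether two of your servers landed on the same physical host (the
# anti-affinity check described above).
print(json.dumps(create_server_request, indent=2))
```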

> Press Release is here

> API docs are here

CloudCamp Toronto, July 22nd

I'm happy to announce an upcoming CloudCamp in Toronto Ontario, July 22nd.
The event is being co-hosted and sponsored by The Open Group Conference Toronto, along with Martin Kirk, David Lounsbury and Scott Radeztsky.

Register Here

Location
Toronto Marriott Downtown Eaton Centre
525 Bay Street
Toronto, Ontario M5G 2L2
Canada

Agenda
5:00pm Doors Open
5:30pm Registration and Networking
6:00pm Welcome and Thank You's
6:15pm 15 minute keynote (for first CloudCamp only)
6:30pm 4-6 Lightning Talks (5 min each)
7:00pm Unpanel
7:30pm Prepare for Unconference
7:45pm Break
8:00pm Unconference (3+ sessions simultaneous)
- include "What is Cloud Computing?" session
8:45pm Unconference (2nd set of sessions)
- include "What is Different with a Cloud Application?"
9:30pm Wrap up
9:45pm More networking on location or head to the Pub

Thursday, July 9, 2009

A Federal CloudBursting & Cyber Defense Contingency Plan

Over the last week several US government websites have been repeatedly attacked by a foreign botnet. A lot of folks in the media are now saying this may actually be cyber war. I would argue that this isn't anything new, just more publicized. But if this Internet attack on U.S. federal web sites is an actual assault by North Korea or some other foreign government, what are the legitimate responses available in America's arsenal -- traditional or cyber? Sadly, right now the answer is: not many.

The question remains: how do you attack a botnet that may include zombies within your own infrastructure? How do you tell who is good and who is bad? In reality you can't attack the problem using traditional military tactics. Instead of focusing on an offensive response, we should focus on limiting the effects that these cyber attacks cause. For the most part, these denial of service attacks are more of a nuisance than an actual physical threat.

Now that governments around the globe are starting to embrace cloud computing, I feel the next logical step is to start defining how to recover from serious cyber attacks with a minimum of time, cost and disruption. Yes, it's time for a Federal CloudBursting Contingency Plan.

In 2002 the National Institute of Standards and Technology (NIST) published a contingency planning guide for information technology systems. The guide provides instructions, recommendations, and considerations for government IT contingency planning, outlining interim measures to recover IT services following an emergency or system disruption. These so-called "interim measures" may include the relocation of IT systems and operations to an alternate site, the recovery of IT functions using alternate equipment, or the performance of IT functions using manual methods.

What it does not do is outline any sort of on-demand or cloud computing capabilities to help negate the effects of a prolonged cyber attack. This is mainly because the guide was written in 2002 and never subsequently updated. The guide completely lacks any real insight into the advantages that cloud computing offers modern IT infrastructure, which is made plainly obvious by a note on page 6:
Responses to cyber attacks (denial-of-service, viruses, etc.) are not covered in this document. Responses to these types of incidents involve activities outside the scope of IT contingency planning. Similarly, this document does not address incident response activities associated with preserving evidence for computer forensics analysis following an illegal intrusion, denial-of-service attack, introduction of malicious logic, or other cyber crime.
So basically the document only outlines requirements for a physical disaster but lacks any real insight into cyber defenses or the need for a cloud-centric contingency plan. I believe a good portion of the problems plaguing the current federal IT and web infrastructure could be resolved with a clear and concise plan of action: an official federal CloudBursting & Cyber Defense contingency plan. This plan could also address specific strategies and actions to deal with a threat in real time. Most traditional contingency plans, such as those for natural disasters, include a monitoring process and “triggers” for initiating planned actions. Why not include similar planning for if and when federal IT infrastructure is under attack?
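As an illustration of what such a "trigger" might look like in practice, here is a minimal sketch; the thresholds, metrics and burst action are entirely hypothetical and would need to be defined by the plan itself.

```python
# Hypothetical "trigger" for a cloudbursting contingency plan. The thresholds,
# metrics and burst action below are illustrative assumptions only.
import time


def under_sustained_attack(get_error_rate, get_latency_ms, samples=5, interval=60):
    """Return True if error rate and latency stay above agreed thresholds for
    `samples` consecutive checks taken `interval` seconds apart."""
    for _ in range(samples):
        if get_error_rate() < 0.25 and get_latency_ms() < 2000:
            return False          # load dropped back below the trigger point
        time.sleep(interval)
    return True


def initiate_cloudburst(provision_capacity, update_dns):
    """Pre-approved planned action: stand up mirrored capacity at a trusted
    third-party provider and shift public traffic to it."""
    mirror = provision_capacity(instances=20, template="public-site-mirror")
    update_dns("www.agency.example.gov", mirror.public_address)
    return mirror
```

The point is less the code than the process: the thresholds, the pre-approved action and the authority to pull the trigger would all be written into the contingency plan ahead of time.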

There has been some work done in the space, specifically by the National Science and Technology Council (NSTC) in a document called the Federal Plan for Cyber Security and Information Assurance Research and Development, which takes a first step toward developing that agenda. Mostly focused on R&D, the plan responds to recent calls for improved federal cyber security and information assurance. Developed by the Cyber Security and Information Assurance Interagency Working Group (CSIA IWG), an organization under the NSTC, the plan provides baseline information and a technical framework for coordinated multi-agency R&D in cyber security and information assurance; other areas – including policy making (e.g., legislation, regulation, funding, intellectual property, Internet governance), economic issues, IT workforce education and training, and operational IT security approaches and best practices – fall outside its focus. It's a pretty good read, but it completely misses the opportunity for cloud computing, and more specifically cloudbursting scenarios, to help mitigate some of the most obvious DoS-style attacks.

Wednesday, July 8, 2009

CloudNet & The Case for Enterprise-Ready Virtual Private Clouds

AT&T Labs and the University of Massachusetts Amherst have published a paper called "The Case for Enterprise-Ready Virtual Private Clouds" that builds on my vision for a Virtual Private Cloud (VPC). They even gave me some credit in the paper [see reference 5] -- sort of.

To recap, over a year ago I described the opportunity for what I called a Virtual Private Cloud: a method for partitioning a public computing utility such as EC2 into quarantined virtual infrastructure. A VPC may encapsulate multiple local and remote resources so that they appear as a single homogeneous computing environment, making it possible to securely utilize remote resources as part of a seamless global compute infrastructure.

Well it seems that I may have been onto something with this VPC concept. In the paper they propose "the enhancement of the cloud computing framework to seamlessly integrate virtual private networks (VPNs). To this end, we propose CloudNet, which joins VPNs and cloud computing. CloudNet uses VPNs to provide secure communication channels and to allow customers greater control over network provisioning and configuration."

The paper goes on to describe a solution that seems very similar to my proposal:
"To address these challenges, we propose the idea of a Virtual Private Cloud (VPC). A VPC is a combination of cloud computing resources with a VPN infrastructure to give users the abstraction of a private set of cloud resources that are transparently and securely connected to their own infrastructure. VPCs are created by taking dynamically configurable pools of cloud resources and connecting them to enterprise sites with VPNs. Figure 1 shows a pair of VPCs connected to two different enterprises, each composed of multiple sites. A VCP can span multiple cloud data centers, but presents a unified pool of resources to the enterprise."


"VPNs can be leveraged to provide seamless network connections between VPCs and enterprise sites. VPNs create the abstraction of a private network and address space shared by all VPN endpoints. Since addresses are specific to a VPN, the cloud operator can allow customers to use any IP address ranges that they like without worrying about conflicts between cloud customers. The level of abstraction can be made even greater with Virtual Private LAN Services (VPLS) that bridge multiple VPN endpoints onto a single LAN segment. If the cloud provider in the previous section’s example used VPCs, a VPLS could be setup so that the processing component could be easily run within the cloud without requiring any modifications since the cloud resources would appear indistinguishable from existing compute infrastructure already on the enterprise’s own LAN."
Interesting and worth a read.

GovBursting & The Denial of Governmental Services Attack

I was just reading a few articles about a series of new cyber attacks on several US government websites. According to an article by the AP, "a widespread and unusually resilient computer attack that began July 4 knocked out the Web sites of several government agencies, including some that are responsible for fighting cyber crime."

"The Treasury Department, Secret Service, Federal Trade Commission and Transportation Department Web sites were all down at varying points over the holiday weekend and into this week, according to officials inside and outside the government. Some of the sites were still experiencing problems Tuesday evening. Cyber attacks on South Korea government and private sites also may be linked, officials there said."

With the ever-increasing number of distributed denial of service attacks on various governments, there seems to be a trend toward attempting to limit access to crucial pieces of a government's public web infrastructure and websites. Although these attacks are not hitting any particularly critical infrastructure, they are certainly causing problems for those who want to access government information in a timely manner. I'm calling this latest tactic the Denial of Governmental Services attack: DoGS. Yup, like the animal.

What all this recent activity really points to is the need for government agencies to have an actual cloudbursting contingency plan in place. Whether it's a federal cloud capability or just a trusted third-party cloud provider, the need for a GovBursting capability is real and should be addressed now.

I'll post more thoughts on this topic in the coming days, including the potential creation of a Cloud Performance Certification. I'll be in Washington, DC this coming Sunday and Monday, and I'm fairly certain this will be a hot topic of discussion.

Google's Cloud Operating System (Chrome OS)

Recently Google announced that it is creating a Cloud OS called Google Chrome OS. They describe it as an open source, lightweight operating system that will initially be targeted at netbooks and will be available for consumers in the second half of 2010.

In a post on the Google Blog they shed some light on the proposed Google OS.

Speed, simplicity and security are the key aspects of Google Chrome OS. We're designing the OS to be fast and lightweight, to start up and get you onto the web in a few seconds. The user interface is minimal to stay out of your way, and most of the user experience takes place on the web. And as we did for the Google Chrome browser, we are going back to the basics and completely redesigning the underlying security architecture of the OS so that users don't have to deal with viruses, malware and security updates. It should just work.

Google Chrome OS will run on both x86 as well as ARM chips and we are working with multiple OEMs to bring a number of netbooks to market next year. The software architecture is simple — Google Chrome running within a new windowing system on top of a Linux kernel. For application developers, the web is the platform. All web-based applications will automatically work and new applications can be written using your favorite web technologies. And of course, these apps will run not only on Google Chrome OS, but on any standards-based browser on Windows, Mac and Linux thereby giving developers the largest user base of any platform.

More generally, you can think of a cloud OS as being similar to the Apple iPhone model, where certain applications are loaded directly on the device and other application components remain on a server somewhere in the cloud. This hybrid model (intercloud) seems particularly well suited to emerging economies (think India, China, etc.), where software licensing can be prohibitively expensive. The One Laptop per Child (OLPC) project, which started the whole netbook craze, comes to mind.

As I've said before, I find this hybrid cloud model particularly interesting for cloud OS netbooks, where a user may be given a netbook or thin client that contains the user's core identity, favorites, etc., while the majority of the functionality is loaded via the cloud. I will also point out that Google's cloud OS ambitions sound an awful lot like Microsoft's Software + Services philosophy: a combination of local software and Internet services interacting with one another.

Obviously Google isn't the first in the cloud OS market; last year Good OS introduced an operating system for cloud computing appropriately called "Cloud," the successor to the company's Linux-based gOS. I don't imagine Good OS is particularly happy about the news of an "official" Google OS.

Currently the cloud-centric operating system space is in its infancy, with Linux distros such as Ubuntu and Red Hat struggling to find a position in a world where the operating system is a somewhat meaningless commodity. I think the bigger question moving forward is whether the OS matters at all in a cloud-based environment. Regardless, heavyweight OS vendors such as Microsoft are quickly entering the nascent market. It seems that the future of the OS lies in the cloud.

Tuesday, July 7, 2009

Ruv's Cloud Google Reader Bundle

Ever wonder what I read? Well wonder no more. I've published a Google Reader Bundle with some of the blogs I frequently read.

A Google Reader Bundle is a collection of blogs and websites hand-selected on a particular topic or interest. You can keep up to date with them all in one place. (Click here for link to the bundle or subscribe via Atom/RSS)

There are 37 feeds included in this bundle:
  • Daryl Plummer
  • CloudEnterprise.info
  • Paul Miller
  • Amazon Web Services Blog
  • ElasticVapor :: Life in the Cloud
  • Blogs@Intel
  • Red Canary
  • The Daily Galaxy: News from Planet Earth & Beyond
  • virtualization.info
  • igvita.com
  • O'Reilly Radar
  • Read/WriteWeb
  • Rednod
  • Rough Type: Nicholas Carr's Blog
  • Silicon Alley Insider
  • VentureBeat
  • Googling Google
  • neoTactics
  • The Troposphere
  • Cloudy Times
  • The Wisdom of Clouds
  • Hacker News
  • BusinessWeek Online -- Tech Beat
  • VagaBond Poet Blog
  • VMblog.com - Virtualization Information
  • Boduch's Blog
  • Businesspundit
  • Data Center Knowledge
  • Engadget
  • John M Willis ESM Blog
  • MacRumors : Mac News and Rumors
  • PhysOrg.com - latest science and technology news
  • Rational Survivability
  • VC Ratings
  • Tim Mather
  • RightScale Blog
  • The Open Road

Am I missing something? Add your blog in the comments area.

Beta is Going Out of Beta

Last January I announced that the "beta" tagline for software and web development was approaching its last days as a marketing fad. In that post I stated that the problem with the term "beta" within a broader product release is that most users don't draw a distinction between a beta and a production-ready release.

The assumption is that if you're making your application available, it's fully tested and ready for production, regardless of any "beta" title slapped on top. On the flip side, most companies that use the term beta are in a sense saying exactly the opposite: it's not ready, but we'd like you to try it anyway in an attempt to gain some market share or whatnot.

Earlier today Google basically came to the same conclusion in a statement reported by Computerworld. Matthew Glotzbach, Google Enterprise product management director, acknowledged that from the outside the decision to graduate Gmail, Calendar, Docs and Talk out of beta at once may seem arbitrary. However, the removal of the beta label from those services is rather the "culmination" of a years-long process of maturation through which the products have exceeded internal goals for reliability, quality and usability.

In simpler terms, he said that Google lacks a uniform set of criteria across its different product groups for determining when a product should and shouldn't carry the beta label: "We haven't had a consistent set of standards across the product teams. It has been done individually."

The article goes on to say that while the beta issue is less important for consumers, it is a major roadblock for the Enterprise group's efforts to market Apps, especially when dealing with large companies and their IT managers and CIOs.

I for one am happy to see the beta tag go away. It's meaningless, and just an excuse for when things go badly.

Thursday, July 2, 2009

IBM Cloud Computing Use Cases Group Releases Draft White Paper

IBM's experiment with group authorship for cloud computing interoperability is starting to pay off. Earlier today, Doug Tidwell posted the first draft of a Cloud Computing Use Cases White Paper, produced largely via a new Google group created to help define the various use case requirements. The white paper was also released under a Creative Commons license with the intention that it be remixed for use within other white papers and marketing materials.

In an email, Tidwell said everything in the paper comes from the comments posted on the Google group, but he also admits there are several areas that need a lot more discussion.

The introduction of the white paper states that it utilizes existing customer-based scenarios with the goal of highlighting the capabilities and requirements that need to be standardized in a cloud environment to ensure interoperability, ease of integration and portability. It strives to ensure that cloud computing evolves as an open environment, minimizing vendor lock-in and increasing customer choice.

At first glance the paper is a great starting point, outlining some of the key aspects of an interoperable cloud ecosystem. It borrows heavily from the NIST definition of cloud computing, but I think aligning with the NIST definition is a smarter move than trying to create yet another. I also liked the taxonomy diagram, which breaks down the various aspects into three main groups: Service Consumer, Service Provider and Service Developer.


All in all, a great starting point. Keep up the good work!

Review the draft white paper here.

Wednesday, July 1, 2009

Gartner Asks, Can The Cloud Save The World?

I'm currently at a wedding, killing time reading various blog feeds. In doing so, I just read a generally insightful commentary, prompted by a particularly ridiculous question, in a post by Eric Knipp at Gartner. In the post he asks whether the cloud will save the world.

Initially I didn't see where he was going with this idea, as he spends a little too much time on his so-called killer [cloud] app, Application Platform-as-a-Service (APaaS). Moving to the good stuff, he points out that cloud computing is not "an easy button - in the abstract. Someone, somewhere, has to do heavy lifting in any software development endeavor to build a high-quality, high-availability, highly-reusable Web architecture."

I will agree: the cloud does not solve IT complexity; in a lot of ways it creates significant new challenges. Security, auditability, governance and performance certification all come to mind, just off the top of my head.

His next statement sheds some light on his theory, aka my head-nod moment: "-- companies who learn how to screw down the cost drivers while simultaneously enhancing value drivers to satisfy customer needs have a competitive advantage in their industries."

Bingo. He's hit the nail on the head. Satisfying customer needs is a competitive advantage (finding simplicity in complexity).

So, how does this all add up to the Cloud saving the world? Knipp goes on to say "that as the world grows more complex, the only chance we have to head off the disintegration of modern society under the weight of complexity comes through technological leaps in the form of disruptive innovation. -- Could this new level of simplicity in complexity be the disruptive innovation that saves the world - or at least gives us a bit more time?"

I do enjoy a good news headline as much as the next guy, but I doubt the world will be saved by cloud computing -- unless you're counting saving money and scaling your enterprise in the "saving the world" category of things we need to get done. Really, what I think this new whiz-bang buzzword does is give us a more compelling, sexy technology topic to discuss at dinner parties and tech conferences. And for me that's good enough.
