Friday, November 28, 2008

Cloud Wars: The US Department of Defense under Cyber Attack!

The big headline in today's Los Angeles Times reads "Cyber-attack on Defense Department computers raises concerns." Raises concerns? Are you kidding me? I've been saying this for a while: the current US network-centric defenses are in a complete state of disarray, and everyone outside of the U.S. knows it. Saying it's a concern is putting it mildly.

In the LA Times article they had this to say about the current attack "Senior military leaders took the exceptional step of briefing President Bush this week on a severe and widespread electronic attack on Defense Department computers that may have originated in Russia -- an incursion that posed unusual concern among commanders and raised potential implications for national security."

This should not come as a surprise to anyone involved in Network Centric Operations / Warfare. Since 9/11 the US Defense Information Systems Agency (DISA) has spent billions on various intelligence schemes, but has completely failed in the area of proactive network defense. (As a side note, DISA is a combat support agency responsible for planning, engineering, acquiring, fielding, and supporting global net-centric solutions to serve the needs of the President, Vice President, the Secretary of Defense, and other DoD Components, under all conditions of peace and war.)

At DISA there seems to be a fixation on data mining communications networks in the vain attempt to find terrorists using the US phone system or unencrypted websites. The real issue is a complete lack of DoD network interoperability for joint, interagency, and multi-national network operations. One major step forward would be the adoption of open standards and common processes. In the meantime, countries such as China and Russia in particular have built massive citizen botnets. In an instant, Russia can turn on a hundred thousand slave PCs and bring down the entire networks of Georgia, Ukraine or some other unsuspecting country before the US or other allies even know what's happening. (Look at what happened to Georgia this summer.)

This current attack on the DoD is a relatively minor diversion in comparison to what a full-out, planned network-centric attack could actually do. Think about the potential fallout if the US electrical grid, cell/phone networks and financial infrastructure were attacked in unison and taken offline all at once. Now imagine that happening in the midst of an actual crisis, such as what we're currently seeing in India this week. The turmoil would be unprecedented.

DISA isn't alone in this new age of cyber warfare. Earlier this year the World Bank revealed a significant intrusion within the bank's highly restricted treasury network. It was revealed that the bank's core systems had been deeply penetrated with "spy software", and that the invaders also had full access to the rest of the bank's network for nearly a month in June and July of 2007. At least six major breaches have been detected at the World Bank since the summer of 2007, with the most recent breach occurring just last month. What's worse, this was "common knowledge" in the black hat security scene for more than six months before it was disclosed to the public.

In fairness to the World Bank and the US DoD, they are not alone; every single G7/G8 government has suffered similar breaches over the last couple of years. What's scary is the fact that most of these countries have not disclosed these breaches publicly. Lately most of these countries seem to be preoccupied with the current financial crisis while a far more dangerous crisis sits in waiting. As a bit of a conspiracy theorist, I can't help but think the two might be somewhat connected. Traditional terrorism doesn't work; in the new world order it's those who control the network who hold the power.

Recently I've been invited to speak at the Network Centric Operations Industry Consortium (NCOIC) on the topic of network centric operations and interoperability. Unfortunately, with my wife in her ninth month of pregnancy, I will miss the next event on Dec 11th.

I think the NCOIC mission statement sums up the challenges nicely: "The deciding factor in any military conflict is not the weaponry, it is the network. The missing link in today's disaster recovery efforts is a working network. And the key to emergency response is accurate information that enables first-responders to know what happened, who's responded, and what is still required. From the warrior to emergency personnel to the modern day consumer, access to all information, without regard to hardware, software, or location of the user, is no longer attractive, it is imperative."

Thursday, November 27, 2008

The Industrial Revolution of Data

As I watch the reports from Mumbai come in across my various social feeds, blogs, Twitter, Facebook, Flickr, etc., I can't help but think one of the biggest opportunities for the next generation of news providers is that of data mining the massive amount of information being fed through the Internet. When a big news story breaks, it is now much more likely that information will be delivered through an army of citizen journalists using mobile phones and social media services than by traditional means.

In a recent post on O'Reilly Radar, "The Commoditization of Massive Data Analysis", Joe Hellerstein described what he called "The Industrial Revolution of Data." His post went on to say: "we are starting to see the rise of automatic data generation "factories" such as software logs, UPC scanners, RFID, GPS transceivers, video and audio feeds. These automated processes can stamp out data at volumes that will quickly dwarf the collective productivity of content authors worldwide. The last step of the revolution is the commoditization of data analysis software, to serve a broad class of users."

Although for most of the post Joe gets caught up in the finer technical details, I think he is onto a general trend toward the large-scale commoditization of data analysis. In some ways I also think he misses one of the bigger opportunities for large-scale data analysis: the social cloud.

Traditionally, business analysts have used data mining of large data sets such as credit card transactions to determine things like risk, fraud and credit scores. More recently, live Internet data feeds (Facebook, Twitter, FriendFeed, etc.) have become much more commonplace, enabling large-scale, real-time knowledge discovery. For me this is a remarkable opportunity. Think about how Google Maps revolutionized the geospatial industry by putting satellite imagery and geographic information into the hands of everyday people. We now have the ability to do the same within several other industry verticals.

As the volume of social data increases I feel we will soon see the emergence of "social knowledge discovery services". These services will give anyone the ability to spot trends, breaking news and threats (economic, physical or otherwise) in real time or even preemptively.

One such example is Google Trends, a service that has taken this concept to the next level by aggregating statistical trends that actually matter. Google's Flu Trends service tracks certain search terms that are indicators of flu activity in a particular area of the US. It uses aggregated Google search data to estimate flu activity in your region up to two weeks faster than traditional systems. What's more, this data is freely available for you to download and remix. Think about it: a small local pharmacy could use this sort of data to stock up on flu-related products two weeks before a major outbreak occurs.
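As a rough illustration of that pharmacy scenario, here's a minimal sketch in Python. The file name, region name and column layout (a date column plus per-region estimate columns) are my own assumptions for the example, not the actual Flu Trends download format.

import csv

# Hypothetical layout: a "Date" column followed by per-region flu-activity estimates.
FLU_CSV = "flu_trends.csv"
REGION = "Ohio"
STOCK_UP_THRESHOLD = 1.5  # alert when activity jumps 50% week over week

def weekly_estimates(path, region):
    """Yield (date, estimate) pairs for one region from the downloaded CSV."""
    with open(path) as f:
        for row in csv.DictReader(f):
            yield row["Date"], float(row[region])

def stock_up_alerts(estimates, threshold):
    """Flag weeks where flu activity spikes relative to the previous week."""
    previous = None
    for date, value in estimates:
        if previous and previous > 0 and value / previous >= threshold:
            yield date, value
        previous = value

if __name__ == "__main__":
    for date, value in stock_up_alerts(weekly_estimates(FLU_CSV, REGION), STOCK_UP_THRESHOLD):
        print("%s: flu activity %.1f -- order extra flu products" % (date, value))

Swap in the real column names from the download and the same dozen lines become a weekly re-order trigger.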

The next big opportunity will be in "data mining" all the unstructured data found within the "social web" of the Internet. The emergence of social knowledge discovery services will enable anyone to identify trends within publicly available social data sets. Through the use of programming models such as Google's MapReduce, we now have the opportunity to identify key social patterns and preemptively target those opportunities.
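To make the map/reduce idea concrete, here's a toy, single-machine sketch of the pattern applied to trend spotting: the map step emits hashtag counts per message, the reduce step sums them. The sample messages are made up, and a real system would distribute the same two functions across a cluster rather than one process.

from collections import Counter
from itertools import chain

# Made-up sample messages standing in for a live social feed.
MESSAGES = [
    "Explosions reported downtown #mumbai #breaking",
    "Anyone else seeing smoke near the station? #mumbai",
    "New phone day! #gadgets",
    "Live updates from the scene #mumbai #breaking",
]

def map_hashtags(message):
    """Map step: emit (hashtag, 1) pairs for a single message."""
    return [(word.lower(), 1) for word in message.split() if word.startswith("#")]

def reduce_counts(pairs):
    """Reduce step: sum the counts for each hashtag."""
    totals = Counter()
    for tag, count in pairs:
        totals[tag] += count
    return totals

if __name__ == "__main__":
    mapped = chain.from_iterable(map_hashtags(m) for m in MESSAGES)
    for tag, total in reduce_counts(mapped).most_common(3):
        print("%s %d" % (tag, total))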

Wednesday, November 26, 2008

The Battle for the future of the Internet

Since Khaz banned me from the Google Cloud Computing group back in September (a group I helped create), I've been missing a cloud community where I could share my ideas. Recently my cloud computing interoperability group has started to take off, and I finally have a community outlet for my ideas on cloud computing, standards and interoperability.

In a recent post I asked the question: "If not a traditional XML Schema, what other approaches may give us equal or greater flexibility?" To this fairly generic and broad question I received some very thought-provoking responses.

Here is an excerpt from the discussion. (Join the group to engage in the conversation.)

Thanks for all the great insights. I also agree that the last thing we need is yet another standard. If we do this right the standards will organically emerge over time. My goals for a Unified Cloud Interface (UCI) are fairly simple, although my ambitions are much larger.

The mission is this: Cloud interoperability for the purposes of reducing cross cloud complexity.

I completely agree with Paul and others: let's not reinvent the wheel, boil the ocean (insert your own metaphor). Whether it's OWL, RDF, SNMP or something else, we have a significant amount of material to use as the basis for what we're trying to accomplish.

We must focus on the core aspects of simplicity, extensibility and scalability / decentralization in looking at this opportunity.

The question of whether or not XMPP is powerful enough seems, at this point, somewhat secondary. I'd use TCP as an analogy for our dilemma. TCP is arguably not the most scalable, secure or efficient protocol, but its simplicity was its ultimate advantage. The Internet works because parts of it can fail dramatically without affecting the Internet at large; this is because of a decentralized, fault-tolerant architecture, an architecture that assumes failure. There are numerous messaging platforms and protocols to choose from, but none of them seem to address decentralization and extensibility to the extent that XMPP does. Is XMPP perfect? Probably not, but for our purposes it's more than adequate.

I envision a communication protocol that takes into consideration a future that may be vastly different from today's Internet landscape. In some ways my ambition for UCI is to enable a global computing environment that was never previously possible: a technology landscape where everything and anything is web enabled.

Yes, I have big ambitions; it is not often we find ourselves in the midst of a true paradigm shift. This is our opportunity to lose.

Trusting the Cloud

Ian Rae has written an interesting piece asking "Is cloud computing stable in bad weather?" Below are my comments.
---
Ian, your post reads like the doom-and-gloom forecasts we saw about cloud computing back in 2007. Cloud computing is as much about trust as it is about efficiency. The real question you should be asking is: do I trust Microsoft, Amazon or even AT&T to manage my infrastructure better than an internal data center team? In my rather biased opinion, the answer for the most part is yes.

Also, about your comment on outages: the question isn't will the cloud provider go down, because it certainly will. The real question is how do I enable a hybrid cloud environment that assumes failure and handles it gracefully. (This is one of my main motivators for a unified cloud interface standard.)
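As a rough sketch of what "assuming failure" could look like in code, the snippet below tries an ordered list of providers and fails over to the next one. The Provider class and its launch() method are hypothetical placeholders, not any particular cloud API.

import logging

class CloudError(Exception):
    """Raised by a provider when it cannot satisfy a request."""

class Provider(object):
    """Hypothetical stand-in for an internal or external cloud provider."""
    def __init__(self, name, healthy=True):
        self.name, self.healthy = name, healthy

    def launch(self, image, count):
        if not self.healthy:
            raise CloudError("%s is unavailable" % self.name)
        return ["%s-instance-%d" % (self.name, i) for i in range(count)]

def launch_with_failover(providers, image, count):
    """Try each provider in order; a failure triggers the next one."""
    for provider in providers:
        try:
            return provider.name, provider.launch(image, count)
        except CloudError as err:
            logging.warning("failover: %s", err)
    raise CloudError("all providers failed")

if __name__ == "__main__":
    internal = Provider("internal-datacenter", healthy=False)
    external = Provider("public-cloud")
    print(launch_with_failover([internal, external], "web-app-image", 3))

The interesting part is the control flow, not the classes: failure is an expected branch, not an exception path you hope never runs.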

Lastly, speaking as someone who has built a business on a so-called private cloud platform: the idea of a quarantined, shared internal cloud, aka a private cloud, is an oxymoron. Cloud computing is about using resources wherever and whenever you need them, internally or externally.

Unified Cloud Interface: To Schema or not to Schema

My post the other day about creating an XMPP-based unified cloud interface has generated a lot of interest (thank you, Dave @ CNET). One point that has been raised by several people is the proposed usage of an XML schema and whether a predefined model makes any sense. A few of you also said to look at a more "RESTful" architecture, which in my opinion is not mutually exclusive with an XML schema. Several have pointed me to the SNMP protocol and its object model as a good example. SNMP uses a strict verb discipline in tandem with the protocol's small operator set, and the 'resources' are addressed with a uniform global scheme of object identifiers.

Yet another suggestion was to look at the Resource Description Framework (RDF) as the basis for UCI. What I find interesting about the RDF data model is that it is based upon the idea of making statements about Web resources in the form of subject-predicate-object expressions. I also find its use of statement reification and context potentially very useful, although RDF brings us back to the use of schemas.
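For those unfamiliar with the model, here's a tiny sketch of how a cloud resource might be described as subject-predicate-object triples. The URIs and property names are invented purely for illustration; this is not a proposed UCI vocabulary.

# Subject-predicate-object statements describing a hypothetical cloud node.
triples = [
    ("cloud:node/42", "rdf:type",      "uci:ComputeNode"),
    ("cloud:node/42", "uci:provider",  "cloud:provider/example"),
    ("cloud:node/42", "uci:memoryMB",  "2048"),
    ("cloud:node/42", "uci:runsImage", "cloud:image/webapp-1.0"),
]

def objects_of(subject, predicate, statements):
    """Query the triple store: everything said about (subject, predicate)."""
    return [o for s, p, o in statements if s == subject and p == predicate]

print(objects_of("cloud:node/42", "uci:memoryMB", triples))  # ['2048']

The appeal of the model is that new predicates can be added by anyone without breaking existing consumers, which is exactly the extensibility question we're wrestling with.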

So I'd like to propose a question: if not a traditional XML Schema, what other approaches may give us equal or greater flexibility?
-
Join the conversation at the Cloud Interoperability Forum on Google Groups

Tuesday, November 25, 2008

Cloud Computing: Weapons of mass disruption

I gave my lecture today at the University of Toronto and, like most of my events lately, I only made it through my first three slides. I seem to have a knack for creating an interactive discussion, and today's un-lecture was no different.

The group consisted mostly of computer science post-grad students, and they asked me some fairly intriguing questions. In one of my random off-topic rants, I described cloud computing as one of the biggest disruptive technologies to emerge this decade. Now I'm not sure if I'm starting to believe my own hype or if there was actually more to the story. I know one thing: I need to stop using the word "paradigm". But then again, it did spark some great dialog. Afterward, for some reason, the idea of cloud computing being a weapon of mass disruption came to mind, but unfortunately I didn't have a chance to use the term in my presentation today.

Another example I gave was in describing the movement away from the traditional single-tenant desktop or server environment to a decentralized, Internet-centric one. The idea of the network, or the Internet, as the computer also seemed to strike a chord with the audience. This is a concept I still truly believe in. The idea of a hybrid computing environment, where some software aspects remain on your desktop and others are farmed out to the cloud, seems to resonate with a lot of the people I've been talking to lately. Microsoft's Photosynth is a prime example; they refer to this approach as Software + Services, and it actually makes a lot of sense. Microsoft describes their Software + Services philosophy as "a combination of local software and Internet services interacting with one another. Software makes services better and services make software better. And by bringing together the best of both worlds, we maximize choice, flexibility and capabilities for our customers." Microsoft, at least from a "talk is cheap" point of view, seems to get it.

At the end of the day, I think I just enjoy interacting with people, whether it's at a conference or in a more intimate university lecture. Now I see what I missed out on by not going to college or university: an interactive forum for discussion. If you're interested in getting me to speak at your school, please get in touch.

Monday, November 24, 2008

IBM gives Cloud Startups an Exit Strategy

IBM announced today a series of cloud computing related products and services. To be frank, I didn't see anything particularly groundbreaking in the announcement. It was what they didn't say that I found most interesting. For me, what IBM actually announced today was that we now have a major acquisition partner for the various cloud computing startups being created. More simply, on the business plan page titled "exit strategy", you can now include IBM.

Part of this strategy can be seen in IBM's cloud benchmarking program, which offers a "Resilient Cloud" logo that they describe as a confidence booster for enterprise customers who are interested in shifting services to the cloud but concerned about reliability. On the surface, this program would appear to be a nice, easy way for IBM to test out new cloud wares and acquire the most promising startups before anyone else.

I certainly hope there is a secondary agenda, because on its own, IBM's Resilient Cloud logo is about as useful as lipstick on a pig.

Cloud Standardization: Unified Cloud Interface (UCI)

Today I submitted my first email to the XMPP standards list. I thought I'd share my post with the readers of my blog.
------
A few months ago a number of us came together to create "The Cloud Computing Interoperability Forum". The purpose of this group is to discuss the creation of a common cloud computing interface. The group is made up of some of the largest cloud-related vendors and startups, who all share the goal of cloud interoperability as well as reducing cross-cloud complexity.

I'd like to take a moment to explain my cloud interoperability ideas. After various conversations, our concept is starting to take shape and is based on what I'm calling the "unified cloud interface" (aka cloud broker). The cloud broker will serve as a common interface for the interaction with remote platforms, systems, networks, data, identity, applications and services. A common set of cloud definitions will enable vendors to exchange management information between remote cloud providers.

The unified cloud interface (UCI) or cloud broker will be composed of a specification and a schema. The schema provides the actual model descriptions, while the specification defines the details for integration with other management models. UCI will be implemented as an extension to the Extensible Messaging and Presence Protocol (XMPP), specifically as an XMPP Extension Protocol (XEP).
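Purely to illustrate the direction, here's a sketch of the kind of IQ stanza a UCI extension might carry, built with Python's standard ElementTree. The namespace and element names are my own placeholders; the real XEP would define its own schema.

import xml.etree.ElementTree as ET

# Hypothetical namespace and element names -- not a published UCI/XEP schema.
UCI_NS = "urn:xmpp:uci:0"

def build_provision_iq(to_jid, image_id, count):
    """Build an <iq type='set'> stanza asking a remote cloud to start instances."""
    iq = ET.Element("iq", {"type": "set", "to": to_jid, "id": "uci-1"})
    command = ET.SubElement(iq, "{%s}provision" % UCI_NS)
    ET.SubElement(command, "{%s}image" % UCI_NS).text = image_id
    ET.SubElement(command, "{%s}count" % UCI_NS).text = str(count)
    return ET.tostring(iq).decode()

print(build_provision_iq("broker@cloud.example.com", "webapp-1.0", 2))

The point of riding on XMPP is that the transport, presence and addressing problems are already solved; the XEP only has to define the payload.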

The unified cloud model will address both platform-as-a-service offerings, such as Google App Engine, Azure and Force.com, as well as infrastructure cloud platforms such as Amazon EC2. Ultimately this model will enable a decentralized yet extensible hybrid cloud computing environment with a focus on secure global asynchronous communication.

Once we are in general agreement on the draft proposal, it will be submitted for approval by the Internet Engineering Task Force (IETF) for inclusion as an XMPP Extension and presented at the IEEE International Workshop on Cloud Computing (Cloud 2009), being held May 18-21, 2009, in Shanghai, China.

My draft is based on a combination of work being done in conjunction with XMPP, CIM, XAM and several other standardization efforts.

Comments welcome.

--

Sunday, November 23, 2008

The Great Paradigm Revolution

I'm going to say something today that some of you are not going to like. We are in the midst of one of the greatest multifaceted revolutions in modern times. I'm not talking about peasants storming the Bastille. I'm speaking about a potentially much greater social, economic, political, environmental and technological upheaval, one not seen in the last 200 years. I'm calling it the Great Paradigm Revolution.

During Barack Obama's political campaign to become the leader of the free world (aka the United States), his message was clear and concise: "Change". In this message lies a far more ambitious goal, one that stretches across almost all aspects of our current establishment. For me, this idea of change represents a movement away from the paradigm paralysis of the late 20th century, a period marked by the inability or refusal to see beyond the current way of doing things.

From the financial industry to medicine, the environment to agriculture, the keys of society were handed to a generation that seemed to have collectively objected to deviating from common conventions while systematically engaging in one of the biggest periods of technological advancement humankind has ever seen. A kind of generational oxymoron. It's almost like they (the boom/bust generation) were too busy changing the world to remember someone needed to manage it.

There appears to be a disconnect between this technological advancement and how we as a collective society interact with the world around us. The problem with paradigm shifts is you can't see them, you can only realize them. Think about Microsoft's failure to see the shift to the Internet in the 1990s, or the American auto industry's failure to embrace smaller, more fuel-efficient cars, or even the environmental advocates' failure to spot global climate change. It's always been a reactive response, rather than a proactive vision for our future. My question is, why do we never look ahead? Why is the current economic crisis even a problem? Was the fact that the housing boom might lead to a bust not obvious? Or was it because we truly believed it would be different this time? Or maybe we just didn't care.

As a technologist and futurist I believe we have the ability to take control of our collective direction. One such way may be to look at an area of study known as cybernetics. The essential goal of cybernetics is to understand and define the functions and processes of systems that have goals. Think of this as a social/economic feedback loop. In the short term this means not just blindly putting money into half-cocked bailout plans, but programmatically looking at how we came to be in the predicament we're experiencing and taking steps to avoid doing it again.

To accomplish this, I'd like to propose a new area of cybernetics called "Predictive Cybernetics". The general concept is that by analyzing current and historical data we can make predictions about future socioeconomic events. Predictive cybernetics may provide us with the means for examining the future outcome and function of any system, and more importantly of the systems that matter, including but not limited to social systems such as business, healthcare, the environment, the economy and agriculture, with a focus on making these areas more efficient and effective for future generations.

With advancements in computing and access to near limitless computing power, we now have the ability to map or even predict our future. The problem is we're not looking. Maybe I'm idealistic, but this would seem to be a key aspect missing from our current governing elite. We must start looking ahead. We have to be proactive rather than continually reactive. I suppose what I'm saying is let's spend some of the bailout money on making sure this sort of boom/bust cycle never happens again.

It is my belief that what we're experiencing now is an unprecedented socioeconomic revolution, one that, if handled correctly, may present a tremendous opportunity as power shifts from a failed generation to the next. Those of us who realize this opportunity will be the ones who prosper. I don't have all the answers, but like Obama I am hopeful for change.

Saturday, November 22, 2008

Google the Human Genome with Amazon Public Data Sets

Ever wanted to Google the Human Genome? Well now you can thanks to Amazon.

You've got to hand it to Amazon. They keep coming out with unbelievably cool stuff and the latest is no exception. Called Amazon Public Data Sets, it's a cloud data service that provides what they describe as a convenient way to share, access, and use public data within your Amazon EC2 environment. Selected public data sets are hosted on AWS for free as Amazon EBS snapshots. Any Amazon EC2 customer can access this data by creating their own personal Amazon EBS volume from a publicly shared public data set snapshot.

The possibilities for the service are tremendous. Think about it: worried about your stock portfolio? No problem, run a map/reduce job against the various economic databases provided by the US Bureau of Economic Analysis.

The service can enable hobbyists to discover increased cancer risks in the human genome. (The Nobel Prize goes to Joe in Cincinnati, Ohio, from his basement.) It enables "anyone" to instantly run analysis against data sets like the Annotated Human Genome Data provided by Ensembl.
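As a rough sketch of how you'd mount one of these data sets, here's what it might look like with the boto library. The snapshot ID, instance ID, volume size and availability zone are placeholders; the real values come from the AWS public data set catalog and your own account.

import boto

# Placeholders: substitute the real snapshot ID from the public data set
# catalog, plus an instance and zone from your own account.
SNAPSHOT_ID = "snap-xxxxxxxx"   # e.g. the Ensembl annotated human genome set
INSTANCE_ID = "i-xxxxxxxx"      # an EC2 instance you already have running
ZONE = "us-east-1a"             # must match the instance's availability zone
SIZE_GB = 250

conn = boto.connect_ec2()  # picks up AWS credentials from the environment

# Create a personal EBS volume from the shared public snapshot...
volume = conn.create_volume(SIZE_GB, ZONE, SNAPSHOT_ID)

# ...then attach it to your instance and mount it from the OS as usual.
volume.attach(INSTANCE_ID, "/dev/sdf")
print("volume %s attached to %s" % (volume.id, INSTANCE_ID))

That's the whole trick: the data never crosses the public Internet, you just get a private, writable copy of it sitting next to your compute.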

For the first time in the history of humankind, regular people can do from home the sort of high-performance computing that was previously limited to major governments and corporations. If we are truly living in the information age, then you can think of this as the information age on steroids, or information age 2.0. I very well might be understating this: it is big, very big.

All we need now is a hosted map/reduce API for AWS or an AI interface and you've got your very own "Dr. Evil" world domination cloud kit. It's a brave new world powered by limitless computing power. Welcome to the future, folks.

Friday, November 21, 2008

Economic Darwinism

First of all, I am not a libertarian; I am a pragmatic entrepreneur. I pride myself on being able to make something from nothing with a focus on sustainable business economics. Simply put, I believe a business should not have to rely wholly on outside financial support, be it from a government or otherwise. This brings me to a discussion I had this week regarding the various US government bailouts being proposed or implemented, and why I think that in certain cases, such as the US auto sector, we should let the free market determine who wins and who fails. I've been using the term "economic Darwinism" to describe my philosophy.

The concept of economic Darwinism isn't a new one. Its roots trace back to a concept more commonly known as evolutionary economics. The concept goes like this: in the 19th century Charles Darwin proposed a general framework whereby small, random variations could be accumulated and selected over time into large-scale changes that resulted in the emergence of wholly novel forms of life. In economics the basic premise is that markets act as the major selection vehicles. As firms compete, unsuccessful rivals fail to capture an appropriate market share, go bankrupt and have to exit. (Survival of the fittest.)

GM, Ford and other American auto makers are in trouble because they are failing to adapt to the market demands of the 21st century. They still cling to 50-year-old business practices, design sensibilities and outdated sales models. Add to the mix an overcapacity of vehicles and you have a recipe for disaster. Again, I say let the market dictate who wins and who loses.

Yes, I know that thousands of jobs will be lost. So I say bail out those people. Give them the tools to reinvent themselves. Bailing out struggling, unsustainable enterprises will create a cycle of boom and bust that ultimately will do more harm than good.

My suggestion to those who control the direction of the US economy is this: realize that economic sustainability is paramount to the continued health of the overall business ecosystem and, more importantly, to the people who power it.

Networking at the cloud expo

I've been having a great time in San Jose this week at the various cloud-related events. I was telling my wife the only time in my life people actually recognize me is at a cloud conference. How geeky is that? Actually, the fact that there even is a conference dedicated to cloud computing goes to show how far we've come. I guess I'm not so crazy after all. Anyway, I've had several great conversations over the last couple of days. These discussions have ranged from decentralization to economic Darwinism to the myth of five nines.

At the cloud camp mixer the other night I ran into some of the folks from Juniper Networks. In our conversation it became apparent that the next major battlefield in the networking space may very well be "cloud networking", and upstarts like Arista, or more established niche players like Juniper, look to be in an ideal position to take advantage of what they described as the "great paradigm" shift.

To give a little background, Juniper has made its name by providing customized network solutions for companies like Microsoft and Google. The word is that they make upwards of $200 million a year on Google alone, so they already have a major foothold in the emerging cloud ecosystem. The opportunity for Juniper, as well as smaller network players, is to assist in the migration from the centralized data centers of the past to the decentralized, hybrid computing environments of the future.

I'll post more of my conversations over the coming days.

Tuesday, November 18, 2008

Cloud Camp Party Tomorrow Night - San Jose

I'm happy to announce we're throwing a cloud camp mixer tomorrow night in San Jose (Wed, Nov 19th 7:30pm at Gordon Biersch) Drinks provided by Enomaly, Mosso, Rightscale, and Sun.

With so many Campers in San Jose for CloudComputingExpo, we've decided to find our own time/place to meet. We couldn't find a large enough time slot to put on a formal CloudCamp, but getting together for a few drinks and letting the discussions happen seems like a good enough reason to meet.

Wednesday night is the best time because the conference sessions end at 7:30pm. Gordon Biersch is just around the corner from the Fairmont Hotel and they’ll give us our own room from 7pm-11pm.

Registration on the website is required. So let us know if you can make it, and come on by!

REGISTER HERE ==> http://www.cloudcamp.com/sanjose

Thanks to Mosso, RightScale, Sun & Enomaly for stepping up and providing the funds for FREE drinks.

See you there!
Dave, Reuven ... and the rest of the CloudCamp organizers
http://www.cloudcamp.com

Amazon's Global CDN Storm Front

I've been traveling to California today for some upcoming cloud conferences, so I haven't had a chance to chime in on Amazon's CDN announcement. In case you haven't heard, Amazon Web Services has released their global content delivery service. That service is called Amazon CloudFront and it is ready for public use.

At first glance it looks awesome: a content distribution service you set up with a single REST-style POST... need I say more? But is it an Akamai killer? That's not so easy to answer. Traditionally, most CDNs have been out of reach for the type of users that typically utilize Amazon S3 (smaller websites and startups). So I would say that this new commodity CDN opens up a whole new potential market: the small to medium-sized websites that until today haven't been able to effectively scale beyond the borders of North America, simply because the legacy CDNs weren't addressing the needs of the startup.
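To give a sense of how low the barrier is for a small site: once you've created a distribution for your S3 bucket, adopting the CDN is largely a matter of rewriting asset URLs to the distribution's domain. Both hostnames in this trivial sketch are placeholders, not real endpoints.

# Placeholders: your own S3 bucket URL and the domain CloudFront assigns
# to the distribution you create for it.
S3_BASE = "http://my-bucket.s3.amazonaws.com/"
CDN_BASE = "http://d1234example.cloudfront.net/"

def cdn_url(s3_url):
    """Rewrite an S3 object URL so it is served from the CDN edge instead."""
    if s3_url.startswith(S3_BASE):
        return CDN_BASE + s3_url[len(S3_BASE):]
    return s3_url  # not one of ours; leave it alone

print(cdn_url(S3_BASE + "images/logo.png"))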

Amazon says that the CDN service will initially have 14 edge locations (8 in the United States, 4 in Europe, and 2 in Asia). I think the question will be how quickly the large traditional CDN customer base (CNN, MTV, etc.) will embrace this type of low-cost service. A lot of the customers I've talked to have indicated that Amazon's complete lack of enterprise customer support is one of the main reasons they haven't used the other AWS services. Will cost outweigh customer service?

As for the question of whether or not larger enterprises will migrate to Amazon's CDN: I'd say some will and some won't, but at the end of the day I feel the Fortune 5,000,000 is a far bigger opportunity than the traditional enterprise customer, and Amazon knows this all too well. CDN providers such as Limelight and Akamai have done a far better job providing a proactive CDN infrastructure that serves a global user base, but at a cost that is out of line with other hosting options. Amazon is a master of the commodity business; business areas with slim profit margins seem to be a preferred target. Mix in an industry ripe for disruption and you've got a potent combination. Watch out, Akamai.

In the short term the key issue will probably be centered around the more cosmetic aspects of a CDN, such as the reporting dashboards. In the longer term, just as with EC2, we'll probably start to see a great deal of innovation appearing on top of the CloudFront service, just like the ecosystem that appeared around the other AWS services, which is far more valuable than any single AWS service on its own. This ecosystem is Amazon's core advantage.

--
I also recently wrote about the opportunities for the content delivery cloud.

Monday, November 17, 2008

ElasticHosts Launches first KVM based Cloud Service

ElasticHosts has released what they describe as the first wave of capacity on its UK cloud infrastructure as a public beta. What I find most interesting about this announcement is that ElasticHosts has chosen to use KVM-based virtualization for their offering (KVM = Kernel-based Virtual Machine).

Recently KVM has caught on as a viable open source virtualization solution, adopted by Red Hat and Ubuntu as their preferred hypervisor. For us at Enomaly, we've found KVM to be simpler and easier to install and configure compared to Xen. One of the biggest limiting factors to the adoption of "private clouds" is the complexity found within most cloud platforms, ours included. I'd imagine the folks at ElasticHosts felt the same way.
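As a small aside for anyone curious what working with KVM looks like programmatically, here's a minimal sketch using the libvirt Python bindings to connect to a local KVM/QEMU host and list its running guests. It assumes libvirt and its Python module are installed; it isn't tied to ElasticHosts or ECP.

import libvirt

# Connect to the local QEMU/KVM hypervisor (read-only is enough for listing).
conn = libvirt.openReadOnly("qemu:///system")
if conn is None:
    raise SystemExit("could not connect to the local hypervisor")

print("hypervisor type: %s" % conn.getType())

# List the currently running guests by numeric ID, then resolve their names.
for dom_id in conn.listDomainsID():
    domain = conn.lookupByID(dom_id)
    print("running guest: %s (id %d)" % (domain.name(), dom_id))

conn.close()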

ElasticHosts provides flexible server capacity in the UK for scalable web hosting, on-demand burst computing and other uses. They claim to be the second publicly available cloud infrastructure to launch outside the US (which is arguable).

Key customer benefits include:

- Instantly scalable on-demand server capacity, with a free selection of CPU, memory and disk sizes to suit any application.

- UK data centre, offering fast direct links to the UK/EU internet and ensuring that your data stays within EU jurisdiction and data protection laws. (GeoPolitical Cloud Computing)

- Competitive prices for both subscription and burst use. Buy exactly the capacity you need, when you need it.

- Advanced KVM virtualization technology, delivering the full power of our infrastructure to your servers, and supporting any PC operating system.

Find out more, and sign up for a trial or account at www.elastichosts.com.

Sunday, November 16, 2008

Bad Economy? Blame the baby boomers

Alright, I know this post may not win a lot of accolades from the seventy-six million or so Americans who were born between 1946 and 1964. But in my recent conversations I have come to a rather unscientific conclusion: the current economic crisis is in fact completely and totally the fault of the idealistic baby boomer generation. Yup, you heard it here first, I'm blaming the hippie generation for our current economy.

It would seem to me (a Generation X'er) that the boomers' complete disregard for our economic system has created the current economic crisis.

There may have been a combination of factors, which may or may not be limited to a few key "boomer" characteristics such as:
  • sexual freedom thanks in part to Viagra (Your screwing around helped screw the rest of us)
  • a complete lack of governmental oversight and regulations (Your communist hippy way of government doesn't work)
  • the environmental movement (i.e. boomers fixed the ozone hole and created a tonne of greenhouse gases in the process, nice going.)
  • most importantly, the boomers' experimentation with various intoxicating recreational substances. (Do you even remember 1969?)
I should also probably note the boomers' complete disregard for "the man", which, in case someone forgot to mention it, you now officially are, and by being "the man" you get to take all the blame.

Luckily for us Gen Xers, we're not on the verge of retiring, so ironically the baby boomers screwed themselves more than anyone else.

Obama, The First Cybergenic President of America

In a New York Times article back in August, Paul Saffo, a Silicon Valley futurist, said that if elected, Barack Obama would become "the first cybergenic president," just as John F. Kennedy was considered the first telegenic president. (Cybergenic meaning internet-friendly; telegenic meaning attractive to television viewers.)

In my opinion, what Obama's election means is that for the first time in the history of the American presidency we actually have a leader who understands what the Internet enables, not only as a mass communication medium but as a global channel for change.

It's also interesting to look at historical parallels to see what embracing new technologies can do for American leaders, whether it's President Kennedy's simple ability to engage 1960s television viewers or how Abraham Lincoln used the telegraph to win the Civil War (the subject of a book by Tom Wheeler).

Wheeler's book gives a particularly insightful look at how President Lincoln's use of the telegraph trickled down to most parts of his government while also giving a competitive advantage to his closest advisors and generals. In the book, Tom Wheeler gives an example of how General Grant used the telegraph to operate as General-in-Chief while traveling with the armies, rather than managing at a distance from Washington D.C. Grant now had the technological advantage to quickly improvise based upon changing battlefield conditions. I find Wheeler's observation about General Grant perfect for today's president: "His decision to operate from the field would not have been possible but for the army's central nervous system running over telegraph wires." Replace the telegraph with the Internet or a BlackBerry and you can quickly see how a president knowledgeable in the use of information technology can turn it into a critical tool.

In the 21st century, as it was in the 19th, efficient information management is still a key aspect of how effective you can be in your duties as a chief executive, be it of a company or a country. I would argue that it's more important now than it has ever been.

At the end of the day, Obama's use of the Internet may give him a unique opportunity to make significant changes, not only to how Americans interact with their government but to how they interact with those who run it.

Saturday, November 15, 2008

Information Week Startup Of The Week: Enomaly

I'm happy to announce that Enomaly is Information Week's startup of the week.

John Foley had this to say: For IT departments that like the idea of cloud computing but are held back by security, governance, or other concerns, Enomaly makes it possible to create cloud-like environments in corporate data centers. The company's Elastic Computing Platform 2.1 originated as an open source project that recently morphed into a commercial offering.


With all the interest in cloud computing, Enomaly seems to have the right product at the right time. It has four years of experience under its belt--mostly as a services company--and some impressive early customers. Yet, Enomaly has only 16 employees, and the management team, though technically deep, is relatively light on experience in the enterprise software market. Potential customers should try before they buy. Download its free software first, then sign an enterprise license if all goes well.

Read the overview at informationweek.com

Friday, November 14, 2008

Cloud.gov - Cloud Computing in the Federal Government

Had a great trip to Washington DC this week for our first ever CloudCamp Federal government edition. It was very interesting to see the points of view of so many involved in IT for the US federal government. We had a great turnout, with more than 160 people signing up for the event. What I found most telling was that there seems to be a growing interest in using remote computing services within federal agencies, and it's not for the reasons you'd expect.

One of the recurring questions I kept hearing was that of trust. Can we trust third-party providers of computing capacity? The analogy of a traditional phone company such as AT&T came up quite frequently: we prefer to work with "cloud providers" that we trust. The biggest opportunity, then, seems to be for those already entrenched in the existing political IT establishment.

The question of security was raised numerous times, ranging from the more obvious concerns to questions of data privacy and portability. A group from the DoD said they looked forward to using technologies such as XMPP for the federation of multiple shared private clouds, and said interoperability standards should be created.

One of the more interesting comments was that money is not an issue for the major IT organizations within the federal government. So if you are the DoD, the core benefit of cloud computing isn't cost savings but efficiency. It sounded to me like just getting VMware into their infrastructure would be a huge win. Not that I completely agree that virtualization is in itself cloud computing, but it was obvious that inefficiency in general was a major issue.

For other, less critical federal agencies, it wasn't that cloud computing wasn't going to happen but that it already was happening. There was an emphasis on non-critical services, the so-called low-hanging fruit. These non-core web services provide the best and possibly largest opportunity for cloud computing within the federal government. Think along the lines of the White House website or federal information programs: ways to quickly and easily get the word out.

The spooks in the room also had an interesting take on things. The US is being beaten, and beaten badly, by upstart cloud programs coming out of China and Russia, and the level of red tape on the Beltway is doing more harm than good. The idea of Russia being able to take control of millions of zombie PCs at a moment's notice also seemed to be troubling. Another point of contention was that China has been able to create million-server clouds with little or no competition from the US. On the flip side, they assured me that there is a lot more going on than they could talk about. It was clear that distributed cloud technology represents one of the biggest opportunities within the military IT organizations, and that the likelihood of some small cloud upstart, or even Google or Amazon, getting the job was slim.

Needless to say, we're living in some interesting times.

Wednesday, November 12, 2008

CloudExpo: The World Wide Cloud: Bridging the Data Center and the Cloud

Come see me live and in person at the International Cloud Computing Expo on Nov. 19 from 4:30 to 5:15 at the Fairmont Hotel in San Jose, Calif. I'll be speaking on the topic of "The World Wide Cloud: Bridging the Data Center and the Cloud."

ABSTRACT

As cloud computing becomes more commonplace, creating a secure method to bridge the gap between existing data centers and remote sources of compute capacity is becoming more and more important. The ability to efficiently and securely tap into remote cloud resources is one of the most important opportunities in cloud computing today.

In this session, I will discuss some of the challenges and opportunities of deploying across a diverse global cloud infrastructure. Location, security, portability, and reliability: I will explain how they all play critical roles in a scalable IT environment.


I'll probably get way off topic and discuss my other various points of view. So make sure to stop by. If you're interested in meeting up, please ping me as well.

Tuesday, November 11, 2008

Complex Models & Value of Utility Resources in the Cloud

Had a nice chat today with Joe Weinman from AT&T. He had a unique take on utility/cloud computing and its adoption within enterprises. During the conversation he also pointed me to a site he put together called Complex Models. The site is a simulation resource containing a small number of models addressing the structure, dynamics, and financial analysis of utility and cloud computing, random graphs, power law preferential attachment graphs, and other simple models that may illustrate complex, emergent characteristics or behavior. If you're into playing with models, it's a cool site to check out.

I found the "Value of Utility Resources in the Cloud" model particularly interesting.

Check it out at > http://complexmodels.com

Monday, November 10, 2008

Defining Cloud Optimized Storage

With today's EMC announcement of their cloud optimized storage (COS) platform, called Atmos, we are seeing for the first time an enterprise-ready attempt at a global cloud storage system. Until now, these types of global distributed file systems have mostly been what Chuck Hollis at EMC described as home-grown solutions built by academics or hobbyists.

Personally, what I found even more interesting than the actual product release was how they described a new cloud optimized storage market segment.

EMC describes cloud optimized storage as "the ability to access applications and information from a third-party provider—like a large telecommunications company—that has built a global cloud infrastructure. That cloud infrastructure will make massive amounts of unstructured information available on the Web, and will require policy to efficiently disperse the information worldwide."

One of the biggest limitations to the adoption of Atmos is that it isn't open source. Cloud computing is about ubiquity: the more users of your platform, the better. I do think that EMC's activity in the cloud storage sector will ultimately help drive more interest and demand for cloud storage across the board. I believe that the rising tide floats all boats, and my boat has already left the harbor.

To give you some background, a while back I came up with a term I call the "Content Delivery Cloud". I think the approach of EMC's cloud optimized storage fits into this concept very nicely.

In partnership with Pando Networks, we created a joint site promoting this concept at www.contentdeliverycloud.com.

Here is an overview:

A Content Delivery Cloud is a system of computers networked together across the Internet that are orchestrated transparently to deliver content to end users, most often for the purposes of improving performance, scalability and cost efficiency. Extending the model of a traditional Content Delivery Network, a Content Delivery Cloud may utilize the resources of end-user computers ("the cloud") to assist in the delivery of content.

Attributes:

  1. Utilizes the unused collective bandwidth of the audience. Every content consumer becomes a server, offloading bandwidth demand from central CDN servers, thus cutting bandwidth costs and boosting media monetization margins.
  2. Improves delivery performance by providing data from a virtually unlimited number of servers in parallel.
  3. Scales with demand. The more consumers demand a particular piece of content, the larger, better performing and more cost efficient that content's delivery cloud becomes.
  4. Benefits all participants in content delivery value chain. To be successful, a Content Delivery Cloud must provide value for the content publisher, the Content Delivery Network, the Internet Service Provider, and the content consumer.
  5. Utilizes a wide range of delivery strategies. Maximize performance and economics by optimally utilizing all available, appropriate data sources, including origin servers, CDN servers, streaming servers, cache servers, and peers. Participants in the delivery cloud can include not only desktop computers but also set top boxes, file servers, mobile devices, and any other Internet-enabled device that produces or consumes content. (A toy sketch of this kind of source selection follows the list.)
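Here is that toy sketch of client-side source selection: prefer available peers, then CDN edges, then the origin. The candidate sources and the cost scoring are invented purely for illustration.

# Toy illustration of delivery-source selection for a content delivery cloud.
SOURCES = [
    {"kind": "peer",   "host": "198.51.100.7",          "available": True,  "cost": 0.0},
    {"kind": "peer",   "host": "203.0.113.9",           "available": False, "cost": 0.0},
    {"kind": "cdn",    "host": "edge1.cdn.example.com", "available": True,  "cost": 0.5},
    {"kind": "origin", "host": "origin.example.com",    "available": True,  "cost": 1.0},
]

def pick_sources(sources, parallelism=2):
    """Prefer the cheapest available sources (peers, then CDN, then origin)."""
    usable = [s for s in sources if s["available"]]
    usable.sort(key=lambda s: s["cost"])
    return usable[:parallelism]

for source in pick_sources(SOURCES):
    print("fetch chunk from %(kind)s %(host)s" % source)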
--
I also wanted to follow up from my previous post on Atmos.

Storagezilla said the following:
Atmos is *not* a clustered FS nor does EMC see it as an Isilon or OnTap GX clone/competitor.

Chuck Hollis, VP and Global Marketing CTO at EMC, also had some interesting insights:

The traditional storage taxonomy doesn't do a good job of describing what Atmos (and, presumably, future solutions from other vendors) actually does. As you'll see shortly, it isn't SAN, NAS or even CAS. So, what makes "cloud optimized storage" so different? The use of policy to drive geographical data placement.

He goes on to give some more technical details: "Is Atmos hardware-agnostic? Yes, that's the design. It runs well as a VMware guest, for example. That being said, our experience with customers so far indicates a strong desire for hardware that's built for purpose -- especially at this sort of scale." Check out the rest of Chuck's post to gain better insight into the usage and deployment of Atmos.

Here are a few more Atmos links as well:

StorageZilla's,
Steve Todd's,
StorageBod,
Chris Mellor at The Register
Network World
Tarry Singh

Sunday, November 9, 2008

Cloud Chaos & Decentralization

In the late nineties and the early part of this decade there was a marketing push around the concept of "centralization". Companies like IBM, Oracle and Sun focused on creating hardware and software platforms with single points of deployment and administration in the vain attempt to make it easier to manage your infrastructure. It quickly became apparent that, for all its marketing hype, centralization created more problems than it solved.

In nature, most things are not centralized; they are almost always decentralized. Centralization is a human construct used to bring structure to an unstructured world. Whether it's an ant hill or a human body, the Sun or a galaxy, decentralization and chaos are all around us. Some may see decentralization as anarchy or chaos, but out of the chaos comes the ability for systems to evolve and adapt their state over time. These adaptive systems can exhibit dynamics that are highly sensitive to initial conditions and may adjust to the demands placed on them.

To build scalable cloud platforms, the use of decentralized architectures and systems may be our best option. The cloud must run like a decentralized organism, one without a single person or organization managing it. Like the Internet, it should allow 99 percent of its day-to-day operations to be coordinated without a central authority. The Internet is itself the best example of a scalable decentralized system and should serve as our model.

The general concept of decentralization is to remove the central structure of a network so that each object can communicate as an equal with any other object. The main benefit of decentralization is that applications deployed in this fashion tend to be more adaptive and fault tolerant, because single points of failure are eliminated. On the flip side, they are also harder to shut down and can be slower. For a wide variety of applications, decentralization appears to be an ideal model for an adaptive computing environment.

For me, cloud computing is a metaphor for Internet-based computing, and the Internet should therefore be the basis for any cloud reference architecture. In the creation of cloud computing platforms we need to look at decentralization as a way of autonomously coordinating a global network of unprecedented scale and complexity with little or no human management. Through the chaos of decentralization will emerge our best hope for truly scalable cloud environments.
----
This has been a random thought brought to you on a random night.

Saturday, November 8, 2008

RumorMill: EMC's Atmos Cloud Optimized Storage Launching Nov 10th

I just received word from an anonymous source that EMC will be releasing their cloud storage platform on Monday, November 10th. It will be a new category of storage called "Cloud Optimized Storage". The release will be named "Atmos", short for Atmosphere, and was previously referred to as "Maui".

According to previous details, Maui was to be clustered file system software that would compete with Isilon's OneFS or NetApp's OnTap GX.

In November 2007, EMC CEO Joe Tucci said this, somewhat vaguely, during a keynote at an EMC Innovation Day event: "Maui is well beyond a clustered file system, but will incorporate some of the things a clustered file system does."

According to my source the final features for Atmos will include the following:

Massively scalable infrastructure
- Petabyte scale
- Global footprint

All-in-one data services
- Replication, Versioning
- Compression, Spin-down,
- De-duplication
- Advanced metadata support, Indexing
- Powerful access mechanisms

Intelligent data management
- Personalized by metadata and policy
- Auto-configuring when capacity is added
- Auto-healing when failures occur
- Auto-managing content placement

Cost effective hardware
- Industry standard building blocks
- Modular packaging
- User-serviceable components

---- BENEFITS ----

Reduces complexity of global content distribution
- Policy based intelligence for objects and tenants

Single tier, easy to manage
- Auto-configuring, auto-healing, browser-based interface

Infinitely scalable
- Multi-petabytes, multi-site, multi-tenant

Easy to integrate and extend
- REST and SOAP APIs, NFS, CIFS, IFS

Easy to configure and expand
- Single-entity, global namespace, No RAID, No LUNs

Cost compelling content store, dispersion and archive
- Standard hardware, economy of scale

According to my source, EMC also has a few more products for the cloud, but the source couldn't say any more about them.
--

V for Vendetta

My favorite antagonist, Sam Johnston, who infamously called for a boycott of Enomaly, has been helping us find security holes in Enomaly ECP. Fueled by a potent mix of rage and Red Bull, Sam has been busy trying to find ways to exploit ECP. This is exactly one of the great benefits of an open source model: anyone, be it friend or foe, can assist with the discovery of security vulnerabilities.

Sam's security exploit is relatively minor and should not affect anyone with decent dom0 access rules. We currently use random filenames that are pretty hard to guess, and if an unauthorized user were to gain access to the dom0, you'd probably have bigger issues to deal with. So this really only affects "trusted" dom0 users. The resolution is: don't give out dom0 access to untrusted users, which is probably a good idea anyway. The whole purpose of ECP is to abstract resources so you don't have to give that level of access to core system resources. The next release of Enomaly ECP will address this issue.
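For readers wondering what the general fix for this class of bug looks like, here's a generic sketch (not necessarily the exact patch going into ECP): create the pidfile with O_CREAT|O_EXCL in a directory that isn't world-writable, so a pre-planted symlink causes a hard failure instead of a root-owned overwrite, and validate the contents before ever passing them to kill.

import os

# Generic sketch, not the actual ECP patch: keep the pidfile out of
# world-writable /tmp and refuse to follow anything already at that path.
PIDFILE = "/var/run/enomalism2.pid"  # requires running as the daemon user (root)

def write_pidfile(path, pid):
    # O_EXCL makes the create fail loudly if the path already exists,
    # which defeats a pre-planted symlink instead of overwriting its target.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o644)
    with os.fdopen(fd, "w") as f:
        f.write("%d\n" % pid)

def read_pid(path):
    # Accept nothing but a single integer, so injected arguments like
    # "-9 1" can never reach the kill command.
    with open(path) as f:
        return int(f.read().strip())

if __name__ == "__main__":
    write_pidfile(PIDFILE, os.getpid())
    print("daemon pid: %d" % read_pid(PIDFILE))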

Here is Sam's full post:

Enomaly ECP/Enomalism: Insecure temporary file creation vulnerabilities

Synopsis

All versions of Enomaly ECP/Enomalism use temporary files in an insecure manner, allowing for symlink and command injection attacks.

2. Impact Information

Background

Enomaly ECP (formerly Enomalism) is management software for virtual machines.

Description

Sam Johnston of Australian Online Solutions reported that enomalism2.sh uses the /tmp/enomalism2.pid temporary file in an insecure manner.

Impact

A local attacker could perform a symlink attack to overwrite arbitrary files on the system with root privileges, or inject arguments to the 'kill' command to terminate or send arbitrary signals to any process(es) as root.

Exploits

a. ln -s /tmp/target /tmp/enomalism2.pid
b. echo "-9 1" > /tmp/enomalism2.pid
-
Never underestimate the power of a vendetta. Thanks Sam, let me know if there is anything I can do in return.

Whurley named 2008’s “Best Evil Genius”

William Hurley, aka Whurley, is a visionary systems theorist, skateboarder and now "Best Evil Genius", as awarded by the Austin Chronicle. Whurley and I go way back; he was one of the first guys to reach out, and he has been leading the efforts to form an open source community among Austin tech geeks. He is the chief architect of open-source strategy at BMC Software and the man behind BarCampAustin, and he was critical to helping me get CloudCamp off the ground. Whurley is all about connecting people and encouraging involvement in collaborative projects so great things can happen.

Way to go dude!

Friday, November 7, 2008

Cisco 3.0 and the future of cloud computing

It's been a big week for Cisco and their activities in cloud computing. On Wednesday and Thursday they held their first ever "Cisco Cloud Computing Research Symposium" (C3RS) in San Jose, which they described as a forum to stimulate conversation and the exchange of ideas with the intent of laying out the main open lines of research in cloud computing. The by-invitation-only event, which I had the honor of being invited to but sadly could not attend, focused on cloud technology and its impact on the Internet of the future (and specifically on how Cisco is going to make a tonne of cash off major customers such as Bank of America, who are rumored to be in the process of rolling out a major private cloud with them).

In a separate conversation, Cisco's Chief Technology Officer Padmasree Warrior spoke to a group at the Web 2.0 Summit in San Francisco. During her panel, Warrior outlined Cisco's vision for the cloud, saying that cloud computing will evolve from private and stand-alone clouds to hybrid clouds, which allow movement of applications and services between clouds, and finally to a federated "intra-cloud." (Guess she reads my blog. Padmasree, call me, I'll hook you up with an intra-cloud.)

She seems to like the term intra-cloud, which actually isn't that bad. She elaborated with this tidbit: "We will have to move to an 'intra-cloud,' with federation for application information to move around. It's not much different from the way the Internet evolved. It will take us a few years to get there. We have to think about security and load balancing and peering," she said. "Flexibility and speed at which you can develop and deploy applications are the basic advantages that will drive this transformation."

During the same conference panel, Adobe CTO Kevin Lynch said that compatibility at the cloud platform layer is a problem: "The level of lock-in in the cloud in terms of applications running and data aggregation is at a risky juncture right now in terms of continuity." I actually agree with Lynch; Adobe's cloud efforts have been some of the most locked down of any of the major players.

Salesforce.com CEO Marc Benioff went back to his vision for integration between his Force.com platform and Google and Facebook as an example of the way cloud services can be mashed up. "It's an unclear area of the law as to who owns what." (Don't get me started on this one, I smell BS a mile away. Marc, your customers own their data, how many times do I have to tell you this!)

At the end of the day, all the major players love to talk about interoperability and federation. VMware with their vCloud is a prime example of interop-vapor: hey look, we're interoperable, just as long as you're using VMware. But in the end actions still speak louder than words, and for now it's just words.

Busy as a beaver.

Just a quick post to let everyone know I've been swamped with Enomaly-related activity, so my posts may be sparse over the next couple of weeks (like my favorite type of file system). Lots of new announcements are coming, including "my cloud dream team" board of advisors and some new partnership deals. I'll keep you posted.

Thursday, November 6, 2008

Cloudwars: Is Salesforce Evil or Stupid?

After my post earlier today on Salesforce's delusional state, it would seem they keep adding fuel to the fire. The Silicon Valley Watcher is reporting that Marc Benioff, the CEO of Salesforce, wasn't too pleased when he found out SugarCRM was hosting its user conference at the Marriott, just a few yards from the Salesforce Dreamforce conference at the Moscone Center in downtown San Francisco.

John Roberts, the CEO of SugarCRM, had this to say: "When Marc Benioff found out we were at the Marriott he pressured the hotel to move us out. That's how we ended up here at the St. Regis, and Marriott is paying for it."

Nice.

(More details: http://www.siliconvalleywatcher.com/mt/archives/2008/11/sugarcrm_the_li.php)

MIT Technology Review: Opening the Cloud

Erica Naone has written a nice piece on open-source cloud-computing tools that could give companies greater flexibility. She has graciously included me as a contributor.

Here's my piece:
Reuven Cohen, founder and chief technologist of Enomaly, explains that an open-source cloud provides useful flexibility for academics and large companies. For example, he says, a company might want to run most of its computing in a commercial cloud such as that provided by Amazon but use the same software to process sensitive data on its own machines, for added security. Alternatively, a user might want to run software on his or her own resources most of the time, but have the option to expand to a commercial service in times of high demand. In both cases, an open-source cloud-computing interface can offer that flexibility, serving as a complement to the commercial service rather than a replacement.
Read the whole article here > http://www.technologyreview.com/web/21642/
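To make the "expand to a commercial service in times of high demand" scenario a little more concrete, here's a rough Python sketch of the placement decision. The capacity figure, the tag name and the function itself are invented for illustration; it's the shape of the logic, not Enomaly's actual API.

LOCAL_CAPACITY = 20          # made-up number of VMs our own hardware can host
SENSITIVE_TAG = "sensitive"  # workloads that must never leave our own machines

def choose_target(workload, local_running):
    # Decide whether a new workload runs on local gear or bursts to a public cloud.
    if SENSITIVE_TAG in workload.get("tags", []):
        return "local"   # sensitive data stays in-house regardless of load
    if local_running < LOCAL_CAPACITY:
        return "local"   # room to spare, no need to pay for public capacity
    return "public"      # burst: overflow to the commercial cloud

print(choose_target({"tags": []}, local_running=21))             # -> public
print(choose_target({"tags": ["sensitive"]}, local_running=21))  # -> local

The point being that the same images and the same management layer run in both places; only the placement decision changes.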

Wednesday, November 5, 2008

Salesforce's Dreamland: Data Portability, Interoperability and Cloud Monopolies

It's been an interesting week for Salesforce.com chief executive Marc Benioff. During a presentation at the Salesforce Dreamforce delusion, sorry, conference, the Salesforce chief called for the creation of cloud interoperability standards for moving data between applications in rival clouds, which on the surface appears to be a great step forward. The problem is he was very quick to contradict himself.

During a followup, Benioff called platform lock-in an inevitable part of the industry and said vendors should be more honest. Customers know that when they pick Salesforce.com, they are entering a "strategic relationship," he said. In that regard, it's no different from what's been happening for decades with Microsoft, Oracle, and IBM.

(So if IBM, Oracle and Microsoft were to jump from a bridge... I think we know where Salesforce would be.)

So Marc, which is it? Are you for data portability and cloud interoperability, or are you for vendor lock-in? It would seem that, for all intents and purposes, he favors the latter.

In a recent post, Zoho founder Sridhar Vembu paints a very worrisome picture. He goes on to say: "Salesforce has repeatedly tried to block customers from migrating to Zoho CRM, by telling them (falsely) that they cannot take their data out of Salesforce until their contract duration is over. We have emails from customers recounting this."

This seems pretty odd for a guy promoting interoperability. Marc's stance on cloud interoperability is almost as confusing as trying to follow the recent back and forth between Hugh MacLeod, Nick Carr and Tim O'Reilly in their wildly crazy epic on cloud monopolies. To sum things up, they went off on some kind of deluded tangent about the definition of the network effect while attempting to draw parallels to the effect that one user of a cloud or service has on the value of that cloud to other users, or something like that. (While I'm on my own tangent, the hype around cloud computing itself is the network effect in action.)

Back to the topic at hand, I think this quote from Marc Benioff is the most telling for Salesforce's interoperability plans: "Larry Ellison is my mentor. He is a tremendous leader in the industry. He still owns 5% of our company. Now that I've said that, he also studies the Art of War."

James Governor's response to Hugh MacLeod said it best: "Customers always vote with their feet, and they tend vote for something somewhat proprietary - see Salesforce APEX and iPhone apps for example. Experience always comes before open. Even supposed open standards dorks these days are rushing headlong into the walled garden of gorgeousness we like to call Apple Computers."

(Yup, myself included, writing this on my MacBook Pro running OS X.)

My suggestion for people considering Salesforce or any other cloud platform is to not only look at how easily you can get up and running, but also to consider how easily you can move or get your data back again. For cloud computing (aka internet-based computing) to ultimately become a viable mainstream computing option, I feel we have no choice but to embrace both data portability and cloud interoperability standards so that the entire industry may flourish.

And by the way, Marc Benioff, monopolies are so 1995.

Cloud Taxonomy - Cluster F*ck

I've been working on the cloud taxonomy section for my cloud guide. Adam, one of our developers here at Enomaly, recommended a new term.

Cluster Fuck - 1. Total cluster failure. 2. Your entire cloud has gone down. 3. Epic Internet failure.
--

Press: Cloud Computing Tools For Managing Amazon, Google Services

Some more press for Enomaly, and also a great overview of some cloud management tools (Elastra, Coghead, Heroku, Enomaly, and Hyperic CloudStatus).

See the whole article at InformationWeek.

Tuesday, November 4, 2008

Storm Clouds around Mosso's Outage

Looks like Rackspace's Mosso is the latest cloud provider to have an outage. This is hot on the heels of last week's FlexiScale problems; cloud outages seem to be a recurring problem.

According to Mosso, they are currently experiencing an outage in their San Antonio data center. I'll post more details shortly.

Overcast: Conversations on Cloud Computing

Geva Perry and James Urquhart have started a new podcast, titled "Overcast: Conversations on Cloud Computing". The show covers cloud computing, virtualization, application infrastructures and related topics, and they will be inviting a number of prominent guests in the coming weeks to inform their listeners about both the truly revolutionary and the simply evolutionary aspects of the cloud. They are also both skeptical enough to ask some tough questions about the reality of the cloud from time to time. Check it out at Overcast: Conversations on Cloud Computing.

eWeek Podcast: The Evolution of Cloud Computing

I recently spoke with eWeek; here is the podcast.


Monday, November 3, 2008

RightScale and Eucalyptus Join Forces

Great news for open source cloud computing users. RightScale and the Eucalyptus Project Team have announced a plan to join forces. The full press release follows, and at the end I've added a quick sketch of what talking to Eucalyptus through its EC2-compatible API looks like.

--

SANTA BARBARA, Calif. – November 4, 2008 – RightScale, Inc., the leader in cloud computing management, announced that it has partnered with the Eucalyptus Project Team at the University of California, Santa Barbara (UCSB) to foster cloud computing research, experimentation and adoption. Starting today, the RightScale cloud computing management platform is available for use with the Eucalyptus Public Cloud (EPC), a cluster of servers at UCSB for testing and evaluating the Eucalyptus cloud infrastructure. RightScale and the Eucalyptus Project Team are also collaborating to deliver a more robust private cloud for organizations whose testing requirements extend beyond those offered by the EPC. The RightScale-Eucalyptus partnership is aimed at making cloud computing simple and accessible to everyone from universities, students and entrepreneurs to enterprises evaluating large cloud deployments.

“We are honored to collaborate with the talented UCSB Eucalyptus Project Team to accelerate the advancement of cloud computing technology,” said Michael Crandell, CEO at RightScale. “Now anyone — from those just becoming familiar with cloud computing to organizations evaluating a massive application for deployment on Amazon’s EC2 — will be able to easily test their applications on the Eucalyptus EC2-compatible, open source cloud infrastructure using RightScale’s management platform.”

RightScale announced last month a new strategic initiative to become the first cloud management platform to deliver the integrated management of multiple cloud environments. RightScale’s support for Eucalyptus adds to its expanding list of supported clouds, which includes Amazon’s EC2, FlexiScale and GoGrid. RightScale is also working with Rackspace to assure compatibility with their cloud offerings. Eucalyptus has leveraged RightScale’s widely used and proven interface with Amazon’s EC2 to validate that the Eucalyptus infrastructure is fully compatible with EC2.

“With hundreds of thousands of instances deployed already, RightScale has emerged as the de facto cloud management platform,” said Rich Wolski, a professor in the Computer Science Department at UCSB and director of the Eucalyptus project. “Deploying scalable, reliable applications from scratch in a multi-cloud world is a time consuming and expensive task. Eucalyptus makes it possible to use RightScale’s top-quality management platform and expertise to make this easy and cost-effective for both a local IT infrastructure and many of the popular public clouds. The combination of RightScale and Eucalyptus is clearly an excellent way to achieve federation between public and private cloud platforms.”

Cloud computing infrastructures have the potential to radically change the way organizations deploy and manage their IT systems. A “cloud,” however, is essentially just a collection of bare bones virtual servers. Most organizations do not have the expertise or resources to deploy and manage cloud computing applications cost effectively and according to best practices. RightScale’s management platform provides rapid cloud deployment and a dynamically scalable infrastructure to meet varying traffic and loads, using minimal resources. The company’s core offerings include automated system management, pre-packaged and re-usable components and service expertise. With RightScale’s platform and services, any organization can easily tap the enormous power of cloud computing for a virtually infinite, cost-effective, pay-as-you-go IT infrastructure.

Eucalyptus, which stands for “Elastic Utility Computing Architecture for Linking Your Programs to Useful Systems,” is an open source software infrastructure for implementing cloud computing on clusters. The current interface to Eucalyptus is compatible with Amazon’s EC2 interface, but the infrastructure is designed to support multiple client-side interfaces. Eucalyptus can be downloaded for free and installed on any set of servers to create a private, EC2-compatible cloud computing system on those servers.

RightScale Availability on Eucalyptus

The RightScale cloud management platform is available today for use with Eucalyptus on the Eucalyptus Public Cloud (EPC). The EPC is open to anyone for evaluation purposes, with specific server number and time limitations. To access the EPC with RightScale, go to https://eucalyptus.rightscale.com/users/new.

RightScale and Eucalyptus will also be delivering a more robust, full-featured private cloud aimed at addressing the needs of organizations whose cloud testing requirements go beyond those offered by the EPC. This private cloud will offer the full array of Eucalyptus and RightScale functionality and be accessible by invitation only. For more information or to request access to the Eucalyptus-RightScale private cloud, please contact [email protected].

For more information about the Eucalyptus project at UCSB, please go to http://eucalyptus.cs.ucsb.edu
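Since Eucalyptus speaks the EC2 API, existing EC2 tooling can usually be pointed at a Eucalyptus front end just by swapping the endpoint. Here's a rough Python sketch using the boto library; the keys, host, port and path are placeholders for whatever your EPC or private cloud credentials specify, so treat it as an illustration rather than a copy-and-paste recipe.

from boto.ec2.connection import EC2Connection

# Placeholder credentials and endpoint; substitute the values from your own
# Eucalyptus credentials, not these.
conn = EC2Connection(
    aws_access_key_id="YOUR-EUCA-ACCESS-KEY",
    aws_secret_access_key="YOUR-EUCA-SECRET-KEY",
    is_secure=False,                              # many Eucalyptus front ends run plain HTTP
    host="your-eucalyptus-frontend.example.edu",  # placeholder front-end host
    port=8773,
    path="/services/Eucalyptus",
)

# From here on the calls look just like EC2:
for image in conn.get_all_images():
    print("%s %s" % (image.id, image.location))

That EC2 compatibility is exactly what lets RightScale reuse its existing Amazon interface against Eucalyptus.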

Seeking Contributors for a Cloud Article

I'm looking for some end user references for small businesses utilizing cloud computing (e.g., Google, Salesforce, Amazon, etc.). If selected, your company will be featured as part of a cloud computing article for a major business magazine. Ping me if you're interested in participating.

Defining Cloud Computing By Defining the Internet

From a high level, cloud computing can be defined as "Internet-based computing," so I figured it would be interesting to take a look at some definitions of the Internet as a basis for defining "cloud computing".

What is the internet?

  • A worldwide network of computers that can be accessed via the computer network. The Internet allows local computer users to find and ...
    www.lib.cwu.edu/research/help/cwuglos.html

  • A computer "network" is a group of computers connected together to exchange data. When several networks are interconnected we have an "internet". The global network of interconnected internets is "The Internet". ...
    www.st.com/stonline/press/news/glossary/i.htm

  • The Internet is a worldwide communications network originally developed by the US Department of Defense as a distributed system with no single point of failure. The Internet has seen an explosion in commercial use since the development of easy-to-use software for accessing the Internet.
    www.galassi.org/mark//mydocs/docbook-intro/g645.html

  • A cooperative system linking computer networks worldwide.
    www.micro2000.co.uk/network_glossary.htm

  • an international computer network connecting universities, research institutions, government agencies, and businesses
    www.cs.trinity.edu/About/The_Courses/cs301/html/intro/WWWSum_7.html

  • The wide collection of connected networks that all use the TCP/IP protocols.
    www.mantis.biz/glossary

  • A global network; an interconnection of large and small networks around the world
    www.netnw.net.uk/Jargon_Explained/jargon.htm

  • is a global network of networks using TCP/IP to communicate.
    www.consultantcommons.org/es/computer_networking_terms

  • The worldwide collection of computers, networks and gateways that use TCP/IP protocols to communicate with one another. ...
    193.188.128.125/~acc/Resources/glossarycommonterms.htm

  • A global network connecting millions of computers. More than 100 countries are linked into exchanges of data, news and opinions.
    www.trendmx.com/help/website-promotion-glossary.aspx

  • An international collection of computer networks that are networked together using the common TCP/IP protocol. There are thousands of computer networks comprising millions of computers connected through the Internet.
    aa.uncw.edu/ward/chm255/glossary.htm

  • A global network linking millions of computers for communications purposes. The Internet was developed in 1969 for the US military and gradually grew to include educational and research institutions. ...
    www.mysouthwest.com.au/Business/SmallBusinessIT/Glossary

  • An international conglomeration of interconnected computer networks. Begun in the late 1960s, it was developed in the 1970s to allow government and university researchers to share information. The Internet is not controlled by any single group or organization. ...
    www.gbdpro.com/glossary2.html

  • An international network of networks accessed via computers with compatible communication standards. More detailed definition of Internet.
    lib.colostate.edu/howto/gloss.html

  • A group of interconnected worldwide computers using an agreed on set of standards and protocols to request information from and send information to each other.
    www.peabody.jhu.edu/1437

  • The global computer network that connects independent networks.
    online.wsj.com/documents/glos_i_.htm

  • A global "network of networks" used to communicate electronically that is linked by a common set of protocols. These protocols allow computers from one network to communicate with a computer on another network.
    www.primode.com/glossary.html

  • The Internet is the largest Internet in the world. It is a three level hierarchy composed of backbone networks (eg ARPAnet, NSFNet, MILNET), mid ...
    www.sqatester.com/glossary/index.htm

  • Global, decentralized communications network connecting millions of computers, providing exchange of data, news and opinions. (International Standards Organization)
    osulibrary.oregonstate.edu/archives/handbook/definitions/

  • a worldwide network of computers that allows the "sharing" or "networking" of information at remote sites from other academic institutions, research institutes, private companies, government agencies, and individuals.
    www.uakron.edu/library/instruction/glossary.htm

  • a computer network consisting of a worldwide network of computer networks that use the TCP/IP network protocols to facilitate data transmission ...
    wordnet.princeton.edu/perl/webwn

  • The Internet is a worldwide, publicly accessible series of interconnected computer networks that transmit data by packet switching using the standard Internet Protocol (IP). ...
    en.wikipedia.org/wiki/Internet

  • Any set of computer networks that communicate using the Internet Protocol. (An intranet.); (considered incorrect by some — see the Usage notes under Internet) The Internet
    en.wiktionary.org/wiki/internet

You wonder why we're having so much trouble defining cloud computing? Because we're trying to define something that is indescribable and purely based on your point of view. If you're a 15-year-old, the internet is Facebook or your IM buddies. For others it may be a communication system, and for others it may be an entertainment system, and so on.

Keep it simple, keep it stupid, keep it straightforward. The computer is the internet.
