Friday, May 30, 2008

Trends: Grid Computing Vs Cloud Computing


A recent discussion on who invented cloud computing has raised some interesting details about the relative popularity of the terms "Grid Computing" (blue) vs. "Cloud Computing" (red). While grid appears to be in a general downward trend, cloud has risen from no searches or news activity to surpass grid in less than a year. Is this a sign of things to come? I guess only time will tell.

Tuesday, May 27, 2008

Elastic Computing - a brief history

The basis of my cloud computing analogy is that computing capacity, like electricity in the days of the early electrical grid, is becoming the fundamental basis of our information-based society. Therefore we need a universal system for the interchange of computing capacity, like the one we have for power. Well, it appears that no matter how much I complain, the only way to truly establish an analogy these days is to write an old-fashioned book (on paper, no less).

Since I am no fan of paper, or of books for that matter (sorry Amazon), I thought I'd take a brief moment to discuss how I came up with the idea for our Elastic Computing Platform, as well as the electrical analogy I've been telling people about for years. To do this, I need to take you back to how I first conceived the idea of "elastic computing" almost 5 years ago.

At the time I had just started Enomaly, and like all good bootstrapped entrepreneurs I had absolutely no idea what I wanted to do with the business. Previously, as a freelance open source developer, I knew there was a strong demand for implementing open source technologies within an enterprise context. But in fairness that wasn't where my passion was. My real passion was the development of new and exciting forms of emerging internet technology. So on a rainy drive back from visiting a business partner in Montreal one evening, Lars Forsberg (my other business partner) and I discussed various potential ideas. One of the earliest was the creation of a shared computing exchange where compute capacity could be traded, similar to a commodities exchange. This platform would give IT departments the ability to tap into virtual computing resources beyond the confines of their existing data center on a utility basis. We envisioned telecoms and hosting providers signing up for the service. The problem was there was no demand for such a service at the time, and worse yet there was no software to enable such an exchange of compute capacity.

We then took it upon ourselves to create a website around the idea, initially called "Distributed Potential" & "Distributed Exchange". We built the websites, and to our surprise there was zero interest in such a service.

Undaunted, we then spotted an actual customer need based on the open source projects we had been deploying at the time. For the most part these projects involved the development of open source content management systems and portal systems for large, Fortune 500 type organizations. These were usually built on Linux, Apache, MySQL, PHP and Python (LAMPP). One of the major challenges in those early LAMPP deployments was their inability to easily scale. The majority of PHP systems at that time (circa 2003) were built for low volume / lower usage sites. We then took the tools we had been developing as part of our customer deployments and released them as an open source project called Enomalism in late 2005.

Back then virtualization was just starting to catch on, and we immediately saw an opportunity to use the new Xen hypervisor as a mechanism to adjust the application environment on the fly, based on the "somewhat" real time demands placed on the overall environment. Typically this meant adjusting RAM, storage and networking, and creating replicated LVM snapshots wherever possible. We also utilized OpenLDAP and SSH based authentication, providing utility-style, metered access to the virtualized resource pool (now called a cloud). We also saw an opportunity in using a URI based web service instead of the more common SOAP approach of the time; this approach later became known as RESTful web services. We described this method of automatically scaling an application by tying an application server (Apache) directly to the hypervisor as "elastic computing", well before anyone else that I am aware of. This has more recently become known as "cloud computing" or "infrastructure as a service".
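To make the "elastic" idea a bit more concrete, here's a minimal sketch of the kind of control loop I'm describing: watch the application server's load and ask the hypervisor, over a simple URI-based web service, to resize the VM. The endpoint, JSON fields and thresholds below are hypothetical placeholders, not the actual Enomalism API.

```python
# Minimal sketch of an "elastic" control loop. The REST endpoint, JSON fields and
# thresholds are hypothetical placeholders, not the real Enomalism or EC2 API.
import json
import time
import urllib.request

VM_API = "http://hypervisor.example.com/api/vm/web01"  # hypothetical URI-based web service

def current_load():
    """Fetch a load metric for the application server (hypothetical metrics resource)."""
    with urllib.request.urlopen(VM_API + "/metrics") as resp:
        return json.load(resp)["load_avg"]

def resize_ram(ram_mb):
    """Ask the hypervisor to adjust the VM's RAM allocation on the fly."""
    req = urllib.request.Request(
        VM_API + "/memory",
        data=json.dumps({"ram_mb": ram_mb}).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    urllib.request.urlopen(req)

ram = 512
while True:
    load = current_load()
    if load > 2.0 and ram < 4096:      # under pressure: grow
        ram *= 2
        resize_ram(ram)
    elif load < 0.5 and ram > 512:     # idle: shrink back down
        ram //= 2
        resize_ram(ram)
    time.sleep(60)
```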

Although our first attempts at "elastic computing" were for the most part a financial disappointment, they did act as an opportunity generator, opening doors to projects and companies we might never have had access to otherwise. Among those opportunities was July of 2006, when Amazon came knocking. At the time they had been working on a top secret project described as a "grid utility". For the first time, our idea of tapping into resources beyond the confines of your data center was starting to take shape. Over the next few months we took the opportunity to learn as much as we could about the benefits of, as well as the hurdles to, creating this type of large scale compute utility. When Amazon EC2 finally launched into a private beta (one we could publicly speak about), we were amazed at the amount of interest our until-then unknown products suddenly received. Among the opportunities that presented themselves was the chance to work with Intel on several next generation virtualization and media projects (Intel remains our biggest customer). We had found our product, and it was in the cloud.

Google sets pricing & opens app hosting service to the public

This is just a quick note to let any of the lucky few Google App Engine users know that Google has announced pricing for their cloud offering as well as opened the beta to the public. I've been using Google App Engine for about a month and am very impressed so far. But I am a bit of a Python snob!

The pricing plan has developers still getting around 500MB of data and 5 million page views per month for free. After that, developers will pay 15 cents to 18 cents per GB of data stored monthly, according to Google.

Also, developers will be charged 10 cents to 12 cents per CPU core-hour consumed. On the bandwidth side, developers will pay 11 cents to 13 cents per month per GB of data transferred out of App Engine and 9 cents to 11 cents per month per GB transferred into App Engine, Google said. The pricing schedule is to be effective later this year.
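For a rough sense of what this means in practice, here's a quick back-of-the-envelope calculation using the midpoints of the ranges quoted above. The workload numbers are made up for illustration; the actual billing rules are of course Google's.

```python
# Back-of-the-envelope App Engine cost estimate using midpoints of the quoted ranges.
# The workload figures below are invented for illustration only.
storage_gb     = 2      # GB stored beyond the free 500 MB
cpu_core_hours = 100    # CPU core-hours consumed in the month
gb_out         = 10     # GB transferred out of App Engine
gb_in          = 5      # GB transferred into App Engine

cost = (storage_gb     * 0.165   # ~15-18 cents per GB stored per month
      + cpu_core_hours * 0.11    # ~10-12 cents per CPU core-hour
      + gb_out         * 0.12    # ~11-13 cents per GB out
      + gb_in          * 0.10)   # ~9-11 cents per GB in

print("Estimated monthly bill: $%.2f" % cost)   # about $13 for this made-up workload
```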

Also interesting to note: App Engine will now allow anyone to use the service, expanding beyond a list of 10,000 developers that had gradually grown to 60,000. "We've decided to open the floodgates." More than 150,000 developers have been on the product's waiting list.

If you haven't done so, you should check out http://code.google.com/appengine/

I'd also recommend checking out http://pypi.python.org/pypi (the Python Package Index, a repository of software for the Python programming language) for some ready-to-go Python apps for quick and easy deployment on App Engine.

Sunday, May 25, 2008

The Geopolitical Cloud

Over the last few months there has been a lot of discussion here in Canada on the topic of hosting data outside of the country (specifically in the US). The topic recently reached a breaking point when the Canadian government declared that government IT workers should not use network services that operate within United States borders. The reasoning is that Canadian data stored on those servers could conceivably be negatively impacted by the repercussions of the Patriot Act.

It's amazing it has taken this long for the Canadian government to come to this conclusion. The radical Patriot Act has been around for several years, and stories of servers being seized without justification almost as long. As a small Canadian company, we've been hosting our most critical and sensitive information either on servers located in our own "data closet" or with Canadian based ISPs since I started Enomaly 5 years ago, for exactly this reason. One of my biggest concerns about the Patriot Act is its lack of transparency as to when, where, what and how my data may be inspected and/or seized.

I've been a proponent of geotargeted cloud computing (the ability to geographically target compute resources) for a while. The problem is that, for the most part, there still really aren't any options outside of the US to do this. This type of geopolitical computing may very well be one of the best opportunities for cloud computing in the future (security being the #1 issue). Not only will cloud users be able to adjust their computing environment based on geographic demands, but also based on geopolitical ones. For example, if I have a sudden spike in traffic from China, but the great firewall of China places limitations on the data I can actually store there, then I warehouse that data in Singapore.
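As a thought experiment, geopolitical targeting could end up looking like ordinary policy code. The sketch below is purely illustrative; the region names and rules are invented, and as far as I know no provider exposes anything like this today.

```python
# Illustrative only: a toy placement policy mapping a user's jurisdiction to the
# regions where their data is allowed to live. Region names and rules are invented.
DATA_POLICY = {
    "CA": ["ca-east"],              # Canadian government data stays in Canada
    "CN": ["sg-1"],                 # Chinese traffic served, data warehoused in Singapore
    "EU": ["eu-west", "eu-central"],
    "*":  ["us-east", "ca-east"],   # default for everyone else
}

def pick_region(jurisdiction):
    """Return the first region permitted for this jurisdiction."""
    return DATA_POLICY.get(jurisdiction, DATA_POLICY["*"])[0]

print(pick_region("CN"))   # -> sg-1
print(pick_region("BR"))   # -> us-east
```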

At Enomaly we've been working with several large telecoms on the development of prototype cloud utilities for this very reason (among others). I am curious if anyone else is also working on, or has thought about, this type of use case?

It may be merely a matter of time before the large infrastructure providers (France Telecom, BT, China Telecom, etc.) realize this opportunity and we have a real geographic cloud targeting capability. In the meantime, I'll just make sure to use cloud resources for less sensitive data, or use Enomalism (blatant product placement) to create my own clouds on my own hardware. Then again, isn't the value of cloud computing not having to own or manage any hardware?

Saturday, May 24, 2008

Cloud Computing: So You Don’t Have to Stand Still (New York Times)

Great article in the New York Times today on cloud computing. The Sunday edition is one of the most read mainstream newspapers and should go a long way toward introducing the concept of cloud computing to a much broader audience. The article itself is fairly introductory, but I was fortunate enough to be included in it! So for that reason alone it's worth the read :)

Read the article here >
http://www.nytimes.com/2008/05/25/technology/25proto.html?_r=1&ref=business&oref=slogin

Friday, May 23, 2008

IRC #cloud-computing on freenode.org

I'm having trouble keeping up with all the email and instant messages I'm getting from cloud computing enthusiasts, so I've created a cloud computing channel on IRC.

I'll be hanging out in there most days I'm in the office and invite you to do the same.

Please feel free to log on to IRC at: irc.freenode.org #cloud-computing

My nick is chief_ruv

P.S.
I use the Colloquy IRC client for Mac & iPhone.
See > http://colloquy.info/

Down on the server farm

I've been at Mesh08 all week so I haven't had much of a chance to post.

Just read an interesting article in the Economist on cloud computing; I particularly agree with this bit:
In future the geography of the cloud is likely to get even more complex. “Virtualisation” technology already allows the software running on individual servers to be moved from one data centre to another, mainly for back-up reasons. One day soon, these “virtual machines” may migrate to wherever computing power is cheapest, or energy is greenest. Then computing will have become a true utility—and it will no longer be apt to talk of computing clouds, so much as of a computing atmosphere.

Read the Article here: http://www.economist.com/business/displaystory.cfm?story_id=11413148

Tuesday, May 20, 2008

MeshU - Introduction to Cloud Computing Slides

I'm back in Toronto today presenting at MeshU, and yes, you guessed it, my presentation is on cloud computing. Check out my slides below.


Monday, May 19, 2008

Microsoft "Cloud" braces for major customer shift

Great article today on Reuters, "Microsoft braces for major customer shift". The article focuses specifically on the effects cloud computing will have on Microsoft.
Microsoft sees tens of millions of corporate e-mail accounts moving to its data centers over the next five years, shifting to a business model that may thin profit margins but generate more revenue.
Email seems to be the canary in the coal mine. During the first major shift, email was the killer app; now it seems email may again be well positioned to be at the forefront as companies begin to discover the benefits of a scalable hosted model. Microsoft seems to think a hosted Exchange service is the key; companies like Mailtrust (acquired by Rackspace), who are already offering this type of service, are ideally suited to take advantage of this movement.
Chris Capossela, who manages Microsoft's Office products, said the company will see more and more companies abandon their own in-house computer systems and shift to "cloud computing", a less expensive alternative.
He goes on to say:
"A lot of companies are not ready to take their money out of the pillowcase and put it in the bank."
This is a great analogy. Cloud computing is all about trust: I trust my bank to keep my money safe, therefore I give it to them, because it's safer, more secure and easier to manage than keeping it under my bed.

I wasn't totally sold on this statement:
In a services business, the customer will pay Microsoft a larger fee, since Microsoft also runs and maintains all the hardware. But Microsoft's profit margins may not be "as high," Capossela said, even though revenue may be more consistent.
As cloud computing becomes more commoditized it may become harder and harder for Microsoft to command this type of premium. This may also be why Microsoft is so interested in the advertising model, as a kind of hedge against "free as a service".

I'm asked a lot lately whether there is any money in cloud infrastructure. There certainly seems to be, if Microsoft is any indication of the trend toward cloud infrastructures: "Microsoft said it continues to build up its infrastructure, adding roughly 10,000 powerful computer servers a month to its data centers."

That's a lot of hardware, and I think for the most part a typical centralized data center management system will not cut it. Most data center management systems were not designed to manage hundreds of thousands or even millions of physical and virtual machines. I think we're going to start seeing a lot of these large federated data centers looking and acting more like a botnet than your traditional hosting environment.

Read the whole article here:
http://www.reuters.com/article/businessNews/idUSN1643480920080519

Thursday, May 15, 2008

Green Computing in the cloud

I haven't had much of a chance to blog this week; I've been busy traveling, first to San Antonio and now to Halifax, Nova Scotia. I wanted to take a brief moment to share some of the more interesting ideas I've had the opportunity to discuss this week.

In particular "Green Cloud" computing; while meeting with a major hosting firm I had the opportunity to have an adjacent conversation, in it we discussed the huge potential of providing and consuming green electricity. In their case by installing an array of solar panels on the top of their newly built and massive data center complex. According to one of their chief data center architects this type of self created green power could account for as much as 1 megawatt of electricity, which would offset a significant portion of their current power needs. What's more, this could potentially be sold back to the grid during off peak times, like 3am, providing yet another potential avenue for revenue.

Of course this use of green power probably has less to do with a newfound environmental consciousness and more to do with the bottom line. But it did get me thinking: would you choose a "green cloud" over a lower cost but less environmentally friendly cloud? And if so, what kind of green tax would you be prepared to accept? Personally, I think I would be prepared to accept a 5% increase, so long as the SLAs and QoS are there as well.

I've also been thinking about creating a "Green Computing" API (the ability to adjust power levels on the fly, programmatically), but I'll leave that for another day.
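Just to sketch what I mean (and to be clear, none of this exists today), such an API might look something like the following:

```python
# Purely hypothetical sketch of a "Green Computing" API; none of these calls exist.
class GreenComputeAPI:
    """Imaginary interface for power-aware capacity management in a cloud."""

    def __init__(self, provider):
        self.provider = provider

    def energy_mix(self):
        """Report the provider's current energy sources, e.g. {'solar': 0.2, 'grid': 0.8}."""
        raise NotImplementedError

    def set_power_profile(self, host_id, profile):
        """Request a power profile for a host: 'performance', 'balanced' or 'low-power'."""
        raise NotImplementedError

    def schedule_deferrable(self, job, prefer="greenest"):
        """Place a non-urgent job where (or when) the power is greenest or cheapest."""
        raise NotImplementedError
```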

Sunday, May 11, 2008

Introducing the Universal Compute Unit (UcU) & Universal Compute Cycle (UCC)

Over the last few weeks there has been a great discussion on our cloud computing group about how to define cloud economies and standards. As the conversations have progressed it has become clear that there is a need for a "standard unit of measurement" for cloud computing, similar to the International System of Units, better known as the metric system. This unit of cloud capacity is needed in order to ensure a level playing field as the demand for and use of cloud computing becomes commoditized.

Before the creation of a standardized electrical grid, large scale sharing of electricity was nearly impossible. Cities and regions had their own power plants limited to their particular area, and the energy itself was not reliable (especially during peak times). Then came the "universal system", which established a standard by which electricity could be interchanged or shared using a common set of electrical standards. Generating stations and electrical loads using different frequencies could now be interconnected using this universal system.

Recently several companies have attempted to define cloud capacity; most notably, Amazon's Elastic Compute Cloud service uses the EC2 Compute Unit. Amazon states that they use a variety of measurements to provide each EC2 instance with a consistent and predictable amount of CPU capacity. The amount of CPU allocated to a particular instance is expressed in terms of EC2 Compute Units, and Amazon explains that they use several benchmarks and tests to manage the consistency and predictability of the performance of an EC2 Compute Unit. One EC2 Compute Unit provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor, which they claim is equivalent to an early-2006 1.7 GHz Xeon processor. Amazon makes no mention of how they arrive at their benchmark, and users of the EC2 system are given no insight into how they came to their conclusion. Currently there are no standards for cloud capacity, so there is no effective way for users to compare cloud providers in order to make the best decision for their application demands.

There have been attempts to do this type of benchmarking, in the grid and high performance computing space in particular, but those standards pose serious problems for non-scientific usage such as web applications. One of the more common methods has been the use of FLOPS (floating point operations per second). FLOPS is a measure of a computer's performance, especially in fields of scientific calculation that make heavy use of floating point math, but it doesn't do much for applications outside of the scientific realm precisely because of that dependence on floating point calculations. Measuring floating point operation speed, therefore, does not accurately predict how a processor will perform on just any problem. For ordinary (non-scientific) applications, integer operations (measured in MIPS) are far more representative and could form the basis for the Universal Compute Unit.

By basing the Universal Compute Unit on integer operations, it can form an (approximate) indicator of the likely performance of a given virtual machine within a given cloud such as Amazon EC2, or even a virtualized data center. One potential point of analysis may be a standard clock rate measured in hertz, derived by multiplying the instructions per cycle by the clock speed (measured in cycles per second). It can be more accurately defined within the context of both a virtual machine kernel and standard single and multicore processor types.

The Universal Compute Cycle (UCC) is the inverse of the Universal Compute Unit. The UCC would be used when direct access to the system and/or operating system in the cloud is not available; one such example is Google's App Engine. UCC could be based on cycles per instruction, the number of clock cycles consumed while an instruction executes. This allows an inverse calculation to be performed to determine the UcU value, as well as providing a secondary level of performance evaluation.
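To show how the two measures might relate, here is one possible formulation of the arithmetic, following the definitions above. The example figures are placeholders, not a proposed reference value or benchmark.

```python
# One possible formulation of the UcU / UCC arithmetic described above.
# The example figures are placeholders, not a proposed reference standard.

def ucu(instructions_per_cycle, clock_ghz, cores=1):
    """Universal Compute Unit: integer throughput as IPC x clock speed x cores."""
    return instructions_per_cycle * clock_ghz * cores

def ucu_from_ucc(cycles_per_instruction, clock_ghz, cores=1):
    """Recover the UcU figure from the inverse (UCC) view, i.e. cycles per instruction."""
    return (1.0 / cycles_per_instruction) * clock_ghz * cores

print(ucu(2.0, 1.2))           # 2.4 on this placeholder scale
print(ucu_from_ucc(0.5, 1.2))  # the same 2.4, derived from the inverse measurement
```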

I am the first to admit there are a variety of ways to solve this problem, and by no means am I claiming to have solved all the issues at hand. My goal at this point is to open a dialog and work toward this common goal.

To this end I propose the development of an open standard for cloud computing capacity called the Universal Compute Unit (UcU) and its inverse, the Universal Compute Cycle (UCC). An open standard unit of measurement (with benchmarking tools) will allow providers, enablers and consumers to easily, quickly and efficiently access auditable compute capacity, with the knowledge that 1 UcU is the same regardless of the cloud provider.

Next up, doing the math.

Friday, May 9, 2008

The Electric Grid & Cloud Computing Standards

We (Enomaly) are currently in the midst of starting development of several large scale cloud utilities for a number of hosting providers as well as a large telecom, so I have some first hand knowledge of the issues facing the broad adoption of cloud computing. I think there is a definite need for a set of standards for cloud computing, and I would put it in the context of the early electrical utilities and the development of the universal electrical grid infrastructure.

Before the creation of a standardized electrical grid, large scale sharing of electricity was nearly impossible. Cities and regions had their own power plants limited to their particular area, and the energy itself was not reliable (especially during peak times). Transmission of electric power at the same voltage as used by lighting and mechanical loads restricted the distance between generating plant and consumers. Different classes of loads, for example lighting, fixed motors, and traction (railway) systems, required different voltages and so used different generators and circuits. Kind of like the various flavors of cloud infrastructure we see today.

Then came the "universal system" which enabled a standard in which electricity could be interchanged and or shared using a common set of definitions. Generating stations and electrical loads using different frequencies could now be interconnected using this universal system. By utilizing uniform and distributed generating plants for every type of load, important economies of scale were achieved, lower overall capital investment was required, load factor on each plant was increased allowing for higher efficiency, allowing for a lower cost of energy to the consumer and increased overall use of electric power.

By allowing multiple generating plants to be interconnected over a wide area, electricity production costs were reduced and efficiency was vastly improved. The most efficient available plants could be used to supply the varying loads during the day. This relates particularly well to today, namely the need for hosted applications to easily tie into remote compute capacity during peak periods. Reliability for the end user was improved and capital investment cost was reduced, since stand-by generating capacity could be shared over many more customers and a wider geographic area. (A user who has a sudden spike in traffic from China can tap into an Asian compute cloud.) Remote and low-cost sources of energy, such as hydroelectric power or mine-mouth coal, could be exploited to lower energy production costs. In terms of "green computing", a user can access the most environmentally friendly sources of compute power as part of their computing policies. (Cloud A uses coal based power, Cloud B uses nuclear and Cloud C uses wind; therefore I choose Cloud C for the environment, or Cloud A for the cost.)

I see a lot of similarities between the creation of the early electricity standards and the need for a set of common standards for "cloud computing". By defining these standards, providers, enablers and consumers will be able to easily, quickly and efficiently access compute capacity without the need to continually re-architect their applications for every new cloud offering.

Thursday, May 8, 2008

Virtual Private Cloud (VPC)

According to Wikipedia (everyone's favorite fair and balanced source of information), "a virtual private server (VPS) is a method of partitioning a physical server computer into multiple servers such that each has the appearance and capabilities of running on its own dedicated machine."

Over the last few years the VPS has become one of the de facto methods in the web hosting world. But like all commonly used technology, the VPS has its limitations. The biggest problem is that it is still a single tenant environment: moving, securing, adapting and just plain scaling a VPS remains difficult. Not to worry, I have a solution to propose, complete with its own acronym. I call it the "Virtual Private Cloud" (VPC).

A VPC is a method of partitioning a public computing utility such as EC2 into quarantined virtual infrastructure. A VPC may encapsulate multiple local and remote resources so that they appear as a single homogeneous computing environment, bridging the ability to securely utilize remote resources as part of a seamless global compute infrastructure.

A core component of a VPC is a virtual private network (VPN) and/or a virtual LAN (VLAN) in which some of the links between nodes are encrypted and carried by virtual switches. Another reason for using a VPN within the context of a VPC is the ability to virtualize the network, giving it the particular characteristics and appearance that match the demands and requirements of a given application deployed in the cloud.

A VPC enables virtual-to-cloud (V2C) and physical-to-cloud (P2C) migrations, whereby an operating environment is seamlessly moved from a traditional hosting environment to a cloud with little or no interference to the operation of a given server instance or its performance.
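To make the idea a little more tangible, here's a rough sketch of what a VPC descriptor might look like if written down as data: local and remote nodes bridged over an encrypted overlay, with a placeholder for a V2C/P2C migration. The field names are illustrative, not any vendor's schema.

```python
# Rough, illustrative sketch of a VPC descriptor; field names are invented, not a real schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    name: str
    provider: str   # e.g. "local-dc" or "ec2"
    region: str

@dataclass
class VirtualPrivateCloud:
    name: str
    vpn_endpoint: str              # encrypted link tying local and remote nodes together
    vlan_id: int
    nodes: List[Node] = field(default_factory=list)

    def migrate(self, node: Node, target_provider: str) -> None:
        """Placeholder for a V2C/P2C migration: re-home a node onto another provider."""
        node.provider = target_provider

vpc = VirtualPrivateCloud("crm-prod", vpn_endpoint="vpn.example.com:1194", vlan_id=42)
vpc.nodes.append(Node("db01", "local-dc", "toronto"))
vpc.migrate(vpc.nodes[0], "ec2")   # "move" db01 onto the public cloud side of the VPC
print(vpc)
```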

As cloud computing becomes more common, I foresee a major opportunity for hosting companies looking to extend their dedicated and VPS hosting services by offering customers an easy migration path into the cloud.

Stupid as a Service

Every day I open my Netvibes to discover yet another "as a service" offering. Yesterday I was reading an article outlining how HP now has a methodology they describe as "Everything as a Service", which loosely translates to "We're too busy to come up with any more as-a-service slogans". I'm just waiting for Sun to come out with their "Stupid as a Service", or loosely translated, "we have the best engineers but are too stupid to figure out how to use them, as a service". Then there is IBM's "Buzzwords as a Service", or "if it sounds cool let's start marketing the term, even though we have no F^cking idea what problem it solves, as a service". Or how about Microsoft's "Licenses as a Service", umm, Live at your service (as long as you pay us, don't use it on more than one Windows Vista compliant computer for longer than 30 days, and agree that we're smarter than you).

I'm so sick (as a service) of all this aaS stuff. At last count, I think I've heard the following "as a service" offerings mentioned in the last week:
  • Software as a service
  • Infrastructure as a service
  • Platform as a service
  • Virtualization as a service
  • Desktop as a service
  • Everything as a service
  • Storage as a service
  • Hardware as a service
  • Microsoft as a service
  • Cloud as a service (I've been using this one)
So today I'm happy to announce I'm going to be offering you my loyal readers, as a service, the service of me, yup, you guessed it, ruv as a service.

Tuesday, May 6, 2008

ElasticDrive 0.4.2 Now Available for Download

Enomaly Inc. is proud to announce a major update to the ElasticDrive Remote Storage system. The new version of ElasticDrive has significant performance and storage improvements. Existing customers are encouraged to go to www.elasticdrive.com and download the latest version (free of charge). Due to strong customer demand we are now offering a lower priced 50GB version for $49.

New Improvements and Additions include:
  • DATACENTER TARGETING! You can now specify the EU datacenter for storage.
  • TTL specification, to set how long cache entries are stored. This allows you to maintain the READ cache for a longer time in order to improve performance.
  • ElasticDrive now has an ordered write engine which ensures that data is always written to disk in the correct order. This engine is the s3simple storage engine. If you want more performance, you can now RAID multiple elasticdrives in parallel (RAID0). The old s3 fast engine has been removed.
  • At the request of one of our customers, ElasticDrive now supports gzip encoding on your stripes, which gives a significant speed (and cost) improvement for a lot of use cases (see the sketch after this list).
  • ElasticDrive also has preliminary support for the tcp_window_scaling option, which increases window size for much faster transfers. We expect this one to grow legs as Amazon pushes window scaling into production. This option is currently in testing at Amazon, and is not recommended for production use.
  • SSL can be turned on and off at will with the no_ssl option. Good for higher speed and lower CPU usage transfers.
  • CPU usage is much, much lower.
  • ElasticDrive's new caching algorithm is simpler, smaller, and faster for data reads.
  • ElasticDrive now comes with the ed_bucket_mgr tool for listing your buckets, and deleting the ones you are no longer using. Useful for cleaning out file-systems that you are no longer using.
  • A lower priced 50GB version is now available for $49
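On the gzip point above, the win is easy to see with a few lines of generic Python. This is not ElasticDrive's actual engine or configuration, just an illustration of why compressing stripes cuts both transfer time and the per-GB storage bill.

```python
# Generic illustration of gzip-ing a storage stripe before upload; not ElasticDrive code.
import gzip
import os

stripe = os.urandom(1024) * 512 + b"\x00" * (4 << 20)   # fake 4.5 MB stripe with lots of redundancy
compressed = gzip.compress(stripe)

ratio = len(compressed) / len(stripe)
print("stripe: %.1f MB -> %.1f MB (%.0f%% of original size)"
      % (len(stripe) / 2**20, len(compressed) / 2**20, ratio * 100))

# With S3 charging per GB stored and transferred, the bill scales roughly with `ratio`
# for data that compresses this well.
```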

Enomalism Elastic Computing Platform: EC2 Module
We are also pleased to announce the latest release of the Enomalism Elastic Computing Platform, Enomalism v2.01, including improved support for Amazon EC2. The free Amazon EC2 module allows you to easily manage Amazon EC2 instances from the web based Enomalism interface. The EC2 instances blend in with existing virtual machines as if they were running locally, and the module also provides a web services interface for the automatic scaling of virtual applications, both locally and remotely. The free EC2 module is available for both Windows and Linux installations.

Download a free copy of Enomalism at: http://www.enomalism.com
Docs: http://trac.enomalism.com/enomalism/wiki/Ec2

Storm Clouds ahead ~ The battle for the cloud!

I just read a really interesting op-ed piece in the Wall Street Journal. In the article, Andy Kessler gives a very good overview of the upcoming "war for the web", as well as breaking down why the Microhoo! murder... (merger) was a bad idea.

Quote
"Why the rush to pay billions for Yahoo? The simple (and wrong) answer was that adding Yahoo's 20% Web search market share to Microsoft's 10% meant that it could compete against Google's 60% share. Technology changes too fast for that to make sense except on paper."
Another interesting, if not completely obvious, point he makes:
"At the moment, neither Google nor Microsoft, or anyone else, has nailed down cloud, edge, speed and platform. All the loosely coupled electronic devices in our pockets need to work together seamlessly with Facebook applications in the cloud."

No shit. You don't need to be our favorite cloud guru Nick Carr to figure that one out, but for some reason most enterprises can't seem to get their collective heads around the use of cloud technology.

He goes on to say:
"Programs run anywhere these days – on your desktop computer, on servers in data centers, on your iPod, cellphone, GPS, video game console, digital camera and on and on. It's not just about beating Google at search, it's about tying all these devices together in a new end-to-end computing framework."
Amen brotha. The cloud is about ubiquity in the user's computing experience. The cloud is about a seamless transition regardless of platform, device or application. Those who realize this will prosper, or at the very least do a little better than their competition.

Andy also does a fantastic job of breaking down the fundamental aspects of the cloud.

- The Cloud. The desktop computer isn't going away. But as bandwidth speeds increase, more and more computing can be done in the network of computers sitting in data centers - aka the "cloud."

- The Edge. The cloud is nothing without devices, browsers and users to feed it.

(Think Akamai, Limelight, the edge & CDN providers, as well as p2p.)

- Speed. Once you build the cloud, it's all about network operations.

(At the end of the day the cloud is about the user experience, regardless of application. People use Google 'cause it just works. You never hear, "Is Google down?")

- Platform. ...Having a fast cloud is nothing if you keep it closed. The trick is to open it up as a platform for every new business idea to run on, charging appropriate fees as necessary.

(That's why we give away our Enomalism platform. Personally I'd rather use Apache over IIS and I think our users agree. The same principles apply to the cloud as a platform.)

Read the whole article here: http://www.andykessler.com/andy_kessler/2008/05/wsj-the-war-for.html

Monday, May 5, 2008

Microsoft getting into the Cloud with Free Silverlight Streaming

A lot of people have been asking whether or not Microsoft is going to be getting into the cloud computing space. Well, it seems that they have, with a new Live.com platform-as-a-service offering called Silverlight Streaming.

Silverlight Streaming by Windows Live offers a free streaming and application hosting solution for delivering high-quality, cross-platform, cross-browser, media-enabled rich interactive applications (RIAs). With the ability to author content in Microsoft Expression Encoder and other third-party editing environments, Web designers maintain complete control of the user experience.

Some interesting features include:
  • Free streaming and application hosting for up to 10 gigabytes (GB)
  • Immediate high-performance, global-scale application delivery
I've never used Silverlight so I don't know if it's worth using or not, but with 10GB of space, it may be a nice location to save your files!

Check it out at: http://streaming.live.com/ and http://www.microsoft.com/silverlight/overview/streaming.aspx
---
David Crow was kind enough to send me some more MS cloud links:

* SQL Server Data Services - http://www.microsoft.com/sql/dataservices/default.mspx
** PhlufflyPhotos - http://dunnry.com/blog/PhluffyFotosSampleAvailable.aspx built on SSDS
* Live Mesh - http://mesh.com/
** See MaryJo Foley's review http://blogs.zdnet.com/microsoft/?p=1355
* Live including Contacts, Virtual Earth, LiveID, etc. - http://dev.live.com/
* Mashups in Popfly - http://popfly.ms
* Dynamics CRM Online - http://www.microsoft.com/presspass/press/2008/apr08/04-22DynamicsCRMOnlinePR.mspx

* Yet to be released RedDog - http://www.portal.itproportal.com/articles/2008/04/09/microsofts-red-dog-compete-google-apps-engine-and-amazons-ec2/ and http://blogs.zdnet.com/microsoft/?p=597

Rackspace Unveils Mosso CloudFS Storage Service

Rackspace Hosting's Mosso®, the company's cloud hosting division, today announced plans to launch a new Internet-based storage offering, CloudFS, that will let developers store any amount of data on the Web more easily and reliably than ever before. CloudFS joins Mosso's flagship service, The Hosting Cloud, and complements Rackspace's comprehensive suite of hosting products, including its Mailtrust Email Hosting solution and the company's traditional managed hosting services. CloudFS will be available to a limited number of customers in a free private beta for testing and refinement until Q3 2008, when it will open for public beta.

This is great news for Rackspace, who up to now have been a reluctant entrant in the cloud computing market. I hope this is just one of many more announcements you'll be hearing from Rackspace in the coming months as they ramp up their cloud offerings.

Sunday, May 4, 2008

Sun & Amazon to announce a partnership?

According to Om Malik of GigaOm, Sun & Amazon are about to announce a major deal / partnership between their two organizations, involving some kind of joint cloud offering and/or services.

In the article, Sun's Jonathan Schwartz is quoted as saying, "Amazon knocked the ball out of the park." For Sun, the opportunities are with mid-size and large corporations (banks, pharma and financial companies) that need to build their own clouds because they cannot use Amazon-type on-demand computing due to certain legal and regulatory limitations. He went on to say, "Then you'll be paying attention to the announcement we make tomorrow with what we'll be doing with Amazon."

For once I actually completely agree. One of the biggest issues I've had with Amazon Web Services has been trying to convince larger companies to use a "book seller's" on-demand infrastructure. For startups this hasn't been a tough sell; for larger companies, the pitch has been a bit more problematic. Pitching cloud computing within the context of a Rackspace, Sun, IBM, or even an existing data center might make cloud computing more attractive to the Fortune 500 crowd. Sun definitely has its foot firmly in the door to make this happen, and at the end of the day what Amazon has done is show, in a real world context, how to build a service oriented infrastructure that actually scales.

I'm just not sure if Sun is chasing another pot of gold at the end of yet another rainbow, or if this bit of vapor has some substance. Needless to say, Sun's history in this space has been partly cloudy at best.

Check out the article here: http://gigaom.com/2008/05/04/sun-amazon-web-services/

I'll keep you posted as I learn more.
---
Updates
---
Some more details, possibly involving Sun's ZFS file system, at InformationWeek.
--
According to CNET:
Sun said it has partnered with Amazon.com to release OpenSolaris as an on-demand service as part of Amazon.com's Elastic Compute Cloud (EC2). OpenSolaris will be available for operating system and storage services as part of the overall EC2 service, which starts at 10 cents per CPU-hour, the company said. Sun touts OpenSolaris as the most robust Unix-flavored operating system.

Saturday, May 3, 2008

Cloud Computing: Eyes on the Skies (Businessweek)

I'm catching up on my reading after being away, and I've discovered some more mainstream coverage of cloud computing in this week's BusinessWeek magazine. In the article titled "Cloud Computing: Eyes on the Skies" they attempt (badly, I might add) to introduce the concept of cloud computing.

Their definition of cloud computing is:
"Any situation in which computing is done in a remote location (out in the clouds), rather than on your desktop or portable device. You tap into that computing power over an Internet connection. "The cloud is a smart, complex, powerful computing system in the sky that people can just plug into," says Web browser pioneer Marc Andreessen.
Umm ok, that certainly doesn't do much to help clear things up.

Generally an interesting read, but nothing groundbreaking by any means. I do find it interesting how much interest there is in the concept of the cloud these days. Last week at Interop, both SaaS and cloud seemed to be the topics du jour. It is exciting to be involved with such a hot topic!

Check out the article at:
http://www.businessweek.com/magazine/content/08_18/b4082059989191.htm

What makes a good cloud based IDE?

I was recently asked a question about Coghead on our cloud computing group / mailing list.
Experiences with Coghead implementing an elementary CRM
> What are your experiences with this service? Pros vs. cons based on your experience, specifically deficiencies if any (speed? ease of making database backups? too many incidents?). Initially, what I see as positive about this solution is the flexibility to adapt progressively to the way the client works. Thank you
My response:
I don't have any direct experience with Coghead, but we do count them as one of our customers (ElasticDrive), so I'm biased. I will say that when looking at cloud based IDEs, the one thing that always concerns me is: what if I want to move away from a given platform, then what? One of the benefits of the Google App Engine is that it is based on an open Python framework, meaning I can take the SDK and run it in my own data center. This for me is a big plus. Ultimately a cloud IDE is about speed to market, reducing development complexity and easy application scaling. Coghead does a tremendous job at all these things.

What I think both Coghead and Bungee need to do is provide a Google style SDK, then they'll have the killer platform "as a service".

Xen Vs KVM, which will prevail long term?

I was recently asked to comment on an article comparing Xen vs KVM; the question was whether I think the two hypervisors can coexist or if only one will prevail long term.

KVM isn't really a hypervisor in the traditional sense. Because it's based on QEMU, it's more of a system emulator that uses a kernel patch/module to access the latest hardware virtualization features and improve performance. Long term, I think the better choice is the system that is easiest to install and use, and right now that's KVM. That said, I think Xen and KVM solve different problems, so there is room for both to flourish. Xen has more features, a larger community and better funding (Citrix). I would also note that KVM isn't officially supported by Red Hat or Novell SUSE, although it has become the de facto standard for Ubuntu.

Given Xen's complexities, I'm recommending KVM to our more novice users, and Xen for the experts.

Friday, May 2, 2008

Enomaly’s open source virtual platform moves VMs in the cloud

I'm back from Interop, had a great time. I'd tell you more but what happens in Vegas... well you know.

I'm also proud to tell you about an interesting article about, well, me! It gives a great overview of our cloud computing platform, Enomalism.

Check it out on ZDNet.

--
Another article about us on GridToday.
