Friday, May 9, 2008

The Electric Grid & Cloud Computing Standards

We (Enomaly) are currently in the midst of starting development on several large-scale cloud utilities for a number of hosting providers as well as a large telecom, so I have some first-hand knowledge of the issues facing the broad adoption of cloud computing. I think there is a definite need for a set of standards for cloud computing, and I would put it in the context of the early electrical utilities and the development of the universal electrical grid infrastructure.

Before the creation of a standardized electrical grid, large-scale sharing of electricity was nearly impossible. Cities and regions had their own power plants limited to their particular areas, and the energy itself was not reliable (especially during peak times). Transmitting electric power at the same voltage used by lighting and mechanical loads restricted the distance between generating plant and consumers. Different classes of loads, for example lighting, fixed motors, and traction (railway) systems, required different voltages and so used different generators and circuits. Kind of like the various flavors of cloud infrastructure we see today.

Then came the "universal system," a standard under which electricity could be interchanged and shared using a common set of definitions. Generating stations and electrical loads using different frequencies could now be interconnected. By using uniform, distributed generating plants for every type of load, important economies of scale were achieved: lower overall capital investment was required, and the load factor on each plant was increased, allowing for higher efficiency, a lower cost of energy to the consumer, and increased overall use of electric power.

By allowing multiple generating plants to be interconnected over a wide area, electricity production costs were reduced and efficiency was vastly improved. The most efficient available plants could be used to supply the varying loads during the day. This relates particularly well to today's need for hosted applications to tie easily into remote compute capacity during peak periods. Reliability for the end user was improved and capital investment cost was reduced, since stand-by generating capacity could be shared over many more customers and a wider geographic area. (A user who has a sudden spike in traffic from China can tap into an Asian compute cloud.) Remote and low-cost sources of energy, such as hydroelectric power or mine-mouth coal, could be exploited to lower energy production costs. In terms of "green computing," a user could access the most environmentally friendly sources of compute power as part of their computing policies. (Cloud A uses coal-based power, Cloud B uses nuclear, and Cloud C uses wind, therefore I choose Cloud C for the environment or Cloud A for the cost.)
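To make that policy idea a little more concrete, here is a minimal sketch of what policy-driven cloud selection might look like. Everything in it is hypothetical: the provider names, energy-source labels, prices, and the choose_cloud helper are illustrations, not any real cloud API.

```python
# Minimal sketch of policy-based cloud selection.
# All providers, prices, and energy sources below are hypothetical examples.
from dataclasses import dataclass


@dataclass
class Cloud:
    name: str
    energy_source: str    # e.g. "coal", "nuclear", "wind"
    cost_per_hour: float  # hypothetical price per compute-hour


CLOUDS = [
    Cloud("Cloud A", "coal", 0.08),
    Cloud("Cloud B", "nuclear", 0.10),
    Cloud("Cloud C", "wind", 0.12),
]


def choose_cloud(policy: str) -> Cloud:
    """Pick a cloud by a simple policy: 'green' prefers renewable energy, 'cost' prefers the cheapest."""
    if policy == "green":
        renewables = [c for c in CLOUDS if c.energy_source == "wind"]
        # Fall back to the full list if no renewable option exists.
        return min(renewables or CLOUDS, key=lambda c: c.cost_per_hour)
    return min(CLOUDS, key=lambda c: c.cost_per_hour)


print(choose_cloud("green").name)  # Cloud C, chosen for the environment
print(choose_cloud("cost").name)   # Cloud A, chosen for the cost
```

With a shared way of describing attributes like energy source and price, the same policy code could run against any provider rather than being rewritten for each one.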

I see a lot of similarities between the creation of the early electricity standards and the need for a set of common standards for "cloud computing". By defining these standards, providers, enablers, and consumers will be able to easily, quickly, and efficiently access compute capacity without the need to continually re-architect their applications for every new cloud offering.
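As a rough sketch of the "no re-architecting" idea, the snippet below shows an application coding against one provider-neutral interface and letting the provider vary behind it. The ComputeProvider contract, ExampleCloud class, and scale_out function are all made-up names for illustration, not an existing standard or library.

```python
# Minimal sketch of a provider-neutral compute interface (all names hypothetical).
from abc import ABC, abstractmethod


class ComputeProvider(ABC):
    """A common contract every cloud would implement, so applications code against one API."""

    @abstractmethod
    def provision(self, image: str, instances: int) -> list[str]:
        """Start the requested number of machines from an image and return their IDs."""

    @abstractmethod
    def release(self, instance_ids: list[str]) -> None:
        """Shut the machines down."""


class ExampleCloud(ComputeProvider):
    """A stand-in implementation; a real provider would call its own API here."""

    def provision(self, image: str, instances: int) -> list[str]:
        return [f"{image}-node-{i}" for i in range(instances)]

    def release(self, instance_ids: list[str]) -> None:
        print(f"released {len(instance_ids)} instances")


def scale_out(provider: ComputeProvider, load: int) -> list[str]:
    # The application only ever sees ComputeProvider, so swapping clouds
    # means swapping the object passed in, not rewriting this code.
    return provider.provision("web-app-image", max(1, load // 100))


ids = scale_out(ExampleCloud(), load=450)
print(ids)
```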
