Friday, September 10, 2010

Edge Based Cloud Spanning

As a long-time proponent of elastic computing, or the dynamic use of global computing resources, it's very interesting to see some of the new usage models emerging from the growing pool of regional cloud service providers around the globe. With this new worldwide cloud, the concept of low-cost edge-based computing is now starting to take shape: more specifically, the ability to run an application in such a way that its components straddle multiple localized cloud services (which could be any combination of internal/private and external/public clouds). And unlike cloud bursting, which refers strictly to expanding an application to an external cloud to handle spikes in demand, the idea of edge-based cloud computing, or 'cloud spanning', includes scenarios in which an application's components are continuously distributed across multiple clouds in near real time.

Actually, Wikipedia does a great job of outlining the rationale.

  1. Edge application services significantly decrease the data volume that must be moved, the consequent traffic, and the distance the data must go, thereby reducing transmission costs, shrinking latency, and improving quality of service (QoS).
  2. Edge computing eliminates, or at least de-emphasizes, the core computing environment, limiting or removing a major bottleneck and a potential point of failure.
  3. Security is also improved as encrypted data moves further in, toward the network core. As it approaches the enterprise, the data is checked as it passes through protected firewalls and other security points, where viruses, compromised data, and active hackers can be caught early on.
  4. Finally, the ability to "virtualize" (i.e., logically group CPU capabilities on an as-needed, real-time basis) extends scalability. The Edge computing market is generally based on a "charge for network services" model, and it could be argued that typical customers for Edge services are organizations desiring linear scale of business application performance to the growth of, e.g., a subscriber base.
Today, one of the biggest opportunities I see emerging out of the rising tide of regional cloud providers is the ability to leverage multiple cloud providers that exist across a so-called interconnected meta-cloud. This market has traditionally been limited to the realm of companies such as Akamai, who have spent hundreds of millions of dollars building out global server infrastructures. The problem with these infrastructures is that they are typically configured for one use case and are quite expensive. But with the emergence of regional cloud providers, the ability to connect several of these providers together is now a reality, greatly reducing the overall cost and essentially allowing anyone to build their own private CDN.
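To make the private-CDN idea concrete, here is a minimal sketch of stitching a handful of regional providers together: each client region is simply mapped to the edge endpoint of the nearest provider, falling back to the origin when no provider is nearby. Every provider name and hostname below is hypothetical, and a real deployment would do this at the DNS or load-balancer layer rather than in application code.

```python
# Toy private-CDN routing: map each client region to the edge cache of the
# nearest regional cloud provider. All names below are hypothetical.

EDGE_MAP = {
    "us-east": "cache.provider-a.example.net",
    "us-west": "cache.provider-b.example.net",
    "eu":      "cache.provider-c.example.net",
}
DEFAULT_ORIGIN = "origin.example.com"

def resolve_edge(client_region):
    """Return the edge cache to send this client to, falling back to the
    origin server when no regional provider covers the client."""
    return EDGE_MAP.get(client_region, DEFAULT_ORIGIN)

print(resolve_edge("eu"))        # cache.provider-c.example.net
print(resolve_edge("ap-south"))  # no nearby provider, so origin.example.com
```

Adding a new regional provider is then just another entry in the map, which is the cost advantage over a single pre-built global infrastructure.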

Also, the underlying virtualization or even operating system is less important than the application itself. But the question is: what is "the" application?

One such application ideally suited to this sort of edge-based deployment architecture is a web cache such as Squid or Varnish, as well as a selection of proprietary options. The interesting thing about web cache software in general is how it could be used in parallel across a series of random (untrusted) regional cloud providers. Moreover, these caches don't necessarily need to worry about the security, performance, or even SLA of a given provider; the location and connectivity are really all that matters. These local cloud services may be viewed as transient (see my post yesterday, Random Access Compute Capacity), meaning location is more important than uptime, and if a given provider is no longer available, well, there are potentially dozens of others nearby waiting to take up the slack.

It will be interesting to watch this space and see what kind of new geo-centric apps start to appear.
