Thursday, September 9, 2010

Random Access Compute Capacity (RACC)

Forgive me, it's been a while since my last post. Between the latest addition to my family (little Finnegan) and some new products we have in the works at Enomaly, I haven't had much time to write.

One of the biggest issues I have when I hear people talking about developing data-intensive cloud applications is being stuck in a historical point of view. The consensus is that this is how we've always done it, so it must be done this way. The problem starts with the fact that many seem to look at cloud apps as an extension of how they've always developed apps in the past. A server is a server, an application a singular component connected to a finite set of resources, be it RAM, storage, network, I/O, or compute. The trouble with this development point of view is the concept of linear deployment and scale. The typical cloud development pattern we think of is building applications that scale horizontally to meet the potential and often unknown demands of a given environment, rather than one that focuses on the metrics of time and cost. Today I'm going to suggest a more global, holistic view of application development and deployment. A view that looks at global computing in much the same way you would treat memory on a server: as a series of random, transient components.

Before I begin, this post is two parts theoretical and one part practical. It's not meant to solve the problem so much as to suggest alternative ways to think about the problems facing increasingly global, data-centric, and time-sensitive applications.

When I think of cloud computing, I think of a large, seemingly infinite pool of computing resources available on demand, at any time, anywhere. I think of these resources as mostly untrusted, coming from a series of suppliers that may exist in countless regions around the world. Rather than focus on the limitations, I focus on the opportunities this new globalized computing world enables, a world where data and its transformation into usable information will mean the difference between those who succeed and those who fail. A world where time is just as important as user performance. I believe those who accomplish that transformation of useless data into usable information more efficiently will ultimately win over their competitors. Think Google vs. Yahoo.

For me it's all about time. When looking at hypervisors, I'm typically less interested in the raw performance of the VM than I am in the time it takes to provision a new VM. In a VM world, the time it takes to get a VM up and running is arguably just as important a metric as the performance of that VM after it's been provisioned. Yet for many in the virtualization world, this idea of provisioning time seems to be of little interest. And if you treat a VM like a server, sure, 5 or 10 minutes to provision a new server is fine if you intend to use it like a traditional server. But for those who need quick, on-demand access to computing capacity, 10 minutes to deploy a server that may only be used for 10 minutes is a huge overhead. If I intend to use that VM for 10 minutes, then the ability to deploy it in a matter of seconds becomes just as important as the performance of the VM while operational.
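To make that arithmetic concrete, here's a rough back-of-the-envelope sketch. The numbers are illustrative assumptions, not benchmarks of any particular hypervisor:

```python
# A minimal sketch of the time math above: for short-lived workloads,
# provisioning time dominates total cost. All numbers are assumed.

def effective_utilization(provision_seconds: float, useful_seconds: float) -> float:
    """Fraction of total wall-clock time spent doing useful work."""
    return useful_seconds / (provision_seconds + useful_seconds)

# A traditional server: 10 minutes to provision, used for 30 days.
print(effective_utilization(600, 30 * 24 * 3600))  # ~0.9998, provisioning is noise

# A 10-minute burst workload on a VM that takes 10 minutes to provision.
print(effective_utilization(600, 600))              # 0.5, half the time is overhead

# The same burst workload on a VM provisioned in 20 seconds.
print(effective_utilization(20, 600))               # ~0.97
```

The point of the sketch: once workloads shrink to minutes, shaving provisioning time buys you more than shaving a few percent off raw VM performance.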

I've started thinking about this need for quick, short-term computing resources as Random Access Compute Capacity. The idea of Random Access Compute Capacity is not unlike the concept of cloud bursting, but with a twist: the capacity itself can come from any number of sources, trusted (in-house) or from a global pool of random cloud providers. The strength of the concept is in treating any and all providers as a nameless, faceless, and possibly unsecured group of suppliers of raw, localized computing capability. By removing trust completely from the equation, you begin to view your application development methods differently. You see a provider as a means to accomplish something in smaller, anonymous, asynchronous pieces. (Asynchronous communication is a mediated form of communication in which the sender and receiver are not concurrently engaged in communication.) Rather than moving large macro data sets, you move micro pieces of data across a much larger federated pool of capacity providers (edge-based computing), transforming data closer to the end users of the information rather than centrally. Individually, any one provider doesn't pose much of a threat, because each piece of an application workload is useless outside the completed set.
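Here's a rough sketch of that scatter/gather pattern, not a production implementation. The provider names and the transform() call are hypothetical placeholders; in practice each would be an HTTP or queue endpoint exposed by whichever cloud happens to be cheapest or closest at that moment:

```python
# Split a data set into small anonymous chunks, scatter them across an
# arbitrary pool of untrusted providers, and reassemble the results locally.

import random
from concurrent.futures import ThreadPoolExecutor

PROVIDERS = ["provider-a", "provider-b", "provider-c"]  # assumed, untrusted pool

def transform(provider: str, chunk_id: int, chunk: list) -> tuple:
    # Stand-in for shipping a micro piece of data to a remote provider and
    # getting the transformed piece back. No single call reveals the whole set.
    return chunk_id, [x * 2 for x in chunk]

def scatter_gather(data: list, chunk_size: int = 100) -> list:
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    results = {}
    with ThreadPoolExecutor(max_workers=len(PROVIDERS) * 4) as pool:
        futures = [
            pool.submit(transform, random.choice(PROVIDERS), i, chunk)
            for i, chunk in enumerate(chunks)
        ]
        for future in futures:
            chunk_id, piece = future.result()
            results[chunk_id] = piece
    # Only the party holding every piece can reassemble the full result.
    return [x for i in sorted(results) for x in results[i]]

print(scatter_gather(list(range(1000)))[:5])
```

The design choice worth noting is that each provider only ever sees an anonymous fragment; the mapping from chunk to meaning stays with whoever performs the final gather.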

Does this all sound eerily familiar? Well, it should; it's basically grid computing. The difference between the grid computing of the past and today's cloud-based approach is one of publicly accessible capacity. The rise of regional cloud providers means there is now a much more extensive supply of compute capacity from just about every region of the world, something that was previously limited mostly to the academic realm. It will be interesting to see what new applications and approaches emerge from this new worldwide cloud of compute capacity.
