Saturday, November 1, 2008

Forget Virtualization, Think Realization

Over the last few years there has been a major movement toward the use of "virtualization" for the deployment of the next generation of applications. For those of you who don't know about virtualization, it's a broad term that refers to the abstraction of computer resources. Basically, it's a way to present traditional physical resources as a series of virtual components, such as storage, networking, applications and other pieces of hardware, contained within a fully partitioned virtual package.

What virtualization has done for the modern application stack is free it from its previous physical limitations. These limitations were typically those of the hardware, and in some cases the software, that ran on each server. With virtualization, you gain the ability to easily move, manage and scale, as well as dream up wonderful new ways to deploy a true global infrastructure. But there has been one major flaw in the use of virtualization, and it lies in what I call the big question: "what's actually happening?"

What's actually happening? That is the broad question you need to answer. It's nondescript because the answer is always different; every deployment is unique. The reasons to use virtualization run the gamut from ROI to flexibility to scalability. So for this reason, I have a new theory and term to describe the big question. I'm calling it "Realization".

Think of realization as a combination of contingency planning, system monitoring, performance testing, infrastructure orchestration and reporting analytics.
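To make that a little more concrete, here's a minimal sketch (in Python, with hypothetical stand-in functions rather than any real API) of how those five pieces could fit together as one continuous feedback loop:

    import random
    import time

    def collect_metrics():
        # system monitoring: in real life, pull from your monitoring stack
        return {"cpu_percent": random.uniform(0, 100)}

    def run_load_test():
        # performance testing: a stand-in for a synthetic load run
        return {"max_users": random.randint(500_000, 1_500_000)}

    def needs_action(state, results):
        # contingency planning: the thresholds here are made up
        return state["cpu_percent"] > 80 or results["max_users"] < 1_000_000

    def reshape_infrastructure():
        # infrastructure orchestration: e.g. provision another VM
        print("provisioning additional virtual capacity")

    for _ in range(3):
        state, results = collect_metrics(), run_load_test()
        if needs_action(state, results):
            reshape_infrastructure()
        # reporting analytics: record what actually happened each pass
        print("report:", state, results)
        time.sleep(0.1)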

To help define realization we need to first look at an area of mathematics called probability and statistics (P&S). In P&S a realization, or the observed value, of a random variable is the value that is actually observed (what actually happened or is happening). The random variable itself should be thought of as the process by which the observation comes about. In a virtualized environment, for the first time we have the ability both to monitor the current state and to define a real-time realization of whether or not our goals are being achieved. Add the capability to adjust your infrastructure on the fly, and you now have a transparent way to instantly realize your infrastructure goals.
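As a rough sketch of that idea (the goal and function names here are hypothetical, and a random draw stands in for a real monitoring feed):

    import random

    GOAL_RESPONSE_MS = 250  # the goal we want to "realize" (assumed target)

    def observe_response_time():
        # the random variable: the process by which an observation comes
        # about; here a random draw stands in for a real monitoring query
        return random.gauss(200, 60)

    observed = observe_response_time()  # the realization: what actually happened
    if observed > GOAL_RESPONSE_MS:
        # hypothetical orchestration hook: adjust the infrastructure on the fly
        print(f"{observed:.0f} ms exceeds the goal, adding capacity")
    else:
        print(f"{observed:.0f} ms is within the goal, no action needed")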

Getting back to the math: in reality, a random variable cannot be an arbitrary function; it needs to satisfy another condition, namely that it is measurable. Elements of the sample space can be thought of as all the different possibilities that could happen, while a realization (an element of the state space) can be thought of as the value X actually attains when one of those possibilities does happen. The goal of this approach is to automatically adjust for and manage uncertainty. You can plan to scale to one million users by doing performance testing, contingency planning and real-time monitoring, but in real life you can almost never anticipate every possible contingency, or the status of application components that may not be simply true or false.
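One way to picture this is a small Monte Carlo sketch: treat peak demand as the random variable, each simulated scenario as an element of the sample space, and each observed number as a realization. The demand model and capacity figure below are purely assumptions for illustration:

    import random

    CAPACITY_USERS = 1_000_000  # what we planned and performance-tested for

    def simulate_peak_demand():
        # each draw is one possibility from the sample space; the number
        # returned is its realization (assumed log-normal demand model)
        return random.lognormvariate(13.2, 0.5)

    trials = 10_000
    overloads = sum(simulate_peak_demand() > CAPACITY_USERS for _ in range(trials))
    print(f"Estimated chance of exceeding planned capacity: {overloads / trials:.1%}")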

I'll post more on my realization theory in the coming weeks.