Monday, November 29, 2010

Examining Compute Commodities

Over the last several weeks I've been studying the commodities markets, mostly around the emergence of energy trading in the early 1990's. During my research a few things have become pretty clear to me, so I thought I'd outline some of the more interesting observations.

In looking for corollaries to treating compute resources as a commodity, the closest is probably energy creation and power plant financing. In the energy world, commodities trading desks are used to protect power companies from dramatic price shifts using so-called "hedges". These hedges are structured in a number of very interesting ways. First of all, most of the major banks act as providers of capital both for buyers and sellers of power plant assets and for the companies themselves. The majority of the major finance players in the energy world also run commodities desks, allowing them to operate in the commodities markets as well as in the more traditional loan and financing businesses. They can offer issuers (power providers) access to commodity markets where hedges can be created to protect an energy company from dramatic changes in the prices of coal, natural gas or oil. Many utilities that use coal to fire their plants, for example, are protected from price swings, and more importantly the banks trading these commodities gain greater influence and protection from price spikes by sitting on both sides of the deal - in essence dovetailing leveraged finance with commodities. In return, these banks are granted the ability not only to finance the development of the power plants, they also buy the futures contracts on the energy itself, which in turn gives the energy providers a guarantee of future revenue that can then be used as collateral for the development of their various energy assets, reducing the risk for all involved.

Let me explain this concept using the data center space as an example. Imagine being able to build a data center with a guarantee that a portion of your capacity will be bought at a certain price for an extended period of time, before you've even built the facility. This, in a nutshell, is the driver for the commoditization of computing resources. It has less to do with the actual computing resources than with the ability to provide enhanced insight into future cash flows while reducing the risk surrounding the uncertainty of future utilization levels. Data centers are the new power plants.

What SpotCloud does is provide a general structured framework for the creation of a compute spot market - an essential requirement before you can potentially buy future capacity in bulk. Now imagine for a moment a SpotCloud Price Index (SCPi) where buyers are given a normalized (weighted) average of prices for a given class of compute services (based on a key hardware metric) in a given region, during a given interval of time. This is where things start to get interesting: an index is a requirement for any futures market. Without this critical statistic, it would be very difficult to compare how compute prices, taken as a whole, differ between time periods or geographical locations. With this data, hedges and arbitrage become possible even though the quality of compute resources differs, because you can apply traditional finance methodologies to the average, allowing for differences among providers and answering the objection that not all compute resources are the same. Money itself is the great equalizer.
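
To make the index idea concrete, here's a minimal sketch of a capacity-weighted price index. The function and the numbers are hypothetical, not SpotCloud's actual methodology; it simply assumes a set of offers already filtered to one hardware class, region and time window.

def weighted_price_index(offers):
    """Capacity-weighted average price for a set of comparable offers.

    offers: list of (price_per_hour, capacity_units) tuples.
    """
    total_capacity = sum(capacity for _, capacity in offers)
    if total_capacity == 0:
        return None
    return sum(price * capacity for price, capacity in offers) / total_capacity

# Example: three providers in the same region and instance class
offers = [(0.040, 500), (0.050, 200), (0.035, 800)]   # ($/hr, units offered)
print(round(weighted_price_index(offers), 4))          # -> 0.0387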

I readily admit that selling any debt in the current market is tough, but given the current trends in computing and the ever increasing need for computing resources around the globe, it's a fair bet to say that the demand for these types of resources will continue for the foreseeable future.

Sunday, November 14, 2010

Pork Bellies, Bandwidth and Cloud Computing

-- Update ---
I'm happy to announce that we've launched SpotCloud, the first Cloud Capacity Clearinghouse and Marketplace. Check it out at www.spotcloud.com

--
In May 1999 Enron announced to the world that it was creating a new market for trading bandwidth. A Wired article from 2001 noted that it seemed to many that the then energy giant had found a new pot of virtual gold. Enron and a broader group of experienced traders believed it was only a matter of time before bandwidth (as well as other virtual resources) would be bought and sold in much the same way that commodities markets trade everything from petroleum to pork bellies. Now, more than 10 years later, this transition has yet to occur. In this post I will examine why the idea of trading bandwidth never took off and ask whether today might be the ideal time to try again.

Looking back at the previous attempts to create virtual commodities exchanges, including Enron's failed attempt, it now appears that it was indeed a great opportunity, just about a decade too early. In Enron's case, they had the right vision but suffered from the now obvious fact that it was born of overwhelming greed. In other words, the right idea but the wrong people at the wrong time.

Today, with the emergence of cloud computing, looking at past failures such as the bandwidth markets, as well as the successes of the energy markets of the 1990's, may serve as a case study in how we might go about creating a successful commodity compute marketplace.

One of the first problems in getting bandwidth trading off the ground was timing. The bursting of the dotcom bubble meant there was a significant disconnect between the oversupply of bandwidth and the demand for it. Basically there weren't enough companies who wanted to buy and too many selling, which made the market go in one direction: down. This discouraged both buyers and sellers from getting involved. The key to an active market and ecosystem is growth.

Secondly, as the Wired article points out, the telecom firms that owned the fiber optic networks didn't like the idea of selling their services as a commodity. Some made the case that "not all networks perform equally well." Basically there were no measurement standards, and therefore no easy way to tell the good from the bad. In addition, most telecom firms preferred to negotiate prices with customers rather than be stuck with a one-size-fits-all pricing scheme. In a sense they would rather lose on the excess capacity and make up the difference by charging more for the capacity that was actually used. The benefit of a commodity style approach, in contrast, is that you may charge less overall but make more money because you achieve higher utilization of your resources (volume).
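
A toy calculation makes the utilization argument clearer. All of the rates and utilization figures below are invented for illustration only:

capacity_hours = 10_000        # sellable capacity over some period

# Negotiated, premium pricing: higher rate, lower utilization
negotiated_rate, negotiated_util = 0.10, 0.40
# Commodity / spot pricing: lower rate, higher utilization
commodity_rate, commodity_util = 0.06, 0.85

negotiated_revenue = capacity_hours * negotiated_util * negotiated_rate   # 400.0
commodity_revenue  = capacity_hours * commodity_util  * commodity_rate    # 510.0
print(negotiated_revenue, commodity_revenue)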

Another major problem was that at that time broadband adoption was in its infancy. Most Internet users in 1999 were still on dial-up connections. Compounding things, with a few notable exceptions (Napster), the majority of web applications were static and lightweight. Mobile apps, streaming media, social web applications, the realtime web and cloud computing (Internet centric computing) had yet to be widely accepted. Fast forward to today and these applications have become the key drivers of a recent explosion of rich user generated content and the ever increasing need for realtime compute capacity to process it all.

Thanks in part to the increasing popularity of cloud computing, the idea of just-in-time compute capacity has helped lower some of the barriers that kept the previous bandwidth markets from flourishing. For many, the concepts of distributed batch processing and compute elasticity have become critical parts of modern business IT strategies. These kinds of flexible and elastic compute usage models are ideally suited to a spot market for commodity compute capacity (capacity quoted for immediate - spot - settlement of both payment and delivery). The announcement last month that Amazon Web Services would start offering excess EC2 capacity using a spot market approach has also helped legitimize the concept. The notion of selling your excess compute capacity now has a poster child (AWS), and this may lead to increased acceptance of selling excess compute resources using a commodities approach, in much the same way that Amazon EC2 has encouraged companies to use cloud-like strategies within their internal systems (private clouds). In a very real way, AWS is blazing a path for the broader industry.

I see tremendous opportunities for the trading of excess Cloud Computing resources or compute capacity and believe the most viable market example may be that of the energy marketplace. The energy market is similar to bandwidth and compute capacity in that the commodities are variable, transient and don't store well. The concept of selling excess capacity in cloud centric data centers may also make sense in that cloud providers must have significant additional capacity on hand just in case of demand spikes.

As I've said before, unused compute capacity = lost revenue. It's better to sell your excess than to let it disappear. For a lot of larger players such as telecoms and large content providers, this means un-utilized compute capacity is making you nothing. The notion of a public spot market may help address this problem.

A great example for a compute centric market may be that of the electricity wholesale markets. Like compute capacity, electricity is difficult to store because of its transient nature. It needs to be available on demand, and unpredictable demand spikes may occur. Using the energy trading market as a model provides an existing, proven context that may translate well into compute centric environments, not to mention there are a wide variety of trading platforms already built that may be easily modified to address the needs of a compute exchange market.

One of the more common energy trading models uses an automated central scheduler to balance supply and demand and calculate the market price. Another model conducts auctions on various time scales, i.e. auctions for yearly and daily provision of power, with an additional spot market that accommodates short-term demand spikes.
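
As a rough illustration of the central-scheduler model, here is a heavily simplified single-interval clearing sketch: sort the supply offers by price, walk up the supply curve until demand is met, and let the marginal offer set the uniform market price. Real energy (or compute) markets are of course far more involved; the offers and demand below are made up.

def clear_market(offers, demand):
    """offers: list of (price, quantity) asks; demand: total quantity needed."""
    dispatched, remaining = [], demand
    for price, quantity in sorted(offers):
        if remaining <= 0:
            break
        take = min(quantity, remaining)
        dispatched.append((price, take))
        remaining -= take
    clearing_price = dispatched[-1][0] if dispatched else None
    return clearing_price, dispatched

price, schedule = clear_market([(30, 50), (45, 40), (25, 60), (60, 100)], demand=130)
print(price)      # 45 -- the marginal (most expensive dispatched) offer sets the price
print(schedule)   # [(25, 60), (30, 50), (45, 20)]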

Before a widely accepted commodity compute trading market can form and begin trading, governments may also need to provide a common regulatory framework as well as standards and liability controls. Otherwise the market will be doomed to serve as a novelty or, worse yet, be limited to academic use only.

So what's next? First a trading organization must form, preferably in a transparent, not-for-profit context, so as to help avoid future Enron type scenarios. I'd also say we need the capital to develop such a trading platform, the will of the industry to help make this happen, and some standard processes for measuring the cloud capacity itself. So will this happen? Certainly, but the question of when is still up for debate.

Wednesday, November 10, 2010

SpotCloud Update (Free ECP SpotCloud Edition & Webinar)

It's been a crazy week at Enomaly after last week’s SpotCloud announcement. We'd like to take a brief moment to update you on some new and exciting opportunities that have emerged out of our discussions around the SpotCloud Marketplace (http://www.spotcloud.com).

Buyer / Seller Traction
Signups for both the buy and sell side of SpotCloud have been very strong, with hundreds registering for the service. One of the more interesting statistics is that the ratio of buyers to sellers is tracking at 5:1, which shows there's a lot of buy-side demand for the service. For those of you who have already registered, we are selectively adding new buyers and sellers while we work through the beta phase, so hold tight - we haven't forgotten about you. If you haven't registered yet, go ahead and add yourself today.

Announcing the Free Enomaly ECP SpotCloud Edition
We're happy to announce that we are now offering a Free Enomaly ECP SpotCloud Edition (shipping next week). This is a feature-limited version of the Enomaly Elastic Computing Platform specifically tailored for cloud providers, as well as private and public data centers, looking to sell excess capacity on the SpotCloud market. By simply installing the ECP SpotCloud edition on a few spare servers, providers can easily participate in the marketplace. No public cloud or payment systems are required - just a few servers and the free ECP SpotCloud IaaS software and you're ready to start making money. To get access to the ECP SpotCloud beta, simply register as a seller at http://www.spotcloud.com and check the box for the ECP SpotCloud edition.

Third Party IaaS / Cloud Platform Support 
Due to strong demand from non-ECP-based cloud providers and IaaS platforms, we have decided to open up our marketplace to any and all platforms. In the next week we will publish seller integration instructions for the SpotCloud marketplace so platforms powered by other technologies can easily participate. If you haven't already done so, please sign up to be notified when the docs are ready.

Joint Marketing and PR Opportunity
The response to our initial SpotCloud announcement has been overwhelming. A number of buyers and sellers have indicated they would like to participate in a broader announcement highlighting our partners. If you'd like to participate in our upcoming PR announcement, please feel free to get in touch with us.

SpotCloud Webinar and Live Demo  Friday, Nov 19, 2010 
Interested in seeing how SpotCloud works or just have some questions? Join the SpotCloud Webinar next Friday.  

Topic: SpotCloud Demo 
Time: 12:00 pm, Eastern Standard Time (New York, GMT-05:00) 

Register for the SpotCloud Webinar at http://ruv.net/a/cz

Friday, November 5, 2010

Group Buying For Cloud Computing Capacity

As many of you know, I've been to China many times this year. The market for cloud computing is booming over there. But what you may not know is that these trips to China have been a key part of the inspiration for the creation of SpotCloud. Some of that inspiration has come from the popular concept of group buying, known as tuangou in Chinese.

If you haven't heard of group buying, Wikipedia describes it as follows:
Group buying, which refers to social buying or collective buying as well, is the buying an offer which has been significantly reduced, due to the fact that it is only valid if enough buyers are found. Recently, group buying has been taken online in numerous forms, although group buys prior to 2009 usually referred to the grouping of industrial products for wholesale (especially in China).
Group buys are a variation of tuangou buying that also occurs in China, in which an item must be bought in a minimum quantity or dollar amount, otherwise, the seller will not allow the purchase. Since individuals typically do not need multiples of one item or do not have the resources to buy in bulk, group buys allow people to invite others to purchase in bulk jointly. These group buys often result in better prices for the individual buyers or ensure that a scarce or obscure item is available for sale.
So now let's think about how this approach could be applied to buying cloud infrastructure resources through a marketplace such as SpotCloud. A simple example could be buying Amazon EC2 Reserved Instances. With Reserved Instances you pay a one-time fee and in turn receive a significant discount on the hourly usage charge for that instance over a 1 to 3 year term. Using this Reserved Instance model can save you up to 49% over the cost of On-Demand EC2 instances. But there is a small problem: you need to commit to at least 1 - 3 years with fairly high utilization for the full savings to be realized.

Now apply group buying to EC2 Reserved Instances, say 100 or so capacity buyers who need cheap capacity for short periods of time. The brokerage (SpotCloud) essentially buys on behalf of the group and makes the capacity available at a greatly reduced cost for all market participants. The group buying approach also leads to interesting arbitrage models for further cost reduction by taking advantage of price differences between two or more cloud resource providers (Amazon's Spot Instances for example), as well as potentially negotiating wholesale discounts on behalf of the collective buying group from other large cloud capacity providers.
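
Here's a back-of-the-envelope sketch of the broker economics. None of these numbers are real EC2 prices; they're placeholders to show how spreading a reservation's upfront fee across many short-term buyers can undercut the on-demand rate for each of them.

HOURS_PER_YEAR = 8760

on_demand_rate   = 0.10    # $/hr, hypothetical
reserved_upfront = 200.00  # one-time fee for a 1-year reservation, hypothetical
reserved_rate    = 0.03    # discounted $/hr, hypothetical

# The broker keeps the reservation ~90% busy by pooling many short-term buyers
pooled_hours = HOURS_PER_YEAR * 0.90
broker_cost_per_hour = (reserved_upfront / pooled_hours) + reserved_rate

print(round(broker_cost_per_hour, 4))   # ~0.055/hr vs 0.10/hr on-demand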

Lots of interesting SpotCloud ideas. I'll keep posting as they come to me.

SpotCloud Launch Overview (Week 1)

So by now you've probably heard about the SpotCloud announcement. If you missed it, after more than a year of development we finally took the covers off our SpotCloud Capacity Clearinghouse and Marketplace for service providers. The feedback so far has been tremendous, with hundreds of registrations for the SpotCloud service, including a good portion of the major "non ECP" based cloud providers signing up. One particularly interesting stat is that buyers are registering for the service at a ratio of 5:1 over sellers, indicating significant interest from the demand / buy side - a key metric for a successful marketplace.

But what I really want to tell you about are some of the opportunities that have emerged in my discussions with both providers and buyers, most of which I hadn't even considered before this week's launch.

The first usage example is with managed service providers. With the recent downturn in the economy, a lot of dedicated and managed hosting providers are seeing large numbers of customers abandoning their dedicated servers for various reasons, causing a major influx of unused racked servers. One director of IT for a large hosting firm indicated that last week one of his long-time customers had defaulted on 100+ dedicated servers. He said that they had no plans to create a "public" cloud service, but thought that the ability to offer these servers through the SpotCloud marketplace would be an ideal way for his company to cover their "carrying costs" while they attempted to resell / lease these servers to other hosting customers at a much higher margin. Turns out this concept has legs; I had the very same conversation this week at CloudExpo in Santa Clara with no fewer than 20 different hosting companies, all telling me the same story: not interested in public cloud, very interested in offering private capacity through an opaque market. I used the analogy of renting out a new or used car before selling it.

Another recurring opportunity is what I have started describing as a "private cloud exchange", where a group of aligned companies or organizations share capacity amongst each other through a private version of the SpotCloud marketplace - a kind of capacity consortium. One particularly good example came from a major Asian telecom with many divisions around the globe, each with their own data centers and no effective way to share capacity between business units that run independently from one another. Another private exchange idea was focused on specific industry verticals such as finance, healthcare, entertainment and education. Think of a group of private clouds with specific regulatory controls sharing or selling capacity with each other.

What's next for SpotCloud? Going forward we plan to release SpotCloud integration instructions for working with non-ECP-based cloud providers and suppliers. This will allow SpotCloud to support the broadest group of capacity suppliers without having to directly use ECP. We've also had a lot of interest in a Free Enomaly ECP SpotCloud edition, which we hope to have available in the near future. As soon as these are ready, I'll post the links on this blog.

We are hoping to do some public announcements around actual buyers and sellers in the coming weeks, ping me if you'd like to be included in this announcement.

Do you have an interesting use case for SpotCloud? Please let me know.

Monday, November 1, 2010

Introducing SpotCloud, The First Clearinghouse & Marketplace for Cloud Computing Services

Enomaly Inc., the leading vendor of Infrastructure-as-a-Service (IaaS) cloud computing software, is proud to announce that it has launched the beta of SpotCloud (http://www.spotcloud.com) the first cloud computing clearinghouse & marketplace.

For cloud service providers, the SpotCloud Marketplace Platform provides an easy way to sell unused cloud capacity. Cloud providers can use SpotCloud to clear out unused capacity and sell computing inventory that would otherwise go unsold, enabling increased utilization and revenue, without undermining their standard pricing.

In order to avoid directly competing with regular retail sales of cloud services, SpotCloud uses an "opaque" sales model, similar to sites such as Hotwire.com. The SpotCloud service meters, tracks, and bills capacity buyers, and pays capacity sellers directly.

For cloud capacity buyers, SpotCloud bridges many disparate regional cloud providers, allowing buyers to find the best cloud providers at the best price. SpotCloud serves as the central place to discover and buy computing capacity, based on performance, cost and location parameters, through a simple and easy to use web dashboard and API.

By 2014, IDC predicts, sales of cloud computing products or services will generate almost $56 billion in annual revenues. Gartner analyst Daryl Plummer has stated that by 2015 “cloud service brokers will be the largest revenue growth opportunity”, going on to say that “20 percent of cloud services will be consumed via a broker.“

“The market driven approach of SpotCloud is a game changer for both buyers and providers of cloud computing resources.” said Reuven Cohen, Enomaly founder and CTO.  “For service providers, SpotCloud enables the most important feature – the ability to make more money. Each service provider can define prices for the excess capacity offered through the service and adjust these based on time and utilization. For consumers, SpotCloud provides a secure central location to buy from a global pool of providers at highly competitive prices.”

How it Works
[diagram]

How it Looks
[screenshot]
Sign up for our Beta at www.spotcloud.com

Tuesday, October 26, 2010

Enomaly, Autonomic Resources, Carpathia, and Dell Win US Federal IaaS Cloud Contract

Enomaly Inc., the leading vendor of Infrastructure-as-a-Service (IaaS) cloud computing platform software, announced today that it has been selected by the US Government under the first Government-wide contract for cloud computing. Under the GSA’s blanket purchase agreement (BPA # GS00Q11AEA003), Enomaly ECP High Assurance Edition (HAE) software will help power the Autonomic Resources ARC-P Public Cloud (IaaS) services platform for U.S. Government customers.

Enomaly was chosen to be included in the Autonomic Resources ARC-P Public cloud stack that will provide US Government customers the benefits of on-demand computing with no compromise in security. ARC-P integrates Enomaly with the ARC-P Dell hardware stack to provide trusted virtual machine assurance via Trusted Execution Technology. Additionally, Enomaly's integration with Secured by SPYRUS technology for multi-factor authentication sets a new “bar” for cloud security. ARC-P’s multi-factor solution utilizes the already approved US Cybercom USB flash drives, following Federal standards to levels not yet available on any commercial platform.

“We’re delighted to have been selected for this opportunity,” said Dr. Richard Reiner, CEO of Enomaly. “As the only third-generation IaaS platform in the industry, Enomaly ECP is ideally suited to meet the needs of the most demanding Government users for mature, massively scalable, highly reliable cloud services. The unique capabilities of our High Assurance Edition, delivered from FISMA certified data centers with standard multi-factor authentication access, will ensure that Government users can benefit from access to trusted and secure solutions for all their cloud computing needs.”

“Enomaly helped ARC-P address many of the IaaS demands of the Federal government. In developing ARC-P, our focus was to differentiate our offering based on a secure and flexible on-demand cloud,” said John Keese, President of Autonomic Resources.

About Enomaly Inc.
Enomaly, based in Toronto, Canada, is the leader in empowering telecom and IDC operators to deliver the benefits of Cloud Computing to their customers.  Enomaly’s Elastic Computing Platform (Enomaly ECP) has often been described as the world's first true IaaS platform.

Today, Enomaly ECP 3 benefits from our 6+ years of cloud computing leadership to empower telecom and IDC operators in North America, the UK, Europe, and Asia to deliver some of the world’s most advanced cloud computing services. Enomaly ECP 3 is available in the core Service Provider Edition as well as the security-enhanced High Assurance Edition, which offers a unique set of hardware-based security mechanisms to enable the application of cloud computing for higher-assurance environments.

About Autonomic Resources
Autonomic Resources ( www.autonomicresources.com ) is a service integration firm and cloud provider serving the U.S. federal government. Core capabilities include the implementation of strategic technologies related to long term IT services as well as strategic services like data center automation, cloud computing, open source adoption, IA, NextGen networking, BI and software development services. Headquartered in Cary, N.C., Autonomic is a certified 8(a) SDB - Search GSA Schedule #GS-35F-0587R on http://www.gsaadvantage.gov .


Monday, October 11, 2010

The Cloud Computing Opportunity by the Numbers -[update]-

How big is the opportunity for cloud computing? A question asked at pretty well every IT conference these days. Whatever the number, it's a big one. Let's break down the opportunity by the numbers available today.

By 2011 Merrill Lynch says the cloud computing market will reach $160 billion.

The number of physical servers in the World today: 50 million.

By 2013, approximately 60 percent of server workloads will be virtualized

By 2013 10 percent of the total number of physical servers sold will be virtualized with an average of 10 VM's per physical server sold.

At 10 VM's per physical host, that means about 80-100 million virtual machines are being created per year, or roughly 273,972 per day and 11,416 per hour.

50 percent of the 8 million servers sold every year end up in data centers, according to a BusinessWeek report

The data centers of the dot-com era consumed 1 or 2 megawatts. Today, data center facilities requiring 20 megawatts are common - 10 times as much as a decade ago.

Google currently controls 2% of all servers, or about 1 million servers, and says it plans to have upwards of 10 million servers (10^7 machines) in the next 10 years.

98% of the market is controlled by everyone else.

Hosting / Data center providers by top 5 regions around the world: 33,157

Top 5 break down
USA: 23,656
Canada: 2,740
United Kingdom: 2,660
Germany: 2,371
Netherlands: 1,730.

According to IDC, the market for private enterprise "cloud servers will grow from an $8.4 billion opportunity in 2010, representing over 600,000 units, to a $12.6 billion market in 2014, with over 1.3 million units."

Market opportunity based purely on server count: $160 billion divided by 50 million servers = $3,200 per server.

The amount of digital information increased by 73 percent in 2008 to an estimated 487 billion gigabytes, according to IDC.

World Population 2009: 6,767,805,208
Internet Users 2000: 360,985,492
Internet Users 2009: 1,802,330,457
Overall Internet User Growth: 399.3%
Fastest Growth Markets (Last 10 years) - Africa +1,809.8%, Middle East, +1,675%, Latin America +934.5%, Asia +568.8%

Slowest Growth Markets - North America +140.1%

Cloud value by world population: $23.64 per person

Cloud value by Global Internet population: $88.77 per person
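
For anyone who wants to check or update these back-of-the-envelope figures as the estimates change, the arithmetic is simply the market estimate divided by each denominator:

market_size = 160e9             # Merrill Lynch 2011 cloud market estimate ($)
servers     = 50e6              # physical servers worldwide
world_pop   = 6_767_805_208     # world population, 2009
net_users   = 1_802_330_457     # Internet users, 2009

print(round(market_size / servers, 2))    # 3200.0  ($ per server)
print(round(market_size / world_pop, 2))  # 23.64   ($ per person)
print(round(market_size / net_users, 2))  # 88.77   ($ per Internet user)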

-- Update --
Netcraft Finds 365,000 Web Sites on EC2

June 4th 2010
IBM says The Cloud cuts IT labor costs by up to 50%, improves capital utilization by 75%

July 1st 2010
IDC estimates that sales of public cloud services will grow at a 25 percent annual clip. The annual growth rate for typical IT projects, conversely, is 5 percent.

July 26 - 2010
SaaS Revenue to Grow Five Times Faster Than Traditional Packaged Software Through 2014, IDC Finds
  • By 2012, IDC expects that less than 15% of net-new software firms coming to market will ship a packaged product (on CD). By 2014, about 34% of all new business software purchases will be consumed via SaaS, and SaaS delivery will constitute about 14.5% of worldwide software spending across all primary markets.

  • By 2012, nearly 85% of net-new software firms coming to market will be built around SaaS service composition and delivery; by 2014, about 65% of new products from established ISVs will be delivered as SaaS services.

  • SaaS-derived revenue will account for nearly 26% of net new growth in the software market in 2014.

  • Traditional packaged software and perpetual license revenue are in decline and IDC predicts that a software industry shift toward subscription models will result in a nearly $7 billion decline in worldwide license revenue in 2010. As a result, a permanent change in software licensing regime will occur.

  • SaaS segment mix will shift toward infrastructure and application development and deployment/PaaS, and away from U.S. dominance. IDC expects that by 2014, applications will account for just over half of market revenue. This shift will happen in part as a result of increasing IT cloud spending by enterprise IT groups and commercial cloud services providers (cloud SPs) relative to end-user spending

October 2010

Internet Keeps Growing! Traffic up 62% in 2010 (13.2 Tbps of new Internet capacity)

    -- Conclusions --

    Based on these numbers, a few things are clear. First, server virtualization has lowered the capital expenditure required for deploying applications, but operational costs have gone up by significantly more than the capital savings, making the operational long tail the costliest part of running servers.

    Although Google controls 2 percent of the global supply of servers, the remaining 98 percent is where the real opportunities are, both in private enterprise data centers and in the 40,000+ public hosting companies.

    This year 80-100 million virtual machines will be created, and traditional approaches to managing infrastructure will break under that load. Infrastructure automation is becoming a central part of any modern data center. Providing infrastructure as a service will not be a nice-to-have but a requirement. Hosters, enterprises and small businesses will need to start running existing servers in a cloud context or face inefficiencies that may limit potential growth.

    Surging demand for data and information creation will force a migration to both public and private clouds, especially in emerging markets such as Africa and Latin America.
    Lastly, there is a tonne of money to be made.

    Thursday, September 30, 2010

    The Intel Cloud Builder Series Webinar Featuring Enomaly (Oct 14th)


    Join this discussion of Enomaly’s aptly named Elastic Computing Platform (ECP), which brings nearly infinite scalability and provisioning flexibility to cloud deployments. Learn how Enomaly’s ECP can support very large clouds with many thousands of servers, and virtually eliminate the upper limit on the number of virtual machines that can be provisioned. Cloud experts from Enomaly and Intel will present a logical blueprint for a typical configuration, discuss the unique attributes of ECP, and answer questions at the end of the presentation. Register today to hold your spot.
    Thursday, October 14th, 2010 2:00 p.m. ET - Register Today!

    Tuesday, September 21, 2010

    Infrastructure Elasticity versus Scalability

    Over the last few years I've gotten a lot of pressure to call our Elastic Computing Platform a Cloud Computing Platform. Some may wonder why I've resisted making this somewhat semantic change in the branding of our platform. Yes, our customers are deploying cloud infrastructures, but how they're doing it is what this post is about.

    For me, 'the cloud' represents the Internet, or more specifically it's a way of looking at application development and deployment from a network centric point of view. It's about building applications that treat or use the Internet as the operating system. Enabling such an environment takes a new way of thinking about your underlying infrastructure: a unified (API driven) infrastructure that is distributed, global, fault tolerant and, most importantly, scalable and elastic.

    Some may think that being scalable is the same as being elastic, but I'm not convinced. Being scalable means being able to grow to the demands of an application and its user base. This says nothing about what happens after that scale has been achieved. Being 'elastic' means being able to adapt (up, out and down again), the ability to adjust readily to different realtime conditions. The metric that matters in an elastic computing environment isn't just how many users I can support, but how fast I can adapt to support those users. Speed and performance are key. How fast can I provision a new VM, how well does that VM perform once deployed, and how much will it cost me are what matter most. The problem with scalability is the question of what happens afterward. Elasticity is all about what is happening now.
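
    To put the distinction in code form, here's a minimal sketch of an elastic control loop: capacity follows demand in both directions, and provisioning time is tracked as a first-class metric. The function names and the idea of passing in provision/destroy callbacks are purely illustrative.

    import time

    def reconcile(current_vms, demand_vms, provision_vm, destroy_vm):
        """Grow or shrink the VM pool toward current demand, timing each provision."""
        provision_times = []
        while len(current_vms) < demand_vms:
            start = time.time()
            current_vms.append(provision_vm())
            provision_times.append(time.time() - start)   # how fast can I adapt?
        while len(current_vms) > demand_vms:
            destroy_vm(current_vms.pop())                  # elastic: scale back down too
        return provision_times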

    Friday, September 17, 2010

    Enomaly ECP 3.4 Service Provider Edition Released

    Enomaly is proud to announce the latest 3.4 release of ECP Service Provider Edition. This is a major milestone, and brings many new capabilities and enhancements:

    Full support for VMware ESXi 4.0 servers
    A huge improvement in virtual machine I/O performance
    Multiple Mix and Match Storage Pools

    New Features
    1. VMware ESXi 4.0 is now supported as a first class hypervisor alongside KVM and XEN.
    2. Greatly improved VM performance with the addition of an advanced machine flag for VirtIO for disks (VIRTBLK). This allows pass-through paravirtual disk drivers.
    3. Cluster monitoring and notification module.
    4. Administrators can now restrict account/API access to a specific IP address or range.
    5. Administrative user interface is now fully localizable.
    6. Optional agent to manager heartbeat has been added for high latency networks.
    7. New administrative interface for Disk Templates.
    8. Users can now specify advanced VM tunables. Advanced options on VMs used to create packages will also be carried over into the package. The following options are now tunable:
       - PAE - Allows 32-bit VMs to address more than 4 GB of memory
       - APIC - Adding a virtual APIC can improve latency, performance, and timer accuracy, especially on multiprocessor virtual machines.
       - ACPI - ACPI power management
       - VIRTIO - A Linux standard for network and disk devices, enabling cooperation with the hypervisor. This enables VMs to get high performance network operations, and gives most of the performance benefits of paravirtualization.
    9. Ability to connect ECP to multiple storage pools (mix and match storage tiers).
    10. Added user editable "notes" field to the VM details dialog. This is intended to be a location where users can document details about a virtual machine.
    11. API enhancements to allow VMs to be queried by IP or MAC address and package ID.
    Enhancements
    1. ISOs are now managed by the object-based permissions system.
    2. Improved consolidated product documentation.
    3. VM boot device can now be selected.
    4. Usability improvements to the customer user interface. The VM tab is now first in the customer UI, and the Home tab has been relabelled Logs.
    5. Billing API has been improved to include additional information regarding packages and resource usage.
    6. Improvements to the i18n Internationalization system to allow multiple versions of a given language.
    7. Enhanced VLAN management. VLANs can now be assigned to alternative NICs on the host.
    8. Updated Korean, Japanese and Swedish translations
    9. Added installer support for CentOS 5.5.
    10. Improvements to the Virtual Infrastructure machine listing table.
    If you haven't already done so, I invite you to give Enomaly ECP a try; please get in touch for a free evaluation edition.

    Friday, September 10, 2010

    Someone Please Build Me a Laptop Based Blade Server

    When it comes to building cloud based infrastructure, it's not the performance of any one single server that matters so much as the collective ability to deploy many parallel VM's quickly. Lately there seems to be a growing trend for service providers to use low end commodity servers rather than higher end kit. So building upon this concept, I have a crazy idea: why not create a blade chassis that can use laptops, or at least the relevant stuff inside them, as the basis for a super cheap, power efficient lapserver?

    Think I'm crazy? Well, I'm not the only one thinking about this. Take for example SeaMicro and their SM10000 high density, low power server. It uses ¼ the power and takes ¼ the space of today's best in class volume server. It is designed to replace 40 1U servers and integrates 512 independent ultra low power Atom processors, top of rack switching, load balancing, and server management in a single 10U system at about $150k. Do the math: it basically replaces 40 traditional servers at nearly 1/8th the price per computing unit. There is one problem though - the Atom processor, which supports Intel VT, is 32 bit, greatly limiting its usefulness.

    That's where the laptop comes in. Take your lowest end laptop (non Atom) with the Intel i3 processor. A quick search shows that these laptops sell for about $350. Keep in mind these laptops come with a keyboard and screen, which add to the cost. Boil it down to just the components you need and you're looking at about $150 retail for the parts (dual core i3 2.26GHz, 4GB RAM, 500GB disk, network card, and 6 cell battery).

    So now consider putting this into a small form factor "laptop blade chassis", let's say a 2U or 4U size. My estimate is you could probably fit about 5 of these lapblades in a 2U or about 10 in a 4U. Doing the math on a 4U (10 lapblades, 20 cores, 40GB RAM, 5,000GB of onboard storage and 10 batteries for about $1,500 - $2,000 net), those same 20 cores would cost you about $25k-30k with traditional servers, so the ROI for a service provider would be measured in days. Not to mention that with ten 6 cell batteries you've got an onboard UPS, and laptops are inherently low power.
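
    The cost-per-core math, using the rough parts prices above (all of them estimates, not quotes):

    lapblades, cores_per_blade, cost_per_blade = 10, 2, 175     # ~$150-200 in parts each
    lapblade_cost_per_core = (lapblades * cost_per_blade) / (lapblades * cores_per_blade)

    traditional_cost_per_core = 27_500 / 20     # midpoint of the $25k-30k figure, 20 cores

    print(lapblade_cost_per_core, traditional_cost_per_core)    # 87.5 vs 1375.0 per core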

    Crazy idea? Someone should do it, I'd buy one.

    --Update--
    A few people have pointed me to SGI's CloudRack and HP's BladePC

    Edge Based Cloud Spanning

    As a long time proponent of elastic computing, or the dynamic use of global computing resources, it's very interesting to see some of the new usage models emerging from the growing pool of regional cloud service providers around the globe. With this new world wide cloud, the concept of low cost edge based computing is now starting to take shape - more specifically, the ability to run an application in a way that its components straddle multiple localized cloud services (which could be any combination of internal/private and external/public clouds). And unlike cloud bursting, which refers strictly to expanding an application to an external cloud to handle spikes in demand, the idea of edge based cloud computing or 'cloud spanning' includes scenarios in which an application's components are continuously distributed across multiple clouds in near realtime.

    Actually, Wikipedia does a great job of outlining the rationale.

    1. Edge application services significantly decrease the data volume that must be moved, the consequent traffic, and the distance the data must go, thereby reducing transmission costs, shrinking latency, and improving quality of service (QoS).
    2. Edge computing eliminates, or at least de-emphasizes, the core computing environment, limiting or removing a major bottleneck and a potential point of failure.
    3. Security is also improved as encrypted data moves further in, toward the network core. As it approaches the enterprise, the data is checked as it passes through protected firewalls and other security points, where viruses, compromised data, and active hackers can be caught early on.
    4. Finally, the ability to "virtualize" (i.e., logically group CPU capabilities on an as-needed, real-time basis) extends scalability. The Edge computing market is generally based on a "charge for network services" model, and it could be argued that typical customers for Edge services are organizations desiring linear scale of business application performance to the growth of, e.g., a subscriber base.
    Today one of the biggest opportunities I see emerging out of the rising tide of regional cloud providers is the ability to leverage multiple cloud providers which exist across a so-called inter-connected meta-cloud. This market has traditionally been limited to the realm of companies such as Akamai, who have spent hundreds of millions of dollars building out global server infrastructures. The problem with these infrastructures is that they are typically configured for one use case and are quite expensive. But with the emerging regional cloud providers, the ability to connect several of these providers together is now a reality, greatly reducing the overall cost and essentially allowing anyone to build their own private CDN.

    Also the underlying virtualization or even operating system is less important than the application itself. But the question is what is "the" application?

    One application ideally suited to this sort of edge based deployment architecture is a web cache such as Squid or Varnish, as well as a selection of proprietary options. The interesting thing about web cache software in general is how it could be used in parallel across a series of random (untrusted) regional cloud providers. Moreover, these caches don't necessarily need to worry about the security, performance or even SLA of a given provider; the location and connectivity are really all that matter. These local cloud services may be viewed as transient (see my post yesterday, Random Access Compute Capacity) - meaning location is more important than uptime, and if a given provider is no longer available, well, there are potentially dozens of others nearby waiting to take up the slack.
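
    As a trivial sketch of that "transient provider" behavior, the client below simply walks a proximity-ordered list of cache endpoints and uses the first one that answers. The hostnames are placeholders, not real providers, and a real deployment would obviously do smarter health checking and geo-routing.

    import socket

    REGIONAL_CACHES = [
        ("cache-tokyo.example.net", 80),
        ("cache-frankfurt.example.net", 80),
        ("cache-saopaulo.example.net", 80),
    ]

    def first_reachable(endpoints, timeout=0.5):
        """Return the first endpoint (assumed ordered by proximity) that accepts a connection."""
        for host, port in endpoints:
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return host, port
            except OSError:
                continue   # provider gone? another one nearby takes up the slack
        return None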

    It will be interesting to watch this space and see what kind of new geo-centric apps start to appear.

    Thursday, September 9, 2010

    Random Access Compute Capacity (RACC)

    Forgive me, it's been a while since my last post. Between the latest addition to my family (little Finnegan) and some new products we have in the works at Enomaly, I haven't had much time to write.

    One of the biggest issues I have when I hear people talking about developing data intensive cloud applications is being stuck in a historical point of view. The consensus is that this is how we've always done it, so it must be done this way. The problem starts with the fact that many seem to look at cloud apps as an extension of how they've always developed apps in the past: a server is a server, an application a singular component connected to a finite set of resources, be it RAM, storage, network, I/O or compute. The trouble with this development point of view is its concept of linear deployment and scale. The typical cloud development pattern we think of is building applications that scale horizontally to meet the potential, and often unknown, demands of a given environment, rather than one that focuses on the metrics of time and cost. Today I'm going to suggest a more global / holistic view of application development and deployment - a view that looks at global computing in much the same way you would treat memory on a server: as a series of random, transient components.

    Before I begin, this post is two parts theoretical and one part practical. It's not meant to solve the problem so much as to suggest alternative ways to think about the problems facing increasingly global, data centric and time sensitive applications.

    When I think of cloud computing, I think of a large, seemingly infinite pool of computing resources available on demand, at anytime, anywhere. I think of these resources as mostly untrusted, from a series of suppliers that may exist in countless regions around the world. Rather than focus on the limitations, I focus on the opportunities this new globalized computing world enables - a world where data, and its transformation into usable information, will mean the difference between those who succeed and those who fail; a world where time is just as important as user performance. I believe those who can transform useless data into usable information more efficiently will ultimately win over their competitors. Think Google vs Yahoo.

    For me it's all about time. When looking at hypervisors, I'm typically less interested in the raw performance of the VM than in the time it takes to provision a new VM. In a VM world, the time it takes to get a VM up and running is arguably just as important a metric as the performance of that VM after it's been provisioned. Yet for many in the virtualization world this idea of provisioning time seems to be of little interest. And if you treat a VM like a server, sure, 5 or 10 minutes to provision a new server is fine if you intend to use it like a traditional server. But for those who need quick, indefinite access to computing capacity, 10 minutes to deploy a server that may only be used for 10 minutes is a huge overhead. If I intend to use that VM for 10 minutes, then the ability to deploy it in a matter of seconds becomes just as important as the performance of the VM while operational.

    I've started thinking about this need for quick, short term computing resources as Random Access Compute Capacity. The idea of Random Access Capacity is not unlike the concept of cloud bursting, but with a twist: the capacity itself can come from any number of sources, trusted (in house) or from a global pool of random cloud providers. The strength of the concept is in treating any and all providers as a nameless, faceless and possibly unsecured group of suppliers of raw, localized computing capability. By removing trust completely from the equation, you begin to view your application development methods differently. You see a provider as a means to accomplish something in smaller, anonymous, asynchronous pieces. (Asynchronous communication is a mediated form of communication in which the sender and receiver are not concurrently engaged in communication.) Rather than moving large macro data sets, you move micro pieces of data across a much larger federated pool of capacity providers (edge based computing), transforming data closer to the end users of the information rather than centrally. Any one provider on its own doesn't pose much of a threat, because each application workload is useless unless part of the completed set.
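
    A toy illustration of that idea: split a job into small anonymous pieces and scatter them across a randomly chosen set of capacity providers, so that no single provider ever holds a meaningful portion of the whole. The provider names are placeholders and the dispatch is only simulated here.

    import random

    def scatter(chunks, providers):
        """Assign each micro-piece of work to a randomly chosen provider."""
        return {i: (random.choice(providers), chunk) for i, chunk in enumerate(chunks)}

    work = ["piece-%d" % i for i in range(12)]                             # micro pieces, not the whole data set
    providers = ["provider-a", "provider-b", "provider-c", "provider-d"]   # untrusted, interchangeable
    print(scatter(work, providers))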

    Does this all sound eerily familiar? Well it should, it's basically just grid computing. The difference between grid computing of the past and today's cloud based approach is one of publicly accessible capacity. The rise of regional cloud providers means that there is today a much more extensive supply of compute capacity from just about every region of the world. Something that previously was limited to mostly the academic realms. It will be interesting to see what new applications and approaches emerge from this new world wide cloud of compute capacity.

    Wednesday, August 11, 2010

    Saving Ryan's Private Cloud

    I am all for debates. Some might even say I've become a master at it. But this whole private cloud versus public cloud debate has to stop. We're fighting over the semantics of the English language. Really, who cares? If you want to run your own data center like Google or Amazon and call it a private cloud -- so what? If you want to outsource your infrastructure to Google or Amazon and call it a public cloud -- even better. If you want to focus on running your business rather than a data center, good on ya. But please, stop debating the semantics of a metaphor already. It's like a computing black hole where nothing smart or interesting can ever escape. But that's an analogy, which makes me an idiom ;-)







    Monday, July 12, 2010

    Should You Build or Buy Your Cloud?

    It's the age old question in IT: do you build it yourself or buy it off the shelf? Lately I seem to be hearing it again and again. It seems that for some reason some IT guys have gotten it into their heads that if they adopt a cloud infrastructure platform, either hosted or in house, they're going to lose their jobs. So the only choice is to build it. I think the reasoning is that if you build it, you will control it, and your company will have no choice but to keep you around. Unfortunately the answer isn't so cut and dried.

    The rationale for building it yourself has been around as long as IT. There have always been various reasons for it, from "there weren't any systems that could deliver what we needed" to "we're different, we're smarter, we're bigger".. you get the point.

    The real question you need to ask yourself is where your strengths as an organization lie: in developing software, or in selling some other core business? For most it's the latter. Building your own cloud software is fraught with risk. One such example is a major hosting firm who spent 16 months building their own cloud IaaS platform, only to realize that the assumptions they had made about the potential cloud market had changed and their platform couldn't deliver the technical requirements of their new customer reality. More to the point, their platform wasn't what their targeted customers wanted to buy. Compounding their problem, the platform they built themselves didn't actually work - period. The key systems engineer left mid-way through the project, forcing the company to find a replacement and inducing a major delay in development. Additionally, poor documentation meant those replacements had no practical way to continue what had been started. Needless to say, several million dollars later the project did launch, only to be promptly replaced by a turnkey IaaS platform.

    Then there is the question of service differentiation, to which I say: if you choose an extensible cloud platform, you're able to differentiate faster than you could if you built it yourself. Business is about adapting to market conditions. Building it yourself means longer development cycles and potentially less adaptability. Customizing an existing platform, one that provides you a template for success and best practices, is inherently less risky and less time consuming. The real question you should be asking is: can I deploy this cloud platform in a way that allows my business to be unique? If the answer is no, then find a platform that can. If you still can't find one, then build it yourself. But be prepared that you should now consider yourself a software developer.

    As the founder of an IaaS platform vendor, I freely admit I am biased toward buying a platform over building it yourself. My reasoning is simple: our business is building IaaS platforms for service providers. Is it yours? If you answered no, my comment is that unless you plan to get into the software business, building it yourself will only serve to add unneeded risk, uncertainty and potential failure to your IT operations. Something I think we can all agree you should avoid.

    Sunday, July 11, 2010

    Embracing Your Niche in The Cloud

    With all the talk and hype surrounding cloud computing, many seem to be missing a major factor -- both in terms of the growth potential and the current opportunity for cloud computing products and services. Although the tech media and analysts love to tell you the cloud is everything, including a $160 billion plus opportunity, like it or not, cloud computing is still an emerging niche market and exists only as part of a much larger market segment. And I'm here to tell you that the sooner you start embracing this fact, the sooner you will start to capitalize on the opportunity.

    Regardless of your industry or market segment, every single product or service that is sold today can be defined by its market niche. Of course there are products aimed at wider demographic audiences, otherwise known as mainstream niches, but those markets tend to take years or even decades to mature. As an example, think of the broader web hosting industry compared to the shared hosting, VPS, CDN, or managed / dedicated server markets. The companies that arguably have had the most success in each of these markets focused on winning in their particular niches: Rackspace within the managed hosting sector, or Akamai in the CDN space. Both can be considered part of the broader hosting market, but both have significant differentiation and, more importantly, success within their particular niches. Both have also been able to charge significantly more than the previous generations of services within the broader hosting market.

    Also, being first in a market doesn't necessarily mean you're going to own it. When looking at niche markets, it's interesting to point out that narrower demographics (PaaS, SaaS or IaaS in contrast to VPS hosting) lead to elevated prices due to the price elasticity of demand (which is a good thing). In other words, a niche market is a highly specialized market that allows you to survive among the competition from numerous much larger and more broadly focused competitors.

    As a further example, think of Amazon (the book version of course) versus your local corner book store. The economics that define success for that corner book store are significantly different from what Amazon would consider a success, both from a profit margin as well as a volume point of view. The corner book store wins by effectively differentiating itself from its much larger competitor. Its books are more expensive to buy, but you can also buy a coffee, browse actual physical books, or even have a conversation with a human. This differentiation attracts a unique customer profile and caters to an alternative market segment - a segment that is willing to spend more for something it could have gotten cheaper at Amazon. These differentiators allow the corner book store to compete and even win business (within its niche) from that much larger competitor, even though the price is higher. The same concept applies to cloud computing.

    I suppose what I am saying is you will never win if your goal is to be Amazon. You will win by not being Amazon. By being different. By embracing your niche.

    Friday, July 9, 2010

    Do Customers Really Care About Cloud API's?

    Interesting post by Ellen Rubin of CloudSwitch asking whether Amazon is the official cloud standard. Her post was inspired by a claim that Amazon’s API should be the basis for an industry standard, something I've long been against for the simple reason that choice and innovation are good for business. I agree with Ellen that AWS has made huge contributions to advancing cloud computing, and I also agree that "their API is highly proven and widely used, their cloud is highly scalable, and they have by far the biggest traction of any cloud". But the question I ask is: do cloud customers really care about the API so much as the applications and service levels applied higher up the stack?

    At Enomaly we currently have customers launching clouds around the globe, each of which has their own feature requests, ranging from various storage approaches to any number of unique technical requirements. Out of all the requests we hear on a daily basis, the Amazon API is almost never requested. Those who do request it are typically in the government or academic spaces, and when it is requested it's typically part of a broader RFP, where it's mostly a check box on a laundry list of requirements. When pushed, the answer is typically: it's not important. So I ask, why the fascination with the AWS API as a sales pitch when it appears neither service providers nor their end customers really care? More to the point, why aren't there any other major cloud providers who support the format other than Amazon? The VMware API or even the Enomaly API are more broadly deployed if you count the number of unique public cloud service providers as your metric.

    An API isn't important from a sales point of view because you're not selling an API. You're selling the applications that sit above the API, and mostly those applications don't really care what's underneath. As a cloud service provider you're selling a value proposition, and unfortunately an API provides little inherent value other than potentially some reduction in development time if you decide to leave. The really hard part is moving Amazon machine images away from EC2 in a consistent way, something Amazon, through their AMI format, have made practically impossible. [Paravirt, really?]

    I'm not saying APIs aren't important for cloud computing, just that with the emergence of meta cloud APIs such as libcloud, jclouds and others, programming against any one single cloud service provider's API is no longer even a requirement. So my question to those who would have you believe the AWS API is important is again: why? Is it because API support is the only value you have to offer? Or is there something I'm missing?
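
    To illustrate the meta-API point, here's a minimal sketch using Apache Libcloud's Python compute interface; the credentials are placeholders and the exact provider constants may vary by Libcloud version, but the idea is that the same few calls work against EC2, Rackspace or any other supported provider:

        from libcloud.compute.types import Provider
        from libcloud.compute.providers import get_driver

        # The same driver interface is used regardless of the underlying provider.
        EC2 = get_driver(Provider.EC2)
        Rackspace = get_driver(Provider.RACKSPACE)

        clouds = [
            EC2('my-aws-access-key', 'my-aws-secret-key'),
            Rackspace('my-rackspace-user', 'my-rackspace-api-key'),
        ]

        # List running nodes on every cloud through one common abstraction,
        # without writing provider-specific API calls.
        for conn in clouds:
            for node in conn.list_nodes():
                print(conn.name, node.name, node.state)

    The design point is that application code programs against the abstraction layer rather than any single provider's native API, which is exactly why standardizing on the AWS API buys customers so little.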

    Wednesday, June 30, 2010

    The Dichotomy of AJAX and RESTful API's

    Had an interesting conversation the other day with Adam, our lead interface developer at Enomaly. He's been our key AJAX and API developer on the Enomaly ECP platform for several years. During our random afternoon chat he basically said that AJAX is quite possibly the worst way to consume a RESTful API. He pointed out that the purpose of a RESTful approach to API development and implementation lies in its similarities to HTTP and, more generally, URIs/URLs, each of which is easily viewable both programmatically and visually. The problem is that AJAX is kind of the opposite. Most of the things that make the web great, such as URLs, hyperlinks and bookmarking, are not easily done or seen in an AJAX application. All the benefits of a RESTful architecture are hidden by the AJAX itself, making development take longer and making it more difficult to debug and often harder to scale.

    The conversation certainly got me thinking, not that I think we're going to abandon our AJAX interface (it could be worse, it could be Flash based). I'm wondering what others think about the apparent dichotomy between AJAX and a RESTful API?
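
    To make Adam's point a bit more concrete, here's a rough sketch; the endpoint and resource names are hypothetical (not the actual ECP API), but they show what gets hidden. A RESTful resource is just a URL you can fetch, bookmark, link to or paste into a browser, whereas the same request buried inside an AJAX interface never surfaces to the user:

        import requests

        # A RESTful resource is addressable: this URL can be bookmarked, linked to,
        # inspected in a browser, or fetched from any script or tool.
        url = 'https://cloud.example.com/api/machines/42'   # hypothetical endpoint
        machine = requests.get(url).json()
        print(machine['name'], machine['state'])

        # In an AJAX interface the equivalent call happens behind the scenes in
        # JavaScript; the browser's address bar never changes, so the resource's
        # URL -- and with it linking, bookmarking and simple debugging -- stays hidden.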

    Wednesday, June 23, 2010

    Enomaly Customer CentriLogic Launches U.S. Cloud Service

    Long time Enomaly customer CentriLogic has just announced they've launched another regional cloud. Following on the success of their Canadian-focused cloud service, this one is in the United States. They've also announced a major new customer as part of the launch: Cookie Jar Entertainment, one of the world's leading creators, producers and marketers of animated and live-action programming, is the first customer to "go live" on the newly launched U.S.-based infrastructure-as-a-service (IaaS) cloud. Cookie Jar Entertainment recently completed an initial pilot with CentriLogic and is now making the final transition of its consumer websites to the on-demand cloud service.

    Powered by Enomaly's Elastic Computing Platform (ECP), CentriLogic's new enterprise-class cloud solution is one of the first North America-wide (cross-border) IaaS offerings to provide a secure and scalable on-demand computing infrastructure for organizations to efficiently deliver content and information services over the Web. It is also one of the only providers to enable geopolitical delineation of data resources through its own dedicated facilities in the U.S. and Canada. The new U.S. cloud enhances CentriLogic's suite of hosting and managed services, including co-location, private and managed hosting, data management, security, network and professional services.

    "The issue of where data resides is growing in relevance as organizations move to the cloud but are expressing growing concern about the corresponding regulatory and compliance issues," said Antonio Piraino, Vice President and Research Director of Tier1 Research, a division of The 451 Group. "As a result, cloud infrastructure providers that have multiple nodes in different regulatory and legal jurisdictions have a value proposition that will draw wider and more diverse opportunities."

    According to Jim Latimer, Vice President of Client Solutions for CentriLogic, Cookie Jar Entertainment's commitment to the cloud for the Web components of its shows represents a significant milestone in the transition to cloud services. "Cookie Jar Entertainment has broken through the hype and is proving that on-demand cloud services can offer a significant strategic opportunity to manage resources in a more flexible and dynamic environment. Cloud computing is an ideal approach for Web properties; training environments, quality assurance and testing; SaaS and software development; or any other infrastructure where demand is highly elastic or unpredictable."

    According to Mike Haas, Director of IT for Cookie Jar Entertainment, "We were outgrowing our infrastructure, and we had added many new sites and systems to house our Web content in different locations. When CentriLogic announced a cloud option, we saw an opportunity to consolidate our external-facing properties in a more dynamic and scalable environment that allowed room for both planned and unexpected growth. We have worked with CentriLogic for three years, so we knew that they could offer a rock solid solution."

    CentriLogic's new cloud service allows customers to access and manage any number of virtual servers running Microsoft Windows or Linux through a Web-based dashboard, as well as automatically scale up and down their use of cloud servers through a robust Web-based API. Unique Public, Private, and Hybrid Cloud offerings are highly flexible, secure, and scalable and have been designed to comply with local information management and international data delineation requirements. The cloud can be used to host websites, power internal business applications, provide burst capacity to meet peak loads for existing systems, and provide a highly flexible, virtual element to an existing physical infrastructure.

    For more information on services and pricing call 1-866-366-3678 or visit www.CentriLogic.com/cloud.

    Wednesday, June 16, 2010

    Top 10 Essential Truths of Cloud Computing

    I admit I'm writing this post in a slightly sleep-deprived state. (If this post makes no sense, you've been warned.) While pondering the possibility of endless diaper changes at 2am, I had a few insights about the shift currently underway in both business and computing.

    One - I don't know about you, but if I have to install a piece of software on my desktop, I won't. My first essential truth of cloud computing: any consumer software created today that isn't already in the cloud (on the internet) should be in the cloud.

    Two - the browser is the only desktop application requirement.

    Three - I'm calling the DevOps Truth.
    When it comes to infrastructure - Anything that can be automated, should be automated.

    Four - Simplicity is always better than complexity.

    Five - Open is better than closed.

    Six - Value is money made or money saved.

    Seven - My customer's customer is my customer.

    Eight - Empower your customers to be successful and in return you will be.

    Nine - The OS doesn't matter.

    Ten - Information wants to be free.

    I'm sure there's more, please add your own essential truths to the comments.

    Saturday, June 12, 2010

    It's a Boy! Finnegan Anthony Cohen

    Brenda and I are proud to announce the latest addition to our family: Finnegan Anthony Cohen, born June 11th @ 1:18pm Eastern, 9 pounds 7oz.

    Tuesday, June 8, 2010

    Enomaly & Ericom Deliver Cloud Desktop Infrastructure (CDI) for Managed Service Providers

    Another day, another announcement. Today I'm happy to announce that Enomaly and Ericom have partnered to offer a complete turnkey cloud desktop infrastructure (CDI) platform for managed service providers. The solution combines Ericom’s PowerTerm® WebConnect with Enomaly's Elastic Computing Platform® (ECP), enabling web hosting firms, data center operators and managed service providers to offer revenue-generating cloud desktop services. The joint solution combines broad industry expertise with proven, reliable, scalable and adaptable technology in an easy-to-manage and quick-to-deploy platform. The ECP desktop platform is designed to meet the needs of service providers who are looking for an effective way to generate new revenue opportunities.

    Enomaly / Ericom CDI is an Internet-centric computing approach to desktop management, deployment and delivery that combines traditional thin-client computing, utility hosting, infrastructure as a service (IaaS) and cloud storage. It is designed to give system administrators and end users the best of both worlds: the ability to host remotely managed virtual desktops in a data center while giving end users a portable, self-service PC desktop experience regardless of location -- billed on a per-usage basis.

    We are also very aware of the current realities limiting virtual desktop infrastructure deployments -- they typically require too much upfront capital and complicated integration to make it feasible for most organizations. We believe that the combined Enomaly / Ericom Cloud Desktop platform is an attractive concept because of its potential to streamline and simplify management and support, enhance security and – more importantly – reduce IT costs.

    By leveraging a cloud-based architecture, enterprises can quickly ramp up to virtual desktops, add or remove desktop capacity on-the-fly, and immediately enjoy the benefits of CDI's centralization, security and ease of management. There's no need for building an infrastructure and no complexity. Enterprise IT and end-users simply access a service provider's infrastructure, which already has the capacity and connectivity assets needed to support a high-performing service desktop environment.

    Key Benefits Include:

    Anywhere, Anytime Access
    Empowers users with on-demand access to their virtual desktops when they need it, where they need it - from the office, home, road, customer site, etc.

    Expedite Desktop Deployments
    Enables swift, centralized desktop deployments, updates and maintenance—eliminating the hassle of local PC installations

    Better Management and Control
    Increases administrators' control over desktop configurations – while enabling desktop customization based on user needs

    Enhanced Security
    Desktops with applications and data are hosted within the datacenter – protecting sensitive information that would be compromised with stolen laptops or PCs

    Full PC Desktop Experience
    Virtual desktops maintain the same look and feel as traditional PCs – enabling a quick end-user migration to virtual desktops

    Rich Integration with Existing Infrastructure
    Support for various hypervisors including Xen®, KVM and VMware®.

    Monetization and Back office Integration
    Powerful back-office facing administrative API, enabling simple integration with providers' provisioning and billing systems and supporting automation of all administrative tasks.

    Learn more about Enomaly Desktops at > http://ruv.net/a/cloudvdi

    Friday, June 4, 2010

    Louisiana Gets Its Own Green Cloud Computing Infrastructure

    Let's be honest, lately there hasn't been a lot of good news coming out of Louisiana; the Gulf Coast has taken quite a beating with the economic fallout from the BP oil spill. Finally we have some good news for the region. They now have their very own cloud IaaS platform based in Lafayette, Louisiana -- a city that is quickly emerging as one of the best and most connected in the country.

    Technology initiatives have helped fuel the region’s thriving business environment while building up a tech-savvy community. The City of Lafayette developed the nation’s largest and most robust municipally owned fiber optic broadband infrastructure, delivering a 100 megabit per second (Mbps) peer-to-peer broadband network within its city limits; another confirmation of lightning-fast data connections and forward thinking.

    Businesses in this creative environment are joining forces with worldwide service providers to create IT solutions with international applications. One such pairing is the Abacus Data Exchange (Lafayette, LA) and Enomaly. Together we have brought the concepts of “Cloud Computing” and “Elastic Platforms” into the local vocabulary. Picture a very green communications/data center with direct access to virtual dedicated servers for end-user applications, running on a blazing fast fiber optic broadband network and connecting to anywhere on the planet.

    To give you a little background on Lafayette, according to several studies naming “best” locations in the United States, Lafayette, Louisiana is a top choice as a place to live, to start a business, to enjoy the creative class and be happy. Relocate America lists Lafayette, Louisiana as one of the "Top 100 Places to Live". For this year's list, the editorial team focused on communities poised for recovery and future growth. They discovered that this mid-sized Louisiana city showed strong local leadership, employment opportunities, thriving community commitment, improving real estate markets, growing green initiatives, plentiful recreational options and overall high quality of life. Lafayette has a population of fewer than 200,000 and is located near the Gulf Coast, halfway between New Orleans and Houston.

    Fortune Small Business ranked Lafayette as the #2 mid-sized city in the nation for small business startups. Lafayette was also named by Southern Business & Development Magazine as one of the top cities in the South for the "Creative Class". This designation is based on the community's commitment to technology, forward thinking and cultural diversity. And the list goes on... Louisiana ranks #1 in happiness according to Science Magazine's survey of 1.3 million people across the USA.

    So why does Louisiana need its own cloud?

    When the Abacus research team set out to design a next-generation cloud computing solution, it found a partner in Enomaly and its Elastic Computing Platform. By combining Abacus's expertise in managing green fiber optic broadband datacenters with Enomaly's cutting edge cloud computing technology, the Abacus Data Cloud offers direct access to variably sized, on-demand virtual servers for end-user applications. Customers can easily provision their own virtual, high-performance, green-friendly online datacenter services within a matter of hours, not days, using only the resources they need for the duration that they need them. The clean and simple customer user interface allows customers to tailor their cloud environment by provisioning virtual machines with specific application stacks and resource capacity. Built-in resource and usage monitoring provides real-time integrated billing and network reporting.

    Customers are using the cloud platform to quickly and easily create virtual servers to host web sites, run internal business applications and provide burst capacity to meet peak loads when working on data-intensive operations.

    “Working with Enomaly's ECP-SPE solution to provide our customers a cloud solution has been surprisingly easy. I've never felt more confident in the stability, user-friendliness and overall personalization of a publicly accessible interface as I do with Enomaly. Knowing that we have created a superior product benefits everyone,” said Bryan Fuselier, CIO, Abacus Data Exchange.

    In addition to accessing virtual cloud resources, through the Abacus Data Cloud anyone, anywhere can replicate (take a virtual snapshot of) their existing servers to prevent data and configuration losses, or plan for server replacement and upgrades on their own schedule. Replica servers are activated when needed and turned off when tasks are finished. Additionally, everyone reduces server/IT infrastructure cost by moving towards a ‘green’ solution that cuts down on high utility use, cooling needs and hardware server support. The end user’s ability to test new applications or engage in Web 2.0 development allows them to gain more effectiveness in critical day-to-day operations. Cost savings kick in when businesses build extremely flexible server resources by leasing shared infrastructure. Businesses can create, access and manage these servers from any Internet connection through a personalized web-based dashboard that scales server resources up and down dynamically based on demand.

    The Abacus Data Cloud delivers some of the world's fastest cloud computing, and people are taking notice. “We are excited to include Abacus as a customer and we will be following their growing venture opportunities closely,” stated Dr. Richard Reiner, CEO of Enomaly. “We see great potential for a customizable, scalable and reliable high-speed virtual provisioning service using the Service Provider Edition of our Elastic Computing Platform in conjunction with a proven combination of a low-footprint data center and a high-speed fiber network solution.”

    Customers can set up individual servers on their own within minutes, and entire datacenters can be virtualized within a matter of days. And because of Enomaly's integrated billing and network usage monitoring along with their clean user interface and clustering support, the Abacus Data Cloud offers the most cost-effective and customer-friendly cloud computing solution on the market today.

    "The implications and potential are absolutely enormous" says Abigail Ransonet, CVO of Abacus. "The Abacus Data Cloud on a unified elastic computing platform running Enomaly's ECP SPE is a replicable business model that has a world wide customer-base. We create super-fast, easy-to-use virtual computing environments that businesses can scale up and down as needed. Our model exemplifies the dynamics of fiber optic broadband and innovative solutions. Building and managing on-line unified server technology just got very personal! We all have a sense of pride that we live and work in Lafayette, Louisiana, while we offer the entire world innovative technology solutions."

    Learn more at abacusdataexchange.com

    HP, Intel and Enomaly Collaborate to Deliver End-to-End Cloud Platform for Service Providers

    I'm happy to announce today that Enomaly has collaborated with HP and Intel to offer a complete end-to-end cloud IaaS platform for cloud service providers, hosting firms and Internet Data Center (IDC) providers. The solution is built on an optimized stack of HP products, including HP ProLiant® servers, HP StorageWorks® storage systems and ProCurve® Networking solutions (3Com), powered by advanced Intel Xeon® processors. Essentially, everything you need to deploy a complete revenue-focused cloud service.

    HP/Enomaly hosting solutions enable you to offer timely, comprehensive services to your customers—whether you provide shared hosting, dedicated hosting, virtual/cloud hosting or managed services. HP’s flexible, energy-efficient Intel Xeon® processor-based server, storage and networking solutions—coupled with Enomaly software solutions—help you overcome your primary business challenges:

    • Lower costs by increasing the density of sites per server
    • Increase average revenue per user by offering more hosting services and/or up-sell the customer to purchase additional services
    • Increase predictability of capacity planning for the Service and understand the relationship of capacity to service level performance
    • Automate the user experience and empower the end-user

    Another exciting aspect of our collaboration with HP is with HP Financial Services. Our relationship provides a complete one stop shop for our customers. The collaboration enables Enomaly to offer a flexible purchase and lease program covering the complete ECP IaaS cloud stack (hardware and software).

    Contact us for more information.
