As you can probably tell I haven't been very inspired to write on the old blog lately. My wife [Brenny] is now 38 weeks pregnant, so that's got me pretty busy with the last few things we need to do to prepare for our new arrival. From the new and exciting world of cloud computing, well, nothing much new to report. Obviously interest is strong; people talk about clouds [possibly too much]. I admit I've found myself wondering what's coming next. I think I've entered Gartner's trough of disillusionment. Or to put it more plainly, I'm suffering from cloud-burn.
It's not that I'm sick of cloud computing per se, but I'm just a little tired of the term itself. And the same cloudy questions: What is cloud? Really? ME: It's cloudy, it's supposed to be fuzzy, it's not called clear computing now is it ;) Come on ppl. It's Internet computing. It's a way to make money. It's the new black. Good enough. Move on, next question.
As for the blog world, since cloud has become the new black, it's really become the new boring for IT writers. Mainstream IT media now talks about how MegaSoft Corp or Big Yellow is losing or winning or both. I'm more interested in who's innovating. It's obvious the most innovative companies are those you've never heard of. I'm not sure why we choose to focus so much attention on those with the least interesting stories.
As for me. What am I into? Power / Green / carbon friendly computing maybe. The iPad looks cool, maybe energy efficient code is the next big thing? I don't know. I am really excited about the new baby. -- Oh yeah I mentioned that didn't I?
Over the next few weeks I'm house and office bound, so don't expect to see me at any CloudCamps or other various *Cloud* Conferences. It's not that I don't care, it's just I care too much -- about my family that is.
Monday, May 31, 2010
Friday, May 14, 2010
Exploring Differentiation Among Cloud Service Providers
I just got off our weekly Enomaly Webex, in which I filled in as the host in place of our product manager Pat Wendorf. I've become notorious for getting off topic when I do our Webex presentations, and this week was no different. Actually, doing these presentations really does help me think through some of my ideas, as well as helping me take the temperature of the IaaS / hosting market of potential customers. Lately it seems that the questions have shifted from "does your product have ABC or XYZ feature" to "how do we build a differentiated cloud service". These are the kinds of questions that I enjoy the most. So I'm going to explore this a bit today.
One of the biggest transitions in the hosting space over the last decade has been that of Virtual Private Servers (VPS) -- a market controlled effectively by one company, Parallels. A critical problem with the Virtuozzo Containers product line and approach has been that there is effectively no difference between any VPS hosting company. The lack of differentiation among the various VPS hosting firms has meant that the only real way to set your service apart from that of the other guys is purely on price. This price-centric approach to product/service differentiation creates a commodity market for all the providers. Basically their sales pitch is "We're cheaper". This means you're now competing in a low margin, high volume business, not on any real value proposition. Effectively the VPS space has become a race to zero. So in this market you'll find that most VPS hosting companies are now charging roughly the same low price, a few dollars a month, with the same low margins for the same basic service. The only one making any real margin is Parallels, at the expense of their customers.
What's interesting about the emerging cloud service provider segment is the opportunity to differentiate based on the value you provide to your customers. It's not that you're cheaper than the other guy, but instead that you have an actual solution to a problem they have -- today. Maybe your platform can scale more easily, or you're in a specific region or city, or possibly you have a particular application deployment focus. In the case of Enomaly ECP, when we developed our platform we focused on developing a cloud infrastructure with the capability to define various economic models that run the gamut from a pure utility offering to quota or even tiered quality-of-service centric approaches. By doing so, we understand that there may still be multiple ECP deployments in a particular region, but from an end customer point of view (the customer of our customers) they can be significantly different from one another. This allows for competition based on business value, not purely on price. I'll choose ECP cloud provider A because they solve my particular pain point, even though they may be more expensive than cloud B, who doesn't.
My position has always been to create a shared success model, one that is mutually beneficial. The better our customers do, the better we do. A model that doesn't cannibalize our service provider customers' margins in return for higher margins for us. A shared success model also provides an incentive to buy more licenses based on our customers' growth. At the end of the day, those 500 customers that City Cloud in Sweden has have now effectively become my responsibility too. (The customer of my customer is my customer.) I win because my customers win, and they win because they're different, they're compelling and they have a service people need and, more importantly, want to buy.
Thursday, May 13, 2010
The White House Further Outlines Federal Cloud Strategy
Interesting post today over at the Whitehouse.gov blog by Vivek Kundra, the U.S. Chief Information Officer. The post describes both the rationale for cloud computing as well as what it means, at least to the US Government.
Here are a couple of the more interesting parts.
"For those of you not familiar with cloud computing, here is a brief explanation. There was a time when every household, town, or village had its own water well. Today, shared public utilities give us access to clean water by simply turning on the tap. Cloud computing works a lot like our shared public utilities. However, instead of water coming from a tap, users access computing power from a pool of shared resources. Just like the tap in your kitchen, cloud computing services can be turned on or off as needed, and, when the tap isn’t on, not only can the water be used by someone else, but you aren’t paying for resources that you don’t use. Cloud computing is a new model for delivering computing resources – such as networks, servers, storage, or software applications."
The post clearly outlines the use of Cloud Computing within the US federal government's IT strategy. "The Obama Administration is committed to leveraging the power of cloud computing to help close the technology gap and deliver for the American people. I am hopeful that that the Recovery Board’s move to the cloud will serve as a model for making government’s use of technology smarter, better, and faster."
Read the rest here
NobelPrize.org Powered By City Cloud and Enomaly ECP
There is no better feeling than to see one of our Enomaly ECP customers doing well in the competitive cloud service provider space. In particular, City Cloud in Sweden has shared with me some interesting new customer wins, including the Nobel Foundation's NobelPrize.org site. Yet another proof point: City Cloud now boasts more than 500 customers, proving the opportunity for regional compute clouds is real.
Also interesting to see the ecosystem evolving around our customers, including a new iPhone app.
Congrats to the City Networks team! Keep on kicking IaaS!
Monday, May 10, 2010
Making Money with Cloud Computing (Video Podcast)
I'm happy to announce the first episode in a new series of Enomaly Podcasts focused on one of the most important questions when looking at building, deploying and running public cloud computing infrastructures. The question of how to Make Money. Over the next few weeks we'll be posting a series of podcasts exploring various revenue, business and ROI models for cloud infrastructure providers. The video podcast features Enomaly VP Sales Justin Groen and myself.
Failure as a Service
A recent seven-hour outage at Amazon Web Services on Saturday has renewed the discussion about cloud failures and whether the customer or the provider of the services should be held responsible. The conversation stems from two power outages on May 4 and an extended power loss early on Saturday, May 8. Saturday’s outage began at about 12:20 a.m. and lasted until 7:20 a.m., and affected a “set of racks,” according to Amazon, which said the bulk of customers in its U.S. East availability zone remained unaffected.
In one of the most direct posts, Amazon EBS sucks I just lost all my data, Dave Dopson said "they [AWS] promise redundancy, it is BS." Going on to point to AWS's statement "EBS volumes are designed to be highly available and reliable. Amazon EBS volume data is replicated across multiple servers in an Availability Zone to prevent the loss of data from the failure of any single component. The durability of your volume depends both on the size of your volume and the percentage of the data that has changed since your last snapshot. As an example, volumes that operate with 20 GB or less of modified data since their most recent Amazon EBS snapshot can expect an annual failure rate (AFR) of between 0.1%–0.5%, where failure refers to a complete loss of the volume. This compares with commodity hard disks that will typically fail with an AFR of around 4%, making EBS volumes 10 times more reliable than typical commodity disk drives."
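Those AFR figures are easy to sanity-check with a little arithmetic. A minimal sketch; the assumption that volume failures are independent is mine, not Amazon's:

```python
# Back-of-envelope check of the quoted annual failure rates (AFR).
# Assumes volume failures are independent, which is optimistic.

def p_at_least_one_loss(afr: float, n_volumes: int) -> float:
    """Probability that at least one of n volumes fails within a year."""
    return 1.0 - (1.0 - afr) ** n_volumes

# Quoted figures: EBS 0.1%-0.5% AFR vs ~4% for a commodity disk.
for label, afr in [("EBS, best case", 0.001),
                   ("EBS, worst case", 0.005),
                   ("commodity disk", 0.04)]:
    print(f"{label}: across 100 volumes, "
          f"{p_at_least_one_loss(afr, 100):.1%} chance of at least one loss per year")
```

Even at the best-case EBS rate, a large enough fleet of volumes will eventually see a complete loss, which is exactly why the snapshot caveat in the fine print matters.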
Like many new users of cloud computing, he assumed that he could just use the service, and upon failure AWS's redundancy would automatically fix any problems, because they do (sort of) say that they prevent data loss. What Amazon actually states is a little different: "the durability of your volume depends both on the size of your volume and the percentage of the data that has changed since your last snapshot", placing the responsibility on the customer. On one hand they say that they prevent data loss, but only if you use the AWS cloud correctly; otherwise you're SOL. The reality is that AWS for most users requires significant failure planning -- in this case, the use of EBS's snapshot capability. The problem is that most [new] users have a hard time learning the rules of the road. A quick search for AWS failure planning on the AWS forums yielded little additional insight; it really appears to be mostly trial and error.
In the case of hardware failures, Amazon expects you to design your architecture correctly for these kinds of events through redundancy, for example using multiple VMs, etc. They expect a certain level of knowledge of both system administration and how AWS itself has been designed to be used. Newbies need not apply, or should use at their own risk. Which isn't all that clear to a new user who hears that cloud computing is safe and the answer to all your problems -- which, I admit, should be a red flag in itself. The problem is twofold: an over-hyped technology and unclear failure models, which combine to create a perfect storm. You need the late adopters for the real revenue opportunities, but these same late adopters require a different, more gentle kind of cloud service, probably one a little more platform than infrastructure focused. As IaaS matures it is becoming obvious that the "Über Geek" developers who first adopted the service are not where the long tail revenue opportunities are. To make IaaS viable to a broader market, AWS and other IaaS vendors need to mature their platforms for a less expert type of user (a lowest common denominator), one who is smart enough to be dangerous; otherwise they're doomed to be limited to the experts-only segment.
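The "design your architecture for failure" expectation boils down to logic you have to write yourself. A toy sketch of that client-side failover idea; the instance IDs and health check here are made up for illustration, not part of any AWS API:

```python
# A sketch of the client-side failover logic AWS expects you to build:
# run multiple redundant VMs and route around the dead ones yourself.
# Instance IDs and the health check are hypothetical.

def first_healthy(instances, is_healthy):
    """Return the first instance that passes the health check, else None."""
    for inst in instances:
        if is_healthy(inst):
            return inst
    return None

instances = ["i-aaaa", "i-bbbb", "i-cccc"]
alive = {"i-bbbb"}  # pretend only one instance survived the outage
print(first_healthy(instances, lambda i: i in alive))  # i-bbbb
```

Trivial as it looks, this is exactly the kind of plumbing a newcomer doesn't know they're supposed to own until something fails.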
The bigger question is: should a cloud user have to worry about hardware failures, or should these types of failures be the sole responsibility of the service provider? My opinion is that deploying to the cloud should reduce complexity, not increase it. The user should be responsible for what they have access to, so in the case of AWS, they should be responsible for failures that are brought about by the applications and related components they build and deploy, not by the hardware. If hardware fails (which it will), this should be the responsibility of those who manage and provide it. Making things worse is promising to be highly available, reliable and redundant, but with the fine print of "if you are smart enough to use all our services in the proper way," which isn't fair. If EBS is automatically replicated, why did Dave lose all his data?
In an optimal cloud environment a single server failure shouldn't matter. But it appears at AWS it does.
Sunday, May 9, 2010
Cloudy with a Chance of Scale (Cloud Songs)
If you're like me and spend a lot of time organizing cloud related events and get-togethers, you'll understand the value of finding good cloud related songs. Well, look no further: I've put together the ultimate cloud related song list over at Grooveshark.com called Cloudy with a Chance of Scale, or see the widget below.
Friday, May 7, 2010
Unlocking the Value of IaaS
Until recently I've been in an odd spot. Generally speaking, my biggest competition when a potential customer came to us looking for an Infrastructure as a Service (IaaS) platform was either to build it yourself (aka huge risk) or buy it from us (yes, there were a few other competitors). But for the most part the space was a greenfield; we really didn't have to worry about our competitors because there weren't many, and the ones that were out there were positioned in a significantly different way than us. The problem with being first is one of education: most potential customers didn't even realize they needed a cloud platform. The good news is things are changing; the idea of Infrastructure as a Service is no longer a radical one. IaaS companies are getting funded left and right and customers are buying. I've long held the notion that an industry isn't real until you have direct competitors. This both proves there is an opportunity and brings a broader awareness. A rising tide floats all boats, if you will. In my post today, I thought I'd briefly explore the value proposition of IaaS.
So you ask, what's the value of providing Infrastructure as a Service? From a private cloud point of view it's mostly about efficiency; the drawback is you need to spend money to save money, which can be a tough sell in a rough economic climate. From a public cloud context it's about converting costs (fixed to variable, Capex to Opex, etc). Ok, we've heard the story before. It's really about saving money or not losing it to a more nimble competitor. So the real question becomes: how do you unlock the value of an IaaS cloud, internally, externally or both? For me it's all about the application.
If done right, an IaaS platform provides a simple unified interface to your infrastructure. At its heart it simplifies your infrastructure so you can focus on what matters most, the applications that run on it. It also changes the job of a system/network admin from reactive to proactive. If a server dies, who cares -- leave it. Instead, hot-add additional servers just in time based on automatic utilization notifications. Where previously a sysadmin could manage dozens of servers, that same admin can now manage thousands, because no one server really matters. It's the overall cloud that matters. IaaS allows you to focus higher in the stack, where the real business value lies. Do you think end users really care what flavor of load balancer you're using? Probably not. What they do care about is what application they can have instant access to when they have a real problem they need solved. And when it's deployed, it will scale to handle anything they throw at it. The application just works.
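That "hot-add servers based on utilization notifications" idea amounts to a simple scaling policy. A minimal sketch; the threshold and step size are arbitrary choices of mine, not anything specific to ECP:

```python
# A toy proactive scaling policy: if average pool utilization crosses a
# threshold, grow the pool; otherwise leave it alone. A real system would
# also scale down, debounce noisy readings, and cap the pool size.

def servers_to_add(utilizations, threshold=0.8, step=1):
    """Return how many servers to hot-add, given per-server utilization (0-1)."""
    avg = sum(utilizations) / len(utilizations)
    return step if avg > threshold else 0

print(servers_to_add([0.92, 0.88, 0.95]))  # busy pool -> 1
print(servers_to_add([0.20, 0.35, 0.10]))  # idle pool -> 0
```

The point isn't the policy itself but who runs it: in the reactive model a human watches the graphs; in the proactive model the cloud does, and the admin just sets the knobs.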
Another one of the more puzzling approaches I've seen a lot lately in the virtualization and IaaS space is that of the so-called "Virtual Data Center". Basically what a few of the more backward thinking vendors are saying is: let's recreate the traditional physical experience of running a datacenter, but in a virtual or cloud context. Some have even gone to the extent of creating virtual blades which graphically look like physical servers to get their point across. To which I say: why? Infrastructure is already too complex, so why would I want to recreate an already painful experience? The sales pitch is "do what you've already done". I think a better approach is to instead take what you've already done and make it better, easier, more efficient and ultimately simpler. Instead of recreating complexity, I say remove it. If someone is trying to sell you a "Virtual Data Center", I say run. What they should be selling you is business value: how can this save or make you money? The value of IaaS is that it just works, it just scales and it makes you money -- not how many features can be crammed into a user interface. And more importantly, from a technical standpoint it doesn't require endless hours of configuration and testing. Yep, it's turnkey.
The biggest value of an IaaS platform is as a stepping stone -- one that allows you to gracefully migrate from the traditional physical single tenant infrastructure of the past to the multi-tenant distributed cloud of the future, while not requiring you to completely re-architect or rebuild your applications for it. What IaaS does is remove the need to make your applications multi-tenant by making your infrastructure multi-tenant. If it doesn't accomplish this, then it's not IaaS and there's not a lot of value in it.
Wednesday, May 5, 2010
How to Make Money in Cloud Computing
It's been fairly quiet on the blog front lately, mostly because of my ridiculous travel schedule as well as an endless series of meetings with new customers and partners. A recurring question I've been asked lately has revolved around one of the more difficult questions to answer in cloud computing: how does one make money in cloud computing? A very good question, one that I've been asking myself for quite some time. So let me begin by giving you a brief history of my company, Enomaly.
Over the last 6 1/2 years since we founded Enomaly Inc, I've often considered myself a bootstrapper -- building a self-sustaining business that succeeds without external financial support, built and supported by the money we make. In my case, Enomaly was formed out of the previous consulting work I was doing in enterprise content management; back then (pre-2004) I focused on open source CMS products. (I made money on others' free software.) Basically I built Enomaly by taking the money I made as an independent consultant / freelancer / contractor, bringing together a founding team and creating Enomaly. Of the original partners, George, a financial whiz, looked after the operations, and Lars looked after both the product and project management; they both excelled in areas I found myself weak in. As time passed we used the revenue to hire more people and develop our products, all the while gradually growing the business. This gave us the ability to both grow and adapt to emerging market trends in what we called "elastic computing", now generally referred to as "Infrastructure as a Service" or cloud computing. One of the biggest advantages was time.
This time gave us the ability to watch the industry evolve over the years and to see the concepts of cloud computing go from fringe concept to mainstream phenomenon in what seems like overnight. Although I do admit, one of the biggest problems with bootstrapping your business is that of scale; it's hard to grow a business based purely on your own financial resources. On the positive side, you remain in control of your destiny (mostly). Another issue we faced was timing (not to be confused with time): creating the first version of Enomaly in 2005 meant we were probably 4-5 years too early. So for the first 4 years we just gave away our software under an open source license in the hopes that it would act as an opportunity generator of sorts. Which it did, opening up opportunities at companies like NBC/Universal, France Telecom, Intel, Best Buy, John Hancock, and even an early beta invite in 2006 to a yet-to-be-released project at Amazon called EC2. So the original value of our product was that of being mostly a market research tool and lead generator, but not a direct revenue generator. Unfortunately, we really never made any money from our software until we finally decided to offer a proprietary edition focused on the sector that had the most to lose: the service providers and web hosts who began to see revenue being lost to a selection of newer, cooler cloud service providers such as Amazon Web Services.
This brings me back to how you make money in the cloud. First, let's look at software. Let's be honest: for the most part, traditional single-tenant software is dead. Most broad consumer software today is provided as a service, over the internet and in a browser. Companies like Salesforce.com and Google are the poster children for this approach, with Salesforce now making more than a billion dollars a year. Other areas of software, such as infrastructure, are much more difficult. At the end of the day, whether it's PaaS or SaaS, you still need to power the underlying infrastructure, and that means software sitting on a server somewhere.
The move toward free and open software models in this space means that revenue has become a secondary objective, second to ubiquity. It's no longer about who makes the most money; instead it seems to be about who can grab the most market share and hopefully sell their business to some larger technology company, making it "their problem". One of the main problems with the no-revenue approach is sustainability. Another issue with the free model in an emerging market is disruption. In an established market such as enterprise databases, the market is there and money is being made, so disrupting the incumbent makes sense; but when you attempt to disrupt an emerging market, the only one you're disrupting is yourself. That became painfully obvious after several years of asking why we weren't making any money from our free software, when someone finally said to me: have you tried charging for it? To which I responded: well, no, actually.
Good software takes a long time to build. It's not about throwing more people at it or spending more money; it's about spending more time developing it. Software development is not linear. Sure, those with deep pockets can attempt to drive the market value of software to zero in the hope of owning the largest possible segment in the shortest possible timeframe, but at what cost? I'd say quality. Another issue with this approach is that it creates an environment where success is based not on the best, most mature products but on the free-est. Today most free/open source commercial software tends to be an amalgamation of various others' free software held together by some glue. Of course there are exceptions, mostly among the more mature community-driven open source projects such as Linux, Apache and others, but for the most part this isn't the case in the for-profit realm. Customers have come to expect everything and pay nothing for it, and then ask why the software doesn't work the way they expect. Then they're given the answer: we will sell you support to help you with our hard-to-install and buggy software. To which I say, why not create software that works, so you don't need to offer professional services as a way to monetize?
So how do you make money in cloud computing? By making, and more importantly "selling", products and services people want to buy.
Monday, May 3, 2010
A Trip to the Real Amazon Cloud
I just returned from a week in spectacular Rio de Janeiro, Brazil. I'll admit that before I travelled to Brazil I had a bit of a misconception of the country. Turns out my preconceived perceptions of both the country and the city of Rio were almost completely wrong. Instead I found a modern, safe and vibrant city full of culture and excitement. Rio is, in one word: awesome.
What brought me to Brazil was an invitation from Giuseppe Romagnoli at Serpro -- a public IT company created by the government of Brazil which offers technology advice and implements internet solutions for the Brazilian government. Serpro is considered one of the largest government-sector IT organizations in Latin America. Recently one of Serpro's primary focuses has been creating Brazil's national cloud computing strategy. Part of this strategy was the Cloud Computing Brazil (CCBrazil) conference in Rio last week, at which I had the honor of delivering the keynote presentation. The conference brought together key local industry influencers, policy makers and academics from around the country to discuss the opportunities and challenges for cloud computing in the region. Let me tell you, the opportunities are abundant.
During my time in Brazil I discovered that the country is currently undergoing a major economic renaissance brought about by strong foreign investment, thanks in part to the huge oil reserves recently found at Tupi, located some 250km off Brazil's southern coast. In one of my conversations, someone who worked for the Brazilian oil firm Petrobras put the Tupi oil field opportunity into context: the oil lies some 4.5 miles beneath the ocean's surface. To reach it, Petrobras will have to run lines through 7,000 feet of water and then drill up to 17,000 feet through sand, rock, and a massive mile-deep salt layer before they can get to the oil. But he also said that Tupi is probably the world's biggest oil find of the last 30 years. This vast discovery is quickly moving Brazil into the position of a major economic power player, not just in América do Sul but globally. All this new money is directly affecting how Brazil as a nation will compete on the global stage. The government also seems to realize that to do so, they need more than just the raw resources oil provides -- they also need the IT infrastructure to let the country rise as a modern, IT- and business-centric economy similar to places like the USA or Canada. It would seem that cloud computing may play a critical role in the revitalization of Brazil, and it is arguably just as important as the oil, in the sense that all the oil in the world is meaningless if the average person doesn't see an increase in their quality of life because of it.
Another reason I refer to Brazil as going through a renaissance: as I walked through Rio last week, what struck me was the style. It was a snapshot of 1970s architecture -- a kind of portal into another heyday long since past. Today Rio is yet again an exciting place full of culture, great people and amazing sights. I was also very lucky to have a personal guide for the week in Paola Garcia Juarez (a.k.a. Cloud Girl). Paola is a recent graduate of the Master's program in Computer Science at the Federal University of Rio de Janeiro (UFRJ) as well as a key cloud evangelist in South America. Paola was one of the key organizers of CloudCamp Rio, which was held alongside CCBrazil. (I should also note that she is currently organizing CloudCamp Lima for November.) More importantly, she knew the best places to see and the best places to eat (and drink) in Rio.
Needless to say, Rio is awesome (oh, I said that already). If you're looking for a place to both play and do business, then Rio is for you!