Tuesday, September 30, 2008
Generally, the idea of a cloud exchange would be a central financial exchange where people and companies could trade standardized cloud capacity in the form of a futures contract; that is, a contract to buy a specific quantity of compute capacity, as a commodity, at a specified price with delivery set at a specified time in the future. The contract details what cloud asset is to be bought or sold, and how, when, where and in what quantity it is to be delivered.
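To make the idea concrete, here is a minimal sketch of what such a standardized contract might look like as a data structure. Every field, name, and figure below is hypothetical; no real exchange or contract spec exists yet.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CloudFuture:
    """A hypothetical standardized cloud capacity futures contract."""
    compute_units: int     # quantity, in some agreed General Compute Unit
    price_per_unit: float  # agreed price, e.g. USD per unit
    delivery_date: date    # when the capacity must be made available
    region: str            # where the capacity is to be delivered
    seller: str
    buyer: str

    def notional_value(self) -> float:
        """Total value of the contract at the agreed price."""
        return self.compute_units * self.price_per_unit

# 1,000 compute units at $0.25 each, for delivery January 1, 2009
contract = CloudFuture(1000, 0.25, date(2009, 1, 1),
                       "us-east", "ProviderA", "BuyerB")
print(contract.notional_value())  # 250.0
```

Standardizing those fields (especially the compute unit itself) is exactly the hard part the questions below are getting at.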
I'd like to pose the question: is the time right for us to start thinking about the creation of a "Cloud Exchange", and if so, what are some of the other challenges we need to overcome? (General Compute Unit, capacity interchange, bandwidth, quality, auditability, fraud, etc.)
Feel free to pitch in your ideas, good, bad or indifferent.
This is from Adam's Blog
I've started work on the Enomalism developer guide. I've only just begun writing it so there is a lack of content at the moment. However, the good news is that I'll be frequently updating it. So check back often.
The guide will hopefully be a useful starting point for creating Enomalism extension modules as well as for using the exposed API. If there is any area that is lacking (and doesn't already have a TODO label) or missing completely, be sure to let me know.
Monday, September 29, 2008
Ok, this is getting ridiculous. Joining Ballmer and Ellison, free software activist Richard Stallman has now weighed in on cloud computing.
He goes on:
"Do your own computing on your own computer with your copy of a freedom-respecting program," he says. "If you use a proprietary program or somebody else's web server, you're defenceless. You're putty in the hands of whoever developed that software."
Gee, thanks for the insight. My suggestion: go take a shower and shave.
Sunday, September 28, 2008
After hosting our first CloudCamp in San Francisco, and then taking it to London, we are excited to bring CloudCamp back to the Bay Area. Tuesday, Sept 30th, we will be hosting CloudCamp at Sun’s Executive Briefing Center in Menlo Park.
We are also happy to partner with SDForum’s Cloud Computing conference which takes place the following day in Santa Clara. You may still get a pass if you register for SDForum's Cloud Computing and Beyond conference on Oct 1st in Santa Clara.(FYI - I won't be in attendance, I have meetings in Austin that day)
Following Larry Ellison's lead, Steve Ballmer has weighed in on cloud computing. Speaking with venture capitalist Ann Winblad at the Churchill Club last Thursday, the Microsoft CEO addressed the differences of opinion on cloud computing.
"I think when people talk about cloud computing they're talking about taking some stuff, putting it outside the firewall, and perhaps putting it on servers that are also shared--or storage systems--that are also shared, perhaps with other companies that they know nothing about."
That's actually a pretty good description. Let's hope MS gets it right.
For those of us who fell asleep during middle school (myself included): a metaphor asserts that two topics are the same, whereas an analogy may acknowledge differences. Are we too focused on the similarities between the internet (an indescribable organism) and cloud computing when we should be focusing on the differences between traditional computing and internet-based computing?
This random thought has been brought to you by, walking my dog Winston on a Sunday afternoon.
Friday, September 26, 2008
"The interesting thing about cloud computing is that we've redefined cloud computing to include everything that we already do. I can't think of anything that isn't cloud computing with all of these announcements. The computer industry is the only industry that is more fashion-driven than women's fashion. Maybe I'm an idiot, but I have no idea what anyone is talking about. What is it? It's complete gibberish. It's insane. When is this idiocy going to stop?"
No one is accusing Larry of being an idiot; he's far from it. He does have a very good point. If Larry can't figure out what cloud computing is, how can we expect to build an industry around it?
When I originally proposed the cloud forum, my goal was to invite a select group of cloud consumers, vendors and cloud industry advocates. Like everything cloud related these days, the event exploded, with more than 50 people signing up. Another 30 people requested entry, but due to lack of space I needed to cap it. One of the coolest parts of the day was actually seeing my "social network" in person. (After the event a group of us met at Benihana's, and someone noted they had never seen so many grey beards in one room.) I really appreciated that. I did go out of my way to invite the "heavy hitters" of the technology world, and to my surprise they all showed up. (Thank you! I am truly honoured.)
The forum utilized an unconference format to facilitate a face-to-face participant-driven discussion. The day was primarily composed of group activities, brainstorming sessions and general cloud discussions.
We had a wide group representing a nice cross section of the emerging cloud industry and consumers. The participation included representatives from 3Tera, 10gen, Appistry, AppNexus, Citrix, Dell, eBay, Elastra, Eli Lilly and Company, Enomaly, EMC, France Telecom / Orange, Gigaspaces, GoGrid, IBM, Intel, JP Morgan Chase, Oracle, Open Cloud Consortium, Open Grid Forum, Parallels, Rackspace / Mosso, RightScale, RSA, Snaplogic, Sun, University of California / EUCALYPTUS, VMware, and Zeronines.
Areas like security, transparency and SLAs were the hot topics of the day. The workgroup on "Cloud API" was by far the most popular subject, with a preliminary taxonomy and API architecture drafted on the white board.
So did we reach a consensus? No, but we did all agree that we need to investigate ways we can start working in a more "interoperable way", and to continue the discussion of common ways to interact with remote cloud platforms using a more uniform API.
The plan is to hold our next forum during SYS-CON's Cloud Computing Expo, November 19-21 in San Francisco. If you're interested in getting involved in the forum, please go to the discussion group. (It's invite-only for the moment, so give me a good reason and I'll add you.)
Sunday, September 21, 2008
When we first started Enomaly back in 2003 we knew there were a lot of opportunities in the open source technology field, but we also knew that we wanted to create some disruptive products of our own. Over the last 5 years we've tried a variety of products and services. With more than 50,000 downloads, 14,000 installs and 3 years of development, we finally found our "killer application" in the Enomaly Elastic Computing Platform (ECP).
I can proudly and clearly state our new mission to bring "Clarity to Cloud Computing". We do this through our ECP and associated components, as well as our cloud computing professional services.
I welcome you to check out the new Enomaly at www.enomaly.com and download our latest (fully supported and stable) open source elastic computing platform.
Saturday, September 20, 2008
Imagine the ability to map/reduce your genome across a scalable and secure compute cloud for near instant DNA diagnostics. My random thought for today.
Here's a piece of the press release.
The Distributed Management Task Force (DMTF), the industry organization bringing the IT industry together to collaborate on systems management standards development, validation, promotion and adoption, today announced the Virtualization Management Initiative (VMAN). The VMAN Initiative unleashes the power of virtualization by delivering broadly supported interoperability and portability standards to virtual computing environments. VMAN provides IT managers the freedom to deploy pre-installed, pre-configured solutions across heterogeneous computing networks and to manage those applications through their entire lifecycle. This Initiative delivers much-needed open industry standards to the management of virtualized environments. Complete release here >
Virtualization has radically enhanced the IT industry landscape by optimizing usage of existing physical resources and helping reduce the number of systems deployed and managed. This consolidation helps reduce hardware costs and mitigates power and cooling needs. However, even with the efficiencies virtualization offers, this new approach adds IT cost as a result of increased system management complexity. The VMAN Initiative will help reduce the complexity and cost of virtualization management for IT managers. Also, because the DMTF builds on existing standards for server hardware, management tool vendors can more easily provide holistic management capabilities that allow IT managers to manage their virtual environments in the context of the underlying hardware. This lowers the IT learning curve, and also lowers complexity for vendors implementing this support in their management solutions, reducing IT management costs.
Thursday, September 18, 2008
Amazon Describes their CDN;
This new service will provide you a high performance method of distributing content to end users, giving your customers low latency and high data transfer rates when they access your objects. The initial release will help developers and businesses who need to deliver popular, publicly readable content over HTTP connections. Our goal is to create a content delivery service that:
* Lets developers and businesses get started easily - there are no minimum fees and no commitments. You will only pay for what you actually use.
* Is simple and easy to use - a single, simple API call is all that is needed to get started delivering your content.
* Works seamlessly with Amazon S3 - this gives you durable storage for the original, definitive versions of your files while making the content delivery service easier to use.
* Has a global presence - we use a global network of edge locations on three continents to deliver your content from the most appropriate location.
Now if Amazon would just roll out more regional EC2 environments.
Tuesday, September 16, 2008
Here are some links
Monday, September 15, 2008
I'll post updates as I learn more.
More details here >
Sunday, September 14, 2008
In the era of cloud computing, uptime guarantees and service level agreements (SLAs) have started to become standard requirements for most cloud providers. Google, Amazon, and Microsoft have all started to implement some kind of SLA in an attempt to give their cloud users the confidence to utilize these systems in place of more common in-house alternatives. The common goal for most of these cloud platforms is to build for what I consider the myth of five nines. (Five nines meaning 99.999% availability, which translates to a total downtime of approximately five minutes and fifteen seconds per year.) The problem with five nines is that it's a meaningless goal which can be manipulated to mean whatever you need it to mean.
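To see why the number is so unforgiving, here is a quick back-of-envelope calculation of the downtime budget at each availability level:

```python
# Allowed downtime per year at a given availability level.
SECONDS_PER_YEAR = 365 * 24 * 3600

def downtime_seconds_per_year(availability: float) -> float:
    """Seconds of downtime permitted per year at the given availability."""
    return (1 - availability) * SECONDS_PER_YEAR

for label, avail in [("three nines", 0.999),
                     ("four nines", 0.9999),
                     ("five nines", 0.99999)]:
    minutes = downtime_seconds_per_year(avail) / 60
    print(f"{label} ({avail:.3%}): {minutes:.1f} minutes of downtime per year")

# five nines works out to roughly 5.3 minutes (five minutes and fifteen
# seconds) per year, which is less than a single reboot for many systems
```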
In the case of a physical failure such as Flexiscale's recent one, the hardware downtime might be small, but the time to restore from a backup might be considerably longer. A minor cloud failure could cause a cascading series of software failures, resulting in further application outages of hours or even days for those who depended on the availability of the given cloud. Meaning your cloud may achieve five nines, but your application hosted on it doesn't.
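A little arithmetic makes the point: the availabilities of serially dependent components multiply, so a five-nines cloud under a merely three-nines application stack gives nowhere near five nines end to end (the component figures below are purely illustrative):

```python
# Availability of a chain of dependent components is the product of the parts.
cloud = 0.99999    # the provider's five-nines infrastructure
app = 0.999        # your application software (illustrative)
database = 0.999   # your database tier (illustrative)

end_to_end = cloud * app * database
print(round(end_to_end, 5))  # 0.99799, roughly "two and a half nines"
```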
Lately it seems there are a number of people in the cloud computing community who are starting to discuss alternatives to the dreaded five nines concept, and looking at ways that cloud based infrastructures could be configured and deployed in a manner that is more proactive than reactive to disasters. There is a growing consensus that cloud based disaster recovery may very well be the "killer app" for cloud computing. To achieve this, we need to start creating reference architectures and models that assume failure. Ones that don't need to worry about when the next disaster will happen, just that it will happen, and that when it does, it's going to be business as usual.
In a recent conversation with Alan Gin, founder of a super secret stealth firm called Zeronines, Alan described an interesting philosophy. He said the problem with most disaster recovery plans is that the recovery is reactive; it is what happens after a disaster has already harmed your business. On its face, this is an unsound strategy. He went on to say that current disaster recovery architectures, which go by the synonym "failover," are based on the cutover archetype: a system's primary component fails, damaging operations; then failover to a secondary component is attempted to resume operations. The problem with current cutover approaches is that they view unplanned downtime as inevitable and acceptable, and so require that business halt.
I really liked this quote from an executive at EMC, a leading computer storage equipment firm: "current failover infrastructures are failures waiting to happen."
To be competitive in today's always connected, always available world, we need to reinvent the fundamental idea of disaster recovery. One of the major benefits of using cloud computing is that you can make these types of failover assumptions well before failures happen, using an emerging global toolset of cloud components. It's not a matter of if, but a matter of when. Once you take into consideration that application components will fail, you can build an application that features "failure as a service". One that is always available, one with Zero Nines.
Friday, September 12, 2008
The other day I had a very interesting conversation with Vijay Manwani, the CTO and founder of BladeLogic. Our conversation touched on a number of subjects, ranging from hiring a CEO to the importance of the customer CIO to finding a corporate champion. What was interesting is that he confirmed he has seen a similar adoption pattern for BladeLogic to what I've been seeing for our cloud computing platform. Although BladeLogic isn't a cloud platform, their data center automation platform offered to enterprises does have a lot of "cloud like" features. For this reason I think BladeLogic's history serves as a nice point of comparison for what we in the cloud computing space need to follow.
In our conversation Vijay described a 4 year cycle, starting mostly with prototype deployments. BladeLogic's early customers would for the most part "kick the tires", seeing if the solution worked for them. He said these sales cycles could last years. Eventually, thanks in part to internal champions, they started to see further and further adoption, leading to what he called a "tipping point" in year 3, when their sales exploded. He sees cloud computing in a similar way. So if 2008 is year 1, he said not to expect explosive growth in the enterprise space for another couple of years. He went on to say that he thought cloud computing was potentially a hugely disruptive technology trend.
Another comment Vijay made was that one of the best things to happen to BladeLogic was closing their first round of funding just before September 11, 2001. He said the downturn meant they could recruit some of the best people, with little or no competition from other startups. He also said that the rise of Opsware as a competitor gave their potential customers a basis of comparison. Potential customers were able to contrast not what each did the same, but what the others didn't do. In other words, competition is ultimately a good thing. The more companies pitching cloud computing, the better the overall industry will do. In closing, he said to find that one thing that you can do better than anyone else.
Brief Travel Schedule
Sept 16 - 19th - New York, Interop
Sept 23-25th - San Francisco
Sept 29-30th - Austin Tx
Wednesday, September 10, 2008
Can anyone confirm this rumor?
Long before the internet, there was a simpler time when the typical mode of communication was the telephone. These early communication devices were based on the idea of circuit switching, where a dedicated circuit is tied up for the duration of the call and communication is only possible with the single party on the other end of the circuit. In the 1970's a major shift occurred: the move from circuit switching to newer packet switching methods. With packet switching, a system could use one communication link to communicate with more than one machine by disassembling data into datagrams, which are then transmitted as individual packets. Not only could the link be shared (much as a single post box can be used to post letters to different destinations), but each packet could be routed independently of other packets. This was a revolutionary advancement and led directly to a US military project called ARPAnet.
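As a toy illustration of the idea, a message can be broken into sequence-numbered datagrams that travel independently, possibly arriving out of order, and still be reassembled at the far end:

```python
import random

def packetize(message: str, size: int = 4):
    """Split a message into (sequence number, payload) datagrams."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Rebuild the original message, regardless of arrival order."""
    return "".join(payload for _, payload in sorted(packets))

packets = packetize("hello world")
random.shuffle(packets)        # packets may be routed and arrive out of order
print(reassemble(packets))     # hello world
```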
ARPAnet, which stands for Advanced Research Projects Agency Network, is widely considered the first packet switching network, a direct predecessor of today's Internet, and what I consider the first compute cloud.
What's interesting about ARPAnet and today's move toward cloud computing is that they both shared similar ideals and goals.
- Both are designed to work unambiguously with a broad range of computer architectures
- Both are designed to be multi-tenant
- Both are designed to be global and distributed across geographically disperse environments.
- Both are application agnostic, meaning they could support a wide variety of applications from voice to data.
- Both are designed to withstand losses of large portions of the underlying networks. ARPANET was designed to survive network losses, but the main reason was actually that the switching nodes and network links were not highly reliable, similar to today's internet.
I'll keep you posted as I continue to write my cloud computing guide. If you're interested in contributing, please get in touch.
* 1. Abbate, Inventing the Internet, p. 8
* 2. Norberg & O'Neill, Transforming Computer Technology, p. 166
* 3. Hafner, Where Wizards Stay Up Late, pp. 69, 77
* 4. A History of the ARPANET, Chapter III, Section 2.3.4, p. 132
A few people have pointed out that ARPAnet wasn't global. I'll look into it.
Tuesday, September 9, 2008
I was recently asked to be the program chair for a cloud computing conference happening in my home town of Toronto, Ontario. In my role as program chair I will be helping assemble the predominant cloud computing event in Canada, and would like to personally invite you to get involved as a speaker, a sponsor, or both.
If you're interested in getting involved, please feel free to get in touch with me.
Below is the press release.
Cloud computing has reached the critical stage where adoption is being seriously considered by start-up companies, SMBs and enterprises. However, the decision to migrate into the cloud can be a costly and time-consuming task that carries risk. Knowledge is a key factor to success.
IT professionals and business executives will attend Cloud Computing an IT Paradigm Shift, an in-depth conference that combines a compelling knowledge-based seminar agenda with peer-to-peer discussion between users and industry experts.
Date: November 12, 2008
Metro Toronto Convention Centre, Toronto
Toronto’s first CloudCamp will co-locate and wrap up the day’s event.
Cloud Computing an IT Paradigm Shift is a highly visible event with an incredible marketing campaign. We invite you to enjoy the benefits of the opportunities to generate hard business that sponsorship provides.
To confirm one of only 12 speaking/sponsorship positions at the Cloud Computing Conference an IT Paradigm Shift and see your logo in 39,906 copies of the next issue of ComputerWorld Canada Magazine, contact us right away!
Sunday, September 7, 2008
Recently there has been renewed interest in cloud application marketplaces. This is a repost, originally published Sunday, September 7, 2008.
Cloud computing isn't a management model so much as a software delivery paradigm. What Apple has done with its App Store is show the world that the key to monetizing the cloud is in the delivery of the key applications and assets (music, video, ringtones) through a simple and accessible channel.
With a series of recent announcements from a variety of mobile providers, an exciting and potentially lucrative area in cloud computing appears to be emerging. Recently, Microsoft, Google, T-Mobile and others have all announced efforts to create what I am calling "cloud marketspaces" for the delivery of mobile software, using a model similar to that of Apple's iPhone App Store. If successful, these new cloud marketspaces may signify a disruption to the traditional delivery of software.
According to an article in BusinessWeek, the opportunity for mobile application marketplaces could be tremendous. Since the App Store's debut, users of Apple's iPhone and iPod Touch have downloaded more than 60 million applications, sampling the more than 3,000 games, calendars, and productivity applications priced at as much as $10 to $20 each. A good portion of these applications are available at no charge and are monetized via advertising (Twitterrific is a great example). Most of the applications available are provided via a burgeoning ecosystem of third party developers. What's more, iPhone app sales averaged $1 million a day in the first month, with Apple taking 30% of each sale. In a matter of weeks, the iPhone App Store has created a half billion dollar marketplace which by the end of the year could be much larger.
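The arithmetic behind those numbers is worth a quick sanity check:

```python
# Back-of-envelope check on the App Store figures quoted above.
daily_sales = 1_000_000           # USD of app sales per day in the first month

apple_share = daily_sales * 30 // 100
print(apple_share)                # 300000 USD per day to Apple

annualized = daily_sales * 365
print(annualized)                 # 365000000 USD per year at that run rate
```

At that run rate, a half-billion-dollar marketplace by year end looks plausible only if sales keep growing, which so far they have.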
The article goes on to say, "In the coming six months, at least four would-be rivals of Apple will probably open their own online bazaars where developers of all stripes will sell downloadable software applications to make cell phones more fun and useful." Just about every major phone manufacturer and mobile provider will be forced to have some kind of app store in the works in the near future.
A recent job posting suggests Microsoft sees future opportunities in the cloud application delivery space within its Windows Mobile division. They're tentatively calling this initiative "Skymarket". It was revealed in a job listing Microsoft posted earlier this month for a Senior Product Manager to oversee a marketplace service for Windows Mobile. The rumor is that the mobile applications marketplace may launch in tandem with the next version of Microsoft's cell-phone software, Windows Mobile 7, expected in 2009.
What I find more interesting is the potential for this type of cloud application delivery in the more traditional areas of technology, such as consumer electronics. One such example is Intel's new CE platform, which includes a Widget Channel, a software framework designed to give web developers, content providers and advertisers a quick and easy way to bring internet "cloud" services to TV devices. This platform will effectively allow consumers using Intel's embedded platform to download additional applications and content right on their TV or DVD player with one click.
Another potentially large segment may be the traditional desktop space. As we continue to transition away from desktop-centric delivery toward cloud-centric application models, the fight for the desktop will become the fight for the delivery of hybrid applications that make use of both local and remote resources. This could be a major reason why Google has entered the browser wars with Chrome, or Dell with their Mini 9 laptop. Both appear to understand that those who control the application experience own the customer experience.
At Enomaly we have also been busy working on a service provider centric Cloud App Center of our own. Through the App Center, a service provider can publish pre-built cloud applications directly to customers, either for free or at an additional cost. Customers can directly provision VMs on the cloud from a library of pre-existing virtual application images. Customers can also package their existing VMs and upload them to the cloud. What is becoming increasingly clear is that the delivery of functional cloud applications will be a key aspect of any cloud provider's toolset. In developing the Enomaly Service Provider Edition, we understood that a cloud provider can no longer be a "walled garden" driven by a single company. Instead we chose to foster an ecosystem of customers, partners and even competitors who all have a vested interest in the success of our platform and, more importantly, of all who utilize it. The App Store has become the new software channel, and the keys to tapping into this channel are the cloud enablers and providers.
Saturday, September 6, 2008
The key to cloud infrastructure is abstraction to the point that it "just doesn't matter": your infrastructure is always available and completely fault tolerant. Think more along the lines of the iPhone application delivery model (the App Store), and less like the desktop application models of the past. The companies that will succeed are the ones who embrace this new hybrid internet centric model. More simply, the cloud is the computer.
Friday, September 5, 2008
Shane posted this earlier.
I contacted customer support through their online live chat support. My expectation was that they would either point me to a page where I could go through a process of requesting a password reset, or that they would have to reset my password and the system would automatically send it to my email address.
The support rep asked for my name, email address, and billing address for the credit card on file. What happened next was a complete shock to me... in the chat window, there was my password in plain text. Not only did the rep have access to my password (which is completely unacceptable), but they actually gave it to me without any real assurance that I was who I said I was.
Michael Sheehan, Technology Evangelist for GoGrid, responded:
Thank you for pointing this out. I will be sure that our support team knows not to give out this type of information, or if it is given out, it is done in a secure manner.
Security is of utmost importance to us. If you have any other suggestion on how we can increase your comfort level (e.g., with password hints, temporary password resets, etc.) please let me know.
Do note that our entire GoGrid portal is run with SSL-encryption, INCLUDING the chat session, so while I agree with you that the password should not have been delivered in that manner, the chat session was encrypted with RC4 128-bit encryption.
I'm not a GoGrid customer nor have I used their service, so I can't confirm this report first hand. But regardless of whether or not the site is SSL encrypted, it appears that to gain access to someone's account all you need is some basic credentials, and they will freely give you access? Sounds scary to me.
Wednesday, September 3, 2008
I had an idea; why not take the concept of a containerized data centers and combine it with a hybrid-electric engine or bus. Diesel-powered engines in hybrid electric buses store energy in batteries which in a disaster could feasibly power a mobile data center for a significant amount of time, even after the fuel runs out. Hybrid engine technologies could serve as the basis of self powered Mobile Hybrid Data Centers for use in emergencies and other various types of situations where the location and access to power may be problematic.
The article also states that "BAE’s newest buses, expected to reach full rate production next year, produces 200 kilowatts when the engine speed is at 2300 rpms. BAE estimate a single hybrid city bus could provide power to 36 households for a full day or a 12,400 sq.-ft. hospital for 22 hours, on a single tank of diesel gas." I haven't done the math, but I can only assume this could easily keep a mobile data center up and running for quite a while.
The article goes on to say that trailblazing cities could become models for the delivery of power via mass transit to disaster-prone urban centers like New Orleans, which has only restocked on biodiesel buses since Katrina.
Anyone at the DoD or Department of Homeland Security, please feel free to get in touch.
In doing some research earlier, I came across the original definition of a cloudburst. In meteorology a cloudburst is an extreme form of rainfall, which normally lasts no longer than a few minutes but is capable of creating flood conditions. Similarly, in IT a sudden and unexpected rise in demand can quickly overwhelm a data center. In coming up with the term cloudbursting, Jeff has given a simple name to a rather complex problem. At Enomaly this is a problem we've been debating for a while: how do you effectively enable a kind of cloud overflow in a secure yet efficient manner?
Provisioning instances in Amazon EC2, for example, is relatively easy; moving live workloads across a wide area is not. For most modern dynamic applications, the idea of a "hot cloud standby", a prebuilt virtual machine basically waiting in the wings, would solve a lot of problems. But in reality there are a number of complexities that need to be overcome, ranging from network optimization to secure data transfer and replication to load balancing across geographically diverse hosting environments, just to name a few.
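At its core, the overflow decision itself is simple; a minimal sketch might look like this (the threshold, names, and targets are hypothetical placeholders, not any real API):

```python
def route_request(local_utilization: float, burst_threshold: float = 0.8) -> str:
    """Send new workloads to a hot cloud standby once local capacity runs short."""
    if local_utilization >= burst_threshold:
        # e.g. activate the pre-built standby VM "waiting in the wings"
        return "cloud-standby"
    return "local-datacenter"

print(route_request(0.95))  # cloud-standby
print(route_request(0.40))  # local-datacenter
```

All of the hard parts listed above (replication, wide-area load balancing, security) hide behind that one return statement, which is exactly why a common consensus on the mechanics is needed.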
To truly enable a capable cloudbursting infrastructure, I feel there needs to be a common consensus on how this may be achieved and by what means. So the question in the short term is: what are some of the practical approaches, technologies and architectures needed to make this kind of hybrid cloud infrastructure feasible?
Monday, September 1, 2008
Slashdot, as well as a few other blogs, are saying the new browser will be based on WebKit, the same basic engine as Mac OS X's Safari. It's also interesting that last week Google renewed its agreement as Firefox's default search engine, just in time for the G-Browser announcement. According to an article last week on NetworkWorld, "Mozilla generates the bulk of its income from ties to Google, according to the company's latest financial figures. For the 2006 tax year -- the most recent numbers made public by Mozilla -- 85%, or about $57 million of the company's $67 million in annual revenues for the year, came from Google."
At first glance WebKit would seem an odd choice as the basis of a G-Browser. Upon closer look, WebKit is an open source project with portions licensed under the LGPL and BSD licenses, both of which are much more Google friendly than the Mozilla license.
More details on webkit > http://webkit.org/
Google Mozilla deal > http://www.networkworld.com/news/2008/082908-mozilla-renews-google-cash-cow.html
Over the last few weeks we have seen a noticeable spike in desktop centric cloud computing inquiries. What's more, this spike has come from a particular market segment: ISPs and telecoms, who all seem to have had a sudden and dramatic increase in interest in this area.
For those of you who don't know, CDI (Cloud Desktop Infrastructure), or what I like to call desktops in the cloud, is an internet-centric computing approach to desktop management and deployment that combines traditional thin clients, utility hosting, and cloud storage. It is designed to give system administrators and end-users the best of both worlds: the ability to host remotely managed desktop virtual machines in a data center while giving end users a portable PC desktop experience.
There are a few companies directly focusing on this space, notably a new startup called Desktone. Desktone describes their service as the ability for virtual desktops to be outsourced and provided via a subscription service. Think Amazon EC2 for your desktop. For the most part Desktone has been vague about how they actually enable their service, or how they overcome things like license management or network latency over a wide area. Regardless, they seem to have a clear vision for this new area of cloud computing. One of their biggest deals so far has been with Verizon, who appear to be rolling out some kind of hosted desktop offering via Desktone.
To give some background, the VDI (Virtual Desktop Infrastructure) space is a fairly well established segment with numerous players including VMware, Microsoft, Citrix, Quest, Ericom, and Sun. Most if not all have focused on more traditional, centralized virtual desktop deployment architectures. It would seem that, with a couple of notable exceptions, the VDI space is ripe for disruption.
Recently Microsoft has jumped into the fray with a new set of APIs called "Terminal Services Session Broker", which they describe as:
…a set of APIs that ISVs can use to create connection brokers for other kinds of devices. Basically, these APIs allow you to lobotomize the TS Session Broker and replace its brain—its brokering mechanism—with a new plug-in. This plug-in can contain a new set of rules that support redirection to other types of destinations. It can also provide different means of deciding the best target for new connections, such as load balancing rules based on server resources or login time…
Combined with new, cloud-friendly(er) licensing, the new API would seem to indicate that Microsoft is not only taking direct aim at the established VDI solutions but also taking steps toward a cloud centric desktop future.
Will we soon see our desktops hosted by an ISP like Verizon or AT&T or is this just more cloud hype? I know one thing is for sure, I certainly wouldn't want to use a cloud desktop service if I'm forced to use "metered" bandwidth such as what Comcast is attempting to do.
So what do you think?