Sunday, November 29, 2009
Specifically: "Just as a number of local or regional companies provide both electricity and gas, independent telephone companies would be encouraged to provide both telephone and information utility services in their respective territories."
The original copy of this intriguing document resides in the Smithsonian National Museum of American History, Lemelson Center for the Study of Invention & Innovation, in the Western Union Telegraph Company Records archival collection covering the years 1820-1995.
Here is the complete text.
1965: Western Union's Future Role as the Nation's First Cloud Utility
Thursday, November 26, 2009
First, let's look at scaling out, or horizontal scaling, which basically means adding more nodes to a distributed system, such as new servers or storage (which is easier). These could be physical or virtual servers. An example might be scaling out from one web server to many dedicated slave machines. Google has made an art form of scaling out; they have data centers around the globe geared toward this one core task - just-in-time hardware provisioning - but for most this is a very difficult and costly endeavour. Virtualization makes this sort of instant replication & provisioning of many virtual machines much easier.
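The scale-out idea can be sketched in a few lines. This is a minimal, illustrative model - a round-robin pool where capacity grows simply by appending another node - and the node names are made up for the example:

```python
# Minimal sketch of scaling out: a pool of interchangeable web nodes,
# where capacity grows by appending one more node. Node names are
# hypothetical.
class WebTier:
    def __init__(self, nodes):
        self.nodes = list(nodes)
        self._next = 0  # round-robin cursor

    def scale_out(self, node):
        # Horizontal scaling: one more node joins the pool.
        self.nodes.append(node)

    def route(self, request):
        # Spread requests evenly across whatever nodes exist right now.
        node = self.nodes[self._next % len(self.nodes)]
        self._next += 1
        return node

tier = WebTier(["web-1"])
tier.scale_out("web-2")          # provision another (virtual) server
assignments = [tier.route(f"req-{i}") for i in range(4)]
```

The point of the sketch is that adding capacity is a one-line operation on the pool; whether the appended node is a physical box or a freshly cloned virtual machine is invisible to the routing logic.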
Next is scaling up, or the ability to scale vertically, which means adding resources to a single server in a distributed system. Typically this involves the addition of CPUs or memory to a single virtual server in the form of virtual CPUs and RAM. Unlike a physical server, in a virtual environment you can change your virtual hardware characteristics; a physical server is what it is. It runs at its maximum potential, limiting its ability to easily scale up. If you need more scale you need more hardware, or you have to manually add more components to the physical server (RAM, CPU, storage, etc.), which means downtime while the server is upgraded. In a virtual environment this isn't a limitation, and scaling up can often be done on the fly.
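The contrast can be sketched as a toy model - not any particular hypervisor's API - in which resizing a physical box means downtime while a virtual one resizes on the fly:

```python
# Toy contrast between physical and virtual vertical scaling; an
# illustrative model, not a real hypervisor API.
class PhysicalServer:
    def __init__(self, cpus, ram_gb):
        self.cpus, self.ram_gb = cpus, ram_gb

    def resize(self, cpus, ram_gb):
        # Physical hardware is what it is: adding components means downtime.
        raise RuntimeError("downtime required: power off and add hardware")

class VirtualServer(PhysicalServer):
    def resize(self, cpus, ram_gb):
        # In a virtual environment the "hardware" is just configuration,
        # so it can often be changed on the fly.
        self.cpus, self.ram_gb = cpus, ram_gb

vm = VirtualServer(cpus=2, ram_gb=4)
vm.resize(cpus=4, ram_gb=8)      # scale up without downtime
```
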
Vertical scaling of existing systems also lets you better leverage virtualization technology, because it provides more resources for the hosted operating system and applications, which can share those resources in a multi-tenant environment. Virtualization also allows for more automated, programmatic control of system resources in correlation to the demands placed on the infrastructure or application being hosted. This is because in a virtual infrastructure you are not managing actual physical components but virtual representations of them.
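That programmatic control is essentially a feedback loop. A hypothetical sketch - the thresholds and the dictionary representation of a VM are assumptions for illustration, not a real management API:

```python
# Hypothetical autoscaling feedback loop: grow or shrink a VM's virtual
# CPUs in correlation to measured demand. Thresholds are illustrative.
def autoscale(vm, cpu_utilization, high=0.80, low=0.20):
    if cpu_utilization > high:
        vm["vcpus"] += 1                      # demand is high: scale up
    elif cpu_utilization < low and vm["vcpus"] > 1:
        vm["vcpus"] -= 1                      # demand is low: scale down
    return vm

vm = {"vcpus": 2}
autoscale(vm, 0.95)   # busy period: one more vCPU
autoscale(vm, 0.05)   # idle period: release it again
```

A loop like this is only practical because the "components" being added and removed are virtual; no one has to open a case and seat a DIMM.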
So it is very true that virtualization isn't a requirement of a cloud infrastructure; it just makes it a heck of a lot easier to manage, and to scale out, up, or both.
Wednesday, November 25, 2009
Hotel Seoul Kyoyuk Munhwa HoeKwan
202, Yangjae-Dong, Seocho-Gu, Seoul
13:55 Introduction & Networking
14:10 Lightning Talks (long form) - chaired by Chan-Hyun Yoon, KAIST (30 min each)
15:10 Coffee Break
16:30 unPanel / Breakout Discussion - chaired by Yang-Woo Kim (Dongguk University)
17:30 Wrap Up, Dinner & Networking
- KCSA (Korea Cloud Service Association)
- KISTI (http://www.kisti.re.kr/english/index.jsp)
Monday, November 23, 2009
Let me point out a few of the more interesting moments of my trip to the Land of the Rising Sun. As I mentioned in my previous post about the opportunities for Cloud Computing in Asia, if my schedule is any indication of the demand for cloud products, there is a tremendous amount. Every minute of my trip was accounted for with non-stop meetings. I will also point out that the Japanese know how to entertain. As you can probably tell, I do a lot of traveling and am quite frequently taken to fancy restaurants, but nothing comes close to the fine restaurants of Tokyo. Duck sashimi, anyone?
As for CloudCamp Tokyo, it was well attended, with more than 160 in attendance. One of the more interesting aspects of the Camp was how the Japanese interact in an unconference setting. To put it simply, they don't. Getting them to speak publicly was a challenge. A few asked questions, but generally it was a one-way conversation: I spoke, my translator spoke. The lightning presentations were also very well received. After the main unconference is when things got interesting. We had an open bar, which probably helped loosen things up a bit. In orderly, single-file fashion, almost every one of the 160 or so attendees proceeded to introduce themselves to me, handing me their business cards with both hands, followed by a bow and a "Hajimemashite" (a polite "Hello, I am pleased to make your acquaintance" which you use only the very first time you meet).
I also found it interesting that language and cultural differences are major barriers. Unlike Europe, where most business people speak English, this is not the case in Japan. Most don't. To get around this we worked with a large Japanese system integrator, which provided us with two very nice Japanese translators (Eno-san and Maki-san - pictured). The firm also provided us with introductions to most of the major Japanese cloud customers, including the top hosting providers, data centers, telecoms, etc. Without the help of the SI we would have had a much more difficult time; a good portion of our meetings involved our translators doing the majority of the talking. So my suggestion to any company looking to sell cloud products and services in the Asian market is to find yourself a local partner who can act as a guide to the local business scene.
All in all, a successful week in Japan. Next week I'll be in Tel Aviv at The World Summit of Cloud Computing. Should be interesting.
(P.S) Wear a suit and tie.
ENISA, supported by a group of subject matter experts comprising representatives from industry, academia and governmental organizations, has conducted, in the context of the Emerging and Future Risk Framework project, a risk assessment of the cloud computing business model and technologies. The result is an in-depth and independent analysis that outlines some of the information security benefits and key security risks of cloud computing. The report also provides a set of practical recommendations.
A few highlights of the report include:
- The Cloud’s economies of scale and flexibility are both a friend and a foe from a security point of view. The massive concentrations of resources and data present a more attractive target to attackers, but cloud-based defences can be more robust, scalable and cost-effective. This paper allows an informed assessment of the security risks and benefits of using cloud computing - providing security guidance for potential and existing users of cloud computing.
- Scale: commoditisation and the drive towards economic efficiency have led to massive concentrations of the hardware resources required to provide services. This encourages economies of scale - for all the kinds of resources required to provide computing services.
- Architecture: optimal resource use demands computing resources that are abstracted from underlying hardware. Unrelated customers who share hardware and software resources rely on logical isolation mechanisms to protect their data. Computing, content storage and processing are massively distributed. Global markets for commodities demand edge distribution networks where content is delivered and received as close to customers as possible. This tendency towards global distribution and redundancy means resources are usually managed in bulk, both physically and logically.
STANDARDISED INTERFACES FOR MANAGED SECURITY SERVICES: large cloud providers can offer a standardised, open interface to managed security services providers. This creates a more open and readily available market for security services.
LOCK-IN: there is currently little on offer in the way of tools, procedures or standard data formats or services interfaces that could guarantee data, application and service portability. This can make it difficult for the customer to migrate from one provider to another or migrate data and services back to an in-house IT environment. This introduces a dependency on a particular CP for service provision, especially if data portability, as the most fundamental aspect, is not enabled.
ISOLATION FAILURE: multi-tenancy and shared resources are defining characteristics of cloud computing. This risk category covers the failure of mechanisms separating storage, memory, routing and even reputation between different tenants (e.g., so-called guest-hopping attacks). However, it should be considered that attacks on resource isolation mechanisms (e.g., against hypervisors) are still less numerous and much more difficult for an attacker to put into practice compared to attacks on traditional OSs.
MANAGEMENT INTERFACE COMPROMISE: customer management interfaces of a public cloud provider are accessible through the Internet and mediate access to larger sets of resources (than traditional hosting providers) and therefore pose an increased risk, especially when combined with remote access and web browser vulnerabilities.
Read the Complete Report Here >
Monday, November 16, 2009
Today, at the Microsoft Professional Developer Conference (PDC) in Los Angeles, Microsoft announced not only the release of version 4.0 of the .NET Micro Framework, but also that they are open sourcing the product and making it available under the Apache 2.0 license, which is already widely used by the community within the embedded space.
The .NET Micro Framework, a development and execution environment for resource-constrained devices, was initially developed inside the Microsoft Startup Business Accelerator, but recently moved to the Developer Division so as to be more closely aligned with the overall direction of Microsoft development efforts.
Thursday, November 12, 2009
One of the more interesting side effects of creating the CloudCamp series of events around the globe has been its value as a market research vehicle. As interest in Cloud Computing increases in various geographic regions, so does the interest from folks on the ground who want to help organize local CloudCamp events. This network of local organizers has become an invaluable source of insight into new markets. These events have also done a tremendous job of forecasting potential high-growth markets and, more importantly, the opportunities for Cloud Computing within various emerging markets. And lately it seems that by far the largest opportunities are coming from one particular region of the world.
To give you some background, we have an upcoming CloudCamp next week in Tokyo (November 17th), organized by NTT among others, as well as one next month in Seoul, South Korea (Dec 16th), organized by the Korea Institute of Science and Technology Information and the newly formed Korea Cloud Service Association. The Japanese, South Korean and Chinese markets have been particularly strong for CloudCamp. Based on this interest, we will also be doing a series of CloudCamps in China (Shanghai, Beijing and Hong Kong), which will most likely take place in early 2010. (If you're interested in sponsoring one of these events, please get in touch.)
As a more personal example, I will be in Tokyo next week for the CloudCamp Tokyo event on Tuesday as well as a number of business meetings. Purely from a demand point of view: from the moment I get off the plane on Monday until I leave on Sunday, I have non-stop meetings from 9am through dinners late into the evening, every night of the week, with various Japanese firms looking to capitalize on the booming Cloud Computing sector. We've seen so much interest from Japan that we've had to start turning down meeting opportunities. To say the least, the interest in "kumo" (Japanese for cloud) is astounding.
We've seen similar levels of interest in China, where there seems to be a technological renaissance occurring. China is a very unique place when it comes to Cloud Computing. First of all, they don't have the legacy infrastructure that most Western economies suffer from. It's in a sense a greenfield opportunity, where the Chinese can choose the latest & best technology solutions without regard for how they may affect legacy systems -- since there really aren't any.
For instance, look at the massive adoption of mobile phones over the last several years: the traditional landline was almost completely bypassed for newer and more efficient mobile options. Computing is seeing a similar bypass, with projects such as national wifi networks being built in conjunction with a massive multi-billion dollar national railway system. The Chinese seem to have realized that a national infrastructure is more than just a physical one; it is also virtual.
I'm not alone in drawing this conclusion about the Asian market. In a recent report, Gartner said infrastructure software will account for 64.4 percent of overall enterprise software spending in the Asia-Pacific region next year, with APAC enterprise software spending to grow 10.2% in 2010 - the fastest growth of any of the various global software markets.
In the same vein, Amazon Web Services has just announced an expansion into the Asian region in the first half of 2010, saying: "AWS customers will be able to access AWS’s infrastructure services from multiple Availability Zones in Singapore in the first half of 2010, then in other Availability Zones within Asia over the second half of 2010. AWS services available at the launch of the Asia-Pacific region will include Amazon EC2, Amazon S3, Amazon SimpleDB, Amazon Relational Database Service, Amazon Simple Queue Service, Amazon Elastic MapReduce, and Amazon CloudFront."
“Developers and businesses located in Asia, as well as those with a multi-national presence, have been eager for Asia-based infrastructure to minimize latency and optimize performance,” said Adam Selipsky, Vice President of Amazon Web Services. “We’re very excited to announce the expansion of AWS infrastructure into Asia to help our customers plan their technology investments and better serve their end-users in Asia.”
Tom Lounibos, CEO of SOASTA, had an interesting comment on the opportunity in a Twitter post earlier, saying: "AWS announces Singapore site 7 hours ago, and I wake to three SOASTA customer requesting Cloud Testing from Singapore! "Demand" wins!"
Although I am just one man from just one company, I believe that in some small way both Enomaly and CloudCamp represent the tip of the iceberg when it comes to the opportunity to offer Cloud Computing related products and services to the Asian market - and from where I sit, there is no bigger opportunity than in Asia.
Monday, November 9, 2009
ECP is a carrier-class architecture & cloud hosting platform that supports the deployment of very large public cloud infrastructure for service providers. The platform has been designed to span multiple federated data centers in disparate geographies around the globe, handling hundreds of thousands of VMs and multi-tenant customers.
This version of ECP Service Provider Edition brings the following enhancements over 3.0.2:
- KVM is now directly supported as a hypervisor at install time.
- Sample data is installed during initial installation, so there is no need to create a customer/group/permissions before testing the system. See INSTALL for default user/pass.
- VNC window in customer UI is now identical to the Admin UI. Passwords for the VNC console are now found under Info button at VM level.
- Info window now shows how to connect with an external VNC client as well as the existing Java applet.
- VNC window can be disabled entirely on a per VM basis.
- App Center can now be searched/filtered. This is useful if you offer a large number of appliances.
- Admin Dashboard now shows graphical whole-cluster resource usage.
- Network Manager has been removed. All deployments are recommended to use DHCP for IP assignment going forward.
- Various performance improvements have been added at customer UI level.
- Various performance improvements have been added to infrastructure code.
Enomaly's Cloud Service Provider Edition extends our core ECP platform, already used by thousands of organizations around the world, with the key capabilities needed by xSPs, carriers, and web hosting providers who want to offer an infrastructure-on-demand or IaaS service to their customers. Enomaly ECP Service Provider Edition provides a powerful but simple customer self-service interface, customer-facing REST API, theme engine, strong multi-tenant security, a hard quota system, and flexible integration with your billing, provisioning, and monitoring systems.
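To give a feel for what a customer-facing REST API enables, here is a hypothetical client sketch. The endpoint path, request fields, and auth header below are illustrative assumptions for the example, not Enomaly's actual documented API:

```python
# Hypothetical sketch of a customer self-service client talking to an
# IaaS provider's REST API. Paths, fields, and the auth header are
# illustrative assumptions, not a real documented interface.
class IaaSClient:
    def __init__(self, base_url, api_key):
        self.base_url = base_url.rstrip("/")
        self.api_key = api_key

    def _request(self, method, path, body=None):
        # A real client would perform the HTTP call here; this sketch
        # just builds the request description.
        return {
            "method": method,
            "url": f"{self.base_url}/{path.lstrip('/')}",
            "headers": {"X-Api-Key": self.api_key},
            "body": body,
        }

    def provision_vm(self, name, vcpus, ram_mb):
        # Customer self-service: provision a VM within one's quota.
        return self._request("POST", "/vms",
                             {"name": name, "vcpus": vcpus, "ram_mb": ram_mb})

client = IaaSClient("https://cloud.example.com/api", "secret-key")
req = client.provision_vm("web-1", vcpus=2, ram_mb=2048)
```

The design point is that billing, quota enforcement, and provisioning all hang off one authenticated HTTP interface, which is what lets a provider plug the platform into its existing back-office systems.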
Screen Shots (click to enlarge)
Sunday, November 8, 2009
As someone who spends his days eating, breathing and sometimes drinking cloud computing, it's fun to see how the conversation has recently devolved into a debate purely focused on the finer semantic nuances of the various terminologies. The debate seems to generally focus on the varied usages within the companies that are attempting to "cloud-ify" themselves & their products/services. This cloudification seems to be the trend du jour within the technology industry: an attempt to augment marketing materials and/or product positioning to include cloud-related buzzwords, whether they make sense or not.
Actually, one of the better-stated criticisms comes from Oracle CEO Larry Ellison, who observes that cloud computing has been defined as "everything". It's everything and nothing in particular, a trendy word that is used more to impress than to explain a particular problem. I for one completely agree.
As a marketing term, cloud has enabled us to broadly define the movement away from the desktop / server centric past to the cloud [Internet] enabled future. Wikipedia's cloud definition says it well, "it is a paradigm shift where technological details are abstracted from the users who no longer need knowledge of, expertise in, or control over the technology infrastructure "in the cloud" that supports them". Yup, enough said.
This message is to those of you who are jumping on the cloud bandwagon; let me say this as plainly as possible. Regardless of whether it's "the cloud" or "cloud computing", it all comes back to the fact that it's a buzzword. A way to say we're cool, we're now, we're new, without saying it directly (a neologism). It's the New Coke of computing / the new taste of the Internet.
So what is The Cloud? It's the Internet. And what is Cloud Computing? It's the next big thing in computing, it's using the Internet.
Friday, November 6, 2009
More specifically, it was created with an open collaboration model in mind, where both large companies and individuals can collaborate equally without fear of legal ramifications. Using the OWFa, the actual spec development can be done in any forum the participants choose (unincorporated Google groups / social networks, non-profits, startups, enterprises, etc.).
I'll also be the first to point out that one of the key authors is David Rudin, a Microsoft standards attorney. But regardless of Rudin's employer, this is a well thought out document, and I for one am very excited by the potential usage of the OWFa within a variety of standards processes. I believe that the OWFa has the potential to dramatically affect the way we as an industry both collaborate and innovate when it comes to the development of common, truly open standards, whitepapers and best practices. I encourage anyone who truly believes in the creation of an Open Web to take a look at the OWFa.
You can download a copy of the final draft here.
To start things off, below is the first in what I hope will be many of these. I'm calling this new feature Transient Ambiance: the mood evoked by my ever-changing environment.
When I recorded this entry using my iPhone Voice Memo app, I found myself in the London Underground. As I waited in the tube station for my return to Heathrow airport, I was in the midst of one of those strange, surreal moments: only a violinist and myself in the middle of a typically busy London Underground station. A momentary period of solitude in an otherwise hectic week of meetings and presentations. As I sat pondering life's mysteries, soft melodic music echoed off the dark, damp underground walls.
Download here (MP3, 2.95mb, timing:3.13)
The scope will include Standardization for interoperable Distributed Application Platform and services including Web Services, Service Oriented Architecture (SOA), and Cloud Computing. SC 38 will pursue active liaison and collaboration with all appropriate bodies (including other JTC 1 subgroups and external organizations, e.g., consortia) to ensure the development and deployment of interoperable distributed application platform and services standards in relevant areas.
Similar to other ISO initiatives, each member country that's interested in participating in this group will come up with its own structure to provide feedback on work items and establish voting positions, including the InterNational Committee for Information Technology Standards (INCITS), which will be the US TAG.
Administrative support and leadership of SC 38 will be provided as follows:
The US National Body will serve as Secretariat for the SC and its Working Groups, and Dr. Donald R. Deutsch from the US National Body will serve as the Chair for the SC. The National Body of China will provide Ms. Yuan Yuan as the Convenor of the Working Group on SOA. The US National Body will provide the Convenor of the Working Group on Web Services. The National Body of Korea will provide Dr. Seungyun LEE as the Convenor of the Study Group on Cloud Computing. The National Body of China will provide Mr. Ping ZHOU as the Secretary of the Study Group on Cloud Computing.
I’ve pasted the complete resolution in detail below.
Resolution 36 ‐ New JTC 1 Subcommittee 38 on Distributed Application Platforms and Services (DAPS)
JTC 1 establishes a new JTC 1 Subcommittee 38 on Distributed Application Platforms and Services
(DAPS) with the following terms of reference:
Title: Distributed Application Platforms and Services (DAPS)
Scope: Standardization for interoperable Distributed Application Platform and services including:
• Web Services,
• Service Oriented Architecture (SOA), and
• Cloud Computing.
SC 38 will pursue active liaison and collaboration with all appropriate bodies (including other JTC 1 subgroups and external organizations, e.g., consortia) to ensure the development and deployment of interoperable distributed application platform and services standards in relevant areas.
As per the JTC 1 Directives, SC 38 will establish its own substructure at its first meeting. Based on discussions at the JTC 1 Plenary, it is anticipated that SC 38 will initially establish subgroups as follows:
a. A Working Group on Web Services
o Draft Terms of Reference:
i. Enhancements and maintenance of the Web Services registry (inventory database of Web Services and SOA Standards).
ii. Ongoing maintenance of previously approved standards from WS‐I PAS submissions, ISO/IEC 29361, ISO/IEC 29362 and ISO/IEC 29363.
iii. Maintenance of possible future PAS and Fast Track developed ISO/IEC standards in the area of Web Services.
iv. Investigation of where web service related standardization is already ongoing in JTC 1 entities.
v. Investigate gaps and commonalities in work in “iv” above.
b. A Working Group on SOA
o Draft Terms of Reference:
i. Enumeration of SOA principles.
ii. Coordination of SOA related activities in JTC 1.
iii. Investigation of where SOA related standardization is already ongoing in JTC 1 entities, and
iv. Investigate gaps and commonalities in work in “iii” above
c. A Study Group on Cloud Computing (SGCC) to investigate market requirements for standardization, initiate dialogues with relevant SDOs and consortia and to identify possible work items for JTC 1.
o Draft Terms of Reference:
i. Provide a taxonomy, terminology and value proposition for Cloud Computing.
ii. Assess the current state of standardization in Cloud Computing within JTC 1 and in other SDOs and consortia beginning with document JTC 1 N 9687.
iii. Document standardization market/business/user requirements and the challenges to be addressed.
iv. Liaise and collaborate with relevant SDOs and consortia related to Cloud Computing.
v. Hold workshops to gather requirements as needed.
vi. Provide a report of activities and recommendations to SC 38.
Topics related to Energy Efficiency of Data Centers are excluded. On topics of common interest (such as virtualization), coordination with the EEDC SG is required.
Membership in the Study Group will be open to:
1. National Bodies, Liaisons, and JTC 1 approved PAS submitters
2. JTC 1 SCs and relevant ISO and IEC TCs
3. Members of ISO and IEC central offices, and
4. Invited SDOs and consortia that are engaged in standardization in Cloud Computing, as approved by the SG
In addition, the Convenor may invite experts with specific expertise in the field.
Meetings of the group may be via face‐to‐face or preferably by electronic means.
The SC 38 Secretariat will issue a call for participants for the Study Group.
The SGCC Convenor is instructed to provide a report on the activities of the
Study Group at the SC 38 2010 Plenary meeting.
Administrative support and leadership of SC 38 will be provided as follows:
a. The US National Body will serve as Secretariat for the SC and its Working
Groups, and Dr. Donald R. Deutsch from the US National Body will serve as the Chair for the SC.
b. The National Body of China will provide Ms. Yuan Yuan as the Convenor of the Working Group on SOA.
c. The US National Body will provide the Convenor of the Working Group on Web Services.
d. The National Body of Korea will provide Dr. Seungyun LEE as the Convenor of the Study Group on Cloud Computing.
e. The National Body of China will provide Mr. Ping ZHOU as the Secretary of the Study Group on Cloud Computing.