Monday, June 29, 2009

Digital Illustration: The Art of War, Vietnam Sunrise

I'm currently in Halifax, Nova Scotia, attending my sister Yael's wedding, so I haven't had a chance to post anything new to the blog. I thought I'd take this opportunity to show the more creative side of myself.

Some of you may not know that before I became a cloud computing pundit, or whatever it is you wish to call me, I used to spend my days as a graphic artist. Over the next few weeks I will be posting some of my various works, along with the stories of how the images came to be and who they were created for.

The Hendrix Back Story

Ever since I was a kid I've had a keen fascination with Jimi Hendrix. As a teenager in the nineties, I was one of those guys who went out and bought an old Fender Stratocaster and the largest guitar amp I could find. I would then crank it up to 11 and try my best at renditions of Voodoo Chile or Foxy Lady by my favorite guitarist, Hendrix.

In 2003 I was lucky enough to be contacted by the Hendrix Foundation to do some Jimi Hendrix illustrations for a book that would feature him. Although the book was never published, the images still remain. Below are a few pieces from that collection of illustrations.

Vietnam Sunrise

The first illustration, named "The Art of War, Vietnam Sunrise", was created in 2003 and represents the parallels between the coming Iraq War and the Vietnam War. It is meant to show the pain as well as the anticipation of going to war.

Please click the images to enlarge.

The second image is an abstract portrait of the late Jimi Hendrix. Both images were created digitally using Photoshop and a Wacom tablet.

Friday, June 26, 2009

Cloud Computing as a Euphemism

I had another random thought. It might be crazy, but I thought I'd share it: I think I may have been completely wrong in describing cloud computing as a metaphor or analogy for the Internet, or more simply as Internet-centric infrastructure.

Let me explain. The Internet is a physical network implementation: it can be thought of as a network of networks linked by copper wires, fiber-optic cables, wireless connections, and other physical hardware. In contrast, the World Wide Web is a system of systems, a series of independent but interrelated elements comprising a unified whole -- the application.

To put it another way, the problem with describing cloud computing as a metaphor for the Internet is that it's actually a "euphemism" for the World Wide Web.

According to Wikipedia, euphemisms may be formed in a number of ways. A periphrasis or circumlocution is one of the most common -- to "speak around" a given word or concept, implying it without saying it -- which, it seems, is what we are actually doing when referring to "cloud computing". Over time, circumlocutions become recognized as established euphemisms for particular words or ideas. Basically, cloud computing is another way to describe the web, or more broadly the next generation of the World Wide Web.

With this in mind, I think the challenge in trying to define a simple common definition for cloud computing is that we've been describing the wrong concept. We're not describing the physical infrastructure of the Internet so much as the application of that infrastructure.

Defining Infinite

Just a random thought: recently I've mentioned the word "infinite" in a few of my posts, so I thought I'd take a moment to briefly describe my view of the term as it relates to cloud computing.

Infinite Capacity in Cloud Computing
Amazon EC2 is certainly finite, but for the user who needs quick access to 1,000 AMIs it appears to be infinite. Or to put it another way, there is more capacity than any one user will ever need.
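For illustration, here's a minimal sketch of what requesting a burst of capacity looks like from the user's side, using the boto Python library (the credentials, AMI ID and counts are placeholders, not a real deployment) -- the request looks the same whether you ask for one instance or a thousand:

```python
# Minimal sketch: requesting a burst of EC2 capacity with boto.
# The AMI ID, credentials and counts below are placeholders.
import boto

conn = boto.connect_ec2(
    aws_access_key_id='YOUR_KEY',
    aws_secret_access_key='YOUR_SECRET')

# Ask for up to 1,000 small instances of a prebuilt image; EC2 grants
# as many as its (finite, but practically unlimited) pool allows.
reservation = conn.run_instances(
    'ami-12345678',
    min_count=1,
    max_count=1000,
    instance_type='m1.small')

print('%d instances launched' % len(reservation.instances))
```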

Thursday, June 25, 2009

The New Global Cyber Cold War

There have been some rather dramatic moves in the world of cyber warfare over the last couple of weeks, which have brought the need for standardized interoperability and cooperation within multinational cyber defense systems to the forefront.

Last month President Barack Obama said that protecting US computer networks from attack would become a national security priority and that his administration would take active steps to protect critical pieces of the nation's network infrastructure. Continuing on that promise, yesterday US Defense Secretary Robert M. Gates issued an order creating US CYBERCOM, a military command that will defend US military networks against cyber warfare. The very public disclosure places the United States within a broader group of countries pitted against one another in what can only be described as a "Global Cyber Cold War".

In his memo, Gates recommended that Lt. General Keith B. Alexander, the director of the National Security Agency (NSA), be promoted and run both the NSA and US CYBERCOM. The new command will be a division of the U.S. Strategic Command, which is responsible for operations pertaining to nuclear and cyber warfare. Gates also directed that the command be fully operational by October 2010.

The memo went on to say that "more than 100 foreign intelligence organizations are trying to hack into the U.S. government's 15,000 networks, which connect 7 million computers, according to Deputy Defense Sec. William Lynn."

Alan Paller, the well-known director of the SANS Institute, had some great insight into yesterday's announcement: "Melding both defensive and offensive missions under the same command will allow for better threat preparedness. A unified command also increases the potential for interoperability and both process sharing and real time information sharing among the services."

The United States isn't alone in creating a cyber warfare division. Earlier this week Britain also announced the creation of a new "Cyber Security Operations Centre", which is said to bring together the expertise of MI5, the Government Communications Headquarters (GCHQ) listening post in Cheltenham and the Metropolitan Police force.

Britain's Security Minister, Lord West, said "It would be silly to say that we don't have any capability to do offensive work from Cheltenham." He went on to say that they had not employed any "ultra, ultra criminals" but that they needed the expertise of former "naughty boys" -- which I think is British lingo for experienced network hackers and crackers.

Recently there has been growing concern among cyber security experts that we are now in the midst of (as I mentioned previously) a "cyber cold war". Most if not all G8 countries now have some kind of cyber command, amid escalating fears that hackers could gain the technology to shut down the computer systems that control the various G8 governments' critical infrastructure, such as power stations, water companies, air traffic, government and financial markets.

There have even been indications that Al-Qaeda has been actively engaged in the development of a so-called "Jehadi Botnet", a.k.a. the "Jihadinet", ironically through the recruitment of zombie PCs in the United States. This raises the question: how do you fight an enemy that can actually be part of your very own network infrastructure?

With all this talk of offensive as well as defensive network tactics, the core concepts of network interoperability and cooperation among "allied" nations are quickly becoming a major point of contention. Last year seven NATO nations and the Allied Command Transformation signed the documents for the formal establishment of a Cooperative Cyber Defence (CCD) Centre of Excellence (COE). The centre was formed to conduct research and training on cyber warfare, including specialists from the sponsoring countries -- Estonia, Germany, Italy, Latvia, Lithuania, Slovakia and Spain -- with the United States, Canada and Britain said to be joining in the near term.

Unnamed sources have also told me that one of the key areas of exploration for NATO's Cooperative Cyber Defence initiative has been the development of interoperability standards among the various state sponsors. What's all the more interesting is that the requirements we seem to be addressing in standardized, interoperable cloud computing are the same basic requirements needed for a multinational cyber defense force.

It is my opinion that as we move forward into this new world of proactive network defense, we must work to strengthen a cooperative environment among the various participants / combatants. The first step is ensuring we have a somewhat uniform way for the various parties to communicate with one another, and that means the creation of interoperability standards.

The Day the Cloud Died

Today (June 25th 2009) was one of those rare days that makes you remember why elasticity is so important when architecting your web application stack.

In case you don't follow the news or the twittersphere, around noon Eastern time came news that everyone's favorite Charlie's Angel, Farrah Fawcett, had died, which resulted in a minor news flurry. Then later in the afternoon a much bigger web storm erupted when news of Michael Jackson's death hit the social web. Unexpectedly, Twitter had major scaling issues dealing with the sudden influx of hundreds of thousands of tweets as news of Michael Jackson's death spread. But Twitter wasn't alone.

According to TechCrunch, Twitter wasn't the only site suffering. "Various reports had the AOL-owned TMZ, which broke the story, being down at multiple points throughout the ordeal. As a result, Perez Hilton's hugely popular blog may have failed as people rushed there to try and confirm the news. Then it was the LATimes which had a report saying Jackson was only in a coma rather than dead, so people rushed there, and that site went down. (The LATimes eventually confirmed his passing.)"

Needless to say, the use case for elastic cloud computing was made particularly clear today. For major sites like TMZ or the LA Times this was especially embarrassing.

Provisioning additional instances on Amazon EC2, as well as other cloud providers, has become relatively easy. For most modern scalable applications, the idea of having a "hot cloud standby" -- a prebuilt virtual machine that is basically waiting in the wings -- would solve a lot of problems with very little overhead, technical adjustment or associated cost.
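As a rough illustration of how little machinery a "hot cloud standby" actually needs, here's a minimal sketch using the boto Python library (the health-check URL, AMI ID, credentials and instance counts are placeholders, not a production failover system):

```python
# Minimal "hot cloud standby" sketch using boto.
# The URL, AMI ID, credentials and counts are placeholders.
import urllib2
import boto

SITE_URL = 'http://www.example.com/health'
STANDBY_AMI = 'ami-12345678'      # prebuilt, preconfigured image

def site_is_up(url, timeout=5):
    try:
        urllib2.urlopen(url, timeout=timeout)
        return True
    except Exception:
        return False

if not site_is_up(SITE_URL):
    conn = boto.connect_ec2(aws_access_key_id='YOUR_KEY',
                            aws_secret_access_key='YOUR_SECRET')
    # Bring the prebuilt standby machines out of the wings.
    reservation = conn.run_instances(STANDBY_AMI,
                                     min_count=2,
                                     max_count=2,
                                     instance_type='m1.large')
    print('launched %d standby instances' % len(reservation.instances))
```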

There is no longer any good reason for a professional website property to go down because of load. Cloud computing provides an almost infinite supply of computing capacity, be it infrastructure as a service, platform as a service or even a traditional CDN. To not have a cloud bursting strategy in the age of cloud computing isn't just wrong -- it's idiotic.

IBM Solves Cryptographic Cloud Security

I usually don't post press releases, but this one sounded almost too good to be true. According to IBM, they have discovered a method to fully process encrypted data without knowing its content. If true, this could greatly further data privacy and strengthen cloud computing security.

---

An IBM researcher has solved a thorny mathematical problem that has confounded scientists since the invention of public-key encryption several decades ago. The breakthrough, called "privacy homomorphism," or "fully homomorphic encryption," makes possible the deep and unlimited analysis of encrypted information -- data that has been intentionally scrambled -- without sacrificing confidentiality.

IBM's solution, formulated by IBM Researcher Craig Gentry, uses a mathematical object called an "ideal lattice," and allows people to fully interact with encrypted data in ways previously thought impossible. With the breakthrough, computer vendors storing the confidential, electronic data of others will be able to fully analyze data on their clients' behalf without expensive interaction with the client, and without seeing any of the private data. With Gentry's technique, the analysis of encrypted information can yield the same detailed results as if the original data was fully visible to all.

Using the solution could help strengthen the business model of "cloud computing," where a computer vendor is entrusted to host the confidential data of others in a ubiquitous Internet presence. It might better enable a cloud computing vendor to perform computations on clients' data at their request, such as analyzing sales patterns, without exposing the original data.

Other potential applications include enabling filters to identify spam, even in encrypted email, or protecting information contained in electronic medical records. The breakthrough might also one day enable computer users to retrieve information from a search engine with more confidentiality.

"At IBM, as we aim to help businesses and governments operate in more intelligent ways, we are also pursuing the future of privacy and security," said Charles Lickel, vice president of Software Research at IBM. "Fully homomorphic encryption is a bit like enabling a layperson to perform flawless neurosurgery while blindfolded, and without later remembering the episode. We believe this breakthrough will enable businesses to make more informed decisions, based on more studied analysis, without compromising privacy. We also think that the lattice approach holds potential for helping to solve additional cryptography challenges in the future."

Two fathers of modern encryption -- Ron Rivest and Leonard Adleman -- together with Michael Dertouzos, introduced and struggled with the notion of fully homomorphic encryption approximately 30 years ago. Although advances through the years offered partial solutions to this problem, a full solution that achieves all the desired properties of homomorphic encryption did not exist until now.

IBM enjoys a tradition of making major cryptography breakthroughs, such as the design of the Data Encryption Standard (DES); Hash Message Authentication Code (HMAC); the first lattice-based encryption with a rigorous proof-of-security; and numerous other solutions that have helped advance Internet security.

Craig Gentry conducted research on privacy homomorphism while he was a summer student at IBM Research and while working on his PhD at Stanford University.

--- Update ---

According to a Forbes article, Gentry's elegant solution has a catch: It requires immense computational effort. In the case of a Google search, for instance, performing the process with encrypted keywords would multiply the necessary computing time by around 1 trillion, Gentry estimates. But now that Gentry has broken the theoretical barrier to fully homomorphic encryption, the steps to make it practical won't be far behind, predicts professor Rivest. "There's a lot of engineering work to be done," he says. "But until now we've thought this might not be possible. Now we know it is."
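To make the idea of computing on encrypted data a little more concrete, here's a toy illustration of a much weaker homomorphic property that has been known for decades: textbook, unpadded RSA lets you multiply values while they stay encrypted. This is emphatically not Gentry's lattice-based scheme, which supports arbitrary computation; the tiny key below is for illustration only:

```python
# Toy demonstration of a homomorphic property: textbook (unpadded) RSA
# lets you multiply two values while they remain encrypted.
# Tiny, insecure key -- illustration only, not Gentry's lattice scheme.
p, q = 61, 53
n = p * q                  # public modulus (3233)
e = 17                     # public exponent
d = 2753                   # private exponent (e * d = 1 mod (p-1)(q-1))

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

m1, m2 = 7, 12
c1, c2 = encrypt(m1), encrypt(m2)

# Multiply the ciphertexts without ever decrypting them...
c_product = (c1 * c2) % n

# ...and the result decrypts to the product of the plaintexts.
print(decrypt(c_product))  # 84 == 7 * 12
```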


Wednesday, June 24, 2009

Examining Utility Software Licensing

Recently I read an article about a traditional enterprise grid computing company that is attempting to enter the nascent cloud computing market. Without naming names, I will say the technology is probably decent; what they seem to lack is any real insight into the cost advantages that cloud computing enables. What I'm getting at is the ability to scale your resources -- hardware and software alike -- as you need them, paying only for what you need, when you need it. This is arguably one of the key advantages of cloud computing, be it a private or public cloud.

My biggest issue with enterprise software companies applying traditional software licensing to cloud infrastructure software is that by charging $1,000 per year, per node, you are in a sense applying a static cost model to a dynamic environment, which basically negates any of the cost advantages that cloud computing brings. It's almost like they're saying "this is how we've always done it, so why change?" To put it another way, on one hand they're saying "reinvent your datacenter", yet on the other hand they're saying "we don't need to reinvent how we bill you".

To give a little background, for the recent launch of Enomaly's Service Provider Edition (SPE) we selected a utility software licensing approach that enables service providers to pay based on their actual hardware utilization over the course of a given month. Our platform provides the capability to send a secure automated report directly to us at the beginning of every month; basically, this report sends a CPU core count for the previous month. This gives cloud providers a backloaded cost that scales with them as they grow their cloud offerings, with little or no up-front cost. Choosing this model effectively allows us to grow with our customers and encourages us to send them leads as well as participate in joint marketing activities. The better they do, the better we do -- it's a symbiotic relationship. It also creates a long-tail revenue source which over time will compound as we deploy more and more cloud hosting infrastructure around the globe.

Another problem with more traditional yearly, per-node pricing is that it doesn't take into consideration the ability for hosting providers to recycle off-lease servers into heterogeneous clouds. A $1,000 per-node price tag is the same whether you're using brand new dual quad cores or older off-lease single-core machines. In contrast, a monthly per-core (or even memory-based) utility pricing model allows for a uniform yet scalable cost model that doesn't penalize you for using older hardware or for growth within your business. To put it another way, the traditional yearly enterprise software license is basically a success penalty -- you should be rewarded for success, not penalized for it.
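To make the difference concrete, here's a rough back-of-envelope comparison. Only the $1,000-per-node figure comes from the example above; the per-core rate is an assumed number for illustration, not our actual price list:

```python
# Hypothetical comparison of static per-node licensing vs. utility
# per-core billing. The per-core rate is an assumed illustrative number.
per_node_yearly = 1000.0      # flat enterprise license, per node per year
per_core_monthly = 8.0        # assumed utility rate, per core per month

def yearly_cost(nodes, cores_per_node):
    static = nodes * per_node_yearly
    utility = nodes * cores_per_node * per_core_monthly * 12
    return static, utility

# A cloud built from brand new dual quad-core servers (8 cores per node)...
print(yearly_cost(nodes=10, cores_per_node=8))   # (10000.0, 7680.0)

# ...versus one recycled from older off-lease single-core machines.
print(yearly_cost(nodes=10, cores_per_node=1))   # (10000.0, 960.0)
```

The static model charges the same $10,000 either way; the utility model scales with what the hardware can actually do.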

As cloud computing becomes more entrenched in businesses around the globe, the traditional pricing models for enterprise and data center software need to adjust for the new cloud realities. The single server doesn't matter, nor does assuming static or linear growth. Cloud computing is about adjusting for the unknown variables quickly and efficiently. What I see right now is most legacy enterprise software companies seem to have a hard time understanding the new cost centric reality a cloud infrastructure provides.

Tuesday, June 23, 2009

MIT Technology Review Names Key Cloud Players

MIT Technology Review has published a series of cloud computing articles in the July/August issue hitting newsstands. I had the honor of contributing to several of the articles. I would also like to personally thank the team at Technology Review for including me in both the Key Players and Private Cloud Companies to Watch.

To read the articles please see the following links.
» Technology Overview: Conjuring Clouds
» Industry Challenges: The Standards Question
» Market Watch: Virtual Computers, Real Money
» Companies to Watch Private, and Public
» Open Source Projects and Research Consortiums
» Key Players
» How it Works: Cloud Computing
» Webcast Interview with 10Gen CEO
» Case Study: Making Art Pay
» Map: Water-Powered Computers

The MIT Cloud Stack (Click to Enlarge)

Monday, June 22, 2009

More Details on UK Government's Cloud Strategy

In a followup post on the CCIF list, John Suffolk (the UK Government's CIO) has further outlined his cloud strategy for the UK government.

The strategy has ten strands:

1. Standardise and simplify the desktop. Stop reinventing the design wheel, commoditise what should be commodity. Drive down price, drive up usability, capability, quality

2. Standardise, rationalise and simplify the plethora of networks. Build with the telecommunications industry the "Public Sector Network" (PSN). An open market approach to joined-up secure networking for the Public Sector. Secure, ubiquitous, service model based and a price some 30% lower than we pay today.

3. Rationalise the data centre estate. Most are outsourced but they are in scope. Reduce from the central government 130+ to c9-12. Design a data centre eco system that is scalable, secure, green and economical.

4. Deliver against the Open source, open standards and reuse strategy I published in February. Buy at the "crown" not at an individual public body; treat proprietary software the same as open source (as in it should be available to all Public Servants); level the price comparison so the full entry and exit cost to use the software must be taken into account. http://www.cabinetoffice.gov.uk/cio/transformational_government/open_source.aspx.
Let me have your thoughts via http://writetoreply.org/ukgovoss/.

Surrounding those four elements are two “wrapper” strategies:

5. Green IT: For each of the elements detailed above they have a Green IT wrapper. I published this strategy in June 2008. You can read it at http://www.cabinetoffice.gov.uk/cio/greening_government_ict.aspx. Each of the elements must conform to our Green IT strategy.

6. Information Security and Assurance: We published a National Information Assurance Strategy in June 2007 and the Data Handling Requirements in June 2008. Together with the Security Planning Framework that was published a few months ago, these form the basis for our information security requirements. I chair the Information Assurance Delivery Group and am accountable for helping Public Bodies conform to these requirements, so security is uppermost in my mind.

This is where Government Cloud or “G-Cloud” comes in. With the elements detailed above we can begin to start the design and thinking about the establishment of a UK onshore, private G-Cloud. In essence infrastructure as a service, middleware / platforms as a service and software as a service. In relation to Saas I can’t see any reason why we couldn’t establish a Government Application Store (“G-AS” for want of a better code).

These six strategic elements have four other supporting strategies:

7. Shared Services: This is about ensuring that wherever possible we share everything... not just HR, Finance etc, but architectures, designs, solutions, people etc. We have moved from few users of shared back office services 3 years ago to many hundreds of thousands. The G-Cloud moves this thinking forward so rather than departments hosting a shared service for others, the applications are put into the cloud and saas kicks in.

8. Reliable Project Delivery. This is all about starting the right projects, executing them to successful completion and crucially delivering the social outcomes and business benefits. We have already made substantial progress in ensuring central government departments utilise portfolio management and best in class benefits realisation processes.

9. Supplier Management. Central Government is c65% outsourced and therefore having professional skills to deal with suppliers and having the most appropriate relationship with them is important. In this supporting strategy we have already worked with the ICT trade association, Intellect, to add additional tools to help the procurement process, as well as implementing a Common Assessment Framework where assessments are made of projects undertaken by suppliers against a Common Assessment Framework! All the projects for a particular supplier are aggregated together to give insight into strengths and weaknesses.

Action plans are then developed. The suppliers also provide feedback on the client in a similar way.

10. Professionalising IT Enabled business change. Last but very much not least, the bedrock in fact, is growing the knowledge, skills and experience of our IT Professionals, for which I am the Government's Head of Profession. We have about 50,000 IT professionals in the Public Sector. This strategic strand focuses on using the Skills Framework for the Information Age as a competency model for our ICT professionals. It is about personal growth and capability.

So in as few words as I can get it, this is the ten strategic strands we are following. They all work together and are all driven via the UK Government CIO Council. Some are more advanced than others, and clearly sitting beneath these strands is a whole lot more work and detail.

The posts also talked about language and getting that right and consistent. I couldn’t agree more. I seem to spend a fair amount of my time doing two things: firstly acting as a marriage guidance counsellor – bringing parties together; separating them when they are fighting; getting them to use the same words to mean the same things etc. The second thing is one of dating agent – someone has a problem and I date them with someone who has a solution. Key thing about language here is that we tend to apply a lot of labels to our problem and the solution provider uses a different set of labels for their solution. There is no easy fix to the language issue, but the more we talk, and listen (in at least equal quantities) the better.

Someone asked the question is the 2mbps a barrier. I can’t see any reason why. If we look at what many people do in their day jobs they don’t need 2mbs today. UK Government online transactions have increased enormously over the last 3 years as almost all services went online. The speed hasn’t been a barrier so far.

The final point was to my question about the Government Application Store (“G-AS”). The Public Sector will own many computer applications. To get full value, these could be moved into the G-AS. I think the Apple App Store is truly innovative and again for me creating something similar has attractions of speed, simplicity, innovation, cost effectiveness etc. I am still mulling over the points on certification of apps on the basis of if I can secure the infrastructure (and run the app in a virtualised world to protect the rest of the cloud), and the commercial model is say saas, if it doesn’t work should we care?

One final question for the time being. We are at the early stages of our thinking as you can see from my mutterings. Would it be possible to design a Government Cloud and a Government Application Store in a web 2.0 environment bringing in communities to detail the requirements, think though the issues, the designs and solutions etc?

Autonomic Cloud Security

Glenn Brunette, a Distinguished Engineer at Sun Microsystems, has just informed me of a new project released earlier today called "OpenSolaris Immutable Service Containers", which may form the basis for what he describes as "Autonomic Security". According to Brunette, using Immutable Service Containers as a core cloud building block enables some very interesting use cases in the area of adaptive and autonomic cloud security architectures. Several potential use cases are shown in a diagram set posted on Flickr.

For those unfamiliar with Immutable Service Containers (ISCs), they are an architectural deployment pattern describing a foundation for highly secure service delivery. ISCs are essentially containers into which a service or set of services is configured and deployed. First and foremost, ISCs are not based upon any one product or technology; in fact, an actual instantiation of an ISC can and often will differ based upon customer and application requirements. That said, each ISC embodies at its core the key principles inherent in the Sun Systemic Security framework, including self-preservation, defense in depth, least privilege, compartmentalization and proportionality.

As part of a more holistic view, it is expected that Immutable Service Containers will form the most basic architectural building block for more complex, highly adaptive and autonomic security architectures. The goal of this project is to more fully describe the architecture and attributes of ISCs, their inherent benefits and their construction, as well as to document practical examples using various web-scale software applications.

Immutable Service Containers offer the following benefits over more traditional deployment models:

  • Consistent, repeatable and secure packaging for the deployment and management of services. "One service per container", configured once and deployed everywhere.
  • Specific and clear approach to integrating platform security with services to provide enhanced security beyond what is typically deployed in most IT organizations today.
  • Strategy for the implementation of recommended security practices in a focused, supported way.
  • Flexible security to accommodate a variety of application and operational requirements and scenarios.
Also interesting to note: support for Sun's VirtualBox is coming soon as well. This project is certainly worth a closer look.

The Role of the CTO & CIO in Cloud Computing

Recently I asked a question on Twitter, one I figured would stir up some debate (which was the point). The question was "Does the CTO matter any more with the rise of cloud computing, or is it all about the CIO, with data reigning supreme?"

As the founder of a cloud software company, I am the self-appointed CTO. I have no formal CTO training other than a passion for emerging technology. In a company full of PhDs, I have probably the least technical credentials, with no formal post-secondary education. As a CTO I view my job as that of the technical leader. My job is to stay ahead of the curve, spotting trends or sometimes even helping to create them, based on what I see as a continued evolution occurring in computing. In this new information-driven world, ideas have become the new currency, and so I see my role as not only the technical leader but also the creative leader. I continually try to educate myself on the various emerging technologies with an eye toward their practical implementation, either within our cloud software platform or within our customers' infrastructures.

For me, thought leadership is also a very important aspect of the job. This very blog, for example, is a way for me to publicly think through various concepts with a kind of public peer review.

I do admit, the job of a CTO can vary greatly depending on company size and market segment. Like any executive role, there is room for a standard deviation within its definition. Most will agree there is no common definition of a CTO or its responsibilities, apart from acting as the senior-most technologist in an organization; the role also varies with the type of work, industry or market segment of the organization. Moreover, a CTO can be thought of as a "jack of all technical trades" and possibly a master of some.

I found the following excerpt from Wikipedia contrasting the CIO and the CTO particularly insightful: "The focus of a CTO may be contrasted with that of a CIO in that, whereas a CIO is predisposed to solve problems by acquiring and adapting ready-made technologies, a CTO is predisposed to solve problems by developing new technologies. In practice, each will typically blend both approaches."

"In an enterprise whose primary technology concerns are addressable by ready-made technologies, a CIO might be the primary representative of technology issues at the executive level. In an enterprise whose primary technology concerns are addressed by developing new technologies, or the general strategic exploitation of intellectual property held by the company, a CTO might be the primary representative of these concerns at the executive level."

"A CTO is focused on technology needed for products and technology sold to clients where a CIO is an internal facing job focused on technology to run the company and maintaining the platform to run services to sell to clients."

So basically a CTO is in charge of technology, whether a phone system, security system, storage system or anything else that has a technological aspect. In contrast, the CIO leads the management of data and information and how it's utilized.

With the rise of cloud computing, the role of the CIO is quickly becoming one of the most important jobs in any well-managed business. Information has become a disruptive tool, and defining the information architecture while assuring near-realtime access to an ever-expanding world of data will be the key metric by which successful and competitive businesses are measured. I won't go as far as saying the role of the CTO is becoming less important, but the role of the CIO is certainly more important than ever before, and this is especially true of most modern data-driven companies.

I believe we are in the midst of a realtime information revolution. No longer can we sit back and analyze what happened yesterday; we must focus on what is happening now, or even what will happen tomorrow. Those companies with the most efficient access to a realtime data stream will dominate, and the CIO, not the CTO, will be the person with the most influence in bringing about this coming corporate information revolution.

Saturday, June 20, 2009

Hoff's Cloud Metastructure

Recently, Chris Hoff posted an interesting concept for simply defining the logical parts of a cloud computing stack. Part of his concept is something he calls the "Metastructure", which he describes as "essentially infrastructure is comprised of all the compute, network and storage moving parts that we identify as infrastructure today. The protocols and mechanisms that provide the interface between the infrastructure layer and the applications and information above it".

Actually I quite like the concept and the simplicity he uses to describe it. Hoff's variation is a practical implementation of a meta-abstraction layer that sits nicely between existing hardware and software stacks while enabling a bridge to future, as-yet-undiscovered or undeveloped technologies. The idea of a Metastructure provides an extensible bridge between the legacy world of hardware-based deployments and the new world of virtualized / unified computing. (You can see why Hoff is working at Cisco; he gets the core concepts of unified computing -- one API to rule them all.)

In a post back in February, I described a Cloud "Metaverse" as a logical inverse to the traditional structured view of infrastructure. My idea was to describe everything that exists beyond the confines of a particular virtualized environment through the use of an extensible semantic API abstraction. At the heart of this theory is the ability to define how multiple distributed clouds describe their interrelations with one another (who, what, where, when, and so on).

I'd also like to point out that my concept was inspired in part by the suggestion of Scott Radeztsky at Sun to look at the problem of cloud interoperability as a meta-problem (problems describing other problems). My original Metacloud post was also inspired by a multiverse post I wrote, which was itself inspired by a post by Chris Hoff where he proposed an interesting use case for IPv6. So we seem to have come full circle.

In order to solve abstract problems, we need abstract solutions. This fits perfectly into my Semantic Cloud Abstraction thesis, loosely described as an API for other APIs.
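For what it's worth, here's a very loose sketch of what an "API for other APIs" might look like in code. The class and method names are hypothetical -- this isn't an actual Enomaly or CCIF specification, just the shape of a meta-abstraction layer that normalizes the verbs of several underlying clouds:

```python
# Loose sketch of a meta-abstraction layer: one set of verbs mapped onto
# multiple underlying cloud APIs. Class and method names are hypothetical.

class CloudAdapter(object):
    """Common verbs every underlying cloud must expose."""
    def launch(self, image, count):
        raise NotImplementedError
    def describe(self):
        raise NotImplementedError

class EC2Adapter(CloudAdapter):
    def launch(self, image, count):
        # would call Amazon's API here
        return ['ec2-instance-%d' % i for i in range(count)]
    def describe(self):
        return {'provider': 'ec2', 'region': 'us-east-1'}

class PrivateCloudAdapter(CloudAdapter):
    def launch(self, image, count):
        # would call an internal/private cloud API here
        return ['private-vm-%d' % i for i in range(count)]
    def describe(self):
        return {'provider': 'private', 'location': 'on-premise'}

class MetaCloud(object):
    """The 'API for other APIs': callers never see provider specifics."""
    def __init__(self, adapters):
        self.adapters = adapters
    def launch_anywhere(self, image, count):
        # trivially picks the first adapter; a real layer would use
        # policy, cost or the semantic relationships described above
        return self.adapters[0].launch(image, count)

meta = MetaCloud([EC2Adapter(), PrivateCloudAdapter()])
print(meta.launch_anywhere('web-app-image', 3))
```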

Thursday, June 18, 2009

UK Government CIO sheds light on "G-Cloud" plans

Interesting discussion happening on the CCIF mailing list today. John Suffolk (UK Government CIO) responded to the recent discussion of the UK government's plans to create a country-wide cloud infrastructure (aka G-Cloud).

Suffolk had this to say:
Our approach to G-Cloud stems from the work we have been doing over the past 3 years: Focus on getting desktop designs standardized; rationalize the morass of telecommunications infrastructures into a "network of networks" under the Public Sector Network Programme; rationalize the datacentres; drive through the open source, open standards and reuse strategy; surround each of those individual elements with the Green IT strategy and our Information Assurance strategy. That gives us the ability to start moving towards cloud in a sensible way. As part of this, rather than having shared services in departments we will move them to the cloud so the sharing across the public sector (more than 5 million people) can be even greater.

The open source, open standards and reuse policy provides an interesting opportunity. How easy would it be to build a Government App Store? The European law on procurement for public sectors is complex but if we can crack this we shift the paradigm again. More than happy to listen to your views on this.

I find the concept of a government app store particularly interesting. In my recent conversations with the US CIO Council, the concept of improved IT procurement through the use of web-based services was a hot topic. A kind of Amazon EC2 meets the Apple App Store makes a lot of sense in a distributed organization such as a large government agency. One of the recurring themes was the perceived lack of security that such a deployment may have, especially when considering a virtualized environment.

Another big point of contention has been standardization -- not only standardized APIs but also common terminology. A perfect example is the recent NIST cloud definition, which has become the formal definition for any cloud-based services or platforms being acquired by the US federal government. Before you can hope to embrace a cloud-centric "network of networks" computing approach, you first need a consensus on how to describe the various aspects found within the unique parts of the cloud. Semantic approaches may also help.

Personally, when thinking about cloud computing within a governmental context, interoperability and open standards are a particularly good starting point, with open source reference deployments the logical next step afterward. At the end of the day, governments represent some of the largest consumers of technology on the globe, and as such they have the power to mandate these sorts of requirements when either considering or actually developing the next generation of IT systems and platforms.

Tuesday, June 16, 2009

The United Nations of Cloud Computing

With all the recent buzz around cloud computing within various governments around the globe there is one major international organization notably absent from the discussions -- The United Nations. I thought I'd take a moment to briefly explore some of the opportunities for cloud computing within the UN.

To give a little background, the UN's stated aims are to facilitate cooperation in international law, international security, economic development, social progress, human rights, and achieving world peace. I'm the first to admit that cloud computing can do a great many things, but solving world peace probably isn't one of them. But international law, international security, economic development, and social progress all fit nicely into potential applications for cloud computing, especially within emerging economies around the globe.

Among the UN's various technology-related endeavours are the Millennium Development Goals (MDGs), a set of eight targets to help end extreme poverty worldwide by 2015. In attempting to achieve these goals, one of the early efforts was the United Nations Information and Communication Technologies Task Force, created to advance the UN's efforts around the core issues related to the role of information and communication technology in economic development and the eradication of poverty. In 2006 the Global Alliance for ICT and Development (GAID) replaced UNICTTF, and now has the task of providing an open policy dialogue on the role of information and communication technologies. GAID is probably one of the best places to address the need for cloud computing at the UN.

One of the more famous offshoots of this program is the One Laptop Per Child (OLPC) project, a nonprofit created by Nicholas Negroponte. The OLPC's goal is to create a laptop to sell for $100 each to governments to give away at no cost to school-aged children. It was one of the key drivers that brought about the "netbook" trend.

As I've said before, cloud computing isn't about one endless global cloud with no defined borders or geography; instead it's about the localization of cloud computing within new and emerging regions around the globe. It's about the opportunity that flexible and efficient distributed computing enables as an economic and social stimulus. Moreover, it's about empowering those who have up until now been passed by on the information superhighway. The UN has the opportunity to bring the ultimate equalizer -- emerging technology and, more importantly, real access to information -- to these underserved regions.

The UN has a long history of providing a multilateral source of grants for technological assistance and access around the world, including being an early advocate of open source technology. Recent advances in technology have revolutionized the way people live, learn and work, but these benefits have not spread around the world evenly. The so-called digital divide exists between communities in their access to computers, the Internet, and other technologies. Over the last few years the UN has been at the forefront of trying to knock down some of these technological barriers. With the emergence of cloud computing we may now have an even more effective tool to help in this ongoing effort to end poverty and encourage social progress around the globe.

British Government to Create Country Wide Cloud Capability

Interesting developments from our friends in the UK today. The British government's newly appointed chief information officer, John Suffolk, has been given new powers to sign off on all major IT projects, with a particular focus placed on the creation of a country-wide cloud computing infrastructure. Details of the new strategy were released as part of the Digital Britain report published earlier, which includes the development of the "G-cloud" -- a government-wide cloud computing network.

The report highlights the development of a virtual Public Service Network (PSN) with "common design, standards, service level agreements, security and governance," and goes on to outline that "all those government bodies likely to procure ICT services should look to do so on a scalable, cloud basis such that other public bodies can benefit from the new capability."

"The Digital Britain report recommends that the government take the necessary steps to secure that the Government CIO has a 'double lock' in terms of accountabilities and sign off for such projects. That will secure government-wide standards and systems." Needless to say, the need for cloud computing standards is now more important than ever, with the UK joining a growing list of countries either exploring or actively building government-sponsored clouds.

I should note I've been recently involved in similar discussions with the Canadian government regarding a national cloud strategy and platform. For anyone interested, I'm currently organizing a Canadian Federal Cloud Summit for this October in Ottawa. I will also be presenting the keynote at the upcoming Washington Cloud Standards summit on July 13th.

More details on the Digital Britain report are available here

Monday, June 15, 2009

The Big Blue Cloud, Getting Ready for the Zettabyte Age

Well, IBM has gone and done it: they've announced a cloud offering yet again. Actually, what's interesting about this go is not that they're getting into the cloud business (again) but that this time they're serious about it. And like it or not, their approach actually does kind of make sense, assuming you're within their target demographic (the large enterprise looking to save a few bucks).

My summary of the "Big Blue Cloud" is as follows: It's not what you can do for the cloud, but what the cloud can do for you. Or simply, it's about the application, duh?

In a statement earlier today in the New York Times, IBM CEO Sam Palmisano said, "The information technology infrastructure is under stress -- and the data flood is just accelerating."

Palmisano isn't alone in this thinking. Earlier this week Cisco Systems, the networking giant, released a report suggesting that global Internet traffic is growing exponentially. Scientific American said that Cisco needed a newer term -- zettabyte, or one trillion gigabytes -- to measure both the amount of uploading and downloading traffic on the Web and the bandwidth required to accommodate it. At the heart of IBM's cloud announcement is this forthcoming growth in Internet usage and the battle to control the flow of information in this new zettabyte age.

More to the point, the Cisco report has a lot of interesting statistics, including the prediction that the Web will nearly quadruple in size over the next four years. Cisco claims that, by 2013, what amounts to 10 billion DVDs will cross the Internet each month. Or to put it another way, it would take over a million years to watch just one month's worth of Web video traffic. The findings point to "consumer hyperactivity" -- that with Web-enabled phones and mobile devices, more powerful computers, and multitasking, growth will only increase. Networks must be able to accommodate such a surge in volume, and whether through private or public clouds, for many companies cloud computing may help solve this problem.
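The numbers hold up to a quick back-of-envelope check (the hours-per-DVD figure is my own assumption, not Cisco's):

```python
# Rough sanity check of the "million years of video" claim.
# Assumes roughly two hours of video per DVD -- my assumption, not Cisco's.
dvds_per_month = 10e9                 # Cisco's 2013 projection
hours_per_dvd = 2.0
viewing_hours = dvds_per_month * hours_per_dvd
years_to_watch = viewing_hours / (24 * 365)
print(int(years_to_watch))            # roughly 2.3 million years

# And the unit driving all of this: a zettabyte is a trillion gigabytes.
print(10**21 / 10**9)                 # 1,000,000,000,000 GB in 1 ZB
```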

One of IBM's customers, at the Interior Department's National Business Center -- a service center that handles payroll, human relations, financial reporting, contracting services and other computing tasks for dozens of federal agencies -- spelled out the opportunity. "For us, like other data centers, the volume of data continues to explode," Douglas J. Bourgeois said. "We want to solve some of those problems with cloud computing, so we don't have to build another $20 million data center."

Although not exactly sexy, IBM is hitting at the heart of the opportunity for big businesses looking to get into cloud computing. First and foremost they're looking to address the issue of security and trust, secondly cost, and thirdly the massive exponential growth in network and Internet-based capacity. It turns out a lot of businesses aren't very comfortable handing over their data center and application management infrastructure to some upstart company or bookseller. The old saying "No manager ever got fired for choosing IBM" is as strong as ever, and it is especially true of cloud computing. (And IBM knows it.)

Here are the highlights of IBM’s announcement:

  • Smart Business Test Cloud — A private cloud behind the client's firewall. Basically IBM is offering to install "private clouds" for companies in-house, behind their firewalls, built on a suite of existing IBM software with some IBM magic on top.
  • Smart Business Development & Test on the IBM Cloud, a bundle of development and test tools that can be used on IBM's cloud -- a network running on 13 datacenters located around the globe.
  • IBM CloudBurst — A 42U server cabinet that comes preloaded with hardware, storage, virtualization, networking and service management software. Think of this as a managed, outsourced private cloud housed in an IBM data center.

I will say there are some glaring holes in IBM's newly reinvented cloud offering. One in particular is that IBM seems to have put together a combination of several existing products; rather than re-imagining the data center, they seem to have found a new way to market what they already had. I'm not saying that's bad, just nothing new. This is in direct contrast to IBM's rivals such as Amazon Web Services, Google and even Microsoft, who have managed to create totally new and integrated stacks of cloud components. IBM's hodgepodge approach may be indicative of future acquisitions they may need to make to fully realise their cloud ambitions. (IBM RightScale or IBM SOASTA CloudTest anyone?) Regardless, this latest move firmly places IBM in the center of the hottest land grab in IT.

Social Hacktivism & The Social Denial of Service (SDoS)

Something interesting happened this past weekend in the new world of cyber warfare: the emergence of the social web as the key tool in what can only be described as the first cyber revolution.

Yes, I know this sounds crazy, so let me explain. Over the weekend the Iranian opposition coordinated a series of cyber attacks that successfully managed to prevent access to several pro-Ahmadinejad Iranian web sites, including the President's homepage. Currently the attack is ongoing, with a good portion of the key Iranian government websites either offline or loading very slowly.

What's interesting about this series of attacks is how they were organized in realtime using Twitter as well as other social tools. The attacks rely on the so-called people's information warfare concept first described by Dancho Danchev in 2007. Generally the concept goes like this: a distributed community of like-minded "revolutionaries" uses the new crop of social tools (Facebook, Twitter, MySpace, etc.) to self-mobilize in conducting various hacktivism activities such as web site defacements or launching distributed denial of service attacks.

Unlike similar attacks last year, there are a few key differences in these latest denial of service attacks. First of all, there is no botnet; instead the attacks are based on large numbers of ordinary individuals coming together for a common cause. Most seem to be using Twitter to self-organize into what I'm calling a Social Denial of Service (SDoS), with the intention of limiting or disrupting access to key Internet sites. Unlike botnet attacks, these SDoS attacks rely on users' browsers to create large amounts of traffic. A simple yet very effective tactic.

A recent post on ZDNET sheds some light on this latest approach to social hacktivism. "Among the first web-based denial of service attack used, is a tool called “Page Rebooter” which is basically allowing everyone to set an interval for refreshing a particular page, in this case it’s 1 second. Pre-defined links to the targeted sites were then distributed across Twitter and the Web"

Very interesting times. I'll keep you posted as I learn more.

Friday, June 12, 2009

PR: Enomaly Launches Cloud Readiness Assessment Service

I'm happy to announce that Enomaly has launched a new Cloud Readiness Assessment Service for enterprises looking to take advantage of the economic and technical benefits that cloud computing offers.

Determining the optimal opportunities for cloud computing can be difficult; not every application is suitable for the application of cloud technologies. Complex business requirements as well as technology factors will allow some applications to benefit from cloud computing technologies while others cannot. Identifying opportunities for the effective and rapid application of cloud computing can produce immediate benefits in reduced costs, shortened delivery lifecycles, better user experience, and improved service levels -- but identifying these opportunities can be an error-prone process. Organizations are now under pressure to take action; however, enterprise technology teams generally do not have an extensive background in cloud computing, and require time to mobilize these capabilities.

Enomaly's Cloud Computing Readiness Assessment applies the experience gained through Enomaly's thought leadership role since the earliest days of cloud computing to rapidly identify the best available opportunities for enterprises to gain significant benefits, in the short and medium term, from the application of cloud computing technologies and services, and to ascertain the optimal methods of doing so.

Thursday, June 11, 2009

Amazon EC2 gets Zapped Overnight

A number of users have reported that most of Amazon's Elastic Compute Cloud Service in North America was unavailable Wednesday evening and early Thursday.

According to a post on Amazon's forums the reason for the outage was "a lightning storm caused damage to a single Power Distribution Unit (PDU) in a single Availability Zone. While most instances were unaffected, a set of racks does not currently have power, so the instances on those racks are down. We have technicians on site, and we are working to replace the affected PDU. We do not yet have an ETA, but we expect to be able to recover the instances when we restore power. Besides these affected instances, all other instances, and all other Availability Zones, are operating normally. Users with affected instances can launch replacement instances in any of the US Region Availability Zones or wait until their instance(s) are restored"
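For affected users, the workaround Amazon describes is straightforward in practice: relaunch into a different Availability Zone. A minimal sketch with the boto Python library (the AMI ID, credentials and zone name are placeholders):

```python
# Minimal sketch: launching a replacement instance in a different
# Availability Zone. AMI ID, credentials and zone are placeholders.
import boto

conn = boto.connect_ec2(aws_access_key_id='YOUR_KEY',
                        aws_secret_access_key='YOUR_SECRET')

# The affected zone is out; pick another zone in the US region.
reservation = conn.run_instances('ami-12345678',
                                 instance_type='m1.small',
                                 placement='us-east-1b')

print('replacement launched in %s' %
      reservation.instances[0].placement)
```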

The latest outage brings new and unneeded attention to the potential need for hybrid cloud offerings that allow for a combination of internally and externally available cloud services. Recently Rackspace has also said that a hybrid approach combining existing data centers and cloud-based resources may be the ideal deployment model for enterprises looking to use the cloud. Needless to say, a 100% cloud-based infrastructure may be problematic when the power goes out.

-- Update --

The outage may not have been as broad as I first thought. According to several sources it was a fairly limited outage, affecting only a small number of EC2 users.

Wednesday, June 10, 2009

Cloud Computing Goes Local

Yesterday John Foley at InformationWeek wrote a post about how cloud computing is becoming more global as businesses in other parts of the world discover the cloud. Like most of Foley's articles, it was very insightful and made me stop and think. But in some ways I think he's missing the point of globalized cloud computing.

It's not about one endless global cloud with no defined borders or geography; instead it's about the localization of cloud computing within new and emerging regions around the globe. It's about the opportunity that flexible and efficient distributed computing enables as an economic and social stimulus. Moreover, it's about empowering those who have up until now been passed by on the information superhighway.

Let me say, I'm the first to admit that this very well might be a case where it depends on your point of view (the glass is half full or half empty). For a lot of American-based technology vendors, writers, and techno-pundits, the United States is the center of the world; anything beyond the borders of the U.S. can be looked at as "global". But the reality is the opposite. Whether it's a state, a province, a country or a broader region -- geographic boundaries matter. The fact is a large portion of the world is not comfortable hosting applications within the U.S. -- whether for reasons of compliance, regulation, governance, speed, or cost, the U.S. is not an optimal location to host web applications.

If you provide web-based services to a particular region, why host elsewhere? The sudden interest in cloud computing from regions outside of the U.S. is indicative of a move toward localization, not globalization.

I do agree with Foley's closing remarks, "As cloud computing expands around the world, it's interesting to see what opportunities companies are pursuing and what problems they're looking to solve. At the same time, look for early adopters to encounter new challenges in the areas such as security, data governance, and interoperability."

Oops, GSA Backs Away From Federal Cloud CTO Appointment

Interesting article on InformationWeek today, in which the GSA backs away from the recent appointment of Patrick Stingley as Federal Cloud CTO. Less than a month and a half after the announcement of the federal cloud CTO position, Stingley has returned to his role as CTO of the Bureau of Land Management, with the General Services Administration saying the creation of the new role came too early.

According to GSA CIO Casey Coleman, "It just wasn't the right time to have any formalized roles and responsibilities because this is still kind of in the analysis stage. Once it becomes an ongoing initiative, it might be a suitable time to look at roles such as a federal cloud CTO, but it's just a little premature."

To be honest, I was a little puzzled by the appointment of Stingley, someone with no direct "cloud" experience, to the post of Federal Cloud CTO. I can't help but wonder if he may have overstepped a little on this one.

Oops.

Federal Enterprise Cloud Architecture & Framework

I just got pinged on Twitter by Patrick Stingley, CTO of the Bureau of Land Management (www.twitter.com/patrickstingley), in regard to work he is doing mapping services associated with the cloud to the US Federal EA/SRM. He is currently looking for input and suggestions on his first draft.

According to Stingley's post, he is trying to link his cloud model to the Federal Enterprise Architecture Framework. He goes on to say "I'm thinking that this framework should fit somewhere in between the Service Component Reference Model (SRM) and the Technical Reference Model (TRM). It's not a clean mapping to the SRM, but some things do match up. In the diagram, the services in the SRM are called out by the SRM reference number. I have put asterisks by the ones I'm not sure of, and the ones with the black backgrounds are the "holes" in the SRM, where services exist but are not addressed in the SRM."

Stingley has set up a site for comments:
http://sites.google.com/site/cloudarchitecture/

-- Credits --
All are encouraged to share the material on this page with attribution in accordance with Creative Commons. Patrick Stingley is the author of the text and most of the diagram. Nick Mistry deserves credit for coming up with an improved framework and sharing that with me. Thanks to Jeff Harrison of the Carbon Project for convincing me that web servers and web services are different enough to warrant separate mention, and to Ty Fabling of ESRI for doing the same with Collaborative Services and e-mail. All of these ideas are subject to debate. Good debate is gratefully accepted. Colin Powell's leadership tenet #3 - Share credit.

Tuesday, June 9, 2009

In Memoriam to LX Labs owner KT Ligesh

Over the past five and a half years since Enomaly was formed, I have had the honour of fostering relationships with unique characters in the burgeoning hosting and virtualization world. During that time, a few individuals have stood out from the rest and left a profound impression on me. These are those rare people you meet who share that common passion for emerging technology as well as a distinct drive to compete. One such person was KT Ligesh, the founder and lead developer at LX Labs, a multi-virtualization software vendor based in Bangalore, India.

Although Ligesh was the founder of a competing software company, he and I had a chance to get to know each other. In describing Ligesh, I would say he was an intensely ambitious and creative individual with a flair for coming up with new and interesting products. He had a knack for the corner cases that made sense only to him -- something I attributed to his Indian heritage, but which was most likely because of his unique form of genius. Unfortunately, Ligesh was haunted by his troubled past, which caught up with him yesterday. This is a sad day for cloud computing, and Ligesh will be missed.

My condolences go out to all who knew and cared for him.

Monday, June 8, 2009

Forget Data.gov think RealTime.gov

As I attempt to recover from a rather wild weekend in Montreal, QC, celebrating my new brother-in-law's bachelor party, I had a couple of incomplete thoughts (after I sobered up). The first, albeit a little off topic, was: why is a hangover so much worse as I get older? Dude, really?

Joking aside, my real thought is that with all the sudden interest in the realtime and semantic web, the bigger opportunity for an open and transparent government isn't in providing a back library of data but in providing a realtime information stream -- a stream that could provide real, tactical insight into the areas of government that are actually useful for businesses looking for an edge in this tough economic climate.

In a recent quote on GCN.com, federal chief information officer Vivek Kundra indicated that he "wants to continue to move agencies away from warehousing the data they collect and toward a model in which agencies can publish data in real time, much like the National Oceanic and Atmospheric Administration does now with its weather data."

But instead of just weather, why not look at other aggregate information sources: population trends, food, medical and health data, and other actually useful realtime statistics? Why are we limited to realtime weather, a science arguably as complex as any? In a lot of ways, driving habits, mortgage defaults, employment trends, or even crop trends are probably all easier to calculate.
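To sketch what consuming such a stream might look like, here is a minimal example of polling a hypothetical realtime government data feed. The endpoint, field names, and update model below are entirely made up for illustration; no such feed exists that I'm aware of.

import json
import time
import urllib.request

# Hypothetical realtime feed of employment statistics, published as JSON.
# The URL and record fields are fictional placeholders.
FEED_URL = "https://realtime.example.gov/feeds/employment/latest.json"

def poll_feed(url, interval_seconds=60):
    """Poll a realtime feed and print each new observation as it arrives."""
    last_seen = None
    while True:
        with urllib.request.urlopen(url) as response:
            record = json.load(response)
        # Only act on observations we haven't already seen.
        if record.get("timestamp") != last_seen:
            last_seen = record.get("timestamp")
            print(record["region"], record["unemployment_rate"], last_seen)
        time.sleep(interval_seconds)

# poll_feed(FEED_URL)  # left commented out: the endpoint above is fictional

The same pattern would apply to any of the data sources mentioned above; only the feed URL and the fields would change.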

Just a random thought.

Tuesday, June 2, 2009

IBM Experiments with Group Authorship for Cloud Interoperability

I'm happy to announce that IBM has created a new Cloud Computing Working Group focused on creating interoperability usage scenarios. IBM described this new effort as "an experiment in group authorship."

In a recent email to the Open Cloud Manifesto supporters, Dirk Nicol of IBM outlined the following:

We have a growing community forming around the principles of the manifesto and have over 200 supporters. Several of you have asked about next steps. Ideas that have been raised include a white paper on customer usage scenarios, coordination on event/trade show participation, an open cloud manifesto 2 that is more technical in nature, brainstorming on how to encourage standards organization coordination, etc.

The one idea that seems to resonate across the board is the creation of a usage scenario white paper. We plan to propose this activity on the Open Cloud Manifesto LinkedIn group and discussion forum where we will solicit community participation (see the community section at http://www.opencloudmanifesto.org/resources.htm ).

In addition to the value that can be gained from a user input/perspective on the key scenarios, this will serve as an experiment in "group authorship". We will focus on usage scenarios rather than technical solutions where Intellectual Property (IP) concerns would require more of a formal organizational structure and formal agreements. Our goal is to quickly gain consensus for the scope for the project, and then allow those with the strongest interest and willingness to contribute to work together to produce a draft for review. This effort should help us understand the real customer needs for integration, interoperability, and flexibility/portability in context of usage scenarios. In order to keep the effort focused on producing results, we would like to target an initial draft of a white paper by July 31st.

The use cases:
  • Will provide a practical, customer experience based context for discussions on interoperability and standards.
  • Will make it clear where existing standards should be used.
  • Will focus the industry's attention on the importance of Open Cloud Computing.
  • Will make it clear where there is standards work to be done. If a particular use case can't be built today, or if it can only be built with proprietary APIs and products, the industry needs to define standards to make that use case possible.
In order to kick off this effort, below is a link to a google group that can be used for collaboration: http://groups.google.com/group/cloud-computing-use-cases

If you have any technical problems joining the google group, please contact IBM at help(at)opencloudmanifesto.org

Enomaly Launches ECP Cloud Service Provider Edition

As some of you know, over the last few months we've been busy working on a new version of the Enomaly Elastic Computing Platform specifically tailored to the cloud service provider / web hosting market. I'm happy to announce the launch of a new global group of cloud computing service providers who have standardized on, and are powered by, Enomaly's Elastic Computing Platform (ECP), Service Provider Edition. These service providers are leveraging Enomaly's next-generation solution, designed for carriers and hosting providers looking to build a line of business offering Infrastructure-on-Demand or Infrastructure-as-a-Service (IaaS) to customers.


Enomaly ECP Service Provider Edition extends our open source ECP Community Edition platform, used by more than 15,000 organizations around the world. The new offering makes it possible for service providers to rapidly generate revenue by offering turnkey cloud computing infrastructure as a service.

ECP Service Provider Edition can host Microsoft Windows, Linux, or any other operating system and can be used to host Web sites, power internal business applications, and provide burst capacity. The platform enables users to access and manage virtual servers in whatever quantity they need, on-demand, through a self-service dashboard and a Web-based API.
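As a rough illustration of what self-service provisioning through a web-based API generally looks like, here is a minimal sketch of requesting a new virtual server over HTTP. The endpoint, parameters, and token shown are invented for the example; they are not Enomaly's actual ECP API.

import json
import urllib.request

# Hypothetical IaaS provisioning endpoint -- placeholders, not the real ECP API.
API_URL = "https://cloud.example-provider.com/api/v1/servers"
API_TOKEN = "replace-with-your-token"

def create_server(name, cpus, memory_mb, image):
    """Ask the provider to provision a new virtual server and return its details."""
    payload = json.dumps({
        "name": name,
        "cpus": cpus,
        "memory_mb": memory_mb,
        "image": image,
    }).encode("utf-8")
    request = urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + API_TOKEN,
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# Example: spin up a small Linux web server on demand.
# server = create_server("web-01", cpus=2, memory_mb=2048, image="ubuntu-9.04")

Whatever the specific API, the point is the same: capacity becomes something a customer requests programmatically, in whatever quantity they need, rather than something ordered through a sales process.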

“We developed the Service Provider Edition of our popular ECP based on partner and customer response,” said Dr. Richard Reiner, CEO of Enomaly. “We have seen tremendous response and adoption of the solution from its earliest stages and look forward to rolling out an extensive feature set in the coming months. As always, our focus at Enomaly is customer service and innovation. We are pleased to be able to deliver this next step in the evolution of cloud computing.”

Enomaly has been working closely with a global group of cloud computing service providers that have helped define the product direction and the future roadmap. Due to strong demand, the launch program was oversubscribed, and a phase-1 customer group was selected including: 1-800-Hosting (US), CentriLogic (US and Canada), CiberConceito (Portugal), eVirtualCloud (Pakistan and Middle East), Family Hosting (US), ForLinux (UK), MediaWeb / Netsentia (Singapore), Next Wave Network (US), RazorServers (US). Global technology companies including Nomura RI (Japan), RedHat China, a large global telco, and a number of national hosting providers are also working with Enomaly and will be announcing their cloud computing initiatives at a future date.

“We continually strive to adopt new technology with the goal of satisfying the business needs of our customers,” said Robert Offley, President & CEO, Centrilogic. “To that end, we were delighted to be selected as a charter customer and work directly with Enomaly to shape the ECP Service Provider feature set. We look forward to working closely with the team to build out our solution and deliver it to customers of Centrilogic.”

If you're interested in offering turnkey cloud computing infrastructure to your hosting and data center customers, please feel free to get in touch.

