So in a perfect world, all data centers would magically become clouds and the world would be a better place. All kidding aside... I am tired of all of the hype. Let me put it this way: all data centers cannot and will not become private clouds, at least not for most typical companies. Let me tell you why I say this. There are some key principles of the cloud that I think are worth recounting:
1. A cloud is designed to optimize and manage workloads for efficiency. Therefore, repeatable and consistent workloads are most appropriate for the cloud.
2. A cloud is intended to implement automation and virtualization so that users can add and subtract services and capacity based on demand.
3. A cloud environment needs to be economically viable.
Why aren’t traditional data centers private clouds? What if a data center adds some self-service and virtualization? Is that enough? Probably not. A typical data center is a complex environment. It is not uncommon for a single data center to support five or six different operating systems, five or six different languages, four or five different hardware platforms, and perhaps 20 or 30 applications of all sizes and shapes, plus an unending number of tools to support the management and maintenance of that environment. In Cloud Computing for Dummies, written by the team at Hurwitz & Associates, there is a considerable amount written about this issue. Given an environment like this, it is almost impossible to achieve workload optimization. In addition, there are often line-of-business applications that are complicated, used by a few dozen employees, and necessary to run the business. There is simply no economic rationale for such applications to be moved to a cloud — public or private. The only alternative for such an application would be to outsource it altogether.
So what does belong in the private cloud? Application and business services with consistent workloads that are designed to be used on demand by developers, employees, or partners. Many companies are becoming IT providers to their own employees, partners, customers, and suppliers. These services are predictable and designed as well-defined components that can be optimized for elasticity. They can be used in different situations, from supporting a single customer in a single business situation to supporting a huge partner network. Typically, these services can be designed to run on a single operating system (typically Linux) that has been optimized to support these workloads. Many of the capabilities and tasks within this environment have been automated.
Could there be situations where an entire data center could be a private cloud? Sure, if an organization can plan well enough to limit the elements supported within the data center. I think this will happen with specialized companies that have the luxury of not supporting legacy systems. But for most organizations, reality is a lot messier.
You know that a market is about to transition from an early fantasy market when IT architects begin talking about traditional IT requirements. Why do I bring this up as an issue? I had a fascinating conversation yesterday with a leading architect in charge of the cloud strategy for an important company that is typically on the bleeding edge of technology. Naturally, I am not allowed to name the company or the person. But let me just say that individuals and companies like this are the first to grapple with issues such as the need for a registry for web services or the complexity of creating business services that are both reusable and embody business best practices. They are the first companies to try out artificial intelligence to see if it could automate tasks that require complex reasoning.
These innovators tend to get blank stares from their cohorts in traditional IT departments who are grappling with mundane issues such as keeping systems running efficiently. Leading-edge companies have the luxury to push the bounds of what is possible. There is a tremendous amount to be learned from their experiments with technology. In fact, there is often more to be learned from their failures than from their successes, because they are pushing the boundary of what is possible with current technology.
So, what did I take away from my conversation? In my colleague’s view, the cloud today is about “how many virtual machines you need, how big they are, and linking those VMs to storage.” Not a very compelling picture, but it is his perception of the reality of the cloud today. His view of the future requirements is quite intriguing.
I took away six key issues that this advanced planner would like to see in the evolution of cloud computing:
One. Automation of placement of assets is critical. Where you actually put capability matters. For example, there are certain workloads that should never leave the physical data center because of regulatory requirements. If an organization were dealing with huge amounts of data, it would not be efficient to place elements of that data on different cloud environments. What about performance issues? What if a task needs to be completed in 10 seconds, or what if it needs to be completed in 5 milliseconds? There are many decisions that need to be made based on corporate requirements. Should the placement of workloads be decided by hand-coding each case? The answer is no. There should be an automated process, based on business rules, that determines the actual placement of cloud services.
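To make this concrete, here is a minimal sketch of what rule-driven placement might look like. The rules, attribute names, and thresholds are all invented for illustration; they are not drawn from any actual product.

```python
# A minimal sketch of rule-driven workload placement.
# All attribute names and thresholds are hypothetical.

def place_workload(workload):
    """Return a placement decision based on business rules."""
    # Regulatory rule: some data may never leave the physical data center.
    if workload.get("regulated_data"):
        return "on-premises data center"
    # Data gravity rule: don't scatter huge data sets across clouds.
    if workload.get("data_size_tb", 0) > 100:
        return "single cloud, co-located with the data"
    # Performance rule: millisecond deadlines demand proximity.
    if workload.get("max_latency_ms", 1000) < 10:
        return "private cloud, dedicated resources"
    return "public cloud"

print(place_workload({"regulated_data": True}))   # on-premises data center
print(place_workload({"max_latency_ms": 5}))      # private cloud, dedicated resources
print(place_workload({"data_size_tb": 500}))      # single cloud, co-located with the data
```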
Two. Avoiding concentration of risk. How do you actually place core assets into a hypervisor? If, for example, you have a highly valuable set of services that are critical to decision makers, you might want to ensure that they run within different hypervisors, based on automated management processes and rules.
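As a purely illustrative sketch, an anti-affinity rule of the kind described here could be as simple as refusing to co-locate critical services on the same hypervisor:

```python
# A sketch of an anti-affinity placement rule: no two critical
# services may land on the same hypervisor. Names are made up.

def spread_across_hypervisors(services, hypervisors):
    """Assign each critical service to a distinct hypervisor."""
    if len(services) > len(hypervisors):
        raise ValueError("not enough hypervisors to avoid concentration of risk")
    return dict(zip(services, hypervisors))

placement = spread_across_hypervisors(
    ["pricing-engine", "risk-dashboard", "trade-feed"],
    ["hv-01", "hv-02", "hv-03", "hv-04"],
)
print(placement)  # each critical service runs on its own hypervisor
```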
Three. Quality of Service needs a control fabric. If you are a customer of hybrid cloud computing services, you might need access to the code that tells you what tasks the tool is actually doing. What does that tool actually touch in the cloud environment? What do the error messages mean, and what are the implications? Today many cloud services are black boxes; there is no way for the customer to really understand what is happening behind the scenes. If companies are deploying truly hybrid environments that support a mixed workload, this type of access to the workings of the various tools that monitor and manage quality of service will be critical. From a quality of service perspective, some applications will require dedicated bandwidth to meet requirements. Other applications will not need any special treatment.
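As a small illustration of that last point, here is a hedged sketch of a policy table that gives some applications dedicated bandwidth and leaves the rest as best-effort. The application names and numbers are invented:

```python
# A sketch of a QoS policy table: some applications get dedicated
# bandwidth, the rest are best-effort. All values are illustrative.

QOS_POLICIES = {
    "trading-feed":  {"dedicated_bandwidth_mbps": 100, "priority": "high"},
    "nightly-batch": {"dedicated_bandwidth_mbps": 0,   "priority": "best-effort"},
}

DEFAULT_POLICY = {"dedicated_bandwidth_mbps": 0, "priority": "best-effort"}

def policy_for(app):
    """Unknown applications fall back to best-effort treatment."""
    return QOS_POLICIES.get(app, DEFAULT_POLICY)

print(policy_for("trading-feed"))
print(policy_for("email-archiver"))  # no special treatment
```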
Four. Cloud Service Providers building shared services need an architectural plan to control them as a unit of work. These services will be shared across departments as well as across customers. How do you connect these services? While it might seem simple at the 50,000-foot level, it is actually quite complex, because we are talking about linking a set of services together to build a coherent platform. Therefore, as with building any system, there is a requirement to model the “system of services,” then deploy that model, and finally reconcile and tune the results.
Five. Standard APIs protect customers. Should APIs for all cloud services be published and accessible? If companies are to have the freedom to move easily and efficiently between and among cloud services, then APIs need to be well understood. For example, a company may be using a vendor’s cloud service and discover a tool that addresses a specific problem. What if that vendor doesn’t support that tool? In essence, the customer is locked out from using it. This becomes a problem immediately for innovators. However, it is also an issue for traditional companies that begin to work with cloud computing services and over time realize that they need more service management and more oversight.
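Here is a minimal sketch of why well-understood APIs matter for portability: a thin adapter gives every provider the same interface, so moving between clouds means swapping one object. Both provider classes are hypothetical stand-ins, not real vendor APIs.

```python
# Two imaginary vendors with incompatible, proprietary calls.
class ProviderA:
    def launch_instance(self, size):
        return f"provider-a instance ({size})"

class ProviderB:
    def create_vm(self, flavor):
        return f"provider-b vm ({flavor})"

# Thin adapters expose one common interface over both.
class AdapterA:
    def __init__(self):
        self.api = ProviderA()

    def provision(self, size):
        return self.api.launch_instance(size)

class AdapterB:
    def __init__(self):
        self.api = ProviderB()

    def provision(self, size):
        return self.api.create_vm(size)

# The application code never changes when the provider does.
for cloud in (AdapterA(), AdapterB()):
    print(cloud.provision("small"))
```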
Six. Managing containers may be key to the service management of the cloud. A well-designed cloud service has to be service oriented. It needs to be placed in a container without dependencies, since customers will use services in different ways. Therefore, each service needs to have a set of parameter-driven configurators so that the rules of usage and management are clear. What version of what cloud service should be used under what circumstance? What if the service is designed to execute backup? Can that backup happen across the globe, or should it be done in proximity to the data assets? These management issues will become the most important issues for cloud providers in the future.
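To illustrate what a parameter-driven configurator could look like, here is a sketch of a service descriptor that answers the backup questions raised above. The schema and parameter names are my own invention:

```python
# A sketch of a parameter-driven configurator for a packaged backup
# service. The parameters echo the questions above; the schema is invented.

backup_service_config = {
    "service": "backup",
    "version": "2.1",                 # which version to use in this circumstance
    "data_locality": "same-region",   # keep backups in proximity to the data
    "allowed_regions": ["us-east"],   # never replicate across the globe
}

def validate(config):
    """Enforce the usage rules the descriptor declares."""
    if config["data_locality"] == "same-region" and len(config["allowed_regions"]) != 1:
        raise ValueError("same-region backup must name exactly one region")
    return config

print(validate(backup_service_config))
```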
The best thing about talking to people like this architect is that it begins to make you think about issues that aren’t part of today’s cloud discussions. These are difficult issues to solve. However, many of these issues have been addressed for decades in other iterations of technology architectures. Yes, the cloud is a different delivery and deployment model for computing but it will evolve as many other architectures do. The idea of putting quality of service, service management, configuration and policy rules at the forefront will help to transform cloud computing into a mature and effective platform.
I admit that I haven’t written a blog in more than three months — but I do have a good reason. I just finished writing my latest book — not a Dummies book this time. It will be my first business book based on almost three decades in the computer industry. Once I know the publication date I will tell you a lot more about it. But as I was finishing this book I was thinking about my last book, Cloud Computing for Dummies, which was published almost two years ago. As this anniversary approaches I thought it was appropriate to take a look back at what has changed. I could probably go on for quite a while about how little information was available at that point and how few CIOs were willing to talk about or even consider cloud computing as a strategy. But that’s old news. I decided that it would be most interesting to focus on eight of the changes that I have seen in this fast-moving market over the past two years.
Change One: IT is now on board with cloud computing. Cloud computing has moved from a reaction to sluggish IT departments to a business strategy involving both business and technology leaders. A few years ago, business leaders were reading about Amazon and Google in business magazines. They knew little about what was behind the hype. They focused on the fact that these early cloud pioneers seemed to be efficient at making cloud capability available on demand. No paperwork and no waiting for the procurement department to process an order. Two years ago IT leaders tried to pretend that cloud computing was a passing fad that would disappear. Now I am finding that IT is treating cloud computing as a centerpiece of their future strategies — even if they are only testing the waters.
Change Two: enterprise computing vendors are all in with both private and public cloud offerings. Two years ago most traditional IT vendors did not pay much attention to the cloud. Today, most hardware, software, and services vendors have jumped on the bandwagon. They all have cloud computing strategies. Most of these vendors are clearly focused on a private cloud strategy. However, many are beginning to offer specialized public cloud services with a focus on security and manageability. These vendors are melding all types of cloud services — public, private, and hybrid — into interesting and sometimes compelling offerings.
Change Three: Service Orientation will make cloud computing successful. Service Orientation was hot two years ago. The huge hype behind cloud computing led many pundits to proclaim that Service Oriented Architecture was dead and gone. In fact, the cloud vendors that are succeeding are those building true business services without dependencies, services that can migrate between public, private, and hybrid clouds; that is a competitive advantage.
Change Four: System Vendors are banking on integration. Does a cloud really need hardware? Only two years ago the dialog centered on the contention that clouds meant no hardware would be necessary. What a difference a few years can make. The emphasis coming primarily from the major systems vendors is that hardware indeed matters. These vendors are integrating cloud infrastructure services with their hardware.
Change Five: Cloud Security takes center stage. Yes, cloud security was a huge topic two years ago, but the dialog is beginning to change. There are three conversations that I am hearing. First, cloud security is a huge issue that is holding back widespread adoption. Second, there are well-designed software and hardware offerings that can make cloud computing safe. Third, public clouds are just as secure as an internal data center because these vendors have more security experts than any traditional data center. In addition, a large number of venture-backed cloud security companies are entering the market with new and quite compelling value propositions.
Change Six: Cloud Service Level Management is a primary customer concern. Two years ago no one our team interviewed for Cloud Computing for Dummies connected service level management with cloud computing. Now that customers are planning for widespread adoption of cloud computing, they are seriously examining the level of service they require. IT managers are reading the service level agreements from public cloud vendors and Software as a Service vendors carefully. They are looking beyond the service level for a single service and beginning to think about the overall service level across their own data centers as well as the other cloud services they intend to use.
Change Seven: IT cares most about service automation. No, automation in the data center is not new; it has been an important consideration for years. However, what is new is that IT management is looking at the cloud not just as a way to avoid the costs of purchasing hardware. They see automation of both routine functions and business processes as the primary benefit of cloud computing. In the long run, IT management intends to focus on automation and reduce hardware to interchangeable commodities.
Change Eight: Cloud computing moves to the front office. Two years ago IT and business leaders saw cloud computing as a way to improve back office efficiency. This is beginning to change. With the flexibility of cloud computing, management is now looking at the potential to quickly innovate business processes that touch partners and customers.
Companies are trying to get a handle on the costs involved in running data centers. In fact, this is one of the primary reasons that companies are looking to cloud computing to make the headaches go away. Like everything else in the complex world of computing, clouds solve some problems, but they also cause the same type of lock-in problems that our industry has experienced for a few decades.
I wanted to add a little perspective before I launch into my thoughts about portability in the cloud. So, I was thinking about traditional data centers and how their performance has long been hampered by their lack of homogeneity. The typical data center is filled with a warehouse of different hardware platforms, operating systems, applications, and networks, to name but a few. You might want to think of them as archeological digs, tracing the history of the computer industry. To protect their turf, vendors each came up with their own platforms, proprietary operating systems, and specialized applications that would only work on a single platform.
In addition to the complexities involved in managing this type of environment, the applications that run in these data centers are also trapped. In fact, one of the main reasons that large IT organizations ended up with so many different hardware platforms running a myriad of different operating systems was because applications were tightly intertwined with the operating system and the underlying hardware.
As we begin to move towards the industrialization of software, there has been an effort to separate the components of computing so that application code is separate from the underlying operating system and the hardware. This has been the allure of both service oriented architectures and virtualization. Service orientation has enabled companies to create clean web services interfaces and to create business services that can be reused in a lot of different situations. SOA has taught us the business benefits that can be gained from encapsulating existing code so that it is isolated from other application code, operating systems, and hardware.
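A tiny sketch of that encapsulation idea: a clean, reusable service interface sits over a stand-in for decades-old legacy code, and callers never touch what is underneath. The credit check itself is invented for illustration.

```python
# A sketch of SOA-style encapsulation. The legacy function is a
# stand-in for tangled, platform-specific code.

def _legacy_credit_check(raw_record):
    # Imagine decades-old code here, bonded to a particular platform.
    return raw_record.get("score", 0) > 650

class CreditCheckService:
    """A reusable business service; callers see only a stable interface."""

    def approve(self, customer_id, score):
        return _legacy_credit_check({"id": customer_id, "score": score})

service = CreditCheckService()
print(service.approve("cust-42", 710))  # True, regardless of what runs underneath
```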
Server virtualization takes the existing “clean” interface between the hardware and the software and separates the two. One benefit that has fueled rapid adoption and market growth is that no software needs to be rewritten: the virtualization layer presents the same x86 instructions to the software above it. As server virtualization moves into the data center, companies can consolidate the massive number of dramatically underutilized machines onto a much smaller number of machines that are used far more efficiently. The resultant cost savings from server virtualization include reductions in physical boxes, power, cooling, heating, maintenance, and overhead.
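The consolidation arithmetic is worth seeing in miniature. Here is a back-of-the-envelope calculation; the utilization figures are illustrative assumptions, not measured data.

```python
import math

# Back-of-the-envelope consolidation math with assumed utilization figures.
physical_servers = 200
avg_utilization = 0.10      # a typically underutilized machine
target_utilization = 0.60   # what a virtualized host can sustain

# The total work stays constant; it is just packed onto fewer hosts.
hosts_needed = math.ceil(physical_servers * avg_utilization / target_utilization)

print(hosts_needed)                                      # 34 hosts instead of 200
print(f"{physical_servers / hosts_needed:.1f}:1 consolidation ratio")
```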
Server virtualization has enabled users to create virtual images to recapture some efficiency in the data center. And although it fixes the problem of operating systems bonded to hardware platforms, it does nothing to address the intertwining of applications and operating systems.
Why bring this issue up now? Don’t we have hypervisors that take care of all of our problems of separating operating systems from applications? Don’t companies simply spin up another virtual image, and that is the end of the story? I think the answer is no – especially with the projected growth of the cloud environment.
I got thinking about this issue after having a fascinating conversation with Greg O’Connor, CEO of AppZero. AppZero’s value proposition is quite interesting. In essence, AppZero provides an environment that separates the application from the underlying operating system, effectively moving up to the next level of the stack.
The company’s focus is particularly on the Windows operating system, and for good reason. Unlike Linux or z/OS, the Windows operating system does not allow applications to operate in a partition. Partitions effectively isolate applications from one another so that if a bad thing happens to one application it cannot affect another. Because it is not possible to separate or isolate applications in a Windows-based server environment, when something goes wrong with one application it can hurt the rest of the system and other applications running in Windows.
In addition, when an application is loaded into Windows, DLLs (Dynamic Link Libraries) are often loaded into the operating system. DLLs are shared across applications, and installing a new application can overwrite the current DLL of another application. As you can imagine, this conflict can have really bad side effects.
Even when applications are installed on different servers – physical or virtual — installing software in Windows is a complicated issue. Applications create registry entries, modify the registry entries of shared DLLs, and copy new DLLs over shared libraries. This arrangement works fine unless you want to move that application to another environment. Movement requires a lot of work for the organization making the transition to another platform. It is especially complicated for independent software vendors (ISVs) that need to be able to move their application to whichever platform their customers prefer.
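A toy simulation makes the mechanism clear: two applications share one library slot, and installing the second silently breaks the first. This models the conflict only; real Windows behavior is far more involved.

```python
# A toy model of "DLL hell". The shared dict stands in for the
# system-wide library store that all applications touch.

shared_dlls = {}

def install(app, dll_name, dll_version):
    # Installing overwrites whatever copy was there before.
    shared_dlls[dll_name] = (app, dll_version)

install("AppA", "common.dll", "1.0")
install("AppB", "common.dll", "2.0")  # AppB overwrites AppA's version

owner, version = shared_dlls["common.dll"]
print(f"common.dll is now v{version} (installed by {owner})")
# AppA, which expected v1.0, may now fail in unpredictable ways.
```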
The problem gets even more complex when you start looking at issues related to Platform as a Service (PaaS). With a PaaS platform, a customer is using a cloud service that includes everything from the operating system to application development tools and a testing environment. Many PaaS vendors have created their own languages to be used to link components together. While there are benefits to having a well-architected development and deployment cloud platform, there is a huge danger of lock-in. Now, most of the PaaS vendors that I have been talking to promise that they will make it easy for customers to move from one cloud environment to another. Of course, I always believe everything a vendor tells me (that is meant as a joke, to lighten the mood), but I think that customers have to be wary about these claims of interoperability.
That was why I was intrigued by AppZero’s approach. Since the company decouples the operating system from the application code, it provides portability of pre-installed applications from one environment to the next. The company positions its approach as a virtual application appliance. In essence, this software is designed as a layer that sits between the operating system and the application. This layer intercepts file I/O and shared memory I/O, as well as specific DLLs, and keeps them in separate “containers” that are isolated from the application code.
Therefore, the actual application does not change any of the files or registry entries on a Windows server. In this way, a company could run a single instance of the Windows Server operating system. In essence, the approach isolates the applications, along with their specific dependencies and configurations, from the operating system, so fewer operating system instances are needed to manage a Microsoft Windows Server-based data center.
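Here is a minimal sketch of the interception concept, written from the description above rather than from AppZero’s actual code: file operations are redirected into a per-application container directory so the shared system is never touched.

```python
# A conceptual sketch of I/O interception: writes aimed at system
# paths land in a private container directory instead. This is my
# illustration of the idea, not AppZero's implementation.

import os

class AppContainer:
    def __init__(self, app_name, root="/tmp/appcontainers"):
        self.base = os.path.join(root, app_name)
        os.makedirs(self.base, exist_ok=True)

    def open(self, path, mode="r"):
        # Redirect the path into the container so the host stays untouched.
        redirected = os.path.join(self.base, path.lstrip("/"))
        os.makedirs(os.path.dirname(redirected), exist_ok=True)
        return open(redirected, mode)

app = AppContainer("payroll")
with app.open("/etc/payroll.conf", "w") as f:  # never touches the real /etc
    f.write("db=primary\n")
```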
AppZero enables the user to load an application from the network rather than from the local disk. It therefore should simplify the job of data center operations management by enabling a single application image to be provisioned to multiple environments, and it makes it easier to keep track of changes within a Windows environment because the application isn’t tied to a particular OS. AppZero has found a niche selling its offerings to ISVs that want to move their offerings across different platforms without having to have people install the application. By having the application pre-installed in a virtual application appliance, the ISV can remove many of the errors that occur when a customer installs the application into their environment. An application delivered in a virtual application appliance container greatly reduces the variability of components that might affect the application in a traditional installation process. In addition, the company has been able to establish partnerships with both Amazon and GoGrid.
So, what does this have to do with portability and the cloud? It seems to me that this approach of separating layers of software so that interdependencies do not interfere with portability is one of the key ingredients of software portability in the cloud. Clearly, it isn’t the only issue to be solved. There are issues such as standard interfaces, standards for security, and the like. But I expect that many of these problems will be solved by a combination of lessons learned from existing standards for the Internet, web services, Service Orientation, and systems and network management. We’ll be ok, as long as we don’t try to reinvent everything that has already been invented.
I spent longer than I typically do at a conference last week when I went to VMworld. It was quite an active event — lots of customers, lots of cloud technology providers, and lots of integrators. What I took away from the conference were three major observations: the customers attending the conference are busily virtualizing servers; VMware is trying hard to position itself for leadership in both virtualization and the cloud; and well-established vendors are deepening their relationships with VMware while emerging vendors are trying to either fill a void or knock an existing leader out of the ring.
One of the things that really stood out for me was the stage of maturity of the customers. In speaking with attendees, it was clear to me that many of the VMware customers are in the early stages of moving to the cloud. In fact, most of them are not even thinking about clouds — other than rain clouds. The people attending this year’s event are typical of an emerging market. They are the hard-core developers who have to deal with technology without the benefit of levels of abstraction. These are hard-working developers who have deep expertise in virtualizing servers. Many of these developers have gained a lot of benefit from some of the key innovations that VMware has made over the years. One excellent example is VMware’s product called VMotion, which enables a developer to migrate a running virtual machine from one physical server to another with no service disruption. I started thinking about what implementing virtualization means to developers. I got thinking about this because I picked up a handy little guide at the conference called vSphere 4.0 Quick Start Guide: Shortcuts down the path to Virtualization. What struck me from glancing through the book was the level of programming required to configure and implement virtual machines. It is not for the faint-hearted. Yes, when you’re done with the hard work of separating the software environment from the hardware, magic starts to happen.
It was interesting to juxtapose this bottom-up virtualization focus with emerging cloud technologies. Cloud computing is clearly emerging as a strategy for many of the vendors and many of the bosses of the participants at the conference. The cloud leverages virtualization as an enabler, but virtualization is clearly the beginning and not the end. We have seen this so many times before with so many technology trends. You start with the sophisticated developers who want to work at the metal. They get great performance and great benefit for their companies. And then technology matures and gets abstracted. Here is a good example. In the really, really early days of graphical interfaces, sophisticated programmers wanted nothing to do with an abstracted interface. The command line interface was the one and only way to go. After all, the command line gave them control that they could not imagine having from a graphical interface. How many programmers today would go back to a command line interface? (Probably a few — but no one’s perfect.)
So, I was left with the feeling that we are in between generations of technology at this year’s VMworld. The old world of virtualizing servers is about to be supplanted by the world of abstracting the data center itself. Virtualization is one of the pillars of this transformation, but it is not the end game.
I haven’t written a blog post in quite a while. Yes, I feel bad about that, but I think I have a good excuse. I have been hard at work (along with my colleagues Marcia Kaufman, Robin Bloor, and Fern Halper) on Cloud Computing for Dummies. I will admit that we underestimated the effort. We thought that since we had already written Service Oriented Architecture for Dummies (twice) and Service Management for Dummies, Cloud Computing would be relatively easy. It wasn’t. Over the past six months we have learned a lot about the cloud and where it is headed. I thought that rather than try to rewrite the entire book right here, I would give you a sense of some of the important things that I have learned. I will hold myself to 10 so that I don’t go overboard!
1. The cloud is both old and new at the same time. It is built on the knowledge and experience of timesharing, Internet services, Application Service Providers, hosting, and managed services. So, it is an evolution, not a revolution.
2. There are lots of shades of gray with cloud segmentation. Yes, there are three buckets that we put clouds into: infrastructure as a service, platform as a service, and software as a service. Now, that’s nice and simple. However, it isn’t that simple, because all of these areas are starting to blur into each other. And it is even more complicated because there is also business process as a service. This is not a distinct market unto itself; rather, it is an important component of the cloud in general.
3. Market leadership is in flux. Six months ago the marketplace for cloud was fairly easy to figure out. There were companies like Amazon and Google and an assortment of other pure-play companies. That landscape is shifting as we speak. The big guns like IBM, HP, EMC, VMware, Microsoft, and others are running in. They would like to control the cloud. It is indeed a market where big players will have a strategic advantage.
4. The cloud is an economic and business model. Business management wants the data center to be easily scalable and predictable and affordable. As it becomes clear that IT is the business, the industrialization of the data center follows. The economics of the cloud are complicated because so many factors are important: the cost of power; the cost of space; the existing resources — hardware, software, and personnel (and the status of utilization). Determining the most economical approach is harder than it might appear.
5. The private cloud is real. For a while there was a raging debate: is there such a thing as a private cloud? It has become clear to me that there is indeed a private cloud. A private cloud is the transformation of the data center into a modular, service oriented environment that enables users to safely procure infrastructure, platform, and software services in a self-service manner. This may not be a replacement for an entire data center – a private cloud might be a portion of the data center dedicated to certain business units or certain tasks.
6. The hybrid cloud is the future. The future of the cloud is a combination of private clouds, traditional data centers, hosting, and public clouds. Of course, there will be companies that use public cloud services for everything, but the majority of companies will have a combination of cloud services.
7. Managing the cloud is complicated. This is not just a problem for the vendors providing cloud services. Any company using cloud services needs to be able to monitor service levels across the services they use. This will only get more complicated over time.
8. Security is king in the cloud. Many of the customers we talked to are scared about the security implications of putting their valuable data into a public cloud. Is it safe? Will my data cross country borders? How strong is the vendor? What if it goes out of business? This issue is causing many customers either to consider only a private cloud or to hold back. The vendors who succeed in the cloud will have to have a strong brand that customers will trust. Security will always be a concern, but it will be addressed by smart vendors.
9. Interoperability between clouds is the next frontier. In these early days customers tend to buy one service at a time for a single purpose — Salesforce.com for CRM, some compute services from Amazon, etc. However, over time, customers will want more interoperability across these platforms. They will want to be able to move their data and their code from one environment to another. There is some forward movement in this area, but it is early. There are few standards for the cloud and little agreement.
10. The cloud in a box. There is a lot of packaging going on out there, and it comes in two forms. Some companies are creating appliance-based environments for managing virtual images. Other vendors (especially the big ones like HP and IBM) are packaging their cloud offerings with their hardware for companies that want private clouds.
I have only scratched the surface of this emerging market. What makes it so interesting and so important is that it is actually the coalescing of computing. It incorporates hardware, management software, service orientation, security, software development, information management, the Internet, service management, interoperability, and probably a dozen other components that I haven’t mentioned. It is truly the way we will achieve the industrialization of software.
I haven’t been to IBM’s Rational conference in a couple of years, so I was very interested to see not just what IBM had to say about the changing landscape of software development but also how the customers attending the conference had changed. I was not disappointed. While I could write a whole book on the changes happening in software development (but I have enough problems), I thought I would mention some of the aspects of the conference that I found noteworthy.
One. Rational is moving from being a tools company to being a software development platform company. Rational has always been a complex organization to understand, since it has evolved and changed so much over the years. The organization now seems to have found its focus.
Two. More management, fewer low-level developers. In the old days, conferences like this would be dominated by programmers. While there were many developers in attendance, I found that there were a lot of upper-level managers. For example, I sat at lunch with one CIO who was in the process of moving to a sophisticated service oriented architecture. Another person at my table was a manager looking to update his company’s current development platforms. Still another individual was a customer of one of the companies that IBM had purchased, looking to understand how to implement new capabilities added since the acquisition.
Three. Rational has changed dramatically through acquisitions. Rational is a tale of acquisitions. Rational Software, the linchpin of IBM’s software development division, was itself a combination of many acquisitions. Rational, before being bought by IBM in 2002 for $2.1 billion, had acquired an impressive array of companies including Requisite, SQA, Performance Awareness, Pure Atria, and ObjecTime Limited. After a period of absorption, IBM started acquiring more assets. BuildForge (build and release management) was purchased in 2006; Watchfire (web application security vulnerability and compliance testing software) was bought in 2007; and Telelogic (requirements management) was purchased in 2008.
It has taken IBM a while both to absorb all of the acquisitions and to create a unified architecture so that these software products could share components and interoperate. While IBM is not done, under the leadership of General Manager Danny Sabbah, Rational has made the transition from being a tools company to becoming a platform for managing software complexity. It is a work in progress.
Four. It’s all about Jazz. Jazz, IBM’s collaboration platform, was a major focus of the conference. Jazz is an architecture intended to integrate data and function. Jazz’s foundation is the REST architecture, and therefore it is well positioned for use in Web 2.0 applications. What is most important is that IBM is bringing all of its Rational technology under this model. Over the next few years, we can expect to see this framework under all of Rational’s products.
Five. Rational doesn’t stand alone. It is easy to focus on the full Rational portfolio (which could take a while). But what I found quite interesting was the emphasis on the intersection between the Rational platform and Tivoli’s management services as well as WebSphere’s Service Oriented Architecture offerings. Rational also made a point of focusing on the use of collaboration elements provided by the Lotus division. Cloud computing was also a major focus of discussion at the event. While many customers at the event are evaluating the potential of using various Rational products in the cloud, it is early. The one area where IBM seems to have hit a home run is its CloudBurst appliance, which is intended to create and manage virtual images. Rational is also beginning to deliver its testing offerings as cloud-based services. One of the most interesting elements of its approach is using tokens as a licensing model. In other words, customers purchase a set number of tokens, or virtual licenses, that can be used to purchase services that are not tied to a specific project or product.
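As a small sketch of how such a token model could work (the prices and service names are invented for illustration): customers draw down a prepaid pool, and no token is bound to a particular project or product.

```python
# A sketch of token-based licensing: a prepaid pool drawn down by
# whichever service is used. Costs and names are hypothetical.

class TokenPool:
    def __init__(self, tokens):
        self.tokens = tokens

    def consume(self, service, cost):
        """Spend tokens on any service; nothing is project-specific."""
        if cost > self.tokens:
            raise RuntimeError(f"not enough tokens for {service}")
        self.tokens -= cost
        print(f"{service}: spent {cost}, {self.tokens} tokens left")

pool = TokenPool(100)
pool.consume("performance-test run", 30)  # not tied to a specific project
pool.consume("load-test run", 25)         # any service draws from the same pool
```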