To comprehend HP’s cloud computing strategy, you first have to understand HP’s Matrix Blade System. HP announced the Matrix system in April 2009 as a prepackaged, fabric-based system. Because Matrix was designed as a packaged environment, it has become the linchpin of HP’s cloud strategy.
So, what is Matrix? Within this environment, HP has pre-integrated servers, networking, storage, and software (primarily orchestration to customize workflow). In essence, Matrix is a unified computing system that supports both physical blades and virtual configurations. It includes a graphical command center console to manage resource pools, physical and virtual servers, and network connectivity. On the software side, Matrix provides an abstraction layer that supports workload provisioning and workflow-based policy management that can determine where workloads will run. The environment supports the VMware hypervisor, open source KVM, and Microsoft’s Hyper-V.
HP’s strategy is to combine this Matrix system, which it has positioned as its private cloud, with a public compute cloud. In addition, HP is incorporating its lifecycle management software and its security acquisitions as part of its overall cloud strategy. It is leveraging HP Services (formerly EDS) to offer a hosted private cloud and traditional outsourcing as part of an overall plan. HP is hoping to leverage its services expertise in running large enterprise packaged software.
There are three components to the HP cloud strategy:
- CloudSystem
- Cloud Service Automation
- Cloud Consulting Services
CloudSystem. What HP calls CloudSystem is, in fact, based on the Matrix blade system. The Matrix Blade System uses a common rack enclosure to support all the blades produced by HP. Matrix is packaged with what HP calls an operating environment that includes provisioning software, virtualization, a self-service portal, and management tools for resource pools. HP considers its public cloud services to be part of the CloudSystem. To provide a hybrid cloud computing environment, HP will offer public compute cloud services similar to what is available from Amazon EC2. When combined with the outsourcing services from HP Services, HP contends that it provides a common architectural framework across public, private, virtualized servers, and outsourcing. It includes what HP is calling cloud maps: configuration templates based on HP’s acquisition of Stratavia, a database and application automation software company.
Cloud Service Automation. The CloudSystem is intended to make use of services automation software called Cloud Service Automation (CSA). The components of CSA include a self-service portal that manages a service catalog. The service catalog describes each service that is intended to be used as part of the cloud environment, and within the catalog the required service level is defined. In addition, CSA can meter the use of services and can provide visibility into the performance of each service. A second capability is a cloud controller, based on the orchestration technology from HP’s Opsware acquisition. A third component, the resource manager, provides provisioning and monitoring services. The objective of CSA is to provide end-to-end lifecycle management of the CloudSystem.
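To make the catalog idea concrete, here is a minimal sketch in Python of what a catalog entry with a declared service level and metered usage might look like. The class and field names are my own illustration, not HP’s actual CSA data model:

```python
from dataclasses import dataclass

@dataclass
class CatalogEntry:
    """One service in a self-service catalog (illustrative, not HP's schema)."""
    name: str
    description: str
    required_availability: float   # e.g. 0.999 means "three nines"
    max_response_ms: int           # target response time for the service
    usage_hours: float = 0.0       # metered consumption

    def meter(self, hours: float) -> None:
        """Record consumption so usage can be reported or charged back."""
        self.usage_hours += hours

catalog = [
    CatalogEntry("dev-test-vm", "Developer test server", 0.99, 500),
    CatalogEntry("order-service", "Order processing", 0.999, 100),
]
catalog[1].meter(12.5)  # the portal would call this as services run
```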
Cloud Consulting Services. HP is taking advantage of EDS’s experience in managing computing infrastructure as the foundation for its cloud consulting services offerings. HP also leverages the consulting services that were traditionally part of HP as well as those from EDS. As a result, HP has deep experience in designing and running cloud seminars and strategy engagements for customers.
From HP’s perspective, it is taking a hybrid approach to cloud computing. What does HP mean by hybrid? Basically, HP’s hybrid strategy combines the CloudSystem (a hardware-based private cloud), its own public compute services, and traditional outsourcing.
The Bottom Line. Making the transition to becoming a major cloud computing vendor is complicated. The market is young and still in transition. HP has many interesting building blocks that have the potential to make it an important player. Leveraging the Matrix Blade System is a pragmatic move since it is already an integrated and highly abstracted platform. However, HP will have to provide more services that increase the ability of its customers to use the CloudSystem to create an elastic and flexible computing platform. Cloud Service Automation is a good start but still requires more evolution. For example, it needs to add more capabilities to its service catalog; leveraging the Systinet registry/repository as part of that catalog would be advisable. I also think that HP needs to package its security offerings to be cloud specific, both in the governance and compliance area and in identity management.
Just how much HP plans to compete in the public cloud space is uncertain. Can HP be effective in both markets? Does it need to combine its offerings or create two different business models?
It is clear that HP wants to make cloud computing the cornerstone of its “Instant-On Enterprise” strategy announced last year. In essence, Instant-On Enterprise is intended to make it easier for customers to consume data center capabilities, including infrastructure, applications, and services. This is a good vision, in keeping with what customers need, and plainly cloud computing is an essential ingredient in achieving this ambitious strategy.
So, in a perfect world, all data centers would magically become clouds and the world would be a better place. All kidding aside... I am tired of all of the hype. Let me put it this way: all data centers cannot and will not become private clouds, at least not for most typical companies. Let me tell you why I say this. There are some key principles of the cloud that I think are worth recounting:
1. A cloud is designed to optimize and manage workloads for efficiency. Therefore, repeatable and consistent workloads are most appropriate for the cloud.
2. A cloud is intended to implement automation and virtualization so that users can add and subtract services and capacity based on demand (see the sketch after this list).
3. A cloud environment needs to be economically viable.
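Here is a minimal sketch of that second principle, the elasticity feedback loop, in Python. The thresholds and the utilization metric are assumptions for illustration, not any vendor’s actual algorithm:

```python
def scale_decision(current_instances: int, avg_utilization: float,
                   high: float = 0.75, low: float = 0.25) -> int:
    """Return the new instance count for a simple threshold-based autoscaler.

    A real cloud controller would also weigh cooldown periods, quotas, and
    cost, but the core idea is just this feedback loop on demand.
    """
    if avg_utilization > high:
        return current_instances + 1   # demand is up: add capacity
    if avg_utilization < low and current_instances > 1:
        return current_instances - 1   # demand is down: release capacity
    return current_instances

# Example: 4 instances running at 80% utilization -> scale out to 5.
print(scale_decision(4, 0.80))
```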
Why aren’t traditional data centers private clouds? What if a data center adds some self-service and virtualization? Is that enough? Probably not. A typical data center is a complex environment. It is not uncommon for a single data center to support five or six different operating systems, five or six different languages, four or five different hardware platforms, and perhaps 20 or 30 applications of all sizes and shapes, plus an unending number of tools to support the management and maintenance of that environment. In Cloud Computing for Dummies, written by the team at Hurwitz & Associates, there is a considerable amount written about this issue. Given an environment like this, it is almost impossible to achieve workload optimization. In addition, there are often line-of-business applications that are complicated, used by a few dozen employees, and necessary to run the business. There is simply no economic rationale for moving such applications to a cloud, public or private. The only alternative for such an application would be to outsource it altogether.
So what does belong in the private cloud? Application and business services with consistent workloads that are designed to be used on demand by developers, employees, or partners. Many companies are becoming IT providers to their own employees, partners, customers, and suppliers. These services are predictable and designed as well-defined components that can be optimized for elasticity. They can be used in different situations, from supporting a single customer in a single business situation to supporting a huge partner network. Typically, these services can be designed to run on a single operating system (typically Linux) that has been optimized to support these workloads. Many of the capabilities and tasks within this environment have been automated.
Could there be situations where an entire data center could be a private cloud? Sure, if an organization can plan well enough to limit the elements supported within the data center. I think this will happen with specialized companies that have the luxury of not supporting legacy systems. But for most organizations, reality is a lot messier.
Every year I attend IBM’s software analyst meeting. It is an opportunity to gain a snapshot of what the leadership team is thinking and saying. Since I have had the opportunity to attend many of these events, it is always instructive to watch the evolution of IBM’s software business over the years.
So, what did I take away from this year’s conference? In many ways, it was not that much different from what I experienced last year. And I think that is good. When you are a company the size of IBM, you can’t lurch from one strategy to the next and expect to survive. One of the advantages that IBM has in the market is that it has a well-developed roadmap that it is in the process of executing. It is not easy to execute when you have as many software components as IBM does in its software portfolio.
While it isn’t possible to discuss all that I learned in my various discussions with IBM executives, I’d like to focus on IBM’s solutions strategy and its impact on the software portfolio. From my perspective, IBM has made impressive strides in enforcing a common set of services that underlie its software portfolio. It has been a complicated process that has taken decades and is still a work in progress. As a result, the business units within software are increasingly working together to provide underlying services to each other. For example, Tivoli provides management services to Rational, and Information Management provides data management services to Tivoli. WebSphere provides middleware and service orientation to all of the various business units. Because of this approach, IBM is better able to move to a solutions focus.
It’s about the solutions.
In the late 1990s IBM got out of the applications business in order to focus on middleware, data management, and systems management. This proved to be a successful strategy for the next decade. IBM made a huge amount of money selling WebSphere, DB2, and Tivoli offerings for SAP and Oracle platforms. In addition, Global Services created a profitable business implementing these packaged applications for enterprises. But the world has begun to change. SAP and Oracle have both encroached on IBM’s software business. Some have criticized IBM for not being in the packaged software business. While IBM is not going into the packaged software business, it is investing a vast amount of money, development effort, and marketing into the “solutions” business.
How is the solutions business different from a packaged application? In some ways they are actually quite similar. Both provide a mechanism for codifying best practices into software, and both are intended to save customers time when they need to solve a business problem. IBM took itself out of the packaged software business just as the market was taking off. Companies like SAP, Oracle, Siebel, PeopleSoft, and hundreds of others were flooding the market with tightly integrated packages. In this period, IBM decided that it would be more lucrative to partner with these companies, which lacked independent middleware and enabling technologies. IBM decided that it would be better off enabling these packaged software companies than competing in the packaged software market.
This turned out to be the right decision for IBM at the time. The packaged software it had developed in the 80s was actually holding it back. Therefore, without the burden of trying to fix broken software, it was able to focus all of its energy and financial strength on its core enabling software business. But as companies like Oracle and SAP cornered the packaged software market and began to expand into enabling software, IBM began to evolve its strategy. IBM’s strategy is now a hybrid of the traditional packaged software business and a solutions business based on best-practices industry frameworks.
So, there are two components in IBM’s solutions strategy: horizontal packaged solutions that can be applied across industries, and solution frameworks that are focused on specific vertical markets.
Horizontal Packages. The horizontal solutions that IBM is offering have been primarily based on acquisitions it has made over the past few years. While at first glance they look like any other packaged software, there is a method to what IBM has purchased. Without exception, these acquisitions are focused on providing packaged capabilities that are not specific to any market but are intended to be used in any vertical market. In essence, the packaged solutions that IBM has purchased resemble middleware more than end-to-end solutions. For example, Sterling Commerce, which IBM purchased in August 2010, is a cross-channel commerce platform. IBM purchased Coremetrics, a web analytics provider, in June, and bought Unica for marketing automation. While each of these is indeed packaged, they each represent a component of a solution that can be applied across industries.
Vertical Packages. IBM has been working on its vertical market packaging for more than a decade through its Business Services Group (BSG). IBM has taken its best practices from various industry engagements and codified these patterns into software components. These components have been unified into solution frameworks for industries such as retail, banking, and insurance. While this has been an active approach within Global Services for many years, there was a major restructuring in IBM’s software organization this past year. In January, the software group split into two groups: one focused on middleware and another focused on software solutions. All of the newly acquired horizontal packages provide the underpinning for the vertical framework-based software solutions.
Leading with the solution. IBM software has changed dramatically over the past several decades. The solutions focus does not stop with the changes within the software business units themselves; it extends to hardware as well. Increasingly, customers want to be able to buy their solutions as a package without having to buy the piece parts. A solutions focus that encompasses solutions, middleware, appliances, and hardware is the strategy that IBM will take into the coming decade.
You know that a market is about to transition from an early fantasy market when IT architects begin talking about traditional IT requirements. Why do I bring this up as an issue? I had a fascinating conversation yesterday with a leading architect in charge of the cloud strategy for an important company that is typically on the bleeding edge of technology. Naturally, I am not allowed to name the company or the person. But let me just say that individuals and companies like this are the first to grapple with issues such as the need for a registry for web services or the complexity of creating business services that are both reusable and include business best practices. They are the first companies to try out artificial intelligence to see if it could automate complex tasks that require complex reasoning.
These innovators tend to get blank stares from their cohorts in other traditional IT departments, who are grappling with mundane issues such as keeping systems running efficiently. Leading-edge companies have the luxury of pushing the bounds of what is possible. There is a tremendous amount to be learned from their experiments with technology. In fact, there is often more to be learned from their failures than from their successes, because they are pushing the boundary of what is possible with current technology.
So, what did I take away from my conversation? From my colleague’s view, the cloud today is about “how many virtual machines you need, how big they are, and linking those VMs to storage.” Not a very compelling picture, but it is his perception of the reality of the cloud today. His view of the future requirements is quite intriguing.
I took away six key issues that this advanced planner would like to see in the evolution of cloud computing:
One. Automation of placement of assets is critical. Where you actually put capability matters. For example, there are certain workloads that should never leave the physical data center because of regulatory requirements. If an organization were dealing with huge amounts of data, it would not be efficient to place elements of that data on different cloud environments. What about performance issues? What if a task needs to be completed in 10 seconds, or what if it needs to be completed in 5 milliseconds? There are many decisions that need to be made based on corporate requirements. Should the placement of workloads be decided manually, case by case? The answer is no. There should be an automated process, based on business rules, that determines the actual placement of cloud services.
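To illustrate what rule-driven placement might look like, here is a minimal sketch in Python. The workload attributes, thresholds, and target names are hypothetical; a real engine would evaluate far richer policies (cost, capacity, compliance regimes):

```python
def place_workload(workload: dict) -> str:
    """Pick a deployment target from business rules, not manual choice."""
    if workload.get("regulated"):              # regulated data never leaves
        return "on-premises"
    if workload.get("max_latency_ms", 1000) < 10:
        return "on-premises"                   # 5 ms targets rule out remote clouds
    if workload.get("data_tb", 0) > 100:
        return "private-cloud"                 # avoid splitting huge data sets
    return "public-cloud"

print(place_workload({"regulated": False, "max_latency_ms": 5}))
# -> "on-premises"
```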
Two. Avoiding concentration of risk. How do you actually place core assets into a hypervisor? If, for example, you have a highly valuable set of services that are critical to decision makers, you might want to ensure that they run within different hypervisors, based on automated management processes and rules.
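A minimal sketch of such an anti-affinity rule, assuming a hypothetical scheduler that simply spreads critical services round-robin across distinct hypervisors; real virtualization platforms expose equivalent affinity and anti-affinity policies:

```python
from itertools import cycle

def spread(services: list[str], hypervisors: list[str]) -> dict[str, str]:
    """Round-robin critical services across hosts so no single
    hypervisor failure takes them all down."""
    hosts = cycle(hypervisors)
    return {service: next(hosts) for service in services}

print(spread(["pricing", "risk-engine", "reporting"],
             ["hv-a", "hv-b", "hv-c"]))
# -> each critical service lands on a different hypervisor
```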
Three. Quality of Service needs a control fabric. If you are a customer of hybrid cloud computing services, you might need access to the code that tells you what tasks the tool is actually doing. What does that tool actually touch in the cloud environment? What do the error messages mean, and what are the implications? Today many cloud services are black boxes; there is no way for the customer to really understand what is happening behind the scenes. If companies are deploying truly hybrid environments that support a mixed workload, this type of access to the workings of the various tools that are monitoring and managing quality of service will be critical. From a quality of service perspective, some applications will require dedicated bandwidth to meet requirements; other applications will not need any special treatment.
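As one illustration of the transparency being asked for, here is a sketch in Python that wraps a management action so the customer can see what the tool touched and what a failure actually means. The decorator and the resource names are entirely hypothetical, not any provider’s real API:

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

def audited(action_name: str, resource: str):
    """Wrap a provider call so every touch on the environment is visible."""
    def wrapper(fn):
        def inner(*args, **kwargs):
            logging.info("START %s on %s", action_name, resource)
            try:
                result = fn(*args, **kwargs)
                logging.info("OK %s on %s", action_name, resource)
                return result
            except Exception as exc:
                logging.error("FAIL %s on %s: %s", action_name, resource, exc)
                raise
        return inner
    return wrapper

@audited("resize-volume", "storage/vol-42")
def resize_volume(new_gb: int) -> None:
    ...  # the opaque provider call the customer cannot normally see

resize_volume(50)  # the audit trail shows exactly what was touched
```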
Four. Cloud Service Providers building shared services need an architectural plan so they can control those services as a unit of work. These services will be shared across departments as well as across customers. How do you connect these services? While it might seem simple at the 50,000-foot level, it is actually quite complex, because we are talking about linking a set of services together to build a coherent platform. Therefore, as with building any system, there is a requirement to model the “system of services,” then deploy that model, and finally reconcile and tune the results.
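Here is a minimal sketch of that model-deploy-reconcile cycle: declare the “system of services” as a model, compare it with what is actually running, and compute the actions needed to converge. Service names and replica counts are illustrative:

```python
desired = {"catalog": 2, "checkout": 3, "search": 2}   # service -> replicas

def reconcile(desired: dict[str, int], running: dict[str, int]) -> list[str]:
    """Return the actions needed to make 'running' match 'desired'."""
    actions = []
    for service, want in desired.items():
        have = running.get(service, 0)
        if have < want:
            actions.append(f"start {want - have} x {service}")
        elif have > want:
            actions.append(f"stop {have - want} x {service}")
    return actions

print(reconcile(desired, {"catalog": 2, "checkout": 1}))
# -> ['start 2 x checkout', 'start 2 x search']
```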
Five. Standard APIs protect customers. Should APIs for all cloud services be published and accessible? If companies are to have the freedom to move easily and efficiently between and among cloud services, then APIs need to be well understood. For example, a company may be using a vendor’s cloud service and discover a tool that addresses a specific problem. What if that vendor doesn’t support that tool? In essence, the customer is locked out of using this tool. This becomes a problem immediately for innovators. However, it is also an issue for traditional companies that begin to work with cloud computing services and over time realize that they need more service management and more oversight.
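To show why published, well-understood APIs matter, here is a sketch of a common-interface (adapter) pattern in Python. The provider classes are hypothetical stand-ins; the point is that tools written against the shared interface keep working when the workload moves:

```python
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    @abstractmethod
    def launch(self, image: str, size: str) -> str: ...

class ProviderA(CloudProvider):
    def launch(self, image: str, size: str) -> str:
        return f"provider-a instance of {image} ({size})"

class ProviderB(CloudProvider):
    def launch(self, image: str, size: str) -> str:
        return f"provider-b instance of {image} ({size})"

def deploy(provider: CloudProvider) -> str:
    # Tools written against the interface work with any conforming cloud.
    return provider.launch("web-tier", "medium")

print(deploy(ProviderA()))
print(deploy(ProviderB()))  # swapping providers requires no tool changes
```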
Six. Managing containers may be key to the service management of the cloud. A well-designed cloud service has to be service oriented. It needs to be placed in a container without dependencies, since customers will use services in different ways. Therefore, each service needs a set of parameter-driven configurators so that the rules of usage and management are clear. What version of which cloud service should be used under what circumstances? What if the service is designed to execute backups? Can that backup happen across the globe, or should it be done in proximity to the data assets? These management issues will become the most important issues for cloud providers in the future.
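A minimal sketch of a parameter-driven configurator for such a service, using the backup example above. The parameters (version, region affinity, allowed regions) are my own illustration of making the rules of usage explicit rather than buried:

```python
from dataclasses import dataclass

@dataclass
class BackupServiceConfig:
    """Explicit, parameter-driven rules for a packaged backup service."""
    version: str = "2.1"
    region_affinity: bool = True          # keep backups near the data assets
    allowed_regions: tuple = ("eu-west",)

    def target_region(self, data_region: str) -> str:
        if self.region_affinity:
            return data_region            # back up in proximity to the data
        return self.allowed_regions[0]    # otherwise use a permitted region

cfg = BackupServiceConfig(region_affinity=True)
print(cfg.target_region("eu-west"))  # -> 'eu-west'
```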
The best thing about talking to people like this architect is that it begins to make you think about issues that aren’t part of today’s cloud discussions. These are difficult issues to solve. However, many of these issues have been addressed for decades in other iterations of technology architectures. Yes, the cloud is a different delivery and deployment model for computing but it will evolve as many other architectures do. The idea of putting quality of service, service management, configuration and policy rules at the forefront will help to transform cloud computing into a mature and effective platform.
I admit that I haven’t written a blog in more than three months, but I do have a good reason: I just finished writing my latest book. It’s not a Dummies book this time; it will be my first business book, based on almost three decades in the computer industry. Once I know the publication date I will tell you a lot more about it. But as I was finishing this book, I was thinking about my last book, Cloud Computing for Dummies, which was published almost two years ago. As this anniversary approaches, I thought it was appropriate to take a look back at what has changed. I could probably go on for quite a while about how little information was available at that point and how few CIOs were willing to talk about, or even consider, cloud computing as a strategy. But that’s old news. I decided that it would be most interesting to focus on eight of the changes that I have seen in this fast-moving market over the past two years.
Change One: IT is now on board with cloud computing. Cloud computing has moved from a reaction to sluggish IT departments to a business strategy involving both business and technology leaders. A few years ago, business leaders were reading about Amazon and Google in business magazines. They knew little about what was behind the hype; they focused on the fact that these early cloud pioneers seemed to be efficient at making cloud capability available on demand. No paperwork and no waiting for the procurement department to process an order. Two years ago, IT leaders tried to pretend that cloud computing was a passing fad that would disappear. Now I am finding that IT is treating cloud computing as a centerpiece of its future strategy, even if it is only testing the waters.
Change Two: enterprise computing vendors are all in with both private and public cloud offerings. Two years ago, most traditional IT vendors did not pay much attention to the cloud. Today, most hardware, software, and services vendors have jumped on the bandwagon. They all have cloud computing strategies. Most of these vendors are clearly focused on a private cloud strategy. However, many are beginning to offer specialized public cloud services with a focus on security and manageability. These vendors are melding all types of cloud services (public, private, and hybrid) into interesting and sometimes compelling offerings.
Change Three: Service Orientation will make cloud computing successful. Service Orientation was hot two years ago. The huge hype behind cloud computing led many pundits to proclaim that Service Oriented Architecture was dead and gone. In fact, the cloud vendors that are succeeding are those building true business services, without dependencies, that can migrate between public, private, and hybrid clouds; that portability is a competitive advantage.
Change Four: System Vendors are banking on integration. Does a cloud really need hardware? Only two years ago, the dialog surrounded the contention that clouds meant no hardware would be necessary. What a difference a few years can make. The emphasis, coming primarily from the major systems vendors, is that hardware indeed matters. These vendors are integrating cloud infrastructure services with their hardware.
Change Five: Cloud Security takes center stage. Yes, cloud security was a huge topic two years ago, but the dialog is beginning to change. There are three conversations that I am hearing. First, cloud security is a huge issue that is holding back widespread adoption. Second, there are well-designed software and hardware offerings that can make cloud computing safe. Third, public clouds are just as secure as an internal data center because these vendors have more security experts than any traditional data center does. In addition, a large number of venture-backed cloud security companies are entering the market with new and quite compelling value propositions.
Change Six: Cloud Service Level Management is a primary customer concern. Two years ago, no one our team interviewed for Cloud Computing for Dummies connected service level management with cloud computing. Now that customers are seriously planning for widespread adoption of cloud computing, they are seriously examining the level of service they require. IT managers are carefully reading the service level agreements from public cloud vendors and Software as a Service vendors. They are looking beyond the service level for a single service and beginning to think about the overall service level across their own data centers as well as the other cloud services they intend to use.
Change Seven: IT cares most about service automation. No, automation in the data center is not new; it has been an important consideration for years. However, what is new is that IT management is looking at the cloud for more than avoiding the costs of purchasing hardware. They see the automation of both routine functions and business processes as the primary benefit of cloud computing. In the long run, IT management intends to focus on automation and reduce hardware to interchangeable commodities.
Change Eight: Cloud computing moves to the front office. Two years ago, IT and business leaders saw cloud computing as a way to improve back-office efficiency. This is beginning to change. With the flexibility of cloud computing, management is now looking at the potential to quickly innovate business processes that touch partners and customers.
Yesterday I read an interesting blog commenting on why Oracle seems so interested in Sun’s hardware.
I quote from a comment by Brian Aker, former head of architecture for MySQL, on the O’Reilly Radar blog site. He comments on why he believes Oracle bought Sun:
Brian Aker: I have my opinions, and they’re based on what I see happening in the market. IBM has been moving their P Series systems into datacenter after datacenter, replacing Sun-based hardware. I believe that Oracle saw this and asked themselves “What is the next thing that IBM is going to do?” That’s easy. IBM is going to start pushing DB2 and the rest of their software stack into those environments. Now whether or not they’ll be successful, I don’t know. I suspect once Oracle reflected on their own need for hardware to scale up on, they saw a need to dive into the hardware business. I’m betting that they looked at Apple’s margins on hardware, and saw potential in doing the same with Sun’s hardware business. I’m sure everything else Sun owned looked nice and scrumptious, but Oracle bought Sun for the hardware.
I think that Brian has a good point. In fact, in a post I wrote a few months ago, I commented on the fact that hardware is back. It is somewhat ironic. For a long time, the assumption has been that a software platform is the right leverage point to control markets. Clearly, the tide is shifting. IBM, for example, has taken full advantage of customer concerns about the future of the Sun platform. But IBM is not stopping there. I predict a hardware sneak attack that encompasses IBM’s platform software strength (i.e., middleware, automation, analytics, and service management) combined with its hardware platforms.
IBM will use its strength in systems and middleware software to expand its footprint into Oracle’s backyard, surrounding Oracle’s software with an integrated platform designed to work as a system of systems. It is clear that over the past five or six years IBM’s focus has been on software and services. Software has long provided good profitability for IBM. Services have made enormous strides over the past decade as IBM has learned to codify knowledge and best practices into what I have called Service as Software. The other important movement has been IBM’s focused effort over the past decade to revamp the underlying structure of its software into modular services that are used across its software portfolio. Combine this approach with industry-focused business frameworks and you have a pretty good idea of where IBM is headed with its software and services portfolios.
The hardware strategy began to evolve in 2005, when IBM Software bought a little XML accelerator appliance company called DataPower. Many market watchers were confused: what would IBM Software do with a hardware platform? Over time, IBM expanded the footprint of this platform and began to repurpose it as a means of prepackaging software components. First there was a SOA-based appliance; then IBM added a virtual machine appliance called CloudBurst. On the Lotus side of the business, IBM bought another appliance company that evolved into the Lotus Foundations platform. Appliances became a great opportunity to package and preconfigure systems that could be remotely upgraded and managed. This packaging of software with systems demonstrated not only the potential for simplicity for customers but also a new way of adding value and revenue.
Now, IBM is taking the idea of packaging hardware with software to new levels. It is starting to focus its software and networking capability on hardware-driven systems. For example, within the systems environment, IBM is leveraging its knowledge of optimizing systems software so that application-based workloads can take advantage of capabilities such as threading, caching, and systems-level networking.
In its recent announcement, IBM presented its new hardware platforms as being based on the five most common workloads: transaction processing, analytics, business applications, records management and archiving, and collaboration. What does this mean for customers? If a customer has a transaction-oriented system, the most important capability is to ensure that the environment uses as many threads as possible to optimize throughput. In addition, caching repetitive workloads will ensure that transactions move through the system as quickly as possible. While this has been doable in the past, the difference is that these capabilities are now packaged as an end-to-end system, so implementation can be faster and more precise. The same can be said for analytics workloads. These workloads demand a high level of efficiency to enable customers to look for patterns in the data that help predict outcomes. Analytics workloads require the caching and fast processing of algorithms and data across multiple sources.
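As a toy illustration of those two optimizations, threading for throughput and caching for repetitive work, here is a sketch in Python. The workload itself is a stand-in; nothing here reflects IBM’s actual systems software:

```python
from concurrent.futures import ThreadPoolExecutor
from functools import lru_cache

@lru_cache(maxsize=1024)
def price_lookup(sku: str) -> float:
    """A repetitive step worth caching; imagine an expensive query here."""
    return hash(sku) % 100 / 10.0

def process_transaction(order: tuple[str, int]) -> float:
    sku, qty = order
    return price_lookup(sku) * qty

orders = [("sku-1", 2), ("sku-2", 1), ("sku-1", 5)]  # sku-1 repeats: cache hit
with ThreadPoolExecutor(max_workers=8) as pool:      # many threads for throughput
    totals = list(pool.map(process_transaction, orders))
print(totals)
```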
The bottom line is that IBM is looking at its hardware as an extension of the types of workloads it is required to support. Rather than considering hardware as a set of separate platforms, IBM is following a system-of-systems approach that is consistent with cloud computing. With this type of approach, IBM will continue on the path of viewing a system as a combination of the hardware platform, the systems software, and systems-based networking. These elements of computing are then configured based on the type of application and the nature of the current workload.
It is, in fact, workload optimization that is at the forefront of what is changing in hardware in the coming decade. This is true both in the data center and in the cloud. Cloud computing, and the hybrid environments that make up the future of computing, are predicated on predictable, scalable, and elastic workload management. It is the way we will start thinking about computing: as a continuum of all of the component parts combined, hardware, software, services, networking, storage, collaboration, and applications. This reflects the dramatic changes that are just over the horizon.