To comprehend HP’s cloud computing strategy you have to first understand HP’s Matrix Blade System. HP announced the Matrix system in April 2009 as a prepackaged fabric-based system. Because Matrix was designed as a packaged environment, it has become the linchpin of HP’s cloud strategy.
So, what is Matrix? Within this environment, HP has pre-integrated servers, networking, storage, and software (primarily orchestration to customize workflow). In essence, Matrix is a unified computing system that supports both physical blades and virtual configurations. It includes a graphical command center console to manage resource pools, physical and virtual servers, and network connectivity. On the software side, Matrix provides an abstraction layer that supports workload provisioning and workflow-based policy management that can determine where workloads will run. The environment supports the VMware hypervisor, open source KVM, and Microsoft’s Hyper-V.
HP’s strategy is to combine this Matrix system, which it has positioned as its private cloud, with a public compute cloud. In addition, HP is incorporating its lifecycle management software and its security acquisitions as part of its overall cloud strategy. It is leveraging HP Services (formerly EDS) to offer a hosted private cloud and traditional outsourcing as part of an overall plan. HP is hoping to leverage its services expertise in running large enterprise packaged software.
There are three components to the HP cloud strategy:
- CloudSystem
- Cloud Service Automation
- Cloud Consulting Services
CloudSystem. What HP calls CloudSystem is, in fact, based on the Matrix blade system. The Matrix Blade System uses a common rack enclosure to support all the blades produced by HP. Matrix is packaged with what HP calls an operating environment that includes provisioning software, virtualization, a self-service portal, and management tools to manage resource pools. HP considers its public cloud services to be part of the CloudSystem. To provide a hybrid cloud computing environment, HP will offer public compute cloud services similar to what is available from Amazon EC2. When combined with the outsourcing services from HP Services, HP contends that it provides a common architectural framework across public, private, virtualized servers, and outsourcing. It includes what HP is calling cloud maps. Cloud maps are configuration templates based on HP’s acquisition of Stratavia, a database and application automation software company.
Cloud Service Automation. The CloudSystem is intended to make use of services automation software called Cloud Service Automation (CSA). The components of CSA include a self-service portal that manages a service catalog. The service catalog describes each service that is intended to be used as part of the cloud environment. Within the catalog, the required service level is defined. In addition, the CSA can meter the use of services and can provide visibility into the performance of each service. A second capability is a cloud controller, based on the orchestration technology from HP’s Opsware acquisition. A third component, the resource manager, provides provisioning and monitoring services. The objective of CSA is to provide end-to-end lifecycle management of the CloudSystem.
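To make the service catalog idea concrete, here is a minimal sketch in Python of what a catalog entry with a defined service level and usage metering might look like. The class and field names are hypothetical illustrations of the concept, not HP’s actual CSA schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a service catalog: each entry describes a
# service, its required service level, and whether usage is metered.
@dataclass
class CatalogEntry:
    name: str              # service name shown in the self-service portal
    description: str       # what the service does
    sla_uptime_pct: float  # required service level, e.g. 99.9
    metered: bool = True   # meter usage for visibility and chargeback

@dataclass
class ServiceCatalog:
    entries: dict = field(default_factory=dict)

    def register(self, entry: CatalogEntry) -> None:
        self.entries[entry.name] = entry

    def lookup(self, name: str) -> CatalogEntry:
        return self.entries[name]

catalog = ServiceCatalog()
catalog.register(CatalogEntry(
    name="order-processing",
    description="Processes customer orders end to end",
    sla_uptime_pct=99.9,
))
print(catalog.lookup("order-processing").sla_uptime_pct)  # 99.9
```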
Cloud Consulting Services. HP is taking advantage of EDS’s experience in managing computing infrastructure as the foundation for its cloud consulting services offerings. HP also leverages the consulting services that were traditionally part of HP alongside those from EDS. Therefore, HP has deep experience in designing and running cloud seminars and strategy engagements for customers.
From HP’s perspective, it is taking a hybrid approach to cloud computing. What does HP mean by hybrid? Basically, HP’s hybrid strategy combines the CloudSystem (a hardware-based private cloud), its own public compute services, and traditional outsourcing.
The Bottom Line. Making the transition to becoming a major cloud computing vendor is complicated. The market is young and still in transition. HP has many interesting building blocks that have the potential to make it an important player. Leveraging the Matrix Blade System is a pragmatic move since it is already an integrated and highly abstracted platform. However, it will have to provide more services that increase the ability of its customers to use the CloudSystem to create an elastic and flexible computing platform. Cloud Service Automation is a good start but still requires more evolution. For example, it needs to add more capabilities to its service catalog. Leveraging its Systinet registry/repository as part of its service catalog would be advisable. I also think that HP needs to package its security offerings to be cloud specific. This includes governance and compliance as well as identity management.
Just how aggressively HP plans to compete in the public cloud space is uncertain. Can HP be effective in both markets? Does it need to combine its offerings or create two different business models?
It is clear that HP wants to make cloud computing the cornerstone of its “Instant-On Enterprise” strategy announced last year. In essence, Instant-On Enterprise is intended to make it easier for customers to consume data center capabilities including infrastructure, applications, and services. This is a good vision in keeping with what customers need. And plainly cloud computing is an essential ingredient in achieving this ambitious strategy.
You know that a market is about to transition from an early fantasy market when IT architects begin talking about traditional IT requirements. Why do I bring this up as an issue? I had a fascinating conversation yesterday with a leading architect in charge of the cloud strategy for an important company that is typically on the bleeding edge of technology. Naturally, I am not allowed to name the company or the person. But let me just say that individuals and companies like this are the first to grapple with issues such as the need for a registry for web services or the complexity of creating business services that are both reusable and include business best practices. They are the first companies to try out artificial intelligence to see if it could automate tasks that require complex reasoning.
These innovators tend to get blank stares from their cohorts in other traditional IT departments who are grappling with mundane issues such as keeping systems running efficiently. Leading-edge companies have the luxury of pushing the bounds of what is possible. There is a tremendous amount to be learned from their experiments with technology. In fact, there is often more to be learned from their failures than from their successes, precisely because they are probing the limits of current technology.
So, what did I take away from my conversation? From my colleague’s view, the cloud today is about “how many virtual machines you need, how big they are, and linking those VMs to storage.” Not a very compelling picture, but it is his perception of the reality of the cloud today. His view of the future requirements is quite intriguing.
I took away six key issues that this advanced planner would like to see in the evolution of cloud computing:
One. Automation of placement of assets is critical. Where you actually put capability matters. For example, there are certain workloads that should never leave the physical data center because of regulatory requirements. If an organization were dealing with huge amounts of data it would not be efficient to place elements of that data on different cloud environments. What about performance issues? What if a task needs to be completed in 10 seconds, or what if it needs to be completed in 5 milliseconds? There are many decisions that need to be made based on corporate requirements. Should the decision on where to place workloads be hard-coded programmatically? The answer is no. There should be an automated process, driven by business rules, that determines the actual placement of cloud services.
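As a minimal sketch of what such rule-based placement might look like (the rules, thresholds, and workload attributes here are hypothetical illustrations, not any vendor’s actual engine):

```python
# Hypothetical sketch: business rules, not hard-coded logic in each
# application, decide where a workload is allowed to run.
def place_workload(workload: dict) -> str:
    """Return a placement target for a workload based on business rules."""
    # Regulatory rule: some workloads must never leave the data center.
    if workload.get("regulated", False):
        return "on-premises"
    # Performance rule: a millisecond-scale deadline rules out remote clouds.
    if workload.get("max_latency_ms", float("inf")) < 10:
        return "on-premises"
    # Data-gravity rule: keep very large data sets in one environment.
    if workload.get("data_size_tb", 0) > 10:
        return "private-cloud"
    return "public-cloud"

print(place_workload({"regulated": True}))        # on-premises
print(place_workload({"max_latency_ms": 5}))      # on-premises
print(place_workload({"data_size_tb": 50}))       # private-cloud
print(place_workload({"max_latency_ms": 10000}))  # public-cloud
```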
Two. Avoiding concentration of risk. How do you actually place core assets onto a hypervisor? If, for example, you have a highly valuable set of services that are critical to decision makers, you might want to ensure that they run on different hypervisors, based on automated management processes and rules.
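A minimal sketch of that anti-affinity idea, assuming a hypothetical list of hypervisors and a simple one-replica-per-host rule:

```python
# Hypothetical sketch: spread replicas of a critical service across
# distinct hypervisors so no single host concentrates the risk.
def spread_replicas(replicas: list, hypervisors: list) -> dict:
    if len(hypervisors) < len(replicas):
        raise ValueError("not enough hypervisors to avoid co-location")
    # zip pairs each replica with a different hypervisor.
    return dict(zip(replicas, hypervisors))

placement = spread_replicas(
    ["pricing-svc-1", "pricing-svc-2", "pricing-svc-3"],
    ["hv-a", "hv-b", "hv-c", "hv-d"],
)
print(placement)  # each replica lands on a distinct hypervisor
```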
Three. Quality of service needs a control fabric. If you are a customer of hybrid cloud computing services you might need access to the code that tells you what tasks a tool is actually doing. What does that tool actually touch in the cloud environment? What do the error messages mean and what are the implications? Today many cloud services are black boxes; there is no way for the customer to really understand what is happening behind the scenes. If companies are deploying truly hybrid environments that support a mixed workload, this type of access to the workings of the various tools that monitor and manage quality of service will be critical. From a quality of service perspective, some applications will require dedicated bandwidth to meet requirements. Other applications will not need any special treatment.
Four. Cloud service providers building shared services need an architectural plan to control them as a unit of work. These services will be shared across departments as well as across customers. How do you connect these services? While it might seem simple at the 50,000-foot level, it is actually quite complex because we are talking about linking a set of services together to build a coherent platform. Therefore, as with building any system, there is a requirement to model the “system of services,” then deploy that model, and finally reconcile and tune the results.
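A minimal sketch of that model-deploy-reconcile loop, with hypothetical service names and replica counts standing in for a real model:

```python
# Hypothetical sketch of "model the system of services, deploy the
# model, then reconcile": compare the desired model against what is
# actually running and report the drift that needs correcting.
desired_model = {"catalog": 2, "billing": 3, "identity": 1}  # service -> replicas
actual_state = {"catalog": 2, "billing": 1}                  # what is deployed

def reconcile(desired: dict, actual: dict) -> dict:
    """Return the per-service adjustment needed to match the model."""
    services = set(desired) | set(actual)
    return {s: desired.get(s, 0) - actual.get(s, 0) for s in services}

# billing needs 2 more replicas; identity needs to be created.
print(reconcile(desired_model, actual_state))
```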
Five. Standard APIs protect customers. Should APIs for all cloud services be published and accessible? If companies are to have the freedom to move easily and efficiently among cloud services, then APIs need to be well understood. For example, a company may be using a vendor’s cloud service and discover a tool that addresses a specific problem. What if that vendor doesn’t support that tool? In essence, the customer is locked out from using it. This becomes a problem immediately for innovators. However, it is also an issue for traditional companies that begin to work with cloud computing services and over time realize that they need more service management and more oversight.
Six. Managing containers may be key to the service management of the cloud. A well-designed cloud service has to be service oriented. It needs to be placed in a container without dependencies, since customers will use services in different ways. Therefore, each service needs a set of parameter-driven configurators so that the rules of usage and management are clear. What version of which cloud service should be used under what circumstances? What if the service is designed to execute backups? Can that backup happen across the globe or should it be done in proximity to the data assets? These management issues will become the most important issues for cloud providers in the future.
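A minimal sketch of a parameter-driven configurator for the backup example; the parameter names and backup targets are hypothetical:

```python
# Hypothetical sketch: the rules of usage travel with the service as
# parameters instead of being hard-coded by every caller.
BACKUP_SERVICE_CONFIG = {
    "version": "2.1",               # which version of the service to use
    "data_locality": "same-region", # keep backups near the data assets
                                    # (the alternative would be "global")
}

def choose_backup_target(data_region: str, config: dict) -> str:
    """Pick a backup location according to the service's configurator."""
    if config["data_locality"] == "same-region":
        return f"backup-store-{data_region}"
    return "backup-store-global"

print(choose_backup_target("eu-west", BACKUP_SERVICE_CONFIG))
# -> backup-store-eu-west: the backup stays in proximity to the data
```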
The best thing about talking to people like this architect is that it begins to make you think about issues that aren’t part of today’s cloud discussions. These are difficult issues to solve. However, many of these issues have been addressed for decades in other iterations of technology architectures. Yes, the cloud is a different delivery and deployment model for computing but it will evolve as many other architectures do. The idea of putting quality of service, service management, configuration and policy rules at the forefront will help to transform cloud computing into a mature and effective platform.
When I first started as an industry analyst in the 1980s, IBM software was in dire straits. It was the era when IBM was making the transition from the mainframe to a new generation of distributed computing. It didn’t go well. Even with thousands of smart developers working their hearts out, the first three forays into a new generation of software were abysmal failures. IBM’s new architectural framework, SAA (Systems Application Architecture), didn’t work; neither did the first application built on top of it, OfficeVision. Its first development framework, Application Development Cycle (AD/Cycle), also ended up on the cutting room floor. Now fast forward 20 years and a lot has changed for IBM and its software strategy. While it is easy to sit back and laugh at these failures, they were also a signal to the market that things were changing faster than anyone could have expected. In the 1980s, the world looked very different: programming was procedural, architectures were rigid, and there were no standards except in basic networking.
My perspective on business is that embracing failures and learning from them is the only way to really have success in the future. Plenty of companies that I have worked with over my decades in the industry have made incredible mistakes in trying to lead the world. Most of them make those mistakes and keep making them until they crawl into a hole and die quietly. The companies I admire are the ones that make the mistakes, learn from them, and keep pushing. I’d put IBM, Microsoft, and Oracle in that camp.
But I promised that this piece would be about IBM. I won’t bore you with more IBM history. Let’s just say that over the next 20 years IBM did not give up on distributed computing. So, where is IBM Software today? Since it isn’t time to write the book yet, I will tease you with the five most important observations that I have on where IBM is in its software journey:
1. Common components. If you look under the covers of the technology that is embedded in everything from Tivoli to Information Management and software development, you will see common software components. There is one database engine, a single development framework, and a single analytics backbone. There are common interfaces between elements across a very big software portfolio. So, any management capabilities needed to manage an analytics engine will use Tivoli components, and so on.
2. Analytics rules. No matter what you are doing, being able to analyze the information inside a management environment or a packaged application can make the difference between success and failure. IBM has pushed information management to the top of the stack across its software portfolio. Since we are seeing increasing levels of automation in everything from cars to factory floors to healthcare equipment, collecting and analyzing this data is becoming the norm. This is where Information Management and Service Management come together.
3. Solutions don’t have to be packaged software. More than 10 years ago IBM made the decision that it would not be in the packaged software business. Even as SAP and Oracle continued to build their empires, IBM took a different path. IBM (like HP) is building solution frameworks that over time incorporate more and more best practices and software patterns. These frameworks are intended to work in partnership with packaged software. What’s the difference? Treat packages like ERP as the underlying commodity engine and focus on the business value add.
4. Going cloud. Over the past few years, IBM has been making a major investment in cloud computing and has begun to release some public cloud offerings for software testing and development as a starting point. IBM is investing a lot in security and overall cloud management. Its CloudBurst appliance and packaged offerings are intended to be the opening salvo. In addition, and probably even more important, are the private clouds that IBM is building for its largest customers. Ironically, the growing importance of the cloud may actually be the salvation of the Lotus brand.
5. The appliance lives. Even as we look toward the cloud to wean us off hardware, IBM is placing big bets on hardware appliances. It is actually a good strategy. Packaging all the piece parts into an appliance that can be remotely upgraded and managed is a good sales strategy for companies cutting back on staff but still requiring capabilities.
There is a lot more that is important about this stage in IBM’s evolution as a company. If I had to sum up what I took away from this annual analyst software event, it is that IBM is focused on winning the hearts, minds, and dollars of the business leader looking for ways to innovate. That’s what Smarter Planet is about. Will IBM be able to juggle its place as a software leader with its push into business leadership? It is a complicated task that will take years to accomplish and even longer to assess.
Just when it looked clear where the markets were lining up around data center automation and cloud computing, things change. I guess that is what makes this industry so very interesting. The proposed acquisition by HP of 3Com is a direct challenge to Cisco’s network management franchise. However, the implications of this move go further than what meets the eye. It also pits HP directly against EMC and its Cisco partnership. And to make things even more interesting, it puts these two camps in a competitive three-way race against IBM and its cloud/data center automation strategy. And of course, it doesn’t stop there. A myriad of emerging companies like Google and Amazon want a larger share of the enterprise market for cloud services. Companies like Unisys and CSC that have focused on outsourced secure data centers are getting into the act.
I don’t think that we will see a single winner, no matter what any one of these companies will tell you. The winners in this market shift will be those companies that can build a compelling platform and a compelling value proposition for a partner ecosystem. The truth about the cloud is that it is not simply a network or a data center. It is a new way of providing services of all sorts that can support changing customer workloads in a secure and predictable manner.
In light of this, what does this say about HP’s plans to acquire 3Com? If we assume that the network infrastructure is a key component of an emerging cloud and data center strategy, HP is taking a calculated risk in acquiring more assets in this market. HP has found that its ProCurve networking division has begun gaining traction. ProCurve, HP’s networking division, includes network switches, wireless access points, WAN routers, and access control servers and software. ProCurve competes directly with Cisco in the networking switch market. When HP had a tight partnership with Cisco, the company de-emphasized networking. However, once Cisco started to move into the server market, the handcuffs came off. The 3Com acquisition takes the competitive play to a new level. 3Com has a variety of good pieces of technology that HP could leverage within ProCurve. Even more significantly, it picks up a strong security product called TippingPoint, itself a 3Com acquisition. TippingPoint fills a critical hole in HP’s security offering. It provides network security products including intrusion prevention and network packet inspection. The former 3Com subsidiary has also established a database of security threats based on a network of external researchers.
But I think that one of the most important reasons HP bought 3Com is its strong relationships in the Chinese market. In fiscal year 2008, half of 3Com’s revenue came from its H3C joint venture with Chinese vendor Huawei Technologies. Therefore, it is not surprising that HP would have paid a premium to gain a foothold in this lucrative market. If HP is smart, it will leverage 3Com’s many software assets both to build out its networking portfolio and to beef up its software organization. In reality, HP is much more comfortable in the hardware market. Therefore, adding networking as a core competency makes sense. It will also bolster its position as a player in the high-end data center market and in the private cloud space.
Cisco, on the other hand, is coming from the network and moving aggressively into the cloud and the data center market. The company has purchased a position in VMware and has established a tight partnership with EMC as a go-to-market strategy. For Cisco, this gives the company credibility and access to customers outside of its traditional markets. For EMC, the Cisco relationship strengthens its networking play. But an even bigger value of the relationship is presenting a bigger footprint to customers as the partners move to take on HP, IBM, and the assortment of other players who all want to win. The Cisco/EMC/VMware play is to focus on the private cloud. In their view, a private cloud is very similar to a private, preconfigured data center. It can be a compelling value proposition for a customer that needs a data center fast without having to deal with a lot of moving parts. The key question from a cloud computing perspective: is this really a cloud?
It was inevitable that this quiet market dominated by Google and Amazon would heat up as the cloud becomes a real market force. But I don’t expect that HP or Cisco/EMC will have a free run. They are being joined by IBM and Microsoft, among others. The impact could be better options for customers and prices that invariably will fall. The key to success for all of these players will be how well they manage what will be an increasingly heterogeneous, federated, and highly distributed hardware and software world. Management comes in many flavors: management of the highly distributed services and management of the workloads.
I had two interesting discussions over the past few weeks: one with an IT manager and the other with Rhett Glauser and Matt French from Service-Now. Both discussions related to the issue of managing service processes in complex computing environments. Let me start with the IT manager. He is charged with taking his organization’s web presence from a 1990s architecture to a modern Web 2.0 design that will enable better support for customers and partners. It is a big effort with lots of interaction with the customer-facing departments about what they want and with the IT organization about how this new environment will be supported. Now, this part isn’t out of the ordinary, and it is not what this manager was having problems with. He was being driven crazy by process. The company he works for is devoted to ITIL (Information Technology Infrastructure Library). ITIL is a set of best practices designed to help companies create a common way to troubleshoot problems and manage complex services. The practices are intended as guidelines, not step-by-step instructions for managing service processes. In fact, ITIL best practices mandate that you start with your strategy for managing services before you get involved in the details.
The IT manager’s problem is that his company’s IT department was so embroiled in process that it was causing excessive delays in getting to a solution. It has a Configuration Management Database (CMDB), a repository for all of the details about an application environment, including who can change something, how a service or an application is configured, and what the change management process is. This company’s problem is that it has set up a change review board that has to review and approve every change for the new environment. Therefore, something that should take a few days to develop is taking six months of endless meetings. In other words, the IT manager’s organization is so caught up in process that it is actually crippling its ability to get the job done. According to the IT manager, “It’s bureaucracy gone mad! This approach will not help make IT more responsive; it will do the opposite.”
I thought about that discussion in the context of a great call I had with Matt French, director of marketing and product strategy, and Rhett Glauser, communications manager, at Service-Now, an IT service desk software as a service company. What did they think of my friend’s tale of woe? They agreed that this is a common story they hear from customers. Many customers are beginning to understand that they have to take a pragmatic view of process. Their top recommendation was that companies should approach ITIL in a phased way.
So, here are some recommendations about how to handle process in the context of driving business value:
- Establish a lightweight CMDB by focusing only on the configuration items that the organization really needs (a minimal sketch of this idea follows the list). If a process isn’t likely to change, it might not be necessary to track it. You don’t need a change management process for everything.
- Get IT management to take a step back from relying too heavily on IT processes. Rather, management needs to focus on what is important to the business and then execute in a pragmatic way.
- Every service should have a business owner who can make decisions.
- When a change management process is required, make sure that there is a change advisory board. There needs to be one person who has the authority to manage each change in the context of the business drivers. The change advisory board should expedite the process, not become a bottleneck.
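As referenced in the first recommendation, here is a minimal sketch of what a lightweight CMDB might look like; the fields and items are hypothetical illustrations, not Service-Now’s actual data model:

```python
from dataclasses import dataclass

# Hypothetical sketch of a lightweight CMDB: track only the
# configuration items the organization really needs, and record which
# ones genuinely require the change-management process.
@dataclass
class ConfigItem:
    name: str
    business_owner: str          # the owner who can make decisions
    needs_change_approval: bool  # skip the advisory board when False

cmdb = {
    "customer-portal": ConfigItem("customer-portal", "marketing", True),
    "dev-sandbox": ConfigItem("dev-sandbox", "engineering", False),
}

def requires_review(item_name: str) -> bool:
    """Only items flagged for approval go to the change advisory board."""
    return cmdb[item_name].needs_change_approval

print(requires_review("dev-sandbox"))      # False: no board meeting needed
print(requires_review("customer-portal"))  # True: route through the board
```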
In the end it is about common sense. If IT organizations are going to be effective in managing business requirements, they have to look at service management in the context of the overall priorities of the business. This was the key message our team was aiming for when we wrote Service Management for Dummies. Service management is increasingly defining not only how we manage IT environments but how we manage businesses. Therefore, a streamlined view of process management will be the difference between success and failure.
Almost every conversation I have had over the past year or so comes back to security in the cloud. Is it really secure? Or: we are thinking about implementing the cloud, but we are worried about security. There are, of course, good reasons to plan a cloud security strategy. But in a sense, it is no different than planning a security strategy for your company. But it is the big scary cloud! Well, before I list the top issues I would like to say one thing: if you think you need an entirely different security strategy for the cloud, you may not have a comprehensive security strategy to start with. Yes, you have to make sure that your cloud provider has a sophisticated approach to security. However, what about your Internet service provider? What about the level of security within your own IT department? Can you throw stones if you live in a glass house (yes, that is a pun… sorry)? So, before you start fretting about security in the cloud, get your own house in order. Do you have an identity management plan? Do you ensure that no single individual within the data center can control all of the data within a single environment, to minimize risk? If you don’t have a well-executed internal security plan, you aren’t ready for the cloud. But let’s say that you have fixed that problem and you are ready to really plan your cloud security strategy. So, here are five of the issues to consider. If you have others, let’s start a conversation.
1. You need to start at the beginning by understanding the characteristics of your cloud provider. Is the company well funded? Is its data center designed with security at the center? Your level of scrutiny will also depend on how you are using the cloud. If you are using Infrastructure as a Service for a short-term project, there is less risk than if you are planning to use a cloud to store important customer data.
2. How is your cloud provider implementing security in a multi-tenant environment? How do they ensure that one customer’s data doesn’t impact another customer’s data?
3. Does your cloud provider give you the ability to monitor security of your data in the cloud? This will be important both for compliance and to keep track of your own security policies.
4. Does your cloud provider encrypt your critical data? If not, why not?
5. Does your cloud provider give you the ability to control who is allowed to access your information based on roles and authorization? Does the cloud provider support federated identity management? These are basic security best practices (a minimal sketch of role-based access follows this list).
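As referenced in the fifth item, here is a minimal sketch of role-based access control; the roles and permissions are hypothetical illustrations:

```python
# Hypothetical sketch of role-based access control: access is granted
# by role and authorization, not handed out per individual.
ROLE_PERMISSIONS = {
    "auditor": {"read"},
    "analyst": {"read"},
    "db-admin": {"read", "write"},
}

def is_allowed(role: str, action: str) -> bool:
    """Check whether a role is authorized to perform an action."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("auditor", "read"))   # True
print(is_allowed("analyst", "write"))  # False: role lacks authorization
```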
Now you are probably saying to yourself that this isn’t rocket science. These are fundamental security approaches that any data center should follow. I recommend that you take a look at a great document published by the Cloud Security Alliance that details many of the key issues surrounding security in the cloud. So, I guess my principal message is that cloud security is no different than security in any data center. But the market does not seem to understand this, because the perception is that a cloud is somehow not a data center that can be secured with regular old security. I think that we will see something interesting happen because of this perception: cloud vendors will begin to charge a premium for really good security. In fact, this is already happening. Vendors like Amazon and Salesforce are offering segregated implementations of their environments to customers who don’t trust the ordinary security approaches. This will work in the short term, primarily because during this early phase of the cloud there is not enough focus on security. Long term, as the market matures, cloud vendors will have to demonstrate their ability to provide a secure environment based on basic security best practices. In the meantime, cloud vendors will rake in the cash for premium secure cloud services.