Archive for the ‘Hypervisors’ Category

HP’s Ambitious Cloud Computing Strategy: Can HP Emerge as a Power?

February 15, 2011

To comprehend HP’s cloud computing strategy you have to first understand HP’s Matrix Blade System.  HP announced the Matrix system in April of 2009 as a prepackaged fabric-based system.  Because Matrix was designed as a packaged environment, it has become the linchpin of HP’s cloud strategy.

So, what is Matrix?  Within this environment, HP has pre-integrated servers, networking, storage, and software (primarily orchestration to customize workflow). In essence, Matrix is a unified computing system that supports both physical blades and virtual configurations. It includes a graphical command center console to manage resource pools, physical and virtual servers, and network connectivity. On the software side, Matrix provides an abstraction layer that supports workload provisioning and workflow-based policy management that can determine where workloads will run. The environment supports the VMware hypervisor, open source KVM, and Microsoft’s Hyper-V.
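To make the workflow-based placement idea concrete, here is a minimal sketch in Python of how a policy engine might decide where a workload runs across mixed hypervisor pools. The pool names, attributes, and rules are my own invention for illustration; they are not HP’s actual software or data model.

    # Toy illustration of policy-based workload placement across resource pools.
    # Pool names, attributes, and the policy are hypothetical, not HP's model.
    pools = [
        {"name": "prod-vmware", "hypervisor": "vmware",  "free_cores": 16, "free_gb": 64},
        {"name": "test-kvm",    "hypervisor": "kvm",     "free_cores": 32, "free_gb": 128},
        {"name": "win-hyperv",  "hypervisor": "hyper-v", "free_cores": 8,  "free_gb": 32},
    ]

    def place(workload, pools):
        """Return the pool that satisfies the workload's policy, or None."""
        candidates = [
            p for p in pools
            if p["hypervisor"] in workload["allowed_hypervisors"]
            and p["free_cores"] >= workload["cores"]
            and p["free_gb"] >= workload["memory_gb"]
        ]
        # Simple policy: prefer the candidate with the most free cores.
        return max(candidates, key=lambda p: p["free_cores"], default=None)

    erp = {"name": "erp-app", "cores": 8, "memory_gb": 32,
           "allowed_hypervisors": ["vmware", "hyper-v"]}
    print(place(erp, pools)["name"])   # -> prod-vmware

In a real orchestration workflow the policy would also weigh things like affinity rules, licensing, and service levels, but the basic shape is the same.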

HP’s strategy is to combine this Matrix system, which it has positioned as its private cloud, with a public compute cloud. In addition, HP is incorporating its lifecycle management software and its security acquisitions as part of its overall cloud strategy. It is leveraging HP Services (formerly EDS) to offer a hosted private cloud and traditional outsourcing as part of an overall plan. HP is hoping to leverage its services expertise in running large enterprise packaged software.

There are three components to the HP cloud strategy:

  • CloudSystem
  • Cloud Service Automation
  • Cloud Consulting Services

CloudSystem. What HP calls CloudSystem is, in fact, based on the Matrix blade system. The Matrix Blade System uses a common rack enclosure to support all the blades produced by HP. Matrix packages what HP calls an operating environment that includes provisioning software, virtualization, a self-service portal, and management tools to manage resource pools. HP considers its public cloud services to be part of the CloudSystem.  To provide a hybrid cloud computing environment, HP will offer public compute cloud services similar to what is available from Amazon EC2.  When combined with the outsourcing services from HP Services, HP contends that it provides a common architectural framework across public, private, virtualized servers, and outsourcing.  It includes what HP is calling cloud maps. Cloud maps are configuration templates based on HP’s acquisition of Stratavia, a database and application automation software company.
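HP has not, as far as I have seen, published the internal format of these cloud maps, so the following is a purely hypothetical sketch of what such a configuration template might capture. The field names and values are mine, not HP’s.

    # Hypothetical "cloud map" style configuration template; fields are illustrative only.
    cloud_map = {
        "name": "three-tier-web-app",
        "tiers": [
            {"role": "web", "image": "rhel-apache", "vcpus": 2, "memory_gb": 4,  "count": 4},
            {"role": "app", "image": "rhel-jboss",  "vcpus": 4, "memory_gb": 8,  "count": 2},
            {"role": "db",  "image": "rhel-oracle", "vcpus": 8, "memory_gb": 32, "count": 1},
        ],
        "network": {"vlan": "app-tier", "load_balancer": True},
        "service_level": {"availability": "99.9%", "backup": "nightly"},
    }

The point of a template like this is that the provisioning software, rather than a person, turns the description into running servers, storage, and network connections.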

Cloud Service Automation.  The CloudSystem is intended to make use of services automation software called Cloud Service Automation (CSA). The components of CSA include a self-service portal that manages a service catalog. The service catalog describes each service that is intended to be used as part of the cloud environment, and within the catalog the required service level is defined. In addition, CSA can meter the use of services and can provide visibility into the performance of each service. A second capability is a cloud controller, based on the orchestration technology from HP’s Opsware acquisition. A third component, the resource manager, provides provisioning and monitoring services.  The objective of CSA is to provide end-to-end lifecycle management of the CloudSystem.
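To illustrate conceptually what a catalog entry with a defined service level and metering might look like, here is a small hypothetical Python sketch. None of the class or field names come from HP’s actual CSA product.

    # Conceptual sketch of a service catalog entry plus simple usage metering.
    # Classes and fields are hypothetical, not HP CSA's real API.
    from dataclasses import dataclass

    @dataclass
    class CatalogEntry:
        name: str
        description: str
        service_level: str          # e.g. "gold: 99.9% availability"
        hourly_rate: float          # chargeback rate per instance-hour
        usage_hours: float = 0.0    # metered consumption

        def record_usage(self, hours: float) -> None:
            self.usage_hours += hours

        def charge(self) -> float:
            return self.usage_hours * self.hourly_rate

    catalog = [
        CatalogEntry("dev-vm", "Development virtual server", "bronze: best effort", 0.10),
        CatalogEntry("prod-db", "Managed database instance", "gold: 99.9% availability", 1.25),
    ]

    catalog[1].record_usage(720)    # one month of continuous use
    print(f"prod-db charge: ${catalog[1].charge():.2f}")   # -> prod-db charge: $900.00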

Cloud Consulting Services. HP is taking advantage of EDS’s experience in managing computing infrastructure as the foundation for its cloud consulting services offerings. HP also leverages the consulting services that were traditionally part of HP as well as those that came from EDS.  As a result, HP has deep experience in designing and running cloud seminars and strategy engagements for customers.

From HP’s perspective, it is taking a hybrid approach to cloud computing. What does HP mean by hybrid? Basically, HP’s hybrid strategy combines the CloudSystem (a hardware-based private cloud), its own public compute services, and traditional outsourcing.

The Bottom Line.  Making the transition to becoming a major cloud computing vendor is complicated.  The market is young and still in transition. HP has many interesting building blocks that have the potential to make it an important player.  Leveraging the Matrix Blade System is a pragmatic move since it is already an integrated and highly abstracted platform. However, HP will have to provide more services that increase the ability of its customers to use the CloudSystem to create an elastic and flexible computing platform.  Cloud Service Automation is a good start but still requires more evolution.  For example, it needs to add more capabilities to its service catalog.  Leveraging its Systinet registry/repository as part of its service catalog would be advisable.  I also think that HP needs to package its security offerings to be cloud specific, both in the governance and compliance area and in identity management.

Just how aggressively HP plans to compete in the public cloud space is uncertain.  Can HP be effective in both markets? Does it need to combine its offerings or create two different business models?

It is clear that HP wants to make cloud computing the cornerstone of its “Instant-On Enterprise” strategy announced last year. In essence, the Instant-On Enterprise is intended to make it easier for customers to consume data center capabilities including infrastructure, applications, and services.  This is a good vision in keeping with what customers need.  And plainly cloud computing is an essential ingredient in achieving this ambitious strategy.

Is application portability possible in the cloud?

October 8, 2009

Companies are trying to get a handle on the costs involved in running data centers. In fact, this is one of the primary reasons that companies are looking to cloud computing to make the headaches go away.  Like everything else in the complex world of computing, clouds solve some problems, but they also cause the same type of lock-in problems that our industry has experienced for a few decades.

I wanted to add a little perspective before I launch into my thoughts about portability in the cloud.  So, I was thinking about traditional data centers and how their performance has long been hampered by their lack of homogeneity.  The typical data center is filled with a warehouse of different hardware platforms, operating systems, applications, and networks, to name but a few.  You might want to think of them as archeological digs, tracing the history of the computer industry.  To protect their turf, vendors each came up with their own platforms, proprietary operating systems, and specialized applications that would only work on a single platform.

In addition to the complexities involved in managing this type of environment, the applications that run in these data centers are also trapped.   In fact, one of the main reasons that large IT organizations ended up with so many different hardware platforms running a myriad of different operating systems was because applications were tightly intertwined with the operating system and the underlying hardware.

As we begin to move towards the industrialization of software, there has been an effort to separate the components of computing so that application code is separate from the underlying operating system and the hardware. This has been the allure of both service oriented architectures and virtualization.  Service orientation has enabled companies to create clean web services interfaces and to create business services that can be reused for a lot of different situations.  SOA has taught us the business benefits that can be gained from encapsulating existing code so that it is isolated from other application code, operating systems and hardware.
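The encapsulation idea is simple enough to show in a few lines. The sketch below wraps a hypothetical piece of legacy logic behind a clean, reusable interface; both the legacy function and the service class are invented for illustration.

    # Toy illustration of encapsulating legacy logic behind a clean service interface.
    def legacy_credit_check(raw_record: str) -> str:
        # Imagine this is old code with platform-specific quirks and a clumsy input format.
        customer_id, balance = raw_record.split("|")
        return "APPROVED" if float(balance) < 5000 else "REVIEW"

    class CreditCheckService:
        """The clean interface callers reuse; the legacy details stay hidden behind it."""
        def check(self, customer_id: str, balance: float) -> dict:
            decision = legacy_credit_check(f"{customer_id}|{balance}")
            return {"customer_id": customer_id, "decision": decision}

    print(CreditCheckService().check("C-1001", 1200.0))
    # -> {'customer_id': 'C-1001', 'decision': 'APPROVED'}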

Server virtualization takes the existing “clean” interface between the hardware and the software and separates the two. One benefit that has fueled rapid adoption and market growth is that no software has to be rewritten: the x86 instructions the software expects remain the same. As server virtualization moves into the data center, companies can consolidate a massive number of dramatically underutilized machines onto far fewer machines that are used in a much more efficient manner. The resultant cost savings from server virtualization include reductions in physical boxes, maintenance, cooling, power, and overhead.
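A back-of-the-envelope calculation shows why the consolidation math is so compelling. The utilization figures below are made up for illustration, not measurements from any customer.

    # Rough, illustrative consolidation math; all figures are hypothetical.
    physical_servers = 200
    avg_utilization = 0.10        # a typical underutilized box running at about 10%
    target_utilization = 0.60     # how hard we are willing to drive a virtualized host
    vms_per_host_limit = 20       # practical cap on virtual machines per host

    total_load = physical_servers * avg_utilization       # total useful work
    hosts_by_capacity = total_load / target_utilization   # hosts needed at 60% utilization
    hosts_by_density = physical_servers / vms_per_host_limit
    hosts_needed = int(max(hosts_by_capacity, hosts_by_density))

    print(hosts_needed)   # -> 33 hosts, versus the original 200 boxes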

Server virtualization has enabled users to create virtual images to recapture some efficiency in the data center.  And although it fixes the problem of operating systems bonded to hardware platforms, it does nothing to address the intertwining of applications and operating systems.

Why bring this issue up now? Don’t we have hypervisors that take care of all of our problems of separating operating systems from applications? Don’t companies simply spin up another virtual image, and that is the end of the story?  I think the answer is no – especially with the projected growth of the cloud environment.

I got to thinking about this issue after having a fascinating conversation with Greg O’Connor, CEO of AppZero.  AppZero’s value proposition is quite interesting.  In essence, AppZero provides an environment that separates the application from the underlying operating system, effectively moving up to the next level of the stack.

The company’s focus is particularly on the Windows operating system, and for good reason. Unlike Linux or z/OS, the Windows operating system does not allow applications to operate in a partition.  Partitions effectively isolate applications from one another so that if a bad thing happens to one application it cannot affect another application.  Because it is not possible to separate or isolate applications in a Windows-based server environment, when something goes wrong with one application it can hurt the rest of the system and the other applications running in Windows.

In addition, when an application is loaded into Windows, DLLs (Dynamic Link Libraries) are often loaded into the operating system. DLLs are shared across applications, and installing a new application can overwrite the current DLL of another application. As you can imagine, this conflict can have really bad side effects.

Even when applications are installed on different servers (physical or virtual), installing software in Windows is a complicated issue. Applications create registry entries, modify the registry entries of shared DLLs, and copy new DLLs over shared libraries. This arrangement works fine unless you want to move that application to another environment. Movement requires a lot of work for the organization making the transition to another platform. It is especially complicated for independent software vendors (ISVs) that need to be able to move their applications to whichever platform their customers prefer.
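The DLL problem is easy to see with a toy model. The sketch below simulates a shared library directory in which installing one application silently replaces a DLL that another application depends on. This is a simplified illustration, not how Windows actually manages DLLs.

    # Toy simulation of "DLL hell": a shared DLL overwritten by a later install.
    shared_dlls = {}    # dll name -> version, standing in for a shared system directory

    def install(app_name, required_dlls):
        for dll, version in required_dlls.items():
            # A naive installer copies its own DLL version over whatever is already there.
            shared_dlls[dll] = version
        print(f"{app_name} installed; shared DLLs now: {shared_dlls}")

    install("AppA", {"report.dll": "1.2"})
    install("AppB", {"report.dll": "2.0"})   # overwrites the 1.2 that AppA was tested against
    # AppA now loads report.dll 2.0 at runtime and may fail in hard-to-diagnose ways.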

The problem gets even more complex when you start looking at issues related to Platform as a Service (PaaS).  With a PaaS platform, a customer is using a cloud service that includes everything from the operating system to application development tools and a testing environment.  Many PaaS vendors have created their own languages to be used to link components together.  While there are benefits to having a well-architected development and deployment cloud platform, there is a huge danger of lock-in.  Now, most of the PaaS vendors that I have been talking to promise that they will make it easy for customers to move from one cloud environment to another.  Of course, I always believe everything a vendor tells me (that is meant as a joke, to lighten the mood), but I think that customers have to be wary about these claims of interoperability.

That was why I was intrigued with AppZero’s approach. Since the company decouples the operating system from the application code, it provides portability of pre-installed applications from one environment to the next.  The company positions its approach as a virtual application appliance. In essence, this software is designed as a layer that sits between the operating system and the application. This layer intercepts file I/O and shared memory I/O, as well as specific DLLs, and keeps them in separate “containers” that are isolated from the application code.

Therefore, the actual application does not change any of the files or registry entries on a Windows server. In this way, a company could run a single instance of the Windows Server operating system. In essence, the approach isolates the applications and their specific dependencies and configurations from the operating system, so fewer operating system instances are needed to manage a Microsoft Windows Server-based data center.
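I do not have visibility into AppZero’s internals, but the redirection idea itself can be sketched in a few lines: intercept file operations and point them at a per-application container directory instead of the shared system locations. Everything below is my own simplified, conceptual illustration (written against a Unix-style path for brevity), not AppZero’s actual mechanism.

    # Conceptual sketch: redirect an application's file writes into its own container
    # directory so nothing in the shared operating system image is modified.
    import os

    class AppContainer:
        def __init__(self, app_name, root="/tmp/app_containers"):
            self.root = os.path.join(root, app_name)
            os.makedirs(self.root, exist_ok=True)

        def redirect(self, path):
            # Map an absolute path like /etc/payroll.conf into the container.
            return os.path.join(self.root, path.lstrip("/").replace("/", "_"))

        def open(self, path, mode="r"):
            return open(self.redirect(path), mode)

    app = AppContainer("payroll")
    with app.open("/etc/payroll.conf", "w") as f:   # actually lands inside the container
        f.write("db_host=localhost\n")
    print(app.redirect("/etc/payroll.conf"))
    # -> /tmp/app_containers/payroll/etc_payroll.conf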

AppZero enables the user to load an application from the network rather than from the local disk.  It therefore should simplify the job of data center operations management by enabling a single application image to be provisioned to multiple environments, and by making it easier to keep track of changes within a Windows environment because the application isn’t tied to a particular OS.  AppZero has found a niche selling its offerings to ISVs that want to move their offerings across different platforms without having to have people install the application. By having the application pre-installed in a virtual application appliance, the ISV can remove many of the errors that occur when a customer installs the application into their environment.  Delivering the application in a virtual application appliance container greatly reduces the variability of components that might affect the application in a traditional installation process. In addition, the company has been able to establish partnerships with both Amazon and GoGrid.

So, what does this have to do with portability and the cloud? It seems to me that this approach of separating layers of software so that interdependencies do not interfere with portability is one of the key ingredients in software portability in the cloud. Clearly, it isn’t the only issue to be solved. There are issues such as standard interfaces, standards for security, and the like. But I expect that many of these problems will be solved by a combination of lessons learned from existing standards from the Internet, web services, Service Orientation, systems and network management. We’ll be ok, as long as we don’t try to reinvent everything that has already been invented.

Can HP Lead in Virtualization Management?

September 15, 2008

HP has been a player in the virtualization market for quite a while.  Its hardware products, including its server blades, have given it a respectable position in the market. In addition, HP has done a great job of being an important partner to key virtualization software players including VMware, Red Hat, and Citrix. It is also establishing itself as a key Microsoft partner as Microsoft moves boldly into virtualization with Hyper-V.  Thus far, HP’s virtualization strategy has not focused on software. That has started to change.  Now, if this had been the good old days, I think we would have seen a strategy that focused on cooler hardware and data center optimization. Now, don’t get me wrong — HP is very much focused on the hardware and the data center. But now there is a new element that I think will be important to watch.

HP is finally leveraging its software assets in the form of virtualization management.  If I were cynical I would say, it’s about time.  But to be fair, HP has added a lot of new assets to its software portfolio in the last couple of years that make a virtualization management strategy more possible and more believable.

It is interesting that when a company has key assets to offer customers, it often strengthens the message. I was struck by what I thought was a clear message that I found on one of the slides from its marketing pitch: “Your applications and business services don’t care where resources are, how they’re connected or how they’re managed, and neither should you.”  This statement struck me as precisely the right message in this crazy, overhyped virtualization market.  Could it be that HP is becoming a marketing company?

As virtualization goes mainstream, I predict that management of this environment will become the most important issue for customers. In fact, this is the message I have gotten loud and clear from customers trying to virtualize their applications on servers.  Couple this with the reality that no company virtualizes everything, and even if it did it would still have a physical environment to manage.  Therefore, HP focuses its strategy on a plan to manage the composite of physical and virtual.  Of course, HP is not alone here. I was at Citrix’s industry analyst meeting last week and they are adopting this same strategy. I promise that my next blog will be about Citrix.

HP is calling its virtualization strategy its Business Management Suite.  While this is a bit generic, HP is trying to leverage the hot business service management platform and wrap virtualization with it.  Within this wrapper, HP is including four components:

  • Business Service Management — the technique for linking services across the physical and virtual worlds. This is intended to monitor the end-to-end health of the overall environment (see the sketch after this list).
  • Business Service Automation — a technique for provisioning assets for distributed computing.
  • IT Service Management — a technique for discovering what software is present and what licenses need to be managed.
  • Quality Management — a technique for testing, scheduling, and provisioning resources across platforms. Many companies are starting to use virtualization as a way of testing complex composite applications before putting them into production. Companies are testing for both application quality and performance under different loads.
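As promised above, here is a rough sketch of the idea behind the first component: roll the health of the physical and virtual pieces that a business service depends on up into a single end-to-end status. The service map, component names, and statuses are all hypothetical.

    # Conceptual roll-up of business service health across physical and virtual parts.
    STATUS_RANK = {"ok": 0, "degraded": 1, "down": 2}

    component_health = {
        "web-vm-01": "ok",          # virtual machine
        "web-vm-02": "degraded",    # virtual machine
        "esx-host-07": "ok",        # physical hypervisor host
        "san-array-2": "ok",        # physical storage
    }

    service_map = {"order-entry": ["web-vm-01", "web-vm-02", "esx-host-07", "san-array-2"]}

    def service_health(service):
        # A business service is only as healthy as its worst component.
        statuses = [component_health[c] for c in service_map[service]]
        return max(statuses, key=lambda s: STATUS_RANK[s])

    print(service_health("order-entry"))   # -> degraded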

I am encouraged that HP seems to understand the nuances of this market.  HP’s strategy is to position itself as the “Switzerland” of the virtualization management space.  It is therefore creating a platform that includes infrastructure to manage across IBM, Microsoft, VMware, Citrix, and Red Hat.  To that end, it is positioning its management assets from its heritage software (OpenView) and its acquisitions to execute this strategy. For example, its IT Service Management offering is intended to manage compliance with license terms and conditions as well as chargebacks across heterogeneous environments. Its Asset Manager is intended to track virtualized assets through its discovery and dependency mapping tools.  HP’s Operations Manager has extended its performance agents so that it can monitor capabilities from virtual machines to hypervisors.  The company’s SiteScope provides agentless monitoring of VMware hypervisors.  HP Network Node Manager has extended support for monitoring virtual networks.

HP’s goal is to focus on the overall health of these distributed, virtualized services from an availability, performance, capacity planning, end-user experience, and service level management perspective.  It is indeed an ambitious plan that will take some time to develop, but it is the right direction. I am particularly impressed with the partner program that HP is evolving around its CMDB (Configuration Management Database).  It is partnering with VMware on a joint development initiative to provide a federated CMDB that can collect information from a variety of hosts and guest hosts in an on-demand approach. Other companies such as Red Hat and Citrix have joined the CMDB program.
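The federation idea is worth a small sketch of its own: instead of copying everything into one database, a federated CMDB queries each provider on demand and merges what comes back. The providers and record formats below are invented for illustration; they are not HP’s or VMware’s actual interfaces.

    # Conceptual sketch of a federated CMDB: query each source on demand and merge.
    def query_virtualization_manager():
        # Stand-in for an on-demand call to a virtualization management host.
        return [{"ci": "web-vm-01", "type": "vm", "host": "esx-host-07"},
                {"ci": "esx-host-07", "type": "hypervisor-host", "guests": 12}]

    def query_asset_manager():
        # Stand-in for an on-demand call to a physical asset repository.
        return [{"ci": "esx-host-07", "type": "physical-server", "location": "DC-1"}]

    def federated_lookup(ci_name, providers):
        """Ask every provider for the CI and merge the attributes they return."""
        merged = {}
        for provider in providers:
            for record in provider():
                if record["ci"] == ci_name:
                    merged.update(record)
        return merged

    print(federated_lookup("esx-host-07", [query_virtualization_manager, query_asset_manager]))
    # -> {'ci': 'esx-host-07', 'type': 'physical-server', 'guests': 12, 'location': 'DC-1'}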

This is an interesting time in the virtualization movement.  As virtualization matures, companies are starting to realize that simply virtualizing an application on a server does not by itself save the time and money they anticipated.  The world is a lot more complicated than that.  Management wants to understand how the entire environment is part of delivering value.  For example, an organization might put all of its call center personnel on a virtualized platform, which works fine until an additional 20 users with heavy demands on the server suddenly cause performance to falter.  In other situations, everything works fine until there is a software error somewhere in the distributed environment.  The virtualized environment suddenly fails and it is very difficult for IT operations to diagnose the problem. This is when management stops getting excited about how wonderful it is that they can virtualize hundreds of users onto a single server and starts worrying about the quality of service and the reputation of the organization overall.

The bottom line is that HP seems to be pulling the right pieces together for its virtualization management strategy. It is indeed still early. Virtualization itself is only the tip of the distributed computing marketplace.  HP will have to continue to innovate on its own while investing in its partner ecosystem. Today partners are eager to work with HP because it is a good partner and non-threatening.  But HP won’t be alone in the management of virtualization.  I expect that other companies like IBM and Microsoft will be very aggressive in this market.  HP has a little breathing room right now that it should take advantage of before things change again. And they always change again.