
Posts Tagged ‘Virtualization’

HP’s Ambitious Cloud Computing Strategy: Can HP Emerge as a Power?

February 15, 2011

To comprehend HP’s cloud computing strategy, you first have to understand HP’s Matrix Blade System.  HP announced the Matrix system in April of 2009 as a prepackaged, fabric-based system.  Because Matrix was designed as a packaged environment, it has become the linchpin of HP’s cloud strategy.

So, what is Matrix?  Within this environment, HP has pre-integrated servers, networking, storage, and software (primarily orchestration to customize workflow). In essence, Matrix is a unified computing system that supports both physical blades and virtual configurations. It includes a graphical command center console to manage resource pools, physical and virtual servers, and network connectivity. On the software side, Matrix provides an abstraction layer that supports workload provisioning and workflow-based policy management that can determine where workloads will run. The environment supports the VMware hypervisor, open source KVM, and Microsoft’s Hyper-V.

HP’s strategy is to combine this Matrix system, which it has positioned as its private cloud, with a public compute cloud. In addition, HP is incorporating its lifecycle management software and its security acquisitions as part of its overall cloud strategy. It is leveraging HP Services (formerly EDS) to offer a hosted private cloud and traditional outsourcing as part of an overall plan. HP is hoping to leverage its services expertise in running large enterprise packaged software.

There are three components to the HP cloud strategy:

  • CloudSystem
  • Cloud Services Automation
  • Cloud Consulting Services

CloudSystem. What HP calls CloudSystem is, in fact, based on the Matrix blade system. The Matrix Blade System uses a common rack enclosure to support all the blades produced by HP. The Matrix packages what HP calls an operating environment that includes provisioning software, virtualization, a self-service portal, and management tools to manage resource pools. HP considers its public cloud services to be part of the CloudSystem.  To provide a hybrid cloud computing environment, HP will offer public compute cloud services similar to what is available from Amazon EC2.  When combined with the outsourcing services from HP Services, HP contends that it provides a common architectural framework across public, private, virtualized servers, and outsourcing.  It includes what HP is calling cloud maps. Cloud maps are configuration templates based on HP’s acquisition of Stratavia, a database and application automation software company.

Cloud Service Automation.  The CloudSystem is intended to make use of service automation software called Cloud Service Automation (CSA). The components of CSA include a self-service portal that manages a service catalog. The service catalog describes each service that is intended to be used as part of the cloud environment.  Within the catalog, the required service level is defined. In addition, CSA can meter the use of services and can provide visibility into the performance of each service. A second capability is a cloud controller, based on the orchestration technology from HP’s Opsware acquisition. A third component, the resource manager, provides provisioning and monitoring services.  The objective of CSA is to provide end-to-end lifecycle management of the CloudSystem.
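To make the catalog idea concrete, here is a minimal sketch in Python of what a self-service catalog entry with a required service level and simple usage metering could look like. The class and field names are invented for illustration; HP does not publish CSA’s internals in this form.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class ServiceLevel:
    """Required service level recorded alongside each catalog entry."""
    availability_pct: float      # e.g. 99.9
    max_response_ms: int         # worst acceptable response time

@dataclass
class CatalogEntry:
    """One service as it might be described in a self-service catalog."""
    name: str
    description: str
    service_level: ServiceLevel
    usage_log: List[datetime] = field(default_factory=list)

    def record_use(self) -> None:
        """Metering hook: note each time the service is requested."""
        self.usage_log.append(datetime.utcnow())

    def usage_count(self) -> int:
        return len(self.usage_log)

# Example: a provisioning service published in the catalog.
provision_vm = CatalogEntry(
    name="provision-vm",
    description="Provision a virtual server from a standard image",
    service_level=ServiceLevel(availability_pct=99.9, max_response_ms=500),
)
provision_vm.record_use()
print(provision_vm.name, provision_vm.usage_count())
```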

Cloud Consulting Services. HP is taking advantage of EDS’s experience in managing computing infrastructure as the foundation for its cloud consulting services offerings. HP also leverages the consulting services that were traditionally part of HP alongside those from EDS.  As a result, HP has deep experience in designing and running cloud seminars and strategy engagements for customers.

From HP’s perspective, it is taking a hybrid approach to cloud computing. What does HP mean by hybrid? Basically, HP’s hybrid strategy combines the CloudSystem (a hardware-based private cloud), its own public compute services, and traditional outsourcing.

The Bottom Line.  Making the transition to becoming a major cloud computing vendor is complicated.  The market is young and still in transition. HP has many interesting building blocks that have the potential to make it an important player.  Leveraging the Matrix Blade System is a pragmatic move since it is already an integrated and highly abstracted platform. However, HP will have to provide more services that increase the ability of its customers to use the CloudSystem to create an elastic and flexible computing platform.  Cloud Service Automation is a good start but still requires more evolution.  For example, it needs to add more capabilities to its service catalog.  Leveraging its Systinet registry/repository as part of its service catalog would be advisable.  I also think that HP needs to package its security offerings to be cloud specific. This applies both to governance and compliance and to identity management.

Just how much HP plans to compete in the public cloud space is uncertain.  Can HP be effective in both markets? Does it need to combine its offerings or create two different business models?

It is clear that HP wants to make cloud computing the cornerstone of its “Instant-On Enterprise” strategy announced last year. In essence, Instant-on Enterprise is intended to make it easier for customers to consume data center capabilities including infrastructure, applications, and services.  This is a good vision in keeping with what customers need.  And plainly cloud computing is an essential ingredient in achieving this ambitious strategy.

What’s a private cloud anyway?

February 4, 2011

So, in a perfect world, all data centers would magically become clouds and the world would be a better place. All kidding aside, I am tired of all of the hype. Let me put it this way: all data centers cannot and will not become private clouds, at least not for most typical companies. Let me tell you why I say this.  There are some key principles of the cloud that I think are worth recounting:

1. A cloud is designed to optimize and manage workloads for efficiency. Therefore repeatable and consistent workloads are most appropriate for the cloud.

2. A cloud is intended to implement automation and virtualization so that users can add and subtract services and capacity based on demand.

3. A cloud environment needs to be economically viable.

Why aren’t traditional data centers private clouds?  What if a data center adds some self-service and virtualization? Is that enough?  Probably not.  A typical data center is a complex environment.  It is not uncommon for a single data center to support five or six different operating systems, five or six different languages, four or five different hardware platforms, and perhaps 20 or 30 applications of all sizes and shapes, plus an unending number of tools to support the management and maintenance of that environment.  In Cloud Computing for Dummies, written by the team at Hurwitz & Associates, there is a considerable amount written about this issue.  Given an environment like this, it is almost impossible to achieve workload optimization.  In addition, there are often line-of-business applications that are complicated, used by a few dozen employees, and necessary to run the business. There is simply no economic rationale for such applications to be moved to a cloud, public or private.  The only alternative for such an application would be to outsource the application altogether.

So what does belong in the private cloud? Application and business services that are consistent workloads designed to be used on demand by developers, employees, or partners.  Many companies are becoming IT providers to their own employees, partners, customers, and suppliers.  These services are predictable and designed as well-defined components that can be optimized for elasticity. They can be used in different situations, from a single business situation supporting a single customer to a scenario that requires the business to support a huge partner network. Typically, these services can be designed to be used by a single operating system (typically Linux) that has been optimized to support these workloads. Many of the capabilities and tasks within this environment have been automated.
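As a rough illustration of this screening argument, the sketch below scores a workload on repeatability, breadth of use, and need for elasticity before nominating it for a private cloud. The thresholds and example workloads are invented purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    is_repeatable: bool       # consistent, predictable demand pattern
    user_count: int           # how many people actually use it
    needs_elasticity: bool    # must scale up and down on demand

def belongs_in_private_cloud(w: Workload, min_users: int = 100) -> bool:
    """Illustrative screen: repeatable, broadly used, elastic workloads qualify;
    a complicated line-of-business app used by a few dozen people does not."""
    return w.is_repeatable and w.needs_elasticity and w.user_count >= min_users

candidates = [
    Workload("employee-self-service-portal", True, 5000, True),
    Workload("legacy-claims-adjustment-app", False, 30, False),
]
for w in candidates:
    verdict = "private cloud" if belongs_in_private_cloud(w) else "leave in place"
    print(w.name, "->", verdict)
```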

Could there be situations where an entire data center could be a private cloud? Sure, if an organization can plan well enough to limit the elements supported within the data center. I think this will happen with specialized companies that have the luxury of not supporting legacy. But for most organizations, reality is a lot messier.

What will it take to achieve great quality of service in the cloud?

November 9, 2010

You know that a market is about to transition from an early fantasy market when IT architects begin talking about traditional IT requirements. Why do I bring this up as an issue? I had a fascinating conversation yesterday with a leading architect in charge of the cloud strategy for an important company that is typically on the bleeding edge of technology. Naturally, I am not allowed to name the company or the person. But let me just say that individuals and companies like this are the first to grapple with issues such as the need for a registry for web services or the complexity of creating business services that are both reusable and include business best practices. They are the first companies to try out artificial intelligence to see if it could automate complex tasks that require complex reasoning.

These innovators tend to get blank stares from their cohorts in other traditional IT departments who are grappling with mundane issues such as keeping systems running efficiently. Leading edge companies have the luxury to push the bounds of what is possible to do.  There is a tremendous amount to be learned from their experiments with technology. In fact, there is often more to be learned from their failures than from their successes because they are pushing the boundary about what is possible with current technology.

So, what did I take away from my conversation? From my colleague’s view, the cloud today is about “how many virtual machines you need, how big they are, and linking those VMs to storage.” Not a very compelling picture, but it is his perception of the reality of the cloud today.  His view of the future requirements is quite intriguing.

I took away six key issues that this advanced planner would like to see in the evolution of cloud computing:

One.  Automation of asset placement is critical.  Where you actually put capability matters. For example, there are certain workloads that should never leave the physical data center because of regulatory requirements.  If an organization were dealing with huge amounts of data, it would not be efficient to place elements of that data on different cloud environments. What about performance issues? What if a task needs to be completed in 10 seconds, or in 5 milliseconds? There are many decisions that need to be made based on corporate requirements. Should the placement of workloads be something that is coded by hand for each case? The answer is no. There should be an automated process, based on business rules, that determines the actual placement of cloud services.
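A minimal sketch of what such rule-driven placement could look like appears below. The rules, locations, and latency figures are assumptions chosen for illustration; they are not drawn from any particular product.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PlacementRequest:
    workload: str
    regulated: bool          # must stay in the corporate data center
    max_latency_ms: int      # hard response-time requirement

@dataclass
class Location:
    name: str
    on_premises: bool
    typical_latency_ms: int

def place(req: PlacementRequest, locations: List[Location]) -> Optional[Location]:
    """Apply business rules in order: data residency first, then latency."""
    for loc in locations:
        if req.regulated and not loc.on_premises:
            continue                      # regulatory rule: stay on premises
        if loc.typical_latency_ms > req.max_latency_ms:
            continue                      # performance rule: too slow
        return loc
    return None                           # nothing satisfies the rules

locations = [
    Location("corporate-dc", on_premises=True, typical_latency_ms=5),
    Location("public-cloud-east", on_premises=False, typical_latency_ms=40),
]
chosen = place(PlacementRequest("trading-risk-calc", regulated=True, max_latency_ms=10), locations)
print(chosen.name if chosen else "no valid placement")
```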

Two. Avoiding concentration of risk. How do you actually place core assets onto hypervisors? If, for example, you have a highly valuable set of services that are critical to decision makers, you might want to ensure that they run within different hypervisors, based on automated management processes and rules.
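Here is a toy sketch of that anti-affinity idea: critical services are spread across hypervisors in rotation so that no single failure takes them all down. The service and hypervisor names are invented.

```python
from itertools import cycle
from typing import Dict, List

def spread_across_hypervisors(services: List[str], hypervisors: List[str]) -> Dict[str, str]:
    """Anti-affinity sketch: assign each critical service to the next
    hypervisor in rotation so no single host holds all of them."""
    assignment = {}
    pool = cycle(hypervisors)
    for svc in services:
        assignment[svc] = next(pool)
    return assignment

print(spread_across_hypervisors(
    ["pricing-engine", "risk-dashboard", "approval-workflow"],
    ["hypervisor-a", "hypervisor-b"],
))
```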

Three. Quality of service needs a control fabric.  If you are a customer of hybrid cloud computing services, you might need access to the code that tells you what tasks the tool is actually doing. What does that tool actually touch in the cloud environment? What do the error messages mean, and what are the implications? Today many cloud services are black boxes; there is no way for the customer to really understand what is happening behind the scenes. If companies are deploying truly hybrid environments that support a mixed workload, this type of access to the workings of the various tools that monitor and manage quality of service will be critical.  From a quality of service perspective, some applications will require dedicated bandwidth to meet requirements. Other applications will not need any special treatment.

Four.  Cloud Service Providers building shared services need an architectural plan to control them as a unit of work. These services will be shared across departments as well as across customers.  How do you connect these services? While it might seem simple at the 50,000-foot level, it is actually quite complex because we are talking about linking a set of services together to build a coherent platform. Therefore, as with building any system there is a requirement to model the “system of services”, then deploy that model, and finally to reconcile and tune the results.

Five. Standard APIs protect customers.  Should APIs for all cloud services be published and accessible? If companies are to have the freedom to move easily and efficiently between and among cloud services then APIs need to be well understood. For example, a company may be using a vendor’s cloud service and discover a tool that addresses a specific problem.  What if that vendor doesn’t support that tool? In essence, the customer is locked out from using this tool. This becomes a problem immediately for innovators.  However, it is also an issue for traditional companies that begin to work with cloud computing services and over time realize that they need more service management and more oversight.

Six. Managing containers may be key to the service management of the cloud. A well-designed cloud service has to be service oriented. It needs to be placed in a container without dependencies, since customers will use services in different ways. Therefore, each service needs a set of parameter-driven configurators so that the rules of usage and management are clear. What version of which cloud service should be used under what circumstance? What if the service is designed to execute backups? Can that backup happen across the globe, or should it be done in proximity to the data assets?  These management issues will become the most important issues for cloud providers in the future.
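A small sketch of a parameter-driven configurator follows. The parameters shown (version, allowed backup regions, data proximity) are assumptions used to illustrate how “rules of usage” could be encoded, not a description of any vendor’s format.

```python
from dataclasses import dataclass

@dataclass
class ServiceConfig:
    """Parameter-driven configurator for a packaged cloud service.
    The fields encode the 'rules of usage' described in the text."""
    service: str
    version: str
    backup_allowed_regions: tuple   # where backup jobs may run
    keep_near_data: bool            # restrict execution to the data's region

def resolve_backup_region(cfg: ServiceConfig, data_region: str) -> str:
    """Decide where a backup should execute under the configured rules."""
    if cfg.keep_near_data:
        if data_region not in cfg.backup_allowed_regions:
            raise ValueError(f"{cfg.service}: backup not permitted near {data_region}")
        return data_region
    return cfg.backup_allowed_regions[0]

backup_svc = ServiceConfig(
    service="backup",
    version="2.1",
    backup_allowed_regions=("eu-west", "eu-central"),
    keep_near_data=True,
)
print(resolve_backup_region(backup_svc, "eu-west"))
```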

The best thing about talking to people like this architect is that it begins to make you think about issues that aren’t part of today’s cloud discussions.  These are difficult issues to solve. However, many of these issues have been addressed for decades in other iterations of technology architectures. Yes, the cloud is a different delivery and deployment model for computing but it will evolve as many other architectures do. The idea of putting quality of service, service management, configuration and policy rules at the forefront will help to transform cloud computing into a mature and effective platform.



Is application portability possible in the cloud?

October 8, 2009

Companies are trying to get a handle on the costs involved in running data centers. In fact, this is one of the primary reasons that companies are looking to cloud computing to make the headaches go away.  Like everything else in the complex world of computing, clouds solve some problems but they also cause the same type of lock-in problems that our industry has experienced for a few decades.

I wanted to add a little perspective before I launch into my thoughts about portability in the cloud.  So, I was thinking about traditional data centers and how their performance has long been hampered by their lack of homogeneity.  The typical data center is filled with a warehouse of different hardware platforms, operating systems, applications, and networks, to name but a few.  You might want to think of them as archeological digs, tracing the history of the computer industry.   To protect their turf, vendors came up with their own platforms, proprietary operating systems, and specialized applications that would only work on a single platform.

In addition to the complexities involved in managing this type of environment, the applications that run in these data centers are also trapped.   In fact, one of the main reasons that large IT organizations ended up with so many different hardware platforms running a myriad of different operating systems was because applications were tightly intertwined with the operating system and the underlying hardware.

As we begin to move towards the industrialization of software, there has been an effort to separate the components of computing so that application code is separate from the underlying operating system and the hardware. This has been the allure of both service oriented architectures and virtualization.  Service orientation has enabled companies to create clean web services interfaces and to create business services that can be reused for a lot of different situations.  SOA has taught us the business benefits that can be gained from encapsulating existing code so that it is isolated from other application code, operating systems and hardware.

Server virtualization takes the existing “clean” interface between the hardware and the software and separates the two. One benefit, and one that has fueled rapid adoption and market growth, is that there is no need to rewrite software: the x86 instruction interface the software sees stays the same. As server virtualization moves into the data center, companies can consolidate the massive number of machines that are dramatically underutilized onto far fewer machines that are used in a much more efficient manner. The resultant cost savings from server virtualization include reductions in physical boxes, heating, cooling, power, maintenance, and overhead.
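To see the shape of the savings argument, here is some back-of-the-envelope consolidation math in Python. Every number is an assumption chosen for illustration, not a benchmark.

```python
# Back-of-the-envelope consolidation math; every figure here is an assumption
# used only to illustrate the shape of the savings, not a measured result.
physical_servers = 200
avg_utilization = 0.10            # typical underutilized server
target_utilization = 0.60         # what a virtualized host can reasonably sustain
cost_per_server_per_year = 3000   # power, cooling, maintenance, space (assumed)

hosts_needed = round(physical_servers * avg_utilization / target_utilization)
servers_retired = physical_servers - hosts_needed
annual_savings = servers_retired * cost_per_server_per_year

print(f"Consolidate {physical_servers} servers onto ~{hosts_needed} virtualized hosts")
print(f"Roughly ${annual_savings:,} per year avoided in box-related costs")
```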

Server virtualization has enabled users to create virtual images to recapture some efficiency in the data center.  And although it fixes the problem of operating systems bonded to hardware platforms, it does nothing to address the intertwining of applications and operating systems.

Why bring this issue up now? Don’t we have hypervisors that take care of all of our problems of separating operating systems from applications? Don’t companies simply spin up another virtual image and that is the end of the story?  I think the answer is no, especially with the projected growth of the cloud environment.

I got thinking about this issue after having a fascinating conversation with Greg O’Connor, CEO of AppZero.  AppZero’s value proposition is quite interesting.  In essence, AppZero provides an environment that separates the application from the underlying operating system, effectively moving up to the next level of the stack.

The company’s focus is particularly on the Windows operating system, and for good reason. Unlike Linux or z/OS, Windows does not allow applications to operate in a partition.  Partitions effectively isolate applications from one another so that if something bad happens to one application it cannot affect another.   Because it is not possible to separate or isolate applications in a Windows-based server environment, when something goes bad with one application it can hurt the rest of the system and other applications running in Windows.

In addition, when an application is loaded into Windows, DLLs (Dynamic Link Libraries) are often loaded into the operating system. DLLs are shared across applications, and installing a new application can overwrite the current DLL of another application. As you can imagine, this conflict can have really bad side effects.

Even when applications are installed on different servers, physical or virtual, installing software in Windows is a complicated issue. Applications create registry entries, modify the registry entries of shared DLLs, and copy new DLLs over shared libraries. This arrangement works fine unless you want to move that application to another environment. Movement requires a lot of work for the organization making the transition to another platform. It is especially complicated for independent software vendors (ISVs) that need to be able to move their applications to whichever platform their customers prefer.

The problem gets even more complex when you start looking at issues related to Platform as a Service (PaaS).  With a PaaS platform, a customer is using a cloud service that includes everything from the operating system to application development tools and a testing environment.  Many PaaS vendors have created their own languages to be used to link components together.  While there are benefits to having a well-architected development and deployment cloud platform, there is a huge danger of lock-in.  Now, most of the PaaS vendors that I have been talking to promise that they will make it easy for customers to move from one cloud environment to another.  Of course, I always believe everything a vendor tells me (that is meant as a joke, to lighten the mood), but I think that customers have to be wary about these claims of interoperability.

That was why I was intrigued with AppZero’s approach. Since the company decouples the operating system from the application code, it provides portability of pre-installed applications from one environment to the next.  The company positions its approach as a virtual application appliance. In essence, this software is designed as a layer that sits between the operating system and the application. This layer intercepts file I/O and shared memory I/O, as well as specific DLLs, and keeps them in separate “containers” that are isolated from the application code.

Therefore, the actual application does not change any of the files or registry entries on a Windows server. In this way, a company could run a single instance of the Windows Server operating system. In essence, AppZero isolates the applications and their specific dependencies and configurations from the operating system, so fewer operating system instances are needed to manage a Microsoft Windows Server-based data center.
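To illustrate the redirection idea (and only the idea; this is not AppZero’s implementation), the toy sketch below diverts writes aimed at a shared “system” location into a per-application container directory, so the underlying environment is never modified. The paths and names are invented.

```python
from pathlib import Path

class AppContainer:
    """Toy illustration of the redirection idea: writes that would land in a
    shared system location are diverted into a per-application directory,
    so the host environment is never modified. (Not AppZero's implementation.)"""

    def __init__(self, app_name: str, system_root: str = "/tmp/demo-system"):
        self.system_root = Path(system_root)
        self.container_root = Path(f"/tmp/demo-containers/{app_name}")
        self.container_root.mkdir(parents=True, exist_ok=True)

    def _redirect(self, path: str) -> Path:
        """Map a path under the 'system' root into this app's container."""
        rel = Path(path).relative_to(self.system_root)
        target = self.container_root / rel
        target.parent.mkdir(parents=True, exist_ok=True)
        return target

    def write(self, path: str, data: str) -> Path:
        """Perform the write, but inside the container rather than the shared area."""
        target = self._redirect(path)
        target.write_text(data)
        return target

app = AppContainer("payroll")
written = app.write("/tmp/demo-system/shared/settings.ini", "timeout=30\n")
print("Write landed in:", written)   # inside the container, not the shared area
```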

AppZero enables the user to load an application from the network rather than from the local disk.  It therefore should simplify the job of data center operations management by enabling a single application image to be provisioned to multiple environments, and by making it easier to keep track of changes within a Windows environment because the application isn’t tied to a particular OS.   AppZero has found a niche selling its offerings to ISVs that want to move their offerings across different platforms without having people install the application. By having the application pre-installed in a virtual application appliance, the ISV can remove many of the errors that occur when a customer installs the application into their environment.  Delivering the application in a virtual application appliance container greatly reduces the variability of components that might affect the application with a traditional installation process. In addition, the company has been able to establish partnerships with both Amazon and GoGrid.

So, what does this have to do with portability and the cloud? It seems to me that this approach of separating layers of software so that interdependencies do not interfere with portability is one of the key ingredients in software portability in the cloud. Clearly, it isn’t the only issue to be solved. There are issues such as standard interfaces, standards for security, and the like. But I expect that many of these problems will be solved by a combination of lessons learned from existing standards from the Internet, web services, Service Orientation, systems and network management. We’ll be ok, as long as we don’t try to reinvent everything that has already been invented.

Musings from VMworld Conference

September 10, 2009

I spent longer than I typically do at a conference last week when I went to VMworld.  It was quite an active event: lots of customers, lots of cloud technology providers, and lots of integrators. What I took away from the conference were three major observations: the customers attending the conference are busily virtualizing servers; VMware is trying hard to position itself for leadership in both virtualization and the cloud; and well-established vendors are deepening their relationships with VMware while emerging vendors are trying to either fill a void or knock an existing leader out of the ring.

One of the things that really stood out for me was the stage of maturity of the customers. In speaking with attendees, it was clear to me that many VMware customers are in the early stages of moving to the cloud. In fact, most of them are not even thinking about clouds, other than rain clouds. The people attending this year’s event are typical of an emerging market. They are the hard-core developers who have to deal with technology without the benefit of levels of abstraction. These are hard-working developers who have deep expertise in virtualizing servers. Many of these developers have gained a lot of benefit from some of the key innovations that VMware has made over the years. One excellent example is VMware’s VMotion, which enables a developer to migrate a running virtual machine from one physical server to another with no service disruption. I started thinking about what implementing virtualization means to developers because I picked up a handy little guide at the conference called vSphere 4.0 Quick Start Guide: Shortcuts down the path to Virtualization. What struck me from glancing through the book was the level of programming required to configure and implement virtual machines. It is not for the faint of heart. Yes, when you’re done with the hard work of separating the software environment from the hardware, magic starts to happen.

It was interesting to juxtapose this bottom-up virtualization focus with emerging cloud technologies.  Cloud computing is clearly emerging as a strategy for many of the vendors and many of the bosses of the participants at the conference. The cloud leverages virtualization as an enabler, but virtualization is clearly the beginning and not the end. We have seen this so many times before with so many technology trends. You start with the sophisticated developers who want to work at the metal. They get great performance and great benefit for their companies. And then the technology matures and gets abstracted. Here is a good example. In the really, really early days of graphical interfaces, sophisticated programmers wanted nothing to do with an abstracted interface. The command line interface was the one and only way to go. After all, the command line gave them control that they could not imagine having from a graphical interface. How many programmers today would go back to a command line interface? (Probably a few, but no one’s perfect.)

So, I was left with the feeling that we are in between generations of technology at this year’s VMworld. The old world of virtualizing servers is about to be supplanted by the world of abstracting the data center itself. Virtualization is one of the pillars of this transformation, but it is not the end game.

Oracle Plus Sun: What does it mean?

April 20, 2009

I guess this is one way to start a Monday morning. After IBM decided to pass on Sun, Oracle decided that it would be a great idea. While I have as many questions as answers, here are my top ten thoughts about what this combination will mean to the market:

1. Oracle’s acquisition of Sun definitely shakes up the technology market. Now, Oracle will become a hardware vendor, an operating system supplier, and will own Java.

2. Oracle gets a bigger share of the database market with MySQL. Had IBM purchased Sun, it would have been able to claim market leadership.

3. This move changes the competitive dynamics of the market. There are basically three technology giants: IBM, HP, and Oracle. This acquisition will put a lot of pressure on HP since it partners so closely with Oracle on the database and hardware fronts. It should also lead to more acquisitions by both IBM and HP.

4. The solutions market reigns! Oracle stated in its conference call this morning that the company will now be able to deliver top to bottom integrated solutions to its customers including hardware, packaged applications, operating systems, middleware, storage, database, etc. I feel a mainframe coming on…

5. Oracle could emerge as a cloud computing leader. Sun had accumulated some very good cloud computing/virtualization technologies over the last few years. Sun’s big cloud announcement got lost in the frenzy over the acquisition talks but there were some good ideas there.

6. Java gets a new owner. It will be interesting to see how Oracle is able to monetize Java. Will Oracle turn Java over to a standards organization? Will it treat it as a business driver? That answer will tell the industry a lot about the future of both Oracle and Java.

7. What happens to all of Sun’s open source software? Back a few years ago, Sun decided that it would open source its entire software stack. What will Oracle do with that business model? What will happen to its biggest open source platform, MySQL? MySQL has a huge following in the open source world. I suspect that Oracle will not make dramatic changes, at least in the short run. Oracle does have open source offerings although they are not the central focus of the company by a long shot. I assume that Oracle will deemphasize MySQL.

8. Solaris is back. Lately, there has been more action around Solaris. IBM announced support earlier in the year and HP recently announced support services. Now that Solaris has a strong owner, it could shake up the dynamics of the operating system world. It could have an impact on the other gorilla not in the room: Microsoft.

9. What are the implications for Microsoft? Oracle and Microsoft have been bitter rivals for decades. This acquisition will only intensify the situation. Will Microsoft look at some big acquisitions in the enterprise market? Will new partnerships emerge? Competition does create strange bedfellows. What will this mean for Cisco, VMWare, and EMC? That is indeed something interesting to ponder.

10. Oracle could look for a services acquisition next. One of the key differences between Oracle and its two key rivals IBM and HP is in the services space. If Oracle is going to be focused on solutions, we might expect to see Oracle look to acquire a services company. Could Oracle be eyeing something like CSC?

I think I probably posed more questions than answers. But, indeed, these are early days. There is no doubt that this will shake up the technology market and will lead to increasing consolidation. In the long run, I think this will be good for customers. Customers do want to stop buying piece parts. Customers do want to buy a more integrated set of offerings. However, I don’t think that any customer wants to go back to the days where a solution approach meant lock-in. It will be important for customers to make sure that what these big players provide is the type of flexibility they have gotten used to in the last decade without so much pain.

My Top Eleven Predictions for 2009 (I bet you thought there would be only ten)

November 14, 2008

What a difference a year makes. The past year was filled with a lot of interesting innovations and market shifts. For example, Software as a Service went from being something for small companies or departments within large ones to a mainstream option.  Real customers are beginning to solve real business problems with service oriented architecture.  The latest hype is around cloud computing; after all, the software industry seems to need hype to survive. As we look forward into 2009, it is going to be a very different and difficult year, but one that will be full of surprising twists and turns.  Here are my top predictions for the coming year.
One. Software as a Service (SaaS) goes mainstream. It isn’t just for small companies anymore. While this has been happening slowly and steadily, it is rapidly becoming mainstream because with the dramatic cuts in capital budgets companies are going to fulfill their needs with SaaS.  While companies like SalesForce.com have been the successful pioneers, the big guys (like IBM, Oracle, Microsoft, and HP) are going to make a major push for dominance and strong partner ecosystems.
Two. Tough economic times favor the big and stable technology companies. Yes, these companies will trim expenses and cut back like everyone else. However, customers will be less willing to bet the farm on emerging startups with cool technology. The only way emerging companies will survive is to do what I call “follow the pain”. In other words, come up with compelling technology that solves really tough problems that others can’t do. They need to fill the white space that the big vendors have not filled yet. The best option for emerging companies is to use this time when people will be hiding under their beds to get aggressive and show value to customers and prospects. It is best to shout when everyone else is quiet. You will be heard!
Three.  The service oriented architecture market enters the post-hype phase. This is actually good news. We have had in-depth discussions with almost 30 companies for the second edition of SOA for Dummies (coming out December 19th). They are all finding business benefit from the transition. They all view SOA as a journey, not a project.  So, there will be less noise in the market but more good work getting done.
Four. Service Management gets hot. This has long been an important area whether companies were looking at automating data centers or managing process tied to business metrics.  So, what is different? Companies are starting to seriously plan a service management strategy tied both to customer experience and satisfaction. They are tying this objective to their physical assets, their IT environment, and their business process across the company. There will be vendor consolidation and a lot of innovation in this area.
Five. The desktop takes a beating in a tough economy. When times get tough companies look for ways to cut back and I expect that the desktop will be an area where companies will delay replacement of existing PCs. They will make do with what they have or they will expand their virtualization implementation.
Six. The cloud grows more serious. Cloud computing has actually been around since the early time-sharing days, if we are to be honest with each other.  However, the difference is in emerging technologies like multi-tenancy that make this approach to shared resources different. Just as companies are moving to SaaS for economic reasons, companies will move to clouds with the same goal: decreasing capital expenditures.  Companies will have to start gaining an understanding of the impact of trusting a third-party provider. Performance, scalability, predictability, and security are not guaranteed just because some company offers a cloud. Service management of the cloud will become a key success factor. And there will be plenty of problems to go around next year.
Seven. There will be tech companies that fail in 2009. Not all companies will make it through this financial crisis.  Even large companies with cash will be potentially on the failure list.  I predict that Sun Microsystems, for example, will fail to remain intact.  I expect that company will be broken apart.  It could be that the hardware assets could be sold to its partner Fujitsu while pieces of software could be sold off as well.  It is hard to see how a company without a well-crafted software strategy and execution model can remain financially viable. Similarly, companies without a focus on the consumer market will have a tough time in the coming year.
Eight. Open Source will soar in this tight market. Open Source companies are in a good position in this type of market—with a caveat.  There is a danger for customers to simply adopt an open source solution unless there is a strong commercial support structure behind it. Companies that offer commercial open source will emerge as strong players.
Nine.  Software goes vertical. I am not talking about packaged software. I anticipate that more and more companies will begin to package everything based on a solutions focus. Even middleware, data management, security, and process management will be packaged so that customers will spend less time building and more time configuring. This will have an impact in the next decade on the way systems integrators will make (or not make) money.
Ten. Appliances become a software platform of choice for customers. Hardware appliances have been around for a number of years and are growing in acceptance and capability.  This trend will accelerate in the coming year.  The most common solutions used with appliances include security, storage, and data warehousing. The appliance platform will expand dramatically this coming year.  More software will be sold as prepackaged appliance solutions to make the acceptance of complex enterprise software easier.

Eleven. Companies will spend money on anticipation management. Companies must be able to use their information resources to understand where things are going. Being able to anticipate trends and customer needs is critical.  Therefore, one of the bright spots this coming year will be the need to spend money getting a handle on data.  Companies will need to understand not just what happened last year but where they should invest for the future. They cannot do this without understanding their data.

The bottom line is that 2009 will be a complicated year for software.  Many companies without a compelling solution to customer pain will, and should, fail. The market favors safe companies. As in any down market, some companies will focus on avoiding any risk and waiting. The smart companies, both providers and users of software, will take advantage of the rough market to plan for innovation and success when things improve. And they always do.

Ten things I learned about Citrix..and a little history lesson

September 23, 2008

I attended Citrix’s industry analyst event a couple of weeks ago. I meant to write about Citrix right after the event but you know how things go. I got busy.  But I am glad that I took a little time because it has allowed me the luxury of thinking about Citrix as a company and where they have been and where they are headed.

A little history, perhaps? To understand where Citrix is headed, a little history helps. The company was founded in 1989 by a former IBMer who was frustrated that his ideas weren’t used at Big Blue.  The new company thought that it could leverage the future power of OS/2 (anyone remember that partnership between IBM and Microsoft?).  Citrix actually licensed OS/2 code from Microsoft and intended to provide support for hosting OS/2 on platforms like Unix.  When OS/2 failed to gain market traction, Citrix continued its partnership with Microsoft to provide terminal services for both DOS and Windows.  When Citrix got into financial trouble in the mid-1990s, Microsoft invested $1 million in the company.  With this partnership firmly in place, Citrix was able to OEM its terminal services product to Microsoft, which helped give the company financial stability.
The buying spree. What is interesting about Citrix is how it leveraged this position to begin buying companies that both supported its flagship business and moved well beyond it.  For example, in 2003 it acquired Expertcity, which had two products: GoToMyPC and GoToMeeting.  Both products mirrored the presentation server focus of the company and enhanced the Microsoft relationship. In a way, you could say that Citrix was ahead of the curve in buying this company when it did.
While the market saw Citrix as a stodgy, presentation-focused company, things started to change in 2005. Citrix started to make some interesting acquisitions, including NetScaler, an appliance intended to accelerate application performance, and Teros, a web application firewall. There was a slew of acquisitions in 2006.  The first of the year was Reflectant, a little company in Lowell, Massachusetts that collected performance data on PCs.  The company had a lot of other technology assets in the performance management area that it was anxious to put to use.  Later in the year the company bought Orbital Data, a company that could optimize the delivery of applications to branch office users over wide area networks (WANs).  Citrix also picked up Ardence, which provided operating system and application streaming technology for Windows and Linux.
Digging into Virtualization. Clearly, Citrix was moving deeper into the virtualization space with these acquisitions and was starting to make the transition from the perception that it was just about presentation services. But the big bombshell came last year when the company purchased XenSource for $500M in cash and stock.   This acquisition moved Citrix right into the heart of the server, desktop and storage virtualization world.  Combine this acquisition with the strong Microsoft partnership and suddenly Citrix has become a power in the data center and virtualization market.

The ten things I learned about Citrix. You have been very patient, so now I’ll tell you the things I thought were most significant about Citrix’s analyst meeting.

Number One:  It’s about the marketing.  Citrix is pulling together the pieces and presenting them as a platform to the market. My only wish is that some company would not use the “Center” naming convention for its product line.  But Citrix has called this Delivery Center. The primary message is that Citrix will make distributed technology easier to deliver. The focus will be on provisioning, publish/subscribe, virtualization, and optimization over the network.

Number Two: Merging enterprise and consumer computing. Citrix’s strategy is to be the company that closes the gap between enterprise computing and consumer computing.  CEO Mark Templeton firmly believes that the company’s participation in both markets makes it uniquely positioned to straddle these worlds.  I think that he is on to something.  How can you really separate the personal computing function from applications and distributed workloads in the enterprise?

Number Three: Partnerships are a huge part of the strategy. Citrix has done an excellent job on the partnering front.  It has over 6,000 channel partners.  It has strong OEM agreements with HP, Dell, and Microsoft.  Microsoft has made it clear that it intends to leverage the Citrix partnership to take on VMware in the market.

Number Four: Going for more. The company has a clear vision around selecting adjacent markets to deliver end-to-end solutions.  Clearly, there will be more acquisitions coming, but at the same time, it will continue to leverage partnerships.

Number Five: It’s all about SaaS. Citrix has gained a lot of experience in the software as a service model over the past few years with its online division (GoToMyPC and GoToMeeting).  The company will invest a lot more in the SaaS model.

Number Six: And it’s all about the cloud. Just like everyone else, Citrix will move into cloud computing.  Because its NetScaler appliance is so prevalent in many SaaS environments, it believes that it has the opportunity to become a market leader. It is counting on its virtualization software and its workflow and orchestration technology to help it become a player.

Number Seven:  Going for the gold. With the acquisition of XenSource combined with its other assets, Citrix can take on VMWare for supremacy in virtualization.  This is clearly an ambitious goal given VMWare’s status in the market.


Number Eight: Going after the data center market. Citrix believes that it has the opportunity to be a key data center player. It is proposing that it can lead its data center strategy by starting with centralization through virtualization of servers, desktops, and operating systems, and then providing dynamic provisioning, workflow, and workload management.  Citrix has an opportunity, but it is a complicated and crowded market.

Number Nine: Desktop graphics virtualization.   Project Apollo, Citrix’s desktop graphics virtualization project, seems to be moving full steam ahead and could add substantial revenue to the bottom line over time.  However, there is a lot of emerging competition in this space, so Citrix will have to move fast.

Number Ten: Size matters. And speaking of revenue, Citrix is ambitious. While its revenues have topped $1 billion, it hopes to triple that number over the next few years. And then what? Who knows.

Can HP Lead in Virtualization Management?

September 15, 2008

HP has been a player in the virtualization market for quite a while.  It has offered many hardware products, including its server blades, which have given it a respectable position in the market. In addition, HP has done a great job of being an important partner to key virtualization software players including VMware, Red Hat, and Citrix. It is also establishing itself as a key Microsoft partner as Microsoft moves boldly into virtualization with Hyper-V.  Thus far, HP’s virtualization strategy has not focused on software. That has started to change.  Now, if this had been the good old days, I think we would have seen a strategy that focused on cooler hardware and data center optimization. Don’t get me wrong, HP is very much focused on the hardware and the data center. But now there is a new element that I think will be important to watch.

HP is finally leveraging its software assets in the form of virtualization management.  If I were cynical I would say, it’s about time.  But to be fair, HP has added a lot of new assets to its software portfolio in the last couple of years that make a virtualization management strategy more possible and more believable.

It is interesting that when a company has key assets to offer customers, it often strengthens the message. I was struck by a clear message that I found on one of their slides from their marketing pitch: “Your applications and business services don’t care where resources are, how they’re connected or how they’re managed, and neither should you.”  This statement struck me as precisely the right message in this crazy, overhyped virtualization market.  Could it be that HP is becoming a marketing company?

As virtualization goes mainstream, I predict that management of this environment will become the most important issue for customers. In fact, this is the message I have gotten loud and clear from customers trying to virtualize their applications on servers.  Couple this with the reality that no company virtualizes everything, and even if they did, they would still have a physical environment to manage.  Therefore, HP focuses its strategy on a plan to manage the composite of physical and virtual.  Of course, HP is not alone here. I was at Citrix’s industry analyst meeting last week and they are adopting this same strategy. I promise that my next blog will be about Citrix.

HP is calling its virtualization strategy its Business Management Suite.  While this is a bit generic, HP is trying to leverage the hot business service management platform and wrap virtualization with it.  Within this wrapper, HP is including four components:

  • Business Service Management — the technique for linking services across the physical and virtual worlds. This is intended to monitor the end-to-end health of the overall environment.
  • Business Service Automation — a technique for provisioning assets for distributed computing
  • IT Service Management — a technique for discovering what software is present and what licenses need to be managed
  • Quality Management — a technique for testing, scheduling, and provisioning resources across platforms. Many companies are starting to use virtualization as a way of testing complex composite applications before putting them into production. Companies are testing for both application quality and performance under different loads.

I am encouraged that HP seems to understand the nuances of this market.  HP’s strategy is to position itself as the “Switzerland” of the virtualization management space.  It is therefore creating a platform that includes infrastructure to manage across IBM, Microsoft, VMware, Citrix, and Red Hat.  It is positioning its management assets from its heritage software (OpenView) and its acquisitions to execute this strategy. For example, its IT Service Management offering is intended to manage compliance with license terms and conditions as well as chargebacks across heterogeneous environments. Its Asset Manager is intended to track virtualized assets through its discovery and dependency mapping tools.  HP’s Operations Manager has extended its performance agents so that it can monitor capabilities from virtual machines to hypervisors.  The company’s SiteScope provides agentless monitoring of hypervisors for VMware.  HP Network Node Manager has extended support for monitoring virtual networks.

HP’s goal is to focus on the overall health of these distributed, virtualized services from an availability, performance, capacity planning, end-user experience, and service level management perspective.  It is indeed an ambitious plan that will take some time to develop, but it is the right direction. I am particularly impressed with the partner program that HP is evolving around its CMDB (Configuration Management Database).  It is partnering with VMware on a joint development initiative to provide a federated CMDB that can collect information from a variety of hosts and guest hosts in an on-demand approach. Other companies such as Red Hat and Citrix have joined the CMDB program.

This is an interesting time in the virtualization movement.  As virtualization matures, companies are starting to realize that simply virtualizing an application on a server does not by itself save the time and money they anticipated.  The world is a lot more complicated than that.  Management wants to understand how the entire environment is part of delivering value.  For example, an organization might put all of its call center personnel on a virtualized platform which works fine until an additional 20 users with heavy demands on the server suddenly causes performance to falter.  In other situations, everything works fine until there is a software error somewhere in the distributed environment.  The virtualized environment suddenly fails and it is very difficult for IT operations to diagnose the problem. This is when management stops getting excited about how wonderful it is that they can virtualize hundreds of users onto a single server and starts worrying about the quality of service and the reputation of the organization overall.

The bottom line is that HP seems to be pulling the right pieces together for its virtualization management strategy. It is indeed still early. Virtualization itself is only the tip of the distributed computing marketplace.  HP will have to continue to innovate on its own while investing in its partner ecosystem. Today partners are eager to work with HP because it is a good partner and non-threatening.  But HP won’t be alone in the management of virtualization.  I expect that other companies like IBM and Microsoft will be very aggressive in this market.  HP has a little breathing room right now that it should take advantage of before things change again. And they always change again.

Will Desktop Virtualization Be Huge?

June 27, 2008

I have been spending a lot of time over the past several months looking at issues around desktop virtualization. While a lot of the focus in the market has been on server-based virtualization, I’d put my money behind desktop virtualization. I will even go out on a limb and predict that within the coming year there will be a massive explosion in customers implementing desktop virtualization.

Here are the top five reasons that I make this prediction:

1. The cost of supporting PCs for hundreds or perhaps thousands of users is out of control, with no upside ROI for the company.

2. In many situations a full PC is overkill. Does a customer support rep really need a PC? How about the increasing number of workers who do most of their work over the web? As the growth of Software as a Service continues to expand, the need for a PC on every desk will diminish too.

3. Security of data has long been the bane of many security officers. If a user can easily download sensitive customer data onto a desktop, problems will and do occur. I have seen too many articles about how a PC with lots of customer data was accidentally lost. While there are other ways of protecting data, many companies are looking at locking down the desktop device so that data storage is not even an option.

4. The capabilities of a thin client environment are growing more sophisticated. It is now becoming practical to implement multimedia on a non-PC. It is also possible to create a powerful environment where there is enough communications power to enable a user with a non-PC device to easily access information quickly.

5. And maybe your desktop capability will be available as a service. Several companies I have spoken with lately are making desktop capabilities available as a service. This is part of the overall long-term movement toward cloud computing.

I think that once customers move out of the pilot stage with desktop virtualization they will move to wide deployments. I expect that successful 100-desktop pilots will trigger deployments of tens of thousands of desktops. Therefore, expect to see a huge surge of adoption within the next few years.