Archive

Posts Tagged ‘data center automation’

What will it take to achieve great quality of service in the cloud?

November 9, 2010

You know that a market is about to transition from early fantasy to real adoption when IT architects begin talking about traditional IT requirements. Why do I bring this up? I had a fascinating conversation yesterday with a leading architect in charge of the cloud strategy for an important company that is typically on the bleeding edge of technology. Naturally, I am not allowed to name the company or the person. But let me just say that individuals and companies like this are the first to grapple with issues such as the need for a registry for web services or the complexity of creating business services that are both reusable and include business best practices. They are the first companies to try out artificial intelligence to see if it can automate tasks that require complex reasoning.

These innovators tend to get blank stares from their counterparts in more traditional IT departments who are grappling with mundane issues such as keeping systems running efficiently. Leading-edge companies have the luxury of pushing the bounds of what is possible. There is a tremendous amount to be learned from their experiments with technology. In fact, there is often more to be learned from their failures than from their successes, because they are pushing the boundary of what is possible with current technology.

So, what did I take away from my conversation? In my colleague's view, the cloud today is about "how many virtual machines you need, how big they are, and linking those VMs to storage." Not a very compelling picture, but it is his perception of the reality of the cloud today. His view of the future requirements is quite intriguing.

I took away six key issues that this advanced planner would like to see addressed in the evolution of cloud computing:

One. Automation of asset placement is critical. Where you actually put capability matters. For example, there are certain workloads that should never leave the physical data center because of regulatory requirements. If an organization is dealing with huge amounts of data, it would not be efficient to scatter elements of that data across different cloud environments. What about performance issues? What if a task needs to be completed in 10 seconds, or in 5 milliseconds? There are many decisions that need to be made based on corporate requirements. Should the placement of workloads be hard-coded programmatically? The answer is no. An automated process based on business rules should determine the actual placement of cloud services.
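To make the idea concrete, here is a minimal sketch of what rule-driven placement could look like. The workload attributes, rules, and placement targets are invented for illustration; they do not refer to any particular product or to this architect's actual rules.

```python
from dataclasses import dataclass

# Hypothetical workload attributes, invented purely to illustrate rule-driven placement.
@dataclass
class Workload:
    name: str
    regulated_data: bool    # subject to data-residency or regulatory rules?
    data_size_gb: int       # how much data the workload carries with it
    max_latency_ms: float   # response-time requirement

def place(workload: Workload) -> str:
    """Return a placement decision based on simple business rules."""
    if workload.regulated_data:
        return "on-premises data center"                # regulated data never leaves the building
    if workload.max_latency_ms < 10:
        return "private cloud close to the users"       # tight latency budget
    if workload.data_size_gb > 10_000:
        return "cloud region co-located with the data"  # avoid splitting a huge data set
    return "public cloud, lowest-cost region"

print(place(Workload("claims-processing", regulated_data=True,
                     data_size_gb=200, max_latency_ms=500)))
```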

Two. Avoiding concentration of risk. How do you decide which hypervisor a core asset runs on? If, for example, you have a highly valuable set of services that are critical to decision makers, you might want to ensure that they run on different hypervisors, based on automated management processes and rules.
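A toy illustration of the point, under the assumption that "avoiding concentration of risk" translates into an anti-affinity rule (no two critical services share a hypervisor); the host and service names are made up.

```python
# Hypothetical anti-affinity rule: no two critical services may share a hypervisor.
hypervisors = ["hv-01", "hv-02", "hv-03", "hv-04"]
critical_services = ["pricing-engine", "risk-dashboard", "order-router"]

if len(critical_services) > len(hypervisors):
    raise ValueError("not enough hypervisors to satisfy the anti-affinity rule")

# Assign each critical service to its own hypervisor.
placement = dict(zip(critical_services, hypervisors))
for service, host in placement.items():
    print(f"{service} -> {host}")
```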

Three. Quality of Service needs a control fabric. If you are a customer of hybrid cloud computing services, you might need access to the code that tells you what tasks a tool is actually doing. What does that tool actually touch in the cloud environment? What do the error messages mean, and what are the implications? Today many cloud services are black boxes; there is no way for the customer to really understand what is happening behind the scenes. If companies are deploying truly hybrid environments that support a mixed workload, this type of access to the workings of the various tools that are monitoring and managing quality of service will be critical. From a quality of service perspective, some applications will require dedicated bandwidth to meet requirements; others will not need any special treatment.
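On the dedicated-bandwidth point, here is a small, purely illustrative sketch of how a QoS policy table might separate workloads that get reserved bandwidth from those that run best-effort; the workload names and numbers are assumptions, not anyone's real policy.

```python
# Invented QoS policy table: some workloads get dedicated bandwidth, the rest are best-effort.
qos_policies = {
    "trading-feed":   {"class": "dedicated", "bandwidth_mbps": 500},
    "video-training": {"class": "dedicated", "bandwidth_mbps": 100},
    "batch-reports":  {"class": "best-effort"},
}

def bandwidth_reserved(policies: dict) -> int:
    """Sum the bandwidth reserved for workloads in the dedicated class."""
    return sum(p.get("bandwidth_mbps", 0)
               for p in policies.values() if p["class"] == "dedicated")

print(f"total reserved bandwidth: {bandwidth_reserved(qos_policies)} Mbps")
```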

Four. Cloud service providers building shared services need an architectural plan to control them as a unit of work. These services will be shared across departments as well as across customers. How do you connect these services? While it might seem simple at the 50,000-foot level, it is actually quite complex, because we are talking about linking a set of services together to build a coherent platform. Therefore, as with building any system, there is a requirement to model the "system of services," then deploy that model, and finally reconcile and tune the results.

Five. Standard APIs protect customers. Should the APIs for all cloud services be published and accessible? If companies are to have the freedom to move easily and efficiently among cloud services, then the APIs need to be well understood. For example, a company may be using a vendor's cloud service and discover a tool that addresses a specific problem. What if that vendor doesn't support that tool? In essence, the customer is locked out of using it. This becomes a problem immediately for innovators. However, it is also an issue for traditional companies that begin to work with cloud computing services and over time realize that they need more service management and more oversight.
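One way customers sometimes hedge against this kind of lock-in is to wrap each provider's API behind a small, neutral interface of their own. The sketch below is hypothetical (it is not any vendor's actual API) and only illustrates the idea.

```python
from abc import ABC, abstractmethod

# A hypothetical provider-neutral interface: if every provider's storage API can be
# wrapped behind the same small surface, tools and workloads can move between clouds.
class CloudStorage(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStorage(CloudStorage):
    """Stand-in for a real provider's storage service, used here only for illustration."""
    def __init__(self) -> None:
        self._objects: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]

store: CloudStorage = InMemoryStorage()
store.put("report.csv", b"q1,q2\n10,12\n")
print(store.get("report.csv").decode())
```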

Six. Managing containers may be key to the service management of the cloud. A well-designed cloud service has to be service oriented. It needs to be placed in a container without dependencies, since customers will use services in different ways. Therefore, each service needs a set of parameter-driven configurators so that the rules of usage and management are clear. What version of which cloud service should be used under what circumstances? What if the service is designed to execute backups? Can those backups happen across the globe, or should they be done in proximity to the data assets? These management issues will become the most important issues for cloud providers in the future.
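As a rough illustration of what "parameter-driven configurators" might mean in practice, here is a hypothetical service descriptor in which the rules of usage (version selection, backup locality) live in data rather than in code; all names and values are invented.

```python
# Hypothetical, parameter-driven service descriptor: the rules of usage live in data, not code.
service_catalog = {
    "backup-service": {
        "versions": {"default": "2.3", "eu-customers": "2.1"},
        "backup_locality": "same-region",   # keep backup copies near the source data
        "max_parallel_jobs": 4,
    }
}

def resolve_version(service: str, customer_class: str) -> str:
    """Pick the service version that applies to a given class of customer."""
    versions = service_catalog[service]["versions"]
    return versions.get(customer_class, versions["default"])

print(resolve_version("backup-service", "eu-customers"))  # -> 2.1
print(resolve_version("backup-service", "us-customers"))  # -> 2.3
```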

The best thing about talking to people like this architect is that it begins to make you think about issues that aren’t part of today’s cloud discussions.  These are difficult issues to solve. However, many of these issues have been addressed for decades in other iterations of technology architectures. Yes, the cloud is a different delivery and deployment model for computing but it will evolve as many other architectures do. The idea of putting quality of service, service management, configuration and policy rules at the forefront will help to transform cloud computing into a mature and effective platform.



Why are we about to move from cloud computing to industrial computing?

April 5, 2010

I spent the other week at a new conference called Cloud Connect. Being able to spend four days immersed in an industry discussion about cloud computing really allows you to step back and think about where we are with this emerging industry. While it would be possible to write endlessly about all the meetings and conversations I had, you probably wouldn't have enough time to read it all. So, I'll spare you and give you the top four things I learned at Cloud Connect. I recommend that you also take a look at Brenda Michelson's blogs from the event for a lot more detail. I would also refer you to Joe McKendrick's blog from the event.

1. Customers are still figuring out what Cloud Computing is all about.  For those of us who spend way too many hours on the topic of cloud computing, it is easy to make the assumption that everyone knows what it is all about.  The reality is that most customers do not understand what cloud computing is.  Marcia Kaufman and I conducted a full day workshop called Introduction to Cloud. The more than 60 people who dedicated a full day to a discussion of all aspects of the cloud made it clear to us that they are still figuring out the difference between infrastructure as a service and platform as a service. They are still trying to understand the issues around security and what cloud computing will mean to their jobs.

2. There is a parallel universe out there among people who have been living and breathing cloud computing for the last few years. In their view the questions are very different. The big discussions among the well-connected focused on a few key questions: Is there such a thing as a private cloud? Is Software as a Service really cloud computing? Will we ever have a true segmentation of the cloud computing market?

3. From the vantage point of the market, it is becoming clear that we are about to enter one of those transitional times in this important evolution of computing. Cloud Connect reminded me a lot of the early days of the commercial Unix market. When I attended my first Unix conference in the mid-1980s it was a different experience than going to a conference like Comdex. It was small. I could go and have a conversation with every vendor exhibiting. I had great meetings with true innovators. There was a spirit of change and innovation in the halls. I had the same feeling about the Cloud Connect conference. There were a small number of exhibitors. The key innovators driving the future of the market were there to discuss and debate the future. There was electricity in the air.

4. I also anticipate a change in the direction of cloud computing now that it is about to pass that tipping point. I am a student of history, so I look for patterns. When Unix reached the stage where the giants woke up and started seeing huge opportunity, they jumped in with a vengeance. The great but small Unix technology companies were either acquired, got big, or went out of business. I think that we are on the cusp of the same situation with cloud computing. IBM, HP, Microsoft, and a vast array of others have seen the future and it is the cloud. This will mean that emerging companies with great technology will have to be both really lucky and really smart.

The bottom line is that Cloud Connect represented a seminal moment in cloud computing. There is plenty of fear among customers who are trying to figure out what it will mean for their own data centers. What will the organizational structure of the future look like? They don't know, and they are afraid. The innovative companies are looking at the coming armies of large vendors and wondering how to keep their differentiation so that they can become the next Google rather than the next company whose name we can't remember. There was much debate about two important issues: cloud standards and private clouds. Are these issues related? Of course. Standards always become an issue when there is a power grab in a market. If a Google, Microsoft, Amazon, IBM, or Oracle is able to set the terms for cloud computing, market control can shift overnight. Will standard interfaces be able to save the customer? And how about private clouds? Are they real? My observation and contention is that yes, private clouds are real. If you deploy the same automation, provisioning software, and workload management inside a company rather than inside a public cloud, it is still a cloud. Ironically, the debate over the private cloud is also about power and position in the market, not about ideology. If a company like Google or Amazon (or whichever company is your favorite flavor) is able to debunk the private cloud, guess who gets all the money? If you are a large company where IT and the data center are core to how you conduct business, you can and should have a private cloud that you control and manage.

So, after taking a step back I believe that we are witnessing the next generation of computing — the industrialization of computing. It might not be as much fun as the wild west that we are in the midst of right now but it is coming and should be here before we realize that it has happened.

Oracle + Sun: Five questions to ponder

January 27, 2010

I spent a couple of hours today listening to Oracle talk about the long-awaited integration with Sun Microsystems. A real end of an era and the beginning of a new one. What does this mean for Oracle? Whatever you might think about Oracle, you have to give the company credit for successfully integrating the 60 companies it has purchased over the past few years. Having watched hundreds and perhaps thousands of acquisitions over the last few decades, I can say that integration is hard. There are overlapping technologies, teams, cultures, and egos. Oracle has successfully managed to leverage the IP from its acquisitions to support its business goals. For example, it has kept packaged-software customers happy by improving the software. PeopleSoft customers, for example, were able to continue to use the software they had become dependent on in much the same way as before the acquisition. In some cases, the quality of the software actually improved dramatically. The path has been more complicated with the various middleware and infrastructure platforms the company has acquired over the years because of overlapping functionality.

The acquisition of Sun Microsystems is the biggest game changer for Oracle since the acquisition of PeopleSoft. There is little doubt that Sun has significant software and hardware IP that will be very important in defining Oracle in the 21st century. But I don’t expect this to be a simple journey. Here are the five key issues that I think will be tricky for Oracle to navigate. Obviously, this is not a complete list but it is a start.

Issue One: Can Oracle recreate the mainframe world? The mainframe is dead — long live the mainframe. Oracle has a new fondness for the mainframe and what that model could represent. So, if you combine Sun's hardware, networking layer, storage, security, packaged applications, and middleware into a single package, do you get to own the total share of a customer's wallet? That is the idea. Oracle management has determined that IBM had the right idea in the 1960s — everything was nicely integrated and the customer never had to worry about the pieces working together.
Issue Two: Can you package everything together and still be an open platform? To its credit, Oracle has built its software on standards such as Unix/Linux, XML, and Java. So, can you have it both ways? Can you claim openness when the platform itself is hermetically sealed? I think it may be a stretch. In order to accomplish this goal, Oracle would have to have well-defined and published APIs. It would have to be able to certify that, with these APIs in place, the integrated platform won't break. Not an easy task.
Issue Three: Can you manage a complex computing environment? Computing environments get complicated because there are so many moving parts. Configurations change; software gets patched; new operating system versions are introduced; emerging technology enters and disrupts the well-established environment. Oracle would like to automate this management process for customers. It is an appealing idea, since configuration problems, missing links, and poor testing are often responsible for many of the outages in computing environments today. Will customers be willing to have this type of integrated environment controlled and managed by a single vendor? Some customers will be happy to turn over these headaches. Others may have too much legacy or want to work with a variety of vendors. This is not a new dilemma for customers. Customers have long had to weigh the benefits of a single source of technology against the risks of being locked in.
Issue Four: Can you teach an old dog new tricks? Can Oracle really be a hardware vendor? Clearly, Sun continues to be a leader in hardware despite its diminished fortunes. But as anyone who has ventured into the hardware world knows, hardware is a tough, brutal game. In fact, it is the inverse of software. Software takes many cycles to reach maturity. It needs to be tweaked and finessed. However, once it is in place it has a long, long life. As the old saying goes, old software never dies. The same cannot be said for hardware. Hardware has a much straighter line to maturity. It is designed, developed, and delivered to the market. Sometimes it leapfrogs the competition enough that it has a long and very profitable life. Other times, it hits the market at the end of a cycle, just as a new, more innovative player enters. The culmination of all the work and effort can be cut short when something new comes along at the right place at the right time. It is often a lot easier to get rid of hardware than software. The computer industry is littered with the corpses of failed hardware platforms that started with great fanfare and then faded away quickly. Will Oracle be successful with hardware? It will depend on how good the company is at transforming its DNA.
Issue Five: Are customers ready to embrace Oracle's brave new world? Oracle's strategy is a good one — if you are Oracle. But what about for customers? And what about for partners? Customers need to understand the long-term implications and tradeoffs of buying into Oracle's integrated approach to its platform. It will clearly mean fewer moving parts to worry about. It will mean one phone call and no finger pointing. However, customers have to understand the kind of leverage a single company will have over contract terms and conditions. And what about partners? How does an independent software vendor or a channel partner participate in the new Oracle? Is there room? What type of testing and preparation will be required to play?

The DNA of the Cloud Power Partnerships

January 15, 2010

I have been thinking a lot about the new alliances forming around cloud computing over the past couple of months. The most important of these moves are the EMC, Cisco, and VMware alliance; HP and Microsoft's announced collaboration; and, of course, Oracle's planned acquisition of Sun. Now, let's add IBM's cloud strategy into the mix, which has a very different complexion from its competitors'. And, of course, my discussion of the cloud power struggle wouldn't be complete without adding in the insurgents — Google and Amazon. While it is tempting to portray this power grab by all of the above as something brand new, it isn't. It is a replay of well-worn patterns that we have seen in the computer industry for the past several decades. Yes, I am old enough to have been around for all of these power shifts. So, I'd like to point out what the DNA of this power struggle looks like for the cloud and how we might see history repeating itself in the coming year. Here is a sample of how high-profile partnerships have fared over the past few decades. While the past can never accurately predict the future, it does provide some interesting insights.

Partner realignment happens when the stakes change.  There was a time when Cisco was a very, very close partner with HP. In fact, I remember a time when HP got out of the customer service software market to collaborate with Cisco. That was back in 1997.

Here are the first couple of sentences from the press release:

SAN JOSE and PALO ALTO, Calif., Jan. 15, 1997 — Hewlett-Packard Company and Cisco Systems Inc. today announced an alliance to jointly develop Internet-ready networked-computing solutions to maximize the benefits of combining networking and computing. HP and Cisco will expand or begin collaboration in four areas: technology development, product integration, professional services and customer service and support.

If you are interested, here is a link to the full press release. What's my point? These types of partnerships are in both HP's and Cisco's DNA. Both companies have made significant and broad-reaching partnerships. For example, back in 2004, IBM and Cisco created a broad partnership focused on the data center. Here's an excerpt from a CRN article:

From the April 29, 2004 issue of CRN: Cisco Systems (NSDQ:CSCO) and IBM (NYSE:IBM) on Thursday expanded their long-standing strategic alliance to take aim at the data center market. Solution providers said the new integrated data center solutions, which include a Cisco Gigabit Ethernet Layer 2 switch module for IBM’s eServer Blade Center, will help speed deployment times and ease management of on-demand technology environments.
“This is a big win for IBM,” said Chris Swahn, president of sales at Amherst Technologies, a solution provider in Merrimack, N.H.
The partnership propels IBM past rival Hewlett-Packard, which has not been as quick to integrate its own ProCurve network equipment into its autonomic computing strategy, Swahn said.
Cisco and IBM said they are bringing together their server, storage, networking and management products to provide an integrated data center automation platform.

Here is a link to the rest of the article.

HP itself has had a long history of very interesting partnerships. A few that are most relevant include HP’s ill-fated partnership with BEA in the 1990s. At the time, HP invested $100 million in BEA to further the development of software to support HP’s software infrastructure and platform strategy.

HP Gives BEA $100m for Joint TP Development
Published:08-April-1999
By Computergram

Hewlett-Packard Co and BEA Systems Inc yesterday said they plan to develop new transaction processing software as well as integrate a raft of HP software with BEA’s WebLogic application server, OLTP and e-commerce software. In giving the nod to WebLogic as its choice of application server, HP stopped far short of an outright acquisition of the recently-troubled middleware company, a piece of Wall Street tittle tattle which has been doing the round for several weeks now. HP has agreed to put BEA products through all of its distribution channels and is committing $100m for integration and joint development.

Here’s a link to an article about the deal.

Oracle probably has more partnerships and more entanglements with more companies than anyone else. For example, HP has a longstanding partnership with Oracle on the data management front. HP partnered closely with Oracle and optimized its hardware for the Oracle database. Today, Oracle and HP have more than 100,000 joint customers. Likewise, Oracle has a strong partnership with IBM — especially around its solutions business. IBM Global Services operates a huge consulting practice based on implementing and running Oracle's solutions. Not to be outdone, EMC and Oracle have about 70,000 joint customers. Oracle supports EMC's storage solutions for Oracle's portfolio while EMC supports Oracle's solutions portfolio.

Microsoft, like Oracle, has entanglements with most of the market leaders. Microsoft has partnered very closely with HP for the last couple of decades, both on the PC front and on the software front. Clearly, the partnership between HP and Microsoft has evolved over many years, so this latest collaboration is a continuation of a long-standing relationship. Microsoft also has long-standing relationships with EMC, Sun, and Oracle — to name a few.

And what about Amazon and Google? Because both companies were early innovators in cloud computing, they were able to gain credibility in a market that had not yet emerged as a center of power. Therefore, both companies were well positioned to create partnerships with every established vendor that needed to do something with the cloud. Every company from IBM to Oracle to EMC and Microsoft — to name but a few — established partnerships with these companies. Amazon and Google were small, convenient, and non-threatening. But as the power of both companies continues to grow, so will their ability to partner in the traditional way. I am reminded of the way IBM partnered with two small companies, Intel and Microsoft, when it needed a processor and an operating system to help bring the IBM PC to market in the early 1980s.

The bottom line is that cloud computing is becoming more than a passing fad — it is the future of how computing will change in the coming decades. Because of this reality, partnerships are changing and will continue to change. So, I suspect that the pronouncements of strategic, critical, and sustainable partnerships may or may not be worth the paper or compute cycles that created them. But the reality is that the power struggle for cloud dominance is on. It will not leave anything untouched. It will envelop hardware, software, networking, and services. No one can predict exactly what will happen, but the way these companies have acted in the past and the present gives us clues to a chaotic yet predictable future.

Why all workloads don’t belong in the cloud

November 2, 2009

I had an interesting conversation with a CIO the other day about cloud computing. He had a simple question: I have a relatively old application and I want to move it to the cloud. How do I do that? I suspect that we will see a flurry of activity over the coming year in which this question is asked a lot. And why not — the cloud is all the rage, and who wouldn't want to demonstrate that with the cloud all problems are solved? So, what was my answer to this CIO? Basically, I told him that not all workloads belong in the cloud. It is not because this technically can't be done. It can. It is quite possible to encapsulate an existing application and place it into a cloud environment so that new resources can be self-provisioned, and so on. But, in reality, you have to look at this issue from an efficiency and an economic perspective.

ROI

Cloud computing gains an economic edge over a traditional data center when it supports a relatively small, simple workload for a huge number of customers. For example, a singular workload like email or a payment service can be optimized at all levels — the operating system, middleware, and hardware can all be customized and tuned to support the workload. The economics favor this type of workload because it supports large numbers of customers. The same cannot be said for the poor aging Cobol application that is used by 10 people within an organization. While there might be incremental management productivity benefits, the cost/benefit analysis simply doesn't work.
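A back-of-the-envelope illustration of that cost/benefit gap, with entirely invented numbers: the per-user economics of a highly shared, tuned service look nothing like those of a lightly used legacy application.

```python
# Invented numbers, purely to illustrate the per-user economics.
shared_service_cost = 5_000_000   # annual cost to run a tuned, shared email service
shared_service_users = 2_000_000
legacy_app_cost = 150_000         # annual cost to host and maintain the old Cobol app
legacy_app_users = 10

print(f"shared service: ${shared_service_cost / shared_service_users:,.2f} per user per year")
print(f"legacy app:     ${legacy_app_cost / legacy_app_users:,.2f} per user per year")
```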

So, the answer is pretty simple. You just can’t throw every workload into the cloud. It is not a panacea for all IT problems.  Organizations that are trying to figure out what to do with these pesky old workloads need to look at three options:

1. Decide if that workload is still supporting business objectives in a cost effective manner. If it does the job, leave it alone.

2. That old workload might be better supported by traditional outsourcing. Let someone else keep the application alive while you move on to more mission-critical tasks.

3. Think about rebuilding that old workload by encapsulating key elements and placing them within a modular, flexible environment. You might even discover that there are components that are actually useful across the organization. When you discover that sharing components across divisions and departments is a productive and pragmatic approach, you might be ready to move those workloads into the cloud.


Ten things I learned about Citrix … and a little history lesson

September 23, 2008

I attended Citrix's industry analyst event a couple of weeks ago. I meant to write about Citrix right after the event, but you know how things go. I got busy. But I am glad that I took a little time, because it has allowed me the luxury of thinking about Citrix as a company: where it has been and where it is headed.

A little history, perhaps? To understand where Citrix is headed, a little history helps. The company was founded in 1989 by a former IBMer who was frustrated that his ideas weren't used at Big Blue. The new company thought that it could leverage the future power of OS/2 (anyone remember that partnership between IBM and Microsoft?). Citrix actually licensed OS/2 code from Microsoft and intended to provide support for hosting OS/2 on platforms like Unix. When OS/2 failed to gain market traction, Citrix continued its partnership with Microsoft to provide terminal services for both DOS and Windows. When Citrix got into financial trouble in the mid-1990s, Microsoft invested $1 million in the company. With this partnership firmly in place, Citrix was able to OEM its terminal services product to Microsoft, which helped give the company financial stability.
The buying spree. What is interesting about Citrix is how it leveraged this position to begin buying companies that both supported its flagship business and moved well beyond it. For example, in 2003 it acquired Expertcity, which had two products: GoToMyPC and GoToMeeting. Both products mirrored the presentation server focus of the company and enhanced the Microsoft relationship. In a way, you could say that Citrix was ahead of the curve in buying this company when it did.
While the market saw Citrix as a stodgy, presentation-focused company, things started to change in 2005. Citrix started to make some interesting acquisitions, including NetScaler, an appliance intended to accelerate application performance, and Teros, a web application firewall. There was a slew of acquisitions in 2006. The first of the year was Reflectant, a little company in Lowell, Massachusetts, that collected performance data on PCs. The company had a lot of other technology assets in the performance management area that it was anxious to put to use. Later in the year the company bought Orbital Data, a company that could optimize the delivery of applications to branch office users over wide area networks (WANs). Citrix also picked up Ardence, which provided operating system and application streaming technology for Windows and Linux.
Digging into virtualization. Clearly, Citrix was moving deeper into the virtualization space with these acquisitions and was starting to shed the perception that it was just about presentation services. But the big bombshell came last year when the company purchased XenSource for $500 million in cash and stock. This acquisition moved Citrix right into the heart of the server, desktop, and storage virtualization world. Combine this acquisition with the strong Microsoft partnership, and suddenly Citrix has become a power in the data center and virtualization market.

The ten things I learned about Citrix. You have been very patient, so now I'll tell you the things I thought were most significant about Citrix's analyst meeting.

Number One: It's about the marketing. Citrix is pulling together the pieces and presenting them as a platform to the market. My only wish is that companies would stop using the "Center" naming convention for their product lines; Citrix has called this one Delivery Center. The primary message is that Citrix will make distributed technology easier to deliver. The focus will be on provisioning, publish/subscribe, virtualization, and optimization over the network.

Number Two: Merging enterprise and consumer computing. Citrix's strategy is to be the company that closes the gap between enterprise computing and consumer computing. CEO Mark Templeton firmly believes that the company's participation in both markets makes it uniquely positioned to straddle these worlds. I think that he is on to something. How can you really separate the personal computing function from applications and distributed workloads in the enterprise?

Number Three: Partnerships are a huge part of the strategy. Citrix has done an excellent job on the partnering front. It has over 6,000 channel partners. It has strong OEM agreements with HP, Dell, and Microsoft. Microsoft has made it clear that it intends to leverage the Citrix partnership to take on VMware in the market.

Number Four: Going for more. The company has a clear vision around selecting adjacent markets in which to deliver end-to-end solutions. Clearly, there will be more acquisitions coming, but at the same time it will continue to leverage partnerships.

Number Five: It’s all about SaaS. Citrix has gained a lot of experience in the software as a service model over the past few years with its online division (GoToMyPC and GoToMeeting).  The company will invest a lot more in the SaaS model.

Number Six: And it's all about the cloud. Just like everyone else, Citrix will move into cloud computing. Because its NetScaler appliance is so prevalent in many SaaS environments, it believes that it has the opportunity to become a market leader. It is counting on its virtualization software and its workflow and orchestration technology to help it become a player.

Number Seven: Going for the gold. With the acquisition of XenSource combined with its other assets, Citrix can take on VMware for supremacy in virtualization. This is clearly an ambitious goal given VMware's status in the market.


Number Eight: Going after the data center market. Citrix believes that it has the opportunity to be a key data center player. It is proposing that it can lead its data center strategy by starting with centralization through virtualization of servers, desktops, and operating systems, and then providing dynamic provisioning, workflow, and workload management. Citrix has an opportunity, but it is a complicated and crowded market.

Number Nine: Desktop graphics virtualization. Project Apollo, Citrix's desktop graphics virtualization project, seems to be moving full steam ahead and could add substantial revenue to the bottom line over time. However, there is a lot of emerging competition in this space, so Citrix will have to move fast.

Number Ten: Size matters. And speaking of revenue, Citrix is ambitious. While its revenues have topped $1 billion, it hopes to triple that number over the next few years. And then what? Who knows.

Is HP ready to rock and roll with its investments in software, hardware, and services?

April 2, 2008

Every year for the past 20 years I have attended HP's annual analyst meeting. The first one I attended was quite small, with a couple dozen analysts and all the focus on hardware. Having spent so much time with HP over these years, I am in a unique position to give a point-in-time report card. I thought I was going to write up each day, but I decided that one overall assessment would be best. It is not an easy task, especially since HP has made 16 acquisitions (mostly in software and services) in the past three years. HP will not buy packaged software; it will, however, invest in providing vertical offerings for its most important customer segments, including financial services, communications, media and entertainment, manufacturing, public sector, health and life sciences, energy, retail, and consumer packaged goods.

I also heard that HP has begun to revitalize its lab organization and invest more in R&D. HP Labs has long been a source of innovation for HP. It will be interesting to watch what new software and hardware technologies come out of this resource over the next couple of years. Mark Hurd, during his discussion with us, mentioned that HP is spending $3.7 billion on R&D. This could result in some interesting opportunities for HP.

My take on HP is complicated — HP is a complicated company. I am struck by five key issues at this point in HP's transition:

1. HP has finally begun to understand that the combination of hardware, software, and services is synergistic. HP was rather quiet about this change, but it was clear that orchestration is beginning to happen both at the sales level and, just as importantly, at the software level. I am starting to see HP leverage software assets developed in its consulting organization and move them into the software group. Likewise, software that had been captive in hardware is being decoupled and potentially given new life.

2. HP's hardware organization is changing in interesting ways. There are four focus areas for the hardware business: blades, power management, storage, and the overall next-generation data center (called Adaptive Infrastructure). This is the traditional business for HP and should remain a strong pillar of the strategy. HP has a strong client-side hardware business unit that is separate from the enterprise server and storage organization. It will be interesting to watch how HP leverages its blade enclosures across its many hardware blades to create a powerful go-to-market strategy.

3. The software business. I get lots of questions about the viability of HP's software business. I think that this has been complicated for HP. After all, HP has purchased so many software companies over so many years that it can be confusing, and HP still has some work to do to make all of this understandable and digestible. That said, I am beginning to see some signs that a strategy is emerging. One good move is that HP has divided its software business into two units: Business Information Optimization and Business Technology Optimization (Enterprise Management and Automation). On the Business Information Optimization side, HP has a very focused information management strategy centered on its NeoView data warehousing offering. The company is starting to win some important business — especially against Teradata. The other component of this business unit is based on records management. This includes integrated archiving and e-discovery (HP just bought Tower Software). In addition, there are some interesting new offerings in this area, such as Exstream Software, a new acquisition that turns document data into electronic form so it can become information services. The company is focused on managing the content/document ecosystem. This should be a very interesting acquisition.

The Business Technology Optimization area is the most complex. It encompasses most of the enterprise infrastructure software and is the least well understood. For example, HP's SOA strategy sits within this space. The crown jewel for HP is the Systinet registry/repository for SOA governance. The product was picked up as part of the Mercury Interactive acquisition and has given HP a great starting point. The Mercury Interactive acquisition also provided HP with a strong position in software quality and testing. It is a sign of the change in the new HP software organization that HP has been able to keep that value intact and use it as a foundational technology in many of its initiatives.

HP's legacy software platform, OpenView, is also part of Business Technology Optimization. The business continues to grow well, and over time I think it will be a better-understood strategic management asset. But I imagine that traditional OpenView customers are probably a bit confused. Long term, the new organizational software structure will help HP sell at a higher level in the organization.

The most interesting transition for Business Technology Optimization is the addition of Opsware into the mix. Opsware has a lot of very interesting IP in data center automation and in overall integration. I am still learning about all the buried value, but the underlying data model and integration technologies make it an interesting set of services to fill out HP's SOA strategy. Opsware has been part of HP for only about six months, but it is clear that this set of resources and people is having a big impact on the HP software strategy.

4. HP's Services. HP's services organization has grown steadily. The biggest change that I am seeing is the adoption of a common framework. There is a lot more synergy between the services organization and the software group. Services now feeds the software assets it develops back into the software business. This is one of the strongest areas where HP can demonstrate its SOA strategy.

5. Virtualization. HP started to talk a little bit about the importance of virtualization as part of the future for HP across hardware, software, and services. There were a few interesting comments about cloud computing. But HP has a huge opportunity in virtualization that I think did not come through in the way I would have expected. I imagine that by next year the volume for this strategy will be a lot louder (at least I hope so). I was encouraged by Mark Hurd's comment: "Virtualization creates interruption of the server market. The more of the market that virtualizes, the better for HP."