Archive for the ‘service management’ Category

Is cloud security really different than data center security?

October 30, 2009 7 comments

Almost every conversation I have had over the past year or so comes back to security in the cloud. Is it really secure? Or: we are thinking about implementing the cloud, but we are worried about security. There are, of course, good reasons to plan a cloud security strategy. But in a sense, it is no different than planning a security strategy for your company. But it is the big scary cloud!

Before I list the issues, I would like to say one thing: if you think you need an entirely different security strategy for the cloud, you may not have a comprehensive security strategy to start with. Yes, you have to make sure that your cloud provider has a sophisticated approach to security. However, what about your Internet service provider? What about the level of security within your own IT department? Can you throw stones if you live in a glass house (yes, that is a pun…sorry)? So, before you start fretting about security in the cloud, get your own house in order. Do you have an identity management plan? Do you ensure that no single individual within the data center can control all of the data in an environment? If you don't have a well-executed internal security plan, you aren't ready for the cloud.

But let's say that you have fixed that problem and you are ready to really plan your cloud security strategy. Here are five of the issues to consider. If you have others, let's start a conversation.

1. You need to start at the beginning by understanding the characteristics of your cloud provider. Is the company well funded? Is its data center designed with security at the center? Your level of scrutiny will also depend on how you are using the cloud. If you are using Infrastructure as a Service for a short-term project, there is less risk than if you are planning to use a cloud to store important customer data.

2. How is your cloud provider implementing security in a multi-tenant environment? How do they ensure that one customer’s data doesn’t impact another customer’s data?

3. Does your cloud provider give you the ability to monitor the security of your data in the cloud? This will be important both for compliance and for keeping track of your own security policies.

4. Does your cloud provider encrypt your critical data? If not, why not?

5. Does your cloud provider give you the ability to control who is allowed to access your information based on roles and authorization? Does the cloud provider support federated identity management? These are basic security best practices.
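To make that last point concrete, here is a minimal sketch (in Python) of the kind of role-based authorization check you would expect a provider to support. The roles, actions, and data are hypothetical examples, not any particular provider's API.

    # A minimal role-based access control (RBAC) sketch. Roles and
    # permissions here are invented for illustration.
    ROLE_PERMISSIONS = {
        "auditor": {"read"},
        "analyst": {"read", "query"},
        "admin":   {"read", "query", "write", "delete"},
    }

    def is_allowed(user_roles, action):
        # Allow the action if any of the user's roles grants it.
        return any(action in ROLE_PERMISSIONS.get(role, set())
                   for role in user_roles)

    # A federated identity provider would hand the application the user's
    # roles; the application enforces them before touching cloud data.
    print(is_allowed({"analyst"}, "delete"))  # False
    print(is_allowed({"admin"}, "delete"))    # True

Federated identity management supplies the piece this sketch assumes: a trusted way for your own directory to vouch for who the user is and which roles they hold, so the cloud provider never has to store a second set of credentials.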

Now you are probably saying to yourself that this isn't rocket science. These are fundamental security approaches that any data center should follow. I recommend that you take a look at a great document published by the Cloud Security Alliance that details many of the key issues surrounding security in the cloud. So, I guess my principal message is that cloud security is not different than security in any data center. But the market does not seem to understand this because the perception is that a cloud is somehow not a data center that can be secured with regular old security.

I think that we will see something interesting happen because of this perception: cloud vendors will begin to charge a premium for really good security. In fact, this is already happening. Vendors like Amazon and Salesforce are offering segregated implementations of their environments to customers who don't trust their ordinary security approaches. This will work in the short term, primarily because during this early phase of the cloud there is not enough focus on security. Long term, as the market matures, cloud vendors will have to demonstrate their ability to provide a secure environment based on basic security best practices. In the meantime, cloud vendors will rake in the cash for premium secure cloud services.

What are the unanticipated consequences of the cloud? – Part II

October 29, 2009 9 comments

As I was pointing out yesterday, there are many unintended consequences from any emerging technology platform — the cloud will be no exception. So, here are my next three picks for unintended consequences from the evolution of cloud computing:

4. The cloud will disrupt traditional computing sales models. I think that Larry Ellison is right to rant about cloud computing. He is clearly aware that if cloud computing becomes the preferred way for customers to purchase software, the traditional model of paying maintenance on applications will change dramatically. Clearly, vendors can simply roll the maintenance stream into the per user per month pricing. However, as I pointed out in Part I, prices will inevitably go down as competition for customers expands. There will come a time when the vast sums of money collected to maintain software versions will seem a bit old fashioned. In fact, that will be one of the most important unintended consequences and will have a very disruptive effect on the economic models of computing. It has the potential to change the power dynamics of the entire hardware and software industries. The winners will be the customers and smart vendors who figure out how to make money without direct maintenance revenue. Like every other unintended consequence, new models will emerge that will make some really clever vendors very successful. But don't ask me what they are. It is just too early to know.

5. The market for managing cloud services will boom. While service management vendors do pretty well today managing data center based systems, the cloud environment will make these vendors king of the hill. Think about it like this. You are a company that is moving to the cloud. You have seven different software as a service offerings from seven different vendors. You also have a small private cloud that you use to provision critical customer data. You also use a public cloud for some large scale testing. In addition, any new software development is done with a public cloud and then moved into the private cloud when it is completed. Existing workloads like ERP systems and legacy systems of record remain in the data center. All of these components put together are the enterprise computing environment. So, what is the service level of this composite environment? How do you ensure that you are compliant across these environments? Can you ensure security and performance standards? (A back-of-the-envelope sketch after this list shows why the composite question is harder than it looks.) A new generation of products and maybe a new generation of vendors will rake in a lot of cash solving this one.

6. What will processes look like in the cloud? Like data, processes will have to be decoupled from the applications of record that they are an integral part of today. Now, I don't expect that we will rip processes out of every system of record. In fact, static systems such as ERP, HR, etc. will keep tightly integrated processes. However, the dynamic processes that need to change as the business changes will have to be designed without these constraints. They will become trusted processes — sort of like business services that are codified but can be reconfigured when the business model changes. This will probably happen anyway with the emergence of Service Oriented Architectures. However, with the flexibility of the cloud environment, this trend will accelerate. The need for independent processes and process models has the potential to create a brand new market.
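Here is the back-of-the-envelope sketch promised under number 5. End-to-end availability is roughly the product of the availabilities of every service the composite environment depends on; all of the figures below are invented for illustration.

    # Composite availability of a hypothetical hybrid environment.
    saas_availabilities = [0.999] * 7   # seven SaaS vendors at 99.9% each
    private_cloud = 0.9995
    public_test_cloud = 0.995

    composite = private_cloud * public_test_cloud
    for a in saas_availabilities:
        composite *= a

    print(f"Composite availability: {composite:.2%}")             # about 98.76%
    print(f"Potential downtime: {(1 - composite) * 365:.1f} days a year")

Every individual service looks respectable on paper, yet the composite works out to more than four days of potential downtime a year. That gap is exactly the problem the next generation of service management products will be paid to solve.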

I am happy to add more unintended consequences to my top six. Send me your comments and we can start a part III reflecting your ideas.

Ten things I learned while writing Cloud Computing for Dummies

August 14, 2009 14 comments

I haven’t written a blog post in quite a while. Yes, I feel bad about that but I think I have a good excuse. I have been hard at work (along with my colleagues Marcia Kaufman, Robin Bloor, and Fern Halper) on Cloud Computing for Dummies. I will admit that we underestimated the effort. We thought that since we had already written Service Oriented Architectures for Dummies — twice; and Service Management for Dummies that Cloud Computing would be relatively easy. It wasn’t. Over the past six months we have learned a lot about the cloud and where it is headed. I thought that rather than try to rewrite the entire book right here I would give you a sense of some of the important things that I have learned. I will hold myself to 10 so that I don’t go overboard!

1. The cloud is both old and new at the same time. It is built on the knowledge and experience of timesharing, Internet services, Application Service Providers, hosting, and managed services. So, it is an evolution, not a revolution.

2. There are lots of shades of gray with cloud segmentation. Yes, there are three buckets that we put clouds into: infrastructure as a service, platform as a service, and software as a service. Now, that's nice and simple. However, it isn't, because all of these areas are starting to blur into each other. And it is even more complicated because there is also business process as a service. This is not a distinct market unto itself – rather, it is an important component in the cloud in general.

3. Market leadership is in flux. Six months ago the marketplace for cloud was fairly easy to figure out. There were companies like Amazon and Google and an assortment of other pure play companies. That landscape is shifting as we speak. The big guns like IBM, HP, EMC, VMware, Microsoft, and others are rushing in. They would like to control the cloud. It is indeed a market where big players will have a strategic advantage.

4. The cloud is an economic and business model. Business management wants the data center to be easily scalable, predictable, and affordable. As it becomes clear that IT is the business, the industrialization of the data center follows. The economics of the cloud are complicated because so many factors are important: the cost of power, the cost of space, and the existing resources — hardware, software, and personnel — and how well they are utilized. Determining the most economical approach is harder than it might appear; the toy cost model after this list shows why.

5. The private cloud is real.  For a while there was a raging debate: is there such a thing as a private cloud? It has become clear to me that there is indeed a private cloud. A private cloud is the transformation of the data center into a modular, service oriented environment that enables users to safely procure infrastructure, platform, and software services in a self-service manner. This may not be a replacement for an entire data center – a private cloud might be a portion of the data center dedicated to certain business units or certain tasks.

6. The hybrid cloud is the future. The future of the cloud is a combination of private clouds, traditional data centers, hosting, and public clouds. Of course, there will be companies that use public cloud services for everything, but the majority of companies will have a combination of cloud services.

7. Managing the cloud is complicated. This is not just a problem for the vendors providing cloud services. Any company using cloud services needs to be able to monitor service levels across the services they use. This will only get more complicated over time.

8. Security is king in the cloud. Many of the customers we talked to are scared about the security implications of putting their valuable data into a public cloud. Is it safe? Will my data cross country borders? How strong is the vendor? What if it goes out of business? This issue is causing many customers to either consider only a private cloud or to hold back. The vendors who succeed in the cloud will have to have a strong brand that customers will trust. Security will always be a concern, but it will be addressed by smart vendors.

9. Interoperability between clouds is the next frontier. In these early days customers tend to buy one service at a time for a single purpose — Salesforce.com for CRM, some compute services from Amazon, etc. However, over time, customers will want more interoperability across these platforms. They will want to be able to move their data and their code from one environment to another. There is some forward movement in this area, but it is early. There are few standards for the cloud and little agreement.

10. The cloud in a box. There is a lot of packaging going on out there, and it comes in two forms. Companies are creating appliance-based environments for managing virtual images. Other vendors (especially the big ones like HP and IBM) are packaging their cloud offerings with their hardware for companies that want private clouds.
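Returning to number 4, here is the toy cost model I promised. Every number is an invented assumption; the point is only that the answer flips depending on utilization.

    # Toy comparison: owned servers vs. on-demand cloud capacity.
    # All prices are invented for illustration.
    OWNED_MONTHLY_PER_SERVER = 250.0    # amortized hardware, power, space, admin
    CLOUD_HOURLY_PER_INSTANCE = 0.40    # on-demand price for an equivalent instance
    HOURS_PER_MONTH = 730
    SERVERS = 10

    def owned_cost():
        # An owned server costs the same whether it is busy or idle.
        return SERVERS * OWNED_MONTHLY_PER_SERVER

    def cloud_cost(utilization):
        # In the cloud you pay only for the hours you actually run.
        return SERVERS * utilization * HOURS_PER_MONTH * CLOUD_HOURLY_PER_INSTANCE

    for util in (0.10, 0.50, 0.90):
        print(f"{util:.0%} utilization: owned ${owned_cost():,.0f}/month "
              f"vs cloud ${cloud_cost(util):,.0f}/month")

With these made-up numbers, the cloud wins easily at 10 percent utilization and loses at 90 percent. That is why the utilization of your existing resources matters as much as the price list.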

I have only scratched the surface of this emerging market. What makes it so interesting and so important is that it actually is the coalescing of computing. It incorporates hardware, management software, service orientation, security, software development, information management, the Internet, service management, interoperability, and probably a dozen other components that I haven't mentioned. It is truly the way we will achieve the industrialization of software.

Five things I learned at IBM’s Rational Conference

June 9, 2009 3 comments

I haven’t been to IBM’s Rational Conference in a couple of years so I was very interested not just to see what IBM had to say about the changing landscape of software development but how the customers attending the conference had changed. I was not disappointed.  While I could write a whole book on the changes happening in software development (but I have enough problems) I thought I would mention some of the aspects of the conference that I found noteworthy.

One. Rational is moving from a tools company to a software development platform. Rational has always been a complex organization to understand since it has evolved and changed so much over the years. The organization now seems to have found its focus.

Two. More management, fewer low-level developers. In the old days, conferences like this would be dominated by programmers. While there were many developers in attendance, I found that there were also a lot of upper-level managers. For example, I sat at lunch with one CIO who was in the process of moving to a sophisticated service oriented architecture. Another person at my table was a manager looking to update his company's current development platforms. Still another individual was a customer of one of the companies that IBM had purchased who was looking to understand how to implement new capabilities added since the acquisition.

Three. Rational has changed dramatically through acquisitions. Rational is a tale of acquisitions. Rational Software, the linchpin of IBM's software development division, was itself a combination of many acquisitions. Rational, before being bought by IBM in 2002 for $2.1 billion, had acquired an impressive array of companies including Requisite, SQA, Performance Aware, Pure Atria, and ObjecTime Ltd. After a period of absorption, IBM started acquiring more assets. BuildForge (build and release management) was purchased in 2006; Watchfire (Web application security vulnerability and compliance testing software) was bought in 2007; and Telelogic (requirements management) was purchased in 2008.

It has taken IBM a while both to absorb all of the acquisitions and then to create a unified architecture so that these software products could share components and interoperate. While IBM is not done, under the leadership of General Manager Danny Sabbah, Rational has made the transition from being a tools company to becoming a platform for managing software complexity. It is a work in progress.

Four. It’s all about Jazz. Jazz, IBM’s collaboration platform, was a major focus of the conference. Jazz is an architecture intended to integrate data and function. Jazz’s foundation is the REST architecture, and therefore it is well positioned for use in Web 2.0 applications. What is most important is that IBM is bringing all of its Rational technology under this model. Over the next few years, we can expect to see this framework under all of Rational’s products.

Five. Rational doesn’t stand alone. It is easy to focus on all of the Rational portfolio (which could take a while). But what I found quite interesting was the emphasis on the intersection between the Rational platform and Tivoli’s management services, as well as WebSphere’s Service Oriented Architecture offerings. Rational also made a point of focusing on the use of collaboration elements provided by the Lotus division. Cloud computing was also a major focus of discussion at the event. While many customers at the event are evaluating the potential of using various Rational products in the cloud, it is early. The one area where IBM seems to have hit a home run is its CloudBurst appliance, which is intended to create and manage virtual images. Rational is also beginning to deliver its testing offerings as cloud based services. One of the most interesting elements of its approach is the use of tokens as a licensing model. In other words, customers purchase a set number of tokens, or virtual licenses, that can be used to purchase services and are not tied to a specific project or product.
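The token model is simple to sketch: a prepaid pool that any project can draw down for whatever service it needs, rather than licenses bound to one product. A minimal illustration in Python, with invented service names and token prices:

    # Sketch of token-based licensing: a prepaid pool spent on services,
    # not tied to a specific project or product. Prices are invented.
    TOKEN_COST = {"functional_test_run": 5, "load_test_hour": 20}

    class TokenPool:
        def __init__(self, tokens):
            self.tokens = tokens

        def spend(self, service, quantity=1):
            cost = TOKEN_COST[service] * quantity
            if cost > self.tokens:
                raise ValueError(f"insufficient tokens for {service}")
            self.tokens -= cost
            return self.tokens

    pool = TokenPool(100)
    print(pool.spend("functional_test_run", 4))  # 80 tokens left
    print(pool.spend("load_test_hour", 3))       # 20 tokens left

The appeal for customers is flexibility: the same pool covers this quarter's load testing and next quarter's functional testing without a new procurement cycle.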

Does IT see the writing on the cloud wall?

April 15, 2009 5 comments

For the last six months or so I have been researching cloud computing. More recently, our team has started writing our next Dummies Book on Cloud Computing. Typically when we start a book we talk to everyone in the ecosystem — vendors big and small and lots of customers.  For example, when we started working on SOA for Dummies almost three years ago we found a lot of customers who could talk about their early experience. Not all of these companies had done things right. They had made lots of mistakes and started over. Many of them didn’t necessarily want their mistakes put into a book but they were willing to talk and share.  As I have mentioned in earlier writings, when we wrote the second edition of SOA for Dummies we had a huge number of customers that we could talk to. A lot of them have made tremendous progress in transforming not just their IT organization but the business as well.

We had a similar experience with Service Management for Dummies, which comes out in June. Customers were eager to explain what they had learned about managing their increasingly complex computing and business infrastructures. But something interesting is happening with the Cloud book. The experience feels very different, and I think this is significant.

Our team has been talking to a lot of the vendors — big and small — about their products and strategies around the cloud. Some of these vendors are focused on some really important problems. Others are simply tacking the word cloud onto their offerings, hoping to get swept up in the excitement. But there is something missing. I think there are two things: a lack of clarity about what a cloud really is, and a lack of clarity about what the component parts are. Is it simply Software as a Service? Is it an outsourced infrastructure? Is it storage capacity to supplement existing data centers? Is it a management platform that supports Software as a Service? Does a cloud require a massive ecosystem of partners? Is it a data center with APIs? I am not going to answer these questions now (I'll leave some of them to future writings).

What I wanted to talk about was what I see happening with customers. I see customers being both confused and very wary. In fact, the other day I tried to set up a call with a senior executive from a large financial services company that I have spoken to about other emerging areas. This company always likes to be on the forefront of important technology trends. To my surprise, the executive was not willing to talk about clouds at all. Other customers are putting their toes in the cloud (pun intended) by using some extra compute cycles from Amazon or by using Software as a Service offerings like SalesForce.com. Some customers are looking to implement a cloud-like capability within their own data center. Could it be that they are afraid that if they don't offer something like Amazon's EC2 cloud they will be put out of business? Just as likely, they are worried about the security of their intellectual property and their data.

I predict that the data center is about to go through a radical transformation that will forever change the landscape of corporate computing. Companies have recognized for a long time that data centers are very inefficient. They have tried clustering servers and virtualizing their servers with some level of success. But the reality is that in time there will be a systematic approach to scalable computing based on the cloud. It will not be a simple outsourced data center, because of the transition to a new generation of software that is component based and service oriented. There is a new generation of service management technologies that makes the management of highly distributed environments much more seamless. The combination of service orientation, service management, and cloud will be the future of computing.

The bottom line is that while the vendor community sees dollar signs in this emerging cloud based world, the customers are afraid. The data center management team does not understand what this will mean for their future. If everything is tucked away in a cloud, what is my job? Will we still have a data center? I suspect that it will not be that simple. At some point down the line we will actually move to utility computing, where computing assets will all be based on a consistent set of standards so that customers will be able to mix and match the services they need in real time. We clearly are not there yet. Today there are many data center activities that either cannot or will not be put into a cloud. Internal politics will keep this trend towards clouds moving slowly.

Has Service Management become Business Management?

March 22, 2009 7 comments

Having just completed Service Management for Dummies (scheduled to be in the book stores in June), I have taken a step back to think about what I learned from the process. When our team first started the research, a lot of people I talked to wanted to know if we were writing a book about ITIL 3.0 best practices. The answer is: of course we covered ITIL 3.0 best practices. However, as part of our research and in-depth discussions with customers, it became apparent that there is something bigger happening here that transcends IT. I am not sure that this has been noticed out there in the world of service management, but it is real and encouraging. Corporate management is beginning to notice that much of their physical infrastructure and the components that are the essence of their corporate existence are technology enabled. The X-ray that used to be captured on film and kept in a file cabinet is now digitized. The automobile is now managed by sensors and other computers. Security of physical buildings is computerized. The factory floor is a complex system. Of course, I could go on for months with lists that include RFID and the like. But I think I have made the point that increasingly everything must be thought of as a system, not just the servers and desktops and networks that sit in the data center.

In my view, this is why the service management arena is getting to be so exciting. Many of the CIOs that our team interviewed for Service Management for Dummies echoed this level of excitement. These executives are finding that applying service management principles to both the physical and IT worlds is transformational. It means that organizations have a greater ability to manage their companies holistically.

In the book, our team uses the example of the ATM to make the point. The ATM is a relatively simple automated device that requires matching a customer number with an ID code. It requires that a request for cash from the consumer be matched with the availability of funds from that bank or one of its partner banks. It requires the ability to do the accounting to provide the customer with a receipt that states how much money was withdrawn and how much is left in the account. And there is more! Behind that customer action that might take all of 5 seconds is a huge infrastructure: a data center, a security infrastructure, a sensor that detects if the machine itself is experiencing a problem. There is a network of trucks managed by a third party company that ensures cash is delivered to replenish the ATM. There are even more parts to this world that I am not mentioning — so forgive me. But what is most interesting is that all of these mini-ecosystems are intertwined. Suppose the bank's management decides to save money by selecting a new cash delivery network. This company promises great service at a fraction of the cost. To save money the bank goes with the new service, only to discover that its drivers are unreliable and cash is often not delivered in a timely manner. Even if the ATM network works well, the data center is flawless, and the security is solid, the bank is not able to deliver satisfaction to its customers because there is no cash.
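To see how intertwined those mini-ecosystems are, here is the ATM walk-through reduced to a minimal Python sketch. All accounts, rules, and numbers are hypothetical; the point is the last check, which depends on the trucking partner rather than on anything in the data center.

    # Every step is a separate service; the customer sees success only if
    # all of them work, including third-party cash replenishment.
    ACCOUNTS = {"4411": {"pin": "0923", "balance": 500}}
    MACHINE = {"cash": 200}   # replenished by the trucking network, not by IT

    def withdraw(card, pin, amount):
        account = ACCOUNTS.get(card)
        if account is None or account["pin"] != pin:
            return "declined: identity check failed"
        if amount > account["balance"]:
            return "declined: insufficient funds"
        if amount > MACHINE["cash"]:
            # IT, security, and accounting all worked; the trucks didn't.
            return "declined: machine out of cash"
        account["balance"] -= amount
        MACHINE["cash"] -= amount
        return f"dispensed {amount}, balance {account['balance']}"

    print(withdraw("4411", "0923", 100))  # succeeds end to end
    print(withdraw("4411", "0923", 300))  # fails only because cash ran short

The second withdrawal fails even though the identity check, the funds check, and the accounting all succeed. That is the unreliable-trucking scenario in miniature.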

The bottom line is that service management is becoming a corporate issue — not just an IT issue. The secret to service management is the customer, partner, supplier, and employee experience. Like every other technology transformation over the past couple of decades, mature technology initiatives become management initiatives. Increasingly, service management is being tied to the key performance indicators of the business. Therefore, it is imperative that IT management understand the goals of corporate management as well as the needs of internal and external customers.

What’s different about SOA two years later? Why we wrote a second edition of SOA for Dummies

December 8, 2008 4 comments

It seems like just the other day that our team was busily finishing the first edition of SOA for Dummies. But it has been two years since that book came out. A lot has changed in that time. When we first wrote the book, we heard from lots of people that they really didn't know what SOA was and were happy to have a book that would explain it to them in easy to understand language.

Because so much has changed, we were asked to write a second edition of SOA for Dummies, which is coming out on December 19th. What has changed in those two years? Well, first of all, there have been a lot more implementations of SOA. In fact, in the first edition, we were happy to have gotten 7 case studies. Many of the customers that we talked to (both those featured in the book and those who took the time to speak with us without attribution) were just getting started. They were forming centers of excellence. They were beginning to form partnerships between the business and technical sides of their companies. They were implementing a service bus or were building their first sets of services.

In this second edition, we were fortunate to find 24 companies across 9 different verticals willing and able to talk on the record about their experiences implementing SOA.  What did we learn? While there is a lot I could say, I’d like to net it out to 5 things we learned:

1. Successful companies have spent the time starting with both the key business services and business processes before even thinking about implementation.

2. Companies have learned a lot since their initial pilots. They are now focused on how they can increase revenue for their companies through innovation using a service oriented approach.

3. Many companies have a strategic roadmap that they are focused on and therefore are implementing a plan in an incremental fashion.

4. A few companies are creating business services extracted from aging applications. Once this is done, they are mandating the use of these services across the company.

5. Companies that have been working on SOA for the last few years have learned to create modular business services that can have multiple uses. This was much harder than it appeared at first.

There are many other best practices and lessons learned in the case studies. It is interesting to note that just as many companies as said yes were unable to participate because management felt that they didn't want competitors to know what they were doing.

The bottom line is that SOA is beginning to mature. Companies are not just focused on backbone services such as service buses but on making their SOA services reach out to consumers and their business partners.

We have also added a bunch of new chapters to the book. For example, we have new chapters on SOA service management, SOA software development, software quality, component applications, and collaboration within the business process lifecycle. Of course, we have updated all existing chapters based on the changes we have seen over the last few years.

We are very excited that we had the opportunity to update the book and look forward to continuing the dialog.

The ten reasons why software companies lose in a losing economy

December 4, 2008 3 comments

I have been thinking about the software industry and what is going to happen to companies in this really lousy economy. There will clearly be companies that don’t weather the storm — either because their venture capital backers get nervous or because their customers do.  But, as in every downturn, there will be companies that figure out how to do the right thing and actually thrive. There will also be companies that simply have a business model for another time and will not make it.  So, I thought that I would put together a list of the characteristics of the software companies that will fail:

  1. My technology is so revolutionary everyone will want it. I see too many companies that don’t actually know what problems their technology solves for customers. If it doesn’t solve pain — don’t bother.
  2. The platform we offer to our customers is a complete architecture and we’re going to build an ecosystem. Too many software companies think they can offer a complete platform to customers — even if they have only a few dollars in revenue. This isn’t the time to try to do it all. Anyway, no one will believe you. Pick something you do well and stick to it!
  3. We don’t plan to try to partner with the big players; it’s too hard. In tough economic times, customers want to know that there is someone big and powerful behind the scenes…just in case.
  4. We’d love to partner with a large vendor if they are willing to put our product on their price list and sell for us. Keep dreaming. Big vendors will partner but only if there is something in it for them. If you can fill a hole in their product line you have a good chance but you have to be realistic.
  5. We sell a great tool. Everyone needs tools, but they are commodities. So, unless a software company has a deep channel, this plan won’t work. I have seen too many really nice tools companies go out of business. It takes a lot of energy to make one sale to a customer. If the return on the sales effort is only $199.00, it will take a long time to get to a million — at $199 a sale, you need more than 5,000 sales just to reach a million dollars. And in tough economic times, customers will put a software company through the same due diligence process for a $200 item as they would for a $20,000 one. Everyone is afraid to make a decision.
  6. Our technology sells itself. Just a few months ago companies were talking about how they wanted technology that would foster innovation. Now that desire hasn’t changed and probably won’t. However, customers want to know that they can get a fast return on investment.  Therefore, successful vendors are structuring their offerings in a modular way so that customers can quickly prove value.
  7. We sell an entire turnkey environment. The days of big all encompassing implementations are over — at least for now.  Customers need to be able to implement just what they can afford or get budget for. If it is successful, they want to be able to add the next chunk…next year.
  8. We are implementing precisely what our customers tell us they need. I know that it is important to listen to customers. However, there are important lessons to remember. Customers do not always say what they mean. They don’t always know what they want. They might be asking a software company to add functions that are specific to the way their company operates and may not be wise for the market overall. So, to avoid failure, listen, but make sure that you are not walking into a trap. Look beyond the fear to what will make your buyers successful in their jobs.
  9. We are thinking about Software as a Service (SaaS)…but… In scary times, it is easier to stay with what you know and not make waves. But customers will buy SaaS offerings because there is no capital expenditure needed. If you don’t know how to do this, partner with someone who does. It is going to become the normal way that many software offerings are provided now and even more so in the future.
  10. We are limiting our outreach in the market. It is too expensive to advertise or market. We’re going to wait until things get better. While these are scary times, it is not wise to hide. While other companies are hiding, smart software companies are out there doing a lot of low cost but very effective marketing initiatives. It takes some hard work, but prospects will notice because everyone else is really quiet.

There is no doubt that there is lots of uncertainty out there. There will be a lot of companies who don’t know how to position, price, and partner. There will be lots of companies that simply don’t know how to prove to prospects that they are worth betting on.  I suspect that the companies that survive will be the ones with great business models, interesting and accessible innovation and a lack of fear.

My Top Eleven Predictions for 2009 (I bet you thought there would be only ten)

November 14, 2008 11 comments

What a difference a year makes. The past year was filled with a lot of interesting innovations and market shifts. For example, Software as a Service went from being something for small companies or departments within large ones to a mainstream option. Real customers are beginning to solve real business problems with service oriented architecture. The latest hype is around Cloud Computing – after all, the software industry seems to need hype to survive. As we look forward into 2009, it is going to be a very different and difficult year, but one that will be full of some surprising twists and turns. Here are my top predictions for the coming year.
One. Software as a Service (SaaS) goes mainstream. It isn’t just for small companies anymore. While this has been happening slowly and steadily, it is rapidly becoming mainstream because with the dramatic cuts in capital budgets companies are going to fulfill their needs with SaaS.  While companies like SalesForce.com have been the successful pioneers, the big guys (like IBM, Oracle, Microsoft, and HP) are going to make a major push for dominance and strong partner ecosystems.
Two. Tough economic times favor the big and stable technology companies. Yes, these companies will trim expenses and cut back like everyone else. However, customers will be less willing to bet the farm on emerging startups with cool technology. The only way emerging companies will survive is to do what I call “follow the pain”. In other words, come up with compelling technology that solves really tough problems that others can’t do. They need to fill the white space that the big vendors have not filled yet. The best option for emerging companies is to use this time when people will be hiding under their beds to get aggressive and show value to customers and prospects. It is best to shout when everyone else is quiet. You will be heard!
Three. The Service Oriented Architecture market enters the post-hype phase. This is actually good news. We have had in-depth discussions with almost 30 companies for the second edition of SOA for Dummies (coming out December 19th). They are all finding business benefit from the transition. They all view SOA as a journey – not a project. So, there will be less noise in the market but more good work getting done.
Four. Service Management gets hot. This has long been an important area whether companies were looking at automating data centers or managing process tied to business metrics.  So, what is different? Companies are starting to seriously plan a service management strategy tied both to customer experience and satisfaction. They are tying this objective to their physical assets, their IT environment, and their business process across the company. There will be vendor consolidation and a lot of innovation in this area.
Five. The desktop takes a beating in a tough economy. When times get tough companies look for ways to cut back and I expect that the desktop will be an area where companies will delay replacement of existing PCs. They will make do with what they have or they will expand their virtualization implementation.
Six. The Cloud grows more serious. Cloud computing has actually been around since early time sharing days, if we are to be honest with each other. However, there is a difference: emerging technologies like multi-tenancy make this approach to shared resources different (a small sketch after this list illustrates the tenant isolation at the heart of multi-tenancy). Just as companies are moving to SaaS for economic reasons, companies will move to clouds with the same goal – decreasing capital expenditures. Companies will have to gain an understanding of the impact of trusting a third party provider. Performance, scalability, predictability, and security are not guaranteed just because some company offers a cloud. Service management of the cloud will become a key success factor. And there will be plenty of problems to go around next year.
Seven. There will be tech companies that fail in 2009. Not all companies will make it through this financial crisis. Even large companies with cash will potentially be on the failure list. I predict that Sun Microsystems, for example, will fail to remain intact. I expect that the company will be broken apart. It could be that the hardware assets could be sold to its partner Fujitsu, while pieces of software could be sold off as well. It is hard to see how a company without a well-crafted software strategy and execution model can remain financially viable. Similarly, companies without a focus on the consumer market will have a tough time in the coming year.
Eight. Open Source will soar in this tight market. Open Source companies are in a good position in this type of market—with a caveat. It is dangerous for customers to simply adopt an open source solution unless there is a strong commercial support structure behind it. Companies that offer commercial open source will emerge as strong players.
Nine.  Software goes vertical. I am not talking about packaged software. I anticipate that more and more companies will begin to package everything based on a solutions focus. Even middleware, data management, security, and process management will be packaged so that customers will spend less time building and more time configuring. This will have an impact in the next decade on the way systems integrators will make (or not make) money.
Ten. Appliances become a software platform of choice for customers. Hardware appliances have been around for a number of years and are growing in acceptance and capability. This trend will accelerate in the coming year. The most common solutions used with appliances include security, storage, and data warehousing. The appliance platform will expand dramatically this coming year. More software will be sold as prepackaged solutions to make acceptance of complex enterprise software easier.

Eleven. Companies will spend money on anticipation management. Companies must be able to use their information resources to understand where things are going. Being able to anticipate trends and customer needs is critical.  Therefore, one of the bright spots this coming year will be the need to spend money getting a handle on data.  Companies will need to understand not just what happened last year but where they should invest for the future. They cannot do this without understanding their data.
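On prediction six: here is the small sketch of multi-tenancy I promised. One shared application serves many customers, and every query is scoped to a tenant so customers never see each other's rows. The schema and data are hypothetical.

    # Multi-tenant isolation in miniature: one shared table, with every
    # query filtered by tenant. Schema and rows are invented.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE orders (tenant_id TEXT, item TEXT)")
    db.executemany("INSERT INTO orders VALUES (?, ?)",
                   [("acme", "widgets"), ("acme", "gears"),
                    ("globex", "sprockets")])

    def orders_for(tenant_id):
        # The tenant filter on every query is the isolation a provider
        # must guarantee in a shared environment.
        rows = db.execute("SELECT item FROM orders WHERE tenant_id = ?",
                          (tenant_id,))
        return [item for (item,) in rows]

    print(orders_for("acme"))    # ['widgets', 'gears']
    print(orders_for("globex"))  # ['sprockets']

Tenant scoping, authorization, and auditing are the same disciplines any data center needs; the cloud simply applies them in a shared environment.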

The bottom line is that 2009 will be a complicated year for software. Many companies without a compelling solution to customer pain will, and should, fail. The market favors safe companies. As in any down market, some companies will focus on avoiding any risk and waiting. The smart companies – both providers and users of software – will take advantage of the rough market to plan for innovation and success when things improve – and they always do.

Can HP Lead in Virtualization Management?

September 15, 2008 2 comments

HP has been a player in the virtualization market for quite a while. Its many hardware products, including its server blades, have given it a respectable position in the market. In addition, HP has done a great job of being an important partner to key virtualization software players including VMware, Red Hat, and Citrix. It is also establishing itself as a key Microsoft partner as Microsoft moves boldly into virtualization with Hyper-V. Thus far, HP's virtualization strategy has not focused on software. That has started to change. Now, if this had been the good old days, I think we would have seen a strategy that focused on cooler hardware and data center optimization. Now, don't get me wrong — HP is very much focused on the hardware and the data center. But now there is a new element that I think will be important to watch.

HP is finally leveraging its software assets in the form of virtualization management.  If I were cynical I would say, it’s about time.  But to be fair, HP has added a lot of new assets to its software portfolio in the last couple of years that make a virtualization management strategy more possible and more believable.

It is interesting that when a company has key assets to offer customers, it often strengthens the message. I was struck by what I thought was a clear message that I found on one of the slides from their marketing pitch: “Your applications and business services don’t care where resources are, how they’re connected or how they’re managed, and neither should you.” This statement struck me as precisely the right message in this crazy, overhyped virtualization market. Could it be that HP is becoming a marketing company?

As virtualization goes mainstream, I predict that management of this environment will become the most important issue for customers. In fact, this is the message I have gotten loud and clear from customers trying to virtualize their applications on servers. Couple this with the reality that no company virtualizes everything, and even if they did, they would still have a physical environment to manage. Therefore, HP focuses its strategy on a plan to manage the composite of physical and virtual. Of course, HP is not alone here. I was at Citrix's industry analyst meeting last week, and they are adopting this same strategy. I promise that my next blog will be about Citrix.
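What does managing the composite of physical and virtual mean in practice? At minimum, one health view that spans both worlds. A minimal sketch, with a hypothetical inventory:

    # One report across physical and virtual resources, so operations does
    # not check two separate consoles. Names and statuses are invented.
    inventory = [
        {"name": "db-server-01", "kind": "physical", "status": "ok"},
        {"name": "esx-host-03",  "kind": "physical", "status": "ok"},
        {"name": "crm-vm-11",    "kind": "virtual", "host": "esx-host-03",
         "status": "degraded"},
        {"name": "web-vm-12",    "kind": "virtual", "host": "esx-host-03",
         "status": "ok"},
    ]

    def problems(inventory):
        # Report every unhealthy resource, physical or virtual, along with
        # the physical host a virtual machine depends on.
        return [(r["kind"], r["name"], r.get("host", "-"))
                for r in inventory if r["status"] != "ok"]

    print(problems(inventory))  # [('virtual', 'crm-vm-11', 'esx-host-03')]

The real products do vastly more (discovery, dependency mapping, service levels), but the single pane across both worlds is the core idea.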

HP is calling its virtualization strategy its Business Management Suite. While this is a bit generic, HP is trying to leverage the hot business service management platform and wrap virtualization with it. Within this wrapper, HP is including four components:

  • Business Service Management — the technique for linking services across the physical and virtual worlds. This is intended to monitor the end-to-end health of the overall environment.
  • Business Service Automation — a technique for provisioning assets for distributed computing.
  • IT Service Management — a technique for discovering what software is present and what licenses need to be managed.
  • Quality Management — a technique for testing, scheduling, and provisioning resources across platforms. Many companies are starting to use virtualization as a way of testing complex composite applications before putting them into production. Companies are testing for both application quality and performance under different loads.

I am encouraged that HP seems to understand the nuances of this market. HP’s strategy is to position itself as the “Switzerland” of the virtualization management space. It is therefore creating a platform that includes infrastructure to manage across IBM, Microsoft, VMware, Citrix, and Red Hat. It is positioning its management assets from its heritage software (OpenView) and its acquisitions to execute this strategy. For example, its IT Service Management offering is intended to manage compliance with license terms and conditions as well as chargebacks across heterogeneous environments. Its Asset Manager is intended to track virtualized assets through its discovery and dependency mapping tools. HP’s Operations Manager has extended its performance agents so that it can monitor everything from virtual machines to hypervisors. The company’s SiteScope provides agentless monitoring of VMware hypervisors. The HP Network Node Manager has extended support for monitoring virtual networks.

HP’s goal is to focus on the overall health of these distributed, virtualized services from an availability, performance, capacity planning, end user experience, and service level management perspective. It is indeed an ambitious plan that will take some time to develop, but it is the right direction. I am particularly impressed with the partner program that HP is evolving around its CMDB (Configuration Management Database). It is partnering with VMware on a joint development initiative to provide a federated CMDB that can collect information from a variety of hosts and guest hosts in an on demand approach. Other companies such as Red Hat and Citrix have joined the CMDB program.

This is an interesting time in the virtualization movement. As virtualization matures, companies are starting to realize that simply virtualizing an application on a server does not by itself save the time and money they anticipated. The world is a lot more complicated than that. Management wants to understand how the entire environment is part of delivering value. For example, an organization might put all of its call center personnel on a virtualized platform, which works fine until an additional 20 users with heavy demands on the server suddenly cause performance to falter. In other situations, everything works fine until there is a software error somewhere in the distributed environment. The virtualized environment suddenly fails, and it is very difficult for IT operations to diagnose the problem. This is when management stops getting excited about how wonderful it is that they can virtualize hundreds of users onto a single server and starts worrying about the quality of service and the reputation of the organization overall.

The bottom line is that HP seems to be pulling the right pieces together for its virtualization management strategy. It is indeed still early. Virtualization itself is only the tip of the distributed computing marketplace.  HP will have to continue to innovate on its own while investing in its partner ecosystem. Today partners are eager to work with HP because it is a good partner and non-threatening.  But HP won’t be alone in the management of virtualization.  I expect that other companies like IBM and Microsoft will be very aggressive in this market.  HP has a little breathing room right now that it should take advantage of before things change again. And they always change again.