Archive

Posts Tagged ‘service management’

HP’s Ambitious Cloud Computing Strategy: Can HP Emerge as a Power?

February 15, 2011 4 comments

To comprehend HP’s cloud computing strategy you have to first understand HP’s Matrix Blade System. HP announced the Matrix system in April of 2009 as a prepackaged fabric-based system. Because Matrix was designed as a packaged environment, it has become the linchpin of HP’s cloud strategy.

So, what is Matrix? Within this environment, HP has pre-integrated servers, networking, storage, and software (primarily orchestration to customize workflow). In essence, Matrix is a unified computing system that supports both physical blades and virtual configurations. It includes a graphical command center console to manage resource pools, physical and virtual servers, and network connectivity. On the software side, Matrix provides an abstraction layer that supports workload provisioning and workflow-based policy management that can determine where workloads will run. The environment supports the VMware hypervisor, open source KVM, and Microsoft’s Hyper-V.

HP’s strategy is to combine this Matrix system, which it has positioned as its private cloud, with a public compute cloud. In addition, HP is incorporating its lifecycle management software and its security acquisitions as part of its overall cloud strategy. It is leveraging HP Services (formerly EDS) to offer a hosted private cloud and traditional outsourcing as part of an overall plan. HP is hoping to leverage its services expertise in running large enterprise packaged software.

There are three components to the HP cloud strategy:

  • CloudSystem
  • Cloud Service Automation
  • Cloud Consulting Services

CloudSystem. What HP calls CloudSystem is, in fact, based on the Matrix blade system. The Matrix Blade System uses a common rack enclosure to support all the blades produced by HP. The Matrix is a packaging of what HP calls an operating environment that includes provisioning software, virtualization, a self-service portal, and management tools to manage resource pools. HP considers its public cloud services to be part of the CloudSystem. To provide a hybrid cloud computing environment, HP will offer public compute cloud services similar to what is available from Amazon EC2. When combined with the outsourcing services from HP Services, HP contends that it provides a common architectural framework across public, private, virtualized servers, and outsourcing. It includes what HP is calling cloud maps. Cloud maps are configuration templates based on HP’s acquisition of Stratavia, a database and application automation software company.

Cloud Service Automation. The CloudSystem is intended to make use of service automation software called Cloud Service Automation (CSA). The components of CSA include a self-service portal that manages a service catalog. The service catalog describes each service that is intended to be used as part of the cloud environment. Within the catalog, the required service level is defined. In addition, the CSA can meter the use of services and can provide visibility into the performance of each service. A second capability is a cloud controller, based on the orchestration technology from HP’s Opsware acquisition. A third component, the resource manager, provides provisioning and monitoring services. The objective of CSA is to provide end-to-end lifecycle management of the CloudSystem.
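To make the catalog idea concrete, here is a minimal sketch in Python of a service catalog that records a required service level for each entry and meters usage. The class and field names are invented for illustration; this is not HP’s actual CSA interface.

```python
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    """One service in a self-service catalog, with its required service level."""
    name: str
    description: str
    required_availability: float  # e.g. 0.999 means "three nines"
    usage_hours: float = 0.0      # metered consumption

    def meter(self, hours: float) -> None:
        """Record metered use of the service."""
        self.usage_hours += hours

@dataclass
class ServiceCatalog:
    """The self-service portal's view: registration, metering, and visibility."""
    entries: dict = field(default_factory=dict)

    def register(self, entry: CatalogEntry) -> None:
        self.entries[entry.name] = entry

    def usage_report(self) -> dict:
        """Per-service usage, the raw material for metering and chargeback."""
        return {name: e.usage_hours for name, e in self.entries.items()}

catalog = ServiceCatalog()
catalog.register(CatalogEntry("db-backup", "Nightly database backup", 0.999))
catalog.entries["db-backup"].meter(2.5)
print(catalog.usage_report())  # {'db-backup': 2.5}
```

The point is simply that a catalog is more than a list of names: each entry carries its service level and its metered consumption, which is what makes end-to-end lifecycle management possible.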

Cloud Consulting Services. HP is taking advantage of EDS’s experience in managing computing infrastructure as the foundation for its cloud consulting services offerings. HP also leverages the consulting services that were traditionally part of HP as well as services from EDS. As a result, HP has deep experience in designing and running cloud seminars and strategy engagements for customers.

From HP’s perspective, it is taking a hybrid approach to cloud computing. What does HP mean by hybrid? Basically, HP’s hybrid strategy combines the CloudSystem (a hardware-based private cloud) with its own public compute services and traditional outsourcing.

The Bottom Line. Making the transition to becoming a major cloud computing vendor is complicated. The market is young and still in transition. HP has many interesting building blocks that have the potential to make it an important player. Leveraging the Matrix Blade System is a pragmatic move since it is already an integrated and highly abstracted platform. However, HP will have to provide more services that increase the ability of its customers to use the CloudSystem to create an elastic and flexible computing platform. Cloud Service Automation is a good start but still requires more evolution. For example, it needs to add more capabilities to its service catalog. Leveraging its Systinet registry/repository as part of its service catalog would be advisable. I also think that HP needs to package its security offerings to be cloud specific. This includes governance and compliance as well as identity management.

Just how aggressively HP plans to compete in the public cloud space is uncertain. Can HP be effective in both markets? Does it need to combine its offerings or create two different business models?

It is clear that HP wants to make cloud computing the cornerstone of its “Instant-On Enterprise” strategy announced last year. In essence, Instant-on Enterprise is intended to make it easier for customers to consume data center capabilities including infrastructure, applications, and services.  This is a good vision in keeping with what customers need.  And plainly cloud computing is an essential ingredient in achieving this ambitious strategy.

What will it take to achieve great quality of service in the cloud?

November 9, 2010 1 comment

You know that a market is about to transition from an early fantasy market when IT architects begin talking about traditional IT requirements. Why do I bring this up as an issue? I had a fascinating conversation yesterday with a leading architect in charge of the cloud strategy for an important company that is typically on the bleeding edge of technology. Naturally, I am not allowed to name the company or the person. But let me just say that individuals and companies like this are the first to grapple with issues such as the need for a registry for web services or the complexity of creating business services that are both reusable and include business best practices. They are the first companies to try out artificial intelligence to see if it could automate complex tasks that require complex reasoning.

These innovators tend to get blank stares from their cohorts in other traditional IT departments who are grappling with mundane issues such as keeping systems running efficiently. Leading edge companies have the luxury to push the bounds of what is possible to do.  There is a tremendous amount to be learned from their experiments with technology. In fact, there is often more to be learned from their failures than from their successes because they are pushing the boundary about what is possible with current technology.

So, what did I take away from my conversation? From my colleague’s view, the cloud today is about “how many virtual machines you need, how big they are, and linking those VMs to storage.” Not a very compelling picture, but it is his perception of the reality of the cloud today. His view of the future requirements is quite intriguing.

I took away six key issues that this advanced planner would like to see in the evolution of cloud computing:

One. Automation of placement of assets is critical. Where you actually put capability is critical. For example, there are certain workloads that should never leave the physical data center because of regulatory requirements. If an organization were dealing with huge amounts of data it would not be efficient to place elements of that data on different cloud environments. What about performance issues? What if a task needs to be completed in 10 seconds or what if it needs to be completed in 5 milliseconds? There are many decisions that need to be made based on corporate requirements. Should this decision on placement of workloads be made manually? The answer is no. There should be an automated process based on business rules that determines the actual placement of cloud services.

Two. Avoiding concentration of risk. How do you actually place core assets into a hypervisor? If, for example, you have a highly valuable set of services that are critical to decision makers you might want to ensure that they are run within different hypervisors based on automated management processes and rules.
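The first two issues can be sketched together as a rule-driven placement function: regulated workloads stay on premises, latency-sensitive workloads go to zones that can meet their deadlines, and critical services are spread across hypervisors to avoid concentrating risk. This is a toy illustration with invented names, not any vendor’s placement engine.

```python
def place_workload(workload, zones, critical_hosts):
    """Pick a zone for a workload by applying business rules in order.

    workload: dict with optional 'regulated', 'max_latency_ms', 'critical' keys.
    zones: list of dicts with 'name', 'on_premises', 'latency_ms', 'hypervisor'.
    critical_hosts: set of hypervisors already running critical services,
                    used to avoid concentrating risk on one hypervisor.
    """
    candidates = zones
    # Rule 1: regulated data must never leave the physical data center.
    if workload.get("regulated"):
        candidates = [z for z in candidates if z["on_premises"]]
    # Rule 2: respect performance requirements (a 10-second task and a
    # 5-millisecond task have very different placement options).
    if "max_latency_ms" in workload:
        candidates = [z for z in candidates
                      if z["latency_ms"] <= workload["max_latency_ms"]]
    # Rule 3: anti-affinity -- run critical services on hypervisors that are
    # not already hosting other critical services, when possible.
    if workload.get("critical"):
        spread = [z for z in candidates if z["hypervisor"] not in critical_hosts]
        candidates = spread or candidates
    if not candidates:
        raise ValueError("no zone satisfies the placement rules")
    return candidates[0]["name"]

zones = [
    {"name": "public-east", "on_premises": False, "latency_ms": 40, "hypervisor": "kvm-1"},
    {"name": "dc-rack-a", "on_premises": True, "latency_ms": 2, "hypervisor": "esx-1"},
    {"name": "dc-rack-b", "on_premises": True, "latency_ms": 3, "hypervisor": "esx-2"},
]
# A regulated, critical workload avoids the hypervisor already running critical work.
print(place_workload({"regulated": True, "critical": True}, zones, {"esx-1"}))  # dc-rack-b
```

The design choice worth noting is that the rules are ordinary data-driven code, so they can be audited and changed as corporate requirements change, rather than being buried in operator judgment.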

Three. Quality of Service needs a control fabric. If you are a customer of hybrid cloud computing services you might need access to the code that tells you what tasks the tool is actually doing. What does that tool actually touch in the cloud environment? What do the error messages mean and what is the implication? Today many cloud services are black boxes; there is no way for the customer to really understand what is happening behind the scenes. If companies are deploying truly hybrid environments that support a mixed workload, this type of access to the workings of the various tools that are monitoring and managing quality of service will be critical. From a quality of service perspective, some applications will require dedicated bandwidth to meet requirements. Other applications will not need any special treatment.

Four.  Cloud Service Providers building shared services need an architectural plan to control them as a unit of work. These services will be shared across departments as well as across customers.  How do you connect these services? While it might seem simple at the 50,000-foot level, it is actually quite complex because we are talking about linking a set of services together to build a coherent platform. Therefore, as with building any system there is a requirement to model the “system of services”, then deploy that model, and finally to reconcile and tune the results.

Five. Standard APIs protect customers.  Should APIs for all cloud services be published and accessible? If companies are to have the freedom to move easily and efficiently between and among cloud services then APIs need to be well understood. For example, a company may be using a vendor’s cloud service and discover a tool that addresses a specific problem.  What if that vendor doesn’t support that tool? In essence, the customer is locked out from using this tool. This becomes a problem immediately for innovators.  However, it is also an issue for traditional companies that begin to work with cloud computing services and over time realize that they need more service management and more oversight.

Six. Managing containers may be key to the service management of the cloud. A well-designed cloud service has to be service oriented. It needs to be placed in a container without dependencies since customers will use services in different ways. Therefore, each service needs to have a set of parameter-driven configurators so that the rules of usage and management are clear. What version of what cloud service should be used under what circumstance? What if the service is designed to execute backup? Can that backup happen across the globe or should it be done in proximity to those data assets? These management issues will become the most important issues for cloud providers in the future.
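A parameter-driven configurator of the kind described might look like the following sketch: the service carries its version and its rules of usage (here, where a backup is allowed to run) as explicit parameters rather than hidden assumptions. The names are hypothetical, chosen only to illustrate the idea.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceConfig:
    """Parameter-driven configurator: the rules of usage travel with the service."""
    service: str
    version: str
    data_locality: str      # e.g. "same-region": keep backups near the data
    allowed_regions: tuple  # regions where this version may execute

    def can_run_in(self, region: str) -> bool:
        """Answer 'may this service run here?' from the declared parameters."""
        return region in self.allowed_regions

# A backup service that must stay in proximity to its data assets.
backup = ServiceConfig(
    service="backup",
    version="2.1",
    data_locality="same-region",
    allowed_regions=("us-east",),
)
print(backup.can_run_in("us-east"))  # True
print(backup.can_run_in("eu-west"))  # False
```

Because the configuration is declarative, the answers to questions like "what version runs under what circumstance" become something a management tool can check automatically.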

The best thing about talking to people like this architect is that it begins to make you think about issues that aren’t part of today’s cloud discussions.  These are difficult issues to solve. However, many of these issues have been addressed for decades in other iterations of technology architectures. Yes, the cloud is a different delivery and deployment model for computing but it will evolve as many other architectures do. The idea of putting quality of service, service management, configuration and policy rules at the forefront will help to transform cloud computing into a mature and effective platform.



Predictions for 2010: clouds, mergers, social networks and analytics

December 15, 2009 7 comments

Yes, it is predictions time. Let me start by saying that no market change happens in a single year. Therefore, what is important is to look at the nuance of a market or a technology change in the context of its evolution. So, it is in this spirit that I will make a few predictions. I’ve decided to just list my top six predictions (I don’t like odd numbers). Next week I will add another five or six predictions.

  1. Cloud computing will move out of the fear, uncertainty and doubt phase to the reality phase for many customers. This means that large corporations will begin to move segments of their infrastructure and applications to the cloud. It will be a slow but steady movement. The biggest impact on the market is that customers will begin putting pressure on vendors to guarantee predictability, reliability, and portability.
  2. Service Management will become mainstream. Over the past five years the focus of service management has been around ITIL (Information Technology Infrastructure Library) processes and certification. There is a subtle change happening as corporations start to take a more holistic view of how everything that has a sensor, an actuator, or a computer interface is managed. Cloud computing will have a major impact on the growing importance of service management.
  3. Cloud service providers will begin to drop their prices dramatically as competition intensifies. This will be one of the primary drivers of growth of the use of cloud services. It will put a lot of pressure on smaller niche cloud providers as the larger companies try to gain control of this emerging market.
  4. It is not a stretch to state that the pace of technology acquisitions will accelerate in 2010.  I expect that HP, IBM, Cisco, Oracle, Microsoft, Google, and CA will be extremely active. While it would be foolhardy to pick a single area, I’ll go out on a limb and suggest that security, data center management, service management, and information management will be the focus of many of the acquisitions.
  5. Social Networking will become much more mainstream than it was in 2009. Marketers will finally realize that blatant sales pitches on Twitter or Facebook just won’t cut it. We will begin to see marketers learn how to integrate social networking into the fabric of marketing programs. As this happens there will be hundreds of new startups focused on analyzing the effectiveness of these marketing efforts.
  6. Information management is at the cusp of a major change. While the individual database remains important, the issue for customers is the need to manage information holistically so that they can anticipate change. As markets grow increasingly complex and competitive, the hottest products in 2010 will be those that help companies anticipate what will happen next. So expect anything with the term predictive analytics to be hot, hot, hot.

Can IBM become a business leader and a software leader?

November 23, 2009 3 comments

When I first started as an industry analyst in the 1980s IBM software was in dire straits. It was the era where IBM was making the transition from the mainframe to a new generation of distributed computing. It didn’t go really well. Even with thousands of smart developers working their hearts out, the first three forays into a new generation of software were an abysmal failure. IBM’s new architectural framework called SAA (Systems Application Architecture) didn’t work; neither did the first application built on top of it, called OfficeVision. Its first development framework, called Application Development Cycle (AD/Cycle), also ended up on the cutting room floor. Now fast forward 20 years and a lot has changed for IBM and its software strategy. While it is easy to sit back and laugh at these failures, they were also a signal to the market that things were changing faster than anyone could have expected. In the 1980s, the world looked very different — programming was procedural, architectures were rigid, and there were no standards except in basic networking.

My perspective on business is that embracing failures and learning from them is the only way to really have success in the future. Plenty of companies that I have worked with over my decades in the industry have made incredible mistakes in trying to lead the world. Most of them make those mistakes and keep making them until they crawl into a hole and die quietly. The companies I admire are the ones that make the mistakes, learn from them, and keep pushing. I’d put IBM, Microsoft, and Oracle in that category.

But I promised that this piece would be about IBM. I won’t bore you with more IBM history. Let’s just say that over the next 20 years IBM did not give up on distributed computing. So, where is IBM Software today? Since it isn’t time to write the book yet, I will tease you with the five most important observations that I have on where IBM is in its software journey:

1. Common components. If you look under the covers of the technology that is embedded in everything from Tivoli to Information Management and software development you will see common software components. There is one database engine; there is a single development framework, and a single analytics backbone.  There are common interfaces between elements across a very big software portfolio. So, any management capabilities needed to manage an analytics engine will use Tivoli components, etc.

2. Analytics rules. No matter what you are doing, being able to analyze the information inside a management environment or a packaged application can make the difference between success and failure. IBM has pushed information management to the top of the stack across its software portfolio. Since we are seeing increasing levels of automation in everything from cars to factory floors to healthcare equipment, collecting and analyzing this data is becoming the norm. This is where Information Management and Service Management come together.

3. Solutions don’t have to be packaged software. More than 10 years ago IBM made the decision that it would not be in the packaged software business. Even as SAP and Oracle continued to build their empires, IBM took a different path. IBM (like HP) is building solution frameworks that over time incorporate more and more best practices and software patterns. These frameworks are intended to work in partnership with packaged software. What’s the difference? Treat the packages like ERP as the underlying commodity engine and focus on the business value add.

4. Going cloud. Over the past few years, IBM has been making a major investment in cloud computing and has begun to release some public cloud offerings for software testing and development as a starting point. IBM is investing a lot in security and overall cloud management. Its CloudBurst appliance and packaged offerings are intended to be the opening salvo. In addition, and probably even more important, are the private clouds that IBM is building for its largest customers. Ironically, the growing importance of the cloud may actually be the salvation of the Lotus brand.

5. The appliance lives. Even as we look towards the cloud to wean us off of hardware, IBM is putting big bets on hardware appliances. It is actually a good strategy. Packaging all the piece parts onto an appliance that can be remotely upgraded and managed is a good sales strategy for companies cutting back on staff but still requiring capabilities.

There is a lot more that is important about this stage in IBM’s evolution as a company. If I had to sum up what I took away from this annual analyst software event, it is that IBM is focused on winning the hearts, minds, and dollars of the business leader looking for ways to innovate. That’s what Smarter Planet is about. Will IBM be able to juggle its place as a software leader with its push into business leadership? It is a complicated task that will take years to accomplish and even longer to assess.

Tectonic shifts: HP Plus 3Com versus Cisco Plus EMC

November 18, 2009 4 comments

Just when it looked clear where the markets were lining up around data center automation and cloud computing, things change. I guess that is what makes this industry so very interesting. The proposed acquisition by HP of 3Com is a direct challenge to Cisco’s network management franchise. However, the implications of this move go further than what meets the eye. It also pits HP in a direct path against EMC with its Cisco partnership. And to make things even more interesting, it also puts these two companies in a competitive three-way race against IBM and its cloud/data center automation strategy. And of course, it doesn’t stop there. Emerging players like Google and Amazon want a larger share of the enterprise market for cloud services. Companies like Unisys and CSC that have focused on outsourced secure data centers are getting into the act.

I don’t think that we will see a single winner — no matter what any one of these companies will tell you. The winners in this market shift will be those companies that can build a compelling platform and a compelling value proposition for a partner ecosystem. The truth about the cloud is that it is not simply a network or a data center. It is a new way of providing services of all sorts that can support changing customer workloads in a secure and predictable manner.

In light of this, what does this say for HP’s plans to acquire 3Com? If we assume that the network infrastructure is a key component of an emerging cloud and data center strategy, HP is taking a calculated risk in acquiring more assets in this market. HP has found that its ProCurve networking division has begun gaining traction. ProCurve, HP’s networking division, includes network switches, wireless access points, WAN routers, and access control servers and software. ProCurve competes directly with Cisco in the networking switch market. When HP had a tight partnership with Cisco, the company de-emphasized networking. However, once Cisco started to move into the server market, the handcuffs came off. The 3Com acquisition takes the competitive play to a new level. 3Com has a variety of good pieces of technology that HP could leverage within ProCurve. Even more significantly, it picks up a strong security product called TippingPoint, a 3Com acquisition. TippingPoint fills a critical hole in HP’s security offering with network security products including intrusion prevention and a product that inspects network packets. The former 3Com subsidiary has also established a database of security threats based on a network of external researchers.

But I think that one of the most important reasons that HP bought 3Com is its strong relationships in the Chinese market. In fiscal year 2008 half of 3Com’s revenue came from its H3C joint venture with the Chinese vendor Huawei. Therefore, it is not surprising that HP would have paid a premium to gain a foothold in this lucrative market. If HP is smart, it will do a good job leveraging 3Com’s many software assets to build out its networking portfolio as well as beefing up its software organization. In reality, HP is much more comfortable in the hardware market. Therefore, adding networking as a core competency makes sense. It will also bolster its position as a player in the high-end data center market and in the private cloud space.

Cisco, on the other hand, is coming from the network and moving aggressively into the cloud and the data center market. The company has purchased a position in VMware and has established a tight partnership with EMC as a go-to-market strategy. For Cisco, it gives the company credibility and access to customers outside of its traditional markets. For EMC, the Cisco relationship strengthens its networking play. But an even bigger value of the relationship is to present a bigger footprint to customers as they move to take on HP, IBM, and the assortment of other players who all want to win. The Cisco/EMC/VMware play is to focus on the private cloud. In their view a private cloud is very similar to a private, preconfigured data center. It can be a compelling value proposition for a customer that needs a data center fast without having to deal with a lot of moving parts. The key question from a cloud computing perspective: is this really a cloud?

It was inevitable that this quiet market dominated by Google and Amazon would heat up as the cloud becomes a real market force.  But I don’t expect that HP or Cisco/EMC will have a free run. They are being joined by IBM and Microsoft — among others. The impact could be better options for customers and prices that invariably will fall. The key to success for all of these players will be how well they manage what will be an increasingly heterogeneous, federated, and highly distributed hardware and software world. Management comes in many flavors: management of these highly distributed services and management of the workloads.

Bureaucracy gone mad: when process gets in the way of service management

November 3, 2009 Leave a comment

I had two interesting discussions over the past few weeks; one with an IT manager and the other with Rhett Glauser and Matt French from Service-Now. Both discussions related to the issue of managing service processes in complex computing environments. Let me start with the IT manager. He is charged with taking his organization’s web presence from 1990s architecture into a modern Web 2.0 design that will enable better support for customers and partners. It is a big effort with lots of interaction with the customer-facing departments about what they want and with the IT organization about how this new environment will be supported. Now, this part isn’t out of the ordinary and this is not what this manager was having problems with. He was being driven crazy by process. The company he works for is devoted to ITIL (Information Technology Infrastructure Library). ITIL is a set of best practices designed to help companies create environments that have a common way to troubleshoot problems with managing complex services. They are intended as guidelines – not step-by-step instructions about how to manage service processes. In fact, ITIL best practices mandate that you need to start with your strategy for managing services before you get involved in the details.

The IT manager’s problem is that his company’s IT department was so embroiled in process that it was causing excessive delays in getting to a solution. It has a Configuration Management Database (CMDB) — a repository for all of the details about an application environment including who can change something, how a service or an application is configured, and what the change management process is. This company’s problem is that it has set up a change review board that has to review and approve every change for the new environment. Therefore, something that should take a few days to develop is taking six months of endless meetings. In other words, the IT manager’s organization is so caught up in process that it is actually crippling its ability to get the job done. According to the IT manager, “It’s bureaucracy gone mad! This approach will not help make IT more responsive; it will do the opposite.”

I thought about the discussion in the context of a great call I had with Matt French, director of marketing and product strategy, and Rhett Glauser, communications manager at Service-Now, an IT service desk software as a service company. What did they think of my friend’s tale of woe? They agreed that this is a common perspective that they hear from customers. Many customers are beginning to understand that they have to take a pragmatic view of process. Their top recommendation was that companies should approach ITIL in a phased manner.

So, here are some recommendations about how to handle process in context with driving business value:

  • Establish a lightweight CMDB by only focusing on configuration items that the organization really needs. If a process isn’t likely to change, it might not be necessary to track that process. You don’t need a change management process for everything.
  • Get IT management to take a step back from relying too heavily on IT processes. Rather, management needs to focus on what is important to business management and then execute in a pragmatic way.
  • Every service should have a business owner who can make decisions.
  • When a change management process is required make sure that there is a change advisory board. There needs to be one person who has the authority to manage that change in the context of the business drivers. The change management board should expedite process and should not become a bottleneck.
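The recommendations above can be sketched as a tiny, deliberately lightweight CMDB: it tracks only items that are likely to change, gives each one a business owner, and records changes approved by a single accountable person rather than a review board. All names here are invented for illustration.

```python
class LightweightCMDB:
    """Track only configuration items that actually change, each with a business owner."""

    def __init__(self):
        self.items = {}

    def add_item(self, name, owner, likely_to_change=True):
        # Recommendation: if an item isn't likely to change, don't track it at all.
        if not likely_to_change:
            return False
        self.items[name] = {"owner": owner, "changes": []}
        return True

    def request_change(self, name, description, approver):
        # One person with authority approves; a board should expedite, not bottleneck.
        item = self.items[name]
        item["changes"].append({"description": description, "approved_by": approver})
        return len(item["changes"])

cmdb = LightweightCMDB()
cmdb.add_item("web-portal", owner="marketing-vp")
cmdb.add_item("static-legal-page", owner="legal", likely_to_change=False)  # not tracked
cmdb.request_change("web-portal", "enable Web 2.0 layout", approver="marketing-vp")
print(len(cmdb.items))  # 1
```

The design choice is the point: the cost of the CMDB is proportional to what genuinely changes, so the process serves the business rather than the other way around.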

In the end it is about common sense. If IT organizations are going to be effective in managing business requirements they have to look at service management in the context of the overall priorities of the business. This was the key message our team was aiming for when we wrote Service Management for Dummies. Service management is increasingly defining not only how we manage IT environments but how we manage businesses. Therefore, a streamlined view of process management will be the difference between success and failure.

Is cloud security really different than data center security?

October 30, 2009 7 comments

Almost every conversation I have had over the past year or so always comes back to security in the cloud. Is it really secure? Or: we are thinking about implementing the cloud but we are worried about security. There are, of course, good reasons to plan a cloud security strategy. But in a sense, it is no different than planning a security strategy for your company. But it is the big scary cloud! Well, before I list the top issues I would like to say one thing: if you think you need an entirely different security strategy for the cloud, you may not have a comprehensive security strategy to start with. Yes, you have to make sure that your cloud provider has a sophisticated approach to security. However, what about your Internet service provider? What about the level of security within your own IT department? Can you throw stones if you live in a glass house (yes, that is a pun…sorry)? So, before you start fretting about security in the cloud, get your own house in order. Do you have an identity management plan? Do you ensure that one individual within the data center can’t control all of the data within a single environment, to minimize risks? If you don’t have a well-executed internal security plan, you aren’t ready for the cloud. But let’s say that you have fixed that problem and you are ready to really plan your cloud security strategy. So, here are five of the issues to consider. If you have others, let’s start a conversation.


1. You need to start at the beginning with understanding the characteristics of your cloud provider. Is the company well funded? Is its data center designed with security at the center? Your level of scrutiny will also depend on how you are using the cloud. If you are using Infrastructure as a Service for a short term project there is less risk than if you are planning to use a cloud to store important customer data.

2. How is your cloud provider implementing security in a multi-tenant environment? How do they ensure that one customer’s data doesn’t impact another customer’s data?

3. Does your cloud provider give you the ability to monitor security of your data in the cloud? This will be important both for compliance and to keep track of your own security policies.

4. Does your cloud provider encrypt your critical data? If not, why not?

5. Does your cloud provider give you the ability to control who is allowed to access your information based on roles and authorization? Does the cloud provider support federated identity management? These are basic security best practices.
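To make item 5 concrete, here is a minimal sketch of what a role-based access check looks like. The roles, permissions, and function name are illustrative assumptions, not any particular cloud provider's API:

```python
# A bare-bones role-based access control (RBAC) check.
# Roles and permissions here are hypothetical examples.

ROLE_PERMISSIONS = {
    "admin": {"read", "write", "delete"},
    "analyst": {"read"},
    "auditor": {"read"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True if the given role is authorized for the action."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("admin", "delete"))   # True: admins hold the delete permission
print(is_allowed("analyst", "write"))  # False: analysts may only read
```

The point of asking a provider about roles and federated identity is exactly this: someone, somewhere, has to maintain a mapping like the one above, and you want to know whether it is you or the provider, and how it is enforced.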

Now you are probably saying to yourself that this isn't rocket science. These are fundamental security approaches that any data center should follow. I recommend that you take a look at a great document published by the Cloud Security Alliance that details many of the key issues surrounding security in the cloud. So, I guess my principal message is that cloud security is no different than security in any data center. But the market does not seem to understand this, because the perception is that a cloud is somehow not a data center that can be secured with regular old security.

I think that we will see something interesting happen because of this perception: cloud vendors will begin to charge a premium for really good security. In fact, this is already happening. Vendors like Amazon and Salesforce are offering segregated implementations of their environments to customers who don't trust their ordinary security approaches. This will work in the short term, primarily because during this early phase of the cloud there is not enough focus on security. Long term, as the market matures, cloud vendors will have to demonstrate their ability to provide a secure environment based on basic security best practices. In the meantime, cloud vendors will rake in the cash for premium secure cloud services.

What are the Unanticipated Consequences of the Cloud – Part II

October 29, 2009 9 comments

As I was pointing out yesterday, there are many unintended consequences from any emerging technology platform — the cloud will be no exception. So, here are my next three picks for unintended consequences from the evolution of cloud computing:

4. The cloud will disrupt traditional computing sales models. I think that Larry Ellison is right to rant about Cloud Computing. He is clearly aware that if cloud computing becomes the preferred way for customers to purchase software, the traditional model of paying maintenance on applications will change dramatically.  Clearly, vendors can simply roll the maintenance stream into the per user per month pricing. However, as I pointed out in Part I, prices will inevitably go down as competition for customers expands. There will come a time when the vast sums of money collected to maintain software versions will seem a bit old fashioned. In fact, that will be one of the most important unintended consequences and will have a very disruptive effect on the economic models of computing. It has the potential to change the power dynamics of the entire hardware and software industries. The winners will be the customers and the smart vendors who figure out how to make money without direct maintenance revenue. Like every other unintended consequence, new models will emerge that make some really clever vendors very successful. But don't ask me what they are. It is just too early to know.

5. The market for managing cloud services will boom. While service management vendors do pretty well today managing data center based systems, the cloud environment will make these vendors king of the hill.  Think about it like this. You are a company that is moving to the cloud. You have seven different software as a service offerings from seven different vendors. You also have a small private cloud that you use to provision critical customer data. You also use a public cloud for some large scale testing. In addition, any new software development is done with a public cloud and then moved into the private cloud when it is completed. Existing workloads like ERP systems and legacy systems of record remain in the data center. All of these components put together are the enterprise computing environment. So, what is the service level of this composite environment? How do you ensure that you are compliant across these environments? Can you ensure security and performance standards? A new generation of products, and maybe a new generation of vendors, will rake in a lot of cash solving this one.
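The composite service level question has simple but sobering arithmetic behind it: if the end-to-end service depends on every component being up, the availabilities multiply. A back-of-the-envelope sketch, where every availability figure is a made-up illustrative assumption rather than any real vendor's SLA:

```python
# If a composite service depends on every component being up,
# end-to-end availability is the product of the parts.
# All figures below are illustrative, not from any real provider.

component_availability = {
    "saas_crm": 0.999,          # one of the seven SaaS offerings
    "saas_hr": 0.999,
    "private_cloud": 0.9995,    # critical customer data
    "public_cloud_test": 0.995, # large scale testing
    "data_center_erp": 0.9999,  # legacy system of record
}

composite = 1.0
for avail in component_availability.values():
    composite *= avail

print(f"Composite availability: {composite:.4%}")
# Each part looks fine on its own, yet the combined figure falls
# below the 99.9% ("three nines") any single vendor might promise.
```

This is why managing the composite environment is a real product opportunity: no individual vendor's SLA tells you what the business actually experiences.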

6. What will processes look like in the cloud? Like data, processes will have to be decoupled from the applications of record that they are now an integral part of. Now, I don't expect that we will rip processes out of every system of record. In fact, static systems such as ERP, HR, etc. will have tightly integrated processes. However, the dynamic processes that need to change as the business changes will have to be designed without these constraints. They will become trusted processes — sort of like business services that are codified but can be reconfigured when the business model changes.  This will probably happen anyway with the emergence of Service Oriented Architectures. However, with the flexibility of the cloud environment, this trend will accelerate. The need for independent processes and process models has the potential to create a brand new market.

I am happy to add more unintended consequences to my top six. Send me your comments and we can start a part III reflecting your ideas.

Has Service Management become Business Management?

March 22, 2009 7 comments

Having just completed Service Management for Dummies (scheduled to be in book stores in June), I have taken a step back to think about what I learned from the process. When our team first started the research process, a lot of people I talked to wanted to know if we were writing a book about ITIL 3.0 best practices. The answer is that of course we covered ITIL 3.0 best practices. However, as part of our research and in-depth discussions with customers, it became apparent that there is something bigger happening here that transcends IT.  I am not sure that this issue has been noticed out there in the world of service management, but it is real and encouraging.  Corporate management is beginning to notice that much of their physical infrastructure and the components that are the essence of their corporate existence are technology enabled.  The X-ray that used to be captured on a piece of film and kept in a file cabinet is now digitized. The automobile is now managed by sensors and other computers. Security of physical buildings is computerized. The factory floor is a complex system. Of course, I could go on for months with lists that include RFID and the like. But I think I have made the point: increasingly, everything must be thought of as a system, not just the servers and desktops and networks that sit in the data center.

In my view, this is why the service management arena is getting to be so exciting. Many of the CIOs that our team interviewed for Service Management for Dummies echoed this level of excitement.  These executives are finding that applying service management principles to both the physical and IT worlds is transformational. It means that organizations gain a greater ability to manage their companies from a holistic perspective.

In the book, our team uses the example of the ATM to make the point. The ATM is a relatively simple automated device that requires a matching of a customer number with an ID code. It requires that a request for cash from the consumer be matched with the availability of funds from that bank or one of its partner banks. It requires the ability to do the accounting to provide the customer with a receipt that states how much money was withdrawn and how much is left in the account.  And there is more! Behind that customer action that might take all of 5 seconds is a huge infrastructure: a data center, a security infrastructure, a sensor that detects if the machine itself is experiencing a problem. There is a network of trucks managed by a third party company that ensures cash is delivered to replenish the ATM. There are even more parts to this world that I am not mentioning — so forgive me. But what is most interesting is that all of these mini-ecosystems are intertwined. What if the bank's management decides to save money by selecting a new cash delivery network? This company promises great service at a fraction of the cost. To save money, the bank goes with the new service, only to discover that its drivers are unreliable and cash is often not delivered in a timely manner.  Even if the ATM network works well, the data center is flawless, and the security is solid, the bank is not able to deliver satisfaction to its customers because there is no cash.

The bottom line is that service management is becoming a corporate issue — not just an IT issue. The secret to service management is about the customer, partner, supplier, and employee experience. Like every other technology transformation over the past couple of decades, mature technology initiatives become management initiatives. Increasingly, service management is being tied to the key performance indicators of the business. Therefore, it is imperative that IT management understand the goals of corporate management as well as the needs of internal and external customers.

My Top Eleven Predictions for 2009 (I bet you thought there would be only ten)

November 14, 2008 11 comments

What a difference a year makes. The past year was filled with a lot of interesting innovations and market shifts. For example, Software as a Service went from being something for small companies or departments within large ones to a mainstream option.  Real customers are beginning to solve real business problems with service oriented architecture.  The latest hype is around Cloud Computing – after all, the software industry seems to need hype to survive. As we look forward into 2009, it is going to be a very different and difficult year, but one that will be full of surprising twists and turns.  Here are my top predictions for the coming year.
One. Software as a Service (SaaS) goes mainstream. It isn't just for small companies anymore. While this has been happening slowly and steadily, it is rapidly becoming mainstream because, with the dramatic cuts in capital budgets, companies are going to fulfill their needs with SaaS.  While companies like SalesForce.com have been the successful pioneers, the big guys (like IBM, Oracle, Microsoft, and HP) are going to make a major push for dominance and strong partner ecosystems.
Two. Tough economic times favor the big and stable technology companies. Yes, these companies will trim expenses and cut back like everyone else. However, customers will be less willing to bet the farm on emerging startups with cool technology. The only way emerging companies will survive is to do what I call "follow the pain". In other words, come up with compelling technology that solves really tough problems that others can't. They need to fill the white space that the big vendors have not yet filled. The best option for emerging companies is to use this time, when people will be hiding under their beds, to get aggressive and show value to customers and prospects. It is best to shout when everyone else is quiet. You will be heard!
Three.  The Service Oriented Architecture market enters the post-hype phase. This is actually good news. We have had in-depth discussions with almost 30 companies for the second edition of SOA for Dummies (coming out December 19th). They are all finding business benefit from the transition. They all view SOA as a journey – not a project.  So, there will be less noise in the market but more good work getting done.
Four. Service Management gets hot. This has long been an important area, whether companies were looking at automating data centers or managing processes tied to business metrics.  So, what is different? Companies are starting to seriously plan a service management strategy tied both to customer experience and satisfaction. They are tying this objective to their physical assets, their IT environment, and their business processes across the company. There will be vendor consolidation and a lot of innovation in this area.
Five. The desktop takes a beating in a tough economy. When times get tough companies look for ways to cut back and I expect that the desktop will be an area where companies will delay replacement of existing PCs. They will make do with what they have or they will expand their virtualization implementation.
Six. The Cloud grows more serious. Cloud computing has actually been around since the early time-sharing days, if we are to be honest with each other.  However, emerging technologies like multi-tenancy make this approach to shared resources different. Just as companies are moving to SaaS for economic reasons, companies will move to clouds with the same goal – decreasing capital expenditures.  Companies will have to gain an understanding of the impact of trusting a third party provider. Performance, scalability, predictability, and security are not guaranteed just because some company offers a cloud. Service management of the cloud will become a key success factor. And there will be plenty of problems to go around next year.
Seven. There will be tech companies that fail in 2009. Not all companies will make it through this financial crisis.  Even large companies with cash will be potentially on the failure list.  I predict that Sun Microsystems, for example, will fail to remain intact.  I expect that company will be broken apart.  It could be that the hardware assets could be sold to its partner Fujitsu while pieces of software could be sold off as well.  It is hard to see how a company without a well-crafted software strategy and execution model can remain financially viable. Similarly, companies without a focus on the consumer market will have a tough time in the coming year.
Eight. Open Source will soar in this tight market. Open Source companies are in a good position in this type of market—with a caveat.  There is a danger for customers to simply adopt an open source solution unless there is a strong commercial support structure behind it. Companies that offer commercial open source will emerge as strong players.
Nine.  Software goes vertical. I am not talking about packaged software. I anticipate that more and more companies will begin to package everything based on a solutions focus. Even middleware, data management, security, and process management will be packaged so that customers will spend less time building and more time configuring. This will have an impact in the next decade on the way systems integrators will make (or not make) money.
Ten. Appliances become a software platform of choice for customers. Hardware appliances have been around for a number of years and are growing in acceptance and capability.  This trend will accelerate in the coming year.  The most common solutions used with appliances include security, storage, and data warehousing. The appliance platform will expand dramatically this coming year.  More software solutions will be sold with prepackaged solutions to make the acceptance rate for complex enterprise software easier.

Eleven. Companies will spend money on anticipation management. Companies must be able to use their information resources to understand where things are going. Being able to anticipate trends and customer needs is critical.  Therefore, one of the bright spots this coming year will be the need to spend money getting a handle on data.  Companies will need to understand not just what happened last year but where they should invest for the future. They cannot do this without understanding their data.

The bottom line is that 2009 will be a complicated year for software.  Many companies without a compelling solution to customer pain will, and should, fail. The market favors safe companies. As in any down market, some companies will focus on avoiding any risk and waiting. The smart companies – both providers and users of software – will take advantage of the rough market to plan for innovation and success when things improve – and they always do.