Archive for the ‘systems management’ Category

Predictions for 2011: getting ready to compete in real time

December 1, 2010

2010 was a transition year for the tech sector. It was the year when the cloud suddenly began to look realistic to the large companies that had scorned it. It was the year when social media suddenly became serious business. And it was the year when hardware and software were united as a platform – something like in the old mainframe days, but different because of high-level interfaces and modularity. Important trends also began to emerge, such as the importance of managing information both across the enterprise and among partners and suppliers. Competition for ownership of the enterprise software ecosystem heated up, as did competition for leadership of the emerging cloud computing ecosystem.

So, what do I predict for this coming year? While at the outset it might look like 2011 will be a continuation of what has been happening this year, I think there will be some important changes that will impact the world of enterprise software for the rest of the decade.

First, I think it is going to be a very big year for acquisitions. Now I have said that before, and I will say it again. The software market is consolidating around major players that need to fill out their software infrastructure in order to compete. It will come as no surprise if HP begins to purchase software companies as it moves to compete with IBM and Oracle on the software front. But IBM, Oracle, SAP, and Microsoft will not sit still either. All of these companies will purchase the incremental technology companies they need to compete and expand their share of wallet with their customers.

This will be a transitional year for up-and-coming players like Google, Amazon, Netflix, Salesforce.com, and others that haven’t hit the radar yet. These companies are plotting their own strategies to gain leadership, and they will continue to push the boundaries in search of dominance. As they move upmarket and grab share, they will face the familiar problem of supporting customers who will expect them to act like adults.

Customer support, in fact, will bubble to the top of the issues for emerging as well as established companies in the enterprise space – especially as cloud computing becomes a well-established distribution and delivery platform for computing. All of these companies, whether well established or startups, will have to balance the requirement to provide sophisticated customer support with the need to make a profit. This will impact everything from license and maintenance revenue to how companies charge for consulting and support services.

But what will customers be looking for in 2011? Customers are always looking to reduce their IT expenses – that is a given. However, the major change in 2011 will be the need to innovate around customer-facing initiatives. Of course, the idea of focusing on customer-facing software isn’t new, but there are some subtle changes. The new initiatives are based on leveraging social networking, in a secure way, both to drive business traffic and to anticipate customer needs and issues before they become problems. Companies will spend money innovating on customer relationships.

Cloud computing is the other big issue for 2011. While it was clearly a major differentiator in 2010, the cloud will take an important leap forward in 2011. While companies were testing the water this year, next year they will be looking for best practices in cloud computing. 2011 will be the year when customers focus on three key issues: data integration across public clouds, private clouds, and data centers; manageability, in terms of both workload optimization and overall performance; and security. The vendors that can demonstrate the right level of service across cloud-based services will win significant business. These vendors will increasingly focus on expanding their partner ecosystems as a way to lock customers into their cloud platforms.

Most importantly, 2011 will be the year of analytics. The technology industry continues to produce data at a pace never seen before. But what can we do with this data? What does it mean for organizations’ ability to make better business decisions and to prepare for an unpredictable future? The traditional data warehouse is simply too slow to be effective. 2011 will be the year when predictive analytics and information management overall emerge as among the hottest and most important initiatives.

Now I know that we all like lists, so I will take what I’ve just said and distill it into my top ten predictions:

1. Both today’s market leaders and upstarts are going to continue to acquire assets to become more competitive. Many emerging startups will be scooped up before they see the light of day. At the same time, almost as many new startups will emerge as we saw in the dot-com era.

2. Hardware will continue to evolve in a new way. The market will move away from hardware as a commodity. In 2011 the hardware platform will be differentiated based on software and packaging. 2011 will be the year of smart hardware packaged with enterprise software, often as appliances.

3. Cloud computing models will put extreme pressure on everything from software license and maintenance pricing to customer support. Integration between different cloud computing models will be front and center. The cloud model is moving out of risk-averse pilots into serious deployments. Best practices will emerge as a major issue for customers that see the cloud as a way to boost innovation and the rate of change.

4. Managing highly distributed services in a compliant and predictable manner will take center stage. Service management and service level agreements across cloud and on-premises environments will become a prerequisite for buyers.

5. Security software will be redefined by the challenges of customer-facing initiatives and the need to more aggressively open the corporate environment to support a constantly morphing relationship with customers, partners, and suppliers.

6. The fear of lock-in will reach a fever pitch in 2011. SaaS vendors will increasingly add functionality to tighten their grip on customers. Traditional vendors will purchase more of the components needed to support the lifecycle needs of customers. How can everything be integrated from a business process and data integration standpoint and still allow for portability? Today, the answers are not there.

7. The definition of an application is changing. The traditional view that the packaged application is hermetically sealed is going away. More of the new packaged applications will be built from service-oriented components that embody best practices. These applications will be parameter-driven so that they can be changed in real time (see the short sketch after this list). And yes, Service Oriented Architecture (SOA) didn’t die after all.

8. Social networking grows up and becomes business social networking. These initiatives will be driven by line-of-business executives as a way to engage with customers and employees, gain insight into trends, and fix problems before they become widespread. Companies will leverage social networking to enhance agility and enable new business models.

9. Managing endpoints will be one of the key technology drivers in 2011. Smartphones, sensors, and tablet computers are redefining what computing means. This will drive the requirement for a new approach to role- and process-based security.

10. Data management and predictive analytics will explode based on both the need to understand traditional information and the need to manage data coming from new sales and communications channels.
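
To make prediction seven a bit more concrete, here is a minimal sketch, in Python, of what a parameter-driven application can look like: the business rule lives in a parameter store rather than in the code, so changing a parameter changes behavior in real time without a redeploy. The parameter names and values are my own invented illustration, not any vendor’s design.

```python
# A hypothetical parameter store; in a real packaged application this
# would be an external configuration service, not a module-level dict.
PARAMETERS = {
    "credit.approval_threshold": 10_000,
    "credit.require_manual_review": False,
}

def set_parameter(key, value):
    """Simulates an administrator changing a rule while the system runs."""
    PARAMETERS[key] = value

def approve_credit(amount):
    """A packaged business service whose logic is driven by parameters."""
    if PARAMETERS["credit.require_manual_review"]:
        return "route-to-reviewer"
    if amount <= PARAMETERS["credit.approval_threshold"]:
        return "approved"
    return "declined"

print(approve_credit(8_000))                        # approved
set_parameter("credit.approval_threshold", 5_000)   # rule change, no redeploy
print(approve_credit(8_000))                        # declined
```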

The bottom line is that 2011 will be the year when the seeds that have been planted over the last few years are ready to become the drivers of a new generation of innovation and business change. Put together everything from the flexibility of service orientation, business process management innovation, and the widespread impact of social and collaborative networks, to the new delivery and deployment models of the cloud. Now apply the tools to harness these environments: service management, new security platforms, and analytics. From my view, innovative companies are grabbing the threads of technology and focusing on outcomes. 2011 is going to be an important transition year. The corporations that get this right and transform themselves so that they are ready to change on a dime can win – even if they are smaller than their competitors.

What will it take to achieve great quality of service in the cloud?

November 9, 2010

You know that a market is about to transition out of its early fantasy phase when IT architects begin talking about traditional IT requirements. Why do I bring this up? I had a fascinating conversation yesterday with a leading architect in charge of the cloud strategy for an important company that is typically on the bleeding edge of technology. Naturally, I am not allowed to name the company or the person. But let me just say that individuals and companies like this are the first to grapple with issues such as the need for a registry for web services or the complexity of creating business services that are both reusable and embody business best practices. They are the first companies to try out artificial intelligence to see whether it can automate tasks that require complex reasoning.

These innovators tend to get blank stares from their cohorts in traditional IT departments, who are grappling with mundane issues such as keeping systems running efficiently. Leading-edge companies have the luxury of pushing the bounds of what is possible. There is a tremendous amount to be learned from their experiments with technology. In fact, there is often more to be learned from their failures than from their successes, precisely because they are pushing the boundary of what is possible with current technology.

So, what did I take away from my conversation? From my colleague’s view, the cloud today is about “how many virtual machines you need, how big they are, and linking those VMs to storage.” Not a very compelling picture, but that is his perception of the reality of the cloud today. His view of the future requirements is quite intriguing.

I took away six key issues that this advanced planner would like to see in the evolution of cloud computing:

One. Automation of asset placement is critical. Where you actually put capability matters. For example, there are certain workloads that should never leave the physical data center because of regulatory requirements. If an organization is dealing with huge amounts of data, it would not be efficient to scatter elements of that data across different cloud environments. What about performance issues? What if a task needs to be completed in 10 seconds, or in 5 milliseconds? There are many decisions that need to be made based on corporate requirements. Should the placement of workloads be decided by hand, case by case? The answer is no. There should be an automated process, based on business rules, that determines the actual placement of cloud services.
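
To make the idea concrete, here is a minimal sketch of what such a rule-driven placement process might look like. The rules, thresholds, and zone names below are illustrative assumptions on my part, not any vendor’s actual placement engine; a real one would weigh many more corporate requirements.

```python
def place(workload):
    """Evaluate business rules in order; the first match decides placement."""
    if workload.get("regulated"):
        return "on-premises"         # regulated data never leaves the data center
    if workload.get("latency_ms", 1000) <= 5:
        return "on-premises"         # millisecond deadlines need local placement
    if workload.get("data_tb", 0) > 50:
        return "private-cloud"       # keep very large data sets near their storage
    return "public-cloud"            # everything else is free to float

jobs = [
    {"name": "payroll", "regulated": True},
    {"name": "trade-match", "latency_ms": 5},
    {"name": "web-frontend", "latency_ms": 200},
]
for job in jobs:
    print(job["name"], "->", place(job))
```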

Two. Avoiding concentration of risk. How do you actually place core assets on a hypervisor? If, for example, you have a highly valuable set of services that are critical to decision makers, you might want to ensure that they run on different hypervisors, governed by automated management processes and rules.
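
Here is a toy illustration of that anti-affinity rule: critical services are spread round-robin across hypervisors so that no single host failure takes out the whole set. A real scheduler would also weigh capacity and policy; this sketch only shows the spreading rule, and the service and host names are invented.

```python
from itertools import cycle

def spread(services, hypervisors):
    """Round-robin critical services across hosts to avoid concentration of risk."""
    assignment = {h: [] for h in hypervisors}
    hosts = cycle(hypervisors)
    for svc in services:
        assignment[next(hosts)].append(svc)
    return assignment

critical = ["pricing", "risk-calc", "exec-dashboard", "fraud-check"]
print(spread(critical, ["hv-a", "hv-b", "hv-c"]))
# {'hv-a': ['pricing', 'fraud-check'], 'hv-b': ['risk-calc'], 'hv-c': ['exec-dashboard']}
```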

Three. Quality of Service needs a control fabric. If you are a customer of hybrid cloud computing services, you might need access to the code that tells you what tasks a tool is actually doing. What does that tool actually touch in the cloud environment? What do the error messages mean, and what are their implications? Today many cloud services are black boxes; there is no way for the customer to really understand what is happening behind the scenes. If companies are deploying truly hybrid environments that support a mixed workload, this type of access to the workings of the various tools that monitor and manage quality of service will be critical. From a quality of service perspective, some applications will require dedicated bandwidth to meet requirements. Other applications will not need any special treatment.

Four. Cloud Service Providers building shared services need an architectural plan to control them as a unit of work. These services will be shared across departments as well as across customers. How do you connect these services? While it might seem simple at the 50,000-foot level, it is actually quite complex, because we are talking about linking a set of services together to build a coherent platform. Therefore, as with building any system, there is a requirement to model the “system of services,” then deploy that model, and finally reconcile and tune the results.

Five. Standard APIs protect customers. Should APIs for all cloud services be published and accessible? If companies are to have the freedom to move easily and efficiently between and among cloud services, then APIs need to be well understood. For example, a company may be using a vendor’s cloud service and discover a tool that addresses a specific problem. What if that vendor doesn’t support that tool? In essence, the customer is locked out of using it. This becomes a problem immediately for innovators. However, it is also an issue for traditional companies that begin to work with cloud computing services and over time realize that they need more service management and more oversight.
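
A small sketch of why published, standard APIs matter: if every provider exposes the same operations, a tool written once runs against any of them, and the lock-out problem above goes away. The vendor classes and method names here are purely hypothetical.

```python
from abc import ABC, abstractmethod

class CloudAPI(ABC):
    """The published surface that a third-party tool can rely on."""
    @abstractmethod
    def start_instance(self, image: str) -> str: ...
    @abstractmethod
    def stop_instance(self, instance_id: str) -> None: ...

class VendorA(CloudAPI):
    def start_instance(self, image): return f"a-{image}-001"
    def stop_instance(self, instance_id): print("A stopped", instance_id)

class VendorB(CloudAPI):
    def start_instance(self, image): return f"b-{image}-xyz"
    def stop_instance(self, instance_id): print("B stopped", instance_id)

def run_batch(cloud: CloudAPI):
    """A tool that knows only the standard API, so it is not locked in."""
    vm = cloud.start_instance("batch-worker")
    cloud.stop_instance(vm)

run_batch(VendorA())   # the same tool...
run_batch(VendorB())   # ...works against either provider
```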

Six. Managing containers may be key to the service management of the cloud. A well-designed cloud service has to be service oriented. It needs to be placed in a container without dependencies, since customers will use services in different ways. Therefore, each service needs to have a set of parameter-driven configurators so that the rules of usage and management are clear. What version of what cloud service should be used under what circumstances? What if the service is designed to execute backups? Can that backup happen anywhere across the globe, or should it be done in proximity to the data assets? These management issues will become the most important issues for cloud providers in the future.
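
One way to picture such a parameter-driven configurator is a descriptor that travels with the service: the descriptor, not the code, answers which version may run where and whether work must stay near its data. The field names below are my own invention, a sketch of the idea rather than any provider’s format.

```python
from dataclasses import dataclass

@dataclass
class ServiceDescriptor:
    name: str
    version: str
    keep_near_data: bool       # e.g., a backup service must run close to its data
    allowed_regions: tuple     # where this version is permitted to execute

def choose_region(svc: ServiceDescriptor, data_region: str) -> str:
    """Apply the usage rules carried by the descriptor."""
    if svc.keep_near_data:
        if data_region not in svc.allowed_regions:
            raise ValueError(f"{svc.name} may not run in {data_region}")
        return data_region
    return svc.allowed_regions[0]

backup = ServiceDescriptor("backup", "2.1", keep_near_data=True,
                           allowed_regions=("eu-west", "eu-central"))
print(choose_region(backup, "eu-west"))   # backup stays in proximity to its data
```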

The best thing about talking to people like this architect is that it begins to make you think about issues that aren’t part of today’s cloud discussions.  These are difficult issues to solve. However, many of these issues have been addressed for decades in other iterations of technology architectures. Yes, the cloud is a different delivery and deployment model for computing but it will evolve as many other architectures do. The idea of putting quality of service, service management, configuration and policy rules at the forefront will help to transform cloud computing into a mature and effective platform.

Eight things that changed since we wrote Cloud Computing for Dummies

October 8, 2010

I admit that I haven’t written a blog in more than three months — but I do have a good reason. I just finished writing my latest book — not a Dummies book this time. It will be my first business book, based on almost three decades in the computer industry. Once I know the publication date I will tell you a lot more about it. But as I was finishing this book I was thinking about my last book, Cloud Computing for Dummies, which was published almost two years ago. As this anniversary approaches I thought it was appropriate to take a look back at what has changed. I could probably go on for quite a while about how little information was available at that point and how few CIOs were willing to talk about or even consider cloud computing as a strategy. But that’s old news. I decided that it would be most interesting to focus on eight of the changes that I have seen in this fast-moving market over the past two years.

Change One: IT is now on board with cloud computing. Cloud computing has moved from a reaction to sluggish IT departments to a business strategy involving both business and technology leaders. A few years ago, business leaders were reading about Amazon and Google in business magazines. They knew little about what was behind the hype. They focused on the fact that these early cloud pioneers seemed to be efficient at making cloud capability available on demand. No paperwork and no waiting for the procurement department to process an order. Two years ago IT leaders tried to pretend that cloud computing was a passing fad that would disappear. Now I am finding that IT is treating cloud computing as a centerpiece of its future strategy — even if it is only testing the waters.

Change Two: enterprise computing vendors are all in with both private and public cloud offerings. Two years ago most traditional IT vendors did not pay much attention to the cloud. Today, most hardware, software, and services vendors have jumped on the bandwagon. They all have cloud computing strategies. Most of these vendors are clearly focused on a private cloud strategy. However, many are beginning to offer specialized public cloud services with a focus on security and manageability. These vendors are melding all types of cloud services — public, private, and hybrid — into interesting and sometimes compelling offerings.

Change Three: Service Orientation will make cloud computing successful. Service Orientation was hot two years ago. The huge hype behind cloud computing led many pundits to proclaim that Service Oriented Architecture was dead and gone. In fact, the cloud vendors that are succeeding are those building true business services, without dependencies, that can migrate between public, private, and hybrid clouds; that portability gives them a competitive advantage.

Change Four: System Vendors are banking on integration. Does a cloud really need hardware? Only two years ago the dialog centered on the contention that clouds meant no hardware would be necessary. What a difference a few years can make. The emphasis, coming primarily from the major systems vendors, is that hardware indeed matters. These vendors are integrating cloud infrastructure services with their hardware.

Change Five: Cloud Security takes center stage. Yes, cloud security was a huge topic two years ago, but the dialog is beginning to change. There are three conversations that I am hearing. First, cloud security is a huge issue that is holding back widespread adoption. Second, there are well-designed software and hardware offerings that can make cloud computing safe. Third, public clouds are just as secure as an internal data center because these vendors have more security experts than any traditional data center. In addition, a large number of venture-backed cloud security companies are entering the market with new and quite compelling value propositions.

Change Six: Cloud Service Level Management is a primary customer concern. Two years ago, no one our team interviewed for Cloud Computing for Dummies connected service level management with cloud computing. Now that customers are seriously planning for widespread adoption of cloud computing, they are seriously examining the level of service they will require from it. IT managers are reading the service level agreements from public cloud vendors and Software as a Service vendors carefully. They are looking beyond the service level for a single service and beginning to think about the overall service level across their own data centers as well as the other cloud services they intend to use.

Change Seven: IT cares most about service automation. No, automation in the data center is not new; it has been an important consideration for years. However, what is new is that IT management is looking at the cloud not just as a way to avoid the costs of purchasing hardware. They see the automation of both routine functions and business processes as the primary benefit of cloud computing. In the long run, IT management intends to focus on automation and reduce hardware to interchangeable commodities.

Change Eight: Cloud computing moves to the front office. Two years ago IT and business leaders saw cloud computing as a way to improve back office efficiency. This is beginning to change. With the flexibility of cloud computing, management is now looking at the potential to quickly innovate business processes that touch partners and customers.

Oracle + Sun: Five questions to ponder

January 27, 2010

I spent a couple of hours today listening to Oracle talk about the long-awaited integration with Sun Microsystems. A real end of an era and beginning of a new one. What does this mean for Oracle? Whatever you might think about Oracle, you have to give the company credit for successfully integrating the 60 companies it has purchased over the past few years. Having watched hundreds and perhaps thousands of acquisitions over the last few decades, I can say that integration is hard. There are overlapping technologies, teams, cultures, and egos. Oracle has successfully managed to leverage the IP from its acquisitions to support its business goals. For example, it has kept packaged software customers happy by improving the software. PeopleSoft customers, for example, were able to continue to use the software they had become dependent on in much the same way as before the acquisition. In some cases, the quality of the software actually improved dramatically. The path has been more complicated with the various middleware and infrastructure platforms the company has acquired over the years because of overlapping functionality.

The acquisition of Sun Microsystems is the biggest game changer for Oracle since the acquisition of PeopleSoft. There is little doubt that Sun has significant software and hardware IP that will be very important in defining Oracle in the 21st century. But I don’t expect this to be a simple journey. Here are the five key issues that I think will be tricky for Oracle to navigate. Obviously, this is not a complete list but it is a start.

Issue One: Can Oracle recreate the mainframe world? The mainframe is dead — long live the mainframe. Oracle has a new fondness for the mainframe and what that model could represent. So, if you combine Sun’s hardware, networking layer, storage, security, packaged applications, and middleware into a single package, do you get to own the total share of a customer’s wallet? That is the idea. Oracle management has determined that IBM had the right idea in the 1960s — everything was nicely integrated, and the customer never had to worry about the pieces working together.

Issue Two: Can you package everything together and still be an open platform? To its credit, Oracle has built its software on standards such as Unix/Linux, XML, Java, etc. So, can you have it both ways? Can you claim openness when the platform itself is hermetically sealed? I think it may be a stretch. To accomplish this goal, Oracle would have to have well-defined and published APIs. It would have to be able to certify that with these APIs the integrated platform won’t be broken. Not an easy task.

Issue Three: Can you manage a complex computing environment? Computing environments get complicated because there are so many moving parts. Configurations change; software gets patched; new operating system versions are introduced; emerging technology enters and messes up the well-established environment. Oracle would like to automate this management process for customers. It is an appealing idea, since configuration problems, missing links, and poor testing are often responsible for many of the outages in computing environments today. Will customers be willing to have this type of integrated environment controlled and managed by a single vendor? Some customers will be happy to turn over these headaches. Others may have too much legacy or want to work with a variety of vendors. This is not a new dilemma for customers. Customers have long had to weigh the benefits of a single source of technology against the risks of being locked in.

Issue Four: Can you teach an old dog new tricks? Can Oracle really be a hardware vendor? Clearly, Sun continues to be a leader in hardware despite its diminished fortunes. But as anyone who has ventured into the hardware world knows, hardware is a tough, brutal game. In fact, it is the inverse of software. Software takes many cycles to reach maturation. It needs to be tweaked and finessed. However, once it is in place it has a long, long life. As the old saying goes, old software never dies. The same cannot be said for hardware. Hardware has a much straighter line to maturity. It is designed, developed, and delivered to the market. Sometimes it leapfrogs the competition enough that it has a long and very profitable life. Other times, it hits the market at the end of a cycle, just as a new, more innovative player enters. The culmination of all the work and effort can be cut short when something new comes along at the right place and the right time. It is often a lot easier to get rid of hardware than software. The computer industry is littered with the corpses of failed hardware platforms that started with great fanfare and then faded away quickly. Will Oracle be successful with hardware? It will depend on how good the company really is at transforming its DNA.

Issue Five: Are customers ready to embrace Oracle’s brave new world? Oracle’s strategy is a good one — if you are Oracle. But what about for customers? And what about for partners? Customers need to understand the long-term implications and tradeoffs of buying into Oracle’s integrated approach to its platform. It will clearly mean fewer moving parts to worry about. It will mean one phone call and no finger pointing. However, customers have to understand the type of leverage that a single company will have in terms of contract terms and conditions. And what about partners? How does an independent software vendor or a channel partner participate within the new Oracle? Is there room? What type of testing and preparation will be required to play?

Predictions for 2010: clouds, mergers, social networks and analytics

December 15, 2009

Yes, it is predictions time. Let me start by saying that no market change happens in a single year. Therefore, what is important is to look at the nuance of a market or a technology change in the context of its evolution. So, it is in this spirit that I will make a few predictions. I’ve decided to just list my top six predictions (I don’t like odd numbers). Next week I will add another five or six predictions.

  1. Cloud computing will move out of the fear, uncertainty and doubt phase into the reality phase for many customers. This means that large corporations will begin to move segments of their infrastructure and applications to the cloud. It will be a slow but steady movement. The biggest impact on the market is that customers will begin putting pressure on vendors to guarantee predictability, reliability, and portability.
  2. Service Management will become mainstream. Over the past five years the focus of service management has been on ITIL (Information Technology Infrastructure Library) processes and certification. There is a subtle change happening as corporations start to take a more holistic view of how everything that has a sensor, an actuator, or a computer interface can be managed effectively. Cloud computing will have a major impact on the growing importance of service management.
  3. Cloud service providers will begin to drop their prices dramatically as competition intensifies. This will be one of the primary drivers of growth in the use of cloud services. It will put a lot of pressure on smaller niche cloud providers as the larger companies try to gain control of this emerging market.
  4. It is not a stretch to state that the pace of technology acquisitions will accelerate in 2010.  I expect that HP, IBM, Cisco, Oracle, Microsoft, Google, and CA will be extremely active. While it would be foolhardy to pick a single area, I’ll go out on a limb and suggest that security, data center management, service management, and information management will be the focus of many of the acquisitions.
  5. Social Networking will become much more mainstream than it was in 2009. Marketers will finally realize that blatant sales pitches on Twitter or Facebook just won’t cut it. We will begin to see marketers learn how to integrate social networking into the fabric of marketing programs. As this happens there will be hundreds of new startups focused on analyzing the effectiveness of these marketing efforts.
  6. Information management is at the cusp of a major change. While the individual database remains important, the issue for customers is the need to manage information holistically so that they can anticipate change. As markets grow increasingly complex and competitive, the hottest products in 2010 will be those that help companies anticipate what will happen next. So expect anything with the term “predictive analytics” to be hot, hot, hot.

Tectonic shifts: HP Plus 3Com versus Cisco Plus EMC

November 18, 2009

Just when it looked clear how the markets were lining up around data center automation and cloud computing, things change. I guess that is what makes this industry so very interesting. The proposed acquisition of 3Com by HP is a direct challenge to Cisco’s network management franchise. However, the implications of this move go further than meets the eye. It also puts HP on a direct path against EMC and its Cisco partnership. And to make things even more interesting, it puts these two camps in a competitive three-way race against IBM and its cloud/data center automation strategy. And of course, it doesn’t stop there. A myriad of emerging companies like Google and Amazon want a larger share of the enterprise market for cloud services. Companies like Unisys and CSC that have focused on outsourced secure data centers are getting into the act.

I don’t think that we will see a single winner — no matter what any one of these companies will tell you. The winners in this market shift will be those companies that can build a compelling platform and a compelling value proposition for a partner ecosystem. The truth about the cloud is that it is not simply a network or a data center. It is a new way of providing services of all sorts that can support changing customer workloads in a secure and predictable manner.

In light of this, what does this say about HP’s plan to acquire 3Com? If we assume that the network infrastructure is a key component of an emerging cloud and data center strategy, HP is taking a calculated risk in acquiring more assets in this market. The company has found that its ProCurve networking division has begun gaining traction. ProCurve includes network switches, wireless access points, WAN routers, and access control servers and software, and it competes directly with Cisco in the networking switch market. When HP had a tight partnership with Cisco, the company de-emphasized networking. However, once Cisco started to move into the server market, the handcuffs came off. The 3Com acquisition takes the competitive play to a new level. 3Com has a variety of good pieces of technology that HP could leverage within ProCurve. Even more significantly, HP picks up a strong security product called TippingPoint, itself a 3Com acquisition. TippingPoint fills a critical hole in HP’s security offering with network security products that include intrusion prevention and network packet inspection. The former 3Com subsidiary has also built a database of security threats based on a network of external researchers.

But I think that one of the most important reasons HP bought 3Com is its strong relationships in the Chinese market. In fiscal year 2008, half of 3Com’s revenue came from its H3C joint venture with the Chinese vendor Huawei Technology. Therefore, it is not surprising that HP would pay a premium to gain a foothold in this lucrative market. If HP is smart, it will do a good job of leveraging 3Com’s many software assets to build out its networking portfolio as well as beef up its software organization. In reality, HP is much more comfortable in the hardware market; therefore, adding networking as a core competency makes sense. It will also bolster its position as a player in the high-end data center market and in the private cloud space.

Cisco, on the other hand, is coming from the network and moving aggressively into the cloud and data center markets. The company has purchased a position in VMWare and has established a tight partnership with EMC as a go-to-market strategy. For Cisco, the relationship provides credibility and access to customers outside its traditional markets. For EMC, the Cisco relationship strengthens its networking play. But an even bigger value of the relationship is presenting a bigger footprint to customers as the two companies take on HP, IBM, and the assortment of other players who all want to win. The Cisco/EMC/VMware play is to focus on the private cloud. In their view a private cloud is very similar to a private, preconfigured data center. It can be a compelling value proposition for a customer that needs a data center fast without having to deal with a lot of moving parts. From a cloud computing perspective, though, the key question remains: is this really a cloud?

It was inevitable that this quiet market dominated by Google and Amazon would heat up as the cloud becomes a real market force.  But I don’t expect that HP or Cisco/EMC will have a free run. They are being joined by IBM and Microsoft — among others. The impact could be better options for customers and prices that invariably will fall. The key to success for all of these players will be how well they manage what will be an increasingly heterogeneous, federated, and highly distributed hardware and software world. Management comes in many flavors: management of these highly distributed services and management of the workloads.

What are the unanticipated consequences of the cloud – part II

October 29, 2009

As I was pointing out yesterday, there are many unintended consequences from any emerging technology platform — the cloud will be no exception. So, here are my next three picks for unintended consequences from the evolution of cloud computing:

4. The cloud will disrupt traditional computing sales models. I think that Larry Ellison is right to rant about Cloud Computing. He is clearly aware that if cloud computing becomes the preferred way for customers to purchase software, the traditional model of paying maintenance on applications will change dramatically. Clearly, vendors can simply roll the maintenance stream into per-user, per-month pricing. However, as I pointed out in Part I, prices will inevitably go down as competition for customers expands. There will come a time when the vast sums of money collected to maintain software versions will seem a bit old fashioned. In fact, that will be one of the most important unintended consequences, and it will have a very disruptive effect on the economic models of computing. It has the potential to change the power dynamics of the entire hardware and software industries. The winners will be the customers and the smart vendors who figure out how to make money without direct maintenance revenue. Like every other unintended consequence, new models will emerge that make some really clever vendors very successful. But don’t ask me what they are. It is just too early to know.

5. The market for managing cloud services will boom. While service management vendors do pretty well today managing data center based systems, the cloud environment will make these vendors king of the hill. Think about it like this. You are a company that is moving to the cloud. You have seven different software as a service offerings from seven different vendors. You also have a small private cloud that you use to provision critical customer data. You also use a public cloud for some large-scale testing. In addition, any new software development is done in a public cloud and then moved into the private cloud when it is completed. Existing workloads like ERP systems and legacy systems of record remain in the data center. All of these components put together are the enterprise computing environment. So, what is the service level of this composite environment? How do you ensure that you are compliant across these environments? Can you ensure security and performance standards? A new generation of products, and maybe a new generation of vendors, will rake in a lot of cash solving this one.
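
To see why the composite service level is the hard question, consider some back-of-the-envelope arithmetic. If a business process depends on all of these pieces at once, the availabilities multiply. The figures below are invented for illustration, not quotes from any vendor’s SLA.

```python
# Assumed availability of each piece of the composite environment.
components = {
    "saas-crm": 0.999,
    "saas-erp": 0.995,
    "private-cloud": 0.999,
    "public-cloud-test": 0.99,
    "data-center-legacy": 0.9995,
}

composite = 1.0
for availability in components.values():
    composite *= availability   # all pieces must be up at once

print(f"composite availability: {composite:.4f}")
# 0.9826 here: five components, each at "two nines" or better,
# still leave roughly 1.7% expected downtime end to end.
```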

6. What will processes look like in the cloud? Like data, processes will have to be decoupled from the applications of record that they are now an integral part of. Now, I don’t expect that we will rip processes out of every system of record. In fact, static systems such as ERP, HR, etc. will keep tightly integrated processes. However, the dynamic processes that need to change as the business changes will have to be designed without these constraints. They will become trusted processes — sort of like business services that are codified but can be reconfigured when the business model changes. This will probably happen anyway with the emergence of Service Oriented Architectures. However, with the flexibility of the cloud environment, this trend will accelerate. The need for independent processes and process models has the potential to create a brand new market.

I am happy to add more unintended consequences to my top six. Send me your comments and we can start a part III reflecting your ideas.

My Top Eleven Predictions for 2009 (I bet you thought there would be only ten)

November 14, 2008

What a difference a year makes. The past year was filled with a lot of interesting innovations and market shifts. For example, Software as a Service went from being something for small companies or departments within large ones to a mainstream option. Real customers are beginning to solve real business problems with service oriented architecture. The latest hype is around Cloud Computing – after all, the software industry seems to need hype to survive. As we look forward into 2009, it is going to be a very different and difficult year, but one that will be full of surprising twists and turns. Here are my top predictions for the coming year.
One. Software as a Service (SaaS) goes mainstream. It isn’t just for small companies anymore. While this has been happening slowly and steadily, SaaS is rapidly becoming mainstream because, with dramatic cuts in capital budgets, companies are going to fulfill their needs with it. While companies like Salesforce.com have been the successful pioneers, the big guys (like IBM, Oracle, Microsoft, and HP) are going to make a major push for dominance and strong partner ecosystems.
Two. Tough economic times favor the big and stable technology companies. Yes, these companies will trim expenses and cut back like everyone else. However, customers will be less willing to bet the farm on emerging startups with cool technology. The only way emerging companies will survive is to do what I call “follow the pain”. In other words, come up with compelling technology that solves really tough problems that others can’t. They need to fill the white space that the big vendors have not filled yet. The best option for emerging companies is to use this time, when everyone else is hiding under the bed, to get aggressive and show value to customers and prospects. It is best to shout when everyone else is quiet. You will be heard!
Three. The Service Oriented Architecture market enters the post-hype phase. This is actually good news. We have had in-depth discussions with almost 30 companies for the second edition of SOA for Dummies (coming out December 19th). They are all finding business benefit from the transition, and they all view SOA as a journey – not a project. So, there will be less noise in the market but more good work getting done.
Four. Service Management gets hot. This has long been an important area, whether companies were looking at automating data centers or managing processes tied to business metrics. So, what is different? Companies are starting to seriously plan a service management strategy tied to both customer experience and satisfaction. They are tying this objective to their physical assets, their IT environment, and their business processes across the company. There will be vendor consolidation and a lot of innovation in this area.
Five. The desktop takes a beating in a tough economy. When times get tough companies look for ways to cut back and I expect that the desktop will be an area where companies will delay replacement of existing PCs. They will make do with what they have or they will expand their virtualization implementation.
Six. The Cloud grows more serious. Cloud computing has actually been around since the early time-sharing days, if we are to be honest with each other. However, emerging technologies like multi-tenancy make this approach to shared resources different. Just as companies are moving to SaaS for economic reasons, companies will move to clouds with the same goal – decreasing capital expenditures. Companies will have to start gaining an understanding of the impact of trusting a third-party provider. Performance, scalability, predictability, and security are not guaranteed just because some company offers a cloud. Service management of the cloud will become a key success factor. And there will be plenty of problems to go around next year.
Seven. There will be tech companies that fail in 2009. Not all companies will make it through this financial crisis. Even large companies with cash will potentially be on the failure list. I predict that Sun Microsystems, for example, will fail to remain intact. I expect that company will be broken apart. Its hardware assets could be sold to its partner Fujitsu, while pieces of its software portfolio could be sold off as well. It is hard to see how a company without a well-crafted software strategy and execution model can remain financially viable. Similarly, companies without a focus on the consumer market will have a tough time in the coming year.
Eight. Open Source will soar in this tight market. Open Source companies are in a good position in this type of market—with a caveat. It is risky for customers to simply adopt an open source solution unless there is a strong commercial support structure behind it. Companies that offer commercial open source will emerge as strong players.
Nine.  Software goes vertical. I am not talking about packaged software. I anticipate that more and more companies will begin to package everything based on a solutions focus. Even middleware, data management, security, and process management will be packaged so that customers will spend less time building and more time configuring. This will have an impact in the next decade on the way systems integrators will make (or not make) money.
Ten. Appliances become a software platform of choice for customers. Hardware appliances have been around for a number of years and are growing in acceptance and capability. This trend will accelerate in the coming year. The most common solutions delivered as appliances include security, storage, and data warehousing. The appliance platform will expand dramatically this coming year. More enterprise software will be sold prepackaged this way to ease the adoption of complex solutions.

Eleven. Companies will spend money on anticipation management. Companies must be able to use their information resources to understand where things are going. Being able to anticipate trends and customer needs is critical.  Therefore, one of the bright spots this coming year will be the need to spend money getting a handle on data.  Companies will need to understand not just what happened last year but where they should invest for the future. They cannot do this without understanding their data.

The bottom line is that 2009 will be a complicated year for software. Many companies without a compelling solution to customer pain will, and should, fail. The market favors safe companies. As in any down market, some companies will focus on avoiding any risk and waiting. The smart companies – both providers and users of software – will take advantage of the rough market to plan for innovation and success when things improve – and they always do.

Can HP Lead in Virtualization Management?

September 15, 2008

HP has been a player in the virtualization market for quite a while. Its many hardware products, including its server blades, have given it a respectable position in the market. In addition, HP has done a great job of being an important partner to key virtualization software players including VMWare, Red Hat, and Citrix. It is also establishing itself as a key Microsoft partner as Microsoft moves boldly into virtualization with Hyper-V. Until now, though, HP’s virtualization strategy did not focus on software. That has started to change. Now, if this had been the good old days, I think we would have seen a strategy that focused on cooler hardware and data center optimization. Don’t get me wrong — HP is very much focused on the hardware and the data center. But now there is a new element that I think will be important to watch.

HP is finally leveraging its software assets in the form of virtualization management. If I were cynical I would say it’s about time. But to be fair, HP has added a lot of new assets to its software portfolio in the last couple of years that make a virtualization management strategy more possible and more believable.

It is interesting that when a company has key assets to offer customers, it often strengthens the message. I was struck by what I thought was a clear message that I found on one of the slides from HP’s marketing pitch: “Your applications and business services don’t care where resources are, how they’re connected or how they’re managed, and neither should you.” This statement struck me as precisely the right message in this crazy, overhyped virtualization market. Could it be that HP is becoming a marketing company?

As virtualization goes mainstream, I predict that management of this environment will become the most important issue for customers. In fact, this is the message I have gotten loud and clear from customers trying to virtualize their applications on servers. Couple this with the reality that no company virtualizes everything, and even if one did, it would still have a physical environment to manage. Therefore, HP focuses its strategy on a plan to manage the composite of physical and virtual. Of course, HP is not alone here. I was at Citrix’s industry analyst meeting last week, and it is adopting the same strategy. I promise that my next blog will be about Citrix.

HP is calling its virtualization strategy its Business Management Suite. While the name is a bit generic, HP is trying to leverage the hot business service management platform and wrap virtualization with it. Within this wrapper, HP is including four components:

  • Business Service Management — the technique for linking services across the physical and virtual worlds. This is intended to monitor the end-to-end health of the overall environment.
  • Business Service Automation — a technique for provisioning assets for distributed computing.
  • IT Service Management — a technique for discovering what software is present and what licenses need to be managed.
  • Quality Management — a technique for testing, scheduling, and provisioning resources across platforms. Many companies are starting to use virtualization as a way of testing complex composite applications before putting them into production. Companies are testing for both application quality and performance under different loads.

I am encouraged that HP seems to understand the nuances of this market. HP’s strategy is to position itself as the “Switzerland” of the virtualization management space. It is therefore creating a platform that includes infrastructure to manage across IBM, Microsoft, VMWare, Citrix, and Red Hat, and it is positioning the management assets from its heritage software (OpenView) and its acquisitions to execute this strategy. For example, its IT Service Management offering is intended to manage compliance with license terms and conditions as well as chargebacks across heterogeneous environments. Its Asset Manager is intended to track virtualized assets through its discovery and dependency mapping tools. HP’s Operations Manager has extended its performance agents so that it can monitor capabilities from virtual machines to hypervisors. The company’s SiteScope provides agentless monitoring of hypervisors for VMWare. HP Network Node Manager has extended support for monitoring virtual networks.

HP’s goal is to focus on the overall health of these distributed, virtualized services from an availability, performance, capacity planning, end-user experience, and service level management perspective. It is indeed an ambitious plan that will take some time to develop, but it is the right direction. I am particularly impressed with the partner program that HP is evolving around its CMDB (Configuration Management Database). It is partnering with VMWare on a joint development initiative to provide a federated CMDB that can collect information from a variety of hosts and guest hosts in an on-demand approach. Other companies such as Red Hat and Citrix have joined the CMDB program.

This is an interesting time in the virtualization movement. As virtualization matures, companies are starting to realize that simply virtualizing an application on a server does not by itself save the time and money they anticipated. The world is a lot more complicated than that. Management wants to understand how the entire environment is part of delivering value. For example, an organization might put all of its call center personnel on a virtualized platform, which works fine until an additional 20 users with heavy demands on the server suddenly cause performance to falter. In other situations, everything works fine until there is a software error somewhere in the distributed environment. The virtualized environment suddenly fails, and it is very difficult for IT operations to diagnose the problem. This is when management stops getting excited about how wonderful it is that they can virtualize hundreds of users onto a single server and starts worrying about the quality of service and the reputation of the organization overall.

The bottom line is that HP seems to be pulling the right pieces together for its virtualization management strategy. It is indeed still early. Virtualization itself is only the tip of the distributed computing marketplace.  HP will have to continue to innovate on its own while investing in its partner ecosystem. Today partners are eager to work with HP because it is a good partner and non-threatening.  But HP won’t be alone in the management of virtualization.  I expect that other companies like IBM and Microsoft will be very aggressive in this market.  HP has a little breathing room right now that it should take advantage of before things change again. And they always change again.

Cloud Computing: a work in progress or a silver bullet?

August 6, 2008

I have seen some research recently suggesting that CIOs are not seriously thinking about Cloud Computing. Is this a leading indicator for this emerging market, or is it looking in the rearview mirror? I vote for the rearview mirror theory. Here is why. If you are the typical CIO, you are thinking about everything from your budget, to how to reduce energy costs in your data centers, to proving to the CEO and CFO that your investments are indeed showing a return or at least giving you a competitive weapon. In our research we are in fact seeing that many CIOs who are implementing new infrastructure, plans for new efficient data centers, and innovative Service Oriented Architectures are making progress.

However, when I see research expressing doubts about clouds, I can come to only one conclusion: fear of the unknown. What is a cloud? A cloud is an Internet-based set of services built on a Software as a Service (SaaS) approach, where typically a single vendor controls the hardware, networking, software, and management environment.

Many CIOs and IT managers simply don’t understand what this means. It isn’t their fault. I have yet to see an article or announcement from a major vendor that makes it clear what a cloud really is (other than something that might mean rain). If I were a CIO struggling with all of the problems of a down market and requirements to make everyone happy, I would be skeptical too.

Is Cloud Computing simply another word for outsourcing infrastructure? I believe that many CIOs will see it this way. After all, like outsourcing, clouds mean that computing is no longer on premises. There are obviously key differences between Cloud Computing today and outsourcing. The most obvious difference is the rationale for use. Today companies tend to use clouds for a specific test environment or, in essence, to have an application hosted by a trusted supplier.

Over time, I think that CIOs will come around and accept that Cloud Computing is actually a valuable approach that is cost effective and trustworthy. However, in my view, it is going to take careful planning to gain the trust of business-oriented CIOs. Here are what I think are the top challenges for achieving commercial clouds.

1. A cloud can hide many benefits and many sins. Therefore, a CIO has to be able to get under the hood of a cloud environment so that it is clear what the technology architecture is. Many problems are already surfacing because existing cloud environments cannot scale and do not have sophisticated management capabilities.

2. What exactly does the organization want to use a cloud for? Is it in place of a data center? If so, the CIO needs to do a lot of homework and establish a very well-constructed Service Level Agreement with financial incentives and penalties (a toy version of that arithmetic follows this list).

3. Does the organization view a cloud as a standalone environment for one use? If so, how does it connect to an existing infrastructure and how easy is it to move data and other content from one site to another?

4. What happens if the cloud fails? Is there a backup plan? This is especially important if employees and/or partners and customers are dependent on the application or system that lives in the cloud. What happens if the cloud supplier goes out of business?

5. How proprietary is the cloud, and how does that impact integration? This is important if you decide to leave one environment for another or even decide to bring the application back in-house. For example, some clouds may have designed their own languages for integration, which might mean that an organization is stuck with an expensive rewrite.
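
As promised above, here is a toy version of the SLA arithmetic behind question two: translating measured monthly uptime into the service credit a contract owes. The credit tiers and figures are invented for illustration, not drawn from any provider’s actual agreement.

```python
CREDIT_TIERS = [      # (minimum uptime, credit as a fraction of the monthly fee)
    (0.9999, 0.00),   # four nines or better: no credit owed
    (0.999, 0.05),
    (0.99, 0.10),
    (0.0, 0.25),      # anything worse costs a quarter of the fee
]

def service_credit(uptime: float, monthly_fee: float) -> float:
    """Return the credit owed for a month at the given measured uptime."""
    for floor, credit in CREDIT_TIERS:
        if uptime >= floor:
            return monthly_fee * credit
    return 0.0

print(service_credit(0.9992, 10_000))   # 500.0  - provider missed four nines
print(service_credit(0.95, 10_000))     # 2500.0 - a bad month gets the top penalty
```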

Conclusion: there are no silver bullets or silver linings. Now, I think that Cloud Computing is going to be a very important transition in the maturation of distributed computing. In the long run, it will provide the type of utility computing that some of us have been talking about for decades. However, like anything else in the technology world, it is not a simple fix to complicated problems. It is an IT infrastructure made up of technology components that have to be managed, scaled, and secured – to name but a few issues. I expect that we will see a lot of failures in the coming year that will seed doubts among potential customers. At the same time, it will open opportunities for smart companies who have the vision to bring the pieces together and make this stage of computing a reality. At least we won’t be bored!