Archive for the ‘monitoring’ Category

HP’s Ambitious Cloud Computing Strategy: Can HP Emerge as a Power?

February 15, 2011

To comprehend HP’s cloud computing strategy, you first have to understand HP’s Matrix Blade System. HP announced the Matrix system in April 2009 as a prepackaged, fabric-based system. Because Matrix was designed as a packaged environment, it has become the linchpin of HP’s cloud strategy.

So, what is Matrix? Within this environment, HP has pre-integrated servers, networking, storage, and software (primarily orchestration to customize workflow). In essence, Matrix is a unified computing system that supports both physical blades and virtual configurations. It includes a graphical command center console to manage resource pools, physical and virtual servers, and network connectivity. On the software side, Matrix provides an abstraction layer that supports workload provisioning and workflow-based policy management that can determine where workloads will run. The environment supports the VMware hypervisor, open source KVM, and Microsoft’s Hyper-V.

HP’s strategy is to combine this Matrix system, which it has positioned as its private cloud, with a public compute cloud. In addition, HP is incorporating its lifecycle management software and its security acquisitions as part of its overall cloud strategy. It is leveraging HP Services (formerly EDS) to offer a hosted private cloud and traditional outsourcing as part of an overall plan. HP hopes to leverage its services expertise in running large enterprise packaged software.

There are three components to the HP cloud strategy:

  • CloudSystem
  • Cloud Service Automation
  • Cloud Consulting Services

CloudSystem. What HP calls CloudSystem is, in fact, based on the Matrix blade system. The Matrix Blade System uses a common rack enclosure to support all the blades produced by HP. Matrix packages what HP calls an operating environment: provisioning software, virtualization, a self-service portal, and management tools for resource pools. HP considers its public cloud services to be part of the CloudSystem. To provide a hybrid cloud computing environment, HP will offer public compute cloud services similar to what is available from Amazon EC2. When combined with the outsourcing services from HP Services, HP contends that it provides a common architectural framework across public clouds, private clouds, virtualized servers, and outsourcing. It also includes what HP is calling cloud maps: configuration templates based on HP’s acquisition of Stratavia, a database and application automation software company.

Cloud Service Automation. The CloudSystem is intended to make use of service automation software called Cloud Service Automation (CSA). The components of CSA include a self-service portal that manages a service catalog. The service catalog describes each service that is intended to be used as part of the cloud environment, and within the catalog the required service level is defined. In addition, CSA can meter the use of services and provide visibility into the performance of each service. A second capability is a cloud controller, based on the orchestration technology from HP’s Opsware acquisition. A third component, the resource manager, provides provisioning and monitoring services. The objective of CSA is to provide end-to-end lifecycle management of the CloudSystem.
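
To make the catalog idea concrete, here is a minimal sketch of what a service catalog entry might capture: a description, a required service level, and a metering unit. The field names are illustrative assumptions, not HP’s actual CSA schema.

```typescript
// Illustrative service catalog entry; the fields are hypothetical and
// do not reflect HP CSA's actual data model.
interface CatalogEntry {
  name: string;
  description: string;
  requiredServiceLevel: {
    availabilityPct: number;   // e.g. 99.9
    maxResponseMs: number;     // e.g. 200
  };
  meteringUnit: "per-hour" | "per-user-per-month";
  owner: string;               // team accountable for the service
}

const catalog: CatalogEntry[] = [
  {
    name: "managed-database",
    description: "Provisioned database instance with nightly backup",
    requiredServiceLevel: { availabilityPct: 99.9, maxResponseMs: 200 },
    meteringUnit: "per-hour",
    owner: "data-platform",
  },
];

// The self-service portal would render this catalog; the metering
// component would record usage against each entry's meteringUnit.
console.log(catalog[0].name);
```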

Cloud Consulting Services. HP is taking advantage of EDS’s experience in managing computing infrastructure as the foundation for its cloud consulting services offerings. HP also leverages the consulting services that were traditionally part of HP as well as services from EDS. As a result, HP has deep experience in designing and running cloud seminars and strategy engagements for customers.

From HP’s perspective, it is taking a hybrid approach to cloud computing. What does HP mean by hybrid? Basically, HP’s hybrid strategy combines the CloudSystem (a hardware-based private cloud), its own public compute services, and traditional outsourcing.

The Bottom Line. Making the transition to becoming a major cloud computing vendor is complicated. The market is young and still in transition. HP has many interesting building blocks that have the potential to make it an important player. Leveraging the Matrix Blade System is a pragmatic move since it is already an integrated and highly abstracted platform. However, HP will have to provide more services that increase the ability of its customers to use the CloudSystem to create an elastic and flexible computing platform. Cloud Service Automation is a good start but still requires more evolution. For example, it needs to add more capabilities to its service catalog; leveraging the Systinet registry/repository as part of that catalog would be advisable. I also think that HP needs to package its security offerings to be cloud-specific, both in the governance and compliance area and in identity management.

Just how aggressively HP plans to compete in the public cloud space is uncertain. Can HP be effective in both markets? Does it need to combine its offerings or create two different business models?

It is clear that HP wants to make cloud computing the cornerstone of its “Instant-On Enterprise” strategy announced last year. In essence, Instant-On Enterprise is intended to make it easier for customers to consume data center capabilities including infrastructure, applications, and services. This is a good vision, in keeping with what customers need. And plainly cloud computing is an essential ingredient in achieving this ambitious strategy.

What will it take to achieve great quality of service in the cloud?

November 9, 2010

You know that a market is about to transition out of the early fantasy stage when IT architects begin talking about traditional IT requirements. Why do I bring this up? I had a fascinating conversation yesterday with a leading architect in charge of the cloud strategy for an important company that is typically on the bleeding edge of technology. Naturally, I am not allowed to name the company or the person. But let me just say that individuals and companies like this are the first to grapple with issues such as the need for a registry for web services or the complexity of creating business services that are both reusable and include business best practices. They are the first companies to try out artificial intelligence to see if it could automate tasks that require complex reasoning.

These innovators tend to get blank stares from their cohorts in traditional IT departments who are grappling with mundane issues such as keeping systems running efficiently. Leading-edge companies have the luxury of pushing the bounds of what is possible. There is a tremendous amount to be learned from their experiments with technology. In fact, there is often more to be learned from their failures than from their successes, because they are pushing the boundary of what is possible with current technology.

So, what did I take away from my conversation? From my colleague’s view, the cloud today is about “how many virtual machines you need, how big they are, and linking those VMs to storage.” Not a very compelling picture, but it is his perception of the reality of the cloud today. His view of the future requirements is quite intriguing.

I took away six key issues that this advanced planner would like to see addressed in the evolution of cloud computing:

One. Automation of placement of assets is critical. Where you actually put capability matters. For example, there are certain workloads that should never leave the physical data center because of regulatory requirements. If an organization were dealing with huge amounts of data, it would not be efficient to place elements of that data on different cloud environments. What about performance issues? What if a task needs to be completed in 10 seconds, or in 5 milliseconds? There are many decisions that need to be made based on corporate requirements. Should each placement decision require custom programming? The answer is no. There should be an automated process, based on business rules, that determines the actual placement of cloud services.
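
As a rough illustration of what rule-driven placement could look like, here is a minimal sketch. The workload attributes, thresholds, and placement targets are hypothetical assumptions, not any vendor’s actual policy engine.

```typescript
// A minimal sketch of rule-driven workload placement; attributes and
// targets are hypothetical.
interface Workload {
  name: string;
  containsRegulatedData: boolean; // subject to data-residency rules
  maxLatencyMs: number;           // deadline the task must meet
}

type Placement = "on-premises" | "private-cloud" | "public-cloud";

function placeWorkload(w: Workload): Placement {
  // Rule 1: regulated data never leaves the physical data center.
  if (w.containsRegulatedData) return "on-premises";
  // Rule 2: tight latency budgets stay close to the data.
  if (w.maxLatencyMs <= 5) return "private-cloud";
  // Otherwise, the cheapest elastic capacity wins.
  return "public-cloud";
}

console.log(placeWorkload({
  name: "end-of-day-batch",
  containsRegulatedData: false,
  maxLatencyMs: 10000,
})); // -> "public-cloud"
```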

Two. Avoiding concentration of risk. How do you actually place core assets into a hypervisor? If, for example, you have a highly valuable set of services that are critical to decision makers, you might want to ensure that they run on different hypervisors, based on automated management processes and rules.
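
A simple anti-affinity rule captures this idea. The sketch below, with hypothetical service and host names, refuses to co-locate critical replicas on the same hypervisor host.

```typescript
// Anti-affinity sketch: spread critical services across distinct
// hypervisor hosts so one failure cannot take out all of them.
function spreadAcrossHosts(
  services: string[],
  hosts: string[],
): Map<string, string> {
  if (hosts.length < services.length) {
    throw new Error("Not enough distinct hosts to avoid concentrating risk");
  }
  const assignment = new Map<string, string>();
  services.forEach((svc, i) => assignment.set(svc, hosts[i])); // one host each
  return assignment;
}

console.log(spreadAcrossHosts(["pricing-a", "pricing-b"], ["hv-01", "hv-02", "hv-03"]));
// -> Map { "pricing-a" => "hv-01", "pricing-b" => "hv-02" }
```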

Three. Quality of Service needs a control fabric. If you are a customer of hybrid cloud computing services, you might need access to information about what tasks a management tool is actually performing. What does that tool actually touch in the cloud environment? What do the error messages mean, and what are their implications? Today many cloud services are black boxes; there is no way for the customer to really understand what is happening behind the scenes. If companies are deploying truly hybrid environments that support a mixed workload, this type of access to the workings of the tools that monitor and manage quality of service will be critical. From a quality of service perspective, some applications will require dedicated bandwidth to meet requirements; other applications will not need any special treatment.

Four. Cloud Service Providers building shared services need an architectural plan to control them as a unit of work. These services will be shared across departments as well as across customers. How do you connect these services? While it might seem simple at the 50,000-foot level, it is actually quite complex, because we are talking about linking a set of services together to build a coherent platform. Therefore, as with building any system, there is a requirement to model the “system of services”, then deploy that model, and finally reconcile and tune the results.

Five. Standard APIs protect customers. Should APIs for all cloud services be published and accessible? If companies are to have the freedom to move easily and efficiently between and among cloud services, then APIs need to be well understood. For example, a company may be using a vendor’s cloud service and discover a tool that addresses a specific problem. What if that vendor doesn’t support that tool? In essence, the customer is locked out of using it. This becomes a problem immediately for innovators. However, it is also an issue for traditional companies that begin to work with cloud computing services and over time realize that they need more service management and more oversight.
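
One way to see why published APIs matter is a thin, provider-agnostic interface: if every provider can be wrapped in one contract, the customer (or a third-party tool) is not locked into any single vendor. The interface and vendor names in this sketch are invented for illustration.

```typescript
// A provider-agnostic compute interface; names are hypothetical.
interface ComputeProvider {
  launchInstance(image: string, size: string): Promise<string>; // instance id
  terminateInstance(id: string): Promise<void>;
}

// One vendor's adapter; a second vendor would get its own adapter.
class VendorACloud implements ComputeProvider {
  async launchInstance(image: string, size: string): Promise<string> {
    // ...call vendor A's proprietary API here...
    return `vendor-a:${image}:${size}`;
  }
  async terminateInstance(id: string): Promise<void> {
    // ...vendor A teardown call...
  }
}

// Application code depends only on the interface, so tools written
// against it work with any conforming provider.
async function deploy(cloud: ComputeProvider): Promise<string> {
  return cloud.launchInstance("web-tier", "medium");
}

deploy(new VendorACloud()).then((id) => console.log(`launched ${id}`));
```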

Six. Managing containers may be key to the service management of the cloud. A well-designed cloud service has to be service oriented. It needs to be placed in a container without dependencies, since customers will use services in different ways. Therefore, each service needs a set of parameter-driven configurators so that the rules of usage and management are clear. What version of which cloud service should be used under what circumstances? What if the service is designed to execute backups? Can that backup happen across the globe, or should it be done in proximity to the data assets? These management issues will become the most important issues for cloud providers in the future.
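
Here is a minimal sketch of a parameter-driven configurator for the backup example; every parameter name is a hypothetical placeholder, meant only to show how usage rules could be made explicit per customer without changing the service’s code.

```typescript
// Parameter-driven configuration for a packaged backup service;
// all fields are illustrative.
interface BackupServiceConfig {
  version: string;              // which version of the service to run
  dataLocality: "same-datacenter" | "same-region" | "anywhere";
  scheduleCron: string;         // when backups run
}

function validateConfig(cfg: BackupServiceConfig): void {
  // Usage rule: data with residency constraints must stay near its source.
  if (cfg.dataLocality === "anywhere") {
    console.warn("Backups may cross regions; confirm this is permitted.");
  }
}

validateConfig({
  version: "2.1",
  dataLocality: "same-region",
  scheduleCron: "0 2 * * *", // 02:00 daily
});
```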

The best thing about talking to people like this architect is that it begins to make you think about issues that aren’t part of today’s cloud discussions.  These are difficult issues to solve. However, many of these issues have been addressed for decades in other iterations of technology architectures. Yes, the cloud is a different delivery and deployment model for computing but it will evolve as many other architectures do. The idea of putting quality of service, service management, configuration and policy rules at the forefront will help to transform cloud computing into a mature and effective platform.



Eight things that changed since we wrote Cloud Computing for Dummies

October 8, 2010

I admit that I haven’t written a blog post in more than three months, but I do have a good reason. I just finished writing my latest book (not a Dummies book this time). It will be my first business book, based on almost three decades in the computer industry. Once I know the publication date I will tell you a lot more about it. But as I was finishing this book I was thinking about my last book, Cloud Computing for Dummies, which was published almost two years ago. As this anniversary approaches, I thought it was appropriate to take a look back at what has changed. I could probably go on for quite a while about how little information was available at that point and how few CIOs were willing to talk about or even consider cloud computing as a strategy. But that’s old news. I decided that it would be most interesting to focus on eight of the changes that I have seen in this fast-moving market over the past two years.

Change One: IT is now on board with cloud computing. Cloud computing has moved from a reaction to sluggish IT departments to a business strategy involving both business and technology leaders. A few years ago, business leaders were reading about Amazon and Google in business magazines. They knew little about what was behind the hype. They focused on the fact that these early cloud pioneers seemed to be efficient at making cloud capability available on demand: no paperwork and no waiting for the procurement department to process an order. Two years ago IT leaders tried to pretend that cloud computing was a passing fad that would disappear. Now I am finding that IT is treating cloud computing as a centerpiece of its future strategy, even if it is only testing the waters.

Change Two: Enterprise computing vendors are all in with both private and public cloud offerings. Two years ago most traditional IT vendors did not pay much attention to the cloud. Today, most hardware, software, and services vendors have jumped on the bandwagon. They all have cloud computing strategies. Most of these vendors are clearly focused on a private cloud strategy. However, many are beginning to offer specialized public cloud services with a focus on security and manageability. These vendors are melding public, private, and hybrid cloud services into interesting and sometimes compelling offerings.

Change Three: Service Orientation will make cloud computing successful. Service orientation was hot two years ago, and the huge hype behind cloud computing led many pundits to proclaim that Service Oriented Architecture was dead and gone. In fact, the cloud vendors that are succeeding are those building true business services, without dependencies, that can migrate among public, private, and hybrid clouds; that is a competitive advantage.

Change Four: System vendors are banking on integration. Does a cloud really need hardware? Only two years ago the dialog surrounded the contention that clouds meant no hardware would be necessary. What a difference a few years can make. The emphasis, coming primarily from the major systems vendors, is that hardware indeed matters. These vendors are integrating cloud infrastructure services with their hardware.

Change Five: Cloud Security takes center stage. Yes, cloud security was a huge topic two years ago, but the dialog is beginning to change. There are three conversations that I am hearing. First, cloud security is a huge issue that is holding back widespread adoption. Second, there are well-designed software and hardware offerings that can make cloud computing safe. Third, public clouds are just as secure as an internal data center because these vendors have more security experts than any traditional data center does. In addition, a large number of venture-backed cloud security companies are entering the market with new and quite compelling value propositions.

Change Six: Cloud Service Level Management is a primary customer concern. Two years ago, no one our team interviewed for Cloud Computing for Dummies connected service level management with cloud computing. Now that customers are planning for widespread adoption of cloud computing, they are seriously examining the level of service they will require. IT managers are carefully reading the service level agreements from public cloud vendors and Software as a Service vendors. They are looking beyond the service level for a single service and beginning to think about the overall service level across their own data centers as well as the other cloud services they intend to use.

Change Seven: IT cares most about service automation. No, automation in the data center is not new; it has been an important consideration for years. However, what is new is that IT management is looking at the cloud as more than a way to avoid the costs of purchasing hardware. They see automation of both routine functions and business processes as the primary benefit of cloud computing. In the long run, IT management intends to focus on automation and reduce hardware to interchangeable commodities.

Change Eight: Cloud computing moves to the front office. Two years ago IT and business leaders saw cloud computing as a way to improve back office efficiency. This is beginning to change. With the flexibility of cloud computing, management is now looking at the potential to quickly innovate business processes that touch partners and customers.

What are the unanticipated consequences of the cloud – Part II

October 29, 2009

As I was pointing out yesterday, there are many unintended consequences from any emerging technology platform — the cloud will be no exception. So, here are my next three picks for unintended consequences from the evolution of cloud computing:

4. The cloud will disrupt traditional computing sales models. I think that Larry Ellison is right to rant about cloud computing. He is clearly aware that if cloud computing becomes the preferred way for customers to purchase software, the traditional model of paying maintenance on applications will change dramatically. Clearly, vendors can simply roll the maintenance stream into per-user, per-month pricing. However, as I pointed out in Part I, prices will inevitably go down as competition for customers expands. There will come a time when the vast sums of money collected to maintain software versions will seem a bit old fashioned. In fact, that will be one of the most important unintended consequences and will have a very disruptive effect on the economic models of computing. It has the potential to change the power dynamics of the entire hardware and software industries. The winners will be the customers and the smart vendors who figure out how to make money without direct maintenance revenue. Like every other unintended consequence, new models will emerge that will make some really clever vendors very successful. But don’t ask me what they are. It is just too early to know.

5. The market for managing cloud services will boom. While service management vendors do pretty well today managing data center based systems, the cloud environment will make these vendors kings of the hill. Think about it like this. You are a company that is moving to the cloud. You have seven different software as a service offerings from seven different vendors. You also have a small private cloud that you use to provision critical customer data. You also use a public cloud for some large-scale testing. In addition, any new software development is done on a public cloud and then moved into the private cloud when it is completed. Existing workloads such as ERP systems and legacy systems of record remain in the data center. All of these components put together are the enterprise computing environment. So, what is the service level of this composite environment? How do you ensure that you are compliant across these environments? Can you ensure security and performance standards? A new generation of products, and maybe a new generation of vendors, will rake in a lot of cash solving this one.
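
A quick back-of-the-envelope calculation shows why the composite service level question is so hard: if the pieces are chained together and fail independently, end-to-end availability is roughly the product of the parts. The availability numbers below are purely illustrative.

```typescript
// Illustrative availabilities for the components of a composite
// enterprise environment (hypothetical figures).
const availabilities = {
  saasCrm: 0.999,         // one of the SaaS offerings
  privateCloud: 0.9995,   // provisioning critical customer data
  publicCloudTest: 0.995, // large-scale testing
  dataCenterErp: 0.9999,  // legacy system of record
};

// If a transaction depends on every component, the composite
// availability is approximately the product of the parts.
const composite = Object.values(availabilities).reduce((a, b) => a * b, 1);
console.log(`Composite availability: ${(composite * 100).toFixed(2)}%`);
// -> about 99.34%, noticeably worse than any single component. Closing
// that gap is the opportunity for cloud service management vendors.
```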

6. What will processes look like in the cloud? Like data, processes will have to be decoupled from the applications of record that they are an integral part of. Now, I don’t expect that we will rip processes out of every system of record. In fact, static systems such as ERP, HR, and the like will retain tightly integrated processes. However, the dynamic processes that need to change as the business changes will have to be designed without these constraints. They will become trusted processes, sort of like business services that are codified but can be reconfigured when the business model changes. This will probably happen anyway with the emergence of Service Oriented Architectures. However, with the flexibility of the cloud environment, this trend will accelerate. The need for independent processes and process models may have the potential to create a brand new market.

I am happy to add more unintended consequences to my top six. Send me your comments and we can start a part III reflecting your ideas.

Ten things I learned while writing Cloud Computing for Dummies

August 14, 2009

I haven’t written a blog post in quite a while. Yes, I feel bad about that, but I think I have a good excuse. I have been hard at work (along with my colleagues Marcia Kaufman, Robin Bloor, and Fern Halper) on Cloud Computing for Dummies. I will admit that we underestimated the effort. We thought that since we had already written Service Oriented Architectures for Dummies (twice) and Service Management for Dummies, Cloud Computing for Dummies would be relatively easy. It wasn’t. Over the past six months we have learned a lot about the cloud and where it is headed. I thought that rather than try to rewrite the entire book right here, I would give you a sense of some of the important things that I have learned. I will hold myself to ten so that I don’t go overboard!

1. The cloud is both old and new at the same time. It is built on the knowledge and experience of timesharing, Internet services, Application Service Providers, hosting, and managed services. So, it is an evolution, not a revolution.

2. There are lots of shades of gray in cloud segmentation. Yes, there are three buckets that we put clouds into: infrastructure as a service, platform as a service, and software as a service. Now, that’s nice and simple. However, it isn’t that simple, because all of these areas are starting to blur into each other. And it is even more complicated because there is also business process as a service. This is not a distinct market unto itself; rather, it is an important component of the cloud in general.

3. Market leadership is in flux. Six months ago the marketplace for cloud was fairly easy to figure out. There were companies like Amazon and Google and an assortment of other pure-play companies. That landscape is shifting as we speak. The big guns like IBM, HP, EMC, VMware, Microsoft, and others are running in. They would like to control the cloud. It is indeed a market where big players will have a strategic advantage.

4. The cloud is an economic and business model. Business management wants the data center to be easily scalable and predictable and affordable. As it becomes clear that IT is the business, the industrialization of the data center follows. The economics of the cloud are complicated because so many factors are important: the cost of power; the cost of space; the existing resources — hardware, software, and personnel (and the status of utilization). Determining the most economical approach is harder than it might appear.

5. The private cloud is real. For a while there was a raging debate: is there such a thing as a private cloud? It has become clear to me that there is indeed a private cloud. A private cloud is the transformation of the data center into a modular, service oriented environment that enables users to safely procure infrastructure, platform, and software services in a self-service manner. This may not be a replacement for an entire data center; a private cloud might be a portion of the data center dedicated to certain business units or certain tasks.

6. The hybrid cloud is the future. The future of the cloud is a combination of private clouds, traditional data centers, hosting, and public clouds. Of course, there will be companies that use public cloud services for everything, but the majority of companies will have a combination of cloud services.

7. Managing the cloud is complicated. This is not just a problem for the vendors providing cloud services. Any company using cloud services needs to be able to monitor service levels across the services they use. This will only get more complicated over time.

8. Security is king in the cloud. Many of the customers we talked to are scared about the security implications of putting their valuable data into a public cloud. Is it safe? Will my data cross country borders? How strong is the vendor? What if it goes out of business? This issue is causing many customers either to consider only a private cloud or to hold back. The vendors who succeed in the cloud will have to have a strong brand that customers will trust. Security will always be a concern, but it will be addressed by smart vendors.

9. Interoperability between clouds is the next frontier. In these early days customers tend to buy one service at a time for a single purpose: Salesforce.com for CRM, some compute services from Amazon, and so on. However, over time, customers will want more interoperability across these platforms. They will want to be able to move their data and their code from one environment to another. There is some forward movement in this area, but it is early. There are few standards for the cloud and little agreement.

10. The cloud in a box. There is a lot of packaging going on out there, and it comes in two forms. Some companies are creating appliance-based environments for managing virtual images. Other vendors (especially the big ones like HP and IBM) are packaging their cloud offerings with their hardware for companies that want private clouds.

I have only scratched the surface of this emerging market. What makes it so interesting and so important is that it actually is the coalescing of computing. It incorporates hardware, management software, service orientation, security, software development, information management, the Internet, service management, interoperability, and probably a dozen other components that I haven’t mentioned. It is truly the way we will achieve the industrialization of software.

What’s a Smarter Planet (and what does that have to do with technology?)

November 20, 2008

So, who could argue that we need a smarter planet? I certainly couldn’t. I am at an IBM software analyst meeting, which I have been attending for many years. The focus, as you might imagine, is on the software strategy. But there was something this time that I think is worth talking about. Rather than providing us analysts with a laundry list of products and go-to-market strategies (yes, they did some of that too), the focus this year is on vertical solutions and markets. But more than that, there is an overarching theme that is about to become the major theme enveloping IBM over the coming years: Smarter Planet. This initiative is driven by Sam Palmisano, not just with his operational good sense but with his ability to provide vision for the company.
In his address to the Council on Foreign Relations in New York City on November 6, 2008, Palmisano proclaimed that as our world gets more interconnected, hotter, and challenged for growth, we need to leverage a new and smarter approach to innovation.
According to Palmisano, “This isn’t just a metaphor. I mean infusing intelligence into the way the world literally works – the systems and processes that enable physical goods to be developed, manufactured, bought and sold…services to be delivered…everything from people and money to oil, water, and electronics to move…and billions of people to work and live.”
Good thinking, but what does this mean through a technology lens? It is clear that we have an overabundance of technology. What we lack right now is the right way to leverage technology to truly focus on customer benefit, both from an agility perspective (being able to change quickly and without too much pain) and in the ability to support an increasingly connected world. It is interesting to think about looking at the world this way.

If you think about it, the world is, in fact, a system. To take the concept even further, the world can be viewed as a biological system. The human body itself is an interconnected set of sensors and actions that trigger other actions. The body interacts with other humans and with the physical world as well as the virtual world. We take actions based on the information we are given or intuit from our experiences.
IBM is trying to tap into one of the most important transitions in our world today. And they are not shy about tying these transformations to their products and services (it is a commercial world, after all).
Here’s a quick view of this idea of the Smarter Planet. It starts with the idea that everything is an asset that takes inputs, processes them, and produces outcomes. Therefore, we can look at the Smarter Planet from five different perspectives:
•    Innovation can transform companies, countries, and governments to lower costs and increase revenue
•    Intelligence that provides an ability to learn from the vast amounts of information in the world (I call this anticipation management). In essence, this means managing information, predicting outcomes, leveraging information across partners, suppliers, and customers
•    Optimizing, managing, and changing based on the customer experience. Organizations, no matter how big or small, are looking for ways to transform themselves so they are ready for whatever happens. Companies that focus on this type of change are better able to weather very tough and complicated times.
•    Greening of business. You can’t talk about the planet without thinking about the impact of green on everything we do. This includes everything from saving cash by better usage of energy to protecting the climate.
•    Leveraging smart people. I think that people make or break this noble goal. Innovative approaches to doing things smarter and more responsibly typically fail if people don’t work together as effective teams. Politics can kill innovation more quickly than anything else.

Now take this concept out of the general view and apply it to specific industries, their problems, and their opportunities. That is precisely what makes the idea of the Smarter Planet intriguing. For example, manufacturing is being transformed as we speak: sensors and actuators produce information that helps smart companies better control the manufacturing process in terms of innovation, efficiency, and energy conservation. In retail, companies are leveraging new processes and technology to leapfrog the competition. If a retailer can optimize the way it changes inventory based on an early understanding of customers’ changing buying habits, it can become a leader.

I think it is important that IBM is talking about this idea now. The idea of a Smarter Planet is tailor-made for a time when the natural inclination is to hide until things get better. There is no question that we are in a very challenging time. It isn’t the first time that we have found ourselves in this position, and it certainly won’t be the last. But in my experience, the companies that take action to innovate, change, and learn while everyone else is hiding under the bed will win. When the world comes back, these companies will be way ahead and everyone else will be playing catch-up.

Taking the Pulse of The New Tivoli

May 20, 2008

It is ironic that I was at the first Tivoli user conference, called Planet Tivoli, back in the early 1990s. Now I am sitting at Tivoli’s first full-blown user conference, called Pulse. Pulse is very much like Tivoli itself: it is a combination of the Netcool, Maximo, and Tivoli user conferences. Over the past several years, IBM has had the challenge of taking its portfolio of individual products, rationalizing them, and creating a management platform. One of the most fortunate events that helped Tivoli is the growing importance of service management: the ability to manage a highly distributed computing environment in a consistent and predictable manner. All of Tivoli’s offerings can be defined from this perspective. IBM’s Al Zollar, who runs the Tivoli organization, said in his opening remarks that Tivoli’s common goal across its portfolio is to assure the effective delivery of services.

One of the interesting aspects of IBM’s management strategy is that the company intends to apply the idea of service management beyond IT operations. “Everything needs to be managed,” says Steve Mills, the senior vice president and GM of the software business. He points to many industries that are increasingly embedding intelligence in everything from trucks to assembly lines. Therefore, everything is becoming a manageable system. Companies are increasingly using RFID tags to track products and equipment, for example. As everything becomes a “virtual system”, everything becomes a service to be managed. What an interesting opportunity, and it makes clear why IBM would buy a company like Maximo, which manages physical assets.

So it is becoming clear that Tivoli is reinventing itself by focusing on service management from the broader corporate perspective. At the foundational level, Tivoli is looking at what foundational services are required to make this a viable strategy. I liked the fact that Zollar focused on Identity Management as one of the foundational services. Without managing the identity of the user or the device in a highly distributed environment, service management might work, but it couldn’t be trusted.

Another major focal point for IBM’s emerging service management strategy is process automation. This isn’t a surprise, since process is the foundation of traditional operations management. However, process automation has a broader persona as well, one that transcends operations management. As we move to a more service oriented architecture, service management takes on a broader and more strategic role in the organization. You aren’t just managing the physical servers running applications; you are managing an environment that requires the integration of business services, middleware, transactions, and a variety of physical assets. Some of these pieces will be located at the client site, while others might live in the cloud, and yet others will live in a partner’s environment. These sets of virtual services have to be managed as though they were a physical system, responsible for a meaningful process flow that complies with corporate and IT governance rules. And all of this has to be done in a way that doesn’t require so many people that it becomes economically unfeasible.

From my discussions at Pulse, I came away with the understanding that this is, in fact, IBM’s vision for service management. What is impressive is that IBM has begun to create a set of foundational services that are becoming the underpinnings of the Tivoli offerings. This metadata-based framework was designed from some innovative (and very early) technology that came to IBM from the Candle acquisition. In fact, I had looked at this integration technology many years ago and always thought it was one of Candle’s crown jewels. I had wondered what happened to it; now I know.

IBM’s challenge will be to capitalize on this rationalization of its management assets. IBM has managed service offerings. It needs to be able to create an ecosystem based on those offerings so that it can compete with the emerging breed of cloud and service providers like Amazon.com and Google. It is becoming clear to me that customers and software vendors alike are looking to the emerging utility infrastructure providers. I think that with the right type of packaging, IBM could become a major player.

So, my take away from my first day at Pulse is this:

  • Tivoli is working to create a set of foundational metadata-level services that link its various managed service offerings.
  • Because of these foundational services, Tivoli can now package its offerings in a much more effective way, which should make them more competitive.
  • Tivoli’s goal is to leverage its operational management expertise in software to move up the food chain and manage both the IT and the business process infrastructures.
  • Cloud computing is very important to IBM. It is still early, but the investment is intense, and cloud is being positioned as the next generation of virtualization, SOA, and utility computing.
  • Green IT and energy efficiency are key drivers of Tivoli’s emerging services and a growth engine.

One of the primary themes that I heard is the industrialization of computing as the foundation for IBM’s management services. Indeed, I have often said that we are at the beginning of a new era in which computing moves from being an art based on experimentation and hope to being an industrialized discipline. The next generation, focused on software and infrastructure as a service, is becoming a reality, and the last mile will be the management of all of those resources and more. This management focus is an imperative as we move toward the industrialization of software and computing.

Can you measure and monitor your user experience?

November 6, 2007

Since the web has increasingly become the platform for interacting with customers, partners, and the like, interest in managing the online experience has skyrocketed. The term “user experience” is confusing. I have traditionally thought about it in terms of how the user interacts with software. But I am finding that the more interesting definition is around the idea of actually measuring and monitoring the performance of the web environment from the user or customer experience perspective.

I met with an emerging vendor called SYMPHONIQ, founded in 2003 by CEO Hon Wong. Wong is a veteran of the management space, having been one of the founders of NetIQ in 1995 and of EcoSystems (purchased by Compuware in 1994). Therefore, although SYMPHONIQ is a relative newcomer, its management team has lots of experience under its belt.

The company is just introducing its second product, TrueView Express, which is intended to provide performance measurement and monitoring. The product measures browser response for any HTTP application; the company therefore says that it can track performance from the browser through to the database. The product monitors actual user transactions in real time, isolates performance problems, and includes service level reporting of transaction performance.

Because TrueView Express instruments pages with HTML, it does not require downloading an agent. In addition, performance data is collected behind the firewall.
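
As a rough sketch of what agentless, browser-side measurement can look like (this is not SYMPHONIQ’s actual implementation, and the collector endpoint is hypothetical), a small script injected into the page can time the load and report it over HTTP to a collector behind the firewall.

```typescript
// Agentless timing sketch: runs in the page itself, no installed agent.
// The collector URL is a hypothetical internal endpoint.
const start = Date.now(); // script executes early in the page load

window.addEventListener("load", () => {
  const elapsedMs = Date.now() - start;
  // Report the measurement to a collector behind the firewall.
  fetch("https://collector.internal.example/beacon", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ page: location.pathname, elapsedMs }),
  });
});
```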

In a way, this product is a sort of Trojan horse (I love Trojan horses since they provide quick value to customers) for the company’s higher-end products. For example, its flagship product, TrueView, provides end-to-end diagnostics.

Clearly, the company has some interesting IP and some traction in the market. It made a smart move by partnering with F5 Networks. F5 Networks provides a platform for Application Delivery Networking. This relationship should help the company gain traction with customers that might otherwise look towards the big players — CA (Wily), EMC (NLayers), Compuware, and HP — and a host of others.

The company seems to be making some progress in the market, with two products under its belt and a couple of dozen customers such as Lockheed, Starbucks Coffee, and AMD. The reality, however, is that the company is in a space dominated by big companies, so it will have to partner with some big players that will be attracted by its ability to correlate and aggregate actual transaction data. Its reliance on HTML and HTTP means that it has a lighter footprint than some of its competitors. This is definitely a company to watch.