Every year I attend IBM's software analyst meeting. It is an opportunity to get a snapshot of what the leadership team is thinking and saying. Since I have had the opportunity to attend many of these events, it is always instructive to watch the evolution of IBM's software business over the years.
So, what did I take away from this year's conference? In many ways, it was not that much different from what I experienced last year. And I think that is good. When you are a company the size of IBM you can't lurch from one strategy to the next and expect to survive. One of the advantages IBM has in the market is that it has a well-developed roadmap that it is in the process of executing on. It is not easy to execute when you have as many software components as IBM does in its portfolio.
While it isn’t possible to discuss all that I learned in my various discussions with IBM executives, I’d like to focus on IBM’s solutions strategy and its impact on the software portfolio. From my perspective, IBM has made impressive strides in enforcing a common set of services that underlie its software portfolio. It has been a complicated process that has taken decades and is still a work in progress. The result required that all of the business units within software are increasingly working together to provide underlying services to each other. For example, Tivoli provides management services to Rational and Information Management provides data management services to Tivoli. WebSphere provides middleware and service orientation to all of the various business units. Because of this approach, IBM is better able to move to a solutions focus.
It’s about the solutions.
In the late 1990s IBM got out of the applications business in order to focus on middleware, data management, and systems management. This proved to be a successful strategy for the next decade. IBM made a huge amount of money selling WebSphere, DB2, and Tivoli offerings for SAP and Oracle platforms. In addition, Global Services created a profitable business implementing these packaged applications for enterprises. But the world has begun to change. SAP and Oracle have both encroached on IBM’s software business. Some have criticized IBM for not being in the packaged software business. While IBM is not going into the packaged software business, it is investing a vast amount of money, development effort, and marketing into the “solutions” business.
How is the solutions business different from a packaged application? In some ways they are actually quite similar. Both provide a mechanism for codifying best practices into software and both are intended to save customers time when they need to solve a business problem. IBM took itself out of the packaged software business just as the market was taking off. Companies like SAP, Oracle, Siebel, PeopleSoft and hundreds of others were flooding the market with tightly integrated packages. In this period, IBM decided that it would be more lucrative to partner with these companies that lacked independent middleware and enabling technologies. IBM decided that it would be better off enabling these packaged software companies than competing in the packaged software market.
This turned out to be the right decision for IBM at the time. The packaged software it had developed in the 80s was actually holding it back. Therefore, without the burden of trying to fix broken software, it was able to focus all of its energy and financial strength on its core enabling software business. But as companies like Oracle and SAP cornered the packaged software market and began to expand to enabling software, IBM began to evolve its strategy. IBM’s strategy is a hybrid of the traditional packaged software business and a solutions business based on best practices industry frameworks.
So, there are two components in IBM’s solutions strategy – vertical packaged solutions that can be applied across industries and solution frameworks that are focused on specific vertical markets.
Horizontal Packages. The horizontal solutions that IBM is offering have been based primarily on acquisitions it has made over the past few years. While at first glance they look like any other packaged software, there is a method to what IBM has purchased. Without exception, these acquisitions are focused on providing packaged capabilities that are not specific to any one market but are intended to be used in any vertical market. In essence, the packaged solutions that IBM has purchased resemble middleware more than end-to-end solutions. For example, Sterling Commerce, which IBM purchased in August 2010, is a cross-channel commerce platform; Coremetrics, purchased in June, provides web analytics; and Unica provides marketing automation of core business processes. While each of these is indeed packaged, they each represent a component of a solution that can be applied across industries.
Vertical Packages. IBM has been working on its vertical market packaging for more than a decade through its Business Services Group (BSG). IBM has taken best practices from its various industry engagements and codified these patterns into software components. These components have been unified into solution frameworks for industries such as retail, banking, and insurance. While this has been an active approach within Global Services for many years, there has been a major restructuring in IBM's software organization this past year. In January, the software group split into two groups: one focused on middleware and another focused on software solutions. All of the newly acquired horizontal packages provide the underpinning for the vertical framework-based software solutions.
Leading with the solution. IBM software has changed dramatically over the past several decades. The solutions focus does not stop with the changes within the software business units itself; it extends to hardware as well. Increasingly, customers want to be able to buy their solutions as a package without having to buy the piece parts. IBM’s solution focus that encompasses solutions, middleware, appliances, and hardware is the strategy that IBM will take into the coming decade.
You know that a market is about to transition from an early fantasy market when IT architects begin talking about traditional IT requirements. Why do I bring this up as an issue? I had a fascinating conversation yesterday with a leading architect in charge of the cloud strategy for an important company that is typically on the bleeding edge of technology. Naturally, I am not allowed to name the company or the person. But let me just say that individuals and companies like this are the first to grapple with issues such as the need for a registry for web services or the complexity of creating business services that are both reusable and include business best practices. They are the first companies to try out artificial intelligence to see if it could automate complex tasks that require complex reasoning.
These innovators tend to get blank stares from their cohorts in other traditional IT departments who are grappling with mundane issues such as keeping systems running efficiently. Leading edge companies have the luxury to push the bounds of what is possible to do. There is a tremendous amount to be learned from their experiments with technology. In fact, there is often more to be learned from their failures than from their successes because they are pushing the boundary about what is possible with current technology.
So, what did I take away from my conversation? From my colleague's view, the cloud today is about "how many virtual machines you need, how big they are, and linking those VMs to storage." Not a very compelling picture, but it is his perception of the reality of the cloud today. His view of the future requirements is quite intriguing.
I took away six key issues that this advanced planner would like to see in the evolution of cloud computing:
One. Automation of placement of assets is critical. Where you actually put capability matters. For example, there are certain workloads that should never leave the physical data center because of regulatory requirements. If an organization is dealing with huge amounts of data, it would not be efficient to scatter elements of that data across different cloud environments. What about performance issues? What if a task needs to be completed in 10 seconds, or in 5 milliseconds? There are many decisions that need to be made based on corporate requirements. Should the placement of workloads be decided manually, case by case? The answer is no. There should be an automated process, based on business rules, that determines the actual placement of cloud services.
Two. Avoiding concentration of risk. How do you actually place core assets into a hypervisor? If, for example, you have a highly valuable set of services that are critical to decision makers you might want to ensure that they are run within different hypervisors based on automated management processes and rules.
Three. Quality of Service needs a control fabric. If you are a customer of hybrid cloud computing services you might need access to the code that tells you what tasks the tool is actually doing. What does that tool actually touch in the cloud environment? What do the error messages mean and what is the implication? Today many of the cloud services are black boxes; there is no way for the customer to really understand what is happening behind the scenes. If companies are deploying truly hybrid environments that support a mixed workload, this type of access to the workings of the various tools that are monitoring and managing quality of service will be critical. From a quality of service perspective, some applications will require dedicated bandwidth to meet requirements. Other applications will not need any special treatment.
Four. Cloud Service Providers building shared services need an architectural plan to control them as a unit of work. These services will be shared across departments as well as across customers. How do you connect these services? While it might seem simple at the 50,000-foot level, it is actually quite complex because we are talking about linking a set of services together to build a coherent platform. Therefore, as with building any system there is a requirement to model the “system of services”, then deploy that model, and finally to reconcile and tune the results.
Five. Standard APIs protect customers. Should APIs for all cloud services be published and accessible? If companies are to have the freedom to move easily and efficiently between and among cloud services then APIs need to be well understood. For example, a company may be using a vendor’s cloud service and discover a tool that addresses a specific problem. What if that vendor doesn’t support that tool? In essence, the customer is locked out from using this tool. This becomes a problem immediately for innovators. However, it is also an issue for traditional companies that begin to work with cloud computing services and over time realize that they need more service management and more oversight.
Six. Managing containers may be key to the service management of the cloud. A well-designed cloud service has to be service oriented. It needs to be placed in a container without dependencies since customers will use services in different ways. Therefore, each service needs to have a set of parameter driven configurators so that the rules of usage and management are clear. What version of what cloud service should be used under what circumstance? What if the service is designed to execute backup? Can that backup happen across the globe or should it be done in proximity to those data assets? These management issues will become the most important issues for cloud providers in the future.
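The first of the six issues, rule-driven placement, is concrete enough to sketch. Below is a minimal, hypothetical illustration of an automated placement decision driven by business rules rather than case-by-case judgment; the rule predicates, workload attributes, and placement names are all invented for the example and are not any vendor's actual API.

```python
# Hypothetical sketch of rule-driven workload placement (issue One above).
# The first matching business rule decides where a workload may run.

WORKLOAD_RULES = [
    # (predicate over workload attributes, required placement)
    (lambda w: w["regulated"], "on_premises"),          # regulatory data never leaves the data center
    (lambda w: w["max_latency_ms"] < 10, "edge_datacenter"),  # tight deadlines need proximity
    (lambda w: w["data_size_gb"] > 1000, "colocated_with_data"),  # keep compute near huge data sets
]

def place_workload(workload, default="public_cloud"):
    """Return the placement dictated by the first business rule that matches."""
    for predicate, placement in WORKLOAD_RULES:
        if predicate(workload):
            return placement
    return default

payroll = {"regulated": True, "max_latency_ms": 500, "data_size_gb": 2}
print(place_workload(payroll))  # on_premises: regulated data stays in the data center
```

The point of the sketch is that placement policy lives in an editable rule table, so corporate requirements can change without rewriting the placement engine itself.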
The best thing about talking to people like this architect is that it begins to make you think about issues that aren’t part of today’s cloud discussions. These are difficult issues to solve. However, many of these issues have been addressed for decades in other iterations of technology architectures. Yes, the cloud is a different delivery and deployment model for computing but it will evolve as many other architectures do. The idea of putting quality of service, service management, configuration and policy rules at the forefront will help to transform cloud computing into a mature and effective platform.
It has been only a few weeks since Anne Thomas Manes wrote her blog stating that SOA is dead. Since then there has been a lot of chatter about whether this is indeed true and if SOA vendors should find a new line of work. So, I thought I would add my two cents to the conversation.
Let me start by saying, I told you so. Last year I wrote in a blog that we would know when SOA had become mainstream when the enormous hype cycle ended. Alas, that has happened. What does this mean? Let's keep this in perspective. Every technology that comes along and generates a lot of hype follows this same pattern. Why? I'll make it simple. The hype machine is powerful. It goes like this. There is a new technology trend with thousands of new companies on the scene. All of them vie for dominance and a strong position on someone's magic universe. They are able to gain attention in the market. Then the market takes on its own momentum. The technology moves from being a set of products focused on solving a business problem to the solution to any problem. We saw this with object orientation, open systems, and Enterprise Applications Integration, to name but a few. Smart entrepreneurs, sensing opportunity, stormed onto the market with huge promises of salvation for IT. Now, if I wanted to write a book I think I could come up with 100 different scenarios to prove my point, but I will spare you the pain since the outcome is always the same.
So, what happens when each of these technology approaches moves from hype heaven to the dead zone? In some cases, the technology actually goes away because it simply doesn’t work – despite all of the hype. But in many situations an interesting thing happens – the technology evolves into something mainstream. It gets incorporated and sometimes buried into emerging products and implementation plans of companies. It becomes mainstream. I’ll just give you a few examples to support this premise:
• Remember open systems? In the early 1990s it was the biggest trend around. There were thousands of products that were released onto the market. There were hundreds of companies that renamed themselves open something or other. So, what happened? Open became mainstream and the idea of designing proprietary technologies without open interfaces and standards support became unpopular. No one creates a magic quadrant based on open systems but I don’t know many companies who can ignore standards and survive.
• Object orientation was as big a rage as open systems, maybe even bigger. There were conferences, publications, magic quadrants and lots and lots of products ranging from operating systems to databases to development environments. It was a hot, hot market. What happened? The idea of creating modular components that could be reused turned out to be a great idea. But the original purity and concepts needed to evolve into something more pragmatic, and in fact they did. The concepts of object orientation changed the nature of how developers create software. It moved from the idea of creating small granular pieces of code that could be used in lots of different ways to larger grain pieces of code that could create composites. Object orientation is the foundation that most modern software sits on top of.
• Enterprise Applications Integration probably had even more companies than open systems and object orientation combined. The idea that a company could buy technology that would allow their packaged software elements to talk to each other and pass data was revolutionary at the time. This trend was focused on providing packaged solutions to a nasty problem. If vendors could find a way to provide solutions that allowed customers to avoid resorting to massive coding, it would result in a big market opportunity. Vendors in this market promised to provide solutions that allowed the general ledger module to send data to and from the sales force application. What happened? There were hundreds of vendors selling into this market. However, it was a stopping-off point. There are newer products that do a better job of integration based on a service oriented approach to integration and data management. In addition, this market evolved into technologies such as Enterprise Service Buses that did a better job of abstraction. There are plenty of Enterprise Application Integration technologies out there, but they have emerged as part of a loosely coupled environment where components are designed to send messages among them.
Now, I could go on for a long time with plenty more examples. But I think I have made my point. Technology innovation just works this way. The products that took the market by storm one year become stale the next. But the lessons learned and the innovation does not die. These lessons are used by a new generation of smart technologists to support the new generation of products.
So, Virginia, Service Oriented Architectures will do the same thing. But it is also a little different. It is not the same as a lot of other technology shiny toys because so much of SOA is actually about business practices, not technology. Sure, when SOA started out a few years ago it was about specific products, hundreds of them. These products were eagerly adopted by developers who used them to create service interfaces and business services.
Today, business leaders are taking charge of their SOA initiatives. The innovative business leaders are using business focused templates to move more quickly. They are creating business services – code plus process. They are creating business services such as Order-to-Cash services that in the long run will be mandated as the way everyone across the company will implement a process according to corporate practices. Some of these companies would like to rid themselves of huge, complicated and expensive packaged software and replace them with these business services.
Today these products are becoming part of the fabric of the companies that use them. They are enablers of consistent and vetted business processes. They are the foundation of establishing good governance so that everyone in the organization uses a consistent set of rules, data, and processes. This is not glamorous. It is hard work that starts from a business planning cycle. It is the type of hard work where teams of technologists and business leaders determine the best way to satisfy the company's need to implement order-to-cash processes across business units.
And yes, Virginia, SOA is not stagnant. It is evolving because it offers business value to companies. There are new initiatives and new architectural principles that have value within this service orientation approach. There are architectural styles such as REST that help make interaction within a business services approach simpler and more direct. There are emerging standards that enable companies using SOA to exchange information without massive coding. There are information services and security services evolving for the same reason. There are new approaches to make SOA environments more manageable based on the emerging idea that, in fact, everything we do in the world is a service of some type that needs to work with other services. The physical and virtual worlds are starting to blend, which makes service orientation even more important.
Maybe ten years from now, we won’t use the word Service Oriented Architecture because it won’t be seen as a market segment or a quadrant – it will be just the way things are done. So, stop worrying about whether SOA is alive, dead, or comatose – I have. So, relax Virginia, and get back to work!
It seems like just the other day that our team was busily finishing the first edition of SOA for Dummies. But it has been two years since that book came out. A lot has changed in that time. When we first wrote the book, we heard from lots of people that they really didn't know what SOA was and were happy to have a book that would explain it to them in easy to understand language.
Because so much has changed, we were asked to write a second edition of SOA for Dummies, which is coming out on December 19th. What has changed in those two years? Well, first of all, there have been a lot more implementations of SOA. In fact, in the first edition, we were happy to have gotten 7 case studies. Many of the customers that we talked to (both those featured in the book and those who took the time to speak with us without attribution) were just getting started. They were forming centers of excellence. They were beginning to form partnerships between the business and technical sides of their companies. They were implementing a service bus or were building their first sets of services.
In this second edition, we were fortunate to find 24 companies across 9 different verticals willing and able to talk on the record about their experiences implementing SOA. What did we learn? While there is a lot I could say, I’d like to net it out to 5 things we learned:
1. Successful companies have spent the time starting with both the key business services and the business process before even thinking about implementation.
2. Companies have learned a lot since their initial pilots. They are now focused on how they can increase revenue for their companies through innovation using a service oriented approach.
3. Many companies have a strategic roadmap that they are focused on and therefore are implementing a plan in an incremental fashion.
4. A few companies are creating business services extracted from aging applications. Once this is done, they are mandating the use of these services across the company.
5. Companies that have been working on SOA for the last few years have learned to create modular business services that can have multiple uses. This was much harder than it appeared at first.
There are many other best practices and lessons learned in the case studies. It is interesting to note that just as many companies said yes but were ultimately unable to participate because management didn't want competitors to know what they were doing.
The bottom line is that SOA is beginning to mature. Companies are not just focused on backbone services such as service buses but on making their SOA services reach out to consumers and their business partners.
We have also added a bunch of new chapters to the book. For example, we have new chapters on SOA service management, SOA software development, software quality, component applications, and collaboration within the business process lifecycle. Of course, we have updated all existing chapters based on the changes we have seen over the last few years.
We are very excited that we had the opportunity to update the book and look forward to continuing the dialog.
I have been researching and thinking about the problem of the packaged application for many years now. Over the years I have had conversations with many CIOs who are planning to implement large complex ERP systems as part of their initiative to streamline their operations. There is an assumption that implementing one of these systems will simplify corporate IT. There is also the assumption that it is possible to implement an ERP system as is – in other words, without complex customization. The sad reality is that this just doesn’t happen in the real world.
This brings me to a conversation I had about a month ago with a CIO. He was in charge of the IT organization in a relatively large corporation (I am not at liberty to mention the company name). The company had decided to replace its assortment of corporate business applications with a comprehensive ERP system. The idea was correct: the company needed a system that would implement business process and best practices to support the business in a uniform and efficient manner. The problem, to my mind, was twofold. First, the cost. To purchase and then implement this software cost the company $500 million. Obviously, a considerable part of this expense was for professional services. And maybe that is the point. The idea that a company can purchase a packaged ERP system that is really packaged software is a misnomer. In reality, packaged software is not really packaged. It is a set of tools, a set of templates and processes that are linked together based on marketing and promise. The CIO I was speaking with provided some insight into the complexity of this implementation. It required a lot more customization than anyone had anticipated. The promise of out-of-the-box implementation was a myth. Once the customization was applied to this package, the concept of a packaged environment was gone. Therefore, it should not have come as a shock that the next time the base platform of processes and tools had to be upgraded, it cost the company an additional $50 million.
So, what am I saying here? Should we throw the bums out? Should we declare that the concept of packaged software is dead and flawed? Probably. Now, let’s get real. Obviously, companies cannot and should not go back to paper based processes. However, I think that we need to get real about what it means to package software.
Here is what I propose. Let’s not pretend that packaged software is packaged. The reality is that good software that is designed to meet a specific corporate goal should have the following five components:
1. Business best practices should be component based. Packaged software should be a set of business services that implement well-tested business processes that are either industry or practice based. For example, accounting practices are fairly well understood and well codified. Accounting best practices may be different between industries but it is straightforward to create modular components that are populated with processes. It should not be constructed as a set of complex intertwined code. It should be independent modules that can be linked to each other and that can exchange data.
2. Create standards based links. Well defined interfaces that enable the customer to link these components and other components without complex coding, including easily usable interfaces to all data files and databases.
3. Separate business rules from code. Business rules should be contained in a separate set of components or a rules engine so that they can be updated easily. These rules should have a visual interface so that management can easily review them and map them to corporate governance.
4. Implementations should be configurable. It should be straightforward for an organization to change the details of a process or a service without recoding.
5. Modularity is the key. Company specific rules, configurations, and services should be modular and separate from the connective tissue that links the components of these environments. In this way, when a system foundation needs to be upgraded, it can be done without impacting the value that is the lifeblood of a company.
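Component 3, separating business rules from code, is the easiest of the five to make concrete. Here is a minimal, hypothetical sketch in which policy lives in a declarative table that management could review and change without touching application logic; the rule names, fields, and discount policy are all invented for illustration.

```python
# Illustrative sketch of component 3: business rules externalized from code.
# Changing policy means editing the rule table, not recoding the application.

DISCOUNT_RULES = [
    {"name": "volume_discount", "min_order_total": 10000, "discount_pct": 5},
    {"name": "preferred_customer", "min_order_total": 0, "discount_pct": 2},
]

def applicable_discount(order_total, customer_is_preferred):
    """Evaluate the externalized rules and return the total discount percent."""
    pct = 0
    for rule in DISCOUNT_RULES:
        if rule["name"] == "volume_discount" and order_total >= rule["min_order_total"]:
            pct += rule["discount_pct"]
        if rule["name"] == "preferred_customer" and customer_is_preferred:
            pct += rule["discount_pct"]
    return pct

print(applicable_discount(12000, True))  # 7: both rules apply
```

A real rules engine would add a visual editor and audit trail on top of exactly this separation, which is what makes the rules reviewable against corporate governance.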
The bottom line: the packaged software market is at a transition point
The state of the packaged software market is complicated. Companies across the globe have spent trillions of dollars trying to automate business practices. Some implementations have been successful. But even those companies that have had the good fortune of implementing packaged software to streamline their business have done so at a steep financial and organizational price. I predict that we are entering a new stage of evolution of software. Many of the CIOs I have spoken with lately are beginning to rethink the conventional wisdom about packaged applications. They are beginning to take the concept of business services that is the foundation of a service oriented architecture and apply it to the packaging of codified best practices.
One CIO I spoke with has started methodically to peel away key business services from packaged applications. This might be an order to cash process that is rewritten hundreds of times across hundreds of applications. Now, the company has created one business service called order-to-cash. This order-to-cash service will be used anywhere in the company where this capability is needed. This very patient CIO plans to replace duplicated services locked in inflexible packaged applications with well-constructed and very independent business services. And some day, there will be no more complicated, inflexible, and repetitive packaged applications. I think this might lead to a lot more innovation at a fraction of the cost.
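The shape of the single shared service this CIO describes can be sketched in a few lines. This is a hypothetical illustration, not the company's actual design: one order-to-cash implementation behind a stable interface, reused wherever the capability is needed instead of being rewritten in every application. All class and method names are invented.

```python
# A minimal sketch of one company-wide order-to-cash business service:
# a single implementation behind a stable interface, in place of the same
# logic duplicated across hundreds of packaged applications.

from dataclasses import dataclass

@dataclass
class Order:
    order_id: str
    amount: float

class OrderToCashService:
    """The one shared implementation every application calls."""

    def __init__(self):
        self.invoiced = {}  # invoice_id -> amount owed

    def invoice(self, order: Order) -> str:
        invoice_id = f"INV-{order.order_id}"
        self.invoiced[invoice_id] = order.amount
        return invoice_id

    def record_payment(self, invoice_id: str, amount: float) -> bool:
        # A payment settles the invoice only if it covers the full amount owed.
        return self.invoiced.get(invoice_id, float("inf")) <= amount

svc = OrderToCashService()
inv = svc.invoice(Order("1001", 250.0))
print(inv, svc.record_payment(inv, 250.0))  # INV-1001 True
```

The payoff the CIO is after is that when the invoicing rules change, they change in this one service rather than in every application that happens to embed a copy.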
I was busily working away when I got a call from a hedge fund manager. Now, I don't really know that many hedge fund managers so I thought this could be a good education. This fund manager — no, I don't remember his name — had a request. Could I spend an hour or at least 5 minutes (yes, that is exactly what he said) with him explaining the key trends in SOA. Now, while I appreciated his thirst for knowledge, I have to admit I was perplexed. What exactly was he up to? I won't keep you in suspense any more. Simply put, he wanted to know if SOA was passé. He was reading various blogs and articles that implied that SOA was fine while it lasted but it was basically over. Companies gave it the old college try but found it didn't work and moved on.
Here is what I told my new friend. Clearly there are people who are proclaiming SOA dead. It is easier to get headlines that way. After all, who wants to write an article and say, it’s moving ahead slowly but surely — not a good headline. My perception is that from the many customers I have spoken with and continue to speak with, SOA is a continuing process. It is not a quick fix. It isn’t like building a single application. It is, as I have said many times, a business strategy and a different approach to building business services. It is not necessarily so easy because the IT group can’t go off and do it alone. The old style programmer who liked to sit in a quiet place and write code doesn’t do well with SOA. The new style developer is a business collaborator. That individual must partner with colleagues in the business units to determine what services and what business processes can be abstracted so that they can be used and repurposed to support business change. That is very different than other technology trends I have witnessed over my years in the business.
Now, not to belabor the point — SOA is not a fad. It is a business approach to software that is still in its first stage of development. There is a lot more work to be done in terms of products, services, and techniques. So, Mr. Hedge Fund Manager, SOA ain't dead yet!
One of the hardest things for organizations to do is to retire old applications. Unlike hardware that tends to be replaced on a regular cycle, old software sticks around way too long. It definitely overstays its welcome. I remember working at John Hancock decades ago and watching departments struggle to replace aging systems. While they were ready and willing to make the change, they often didn't know precisely how these old systems worked. The developers never documented what they wrote, and those people had retired years earlier.
Now you would think that the problem had gone away. In reality, the problem got worse with the advent of client/server computing, where there was less structure applied to the development process. I came across a very old article I wrote back in 1996 that talked about a lot of those issues (please ignore the picture). Just when you thought it couldn't get any worse, web based development came along. Instead of having a few hundred developers, the web brought the advent of thousands of developers all providing changes and updates to applications. We are now at a crossroads that is quite unique.
While we still have many aging applications that cannot be easily updated, we also have the need to move to Web 2.0 to create Rich Internet applications (RIA). Web 2.0 offers a way to dramatically transform the user experience. Organizations are looking to this approach to development to make access to knowledge and information much more immediate and intuitive than ever before. But the transition isn’t easy.
I got to thinking a lot about the transition from client/server applications and old web based applications when I met with Nexaweb a few weeks ago. The company has been around since 2000 and specializes in the Web 2.0 space. While there has been a lot of hype around Web 2.0, it actually is a very pragmatic technology infrastructure. A lot of customers assume that you can approach Web 2.0 as though it were a simple web application; the reality is quite different. In fact, good Web 2.0 applications have to be well architected. What I liked about what Nexaweb is doing is its approach to application modernization with a Web 2.0 spin. In essence, Nexaweb is focused on modernization of aging client/server applications by providing tooling that documents the existing code. It is designed to identify bad code and provides a tool to generate a model driven architecture. Like any good consulting organization, Nexaweb has leveraged best practices used to help its consulting clients move old applications to Web 2.0. Nexaweb is selling a set of productivity tools that can generate a model driven architecture. It is intended to generate code as part of this process. The company claims that it can reduce the cost of transforming old code by as much as 70 percent.
The new product, Nexaweb Enterprise Web Suite, includes a UML modeling tool and a reporting tool that identifies repetitive processes and code that is no longer used. Clearly, Nexaweb isn't the only company taking advantage of modeling tools and an architectural approach. But the fact that the company is focused on helping companies transform their aging client/server applications into a modular, service oriented approach is a step forward. It is one of a set of companies focused not just on updating applications but on transforming them into Web 2.0. What stands out is that Nexaweb seems to be combining application transformation with the creation of business services (can you say Service Oriented Architectures). However, I must add that IBM has been on this track for quite a few years. Through its industry models, IBM has been helping companies transform their aging applications into industry specific business services. In addition, Microsoft's Silverlight and Adobe's AIR are adding a new level of sophistication to the momentum. WaveMaker, which I discussed in an earlier entry, is making a contribution as well.
The trend is clear and it is good for customers. We are finally seeing software companies providing a path to moving code into the new world that is based on reusable, modular services that are architected. The next stage in the movement towards a service oriented architecture is applying this approach to the new generation of Web 2.0. Let me add a disclaimer — this isn’t magic. There is hard work here. None of these approaches or tools are automatic. They give customers a head start but there is hard work to be done. The alternative is to hold your breath and hope that things don’t break too quickly. There are so many promises of easy solutions to hard problems. There are solutions and tools that take the drudgery out of leaving legacy applications behind. But there is worthwhile hard work that really has to be done.