Archive for the ‘middleware’ Category

Why is IBM in the horizontal solutions business?

December 15, 2010

Every year I attend IBM's software analyst meeting. It is an opportunity to gain a snapshot of what the leadership team is thinking and saying. Since I have attended many of these events, it is always instructive to watch how IBM's software business has evolved over the years.

So, what did I take away from this year's conference? In many ways, it was not that much different from what I experienced last year. And I think that is good. When you are a company the size of IBM, you can't lurch from one strategy to the next and expect to survive. One of the advantages that IBM has in the market is that it has a well-developed roadmap that it is in the process of executing on. It is not easy to execute when you have as many software components as IBM does in its portfolio.

While it isn't possible to discuss everything I learned in my various discussions with IBM executives, I'd like to focus on IBM's solutions strategy and its impact on the software portfolio. From my perspective, IBM has made impressive strides in enforcing a common set of services that underlie its software portfolio. It has been a complicated process that has taken decades and is still a work in progress. As a result, the business units within software are increasingly providing underlying services to each other. For example, Tivoli provides management services to Rational, and Information Management provides data management services to Tivoli. WebSphere provides middleware and service orientation to all of the various business units. Because of this approach, IBM is better able to move to a solutions focus.

It’s about the solutions.

In the late 1990s IBM got out of the applications business in order to focus on middleware, data management, and systems management. This proved to be a successful strategy for the next decade. IBM made a huge amount of money selling WebSphere, DB2, and Tivoli offerings for SAP and Oracle platforms. In addition, Global Services created a profitable business implementing these packaged applications for enterprises.  But the world has begun to change. SAP and Oracle have both encroached on IBM’s software business. Some have criticized IBM for not being in the packaged software business. While IBM is not going into the packaged software business, it is investing a vast amount of money, development effort, and marketing into the “solutions” business.

How is the solutions business different from a packaged application? In some ways they are actually quite similar. Both provide a mechanism for codifying best practices into software, and both are intended to save customers time when they need to solve a business problem. IBM took itself out of the packaged software business just as the market was taking off. Companies like SAP, Oracle, Siebel, PeopleSoft, and hundreds of others were flooding the market with tightly integrated packages. In this period of the early 1990s, IBM decided that it would be more lucrative to partner with these companies, which lacked independent middleware and enabling technologies. IBM decided that it would be better off enabling these packaged software companies than competing in the packaged software market.

This turned out to be the right decision for IBM at the time.  The packaged software it had developed in the 80s was actually holding it back.  Therefore, without the burden of trying to fix broken software, it was able to focus all of its energy and financial strength on its core enabling software business.  But as companies like Oracle and SAP cornered the packaged software market and began to expand to enabling software, IBM began to evolve its strategy.  IBM’s strategy is a hybrid of the traditional packaged software business and a solutions business based on best practices industry frameworks.

So, there are two components in IBM's solutions strategy: horizontal packaged solutions that can be applied across industries, and solution frameworks that are focused on specific vertical markets.

Horizontal Packages. The horizontal solutions that IBM is offering have been primarily based on acquisitions it has made over the past few years. While at first glance they look like any other packaged software, there is a method to what IBM has purchased. Without exception, these acquisitions are focused on providing packaged capabilities that are not specific to any one market but are intended to be used in any vertical market. In essence, the packaged solutions that IBM has purchased resemble middleware more than end-to-end solutions. For example, Sterling Commerce, which IBM purchased in August 2010, is a cross-channel commerce platform. Coremetrics, purchased in June, provides web analytics, and Unica was bought for marketing automation of core business processes. While each of these is indeed packaged, they each represent a component of a solution that can be applied across industries.

Vertical Packages. IBM has been working on its vertical market packaging for more than a decade through its Business Services Group (BSG). IBM has taken its best practices from various industry engagements and codified these patterns into software components. These components have been unified into solution frameworks for industries such as retail, banking, and insurance. While this has been an active approach within Global Services for many years, there was a major restructuring in IBM's software organization this past year. In January, the software group split into two groups: one focused on middleware and another focused on software solutions. All of the newly acquired horizontal packages provide the underpinning for the vertical framework-based software solutions.

Leading with the solution. IBM software has changed dramatically over the past several decades.  The solutions focus does not stop with the changes within the software business units itself; it extends to hardware as well.  Increasingly, customers want to be able to buy their solutions as a package without having to buy the piece parts.  IBM’s solution focus that encompasses solutions, middleware, appliances, and hardware is the strategy that IBM will take into the coming decade.

Oracle + Sun: Five questions to ponder

January 27, 2010

I spent a couple of hours today listening to Oracle talk about the long-awaited integration with Sun Microsystems. A real end of an era and beginning of a new one. What does this mean for Oracle? Whatever you might think about Oracle, you have to give the company credit for successfully integrating the 60 companies it has purchased over the past few years. Having watched hundreds and perhaps thousands of acquisitions over the last few decades, it is clear to me that integration is hard. There are overlapping technologies, teams, cultures, and egos. Oracle has successfully managed to leverage the IP from its acquisitions to support its business goals. For example, it has kept packaged software customers happy by improving the software. PeopleSoft customers were able to continue to use the software they had become dependent on in much the same way as before the acquisition. In some cases, the quality of the software actually improved dramatically. The path has been more complicated with the various middleware and infrastructure platforms the company has acquired over the years, because of overlapping functionality.

The acquisition of Sun Microsystems is the biggest game changer for Oracle since the acquisition of PeopleSoft. There is little doubt that Sun has significant software and hardware IP that will be very important in defining Oracle in the 21st century. But I don’t expect this to be a simple journey. Here are the five key issues that I think will be tricky for Oracle to navigate. Obviously, this is not a complete list but it is a start.

Issue One: Can Oracle recreate the mainframe world? The mainframe is dead — long live the mainframe. Oracle has a new fondness for the mainframe and what that model could represent. So, if you combine Sun’s hardware, networking layer, storage, security, packaged applications, middleware into a package do you get to own total share of a customer’s wallet? That is the idea. Oracle management has determined that IBM had the right ideas in the 1960s — everything was nicely integrated and the customer never had to worry about the pieces working together.
Issue Two: Can you package everything together and still be an open platform? To its credit, Oracle has built its software on standards such as Unix/Linux, XML, Java, etc. So, can you have it both ways? Can you claim openness when the platform itself is hermetically sealed? I think it may be a stretch. In order to accomplish this goal, Oracle would have to have well-defined and published APIs. It would have to be able to certify that with these APIs the integrated platform won't be broken. Not an easy task.
Issue Three: Can you manage a complex computing environment? Computing environments get complicated because there are so many moving parts. There are configurations that change; software gets patched; new operating system versions are introduced; emerging technology enters and disrupts the well-established environment. Oracle would like to automate the management of this environment for customers. It is an appealing idea, since configuration problems, missing links, and poor testing are responsible for many of the outages in computing environments today. Will customers be willing to have this type of integrated environment controlled and managed by a single vendor? Some customers will be happy to turn over these headaches. Others may have too much legacy or want to work with a variety of vendors. This is not a new dilemma for customers. Customers have long had to weigh the benefits of a single source of technology against the risks of being locked in.
Issue Four: Can you teach an old dog new tricks? Can Oracle really be a hardware vendor? Clearly, Sun remains a leader in hardware despite its diminished fortunes. But as anyone who has ventured into the hardware world knows, hardware is a tough, brutal game. In fact, it is the inverse of software. Software takes many cycles to reach maturity. It needs to be tweaked and finessed. However, once it is in place it has a long, long life. As the old saying goes, old software never dies. The same cannot be said for hardware. Hardware has a much straighter line to maturity. It is designed, developed, and delivered to the market. Sometimes it leapfrogs the competition enough that it has a long and very profitable life. Other times, it hits the market at the end of a cycle, when a new, more innovative player enters. The culmination of all the work and effort can be cut short when something new comes along at the right place and the right time. It is often a lot easier to get rid of hardware than software. The computer industry is littered with the corpses of failed hardware platforms that started with great fanfare and then faded away quickly. Will Oracle be successful with hardware? It will depend on how good the company really is at transforming its DNA.
Issue Five: Are customers ready to embrace Oracle's brave new world? Oracle's strategy is a good one, if you are Oracle. But what about for customers? And what about for partners? Customers need to understand the long-term implications and tradeoffs of buying into Oracle's integrated approach to its platform. It will clearly mean fewer moving parts to worry about. It will mean one phone call and no finger pointing. However, customers have to understand the type of leverage that a single company will have in terms of contract terms and conditions. And what about partners? How does an independent software vendor or a channel partner participate within the new Oracle? Is there room? What type of testing and preparation will be required to play?

Why did IBM buy Lombardi?

December 16, 2009

Just as I was about to start figuring out my next six predictions for 2010, I had to stop the presses and focus on IBM's latest acquisition. IBM announced this morning that it has purchased Lombardi, which focuses on business process management software. Lombardi is one of the independent leaders in the market as well as a strong IBM business partner. The obvious question is why IBM would need yet another business process management platform. After all, IBM has a large portfolio of business process management software, some homegrown and some from various acquisitions such as FileNet, ILOG, and Webify. I think the answer is actually quite straightforward. Lombardi's offerings are used extensively in business units, by business management, to codify the complex processes that are at the heart of streamlining how businesses differentiate themselves. Clearly, IBM has recognized the importance of Lombardi to its customers, since it has had a long-standing partnership with the company. I think there are two reasons this acquisition is significant beyond the need to provide direct support for business management: the ability to use Lombardi's technology to sell more WebSphere offerings, and the connection of business process to IBM's Smarter Planet initiative.

Selling more WebSphere products. There is no question that the WebSphere brand within IBM's Software business unit includes a lot of products, such as its registry/repository, application integration, security, and various middleware offerings. IBM likes to sell its products by focusing on entry points: the immediate problem that the customer is trying to solve. Lombardi gives IBM direct access to business buyers who start with business process management and may then see the value of adding new capabilities to that platform.

Supporting the Smarter Planet strategy. Business transformation often starts by reconstructing process. IBM's Smarter Planet strategy is based on the premise that customers want to be able to transform their businesses utilizing sophisticated technology. Therefore, it is important to look at how business innovation can be supported by IBM's huge hardware, software, and services portfolio. The fact that Lombardi's technology is the starting point for business units looking at transformational process changes is an important marker in IBM's evolution as a company.

Why all workloads don’t belong in the cloud

November 2, 2009

I had an interesting conversation with a CIO the other day about cloud computing. He had a simple question: I have a relatively old application and I want to move it to the cloud. How do I do that? I suspect that we will see a flurry of activity over the coming year as this question gets asked again and again. And why not: the cloud is all the rage, and who wouldn't want to demonstrate that the cloud solves all problems? So, what was my answer to this CIO? Basically, I told him that not all workloads belong in the cloud. It is not because this technically can't be done. It can. It is quite possible to encapsulate an existing application and place it into a cloud environment so that new resources can be self-provisioned. But, in reality, you have to look at this issue from an efficiency and an economic perspective.

ROI

Cloud computing gains an economic edge over a traditional data center when it supports a relatively small, simple workload for a huge number of customers. For example, a singular workload like email or a payment service can be optimized at every level: the operating system, middleware, and hardware can all be customized and tuned to support the workload. The economics favor this type of workload because it supports large numbers of customers. The same cannot be said for the poor aging COBOL application that is used by 10 people within an organization. While there might be incremental management productivity benefits, the cost/benefit analysis simply doesn't work.
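To make that cost/benefit point concrete, here is a minimal sketch in Python. The numbers are purely hypothetical (invented for illustration, not from any vendor); the point is only that a fixed platform cost amortized over a million users nearly vanishes, while the same kind of overhead spread over 10 users dominates everything:

```python
def cost_per_user(fixed_platform_cost, variable_cost_per_user, users):
    """Annual cost per user: fixed platform cost amortized over the user base,
    plus the per-user variable cost."""
    return fixed_platform_cost / users + variable_cost_per_user

# A tuned, multi-tenant email service: large fixed cost, huge user base.
email = cost_per_user(fixed_platform_cost=5_000_000, variable_cost_per_user=2, users=1_000_000)

# The aging 10-user application: migration and platform overhead amortized over 10 people.
legacy = cost_per_user(fixed_platform_cost=200_000, variable_cost_per_user=2, users=10)

print(f"email service: ${email:,.2f} per user per year")   # fixed cost almost disappears
print(f"legacy app:    ${legacy:,.2f} per user per year")  # fixed cost dominates
```

With these made-up figures the shared service works out to $7 per user per year while the legacy application costs over $20,000 per user per year, which is the economic argument in miniature.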

So, the answer is pretty simple. You just can’t throw every workload into the cloud. It is not a panacea for all IT problems.  Organizations that are trying to figure out what to do with these pesky old workloads need to look at three options:

1. Decide if that workload is still supporting business objectives in a cost effective manner. If it does the job, leave it alone.

2. That old workload might be better supported by traditional outsourcing. Let someone else keep the application alive while you move on to more mission-critical tasks.

3. Think about rebuilding that old workload, for example by encapsulating key elements and placing them within a modular, flexible environment. You might even discover that there are components that are actually useful across the organization. When you discover that sharing components across divisions and departments is a productive and pragmatic approach, you might be ready to move those workloads into the cloud.
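As a sketch of what option 3 can look like in practice, here is a toy Python example of encapsulation. Everything in it is invented for illustration (the routine name, the fixed-width record format, the threshold); the idea is simply that the old logic stays untouched while a small facade gives the rest of the organization a clean, reusable interface:

```python
def legacy_credit_check(customer_record: str) -> str:
    """Stand-in for the old routine: reads a fixed-width record and returns a code."""
    balance = int(customer_record[10:20])          # balance lives in columns 10-19
    return "APPROVED" if balance < 50000 else "REVIEW"

class CreditCheckService:
    """Modular wrapper: a stable, documented interface over the legacy logic,
    so other departments can reuse it without knowing the record format."""

    def check(self, customer_id: str, balance: int) -> dict:
        # Adapt modern inputs to the legacy fixed-width record layout.
        record = f"{customer_id:<10}{balance:010d}"
        return {"customer_id": customer_id,
                "status": legacy_credit_check(record)}

svc = CreditCheckService()
print(svc.check("C123", 12000))   # {'customer_id': 'C123', 'status': 'APPROVED'}
```

Once a component like this is shared across divisions, it is also the natural unit to move into a cloud environment.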

Ten things I learned while writing Cloud Computing for Dummies

August 14, 2009

I haven't written a blog post in quite a while. Yes, I feel bad about that, but I think I have a good excuse. I have been hard at work (along with my colleagues Marcia Kaufman, Robin Bloor, and Fern Halper) on Cloud Computing for Dummies. I will admit that we underestimated the effort. We thought that since we had already written Service Oriented Architecture for Dummies (twice) and Service Management for Dummies, Cloud Computing would be relatively easy. It wasn't. Over the past six months we have learned a lot about the cloud and where it is headed. I thought that rather than try to rewrite the entire book right here, I would give you a sense of some of the important things I have learned. I will hold myself to 10 so that I don't go overboard!

1. The cloud is both old and new at the same time. It is built on the knowledge and experience of timesharing, Internet services, Application Service Providers, hosting, and managed services. So, it is an evolution, not a revolution.

2. There are lots of shades of gray in cloud segmentation. Yes, there are three buckets that we put clouds into: infrastructure as a service, platform as a service, and software as a service. That's nice and simple. However, it isn't that simple, because all of these areas are starting to blur into each other. And it is even more complicated because there is also business process as a service. This is not a distinct market unto itself; rather, it is an important component of the cloud in general.

3. Market leadership is in flux. Six months ago the marketplace for cloud was fairly easy to figure out. There were companies like Amazon and Google and an assortment of other pure-play companies. That landscape is shifting as we speak. The big guns like IBM, HP, EMC, VMware, Microsoft, and others are rushing in. They would like to control the cloud. It is indeed a market where big players will have a strategic advantage.

4. The cloud is an economic and business model. Business management wants the data center to be easily scalable and predictable and affordable. As it becomes clear that IT is the business, the industrialization of the data center follows. The economics of the cloud are complicated because so many factors are important: the cost of power; the cost of space; the existing resources — hardware, software, and personnel (and the status of utilization). Determining the most economical approach is harder than it might appear.

5. The private cloud is real. For a while there was a raging debate: is there such a thing as a private cloud? It has become clear to me that there is indeed. A private cloud is the transformation of the data center into a modular, service-oriented environment that enables users to safely procure infrastructure, platform, and software services in a self-service manner. This may not be a replacement for an entire data center; a private cloud might be a portion of the data center dedicated to certain business units or certain tasks.

6. The hybrid cloud is the future. The future of the cloud is a combination of private clouds, traditional data centers, hosting, and public clouds. Of course, there will be companies that use only public cloud services, but the majority of companies will have a combination of cloud services.

7. Managing the cloud is complicated. This is not just a problem for the vendors providing cloud services. Any company using cloud services needs to be able to monitor service levels across the services they use. This will only get more complicated over time.

8. Security is king in the cloud. Many of the customers we talked to are scared about the security implications of putting their valuable data into a public cloud. Is it safe? Will my data cross country borders? How strong is the vendor? What if it goes out of business? This issue is causing many customers either to consider only a private cloud or to hold back. The vendors who succeed in the cloud will have to have a strong brand that customers trust. Security will always be a concern, but it will be addressed by smart vendors.

9. Interoperability between clouds is the next frontier. In these early days customers tend to buy one service at a time for a single purpose: Salesforce.com for CRM, some compute services from Amazon, etc. However, over time, customers will want more interoperability across these platforms. They will want to be able to move their data and their code from one environment to another. There is some forward movement in this area, but it is early. There are few standards for the cloud and little agreement.

10. The cloud in a box. There is a lot of packaging going on out there, and it comes in two forms. Some companies are creating appliance-based environments for managing virtual images. Other vendors (especially the big ones like HP and IBM) are packaging their cloud offerings with their hardware for companies that want private clouds.

I have only scratched the surface of this emerging market. What makes it so interesting and so important is that it is actually the coalescing of computing. It incorporates everything from hardware, management software, service orientation, security, software development, information management, the Internet, service management, and interoperability, to probably a dozen other components that I haven't mentioned. It is truly the way we will achieve the industrialization of software.

Oracle Plus Sun: What does it mean?

April 20, 2009

I guess this is one way to start a Monday morning. After IBM decided to pass on Sun, Oracle decided that it would be a great idea. While I have as many questions as answers, here are my top ten thoughts about what this combination will mean to the market:

1. Oracle’s acquisition of Sun definitely shakes up the technology market. Now, Oracle will become a hardware vendor, an operating system supplier, and will own Java.

2. Oracle gets a bigger share of the database market with MySQL. Had IBM purchased Sun, it would have been able to claim market leadership.

3. This move changes the competitive dynamics of the market. There are basically three technology giants: IBM, HP, and Oracle. This acquisition will put a lot of pressure on HP since it partners so closely with Oracle on the database and hardware fronts. It should also lead to more acquisitions by both IBM and HP.

4. The solutions market reigns! Oracle stated in its conference call this morning that the company will now be able to deliver top to bottom integrated solutions to its customers including hardware, packaged applications, operating systems, middleware, storage, database, etc. I feel a mainframe coming on…

5. Oracle could emerge as a cloud computing leader. Sun had accumulated some very good cloud computing/virtualization technologies over the last few years. Sun’s big cloud announcement got lost in the frenzy over the acquisition talks but there were some good ideas there.

6. Java gets a new owner. It will be interesting to see how Oracle is able to monetize Java. Will Oracle turn Java over to a standards organization? Will it treat it as a business driver? The answer will tell the industry a lot about the future of both Oracle and Java.

7. What happens to all of Sun's open source software? A few years ago, Sun decided that it would open source its entire software stack. What will Oracle do with that business model? What will happen to Sun's biggest open source platform, MySQL? MySQL has a huge following in the open source world. I suspect that Oracle will not make dramatic changes, at least in the short run. Oracle does have open source offerings, although they are not the central focus of the company by a long shot. I assume that Oracle will deemphasize MySQL.

8. Solaris is back. Lately, there has been more action around Solaris. IBM announced support earlier in the year, and HP recently announced support services. Now that Solaris has a strong owner, it could shake up the dynamics of the operating system world. It could have an impact on the other gorilla not in the room: Microsoft.

9. What are the implications for Microsoft? Oracle and Microsoft have been bitter rivals for decades. This acquisition will only intensify the rivalry. Will Microsoft look at some big acquisitions in the enterprise market? Will new partnerships emerge? Competition does create strange bedfellows. What will this mean for Cisco, VMware, and EMC? That is indeed something interesting to ponder.

10. Oracle could look for a services acquisition next. One of the key differences between Oracle and its two key rivals IBM and HP is in the services space. If Oracle is going to be focused on solutions, we might expect to see Oracle look to acquire a services company. Could Oracle be eyeing something like CSC?

I think I probably posed more questions than answers. But, indeed, these are early days. There is no doubt that this will shake up the technology market and lead to increasing consolidation. In the long run, I think this will be good for customers. Customers do want to stop buying piece parts. Customers do want to buy a more integrated set of offerings. However, I don't think any customer wants to go back to the days when a solution approach meant lock-in. It will be important for customers to make sure that what these big players provide is the flexibility they have gotten used to over the last decade, without so much pain.

Yes, Virginia, there is a SOA!

February 9, 2009

It has been only a few weeks since Anne Thomas Manes wrote her blog post stating that SOA is dead. Since then there has been a lot of chatter about whether this is indeed true and whether SOA vendors should find a new line of work. So, I thought I would add my two cents to the conversation.

Let me start by saying, I told you so. Last year I wrote in a blog post that we would know SOA had become mainstream when the enormous hype cycle ended. That has now happened. What does this mean? Let's keep it in perspective. Every technology that comes along and generates a lot of hype follows this same pattern. Why? I'll make it simple. The hype machine is powerful. It goes like this. There is a new technology trend with thousands of new companies on the scene. All of them vie for dominance and a strong position on someone's magic universe. They are able to gain attention in the market. Then the market takes on its own momentum. The technology moves from being a set of products focused on solving a business problem to the solution to any problem. We saw this with object orientation, open systems, and Enterprise Application Integration, to name but a few. Smart entrepreneurs, sensing opportunity, stormed onto the market with huge promises of salvation for IT. Now, if I wanted to write a book I think I could come up with 100 different scenarios to prove my point, but I will spare you the pain since the outcome is always the same.

So, what happens when each of these technology approaches moves from hype heaven to the dead zone? In some cases, the technology actually goes away because it simply doesn't work, despite all of the hype. But in many situations an interesting thing happens: the technology evolves into something mainstream. It gets incorporated, and sometimes buried, into emerging products and the implementation plans of companies. It becomes mainstream. I'll give you a few examples to support this premise:
•    Remember open systems? In the early 1990s it was the biggest trend around. There were thousands of products that were released onto the market. There were hundreds of companies that renamed themselves open something or other.  So, what happened? Open became mainstream and the idea of designing proprietary technologies without open interfaces and standards support became unpopular. No one creates a magic quadrant based on open systems but I don’t know many companies who can ignore standards and survive.

•    Object orientation was as big a rage as open systems, maybe even bigger. There were conferences, publications, magic quadrants, and lots and lots of products ranging from operating systems to databases to development environments. It was a hot, hot market. What happened? The idea of creating modular components that could be reused turned out to be a great idea. But the original purity and concepts needed to evolve into something more pragmatic, and in fact they did. The concepts of object orientation changed the nature of how developers created software. It moved from the idea of creating small, granular pieces of code that could be used in lots of different ways to larger-grained pieces of code that could form composites. Object orientation is the foundation that most modern software sits on top of.

•    Enterprise Application Integration probably had even more companies than open systems and object orientation combined. The idea that a company could buy technology that would allow its packaged software elements to talk to each other and pass data was revolutionary at the time. This trend was focused on providing packaged solutions to a nasty problem. If vendors could find a way to let customers avoid resorting to massive coding, it would result in a big market opportunity. Vendors in this market promised solutions that allowed the general ledger module to send data to and from the sales force application. What happened? There were hundreds of vendors selling into this market. However, it was a stopping-off point. There are newer products that do a better job of integration, based on a service-oriented approach to integration and data management. In addition, this market evolved into technologies such as Enterprise Service Buses that did a better job of abstraction. There are plenty of Enterprise Application Integration technologies out there, but they have become part of a loosely coupled environment where components are designed to send messages between them.

Now, I could go on for a long time with plenty more examples. But I think I have made my point. Technology innovation just works this way. The products that take the market by storm one year become stale the next. But the lessons learned and the innovation do not die. These lessons are used by a new generation of smart technologists to support the new generation of products.
So, Virginia, Service Oriented Architectures will do the same thing.  But it is also a little different.  It is not the same as a lot of other technology shiny toys because so much of SOA is actually about business practices – not technology.  Sure when SOA started out a few years ago it was about specific products – hundreds of them. These products were eagerly adopted by developers who used them to created service interfaces and business services.
Today, business leaders are taking charge of their SOA initiatives. The innovative ones are using business-focused templates to move more quickly. They are creating business services – code plus process – such as an Order-to-Cash service that, in the long run, will be mandated as the way everyone across the company implements that process according to corporate practices.  Some of these companies would like to rid themselves of huge, complicated, and expensive packaged software and replace it with these business services.
Today these products are becoming part of the fabric of the companies that use them. They are enablers of consistent and vetted business processes. They are the foundation of good governance, so that everyone in the organization uses a consistent set of rules, data, and processes. This is not glamorous.  It is hard work that starts from a business planning cycle.  It is the type of hard work where teams of technologists and business leaders determine the best way to satisfy the company’s need to implement order-to-cash processes across business units.
And yes, Virginia, SOA is not stagnant.  It is evolving because it offers business value to companies.  There are new initiatives and new architectural principles that have value within this service orientation approach.  There are architectural styles such as REST that help make interaction within a business services approach more dynamic.  There are emerging standards that enable companies using SOA to exchange information without massive coding. There are information services and security services evolving for the same reason. There are new approaches to make SOA environments more manageable, based on the emerging idea that everything we do in the world is a service of some type that needs to work with other services.  The physical and virtual worlds are starting to blend – which makes service orientation even more important.
Maybe ten years from now, we won’t use the term Service Oriented Architecture because it won’t be seen as a market segment or a quadrant – it will be just the way things are done.  So, stop worrying about whether SOA is alive, dead, or comatose – I have. Relax, Virginia, and get back to work!

Making sense of REST in context with SOA

January 19, 2009 2 comments

I continue to spend a lot of time thinking about and researching REST — REpresentational State Transfer. REST is a set of architectural guidelines introduced by Roy Fielding in his doctoral dissertation; Fielding was also a principal author of the HTTP specification. While I couldn’t find a link to Fielding’s dissertation, I did find a very good blog entry written by Fielding.

Given its origins, REST follows the philosophy of HTTP. In other words, you give every resource an ID and link these services or components together through a small set of standard methods.  These services communicate in a stateless manner, and the resources can be used in many different contexts.  What is very important about REST is this idea of linking resources through self-describing interactions where no state is kept between requests. From a customer perspective, it therefore offers a fast, effective model for development that is fundamental to making organizations more fluid and effective.
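Those three properties — every resource has an ID, requests are self-describing, and the server keeps no per-client state — can be sketched in a few lines. This is a toy in-memory model to make the idea concrete, not a real HTTP server; the class and URI names are illustrative.

```python
# Minimal sketch of REST's uniform interface: each resource has an ID (a URI),
# each request is self-describing (method + URI + representation), and the
# server holds no session state between requests.

class ResourceStore:
    """In-memory store keyed by URI."""

    def __init__(self):
        self._resources = {}

    def handle(self, method, uri, body=None):
        # Dispatch on the small, uniform set of methods.
        if method == "GET":
            if uri in self._resources:
                return (200, self._resources[uri])
            return (404, None)
        if method == "PUT":
            status = 201 if uri not in self._resources else 200
            self._resources[uri] = body
            return (status, body)
        if method == "DELETE":
            if uri in self._resources:
                del self._resources[uri]
                return (204, None)
            return (404, None)
        return (405, None)  # outside the uniform interface


store = ResourceStore()
status, _ = store.handle("PUT", "/employees/42", {"name": "Ada"})
print(status)  # 201: the resource was created at its own URI
```

Because every request carries everything the server needs, any number of clients can interleave requests and any `GET` can be cached or retried — which is where much of the web’s scalability comes from.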

As part of my research, I had an interesting conversation with Martin Nally, IBM Rational’s CTO, about REST, its value, and its relationship to SOA. From his perspective, REST looks like a data set exposed with Internet protocols, described in terms of GET, PUT, POST, and DELETE.  In Nally’s view, REST provides “a simple style of using HTTP if you can look at your problem set as a web of interconnected hyperlinked resources.”  I thought that Nally put it very well: “In the old days we would create a data model with a representation of department, employee, etc. to create the data in a database. Then we would write two styles of applications: one that was basically to conduct simple database operations (Create, Read, Update, Delete) and a second type of application that would apply that information to business process — how do you rate a customer’s credit worthiness?”  In other words, it is necessary to intermix web-based services that are stateless and can link data together across a distributed computing environment with well-defined business services that encapsulate business rules and process. In an operational environment based on business services — based on a Service Oriented Architecture — many resources are provided by components that require much more structuring of process and more overall governance.  For many types of implementations, there needs to be a foundation of technology concepts such as a registry/repository for both management and governance.  There also needs to be a transport mechanism for guaranteed transactions.
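The parallel Nally draws between simple database operations and REST’s methods is the well-known CRUD-to-HTTP mapping, which can be sketched directly. The mapping itself is standard; the `plan_request` helper and the example URI are illustrative.

```python
# Nally's parallel: simple data operations (Create, Read, Update, Delete)
# map onto REST's uniform HTTP methods.

CRUD_TO_HTTP = {
    "create": "POST",    # add a new resource under a collection URI
    "read":   "GET",     # fetch a representation; safe and cacheable
    "update": "PUT",     # replace the representation at a known URI
    "delete": "DELETE",  # remove the resource
}


def plan_request(operation, uri):
    """Translate a data-style operation into a self-describing HTTP request line."""
    method = CRUD_TO_HTTP[operation.lower()]
    return f"{method} {uri} HTTP/1.1"


print(plan_request("read", "/departments/7/employees"))
# GET /departments/7/employees HTTP/1.1
```

The second style of application Nally mentions — business logic such as rating creditworthiness — is exactly what does not fit this table, which is why it lands on the structured, governed SOA side of the divide.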

I think that we need to look at both world views — REST to support the web-style, data-oriented linkage approach, alongside the structured world of a services- and process-based approach.  Let’s leave the religious wars to someone else and recognize that there is room in our complicated software world for both.

My Top Eleven Predictions for 2009 (I bet you thought there would be only ten)

November 14, 2008 11 comments

What a difference a year makes. The past year was filled with a lot of interesting innovations and market shifts. For example, Software as a Service went from being something for small companies or departments within large ones to a mainstream option.  Real customers are beginning to solve real business problems with service oriented architecture.  The latest hype is around Cloud Computing – after all, the software industry seems to need hype to survive. As we look forward into 2009, it is going to be a very different and difficult year, but one that will be full of some surprising twists and turns.  Here are my top predictions for the coming year.
One. Software as a Service (SaaS) goes mainstream. It isn’t just for small companies anymore. While this has been happening slowly and steadily, it is rapidly becoming mainstream because, with dramatic cuts in capital budgets, companies are going to fulfill their needs with SaaS.  While companies like SalesForce.com have been the successful pioneers, the big guys (like IBM, Oracle, Microsoft, and HP) are going to make a major push for dominance and strong partner ecosystems.
Two. Tough economic times favor the big and stable technology companies. Yes, these companies will trim expenses and cut back like everyone else. However, customers will be less willing to bet the farm on emerging startups with cool technology. The only way emerging companies will survive is to do what I call “follow the pain”. In other words, come up with compelling technology that solves really tough problems that others can’t do. They need to fill the white space that the big vendors have not filled yet. The best option for emerging companies is to use this time when people will be hiding under their beds to get aggressive and show value to customers and prospects. It is best to shout when everyone else is quiet. You will be heard!
Three.  The Service Oriented Architecture market enters the post-hype phase. This is actually good news. We have had in-depth discussions with almost 30 companies for the second edition of SOA for Dummies (coming out December 19th). They are all finding business benefit from the transition. They all view SOA as a journey – not a project.  So, there will be less noise in the market but more good work getting done.
Four. Service Management gets hot. This has long been an important area whether companies were looking at automating data centers or managing process tied to business metrics.  So, what is different? Companies are starting to seriously plan a service management strategy tied both to customer experience and satisfaction. They are tying this objective to their physical assets, their IT environment, and their business process across the company. There will be vendor consolidation and a lot of innovation in this area.
Five. The desktop takes a beating in a tough economy. When times get tough companies look for ways to cut back and I expect that the desktop will be an area where companies will delay replacement of existing PCs. They will make do with what they have or they will expand their virtualization implementation.
Six. The Cloud grows more serious. Cloud computing has actually been around since the early time-sharing days, if we are to be honest with each other.  However, there is a difference: emerging technologies like multi-tenancy make this approach to shared resources different. Just as companies are moving to SaaS for economic reasons, companies will move to clouds with the same goal – decreasing capital expenditures.  Companies will have to gain an understanding of the impact of trusting a third-party provider. Performance, scalability, predictability, and security are not guaranteed just because some company offers a cloud. Service management of the cloud will become a key success factor. And there will be plenty of problems to go around next year.
Seven. There will be tech companies that fail in 2009. Not all companies will make it through this financial crisis.  Even large companies with cash will potentially be on the failure list.  I predict that Sun Microsystems, for example, will fail to remain intact.  I expect that company will be broken apart.  It could be that the hardware assets will be sold to its partner Fujitsu while pieces of software are sold off as well.  It is hard to see how a company without a well-crafted software strategy and execution model can remain financially viable. Similarly, companies without a focus on the consumer market will have a tough time in the coming year.
Eight. Open Source will soar in this tight market. Open Source companies are in a good position in this type of market—with a caveat.  There is a danger in customers simply adopting an open source solution unless there is a strong commercial support structure behind it. Companies that offer commercial open source will emerge as strong players.
Nine.  Software goes vertical. I am not talking about packaged software. I anticipate that more and more companies will begin to package everything based on a solutions focus. Even middleware, data management, security, and process management will be packaged so that customers will spend less time building and more time configuring. This will have an impact in the next decade on the way systems integrators will make (or not make) money.
Ten. Appliances become a software platform of choice for customers. Hardware appliances have been around for a number of years and are growing in acceptance and capability.  This trend will accelerate in the coming year.  The most common solutions used with appliances include security, storage, and data warehousing. The appliance platform will expand dramatically this coming year.  More software will be sold as prepackaged solutions to ease the adoption of complex enterprise software.

Eleven. Companies will spend money on anticipation management. Companies must be able to use their information resources to understand where things are going. Being able to anticipate trends and customer needs is critical.  Therefore, one of the bright spots this coming year will be the need to spend money getting a handle on data.  Companies will need to understand not just what happened last year but where they should invest for the future. They cannot do this without understanding their data.

The bottom line is that 2009 will be a complicated year for software.  Many companies without a compelling solution to customer pain will – and should – fail. The market favors safe companies. As in any down market, some companies will focus on avoiding any risk and waiting. The smart companies – both providers and users of software – will take advantage of the rough market to plan for innovation and success when things improve – and they always do.

Has IBM Changed its Partner Strategy? The Hunt for OEMs

March 18, 2008 1 comment

Partners are getting more and more important to the major software players. IBM announced a very interesting relationship with Kana, a $60 million solution provider of multi-channel customer service software. This is indeed a growing area in the market. Kana sells its software to about 60% of the Fortune 100. The company started in 1996 and has managed to survive some rough times and come out strong.

IBM, like other major industry players, relies on its partner ecosystem as an important go-to-market strategy, but some partnerships work better than others. What I thought was particularly interesting about the Kana partnership is its depth. Kana has decided to embed IBM’s DB2 and WebSphere (including the WebSphere Process Server) into its solutions. SOA is an important new direction for Kana, and the two companies plan to do some joint development in this area. Relationships like this don’t just happen. More than half of Kana’s customers are also IBM customers. This is important because, increasingly, the customers I am talking to are looking to buy solutions from one trusted provider rather than trying to get a bunch of individual vendors to work together.

IBM has had a strategy for more than a decade of partnering with packaged software providers rather than being in that business. On one level, this can be viewed as a risky strategy. One only has to look at the roles of Oracle and SAP in the market to wonder if these packaged offerings will swallow up the entire ISV partner ecosystem like a black hole. I guess that my conclusion is that it just isn’t that simple. Customers that I have spent time with look at software packages from a different vantage point than infrastructure software. Because Oracle or SAP provides an excellent package for supply chain management or accounting management does not necessarily mean that they are the right choice for middleware or SOA infrastructure.

IBM’s partner strategy with ISVs has evolved over the past several years. I see a change from the desire to have lots of partners who enable their software to run on one or more IBM software offerings to deeper, more strategic relationships. The Kana relationship is an OEM relationship — not a simple membership in a partner program. In fact, IBM has more than 30 of these OEM partnerships with vendors including Fair Isaac, Cisco, Nortel, and PTC — to name a few. I expect that OEM partners are going to become a central focus of IBM’s partnering strategy in the coming year.