You know that a market is about to transition out of its early fantasy stage when IT architects begin talking about traditional IT requirements. Why do I bring this up? I had a fascinating conversation yesterday with a leading architect in charge of the cloud strategy for an important company that is typically on the bleeding edge of technology. Naturally, I am not allowed to name the company or the person. But let me just say that individuals and companies like this are the first to grapple with issues such as the need for a registry for web services or the complexity of creating business services that are both reusable and embody business best practices. They are the first companies to try out artificial intelligence to see if it can automate tasks that require sophisticated reasoning.
These innovators tend to get blank stares from their cohorts in traditional IT departments, who are grappling with mundane issues such as keeping systems running efficiently. Leading-edge companies have the luxury of pushing the bounds of what is possible. There is a tremendous amount to be learned from their experiments with technology. In fact, there is often more to be learned from their failures than from their successes, precisely because they are pushing the boundary of what current technology can do.
So, what did I take away from my conversation? From my colleague’s view, the cloud today is about “how many virtual machines you need, how big they are, and linking those VMs to storage.” Not a very compelling picture, but it is his perception of the reality of the cloud today. His view of future requirements is quite intriguing.
I took away six key issues that this advanced planner would like to see in the evolution of cloud computing:
One. Automation of placement of assets is critical. Where you actually put capability matters. For example, there are certain workloads that should never leave the physical data center because of regulatory requirements. If an organization is dealing with huge amounts of data, it would not be efficient to scatter elements of that data across different cloud environments. What about performance issues? What if a task needs to be completed in 10 seconds, or in 5 milliseconds? There are many decisions that need to be made based on corporate requirements. Should the placement of workloads be decided by hand for each case? The answer is no. There should be an automated process, based on business rules, that determines the actual placement of cloud services.
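A minimal sketch of what such rule-driven placement could look like, assuming hypothetical workload attributes (regulated, data_size_gb, max_latency_ms) and made-up placement zones; a real placement engine would evaluate far richer policies:

```python
# A minimal sketch of rule-based workload placement. The attributes and
# zone names are illustrative, not any vendor's actual model.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    regulated: bool          # subject to data-residency regulation?
    data_size_gb: int        # how much data the workload carries
    max_latency_ms: float    # required response time

def place(w: Workload) -> str:
    """Return a target zone based on simple business rules."""
    if w.regulated:
        return "on-premises data center"   # regulated data never leaves
    if w.max_latency_ms < 10:
        return "private cloud (local)"     # tight latency needs proximity
    if w.data_size_gb > 10_000:
        return "single cloud region"       # avoid splitting huge data sets
    return "public cloud"                  # everything else can burst out

print(place(Workload("payroll", regulated=True, data_size_gb=50, max_latency_ms=500)))
print(place(Workload("ad-bidding", regulated=False, data_size_gb=5, max_latency_ms=5)))
```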
Two. Avoiding concentration of risk. How should you actually place core assets across hypervisors? If, for example, you have a highly valuable set of services that are critical to decision makers, you might want to ensure that they run on different hypervisors, based on automated management processes and rules.
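As an illustration of the idea, here is a small sketch of an anti-affinity rule that spreads critical services across distinct hypervisors; the service and hypervisor names are invented, and a real scheduler would also weigh capacity and failure domains:

```python
# Spread critical services round-robin across hypervisors so that no
# single host failure takes them all down. Names are illustrative.
from itertools import cycle

def spread(services: list[str], hypervisors: list[str]) -> dict[str, str]:
    """Assign each critical service to a different hypervisor, reusing
    hosts only when there are more services than hosts."""
    assignment = {}
    hosts = cycle(hypervisors)
    for service in services:
        assignment[service] = next(hosts)
    return assignment

critical = ["pricing-engine", "risk-scoring", "exec-dashboard"]
print(spread(critical, ["hv-a", "hv-b", "hv-c"]))
# {'pricing-engine': 'hv-a', 'risk-scoring': 'hv-b', 'exec-dashboard': 'hv-c'}
```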
Three. Quality of service needs a control fabric. If you are a customer of hybrid cloud computing services, you might need access to the code that tells you what tasks a management tool is actually doing. What does that tool actually touch in the cloud environment? What do the error messages mean, and what are the implications? Today many cloud services are black boxes; there is no way for the customer to really understand what is happening behind the scenes. If companies are deploying truly hybrid environments that support a mixed workload, this type of access to the workings of the various tools that monitor and manage quality of service will be critical. From a quality of service perspective, some applications will require dedicated bandwidth to meet requirements; other applications will not need any special treatment.
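To make the point concrete, here is a sketch of what an explicit, inspectable quality-of-service policy might look like, with made-up application names and thresholds; the point is that the policy is visible to the customer rather than hidden inside a black box:

```python
# Per-application QoS classes as open, inspectable data. Application
# names, bandwidth figures, and thresholds are invented for illustration.
QOS_POLICIES = {
    "trading-feed":  {"class": "dedicated", "bandwidth_mbps": 500, "max_latency_ms": 5},
    "batch-reports": {"class": "best-effort"},  # no special treatment needed
}

def check_qos(app: str, measured_latency_ms: float) -> str:
    """Compare a measurement against the published policy for an app."""
    policy = QOS_POLICIES.get(app, {"class": "best-effort"})
    limit = policy.get("max_latency_ms")
    if limit is not None and measured_latency_ms > limit:
        return f"{app}: VIOLATION ({measured_latency_ms} ms > {limit} ms)"
    return f"{app}: within policy"

print(check_qos("trading-feed", 7.2))   # trading-feed: VIOLATION (7.2 ms > 5 ms)
print(check_qos("batch-reports", 900))  # batch-reports: within policy
```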
Four. Cloud service providers building shared services need an architectural plan to control them as a unit of work. These services will be shared across departments as well as across customers. How do you connect these services? While it might seem simple at the 50,000-foot level, it is actually quite complex because we are talking about linking a set of services together to build a coherent platform. Therefore, as with building any system, there is a requirement to model the “system of services,” then deploy that model, and finally reconcile and tune the results.
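Here is a toy sketch of the model-then-deploy idea: the “system of services” is declared as data and validated for dangling dependencies before anything is deployed and tuned. The service names are hypothetical:

```python
# Declare a "system of services" as a model first; deploy only after the
# model checks out. Service names are invented for illustration.
SYSTEM_MODEL = {
    "billing":         {"depends_on": ["customer-master", "tax-rules"]},
    "customer-master": {"depends_on": []},
    "tax-rules":       {"depends_on": []},
}

def validate(model: dict) -> list[str]:
    """Report dependencies that point at services missing from the model."""
    errors = []
    for service, spec in model.items():
        for dep in spec["depends_on"]:
            if dep not in model:
                errors.append(f"{service} depends on unknown service {dep!r}")
    return errors

problems = validate(SYSTEM_MODEL)
print(problems or "model is internally consistent; safe to deploy")
```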
Five. Standard APIs protect customers. Should APIs for all cloud services be published and accessible? If companies are to have the freedom to move easily and efficiently between and among cloud services, then APIs need to be well understood. For example, a company may be using a vendor’s cloud service and discover a tool that addresses a specific problem. What if that vendor doesn’t support that tool? In essence, the customer is locked out of using the tool. This becomes a problem immediately for innovators. However, it is also an issue for traditional companies that begin to work with cloud computing services and realize over time that they need more service management and more oversight.
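One common way to reduce this kind of lock-in, sketched below under assumed names, is to code against a published, vendor-neutral interface so that swapping vendors means swapping one adapter rather than rewriting every caller:

```python
# Application code depends on a standard interface, not on any vendor.
# The interface and the adapter below are hypothetical illustrations.
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """The published, vendor-neutral API the application codes against."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class VendorAStore(ObjectStore):
    """Adapter for one vendor; an in-memory dict stands in for its service."""
    def __init__(self):
        self._blobs: dict[str, bytes] = {}
    def put(self, key, data): self._blobs[key] = data
    def get(self, key): return self._blobs[key]

def archive(report: bytes, store: ObjectStore) -> None:
    store.put("quarterly-report", report)   # caller never sees the vendor

archive(b"...", VendorAStore())  # switching vendors means swapping one class
```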
Six. Managing containers may be key to the service management of the cloud. A well-designed cloud service has to be service oriented. It needs to be placed in a container without dependencies, since customers will use services in different ways. Therefore, each service needs a set of parameter-driven configurators so that the rules of usage and management are clear. What version of what cloud service should be used under what circumstance? What if the service is designed to execute backups? Can that backup happen across the globe, or should it be done in proximity to the data assets? These management issues will become the most important issues for cloud providers in the future.
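A minimal sketch of a parameter-driven configurator for the backup example, with invented parameter names; the usage rules travel with the service description instead of being hard-coded into each deployment:

```python
# A service descriptor carries its own usage and management parameters.
# All names and values are invented for illustration.
BACKUP_SERVICE_DESCRIPTOR = {
    "service": "backup",
    "version": "2.1",                    # which version to use, and when
    "parameters": {
        "data_locality": "same-region",  # keep backups near the data assets
        "window": "02:00-04:00",
        "encrypt": True,
    },
}

def allowed_targets(descriptor: dict, data_region: str, targets: list[str]) -> list[str]:
    """Filter candidate backup targets by the service's locality rule."""
    if descriptor["parameters"]["data_locality"] == "same-region":
        return [t for t in targets if t.startswith(data_region)]
    return targets   # global backups permitted

print(allowed_targets(BACKUP_SERVICE_DESCRIPTOR, "eu",
                      ["eu-west-store", "us-east-store", "eu-north-store"]))
# ['eu-west-store', 'eu-north-store']
```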
The best thing about talking to people like this architect is that it makes you think about issues that aren’t part of today’s cloud discussions. These are difficult issues to solve. However, many of them have been addressed for decades in earlier iterations of technology architecture. Yes, the cloud is a different delivery and deployment model for computing, but it will evolve as many other architectures have. Putting quality of service, service management, configuration, and policy rules at the forefront will help transform cloud computing into a mature and effective platform.
Informatica might be thought of as the last independent data management company standing. In fact, that used to be Informatica’s main positioning in the market. That has begun to change over the last few years as Informatica has continued to make strategic acquisitions. Over the past two years Informatica has purchased five companies; the most recent was Siperian, a significant player in Master Data Management solutions. These acquisitions have paid off. Today Informatica has passed the $500 million revenue mark with about 4,000 customers. It has deepened its strategic partnerships with HP, Accenture, Salesforce.com, and MicroStrategy. In a nutshell, Informatica has made the transition from a focus on ETL (Extract, Transform, Load) tools that support data warehouses to a company focused broadly on managing information. Merv Adrian did a great job of providing context for Informatica’s strategy and acquisitions. To reposition itself in the market, Informatica has set its sights on data service management: the combination of data integration, master data management, data transformation, and predictive analytics, applied holistically across departments, divisions, and business partners.
In essence, Informatica is trying to position itself as a leading manager of data across its customers’ ecosystems. This requires consistent data definitions across silos (Master Data Management), ways to trust the integrity of that data (data cleansing), event processing, predictive analytics, integration tools to move and transform data, and the ability to prove that governance can be verified (data governance). Through its acquisitions, Informatica is working to put these pieces together. However, as a relatively small player living in a tough neighborhood (Oracle, IBM, SAS Institute, etc.), it faces a difficult journey. This is one of the reasons that Informatica is putting so much emphasis on its new partner marketplace. A partner network can really help a smaller player appear and act bigger.
This marketplace will include all of Informatica’s products. It will enable developers to build within Informatica’s development cloud and deploy either in the cloud or on premises. Like its new partner marketplace, the cloud offers another important opportunity for Informatica to compete. Informatica was an early partner of Salesforce.com, offering complementary information management products that can be used as options with Salesforce.com. This has given Informatica access to customers who might never have thought about Informatica in the past. In addition, it taught Informatica about the value of cloud computing as a platform for the future. Therefore, I expect that Informatica’s strong cloud-based offerings will help the company maintain its industry position. I also expect that the company’s newly strengthened partnership with HP will be very important to its growth.
What is Informatica’s roadmap? It intends to continue to deliver new releases every six months, including new data services and new data integration services. It will also build these services with self-service interfaces. In the end, its goal is to be a great data steward to its customers. This is an admirable goal. Informatica has made very good acquisitions that support its strategic goals. It is making the right bets on the cloud and on a partner ecosystem. The question that remains is whether Informatica can truly scale to the size where it can sustain the competitive threats. Companies like IBM, Oracle, Microsoft, SAP, and SAS Institute are not standing still. Each of these companies has built, and will continue to expand, its information management strategy and portfolio of offerings. If Informatica can break the mold on ease of implementation for complex data service management, it will have earned a place at the head table.
As I was pointing out yesterday, there are many unintended consequences of any emerging technology platform, and the cloud will be no exception. So, here are my next three picks for unintended consequences of the evolution of cloud computing:
4. The cloud will disrupt traditional computing sales models. I think that Larry Ellison is right to rant about cloud computing. He is clearly aware that if cloud computing becomes the preferred way for customers to purchase software, the traditional model of paying maintenance on applications will change dramatically. Clearly, vendors can simply roll the maintenance stream into the per-user, per-month pricing. However, as I pointed out in Part I, prices will inevitably go down as competition for customers expands. There will come a time when the vast sums of money collected to maintain software versions will seem a bit old fashioned. In fact, that will be one of the most important unintended consequences, and it will have a very disruptive effect on the economic models of computing. It has the potential to change the power dynamics of the entire hardware and software industries. The winners will be the customers and the smart vendors who figure out how to make money without direct maintenance revenue. Like every other unintended consequence, this one will produce new models that make some really clever vendors very successful. But don’t ask me what they are. It is just too early to know.
5. The market for managing cloud services will boom. While service management vendors do pretty well today managing data center based systems, the cloud environment will make these vendors king of the hill. Think about it like this. You are a company that is moving to the cloud. You have seven different software as a service offerings from seven different vendors. You also have a small private cloud that you use to provision critical customer data. You use a public cloud for some large-scale testing. In addition, any new software development is done in a public cloud and then moved into the private cloud when it is completed. Existing workloads like ERP systems and legacy systems of record remain in the data center. All of these components together are the enterprise computing environment. So, what is the service level of this composite environment? How do you ensure that you are compliant across these environments? Can you ensure security and performance standards? A new generation of products, and maybe a new generation of vendors, will rake in a lot of cash solving this one.
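As a rough illustration of why the composite view is hard, here is a sketch that rolls up availability across a hybrid estate; the environment names and uptime figures are invented. When a business process depends on every component, the availabilities multiply, so the composite service level is worse than any single piece:

```python
# Roll up the service level of a composite, hybrid environment.
# Environment names and uptime figures are invented for illustration.
ENVIRONMENTS = {
    "crm-saas":       {"uptime": 0.999},
    "private-cloud":  {"uptime": 0.9995},
    "public-cloud":   {"uptime": 0.995},
    "datacenter-erp": {"uptime": 0.9999},
}

def composite_uptime(components: list[str]) -> float:
    """If a process needs every component, availabilities multiply."""
    result = 1.0
    for name in components:
        result *= ENVIRONMENTS[name]["uptime"]
    return result

order_to_cash = ["crm-saas", "private-cloud", "datacenter-erp"]
print(f"order-to-cash availability: {composite_uptime(order_to_cash):.4%}")
# order-to-cash availability: 99.8401%  (lower than any single component)
```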
6. What will processes look like in the cloud? Like data, processes will have to be decoupled from the applications of record that they are currently an integral part of. Now, I don’t expect that we will rip processes out of every system of record. In fact, static systems such as ERP, HR, and the like will keep tightly integrated processes. However, the dynamic processes that need to change as the business changes will have to be designed without these constraints. They will become trusted processes: business services that are codified but can be reconfigured when the business model changes. This will probably happen anyway with the emergence of Service Oriented Architectures. However, with the flexibility of the cloud environment, this trend will accelerate. The need for independent processes and process models may have the potential to create a brand new market.
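A toy sketch of the idea: once a process is decoupled from the application of record, it is just data, so it can be reconfigured when the business changes without touching application code. The step and service names are illustrative:

```python
# A business process expressed as data, bound to services at run time.
# Step and service names are invented for illustration.
APPROVE_ORDER_V1 = ["validate-customer", "check-credit", "reserve-stock", "confirm"]

def run_process(steps: list[str], services: dict) -> None:
    """Execute each named step against whatever service currently provides it."""
    for step in steps:
        services[step]()   # binding happens at run time, not compile time

services = {name: (lambda n=name: print(f"running {n}"))
            for name in APPROVE_ORDER_V1 + ["fraud-screen"]}

run_process(APPROVE_ORDER_V1, services)

# Business change: add fraud screening without touching any application code.
APPROVE_ORDER_V2 = APPROVE_ORDER_V1[:2] + ["fraud-screen"] + APPROVE_ORDER_V1[2:]
run_process(APPROVE_ORDER_V2, services)
```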
I am happy to add more unintended consequences to my top six. Send me your comments and we can start a part III reflecting your ideas.
I haven’t been to IBM’s Rational conference in a couple of years, so I was very interested to see not just what IBM had to say about the changing landscape of software development but also how the customers attending the conference had changed. I was not disappointed. While I could write a whole book on the changes happening in software development (but I have enough problems), I thought I would mention some of the aspects of the conference that I found noteworthy.
One. Rational is moving from being a tools company to being a software development platform. Rational has always been a complex organization to understand since it has evolved and changed so much over the years. The organization now seems to have found its focus.
Two. More management, fewer low-level developers. In the old days, conferences like this would be dominated by programmers. While there were many developers in attendance, I found that there were also a lot of upper-level managers. For example, I sat at lunch with one CIO who was in the process of moving to a sophisticated service oriented architecture. Another person at my table was a manager looking to update his company’s current development platforms. Still another was a customer of one of the companies that IBM had purchased, looking to understand how to implement new capabilities added since the acquisition.
Three. Rational has changed dramatically through acquisitions. Rational is a tale of acquisitions. Rational Software, the linchpin of IBM’s software development division, was itself a combination of many acquisitions. Before being bought by IBM in 2002 for $2.1 billion, Rational had acquired an impressive array of companies including Requisite, SQA, Performance Awareness, Pure Atria, and ObjecTime Ltd. After a period of absorption, IBM started acquiring more assets. BuildForge (build and release management) was purchased in 2006; Watchfire (web application security vulnerability and compliance testing software) was bought in 2007; and Telelogic (requirements management) was purchased in 2008.
It has taken IBM a while both to absorb all of the acquisitions and to create a unified architecture so that these software products could share components and interoperate. While IBM is not done, under the leadership of general manager Danny Sabbah, Rational has made the transition from being a tools company to becoming a platform for managing software complexity. It is a work in progress.
Four. It’s all about Jazz. Jazz, IBM’s collaboration platform, was a major focus of the conference. Jazz is an architecture intended to integrate data and function. Jazz’s foundation is the REST architecture, and therefore it is well positioned for use in Web 2.0 applications. What is most important is that IBM is bringing all of its Rational technology under this model. Over the next few years, we can expect to see this framework under all of Rational’s products.
Five. Rational doesn’t stand alone. It is easy to focus on the Rational portfolio alone (which could take a while). But what I found quite interesting was the emphasis on the intersection between the Rational platform and Tivoli’s management services, as well as WebSphere’s service oriented architecture offerings. Rational also made a point of highlighting the use of collaboration elements provided by the Lotus division. Cloud computing was another major focus of discussion at the event. While many customers at the event are evaluating the potential of using various Rational products in the cloud, it is early. The one area where IBM seems to have hit a home run is its CloudBurst appliance, which is intended to create and manage virtual images. Rational is also beginning to deliver its testing offerings as cloud based services. One of the most interesting elements of its approach is the use of tokens as a licensing model. In other words, customers purchase a set number of tokens, or virtual licenses, that can be used to buy services and are not tied to a specific project or product.
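As a rough sketch of how such token-based licensing might work (the token costs and service names here are invented, not IBM’s actual pricing), customers draw a prepaid pool down against whichever service they invoke:

```python
# A prepaid token pool debited per service invocation, untied to any
# specific project or product. Costs and names are illustrative.
TOKEN_COST = {"performance-test": 5, "security-scan": 3, "load-test": 8}

class TokenPool:
    def __init__(self, tokens: int):
        self.tokens = tokens

    def consume(self, service: str) -> bool:
        """Debit the pool for one use of a service; refuse if exhausted."""
        cost = TOKEN_COST[service]
        if cost > self.tokens:
            return False            # pool exhausted; buy more tokens
        self.tokens -= cost
        return True

pool = TokenPool(10)
print(pool.consume("performance-test"), pool.tokens)  # True 5
print(pool.consume("load-test"), pool.tokens)         # False 5 (not enough)
print(pool.consume("security-scan"), pool.tokens)     # True 2
```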
I admit that I didn’t read the whole article, but then I really didn’t have to. I knew what Marc Benioff, CEO of Salesforce.com, was trying to start. I remember seeing Marc many years ago at an industry conference where he proudly announced the end of software. A nice marketing approach that definitely got everyone’s attention. Of course, at that time Marc was working on a little software as a service environment that became Salesforce.com. The rest, as we like to say, is history. Now, Marc is on a new mission: to attack maintenance fees. While it is clear that Marc is trying to tweak the traditional software market, I think that he is bringing up an interesting subject.
Software maintenance is not a simple topic to cover, and I am sure that I could spend hundreds of pages discussing it because there are so many angles. Maintenance fees began as a way of ensuring that software companies had the revenue to fund development of new functionality in their products. It is, of course, possible to buy software, pay once, and never pay the vendor anything else. Those situations do exist. Ironically, the better designed the software, the less likely it is that customers will need upgrades. But, clearly, that circumstance is rare.
There are major changes taking place in the economics of software. Customers are increasingly unhappy about paying huge yearly maintenance fees to software providers. Some of these fees are clearly justified: software is complex, and vendors are often required to continue to upgrade, add new features, and the like. In other situations, customers are perfectly happy with the software as it is, only want critical problems fixed, and don’t want to pay what they see as exorbitant maintenance fees.
Now, getting back to Marc Benioff’s comments about the end of maintenance. Here is a link from Vinnie Mirchandani’s recent blog on the topic. Marc is making a very important observation. As the world slowly moves to cloud computing for economic reasons, there will be a major impact on how companies pay for software. Salesforce.com has indeed proven that companies are willing to trust their sales and customer data to a Software as a Service vendor. These customers are also willing to pay per-user or per-company yearly fees to rent software. Does this mean that they are no longer paying maintenance fees? My answer would be no. It is all about accounting and economics. Clearly, Salesforce.com spends a lot of money adding functionality to its application, and someone pays for that. So, what part of that monthly or yearly per-user fee is allocated to maintaining the application? Who knows? And I am sure that it is not one of those statistics that Salesforce.com, or any other Software as a Service or Platform as a Service vendor, is going to publish. Why? Because these companies don’t think of themselves as traditional software companies. They don’t expect that anyone will ever own a copy of their code.
The bottom line is that software will never be so good that it needs no maintenance. Software vendors, whether they sell perpetual licenses or Software as a Service, will continue to charge for maintenance. The reality is that the concrete idea of the maintenance fee will evolve over time. Customers will pay it, but they probably won’t see it on their bills. Nevertheless, the impact on traditional software companies will be dramatic over time, and a lot of these companies will have to rethink their strategies. Many software companies have become increasingly dependent on maintenance revenue to keep revenue growing. I think that Marc Benioff has started a conversation that will spark a debate with wide-ranging implications for the future not only of maintenance but of what we think of as software.
It seems like just the other day that our team was busily finishing the first edition of SOA for Dummies. But it has been two years since that book came out, and a lot has changed in that time. When we first wrote the book, we heard from lots of people that they really didn’t know what SOA was and were happy to have a book that would explain it to them in easy to understand language.
Because so much has changed, we were asked to write a second edition of SOA for Dummies, which is coming out on December 19th. What has changed in those two years? Well, first of all, there have been a lot more implementations of SOA. In that first edition, we were happy to have gotten seven case studies. Many of the customers that we talked to (both those featured in the book and those who took the time to speak with us without attribution) were just getting started. They were forming centers of excellence. They were beginning to form partnerships between the business and technical sides of their companies. They were implementing a service bus or building their first sets of services.
In this second edition, we were fortunate to find 24 companies across 9 different verticals willing and able to talk on the record about their experiences implementing SOA. What did we learn? While there is a lot I could say, I’d like to net it out to 5 things we learned:
1. Successful companies spent time on both the key business services and the key business processes before even thinking about implementation.
2. Companies have learned a lot since their initial pilots. They are now focused on how they can increase revenue for their companies through innovation using a service oriented approach.
3. Many companies have a strategic roadmap that they are focused on and therefore are implementing a plan in an incremental fashion.
4. A few companies are creating business services extracted from aging applications. Once this is done, they are mandating the use of these services across the company.
5. Companies that have been working on SOA for the last few years have learned to create modular business services that can have multiple uses. This was much harder than it appeared at first.
There are many other best practices and lessons learned in the case studies. It is interesting to note that just as many companies as said yes were unable to participate because management didn’t want competitors to know what they were doing.
The bottom line is that SOA is beginning to mature. Companies are not just focused on backbone services such as service buses but on making their SOA services reach out to consumers and their business partners.
We have also added a bunch of new chapters to the book. For example, we have new chapters on SOA service management, SOA software development, software quality, component applications, and collaboration within the business process lifecycle. Of course, we have updated all the existing chapters based on the changes we have seen over the last few years.
We are very excited that we had the opportunity to update the book and look forward to continuing the dialog.
While 2004 started out with a whimper for the technology market, it ended with a sense that the momentum that had been missing from the market was finally beginning to take hold. Hurwitz & Associates predicts that the coming year will offer some interesting opportunities as well as challenges. Here are our top ten predictions:
1. Emerging Technologies
Leveraging emerging technologies in innovative ways to transform business will be the key driver in 2005. While many organizations are able to provide a predictable payback from technology acquisition with relative ease, the bar is being raised. Insightful companies are looking for technology to become a core competitive asset. For many industries, technology innovation has the potential to transform business practice. We expect this focus on technology as the foundation for business transformation to become the norm, not the exception. Traditional ROI methodologies will begin to be viewed as outdated. We expect the buying pattern to move away from cost-saving technology purchases toward technology that offers business opportunity.
2. Open Source
While the Open Source market will continue to expand at a rapid rate some customers will begin to experience problems because of poor, undocumented implementations executed by inexperienced contractors. Customers will begin to learn the hard way that all open source is not the same. Companies that provide verification and certification of open source offerings will gain major momentum in the market. More software companies will continue to try to regain market momentum by putting their crown jewels in the open source arena. We predict that many of these efforts will be viewed skeptically and will not be commercially successful.
3. Data Quality
Quality in general will become a massive issue in 2005. This crisis will extend to both data quality and software quality. Data quality, traditionally viewed as a back office function, will begin to emerge as a major crisis in organizations. Studies are showing that few managers have confidence in the quality of their organization’s data. With tough regulations (Sarbanes-Oxley, etc.) bearing down on companies, the quality of data becomes a front office issue. We anticipate that predictable data quality will become a battle cry for many CIOs and their bosses in the coming months. As software becomes the personification of the company, software quality moves out of the isolated QA department.
4. Information Management
How organizations are able to manage their information across departments and across organizational boundaries will be one of the hottest markets in 2005. While many software companies are beginning to leap into this emerging market, most will fail to gain critical mass. Customers will want to buy a well integrated package from a highly trusted source. Therefore, we expect to see many more acquisitions in this market. Those weaker players who are not acquired will go out of business.
5. IT Security
Security is moving from an applications play to an infrastructure play. Today there are thousands of small security companies focused on small pieces of a bigger puzzle. We expect that companies like IBM, CA, HP, Symantec, Novell, and BMC will position themselves for leadership by providing a consolidated set of offerings at both the infrastructure and the application level.
6. IT Security Innovation
Innovation in security will be driven by the need to anticipate problems before they materialize rather than having to react to threats – a move to real-time and away from reactive security. We anticipate the real action in startups will be in this area.
7. Linux
The Linux operating system will continue to gain significant market share at the expense of traditional Unix and Microsoft platforms. Innovative emerging software vendors are increasingly selecting Linux as their platform in order to compete with larger, more established players. The net effect will be a renaissance of innovative applications that do not need the same funding to approach the market. This will have dramatic implications for the SMB market. The barriers to entry are indeed being broken.
8. Middleware
The definition of middleware will change in 2005. We anticipate that what had been viewed as industry-specific packaged software will begin to be seen as corporate infrastructure and middleware. This has the potential to change the balance of power in the market. Oracle’s acquisition of PeopleSoft will start an avalanche of acquisitions by unexpected players who have not been in the packaged software market.
9. Software As A Service
This is the year that software as a service will become the norm. Increasingly, we are seeing customers accept that software can and should be bought as a service rather than in a perpetual license mode. This model will change the dynamics of software: it will be much easier for companies to walk away from their vendor if they become dissatisfied.
10. Software License Management
Being able to more easily manage software licenses will become a major market factor in 2005. Until now, it has been difficult for large organizations to know what software is installed on desktops and laptops throughout the organization. Activity will be driven by a combination of increasingly tight budgets, regulatory demands for accuracy in software fees, and security concerns. Increasingly, companies are unwilling to pay for licenses that users do not access and do not need.