Archive

Archive for the ‘software development’ Category

Why is IBM in the horizontal solutions business?

December 15, 2010 Leave a comment

Every year I attend IBM's software analyst meeting. It is an opportunity to get a snapshot of what the leadership team is thinking and saying. Since I have attended many of these events, it is always instructive to watch the evolution of IBM's software business over the years.

So, what did I take away from this year's conference? In many ways, it was not that much different from what I experienced last year. And I think that is good. When you are a company the size of IBM you can't lurch from one strategy to the next and expect to survive. One of the advantages that IBM has in the market is that it has a well-developed roadmap that it is in the process of executing. It is not easy to execute when you have as many software components as IBM does in its portfolio.

While it isn't possible to discuss all that I learned in my various discussions with IBM executives, I'd like to focus on IBM's solutions strategy and its impact on the software portfolio. From my perspective, IBM has made impressive strides in enforcing a common set of services that underlie its software portfolio. It has been a complicated process that has taken decades and is still a work in progress. As a result, the business units within software are increasingly working together to provide underlying services to each other. For example, Tivoli provides management services to Rational, and Information Management provides data management services to Tivoli. WebSphere provides middleware and service orientation to all of the various business units. Because of this approach, IBM is better able to move to a solutions focus.

It’s about the solutions.

In the late 1990s IBM got out of the applications business in order to focus on middleware, data management, and systems management. This proved to be a successful strategy for the next decade. IBM made a huge amount of money selling WebSphere, DB2, and Tivoli offerings for SAP and Oracle platforms. In addition, Global Services created a profitable business implementing these packaged applications for enterprises.  But the world has begun to change. SAP and Oracle have both encroached on IBM’s software business. Some have criticized IBM for not being in the packaged software business. While IBM is not going into the packaged software business, it is investing a vast amount of money, development effort, and marketing into the “solutions” business.

How is the solutions business different from a packaged application? In some ways they are actually quite similar. Both provide a mechanism for codifying best practices into software, and both are intended to save customers time when they need to solve a business problem. IBM took itself out of the packaged software business just as the market was taking off. Companies like SAP, Oracle, Siebel, PeopleSoft, and hundreds of others were flooding the market with tightly integrated packages. In that period, IBM decided that it would be more lucrative to enable these packaged software companies, which lacked independent middleware and enabling technologies, than to compete with them in the packaged software market.

This turned out to be the right decision for IBM at the time. The packaged software it had developed in the 1980s was actually holding it back. Without the burden of trying to fix broken software, it was able to focus all of its energy and financial strength on its core enabling software business. But as companies like Oracle and SAP cornered the packaged software market and began to expand into enabling software, IBM began to evolve its strategy. IBM's strategy is now a hybrid of the traditional packaged software business and a solutions business based on industry frameworks built from best practices.

So, there are two components in IBM's solutions strategy: horizontal packaged solutions that can be applied across industries, and solution frameworks that are focused on specific vertical markets.

Horizontal Packages. The horizontal solutions that IBM is offering have been based primarily on acquisitions it has made over the past few years. While at first glance they look like any other packaged software, there is a method to what IBM has purchased. Without exception, these acquisitions provide packaged capabilities that are not specific to any one market but are intended to be used in any vertical market. In essence, the packaged solutions that IBM has purchased resemble middleware more than end-to-end solutions. For example, Sterling Commerce, which IBM purchased in August 2010, is a cross-channel commerce platform. Coremetrics, purchased in June, provides web analytics, and Unica was bought for marketing automation of core business processes. While each of these is indeed packaged, they each represent a component of a solution that can be applied across industries.

Vertical Packages. IBM has been working on its vertical market packaging for more than a decade through its Business Services Group (BSG). IBM has taken best practices from its various industry practices and codified these patterns into software components. These components have been unified into solution frameworks for industries such as retail, banking, and insurance. While this has been an active approach within Global Services for many years, there was a major restructuring in IBM's software organization this past year. In January, the software group split into two groups: one focused on middleware and another focused on software solutions. All of the newly acquired horizontal packages provide the underpinning for the vertical framework-based software solutions.

Leading with the solution. IBM software has changed dramatically over the past several decades. The solutions focus does not stop with the changes within the software business units themselves; it extends to hardware as well. Increasingly, customers want to be able to buy their solutions as a package without having to buy the piece parts. A solution focus that encompasses solutions, middleware, appliances, and hardware is the strategy that IBM will take into the coming decade.

Predictions for 2011: getting ready to compete in real time

December 1, 2010 3 comments

2010 was a transition year for the tech sector. It was the year when cloud suddenly began to look realistic to the large companies that had scorned it. It was the year when social media suddenly became serious business. And it was the year when hardware and software began to be united as a platform – something like the old mainframe days, but different because of high-level interfaces and modularity. Important trends also started to emerge, such as the importance of managing information both across the enterprise and among partners and suppliers. Competition for ownership of the enterprise software ecosystem heated up, as did competition for leadership of the emerging cloud computing ecosystem.

So, what do I predict for this coming year? While at the outset it might look like 2011 will be a continuation of what has been happening this year, I think there will be some important changes that will impact the world of enterprise software for the rest of the decade.

First, I think it is going to be a very big year for acquisitions. Now I have said that before and I will say it again. The software market is consolidating around major players that need to fill out their software infrastructure in order to compete. It will come as no surprise if HP begins to purchase software companies if it intends to compete with IBM and Oracle on the software front.  But IBM, Oracle, SAP, and Microsoft will not sit still either.  All these companies will purchase the incremental technology companies they need to compete and expand their share of wallet with their customers.

This will be a transitional year for the up-and-coming players like Google, Amazon, Netflix, Salesforce.com, and others that haven't hit the radar yet. These companies are plotting their own strategies to gain leadership and will continue to push the boundaries in search of dominance. As they push upmarket and grab market share, they will face the familiar problem of supporting customers who will expect them to act like adults.

Customer support, in fact, will bubble to the top of the issues for emerging as well as established companies in the enterprise space – especially as cloud computing becomes a well-established distribution and delivery platform for computing. All these companies, whether well established or startups, will have to balance the requirement to provide sophisticated customer support with the need to make a profit. This will impact everything from license and maintenance revenue to how companies charge for consulting and support services.

But what will customers be looking for in 2011? Customers are always looking to reduce their IT expenses – that is a given. However, the major change in 2011 will be the need to innovate around customer-facing initiatives. Of course, the idea of focusing on customer-facing software isn't new, but there are some subtle changes. The new initiatives are based on leveraging social networking in a secure way, both to drive business traffic and to anticipate customer needs and issues before they become problems. Companies will spend money innovating on customer relationships.

Cloud computing is the other big issue for 2011. While it was clearly a major differentiator in 2010, the cloud will take an important leap forward in 2011. While companies were testing the water this year, next year they will be looking at best practices in cloud computing. 2011 will be the year when customers focus on three key issues: data integration across public clouds, private clouds, and data centers; manageability, in terms of workload optimization and overall performance; and security. The vendors that can demonstrate the right level of service across cloud-based services will win significant business. These vendors will increasingly focus on expanding their partner ecosystems as a way to lock customers into their cloud platforms.

Most importantly, 2011 will be the year of analytics. The technology industry continues to generate data at a pace never seen before. But what can we do with this data? What does it mean for organizations' ability to make better business decisions and to prepare for an unpredictable future? The traditional data warehouse is simply too slow to be effective. 2011 will be the year when predictive analytics and information management overall emerge as among the hottest and most important initiatives.

Now I know that we all like lists, so I will take what I've just said and put it into my top ten predictions:

1. Both today's market leaders and upstarts are going to continue to acquire assets to become more competitive. Many emerging startups will be scooped up before they see the light of day. At the same time, there will be almost as many startups emerging as we saw in the dot-com era.

2. Hardware will continue to evolve in a new way. The market will move away from hardware as a commodity. The hardware platform in 2011 will be differentiated based on software and packaging. 2011 will be the year of smart hardware packaged with enterprise software, often as appliances.

3. Cloud computing models will put extreme pressure on everything from software license and maintenance pricing to customer support. Integration between different cloud computing models will be front and center. The cloud model is moving out of risk-averse pilots into serious deployments. Best practices will emerge as a major issue for customers that see the cloud as a way to boost innovation and the rate of change.

4. Managing highly distributed services in a compliant and predictable manner will take center stage. Service management and service level agreements across cloud and on-premises environments will become a prerequisite for buyers.

5. Security software will be redefined by the challenges of customer-facing initiatives and the need to open the corporate environment more aggressively to support a constantly morphing relationship with customers, partners, and suppliers.

6. The fear of lock-in will reach a fever pitch in 2011. SaaS vendors will increasingly add functionality to tighten their grip on customers. Traditional vendors will purchase more of the components needed to support the lifecycle needs of customers. How can everything be integrated from a business process and data integration standpoint and still allow for portability? Today, the answers are not there.

7. The definition of an application is changing. The traditional view that the packaged application is hermetically sealed is going away. More of the new packaged applications will be built on service orientation and best practices. These applications will be parameter-driven so that they can be changed in real time. And yes, Service Oriented Architecture (SOA) didn't die after all.

8. Social networking grows up and will become business social networking. These initiatives will be driven by line-of-business executives as a way to engage with customers and employees, gain insights into trends, and fix problems before they become widespread. Companies will leverage social networking to enhance agility and enable new business models.

9. Managing endpoints will be one of the key technology drivers in 2011. Smartphones, sensors, and tablet computers are redefining what computing means. This will drive the requirement for a new approach to role- and process-based security.

10. Data management and predictive analytics will explode based on both the need to understand traditional information and the need to manage data coming from new sales and communications channels.

The bottom line is that 2011 will be the year when the seeds that have been planted over the last few years are ready to become the drivers of a new generation of innovation and business change. Put together everything from the flexibility of service orientation and business process management innovation to the widespread impact of social and collaborative networks and the new delivery and deployment models of the cloud. Now apply the tools to harness these environments: service management, new security platforms, and analytics. From my view, innovative companies are grabbing these threads of technology and focusing on outcomes. 2011 is going to be an important transition year. The corporations that get this right and transform themselves so that they are ready to change on a dime can win – even if they are smaller than their competitors.

What will it take to achieve great quality of service in the cloud?

November 9, 2010 1 comment

You know that a market is about to transition out of its early fantasy stage when IT architects begin talking about traditional IT requirements. Why do I bring this up? I had a fascinating conversation yesterday with a leading architect in charge of the cloud strategy for an important company that is typically on the bleeding edge of technology. Naturally, I am not allowed to name the company or the person. But let me just say that individuals and companies like this are the first to grapple with issues such as the need for a registry for web services or the complexity of creating business services that are both reusable and embody business best practices. They are the first companies to try out artificial intelligence to see if it can automate tasks that require sophisticated reasoning.

These innovators tend to get blank stares from their cohorts in more traditional IT departments, who are grappling with mundane issues such as keeping systems running efficiently. Leading-edge companies have the luxury of pushing the bounds of what is possible. There is a tremendous amount to be learned from their experiments with technology. In fact, there is often more to be learned from their failures than from their successes, precisely because they are pushing the boundary of what is possible with current technology.

So, what did I take away from my conversation? From my colleague's view, the cloud today is about “how many virtual machines you need, how big they are, and linking those VMs to storage.” Not a very compelling picture, but it is his perception of the reality of the cloud today. His view of the future requirements is quite intriguing.

I took away six key issues that this advanced planner would like to see in the evolution of cloud computing:

One. Automation of placement of assets is critical. Where you actually put capability matters. For example, there are certain workloads that should never leave the physical data center because of regulatory requirements. If an organization were dealing with huge amounts of data, it would not be efficient to scatter elements of that data across different cloud environments. What about performance issues? What if a task needs to be completed in 10 seconds, or what if it needs to be completed in 5 milliseconds? There are many decisions that need to be made based on corporate requirements. Should the placement of workloads be decided by hand, case by case? The answer is no. There should be an automated process, based on business rules, that determines the actual placement of cloud services (see the sketch after this list).

Two. Avoiding concentration of risk. How do you actually place core assets onto a hypervisor? If, for example, you have a highly valuable set of services that are critical to decision makers, you might want to ensure that they run on different hypervisors, based on automated management processes and rules.

Three. Quality of service needs a control fabric. If you are a customer of hybrid cloud computing services, you might need access to the code that tells you what tasks a management tool is actually doing. What does that tool actually touch in the cloud environment? What do the error messages mean, and what are their implications? Today many cloud services are black boxes; there is no way for the customer to really understand what is happening behind the scenes. If companies are deploying truly hybrid environments that support a mixed workload, this type of access to the workings of the tools that monitor and manage quality of service will be critical. From a quality of service perspective, some applications will require dedicated bandwidth to meet requirements; others will not need any special treatment.

Four. Cloud service providers building shared services need an architectural plan that lets them control those services as a unit of work. These services will be shared across departments as well as across customers. How do you connect these services? While it might seem simple at the 50,000-foot level, it is actually quite complex, because we are talking about linking a set of services together to build a coherent platform. Therefore, as with building any system, there is a requirement to model the “system of services”, then deploy that model, and finally reconcile and tune the results.

Five. Standard APIs protect customers. Should the APIs for all cloud services be published and accessible? If companies are to have the freedom to move easily and efficiently between and among cloud services, then APIs need to be well understood. For example, a company may be using a vendor's cloud service and discover a tool that addresses a specific problem. What if that vendor doesn't support that tool? In essence, the customer is locked out from using it. This becomes a problem immediately for innovators. However, it is also an issue for traditional companies that begin to work with cloud computing services and over time realize that they need more service management and more oversight.

Six. Managing containers may be key to service management in the cloud. A well-designed cloud service has to be service oriented. It needs to be placed in a container without dependencies, since customers will use services in different ways. Therefore, each service needs a set of parameter-driven configurators so that the rules of usage and management are clear. What version of which cloud service should be used under what circumstances? What if the service is designed to execute backups? Can that backup happen across the globe, or should it be done in proximity to the data assets? These management issues will become the most important issues for cloud providers in the future.
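To make the placement and configuration ideas above a bit more concrete, here is a minimal sketch of what rule-driven placement might look like. Everything in it is hypothetical – the rule logic, the Workload fields, and the hypervisor names are mine, not from any vendor – but it shows the pattern of encoding corporate requirements as rules rather than deciding placement by hand.

```python
# A toy, hypothetical sketch of rule-driven workload placement.
# None of these names come from any vendor's product; they only
# illustrate encoding placement policy as data plus simple rules.

from dataclasses import dataclass, field

@dataclass
class Workload:
    name: str
    regulated: bool = False          # must stay in the corporate data center?
    max_latency_ms: int = 1000       # performance requirement
    critical: bool = False           # avoid concentrating risk on one host
    config: dict = field(default_factory=dict)  # parameter-driven settings

def place(workload, used_hypervisors):
    """Return (location, hypervisor) based on business rules."""
    # Rule 1: regulatory requirements pin the workload to the data center.
    if workload.regulated:
        location = "on-premises data center"
    # Rule 2: tight latency budgets rule out distant public regions.
    elif workload.max_latency_ms < 10:
        location = "private cloud (local)"
    else:
        location = "public cloud"

    # Rule 3: anti-affinity -- a critical service goes on a hypervisor
    # that is not already hosting another critical service.
    hypervisor = None
    if workload.critical:
        candidates = ["hv-a", "hv-b", "hv-c"]
        hypervisor = next(h for h in candidates if h not in used_hypervisors)
        used_hypervisors.add(hypervisor)

    return location, hypervisor

if __name__ == "__main__":
    used = set()
    jobs = [
        Workload("payments", regulated=True, critical=True,
                 config={"backup_locality": "same-region"}),
        Workload("web-analytics", max_latency_ms=500),
    ]
    for job in jobs:
        print(job.name, "->", place(job, used))
```

The point is simply that the rules live in one automated place, so placement, risk concentration, and per-service configuration can be reasoned about and audited instead of being decided ad hoc.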

The best thing about talking to people like this architect is that it makes you think about issues that aren't part of today's cloud discussions. These are difficult issues to solve. However, many of them have been addressed for decades in earlier iterations of technology architecture. Yes, the cloud is a different delivery and deployment model for computing, but it will evolve as many other architectures have. Putting quality of service, service management, configuration, and policy rules at the forefront will help transform cloud computing into a mature and effective platform.



What are the unanticipated consequences of the cloud – part II

October 29, 2009 9 comments

As I was pointing out yesterday, there are many unintended consequences from any emerging technology platform — the cloud will be no exception. So, here are my next three picks for unintended consequences from the evolution of cloud computing:

4. The cloud will disrupt traditional computing sales models. I think that Larry Ellison is right to rant about cloud computing. He is clearly aware that if cloud computing becomes the preferred way for customers to purchase software, the traditional model of paying maintenance on applications will change dramatically. Clearly, vendors can simply roll the maintenance stream into per-user, per-month pricing. However, as I pointed out in Part I, prices will inevitably go down as competition for customers expands. There will come a time when the vast sums of money collected to maintain software versions will seem a bit old fashioned. In fact, that will be one of the most important unintended consequences, and it will have a very disruptive effect on the economic models of computing. It has the potential to change the power dynamics of the entire hardware and software industries. The winners will be the customers and the smart vendors who figure out how to make money without direct maintenance revenue. Like every other unintended consequence, new models will emerge that will make some really clever vendors very successful. But don't ask me what they are. It is just too early to know.

5. The market for managing cloud services will boom. While service management vendors do pretty well today managing data center based systems, the cloud environment will make these vendors kings of the hill. Think about it like this. You are a company that is moving to the cloud. You have seven different software-as-a-service offerings from seven different vendors. You also have a small private cloud that you use to provision critical customer data. You use a public cloud for some large-scale testing. In addition, any new software development is done in a public cloud and then moved into the private cloud when it is completed. Existing workloads like ERP systems and legacy systems of record remain in the data center. All of these components put together are the enterprise computing environment. So, what is the service level of this composite environment? How do you ensure that you are compliant across these environments? Can you ensure security and performance standards? A new generation of products, and maybe a new generation of vendors, will rake in a lot of cash solving this one (a rough sketch of the monitoring problem appears after this list).

6. What will processes look like in the cloud? Like data, processes will have to be decoupled from the applications of record that they are an integral part of today. Now, I don't expect that we will rip processes out of every system of record. In fact, static systems such as ERP, HR, etc. will keep their tightly integrated processes. However, the dynamic processes that need to change as the business changes will have to be designed without these constraints. They will become trusted processes – sort of like business services that are codified but can be reconfigured when the business model changes. This will probably happen anyway with the spread of Service Oriented Architecture. However, with the flexibility of the cloud environment, this trend will accelerate. The need for independent processes and process models may have the potential to create a brand new market.
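To illustrate the composite service-level question raised in consequence #5, here is a minimal, hypothetical sketch of rolling up service levels across a mixed environment. The service names, availability numbers, and helper functions are all invented for illustration; real service management products obviously go far beyond this.

```python
# A hypothetical sketch: rolling up service levels across a mixed
# environment of SaaS, public cloud, private cloud, and data center.

from dataclasses import dataclass

@dataclass
class ServiceReport:
    name: str            # e.g. a SaaS CRM, a private cloud, a legacy ERP
    availability: float  # measured availability over the period (0.0 - 1.0)
    sla_target: float    # the availability promised in the contract

def worst_service_level(reports):
    """The composite environment is only as good as its weakest service."""
    return min(reports, key=lambda r: r.availability)

def compliance_gaps(reports):
    """Which services missed their contractual targets this period?"""
    return [r for r in reports if r.availability < r.sla_target]

if __name__ == "__main__":
    period = [
        ServiceReport("saas-crm",          0.9995, 0.999),
        ServiceReport("public-cloud-test", 0.9950, 0.990),
        ServiceReport("private-cloud",     0.9990, 0.999),
        ServiceReport("datacenter-erp",    0.9980, 0.999),  # misses its target
    ]
    print("Weakest link:", worst_service_level(period).name)
    print("Out of compliance:", [r.name for r in compliance_gaps(period)])
```

Even this toy version makes the point: once the "environment" is really seven vendors plus two clouds plus a data center, someone has to collect and reconcile these numbers continuously, and that is the opportunity for service management vendors.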

I am happy to add more unintended consequences to my top six. Send me your comments and we can start a part III reflecting your ideas.

Is application portability possible in the cloud?

October 8, 2009 1 comment

Companies are trying to get a handle on the costs involved in running data centers. In fact, this is one of the primary reasons that companies are looking to cloud computing to make the headaches go away. Like everything else in the complex world of computing, clouds solve some problems, but they also cause the same type of lock-in problems that our industry has experienced for decades.

I wanted to add a little perspective before I launch into my thoughts about portability in the cloud. I was thinking about traditional data centers and how their performance has long been hampered by their lack of homogeneity. The typical data center is filled with a warehouse of different hardware platforms, operating systems, applications, and networks – to name but a few. You might want to think of them as archeological digs, tracing the history of the computer industry. To protect their turf, vendors each came up with their own platforms, proprietary operating systems, and specialized applications that would only work on a single platform.

In addition to the complexities involved in managing this type of environment, the applications that run in these data centers are also trapped. In fact, one of the main reasons that large IT organizations ended up with so many different hardware platforms running a myriad of operating systems was that applications were tightly intertwined with the operating system and the underlying hardware.

As we begin to move towards the industrialization of software, there has been an effort to separate the components of computing so that application code is independent of the underlying operating system and hardware. This has been the allure of both service oriented architectures and virtualization. Service orientation has enabled companies to create clean web services interfaces and to create business services that can be reused in a lot of different situations. SOA has taught us the business benefits that can be gained from encapsulating existing code so that it is isolated from other application code, operating systems, and hardware (a small illustration follows below).
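As a small illustration of that encapsulation idea, here is a hypothetical sketch of wrapping an existing routine behind a clean, reusable service interface. The function and class names are mine, not anything from a real product; the point is only that callers depend on the interface, not on where or how the legacy logic runs.

```python
# A hypothetical sketch of encapsulating existing code behind a
# clean service interface, so callers never touch the legacy logic
# (or the platform it happens to run on) directly.

from abc import ABC, abstractmethod

def legacy_credit_check(raw_record: str) -> int:
    """Stand-in for existing code we do not want to rewrite."""
    # Imagine decades-old logic here; we only wrap it, we don't change it.
    return 700 if "good" in raw_record else 550

class CreditCheckService(ABC):
    """The reusable business service interface that callers depend on."""
    @abstractmethod
    def score(self, customer_id: str) -> int: ...

class LegacyCreditCheckService(CreditCheckService):
    """Adapter that hides the legacy implementation behind the interface."""
    def __init__(self, record_store: dict):
        self._records = record_store

    def score(self, customer_id: str) -> int:
        return legacy_credit_check(self._records[customer_id])

if __name__ == "__main__":
    service: CreditCheckService = LegacyCreditCheckService(
        {"c-1": "good standing", "c-2": "missed payments"})
    print(service.score("c-1"), service.score("c-2"))
```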

Server virtualization takes the existing “clean” interface between the hardware and the software and separates the two. One benefit, which has fueled rapid adoption and market growth, is that no software needs to be rewritten, because the virtualization layer presents the same x86 instruction set to the software above it. As server virtualization moves into the data center, companies can consolidate a massive number of dramatically underutilized machines onto far fewer machines that are used much more efficiently. The resulting cost savings include reductions in physical boxes, maintenance, overhead, heating, cooling, and power.

Server virtualization has enabled users to create virtual images to recapture some efficiency in the data center.  And although it fixes the problem of operating systems bonded to hardware platforms, it does nothing to address the intertwining of applications and operating systems.

Why bring this issue up now? Don't we have hypervisors that take care of all of our problems of separating operating systems from applications? Can't companies simply spin up another virtual image and call it a day? I think the answer is no – especially given the projected growth of the cloud environment.

I got to thinking about this issue after a fascinating conversation with Greg O'Connor, CEO of AppZero. AppZero's value proposition is quite interesting. In essence, AppZero provides an environment that separates the application from the underlying operating system, effectively moving up to the next level of the stack.

The company's focus is particularly on the Windows operating system, and for good reason. Unlike Linux or z/OS, Windows does not allow applications to operate in their own partitions. Partitions effectively isolate applications from one another, so that if something bad happens to one application it cannot affect another. Because it is not possible to separate or isolate applications in a Windows-based server environment, when something goes wrong with one application it can hurt the rest of the system and the other applications running in Windows.

In addition, when an application is installed on Windows, DLLs (Dynamic Link Libraries) are often loaded into the operating system. DLLs are shared across applications, and installing a new application can overwrite the DLL version that another application depends on. As you can imagine, this conflict can have really bad side effects.

Even when applications are installed on different servers – physical or virtual – installing software on Windows is a complicated business. Applications create registry entries, modify the registry entries of shared DLLs, and copy new DLLs over shared libraries. This arrangement works fine unless you want to move that application to another environment. Movement requires a lot of work for the organization making the transition to another platform. It is especially complicated for independent software vendors (ISVs) that need to be able to move their applications to whichever platform their customers prefer.

The problem gets even more complex when you start looking at issues related to Platform as a Service (PaaS). With PaaS, a customer is using a cloud service that includes everything from the operating system to application development tools and a testing environment. Many PaaS vendors have created their own languages to be used to link components together. While there are benefits to having a well-architected development and deployment cloud platform, there is a huge danger of lock-in. Now, most of the PaaS vendors that I have been talking to promise that they will make it easy for customers to move from one cloud environment to another. Of course, I always believe everything a vendor tells me (that is meant as a joke… to lighten the mood), but I think that customers have to be wary about these claims of interoperability.

That is why I was intrigued by AppZero's approach. Since the company decouples the operating system from the application code, it provides portability of a pre-installed application from one environment to the next. The company positions its approach as a virtual application appliance. In essence, this software is designed as a layer that sits between the operating system and the application. This layer intercepts file I/O and shared memory I/O, as well as specific DLLs, and keeps them in separate “containers” that are isolated from the application code.

Therefore, the application does not change any of the files or registry entries on the Windows server. In this way, a company can run a single instance of the Windows Server operating system for multiple applications. In essence, the approach isolates the applications, along with their specific dependencies and configurations, from the operating system, so a Microsoft Windows Server based data center requires fewer operating system instances to manage.
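To make the general pattern more concrete – and emphatically not as a description of how AppZero itself is implemented – here is a toy sketch of the idea: intercept an application's file writes and redirect them into a per-application directory, so shared system locations are never touched. The class and path names are invented for illustration.

```python
# A toy illustration of the general "redirect writes into a per-app
# container" pattern. This is NOT how AppZero works internally; it only
# shows the idea of keeping an application's changes out of shared
# system locations so the underlying OS image stays untouched.

import os

class AppContainer:
    def __init__(self, app_name: str, container_root: str = "./containers"):
        self.root = os.path.join(container_root, app_name)
        os.makedirs(self.root, exist_ok=True)

    def _redirect(self, path: str) -> str:
        """Map a path the app *thinks* it is writing to into its container."""
        return os.path.join(self.root, path.lstrip("/\\").replace(":", ""))

    def write_file(self, intended_path: str, data: bytes) -> str:
        """The app asks to write to a shared location; we keep it private."""
        real_path = self._redirect(intended_path)
        os.makedirs(os.path.dirname(real_path), exist_ok=True)
        with open(real_path, "wb") as f:
            f.write(data)
        return real_path

if __name__ == "__main__":
    app = AppContainer("legacy-billing")
    # The application believes it is overwriting a shared library;
    # in reality the bytes land inside its own isolated container.
    where = app.write_file("/Windows/System32/shared.dll", b"new version")
    print("redirected to:", where)
```

Because each application's changes live in its own container, the container (not the operating system image) becomes the thing you move between environments.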

AppZero enables the user to load an application from the network rather than from the local disk. It therefore should simplify the job of data center operations management by enabling a single application image to be provisioned to multiple environments, and by making it easier to keep track of changes within a Windows environment because the application isn't tied to a particular OS. AppZero has found a niche selling its offerings to ISVs that want to move their offerings across different platforms without having people install the application. By having the application pre-installed in a virtual application appliance, the ISV can remove many of the errors that occur when a customer installs the application into their own environment. An application delivered in a virtual application appliance container greatly reduces the variability of components that might affect the application in a traditional installation process. In addition, the company has established partnerships with both Amazon and GoGrid.

So, what does this have to do with portability and the cloud? It seems to me that this approach of separating layers of software, so that interdependencies do not interfere with portability, is one of the key ingredients of software portability in the cloud. Clearly, it isn't the only issue to be solved. There are issues such as standard interfaces, standards for security, and the like (the sketch below shows the kind of common interface I have in mind). But I expect that many of these problems will be solved by a combination of lessons learned from existing standards for the Internet, web services, service orientation, and systems and network management. We'll be OK, as long as we don't try to reinvent everything that has already been invented.
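As a hedged illustration of the standard-interface point, here is a minimal sketch of hiding different cloud providers behind one interface so that application code stays portable. The method names and the two provider classes are invented stand-ins, not real SDK calls; real provider APIs differ, which is exactly why an agreed abstraction or standard matters.

```python
# A minimal, hypothetical sketch of the "standard interface" idea:
# application code talks to one abstraction, and each provider is an
# adapter behind it. The provider classes here are invented stand-ins,
# not real cloud SDK calls.

from abc import ABC, abstractmethod

class CloudProvider(ABC):
    """The interface the application depends on; providers plug in below."""
    @abstractmethod
    def launch_instance(self, image: str, size: str) -> str: ...
    @abstractmethod
    def store_object(self, bucket: str, key: str, data: bytes) -> None: ...

class ProviderA(CloudProvider):
    def launch_instance(self, image, size):
        return f"providerA-vm-{image}-{size}"    # pretend API call
    def store_object(self, bucket, key, data):
        print(f"providerA: stored {len(data)} bytes at {bucket}/{key}")

class ProviderB(CloudProvider):
    def launch_instance(self, image, size):
        return f"providerB-node-{image}-{size}"  # different provider, same interface
    def store_object(self, bucket, key, data):
        print(f"providerB: stored {len(data)} bytes at {bucket}/{key}")

def deploy(cloud: CloudProvider):
    """Application code never mentions a specific vendor."""
    vm = cloud.launch_instance(image="app-image-1", size="small")
    cloud.store_object("backups", "config.json", b"{}")
    return vm

if __name__ == "__main__":
    for provider in (ProviderA(), ProviderB()):
        print(deploy(provider))
```

The design choice is the same one SOA taught us: keep the dependency pointing at an interface, and the thing behind it can change without dragging the application along.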

Ten things I learned while writing Cloud Computing for Dummies

August 14, 2009 14 comments

I haven't written a blog post in quite a while. Yes, I feel bad about that, but I think I have a good excuse: I have been hard at work (along with my colleagues Marcia Kaufman, Robin Bloor, and Fern Halper) on Cloud Computing for Dummies. I will admit that we underestimated the effort. We thought that since we had already written Service Oriented Architectures for Dummies (twice) and Service Management for Dummies, Cloud Computing would be relatively easy. It wasn't. Over the past six months we have learned a lot about the cloud and where it is headed. Rather than try to rewrite the entire book right here, I thought I would give you a sense of some of the important things I have learned. I will hold myself to ten so that I don't go overboard!

1. The cloud is both old and new at the same time. It is built on the knowledge and experience of timesharing, Internet services, application service providers, hosting, and managed services. So, it is an evolution, not a revolution.

2. There are lots of shades of gray in cloud segmentation. Yes, there are three buckets that we put clouds into: infrastructure as a service, platform as a service, and software as a service. Now, that's nice and simple. However, it isn't, because all of these areas are starting to blur into each other. And it is even more complicated because there is also business process as a service. This is not a distinct market unto itself; rather, it is an important component of the cloud in general.

3. Market leadership is in flux. Six months ago the marketplace for cloud was fairly easy to figure out. There were companies like Amazon and Google and an assortment of other pure-play companies. That landscape is shifting as we speak. The big guns like IBM, HP, EMC, VMware, Microsoft, and others are rushing in. They would like to control the cloud. It is indeed a market where big players will have a strategic advantage.

4. The cloud is an economic and business model. Business management wants the data center to be easily scalable, predictable, and affordable. As it becomes clear that IT is the business, the industrialization of the data center follows. The economics of the cloud are complicated because so many factors are important: the cost of power, the cost of space, and the existing resources (hardware, software, and personnel, and how well they are utilized). Determining the most economical approach is harder than it might appear.

5. The private cloud is real. For a while there was a raging debate: is there such a thing as a private cloud? It has become clear to me that there is indeed. A private cloud is the transformation of the data center into a modular, service-oriented environment that enables users to safely procure infrastructure, platform, and software services in a self-service manner. This may not be a replacement for an entire data center; a private cloud might be a portion of the data center dedicated to certain business units or certain tasks.

6. The hybrid cloud is the future. The future of the cloud is a combination of private clouds, traditional data centers, hosting, and public clouds. Of course, there will be companies that use public cloud services for everything, but the majority of companies will end up with a combination of cloud services.

7. Managing the cloud is complicated. This is not just a problem for the vendors providing cloud services. Any company using cloud services needs to be able to monitor service levels across the services they use. This will only get more complicated over time.

8. Security is king in the cloud. Many of the customers we talked to are scared about the security implications of putting their valuable data into a public cloud. Is it safe? Will my data cross country borders? How strong is the vendor? What if it goes out of business? This issue is causing many customers either to consider only a private cloud or to hold back altogether. The vendors who succeed in the cloud will have to have a strong brand that customers trust. Security will always be a concern, but it will be addressed by smart vendors.

9. Interoperability between clouds is the next frontier. In these early days customers tend to buy one service at a time for a single purpose — Salesforce.com for CRM, some compute services from Amazon, etc. Over time, however, customers will want more interoperability across these platforms. They will want to be able to move their data and their code from one environment to another. There is some forward movement in this area, but it is early. There are few standards for the cloud and little agreement.

10. The cloud in a box. There is a lot of packaging going on out there, and it comes in two forms. Some companies are creating appliance-based environments for managing virtual images. Other vendors (especially the big ones like HP and IBM) are packaging their cloud offerings with their hardware for companies that want private clouds.

I have only scratched the surface of this emerging market. What makes it so interesting and so important is that it is actually the coalescing of computing. It incorporates everything from hardware, management software, service orientation, security, software development, and information management to the Internet, service management, interoperability, and probably a dozen other components that I haven't mentioned. It is truly the way we will achieve the industrialization of software.

Five things I learned at IBM’s Rational Conference

June 9, 2009 3 comments

I hadn't been to IBM's Rational conference in a couple of years, so I was very interested to see not just what IBM had to say about the changing landscape of software development but how the customers attending the conference had changed. I was not disappointed. While I could write a whole book on the changes happening in software development (I have enough problems already), I thought I would mention some of the aspects of the conference that I found noteworthy.

One. Rational is moving from being a tools company to being a software development platform. Rational has always been a complex organization to understand, since it has evolved and changed so much over the years. The organization now seems to have found its focus.

Two. More management, fewer low-level developers. In the old days, conferences like this would be dominated by programmers. While there were many developers in attendance, I found that there were also a lot of upper-level managers. For example, I sat at lunch with one CIO who was in the process of moving to a sophisticated service oriented architecture. Another person at my table was a manager looking to update his company's current development platforms. Still another was a customer of one of the companies that IBM had purchased, looking to understand how to implement new capabilities added since the acquisition.

Three. Rational has changed dramatically through acquisitions. Rational is a tale of acquisitions. Rational Software, the linchpin of IBM's software development division, was itself a combination of many acquisitions. Before being bought by IBM in 2002 for $2.1 billion, Rational had acquired an impressive array of companies including Requisite, SQA, Performance Aware, Pure-Atria, and Object Time Ltd. After a period of absorption, IBM started acquiring more assets: BuildForge (build and release management) was purchased in 2006; Watchfire (web application security vulnerability and compliance testing software) was bought in 2007; and Telelogic (requirements management) was purchased in 2008.

It has taken IBM a while both to absorb all of the acquisitions and to create a unified architecture so that these software products can share components and interoperate. While IBM is not done, under the leadership of General Manager Danny Sabbah, Rational has made the transition from being a tools company to becoming a platform for managing software complexity. It is a work in progress.

Four. It's all about Jazz. Jazz, IBM's collaboration platform, was a major focus of the conference. Jazz is an architecture intended to integrate data and function. Its foundation is the REST architectural style, so it is well positioned for use in Web 2.0 applications. What is most important is that IBM is bringing all of its Rational technology under this model. Over the next few years, we can expect to see this framework under all of Rational's products.

Five. Rational doesn't stand alone. It is easy to focus on the Rational portfolio alone (which could take a while). But what I found quite interesting was the emphasis on the intersection between the Rational platform and Tivoli's management services, as well as WebSphere's Service Oriented Architecture offerings. Rational also made a point of highlighting the use of collaboration elements provided by the Lotus division. Cloud computing was another major focus of discussion at the event. While many customers at the event are evaluating the potential of using various Rational products in the cloud, it is early. The one area where IBM seems to have hit a home run is its CloudBurst appliance, which is intended to create and manage virtual images. Rational is also beginning to deliver its testing offerings as cloud-based services. One of the most interesting elements of its approach is the use of tokens as a licensing model. In other words, customers purchase a set number of tokens, or virtual licenses, that can be used to acquire services that are not tied to a specific project or product (a simple illustration of the idea follows).
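To illustrate the general shape of a token-based licensing model (this is my own toy illustration, with invented service names and token costs, not IBM's actual pricing mechanics), here is a short sketch: a shared pool of tokens is drawn down by whichever service a team decides to use, with no token tied to a particular project or product.

```python
# A toy illustration of a token-based licensing pool. The token costs
# and service names are invented; this is not IBM's actual model.

class TokenPool:
    def __init__(self, tokens: int):
        self.tokens = tokens
        self.usage = []  # (service, tokens_spent) history

    def redeem(self, service: str, cost: int) -> bool:
        """Spend tokens on any service; tokens are not tied to a project."""
        if cost > self.tokens:
            return False  # pool exhausted; time to buy more tokens
        self.tokens -= cost
        self.usage.append((service, cost))
        return True

if __name__ == "__main__":
    pool = TokenPool(100)
    pool.redeem("performance-testing", 30)   # used by project A
    pool.redeem("requirements-tooling", 25)  # used by project B, same pool
    print("remaining tokens:", pool.tokens)
    print("usage:", pool.usage)
```

The appeal for customers is flexibility: capacity is purchased once and then allocated to whatever mix of services the projects of the moment actually need.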