You know that a market is about to transition out of the early fantasy stage when IT architects begin talking about traditional IT requirements. Why do I bring this up? I had a fascinating conversation yesterday with a leading architect in charge of the cloud strategy for an important company that is typically on the bleeding edge of technology. Naturally, I am not allowed to name the company or the person. But let me just say that individuals and companies like this are the first to grapple with issues such as the need for a registry for web services or the complexity of creating business services that are both reusable and embody business best practices. They are the first companies to try out artificial intelligence to see if it can automate tasks that require complex reasoning.
These innovators tend to get blank stares from their cohorts in traditional IT departments who are grappling with mundane issues such as keeping systems running efficiently. Leading-edge companies have the luxury of pushing the bounds of what is possible. There is a tremendous amount to be learned from their experiments with technology. In fact, there is often more to be learned from their failures than from their successes, because they are pushing the boundary of what is possible with current technology.
So, what did I take away from my conversation? From my colleague’s view, the cloud today is about “how many virtual machines you need, how big they are, and linking those VMs to storage.” Not a very compelling picture, but it is his perception of the reality of the cloud today. His view of the future requirements is quite intriguing.
I took away six key issues that this advanced planner would like to see in the evolution of cloud computing:
One. Automation of placement of assets is critical. Where you actually put capability matters. For example, there are certain workloads that should never leave the physical data center because of regulatory requirements. If an organization were dealing with huge amounts of data, it would not be efficient to scatter elements of that data across different cloud environments. What about performance issues? What if a task needs to be completed in 10 seconds, or what if it needs to be completed in 5 milliseconds? There are many decisions that need to be made based on corporate requirements. Should the placement of each workload be hard-coded programmatically? The answer is no. There should be an automated process, based on business rules, that determines the actual placement of cloud services.
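A minimal sketch of what such rule-driven placement might look like, in Python. The zone names, rule order, and thresholds are all invented for illustration; the point is simply that the rules are data-driven decisions, not one-off programming:

```python
# Hypothetical rule-driven placement engine. Zones, rules, and thresholds
# are illustrative, not a real product's API.

def choose_placement(workload):
    """Pick a placement zone for a workload from declarative business rules."""
    # Rule: regulated workloads never leave the physical data center.
    if workload.get("regulated"):
        return "on-premise"
    # Rule: latency-critical tasks (say, a 5 ms budget) stay close to users.
    if workload.get("max_latency_ms", float("inf")) < 100:
        return "private-cloud"
    # Default rule: everything else goes to the cheapest public cloud.
    return "public-cloud"

print(choose_placement({"regulated": True}))    # on-premise
print(choose_placement({"max_latency_ms": 5}))  # private-cloud
print(choose_placement({}))                     # public-cloud
```

In practice the rules would live in a policy store that the business can change without redeploying anything; the code above just shows the shape of the decision.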
Two. Avoiding concentration of risk. How do you actually place core assets onto hypervisors? If, for example, you have a highly valuable set of services that are critical to decision makers, you might want to ensure that they run on different hypervisors, based on automated management processes and rules.
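As a rough illustration (the service and hypervisor names are hypothetical), a simple anti-affinity rule could be sketched like this:

```python
# Illustrative anti-affinity rule: spread critical services across distinct
# hypervisors so no single host failure takes them all down at once.

def assign_hosts(services, hosts):
    """Round-robin services across hypervisors; refuse a single-host layout."""
    if len(hosts) < 2:
        raise ValueError("anti-affinity needs at least two hypervisors")
    return {svc: hosts[i % len(hosts)] for i, svc in enumerate(services)}

layout = assign_hosts(["pricing", "risk", "reporting"], ["hv-a", "hv-b"])
print(layout["pricing"] != layout["risk"])  # True: the two most critical
                                            # services land on different hosts
```

A real placement engine would weigh capacity, failure domains, and licensing as well, but the core rule is this simple: never let the valuable services share one point of failure.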
Three. Quality of Service needs a control fabric. If you are a customer of hybrid cloud computing services, you might need access to the code that tells you what tasks the tool is actually doing. What does that tool actually touch in the cloud environment? What do the error messages mean, and what are the implications? Today many cloud services are black boxes; there is no way for the customer to really understand what is happening behind the scenes. If companies are deploying truly hybrid environments that support a mixed workload, this type of access to the workings of the various tools that are monitoring and managing quality of service will be critical. From a quality of service perspective, some applications will require dedicated bandwidth to meet requirements. Other applications will not need any special treatment.
Four. Cloud Service Providers building shared services need an architectural plan to control them as a unit of work. These services will be shared across departments as well as across customers. How do you connect these services? While it might seem simple at the 50,000-foot level, it is actually quite complex because we are talking about linking a set of services together to build a coherent platform. Therefore, as with building any system, there is a requirement to model the “system of services”, then deploy that model, and finally reconcile and tune the results.
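One way to picture modeling a "system of services" before deploying it: treat the model as a dependency graph and derive a deployment order from it. This sketch uses Python's standard graphlib; the service names are invented:

```python
# Sketch of modeling a "system of services" as a dependency graph, then
# deriving a deployment order. Uses Python's standard graphlib (3.9+);
# the service names are hypothetical.

from graphlib import TopologicalSorter

# Each service maps to the services it depends on.
model = {
    "customer-db": set(),
    "tax": set(),
    "billing": {"customer-db", "tax"},
    "portal": {"billing"},
}

# static_order() yields dependencies before the services that use them,
# giving a safe order in which to deploy the system of services.
order = list(TopologicalSorter(model).static_order())
print(order.index("billing") < order.index("portal"))  # True
```

The reconcile-and-tune step the paragraph mentions would then compare what is actually running against this model and correct the drift.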
Five. Standard APIs protect customers. Should APIs for all cloud services be published and accessible? If companies are to have the freedom to move easily and efficiently between and among cloud services then APIs need to be well understood. For example, a company may be using a vendor’s cloud service and discover a tool that addresses a specific problem. What if that vendor doesn’t support that tool? In essence, the customer is locked out from using this tool. This becomes a problem immediately for innovators. However, it is also an issue for traditional companies that begin to work with cloud computing services and over time realize that they need more service management and more oversight.
Six. Managing containers may be key to the service management of the cloud. A well-designed cloud service has to be service oriented. It needs to be placed in a container without dependencies, since customers will use services in different ways. Therefore, each service needs a set of parameter-driven configurators so that the rules of usage and management are clear. What version of what cloud service should be used under what circumstances? What if the service is designed to execute backups? Can that backup happen across the globe, or should it be done in proximity to the data assets? These management issues will become the most important issues for cloud providers in the future.
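A hedged sketch of what a parameter-driven configurator might look like: the usage rules (here, whether a backup must stay near its data) live in configuration data rather than in code, so one service can serve many customers. All field, version, and region names are hypothetical:

```python
# Sketch of a parameter-driven configurator: rules of usage live in data,
# so the same backup service can be reused under different policies.
# All field, version, and region names are invented.

from dataclasses import dataclass

@dataclass
class ServiceConfig:
    version: str
    data_locality: str   # "same-region" keeps backups near the data; "global" does not
    max_parallel_jobs: int

def backup_targets(config, regions, data_region):
    """Return the regions a backup may run in under the configured rule."""
    if config.data_locality == "same-region":
        return [r for r in regions if r == data_region]
    return list(regions)

cfg = ServiceConfig(version="2.1", data_locality="same-region", max_parallel_jobs=4)
print(backup_targets(cfg, ["us-east", "eu-west", "ap-south"], "eu-west"))  # ['eu-west']
```

Changing the policy means changing the configuration object, not the service itself, which is exactly the kind of clarity the paragraph argues for.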
The best thing about talking to people like this architect is that it begins to make you think about issues that aren’t part of today’s cloud discussions. These are difficult issues to solve. However, many of these issues have been addressed for decades in other iterations of technology architectures. Yes, the cloud is a different delivery and deployment model for computing but it will evolve as many other architectures do. The idea of putting quality of service, service management, configuration and policy rules at the forefront will help to transform cloud computing into a mature and effective platform.
I admit that I haven’t written a blog in more than three months — but I do have a good reason. I just finished writing my latest book — not a Dummies book this time. It will be my first business book based on almost three decades in the computer industry. Once I know the publication date I will tell you a lot more about it. But as I was finishing this book I was thinking about my last book, Cloud Computing for Dummies, which was published almost two years ago. As this anniversary approaches I thought it was appropriate to take a look back at what has changed. I could probably go on for quite a while talking about how little information was available at that point and how few CIOs were willing to talk about or even consider cloud computing as a strategy. But that’s old news. I decided that it would be most interesting to focus on eight of the changes that I have seen in this fast-moving market over the past two years.
Change One: IT is now on board with cloud computing. Cloud computing has moved from a reaction to sluggish IT departments to a business strategy involving both business and technology leaders. A few years ago, business leaders were reading about Amazon and Google in business magazines. They knew little about what was behind the hype. They focused on the fact that these early cloud pioneers seemed to be efficient at making cloud capability available on demand. No paperwork and no waiting for the procurement department to process an order. Two years ago IT leaders tried to pretend that cloud computing was a passing fad that would disappear. Now I am finding that IT is treating cloud computing as a centerpiece of their future strategies — even if they are only testing the waters.
Change Two: enterprise computing vendors are all in with both private and public cloud offerings. Two years ago most traditional IT vendors did not pay much attention to the cloud. Today, most hardware, software, and services vendors have jumped on the bandwagon. They all have cloud computing strategies. Most of these vendors are clearly focused on a private cloud strategy. However, many are beginning to offer specialized public cloud services with a focus on security and manageability. These vendors are melding all types of cloud services — public, private, and hybrid — into interesting and sometimes compelling offerings.
Change Three: Service Orientation will make cloud computing successful. Service Orientation was hot two years ago. The huge hype behind cloud computing led many pundits to proclaim that Service Oriented Architecture was dead and gone. In fact, the cloud vendors that are succeeding are those building true business services, free of dependencies, that can migrate between public, private, and hybrid clouds; that ability is a real competitive advantage.
Change Four: System Vendors are banking on integration. Does a cloud really need hardware? The dialog only two years ago surrounded the contention that clouds meant no hardware would be necessary. What a difference a few years can make. The emphasis coming primarily from the major systems vendors is that hardware indeed matters. These vendors are integrating cloud infrastructure services with their hardware.
Change Five: Cloud security takes center stage. Yes, cloud security was a huge topic two years ago, but the dialog is beginning to change. There are three conversations that I am hearing. First, cloud security is a huge issue that is holding back widespread adoption. Second, there are well-designed software and hardware offerings that can make cloud computing safe. Third, public clouds are just as secure as an internal data center because these vendors have more security experts than any traditional data center. In addition, a large number of venture-backed cloud security companies are entering the market with new and quite compelling value propositions.
Change Six: Cloud service level management is a primary customer concern. Two years ago no one our team interviewed for Cloud Computing for Dummies connected service level management with cloud computing. Now that customers are seriously planning for widespread adoption of cloud computing, they are seriously examining their required level of service for cloud computing. IT managers are reading the service level agreements from public cloud vendors and Software as a Service vendors carefully. They are looking beyond the service level for a single service and beginning to think about the overall service level across their own data centers as well as the other cloud services they intend to use.
Change Seven: IT cares most about service automation. No, automation in the data center is not new; it has been an important consideration for years. However, what is new is that IT management is looking at the cloud not just as a way to avoid the costs of purchasing hardware. They see automation of both routine functions and business processes as the primary benefit of cloud computing. In the long run, IT management intends to focus on automation and reduce hardware to interchangeable commodities.
Change Eight: Cloud computing moves to the front office. Two years ago IT and business leaders saw cloud computing as a way to improve back office efficiency. This is beginning to change. With the flexibility of cloud computing, management is now looking at the potential to quickly innovate business processes that touch partners and customers.
I started thinking a lot about software as a service environments and what this really means to customers. I was talking to a CIO of a medium-sized company the other day. His company is a customer of a major SaaS vendor (he didn’t want me to name the company). In the beginning things were quite good. The application is relatively easy to navigate and sales people were satisfied with the functionality. However, there was a problem. The use of this SaaS application was actually getting more complicated than the CIO had anticipated. First, the company had discovered that it was locked into a three-year contract to support 450 sales people. In addition, over the first several years of use, the company had hired a consultant to customize the workflow within the application.
So, what was the problem? The CIO was increasingly alarmed about three issues:
- The lack of elasticity. If the company suddenly had a bad quarter and wanted to reduce the number of licenses supported, they would be out of luck. One of the key promises of cloud computing and SaaS just went out the window.
- High costs of the services model. It occurred to the CIO that the company was paying a lot more to support the SaaS application than it would have cost to buy an on-premise CRM application. While there were many benefits to the reduced hardware and support requirements, the CIO was starting to wonder if the costs were justified. Did the company really do the analysis to determine the long-term cost/benefit of the cloud? How would he explain to the CFO the long-term ramifications of the budget increases he expects? It is not a conversation that he is looking forward to having.
- No exit strategy. Given the amount of customization that the company has invested in, it is becoming increasingly clear that there is no easy answer – and no free lunch. One of the reasons that the company had decided to implement SaaS was the assumption that it would be possible to migrate from one SaaS application to another. However, while it might be possible to migrate basic data from a SaaS application, it is almost impossible to migrate the process information. Shouldn’t there be a different approach to integration in clouds than for on premise?
The bottom line is that Software as a Service has many benefits in terms of more rapid deployment, initial savings in hardware and support services, and ease of access for a highly distributed workforce. However, there are complications that are important to take into account. Many SaaS vendors, like their counterparts in the on-premise world, are looking for long-term agreements and lock-in with customers. These vendors expect and even encourage customers to customize their implementations based on their specific business processes. There is nothing wrong with this — to make applications like CRM and HR productive they need to reflect a company’s own methods of doing business. However, companies need to understand what they are getting into. It is easy to get caught in the hype of the magic land of SaaS. As more and more SaaS companies are funded by venture capitalists, it is clear that they will not all survive. What happens to your customized processes and data if the company goes out of business?
It is becoming increasingly clear to me that we need a different approach to integration in the cloud than on premise. It needs to leverage looser coupling and configuration rather than programmatic integration. We have the opportunity to rethink integration altogether — even for on-premise applications.
There is no simple answer to the quandary. Companies looking to deploy a SaaS application need to do their homework before barreling in. Understand the risks and rewards. Can you separate out the business process from the basic SaaS application? Do you really want to lock yourself into a vendor you don’t know well? It may not be so easy to free your company, your processes, or your data.
Yesterday I read an interesting blog commenting on why Oracle seems so interested in Sun’s hardware.
I quote from a comment by Brian Aker, former head of architecture for MySQL, on the O’Reilly Radar blog site. He comments on why Oracle bought Sun:
Brian Aker: I have my opinions, and they’re based on what I see happening in the market. IBM has been moving their P Series systems into datacenter after datacenter, replacing Sun-based hardware. I believe that Oracle saw this and asked themselves “What is the next thing that IBM is going to do?” That’s easy. IBM is going to start pushing DB2 and the rest of their software stack into those environments. Now whether or not they’ll be successful, I don’t know. I suspect once Oracle reflected on their own need for hardware to scale up on, they saw a need to dive into the hardware business. I’m betting that they looked at Apple’s margins on hardware, and saw potential in doing the same with Sun’s hardware business. I’m sure everything else Sun owned looked nice and scrumptious, but Oracle bought Sun for the hardware.
I think that Brian has a good point. In fact, in a post I wrote a few months ago, I commented on the fact that hardware is back. It is somewhat ironic. For a long time, the assumption has been that a software platform is the right leverage point to control markets. Clearly, the tide is shifting. IBM, for example, has taken full advantage of customer concerns about the future of the Sun platform. But IBM is not stopping there. I predict a hardware sneak attack that encompasses IBM’s platform software strength (i.e., middleware, automation, analytics, and service management) combined with its hardware platforms.
IBM will use its strength in systems and middleware software to expand its footprint into Oracle’s backyard, surrounding its software with an integrated platform designed to work as a system of systems. It is clear that over the past five or six years IBM’s focus has been on software and services. Software has long provided good profitability for IBM. Services have made enormous strides over the past decade as IBM has learned to codify knowledge and best practices into what I have called Service as Software. The other most important movement has been IBM’s focused effort over the past decade to revamp the underlying structure of its software into modular services that are used across its software portfolio. Combine this approach with industry-focused business frameworks and you have a pretty good idea of where IBM is headed with its software and services portfolios.
The hardware strategy began to evolve in 2005 when IBM software bought a little XML accelerator appliance company called DataPower. Many market watchers were confused. What would IBM software do with a hardware platform? Over time, IBM expanded the footprint of this platform and began to repurpose it as a means of pre-packaging software components. First there was a SOA-based appliance; then IBM added a virtual machine appliance called the CloudBurst appliance. On the Lotus side of the business, IBM bought another appliance company that evolved into the Lotus Foundations platform. Appliances became a great opportunity to package and preconfigure systems that could be remotely upgraded and managed. This packaging of software with systems demonstrated the potential not only for simplicity for customers but for a new way of adding value and revenue.
Now, IBM is taking the idea of packaging hardware with software to new levels. It is starting to leverage software and networking capability focused on hardware-driven systems. For example, within the systems environment, IBM is leveraging its knowledge of optimizing systems software so that application-based workloads can take advantage of capabilities such as threading, caching, and systems-level networking.
In its recent announcement, IBM has developed its new hardware platforms based on the five most common workloads: transaction processing, analytics, business applications, records management and archiving, and collaboration. What does this mean to customers? If a customer has a transaction oriented system, the most important capability is to ensure that the environment uses as many threads as possible to optimize speed of throughput. In addition, caching repetitive workloads will also ensure that transactions move through the system as quickly as possible. While this has been doable in the past, the difference is that these capabilities are packaged as an end-to-end system. Thus, implementation could be faster and more precise. The same can be said for analytics workloads. These workloads demand a high level of efficiency to enable customers to look for patterns in the data that help predict outcomes. Analytics workloads require the caching and fast processing of algorithms and data across multiple sources.
The bottom line is that IBM is looking at its hardware as an extension of the type of workloads it is required to support. Rather than considering hardware as a set of separate platforms, IBM is following a system of systems approach that is consistent with cloud computing. With this type of approach, IBM will continue on the path of viewing a system as a combination of the hardware platform, the systems software, and systems-based networking. These elements of computing are therefore configured based on the type of application and the nature of the current workload.
It is, in fact, workload optimization that is at the forefront of what is changing in hardware in the coming decade. This is true both in the data center and in the cloud. Cloud computing — and the hybrid environments that make up the future of computing — are all predicated on predictable, scalable, and elastic workload management. It is the way we will start thinking about computing as a continuum of all of the component parts combined — hardware, software, services, networking, storage, collaboration, and applications. This reflects the dramatic changes that are just over the horizon.
As I was pointing out yesterday, there are many unintended consequences from any emerging technology platform — the cloud will be no exception. So, here are my next three picks for unintended consequences from the evolution of cloud computing:
4. The cloud will disrupt traditional computing sales models. I think that Larry Ellison is right to rant about cloud computing. He is clearly aware that if cloud computing becomes the preferred way for customers to purchase software, the traditional model of paying maintenance on applications will change dramatically. Clearly, vendors can simply roll the maintenance stream into the per-user, per-month pricing. However, as I pointed out in Part I, prices will inevitably go down as competition for customers expands. There will come a time when the vast sums of money collected to maintain software versions will seem a bit old fashioned. In fact, that will be one of the most important unintended consequences and will have a very disruptive effect on the economic models of computing. It has the potential to change the power dynamics of the entire hardware and software industries. The winners will be the customers and the smart vendors who figure out how to make money without direct maintenance revenue. Like every other unintended consequence, new models will emerge that make some really clever vendors very successful. But don’t ask me what they are. It is just too early to know.
5. The market for managing cloud services will boom. While service management vendors do pretty well today managing data center-based systems, the cloud environment will make these vendors kings of the hill. Think about it like this. You are a company that is moving to the cloud. You have seven different software as a service offerings from seven different vendors. You also have a small private cloud that you use to provision critical customer data. You also use a public cloud for some large-scale testing. In addition, any new software development is done in a public cloud and then moved into the private cloud when it is completed. Existing workloads like ERP systems and legacy systems of record remain in the data center. All of these components put together are the enterprise computing environment. So, what is the service level of this composite environment? How do you ensure that you are compliant across these environments? Can you ensure security and performance standards? A new generation of products and maybe a new generation of vendors will rake in a lot of cash solving this one.
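A back-of-the-envelope illustration of why the composite service level is its own problem: when a business process depends on several independently managed cloud services in series, the end-to-end availability is roughly the product of the individual figures. The SLA numbers below are invented:

```python
# Why a composite service level is hard: availability multiplies across
# services a process depends on in series. SLA figures here are invented.

def composite_availability(slas):
    """End-to-end availability of serially dependent services."""
    result = 1.0
    for availability in slas:
        result *= availability
    return result

# Seven SaaS offerings at 99.9% each plus a private cloud at 99.95%:
slas = [0.999] * 7 + [0.9995]
print(round(composite_availability(slas) * 100, 2))  # 99.25 -- noticeably
                                                     # below any single SLA
```

No single vendor's agreement tells you that number; something has to measure and manage the composite, which is exactly the gap this new generation of products would fill.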
6. What will processes look like in the cloud? Like data, processes will have to be decoupled from the applications of record that they are an integral part of today. Now I don’t expect that we will rip processes out of every system of record. In fact, static systems such as ERP, HR, etc. will keep tightly integrated processes. However, the dynamic processes that need to change as the business changes will have to be designed without these constraints. They will become trusted processes — sort of like business services that are codified but can be reconfigured when the business model changes. This would probably happen anyway with the emergence of Service Oriented Architectures. However, with the flexibility of the cloud environment, this trend will accelerate. The need for independent processes and process models may have the potential to create a brand new market.
I am happy to add more unintended consequences to my top six. Send me your comments and we can start a part III reflecting your ideas.
Maybe I am just obsessed with cloud computing these days. I guess that after spending more than 18 months researching the topic for our forthcoming book, Cloud Computing for Dummies, I can be excused for my obsession. Now that I am able to take a step back from the noise of the market, I have been thinking about what this will mean over the next ten years. Consequences of technology adoption are never what we expect. For example, in the late 1970s and early 1980s no one could imagine why anyone would want a personal computer. In fact, the only application people could imagine for a PC was a way to store recipes (I am not making this up). Keep in mind that this was before the first PC-based spreadsheet was designed by Dan Bricklin and Bob Frankston. No one in those days could have predicted that everyone from a CEO to a three-year-old child would own a personal computer and that its use would change the way we conduct business. (I never did find a recipe-storing application.)
The same logic can be applied to the Internet. While the Internet was being used by researchers 40 years ago, it was not a commercially viable option until the mid-1990s. In the early days of the Internet it was a sophisticated communications technology with a command line interface. Once the browser came along, businesses tended to use it to share price lists, marketing materials, and job postings. There were certainly message boards, but only for the real techies. There were environments such as The Well, one of the first online communities, used primarily by academics and wild-eyed researchers.
In that context, I was thinking about what we might expect to happen with cloud computing. There is a lot to say, so I decided to break this into two parts — each one will have three consequences. Here are today’s top three:
1. Cloud computing will begin to change the way we think of an application. Being truly useful to large groups of individuals and businesses requires economies of scale in terms of massively scaled workloads. The only way to accomplish this is either to cherry-pick a few big workloads (like email) or to branch out. That branching out is inevitable, and it will mean that vendors with cloud offerings will componentize their software offerings into modular services that can be mixed and matched with other services.
2. The prices that vendors charge for cloud computing services will drop dramatically over the next few years. As prices drop, it will become a lot more economically viable to replace the on-premise environment with the cloud environment. Today this is not the case; large companies supporting thousands of users in an application environment cannot justify the movement to a cloud platform. What if the costs drop to the point where the economics (with the right workloads) favor cloud-based services? When this happens there will be a tipping point that we might not even notice for a few years. But I predict that it will happen. We are already seeing Amazon dropping prices for its EC2 environment based on the competitive threat from Microsoft’s Azure services announcement.
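To make the tipping-point argument concrete, here is a deliberately simplified sketch: a flat on-premise per-user cost against a cloud price that falls by a fixed percentage each year. Every number is invented for illustration:

```python
# Simplified tipping-point model: a flat on-premise per-user cost versus a
# cloud price that drops by a fixed fraction each year. Numbers are invented.

def tipping_year(on_premise_per_user, cloud_per_user, annual_drop, horizon=10):
    """First year the cloud undercuts on-premise, or None within the horizon."""
    price = cloud_per_user
    for year in range(1, horizon + 1):
        if price < on_premise_per_user:
            return year
        price *= (1 - annual_drop)
    return None

# Cloud starts 40% more expensive but its price falls 15% a year:
print(tipping_year(on_premise_per_user=100, cloud_per_user=140, annual_drop=0.15))  # 4
```

Even with a large initial gap, a steady price decline crosses over within a few years, which is why the tipping point can sneak up on companies that ran the numbers only once.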
3. The cloud will change the way we manage data. The traditional notion of data neatly stored in specific databases to handle a specific business problem will inevitably change. This won’t be an overnight change, but it will happen. Data will increasingly be seen as a reusable resource that can be used in lots of different situations. There will continue to be strategic line-of-business applications, but they will be more like systems of record that keep track of the final result of actions that take place dynamically in the cloud. The value of data is not in its tight packaging, as we have been used to for decades, but in the flexibility to move, transform, and leverage it. The watchword for data in this new model will be Trusted Data in the Cloud.
I would love to know what you think of my top three choices; send me your comments and I will add them to my list for tomorrow.
As we deal with the cloud hype it is too easy to be dismissive and cynical. But we always treat complicated new trends that way — until one day they become the normal way of business and life.