You know that a market is about to transition out of its early fantasy stage when IT architects begin talking about traditional IT requirements. Why do I bring this up as an issue? I had a fascinating conversation yesterday with a leading architect in charge of the cloud strategy for an important company that is typically on the bleeding edge of technology. Naturally, I am not allowed to name the company or the person. But let me just say that individuals and companies like this are the first to grapple with issues such as the need for a registry for web services or the complexity of creating business services that are both reusable and embody business best practices. They are the first companies to try out artificial intelligence to see whether it can automate tasks that require sophisticated reasoning.
These innovators tend to get blank stares from their cohorts in other, more traditional IT departments who are grappling with mundane issues such as keeping systems running efficiently. Leading-edge companies have the luxury of pushing the bounds of what is possible. There is a tremendous amount to be learned from their experiments with technology. In fact, there is often more to be learned from their failures than from their successes, precisely because they are pushing the boundary of what current technology can do.
So, what did I take away from my conversation? From my colleague’s view, the cloud today is about “how many virtual machines you need, how big they are, and linking those VMs to storage.” Not a very compelling picture, but it is his perception of the reality of the cloud today. His view of the future requirements is quite intriguing.
I took away six key issues that this advanced planner would like to see in the evolution of cloud computing:
One. Automation of placement of assets is critical. Where you actually put capability matters. For example, there are certain workloads that should never leave the physical data center because of regulatory requirements. If an organization were dealing with huge amounts of data, it would not be efficient to place elements of that data on different cloud environments. What about performance issues? What if a task needs to be completed in 10 seconds, or what if it needs to be completed in 5 milliseconds? There are many decisions that need to be made based on corporate requirements. Should the placement of workloads be decided by hand or hard-coded into each deployment? The answer is no. There should be an automated process, based on business rules, that determines the actual placement of cloud services.
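As a rough illustration of what rule-driven placement could look like, here is a minimal sketch. The workload attributes, rule order, and target names are all invented for the example; this is not any vendor's API, just one way to encode business rules so placement happens automatically rather than by hand.

```python
# Hypothetical sketch: business-rule-driven placement of workloads.
# All names (Workload, place, the target strings) are illustrative only.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    regulated: bool        # must the data stay in the physical data center?
    data_size_gb: int      # very large data sets should not be scattered
    max_latency_ms: float  # performance requirement for the task

def place(w: Workload) -> str:
    """Apply business rules in priority order to pick a deployment target."""
    if w.regulated:
        return "on-premise"      # regulatory data never leaves the data center
    if w.max_latency_ms < 10:
        return "on-premise"      # millisecond deadlines need local execution
    if w.data_size_gb > 10_000:
        return "single-cloud"    # keep huge data sets in one cloud environment
    return "any-cloud"           # unconstrained: any provider will do

print(place(Workload("payroll", regulated=True, data_size_gb=5, max_latency_ms=1000)))
```

The point is not the specific thresholds but that the rules live in one auditable place, so changing a corporate requirement means changing a rule, not re-architecting deployments.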
Two. Avoiding concentration of risk. How do you actually place core assets into a hypervisor? If, for example, you have a highly valuable set of services that are critical to decision makers, you might want to ensure that they run within different hypervisors, based on automated management processes and rules.
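The rule described here is what is often called anti-affinity: never let two critical services share a single point of failure. A minimal sketch, with invented function and hypervisor names, might look like this:

```python
# Hypothetical sketch: an anti-affinity rule that spreads critical services
# across distinct hypervisors so one failure cannot take them all down.

def assign_hypervisors(services, hypervisors):
    """Assign each critical service its own hypervisor; refuse the placement
    outright if there are not enough hypervisors to keep them separated."""
    if len(services) > len(hypervisors):
        raise ValueError("not enough hypervisors to separate critical services")
    # zip pairs each service with a different hypervisor, one-to-one
    return {svc: hv for svc, hv in zip(services, hypervisors)}

placement = assign_hypervisors(["pricing", "risk-engine"], ["hv-1", "hv-2", "hv-3"])
```

A real management system would enforce this continuously as VMs migrate, but the rule itself is this simple.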
Three. Quality of Service needs a control fabric. If you are a customer of hybrid cloud computing services, you might need access to the code that tells you what the tool is actually doing. What does that tool actually touch in the cloud environment? What do the error messages mean, and what are the implications? Today many cloud services are black boxes; there is no way for the customer to really understand what is happening behind the scenes. If companies are deploying truly hybrid environments that support a mixed workload, this type of access to the workings of the various tools that are monitoring and managing quality of service will be critical. From a quality of service perspective, some applications will require dedicated bandwidth to meet requirements. Other applications will not need any special treatment.
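The last point, that some workloads need dedicated bandwidth while others need no special treatment, is easy to express as an explicit QoS policy. A minimal sketch, with invented workload names and values:

```python
# Hypothetical sketch: a declarative QoS policy table. Making the policy
# explicit (rather than buried in a black-box tool) is the point here.

QOS_CLASSES = {
    "trading-feed":  {"dedicated_bandwidth_mbps": 500, "priority": "high"},
    "nightly-batch": {"dedicated_bandwidth_mbps": 0,   "priority": "best-effort"},
}

def bandwidth_reservation(workload: str) -> int:
    """Return the dedicated bandwidth (in Mbps) a workload is entitled to;
    unknown workloads get no reservation."""
    return QOS_CLASSES.get(workload, {"dedicated_bandwidth_mbps": 0})["dedicated_bandwidth_mbps"]
```

When the policy is visible data rather than opaque tool behavior, a customer can audit what the control fabric is actually promising each workload.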
Four. Cloud Service Providers building shared services need an architectural plan to control them as a unit of work. These services will be shared across departments as well as across customers. How do you connect these services? While it might seem simple at the 50,000-foot level, it is actually quite complex, because we are talking about linking a set of services together to build a coherent platform. Therefore, as with building any system, there is a requirement to model the “system of services,” then deploy that model, and finally reconcile and tune the results.
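The model/deploy/reconcile cycle described above can be sketched in a few lines. The model format and function names here are invented for illustration; the idea is simply that the desired "system of services" is declared, compared against what is actually running, and corrected:

```python
# Hypothetical sketch of the model -> deploy -> reconcile cycle for a
# "system of services". The service names and counts are invented.

desired = {             # the model: which services, and how many instances
    "catalog": 2,
    "checkout": 3,
}

deployed = {"catalog": 2, "checkout": 1}   # what is actually running today

def reconcile(desired, deployed):
    """Compare the model with reality and return the corrective actions."""
    actions = []
    for svc, want in desired.items():
        have = deployed.get(svc, 0)
        if have < want:
            actions.append(("start", svc, want - have))
        elif have > want:
            actions.append(("stop", svc, have - want))
    return actions
```

Running `reconcile(desired, deployed)` here would report that two more `checkout` instances need to start; the same loop, run continuously, is the "tune the results" step.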
Five. Standard APIs protect customers. Should APIs for all cloud services be published and accessible? If companies are to have the freedom to move easily and efficiently between and among cloud services then APIs need to be well understood. For example, a company may be using a vendor’s cloud service and discover a tool that addresses a specific problem. What if that vendor doesn’t support that tool? In essence, the customer is locked out from using this tool. This becomes a problem immediately for innovators. However, it is also an issue for traditional companies that begin to work with cloud computing services and over time realize that they need more service management and more oversight.
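One common way to soften this lock-in risk is to write application code against a thin abstraction layer rather than any one vendor's API. A minimal sketch, with entirely invented provider classes (no real vendor API is being modeled here):

```python
# Hypothetical sketch: an adapter layer so application code never calls a
# vendor API directly. VendorA/VendorB are invented stand-ins.

class CloudProvider:
    """Minimal interface every provider adapter must implement."""
    def start_vm(self, image: str) -> str:
        raise NotImplementedError

class VendorA(CloudProvider):
    def start_vm(self, image: str) -> str:
        return f"vendor-a:{image}"   # a real adapter would call vendor A's API

class VendorB(CloudProvider):
    def start_vm(self, image: str) -> str:
        return f"vendor-b:{image}"   # a real adapter would call vendor B's API

def launch(provider: CloudProvider, image: str) -> str:
    return provider.start_vm(image)  # application code sees one API
```

Switching vendors then means writing one new adapter, not rewriting the application, which is exactly the freedom that well-understood, published APIs are meant to preserve.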
Six. Managing containers may be key to the service management of the cloud. A well-designed cloud service has to be service oriented. It needs to be placed in a container without dependencies, since customers will use services in different ways. Therefore, each service needs a set of parameter-driven configurators so that the rules of usage and management are clear. What version of what cloud service should be used under what circumstance? What if the service is designed to execute backup? Can that backup happen across the globe, or should it be done in proximity to the data assets? These management issues will become the most important issues for cloud providers in the future.
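Taking the backup example, a parameter-driven configurator could look something like the sketch below. Every name and value is invented for illustration; the point is that version and locality rules live in configuration, not in code:

```python
# Hypothetical sketch: a parameter-driven configurator answering "which
# version, and where may it run?" for a backup service. Values are invented.

BACKUP_CONFIG = {
    "version": "2.1",
    "data_locality": "same-region",          # keep backups near the data assets
    "allowed_regions": ["us-east", "us-west"],
}

def backup_target(data_region: str, config=BACKUP_CONFIG) -> str:
    """Pick a backup region that satisfies the configured locality rule."""
    if config["data_locality"] == "same-region":
        if data_region not in config["allowed_regions"]:
            raise ValueError(f"no approved backup region for {data_region}")
        return data_region
    return config["allowed_regions"][0]      # no locality rule: use the default
```

A customer with stricter regulatory needs changes `data_locality` and `allowed_regions`; the service itself is untouched, which is what makes it reusable across customers.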
The best thing about talking to people like this architect is that it begins to make you think about issues that aren’t part of today’s cloud discussions. These are difficult issues to solve. However, many of these issues have been addressed for decades in other iterations of technology architectures. Yes, the cloud is a different delivery and deployment model for computing but it will evolve as many other architectures do. The idea of putting quality of service, service management, configuration and policy rules at the forefront will help to transform cloud computing into a mature and effective platform.
I started thinking a lot about software as a service environments and what this really means to customers. I was talking to the CIO of a medium-sized company the other day. His company is a customer of a major SaaS vendor (he didn’t want me to name the company). In the beginning, things were quite good. The application is relatively easy to navigate, and sales people were satisfied with the functionality. However, there was a problem. The use of this SaaS application was actually getting more complicated than the CIO had anticipated. First, the company had discovered that it was locked into a three-year contract to support 450 sales people. In addition, over the first several years of use, the company had hired a consultant to customize the workflow within the application.
So, what was the problem? The CIO was increasingly alarmed about three issues:
- The lack of elasticity. If the company suddenly had a bad quarter and wanted to reduce the number of licenses supported, they would be out of luck. One of the key promises of cloud computing and SaaS just went out the window.
- High costs of the services model. It occurred to the CIO that the company was paying a lot more to support the SaaS application than it would have cost to buy an on premise CRM application. While there were many benefits to the reduced hardware and support requirements, the CIO was starting to wonder if the costs were justified. Did the company really do the analysis to determine the long-term cost/benefit of cloud? How would he be able to explain to the CFO the long-term ramifications of the budget increases he expects? It is not a conversation that he is looking forward to having.
- No exit strategy. Given the amount of customization that the company has invested in, it is becoming increasingly clear that there is no easy answer – and no free lunch. One of the reasons that the company had decided to implement SaaS was the assumption that it would be possible to migrate from one SaaS application to another. However, while it might be possible to migrate basic data from a SaaS application, it is almost impossible to migrate the process information. Shouldn’t there be a different approach to integration in clouds than for on premise?
The bottom line is that Software as a Service has many benefits in terms of more rapid deployment, initial savings in hardware and support services, and ease of access for a highly distributed workforce. However, there are complications that are important to take into account. Many SaaS vendors, like their counterparts in the on-premise world, are looking for long-term agreements and lock-in with customers. These vendors expect and even encourage customers to customize their implementations based on their specific business processes. There is nothing wrong with this – to make applications like CRM and HR productive, they need to reflect a company’s own methods of doing business. However, companies need to understand what they are getting into. It is easy to get caught up in the hype of the magic land of SaaS. As more and more SaaS companies are funded by venture capitalists, it is clear that they will not all survive. What happens to your customized processes and data if the company goes out of business?
It is becoming increasingly clear to me that we need a different approach to integration in the cloud than for on premise. It needs to leverage looser coupling and configuration rather than programmatic integration. We have the opportunity to rethink integration altogether – even for on premise applications.
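The contrast between configuration-driven and programmatic integration can be made concrete with a small sketch. The field names and mapping here are invented examples; the design point is that the integration logic is data, so changing it does not mean rewriting and redeploying code:

```python
# Hypothetical sketch: integration expressed as a configurable field mapping
# rather than hand-written point-to-point code. The field names are invented.

FIELD_MAP = {                     # SaaS field name -> on premise field name
    "AccountName":   "customer_name",
    "AnnualRevenue": "revenue",
}

def translate(record: dict, field_map=FIELD_MAP) -> dict:
    """Apply the mapping to one record; unmapped fields are dropped.
    Changing the integration means editing FIELD_MAP, not the code."""
    return {field_map[k]: v for k, v in record.items() if k in field_map}
```

This is the loose coupling the paragraph argues for: the two systems agree only on a mapping document, and either side can change internally as long as the mapping is updated.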
There is no simple answer to the quandary. Companies looking to deploy a SaaS application need to do their homework before barreling in. Understand the risks and rewards. Can you separate out the business process from the basic SaaS application? Do you really want to lock yourself into a vendor you don’t know well? It may not be so easy to free your company, your processes, or your data.
Informatica might be thought of as the last independent data management company standing. In fact, that used to be Informatica’s main positioning in the market. That has begun to change over the last few years as Informatica has continued to make strategic acquisitions. Over the past two years Informatica has purchased five companies — the most recent was Siperian, a significant player in Master Data Management solutions. These acquisitions have paid off. Today Informatica has passed the $500 million revenue mark with about 4,000 customers. It has deepened its strategic partnerships with HP, Accenture, Salesforce.com, and MicroStrategy. In a nutshell, Informatica has made the transition from a focus on ETL (Extract, Transform, Load) tools to support data warehouses to a company focused broadly on managing information. Merv Adrian did a great job of providing context for Informatica’s strategy and acquisitions. To transition itself in the market, Informatica has set its sights on data service management — a combination of data integration, master data management, data transformation, and predictive analytics, applied in a holistic manner across departments, divisions, and business partners.
In essence, Informatica is trying to position itself as a leading manager of data across its customers’ ecosystems. This requires a way to have consistent data definitions across silos (Master Data Management), ways to trust the integrity of that data (data cleansing), event processing, predictive analytics, integration tools to move and transform data, and the ability to prove that governance can be verified (data governance). Through its acquisitions, Informatica is working to put these pieces together. However, as a relatively small player living in a tough neighborhood (Oracle, IBM, SAS Institute, etc.), it will be a difficult journey. This is one of the reasons that Informatica is putting so much emphasis on its new partner marketplace. A partner network can really help a smaller player appear and act bigger.
This Marketplace will include all of Informatica’s products. It will enable developers to develop within Informatica’s development cloud and deploy either in the cloud or on premise. Like its new partner marketplace, the cloud offers another important opportunity for Informatica to compete. Informatica was an early partner with Salesforce.com. It has been offering complementary information management products that can be used as options with Salesforce.com. This has provided Informatica access to customers who might never have thought about Informatica in the past. In addition, it taught Informatica about the value of cloud computing as a platform for the future. Therefore, I expect that Informatica’s strong cloud-based offerings will help the company maintain its industry position. In addition, I expect that the company’s newly strengthened partnership with HP will be very important in the company’s growth.
What is Informatica’s roadmap? It intends to continue to deliver new releases every six months, including new data services and new data integration services. It will also develop these services with self-service interfaces. In the end, its goal is to be a great data steward to its customers. This is an admirable goal. Informatica has made very good acquisitions that support its strategic goals. It is making the right bets on the cloud and on a partner ecosystem. The question that remains is whether Informatica can truly scale to the size where it can sustain the competitive threats. Companies like IBM, Oracle, Microsoft, SAP, and SAS Institute are not standing still. Each of these companies has built and will continue to expand its information management strategy and portfolio of offerings. If Informatica can break the mold on ease of implementation for complex data service management, it will have earned a place at the head table.
Just when it looked clear where the markets were lining up around data center automation and cloud computing, things change. I guess that is what makes this industry so very interesting. The proposed acquisition by HP of 3Com is a direct challenge to Cisco’s network management franchise. However, the implications of this move go further than what meets the eye. It also puts HP on a direct path against EMC and its Cisco partnership. And to make things even more interesting, it also puts these two companies in a competitive three-way race against IBM and its cloud/data center automation strategy. And of course, it doesn’t stop there. Companies like Google and Amazon want a larger share of the enterprise market for cloud services. Companies like Unisys and CSC that have focused on outsourced secure data centers are getting into the act.
I don’t think that we will see a single winner — no matter what any one of these companies will tell you. The winners in this market shift will be those companies that can build a compelling platform and a compelling value proposition for a partner ecosystem. The truth about the cloud is that it is not simply a network or a data center. It is a new way of providing services of all sorts that can support changing customer workloads in a secure and predictable manner.
In light of this, what does this say for HP’s plans to acquire 3Com? If we assume that the network infrastructure is a key component of an emerging cloud and data center strategy, HP is taking a calculated risk in acquiring more assets in this market. HP has found that its ProCurve networking division has begun gaining traction. HP ProCurve Networking, the networking division of HP, includes network switches, wireless access points, WAN routers, and access control servers and software. ProCurve competes directly with Cisco in the networking switch market. When HP had a tight partnership with Cisco, the company de-emphasized networking. However, once Cisco started to move into the server market, the handcuffs came off. The 3Com acquisition takes the competitive play to a new level. 3Com has a variety of good pieces of technology that HP could leverage within ProCurve. Even more significantly, HP picks up a strong security product called TippingPoint, a 3Com acquisition. TippingPoint fills a critical hole in HP’s security offering. TippingPoint offers network security products, including intrusion prevention and a product that inspects network packets. The former 3Com subsidiary has also established a database of security threats based on a network of external researchers.
But I think that one of the most important reasons that HP bought 3Com is its strong relationships in the Chinese market. In fiscal year 2008, half of 3Com’s revenue came from its H3C joint venture with the Chinese vendor Huawei Technology. Therefore, it is not surprising that HP would have paid a premium to gain a foothold in this lucrative market. If HP is smart, it will do a good job of leveraging 3Com’s many software assets, both to build out its networking business and to beef up its software organization. In reality, HP is much more comfortable in the hardware market. Therefore, adding networking as a core competency makes sense. It will also bolster its position as a player in the high-end data center market and in the private cloud space.
Cisco, on the other hand, is coming from the network and moving aggressively into the cloud and the data center market. The company has purchased a position in VMware and has established a tight partnership with EMC as a go-to-market strategy. For Cisco, this gives the company credibility and access to customers outside of its traditional markets. For EMC, the Cisco relationship strengthens its networking play. But an even bigger value of the relationship is presenting a bigger footprint to customers as the two companies move to take on HP, IBM, and the assortment of other players who all want to win. The Cisco/EMC/VMware play is to focus on the private cloud. In their view, a private cloud is very similar to a private, preconfigured data center. It can be a compelling value proposition to a customer that needs a data center fast without having to deal with a lot of moving parts. The key question from a cloud computing perspective remains: is this really a cloud?
It was inevitable that this quiet market dominated by Google and Amazon would heat up as the cloud becomes a real market force. But I don’t expect that HP or Cisco/EMC will have a free run. They are being joined by IBM and Microsoft — among others. The impact could be better options for customers and prices that invariably will fall. The key to success for all of these players will be how well they manage what will be an increasingly heterogeneous, federated, and highly distributed hardware and software world. Management comes in many flavors: management of these highly distributed services and management of the workloads.
Almost every conversation I have had over the past year or so always comes back to security in the cloud. “Is it really secure?” Or, “We are thinking about implementing the cloud, but we are worried about security.” There are, of course, good reasons to plan a cloud security strategy. But in a sense, it is no different than planning a security strategy for your company. But it is the big scary cloud! Well, before I list the top issues, I would like to say one thing: if you think you need an entirely different security strategy for the cloud, you may not have a comprehensive security strategy to start with. Yes, you have to make sure that your cloud provider has a sophisticated approach to security. However, what about your Internet service provider? What about the level of security within your own IT department? Can you throw stones if you live in a glass house (yes, that is a pun…sorry)? So, before you start fretting about security in the cloud, get your own house in order. Do you have an identity management plan? Do you ensure that no one individual within the data center can control all of the data within a single environment, to minimize risks? If you don’t have a well-executed internal security plan, you aren’t ready for the cloud. But let’s say that you have fixed that problem and you are ready to really plan your cloud security strategy. So, here are five of the issues to consider. If you have others, let’s start a conversation.
1. You need to start at the beginning by understanding the characteristics of your cloud provider. Is the company well funded? Is its data center designed with security at the center? Your level of scrutiny will also depend on how you are using the cloud. If you are using Infrastructure as a Service for a short-term project, there is less risk than if you are planning to use a cloud to store important customer data.
2. How is your cloud provider implementing security in a multi-tenant environment? How do they ensure that one customer’s data doesn’t impact another customer’s data?
3. Does your cloud provider give you the ability to monitor security of your data in the cloud? This will be important both for compliance and to keep track of your own security policies.
4. Does your cloud provider encrypt your critical data? If not, why not?
5. Does your cloud provider give you the ability to control who is allowed to access your information based on roles and authorization? Does the cloud provider support federated identity management? These are basic security best practices.
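The role-based control in the last point is an old, well-understood pattern. As a minimal sketch (with invented role and action names, not any provider's actual model), the core of it is just a mapping from roles to permitted actions:

```python
# Hypothetical sketch of role-based authorization, the kind of control a
# cloud provider should expose. Roles and actions are invented examples.

ROLE_PERMISSIONS = {
    "admin":   {"read", "write", "delete"},
    "analyst": {"read"},
}

def is_allowed(role: str, action: str) -> bool:
    """Check whether a role is authorized to perform an action;
    unknown roles are denied everything by default."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

Deny-by-default for unknown roles is the important design choice here; a provider that cannot show you an equivalent, auditable model deserves extra scrutiny.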
Now you are probably saying to yourself that this isn’t rocket science. These are fundamental security approaches that any data center should follow. I recommend that you take a look at a great document published by the Cloud Security Alliance that details many of the key issues surrounding security in the cloud. So, I guess my principal message is that cloud security is no different than security in any data center. But the market does not seem to understand this, because the perception is that a cloud is somehow not a data center that can be secured with regular old security. I think that we will see something interesting happen because of this perception: cloud vendors will begin to charge a premium for really good security. In fact, this is already happening. Vendors like Amazon and Salesforce are offering segregated implementations of their environments to customers who don’t trust their ordinary security approaches. This will work in the short term, primarily because during this early phase of the cloud there is not enough focus on security. Long term, as the market matures, cloud vendors will have to demonstrate their ability to provide a secure environment based on basic security best practices. In the meantime, cloud vendors will rake in the cash for premium secure cloud services.