
Archive for the ‘workload’ Category

Can IBM Build a Strong Cloud Partner Ecosystem?

May 4, 2011

Despite all of the hand-wringing surrounding Amazon.com's service outages last week, it is clear to me that cloud computing is dramatically changing the delivery models of computing forever. We simply will not return to a model where organizations assume that they will consume primarily their own data center resources. The traditional data center certainly isn't going away, but its role and its underlying technology will change forever. One ramification of this transition is the growing role of cloud infrastructure leaders in determining the direction of partnership models.

Traditionally, systems vendors have relied on partners to expand the coverage of their platforms. With the cloud, the requirement for a strong partner ecosystem will not change. If anything, partners will be even more important in the cloud than they have been in traditional computing delivery models. This is because with cloud computing, the barriers to leveraging different cloud-based software offerings – Platform as a Service and Software as a Service – are very low. Any employee with a credit card can try out just about anything. I think that the Amazon.com issues will be seen in the future as a tipping point for cloud computing. It will not, in fact, be the end of the cloud, but it will change the way companies select cloud partners. Service management, scalability, and reliability will become the selection standard – not just for the end customer but for partners as well.

So, I was thinking about the cloud partnership model and how it is evolving. I expect that the major systems vendors will be in a perfect position to begin to reassert their power in the era of the cloud. I therefore decided to take a look at how IBM is approaching its partnership model in light of cloud computing. Over the past several months, IBM has been revealing a new partnership model for the cloud computing market. It has been difficult for most platform vendors to get noticed above the noise of cloud pioneers like Amazon and Google, but this is starting to change. It is not hard to figure out why: IBM believes that cloud is a $181 billion business opportunity, and it would like to grab a chunk of that opportunity.

Having followed IBM's partnering initiatives for several decades, I was not surprised to see a revamped cloud partnering program emerge this year. The new program is interesting for several reasons. First, it is focused on bringing together all of IBM's cloud offerings across software, developer relations, hardware, and services into a single program. This is important because it can be intimidating for an ISV, a value-added reseller, or a systems integrator to navigate the complexity of IBM's offerings without some assistance. In addition, IBM has to contend with a new breed of partners that are focused on public, private, and hybrid cloud offerings.

The new program, called the Cloud Specialty program, is targeted to cover the entire cloud ecosystem, including cloud builders (hardware and software resellers and systems integrators), Service Solution Providers (software and service resellers), Infrastructure Providers (telecom providers, hosting companies, Managed Service Providers, and distributors), Application Providers (ISVs and systems integrators), and Technology Providers (tools providers and appliance vendors).

The focus of the Cloud Specialty program is no different from that of other partnering programs at IBM. It is focused on issues such as expanding the skills of partners, building revenue for both IBM and partners, and providing go-to-market programs to support its partners. IBM is the first to admit that the complexity of the company and its offerings can be intimidating for partners. Therefore, one of the objectives of the Cloud Specialty program is to clarify the requirements and benefits for partners. IBM is creating a tiered program based on the different types of cloud partners. The level of partner investment and benefits differs based on the value of the type of partner and the expectations of those partners. But there are some common offerings for all partners: early access to and confidential updates on IBM's cloud strategy and roadmap, use of the PartnerWorld Cloud Specialty Mark, internal use of LotusLive, and networking opportunities. In addition, all these partners are entitled to up to $25,000 in business development funds. There are some differences. They include:

  • Cloud builders gain access to business leads and to IBM's lab resources. In exchange, these partners are expected to have IBM Cloud Reference Architecture skills as well as cloud solution provider and technical certifications. They must also demonstrate the ability to generate revenue; revenue amounts vary based on the mix of hardware, software, and services that they resell. They must also have two verified cloud references for the previous calendar year.
  • Service Solution Providers are provided with a named relationship manager and access to networking opportunities. In exchange, partners are expected to use IBM cloud products or services, demonstrate knowledge and skills in the use of IBM cloud offerings, and generate $300,000 in revenue from the partnership.
  • Infrastructure Providers are given access to a named IBM alliance manager and to business development workshops. In exchange, these partners are expected to use IBM's cloud infrastructure products or services and demonstrate skills in IBM technology. Like Service Solution Providers, they must have at least $300,000 a year in revenue and two cloud client references.
  • Application Providers are given access to a named IBM alliance manager and to business development workshops. They are expected to use IBM cloud products or services, have skills in these technologies or services, and generate a minimum of $100,000 a year in revenue plus two cloud client references.
  • Technology Providers get access to networking opportunities and to IBM's cloud and services assessment tools. In exchange, these partners are required to demonstrate knowledge of the IBM Cloud Reference Architecture and have skills related to IBM's cloud services. Like Application Providers, these partners must have at least $100,000 in IBM revenue and two client references.

What does IBM want? IBM's goal with the Cloud Specialty program is to make it as attractive as possible for prospective partners to choose its platform. It is hoping that by offering financial and technical incentives it can make inroads with cloud-focused companies. For example, it is opening its labs and providing assistance to help partners define their offerings. IBM is also taking the unusual step of allowing partners to white-label its products. On the business development side, IBM is teaming with business partners on calls with prospective customers. IBM anticipates that the impact on these partners could be significant – potentially generating as much as 30% gross margin growth.

Will the effort work? It is indeed an ambitious program. IBM will have to do a good job of explaining its huge portfolio of offerings to prospective partners. For example, it has a range of services including Cast Iron for cloud integration, analytics services, collaboration services (based on LotusLive), middleware services, and Tivoli service management offerings. In addition, IBM is encouraging partners to leverage its extensive security services offerings. It is also trying to encourage partners to leverage its hardware systems. One example of how IBM is trying to be more attractive to cloud-based companies, such as Software as a Service vendors, is pricing: it is offering a subscription-based model so that partners can pay based on usage – the common model for most cloud platform vendors.

IBM is on the right track with this cloud-focused partner initiative. It is a sweeping program that provides a broad set of benefits for partners. It is pricing its services so that ISVs can rent a service (including IBM's test and development cloud) by the month — an important issue in this emerging market. In return, it is expecting partners to make a major investment in learning IBM's software, hardware, and services offerings, and to expand their knowledge of the markets they focus on.

Yes, you can have an elastic private cloud

April 11, 2011

I was having a discussion with a skeptical CIO the other day. His issue was that a private cloud isn't real. Why? In contrast to the public cloud, which has unlimited capacity on demand, a private cloud is limited by the size and capacity of the internal data center. While I understand this point, I disagree, and here is why. I don't know of any data center that doesn't have enough servers or capacity. In fact, if you talk to most IT managers they will quickly admit that they don't lack physical resources. This is why there has been so much focus on server virtualization: with server virtualization, these organizations actually get rid of servers and make their IT organizations more efficient.

Even when data centers are able to improve their efficiency, they still do not lack resources.  What data centers lack is the organizational structure to enable provisioning of those resources in a proactive and efficient way.  The converse is also true: data centers lack the ability to reclaim resources once they have been provisioned.

So, I maintain that the problem with the data center is not a lack of resources but rather the management and automation of those resources. Imagine an organization that leverages the existing physical resources in a data center by adding self-service provisioning and business process rules for allocating resources based on business need. This would mean that when developers start working on a project they are allocated the amount of resources they need – not what they want. More importantly, when the project is over, those resources are returned to the pool.
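
To make the idea concrete, here is a minimal sketch of that allocate-and-reclaim cycle. It is purely illustrative; the pool size, the project name, and the need-versus-want rule are hypothetical stand-ins for whatever provisioning and business-rule tooling an organization actually runs.

```python
from dataclasses import dataclass, field

@dataclass
class ResourcePool:
    """A hypothetical pool of existing data center capacity, in VM slots."""
    capacity: int
    allocations: dict = field(default_factory=dict)

    def available(self) -> int:
        return self.capacity - sum(self.allocations.values())

    def provision(self, project: str, requested: int, justified: int) -> int:
        # Business rule: grant what the project needs, not what it wants.
        granted = min(requested, justified, self.available())
        self.allocations[project] = self.allocations.get(project, 0) + granted
        return granted

    def reclaim(self, project: str) -> int:
        # When the project is over, its resources return to the shared pool.
        return self.allocations.pop(project, 0)

pool = ResourcePool(capacity=100)
pool.provision("new-web-app", requested=40, justified=25)  # grants 25, not 40
print(pool.available())  # 75
pool.reclaim("new-web-app")
print(pool.available())  # 100, back in the pool
```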

This, of course, does not work for every application and every workload in the data center. There are applications that are highly specialized and are not going to benefit from automation. However, an increasingly large portion of computing can indeed be transformed in the private cloud environment by truly tuning workloads and resources, making the private cloud as elastic as what we think of as the ever-expanding public cloud.

What’s a private cloud anyway?

February 4, 2011

So, in a perfect world, all data centers would magically become clouds and the world would be a better place. All kidding aside, I am tired of all of the hype. Let me put it this way: all data centers cannot and will not become private clouds – at least not for most typical companies. Let me tell you why I say this. There are some key principles of the cloud that I think are worth recounting:

1. A cloud is designed to optimize and manage workloads for efficiency. Therefore, repeatable and consistent workloads are most appropriate for the cloud.

2. A cloud is intended to implement automation and virtualization so that users can add and subtract services and capacity based on demand (see the sketch after this list).

3. A cloud environment needs to be economically viable.
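
As a toy illustration of the second principle, here is a hypothetical scaling rule. The function name, rates, and bounds are invented; real elasticity logic lives in a cloud's automation layer, but the shape of the decision is the same: capacity tracks demand within agreed bounds.

```python
def instances_needed(demand_rps: int, per_instance_rps: int,
                     floor: int = 1, ceiling: int = 20) -> int:
    """Hypothetical elasticity rule: capacity tracks demand, bounded by a
    floor (always-on minimum) and a ceiling (budget or pool size)."""
    needed = -(-demand_rps // per_instance_rps)  # ceiling division
    return max(floor, min(ceiling, needed))

print(instances_needed(900, 100))  # demand spike: scale up to 9 instances
print(instances_needed(150, 100))  # demand drops: scale down to 2
print(instances_needed(0, 100))    # idle: stay at the floor of 1
```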

Why aren't traditional data centers private clouds? What if a data center adds some self-service and virtualization? Is that enough? Probably not. A typical data center is a complex environment. It is not uncommon for a single data center to support five or six different operating systems, five or six different languages, four or five different hardware platforms, and perhaps 20 or 30 applications of all sizes and shapes, plus an unending number of tools to support the management and maintenance of that environment. In Cloud Computing for Dummies, written by the team at Hurwitz & Associates, there is a considerable amount written about this issue. Given an environment like this, it is almost impossible to achieve workload optimization. In addition, there are often line-of-business applications that are complicated, used by a few dozen employees, and necessary to run the business. There is simply no economic rationale for such applications to be moved to a cloud — public or private. The only alternative for such an application would be to outsource it altogether.

So what does belong in the private cloud? Application and business services with consistent workloads that are designed to be used on demand by developers, employees, or partners. Many companies are becoming IT providers to their own employees, partners, customers, and suppliers. These services are predictable and designed as well-defined components that can be optimized for elasticity. They can be used in different situations — from a single business situation supporting a single customer to a scenario that requires the business to support a huge partner network. Typically, these services can be designed to run on a single operating system (typically Linux) that has been optimized to support these workloads. Many of the capabilities and tasks within this environment have been automated.

Could there be situations where an entire data center could be a private cloud? Sure, if an organization can plan well enough to limit the elements supported within the data center. I think this will happen with specialized companies that have the luxury of not supporting legacy. But for most organizations, reality is a lot messier.

Predictions for 2011: getting ready to compete in real time

December 1, 2010

2010 was a transition year for the tech sector. It was the year when cloud suddenly began to look realistic to the large companies that had scorned it. It was the year when social media suddenly became serious business. And it was the year when hardware and software were united as a platform – something like in the old mainframe days, but different because of high-level interfaces and modularity. There were also important trends starting to emerge, like the importance of managing information across both the enterprise and among partners and suppliers. Competition for ownership of the enterprise software ecosystem heated up, as did competition for leadership of the emerging cloud computing ecosystem.

So, what do I predict for this coming year? While at the outset it might look like 2011 will be a continuation of what has been happening this year, I think there will be some important changes that will impact the world of enterprise software for the rest of the decade.

First, I think it is going to be a very big year for acquisitions. Now I have said that before and I will say it again. The software market is consolidating around major players that need to fill out their software infrastructure in order to compete. It will come as no surprise if HP begins to purchase software companies if it intends to compete with IBM and Oracle on the software front.  But IBM, Oracle, SAP, and Microsoft will not sit still either.  All these companies will purchase the incremental technology companies they need to compete and expand their share of wallet with their customers.

This will be a transitional year for up-and-coming players like Google, Amazon, Netflix, Salesforce.com, and others that haven't hit the radar yet. These companies are plotting their own strategies to gain leadership, and they will continue to push the boundaries in search of dominance. As they push upmarket and grab market share, they will face the familiar problem of supporting customers who will expect them to act like adults.

Customer support, in fact, will bubble to the top of the issues for emerging as well as established companies in the enterprise space – especially as cloud computing becomes a well-established distribution and delivery platform for computing. All these companies, whether well established or startups, will have to balance the requirement to provide sophisticated customer support with the need to make a profit. This will impact everything from license and maintenance revenue to how companies charge for consulting and support services.

But what will customers be looking for in 2011? Customers are always looking to reduce their IT expenses – that is a given. However, the major change in 2011 will be the need to innovate based on customer-facing initiatives. Of course, the idea of focusing on customer-facing software isn't new, but there are some subtle changes. The new initiatives are based on leveraging social networking, from a secure perspective, both to drive business traffic and to anticipate customer needs and issues before they become problems. Companies will spend money innovating on customer relationships.

Cloud computing is the other big issue for 2011. While it was clearly a major differentiator in 2010, the cloud will take an important leap forward in 2011. While companies were testing the water this year, next year companies will be looking at best practices in cloud computing. 2011 will be the year when customers focus on three key issues: data integration across public, private, and data center environments; manageability, in terms of both workload optimization and overall performance; and security. The vendors that can demonstrate that they can provide the right level of service across cloud-based services will win significant business. These vendors will increasingly focus on expanding their partner ecosystems as a way to lock customers in to their cloud platforms.

Most importantly, 2011 will be the year of analytics. The technology industry continues to produce data at a pace never seen before. But what can we do with this data? What does it mean for organizations' ability to make better business decisions and to prepare for an unpredictable future? The traditional warehouse is simply too slow to be effective. 2011 will be the year when predictive analytics and information management overall emerge as among the hottest and most important initiatives.

Now I know that we all like lists, so I will take what I’ve just said and put them into my top ten predictions:

1. Both today's market leaders and upstarts are going to continue to acquire assets to become more competitive. Many emerging startups will be scooped up before they see the light of day. At the same time, almost as many new startups will emerge as we saw in the dot-com era.

2. Hardware will continue to evolve in a new way. The market will move away from hardware as a commodity. The hardware platform in 2011 will be differentiated based on software and packaging. 2011 will be the year of smart hardware packaged with enterprise software, often as appliances.

3. Cloud computing models will put extreme pressure on everything from software license and maintenance pricing to customer support. Integration between different cloud computing models will be front and center. The cloud model is moving out of risk-averse pilots into serious deployments. Best practices will emerge as a major issue for customers that see the cloud as a way to boost innovation and the rate of change.

4. Managing highly distributed services in a compliant and predictable manner will take center stage. Service management and service level agreements across cloud and on-premises environments will become a prerequisite for buyers.

5. Security software will be redefined based on the challenges of customer-facing initiatives and the need to more aggressively open the corporate environment to support a constantly morphing relationship with customers, partners, and suppliers.

6. The fear of lock-in will reach a fever pitch in 2011. SaaS vendors will increasingly add functionality to tighten their grip on customers. Traditional vendors will purchase more of the components needed to support the lifecycle needs of customers. How can everything be integrated from a business process and data integration standpoint and still allow for portability? Today, the answers are not there.

7. The definition of an application is changing. The traditional view that the packaged application is hermetically sealed is going away. More of the new packaged applications will be built on service orientation and best practices. These applications will be parameter-driven so that they can be changed in real time. And yes, Service Oriented Architecture (SOA) didn't die after all.

8. Social networking grows up and becomes business social networking. These initiatives will be driven by line-of-business executives as a way to engage with customers and employees, gain insight into trends, and fix problems before they become widespread. Companies will leverage social networking to enhance agility and enable new business models.

9. Managing endpoints will be one of the key technology drivers in 2011. Smartphones, sensors, and tablet computers are redefining what computing means. They will drive the requirement for a new approach to role- and process-based security.

10. Data management and predictive analytics will explode based on both the need to understand traditional information and the need to manage data coming from new sales and communications channels.

The bottom line is that 2011 will be the year when the seeds that have been planted over the last few years are ready to become the drivers of a new generation of innovation and business change. Put together everything from the flexibility of service orientation, business process management innovation, and the widespread impact of social and collaborative networks to the new delivery and deployment models of the cloud. Now apply tools to harness these environments, such as service management, new security platforms, and analytics. From my view, innovative companies are grabbing the threads of technology and focusing on outcomes. 2011 is going to be an important transition year. The corporations that get this right and transform themselves so that they are ready to change on a dime can win – even if they are smaller than their competitors.

What will it take to achieve great quality of service in the cloud?

November 9, 2010

You know that a market is about to transition out of the early fantasy stage when IT architects begin talking about traditional IT requirements. Why do I bring this up? I had a fascinating conversation yesterday with a leading architect in charge of the cloud strategy for an important company that is typically on the bleeding edge of technology. Naturally, I am not allowed to name the company or the person. But let me just say that individuals and companies like this are the first to grapple with issues such as the need for a registry for web services or the complexity of creating business services that are both reusable and include business best practices. They are the first companies to try out artificial intelligence to see if it can automate tasks that require complex reasoning.

These innovators tend to get blank stares from their cohorts in more traditional IT departments who are grappling with mundane issues such as keeping systems running efficiently. Leading-edge companies have the luxury of pushing the bounds of what is possible. There is a tremendous amount to be learned from their experiments with technology. In fact, there is often more to be learned from their failures than from their successes, because they are pushing the boundary of what is possible with current technology.

So, what did I take away from my conversation? From my colleague's view, the cloud today is about "how many virtual machines you need, how big they are, and linking those VMs to storage." Not a very compelling picture, but it is his perception of the reality of the cloud today. His view of the future requirements is quite intriguing.

I took away six key issues that this advanced planner would like to see in the evolution of cloud computing:

One. Automation of placement of assets is critical. Where you actually put capability is critical. For example, there are certain workloads that should never leave the physical data center because of regulatory requirements. If an organization were dealing with huge amounts of data, it would not be efficient to place elements of that data on different cloud environments. What about performance issues? What if a task needs to be completed in 10 seconds, or what if it needs to be completed in 5 milliseconds? There are many decisions that need to be made based on corporate requirements. Should this decision on the placement of workloads be hard-wired programmatically? The answer is no. There should be an automated process, based on business rules, that determines the actual placement of cloud services.
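
As a concrete illustration, here is a hypothetical sketch of placement driven by business rules rather than hard-wired logic. The rules, thresholds, and placement targets are all invented for the example; a real system would evaluate far richer corporate requirements.

```python
# Hypothetical business rules; evaluated in order, first match wins.
PLACEMENT_RULES = [
    (lambda w: w["regulated"], "on-premises"),         # regulated data never leaves the data center
    (lambda w: w["latency_ms"] <= 5, "on-premises"),   # a 5 ms budget: keep it local
    (lambda w: w["data_gb"] > 10_000, "single-cloud"), # don't scatter a huge data set
]

def place(workload: dict) -> str:
    """Return a placement decision driven by the business rules above."""
    for predicate, placement in PLACEMENT_RULES:
        if predicate(workload):
            return placement
    return "any-cloud"  # default: run wherever capacity is available

print(place({"regulated": True,  "latency_ms": 100, "data_gb": 50}))      # on-premises
print(place({"regulated": False, "latency_ms": 3,   "data_gb": 50}))      # on-premises
print(place({"regulated": False, "latency_ms": 100, "data_gb": 50_000}))  # single-cloud
print(place({"regulated": False, "latency_ms": 100, "data_gb": 50}))      # any-cloud
```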

Two. Avoiding concentration of risk. How do you actually place core assets onto hypervisors? If, for example, you have a highly valuable set of services that are critical to decision makers, you might want to ensure that they run within different hypervisors, based on automated management processes and rules.
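
A minimal sketch of that anti-affinity idea follows; the service and hypervisor names are invented. Spreading critical services across hypervisors means no single host failure takes them all down.

```python
import itertools

def spread(services: list[str], hypervisors: list[str]) -> dict[str, str]:
    """Round-robin critical services across hypervisors so that risk is
    spread as evenly as the number of hosts allows."""
    hosts = itertools.cycle(hypervisors)
    return {service: next(hosts) for service in services}

placement = spread(["pricing", "risk-engine", "reporting"], ["hv-a", "hv-b"])
print(placement)  # {'pricing': 'hv-a', 'risk-engine': 'hv-b', 'reporting': 'hv-a'}
```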

Three. Quality of Service needs a control fabric. If you are a customer of hybrid cloud computing services, you might need access to the code that tells you what tasks the tool is actually doing. What does that tool actually touch in the cloud environment? What do the error messages mean, and what are the implications? Today many of the cloud services are black boxes; there is no way for the customer to really understand what is happening behind the scenes. If companies are deploying truly hybrid environments that support a mixed workload, this type of access to the workings of the various tools that are monitoring and managing quality of service will be critical. From a quality of service perspective, some applications will require dedicated bandwidth to meet requirements; other applications will not need any special treatment.
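
To illustrate that last point, here is a hypothetical sketch of a QoS policy table: some applications get dedicated bandwidth, others are best-effort. The tiers, numbers, and application names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class QosPolicy:
    tier: str
    dedicated_mbps: int  # 0 means best-effort: no reserved bandwidth

# Hypothetical control-fabric policies, keyed by application.
POLICIES = {
    "trading":   QosPolicy("gold",   dedicated_mbps=500),
    "analytics": QosPolicy("silver", dedicated_mbps=100),
    "batch":     QosPolicy("bronze", dedicated_mbps=0),
}

def policy_for(app: str) -> QosPolicy:
    # Unknown applications get no special treatment.
    return POLICIES.get(app, QosPolicy("bronze", dedicated_mbps=0))

print(policy_for("trading"))  # QosPolicy(tier='gold', dedicated_mbps=500)
print(policy_for("unknown"))  # QosPolicy(tier='bronze', dedicated_mbps=0)
```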

Four. Cloud service providers building shared services need an architectural plan to control them as a unit of work. These services will be shared across departments as well as across customers. How do you connect these services? While it might seem simple at the 50,000-foot level, it is actually quite complex, because we are talking about linking a set of services together to build a coherent platform. Therefore, as with building any system, there is a requirement to model the "system of services", then deploy that model, and finally reconcile and tune the results.

Five. Standard APIs protect customers. Should APIs for all cloud services be published and accessible? If companies are to have the freedom to move easily and efficiently between and among cloud services, then APIs need to be well understood. For example, a company may be using a vendor's cloud service and discover a tool that addresses a specific problem. What if that vendor doesn't support that tool? In essence, the customer is locked out from using it. This is a problem immediately for innovators. However, it is also an issue for traditional companies that begin to work with cloud computing services and over time realize that they need more service management and more oversight.
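
One way to picture the portability that published APIs buy you is a thin, provider-neutral interface. This is a hypothetical sketch, not any vendor's actual API; the class and method names are invented.

```python
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    """A published, well-understood interface. If every provider honors it,
    a customer can move between clouds without rewriting applications."""

    @abstractmethod
    def start_instance(self, image: str) -> str: ...

    @abstractmethod
    def stop_instance(self, instance_id: str) -> None: ...

class VendorA(CloudProvider):
    def start_instance(self, image: str) -> str:
        return f"vendor-a-{image}-001"  # stand-in for a real API call
    def stop_instance(self, instance_id: str) -> None:
        print(f"stopped {instance_id}")

def run_job(cloud: CloudProvider, image: str) -> None:
    # The job only depends on the interface, never on VendorA itself.
    instance = cloud.start_instance(image)
    cloud.stop_instance(instance)

run_job(VendorA(), "backup-worker")  # prints: stopped vendor-a-backup-worker-001
```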

Six. Managing containers may be key to the service management of the cloud. A well-designed cloud service has to be service oriented. It needs to be placed in a container without dependencies, since customers will use services in different ways. Therefore, each service needs to have a set of parameter-driven configurators so that the rules of usage and management are clear. What version of what cloud service should be used under what circumstance? What if the service is designed to execute backup? Can that backup happen across the globe, or should it be done in proximity to the data assets? These management issues will become the most important issues for cloud providers in the future.
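
A parameter-driven configurator can be as simple as a descriptor that travels with the service and answers the questions above: which version, under what circumstances, with what locality rules. This sketch is hypothetical; the fields and values are invented.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceDescriptor:
    """Parameter-driven rules of usage and management for one service."""
    name: str
    version: str
    backup_locality: str    # e.g. "same-region": keep backups near the data
    max_instances: int
    depends_on: tuple = ()  # empty: no dependencies, usable in any container

backup = ServiceDescriptor(name="backup", version="2.3",
                           backup_locality="same-region", max_instances=8)

def select(descriptors, name, constraint):
    """Which version of which service should be used under what circumstance?"""
    return [d for d in descriptors if d.name == name and constraint(d)]

print(select([backup], "backup", lambda d: d.backup_locality == "same-region"))
```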

The best thing about talking to people like this architect is that it begins to make you think about issues that aren’t part of today’s cloud discussions.  These are difficult issues to solve. However, many of these issues have been addressed for decades in other iterations of technology architectures. Yes, the cloud is a different delivery and deployment model for computing but it will evolve as many other architectures do. The idea of putting quality of service, service management, configuration and policy rules at the forefront will help to transform cloud computing into a mature and effective platform.



Eight things that changed since we wrote Cloud Computing for Dummies

October 8, 2010

I admit that I haven't written a blog in more than three months — but I do have a good reason. I just finished writing my latest book — not a Dummies book this time. It will be my first business book, based on almost three decades in the computer industry. Once I know the publication date I will tell you a lot more about it. But as I was finishing this book I was thinking about my last book, Cloud Computing for Dummies, which was published almost two years ago. As this anniversary approaches, I thought it was appropriate to take a look back at what has changed. I could probably go on for quite a while about how little information was available at that point and how few CIOs were willing to talk about or even consider cloud computing as a strategy. But that's old news. I decided that it would be most interesting to focus on eight of the changes that I have seen in this fast-moving market over the past two years.

Change One: IT is now on board with cloud computing. Cloud computing has moved from a reaction to sluggish IT departments to a business strategy involving both business and technology leaders. A few years ago, business leaders were reading about Amazon and Google in business magazines. They knew little about what was behind the hype. They focused on the fact that these early cloud pioneers seemed to be efficient at making cloud capability available on demand: no paperwork and no waiting for the procurement department to process an order. Two years ago IT leaders tried to pretend that cloud computing was a passing fad that would disappear. Now I am finding that IT is treating cloud computing as a centerpiece of its future strategy — even if they are only testing the waters.

Change Two: enterprise computing vendors are all in with both private and public cloud offerings. Two years ago most traditional IT vendors did not pay much attention to the cloud. Today, most hardware, software, and services vendors have jumped on the bandwagon. They all have cloud computing strategies. Most of these vendors are clearly focused on a private cloud strategy. However, many are beginning to offer specialized public cloud services with a focus on security and manageability. These vendors are melding all types of cloud services — public, private, and hybrid — into interesting and sometimes compelling offerings.

Change Three: Service Orientation will make cloud computing successful. Service Orientation was hot two years ago. The huge hype behind cloud computing led many pundits to proclaim that the Service Oriented Architecture was dead and gone. In fact, the cloud vendors that are succeeding are those building true business services, without dependencies, that can migrate between public, private, and hybrid clouds; that is a real competitive advantage.

Change Four: Systems vendors are banking on integration. Does a cloud really need hardware? Only two years ago the dialog surrounded the contention that clouds meant no hardware would be necessary. What a difference a few years can make. The emphasis coming primarily from the major systems vendors is that hardware indeed matters. These vendors are integrating cloud infrastructure services with their hardware.

Change Five: Cloud security takes center stage. Yes, cloud security was a huge topic two years ago, but the dialog is beginning to change. There are three conversations that I am hearing. First, cloud security is a huge issue that is holding back widespread adoption. Second, there are well-designed software and hardware offerings that can make cloud computing safe. Third, public clouds are just as secure as an internal data center because these vendors have more security experts than any traditional data center. In addition, a large number of venture-backed cloud security companies are entering the market with new and quite compelling value propositions.

Change Six: Cloud service level management is a primary customer concern. Two years ago, no one our team interviewed for Cloud Computing for Dummies connected service level management with cloud computing. Now that customers are seriously planning for widespread adoption of cloud computing, they are seriously examining their required level of service for cloud computing. IT managers are reading the service level agreements from public cloud vendors and Software as a Service vendors carefully. They are looking beyond the service level for a single service and beginning to think about the overall service level across their own data centers as well as the other cloud services they intend to use.

Change Seven: IT cares most about service automation. No, automation in the data center is not new; it has been an important consideration for years. However, what is new is that IT management is looking at the cloud not just as a way to avoid the costs of purchasing hardware. They see automation of both routine functions and business processes as the primary benefit of cloud computing. In the long run, IT management intends to focus on automation and reduce hardware to interchangeable commodities.

Change Eight: Cloud computing moves to the front office. Two years ago IT and business leaders saw cloud computing as a way to improve back-office efficiency. This is beginning to change. With the flexibility of cloud computing, management is now looking at the potential to quickly innovate business processes that touch partners and customers.

IBM’s hardware sneak attack

April 13, 2010

Yesterday I read an interesting blog commenting on why Oracle seems so interested in Sun’s hardware.

I quote from a comment by Brian Aker, former head of architecture for MySQL, on the O'Reilly Radar blog site. He comments on why Oracle bought Sun:

Brian Aker: I have my opinions, and they’re based on what I see happening in the market. IBM has been moving their P Series systems into datacenter after datacenter, replacing Sun-based hardware. I believe that Oracle saw this and asked themselves “What is the next thing that IBM is going to do?” That’s easy. IBM is going to start pushing DB2 and the rest of their software stack into those environments. Now whether or not they’ll be successful, I don’t know. I suspect once Oracle reflected on their own need for hardware to scale up on, they saw a need to dive into the hardware business. I’m betting that they looked at Apple’s margins on hardware, and saw potential in doing the same with Sun’s hardware business. I’m sure everything else Sun owned looked nice and scrumptious, but Oracle bought Sun for the hardware.

I think that Brian has a good point. In fact, in a post I wrote a few months ago, I commented on the fact that hardware is back.  It is somewhat ironic. For a long time, the assumption has been that a software platform is the right leverage point to control markets.  Clearly, the tide is shifting.  IBM, for example, has taken full advantage of customer concerns about the future of the Sun platform. But IBM is not stopping there. I predict a hardware sneak attack that encompasses IBM’s platform software strength (i.e., middleware, automation, analytics, and service management) combined with its hardware platforms.

IBM will use its strength in systems and middleware software to expand its footprint into Oracle's backyard, surrounding Oracle's software with an integrated platform designed to work as a system of systems. It is clear that over the past five or six years IBM's focus has been on software and services. Software has long provided good profitability for IBM. Services have made enormous strides over the past decade as IBM has learned to codify knowledge and best practices into what I have called Service as Software. The other important movement has been IBM's focused effort over the past decade to revamp the underlying structure of its software into modular services that are used across its software portfolio. Combine this approach with industry-focused business frameworks and you have a pretty good idea of where IBM is headed with its software and services portfolios.

The hardware strategy began to evolve in 2005, when IBM's software group bought a little XML accelerator appliance company called DataPower. Many market watchers were confused: what would IBM software do with a hardware platform? Over time, IBM expanded the footprint of this platform and began to repurpose it as a means of pre-packaging software components. First there was a SOA-based appliance; then IBM added a virtual machine appliance called the CloudBurst appliance. On the Lotus side of the business, IBM bought another appliance company that evolved into the Lotus Foundations platform. Appliances became a great opportunity to package and preconfigure systems that could be remotely upgraded and managed. This packaging of software with systems demonstrated the potential not only for simplicity for customers but also for a new way of adding value and revenue.

Now, IBM is taking the idea of packaging hardware with software to new levels. It is starting to leverage its software and networking capability in hardware-driven systems. For example, within the systems environment, IBM is leveraging its knowledge of optimizing systems software so that application-based workloads can take advantage of capabilities such as threading, caching, and systems-level networking.

In its recent announcement, IBM has developed its new hardware platforms based on the five most common workloads: transaction processing, analytics, business applications, records management and archiving, and collaboration. What does this mean to customers? If a customer has a transaction-oriented system, the most important capability is to ensure that the environment uses as many threads as possible to optimize the speed of throughput. In addition, caching repetitive workloads will ensure that transactions move through the system as quickly as possible. While this has been doable in the past, the difference is that these capabilities are packaged as an end-to-end system. Thus, implementation can be faster and more precise. The same can be said for analytics workloads. These workloads demand a high level of efficiency to enable customers to look for patterns in the data that help predict outcomes. Analytics workloads require the caching and fast processing of algorithms and data across multiple sources.
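
The transaction-workload idea is easy to see at a small scale. Here is a toy sketch (not IBM's systems-level implementation; the functions and numbers are invented) of the two techniques named above: many threads to push transactions through, plus a cache so repetitive work is never recomputed.

```python
from concurrent.futures import ThreadPoolExecutor
from functools import lru_cache

@lru_cache(maxsize=1024)
def price_lookup(sku: str) -> float:
    # Stand-in for an expensive, repetitive lookup. Caching means repeated
    # transactions for the same item never recompute it.
    return len(sku) * 1.25

def process_transaction(order: tuple) -> float:
    sku, quantity = order
    return price_lookup(sku) * quantity

orders = [("widget", 2), ("gadget", 1), ("widget", 5)] * 1000
# Many threads keep transactions moving through the system concurrently.
with ThreadPoolExecutor(max_workers=8) as pool:
    totals = list(pool.map(process_transaction, orders))
print(round(sum(totals), 2))
```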

The bottom line is that IBM is looking at its hardware as an extension of the types of workloads it is required to support. Rather than considering hardware as a set of separate platforms, IBM is following a system-of-systems approach that is consistent with cloud computing. With this type of approach, IBM will continue on the path of viewing a system as a combination of the hardware platform, the systems software, and systems-based networking. These elements of computing are then configured based on the type of application and the nature of the current workload.

It is, in fact, workload optimization that is at the forefront of what is changing in hardware in the coming decade. This is true both in the data center and in the cloud. Cloud computing — and the hybrid environments that make up the future of computing — is predicated on predictable, scalable, and elastic workload management. It is the way we will start thinking about computing as a continuum of all of the component parts combined — hardware, software, services, networking, storage, collaboration, and applications. This reflects the dramatic changes that are just over the horizon.

Why are we about to move from cloud computing to industrial computing?

April 5, 2010

I spent the other week at a new conference called Cloud Connect. Being able to spend four days immersed in an industry discussion about cloud computing really allows you to step back and think about where we are with this emerging industry. While it would be possible to write endlessly about all the meetings and conversations I had, you probably wouldn't have enough time to read it all. So, I'll spare you and give you the top four things I learned at Cloud Connect. I recommend that you also take a look at Brenda Michelson's blogs from the event for a lot more detail. I would also refer you to Joe McKendrick's blog from the event.

1. Customers are still figuring out what cloud computing is all about. For those of us who spend way too many hours on the topic of cloud computing, it is easy to assume that everyone knows what it is all about. The reality is that most customers do not understand what cloud computing is. Marcia Kaufman and I conducted a full-day workshop called Introduction to Cloud. The more than 60 people who dedicated a full day to a discussion of all aspects of the cloud made it clear to us that they are still figuring out the difference between Infrastructure as a Service and Platform as a Service. They are still trying to understand the issues around security and what cloud computing will mean for their jobs.

2. There is a parallel universe out there among people who have been living and breathing cloud computing for the last few years. In their view the questions are very different. The big discussions among the well-connected were focused on a few key issues: Is there such a thing as a private cloud? Is Software as a Service really cloud computing? Will we ever have a true segmentation of the cloud computing market?

3. From the vantage point of the market, it is becoming clear that we are about to enter one of those transitional times in this important evolution of computing. Cloud Connect reminded me a lot of the early days of the commercial Unix market. When I attended my first Unix conference in the mid-1980s it was a different experience than going to a conference like Comdex. It was small. I could go and have a conversation with every vendor exhibiting. I had great meetings with true innovators. There was a spirit of change and innovation in the halls. I had the same feeling about the Cloud Connect conference. There were a small number of exhibitors. The key innovators driving the future of the market were there to discuss and debate the future. There was electricity in the air.

4. I also anticipate a change in the direction of cloud computing now that it is about to pass that tipping point. I am a student of history, so I look for patterns. When Unix reached the stage where the giants woke up and started seeing huge opportunity, they jumped in with a vengeance. The great but small Unix technology companies were either acquired, got big, or went out of business. I think that we are on the cusp of the same situation with cloud computing. IBM, HP, Microsoft, and a vast array of others have seen the future, and it is the cloud. This will mean that emerging companies with great technology will have to be both really lucky and really smart.

The bottom line is that Cloud Connect represented a seminal moment in cloud computing. There is plenty of fear among customers who are trying to figure out what it will mean for their own data centers. What will the organizational structure of the future look like? They don't know, and they are afraid. The innovative companies are looking at the coming armies of large vendors and wondering how to keep their differentiation so that they can become the next Google rather than the next company whose name we can't remember.

There was much debate about two important issues: cloud standards and private clouds. Are these issues related? Of course. Standards always become an issue when there is a power grab in a market. If a Google, Microsoft, Amazon, IBM, or Oracle is able to set the terms for cloud computing, market control can shift overnight. Will standard interfaces be able to save the customer? And how about private clouds? Are they real? My observation and contention is that yes, private clouds are real. If you deploy the same automation, provisioning software, and workload management inside a company rather than inside a public cloud, it is still a cloud. Ironically, the debate over the private cloud is also about power and position in the market, not about ideology. If a company like Google, Amazon, or whichever company is your favorite flavor is able to debunk the private cloud, guess who gets all the money? If you are a large company where IT and the data center are core to how you conduct business, you can and should have a private cloud that you control and manage.

So, after taking a step back I believe that we are witnessing the next generation of computing — the industrialization of computing. It might not be as much fun as the wild west that we are in the midst of right now but it is coming and should be here before we realize that it has happened.

Why hardware still matters — at least for a couple of years

February 9, 2010

It is easy to assume that the excitement around cloud computing would put a damper on the hardware market. But I have news for you: I am predicting that over the next few years hardware will be front and center. Why would I make such a wild prediction? Here are my three reasons.

1. Hardware is front and center in almost all aspects of the computer industry. It is no wonder that Oracle wants to become a hardware company. Hardware is tangible. Its revenue hits the bottom line right away. Hardware can envelop software and keep customers pinned down for many, many years. New-generation platforms in the form of hardware appliances are a convenient delivery platform that helps the sales cycle. It is no wonder that Oracle wants a hardware platform. It completes the equation and allows Oracle to position itself as a fully integrated computing company. Likewise, IBM and HP are focused on building up their war chests full of strong hardware platforms. If you believe that customers want to deal with one large brand, or two, then the winners will want to control the entire computing ecosystem.

2. The cloud looms. Companies like Amazon.com and Google do not buy hardware from the big iron providers and never will. For economic reasons, these companies go directly to component providers and purchase custom-designed chips, boards, and the like. This approach means that, for a very low price, these cloud providers can reduce their power consumption by making sure that the components are optimized for massively scaled clouds. These cloud vendors are focused on undercutting the opportunity and power of the big systems providers. Therefore, cloud providers care a lot about hardware — it is through optimization of the hardware that they can threaten the power equilibrium in the computer market.

3. The clash between cloud and on-premises environments. It is clear that the computer marketplace is at a transition point. The cloud vendors are betting that, through optimization of everything, they can drive costs so low that they win. The large systems vendors are betting that their sophisticated systems combining hardware, software, and services will win because of their ability to better protect the integrity of the customer's business. These vendors will all provide their own versions of the public and private cloud to ensure that they maintain power.

So, in my view there will be an incredible focus on hardware over the next two years. This will actually be good for customers, because the level of sophistication and the cost/performance metrics will be impressive. This hardware renaissance will not last. In the long run, hardware will be commoditized. The end game will be interesting because of the cloud. It will not be a zero-sum game. No, the data center doesn't go away. But the difference is that purpose-built hardware will be optimized for workloads to support the massively scaled environments that will be the heart of the future of computing. And then it will be all about the software, the data, and the integration.


Tectonic shifts: HP Plus 3Com versus Cisco Plus EMC

November 18, 2009

Just when it looked clear where the markets were lining up around data center automation and cloud computing, things change. I guess that is what makes this industry so very interesting. The proposed acquisition of 3Com by HP is a direct challenge to Cisco's network management franchise. However, the implications of this move go further than what meets the eye. It also puts HP on a direct path against EMC and its Cisco partnership. And to make things even more interesting, it puts these two camps in a competitive three-way race against IBM and its cloud/data center automation strategy. And of course, it doesn't stop there. A myriad of emerging companies like Google and Amazon want a larger share of the enterprise market for cloud services. Companies like Unisys and CSC that have focused on outsourced secure data centers are getting into the act.

I don't think that we will see a single winner — no matter what any one of these companies will tell you. The winners in this market shift will be those companies that can build a compelling platform and a compelling value proposition for a partner ecosystem. The truth about the cloud is that it is not simply a network or a data center. It is a new way of providing services of all sorts that can support changing customer workloads in a secure and predictable manner.

In light of this, what does this say about HP's plans to acquire 3Com? If we assume that the network infrastructure is a key component of an emerging cloud and data center strategy, HP is making a calculated risk in acquiring more assets in this market. HP has found that its ProCurve networking division has begun gaining traction. ProCurve, HP's networking division, includes network switches, wireless access points, WAN routers, and access control servers and software, and it competes directly with Cisco in the networking switch market. When HP had a tight partnership with Cisco, the company de-emphasized networking. However, once Cisco started to move into the server market, the handcuffs came off. The 3Com acquisition takes the competitive play to a new level. 3Com has a variety of good pieces of technology that HP could leverage within ProCurve. Even more significantly, it picks up a strong security product called TippingPoint, a 3Com acquisition. TippingPoint fills a critical hole in HP's security offering: it provides network security offerings, including intrusion prevention and a product that inspects network packets. The former 3Com subsidiary has also established a database of security threats based on a network of external researchers.

But I think that one of the most important reasons HP bought 3Com is its strong relationships in the Chinese market. In fiscal year 2008, half of 3Com's revenue came from its H3C joint venture with Chinese vendor Huawei Technology. Therefore, it is not surprising that HP would pay a premium to gain a foothold in this lucrative market. If HP is smart, it will do a good job of leveraging 3Com's many software assets to build out its networking business as well as beefing up its software organization. In reality, HP is much more comfortable in the hardware market, so adding networking as a core competency makes sense. It will also bolster HP's position as a player in the high-end data center market and in the private cloud space.

Cisco, on the other hand, is coming from the network and moving aggressively into the cloud and the data center market. The company has purchased a position in VMware and has established a tight partnership with EMC as a go-to-market strategy. For Cisco, this gives the company credibility and access to customers outside of its traditional markets. For EMC, the Cisco relationship strengthens its networking play. But an even bigger value of the relationship is presenting a bigger footprint to customers as the partners move to take on HP, IBM, and the assortment of other players who all want to win. The Cisco/EMC/VMware play is to focus on the private cloud. In their view, a private cloud is very similar to a private, preconfigured data center. It can be a compelling value proposition for a customer that needs a data center fast without having to deal with a lot of moving parts. From a cloud computing perspective, though, the key question remains: is this really a cloud?

It was inevitable that this quiet market dominated by Google and Amazon would heat up as the cloud becomes a real market force. But I don't expect that HP or Cisco/EMC will have a free run. They are being joined by IBM and Microsoft — among others. The impact could be better options for customers and prices that invariably will fall. The key to success for all of these players will be how well they manage what will be an increasingly heterogeneous, federated, and highly distributed hardware and software world. Management comes in many flavors: management of highly distributed services and management of workloads.