Archive for the ‘autonomic computing’ Category

Yes, you can have an elastic private cloud

April 11, 2011

I was having a discussion with a skeptical CIO the other day. His issue was that a private cloud isn’t real. Why? In contrast to the public cloud, which has unlimited capacity on demand, a private cloud is limited by the size and capacity of the internal data center. While I understand this point, I disagree, and here is why. I don’t know of any data center that doesn’t have enough servers or capacity. In fact, if you talk to most IT managers, they will quickly admit that they don’t lack physical resources. This is why there has been so much focus on server virtualization. With server virtualization, these organizations actually get rid of servers and make their IT organizations more efficient.

Even when data centers improve their efficiency, the issue is still not a lack of resources. What data centers lack is the organizational structure to provision those resources in a proactive and efficient way. The flip side is also true: data centers lack the ability to reclaim resources once they have been provisioned.

So, I maintain that the problem with the data center is not a lack of resources but rather the management and automation of those resources. Imagine an organization that leverages the existing physical resources in its data center by adding self-service provisioning and business process rules for allocating resources based on business need. This would mean that when developers start working on a project, they are allocated the amount of resources they need – not what they want. More importantly, when the project is over, those resources are returned to the pool.
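
A minimal sketch of what such policy-based provisioning could look like, assuming a simple shared pool and quota rule of my own invention (the class, project names, and numbers are purely illustrative, not any vendor’s product):

    # Illustrative sketch of self-service provisioning against a shared pool.
    # All names and numbers here are hypothetical.
    class ResourcePool:
        def __init__(self, total_vcpus):
            self.total_vcpus = total_vcpus
            self.allocations = {}              # project -> vCPUs granted

        def available(self):
            return self.total_vcpus - sum(self.allocations.values())

        def provision(self, project, requested, approved_quota):
            # Business rule: grant what the project needs, capped by its approved quota.
            granted = min(requested, approved_quota)
            if granted > self.available():
                raise RuntimeError("pool exhausted; request queued for review")
            self.allocations[project] = granted
            return granted

        def reclaim(self, project):
            # When the project ends, its capacity returns to the shared pool.
            return self.allocations.pop(project, 0)

    pool = ResourcePool(total_vcpus=512)
    pool.provision("web-redesign", requested=96, approved_quota=64)   # gets 64, not 96
    pool.reclaim("web-redesign")                                      # capacity goes back to the pool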

This, of course, does not work for every application and every workload in the data center. There are applications that are highly specialized and are not going to benefit from automation. However, an increasingly large portion of computing can be transformed in the private cloud environment by truly tuning workloads and resources, making the private cloud as elastic as what we think of as an ever-expanding public cloud.

Eight things that changed since we wrote Cloud Computing for Dummies

October 8, 2010

I admit that I haven’t written a blog post in more than three months — but I do have a good reason. I just finished writing my latest book — not a Dummies book this time. It will be my first business book based on almost three decades in the computer industry. Once I know the publication date I will tell you a lot more about it. But as I was finishing this book I was thinking about my last book, Cloud Computing for Dummies, which was published almost two years ago. As this anniversary approaches, I thought it was appropriate to take a look back at what has changed. I could probably go on for quite a while talking about how little information was available at that point and how few CIOs were willing to talk about or even consider cloud computing as a strategy. But that’s old news. I decided that it would be most interesting to focus on eight of the changes that I have seen in this fast-moving market over the past two years.

Change One: IT is now on board with cloud computing. Cloud computing has moved from a reaction to sluggish IT departments to a business strategy involving both business and technology leaders. A few years ago, business leaders were reading about Amazon and Google in business magazines. They knew little about what was behind the hype. They focused on the fact that these early cloud pioneers seemed to be efficient at making cloud capability available on demand. No paperwork and no waiting for the procurement department to process an order. Two years ago IT leaders tried to pretend that cloud computing was a passing fad that would disappear. Now I am finding that IT is treating cloud computing as a centerpiece of its future strategy — even if it is only testing the waters.

Change Two: enterprise computing vendors are all in with both private and public cloud offerings. Two years ago most traditional IT vendors did not pay much attention to the cloud. Today, most hardware, software, and services vendors have jumped on the bandwagon. They all have cloud computing strategies. Most of these vendors are clearly focused on a private cloud strategy. However, many are beginning to offer specialized public cloud services with a focus on security and manageability. These vendors are melding all types of cloud services — public, private, and hybrid — into interesting and sometimes compelling offerings.

Change Three: Service Orientation will make cloud computing successful. Service Orientation was hot two years ago. The huge hype behind cloud computing led many pundits to proclaim that Service Oriented Architecture was dead and gone. In fact, the cloud vendors that are succeeding are those building true business services, free of dependencies, that can migrate between public, private, and hybrid clouds; that is their competitive advantage.

Change Four: System vendors are banking on integration. Does a cloud really need hardware? Only two years ago the dialog centered on the contention that clouds meant no hardware would be necessary. What a difference a few years can make. The emphasis coming primarily from the major systems vendors is that hardware indeed matters. These vendors are integrating cloud infrastructure services with their hardware.

Change Five: Cloud security takes center stage. Yes, cloud security was a huge topic two years ago, but the dialog is beginning to change. There are three conversations that I am hearing. First, cloud security is a huge issue that is holding back widespread adoption. Second, there are well-designed software and hardware offerings that can make cloud computing safe. Third, public clouds are just as secure as an internal data center because these vendors have more security experts than any traditional data center. In addition, a large number of venture-backed cloud security companies are entering the market with new and quite compelling value propositions.

Change Six: Cloud service level management is a primary customer concern. Two years ago no one our team interviewed for Cloud Computing for Dummies connected service level management with cloud computing. Now that customers are seriously planning for widespread adoption of cloud computing, they are seriously examining their required level of service for cloud computing. IT managers are reading the service level agreements from public cloud vendors and Software as a Service vendors carefully. They are looking beyond the service level for a single service and beginning to think about the overall service level across their own data centers as well as the other cloud services they intend to use.
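
A rough, back-of-the-envelope illustration of why the overall service level matters more than any single SLA (the numbers are mine, not from any vendor): when a business process depends on several services in series, its availability is roughly the product of the individual commitments, so the composite figure is always lower than the weakest single SLA.

    # Hypothetical example: composite availability of services used in series.
    slas = {"public_cloud_vm": 0.999, "saas_crm": 0.995, "internal_db": 0.999}

    composite = 1.0
    for availability in slas.values():
        composite *= availability          # every service must be up for the process to be up

    downtime_hours = (1 - composite) * 365 * 24
    print(f"end-to-end availability: {composite:.4f}")            # about 0.9930
    print(f"expected downtime: {downtime_hours:.0f} hours/year")  # about 61 hours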

Change Seven: IT cares most about service automation. No, automation in the data center is not new; it has been an important consideration for years. However, what is new is that IT management is looking at the cloud not just to avoid the costs of purchasing hardware. They see the automation of both routine functions and business processes as the primary benefit of cloud computing. In the long run, IT management intends to focus on automation and reduce hardware to interchangeable commodities.

Change Eight: Cloud computing moves to the front office. Two years ago IT and business leaders saw cloud computing as a way to improve back office efficiency. This is beginning to change. With the flexibility of cloud computing, management is now looking at the potential to quickly innovate business processes that touch partners and customers.

My Top Eleven Predictions for 2009 (I bet you thought there would be only ten)

November 14, 2008

What a difference a year makes. The past year was filled with a lot of interesting innovations and market shifts. For example, Software as a Service went from being something for small companies or departments within large ones to a mainstream option. Real customers are beginning to solve real business problems with service oriented architecture. The latest hype is around cloud computing – after all, the software industry seems to need hype to survive. As we look forward into 2009, it is going to be a very different and difficult year, but one that will be full of surprising twists and turns. Here are my top predictions for the coming year.
One. Software as a Service (SaaS) goes mainstream. It isn’t just for small companies anymore. While this has been happening slowly and steadily, it is rapidly becoming mainstream because, with the dramatic cuts in capital budgets, companies are going to fulfill their needs with SaaS. While companies like SalesForce.com have been the successful pioneers, the big guys (like IBM, Oracle, Microsoft, and HP) are going to make a major push for dominance and strong partner ecosystems.
Two. Tough economic times favor the big and stable technology companies. Yes, these companies will trim expenses and cut back like everyone else. However, customers will be less willing to bet the farm on emerging startups with cool technology. The only way emerging companies will survive is to do what I call “follow the pain”. In other words, come up with compelling technology that solves really tough problems that others can’t. They need to fill the white space that the big vendors have not filled yet. The best option for emerging companies is to use this time, when everyone else will be hiding under their beds, to get aggressive and show value to customers and prospects. It is best to shout when everyone else is quiet. You will be heard!
Three. The Service Oriented Architecture market enters the post-hype phase. This is actually good news. We have had in-depth discussions with almost 30 companies for the second edition of SOA for Dummies (coming out December 19th). They are all finding business benefit from the transition. They all view SOA as a journey – not a project. So, there will be less noise in the market but more good work getting done.
Four. Service Management gets hot. This has long been an important area, whether companies were looking at automating data centers or managing processes tied to business metrics. So, what is different? Companies are starting to seriously plan a service management strategy tied both to customer experience and satisfaction. They are tying this objective to their physical assets, their IT environment, and their business processes across the company. There will be vendor consolidation and a lot of innovation in this area.
Five. The desktop takes a beating in a tough economy. When times get tough companies look for ways to cut back and I expect that the desktop will be an area where companies will delay replacement of existing PCs. They will make do with what they have or they will expand their virtualization implementation.
Six. The Cloud grows more serious. Cloud computing has actually been around since the early time-sharing days, if we are to be honest with each other. However, there is a difference: emerging technologies like multi-tenancy make this approach to shared resources different (a small sketch of what multi-tenancy means appears after this list). Just as companies are moving to SaaS for economic reasons, companies will move to clouds with the same goal – decreasing capital expenditures. Companies will have to start gaining an understanding of the impact of trusting a third party provider. Performance, scalability, predictability, and security are not guaranteed just because some company offers a cloud. Service management of the cloud will become a key success factor. And there will be plenty of problems to go around next year.
Seven. There will be tech companies that fail in 2009. Not all companies will make it through this financial crisis. Even large companies with cash could end up on the failure list. I predict that Sun Microsystems, for example, will fail to remain intact. I expect that company will be broken apart. The hardware assets could be sold to its partner Fujitsu, while pieces of the software business could be sold off as well. It is hard to see how a company without a well-crafted software strategy and execution model can remain financially viable. Similarly, companies without a focus on the consumer market will have a tough time in the coming year.
Eight. Open Source will soar in this tight market. Open source companies are in a good position in this type of market—with a caveat. There is a danger in customers simply adopting an open source solution unless there is a strong commercial support structure behind it. Companies that offer commercial open source will emerge as strong players.
Nine.  Software goes vertical. I am not talking about packaged software. I anticipate that more and more companies will begin to package everything based on a solutions focus. Even middleware, data management, security, and process management will be packaged so that customers will spend less time building and more time configuring. This will have an impact in the next decade on the way systems integrators will make (or not make) money.
Ten. Appliances become a software platform of choice for customers. Hardware appliances have been around for a number of years and are growing in acceptance and capability. This trend will accelerate in the coming year. The most common solutions used with appliances include security, storage, and data warehousing. The appliance platform will expand dramatically this coming year. More enterprise software will be sold as prepackaged appliance-based solutions to make the acceptance of complex enterprise software easier.

Eleven. Companies will spend money on anticipation management. Companies must be able to use their information resources to understand where things are going. Being able to anticipate trends and customer needs is critical.  Therefore, one of the bright spots this coming year will be the need to spend money getting a handle on data.  Companies will need to understand not just what happened last year but where they should invest for the future. They cannot do this without understanding their data.
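
As promised in prediction Six, here is a minimal sketch of what multi-tenancy means in practice: one shared application instance and database serving many customers, with every query scoped to a tenant so the data stays logically separated. The table, column, and tenant names are hypothetical, chosen only to illustrate the pattern.

    # Hypothetical sketch of row-level tenant isolation in a shared database.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE invoices (tenant_id TEXT, amount REAL)")
    conn.executemany("INSERT INTO invoices VALUES (?, ?)",
                     [("acme", 100.0), ("acme", 250.0), ("globex", 75.0)])

    def invoices_for(tenant_id):
        # Every query is scoped to the calling tenant: one instance, many customers.
        return conn.execute(
            "SELECT amount FROM invoices WHERE tenant_id = ?", (tenant_id,)
        ).fetchall()

    print(invoices_for("acme"))    # [(100.0,), (250.0,)]
    print(invoices_for("globex"))  # [(75.0,)]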

The bottom line is that 2009 will be a complicated year for software. Many companies without a compelling solution to customer pain will, and should, fail. The market favors safe companies. As in any down market, some companies will focus on avoiding any risk and waiting. The smart companies – both providers and users of software – will take advantage of the rough market to plan for innovation and success when things improve – and they always do.

Can Microsoft Pull Virtualization, SOA, Management, and SaaS Together?

June 17, 2008

For three years in a row I have attended Microsoft’s server and tools analyst briefing. This is the vision of Microsoft that focuses on the server side of the company. A few years ago I predicted that this part of the company would get my vote in terms of growth and potential. I stand by my position. While Microsoft’s desktop division is suffering through a mid-life crisis, the server side is flexing its muscles. The transition toward power on the enterprise side is complicated for Microsoft. The challenge facing Microsoft is how to make the transition from its traditional role as champion and leader of the programmer to a leader in the next generation of distributed computing infrastructure. If Microsoft can make this transition in a coherent way, it could emerge in an extremely powerful position.

So, I will provide what I think are the five most important opportunities that the server and tools division of Microsoft is focused on.


Opportunity One. Virtualization as a foundation. The greatest opportunity, ironically, is also the greatest threat. If customers decide to virtualize rather than to buy individual licenses, Microsoft could suffer – especially in the desktop arena. At the same time, Microsoft clearly sees the benefits of becoming a leader in virtualization. Therefore, virtualization is becoming the focus of the next generation of computing infrastructure both on the server and the desktop. Microsoft is making many investments in virtualization, including the desktop, the hypervisor, the applications, the operating system, graphics, and overall management (including identity management). One smart move that Microsoft has made is to invest in its hypervisor, intended to come out soon as HyperV. Rather than offering HyperV as a standalone product, Microsoft is adding the hypervisor into the fabric of its server platform. This is a pragmatic and forward-thinking approach. If I were an independent hypervisor vendor I would hit the road right about now. Microsoft’s philosophy around enterprise computing is clear: unified and virtualized.

Microsoft’s management believes that within five to ten years all servers will be virtualized. To me this sounds like a logical assumption both in terms of manageability and power consumption. So, how does Microsoft gain supremacy in this market? Clearly, it understands that it has to take on the market leader: VMware. It hopes to do this in two ways: providing overall management of the virtualized environment (including managing VMware) and through its partnership with Citrix. There was a lot of buzz for a while that Microsoft would buy Citrix. I don’t think so. The relationship is advantageous to both companies, so I expect that Microsoft will enjoy the revenue and Citrix will enjoy the benefits of Microsoft’s market clout.

Microsoft has been on an acquisition binge in the virtualization market. While these deals haven’t created the buzz of the attempted Yahoo acquisition, they are important pieces that support the new strategy. Investments include Kidaro for desktop virtualization management (which sits on the virtual PC and is intended to provide application compatibility on the virtual desktop). Another investment, Calista Technologies, provides graphics virtualization that offers the full “Vista experience” for the remote desktop. Last year Microsoft purchased Softricity, which offers application virtualization and OS streaming. Microsoft has said that it has sold 6.5 million Softricity seats (priced at $3.00 per copy). Now, add in HyperV and the identity management offerings and things get very interesting.

One of the smartest things that Microsoft is doing is to position virtualization within the context of a management framework. In fact, in my view, virtualization is simply not viable without management. Microsoft has positioned this portfolio of virtualization offerings within a management framework (System Center) for managing both the physical and virtual environments for customers.

Opportunity Two. Managing a combined physical and virtual world. Since Microsoft came out with SMS in the late 1990s, it has wanted to find a way to gain a leadership role in management software. It has been a complex journey and is still a work in progress. It is indeed a time of transition for Microsoft. The container for its management approach is System Center. Today with System Center, Microsoft has its sights on managing not only Windows systems but also a customer’s heterogeneous environment. Within that environment Microsoft has included identity management (leveraging Active Directory as the management framework, including provisioning and certificate management). This is one area where Microsoft seems to be embracing heterogeneity in a big way. Like many of the infrastructure leaders that Microsoft competes with, Microsoft’s leaders are talking about the ability to create a management framework that is “state aware” so that the overall environment is more easily self-managed. Microsoft envisions a world where, through virtualization, there is essentially a pool of resources that can be managed based on business policies and service levels. They talked a lot about automating the management of resources. Good thinking, but certainly not unique.

Microsoft is making a significant investment in management – especially in areas such as virtualization and virtual machine management. More importantly, through its Xen-based connections (via Citrix), Microsoft will offer connectors to other systems management platforms such as IBM’s Tivoli and HP’s OpenView. That means that Microsoft has ambitions to manage large-scale data centers. Microsoft is building its own data centers that will be the foundation for its cloud offerings.

Opportunity Three. Creating the next generation dynamic platform. Every company I talk to lately is looking to own the next generation dynamic computing platform. This platform will be the foundation for the evolution of Service Oriented Architectures, social networks, and software as a service. But, obviously, this is complicated, especially if you assume that you want to achieve ubiquitous integration between services that don’t know each other. Microsoft’s approach to this (they call it Oslo) is based on a modeling language. Microsoft understands that achieving this nirvana requires a way to establish context. The world we live in is a web of relationships. Somehow in real life we humans are able to take tiny cues and construct a world view. Unfortunately, computers are not so bright. So, Microsoft is attacking this problem by developing a semantic language that will be the foundation for a model-based view of the world. Microsoft intends to leverage its network of developers to make this language-based approach the focal point of a new way of creating modular services that can dynamically change based on context.

This is indeed an interesting approach. It is also a bottom-up approach to the problem of semantic modeling. While Microsoft does have a lot of developers who will want to leverage this emerging technology, I am concerned that a bottom-up approach could be problematic. It must be combined with a top-down approach if it is to be successful.

Opportunity Four. Software as a Service Plus. I always thought that Microsoft envied AOL in the old days, when AOL could get customers to pay per month while Microsoft sold perpetual licenses that might not be upgraded for years. Microsoft is trying to build a case that customers really want a hybrid environment so they can use an application on premise and then enable their mobile users to use the same capability as a service. Therefore, when Microsoft compares itself to companies like Salesforce.com, NetSuite, and Zoho, it feels it has a strategic advantage because it offers full capabilities whether online or offline. But Microsoft is taking this further by taking services such as Exchange and offering them as a service. This will be focused primarily on the SMB market and on remote departments of large companies.

This is only the beginning from what I am seeing. Live Mesh, announced in April, is a services-based web platform that helps developers with context over the web. Silverlight, also announced this spring, is intended as a Web 2.0 platform. Microsoft is taking these offerings, plus others such as Virtual Earth, SQL Server data services, cloud-based storage, and BizTalk services, and offering them as components in a service platform – both on its own and with its partners.

Opportunity Five. Microsoft revs up SOA. Microsoft has been slow to get on the SOA bandwagon. But it is starting to make some progress as it readies its registry/repository. This new offering will be built on top of SQL Server and will include a UDDI version 3 service registry. For Master Data Management (MDM) – a single view of the customer – Microsoft will create an offering based on SQL Server. It also views SharePoint as a focal point for MDM. It intends to build an entity data model to support its MDM strategy.

While Microsoft has many of the building blocks it needs to create a Service Oriented Architecture strategy, the company still has a way to go. This is especially true in how the company creates a SOA framework so that customers know how to leverage its technology to move through the life cycle. Microsoft is beginning to talk a lot about business process, including putting in place a common foundation for service interoperability by supporting key standards such as WS-* and its own Windows Communication Foundation services.

The real problem is not in the component parts but the integration of those parts into a cohesive architectural foundation that customers can understand and work with. Also, Microsoft still lacks the in-depth business knowledge that customers are looking for. It relies on its integration partners to provide the industry knowledge.

The bottom line
Microsoft has made tremendous progress over the past five years in coming to terms with new models of computing that are not client or server centric but dynamic. I perceive that the thinking is going in the right direction. Bringing together process thinking, virtualization, management, federated infrastructure, and software as a service is all the right stuff. The question will be whether Microsoft can put all the pieces together in a way that doesn’t just rely on its traditional base of developers to move it forward to the next generation. Microsoft has a unique opportunity to take its traditional customer base of programmers and move them to a new level of knowledge so they can participate in its vision of Dynamic IT.

Taking the Pulse of The New Tivoli

May 20, 2008

It is ironic that I was at the first Tivoli user conference, called Planet Tivoli, back in the early 1990s. Now I am sitting at Tivoli’s first full-blown user conference, called Pulse. Pulse is very much like Tivoli itself: it is a combination of the Netcool, Maximo, and Tivoli user conferences. Over the past several years, IBM has had the challenge of taking its portfolio of individual products, rationalizing them, and creating a management platform. One of the most fortunate events that helped Tivoli is the growing importance of service management: the ability to manage a highly distributed computing environment in a consistent and predictable manner. Therefore, all of Tivoli’s offerings can be defined from this perspective. IBM’s Al Zolar, who runs the Tivoli organization, said in his opening remarks that Tivoli’s common goal across its portfolio is to assure the effective delivery of services.

One of the interesting aspects of IBM’s management strategy is that the company intends to apply the idea of service management beyond IT operations. “Everything needs to be managed,” says Steve Mills, the senior vice president and GM of the software business. He points to many industries that are increasingly embedding intelligence into everything from trucks to assembly lines. Therefore, everything is becoming a manageable system. Companies are increasingly using RFID tags to track products and equipment, for example. As everything becomes a “virtual system,” everything becomes a service to be managed. What an interesting opportunity, and it makes it clear why IBM would have bought a company like Maximo — a company whose software manages physical assets.

So, it is becoming clear that Tivoli is reinventing itself by focusing on service management from the broader corporate perspective. At the foundational level, Tivoli is looking at the foundational services that are required to make this a viable strategy. I liked the fact that Zolar focused on identity management as one of the foundational services. Without managing the identity of the user or the device in a highly distributed environment, service management might work, but it couldn’t be trusted.

Another major focal point for IBM’s emerging service management strategy is process automation. Now, this isn’t a surprise since process is the foundation of traditional operations management. However, it has a broader persona as well, one that transcends operations management. As we move to a more service oriented architecture, service management takes on a broader and more strategic role in the organization. You aren’t just managing the physical servers running applications; you are managing an environment that requires the integration of business services, middleware, transactions, and a variety of physical assets. Some of these pieces will be located at the client site, while others might live in the cloud and yet others will live in a partner’s environment. These sets of virtual services have to be managed as though they were a physical system. Therefore, service management is responsible for maintaining a meaningful process flow that is in compliance with corporate and IT governance rules. And all of this has to be done in a way that doesn’t require so many people that it becomes economically unfeasible.

From my discussions at Pulse, I came away with the understanding that this is, in fact, IBM’s vision for service management. What is impressive is that IBM has begun to create a set of foundational services that are becoming the underpinnings of the Tivoli offerings. This metadata-based framework was designed from some innovative (and very early) technology that came to IBM from the Candle acquisition. In fact, I had looked at this integration technology many years ago and always thought it was one of Candle’s crown jewels. I had wondered what happened to it — now I know.

IBM’s challenge will be to capitalize on this rationalization of its management assets. IBM has the managed service offerings; what it needs is the ability to create an ecosystem based on those offerings so that it can compete with the emerging breed of cloud and service providers like Amazon.com and Google. It is becoming clear to me that customers and software vendors alike are looking to the emerging utility infrastructure providers. I think that with the right type of packaging, IBM could become a major player.

So, my takeaway from my first day at Pulse is this:

  • Tivoli is working to create a set of foundational metadata-level services that link its various managed service offerings.
  • Because of these foundational services, Tivoli can now package its offerings in a much more effective way, which should make them more competitive.
  • Tivoli’s goal is to leverage its operational management expertise in software to move up the food chain and manage both the IT and the business process infrastructures.
  • Cloud computing is very important to IBM. It is still early, but the investment is intense and is being designed as the next generation of virtualization, SOA, and utility computing.
  • Green IT and energy efficiency are key drivers of Tivoli’s emerging services as a growth engine.

One of the primary themes that I heard is the industrialization of computing as the foundation for IBM’s management services. Indeed, I have often said that we are at the beginning of a new era in which computing moves from being an art based on experimentation and hope to an industrialized discipline. The next generation, focused on software and infrastructure as a service, is becoming a reality, and the last mile will be the management of all of those resources and more. This management focus is an imperative as we move toward the industrialization of software and computing.

When does the data center become the cloud?

March 28, 2008

This is the beginning of another season of analyst meetings. Today I am at IBM’s Linux and Open Source meeting. Next week I will be with HP at their industry analyst meeting (hardware, software, services — no printers or PCs), which will be followed by IBM’s Impact (its SOA conference), CA’s analyst meeting, and finally Microsoft’s tools and servers analyst meeting. I could attend many more, but there aren’t enough days in the week and I still have to get some work done!

The overall Linux and Open Source meeting was quite interesting. But what I wanted to talk about is cloud computing. Irving Wladawsky-Berger started off with a fascinating discussion on cloud computing. I have known Irving Wladawsky-Berger for many years. For those of you who missed knowing him, he is one of the most interesting researchers, innovators, and thinkers in the IBM organization. Irving retired from IBM last year and is now Chairman Emeritus of the IBM Academy of Technology and a visiting professor at MIT. I first met Irving when he was the key thought leader for IBM’s e-business strategy, and later for its web strategy and autonomic computing — among others. So, it is always interesting to see what he is thinking about.

I wasn’t surprised when I found that he was thinking about cloud computing. In his view there is a continuum from the early internet days to clouds. He makes a connection between the disruptive qualities of IBM’s e-business strategy and today’s move to the cloud. However, he correctly points out that the e-business strategy was not implemented in a vacuum. In fact, IBM focused on innovation in the context of the legacy systems and applications that dominated the real world of its customers.

Today we are definitely at an inflection point. Irving makes an important observation: virtualized systems, beginning with grids, are simply a stage on the path to distributed computing. What links all of these resources together are Service Oriented Architecture-based protocols. If you can encapsulate components and add clearly defined interfaces, you can move to a distributed world. The real challenge is how you move from where we are today with virtualization to a systems-wide approach to virtualization. To his credit, Irving concedes that this is going to be complicated and disruptive.

I am a firm believer that there is no such thing as brand new technology that emerges out of nowhere. Irving agrees and suggests that cloud computing is an evolution of everything that has been tried over the last 15 years — the Internet, grids, clusters, etc. The cloud is a massive implementation of virtualization.

Where do we go with this? One of the most important points that Irving mentioned, and one that I think is at the heart of making cloud computing (and any type of distributed computing) a success, is industrialization. In short, how do you make things work when workloads grow at an astronomical pace each year? Can you simply cram more and more servers into a traditional data center?

This thought opens up a lot of interesting debates that I will write more about in the next few days. What exactly is a cloud? Is it simply a new type of data center? Does it have to be multi-tenant? Does it have to be “utility computing”? Because clouds are still new and are an evolution of virtualization, I think that the definitions will evolve over time. I don’t think there will be a single type of cloud in the market. I also think that because the cloud sits behind the view of most customers and users, it will not be clear what is a cloud and what is smoke and mirrors. That will certainly make the data center world an interesting place.

The reality, as usual, is more complicated. The real issues around clouds will be the same issues that we have always had in data centers — how do you manage at a massive scale without bringing on armies of people? How do you know which processes are allowed to exchange information with other processes safely? How do you remove the complexity?

One of the ideas that Irving mentioned during his discussion is that of ensembles. Basically, these are a way of ordering components of the enterprise into like entities. These could be physical assets as well as software components (like business processes wrapped as business services). One of the keys to success is that these services must have clearly defined interfaces. What I find very interesting is that to create the next generation of distributed systems, we must move to a service-oriented approach.
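
Here is a small sketch of the idea as I understand it (my own illustration, not IBM’s implementation): each component exposes the same clearly defined interface, and components are then grouped into ensembles of like entities that can be managed as a unit.

    # Illustrative sketch: components with a defined interface, grouped into ensembles.
    from collections import defaultdict
    from typing import Protocol

    class ManagedComponent(Protocol):
        kind: str
        def health(self) -> str: ...         # the clearly defined interface

    class OrderProcess:                       # a business process wrapped as a service
        kind = "business_process"
        def health(self) -> str: return "ok"

    class BladeServer:                        # a physical asset
        kind = "physical_asset"
        def health(self) -> str: return "ok"

    def build_ensembles(components):
        ensembles = defaultdict(list)         # like entities, managed together
        for component in components:
            ensembles[component.kind].append(component)
        return ensembles

    for kind, members in build_ensembles([OrderProcess(), BladeServer(), BladeServer()]).items():
        print(kind, [m.health() for m in members])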

IBM is clearly making a play for clouds as a path forward. It sees clouds as a way to manage ever-expanding workloads by applying modularity and simplicity to the problem. IBM is making the connection between the movement toward clouds and autonomic computing. Obviously, if you are going to scale in this way, an autonomic approach makes perfect sense (at least to me). In essence, IBM is presenting its view of clouds along three dimensions: simplified, shared, and dynamic. It is not clear how quickly IBM and others will be able to make this next generation of virtualization operational, but these dimensions demonstrate that the thinking is on the right track.

What’s the value of autonomic computing?

January 29, 2007

Ok, here is a test for you. What is the value of autonomic computing in the real world? Autonomic computing has always sounded a little like science fiction. The system sits there, anticipates when something is going to go wrong, rushes in, and fixes the problem – before any human ever knows there was a problem. Sort of cool.
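
That “spot the problem and fix it before anyone notices” behavior is usually described as a monitor-analyze-plan-execute loop. Here is a toy sketch of the shape of such a loop; the metric, thresholds, and actions are entirely hypothetical.

    # Toy autonomic control loop: monitor, analyze, plan, execute.
    import random

    def monitor():
        # Stand-in for a real sensor, e.g. memory utilization of a service.
        return random.uniform(0.5, 1.0)

    def analyze(utilization, threshold=0.85):
        # Anticipate trouble before it turns into an outage.
        return utilization > threshold

    def plan(utilization):
        return "restart_worker" if utilization > 0.95 else "add_capacity"

    def execute(action):
        print(f"self-healing action taken: {action}")

    for _ in range(5):                        # in practice this loop runs continuously
        reading = monitor()
        if analyze(reading):
            execute(plan(reading))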

Most of my conversations about autonomic computing have been with IBM. However, this week I ran into an interesting small company that specializes in autonomic computing and even has an IBM partnership in this area. The company, called Embolics (www.embolics.com), is based in Ottawa, Canada, and is less than a year old. Rather than coding from scratch, the company purchased the assets of a company called Symbium that focused on autonomic computing for the telecommunications market. Embolics is holding its cards pretty close to the vest, but it looks like they have some pretty interesting software that discovers patterns of use and matches them to a workflow approach to securing access to software and hardware.

Unlike some of the solutions I have seen over the past few years, this one does not require the customer to create a complex set of rules from scratch. The company says that its software is self-securing and self-managing. The software itself is embedded in an appliance, on a card, or in a virtualization layer. The company already has a few key partnerships – even with IBM Tivoli in its autonomic computing area. So, this is one of those emerging companies I plan to keep an eye on.