Archive for the ‘customer experience’ Category

What’s a private cloud anyway?

February 4, 2011

So in a perfect world, all data centers would magically become clouds and the world would be a better place. All kidding aside, I am tired of all of the hype. Let me put it this way: all data centers cannot and will not become private clouds — at least not for most typical companies. Let me tell you why I say this. There are some key principles of the cloud that I think are worth recounting:

1. A cloud is designed to optimize and manage workloads for efficiency. Therefore, repeatable and consistent workloads are most appropriate for the cloud.

2. A cloud is intended to implement automation and virtualization so that users can add and subtract services and capacity based on demand.

3. A cloud environment needs to be economically viable.

Why aren't traditional data centers private clouds? What if a data center adds some self-service and virtualization? Is that enough? Probably not. A typical data center is a complex environment. It is not uncommon for a single data center to support five or six different operating systems, five or six different languages, four or five different hardware platforms, and perhaps 20 or 30 applications of all sizes and shapes, plus an unending number of tools to support the management and maintenance of that environment. In Cloud Computing for Dummies, written by the team at Hurwitz & Associates, there is a considerable amount written about this issue. Given an environment like this, it is almost impossible to achieve workload optimization. In addition, there are often line-of-business applications that are complicated, used by a few dozen employees, and necessary to run the business. There is simply no economic rationale for moving such applications to a cloud — public or private. The only alternative for such an application would be to outsource it altogether.

So what does belong in the private cloud? Application and business services with consistent workloads that are designed to be used on demand by developers, employees, or partners. Many companies are becoming IT providers to their own employees, partners, customers, and suppliers. These services are predictable and designed as well-defined components that can be optimized for elasticity. They can be used in different situations — to support a single customer in a single business situation, or in a scenario that requires the business to support a huge partner network. Typically, these services can be designed to run on a single operating system (typically Linux) that has been optimized to support these workloads, and many of the capabilities and tasks within this environment have been automated.
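
To make this concrete, here is a minimal sketch, in Python, of what a self-service request against such a private cloud catalog might look like. The catalog, service names, and provisioning API are entirely hypothetical; the point is the pattern: a predefined, repeatable workload, elasticity bounds set by the requester, and no manual procurement step.

```python
# Hypothetical sketch of a self-service request to a private cloud catalog.
# The catalog, service names, and provision() API are illustrative only.

from dataclasses import dataclass

@dataclass
class ServiceRequest:
    service: str        # a predefined, optimized workload from the catalog
    min_instances: int  # baseline capacity
    max_instances: int  # ceiling the service may elastically scale to
    requester: str      # developer, employee, or partner

def provision(request: ServiceRequest) -> str:
    """Validate against the catalog and hand off to the automation layer."""
    catalog = {"build-farm", "test-environment", "partner-portal"}
    if request.service not in catalog:
        raise ValueError(f"{request.service} is not a catalog service")
    # Virtualization and workload management handle the rest automatically.
    return f"{request.service} provisioned for {request.requester}"

print(provision(ServiceRequest("test-environment", 2, 10, "dev-team-a")))
```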

Could there be situations where an entire data center could be a private cloud? Sure, if an organization can plan well enough to limit the elements supported within the data center. I think this will happen with specialized companies that have the luxury of not supporting legacy systems. But for most organizations, reality is a lot messier.

Lotus redux: a transformation in process

February 3, 2011

I have attended Lotusphere for many years, so it has been very interesting to watch the transition. When Lotus Notes was first introduced in the late 1980s, it was a seminal moment in the evolution of collaborative computing. During those first few years, Lotus was able to establish a rich ecosystem of partners and really define the market for collaborative computing — before the general market even had time to think about the necessity for such a platform. But a lot has changed. Fast forward to 2011. Today the idea of the collaboration platform is the norm. Individuals, virtual teams, and big corporations depend on collaboration platforms to get business done. For many years it was clear that Microsoft, with its Office franchise and SharePoint, had captured the market. However, with the advent of cloud computing and Google's push into Google Apps, it became clear that the market dynamics were changing. Now add social networking on top of that, with services like Twitter, Facebook, and LinkedIn, and the world gets a lot more interesting.

So, what does this have to do with Lotus? Actually, a lot. Companies that I have been talking to are frantically looking for ways to combine the spontaneity of social networking platforms with structured collaboration with customers, partners, and prospects. They are looking for new ways to expand their business flexibility and opportunities. This is where Lotus has an interesting opportunity. Lotus has traditionally sold Notes and Domino to the high end of the mid-market and to the enterprise market, primarily as a communications platform — i.e., electronic mail. That is what the typical user sees. But under that interface are complex applications that capture a lot of company intellectual property. Over time, IBM has added a lot of sophisticated offerings for collaboration, such as Quickr and Connections. Now add LotusLive, IBM's cloud collaboration platform, into the mix and things get interesting. In addition to this new-generation platform that brings together the traditional Notes environment with more dynamic collaboration and cloud computing, IBM is enabling analytics on the platform with tools from Cognos.

At the same time, IBM is being realistic this time around. It knows that it cannot displace Microsoft SharePoint, so it is enabling customers to make SharePoint a component in an IBM-driven collaboration environment. Likewise, it is allowing integration with various wireless smartphone environments as well.

But if I were to bet on the one product with the greatest potential to bring IBM into the mainstream of social networking — or, more specifically, social business — it would be LotusLive. LotusLive, in combination with the underlying sophistication of the Notes and Domino platforms, productivity solutions (Symphony), and partnerships and linkages with third-party SaaS platforms, will drive IBM's place in the collaboration market.

IBM clearly has challenges in getting existing customers comfortable with change and helping them move their valuable assets to the new world. But the components are in place. There are also important innovations coming out of the labs that will propel the environment forward. IBM will have to gather a lot more partners and win adoption from companies that aren't currently customers. But the opportunity is waiting.

Predictions for 2011: getting ready to compete in real time

December 1, 2010

2010 was a transition year for the tech sector. It was the year when the cloud suddenly began to look realistic to the large companies that had scorned it. It was the year when social media suddenly became serious business. And it was the year when hardware and software were united as a platform — something like in the old mainframe days, but different because of high-level interfaces and modularity. Important trends also began to emerge, like the importance of managing information both across the enterprise and among partners and suppliers. Competition heated up for ownership of the enterprise software ecosystem, as did competition for leadership of the emerging cloud computing ecosystem.

So, what do I predict for this coming year? While at the outset it might look like 2011 will be a continuation of what has been happening this year, I think there will be some important changes that will impact the world of enterprise software for the rest of the decade.

First, I think it is going to be a very big year for acquisitions. Now, I have said that before and I will say it again. The software market is consolidating around major players that need to fill out their software infrastructure in order to compete. It will come as no surprise if HP begins to purchase software companies as it moves to compete with IBM and Oracle on the software front. But IBM, Oracle, SAP, and Microsoft will not sit still either. All of these companies will purchase the incremental technology companies they need to compete and expand their share of wallet with their customers.

This will be a transitional year for up-and-coming players like Google, Amazon, Netflix, Salesforce.com, and others that haven't hit the radar yet. These companies are plotting their own strategies to gain leadership, and they will continue to push the boundaries in search of dominance. As they push up market and grab share, they will face the familiar problem of supporting customers who expect them to act like adults.

Customer support, in fact, will bubble to the top of the issues for emerging as well as established companies in the enterprise space — especially as cloud computing becomes a well-established distribution and delivery platform for computing. All of these companies, whether well established or startups, will have to balance the requirement to provide sophisticated customer support against the need to make a profit. This will impact everything from license and maintenance revenue to how companies charge for consulting and support services.

But what will customers be looking for in 2011? Customers are always looking to reduce their IT expenses — that is a given. However, the major change in 2011 will be the need to innovate on customer-facing initiatives. Of course, the idea of focusing on customer-facing software isn't new, but there are some subtle changes. The new initiatives are based on leveraging social networking, in a secure way, both to drive business traffic and to anticipate customer needs and issues before they become problems. Companies will spend money innovating on customer relationships.

Cloud computing is the other big issue for 2011. While it was clearly a major differentiator in 2010, the cloud will take an important leap forward in 2011. While companies were testing the water this year, next year they will be looking at best practices in cloud computing. 2011 will be the year when customers focus on three key issues: data integration across public clouds, private clouds, and data centers; manageability in terms of workload optimization and overall performance; and security. The vendors that can demonstrate the right level of service across cloud-based services will win significant business. These vendors will increasingly focus on expanding their partner ecosystems as a way to lock customers into their cloud platforms.

Most importantly, 2011 will be the year of analytics. The technology industry continues to produce data at a pace never seen before. But what can we do with this data? What does it mean for organizations' ability to make better business decisions and to prepare for an unpredictable future? The traditional warehouse is simply too slow to be effective. 2011 will be the year when predictive analytics and information management overall emerge as among the hottest and most important initiatives.

Now I know that we all like lists, so I will take what I’ve just said and put them into my top ten predictions:

1. Both today's market leaders and upstarts are going to continue to acquire assets to become more competitive. Many emerging startups will be scooped up before they see the light of day. At the same time, almost as many startups will emerge as we saw in the dot-com era.

2. Hardware will continue to evolve in a new way. The market will move away from hardware as a commodity. The hardware platform in 2011 will be differentiated based on software and packaging. 2011 will be the year of smart hardware packaged with enterprise software, often as appliances.

3. Cloud computing models will put extreme pressure on everything from software license and maintenance pricing to customer support. Integration between different cloud computing models will be front and center. The cloud model is moving out of risk-averse pilots into serious deployments. Best practices will emerge as a major issue for customers that see the cloud as a way to boost innovation and the rate of change.

4. Managing highly distributed services in a compliant and predictable manner will take center stage. Service management and service level agreements across cloud and on-premises environments will become a prerequisite for buyers.

5. Security software will be redefined by the challenges of customer-facing initiatives and the need to more aggressively open the corporate environment to support a constantly morphing relationship with customers, partners, and suppliers.

6. The fear of lock-in will reach a fever pitch in 2011. SaaS vendors will increasingly add functionality to tighten their grip on customers. Traditional vendors will purchase more of the components needed to support the lifecycle needs of customers. How can everything be integrated from a business process and data integration standpoint and still allow for portability? Today, the answers are not there.

7. The definition of an application is changing. The traditional view that the packaged application is hermetically sealed is going away. More of the new packaged applications will be built on service orientation and best practices. These applications will be parameter-driven so that they can be changed in real time (see the sketch after this list). And yes, Service Oriented Architecture (SOA) didn't die after all.

8. Social networking grows up and becomes business social networking. These initiatives will be driven by line-of-business executives as a way to engage with customers and employees, gain insights into trends, and fix problems before they become widespread. Companies will leverage social networking to enhance agility and support new business models.

9. Managing endpoints will be one of the key technology drivers in 2011. Smartphones, sensors, and tablet computers are redefining what computing means, and they will drive the requirement for a new approach to role- and process-based security.

10. Data management and predictive analytics will explode based on both the need to understand traditional information and the need to manage data coming from new sales and communications channels.
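
To illustrate prediction 7, here is a minimal sketch, in Python with purely hypothetical names, of what a parameter-driven application means in practice: the business policy lives in data rather than code, so behavior can change in real time without redeployment.

```python
# Hypothetical sketch: a packaged application whose pricing policy is a
# set of externally managed parameters, not hard-coded logic.

discount_rules = {"threshold": 1000.0, "rate": 0.05}  # the policy, as data

def price_order(subtotal: float, rules: dict = discount_rules) -> float:
    """Apply whatever discount policy the parameters currently describe."""
    if subtotal >= rules["threshold"]:
        return subtotal * (1.0 - rules["rate"])
    return subtotal

print(price_order(1200.0))     # 1140.0 under the current parameters
discount_rules["rate"] = 0.10  # the business changes policy at run time
print(price_order(1200.0))     # 1080.0 -- no code change, no redeploy
```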

The bottom line is that 2011 will be the year when the seeds that have been planted over the last few years are ready to become the drivers of a new generation of innovation and business change. Put together everything from the flexibility of service orientation, business process management innovation, and the widespread impact of social and collaborative networks to the new delivery and deployment models of the cloud. Now apply tools to harness these environments, like service management, new security platforms, and analytics. From my view, innovative companies are grabbing the threads of technology and focusing on outcomes. 2011 is going to be an important transition year. The corporations that get this right and transform themselves so that they are ready to change on a dime can win — even if they are smaller than their competitors.

What will it take to achieve great quality of service in the cloud?

November 9, 2010

You know that a market is about to transition out of the early fantasy stage when IT architects begin talking about traditional IT requirements. Why do I bring this up? I had a fascinating conversation yesterday with a leading architect in charge of the cloud strategy for an important company that is typically on the bleeding edge of technology. Naturally, I am not allowed to name the company or the person. But let me just say that individuals and companies like this are the first to grapple with issues such as the need for a registry for web services or the complexity of creating business services that are both reusable and embody business best practices. They are the first companies to try out artificial intelligence to see if it can automate tasks that require complex reasoning.

These innovators tend to get blank stares from their cohorts in more traditional IT departments, who are grappling with mundane issues such as keeping systems running efficiently. Leading-edge companies have the luxury of pushing the bounds of what is possible. There is a tremendous amount to be learned from their experiments with technology. In fact, there is often more to be learned from their failures than from their successes, because they are pushing the boundary of what is possible with current technology.

So, what did I take away from my conversation? From my colleague's view, the cloud today is about "how many virtual machines you need, how big they are, and linking those VMs to storage." Not a very compelling picture, but it is his perception of the reality of the cloud today. His view of the future requirements is quite intriguing.

I took away six key issues that this advanced planner would like to see in the evolution of cloud computing:

One. Automation of asset placement is critical. Where you actually put capability matters. For example, there are certain workloads that should never leave the physical data center because of regulatory requirements. If an organization is dealing with huge amounts of data, it would not be efficient to scatter elements of that data across different cloud environments. And what about performance? What if a task needs to be completed in 10 seconds, or in 5 milliseconds? There are many decisions that need to be made based on corporate requirements. Should the placement of workloads be hand-coded programmatically? The answer is no. There should be an automated process, based on business rules, that determines the actual placement of cloud services.
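
Here is a minimal sketch of what such a rules-based placement process might look like. The workloads, targets, and rules are all hypothetical; the point is that business rules, not hand-written code per workload, decide where a service may run.

```python
# Hypothetical sketch of automated, rules-based workload placement.

WORKLOADS = {
    "payroll":   {"regulated": True,  "max_latency_ms": 10_000},
    "trading":   {"regulated": False, "max_latency_ms": 5},
    "reporting": {"regulated": False, "max_latency_ms": 10_000},
}

# Targets are tried in order of preference (cheapest first).
TARGETS = {
    "public-cloud":      {"offsite": True,  "latency_ms": 40},
    "private-cloud":     {"offsite": False, "latency_ms": 8},
    "onsite-datacenter": {"offsite": False, "latency_ms": 2},
}

def place(workload: str) -> str:
    """Return the first target that satisfies every business rule."""
    w = WORKLOADS[workload]
    for name, target in TARGETS.items():
        if w["regulated"] and target["offsite"]:
            continue  # regulated data never leaves the physical data center
        if target["latency_ms"] > w["max_latency_ms"]:
            continue  # the task has a hard completion deadline
        return name
    raise RuntimeError(f"no compliant placement for {workload}")

for name in WORKLOADS:
    print(name, "->", place(name))
# payroll -> private-cloud, trading -> onsite-datacenter,
# reporting -> public-cloud
```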

Two. Avoid concentration of risk. How do you actually place core assets onto hypervisors? If, for example, you have a highly valuable set of services that are critical to decision makers, you might want to ensure that they run within different hypervisors, based on automated management processes and rules.
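
A minimal sketch of such an anti-affinity rule follows; the service and hypervisor names are hypothetical. The idea is simply that an automated rule, not an administrator, keeps critical services from landing on the same hypervisor.

```python
# Hypothetical sketch: spread critical services across hypervisors so a
# single hypervisor failure cannot take all of them down.

from itertools import cycle

def spread(services: list, hypervisors: list) -> dict:
    """Round-robin services across hypervisors (hosts stay distinct as
    long as there are at least as many hypervisors as services)."""
    if len(hypervisors) < 2:
        raise ValueError("anti-affinity needs at least two hypervisors")
    hosts = cycle(hypervisors)
    return {service: next(hosts) for service in services}

print(spread(["pricing-engine", "risk-model", "exec-dashboard"],
             ["hv-a", "hv-b", "hv-c"]))
# {'pricing-engine': 'hv-a', 'risk-model': 'hv-b', 'exec-dashboard': 'hv-c'}
```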

Three. Quality of service needs a control fabric. If you are a customer of hybrid cloud computing services, you might need visibility into what tasks a management tool is actually performing. What does that tool actually touch in the cloud environment? What do the error messages mean, and what are the implications? Today many cloud services are black boxes; there is no way for the customer to really understand what is happening behind the scenes. If companies are deploying truly hybrid environments that support a mixed workload, this type of visibility into the tools that monitor and manage quality of service will be critical. From a quality-of-service perspective, some applications will require dedicated bandwidth to meet requirements; others will not need any special treatment.
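
As a small illustration of that last point, a control fabric could express the distinction as declared quality-of-service classes that the management layer must enforce. A hypothetical sketch:

```python
# Hypothetical sketch: quality-of-service classes declared as data so the
# control fabric knows which workloads need dedicated bandwidth.

QOS_CLASSES = {
    "guaranteed":  {"dedicated_bandwidth_mbps": 500, "preemptible": False},
    "best-effort": {"dedicated_bandwidth_mbps": 0,   "preemptible": True},
}

WORKLOAD_QOS = {"video-transcode": "guaranteed",
                "nightly-report": "best-effort"}

def qos_for(workload: str) -> dict:
    """Look up the treatment the control fabric must enforce."""
    return QOS_CLASSES[WORKLOAD_QOS[workload]]

print(qos_for("video-transcode"))  # needs 500 Mbps dedicated bandwidth
print(qos_for("nightly-report"))   # gets no special treatment
```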

Four. Cloud service providers building shared services need an architectural plan to control them as a unit of work. These services will be shared across departments as well as across customers. How do you connect these services? While it might seem simple at the 50,000-foot level, it is actually quite complex, because we are talking about linking a set of services together to build a coherent platform. Therefore, as with building any system, there is a requirement to model the "system of services," then deploy that model, and finally reconcile and tune the results.
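
The model-deploy-reconcile cycle is essentially a reconciliation loop. Here is a minimal sketch with a hypothetical model of three shared services: declare the desired "system of services," observe what is actually running, and compute the corrective actions.

```python
# Hypothetical sketch of "model, deploy, reconcile" for shared services.

desired = {"catalog": 3, "checkout": 2, "search": 2}  # the model
running = {"catalog": 3, "checkout": 1}               # observed reality

def reconcile(desired: dict, running: dict) -> list:
    """Return the actions needed to make reality match the model."""
    actions = []
    for service, want in desired.items():
        have = running.get(service, 0)
        if have < want:
            actions.append(f"start {want - have} x {service}")
        elif have > want:
            actions.append(f"stop {have - want} x {service}")
    return actions

print(reconcile(desired, running))
# ['start 1 x checkout', 'start 2 x search']
```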

Five. Standard APIs protect customers. Should the APIs for all cloud services be published and accessible? If companies are to have the freedom to move easily and efficiently between and among cloud services, then APIs need to be well understood. For example, a company may be using a vendor's cloud service and discover a tool that addresses a specific problem. What if that vendor doesn't support that tool? In essence, the customer is locked out of using it. This becomes a problem immediately for innovators. However, it is also an issue for traditional companies that begin to work with cloud computing services and over time realize that they need more service management and more oversight.

Six. Managing containers may be the key to service management in the cloud. A well-designed cloud service has to be service oriented. It needs to be placed in a container without dependencies, since customers will use services in different ways. Therefore, each service needs a set of parameter-driven configurators so that the rules of usage and management are clear. What version of which cloud service should be used under what circumstances? What if the service is designed to execute backups? Can that backup happen across the globe, or should it be done in proximity to the data assets? These management issues will become the most important issues for cloud providers in the future.
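
One way to picture such a container is as a self-describing service descriptor: version, usage rules, and data-locality constraints are declared as parameters rather than buried in code. A minimal sketch, with hypothetical fields:

```python
# Hypothetical sketch: a dependency-free service described by parameters.

from dataclasses import dataclass, field

@dataclass
class ServiceDescriptor:
    name: str
    version: str
    dependencies: list = field(default_factory=list)  # should stay empty
    data_locality: str = "same-region"  # e.g. backups stay near their data
    allowed_uses: list = field(default_factory=list)

backup_service = ServiceDescriptor(
    name="backup",
    version="2.1",
    data_locality="same-region",  # do not ship backups across the globe
    allowed_uses=["nightly", "on-demand"],
)

# A well-designed cloud service is usable standalone, in any context.
assert not backup_service.dependencies
print(backup_service)
```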

The best thing about talking to people like this architect is that it makes you think about issues that aren't part of today's cloud discussions. These are difficult issues to solve. However, many of them have been addressed for decades in other iterations of technology architecture. Yes, the cloud is a different delivery and deployment model for computing, but it will evolve as many other architectures have. Putting quality of service, service management, and configuration and policy rules at the forefront will help transform cloud computing into a mature and effective platform.

Eight things that changed since we wrote Cloud Computing for Dummies

October 8, 2010

I admit that I haven't written a blog post in more than three months — but I have a good reason. I just finished writing my latest book — not a Dummies book this time. It will be my first business book, based on almost three decades in the computer industry. Once I know the publication date I will tell you a lot more about it. But as I was finishing this book, I was thinking about my last book, Cloud Computing for Dummies, which was published almost two years ago. As this anniversary approaches, I thought it was appropriate to look back at what has changed. I could probably go on for quite a while about how little information was available at that point and how few CIOs were willing to talk about, or even consider, cloud computing as a strategy. But that's old news. I decided that it would be most interesting to focus on eight of the changes that I have seen in this fast-moving market over the past two years.

Change One: IT is now on board with cloud computing. Cloud computing has moved from a reaction to sluggish IT departments to a business strategy involving both business and technology leaders. A few years ago, business leaders were reading about Amazon and Google in business magazines. They knew little about what was behind the hype. They focused on the fact that these early cloud pioneers seemed to be efficient at making cloud capability available on demand: no paperwork and no waiting for the procurement department to process an order. Two years ago, IT leaders tried to pretend that cloud computing was a passing fad that would disappear. Now I am finding that IT is treating cloud computing as a centerpiece of its future strategy — even if it is only testing the waters.

Change Two: Enterprise computing vendors are all in with both private and public cloud offerings. Two years ago, most traditional IT vendors did not pay much attention to the cloud. Today, most hardware, software, and services vendors have jumped on the bandwagon. They all have cloud computing strategies. Most of these vendors are clearly focused on a private cloud strategy. However, many are beginning to offer specialized public cloud services with a focus on security and manageability. These vendors are melding all types of cloud services — public, private, and hybrid — into interesting and sometimes compelling offerings.

Change Three: Service orientation will make cloud computing successful. Service orientation was hot two years ago, and then the huge hype behind cloud computing led many pundits to proclaim that the Service Oriented Architecture was dead and gone. In fact, the cloud vendors that are succeeding are those building true business services, without dependencies, that can migrate between public, private, and hybrid clouds. That is the competitive advantage.

Change Four: System vendors are banking on integration. Does a cloud really need hardware? Only two years ago the dialog centered on the contention that clouds meant no hardware would be necessary. What a difference a few years can make. The emphasis, coming primarily from the major systems vendors, is that hardware indeed matters. These vendors are integrating cloud infrastructure services with their hardware.

Change Five: Cloud security takes center stage. Yes, cloud security was a huge topic two years ago, but the dialog is beginning to change. There are three conversations that I am hearing. First, cloud security is a huge issue that is holding back widespread adoption. Second, there are well-designed software and hardware offerings that can make cloud computing safe. Third, public clouds are just as secure as an internal data center, because these vendors have more security experts than any traditional data center does. In addition, a large number of venture-backed cloud security companies are entering the market with new and quite compelling value propositions.

Change Six: Cloud service level management is a primary customer concern. Two years ago, no one our team interviewed for Cloud Computing for Dummies connected service level management with cloud computing. Now that customers are seriously planning for widespread adoption of cloud computing, they are seriously examining the level of service they require from it. IT managers are reading the service level agreements from public cloud vendors and Software as a Service vendors carefully. They are looking beyond the service level for a single service and beginning to think about the overall service level across their own data centers as well as the other cloud services they intend to use.

Change Seven: IT cares most about service automation. No, automation in the data center is not new; it has been an important consideration for years. However, what is new is that IT management is looking at the cloud not just as a way to avoid the cost of purchasing hardware. They see automation of both routine functions and business processes as the primary benefit of cloud computing. In the long run, IT management intends to focus on automation and reduce hardware to interchangeable commodities.

Change Eight: Cloud computing moves to the front office. Two years ago, IT and business leaders saw cloud computing as a way to improve back-office efficiency. This is beginning to change. With the flexibility of cloud computing, management is now looking at the potential to quickly innovate business processes that touch partners and customers.

The lock-in risks of Software as a Service

May 3, 2010

I have been thinking a lot about Software as a Service environments and what they really mean for customers. I was talking to the CIO of a medium-sized company the other day. His company is a customer of a major SaaS vendor (he didn't want me to name the company). In the beginning, things were quite good. The application is relatively easy to navigate, and salespeople were satisfied with the functionality. However, there was a problem. The use of this SaaS application was getting more complicated than the CIO had anticipated. First, the company discovered that it was locked into a three-year contract to support 450 salespeople. In addition, over the first several years of use, the company had hired a consultant to customize the workflow within the application.

So, what was the problem?  The CIO was increasingly alarmed about three issues:

  • The lack of elasticity. If the company suddenly had a bad quarter and wanted to reduce the number of licenses it supported, it would be out of luck. One of the key promises of cloud computing and SaaS just went out the window.
  • High costs of the services model. It occurred to the CIO that the company was paying a lot more to support the SaaS application than it would have cost to buy an on-premises CRM application. While there were many benefits to the reduced hardware and support requirements, the CIO was starting to wonder if the costs were justified. Did the company really do the analysis to determine the long-term cost/benefit of the cloud? How would he explain to the CFO the long-term ramifications of the budget increases he expects? It is not a conversation that he is looking forward to having.
  • No exit strategy. Given the amount of customization that the company has invested in, it is becoming increasingly clear that there is no easy answer — and no free lunch. One of the reasons the company decided to implement SaaS was the assumption that it would be possible to migrate from one SaaS application to another. However, while it might be possible to migrate basic data out of a SaaS application, it is almost impossible to migrate the process information. Shouldn't there be a different approach to integration in the cloud than on premises?

The bottom line is that Software as a Service has many benefits: more rapid deployment, initial savings in hardware and support services, and ease of access for a highly distributed workforce. However, there are complications that are important to take into account. Many SaaS vendors, like their counterparts in the on-premises world, are looking for long-term agreements and lock-in with customers. These vendors expect, and even encourage, customers to customize their implementations based on their specific business processes. There is nothing wrong with this — to make applications like CRM and HR productive, they need to reflect a company's own methods of doing business. However, companies need to understand what they are getting into. It is easy to get caught up in the hype of the magic land of SaaS. As more and more SaaS companies are funded by venture capitalists, it is clear that they will not all survive. What happens to your customized processes and data if the company goes out of business?

It is becoming increasingly clear to me that we need a different approach to integration in the cloud than on premises, one that leverages looser coupling and configuration rather than programmatic integration. We have the opportunity to rethink integration altogether, even for on-premises applications.
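
To show what configuration rather than programmatic integration could mean in practice, here is a minimal sketch with hypothetical field names. The vendor-specific knowledge is reduced to a declarative mapping, so switching SaaS vendors means editing the map, not rewriting integration code.

```python
# Hypothetical sketch of configuration-driven integration: the mapping
# between a SaaS CRM's fields and an internal system is data, not code.

FIELD_MAP = {                # one declarative map per SaaS vendor
    "AccountName": "customer_name",
    "AnnualRevenue": "revenue",
    "OwnerEmail": "account_manager",
}

def translate(saas_record: dict, field_map: dict = FIELD_MAP) -> dict:
    """Apply the configured mapping; no vendor-specific logic in code."""
    return {internal: saas_record[external]
            for external, internal in field_map.items()
            if external in saas_record}

record = {"AccountName": "Acme", "AnnualRevenue": 5_000_000,
          "OwnerEmail": "jd@example.com"}
print(translate(record))
# {'customer_name': 'Acme', 'revenue': 5000000, 'account_manager': ...}
```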

There is no simple answer to this quandary. Companies looking to deploy a SaaS application need to do their homework before barreling in. Understand the risks and rewards. Can you separate the business processes from the basic SaaS application? Do you really want to lock yourself into a vendor you don't know well? It may not be so easy to free your company, your processes, or your data.

Oracle + Sun: Five questions to ponder

January 27, 2010

I spent a couple of hours today listening to Oracle talk about the long-awaited integration with Sun Microsystems. A real end of an era and the beginning of a new one. What does this mean for Oracle? Whatever you might think about Oracle, you have to give the company credit for successfully integrating the 60 companies it has purchased over the past few years. Having watched hundreds and perhaps thousands of acquisitions over the last few decades, I can say that integration is hard. There are overlapping technologies, teams, cultures, and egos. Oracle has successfully managed to leverage the IP from its acquisitions to support its business goals. For example, it has kept packaged software customers happy by improving the software. PeopleSoft customers, for example, were able to continue using the software they had become dependent on in largely the same way as before the acquisition. In some cases, the quality of the software actually improved dramatically. The path has been more complicated with the various middleware and infrastructure platforms the company has acquired over the years, because of overlapping functionality.

The acquisition of Sun Microsystems is the biggest game changer for Oracle since the acquisition of PeopleSoft. There is little doubt that Sun has significant software and hardware IP that will be very important in defining Oracle in the 21st century. But I don’t expect this to be a simple journey. Here are the five key issues that I think will be tricky for Oracle to navigate. Obviously, this is not a complete list but it is a start.

Issue One: Can Oracle recreate the mainframe world? The mainframe is dead — long live the mainframe. Oracle has a new fondness for the mainframe and what that model could represent. So, if you combine Sun's hardware, networking layer, storage, security, packaged applications, and middleware into a single package, do you get to own the total share of a customer's wallet? That is the idea. Oracle management has determined that IBM had the right idea in the 1960s — everything was nicely integrated, and the customer never had to worry about the pieces working together.
Issue Two: Can you package everything together and still be an open platform? To its credit, Oracle has built its software on standards such as Unix/Linux, XML, and Java. So, can you have it both ways? Can you claim openness when the platform itself is hermetically sealed? I think it may be a stretch. To accomplish this goal, Oracle would have to have well-defined and published APIs. It would have to be able to certify that using those APIs won't break the integrated platform. Not an easy task.
Issue Three: Can you manage a complex computing environment? Computing environments get complicated because there are so many moving parts. Configurations change; software gets patched; new operating system versions are introduced; emerging technology enters and disrupts the well-established environment. Oracle would like to automate the management of this environment for customers. It is an appealing idea, since configuration problems, missing links, and poor testing are responsible for many of the outages in computing environments today. Will customers be willing to have this type of integrated environment controlled and managed by a single vendor? Some customers will be happy to turn over these headaches. Others may have too much legacy or want to work with a variety of vendors. This is not a new dilemma for customers. Customers have long had to weigh the benefits of a single source of technology against the risks of being locked in.
Issue Four: Can you teach an old dog new tricks? Can Oracle really be a hardware vendor? Clearly, Sun continues to be a leader in hardware despite its diminished fortunes. But as anyone who has ventured into the hardware world knows, hardware is a tough, brutal game. In fact, it is the inverse of software. Software takes many cycles to reach maturity; it needs to be tweaked and finessed. However, once it is in place it has a long, long life. As the old saying goes, old software never dies. The same cannot be said for hardware. Hardware has a much straighter line to maturity. It is developed, designed, and delivered to the market. Sometimes it leapfrogs the competition enough that it has a long and very profitable life. Other times, it hits the market at the end of a cycle, when a new, more innovative player enters. The culmination of all the work and effort can be cut short as something new comes along at the right place and the right time. It is often a lot easier to get rid of hardware than software. The computer industry is littered with the corpses of failed hardware platforms that started with great fanfare and then faded away quickly. Will Oracle be successful with hardware? It will depend on how good the company really is at transforming its DNA.
Issue Five: Are customers ready to embrace Oracle's brave new world? Oracle's strategy is a good one — if you are Oracle. But what about for customers? And what about for partners? Customers need to understand the long-term implications and tradeoffs of buying into Oracle's integrated approach to its platform. It will clearly mean fewer moving parts to worry about. It will mean one phone call and no finger pointing. However, customers have to understand the type of leverage that a single company will have in terms of contract terms and conditions. And what about partners? How does an independent software vendor or a channel partner participate within the new Oracle? Is there room? What type of testing and preparation will be required to play?

Can we free process and data?

October 27, 2009

I am still at IBM's Information on Demand conference here in Las Vegas (not my favorite place, but what can you do). Listening to a lot of discussions around strategy and products, I started thinking about one of the key problems customers are facing around business process and managing increasingly complex data. What companies really want is the flexibility and freedom to leverage their critical data across applications and situations. They also want to be able to change processes based on changing business models.

This is the core issue that companies will face in the coming decade, and it will be the difference between success and failure for many businesses. Here's an example of what I mean. Take a retailer in a competitive market. Let's say our retailer has five applications: accounting, human resources, supply chain management, a customer support system, and a customer-facing e-commerce system. Each of these systems has an underlying database, and each one manages its data according to the business process that embodies the best practices that are the value of these packages. Even if each of the packages is the best in its market, there is a core problem: each solution is a silo. Processes that move between these systems tend to fall through the cracks. This is why we, as customers of such retailers, are often frustrated when we call about a product that wasn't delivered, doesn't work, or requires a change, only to discover that one department has no ability to know what is happening in another area. For most companies, the dream of a single view of the customer is aspirational but not practical right now. In reality, it is hard for companies to mess with their existing applications. These solutions are customized for their business environment; they were expensive and complicated to implement — and change is hard. In fact, companies only change when it is more painful to stay with the status quo than to change. In a retail scenario, companies change their approach to process and data management when they must change their business model because the current processes will lead to failure. Retailers are currently faced with emerging approaches to selling and managing customer relationships that challenge traditional selling models. Look at what companies like Amazon.com and Netflix have done to their slower-moving competitors.

A number of customers I have spoken with understand this very well. They are looking at ways to separate their core data assets from the underlying applications. Many of these customers are at the forefront of implementing a service oriented architecture (SOA) approach to managing their software assets. They increasingly understand that the secret to their future success is the knowledge they have about their customers and their needs and future requirements, within their own set of offerings and those from partners. These companies are making it a priority to keep this data independent, secure, and accurate. These business leaders are preparing for inevitable change. At the same time, I have seen these customers creating SOA business services that are, in essence, codified business processes. For example, a business service could be a process that checks the credit of a potential partner, or one that links a new customer request for service to the set of applications that confirms the request, orders the part, and notifies a partner.
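
Here is a minimal sketch of what such a codified business service might look like, with stub functions standing in for the real packaged applications (all names are hypothetical). One service call fans out across the silos, which is exactly what no individual package can do on its own.

```python
# Hypothetical sketch of an SOA business service that spans the silos.

def order_status(order_id: str) -> dict:
    """A codified business process: one call, a single view of the order."""
    return {
        "customer": crm_lookup(order_id),          # customer-facing system
        "shipment": supply_chain_track(order_id),  # supply chain system
        "billing":  accounting_invoice(order_id),  # accounting system
    }

# Stubs standing in for the packaged applications' interfaces:
def crm_lookup(order_id):         return {"status": "complaint open"}
def supply_chain_track(order_id): return {"status": "delivered"}
def accounting_invoice(order_id): return {"status": "paid"}

print(order_status("SO-1042"))
```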

So, here is the problem. These customers are implementing this new model of abstracting data and process on specific projects or business initiatives. These projects have gotten the attention of the C-team because of their impact on revenue. But, in reality, the real breakthrough will come when the separation of data and process is the rule, not the exception.

This is going to be the overriding challenge of the next decade because it is so hard. There is inertia: it is difficult to move away from the predictable packaged applications that companies have implemented for more than 30 years. But I suggest it is inevitable that companies will come to understand that they must separate process from data if they are going to remain agile and change processes when they anticipate a competitive threat. These same companies will understand that their data is too important to leave locked inside an application tightly linked to a process.

I don't have the answer as to what the tipping point will be for this to become a widespread strategy. I think that the cloud will become a forcing function that accelerates this trend. I would love to start a dialog. Send me your thoughts and I promise to post them.

What’s the future of the virtual conference?

June 11, 2009

I am in the middle of attending Microsoft's Server Technology Business industry analyst event. I have attended this for many years, but this year Microsoft decided that it would be a virtual event. Sessions were streamed over the web, to be watched whenever. One-on-one sessions were scheduled with executives and customers in 30-minute increments. There was one live session (slides over LiveMeeting). So, what did I think? I had very mixed feelings. I was happy to forgo a plane trip. It is much nicer to sit in my own office and sleep in my own bed. However, I don't think that the virtual conference itself is ready for prime time. Here are the things that didn't work for me.

There is no substitute for personal interaction with people. When I attend an industry analyst meeting I pay attention to more than the words the speaker is saying. I read the body language. I want to understand how the management team relates to each other. I want to have hallway and lunch time informal conversations. I also want to be able to talk to invited customers informally.

Streaming video presentations are a wonderful idea, but the vendor providing the videos needs to make sure they work across many different networks and many different systems. I happen to use a Mac, which wasn't the system of choice for the Microsoft hosts. Even those using Windows and Internet Explorer had trouble with the videos stopping in mid-sentence. Even if the vendor tests the videos internally, it cannot begin to guess every participant's environment.

Will a typical analyst have the patience to watch five hours of pre-recorded videos? Not likely. I might listen to a video that I am particularly interested in (on cloud computing or service oriented architectures, for example), but I will not listen to all the presentations. There are simply too many distractions and too many things to do. That is the reality of my life as a researcher, analyst, and writer. The reality is that unless you present compelling presentations with information that draws me in, you will not capture my attention for long periods of time. The context of this type of meeting hurts the virtual conference. It is something like watching television: if you start to watch a program and it gets boring, you start to channel surf. If you expect the audience to watch from beginning to end, you have to grab their attention.

The reality is you can get away with a lot more in person than you can in a virtual meeting. In an in-person meeting there is enough going on, and enough possibility of interaction, that it works. In a virtual meeting you have to pay much more attention to the details. It is show business. The virtual meeting has to be orchestrated and managed so that the seams do not show. Microsoft had a good idea when it planned the meeting: it actually sent each of us a LiveCam so that speakers and audience members could see each other. It was never used.

I think that we will get to the point where we can have meaningful virtual conferences — someday. But they have to have the following characteristics before I will be enthusiastic:

1. Virtual conferences need really good planning and execution. It cannot simply be a disconnected voice with some slides on a shared screen. That is called a conference call.

2. Streaming or live video is wonderful but it needs to have the technology foundation so that it will work no matter what the customer/participant’s environment happens to be.

3. If virtual conferences are to work, they have to be conferences. I don't think that we have good models for executing virtual conferences yet. They need to be electric, informative, and interactive. Right now the virtual meeting is not a true model. It is simply old execution applied to a new idea.

I think that we will see the emergence of a true virtual conferencing model. I can’t tell you that I can visualize a virtual conference that I would enjoy. Like many analysts, I am not good at passively sitting and watching. I need to be engaged and part of the action. I am not sure how you do this virtually. But I am ready to be surprised and delighted since it would be great not to get on an airplane.

The end of maintenance?

April 29, 2009

I admit that I didn't read the whole article, but then I really didn't have to. I knew what Marc Benioff, CEO of Salesforce.com, was trying to start. I remember, many years ago, seeing Marc at an industry conference where he proudly announced the end of software. A nice marketing approach that definitely got everyone's attention. Of course, at that time Marc was working on a little Software as a Service environment that became Salesforce.com. The rest is history, as we like to say. Now, Marc is on a new mission: to attack maintenance fees. While it is clear that Marc is trying to tweak the traditional software market, I think he is bringing up an interesting subject.

Software maintenance is not a simple topic, and I am sure I could spend hundreds of pages discussing it because there are so many angles. Maintenance fees began as a way of ensuring that software companies had the revenue to fund development of new functionality in their products. It is, of course, possible to buy software, pay once, and never pay the vendor anything else. Those situations exist. Ironically, the better designed the software, the less likely it is that customers will need upgrades. But clearly that circumstance is rare.

There are major changes taking place in the economics of software. Customers are increasingly unhappy with paying huge yearly maintenance fees to software providers. Some of these fees are clearly justified. Software is complex and vendors are often required to continue to upgrade, add new features, and the like. There are other situations where customers are perfectly happy with software as is and only want to fix critical problems and don’t want to pay what they see as exorbitant maintenance fees.

Now, getting back to Marc Benioff's comments about the end of maintenance. Here is a link to Vinnie Mirchandani's recent blog on the topic. Marc is making a very important observation. As the world slowly moves to cloud computing for economic reasons, there will be a major impact on how companies pay for software. Salesforce.com has indeed proven that companies are willing to trust their sales and customer data to a Software as a Service vendor. These customers are also willing to pay per-user or per-company yearly fees to rent software. Does this mean that they are no longer paying maintenance fees? My answer would be no. It is all about accounting and economics. Clearly, Salesforce.com spends a lot of money adding functionality to its application, and someone pays for that. So, what part of that monthly or yearly per-user fee is allocated to maintaining the application? Who knows? And I am sure it is not one of those statistics that Salesforce.com, or any other Software as a Service or Platform as a Service vendor, is going to publish. Why? Because these companies don't think of themselves as traditional software companies. They don't expect that anyone will ever own a copy of their code.

The bottom line is that software will never be good enough to never need maintenance. Software vendors — whether they sell perpetual licenses or Software as a Service — will continue to charge for maintenance. The reality is that the concrete idea of the maintenance fee will evolve over time. Customers will pay it, but they probably won't see it on their bills. Nevertheless, the impact on traditional software companies will be dramatic over time, and a lot of these companies will have to rethink their strategies. Many software companies have become increasingly dependent on maintenance revenue to keep revenue growing. I think Marc Benioff has started a conversation that will spark a debate with wide-ranging implications for the future of not only maintenance but of what we think of as software.