Posts Tagged ‘Google’

Why are we about to move from cloud computing to industrial computing?

April 5, 2010

I spent the other week at a new conference called Cloud Connect. Being able to spend four days immersed in an industry discussion about cloud computing really allows you to step back and think about where we are with this emerging industry. While it would be possible to write endlessly about all the meetings and conversations I had, you probably wouldn’t have enough time to read it all. So, I’ll spare you and give you the top four things I learned at Cloud Connect. I recommend that you also take a look at Brenda Michelson’s blogs from the event for a lot more detail, and I would also refer you to Joe McKendrick’s blog from the event.

1. Customers are still figuring out what cloud computing is all about. For those of us who spend way too many hours on the topic of cloud computing, it is easy to assume that everyone knows what it is all about. The reality is that most customers do not understand what cloud computing is. Marcia Kaufman and I conducted a full-day workshop called Introduction to Cloud. The more than 60 people who dedicated a full day to a discussion of all aspects of the cloud made it clear to us that they are still figuring out the difference between infrastructure as a service and platform as a service. They are still trying to understand the issues around security and what cloud computing will mean to their jobs.

2. There is a parallel universe out there among people who have been living and breathing cloud computing for the last few years. In their view the questions are very different. The discussion among the well-connected focused on a few key issues: Is there such a thing as a private cloud? Is Software as a Service really cloud computing? Will we ever have a true segmentation of the cloud computing market?

3. From the vantage point of the market, it is becoming clear that we are about to enter one of those transitional times in this important evolution of computing. Cloud Connect reminded me a lot of the early days of the commercial Unix market. When I attended my first Unix conference in the mid-1980s it was a different experience than going to a conference like Comdex. It was small. I could go and have a conversation with every vendor exhibiting. I had great meetings with true innovators. There was a spirit of change and innovation in the halls. I had the same feeling about the Cloud Connect conference. There were a small number of exhibitors. The key innovators driving the future of the market were there to discuss and debate the future. There was electricity in the air.

4. I also anticipate a change in the direction of cloud computing now that it is about to pass that tipping point. I am a student of history, so I look for patterns. When Unix reached the stage where the giants woke up and started seeing huge opportunity, they jumped in with a vengeance. The great but small Unix technology companies were either acquired, got big, or went out of business. I think that we are on the cusp of the same situation with cloud computing. IBM, HP, Microsoft, and a vast array of others have seen the future, and it is the cloud. This will mean that emerging companies with great technology will have to be both really lucky and really smart.

The bottom line is that Cloud Connect represented a seminal moment in cloud computing. There is plenty of fear among customers who are trying to figure out what it will mean to their own data centers. What will the organizational structure of the future look like? They don’t know, and they are afraid. The innovative companies are looking at the coming armies of large vendors and wondering how to keep their differentiation so that they can become the next Google rather than the next company whose name we can’t remember.

There was much debate about two important issues: cloud standards and private clouds. Are these issues related? Of course. Standards always become an issue when there is a power grab in a market. If Google, Microsoft, Amazon, IBM, or Oracle is able to set the terms for cloud computing, market control can shift overnight. Will standard interfaces be able to save the customer?

And how about private clouds? Are they real? My observation and contention is that yes, private clouds are real. If you deploy the same automation, provisioning software, and workload management inside a company rather than inside a public cloud, it is still a cloud. Ironically, the debate over the private cloud is also about power and position in the market, not about ideology. If a company like Google or Amazon (or whichever company is your favorite flavor) is able to debunk the private cloud, guess who gets all the money? If you are a large company where IT and the data center are core to how you conduct business, you can and should have a private cloud that you control and manage.

So, after taking a step back, I believe that we are witnessing the next generation of computing: the industrialization of computing. It might not be as much fun as the Wild West we are in the midst of right now, but it is coming and should be here before we realize it has happened.

Why hardware still matters, at least for a couple of years

February 9, 2010

It is easy to assume that the excitement around cloud computing would put a damper on the hardware market. But I have news for you: I am predicting that over the next few years hardware will be front and center. Why would I make such a wild prediction? Here are my three reasons.

1. Hardware is front and center in almost all aspects of the computer industry. It is no wonder that Oracle wants to become a hardware company. Hardware is tangible. Its revenue hits the bottom line right away. Hardware can envelop software and keep customers pinned down for many, many years. New-generation platforms in the form of hardware appliances are a convenient delivery platform that helps the sales cycle. A hardware platform completes the equation and allows Oracle to position itself as a fully integrated computing company. Likewise, IBM and HP are focused on filling their war chests with strong hardware platforms. If you believe that customers want to deal with one large brand, or two, then the winners want to control the entire computing ecosystem.

2. The cloud looms. Companies like Amazon.com and Google do not buy hardware from the big-iron providers and never will. For economic reasons, these companies go directly to component providers and purchase custom-designed chips, boards, and the like. This approach means that, for a very low price, these cloud providers can reduce their power consumption by making sure that the components are optimized for massively scaled clouds. These cloud vendors are focused on undercutting the opportunity and power of the big systems providers. Therefore, cloud providers care a lot about hardware: it is through optimization of the hardware that they can threaten the power equilibrium in the computer market.

3. The clash between cloud and on-premises environments. It is clear that the computer marketplace is at a transition point. The cloud vendors are betting that by optimizing everything they can drive costs so low that they win. The large systems vendors are betting that their sophisticated systems combining hardware, software, and services will win because of their ability to better protect the integrity of the customer’s business. These vendors will all provide their own versions of the public and private cloud to ensure that they maintain power.

So, in my view there will be an incredible focus on hardware over the next two years. This will actually be good for customers because the level of sophistication and the cost/performance metrics will be impressive. But this hardware renaissance will not last. In the long run, hardware will be commoditized. The end game will be interesting because of the cloud. It will not be a zero-sum game. No, the data center doesn’t go away. But the difference is that purpose-built hardware will be optimized for workloads to support the massively scaled environments that will be the heart of the future of computing. And then it will be all about the software, the data, and the integration.

The DNA of the Cloud Power Partnerships

January 15, 2010

I have been thinking a lot about the new alliances forming around cloud computing over the past couple of months. The most important of these moves are the EMC, Cisco, and VMware alliance; HP and Microsoft’s announced collaboration; and, of course, Oracle’s planned acquisition of Sun. Now, let’s add IBM’s cloud strategy into the mix, which has a very different complexion from its competitors’. And, of course, my discussion of the cloud power struggle wouldn’t be complete without adding in the insurgents: Google and Amazon. While it is tempting to portray this power grab as something brand new, it isn’t. It is a replay of well-worn patterns that we have seen in the computer industry for the past several decades. Yes, I am old enough to have been around for all of these power shifts. So, I’d like to point out what the DNA of this power struggle looks like for the cloud and how we might see history repeating itself in the coming year. Here is a sample of how high-profile partnerships have fared over the past few decades. While the past can never accurately predict the future, it does provide some interesting insights.

Partner realignment happens when the stakes change. There was a time when Cisco was a very, very close partner with HP; in fact, HP got out of the customer service software market to collaborate with Cisco. That was back in 1997.

Here are the first couple of sentences from the press release:

SAN JOSE and PALO ALTO, Calif., Jan. 15, 1997 — Hewlett-Packard Company and Cisco Systems Inc. today announced an alliance to jointly develop Internet-ready networked-computing solutions to maximize the benefits of combining networking and computing. HP and Cisco will expand or begin collaboration in four areas: technology development, product integration, professional services and customer service and support.

If you are interested, here is a link to the full press release. What’s my point? These types of partnerships are in both HP’s and Cisco’s DNA. Both companies have made significant and broad-reaching partnerships. For example, back in 2004, IBM and Cisco created a broad partnership focused on the data center. Here’s an excerpt from a CRN article:

From the April 29, 2004 issue of CRN: Cisco Systems (NSDQ:CSCO) and IBM (NYSE:IBM) on Thursday expanded their long-standing strategic alliance to take aim at the data center market. Solution providers said the new integrated data center solutions, which include a Cisco Gigabit Ethernet Layer 2 switch module for IBM’s eServer Blade Center, will help speed deployment times and ease management of on-demand technology environments.
“This is a big win for IBM,” said Chris Swahn, president of sales at Amherst Technologies, a solution provider in Merrimack, N.H.
The partnership propels IBM past rival Hewlett-Packard, which has not been as quick to integrate its own ProCurve network equipment into its autonomic computing strategy, Swahn said.
Cisco and IBM said they are bringing together their server, storage, networking and management products to provide an integrated data center automation platform.

Here is a link to the rest of the article.

HP itself has had a long history of very interesting partnerships. One of the most relevant is HP’s ill-fated partnership with BEA in the late 1990s. At the time, HP invested $100 million in BEA to further the development of software to support HP’s software infrastructure and platform strategy.

HP Gives BEA $100m for Joint TP Development
Published: 08-April-1999
By Computergram

Hewlett-Packard Co and BEA Systems Inc yesterday said they plan to develop new transaction processing software as well as integrate a raft of HP software with BEA’s WebLogic application server, OLTP and e-commerce software. In giving the nod to WebLogic as its choice of application server, HP stopped far short of an outright acquisition of the recently-troubled middleware company, a piece of Wall Street tittle tattle which has been doing the round for several weeks now. HP has agreed to put BEA products through all of its distribution channels and is committing $100m for integration and joint development.

Here’s a link to an article about the deal.

Oracle probably has more partnerships and entanglements with more companies than anyone else. HP has a longstanding partnership with Oracle on the data management front: HP partnered closely with Oracle and optimized its hardware for the Oracle database, and today Oracle and HP have more than 100,000 joint customers. Likewise, Oracle has a strong partnership with IBM, especially around its solutions business; IBM Global Services operates a huge consulting practice based on implementing and running Oracle’s solutions. Not to be outdone, EMC and Oracle have about 70,000 joint customers. Oracle supports EMC’s storage solutions for Oracle’s portfolio, while EMC supports Oracle’s solutions portfolio.

Microsoft, like Oracle, has entanglements with most of the market leaders. Microsoft has partnered very closely with HP for the last couple of decades, both on the PC front and on the software front. Clearly, the partnership between HP and Microsoft has evolved over many years, so this latest partnership is a continuation of a long-standing relationship. Microsoft also has long-standing relationships with EMC, Sun, and Oracle, to name a few.

And what about Amazon and Google? Because both companies were early innovators in cloud computing, they were able to gain credibility in a market that had not yet emerged as a center of power. Therefore, both companies were well positioned to create partnerships with every established vendor that needed to do something with the cloud. Every company from IBM to Oracle to EMC and Microsoft, to name but a few, established partnerships with these companies. Amazon and Google were small, convenient, and non-threatening. But as the power of both companies continues to grow, so will their ability to partner in the traditional way. I am reminded of the way IBM partnered with two small companies, Intel and Microsoft, when it needed a processor and an operating system to help bring the IBM PC to market in the early 1980s.

The bottom line is that cloud computing is becoming more than a passing fad; it is the future of how computing will change in the coming decades. Because of this reality, partnerships are changing and will continue to change. So, I suspect that the pronouncements of strategic, critical, and sustainable partnerships may or may not be worth the paper or compute cycles that created them. But the reality is that the power struggle for cloud dominance is on. It will not leave anything untouched. It will envelop hardware, software, networking, and services. No one can predict exactly what will happen, but the way these companies have acted in the past and act in the present gives us clues to a chaotic yet strangely predictable future.

Predictions for 2010: clouds, mergers, social networks and analytics

December 15, 2009

Yes, it is predictions time. Let me start by saying that no market change happens in a single year. Therefore, what is important is to look at the nuance of a market or a technology change in the context of its evolution. So, it is in this spirit that I will make a few predictions. I’ve decided to just list my top six predictions (I don’t like odd numbers). Next week I will add another five or six predictions.

  1. Cloud computing will move out of the fear, uncertainty, and doubt phase into the reality phase for many customers. This means that large corporations will begin to move segments of their infrastructure and applications to the cloud. It will be a slow but steady movement. The biggest impact on the market is that customers will begin putting pressure on vendors to guarantee predictability, reliability, and portability.
  2. Service management will become mainstream. Over the past five years the focus of service management has been on ITIL (Information Technology Infrastructure Library) processes and certification. There is a subtle change happening as corporations start to take a more holistic view of managing everything that has a sensor, an actuator, or a computer interface. Cloud computing will have a major impact on the growing importance of service management.
  3. Cloud service providers will begin to drop their prices dramatically as competition intensifies. This will be one of the primary drivers of growth in the use of cloud services. It will put a lot of pressure on smaller niche cloud providers as the larger companies try to gain control of this emerging market.
  4. It is not a stretch to state that the pace of technology acquisitions will accelerate in 2010.  I expect that HP, IBM, Cisco, Oracle, Microsoft, Google, and CA will be extremely active. While it would be foolhardy to pick a single area, I’ll go out on a limb and suggest that security, data center management, service management, and information management will be the focus of many of the acquisitions.
  5. Social networking will become much more mainstream than it was in 2009. Marketers will finally realize that blatant sales pitches on Twitter or Facebook just won’t cut it. We will begin to see marketers learn how to integrate social networking into the fabric of marketing programs. As this happens, there will be hundreds of new start-ups focused on analyzing the effectiveness of these marketing efforts.
  6. Information management is at the cusp of a major change. While the individual database remains important, the issue for customers is the need to manage information holistically so that they can anticipate change. As markets grow increasingly complex and competitive, the hottest products in 2010 will be those that help companies anticipate what will happen next. So expect anything with the term “predictive analytics” to be hot, hot, hot.

Predicting the future of computing by understanding the past

December 1, 2009

Now that Thanksgiving is over I am ready to prepare my predictions for 2010. This year, I decided to start by looking backwards. I first entered the computer industry in the late 1970s, when mainframes roamed the earth and timesharing was king. Clearly, a lot has changed. But what I was thinking about was the assumptions people had about the future of computing at that time and over the next several decades. I thought it would be instructive to mention a few interesting assumptions that I heard over the years. So, in preparation for my predictions in a couple of weeks, here are a few noteworthy predictions from past eras:

1. Late 1970s – The mainframe will always be the prevalent computing platform. The minicomputer is a toy.

2. Early 1980s – The PC will never be successful. It is for hobbyists. Who would ever want a personal computer in their home? And if they got one, what would they ever do with it — keep track of recipes?

3. Mid-1980s – The minicomputer will prevail. The personal computer and network-based servers are just toys.

4. Mid-1980s – The leaders of the computer industry (IBM, Digital Equipment Corporation, and Wang Laboratories) will prevail.

5. Early 1990s – The Internet has no real future as a computing platform. It is unreliable and too hard to use. How could it possibly scale enough to support millions of customers?

6. Early 1990s – Electronic Commerce is a pipe dream. It is too complicated to work in the real world.

7. Mid-1990s – If you give away software to gain “eyeballs” (the popular term in the era) and market share you will fail.

I could mention hundreds of other assumptions that I have come across that defied the conventional wisdom of the day. The reality is that these types of proof points are not without nuance. For example, the mainframe remains an important platform today because of its ability to process high-volume transactions and for its reliability and predictability. However, it is no longer the primary platform for computing. The minicomputer still exists but has morphed into more flexible server-based appliances. The PC would never have gotten off the ground without the pioneering work done by Dan Bricklin and Bob Frankston, who created the first PC-based spreadsheet. Also, if the mainframe and minicomputer vendors had adopted a flexible computing model, corporations would never have brought millions of unmanageable PCs into their departments. Of the three computing giants of the late 80s, only IBM is still standing. Digital Equipment was swallowed by Compaq (itself later absorbed by HP) and Wang was bought by Getronics. The lesson? Leaders come and go. Only the humble or paranoid survive. Who could have predicted the emergence of Google or Amazon.com? In the early days of online commerce it was unclear whether it would really work. How could a vendor possibly construct a system that could transmit transactions between partners and customers across the globe? It took time and lots of failures before it became the norm.

My final observation is actually the most complicated. In the mid-1990s, during the dotcom era, I worked with many companies that thought they could give away their software for a few dollars, gain a huge installed base, and make money by monetizing those customers. I admit that I was skeptical. I would ask these companies: how can you make money and sustain your company? If you sell a few million copies of your software, revenue will still be under $20 million, before expenses, which would be huge. The reality is that none of these companies are around today. They simply couldn’t survive because there was no viable revenue model. Fast forward more than a decade: Google was built on top of the failures of these pioneers, having understood that you really could use an installed base to build something significant.

So, as I prepare my predictions for 2010, I will try to keep in mind the assumptions, conventional wisdom, successes, and failures of earlier times.

Tectonic shifts: HP Plus 3Com versus Cisco Plus EMC

November 18, 2009

Just when it looked clear where the markets were lining up around data center automation and cloud computing, things change. I guess that is what makes this industry so very interesting. The proposed acquisition of 3Com by HP is a direct challenge to Cisco’s network management franchise. However, the implications of this move go further than meets the eye. It also puts HP on a direct collision course with EMC and its Cisco partnership. And to make things even more interesting, it puts these two camps in a competitive three-way race against IBM and its cloud/data center automation strategy. And of course, it doesn’t stop there. A myriad of companies like Google and Amazon want a larger share of the enterprise market for cloud services. Companies like Unisys and CSC that have focused on outsourced secure data centers are getting into the act.

I don’t think that we will see a single winner, no matter what any one of these companies will tell you. The winners in this market shift will be those companies that can build a compelling platform and a compelling value proposition for a partner ecosystem. The truth about the cloud is that it is not simply a network or a data center. It is a new way of providing services of all sorts that can support changing customer workloads in a secure and predictable manner.

In light of this, what does this say about HP’s plans to acquire 3Com? If we assume that the network infrastructure is a key component of an emerging cloud and data center strategy, HP is taking a calculated risk in acquiring more assets in this market. HP has found that its ProCurve networking division has begun gaining traction. ProCurve, HP’s networking division, includes network switches, wireless access points, WAN routers, and access control servers and software. ProCurve competes directly with Cisco in the networking switch market. When HP had a tight partnership with Cisco, the company de-emphasized networking. However, once Cisco started to move into the server market, the handcuffs came off. The 3Com acquisition takes the competitive play to a new level. 3Com has a variety of good pieces of technology that HP could leverage within ProCurve. Even more significantly, it picks up a strong security product called TippingPoint, a 3Com acquisition. TippingPoint fills a critical hole in HP’s security offering with network security products, including intrusion prevention and a product that inspects network packets. The former 3Com subsidiary has also established a database of security threats based on a network of external researchers.

But I think that one of the most important reasons HP bought 3Com is its strong relationships in the Chinese market. In fiscal year 2008, half of 3Com’s revenue came from its H3C joint venture with the Chinese vendor Huawei Technologies. Therefore, it is not surprising that HP would have paid a premium to gain a foothold in this lucrative market. If HP is smart, it will do a good job leveraging 3Com’s many software assets to build out its networking business as well as beefing up its software organization. In reality, HP is much more comfortable in the hardware market, so adding networking as a core competency makes sense. It will also bolster its position as a player in the high-end data center market and in the private cloud space.

Cisco, on the other hand, is coming from the network and moving aggressively into the cloud and data center markets. The company has purchased a position in VMware and has established a tight partnership with EMC as a go-to-market strategy. For Cisco, this gives the company credibility and access to customers outside of its traditional markets. For EMC, the Cisco relationship strengthens its networking play. But an even bigger value of the relationship is presenting a bigger footprint to customers as they move to take on HP, IBM, and the assortment of other players who all want to win. The Cisco/EMC/VMware play is to focus on the private cloud. In their view a private cloud is very similar to a private, preconfigured data center. It can be a compelling value proposition to a customer that needs a data center fast without having to deal with a lot of moving parts. From a cloud computing perspective, though, the key question remains: is this really a cloud?

It was inevitable that this quiet market dominated by Google and Amazon would heat up as the cloud becomes a real market force.  But I don’t expect that HP or Cisco/EMC will have a free run. They are being joined by IBM and Microsoft — among others. The impact could be better options for customers and prices that invariably will fall. The key to success for all of these players will be how well they manage what will be an increasingly heterogeneous, federated, and highly distributed hardware and software world. Management comes in many flavors: management of these highly distributed services and management of the workloads.

Is there a Twitter sneak attack in our future?

November 4, 2009

Last year I wrote a post about what I called the Google Sneak Attack. If you don’t feel like reading that post, I’ll make it simple for you. Google came to market as a benign, helpful little search engine that threatened no one. Fast forward a decade and Google now pulls in more ad revenue than most of the television networks combined. It has attacked Microsoft’s Office franchise and is playing a key role in the cloud via Platform as a Service (Google App Engine), not to mention the importance of its entry into the book business and who knows what else. But let’s turn our attention to Twitter. I’ve been using Twitter since 2007. For the first several months I couldn’t quite figure out what it was all about. It was confusing and intriguing at the same time. In fact, my first blog about Twitter suggested that the emperor had no clothes.

So fast forward to the end of 2009 and several very interesting things are happening:

1. Twitter is becoming as much a part of the cultural and technical fabric as Google became just a few years ago.

2. A partner ecosystem has grown up around Twitter. A post from February by Matt Ingram of Gigaom echoes this point.

3. Individuals, large corporations, and small businesses alike are using Twitter as everything from the neighborhood water cooler to a sales channel.

What does this mean? Despite detractors who wonder what you can possibly accomplish in 140 characters, it is becoming clear that this company without a published business plan does have a plan to dominate. It is, in fact, the same strategy that Google had. Who would have been threatened by a small search company? And who could be threatened by a strange little company called Twitter that asked people to say it all in 140 characters? Today Twitter claims to have 18 million users, about 4% of adult Internet users. I suspect that we will begin to see a slow but well-orchestrated rollout of services that leverage the Twitter platform: a combination of advertising plus commercial software aimed at helping companies reach new customers in new channels.

I am confident that within the next two years this small, profitless, patient company will roll out a plan targeting social networking world dominance. It will be fun to watch.

Confessions of a Twitter User

September 25, 2008

Back in January of this year I signed up for a Twitter account. I have to admit I was skeptical. Why does anyone need to know what I am doing right now? I wrote a blog about how silly I thought it was. Then, after playing around with Twitter for about five months, I wrote another blog about how it had the potential to become a platform for innovation. So, clearly, I had changed my mind. I began to see that something here was more interesting than what I had assumed.

Well, now a few months later I would like to report that I have been getting deeper and deeper into my Twitter research and I have some new conclusions that I would like to share.  Here are the five conclusions I have come to about why Twitter is important:

Number One. The water cooler effect. As a technology industry analyst I really enjoy connecting with other analysts. It is especially helpful when a bunch of us are at an industry analyst meeting and we can exchange impressions in real time about what speakers are really saying. When colleagues are at a meeting I am not attending, I get a vicarious real-time impression of the meeting without being there in person! It is amazing what you learn from only 140 characters. I have found that the companies we analysts are twittering about eagerly follow what we say about them and their competitors.

Number Two. Connecting to the political world. During this election season, I have followed the Twitter feeds of many candidates, pundits, and journalists. They often provide links to articles and commentary that I never would have thought to look at, and that I probably would never have known existed. I also took the opportunity to send direct messages to some candidates. I’m sure they never read what I said, but it made me feel better. (Some candidates removed the ability to send a direct message after a while.) I have noticed that a number of cable news reporters are now using Twitter to connect with people about specific issues they are researching. It can definitely be a good reality check for these guys.

Number Three. Connecting to people in the computer industry. I have connected with executives and technologists I haven’t been in touch with in a while. Sometimes I have sent messages to set up a new meeting just based on seeing them make a statement about something happening in their company. It isn’t a substitute for other communication methods like traditional email, but it is handy.

Number Four. The reach of the platform. Twitter, like other social networking platforms, has spawned a range of related services, some of which add better interfaces, and there are lots of them. Here is a link to Todd Ogasawara’s blog, which lists many of them.

Number Five. Twitter will need a revenue-based model at some point. Where’s the business model? This is something I haven’t figured out yet. How will Twitter make money? Are they planning what I call a Google Sneak Attack? Is there a plan to create an advertising model or a new SaaS software model built on the base platform?

Clearly Twitter has momentum and some buzz right now. Will it last? I think some of that depends on how well the company does at working on scalability, partnerships, and figuring out a business model. Semantic search is something they desperately need. I could envision Twitter evolving to create specific applications for companies that want to set up real-time feedback with customers and partners. I’ll keep working with Twitter; I enjoy the interaction (when I have time).

Is there beef behind SalesForce.Com?

May 29, 2008

I have been following Salesforce.com since its founding in 1999. Initially the company started by creating a contact management system, which evolved into the sales force automation platform it offers today. Last month I attended a small dinner meeting in Boston hosted by Marc Benioff, chairman and CEO of Salesforce.com, for some partners and customers. I met Steve Pugh, CEO of CODA Financials, a subsidiary of Coda, a UK-based developer of accounting software. I was intrigued that the company had built its new-generation financial application on top of Salesforce.com’s infrastructure. In my next post, I’ll talk about Coda and why it made this decision. But before that, I wanted to take a look at the Salesforce platform.

What is most interesting about Salesforce is that it intended to build a platform from day one. In my discussions with Marc in the early days, he focused not specifically on the benefits of CRM but rather on “No Software”. If you think about it, that was a radical concept ten years ago.

It therefore goes without saying that Salesforce has been a Software as a Service pioneer. For example, in June 2003 it launched sforce, one of the first web-services-based SaaS platforms, which offered partners a published SOAP-based API. Rather than positioning Salesforce as just an application, the company views it as a “database in the sky” that serves as an integration platform. Likewise, from a customer perspective, Salesforce has designed its environment to “look like a block”. What does that mean? I would probably use a different term: maybe an infrastructure black box.
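
To make the “database in the sky” idea concrete, here is a minimal sketch of what working with a WSDL-described SOAP API looks like from Python. I am using the zeep SOAP library purely for illustration; the WSDL URL, credentials, and query below are placeholders, not Salesforce’s actual endpoint or schema.

```python
# Minimal sketch of consuming a WSDL-described SOAP API from Python.
# The zeep library choice, the WSDL URL, and the operation arguments are
# illustrative placeholders, not Salesforce's actual endpoint or schema.
from zeep import Client

# zeep downloads the WSDL and generates typed client bindings from it.
client = Client("https://example.com/services/partner.wsdl")  # placeholder

# Log in, then treat the remote service like a queryable database.
# (A real client would also attach the returned session id as a SOAP
# header on subsequent calls.)
session = client.service.login("user@example.com", "secret")
results = client.service.query("SELECT Id, Name FROM Account LIMIT 5")

for record in results.records:
    print(record.Id, record.Name)
```

The point is the shape of the interaction: everything, including queries, flows through published web service operations rather than through a local database driver.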

Salesforce’s approach to creating its ecosystem has been incremental. It began, for example, by allowing customers to change tabs and create their own database objects. Next, the company added what it called the AppExchange, which added published APIs so that third-party software providers could integrate their applications into the Salesforce platform. Most of the applications on AppExchange are more like utilities than full-fledged packaged applications. Many of the packages sold through the AppExchange are “tracking applications”: for example, there is an application that tracks information about commercial and residential properties; another application is designed to optimize the sales process for media/advertising companies; still another package is intended to help analyze sales data.

But this is just the beginning of what Salesforce has planned. The company is bringing in expertise from traditional infrastructure companies like Oracle and BEA, among others. Its head of engineering came from eBay. Bringing in experienced management that understands enterprise scalability will be important, especially because of Salesforce’s vast ambitions. I have been reading blogs by various Salesforce.com followers and critics. Josh Greenbaum, whom I have known for more than 20 years, has been quite critical of Salesforce and has predicted its demise (within 18 months). He makes the comparison between Salesforce.com and Siebel. While any company that has risen as fast as Salesforce.com will be a target, I do not believe that Salesforce.com is in trouble. There are two reasons I believe it has a good chance for sustainability: its underlying SOA architecture and the indications that ISVs are beginning to see the company as a viable infrastructure provider.

So, what is the path that Salesforce is following on its quest for infrastructuredom (is that a real word? probably not)? One of the primary reasons for my optimism is that Salesforce.com supports traditional development through a procedural language it calls Apex, which is intended to help developers write stored procedures and SQL-like statements. While this may disappoint some, it is a pragmatic move. But more important than Apex is the development of a standard XML-based stylesheet interface to services designed for use with Salesforce applications. This allows a developer to change the way the application looks. It is, in essence, the interface as a service. A third capability that I like is the technique that Salesforce has designed for creating common objects. In essence, this is a basic packaging mechanism that allows a third party to create its own version of Salesforce for its customers. For example, this has enabled Accenture to create a version of Salesforce for its customers in the health care industry.

But what is behind the curtain of Salesforce? First, Salesforce uses the Oracle database as a technique for serving up file pages (not as a relational database). But the core intellectual property that sits on top of Oracle is a metadata architecture, designed as a multi-tenancy service. Salesforce considers this metadata stack the core of its differentiation in the market. The metadata layer is complex and includes an application server called Resin. The Resin Application Server is a high-performance XML application server for use with JSPs, servlets, JavaBeans, XML, and a host of other technologies. On top of this metadata layer is an authorization server. The metadata layer is structured so that each organization has unique access to the stack. Therefore, two companies could be physically connected to the same server, but there would be no way for them to access each other’s data. The metadata layer will only point to the data that is specific to a user. The environment is designed so that each organization (i.e., customer) has a specific WSDL-based API; in fact, all APIs are accessed through the WSDL interface. There are two versions of the WSDL, one general and one for a specific customer implementation. If a customer wants to share data, for example, they have to go through the general WSDL interface.
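
To make that isolation argument concrete, here is a toy sketch of my own (not Salesforce’s actual design or code) showing how a metadata layer can scope every request to a single tenant even though tenants share the same physical storage:

```python
# Toy illustration (not Salesforce's code) of metadata-driven multi-tenancy:
# every query is resolved through per-organization metadata, so two tenants
# on the same physical server can never see each other's rows.

# Shared physical storage: each row is tagged with its owning organization.
ROWS = [
    {"org": "org-A", "table": "account", "name": "Acme"},
    {"org": "org-B", "table": "account", "name": "Globex"},
]

# Per-tenant metadata: which objects each organization may see.
METADATA = {
    "org-A": {"tables": {"account"}},
    "org-B": {"tables": {"account", "invoice"}},
}

def query(org_id: str, table: str) -> list[dict]:
    """Resolve a query through the tenant's metadata before touching storage."""
    meta = METADATA[org_id]
    if table not in meta["tables"]:
        raise PermissionError(f"{org_id} has no object named {table}")
    # The metadata layer only ever points at rows belonging to this tenant.
    return [row for row in ROWS if row["org"] == org_id and row["table"] == table]

print(query("org-A", "account"))  # -> only Acme, never Globex
```

The application never touches storage directly; every request is resolved through the tenant’s metadata first, which is what makes sharing a physical server safe.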

Salesforce uses these XML-based interfaces as its integration approach, and it has used them to integrate with Google Apps. Salesforce had already begun partnering with Google around AdWords, so this move simply deepened the relationship, since both companies face common competitive threats.

The bottom line is that I think Salesforce.com is well positioned in the market. It has an underlying architecture that is well conceived, based on an SOA approach. It has created an ecosystem of partners that leverage its APIs and rely on its network to build their businesses. Most importantly, Salesforce.com has created an application that is approachable to mortals (as opposed to software gods). Companies like Siebel, in contrast, created a platform that was complicated for customers to use, and therefore many purchased the software and never used it.

Salesforce.com is not without challenges. It needs to continue to innovate on its platform so that it does not get caught off guard by large players (Microsoft, SAP, and Oracle) who aren’t happy with an upstart in a market they feel entitled to own. It is also at risk from upstarts like Zoho and open source CRM players like SugarCRM. If Salesforce.com can attract more packaged software vendors like Coda to build their next-generation applications on top of Salesforce’s environment, it may be able to weather the inevitable threats.

How Amazon cashes in on its Cloud

May 14, 2008

I had a very interesting conversation with Jeff Barr, the senior web services evangelist at Amazon. I have known Jeff for almost 15 years. In those days Jeff was one of the architects at a company called Visix, which built an early graphical development environment that was ahead of its time. Visix’s software development environment was designed as an abstraction of the underlying infrastructure. Visix came into the market before the Internet infrastructure became the de facto standard, but for me, it set the vision for where we are today. Jeff started at Amazon in the summer of 2002 with that Visix experience, and some Microsoft experience, in his consciousness.

Amazon’s business model is different from that of a traditional software company, which often spends 18-24 months convincing customers to adopt new hardware, software, or services. Amazon is leveraging a different computing model based on providing customers with a set of predefined services that can be bought without making a long-term commitment. In a sense, Amazon has had the luxury (or good sense) to roll out service after service and see what sticks. As Jeff sees it, “people’s brains light up. They can build their business and applications in a positive way without having to worry about bandwidth, power and cooling.” His perception is that customers don’t worry about whether the cloud provided by Amazon will support their needs. Clearly, he is able to talk this way because Amazon has invested in an architecture designed for massive scalability. The other point is that, having built this architecture for its own retail requirements, Amazon had the foresight to exploit the technology to create a new line of business: in essence, a compute cloud based on providing a set of tools and product offerings to the market. The message to the market is straightforward: use these services so that you can innovate quickly without having to build from scratch. In taking this approach, Amazon creates a test bed that allows the company to collaborate on new functionality with partners. In addition, and perhaps most importantly, it allows customers to buy incremental capability so they can scale up and down when they need to. According to Jeff, one of the benefits of the cloud is that it isn’t dominated by the needs of one customer; one customer may have a spike in demand while another has less need at that point in time. Over the years, Amazon has been able to identify predictable usage patterns.

Amazon’s business model follows this approach. A customer creates an account that, in essence, gives them a charge account with Amazon. Customers get access to all of the web services APIs. Their usage is tracked and they are billed for what they use. The pricing is quite straightforward: Amazon charges 15 cents per gigabyte per month, not a lot of money even when you scale. What is interesting to me is that there are no contracts to negotiate; everyone understands the rules. I asked Jeff if customers ever ask to buy a “private cloud”. While Amazon has been asked, Jeff felt that Amazon’s depth of experience with hosted services discourages customers from going it alone, explaining, “This is a business we want to be in. We have a lot of experience in our organization. We build highly cost effective data centers and sophisticated monitoring and operations.” He contends that the expertise Amazon has built over its 13 years in the business is enough to keep customers from walking away from its cloud. If you do the math, it would be difficult to argue. For example, if a customer needed 500GB of storage for two years, the cost would be $1,800, and the customer would avoid having to manage that environment.
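
That math is easy to verify. Here is a quick sketch, assuming the flat 15-cents-per-gigabyte-month rate quoted above and ignoring any request or bandwidth charges (which the quote doesn’t cover):

```python
# Back-of-the-envelope check of the storage cost quoted above.
# Assumes the flat $0.15/GB-month rate; ignores request and bandwidth fees.
price_per_gb_month = 0.15   # dollars
storage_gb = 500
months = 24                 # two years

total = price_per_gb_month * storage_gb * months
print(f"${total:,.2f}")     # -> $1,800.00
```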

Jeff makes a good point. If a customer needs to scale from 10GB to 100TB in a month, that might be hard to pull off on its own. “This is routine for us,” Jeff claims. From his vantage point, the cloud changes the relationship between the customer and its hardware vendors. In effect, customers are sharing hardware resources with lots of other customers. So, the question becomes: who is your partner? It is no longer the provider of the hardware or the operating system, though you probably still have a relationship with your software provider.

So, Amazon’s view of the cloud is pretty straightforward: it is a way to get value out of virtualization. Jeff points out that if developers use Amazon’s Elastic Compute Cloud, for example, they pay to access servers on an hourly basis. Amazon allocates a server to that account and provides a copy of the operating system they need to get started. That process takes a few seconds.
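
As a rough illustration of how little ceremony is involved, here is a sketch using boto, the Python library for Amazon Web Services. The machine image id is a placeholder, credentials are assumed to be configured in the environment, and error handling is omitted:

```python
# Sketch of EC2's pay-by-the-hour provisioning model using boto, the
# Python AWS library. The AMI id is a placeholder; credentials are
# assumed to be configured in the environment; error handling omitted.
import time

import boto

conn = boto.connect_ec2()  # picks up AWS credentials from env/config

# Ask EC2 for one server booted from a stock machine image; billing
# starts when the instance starts and is metered by the hour.
reservation = conn.run_instances("ami-12345678",  # placeholder image id
                                 instance_type="m1.small")
instance = reservation.instances[0]

while instance.state != "running":
    time.sleep(5)
    instance.update()  # refresh state until the server is allocated

print("server available at", instance.public_dns_name)
```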

Another dimension of Amazon’s business revolves around the companies that build applications on top of its infrastructure. Amazon has built a bunch of its own applications that it offers as services. In addition, there are a number of application companies building on top of the Amazon platform. One that Jeff mentioned to me is called RightScale, an automated cloud computing management system intended to help customers of Amazon’s Elastic Compute Cloud with issues such as load balancing. Beyond companies like this, there is a community of 370,000 developers. Because Amazon sets the barrier to entry so low, it is easy for a developer to try a service without making a long-term commitment.

The more I think about Amazon’s platform and business model, the more sense it makes to me. I believe that Amazon and others such as Google, Salesforce.com, and eBay are a peek into the future of the new generation of computing. In a sense, this model breaks every rule that the traditional computing industry has been built on. This movement toward enterprise software as a service and utility computing is beginning to redefine hardware, software, management, and services. I predict that this new business model is going to slowly but surely turn the industry upside down. It isn’t only that the business model is different; the underlying technology platform, based on standards and a service-oriented architecture, is propelling this change. The only thing that will slow this transformation is fear of change. But what else is new?