Posts Tagged ‘XML’

IBM’s hardware sneak attack

April 13, 2010

Yesterday I read an interesting blog post commenting on why Oracle seems so interested in Sun’s hardware.

I quote from a comment by Brian Aker, former head of architecture for MySQL, on the O’Reilly Radar blog. He offers his view on why Oracle bought Sun:

Brian Aker: I have my opinions, and they’re based on what I see happening in the market. IBM has been moving their P Series systems into datacenter after datacenter, replacing Sun-based hardware. I believe that Oracle saw this and asked themselves “What is the next thing that IBM is going to do?” That’s easy. IBM is going to start pushing DB2 and the rest of their software stack into those environments. Now whether or not they’ll be successful, I don’t know. I suspect once Oracle reflected on their own need for hardware to scale up on, they saw a need to dive into the hardware business. I’m betting that they looked at Apple’s margins on hardware, and saw potential in doing the same with Sun’s hardware business. I’m sure everything else Sun owned looked nice and scrumptious, but Oracle bought Sun for the hardware.

I think that Brian has a good point. In fact, in a post I wrote a few months ago, I commented on the fact that hardware is back. It is somewhat ironic: for a long time, the assumption was that a software platform is the right leverage point for controlling markets. Clearly, the tide is shifting. IBM, for example, has taken full advantage of customer concerns about the future of the Sun platform. But IBM is not stopping there. I predict a hardware sneak attack that combines IBM’s platform software strength (i.e., middleware, automation, analytics, and service management) with its hardware platforms.

IBM will use its strength in systems and middleware software to expand its footprint into Oracle’s backyard, surrounding its software with an integrated platform designed to work as a system of systems. It is clear that over the past five or six years IBM’s focus has been on software and services. Software has long provided good profitability for IBM. Services have made enormous strides over the past decade as IBM has learned to codify knowledge and best practices into what I have called Service as Software. Equally important has been IBM’s focused effort over the past decade to revamp the underlying structure of its software into modular services that are used across its software portfolio. Combine this approach with industry-focused business frameworks and you have a pretty good idea of where IBM is headed with its software and services portfolios.

The hardware strategy began to evolve in 2005, when IBM’s software group bought a small XML accelerator appliance company called DataPower. Many market watchers were confused: what would IBM software do with a hardware platform? Over time, IBM expanded the footprint of this platform and began to repurpose it as a means of pre-packaging software components. First there was a SOA-based appliance; then IBM added a virtual machine appliance called the CloudBurst appliance. On the Lotus side of the business, IBM bought another appliance company that evolved into the Lotus Foundations platform. Appliances became a great opportunity to package and preconfigure systems that could be remotely upgraded and managed. This packaging of software with systems demonstrated the potential not only for simplicity for customers but also for a new way of adding value and revenue.

Now, IBM is taking the idea of packaging hardware with software to new levels. It is starting to pair its software and networking capabilities with hardware-driven systems. For example, within the systems environment, IBM is leveraging its knowledge of optimizing systems software so that application workloads can take advantage of capabilities such as threading, caching, and systems-level networking.

In its recent announcement, IBM organized its new hardware platforms around the five most common workloads: transaction processing, analytics, business applications, records management and archiving, and collaboration. What does this mean for customers? If a customer has a transaction-oriented system, the most important capability is to ensure that the environment uses as many threads as possible to maximize throughput. In addition, caching repetitive workloads will ensure that transactions move through the system as quickly as possible. While this has been doable in the past, the difference is that these capabilities are now packaged as an end-to-end system, so implementation can be faster and more precise. The same can be said for analytics workloads. These workloads demand a high level of efficiency to enable customers to look for patterns in the data that help predict outcomes, and they require the caching and fast processing of algorithms and data across multiple sources.
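
To make the threading-and-caching point concrete, here is a minimal Java sketch of the general pattern being described: a pool sized to the available hardware threads, plus a cache that short-circuits repetitive requests. The class and method names are invented for illustration; this is the idea, not IBM’s implementation.

```java
import java.util.Map;
import java.util.concurrent.*;

// Illustration of the threading-plus-caching idea behind
// workload-optimized systems; names and logic are invented.
public class TransactionWorkload {
    // Pool sized to the hardware threads available, so throughput
    // scales with the cores/threads the platform exposes.
    private final ExecutorService pool =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

    // Cache for repetitive work: identical requests are computed once.
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    public Future<String> submit(String request) {
        return pool.submit(() ->
                cache.computeIfAbsent(request, TransactionWorkload::process));
    }

    private static String process(String request) {
        // Stand-in for the real transaction logic.
        return "result-for-" + request;
    }

    public void shutdown() {
        pool.shutdown();
    }
}
```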

The bottom line is that IBM is looking at its hardware as an extension of the workloads it is required to support. Rather than considering hardware as a set of separate platforms, IBM is following a system-of-systems approach that is consistent with cloud computing. With this type of approach, IBM will continue on the path of viewing a system as a combination of the hardware platform, the systems software, and systems-based networking. These elements of computing are then configured based on the type of application and the nature of the current workload.

It is, in fact, workload optimization that is at the forefront of what is changing in hardware in the coming decade. This is true both in the data center and in the cloud. Cloud computing, and the hybrid environments that will make up the future of computing, are predicated on predictable, scalable, and elastic workload management. We will start thinking about computing as a continuum of all of the component parts combined: hardware, software, services, networking, storage, collaboration, and applications. This reflects the dramatic changes that are just over the horizon.

Is there beef behind SalesForce.Com?

May 29, 2008

I have been following Salesforce.com since its founding in the late 1990s. The company started by creating a contact management system, which evolved into the sales force platform it offers today. Last month I attended a small dinner meeting in Boston hosted by Marc Benioff, Chairman and CEO of Salesforce.com, for some partners and customers. There I met Steve Pugh, CEO of CODA Financials, a subsidiary of Coda, a UK-based developer of accounting software. I was intrigued that the company had built its new-generation financial application on top of Salesforce.com’s infrastructure. In my next post, I’ll talk about Coda and why it made this decision. But before that I wanted to take a look at the Salesforce platform.

What is most interesting about Salesforce is that it intended to build a platform from day one. In my discussions with Marc in the early days, he focused not specifically on the benefits of CRM but rather on “No Software”. If you think about it, that was a radical concept ten years ago.

It goes without saying, then, that Salesforce has been a Software as a Service pioneer. In June 2003, for example, it launched sforce, one of the first web-services-based SaaS platforms, which offered partners a published SOAP-based API. Rather than viewing Salesforce as an application, the company views it as a “database in the sky” and treats that database as an integration platform. Likewise, from a customer perspective, Salesforce has designed its environment to “look like a block”. What does that mean? I would probably use a different term: an infrastructure black box.
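
To give a feel for what a published SOAP-based API means in practice, here is a hedged Java sketch using the standard SAAJ (javax.xml.soap) classes. The endpoint URL, namespace, and element names are placeholders of my own, not the actual sforce API; a real integration would typically use stubs generated from the published WSDL.

```java
import javax.xml.soap.*;

// Generic illustration of a SOAP-based API call of the sforce era.
// The endpoint and element names below are placeholders.
public class SoapCallSketch {
    public static void main(String[] args) throws Exception {
        SOAPConnection connection =
                SOAPConnectionFactory.newInstance().createConnection();

        // Build the request envelope: one operation with one argument.
        SOAPMessage request = MessageFactory.newInstance().createMessage();
        SOAPBody body = request.getSOAPPart().getEnvelope().getBody();
        SOAPElement query = body.addChildElement("query", "ex",
                "http://example.com/sforce-like-api");
        query.addChildElement("queryString", "ex")
             .addTextNode("SELECT Name FROM Account");

        // Send it and print the reply; a WSDL-generated stub would
        // hide all of this plumbing from the developer.
        SOAPMessage response =
                connection.call(request, "http://example.com/soap/endpoint");
        response.writeTo(System.out);
        connection.close();
    }
}
```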

Salesforce’s approach to creating its ecosystem has been incremental. It began, for example, by allowing customers to change tabs and create their own database objects. Next, the company added what it calls the AppExchange, which published APIs so that third-party software providers could integrate their applications into the Salesforce platform. Most of the applications on AppExchange are more like utilities than full-fledged packaged applications. Many of the packages sold through the AppExchange are “tracking applications”: for example, one tracks information about commercial and residential properties; another is designed to optimize the sales process for media/advertising companies; still another is intended to help analyze sales data.

But this is just the beginning of what Salesforce has planned. The company is bringing in expertise from traditional infrastructure companies like Oracle and BEA, among others. Its head of engineering came from eBay. Bringing in experienced management that understands enterprise scalability will be important, especially given Salesforce’s vast ambitions. I have been reading blogs by various Salesforce.com followers and critics. Josh Greenbaum, whom I have known for more than 20 years, has been quite critical of Salesforce and has predicted its demise (within 18 months). He makes the comparison between Salesforce.com and Siebel. While any company that has risen as fast as Salesforce.com will be a target, I do not believe that Salesforce.com is in trouble. There are two reasons I believe it has a good chance for sustainability: its underlying SOA architecture, and the indications that ISVs are beginning to see the company as a viable infrastructure platform.

So, what is the path that Salesforce is following on its quest for infrastructuredom (is that a real word? probably not)? One of the primary reasons for my optimism is that Salesforce.com supports traditional development through a procedural language it calls Apex, which is intended to help developers write stored procedures and SQL statements. While this may disappoint some, it is a pragmatic move. But more important than Apex is the development of standard XML-based stylesheet interfaces to a service designed for use with Salesforce applications. These allow a developer to change the way the application looks; it is, in essence, the interface as a service. A third capability that I like is the technique Salesforce has designed for creating common objects. In essence, this is basic packaging that allows a third party to create its own version of Salesforce for its customers. For example, this has enabled Accenture to create a version of Salesforce for its customers in health care.
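
The “interface as a service” idea is easiest to see as a stylesheet transform: the same XML data can be rendered differently simply by swapping stylesheets. Below is a minimal Java sketch using the standard javax.xml.transform API; the file names are hypothetical.

```java
import javax.xml.transform.*;
import javax.xml.transform.stream.*;
import java.io.File;

// Sketch of "interface as a service": the same XML data rendered
// differently by swapping stylesheets. File names are hypothetical.
public class InterfaceAsAService {
    public static void main(String[] args) throws Exception {
        // The stylesheet decides how the application "looks".
        Transformer transformer = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new File("account-view.xsl")));

        // Apply it to the data returned by the service.
        transformer.transform(
                new StreamSource(new File("account-data.xml")),
                new StreamResult(new File("account-view.html")));
    }
}
```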

But what is behind the curtain of Salesforce? First, Salesforce uses the Oracle database as a mechanism for serving up file pages, not as a relational database. The core intellectual property that sits on top of Oracle is a metadata architecture designed as a multi-tenancy service. Salesforce considers this metadata stack the core of its differentiation in the market. The metadata layer is complex and includes an application server called Resin, a high-performance XML application server for use with JSPs, servlets, JavaBeans, and a host of other technologies. On top of this metadata layer is an authorization server. The metadata layer is structured so that each organization has unique access to the stack. Therefore, two companies could be physically connected to the same server, but there would be no way for them to access each other’s data: the metadata layer will only point to the data that is specific to a user. The environment is designed so that each organization (i.e., customer) has a specific WSDL-based API; in fact, all API access goes through the WSDL interface. There are two versions of the WSDL, one general and one for a specific customer implementation. If a customer wants to share data, for example, it has to go through the general WSDL interface.
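
A rough Java sketch of the multi-tenancy rule described above: every lookup is scoped to an organization first, so tenants co-located on the same server can never resolve each other’s data. The data structures are invented for illustration; Salesforce’s actual metadata stack is far more elaborate.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Toy model of metadata-driven multi-tenancy: every lookup is keyed
// by organization first, so co-located tenants can never reach each
// other's data. The structures are invented for illustration.
public class MetadataRouter {
    // orgId -> (logical object name -> physical location of the data)
    private final Map<String, Map<String, String>> metadataByOrg =
            new ConcurrentHashMap<>();

    public void register(String orgId, String objectName, String location) {
        metadataByOrg.computeIfAbsent(orgId, k -> new ConcurrentHashMap<>())
                     .put(objectName, location);
    }

    // The metadata layer only ever resolves names within the caller's org.
    public Optional<String> resolve(String orgId, String objectName) {
        return Optional.ofNullable(metadataByOrg.getOrDefault(orgId, Map.of())
                                                .get(objectName));
    }
}
```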

Salesforce uses these XML-based interfaces as its integration approach, and it has used them to integrate with Google Apps. Salesforce had already begun partnering with Google around AdWords; this move simply deepened the relationship, since both companies are faced with competitive threats.

The bottom line is that I think Salesforce.com is well positioned in the market. It has an underlying architecture that is well conceived, based on a SOA approach. It has created an ecosystem of partners that leverage its APIs and rely on its network to build their businesses. Most importantly, Salesforce.com has created an application that is approachable to mortals (as opposed to software gods). Companies like Siebel, in contrast, created a platform that was complicated for customers to use, and therefore many purchased the software and never used it.

Salesforce.com is not without challenges. It needs to continue to innovate on its platform so that it does not get caught off guard by large players (Microsoft, SAP, and Oracle) who aren’t happy with an upstart in a market they feel entitled to own. It is also at risk from newcomers like Zoho and open source CRM players like SugarCRM. If Salesforce.com can attract more packaged software vendors like Coda to build their next-generation applications on top of its environment, it may be able to weather the inevitable threats.

Microsoft’s dynamic response strategy: planning for cohesion

March 25, 2004

How does Microsoft fit into the new world of dynamically responsive systems? To understand the part Microsoft plays in this realm of Service Oriented Architectures (SOA), it is necessary to look at the implications of Microsoft’s .NET strategy. It is clear that the company has taken the notion of a loosely coupled, service-oriented architecture to heart through the development of its .NET Framework. Over the past five years much of Microsoft’s approach to distributed computing has changed. At the same time, plenty is still the same. In this column, I offer my opinion on what Microsoft is doing right for customers, and where there are still issues that need to be addressed.

Leveraging Standards: The most impressive part of Microsoft’s evolving strategy is its adoption of SOA standards, including XML, SOAP interfaces, Web Services Description Language (WSDL), and Universal Description, Discovery, and Integration (UDDI). Basing its next-generation infrastructure on these codified standards will both help customers implement heterogeneous environments and aid ISVs that want to support both Microsoft and its competitors. Equally important is that Microsoft has taken the time to architect a system designed for dynamic response. For example, Microsoft’s database engine (SQL Server) includes a mapping tool that enables data to be routed without massive programming. In addition, Microsoft has moved XML support right into SQL Server. This is complemented by the maturing BizTalk, which offers both connectivity and business process integration across Microsoft systems as well as third-party systems.
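
As a concrete example of XML moving into the database, SQL Server’s FOR XML clause returns relational rows as XML directly from a query. Here is a minimal sketch calling it through JDBC; the connection string, credentials, and table are placeholders of my own.

```java
import java.sql.*;

// Sketch of SQL Server's in-database XML support: the FOR XML clause
// returns relational rows as an XML fragment. Connection string and
// table name are placeholders.
public class SqlServerXmlSketch {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:sqlserver://localhost;databaseName=Orders",
                     "user", "password");
             Statement stmt = conn.createStatement();
             // FOR XML AUTO shapes each row as an XML element.
             ResultSet rs = stmt.executeQuery(
                     "SELECT CustomerId, Total FROM Invoices FOR XML AUTO")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
```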

Built From The Ground Up: What is most important for CIOs to understand about .NET and Microsoft’s SOA strategy is that it is not a simple set of interfaces. Rather, it is a rewrite of the entire foundation of Microsoft’s technology platform. Microsoft has undertaken the task of virtually starting from scratch and creating a well-architected but self-contained world. Even so, this self-contained world enables participation by the outside world.

Focus on Web Services: One of the most important features of the approach to dynamic response is Microsoft’s focus on web services and component architectures. .NET can present a consistent programming paradigm to software developers. Ironically, Microsoft learned the fundamentals of this approach from Hewlett-Packard’s pioneering work with e-services in the late 1990s. With .NET, developers can encapsulate code together with the pieces needed to understand what it does, so that it can run unaided as long as its interfaces are adhered to. Therefore, programmers will be able to devise new business services that can be linked together to create applications. The same type of component services can be designed using Java-based development environments; the difference is that Microsoft has designed its own implementation with key differences from Java.
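
Since, as noted, the same component services can be built in Java, here is a minimal Java (JAX-WS) sketch of the idea: a business component published behind a described interface, with the WSDL contract generated automatically so callers need nothing beyond it. The service name, namespace, and logic are invented for illustration.

```java
import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// A self-describing business component: publishing it generates the
// WSDL contract, which is all a caller needs. Names are invented.
@WebService(targetNamespace = "http://example.com/credit")
public class CreditCheckService {

    @WebMethod
    public boolean approve(String customerId, double amount) {
        // Stand-in for real business logic.
        return amount < 10_000;
    }

    public static void main(String[] args) {
        // Exposes both the service and its generated WSDL at ?wsdl.
        Endpoint.publish("http://localhost:8080/credit", new CreditCheckService());
        System.out.println("Service up; WSDL at http://localhost:8080/credit?wsdl");
    }
}
```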

It’s About the PC: One of the key differences between Microsoft’s approach to dynamic response and its competitors’ is the emphasis on the desktop. While other enterprise infrastructure players focus on server-to-server as well as server-to-PC interaction, Microsoft’s primary objective is to help customers take the value locked in their systems of record (on mainframes, Unix servers, and the like) and push it to the desktop. Microsoft has spent considerable resources architecting new runtime services in support of this goal. Once data or logic is moved to the desktop, it is put into various Microsoft PC applications such as Word or Excel. This is accomplished through the use of XML as a common SOA foundation. Therefore, through .NET, Microsoft talks to the mainframe through service interfaces. In essence, a request for information is made to the host computer and a result (not the actual data) is moved to the desktop. Microsoft has spent considerable time and effort writing web services interfaces to data stores on enterprise applications such as SAP’s systems. In essence, Microsoft has developed web services that understand they are talking to an SAP API when taking data from that system, and then use a standard web services call to talk to the PC environment.
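
The request/result flow can be sketched from the desktop side as well. In the hypothetical Java (JAX-WS) client below, the desktop asks the host system a question and receives only the computed answer, never the underlying rows. The WSDL location, service name, and interface are all invented for illustration.

```java
import java.net.URL;
import javax.jws.WebService;
import javax.xml.namespace.QName;
import javax.xml.ws.Service;

// Hypothetical contract for a host-side service; a tool like wsimport
// would normally generate this from the published WSDL.
@WebService(targetNamespace = "http://example.com/orders")
interface OrderSummaryPort {
    double quarterlyTotal(String customer, int year, int quarter);
}

// Desktop-side client: only the computed result crosses the wire.
public class DesktopClient {
    public static void main(String[] args) throws Exception {
        Service service = Service.create(
                new URL("http://hostsystem.example.com/orders?wsdl"),
                new QName("http://example.com/orders", "OrderService"));

        OrderSummaryPort port = service.getPort(OrderSummaryPort.class);

        // The host aggregates over its own data and returns one number.
        System.out.println("Q1 total: " + port.quarterlyTotal("ACME", 2004, 1));
    }
}
```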

While Microsoft would like customers to move to a fully integrated .NET Framework-driven environment, the company is pragmatic in understanding that most customers will not be willing to rip out their old applications. Therefore, it is focusing its initial efforts on offering mechanisms that take data and services and, through connectivity services (BizTalk), move them to the PC. Microsoft is a patient company; it is willing to wait, betting that the experience of having data at their fingertips on their desktops will be so compelling that customers will eventually unplug their mainframe and Unix systems.

Conclusion: It would be nice to be able to say that there is one right way to ensure that a corporation will have a well-designed and scalable Service Oriented Architecture that will stand the test of time. In reality, CIOs will have to live in a world where they leverage .NET, Java, and a host of older and emerging options. Yes, the world will continue to be a messier place than many would hope. Microsoft has done an admirable job creating a unified platform based on SOA standards and its own implementation of a world where its operating system is at the center of distributed computing. However, the world of enterprise computing is a complicated place. Java environments, which are supported by IBM, HP, BEA, SAP, and a host of other platform vendors and ISVs, will continue to be strong players in the market. Most market leaders, including those leading the Java opposition, will coexist with .NET because of market demand. In the long run, customers will be well served by a competitive environment in which all protagonists are focused on Service Oriented Architectures.