Every year I attend IBM’s software analyst meeting. It is an opportunity to get a snapshot of what the leadership team is thinking and saying. Since I have had the opportunity to attend many of these events, it is always instructive to watch the evolution of IBM’s software business over the years.
So, what did I take away from this year’s conference? In many ways, it was not that much different from what I experienced last year. And I think that is good. When you are a company the size of IBM you can’t lurch from one strategy to the next and expect to survive. One of the advantages that IBM has in the market is that it has a well-developed roadmap that it is in the process of executing on. It is not easy to execute when you have as many software components as IBM does in its portfolio.
While it isn’t possible to discuss all that I learned in my various discussions with IBM executives, I’d like to focus on IBM’s solutions strategy and its impact on the software portfolio. From my perspective, IBM has made impressive strides in enforcing a common set of services that underlie its software portfolio. It has been a complicated process that has taken decades and is still a work in progress. As a result, the business units within software are increasingly working together to provide underlying services to each other. For example, Tivoli provides management services to Rational, and Information Management provides data management services to Tivoli. WebSphere provides middleware and service orientation to all of the various business units. Because of this approach, IBM is better able to move to a solutions focus.
It’s about the solutions.
In the late 1990s IBM got out of the applications business in order to focus on middleware, data management, and systems management. This proved to be a successful strategy for the next decade. IBM made a huge amount of money selling WebSphere, DB2, and Tivoli offerings for SAP and Oracle platforms. In addition, Global Services created a profitable business implementing these packaged applications for enterprises. But the world has begun to change. SAP and Oracle have both encroached on IBM’s software business. Some have criticized IBM for not being in the packaged software business. While IBM is not going into the packaged software business, it is investing a vast amount of money, development effort, and marketing into the “solutions” business.
How is the solutions business different from a packaged application? In some ways they are actually quite similar. Both provide a mechanism for codifying best practices into software, and both are intended to save customers time when they need to solve a business problem. IBM took itself out of the packaged software business just as the market was taking off. Companies like SAP, Oracle, Siebel, PeopleSoft and hundreds of others were flooding the market with tightly integrated packages. In this period, IBM decided that it would be more lucrative to partner with these companies, which lacked independent middleware and enabling technologies. IBM decided that it would be better off enabling these packaged software companies than competing in the packaged software market.
This turned out to be the right decision for IBM at the time. The packaged software it had developed in the 80s was actually holding it back. Without the burden of trying to fix broken software, it was able to focus all of its energy and financial strength on its core enabling software business. But as companies like Oracle and SAP cornered the packaged software market and began to expand into enabling software, IBM began to evolve its strategy. IBM’s strategy is now a hybrid of the traditional packaged software business and a solutions business based on industry frameworks that codify best practices.
So, there are two components in IBM’s solutions strategy – horizontal packaged solutions that can be applied across industries, and solution frameworks that are focused on specific vertical markets.
Horizontal Packages. The horizontal solutions that IBM is offering have been based primarily on acquisitions it has made over the past few years. While at first glance they look like any other packaged software, there is a method to what IBM has purchased. Without exception, these acquisitions are focused on providing packaged capabilities that are not specific to any one market but are intended to be used in any vertical market. In essence, the packaged solutions that IBM has purchased resemble middleware more than end-to-end solutions. For example, Sterling Commerce, which IBM purchased in August 2010, is a cross-channel commerce platform. IBM purchased Coremetrics, which provides web analytics, in June, and bought Unica for marketing automation of core business processes. While each of these is indeed packaged, they each represent a component of a solution that can be applied across industries.
Vertical Packages. IBM has been working on its vertical market packaging for more than a decade through its Business Services Group (BSG). IBM has taken its best practices from its various industry engagements and codified these patterns into software components. These components have been unified into solution frameworks for industries such as retail, banking, and insurance. While this has been an active approach within Global Services for many years, there was a major restructuring in IBM’s software organization this past year. In January, the software group split into two groups: one focused on middleware and another focused on software solutions. All of the newly acquired horizontal packages provide the underpinning for the vertical framework-based software solutions.
Leading with the solution. IBM software has changed dramatically over the past several decades. The solutions focus does not stop with the changes within the software business units themselves; it extends to hardware as well. Increasingly, customers want to be able to buy their solutions as a package without having to buy the piece parts. A solution focus that encompasses solutions, middleware, appliances, and hardware is the strategy that IBM will take into the coming decade.
You know that a market is about to transition out of its early fantasy stage when IT architects begin talking about traditional IT requirements. Why do I bring this up as an issue? I had a fascinating conversation yesterday with a leading architect in charge of the cloud strategy for an important company that is typically on the bleeding edge of technology. Naturally, I am not allowed to name the company or the person. But let me just say that individuals and companies like this are the first to grapple with issues such as the need for a registry for web services or the complexity of creating business services that are both reusable and include business best practices. They are the first companies to try out artificial intelligence to see if it can automate tasks that require complex reasoning.
These innovators tend to get blank stares from their cohorts in traditional IT departments who are grappling with mundane issues such as keeping systems running efficiently. Leading-edge companies have the luxury of pushing the bounds of what is possible. There is a tremendous amount to be learned from their experiments with technology. In fact, there is often more to be learned from their failures than from their successes, because they are pushing the boundary of what is possible with current technology.
So, what did I take away from my conversation? From my colleague’s view, the cloud today is about “how many virtual machines you need, how big they are, and linking those VMs to storage.” Not a very compelling picture, but it is his perception of the reality of the cloud today. His view of future requirements is quite intriguing.
I took away six key issues that this advanced planner would like to see in the evolution of cloud computing:
One. Automation of placement of assets is critical. Where you actually put capability matters. For example, there are certain workloads that should never leave the physical data center because of regulatory requirements. If an organization were dealing with huge amounts of data, it would not be efficient to scatter elements of that data across different cloud environments. What about performance issues? What if a task needs to be completed in 10 seconds, or in 5 milliseconds? There are many decisions that need to be made based on corporate requirements. Should each of these placement decisions be made by hand? The answer is no. There should be an automated process based on business rules that determines the actual placement of cloud services.
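A rule-driven placement process of this kind can be sketched in just a few lines. This is a minimal illustration, not a real product API; the workload attributes, rules, and zone names are all hypothetical:

```python
# Hypothetical sketch of business-rule-driven workload placement.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    regulated: bool        # subject to data-residency/regulatory rules?
    max_latency_ms: float  # response-time requirement

def place(workload: Workload) -> str:
    """Apply business rules, in priority order, to choose a deployment zone."""
    if workload.regulated:
        return "on-premises"   # regulated workloads never leave the data center
    if workload.max_latency_ms < 10:
        return "edge-cloud"    # tight latency budgets need nearby capacity
    return "public-cloud"      # everything else can use commodity capacity

print(place(Workload("patient-records", regulated=True, max_latency_ms=1000)))   # on-premises
print(place(Workload("trade-matcher", regulated=False, max_latency_ms=5)))       # edge-cloud
```

The point is that the rules, not an administrator, decide where each workload lands; corporate requirements change the rules, and placement follows automatically.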
Two. Avoiding concentration of risk. How do you actually place core assets into a hypervisor? If, for example, you have a highly valuable set of services that are critical to decision makers you might want to ensure that they are run within different hypervisors based on automated management processes and rules.
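One way to express such a rule is anti-affinity: refuse to co-locate critical services on the same hypervisor. A minimal sketch, with invented service and hypervisor names:

```python
# Hypothetical anti-affinity placement: spread critical services across
# distinct hypervisors so no single failure takes out more than one of them.
def spread(services, hypervisors):
    """Assign each critical service to its own hypervisor, or fail loudly."""
    if len(services) > len(hypervisors):
        raise ValueError("not enough hypervisors to avoid concentration of risk")
    return dict(zip(services, hypervisors))

placement = spread(["pricing-engine", "risk-model"], ["hv-a", "hv-b", "hv-c"])
print(placement)  # -> {'pricing-engine': 'hv-a', 'risk-model': 'hv-b'}
```

Note that the rule fails rather than silently doubling up, which is exactly the behavior you want from an automated management process.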
Three. Quality of Service needs a control fabric. If you are a customer of hybrid cloud computing services, you might need access to the code that tells you what tasks a tool is actually doing. What does that tool actually touch in the cloud environment? What do the error messages mean, and what are their implications? Today many cloud services are black boxes; there is no way for the customer to really understand what is happening behind the scenes. If companies are deploying truly hybrid environments that support a mixed workload, this kind of access to the workings of the various tools that are monitoring and managing quality of service will be critical. From a quality of service perspective, some applications will require dedicated bandwidth to meet requirements. Other applications will not need any special treatment.
Four. Cloud service providers building shared services need an architectural plan to control them as a unit of work. These services will be shared across departments as well as across customers. How do you connect these services? While it might seem simple at the 50,000-foot level, it is actually quite complex, because we are talking about linking a set of services together to build a coherent platform. Therefore, as with building any system, there is a requirement to model the “system of services”, then deploy that model, and finally reconcile and tune the results.
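The model-deploy-reconcile cycle can be sketched in miniature. The services and replica counts here are hypothetical; the point is that the model is the source of truth and the running system is continuously compared against it:

```python
# Hypothetical reconcile step for a modeled "system of services":
# compare the declared model against what is observed running.
desired = {"billing": 3, "catalog": 2}   # modeled replica counts per service
observed = {"billing": 2, "catalog": 2}  # what is actually running

def reconcile(desired, observed):
    """Return the adjustments needed to bring the deployment back to the model."""
    actions = {}
    for service, want in desired.items():
        have = observed.get(service, 0)
        if have != want:
            actions[service] = want - have  # positive: start more; negative: stop some
    return actions

print(reconcile(desired, observed))  # -> {'billing': 1}, i.e. start one more instance
```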
Five. Standard APIs protect customers. Should APIs for all cloud services be published and accessible? If companies are to have the freedom to move easily and efficiently between and among cloud services then APIs need to be well understood. For example, a company may be using a vendor’s cloud service and discover a tool that addresses a specific problem. What if that vendor doesn’t support that tool? In essence, the customer is locked out from using this tool. This becomes a problem immediately for innovators. However, it is also an issue for traditional companies that begin to work with cloud computing services and over time realize that they need more service management and more oversight.
Six. Managing containers may be key to the service management of the cloud. A well-designed cloud service has to be service oriented. It needs to be placed in a container without dependencies, since customers will use services in different ways. Therefore, each service needs a set of parameter-driven configurators so that the rules of usage and management are clear. What version of what cloud service should be used under what circumstance? What if the service is designed to execute backup? Can that backup happen across the globe, or should it be done in proximity to the data assets? These management issues will become the most important issues for cloud providers in the future.
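A parameter-driven configurator for the backup question above might look something like this sketch. The policy fields, region names, and function are invented for illustration; the idea is that the rules of usage are declared as parameters rather than hard-coded into the service:

```python
# Hypothetical parameter-driven configurator for a backup service.
BACKUP_POLICY = {
    "service_version": "2.1",       # which version of the service to invoke
    "data_locality": "same-region"  # backups must stay near the data assets
}

def choose_backup_target(policy, data_region, candidate_regions):
    """Pick a backup region that satisfies the declared locality rule."""
    if policy["data_locality"] == "same-region":
        if data_region not in candidate_regions:
            raise ValueError("no candidate region satisfies the locality rule")
        return data_region
    return candidate_regions[0]  # an "anywhere" policy takes the first candidate

print(choose_backup_target(BACKUP_POLICY, "eu-west", ["us-east", "eu-west"]))  # eu-west
```

Changing the policy, not the code, changes the service’s behavior, which is what makes the rules of usage and management visible to both provider and customer.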
The best thing about talking to people like this architect is that it begins to make you think about issues that aren’t part of today’s cloud discussions. These are difficult issues to solve. However, many of these issues have been addressed for decades in other iterations of technology architectures. Yes, the cloud is a different delivery and deployment model for computing but it will evolve as many other architectures do. The idea of putting quality of service, service management, configuration and policy rules at the forefront will help to transform cloud computing into a mature and effective platform.
I spent a couple of hours today listening to Oracle talk about the long-awaited integration with Sun Microsystems. A real end of an era and beginning of a new one. What does this mean for Oracle? Whatever you might think about Oracle, you have to give the company credit for successfully integrating the 60 companies it has purchased over the past few years. Having watched hundreds and perhaps thousands of acquisitions over the last few decades, it is clear that integration is hard. There are overlapping technologies, teams, cultures, and egos. Oracle has successfully managed to leverage the IP from its acquisitions to support its business goals. It has kept packaged software customers happy by improving the software. PeopleSoft customers, for example, were able to continue to use the software they had become dependent on in primarily the same way as before the acquisition. In some cases, the quality of the software actually improved dramatically. The path has been more complicated with the various middleware and infrastructure platforms the company has acquired over the years because of overlapping functionality.
The acquisition of Sun Microsystems is the biggest game changer for Oracle since the acquisition of PeopleSoft. There is little doubt that Sun has significant software and hardware IP that will be very important in defining Oracle in the 21st century. But I don’t expect this to be a simple journey. Here are the five key issues that I think will be tricky for Oracle to navigate. Obviously, this is not a complete list but it is a start.
Issue One: Can Oracle recreate the mainframe world? The mainframe is dead — long live the mainframe. Oracle has a new fondness for the mainframe and what that model could represent. So, if you combine Sun’s hardware, networking layer, storage, and security with packaged applications and middleware, do you get to own the total share of a customer’s wallet? That is the idea. Oracle management has determined that IBM had the right ideas in the 1960s — everything was nicely integrated and the customer never had to worry about the pieces working together.
Issue Two: Can you package everything together and still be an open platform? To its credit, Oracle has built its software on standards such as Unix/Linux, XML, Java, etc. So, can you have it both ways? Can you claim openness when the platform itself is hermetically sealed? I think it may be a stretch. In order to accomplish this goal, Oracle would have to have well-defined and published APIs. It would have to be able to certify that with these APIs the integrated platform won’t be broken. Not an easy task.
Issue Three: Can you manage a complex computing environment? Computing environments get complicated because there are so many moving parts. Configurations change; software gets patched; new operating system versions are introduced; emerging technology enters and disrupts the well-established environment. Oracle would like to automate the management of this complexity for customers. It is an appealing idea, since configuration problems, missing links, and poor testing are responsible for many of the outages in computing environments today. Will customers be willing to have this type of integrated environment controlled and managed by a single vendor? Some customers will be happy to turn over these headaches. Others may have too much legacy or want to work with a variety of vendors. This is not a new dilemma for customers. Customers have long had to weigh the benefits of a single source of technology against the risks of being locked in.
Issue Four: Can you teach an old dog new tricks? Can Oracle really be a hardware vendor? Clearly, Sun continues to be a leader in hardware despite its diminished fortunes. But as anyone who has ventured into the hardware world knows, hardware is a tough, brutal game. In fact, it is the inverse of software. Software takes many cycles to reach maturation. It needs to be tweaked and finessed. However, once it is in place it has a long, long life. As the old saying goes, old software never dies. The same cannot be said for hardware. Hardware has a much straighter line to maturity. It is designed, developed, and delivered to the market. Sometimes it leapfrogs the competition enough that it has a long and very profitable life. Other times, it hits the market at the end of a cycle when a new, more innovative player enters the market. The culmination of all the work and effort can be cut short when something new comes along at the right place and the right time. It is often a lot easier to get rid of hardware than software. The computer industry is littered with the corpses of failed hardware platforms that started with great fanfare and then faded away quickly. Will Oracle be successful with hardware? It will depend on how good the company really is at transforming its DNA.
Issue Five: Are customers ready to embrace Oracle’s brave new world? Oracle’s strategy is a good one — if you are Oracle. But what about for customers? And what about for partners? Customers need to understand the long-term implications and tradeoffs in buying into Oracle’s integrated approach to its platform. It will clearly mean fewer moving parts to worry about. It will mean one phone call and no finger pointing. However, customers have to understand the type of leverage that a single company will have in terms of contract terms and conditions. And what about partners? How does an independent software vendor or a channel partner participate within the new Oracle? Is there room? What type of testing and preparation will be required to play?
Having just completed Service Management for Dummies (scheduled to be in bookstores in June), I have taken a step back to think about what I learned from the process. When our team first started the research, a lot of people I talked to wanted to know if we were writing a book about ITIL 3.0 best practices. The answer is, of course, we covered ITIL 3.0 best practices. However, as part of our research and in-depth discussions with customers, it became apparent that there is something bigger happening here that transcends IT. I am not sure that this issue has been widely noticed in the world of service management, but it is real and encouraging. Corporate management is beginning to notice that much of their physical infrastructure and the components that are the essence of their corporate existence are technology enabled. The X-ray that used to be stored on a piece of film in a file cabinet is now digitized. The automobile is now managed by sensors and computers. Security of physical buildings is computerized. The factory floor is a complex system. Of course, I could go on for months with lists that include RFID and the like. But I think I have made the point: increasingly, everything must be thought of as a system, not just the servers, desktops, and networks that sit in the data center.
In my view, this is why the service management arena is getting to be so exciting. Many of the CIOs that our team interviewed for Service Management for Dummies echoed this level of excitement. These executives are finding that applying service management principles to both the physical and IT worlds is transformational. It means that organizations have a greater ability to manage their companies from a holistic perspective.
In the book, our team uses the example of the ATM to make the point. The ATM is a relatively simple automated device that requires matching a customer number with an ID code. It requires that a request for cash from the consumer be matched with the availability of funds from that bank or one of its partner banks. It requires the ability to do the accounting to provide the customer with a receipt that states how much money was withdrawn and how much is left in the account. And there is more! Behind that customer action, which might take all of 5 seconds, is a huge infrastructure: a data center, a security infrastructure, a sensor that detects if the machine itself is experiencing a problem. There is a network of trucks, managed by a third-party company, that delivers cash to replenish the ATM. There are even more parts to this world that I am not mentioning — so forgive me. But what is most interesting is that all of these mini-ecosystems are intertwined. What if the bank’s management decides to save money by selecting a new cash delivery network? This company promises great service at a fraction of the cost. To save money, the bank goes with the new service, only to discover that its drivers are unreliable and cash is often not delivered in a timely manner. Even if the ATM network works well, the data center is flawless, and the security is solid, the bank is not able to deliver satisfaction to its customers because there is no cash.
The bottom line is that service management is becoming a corporate issue — not just an IT issue. The secret to service management is about the customer, partner, supplier, and employee experience. Like every other technology transformation over the past couple of decades, mature technology initiatives become management initiatives. Increasingly, service management is being tied to the key performance indicators of the business. Therefore, it is imperative that IT management understand the goals of corporate management as well as the needs of internal and external customers.
The other day I had to fly from Boston to Denver when it became clear that there was going to be a sizable snowstorm in Boston. Once it became obvious that, if I tried to fly out on Monday, I would probably not make my Monday night dinner meeting or my Tuesday consulting engagement, I decided to take action. I called United Airlines and changed my flight to Sunday. Now, I hate traveling on Sunday, but missing the meeting was not an option.
I was connected to a very, very friendly agent in India. She was extremely polite and informed me that, yes, I could change my flight, but I would be charged a $100 change fee plus another $250 for the new flight. I was also informed that according to her system there were no weather problems that should cause me to have to change my flight arrangements. I suggested that she might want to look at a website called weather.com.
I was actually surprised that she looked at the site and did acknowledge that there was a storm coming into Boston. Despite this revelation, she would not budge. I asked to speak to a supervisor. To make a long and painful story short — I got the supervisor (also in India) to waive the $100 change fee, but not the $250 charge. I had no choice; I paid the additional ticket charge.
On Sunday afternoon I arrived at the airport and discovered that I was not the only person who thought the weather might cause travel problems. The United Airlines agent at the airport told me that I should not have been charged the $250 fee at all. She gave me a phone number to call so I would be able to get my money back. She was professional, informed, and actually quite pleasant to deal with. While sitting and waiting for my Sunday flight to take off, I noticed that my Monday morning flight was indeed canceled.
On Monday, I called the number to get the $250 charge reversed and I found myself caught up in the same process that I experienced when I tried to change my flight. I spent more than an hour talking to very polite agents in India who seemed not to want to reverse the fee. I was finally given a promise that the fee would indeed be reversed and that I would receive an email confirmation. It is Saturday and I am still waiting.
So, what’s my point about process? Companies know that they have to provide customer service. However, it is not a profit center. So, companies like United Airlines create the following process for customer service. If you are a gold/platinum customer you can call a special number and talk to a local representative who treats you like a real person. My friend Henry was traveling the same day on United and, because he travels on United frequently, his experience was the opposite of mine. In essence, United has established a class system that treats customers who traditionally spend more money differently from customers like me who tend to travel on other airlines.
On one level it makes sense. Set up a system that rewards loyal customers. The business process designed to support these customers is good. However, the process to handle all the other passengers is broken. Training smart people to be polite, follow a script, and never deviate from a defined process is flawed. What is missing is context. Being good at managing business process requires context for what is happening, such as problems with weather and unexpected emergencies. We live in a complex world that requires that customer management be treated as an opportunity for future business development — not as an oversimplified process.
In the future, I will probably try to avoid United Airlines if I can (unless there is no direct flight to where I am headed). Had the company had a different business process and treated me as a potentially valuable customer I might have looked more favorably at United for future trips. The bottom line is that streamlining business process too much can be a dangerous thing.
The other day my wireless router at home failed. Like many of us, I find it hard to envision life without wireless connectivity, so I ran out and bought a new router. I took the device out of the package, followed the directions, and tried to get online. It didn’t work. After three conversations with NetGear technical support representatives in India and the Philippines, I have given up and am going to bring the router back and try a different brand.
Why do I think it is important to talk about this issue? Yes, I want to vent, but I also want to make a point about ease of use of technology that has become as much a part of the daily fabric of our lives as making a cup of tea. I maintain that it is not acceptable for technology companies to assume that the average consumer of computing technology should be sophisticated enough to solve engineering problems. Let me give you an example by showing you the email exchange with NetGear about how I should go about getting my poor sick wireless router to work (it was working if I plugged it in). First you will see the question that I sent to the online support site, and then the answer I got back the next day.
1/7/2008 5:20:00 PM
I have had discussions with three technicians. I am unable to connect to any of the wireless PCs in my home. The technicians are able to get me connected. However, in about 20 minutes the connection is dropped and I am unable to reconnect. I took the same PC to my office and am able to connect to my wireless router in the office with no problem at all. I am quite frustrated. Can you solve this problem, or should I return your product and never buy another NetGear product again?
1/8/2008 9:55:00 AM
To resolve your issue please do the following:
First of all, please upgrade the firmware of your router by going to this website and following the instructions on the page:
After the firmware upgrade please follow these instructions to reset and reconfigure your router:
A – Please do a special reset of the unit by following these steps:
1. Disconnect all cables and cords from the device including the power cord
2. Press the reset button for 30 seconds
3. While keeping the reset button pressed, plug the power cord into the device (this requires both hands)
4. Keep the reset button pressed for an additional 30 seconds
5. Do a power cycle of the unit (unplug the power cord, wait a minute, and plug it in again)
B – Connect the modem to the router
Please connect the Ethernet cable from the modem to the router’s Internet port (the isolated port, not one of the 4 grouped ports).
C – Connect a computer to the router
Please make a wired connection between the router and a computer using the supplied Ethernet cable. Connect the cable to one of the router’s 4 LAN ports. Please make sure the computer has a dynamic IP address configured and that there is no firewall running on it.
How to make sure your computer has a dynamic IP address?
1. Click Start / Control Panel / Network Connections
2. Right-click Local Area Connection (the adapter you are using to connect to the router) and select Properties.
3. In the new window, click on Internet Protocol (TCP/IP) and click Properties.
4. Make sure both settings are set to ‘Obtain…’.
5. Click OK.
6. Click Start / Control Panel / Internet Options
7. Click on the Connections tab
8. Click the LAN Settings button
9. Make sure all checkboxes are unchecked
10. Click OK.
D – Access the router page
1. Start Internet Explorer and type in the IP address of your router (default: http://192.168.1.1 or http://www.routerlogin.com)
2. Type in the username and password (default username: admin, password: password)
E – Set up the router’s internet connection
Once on the router page, please click Setup Wizard and follow the instructions on the screen to set up your router’s internet connection.
NOTE: Once you are finished with this process, you should be able to surf the internet from the computers connected to the router using Ethernet cables.
F – Check the wireless settings in the router
Still on the router page…
1. Click on Wireless Settings under Setup on the left side of the page.
2. Name SSID – type any name you’d like [this will be the name of your wireless network]
3. Region – United States
4. Channel – 6 (other possible settings are 1 and 11)
5. Mode – Auto 108Mbps or ‘g and b’ [depending on your wireless adapters’ capabilities]
6. Security Options – Disable
7. Click Apply.
8. Click on Wireless Settings under Advanced on the left side of the page
9. Make sure that both checkboxes next to “Enable Wireless Router Radio” and “Enable SSID Broadcast” are checked.
10. Click Apply
11. Click the Setup Access List button
12. On the new page, make sure that the checkbox next to “Turn Access Control On” is unchecked.
13. Click Apply
14. Click Logout on the bottom of the left side of the page.
G – Check settings on the wireless computers
1. Click on Start / Run, type services.msc, click OK
2. In the Services (Local) screen, locate Wireless Zero Configuration and double-click on it.
3. Under the General tab, make sure Startup Type is AUTOMATIC; if not, change it to Automatic and hit Apply.
4. Make sure Service Status is STARTED; if not, start the service by clicking the Start button, then click OK.
5. Click Start / Control Panel / Network Connections, right-click on Wireless Network Connection, and select Properties. Under the General tab, select Internet Protocol (TCP/IP), click Properties, and make sure it is set to obtain both the IP address and the DNS server address automatically. Click OK.
6. Click on the Wireless Networks tab at the top and check the box “Use Windows to configure the wireless settings”.
7. Remove all items from the “Preferred networks” list by selecting them one by one and clicking Remove. Click OK.
8. Right click on Wireless Network Connection, select View available wireless networks.
9. Click on Refresh Network List
10. It will show your network, select it and click connect.
11. If it still doesn’t show, please make sure that your wireless network adapter is active. Most laptops have a button that activates/deactivates this device.
NOTE: Once you are finished with this step, you should be able to connect to the router wirelessly and even surf the internet if the internet configuration was also successful.
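(As an aside, most of the Windows-side checks in section G could in principle be scripted rather than clicked through. A rough sketch using XP’s built-in `sc` and `netsh` tools might look like the following; the service name WZCSVC and the connection name “Wireless Network Connection” are assumptions based on XP defaults, not anything NETGEAR supplies:)

```bat
rem Sketch only: automate the Wireless Zero Configuration checks from section G.
rem Assumes Windows XP, where the wireless service name is WZCSVC.

rem Steps 1-3: set the service to start automatically...
sc config WZCSVC start= auto

rem Step 4: ...and make sure it is running right now.
sc start WZCSVC

rem Step 5: obtain the IP address and DNS server automatically (DHCP).
netsh interface ip set address "Wireless Network Connection" dhcp
netsh interface ip set dns "Wireless Network Connection" dhcp
```

Of course, expecting a home user to run any of this is exactly the problem I describe below.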
Once you review my response to your case, you will be given the opportunity to close your case or update it in order to troubleshoot further.
If this resolved your case please select YES resolved and YES to close.
If not, please select NO and update your case providing me with the information to proceed further.
If you do not wish to update at this time just close your browser window, you will be prompted again next time you log into this case. Please also be advised that your case will auto close after 10 days.
If for any reason I am unable to respond back to you within 24 hours, your case is in the main queue so any agent can review what we have done and assist you from there.
Again, I thank you for the opportunity to assist you and THANK YOU for choosing NETGEAR.
I am not sure that you are still with me, but if you took the time to read through all this, you will understand both my frustration and the complicated issue it represents for technology companies. Simply put, customers should not have to be engineers to conduct business. It is time for hardware, software, and services organizations to change their assumptions about what it is acceptable to ask a customer to do to solve problems.
In 2003 the software economy came to a virtual standstill. Buyers continued to hold back on software acquisitions – purchasing only the essentials. While 2004 will be a much better market for software, customers will continue to demand that products make a significant contribution to the bottom line. Therefore, I do not see customers easing up on their potential vendors. If anything, it will be more difficult for emerging software vendors to gain the attention of the CIO. While it is important for customers to keep vendors focused on their immediate needs, it is equally important for buyers to focus on innovation as the driving requirement. Customers are increasingly demanding that suppliers provide solutions that can adapt to a dynamically changing environment. In this context, there are several important areas of software that customers should pay particular attention to in 2004 because of their potential to transform business.
Customer Experience Management – Simply having a web presence is not enough to differentiate any corporation today. One of the most important trends to emerge in 2004 will be software that improves the way customers, partners, and suppliers experience a company’s web presence. This area of software will focus on how well a company can lead a customer to purchase the right product at the right time. This will be the natural evolution of search, customer relationship management, content, and document management technology.
Business Process Integration Solutions market will mature. While there have been products that created sophisticated workflows or helped companies codify and monitor business rules, none of these solutions have taken hold. A new generation of business process integration solutions will emerge that will help companies synchronize business rules and business processes and model best practices in a flexible, innovative way that has the potential to transform business.
Wireless Management Solutions – WiFi will continue to be hot as vendors continue to figure out the right economic model so that adoption is widespread. As this begins to happen solutions will emerge to help companies manage the performance, scalability, and most importantly security of these solutions.
Security solutions will continue to accelerate. As distributed virtual environments continue to explode, new solutions will come onto the market that safeguard corporate assets. These solutions will protect companies even when they create component applications across partners and suppliers.
Web Services/Component Architecture software and web services tools will explode – Component architectures, tools, and integration solutions will become the hottest trend in applications development and deployment. This will happen because component architectures help companies create new packaged solutions without having to program from scratch. All sorts of new products will emerge aimed at creating modular, flexible applications. These will be driven by the emergence of offerings and platform strategies from Microsoft, IBM, HP, and SAP.
Storage and backup – As more companies begin to use the same components to create new instances of value in real time, the need for more sophisticated storage and backup capabilities will grow. Vendors will continue to emerge with innovative methods of helping companies cope with an unending amount of data and combinations of software that need to be stored and managed in dramatically different ways.
Integration solutions are changing from a programmatic model to a linkage model – Technologies that enable customers to more easily link their software assets together to create new value will continue to emerge. These technologies will hide underlying complexities of each component so that integration is dramatically simplified for the customer.
Business Process Management and Monitoring – Technologies that will help companies manage the interactions across their own departments as well as with partners and suppliers will experience tremendous growth.
Management Technology market will experience new growth. Driven by component architectures and more distributed solutions, there will be many new innovations in the way companies manage their highly virtualized computing environments.
Software as a service will begin to mature. An increasing number of companies are beginning to offer software as a hosted service. In 2004 this trend will accelerate. In fact, within the next five years, more customers will be looking for hosted services than for traditional purchasing models.
One of the most important trends for 2004 is that, as usual, there is little new under the sun. The technologies described above have all been around for decades. The difference for the coming year is that for the first time corporations will be able to deploy standards-based component architectures to solve real world problems. If the past decade has taught us anything it is that solutions must be designed with an equal amount of innovation and pragmatism to solve real customer problems in an increasingly real-time world.