
Archive for the ‘innovation’ Category

Predictions for 2011: getting ready to compete in real time

December 1, 2010

2010 was a transition year for the tech sector. It was the year when the cloud suddenly began to look realistic to the large companies that had scorned it. It was the year when social media suddenly became serious business. And it was the year when hardware and software were united as a platform, something like in the old mainframe days, but different because of high-level interfaces and modularity. Important trends also began to emerge, like the importance of managing information both across the enterprise and among partners and suppliers. Competition for ownership of the enterprise software ecosystem heated up, as did competition for leadership of the emerging cloud computing ecosystem.

So, what do I predict for this coming year? While at the outset it might look like 2011 will be a continuation of what has been happening this year, I think there will be some important changes that will impact the world of enterprise software for the rest of the decade.

First, I think it is going to be a very big year for acquisitions. I have said that before and I will say it again: the software market is consolidating around major players that need to fill out their software infrastructure in order to compete. It will come as no surprise if HP begins to purchase software companies as it moves to compete with IBM and Oracle on the software front. But IBM, Oracle, SAP, and Microsoft will not sit still either. All of these companies will purchase the incremental technology companies they need to compete and to expand their share of wallet with their customers.

This will be a transitional year for up-and-coming players like Google, Amazon, Netflix, Salesforce.com, and others that haven’t hit the radar yet. These companies are plotting their own strategies to gain leadership and will continue to push the boundaries in search of dominance. As they push upmarket and grab market share, they will face the familiar problem of supporting customers who will expect them to act like adults.

Customer support, in fact, will bubble to the top of the list of issues for emerging as well as established companies in the enterprise space, especially as cloud computing becomes a well-established distribution and delivery platform for computing. All of these companies, whether well established or startups, will have to balance the requirement to provide sophisticated customer support with the need to make a profit. This will impact everything from license and maintenance revenue to how companies charge for consulting and support services.

But what will customers be looking for in 2011? Customers are always looking to reduce their IT expenses; that is a given. The major change in 2011, however, will be the need to innovate around customer-facing initiatives. Of course, the idea of focusing on customer-facing software isn’t new, but there are some subtle changes. The new initiatives are based on leveraging social networking, in a secure way, both to drive business traffic and to anticipate customer needs and issues before they become problems. Companies will spend money innovating on customer relationships.

Cloud computing is the other big issue for 2011. While it was clearly a major differentiator in 2010, the cloud will take an important leap forward in 2011. While companies were testing the waters this year, next year they will be looking at best practices in cloud computing. 2011 will be the year when customers focus on three key issues: data integration across public clouds, private clouds, and data centers; manageability in terms of workload optimization; and security and overall performance. The vendors that can demonstrate that they can provide the right level of service across cloud-based services will win significant business. These vendors will increasingly focus on expanding their partner ecosystems as a way to lock customers into their cloud platforms.

Most importantly, 2011 will be the year of analytics. The technology industry continues to produce data at a pace never seen before. But what can we do with this data? What does it mean for organizations’ ability to make better business decisions and to prepare for an unpredictable future? The traditional data warehouse is simply too slow to be effective. 2011 will be the year when predictive analytics, and information management overall, emerge as among the hottest and most important initiatives.

Now I know that we all like lists, so I will take what I’ve just said and distill it into my top ten predictions:

1. Both today’s market leaders and upstarts are going to continue to acquire assets to become more competitive. Many emerging startups will be scooped up before they see the light of day. At the same time, almost as many new startups will emerge as we saw in the dot-com era.

2. Hardware will continue to evolve in a new way. The market will move away from hardware as a commodity. The hardware platform in 2011 will be differentiated based on software and packaging. 2011 will be the year of smart hardware packaged with enterprise software, often as appliances.

3. Cloud computing models will put extreme pressure on everything from software license and maintenance pricing to customer support. Integration between different cloud computing models will be front and center. The cloud model is moving out of risk-averse pilots and into serious deployments. Best practices will emerge as a major issue for customers that see the cloud as a way to boost innovation and the rate of change.

4. Managing highly distributed services in a compliant and predictable manner will take center stage. Service management and service level agreements across cloud and on-premises environments will become a prerequisite for buyers.

5. Security software will be redefined by the challenges of customer-facing initiatives and the need to more aggressively open the corporate environment to support a constantly morphing relationship with customers, partners, and suppliers.

6. The fear of lock-in will reach a fever pitch in 2011. SaaS vendors will increasingly add functionality to tighten their grip on customers. Traditional vendors will purchase more of the components needed to support the lifecycle needs of customers. How can everything be integrated from a business process and data integration standpoint and still allow for portability? Today, the answers are not there.

7. The definition of an application is changing. The traditional view that the packaged application is hermetically sealed is going away. More of the new packaged applications will be built on service orientation and informed by best practices. These applications will be parameter-driven so that they can be changed in real time. And yes, Service Oriented Architecture (SOA) didn’t die after all.

8. Social networking will grow up and become business social networking. These initiatives will be driven by line-of-business executives as a way to engage with customers and employees, gain insights into trends, and fix problems before they become widespread. Companies will leverage social networking to enhance agility and support new business models.

9. Managing endpoints will be one of the key technology drivers in 2011. Smartphones, sensors, and tablet computers are redefining what computing means. They will drive the requirement for a new approach to role- and process-based security.

10. Data management and predictive analytics will explode based on both the need to understand traditional information and the need to manage data coming from new sales and communications channels.

The bottom line is that 2011 will be the year when the seeds planted over the last few years are ready to become the drivers of a new generation of innovation and business change. Put together everything from the flexibility of service orientation, business process management innovation, and the widespread impact of social and collaborative networks to the new delivery and deployment models of the cloud. Now apply tools to harness these environments, such as service management, new security platforms, and analytics. From my view, innovative companies are grabbing the threads of technology and focusing on outcomes. 2011 is going to be an important transition year. The corporations that get this right and transform themselves so that they are ready to change on a dime can win, even if they are smaller than their competitors.

What will it take to achieve great quality of service in the cloud?

November 9, 2010

You know a market is about to transition out of its early fantasy stage when IT architects begin talking about traditional IT requirements. Why do I bring this up? I had a fascinating conversation yesterday with a leading architect in charge of the cloud strategy for an important company that is typically on the bleeding edge of technology. Naturally, I am not allowed to name the company or the person. But let me just say that individuals and companies like this are the first to grapple with issues such as the need for a registry for web services or the complexity of creating business services that are both reusable and embody business best practices. They are the first companies to try out artificial intelligence to see if it can automate complex tasks that require complex reasoning.

These innovators tend to get blank stares from their cohorts in traditional IT departments, who are grappling with mundane issues such as keeping systems running efficiently. Leading-edge companies have the luxury of pushing the bounds of what is possible. There is a tremendous amount to be learned from their experiments with technology. In fact, there is often more to be learned from their failures than from their successes, precisely because they are pushing the boundaries of what is possible with current technology.

So, what did I take away from my conversation? From my colleague’s view, the cloud today is about “how many virtual machines you need, how big they are, and linking those VMs to storage.” Not a very compelling picture, but it is his perception of the reality of the cloud today. His view of the future requirements is quite intriguing.

I took away six key issues that this advanced planner would like to see addressed in the evolution of cloud computing:

One. Automation of asset placement is critical. Where you actually put capability matters. For example, there are certain workloads that should never leave the physical data center because of regulatory requirements. If an organization is dealing with huge amounts of data, it would not be efficient to scatter elements of that data across different cloud environments. What about performance issues? What if a task needs to be completed in 10 seconds, or in 5 milliseconds? There are many decisions that need to be made based on corporate requirements. Should the placement of workloads be decided by hand? The answer is no. There should be an automated process, based on business rules, that determines the actual placement of cloud services.
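To make this concrete, here is a minimal sketch of what rules-based placement might look like. Everything in it (the workload fields, the 10-millisecond threshold, the target names) is a hypothetical illustration, not any vendor’s actual API:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    regulated: bool        # subject to data-residency regulation?
    data_location: str     # where the bulk of its data lives
    max_latency_ms: float  # latency requirement for the task

def place(workload: Workload) -> str:
    """Apply business rules, in priority order, to pick a deployment target."""
    # Rule 1: regulated workloads never leave the physical data center.
    if workload.regulated:
        return "corporate-data-center"
    # Rule 2: latency-critical tasks run in a private cloud near their data.
    if workload.max_latency_ms < 10:
        return f"private-cloud-{workload.data_location}"
    # Default: cost-optimized public cloud.
    return "public-cloud"

print(place(Workload("nightly-reports", regulated=False,
                     data_location="us-east", max_latency_ms=10_000)))
# -> public-cloud
```

The point of the sketch is simply that the decision logic lives in explicit, auditable business rules rather than in an administrator’s head.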

Two. Avoiding concentration of risk. How do you actually place core assets onto hypervisors? If, for example, you have a highly valuable set of services that are critical to decision makers, you might want to ensure that they run on different hypervisors, based on automated management processes and rules.
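One simple way to express that kind of anti-affinity rule in code, again with purely hypothetical service and host names:

```python
from itertools import cycle

def spread_across_hypervisors(critical_services, hypervisors):
    """Anti-affinity sketch: give each critical service its own hypervisor,
    reusing hosts only if there are more services than hosts."""
    assignment = {}
    hosts = cycle(hypervisors)
    for service in critical_services:
        assignment[service] = next(hosts)
    return assignment

print(spread_across_hypervisors(
    ["pricing-engine", "risk-model", "order-router"],
    ["hv-a", "hv-b", "hv-c", "hv-d"]))
# -> {'pricing-engine': 'hv-a', 'risk-model': 'hv-b', 'order-router': 'hv-c'}
```

A production scheduler would also weigh capacity and failure domains, but the rule itself can remain this declarative.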

Three. Quality of service needs a control fabric. If you are a customer of hybrid cloud computing services, you might need access to the code that tells you what tasks a tool is actually doing. What does that tool actually touch in the cloud environment? What do the error messages mean, and what are their implications? Today many cloud services are black boxes; there is no way for the customer to really understand what is happening behind the scenes. If companies are deploying truly hybrid environments that support a mixed workload, this kind of visibility into the various tools that are monitoring and managing quality of service will be critical. From a quality of service perspective, some applications will require dedicated bandwidth to meet requirements; others will not need any special treatment.
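As a rough illustration of what such a control fabric might expose (the names and fields here are mine, not any cloud provider’s), each managed tool could be required to declare what it touches and what treatment it needs:

```python
from dataclasses import dataclass, field

@dataclass
class ManagedService:
    name: str
    touches: list = field(default_factory=list)  # resources the tool manages
    dedicated_bandwidth_mbps: int = 0            # 0 means best effort

    def explain(self) -> str:
        """Expose what the service touches and how it is treated,
        instead of leaving it a black box."""
        network = (f"{self.dedicated_bandwidth_mbps} Mbps dedicated"
                   if self.dedicated_bandwidth_mbps else "best effort")
        return f"{self.name} touches {self.touches}; network: {network}"

monitor = ManagedService("workload-monitor",
                         touches=["vm-pool-a", "storage-tier-1"],
                         dedicated_bandwidth_mbps=100)
print(monitor.explain())
```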

Four. Cloud service providers building shared services need an architectural plan to control them as a unit of work. These services will be shared across departments as well as across customers. How do you connect these services? While it might seem simple at the 50,000-foot level, it is actually quite complex, because we are talking about linking a set of services together to build a coherent platform. Therefore, as with building any system, there is a requirement to model the “system of services,” then deploy that model, and finally reconcile and tune the results.

Five. Standard APIs protect customers. Should the APIs for all cloud services be published and accessible? If companies are to have the freedom to move easily and efficiently between and among cloud services, then APIs need to be well understood. For example, a company may be using a vendor’s cloud service and discover a tool that addresses a specific problem. What if that vendor doesn’t support the tool? In essence, the customer is locked out from using it. This becomes a problem immediately for innovators. However, it is also an issue for traditional companies that begin to work with cloud computing services and over time realize that they need more service management and more oversight.

Six. Managing containers may be key to the service management of the cloud. A well-designed cloud service has to be service oriented. It needs to be placed in a container without dependencies, since customers will use services in different ways. Therefore, each service needs a set of parameter-driven configurators so that the rules of usage and management are clear. What version of which cloud service should be used under what circumstances? What if the service is designed to execute backups? Can a backup happen across the globe, or should it be done in proximity to the data assets? These management issues will become the most important issues for cloud providers in the future.
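A minimal sketch of such a parameter-driven configurator, using an invented backup service as the example (the parameter names are illustrative assumptions):

```python
# Hypothetical parameter-driven configurator for a backup service: the
# rules of usage and management are explicit parameters, not hidden logic.
backup_config = {
    "service": "backup",
    "version": "2.3",                 # which version applies here
    "data_proximity": "same-region",  # keep backups near the data assets
    "allowed_regions": ["us-east"],   # where the service may run
}

def validate(config: dict) -> dict:
    """Reject configurations whose usage rules are contradictory."""
    if config["data_proximity"] == "same-region" and not config["allowed_regions"]:
        raise ValueError("same-region backup needs at least one allowed region")
    return config

validate(backup_config)
```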

The best thing about talking to people like this architect is that it makes you think about issues that aren’t part of today’s cloud discussions. These are difficult issues to solve. However, many of them have been addressed over decades in earlier iterations of technology architecture. Yes, the cloud is a different delivery and deployment model for computing, but it will evolve the way many other architectures have. Putting quality of service, service management, and configuration and policy rules at the forefront will help transform cloud computing into a mature and effective platform.



Why are we about to move from cloud computing to industrial computing?

April 5, 2010

I spent the other week at a new conference called Cloud Connect. Being able to spend four days immersed in an industry discussion about cloud computing really allows you to step back and think about where we are with this emerging industry. While it would be possible to write endlessly about all the meetings and conversations I had, you probably wouldn’t have time to read it all. So, I’ll spare you and give you the top four things I learned at Cloud Connect. I recommend that you also take a look at Brenda Michelson’s blogs from the event for a lot more detail. I would also refer you to Joe McKendrick’s blog from the event.

1. Customers are still figuring out what cloud computing is all about. For those of us who spend way too many hours on the topic of cloud computing, it is easy to assume that everyone knows what it is all about. The reality is that most customers do not. Marcia Kaufman and I conducted a full-day workshop called Introduction to Cloud. The more than 60 people who dedicated a full day to a discussion of all aspects of the cloud made it clear to us that they are still figuring out the difference between infrastructure as a service and platform as a service. They are still trying to understand the issues around security and what cloud computing will mean for their jobs.

2. There is a parallel universe out there among people who have been living and breathing cloud computing for the last few years. In their view, the questions are very different. The big debates among the well-connected focused on a few key issues: Is there such a thing as a private cloud? Is Software as a Service really cloud computing? Will we ever have a true segmentation of the cloud computing market?

3. From the vantage point of the market, it is becoming clear that we are about to enter one of those transitional times in this important evolution of computing. Cloud Connect reminded me a lot of the early days of the commercial Unix market. When I attended my first Unix conference in the mid-1980s it was a different experience than going to a conference like Comdex. It was small. I could go and have a conversation with every vendor exhibiting. I had great meetings with true innovators. There was a spirit of change and innovation in the halls. I had the same feeling about the Cloud Connect conference. There were a small number of exhibitors. The key innovators driving the future of the market were there to discuss and debate the future. There was electricity in the air.

4. I also anticipate a change in the direction of cloud computing now that it is about to pass that tipping point. I am a student of history, so I look for patterns. When Unix reached the stage where the giants woke up and started seeing huge opportunity, they jumped in with a vengeance. The great but small Unix technology companies were either acquired, got big, or went out of business. I think we are on the cusp of the same situation with cloud computing. IBM, HP, Microsoft, and a vast array of others have seen the future, and it is the cloud. This will mean that emerging companies with great technology will have to be both really lucky and really smart.

The bottom line is that Cloud Connect represented a seminal moment in cloud computing. There is plenty of fear among customers who are trying to figure out what it will mean for their own data centers. What will the organizational structure of the future look like? They don’t know, and they are afraid. The innovative companies are looking at the coming armies of large vendors and wondering how to keep their differentiation so that they can become the next Google rather than the next company whose name we can’t remember. There was much debate about two important issues: cloud standards and private clouds. Are these issues related? Of course. Standards always become an issue when there is a power grab in a market. If a Google, Microsoft, Amazon, IBM, or Oracle is able to set the terms for cloud computing, market control can shift overnight. Will standard interfaces be able to save the customer? And how about private clouds? Are they real? My observation and contention is that yes, private clouds are real. If you deploy the same automation, provisioning software, and workload management inside a company rather than inside a public cloud, it is still a cloud. Ironically, the debate over the private cloud is also about power and position in the market, not about ideology. If a company like Google or Amazon (or whichever company is your favorite flavor) is able to debunk the private cloud, guess who gets all the money? If you are a large company where IT and the data center are core to how you conduct business, you can and should have a private cloud that you control and manage.

So, after taking a step back I believe that we are witnessing the next generation of computing — the industrialization of computing. It might not be as much fun as the wild west that we are in the midst of right now but it is coming and should be here before we realize that it has happened.

Predicting the future of computing by understanding the past

December 1, 2009

Now that Thanksgiving is over, I am ready to prepare my predictions for 2010. This year, I decided to start by looking backwards. I first entered the computer industry in the late 1970s, when mainframes roamed the earth and timesharing was king. Clearly, a lot has changed. But what I was thinking about were the assumptions people held about the future of computing at that time and over the next several decades. I thought it would be instructive to mention a few of the interesting assumptions I heard over the years. So, in preparation for my predictions in a couple of weeks, here are a few noteworthy predictions from past eras:

1. Late 1970s – The mainframe will always be the prevalent computing platform. The minicomputer is a toy.

2. Early 1980s – The PC will never be successful. It is for hobbyists. Who would ever want a personal computer in their home? And if they got one, what would they ever do with it — keep track of recipes?

3. Mid-1980s – The minicomputer will prevail. The personal computer and network-based servers are just toys.

4. Mid-1980s – The leaders of the computer industry (IBM, Digital Equipment Corporation, and Wang Laboratories) will prevail.

5. Early 1990s – The Internet has no real future as a computing platform. It is unreliable and too hard to use. How could it possibly scale enough to support millions of customers?

6. Early 1990s – Electronic Commerce is a pipe dream. It is too complicated to work in the real world.

7. Mid-1990s – If you give away software to gain “eyeballs” (the popular term in the era) and market share you will fail.

I could mention hundreds of other assumptions that I have come across that defied the conventional wisdom of the day. The reality is that these types of proof points are not without nuance. For example, the mainframe remains an important platform today because of its ability to process high-volume transactions and for its reliability and predictability. However, it is no longer the primary platform for computing. The minicomputer still exists but has morphed into more flexible server-based appliances. The PC would never have gotten off the ground without the pioneering work done by Dan Bricklin and Bob Frankston, who created the first PC-based spreadsheet. Also, if the mainframe and minicomputer vendors had adopted a flexible computing model, corporations would never have brought millions of unmanageable PCs into their departments. Of the three computing giants of the late 1980s, only IBM is still standing. Digital Equipment was swallowed by Compaq (and ultimately HP), and Wang was bought by Getronics. The lesson? Leaders come and go. Only the humble or paranoid survive. Who could have predicted the emergence of Google or Amazon.com? In the early days of online commerce it was unclear whether it would really work. How could a vendor possibly construct a system that could transmit transactions between partners and customers across the globe? It took time and lots of failures before it became the norm.

My final observation is actually the most complicated. In the mid-1990s, during the dot-com era, I worked with many companies that thought they could practically give away their software for a few dollars, gain a huge installed base, and make money by monetizing those customers. I admit that I was skeptical. I would ask these companies: how can you make money and sustain your company? At those prices, even selling a few million copies leaves revenue under $20 million, before expenses, which would be huge. The reality is that none of these companies are around today. They simply couldn’t survive because there was no viable revenue model. Fast forward almost 20 years: Google was built on top of the failures of these pioneers, demonstrating that you really could use an installed base to build something significant.

So, as I start to plan my predictions for 2010, I will try to keep in mind the assumptions, conventional wisdom, successes, and failures of earlier times.

Can IBM become a business leader and a software leader?

November 23, 2009

When I first started as an industry analyst in the 1980s, IBM software was in dire straits. It was the era when IBM was making the transition from the mainframe to a new generation of distributed computing, and it didn’t go well. Even with thousands of smart developers working their hearts out, the first three forays into a new generation of software were abysmal failures. IBM’s new architectural framework, SAA (Systems Application Architecture), didn’t work; neither did the first application built on top of it, OfficeVision. Its first development framework, Application Development Cycle (AD/Cycle), also ended up on the cutting room floor. Now fast forward 20 years and a lot has changed for IBM and its software strategy. While it is easy to sit back and laugh at these failures, they were also a signal to the market that things were changing faster than anyone could have expected. In the 1980s, the world looked very different: programming was procedural, architectures were rigid, and there were no standards except in basic networking.

My perspective on business is that embracing failures and learning from them is the only way to have real success in the future. Plenty of companies that I have worked with over my decades in the industry have made incredible mistakes in trying to lead the world. Most of them make those mistakes and keep making them until they crawl into a hole and die quietly. The companies I admire are the ones that make mistakes, learn from them, and keep pushing. I’d put IBM, Microsoft, and Oracle in that category.

But I promised that this piece would be about IBM, so I won’t bore you with more IBM history. Let’s just say that over the following 20 years IBM did not give up on distributed computing. So, where is IBM Software today? Since it isn’t time to write the book yet, I will tease you with the five most important observations I have about where IBM is in its software journey:

1. Common components. If you look under the covers of the technology embedded in everything from Tivoli to Information Management and software development, you will see common software components. There is one database engine, a single development framework, and a single analytics backbone. There are common interfaces between elements across a very big software portfolio. So, for example, any management capabilities needed to manage an analytics engine will use Tivoli components.

2. Analytics rules. No matter what you are doing, being able to analyze the information inside a management environment or a packaged application can make the difference between success and failure. IBM has pushed information management to the top of the stack across its software portfolio. Since we are seeing increasing levels of automation in everything from cars to factory floors to healthcare equipment, collecting and analyzing this data is becoming the norm. This is where Information Management and Service Management come together.

3. Solutions don’t have to be packaged software. More than 10 years ago IBM decided that it would not be in the packaged applications business. Even as SAP and Oracle continued to build their empires, IBM took a different path. IBM (like HP) is building solution frameworks that over time incorporate more and more best practices and software patterns. These frameworks are intended to work in partnership with packaged software. What’s the difference? Treat packages like ERP as the underlying commodity engine and focus on the business value-add.

4. Going cloud. Over the past few years, IBM has been making a major investment in cloud computing and has begun to release public cloud offerings, starting with software testing and development. IBM is investing a lot in security and overall cloud management. Its CloudBurst appliance and packaged offerings are intended to be the opening salvo. In addition, and probably even more important, are the private clouds that IBM is building for its largest customers. Ironically, the growing importance of the cloud may actually be the salvation of the Lotus brand.

5. The appliance lives. Even as we look to the cloud to wean us off hardware, IBM is placing big bets on hardware appliances. It is actually a good strategy. Packaging all the piece parts into an appliance that can be remotely upgraded and managed is a good sales strategy for companies that are cutting back on staff but still require capabilities.

There is a lot more that is important about this stage in IBM’s evolution as a company. If I had to sum up what I took away from this annual analyst software event, it is that IBM is focused on winning the hearts, minds, and dollars of business leaders looking for ways to innovate. That’s what Smarter Planet is about. Will IBM be able to juggle its place as a software leader with its push into business leadership? It is a complicated task that will take years to accomplish and even longer to assess.

Can we free process and data?

October 27, 2009

I am still at IBM’s Information on Demand conference here in Las Vegas (not my favorite place, but what can you do). In listening to a lot of discussions around strategy and products, I started thinking about one of the key problems customers are facing around business process and managing increasingly complex data. What companies really want is the flexibility and freedom to leverage their critical data across applications and situations. They also want to be able to change processes based on changing business models.

This is the core issue that companies will be facing in the coming decade, and it will be the difference between success and failure for many businesses. Here’s an example of what I mean. Let’s take a retailer in a competitive market. Say our retailer has five or six applications: accounting, human resources, supply chain management, a customer support system, and a customer-facing e-commerce system. Each of these systems has an underlying database, and each one manages its data according to the business processes that embody the best practices that are the value of these packages. Even if each package is the best in its market, there is a core problem: each solution is a silo. Processes that move between these systems tend to fall through the cracks. This is why we, as customers of such retailers, are often frustrated when we call about a product that wasn’t delivered, doesn’t work, or requires a change, only to discover that one department has no ability to know what is happening in another area. For most companies, the dream of a single view of the customer is aspirational but not practical right now. In reality, it is hard for companies to mess with their existing applications. These solutions are customized for their business environment; they were expensive and complicated to implement, and change is hard. In fact, companies only change when it is more painful to stay with the status quo than it is to change. In a retail scenario, companies change their approach to process and data management when they must change their business model because the current processes will lead to failure. Retailers are currently faced with emerging approaches to selling and managing customer relationships that are challenging traditional selling models. Look what companies like Amazon.com and Netflix have done to their slower-moving competitors.

A number of customers I have spoken with understand this very well. They are looking at ways to separate their core data assets from the underlying applications. Many of these customers are at the forefront of implementing a service oriented architecture (SOA) approach to managing their software assets. They increasingly understand that the secret to their future success is the knowledge they have about their customers and their needs and future requirements, both within their own set of offerings and those from partners. These companies are making it a priority to keep this data independent, secure, and accurate. These business leaders are preparing for inevitable change. At the same time, I have seen these customers creating SOA business services that are, in essence, codified business processes. For example, a business service could be a process that checks the credit of a potential partner, or one that links a new customer request for service to the set of applications that confirms the request, orders the part, and notifies a partner.
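As a rough sketch of what such a codified business service looks like when it is decoupled from any one packaged application (all of the names and the threshold below are hypothetical), consider a credit check expressed against an abstract data interface:

```python
from typing import Protocol

class CustomerData(Protocol):
    """Abstracts the data asset away from any one packaged application."""
    def payment_history(self, partner_id: str) -> list: ...

def check_credit(partner_id: str, data: CustomerData,
                 max_late_payments: int = 2) -> bool:
    """Codified business process: approve a potential partner based on
    payment history, regardless of which application stores the data."""
    late = [p for p in data.payment_history(partner_id) if p.get("late")]
    return len(late) <= max_late_payments

# Any backing store that implements payment_history() can serve the process.
class InMemoryStore:
    def payment_history(self, partner_id: str) -> list:
        return [{"late": False}, {"late": True}]

print(check_credit("partner-42", InMemoryStore()))  # -> True
```

Because the process depends only on the interface, the underlying application or database can change without rewriting the business service.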

So, here is the problem. These customers are implementing this new model of abstracting data and process only for specific projects or business initiatives. These projects have gotten the attention of the C-team because of their impact on revenue. But, in reality, the real breakthrough will happen when the separation of data and process is the rule, not the exception.

This is going to be the overriding challenge for the next decade because it is so hard. There is inertia: companies have implemented predictable packaged applications for more than 30 years and are reluctant to move away from them. But I suggest it is inevitable that companies will come to understand that they must separate data from process if they are going to remain agile and change processes when they anticipate a competitive threat. These same companies will understand that their data is too important to leave locked inside an application tightly linked to a process.

I don’t have the answers about what the tipping point will be when this starts to become a widespread strategy. I think the cloud will become a forcing function that accelerates this trend. I would love to start a dialog. Send me your thoughts and I promise to post them.

Can companies reinvent themselves in a down market?

April 2, 2009

Many years ago I heard a story about how AT&T redesigned the phone network in the 1950s. It doesn’t really matter if it is true or not but it holds a valuable lesson. The story goes like this. In order for AT&T to take its telephony technology to the next level, it had to break the old model and start fresh. Management called all of its key engineers into a meeting and told them that the existing network had been destroyed and they had to start from scratch and design a new network.  And that is precisely what they did. They came up with a new design that was not burdened by the past.

What does this have to do with the world we are living in right now? I think that businesses have a unique opportunity to use this economic downturn to rethink the world.  What would we do if we could start over and reinvent the way we run a business or work with customers or design products that are more modular, more creative, and more accessible? What if the products and services we offer were blown up and we could start over?

I actually think that this may be happening behind the scenes. The really smart companies are using a time of crisis and uncertainty to prepare for the future. There will be a time in the future when customers will be more willing to buy products and services. There may be fewer providers in the world. The companies that survive and thrive are the ones that accept the chaos of the current business environment and see the hidden opportunities.

There is some very good thinking going on in companies that are blowing up their old models and thinking creatively. These companies will become the powerful players in their markets in the future. Who will the losers be? The companies that are filled with panic and looking for someone to blame. So, whether you are in the technology market, in manufacturing, or in something completely different, it is a time to think about innovation and reinvention. It is time to rethink processes. Great companies are a combination of great flexible products and great innovative processes.

There are five things that future leaders should do:

1. Investigate your customers’ pain. What is it that they want to do but can’t? Even if their needs sound unsolvable, they may offer opportunities.

2. Leverage emerging technologies. Use technology that lets your company explore its information about customers, product requirements, unsolved problems, and opportunities. This means you need to stop looking in the rearview mirror at the past. Look at information in a way that allows you to anticipate the future and what is possible.

3. Don’t be held back by current reality. Clearly, you depend on revenue from existing products to stay afloat. However, think about your intellectual property in completely new ways.

4. Listen. I am finding that in this tough market people are doing more talking than listening. It is better to listen.

5. Experiment and fail. The only way to innovate is to try new things and fail. More innovation comes from failures than from initial success.