
In various forms, business intelligence (BI) – as queries, reporting, dashboards and online analytical processing (OLAP) – is being used increasingly widely. And as basic BI capabilities spread to more organizations, innovative ones increasingly are exploring how to take advantage of the next step in the strategic use of BI: predictive analytics. The trend in Web searches for the phrase “predictive analytics” gives one indication of the rise in interest in this area. From 2004 through 2008, the number of such searches was relatively steady. Beginning in 2009, the number of searches rose significantly and has continued to rise.

While a market for products that deliver predictive analytics has existed for many years and enterprises in a few vertical industries and specific lines of business have been willing to pay large sums to acquire those capabilities, this constitutes but a fraction of the potential user base. Predictive analytics are nowhere near the top of the list of requirements for business intelligence implementations. In our recent benchmark research on business analytics, participants from more than 2,600 organizations ranked predictive analytics 10th among technologies they use today to generate analytics; only one in eight companies uses them. Planning and forecasting fared only slightly better, ranking sixth with use by 26 percent of organizations. Yet in other research, 80 percent of organizations indicated that applying predictive analytics is important or very important.

For the untapped majority, technology has now advanced to a stage at which it is feasible to harness the potential of predictive analytics, and the value of these advanced analytics is increasingly clear. Like BI, predictive analytics also can appear in several identities, including predictive statistics, forecasting, simulation, optimization and data mining. Among lines of business, contact centers and supply chains are most interested in predictive analytics. By industry, telecommunications, medicine and financial services are among the heavier users of these analytics, but even there no more than 20 percent now employ them. Ironically, our recent research on analytics shows that finance departments are the line of business least likely to use predictive analytics, even though they could find wide applicability for them.

Predictive analytics defined broadly have a checkered history. Some software vendors, including Angoss, SAS and SPSS (now part of IBM), have long offered tools for statistics, forecasting and data mining. I was part of the team at Oracle that acquired data mining technology for the Oracle database a dozen years ago. Companies like KXEN sprang up around that time, perhaps hoping to capitalize on what appeared to be a growing market. The open source statistics project known simply as R has been around for more than a decade, and some BI vendors such as Information Builders have embedded it into their products; Information Builders calls its offering WebFOCUS RStat.

But then things quieted down for almost a decade as the large vendors focused on consolidating BI capabilities into a single platform. Around 2007, interest began to rise again, as BI vendors looked for other ways to differentiate their product offerings. Information Builders began touting its successes in predictive analytics with a customer case study of how the Richmond, Va., police department was using the technology to fight crime. In 2008 Tibco acquired Insightful and Netezza acquired NuTech, both to gain more advanced analytics. And in 2009 IBM acquired SPSS.

As well as these combinations, there are signs that more vendors see a market in supplying advanced analytics. Several startups have focused their attention on predictive analytics, including some such as Predixion with offerings that run in the cloud. Database companies have been pushing to add more advanced analytics to their products, some by forming partnerships with or acquiring advanced analytics vendors and others by building their own analytics frameworks. The trend has even been noted by mainstream media: For example, in 2009 an article in The New York Times suggested that “data mining has entered a golden age, whether being used to set ad prices, find new drugs more quickly or fine-tune financial models.”

Thus the predictive analytics market seems to be coming to life just as need intensifies. The huge volumes of data processed by major websites and online retailers, to name just two types, cannot be moved easily or quickly to a separate system for more advanced analyses. The Apache Hadoop project has been extended with a project called Mahout, a machine learning library that supports large data sets. Simultaneously, data warehouse vendors have been adding more analytics to their databases. Other vendors are touting in-memory systems as the way to deliver advanced analytics on large-scale data. It is possible that this marriage of analytics with large-scale data will cause a surge in the use of predictive analytics – unless other challenges prevent more widespread use of these capabilities.

One is the inherent complexity of the analytics themselves. They employ arcane algorithms and often require knowledge of sampling techniques to avoid bias in the analyses. Developing such analytics tools involves creating models, deploying them and managing them to understand when a model has become “stale” or outdated and ought to be revised or replaced. It should be obvious that only the most technically advanced user will have (or desire) any familiarity with this complexity; to achieve broad adoption, either there must be more highly trained users (which is unlikely) or vendors must mask the complexity and make their tools easier to use.
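To make that lifecycle concrete, here is a minimal sketch of creating a model, deploying it by scoring new records, and managing it by checking for staleness. It assumes a Python environment with numpy and scikit-learn; the data, features and degradation threshold are hypothetical stand-ins, not a prescription for any particular product.

```python
# Minimal sketch of the model lifecycle described above: create a model,
# deploy it by scoring new records, and manage it by checking for staleness.
# Assumes numpy and scikit-learn; data and threshold are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))  # stand-in for customer attributes
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Create: fit the model on a training sample and record baseline accuracy.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
baseline_auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

# Deploy: score newly arrived records with the fitted model.
X_new = rng.normal(size=(200, 5))
scores = model.predict_proba(X_new)[:, 1]

# Manage: once actual outcomes for new records are known, re-measure accuracy
# and flag the model as stale if it has degraded beyond a tolerance.
def is_stale(current_auc, baseline, tolerance=0.05):
    return current_auc < baseline - tolerance
```

Even this toy version hints at why ease of use matters: every step involves choices about sampling, validation and thresholds that most business users will never want to make explicitly.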

It also is possible that a lack of understanding of the business value these analyses provide is holding back adoption. Predictive analytics have been employed to advantage in certain segments of the market; other segments can learn from those deployments, but it is up to vendors to make the benefits and examples more widely known. In upcoming benchmark research I’ll be investigating what is necessary to deploy these techniques more broadly and looking for best practices from successful implementations.

Regards,

David Menninger – VP & Research Director

The information management (IM) technology market is undergoing a revolution similar to the one in the business intelligence (BI) market. We define information management as the acquisition, organization, control and use of information to create and enhance business value. It is a necessary ingredient of successful BI implementations, and while some vendors such as IBM, Information Builders, Pentaho and SAP are also integrating their BI and IM offerings, each discipline involves different aspects of the use of information, which sometimes need to be integrated and sometimes kept separate.

Some might consider information management the “plumbing” behind BI: they take it for granted and only notice it when it is missing. We have a more holistic view. Our recent benchmark research on business analytics shows, for example, that organizations struggle to collect all the data they need, with two-thirds of them stating they spend more time on data-related activities than on analytic ones.

Three key issues are driving our information research agenda in 2011:

1) Combining all the sources and types of data into an integrated information architecture.
2) Enabling organizations to manage and analyze larger volumes of information.
3) Providing accessibility to information throughout the organization.

We’ll be updating our information management benchmark research this year to see how these central issues are impacting IM overall. In addition we will focus on five technology innovations my colleague has identified as driving the business technology revolution in 2011: cloud computing, mobile technologies, social media, analytics of more types over more data, and collaboration. Let me flesh out each of these a bit as they impact the evolution of IM.

As I pointed out in “Clouds Are Raining Corporate Data,” cloud computing is having an increasingly large influence over the IT landscape. It’s likely that, whether you realize it or not, corporate data exists and/or is migrating outside the walls of your organization. Cloud-based applications and services raise information management challenges that don’t necessarily exist in on-premises deployments. We’re investigating these issues now in our Business Data in the Cloud benchmark research program, including how newer providers of cloud data integration such as Dell Boomi, Jitterbit and Snapdata fit into the existing landscape of IBM, Informatica, iWay Software, Oracle, Pervasive and Syncsort, to name just a few.

Mobile technologies are enabling organizations to deliver information to users when and where they need it. They are one of the forces driving cloud adoption as organizations look to make it easier to deliver applications to users regardless of their location. Mobile applications also are consuming and producing more location-related information and creating a need to manage this kind of data.

In the world of information management, social media has created entirely new challenges. Most social media data is unstructured text and is forcing organizations to embrace text analytics to deal with it, in many cases for the first time. The volumes of social media data and the speed with which it should be collected and analyzed also present new challenges. From an IM perspective, organizations must learn how to solve these challenges while enforcing appropriate data quality, data governance and life-cycle management policies.
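As a concrete illustration of the kind of processing text analytics implies, the sketch below normalizes a handful of unstructured posts and tallies mentions of terms of interest. It is plain Python; the sample posts and the term list are hypothetical, and a real deployment would of course go well beyond simple term counting.

```python
# Minimal sketch of a first text-analytics step on social media data:
# normalize free-text posts and tally mentions of terms of interest.
# Plain Python; the posts and the term list are hypothetical.
import re
from collections import Counter

posts = [
    "Love the new dashboard feature!",
    "Support response was slow today",
    "Dashboard keeps crashing on mobile",
]
terms_of_interest = {"dashboard", "mobile", "support"}

counts = Counter()
for post in posts:
    tokens = re.findall(r"[a-z']+", post.lower())  # crude tokenizer
    counts.update(t for t in tokens if t in terms_of_interest)

print(counts)  # Counter({'dashboard': 2, 'support': 1, 'mobile': 1})
```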

Analytics present an additional set of IM challenges. The necessity of managing more data and different types of data has led to the adoption of large-scale technologies such as Hadoop. In research that is under way now we are examining the various ways of dealing with these huge data volumes and the role of Hadoop in that process. Predictive analytics also create IM challenges. Sampling, which is a key to producing unbiased predictive models, may or may not become less critical as in-database analytics grow in popularity. The models and the scores that such analytics produce are another form of data that must be managed and retained, often for compliance and auditing purposes. We’ll be studying these issues as part of predictive analytics benchmark research that will commence in the first part of 2011.
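The following sketch illustrates two of the IM concerns just mentioned, again in plain Python: drawing a stratified sample so that a rare outcome is not under-represented, and retaining scores together with model metadata so they can be audited later. The record layout, model identifier and file name are hypothetical stand-ins.

```python
# Minimal sketch of two IM concerns noted above:
# (1) a stratified sample so a rare outcome is not under-represented, and
# (2) retaining scores with model metadata so they can be audited later.
# The record layout, model identifier and file name are hypothetical.
import json
import random
import time

records = [{"id": i, "churned": i % 10 == 0} for i in range(1000)]  # toy data

# (1) Sample each outcome class separately to limit bias in the training set.
churned = [r for r in records if r["churned"]]
retained = [r for r in records if not r["churned"]]
sample = random.sample(churned, 50) + random.sample(retained, 50)

# (2) Persist scores alongside model metadata; random numbers stand in for
# the scores a real model would produce.
scored = [{"id": r["id"], "score": random.random()} for r in sample]
audit_record = {
    "model_version": "churn_model_v1",  # hypothetical identifier
    "scored_at": time.time(),
    "scores": scored,
}
with open("scores_audit.json", "w") as f:
    json.dump(audit_record, f)
```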

Collaboration provides a means to communicate and extend the processes of IM. It creates a new channel not only for delivery of information but also for input into the delivery process. Using collaboration tools such as Twitter, Chatter or Tibbr can help organizations use data and related information by involving more people. This wider audience collectively contains more knowledge about the underlying data and can also comment on its quality, which ultimately will lead to better data and more trust in it. Collaboration tools also provide a mechanism to link the workflows of information management with the constituents involved in the process.

Information management continues to evolve and grow, somewhat to my surprise, as indicated in my recent assessment of Informatica. These changes present challenges for IT groups and lines of business alike. With our IM research agenda, we hope to provide useful information to both functions and help you navigate together through this changing landscape and achieve the goal of creating and enhancing business value.

Regards,

David Menninger – VP & Research Director
