
Fall is a busy time for software industry analysts. It’s a season filled with vendors’ user conferences and industry conferences. In attending these events I’ve come to realize that big vendors are often the Rodney Dangerfield of the software industry: They get no respect. The lack of respect shows up in snarky social media comments, in tech media coverage that is less enthusiastic than what smaller vendors receive and in a general sense that big vendors do nothing new in their development efforts. However, I suggest this is a shortsighted view of the software world. Smaller vendors serve a valuable function as a source of innovation for the industry, but they receive a disproportionate share of attention. Big vendors deserve businesses’ attention, too, when they consider new software purchases.

If we define big vendors as those with at least US$1 billion in annual revenue, the list of analytics and data management software platform vendors includes companies such as IBM, Informatica, Microsoft, Oracle, SAP, SAS, Teradata and TIBCO. Each of these companies generates 10 to 100 times the revenue of even the most successful startups. There is also a handful of other large software platform vendors with revenue approaching $1 billion, such as Information Builders, MicroStrategy, Qlik, Splunk and Tableau. While the newer ones in this group still have some of the “glow” of their startup days, as a whole this group suffers disrespect similar to that of the largest companies.

The fundamental problem is a mismatch in expectations. As an industry we should not generally expect groundbreaking innovations from the largest software companies. Sure, there are exceptions, but the focus of the large vendors’ research and development efforts is primarily on integrating various capabilities, often the result of an acquisition, and hardening those capabilities to stand up to mission-critical requirements. I recall working for a smaller “innovative” vendor that had hundreds of customers and tens of millions of dollars in revenue; our goal with respect to workload management was to emulate one of the billion-dollar vendors above, which was considered “the gold standard.” So while the company had some innovative technology, we recognized that enterprises needed the features that larger, longer-established vendors had been providing for years.

I’ve written about the interrelationship between large and small software vendors before as I described the software industry ecosystem. Small vendors often bring new technologies to market. Big vendors make things work, often in less obvious but also innovative ways. Both of these efforts are indispensable.

We kept this symbiosis in mind recently in completing our 2016 Ventana Research Technology Innovation Award Winners. In this list you will see a healthy representation of companies both large and small. Each has a role, so let’s give the big vendors some respect for the value that they provide.

Regards,

David Menninger

SVP & Research Director

Follow Me on Twitter @dmenningerVR and Connect with me on LinkedIn.

Data preparation is critical to the effectiveness of both operational and analytic business processes. Operational processes today are fed by streams of constantly generated data. Our data and analytics in the cloud benchmark research shows that more than half (55%) of organizations spend the most time in their analytic processes preparing data for analysis – a situation that reduces their productivity. Data now comes from more sources than ever, at a faster pace and in a dizzying array of formats; it often contains inconsistencies in both structure and content.

In response to these changing information conditions, data preparation technology is evolving. Big data, data science, streaming data and self-service all are impacting the way organizations collect and prepare data. Data sources used in analytic processes now include cloud-based data and external data. Many data sources now include large amounts of unstructured data, in contrast to just a few years ago when most organizations focused primarily on structured data. Our big data analytics benchmark research shows that nearly half (49%) of organizations include unstructured content such as documents or Web pages in their analyses.

The ways in which data is stored in organizations are changing as well. Historically, data was extracted, transformed and loaded, and only then made available to end users through data warehouses or data marts. Now data warehouses are being supplemented with, or in some cases replaced by, data lakes, which I have written about. As a result, the data preparation process may involve not just loading raw information into a data lake, but also retrieving and refining information from it.

The advent of big data technologies such as Hadoop and NoSQL databases intensifies the need to apply data science techniques to make sense of these volumes of information. Querying and reporting over such large amounts of information are inefficient and ineffective analytical techniques. And using data science means addressing additional data preparation requirements such as normalizing, sampling, binning and dealing with missing or outlying values. For example, in our next-generation predictive analytics benchmark research, 83 percent of organizations reported using sampling in preparing their analyses. Data scientists also frequently use sandboxes – copies of the data that can be manipulated without impacting operational processes or production data sources. Managing sandboxes adds yet another challenge to the data preparation process.
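To make these preparation requirements concrete, here is a minimal sketch in Python using pandas. The data set and column names are hypothetical, chosen only to illustrate the steps mentioned above: imputing missing values, normalizing, binning and sampling.

```python
import pandas as pd

# Hypothetical raw data: one missing value in the "spend" column
df = pd.DataFrame({
    "customer_id": range(1, 11),
    "spend": [120.0, 95.0, None, 240.0, 80.0, 310.0, 150.0, 60.0, 200.0, 175.0],
})

# Deal with missing values: impute with the column median
df["spend"] = df["spend"].fillna(df["spend"].median())

# Normalize: rescale spend to the [0, 1] range (min-max normalization)
df["spend_norm"] = (df["spend"] - df["spend"].min()) / (df["spend"].max() - df["spend"].min())

# Bin: group the continuous spend values into three equal-width buckets
df["spend_bin"] = pd.cut(df["spend"], bins=3, labels=["low", "mid", "high"])

# Sample: draw a reproducible 50% sample for model building
sample = df.sample(frac=0.5, random_state=42)
```

In a data science setting, the `sample` copy would typically live in a sandbox so that experimentation never touches the production data source.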

Data governance is always a challenge; in this new world it has if anything grown even more difficult as the volume and variety of data grow. At the moment most big data technologies trail their relational database counterparts in providing data governance capabilities. The developers of data preparation processes must adapt them to these new environments, supplementing them with processes that support governance and compliance of personally identifiable information (PII), payment card information (PCI), protected health information (PHI) and other standards for the handling of sensitive, restricted data.

In the emerging self-service approach to data preparation, three separate user personas typically are involved. Operational teams need to derive useful information from data as soon as it is generated to complete business transactions and keep operations flowing smoothly. Analysts need access to relevant information to guide better decision-making. And the IT organization is often called upon to support either or both of these roles when the complexities of data access and preparation exceed the skills of those in the lines of business. While IT departments probably welcome the opportunity to enable end users to perform more self-service tasks, they cannot do so in ways that ignore enterprise requirements. Nonetheless, the trend toward deploying tools that support self-service data preparation is growing. The demand for self-service and the need for enterprise control can come into conflict in organizations that want to derive maximum business value from their data as quickly as possible while still maintaining appropriate data governance, security and consistency.

To help understand how organizations are tackling these changes, Ventana Research is conducting benchmark research on data preparation. This research will identify existing and planned approaches and related technologies, best practices for implementing them and market trends in data preparation. It will assess the current challenges associated with innovations in data preparation, including self-service capabilities and architectures that support big data environments. The research will assess the extent to which tools and processes for data preparation support superior performance and determine how organizations balance the demand for self-service capabilities with enterprise requirements for data governance and repeatability. It will uncover ways in which data preparation and supporting technologies are being used to enhance operational and analytic processes.

This research also will provide new insights into the changes now occurring in business and IT functions as organizations seek to capitalize on data preparation to gain competitive advantage and help with regulatory compliance and risk management and governance processes. The research will investigate how organizations are implementing data preparation tools to support all types of operational and business processes including operational intelligence, business intelligence and data science.

Data is an essential component of every aspect of business, and organizations that use it well are likely to gain advantages over competitors that do not. Watch our community for updates. We expect the research to reveal impactful insights that will help business and IT. When it is complete, we’ll share education and best practices about how organizations can tackle these challenges and opportunities.

Regards,

David Menninger

SVP & Research Director

Follow Me on Twitter @dmenningerVR and Connect with me on LinkedIn.
