
There’s a lot going on in search technology, still or again, depending on your perspective. We’ve analyzed search in a business context periodically over the years, and I want to offer some further analysis of the business side of search in light of announcements over the last two months from Endeca, our analysis of IBM Cognos and MarkLogic, and my analysis of QlikView, all of which include significant enhancements to search capabilities in their most recent product upgrades.

The Cognos 10 and QlikView search capabilities are somewhat similar and focus on searching BI-related, mostly structured data. MarkLogic tackles unstructured data pretty much to the exclusion of structured BI data. Endeca Latitude makes a valiant attempt to bridge the gap between the two, but it still favors structured data over unstructured.

Our research validates the importance of search. In our business intelligence and performance management benchmark research, search is one of the top three capabilities organizations have deployed or are deploying. Participating organizations ranked search as the most important end-user capability in our information applications benchmark research. Clearly end users want search capabilities today.

As well as fulfilling a basic need in working with information, search is appealing because of its simplicity. Google has made nearly every Web user aware of search. It promises to bring together disparate information and, at least potentially, to combine structured and unstructured data. It can cut across information from different systems and from departments with entirely different structures, such as customer, employee and vendor information.

Another element of simplicity in the most successful search technologies is that users do not have to create descriptive information about the data (that is, metadata) or impose any particular structure on the disparate information they wish to access. Metadata and structure may produce more accurate searches or speed the search process, but the search vendor takes care of that for the user.
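To make the appeal concrete, here is a minimal, hypothetical sketch in Python of how an engine can index content without any user-supplied metadata or schema: the structure is derived from the content itself. Real enterprise search products add tokenization, ranking, security and much more, but the principle is the same.

```python
from collections import defaultdict

# Inverted index: token -> set of item identifiers. No schema or
# user-supplied metadata is required; the index is derived from content.
index = defaultdict(set)

def add(item_id, text):
    """Index any text the same way, whether it came from a PDF,
    a slide deck or a flattened database row."""
    for token in text.lower().split():
        index[token].add(item_id)

# Structured and unstructured content land in the same index.
# (Identifiers and text are invented for illustration.)
add("crm-row-1042", "Acme Corp customer renewal 2010 west region")
add("slide-deck-7", "Q3 pipeline review: Acme renewal risk")

def search(term):
    return sorted(index.get(term.lower(), set()))

print(search("renewal"))  # ['crm-row-1042', 'slide-deck-7']
```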

Several challenges remain before enterprise search becomes universal. First, the divide between structured and unstructured data must be conquered. Users shouldn’t have to think about whether the data lives in a PowerPoint presentation, a PDF document or an analytic dashboard. The second key challenge is the divide between internal and external data. Increasingly, external data includes not only unstructured Internet content but also cloud-based applications with structured data, such as that addressed by Exalead. Lastly, as these challenges are overcome, users will want to apply search agents across this universe of combined data. I expect we’ll hear more from companies like Connotate, Fetch Technologies and Kapow Software. Today these technologies operate independently of the BI landscape, but the more they integrate with BI architectures, the more valuable they will become. As BI vendors search for ways to broaden their product lines, they ought to look at what these vendors have to offer.

In any event, I expect search to continue to rise in popularity and in its presence in the enterprise software market. It is too compelling to end users to be ignored. Challenges remain, but we are seeing progress toward overcoming them.

Let me know your thoughts or come and collaborate with me on Facebook, LinkedIn and Twitter.

Regards,
David Menninger – VP & Research Director

Tableau Software officially released Version 6 of its product this week. Tableau approaches business intelligence from the end user’s perspective, focusing primarily on delivering tools that allow people to easily interact with data and visualize it. With this release, Tableau has advanced its in-memory processing capabilities significantly. Fundamentally, Tableau 6 shifts from the intelligent caching scheme used in prior versions to a columnar, in-memory data architecture in order to increase performance and scalability.

Tableau provides an interesting twist in its implementation of in-memory capabilities, combining in-memory data with data stored on disk. One of the big knocks against in-memory architectures has been the limit imposed by the physical memory of the machine. Some products were known to crash when data exceeded available memory; others didn’t crash but slowed so dramatically past that point that they almost appeared to have.

The advent of 64-bit operating systems dramatically raised the theoretical memory limits that constrained 32-bit systems. Servers can now be configured with significant amounts of memory at reasonable prices, but putting an entire warehouse or large-scale data set in memory on a single machine is still a stretch for most organizations. With Tableau 6, a portion of the data can be loaded into memory and the remainder left on disk. Coupled with the feature that links to data in an RDBMS, this provides considerable flexibility: data can be loaded into memory, kept on disk or linked from one of many supported databases. As the user interacts with the data, it is retrieved from the appropriate location. Tableau 6 also assists in managing and optimizing the dividing line between in-memory and on-disk data based on usage patterns.
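To illustrate the general pattern, here is a hypothetical sketch in Python: designated “hot” columns are held in memory while “cold” columns are fetched from disk on demand. This is not Tableau’s implementation, just a simple illustration of the hybrid architecture; the table and column names are invented.

```python
import sqlite3

# Hypothetical setup: a small on-disk table standing in for a warehouse.
conn = sqlite3.connect("sales.db")
conn.execute("CREATE TABLE IF NOT EXISTS sales (region TEXT, product TEXT, amount REAL)")
conn.execute("DELETE FROM sales")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                 [("West", "Widget", 100.0), ("East", "Gadget", 250.0)])
conn.commit()

class HybridColumnStore:
    """Keep designated hot columns in memory (a stand-in for a compressed
    columnar format); serve cold columns from disk on demand."""
    def __init__(self, conn, table, hot_columns):
        self.conn, self.table = conn, table
        self.hot = {c: self._fetch(c) for c in hot_columns}

    def _fetch(self, col):
        return [r[0] for r in self.conn.execute(
            f"SELECT {col} FROM {self.table}")]

    def column(self, name):
        if name in self.hot:      # hot path: already in memory
            return self.hot[name]
        return self._fetch(name)  # cold path: read from disk on demand

store = HybridColumnStore(conn, "sales", hot_columns=["region", "amount"])
print(store.column("amount"))   # served from memory
print(store.column("product"))  # fetched from disk
```

A real implementation would compress the in-memory columns and move the hot/cold boundary automatically based on usage, which is what Tableau 6’s optimization assistance is aimed at.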

However, one place where this new architecture comes up short is the data refresh process. In the current release, users must manually request a refresh of the data held in memory; ideally there would be an optional, automated way to keep the in-memory data up to date. The other thing I would like to see in Tableau 6 and other in-memory products is better read/write facilities. Although this version includes improved “table calcs,” which can display some derived data and support limited what-if analysis, there is no write-back capability that would let you use Tableau as a planning tool and record the changes you explore.

Tableau 6 includes a number of other features beyond the in-memory capabilities. It now supports a form of data federation in which data from multiple sources can be combined in a single analysis. The data can be joined on the fly in the Tableau server. Tableau refers to this as “data blending.” Users can also create hierarchies on the fly simply by dragging and dropping dimensions. And there are some new interactive graphing features such as dual-axis graphs with different levels of detail on each axis and the ability to exclude items from one axis but not the other, which can be helpful to correct for outliers such as the impact of one big sale on profitability or average deal size.
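Conceptually, blending resembles a lightweight join performed at query time on a shared field, without first consolidating the sources in a warehouse. Here is a hypothetical sketch in plain Python (Tableau performs this inside its server; the data and field names are invented for illustration):

```python
# Two sources that were never consolidated in a warehouse.
crm = [  # e.g., rows from a database
    {"customer": "Acme", "region": "West", "sales": 120000},
    {"customer": "Globex", "region": "East", "sales": 95000},
]
quotas = [  # e.g., rows from a spreadsheet
    {"region": "West", "quota": 500000},
    {"region": "East", "quota": 350000},
]

# Build a lookup on the shared field, then join on the fly at query time.
quota_by_region = {row["region"]: row["quota"] for row in quotas}
blended = [dict(row, quota=quota_by_region.get(row["region"])) for row in crm]

for row in blended:
    print(row)
```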

This release also supports several new data sources, including the Open Data Protocol (OData), the DataMarket section of Windows Azure Marketplace and Aster Data, which my colleague recently assessed.

Version 6 also includes some IT-oriented enhancements. As Tableau has grown, its software has been deployed in ever-larger installations, which puts a premium on its administrative facilities. The new release improves management of large numbers of users, with grouping, privilege assignment and finer-grained selection and edit options. It also includes a new project view of the objects created and managed within Tableau. All of this helps move the product toward departmental- and enterprise-class analytics.

Overall, the release includes features that should be well received by both end users and IT. It shares the end-user analytics category with QlikView 10, which I recently assessed, and Tibco Spotfire. I’ll be eager to see whether the company can push the in-memory capabilities even further in future releases. It is clear that Tableau brings another viable option to the category of analytics for analysts, with its new in-memory computing and blending of data across sources.

Let me know your thoughts or come and collaborate with me on Facebook, LinkedIn and Twitter.

Regards,

David Menninger – VP & Research Director
