
MicroStrategy, one of the largest independent vendors of business intelligence (BI) software, recently held its annual user conference, which I attended with some of my colleagues and more than 2,000 other attendees. At this year’s event, the company emphasized four key themes: mobility, cloud computing, big data and social media. In this post, I’ll assess what MicroStrategy is doing in each of the first three areas. My colleague, Mark Smith, covered MicroStrategy’s social intelligence efforts in his blog. I’ll also share some opinions on what might be missing from the company’s vision.

Michael Saylor, MicroStrategy’s CEO, is enamored with Apple and its mobile technology, which sure seems to be a good bet. Coincidentally, on the same day Saylor delivered his keynote speech, Apple announced record revenues based on iPhone and iPad sales. MicroStrategy made an early commitment to mobile technologies and Apple’s products. As a result, it has a relatively mature set of native mobile products on the Apple platform; now it is bringing those capabilities to Android devices via the Android Marketplace. In addition to Android platform support, the current release, 9.2.1m, adds new mobile features including offline capabilities and user interface enhancements. As a testament to the maturity of MicroStrategy’s mobile capabilities, several customers I spoke with were deploying mobile applications first and then extending those applications to Web and desktop platforms.

At last year’s MicroStrategy World, the company was just getting familiar with the cloud. Since then it has delivered two types of cloud capabilities: Cloud Personal for individual use and a cloud version of its full platform, including database and data integration capabilities. Support for Teradata in the enterprise cloud offering extends previously announced support for IBM Netezza and ParAccel. Data integration capabilities are provided via a partnership with Informatica. At the recent event it also introduced a third version, not yet available: Cloud Professional, which extends Cloud Personal with multiuser capabilities including user management, security, personalization and notification of dashboard updates. In addition, Cloud Personal has added the ability to import data directly from Salesforce.com applications.

It’s still early days for MicroStrategy in the cloud, as it is for most vendors, but the company appears to be “all in.” It has committed $100 million to build out the cloud infrastructure and offers free capabilities to individual users via Cloud Personal. Perhaps most significant are the software partnerships to provide database and data integration capabilities – the first revenue-sharing partnerships for MicroStrategy. In the past it delivered only capabilities developed internally, making no acquisitions and forming no partnerships. This willingness to share revenue demonstrates how important the cloud is to MicroStrategy.

The company chose to be practical rather than purist in its approach. The cloud implementation is based on MicroStrategy’s existing product architecture, which is not multitenant. In other words, each enterprise runs in a separate instance of the software rather than sharing a single instance. This approach has no immediate or obvious downside for customers. However, in the long run, it could prove to be more expensive and labor-intensive for MicroStrategy. Company officials said that over time the company will migrate to a multitenant architecture to overcome these issues.
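To make the single-tenant versus multitenant distinction concrete, here is a minimal sketch in Python of the two deployment models. All names are hypothetical; this illustrates the general pattern only, not MicroStrategy’s implementation.

```python
# Illustrative only: hypothetical names, not MicroStrategy's code.
from dataclasses import dataclass

@dataclass
class Instance:
    """A dedicated software stack provisioned for one customer."""
    customer: str
    db_url: str

# Single-tenant model (the current cloud offering described above):
# every customer gets its own full instance and database.
single_tenant = {
    "acme":   Instance("acme", "db://acme-host/warehouse"),
    "globex": Instance("globex", "db://globex-host/warehouse"),
}

# Multitenant model (the stated long-term direction): one shared instance;
# each request carries a tenant identifier and data is isolated logically,
# for example by per-tenant schema.
SHARED_DB_URL = "db://shared-host/warehouse"

def run_report(tenant_id: str, report: str) -> str:
    """Build a query scoped to one tenant inside the shared instance."""
    return f"SELECT * FROM {tenant_id}.{report}"

print(run_report("acme", "daily_sales"))
```

The operational trade-off follows directly: dedicated instances are simple to isolate, but each one must be provisioned, patched and monitored separately, which is why per-customer instances tend to cost the vendor more over time.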

Another key theme, big data, received less attention. Certainly, MicroStrategy executives and presenters mentioned big data, but that is not new to the company. MicroStrategy built its business around large data sets, often from the retail industry, before the concept of “big data” existed. As a result, its core BI product has been architected to deal with big data, as evidenced by its longstanding relationship with Teradata and some of the other databases it supports, including Greenplum, Netezza, ParAccel and Vertica. In addition, MicroStrategy and Cloudera recently announced a partnership that provides connectivity to Hadoop data sources. As our benchmark research shows, organizations use multiple technologies to tackle big-data challenges, so MicroStrategy customers should welcome this partnership.
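For readers who have not yet worked with Hadoop, the sketch below shows what SQL-style access to Hadoop data commonly looks like, using the open-source PyHive client against a Hive endpoint. The host, port and table names are hypothetical, and this is a generic illustration of Hadoop connectivity rather than a depiction of the MicroStrategy-Cloudera connector itself.

```python
# Generic illustration of SQL-over-Hadoop access via Hive (hypothetical host and table).
from pyhive import hive

# Connect to a Hive endpoint fronting the Hadoop cluster.
conn = hive.connect(host="hadoop-gateway.example.com", port=10000, database="default")
cursor = conn.cursor()

# Hive compiles this SQL-like query into jobs that run over data stored in HDFS.
cursor.execute("SELECT store_id, SUM(amount) AS total FROM sales GROUP BY store_id")

for store_id, total in cursor.fetchall():
    print(store_id, total)

cursor.close()
conn.close()
```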

I see a couple of holes in MicroStrategy’s coverage. Mark Smith discusses how MicroStrategy is tackling social media as a data source. However, the company has not embraced social media in the context of collaborative BI. In a recent blog post, I noted that Ventana Research sees collaboration as one of five key influences on business intelligence, and there is plenty of movement here: enterprises have started to adopt collaborative BI processes, other BI software vendors have begun to support collaborative BI in their products, and we will soon be researching market requirements in a benchmark research project. Another area where MicroStrategy lags some of its competitors is advanced analytics. The company has some support for predictive analytics but limited capabilities for planning and what-if analysis.

Despite these areas where MicroStrategy can make additional investments, its annual event demonstrated the company’s determination to embrace new technologies and expand the horizons of business intelligence. It was well attended by customers and supported by a range of partners. If you are struggling with big data, mobile or cloud challenges, you may want to consider MicroStrategy. If so, you can try it easily via its cloud offerings.

Regards,

David Menninger – VP & Research Director

We recently published the results of our benchmark research on Big Data to complement the previously published benchmark research on Hadoop and Information Management. Ventana Research undertook this research to acquire real-world information about levels of maturity, trends and best practices in organizations’ use of the large-scale data management systems now commonly called big data. The results are illuminating.

Volume, velocity and variety of data (the so-called three V’s) are often cited as characteristics of big data. Our research offers insight into each of these three categories. Regarding volume, over half the participating organizations process more than 10 terabytes of data, and 10% process more than 1 petabyte. In terms of velocity, 30% are producing more than 100 gigabytes of data per day. In terms of variety, the most common types of big data are structured, containing information about customers and transactions; however, one-third (31%) of participants are working with large amounts of unstructured data. Weighing the three V’s against one another, nine out of 10 participants rate scalability and performance as their most important evaluation criteria, suggesting that volume and velocity are greater concerns than variety.

This research shows that big data is not a single thing with one uniform set of requirements. Hadoop, a well-publicized technology for dealing with big data, gets a lot of attention (including from me), but other technologies are also being used to store and analyze big data. The research data shows an environment that is still evolving. The majority of organizations still use relational databases, but not exclusively: more than 90 percent of participants using relational databases also use at least one other technology for some of their big-data operations. One-third (34%) are using data warehouse appliances, which typically combine relational database technology with massively parallel processing. About as many (33%) are using in-memory databases. Each of these alternatives is more widely used than Hadoop. In addition, 15% use specialized databases such as columnar databases, and one-quarter (26%) are using other technologies.

While these technologies enable organizations to do things they haven’t done before, there is no technological silver bullet that will solve all big-data challenges. Organizations struggle with people and process issues as well. In fact, our research shows that the most troublesome issues are not technical but people-related: staffing and training. Big data itself and these new approaches to processing it require additional resources and specialized skills. Hence we see high levels of interest in big-data industry events such as Hadoop World and the Strata Conference. Recognizing the dearth of trained resources here, some academic institutions have launched degree programs in analyzing big data, and IBM has started BigData University.

Research participants cited real-time capabilities and integration as their key technical challenges. The velocity with which they generate data and the fact that over half the organizations analyze their data more than once a day are forcing them to seek real-time capabilities; the pace of business today demands that they extract all useful information as soon as possible to support rapid decision-making. With respect to integration, less than half of participants are satisfied with the integration of third-party products, and almost two-thirds cite lack of integration as an obstacle to analyzing big data. Three-quarters have integrated query and reporting with their big-data systems, but more advanced analytics such as data mining, visualization and what-if analysis are seldom available as integrated capabilities. Responding to such comments, vendors have been racing to integrate their business intelligence and information management products with big-data sources. As you consider big-data projects and technologies, make sure that the vendors you select can handle the big-data sources you must use.

Looking ahead, we expect more changes in this evolving landscape. In some ways big-data challenges, and the presence of Hadoop in particular, have paved the way for technologies other than relational databases. NoSQL alternatives such as Cassandra, MongoDB and Couchbase are gaining notice in enterprise IT organizations following the success of Hadoop. In-memory databases, once considered a niche technology, are now central to SAP’s strategy: the company positions HANA, its in-memory database, as its primary big-data analytical platform. There are differing opinions about whether these various big-data technologies will converge or diverge. We can look to the past for some indication of where the market might go: over the years a variety of alternatives to relational databases have emerged, including OLAP, data warehouse appliances and columnar databases, and each was eventually absorbed into relational database products.

We also see signs of the major relational vendors embracing big-data technologies. IBM acquired Netezza for its massively parallel data warehouse appliance technology and has also invested heavily in Hadoop. Oracle introduced its own line of data warehouse appliances and recently brought to market a big-data appliance that includes Hadoop and NoSQL technologies. Microsoft has invested in massively parallel processing and Hadoop. We also see independent vendors such as Hadapt combining relational database technology with Hadoop. The past is not necessarily an indication of the future, but our research and recent market dynamics suggest it may be premature to write off the relational database vendors as out of touch.

In light of this information, I recommend that your organization explore various alternatives for solving specific challenges. At a minimum you should be aware of the alternatives so when the need arises you will know what is available. Use our big-data research to guide your use of these technologies and to help avoid some of the obstacles they present so you can be more successful in applying big data to business decisions.

Regards,

David Menninger – VP & Research Director
