Executive briefing on the failures and challenges of self-service business intelligence technologies
The allure of self-service business intelligence is irresistible: faster response times, better information, and increased adoption driven by the intuitive nature of the underlying software. In theory, this technology enables less technical personnel to access and interact with data, driving analytics adoption in far greater numbers than ever before.
Yet surveys and statistics tell a very different story about actual use and efficacy: the majority of organizations admit to “struggling with self-service BI,” and user adoption rates remain unchanged when measured against traditional BI technologies.
In this Executive Briefing on the Failure of Self-Service BI, we review:
- The three critical shortcomings of self-service BI technologies
- The role of data preparation in a successful self-service deployment
- The effect of big data on self-service initiatives
- How successful organizations have overcome these challenges
Process: Analytics success in the “Age of Agile”
The notion that there must be process maturation accompanying any self-service initiative is often met with skepticism. As some point out, the driver of these initiatives is moving away from the legacy model of centralized, process-driven, shared-service teams. From that perspective, the technology alone enables change, as self-service tools move analytics into an embedded business function that progresses from requirements definition through production output without leaving the department.
Yet within that department we typically see a linear, waterfall approach to addressing each business requirement. The results are slightly more responsive, yet not necessarily more effective, than the legacy approach supported by centralized IT, and each project is generally executed reactively, based on whichever business issue sits at the top of the priority list. There is no larger context for where and how these limited resources are allocated.
The most successful departments and organizations today are those that view tool rollout in a much larger context. While the majority of self-service technologies can add significant value to any team, no tool will succeed without effective prioritization of resources, processes for development, and a means to engage key stakeholders. Without these, organizations end up outsourcing content development to third-party consultancies or internal shared-service teams, which defeats the purpose of a self-service tool investment.
Preparation: Don’t keep analyzing the same old data
Much attention is paid to the large amounts of multi-structured data in most organizations, and the potential value they bring to ad hoc analysis. Yet tremendous work is required to prepare these data assets for analysis. As The Data Warehousing Institute (TDWI) noted, “self-service BI and data discovery tools can deliver much better visualization and data exploration, but the sources often remain limited to spreadsheets and siloed application-specific databases.”[v]
According to Gartner research, “data preparation is one of [the] most difficult and time-consuming challenges facing business users of BI and data discovery tools”[vi], heavily limiting the potential of self-service. Applications like Microsoft Power BI and Qlik Sense have begun to incorporate data preparation capabilities, but they target limited-size, highly structured data sets.
Preparing large, multi-structured data sets for analysis in a Hadoop environment requires a highly technical skillset gained through in-depth experience, one not realistically acquired by the average business user. For the potential of self-service BI to be realized, business users must be empowered with back-end preparation capabilities to support their front-end analysis.
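To make the preparation burden concrete, the minimal sketch below (plain Python, with entirely hypothetical event data) illustrates the kind of schema-on-read flattening that multi-structured sources require before a spreadsheet-style tool can analyze them. Field names and record shapes here are illustrative assumptions, not drawn from any specific platform; self-service preparation tools automate exactly this class of work.

```python
import json

# Hypothetical example: raw clickstream events arrive as nested,
# multi-structured JSON -- the kind of source a spreadsheet-centric
# BI tool cannot consume directly.
raw_events = [
    '{"user": {"id": 1, "region": "NA"}, "action": "view", "meta": {"ms": 120}}',
    '{"user": {"id": 2, "region": "EU"}, "action": "click"}',
    '{"user": {"id": 1, "region": "NA"}, "action": "click", "meta": {"ms": 85}}',
]

def flatten(record: dict, parent: str = "") -> dict:
    """Recursively flatten nested dicts into dotted column names."""
    flat = {}
    for key, value in record.items():
        name = f"{parent}.{key}" if parent else key
        if isinstance(value, dict):
            flat.update(flatten(value, name))
        else:
            flat[name] = value
    return flat

rows = [flatten(json.loads(line)) for line in raw_events]

# Build the union of all columns and fill missing values -- the
# schema reconciliation step that varies record-by-record in
# multi-structured data.
columns = sorted({col for row in rows for col in row})
table = [[row.get(col) for col in columns] for row in rows]
```

Even this toy case shows why the work does not scale by hand: every new source brings its own nesting, optional fields, and type quirks, which is precisely the gap that back-end preparation capabilities must close for business users.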
Power: Desktop tool architectures with a Big Data problem
The architectures of most self-service BI tools target business-focused analysts examining spreadsheet data, or perhaps connecting to pre-existing data marts and warehouses. Yet the reality of a “Big Data” world is that data sets are large and dynamic: they bring together many disparate sources in the ad hoc manner typically handled with spreadsheets, but at volumes that demand the processing power of a robust data warehouse.
The spreadsheet-centric functionality of self-service tools cannot scale to large data sets, and the skillset required to stand up large data warehouses and Hadoop deployments is not typically available. To analyze all of an organization’s relevant data, even highly capable BI users remain beholden to the technical nature of big-data platforms, forfeiting the independence and responsiveness that provide them the most value.
The good news is that cloud-based Hadoop deployments (such as Microsoft Azure HDInsight and Amazon Web Services Elastic MapReduce) can provide a scalable infrastructure to store and process large data sets without the setup and support burden of on-premises solutions. Furthermore, with the emergence of self-service data preparation technologies, there is no longer a need to employ an army of MapReduce coders for implementation. Today, embedded business analysts typically have the skills required for implementation and administration. With that, Hadoop has become a faster, lower-cost, more scalable alternative to traditional data warehousing.
The Path Forward
While these challenges have limited the adoption of self-service BI tools, the future is bright. As Gartner writes, “With the rise of data discovery, access to multistructured data, data preparation tools and smart capabilities will further democratize access to analytics and stress the need for governance.” Much of this is due to an emerging class of technology that fills the functional gaps discussed above. A prime example is Datameer, which supports business-user analysis of large, multi-structured datasets in a Hadoop environment. With an intuitive GUI combining spreadsheet- and wizard-style work environments, it empowers less technical users to perform the integration, preparation, and statistical analysis activities that previously required centralized IT support, making big data assets accessible to self-service analysis.
Also critical to success is working with an experienced consultancy that provides process as well as functional assistance in the early weeks of your initiative. That engagement should begin by identifying the highest-value areas for resource investment and the fastest “wins” from a business standpoint, followed by an agile development methodology that capitalizes on the responsive nature of the technology. This promotes a higher-value, more widely adopted outcome rather than simply mirroring the legacy approach.
While most organizations have not yet realized the potential benefits of their self-service BI investments, the maturing state of supporting areas provides opportunity for improvement. By refocusing attention and investment on Process, Preparation, and Power, the weaknesses of these platforms can be mitigated and their true potential realized.
About iSoftStone North America:
iSoftStone North America is a technology consulting firm partnering with its clients to create innovative solutions designed for the “self-service era” of business analytics. By combining business and technical consulting with complementary services from an in-house digital + creative agency, these solutions see far greater adoption and much greater performance impact than traditional, siloed approaches.
To learn more about iSoftStone North America, or to speak with the author of this report, please visit www.isoftstoneinc.com.
[i] 2015 State of Self-Service BI Report, Logi Analytics
[ii] 2015 State of Self-Service BI Report, Logi Analytics
[iii] 2014 Survey, BI Scorecard
[iv] BI Leadership Forum, Eckerson Group
[v] Preparing Data for the Self-Service Analytics Experience, TDWI
[vi] Predicts 2015, Gartner