Self-service BI has gained rapid popularity in recent years thanks to tools such as Tableau and Qlikview, which have made it significantly easier for business-focused users to undertake complex analysis and data visualization tasks with minimal technical expertise. Yet true adoption of self-service BI in the business community remains elusive in the majority of complex analysis scenarios. Meaningful insight delivery requires not only a deep understanding of the data and the underlying business processes, but also the ability to perform ad-hoc analysis on the fly without getting bogged down in SQL programming or having to implement complex mathematical formulae for data transformation. We argue that without addressing the underlying data challenges, the true potential of self-service BI is unlikely to be realized, notwithstanding the technical sophistication of reporting tools.

Defining self-service BI

To better understand the challenges involved in its wider business-user adoption, we consider self-service BI as comprising two key architectural building blocks:

  • Reporting and Analysis layer - Tools used to perform analysis, build interactive dashboards, deliver across multiple devices, and so on
  • Data layer - Clean, transformed datasets assembled from multiple data sources and modelled to fit unique business needs

Numerous technology solutions for the reporting and analysis layer have sprung up in the market, and many major obstacles to wider business-user adoption of BI (e.g. complex upfront data modelling, OLAP cube design, and query writing) have been removed thanks to compelling feature sets that use drag-and-drop functionality to deliver powerful drill-down reporting and scenario-based analysis. However, an out-of-the-box, cookie-cutter solution for the data layer remains an elusive goal for the vendor community given the sheer diversity of business requirements. Business users, on the other hand, cannot easily address the data challenges themselves (despite rampant claims by many BI vendors about the 'drag-and-drop' capabilities of their tools) given the technical complexity involved. Consider just a few issues that highlight the challenges involved:

  • Getting automated data refresh from source - Tables used for analysis are usually aggregated from multiple data sources, each with its own change frequency. Source systems have their own data transformation rules, and multiple aggregation rules come into play before the final datasets take shape. Take the example of a dataset that captures customer lifetime value (LTV). Orders are returned, free trials are cancelled, and banks issue chargebacks; all these factors must be taken into account before calculating the final LTV at any given point in time. This is nearly impossible to implement in even the most advanced BI tools without extensive query writing
  • Data modelling challenges - Exploratory analysis requirements are rarely static. A table designed with a set of columns for one report may soon become obsolete as analysis requirements change. Without an appropriately designed data model, growth in data volumes can significantly degrade query processing times unless hardware capacity is substantially upgraded. Within a Digital Marketing context, analysis of visitor-level clickstream data for segment insights is virtually impossible without a well-defined underlying data model, and analysing data held in JSON nodes whose structure changes frequently remains a major challenge
  • Complexity of data transformations - Think calculated metrics in Tableau. While it may be relatively easy to write Tableau constructs for basic bespoke metrics, the approach becomes untenable (especially for business users) when complex data transformations and lookups are involved. A properly designed ETL backbone that generates appropriately formatted datasets remains the only viable option for addressing this constraint
  • IT (usually) does not understand the business - Dealing with marketing datasets requires significant understanding of both technology and the marketing domain in general. Building a database for media de-duplication or attribution studies would typically require strong IT skills combined with a deep understanding of the principles of attribution, its commercial imperative, the kinds of data elements required, the function of each, and so on. Without these dual skills, the time and resource investment required of business users in educating IT staff would likely outweigh the benefits of obtaining clean data for self-service BI
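To make the LTV example above concrete, here is a minimal pandas sketch of the kind of multi-source netting and aggregation that must happen in the data layer before a BI tool ever sees the dataset. The table layout, column names, and netting rules here are illustrative assumptions, not a prescribed schema:

```python
import pandas as pd

# Hypothetical source extracts: in practice each of these would arrive
# from a different system on its own refresh schedule.
orders = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 3],
    "order_id":    [101, 102, 103, 104, 105],
    "amount":      [50.0, 30.0, 80.0, 20.0, 60.0],
})
returns = pd.DataFrame({"order_id": [102], "refund": [30.0]})
chargebacks = pd.DataFrame({"order_id": [104], "chargeback": [20.0]})

# Net out refunds and chargebacks before aggregating: an order only
# contributes to LTV what the customer actually kept.
ltv = (
    orders
    .merge(returns, on="order_id", how="left")
    .merge(chargebacks, on="order_id", how="left")
    .fillna({"refund": 0.0, "chargeback": 0.0})
    .assign(net=lambda d: d["amount"] - d["refund"] - d["chargeback"])
    .groupby("customer_id", as_index=False)["net"]
    .sum()
    .rename(columns={"net": "ltv"})
)
print(ltv)
```

Even this toy version involves joins across three sources and a business rule (what counts as "net" revenue) that a drag-and-drop front end cannot infer on its own; a production pipeline would also have to handle late-arriving returns and differing refresh frequencies.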

The need, then, is to improve and expedite data delivery mechanisms in terms of both cycle time and fitness for purpose, so that high-quality datasets can be made readily available to business users. We work with self-service Marketers to address exactly this issue. Our Consultants and Developers bring deep data integration and modelling expertise within the Digital Marketing domain, and with extensive experience of major BI tools including Tableau, Qlikview, and Microstrategy, we can help rapidly build and deploy datasets for use with the self-service BI tool of your choice. With pay-as-you-go pricing, no long-term contracts, and typical delivery cycle times of under four weeks, the business case for using our custom-built marketing datafeeds could hardly be more compelling.

About the article

Informational article outlining the implementation components of self-service BI

Target audience

Tech-savvy but primarily business-focused users and analysts in Marketing departments looking for easier and more automated ways to deliver insights

About the Client

Specialist Marketing Solutions Consultancy looking to promote its marketing data integration service offering to self-service Marketers.