‘Data as New Oil’ and the Role of Big Data Analytics 2.0

The concept of “data as new oil” has been recognized as one of the main driving forces behind innovation in various industries over the past several years. There are two main explanations for this.

First, as a result of the expanding use of new data sources such as sensors and social networks, the volume of data in companies has exploded. For example, a consumer packaged goods (CPG) company that produces a personal care product generates 5,000 data samples every 33 milliseconds, which works out to roughly 152,000 samples per second (and about 13 billion samples per day).
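Those throughput figures are easy to sanity-check with a quick back-of-the-envelope calculation:

```python
# Sanity check of the CPG sensor throughput figures quoted above:
# 5,000 samples every 33 milliseconds.

samples_per_burst = 5_000
burst_interval_s = 0.033  # 33 milliseconds

samples_per_second = samples_per_burst / burst_interval_s
samples_per_day = samples_per_second * 86_400  # seconds in a day

print(f"{samples_per_second:,.0f} samples/second")         # ~151,515
print(f"{samples_per_day / 1e9:.1f} billion samples/day")  # ~13.1
```

The rounded figures in the text (152,000 per second, 13 billion per day) match this arithmetic.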

Second, due to increased global competition, companies are forced to optimize their processes and improve business performance. In many domains, business process models have already been re-engineered many times and are close to fully optimized, so new ideas for business improvement must come from monitoring process execution (i.e., from analyzing the Big Data generated in those processes).

Understandably, the data processing and management community has promptly reacted to this demand. In the past five years, we have seen enormous progress in powerful methods and tools for Big Data processing (e.g., Apache Hadoop, Apache Mahout and Apache Storm). Indeed, companies can now process enormous volumes of information in near real time, and the trend is positive: further increases in data volume and velocity will become increasingly manageable as Big Data technologies continue to develop.
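Frameworks such as Hadoop popularized the map-reduce style of processing that underlies many of these tools. As a purely illustrative sketch (not tied to any of the projects named above), a single-machine map and reduce phase over log records looks like this:

```python
# Toy single-machine illustration of the map-reduce pattern:
# map emits (key, 1) pairs, reduce sums counts per key.
from collections import Counter
from itertools import chain

def map_phase(record):
    # Map: emit a (word, 1) pair for every word in one record.
    return [(word, 1) for word in record.split()]

def reduce_phase(pairs):
    # Reduce: sum the counts per key, as a Hadoop reducer would.
    counts = Counter()
    for word, n in pairs:
        counts[word] += n
    return counts

records = ["sensor a ok", "sensor b fail", "sensor a ok"]
pairs = chain.from_iterable(map_phase(r) for r in records)
print(reduce_phase(pairs))
```

The real frameworks add what the toy version omits: partitioning the data across machines, shuffling the intermediate pairs to reducers, and recovering from node failures.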

However, this is only the first step in exploiting “data as new oil.” The second step is more challenging for the development community: improving business performance based on Big Data processing. Indeed, this process is still more art than technology and requires highly skilled professionals (e.g., data scientists) to extract business value from the data in a highly manual, labor-intensive way. For some mining steps, even the proper analytics tools are missing. Automatic solutions for process improvement based on data analytics are still far from realization and use in realistic scenarios.

On the other hand, there is very strong demand from industry for the automation of Big Data analytics processes, leading to the concept of Big Data analytics 2.0. Industry needs Process Optimization as a Service as the next step to make existing Analytics as a Service useful for business users. Data scientists are a scarce and expensive resource, and business managers want to drive the business optimization process on their own. Clearly, the business does not need IT experts for making decisions about business optimization, only for ensuring that all the instrumentation in the business cockpit works properly (just as a pilot needs no IT expertise to control a plane, even in very turbulent weather conditions).

Process Optimization as a Service will empower business users to understand business problems and explore opportunities more proactively. They can define their own perspective on an issue and select opportunities, which will automatically deploy the corresponding analytics to realize them. Note that in a very dynamic, global business environment, business users need decision-making support that can cope with this dynamicity of the data: the analytics that are part of Process Optimization as a Service must be able to change automatically in response to changes in the data, so that the extracted business value remains valid and useful for business optimization.
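To make the idea of analytics that change automatically with the data concrete, here is a minimal, purely hypothetical sketch: an anomaly detector that re-fits its baseline when an incoming batch has drifted far from the statistics it was built on. All class names and thresholds are illustrative assumptions, not an existing product or API:

```python
import statistics

class AdaptiveThresholdAnalytics:
    """Illustrative sketch of self-adapting analytics: a simple
    drift monitor that re-fits its baseline when the incoming data
    shifts too far from the statistics the current model was built on.
    Names and thresholds are hypothetical, chosen for the example."""

    def __init__(self, baseline, drift_tolerance=3.0):
        self._refit(baseline)
        self.drift_tolerance = drift_tolerance

    def _refit(self, data):
        # Rebuild the model's statistics from the given data.
        self.mean = statistics.fmean(data)
        self.stdev = statistics.stdev(data)

    def score(self, batch):
        # If the whole batch has drifted beyond tolerance, adapt:
        # this is the "automatic change" in the analytics.
        batch_mean = statistics.fmean(batch)
        if abs(batch_mean - self.mean) > self.drift_tolerance * self.stdev:
            self._refit(batch)
        # Flag individual values far above the (possibly new) baseline.
        cutoff = self.mean + 2 * self.stdev
        return [x for x in batch if x > cutoff]

analytics = AdaptiveThresholdAnalytics([9, 10, 11, 10, 10])
analytics.score([100, 101, 99])   # regime change: baseline is re-fit
print(analytics.score([100, 105, 99]))  # only the true outlier remains
```

A fixed-threshold detector would have flagged every value in the new regime as anomalous; the adaptive version recognizes the shift, re-fits, and keeps its extracted "business value" (the anomaly list) meaningful.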

This self-adaptivity is the point at which Big Data analytics 2.0 extends the cockpit metaphor. Besides providing an interface that is friendly to the business user and visualizes all relevant analytics in the proper business context, Big Data analytics 2.0 will adapt the analytics to the dynamicity of the data as well as to changes in the business context. This will empower business users to make optimal decisions, consume fewer resources and detect problems before they escalate, even in the most turbulent business conditions.

The role of data scientists will be to support the optimization of the data analytics tools and processes, including their performance, robustness and data privacy (much as IT experts refine the software that calculates and visualizes readings in a real cockpit). Moreover, data scientists will be responsible for enabling self-adaptivity in the Big Data analytics 2.0 cockpit by defining change-management processes that sense and interpret relevant changes. In this way, we close the loop for continual enhancement of the optimization process.

About Dr. Nenad Stojanovic

Dr. Nenad Stojanovic is a project manager at the FZI Research Center for Information Technologies at the University of Karlsruhe. He received his MS degree in Computer Science from the Faculty of Electronic Engineering, University of Nis, Serbia and Montenegro, and a PhD from the University of Karlsruhe. He has participated in several R&D programs funded by the European Commission's IST Programme related to ontologies, information retrieval and knowledge management. Currently, he is a technical and scientific manager of the SAKE "Semantic-enabled Agile Knowledge-based E-government" project. He can be reached at nstojano@fzi.de.
