Data is the new oil. The metaphor captures the value of data, but it also hints at the difficulty of putting it to use.
Data can be very valuable. One example: measured by market capitalisation, Google has managed to topple former top dogs such as the oil company Exxon Mobil from their positions as the most valuable companies in the world. The value of its data, however, only became apparent once it had been tapped and refined. To see the importance of data for a company, we don't need to look at giants like Google. The range of companies that can benefit is broad, spanning global IT groups and retailers as well as regional producers. The manufacturing industry illustrates this well: machine downtime can be reduced using predictive maintenance, and sales volume forecasts help keep over- and under-production in check. According to the PAC study “Predictive Analytics in Manufacturing Industries”, one in three companies counts forecasting delivery dates among its most difficult activities. The number of possible applications is large, and the analysts at PAC likewise conclude that predictive analyses play a key role in making companies more efficient and innovative.
Dedicated processes are needed to extract systematic findings from data. First, the data required for a specific use case is extracted, then merged and analysed together with information from other data sources. The insight gained is then integrated into existing business processes in order to, for example, reduce costs, save time or increase the efficiency of manufacturing. On the road towards implementing such a process, companies need to ask questions from many different areas, some of them well outside their usual remit. Legal aspects play a role, such as which data may be collected and processed, and in what form. Infrastructural topics, like the choice of platform or tools, are obvious concerns, but employee skills and the integration into existing processes matter just as much.
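To make this extract-merge-analyse flow concrete, here is a minimal sketch in Python using pandas. The file names, columns and the final aggregation (machine_sensors.csv, erp_orders.csv, product_type and so on) are illustrative assumptions, not a real schema:

```python
import pandas as pd

# Extract: sensor readings from the machines, order context from an ERP export
# (file and column names are assumed purely for illustration)
sensors = pd.read_csv("machine_sensors.csv", parse_dates=["timestamp"])
orders = pd.read_csv("erp_orders.csv", parse_dates=["timestamp"])

# Merge: attach the most recent order context to each sensor reading,
# matched per machine along the time axis
merged = pd.merge_asof(
    sensors.sort_values("timestamp"),
    orders.sort_values("timestamp"),
    on="timestamp",
    by="machine_id",
)

# Analyse: average vibration per product type, a first hint at which
# products put the most strain on the machines
print(merged.groupby("product_type")["vibration"].mean())
```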
Which sensor data is needed to predict a machine stoppage? In what form and at what granularity must this data be available? And how can one tell whether a machine is “healthy” in the first place? Questions like these are usually the starting point for deriving a suitable model, which can then be used for predictive maintenance, for example.
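What the modelling step can look like once those questions are answered is sketched below, under loud assumptions: hourly aggregated sensor features in a file hourly_features.csv and a label column failure_within_24h derived from maintenance records. A tree ensemble is merely a common first baseline for tabular sensor data, not the method of choice for every plant:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Assumed schema: one row per machine and hour, already aggregated to the
# granularity the questions above settled on
data = pd.read_csv("hourly_features.csv")
feature_cols = ["vibration_rms", "vibration_peak", "temperature_mean"]

X_train, X_test, y_train, y_test = train_test_split(
    data[feature_cols], data["failure_within_24h"], test_size=0.2, random_state=0
)

# A random forest as a simple, robust baseline for tabular sensor features
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```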
The analysis itself is the step comparable to refining crude oil: usable products, in the form of findings, are derived from the raw material, in this case the collected data. The crux of the matter is identifying relationships and structures in the data that rest on a causal mechanism and thus enable a forecast. Imagine, for example, a vibration sensor whose readings reveal a failing mounting before the machine actually breaks down. The preferred tools here are statistical and mathematical methods and models, combined with solid programming skills.
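The vibration-sensor example can be illustrated with a deliberately simple statistical rule: flag time windows in which the rolling RMS of the signal drifts well above a baseline taken from a period known to be healthy. The file name, column names and reference period are assumptions:

```python
import pandas as pd

# Assumed input: a timestamped vibration signal from one sensor
signal = (
    pd.read_csv("vibration_log.csv", parse_dates=["timestamp"])
    .set_index("timestamp")
    .sort_index()["vibration"]
)

# Rolling RMS over one-minute windows smooths out individual spikes
rms = signal.pow(2).rolling("1min").mean().pow(0.5)

# Baseline from an assumed healthy reference period; anything more than
# three standard deviations above it is flagged as suspicious
baseline = rms.loc[:"2024-01-31"]
threshold = baseline.mean() + 3 * baseline.std()

alerts = rms[rms > threshold]
print(f"{len(alerts)} windows exceed the healthy baseline")
```

In practice the causal question remains open: a raised RMS correlates with a loose mounting, but only domain knowledge or controlled observation can confirm that the one actually causes the other.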
This combination is both a blessing and a curse for many companies. On the one hand, these tools provide the level of abstraction required to expose the information hidden in enormous quantities of data and make it visible to the human eye. On the other hand, this interdisciplinary playing field is precisely where the hurdles lie, as it is rare for all of these key skills to be found in a single person.