Forbes says there are 2.5 quintillion bytes of data created each day. We use this data to make decisions, improve systems and processes, and gauge success on projects, initiatives, and goals. This only works, however, if the data we are using is 'good': accurate and clean. How do you determine whether your data is 'good'? And what steps do you take to cleanse 'bad' data? This month, we sat down with Anthony Honaker, Cohesive's Vice President of Product Strategy and Development, and the mind behind our Propel Performance Management solution, to talk about all things data.
What role does data quality play in EAM today as it relates to maintenance and reliability?
Data Quality is the most common inhibitor to fully leveraging the data captured in a CMMS (computerized maintenance management system) or EAM (enterprise asset management) system to improve overall maintenance and reliability processes. Most customers readily admit that they do not have confidence in the data in their system, and as a result have little to no trust in any KPIs they may have developed. In fact, poor Data Quality, or the belief that Data Quality is poor, is often the reason why KPIs haven't been developed at all.
How does an organization ensure it has ‘good’ data? And how does it go about fixing its data if it doesn’t?
Bad data comes from processes that are not carried out according to expectations or standards. Data Quality is a measure, and performing that measurement is imperative for fixing bad data. Creating Data Quality measures forces an organization to define what 'good' looks like, based on following standardized processes, and to define the purpose the data serves. Data Quality, in essence, is a measurement of the data's ability to meet its purpose (process automation, regulatory requirements, historical analysis, and so on). Once it is measured, the path to addressing shortcomings becomes obvious (not necessarily easy, but obvious).
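To make the idea of measuring Data Quality against a purpose concrete, here is a minimal sketch in Python; the field names, rules, and sample records are hypothetical, and a real measure would be defined against your own standards and the purpose your data must serve.

```python
# Illustrative sketch only: field names, rules, and records are hypothetical.
# A Data Quality measure scores records against the purpose the data must serve;
# here the purpose is reliability analysis, which needs an asset, a failure code,
# and actual labor hours on every closed work order.

REQUIRED_FIELDS = ["asset_id", "failure_code", "actual_hours"]

def is_fit_for_purpose(work_order: dict) -> bool:
    """A record meets its purpose if every required field is populated."""
    return all(work_order.get(field) not in (None, "") for field in REQUIRED_FIELDS)

def data_quality_score(work_orders: list[dict]) -> float:
    """Percentage of records able to meet their purpose: the Data Quality measure."""
    if not work_orders:
        return 0.0
    fit = sum(1 for wo in work_orders if is_fit_for_purpose(wo))
    return 100.0 * fit / len(work_orders)

# Example: two of three closed work orders are usable for reliability analysis.
sample = [
    {"asset_id": "PUMP-101", "failure_code": "BRG-FAIL", "actual_hours": 4.5},
    {"asset_id": "PUMP-102", "failure_code": "", "actual_hours": 2.0},  # missing failure code
    {"asset_id": "FAN-201", "failure_code": "BELT-WEAR", "actual_hours": 1.0},
]
print(f"Data Quality (reliability analysis): {data_quality_score(sample):.0f}%")  # 67%
```

Tracking a score like this over time, per purpose, is what turns "Data Quality is poor" from a belief into a measurement that points at specific shortcomings.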
What trends do you see in analytics, data, and KPIs in the maintenance, reliability, and EAM space?
The key trend, or market buzz as it were, is to leverage the data being gathered as assets become more connected and “aware,” i.e., the Internet of Things (IoT). There are many new products and offerings in the market that focus on Asset Performance Management, attempting to optimize the value of assets by using this “wave” of data coming from IoT. But we are also seeing a parallel trend of organizations returning to basics and focusing on fundamental analytics: measuring the basic vital signs of an organization’s critical processes, like Asset Management and Reliability, Work Management, and Supply Chain. While there is unquestioned potential in the IoT opportunity, there’s still a significant amount of value to be realized by organizations simply improving what they are already doing.
You spoke at SMRP about “Too Much Data and Too Little Information.” Can you elaborate on the inspiration behind this topic and what listeners gained from it? Did anyone raise a challenge or ask an interesting question that we can share?
A key tenet of the discussion was this focus on fundamentals. So many organizations are tempted to become data-driven and embark on journeys toward “big data” with newly employed “data scientists” when they haven’t proven that they can appropriately capitalize on their existing “small data.” The presentation walked through a straightforward approach to identifying the key value drivers for an organization’s Asset Management function, measuring those value drivers, and using the results to improve operations. Audience engagement was high, and the common theme of the questions was “How do we start?” and how to address the barrier of poor Data Quality when measuring existing processes.
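As one example of the kind of fundamental, small-data measurement described above, a basic vital sign such as PM schedule compliance can be computed directly from work order history already sitting in the CMMS. The sketch below is illustrative only; the record layout and dates are hypothetical.

```python
# Illustrative sketch only: a "back to basics" vital sign computed from existing small data.
# PM schedule compliance = PM work orders completed by their target date / PM work orders due.
from datetime import date

# Hypothetical extract of preventive maintenance work orders from a CMMS.
pm_work_orders = [
    {"wo": "WO-1001", "target_date": date(2024, 3, 1), "completed": date(2024, 2, 28)},
    {"wo": "WO-1002", "target_date": date(2024, 3, 5), "completed": date(2024, 3, 9)},   # late
    {"wo": "WO-1003", "target_date": date(2024, 3, 7), "completed": None},               # never done
]

def pm_schedule_compliance(work_orders: list[dict]) -> float:
    """Percentage of due PMs completed on or before their target date."""
    if not work_orders:
        return 0.0
    on_time = sum(
        1 for wo in work_orders
        if wo["completed"] is not None and wo["completed"] <= wo["target_date"]
    )
    return 100.0 * on_time / len(work_orders)

print(f"PM schedule compliance: {pm_schedule_compliance(pm_work_orders):.0f}%")  # 33%
```

Starting with one or two measures like this, on data the organization already has, is often the most practical answer to “How do we start?”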
Are you concerned about your organization’s data quality? We help organizations turn trustworthy data into actionable information that propels continuous improvement. Learn more or contact us.