A scientific paper extracted from the Coala Project Publications page; the project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 957296
Digital Intelligence Assistants
Industrial maintenance strategies increasingly rely on artificial intelligence to predict asset conditions and prescribe maintenance actions. The related maintenance software and human maintenance actors can form a hybrid-augmented intelligence system where each side benefits from and enhances the other side’s intelligence. This system requires optimized human-machine interfaces to help users express their knowledge and retrieve information from difficult-to-use software. Therefore, this article proposes a novel approach for maintenance experts and operators to interact with a predictive maintenance system through a digital intelligent assistant. This assistant is an artificial intelligence (AI) application that helps its users interact with the system via natural language and collects their feedback about the success of maintenance interventions. Implementing hybrid-augmented intelligence in a predictive maintenance system faces several technical, social, economic, organizational, and legal challenges. The benefits, limitations, and risks of hybrid-augmented intelligence must be made clear to all employees so that they advocate its use. AI-focused change management and employee training could be techniques to address these challenges. The success of the proposed approach also relies on the continuous improvement of natural language understanding. Such a process will need conversation-driven development, where actual interactions with the assistant provide accurate training data for language and dialog models. Future research has to be interdisciplinary and may cover the integration of explainable AI, suitable AI laws, operationalized trustworthy AI, efficient design for human-computer interaction, and natural language processing adapted to predictive maintenance.
Predictive maintenance in manufacturing
Improving production reliability and efficiency while simultaneously taking sustainability and safety into account is a pressing issue for all manufacturing industries (Bousdekis et al., 2018). Maintenance has a significant impact on these issues. Its improvement can lead to a 10–20% cost reduction in related labor and materials, which together make up 15% of the total costs of a typical manufacturing company (McCarthy & Spindelndreier, 2013), equivalent to savings of 1.5–3% of total costs. This potential means that these organizations need to treat maintenance as an essential operations function (Bousdekis et al., 2015; Peng et al., 2010).
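The scale of this savings potential can be made concrete with a quick calculation. The 15% cost share and the 10–20% reduction are the figures cited above; the total-cost value is a purely hypothetical example:

```python
# Hypothetical example: estimate maintenance savings for a company with
# 50 million EUR total annual costs (illustrative figure, not from the article).
total_costs = 50_000_000                     # EUR, assumed for illustration
maintenance_share = 0.15                     # maintenance labor + materials: 15% of total costs
reduction_low, reduction_high = 0.10, 0.20   # 10-20% cost reduction cited above

maintenance_costs = total_costs * maintenance_share
savings_low = maintenance_costs * reduction_low
savings_high = maintenance_costs * reduction_high

print(f"Maintenance costs: {maintenance_costs:,.0f} EUR")
print(f"Potential savings: {savings_low:,.0f} - {savings_high:,.0f} EUR "
      f"({maintenance_share * reduction_low:.1%} - {maintenance_share * reduction_high:.1%} of total costs)")
```

For the assumed figures, maintenance costs amount to 7.5 million EUR, and the cited reduction range corresponds to 0.75–1.5 million EUR, i.e., 1.5–3% of total costs.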
Optimizing human-machine collaboration in a predictive maintenance system (PMS) is challenging for at least two reasons:
- Encoding maintenance knowledge and experience into a PMS has constraints. For example, not all the required knowledge may be available, especially for new assets or production systems for which no operating experience exists yet. Consequently, interpretations of diagnoses and prognoses may be inaccurate, resulting in wrong, or at least suboptimal, decisions. Maintenance experts need to reconfigure the system continuously to integrate new experiences. Similarly, personnel may improve maintenance practices over time and with experience, resulting in new rules the decision-making component should understand.
- Current PMS concepts often lack an appropriate approach to include human maintenance actors in the decision-making process (Longo et al., 2017). While the adoption of advanced visualization technologies such as augmented reality (AR) is progressing, most operations still require complicated graphical user interfaces or programming skills. Maintenance personnel must learn to use these interfaces and typically acquire the related skills via professional training courses. Additional training is likely necessary whenever a PMS provider changes the user interface.
Figure: A simplified view of a generic, data-driven PMS. The human maintenance actors interact turn-wise with the DIA (conversation) through mobile or stationary devices.
Challenges for hybrid-augmented intelligence in predictive maintenance
This article’s introduction identified two challenges that the approach above aims to address. Many more challenges remain, including technical, social, economic, organizational, and legal ones. Each challenge, if unaddressed, poses a significant risk that hybrid-augmented intelligence in predictive maintenance will not meet expectations. Most challenges are not exclusive to maintenance and concern non-technical aspects. This article focuses on organizational challenges because companies that want to use DIAs in a PMS can influence them directly.
- Convincing managers. The starting point for most hybrid-augmented intelligence implementations will depend on the goodwill of managers with budget control. These managers must recognize substantial advantages of having a DIA in their “area of responsibility”, for instance, by reducing costs and time, increasing quality, or any mix of these benefits. Only then can these managers justify their investment. Therefore, it is critical to estimate these benefits even though there is little experience with hybrid-augmented intelligence in predictive maintenance, especially for specific industries or asset types. Attempts to meet this challenge will likely benefit from transferable demonstration cases, perhaps as simple as efficient assistance with maintenance checklists. Besides benefits, it is also critical to clarify risks and limitations. A change management process focusing on human-AI collaboration could help reduce prejudice and fears and encourage experimenting with assistants in safe environments (sandboxes) before deciding.
- Convincing shop floor and office employees. Employees use the DIA’s information, advice, recommendations, or actions to realize the assistant’s benefits. However, the assistant cannot provide benefits if it does not improve the individual employee’s tasks, for instance, because it has low usability or is unreliable. Not meeting basic needs will cause mistrust and lower employees’ acceptance of the DIA. Examples of such needs are protecting privacy at work and guaranteeing that the assistant does not monitor individual workers’ performance (typically required by works councils or laws). Consequently, measures increasing the assistant’s trustworthiness are crucial for effective hybrid-augmented intelligence. Potential measures can be technical or organizational. Technical measures concern the assistant’s natural language understanding (NLU) components and their integration into the PMS. An important aspect is that employees will likely attribute existing data quality problems in the PMS, such as typos, missing data points, missing translations, and outdated information, to the assistant. Therefore, technical measures must be system-wide, which requires the collaboration of various business units. Organizational measures concern creating and maintaining training datasets free from biases, errors, and inaccuracies. The responsibility for this could lie with the DIA provider, a subcontracted third party, or the organization using the assistant.
- Preparing employees for human-AI collaboration. Digital assistants may become a sort of digital co-worker in the future. Their successful integration into a PMS will likely require that employees understand, and preferably experience, their capabilities, challenges, and risks. Otherwise, digital assistants will likely not meet expectations and lose, or never gain, the employees’ goodwill. Many assistants can describe their capabilities, but not their conceptual challenges and inherent risks. For instance, an assistant will classify some user utterances as “out of scope” and answer with a response such as “I am not sure what you meant. Please rephrase what you said.” This behavior is typical for assistant implementations but quickly becomes frustrating if users do not understand why the assistant wants them to rephrase (i.e., to encourage an utterance the assistant can classify more confidently). Conceptual challenges concern the assistant’s overall reliability, the accuracy of responses, and accountability. Important risk areas are, for instance, data security and ethics. Therefore, meeting this challenge will require non-technical solutions, such as training the workforce in human-AI collaboration. Specialized training courses, learning materials and tools, and demonstration environments could become critical assets for PMS providers and the education industry.
- Conversation-driven development. Employees willing to use the assistant create conversation data that feeds a conversation-driven development process (Nichol, 2020). This process uses actual conversations to extend an assistant’s training datasets. Over time, the assistant will understand users more accurately and correctly predict responses. Low acceptance of the solution will likely lead to fewer conversations and fewer opportunities to extend the training data. This problem could trigger a downward spiral where more and more employees avoid using the assistant, shrinking the potential training data until the assistant’s developers cannot increase its reliability, which further reduces the assistant’s use.
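The out-of-scope fallback and the conversation-driven development loop described above can be sketched in a few lines of Python. The confidence threshold, intent names, and the `classify` stub are hypothetical illustrations; a production system would use a trained NLU model (e.g., in a framework such as Rasa) instead:

```python
# Minimal sketch of a confidence-threshold fallback combined with
# conversation-driven development: low-confidence utterances are logged
# so they can later be annotated and added to the training data.
# The classifier below is a hypothetical stand-in for a trained NLU model.

CONFIDENCE_THRESHOLD = 0.7   # assumed value; tuned per deployment
annotation_queue = []        # utterances awaiting human review/annotation

def classify(utterance):
    """Hypothetical NLU stub: returns (intent, confidence)."""
    known = {
        "show pump status": ("asset_status", 0.92),
        "log completed repair": ("report_intervention", 0.88),
    }
    return known.get(utterance.lower(), ("out_of_scope", 0.30))

def respond(utterance):
    intent, confidence = classify(utterance)
    if confidence < CONFIDENCE_THRESHOLD:
        # Conversation-driven development: keep the utterance for annotators,
        # then ask the user to rephrase so classification can succeed.
        annotation_queue.append(utterance)
        return "I am not sure what you meant. Please rephrase what you said."
    return f"Handling intent '{intent}'."

print(respond("show pump status"))        # handled with high confidence
print(respond("thingamajig acting up"))   # falls back, queued for annotation
print(f"{len(annotation_queue)} utterance(s) queued for training-data review")
```

The annotation queue is what closes the loop: developers periodically review it, label the utterances with correct intents, and retrain the model, which is why low usage directly starves the improvement process.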
Digital intelligence assistants: conclusions
DIAs will take a unique place in socio-technical predictive maintenance systems because they can offer intuitive and unobtrusive assistance in all predictive maintenance phases. Companies can use DIAs with different hardware, such as smart speakers, tablets, smartphones, desktop computers, and mixed reality applications. This flexibility allows digital assistants to address workers’ requirements in maintenance tasks, such as hands-free interaction and fast access to information. Finally, the deeper integration of humans into predictive maintenance systems creates opportunities for so-called hybrid-intelligence systems. An essential characteristic of these systems is that humans and computers complement each other and evolve together – in consequence, humans remain deeply involved in decision-making.
Future research about hybrid-augmented intelligence in PMS will be an interdisciplinary task relevant to, for instance, computer science, engineering, management, social sciences, and humanities. The following list outlines research directions that we identified as relevant to advancing AI-related functionalities and the adoption of the solution:
- Explainable AI (XAI). DIAs must use interpretable algorithms and explainability mechanisms to support the idea of human-centric AI. Researchers should identify or create such mechanisms and their applications in a DIA. Researchers in media informatics, for instance, should design and develop dialog models that allow users to ask for explanations and provide feedback about their adequacy. This feedback could be used to customize explanations to individual user characteristics (e.g., educational background, expertise level, and language proficiency). Besides, it is an open question where explainability should be implemented, e.g., directly in a prediction system or as a separate, generic component.
- AI law & trustworthy AI. The evolution of AI law and trustworthy AI introduces new legal requirements that AI-based solutions must meet. It is not clear how current legal frameworks and the expectations of trustworthy AI (e.g., accountability and transparency) match in predictive maintenance. Researchers should identify related gaps and investigate the operationalization of emerging AI laws and guidelines for AI’s ethical design, development, and use in this area.
- Human-Computer Interaction (HCI). Since the design of conversational AI for PMS is in its infancy, researchers in media informatics should design, develop, and evaluate effective interaction designs to facilitate the implementation of hybrid-augmented intelligence. This work involves identifying fast interaction modalities (e.g., screen, voice, and haptics) and devices that workers want to use regularly. The latter include smartphones, tablet computers, smart speakers with and without screens, smartwatches and rings, desktop computers, and combinations of these.
- Natural Language Processing (NLP). Core technologies for DIAs are transcribing voice and interpreting natural language text. Researchers should evaluate how trained NLP models (e.g., trained with Coqui) perform when applied to predictive maintenance tasks. The jargon used in the domain, countries, industries, and companies is one critical factor to investigate. Another is employees’ language proficiency, which depends on a person’s background and affects how people talk or write with a DIA. If the models are not accurate enough, researchers should design and curate publicly usable training datasets to extend language models and facilitate open science.
- Human-AI collaboration. Finally, researchers from social sciences and humanities should identify and assess the impact of humans working with AI in predictive maintenance. This task is a long-term research direction because it requires operational hybrid-augmented intelligence solutions and sufficient observation times (e.g., months or years). Critical aspects of human-AI collaboration include, for instance, how working with a DIA influences the user’s autonomy (e.g., proactivity and critical thinking) and relationships with human colleagues (e.g., dependency and interaction behavior). Likewise, interdisciplinary teams should identify how interactions affect the evolution of trained AI models. Aspects to consider are the accumulation of bias and the model’s performance (e.g., accuracy, precision, and speed).
The next step in our research is to implement a fully functioning demonstrator and evaluate it within an industrial use case. This demonstrator must also address general AI application challenges, such as an assistant’s accountability, biased training data, and biased dialog structures. It will, therefore, cover several of the research directions above.
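For the NLP research direction above, a standard way to quantify how well a speech-to-text model handles maintenance jargon is the word error rate (WER): the word-level edit distance between a reference transcript and the model output, divided by the reference length. A minimal sketch (the transcripts are invented examples, not real evaluation data):

```python
# Word error rate (WER): Levenshtein distance between reference and
# hypothesis word sequences, divided by the number of reference words.
def word_error_rate(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# Invented example: domain jargon ("spindle bearing") is misrecognized.
reference = "replace the spindle bearing on machine seven"
hypothesis = "replace the spinal baring on machine seven"
print(f"WER: {word_error_rate(reference, hypothesis):.2f}")  # prints WER: 0.29
```

Comparing WER on general speech versus jargon-heavy maintenance utterances would make the domain-adaptation gap mentioned above directly measurable.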
This article has been extracted and freely adapted from the Sciencedirect.com website – June 2022