Building trustworthy AI is key for enterprises



In April 2019, the European Union released a set of guidelines for developing trustworthy AI systems. However, enterprises are only beginning to realize ROI from AI applications, and the movement to make these systems ethical and responsible is still nascent.

The core of these guidelines requires that applications using artificial intelligence capabilities be “lawful, ethical and robust.” Putting such guidelines in place within your organization takes significant time, but it has become an important part of AI adoption. Without consumer trust and proper guidelines, AI applications cannot reach their potential in the enterprise.

Real-world AI and the need for trust

One of the challenges with the pursuit of AI is the mismatch between the science fiction concept of artificial intelligence and the real-world, practical applications of AI. In movies and science fiction novels, AI systems are portrayed as super-intelligent machines that have cognitive capabilities equal to or greater than that of humans.

However, the reality is that much of what organizations are implementing today is narrow AI. This stands in clear contrast to artificial general intelligence (AGI). The limits of current AI capabilities mean organizations can implement specific cognitive abilities in narrow domains, such as image recognition, conversational systems, predictive analytics, and pattern and anomaly detection.
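To make the contrast concrete, one of the narrow capabilities mentioned above, anomaly detection, can be as simple as a statistical rule over sensor readings. The sketch below uses z-scores; the threshold and sample data are illustrative assumptions, not from the article.

```python
# A minimal sketch of narrow AI: flag readings that deviate sharply
# from the rest of the data. Threshold and values are hypothetical.
from statistics import mean, stdev

def detect_anomalies(values, threshold=2.0):
    """Return indices of values whose z-score magnitude exceeds threshold."""
    mu = mean(values)
    sigma = stdev(values)
    if sigma == 0:
        return []  # all values identical; nothing stands out
    return [i for i, v in enumerate(values)
            if abs((v - mu) / sigma) > threshold]

readings = [10.1, 9.8, 10.0, 10.2, 9.9, 42.0, 10.1]
print(detect_anomalies(readings))  # → [5]
```

A system like this performs exactly one task in one domain; it has no understanding beyond the numbers it is given, which is what separates today's deployed AI from the general intelligence of science fiction.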



by Ronald Schmelzer – Jul 23rd, 2022
