Why explainable AI must be grounded in the board's risk management strategy

IT projects have a long history of failing after millions of dollars have been spent, or of never returning the expected value. Today we invest in AI: the industry is forecast to grow 12.3% to $156.5 billion this year, according to IDC.[1] Yet most CEOs and board directors are not prepared to control the technology they are investing in, because the resulting systems are unexplainable.

More failures?

It is only a matter of time before these investments turn into the kind of IT failures we have seen in the past, and Forbes is warning of the same. CEOs and board members should treat unexplainable AI as a risk. Cindy Gordon sees the development of trusted systems (systems that cannot cause harm to humans and society) as the CEO's primary responsibility to control, and she asks: “Is the relatively new field of explainable AI the panacea?”


A good start comes from the US Department of Commerce, which in August 2020 identified four principles that define explainable AI.[3] They say a system must (see the sketch after this list):

  • Provide an explanation: the system delivers accompanying evidence or reason(s) for all outputs
  • Be meaningful: the system provides explanations that are understandable to individual users
  • Have explanation accuracy: the explanation correctly reflects the system's process for generating the output
  • Have knowledge limits: the system only operates under conditions for which it was designed or when the system reaches a sufficient confidence in its output
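
To make these principles concrete, here is a minimal, hypothetical sketch in Python. The loan-scoring rules, the confidence threshold, and every name in it (ExplainedOutput, score_loan, answer_or_abstain, CONFIDENCE_FLOOR) are invented for illustration and are not part of the DoC text; the point is only that every output carries its evidence and the system abstains when a case falls outside its knowledge limits.

  from dataclasses import dataclass
  from typing import List

  @dataclass
  class ExplainedOutput:
      prediction: str        # the system's output
      evidence: List[str]    # accompanying reasons (Explanation principle)
      confidence: float      # used to enforce the Knowledge-limits principle

  CONFIDENCE_FLOOR = 0.8     # hypothetical threshold set by the organization

  def score_loan(income: float, debt: float) -> ExplainedOutput:
      """Toy rule-based scorer that always returns its reasons."""
      if debt > 0.5 * income:
          return ExplainedOutput(
              "reject",
              [f"debt {debt:.0f} exceeds 50% of income {income:.0f}"],
              confidence=0.95,
          )
      return ExplainedOutput(
          "approve",
          [f"debt {debt:.0f} is within 50% of income {income:.0f}"],
          confidence=0.90,
      )

  def answer_or_abstain(result: ExplainedOutput) -> str:
      # Knowledge limits: only answer when the system is confident enough.
      if result.confidence < CONFIDENCE_FLOOR:
          return "No answer: this case falls outside the system's knowledge limits."
      # Explanation + meaningfulness: every answer carries readable evidence.
      return f"{result.prediction}, because " + "; ".join(result.evidence)

  print(answer_or_abstain(score_loan(income=60000, debt=40000)))

In a real system the evidence would come from the model's own reasoning trail (rules fired, features weighted) rather than hand-written strings, and explanation accuracy would mean that this trail faithfully reflects how the output was actually produced.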

But that alone will not convince the CEO. The audit systems needed to interface between AI and humans and provide these meaningful explanations are not readily available, so the CEO will need to invest in building these extra features. What arguments could justify that investment, other than compliance with the DoC principles?

More risks?

The answer lies in the fact that boards and CEOs who invest in AI systems without investing in explainability take an unacceptably high risk. There are at least four reasons: unexplainable AI systems are not trusted by employees, are more difficult to improve over time, do not know their own limitations (for example, when COVID-19 disrupted the conditions they were designed for), and will not meet expected government regulations. In other words, unexplainable AI systems have a lower ROI than explainable AI systems.

XAI 

Explainability should therefore be the top priority for organizations that want to invest in AI. It is better to start with a small system that explains itself and can be readily improved than with a big AI investment that later needs additional investment to explain itself. How?


Solutions

Methods that have been used in AI systems for a very long time, such as decision tables and business rules, combined with the exploratory power of machine-learning algorithms, make it possible to start small, build understandable systems that explain themselves, and stay in control. A brief sketch of the idea follows.
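
As one hedged illustration (not the only way to combine the two), assuming scikit-learn is available: a deliberately shallow decision tree does the exploratory work on the data, and its learned splits are then exported as plain if/then rules that a domain expert can read, review, and maintain as business rules.

  from sklearn.datasets import load_iris
  from sklearn.tree import DecisionTreeClassifier, export_text

  # Let machine learning explore the data with a deliberately small model.
  data = load_iris()
  tree = DecisionTreeClassifier(max_depth=2, random_state=0)
  tree.fit(data.data, data.target)

  # Export the learned splits as plain if/then rules: the decision-table /
  # business-rule view that humans can inspect and stay in control of.
  print(export_text(tree, feature_names=list(data.feature_names)))

The resulting rule set is small enough to review in minutes, and it can be retrained and re-exported as the data evolves, which keeps the explanation in step with the system.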


The bottom line is that we cannot use AI as a substitute for understanding: we must understand first and then add AI.

Check out my book AIX: Artificial Intelligence needs eXplanation for illustrated examples and more details, and continue reading my LinkedIn series on XAI, for example on the five reasons to invest in XAI or on what makes an explanation good.
