Explainable AI and Its Impact on AI Adoption


Explainable AI (XAI) is not a new term. Back in the 1980s, the explainability of AI expert systems was a major topic of interest, but the AI winter that followed pushed the subject aside. Only now is XAI reemerging as a major topic of interest.

XAI refers to the ability to explain the decisions, recommendations, predictions, or actions made by an AI system. Explainability is closely related to interpretability: AI models whose inner workings cannot be interpreted or introspected are hard to explain. Another frequently used term is justification, which refers to the ability to justify a decision or action produced by a model, and which does not necessarily require that the model itself be interpretable. Explainability, interpretability, and justification all come up constantly in discussions of modern AI, especially around machine learning and deep learning models, and all of them fall under the umbrella of XAI.

AI today is largely dominated by machine learning, and particularly deep learning, which is often compared to a “black box”: it is hard to peer under the hood of these models and understand why they made a certain decision or produced a specific output. There is also an interesting tradeoff between interpretability and accuracy: simpler, typically less accurate models like linear regression are highly interpretable, while high-accuracy models like deep neural networks are difficult to interpret.
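To make that tradeoff concrete, here is a minimal sketch (using scikit-learn, which the article itself does not mention, on made-up data): a linear model's learned coefficients can be read off directly as per-feature effects, while even a small neural network spreads the same relationship across hidden-layer weights that carry no per-feature meaning.

```python
# Minimal sketch of the interpretability/accuracy contrast (synthetic data).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                      # three made-up features
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.normal(size=200)

# Interpretable: each coefficient states how much a one-unit change in a
# feature moves the prediction.
linear = LinearRegression().fit(X, y)
print("linear coefficients:", linear.coef_)        # roughly [2.0, -1.0, 0.0]

# Black box: the same relationship learned by a small neural network is
# distributed across hidden-layer weight matrices.
mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, y)
print("hidden-layer weight shapes:", [w.shape for w in mlp.coefs_])
```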

Why Do We Need Explainable AI?

One can argue that it is hard to pin down the specific reasons why humans make certain decisions, which can range from personal preferences to social and environmental factors, or even subconscious factors that are harder still to interpret. While it is difficult to provide clear and precise explanations for human decisions, we can perform some level of introspection, interpretation, or justification to explore why and how a decision was made. This is, for the most part, what XAI is trying to achieve. XAI also allows humans to have more confidence in an AI system and to trust its results. It lets us hold AI systems accountable and liable for their decisions, ensure that they are compliant with regulations, verify and debug whether the system is making the right decisions, and ultimately improve AI systems.

Tools for AI Explainability

There are several tools and frameworks working toward delivering explainability in AI today. Below is a selection of some of the key tools and techniques in XAI.
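As a hedged illustration of what such a tool looks like in use, the sketch below uses SHAP, one widely used explainability library, to attribute a single prediction of a tree-ensemble model to its individual input features. SHAP is chosen here only as a representative example; the model and data are synthetic and not taken from the article.

```python
# Illustrative use of SHAP to explain one prediction (synthetic model and data).
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                        # four synthetic features
y = 3.0 * X[:, 0] + X[:, 1] ** 2 + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=100).fit(X, y)

# TreeExplainer computes Shapley-value attributions for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])           # explain the first sample
print("per-feature contributions:", shap_values[0])  # one value per feature
```

Model-agnostic alternatives such as LIME take a similar per-prediction approach, fitting a simple local surrogate model around the instance being explained.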

Verticals that Will See the Most Impact

XAI is not necessarily needed for all AI applications and use cases. For example, if Google Maps is recommending a specific restaurant or Spotify is playing a certain song from my recommended playlist, there is little value in seeing the detailed inner workings of the AI model behind those recommendations. But if Facebook’s AI tags a specific news story as fake, it would be useful to see why it made that decision. Similarly, if an AI doctor makes a diagnosis from a medical image scan, explainability needs to be a core part of the AI model. The same applies to decisions about loans, insurance claims, or legal contract analysis, where explainability becomes almost mandatory from a regulatory and ethical business practice standpoint. Autonomous cars also fall under the XAI umbrella, as accidents caused by autonomous vehicles come under heavy scrutiny from regulators, as was recently seen with the Uber crash in Arizona. Heavily regulated sectors like healthcare, finance (including insurance and investment), legal, and automotive will all need to adopt XAI soon. The military and defense sector is also likely to see some impact from XAI, especially in drone footage and surveillance.

Overall, the largest impact of XAI will be in vision- and text-based use cases, with vision likely to be one of the hardest to implement because of the complexity of the algorithms and techniques involved. Based on Tractica’s AI Market Forecasts, vision-based AI is largely dominated by verticals such as the public sector and government, driven by video surveillance, and by automotive, driven by self-driving cars. For text-based, or specifically language-based, AI, legal and insurance are the two biggest sectors in terms of revenue potential.

Currently, these verticals show very little adoption of XAI techniques, and little understanding of the risks of not treating explainability, justification, or interpretability as core principles. The success of these sectors and their use cases will depend heavily on how well explainability is adopted.
