Explainable AI (XAI) is not a new term. Back in the 1980s, the explainability of AI expert systems was a major topic of interest, but the field soon entered an AI winter and explainability was largely forgotten. Today, XAI is reemerging as a key area of focus.
XAI refers to the ability to explain the decisions, recommendations, predictions, or actions made by an AI system. Explainability is closely related to interpretability: AI models that cannot be interpreted or introspected upon are hard to explain. Another commonly used term is justification, which refers to the ability to justify a decision or action from a model, and which does not necessarily require that the model itself be interpretable. Explainability, interpretability, and justification are all terms one hears often these days in relation to modern AI, especially machine learning and deep learning models, and all of them can be grouped under the umbrella of XAI. AI today is largely dominated by machine learning, and particularly deep learning, which is often compared to a “black box”: it is hard to peer under the hood of these models and understand why they made a certain decision or produced a specific output. There is an interesting tradeoff between interpretability and accuracy: simple models like linear regression are highly interpretable but often less accurate, while high-accuracy models like deep neural networks are difficult to interpret.
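To make the interpretability end of that tradeoff concrete, here is a minimal sketch (not from the original text) of a one-variable linear regression fit by ordinary least squares. Its learned coefficient *is* the explanation: the model's behavior can be stated in one human-readable sentence. The data and variable names are hypothetical.

```python
def fit_linear(xs, ys):
    """Closed-form least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical data: floor space (in 1,000 sq ft) vs. house price (in $1,000s).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [150.0, 200.0, 250.0, 300.0]
slope, intercept = fit_linear(xs, ys)
# slope == 50.0, intercept == 100.0: each extra 1,000 sq ft adds $50k to the
# predicted price -- a directly human-readable "explanation" of the model.
```

A deep network fit to the same task would offer no such one-line reading of its millions of weights, which is the black-box problem XAI aims to address.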
Why Do We Need Explainable AI?
One can argue that it is hard to pin down the specific reasons why humans make certain decisions, which could range from personal preferences to social and environmental factors, or even subconscious factors that are harder still to interpret. While it is hard to provide clear and precise explanations for human decisions, we can perform some level of introspection, interpretability, or justification to explore why and how a decision was made. This is, for the most part, what XAI is trying to achieve. XAI also allows humans to have more confidence in an AI system and to trust its results. It lets us hold AI systems accountable and liable for their decisions, ensure that they comply with regulations, verify and debug whether they are making the right decisions, and ultimately improve them.
Tools for AI Explainability
There are several tools and frameworks working toward delivering explainability in AI today. Below is a selection of some of the key tools and techniques in XAI:
- Local Interpretable Model-Agnostic Explanations (LIME) Framework (University of Washington): This is a tool and framework aimed at classifiers in both the text and image domains. LIME is model agnostic: it breaks images and text down into interpretable components (such as superpixels or individual words) that can then be flagged as the key reasons why a classification decision was made.
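The core idea behind LIME can be sketched in a few lines of plain Python. This is a toy illustration, not the actual `lime` library: the "black box" below is a hypothetical keyword-based sentiment scorer standing in for a real model, and word importance is estimated by randomly dropping words and watching how the prediction changes.

```python
import random

def black_box_score(words):
    """Stand-in black-box classifier: a probability-like sentiment score.
    (Hypothetical; a real use case would query an opaque trained model.)"""
    positive = {"great", "love", "excellent"}
    negative = {"awful", "boring"}
    score = 0.5
    score += 0.2 * sum(w in positive for w in words)
    score -= 0.2 * sum(w in negative for w in words)
    return max(0.0, min(1.0, score))

def explain(words, n_samples=500, seed=0):
    """Model-agnostic explanation in the spirit of LIME: perturb the input
    by randomly dropping words, query the black box on each perturbation,
    and score each word as (mean prediction with the word kept) minus
    (mean prediction with it dropped)."""
    rng = random.Random(seed)
    kept = {w: [] for w in words}
    dropped = {w: [] for w in words}
    for _ in range(n_samples):
        mask = [rng.random() < 0.5 for _ in words]
        score = black_box_score([w for w, m in zip(words, mask) if m])
        for w, m in zip(words, mask):
            (kept if m else dropped)[w].append(score)
    return {w: sum(kept[w]) / len(kept[w]) - sum(dropped[w]) / len(dropped[w])
            for w in words}

weights = explain("a great but boring movie".split())
# "great" receives a positive weight and "boring" a negative one; the
# flagged words serve as the explanation for the classification, without
# ever looking inside the model.
```

The real LIME framework refines this by weighting perturbed samples by their proximity to the original input and fitting a sparse linear surrogate model, but the model-agnostic perturb-and-observe loop above is the essence of the technique.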
- Generating Visual Explanations (UC Berkeley): This is a tool used to provide explanations for visual classifiers such as deep neural networks (DNNs) and convolutional neural networks (CNNs), giving reasons why a specific classification decision was made. The approach depends on human labeling and supervised features; for example, it can offer reasoning for why a bird image was assigned to a particular class.
- Building Blocks of Interpretability (Google): This is a powerful and rich user interface (UI)/user experience (UX) and visualization tool that goes one step beyond the LIME framework, breaking down how a neural network works across its hidden layers, not just its input and output. Essentially, every neuron activation is paired with a visualization, and the tool also provides a way to see which groups of neurons are activated for a given spatial location in an image.
- Most enterprise AI platforms have been slow to adopt explainability, but some are ahead of the curve. For example, H2O.ai has embedded the LIME framework into its AI development platform. Bonsai is another AI platform company incorporating XAI techniques into its platform. DataRobot provides feature impact analysis, which is a step toward explainability. Kyndi markets itself as one of the world’s first AI platforms focused on explainability, pushing machine reasoning techniques, rather than deep learning, as the way to drive XAI.
- Some startup companies have also emerged that are pushing for explainability in specific domains. Factmata, for example, is applying explainability to media and news analysis, tackling the issue of fake news.
Verticals that Will See the Most Impact
XAI is not necessarily needed for all AI applications and use cases. For example, if Google Maps is recommending a specific restaurant or Spotify is playing a certain song from my recommended playlist, there is little value in seeing the detailed inner workings of the AI model behind those recommendations. But if Facebook’s AI is tagging a specific news story as fake, then it would be useful to see why it made that decision. Similarly, if an AI doctor makes a diagnosis from a medical image scan, explainability needs to be a core part of the AI model. The same applies to decisions about loans, insurance claims, or legal contract analysis, where explainability becomes almost mandatory from a regulatory and ethical business practice standpoint. Autonomous cars also fall under the XAI umbrella, as accidents caused by them come under heavy scrutiny from regulators, as was recently seen with the Uber crash in Arizona. Heavily regulated sectors like healthcare, finance (including insurance and investment), legal, and automotive will all need to adopt XAI soon. The military and defense sector is also likely to see some impact from XAI, especially in drone footage and surveillance.
Overall, the largest impact of XAI will be in vision and text-based use cases, with vision likely to be one of the hardest to implement because of the complex nature of algorithms and techniques that are used. Based on Tractica’s AI Market Forecasts, vision-based AI is largely dominated by verticals such as the public sector or government, driven by video surveillance, and in the automotive sector because of self-driving cars. For text, or specifically language-based AI, legal and insurance are the two biggest sectors in terms of revenue potential.
Currently, there seems to be very little adoption of XAI techniques within these verticals, or understanding of the risks of not treating explainability, justification, and interpretability as core principles. The success of these sectors and their use cases will depend heavily on how well explainability is adopted.