Cracking Open the Black Box of Neural Networks


There is a certain allure to the deep learning space in that its very inspiration is biomimicry. Deep learning is a branch of machine learning, itself a subset of artificial intelligence (AI), with an architecture that loosely mirrors the human brain: information is processed through multiple layers to compute an outcome. Whereas most other machine learning algorithms rely on only one or two layers of processing, deep learning is “deep” because it stacks many layers – typically between 10 and 100. Computations at each layer build on the layers before it, allowing the network to learn progressively more nuanced and abstract characteristics: each layer detects features based on the output of the layers beneath it, so the top layers represent the most abstract concepts, learned and classified from the inputs passed up by the lower layers.
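For readers who think in code, the sketch below shows this layering idea in its simplest form, using PyTorch purely as one illustrative framework (any deep learning library would do). Each linear-plus-activation stage consumes the previous stage's output, so deeper layers operate on progressively more abstract representations; the layer sizes here are arbitrary placeholders.

    import torch.nn as nn

    # Each stage transforms the output of the stage before it; the stacked
    # transformations are what make the network "deep".
    model = nn.Sequential(
        nn.Linear(784, 256), nn.ReLU(),   # early layer: low-level patterns
        nn.Linear(256, 128), nn.ReLU(),   # middle layer: combinations of patterns
        nn.Linear(128, 64),  nn.ReLU(),   # later layer: more abstract features
        nn.Linear(64, 10),                # output layer: class scores
    )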

But more than data passes through these layers; so-called neural networks are forming the ontologies for decision-making. And when the goal of these algorithms is not just prediction but discovery and insight, neural networks resist introspection – an ironically analogous challenge for humans. Outcomes generated by neural networks cannot easily be traced backwards to their causes. In other words, today’s deep learning is a black box when it comes to explaining precisely why a machine arrived at a specific outcome. This issue, usually called “interpretability” or “explainability,” fosters yet more confusion, opacity, and distrust around a technology that is already difficult for most people to grasp. And it is not just discovery that suffers: consider how an unexplainable decision could undermine accountability, conceal errors, complicate regulatory compliance, hinder replicability, weaken legal protections, disenfranchise those affected, and so forth.

Applying Principles for Development

This issue is not lost on the frenetic AI market. Just last week, the Association for Computing Machinery (ACM) released a statement on algorithmic transparency and accountability that outlines seven core principles developers and organizations can apply:

  1. Awareness (of biases and potential harm)
  2. Access and redress (mechanisms for those adversely affected)
  3. Accountability (institutional responsibility for algorithmic decision-making)
  4. Explanation (of algorithmic procedures and decisions made)
  5. Data provenance (procedures, biases, governance related to data gathering and training processes)
  6. Auditability (recordability of data, algorithms, models, and decisions)
  7. Validation and testing (initial and ongoing documentation, assessment, publication of performance and potential harm)

Most organizations are just beginning to wrap their minds around AI and deep learning, so these principles are well-timed. The sooner companies apply best practices with ethical considerations in mind, the less crisis management, clean-up, and legal fallout they will face.

Meanwhile, significant efforts are coalescing in the academic field. One notable example comes from the University of Washington’s Marco Tulio Ribeiro, a researcher specializing in interpretability. Ribeiro’s Local Interpretable Model-agnostic Explanations (LIME) framework is a method of “perturbing” each part of the input to assess how much it contributes to the output. Explanations are designed to be local, i.e., they should pertain precisely to the instance being predicted; interpretable, insofar as a human can understand them; and model-agnostic, meaning the method can apply to any model without having to penetrate it. (The full paper, “‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier,” is available here.) Development and testing are still underway, and numerous other research efforts are chipping away at this problem.
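To make the perturbation idea concrete, here is a minimal, from-scratch sketch of a LIME-style explanation for a tabular model. It is not the LIME library itself; the function name, noise scale, and kernel width are illustrative assumptions. The core idea is the same: perturb the instance, query the black-box model, weight the perturbed samples by their proximity to the original, and fit a simple linear surrogate whose coefficients serve as local feature importances.

    import numpy as np
    from sklearn.linear_model import Ridge

    def explain_instance(predict_fn, x, num_samples=5000, kernel_width=0.75):
        """Estimate per-feature weights for predict_fn's behavior around x."""
        rng = np.random.default_rng(0)
        # Perturb the instance by adding Gaussian noise to each feature.
        samples = x + rng.normal(scale=1.0, size=(num_samples, x.shape[0]))
        preds = predict_fn(samples)                    # black-box predictions
        # Weight perturbed samples by proximity to the original instance.
        distances = np.linalg.norm(samples - x, axis=1)
        weights = np.exp(-(distances ** 2) / (kernel_width ** 2))
        # Fit an interpretable (linear) surrogate model locally.
        surrogate = Ridge(alpha=1.0)
        surrogate.fit(samples, preds, sample_weight=weights)
        return surrogate.coef_                         # local feature importances

    # Example usage with any trained classifier `clf` and instance `x0`:
    # weights = explain_instance(lambda s: clf.predict_proba(s)[:, 1], x0)

The design choice worth noting is the proximity weighting: it is what keeps the surrogate faithful to the model's behavior near the one instance being explained, rather than to the model globally.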

The Necessity and Feasibility of Data Interpretation

Efforts abound across the market as vendors and suppliers of all shapes and sizes work toward unlocking this proverbial black box – answering a question many adopters do not yet know they should be asking. One recent method, DeepLIFT (Deep Learning Important FeaTures), addresses precisely this issue by assigning contribution scores to a model’s inputs, running on top of existing, already-trained models rather than replacing them.
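DeepLIFT itself propagates contribution scores backwards relative to a reference activation, which takes more machinery than fits here. As a much simpler illustration of the same post-hoc, run-on-top-of-an-existing-model idea, the sketch below computes gradient-times-input attributions for a trained PyTorch model; the model, input, and class index are placeholders.

    import torch

    def gradient_x_input(model, x, target_class):
        # Attribute a single prediction to input features via gradient * input,
        # a simpler relative of DeepLIFT-style contribution scores.
        x = x.clone().detach().requires_grad_(True)
        score = model(x)[0, target_class]    # scalar score for the chosen class
        score.backward()                     # d(score)/d(input)
        return (x.grad * x).detach()         # elementwise contribution estimate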

Many researchers argue that interpretability is less a problem to be solved than a reality to be accepted. We use and trust our brains every day to process information and take action, with no quantifiable account of exactly why we decide as we do. The world is complex, even ineffable, and insisting on a reductionist explanation of every algorithmically-generated decision may simply be asking too much.
