Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

August 23, 2016

“Why Should I Trust You?”…

Filed under: Artificial Intelligence, Machine Learning — Patrick Durusau @ 6:35 pm

“Why Should I Trust You?”: Explaining the Predictions of Any Classifier by Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin.

Abstract:

Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one.

In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.

LIME software at Github.
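
If you want a sense of what using the package looks like, here is a minimal usage sketch for tabular data, assuming the explainer interface the project documents in its README; the data, feature names, and classifier below are placeholders, not anything from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer  # assumes `pip install lime`

# Placeholder data and black-box model -- substitute your own.
rng = np.random.default_rng(0)
X_train = rng.random((200, 4))
y_train = rng.integers(0, 2, 200)
feature_names = ["f0", "f1", "f2", "f3"]
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Build an explainer from the training distribution.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["no", "yes"],
)

# Explain one prediction, asking for a handful of features.
exp = explainer.explain_instance(X_train[0], clf.predict_proba, num_features=3)
print(exp.as_list())  # (feature condition, local weight) pairs
```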

For a quick overview, consider: Introduction to Local Interpretable Model-Agnostic Explanations (LIME) (blog post).

Or what originally sent me in this direction: Trusting Machine Learning Models with LIME at Data Skeptic, a podcast described as:

Machine learning models are often criticized for being black boxes. If a human cannot determine why the model arrives at the decision it made, there’s good cause for skepticism. Classic inspection approaches to model interpretability are only useful for simple models, which are likely to only cover simple problems.

The LIME project seeks to help us trust machine learning models. At a high level, it takes advantage of local fidelity. For a given example, a separate model trained on neighbors of that example is likely to reveal the relevant features in the local input space, and with them the details of why the model arrives at its conclusion.
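
To make that "local fidelity" idea concrete, here is a rough from-scratch sketch of the same intuition (not the paper's exact algorithm, which works in an interpretable feature space and fits a sparse model): perturb the instance, query the black-box model on the perturbations, weight each perturbation by its proximity to the instance, and fit a simple weighted linear model whose coefficients serve as the explanation. All names here are illustrative.

```python
import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate(predict_proba, x, n_samples=1000, scale=0.5, kernel_width=0.75):
    """Fit a weighted linear surrogate around instance x (illustrative only)."""
    rng = np.random.default_rng(0)
    # 1. Sample neighbors of x by perturbing its features.
    Z = x + rng.normal(scale=scale, size=(n_samples, x.shape[0]))
    # 2. Ask the black-box model for its predictions on the neighbors.
    y = predict_proba(Z)[:, 1]                      # positive-class probability
    # 3. Weight each neighbor by its proximity to x.
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # 4. Fit an interpretable (linear) model on the weighted neighborhood.
    surrogate = Ridge(alpha=1.0).fit(Z, y, sample_weight=w)
    return surrogate.coef_                          # per-feature local importance

# Usage (hypothetical names): coefs = local_surrogate(clf.predict_proba, X_train[0])
```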

Data Science Renee regularly finds deeply interesting material such as this; you should follow her account on Twitter.

I do have one caveat from a quick read of these materials. The authors say in the paper, under section 4, Submodular Pick For Explaining Models:

Even though explanations of multiple instances can be insightful, these instances need to be selected judiciously, since users may not have the time to examine a large number of explanations. We represent the time/patience that humans have by a budget B that denotes the number of explanations they are willing to look at in order to understand a model. Given a set of instances X, we define the pick step as the task of selecting B instances for the user to inspect.

The pick step is not dependent on the existence of explanations – one of the main purposes of tools like Modeltracker [1] and others [11] is to assist users in selecting instances themselves, and examining the raw data and predictions. However, since looking at raw data is not enough to understand predictions and get insights, the pick step should take into account the explanations that accompany each prediction. Moreover, this method should pick a diverse, representative set of explanations to show the user – i.e. non-redundant explanations that represent how the model behaves globally.
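
For a sense of how such a budgeted pick step can work, here is a rough greedy sketch in the spirit of the paper's coverage objective (not a reproduction of its exact algorithm): given a matrix W of per-instance explanation weights, repeatedly add the instance whose explanation covers the most not-yet-covered feature importance, until the budget B is spent. The matrix and the toy numbers below are illustrative stand-ins.

```python
import numpy as np

def greedy_pick(W, budget):
    """Greedily pick `budget` instances that maximize weighted feature coverage.

    W: (n_instances, n_features) matrix of explanation weights.
    Returns the indices of the selected instances.
    """
    W = np.abs(W)
    # Global importance of each feature, e.g. sqrt of its total weight.
    importance = np.sqrt(W.sum(axis=0))
    covered = np.zeros(W.shape[1], dtype=bool)
    selected, candidates = [], set(range(W.shape[0]))
    for _ in range(min(budget, len(candidates))):
        # Marginal gain: importance of the features an instance would newly cover.
        gains = {i: float(((W[i] > 0) & ~covered).astype(float) @ importance)
                 for i in candidates}
        best = max(gains, key=gains.get)
        selected.append(best)
        candidates.remove(best)
        covered |= W[best] > 0
    return selected

# Usage (toy numbers): pick B = 3 of 10 candidate explanations over 5 features.
# picks = greedy_pick(np.random.rand(10, 5), budget=3)
```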

The “judicious” selection of instances seems problematic for models of any degree of sophistication trained on large data sets.

The focus on the “non-redundant coverage intuition” is interesting, but it rests on the assumption that changes in factors don’t lead to “redundant explanations.” That holds in the cases presented, but I am not confident it will hold in every case.

Still, a very important area of research and an effort that is worth tracking.
