2.2 Importance of interpretability

2.2.1 Interpretability Goals

The interpretability of ML models matters for several distinct needs:

  • Model Debugging

  • User Trust and Reliance

  • Bias Mitigation

  • Privacy Awareness

2.2.2 Stakeholders

  • AI Experts

  • Model Practitioners

  • End Users

2.2.3 Question Types

Liao et al. [6] classify user needs by the types of questions users ask about an AI system.

  1. How (global): how does the system's overall logic work?
    • How does it weigh different features?
  2. Why: why did the model make this prediction?
  3. Why Not: why did the model not produce a different, expected prediction?
  4. How to Be That: what changes, often implying minimal changes, would an instance need in order to receive a different target prediction?
  5. How to Still Be This: what changes, often implying maximal changes, can an instance tolerate while still receiving the same prediction?
  6. What If: what would the model predict if the instance were changed in a given way?
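The "How to Be That" question is typically answered with a counterfactual explanation: a minimally changed version of the input that flips the model's prediction. The sketch below illustrates the idea on a toy linear classifier; the `predict` and `counterfactual` functions, the greedy single-feature search, and all parameter values are illustrative assumptions, not a method from [6].

```python
# Toy sketch of a "How to Be That" (counterfactual) search.
# Assumes a simple linear threshold model; real systems would use
# the deployed model and a more principled search procedure.

def predict(x, weights, bias=0.0, threshold=0.0):
    """Toy linear classifier: 1 if the score exceeds the threshold, else 0."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return int(score > threshold)

def counterfactual(x, weights, step=0.1, max_iters=1000):
    """Greedily nudge the single most influential feature until the
    predicted class flips; returns the minimally changed instance,
    or None if no flip is found within max_iters steps."""
    original = predict(x, weights)
    cf = list(x)
    # Pick the feature with the largest absolute weight ...
    i = max(range(len(weights)), key=lambda j: abs(weights[j]))
    # ... and move it in the direction that lowers (or raises) the score.
    direction = -1 if (original == 1) == (weights[i] > 0) else 1
    for _ in range(max_iters):
        if predict(cf, weights) != original:
            return cf
        cf[i] += direction * step
    return None

x = [1.0, 0.5]          # instance currently predicted as class 1
weights = [2.0, -1.0]
cf = counterfactual(x, weights)
```

"How to Still Be This" is the dual question: instead of searching for the smallest change that flips the prediction, one characterizes the largest region of changes within which the prediction stays the same.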

References

[6]
Q. V. Liao, D. Gruen, and S. Miller, “Questioning the AI: Informing design practices for explainable AI user experiences,” in Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 2020, pp. 1–15.