2.2 Importance of interpretability
2.2.1 Interpretability Goals
The interpretability of ML models serves several distinct needs:
- Model Debugging
- User Trust and Reliance
- Bias Mitigation
- Privacy Awareness
2.2.3 Question Types
Liao et al. [6] classify user needs by the types of questions users ask about an AI system:
- How (global): how does the system work overall, e.g., how does it weigh different features?
- Why: why did the model produce this prediction for a given instance?
- Why Not: why did the model not produce a different, expected prediction?
- How to be That: what are the changes required, often implying minimum changes, for an instance to get a different target prediction?
- How to Still Be This: what are the permitted changes, often implying maximum changes, for an instance to still get the same prediction?
- What If: what would the model predict if the input were changed in a given way?
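The "How to Be That" question can be made concrete with a toy counterfactual search. The sketch below is a hypothetical illustration, not from [6]: the rule-based `predict` classifier and the greedy `how_to_be_that` search are invented here to show the idea of finding a minimal input change that flips a prediction.

```python
def predict(income, debt):
    """Toy rule-based classifier: approve a loan if income minus
    half the debt reaches a fixed threshold. Purely illustrative."""
    return "approved" if income - 0.5 * debt >= 50 else "denied"

def how_to_be_that(income, debt, target="approved", step=1, max_steps=200):
    """Answer a 'How to Be That' question for one mutable feature:
    find the smallest income increase that yields the target prediction.
    Returns None if no change within the search budget flips it."""
    for delta in range(0, max_steps * step + 1, step):
        if predict(income + delta, debt) == target:
            return delta
    return None

# An instance currently denied: what minimal change would approve it?
delta = how_to_be_that(income=40, debt=20, target="approved")
print(f"Minimum income increase needed: {delta}")
```

Real counterfactual explainers search over many features at once and trade off sparsity, proximity, and plausibility, but the underlying question is the same: the minimum change required to reach a different target prediction.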
References
[6]
Q. V. Liao, D. Gruen, and S. Miller, "Questioning the AI: Informing design practices for explainable AI user experiences," in Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 2020, pp. 1–15.