Summary


The machine learning models adopted in NLP and text mining involve statistical underpinnings or complex neural network structures, which intimidate many novice researchers, model practitioners, and end-users. The resurgence of explainable AI aims to address this issue by providing a better understanding of a model's decision-making process. This notebook is situated within the scope of NLP and text mining and aims to provide a comprehensive overview of popular explainable NLP methods.

Starting with the concepts of interpretability and explainability, I then introduce a taxonomy of explainable NLP methods. Following this taxonomy, I discuss popular interpretable machine learning methods and showcase them in Python.

I will cover the following methods: LIME, SHAP, and Anchors, as well as interpretability methods for neural networks.

All interpretation methods are explained in depth and discussed critically. How do they work under the hood? What are their strengths and weaknesses? How can their outputs be interpreted? This notebook will enable you to experiment with whichever interpretation methods interest you in your machine learning project. It is recommended reading for machine learning practitioners, data scientists, statisticians, and anyone else interested in making machine learning models, especially NLP models, interpretable.
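As a quick preview of the kind of code covered later, here is a minimal sketch of explaining a single prediction with LIME. The toy 20 Newsgroups setup and the scikit-learn pipeline are my own illustrative assumptions, not the notebook's actual dataset or models.

```python
# Minimal preview sketch (toy example, not the notebook's own data or model):
# explain one prediction of a simple sklearn text classifier with LIME.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Train a small binary classifier on two newsgroup categories.
categories = ["sci.med", "sci.space"]
train = fetch_20newsgroups(subset="train", categories=categories)
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
pipeline.fit(train.data, train.target)

# LIME perturbs the input text and fits a local linear surrogate model,
# whose weights indicate how much each word pushed the prediction.
explainer = LimeTextExplainer(class_names=categories)
explanation = explainer.explain_instance(
    train.data[0],              # the text instance to explain
    pipeline.predict_proba,     # black-box probability function
    num_features=6,             # show the six most influential words
)
print(explanation.as_list())    # [(word, weight), ...]
```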

About me: My name is Jinfen Li. I'm an NLP and XAI researcher, and my goal is to make NLP models interpretable.

Follow me on GitHub JinfeLi and Twitter @li_jinfen!