Hi, I am Jinfen Li
A researcher in interpretable Natural Language Processing.
An Interpretable-NLP Researcher
A Machine Learning Engineer
A Social Media Analyst
About Me
My research interests lie in Natural Language Processing and Explainable AI. I develop NLP algorithms such as multi-emotion recognition, discourse parsing, and propaganda detection, and apply them to social science topics. Recently, I have focused on improving human-AI trust by incorporating explainability into NLP models.
Education
- Ph.D. in Information Studies, 2021 - Present
- M.S. in Computational Linguistics, 2018 - 2020
Syracuse University, Syracuse, NY, USA
- B.S. in Computer Science, 2014 - 2018
Guangdong University of Technology, Guangzhou, China
Contact
- 📮Email: jli284@syr.edu
Projects
I develop NLP algorithms such as multi-emotion recognition, discourse parsing, and propaganda detection, and apply them to social science topics.
Interpretable Fake News Detection
- script.py
This tool is under development.
- Terminal
Interpretable Stance Separation
- script.py
This tool is under development.
- Terminal
Multi-emotion Recognition Using Multi-EmoBERT and Emotion Analysis in Fake News
- script.py
!pip install multi-emotion
from multi_emotion import multi_emotion
print(multi_emotion.predict(["I am so happy today"]))
- Terminal
[{'text': 'i am so happy today',
'pred_label': 'joy,love,optimism',
'probability': '[{"anger": 0.00022063202050048858},
{"anticipation": 0.007108359131962061},
{"disgust": 0.0006860275752842426},
{"fear": 0.00044393239659257233},
{"joy": 0.9998739957809448},
{"love": 0.8244059085845947},
{"optimism": 0.931083083152771},
{"pessimism": 0.0002464792341925204},
{"sadness": 0.007342423778027296},
{"surprise": 0.001668739365413785},
{"trust": 0.009098367765545845}]'}]
Neural-based RST Parsing And Analysis In Persuasive Discourse
- script.py
!pip install rst-parser
from rst_parser import rst_parser
text = ("The scientific community is making significant progress in understanding climate change. "
        "Researchers have collected vast amounts of data on temperature fluctuations, greenhouse gas emissions, and sea-level rise. "
        "This data shows a clear pattern of increasing global temperatures over the past century. "
        "However, there are still debates about the causes and consequences of climate change.")
tree_results, dis_results = rst_parser.parse([text])
print(dis_results)
- Terminal
( Root (span 1 4)
( Nucleus (span 1 2) (rel2par span)
( Nucleus (leaf 1) (rel2par span) (text _!The scientific community is making significant progress!_) )
( Satellite (leaf 2) (rel2par Elaboration) (text _!in understanding climate change .!_) )
)
( Satellite (span 3 4) (rel2par Elaboration)
( Nucleus (leaf 3) (rel2par span) (text _!Researchers have collected vast amounts of data on temperature fluctuations , greenhouse gas emissions , and sea-level rise .!_) )
( Satellite (leaf 4) (rel2par Contrast) (text _!This data shows a clear pattern of increasing global temperatures over the past century . However , there are still debates about the causes and consequences of climate change .!_) )
)
)
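The bracketed parse above can be mined directly for discourse statistics. A quick sketch, assuming the printed string is what `dis_results` holds (the example tally below reflects only the output shown above):

import re
from collections import Counter

# Pull the relation labels out of the bracketed parse and tally them,
# ignoring the structural 'span' label.
tree_str = str(dis_results)
relations = re.findall(r"\(rel2par ([\w-]+)\)", tree_str)
print(Counter(r for r in relations if r != "span"))  # e.g. Counter({'Elaboration': 2, 'Contrast': 1})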
BERT-based Models Design For Propagandistic Technique and Span Detection
- script.py
The pip-installable tool is under development, but the source code is available.
- Terminal
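Until the packaged tool is released, the general setup can be illustrated with a generic HuggingFace Transformers sketch: a BERT encoder with a token-classification head that tags each token as inside or outside a propagandistic span. This is only an architectural sketch, not the released code; the checkpoint name, the two-label scheme, and the example sentence are placeholders, and the head here is untrained (the real model would be fine-tuned on a propaganda corpus).

import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Placeholder checkpoint and label set: 0 = outside a span, 1 = inside a span.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained("bert-base-uncased", num_labels=2)

text = "This is the greatest disaster in the history of our nation."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits            # shape: (1, seq_len, 2)
tags = logits.argmax(-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print(list(zip(tokens, tags)))                 # per-token span predictions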
Empathetic Chatbot
- script.py
The pip-installable tool is under development, and the source code is not available yet.
- Terminal
Publications
Conference and Workshop Papers
Teachings
NLP and Applied Deep Learning.
- Instructor: Lu Xiao; TA: Jinfen Li; Semester: Fall 2021
- Instructor: Daniel Acuna; TA: Jinfen Li; Semester: Spring 2022
Notebooks
I am working on two notebooks.
Let's make NLP interpretable!
Interpretability opens an opportunity to understand a model and its predictions. It is a crucial step in the model development process and paves the way for sustainable and ethical AI.
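As a minimal illustration of the idea (a scikit-learn sketch with toy data, not one of the notebooks themselves), even a linear bag-of-words classifier is interpretable: its per-token coefficients directly show which words push a prediction in which direction.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy sentiment data; the point is only that the learned weights are inspectable.
texts = ["I am so happy today", "This is terrible news", "What a wonderful day", "I feel awful"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

vec = TfidfVectorizer()
X = vec.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

# Sort tokens by their coefficient: the sign and magnitude explain the model.
weights = sorted(zip(vec.get_feature_names_out(), clf.coef_[0]), key=lambda t: t[1])
print(weights[:3])   # tokens pulling toward the negative class
print(weights[-3:])  # tokens pulling toward the positive class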