IFAN: An Explainability-Focused Interaction Framework for Humans and NLP Models
Main Authors:
Format: Journal Article
Language: English
Published: 06-03-2023
Summary: Interpretability and human oversight are fundamental pillars of deploying complex NLP models into real-world applications. However, applying explainability and human-in-the-loop methods requires technical proficiency. Despite existing toolkits for model understanding and analysis, options to integrate human feedback are still limited. We propose IFAN, a framework for real-time explanation-based interaction with NLP models. Through IFAN's interface, users can provide feedback to selected model explanations, which is then integrated through adapter layers to align the model with human rationale. We show the system to be effective in debiasing a hate speech classifier with minimal impact on performance. IFAN also offers a visual admin system and API to manage models (and datasets) as well as control access rights. A demo is live at https://ifan.ml.
DOI: 10.48550/arxiv.2303.03124
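The summary states that feedback on selected explanations is integrated through adapter layers to align the model with human rationale. The sketch below is a minimal, hypothetical illustration of that general idea, not IFAN's actual implementation: a bottleneck adapter that could be inserted into a frozen transformer encoder, plus a combined objective that adds a rationale-alignment term to the task loss. The class and function names, the bottleneck size, and the loss weighting are all assumptions.

```python
# Hypothetical sketch (not IFAN's released code): a bottleneck adapter layer and a
# combined objective adding a rationale-alignment term to the classification loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual add.
    In an adapter-based setup, only these small layers are trained while the
    backbone transformer stays frozen."""
    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return hidden_states + self.up(F.relu(self.down(hidden_states)))

def feedback_objective(logits: torch.Tensor,
                       labels: torch.Tensor,
                       token_saliency: torch.Tensor,
                       rationale_mask: torch.Tensor,
                       alpha: float = 0.5) -> torch.Tensor:
    """Task loss plus an (assumed) alignment term that pushes per-token saliency
    scores toward the user-provided rationale mask (1 = relevant, 0 = irrelevant)."""
    task_loss = F.cross_entropy(logits, labels)
    align_loss = F.binary_cross_entropy(token_saliency, rationale_mask.float())
    return task_loss + alpha * align_loss
```

In such a setup, token_saliency would come from whichever attribution method the interface exposes (for example, normalized gradient-based scores), and only the adapter parameters would receive gradients, so the feedback-driven update leaves the backbone weights untouched, consistent with the claim of minimal impact on task performance.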