Tidying Up the Conversational Recommender Systems' Biases
Main Authors:
Format: Journal Article
Language: English
Published: 05-09-2023
Subjects:
Online Access: Get full text
Summary: The growing popularity of language models has sparked interest in
conversational recommender systems (CRS) within both industry and research
circles. However, concerns regarding biases in these systems have emerged.
While individual components of CRS have been subject to bias studies, a
literature gap remains in understanding specific biases unique to CRS and how
these biases may be amplified or reduced when integrated into complex CRS
models. In this paper, we provide a concise review of biases in CRS by
surveying recent literature. We examine the presence of biases throughout the
system's pipeline and consider the challenges that arise from combining
multiple models. Our study investigates biases in classic recommender systems
and their relevance to CRS. Moreover, we address specific biases in CRS,
considering variations with and without natural language understanding
capabilities, along with biases related to dialogue systems and language
models. Through our findings, we highlight the necessity of adopting a holistic
perspective when dealing with biases in complex CRS models.
DOI: 10.48550/arxiv.2309.02550