Transparent assessment of information quality of online reviews using formal argumentation theory

Bibliographic Details
Published in: Information Systems (Oxford), Vol. 110, p. 102107
Main Authors: Ceolin, Davide; Primiero, Giuseppe; Soprano, Michael; Wielemaker, Jan
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01-12-2022
Description
Summary: Review scores collect users' opinions in a simple and intuitive manner. However, review scores are also easily manipulable, hence they are often accompanied by explanations. A substantial amount of research has been devoted to ascertaining the quality of reviews and to identifying the most useful and authentic scores through explanation analysis. In this paper, we advance the state of the art in review quality analysis. We introduce a rating system to identify review arguments and to define an appropriate weighted semantics through formal argumentation theory. We introduce an algorithm to construct a corresponding graph, based on a selection of weighted arguments, their semantic distance, and the ratings they support. We also provide an algorithm to identify the model of such an argumentation graph, maximizing the overall weight of the admitted nodes and edges. We evaluate these contributions on the Amazon review dataset by McAuley et al. (2015), comparing the results of our argumentation assessment with the upvotes received by the reviews. We deepen the evaluation by crowdsourcing a multidimensional assessment of reviews and comparing it to the argumentation assessment. Lastly, we perform a user study to evaluate the explainability of our method, i.e., to test whether the automated method we use to assess reviews is understandable by humans. Our method achieves two goals: (1) it identifies, in an unsupervised manner, reviews that online users consider useful, comprehensible, and complete, and (2) it provides an explanation of its quality assessments.

Highlights:
• We introduce a rating system and a weighted semantics to reason about review arguments.
• We introduce an algorithm to construct an argumentation graph from a set of reviews.
• We provide an algorithm to identify the model of an argumentation graph of reviews.
• We deepen our evaluation by crowdsourcing a multidimensional assessment of reviews.
• We show that our method helps explain the assessments it produces.
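The summary above outlines the pipeline at a high level: arguments extracted from reviews are weighted, connected by attack relations informed by semantic distance and the ratings they support, and a maximal-weight model is then selected. The Python sketch below is a hypothetical, simplified illustration of that general idea, not the authors' implementation: the Argument structure, the rating-difference semantic_distance placeholder, and the greedy selection strategy are all assumptions for illustration; the paper defines its semantics and optimization via formal argumentation theory.

```python
# Minimal illustrative sketch (not the paper's method): arguments are nodes
# weighted by strength; arguments backing sufficiently different ratings
# attack each other, with the attack weighted by semantic distance. A greedy
# pass then admits a conflict-free set of high total weight.
from dataclasses import dataclass
from itertools import combinations


@dataclass
class Argument:
    text: str      # argument extracted from a review
    rating: int    # review score the argument supports (e.g. 1-5 stars)
    weight: float  # strength of the argument (assumed to be given)


def semantic_distance(a: Argument, b: Argument) -> float:
    """Placeholder for a real semantic distance (e.g. embedding cosine).
    Here: normalized difference of the supported ratings."""
    return abs(a.rating - b.rating) / 4.0


def build_attack_graph(args: list[Argument], threshold: float = 0.25) -> dict:
    """Add an attack between two arguments when they back ratings that are
    far apart; the attack weight grows with their semantic distance."""
    attacks = {}
    for i, j in combinations(range(len(args)), 2):
        d = semantic_distance(args[i], args[j])
        if d > threshold:
            attacks[(i, j)] = d
    return attacks


def greedy_model(args: list[Argument], attacks: dict) -> set[int]:
    """Greedily admit arguments by decreasing weight, skipping any argument
    that attacks (or is attacked by) one already admitted. This only
    approximates a maximum-weight conflict-free set."""
    admitted: set[int] = set()
    for i in sorted(range(len(args)), key=lambda k: -args[k].weight):
        if all((min(i, j), max(i, j)) not in attacks for j in admitted):
            admitted.add(i)
    return admitted


if __name__ == "__main__":
    reviews = [
        Argument("Battery lasts two days in real use", rating=5, weight=0.9),
        Argument("Stopped charging after a week", rating=1, weight=0.7),
        Argument("Sturdy build, feels premium", rating=4, weight=0.6),
    ]
    model = greedy_model(reviews, build_attack_graph(reviews))
    for i in sorted(model):
        print(f"admitted: {reviews[i].text!r} (supports rating {reviews[i].rating})")
```

Note that the greedy pass is only a heuristic: finding the exact maximum-weight conflict-free set is equivalent to maximum-weight independent set on the attack graph, which is NP-hard in general.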
ISSN: 0306-4379
EISSN: 1873-6076
DOI: 10.1016/j.is.2022.102107