Detecting fake news stories via multimodal analysis


Bibliographic Details
Published in: Journal of the Association for Information Science and Technology, Vol. 72, No. 1, pp. 3–17
Main Authors: Singh, Vivek K.; Ghosh, Isha; Sonagara, Darshan
Format: Journal Article
Language: English
Published: Hoboken, USA: John Wiley & Sons, Inc., 01-01-2021
Description
Summary: Filtering, vetting, and verifying digital information is an area of core interest in information science. Online fake news is a specific type of digital misinformation that poses serious threats to democratic institutions, misguides the public, and can lead to radicalization and violence. Hence, fake news detection is an important problem for information science research. While there have been multiple attempts to identify fake news, most such efforts have focused on a single modality (e.g., only text-based or only visual features). However, news articles are increasingly framed as multimodal news stories, and hence, in this work, we propose a multimodal approach combining text and visual analysis of online news stories to automatically detect fake news. Drawing on key theories of information processing and presentation, we identify multiple text and visual features that are associated with fake or credible news articles. We then perform a predictive analysis to detect the features most strongly associated with fake news. Next, we combine these features in predictive models using multiple machine-learning techniques. The experimental results indicate that a multimodal approach outperforms single-modality approaches, allowing for better fake news detection.
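The multimodal approach summarized above (extract per-modality features, then combine them in a predictive model) can be sketched minimally as early fusion, i.e., concatenating text and visual feature vectors before classification. This is an illustrative sketch, not the authors' implementation: the feature definitions, field names, and example article below are invented for demonstration.

```python
# Minimal early-fusion sketch for multimodal fake news detection.
# All feature choices here are hypothetical stand-ins for the learned
# text/visual features a real system would use.

def extract_text_features(article):
    """Toy text features: word count and exclamation-mark density."""
    words = article["text"].split()
    n_words = max(len(words), 1)
    return [len(words), article["text"].count("!") / n_words]

def extract_visual_features(article):
    """Toy visual features: image count and mean image brightness."""
    return [article["num_images"], article["mean_brightness"]]

def fuse(article):
    """Early fusion: concatenate the per-modality feature vectors.

    The fused vector would then be fed to any standard classifier
    (logistic regression, random forest, etc.)."""
    return extract_text_features(article) + extract_visual_features(article)

# Hypothetical article record, for illustration only.
article = {
    "text": "Shocking!! You won't believe this!",
    "num_images": 3,
    "mean_brightness": 0.7,
}
features = fuse(article)  # 2 text features + 2 visual features
```

In practice, the fused vector would be computed for every article in a labeled corpus and passed to a supervised learner; late fusion (combining per-modality classifier outputs) is the common alternative design.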
ISSN: 2330-1635; 2330-1643
DOI: 10.1002/asi.24359