Stochastic surprisal: An inferential measurement of free energy in neural networks

Bibliographic Details
Published in: Frontiers in Neuroscience, Vol. 17, p. 926418
Main Authors: Prabhushankar, Mohit; AlRegib, Ghassan
Format: Journal Article
Language: English
Published: Switzerland: Frontiers Research Foundation (Frontiers Media S.A.), 14-03-2023
Description
This paper conjectures and validates a framework that allows for action during inference in supervised neural networks. Supervised neural networks are constructed with the objective of maximizing their performance metric on any given task. This is done by reducing free energy and its associated surprisal during training. However, the bottom-up inference of supervised networks is a passive process that renders them fallible to noise. In this paper, we provide a thorough background on supervised neural networks, both generative and discriminative, and discuss their functionality from the perspective of the free energy principle. We then provide a framework for introducing action during inference. We introduce a new measurement, called stochastic surprisal, that is a function of the network, the input, and any possible action. This action can be any one of the outputs that the neural network has learned, thereby lending itself to the measurement. Stochastic surprisal is validated on two applications: image quality assessment and recognition under noisy conditions. We show that, while noise characteristics are ignored for robust recognition, they are analyzed to estimate image quality scores. We apply stochastic surprisal on two applications, three datasets, and as a plug-in on 12 networks. In all cases, it provides a statistically significant increase across all measures. We conclude by discussing the implications of the proposed stochastic surprisal in other areas of cognitive psychology, including expectancy-mismatch and abductive reasoning.
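As background for the summary above: in the free energy literature, the variational free energy F(x) upper-bounds the surprisal -log p(x), so a network that minimizes free energy during training also minimizes its expected surprise at the data. The paper's stochastic surprisal extends this to inference by scoring the network, the input, and every possible action. Purely as an illustrative sketch, and not the authors' exact formulation, one way to realize a per-action surprisal score is to impose each candidate label on the network and measure the loss gradient it would induce; the PyTorch-style helper below (the model, the input x, and the gradient-norm proxy are all assumptions for illustration) shows the idea.

import torch
import torch.nn.functional as F

def surprisal_per_action(model, x, num_classes):
    # Hypothetical proxy for stochastic surprisal: the gradient
    # magnitude the network would incur if action (label) y were
    # imposed at inference time. Larger gradients mean the network
    # would have to change more to accept the action, i.e. the
    # action is more "surprising". x is a single input batch of
    # shape [1, ...].
    model.eval()
    scores = []
    for y in range(num_classes):
        model.zero_grad()
        logits = model(x)                       # forward pass on input x
        target = torch.tensor([y], device=x.device)
        loss = F.cross_entropy(logits, target)  # pretend action y is correct
        loss.backward()                         # backpropagate the assumed action
        grad_norm = sum(p.grad.norm().item()    # aggregate parameter-gradient norms
                        for p in model.parameters() if p.grad is not None)
        scores.append(grad_norm)
    return scores                               # one surprisal proxy per action

Under this reading, robust recognition would select the least surprising action, while an image quality score could be derived from the overall shape of the surprisal profile; both uses are consistent with the two applications named in the summary, though the exact estimators are defined in the paper itself.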
Reviewed by: Alexandra Psarrou, University of Westminster, United Kingdom; Yutao Liu, Tsinghua University, China
Edited by: John Jarvis, University of Westminster, United Kingdom
This article was submitted to Perception Science, a section of the journal Frontiers in Neuroscience
ISSN: 1662-4548
1662-453X
DOI: 10.3389/fnins.2023.926418