Dopaminergic and Prefrontal Basis of Learning from Sensory Confidence and Reward Value

Bibliographic Details
Published in: Neuron (Cambridge, Mass.), Vol. 105, No. 4, pp. 700-711.e6
Main Authors: Lak, Armin, Okun, Michael, Moss, Morgane M., Gurnani, Harsha, Farrell, Karolina, Wells, Miles J., Reddy, Charu Bai, Kepecs, Adam, Harris, Kenneth D., Carandini, Matteo
Format: Journal Article
Language: English
Published: United States: Elsevier Inc (Cell Press), 19-02-2020
Description
Summary: Deciding between stimuli requires combining their learned value with one’s sensory confidence. We trained mice in a visual task that probes this combination. Mouse choices reflected not only present confidence and past rewards but also past confidence. Their behavior conformed to a model that combines signal detection with reinforcement learning. In the model, the predicted value of the chosen option is the product of sensory confidence and learned value. We found precise correlates of this variable in the pre-outcome activity of midbrain dopamine neurons and of medial prefrontal cortical neurons. However, only the latter played a causal role: inactivating medial prefrontal cortex before outcome strengthened learning from the outcome. Dopamine neurons played a causal role only after outcome, when they encoded reward prediction errors graded by confidence, influencing subsequent choices. These results reveal neural signals that combine reward value with sensory confidence and guide subsequent learning.
• Mouse choices depend on present confidence, learned rewards, and past confidence
• Choices constrain a model that predicts activity in prefrontal and dopamine neurons
• Learning relies on prefrontal signals encoding predicted value
• Learning relies on dopamine signals encoding prediction error but not predicted value
Lak et al. model the choices made by mice in a visual task with biased rewards and establish neural correlates of the model’s variables, revealing how choices and learning depend on sensory confidence and reward value.
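As a rough sketch of the model described in the summary (not the authors' published code; variable names, the learning-rate parameter, and the specific delta-rule update are illustrative assumptions), the per-trial computation the abstract describes can be written as: predicted value = sensory confidence × learned value of the chosen option, with the outcome driving a confidence-graded reward prediction error that updates that value.

    # Illustrative sketch of a confidence-weighted reinforcement-learning step.
    # Only the first two lines follow directly from the summary; the update
    # rule and learning_rate are standard RL assumptions added for clarity.
    def update_value(q_chosen, confidence, reward, learning_rate=0.1):
        predicted_value = confidence * q_chosen       # confidence x learned value
        prediction_error = reward - predicted_value   # prediction error graded by confidence
        return q_chosen + learning_rate * prediction_error

Here q_chosen is the learned value of the chosen option, confidence is the decision confidence from a signal-detection stage, and reward is the trial outcome.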
Lead Contact
Present address: Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford OX1 3PT, UK
Senior author
ISSN: 0896-6273 (print); 1097-4199 (electronic)
DOI: 10.1016/j.neuron.2019.11.018