Task structure and performance. (A) Illustration of the four-room environment and examples of reward locations in the vertical context. An example participant trajectory is shown in red. In this example, the agent starts in the southwest (SW) room, explores the starting rock, and does not find a cheese reward. Because the cheese rewards vary vertically, the two cheese rewards must therefore be in the SE and NE rooms. (B) Different contexts signaled different reward covariances: on the first day (training), martini rewards appeared in vertically adjacent rooms, while peanut rewards appeared in horizontally adjacent rooms. On the second day, three different pairs of rewards were displayed, corresponding to the same covariance structure, and the two contexts were interleaved within a run. An example trial order is shown, but order was balanced across participants. (C) Participants’ view of exploration in one of the rooms during training. The floors of all rooms were purple in the scanner. (D) Heatmaps of average grid-square occupancy per trial in each of the two contexts. Black arrows show the average transition vector of each grid square. Data are averaged across participants. Note that the average transition vectors also matched well when considering only participant-controlled movement periods (see Figure S1D). (E) Participants’ scores on each trial on days 1 (left) and 2 (right). With training, participants find rewards more quickly. On the first day, contexts were blocked between trials to facilitate learning; on the second day, they were interleaved. (F) Participants learn to preferentially search in rooms suggested by the reward structure, and this behavior generalizes to new reward sets associated with each context on day 2. In (D) and (E), data are smoothed using non-overlapping sets of 4 adjacent trials for visualization. In (E), only room choices made by human participants are shown; those made by the agent are excluded.
Error bars show the standard error of the mean across participants. Colored panels indicate different epochs with the same reward pairs. Credit: Neuron (2023). DOI: 10.1016/j.neuron.2023.08.021
Human decision-making has been the subject of a wide range of studies. Collectively, these research efforts could help scientists better understand how people make different types of everyday choices, while also shedding light on the neural processes underlying those choices.
Past studies suggest that when making snap decisions, that is, choices that must be made quickly based on the information available at a given moment, humans rely heavily on contextual information. Context can also guide so-called sequential decisions, in which a choice is made after observing a process unfold step by step.
Researchers from the University of Oxford, the National Research Council Rome, University College London (UCL) and the Max Planck Institute for Human Development recently conducted a study exploring the impact of context on goal-oriented decision-making. Their findings, published in Neuron, suggest that goal seeking “compresses” the spatial maps in the brain’s hippocampus and orbitofrontal cortex.
“Humans can navigate flexibly to achieve their goals,” Paul S. Muhle-Karbe, Hannah Sheahan and colleagues wrote in their paper. “We asked how the neural representation of allocentric space is distorted by goal-directed behavior. Participants guided an agent to two successive goal locations in a gridworld environment comprising four interconnected rooms, with a contextual cue indicating the conditional dependence of one goal location on another.”
To further explore what happens in the brain during goal-directed decision-making, the researchers conducted an experiment involving 27 human participants. These participants completed a task on a computer screen, which involved navigating a virtual environment by controlling an avatar.
This avatar could move in a partially visible world represented in grid form. This virtual world consisted of four different interconnected rooms, and participants only saw the room their avatar occupied from above (i.e., from a bird’s eye view).
During each experimental trial, the participants’ avatar appeared in a randomly chosen room, and participants were asked to move it using keyboard buttons to collect rewards by colliding with reward-containing rocks while avoiding empty ones.
At the start of each trial, participants were also shown a context cue, which provided partial information suggesting (but not fully revealing) where rewards might be found in the virtual world. Notably, while participants completed this goal-directed decision-making task, their brain activity was recorded with an fMRI scanner.
“By examining the neural geometry by which room and context were encoded in fMRI signals, we found that map-like representations of the environment appeared in both the hippocampus and neocortex,” wrote Muhle-Karbe, Sheahan and their colleagues.
“Cognitive maps in the hippocampus and orbitofrontal cortex were compressed such that locations cued as goals were encoded together in neural state space, and these distortions predicted successful learning. This effect was captured by a computational model in which current and prospective locations are jointly encoded in a place code, providing a theory for how goals distort the neural representation of space in macroscopic neural signals.”
Essentially, Muhle-Karbe, Sheahan and their colleagues found that the virtual environment was encoded as a map in parts of the participants’ brains, particularly the hippocampus and neocortex. Interestingly, however, these cognitive maps were somewhat compressed, with locations relevant to the current goal encoded closer together in neural state space.
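The joint-coding idea behind this compression can be illustrated with a minimal toy sketch. This is not the authors’ fitted model: the 4x4 grid, the one-hot place code, and the mixing weight `alpha` are all illustrative assumptions. The sketch simply shows that blending a goal location’s code into the current location’s code pulls any two locations that share a goal closer together in state space.

```python
import numpy as np

N_LOCATIONS = 16  # hypothetical 4x4 grid, flattened


def place_code(loc: int) -> np.ndarray:
    """One-hot place code for a single grid location."""
    v = np.zeros(N_LOCATIONS)
    v[loc] = 1.0
    return v


def joint_code(current: int, goal: int, alpha: float = 0.3) -> np.ndarray:
    """Blend the current location's code with the goal's code.

    alpha sets how strongly the prospective goal is mixed in;
    it is an illustrative parameter, not a value from the paper.
    """
    return (1 - alpha) * place_code(current) + alpha * place_code(goal)


# Two different locations that share the same goal
goal = 5
d_base = np.linalg.norm(place_code(2) - place_code(9))          # pure place code
d_joint = np.linalg.norm(joint_code(2, goal) - joint_code(9, goal))

# The shared goal component cancels, so the joint codes sit closer together:
assert d_joint < d_base
```

Because the goal contributes an identical component to both representations, the distance between them shrinks by a factor of `1 - alpha`, which is one simple way goal coding can "compress" a neural map.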
These findings shed new light on the neural underpinnings of goal-directed decision-making, suggesting that the brain may use compression mechanisms to contextually modulate spatial information in pursuit of a specific goal. In the future, new studies could delve deeper into these compression processes, which could lead to fascinating new discoveries.
More information:
Paul S. Muhle-Karbe et al, Goal seeking compresses neural codes for space in the human hippocampus and orbitofrontal cortex, Neuron (2023). DOI: 10.1016/j.neuron.2023.08.021. www.sciencedirect.com/science/ …ii/S0896627323006323
© 2024 Science X Network
Citation: Cognitive maps of certain brain regions are compressed when making goal-seeking decisions (January 4, 2024) retrieved January 4, 2024 from
This document is subject to copyright. Apart from fair use for private study or research purposes, no part may be reproduced without written permission. The content is provided for information only.