Early visual areas of the brain adapt their representations of the same visual stimulus according to the task at hand. Credit: Rungratsameetaweemana Lab / Columbia Engineering
When you see a bag of carrots at the grocery store, does your mind go to potatoes and parsnips or to buffalo wings and celery?
It depends, of course, on whether you're planning a hearty winter stew or getting ready to watch the Super Bowl.
Most scientists agree that categorizing an object – such as thinking of a carrot as a root vegetable or as a party snack – is the work of the prefrontal cortex, the brain region responsible for reasoning and other high-level functions that make us intelligent and social. In that story, the eyes and the visual regions of the brain act a bit like a security camera, collecting data and processing it in a standardized way before passing it along for analysis.
However, a new study led by biomedical engineer and neuroscientist Nuttida Rungratsameetaweemana, an assistant professor at Columbia Engineering, shows that the brain's visual regions play an active role in making sense of information. Crucially, how they interpret that information depends on what the rest of the brain is working on.
If it's Super Bowl Sunday, the visual system sees those carrots as part of a veggie tray before the prefrontal cortex even knows they exist.
Published on April 11 in Nature Communications, the study provides some of the clearest evidence yet that early sensory systems play a role in decision-making and adapt in real time. It also points to new approaches for designing AI systems that can adapt to new or unexpected situations.
We sat down with Rungratsameetaweemana to learn more about the research.
What's exciting about this new study?
Our results challenge the traditional view that early sensory areas of the brain are simply “watching” or “recording” visual input. In fact, the human visual system actively reshapes how it represents the exact same object depending on what you are trying to do. Even in visual areas that sit very close to the raw information coming in from the eyes, the brain has the flexibility to tune its interpretation and its responses according to the current task. This gives us a new way of thinking about flexibility in the brain and opens up ideas for building more adaptive AI systems modeled on these neural strategies.
How did you arrive at this surprising conclusion?
Most previous work has examined how people learn categories over time, but this study zooms in on the flexibility piece: how does the brain rapidly switch between different ways of organizing the same visual information?
Nuttida Rungratsameetaweemana, assistant professor of biomedical engineering. Credit: Rungratsameetaweemana Lab / Columbia Engineering
What did your experiments look like?
We used functional magnetic resonance imaging (fMRI) to observe people's brain activity while they sorted shapes into different categories. The twist was that the “rules” for categorizing the shapes kept changing. This allowed us to determine whether the visual cortex changed the way it represented the shapes depending on how we had defined the categories.
We analyzed the data using computational machine learning tools, including multivariate classifiers. These tools let us examine brain activation patterns in response to different shape images and measure how well the brain distinguishes shapes in different categories. We saw that the brain responds differently depending on which categories our participants were sorting the shapes into.
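For readers curious what this kind of analysis looks like in practice, the sketch below shows a minimal, hypothetical multivariate decoding analysis in Python with scikit-learn, run on simulated data. The array shapes, the synthetic "voxel" patterns, and the signal strength are illustrative assumptions only; this is not the study's actual pipeline.

```python
# Hypothetical sketch of multivariate "decoding": can a linear classifier
# read the task-defined category out of (simulated) visual-cortex patterns?
# All numbers and names here are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 500               # trials x voxels in one visual ROI

labels = rng.integers(0, 2, n_trials)       # category under the current rule
signs = 2 * labels - 1                      # -1 / +1 for the two categories

# Simulated activation patterns: a weak category-specific signal plus noise
template = rng.normal(0, 0.1, n_voxels)     # per-voxel category preference
patterns = np.outer(signs, template) + rng.normal(size=(n_trials, n_voxels))

# Cross-validated accuracy above chance (0.5) means the patterns carry
# information about the categories people are currently using.
clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, patterns, labels, cv=5).mean()
print(f"Cross-validated decoding accuracy: {accuracy:.2f}")
```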
What did you see in the data from these experiments?
Activity in the visual system – including the primary and secondary visual cortex, which process data directly from the eyes – changed with virtually every task. These areas reorganized their activity according to the decision rules people were using, as shown by brain activation patterns becoming more distinct when a shape fell near the gray area between categories. Those are the hardest shapes to tell apart, so that is exactly when extra processing would be most useful.
We could actually see clearer neural patterns in the fMRI data in cases where people performed better on the task. This suggests that the visual cortex may directly help us solve flexible categorization tasks.
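As a toy illustration of that boundary effect, the hypothetical sketch below compares decoding accuracy for simulated shapes near versus far from the category boundary. The "sharpening" of the category signal near the boundary is built into the simulation by assumption, purely to show how such a comparison could be set up; it is not the authors' code or data.

```python
# Hypothetical comparison: is category decoding sharper for shapes near the
# category boundary? Everything here is simulated by assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_voxels = 300, 400

boundary_distance = rng.uniform(0, 1, n_trials)   # 0 = most ambiguous shape
labels = rng.integers(0, 2, n_trials)             # category under current rule
signs = 2 * labels - 1

# Assumption baked into the simulation: the category signal is amplified for
# ambiguous shapes (small boundary_distance), mimicking "extra processing"
# where it would be most useful.
sharpening = 1.0 - boundary_distance
template = rng.normal(0, 0.15, n_voxels)
patterns = (np.outer(signs * sharpening, template)
            + rng.normal(size=(n_trials, n_voxels)))

clf = LogisticRegression(max_iter=1000)
near = boundary_distance < 0.5
for name, mask in [("near boundary", near), ("far from boundary", ~near)]:
    acc = cross_val_score(clf, patterns[mask], labels[mask], cv=5).mean()
    print(f"Decoding accuracy {name}: {acc:.2f}")
```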
What are the implications of these results?
Flexibility is a hallmark of human cognition, and even cutting-edge AI systems currently struggle with flexible tasks. Our results could inform the design of AI systems that adapt better to new situations. They may also help us understand how cognitive flexibility can break down in conditions such as ADHD and other cognitive disorders. It's also a reminder of how remarkable and efficient our brains are, even at the earliest stages of processing.
What's next for this line of research?
We're pushing the neuroscience further by studying how flexible coding works at the level of neural circuits. With fMRI, we looked at large populations of neurons. In a new follow-up study, we are examining the circuit mechanisms of flexible coding by recording neural activity from inside the skull. This lets us ask how individual neurons and neural circuits in the human brain support flexible, goal-directed behavior.
We're also starting to explore how these ideas could be useful for artificial systems. Humans are really good at adapting to new goals, even when the rules change, but current AI systems often struggle with that kind of flexibility. We hope that what we learn from the human brain can help us build AI systems that adapt more readily to new contexts.
More information:
Margaret M. Henderson et al, Dynamic categorization rules alter representations in human visual cortex, Nature Communications (2025). DOI: 10.1038/s41467-025-58707-4
Provided by Columbia University School of Engineering and Applied Science
Citation: Seeing with purpose: Visual cortex tunes perception to match current goals (2025, April 19) retrieved 19 April 2025 from