“The phone,” “the book,” “the guitar” – to function in our everyday environment, we carry concepts of things in our heads that are tied to their properties. But how does the brain summon this knowledge when we cannot directly see, hear, or feel the things themselves, but only read their names? Studies of brain activity show that the brain's representation of a term is shaped by which aspect of the object we focus on. Based on the results, the researchers have developed a model of how our conceptual knowledge is processed.
As you read these lines and make sense of the information, your brain is doing something remarkable: how complex information is processed through the interplay of neurons and brain regions remains largely a mystery. The basic principle, however, seems straightforward: to understand the world, we form mental concepts of objects, people, and events. A concept such as “phone” comprises visual features, such as shape and color, as well as sounds – like ringing. The phone concept also includes actions: information about how and why the object is used.
If we read the word “phone,” our brain calls up the corresponding mental concept, apparently simulating the properties of the object. But how? Until now it was unclear whether the entire concept is activated in the brain, or only those individual features, such as sounds or actions, that are currently relevant. In other words: do we always think of all of a phone's features, or only the part that is needed at the moment? Neuroscientists at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig have investigated this question, among others.
Brain activity as concepts come to mind
To this end, they examined 40 people using functional magnetic resonance imaging (fMRI), which can show which brain areas are activated during particular tasks. During the scans, the participants were presented with numerous terms. These included words like “phone” or “guitar,” denoting things that can both be heard and be used, but also terms like “satellite,” which are associated with neither sounds nor actions. In one round, participants had to decide whether the term denotes an audible object; in another, they were asked whether the object in question can be used. In this way, the participants were prompted to focus mentally on one aspect, even when the object featured both.
The fMRI recordings of brain activity confirmed that the mental representation of an object depends on context: if participants had previously been tuned to the sound aspect, then calling up a mental concept such as “phone” activated a representation of the term in the auditory areas of the cerebral cortex. If, on the other hand, the use of objects like the phone was in the foreground, the so-called somatomotor areas of the brain came into play – regions that are also active when the action is actually performed. The researchers describe this kind of processing as modality-specific.
When the researchers asked about both aspects – audibility and usability – in further test runs, another element of concept processing emerged: the left inferior parietal lobe (IPL) proved responsible for integrating the features, which is why the researchers refer to this brain region as multimodal. In further experiments they were also able to show that the interplay between the modality-specific areas and the multimodal region is reflected in participants' assessment of an object: the more intensely the areas involved worked together, the more strongly the participants associated the term with actions or sounds.
A hierarchy of levels emerges
The experiments were interspersed with made-up words, which the participants were meant to distinguish from real terms. It turned out that this task engaged a brain region that was responsible for neither actions nor sounds: the so-called anterior temporal lobe (ATL). According to the scientists, this area appears to process concepts abstractly, or amodally, entirely detached from sensory impressions.
Finally, they integrated the results into a model intended to describe how conceptual knowledge is represented in the human brain. According to this model, information is passed from one hierarchical level to the next, becoming more abstract with each step. At the lowest level are the modality-specific areas, which process individual sensory impressions or actions. These pass their information on to multimodal regions such as the IPL, which integrate several linked perceptions – sounds and actions, for instance. At the highest level, the amodal ATL in turn represents features detached from sensory impressions.
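To make the described hierarchy concrete, here is a minimal toy sketch in Python. It is not the researchers' computational model; the feature values, the averaging step, and the vocabulary check are illustrative assumptions meant only to mirror the three levels described above:

```python
# Toy sketch of the proposed three-level hierarchy (illustrative only,
# not the authors' model): modality-specific -> multimodal -> amodal.

# Hypothetical feature strengths; in the brain these would be patterns
# of neural activity, not single numbers.
FEATURES = {
    "phone":     {"sound": 0.9, "action": 0.8},
    "guitar":    {"sound": 0.9, "action": 0.9},
    "satellite": {"sound": 0.0, "action": 0.0},  # neither audible nor usable
}

def modality_specific(word: str, modality: str) -> float:
    """Lowest level: modality-specific areas respond only to features of
    'their' modality (e.g., auditory cortex for sounds)."""
    return FEATURES.get(word, {}).get(modality, 0.0)

def multimodal_ipl(word: str) -> float:
    """Middle level (IPL-like): integrates linked perceptions such as
    sounds and actions. Averaging is a purely illustrative choice."""
    return (modality_specific(word, "sound")
            + modality_specific(word, "action")) / 2

LEXICON = {"phone", "guitar", "satellite"}  # assumed vocabulary

def amodal_atl(word: str) -> bool:
    """Highest level (ATL-like): abstract, symbol-like processing detached
    from sensory detail - here, telling real words from pseudo-words."""
    return word in LEXICON

if __name__ == "__main__":
    for w in ("phone", "satellite", "blorf"):  # "blorf" is a pseudo-word
        print(w, modality_specific(w, "sound"), multimodal_ipl(w), amodal_atl(w))
```

The sketch shows only the direction of information flow the model posits: sensory detail at the bottom, integration in the middle, and an abstract judgment at the top that no longer depends on any single modality.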
“In the end, it became clear that our notions of objects, people, and events consist, on the one hand, of sensory impressions and associated actions, and, on the other, of abstract, symbol-like features. Which of these is activated depends strongly on the situation and the task at hand,” summarizes first author Philipp Kuhnke of the Max Planck Institute for Human Cognitive and Brain Sciences.
Source: Max Planck Institute for Human Cognitive and Brain Sciences; article: Cerebral Cortex, doi: 10.1093/cercor/bhab026; doi: 10.1093/cercor/bhaa010