Using a corkscrew, writing a letter with a pen or unlocking a door by turning a key are actions that seem simple but actually require a complex orchestration of precise movements. So, how does the brain do it?
According to a new study by researchers from Carnegie Mellon University and the University of Coimbra, the human brain has a specialized system that builds these actions in a surprisingly systematic way.
Analogous to how all of the words in a language can be created by recombining the letters of its alphabet, the full repertoire of human hand actions can be built out of a small number of basic building block movements.
The researchers used computational modeling of functional MRI data to demonstrate that a brain region called the supramarginal gyrus (SMG) — located in the left inferior parietal lobe and already known for its role in planning object-directed actions — builds representations of complex actions by recombining a limited set of coordinated movement patterns of the fingers, hands, wrists and arms. Scientists call these movement patterns “kinematic synergies.”
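The article does not spell out the modeling details, but one common way to illustrate the idea of kinematic synergies is to decompose recorded joint angles into a small set of components and rebuild each posture as a weighted combination of them. The sketch below does this with PCA on synthetic data; the data shapes, the choice of five synergies and the use of PCA are illustrative assumptions, not the study’s actual pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical data: rows are individual grasping postures, columns are
# joint angles of the fingers, hand, wrist and arm (values are synthetic).
rng = np.random.default_rng(0)
n_postures, n_joints = 200, 22
postures = rng.normal(size=(n_postures, n_joints))

# Extract a small set of "kinematic synergies": coordinated joint-angle
# patterns that can be recombined, with different weights, to rebuild
# each recorded posture.
n_synergies = 5
pca = PCA(n_components=n_synergies)
weights = pca.fit_transform(postures)   # how strongly each synergy is recruited
synergies = pca.components_             # the synergy patterns (n_synergies x n_joints)

# Any complex posture is then approximated as a weighted sum of the
# synergies plus the mean posture, analogous to spelling a word from a
# fixed alphabet of letters.
reconstructed = weights @ synergies + pca.mean_
print("variance explained by", n_synergies, "synergies:",
      round(pca.explained_variance_ratio_.sum(), 2))
```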
As an example, the posture of the hand while using a pair of scissors is similar to that when using a pair of pliers — even though scissors and pliers have very different functions. By contrast, even though a pair of scissors and an X-ACTO knife might be used for the same function or purpose, the postures of the hand when using these two types of objects would be very different. The researchers found that activity in the SMG represented two objects similarly when the hand postures used with them were similar.
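A standard way to test this kind of claim is representational similarity analysis: compare how similar two objects’ hand postures are with how similar their evoked brain activity patterns are. The sketch below illustrates that logic with random placeholder data; the object and voxel counts, the use of correlation distance and the Spearman comparison are assumptions for illustration, not a description of the paper’s analysis.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Placeholder data for a set of objects (scissors, pliers, X-ACTO knife, ...):
# a hand-posture vector for each object, and an SMG activity pattern per object.
rng = np.random.default_rng(1)
n_objects, n_joints, n_voxels = 30, 22, 500
postures = rng.normal(size=(n_objects, n_joints))
smg_patterns = rng.normal(size=(n_objects, n_voxels))

# Build pairwise dissimilarity structures: how different are two objects'
# hand postures, and how different are their SMG activity patterns?
posture_rdm = pdist(postures, metric="correlation")
neural_rdm = pdist(smg_patterns, metric="correlation")

# If the SMG represents objects by the postures used to act on them,
# objects with similar postures should evoke similar activity, so the two
# dissimilarity structures should be correlated.
rho, p = spearmanr(posture_rdm, neural_rdm)
print(f"posture-neural similarity correlation: rho={rho:.2f}, p={p:.3f}")
```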
“Just as brain regions supporting language function combine sounds, or phonemes, to form words, the brain also combines kinematic synergies to form complex, object-directed actions,” said Leyla Caglar, lead author of the study, which was published in Proceedings of the National Academy of Sciences on Aug. 18, 2025. “From this closed set of basic building blocks, the brain constructs the full repertoire of actions that can be performed with the human hand.”
Caglar is currently a postdoctoral fellow at Mount Sinai Medical Center. She led this research while holding a joint postdoctoral appointment at Carnegie Mellon University’s Department of Psychology and the University of Coimbra in Portugal.
“These findings support the idea that the supramarginal gyrus functions as an assembly hub, combining basic elements of actions into more complex, functional sequences,” Caglar said.
Implications for Robotics, Brain-Machine Interfaces and Action Deficits Caused by Brain Injury
While the idea that the brain uses a combinatorial structure of motor synergies is not new, the evidence provided by this study may have far-reaching implications for robotics and the development of effective brain-computer interfaces.
“If we can map these synergies directly from neural activity, we could build more efficient brain-machine interfaces that allow users to control prosthetics with greater naturalness, precision and flexibility,” said study coauthor Dr. Jorge Almeida. “This also moves us closer to creating artificial systems capable of acting with agility, efficiency and intelligence comparable to that of humans.”
The discovery might also offer new perspectives on disorders such as apraxia — a neurological condition in which patients can lose the ability to use objects correctly, despite being able to recognize the objects they cannot use.
“Just as a deficit in the ability to properly assemble the sounds of language into words impairs language function, damage to this brain area can make it difficult for people to plan and carry out complex actions with objects,” said Almeida.
Integration of Cognition, Perception and Action
When we use our hands to grasp objects, we don’t have to “think” about building up actions out of their elemental parts — just as native speakers of a language don’t have to think about how to say the words they want to use. The processes supported by the supramarginal gyrus are always running automatically in the background, behind what we are thinking about in any given moment.
A key aspect of the system that supports complex hand actions is its location. Sitting about one inch above and behind the left ear, it is strategically placed in the brain to receive and integrate many different types of information. These include visual, tactile, motor and conceptual information about the world and the status of the body.
“Imagine you reach out and take a sip from the cup on your desk,” said Dr. Brad Mahon, study coauthor and a professor in the CMU Department of Psychology. “Before you start that movement of your hand to grasp the cup, your brain has already computed all kinds of properties about the cup and its contents, including its weight, how slippery it is, where its surface is likely to be hot, its location relative to your hand and how big a sip you plan to take.”
“This study shows that the final posture of the hand when grasping an object is built from a vocabulary of basic building blocks, and that vocabulary is the same across individuals,” said Mahon.

Despite the very different ways people interact with objects, and despite differences in manual dexterity across individuals, all humans have a common neural system that supports complex, object-directed interactions. Similarly, while human infants are not born speaking any particular language, they can become native speakers of any language in the world — and all humans have a common neural system that supports language.
“This study moves us one step closer to understanding the fundamental principles of brain organization that make human tool use possible,” said Caglar.
This research was funded by grants from the National Institutes of Health, the Pennsylvania Department of Health and the European Research Council.