AI breakthrough: Machines that master human tasks through language

summary: Researchers have made a major leap in artificial intelligence by developing an AI capable of learning new tasks from verbal or written instructions and then describing those tasks in language to a second AI, which can then perform them. This is the first demonstration of a distinctly human-like ability in AI: converting instructions into actions and communicating those actions linguistically to a peer.

The team connected an artificial neural model to a pre-trained language-understanding network in order to simulate the language-processing areas of the brain. This breakthrough not only advances our understanding of the interplay between language and behavior, but also holds great promise for robotics, envisioning a future where machines can communicate and learn from each other in ways similar to humans.

Key facts:

  1. Human-like learning and communication in artificial intelligence: The University of Geneva team created an AI model that can perform tasks based on verbal or written instructions and communicate these tasks to other AIs.
  2. Advanced neural model integration: By combining a pre-trained language model with a simpler network, the researchers mimicked the areas of the human brain responsible for language perception, interpretation, and production.
  3. Promising applications in robotics: This innovation opens up new possibilities for robotics, allowing the development of humanoid robots that understand and communicate with humans and each other.

source: University of Geneva

Performing a new task based solely on verbal or written instructions, and then describing it to others so that they can reproduce it, is a cornerstone of human communication that has so far eluded artificial intelligence.

A team from the University of Geneva (UNIGE) has succeeded in modeling an artificial neural network capable of this cognitive feat. After learning and performing a series of basic tasks, this AI was able to provide a linguistic description of them to a "sister" AI, which then carried them out.


These promising results, especially with regard to robotics, have been published in Nature Neuroscience.

[Image caption] In the first phase of the experiment, neuroscientists trained the network to mimic Wernicke's area, the part of our brain that enables us to perceive and interpret language. Credit: Neuroscience News

Performing a new task without prior training, based solely on verbal or written instructions, is a uniquely human ability. Furthermore, once we learn the task, we are able to describe it so that someone else can reproduce it.

This dual ability distinguishes us from other species that, in order to learn a new task, need numerous experiences accompanied by positive or negative reinforcement signals, without being able to communicate them to their peers.

One subfield of artificial intelligence (AI) – natural language processing – seeks to recreate this human ability with machines that understand and respond to spoken or written data. This technology is based on artificial neural networks, inspired by our biological neurons and the way they transmit electrical signals to each other in the brain.

However, the neural computations that would make possible the cognitive feat described above are still poorly understood.

"Currently, conversational agents using AI are capable of integrating linguistic information to produce text or an image. But, as far as we know, they are not yet capable of translating a verbal or written instruction into a sensorimotor action, much less explaining it to another AI so that it can reproduce it," explains Alexandre Pouget, full professor in the Department of Basic Neurosciences at the UNIGE Faculty of Medicine.

Model brain

The researcher and his team have succeeded in developing an artificial neural model with this dual ability, albeit with prior training.

"We started from an existing model of artificial neurons, S-Bert, which contains 300 million neurons and is pre-trained to understand language. We connected it to another, simpler network of a few thousand neurons," explains Reidar Riveland, a PhD student in the Department of Basic Neurosciences at the UNIGE Faculty of Medicine and first author of the study.
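The architecture described above – a large frozen language encoder feeding a small trainable sensorimotor network – can be sketched in a few lines. This is a minimal illustration, not the authors' code: the hash-based `embed_instruction` stands in for a pre-trained encoder such as S-Bert, and all dimensions, weights, and names are invented for the example.

```python
import hashlib
import numpy as np

rng = np.random.default_rng(0)

EMBED_DIM = 16   # stand-in for the sentence-embedding size of a model like S-Bert
HIDDEN = 32      # the small "sensorimotor" network of a few thousand neurons
SENSORY = 4      # toy sensory input dimension
MOTOR = 2        # toy motor output, e.g. point left vs. right

def embed_instruction(text: str) -> np.ndarray:
    """Stand-in for a frozen pre-trained language encoder: maps an
    instruction string to a fixed-size vector. Here a deterministic
    hash-seeded projection, purely illustrative."""
    seed = int(hashlib.md5(text.encode()).hexdigest()[:8], 16)
    return np.random.default_rng(seed).standard_normal(EMBED_DIM)

# The small trainable network: one recurrent layer plus a linear readout.
W_in = rng.standard_normal((HIDDEN, SENSORY + EMBED_DIM)) * 0.1
W_rec = rng.standard_normal((HIDDEN, HIDDEN)) * 0.1
W_out = rng.standard_normal((MOTOR, HIDDEN)) * 0.1

def run_trial(instruction: str, stimuli: np.ndarray) -> np.ndarray:
    """Condition the sensorimotor network on the instruction embedding
    and step it through a sequence of sensory inputs."""
    z = embed_instruction(instruction)
    h = np.zeros(HIDDEN)
    for x in stimuli:
        h = np.tanh(W_in @ np.concatenate([x, z]) + W_rec @ h)
    return W_out @ h  # motor readout

out = run_trial("respond on the side where the stimulus appears",
                rng.standard_normal((5, SENSORY)))
print(out.shape)  # (2,)
```

In the study, only the small network is trained on the tasks; the instruction embedding conditions its dynamics, which is what lets a new sentence steer behavior without new task training.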


In the first phase of the experiment, the neuroscientists trained this network to mimic Wernicke's area, the part of our brain that enables us to perceive and interpret language. In the second phase, the network was trained to reproduce Broca's area, which, under the influence of Wernicke's area, is responsible for producing and articulating words. The entire process was run on conventional laptops. Written instructions in English were then given to the AI.

For example: pointing to the location – left or right – where a stimulus is perceived; responding in the direction opposite to the stimulus; or, more complex, indicating which of two visual stimuli with a slight difference in contrast is the brighter. The scientists then evaluated the model's output, which simulated an intention to move – in this case, pointing.
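The three example task rules can be restated as plain functions. This is only a paraphrase of the behaviors described above; the function names and encodings are invented for illustration and are not the study's task battery.

```python
def pro_response(side: str) -> str:
    # Point toward the side where the stimulus is perceived.
    return side

def anti_response(side: str) -> str:
    # Point in the direction opposite to the stimulus.
    return "right" if side == "left" else "left"

def brighter(contrast_left: float, contrast_right: float) -> str:
    # Report which of two stimuli has the higher contrast.
    return "left" if contrast_left > contrast_right else "right"

print(pro_response("left"), anti_response("left"), brighter(0.4, 0.7))
# left right right
```

The point of such a battery is that the rules share components (perceive a side, pick a direction, compare stimuli), so language can recombine skills the network has already practiced.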

Once these tasks were learned, the network was able to describe them to a second network – a copy of the first – so that it could reproduce them. "To our knowledge, this is the first time that two AIs have been able to talk to each other in a purely linguistic way," says Alexandre Pouget, who led the research.
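The linguistic hand-off between the two networks can be sketched as follows. This is a deliberately simplified illustration, not the paper's mechanism: the hash-based `embed` stands in for a shared pre-trained encoder, network A picks the candidate sentence closest to its internal task representation, and network B's "execution" is reduced to a lookup.

```python
import hashlib
import numpy as np

def embed(text: str, dim: int = 16) -> np.ndarray:
    """Toy stand-in for a shared pre-trained sentence encoder
    (deterministic hash-seeded unit vector)."""
    seed = int(hashlib.md5(text.encode()).hexdigest()[:8], 16)
    v = np.random.default_rng(seed).standard_normal(dim)
    return v / np.linalg.norm(v)

CANDIDATES = [
    "point toward the stimulus",
    "point away from the stimulus",
    "choose the brighter stimulus",
]

def describe(task_repr: np.ndarray) -> str:
    """Network A: emit the sentence whose embedding best matches its
    internal representation of the task it has just learned."""
    sims = [float(task_repr @ embed(s)) for s in CANDIDATES]
    return CANDIDATES[int(np.argmax(sims))]

def perform(sentence: str) -> str:
    """Network B (a copy of A): condition behavior on the received
    sentence – here a simple lookup for illustration."""
    rules = {
        "point toward the stimulus": "pro",
        "point away from the stimulus": "anti",
        "choose the brighter stimulus": "contrast",
    }
    return rules[sentence]

# Suppose A's internal representation aligns with the anti-response task.
msg = describe(embed("point away from the stimulus"))
print(perform(msg))  # anti
```

The essential idea carried over from the study is that both networks share one semantic embedding space, so a sentence produced by one is directly interpretable by the other.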

Toward future humanoids

This model opens new horizons for understanding the interaction between language and behavior. This is particularly promising for the robotics sector, where developing technologies that enable machines to talk to each other is a major issue.

"The network we have developed is very small. Nothing now stands in the way of developing, on this basis, much more complex networks that could be integrated into humanoid robots capable of understanding us, but also of understanding each other."


About this AI research news

Author: Antoine Guinot
Source: University of Geneva
Contact: Antoine Guinot – University of Geneva
Image: The image is credited to Neuroscience News

Original research: Open access.
"Natural language instructions induce compositional generalization in networks of neurons" by Reidar Riveland et al. Nature Neuroscience


Abstract

Natural language instructions induce compositional generalization in networks of neurons

One of the fundamental human cognitive accomplishments is to interpret linguistic instructions in order to perform novel tasks without explicit task experience. However, the neural computations that can be used to achieve this are still poorly understood. We use advances in natural language processing to create a neural model of generalization based on linguistic instructions.

The models are trained on a set of common psychophysical tasks and receive instructions embedded by a pre-trained language model. Our best models can perform a previously unseen task with an average performance of 83% correct based solely on linguistic instructions (that is, zero-shot learning).

We find that language scaffolds sensorimotor representations, such that activity for interrelated tasks shares a common geometry with the semantic representations of instructions, allowing language to cue the proper composition of practiced skills in novel settings.

We then show how this model can generate a linguistic description of a novel task it has identified using only motor feedback, which can subsequently guide a partner model to perform the task.

Our models offer several experimentally testable predictions outlining how linguistic information must be represented in order to facilitate flexible and general cognition in the human brain.
