Human intelligence heavily depends on acquiring knowledge from other humans, accumulated over time as part of our cultural evolution. This type of social learning, known in the literature as cultural transmission, enables us to imitate actions and behaviours in real time. But can AI develop social learning skills the same way?
Imitation learning has long been a training approach for artificial intelligence: an algorithm observes a human completing a task and then tries to mimic them. But AI tools typically need many demonstrations and exposure to vast amounts of data before they can copy their trainer successfully.
Now, a groundbreaking study by DeepMind researchers claims that AI agents can also demonstrate social learning skills in real time, by imitating a human in novel contexts “without using any pre-collected human data.”
Specifically, the team focused on a particular form of cultural transmission, known as observational learning or (few-shot) imitation, which refers to the copying of body movement.
DeepMind ran its experiment in a simulated environment called GoalCycle3D, a virtual world with uneven terrain, footpaths, and obstacles, which the AI agents had to navigate.
To help the AI learn, the researchers used reinforcement learning. For those unfamiliar with behavioural psychology, this method (closer to Skinner's operant conditioning than Pavlov's reflexes) rewards every behaviour that leads to the desired result: in this case, finding the correct course.
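To make the reward idea concrete, here is a minimal tabular Q-learning sketch. This is an illustrative assumption, not DeepMind's setup: GoalCycle3D is a 3D world trained with deep RL, whereas this toy reduces it to a one-dimensional corridor where only reaching the goal yields a reward.

```python
import random

random.seed(0)  # deterministic toy run

N_STATES = 5        # corridor positions 0..4; the goal is state 4
ACTIONS = [-1, 1]   # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move along the corridor; reward 1.0 only on reaching the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit the best-known action, sometimes explore
        if random.random() < 0.1:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])
        nxt, r = step(s, a)
        # Q-update: nudge the estimate toward reward + discounted future value
        q[(s, a)] += 0.5 * (r + 0.9 * max(q[(nxt, x)] for x in ACTIONS) - q[(s, a)])
        s = nxt

# After training, the greedy policy steps right toward the goal from every state.
policy = [max(ACTIONS, key=lambda x: q[(s, x)]) for s in range(N_STATES - 1)]
```

The key property is that the agent is never told the route; it discovers it purely because rewarded behaviour gets reinforced, which is the same principle driving the agents in the study.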
In the next stage, the team added expert agents (either hard-coded or human-controlled) that already knew how to navigate the simulation. The AI agents quickly worked out that the best way to reach their destination was to learn from the experts.
The researchers’ observations were twofold. Firstly, they found that the AI not only learned faster when mimicking the experts, but also that it applied the knowledge it had gained to other virtual paths. Secondly, DeepMind discovered that the AI agents could still use their new skills even in the absence of the experts, which, according to the study’s authors, constitutes an example of social learning.
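The second finding, retaining the behaviour after the expert is gone, can be sketched with a toy example. This is a deliberate simplification and an assumption on my part: in the study the agents learn to follow experts purely through reinforcement learning, whereas here the learner simply tallies which action the expert takes in each state during a single live demonstration, then reproduces it alone.

```python
from collections import Counter, defaultdict

N_STATES = 5  # same toy corridor: positions 0..4, goal at state 4

def expert_policy(state):
    # hypothetical hard-coded expert: always steps toward the goal on the right
    return 1

# Phase 1: the learner watches one episode of the expert (no pre-collected data,
# just live observation) and records which action was taken in each state.
counts = defaultdict(Counter)
s = 0
while s != N_STATES - 1:
    a = expert_policy(s)
    counts[s][a] += 1
    s = max(0, min(N_STATES - 1, s + a))

def learned_policy(state):
    # reproduce the most frequently observed action for this state
    return counts[state].most_common(1)[0][0]

# Phase 2: the expert is removed; the learner navigates on its own.
s, steps = 0, 0
while s != N_STATES - 1:
    s = max(0, min(N_STATES - 1, s + learned_policy(s)))
    steps += 1
```

The learner reaches the goal without the expert present, mirroring (in miniature) the retention effect the authors describe as evidence of social learning.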
While the authors note that more research is needed, they believe that their method can pave the way “for cultural evolution to play an algorithmic role in the development of artificial general intelligence.” They also look forward to further interdisciplinary cooperation between the fields of AI and cultural evolutionary psychology.
Despite its early stage, DeepMind’s breakthrough could have significant implications for the artificial intelligence industry. Such an advancement has the potential to reduce the traditional, resource-intensive training of algorithms, while increasing their problem-solving capabilities. It also raises the question of whether artificial intelligence could ever learn to acquire social and cultural elements of human thought.
The full study is published in the journal Nature Communications.