Will AI replace humans? Yoshua Bengio warns of dangers from artificial intelligence

Professor Yoshua Bengio, at the One Young World Summit in Montreal, Canada, on Friday, September 20, 2024

Famed computer scientist Yoshua Bengio – a pioneer of artificial intelligence – has warned of the emerging technology's potential negative impact on society and called for more research to mitigate its risks.

Bengio, a professor at the University of Montreal and head of the Montreal Institute for Learning Algorithms, has won several awards for his work in deep learning, a branch of AI that attempts to mimic activity in the human brain in order to learn to recognize complex patterns in data.

But he has concerns about the technology and warned that some people with “a lot of power” might even want to see humanity replaced by machines.

“It's really important to look to the future where we have machines that are in many ways just as intelligent as we are, and what that would mean for society,” Bengio told CNBC's Tania Bryer at the One Young World Summit in Montreal, a gathering of young leaders who are tackling the challenges facing the world today.

Machines could soon have most of humans' cognitive abilities, he said. Artificial general intelligence (AGI) is a type of AI technology that aims to match or exceed human intelligence.

“Intelligence gives power. So who will control this power?” he said. “Systems that know more than most people can be dangerous in the wrong hands and lead to greater instability or terrorism on a geopolitical level, for example.”

According to Bengio, a limited number of organizations and governments will be able to afford to build powerful AI machines, and the larger the systems, the smarter they will become.

“It costs billions to build and train these machines [and] very few organizations and very few countries will be able to do this. That is already the case,” he said.

“There will be a concentration of power: economic power that can have a negative impact on markets; political power that could negatively impact democracy; and military power that could negatively impact the geopolitical stability of our planet. So lots of open questions that we need to study carefully and start resolving as soon as possible.”

We don't have methods to ensure that these systems don't harm people or turn against people… We don't know how to do that.

Yoshua Bengio

Head of the Montreal Institute for Learning Algorithms

Such outcomes are possible within decades, he said. “But if it's five years, we're not ready… because we don't have methods to ensure that these systems don't harm people or turn against people… We don't know how to do that,” he added.

There are arguments that the way AI machines are currently trained “would lead to systems that turn against humans,” Bengio said.

“Furthermore, there are people who might want to abuse this power, and there are people who might be happy to see humanity replaced by machines. I mean, it's a fringe view, but these people can have a lot of power, and they can act on it unless we put the right guardrails in place now,” he said.

AI guidance and regulation

Bengio endorsed an open letter in June titled “A Right to Warn About Advanced Artificial Intelligence.” It was signed by current and former employees of OpenAI – the company behind the viral AI chatbot ChatGPT.

The letter warned of “serious risks” posed by advancing AI and called on scientists, policymakers and the public to consult on mitigating those risks. OpenAI has faced mounting safety concerns in recent months, and its “AGI Readiness” team was disbanded in October.

“The first thing governments need to do is impose regulation requiring [companies] to register when they build these frontier systems that are similar to the largest ones and cost hundreds of millions of dollars to train,” Bengio told CNBC. “Governments should know where they are, you know, the specifics of these systems.”

Because AI is evolving so quickly, governments need to be “a little creative” and enact laws that can adapt to technological changes, Bengio said.

It is not too late to direct the development of societies and humanity in a positive and beneficial direction.

Yoshua Bengio

Head of the Montreal Institute for Learning Algorithms

Companies that develop AI must also be held liable for their actions, the computer scientist said.

“Liability is also another instrument that can push [companies] to behave well because… when it comes to their money, the fear of being sued will push them to do things that protect the public. If they know they can't be sued because it's kind of a gray area right now, then they're not necessarily going to behave well,” he said. “[Companies] compete with each other, and they believe that the first to arrive at AGI will dominate. So it's a race, and it's a dangerous race.”

The process of legislating to make AI safe will be similar to the way rules have been developed for other technologies, such as airplanes or cars, Bengio said. “In order to reap the benefits of AI, we need to regulate. We need to put [in] guardrails. We need democratic control over how the technology is developed,” he said.

Misinformation

The spread of misinformation, particularly around elections, is a growing problem as AI advances. In October, OpenAI said it had disrupted “more than 20 operations and deceptive networks from around the world that were attempting to exploit our models.” These included social media posts from fake accounts created in the run-up to elections in the U.S. and Rwanda.

“One of the biggest near-term concerns, but one that will only increase as we move toward more powerful systems, is disinformation, misinformation and the ability of AI to influence policy and opinion,” Bengio said. “As we move forward, we will have machines that can produce more realistic images, more realistic-sounding imitations of voices and more realistic videos,” he said.

That influence could extend to interactions with chatbots, Bengio said, pointing to a study by Italian and Swiss researchers showing that OpenAI's large language model GPT-4 can be better at getting people to change their minds than a human. “This was just a scientific study, but you can imagine that there are people who read this and want to do this to interfere with our democratic processes,” he said.

The “hardest question of all”

Bengio said the “hardest question of all” is: “If we create beings that are smarter than us and have their own agendas, what does that mean for humanity? Are we in danger?”

“These are all very difficult and important questions and we don’t have all the answers. We need much more research and precautions to mitigate the potential risks,” Bengio said.

He called on people to take action. “We have freedom of choice. It is not too late to guide the development of societies and humanity in a positive and beneficial direction,” he said. “But to do that we need enough people who understand both the benefits and the risks, and we need enough people to work on the solutions. And the solutions could be technological, they could be political… policy, but we need enough effort going in those directions right now,” Bengio said.

—CNBC's Hayden Field and Sam Shead contributed to this report.
