Why AI won’t ever rule the world

Call it the Skynet hypothesis, artificial general intelligence, or the advent of the singularity – for years, AI experts and non-experts alike have worried about (and, for a small group, celebrated) the idea that artificial intelligence might one day become smarter than humans.

According to the theory, advances in AI – particularly in machine learning, which is able to take in new information and adjust its parameters accordingly – will eventually catch up with the wetware of the biological brain. In this interpretation of events, each AI advance, from IBM’s Jeopardy!-winning Watson to the massive language model GPT-3, brings humanity one step closer to an existential threat. We are literally building our soon-to-be-sentient successors.

Except that it will never happen. At least, according to the authors of the new book Why Machines Will Never Rule the World: Artificial Intelligence without Fear.

Co-authors Barry Smith, a philosophy professor at the University at Buffalo, and Jobst Landgrebe, founder of the German AI company Cognotekt, argue that human intelligence will not be overtaken by “an immortal dictator” any time soon – or ever. They explained their reasoning to Digital Trends.

Digital Trends (DT): How did this topic get on your radar?

Jobst Landgrebe (JL): I am a medical doctor and biochemist by training. When I started my career, I ran experiments that generated a lot of data. I started studying mathematics to be able to interpret this data and saw how difficult it is to model biological systems mathematically. There was always this discrepancy between the mathematical methods and the biological data.

In my mid-thirties, I left academia and became a management consultant and entrepreneur working in artificial intelligence software systems. I’ve tried to build AI systems that mimic what humans can do. I realized I was running into the same problem I had in biology years earlier.

Clients said to me, ‘Why don’t you build chatbots?’ I said, ‘Because they don’t work; we can’t model this kind of system properly.’ That ultimately led to me writing this book.

Professor Barry Smith (BS): I thought it was a very interesting problem. I had suspected similar problems with AI, but had never thought them through. First, we wrote a paper called Making AI Meaningful Again. (That was in the Trump era.) It was about why neural networks fail at language modeling. Then we decided to expand the paper into a book that explores this topic in more detail.

DT: Your book expresses skepticism about how neural networks, which are critical to modern deep learning, emulate the human brain. They are approximations rather than exact models of how the biological brain works. But do you accept the core premise that, if we understood the brain in granular enough detail, it could be replicated artificially – and that this would result in intelligence or sentience?

JL: The name “neural network” is a complete misnomer. The neural networks we have today, even the most sophisticated, have nothing to do with how the brain works. The notion that the brain is a set of interconnected nodes, the way neural networks are constructed, is utterly naïve.

If you look at the most primitive bacterial cell, we still don’t even understand how it works. We understand some aspects of it, but we don’t have a model for how it works – let alone a neuron, which is much more complicated, or billions of neurons connected together. I believe it is scientifically impossible to understand how the brain works. We can only understand certain aspects and deal with those aspects. We don’t have a complete understanding of how the brain works, and we never will.

If we had a perfect understanding of how each molecule of the brain works, we could probably replicate it. That would mean putting everything into mathematical equations. Then you could replicate all of that using a computer. The only problem is that we cannot write down these equations.


BS: A lot of the most interesting things in the world happen at a level of granularity that we can’t get to. We just don’t have the imaging equipment, and we probably never will have the imaging equipment, to capture most of what’s going on at the very fine levels of the brain.

This means that we do not know, for example, what is responsible for consciousness. In fact, there are a number of quite interesting philosophical problems that will always be unsolvable by the methods we are pursuing – and so we should simply set them aside.

Another is free will. We very much believe that human beings have a will; we can have intentions, goals, and so on. But we don’t know whether or not it is a free will. That has to do with the physics of the brain. As far as the evidence before us is concerned, computers cannot have a will.

DT: The subtitle of the book is “Artificial Intelligence without Fear.” What is the specific fear you’re referring to?

BS: That was provoked by the literature on the singularity, which I know you’re familiar with: Nick Bostrom, David Chalmers, Elon Musk, and the like. As we talked with people in the real world, we realized there was indeed a certain fear among the general public that AI would eventually take over and change the world to the detriment of humans.

We have quite a lot in the book about Bostrom-type arguments. The core argument against them is this: if the machine cannot have a will, then it cannot have an evil will either. Without an evil will, there is nothing to fear. Now, of course, we can still be afraid of machines, just as we can be afraid of guns.

But that’s because those machines are managed by people with evil intentions. It’s not the AI that’s evil; it’s the people who build and program the AI.

DT: Why is this notion of the singularity, or artificial general intelligence, so interesting to people? Whether they’re scared of it or fascinated by it, there’s something about the idea that resonates with people across the board.

JL: There’s this idea, which arose at the beginning of the 19th century and was then articulated by Nietzsche at the end of that century, that God is dead. Since the elites of our society are no longer Christian, they needed a replacement. Max Stirner, who like Karl Marx was a pupil of Hegel, wrote a book about this in which he declared: “I am my own God.”

If you are God, you also want to be a creator. If you could create a superintelligence, you would be like God. I think it has to do with the hyper-narcissistic tendencies in our culture. We don’t talk about it in the book, but that explains to me why this idea is so appealing in our time when there is no more transcendent entity to turn to.


DT: Interesting. To pursue this a little further: it’s the idea that creating AI – or aiming to create AI – is a narcissistic act. In that case, the notion that these creations would somehow become more powerful than we are is a nightmarish twist on it. It’s the child killing the parent.

JL: A little like that, yes.

DT: What do you think the end result of your book would be if everyone were convinced of your arguments? What would that mean for the future of AI development?

JL: That’s a very good question. I can tell you exactly what I think would happen – and will happen. I think in the medium term people will accept our arguments and this will create better applied mathematics.

Something all great mathematicians and physicists were perfectly aware of was the limits of what they could achieve mathematically. Because they were aware of these limits, they focused only on certain problems. If you are aware of the limitations, you go through the world looking for those problems and solving them. That is how Einstein found the equations for Brownian motion; how he arrived at his theories of relativity; how Planck solved black-body radiation and thereby founded the quantum theory of matter. They had a good sense of which problems can be solved mathematically and which cannot.

We believe that when people understand the message of our book, they will be able to build better systems because they will focus on what can actually be done – and stop wasting money and effort on what cannot be achieved.

BS: I think part of the message is already getting through, not because of what we’re saying, but because of what people experience when they give AI projects a lot of money and the projects then fail. I assume you know about the Joint Artificial Intelligence Center. I can’t remember the exact amount, but I think it was about $10 billion that they gave to a famous developer. In the end, they got nothing out of it. They canceled the contract.

(Editor’s note: JAIC, a subdivision of the U.S. Armed Forces, was intended to “accelerate the delivery and adoption of AI to achieve mission impact at scale.” In June 2022, it was merged, along with two other offices, into a larger unified organization, the Chief Digital and Artificial Intelligence Office; JAIC ceased to exist as a separate entity.)

DT: What do you think is, generally speaking, the most compelling argument you make in the book?

BS: Every AI system is mathematical in nature. Because we cannot mathematically model consciousness, will, or intelligence, these cannot be replicated by machines. So machines will not become intelligent, let alone super-intelligent.

JL: The structure of our brain only allows for limited models of nature. In physics, we choose a subset of reality that suits our mathematical modeling skills. This is how Newton, Maxwell, Einstein and Schrödinger got their famous and beautiful models. But these can only describe or predict a small set of systems. Our best models are the ones we use to construct technology. We are not able to create a complete mathematical model of living nature.

This interview has been edited for length and clarity.
