How deepfakes may help implant false memories in our minds

The human brain is a complex, wondrous thing. As best we can tell, it’s the pinnacle of biological evolution. But it doesn’t come with any pre-installed security software. And that makes it ridiculously easy to hack.

We like to think of the human brain as a giant neural network that speaks its own language. When we talk about developing brain-computer interfaces, we usually mean some kind of transceiver that interprets brainwaves. But the truth is, we’ve been hacking human brains since the dawn of time.

Think of the actor who summons a sad memory to produce tears, or the detective who uses reverse psychology to draw out a suspect’s confession. These examples may seem less fantastic than, say, the memory-wiping neuralyzer from Men in Black, but the end result is essentially the same: we’re manipulating the data our minds use to ascertain baseline reality. And we’re really good at it.

Background

A team of researchers from universities in Germany and the UK today published preprint research detailing a study in which they successfully implanted and then removed false memories in test subjects.

According to the team:

Human memory is fallible and malleable. In forensic settings in particular, this poses a challenge because people may falsely remember events with legal implications that never actually occurred. Despite an urgent need for remedial action, there is virtually no research on whether and how rich false autobiographical memories can be reversed under realistic conditions (i.e., using reversal strategies that can be applied in real-world settings).

Basically, it is relatively easy to implant false memories. Getting rid of them is the hard part.

The study was conducted on 52 volunteers who agreed to let the researchers try, over several sessions, to plant a false childhood memory in their minds. After a while, many of the subjects began to believe the false memories. The researchers then asked the subjects’ parents to claim the false stories were true.

The researchers found that bringing in a trusted person made it easier both to embed false memories and to remove them.

From the paper:

The present study therefore not only replicates and extends earlier demonstrations of false memories but, crucially, documents their reversibility after the fact: Using two ecologically valid strategies, we show that rich but false autobiographical memories can mostly be undone. Importantly, the reversal was specific to false memories (i.e., did not occur for true memories).

Techniques for planting false memories have been around for a while, but there’s been precious little research on reversing them. In other words, this paper arrives not a moment too soon.

Enter deepfakes

There aren’t many positive uses for false memory implantation. Fortunately, most of us don’t really have to worry about being the target of a mind control conspiracy where we’re slowly being led to believe in a false memory over multiple sessions with the complicity of our own parents.

But that’s exactly what happens on Facebook every day. Everything you do on the social network is recorded and codified to build a detailed picture of exactly who you are. That data determines which ads you see, where you see them, and how often they appear. And if someone in your trusted network happens to make a purchase through an ad, you’re more likely to see those ads too.

But we all know that already, don’t we? After all, you can’t go a day without seeing an article about how Facebook and Google and all the other big tech companies are manipulating us. So why do we put up with it?

It’s because our brains are better at adapting to reality than we give them credit for. The moment we learn that a system has singled us out, we start to believe the system is telling us something about who we are as human beings.

A team of Harvard researchers wrote about this phenomenon back in 2016:

In a study we conducted with 188 students, we found that participants were more interested in buying a Groupon for a restaurant advertised as high-quality when they thought the ad had been targeted to them based on specific websites they had visited during a previous task (browsing the web to create a travel itinerary) versus when the ad was targeted based on demographics (age and gender) or not targeted at all.

What does this have to do with deepfakes? It’s simple: if we can be manipulated so easily by mere exposure to tiny ads in our Facebook feed, imagine what could happen if advertisers started hijacking the personas and visages of people we trust.

For example, you may not plan on buying any Grandma’s Cookies products anytime soon, but that might change when it’s your own granny telling you how delicious they are in the commercial you’re watching.

With the technology in place, it would be trivial for a big tech company to determine, for example, that you’re a college student who hasn’t seen your parents since last December. Armed with that knowledge, deepfake technology, and the data it already holds on you, it wouldn’t take much to generate targeted ads featuring your deepfaked parents urging you to buy hot cocoa or some such.

But what about false memories?

It’s all fun and games when it’s just a social media company using AI to convince you to buy some goodies. But what happens when a bad actor decides to break the law? Or worse: what if it’s the government, and it doesn’t even have to break the law?

Police use a variety of techniques to obtain confessions. And law enforcement agencies are generally under no obligation to tell the truth while doing so. In fact, in most places it’s perfectly legal for police officers to lie outright in order to get a confession.

A popular technique is to tell a suspect that their friends, family members, and alleged co-conspirators have already told the police that the suspect committed the crime. If you can convince someone that the people they respect and care about believe they did something wrong, it becomes easier for them to accept it as fact.

How many law enforcement agencies in the world currently have an explicit policy against using manipulated media to obtain a confession? Our guess would be: close to zero.

And that’s just one example. Imagine what an autocratic or iron-fisted government could do at scale with these techniques.

The best defense…

It’s good to know that there are already methods we can use to root out these false memories. As the European research team found, our brains tend to let go of false memories when challenged, but cling to the real ones. That makes us more resistant to attack than we might think.

However, that still leaves us on the defensive. For now, our only defense against the AI-assisted implantation of a false memory is to see it coming or to get help after it happens.

Unfortunately, the unknown unknowns make this a terrible security plan. We simply can’t anticipate every way a bad actor might exploit the loophole that makes our brains more pliable when someone we trust is helping the process along.

With deepfakes and enough time, you can convince someone of almost anything, as long as you can find a way to get them to watch your videos.

Our only real defense is to develop technology that can see through deepfakes and other AI-manipulated media. With brain-computer interfaces poised to hit consumer markets in the next few years, and AI-generated media becoming harder to distinguish from reality by the minute, we’re nearing a technological point of no return.
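
What might that detection technology look like? Here’s a minimal, purely illustrative sketch of one idea from the deepfake-detection literature: AI-generated imagery often carries telltale statistical fingerprints in the frequency domain. This is a toy heuristic, not a real detector, and the filename and threshold below are hypothetical stand-ins.

    import numpy as np
    from PIL import Image

    def high_freq_energy_ratio(path, cutoff=0.25):
        # Grayscale the image and compute its centered 2D power spectrum.
        img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
        # Measure the share of spectral energy outside a low-frequency disc,
        # where some generators leave unusual periodic artifacts.
        h, w = spectrum.shape
        yy, xx = np.ogrid[:h, :w]
        radius = np.hypot(yy - h / 2, xx - w / 2)
        low = spectrum[radius < cutoff * min(h, w) / 2].sum()
        return 1.0 - low / spectrum.sum()

    # "suspect_frame.png" and the 0.35 threshold are made-up stand-ins.
    ratio = high_freq_energy_ratio("suspect_frame.png")
    print("possible GAN artifacts" if ratio > 0.35 else "no spectral red flags")

Real detectors rely on trained classifiers rather than a single hand-picked threshold, but the core idea is the same: hunt for the statistical residue the generator leaves behind.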

Just as the invention of the gun made it possible for those untrained in sword fighting to win a duel, and the creation of the calculator gave those who struggle with math the ability to perform complex computations, we may be on the verge of an era in which psychological manipulation becomes a push-button enterprise.

Published on March 23, 2021 – 19:13 UTC
