Europe should crack down on AI-written reviews before it’s too late

Parts of modern life are inescapable. We all use mapping software to get directions, check the news on our phones, and read online reviews of products before we buy them.

Technology didn’t create these things, but it did democratize them, making them easier to access and engage with. Take online reviews. Today, people can express their honest opinions about products and services in ways that would have been impossible before.

But what the internet gives, it can also take away.

It didn’t take long for nefarious actors to realize they could exploit this newfound ability and flood the market with fake reviews, which incidentally created a completely new industry.

In the past few years, much of the discussion around fake reviews had died down. But now? They’re back with a vengeance, and it’s all because of AI.

The rise of large language models (LLMs) like ChatGPT means we’re entering a new era of fake reviews, and governments in Europe and the rest of the world need to act before it’s too late.

AI-written reviews? Who cares?

As blunt as that sounds, it’s a valid question. Fake reviews have been a part of online life for almost as long as the internet has existed. Will things really change if sophisticated machines write them instead of humans?

Spoilers: yes. Yes, they will.

The main differentiator is scale. Until now, text-generation software has been relatively simple. What it created was often sloppy and vague, meaning the public could immediately tell it was untrustworthy, made by a stupid computer and not a slightly less stupid person.

This meant that for machine-written fake reviews to succeed in deceiving people, humans had to be involved in the writing. The rise of LLMs means that’s no longer the case.

With ChatGPT, almost anyone can create hundreds of fake reviews that read as if they were written by a real person.

But again: so what? More fake reviews? Who cares? I put this question to Kunal Purohit, Chief Digital Services Officer at Tech Mahindra, an IT consulting company.

He tells me that “reviews are essential for businesses of all sizes.” The reason? They help companies “build brand awareness and trust with potential customers or prospects.”

This is becoming increasingly important in the modern world, as competition in the business sector makes customers ever more aware and demanding.

With user experience now a key selling point, and brands prioritizing this aspect of their business, Purohit says bad reviews can destroy a company’s ability to do business effectively.

In other words, fake reviews aren’t just something that can convince you to buy a well-reviewed book that turns out to be a bit boring. They can be wielded both positively and negatively, and when aimed at a business, they can seriously affect its reputation and ability to operate.

That is why we, and the EU, must take computer-generated reviews seriously.

But what is actually going on out there?

At this point, much of the discussion is academic. Yes, we know that AI-written reviews could be a problem, but are they one yet? What’s actually happening out there?

Purohit tells me that already, “AI-powered chatbots are being used to create fake reviews for marketplace products.” Despite the platforms’ best efforts, they are being inundated with computer-generated reviews.

This is confirmed by Keith Nealon, the CEO of Bazaarvoice, a company that helps retailers display user-generated content on their websites. He says he’s seen how “generative AI has recently been used to write fake product reviews” with the aim of “increasing the number of reviews for a product in order to achieve a higher conversion rate.”

Reviews written by AI are gaining ground, but, friends, this is just the beginning.

Long, hard years are ahead

The trust we have in reviews is about to be shaken.

Bazaarvoice’s Nealon says the deployment of AI at scale could have “serious implications for the future of online shopping,” especially if we reach a situation where “shoppers can no longer trust whether a product review is authentic.”

The temptation to use computer-generated reviews for business will only increase.

“We all want our apps to be at the top of the rankings, and we all know that one way to achieve that is by engaging users with reviews,” Simon Bain, CEO of encrypted data platform OmniIndex, tells me. “If there is an opportunity to mass-produce these quickly with AI, some companies will go down that route, just as some are already doing with click farms for other forms of user engagement.”

He goes on to say that while computer-written reviews are dangerous in themselves, the prospect of this method becoming yet another tool for click farms is even worse. Bain envisions a world where AI-generated text “can be combined with other activities like click fraud and mass-produced in a professional manner very cheaply.”

This means that AI-authored reviews aren’t just a problem in their own right; they have the potential to become a giant cog in an even larger misinformation machine, one that could end up eroding trust in all aspects of online life.

So… is there anything you can do?

Taking action against AI-written reviews

When it comes to combating computer-generated reviews, two common themes emerged across all the experts I spoke to. The first was that it’s going to be tough. And the second? We’ll need artificial intelligence to fight… artificial intelligence.

“It can be incredibly difficult to recognize AI-written content, especially when it’s produced by professionals,” says Bain. He believes we need to tackle this practice the same way we’ve tackled similar fraudulent activities: with AI.

According to Bain, this would work by analyzing huge pools of data around app usage and engagement, using tactics like “pattern recognition, natural language processing, and machine learning” to spot fraudulent content.
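
To make that concrete, here’s a minimal sketch of what such a pattern-recognition approach could look like: a small text classifier trained to separate genuine reviews from machine-written ones. To be clear, this is an illustration built on my own assumptions; the toy dataset, the scikit-learn pipeline, and the character n-gram features are choices made for the example, not a description of any vendor’s actual system.

```python
# Minimal sketch: classifying reviews as human- or machine-written.
# Assumes you already have labeled examples; the four below are toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = [
    "This product exceeded my expectations in every conceivable way.",      # AI-like
    "Arrived late and the box was dented, but it works fine.",              # human-like
    "An outstanding purchase that delivers exceptional value and quality.", # AI-like
    "Honestly mid. Battery dies by lunchtime. Wouldn't buy again.",         # human-like
]
labels = [1, 0, 1, 0]  # 1 = suspected machine-written, 0 = genuine

# Character n-grams pick up stylistic tics (phrasing, punctuation habits)
# rather than topic words, so the model generalizes across product types.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(reviews, labels)

# Probability that an unseen review is machine-written.
new_review = "A truly remarkable product. Highly recommended to everyone!"
print(model.predict_proba([new_review])[0][1])
```

In practice, a platform would combine a model like this with the behavioral signals Bain mentions, such as posting velocity and account patterns, rather than relying on text alone.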

Purohit and Nealon agree, and in our conversations each pointed to the potential of AI to solve the problem it created.

Still, it’s Chelsea Ashbrook, Senior Manager of Corporate Digital Experience at biotechnology company Genentech, who puts it best: “Looking ahead, we may need to develop new tools and techniques. It is what it is; AI is getting smarter and we need to be too.”

The government must intervene

This is where we run into another problem: yes, AI tools can fight computer-generated reviews, but how would that work in practice? What can actually be done?

And this is where government bodies like the EU come in.

I put this to Ashbrook. Governments “definitely have a lot of work to do,” she says, suggesting that one way for them to combat this imminent scourge could be “enacting policies that require transparency about the provenance of reviews.”

OmniIndex’s Bain, on the other hand, notes the importance of ensuring that existing laws and regulations around elements such as “fraud and cybercrime” remain up to date with how “[AI] is used.”

Tech Mahindra’s Purohit believes that we’re already seeing many positive initiatives and policies from governments and key players in the AI industry around the responsible use of the technology. Still, “there are several ways officials like the EU can prevent [it] from getting out of control.”

He points to “increasing research and development [and] strengthening regulatory frameworks” as two important elements of this strategy.

In addition, Purohit believes governments should update consumer protection laws to combat the dangers posed by AI-generated content. This could include measures such as “enforcing penalties for abusing or manipulating AI-generated reviews” or “platform liability for providing accurate and reliable information to consumers.”

There you go, Europe: feel free to use these ideas to get the ball rolling.

AI-Written Reviews: Here to Stay

Want to read the least shocking thing you’ve seen in a while? AI will change the world.

Still, the themes the press is obsessed with are things like the singularity and a possible AI-driven apocalypse. Which, to be honest, sound a lot sexier and generate a lot more clicks.

But in my opinion, it’s the smaller things like AI-written reviews that will have the biggest impact on our immediate lives.

Basically, society is built on trust, on the notion that there are people around us who share a vague set of similar values. Using AI in this small way has the potential to undermine that. When we can no longer believe what we see, hear or read, we no longer trust anything.

And if that happens? Well, it won’t be long before things start to crumble around us.

That’s why government bodies like the EU mustn’t wait to regulate seemingly inconsequential areas like AI-authored reviews. There must be regulation, and fast. Because if we hesitate too long, it may already be too late.
