How Google Makes Money From Spreading False Information

Search engines are one of society’s main gateways to information, but they are also conduits of misinformation. Like problematic social media algorithms, search engines learn to serve you what you and others have clicked on before. Because people are drawn to the sensational, this dance between algorithms and human nature can foster the spread of misinformation.

Search engine companies, like most online services, make money not only from selling ads, but also from tracking users and selling their data through real-time bidding. People are often led to misinformation by their desire for sensational and entertaining news, as well as information that is either controversial or confirms their views. One study found that more popular YouTube videos about diabetes are less likely to contain medically valid information than less popular videos on the subject, for example.

Ad-driven search engines, like social media platforms, are designed to reward clicks on enticing links because they help search companies improve their business metrics. As a researcher who studies search and recommendation systems, my colleagues and I have shown that this dangerous combination of the corporate profit motive and individual susceptibility makes the problem difficult to fix.

How search results go wrong

When you click on a search result, the search algorithm learns that the link you clicked is relevant to your search query. This is known as relevance feedback. This feedback helps the search engine to give this link a higher weight for this query in the future. If enough people click this link enough times to provide strong relevance feedback, this website will appear higher in search results for this and related queries.
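The relevance-feedback update described above can be sketched as a simple click-weighted score per query-link pair. This is a toy illustration, not how any real search engine is implemented; the function names and the learning rate are hypothetical:

```python
from collections import defaultdict

# query -> link -> learned relevance weight (hypothetical toy model)
weights = defaultdict(lambda: defaultdict(float))

def record_click(query, link, learning_rate=0.1):
    """Treat a click as positive relevance feedback for (query, link)."""
    weights[query][link] += learning_rate

def rank(query, links):
    """Order links by their learned weight for this query."""
    return sorted(links, key=lambda link: weights[query][link], reverse=True)

links = ["site-a", "site-b", "site-c"]
# Enough users click site-c for this query, so it rises in the ranking.
for _ in range(5):
    record_click("new deadly spider", "site-c")
print(rank("new deadly spider", links))  # site-c now ranks first
```

The key property is that the ranking function never inspects the content of the page, only the accumulated clicks: whatever people click on, accurate or not, gains weight.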

People are more likely to click links that appear near the top of the search results list. This creates a positive feedback loop: the higher a website is displayed, the more clicks it gets, and those clicks in turn move it higher or keep it at the top. Search engine optimization techniques exploit this to increase the visibility of websites.
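The feedback loop can be made visible with a short simulation: if click probability depends on display position, a site that starts with a small head start pulls further and further ahead. The position click probabilities below are made-up values for illustration only:

```python
def simulate_feedback_loop(scores, rounds=100):
    """Toy rich-get-richer model: each round, sites are ranked by score,
    and each site's score grows by the click probability of its position."""
    position_click_prob = [0.6, 0.3, 0.1]  # assumed values, top spot wins most clicks
    for _ in range(rounds):
        ranked = sorted(scores, key=scores.get, reverse=True)
        for pos, site in enumerate(ranked):
            # deterministic expected-click update instead of random draws
            scores[site] += position_click_prob[pos]
    return scores

scores = {"site-a": 1.0, "site-b": 0.9, "site-c": 0.8}
final = simulate_feedback_loop(scores)
# site-a's tiny initial lead compounds into a large one
```

Because the top position attracts the most clicks and clicks raise the score, the initial ordering is never overturned; the gap only widens. This is the dynamic that keeps a viral page, accurate or not, at the top.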

This misinformation problem has two aspects: how a search algorithm is evaluated, and how people react to headlines, titles and snippets. Search engines, like most online services, are judged on a number of metrics, one of which is user engagement. It is in search engine companies’ interest to give you things to read, look at or simply click on. Therefore, when a search engine or recommendation system compiles a list of items to display, it calculates the probability that you will click on each item.

Traditionally, this should surface the most relevant information. However, the notion of relevance has become fuzzy, because people use search to find entertaining results as well as genuinely relevant information.

Imagine you are looking for a piano tuner. If someone showed you a video of a cat playing a piano, would you click on it? Many would, even though it has nothing to do with piano tuning. The search service takes this as positive relevance feedback and learns that it is OK to show a cat playing a piano when people search for piano tuners.

In many cases, this is even better for the company than showing only relevant results: people enjoy watching funny cat videos, and the search system gets more clicks and more user engagement.
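The piano-tuner example can be made concrete: a ranker that optimizes for predicted clicks puts the cat video first, while one that optimizes for relevance does not. The items and all the scores below are made-up numbers for illustration:

```python
# Candidate results with an assumed relevance score and an assumed
# predicted click-through probability (all numbers are hypothetical).
candidates = [
    {"title": "Certified piano tuners near you", "relevance": 0.9, "click_prob": 0.2},
    {"title": "Cat plays piano (funny video)",   "relevance": 0.1, "click_prob": 0.7},
]

def rank_by_relevance(items):
    """Order results by how well they match the query's intent."""
    return sorted(items, key=lambda x: x["relevance"], reverse=True)

def rank_by_engagement(items):
    """Order results by how likely users are to click them."""
    return sorted(items, key=lambda x: x["click_prob"], reverse=True)

print(rank_by_relevance(candidates)[0]["title"])   # piano tuners first
print(rank_by_engagement(candidates)[0]["title"])  # cat video first
```

The two orderings diverge exactly when the most clickable item is not the most relevant one, which is the gap that sensational content exploits.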

This might seem harmless. What if from time to time people get distracted and click on results that are not relevant to the query? The problem is that people are drawn to exciting images and sensational headlines. They tend to click on conspiracy theories and sensational news, not just cats playing the piano, and more so than real news or relevant information.

Famous but fake spiders

In 2018, Google searches spiked for a Facebook post claiming that a new deadly spider had killed several people in multiple states. During the first week of this search trend, my colleagues and I analyzed the top 100 Google search results for “new deadly spider”.

The first two pages of Google search results for “New Deadly Spider” in August 2018 (shaded area) referred to the original fake news post on the subject, not to debunking or otherwise factual information. Chirag Shah, CC BY-ND

The story turned out to be fake, but the people who searched for it were mostly exposed to misinformation stemming from the original fake post. As users continued to click on and share this misinformation, Google continued to display those pages at the top of its search results.

This pattern of exciting, unverified stories surfacing and people clicking on them continues, with people apparently either uninterested in the truth or believing that if a trusted service like Google Search shows them these stories, the stories must be true. More recently, a debunked report claiming that China leaked the coronavirus from a laboratory gained traction on search engines because of this vicious cycle.

Find the misinformation

To test how well people differentiate between accurate information and misinformation, we developed a simple game called “Google Or Not”. This online game shows two sets of results for the same query. The goal is simple: pick the set that is credible, trustworthy or most relevant.

A screenshot showing two sets of Google search results side by side. In tests, people about half the time could not tell the difference between Google search results containing misinformation and those with only trustworthy results. Chirag Shah, CC BY-ND

One of the two sets contains a result or two with verified and flagged misinformation or a debunked story. We made the game publicly available and promoted it through various social media channels. In total, we collected 2,100 responses from over 30 countries.

When we analyzed the responses, we found that about half the time people mistakenly selected the set with one or two misinformation results as trustworthy. Our experiments with hundreds of other users over many iterations produced similar results. In other words, roughly half the time people select results that contain conspiracy theories and fake news. The more people select these inaccurate and misleading results, the more the search engines learn that this is what people want.

Big Tech regulation and self-regulation issues aside, it is important that people understand how these systems work and how they make money. Otherwise, market economics and people’s natural attraction to eye-catching links will keep the vicious cycle going.

This article by Chirag Shah, Associate Professor of Information Science at the University of Washington, is republished from The Conversation under a Creative Commons license. Read the original article.
