Does ChatGPT actually make us stupid and lazy?

Since ChatGPT's debut in 2022, generative AI has rapidly entered our work, study, and personal lives, accelerating research and content creation at an unprecedented pace.

Understandably, enthusiasm for generative AI tools has driven an adoption rate even faster than that of the internet or the PC, but experts warn that we should be careful. As with any new technology, generative AI can move society forward in many ways, but it can also bring unintended consequences if left unchecked.

One of those voices is Natasha Govender-Ropert, Head of AI for Financial Crime at Rabobank. She joined TNW founder Boris Veldhuijzen van Zanten on the latest episode of "Kia's Next Big Drive" to talk about AI ethics, bias, and the question of whether we are outsourcing our brains to machines.

Watch the full interview, recorded on the way to TNW2025 in Kia's all-electric EV9:

One question that should be on our minds: as we increasingly turn to AI for answers, what effect could this reliance have on our own intelligence?

A recent study on the use of ChatGPT to write essays has spawned a string of sensational headlines, from "Researchers say using ChatGPT can rot your brain" to "ChatGPT may make you lazy and stupid". But is that really the case?

Your brain on gen ai

Here's what actually happened: researchers gave an essay-writing task to participants in the Boston area. One group used ChatGPT, another used Google (without the help of AI), and the third had to write using nothing but their own brains. As they wrote, their brain activity was measured with electrodes.

After three sessions, the brain-only group showed the highest neural connectivity. The ChatGPT users? The lowest. It seemed as if the AI-assisted participants were running on autopilot, while the others had to think harder to get words on the page.

For the fourth round, the roles were reversed. The brain-only group had to use ChatGPT this time, while the AI group had to go it alone. The result? The former improved their essays. The latter struggled to even remember what they had written.

Overall, over the four months in which the study was carried out, the other groups outperformed the ChatGPT users at the neural, linguistic, and behavioural levels, while those using ChatGPT spent less time on their essays and often simply copied and pasted.

English teachers who reviewed their work said it lacked original thought and "soul". Sounds alarming, right? Maybe, but the truth is more complicated than the sensationalist headlines suggest.

The results were less about brains rotting and more about mental shortcuts. They showed that offloading work to LLMs can reduce intellectual engagement, but that with active, thoughtful use these risks can be avoided. The researchers also emphasised that while the study raises interesting questions for further research, it is far too small and simple to draw definitive conclusions from.

The death of critical thinking?

While the results (which have yet to be peer-reviewed) suggest we should reconsider how we use these tools in educational, professional, and personal contexts, the TLDR headlines were crafted for clicks rather than accuracy.

The researchers seem to share these concerns. They created a website with a FAQ page on which they asked reporters not to use language that portrays the results inaccurately or sensationally.

One disclaimer on the page addresses head-on whether it is safe to say that LLMs are, in essence, making us dumber. Source: FAQ for "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task", https://www.brainonllm.com/faq

Ironically, they also called out reporters who used LLMs to summarise the paper, adding: "Your human feedback is very welcome, if you read the paper or parts of it. The study also has a list of limitations, which we very clearly list in the paper and on the website."

There are two conclusions we can safely draw from this study:

  • Further research into how LLMs should be used in educational settings is essential
  • It remains crucially important that students, reporters, and the public at large stay critical of the information we receive from generative AI

Researchers at the Vrije Universiteit Amsterdam are concerned that, with our increasing reliance on LLMs, critical thinking (our ability and willingness to question and change social norms) could genuinely be at risk.

"Students may be less likely to carry out thorough or comprehensive search processes themselves, as they defer to the GenAI output as relevant and informed. Non-mainstream perspectives underlying the output may be less visible, alternative viewpoints may go unconsidered, and the assumptions informing the claims may simply be adopted."

These risks point to a deeper problem with AI. If we take its outputs at face value, we can overlook embedded biases and unquestioned assumptions. Combating this requires not only technical fixes, but also critical reflection on what we mean by bias in the first place.

These problems are of central importance for the work of Natasha Govender-Ropert, head of the AI for financial crimes at Rabobank. Your role focuses on building up a responsible, trustworthy AI by spending prejudices. But as she found in “Kia's Next Big Drive” on the TNW founder Boris Veldhuijzen van Zanten, the tendency is a subjective term and must be defined for each individual and every company.

"Bias has no consistent definition. What I deem to be biased or unbiased may be different for someone else. This is a decision that we as humans and individuals have to make."

Social norms and prejudices are not fixed; they are constantly changing. Yet while society evolves, the historical data we use to train our LLMs does not. We must remain critical of the information we receive, whether it comes from our fellow humans or our machines, if we want to build a fairer and more equitable society.
