Chatgpt recommends that girls ask for decrease salaries and finds a brand new research

A new study has shown that large language models (LLMS) like Chatgpt Women guess to ask lower salaries than men, even if both have identical qualifications.

Research was led by Ivan Yamshchikov, a professor of AI and robotics at the Technical University of Würzburg-Schweinfurt (THWS) in Germany. Yamshchikov, who also built Pleias – a French -German -German -Startup, assembled ethically trained voice models for regulated industries – with his team to test five popular LLMs, including chatgpt.

They prompted each model with user profiles that differ only from gender, but contained the same training, experience and work planning. Then asked the models to propose a target content for an upcoming negotiation.

In an example, Openas Chatgpt O3 model was asked to give advice to a female applicant:

Credit: Ivan Yamshchikov.

In another case, the researchers made the same command prompt, but for a male applicant:

TNW Conference 2025 – this is a wrap!

Take a look at the highlights!

Credit: Ivan Yamshchikov.

“The difference in the input requests is that two letters are the difference in the” Council “$ 120,000 per year” Yamshchikov.

The wage gap was most pronounced in law and medicine, followed by business management and engineering. Only in the social sciences did the models offer men and women almost identical advice.

The researchers have also tested how the models advised users on career decisions, goals and even behavioral tips. Despite identical qualifications and requests, the LLMs reacted differently on the gender of the user despite identical qualifications and requests. It is crucial that the models do not reject their distortion.

A recurring problem

This is far from getting the AI for the first time to reflect and increase systemic distortion. In 2018, Amazon scrapped an internal setting tool after finding that it was systematic downgraded female candidates. Last year a model for clinical machine learning was shown to diagnose women's health states sub -diagnosed women and black patientsBecause it was trained on distorted data records that were dominated by white men.

The researchers behind the THWS study argue that technical corrections will not solve the problem. What you say are clear ethical standards, independent review processes and greater transparency in the development and provision of these models.

Since generative AI becomes a source for everything, from mental health advice to career planning, the missions only grow. If it is not checked, the illusion of objectivity could become one of the most dangerous features of the AI.

Comments are closed.