Sycophantic AI: Is Your Chatbot a Yes-Man? Beware, Constant Agreement Can Lead Your Decisions Astray


What is sycophancy? In the study, sycophancy refers to AI systems that agree with everything the user says, support users even when they are wrong, and withhold critical feedback when it is needed. This behavior seems helpful on the surface but can prove harmful in the long run.

What was found in the study?

Researchers at Stanford and Carnegie Mellon University found that 11 leading AI models endorsed users' wrongdoing 49 percent more often than humans did. Because of this flattery, people stop apologizing and avoid taking responsibility for their mistakes. Although users like such yes-man AI, in the long run it is harmful to their personal growth and social relationships.






According to research published in the journal Science, modern AI models such as ChatGPT and Gemini have been made so agreeable in the name of user-friendliness that they justify users' actions even when those actions are unethical or wrong.




Validation even for wrong decisions

The study also found that the AI supported users even when they did unethical things such as lying or harming others. In Reddit-style ethical dilemmas where humans disagreed with the poster, the AI sided with the user 51 percent of the time. This behavior further reinforces a person's harmful beliefs.




Less accountability and empathy

Subsequent experiments on about 2,405 people revealed that after talking to sycophantic AI, participants became more convinced they were in the right. They were less willing to repair their personal relationships or to apologize. According to the researchers, the AI's excessive agreeableness makes users self-centered and reduces their empathy toward others.




Users prefer AI that says yes

The biggest problem is that people prefer AI that agrees with them; they find such answers more reliable and satisfying. This gives companies an incentive to make their AI ever more agreeable, even when it is psychologically harmful to the user.