
According to research published in the journal Science, modern AI models such as ChatGPT and Gemini have been made so agreeable in the name of user-friendliness that they begin to justify users' behavior even when their actions are unethical or wrong.
Validation even for wrong decisions
The study found that even when users described unethical behavior such as lying or harming others, the AI backed them up. In Reddit-style ethical dilemmas where human commenters disagreed with the poster, the AI sided with the user 51 percent of the time. This pattern further reinforces people's harmful beliefs.
Less willingness to reconcile, less empathy
In follow-up experiments with about 2,405 participants, people who interacted with a sycophantic AI became more convinced that they were in the right. They were also less willing to repair their personal relationships or to apologize. According to the researchers, an excessively agreeable AI makes users more self-centered and reduces their empathy towards others.
The 'yes-man' AI is what people prefer
The biggest problem is that people prefer AI that agrees with them: they rate such answers as more reliable and satisfying. This gives companies an incentive to make their AI ever more agreeable, even when it is psychologically harmful to the user.



