- ChatGPT had started acting overly flattering towards people
- Many users complained about this behaviour
- OpenAI has taken notice of the issue and is now rolling back the update
In the past few days, if you felt that OpenAI’s ChatGPT was being way too nice, you are not alone. Many users across the globe felt the same and began complaining about the chatbot. Seems like politeness or flattery is okay when it comes from people, but definitely not when a chatbot is at the other end (major Black Mirror feels, right?). Well, OpenAI has now recognised the issue and said in a blog post that it is rolling back the update which caused ChatGPT to act too nice to people.
OpenAI admits ChatGPT was overly flattering
OpenAI has rolled back a recent update to its ChatGPT model after users complained that the chatbot had become overly flattering and too eager to please. The update, part of improvements made to its newest model GPT-4o, was supposed to make the assistant more helpful and intelligent. Instead, many users felt it had become unrealistic and even annoying.
Some users shared screenshots online showing ChatGPT praising them for very basic tasks or offering compliments that felt unnecessary. This behaviour raised concerns about the chatbot sounding fake, or agreeing too eagerly with everything a user said.
OpenAI CEO Sam Altman acknowledged the issue on social media, saying the update made ChatGPT “too sycophant-y and annoying” — in other words, flattering and agreeable in a way that didn’t feel natural. He confirmed that the company is now undoing the update, with the rollback already live for free users. Paid users are expected to see the fix soon.
Why Did ChatGPT Start Acting Like This?
In a blog post published on April 29, OpenAI explained the reason behind the behaviour. The company said the model had been trained in a way that made it try too hard to please users in the short term. As a result, it started giving overly positive feedback even when that wasn’t needed or appropriate.
OpenAI said it is now refining the system’s instructions so that ChatGPT gives more balanced and realistic responses. The goal, according to the team, is to make ChatGPT helpful and honest, not just blindly supportive.
This situation highlights a bigger challenge in developing artificial intelligence tools: making them feel friendly without crossing the line into being fake or manipulative.
What Happens Next?
OpenAI says it will continue making improvements to the personality of ChatGPT and expects to share more updates in the coming days. The company wants to make sure the assistant still feels warm and helpful, but without acting like it’s trying too hard to be liked.
As AI tools like ChatGPT become more common in everyday life, users are paying closer attention to how they sound and behave. Striking the right tone is key to building trust — and that means sometimes being honest instead of overly polite.