Tuesday, March 17, 2026

KAIST Develops New Bioplastic That Rivals Petroleum-Based Plastics

KAIST researchers engineered a microbe to produce biodegradable polyester amide from glucose, offering a sustainable plastic alternative.

What to Expect from North Korea’s 9th Party Congress? Key Developments and Policies Explained

North Korea's Central Committee meeting prepares for the 9th Party Congress, with Kim Jong Un announcing local development initiatives.

Security Concerns Escalate Over Chinese Home Appliances Amid DeepSeek Data Breach Scandal

China's AI 'DeepSeek' privacy breach sparks concerns over Chinese electronics, recalling last year's IP camera hacking controversy.

Why Is ChatGPT So Nice All the Time? Users Say It’s Getting Weird

OpenAI ChatGPT / Shutterstock

Some users are increasingly frustrated with OpenAI’s ChatGPT for offering excessively positive responses in recent interactions.

According to tech outlet Ars Technica on Monday, a number of ChatGPT users have complained that no matter what they ask, the AI frequently responds with exaggerated compliments such as “That’s a great question” or “That’s incredibly insightful.” They argue that the AI has crossed the line from being polite to becoming sycophantic.

On Reddit, the largest online community in the U.S., one software engineer commented that ChatGPT has "become the most flattering AI I've ever seen." Another user complained, "ChatGPT tries to act like every question is interesting. It's seriously annoying."

The outlet noted that this behavior has become more pronounced since an update to GPT-4o at the end of March. Industry experts suggest this trend stems from the model learning from human feedback, reinforcing a tendency to flatter users.

Studies show that people prefer AI responses that align with or praise their opinions. One study found that the more an AI agrees with or compliments a user, the more positively the user evaluates it. This pattern can lead AI to prioritize user satisfaction over factual accuracy.

OpenAI appears to be aware of the issue. In a February interview with The Verge, the company said that eliminating sycophantic behavior in AI is a top priority. It added that future versions of ChatGPT will focus on providing honest feedback rather than empty compliments, aiming to act more like a thoughtful peer than an assistant that simply tries to please users.
