
Hearts & Minds: No more Mr Nice AI – beware chatbots that tell you what you want to hear
13 June 2025
‘In giving advice,’ said the Greek statesman Solon, ‘seek to help, not to please your friend.’
For comms professionals, that maxim should be ingrained, a constant reminder of the role. All the more so now, as AI becomes ubiquitous. The problem is, AI is too nice. Machines like to tell people what they want to hear. We’re starting to realise this. OpenAI, Google DeepMind and Anthropic, the FT reports, are working to rein in the sycophantic behaviour of their generative AI products, which offer over-the-top flattering responses to users.
For some people, AI has moved beyond being a research assistant to become a therapist and social companion, offering suggestions and tips. Because of how their underlying language models are trained, chatbots provide answers that reinforce their human users’ poor decisions.
‘You think you are talking to an objective confidant or guide, but actually what you are looking into is some kind of distorted mirror — that mirrors back your own beliefs,’ said Matthew Nour, a psychiatrist and researcher in neuroscience and AI at Oxford University.
It comes down to the fact that chatbots do not think the way we do. They are taught to be helpful and friendly, and not to irritate. At heart, they are simply trying to generate the next word in a sentence. They are refined using reinforcement learning from human feedback, in which human raters mark each response as acceptable or not; answers that are agreeable and flattering tend to earn better scores.
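For the technically minded, here is a toy sketch of how that bias creeps in. It assumes a crude reward signal learned purely from invented rating data; no real system is anywhere near this simple, and every example and number below is made up for illustration.

```python
# Toy illustration of how human preference ratings can reward sycophancy.
# All data and scoring here are invented for illustration only.

# Each pairing: the same question answered candidly and flatteringly,
# plus the human rater's pick (1 = preferred the flattering answer).
preference_data = [
    {"candid": "That plan has serious flaws.", "flattering": "Great plan!", "picked_flattering": 1},
    {"candid": "The figures don't support this.", "flattering": "Compelling case!", "picked_flattering": 1},
    {"candid": "I'd reconsider the launch date.", "flattering": "Perfect timing!", "picked_flattering": 0},
]

# A crude "reward model" distilled from those ratings: it simply learns
# how often raters favoured flattery, then scores answers accordingly.
flattery_win_rate = sum(p["picked_flattering"] for p in preference_data) / len(preference_data)

def reward(answer_is_flattering: bool) -> float:
    """Score an answer the way the learned preferences would."""
    return flattery_win_rate if answer_is_flattering else 1 - flattery_win_rate

print(f"Flattering answer score: {reward(True):.2f}")   # 0.67
print(f"Candid answer score:     {reward(False):.2f}")  # 0.33
```

The point is structural: optimise a model against ratings that favour agreement, and agreement is what you get.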
It is, as Nour says, like looking in a mirror and only ever liking what you see, even though it is not a true image. You love it and keep going back, but it is not helping. It is the equivalent of a conversation with someone who simply agrees with you all the time, who keeps the discussion going in the same vein, never questioning, disputing or ending it right there.
Knowing when to disagree without upsetting the CEO, being neither disrespectful nor rude (but not resorting to ‘with respect…’ either), and offering another route: that is the art of the good comms professional. It is what the money is for, or, as we prefer, the value added.
It bears repeating: seek to help, not to please.
Chris Blackhurst is one of the UK’s foremost business journalists. He was previously Editor of The Independent and City Editor of the Evening Standard.
Summary
AI’s tendency to be overly agreeable can reinforce poor decisions. Comms professionals must prioritise helpfulness over pleasing, offering constructive advice even when it means disagreeing. This principle matters all the more as AI becomes more prevalent.
Author

Chris Blackhurst
Former Editor and Strategic Communications Adviser