On Message: Take control of your own narrative in the ChatGPT era

05 November 2025


How the comms world has changed. Three years ago this month, ChatGPT launched, and everything altered – slowly at first, but today we see evidence of its use everywhere.

Corporates want added value from their advisors - simply typing a topic into an LLM and regurgitating the result is something they can do themselves. Too often, journalists receive press releases that have obviously been written by a bot.

Sometimes it is not so transparent, but even a hint is enough. Worse is the subsequent conversation with the sender, when it becomes clear they do not have a clue what they are talking about, or what they sent out under the client’s name. The practice is doing down the profession, and for the good of everybody and the maintenance of standards, it should cease.

The recent Deloitte case is ammunition enough and ought to act as a warning. Deloitte has agreed to pay a partial refund to the Australian government over a report containing errors, after admitting the study was produced with the assistance of generative AI.

The Department of Employment and Workplace Relations confirmed the firm would hand back the final instalment under its contract. The episode prompted one senator to accuse the consultancy of having a ‘human intelligence problem’. Ouch. 

Multiple errors were found in the document, including non-existent references and citations. An academic who was the first to raise the alarm said it contained ‘hallucinations’ where AI had simply filled in gaps, misinterpreted data, or attempted to guess answers. 

Another senator who sat on an inquiry into the integrity of consulting firms said it looked like ‘AI is being left to do the heavy lifting’. 

While the overall tenor of the review stood, she concluded that ‘anyone looking to contract these firms should be asking exactly who is doing the work they are paying for and having that expertise and no AI use verified.

‘Perhaps instead of a big consulting firm, procurers would be better off signing up for a ChatGPT subscription.’ Print off her words and put them above the desk. 

That pratfall was self-inflicted. But sticking with the same Australian integrity inquiry, the Senate committee itself had to apologise after evidence it published under parliamentary privilege, containing allegations of serious wrongdoing, was found to be false. Why? The witnesses who submitted the material relied on AI and did not fact-check what they were claiming.

The Google Bard AI service highlighted case studies that never occurred - such as partners being dismissed by firms, including KPMG, that had never actually employed them - and the academics cited them as evidence that structural reform was needed. KPMG duly complained. The inquiry was embarrassed, but only momentarily - typical politicians - as they were able to turn the grievance to their advantage, saying the mistake ‘raised important questions’ about the use of AI, and that the serious risks the technology presented required proper understanding and rigorous fact-checking.

KPMG’s experience highlighted another facet of AI where reputational damage is concerned: businesses must own their narratives. AI grabs whatever is out there, with the consequence that a single bad article or piece of coverage can receive greater prominence than it deserves, diminishing everything else. The more a business can create and control its own content - high-quality blogs, social media posts, newsletters, and website copy - the better. That way, a corporation can highlight its successes and the items it wishes to focus on, balancing whatever ChatGPT and the rest might otherwise be tempted to select.

A good example is Amazon, which last week decided to tackle the news of its substantial lay-offs head-on, rather than wait for leaks to gain traction and spark speculation. The company took the rare step of publishing a detailed internal memo stating the number and the reasoning.

There were rumours of 30,000 cuts; Amazon said it was 14,000. The note went further and, as you might expect of an internal bulletin, was more candid and open than one intended for the public. The cuts would ‘flatten layers, remove unnecessary complexity, and help us move faster.’

The memo relayed how the giant wanted to operate more like ‘the world’s largest startup’ by becoming more agile, customer-obsessed, and better positioned for long-term growth. The impact of AI was addressed directly. The tech is ‘enabling companies to innovate much faster than ever before (in existing market segments and altogether new ones). We’re convinced that we need to be organized more leanly, with fewer layers and more ownership, to move as quickly as possible for our customers and business.’

By publishing the memo, Amazon received plaudits for its frankness. Rather than wait for the media and critics to jump to conclusions, it released the rationale behind the decision, managed the tone, and provided the details it wished to provide. It was Amazon pushing back, or rather moving ahead of ChatGPT, ensuring that what the AI gathers is the correct information. Smart, in other words. 

Summary

AI has transformed communications, but overreliance on it erodes trust. Corporations must own their narratives to protect their reputations and outpace AI-driven misinformation.

Author

Chris Blackhurst

Former Editor and Strategic Communications Adviser
