
Friday, December 5, 2025 | Digital Edition

Digital danger lies in believe first, question never

“The malicious actor will usually be in the form of victim-hero. They know how much we are hooked on our devices, and our need to feel that someone, anyone, cares. That leaves a big door open for them.” Photo: Aner Tau

“What we see, read and hear on social media is consumed like fast food – quickly with little thought about the long-term damage.” Political columnist ANDREW HUGHES warns that disinformation itself is changing and evolving rapidly.

The other day, I had firsthand experience of how easy it is for malicious actors to influence others in the digital age. 

Dr Andrew Hughes.

It was a lesson in how the subtle behavioural changes from our use of social media platforms have made us accustomed to believe first, question never. 

What we see, read and hear is consumed like fast food – quickly with little thought about the long-term damage. 

These behavioural influences, though, have also permeated much of how we behave in other areas of modern life. 

The example? A person I know had swallowed whole what a malicious actor had told them. Pure disinformation. The person blindly accepted what they had been told. No challenge, no thought about whether the information squared with other information, or about the credibility of the source. 

When that person was made to think about what they had been told, they found it difficult to acknowledge that perhaps they had been had. The guilt and shame of knowing you’ve been had by a manipulator is one big reason many don’t confront it in the first place. 

The malicious actor will usually be in the form of victim-hero. They know how much we are hooked on our devices, and our need to feel that someone, anyone, cares. That leaves a big door open for them. 

This is why information quality, reliability and validity present both the single biggest opportunity for, and threat to, our democracy. 

It offers the opportunity to allow minor parties and independents to quickly generate awareness and build momentum behind their campaigns. 

Social media massively levels the political communication field. It allows audiences to be reached effectively and, importantly for mitigating the power of stakeholders in politics, relatively cheaply. 

The threat is disinformation and misinformation. I have seen some incredibly smart people fall for disinformation. And disinformation itself is changing and evolving rapidly. 

Those engaging in disinformation learn from other content creators, especially those on social media. That 20-year-old digital nomad has learnt lots, practised lots, but also informs lots of others about what works. 

Now content is professionally made, even with studio backdrops to add credibility and authenticity, with presenters looking more mainstream than fringe. And that is because they could well be. 

The other big change in disinformation is how governments have started to slide into it. The best disinformation works by creating doubt in the viewer’s mind across multiple constructs, but essentially targets the belief system. The plausible-deniability feature. 

Maybe this is fake, but maybe it’s not. It makes sense after all because I saw something like this from someone different last week. Plus they had 100,000 followers. And the government seems elusive on this topic, like the person said, so yeah, maybe this is the truth. 

That’s how manipulation in a message works really well. To get you to doubt the facts as you currently know them. If a picture tells a thousand words, a three-minute video is a thesis. 

AI? It adds speed to the malevolent actor’s armoury. Messages are made in minutes. Using real people in their own voices. It takes only 15 seconds of someone’s voice to recreate it. 

When you watch something now, do you think – is it fake or is it real? That doubt destroys truthful messages and emboldens the fake and dishonest. 

Disinformation in 2025 is proactive because methods to stop it are reactive. It uses that time gap effectively, creating impact before any learned colleague can stop it. 

How can we mitigate its influence in politics? 

Firstly, candidate images and voices should only be allowed to be used during election campaigns with the approval of those candidates. Big change, yes, but big changes have already happened in campaigns in the last decade. 

Next, making it clearer when AI is being used. Mandatory use of a brand symbol such as Content Credentials may help. It would let people know if the content they’re watching was made by humans or not. 

Political advertising and communications definitions need modernising. Government advertising should not be political advertising. Far too many governments of all colours have relied on government credibility to achieve political credibility. 

The flow-on effect is people questioning all government advertising, harming campaigns we need to be effective, such as healthy eating, speeding and the myriad of other social marketing campaigns that governments run. 

This is only a start. We have to act now to protect our democracy with modern means in its fight against the digital dark arts. Otherwise it will be too late. 

Dr Andrew Hughes lectures at the ANU Research School of Management, where he specialises in political marketing.
