AI Has a Stunning Ability to Supply Information — But Can It Be Harnessed for Harm by Bad Actors?

  • Written by Times Media

Artificial intelligence represents one of the most extraordinary technological leaps of the 21st century. In just a few years, generative AI systems have gone from experimental curiosities to powerful tools capable of producing human-level writing, analysing vast volumes of data, automating entire workflows, and supporting complex decision-making across medicine, logistics, science and public administration.

For most Australians, this progress feels overwhelmingly positive. AI can help diagnose disease, reduce road fatalities, improve customer service, streamline government services, and boost productivity in a nation grappling with labour shortages. It can save time, reduce costs, and simplify processes that once required teams of experts.

But alongside this remarkable potential sits a more unsettling reality: the same tools that empower communities, small businesses and scientists can also be exploited by those with harmful intent. As AI’s capabilities grow, so does the conversation about whether bad actors — from criminals to foreign adversaries — could harness these systems in ways that threaten safety, stability and trust.

This article explores that duality: the stunning capability of AI to supply information and insight, and the escalating concern about how easily it could be weaponised.

AI’s Power Lies in Its Ability to Scale Human Knowledge

At its core, AI is a multiplier of human capability. What once required hours of research can now be produced in seconds. What once demanded specialised training — data modelling, coding, statistical analysis, translation — can now be executed by anyone with a smartphone.

This accessibility is AI’s greatest achievement, but also its greatest vulnerability.

AI models can:

  • Analyse massive datasets to reveal patterns invisible to humans

  • Generate detailed reports, code, essays, summaries and translations instantly

  • Provide step-by-step guidance on topics once restricted to experts

  • Automate communications, interactions and decision flows at unprecedented scale

For legitimate users, these features are transformative.

For bad actors, they can be dangerously enabling.

Could AI Be Used to Spread Manipulation and Disinformation? Absolutely.

One of the most widely recognised risks is AI-driven disinformation. Generative AI can create:

  • Convincing fake news articles

  • Realistic but fabricated audio and video

  • Chatbots posing as real individuals

  • Targeted political messaging

  • Manufactured social-media movements

Australia has already seen glimpses of online campaigns driven by coordinated inauthentic activity. The difference now is scale.

What once required teams of human operators can be managed by a handful of bad actors using automated agents capable of generating thousands of personalised posts per hour.

Deepfakes present an even more alarming frontier: false videos of politicians, business leaders or celebrities could influence elections, markets or public responses to emergencies.

Democracies everywhere — including Australia — must prepare now, while this remains a concern rather than a crisis.

Cybercrime: AI as a Force Multiplier for Hackers

Cybercriminals thrive on automation, and AI is supercharging their capabilities.

AI can help criminals:

  • Write sophisticated phishing emails free of spelling or grammar errors

  • Mimic the writing style of colleagues or executives to increase scam success

  • Generate malware code or identify software vulnerabilities

  • Test thousands of attack vectors at machine speed

  • Improve the social engineering scripts that trick victims into payment or access

Australians are increasingly vulnerable as scams grow more personalised. Many cybersecurity experts warn that AI-enhanced fraud could be virtually indistinguishable from legitimate communication.

The risk is not hypothetical — it is active and evolving.

AI in the Hands of Extremists or Hostile States

Beyond criminal activity, there are national-security risks.

Bad actors could potentially use AI systems to:

  • Analyse critical infrastructure vulnerabilities

  • Automate reconnaissance on government networks

  • Assist in planning attacks

  • Generate propaganda tailored for radicalisation

  • Accelerate research into biological, chemical or cyber weapons

While responsible AI companies implement safeguards to restrict dangerous outputs, no system is perfect — and open-source models can be modified to bypass guardrails entirely.

Australia’s intelligence community has already warned that foreign adversaries view AI as a strategic asset. If left unchecked, the technology could shift the balance of power in ways that undermine democratic institutions and national stability.

Economic Harm: Market Manipulation and Financial Abuse

Bad actors don’t need weapons to cause widespread harm. They can use AI to disrupt economic systems.

Potential misuse includes:

  • AI-generated financial scams

  • Automated pump-and-dump schemes

  • Fake analyst reports influencing share prices

  • AI bots manipulating crypto markets

  • Fabricated legal or regulatory documents

Markets rely heavily on trust and accurate signals. AI-driven manipulation could undermine that trust within seconds.

AI Can Also Be Used for Harassment, Identity Theft and Personal Harm

On a micro level, individuals may be targeted through:

  • AI-generated revenge porn or sexually explicit deepfakes

  • Identity replication for fraud

  • Personalised harassment campaigns

  • Stalking aided by predictive data tools

  • Automated creation of defamatory content

These threats affect not just public figures, but everyday Australians — especially young people.

Where Do Solutions Come From? Regulation, Collaboration and Technology

Preventing AI from being used maliciously requires a multi-layered approach.

1. Strong, enforceable regulation

Governments must implement frameworks that:

  • Define acceptable use

  • Require transparency for high-risk systems

  • Mandate safety audits

  • Penalise misuse

  • Support law-enforcement capability

The EU, US and UK have taken first steps; Australia is developing its own framework but must act quickly.

2. Industry-wide safety standards

Tech companies need shared guardrails so that safety does not become a competitive disadvantage. This includes:

  • Red-team testing

  • Misuse detection

  • Responsible data sourcing

  • Mandatory watermarking of AI-generated content

3. Public education

Australians must become AI-literate, much like they became cyber-literate. Understanding risks reduces vulnerability.

4. International cooperation

Threats will not respect borders. Neither can solutions.

The Duality of AI: Stunning Capability, Serious Vulnerability

AI is neither inherently good nor inherently evil. It is a powerful tool — one of the most powerful humanity has ever created — and its impact depends on who controls it, how it is used, and how society adapts.

Used responsibly, AI can elevate productivity, empower small businesses, advance science, support democracy and enrich everyday life.

In the wrong hands, it can distort reality, undermine trust, accelerate crime, and destabilise institutions.

The challenge now is not to slow AI’s progress, but to guide its trajectory. Australia — like every nation — must ensure that innovation does not outrun safety, that openness does not invite exploitation, and that this extraordinary technology remains a tool for empowerment, not a weapon for harm.

As we stand at the edge of a new technological era, one truth is clear: the future of AI will be determined not by the machines, but by the people who wield them.
