The Times Australia
The Times World News


AI Has a Stunning Ability to Supply Information — But Can It Be Harnessed for Harm by Bad Actors?

  • Written by Times Media

Artificial intelligence has become one of the most extraordinary technological leaps of the 21st century. In just a few years, generative AI systems have gone from experimental curiosities to powerful tools capable of producing human-level writing, analysing vast volumes of data, automating entire workflows, and supporting complex decision-making across medicine, logistics, science and public administration.

For most Australians, this progress feels overwhelmingly positive. AI can help diagnose disease, reduce road fatalities, improve customer service, streamline government services, and boost productivity in a nation grappling with labour shortages. It can save time, reduce costs, and simplify processes that once required teams of experts.

But alongside this remarkable potential sits a more unsettling reality: the same tools that empower communities, small businesses and scientists can also be exploited by those with harmful intent. As AI’s capabilities grow, so does the conversation about whether bad actors — from criminals to foreign adversaries — could harness these systems in ways that threaten safety, stability and trust.

This article explores that duality: the stunning capability of AI to supply information and insight, and the escalating concern about how easily it could be weaponised.

AI’s Power Lies in Its Ability to Scale Human Knowledge

At its core, AI is a multiplier of human capability. What once required hours of research can now be produced in seconds. What once demanded specialised training — data modelling, coding, statistical analysis, translation — can now be executed by anyone with a smartphone.

This accessibility is AI’s greatest achievement, but also its greatest vulnerability.

AI models can:

  • Analyse massive datasets to reveal patterns invisible to humans

  • Generate detailed reports, code, essays, summaries and translations instantly

  • Provide step-by-step guidance on topics once restricted to experts

  • Automate communications, interactions and decision flows at unprecedented scale

For legitimate users, these features are transformative.

For bad actors, they can be dangerously enabling.

Could AI Be Used to Spread Manipulation and Disinformation? Absolutely.

One of the most widely recognised risks is AI-driven disinformation. Generative AI can create:

  • Convincing fake news articles

  • Realistic but fabricated audio and video

  • Chatbots posing as real individuals

  • Targeted political messaging

  • Manufactured social-media movements

Australia has already seen glimpses of online campaigns driven by coordinated inauthentic activity. The difference now is scale.

What once required teams of human operators can be managed by a handful of bad actors using automated agents capable of generating thousands of personalised posts per hour.

Deepfakes present an even more alarming frontier: false videos of politicians, business leaders or celebrities could influence elections, markets or public responses to emergencies.

Democracies everywhere, including Australia, must prepare now, before this shifts from a concern into a crisis.

Cybercrime: AI as a Force Multiplier for Hackers

Cybercriminals thrive on automation, and AI is supercharging their capabilities.

AI can help criminals:

  • Write sophisticated phishing emails free of spelling or grammar errors

  • Mimic the writing style of colleagues or executives to increase scam success

  • Generate malware code or identify software vulnerabilities

  • Test thousands of attack vectors at machine speed

  • Improve the social engineering scripts that trick victims into payment or access

Australians are increasingly vulnerable as scams grow more personalised. Many cybersecurity experts warn that AI-enhanced fraud could become virtually indistinguishable from legitimate communication.

The risk is not hypothetical — it is active and evolving.

AI in the Hands of Extremists or Hostile States

Beyond criminal activity, there are national-security risks.

Bad actors could potentially use AI systems to:

  • Analyse critical infrastructure vulnerabilities

  • Automate reconnaissance on government networks

  • Assist in planning attacks

  • Generate propaganda tailored for radicalisation

  • Accelerate research into biological, chemical or cyber weapons

While responsible AI companies implement safeguards to restrict dangerous outputs, no system is perfect — and open-source models can be modified to bypass guardrails entirely.

Australia’s intelligence community has already warned that foreign adversaries view AI as a strategic asset. If left unchecked, the technology could shift the balance of power in ways that undermine democratic institutions and national stability.

Economic Harm: Market Manipulation and Financial Abuse

Bad actors don’t need weapons to cause widespread harm. They can use AI to disrupt economic systems.

Potential misuse includes:

  • AI-generated financial scams

  • Automated pump-and-dump schemes

  • Fake analyst reports influencing share prices

  • AI bots manipulating crypto markets

  • Fabricated legal or regulatory documents

Markets rely heavily on trust and signal accuracy. AI-driven manipulation could distort that trust within seconds.

AI Can Also Be Used for Harassment, Identity Theft and Personal Harm

On a micro level, individuals may be targeted through:

  • AI-generated revenge porn or sexually explicit deepfakes

  • Identity replication for fraud

  • Personalised harassment campaigns

  • Stalking aided by predictive data tools

  • Automated creation of defamatory content

These threats affect not just public figures, but everyday Australians — especially young people.

Where Do Solutions Come From? Regulation, Collaboration and Technology

Preventing AI from being used maliciously requires a multi-layered approach.

1. Strong, enforceable regulation

Governments must implement frameworks that:

  • Define acceptable use

  • Require transparency for high-risk systems

  • Mandate safety audits

  • Penalise misuse

  • Support law-enforcement capability

The EU, US and UK have taken first steps; Australia is developing its own framework but must act quickly.

2. Industry-wide safety standards

Tech companies need shared guardrails so that safety does not become a competitive disadvantage. This includes:

  • Red-team testing

  • Misuse detection

  • Responsible data sourcing

  • Mandatory watermarking of AI-generated content

3. Public education

Australians must become AI-literate, much like they became cyber-literate. Understanding risks reduces vulnerability.

4. International cooperation

Threats do not respect borders. Solutions cannot afford to either.

The Duality of AI: Stunning Capability, Serious Vulnerability

AI is neither inherently good nor inherently evil. It is a powerful tool — one of the most powerful humanity has ever created — and its impact depends on who controls it, how it is used, and how society adapts.

Used responsibly, AI can elevate productivity, empower small businesses, advance science, support democracy and enrich everyday life.

In the wrong hands, it can distort reality, undermine trust, accelerate crime, and destabilise institutions.

The challenge now is not to slow AI’s progress, but to guide its trajectory. Australia — like every nation — must ensure that innovation does not outrun safety, that openness does not invite exploitation, and that this extraordinary technology remains a tool for empowerment, not a weapon for harm.

As we stand at the edge of a new technological era, one truth is clear: the future of AI will be determined not by the machines, but by the people who wield them.
