
Google has dropped its promise not to use AI for weapons. It’s part of a troubling trend

  • Written by Zena Assaad, Senior Lecturer, School of Engineering, Australian National University

Last week, Google quietly abandoned a long-standing commitment to not use artificial intelligence (AI) technology in weapons or surveillance. In an update to its AI principles, which were first published in 2018[1], the tech giant removed statements promising not to pursue:

  • technologies that cause or are likely to cause overall harm
  • weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people
  • technologies that gather or use information for surveillance violating internationally accepted norms
  • technologies whose purpose contravenes widely accepted principles of international law and human rights.

The update came after United States President Donald Trump revoked former President Joe Biden’s executive order[2] aimed at promoting safe, secure and trustworthy development and use of AI.

The Google decision follows a recent trend of big tech entering the national security arena and accommodating more military applications of AI. So why is this happening now? And what will be the impact of more military use of AI?

The growing trend of militarised AI

In September 2024, senior officials from the Biden government met[3] with bosses of leading AI companies, such as OpenAI, to discuss AI development. The government then announced a taskforce to coordinate the development of data centres, while weighing economic, national security and environmental goals.

The following month, the Biden government published a memo[4] that in part dealt with “harnessing AI to fulfil national security objectives”.

Big tech companies quickly heeded the message.

In November 2024, tech giant Meta announced[5] it would make its “Llama” AI models available to government agencies and private companies involved in defence and national security.

This was despite Meta’s own policy[6] which prohibits the use of Llama for “[m]ilitary, warfare, nuclear industries or applications”.

Around the same time, AI company Anthropic also announced[7] it was teaming up with data analytics firm Palantir and Amazon Web Services to provide US intelligence and defence agencies access to its AI models.

The following month, OpenAI announced[8] it had partnered with defence startup Anduril Industries to develop AI for the US Department of Defence.

The companies claim they will combine OpenAI’s GPT-4o and o1 models with Anduril’s systems and software to improve the US military’s defences against drone attacks.

OpenAI is partnering with a defence startup to develop AI for the US Department of Defence. Michael Dwyer/AP

Defending national security

Meta, Anthropic and OpenAI all defended the changes to their policies on the basis of US national security interests.

Take Google. In a blog post published earlier this month[9], the company cited global AI competition, complex geopolitical landscapes and national security interests as reasons for changing its AI principles.

In October 2022, the US issued export controls[10] restricting China’s access to particular kinds of high-end computer chips used for AI research. In response, China issued its own export control measures[11] on high-tech metals, which are crucial for the AI chip industry.

The tensions from this trade war escalated in recent weeks thanks to the release of highly efficient AI models[12] by Chinese tech company DeepSeek. DeepSeek purchased 10,000 Nvidia A100 chips[13] prior to the US export control measures and allegedly used these to develop its AI models.

It has not been made clear how the militarisation of commercial AI would protect US national interests. But there are clear indications that tensions with the US’s biggest geopolitical rival, China, are influencing the decisions being made.

A large toll on human life

What is already clear is that the use of AI in military contexts has a demonstrated toll on human life.

For example, in the war in Gaza, the Israeli military has been relying heavily on advanced AI tools[14]. These tools require huge volumes of data and extensive computing and storage services, which are being provided by Microsoft[15] and Google[16]. These AI tools are used to identify potential targets but are often inaccurate.

Israeli soldiers have said[17] these inaccuracies have accelerated the death toll in the war, which is now more than 61,000[18], according to authorities in Gaza.

The Israeli military has been relying heavily on advanced AI tools in the war in Gaza. Mohammed Saber/EPA

Google removing the “harm” clause from its AI principles contravenes international human rights law[19], which identifies “security of person” as a key measure.

It is concerning to consider why a commercial tech company would need to remove a clause around harm.

Avoiding the risks of AI-enabled warfare

In its updated principles[20], Google does say its products will still align with “widely accepted principles of international law and human rights”.

Despite this, Human Rights Watch has criticised the removal[21] of the more explicit statements regarding weapons development in the original principles.

The organisation also points out that Google has not explained exactly how its products will align with human rights.

This was also a concern addressed by Joe Biden’s now-revoked executive order on AI.

Biden’s initiative wasn’t perfect, but it was a step[22] towards establishing guardrails for responsible development and use of AI technologies[23].

Such guardrails are needed now more than ever as big tech becomes more enmeshed with military organisations – and the risks that come with AI-enabled warfare and breaches of human rights increase.

References

  1. ^ first published in 2018 (blog.google)
  2. ^ revoked former President Joe Biden’s executive order (www.reuters.com)
  3. ^ senior officials from the Biden government met (www.reuters.com)
  4. ^ published a memo (home.treasury.gov)
  5. ^ Meta announced (theconversation.com)
  6. ^ Meta’s own policy (ai.meta.com)
  7. ^ Anthropic also announced (techcrunch.com)
  8. ^ OpenAI announced (www.engadget.com)
  9. ^ In a blog post published earlier this month (blog.google)
  10. ^ export controls (www.bis.doc.gov)
  11. ^ China issued their own export control measures (www.ips-journal.eu)
  12. ^ release of highly efficient AI models (theconversation.com)
  13. ^ purchased 10,000 Nvidia A100 chips (cyber.fsi.stanford.edu)
  14. ^ relying heavily on advanced AI tools (www.washingtonpost.com)
  15. ^ provided by Microsoft (www.theguardian.com)
  16. ^ Google (www.washingtonpost.com)
  17. ^ Israeli soldiers have said (www.972mag.com)
  18. ^ more than 61,000 (www.aljazeera.com)
  19. ^ international law on human rights (www.ohchr.org)
  20. ^ updated principles (ai.google)
  21. ^ criticised the removal (www.hrw.org)
  22. ^ was a step (theconversation.com)
  23. ^ responsible development and use of AI technologies (www.potterclarkson.com)

Read more https://theconversation.com/google-has-dropped-its-promise-not-to-use-ai-for-weapons-its-part-of-a-troubling-trend-249169
