How artists are sabotaging AI to take revenge on image generators

  • Written by: T.J. Thomson, Senior Lecturer in Visual Communication & Digital Media, RMIT University
Imagine this. You need an image of a balloon for a work presentation and turn to a text-to-image generator, like Midjourney or DALL-E, to create a suitable image.

You enter the prompt “red balloon against a blue sky”, but the generator returns an image of an egg instead. You try again, but this time the generator shows an image of a watermelon.

What’s going on?

The generator you’re using may have been “poisoned”.

What is ‘data poisoning’?

Text-to-image generators work by being trained on large datasets that include millions or billions of images. Some generators, like those offered by Adobe or Getty, are only trained with images the generator’s maker owns or has a licence to use.

But other generators have been trained by indiscriminately scraping online images, many of which may be under copyright. This has led to a slew of copyright infringement cases[1] where artists have accused big tech companies of stealing and profiting from their work.

This is also where the idea of “poison” comes in. Researchers who want to empower individual artists have recently created a tool named “Nightshade[2]” to fight back against unauthorised image scraping.

The tool works by subtly altering an image’s pixels in a way that wreaks havoc on computer vision but leaves the image looking unchanged to the human eye.

If an organisation then scrapes one of these images to train a future AI model, its data pool becomes “poisoned”. This can result in the algorithm mistakenly learning to classify an image as something a human would visually know to be untrue. As a result, the generator can start returning unpredictable and unintended results.
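To get a feel for how such pixel-level trickery works, consider the classic “fast gradient sign method” from adversarial machine learning. The sketch below is not Nightshade’s actual algorithm (which is more sophisticated and targets text-to-image training), but it illustrates the underlying principle: nudging pixels in the direction that most confuses a model, by an amount too small for people to notice. The input file name is hypothetical.

```python
# Illustrative sketch only: a fast-gradient-sign perturbation, not
# Nightshade's method. A pretrained classifier stands in for a
# scraper's vision model.
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

# "balloon.jpg" is a hypothetical input image.
image = preprocess(Image.open("balloon.jpg")).unsqueeze(0)
image.requires_grad = True

# The model's current prediction for the clean image.
output = model(image)
label = output.argmax(dim=1)

# The gradient of the loss with respect to the input pixels tells us
# which direction of pixel change most confuses the model.
loss = F.cross_entropy(output, label)
loss.backward()

# Step every pixel slightly in that direction. epsilon is small enough
# that the change is imperceptible to a human viewer.
epsilon = 0.007
poisoned = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

# The perturbed image often receives a different (wrong) label.
print(model(poisoned).argmax(dim=1))
```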

Symptoms of poisoning

As in our earlier example, a balloon might become an egg. A request for an image in the style of Monet might instead return an image in the style of Picasso.

Some of the issues seen in earlier AI models, such as trouble accurately rendering hands, could return. The models could also introduce other odd and illogical features to images – think six-legged dogs or deformed couches.

The higher the number of “poisoned” images in the training data, the greater the disruption. Because of how generative AI works, the damage from “poisoned” images also affects related prompt keywords.

Read more: Do AI systems really have their own secret language?[3]

For example, if a “poisoned” image of a Ferrari is used in training data, prompt results for other car brands and for other related terms, such as vehicle and automobile, can also be affected.

Nightshade’s developer hopes the tool will make big tech companies more respectful of copyright. However, it’s also possible users could abuse the tool and intentionally upload “poisoned” images to generators in an attempt to disrupt their services.

Is there an antidote?

In response, stakeholders have proposed a range of technological and human solutions. The most obvious is paying greater attention to where input data are coming from and how they can be used. Doing so would result in less indiscriminate data harvesting.

This approach does challenge a common belief among computer scientists: that data found online can be used for any purpose they see fit.

Other technological fixes include the use of “ensemble modeling[4]”, where different models are trained on many different subsets of data and compared to locate specific outliers. This approach can be used not only for training but also to detect and discard suspected “poisoned” images.
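As a rough illustration, a detector might train several models on disjoint subsets of the data and flag any image the models strongly disagree about. The sketch below assumes a generic classifier interface with a predict() method; the function and threshold are hypothetical.

```python
# A rough sketch of ensemble-based poison detection. Several models are
# trained on different data subsets; images the ensemble strongly
# disagrees on are flagged for review. The classifier interface
# (a .predict() method) and the inputs are hypothetical.
import numpy as np

def flag_suspect_images(ensemble, images, disagreement_threshold=0.4):
    """Return indices of images on which the ensemble disagrees."""
    suspects = []
    for i, image in enumerate(images):
        predictions = [model.predict(image) for model in ensemble]
        _, counts = np.unique(predictions, return_counts=True)
        majority_share = counts.max() / len(predictions)
        # A low majority share means the models cannot agree on a label,
        # which is one symptom of a "poisoned" training example.
        if (1 - majority_share) > disagreement_threshold:
            suspects.append(i)
    return suspects
```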

Audits[5] are another option. One audit approach involves developing a “test battery” – a small, highly curated, and well-labelled dataset – using “hold-out” data that are never used for training. This dataset can then be used to examine the model’s accuracy.
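In practice, an audit of this kind can be as simple as comparing accuracy on the hold-out battery before and after retraining. A minimal sketch, with hypothetical model and dataset names:

```python
# A sketch of an accuracy audit against a curated "test battery":
# hold-out data that are never used for training. A sudden accuracy
# drop after retraining can signal poisoned training data.
def audit_accuracy(model, test_battery):
    """test_battery: list of (image, true_label) pairs held out from training."""
    correct = sum(1 for image, label in test_battery
                  if model.predict(image) == label)
    return correct / len(test_battery)

# Hypothetical usage:
# baseline = audit_accuracy(previous_model, battery)
# current = audit_accuracy(retrained_model, battery)
# if current < baseline - 0.05:   # threshold is an arbitrary example
#     print("Accuracy regression - new training data may be poisoned.")
```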

Strategies against technology

So-called “adversarial approaches” (those that degrade, deny, deceive, or manipulate AI systems), including data poisoning, are nothing new. Historically, they have included using make-up and costumes to evade facial recognition systems.

Human rights activists, for example, have long been concerned about the indiscriminate use of machine vision in wider society. This concern is particularly acute when it comes to facial recognition.

Systems like Clearview AI[6], which hosts a massive searchable database of faces scraped from the internet, are used by law enforcement and government agencies worldwide. In 2021, Australia’s government determined Clearview AI breached the privacy of Australians[7].

Read more: Australian police are using the Clearview AI facial recognition system with no accountability[8]

In response to facial recognition systems being used to profile specific individuals, including legitimate protesters, artists devised adversarial make-up patterns[9] of jagged lines and asymmetric curves that prevent surveillance systems from accurately identifying them.

There is a clear connection between these cases and the issue of data poisoning, as both relate to larger questions around technological governance.

Many technology vendors will consider data poisoning a pesky issue to be fixed with technological solutions. However, it may be better to see data poisoning as an innovative solution to an intrusion on the fundamental moral rights of artists and users.

Read more: https://theconversation.com/data-poisoning-how-artists-are-sabotaging-ai-to-take-revenge-on-image-generators-219335
