
Nightshade, developed by computer scientist Ben Zhao to protect artwork from generative AI, can turn a dog into a cat or a hat into a cake. (Image courtesy Ben Zhao)

Poisoning the machine

Visual artists are under threat from generative AI. Computer scientist Ben Zhao explains how his tools Glaze and Nightshade protect them.

If computer science professor Ben Zhao is at all surprised by the appearance of his audience (a small cohort of twentysomethings in cowboy boots and gore makeup, corsets and billowing brown skirts, overalls adorned with red bows), he doesn’t show it. After introducing his talk, “Protecting Human Creatives Against Generative AI,” he scans the aseptic Rubenstein Forum conference room and its colorful guests. “I would think that this topic is highly relevant to everyone in the building at the moment,” he says.

This is a fair assessment. Zhao’s lecture was part of UChicago’s 21st annual anime and video game convention, UChi-Con, held in Winter Quarter. A daylong event involving lectures, workshops, a costume competition, live music, art vendors, and more, UChi-Con is nothing if not a celebration of human creativity.

One of the main draws of the con is the Artist’s Alley, a bazaar of handmade prints, posters, stickers, jewelry, and other crafts. “We have had artists fly in to be there before, because it's a way for them to turn a profit,” says fourth-year Jonathan Li, president of UChicago Japanese Animation Society (UJAS), the club that organizes the event. The Artist’s Alley drew so many attendees that UChi-Con expanded beyond Ida Noyes this year, adding the Rubenstein Forum as a second venue.

Zhao, a Neubauer Professor of Computer Science, is the creator of Glaze and Nightshade, free-to-use tools that protect artists from having their work mimicked by artificial intelligence (AI). He has the vibe of a righteous cowboy in the Wild West of AI. An air of admiration—almost reverence—emanates from his audience. (During the Q&A, one guest prefaces their question with “I never thought I'd get to actually see the creator of Glaze.”)

Zhao and his lab began working on Glaze in December 2022 after receiving an email from Kim Van Deun, a digital fantasy artist. Van Deun had heard about a project from Zhao’s lab that protected against unlawful facial recognition. Might something similar be used to protect artists?

“We were utterly confused,” Zhao says. “Why does art need protection?” A few months later, the scales fell from his eyes. Van Deun had been talking about generative AI.

Generative AI and traditional AI, Zhao explains, are fundamentally different beasts. Traditional AI models have been around for decades, and their applications—climate modeling, drug research, car safety—are often beneficial.

Generative AI models, on the other hand, create media based on human-inputted prompts. These models are trained on vast quantities of data trawled from across the internet. For image generation models like Midjourney, this “data” is the work of human artists. Through a process called fine-tuning, Zhao says, “you can teach the model to produce things in an individual artist’s style without their consent or knowledge.”

He gives the example of digital fantasy artist Greg Rutkowski, whose name is used up to 150,000 times per month on Midjourney to generate images mimicking his characteristic dramatic, sprawling style. “Most of the artists I talk to on a daily basis are still afraid of waking up one day and seeing their name on one of these models,” Zhao says. “When that happens, your career is more or less over.” 

In the fall of 2022, Zhao reached back out to Van Deun. After some email exchanges and an appearance at an online town hall hosted by the Concept Art Association, Zhao and his lab got to work. Glaze was made in just three months and was released to the public in March 2023.

The program works by exploiting the differences in the way humans and AI models “see.” When an artist runs their work through Glaze, it adds a layer onto the art—a layer that humans can’t perceive and that AI can’t miss. These changes make art look like a profoundly different style to the model.

For example, AI might be given a Glazed watercolor portrait and think it’s looking at Cubist color blocks. In his presentation Zhao displays art by Karla Ortiz, the concept artist who first drew up the costume for the titular hero in Marvel’s Doctor Strange. (Concept artists, hired by game or movie studios to create initial designs for characters and settings, have had a particularly hard time weathering the storm of AI art.) Next to Ortiz’s art in the presentation was an AI mimicry by a model that had been fed Ortiz’s Glazed work. The model, instead of replicating Ortiz’s realistic figures and high-contrast colors, had generated an abstract, wavering swirl vaguely reminiscent of Van Gogh.
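The idea of a change humans can’t perceive but a model can’t miss can be illustrated with a toy sketch. This is a conceptual illustration only, not the actual Glaze algorithm: the “style detector” below is a made-up linear model over six pixels, and the cloaking step simply nudges each pixel a small amount in the direction the model is most sensitive to.

```python
# Conceptual sketch only: NOT the actual Glaze algorithm.
# A toy linear "style detector" scores an image; a positive score means
# "watercolor." Nudging every pixel by a small amount in the direction the
# model is most sensitive to flips its verdict, while the image barely
# changes to a human eye.

# hypothetical weights of a toy detector (invented for illustration)
weights = [1.0, -1.0, 1.5, -0.5, 0.5, -1.5]

# a toy six-pixel "image", pixel values in [0, 1]
image = [0.6, 0.5, 0.55, 0.5, 0.6, 0.5]

def score(pixels):
    # the detector's confidence that this is a watercolor
    return sum(w * p for w, p in zip(weights, pixels))

def sign(x):
    return 1.0 if x > 0 else -1.0

eps = 0.05  # each pixel moves at most 5%: hard for a person to notice
cloaked = [p - eps * sign(w) for w, p in zip(weights, image)]

print(score(image))    # positive: the detector says "watercolor"
print(score(cloaked))  # negative: the detector now sees a different style
```

Real systems work in far higher dimensions, where even smaller per-pixel changes can push an image across a model’s decision boundary.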

Once Glaze was released, “we had five million global downloads in 21 months,” Zhao says. Currently, that count is up to almost seven million.

In the latter half of his lecture, Zhao shifts to talk of his newer venture, Nightshade. If Glaze changes what an AI model sees in terms of style, Nightshade changes what it sees in terms of content.

Zhao created Nightshade as a copyright protection mechanism for people and companies who don’t want their intellectual property used to train AI models. “AI models have no idea what reality looks like,” Zhao explains. “Everything they know is fed to them with labels. If a model sees a bunch of dog images all labeled as cats, it will think a dog is a cat.” Give the model an image of a motorcycle run through Nightshade, for example, and the model might think it’s looking at a coffee mug. This damages the model’s ability to generate accurate representations of the world, effectively “poisoning” the system.

Do this enough times, Zhao says, and you end up with “model implosion,” in which everything in the model, regardless of whether it was originally in the poisoned set, starts to degrade. By the time 500 prompts have been poisoned, the model is useless for anything except generating random pixels.
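The label-poisoning idea can be sketched with a toy classifier. This is a conceptual illustration only, not the actual Nightshade technique: the data points, features, and nearest-neighbor "model" below are all invented for the example. It shows how a model that learns purely from labeled examples will confidently misread a dog once dog-like training samples arrive labeled "cat."

```python
# Conceptual sketch only: NOT the actual Nightshade algorithm.
# A model only "knows" what its labeled training data tells it. Slip in
# dog-like examples labeled "cat" and the model starts calling dogs cats.

# toy 2D feature vectors: dogs cluster near (1, 1), cats near (9, 9)
clean_data = [
    ((1.0, 1.2), "dog"), ((1.3, 0.9), "dog"), ((0.8, 1.1), "dog"),
    ((9.0, 8.8), "cat"), ((8.7, 9.2), "cat"), ((9.1, 9.0), "cat"),
]

# poisoned samples: dog-like features deliberately labeled "cat"
poison = [((1.1, 1.0), "cat"), ((0.9, 1.3), "cat"), ((1.2, 1.1), "cat")]

def classify(data, feats):
    """1-nearest-neighbor: copy the label of the closest training example."""
    def dist(item):
        (x, y), _label = item
        return (feats[0] - x) ** 2 + (feats[1] - y) ** 2
    return min(data, key=dist)[1]

dog = (1.05, 1.05)  # an obvious dog
print(classify(clean_data, dog))           # "dog"
print(classify(clean_data + poison, dog))  # now "cat"
```

Real generative models are vastly more complex, but the principle is the same: the poisoned examples sit exactly where genuine ones do in feature space, so the model absorbs the wrong association along with the right ones.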

Zhao’s blunt, pragmatic optimism propels him through a 45-minute Q&A. He fields questions about a version of Glaze for audio recordings (researchers from WashU are developing one), Nightshade’s potential to interfere with traditional AI tools like classifiers (none at all), and his projects in other AI spheres (there’s a program in the works to quash AI-generated scam websites).

At the core of his responses is a palpable faith in people, artists or otherwise. When asked what artists can do to help his work, he tells them to just keep creating.

“People are starting to recognize the difference between AI-generated images and real human art,” he says. “There will be a day very soon when people are like, no, I don’t want this AI crap. Give me the real thing.”


Read more about UChi-Con.