New Tools Protect Artists from AI Model Training

Artists face a dilemma as AI development accelerates: their work risks being scraped for model training without consent or compensation. Shawn Shan, an AI systems researcher at the University of Chicago, developed two tools, Glaze and Nightshade, to protect artists' work from that kind of exploitation. Glaze, launched in 2022 and downloaded more than two million times, adds a protective layer of subtle perturbations to an image, shifting the style that AI models perceive so they cannot mimic the original. Glaze's passive approach, however, is not always effective, particularly for artists without a consistent style. To address this, Shan introduced Nightshade in October 2023. It takes an active approach, poisoning AI training data so that models trained on "shaded" images produce inaccurate results.
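
Conceptually, this kind of style cloaking works by optimizing a small, visually negligible change to an image that shifts what a model's feature extractor "sees." The sketch below illustrates that idea in PyTorch; it is not the actual Glaze algorithm, and the choice of feature extractor (a stock ResNet-18), the loss, and the perturbation budget are assumptions made purely for illustration.

```python
# A minimal sketch of style cloaking in the spirit of Glaze (not the actual
# Glaze algorithm): optimize a bounded perturbation so a feature extractor
# perceives a different style while the image looks unchanged to a human.
import torch
import torchvision.models as models

# Stock ResNet-18 with its classification head removed, used as a stand-in
# feature extractor. Input normalization is omitted for brevity.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
extractor = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()
for p in extractor.parameters():
    p.requires_grad_(False)

def cloak(image, decoy_style, eps=0.03, steps=100, lr=0.01):
    """Return image + delta, where ||delta||_inf <= eps keeps the change
    imperceptible but pushes the extractor's features toward the decoy."""
    delta = torch.zeros_like(image, requires_grad=True)
    target_feat = extractor(decoy_style).detach()
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        feat = extractor((image + delta).clamp(0, 1))
        loss = torch.nn.functional.mse_loss(feat, target_feat)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the change visually negligible
    return (image + delta).clamp(0, 1).detach()

# Example: cloak a random "artwork" toward a random "decoy style" image.
art = torch.rand(1, 3, 224, 224)
decoy = torch.rand(1, 3, 224, 224)
protected = cloak(art, decoy)
```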

Nightshade quickly gained traction, with more than 300,000 downloads within months of release. It distorts the data AI models encounter during training, tricking them into learning incorrect associations: a model trained on shaded images might, for instance, interpret an image of a cow as a leather purse and generate outputs accordingly. These tools give artists a measure of control over their work while larger-scale legal protections are pursued. Shan's team runs The Glaze Project as a nonprofit and refuses venture capital funding to preserve its independence and its focus on artists' rights.
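
The downstream effect Nightshade aims for can be seen in a toy experiment. The sketch below uses simple label flipping as a stand-in, which is a simplification: Nightshade itself perturbs the pixels while captions stay intact. The class names ("cow" and "purse"), the synthetic clusters, and the 60% poison rate are all assumptions chosen to make the effect visible.

```python
# A toy demonstration of data poisoning's downstream effect: when a large
# fraction of one class's training labels is corrupted, the trained model
# systematically mislabels that class at test time.
import torch

torch.manual_seed(0)
cows = torch.randn(200, 2) + torch.tensor([2.0, 2.0])      # class 0 ("cow")
purses = torch.randn(200, 2) + torch.tensor([-2.0, -2.0])  # class 1 ("purse")
X = torch.cat([cows, purses])
y = torch.cat([torch.zeros(200), torch.ones(200)])

# Poison the training set: relabel 60% of "cow" samples as "purse".
poison_mask = torch.rand(200) < 0.6
y_poisoned = y.clone()
y_poisoned[:200][poison_mask] = 1.0

# Train a tiny logistic-regression model on the poisoned data.
model = torch.nn.Linear(2, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
for _ in range(500):
    loss = torch.nn.functional.binary_cross_entropy_with_logits(
        model(X).squeeze(1), y_poisoned)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Fresh, clean "cow" images are now classified as "purses".
test_cows = torch.randn(100, 2) + torch.tensor([2.0, 2.0])
pred = (model(test_cows).squeeze(1) > 0).float()
print(f"cows mislabeled as purses: {pred.mean().item():.0%}")
```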
