
This new data poisoning tool lets artists fight back against generative AI | MIT Technology Review
Date: 2023-10-23
Description
Summary drafted by a large language model.
Melissa Heikkilä reports on Nightshade, a new data poisoning tool developed by researchers at the University of Chicago that lets artists add invisible changes to their art before uploading it online. If the altered images are scraped into an AI training set, the resulting model can break in chaotic and unpredictable ways. The tool is intended to tip the power balance away from AI companies that use artists' work without consent or compensation, and back toward artists. Nightshade exploits a security vulnerability inherent to generative AI models trained on vast amounts of data scraped from the internet: the more poisoned images that end up in a model's dataset, the more damage the technique causes.