
Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations
Date: 2024-01-12
Description
This summary was drafted with mixtral-8x7b-instruct-v0.1.Q5_K_M.gguf
This report develops a comprehensive taxonomy and terminology for adversarial machine learning (AML). The taxonomy covers the main classes of ML methods, the lifecycle stages at which attacks occur, and attacker goals, capabilities, and knowledge. AML attacks are classified as evasion, poisoning, or privacy attacks, with corresponding mitigations discussed for each. The report also highlights open challenges in the field, such as the transferability of attacks across models, systems, and datasets, and includes a glossary defining key terms in AI security to assist non-expert readers.
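As a rough illustration of the evasion class of attacks mentioned above, here is a minimal sketch of a one-step gradient-sign (FGSM-style) perturbation against a toy logistic-regression classifier. The classifier weights, the input, and the perturbation budget are invented for demonstration; this is not an example from the report itself.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """One-step gradient-sign evasion perturbation of input x.

    x: input vector; w, b: logistic-regression weights and bias;
    y: true label (0 or 1); eps: attack budget (max per-feature change).
    """
    p = sigmoid(w @ x + b)
    grad = (p - y) * w              # d(cross-entropy loss)/dx for this model
    return x + eps * np.sign(grad)  # step that increases the loss

# Hypothetical classifier and a point it correctly labels as class 1
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])            # w @ x + b = 1.5 -> p ~ 0.82 -> class 1

x_adv = fgsm_perturb(x, w, b, y=1.0, eps=1.0)
print(sigmoid(w @ x + b) > 0.5)     # clean input classified as class 1
print(sigmoid(w @ x_adv + b) > 0.5) # perturbed input is misclassified
```

The attacker here has full (white-box) knowledge of the model, one of the attacker-knowledge settings distinguished in the taxonomy; black-box variants instead estimate the gradient from queries.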