Model Poisoning Attack

A curated list of papers on model poisoning attacks and defenses, organized by venue and year.

Table of Contents
NeurIPS
ICML
ICLR
NDSS
CVPR
IJCAI
AAAI
AISTATS
KDD
CCS
ACL
NAACL-HLT
EMNLP
SIGIR
ICDE
WWW
SP
USENIX Security Symposium
ACM Multimedia
IEEE Trans. Neural Networks Learn. Syst.
IEEE Trans. Artif. Intell.
IEEE Trans. Knowl. Data Eng.
IEEE Trans. Emerg. Top. Comput.
IEEE Trans. Inf. Forensics Secur.
IEEE Trans. Comput. Aided Des. Integr. Circuits Syst.
IEEE Trans. Computers
Nat. Mac. Intell.
Inf. Sci.
Expert Syst. Appl.
Neural Networks
arXiv
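
Many of the papers listed below concern model poisoning in federated learning, where a malicious client submits a crafted update to the aggregation server. As orientation, here is a minimal sketch (illustrative only, not taken from any specific paper above) of a scaled sign-flipping attack against plain FedAvg:

```python
# Illustrative sketch: one malicious client poisons a FedAvg round by
# submitting a scaled, sign-flipped version of its honest update,
# pulling the global model away from the descent direction.
import numpy as np

def fedavg(updates):
    """Plain FedAvg: average the clients' model updates coordinate-wise."""
    return np.mean(np.stack(updates), axis=0)

def poisoned_update(honest_update, scale=10.0):
    """Model poisoning: negate the honest update and amplify it."""
    return -scale * honest_update

rng = np.random.default_rng(0)
honest = [rng.normal(size=4) * 0.01 for _ in range(9)]  # 9 benign clients
malicious = poisoned_update(honest[0])                  # 1 attacker

clean_agg = fedavg(honest)
poisoned_agg = fedavg(honest + [malicious])

# Even a single attacker with a large scale factor can dominate the
# aggregate, which is why so many defenses below replace plain averaging.
```

This fragility of unweighted averaging is the starting point for the Byzantine-robust aggregation rules studied in much of the federated-learning literature indexed here.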

NeurIPS

2024

Title Venue Year Link
From Trojan Horses to Castle Walls: Unveiling Bilateral Data Poisoning Effects in Diffusion Models. NeurIPS 2024 Link
Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models. NeurIPS 2024 Link
Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models. NeurIPS 2024 Link

2023

Title Venue Year Link
RECESS Vaccine for Federated Learning: Proactive Defense Against Model Poisoning Attacks. NeurIPS 2023 Link

2021

Title Venue Year Link
FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective. NeurIPS 2021 Link

ICML

2025

Title Venue Year Link
PoisonedEye: Knowledge Poisoning Attack on Retrieval-Augmented Generation based Large Vision-Language Models. ICML 2025 Link

2024

Title Venue Year Link
FedREDefense: Defending against Model Poisoning Attacks for Federated Learning using Model Update Reconstruction Error. ICML 2024 Link
The Stronger the Diffusion Model, the Easier the Backdoor: Data Poisoning to Induce Copyright Breaches Without Adjusting Finetuning Pipeline. ICML 2024 Link

2023

Title Venue Year Link
Exploring Model Dynamics for Accumulative Poisoning Discovery. ICML 2023 Link
Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks. ICML 2023 Link
LeadFL: Client Self-Defense against Model Poisoning in Federated Learning. ICML 2023 Link
Poisoning Language Models During Instruction Tuning. ICML 2023 Link

2021

Title Venue Year Link
Model-Targeted Poisoning Attacks with Provable Convergence. ICML 2021 Link

ICLR

2025

Title Venue Year Link
Concept-ROT: Poisoning Concepts in Large Language Models with Model Editing. ICLR 2025 Link

2020

Title Venue Year Link
Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks. ICLR 2020 Link

NDSS

2021

Title Venue Year Link
Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning. NDSS 2021 Link

CVPR

2025

Title Venue Year Link
Model Poisoning Attacks to Federated Learning via Multi-Round Consistency. CVPR 2025 Link
Silent Branding Attack: Trigger-free Data Poisoning Attack on Text-to-Image Diffusion Models. CVPR 2025 Link

2024

Title Venue Year Link
Semantic Shield: Defending Vision-Language Models Against Backdooring and Poisoning via Fine-Grained Knowledge Alignment. CVPR 2024 Link

IJCAI

2024

Title Venue Year Link
EAB-FL: Exacerbating Algorithmic Bias through Model Poisoning Attacks in Federated Learning. IJCAI 2024 Link

2023

Title Venue Year Link
Denial-of-Service or Fine-Grained Control: Towards Flexible Model Poisoning Attacks on Federated Learning. IJCAI 2023 Link

2022

Title Venue Year Link
Poisoning Deep Learning Based Recommender Model in Federated Learning Scenarios. IJCAI 2022 Link

AAAI

2023

Title Venue Year Link
DeFL: Defending against Model Poisoning Attacks in Federated Learning via Critical Learning Periods Awareness. AAAI 2023 Link

2016

Title Venue Year Link
Data Poisoning Attacks against Autoregressive Models. AAAI 2016 Link

AISTATS

2022

Title Venue Year Link
SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification. AISTATS 2022 Link

KDD

2024

Title Venue Year Link
FedRoLA: Robust Federated Learning Against Model Poisoning via Layer-based Aggregation. KDD 2024 Link

2022

Title Venue Year Link
FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients. KDD 2022 Link

2021

Title Venue Year Link
Data Poisoning Attacks Against Outcome Interpretations of Predictive Models. KDD 2021 Link

2015

Title Venue Year Link
Predictive Modeling for Public Health: Preventing Childhood Lead Poisoning. KDD 2015 Link

CCS

2025

Title Venue Year Link
On the Feasibility of Poisoning Text-to-Image AI Models via Adversarial Mislabeling. CCS 2025 Link

2022

Title Venue Year Link
Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets. CCS 2022 Link

ACL

2024

Title Venue Year Link
RLHFPoison: Reward Poisoning Attack for Reinforcement Learning with Human Feedback in Large Language Models. ACL 2024 Link

2020

Title Venue Year Link
Weight Poisoning Attacks on Pretrained Models. ACL 2020 Link

NAACL-HLT

2021

Title Venue Year Link
Concealed Data Poisoning Attacks on NLP Models. NAACL-HLT 2021 Link

EMNLP

2021

Title Venue Year Link
Backdoor Attacks on Pre-trained Models by Layerwise Weight Poisoning. EMNLP 2021 Link

SIGIR

2024

Title Venue Year Link
Revisit Targeted Model Poisoning on Federated Recommendation: Optimize via Multi-objective Transport. SIGIR 2024 Link

ICDE

2022

Title Venue Year Link
FedRecAttack: Model Poisoning Attack to Federated Recommendation. ICDE 2022 Link

WWW

2025

Title Venue Year Link
Model Supply Chain Poisoning: Backdooring Pre-trained Models via Embedding Indistinguishability. WWW 2025 Link

2023

Title Venue Year Link
MaSS: Model-agnostic, Semantic and Stealthy Data Poisoning Attack on Knowledge Graph Embedding. WWW 2023 Link

SP

2025

Title Venue Year Link
Preference Poisoning Attacks on Reward Model Learning. SP 2025 Link

2024

Title Venue Year Link
Nightshade: Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models. SP 2024 Link
Test-Time Poisoning Attacks Against Test-Time Adaptation Models. SP 2024 Link
TrojanPuzzle: Covertly Poisoning Code-Suggestion Models. SP 2024 Link

2023

Title Venue Year Link
Benchmarking the Effect of Poisoning Defenses on the Security and Bias of Deep Learning Models. SP 2023 Link

USENIX Security Symposium

2024

Title Venue Year Link
ACE: A Model Poisoning Attack on Contribution Evaluation Methods in Federated Learning. USENIX Security Symposium 2024 Link

2020

Title Venue Year Link
Local Model Poisoning Attacks to Byzantine-Robust Federated Learning. USENIX Security Symposium 2020 Link

ACM Multimedia

2023

Title Venue Year Link
Text-to-Image Diffusion Models can be Easily Backdoored through Multimodal Data Poisoning. ACM Multimedia 2023 Link

IEEE Trans. Neural Networks Learn. Syst.

2025

Title Venue Year Link
Defending Against Neural Network Model Inversion Attacks via Data Poisoning. IEEE Trans. Neural Networks Learn. Syst. 2025 Link
Leverage Variational Graph Representation for Model Poisoning on Federated Learning. IEEE Trans. Neural Networks Learn. Syst. 2025 Link
Secure and Efficient Federated Learning Against Model Poisoning Attacks in Horizontal and Vertical Data Partitioning. IEEE Trans. Neural Networks Learn. Syst. 2025 Link

IEEE Trans. Artif. Intell.

2024

Title Venue Year Link
Attacking-Distance-Aware Attack: Semi-targeted Model Poisoning on Federated Learning. IEEE Trans. Artif. Intell. 2024 Link

IEEE Trans. Knowl. Data Eng.

2025

Title Venue Year Link
HidAttack: An Effective and Undetectable Model Poisoning Attack to Federated Recommenders. IEEE Trans. Knowl. Data Eng. 2025 Link

2024

Title Venue Year Link
BIC-Based Mixture Model Defense Against Data Poisoning Attacks on Classifiers: A Comprehensive Study. IEEE Trans. Knowl. Data Eng. 2024 Link

IEEE Trans. Emerg. Top. Comput.

2024

Title Venue Year Link
Blockchain-Based Federated Learning With SMPC Model Verification Against Poisoning Attack for Healthcare Systems. IEEE Trans. Emerg. Top. Comput. 2024 Link

IEEE Trans. Inf. Forensics Secur.

2025

Title Venue Year Link
DamPa: Dynamic Adaptive Model Poisoning Attack in Federated Learning. IEEE Trans. Inf. Forensics Secur. 2025 Link
Enhanced Model Poisoning Attack and Multi-Strategy Defense in Federated Learning. IEEE Trans. Inf. Forensics Secur. 2025 Link
FLGuardian: Defending Against Model Poisoning Attacks via Fine-Grained Detection in Federated Learning. IEEE Trans. Inf. Forensics Secur. 2025 Link
FedGhost: Data-Free Model Poisoning Enhancement in Federated Learning. IEEE Trans. Inf. Forensics Secur. 2025 Link
Maximizing Uncertainty for Federated Learning via Bayesian Optimization-Based Model Poisoning. IEEE Trans. Inf. Forensics Secur. 2025 Link
PoisonPatch: Natural Adversarial Patches via Diffusion Models and Federated Learning Poisoning. IEEE Trans. Inf. Forensics Secur. 2025 Link
Securing Federated Learning Against Extreme Model Poisoning Attacks via Multidimensional Time Series Anomaly Detection on Local Updates. IEEE Trans. Inf. Forensics Secur. 2025 Link
The Gradient Puppeteer: Adversarial Domination in Gradient Leakage Attacks Through Model Poisoning. IEEE Trans. Inf. Forensics Secur. 2025 Link

2024

Title Venue Year Link
A Robust Privacy-Preserving Federated Learning Model Against Model Poisoning Attacks. IEEE Trans. Inf. Forensics Secur. 2024 Link
Data-Agnostic Model Poisoning Against Federated Learning: A Graph Autoencoder Approach. IEEE Trans. Inf. Forensics Secur. 2024 Link
MODEL: A Model Poisoning Defense Framework for Federated Learning via Truth Discovery. IEEE Trans. Inf. Forensics Secur. 2024 Link
Secure Model Aggregation Against Poisoning Attacks for Cross-Silo Federated Learning With Robustness and Fairness. IEEE Trans. Inf. Forensics Secur. 2024 Link

2023

Title Venue Year Link
A Manifold Consistency Interpolation Method of Poisoning Attacks Against Semi-Supervised Model. IEEE Trans. Inf. Forensics Secur. 2023 Link
Categorical Inference Poisoning: Verifiable Defense Against Black-Box DNN Model Stealing Without Constraining Surrogate Data and Query Times. IEEE Trans. Inf. Forensics Secur. 2023 Link

2022

Title Venue Year Link
ShieldFL: Mitigating Model Poisoning Attacks in Privacy-Preserving Federated Learning. IEEE Trans. Inf. Forensics Secur. 2022 Link

2021

Title Venue Year Link
With Great Dispersion Comes Greater Resilience: Efficient Poisoning Attacks and Defenses for Linear Regression Models. IEEE Trans. Inf. Forensics Secur. 2021 Link

IEEE Trans. Comput. Aided Des. Integr. Circuits Syst.

2022

Title Venue Year Link
Enhancing Reliability and Security: A Configurable Poisoning PUF Against Modeling Attacks. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 2022 Link

IEEE Trans. Computers

2025

Title Venue Year Link
BaDFL: Mitigating Model Poisoning in Decentralized Federated Learning. IEEE Trans. Computers 2025 Link

2023

Title Venue Year Link
Model Poisoning Attack on Neural Network Without Reference Data. IEEE Trans. Computers 2023 Link

Nat. Mac. Intell.

2024

Title Venue Year Link
Poisoning medical knowledge using large language models. Nat. Mac. Intell. 2024 Link

Inf. Sci.

2025

Title Venue Year Link
TPFL: Privacy-preserving personalized federated learning mitigates model poisoning attacks. Inf. Sci. 2025 Link

2023

Title Venue Year Link
Compromise privacy in large-batch Federated Learning via model poisoning. Inf. Sci. 2023 Link
Model poisoning attack in differential privacy-based federated learning. Inf. Sci. 2023 Link

Expert Syst. Appl.

2023

Title Venue Year Link
Bandit-based data poisoning attack against federated learning for autonomous driving models. Expert Syst. Appl. 2023 Link

Neural Networks

2024

Title Venue Year Link
Learning a robust foundation model against clean-label data poisoning attacks at downstream tasks. Neural Networks 2024 Link

arXiv

2026

Title Venue Year Link
Beyond Denial-of-Service: The Puppeteer's Attack for Fine-Grained Control in Ranking-Based Federated Learning arXiv 2026 Link
Byzantine-Robust Federated Learning Framework with Post-Quantum Secure Aggregation for Real-Time Threat Intelligence Sharing in Critical IoT Infrastructure arXiv 2026 Link

2025

Title Venue Year Link
A Client-level Assessment of Collaborative Backdoor Poisoning in Non-IID Federated Learning arXiv 2025 Link
Backdoors in Conditional Diffusion: Threats to Responsible Synthetic Data Pipelines arXiv 2025 Link
BlindFL: Segmented Federated Learning with Fully Homomorphic Encryption arXiv 2025 Link
Byzantine-Robust Federated Learning Using Generative Adversarial Networks arXiv 2025 Link
DMPA: Model Poisoning Attacks on Decentralized Federated Learning for Model Differences arXiv 2025 Link
DP2Guard: A Lightweight and Byzantine-Robust Privacy-Preserving Federated Learning Scheme for Industrial IoT arXiv 2025 Link
Deep Learning based Moving Target Defence for Federated Learning against Poisoning Attack in MEC Systems with a 6G Wireless Model arXiv 2025 Link
Dual Defense: Enhancing Privacy and Mitigating Poisoning Attacks in Federated Learning arXiv 2025 Link
EByFTVeS: Efficient Byzantine Fault Tolerant-based Verifiable Secret-sharing in Distributed Privacy-preserving Machine Learning arXiv 2025 Link
Fast, Private, and Protected: Safeguarding Data Privacy and Defending Against Model Poisoning Attacks in Federated Learning arXiv 2025 Link
FedGuard: A Diverse-Byzantine-Robust Mechanism for Federated Learning with Major Malicious Clients arXiv 2025 Link
FedStrategist: A Meta-Learning Framework for Adaptive and Robust Aggregation in Federated Learning arXiv 2025 Link
FedUP: Efficient Pruning-based Federated Unlearning for Model Poisoning Attacks arXiv 2025 Link
Federated Learning-Based Data Collaboration Method for Enhancing Edge Cloud AI System Security Using Large Language Models arXiv 2025 Link
GRANITE: A Byzantine-Resilient Dynamic Gossip Learning Framework arXiv 2025 Link
Graph Representation-based Model Poisoning on Federated Large Language Models arXiv 2025 Link
Graph Representation-based Model Poisoning on the Heterogeneous Internet of Agents arXiv 2025 Link
Intelligent Attacks and Defense Methods in Federated Learning-enabled Energy-Efficient Wireless Networks arXiv 2025 Link
KeTS: Kernel-based Trust Segmentation against Model Poisoning Attacks arXiv 2025 Link
Maximizing Uncertainty for Federated Learning via Bayesian Optimisation-based Model Poisoning arXiv 2025 Link
Not All Edges are Equally Robust: Evaluating the Robustness of Ranking-Based Federated Learning arXiv 2025 Link
On Evaluating the Poisoning Robustness of Federated Learning under Local Differential Privacy arXiv 2025 Link
Performance Guaranteed Poisoning Attacks in Federated Learning: A Sliding Mode Approach arXiv 2025 Link
Poisoning Bayesian Inference via Data Deletion and Replication arXiv 2025 Link
Privacy-Preserving Federated Learning Scheme with Mitigating Model Poisoning Attacks: Vulnerabilities and Countermeasures arXiv 2025 Link
REVERB-FL: Server-Side Adversarial and Reserve-Enhanced Federated Learning for Robust Audio Classification arXiv 2025 Link
RF Sensing Security and Malicious Exploitation: A Comprehensive Survey arXiv 2025 Link
RepuNet: A Reputation System for Mitigating Malicious Clients in DFL arXiv 2025 Link
SLVR: Securely Leveraging Client Validation for Robust Federated Learning arXiv 2025 Link
Scalable Hierarchical AI-Blockchain Framework for Real-Time Anomaly Detection in Large-Scale Autonomous Vehicle Networks arXiv 2025 Link
SoK: Benchmarking Poisoning Attacks and Defenses in Federated Learning arXiv 2025 Link
The Art of Hide and Seek: Making Pickle-Based Model Supply Chain Poisoning Stealthy Again arXiv 2025 Link
The Gradient Puppeteer: Adversarial Domination in Gradient Leakage Attacks through Model Poisoning arXiv 2025 Link
Trojan Horse Hunt in Time Series Forecasting for Space Operations arXiv 2025 Link
Two Heads Are Better than One: Model-Weight and Latent-Space Analysis for Federated Learning on Non-iid Data against Poisoning Attacks arXiv 2025 Link
VerifBFL: Leveraging zk-SNARKs for A Verifiable Blockchained Federated Learning arXiv 2025 Link

2024

Title Venue Year Link
A Novel Defense Against Poisoning Attacks on Federated Learning: LayerCAM Augmented with Autoencoder arXiv 2024 Link
ACE: A Model Poisoning Attack on Contribution Evaluation Methods in Federated Learning arXiv 2024 Link
Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning arXiv 2024 Link
DeTrigger: A Gradient-Centric Approach to Backdoor Attack Mitigation in Federated Learning arXiv 2024 Link
Defending Against Sophisticated Poisoning Attacks with RL-based Aggregation in Federated Learning arXiv 2024 Link
Enabling Privacy-Preserving Cyber Threat Detection with Federated Learning arXiv 2024 Link
FedRDF: A Robust and Dynamic Aggregation Function against Poisoning Attacks in Federated Learning arXiv 2024 Link
HYDRA-FL: Hybrid Knowledge Distillation for Robust and Accurate Federated Learning arXiv 2024 Link
How to Defend Against Large-scale Model Poisoning Attacks in Federated Learning: A Vertical Solution arXiv 2024 Link
Leverage Variational Graph Representation For Model Poisoning on Federated Learning arXiv 2024 Link
Meta Stackelberg Game: Robust Federated Learning against Adaptive and Mixed Poisoning Attacks arXiv 2024 Link
Mitigating Malicious Attacks in Federated Learning via Confidence-aware Defense arXiv 2024 Link
Model Poisoning Attacks to Federated Learning via Multi-Round Consistency arXiv 2024 Link
Multi-Model based Federated Learning Against Model Poisoning Attack: A Deep Learning Based Model Selection for MEC Systems arXiv 2024 Link
No Vandalism: Privacy-Preserving and Byzantine-Robust Federated Learning arXiv 2024 Link
On the Hardness of Decentralized Multi-Agent Policy Evaluation under Byzantine Attacks arXiv 2024 Link
PFAttack: Stealthy Attack Bypassing Group Fairness in Federated Learning arXiv 2024 Link
Partner in Crime: Boosting Targeted Poisoning Attacks against Federated Learning arXiv 2024 Link
Poisoning Decentralized Collaborative Recommender System and Its Countermeasures arXiv 2024 Link
Resilience in Online Federated Learning: Mitigating Model-Poisoning Attacks via Partial Sharing arXiv 2024 Link
Robust Federated Contrastive Recommender System against Model Poisoning Attack arXiv 2024 Link
Securing Distributed Network Digital Twin Systems Against Model Poisoning Attacks arXiv 2024 Link
Securing Federated Learning with Control-Flow Attestation: A Novel Framework for Enhanced Integrity and Resilience against Adversarial Attacks arXiv 2024 Link
Securing Tomorrow's Smart Cities: Investigating Software Security in Internet of Vehicles and Deep Learning Technologies arXiv 2024 Link
Tazza: Shuffling Neural Network Parameters for Secure and Private Federated Learning arXiv 2024 Link
pFedGame -- Decentralized Federated Learning using Game Theory in Dynamic Topology arXiv 2024 Link

2023

Title Venue Year Link
A Data-Driven Defense against Edge-case Model Poisoning Attacks on Federated Learning arXiv 2023 Link
A First Order Meta Stackelberg Method for Robust Federated Learning arXiv 2023 Link
A First Order Meta Stackelberg Method for Robust Federated Learning (Technical Report) arXiv 2023 Link
An Analysis of Untargeted Poisoning Attack and Defense Methods for Federated Online Learning to Rank Systems arXiv 2023 Link
Anticipatory Thinking Challenges in Open Worlds: Risk Management arXiv 2023 Link
CADeSH: Collaborative Anomaly Detection for Smart Homes arXiv 2023 Link
CATFL: Certificateless Authentication-based Trustworthy Federated Learning for 6G Semantic Communications arXiv 2023 Link
Can We Trust the Similarity Measurement in Federated Learning? arXiv 2023 Link
Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning arXiv 2023 Link
DISBELIEVE: Distance Between Client Models is Very Essential for Effective Local Model Poisoning Attacks arXiv 2023 Link
Data and Model Poisoning Backdoor Attacks on Wireless Federated Learning, and the Defense Mechanisms: A Comprehensive Survey arXiv 2023 Link
Data-Agnostic Model Poisoning against Federated Learning: A Graph Autoencoder Approach arXiv 2023 Link
Denial-of-Service or Fine-Grained Control: Towards Flexible Model Poisoning Attacks on Federated Learning arXiv 2023 Link
Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks arXiv 2023 Link
FLoW3 -- Web3 Empowered Federated Learning arXiv 2023 Link
FedDefender: Client-Side Attack-Tolerant Federated Learning arXiv 2023 Link
Honest Score Client Selection Scheme: Preventing Federated Learning Label Flipping Attacks in Non-IID Scenarios arXiv 2023 Link
How Potent are Evasion Attacks for Poisoning Federated Learning-Based Signal Classifiers? arXiv 2023 Link
Identifying the Truth of Global Model: A Generic Solution to Defend Against Byzantine and Backdoor Attacks in Federated Learning (full version) arXiv 2023 Link
Kick Bad Guys Out! Conditionally Activated Anomaly Detection in Federated Learning with Zero-Knowledge Proof Verification arXiv 2023 Link
Manipulating Visually-aware Federated Recommender Systems and Its Countermeasures arXiv 2023 Link
Mitigating Evasion Attacks in Federated Learning-Based Signal Classifiers arXiv 2023 Link
Poster: Sponge ML Model Attacks of Mobile Apps arXiv 2023 Link
Protecting Federated Learning from Extreme Model Poisoning Attacks via Multidimensional Time Series Anomaly Detection arXiv 2023 Link
RECESS Vaccine for Federated Learning: Proactive Defense Against Model Poisoning Attacks arXiv 2023 Link
Secure Federated Learning against Model Poisoning Attacks via Client Filtering arXiv 2023 Link
SureFED: Robust Federated Learning via Uncertainty-Aware Inward and Outward Inspection arXiv 2023 Link

2022

Title Venue Year Link
A Streamlit-based Artificial Intelligence Trust Platform for Next-Generation Wireless Networks arXiv 2022 Link
Adversarial Analysis of the Differentially-Private Federated Learning in Cyber-Physical Critical Infrastructures arXiv 2022 Link
BEAS: Blockchain Enabled Asynchronous & Secure Federated Machine Learning arXiv 2022 Link
Backdoor Attacks in Federated Learning by Rare Embeddings and Gradient Ensembling arXiv 2022 Link
Defense Strategies Toward Model Poisoning Attacks in Federated Learning: A Survey arXiv 2022 Link
FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients arXiv 2022 Link
FedCC: Robust Federated Learning against Model Poisoning Attacks arXiv 2022 Link
FedPerm: Private and Robust Federated Learning by Parameter Permutation arXiv 2022 Link
Federated Learning: Balancing the Thin Line Between Data Intelligence and Privacy arXiv 2022 Link
Latency Optimization for Blockchain-Empowered Federated Learning in Multi-Server Edge Computing arXiv 2022 Link
MPAF: Model Poisoning Attacks to Federated Learning based on Fake Clients arXiv 2022 Link
Membership Inference Attacks Against Semantic Segmentation Models arXiv 2022 Link
Performance Weighting for Robust Federated Learning Against Corrupted Sources arXiv 2022 Link
SPIN: Simulated Poisoning and Inversion Network for Federated Learning-Based 6G Vehicular Networks arXiv 2022 Link
Security Analysis of SplitFed Learning arXiv 2022 Link
Semi-Targeted Model Poisoning Attack on Federated Learning via Backward Error Analysis arXiv 2022 Link
Studying the Robustness of Anti-adversarial Federated Learning Models Detecting Cyberattacks in IoT Spectrum Sensors arXiv 2022 Link
Thinking Two Moves Ahead: Anticipating Other Users Improves Backdoor Attacks in Federated Learning arXiv 2022 Link
Towards Understanding Quality Challenges of the Federated Learning for Neural Networks: A First Look from the Lens of Robustness arXiv 2022 Link
Using Anomaly Detection to Detect Poisoning Attacks in Federated Learning Applications arXiv 2022 Link

2021

Title Venue Year Link
A Synergetic Attack against Neural Network Classifiers combining Backdoor and Adversarial Examples arXiv 2021 Link
ARFED: Attack-Resistant Federated averaging based on outlier elimination arXiv 2021 Link
Back to the Drawing Board: A Critical Evaluation of Poisoning Attacks on Production Federated Learning arXiv 2021 Link
Byzantine-Resilient Federated Machine Learning via Over-the-Air Computation arXiv 2021 Link
Byzantine-robust Federated Learning through Collaborative Malicious Gradient Filtering arXiv 2021 Link
Check Your Other Door! Creating Backdoor Attacks in the Frequency Domain arXiv 2021 Link
Covert Model Poisoning Against Federated Learning: Algorithm Design and Optimization arXiv 2021 Link
DFL: High-Performance Blockchain-Based Federated Learning arXiv 2021 Link
DeSMP: Differential Privacy-exploited Stealthy Model Poisoning Attacks in Federated Learning arXiv 2021 Link
FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective arXiv 2021 Link
FedCom: A Byzantine-Robust Local Model Aggregation Rule Using Data Commitment for Federated Learning arXiv 2021 Link
FedRAD: Federated Robust Adaptive Distillation arXiv 2021 Link
On the Security Risks of AutoML arXiv 2021 Link
PRECAD: Privacy-Preserving and Robust Federated Learning via Crypto-Aided Differential Privacy arXiv 2021 Link
PipAttack: Poisoning Federated Recommender Systems for Manipulating Item Promotion arXiv 2021 Link
Preventing Machine Learning Poisoning Attacks Using Authentication and Provenance arXiv 2021 Link
Robust Federated Learning with Attack-Adaptive Aggregation arXiv 2021 Link
SAFELearning: Enable Backdoor Detectability In Federated Learning With Secure Aggregation arXiv 2021 Link
SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification arXiv 2021 Link
TESSERACT: Gradient Flip Score to Secure Federated Learning Against Model Poisoning Attacks arXiv 2021 Link
Turning Federated Learning Systems Into Covert Channels arXiv 2021 Link
Untargeted Poisoning Attack Detection in Federated Learning via Behavior Attestation arXiv 2021 Link

2020

Title Venue Year Link
2CP: Decentralized Protocols to Transparently Evaluate Contributivity in Blockchain Federated Learning Environments arXiv 2020 Link
BaFFLe: Backdoor detection via Feedback-based Federated Learning arXiv 2020 Link
Ditto: Fair and Robust Federated Learning Through Personalization arXiv 2020 Link
Exact Support Recovery in Federated Regression with One-shot Communication arXiv 2020 Link
Learning to Detect Malicious Clients for Robust Federated Learning arXiv 2020 Link
Mitigating Sybil Attacks on Differential Privacy based Federated Learning arXiv 2020 Link
You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion arXiv 2020 Link

2019

Title Venue Year Link
Local Model Poisoning Attacks to Byzantine-Robust Federated Learning arXiv 2019 Link
Mixed Strategy Game Model Against Data Poisoning Attacks arXiv 2019 Link

2018

Title Venue Year Link
Analyzing Federated Learning through an Adversarial Lens arXiv 2018 Link
How To Backdoor Federated Learning arXiv 2018 Link
Mitigating Sybils in Federated Learning Poisoning arXiv 2018 Link
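
Many of the defense papers above replace plain averaging with a Byzantine-robust aggregation rule. As a closing illustration, here is a minimal sketch (illustrative only, not the method of any single listed paper) of coordinate-wise median aggregation, one of the standard robust alternatives to FedAvg:

```python
# Illustrative sketch: coordinate-wise median aggregation. Taking the
# median of each coordinate bounds the influence of a minority of
# arbitrarily poisoned client updates, unlike the mean.
import numpy as np

def median_aggregate(updates):
    """Aggregate client updates via the per-coordinate median."""
    return np.median(np.stack(updates), axis=0)

honest = [np.array([0.1, -0.2, 0.05]) for _ in range(4)]  # 4 benign clients
poisoned = [np.array([100.0, 100.0, 100.0])]              # 1 attacker

agg = median_aggregate(honest + poisoned)
# With 4 honest clients and 1 attacker, the per-coordinate median
# coincides with the honest value: the amplified update is neutralized.
```

More refined rules in the literature (e.g., trimmed mean or Krum-style selection) follow the same principle of discarding or down-weighting outlying updates before aggregation.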