Model Poisoning Attack
Table of Contents
NeurIPS
ICML
ICLR
NDSS
CVPR
IJCAI
AAAI
AISTATS
KDD
CCS
ACL
NAACL-HLT
EMNLP
SIGIR
ICDE
WWW
SP
USENIX Security Symposium
ACM Multimedia
IEEE Trans. Neural Networks Learn. Syst.
IEEE Trans. Artif. Intell.
IEEE Trans. Knowl. Data Eng.
IEEE Trans. Emerg. Top. Comput.
IEEE Trans. Inf. Forensics Secur.
IEEE Trans. Comput. Aided Des. Integr. Circuits Syst.
IEEE Trans. Computers
Nat. Mac. Intell.
Inf. Sci.
Expert Syst. Appl.
Neural Networks
NeurIPS
2024
| Title | Venue | Year | Link |
|---|---|---|---|
| From Trojan Horses to Castle Walls: Unveiling Bilateral Data Poisoning Effects in Diffusion Models. | NeurIPS | 2024 | Link |
| Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models. | NeurIPS | 2024 | Link |
| Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models. | NeurIPS | 2024 | Link |
2023
| Title | Venue | Year | Link |
|---|---|---|---|
| RECESS Vaccine for Federated Learning: Proactive Defense Against Model Poisoning Attacks. | NeurIPS | 2023 | Link |
2021
| Title | Venue | Year | Link |
|---|---|---|---|
| FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective. | NeurIPS | 2021 | Link |
ICML
2025
| Title | Venue | Year | Link |
|---|---|---|---|
| PoisonedEye: Knowledge Poisoning Attack on Retrieval-Augmented Generation based Large Vision-Language Models. | ICML | 2025 | Link |
2024
| Title | Venue | Year | Link |
|---|---|---|---|
| FedREDefense: Defending against Model Poisoning Attacks for Federated Learning using Model Update Reconstruction Error. | ICML | 2024 | Link |
| The Stronger the Diffusion Model, the Easier the Backdoor: Data Poisoning to Induce Copyright Breaches Without Adjusting Finetuning Pipeline. | ICML | 2024 | Link |
2023
| Title | Venue | Year | Link |
|---|---|---|---|
| Exploring Model Dynamics for Accumulative Poisoning Discovery. | ICML | 2023 | Link |
| Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks. | ICML | 2023 | Link |
| LeadFL: Client Self-Defense against Model Poisoning in Federated Learning. | ICML | 2023 | Link |
| Poisoning Language Models During Instruction Tuning. | ICML | 2023 | Link |
2021
| Title | Venue | Year | Link |
|---|---|---|---|
| Model-Targeted Poisoning Attacks with Provable Convergence. | ICML | 2021 | Link |
ICLR
2025
| Title | Venue | Year | Link |
|---|---|---|---|
| Concept-ROT: Poisoning Concepts in Large Language Models with Model Editing. | ICLR | 2025 | Link |
2020
| Title | Venue | Year | Link |
|---|---|---|---|
| Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks. | ICLR | 2020 | Link |
NDSS
2021
| Title | Venue | Year | Link |
|---|---|---|---|
| Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning. | NDSS | 2021 | Link |
CVPR
2025
| Title | Venue | Year | Link |
|---|---|---|---|
| Model Poisoning Attacks to Federated Learning via Multi-Round Consistency. | CVPR | 2025 | Link |
| Silent Branding Attack: Trigger-free Data Poisoning Attack on Text-to-Image Diffusion Models. | CVPR | 2025 | Link |
2024
| Title | Venue | Year | Link |
|---|---|---|---|
| Semantic Shield: Defending Vision-Language Models Against Backdooring and Poisoning via Fine-Grained Knowledge Alignment. | CVPR | 2024 | Link |
IJCAI
2024
| Title | Venue | Year | Link |
|---|---|---|---|
| EAB-FL: Exacerbating Algorithmic Bias through Model Poisoning Attacks in Federated Learning. | IJCAI | 2024 | Link |
2023
| Title | Venue | Year | Link |
|---|---|---|---|
| Denial-of-Service or Fine-Grained Control: Towards Flexible Model Poisoning Attacks on Federated Learning. | IJCAI | 2023 | Link |
2022
| Title | Venue | Year | Link |
|---|---|---|---|
| Poisoning Deep Learning Based Recommender Model in Federated Learning Scenarios. | IJCAI | 2022 | Link |
AAAI
2023
| Title | Venue | Year | Link |
|---|---|---|---|
| DeFL: Defending against Model Poisoning Attacks in Federated Learning via Critical Learning Periods Awareness. | AAAI | 2023 | Link |
2016
| Title | Venue | Year | Link |
|---|---|---|---|
| Data Poisoning Attacks against Autoregressive Models. | AAAI | 2016 | Link |
AISTATS
2022
| Title | Venue | Year | Link |
|---|---|---|---|
| SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification. | AISTATS | 2022 | Link |
KDD
2024
| Title | Venue | Year | Link |
|---|---|---|---|
| FedRoLA: Robust Federated Learning Against Model Poisoning via Layer-based Aggregation. | KDD | 2024 | Link |
2022
| Title | Venue | Year | Link |
|---|---|---|---|
| FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients. | KDD | 2022 | Link |
2021
| Title | Venue | Year | Link |
|---|---|---|---|
| Data Poisoning Attacks Against Outcome Interpretations of Predictive Models. | KDD | 2021 | Link |
2015
| Title | Venue | Year | Link |
|---|---|---|---|
| Predictive Modeling for Public Health: Preventing Childhood Lead Poisoning. | KDD | 2015 | Link |
CCS
2025
| Title | Venue | Year | Link |
|---|---|---|---|
| On the Feasibility of Poisoning Text-to-Image AI Models via Adversarial Mislabeling. | CCS | 2025 | Link |
2022
| Title | Venue | Year | Link |
|---|---|---|---|
| Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets. | CCS | 2022 | Link |
ACL
2024
| Title | Venue | Year | Link |
|---|---|---|---|
| RLHFPoison: Reward Poisoning Attack for Reinforcement Learning with Human Feedback in Large Language Models. | ACL | 2024 | Link |
2020
| Title | Venue | Year | Link |
|---|---|---|---|
| Weight Poisoning Attacks on Pretrained Models. | ACL | 2020 | Link |
NAACL-HLT
2021
| Title | Venue | Year | Link |
|---|---|---|---|
| Concealed Data Poisoning Attacks on NLP Models. | NAACL-HLT | 2021 | Link |
EMNLP
2021
| Title | Venue | Year | Link |
|---|---|---|---|
| Backdoor Attacks on Pre-trained Models by Layerwise Weight Poisoning. | EMNLP | 2021 | Link |
SIGIR
2024
| Title | Venue | Year | Link |
|---|---|---|---|
| Revisit Targeted Model Poisoning on Federated Recommendation: Optimize via Multi-objective Transport. | SIGIR | 2024 | Link |
ICDE
2022
| Title | Venue | Year | Link |
|---|---|---|---|
| FedRecAttack: Model Poisoning Attack to Federated Recommendation. | ICDE | 2022 | Link |
WWW
2025
| Title | Venue | Year | Link |
|---|---|---|---|
| Model Supply Chain Poisoning: Backdooring Pre-trained Models via Embedding Indistinguishability. | WWW | 2025 | Link |
2023
| Title | Venue | Year | Link |
|---|---|---|---|
| MaSS: Model-agnostic, Semantic and Stealthy Data Poisoning Attack on Knowledge Graph Embedding. | WWW | 2023 | Link |
SP
2025
| Title | Venue | Year | Link |
|---|---|---|---|
| Preference Poisoning Attacks on Reward Model Learning. | SP | 2025 | Link |
2024
| Title | Venue | Year | Link |
|---|---|---|---|
| Nightshade: Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models. | SP | 2024 | Link |
| Test-Time Poisoning Attacks Against Test-Time Adaptation Models. | SP | 2024 | Link |
| TrojanPuzzle: Covertly Poisoning Code-Suggestion Models. | SP | 2024 | Link |
2023
| Title | Venue | Year | Link |
|---|---|---|---|
| Benchmarking the Effect of Poisoning Defenses on the Security and Bias of Deep Learning Models. | SP | 2023 | Link |
USENIX Security Symposium
2024
| Title | Venue | Year | Link |
|---|---|---|---|
| ACE: A Model Poisoning Attack on Contribution Evaluation Methods in Federated Learning. | USENIX Security Symposium | 2024 | Link |
2020
| Title | Venue | Year | Link |
|---|---|---|---|
| Local Model Poisoning Attacks to Byzantine-Robust Federated Learning. | USENIX Security Symposium | 2020 | Link |
ACM Multimedia
2023
| Title | Venue | Year | Link |
|---|---|---|---|
| Text-to-Image Diffusion Models can be Easily Backdoored through Multimodal Data Poisoning. | ACM Multimedia | 2023 | Link |
IEEE Trans. Neural Networks Learn. Syst.
2025
| Title | Venue | Year | Link |
|---|---|---|---|
| Defending Against Neural Network Model Inversion Attacks via Data Poisoning. | IEEE Trans. Neural Networks Learn. Syst. | 2025 | Link |
| Leverage Variational Graph Representation for Model Poisoning on Federated Learning. | IEEE Trans. Neural Networks Learn. Syst. | 2025 | Link |
| Secure and Efficient Federated Learning Against Model Poisoning Attacks in Horizontal and Vertical Data Partitioning. | IEEE Trans. Neural Networks Learn. Syst. | 2025 | Link |
IEEE Trans. Artif. Intell.
2024
| Title | Venue | Year | Link |
|---|---|---|---|
| Attacking-Distance-Aware Attack: Semi-targeted Model Poisoning on Federated Learning. | IEEE Trans. Artif. Intell. | 2024 | Link |
IEEE Trans. Knowl. Data Eng.
2025
| Title | Venue | Year | Link |
|---|---|---|---|
| HidAttack: An Effective and Undetectable Model Poisoning Attack to Federated Recommenders. | IEEE Trans. Knowl. Data Eng. | 2025 | Link |
2024
| Title | Venue | Year | Link |
|---|---|---|---|
| BIC-Based Mixture Model Defense Against Data Poisoning Attacks on Classifiers: A Comprehensive Study. | IEEE Trans. Knowl. Data Eng. | 2024 | Link |
IEEE Trans. Emerg. Top. Comput.
2024
| Title | Venue | Year | Link |
|---|---|---|---|
| Blockchain-Based Federated Learning With SMPC Model Verification Against Poisoning Attack for Healthcare Systems. | IEEE Trans. Emerg. Top. Comput. | 2024 | Link |
IEEE Trans. Inf. Forensics Secur.
2025
| Title | Venue | Year | Link |
|---|---|---|---|
| DamPa: Dynamic Adaptive Model Poisoning Attack in Federated Learning. | IEEE Trans. Inf. Forensics Secur. | 2025 | Link |
| Enhanced Model Poisoning Attack and Multi-Strategy Defense in Federated Learning. | IEEE Trans. Inf. Forensics Secur. | 2025 | Link |
| FLGuardian: Defending Against Model Poisoning Attacks via Fine-Grained Detection in Federated Learning. | IEEE Trans. Inf. Forensics Secur. | 2025 | Link |
| FedGhost: Data-Free Model Poisoning Enhancement in Federated Learning. | IEEE Trans. Inf. Forensics Secur. | 2025 | Link |
| Maximizing Uncertainty for Federated Learning via Bayesian Optimization-Based Model Poisoning. | IEEE Trans. Inf. Forensics Secur. | 2025 | Link |
| PoisonPatch: Natural Adversarial Patches via Diffusion Models and Federated Learning Poisoning. | IEEE Trans. Inf. Forensics Secur. | 2025 | Link |
| Securing Federated Learning Against Extreme Model Poisoning Attacks via Multidimensional Time Series Anomaly Detection on Local Updates. | IEEE Trans. Inf. Forensics Secur. | 2025 | Link |
| The Gradient Puppeteer: Adversarial Domination in Gradient Leakage Attacks Through Model Poisoning. | IEEE Trans. Inf. Forensics Secur. | 2025 | Link |
2024
| Title | Venue | Year | Link |
|---|---|---|---|
| A Robust Privacy-Preserving Federated Learning Model Against Model Poisoning Attacks. | IEEE Trans. Inf. Forensics Secur. | 2024 | Link |
| Data-Agnostic Model Poisoning Against Federated Learning: A Graph Autoencoder Approach. | IEEE Trans. Inf. Forensics Secur. | 2024 | Link |
| MODEL: A Model Poisoning Defense Framework for Federated Learning via Truth Discovery. | IEEE Trans. Inf. Forensics Secur. | 2024 | Link |
| Secure Model Aggregation Against Poisoning Attacks for Cross-Silo Federated Learning With Robustness and Fairness. | IEEE Trans. Inf. Forensics Secur. | 2024 | Link |
2023
| Title | Venue | Year | Link |
|---|---|---|---|
| A Manifold Consistency Interpolation Method of Poisoning Attacks Against Semi-Supervised Model. | IEEE Trans. Inf. Forensics Secur. | 2023 | Link |
| Categorical Inference Poisoning: Verifiable Defense Against Black-Box DNN Model Stealing Without Constraining Surrogate Data and Query Times. | IEEE Trans. Inf. Forensics Secur. | 2023 | Link |
2022
| Title | Venue | Year | Link |
|---|---|---|---|
| ShieldFL: Mitigating Model Poisoning Attacks in Privacy-Preserving Federated Learning. | IEEE Trans. Inf. Forensics Secur. | 2022 | Link |
2021
| Title | Venue | Year | Link |
|---|---|---|---|
| With Great Dispersion Comes Greater Resilience: Efficient Poisoning Attacks and Defenses for Linear Regression Models. | IEEE Trans. Inf. Forensics Secur. | 2021 | Link |
IEEE Trans. Comput. Aided Des. Integr. Circuits Syst.
2022
| Title | Venue | Year | Link |
|---|---|---|---|
| Enhancing Reliability and Security: A Configurable Poisoning PUF Against Modeling Attacks. | IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. | 2022 | Link |
IEEE Trans. Computers
2025
| Title | Venue | Year | Link |
|---|---|---|---|
| BaDFL: Mitigating Model Poisoning in Decentralized Federated Learning. | IEEE Trans. Computers | 2025 | Link |
2023
| Title | Venue | Year | Link |
|---|---|---|---|
| Model Poisoning Attack on Neural Network Without Reference Data. | IEEE Trans. Computers | 2023 | Link |
Nat. Mac. Intell.
2024
| Title | Venue | Year | Link |
|---|---|---|---|
| Poisoning medical knowledge using large language models. | Nat. Mac. Intell. | 2024 | Link |
Inf. Sci.
2025
| Title | Venue | Year | Link |
|---|---|---|---|
| TPFL: Privacy-preserving personalized federated learning mitigates model poisoning attacks. | Inf. Sci. | 2025 | Link |
2023
| Title | Venue | Year | Link |
|---|---|---|---|
| Compromise privacy in large-batch Federated Learning via model poisoning. | Inf. Sci. | 2023 | Link |
| Model poisoning attack in differential privacy-based federated learning. | Inf. Sci. | 2023 | Link |
Expert Syst. Appl.
2023
| Title | Venue | Year | Link |
|---|---|---|---|
| Bandit-based data poisoning attack against federated learning for autonomous driving models. | Expert Syst. Appl. | 2023 | Link |
Neural Networks
2024
| Title | Venue | Year | Link |
|---|---|---|---|
| Learning a robust foundation model against clean-label data poisoning attacks at downstream tasks. | Neural Networks | 2024 | Link |