Inversion Attacks

Table of Contents
IJCAI
AAAI
AISTATS
NeurIPS
ICML
ICLR
UAI
IEEE Trans. Pattern Anal. Mach. Intell.
KDD
SP
CCS
USENIX Security Symposium
NDSS
CVPR
ICCV
ECCV
ACM Multimedia
ACL
EMNLP
COLING
SIGIR
WWW
DAC
IEEE Trans. Computers
WACV
IEEE Trans. Comput. Aided Des. Integr. Circuits Syst.
IEEE Trans. Neural Networks Learn. Syst.
IEEE Trans. Big Data
IEEE Trans. Medical Imaging
IEEE Trans. Inf. Forensics Secur.
ACM Trans. Priv. Secur.
IEEE J. Biomed. Health Informatics
Medical Image Anal.
Knowl. Based Syst.
Neurocomputing
IEEE Trans. Knowl. Data Eng.
IEEE Trans. Syst. Man Cybern. Syst.
IEEE Trans. Emerg. Top. Comput.
Neural Networks
J. Artif. Intell. Res.
IEEE Trans. Image Process.
IEEE Trans. Neural Networks
IEEE Trans. Intell. Transp. Syst.
Expert Syst. Appl.
IEEE ACM Trans. Audio Speech Lang. Process.
IEEE Trans. Speech Audio Process.
Neural Comput. Appl.
ICDE
MobiCom
IEEE Symposium on Security and Privacy
CollSec
Proc. ACM Manag. Data
ICSE Companion
CIKM
ACM Trans. Intell. Syst. Technol.
ACM Trans. Knowl. Discov. Data
Proc. VLDB Endow.
IEEE Trans. Parallel Distributed Syst.
Pattern Recognit.
Inf. Sci.
IEEE Trans. Signal Process.
IEEE Trans. Cybern.
Pattern Recognit. Lett.
ACM Trans. Inf. Syst. Secur.
INFOCOM
Mach. Learn.
Int. J. Comput. Vis.
Comput. Vis. Image Underst.

IJCAI

2025

Title Venue Year Link
MMGIA: Gradient Inversion Attack Against Multimodal Federated Learning via Intermodal Correlation. IJCAI 2025 Link

2023

Title Venue Year Link
Boosting Decision-Based Black-Box Adversarial Attack with Gradient Priors. IJCAI 2023 Link

2022

Title Venue Year Link
A Survey on Gradient Inversion: Attacks, Defenses and Future Directions. IJCAI 2022 Link
Measuring Data Leakage in Machine-Learning Models with Fisher Information (Extended Abstract). IJCAI 2022 Link

2021

Title Venue Year Link
InverseNet: Augmenting Model Extraction Attacks with Training Data Inversion. IJCAI 2021 Link

2015

Title Venue Year Link
Regression Model Fitting under Differential Privacy and Model Inversion Attack. IJCAI 2015 Link

1989

Title Venue Year Link
A "Small Leakage" Model for Diffusion Smoothing of Image Data. IJCAI 1989 Link

AAAI

2025

Title Venue Year Link
A New Federated Learning Framework Against Gradient Inversion Attacks. AAAI 2025 Link
A Sample-Level Evaluation and Generative Framework for Model Inversion Attacks. AAAI 2025 Link
Against All Odds: Overcoming Typology, Script, and Language Confusion in Multilingual Embedding Inversion Attacks. AAAI 2025 Link
AttackBench: Evaluating Gradient-based Attacks for Adversarial Examples. AAAI 2025 Link

2024

Title Venue Year Link
Compositional Inversion for Stable Diffusion Models. AAAI 2024 Link
DreamStyler: Paint by Style Inversion with Text-to-Image Diffusion Models. AAAI 2024 Link
Foreseeing Reconstruction Quality of Gradient Inversion: An Optimization Perspective. AAAI 2024 Link
High-Fidelity Gradient Inversion in Distributed Learning. AAAI 2024 Link
IPRemover: A Generative Model Inversion Attack against Deep Neural Network Fingerprinting and Watermarking. AAAI 2024 Link
Music Style Transfer with Time-Varying Inversion of Diffusion Models. AAAI 2024 Link
Revisiting Gradient Pruning: A Dual Realization for Defending against Gradient Attacks. AAAI 2024 Link

2023

Title Venue Year Link
Let Graph Be the Go Board: Gradient-Free Node Injection Attack for Graph Neural Networks via Reinforcement Learning. AAAI 2023 Link
MGIA: Mutual Gradient Inversion Attack in Multi-Modal Federated Learning (Student Abstract). AAAI 2023 Link
Pseudo Label-Guided Model Inversion Attack via Conditional Generative Adversarial Network. AAAI 2023 Link

2022

Title Venue Year Link
Improved Gradient-Based Adversarial Attacks for Quantized Networks. AAAI 2022 Link

2021

Title Venue Year Link
Adversarial Training with Fast Gradient Projection Method against Synonym Substitution Based Text Attacks. AAAI 2021 Link
Beating Attackers At Their Own Games: Adversarial Example Detection Using Adversarial Gradient Directions. AAAI 2021 Link
Improving Robustness to Model Inversion Attacks via Mutual Information Regularization. AAAI 2021 Link

2020

Title Venue Year Link
A New Ensemble Adversarial Attack Powered by Long-Term Gradient Memories. AAAI 2020 Link

2013

Title Venue Year Link
Gradient Networks: Explicit Shape Matching Without Extracting Edges. AAAI 2013 Link

AISTATS

2025

Title Venue Year Link
MEDUSA: Medical Data Under Shadow Attacks via Hybrid Model Inversion. AISTATS 2025 Link
Signal Recovery from Random Dot-Product Graphs under Local Differential Privacy. AISTATS 2025 Link

2021

Title Venue Year Link
Nonlinear Projection Based Gradient Estimation for Query Efficient Blackbox Attacks. AISTATS 2021 Link

NeurIPS

2024

Title Venue Year Link
BELM: Bidirectional Explicit Linear Multi-step Sampler for Exact Inversion in Diffusion Models. NeurIPS 2024 Link
DAGER: Exact Gradient Inversion for Large Language Models. NeurIPS 2024 Link
Gradient Cuff: Detecting Jailbreak Attacks on Large Language Models by Exploring Refusal Loss Landscapes. NeurIPS 2024 Link
Gradient-free Decoder Inversion in Latent Diffusion Models. NeurIPS 2024 Link
Pseudo-Private Data Guided Model Inversion Attacks. NeurIPS 2024 Link
ReMAP: Neural Model Reprogramming with Network Inversion and Retrieval-Augmented Mapping for Adaptive Motion Forecasting. NeurIPS 2024 Link
Reimagining Mutual Information for Enhanced Defense against Data Leakage in Collaborative Inference. NeurIPS 2024 Link
SPEAR: Exact Gradient Inversion of Batches in Federated Learning. NeurIPS 2024 Link
Trap-MID: Trapdoor-based Defense against Model Inversion Attacks. NeurIPS 2024 Link

2023

Title Venue Year Link
Label-Only Model Inversion Attacks via Knowledge Transfer. NeurIPS 2023 Link
Understanding Deep Gradient Leakage via Inversion Influence Functions. NeurIPS 2023 Link

2022

Title Venue Year Link
LAMP: Extracting Text from Gradients with Language Model Priors. NeurIPS 2022 Link
Learning to Generate Inversion-Resistant Model Explanations. NeurIPS 2022 Link
Recovering Private Text in Federated Learning of Language Models. NeurIPS 2022 Link
Towards Reasonable Budget Allocation in Untargeted Graph Structure Attacks via Gradient Debias. NeurIPS 2022 Link

2021

Title Venue Year Link
Catastrophic Data Leakage in Vertical Federated Learning. NeurIPS 2021 Link
Designing Counterfactual Generators using Deep Model Inversion. NeurIPS 2021 Link
Evaluating Gradient Inversion Attacks and Defenses in Federated Learning. NeurIPS 2021 Link
Gradient Inversion with Generative Image Prior. NeurIPS 2021 Link
Variational Model Inversion Attacks. NeurIPS 2021 Link

2020

Title Venue Year Link
Model Inversion Networks for Model-Based Optimization. NeurIPS 2020 Link
Robustness of Bayesian Neural Networks to Gradient-Based Attacks. NeurIPS 2020 Link

2018

Title Venue Year Link
Faithful Inversion of Generative Models for Effective Amortized Inference. NeurIPS 2018 Link

ICML

2025

Title Venue Year Link
Gradient Inversion of Multimodal Models. ICML 2025 Link
How Contaminated Is Your Benchmark? Measuring Dataset Leakage in Large Language Models with Kernel Divergence. ICML 2025 Link
Smoothed Preference Optimization via ReNoise Inversion for Aligning Diffusion Models with Varied Human Preferences. ICML 2025 Link

2024

Title Venue Year Link
Differentially private exact recovery for stochastic block models. ICML 2024 Link
Referee Can Play: An Alternative Approach to Conditional Generation via Model Inversion. ICML 2024 Link
SignSGD with Federated Defense: Harnessing Adversarial Attacks through Gradient Sign Decoding. ICML 2024 Link
Single-Model Attribution of Generative Models Through Final-Layer Inversion. ICML 2024 Link
Sparse Model Inversion: Efficient Inversion of Vision Transformers for Data-Free Applications. ICML 2024 Link

2023

Title Venue Year Link
TabLeak: Tabular Data Leakage in Federated Learning. ICML 2023 Link

2022

Title Venue Year Link
An Equivalence Between Data Poisoning and Byzantine Gradient Attacks. ICML 2022 Link
Constrained Gradient Descent: A Powerful and Principled Evasion Attack Against Neural Networks. ICML 2022 Link
Diversified Adversarial Attacks based on Conjugate Gradient Method. ICML 2022 Link
Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks. ICML 2022 Link
Plug-In Inversion: Model-Agnostic Inversion for Vision with Data Augmentations. ICML 2022 Link

2021

Title Venue Year Link
Progressive-Scale Boundary Blackbox Attack via Projective Gradient Estimation. ICML 2021 Link

2020

Title Venue Year Link
Min-Max Optimization without Gradients: Convergence and Applications to Black-Box Evasion and Poisoning Attacks. ICML 2020 Link

ICLR

2025

Title Venue Year Link
Activation Gradient based Poisoned Sample Detection Against Backdoor Attacks. ICLR 2025 Link
ConcreTizer: Model Inversion Attack via Occupancy Classification and Dispersion Control for 3D Point Cloud Restoration. ICLR 2025 Link
Lightning-Fast Image Inversion and Editing for Text-to-Image Diffusion Models. ICLR 2025 Link
REFINE: Inversion-Free Backdoor Defense via Model Reprogramming. ICLR 2025 Link
Stealthy Shield Defense: A Conditional Mutual Information-Based Approach against Black-Box Model Inversion Attacks. ICLR 2025 Link
Visually Guided Decoding: Gradient-Free Hard Prompt Inversion with Language Models. ICLR 2025 Link

2024

Title Venue Year Link
Be Careful What You Smooth For: Label Smoothing Can Be a Privacy Shield but Also a Catalyst for Model Inversion Attacks. ICLR 2024 Link
Enhancing Transferable Adversarial Attacks on Vision Transformers through Gradient Normalization Scaling and High-Frequency Adaptation. ICLR 2024 Link
Language Model Inversion. ICLR 2024 Link
Towards Eliminating Hard Label Constraints in Gradient Inversion Attacks. ICLR 2024 Link

2022

Title Venue Year Link
Transferable Adversarial Attack based on Integrated Gradients. ICLR 2022 Link

2021

Title Venue Year Link
R-GAP: Recursive Gradient Attack on Privacy. ICLR 2021 Link

2020

Title Venue Year Link
Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks. ICLR 2020 Link

2018

Title Venue Year Link
Black-box Attacks on Deep Neural Networks via Gradient Estimation. ICLR 2018 Link

2013

Title Venue Year Link
Gradient Driven Learning for Pooling in Visual Pipeline Feature Extraction Models. ICLR 2013 Link

UAI

2023

Title Venue Year Link
Learning To Invert: Simple Adaptive Attacks for Gradient Inversion in Federated Learning. UAI 2023 Link

2021

Title Venue Year Link
Measuring data leakage in machine-learning models with Fisher information. UAI 2021 Link

IEEE Trans. Pattern Anal. Mach. Intell.

2025

Title Venue Year Link
Unknown-Aware Bilateral Dependency Optimization for Defending Against Model Inversion Attacks. IEEE Trans. Pattern Anal. Mach. Intell. 2025 Link

2024

Title Venue Year Link
Gradient Inversion Attacks: Impact Factors Analyses and Privacy Enhancement. IEEE Trans. Pattern Anal. Mach. Intell. 2024 Link

2023

Title Venue Year Link
Comprehensive Vulnerability Evaluation of Face Recognition Systems to Template Inversion Attacks via 3D Face Reconstruction. IEEE Trans. Pattern Anal. Mach. Intell. 2023 Link

2007

Title Venue Year Link
Normalization-Cooperated Gradient Feature Extraction for Handwritten Character Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2007 Link

1979

Title Venue Year Link
Image Feature Extraction Using Diameter-Limited Gradient Direction Histograms. IEEE Trans. Pattern Anal. Mach. Intell. 1979 Link

KDD

2025

Title Venue Year Link
Prompt as a Double-Edged Sword: A Dynamic Equilibrium Gradient-Assigned Attack against Graph Prompt Learning. KDD 2025 Link

2022

Title Venue Year Link
Bilateral Dependency Optimization: Defending Against Model-inversion Attacks. KDD 2022 Link
LeapAttack: Hard-Label Adversarial Attack on Text via Gradient-Based Optimization. KDD 2022 Link

2011

Title Venue Year Link
Leakage in data mining: formulation, detection, and avoidance. KDD 2011 Link

SP

2025

Title Venue Year Link
Is MPC Secure? Leveraging Neural Network Classifiers to Detect Data Leakage Vulnerabilities in MPC Implementations. SP 2025 Link
Prompt Inversion Attack Against Collaborative Inference of Large Language Models. SP 2025 Link

2024

Title Venue Year Link
Architectural Mimicry: Innovative Instructions to Efficiently Address Control-Flow Leakage in Data-Oblivious Programs. SP 2024 Link
Learn What You Want to Unlearn: Unlearning Inversion Attacks against Machine Unlearning. SP 2024 Link

2022

Title Venue Year Link
LINKTELLER: Recovering Private Edges from Graph Neural Networks via Influence Analysis. SP 2022 Link
Mitigating Information Leakage Vulnerabilities with Type-based Data Isolation. SP 2022 Link

CCS

2025

Title Venue Year Link
IOValve: Leakage-Free I/O Sandbox for Large-Scale Untrusted Data Processing. CCS 2025 Link

2024

Title Venue Year Link
Uncovering Gradient Inversion Risks in Practical Language Model Training. CCS 2024 Link

2021

Title Venue Year Link
LEAP: Leakage-Abuse Attack on Efficiently Deployable, Efficiently Searchable Encryption with Partially Known Dataset. CCS 2021 Link

2019

Title Venue Year Link
Mitigating Leakage in Secure Cloud-Hosted Data Structures: Volume-Hiding for Multi-Maps via Hashing. CCS 2019 Link
Poster: Attacking Malware Classifiers by Crafting Gradient-Attacks that Preserve Functionality. CCS 2019 Link

2018

Title Venue Year Link
Pump up the Volume: Practical Database Reconstruction from Volume Leakage on Range Queries. CCS 2018 Link
Unveiling Hardware-based Data Prefetcher, a Hidden Source of Information Leakage. CCS 2018 Link

2016

Title Venue Year Link
UniSan: Proactive Kernel Memory Initialization to Eliminate Data Leakages. CCS 2016 Link

2015

Title Venue Year Link
Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures. CCS 2015 Link

2013

Title Venue Year Link
AppIntent: analyzing sensitive data transmission in android for privacy leakage detection. CCS 2013 Link

2000

Title Venue Year Link
Threshold-based identity recovery for privacy enhanced applications. CCS 2000 Link

USENIX Security Symposium

2025

Title Venue Year Link
Boosting Gradient Leakage Attacks: Data Reconstruction in Realistic FL Settings. USENIX Security Symposium 2025 Link
Cross-Modal Prompt Inversion: Unifying Threats to Text and Image Generative AI Models. USENIX Security Symposium 2025 Link
Refiner: Data Refining against Gradient Leakage Attacks in Federated Learning. USENIX Security Symposium 2025 Link
SoK: Gradient Inversion Attacks in Federated Learning. USENIX Security Symposium 2025 Link
SpeechGuard: Recoverable and Customizable Speech Privacy Protection. USENIX Security Symposium 2025 Link

2024

Title Venue Year Link
FaceObfuscator: Defending Deep Learning-based Privacy Attacks with Gradient Descent-resistant Features in Face Recognition. USENIX Security Symposium 2024 Link
Go Go Gadget Hammer: Flipping Nested Pointers for Arbitrary Data Leakage. USENIX Security Symposium 2024 Link
Length Leakage in Oblivious Data Access Mechanisms. USENIX Security Symposium 2024 Link
Secure Account Recovery for a Privacy-Preserving Web Service. USENIX Security Symposium 2024 Link
d-DSE: Distinct Dynamic Searchable Encryption Resisting Volume Leakage in Encrypted Databases. USENIX Security Symposium 2024 Link

2022

Title Venue Year Link
Are Your Sensitive Attributes Private? Novel Model Inversion Attribute Inference Attacks on Classification Models. USENIX Security Symposium 2022 Link

2021

Title Venue Year Link
Leakage of Dataset Properties in Multi-Party Machine Learning. USENIX Security Symposium 2021 Link

2020

Title Venue Year Link
Medusa: Microarchitectural Data Leakage via Automated Attack Synthesis. USENIX Security Symposium 2020 Link
SEAL: Attack Mitigation for Encrypted Databases via Adjustable Leakage. USENIX Security Symposium 2020 Link

NDSS

2025

Title Venue Year Link
CENSOR: Defense Against Gradient Inversion via Orthogonal Subspace Bayesian Sampling. NDSS 2025 Link
LeakLess: Selective Data Protection against Memory Leakage Attacks for Serverless Platforms. NDSS 2025 Link
Scale-MIA: A Scalable Model Inversion Attack against Secure Federated Learning via Latent Space Reconstruction. NDSS 2025 Link

2024

Title Venue Year Link
Crafter: Facial Feature Crafting against Inversion-based Identity Theft on Deep Models. NDSS 2024 Link
Gradient Shaping: Enhancing Backdoor Attack Against Reverse Engineering. NDSS 2024 Link

2023

Title Venue Year Link
Focusing on Pinocchio's Nose: A Gradients Scrutinizer to Thwart Split-Learning Hijacking Attacks Using Intrinsic Attributes. NDSS 2023 Link

2022

Title Venue Year Link
MIRROR: Model Inversion for Deep Learning Network with High Fidelity. NDSS 2022 Link

2019

Title Venue Year Link
Geo-locating Drivers: A Study of Sensitive Data Leakage in Ride-Hailing Services. NDSS 2019 Link

2015

Title Venue Year Link
Checking More and Alerting Less: Detecting Privacy Leakages via Enhanced Data-flow Analysis and Peer Voting. NDSS 2015 Link

2013

Title Venue Year Link
OIRS: Outsourced Image Recovery Service From Comprehensive Sensing With Privacy Assurance. NDSS 2013 Link

CVPR

2025

Title Venue Year Link
From Head to Tail: Efficient Black-box Model Inversion Attack via Long-tailed Learning. CVPR 2025 Link
Gradient Inversion Attacks on Parameter-Efficient Fine-Tuning. CVPR 2025 Link
InPO: Inversion Preference Optimization with Reparametrized DDIM for Efficient Diffusion Model Alignment. CVPR 2025 Link
Theoretical Insights in Model Inversion Robustness and Conditional Entropy Maximization for Collaborative Inference Systems. CVPR 2025 Link
Towards Effective and Sparse Adversarial Attack on Spiking Neural Networks via Breaking Invisible Surrogate Gradients. CVPR 2025 Link

2024

Title Venue Year Link
CGI-DM: Digital Copyright Authentication for Diffusion Models via Contrasting Gradient Inversion. CVPR 2024 Link
Defense Against Adversarial Attacks on No-Reference Image Quality Models with Gradient Norm Regularization. CVPR 2024 Link
Dual-Consistency Model Inversion for Non-Exemplar Class Incremental Learning. CVPR 2024 Link
Inversion-Free Image Editing with Language-Guided Diffusion Models. CVPR 2024 Link
Localization is All You Evaluate: Data Leakage in Online Mapping Datasets and How to Fix it. CVPR 2024 Link
Model Inversion Robustness: Can Transfer Learning Help? CVPR 2024 Link
Prompting Hard or Hardly Prompting: Prompt Inversion for Text-to-Image Diffusion Models. CVPR 2024 Link

2023

Title Venue Year Link
Breaching FedMD: Image Recovery via Paired-Logits Inversion Attack. CVPR 2023 Link
Efficient Loss Function by Minimizing the Detrimental Effect of Floating-Point Errors on Gradient-Based Attacks. CVPR 2023 Link
Inversion-based Style Transfer with Diffusion Models. CVPR 2023 Link
Null-text Inversion for Editing Real Images using Guided Diffusion Models. CVPR 2023 Link
Privacy-Preserving Representations are not Enough: Recovering Scene Content from Camera Poses. CVPR 2023 Link
Rate Gradient Approximation Attack Threats Deep Spiking Neural Networks. CVPR 2023 Link
Re-Thinking Model Inversion Attacks Against Deep Neural Networks. CVPR 2023 Link
Reinforcement Learning-Based Black-Box Model Inversion Attacks. CVPR 2023 Link
Transferable Adversarial Attacks on Vision Transformers with Token Gradient Regularization. CVPR 2023 Link

2022

Title Venue Year Link
GradViT: Gradient Inversion of Vision Transformers. CVPR 2022 Link
Label-Only Model Inversion Attacks via Boundary Repulsion. CVPR 2022 Link
ResSFL: A Resistance Transfer Framework for Defending Model Inversion Attack in Split Federated Learning. CVPR 2022 Link
Robust Structured Declarative Classifiers for 3D Point Clouds: Defending Adversarial Attacks with Implicit Gradients. CVPR 2022 Link

2021

Title Venue Year Link
How Privacy-Preserving Are Line Clouds? Recovering Scene Details From 3D Lines. CVPR 2021 Link
IMAGINE: Image Synthesis by Image-Guided Model Inversion. CVPR 2021 Link
MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation. CVPR 2021 Link

2020

Title Venue Year Link
The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks. CVPR 2020 Link

2019

Title Venue Year Link
Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses. CVPR 2019 Link

2013

Title Venue Year Link
BFO Meets HOG: Feature Extraction Based on Histograms of Oriented p.d.f. Gradients for Image Classification. CVPR 2013 Link
Supervised Semantic Gradient Extraction Using Linear-Time Optimization. CVPR 2013 Link

ICCV

2023

Title Venue Year Link
Boosting Adversarial Transferability via Gradient Relevance Attack. ICCV 2023 Link
Controllable Inversion of Black-Box Face Recognition Models via Diffusion. ICCV 2023 Link
GIFD: A Generative Gradient Inversion Method with Feature Domain Optimization. ICCV 2023 Link
Generative Gradient Inversion via Over-Parameterized Networks in Federated Learning. ICCV 2023 Link
Hard No-Box Adversarial Attack on Skeleton-Based Human Action Recognition with Skeleton-Motion-Informed Gradient. ICCV 2023 Link
Prompt Tuning Inversion for Text-Driven Image Editing Using Diffusion Models. ICCV 2023 Link
TIJO: Trigger Inversion with Joint Optimization for Defending Multimodal Backdoored Models. ICCV 2023 Link
Template Inversion Attack against Face Recognition Systems using 3D Face Reconstruction. ICCV 2023 Link
Transferable Adversarial Attack for Both Vision Transformers and Convolutional Networks via Momentum Integrated Gradients. ICCV 2023 Link

2021

Title Venue Year Link
Exploiting Explanations for Model Inversion Attacks. ICCV 2021 Link
Knowledge-Enriched Distributional Model Inversion Attacks. ICCV 2021 Link
Meta Gradient Adversarial Attack. ICCV 2021 Link

2007

Title Venue Year Link
On the Extraction of Curve Skeletons using Gradient Vector Flow. ICCV 2007 Link

ECCV

2024

Title Venue Year Link
A Closer Look at GAN Priors: Exploiting Intermediate Features for Enhanced Model Inversion Attacks. ECCV 2024 Link
Improving Robustness to Model Inversion Attacks via Sparse Coding Architectures. ECCV 2024 Link
Learning a Dynamic Privacy-Preserving Camera Robust to Inversion Attacks. ECCV 2024 Link
On the Vulnerability of Skip Connections to Model Inversion Attacks. ECCV 2024 Link
Prediction Exposes Your Face: Black-Box Model Inversion via Prediction Alignment. ECCV 2024 Link
Safeguard Text-to-Image Diffusion Models with Human Feedback Inversion. ECCV 2024 Link
Source Prompt Disentangled Inversion for Boosting Image Editability with Diffusion Models. ECCV 2024 Link
Viewpoint Textual Inversion: Discovering Scene Representations and 3D View Control in 2D Diffusion Models. ECCV 2024 Link

2022

Title Venue Year Link
SecretGen: Privacy Recovery on Pre-trained Models via Distribution Discrimination. ECCV 2022 Link

2020

Title Venue Year Link
Defense Against Adversarial Attacks via Controlling Gradient Leaking on Embedded Manifolds. ECCV 2020 Link

2008

Title Venue Year Link
Image Feature Extraction Using Gradient Local Auto-Correlations. ECCV 2008 Link

ACM Multimedia

2024

Title Venue Year Link
Informative Point cloud Dataset Extraction for Classification via Gradient-based Points Moving. ACM Multimedia 2024 Link

2023

Title Venue Year Link
Gradient-Free Textual Inversion. ACM Multimedia 2023 Link
Model Inversion Attack via Dynamic Memory Learning. ACM Multimedia 2023 Link

2020

Title Venue Year Link
Efficient Joint Gradient Based Attack Against SOR Defense for 3D Point Cloud Classification. ACM Multimedia 2020 Link

ACL

2025

Title Venue Year Link
ALGEN: Few-shot Inversion Attacks on Textual Embeddings via Cross-Model Alignment and Generation. ACL 2025 Link
Mitigating Paraphrase Attacks on Machine-Text Detection via Paraphrase Inversion. ACL 2025 Link
ObfusLM: Privacy-preserving Language Model Service against Embedding Inversion Attacks. ACL 2025 Link
PIG: Privacy Jailbreak Attack on LLMs via Gradient-based Iterative In-Context Optimization. ACL 2025 Link
Stealing Training Data from Large Language Models in Decentralized Training through Activation Inversion Attack. ACL 2025 Link
The Inverse Scaling Effect of Pre-Trained Language Model Surprisal Is Not Due to Data Leakage. ACL 2025 Link

2024

Title Venue Year Link
Continual Few-shot Relation Extraction via Adaptive Gradient Correction and Knowledge Decomposition. ACL 2024 Link
Text Embedding Inversion Security for Multilingual Language Models. ACL 2024 Link
Towards Multiple References Era - Addressing Data Leakage and Limited Reference Diversity in Machine Translation Evaluation. ACL 2024 Link
Transferable Embedding Inversion Attack: Uncovering Privacy Risks in Text Embeddings without Model Queries. ACL 2024 Link

2023

Title Venue Year Link
A Gradient Control Method for Backdoor Attacks on Parameter-Efficient Tuning. ACL 2023 Link
Bridge the Gap Between CV and NLP! A Gradient-based Textual Adversarial Attack Framework. ACL 2023 Link
Sentence Embedding Leaks More Information than You Expect: Generative Embedding Inversion Attack to Recover the Whole Sentence. ACL 2023 Link

EMNLP

2024

Title Venue Year Link
An Inversion Attack Against Obfuscated Embedding Matrix in Language Model Inference. EMNLP 2024 Link
On Leakage of Code Generation Evaluation Datasets. EMNLP 2024 Link
OpenSep: Leveraging Large Language Models with Textual Inversion for Open World Audio Separation. EMNLP 2024 Link
SecureSQL: Evaluating Data Leakage of Large Language Models as Natural Language Interfaces to Databases. EMNLP 2024 Link
Seeing the Forest through the Trees: Data Leakage from Partial Transformer Gradients. EMNLP 2024 Link

2023

Title Venue Year Link
UPTON: Preventing Authorship Leakage from Public Text Release via Data Poisoning. EMNLP 2023 Link

2022

Title Venue Year Link
Backdoor Attacks in Federated Learning by Rare Embeddings and Gradient Ensembling. EMNLP 2022 Link
Invernet: An Inversion Attack Framework to Infer Fine-Tuning Datasets through Word Embeddings. EMNLP 2022 Link

2021

Title Venue Year Link
Gradient Imitation Reinforcement Learning for Low Resource Relation Extraction. EMNLP 2021 Link
Gradient-based Adversarial Attacks against Text Transformers. EMNLP 2021 Link
TAG: Gradient Attack on Transformer-based Language Models. EMNLP 2021 Link

COLING

2025

Title Venue Year Link
Gradient Inversion Attack in Federated Learning: Exposing Text Data through Discrete Optimization. COLING 2025 Link

2012

Title Venue Year Link
Code-Switch Language Model with Inversion Constraints for Mixed Language Speech Recognition. COLING 2012 Link

1986

Title Venue Year Link
The Role of Inversion and PP-Fronting in Relating Discourse Elements: some implications for cognitive and computational models of Natural Language Processing. COLING 1986 Link

SIGIR

2025

Title Venue Year Link
Information Leakage of Sentence Embeddings via Generative Embedding Inversion Attacks. SIGIR 2025 Link

WWW

2024

Title Venue Year Link
Detecting Poisoning Attacks on Federated Learning Using Gradient-Weighted Class Activation Mapping. WWW 2024 Link

2023

Title Venue Year Link
Ginver: Generative Model Inversion Attacks Against Collaborative Inference. WWW 2023 Link
NetGuard: Protecting Commercial Web APIs from Model Inversion Attacks using GAN-generated Fake Samples. WWW 2023 Link

2019

Title Venue Year Link
UNVEIL: Capture and Visualise WiFi Data Leakages. WWW 2019 Link
VACCINE: Using Contextual Integrity For Data Leakage Detection. WWW 2019 Link

2017

Title Venue Year Link
Trajectory Recovery From Ash: User Privacy Is NOT Preserved in Aggregated Mobility Data. WWW 2017 Link

DAC

2025

Title Venue Year Link
Data Oblivious CPU: Microarchitectural Side-channel Leakage-Resilient Processor. DAC 2025 Link
Ensembler: Protect Collaborative Inference Privacy from Model Inversion Attack via Selective Ensemble. DAC 2025 Link

2023

Title Venue Year Link
NNTesting: Neural Network Fault Attacks Detection Using Gradient-Based Test Vector Generation. DAC 2023 Link

2021

Title Venue Year Link
PRID: Model Inversion Privacy Attacks in Hyperdimensional Learning Systems. DAC 2021 Link

IEEE Trans. Computers

2021

Title Venue Year Link
Leakage-Free Dissemination of Authenticated Tree-Structured Data With Multi-Party Control. IEEE Trans. Computers 2021 Link

2000

Title Venue Year Link
Generalized Inversion Attack on Nonlinear Filter Generators. IEEE Trans. Computers 2000 Link

WACV

2025

Title Venue Year Link
Negative-Prompt Inversion: Fast Image Inversion for Editing with Text-Guided Diffusion Models. WACV 2025 Link
Recoverable Anonymization for Pose Estimation: A Privacy-Enhancing Approach. WACV 2025 Link

2024

Title Venue Year Link
PATROL: Privacy-Oriented Pruning for Collaborative Inference Against Model Inversion Attacks. WACV 2024 Link

2022

Title Venue Year Link
Reconstructing Training Data from Diverse ML Models by Ensemble Inversion. WACV 2022 Link

2019

Title Venue Year Link
Local Gradients Smoothing: Defense Against Localized Adversarial Attacks. WACV 2019 Link

2008

Title Venue Year Link
Iris Extraction Based on Intensity Gradient and Texture Difference. WACV 2008 Link

IEEE Trans. Comput. Aided Des. Integr. Circuits Syst.

2000

Title Venue Year Link
SPICE models for flicker noise in n-MOSFETs from subthreshold to strong inversion. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 2000 Link

1992

Title Venue Year Link
A mobility model including the screening effect in MOS inversion layer. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 1992 Link

1989

Title Venue Year Link
Extracting transistor changes from device simulations by gradient fitting. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 1989 Link
Universality of mobility-gate field characteristics of electrons in the inversion charge layer and its application in MOSFET modeling. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 1989 Link

IEEE Trans. Neural Networks Learn. Syst.

2025

Title Venue Year Link
Defending Against Neural Network Model Inversion Attacks via Data Poisoning. IEEE Trans. Neural Networks Learn. Syst. 2025 Link
Neural Honeypoint: An Active Defense Framework Against Model Inversion Attacks. IEEE Trans. Neural Networks Learn. Syst. 2025 Link

2024

Title Venue Year Link
A Dynamic-Varying Parameter Enhanced ZNN Model for Solving Time-Varying Complex-Valued Tensor Inversion With Its Application to Image Encryption. IEEE Trans. Neural Networks Learn. Syst. 2024 Link
GNN Model for Time-Varying Matrix Inversion With Robust Finite-Time Convergence. IEEE Trans. Neural Networks Learn. Syst. 2024 Link
Gradient Correction for White-Box Adversarial Attacks. IEEE Trans. Neural Networks Learn. Syst. 2024 Link

2023

Title Venue Year Link
Dynamic Moore-Penrose Inversion With Unknown Derivatives: Gradient Neural Network Approach. IEEE Trans. Neural Networks Learn. Syst. 2023 Link
Exploring Adversarial Attack in Spiking Neural Networks With Spike-Compatible Gradient. IEEE Trans. Neural Networks Learn. Syst. 2023 Link

2022

Title Venue Year Link
SMGEA: A New Ensemble Adversarial Attack Powered by Long-Term Gradient Memories. IEEE Trans. Neural Networks Learn. Syst. 2022 Link

2021

Title Venue Year Link
Gradients Cannot Be Tamed: Behind the Impossible Paradox of Blocking Targeted Adversarial Attacks. IEEE Trans. Neural Networks Learn. Syst. 2021 Link

2020

Title Venue Year Link
New Varying-Parameter ZNN Models With Finite-Time Convergence and Noise Suppression for Time-Varying Matrix Moore-Penrose Inversion. IEEE Trans. Neural Networks Learn. Syst. 2020 Link

2013

Title Venue Year Link
Common Nature of Learning Between Back-Propagation and Hopfield-Type Neural Networks for Generalized Matrix Inversion With Simplified Models. IEEE Trans. Neural Networks Learn. Syst. 2013 Link

IEEE Trans. Big Data

2025

Title Venue Year Link
Comprehensive Privacy Analysis on Recommendation With Causal Embedding Against Model Inversion Attacks. IEEE Trans. Big Data 2025 Link

2024

Title Venue Year Link
Augmented Multi-Party Computation Against Gradient Leakage in Federated Learning. IEEE Trans. Big Data 2024 Link
Improved Gradient Inversion Attacks and Defenses in Federated Learning. IEEE Trans. Big Data 2024 Link

2023

Title Venue Year Link
A Black-Box Adversarial Attack Method via Nesterov Accelerated Gradient and Rewiring Towards Attacking Graph Neural Networks. IEEE Trans. Big Data 2023 Link

2021

Title Venue Year Link
Leakage Resilient Leveled FHE on Multiple Bits Message. IEEE Trans. Big Data 2021 Link

IEEE Trans. Medical Imaging

2023

Title Venue Year Link
Do Gradient Inversion Attacks Make Federated Learning Unsafe? IEEE Trans. Medical Imaging 2023 Link
Ensemble Inversion for Brain Tumor Growth Models With Mass Effect. IEEE Trans. Medical Imaging 2023 Link

2016

Title Venue Year Link
Real-Time Model-Based Inversion in Cross-Sectional Optoacoustic Tomography. IEEE Trans. Medical Imaging 2016 Link

2015

Title Venue Year Link
Quantitative Susceptibility Mapping by Inversion of a Perturbation Field Model: Correlation With Brain Iron in Normal Aging. IEEE Trans. Medical Imaging 2015 Link

2014

Title Venue Year Link
Model Based Inversion for Deriving Maps of Histological Parameters Characteristic of Cancer From Ex-Vivo Multispectral Images of the Colon. IEEE Trans. Medical Imaging 2014 Link

2010

Title Venue Year Link
Fast Semi-Analytical Model-Based Acoustic Inversion for Quantitative Optoacoustic Tomography. IEEE Trans. Medical Imaging 2010 Link

IEEE Trans. Inf. Forensics Secur.

2025

Title Venue Year Link
Adv-Inversion: Stealthy Adversarial Attacks via GAN-Inversion for Facial Privacy Protection. IEEE Trans. Inf. Forensics Secur. 2025 Link
Defending Against Model Inversion Attack via Feature Purification. IEEE Trans. Inf. Forensics Secur. 2025 Link
Distributional Black-Box Model Inversion Attack With Multi-Agent Reinforcement Learning. IEEE Trans. Inf. Forensics Secur. 2025 Link
Dual Dependency Disentangling for Defending Model Inversion Attacks in Split Federated Learning. IEEE Trans. Inf. Forensics Secur. 2025 Link
Enabling Gradient Inversion Attack Against SplitFed Learning via L2 Norm Amplification. IEEE Trans. Inf. Forensics Secur. 2025 Link
FGMIA: Feature-Guided Model Inversion Attacks Against Face Recognition Models. IEEE Trans. Inf. Forensics Secur. 2025 Link
GI-NAS: Boosting Gradient Inversion Attacks Through Adaptive Neural Architecture Search. IEEE Trans. Inf. Forensics Secur. 2025 Link
Gradient Inversion of Text-Modal Data in Distributed Learning. IEEE Trans. Inf. Forensics Secur. 2025 Link
Query-Efficient Model Inversion Attacks: An Information Flow View. IEEE Trans. Inf. Forensics Secur. 2025 Link
Recovering Reed-Solomon Codes Privately. IEEE Trans. Inf. Forensics Secur. 2025 Link
Robust Token Gradient and Frequency-Aware Transferable Adversarial Attacks on Vision Transformers. IEEE Trans. Inf. Forensics Secur. 2025 Link
Semantic and Precise Trigger Inversion: Detecting Backdoored Language Models. IEEE Trans. Inf. Forensics Secur. 2025 Link
The Gradient Puppeteer: Adversarial Domination in Gradient Leakage Attacks Through Model Poisoning. IEEE Trans. Inf. Forensics Secur. 2025 Link
TrapNet: Model Inversion Defense via Trapdoor. IEEE Trans. Inf. Forensics Secur. 2025 Link

2024

Title Venue Year Link
Cross-User Leakage Mitigation for Authorized Multi-User Encrypted Data Sharing. IEEE Trans. Inf. Forensics Secur. 2024 Link
Data Generation and Augmentation Method for Deep Learning-Based VDU Leakage Signal Restoration Algorithm. IEEE Trans. Inf. Forensics Secur. 2024 Link
Gradient-Leaks: Enabling Black-Box Membership Inference Attacks Against Machine Learning Models. IEEE Trans. Inf. Forensics Secur. 2024 Link
Inversion-Guided Defense: Detecting Model Stealing Attacks by Output Inverting. IEEE Trans. Inf. Forensics Secur. 2024 Link
RVE-PFL: Robust Variational Encoder-Based Personalized Federated Learning Against Model Inversion Attacks. IEEE Trans. Inf. Forensics Secur. 2024 Link
Unstoppable Attack: Label-Only Model Inversion Via Conditional Diffusion Model. IEEE Trans. Inf. Forensics Secur. 2024 Link
Vulnerability of State-of-the-Art Face Recognition Models to Template Inversion Attack. IEEE Trans. Inf. Forensics Secur. 2024 Link

2023

Title Venue Year Link
A GAN-Based Defense Framework Against Model Inversion Attacks. IEEE Trans. Inf. Forensics Secur. 2023 Link
Analysis and Utilization of Hidden Information in Model Inversion Attacks. IEEE Trans. Inf. Forensics Secur. 2023 Link
EGIA: An External Gradient Inversion Attack in Federated Learning. IEEE Trans. Inf. Forensics Secur. 2023 Link
Label-Only Model Inversion Attacks: Attack With the Least Information. IEEE Trans. Inf. Forensics Secur. 2023 Link
Privacy-Encoded Federated Learning Against Gradient-Based Data Reconstruction Attacks. IEEE Trans. Inf. Forensics Secur. 2023 Link
Using Highly Compressed Gradients in Federated Learning for Data Reconstruction Attacks. IEEE Trans. Inf. Forensics Secur. 2023 Link

2022

Title Venue Year Link
Data Disclosure With Non-Zero Leakage and Non-Invertible Leakage Matrix. IEEE Trans. Inf. Forensics Secur. 2022 Link
Gradient Leakage Attack Resilient Deep Learning. IEEE Trans. Inf. Forensics Secur. 2022 Link
Model Inversion Attack by Integration of Deep Generative Models: Privacy-Sensitive Face Generation From a Face Recognition System. IEEE Trans. Inf. Forensics Secur. 2022 Link

2019

Title Venue Year Link
Impact of Prior Knowledge and Data Correlation on Privacy Leakage: A Unified Analysis. IEEE Trans. Inf. Forensics Secur. 2019 Link

2017

Title Venue Year Link
A Zero-Leakage Fuzzy Embedder From the Theoretical Formulation to Real Data. IEEE Trans. Inf. Forensics Secur. 2017 Link
Optimized Quantization in Zero Leakage Helper Data Systems. IEEE Trans. Inf. Forensics Secur. 2017 Link

ACM Trans. Priv. Secur.

2025

Title Venue Year Link
Quantifying and Exploiting Adversarial Vulnerability: Gradient-Based Input Pre-Filtering for Enhanced Performance in Black-Box Attacks. ACM Trans. Priv. Secur. 2025 Link

2023

Title Venue Year Link
Beyond Gradients: Exploiting Adversarial Priors in Model Inversion Attacks. ACM Trans. Priv. Secur. 2023 Link

IEEE J. Biomed. Health Informatics

2023

Title Venue Year Link
E2EGI: End-to-End Gradient Inversion in Federated Learning. IEEE J. Biomed. Health Informatics 2023 Link

Medical Image Anal.

2026

Title Venue Year Link
A novel gradient inversion attack framework to investigate privacy vulnerabilities during retinal image-based federated learning. Medical Image Anal. 2026 Link

2025

Title Venue Year Link
Shadow defense against gradient inversion attack in federated learning. Medical Image Anal. 2025 Link

Knowl. Based Syst.

2025

Title Venue Year Link
Stand-in Model Protection: Synthetic defense for membership inference and model inversion attacks. Knowl. Based Syst. 2025 Link
Subspectrum mixup-based adversarial attack and evading defenses by structure-enhanced gradient purification. Knowl. Based Syst. 2025 Link

2024

Title Venue Year Link
AGS: Transferable adversarial attack for person re-identification by adaptive gradient similarity attack. Knowl. Based Syst. 2024 Link
Defending against gradient inversion attacks in federated learning via statistical machine unlearning. Knowl. Based Syst. 2024 Link
Spatial-frequency gradient fusion based model augmentation for high transferability adversarial attack. Knowl. Based Syst. 2024 Link

2023

Title Venue Year Link
MP-CLF: An effective Model-Preserving Collaborative deep Learning Framework for mitigating data leakage under the GAN. Knowl. Based Syst. 2023 Link

2017

Title Venue Year Link
Data leakage detection algorithm based on task sequences and probabilities. Knowl. Based Syst. 2017 Link

Neurocomputing

2025

Title Venue Year Link
Exploiting the connections between images and deep feature vectors in model inversion attacks. Neurocomputing 2025 Link
Label-only model inversion attacks: Adaptive boundary exclusion for limited queries. Neurocomputing 2025 Link
Momentum gradient-based untargeted poisoning attack on hypergraph neural networks. Neurocomputing 2025 Link

2024

Title Venue Year Link
Adaptive Gradient-based Word Saliency for adversarial text attacks. Neurocomputing 2024 Link
Improved gradient leakage attack against compressed gradients in federated learning. Neurocomputing 2024 Link

2020

Title Venue Year Link
Modified gradient neural networks for solving the time-varying Sylvester equation with adaptive coefficients and elimination of matrix inversion. Neurocomputing 2020 Link
New error function designs for finite-time ZNN models with application to dynamic matrix inversion. Neurocomputing 2020 Link

2015

Title Venue Year Link
On sampled-data control for stabilization of genetic regulatory networks with leakage delays. Neurocomputing 2015 Link

IEEE Trans. Knowl. Data Eng.

2025

Title Venue Year Link
Practical Equi-Join Over Encrypted Database With Reduced Leakage. IEEE Trans. Knowl. Data Eng. 2025 Link

2024

Title Venue Year Link
On Data Distribution Leakage in Cross-Silo Federated Learning. IEEE Trans. Knowl. Data Eng. 2024 Link

2023

Title Venue Year Link
Model Inversion Attacks Against Graph Neural Networks. IEEE Trans. Knowl. Data Eng. 2023 Link
Reveal Your Images: Gradient Leakage Attack Against Unbiased Sampling-Based Secure Aggregation. IEEE Trans. Knowl. Data Eng. 2023 Link
Time-Aware Gradient Attack on Dynamic Network Link Prediction. IEEE Trans. Knowl. Data Eng. 2023 Link

2011

Title Venue Year Link
Data Leakage Detection. IEEE Trans. Knowl. Data Eng. 2011 Link

IEEE Trans. Syst. Man Cybern. Syst.

2023

Title Venue Year Link
Adversarial Attacks on Regression Systems via Gradient Optimization. IEEE Trans. Syst. Man Cybern. Syst. 2023 Link
Social IoT Approach to Cyber Defense of a Deep-Learning-Based Recognition System in Front of Media Clones Generated by Model Inversion Attack. IEEE Trans. Syst. Man Cybern. Syst. 2023 Link

IEEE Trans. Emerg. Top. Comput.

2023

Title Venue Year Link
A Web Back-End Database Leakage Incident Reconstruction Framework Over Unlabeled Logs. IEEE Trans. Emerg. Top. Comput. 2023 Link

2022

Title Venue Year Link
An Approximate Memory Based Defense Against Model Inversion Attacks to Neural Networks. IEEE Trans. Emerg. Top. Comput. 2022 Link

Neural Networks

2024

Title Venue Year Link
Aligning the domains in cross domain model inversion attack. Neural Networks 2024 Link
Structural prior-driven feature extraction with gradient-momentum combined optimization for convolutional neural network image classification. Neural Networks 2024 Link

2019

Title Venue Year Link
A new noise-tolerant and predefined-time ZNN model for time-dependent matrix inversion. Neural Networks 2019 Link

2007

Title Venue Year Link
Model inversion by parameter fit using NN emulating the forward model - Evaluation of indirect measurements. Neural Networks 2007 Link

J. Artif. Intell. Res.

2025

Title Venue Year Link
Detecting Generative Model Inversion Attacks for Protecting Intellectual Property of Deep Neural Networks. J. Artif. Intell. Res. 2025 Link

IEEE Trans. Image Process.

2021

Title Venue Year Link
Gradient-Based Feature Extraction From Raw Bayer Pattern Images. IEEE Trans. Image Process. 2021 Link

2003

Title Venue Year Link
A local spectral inversion of a linearized TV model for denoising and deblurring. IEEE Trans. Image Process. 2003 Link

1992

Title Venue Year Link
A system model and inversion for synthetic aperture radar imaging. IEEE Trans. Image Process. 1992 Link

IEEE Trans. Neural Networks

2005

Title Venue Year Link
Design and analysis of a general recurrent neural network model for time-varying matrix inversion. IEEE Trans. Neural Networks 2005 Link

IEEE Trans. Intell. Transp. Syst.

2022

Title Venue Year Link
Optimal Trajectory Planning and Robust Tracking Using Vehicle Model Inversion. IEEE Trans. Intell. Transp. Syst. 2022 Link

Expert Syst. Appl.

2026

Title Venue Year Link
ExSGD: Exploiting previous gradient for distributed large-batch training of building extraction network. Expert Syst. Appl. 2026 Link
FMGHA: Future momentum gradient-based attack on hypergraph neural networks. Expert Syst. Appl. 2026 Link
Transferable and defense-aware dual-objective meta gradient memory attack against deepfake generation. Expert Syst. Appl. 2026 Link

2025

Title Venue Year Link
ILAMP: Improved text extraction from gradients in federated learning using language model priors and sequence beam search. Expert Syst. Appl. 2025 Link
Uni-3DAD: Gan-inversion aided universal 3D anomaly detection on model-free products. Expert Syst. Appl. 2025 Link
You cannot handle the weather: Progressive amplified adverse-weather-gradient projection adversarial attack. Expert Syst. Appl. 2025 Link

2024

Title Venue Year Link
AGD-GAN: Adaptive Gradient-Guided and Depth-supervised generative adversarial networks for ancient mural sketch extraction. Expert Syst. Appl. 2024 Link
Fixed-time convergence ZNN model for solving rectangular dynamic full-rank matrices inversion. Expert Syst. Appl. 2024 Link
Securecipher: An instantaneous synchronization stream encryption system for insider threat data leakage protection. Expert Syst. Appl. 2024 Link

2023

Title Venue Year Link
An empirical study of pattern leakage impact during data preprocessing on machine learning-based intrusion detection models reliability. Expert Syst. Appl. 2023 Link
Opt-TCAE: Optimal temporal convolutional auto-encoder for boiler tube leakage detection in a thermal power plant using multi-sensor data. Expert Syst. Appl. 2023 Link

2016

Title Venue Year Link
Ridders algorithm in approximate inversion of fuzzy model with parametrized consequences. Expert Syst. Appl. 2016 Link

2014

Title Venue Year Link
Inversion mechanism with functional extrema model for identification incommensurate and hyper fractional chaos via differential evolution. Expert Syst. Appl. 2014 Link

IEEE ACM Trans. Audio Speech Lang. Process.

2022

Title Venue Year Link
Acoustic-to-Articulatory Mapping With Joint Optimization of Deep Speech Enhancement and Articulatory Inversion Models. IEEE ACM Trans. Audio Speech Lang. Process. 2022 Link

IEEE Trans. Speech Audio Process.

2013

Title Venue Year Link
Model-Based Inversion of Dynamic Range Compression. IEEE Trans. Speech Audio Process. 2013 Link

Neural Comput. Appl.

2025

Title Venue Year Link
Integrated gradients-based defense against adversarial word substitution attacks. Neural Comput. Appl. 2025 Link

2016

Title Venue Year Link
The stabilization of BAM neural networks with time-varying delays in the leakage terms via sampled-data control. Neural Comput. Appl. 2016 Link

2012

Title Venue Year Link
A novel neural-based model for acoustic-articulatory inversion mapping. Neural Comput. Appl. 2012 Link

ICDE

2024

Title Venue Year Link
LDPRecover: Recovering Frequencies from Poisoning Attacks Against Local Differential Privacy. ICDE 2024 Link
Secure Normal Form: Mediation Among Cross Cryptographic Leakages in Encrypted Databases. ICDE 2024 Link

2020

Title Venue Year Link
An Anomaly Detection System for the Protection of Relational Database Systems against Data Leakage by Application Programs. ICDE 2020 Link

2009

Title Venue Year Link
A Model for Data Leakage Detection. ICDE 2009 Link

2005

Title Venue Year Link
XGuard: A System for Publishing XML Documents without Information Leakage in the Presence of Data Inference. ICDE 2005 Link

MobiCom

2024

Title Venue Year Link
A Black-Box Approach for Quantifying Leakage of Trace-Based Correlated Data. MobiCom 2024 Link

IEEE Symposium on Security and Privacy

2019

Title Venue Year Link
Data Recovery on Encrypted Databases with k-Nearest Neighbor Query Leakage. IEEE Symposium on Security and Privacy 2019 Link
Why Does Your Data Leak? Uncovering the Data Leakage in Cloud from Mobile Apps. IEEE Symposium on Security and Privacy 2019 Link

2018

Title Venue Year Link
Improved Reconstruction Attacks on Encrypted Data Using Range Query Leakage. IEEE Symposium on Security and Privacy 2018 Link

CollSec

2010

Title Venue Year Link
Analyzing Group Communication for Preventing Accidental Data Leakage via Email. CollSec 2010 Link

Proc. ACM Manag. Data

2024

Title Venue Year Link
Counterfactual Explanation at Will, with Zero Privacy Leakage. Proc. ACM Manag. Data 2024 Link

2023

Title Venue Year Link
RLS Side Channels: Investigating Leakage of Row-Level Security Protected Data Through Query Execution Time. Proc. ACM Manag. Data 2023 Link

ICSE Companion

2025

Title Venue Year Link
CODEMORPH: Mitigating Data Leakage in Large Language Model Assessment. ICSE Companion 2025 Link

CIKM

2022

Title Venue Year Link
Are Gradients on Graph Structure Reliable in Gray-box Attacks? CIKM 2022 Link

2005

Title Venue Year Link
Privacy leakage in multi-relational databases via pattern based semi-supervised learning. CIKM 2005 Link

ACM Trans. Intell. Syst. Technol.

2022

Title Venue Year Link
GRNN: Generative Regression Neural Network - A Data Leakage Attack for Federated Learning. ACM Trans. Intell. Syst. Technol. 2022 Link

ACM Trans. Knowl. Discov. Data

2012

Title Venue Year Link
Leakage in data mining: Formulation, detection, and avoidance. ACM Trans. Knowl. Discov. Data 2012 Link

Proc. VLDB Endow.

2024

Title Venue Year Link
SWAT: A System-Wide Approach to Tunable Leakage Mitigation in Encrypted Data Stores. Proc. VLDB Endow. 2024 Link

2022

Title Venue Year Link
Don't Be a Tattle-Tale: Preventing Leakages through Data Dependencies on Access Control Protected Data. Proc. VLDB Endow. 2022 Link

2018

Title Venue Year Link
ConTPL: Controlling Temporal Privacy Leakage in Differentially Private Continuous Data Release. Proc. VLDB Endow. 2018 Link

IEEE Trans. Parallel Distributed Syst.

2013

Title Venue Year Link
A Privacy Leakage Upper Bound Constraint-Based Approach for Cost-Effective Privacy Preserving of Intermediate Data Sets in Cloud. IEEE Trans. Parallel Distributed Syst. 2013 Link

1990

Title Venue Year Link
Error Recovery in Shared Memory Multiprocessors Using Private Caches. IEEE Trans. Parallel Distributed Syst. 1990 Link

Pattern Recognit.

2026

Title Venue Year Link
Global aggregated gradient-guided adversarial attacks for person re-identification. Pattern Recognit. 2026 Link
SemiSketch: An ancient mural sketch extraction network based on reference prior and gradient frequency compensation. Pattern Recognit. 2026 Link
Staircase Sign Method: Boosting adversarial attacks by mitigating gradient distortion. Pattern Recognit. 2026 Link

2025

Title Venue Year Link
Gradient-based sparse voxel attacks on point cloud object detection. Pattern Recognit. 2025 Link

2023

Title Venue Year Link
A Learnable Gradient operator for face presentation attack detection. Pattern Recognit. 2023 Link

2022

Title Venue Year Link
Practical protection against video data leakage via universal adversarial head. Pattern Recognit. 2022 Link

2021

Title Venue Year Link
AG3line: Active grouping and geometry-gradient combined validation for fast line segment extraction. Pattern Recognit. 2021 Link

2013

Title Venue Year Link
Rotation invariant textural feature extraction for image retrieval using eigen value analysis of intensity gradients and multi-resolution analysis. Pattern Recognit. 2013 Link

2003

Title Venue Year Link
Gradient feature extraction for classification-based face detection. Pattern Recognit. 2003 Link

1996

Title Venue Year Link
Extracting facial features by an inhibitory mechanism based on gradient distributions. Pattern Recognit. 1996 Link

Inf. Sci.

2025

Title Venue Year Link
Graph neural networks adversarial attacks based on node gradient and importance score. Inf. Sci. 2025 Link

2024

Title Venue Year Link
GradDiff: Gradient-based membership inference attacks against federated distillation with differential comparison. Inf. Sci. 2024 Link

2021

Title Venue Year Link
Improving adversarial attacks on deep neural networks via constricted gradient-based perturbations. Inf. Sci. 2021 Link
Target attack on biomedical image segmentation model based on multi-scale gradients. Inf. Sci. 2021 Link

2017

Title Venue Year Link
Local gradient patterns (LGP): An effective local-statistical-feature extraction scheme for no-reference image quality assessment. Inf. Sci. 2017 Link

2014

Title Venue Year Link
CoBAn: A context based model for data leakage prevention. Inf. Sci. 2014 Link

IEEE Trans. Signal Process.

2023

Title Venue Year Link
Secure Distributed Optimization Under Gradient Attacks. IEEE Trans. Signal Process. 2023 Link

2020

Title Venue Year Link
Federated Variance-Reduced Stochastic Gradient Descent With Robustness to Byzantine Attacks. IEEE Trans. Signal Process. 2020 Link

2019

Title Venue Year Link
Distributed Gradient Descent Algorithm Robust to an Arbitrary Number of Byzantine Attackers. IEEE Trans. Signal Process. 2019 Link
Gradient Algorithms for Complex Non-Gaussian Independent Component/Vector Extraction, Question of Convergence. IEEE Trans. Signal Process. 2019 Link

IEEE Trans. Cybern.

2023

Title Venue Year Link
Deep Cascade Gradient RBF Networks With Output-Relevant Feature Extraction and Adaptation for Nonlinear and Nonstationary Processes. IEEE Trans. Cybern. 2023 Link

Pattern Recognit. Lett.

2020

Title Venue Year Link
Perturbation analysis of gradient-based adversarial attacks. Pattern Recognit. Lett. 2020 Link

2015

Title Venue Year Link
Gradient operators for feature extraction from omnidirectional panoramic images. Pattern Recognit. Lett. 2015 Link

2013

Title Venue Year Link
Object extraction from T2 weighted brain MR image using histogram based gradient calculation. Pattern Recognit. Lett. 2013 Link

2010

Title Venue Year Link
Gradient operators for feature extraction and characterisation in range images. Pattern Recognit. Lett. 2010 Link

2008

Title Venue Year Link
Boundary extraction of linear features using dual paths through gradient profiles. Pattern Recognit. Lett. 2008 Link
Gradient-based local affine invariant feature extraction for mobile robot localization in indoor environments. Pattern Recognit. Lett. 2008 Link

ACM Trans. Inf. Syst. Secur.

2004

Title Venue Year Link
A key recovery attack on the 802.11b wired equivalent privacy protocol (WEP). ACM Trans. Inf. Syst. Secur. 2004 Link

INFOCOM

2025

Title Venue Year Link
VaniKG: Vanishing Key Gradient Attack and Defense for Robust Federated Aggregation. INFOCOM 2025 Link

2023

Title Venue Year Link
Fast Generation-Based Gradient Leakage Attacks against Highly Compressed Gradients. INFOCOM 2023 Link

2022

Title Venue Year Link
Protect Privacy from Gradient Leakage Attack in Federated Learning. INFOCOM 2022 Link

Mach. Learn.

2025

Title Venue Year Link
HFIA: a parasitic feature inference attack and gradient-based defense strategy in SplitNN-based vertical federated learning. Mach. Learn. 2025 Link

Int. J. Comput. Vis.

2020

Title Venue Year Link
Scaling up the Randomized Gradient-Free Adversarial Attack Reveals Overestimation of Robustness Using Established Attacks. Int. J. Comput. Vis. 2020 Link

Comput. Vis. Image Underst.

2023

Title Venue Year Link
Improving the robustness of adversarial attacks using an affine-invariant gradient estimator. Comput. Vis. Image Underst. 2023 Link