GradDiff: Gradient-based membership inference attacks against federated distillation with differential comparison

Document Type

Article

Publication Date

2-1-2024

Abstract

Membership inference attacks (MIAs) have posed a serious threat to federated learning (FL) and its extension, federated distillation (FD). However, existing research on MIAs against FD remains insufficient. In this paper, we propose a novel membership inference attack named GradDiff, a passive gradient-based MIA employing differential comparison. Additionally, to make full use of the federated training process, we design the gradient drift attack (GradDrift), an active version of GradDiff in which the attacker modifies the target model by gradient tuning, thereby obtaining more information about membership privacy. We conduct extensive experiments on three real-world datasets to evaluate the effectiveness of the proposed attacks. The results show that our attacks outperform existing baseline methods in terms of precision and recall. In addition, we perform a thorough investigation of the factors that may influence the performance of MIAs against FD.
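The abstract does not detail the attack's internals, but the general idea of a passive gradient-based MIA with differential comparison can be sketched as follows. This is a minimal illustration, not the paper's actual method: it assumes a simple logistic-regression target model and uses the common observation that training members tend to yield smaller per-sample gradient norms than non-members; a candidate's gradient norm is compared differentially against a set of known non-member reference samples. All function names (`per_sample_grad_norm`, `graddiff_score`) and the threshold rule are hypothetical.

```python
import numpy as np

def per_sample_grad_norm(w, x, y):
    """Norm of the logistic-loss gradient w.r.t. weights w for one sample (x, y).

    Hypothetical helper: the paper's target models are not specified here.
    """
    p = 1.0 / (1.0 + np.exp(-x @ w))       # predicted probability
    return float(np.linalg.norm((p - y) * x))  # d(loss)/dw for logistic loss

def graddiff_score(w, x, y, ref_X, ref_y):
    """Differential comparison: mean gradient norm over known non-member
    references minus the candidate's gradient norm. Larger score => the
    candidate looks more member-like (its gradient is unusually small).
    """
    ref = np.mean([per_sample_grad_norm(w, rx, ry) for rx, ry in zip(ref_X, ref_y)])
    return ref - per_sample_grad_norm(w, x, y)

def is_member(score, threshold=0.0):
    """Hypothetical decision rule: positive differential score => member."""
    return score > threshold
```

An attacker observing gradients (or model snapshots from which gradients can be computed) during federated rounds would score each candidate sample this way; the active GradDrift variant described in the abstract would additionally perturb the target model across rounds to widen the member/non-member gap.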
