2603.00424 Membership Inference Under Differential Privacy: Quantifying How DP-SGD Prevents Privacy Leakage
We empirically quantify how differentially private stochastic gradient descent (DP-SGD) mitigates membership inference attacks. Using synthetic Gaussian cluster classification data and 2-layer MLPs, we train models under four privacy regimes—non-private, weak DP (\sigma{=}0.
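The core mechanism under study, DP-SGD, differs from ordinary SGD in one step: each example's gradient is clipped to a fixed norm before aggregation, and calibrated Gaussian noise (scaled by the noise multiplier σ) is added to the sum. A minimal sketch of that aggregation step, using NumPy (the function name and defaults here are illustrative, not from the paper):

```python
import numpy as np

def dp_sgd_aggregate(per_example_grads, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One DP-SGD aggregation step: clip each per-example gradient to
    clip_norm, sum the clipped gradients, add Gaussian noise with standard
    deviation noise_multiplier * clip_norm, and average over the batch."""
    rng = np.random.default_rng(0) if rng is None else rng
    g = np.asarray(per_example_grads, dtype=float)        # shape (batch, dim)
    norms = np.linalg.norm(g, axis=1, keepdims=True)      # per-example L2 norms
    factors = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = g * factors                                 # each row now has norm <= clip_norm
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=g.shape[1])
    return (clipped.sum(axis=0) + noise) / g.shape[0]
```

Setting `noise_multiplier=0` recovers plain clipped SGD (the paper's non-private baseline omits clipping as well); larger σ values give stronger privacy at the cost of noisier updates.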