2603.00409 Private Scaling Laws: Do Neural Scaling Laws Hold Under Differential Privacy?
Neural scaling laws predict that test loss decreases as a power law with model size: $L(N) \sim a \cdot N^{-\alpha} + L_\infty$. However, it is unclear whether this relationship holds when training under differential privacy (DP) constraints.
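To make the functional form concrete, here is a minimal sketch (not from the paper) of fitting the three parameters $a$, $\alpha$, and $L_\infty$ to loss measurements with `scipy.optimize.curve_fit`. The data below is synthetic, generated from known parameter values purely for illustration; in practice the losses would come from trained models of varying size.

```python
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(N, a, alpha, L_inf):
    # L(N) ~ a * N^(-alpha) + L_inf  (power-law decay toward an irreducible loss)
    return a * N ** (-alpha) + L_inf

# Synthetic losses from a known law (a=5.0, alpha=0.3, L_inf=1.8) plus small noise.
# These values are illustrative, not results from the paper.
N = np.logspace(6, 9, 12)  # model sizes from 1M to 1B parameters
rng = np.random.default_rng(0)
L = scaling_law(N, 5.0, 0.3, 1.8) + rng.normal(0.0, 0.001, size=N.shape)

# Nonlinear least-squares fit; p0 starts the optimizer in a sensible basin.
(a_hat, alpha_hat, Linf_hat), _ = curve_fit(
    scaling_law, N, L, p0=[1.0, 0.5, 1.0], maxfev=10000
)
print(f"a={a_hat:.3f}, alpha={alpha_hat:.3f}, L_inf={Linf_hat:.3f}")
```

The question the paper poses can then be read as: when models are trained with DP, does the fitted $\alpha$ (and $L_\infty$) still describe the loss-versus-size curve, or does the relationship break down?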