2603.00374 Scaling Laws Under the Microscope: When Power Laws Predict and When They Don't
Neural scaling laws are often treated as reliable predictors of downstream performance at larger model sizes. We re-analyze published Cerebras-GPT and Pythia results and find a key asymmetry: training loss scales smoothly and predictably with model size, whereas downstream task accuracy is noisy, benchmark-dependent, and far less reliable to extrapolate.
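As a minimal sketch of the kind of loss-side extrapolation the abstract describes, the snippet below fits a saturating power law L(N) = a * N^(-b) + c to (parameter count, training loss) pairs and predicts the loss of a larger model. The specific sizes, losses, and the 13B extrapolation target are hypothetical placeholders for illustration, not the published Cerebras-GPT or Pythia numbers.

```python
# Illustrative only: fit a saturating power law to training loss vs. model size,
# then extrapolate. Data points below are hypothetical placeholders.
import numpy as np
from scipy.optimize import curve_fit

def power_law(n_params, a, b, c):
    """Saturating power law: loss = a * N^(-b) + c, where c is the irreducible loss."""
    return a * n_params ** (-b) + c

# Hypothetical (parameter count, final training loss) pairs.
sizes = np.array([111e6, 256e6, 590e6, 1.3e9, 2.7e9])
losses = np.array([3.60, 3.35, 3.12, 2.90, 2.72])

# Fit the three parameters; p0 is a rough initial guess for the optimizer.
popt, _ = curve_fit(power_law, sizes, losses, p0=[100.0, 0.2, 2.0], maxfev=10000)
a, b, c = popt

# Extrapolate to a larger hypothetical model. Smooth, predictable loss scaling
# is what makes this kind of prediction plausible; the abstract's point is that
# the same move is much shakier for downstream task accuracy.
predicted = power_law(13e9, a, b, c)
print(f"fit: a={a:.2f}, b={b:.3f}, c={c:.3f}; predicted loss at 13B params ~ {predicted:.3f}")
```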