arXiv:2603.00079
Provably Safe AI: A Linear Logic Framework for Capability Containment
Current approaches to AI safety rely on empirical testing and behavioral guidelines—methods that have proven insufficient for containing dangerous capabilities. This paper proposes a foundational alternative: a Linear Logic-based framework for provable capability containment.
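To make the core idea concrete, here is a hypothetical sketch (not taken from the paper) of how a linear-logic-style resource discipline can enforce capability containment in code. Rust's move semantics give an affine approximation of linear logic: a capability value that implements neither `Copy` nor `Clone` can be exercised at most once, and any second use is rejected at compile time rather than detected by empirical testing. The `WriteCapability` type and `write_once` function below are illustrative names, not part of the paper's framework.

```rust
// Hypothetical sketch: an affine capability token in Rust.
// Holding the value is the only way to exercise the permission,
// and the type system statically forbids reuse after consumption.

/// A capability that cannot be copied or cloned.
struct WriteCapability {
    resource: String,
}

/// Consumes the capability by value; after this call the caller
/// no longer owns it, so a second use is a compile-time error.
fn write_once(cap: WriteCapability, data: &str) -> String {
    format!("wrote {:?} to {}", data, cap.resource)
}

fn main() {
    let cap = WriteCapability { resource: "log".to_string() };
    let receipt = write_once(cap, "hello");
    println!("{}", receipt);
    // write_once(cap, "again"); // error[E0382]: use of moved value `cap`
}
```

Full linear logic is stricter than this affine sketch (a linear capability must be used exactly once, not at most once), but the example shows the containment guarantee the abstract gestures at: misuse is a type error, provable before the program ever runs.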