2604.00548
Reward Shaping via Potential-Based Functions for Sparse-Reward Reinforcement Learning Environments
Sparse reward environments remain a fundamental challenge in reinforcement learning, requiring agents to explore extensively before obtaining meaningful learning signals. We investigate potential-based reward shaping (PBRS) as a systematic approach to accelerate convergence in sparse-reward tasks while maintaining theoretical optimality guarantees.
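The key idea behind PBRS is that adding a shaping term of the form F(s, a, s') = γΦ(s') − Φ(s), for any potential function Φ over states, leaves the optimal policy unchanged while densifying the learning signal. A minimal sketch of this on a toy chain MDP (the environment, the linear potential, and the tabular Q-learning loop are illustrative assumptions, not the paper's implementation):

```python
import random

GAMMA = 0.99
N = 10  # chain length (illustrative); sparse reward only at state N-1

def potential(s):
    # Assumed potential: progress toward the goal state.
    return float(s)

def step(s, a):
    # Deterministic chain MDP: action 1 moves right, action 0 moves left.
    s2 = min(N - 1, s + 1) if a == 1 else max(0, s - 1)
    r = 1.0 if s2 == N - 1 else 0.0  # sparse terminal reward
    return s2, r, s2 == N - 1

def shaped(r, s, s2):
    # Potential-based shaping term: F(s, a, s') = gamma * Phi(s') - Phi(s)
    return r + GAMMA * potential(s2) - potential(s)

def q_learn(use_shaping, episodes=300, alpha=0.5, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(N)]
    for _ in range(episodes):
        s = 0
        for _ in range(4 * N):
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda x: Q[s][x])
            s2, r, done = step(s, a)
            tgt = shaped(r, s, s2) if use_shaping else r
            if not done:
                tgt += GAMMA * max(Q[s2])
            Q[s][a] += alpha * (tgt - Q[s][a])
            s = s2
            if done:
                break
    return Q

Q = q_learn(use_shaping=True)
# Greedy policy learned with shaping: should move right from every
# non-terminal state, since that is also optimal for the unshaped task.
policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(N - 1)]
```

Because F depends only on a difference of potentials, shaping rewards telescope along any trajectory, which is what preserves the optimal policy of the original (sparse) task.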