This project investigates how autonomous systems should estimate confidence, react to uncertainty, and remain reliable when real-world conditions depart from training assumptions.
Representative themes include distribution shift detection, calibrated decision making, and risk-aware inference pipelines for perception and control.
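As a flavor of the calibration theme above, the sketch below computes expected calibration error (ECE), a standard diagnostic that compares a model's stated confidence to its observed accuracy. This is an illustrative stand-in, not code from the project; the function name and toy data are hypothetical.

```python
# Hypothetical sketch of expected calibration error (ECE): bin predictions
# by confidence, then take the weighted average of |accuracy - confidence|
# over the bins. A well-calibrated model has ECE near zero.

def expected_calibration_error(confidences, correct, n_bins=10):
    """confidences: predicted probabilities in [0, 1];
    correct: 1 if the prediction was right, else 0."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # Half-open bins (lo, hi]; the first bin also includes 0.
        idx = [i for i, c in enumerate(confidences)
               if lo < c <= hi or (b == 0 and c == lo)]
        if not idx:
            continue
        acc = sum(correct[i] for i in idx) / len(idx)
        avg_conf = sum(confidences[i] for i in idx) / len(idx)
        ece += (len(idx) / n) * abs(acc - avg_conf)
    return ece

# Toy example: predictions at 80% confidence that are right 8 times in 10
# are perfectly calibrated, so ECE is (numerically) zero.
confs = [0.8] * 10
hits = [1] * 8 + [0] * 2
print(round(expected_calibration_error(confs, hits), 6))
```

A risk-aware pipeline might track a metric like this online and fall back to a conservative policy when calibration degrades under distribution shift.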
This page is a placeholder for future project details, papers, demos, and datasets.