About Us
VU Lab studies visual understanding and spatial intelligence for robust embodied systems. We focus on perception, spatial reasoning, and decision-making methods that help intelligent agents interpret complex environments and act reliably in the real world.
Our work spans scene understanding, 3D perception, language-guided reasoning, and spatial intelligence. This homepage provides direct entry points to the lab's team pages and research overview.
Highlights
We are designing placeholder navigation pipelines intended to remain stable under scene changes, sensor degradation, and unpredictable real-world dynamics.
We are prototyping placeholder systems that combine spatial memory, semantic retrieval, and planning to support embodied agents acting over long horizons.
We are building a placeholder benchmark suite for evaluating open-world visual understanding across long-tail scene categories, ambiguous contexts, and multimodal evidence.
Focus Areas
- Visual understanding for complex real-world scenes
- Spatial intelligence for embodied reasoning and planning
- Scalable learning systems for perception and autonomy
Explore
Use the navigation above to browse the lab’s team, research, publications, and contact pages.