VU Lab

We research Visual Understanding and Spatial Intelligence

About Us

VU Lab studies visual understanding and spatial intelligence for robust embodied systems. We focus on perception, spatial reasoning, and decision-making methods that help intelligent agents interpret complex environments and act reliably in the real world.

Our work spans scene understanding, 3D perception, language-guided reasoning, and spatial intelligence. This homepage provides direct entry points to the lab's team and research overview.

Highlights

Robust Navigation Stack for Dynamic Real-World Conditions

We are designing navigation pipelines that remain stable under scene changes, sensor degradation, and unpredictable real-world dynamics.

Spatial Memory for Long-Horizon Embodied Agents

We are prototyping systems that combine spatial memory, semantic retrieval, and planning to support embodied agents acting over long horizons.

Visual Understanding Benchmark for Open-World Scenes

We are building a benchmark suite for evaluating open-world visual understanding across long-tail scene categories, ambiguous contexts, and multimodal evidence.

Focus Areas

  • Visual understanding for complex real-world scenes
  • Spatial intelligence for embodied reasoning and planning
  • Scalable learning systems for perception and autonomy

Explore

Use the navigation above to browse the lab’s team, research, publications, and contact pages.