An innovative, infrastructure-free solution designed to assist visually impaired people in navigating indoor environments

Dr. Vinod Namboodiri’s project, MABLE (Mapping for Accessibility in Built Environments), is developing digital maps that make indoor spaces more navigable for people with disabilities. By combining AI, building modeling, robotics, and low-power electronics, MABLE extracts detailed accessibility data from floor plans to help users assess and move through buildings with greater confidence and independence. The project is community-centered, engaging users directly in map creation and ongoing feedback to ensure inclusivity. With a web app, mobile app, and localization tools in development, MABLE represents a scalable, human-centered approach to improving accessibility and quality of life for individuals with mobility challenges.

Background

Indoor navigation is challenging because satellite positioning (GPS) does not work reliably inside buildings. The challenge is far greater for Visually Impaired People (VIPs), who cannot rely on wayfinding signage to orient themselves.

Other sensor signals (e.g., Bluetooth and LiDAR) can be used to create turn-by-turn navigation solutions that provide position updates to users. Unfortunately, these solutions require tags or beacons to be installed throughout the environment, or depend on fairly expensive hardware. They also demand a high degree of manual setup and maintenance, which raises costs and hampers scalability.

Technology Overview

Researchers at Lehigh University developed an image-centric indoor Navigation Solution for Visually Impaired People (NaVIP) that is scalable and relies on neither expensive hardware nor extensive tagging of the environment.

The solution leverages a large-scale image dataset curated from phone camera data, which provides detailed environmental understanding and navigation assistance through descriptive captions and precise positioning.
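As a rough illustration of what one record in such a curated dataset could contain, the Python sketch below pairs each phone-camera frame with the 6-DoF pose it was captured from and a descriptive caption. The NavFrame name and its fields are assumptions made for illustration, not the actual NaVIP data schema.

    from dataclasses import dataclass

    @dataclass
    class NavFrame:  # hypothetical record layout (Python 3.9+)
        """One curated phone-camera frame: the image itself, the 6-DoF
        pose it was captured from, and a human-readable description."""
        image_path: str                                 # e.g., "frames/floor2/000123.jpg"
        position: tuple[float, float, float]            # x, y, z in the building frame (meters)
        orientation: tuple[float, float, float, float]  # unit quaternion (w, x, y, z)
        caption: str                                    # descriptive text surfaced to VIP users

    # Example record; all values are illustrative only.
    record = NavFrame(
        image_path="frames/floor2/000123.jpg",
        position=(12.4, 3.1, 1.5),
        orientation=(0.98, 0.0, 0.0, 0.19),
        caption="Hallway; elevator bank about 5 meters ahead on the left.",
    )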

Query images are localized in real time using absolute pose regression (APR) methods, and image descriptions that meet the needs of VIPs are generated with multimodal large language models, as sketched below. This approach has significant potential for developing tools that help users navigate dynamic or unfamiliar spaces effectively.
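For concreteness, here is a minimal PoseNet-style APR sketch in PyTorch: a CNN backbone regresses a 3-D position and a unit quaternion from a single query image. The ResNet-18 backbone and layer sizes are assumptions chosen for illustration; this shows the general APR technique, not the specific model used in NaVIP.

    import torch
    import torch.nn as nn
    from torchvision import models

    class AbsolutePoseRegressor(nn.Module):
        """PoseNet-style APR: one forward pass maps an RGB image to a
        3-D position and a unit-quaternion orientation. Illustrative only."""
        def __init__(self):
            super().__init__()
            backbone = models.resnet18(weights=None)  # assumed backbone choice
            backbone.fc = nn.Identity()               # expose the 512-d feature vector
            self.backbone = backbone
            self.fc_xyz = nn.Linear(512, 3)           # position in the building frame
            self.fc_quat = nn.Linear(512, 4)          # orientation as a quaternion

        def forward(self, img):
            feat = self.backbone(img)
            quat = self.fc_quat(feat)
            quat = quat / quat.norm(dim=-1, keepdim=True)  # project onto the unit sphere
            return self.fc_xyz(feat), quat

    # Usage: localize a single preprocessed phone-camera frame.
    model = AbsolutePoseRegressor().eval()
    query = torch.randn(1, 3, 224, 224)  # stand-in for a resized query image
    with torch.no_grad():
        position, orientation = model(query)

Normalizing the quaternion head keeps the predicted orientation on the unit sphere, a standard choice in APR models. The predicted pose could then be used to retrieve nearby reference frames whose captions a multimodal LLM turns into guidance for the user.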

Benefits

  • Infrastructure-free and cost-effective deployment
  • Scalable across different indoor environments and settings
  • Real-time inference for immediate navigational assistance
  • Enhanced autonomy for visually impaired individuals through detailed environmental descriptions
  • Open-source dataset and tools for community-driven improvements

Applications

  • Assistive technologies for visually impaired individuals
  • Indoor navigation systems for public buildings, malls, and complex infrastructures
  • Augmented reality applications requiring precise indoor positioning
  • Research and development in computer vision and accessibility technology