LuSNAR Multi-Modal SLAM
A next-generation lunar navigation system fusing LiDAR, stereo cameras, an IMU, and quantum-inspired CNN (QCNN) features to achieve precise odometry in extreme low-light, low-texture environments.
Project Overview
The LuSNAR AI system is engineered for lunar rover autonomy, handling difficult terrain where traditional visual SLAM fails. Using NASA-derived moon-analogue datasets, we built a precision SLAM pipeline that fuses the following modalities (a per-frame data sketch follows the list):
- 3D LiDAR — geometry-based odometry via ICP
- Stereo images — QCNN visual embeddings
- IMU — short-window motion estimation
- Hybrid fusion — residual BiLSTM + MLP
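
As a concrete illustration of how these streams line up per frame, here is a minimal Python sketch of a fused training sample. The field names, the 128-dimensional embedding, and the 20-sample IMU window are assumptions made for illustration, not values taken from the project code.

```python
import numpy as np

def build_fusion_sample(icp_delta: np.ndarray,
                        qcnn_embedding: np.ndarray,
                        imu_window: np.ndarray,
                        gt_delta: np.ndarray) -> dict:
    """Bundle one synchronized frame of all modalities into a training sample.

    icp_delta:      4x4 relative transform estimated by LiDAR ICP
    qcnn_embedding: (D,) visual feature vector from the stereo QCNN
    imu_window:     (W, 6) short window of gyro and accelerometer readings
    gt_delta:       4x4 ground-truth relative pose used as supervision target
    (shapes and field names are illustrative assumptions)
    """
    return {
        "icp_delta": icp_delta.astype(np.float32),
        "qcnn_embedding": qcnn_embedding.astype(np.float32),
        "imu_window": imu_window.astype(np.float32),
        "gt_delta": gt_delta.astype(np.float32),
    }

# Usage with dummy data: identity poses, zero features, a 20-sample IMU window.
sample = build_fusion_sample(np.eye(4), np.zeros(128), np.zeros((20, 6)), np.eye(4))
```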

3D LiDAR
360° point cloud scans for geometric ICP alignment.
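
As a reference point, the sketch below runs point-to-point ICP between two consecutive scans with Open3D. The 0.5 m correspondence threshold, the lack of downsampling, and the point-to-point variant are illustrative assumptions; the project's LiDAR front end may be configured differently.

```python
import numpy as np
import open3d as o3d

# Two consecutive LiDAR scans as Nx3 point arrays (random placeholders here).
src = o3d.geometry.PointCloud()
src.points = o3d.utility.Vector3dVector(np.random.rand(1000, 3))
tgt = o3d.geometry.PointCloud()
tgt.points = o3d.utility.Vector3dVector(np.random.rand(1000, 3))

# Point-to-point ICP: estimate the rigid transform that aligns src onto tgt.
result = o3d.pipelines.registration.registration_icp(
    src, tgt, 0.5, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

delta_pose = result.transformation  # 4x4 relative motion between the two scans
print(result.fitness, result.inlier_rmse)
```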
Stereo Cameras
High-res lunar images processed with QCNN.
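
This section does not spell out the QCNN internals, so the sketch below uses a plain convolutional encoder purely as a stand-in to show the interface: a stacked left/right pair goes in, a fixed-length embedding comes out. The layer sizes and the 128-dimensional output are assumptions, not the project's architecture.

```python
import torch
import torch.nn as nn

class StereoEncoder(nn.Module):
    """Placeholder encoder standing in for the project's QCNN.

    The real quantum-inspired layers are not specified here; this
    conventional CNN only illustrates the input/output interface:
    a stacked stereo pair in, a fixed-length embedding out.
    """
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, embed_dim)

    def forward(self, stereo_pair: torch.Tensor) -> torch.Tensor:
        # stereo_pair: (B, 6, H, W), left RGB stacked with right RGB
        feats = self.backbone(stereo_pair).flatten(1)
        return self.head(feats)

# Usage with dummy data: one 6-channel stereo frame at 256x256.
encoder = StereoEncoder()
embedding = encoder(torch.randn(1, 6, 256, 256))  # -> (1, 128)
```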
IMU Sensor
Gyro and accelerometer windowing for real-time motion estimation.
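
Below is a small sketch of cutting gyro and accelerometer streams into fixed-length, overlapping windows for a sequence model. The window length, stride, and 6-axis sample layout are illustrative assumptions rather than the project's configuration.

```python
import numpy as np

def window_imu(samples: np.ndarray, window: int = 20, stride: int = 10) -> np.ndarray:
    """Slice a stream of IMU samples into fixed-length, overlapping windows.

    samples: (T, 6) array of [gyro_x, gyro_y, gyro_z, acc_x, acc_y, acc_z]
    returns: (num_windows, window, 6) array ready for a sequence model.
    Window length and stride here are placeholders, not the project's values.
    """
    starts = range(0, len(samples) - window + 1, stride)
    return np.stack([samples[s:s + window] for s in starts])

# Usage: 200 IMU readings -> 19 overlapping windows of 20 samples each.
imu_stream = np.random.randn(200, 6)
windows = window_imu(imu_stream)
print(windows.shape)  # (19, 20, 6)
```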
QCNN Features
Quantum-inspired perception features for extremely low-texture terrain.
Multi-Modal Fusion Architecture
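
Below is a minimal PyTorch sketch of a residual BiLSTM + MLP fusion head of the kind described above: a short sequence of concatenated per-frame features passes through a bidirectional LSTM with a residual projection, and an MLP regresses a 6-DoF relative pose. The feature dimension, layer widths, and exact residual wiring are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Illustrative residual BiLSTM + MLP fusion head.

    Consumes a short sequence of per-frame fused feature vectors
    (ICP pose delta, QCNN embedding, and IMU features concatenated)
    and regresses a 6-DoF relative pose. All sizes are assumptions.
    """
    def __init__(self, feat_dim: int = 160, hidden: int = 128):
        super().__init__()
        self.bilstm = nn.LSTM(feat_dim, hidden, num_layers=2,
                              batch_first=True, bidirectional=True)
        self.proj = nn.Linear(feat_dim, 2 * hidden)  # residual path to match widths
        self.mlp = nn.Sequential(
            nn.Linear(2 * hidden, 128), nn.ReLU(),
            nn.Linear(128, 6),                        # 3 translation + 3 rotation params
        )

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        # seq: (B, T, feat_dim) sequence of fused per-frame features
        out, _ = self.bilstm(seq)                     # (B, T, 2*hidden)
        out = out + self.proj(seq)                    # residual connection around the BiLSTM
        return self.mlp(out[:, -1])                   # pose estimate from the last timestep

# Usage: batch of 4 sequences, 10 frames each, 160-dim fused features.
model = FusionHead()
pose = model(torch.randn(4, 10, 160))  # -> (4, 6)
```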

Key Performance Results
- ✔ 0.42 m RMSE — an 89% improvement over LiDAR-only ICP (see the metric sketch below)
- ✔ QCNN features markedly improve depth-feature stability
- ✔ Fusion outperforms every single-sensor model across the Moon_1–Moon_3 sequences
- ✔ Real-time-capable inference (PyTorch AMP optimised)
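
For context, this is a generic sketch of how a translational RMSE like the figure above can be computed from time-aligned estimated and ground-truth trajectories; it is a standard metric implementation, not the project's exact evaluation script.

```python
import numpy as np

def translation_rmse(est_positions: np.ndarray, gt_positions: np.ndarray) -> float:
    """Root-mean-square error between estimated and ground-truth positions.

    Both inputs are (N, 3) arrays of trajectory positions in metres,
    assumed to be already time-aligned.
    """
    err = est_positions - gt_positions
    return float(np.sqrt(np.mean(np.sum(err ** 2, axis=1))))

# Usage with dummy trajectories: a random-walk ground truth plus noise.
gt = np.cumsum(np.random.randn(500, 3) * 0.05, axis=0)
est = gt + np.random.randn(500, 3) * 0.3
print(f"RMSE: {translation_rmse(est, gt):.2f} m")
```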
