Topic 2 — Shared Perception, Mapping & World Models
Robots that collaborate must perceive the same world. This topic covers how teams of robots jointly build, share, and reconcile maps, along with how they share sensor streams, fuse knowledge, and keep their digital twins (simulation copies) consistent.
2.1 Multi-Robot SLAM
Simultaneous Localization and Mapping (SLAM) can be extended from single-robot to multi-robot scenarios:
- Parallel mapping: Robots explore different regions independently, each building partial maps.
- Map merging: As robots encounter overlapping areas or connect, they merge local maps into a unified global map.
- Conflict resolution: Overlapping scans/data must be aligned (e.g., via feature or scan matching), and disagreements are resolved based on sensor trust, recency, or confidence weighting.
Key points:
- Robots can share raw sensor data (bandwidth-heavy) or higher-level features/landmarks to reduce comms.
- Global optimization (bundle adjustment, pose-graph alignment) is critical for accurate map fusion.
- Fleet systems must synchronize time and spatial frames (TF trees, timestamps).
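The core alignment step of feature-based map merging can be sketched as a least-squares rigid-transform estimate between matched landmarks (the Kabsch/SVD step found inside most merge pipelines). The following is a minimal 2-D numpy sketch; the function and variable names are illustrative, not a real library API.

```python
# Minimal sketch: estimate the rigid transform that aligns Robot B's
# landmarks with Robot A's, given matched feature pairs from an overlap.
import numpy as np

def align_maps(landmarks_a: np.ndarray, landmarks_b: np.ndarray):
    """Kabsch-style least-squares alignment of matched 2-D landmarks.

    Returns rotation R and translation t such that R @ b + t ~= a.
    """
    ca, cb = landmarks_a.mean(axis=0), landmarks_b.mean(axis=0)
    H = (landmarks_b - cb).T @ (landmarks_a - ca)   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = ca - R @ cb
    return R, t

# Both robots observed the same three corners, each in its own frame:
a = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.0]])
theta = np.pi / 2
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
b = (a - np.array([1.0, 0.5])) @ R_true.T           # B's local frame
R, t = align_maps(a, b)
merged = b @ R.T + t                                # B's landmarks in A's frame
assert np.allclose(merged, a, atol=1e-6)
```

In a real pipeline this transform comes from scan matching or loop-closure detection and then seeds the pose-graph optimization mentioned above; the least-squares structure is the same.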
2.2 Shared Sensor Fusion
Beyond mapping, robots should share sensory knowledge:
- Cross-broadcasting: Raw or processed sensor streams (e.g., point clouds, detected objects, IMU data).
- Fusion strategies:
  - Centralized (all streams routed to a master node) or decentralized (peer-to-peer, with aggregation at each agent).
  - Statistical fusion (weighted averaging, Kalman-like merges).
- Confidence weighting: Prioritize the most reliable, recent, or better-positioned observations when fusing noisy or contradictory input.
- Distributed target recognition: Detections (objects/humans) made by one agent are shared and confirmed/refined by others—improving robustness.
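The confidence-weighting idea above can be sketched as inverse-variance ("Kalman-like") fusion of two robots' Gaussian estimates of the same quantity; the numbers and names here are purely illustrative.

```python
# Hedged sketch: fuse two independent position estimates, trusting the
# one with lower variance more. Lower variance = higher confidence weight.
import numpy as np

def fuse(mean_a, var_a, mean_b, var_b):
    """Inverse-variance fusion of two independent Gaussian estimates."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused_mean = (w_a * mean_a + w_b * mean_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)   # fused estimate beats either input alone
    return fused_mean, fused_var

# Robot A (close, confident) and Robot B (far, noisy) see the same object:
m, v = fuse(np.array([2.0, 3.0]), 0.1, np.array([2.6, 3.6]), 0.9)
# Fused mean lands much nearer A's estimate, and variance shrinks below A's.
```

The same weighting generalizes to full covariance matrices, which is what a distributed Kalman-style merge of pose or target estimates actually uses.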
2.3 Digital Twin Synchronization
Digital twins (synchronized simulation copies of the real environment) support safe experimentation, diagnostics, and operator oversight.
- Map/scene sync: Ensure all robots—and their simulators—see the same virtual world (room layouts, obstacles, moving objects).
- State updates: Each agent pushes changes (found objects, map corrections) to the global digital twin.
- Conflict handling: Rule-based or consensus-driven policies resolve conflicting updates (e.g., two robots report incompatible obstacle positions).
- Applications: Debugging, fleet management dashboards, operator-in-the-loop for remote assistance.
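A rule-based conflict policy like the one described above can be sketched as follows: when two robots report incompatible states for the same entity, keep the report with higher confidence, breaking ties by recency. The `WorldUpdate` structure and the policy rules are assumptions for illustration, not a real API.

```python
# Illustrative rule-based conflict resolution for a shared digital twin.
from dataclasses import dataclass

@dataclass
class WorldUpdate:
    entity_id: str
    position: tuple
    confidence: float   # 0..1, e.g. a detector score (assumed convention)
    timestamp: float    # seconds since epoch

def resolve(current: WorldUpdate, incoming: WorldUpdate) -> WorldUpdate:
    """Keep the higher-confidence report; tie-break on recency."""
    if incoming.confidence > current.confidence:
        return incoming
    if incoming.confidence == current.confidence and incoming.timestamp > current.timestamp:
        return incoming
    return current

twin = {}  # entity_id -> accepted WorldUpdate

def apply_update(u: WorldUpdate):
    twin[u.entity_id] = resolve(twin[u.entity_id], u) if u.entity_id in twin else u

apply_update(WorldUpdate("box_1", (1.0, 2.0), 0.6, 100.0))  # Robot A's report
apply_update(WorldUpdate("box_1", (1.4, 2.1), 0.9, 101.0))  # Robot B, surer
# The twin now holds Robot B's higher-confidence position for box_1.
```

Consensus-driven alternatives replace `resolve` with a vote or a fused estimate across all reporters rather than a winner-takes-all rule.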
Diagram: Shared Map Merge Timeline
Robot A explores ----------+
                           |
                        [Merge] ----> [Final Global Map]
                           |
Robot B explores ----------+
After meeting at overlap, A and B merge their partial maps.
Result: Unified, higher-confidence world model.
Lab: Shared Mapping and Sensor Fusion
Goal: Simulate two (or more) robots exploring a world, merging partial maps and detections into a shared world model.
Tasks:
- Launch multiple robots in Gazebo/Isaac Sim with:
  - Distinct starting positions and exploration paths.
  - Each robot building a local map (SLAM, partial occupancy grid).
- At rendezvous/overlap:
  - Robots exchange map segments (topics/services).
  - Attempt an automated map merge (using known overlaps, matching features).
- Extend with sensor fusion:
  - Each robot detects/labels objects.
  - Share detections and update the world state accordingly.
- Evaluate:
  - Is the merged map globally consistent?
  - Are objects deduplicated and conflicts resolved?
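For the detection-fusion step, a minimal deduplication pass might look like this: detections of the same class within a distance threshold are greedily merged into one object with a confidence-weighted centroid. The tuple format and the 0.5 m radius are assumptions for illustration.

```python
# Sketch: greedy deduplication of shared object detections from several robots.
import math

def dedupe(detections, radius=0.5):
    """detections: list of (label, x, y, confidence) tuples."""
    merged = []
    # Visit highest-confidence detections first so they anchor the clusters.
    for label, x, y, c in sorted(detections, key=lambda d: -d[3]):
        for m in merged:
            if m["label"] == label and math.hypot(x - m["x"], y - m["y"]) < radius:
                w = m["w"] + c  # accumulate confidence as a fusion weight
                m["x"] = (m["x"] * m["w"] + x * c) / w
                m["y"] = (m["y"] * m["w"] + y * c) / w
                m["w"] = w
                break
        else:
            merged.append({"label": label, "x": x, "y": y, "w": c})
    return merged

# Robots A and B both saw the box near (1, 1); only one fused object remains.
dets = [("box", 1.0, 1.0, 0.9),   # Robot A
        ("box", 1.2, 1.1, 0.6),   # Robot B, same physical box
        ("box", 5.0, 5.0, 0.8)]   # a different box
fused = dedupe(dets)
assert len(fused) == 2
```

This is only a baseline for the lab's evaluation step; a fuller solution would also reconcile class-label disagreements and propagate the fused confidence.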
Deliverables:
- Launch/config with multi-robot SLAM setup.
- Scripts/code for map merge and detection-fusion demo.
- Logs and short report: How does sharing improve performance/robustness vs single-robot mapping?
Summary
A robust, accurate shared world model is the backbone of coordinated decision-making in multi-agent robotics. It underpins the team-level planning, dynamic task allocation, and consistent obstacle negotiation covered in the topics that follow.