Sensors (Basel). 2026 Apr 8;26(8):2285. doi: 10.3390/s26082285.
ABSTRACT
We present a systematic evaluation of the positional and rotational tracking accuracy of the Meta Quest 3 mixed-reality headset, using a TECHMAN TM5-900 collaborative robot (±0.05 mm repeatability) as a robot-driven motion reference. The headset was rigidly attached to the robot's tool flange and subjected to single-axis translational motions (200 mm along X, Y, and Z) and rotational motions (Roll ±65°, Pitch ±85°, and Yaw ±85°). Each test was repeated three times, and the resulting trajectories were averaged to improve statistical robustness. Both data sources were integrated into a single Python-based application running on the same computer. The headset streamed its data via UDP, while the robot, implemented as a ROS2 node, published its data to the same host. This configuration enabled simultaneous acquisition of both streams, ensuring temporal consistency without the need for offline interpolation. All comparisons were performed in a relative reference frame, thereby avoiding the need for absolute hand-eye calibration. Coordinate-frame alignment was achieved using Singular Value Decomposition (SVD)-based rigid-body Procrustes analysis. Over 2848 synchronized samples spanning 151.46 s, the Meta Quest 3 achieved a mean translational RMSE of 0.346 mm (3D RMSE = 0.621 mm) and a mean rotational RMSE of 0.143°, with Pearson correlation coefficients greater than 0.9999 on all axes. These results demonstrate sub-millimeter positional tracking and sub-degree rotational tracking under controlled conditions, supporting the potential of the Meta Quest 3 for precision-oriented mixed-reality applications in industrial and research settings.
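The SVD-based rigid-body Procrustes alignment named in the abstract is a standard technique (the Kabsch solution to the orthogonal Procrustes problem). A minimal sketch of that alignment step in NumPy follows; the function name, array layout, and variable names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def rigid_procrustes(A, B):
    """Estimate the rotation R and translation t that best map point set A
    onto point set B in the least-squares sense (Kabsch algorithm).
    A, B: (3, N) arrays of corresponding 3D points (illustrative layout)."""
    # Center both point clouds on their centroids.
    ca = A.mean(axis=1, keepdims=True)
    cb = B.mean(axis=1, keepdims=True)
    # 3x3 cross-covariance of the centered sets.
    H = (A - ca) @ (B - cb).T
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t
```

With corresponding headset and robot trajectory samples stacked as columns, the recovered R and t express one frame in the other, after which per-axis residuals and RMSE can be computed directly.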
PMID:42076394 | PMC:PMC13119968 | DOI:10.3390/s26082285