ACS Appl Mater Interfaces. 2026 Mar 20. doi: 10.1021/acsami.6c00620. Online ahead of print.
ABSTRACT
The combination of continuous sign language recognition and virtual reality (VR) offers considerable promise for establishing inclusive digital environments and ensuring equitable communication between hearing and hearing-impaired individuals. However, existing sign language recognition systems still face notable limitations: vision-based approaches are susceptible to environmental interference and privacy issues, while current wearable solutions often lack the precision required for accurate gesture capture, thus impeding the creation of fully accessible communication platforms. Herein, we introduce smart gloves equipped with triboelectric-inertial dual-mode sensors that enable real-time hand-shape and motion tracking through integrated liquid metal-based triboelectric strain-sensing fibers and an inertial measurement unit (IMU), enabling high-accuracy sign language interpretation and bidirectional gesture interaction in VR. The developed triboelectric strain-sensing fiber exhibits exceptional wearability (ø500 μm), high sensitivity (13.37 V/%), and excellent linearity (R² = 0.995), complemented by an IMU with millimeter-scale trajectory precision. Combined with a sliding-window-assisted 1D-CNN algorithm, the system achieves recognition accuracies of 99.84% for sign language vocabulary and 95% for continuous sentences. Furthermore, several VR interactive applications, including sign language tutoring, seamless bidirectional dialogue, and gesture-driven motion capture, are demonstrated on a self-developed Unity3D-based platform. This study outlines a viable technical framework for fostering inclusive digital interaction and narrowing the communication divide between hearing and hearing-impaired communities.
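The abstract does not specify the sliding-window or network parameters, so the following is only an illustrative sketch of the two preprocessing and inference primitives such a pipeline relies on: segmenting a continuous multichannel sensor stream (triboelectric fiber voltages plus IMU channels) into fixed-length overlapping windows, and applying a "valid"-mode 1D convolution of the kind a 1D-CNN layer computes per channel. All names, window sizes, and strides here are hypothetical.

```python
import numpy as np

def sliding_windows(signal, window, stride):
    """Segment a continuous stream of shape (T, C) — T time steps,
    C sensor channels — into overlapping windows of shape (N, window, C).
    Each window would be classified independently by the 1D-CNN."""
    T = signal.shape[0]
    starts = range(0, T - window + 1, stride)
    return np.stack([signal[s:s + window] for s in starts])

def conv1d_valid(x, kernel):
    """Single-channel 'valid' 1D cross-correlation, the core operation
    of a 1D-CNN layer: slide the kernel over x and take dot products."""
    L, K = len(x), len(kernel)
    return np.array([np.dot(x[i:i + K], kernel) for i in range(L - K + 1)])

# Hypothetical usage: a 10-step, 1-channel stream cut into windows of 4 with stride 2.
stream = np.arange(10, dtype=float).reshape(10, 1)
windows = sliding_windows(stream, window=4, stride=2)   # shape (4, 4, 1)
feat = conv1d_valid(np.arange(5.0), np.array([1.0, 1.0]))  # [1., 3., 5., 7.]
```

In a full system, each window would pass through stacked convolution, pooling, and dense layers to yield a vocabulary label, and the overlap between windows helps catch gesture boundaries in continuous sentences.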
PMID:41861099 | DOI:10.1021/acsami.6c00620

