TY - JOUR
T1 - Online Hand Gesture Recognition Using Semantically Interpretable Attention Mechanism
AU - Chae, Moon Ju
AU - Han, Sang Hoon
AU - Nam, Hyeok
AU - Park, Jae Hyeon
AU - Cha, Min Hee
AU - Cho, Sung In
N1 - Publisher Copyright:
© 2025 The Authors.
PY - 2025
Y1 - 2025
N2 - Hand gesture recognition (HGR) is a field of action recognition widely used in domains such as robotics, virtual reality (VR), and augmented reality (AR). In this paper, we propose a semantically interpretable attention technique based on the compression and exchange of local and global information for real-time dynamic hand gesture recognition. We focus on data comprising hand landmark coordinates and on the online recognition of multiple gestures within a single sequence. Specifically, our approach uses two paths to learn intraframe and interframe information separately. The learned information is compressed from local and global perspectives, and the compressed information is exchanged through cross-attention. This approach extracts the importance of each hand landmark and each frame, which can be interpreted semantically, and uses this importance in the attention process over the intraframe and interframe information. Finally, the attended intraframe and interframe information is integrated, effectively enabling comprehensive feature extraction of both local and global information. Experimental results demonstrated that the proposed method enables concise and rapid hand gesture recognition: it achieved 95% accuracy in real-time hand gesture recognition on the SHREC’22 dataset and accurately estimated the end of a given gesture. Additionally, running at approximately 294 frames per second (FPS), our model is well suited to real-time systems, offering users an immersive experience. This demonstrates its potential for effective application in real-world environments.
AB - Hand gesture recognition (HGR) is a field of action recognition widely used in domains such as robotics, virtual reality (VR), and augmented reality (AR). In this paper, we propose a semantically interpretable attention technique based on the compression and exchange of local and global information for real-time dynamic hand gesture recognition. We focus on data comprising hand landmark coordinates and on the online recognition of multiple gestures within a single sequence. Specifically, our approach uses two paths to learn intraframe and interframe information separately. The learned information is compressed from local and global perspectives, and the compressed information is exchanged through cross-attention. This approach extracts the importance of each hand landmark and each frame, which can be interpreted semantically, and uses this importance in the attention process over the intraframe and interframe information. Finally, the attended intraframe and interframe information is integrated, effectively enabling comprehensive feature extraction of both local and global information. Experimental results demonstrated that the proposed method enables concise and rapid hand gesture recognition: it achieved 95% accuracy in real-time hand gesture recognition on the SHREC’22 dataset and accurately estimated the end of a given gesture. Additionally, running at approximately 294 frames per second (FPS), our model is well suited to real-time systems, offering users an immersive experience. This demonstrates its potential for effective application in real-world environments.
KW - Hand gesture recognition
KW - cross-attention
KW - intraframe and interframe information
KW - online recognition
UR - http://www.scopus.com/inward/record.url?scp=85218434740&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2025.3540721
DO - 10.1109/ACCESS.2025.3540721
M3 - Article
AN - SCOPUS:85218434740
SN - 2169-3536
VL - 13
SP - 32329
EP - 32340
JO - IEEE Access
JF - IEEE Access
ER -