Title | Date | Abstract | Comment | CodeRepository |
---|---|---|---|---|
Social Gesture Recognition in spHRI: Leveraging Fabric-Based Tactile Sensing on Humanoid Robots | 2025-03-05 | Humans are able to convey different messages using only touch. Equipping robots with the ability to understand social touch adds another modality in which humans and robots can communicate. In this paper, we present a social gesture recognition system using a fabric-based, large-scale tactile sensor integrated onto the arms of a humanoid robot. We built a social gesture dataset using multiple participants and extracted temporal features for classification. By collecting real-world data on a humanoid robot, our system provides valuable insights into human-robot social touch, further advancing the development of spHRI systems for more natural and effective communication. | Accepted to ICRA 25. 8 pages, 8 figures | None |
FABG : End-to-end Imitation Learning for Embodied Affective Human-Robot Interaction | 2025-03-04 | This paper proposes FABG (Facial Affective Behavior Generation), an end-to-end imitation learning system for human-robot interaction, designed to generate natural and fluid facial affective behaviors. In interaction, effectively obtaining high-quality demonstrations remains a challenge. In this work, we develop an immersive virtual reality (VR) demonstration system that allows operators to perceive stereoscopic environments. This system ensures "the operator's visual perception matches the robot's sensory input" and "the operator's actions directly determine the robot's behaviors" - as if the operator replaces the robot in human interaction engagements. We propose a prediction-driven latency compensation strategy to reduce robotic reaction delays and enhance interaction fluency. FABG naturally acquires human interactive behaviors and subconscious motions driven by intuition, eliminating manual behavior scripting. We deploy FABG on a real-world 25-degree-of-freedom (DoF) humanoid robot, validating its effectiveness through four fundamental interaction tasks: expression response, dynamic gaze, foveated attention, and gesture recognition, supported by data collection and policy training. Project website: https://cybergenies.github.io | Project website: https://cybergenies.github.io | None |
One-Shot Gesture Recognition for Underwater Diver-To-Robot Communication | 2025-03-02 | Reliable human-robot communication is essential for underwater human-robot interaction (U-HRI), yet traditional methods such as acoustic signaling and predefined gesture-based models suffer from limitations in adaptability and robustness. In this work, we propose One-Shot Gesture Recognition (OSG), a novel method that enables real-time, pose-based, temporal gesture recognition underwater from a single demonstration, eliminating the need for extensive dataset collection or model retraining. OSG leverages shape-based classification techniques, including Hu moments, Zernike moments, and Fourier descriptors, to robustly recognize gestures in visually-challenging underwater environments. Our system achieves high accuracy on real-world underwater data and operates efficiently on embedded hardware commonly found on autonomous underwater vehicles (AUVs), demonstrating its feasibility for deployment on-board robots. Compared to deep learning approaches, OSG is lightweight, computationally efficient, and highly adaptable, making it ideal for diver-to-robot communication. We evaluate OSG's performance on an augmented gesture dataset and real-world underwater video data, comparing its accuracy against deep learning methods. Our results show OSG's potential to enhance U-HRI by enabling the immediate deployment of user-defined gestures without the constraints of predefined gesture languages. | 17 pages, 8 figures, 2 tables, submitted to IROS2025 | None |
Duo Streamers: A Streaming Gesture Recognition Framework | 2025-02-25 | Gesture recognition in resource-constrained scenarios faces significant challenges in achieving high accuracy and low latency. The streaming gesture recognition framework, Duo Streamers, proposed in this paper, addresses these challenges through a three-stage sparse recognition mechanism, an RNN-lite model with an external hidden state, and specialized training and post-processing pipelines, thereby making innovative progress in real-time performance and lightweight design. Experimental results show that Duo Streamers matches mainstream methods in accuracy metrics, while reducing the real-time factor by approximately 92.3%, i.e., delivering a nearly 13-fold speedup. In addition, the framework shrinks parameter counts to 1/38 (idle state) and 1/9 (busy state) compared to mainstream models. In summary, Duo Streamers not only offers an efficient and practical solution for streaming gesture recognition in resource-constrained devices but also lays a solid foundation for extended applications in multimodal and diverse scenarios. | 10 pages, 4 figures | None |
Online hand gesture recognition using Continual Graph Transformers | 2025-02-20 | Online continuous action recognition has emerged as a critical research area due to its practical implications in real-world applications, such as human-computer interaction, healthcare, and robotics. Among various modalities, skeleton-based approaches have gained significant popularity, demonstrating their effectiveness in capturing 3D temporal data while ensuring robustness to environmental variations. However, most existing works focus on segment-based recognition, making them unsuitable for real-time, continuous recognition scenarios. In this paper, we propose a novel online recognition system designed for real-time skeleton sequence streaming. Our approach leverages a hybrid architecture combining Spatial Graph Convolutional Networks (S-GCN) for spatial feature extraction and a Transformer-based Graph Encoder (TGE) for capturing temporal dependencies across frames. Additionally, we introduce a continual learning mechanism to enhance model adaptability to evolving data distributions, ensuring robust recognition in dynamic environments. We evaluate our method on the SHREC'21 benchmark dataset, demonstrating its superior performance in online hand gesture recognition. Our approach not only achieves state-of-the-art accuracy but also significantly reduces false positive rates, making it a compelling solution for real-time applications. The proposed system can be seamlessly integrated into various domains, including human-robot collaboration and assistive technologies, where natural and intuitive interaction is crucial. | | None |
CrossFi: A Cross Domain Wi-Fi Sensing Framework Based on Siamese Network | 2025-02-14 | In recent years, Wi-Fi sensing has garnered significant attention due to its numerous benefits, such as privacy protection, low cost, and penetration ability. Extensive research has been conducted in this field, focusing on areas such as gesture recognition, people identification, and fall detection. However, many data-driven methods encounter challenges related to domain shift, where the model fails to perform well in environments different from the training data. One major factor contributing to this issue is the limited availability of Wi-Fi sensing datasets, which makes models learn excessive irrelevant information and over-fit to the training set. Unfortunately, collecting large-scale Wi-Fi sensing datasets across diverse scenarios is a challenging task. To address this problem, we propose CrossFi, a siamese network-based approach that excels in both in-domain and cross-domain scenarios, including few-shot and zero-shot scenarios, and even works in the few-shot new-class scenario where the testing set contains new categories. The core component of CrossFi is a sample-similarity calculation network called CSi-Net, which improves the structure of the siamese network by using an attention mechanism to capture similarity information, instead of simply calculating the distance or cosine similarity. Based on it, we develop an extra Weight-Net that can generate a template for each class, so that our CrossFi can work in different scenarios. Experimental results demonstrate that our CrossFi achieves state-of-the-art performance across various scenarios. In the gesture recognition task, our CrossFi achieves an accuracy of 98.17% in the in-domain scenario, 91.72% in the one-shot cross-domain scenario, 64.81% in the zero-shot cross-domain scenario, and 84.75% in the one-shot new-class scenario. The code for our model is publicly available at https://github.com/RS2002/CrossFi. | | [Code Link](https://github.com/RS2002/CrossFi) |
DGSense: A Domain Generalization Framework for Wireless Sensing | 2025-02-12 | Wireless sensing is of great benefit to our daily lives. However, wireless signals are sensitive to the surroundings. Various factors, e.g. environments, locations, and individuals, may induce extra impact on wireless propagation. Such a change can be regarded as a domain, in which the data distribution shifts. A vast majority of the sensing schemes are learning-based. They are dependent on the training domains, resulting in performance degradation in unseen domains. Researchers have proposed various solutions to address this issue. But these solutions leverage either semi-supervised or unsupervised domain adaptation techniques. They still require some data in the target domains and do not perform well in unseen domains. In this paper, we propose a domain generalization framework, DGSense, to eliminate the domain dependence problem in wireless sensing. The framework is a general solution working across diverse sensing tasks and wireless technologies. Once the sensing model is built, it can generalize to unseen domains without any data from the target domain. To achieve the goal, we first increase the diversity of the training set by a virtual data generator, and then extract the domain independent features via episodic training between the main feature extractor and the domain feature extractors. The feature extractors employ a pre-trained Residual Network (ResNet) with an attention mechanism for spatial features, and a 1D Convolutional Neural Network (1DCNN) for temporal features. To demonstrate the effectiveness and generality of DGSense, we evaluated it on WiFi gesture recognition, Millimeter Wave (mmWave) activity recognition, and acoustic fall detection. All the systems exhibited high generalization capability to unseen domains, including new users, locations, and environments, free of new data and retraining. | 15 pages | None |
MoEMba: A Mamba-based Mixture of Experts for High-Density EMG-based Hand Gesture Recognition | 2025-02-09 | High-Density surface Electromyography (HD-sEMG) has emerged as a pivotal resource for Human-Computer Interaction (HCI), offering direct insights into muscle activities and motion intentions. However, a significant challenge in practical implementations of HD-sEMG-based models is the low accuracy of inter-session and inter-subject classification. Variability between sessions can reach up to 40% due to the inherent temporal variability of HD-sEMG signals. Targeting this challenge, the paper introduces the MoEMba framework, a novel approach leveraging Selective State-Space Models (SSMs) to enhance HD-sEMG-based gesture recognition. The MoEMba framework captures temporal dependencies and cross-channel interactions through channel attention techniques. Furthermore, wavelet feature modulation is integrated to capture multi-scale temporal and spatial relations, improving signal representation. Experimental results on the CapgMyo HD-sEMG dataset demonstrate that MoEMba achieves a balanced accuracy of 56.9%, outperforming its state-of-the-art counterparts. The proposed framework's robustness to session-to-session variability and its efficient handling of high-dimensional multivariate time series data highlight its potential for advancing HD-sEMG-powered HCI systems. | | None |
Identity-free Artificial Emotional Intelligence via Micro-Gesture Understanding | 2025-02-05 | In this work, we focus on a special group of human body language -- the micro-gesture (MG), which differs from the range of ordinary illustrative gestures in that they are not intentional behaviors performed to convey information to others, but rather unintentional behaviors driven by inner feelings. This characteristic introduces two novel challenges regarding micro-gestures that are worth rethinking. The first is whether strategies designed for other action recognition are entirely applicable to micro-gestures. The second is whether micro-gestures, as supplementary data, can provide additional insights for emotional understanding. In recognizing micro-gestures, we explored various augmentation strategies that take into account the subtle spatial and brief temporal characteristics of micro-gestures, often accompanied by repetitiveness, to determine more suitable augmentation methods. Considering the significance of temporal domain information for micro-gestures, we introduce a simple and efficient plug-and-play spatiotemporal balancing fusion method. We not only studied our method on the considered micro-gesture dataset but also conducted experiments on mainstream action datasets. The results show that our approach performs well in micro-gesture recognition and on other datasets, achieving state-of-the-art performance compared to previous micro-gesture recognition methods. For emotional understanding based on micro-gestures, we construct complex emotional reasoning scenarios. Our evaluation, conducted with large language models, shows that micro-gestures play a significant and positive role in enhancing comprehensive emotional understanding. The scenarios we developed can be extended to other micro-gesture-based tasks such as deception detection and interviews. We confirm that our new insights contribute to advancing research in micro-gesture and emotional artificial intelligence. | We provide a link to the public release of the code and data in this new version | None |
From Soft Materials to Controllers with NeuroTouch: A Neuromorphic Tactile Sensor for Real-Time Gesture Recognition | 2025-01-31 | This work presents NeuroTouch, an optical-based tactile sensor that combines a highly deformable dome-shaped soft material with an integrated neuromorphic camera, leveraging frame-based and dynamic vision for gesture detection. Our approach transforms an elastic body into a rich and nuanced interactive controller by tracking markers printed on its surface with event-based methods and harnessing their trajectories through RANSAC-based techniques. To benchmark our framework, we have created a 25 min gesture dataset, which we make publicly available to foster research in this area. Achieving over 91% accuracy in gesture classification, a 3.41 mm finger localization distance error, and a 0.96 mm gesture intensity error, our real-time, lightweight, and low-latency pipeline holds promise for applications in video games, augmented/virtual reality, and accessible devices. This research lays the groundwork for advancements in gesture detection for vision-based soft-material input technologies. Dataset: Coming Soon, Video: Coming Soon | | None |
Spatial Adaptation Layer: Interpretable Domain Adaptation For Biosignal Sensor Array Applications | 2025-01-29 | Machine learning offers promising methods for processing signals recorded with wearable devices such as surface electromyography (sEMG) and electroencephalography (EEG). However, in these applications, despite high within-session performance, inter-session performance is hindered by electrode shift, a known issue across modalities. Existing solutions often require large and expensive datasets and/or lack robustness and interpretability. Thus, we propose the Spatial Adaptation Layer (SAL), which can be applied to any biosignal array model and learns a parametrized affine transformation at the input between two recording sessions. We also introduce learnable baseline normalization (LBN) to reduce baseline fluctuations. Tested on two HD-sEMG gesture recognition datasets, SAL and LBN outperformed standard fine-tuning on regular arrays, achieving competitive performance even with a logistic regressor, with orders of magnitude fewer, physically interpretable parameters. Our ablation study showed that forearm circumferential translations account for the majority of performance improvements. | IJCNN (submitted), 6 pages | None |
A Low-Cost, High-Precision Human-Machine Interaction Solution Based on Multi-Coil Wireless Charging Pads | 2025-01-27 | Wireless charging pads are common, yet their functionality is mainly restricted to charging. Existing gesture recognition techniques, such as those based on machine vision and WiFi, have drawbacks like high costs and poor precision. This paper presents a new human-machine interaction solution using multi-coil wireless charging pads. The proposed approach leverages the pads' existing modules without additional wearable sensors. It determines gestures by monitoring current and power changes in different coils. The data processing includes noise removal, sorting, high-pass filtering, and slicing. A Bayesian network and particle filtering are employed for motion tracking. Through experiments, this solution proves to have wide applications, high recognition accuracy, and low cost. It can effectively identify diverse gestures, increasing the value of wireless charging pads. It outperforms traditional methods, with a 0.73 improvement in recognition accuracy and better environmental adaptability. | ISPA | None |
EgoHand: Ego-centric Hand Pose Estimation and Gesture Recognition with Head-mounted Millimeter-wave Radar and IMUs | 2025-01-23 | Recent advanced Virtual Reality (VR) headsets, such as the Apple Vision Pro, employ bottom-facing cameras to detect hand gestures and inputs, which offers users significant convenience in VR interactions. However, these bottom-facing cameras can sometimes be inconvenient and pose a risk of unintentionally exposing sensitive information, such as private body parts or personal surroundings. To mitigate these issues, we introduce EgoHand. This system provides an alternative solution by integrating millimeter-wave radar and IMUs for hand gesture recognition, thereby offering users an additional option for gesture interaction that enhances privacy protection. To accurately recognize hand gestures, we devise a two-stage skeleton-based gesture recognition scheme. In the first stage, a novel end-to-end Transformer architecture is employed to estimate the coordinates of hand joints. Subsequently, these estimated joint coordinates are utilized for gesture recognition. Extensive experiments involving 10 subjects show that EgoHand can detect hand gestures with 90.8% accuracy. Furthermore, EgoHand demonstrates robust performance across a variety of cross-domain tests, including different users, dominant hands, body postures, and scenes. | 10 pages | None |
DSTSA-GCN: Advancing Skeleton-Based Gesture Recognition with Semantic-Aware Spatio-Temporal Topology Modeling | 2025-01-21 | Graph convolutional networks (GCNs) have emerged as a powerful tool for skeleton-based action and gesture recognition, thanks to their ability to model spatial and temporal dependencies in skeleton data. However, existing GCN-based methods face critical limitations: (1) they lack effective spatio-temporal topology modeling that captures dynamic variations in skeletal motion, and (2) they struggle to model multiscale structural relationships beyond local joint connectivity. To address these issues, we propose a novel framework called Dynamic Spatial-Temporal Semantic Awareness Graph Convolutional Network (DSTSA-GCN). DSTSA-GCN introduces three key modules: Group Channel-wise Graph Convolution (GC-GC), Group Temporal-wise Graph Convolution (GT-GC), and Multi-Scale Temporal Convolution (MS-TCN). GC-GC and GT-GC operate in parallel to independently model channel-specific and frame-specific correlations, enabling robust topology learning that accounts for temporal variations. Additionally, both modules employ a grouping strategy to adaptively capture multiscale structural relationships. Complementing this, MS-TCN enhances temporal modeling through group-wise temporal convolutions with diverse receptive fields. Extensive experiments demonstrate that DSTSA-GCN significantly improves the topology modeling capabilities of GCNs, achieving state-of-the-art performance on benchmark datasets for gesture and action recognition, including SHREC17 Track, DHG-14/28, NTU-RGB+D, and NTU-RGB+D-120. | Submitted to Neurocomputing | None |
Survey on Hand Gesture Recognition from Visual Input | 2025-01-21 | Hand gesture recognition has become an important research area, driven by the growing demand for human-computer interaction in fields such as sign language recognition, virtual and augmented reality, and robotics. Despite the rapid growth of the field, there are few surveys that comprehensively cover recent research developments, available solutions, and benchmark datasets. This survey addresses this gap by examining the latest advancements in hand gesture and 3D hand pose recognition from various types of camera input data including RGB images, depth images, and videos from monocular or multiview cameras, examining the differing methodological requirements of each approach. Furthermore, an overview of widely used datasets is provided, detailing their main characteristics and application domains. Finally, open challenges such as achieving robust recognition in real-world environments, handling occlusions, ensuring generalization across diverse users, and addressing computational efficiency for real-time applications are highlighted to guide future research directions. By synthesizing the objectives, methodologies, and applications of recent studies, this survey offers valuable insights into current trends, challenges, and opportunities for future research in human hand gesture recognition. | | None |
Assessing the Acceptance of a Mid-Air Gesture Syntax for Smart Space Interaction: An Empirical Study | 2025-01-08 | This article explores the use of a location-aware mid-air gesture-based command triplet syntax to interact with a smart space. The syntax, inspired by human language, is built as a vocative case with an imperative structure. In a sentence like 'Light, please switch on', the object being activated is invoked via making a gesture that mimics its initial letter/acronym (vocative, coincident with the sentence's elliptical subject). A geometrical or directional gesture then identifies the action (imperative verb) and may include an object feature or a second object with which to network (complement), which is also represented by the initial or acronym letter. Technically, an interpreter relying on a trainable multidevice gesture recognition layer makes the pair/triplet syntax decoding possible. The recognition layer works on acceleration and position input signals from graspable (smartphone) and free-hand devices (smartwatch and external depth cameras), as well as a specific compiler. On a specific deployment at a Living Lab facility, the syntax has been instantiated via the use of a lexicon derived from English (with respect to the initial letters and acronyms). A within-subject analysis with twelve users has enabled the analysis of the syntax acceptance (in terms of usability, gesture agreement for actions over objects, and social acceptance) and technology preference of the gesture syntax within its three device implementations (graspable, wearable, and device-free ones). Participants express consensus regarding the simplicity of learning the syntax and its potential effectiveness in managing smart resources. Socially, participants favoured the Watch for outdoor activities and the Phone for home and work settings, underscoring the importance of social context in technology design. The Phone emerged as the preferred option for gesture recognition due to its efficiency and familiarity. | | None |
Extraction Of Cumulative Blobs From Dynamic Gestures | 2025-01-07 | Gesture recognition is a perceptual user interface based on computer vision (CV) technology that allows a computer to interpret human motions as commands, letting users communicate with a computer without the use of their hands and thus making the mouse and keyboard superfluous. Gesture recognition's main weakness is lighting conditions: because gesture control is based on computer vision, it relies heavily on cameras, which interpret gestures in 2D and 3D, so the extracted information can vary depending on the light source, and such systems cannot work in a dark environment. A simple night-vision camera can be used for motion capture, as it emits infrared light that is not visible to humans but can be clearly seen by a camera with no infrared filter, which largely overcomes this limitation. The video stream from the camera is fed into a Raspberry Pi running a Python program with the OpenCV module, which detects, isolates, and tracks the path of the dynamic gesture; a machine learning algorithm then recognizes the drawn pattern and controls the GPIOs of the Raspberry Pi accordingly to perform some activities. | | None |
KNN-MMD: Cross Domain Wireless Sensing via Local Distribution Alignment | 2025-01-07 | Wireless sensing has recently found widespread applications in diverse environments, including homes, offices, and public spaces. By analyzing patterns in channel state information (CSI), it is possible to infer human actions for tasks such as person identification, gesture recognition, and fall detection. However, CSI is highly sensitive to environmental changes, where even minor alterations can significantly distort the CSI patterns. This sensitivity often leads to performance degradation or outright failure when applying wireless sensing models trained in one environment to another. To address this challenge, Domain Alignment (DAL) has been widely adopted for cross-domain classification tasks, as it focuses on aligning the global distributions of the source and target domains in feature space. Despite its popularity, DAL often neglects inter-category relationships, which can lead to misalignment between categories across domains, even when global alignment is achieved. To overcome these limitations, we propose K-Nearest Neighbors Maximum Mean Discrepancy (KNN-MMD), a novel few-shot method for cross-domain wireless sensing. Our approach begins by constructing a help set using KNN from the target domain, enabling local alignment between the source and target domains within each category using MMD. Additionally, we address a key instability issue commonly observed in cross-domain methods, where model performance fluctuates sharply between epochs. Further, most existing methods struggle to determine an optimal stopping point during training due to the absence of labeled data from the target domain. Our method resolves this by excluding the support set from the target domain during training and employing it as a validation set to determine the stopping criterion. | | None |
Multiscaled Multi-Head Attention-based Video Transformer Network for Hand Gesture Recognition | 2025-01-01 | Dynamic gesture recognition is one of the challenging research areas due to variations in pose, size, and shape of the signer's hand. In this letter, Multiscaled Multi-Head Attention Video Transformer Network (MsMHA-VTN) for dynamic hand gesture recognition is proposed. A pyramidal hierarchy of multiscale features is extracted using the transformer multiscaled head attention model. The proposed model employs different attention dimensions for each head of the transformer which enables it to provide attention at the multiscale level. Further, in addition to single modality, recognition performance using multiple modalities is examined. Extensive experiments demonstrate the superior performance of the proposed MsMHA-VTN with an overall accuracy of 88.22% and 99.10% on NVGesture and Briareo datasets, respectively. | | None |
Computer Vision-Driven Gesture Recognition: Toward Natural and Intuitive Human-Computer Interaction | 2024-12-24 | This study mainly explores the application of natural gesture recognition based on computer vision in human-computer interaction, aiming to improve the fluency and naturalness of human-computer interaction through gesture recognition technology. In the fields of virtual reality, augmented reality and smart home, traditional input methods have gradually failed to meet the needs of users for interactive experience. As an intuitive and convenient interaction method, gestures have received more and more attention. This paper proposes a gesture recognition method based on a three-dimensional hand skeleton model. By simulating the three-dimensional spatial distribution of hand joints, a simplified hand skeleton structure is constructed. By connecting the palm and each finger joint, a dynamic and static gesture model of the hand is formed, which further improves the accuracy and efficiency of gesture recognition. Experimental results show that this method can effectively recognize various gestures and maintain high recognition accuracy and real-time response capabilities in different environments. In addition, combined with multimodal technologies such as eye tracking, the intelligence level of the gesture recognition system can be further improved, bringing a richer and more intuitive user experience. In the future, with the continuous development of computer vision, deep learning and multimodal interaction technology, natural interaction based on gestures will play an important role in a wider range of application scenarios and promote revolutionary progress in human-computer interaction. | | None |
Hand1000: Generating Realistic Hands from Text with Only 1,000 Images | 2024-12-21 | Text-to-image generation models have achieved remarkable advancements in recent years, aiming to produce realistic images from textual descriptions. However, these models often struggle with generating anatomically accurate representations of human hands. The resulting images frequently exhibit issues such as incorrect numbers of fingers, unnatural twisting or interlacing of fingers, or blurred and indistinct hands. These issues stem from the inherent complexity of hand structures and the difficulty in aligning textual descriptions with precise visual depictions of hands. To address these challenges, we propose a novel approach named Hand1000 that enables the generation of realistic hand images with target gesture using only 1,000 training samples. The training of Hand1000 is divided into three stages with the first stage aiming to enhance the model's understanding of hand anatomy by using a pre-trained hand gesture recognition model to extract gesture representation. The second stage further optimizes text embedding by incorporating the extracted hand gesture representation, to improve alignment between the textual descriptions and the generated hand images. The third stage utilizes the optimized embedding to fine-tune the Stable Diffusion model to generate realistic hand images. In addition, we construct the first publicly available dataset specifically designed for text-to-hand image generation. Based on the existing hand gesture recognition dataset, we adopt advanced image captioning models and LLaMA3 to generate high-quality textual descriptions enriched with detailed gesture information. Extensive experiments demonstrate that Hand1000 significantly outperforms existing models in producing anatomically correct hand images while faithfully representing other details in the text, such as faces, clothing, and colors. | Accepted by AAAI 2025. Project page https://haozhuo-zhang.github.io/Hand1000-project-page/ | Code Link |
iRadar: Synthesizing Millimeter-Waves from Wearable Inertial Inputs for Human Gesture Sensing | 2024-12-20 | Millimeter-wave (mmWave) radar-based gesture recognition is gaining attention as a key technology to enable intuitive human-machine interaction. Nevertheless, the significant challenge lies in obtaining large-scale, high-quality mmWave gesture datasets. To tackle this problem, we present iRadar, a novel cross-modal gesture recognition framework that employs Inertial Measurement Unit (IMU) data to synthesize the radar signals generated by the corresponding gestures. The key idea is to exploit the IMU signals, which are commonly available in contemporary wearable devices, to synthesize the radar signals that would be produced if the same gesture was performed in front of a mmWave radar. However, several technical obstacles must be overcome due to the differences between mmWave and IMU signals, the noisy gesture sensing of mmWave radar, and the dynamics of human gestures. Firstly, we develop a method for processing IMU and mmWave data to extract critical gesture features. Secondly, we propose a diffusion-based IMU-to-radar translation model that accurately transforms IMU data into mmWave data. Lastly, we devise a novel transformer model to enhance gesture recognition performance. We thoroughly evaluate iRadar, involving 18 gestures and 30 subjects in three scenarios, using five wearable devices. Experimental results demonstrate that iRadar consistently achieves 99.82% Top-3 accuracy across diverse scenarios. | | None |
Designing Semi-Structured Pruning of Graph Convolutional Networks for Skeleton-based Recognition | 2024-12-16 | Deep neural networks (DNNs) are nowadays witnessing a major success in solving many pattern recognition tasks including skeleton-based classification. The deployment of DNNs on edge-devices, endowed with limited time and memory resources, requires designing lightweight and efficient variants of these networks. Pruning is one of the lightweight network design techniques that operate by removing unnecessary network parts, in a structured or an unstructured manner, including individual weights, neurons or even entire channels. Nonetheless, structured and unstructured pruning methods, when applied separately, may either be inefficient or ineffective. In this paper, we devise a novel semi-structured method that discards the downsides of structured and unstructured pruning while gathering their upsides to some extent. The proposed solution is based on a differentiable cascaded parametrization which combines (i) a band-stop mechanism that prunes weights depending on their magnitudes, (ii) a weight-sharing parametrization that prunes connections either individually or group-wise, and (iii) a gating mechanism which arbitrates between different group-wise and entry-wise pruning. All these cascaded parametrizations are built upon a common latent tensor which is trained end-to-end by minimizing a classification loss and a surrogate tensor rank regularizer. Extensive experiments, conducted on the challenging tasks of action and hand-gesture recognition, show the clear advantage of our proposed semi-structured pruning approach against both structured and unstructured pruning, when taken separately, as well as the related work. | | None |
You Can Wash Hands Better: Accurate Daily Handwashing Assessment with Smartwatches | 2024-12-13 | Hand hygiene is among the most effective daily practices for preventing infectious diseases such as influenza, malaria, and skin infections. While professional guidelines emphasize proper handwashing to reduce the risk of viral infections, surveys reveal that adherence to these recommendations remains low. To address this gap, we propose UWash, a wearable solution leveraging smartwatches to evaluate handwashing procedures, aiming to raise awareness and cultivate high-quality handwashing habits. We frame the task of handwashing assessment as an action segmentation problem, similar to those in computer vision, and introduce a simple yet efficient two-stream UNet-like network to achieve this goal. Experiments involving 51 subjects demonstrate that UWash achieves 92.27% accuracy in handwashing gesture recognition, an error of <0.5 seconds in onset/offset detection, and an error of <5 points in gesture scoring under user-dependent settings. The system also performs robustly in user-independent and user-independent-location-independent evaluations. Remarkably, UWash maintains high performance in real-world tests, including evaluations with 10 random passersby at a hospital 9 months later and 10 passersby in an in-the-wild test conducted 2 years later. UWash is the first system to score handwashing quality based on gesture sequences, offering actionable guidance for improving daily hand hygiene. The code and dataset are publicly available at https://github.com/aiotgroup/UWash. | Under review. 13 pages, 12 figures, 7 tables | [Code Link](https://github.com/aiotgroup/UWash) |
Automated Patient Positioning with Learned 3D Hand Gestures | 2024-12-09 | Positioning patients for scanning and interventional procedures is a critical task that requires high precision and accuracy. The conventional workflow involves manually adjusting the patient support to align the center of the target body part with the laser projector or other guiding devices. This process is not only time-consuming but also prone to inaccuracies. In this work, we propose an automated patient positioning system that utilizes a camera to detect specific hand gestures from technicians, allowing users to indicate the target patient region to the system and initiate automated positioning. Our approach relies on a novel multi-stage pipeline to recognize and interpret the technicians' gestures, translating them into precise motions of medical devices. We evaluate our proposed pipeline during actual MRI scanning procedures, using RGB-Depth cameras to capture the process. Results show that our system achieves accurate and precise patient positioning with minimal technician intervention. Furthermore, we validate our method on HaGRID, a large-scale hand gesture dataset, demonstrating its effectiveness in hand detection and gesture recognition. | Accepted by WACV2025 | None |
Exploring the Impact of Synthetic Data on Human Gesture Recognition Tasks Using GANs | 2024-12-09 | In the evolving domain of Human Activity Recognition (HAR) using Internet of Things (IoT) devices, there is an emerging interest in employing Deep Generative Models (DGMs) to address data scarcity, enhance data quality, and improve classification metric scores. Among these types of models, Generative Adversarial Networks (GANs) have arisen as a powerful tool for generating synthetic data that mimic real-world scenarios with high fidelity. However, Human Gesture Recognition (HGR), a subset of HAR, particularly in healthcare applications, using time series data such as allergic gestures, remains highly unexplored. In this paper, we examine and evaluate the performance of two GANs in the generation of synthetic gesture motion data that compose a part of an open-source benchmark dataset. The data is related to the disease identification domain and healthcare, specifically to allergic rhinitis. We also focus on these AI models' performance in terms of fidelity, diversity, and privacy. Furthermore, we examine whether the synthetic data can substitute for real data in training scenarios and how well models trained on synthetic data generalize to the allergic rhinitis gestures. In our work, these gestures are related to 6-axes accelerometer and gyroscope data, serving as multi-variate time series instances, and retrieved from smart wearable devices. To the best of our knowledge, this study is the first to explore the feasibility of synthesizing motion gestures for allergic rhinitis from wearable IoT device data using Generative Adversarial Networks (GANs) and testing their impact on the generalization of gesture recognition systems. It is worth noting that, even if our method has been applied to a specific category of gestures, it is designed to be generalized and can be deployed also to other motion data in the HGR domain. | 8 pages, 5 figures, 20th International Conference on Distributed Computing in Smart Systems and the Internet of Things (DCOSS-IoT), 2024 | None |
Mining Limited Data Sufficiently: A BERT-inspired Approach for CSI Time Series Application in Wireless Communication and Sensing | 2024-12-09 | Channel State Information (CSI) is the cornerstone in both wireless communication and sensing systems. In wireless communication systems, CSI provides essential insights into channel conditions, enabling system optimizations like channel compensation and dynamic resource allocation. However, the high computational complexity of CSI estimation algorithms necessitates the development of fast deep learning methods for CSI prediction. In wireless sensing systems, CSI can be leveraged to infer environmental changes, facilitating various functions, including gesture recognition and people identification. Deep learning methods have demonstrated significant advantages over model-based approaches in these fine-grained CSI classification tasks, particularly when classes vary across different scenarios. However, a major challenge in training deep learning networks for wireless systems is the limited availability of data, further complicated by the diverse formats of many public datasets, which hinder integration. Additionally, collecting CSI data can be resource-intensive, requiring considerable time and manpower. To address these challenges, we propose CSI-BERT2 for CSI prediction and classification tasks, effectively utilizing limited data through a pre-training and fine-tuning approach. Building on CSI-BERT1, we enhance the model architecture by introducing an Adaptive Re-Weighting Layer (ARL) and a Multi-Layer Perceptron (MLP) to better capture sub-carrier and timestamp information, effectively addressing the permutation-invariance problem. Furthermore, we propose a Mask Prediction Model (MPM) fine-tuning method to improve the model's adaptability for CSI prediction tasks. Experimental results demonstrate that CSI-BERT2 achieves state-of-the-art performance across all tasks. | | None |
Towards Open-Set Myoelectric Gesture Recognition via Dual-Perspective Inconsistency Learning | 2024-12-09 | Gesture recognition based on surface electromyography (sEMG) has achieved significant progress in human-machine interaction (HMI), especially in prosthetic control and movement rehabilitation. However, accurately recognizing predefined gestures within a closed set is still inadequate in practice; a robust open-set system needs to effectively reject unknown gestures while correctly classifying known ones, which is rarely explored in the field of myoelectric gesture recognition. To handle this challenge, we first report a significant distinction in prediction inconsistency discovered for unknown classes, which arises from different perspectives and can substantially enhance open-set recognition performance. Based on this insight, we propose a novel dual-perspective inconsistency learning approach, PredIN, to explicitly magnify the prediction inconsistency by enhancing the inconsistency of class feature distribution within different perspectives. Specifically, PredIN maximizes the class feature distribution inconsistency among the dual perspectives to enhance their differences. Meanwhile, it optimizes inter-class separability within an individual perspective to maintain individual performance. Comprehensive experiments on various benchmark datasets demonstrate that the PredIN outperforms state-of-the-art methods by a clear margin. Our proposed method simultaneously achieves accurate closed-set classification for predefined gestures and effective rejection for unknown gestures, exhibiting its efficacy and superiority in open-set gesture recognition based on sEMG. | Under review | None |
Human-Machine Interfaces for Subsea Telerobotics: From Soda-straw to Natural Language Interactions | 2024-12-02 | This review explores the evolution of human-machine interfaces (HMIs) for subsea telerobotics, tracing the transition from traditional first-person "soda-straw" consoles (narrow field-of-view camera feed) to advanced interfaces powered by gesture recognition, virtual reality, and natural language models. First, we discuss various forms of subsea telerobotics applications, current state-of-the-art (SOTA) interface systems, and the challenges they face in robust underwater sensing, real-time estimation, and low-latency communication. Through this analysis, we highlight how advanced HMIs facilitate intuitive interactions between human operators and robots to overcome these challenges. A detailed review then categorizes and evaluates the cutting-edge HMI systems based on their offered features from both human perspectives (e.g., enhancing operator control and situational awareness) and machine perspectives (e.g., improving safety, mission accuracy, and task efficiency). Moreover, we examine the literature on bidirectional interaction and intelligent collaboration in terms of sensory feedback and intuitive control mechanisms for both physical and virtual interfaces. The paper concludes by identifying critical challenges, open research questions, and future directions, emphasizing the need for multidisciplinary collaboration in subsea telerobotics. | 37 pages including 22 pages of main text | None |
ConvMixFormer- A Resource-efficient Convolution Mixer for Transformer-based Dynamic Hand Gesture Recognition | 2024-12-02 | Transformer models have demonstrated remarkable success in many domains such as natural language processing (NLP) and computer vision. With the growing interest in transformer-based architectures, they are now utilized for gesture recognition. Accordingly, we explore and devise a novel ConvMixFormer architecture for dynamic hand gestures. Transformers use quadratic scaling of the attention features with the sequential data, due to which these models are computationally complex and heavy. We have considered this drawback of the transformer and designed a resource-efficient model that replaces the self-attention in the transformer with a simple convolutional layer-based token mixer. The computational cost and the parameters used for the convolution-based mixer are comparatively less than those of quadratic self-attention. The convolution mixer helps the model capture the local spatial features that self-attention struggles to capture due to its sequential processing nature. Further, an efficient gate mechanism is employed instead of a conventional feed-forward network in the transformer to help the model control the flow of features within different stages of the proposed model. This design uses fewer learnable parameters, nearly half those of the vanilla transformer, which helps in fast and efficient training. The proposed method is evaluated on the NVidia Dynamic Hand Gesture and Briareo datasets and our model has achieved state-of-the-art results on single and multimodal inputs. We have also shown the parameter efficiency of the proposed ConvMixFormer model compared to other methods. The source code is available at https://github.com/mallikagarg/ConvMixFormer. | | [Code Link](https://github.com/mallikagarg/ConvMixFormer) |
HaGRIDv2: 1M Images for Static and Dynamic Hand Gesture Recognition | 2024-12-02 | This paper proposes the second version of the widespread Hand Gesture Recognition dataset HaGRID -- HaGRIDv2. We cover 15 new gestures with conversation and control functions, including two-handed ones. Building on the foundational concepts proposed by HaGRID's authors, we implemented the dynamic gesture recognition algorithm and further enhanced it by adding three new groups of manipulation gestures. The "no gesture" class was diversified by adding samples of natural hand movements, which allowed us to minimize false positives by 6 times. Combining the extra samples with HaGRID, the resulting version outperforms the original in pre-training models for gesture-related tasks. Besides, we achieved the best generalization ability among gesture and hand detection datasets. In addition, the second version enhances the quality of the gestures generated by the diffusion model. HaGRIDv2, pre-trained models, and a dynamic gesture recognition algorithm are publicly available. | hand gesture recognition, dataset, hgr system, large-scale database | None |
Micro Hand Gesture Recognition System Using Ultrasonic Active Sensing | 2024-11-29 | In this paper, we propose a micro hand gesture recognition system and methods using ultrasonic active sensing. This system uses micro dynamic hand gestures for recognition to achieve human-computer interaction (HCI). The implemented system, called hand-ultrasonic gesture (HUG), consists of ultrasonic active sensing, pulsed radar signal processing, and time-sequence pattern recognition by machine learning. We adopt lower frequency (300 kHz) ultrasonic active sensing to obtain high resolution range-Doppler image features. Using high quality sequential range-Doppler features, we propose a state-transition-based hidden Markov model for gesture recognition. This method achieves a recognition accuracy of nearly 90% by using symbolized range-Doppler features and significantly reduces the computational complexity and power consumption. Furthermore, to achieve higher classification accuracy, we utilize an end-to-end neural network model and obtain a recognition accuracy of 96.32%. In addition to offline analysis, a real-time prototype is released to verify our method's potential for application in the real world. | | None |
Robust Dynamic Gesture Recognition at Ultra-Long Distances | 2024-11-27 | Dynamic hand gestures play a crucial role in conveying nonverbal information for Human-Robot Interaction (HRI), eliminating the need for complex interfaces. Current models for dynamic gesture recognition suffer from limitations in effective recognition range, restricting their application to close proximity scenarios. In this letter, we present a novel approach to recognizing dynamic gestures in an ultra-range distance of up to 28 meters, enabling natural, directive communication for guiding robots in both indoor and outdoor environments. Our proposed SlowFast-Transformer (SFT) model effectively integrates the SlowFast architecture with Transformer layers to efficiently process and classify gesture sequences captured at ultra-range distances, overcoming challenges of low resolution and environmental noise. We further introduce a distance-weighted loss function shown to enhance learning and improve model robustness at varying distances. Our model demonstrates significant performance improvement over state-of-the-art gesture recognition frameworks, achieving a recognition accuracy of 95.1% on a diverse dataset with challenging ultra-range gestures. This enables robots to react appropriately to human commands from a far distance, providing an essential enhancement in HRI, especially in scenarios requiring seamless and natural interaction. | | None |
FORS-EMG: A Novel sEMG Dataset for Hand Gesture Recognition Across Multiple Forearm Orientations | 2024-11-26 | Surface electromyography (sEMG) signals hold significant potential for gesture recognition and robust prosthetic hand development. However, sEMG signals are affected by various physiological and dynamic factors, including forearm orientation, electrode displacement, and limb position. Most existing sEMG datasets lack these dynamic considerations. This study introduces a novel multichannel sEMG dataset to evaluate commonly used hand gestures across three distinct forearm orientations. The dataset was collected from nineteen able-bodied subjects performing twelve hand gestures in three forearm orientations: supination, rest, and pronation. Eight MFI EMG electrodes were strategically placed at the elbow and mid-forearm to record high-quality EMG signals. Signal quality was validated through Signal-to-Noise Ratio (SNR) and Signal-to-Motion artifact ratio (SMR) metrics. Hand gesture classification performance across forearm orientations was evaluated using machine learning classifiers, including LDA, SVM, and KNN, alongside five feature extraction methods: TDD, TSD, FTDD, AR-RMS, and SNTDF. Furthermore, deep learning models such as 1D CNN, RNN, LSTM, and hybrid architectures were employed for a comprehensive analysis. Notably, the LDA classifier achieved the highest F1 score of 88.58% with the SNTDF feature set when trained on hand gesture data of the resting orientation and tested across gesture data of all orientations. The promising results from extensive analyses underscore the proposed dataset's potential as a benchmark for advancing gesture recognition technologies, clinical sEMG research, and human-computer interaction applications. The dataset is publicly available in MATLAB format. Dataset: https://www.kaggle.com/datasets/ummerummanchaity/fors-emg-a-novel-semg-dataset | 13 pages, 10 figures | None |
Real-Time Multimodal Signal Processing for HRI in RoboCup: Understanding a Human Referee | 2024-11-26 | Advancing human-robot communication is crucial for autonomous systems operating in dynamic environments, where accurate real-time interpretation of human signals is essential. RoboCup provides a compelling scenario for testing these capabilities, requiring robots to understand referee gestures and whistle with minimal network reliance. Using the NAO robot platform, this study implements a two-stage pipeline for gesture recognition through keypoint extraction and classification, alongside continuous convolutional neural networks (CCNNs) for efficient whistle detection. The proposed approach enhances real-time human-robot interaction in a competitive setting like RoboCup, offering some tools to advance the development of autonomous systems capable of cooperating with humans. | 11th Italian Workshop on Artificial Intelligence and Robotics (AIRO 2024), Published in CEUR Workshop Proceedings AI*IA Series | None |
Machine Learning-based sEMG Signal Classification for Hand Gesture Recognition | 2024-11-23 | EMG-based hand gesture recognition uses electromyographic (EMG) signals to interpret and classify hand movements by analyzing electrical activity generated by muscle contractions. It has wide applications in prosthesis control, rehabilitation training, and human-computer interaction. Using electrodes placed on the skin, the EMG sensor captures muscle signals, which are processed and filtered to reduce noise. Numerous feature extraction and machine learning algorithms have been proposed to extract and classify muscle signals to distinguish between various hand gestures. This paper aims to benchmark the performance of EMG-based hand gesture recognition using novel feature extraction methods, namely, fused time-domain descriptors, temporal-spatial descriptors, and wavelet transform-based features, combined with the state-of-the-art machine and deep learning models. Experimental investigations on the Grabmyo dataset demonstrate that the 1D Dilated CNN performed the best with an accuracy of | IEEE BIBM 2024 | None |
A Diffusion-based Data Generator for Training Object Recognition Models in Ultra-Range Distance | 2024-11-23 | Object recognition, commonly performed by a camera, is a fundamental requirement for robots to complete complex tasks. Some tasks require recognizing objects far from the robot's camera. A challenging example is Ultra-Range Gesture Recognition (URGR) in human-robot interaction where the user exhibits directive gestures at a distance of up to 25 m from the robot. However, training a model to recognize hardly visible objects located in ultra-range requires an exhaustive collection of a significant amount of labeled samples. The generation of synthetic training datasets is a recent solution to the lack of real-world data, while unable to properly replicate the realistic visual characteristics of distant objects in images. In this letter, we propose the Diffusion in Ultra-Range (DUR) framework based on a Diffusion model to generate labeled images of distant objects in various scenes. The DUR generator receives a desired distance and class (e.g., gesture) and outputs a corresponding synthetic image. We apply DUR to train a URGR model with directive gestures in which fine details of the gesturing hand are challenging to distinguish. DUR is compared to other types of generative models showcasing superiority both in fidelity and in recognition success rate when training a URGR model. More importantly, training a DUR model on a limited amount of real data and then using it to generate synthetic data for training a URGR model outperforms directly training the URGR model on real data. The synthetic-based URGR model is also demonstrated in gesture-based direction of a ground robot. | | None |
Optimizing Gesture Recognition for Seamless UI Interaction Using Convolutional Neural Networks | 2024-11-23 | ShowThis study introduces an advanced gesture recognition and user interface (UI) interaction system powered by deep learning, highlighting its transformative impact on UI design and functionality. By utilizing optimized convolutional neural networks (CNNs), the system achieves high-precision gesture recognition, significantly improving user interactions with digital interfaces. The process begins with preprocessing collected gesture images to meet CNN input requirements, followed by sophisticated feature extraction and classification techniques. To address class imbalance, we employ Focal Loss as the loss function, ensuring robust model performance across diverse gesture types. Experimental results demonstrate notable improvements in model metrics, with the Area Under the Curve (AUC) and Recall metrics improving as we transition from simpler models like VGG16 to more advanced ones such as DenseNet. Our enhanced model achieves strong AUC and Recall values, outperforming standard benchmarks. Notably, the system's ability to support real-time and efficient gesture recognition paves the way for a new era in UI design, where intuitive user gestures can be seamlessly integrated into everyday technology use, reducing the learning curve and enhancing user satisfaction. The implications of this development extend beyond technical performance to fundamentally reshape user-technology interactions, underscoring the critical role of gesture-based interfaces in the next generation of UI development. Such advancements promise to significantly enhance smart life experiences, positioning gesture recognition as a key driver in the evolution of user-centric interfaces. |
None | |
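As an illustration of the class-imbalance handling described in the entry above, the sketch below shows a standard focal loss in PyTorch, the kind of loss that can be paired with a CNN backbone for gesture classification; the `gamma`/`alpha` values and the backbone it would attach to are assumptions, not the authors' implementation.

```python
# Hedged sketch: a generic focal loss for imbalanced gesture classes (PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class FocalLoss(nn.Module):
    def __init__(self, gamma: float = 2.0, alpha: float = 0.25):
        super().__init__()
        self.gamma = gamma   # down-weights easy, well-classified samples
        self.alpha = alpha   # class-agnostic balancing factor (simplification)

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        ce = F.cross_entropy(logits, targets, reduction="none")  # -log p_t
        pt = torch.exp(-ce)                                      # p_t
        return (self.alpha * (1.0 - pt) ** self.gamma * ce).mean()

# loss = FocalLoss()(cnn_model(images), labels)   # cnn_model is any CNN backbone
```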
AzSLD: Azerbaijani Sign Language Dataset for Fingerspelling, Word, and Sentence Translation with Baseline Software | 2024-11-23 | ShowSign language processing technology development relies on extensive and reliable datasets, instructions, and ethical guidelines. We present a comprehensive Azerbaijani Sign Language Dataset (AzSLD) collected from diverse sign language users and linguistic parameters to facilitate advancements in sign recognition and translation systems and support the local sign language community. The dataset was created within the framework of a vision-based AzSL translation project. This study introduces the dataset as a summary of the fingerspelling alphabet and sentence- and word-level sign language datasets. The dataset was collected from signers of different ages, genders, and signing styles, with videos recorded from two camera angles to capture each sign in full detail. This approach ensures robust training and evaluation of gesture recognition models. AzSLD contains 30,000 videos, each carefully annotated with accurate sign labels and corresponding linguistic translations. The dataset is accompanied by technical documentation and source code to facilitate its use in training and testing. This dataset offers a valuable resource of labeled data for researchers and developers working on sign language recognition, translation, or synthesis. Ethical guidelines were strictly followed throughout the project, with all participants providing informed consent for collecting, publishing, and using the data. |
None | |
TSkips: Efficiency Through Explicit Temporal Delay Connections in Spiking Neural Networks | 2024-11-22 | ShowSpiking Neural Networks (SNNs) with their bio-inspired Leaky Integrate-and-Fire (LIF) neurons inherently capture temporal information. This makes them well-suited for sequential tasks like processing event-based data from Dynamic Vision Sensors (DVS) and event-based speech tasks. Harnessing the temporal capabilities of SNNs requires mitigating vanishing spikes during training, capturing spatio-temporal patterns and enhancing precise spike timing. To address these challenges, we propose TSkips, augmenting SNN architectures with forward and backward skip connections that incorporate explicit temporal delays. These connections capture long-term spatio-temporal dependencies and facilitate better spike flow over long sequences. The introduction of TSkips creates a vast search space of possible configurations, encompassing skip positions and time delay values. To efficiently navigate this search space, this work leverages training-free Neural Architecture Search (NAS) to identify optimal network structures and corresponding delays. We demonstrate the effectiveness of our approach on four event-based datasets: DSEC-flow for optical flow estimation, DVS128 Gesture for hand gesture recognition and Spiking Heidelberg Digits (SHD) and Spiking Speech Commands (SSC) for speech recognition. Our method achieves significant improvements across these datasets: up to 18% reduction in Average Endpoint Error (AEE) on DSEC-flow, 8% increase in classification accuracy on DVS128 Gesture, and up to 8% and 16% higher classification accuracy on SHD and SSC, respectively. |
None | |
N-DriverMotion: Driver motion learning and prediction using an event-based camera and directly trained spiking neural networks on Loihi 2 | 2024-11-18 | ShowDriver motion recognition is a principal factor in ensuring the safety of driving systems. This paper presents a novel system for learning and predicting driver motions and an event-based high-resolution (1280x720) dataset, N-DriverMotion, newly collected to train on a neuromorphic vision system. The system comprises an event-based camera that generates the first high-resolution driver motion dataset representing spike inputs and efficient spiking neural networks (SNNs) that are effective in training and predicting the driver's gestures. The event dataset consists of 13 driver motion categories classified by direction (front, side), illumination (bright, moderate, dark), and participant. A novel simplified four-layer convolutional spiking neural network (CSNN) that we proposed was directly trained using the high-resolution dataset without any time-consuming preprocessing. This enables efficient adaptation to on-device SNNs for real-time inference on high-resolution event-based streams. Compared with recent gesture recognition systems adopting neural networks for vision processing, the proposed neuromorphic vision system achieves comparable accuracy, 94.04%, in recognizing driver motions with the CSNN architecture. Our proposed CSNN and the dataset can be used to develop safer and more efficient driver monitoring systems for autonomous vehicles or edge devices requiring an efficient neural network architecture. |
Accep...Accepted for publication in IEEE Open Journal of Vehicular Technology (OJVT) on 18 November 2024 |
None |
Enhancing Bidirectional Sign Language Communication: Integrating YOLOv8 and NLP for Real-Time Gesture Recognition & Translation | 2024-11-18 | ShowThe primary concern of this research is to take American Sign Language (ASL) data through real-time camera footage and convert the data and information into text. In addition, we focus on creating a framework that can also convert text into sign language in real time, which can help break the language barrier for people in need. In this work, for recognising American Sign Language (ASL), we have used the You Only Look Once (YOLO) model and a Convolutional Neural Network (CNN) model. The YOLO model is run in real time and automatically extracts discriminative spatial-temporal characteristics from the raw video stream without the need for any prior knowledge, eliminating design flaws. The CNN model is also run in real time for sign language detection. We have introduced a novel method for converting text-based input to sign language by making a framework that takes a sentence as input, identifies keywords from that sentence, and then shows a video where sign language is performed with respect to the given sentence in real time. To the best of our knowledge, this is a rare study to demonstrate bidirectional sign language communication in real time in American Sign Language (ASL). |
None | |
Unsupervised Domain Adaptation for RF-based Gesture Recognition | 2024-11-16 | ShowHuman gesture recognition with Radio Frequency (RF) signals has attained acclaim due to the omnipresence, privacy protection, and broad coverage nature of RF signals. These gesture recognition systems rely on neural networks trained with a large number of labeled data. However, the recognition model trained with data under certain conditions would suffer from significant performance degradation when applied in practical deployment, which limits the application of gesture recognition systems. In this paper, we propose an unsupervised domain adaptation framework for RF-based gesture recognition aiming to enhance the performance of the recognition model in new conditions by making effective use of the unlabeled data from new conditions. We first propose pseudo-labeling and consistency regularization to utilize unlabeled data for model training and eliminate the feature discrepancies in different domains. Then we propose a confidence constraint loss to enhance the effectiveness of pseudo-labeling, and design two corresponding data augmentation methods based on the characteristic of the RF signals to strengthen the performance of the consistency regularization, which can make the framework more effective and robust. Furthermore, we propose a cross-match loss to integrate the pseudo-labeling and consistency regularization, which makes the whole framework simple yet effective. Extensive experiments demonstrate that the proposed framework could achieve 4.35% and 2.25% accuracy improvement comparing with the state-of-the-art methods on public WiFi dataset and millimeter wave (mmWave) radar dataset, respectively. |
This ...This paper has been published by the IEEE Internet of Things Journal |
None |
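The pseudo-labeling plus consistency-regularization idea in the entry above can be sketched in a few lines; the snippet below is a generic FixMatch-style formulation with hypothetical tensors (`x_weak`, `x_strong`) and a confidence threshold, not the framework proposed in the paper.

```python
# Hedged sketch: confidence-thresholded pseudo-labels + cross-view consistency.
import torch
import torch.nn.functional as F

def unlabeled_loss(model, x_weak, x_strong, threshold: float = 0.95):
    """x_weak / x_strong are two augmentations of the same unlabeled RF batch."""
    with torch.no_grad():
        probs = F.softmax(model(x_weak), dim=1)   # predictions on the weak view
        conf, pseudo = probs.max(dim=1)           # confidence and pseudo-label
        mask = (conf >= threshold).float()        # keep confident samples only
    logits_strong = model(x_strong)
    per_sample = F.cross_entropy(logits_strong, pseudo, reduction="none")
    return (per_sample * mask).mean()             # consistency across the two views
```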
Towards Scalable Handwriting Communication via EEG Decoding and Latent Embedding Integration | 2024-11-14 | ShowIn recent years, brain-computer interfaces have made advances in decoding various motor-related tasks, including gesture recognition and movement classification, utilizing electroencephalogram (EEG) data. These developments are fundamental in exploring how neural signals can be interpreted to recognize specific physical actions. This study centers on a written alphabet classification task, where we aim to decode EEG signals associated with handwriting. To achieve this, we incorporate hand kinematics to guide the extraction of the consistent embeddings from high-dimensional neural recordings using auxiliary variables (CEBRA). These CEBRA embeddings, along with the EEG, are processed by a parallel convolutional neural network model that extracts features from both data sources simultaneously. The model classifies nine different handwritten characters, including symbols such as exclamation marks and commas, within the alphabet. We evaluate the model using a quantitative five-fold cross-validation approach and explore the structure of the embedding space through visualizations. Our approach achieves a classification accuracy of 91 % for the nine-class task, demonstrating the feasibility of fine-grained handwriting decoding from EEG. |
4 pag...4 pages, 2 figures, 1 table, Name of Conference: International Conference on Brain-Computer Interface |
None |
Generative AI for Data Augmentation in Wireless Networks: Analysis, Applications, and Case Study | 2024-11-13 | ShowData augmentation is a powerful technique to mitigate data scarcity. However, owing to fundamental differences in wireless data structures, traditional data augmentation techniques may not be suitable for wireless data. Fortunately, Generative Artificial Intelligence (GenAI) can be an effective alternative to wireless data augmentation due to its excellent data generation capability. This article systemically explores the potential and effectiveness of GenAI-driven data augmentation in wireless networks. We first briefly review data augmentation techniques, discuss their limitations in wireless networks, and introduce generative data augmentation, including reviewing GenAI models and their applications in data augmentation. We then explore the application prospects of GenAI-driven data augmentation in wireless networks from the physical, network, and application layers, which provides a GenAI-driven data augmentation architecture for each application. Subsequently, we propose a general generative diffusion model-based data augmentation framework for Wi-Fi gesture recognition, which uses transformer-based diffusion models to generate high-quality channel state information data. Furthermore, we develop residual neural network models for Wi-Fi gesture recognition to evaluate the role of augmented data and conduct a case study based on a real dataset. Simulation results demonstrate the effectiveness of the proposed framework. Finally, we discuss research directions for generative data augmentation. |
None | |
LapGSR: Laplacian Reconstructive Network for Guided Thermal Super-Resolution | 2024-11-12 | ShowIn the last few years, the fusion of multi-modal data has been widely studied for various applications such as robotics, gesture recognition, and autonomous navigation. Indeed, high-quality visual sensors are expensive, and consumer-grade sensors produce low-resolution images. Researchers have developed methods to combine RGB color images with non-visual data, such as thermal imagery, to overcome this limitation and improve resolution. Fusing multiple modalities to produce visually appealing, high-resolution images often requires dense models with millions of parameters and a heavy computational load, which is commonly attributed to the intricate architecture of the model. We propose LapGSR, a multimodal, lightweight, generative model incorporating Laplacian image pyramids for guided thermal super-resolution. This approach uses a Laplacian Pyramid on RGB color images to extract vital edge information, which is then used to bypass heavy feature map computation in the higher layers of the model in tandem with a combined pixel and adversarial loss. LapGSR preserves the spatial and structural details of the image while also being efficient and compact. This results in a model with significantly fewer parameters than other SOTA models while demonstrating excellent results on two cross-domain datasets, viz. ULB17-VT and VGTSR. |
None | |
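For readers unfamiliar with Laplacian pyramids, the snippet below is a minimal OpenCV sketch of extracting the edge/detail bands from an RGB guide image, the kind of cue a guided super-resolution network can consume; it is not the LapGSR code, and the file path is hypothetical.

```python
# Hedged sketch: Laplacian pyramid of an RGB guide image with OpenCV.
import cv2

def laplacian_pyramid(rgb, levels: int = 3):
    """Return Laplacian (edge/detail) levels plus the low-frequency base."""
    pyramid, current = [], rgb.astype("float32")
    for _ in range(levels):
        down = cv2.pyrDown(current)                           # blur + downsample
        up = cv2.pyrUp(down, dstsize=current.shape[1::-1])    # upsample back
        pyramid.append(cv2.subtract(current, up))             # residual = edge detail
        current = down
    pyramid.append(current)
    return pyramid

# edges = laplacian_pyramid(cv2.imread("guide_rgb.png"))      # hypothetical path
```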
Ring-a-Pose: A Ring for Continuous Hand Pose Tracking | 2024-11-12 | ShowWe present Ring-a-Pose, a single untethered ring that tracks continuous 3D hand poses. Located in the center of the hand, the ring emits an inaudible acoustic signal that each hand pose reflects differently. Ring-a-Pose imposes minimal obtrusions on the hand, unlike multi-ring or glove systems. It is not affected by the choice of clothing that may cover wrist-worn systems. In a series of three user studies with a total of 30 participants, we evaluate Ring-a-Pose's performance on pose tracking and micro-finger gesture recognition. Without collecting any training data from a user, Ring-a-Pose tracks continuous hand poses with a joint error of 14.1mm. The joint error decreases to 10.3mm for fine-tuned user-dependent models. Ring-a-Pose recognizes 7-class micro-gestures with a 90.60% and 99.27% accuracy for user-independent and user-dependent models, respectively. Furthermore, the ring exhibits promising performance when worn on any finger. Ring-a-Pose enables the future of smart rings to track and recognize hand poses using relatively low-power acoustic sensing. |
None | |
Delayed Memory Unit: Modelling Temporal Dependency Through Delay Gate | 2024-11-10 | ShowRecurrent Neural Networks (RNNs) are widely recognized for their proficiency in modeling temporal dependencies, making them highly prevalent in sequential data processing applications. Nevertheless, vanilla RNNs are confronted with the well-known issue of gradient vanishing and exploding, posing a significant challenge for learning and establishing long-range dependencies. Additionally, gated RNNs tend to be over-parameterized, resulting in poor computational efficiency and network generalization. To address these challenges, this paper proposes a novel Delayed Memory Unit (DMU). The DMU incorporates a delay line structure along with delay gates into vanilla RNN, thereby enhancing temporal interaction and facilitating temporal credit assignment. Specifically, the DMU is designed to directly distribute the input information to the optimal time instant in the future, rather than aggregating and redistributing it over time through intricate network dynamics. Our proposed DMU demonstrates superior temporal modeling capabilities across a broad range of sequential modeling tasks, utilizing considerably fewer parameters than other state-of-the-art gated RNN models in applications such as speech recognition, radar gesture recognition, ECG waveform segmentation, and permuted sequential image classification. |
Accep...Accepted for publication in IEEE Transactions on Neural Networks and Learning Systems, 2024 |
None |
Object Recognition in Human Computer Interaction:- A Comparative Analysis | 2024-11-06 | ShowHuman-computer interaction (HCI) has been a widely researched area for many years, with continuous advancements in technology leading to the development of new techniques that change the way we interact with computers. With the recent advent of powerful computers, we can recognize human actions and interact accordingly, revolutionizing the way we interact with computers. The purpose of this paper is to provide a comparative analysis of various algorithms used for recognizing user faces and gestures in the context of computer vision and HCI. This study explores and evaluates the performance of different algorithms in terms of accuracy, robustness, and efficiency, with the goal of improving the design and development of interactive systems that are more intuitive, efficient, and user-friendly. |
None | |
One-Stage-TFS: Thai One-Stage Fingerspelling Dataset for Fingerspelling Recognition Frameworks | 2024-11-05 | ShowThe Thai One-Stage Fingerspelling (One-Stage-TFS) dataset is a comprehensive resource designed to advance research in hand gesture recognition, explicitly focusing on the recognition of Thai sign language. This dataset comprises 7,200 images capturing 15 one-stage consonant gestures performed by undergraduate students from Rajabhat Maha Sarakham University, Thailand. The contributors include both expert students from the Special Education Department with proficiency in Thai sign language and students from other departments without prior sign language experience. Images were collected between July and December 2021 using a DSLR camera, with contributors demonstrating hand gestures against both simple and complex backgrounds. The One-Stage-TFS dataset presents challenges in detecting and recognizing hand gestures, offering opportunities to develop novel end-to-end recognition frameworks. Researchers can utilize this dataset to explore deep learning methods, such as YOLO, EfficientDet, RetinaNet, and Detectron, for hand detection, followed by feature extraction and recognition using techniques like convolutional neural networks, transformers, and adaptive feature fusion networks. The dataset is accessible via the Mendeley Data repository and supports a wide range of applications in computer science, including deep learning, computer vision, and pattern recognition, thereby encouraging further innovation and exploration in these fields. |
12 pages, 9 figures | None |
High-density Electromyography for Effective Gesture-based Control of Physically Assistive Mobile Manipulators | 2024-11-04 | ShowHigh-density electromyography (HDEMG) can detect myoelectric activity as control inputs to a variety of electronically-controlled devices. Furthermore, HDEMG sensors may be built into a variety of clothing, allowing for a non-intrusive myoelectric interface that is integrated into a user's routine. In our work, we introduce an easily-producible HDEMG device that interfaces with the control of a mobile manipulator to perform a range of household and physically assistive tasks. Mobile manipulators can operate throughout the home and are applicable for a spectrum of assistive and daily tasks in the home. We evaluate the use of real-time myoelectric gesture recognition using our device to enable precise control over the intricate mobility and manipulation functionalities of an 8 degree-of-freedom mobile manipulator. Our evaluation, involving 13 participants engaging in challenging self-care and household activities, demonstrates the potential of our wearable HDEMG system to control a mobile manipulator in the home. |
None | |
SpGesture: Source-Free Domain-adaptive sEMG-based Gesture Recognition with Jaccard Attentive Spiking Neural Network | 2024-10-30 | ShowSurface electromyography (sEMG) based gesture recognition offers a natural and intuitive interaction modality for wearable devices. Despite significant advancements in sEMG-based gesture-recognition models, existing methods often suffer from high computational latency and increased energy consumption. Additionally, the inherent instability of sEMG signals, combined with their sensitivity to distribution shifts in real-world settings, compromises model robustness. To tackle these challenges, we propose a novel SpGesture framework based on Spiking Neural Networks, which possesses several unique merits compared with existing methods: (1) Robustness: By utilizing membrane potential as a memory list, we pioneer the introduction of Source-Free Domain Adaptation into SNN for the first time. This enables SpGesture to mitigate the accuracy degradation caused by distribution shifts. (2) High Accuracy: With a novel Spiking Jaccard Attention, SpGesture enhances the SNNs' ability to represent sEMG features, leading to a notable rise in system accuracy. To validate SpGesture's performance, we collected a new sEMG gesture dataset which has different forearm postures, where SpGesture achieved the highest accuracy among the baselines ( |
Accep...Accepted by NeurIPS 2024 |
Code Link |
S-TLLR: STDP-inspired Temporal Local Learning Rule for Spiking Neural Networks | 2024-10-29 | ShowSpiking Neural Networks (SNNs) are biologically plausible models that have been identified as potentially apt for deploying energy-efficient intelligence at the edge, particularly for sequential learning tasks. However, training of SNNs poses significant challenges due to the necessity for precise temporal and spatial credit assignment. Back-propagation through time (BPTT) algorithm, whilst the most widely used method for addressing these issues, incurs high computational cost due to its temporal dependency. In this work, we propose S-TLLR, a novel three-factor temporal local learning rule inspired by the Spike-Timing Dependent Plasticity (STDP) mechanism, aimed at training deep SNNs on event-based learning tasks. Furthermore, S-TLLR is designed to have low memory and time complexities, which are independent of the number of time steps, rendering it suitable for online learning on low-power edge devices. To demonstrate the scalability of our proposed method, we have conducted extensive evaluations on event-based datasets spanning a wide range of applications, such as image and gesture recognition, audio classification, and optical flow estimation. In all the experiments, S-TLLR achieved high accuracy, comparable to BPTT, with a reduction in memory between |
23 pages, 5 figures | None |
x-RAGE: eXtended Reality -- Action & Gesture Events Dataset | 2024-10-28 | ShowWith the emergence of the Metaverse and the focus on wearable devices in recent years, gesture-based human-computer interaction has gained significance. To enable gesture recognition for VR/AR headsets and glasses, several datasets focusing on the egocentric, i.e., first-person, view have emerged in recent years. However, standard frame-based vision suffers from limitations in data bandwidth requirements as well as in its ability to capture fast motions. To overcome these limitations, bio-inspired approaches such as event-based cameras present an attractive alternative. In this work, we present the first event-camera based egocentric gesture dataset for enabling neuromorphic, low-power solutions for XR-centric gesture recognition. The dataset has been made available publicly at the following URL: https://gitlab.com/NVM_IITD_Research/xrage. |
None | |
Research on gesture recognition method based on SEDCNN-SVM | 2024-10-24 | ShowGesture recognition based on surface electromyographic signals (sEMG) is one of the most widely used approaches. Traditional manual feature extraction can only capture low-level signal features, which leads to poor classifier performance and low recognition accuracy on complex signals. A recognition method, namely SEDCNN-SVM, is proposed to recognize the sEMG of different gestures. SEDCNN-SVM consists of an improved deep convolutional neural network (DCNN) and a support vector machine (SVM). The DCNN automatically extracts and learns feature information from sEMG through the convolution operations of its convolutional layers, so it can capture the complex, high-level features in the data. Squeeze and Excitation Networks (SE-Net) and a residual module were added to the model to improve the feature representation of each channel, reduce the loss of feature information in convolutional operations, capture useful feature information, and ease the problem of vanishing gradients. The SVM improves the generalization ability and classification accuracy of the model by constructing an optimal hyperplane in the feature space; it therefore replaces the fully connected layer and the Softmax layer of the DCNN, and a suitable kernel function further improves generalization and accuracy. To verify the effectiveness of the proposed classification algorithm, the method is analyzed and compared with other classification methods. The recognition accuracy of SEDCNN-SVM reaches 0.955, a significant improvement over the other classification methods, and the SEDCNN-SVM model can perform recognition online in real time. |
None | |
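A minimal version of the CNN-plus-SVM head described above can be sketched as follows; the backbone, data loaders, and kernel settings are assumptions for illustration, not the SEDCNN-SVM implementation.

```python
# Hedged sketch: train an SVM on features from a frozen CNN backbone.
import torch
from sklearn.svm import SVC

def extract_features(cnn_backbone, loader, device="cpu"):
    """Run the frozen convolutional backbone and collect feature vectors."""
    feats, labels = [], []
    cnn_backbone.eval()
    with torch.no_grad():
        for x, y in loader:
            f = cnn_backbone(x.to(device)).flatten(1)   # (batch, feat_dim)
            feats.append(f.cpu())
            labels.append(y)
    return torch.cat(feats).numpy(), torch.cat(labels).numpy()

# X_tr, y_tr = extract_features(backbone, train_loader)   # hypothetical loaders
# svm = SVC(kernel="rbf", C=10.0).fit(X_tr, y_tr)          # SVM replaces the softmax head
# acc = svm.score(*extract_features(backbone, test_loader))
```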
OmniRace: 6D Hand Pose Estimation for Intuitive Guidance of Racing Drone | 2024-10-21 | ShowThis paper presents the OmniRace approach to controlling a racing drone with 6-degree of freedom (DoF) hand pose estimation and gesture recognition. To our knowledge, it is the first-ever technology that allows for low-level control of high-speed drones using gestures. OmniRace employs a gesture interface based on computer vision and a deep neural network to estimate a 6-DoF hand pose. The advanced machine learning algorithm robustly interprets human gestures, allowing users to control drone motion intuitively. Real-time control of a racing drone demonstrates the effectiveness of the system, validating its potential to revolutionize drone racing and other applications. Experimental results conducted in the Gazebo simulation environment revealed that OmniRace allows users to complete the UAV race track significantly faster (by 25.1%) and to decrease the length of the test drone path (from 102.9 to 83.7 m). Users preferred the gesture interface for attractiveness (1.57 UEQ score), hedonic quality (1.56 UEQ score), and lower perceived temporal demand (32.0 score in NASA-TLX), while noting the high efficiency (0.75 UEQ score) and low physical demand (19.0 score in NASA-TLX) of the baseline remote controller. The deep neural network attains an average accuracy of 99.75% when applied to both normalized datasets and raw datasets. OmniRace can potentially change the way humans interact with and navigate racing drones in dynamic and complex environments. The source code is available at https://github.com/SerValera/OmniRace.git. |
Code Link | |
Advanced Gesture Recognition in Autism: Integrating YOLOv7, Video Augmentation and VideoMAE for Video Analysis | 2024-10-12 | ShowDeep learning and advancements in contactless sensors have significantly enhanced our ability to understand complex human activities in healthcare settings. In particular, deep learning models utilizing computer vision have been developed to enable detailed analysis of human gesture recognition, especially repetitive gestures which are commonly observed behaviors in children with autism. This research work aims to identify repetitive behaviors indicative of autism by analyzing videos captured in natural settings as children engage in daily activities. The focus is on accurately categorizing real-time repetitive gestures such as spinning, head banging, and arm flapping. To this end, we utilize the publicly accessible Self-Stimulatory Behavior Dataset (SSBD) to classify these stereotypical movements. A key component of the proposed methodology is the use of VideoMAE, a model designed to improve both spatial and temporal analysis of video data through a masking and reconstruction mechanism. This model significantly outperformed traditional methods, achieving an accuracy of 97.7%, a 14.7% improvement over the previous state-of-the-art. |
None | |
Exploring Gestural Interaction with a Cushion Interface for Smart Home Control | 2024-10-07 | ShowIn this research, we aim to realize a cushion interface for operating a smart home. We designed user-defined gestures performed with a cushion and developed a gesture recognition system. We asked users to make gestures with cushions for operating home appliances and determined user-defined gesture sets. We developed two methods for gesture identification: in the first, we inserted sensor modules consisting of photo-reflective sensors and an acceleration sensor inside a cushion; in the second, we embedded acceleration sensor arrays in the cushion cover. The gesture recognizer was implemented using Convolutional Neural Networks (CNN). To evaluate our method, we conducted an experiment to measure recognition accuracy. Results showed an average accuracy of 94.8% when training for each user, and an average accuracy of 91.3% when testing on a user not present in the training data set. |
10 pages | None |
Masked Autoencoder with Swin Transformer Network for Mitigating Electrode Shift in HD-EMG-based Gesture Recognition | 2024-10-07 | ShowMulti-channel surface Electromyography (sEMG), also referred to as high-density sEMG (HD-sEMG), plays a crucial role in improving gesture recognition performance for myoelectric control. Pattern recognition models developed based on HD-sEMG, however, are vulnerable to changing recording conditions (e.g., signal variability due to electrode shift). This has resulted in significant degradation in performance across subjects and sessions. In this context, the paper proposes the Masked Autoencoder with Swin Transformer (MAST) framework, where training is performed on a masked subset of HD-sEMG channels. A combination of four masking strategies, i.e., random block masking, temporal masking, sensor-wise random masking, and multi-scale masking, is used to learn latent representations and increase robustness against electrode shift. The masked data is then passed through MAST's three-path encoder-decoder structure, leveraging a multi-path Swin-Unet architecture that simultaneously captures time-domain, frequency-domain, and magnitude-based features of the underlying HD-sEMG signal. These augmented inputs are then used in a self-supervised pre-training fashion to improve the model's generalization capabilities. Experimental results demonstrate the superior performance of the proposed MAST framework in comparison to its counterparts. |
None | |
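Two of the four masking strategies named in the entry above (random block masking and temporal masking) are easy to illustrate; the sketch below uses hypothetical HD-sEMG window shapes and is not the MAST preprocessing code.

```python
# Hedged sketch: channel-block and temporal masking on a (channels, time) window.
import numpy as np

def random_block_mask(x: np.ndarray, block: int = 8, n_blocks: int = 4) -> np.ndarray:
    """Zero out random contiguous channel blocks (mimics electrode shift/dropout)."""
    x = x.copy()
    for _ in range(n_blocks):
        start = np.random.randint(0, x.shape[0] - block + 1)
        x[start:start + block, :] = 0.0
    return x

def temporal_mask(x: np.ndarray, span: int = 50, n_spans: int = 3) -> np.ndarray:
    """Zero out random time spans across all channels."""
    x = x.copy()
    for _ in range(n_spans):
        start = np.random.randint(0, x.shape[1] - span + 1)
        x[:, start:start + span] = 0.0
    return x

# window = np.random.randn(64, 200)     # e.g. 64 channels x 200 samples
# masked = temporal_mask(random_block_mask(window))
```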
Real-Time Hand Gesture Recognition: Integrating Skeleton-Based Data Fusion and Multi-Stream CNN | 2024-10-06 | ShowHand Gesture Recognition (HGR) enables intuitive human-computer interactions in various real-world contexts. However, existing frameworks often struggle to meet the real-time requirements essential for practical HGR applications. This study introduces a robust, skeleton-based framework for dynamic HGR that simplifies the recognition of dynamic hand gestures into a static image classification task, effectively reducing both hardware and computational demands. Our framework utilizes a data-level fusion technique to encode 3D skeleton data from dynamic gestures into static RGB spatiotemporal images. It incorporates a specialized end-to-end Ensemble Tuner (e2eET) Multi-Stream CNN architecture that optimizes the semantic connections between data representations while minimizing computational needs. Tested across five benchmark datasets (SHREC'17, DHG-14/28, FPHA, LMDHG, and CNR), the framework showed competitive performance with the state-of-the-art. Its capability to support real-time HGR applications was also demonstrated through deployment on standard consumer PC hardware, showcasing low latency and minimal resource usage in real-world settings. The successful deployment of this framework underscores its potential to enhance real-time applications in fields such as virtual/augmented reality, ambient intelligence, and assistive technologies, providing a scalable and efficient solution for dynamic gesture recognition. |
14 pa...14 pages. 7 figures. Code available at https://github.com/Outsiders17711/e2eET-Skeleton-Based-HGR-Using-Data-Level-Fusion |
Code Link |
Boosting Gesture Recognition with an Automatic Gesture Annotation Framework | 2024-10-05 | ShowTraining a real-time gesture recognition model heavily relies on annotated data. However, manual data annotation is costly and demands substantial human effort. In order to address this challenge, we propose a framework that can automatically annotate gesture classes and identify their temporal ranges. Our framework consists of two key components: (1) a novel annotation model that leverages the Connectionist Temporal Classification (CTC) loss, and (2) a semi-supervised learning pipeline that enables the model to improve its performance by training on its own predictions, known as pseudo labels. These high-quality pseudo labels can also be used to enhance the accuracy of other downstream gesture recognition models. To evaluate our framework, we conducted experiments using two publicly available gesture datasets. Our ablation study demonstrates that our annotation model design surpasses the baseline in terms of both gesture classification accuracy (3-4% improvement) and localization accuracy (71-75% improvement). Additionally, we illustrate that the pseudo-labeled dataset produced from the proposed framework significantly boosts the accuracy of a pre-trained downstream gesture recognition model by 11-18%. We believe that this annotation framework has immense potential to improve the training of downstream gesture recognition models using unlabeled datasets. |
None | |
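The CTC objective that the annotation model above builds on is available directly in PyTorch; the shapes below are hypothetical placeholders, not the authors' network.

```python
# Hedged sketch: the CTC loss used for weakly supervised temporal labeling.
import torch
import torch.nn as nn

T, N, C = 120, 4, 11          # time steps, batch size, 10 gesture classes + blank
log_probs = torch.randn(T, N, C).log_softmax(dim=2)   # per-frame class scores
targets = torch.randint(1, C, (N, 5))                 # gesture label sequences
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 5, dtype=torch.long)

ctc = nn.CTCLoss(blank=0, zero_infinity=True)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
# Optimizing this loss lets a network align gesture labels to time without
# frame-level annotations, which is what makes automatic labeling possible.
```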
Towards Open-World Gesture Recognition | 2024-10-05 | ShowProviding users with accurate gestural interfaces, such as gesture recognition based on wrist-worn devices, is a key challenge in mixed reality. However, static machine learning processes in gesture recognition assume that training and test data come from the same underlying distribution. Unfortunately, in real-world applications involving gesture recognition, such as gesture recognition based on wrist-worn devices, the data distribution may change over time. We formulate this problem of adapting recognition models to new tasks, where new data patterns emerge, as open-world gesture recognition (OWGR). We propose the use of continual learning to enable machine learning models to be adaptive to new tasks without degrading performance on previously learned tasks. However, exploring parameters for questions around when and how to train and deploy recognition models requires resource-intensive user studies that may be impractical. To address this challenge, we propose a design engineering approach that enables offline analysis on a collected large-scale dataset by systematically examining various parameters and comparing different continual learning methods. Finally, we provide design guidelines to enhance the development of an open-world wrist-worn gesture recognition process. |
None | |
An Evaluation of Large Pre-Trained Models for Gesture Recognition using Synthetic Videos | 2024-10-03 | ShowIn this work, we explore the possibility of using synthetically generated data for video-based gesture recognition with large pre-trained models. We consider whether these models have sufficiently robust and expressive representation spaces to enable "training-free" classification. Specifically, we utilize various state-of-the-art video encoders to extract features for use in k-nearest neighbors classification, where the training data points are derived from synthetic videos only. We compare these results with another training-free approach -- zero-shot classification using text descriptions of each gesture. In our experiments with the RoCoG-v2 dataset, we find that using synthetic training videos yields significantly lower classification accuracy on real test videos compared to using a relatively small number of real training videos. We also observe that video backbones that were fine-tuned on classification tasks serve as superior feature extractors, and that the choice of fine-tuning data has a substantial impact on k-nearest neighbors performance. Lastly, we find that zero-shot text-based classification performs poorly on the gesture recognition task, as gestures are not easily described through natural language. |
Synth...Synthetic Data for Artificial Intelligence and Machine Learning: Tools, Techniques, and Applications II (SPIE Defense + Commercial Sensing, 2024) |
None |
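The "training-free" k-nearest-neighbors protocol described above reduces to fitting k-NN on frozen encoder features; the sketch below assumes feature matrices already extracted by some pre-trained video backbone and is not tied to the paper's encoders.

```python
# Hedged sketch: k-NN over frozen video features, fit on synthetic, tested on real.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def knn_eval(syn_feats: np.ndarray, syn_labels: np.ndarray,
             real_feats: np.ndarray, real_labels: np.ndarray, k: int = 5) -> float:
    """Fit k-NN on synthetic-video embeddings and score it on real videos."""
    knn = KNeighborsClassifier(n_neighbors=k, metric="cosine")
    knn.fit(syn_feats, syn_labels)
    return knn.score(real_feats, real_labels)

# Features come from any frozen encoder, e.g. feats = video_encoder(clip)
# accuracy = knn_eval(syn_X, syn_y, real_X, real_y)
```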
Vision-Based Hand Gesture Customization from a Single Demonstration | 2024-10-02 | ShowHand gesture recognition is becoming a more prevalent mode of human-computer interaction, especially as cameras proliferate across everyday devices. Despite continued progress in this field, gesture customization is often underexplored. Customization is crucial since it enables users to define and demonstrate gestures that are more natural, memorable, and accessible. However, customization requires efficient usage of user-provided data. We introduce a method that enables users to easily design bespoke gestures with a monocular camera from one demonstration. We employ transformers and meta-learning techniques to address few-shot learning challenges. Unlike prior work, our method supports any combination of one-handed, two-handed, static, and dynamic gestures, including different viewpoints, and the ability to handle irrelevant hand movements. We implement three real-world applications using our customization method, conduct a user study, and achieve up to 94% average recognition accuracy from one demonstration. Our work provides a viable path for vision-based gesture customization, laying the foundation for future advancements in this domain. |
2024 ...2024 (UIST' 24). USA, 14 pages |
None |
Gesture Recognition for Feedback Based Mixed Reality and Robotic Fabrication: A Case Study of the UnLog Tower | 2024-09-28 | ShowMixed Reality (MR) platforms enable users to interact with three-dimensional holographic instructions during the assembly and fabrication of highly custom and parametric architectural constructions without the necessity of two-dimensional drawings. Previous MR fabrication projects have primarily relied on digital menus and custom buttons as the interface for user interaction with the MR environment. Despite this approach being widely adopted, it is limited in its ability to allow for direct human interaction with physical objects to modify fabrication instructions within the MR environment. This research integrates user interactions with physical objects through real-time gesture recognition as input to modify, update or generate new digital information enabling reciprocal stimuli between the physical and the virtual environment. Consequently, the digital environment is generative of the user's provided interaction with physical objects to allow seamless feedback in the fabrication process. This research investigates gesture recognition for feedback-based MR workflows for robotic fabrication, human assembly, and quality control in the construction of the UnLog Tower. |
16 pa...16 pages, 16 figures. Published in the Proceedings of the International Conference on Computational Design and Robotic Fabrication (CDRF) 2023 |
None |
Hand Gesture Classification Based on Forearm Ultrasound Video Snippets Using 3D Convolutional Neural Networks | 2024-09-24 | ShowUltrasound based hand movement estimation is a crucial area of research with applications in human-machine interaction. Forearm ultrasound offers detailed information about muscle morphology changes during hand movement which can be used to estimate hand gestures. Previous work has focused on analyzing 2-Dimensional (2D) ultrasound image frames using techniques such as convolutional neural networks (CNNs). However, such 2D techniques do not capture temporal features from segments of ultrasound data corresponding to continuous hand movements. This study uses 3D CNN based techniques to capture spatio-temporal patterns within ultrasound video segments for gesture recognition. We compared the performance of a 2D convolution-based network with (2+1)D convolution-based, 3D convolution-based, and our proposed network. Our methodology enhanced the gesture classification accuracy to 98.8 +/- 0.9%, from 96.5 +/- 2.3% compared to a network trained with 2D convolution layers. These results demonstrate the advantages of using ultrasound video snippets for improving hand gesture classification performance. |
Accepted to IUS 2024 | None |
Complex-valued convolutional neural network classification of hand gesture from radar images | 2024-09-17 | ShowHand gesture recognition systems have yielded many exciting advancements in the last decade and have become more popular in HCI (human-computer interaction), with several application areas that span from safety and security applications to the automotive field. Various deep neural network architectures have already been inspected for hand gesture recognition systems, including the multi-layer perceptron (MLP), convolutional neural network (CNN), recurrent neural network (RNN), and a cascade of the last two architectures known as CNN-RNN. However, a major problem still exists: most of the existing ML algorithms are designed and developed with building blocks and techniques intended for real-valued (RV) data. Researchers have applied various RV techniques to complex-valued (CV) radar images, for example converting a CV optimisation problem into an RV one by splitting the complex numbers into their real and imaginary parts. However, the major disadvantage of this method is that the resulting algorithm doubles the network dimensions. Recent work on RNNs and other fundamental theoretical analyses suggests that CV numbers have a richer representational capacity, but due to the absence of the building blocks required to design such models, the performance of CV networks is marginalised. In this report, we propose a fully CV-CNN, including all building blocks, forward and backward operations, and derivatives, all in the complex domain. We explore our proposed classification model on two sets of CV hand gesture radar images in comparison with the equivalent RV model. In chapter five, we propose a CV-forward residual network for the purpose of binary classification of the two sets of CV hand gesture radar datasets and compare its performance with our proposed CV-CNN and a baseline CV-forward CNN. |
173 p...173 pages, 36 tables, 50 figures |
None |
Impact of Electrode Position on Forearm Orientation Invariant Hand Gesture Recognition | 2024-09-16 | ShowObjective: Variation of forearm orientation is one of the crucial factors that drastically degrades the forearm orientation invariant hand gesture recognition performance or the degree of freedom and limits the successful commercialization of myoelectric prosthetic hand or electromyogram (EMG) signal-based human-computer interfacing devices. This study investigates the impact of surface EMG electrode positions (elbow and forearm) on forearm orientation invariant hand gesture recognition. Methods: The study has been performed over 19 intact limbed subjects, considering 12 daily living hand gestures. The quality of the EMG signal is confirmed in terms of three indices. Then, the recognition performance is evaluated and validated by considering three training strategies, six feature extraction methods, and three classifiers. Results: The forearm electrode position provides comparable to or better EMG signal quality considering three indices. In this research, the forearm electrode position achieves up to 5.35% improved forearm orientation invariant hand gesture recognition performance compared to the elbow electrode position. The obtained performance is validated by considering six feature extraction methods, three classifiers, and real-time experiments. In addition, the forearm electrode position shows its robustness with the existence of recent works, considering recognition performance, investigated gestures, the number of channels, the dimensionality of feature space, and the number of subjects. Conclusion: The forearm electrode position can be the best choice for getting improved forearm orientation invariant hand gesture recognition performance. Significance: The performance of myoelectric prosthesis and human-computer interfacing devices can be improved with this optimized electrode position. |
10 pa...10 pages, 4 figures, 5 tables |
None |
Forearm Ultrasound based Gesture Recognition on Edge | 2024-09-16 | ShowUltrasound imaging of the forearm has demonstrated significant potential for accurate hand gesture classification. Despite this progress, there has been limited focus on developing a stand-alone, end-to-end gesture recognition system that is mobile, real-time, and more user-friendly. To bridge this gap, this paper explores the deployment of deep neural networks for forearm ultrasound-based hand gesture recognition on edge devices. Utilizing quantization techniques, we achieve substantial reductions in model size while maintaining high accuracy and low latency. Our best model, with Float16 quantization, achieves a test accuracy of 92% and an inference time of 0.31 seconds on a Raspberry Pi. These results demonstrate the feasibility of efficient, real-time gesture recognition on resource-limited edge devices, paving the way for wearable ultrasound-based systems. |
Pleas...Please contact the authors for code and any additional questions pertaining to the project. You can reach Keshav Bimbraw at bimbrawkeshav at gmail dot com |
None |
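Post-training Float16 quantization of the kind mentioned above is a standard TensorFlow Lite workflow; the sketch below assumes a trained `tf.keras` model and an output path, and is not the authors' deployment script.

```python
# Hedged sketch: post-training Float16 quantization with TensorFlow Lite.
import tensorflow as tf

def quantize_float16(model: tf.keras.Model, out_path: str = "gesture_fp16.tflite") -> str:
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]    # enable optimization
    converter.target_spec.supported_types = [tf.float16]    # store weights as float16
    with open(out_path, "wb") as f:
        f.write(converter.convert())
    return out_path

# The resulting .tflite file runs on-device via tf.lite.Interpreter (or tflite_runtime).
```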
Customized Mid-Air Gestures for Accessibility: A $B Recognizer for Multi-Dimensional Biosignal Gestures | 2024-09-12 | ShowBiosignal interfaces, using sensors in, on, or around the body, promise to enhance wearables interaction and improve device accessibility for people with motor disabilities. However, biosignals are multi-modal, multi-dimensional, and noisy, requiring domain expertise to design input features for gesture classifiers. The $B-recognizer enables mid-air gesture recognition without needing expertise in biosignals or algorithms. $B resamples, normalizes, and performs dimensionality reduction to reduce noise and enhance signals relevant to the recognition. We tested $B on a dataset of 26 participants with and 8 participants without upper-body motor disabilities performing personalized ability-based gestures. For two conditions (user-dependent, gesture articulation variability), $B outperformed our comparison algorithms (traditional machine learning with expert features and deep learning), with > 95% recognition rate. For the user-independent condition, $B and deep learning performed comparably for participants with disabilities. Our biosignal dataset is publicly available online. $B highlights the potential and feasibility of accessible biosignal interfaces. |
20 pa...20 pages, 7 figures, 1 table |
None |
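The resample/normalize/dimensionality-reduction recipe described above can be illustrated generically; the sketch below is a plain NumPy/scikit-learn version under assumed signal shapes, not the $B recognizer itself.

```python
# Hedged sketch: resample, normalize, and project multi-channel biosignal gestures.
import numpy as np
from sklearn.decomposition import PCA

def resample(signal: np.ndarray, n_points: int = 64) -> np.ndarray:
    """Linearly resample a (time, channels) biosignal to a fixed length."""
    t_old = np.linspace(0.0, 1.0, len(signal))
    t_new = np.linspace(0.0, 1.0, n_points)
    return np.stack([np.interp(t_new, t_old, signal[:, c])
                     for c in range(signal.shape[1])], axis=1)

def normalize(signal: np.ndarray) -> np.ndarray:
    """Zero-mean, unit-variance per channel."""
    return (signal - signal.mean(axis=0)) / (signal.std(axis=0) + 1e-8)

# gestures: list of (time, channels) arrays from the biosignal sensors
# X = np.stack([normalize(resample(g)).ravel() for g in gestures])
# X_low = PCA(n_components=16).fit_transform(X)   # noise-reducing projection
```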
Advancements in Gesture Recognition Techniques and Machine Learning for Enhanced Human-Robot Interaction: A Comprehensive Review | 2024-09-10 | ShowIn recent years, robots have become an important part of our day-to-day lives, with various applications. Human-robot interaction has a positive impact in the field of robotics, enabling people to interact and communicate with robots. Gesture recognition techniques combined with machine learning algorithms have shown remarkable progress in recent years, particularly in human-robot interaction (HRI). This paper comprehensively reviews the latest advancements in gesture recognition methods and their integration with machine learning approaches to enhance HRI. Furthermore, it presents vision-based gesture recognition for safe and reliable human-robot interaction with a depth-sensing system and analyses the role of machine learning algorithms such as deep learning, reinforcement learning, and transfer learning in improving the accuracy and robustness of gesture recognition systems for effective communication between humans and robots. |
19 pages,1 Figure | None |
MVTN: A Multiscale Video Transformer Network for Hand Gesture Recognition | 2024-09-05 | ShowIn this paper, we introduce a novel Multiscale Video Transformer Network (MVTN) for dynamic hand gesture recognition, since multiscale features can extract features with variable size, pose, and shape of hand which is a challenge in hand gesture recognition. The proposed model incorporates a multiscale feature hierarchy to capture diverse levels of detail and context within hand gestures which enhances the model's ability. This multiscale hierarchy is obtained by extracting different dimensions of attention in different transformer stages with initial stages to model high-resolution features and later stages to model low-resolution features. Our approach also leverages multimodal data, utilizing depth maps, infrared data, and surface normals along with RGB images from NVGesture and Briareo datasets. Experiments show that the proposed MVTN achieves state-of-the-art results with less computational complexity and parameters. The source code is available at https://github.com/mallikagarg/MVTN. |
Code Link | |
UAV (Unmanned Aerial Vehicles): Diverse Applications of UAV Datasets in Segmentation, Classification, Detection, and Tracking | 2024-09-05 | ShowUnmanned Aerial Vehicles (UAVs), have greatly revolutionized the process of gathering and analyzing data in diverse research domains, providing unmatched adaptability and effectiveness. This paper presents a thorough examination of Unmanned Aerial Vehicle (UAV) datasets, emphasizing their wide range of applications and progress. UAV datasets consist of various types of data, such as satellite imagery, images captured by drones, and videos. These datasets can be categorized as either unimodal or multimodal, offering a wide range of detailed and comprehensive information. These datasets play a crucial role in disaster damage assessment, aerial surveillance, object recognition, and tracking. They facilitate the development of sophisticated models for tasks like semantic segmentation, pose estimation, vehicle re-identification, and gesture recognition. By leveraging UAV datasets, researchers can significantly enhance the capabilities of computer vision models, thereby advancing technology and improving our understanding of complex, dynamic environments from an aerial perspective. This review aims to encapsulate the multifaceted utility of UAV datasets, emphasizing their pivotal role in driving innovation and practical applications in multiple domains. |
None | |
A Critical Analysis on Machine Learning Techniques for Video-based Human Activity Recognition of Surveillance Systems: A Review | 2024-09-01 | ShowUpsurging abnormal activities in crowded locations such as airports, train stations, bus stops, and shopping malls urge the need for an intelligent surveillance system. An intelligent surveillance system can differentiate between normal and suspicious activities from real-time video analysis, enabling appropriate measures to be taken, instantaneously and efficiently, according to the level of an anomaly. Video-based human activity recognition has intrigued many researchers with its pressing issues and a variety of applications ranging from simple hand gesture recognition to crucial behavior recognition in a surveillance system. This paper provides a critical survey of video-based Human Activity Recognition (HAR) techniques, beginning with an examination of basic approaches for detecting and recognizing suspicious behavior, followed by a critical analysis of machine learning and deep learning techniques such as Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), Hidden Markov Models (HMM), K-means clustering, etc. A detailed investigation and comparison of these learning techniques is carried out on the basis of feature extraction techniques, parameter initialization, optimization algorithms, accuracy, etc. The purpose of this review is to prioritize positive schemes and to assist researchers with emerging advancements in this field's future endeavors. The paper also pragmatically discusses existing challenges in the field of HAR and examines its prospects. |
None | |
Enhancing Sign Language Detection through Mediapipe and Convolutional Neural Networks (CNN) | 2024-08-27 | ShowThis research combines MediaPipe and CNNs for efficient and accurate interpretation of an ASL dataset for real-time sign language detection. The system captures and processes hand gestures in real time; the intended purpose was to create an easy, accurate, and fast way of entering commands without the need to touch anything. MediaPipe provides a powerful framework for real-time hand tracking, capturing and preprocessing hand movements, which increases the accuracy of the gesture recognition system, and its integration with the CNN yields efficient real-time processing. The model was tested on American Sign Language (ASL) datasets, where it achieved an accuracy of 99.12%. The results were compared to those of existing methods using established evaluation techniques and showed better accuracy than previous models. The system has applications in the communication, education, and accessibility domains; improving such systems will assist people with hearing impairments and make technology more accessible to them. The aim of the research is to identify the characters of American signs from hand images taken with a web camera, based on MediaPipe and CNNs. |
We ha...We have decided to withdraw our paper due to significant revisions and improvements that need to be made based on new findings. After further analysis, we believe these changes are necessary to ensure the accuracy and completeness of our work. We plan to resubmit the revised version in the future once the updates are complete |
None |
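The MediaPipe hand-tracking front end mentioned above can be reproduced in a few lines; the sketch below extracts 21 hand landmarks per webcam frame and hands them to a placeholder classifier (camera index, frame count, and `classifier` are assumptions, not the paper's pipeline).

```python
# Hedged sketch: MediaPipe Hands landmarks as classifier features.
import cv2
import mediapipe as mp
import numpy as np

hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=1)
cap = cv2.VideoCapture(0)                     # default webcam (assumed index)

for _ in range(300):                          # process a bounded number of frames
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        features = np.array([[p.x, p.y, p.z] for p in lm]).flatten()  # 21 x 3 values
        # prediction = classifier.predict(features[None, :])  # hypothetical classifier

cap.release()
hands.close()
```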
Helios: An extremely low power event-based gesture recognition for always-on smart eyewear | 2024-08-26 | ShowThis paper introduces Helios, the first extremely low-power, real-time, event-based hand gesture recognition system designed for all-day use on smart eyewear. As augmented reality (AR) evolves, current smart glasses like the Meta Ray-Bans prioritize visual and wearable comfort at the expense of functionality. Existing human-machine interfaces (HMIs) in these devices, such as capacitive touch and voice controls, present limitations in ergonomics, privacy and power consumption. Helios addresses these challenges by leveraging natural hand interactions for a more intuitive and comfortable user experience. Our system utilizes an extremely low-power and compact 3mmx4mm/20mW event camera to perform natural hand-based gesture recognition for always-on smart eyewear. The camera's output is processed by a convolutional neural network (CNN) running on a NXP Nano UltraLite compute platform, consuming less than 350mW. Helios can recognize seven classes of gestures, including subtle microgestures like swipes and pinches, with 91% accuracy. We also demonstrate real-time performance across 20 users at a remarkably low latency of 60ms. Our user testing results align with the positive feedback we received during our recent successful demo at AWE-USA-2024. |
Accep...Accepted at ECCV-Integrating Computer Vision in Smart Eyewear, 2024. 18 pages, 10 figures. First three authors contributed equally to this paper |
None |
Towards Battery-Free Wireless Sensing via Radio-Frequency Energy Harvesting | 2024-08-26 | ShowDiverse Wi-Fi-based wireless applications have been proposed, ranging from daily activity recognition to vital sign monitoring. Despite their remarkable sensing accuracy, the high energy consumption and the requirement for customized hardware modification hinder the wide deployment of the existing sensing solutions. In this paper, we propose REHSense, an energy-efficient wireless sensing solution based on Radio-Frequency (RF) energy harvesting. Instead of relying on a power-hungry Wi-Fi receiver, REHSense leverages an RF energy harvester as the sensor and utilizes the voltage signals harvested from the ambient Wi-Fi signals to enable simultaneous context sensing and energy harvesting. We design and implement REHSense using a commercial-off-the-shelf (COTS) RF energy harvester. Extensive evaluation of three fine-grained wireless sensing tasks (i.e., respiration monitoring, human activity, and hand gesture recognition) shows that REHSense can achieve comparable sensing accuracy with conventional Wi-Fi-based solutions while adapting to different sensing environments, reducing the power consumption by 98.7% and harvesting up to 4.5mW of power from RF energy. |
None | |
EMG-Based Hand Gesture Recognition through Diverse Domain Feature Enhancement and Machine Learning-Based Approach | 2024-08-25 | ShowSurface electromyography (EMG) serves as a pivotal tool in hand gesture recognition and human-computer interaction, offering a non-invasive means of signal acquisition. This study presents a novel methodology for classifying hand gestures using EMG signals. To address the challenges associated with feature extraction, we explored 23 distinct morphological, time-domain, and frequency-domain feature extraction techniques. However, the substantial number of features can increase computational complexity and hinder machine learning algorithm performance. We employ an efficient feature selection approach, specifically an extra tree classifier, to mitigate this. The selected features are fed into various machine learning-based classification algorithms, and our model achieved 97.43% accuracy with the KNN algorithm on the selected features. By leveraging a comprehensive feature extraction and selection strategy, our methodology enhances the accuracy and usability of EMG-based hand gesture recognition systems. The higher accuracy demonstrates the effectiveness of the proposed model over existing systems. Keywords: EMG signal, machine learning approach, hand gesture recognition. |
None | |
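The EMG entry above describes a pipeline of handcrafted time/frequency-domain features, extra-tree-based feature selection, and a KNN classifier. Below is a minimal sketch of that kind of pipeline in scikit-learn; the specific feature set, window length, and synthetic data are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch: EMG time-domain features -> extra-tree feature selection -> KNN.
# Feature set, window size, and synthetic data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

def time_domain_features(window):
    """window: (n_samples, n_channels) sEMG segment -> flat feature vector."""
    mav = np.mean(np.abs(window), axis=0)                        # mean absolute value
    rms = np.sqrt(np.mean(window ** 2, axis=0))                  # root mean square
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)         # waveform length
    zc = np.sum(np.diff(np.sign(window), axis=0) != 0, axis=0)   # zero crossings
    return np.concatenate([mav, rms, wl, zc])

# Synthetic stand-in data: 300 windows, 200 samples, 8 channels, 6 gestures.
rng = np.random.default_rng(0)
windows = rng.standard_normal((300, 200, 8))
labels = rng.integers(0, 6, size=300)
X = np.stack([time_domain_features(w) for w in windows])

clf = make_pipeline(
    SelectFromModel(ExtraTreesClassifier(n_estimators=200, random_state=0)),
    KNeighborsClassifier(n_neighbors=5),
)
print(cross_val_score(clf, X, labels, cv=3).mean())
```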
Energy-Efficient Spiking Recurrent Neural Network for Gesture Recognition on Embedded GPUs | 2024-08-23 | ShowImplementing AI algorithms on event-based embedded devices enables real-time processing of data, minimizes latency, and enhances power efficiency in edge computing. This research explores the deployment of a spiking recurrent neural network (SRNN) with liquid time constant neurons for gesture recognition. We focus on the energy efficiency and computational efficacy of NVIDIA Jetson Nano embedded GPU platforms. The embedded GPU showcases a 14-fold increase in power efficiency relative to a conventional GPU, making a compelling argument for its use in energy-constrained applications. The study's empirical findings also highlight that batch processing significantly boosts frame rates across various batch sizes while maintaining accuracy levels well above the baseline. These insights validate the SRNN with liquid time constant neurons as a robust model for interpreting temporal-spatial data in gesture recognition, striking a critical balance between processing speed and power frugality. |
None | |
Zero-shot Prompt-based Video Encoder for Surgical Gesture Recognition | 2024-08-21 | ShowPurpose: In order to produce a surgical gesture recognition system that can support a wide variety of procedures, either a very large annotated dataset must be acquired, or fitted models must generalize to new labels (so-called "zero-shot" capability). In this paper we investigate the feasibility of the latter option. Methods: Leveraging the Bridge-Prompt framework, we prompt-tune a pre-trained vision-text model (CLIP) for gesture recognition in surgical videos. This can utilize extensive outside video data as well as text, and also makes use of label meta-data and weakly supervised contrastive losses. Results: Our experiments show that the prompt-based video encoder outperforms standard encoders in surgical gesture recognition tasks. Notably, it displays strong performance in zero-shot scenarios, where gestures/tasks that were not provided during the encoder training phase are included in the prediction phase. Additionally, we measure the benefit of including text descriptions in the feature extractor training schema. Conclusion: Bridge-Prompt and similar pre-trained, prompt-tuned video encoder models provide strong visual representations for surgical robotics, especially in gesture recognition tasks. Given the diverse range of surgical tasks (gestures), the ability of these models to zero-shot transfer without the need for any task (gesture) specific retraining makes them invaluable. |
17 pa...17 pages,4 figures, 7 tables, IPCAI 2024 & IJCARS |
None |
Data-Free Class Incremental Gesture Recognition via Synthetic Feature Sampling | 2024-08-21 | ShowData-Free Class Incremental Learning (DFCIL) aims to enable models to continuously learn new classes while retaining knowledge of old classes, even when the training data for old classes is unavailable. Although explored primarily with image datasets by researchers, this study focuses on investigating DFCIL for skeleton-based gesture classification due to its significant real-world implications, particularly considering the growing prevalence of VR/AR headsets where gestures serve as the primary means of control and interaction. In this work, we made an intriguing observation: skeleton models trained with base classes (even very limited ones) demonstrate strong generalization capabilities to unseen classes without requiring additional training. Building on this insight, we developed Synthetic Feature Replay (SFR), which can sample synthetic features from class prototypes to replay for old classes and augment for new classes (under a few-shot setting). Our proposed method showcases significant advancements over the state-of-the-art, achieving up to 15% enhancements in mean accuracy across all steps and largely mitigating the accuracy imbalance between base classes and new classes. |
None | |
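The Synthetic Feature Replay entry above samples synthetic features from class prototypes to stand in for unavailable old-class data. A minimal sketch of that idea follows, assuming the prototype is a per-class feature mean with a per-class spread; the paper's actual sampling scheme may differ.

```python
# Sketch: sample synthetic replay features around stored class prototypes.
# Prototype = per-class feature mean; spread = per-class std. The real SFR
# method may use a different sampling scheme; this is only illustrative.
import numpy as np

def build_prototypes(features, labels):
    protos = {}
    for c in np.unique(labels):
        feats_c = features[labels == c]
        protos[c] = (feats_c.mean(axis=0), feats_c.std(axis=0) + 1e-6)
    return protos

def sample_synthetic(protos, n_per_class, rng):
    xs, ys = [], []
    for c, (mu, sigma) in protos.items():
        xs.append(rng.normal(mu, sigma, size=(n_per_class, mu.shape[0])))
        ys.append(np.full(n_per_class, c))
    return np.concatenate(xs), np.concatenate(ys)

rng = np.random.default_rng(0)
old_feats = rng.standard_normal((120, 64))      # features from old classes
old_labels = rng.integers(0, 4, size=120)
protos = build_prototypes(old_feats, old_labels)
X_replay, y_replay = sample_synthetic(protos, n_per_class=32, rng=rng)
print(X_replay.shape, y_replay.shape)           # (128, 64) (128,)
```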
An Advanced Deep Learning Based Three-Stream Hybrid Model for Dynamic Hand Gesture Recognition | 2024-08-15 | ShowIn the modern context, hand gesture recognition has emerged as a focal point. This is due to its wide range of applications, which include comprehending sign language, factories, hands-free devices, and guiding robots. Many researchers have attempted to develop more effective techniques for recognizing these hand gestures. However, there are challenges like dataset limitations, variations in hand forms, external environments, and inconsistent lighting conditions. To address these challenges, we propose a novel three-stream hybrid model that combines RGB pixel and skeleton-based features to recognize hand gestures. In the procedure, we preprocessed the dataset, including augmentation, to make the system invariant to rotation, translation, and scaling. We employed a three-stream hybrid model to extract a multi-feature fusion using the power of deep learning modules. In the first stream, we extracted the initial feature using a pre-trained ImageNet module and then enhanced this feature using multiple layers of GRU and LSTM modules. In the second stream, we extracted the initial feature with a pre-trained ResNet module and enhanced it with various combinations of GRU and LSTM modules. In the third stream, we extracted the hand pose keypoints using MediaPipe and then enhanced them using a stacked LSTM to produce a hierarchical feature. After that, we concatenated the three features to produce the final feature vector. Finally, we employed a classification module to produce a probabilistic map and generate the predicted output. We produce a powerful feature vector by combining pixel-based deep learning features and pose-estimation-based stacked deep learning features, including pre-trained models together with a from-scratch deep learning model, for unequalled gesture detection capabilities. |
None | |
A Methodological and Structural Review of Hand Gesture Recognition Across Diverse Data Modalities | 2024-08-10 | ShowResearchers have been developing Hand Gesture Recognition (HGR) systems to enhance natural, efficient, and authentic human-computer interaction, especially benefiting those who rely solely on hand gestures for communication. Despite significant progress, the automatic and precise identification of hand gestures remains a considerable challenge in computer vision. Recent studies have focused on specific modalities like RGB images, skeleton data, and spatiotemporal interest points. This paper provides a comprehensive review of HGR techniques and data modalities from 2014 to 2024, exploring advancements in sensor technology and computer vision. We highlight accomplishments using various modalities, including RGB, Skeleton, Depth, Audio, EMG, EEG, and Multimodal approaches and identify areas needing further research. We reviewed over 200 articles from prominent databases, focusing on data collection, data settings, and gesture representation. Our review assesses the efficacy of HGR systems through their recognition accuracy and identifies a gap in research on continuous gesture recognition, indicating the need for improved vision-based gesture systems. The field has experienced steady research progress, including advancements in hand-crafted features and deep learning (DL) techniques. Additionally, we report on the promising developments in HGR methods and the area of multimodal approaches. We hope this survey will serve as a potential guideline for diverse data modality-based HGR research. |
None | |
Prototype Learning for Micro-gesture Classification | 2024-08-06 | ShowIn this paper, we briefly introduce the solution developed by our team, HFUT-VUT, for the track of Micro-gesture Classification in the MiGA challenge at IJCAI 2024. The micro-gesture classification task involves recognizing the category of a given video clip, which focuses on more fine-grained and subtle body movements compared to typical action recognition tasks. Given the inherent complexity of micro-gesture recognition, which includes large intra-class variability and minimal inter-class differences, we utilize two innovative modules, i.e., the cross-modal fusion module and prototypical refinement module, to improve the discriminative ability of MG features, thereby improving the classification accuracy. Our solution achieved significant success, ranking 1st in the track of Micro-gesture Classification. We surpassed the performance of last year's leading team by a substantial margin, improving Top-1 accuracy by 6.13%. |
1st P...1st Place in Micro-gesture Classification in MiGA at IJCAI-2024 |
None |
SynthoGestures: A Novel Framework for Synthetic Dynamic Hand Gesture Generation for Driving Scenarios | 2024-08-01 | ShowCreating a diverse and comprehensive dataset of hand gestures for dynamic human-machine interfaces in the automotive domain can be challenging and time-consuming. To overcome this challenge, we propose using synthetic gesture datasets generated by virtual 3D models. Our framework utilizes Unreal Engine to synthesize realistic hand gestures, offering customization options and reducing the risk of overfitting. Multiple variants, including gesture speed, performance, and hand shape, are generated to improve generalizability. In addition, we simulate different camera locations and types, such as RGB, infrared, and depth cameras, without incurring additional time and cost to obtain these cameras. Experimental results demonstrate that our proposed framework, SynthoGestures (https://github.com/amrgomaaelhady/SynthoGestures), improves gesture recognition accuracy and can replace or augment real-hand datasets. By saving time and effort in the creation of the data set, our tool accelerates the development of gesture recognition systems for automotive applications. |
Accep...Accepted at IEEE IV'24. Shorter versions were accepted as AutomotiveUI2023 Work in Progress and UIST2023 Poster Papers |
Code Link |
Dynamic Gesture Recognition in Ultra-Range Distance for Effective Human-Robot Interaction | 2024-07-31 | ShowThis paper presents a novel approach for ultra-range gesture recognition, addressing Human-Robot Interaction (HRI) challenges over extended distances. By leveraging human gestures in video data, we propose the Temporal-Spatiotemporal Fusion Network (TSFN) model that surpasses the limitations of current methods, enabling robots to understand gestures from long distances. With applications in service robots, search and rescue operations, and drone-based interactions, our approach enhances HRI in expansive environments. Experimental validation demonstrates significant advancements in gesture recognition accuracy, particularly in prolonged gesture sequences. |
None | |
ALERT-Transformer: Bridging Asynchronous and Synchronous Machine Learning for Real-Time Event-based Spatio-Temporal Data | 2024-07-30 | ShowWe seek to enable classic processing of continuous ultra-sparse spatiotemporal data generated by event-based sensors with dense machine learning models. We propose a novel hybrid pipeline composed of asynchronous sensing and synchronous processing that combines several ideas: (1) an embedding based on PointNet models -- the ALERT module -- that can continuously integrate new and dismiss old events thanks to a leakage mechanism, (2) a flexible readout of the embedded data that allows any downstream model to be fed with always up-to-date features at any sampling rate, (3) exploiting the input sparsity in a patch-based approach inspired by Vision Transformer to optimize the efficiency of the method. These embeddings are then processed by a transformer model trained for object and gesture recognition. Using this approach, we achieve state-of-the-art performance with a lower latency than competitors. We also demonstrate that our asynchronous model can operate at any desired sampling rate. |
Origi...Originally published in the Proceedings of Machine Learning Research ICML 2024 |
None |
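The ALERT-Transformer entry above hinges on continuously integrating new events while letting old ones fade via a leakage mechanism, readable at any sampling rate. The sketch below illustrates only that leak-and-readout idea with a simple decaying event grid; the actual ALERT module is a PointNet-based embedding, which this does not reproduce.

```python
# Sketch: leaky integration of asynchronous events into a dense feature grid,
# readable at any sampling rate. The real ALERT module embeds events with a
# PointNet; this only illustrates the leak-and-readout idea.
import numpy as np

class LeakyEventGrid:
    def __init__(self, height, width, tau=0.05):
        self.state = np.zeros((height, width))
        self.tau = tau            # leak time constant in seconds
        self.last_t = 0.0

    def push(self, t, x, y, polarity):
        """Integrate one event (timestamp t, pixel (x, y), polarity +/-1)."""
        self.state *= np.exp(-(t - self.last_t) / self.tau)   # old events fade
        self.state[y, x] += polarity
        self.last_t = t

    def read(self, t):
        """Return an up-to-date feature map at an arbitrary readout time t."""
        return self.state * np.exp(-(t - self.last_t) / self.tau)

grid = LeakyEventGrid(64, 64)
rng = np.random.default_rng(0)
for t in np.sort(rng.uniform(0, 0.1, size=500)):
    grid.push(t, rng.integers(0, 64), rng.integers(0, 64), rng.choice([-1, 1]))
print(grid.read(0.12).shape)   # (64, 64)
```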
GesturePrint: Enabling User Identification for mmWave-based Gesture Recognition Systems | 2024-07-25 | ShowThe millimeter-wave (mmWave) radar has been exploited for gesture recognition. However, existing mmWave-based gesture recognition methods cannot identify different users, which is important for ubiquitous gesture interaction in many applications. In this paper, we propose GesturePrint, which is the first to achieve gesture recognition and gesture-based user identification using a commodity mmWave radar sensor. GesturePrint features an effective pipeline that enables the gesture recognition system to identify users at a minor additional cost. By introducing an efficient signal preprocessing stage and a network architecture GesIDNet, which employs an attention-based multilevel feature fusion mechanism, GesturePrint effectively extracts unique gesture features for gesture recognition and personalized motion pattern features for user identification. We implement GesturePrint and collect data from 17 participants performing 15 gestures in a meeting room and an office, respectively. GesturePrint achieves a gesture recognition accuracy (GRA) of 98.87% with a user identification accuracy (UIA) of 99.78% in the meeting room, and 98.22% GRA with 99.26% UIA in the office. Extensive experiments on three public datasets and a new gesture dataset show GesturePrint's superior performance in enabling effective user identification for gesture recognition systems. |
Accep...Accepted to the 44th IEEE International Conference on Distributed Computing Systems (ICDCS 2024) |
None |
Zero-Shot Underwater Gesture Recognition | 2024-07-19 | ShowHand gesture recognition allows humans to interact with machines non-verbally, which has a huge application in underwater exploration using autonomous underwater vehicles. Recently, a new gesture-based language called CADDIAN has been devised for divers, and supervised learning methods have been applied to recognize the gestures with high accuracy. However, such methods fail when they encounter unseen gestures in real time. In this work, we advocate the need for zero-shot underwater gesture recognition (ZSUGR), where the objective is to train a model with visual samples of gestures from a few ``seen'' classes only and transfer the gained knowledge at test time to recognize semantically-similar unseen gesture classes as well. After discussing the problem and dataset-specific challenges, we propose new seen-unseen splits for gesture classes in CADDY dataset. Then, we present a two-stage framework, where a novel transformer learns strong visual gesture cues and feeds them to a conditional generative adversarial network that learns to mimic feature distribution. We use the trained generator as a feature synthesizer for unseen classes, enabling zero-shot learning. Extensive experiments demonstrate that our method outperforms the existing zero-shot techniques. We conclude by providing useful insights into our framework and suggesting directions for future research. |
Accep...Accepted to ICPR 2024. 15 pages, 6 figures. Project page: https://github.com/sandipan211/ZSUGR |
Code Link |
Exploring the Boundaries of On-Device Inference: When Tiny Falls Short, Go Hierarchical | 2024-07-10 | ShowOn-device inference holds great potential for increased energy efficiency, responsiveness, and privacy in edge ML systems. However, due to less capable ML models that can be embedded in resource-limited devices, use cases are limited to simple inference tasks such as visual keyword spotting, gesture recognition, and predictive analytics. In this context, the Hierarchical Inference (HI) system has emerged as a promising solution that augments the capabilities of the local ML by offloading selected samples to an edge server or cloud for remote ML inference. Existing works demonstrate through simulation that HI improves accuracy. However, they do not account for the latency and energy consumption on the device, nor do they consider three key heterogeneous dimensions that characterize ML systems: hardware, network connectivity, and models. In contrast, this paper systematically compares the performance of HI with on-device inference based on measurements of accuracy, latency, and energy for running embedded ML models on five devices with different capabilities and three image classification datasets. For a given accuracy requirement, the HI systems we designed achieved up to 73% lower latency and up to 77% lower device energy consumption than an on-device inference system. The key to building an efficient HI system is the availability of small-size, reasonably accurate on-device models whose outputs can be effectively differentiated for samples that require remote inference. Despite the performance gains, HI requires on-device inference for all samples, which adds a fixed overhead to its latency and energy consumption. Therefore, we design a hybrid system, Early Exit with HI (EE-HI), and demonstrate that compared to HI, EE-HI reduces the latency by up to 59.7% and lowers the device's energy consumption by up to 60.4%. |
None | |
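The Hierarchical Inference entry above rests on a simple routing rule: answer on-device when the small model's output can be confidently differentiated, otherwise offload the sample for remote inference. A minimal sketch of that decision rule follows; both models and the confidence threshold are illustrative stand-ins, not the paper's measured systems.

```python
# Sketch: the basic hierarchical-inference decision rule. The small on-device
# model runs on every sample; only samples it cannot classify confidently are
# offloaded to a larger remote model. Models and threshold are assumptions.
import numpy as np

def hierarchical_predict(x, local_model, remote_model, threshold=0.8):
    probs = local_model(x)                    # on-device softmax output
    if probs.max() >= threshold:
        return int(probs.argmax()), "local"   # confident: answer on-device
    return int(remote_model(x).argmax()), "remote"   # otherwise offload

rng = np.random.default_rng(0)
local_model = lambda x: rng.dirichlet(np.full(10, 0.3))   # toy 10-class output
remote_model = lambda x: rng.dirichlet(np.full(10, 0.3))  # toy remote stand-in

routes = [hierarchical_predict(None, local_model, remote_model)[1]
          for _ in range(1000)]
print("offload rate:", routes.count("remote") / len(routes))
```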
On-Device Training Empowered Transfer Learning For Human Activity Recognition | 2024-07-04 | ShowHuman Activity Recognition (HAR) is an attractive topic for perceiving human behavior and supplying assistive services. Besides the classical inertial-unit and vision-based HAR methods, new sensing technologies, such as ultrasound and body-area electric fields, have emerged in HAR to enhance user experience and accommodate new application scenarios. As those sensors are often paired with AI for HAR, they frequently encounter challenges due to limited training data compared to the more widely used IMU- or vision-based HAR solutions. Additionally, user-induced concept drift (UICD) is common in such HAR scenarios. UICD is characterized by deviations in the sample distribution of new users from that of the training participants, leading to deteriorated recognition performance. This paper proposes an on-device transfer learning (ODTL) scheme tailored for energy- and resource-constrained IoT edge devices. Optimized on-device training engines are developed for two representative MCU-level edge computing platforms: STM32F756ZG and GAP9. Based on this, we evaluated the ODTL benefits in three HAR scenarios: body-capacitance-based gym activity recognition, and QVAR- and ultrasonic-based hand gesture recognition. We demonstrated an improvement of 3.73%, 17.38%, and 3.70% in activity recognition accuracy, respectively. Besides this, we observed that the RISC-V-based GAP9 achieves 20x and 280x lower latency and power consumption than the STM32F7 MCU during ODTL deployment, demonstrating the advantages of employing the latest low-power parallel computing devices for edge tasks. |
None | |
Neuro-BERT: Rethinking Masked Autoencoding for Self-supervised Neurological Pretraining | 2024-07-03 | ShowDeep learning associated with neurological signals is poised to drive major advancements in diverse fields such as medical diagnostics, neurorehabilitation, and brain-computer interfaces. The challenge in harnessing the full potential of these signals lies in the dependency on extensive, high-quality annotated data, which is often scarce and expensive to acquire, requiring specialized infrastructure and domain expertise. To address the appetite for data in deep learning, we present Neuro-BERT, a self-supervised pre-training framework of neurological signals based on masked autoencoding in the Fourier domain. The intuition behind our approach is simple: frequency and phase distribution of neurological signals can reveal intricate neurological activities. We propose a novel pre-training task dubbed Fourier Inversion Prediction (FIP), which randomly masks out a portion of the input signal and then predicts the missing information using the Fourier inversion theorem. Pre-trained models can be potentially used for various downstream tasks such as sleep stage classification and gesture recognition. Unlike contrastive-based methods, which strongly rely on carefully hand-crafted augmentations and siamese structure, our approach works reasonably well with a simple transformer encoder with no augmentation requirements. By evaluating our method on several benchmark datasets, we show that Neuro-BERT improves downstream neurological-related tasks by a large margin. |
IEEE ...IEEE Journal of Biomedical and Health Informatics 2024 (camera ready) |
None |
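The Neuro-BERT entry above pre-trains by masking a portion of a neurological signal and predicting the missing information from its frequency content. The sketch below only shows how (masked input, reconstruction target) pairs for such a task could be produced on a toy EEG-like signal; the exact Fourier Inversion Prediction formulation may differ.

```python
# Sketch: building (masked input, reconstruction target) pairs in the spirit of
# masked autoencoding on neurological signals. The exact Fourier Inversion
# Prediction task in Neuro-BERT may differ; this is only an illustration.
import numpy as np

def mask_signal(signal, mask_ratio=0.3, rng=None):
    rng = rng or np.random.default_rng()
    n = signal.shape[-1]
    n_mask = int(n * mask_ratio)
    start = rng.integers(0, n - n_mask)
    mask = np.zeros(n, dtype=bool)
    mask[start:start + n_mask] = True
    masked = signal.copy()
    masked[..., mask] = 0.0                 # zero out the masked span
    target = signal[..., mask]              # model should recover this span
    return masked, target, mask

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256, endpoint=False)
eeg_like = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(256)
masked, target, mask = mask_signal(eeg_like, rng=rng)
# The frequency content of the full signal is what FIP-style pretraining
# ultimately asks the model to reason about:
spectrum = np.abs(np.fft.rfft(eeg_like))
print(masked.shape, target.shape, spectrum.argmax())   # e.g. (256,) (76,) 10
```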
Novel Human Machine Interface via Robust Hand Gesture Recognition System using Channel Pruned YOLOv5s Model | 2024-07-02 | ShowHand gesture recognition (HGR) is a vital component in enhancing the human-computer interaction experience, particularly in multimedia applications, such as virtual reality, gaming, smart home automation systems, etc. Users can control and navigate through these applications seamlessly by accurately detecting and recognizing gestures. However, in a real-time scenario, the performance of the gesture recognition system is sometimes affected due to the presence of complex background, low-light illumination, occlusion problems, etc. Another issue is building a fast and robust gesture-controlled human-computer interface (HCI) in the real-time scenario. The overall objective of this paper is to develop an efficient hand gesture detection and classification model using a channel-pruned YOLOv5-small model and utilize the model to build a gesture-controlled HCI with a quick response time (in ms) and higher detection speed (in fps). First, the YOLOv5s model is chosen for the gesture detection task. Next, the model is simplified by using a channel-pruned algorithm. After that, the pruned model is further fine-tuned to ensure detection efficiency. We have compared our suggested scheme with other state-of-the-art works, and it is observed that our model has shown superior results in terms of mAP (mean average precision), precision (%), recall (%), and F1-score (%), fast inference time (in ms), and detection speed (in fps). Our proposed method paves the way for deploying a pruned YOLOv5s model for a real-time gesture-command-based HCI to control some applications, such as the VLC media player, Spotify player, etc., using correctly classified gesture commands in real-time scenarios. The average detection speed of our proposed system has reached more than 60 frames per second (fps) in real-time, which meets the perfect requirement in real-time application control. |
None | |
CSI4Free: GAN-Augmented mmWave CSI for Improved Pose Classification | 2024-06-28 | ShowIn recent years, Joint Communication and Sensing (JC&S), has demonstrated significant success, particularly in utilizing sub-6 GHz frequencies with commercial-off-the-shelf (COTS) Wi-Fi devices for applications such as localization, gesture recognition, and pose classification. Deep learning and the existence of large public datasets has been pivotal in achieving such results. However, at mmWave frequencies (30-300 GHz), which has shown potential for more accurate sensing performance, there is a noticeable lack of research in the domain of COTS Wi-Fi sensing. Challenges such as limited research hardware, the absence of large datasets, limited functionality in COTS hardware, and the complexities of data collection present obstacles to a comprehensive exploration of this field. In this work, we aim to address these challenges by developing a method that can generate synthetic mmWave channel state information (CSI) samples. In particular, we use a generative adversarial network (GAN) on an existing dataset, to generate 30,000 additional CSI samples. The augmented samples exhibit a remarkable degree of consistency with the original data, as indicated by the notably high GAN-train and GAN-test scores. Furthermore, we integrate the augmented samples in training a pose classification model. We observe that the augmented samples complement the real data and improve the generalization of the classification model. |
None | |
Underwater Human-Robot and Human-Swarm Interaction: A Review and Perspective | 2024-06-21 | ShowThere has been a growing interest in extending the capabilities of autonomous underwater vehicles (AUVs) in subsea missions, particularly in integrating underwater human-robot interaction (UHRI) for control. UHRI and its subfield, underwater gesture recognition (UGR), play a significant role in enhancing diver-robot communication for marine research. This review explores the latest developments in UHRI and examines its promising applications for multi-robot systems. With the developments in UGR, opportunities are presented for underwater robots to work alongside human divers to increase their functionality. Human gestures create a seamless and safe collaborative environment where divers and robots can interact more efficiently. By highlighting the state-of-the-art in this field, we can potentially encourage advancements in underwater multi-robot systems (UMRS), blending the natural communication channels of human-robot interaction with the multi-faceted coordination capabilities of underwater swarms, thus enhancing robustness in complex aquatic environments. |
OCEAN...OCEANS 2024-Singapore, 1-9 |
None |
Deep self-supervised learning with visualisation for automatic gesture recognition | 2024-06-18 | ShowGesture is an important means of non-verbal communication; the visual modality allows humans to convey information during interaction, facilitating both interpersonal and human-machine interactions. However, it is considered difficult to automatically recognise gestures. In this work, we explore three different means of recognising hand signs using deep learning: supervised learning based methods, self-supervised methods, and visualisation based techniques applied to 3D moving skeleton data. Self-supervised learning is used to train fully connected, CNN and LSTM models. Then, a reconstruction method is applied to unlabelled data in simulated settings using a CNN as a backbone, where we use the learnt features to perform prediction on the remaining labelled data. Lastly, Grad-CAM is applied to discover the focus of the models. Our experimental results show that the supervised learning method is capable of recognising gestures accurately, with self-supervised learning increasing the accuracy in simulated settings. Finally, Grad-CAM visualisation shows that the models indeed focus on relevant skeleton joints for the associated gesture. |
Stude...Student research project with company collaboration |
None |
Expanding the Design Space of Computer Vision-based Interactive Systems for Group Dance Practice | 2024-06-17 | ShowGroup dance, a sub-genre characterized by intricate motions made by a cohort of performers in tight synchronization, has a longstanding and culturally significant history and, in modern forms such as cheerleading, a broad base of current adherents. However, despite its popularity, learning group dance routines remains challenging. Based on the prior success of interactive systems to support individual dance learning, this paper argues that group dance settings are fertile ground for augmentation by interactive aids. To better understand these design opportunities, this paper presents a sequence of user-centered studies of and with amateur cheerleading troupes, spanning from the formative (interviews, observations) through the generative (an ideation workshop) to concept validation (technology probes and speed dating). The outcomes are a nuanced understanding of the lived practice of group dance learning, a set of interactive concepts to support those practices, and design directions derived from validating the proposed concepts. Through this empirical work, we expand the design space of interactive dance practice systems from the established context of single-user practice (primarily focused on gesture recognition) to a multi-user, group-based scenario focused on feedback and communication. |
20 pa...20 pages, 10 figures, 1 table, to be published in the proceedings of the ACM Designing Interactive Systems Conference, 2024, (DIS '24) |
None |
Wearable Sensor-Based Few-Shot Continual Learning on Hand Gestures for Motor-Impaired Individuals via Latent Embedding Exploitation | 2024-06-11 | ShowHand gestures can provide a natural means of human-computer interaction and enable people who cannot speak to communicate efficiently. Existing hand gesture recognition methods heavily depend on pre-defined gestures, however, motor-impaired individuals require new gestures tailored to each individual's gesture motion and style. Gesture samples collected from different persons have distribution shifts due to their health conditions, the severity of the disability, motion patterns of the arms, etc. In this paper, we introduce the Latent Embedding Exploitation (LEE) mechanism in our replay-based Few-Shot Continual Learning (FSCL) framework that significantly improves the performance of fine-tuning a model for out-of-distribution data. Our method produces a diversified latent feature space by leveraging a preserved latent embedding known as gesture prior knowledge, along with intra-gesture divergence derived from two additional embeddings. Thus, the model can capture latent statistical structure in highly variable gestures with limited samples. We conduct an experimental evaluation using the SmartWatch Gesture and the Motion Gesture datasets. The proposed method results in an average test accuracy of 57.0%, 64.6%, and 69.3% by using one, three, and five samples for six different gestures. Our method helps motor-impaired persons leverage wearable devices, and their unique styles of movement can be learned and applied in human-computer interaction and social communication. Code is available at: https://github.com/riyadRafiq/wearable-latent-embedding-exploitation |
Accep...Accepted at AI for Social Good track of IJCAI 2024 (the 33rd International Joint Conference on Artificial Intelligence), 14 pages, 11 figures |
Code Link |
The Logarithmic Memristor-Based Bayesian Machine | 2024-06-05 | ShowThe demand for explainable and energy-efficient artificial intelligence (AI) systems for edge computing has led to significant interest in electronic systems dedicated to Bayesian inference. Traditional designs of such systems often rely on stochastic computing, which offers high energy efficiency but suffers from latency issues and struggles with low-probability values. In this paper, we introduce the logarithmic memristor-based Bayesian machine, an innovative design that leverages the unique properties of memristors and logarithmic computing as an alternative to stochastic computing. We present a prototype machine fabricated in a hybrid CMOS/hafnium-oxide memristor process. We validate the versatility and robustness of our system through experimental validation and extensive simulations in two distinct applications: gesture recognition and sleep stage classification. The logarithmic approach simplifies the computational model by converting multiplications into additions and enhances the handling of low-probability events, which are crucial in time-dependent tasks. Our results demonstrate that the logarithmic Bayesian machine achieves superior performance in terms of accuracy and energy efficiency compared to its stochastic counterpart, particularly in scenarios involving complex probabilistic models. This work paves the way for the deployment of advanced AI capabilities in edge devices, where power efficiency and reliability are paramount. |
None | |
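The logarithmic Bayesian machine entry above builds on the fact that, in the log domain, products of likelihoods become sums, which simplifies the hardware. A small software sketch of that core idea follows, using naive-Bayes evidence fusion; the toy likelihood tables and class priors are purely illustrative, not the paper's workloads.

```python
# Sketch: naive Bayes evidence fusion in the log domain, where products of
# likelihoods become sums. The memristor machine realises this in hardware;
# the toy likelihood tables here are purely illustrative.
import numpy as np

log_prior = np.log(np.array([0.5, 0.3, 0.2]))          # 3 gesture classes

# Per-sensor likelihood tables P(observation | class); rows = classes.
log_lik_sensor = [
    np.log(np.array([[0.7, 0.3], [0.4, 0.6], [0.2, 0.8]])),
    np.log(np.array([[0.6, 0.4], [0.5, 0.5], [0.1, 0.9]])),
]

def log_posterior(observations):
    """observations[i] indexes the observed symbol of sensor i."""
    log_post = log_prior.copy()
    for table, obs in zip(log_lik_sensor, observations):
        log_post += table[:, obs]                       # multiply -> add
    return log_post - np.logaddexp.reduce(log_post)     # normalise

print(np.exp(log_posterior([1, 1])))   # posterior over the 3 classes
```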
HHMR: Holistic Hand Mesh Recovery by Enhancing the Multimodal Controllability of Graph Diffusion Models | 2024-06-03 | ShowRecent years have witnessed a trend of the deep integration of the generation and reconstruction paradigms. In this paper, we extend the ability of controllable generative models for a more comprehensive hand mesh recovery task: direct hand mesh generation, inpainting, reconstruction, and fitting in a single framework, which we name as Holistic Hand Mesh Recovery (HHMR). Our key observation is that different kinds of hand mesh recovery tasks can be achieved by a single generative model with strong multimodal controllability, and in such a framework, realizing different tasks only requires giving different signals as conditions. To achieve this goal, we propose an all-in-one diffusion framework based on graph convolution and attention mechanisms for holistic hand mesh recovery. In order to achieve strong control generation capability while ensuring the decoupling of multimodal control signals, we map different modalities to a shared feature space and apply cross-scale random masking in both modality and feature levels. In this way, the correlation between different modalities can be fully exploited during the learning of hand priors. Furthermore, we propose Condition-aligned Gradient Guidance to enhance the alignment of the generated model with the control signals, which significantly improves the accuracy of the hand mesh reconstruction and fitting. Experiments show that our novel framework can realize multiple hand mesh recovery tasks simultaneously and outperform the existing methods in different tasks, which provides more possibilities for subsequent downstream applications including gesture recognition, pose generation, mesh editing, and so on. |
accep...accepted in CVPR2024, project page: https://dw1010.github.io/project/HHMR/HHMR.html |
Code Link |
TENNs-PLEIADES: Building Temporal Kernels with Orthogonal Polynomials | 2024-05-31 | ShowWe introduce a neural network named PLEIADES (PoLynomial Expansion In Adaptive Distributed Event-based Systems), belonging to the TENNs (Temporal Neural Networks) architecture. We focus on interfacing these networks with event-based data to perform online spatiotemporal classification and detection with low latency. By virtue of using structured temporal kernels and event-based data, we have the freedom to vary the sample rate of the data along with the discretization step-size of the network without additional finetuning. We experimented with three event-based benchmarks and obtained state-of-the-art results on all three by large margins with significantly smaller memory and compute costs. We achieved: 1) 99.59% accuracy with 192K parameters on the DVS128 hand gesture recognition dataset and 100% with a small additional output filter; 2) 99.58% test accuracy with 277K parameters on the AIS 2024 eye tracking challenge; and 3) 0.556 mAP with 576k parameters on the PROPHESEE 1 Megapixel Automotive Detection Dataset. |
11 pages, 3 figures | None |
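The PLEIADES entry above builds temporal kernels from orthogonal polynomials, which lets the sample rate and discretization step change without retraining. The sketch below illustrates that basis-expansion idea with Legendre polynomials sampled on the kernel support; the network's actual parameterisation is likely more involved.

```python
# Sketch: temporal kernels expressed as coefficients over an orthogonal
# (Legendre) polynomial basis, so the same learned coefficients can be
# re-sampled at a different step size without retraining. PLEIADES' exact
# parameterisation may differ; this only illustrates the idea.
import numpy as np
from numpy.polynomial import legendre

def temporal_kernel(coeffs, kernel_len):
    """coeffs: (degree+1,) basis weights -> sampled kernel of length kernel_len."""
    t = np.linspace(-1.0, 1.0, kernel_len)      # kernel support mapped to [-1, 1]
    return legendre.legval(t, coeffs)

coeffs = np.array([0.2, -0.5, 0.8, 0.1])        # stand-in for learned parameters
k_coarse = temporal_kernel(coeffs, kernel_len=16)   # original discretisation
k_fine = temporal_kernel(coeffs, kernel_len=64)     # 4x finer step size
print(k_coarse.shape, k_fine.shape)                 # (16,) (64,)
```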
Advancements in Tactile Hand Gesture Recognition for Enhanced Human-Machine Interaction | 2024-05-27 | ShowMotivated by the growing interest in enhancing intuitive physical Human-Machine Interaction (HRI/HVI), this study aims to propose a robust tactile hand gesture recognition system. We performed a comprehensive evaluation of different hand gesture recognition approaches for a large area tactile sensing interface (touch interface) constructed from conductive textiles. Our evaluation encompassed traditional feature engineering methods, as well as contemporary deep learning techniques capable of real-time interpretation of a range of hand gestures, accommodating variations in hand sizes, movement velocities, applied pressure levels, and interaction points. Our extensive analysis of the various methods makes a significant contribution to tactile-based gesture recognition in the field of human-machine interaction. |
None | |
Application of Artificial Intelligence in Hand Gesture Recognition with Virtual Reality: Survey and Analysis of Hand Gesture Hardware Selection | 2024-05-25 | ShowThe ongoing usage of artificial intelligence technologies in virtual reality has led to a large number of researchers exploring immersive virtual reality interaction. Gesture controllers and head-mounted displays are the primary pieces of hardware used in virtual reality applications. This article analyzes the advantages and disadvantages of various hardware tools as well as possible use cases for hand gesture recognition. Several popular data sets used in data pre-processing and hand gesture detection systems are also summarized. This paper's primary objectives are to evaluate the current state of the art in gesture recognition research and to offer potential routes for future study to guide researchers working on immersive virtual reality interaction. Comparing recent study results and taking into account the state of the development of human-computer interaction technology, a number of potential research directions are highlighted, including the following: input data noise should be filtered, a lightweight network should be designed, tactile feedback should be created, and eye and gesture data should be combined. |
None | |
Resonate-and-Fire Spiking Neurons for Target Detection and Hand Gesture Recognition: A Hybrid Approach | 2024-05-22 | ShowHand gesture recognition using radar often relies on computationally expensive fast Fourier transforms. This paper proposes an alternative approach that bypasses fast Fourier transforms using resonate-and-fire neurons. These neurons directly detect the hand in the time-domain signal, eliminating the need for fast Fourier transforms to retrieve range information. Following detection, a simple Goertzel algorithm is employed to extract five key features, eliminating the need for a second fast Fourier transform. These features are then fed into a recurrent neural network, achieving an accuracy of 98.21% for classifying five gestures. The proposed approach demonstrates competitive performance with reduced complexity compared to traditional methods |
None | |
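The resonate-and-fire entry above replaces a second FFT with the Goertzel algorithm, which evaluates signal energy at a handful of target frequencies. Below is a standard Goertzel sketch on a synthetic signal; the chosen target frequencies are arbitrary illustrations, not the paper's radar features.

```python
# Sketch: the standard Goertzel algorithm, evaluating signal power at a few
# target frequencies without a full FFT. Target bins here are arbitrary.
import numpy as np

def goertzel_power(x, target_freq, fs):
    n = len(x)
    k = int(round(n * target_freq / fs))      # nearest DFT bin
    w = 2.0 * np.pi * k / n
    coeff = 2.0 * np.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for sample in x:                          # second-order recurrence
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # Power of the DFT bin closest to target_freq.
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

fs = 1000.0
t = np.arange(256) / fs
x = np.sin(2 * np.pi * 125 * t) + 0.5 * np.sin(2 * np.pi * 62.5 * t)
features = [goertzel_power(x, f, fs) for f in (31.25, 62.5, 125.0, 250.0, 375.0)]
print(np.round(features, 1))   # energy concentrates at the 62.5 Hz and 125 Hz bins
```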
GestFormer: Multiscale Wavelet Pooling Transformer Network for Dynamic Hand Gesture Recognition | 2024-05-18 | ShowTransformer models have achieved state-of-the-art results in many applications like NLP, classification, etc., but their exploration in gesture recognition tasks is still limited. So, we propose a novel GestFormer architecture for dynamic hand gesture recognition. The motivation behind this design is to propose a resource-efficient transformer model, since transformers are computationally expensive and very complex. We therefore propose to use a pooling-based token mixer named PoolFormer, since it uses only a pooling layer, which is non-parametric, instead of quadratic attention. The proposed model also leverages the space-invariant features of the wavelet transform, and multiscale features are selected using multi-scale pooling. Further, a gated mechanism helps to focus on fine details of the gesture with the contextual information. This enhances the performance of the proposed model compared to the traditional transformer with fewer parameters, when evaluated on dynamic hand gesture datasets, the NVidia Dynamic Hand Gesture and Briareo datasets. To prove the efficacy of the proposed model, we have experimented on single as well as multimodal inputs such as infrared, normals, depth, optical flow and color images. We have also compared the proposed GestFormer in terms of resource efficiency and number of operations. The source code is available at https://github.com/mallikagarg/GestFormer. |
Code Link | |
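The GestFormer entry above swaps quadratic attention for a pooling-based token mixer. The sketch below shows that kind of mixer in isolation (a local average pool minus the identity, in the spirit of PoolFormer); window size, shapes, and the NumPy formulation are illustrative assumptions rather than GestFormer's implementation.

```python
# Sketch: a pooling-based token mixer in the spirit of PoolFormer, replacing
# quadratic self-attention with a parameter-free local average pool minus the
# identity. Window size and shapes are illustrative assumptions.
import numpy as np

def pool_token_mixer(tokens, pool_size=3):
    """tokens: (seq_len, dim). Local average pooling along the sequence, minus
    the input itself (the residual branch adds `tokens` back outside)."""
    seq_len, _ = tokens.shape
    pad = pool_size // 2
    padded = np.pad(tokens, ((pad, pad), (0, 0)), mode="edge")
    pooled = np.stack([padded[i:i + pool_size].mean(axis=0)
                       for i in range(seq_len)])
    return pooled - tokens

x = np.random.default_rng(0).standard_normal((10, 16))   # 10 tokens, dim 16
print(pool_token_mixer(x).shape)                          # (10, 16)
```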
Air Signing and Privacy-Preserving Signature Verification for Digital Documents | 2024-05-17 | ShowThis paper presents a novel approach to the digital signing of electronic documents through the use of a camera-based interaction system, single-finger tracking for sign recognition, and hand gestures for executing multiple commands. The proposed solution, referred to as "Air Signature," involves writing the signature in front of the camera, rather than relying on traditional methods such as mouse drawing or physically signing on paper and showing it to a web camera. The goal is to develop a state-of-the-art method for detecting and tracking gestures and objects in real-time. The proposed methods include applying existing gesture recognition and object tracking systems, improving accuracy through smoothing and line drawing, and maintaining continuity during fast finger movements. An evaluation of the fingertip detection, sketching, and overall signing process is performed to assess the effectiveness of the proposed solution. The secondary objective of this research is to develop a model that can effectively recognize the unique signature of a user. This type of signature can be verified by neural cores that analyze the movement, speed, and stroke pixels of the signing in real time. The neural cores use machine learning algorithms to match air signatures to the individual's stored signatures, providing a secure and efficient method of verification. Our proposed system does not require sensors or any hardware other than the camera. |
None | |
Pose2Gest: A Few-Shot Model-Free Approach Applied In South Indian Classical Dance Gesture Recognition | 2024-05-17 | ShowThe classical dances from India utilize a set of hand gestures known as Mudras, serving as the foundational elements of its posture vocabulary. Identifying these mudras represents a primary task in digitizing the dance performances. With Kathakali, a dance-drama, as the focus, this work addresses mudra recognition by framing it as a 24-class classification problem and proposes a novel vector-similarity-based approach leveraging pose estimation techniques. This method obviates the need for extensive training or fine-tuning, thus mitigating the issue of limited data availability common in similar AI applications. Achieving an accuracy rate of 92%, our approach demonstrates comparable or superior performance to existing model-training-based methodologies in this domain. Notably, it remains effective even with small datasets comprising just 1 or 5 samples, albeit with a slightly diminished performance. Furthermore, our system supports processing images, videos, and real-time streams, accommodating both hand-cropped and full-body images. As part of this research, we have curated and released a publicly accessible Hasta Mudra dataset, which applies to multiple South Indian art forms including Kathakali. The implementation of the proposed method is also made available as a web application. |
None | |
MEET: Mixture of Experts Extra Tree-Based sEMG Hand Gesture Identification | 2024-05-06 | ShowArtificial intelligence (AI) has made significant advances in recent years and opened up new possibilities in exploring applications in various fields such as biomedical, robotics, education, industry, etc. Among these fields, human hand gesture recognition is a subject of study that has recently emerged as a research interest in robotic hand control using electromyography (EMG). Surface electromyography (sEMG) is a primary technique used in EMG, which is popular due to its non-invasive nature and is used to capture gesture movements using signal acquisition devices placed on the surface of the forearm. Moreover, these signals are pre-processed to extract significant handcrafted features through time and frequency domain analysis. These are helpful and act as input to machine learning (ML) models to identify hand gestures. However, handling multiple classes and biases are major limitations that can affect the performance of an ML model. Therefore, to address this issue, a new mixture of experts extra tree (MEET) model is proposed to identify more accurate and effective hand gesture movements. This model combines individual ML models referred to as experts, each focusing on a minimal class of two. Moreover, a fully trained model known as the gate is employed to weigh the output of individual expert models. This amalgamation of the expert models with the gate model is known as a mixture of experts extra tree (MEET) model. In this study, four subjects with six hand gesture movements have been considered and their identification is evaluated among eleven models, including the MEET classifier. Results elucidate that the MEET classifier performed best among other algorithms and identified hand gesture movement accurately. |
None | |
Enhancing Micro Gesture Recognition for Emotion Understanding via Context-aware Visual-Text Contrastive Learning | 2024-05-03 | ShowPsychological studies have shown that Micro Gestures (MG) are closely linked to human emotions. MG-based emotion understanding has attracted much attention because it allows for emotion understanding through nonverbal body gestures without relying on identity information (e.g., facial and electrocardiogram data). Therefore, it is essential to recognize MG effectively for advanced emotion understanding. However, existing Micro Gesture Recognition (MGR) methods utilize only a single modality (e.g., RGB or skeleton) while overlooking crucial textual information. In this letter, we propose a simple but effective visual-text contrastive learning solution that utilizes text information for MGR. In addition, instead of using handcrafted prompts for visual-text contrastive learning, we propose a novel module called Adaptive prompting to generate context-aware prompts. The experimental results show that the proposed method achieves state-of-the-art performance on two public datasets. Furthermore, based on an empirical study utilizing the results of MGR for emotion understanding, we demonstrate that using the textual results of MGR significantly improves performance by 6%+ compared to directly using video as input. |
accep...accepted by IEEE Signal Processing Letters |
None |
Unimodal and Multimodal Sensor Fusion for Wearable Activity Recognition | 2024-04-24 | ShowCombining different sensing modalities with multiple positions helps form a unified perception and understanding of complex situations such as human behavior. Hence, human activity recognition (HAR) benefits from combining redundant and complementary information (Unimodal/Multimodal). Even so, it is not an easy task. It requires a multidisciplinary approach, including expertise in sensor technologies, signal processing, data fusion algorithms, and domain-specific knowledge. This Ph.D. work employs sensing modalities such as inertial, pressure (audio and atmospheric pressure), and textile capacitive sensing for HAR. The scenarios explored are gesture and hand position tracking, facial and head pattern recognition, and body posture and gesture recognition. The selected wearable devices and sensing modalities are fully integrated with machine learning-based algorithms, some of which are implemented in the embedded device, on the edge, and tested in real-time. |
Accep...Accepted in IEEE 22nd International Conference on Pervasive Computing and Communications (PerCom 2024) |
None |
G3R: Generating Rich and Fine-grained mmWave Radar Data from 2D Videos for Generalized Gesture Recognition | 2024-04-23 | ShowMillimeter wave radar is gaining traction recently as a promising modality for enabling pervasive and privacy-preserving gesture recognition. However, the lack of rich and fine-grained radar datasets hinders progress in developing generalized deep learning models for gesture recognition across various user postures (e.g., standing, sitting), positions, and scenes. To remedy this, we resort to designing a software pipeline that exploits plentiful 2D videos to generate realistic radar data, but it needs to address the challenge of simulating diversified and fine-grained reflection properties of user gestures. To this end, we design G3R with three key components: (i) a gesture reflection point generator expands the arm's skeleton points to form human reflection points; (ii) a signal simulation model simulates the multipath reflection and attenuation of radar signals to output the human intensity map; (iii) an encoder-decoder model combines a sampling module and a fitting module to address the differences in number and distribution of points between generated and real-world radar data for generating realistic radar data. We implement and evaluate G3R using 2D videos from public data sources and self-collected real-world radar data, demonstrating its superiority over other state-of-the-art approaches for gesture recognition. |
18 pages, 29 figures | None |
Stream State-tying for Sign Language Recognition | 2024-04-21 | ShowIn this paper, a novel approach to sign language recognition based on state tying in each of the data streams is presented. In this framework, it is assumed that the hand gesture signal is represented in terms of six synchronous data streams, i.e., the left/right hand position, left/right hand orientation and left/right handshape. This approach offers a very accurate representation of the sign space and keeps the number of parameters reasonably small in favor of fast decoding. Experiments were carried out on 5177 Chinese signs. The real-time isolated recognition rate is 94.8%. For continuous sign recognition, the word correct rate is 91.4%. Keywords: Sign language recognition; Automatic sign language translation; Hand gesture recognition; Hidden Markov models; State-tying; Multimodal user interface; Virtual reality; Man-machine systems. |
None | |
Towards Robust and Interpretable EMG-based Hand Gesture Recognition using Deep Metric Meta Learning | 2024-04-17 | ShowCurrent electromyography (EMG) pattern recognition (PR) models have been shown to generalize poorly in unconstrained environments, setting back their adoption in applications such as hand gesture control. This problem is often due to limited training data, exacerbated by the use of supervised classification frameworks that are known to be suboptimal in such settings. In this work, we propose a shift to deep metric-based meta-learning in EMG PR to supervise the creation of meaningful and interpretable representations. We use a Siamese Deep Convolutional Neural Network (SDCNN) and contrastive triplet loss to learn an EMG feature embedding space that captures the distribution of the different classes. A nearest-centroid approach is subsequently employed for inference, relying on how closely a test sample aligns with the established data distributions. We derive a robust class proximity-based confidence estimator that leads to a better rejection of incorrect decisions, i.e. false positives, especially when operating beyond the training data domain. We show our approach's efficacy by testing the trained SDCNN's predictions and confidence estimations on unseen data, both in and out of the training domain. The evaluation metrics include the accuracy-rejection curve and the Kullback-Leibler divergence between the confidence distributions of accurate and inaccurate predictions. Outperforming comparable models on both metrics, our results demonstrate that the proposed meta-learning approach improves the classifier's precision in active decisions (after rejection), thus leading to better generalization and applicability. |
11 pa...11 pages, 9 figures, submitted to IEEE Transactions on Neural Networks and Learning Systems |
None |
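The deep metric meta-learning entry above classifies by assigning a test embedding to its nearest class centroid and derives a proximity-based confidence to reject uncertain decisions. A minimal sketch of that inference step follows; the embedding network, distance choice, and rejection threshold are illustrative assumptions, not the paper's SDCNN setup.

```python
# Sketch: nearest-centroid inference over learned embeddings with a simple
# distance-based confidence for rejecting uncertain predictions. The embedding
# network, distance, and rejection threshold are illustrative assumptions.
import numpy as np

def class_centroids(embeddings, labels):
    return {c: embeddings[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict_with_confidence(z, centroids):
    classes = list(centroids)
    d = np.array([np.linalg.norm(z - centroids[c]) for c in classes])
    # Softmax over negative distances: closer centroid -> higher confidence.
    p = np.exp(-d) / np.exp(-d).sum()
    best = int(p.argmax())
    return classes[best], float(p[best])

rng = np.random.default_rng(0)
train_z = np.concatenate([rng.normal(c, 0.3, size=(50, 8)) for c in range(4)])
train_y = np.repeat(np.arange(4), 50)
cents = class_centroids(train_z, train_y)

label, conf = predict_with_confidence(rng.normal(2, 0.3, size=8), cents)
print(label, round(conf, 2), "accept" if conf > 0.5 else "reject")
```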
Revisiting Noise Resilience Strategies in Gesture Recognition: Short-Term Enhancement in Surface Electromyographic Signal Analysis | 2024-04-17 | ShowGesture recognition based on surface electromyography (sEMG) has been gaining importance in many 3D Interactive Scenes. However, sEMG is easily influenced by various forms of noise in real-world environments, leading to challenges in providing long-term stable interactions through sEMG. Existing methods often struggle to enhance model noise resilience through various predefined data augmentation techniques. In this work, we revisit the problem from a short term enhancement perspective to improve precision and robustness against various common noisy scenarios with learnable denoise using sEMG intrinsic pattern information and sliding-window attention. We propose a Short Term Enhancement Module(STEM) which can be easily integrated with various models. STEM offers several benefits: 1) Learnable denoise, enabling noise reduction without manual data augmentation; 2) Scalability, adaptable to various models; and 3) Cost-effectiveness, achieving short-term enhancement through minimal weight-sharing in an efficient attention mechanism. In particular, we incorporate STEM into a transformer, creating the Short Term Enhanced Transformer (STET). Compared with best-competing approaches, the impact of noise on STET is reduced by more than 20%. We also report promising results on both classification and regression datasets and demonstrate that STEM generalizes across different gesture recognition tasks. |
None | |
A Technique for Classifying Static Gestures Using UWB Radar | 2024-04-12 | ShowOur paper presents a robust framework for UWB-based static gesture recognition, leveraging proprietary UWB radar sensor technology. Extensive data collection efforts were undertaken to compile datasets containing five commonly used gestures. Our approach involves a comprehensive data pre-processing pipeline that encompasses outlier handling, aspect ratio-preserving resizing, and false-color image transformation. Both CNN and MobileNet models were trained on the processed images. Remarkably, our best-performing model achieved an accuracy of 96.78%. Additionally, we developed a user-friendly GUI framework to assess the model's system resource usage and processing times, which revealed low memory utilization and real-time task completion in under one second. This research marks a significant step towards enhancing static gesture recognition using UWB technology, promising practical applications in various domains. |
This ...This is not a technical research paper, but an excerpt of what was applied during a funded project for the promotion of Open Science |
None |
Ultra-Range Gesture Recognition using a Web-Camera in Human-Robot Interaction | 2024-04-10 | ShowHand gestures play a significant role in human interactions where non-verbal intentions, thoughts and commands are conveyed. In Human-Robot Interaction (HRI), hand gestures offer a similar and efficient medium for conveying clear and rapid directives to a robotic agent. However, state-of-the-art vision-based methods for gesture recognition have been shown to be effective only up to a user-camera distance of seven meters. Such a short distance range limits practical HRI with, for example, service robots, search and rescue robots and drones. In this work, we address the Ultra-Range Gesture Recognition (URGR) problem by aiming for a recognition distance of up to 25 meters and in the context of HRI. We propose the URGR framework, a novel deep-learning approach using solely a simple RGB camera. Gesture inference is based on a single image. First, a novel super-resolution model termed High-Quality Network (HQ-Net) uses a set of self-attention and convolutional layers to enhance the low-resolution image of the user. Then, we propose a novel URGR classifier termed Graph Vision Transformer (GViT) which takes the enhanced image as input. GViT combines the benefits of a Graph Convolutional Network (GCN) and a modified Vision Transformer (ViT). Evaluation of the proposed framework over diverse test data yields a high recognition rate of 98.1%. The framework has also exhibited superior performance compared to human recognition in ultra-range distances. With the framework, we analyze and demonstrate the performance of an autonomous quadruped robot directed by human gestures in complex ultra-range indoor and outdoor environments, acquiring a 96% recognition rate on average. |
Engin...Engineering Applications of Artificial Intelligence, In press |
None |
SDFR: Synthetic Data for Face Recognition Competition | 2024-04-09 | ShowLarge-scale face recognition datasets are collected by crawling the Internet and without individuals' consent, raising legal, ethical, and privacy concerns. With the recent advances in generative models, several works have proposed generating synthetic face recognition datasets to mitigate the concerns of web-crawled face recognition datasets. This paper presents the summary of the Synthetic Data for Face Recognition (SDFR) Competition held in conjunction with the 18th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2024) and established to investigate the use of synthetic data for training face recognition models. The SDFR competition was split into two tasks, allowing participants to train face recognition systems using new synthetic datasets and/or existing ones. In the first task, the face recognition backbone was fixed and the dataset size was limited, while the second task provided almost complete freedom on the model backbone, the dataset, and the training pipeline. The submitted models were trained on existing as well as new synthetic datasets and used clever methods to improve training with synthetic data. The submissions were evaluated and ranked on a diverse set of seven benchmarking datasets. The paper gives an overview of the submitted face recognition models and reports achieved performance compared to baseline models trained on real and synthetic datasets. Furthermore, the evaluation of submissions is extended to bias assessment across different demographic groups. Lastly, an outlook on the current state of the research in training face recognition models using synthetic data is presented, and existing problems as well as potential future directions are also discussed. |
The 1...The 18th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2024) |
None |
Optimizing the Deployment of Tiny Transformers on Low-Power MCUs | 2024-04-03 | ShowTransformer networks are rapidly becoming SotA in many fields, such as NLP and CV. Similarly to CNNs, there is a strong push for deploying Transformer models at the extreme edge, ultimately fitting the tiny power budget and memory footprint of MCUs. However, the early approaches in this direction are mostly ad-hoc, platform-, and model-specific. This work aims to enable and optimize the flexible, multi-platform deployment of encoder Tiny Transformers on commercial MCUs. We propose a complete framework to perform end-to-end deployment of Transformer models onto single and multi-core MCUs. Our framework provides an optimized library of kernels to maximize data reuse and avoid unnecessary data marshaling operations into the crucial attention block. A novel MHSA inference schedule, named Fused-Weight Self-Attention, is introduced, fusing the linear projection weights offline to further reduce the number of operations and parameters. Furthermore, to mitigate the memory peak reached by the computation of the attention map, we present a Depth-First Tiling scheme for MHSA. We evaluate our framework on three different MCU classes exploiting the ARM and RISC-V ISAs, namely the STM32H7, the STM32L4, and GAP9 (RV32IMC-XpulpV2). We reach an average of 4.79x and 2.0x lower latency compared to the SotA libraries CMSIS-NN (ARM) and PULP-NN (RISC-V), respectively. Moreover, we show that our MHSA depth-first tiling scheme reduces the memory peak by up to 6.19x, while the fused-weight attention can reduce the runtime by 1.53x and the number of parameters by 25%. We report significant improvements across several Tiny Transformers: for instance, when executing a transformer block for the task of radar-based hand-gesture recognition on GAP9, we achieve a latency of 0.14ms and an energy consumption of 4.92 micro-joules, 2.32x lower than the SotA PULP-NN library on the same platform. |
Pre-p...Pre-print manuscript submitted for review to the IEEE Transactions on Computers |
None |
Multi-Scale Spatial-Temporal Self-Attention Graph Convolutional Networks for Skeleton-based Action Recognition | 2024-04-03 | ShowSkeleton-based gesture recognition methods have achieved high success using Graph Convolutional Networks (GCN). In addition, context-dependent adaptive topology as neighborhood vertex information and an attention mechanism allow a model to better represent actions. In this paper, we propose a self-attention GCN hybrid model, Multi-Scale Spatial-Temporal self-attention (MSST)-GCN, to effectively improve modeling ability and achieve state-of-the-art results on several datasets. We utilize a spatial self-attention module with adaptive topology to understand intra-frame interactions within a frame among different body parts, and a temporal self-attention module to examine correlations between frames of a node. These two modules are followed by a multi-scale convolutional network with dilations, which captures not only the long-range temporal dependencies of joints but also the long-range spatial dependencies (i.e., long-distance dependencies) of node temporal behaviors. These are combined into high-level spatial-temporal representations, and the predicted action is output with a softmax classifier. |
9 pages, 3 figures | None |
iFace: Hand-Over-Face Gesture Recognition Leveraging Impedance Sensing | 2024-03-27 | ShowHand-over-face gestures can provide important implicit interactions during conversations, such as frustration or excitement. However, in situations where interlocutors are not visible, such as phone calls or textual communication, the potential meaning contained in the hand-over-face gestures is lost. In this work, we present iFace, an unobtrusive, wearable impedance-sensing solution for recognizing different hand-over-face gestures. In contrast to most existing works, iFace does not require the placement of sensors on the user's face or hands. Instead, we propose a novel sensing configuration, the shoulders, which remains invisible to both the user and outside observers. The system can monitor the shoulder-to-shoulder impedance variation caused by gestures through electrodes attached to each shoulder. We evaluated iFace in a user study with eight participants, collecting six kinds of hand-over-face gestures with different meanings. Using a convolutional neural network and user-dependent classification, iFace reaches an 82.58% macro F1 score. We discuss potential application scenarios of iFace as an implicit interaction interface. |
Accep...Accepted by Augmented Humans 2024 |
None |
An Evolutionary Network Architecture Search Framework with Adaptive Multimodal Fusion for Hand Gesture Recognition | 2024-03-27 | ShowHand gesture recognition (HGR) based on multimodal data has attracted considerable attention owing to its great potential in applications. Various manually designed multimodal deep networks have performed well in multimodal HGR (MHGR), but most existing algorithms require a lot of expert experience and time-consuming manual trials. To address these issues, we propose an evolutionary network architecture search framework with adaptive multimodal fusion (AMF-ENAS). Specifically, we design an encoding space that simultaneously considers fusion positions and ratios of the multimodal data, allowing for the automatic construction of multimodal networks with different architectures through decoding. Additionally, we consider three input streams corresponding to intra-modal surface electromyography (sEMG), intra-modal accelerometer (ACC), and inter-modal sEMG-ACC. To automatically adapt to various datasets, the ENAS framework is designed to automatically search for an MHGR network with appropriate fusion positions and ratios. To the best of our knowledge, this is the first time that ENAS has been utilized in MHGR to tackle issues related to the fusion position and ratio of multimodal data. Experimental results demonstrate that AMF-ENAS achieves state-of-the-art performance on the Ninapro DB2, DB3, and DB7 datasets. |
None | |
User Training with Error Augmentation for Electromyogram-based Gesture Classification | 2024-03-22 | ShowWe designed and tested a system for real-time control of a user interface by extracting surface electromyographic (sEMG) activity from eight electrodes in a wrist-band configuration. sEMG data were streamed into a machine-learning algorithm that classified hand gestures in real-time. After an initial model calibration, participants were presented with one of three types of feedback during a human-learning stage: veridical feedback, in which predicted probabilities from the gesture classification algorithm were displayed without alteration, modified feedback, in which we applied a hidden augmentation of error to these probabilities, and no feedback. User performance was then evaluated in a series of minigames, in which subjects were required to use eight gestures to manipulate their game avatar to complete a task. Experimental results indicated that, relative to baseline, the modified feedback condition led to significantly improved accuracy and improved gesture class separation. These findings suggest that real-time feedback in a gamified user interface with manipulation of feedback may enable intuitive, rapid, and accurate task acquisition for sEMG-based gesture recognition applications. |
10 pa...10 pages, 10 figures. V2: Fix latex characters in author name. V3: Add published DOI and Copyright notice |
None |
CoPlay: Audio-agnostic Cognitive Scaling for Acoustic Sensing | 2024-03-16 | ShowAcoustic sensing manifests great potential in various applications that encompass health monitoring, gesture interfaces and imaging by leveraging the speakers and microphones on smart devices. However, in ongoing research and development in acoustic sensing, one problem is often overlooked: the same speaker, when used concurrently for sensing and other traditional applications (like playing music), could cause interference in both, making it impractical to use in the real world. The strong ultrasonic sensing signals mixed with music would overload the speaker's mixer. To confront this issue of overloaded signals, current solutions are clipping or down-scaling, both of which affect the music playback quality as well as the sensing range and accuracy. To address this challenge, we propose CoPlay, a deep learning based optimization algorithm to cognitively adapt the sensing signal. It can 1) maximize the sensing signal magnitude within the available bandwidth left by the concurrent music to optimize sensing range and accuracy and 2) minimize any consequential frequency distortion that can affect music playback. In this work, we design a deep learning model and test it on common types of sensing signals (sine wave or Frequency Modulated Continuous Wave, FMCW) as inputs with various agnostic concurrent music and speech. First, we evaluated the model performance to show the quality of the generated signals. Then we conducted field studies of downstream acoustic sensing tasks in the real world. A study with 12 users showed that respiration monitoring and gesture recognition using our adapted signal achieve accuracy similar to no-concurrent-music scenarios, while clipping or down-scaling yields worse accuracy. A qualitative study also shows that the music playback quality is not degraded, unlike with traditional clipping or down-scaling methods. |
None | |
High-speed Low-consumption sEMG-based Transient-state micro-Gesture Recognition | 2024-03-13 | ShowGesture recognition on wearable devices is extensively applied in human-computer interaction. Electromyography (EMG) has been used in many gesture recognition systems for its rapid perception of muscle signals. However, analyzing EMG signals on devices such as smart wristbands usually requires inference models with high performance, i.e., low inference latency, low power consumption, and low memory occupation. Therefore, this paper proposes an improved spiking neural network (SNN) to achieve these goals. We propose an adaptive multi-delta coding as a spiking coding method to improve recognition accuracy. We propose two additive solvers for the SNN, which can significantly reduce inference energy consumption and the number of parameters, and improve the robustness to temporal differences. In addition, we propose a linear action detection method, TAD-LIF, which is suitable for SNNs. TAD-LIF is an improved LIF neuron that can detect transient-state gestures quickly and accurately. We collected two datasets from 20 subjects including 6 micro gestures. The collection devices are two self-designed lightweight consumer-level sEMG wristbands (with 3 and 8 electrode channels, respectively). Compared to CNN-, FCN-, and normal SNN-based methods, the proposed SNN has higher recognition accuracy. The accuracy of the proposed SNN is 83.85% and 93.52% on the two datasets, respectively. In addition, the inference latency of the proposed SNN is about 1% that of the CNN, the power consumption is about 0.1% that of the CNN, and the memory occupation is about 20% that of the CNN. The proposed methods can be used for precise, high-speed, and low-power micro-gesture recognition tasks, and are suitable for consumer-level intelligent wearable devices, offering a general way to achieve ubiquitous computing. |
None | |
Agile gesture recognition for low-power applications: customisation for generalisation | 2024-03-12 | ShowAutomated hand gesture recognition has long been a focal point in the AI community. Traditionally, research in this field has predominantly focused on scenarios with access to a continuous flow of hand images. This focus has been driven by the widespread use of cameras and the abundant availability of image data. However, there is an increasing demand for gesture recognition technologies that operate on low-power sensor devices. This is due to rising concerns about data leakage and end-user privacy, as well as the limited battery capacity and computing power of low-cost devices. Moreover, the challenge of data collection for individually designed hardware also hinders the generalisation of a gesture recognition model. In this study, we unveil a novel methodology for pattern recognition systems using adaptive and agile error correction, designed to enhance the performance of legacy gesture recognition models on devices with limited battery capacity and computing power. This system comprises a compact Support Vector Machine as the base model for live gesture recognition. Additionally, it features an adaptive agile error corrector that employs few-shot learning within the feature space induced by high-dimensional kernel mappings. The error corrector can be customised for each user, allowing for dynamic adjustments to the gesture prediction based on their movement patterns while maintaining the agile performance of its base model on a low-cost and low-power micro-controller. The proposed system is distinguished by its compact size, rapid processing speed, and low power consumption, making it ideal for a wide range of embedded systems. |
None | |
Multimodal Transformers for Real-Time Surgical Activity Prediction | 2024-03-11 | ShowReal-time recognition and prediction of surgical activities are fundamental to advancing safety and autonomy in robot-assisted surgery. This paper presents a multimodal transformer architecture for real-time recognition and prediction of surgical gestures and trajectories based on short segments of kinematic and video data. We conduct an ablation study to evaluate the impact of fusing different input modalities and their representations on gesture recognition and prediction performance. We perform an end-to-end assessment of the proposed architecture using the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS) dataset. Our model outperforms the state-of-the-art (SOTA) with 89.5% accuracy for gesture prediction through effective fusion of kinematic features with spatial and contextual video features. It achieves the real-time performance of 1.1-1.3ms for processing a 1-second input window by relying on a computationally efficient model. |
This ...This work has been submitted to the IEEE for possible publication |
None |
A Simple Baseline for Efficient Hand Mesh Reconstruction | 2024-03-04 | Show3D hand pose estimation has found broad application in areas such as gesture recognition and human-machine interaction tasks. As performance improves, the complexity of the systems also increases, which can limit the comparative analysis and practical implementation of these methods. In this paper, we propose a simple yet effective baseline that not only surpasses state-of-the-art (SOTA) methods but also demonstrates computational efficiency. To establish this baseline, we abstract existing work into two components: a token generator and a mesh regressor, and then examine their core structures. A core structure, in this context, is one that fulfills intrinsic functions, brings about significant improvements, and achieves excellent performance without unnecessary complexities. Our proposed approach is decoupled from any modifications to the backbone, making it adaptable to any modern models. Our method outperforms existing solutions, achieving state-of-the-art (SOTA) results across multiple datasets. On the FreiHAND dataset, our approach produced a PA-MPJPE of 5.7mm and a PA-MPVPE of 6.0mm. Similarly, on the DexYCB dataset, we observed a PA-MPJPE of 5.5mm and a PA-MPVPE of 5.0mm. As for inference speed, our method reached up to 33 frames per second (fps) when using HRNet and up to 70 fps when employing FastViT-MA36. |
None | |
On the Design of Human-Robot Collaboration Gestures | 2024-02-29 | ShowEffective communication between humans and collaborative robots is essential for seamless Human-Robot Collaboration (HRC). In noisy industrial settings, nonverbal communication, such as gestures, plays a key role in conveying commands and information to robots efficiently. While existing literature has thoroughly examined gesture recognition and robots' responses to these gestures, there is a notable gap in exploring the design of these gestures. The criteria for creating efficient HRC gestures are scattered across numerous studies. This paper surveys the design principles of HRC gestures, as contained in the literature, aiming to consolidate a set of criteria for HRC gesture design. It also examines the methods used for designing and evaluating HRC gestures to highlight research gaps and present directions for future research in this area. |
None | |
Snapture -- A Novel Neural Architecture for Combined Static and Dynamic Hand Gesture Recognition | 2024-02-27 | ShowAs robots are expected to get more involved in people's everyday lives, frameworks that enable intuitive user interfaces are in demand. Hand gesture recognition systems provide a natural way of communication and, thus, are an integral part of seamless Human-Robot Interaction (HRI). Recent years have witnessed an immense evolution of computational models powered by deep learning. However, state-of-the-art models fall short in expanding across different gesture domains, such as emblems and co-speech. In this paper, we propose a novel hybrid hand gesture recognition system. Our architecture enables learning both static and dynamic gestures: by capturing a so-called "snapshot" of the gesture performance at its peak, we integrate the hand pose along with the dynamic movement. Moreover, we present a method for analyzing the motion profile of a gesture to uncover its dynamic characteristics, which allows regulating a static channel based on the amount of motion. Our evaluation demonstrates the superiority of our approach on two gesture benchmarks compared to a CNNLSTM baseline. We also provide an analysis on a gesture class basis that unveils the potential of our Snapture architecture for performance improvements. Thanks to its modular implementation, our framework allows the integration of other multimodal data like facial expressions and head tracking, which are important cues in HRI scenarios, into one architecture. Thus, our work contributes both to gesture recognition research and machine learning applications for non-verbal communication with robots. |
In Co...In Cognitive Computation(Accepted:30/06/2023, Published:17/07/2023),20 pages,20 figures,4 tables;Please find the published version/info to cite: https://doi.org/10.1007/s12559-023-10174-z;Repositories: https://zenodo.org/doi/10.5281/zenodo.10679196, https://zenodo.org/doi/10.5281/zenodo.10693816;This work was co-funded by Horizon Europe project TERAIS under Grant agreement number 101079338 |
None |
Radar-Based Recognition of Static Hand Gestures in American Sign Language | 2024-02-20 | ShowIn the fast-paced field of human-computer interaction (HCI) and virtual reality (VR), automatic gesture recognition has become increasingly essential. This is particularly true for the recognition of hand signs, providing an intuitive way to effortlessly navigate and control VR and HCI applications. Considering increased privacy requirements, radar sensors emerge as a compelling alternative to cameras. They operate effectively in low-light conditions without capturing identifiable human details, thanks to their lower resolution and distinct wavelength compared to visible light. While previous works predominantly deploy radar sensors for dynamic hand gesture recognition based on Doppler information, our approach prioritizes classification using an imaging radar that operates on spatial information, e.g. image-like data. However, generating large training datasets required for neural networks (NN) is a time-consuming and challenging process, often falling short of covering all potential scenarios. Acknowledging these challenges, this study explores the efficacy of synthetic data generated by an advanced radar ray-tracing simulator. This simulator employs an intuitive material model that can be adjusted to introduce data diversity. Despite exclusively training the NN on synthetic data, it demonstrates promising performance when put to the test with real measurement data. This emphasizes the practicality of our methodology in overcoming data scarcity challenges and advancing the field of automatic gesture recognition in VR and HCI applications. |
5 pag...5 pages, 6 figures. Accepted to IEEE Radarconf2024 |
None |
Surface EMG-Based Inter-Session/Inter-Subject Gesture Recognition by Leveraging Lightweight All-ConvNet and Transfer Learning | 2024-02-19 | ShowGesture recognition using low-resolution instantaneous HD-sEMG images opens up new avenues for the development of more fluid and natural muscle-computer interfaces. However, the data variability between inter-session and inter-subject scenarios presents a great challenge. Existing approaches employed very large and complex deep ConvNet or 2SRNN-based domain adaptation methods to approximate the distribution shift caused by this inter-session and inter-subject data variability. Hence, these methods also require learning over millions of training parameters and a large pre-trained and target-domain dataset in both the pre-training and adaptation stages. As a result, they are resource-hungry and computationally very expensive to deploy in real-time applications. To overcome this problem, we propose a lightweight All-ConvNet+TL model that leverages a lightweight All-ConvNet and transfer learning (TL) to enhance inter-session and inter-subject gesture recognition performance. The All-ConvNet+TL model consists solely of convolutional layers, a simple yet efficient framework for learning invariant and discriminative representations to address the distribution shifts caused by inter-session and inter-subject data variability. Experiments on four datasets demonstrate that our proposed methods outperform the most complex existing approaches by a large margin, achieve state-of-the-art results on inter-session and inter-subject scenarios, and perform on par or competitively on intra-session gesture recognition. These performance gaps increase even more when a tiny amount (e.g., a single trial) of data is available on the target domain for adaptation. These outstanding experimental results provide evidence that the current state-of-the-art models may be overparameterized for sEMG-based inter-session and inter-subject gesture recognition tasks. |
None | |
Hand Shape and Gesture Recognition using Multiscale Template Matching, Background Subtraction and Binary Image Analysis | 2024-02-15 | ShowThis paper presents a hand shape classification approach employing multiscale template matching. The integration of background subtraction is utilized to derive a binary image of the hand object, enabling the extraction of key features such as centroid and bounding box. The methodology, while simple, demonstrates effectiveness in basic hand shape classification tasks, laying the foundation for potential applications in straightforward human-computer interaction scenarios. Experimental results highlight the system's capability in controlled environments. |
None | |
BdSLW60: A Word-Level Bangla Sign Language Dataset | 2024-02-13 | ShowSign language discourse is an essential mode of daily communication for deaf and hard-of-hearing people. However, research on Bangla Sign Language (BdSL) faces notable limitations, primarily due to the lack of datasets. Recognizing word-level signs in BdSL (WL-BdSL) presents a multitude of challenges, including the need for well-annotated datasets, capturing the dynamic nature of sign gestures from facial or hand landmarks, developing suitable machine learning or deep learning-based models with substantial video samples, and so on. In this paper, we address these challenges by creating a comprehensive BdSL word-level dataset named BdSLW60 in an unconstrained and natural setting, allowing positional and temporal variations and allowing sign users to change hand dominance freely. The dataset encompasses 60 Bangla sign words, with a significant scale of 9307 video trials provided by 18 signers under the supervision of a sign language professional. The dataset was rigorously annotated and cross-checked by 60 annotators. We also introduce a relative quantization-based key frame encoding technique for landmark-based sign gesture recognition. We report the benchmarking of our BdSLW60 dataset using a Support Vector Machine (SVM) with testing accuracy up to 67.6% and an attention-based bi-LSTM with testing accuracy up to 75.1%. The dataset is available at https://www.kaggle.com/datasets/hasaniut/bdslw60 and the code base is accessible from https://github.com/hasanssl/BdSLW60_Code. |
Code Link | |
Spiking Neural Network Enhanced Hand Gesture Recognition Using Low-Cost Single-photon Avalanche Diode Array | 2024-02-08 | ShowWe present a compact spiking convolutional neural network (SCNN) and spiking multilayer perceptron (SMLP) to recognize ten different gestures in dark and bright light environments, using a 9.6 single-photon avalanche diode (SPAD) array. In our hand gesture recognition (HGR) system, photon intensity data was leveraged to train and test the network. A vanilla convolutional neural network (CNN) was also implemented to compare the performance of the SCNN with the same network topologies and training strategies. Our SCNN was trained from scratch instead of being converted from the CNN. We tested the three models in dark and ambient light (AL)-corrupted environments. The results indicate that the SCNN achieves comparable accuracy (90.8%) to the CNN (92.9%) and requires fewer floating-point operations with only 8 timesteps. The SMLP also presents a trade-off between computational workload and accuracy. The code and collected datasets of this work are available at https://github.com/zzy666666zzy/TinyLiDAR_NET_SNN. |
9 pages, 5 figures | Code Link |
Phase-driven Domain Generalizable Learning for Nonstationary Time Series | 2024-02-05 | ShowMonitoring and recognizing patterns in continuous sensing data is crucial for many practical applications. These real-world time-series data are often nonstationary, characterized by varying statistical and spectral properties over time. This poses a significant challenge in developing learning models that can effectively generalize across different distributions. In this work, based on our observation that nonstationary statistics are intrinsically linked to the phase information, we propose a time-series learning framework, PhASER. It consists of three novel elements: 1) phase augmentation that diversifies non-stationarity while preserving discriminatory semantics, 2) separate feature encoding by viewing time-varying magnitude and phase as independent modalities, and 3) feature broadcasting by incorporating phase with a novel residual connection for inherent regularization to enhance distribution invariant learning. Upon extensive evaluation on 5 datasets from human activity recognition, sleep-stage classification, and gesture recognition against 10 state-of-the-art baseline methods, we demonstrate that PhASER consistently outperforms the best baselines by an average of 5% and up to 13% in some cases. Moreover, PhASER's principles can be applied broadly to boost the generalization ability of existing time series classification models. |
None | |
WiOpen: A Robust Wi-Fi-based Open-set Gesture Recognition Framework | 2024-02-01 | ShowRecent years have witnessed a growing interest in Wi-Fi-based gesture recognition. However, existing works have predominantly focused on closed-set paradigms, where all testing gestures are predefined during training. This poses a significant challenge in real-world applications, as unseen gestures might be misclassified as known classes during testing. To address this issue, we propose WiOpen, a robust Wi-Fi-based Open-Set Gesture Recognition (OSGR) framework. Implementing OSGR requires addressing challenges caused by the unique uncertainty in Wi-Fi sensing. This uncertainty, resulting from noise and domains, leads to widely scattered and irregular data distributions in collected Wi-Fi sensing data. Consequently, data ambiguity between classes and challenges in defining appropriate decision boundaries to identify unknowns arise. To tackle these challenges, WiOpen adopts a two-fold approach to eliminate uncertainty and define precise decision boundaries. Initially, it addresses uncertainty induced by noise during data preprocessing by utilizing the CSI ratio. Next, it designs the OSGR network based on an uncertainty quantification method. Throughout the learning process, this network effectively mitigates uncertainty stemming from domains. Ultimately, the network leverages relationships among samples' neighbors to dynamically define open-set decision boundaries, successfully realizing OSGR. Comprehensive experiments on publicly accessible datasets confirm WiOpen's effectiveness. Notably, WiOpen also demonstrates superiority in cross-domain tasks when compared to state-of-the-art approaches. |
None | |
Efficient Gesture Recognition on Spiking Convolutional Networks Through Sensor Fusion of Event-Based and Depth Data | 2024-01-30 | ShowAs intelligent systems become increasingly important in our daily lives, new ways of interaction are needed. Classical user interfaces pose issues for the physically impaired and are partially impractical or inconvenient. Gesture recognition is an alternative, but is often not reactive enough when conventional cameras are used. This work proposes a Spiking Convolutional Neural Network that processes event and depth data for gesture recognition. The network is simulated using the open-source neuromorphic computing framework LAVA for offline training and evaluation on an embedded system. For the evaluation, three open-source data sets are used. Since these do not represent the applied bi-modality, a new data set with synchronized event and depth data was recorded. The results show that temporal encoding of depth information and modality fusion, even on differently encoded data, are beneficial to network performance and generalization capabilities. |
Accep...Accepted to IEEE International Conference on Robotics and Automation (ICRA 2024) |
None |
Gesture Recognition for FMCW Radar on the Edge | 2024-01-26 | ShowThis paper introduces a lightweight gesture recognition system based on 60 GHz frequency modulated continuous wave (FMCW) radar. We show that gestures can be characterized efficiently by a set of five features, and propose a slim radar processing algorithm to extract these features. In contrast to previous approaches, we avoid heavy 2D processing, i.e. range-Doppler imaging, and perform instead an early target detection - this allows us to port the system to fully embedded platforms with tight constraints on memory, compute and power consumption. A recurrent neural network (RNN) based architecture exploits these features to jointly detect and classify five different gestures. The proposed system recognizes gestures with an F1 score of 98.4% on our hold-out test dataset, it runs on an Arm Cortex-M4 microcontroller requiring less than 280 kB of flash memory, 120 kB of RAM, and consuming 75 mW of power. |
4 pag...4 pages, 5 figures, accepted in 2024 IEEE Topical Conference on Wireless Sensors and Sensor Networks (WiSNeT) |
None |
Language Modeling on a SpiNNaker 2 Neuromorphic Chip | 2024-01-24 | ShowAs large language models continue to scale in size rapidly, so too does the computational power required to run them. Event-based networks on neuromorphic devices offer a potential way to reduce energy consumption for inference significantly. However, to date, most event-based networks that can run on neuromorphic hardware, including spiking neural networks (SNNs), have not achieved task performance even on par with LSTM models for language modeling. As a result, language modeling on neuromorphic devices has seemed a distant prospect. In this work, we demonstrate the first-ever implementation of a language model on a neuromorphic device - specifically the SpiNNaker 2 chip - based on a recently published event-based architecture called the EGRU. SpiNNaker 2 is a many-core neuromorphic chip designed for large-scale asynchronous processing, while the EGRU is architected to leverage such hardware efficiently while maintaining competitive task performance. This implementation marks the first time a neuromorphic language model matches LSTMs, setting the stage for taking task performance to the level of large language models. We also demonstrate results on a gesture recognition task based on inputs from a DVS camera. Overall, our results showcase the feasibility of this neuro-inspired neural network in hardware, highlighting significant gains versus conventional hardware in energy efficiency for the common use case of single batch inference. |
None | |
HaGRID - HAnd Gesture Recognition Image Dataset | 2024-01-18 | ShowThis paper introduces an enormous dataset, HaGRID (HAnd Gesture Recognition Image Dataset), to build a hand gesture recognition (HGR) system concentrating on interaction with devices to manage them. That is why all 18 chosen gestures are endowed with a semiotic function and can be interpreted as a specific action. Although the gestures are static, they were selected specifically for the ability to design several dynamic gestures. This allows the trained model to recognize not only static gestures such as "like" and "stop" but also dynamic gestures such as "swipes" and "drag and drop". HaGRID contains 554,800 images and bounding box annotations with gesture labels to solve hand detection and gesture classification tasks. The low variability in context and subjects of other datasets was the reason for creating this dataset without such limitations. Utilizing crowdsourcing platforms allowed us to collect samples recorded by 37,583 subjects in at least as many scenes, with subject-to-camera distances from 0.5 to 4 meters in various natural light conditions. The influence of the diversity characteristics was assessed in ablation study experiments. We also demonstrate HaGRID's suitability for pretraining models on HGR tasks. The HaGRID dataset and pretrained models are publicly available. |
12 pa...12 pages, 5 figures, open-source dataset for computer vision |
None |
Experimental Shake Gesture Detection API for Apple Watch | 2024-01-18 | ShowIn this paper we present the WatchShaker project. The project involves an experimental API that detects the Apple Watch's shake gesture, a feature that is surprisingly absent natively. Through a simple heuristic leveraging the Apple Watch's accelerometer data, the API discerns not just the occurrence of shake gestures but also their direction, enhancing the interactivity potential of the device. Despite the project's simplicity and lack of formal testing, it has garnered significant attention, indicating a genuine interest and need within the developer community for such functionality. The WatchShaker project exemplifies how a minimalistic approach can yield a practical and impactful tool in wearable technology, providing a springboard for further research and development in intuitive gesture recognition. |
None | |
Resource-Efficient Gesture Recognition using Low-Resolution Thermal Camera via Spiking Neural Networks and Sparse Segmentation | 2024-01-12 | ShowThis work proposes a novel approach for hand gesture recognition using an inexpensive, low-resolution (24 x 32) thermal sensor processed by a Spiking Neural Network (SNN) followed by Sparse Segmentation and feature-based gesture classification via Robust Principal Component Analysis (R-PCA). Compared to the use of standard RGB cameras, the proposed system is insensitive to lighting variations while being significantly less expensive compared to high-frequency radars, time-of-flight cameras and high-resolution thermal sensors previously used in literature. Crucially, this paper shows that the innovative use of the recently proposed Monostable Multivibrator (MMV) neural networks as a new class of SNN achieves more than one order of magnitude smaller memory and compute complexity compared to deep learning approaches, while reaching a top gesture recognition accuracy of 93.9% using a 5-class thermal camera dataset acquired in a car cabin, within an automotive context. Our dataset is released for helping future research. |
None | |
Evaluating Gesture Recognition in Virtual Reality | 2024-01-09 | ShowHuman-Robot Interaction (HRI) has become increasingly important as robots are being integrated into various aspects of daily life. One key aspect of HRI is gesture recognition, which allows robots to interpret and respond to human gestures in real-time. Gesture recognition plays an important role in non-verbal communication in HRI. To this aim, there is ongoing research on how such non-verbal communication can strengthen verbal communication and improve the system's overall efficiency, thereby enhancing the user experience with the robot. However, several challenges need to be addressed in gesture recognition systems, which include data generation, transferability, scalability, generalizability, standardization, and lack of benchmarking of the gestural systems. In this preliminary paper, we want to address the challenges of data generation using virtual reality simulations and standardization issues by presenting gestures to some commands that can be used as a standard in ground robots. |
4 Pag...4 Pages, 3 Figures, Workshop YOUR Study Design!(WYSD 2023) Participatory critique and refinement of participants' studies. Workshop at the ACM/IEEE International Conference on Human-Robot Interaction (HRI 2023). March 13-16, 2023, Stockholm, SE |
None |
A multimodal gesture recognition dataset for desktop human-computer interaction | 2024-01-08 | ShowGesture recognition is an indispensable component of natural and efficient human-computer interaction technology, particularly in desktop-level applications, where it can significantly enhance people's productivity. However, the current gesture recognition community lacks a suitable desktop-level (top-view perspective) dataset for lightweight gesture capture devices. In this study, we have established a dataset named GR4DHCI. What distinguishes this dataset is its inherent naturalness, intuitive characteristics, and diversity. Its primary purpose is to serve as a valuable resource for the development of desktop-level portable applications. GR4DHCI comprises over 7,000 gesture samples and a total of 382,447 frames for both Stereo IR and skeletal modalities. We also address the variances in hand positioning during desktop interactions by incorporating 27 different hand positions into the dataset. Building upon the GR4DHCI dataset, we conducted a series of experimental studies, the results of which demonstrate that the fine-grained classification blocks proposed in this paper can enhance the model's recognition accuracy. Our dataset and experimental findings presented in this paper are anticipated to propel advancements in desktop-level gesture recognition research. |
None | |
Tackling Electrode Shift In Gesture Recognition with HD-EMG Electrode Subsets | 2024-01-05 | ShowsEMG pattern recognition algorithms have been explored extensively in decoding movement intent, yet are known to be vulnerable to changing recording conditions, exhibiting significant drops in performance across subjects, and even across sessions. Multi-channel surface EMG, also referred to as high-density sEMG (HD-sEMG) systems, have been used to improve performance with the information collected through the use of additional electrodes. However, a lack of robustness is ever present due to limited datasets and the difficulties in addressing sources of variability, such as electrode placement. In this study, we propose training on a collection of input channel subsets and augmenting our training distribution with data from different electrode locations, simultaneously targeting electrode shift and reducing input dimensionality. Our method increases robustness against electrode shift and results in significantly higher intersession performance across subjects and classification algorithms. |
5 pages | None |
Wi-Fi Sensing: Applications and Challenges | 2023-12-31 | ShowWi-Fi technology has strong potential for indoor and outdoor sensing applications; it has several important features which make it an appealing option compared to other sensing technologies. This paper presents a survey of different applications of Wi-Fi based sensing systems such as elderly people monitoring, activity classification, gesture recognition, people counting, through-the-wall sensing, behind-the-corner sensing, and many other applications. The challenges and interesting future directions are also highlighted. |
None | |
EvHandPose: Event-based 3D Hand Pose Estimation with Sparse Supervision | 2023-12-28 | ShowEvent camera shows great potential in 3D hand pose estimation, especially addressing the challenges of fast motion and high dynamic range in a low-power way. However, due to the asynchronous differential imaging mechanism, it is challenging to design event representation to encode hand motion information especially when the hands are not moving (causing motion ambiguity), and it is infeasible to fully annotate the temporally dense event stream. In this paper, we propose EvHandPose with novel hand flow representations in Event-to-Pose module for accurate hand pose estimation and alleviating the motion ambiguity issue. To solve the problem under sparse annotation, we design contrast maximization and hand-edge constraints in Pose-to-IWE (Image with Warped Events) module and formulate EvHandPose in a weakly-supervision framework. We further build EvRealHands, the first large-scale real-world event-based hand pose dataset on several challenging scenes to bridge the real-synthetic domain gap. Experiments on EvRealHands demonstrate that EvHandPose outperforms previous event-based methods under all evaluation scenes, achieves accurate and stable hand pose estimation with high temporal resolution in fast motion and strong light scenes compared with RGB-based methods, generalizes well to outdoor scenes and another type of event camera, and shows the potential for the hand gesture recognition task. |
None | |
TeamCAD -- A Multimodal Interface for Remote Computer Aided Design | 2023-12-19 | ShowRemote collaboration is a common reality of spatial design processes, but tools for computer-aided design were made for single users. Via TeamCAD, we introduce a user experience where the online remote collaboration experience is more like working at a table. Using speech and gesture recognition based on state-of-the-art machine learning through webcam and microphone input, TeamCAD plugs into existing software through APIs, keybindings, and mouse input. We share results from user studies conducted with graduate students. Our user tests were run on the Blender animation software, making simultaneous use of both modalities for given tasks. We mitigated challenges in terms of robustness and latency in readily available voice recognition models. Our prototype has proven to be an intuitive interface, providing a suitable common denominator for collaborators with or without previous experience in three-dimensional modeling applications. |
Creat...Created for 6.835, Intelligent Multimodal User Interfaces at MIT, Spring 2022. Submitted to CHI 2024. 10 pages, 6 figures, 1 table |
None |
EMG subspace alignment and visualization for cross-subject hand gesture classification | 2023-12-18 | ShowElectromyograms (EMG)-based hand gesture recognition systems are a promising technology for human/machine interfaces. However, one of their main limitations is the long calibration time that is typically required to handle new users. The paper discusses and analyses the challenge of cross-subject generalization thanks to an original dataset containing the EMG signals of 14 human subjects during hand gestures. The experimental results show that, though an accurate generalization based on pooling multiple subjects is hardly achievable, it is possible to improve the cross-subject estimation by identifying a robust low-dimensional subspace for multiple subjects and aligning it to a target subject. A visualization of the subspace enables us to provide insights for the improvement of cross-subject generalization with EMG signals. |
8 pag...8 pages + 1 appendix page 6 figures (one in appendix) Published in the Adapting to Change: Reliable Learning Across Domains workshop from ECML-PKDD 2023 |
None |
Natural Interaction Modalities for Human-CPS Interaction in Construction Progress Monitoring | 2023-12-10 | ShowThis article explores natural interaction modalities for human-cyber-physical systems (CPS) interaction in construction. CPS has been applied in construction for many purposes with the promise of improving the safety and productivity of construction operations. However, there is little research on human-CPS interaction in construction. This study proposes two methodologies for human-CPS interactions for construction progress monitoring - a) hand gesture interaction using transfer learning, and b) voice command interaction using natural language processing. User studies with thirty-two users validated the generalizability of the proposed methodologies. The proposed hand gesture recognition method achieved higher accuracy (99.69% vs 87.72%) and speed (36.05ms vs 578.91ms) than the proposed voice command recognition method, though users performed the progress monitoring task more correctly with voice commands than hand gestures (88% vs 66.1%). The main contribution of the study is the development of an ML pipeline and computational framework to recognize hand gestures and voice commands without the need for a large training dataset for human-CPS interaction. |
15 pages | None |
Physical-Layer Semantic-Aware Network for Zero-Shot Wireless Sensing | 2023-12-08 | ShowDevice-free wireless sensing has recently attracted significant interest due to its potential to support a wide range of immersive human-machine interactive applications. However, data heterogeneity in wireless signals and data privacy regulation of distributed sensing have been considered as the major challenges that hinder the wide applications of wireless sensing in large area networking systems. Motivated by the observation that signals recorded by wireless receivers are closely related to a set of physical-layer semantic features, in this paper we propose a novel zero-shot wireless sensing solution that allows models constructed in one or a limited number of locations to be directly transferred to other locations without any labeled data. We develop a novel physical-layer semantic-aware network (pSAN) framework to characterize the correlation between physical-layer semantic features and the sensing data distributions across different receivers. We then propose a pSAN-based zero-shot learning solution in which each receiver can obtain a location-specific gesture recognition model by directly aggregating the already constructed models of other receivers. We theoretically prove that models obtained by our proposed solution can approach the optimal model without requiring any local model training. Experimental results once again verify that the accuracy of models derived by our proposed solution matches that of the models trained by the real labeled data based on supervised learning approach. |
accep...accepted at IEEE International Conference on Network Protocols (ICNP) Workshop, Reykjavik, Iceland, October 10-13, 2023 |
None |
Interpretable Underwater Diver Gesture Recognition | 2023-12-08 | ShowIn recent years, the usage and applications of Autonomous Underwater Vehicles (AUVs) have grown rapidly. Interaction of divers with AUVs remains an integral part of their usage in various applications and makes building robust and efficient underwater gesture recognition systems extremely important. In this paper, we propose an Underwater Gesture Recognition system trained on the Cognitive Autonomous Diving Buddy Underwater gesture dataset using deep learning that achieves 98.01% accuracy on the dataset, which to the best of our knowledge is the best performance achieved on this dataset at the time of writing this paper. We also improve the gesture recognition system's interpretability by using XAI techniques to visualize the model's predictions. |
None | |
Towards Open-set Gesture Recognition via Feature Activation Enhancement and Orthogonal Prototype Learning | 2023-12-05 | ShowGesture recognition is a foundational task in human-machine interaction (HMI). While there has been significant progress in gesture recognition based on surface electromyography (sEMG), accurate recognition of predefined gestures only within a closed set is still inadequate in practice. It is essential to effectively discern and reject unknown gestures of disinterest in a robust system. Numerous methods based on prototype learning (PL) have been proposed to tackle this open set recognition (OSR) problem. However, they do not fully explore the inherent distinctions between known and unknown classes. In this paper, we propose a more effective PL method leveraging two novel and inherent distinctions, feature activation level and projection inconsistency. Specifically, the Feature Activation Enhancement Mechanism (FAEM) widens the gap in feature activation values between known and unknown classes. Furthermore, we introduce Orthogonal Prototype Learning (OPL) to construct multiple perspectives. OPL acts to project a sample from orthogonal directions to maximize the distinction between its two projections, where unknown samples will be projected near the clusters of different known classes while known samples still maintain intra-class similarity. Our proposed method simultaneously achieves accurate closed-set classification for predefined gestures and effective rejection for unknown gestures. Extensive experiments demonstrate its efficacy and superiority in open-set gesture recognition based on sEMG. |
None | |
A Spatio-Temporal Graph Convolutional Network for Gesture Recognition from High-Density Electromyography | 2023-12-01 | ShowAccurate hand gesture prediction is crucial for effective upper-limb prosthetic control. Given the high flexibility and multiple degrees of freedom exhibited by human hands, there has been a growing interest in integrating deep networks with high-density surface electromyography (HD-sEMG) grids to enhance gesture recognition capabilities. However, many existing methods fall short in fully exploiting the specific spatial topology and temporal dependencies present in HD-sEMG data. Additionally, these studies are often limited to a small number of gestures and lack generality. Hence, this study introduces a novel gesture recognition method, named STGCN-GR, which leverages spatio-temporal graph convolution networks for HD-sEMG-based human-machine interfaces. Firstly, we construct muscle networks based on functional connectivity between channels, creating a graph representation of HD-sEMG recordings. Subsequently, a temporal convolution module is applied to capture the temporal dependencies in the HD-sEMG series, and a spatial graph convolution module is employed to effectively learn the intrinsic spatial topology information among distinct HD-sEMG channels. We evaluate our proposed model on a public HD-sEMG dataset comprising a substantial number of gestures (i.e., 65). Our results demonstrate the remarkable capability of the STGCN-GR method, achieving an impressive accuracy of 91.07% in predicting gestures, which surpasses state-of-the-art deep learning methods applied to the same dataset. |
None | |
Study and Survey on Gesture Recognition Systems | 2023-12-01 | ShowIn recent years, there has been a considerable amount of research in the Gesture Recognition domain, mainly owing to the technological advancements in Computer Vision. Various new applications have been conceptualised and developed in this field. This paper discusses the implementation of gesture recognition systems in multiple sectors such as gaming, healthcare, home appliances, industrial robots, and virtual reality. Different methodologies for capturing gestures are compared and contrasted throughout this survey. Various data sources and data acquisition techniques have been discussed. The role of gestures in sign language has been studied and existing approaches have been reviewed. Common challenges faced while building gesture recognition systems have also been explored. |
6 pag...6 pages, accepted at the ICCUBEA, IEEE 2023 conference |
None |
Fast and Expressive Gesture Recognition using a Combination-Homomorphic Electromyogram Encoder | 2023-11-29 | ShowWe study the task of gesture recognition from electromyography (EMG), with the goal of enabling expressive human-computer interaction at high accuracy, while minimizing the time required for new subjects to provide calibration data. To fulfill these goals, we define combination gestures consisting of a direction component and a modifier component. New subjects only demonstrate the single component gestures and we seek to extrapolate from these to all possible single or combination gestures. We extrapolate to unseen combination gestures by combining the feature vectors of real single gestures to produce synthetic training data. This strategy allows us to provide a large and flexible gesture vocabulary, while not requiring new subjects to demonstrate combinatorially many example gestures. We pre-train an encoder and a combination operator using self-supervision, so that we can produce useful synthetic training data for unseen test subjects. To evaluate the proposed method, we collect a real-world EMG dataset, and measure the effect of augmented supervision against two baselines: a partially-supervised model trained with only single gesture data from the unseen subject, and a fully-supervised model trained with real single and real combination gesture data from the unseen subject. We find that the proposed method provides a dramatic improvement over the partially-supervised model, and achieves a useful classification accuracy that in some cases approaches the performance of the fully-supervised model. |
24 pa...24 pages, 7 figures, 6 tables V2: add link to code, fix bibliography |
None |
Simultaneous Energy Harvesting and Hand Gesture Recognition in Large Area Monolithic Dye-Sensitized Solar Cells | 2023-11-27 | ShowInternet of Things (IoT) devices have become prevalent, embedding intelligence into our environment. It is projected that over 75 billion IoT devices will be connected by 2025 worldwide, with the majority being operated indoors. Dye-sensitized solar cells (DSSCs) have recently been optimized for ambient light, having the capability of providing sufficient energy for self-powered IoT devices. Interaction with digital technologies, termed Human Computer Interaction (HCI), is often achieved via physical mechanisms (e.g. remote controls, cell phones) which can hinder the natural interface between users and IoT devices, a key consideration for HCI. What if the solar cell that is powering the IoT device could also recognize hand gestures, allowing the user to naturally interact with the system? Previous attempts to achieve this have necessarily employed an array of solar cells/photodiodes to detect directionality. In this work, we demonstrate that by monitoring the photocurrent output of an asymmetrically patterned monolithic (i.e., single cell) DSSC, and using machine learning, we can recognize simple hand gestures, achieving a prediction accuracy of 97.71%. This work shows that DSSCs are the perfect choice for self-powered interactive technologies, both in terms of powering IoT devices in ambient light conditions and having aesthetic qualities that are prioritized by users. As well as powering interactive technologies, they can also provide a means of interactive control. |
Main body: 10 pages, 6 figures, 3 tables. Document includes supplementary info: 30 pages, 47 supplementary figures |
None |
SniffyArt: The Dataset of Smelling Persons | 2023-11-20 | ShowSmell gestures play a crucial role in the investigation of past smells in the visual arts, yet their automated recognition poses significant challenges. This paper introduces the SniffyArt dataset, consisting of 1941 individuals represented in 441 historical artworks. Each person is annotated with a tightly fitting bounding box, 17 pose keypoints, and a gesture label. By integrating these annotations, the dataset enables the development of hybrid classification approaches for smell gesture recognition. The dataset's high-quality human pose estimation keypoints are achieved through the merging of five separate sets of keypoint annotations per person. The paper also presents a baseline analysis, evaluating the performance of representative algorithms for detection, keypoint estimation, and classification tasks, showcasing the potential of combining keypoint estimation with smell gesture classification. The SniffyArt dataset lays a solid foundation for future research and the exploration of multi-task approaches leveraging pose keypoints and person boxes to advance human gesture and olfactory dimension analysis in historical artworks. |
10 pages, 8 figures | None |
Improving Unimodal Inference with Multimodal Transformers | 2023-11-16 | ShowThis paper proposes an approach for improving performance of unimodal models with multimodal training. Our approach involves a multi-branch architecture that incorporates unimodal models with a multimodal transformer-based branch. By co-training these branches, the stronger multimodal branch can transfer its knowledge to the weaker unimodal branches through a multi-task objective, thereby improving the performance of the resulting unimodal models. We evaluate our approach on tasks of dynamic hand gesture recognition based on RGB and Depth, audiovisual emotion recognition based on speech and facial video, and audio-video-text based sentiment analysis. Our approach outperforms the conventionally trained unimodal counterparts. Interestingly, we also observe that optimization of the unimodal branches improves the multimodal branch, compared to a similar multimodal model trained from scratch. |
None | |
A Comprehensive Review of Leap Motion Controller-based Hand Gesture Datasets | 2023-11-07 | ShowThis paper comprehensively reviews hand gesture datasets based on Ultraleap's leap motion controller, a popular device for capturing and tracking hand gestures in real-time. The aim is to offer researchers and practitioners a valuable resource for developing and evaluating gesture recognition algorithms. The review compares various datasets found in the literature, considering factors such as target domain, dataset size, gesture diversity, subject numbers, and data modality. The strengths and limitations of each dataset are discussed, along with the applications and research areas in which they have been utilized. An experimental evaluation of the leap motion controller 2 device is conducted to assess its capabilities in generating gesture data for various applications, specifically focusing on touchless interactive systems and virtual reality. This review serves as a roadmap for researchers, aiding them in selecting appropriate datasets for their specific gesture recognition tasks and advancing the field of hand gesture recognition using leap motion controller technology. |
None | |
Hand Gesture Classification on Praxis Dataset: Trading Accuracy for Expense | 2023-11-01 | ShowIn this paper, we investigate hand gesture classifiers that rely upon the abstracted 'skeletal' data recorded using the RGB-Depth sensor. We focus on 'skeletal' data represented by the body joint coordinates, from the Praxis dataset. The Praxis dataset contains recordings of patients with cortical pathologies such as Alzheimer's disease, performing a Praxis test under the direction of a clinician. In this paper, we propose hand gesture classifiers that are more effective with the Praxis dataset than previously proposed models. Body joint data offers a compressed form of data that can be analyzed specifically for hand gesture recognition. Using a combination of windowing techniques with a deep learning architecture such as a Recurrent Neural Network (RNN), we achieved an overall accuracy of 70.8% using only body joint data. In addition, we investigated a long short-term memory (LSTM) network to extract and analyze the movement of the joints through time in order to recognize the hand gestures being performed, and achieved gesture recognition rates of 74.3% and 67.3% for static and dynamic gestures, respectively. The proposed approach contributes to the task of developing an automated, accurate, and inexpensive approach to diagnosing cortical pathologies for multiple healthcare applications. |
8 pages, 6 figures | None |
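As a rough companion to the Praxis entry above, here is a minimal sketch of the windowing-plus-RNN idea: overlapping windows of body-joint coordinates fed to an LSTM classifier. It is not the authors' model; the joint count, window size, and class count are made up for illustration.

```python
import torch
import torch.nn as nn

class JointLSTMClassifier(nn.Module):
    """LSTM over windows of body-joint coordinates (hypothetical shapes)."""
    def __init__(self, num_joints=25, coords=3, hidden=128, num_classes=10):
        super().__init__()
        self.lstm = nn.LSTM(input_size=num_joints * coords,
                            hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):             # x: (batch, frames, joints, coords)
        b, t, j, c = x.shape
        out, _ = self.lstm(x.reshape(b, t, j * c))
        return self.head(out[:, -1])  # classify from the last time step

def sliding_windows(seq, win=30, stride=10):
    """Split a (frames, joints, coords) sequence into overlapping windows."""
    return torch.stack([seq[s:s + win] for s in range(0, len(seq) - win + 1, stride)])

# Toy usage on random data standing in for Praxis-style joint recordings.
seq = torch.randn(300, 25, 3)
windows = sliding_windows(seq)             # (num_windows, 30, 25, 3)
logits = JointLSTMClassifier()(windows)    # (num_windows, 10)
print(logits.shape)
```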
GRLib: An Open-Source Hand Gesture Detection and Recognition Python Library | 2023-10-23 | ShowHand gesture recognition systems provide a natural way for humans to interact with computer systems. Although various algorithms have been designed for this task, a host of external conditions, such as poor lighting or distance from the camera, make it difficult to create an algorithm that performs well across a range of environments. In this work, we present GRLib: an open-source Python library able to detect and classify static and dynamic hand gestures. Moreover, the library can be trained on existing data for improved classification robustness. The proposed solution utilizes a feed from an RGB camera. The retrieved frames are then subjected to data augmentation and passed on to MediaPipe Hands to perform hand landmark detection. The landmarks are then classified into their respective gesture class. The library supports dynamic hand gestures through trajectories and keyframe extraction. It was found that the library outperforms another publicly available HGR system - MediaPipe Solutions, on three diverse, real-world datasets. The library is available at https://github.com/mikhail-vlasenko/grlib and can be installed with pip. |
Code Link | |
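GRLib's general pattern (landmark detection followed by classification) can be approximated in a few lines with the MediaPipe Hands solution and a nearest-neighbour classifier. This is only a sketch of that pattern, not GRLib's API; the image paths and labels are hypothetical, and the library itself adds data augmentation, dynamic-gesture trajectories, and keyframe extraction on top.

```python
import cv2
import mediapipe as mp
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

mp_hands = mp.solutions.hands

def landmarks_from_image(path, hands):
    """Return a flat (63,) array of 21 hand-landmark (x, y, z) values, or None."""
    image = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB)
    result = hands.process(image)
    if not result.multi_hand_landmarks:
        return None
    lm = result.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in lm]).flatten()

# Hypothetical labelled training images for two static gestures.
train_paths = ["fist_01.jpg", "palm_01.jpg"]
train_labels = ["fist", "palm"]

with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
    feats = [landmarks_from_image(p, hands) for p in train_paths]
    X, y = zip(*[(f, l) for f, l in zip(feats, train_labels) if f is not None])
    clf = KNeighborsClassifier(n_neighbors=1).fit(np.stack(X), y)
    test = landmarks_from_image("unknown.jpg", hands)   # hypothetical query image
    if test is not None:
        print(clf.predict(test[None]))
```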
Embodied Cognition Guides Virtual-Real Interaction Design to Help Yicheng Flower Drum Intangible Cultural Heritage Dissemination | 2023-10-07 | ShowTo make the intangible cultural heritage of the Yicheng Flower Drum more relevant to the digital era and to promote its dissemination and inheritance, this study examines the design and application of gesture recognition and virtual reality technologies, guided by embodied cognition theory, in the process of disseminating this heritage. The approach also aims to enhance the interaction between people and the heritage culture, stimulate the audience's interest in understanding and spreading it, and raise awareness of preserving it. With embodied cognition as a theoretical guide, the unidirectional communication mode is expanded through human-computer interaction that is close to natural behavior and engages multiple sensory channels, constructing an embodied and immersive interactive atmosphere in which participants naturally form an understanding of the traditional culture during the interaction. Taking embodied cognition theory as an entry point, and combining virtual and real scenes of the Yicheng Flower Drum with an immersive experience, the work supports virtual-real interaction design for disseminating intangible cultural heritage and provides a new method for research on its digital design. |
7 pages, 4 figures | None |
It's all about you: Personalized in-Vehicle Gesture Recognition with a Time-of-Flight Camera | 2023-10-02 | ShowDespite significant advances in gesture recognition technology, recognizing gestures in a driving environment remains challenging due to limited and costly data and its dynamic, ever-changing nature. In this work, we propose a model-adaptation approach to personalize the training of a CNNLSTM model and improve recognition accuracy while reducing data requirements. Our approach contributes to the field of dynamic hand gesture recognition while driving by providing a more efficient and accurate method that can be customized for individual users, ultimately enhancing the safety and convenience of in-vehicle interactions, as well as driver's experience and system trust. We incorporate hardware enhancement using a time-of-flight camera and algorithmic enhancement through data augmentation, personalized adaptation, and incremental learning techniques. We evaluate the performance of our approach in terms of recognition accuracy, achieving up to 90%, and show the effectiveness of personalized adaptation and incremental learning for a user-centered design. |
Accepted at AutoUI2023 |
None |
EMGTFNet: Fuzzy Vision Transformer to decode Upperlimb sEMG signals for Hand Gestures Recognition | 2023-09-23 | ShowMyoelectric control is an area of electromyography of increasing interest nowadays, particularly in applications such as Hand Gesture Recognition (HGR) for bionic prostheses. Today's focus is on pattern recognition using Machine Learning and, more recently, Deep Learning methods. Despite achieving good results on sparse sEMG signals, the latter models typically require large datasets and long training times. Furthermore, due to the stochastic nature of sEMG signals, traditional models fail to generalize to atypical or noisy samples. In this paper, we propose the design of a Vision Transformer (ViT) based architecture with a Fuzzy Neural Block (FNB) called EMGTFNet to perform Hand Gesture Recognition from surface electromyography (sEMG) signals. The proposed EMGTFNet architecture can accurately classify a variety of hand gestures without any need for data augmentation techniques, transfer learning, or a significant increase in the number of parameters in the network. The accuracy of the proposed model is tested using the publicly available NinaPro database consisting of 49 different hand gestures. Experiments yield an average test accuracy of 83.57% ± 3.5% using a 200 ms window size and only 56,793 trainable parameters. Our results outperform the ViT without FNB, demonstrating that including the FNB improves performance. The proposed EMGTFNet framework shows significant potential for practical application in prosthetic control. |
None | |
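For a concrete picture of a ViT-style sEMG classifier like the one described above, here is a minimal PyTorch sketch that splits a 200 ms multichannel window into temporal patches and feeds them to a transformer encoder. It omits the paper's Fuzzy Neural Block, and the channel count, patch size, and embedding dimension are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SEMGTransformer(nn.Module):
    """ViT-style classifier over a multichannel sEMG window (fuzzy block omitted)."""
    def __init__(self, channels=12, window=400, patch=20, dim=64,
                 heads=4, layers=2, num_classes=49):
        super().__init__()
        self.patch = patch
        self.embed = nn.Linear(channels * patch, dim)
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.zeros(1, window // patch + 1, dim))
        enc = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=layers)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                  # x: (batch, channels, samples)
        b, c, t = x.shape
        tokens = x.reshape(b, c, t // self.patch, self.patch)   # split into patches
        tokens = tokens.permute(0, 2, 1, 3).reshape(b, -1, c * self.patch)
        tokens = self.embed(tokens)
        tokens = torch.cat([self.cls.expand(b, -1, -1), tokens], dim=1) + self.pos
        return self.head(self.encoder(tokens)[:, 0])            # classify from CLS token

logits = SEMGTransformer()(torch.randn(8, 12, 400))   # toy batch of sEMG windows
print(logits.shape)                                    # (8, 49)
```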
A Deep Learning Sequential Decoder for Transient High-Density Electromyography in Hand Gesture Recognition Using Subject-Embedded Transfer Learning | 2023-09-23 | ShowHand gesture recognition (HGR) has gained significant attention due to the increasing use of AI-powered human-computer interfaces that can interpret the deep spatiotemporal dynamics of biosignals from the peripheral nervous system, such as surface electromyography (sEMG). These interfaces have a range of applications, including the control of extended reality, agile prosthetics, and exoskeletons. However, the natural variability of sEMG among individuals has led researchers to focus on subject-specific solutions. Deep learning methods, which often have complex structures, are particularly data-hungry and can be time-consuming to train, making them less practical for subject-specific applications. In this paper, we propose and develop a generalizable, sequential decoder of transient high-density sEMG (HD-sEMG) that achieves 73% average accuracy on 65 gestures for partially-observed subjects through subject-embedded transfer learning, leveraging pre-knowledge of HGR acquired during pre-training. The use of transient HD-sEMG before gesture stabilization allows us to predict gestures with the ultimate goal of counterbalancing system control delays. The results show that the proposed generalized models significantly outperform subject-specific approaches, especially when the training data is limited, and there is a significant number of gesture classes. By building on pre-knowledge and incorporating a multiplicative subject-embedded structure, our method comparatively achieves more than 13% average accuracy across partially observed subjects with minimal data availability. This work highlights the potential of HD-sEMG and demonstrates the benefits of modeling common patterns across users to reduce the need for large amounts of data for new users, enhancing practicality. |
None | |
ViT-MDHGR: Cross-day Reliability and Agility in Dynamic Hand Gesture Prediction via HD-sEMG Signal Decoding | 2023-09-22 | ShowSurface electromyography (sEMG) and high-density sEMG (HD-sEMG) biosignals have been extensively investigated for myoelectric control of prosthetic devices, neurorobotics, and more recently human-computer interfaces because of their capability for hand gesture recognition/prediction in a wearable and non-invasive manner. High intraday (same-day) performance has been reported. However, the interday performance (separating training and testing days) is substantially degraded due to the poor generalizability of conventional approaches over time, hindering the application of such techniques in real-life practices. There are limited recent studies on the feasibility of multi-day hand gesture recognition. The existing studies face a major challenge: the need for long sEMG epochs makes the corresponding neural interfaces impractical due to the induced delay in myoelectric control. This paper proposes a compact ViT-based network for multi-day dynamic hand gesture prediction. We tackle the main challenge as the proposed model only relies on very short HD-sEMG signal windows (i.e., 50 ms, accounting for only one-sixth of the convention for real-time myoelectric implementation), boosting agility and responsiveness. Our proposed model can predict 11 dynamic gestures for 20 subjects with an average accuracy of over 71% on the testing day, 3-25 days after training. Moreover, when calibrated on just a small portion of data from the testing day, the proposed model can achieve over 92% accuracy by retraining less than 10% of the parameters for computational efficiency. |
None | |
Hanstreamer: an Open-source Webcam-based Live Data Presentation System | 2023-09-21 | ShowWe present Hanstreamer, a free and open-source system for webcam-based data presentation. The system performs real-time gesture recognition on the user's webcam video stream to provide interactive data visuals. Apart from the standard chart and map visuals, Hanstreamer is the first such video data presentation system to support network visualisation and interactive DimpVis-style time-series data exploration. The system is ready for use with popular online meeting software such as Zoom and Microsoft Teams. |
3 pages, 5 figures | None |
Hand Gesture Recognition with Two Stage Approach Using Transfer Learning and Deep Ensemble Learning | 2023-09-20 | ShowHuman-Computer Interaction (HCI) has been the subject of research for many years, and recent studies have focused on improving its performance through various techniques. In the past decade, deep learning studies have shown high performance in various research areas, leading researchers to explore their application to HCI. Convolutional neural networks can be used to recognize hand gestures from images using deep architectures. In this study, we evaluated pre-trained high-performance deep architectures on the HG14 dataset, which consists of 14 different hand gesture classes. Among 22 different models, versions of the VGGNet and MobileNet models attained the highest accuracy rates. Specifically, the VGG16 and VGG19 models achieved accuracy rates of 94.64% and 94.36%, respectively, while the MobileNet and MobileNetV2 models achieved accuracy rates of 96.79% and 94.43%, respectively. We performed hand gesture recognition on the dataset using an ensemble learning technique, which combined the four most successful models. By utilizing these models as base learners and applying the Dirichlet ensemble technique, we achieved an accuracy rate of 98.88%. These results demonstrate the effectiveness of the deep ensemble learning technique for HCI and its potential applications in areas such as augmented reality, virtual reality, and game technologies. |
ICISNA'23 - 1st International Conference on Intelligent Systems and New Applications Proceedings Book, Liverpool, UNITED KINGDOM, April 28-30, 2023. E-ISBN: 978-605-72180-3-2 |
None |
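The Dirichlet ensemble step described above can be illustrated as a weighted soft-voting search: sample candidate weight vectors from a Dirichlet distribution and keep the one that scores best on validation predictions. A minimal sketch, with random arrays standing in for the base learners' softmax outputs:

```python
import numpy as np

rng = np.random.default_rng(42)

def dirichlet_weighted_ensemble(val_probs, val_labels, trials=2000):
    """Search Dirichlet-sampled weight vectors for soft-voting base learners.

    val_probs: array (n_models, n_samples, n_classes) of validation softmax outputs.
    Returns the weight vector with the best validation accuracy.
    """
    n_models = val_probs.shape[0]
    best_w, best_acc = np.full(n_models, 1 / n_models), 0.0
    for w in rng.dirichlet(np.ones(n_models), size=trials):
        preds = np.tensordot(w, val_probs, axes=1).argmax(axis=1)
        acc = (preds == val_labels).mean()
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w, best_acc

# Toy stand-in for softmax outputs of four fine-tuned CNN base learners (14 classes).
probs = rng.random((4, 100, 14)); probs /= probs.sum(axis=2, keepdims=True)
labels = rng.integers(0, 14, size=100)
w, acc = dirichlet_weighted_ensemble(probs, labels)
print(w.round(3), acc)
```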
Dynamic Hand Gesture-Featured Human Motor Adaptation in Tool Delivery using Voice Recognition | 2023-09-20 | ShowHuman-robot collaboration has benefited users with higher efficiency towards interactive tasks. Nevertheless, most collaborative schemes rely on complicated human-machine interfaces, which might lack the requisite intuitiveness compared with natural limb control. We also expect to understand human intent with low training data requirements. In response to these challenges, this paper introduces an innovative human-robot collaborative framework that seamlessly integrates hand gesture and dynamic movement recognition, voice recognition, and a switchable control adaptation strategy. These modules provide a user-friendly approach that enables the robot to deliver the tools as per user need, especially when the user is working with both hands. Therefore, users can focus on their task execution without additional training in the use of human-machine interfaces, while the robot interprets their intuitive gestures. The proposed multimodal interaction framework is executed in the UR5e robot platform equipped with a RealSense D435i camera, and the effectiveness is assessed through a soldering circuit board task. The experiment results have demonstrated superior performance in hand gesture recognition, where the static hand gesture recognition module achieves an accuracy of 94.3%, while the dynamic motion recognition module reaches 97.6% accuracy. Compared with human solo manipulation, the proposed approach facilitates higher efficiency tool delivery, without significantly distracting from human intents. |
This work has been submitted to the IEEE for possible publication |
None |
On-device Real-time Custom Hand Gesture Recognition | 2023-09-19 | ShowMost existing hand gesture recognition (HGR) systems are limited to a predefined set of gestures. However, users and developers often want to recognize new, unseen gestures. This is challenging due to the vast diversity of all plausible hand shapes, e.g. it is impossible for developers to include all hand gestures in a predefined list. In this paper, we present a user-friendly framework that lets users easily customize and deploy their own gesture recognition pipeline. Our framework provides a pre-trained single-hand embedding model that can be fine-tuned for custom gesture recognition. Users can perform gestures in front of a webcam to collect a small amount of images per gesture. We also offer a low-code solution to train and deploy the custom gesture recognition model. This makes it easy for users with limited ML expertise to use our framework. We further provide a no-code web front-end for users without any ML expertise. This makes it even easier to build and test the end-to-end pipeline. The resulting custom HGR is then ready to be run on-device for real-time scenarios. This can be done by calling a simple function in our open-sourced model inference API, MediaPipe Tasks. This entire process only takes a few minutes. |
5 pages, 6 figures; Accepted to ICCV Workshop on Computer Vision for Metaverse, Paris, France, 2023 |
None |
Gesture Recognition in Millimeter-Wave Radar Based on Spatio-Temporal Feature Sequences | 2023-09-18 | ShowGesture recognition is a pivotal technology in the realm of intelligent education, and millimeter-wave (mmWave) signals possess advantages such as high resolution and strong penetration capability. This paper introduces a highly accurate and robust gesture recognition method using mmWave radar. The method involves capturing the raw signals of hand movements with the mmWave radar module and preprocessing the received radar signals, including Fourier transformation, distance compression, Doppler processing, and noise reduction through moving target indication (MTI). The preprocessed signals are then fed into the Convolutional Neural Network-Time Domain Convolutional Network (CNN-TCN) model to extract spatio-temporal features, with recognition performance evaluated through classification. Experimental results demonstrate that this method achieves an accuracy rate of 98.2% in domain-specific recognition and maintains a consistently high recognition rate across different neural networks, showcasing exceptional recognition performance and robustness. |
None | |
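The preprocessing chain in the mmWave entry above (range FFT, Doppler processing, and moving target indication) can be sketched in a few lines of NumPy. This is a generic FMCW-style illustration with made-up frame dimensions, not the authors' exact pipeline:

```python
import numpy as np

def range_doppler_map(frame, remove_static=True):
    """Toy FMCW preprocessing: range FFT, Doppler FFT, and a simple MTI filter.

    frame: complex array (n_chirps, n_samples) of raw ADC data from one radar frame.
    Returns the magnitude range-Doppler map (n_chirps, n_samples).
    """
    range_fft = np.fft.fft(frame, axis=1)               # "distance compression"
    if remove_static:                                    # moving target indication:
        range_fft = range_fft - range_fft.mean(axis=0)   # subtract per-range-bin mean
    doppler = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)
    return np.abs(doppler)

# A sequence of range-Doppler maps over time would then be stacked and fed to a
# spatio-temporal model such as the CNN-TCN described in the entry above.
frames = np.random.randn(16, 128, 256) + 1j * np.random.randn(16, 128, 256)
maps = np.stack([range_doppler_map(f) for f in frames])   # (time, doppler, range)
print(maps.shape)
```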
From Unimodal to Multimodal: improving sEMG-Based Pattern Recognition via deep generative models | 2023-09-18 | ShowObjective: Multimodal hand gesture recognition (HGR) systems can achieve higher recognition accuracy compared to unimodal HGR systems. However, acquiring multimodal gesture recognition data typically requires users to wear additional sensors, thereby increasing hardware costs. Methods: This paper proposes a novel generative approach to improve Surface Electromyography (sEMG)-based HGR accuracy via virtual Inertial Measurement Unit (IMU) signals. Specifically, we trained a deep generative model based on the intrinsic correlation between forearm sEMG signals and forearm IMU signals to generate virtual forearm IMU signals from the input forearm sEMG signals at first. Subsequently, the sEMG signals and virtual IMU signals were fed into a multimodal Convolutional Neural Network (CNN) model for gesture recognition. Results: We conducted evaluations on six databases, including five publicly available databases and our collected database comprising 28 subjects performing 38 gestures, containing both sEMG and IMU data. The results show that our proposed approach significantly outperforms the sEMG-based unimodal HGR approach (with increases of 2.15%-13.10%). Moreover, it achieves accuracy levels closely matching those of multimodal HGR when using virtual Acceleration (ACC) signals. Conclusion: It demonstrates that incorporating virtual IMU signals, generated by deep generative models, can significantly improve the accuracy of sEMG-based HGR. Significance: The proposed approach represents a successful attempt to bridge the gap between unimodal HGR and multimodal HGR without additional sensor hardware, which can help to promote further development of natural and cost-effective myoelectric interfaces in the biomedical engineering field. |
None | |
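A bare-bones illustration of the virtual-IMU idea above: one network regresses IMU-like channels from the sEMG window, and a second network classifies the concatenated real and virtual channels. The generator here is a plain 1D CNN trained against real IMU targets with MSE rather than the paper's deep generative model, and all channel and class counts are assumptions.

```python
import torch
import torch.nn as nn

class VirtualIMUGenerator(nn.Module):
    """Maps a multichannel sEMG window to a virtual IMU window of the same length."""
    def __init__(self, emg_ch=8, imu_ch=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(emg_ch, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, imu_ch, kernel_size=7, padding=3),
        )
    def forward(self, emg):                 # (batch, emg_ch, samples)
        return self.net(emg)

class FusionClassifier(nn.Module):
    """Classifies gestures from concatenated real sEMG and virtual IMU channels."""
    def __init__(self, in_ch=8 + 6, num_classes=38):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_ch, 64, 5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(1))
        self.head = nn.Linear(64, num_classes)
    def forward(self, x):
        return self.head(self.features(x).squeeze(-1))

emg = torch.randn(4, 8, 200)                    # toy sEMG batch
gen, clf = VirtualIMUGenerator(), FusionClassifier()
virtual_imu = gen(emg)                          # in practice trained with MSE vs. real IMU
logits = clf(torch.cat([emg, virtual_imu], dim=1))
print(virtual_imu.shape, logits.shape)
```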
A Multi-label Classification Approach to Increase Expressivity of EMG-based Gesture Recognition | 2023-09-13 | ShowObjective: The objective of the study is to efficiently increase the expressivity of surface electromyography-based (sEMG) gesture recognition systems. Approach: We use a problem transformation approach, in which actions were subset into two biomechanically independent components - a set of wrist directions and a set of finger modifiers. To maintain fast calibration time, we train models for each component using only individual gestures, and extrapolate to the full product space of combination gestures by generating synthetic data. We collected a supervised dataset with high-confidence ground truth labels in which subjects performed combination gestures while holding a joystick, and conducted experiments to analyze the impact of model architectures, classifier algorithms, and synthetic data generation strategies on the performance of the proposed approach. Main Results: We found that a problem transformation approach using a parallel model architecture in combination with a non-linear classifier, along with restricted synthetic data generation, shows promise in increasing the expressivity of sEMG-based gestures with a short calibration time. Significance: sEMG-based gesture recognition has applications in human-computer interaction, virtual reality, and the control of robotic and prosthetic devices. Existing approaches require exhaustive model calibration. The proposed approach increases expressivity without requiring users to demonstrate all combination gesture classes. Our results may be extended to larger gesture vocabularies and more complicated model architectures. |
14 pages, 12 figures | None |
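The problem-transformation idea above (wrist direction and finger modifier treated as independent components) maps naturally onto a shared backbone with two parallel classification heads, as in this hedged PyTorch sketch; the component vocabularies and losses are illustrative only, not the authors' architecture.

```python
import torch
import torch.nn as nn

class ParallelGestureModel(nn.Module):
    """Two parallel heads over shared sEMG features: wrist direction and finger modifier."""
    def __init__(self, channels=8, n_directions=5, n_modifiers=4):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv1d(channels, 32, 5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.direction_head = nn.Linear(64, n_directions)  # includes a "no direction" class
        self.modifier_head = nn.Linear(64, n_modifiers)     # includes a "no modifier" class

    def forward(self, x):                    # x: (batch, channels, samples)
        z = self.backbone(x)
        return self.direction_head(z), self.modifier_head(z)

model = ParallelGestureModel()
dir_logits, mod_logits = model(torch.randn(4, 8, 200))
# Training sums two cross-entropy losses, one per component, so a combination gesture
# is simply a pair of non-null labels rather than its own output class.
loss = nn.CrossEntropyLoss()(dir_logits, torch.tensor([0, 1, 2, 3])) \
     + nn.CrossEntropyLoss()(mod_logits, torch.tensor([0, 1, 2, 3]))
print(loss.item())
```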
Multi-stage Factorized Spatio-Temporal Representation for RGB-D Action and Gesture Recognition | 2023-09-11 | ShowRGB-D action and gesture recognition remains an interesting topic in human-centered scene understanding, primarily due to the multiple granularities and large variation in human motion. Although many RGB-D based action and gesture recognition approaches have demonstrated remarkable results by utilizing highly integrated spatio-temporal representations across multiple modalities (i.e., RGB and depth data), they still encounter several challenges. Firstly, vanilla 3D convolution makes it hard to capture fine-grained motion differences between local clips under different modalities. Secondly, the intricate nature of highly integrated spatio-temporal modeling can lead to optimization difficulties. Thirdly, duplicate and unnecessary information can add complexity and complicate entangled spatio-temporal modeling. To address the above issues, we propose an innovative heuristic architecture called Multi-stage Factorized Spatio-Temporal (MFST) for RGB-D action and gesture recognition. The proposed MFST model comprises a 3D Central Difference Convolution Stem (CDC-Stem) module and multiple factorized spatio-temporal stages. The CDC-Stem enriches fine-grained temporal perception, and the multiple hierarchical spatio-temporal stages construct dimension-independent higher-order semantic primitives. Specifically, the CDC-Stem module captures bottom-level spatio-temporal features and passes them successively to the following spatio-temporal factored stages, which capture the hierarchical spatial and temporal features through the Multi-Scale Convolution and Transformer (MSC-Trans) hybrid block and the Weight-shared Multi-Scale Transformer (WMS-Trans) block. The seamless integration of these innovative designs results in a robust spatio-temporal representation that outperforms state-of-the-art approaches on RGB-D action and gesture recognition datasets. |
ACM MM'23 | None |
MIMOCrypt: Multi-User Privacy-Preserving Wi-Fi Sensing via MIMO Encryption | 2023-09-01 | ShowWi-Fi signals may help realize low-cost and non-invasive human sensing, yet they can also be exploited by eavesdroppers to capture private information. Few studies have so far addressed this privacy concern; they either jam all sensing attempts or rely on sophisticated technologies to support only a single sensing user, rendering them impractical for multi-user scenarios. Moreover, these proposals all fail to exploit Wi-Fi's multiple-input multiple-output (MIMO) capability. To this end, we propose MIMOCrypt, a privacy-preserving Wi-Fi sensing framework to support realistic multi-user scenarios. To thwart unauthorized eavesdropping while retaining the sensing and communication capabilities for legitimate users, MIMOCrypt innovates in exploiting MIMO to physically encrypt Wi-Fi channels, treating the sensed human activities as physical plaintexts. The encryption scheme is further enhanced via an optimization framework, aiming to strike a balance among i) risk of eavesdropping, ii) sensing accuracy, and iii) communication quality, upon securely conveying decryption keys to legitimate users. We implement a prototype of MIMOCrypt on an SDR platform and perform extensive experiments to evaluate its effectiveness in common application scenarios, especially privacy-sensitive human gesture recognition. |
IEEE S&P 2024, 19 pages, 22 figures, including meta reviews and responses |
None |
Gesture Recognition based on Long-Short Term Memory Cells using Smartphone IMUs | 2023-08-16 | ShowOver the last few decades, smartphone technology has seen significant improvements. Enhancements to the built-in Inertial Measurement Units (IMUs) and other dedicated smartphone sensors (often available by default), such as the accelerometer, gyroscope, magnetometer, fingerprint reader, and proximity and ambient light sensors, have made devices smarter and interaction more seamless. Gesture recognition on smartphones has been explored with many techniques. In this solution, a Recurrent Neural Network (RNN) approach based on Long Short-Term Memory (LSTM) cells is used to classify ten different gestures from accelerometer and gyroscope data. These two sensors were selected because they provide the most information about the movement and orientation of the phone. Various models were evaluated in this project, and the results are presented in the later sections. Furthermore, the properties and characteristics of the collected data were studied, and a set of improvements is suggested in the future work section. |
Proceedings of Fachpraktikum Interaktive Systeme (FIS 18), Summer 2018, University of Stuttgart, Germany |
None |
Long-Distance Gesture Recognition using Dynamic Neural Networks | 2023-08-09 | ShowGestures form an important medium of communication between humans and machines. An overwhelming majority of existing gesture recognition methods are tailored to a scenario where humans and machines are located very close to each other. This short-distance assumption does not hold true for several types of interactions, for example gesture-based interactions with a floor cleaning robot or with a drone. Methods made for short-distance recognition are unable to perform well on long-distance recognition due to gestures occupying only a small portion of the input data. Their performance is especially worse in resource constrained settings where they are not able to effectively focus their limited compute on the gesturing subject. We propose a novel, accurate and efficient method for the recognition of gestures from longer distances. It uses a dynamic neural network to select features from gesture-containing spatial regions of the input sensor data for further processing. This helps the network focus on features important for gesture recognition while discarding background features early on, thus making it more compute efficient compared to other techniques. We demonstrate the performance of our method on the LD-ConGR long-distance dataset where it outperforms previous state-of-the-art methods on recognition accuracy and compute efficiency. |
Accepted to IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023) |
None |
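One way to picture the dynamic, region-selecting network described above: a very cheap saliency head scores coarse spatial cells, and only the top-scoring crop is passed to the heavier classifier. The sketch below is a loose interpretation with arbitrary sizes, not the authors' architecture.

```python
import torch
import torch.nn as nn

class RegionGatedRecognizer(nn.Module):
    """Cheap saliency head picks the gesture-containing crop; heavy net sees only that crop."""
    def __init__(self, num_classes=10, crop=64):
        super().__init__()
        self.crop = crop
        self.saliency = nn.Sequential(               # lightweight region scorer
            nn.Conv2d(3, 8, 3, stride=4, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, stride=4, padding=1),
        )
        self.classifier = nn.Sequential(              # heavier net, run only on the crop
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes),
        )

    def forward(self, frame):                          # frame: (batch, 3, H, W)
        score = self.saliency(frame)                   # (batch, 1, H/16, W/16)
        b, _, h, w = score.shape
        idx = score.flatten(1).argmax(dim=1)
        ys, xs = (idx // w) * 16, (idx % w) * 16       # map grid cell back to pixels
        crops = []
        for i in range(b):
            y0 = int(ys[i].clamp(0, frame.shape[2] - self.crop))
            x0 = int(xs[i].clamp(0, frame.shape[3] - self.crop))
            crops.append(frame[i, :, y0:y0 + self.crop, x0:x0 + self.crop])
        return self.classifier(torch.stack(crops))

print(RegionGatedRecognizer()(torch.randn(2, 3, 256, 256)).shape)   # (2, 10)
```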
NeuroAiR: Deep Learning Framework for Airwriting Recognition from Scalp-recorded Neural Signals | 2023-08-07 | ShowAirwriting recognition is a task that involves identifying letters written in free space using finger movement. It is a special case of gesture recognition, where gestures correspond to letters in a specific language. Electroencephalography (EEG) is a non-invasive technique for recording brain activity and has been widely used in brain-computer interface applications. Leveraging EEG signals for airwriting recognition offers a promising alternative input method for Human-Computer Interaction. One key advantage of airwriting recognition is that users don't need to learn new gestures. By concatenating recognized letters, a wide range of words can be formed, making it applicable to a broader population. However, there has been limited research in the recognition of airwriting using EEG signals, which forms the core focus of this study. The NeuroAiR dataset comprising EEG signals recorded during writing English uppercase alphabets is first constructed. Various features are then explored in conjunction with different deep learning models to achieve accurate airwriting recognition. These features include processed EEG data, Independent Component Analysis components, source-domain-based scout time series, and spherical and head harmonic decomposition-based features. Furthermore, the impact of different EEG frequency bands on system performance is comprehensively investigated. The highest accuracy achieved in this study is 44.04% using Independent Component Analysis components and the EEGNet classification model. The results highlight the potential of EEG-based airwriting recognition as a user-friendly modality for alternative input methods in Human-Computer Interaction applications. This research sets a strong baseline for future advancements and demonstrates the viability and utility of EEG-based airwriting recognition. |
None | |
A Survey of mmWave-based Human Sensing: Technology, Platform and Applications | 2023-08-06 | ShowWith the rapid development of the Internet of Things (IoT) and the rise of 5G communication networks and automatic driving, millimeter wave (mmWave) sensing is emerging and starts impacting our life and workspace. mmWave sensing can sense humans and objects in a contactless way, providing fine-grained sensing ability. In the past few years, many mmWave sensing techniques have been proposed and applied in various human sensing applications (e.g., human localization, gesture recognition, and vital monitoring). We discover the need of a comprehensive survey to summarize the technology, platforms and applications of mmWave-based human sensing. In this survey, we first present the mmWave hardware platforms and some key techniques of mmWave sensing. We then provide a comprehensive review of existing mmWave-based human sensing works. Specifically, we divide existing works into four categories according to the sensing granularity: human tracking and localization, motion recognition, biometric measurement and human imaging. Finally, we discuss the potential research challenges and present future directions in this area. |
30 pages, 17 figures, IEEE Survey & Tutorial |
None |
DIVERSIFY: A General Framework for Time Series Out-of-distribution Detection and Generalization | 2023-08-04 | ShowTime series remains one of the most challenging modalities in machine learning research. Out-of-distribution (OOD) detection and generalization on time series tend to suffer due to its non-stationary property, i.e., the distribution changes over time. The dynamic distributions inside time series pose great challenges to existing algorithms in identifying invariant distributions, since these algorithms mainly focus on the scenario where the domain information is given as prior knowledge. In this paper, we attempt to exploit subdomains within a whole dataset to counteract the issues induced by non-stationarity for generalized representation learning. We propose DIVERSIFY, a general framework for OOD detection and generalization on dynamic distributions of time series. DIVERSIFY takes an iterative process: it first obtains the "worst-case" latent distribution scenario via adversarial training, then reduces the gap between these latent distributions. We implement DIVERSIFY by combining existing OOD detection methods according to either extracted features or the outputs of models for detection, while we also directly utilize outputs for classification. In addition, theoretical insights show that DIVERSIFY is theoretically supported. Extensive experiments are conducted on seven datasets with different OOD settings across gesture recognition, speech commands recognition, wearable stress and affect detection, and sensor-based human activity recognition. Qualitative and quantitative results demonstrate that DIVERSIFY learns more generalized features and significantly outperforms other baselines. |
Journal version of arXiv:2209.07027; 17 pages |
None |
Hand tracking for clinical applications: validation of the Google MediaPipe Hand (GMH) and the depth-enhanced GMH-D frameworks | 2023-08-02 | ShowAccurate 3D tracking of hand and fingers movements poses significant challenges in computer vision. The potential applications span across multiple domains, including human-computer interaction, virtual reality, industry, and medicine. While gesture recognition has achieved remarkable accuracy, quantifying fine movements remains a hurdle, particularly in clinical applications where the assessment of hand dysfunctions and rehabilitation training outcomes necessitate precise measurements. Several novel and lightweight frameworks based on Deep Learning have emerged to address this issue; however, their performance in accurately and reliably measuring fingers movements requires validation against well-established gold standard systems. In this paper, the aim is to validate the handtracking framework implemented by Google MediaPipe Hand (GMH) and an innovative enhanced version, GMH-D, that exploits the depth estimation of an RGB-Depth camera to achieve more accurate tracking of 3D movements. Three dynamic exercises commonly administered by clinicians to assess hand dysfunctions, namely Hand Opening-Closing, Single Finger Tapping and Multiple Finger Tapping are considered. Results demonstrate high temporal and spectral consistency of both frameworks with the gold standard. However, the enhanced GMH-D framework exhibits superior accuracy in spatial measurements compared to the baseline GMH, for both slow and fast movements. Overall, our study contributes to the advancement of hand tracking technology, the establishment of a validation procedure as a good-practice to prove efficacy of deep-learning-based hand-tracking, and proves the effectiveness of GMH-D as a reliable framework for assessing 3D hand movements in clinical applications. |
None | |
Hierarchical Semi-Supervised Learning Framework for Surgical Gesture Segmentation and Recognition Based on Multi-Modality Data | 2023-07-31 | ShowSegmenting and recognizing surgical operation trajectories into distinct, meaningful gestures is a critical preliminary step in surgical workflow analysis for robot-assisted surgery. This step is necessary for facilitating learning from demonstrations for autonomous robotic surgery, evaluating surgical skills, and so on. In this work, we develop a hierarchical semi-supervised learning framework for surgical gesture segmentation using multi-modality data (i.e. kinematics and vision data). More specifically, surgical tasks are initially segmented based on distance characteristics-based profiles and variance characteristics-based profiles constructed using kinematics data. Subsequently, a Transformer-based network with a pre-trained `ResNet-18' backbone is used to extract visual features from the surgical operation videos. By combining the potential segmentation points obtained from both modalities, we can determine the final segmentation points. Furthermore, gesture recognition can be implemented based on supervised learning. The proposed approach has been evaluated using data from the publicly available JIGSAWS database, including Suturing, Needle Passing, and Knot Tying tasks. The results reveal an average F1 score of 0.623 for segmentation and an accuracy of 0.856 for recognition. |
8 pages, 7 figures. Accepted by 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023). For more details about this paper, please visit our website: https://sites.google.com/view/surseg/home |
None |
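As a much-simplified stand-in for the variance-characteristics profiles used for the kinematics-based segmentation above, the sketch below computes a sliding-window variance profile over robot kinematics and picks well-separated local minima as candidate gesture boundaries; the window length, spacing, and number of candidates are arbitrary choices, not the paper's.

```python
import numpy as np

def variance_profile(kinematics, win=30):
    """Sliding-window variance of kinematics (frames x dims), summed over dims."""
    frames = len(kinematics)
    prof = np.zeros(frames)
    for t in range(frames):
        lo, hi = max(0, t - win // 2), min(frames, t + win // 2)
        prof[t] = kinematics[lo:hi].var(axis=0).sum()
    return prof

def candidate_boundaries(profile, min_gap=40, keep=10):
    """Low-variance frames spaced at least `min_gap` apart, as candidate cut points."""
    picked = []
    for t in np.argsort(profile):          # frames ordered by increasing variance
        if all(abs(t - p) >= min_gap for p in picked):
            picked.append(int(t))
    return sorted(picked[:keep])

# Toy stand-in for JIGSAWS-style tool-tip kinematics (frames x dims).
kin = np.cumsum(np.random.randn(600, 6) * 0.1, axis=0)
print(candidate_boundaries(variance_profile(kin)))
```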
Robotic Vision for Human-Robot Interaction and Collaboration: A Survey and Systematic Review | 2023-07-28 | ShowRobotic vision for human-robot interaction and collaboration is a critical process for robots to collect and interpret detailed information related to human actions, goals, and preferences, enabling robots to provide more useful services to people. This survey and systematic review presents a comprehensive analysis on robotic vision in human-robot interaction and collaboration over the last 10 years. From a detailed search of 3850 articles, systematic extraction and evaluation was used to identify and explore 310 papers in depth. These papers described robots with some level of autonomy using robotic vision for locomotion, manipulation and/or visual communication to collaborate or interact with people. This paper provides an in-depth analysis of current trends, common domains, methods and procedures, technical processes, data sets and models, experimental testing, sample populations, performance metrics and future challenges. This manuscript found that robotic vision was often used in action and gesture recognition, robot movement in human spaces, object handover and collaborative actions, social communication and learning from demonstration. Few high-impact and novel techniques from the computer vision field had been translated into human-robot interaction and collaboration. Overall, notable advancements have been made on how to develop and deploy robots to assist people. |
None | |
Energy Efficient Personalized Hand-Gesture Recognition with Neuromorphic Computing | 2023-07-25 | ShowHand gestures are a form of non-verbal communication that is used in social interaction and it is therefore required for more natural human-robot interaction. Neuromorphic (brain-inspired) computing offers a low-power solution for Spiking neural networks (SNNs) that can be used for the classification and recognition of gestures. This article introduces the preliminary results of a novel methodology for training spiking convolutional neural networks for hand-gesture recognition so that a humanoid robot with integrated neuromorphic hardware will be able to personalise the interaction with a user according to the shown hand gesture. It also describes other approaches that could improve the overall performance of the model. |
None | |
Systematic Adaptation of Communication-focused Machine Learning Models from Real to Virtual Environments for Human-Robot Collaboration | 2023-07-21 | ShowVirtual reality has proved to be useful in applications in several fields ranging from gaming, medicine, and training to development of interfaces that enable human-robot collaboration. It empowers designers to explore applications outside of the constraints posed by the real world environment and develop innovative solutions and experiences. Hand gestures recognition which has been a topic of much research and subsequent commercialization in the real world has been possible because of the creation of large, labelled datasets. In order to utilize the power of natural and intuitive hand gestures in the virtual domain for enabling embodied teleoperation of collaborative robots, similarly large datasets must be created so as to keep the working interface easy to learn and flexible enough to add more gestures. Depending on the application, this may be computationally or economically prohibitive. Thus, the adaptation of trained deep learning models that perform well in the real environment to the virtual may be a solution to this challenge. This paper presents a systematic framework for the real to virtual adaptation using limited size of virtual dataset along with guidelines for creating a curated dataset. Finally, while hand gestures have been considered as the communication mode, the guidelines and recommendations presented are generic. These are applicable to other modes such as body poses and facial expressions which have large datasets available in the real domain which must be adapted to the virtual one. |
None | |
Joint Skeletal and Semantic Embedding Loss for Micro-gesture Classification | 2023-07-20 | ShowIn this paper, we briefly introduce the solution of our team HFUT-VUT for the Micro-gesture Classification track of the MiGA challenge at IJCAI 2023. The micro-gesture classification task aims at recognizing the action category of a given video based on skeleton data. For this task, we propose a 3D-CNN-based micro-gesture recognition network, which incorporates a skeletal and semantic embedding loss to improve action classification performance. Finally, we rank 1st in the Micro-gesture Classification Challenge, surpassing the second-place team in terms of Top-1 accuracy by 1.10%. |
1st Place in Micro-gesture Classification sub-challenge in MiGA at IJCAI-2023 |
None |
MorphoArms: Morphogenetic Teleoperation of Multimanual Robot | 2023-07-06 | ShowNowadays, there are few unmanned aerial vehicles (UAVs) capable of flying, walking and grasping. A drone with all these functionalities can significantly improve its performance in complex tasks such as monitoring and exploring different types of terrain, and rescue operations. This paper presents MorphoArms, a novel system that consists of a morphogenetic chassis and a hand gesture recognition teleoperation system. The mechanics, electronics, control architecture, and walking behavior of the morphogenetic chassis are described. This robot is capable of walking and grasping objects using four robotic limbs. Robotic limbs with four degrees-of-freedom are used as pedipulators when walking and as manipulators when performing actions in the environment. The robot control system is implemented using teleoperation, where commands are given by hand gestures. A motion capture system is used to track the user's hands and to recognize their gestures. The method of controlling the robot was experimentally tested in a study involving 10 users. The evaluation included three questionnaires (NASA TLX, SUS, and UEQ). The results showed that the proposed system was more user-friendly than 56% of the systems, and it was rated above average in terms of attractiveness, stimulation, and novelty. |
IEEE International Conference on Automation Science and Engineering (CASE 2023), Cordis, New Zealand, 26-30 August 2023, in print |
None |
Gesture Recognition with mmWave Wi-Fi Access Points: Lessons Learned | 2023-06-29 | ShowIn recent years, channel state information (CSI) at sub-6 GHz has been widely exploited for Wi-Fi sensing, particularly for activity and gesture recognition. In this work, we instead explore mmWave (60 GHz) Wi-Fi signals for gesture recognition/pose estimation. Our focus is on mmWave Wi-Fi signals so that they can be used not only for high data rate communication but also for improved sensing, e.g., for extended reality (XR) applications. For this reason, we extract spatial beam signal-to-noise ratios (SNRs) from the periodic beam training employed by IEEE 802.11ad devices. We consider a set of 10 gestures/poses motivated by XR applications. We conduct experiments in two environments and with three people. As a comparison, we also collect CSI from IEEE 802.11ac devices. To extract features from the CSI and the beam SNR, we leverage a deep neural network (DNN). The DNN classifier achieves promising results on the beam SNR task with state-of-the-art 96.7% accuracy in a single environment, even with a limited dataset. We also investigate the robustness of the beam SNR against CSI across different environments. Our experiments reveal that features from the CSI generalize without additional re-training, while those from beam SNRs do not. Therefore, re-training is required in the latter case. |
None | |
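Since the beam-SNR features above are fixed-length vectors per example, a compact baseline is an MLP on standardized features. The sketch below uses random data in place of the collected beam SNRs, and the 36-sector feature dimensionality is an assumption.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy stand-in: per-sample beam SNR feature vectors (e.g., SNRs over 36 beam sectors
# aggregated over a short time window) with one of 10 gesture/pose labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 36))
y = rng.integers(0, 10, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```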
Evaluating the Task Generalization of Temporal Convolutional Networks for Surgical Gesture and Motion Recognition using Kinematic Data | 2023-06-28 | ShowFine-grained activity recognition enables explainable analysis of procedures for skill assessment, autonomy, and error detection in robot-assisted surgery. However, existing recognition models suffer from the limited availability of annotated datasets with both kinematic and video data and an inability to generalize to unseen subjects and tasks. Kinematic data from the surgical robot is particularly critical for safety monitoring and autonomy, as it is unaffected by common camera issues such as occlusions and lens contamination. We leverage an aggregated dataset of six dry-lab surgical tasks from a total of 28 subjects to train activity recognition models at the gesture and motion primitive (MP) levels and for separate robotic arms using only kinematic data. The models are evaluated using the LOUO (Leave-One-User-Out) and our proposed LOTO (Leave-One-Task-Out) cross validation methods to assess their ability to generalize to unseen users and tasks respectively. Gesture recognition models achieve higher accuracies and edit scores than MP recognition models. But, using MPs enables the training of models that can generalize better to unseen tasks. Also, higher MP recognition accuracy can be achieved by training separate models for the left and right robot arms. For task-generalization, MP recognition models perform best if trained on similar tasks and/or tasks from the same dataset. |
8 pages, 4 figures, 6 tables. To be published in IEEE Robotics and Automation Letters (RA-L) |
None |
Transfer: Cross Modality Knowledge Transfer using Adversarial Networks -- A Study on Gesture Recognition | 2023-06-26 | ShowKnowledge transfer across sensing technologies is a novel concept that has been recently explored in many application domains, including gesture-based human computer interaction. The main aim is to gather semantic or data-driven information from a source technology to classify / recognize instances of unseen classes in the target technology. The primary challenge is the significant difference in dimensionality and distribution of feature sets between the source and the target technologies. In this paper, we propose TRANSFER, a generic framework for knowledge transfer between a source and a target technology. TRANSFER uses a language-based representation of a hand gesture, which captures a temporal combination of concepts such as handshape, location, and movement that are semantically related to the meaning of a word. By utilizing a pre-specified syntactic structure and tokenizer, TRANSFER segments a hand gesture into tokens and identifies individual components using a token recognizer. The tokenizer in this language-based recognition system abstracts the low-level technology-specific characteristics to the machine interface, enabling the design of a discriminator that learns technology-invariant features essential for recognition of gestures in both source and target technologies. We demonstrate the usage of TRANSFER in three different scenarios: a) transferring knowledge across technologies by learning gesture models from video and recognizing gestures using WiFi, b) transferring knowledge from video to accelerometer, and c) transferring knowledge from accelerometer to WiFi signals. |
None |