An efficient multi-modal sensor feature fusion approach for handwritten character recognition using Shapley values and a deep autoencoder
Abstract
Handwriting is essential for the development of fine motor skills in children. Handwritten character recognition has the potential to facilitate natural human–machine interaction, aiding the digitization of handwritten text in educational environments such as smart classrooms. Electromyography (EMG), a widely used biosignal, captures the complex electrical patterns generated by muscle activity during handwriting movements, offering detailed insight into neuromuscular function. This study proposes an efficient multi-modal handwritten character recognition pipeline that integrates physiological (EMG) and Inertial Measurement Unit (IMU) sensors. EMG signals provide valuable information about muscle function and activation patterns, while IMU sensors track the motion and orientation associated with handwriting. The proposed system employs feature fusion, combining data from both sensor types. A cooperative game theory-based feature ranking method (Shapley values) and a modified deep autoencoder architecture are used for enhanced data representation and feature extraction. A novel dataset comprising the 26 isolated handwritten English alphabet characters written on a whiteboard was collected for experimental validation. The proposed pipeline demonstrates high efficiency, achieving a classification accuracy of 99.01% on the isolated handwritten characters. Additional performance metrics, including the Matthews correlation coefficient (98.77) and Kappa score (98.97), were assessed to validate the model's effectiveness. The fusion of EMG and IMU data enhances system robustness, offering significant potential for digitizing handwritten notes in smart classrooms and for clinical handwriting analysis, including the diagnosis and monitoring of Alzheimer's disease. © 2024 Elsevier Ltd
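The cooperative game theory-based feature ranking mentioned above can be sketched with exact Shapley values: each feature's importance is its average marginal contribution to a value function (e.g., validation accuracy of a model trained on a feature subset) over all feature orderings. This is a minimal illustrative sketch, not the authors' implementation; the feature names (`rms`, `mav`, `zc`) and the subset scores in `SCORES` are made-up assumptions standing in for real EMG/IMU feature evaluations.

```python
# Hypothetical sketch of Shapley-value feature ranking (illustrative only,
# not the paper's actual pipeline or feature set).
from itertools import permutations


def shapley_values(features, value_fn):
    """Exact Shapley values by enumerating all feature orderings.

    Feasible only for small feature sets; practical pipelines use
    sampling-based approximations instead of full enumeration.
    """
    phi = {f: 0.0 for f in features}
    perms = list(permutations(features))
    for order in perms:
        coalition = set()
        for f in order:
            before = value_fn(frozenset(coalition))  # value without f
            coalition.add(f)
            phi[f] += value_fn(frozenset(coalition)) - before  # marginal gain
    for f in features:
        phi[f] /= len(perms)  # average over all orderings
    return phi


# Toy value function standing in for classifier accuracy on a feature
# subset; the numbers below are assumptions chosen for illustration.
SCORES = {
    frozenset(): 0.0,
    frozenset({"rms"}): 0.6,
    frozenset({"mav"}): 0.5,
    frozenset({"zc"}): 0.3,
    frozenset({"rms", "mav"}): 0.7,
    frozenset({"rms", "zc"}): 0.75,
    frozenset({"mav", "zc"}): 0.6,
    frozenset({"rms", "mav", "zc"}): 0.8,
}

phi = shapley_values(["rms", "mav", "zc"], SCORES.__getitem__)
ranked = sorted(phi, key=phi.get, reverse=True)
print(ranked)  # features ordered by average marginal contribution
```

By the efficiency property, the Shapley values sum to the full-set score minus the empty-set score, which makes the ranking easy to sanity-check; features with the smallest values are candidates for pruning before the autoencoder stage.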