
RESEARCH PRODUCT

WiWeHAR: Multimodal Human Activity Recognition Using Wi-Fi and Wearable Sensing Modalities

Ahmed Abdelgawwad; Matthias Pätzold; Ali Chelli; Andreu Català Mallofré; Muhammad Muaaz

subject

Human activity recognition; activity recognition; human-centered computing; wearable sensing; wearable computer; radio frequency sensing; accelerometer; gyroscope; inertial measurement unit; Doppler effect; micro-Doppler signature; machine learning; supervised learning; artificial intelligence; computer vision; pattern recognition (computer science); feature extraction; feature fusion; principal component analysis; modality (human–computer interaction); General Computer Science; General Engineering; General Materials Science; VDP::Technology: 500::Information and communication technology: 550; UPC subject areas: Telecommunication engineering::Signal processing::Pattern recognition; UPC subject areas: Computer science::Artificial intelligence; LCSH: Electrical engineering. Electronics. Nuclear engineering (TK1-9971)

description

Robust and accurate human activity recognition (HAR) systems are essential to many human-centric services within active assisted living and healthcare facilities. Traditional HAR systems mostly rely on a single sensing modality (e.g., wearable, vision, or radio frequency sensing) combined with machine learning techniques to recognize human activities. Such unimodal HAR systems do not cope well with real-time changes in the environment. To overcome this limitation, new HAR systems that incorporate multiple sensing modalities are needed: multiple diverse sensors can provide more accurate and complete information, resulting in better recognition of the performed activities. This article presents WiWeHAR, a multimodal HAR system that uses combined Wi-Fi and wearable sensing modalities to sense the performed activities simultaneously. WiWeHAR uses standard Wi-Fi network interface cards to collect channel state information (CSI) and a wearable inertial measurement unit (IMU) consisting of accelerometer, gyroscope, and magnetometer sensors to capture the user's local body movements. We compute the time-variant mean Doppler shift (MDS) from the processed CSI data and the magnitude signal of each IMU sensor from the inertial data. Thereafter, we separately extract various time- and frequency-domain features from the magnitude signals and the MDS. We apply feature-level fusion to combine the extracted features and finally use supervised learning techniques to recognize the performed activities. We evaluate the performance of WiWeHAR on a multimodal human activity data set obtained from nine participants, each of whom carried out four activities: walking, falling, sitting, and picking up an object from the floor. Our results indicate that the proposed multimodal WiWeHAR system outperforms the unimodal CSI, accelerometer, gyroscope, and magnetometer HAR systems and achieves an overall recognition accuracy of 99.6%–100%.

This work was supported by the WiCare project funded by the Research Council of Norway under Grant 261895/F20.

Peer Reviewed
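The pipeline described above (magnitude computation, time- and frequency-domain feature extraction, feature-level fusion, and supervised classification) can be sketched in a few lines of Python. The sketch below is illustrative rather than the authors' implementation: the specific feature set, the k-nearest-neighbors classifier, the sampling rates, and all function names are assumptions, and the random input arrays stand in for pre-processed MDS and IMU windows.

```python
# Minimal sketch of a WiWeHAR-style feature-fusion pipeline.
# Assumptions (not specified in this record): the feature choices, window
# sizes, sampling rates, and the k-NN classifier are illustrative stand-ins.
import numpy as np
from scipy import stats
from sklearn.neighbors import KNeighborsClassifier

def magnitude(xyz):
    """Orientation-independent magnitude of a 3-axis sensor stream.

    xyz: array of shape (n_samples, 3) from the accelerometer, gyroscope,
    or magnetometer.
    """
    return np.linalg.norm(xyz, axis=1)

def time_freq_features(signal, fs):
    """Extract a few time- and frequency-domain features from a 1-D window."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd_norm = spectrum**2 / (spectrum**2).sum()
    return np.array([
        signal.mean(),                       # time domain: mean
        signal.std(),                        # time domain: standard deviation
        stats.skew(signal),                  # time domain: skewness
        stats.kurtosis(signal),              # time domain: kurtosis
        freqs[np.argmax(spectrum)],          # frequency domain: dominant frequency
        -(psd_norm * np.log2(psd_norm + 1e-12)).sum(),  # spectral entropy
    ])

def fuse_features(mds, acc, gyr, mag, fs_csi, fs_imu):
    """Feature-level fusion: concatenate the per-modality feature vectors."""
    return np.concatenate([
        time_freq_features(mds, fs_csi),             # Wi-Fi mean Doppler shift
        time_freq_features(magnitude(acc), fs_imu),  # accelerometer magnitude
        time_freq_features(magnitude(gyr), fs_imu),  # gyroscope magnitude
        time_freq_features(magnitude(mag), fs_imu),  # magnetometer magnitude
    ])

# Hypothetical usage with synthetic windows (one window per activity instance):
rng = np.random.default_rng(0)
X = np.stack([
    fuse_features(rng.standard_normal(1000),      # MDS window
                  rng.standard_normal((500, 3)),  # accelerometer window
                  rng.standard_normal((500, 3)),  # gyroscope window
                  rng.standard_normal((500, 3)),  # magnetometer window
                  fs_csi=1000, fs_imu=100)
    for _ in range(40)
])
y = rng.integers(0, 4, size=40)  # walking, falling, sitting, picking up
clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.predict(X[:5]))
```

Feature-level fusion here is plain concatenation of the per-modality feature vectors, so a single classifier can weigh the Wi-Fi and wearable evidence jointly; any supervised learner could replace the k-NN stand-in.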

https://doi.org/10.1109/access.2020.3022287
https://hdl.handle.net/11250/2733538