Multimodal data as a means to understand the learning experience
Authors: Kshitij Sharma, Michail N. Giannakos, Ilias O. Pappas, Vassilis Kostakos, Eduardo Velloso

Subjects: Computer Networks and Communications; Computer science; Multimodal data; 05 social sciences; Word error rate; Feature selection; 02 engineering and technology; Library and Information Sciences; Skill development; Variety (cybernetics); Dreyfus model of skill acquisition; Learning experience; Human–computer interaction; 020204 information systems; 0502 economics and business; 0202 electrical engineering, electronic engineering, information engineering; 050211 marketing; Set (psychology); Information Systems

Description:
Most work in the design of learning technology uses click-streams as the primary data source for modelling and predicting learning behaviour. In this paper we set out to quantify what advantages, if any, physiological sensing techniques provide for the design of learning technologies. We conducted a lab study with 251 game sessions and 17 users, focusing on skill development (i.e., a user's ability to master complex tasks). We collected click-stream data, as well as eye-tracking, electroencephalography (EEG), video, and wristband data during the experiment. Our analysis shows that traditional click-stream models achieve a 39% error rate in predicting learning performance (18% when we perform feature selection), while fused multimodal models reduce the error to 6%. Our work highlights the limitations of standalone click-stream models and quantifies the expected benefits of using a variety of multimodal data coming from physiological sensing. Our findings help shape the future of learning technology research by pointing out the substantial benefits of physiological sensing. © 2019 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
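The abstract's central comparison — a model trained on click-stream features alone versus one trained on early-fused multimodal features — can be sketched as follows. This is a minimal illustrative toy, not the paper's actual pipeline: the simulated features, the strength of each modality's signal, and the leave-one-out nearest-centroid classifier are all assumptions made for the example.

```python
# Toy sketch of early fusion: concatenate click-stream and physiological
# feature vectors per session, then compare leave-one-out error rates.
# All feature distributions below are illustrative assumptions.
import random

random.seed(0)

def make_session(label):
    """Simulate one game session: click-stream features carry a weak
    signal for the outcome label, physiological features a stronger one."""
    signal = 1.0 if label else 0.0
    clickstream = [random.gauss(signal * 0.3, 1.0) for _ in range(4)]
    physiological = [random.gauss(signal * 1.5, 1.0) for _ in range(8)]
    return clickstream, physiological, label

def nearest_centroid_error(X, y):
    """Leave-one-out error rate of a nearest-centroid classifier."""
    errors = 0
    for i in range(len(X)):
        # Recompute class centroids without the held-out session i.
        cents = {}
        for c in (0, 1):
            rows = [x for j, (x, lab) in enumerate(zip(X, y))
                    if lab == c and j != i]
            cents[c] = [sum(col) / len(col) for col in zip(*rows)]
        pred = min(cents, key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(X[i], cents[c])))
        errors += pred != y[i]
    return errors / len(X)

sessions = [make_session(i % 2) for i in range(40)]
click_only = [cs for cs, _, _ in sessions]
fused = [cs + ph for cs, ph, _ in sessions]  # early fusion: concatenation
labels = [lab for _, _, lab in sessions]

print(f"click-stream only error: {nearest_centroid_error(click_only, labels):.2f}")
print(f"fused multimodal error : {nearest_centroid_error(fused, labels):.2f}")
```

On this synthetic data the fused model's error is no worse than the click-stream-only model's, mirroring the direction (though of course not the magnitude) of the 39%-vs-6% result reported in the abstract.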
year | journal | country | edition | language
---|---|---|---|---
2019-10-01 | | | |