AUTHOR

Kshitij Sharma

Seeking Information on Social Commerce: An Examination of the Impact of User- and Marketer-generated Content Through an Eye-tracking Study

Following the growing popularity of social commerce sites, there is increased interest in understanding how consumers decide which products to purchase based on the available information. Consumers are nowadays confronted with the task of assessing marketer-generated content (MGC) as well as user-generated content (UGC) in a range of different forms in order to make informed purchase-related decisions. This study examines the information types and forms that influence consumers in their decision-making process on social commerce sites. Building on uses and gratifications and dual-process theories, we distinguish between marketer- and user-generated content, and differentiate formats into informational and no…

research product

How Quickly Can We Predict Users’ Ratings on Aesthetic Evaluations of Websites? Employing Machine Learning on Eye-Tracking Data

This study examines how quickly we can predict users’ ratings of visual aesthetics in terms of simplicity, diversity, colorfulness, and craftsmanship. To predict users’ ratings, we first capture gaze behavior while participants look at websites of high, neutral, and low visual appeal, followed by a survey on their perceptions of the visual aesthetics of the same websites. We conduct an experiment with 23 experienced online shoppers, capture gaze behavior, and employ machine learning to examine how quickly we can accurately predict their ratings. The findings show that after 25 s we can predict ratings with an error rate ranging from 9% to 11%, depending on which facet of visual ae…
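The windowed-prediction idea above can be sketched as follows. This is an illustration, not the study's actual pipeline: the gaze features, the synthetic data, and the model choice are all assumptions for demonstration.

```python
# Illustrative sketch: predicting an aesthetics rating from gaze features
# aggregated over the first t seconds of viewing. All feature names and
# the synthetic data below are assumptions, not the study's real data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_trials = 120

def gaze_features(window_s):
    """Hypothetical per-trial gaze features from the first `window_s` seconds:
    fixation count, mean fixation duration, mean saccade amplitude."""
    fix_count = rng.poisson(3 * window_s, n_trials)
    fix_dur = rng.normal(250, 40, n_trials)    # ms
    sacc_amp = rng.normal(4.0, 1.0, n_trials)  # degrees of visual angle
    return np.column_stack([fix_count, fix_dur, sacc_amp])

X = gaze_features(window_s=25)
# Synthetic 1-7 rating loosely tied to the features (assumption, demo only).
y = np.clip(4 + 0.3 * (X[:, 2] - 4) + rng.normal(0, 0.5, n_trials), 1, 7)

model = RandomForestRegressor(n_estimators=100, random_state=0)
pred = cross_val_predict(model, X, y, cv=5)
mae = np.mean(np.abs(pred - y))
print(f"cross-validated MAE on the 1-7 scale: {mae:.2f}")
```

Repeating this for growing time windows (5 s, 10 s, …) is one way to ask "how quickly" a reliable prediction becomes possible.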

research product

Fitbit for learning: Towards capturing the learning experience using wearable sensing

The assessment of learning during class activities mostly relies on standardized questionnaires to evaluate the efficacy of the learning design elements. However, standardized questionnaires place additional strain on students, do not provide “temporal” information during the learning experience, require considerable effort and language competence, and are sometimes not appropriate. To overcome these challenges, we propose using wearable devices, which allow for continuous and unobtrusive monitoring of physiological parameters during learning. In this paper we set out to quantify how well we can infer students’ learning experience from wrist-worn devices capturing physiological data. We coll…
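The continuous-monitoring idea can be sketched as below: segmenting wrist-worn physiological streams into fixed windows and summarizing each window, so the summaries can later be related to self-reported experience. The sampling rates, feature choices, and data are assumptions for illustration, not the paper's pipeline.

```python
# A minimal sketch (assumption, not the paper's pipeline): summarizing
# wrist-worn physiological streams per fixed-length learning segment.
import numpy as np

def segment_features(eda, hr, fs_eda=4, fs_hr=1, segment_s=60):
    """Split EDA (assumed 4 Hz) and heart-rate (assumed 1 Hz) streams into
    fixed segments and return simple per-segment summaries."""
    feats = []
    n_seg = min(len(eda) // (fs_eda * segment_s), len(hr) // (fs_hr * segment_s))
    for i in range(n_seg):
        e = eda[i * fs_eda * segment_s:(i + 1) * fs_eda * segment_s]
        h = hr[i * fs_hr * segment_s:(i + 1) * fs_hr * segment_s]
        feats.append({
            "eda_mean": float(np.mean(e)),                # tonic skin-conductance level
            "eda_peaks": int(np.sum(np.diff(e) > 0.05)),  # crude phasic-response count
            "hr_mean": float(np.mean(h)),
        })
    return feats

# Ten minutes of synthetic data at the assumed sampling rates.
rng = np.random.default_rng(1)
eda = np.cumsum(rng.normal(0, 0.01, 4 * 600)) + 2.0  # microsiemens
hr = rng.normal(75, 5, 600)                          # beats per minute
features = segment_features(eda, hr)
print(len(features), "segments, first:", features[0])
```

Each segment's summary row could then be paired with a questionnaire response for that part of the activity, giving the "temporal" view that a single end-of-class questionnaire lacks.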

research product

Wearable Sensing and Quantified-self to explain Learning Experience

Author's accepted manuscript © 2022 IEEE. The confluence of wearable technologies for sensing learners and the quantified-self provides a unique opportunity to understand learners’ experience in diverse learning contexts. We use data from learners wearing Empatica wristbands together with a self-report questionnaire. We compute stress, ar…

research product

Utilizing Multimodal Data Through fsQCA to Explain Engagement in Adaptive Learning

Investigating and explaining the patterns of learners’ engagement in adaptive learning conditions is a core issue in improving the quality of personalized learning services. This article collects learner data from multiple sources during an adaptive learning activity and employs a fuzzy-set qualitative comparative analysis (fsQCA) approach to shed light on learners’ engagement patterns with respect to their learning performance. Specifically, this article measures and codes learners’ engagement by fusing and compiling clickstreams (e.g., response time), physiological data (e.g., eye-tracking, electroencephalography, electrodermal activity), and survey data (e.g., goal orientation) to…
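Two building blocks of fsQCA can be sketched in a few lines: direct calibration of a raw measure into fuzzy-set membership, and the consistency of a sufficiency relation ("high engagement is sufficient for high performance"). The anchor values and scores below are assumptions for illustration, not the article's data.

```python
# Hedged sketch of two fsQCA building blocks (not the authors' analysis).
import math

def calibrate(x, full_out, crossover, full_in):
    """Ragin-style direct calibration: map a raw score to [0, 1] membership
    using three anchors (full non-membership, crossover, full membership)."""
    if x >= crossover:
        log_odds = 3.0 * (x - crossover) / (full_in - crossover)
    else:
        log_odds = 3.0 * (x - crossover) / (crossover - full_out)
    return 1.0 / (1.0 + math.exp(-log_odds))

def consistency(condition, outcome):
    """Degree to which condition membership is a subset of outcome membership:
    sum of min(X, Y) over sum of X."""
    return sum(min(x, y) for x, y in zip(condition, outcome)) / sum(condition)

# Hypothetical engagement scores calibrated with anchors 0.2 / 0.5 / 0.8.
raw = [0.25, 0.5, 0.65, 0.8, 0.9]
engaged = [calibrate(v, 0.2, 0.5, 0.8) for v in raw]
performed = [0.1, 0.5, 0.7, 0.9, 0.95]  # hypothetical outcome memberships
print("memberships:", [round(m, 2) for m in engaged])
print("consistency:", round(consistency(engaged, performed), 2))
```

With calibrated sets for each condition (clickstream, physiological, survey), fsQCA then searches for configurations of conditions whose consistency with the outcome exceeds a chosen threshold.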

research product

Multimodal data as a means to understand the learning experience

Most work on the design of learning technology uses click-streams as the primary data source for modelling and predicting learning behaviour. In this paper we set out to quantify what advantages, if any, physiological sensing techniques provide for the design of learning technologies. We conducted a lab study with 251 game sessions and 17 users focusing on skill development (i.e., users' ability to master complex tasks). We collected click-stream data, as well as eye-tracking, electroencephalography (EEG), video, and wristband data during the experiment. Our analysis shows that traditional click-stream models achieve a 39% error rate in predicting learning performance (and 18% when we perf…
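The comparison the abstract describes, click-stream-only versus click-stream-plus-physiological models, can be sketched as below. The data are synthetic and the feature names are assumptions; the point is only the shape of the comparison, not the study's numbers.

```python
# Illustrative comparison on synthetic data (an assumption, not the study's
# dataset): does adding physiological features to click-stream features
# reduce prediction error for a binary "task mastered" label?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 251  # same order as the paper's 251 game sessions

clicks = rng.normal(size=(n, 3))  # e.g., response times, error clicks (hypothetical)
physio = rng.normal(size=(n, 3))  # e.g., pupil size, EEG band power (hypothetical)
# Synthetic label influenced by both modalities (assumption, for demo only).
logit = 0.8 * clicks[:, 0] + 1.2 * physio[:, 0] + rng.normal(0, 0.5, n)
y = (logit > 0).astype(int)

results = {}
for name, X in [("click-stream only", clicks),
                ("click-stream + physiological", np.hstack([clicks, physio]))]:
    acc = cross_val_score(LogisticRegression(), X, y, cv=5).mean()
    results[name] = acc
    print(f"{name}: error rate {1 - acc:.2f}")
```

By construction the label here depends on a physiological signal, so the multimodal model wins; on real data, running exactly this comparison is how one would quantify whether the extra sensors pay off.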

research product