Exploring Gesture and Gaze Proxies to Communicate Instructor's Nonverbal Cues in Lecture Videos

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Teaching via lecture video has become the de facto standard for remote education, but videos make it difficult to interpret instructors' nonverbal references to the content. This is problematic, as nonverbal cues are essential for students to follow and understand a lecture. As a remedy, we explored different proxies representing instructors' pointing gestures and gaze to provide students with a point of reference in a lecture video: no proxy, gesture proxy, gaze proxy, alternating proxy, and concurrent proxies. In an online study with 100 students, we evaluated the proxies' effects on mental effort, cognitive load, learning performance, and user experience. Our results show that the proxies had no significant effect on learning-related measures and that the gesture and alternating proxies achieved the highest pragmatic quality. Furthermore, we found that alternating between proxies is a promising approach to providing students with information about an instructor's pointing and gaze position in a lecture video.

Original language: English
Title of host publication: CHI 2023 - Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems
Number of pages: 7
Publisher: Association for Computing Machinery, Inc.
Publication date: 2023
Article number: 113
ISBN (Electronic): 9781450394222
DOIs
Publication status: Published - 2023
Event: 2023 CHI Conference on Human Factors in Computing Systems, CHI 2023 - Hamburg, Germany
Duration: 23 Apr 2023 - 28 Apr 2023

Conference

Conference: 2023 CHI Conference on Human Factors in Computing Systems, CHI 2023
Country: Germany
City: Hamburg
Period: 23/04/2023 - 28/04/2023
Sponsors: ACM SIGCHI, Apple, Bloomberg, Google, NSF, Siemens

Bibliographical note

Publisher Copyright:
© 2023 Owner/Author.

Research areas

• Education, Eye-tracking, Gaze, Gesture, Lecture video
