Augmented reality views for occluded interaction
Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review
Standard
Augmented reality views for occluded interaction. / Lilija, Klemen; Pohl, Henning; Boring, Sebastian; Hornbæk, Kasper.
CHI 2019 - Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, 2019. 446.
RIS
TY - GEN
T1 - Augmented reality views for occluded interaction
AU - Lilija, Klemen
AU - Pohl, Henning
AU - Boring, Sebastian
AU - Hornbæk, Kasper
PY - 2019
Y1 - 2019
N2 - We rely on our sight when manipulating objects. When objects are occluded, manipulation becomes difficult. Such occluded objects can be shown via augmented reality to re-enable visual guidance. However, it is unclear how to do so to best support object manipulation. We compare four views of occluded objects and their effect on performance and satisfaction across a set of everyday manipulation tasks of varying complexity. The best performing views were a see-through view and a displaced 3D view. The former enabled participants to observe the manipulated object through the occluder, while the latter showed the 3D view of the manipulated object offset from the object’s real location. The worst performing view showed remote imagery from a simulated hand-mounted camera. Our results suggest that alignment of virtual objects with their real-world location is less important than an appropriate point-of-view and view stability.
KW - Augmented reality
KW - Finger-camera
KW - Manipulation task
UR - http://www.scopus.com/inward/record.url?scp=85067633165&partnerID=8YFLogxK
U2 - 10.1145/3290605.3300676
DO - 10.1145/3290605.3300676
M3 - Article in proceedings
AN - SCOPUS:85067633165
BT - CHI 2019 - Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems
PB - Association for Computing Machinery
T2 - 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019
Y2 - 4 May 2019 through 9 May 2019
ER -
ID: 251262041