OVRlap: Perceiving Multiple Locations Simultaneously to Improve Interaction in VR
Publication: Contribution to book/anthology/report › Conference contribution in proceedings › Research › peer-reviewed
Documents
- Full text
Publisher's published version, 5.09 MB, PDF document
We introduce OVRlap, a VR interaction technique that lets the user perceive multiple places at the same time from a first-person perspective. OVRlap achieves this by overlapping viewpoints. At any time, only one viewpoint is active, meaning that the user may interact with objects therein. Objects seen from the active viewpoint are opaque, whereas objects seen from passive viewpoints are transparent. This allows users to perceive multiple locations at once and easily switch to the one in which they want to interact. We compare OVRlap and a single-viewpoint technique in a study where 20 participants complete object-collection and monitoring tasks. We find that in both tasks, participants are significantly faster and move their head significantly less with OVRlap. We propose how the technique might be improved through automated switching of the active viewpoint and intelligent viewpoint rendering.
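The abstract's core rendering rule can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the class and function names (`Viewpoint`, `render_alphas`, `switch_active`) and the alpha values are assumptions; the paper only specifies that objects in the active viewpoint are opaque and objects in passive viewpoints are transparent.

```python
from dataclasses import dataclass

# Assumed opacity values (the paper does not specify exact alphas).
ACTIVE_ALPHA = 1.0
PASSIVE_ALPHA = 0.35

@dataclass
class Viewpoint:
    """One overlapped first-person viewpoint and the objects visible from it."""
    name: str
    objects: list

def render_alphas(viewpoints, active_index):
    """Return per-object alpha: opaque for the active viewpoint, transparent
    for passive ones, so all locations remain perceivable at once."""
    alphas = {}
    for i, vp in enumerate(viewpoints):
        alpha = ACTIVE_ALPHA if i == active_index else PASSIVE_ALPHA
        for obj in vp.objects:
            alphas[(vp.name, obj)] = alpha
    return alphas

def switch_active(viewpoints, active_index, target_name):
    """Switch which viewpoint is active; only the active viewpoint
    accepts interaction. Unknown names leave the active index unchanged."""
    for i, vp in enumerate(viewpoints):
        if vp.name == target_name:
            return i
    return active_index
```

Switching the active viewpoint then amounts to updating a single index and re-deriving the alphas, which is what lets the user glance at several locations and commit to one for interaction.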
Original language | English |
---|---|
Title | CHI 2022 - Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems |
Publisher | Association for Computing Machinery |
Publication date | 2022 |
Pages | 1-13 |
Article number | 355 |
ISBN (Electronic) | 9781450391573 |
DOI | |
Status | Published - 2022 |
Event | 2022 CHI Conference on Human Factors in Computing Systems, CHI 2022 - Virtual, Online, USA Duration: 30 Apr 2022 → 5 May 2022 |
Conference
Conference | 2022 CHI Conference on Human Factors in Computing Systems, CHI 2022 |
---|---|
Country | USA |
City | Virtual, Online |
Period | 30/04/2022 → 05/05/2022 |
Sponsor | ACM SIGCHI |
Bibliographic note
Funding Information:
This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant agreement 648785).
Publisher Copyright:
© 2022 ACM.
ID: 309121492