ARiana: Augmented Reality Based In-Situ Annotation of Assembly Videos
Publication: Contribution to journal › Journal article › peer-reviewed
Annotated videos are commonly produced to document assembly and maintenance processes in the manufacturing industry. However, according to a semi-structured interview we conducted with industrial experts, the current workflow for creating annotated assembly videos, in which an annotator annotates a video of an expert's demonstration of the assembly or maintenance process, is cumbersome and time-consuming. It suffers from three key problems: (1) unnecessary extra communication between field workers and annotators, (2) a lack of suitable camera gear, and (3) time wasted manually removing non-informative portions of the captured videos. Because annotation always follows video capture, problem 1 remains out of scope for state-of-the-art video annotation tools. Moreover, these tools assume a perfectly captured video, free of occlusions and containing only relevant assembly or maintenance information, which gives rise to problems 2 and 3. To address these challenges, we have developed ARiana, a wearable augmented reality-based in-situ video annotation tool that guides field experts to create annotations efficiently while they conduct assembly or maintenance tasks. ARiana has three key features: context awareness enabled by hand-object interaction, multimodal interaction for annotation on the fly, and real-time audiovisual guidance enabled by edge offloading. We have implemented ARiana on Android-based smart glasses equipped with a first-person camera and a microphone. In a usability test in which participants assembled a toy model while simultaneously annotating the recorded video, ARiana demonstrated higher efficiency and effectiveness than a state-of-the-art video annotation tool in which annotation follows the assembly process. In particular, ARiana helps users finish annotation tasks four times faster and increases annotation accuracy by 23%.
Status: Published - 2022
© 2013 IEEE.