Distributed usability evaluation: enabling large-scale usability evaluation with user-controlled instrumentation

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

Distributed usability evaluation : enabling large-scale usability evaluation with user-controlled instrumentation. / Christensen, Lars; Frøkjær, Erik.

Proceedings of the 6th Nordic Conference on Human-Computer Interaction: extending boundaries. Association for Computing Machinery, 2010. p. 118-127.

Harvard

Christensen, L & Frøkjær, E 2010, Distributed usability evaluation: enabling large-scale usability evaluation with user-controlled instrumentation. in Proceedings of the 6th Nordic Conference on Human-Computer Interaction: extending boundaries. Association for Computing Machinery, pp. 118-127, 6th Nordic Conference on Human-Computer Interaction, Reykjavik, Iceland, 16/10/2010. https://doi.org/10.1145/1868914.1868932

APA

Christensen, L., & Frøkjær, E. (2010). Distributed usability evaluation: enabling large-scale usability evaluation with user-controlled instrumentation. In Proceedings of the 6th Nordic Conference on Human-Computer Interaction: extending boundaries (pp. 118-127). Association for Computing Machinery. https://doi.org/10.1145/1868914.1868932

Vancouver

Christensen L, Frøkjær E. Distributed usability evaluation: enabling large-scale usability evaluation with user-controlled instrumentation. In Proceedings of the 6th Nordic Conference on Human-Computer Interaction: extending boundaries. Association for Computing Machinery. 2010. p. 118-127 https://doi.org/10.1145/1868914.1868932

Author

Christensen, Lars ; Frøkjær, Erik. / Distributed usability evaluation : enabling large-scale usability evaluation with user-controlled instrumentation. Proceedings of the 6th Nordic Conference on Human-Computer Interaction: extending boundaries. Association for Computing Machinery, 2010. pp. 118-127

Bibtex

@inproceedings{ee04de0e1fcf4eee8a2ee29925f3a388,
title = "Distributed usability evaluation: enabling large-scale usability evaluation with user-controlled instrumentation",
abstract = "We present DUE (Distributed Usability Evaluation), a technique for collecting and evaluating usability data. The DUE infrastructure involves a client-server network. A client-based tool resides on the workstation of each user, providing a screen video recording, microphone input of voice commentary, and a window for a severity rating. The idea is for the user to work naturalistically, clicking a button when a usability problem or point of uncertainty is encountered, to describe it verbally along with illustrating it on screen, and to rate its severity. These incidents are accumulated on a server, providing access to an evaluator (usability expert) and to product developers or managers who want to review the incidents and analyse them. DUE supports evaluation in the development stages from running prototypes and onwards. A case study of the use of DUE in a corporate environment is presented. The study indicates that the DUE technique is effective in terms of low bias, high efficiency, and clear communication of usability issues among users, evaluators and developers. Further, DUE is supporting long-term evaluations making possible empirical studies of learnability.",
keywords = "Faculty of Science, automation, beta test, case study, distributed, evaluator effect, instrumentation, learnability, remote, screen video, software industry, think-aloud, usability evaluation, voice commentary}",
author = "Lars Christensen and Erik Fr{\o}kj{\ae}r",
year = "2010",
doi = "10.1145/1868914.1868932",
language = "English",
isbn = "978-1-60558-934-3",
pages = "118--127",
booktitle = "Proceedings of the 6th Nordic Conference on Human-Computer Interaction",
publisher = "Association for Computing Machinery",
note = "6th Nordic Conference on Human-Computer Interaction : extending boundaries, NordiCHI 2010 ; Conference date: 16-10-2010 Through 20-10-2010",

}
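
The abstract above describes DUE's architecture only in outline: a client-side recorder that captures screen video, voice commentary, and a severity rating for each user-flagged incident, plus a server that accumulates the incidents for review by an evaluator. The minimal Python sketch below illustrates one plausible shape for such an incident record and server-side accumulator. All names, fields, and the severity scale are hypothetical; the paper does not publish its implementation.

from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List


@dataclass
class Incident:
    """One user-reported usability incident (hypothetical schema)."""
    user_id: str
    timestamp: datetime
    severity: int             # e.g. 1 (cosmetic) .. 4 (blocking); scale assumed
    commentary_audio: str     # path/URL to the recorded voice commentary
    screen_video: str         # path/URL to the screen-capture segment


class IncidentStore:
    """Server-side accumulator that an evaluator can query for review."""

    def __init__(self) -> None:
        self._incidents: List[Incident] = []

    def submit(self, incident: Incident) -> None:
        # In DUE, incidents arrive from many distributed clients;
        # here they are simply appended to an in-memory list.
        self._incidents.append(incident)

    def by_severity(self, minimum: int) -> List[Incident]:
        # Lets an evaluator triage the most severe incidents first.
        return [i for i in self._incidents if i.severity >= minimum]


if __name__ == "__main__":
    store = IncidentStore()
    store.submit(Incident(
        user_id="user-17",
        timestamp=datetime.now(timezone.utc),
        severity=3,
        commentary_audio="incidents/17/0001.ogg",
        screen_video="incidents/17/0001.webm",
    ))
    print(len(store.by_severity(2)), "incident(s) at severity >= 2")

Keeping the severity rating alongside pointers to the raw recordings is what would let an evaluator replay exactly what a user saw and said for each incident, which is the communication benefit the case study reports.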

RIS

TY - GEN

T1 - Distributed usability evaluation: enabling large-scale usability evaluation with user-controlled instrumentation

T2 - 6th Nordic Conference on Human-Computer Interaction

AU - Christensen, Lars

AU - Frøkjær, Erik

N1 - Conference code: 6

PY - 2010

Y1 - 2010

N2 - We present DUE (Distributed Usability Evaluation), a technique for collecting and evaluating usability data. The DUE infrastructure involves a client-server network. A client-based tool resides on the workstation of each user, providing a screen video recording, microphone input of voice commentary, and a window for a severity rating. The idea is for the user to work naturalistically, clicking a button when a usability problem or point of uncertainty is encountered, describing it verbally while illustrating it on screen, and rating its severity. These incidents are accumulated on a server, providing access to an evaluator (usability expert) and to product developers or managers who want to review the incidents and analyse them. DUE supports evaluation in the development stages from running prototypes onwards. A case study of the use of DUE in a corporate environment is presented. The study indicates that the DUE technique is effective in terms of low bias, high efficiency, and clear communication of usability issues among users, evaluators and developers. Further, DUE supports long-term evaluations, making empirical studies of learnability possible.

AB - We present DUE (Distributed Usability Evaluation), a technique for collecting and evaluating usability data. The DUE infrastructure involves a client-server network. A client-based tool resides on the workstation of each user, providing a screen video recording, microphone input of voice commentary, and a window for a severity rating. The idea is for the user to work naturalistically, clicking a button when a usability problem or point of uncertainty is encountered, describing it verbally while illustrating it on screen, and rating its severity. These incidents are accumulated on a server, providing access to an evaluator (usability expert) and to product developers or managers who want to review the incidents and analyse them. DUE supports evaluation in the development stages from running prototypes onwards. A case study of the use of DUE in a corporate environment is presented. The study indicates that the DUE technique is effective in terms of low bias, high efficiency, and clear communication of usability issues among users, evaluators and developers. Further, DUE supports long-term evaluations, making empirical studies of learnability possible.

KW - Faculty of Science

KW - automation

KW - beta test

KW - case study

KW - distributed

KW - evaluator effect

KW - instrumentation

KW - learnability

KW - remote

KW - screen video

KW - software industry

KW - think-aloud

KW - usability evaluation

KW - voice commentary

U2 - 10.1145/1868914.1868932

DO - 10.1145/1868914.1868932

M3 - Article in proceedings

SN - 978-1-60558-934-3

SP - 118

EP - 127

BT - Proceedings of the 6th Nordic Conference on Human-Computer Interaction: extending boundaries

PB - Association for Computing Machinery

Y2 - 16 October 2010 through 20 October 2010

ER -
