Providing dynamic visual information for collaborative tasks: Experiments with automatic camera control

Jeremy Birnholtz*, Abhishek Ranjan, Ravin Balakrishnan

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

One possibility presented by novel communication technologies is the ability for remotely located experts to provide guidance to others who are performing difficult technical tasks in the real world, such as medical procedures or engine repair. In these scenarios, video views and other visual information seem likely to be useful in the ongoing negotiation of shared understanding, or common ground, but actual results with experimental systems have been mixed. One difficulty in designing these systems is achieving a balance between close-up shots that allow for discussion of detail and wide shots that allow for orientation or establishing a mutual point of focus in a larger space, without disorienting or overloading task participants. In this article we present results from two experiments involving three automated camera control systems for remote repair tasks. Results show that a system providing both detailed and overview information outperformed systems providing only one or the other, although some participants preferred the detail-only system.

Original language: English (US)
Pages (from-to): 261-287
Number of pages: 27
Journal: Human-Computer Interaction
Volume: 25
Issue number: 3
DOIs
State: Published - Jul 1 2010

ASJC Scopus subject areas

  • Applied Psychology
  • Human-Computer Interaction

