Abstract
A natural way of communicating an audio concept is to imitate it with one's voice. This creates an approximation of the imagined sound (e.g., a particular owl's hoot), much like how a visual sketch approximates a visual concept (e.g., a drawing of the owl). If a machine could understand vocal imitations, users could communicate with software in this natural way, enabling new interactions (e.g., programming a music synthesizer by imitating the desired sound with one's voice). In this work, we collect thousands of crowd-sourced vocal imitations of a large set of diverse sounds, along with data on the crowd's ability to correctly label these vocal imitations. The resulting data set will help the research community understand which audio concepts can be effectively communicated with this approach. We have released the data set so the community can study the related issues and build systems that leverage vocal imitation as an interaction modality.
Original language | English (US) |
---|---|
Title of host publication | CHI 2015 - Proceedings of the 33rd Annual CHI Conference on Human Factors in Computing Systems |
Subtitle of host publication | Crossings |
Publisher | Association for Computing Machinery |
Pages | 43-46 |
Number of pages | 4 |
Volume | 2015-April |
ISBN (Electronic) | 9781450331456 |
DOIs | |
State | Published - Apr 18 2015 |
Event | 33rd Annual CHI Conference on Human Factors in Computing Systems, CHI 2015 - Seoul, Korea, Republic of; Duration: Apr 18 2015 → Apr 23 2015 |
Keywords
- Audio software
- Data set
- User interaction
- Vocal imitation
ASJC Scopus subject areas
- Software
- Human-Computer Interaction
- Computer Graphics and Computer-Aided Design
Datasets
- VocalSketch Data Set v1.0.4
Cartwright, M. (Contributor) & Pardo, B. A. (Contributor), ZENODO, 2015
DOI: 10.5281/zenodo.13862, https://zenodo.org/record/13862
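The data set is hosted on Zenodo at the record linked above. As a minimal sketch of fetching it programmatically (assuming the public Zenodo REST API endpoint `https://zenodo.org/api/records/{id}` and the third-party `requests` library; exact response field names may differ across Zenodo API versions):

```python
# Hypothetical sketch: download the files attached to Zenodo record 13862
# (the VocalSketch Data Set). Assumes the public Zenodo records API and that
# each file entry exposes a name under "key" and a download URL under
# "links" -> "self"; adjust field names if the API response differs.
import requests

RECORD_ID = "13862"  # from https://zenodo.org/record/13862

# Fetch the record's metadata as JSON.
record = requests.get(f"https://zenodo.org/api/records/{RECORD_ID}", timeout=30)
record.raise_for_status()

# Download each attached file into the current directory.
for entry in record.json().get("files", []):
    name = entry.get("key", "vocalsketch_file")
    url = entry["links"]["self"]
    print(f"Downloading {name} ...")
    payload = requests.get(url, timeout=300)
    payload.raise_for_status()
    with open(name, "wb") as out:
        out.write(payload.content)
```

Downloading manually from the record page linked above works just as well; the sketch is only a convenience for scripted workflows.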