I-SED: An Interactive sound event detector

Bongjun Kim, Bryan A Pardo

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

12 Scopus citations


Tagging of sound events is essential in many research areas. However, finding and labeling sound events within a long audio file is tedious and time-consuming. Building an automatic recognition system using machine learning techniques is often not feasible, because it requires a large number of human-labeled training examples and fine-tuning of the model for a specific application. Fully automated labeling is also not reliable enough for all uses. We present I-SED, an interactive sound event detection interface using a human-in-the-loop approach that lets a user reduce the time required to label audio that is too long (e.g., 20 hours) to label manually and has too few prior labeled examples (e.g., one) to train a state-of-the-art machine audio labeling system. We performed a human-subject study to validate its effectiveness, and the results showed that our tool helped participants label all target sound events within a recording twice as fast as labeling them manually.
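The abstract describes a human-in-the-loop workflow: starting from a single labeled seed example, the system proposes similar-sounding segments for the user to confirm or reject, so the human never has to scan the whole recording. The paper's actual interface, features, and similarity measure are not given here; the sketch below is an illustrative assumption using cosine similarity over precomputed per-segment feature vectors, with `rank_by_similarity` as a hypothetical helper name.

```python
# Hedged sketch of a human-in-the-loop labeling loop in the spirit of I-SED.
# Feature extraction, the similarity measure, and all names below are
# illustrative assumptions, not the paper's implementation.
import numpy as np

def rank_by_similarity(features, labeled):
    """Rank unlabeled segments by cosine similarity to the mean feature
    vector of positively labeled segments (initially one seed example)."""
    pos_idx = [i for i, lab in labeled.items() if lab == 1]
    query = features[pos_idx].mean(axis=0)
    sims = features @ query / (
        np.linalg.norm(features, axis=1) * np.linalg.norm(query) + 1e-9)
    order = np.argsort(-sims)  # most similar first
    return [int(i) for i in order if i not in labeled]

# Toy example: 6 "segments" with 3-d feature vectors; segment 0 is the seed.
rng = np.random.default_rng(0)
features = rng.normal(size=(6, 3))
labeled = {0: 1}  # the one human-labeled example

# One feedback round: present the most similar unlabeled segment to the
# user, record their yes/no answer, and the ranking is recomputed next round.
candidate = rank_by_similarity(features, labeled)[0]
labeled[candidate] = 1  # simulated user confirmation
print(sorted(labeled))
```

Each confirmation refines the query vector, so later rounds surface better candidates; this is the sense in which interactive labeling can be faster than a purely manual pass.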

Original language: English (US)
Title of host publication: IUI 2017 - Proceedings of the 22nd International Conference on Intelligent User Interfaces
Publisher: Association for Computing Machinery
Number of pages: 5
ISBN (Electronic): 9781450343480
State: Published - Mar 7 2017
Event: 22nd International Conference on Intelligent User Interfaces, IUI 2017 - Limassol, Cyprus
Duration: Mar 13 2017 - Mar 16 2017

Publication series

Name: International Conference on Intelligent User Interfaces, Proceedings IUI

Other: 22nd International Conference on Intelligent User Interfaces, IUI 2017


Keywords

  • Human-in-the-loop system
  • Interactive machine learning
  • Sound event detection

ASJC Scopus subject areas

  • Software
  • Human-Computer Interaction

