TY - GEN
T1 - SOURCE SEPARATION BY STEERING PRETRAINED MUSIC MODELS
AU - Manilow, Ethan
AU - O'Reilly, Patrick
AU - Seetharaman, Prem
AU - Pardo, Bryan
N1 - Publisher Copyright:
© 2022 IEEE
PY - 2022
Y1 - 2022
N2 - We showcase a method that repurposes deep models trained for music generation and music tagging for audio source separation, without any retraining. An audio generation model is conditioned on an input mixture, producing a latent encoding of the audio used to generate audio. This generated audio is fed to a pretrained music tagger that creates source labels. The cross-entropy loss between the tag distribution for the generated audio and a predefined distribution for an isolated source is used to guide gradient ascent in the (unchanging) latent space of the generative model. This system does not update the weights of the generative model or the tagger, and only relies on moving through the generative model's latent space to produce separated sources. We use OpenAI's JUKEBOX as the pretrained generative model, and we couple it with four kinds of pretrained music taggers (two architectures and two tagging datasets). Experimental results on two source separation datasets show this approach can produce separation estimates for a wider variety of sources than any tested system. This work points to the vast and heretofore untapped potential of large pretrained music models for audio-to-audio tasks like source separation.
KW - automatic music tagging
KW - generative music models
KW - gradient ascent
KW - music source separation
UR - http://www.scopus.com/inward/record.url?scp=85134018857&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85134018857&partnerID=8YFLogxK
U2 - 10.1109/ICASSP43922.2022.9747909
DO - 10.1109/ICASSP43922.2022.9747909
M3 - Conference contribution
AN - SCOPUS:85134018857
T3 - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
SP - 126
EP - 130
BT - 2022 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2022 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2022
Y2 - 22 May 2022 through 27 May 2022
ER -