Existing perceptual models of audio quality, such as PEAQ, perform poorly when applied to blind audio source separation (BASS). We propose to create a perceptual model designed specifically for BASS algorithms. To create this model, we have designed a study to capture subjective human assessments of signal distortions resulting from BASS. In this study, humans rate the similarity between pairs of sounds. The first sound in each pair is a reference sound. The second sound is a distorted version of the reference, extracted from a multi-source mixture by a current BASS approach. We then correlate human similarity assessments with machine-measurable parameters. This paper describes preliminary results from a pilot study of three participants. Results indicate a strong correlation between human similarity assessments and the relative fraction of frames for which at least one frequency band in the distorted signal contains a significant noise component (RDF).