Finding the best match in a database for a melodic query has long been of interest in the music information retrieval (IR) community. The string alignment paradigm works well for this task when comparing a monophonic query against a database of monophonic pieces. However, most tonal music is polyphonic, with multiple concurrent musical lines, and such pieces are not adequately represented as strings. Moreover, users often represent polyphonic pieces in their queries by skipping from one part (e.g., the soprano) to another (e.g., the bass), a situation that current string matching approaches are not designed to handle. This paper outlines approaches that extend string alignment to measure similarity between a monophonic query and a polyphonic piece. These approaches are compared using synthetic queries on a database of Bach pieces. Results indicate that when a monophonic query draws from multiple parts of the target, a method that explicitly takes the multi-part structure of a piece into account significantly outperforms one that does not.
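The monophonic string alignment baseline the abstract refers to can be sketched as classic dynamic-programming edit distance over note sequences. This is a minimal illustration only: the function name, pitch encoding, and cost values are assumptions for the sketch, not the paper's actual formulation or parameters.

```python
def align_cost(query, target, indel=1, subst=2):
    """Edit-distance alignment between two monophonic melodies,
    each given as a sequence of MIDI pitch numbers.
    Lower cost means a better melodic match."""
    m, n = len(query), len(target)
    # dp[i][j] = minimal cost of aligning query[:i] with target[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = i * indel          # delete all remaining query notes
    for j in range(1, n + 1):
        dp[0][j] = j * indel          # insert all remaining target notes
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if query[i - 1] == target[j - 1] else subst
            dp[i][j] = min(dp[i - 1][j - 1] + cost,  # match / substitute
                           dp[i - 1][j] + indel,     # deletion
                           dp[i][j - 1] + indel)     # insertion
    return dp[m][n]

# Identical melodies align at zero cost; one altered note costs `subst`.
print(align_cost([60, 62, 64], [60, 62, 64]))  # 0
print(align_cost([60, 62, 64], [60, 61, 64]))  # 2
```

A retrieval system would rank database pieces by this cost against the query; the paper's contribution concerns extending such alignment so the target may be polyphonic, which this monophonic sketch does not handle.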