In this work we propose an imputation method that leverages repeating structures in audio, a common element in music. Our approach is inspired by the REpeating Pattern Extraction Technique (REPET), a blind audio source separation algorithm designed to separate repeating 'background' elements from non-repeating 'foreground' elements. Here, as in REPET, we construct a model of the repeating structures by overlaying frames and calculating a median value for each time-frequency bin within the repeating period. Instead of using this model for separation, we show how it can be used to impute missing time-frequency values. The method requires no pre-training and can impute in scenarios where missing or corrupt frames span the entire audio spectrum. Human evaluation results show that this method produces higher-quality imputation than existing methods on signals with a high amount of repetition.
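The core idea above (overlay the repeating segments of a spectrogram, take a per-bin median, and fill missing bins from that median model) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the repeating period is already known in frames (REPET estimates it from the signal, e.g. via a beat spectrum) and marks missing time-frequency bins with NaN; the function name `impute_with_median_model` is hypothetical.

```python
import numpy as np

def impute_with_median_model(spec, period):
    """Impute NaN time-frequency bins in a magnitude spectrogram
    using a median model of its repeating structure.

    spec   : 2D array (freq_bins x time_frames); missing bins are NaN
    period : repeating period in frames (assumed known here)
    """
    n_freq, n_frames = spec.shape
    n_segments = int(np.ceil(n_frames / period))
    # Pad with NaN so the spectrogram divides into whole periods
    padded = np.full((n_freq, n_segments * period), np.nan)
    padded[:, :n_frames] = spec
    # Overlay repeating segments: shape (freq, period, segment)
    segments = padded.reshape(n_freq, n_segments, period).transpose(0, 2, 1)
    # Median across repetitions, ignoring missing (NaN) bins
    model = np.nanmedian(segments, axis=2)
    # Tile the model back to full length and fill only the missing bins
    tiled = np.tile(model, (1, n_segments))[:, :n_frames]
    out = spec.copy()
    mask = np.isnan(out)
    out[mask] = tiled[mask]
    return out
```

Because the median is taken only over the observed repetitions of each bin, a bin can be recovered even when an entire frame is corrupt, as long as other repetitions of that frame survive elsewhere in the signal.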