Traditional point-by-point image similarity metrics, such as the ℓ2-norm, are not always consistent with human perception, especially in textured regions. We consider the problem of identifying textures that are perceptually identical to a query texture; this is important for image retrieval, compression, and restoration applications. Recently proposed structural texture similarity (STSIM) metrics assign high similarity scores to such perceptually identical textured patches, even when the patches deviate substantially pixel by pixel. We use an STSIM approach that compares a set of statistical patch descriptors through a weighted distance; given a dataset of labeled texture images partitioned into classes of perceptually identical patches, we compute each weight as the variance of the corresponding statistic about its class mean. Experimental results demonstrate that the proposed approach outperforms existing structural similarity metrics, prior STSIMs, and traditional point-by-point metrics when assessing texture similarity in both noisy and noise-free conditions.
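To make the weighting scheme concrete, the following is a minimal sketch of a Mahalanobis-style weighted distance between statistical descriptors. It assumes one plausible reading of the abstract: each statistic's intra-class variance (computed about the class mean over the labeled dataset) normalizes that statistic's squared difference, so that statistics which vary widely within a class of perceptually identical patches contribute less to the distance. The function names and the exact form of the weighting are illustrative assumptions, not the paper's definitive formulation.

```python
import numpy as np


def class_variance_weights(features, labels):
    """Per-statistic intra-class variance, averaged over classes.

    features: (n_samples, n_stats) array of patch statistics.
    labels:   (n_samples,) array of class ids, where each class
              groups perceptually identical patches.
    """
    per_class_vars = []
    for c in np.unique(labels):
        X = features[labels == c]
        # Variance of each statistic about its own class mean.
        per_class_vars.append(X.var(axis=0))
    return np.mean(per_class_vars, axis=0)


def weighted_descriptor_distance(f1, f2, var_weights, eps=1e-12):
    """Mahalanobis-style distance between two descriptor vectors:
    squared differences normalized by intra-class variance
    (a hypothetical instantiation of the weighted distance)."""
    return np.sqrt(np.sum((f1 - f2) ** 2 / (var_weights + eps)))
```

In this reading, a high-variance statistic (one that fluctuates even among patches judged identical) is down-weighted, while a stable statistic dominates the comparison; the resulting distance can then be thresholded or ranked for retrieval.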