When judging the similarity of two stimuli, people's ratings often differ depending on the order in which the comparison is presented (A vs. B or B vs. A). Such directional asymmetries have typically been demonstrated using complex concepts with many semantic features, and the standard explanation is that different sets of features are emphasized depending on the direction of the comparison. In this study, we show that directional asymmetries in the similarity of simple perceptual stimuli can be predictably manipulated merely by presenting each member of a pair with a different frequency. Participants rated the similarity of color patches before and after performing an irrelevant training task in which a subset of colors was presented ten times more frequently than the others. Similarity ratings after training were significantly more asymmetric than ratings before training. We discuss the implications of these findings for models of similarity judgment and propose a computationally explicit explanation based on asymmetries in representational stability.