In this paper, we propose a new self-supervised learning method for competitive learning and self-organizing maps (SOMs). In this model, a network enhances its own state, and this enhanced state is then imitated by another state of the network. Specifically, we set up an enhanced state and a relaxed state, and the relaxed state imitates the enhanced state as closely as possible by minimizing the free energy. To demonstrate the effectiveness of this method, we apply information enhancement learning to the SOM. For this purpose, we introduce collectiveness, in which all neurons respond collectively to input patterns, into the enhanced state; this enhanced and collective state is then imitated by the non-enhanced, relaxed state. We applied the method to an artificial data set and to three data sets from the well-known machine learning database. Experimental results showed that the resulting U-matrices were very similar to those produced by the conventional SOM, while better performance was obtained in terms of quantization and topographic errors. These results suggest that self-supervised learning of this kind can be applied to many different neural network models.
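To make the enhanced/relaxed imitation idea concrete, the following is a minimal sketch, not the paper's actual formulation: it assumes competitive firing probabilities of a softmax form over distances to weight vectors, uses a sharpness parameter `beta` to distinguish an enhanced (sharp) state from a relaxed (collective) state, and uses the KL divergence between the two as a stand-in for the free energy that the relaxed state would minimize. The function names and the specific functional forms are illustrative assumptions.

```python
import numpy as np

def firing_probs(x, W, beta):
    """Competitive firing probabilities: p(j|x) proportional to
    exp(-beta * ||x - w_j||^2). A larger beta sharpens (enhances) the
    competition; a smaller beta gives a relaxed, more collective response.
    (Assumed form for illustration, not the paper's exact definition.)"""
    d = np.sum((W - x) ** 2, axis=1)      # squared distances to all units
    logits = -beta * (d - d.min())        # shift by min distance for stability
    p = np.exp(logits)
    return p / p.sum()

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q), used here as a proxy for the free-energy term the
    relaxed state minimizes while imitating the enhanced state."""
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 3))              # 10 competitive units, 3-D weights
x = rng.normal(size=3)                    # one input pattern

p_enh = firing_probs(x, W, beta=10.0)     # enhanced (sharp) state
p_rel = firing_probs(x, W, beta=1.0)      # relaxed (collective) state
gap = kl_divergence(p_enh, p_rel)         # imitation gap to be reduced
```

Under this sketch, learning would adjust the weights `W` so that `gap` shrinks, i.e., the relaxed, collective response comes to imitate the enhanced one; the enhanced state is always sharper (lower entropy) than the relaxed state for the same input.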