In this paper, a smoothing algorithm for training max-min neural networks is proposed. Specifically, we use a smooth function to approximate the max and min operators, applying the smoothing technique twice: once to eliminate the inner min operator and once to eliminate the outer max operator. Replacing the actual network output with its smooth approximation, we substitute the partial derivatives of the approximation function with respect to the weights for those of the actual network output. The smoothing algorithm is then constructed via the gradient descent method. This algorithm can also be used to solve fuzzy relational equations. Finally, two numerical examples are provided to show the effectiveness of our smoothing algorithm for training max-min neural networks.
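As an illustrative sketch (not the paper's exact formulation), the idea can be demonstrated with the common log-sum-exp smoothing of max, applied twice to a single-output max-min unit y = max_j min(w_j, x_j). The smoothing function, network size, and training data below are all assumptions made for illustration; the gradient of the smoothed output replaces that of the non-differentiable network, exactly as the abstract describes.

```python
import numpy as np

def smooth_max(v, beta=20.0):
    """Log-sum-exp smoothing of max: tends to max(v) as beta -> infinity."""
    m = np.max(v)  # shift for numerical stability
    return m + np.log(np.exp(beta * (v - m)).sum()) / beta

def smooth_min(v, beta=20.0):
    """Smoothed min via the identity min(v) = -max(-v)."""
    return -smooth_max(-v, beta)

def softmax(v, beta=20.0):
    """Gradient of smooth_max with respect to v."""
    e = np.exp(beta * (v - np.max(v)))
    return e / e.sum()

def net_output(w, x, beta=20.0):
    """Smoothed max-min composition: approximates max_j min(w_j, x_j)."""
    inner = np.array([smooth_min(np.array([wj, xj]), beta)
                      for wj, xj in zip(w, x)])
    return smooth_max(inner, beta)

def grad_w(w, x, beta=20.0):
    """Partial derivatives of the smoothed output w.r.t. each weight w_j."""
    inner = np.array([smooth_min(np.array([wj, xj]), beta)
                      for wj, xj in zip(w, x)])
    outer = softmax(inner, beta)  # chain-rule factor from the outer smooth max
    # chain-rule factor from each inner smooth min (weight at index 0):
    inner_g = np.array([softmax(-np.array([wj, xj]), beta)[0]
                        for wj, xj in zip(w, x)])
    return outer * inner_g

def train(w, samples, lr=0.1, epochs=2000, beta=20.0):
    """Gradient descent on squared error using the smoothed derivatives."""
    for _ in range(epochs):
        for x, t in samples:
            err = net_output(w, x, beta) - t
            w = np.clip(w - lr * err * grad_w(w, x, beta), 0.0, 1.0)
    return w
```

Because the smoothed output is differentiable everywhere, ordinary gradient descent applies where the true max-min network has flat or non-differentiable regions; the parameter beta trades approximation accuracy against gradient smoothness.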