Machine learning applications nowadays often deal with large and/or distributed datasets. In this context, distributed learning is a promising line of research for handling both situations, since large datasets can be partitioned across several locations. Moreover, the current trend away from faster processors and toward multi-core processors and computer clusters creates a favorable setting for distributed learning. Nevertheless, only a few distributed learning algorithms have been proposed in the literature so far. One of them is DEvoNet, which combines artificial neural networks with genetic algorithms. DEvoNet performs well on many datasets, but several limitations have been pointed out in connection with its poor performance on nonuniform class-probability distributions of data. This paper presents an improvement of DEvoNet based on distributing the computation of the genetic algorithm. The experimental results show a notable improvement in the performance of DEvoNet on both uniform and nonuniform class-probability distributions of data.