We construct a geometrical perspective to explain the slow and fast learning periods observed during training. We plot the error surfaces and the solution region in the input space for a single neuron with two inputs. We study various training paths in this space when running the back-propagation (BP) learning algorithm, and we display the relation between the learning curve and the training path. We apply this study to operate the momentum method correctly and efficiently so as to accelerate training.
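As a minimal sketch of the setting described above, the following Python snippet trains a single sigmoid neuron with two inputs by per-sample BP with a momentum term. The task (the AND function), the learning rate, the momentum coefficient, and the squared-error loss are illustrative assumptions, not the paper's exact experimental setup.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(samples, lr=0.2, beta=0.9, epochs=5000):
    # Weights for the two inputs plus a bias; v holds the momentum velocity.
    w = [0.0, 0.0, 0.0]
    v = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for (x1, x2), t in samples:
            xs = (x1, x2, 1.0)                 # bias input fixed at 1
            y = sigmoid(sum(wi * xi for wi, xi in zip(w, xs)))
            delta = (y - t) * y * (1.0 - y)    # dE/dnet for squared error
            for i, xi in enumerate(xs):
                # Momentum update: new velocity mixes the previous step
                # with the current negative gradient, then moves the weight.
                v[i] = beta * v[i] - lr * delta * xi
                w[i] += v[i]
    return w

# AND function: a simple linearly separable task for the two-input neuron.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train(data)
pred = [round(sigmoid(w[0] * a + w[1] * b + w[2])) for (a, b), _ in data]
print(pred)
```

Tracing the sequence of weight vectors `w` during such a run gives the training path whose relation to the learning curve the paper studies.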