Training algorithms designed to work under imprecise conditions are proposed. They require only the algebraic sign of the error function, or of its gradient, to be correct, and, depending on how they update the weights, they are analyzed as composite nonlinear Successive Overrelaxation (SOR) methods or composite nonlinear Jacobi methods applied to the gradient of the error function. The local convergence behavior of the proposed algorithms is also studied. The proposed approach seems practically useful when training is affected by technology imperfections, limited precision in operations and data, hardware component variations, and environmental changes that cause unpredictable deviations of parameter values from the designed configuration; under such conditions it may be difficult or impossible to obtain very precise values of the error function and its gradient during training.
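To make the distinction between the two update schemes concrete, the following is a minimal sketch of sign-based weight updates, not the paper's exact algorithms: the Jacobi-style step changes all weights simultaneously using gradient signs evaluated at the current iterate, while the SOR/Gauss-Seidel-style step updates one weight at a time, re-evaluating the sign at the partially updated iterate. The function names, the fixed step size `lr` (playing the role of a relaxation factor), and the toy quadratic error are assumptions made for illustration.

```python
import numpy as np

def sign_based_jacobi_step(w, grad_fn, lr=0.1):
    # Jacobi-style sweep: every weight is updated at once, using only
    # the sign of the gradient evaluated at the current iterate w.
    g = grad_fn(w)
    return w - lr * np.sign(g)

def sign_based_sor_step(w, grad_fn, lr=0.1):
    # SOR/Gauss-Seidel-style sweep: weights are updated one at a time;
    # each gradient sign is taken at the partially updated iterate.
    w = w.copy()
    for i in range(len(w)):
        g = grad_fn(w)
        w[i] -= lr * np.sign(g[i])
    return w

# Toy quadratic error E(w) = ||w||^2 / 2, so grad E(w) = w.
grad = lambda w: w

w = np.array([1.0, -2.0, 3.0])
for _ in range(30):
    w = sign_based_jacobi_step(w, grad, lr=0.1)
# Because only the sign is used, the iterates approach the minimizer
# to within the step size lr rather than converging exactly.
```

Note that only the sign of each gradient component enters the update, so the iteration is insensitive to errors in the gradient's magnitude, which is precisely the imprecision the proposed algorithms are meant to tolerate.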