We have used supervised machine learning to apply microtiming to music specified only in terms of quantized note times for a variety of percussion instruments. The output of each regression scheme we tried is simply the microtiming deviation to apply to each note. In particular, we trained Locally Weighted Linear Regression / K-Nearest Neighbors (LWLR/KNN), Kernel Ridge Regression (KRR), and Gaussian Process Regression (GPR) on data from skilled human performances of a variety of Brazilian rhythms. Although our results are still far from the dream of inputting an arbitrary score and having the result sound as if expert human performers played it in the appropriate musical style, we believe we are on the right track. Evaluating our results with cross-validation, we found that the three methods perform comparably, and in all cases the mean squared error is substantially less than the mean squared microtiming of the original data. Subjectively, our results are satisfactory; the applied microtiming captures some element of musical style and sounds much more expressive than the quantized input.
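The evaluation described above can be sketched in code. The following is a minimal illustration, not the authors' implementation: it uses synthetic stand-in data (a hypothetical per-position timing offset plus noise, with one-hot metric-position features) in place of the real Brazilian-drumming recordings, trains one of the three methods named above (Kernel Ridge Regression, here via scikit-learn), and compares its cross-validated mean squared error against the baseline of the mean squared microtiming of the data itself, i.e. always predicting zero deviation.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical features: each note's metric position within a bar of 16
# sixteenth-note slots, one-hot encoded. A real system would extract
# richer features from the quantized score (instrument, phrase position).
n_notes = 400
positions = rng.integers(0, 16, size=n_notes)
X = np.eye(16)[positions]

# Synthetic "performed" microtiming: a systematic per-position offset
# (in fractions of a sixteenth note) plus per-note performance noise.
# This is invented data standing in for the human performance corpus.
true_offset = 0.08 * np.sin(2 * np.pi * positions / 16)
y = true_offset + rng.normal(0.0, 0.02, size=n_notes)

# Kernel Ridge Regression, one of the three regression schemes compared.
model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.5)

# Cross-validated mean squared error of the predicted deviations ...
mse = -cross_val_score(
    model, X, y, cv=5, scoring="neg_mean_squared_error"
).mean()

# ... versus the mean squared microtiming of the data itself, which is
# the error a deadpan (zero-deviation) prediction would incur.
baseline = np.mean(y ** 2)
print(f"cross-validated MSE: {mse:.5f}")
print(f"mean squared microtiming (baseline): {baseline:.5f}")
```

On this synthetic data the learned model's cross-validated error falls well below the baseline, mirroring the comparison reported in the abstract; the absolute numbers are meaningless since the data is invented.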