In this paper we study the problem of minimizing the average of a large number (n) of smooth convex loss functions. We propose a new method, S2GD (Semi-Stochastic Gradient Descent), which runs for one or several epochs, in each of which a single full gradient and a random number of stochastic gradients are computed, the number of stochastic gradients following a geometric law. The total work needed for the method to output an ε-accurate solution in expectation, measured in passes over the data (equivalently, in units of a single gradient of the empirical loss), is O((1 + κ/n) log(1/ε)), where κ is the condition number. This is achieved by running the method for O(log(1/ε)) epochs, with a single full gradient evaluation and O(κ) stochastic gradient evaluations in each. The SVRG method of Johnson and Zhang (2013) arises as a special case. If our method is limited to a single epoch only, it needs to evaluate at most O((κ/ε) log(1/ε)) stochastic gradients. In contrast, SVRG requires O(κ/ε) stochastic gradients. To illustrate our theoretical results: S2GD needs a workload equivalent to only about 2.1 full gradient evaluations to find a 10^-6-accurate solution for a problem with n = 10^9 and κ = 10^3.
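
The epoch structure described above translates directly into code: one full gradient per epoch, followed by a geometrically distributed number of variance-reduced stochastic steps. Below is a minimal Python sketch of this scheme under our own assumptions, not the implementation used in the paper; the names s2gd, full_grad, and comp_grad, the ridge-regression test problem, and all parameter values are illustrative choices.

    import numpy as np

    def s2gd(full_grad, comp_grad, x0, n, epochs, m, h, nu=0.0, rng=None):
        """Sketch of Semi-Stochastic Gradient Descent.

        full_grad(x)    -- gradient of the average loss at x
        comp_grad(i, x) -- gradient of the i-th component loss at x
        n, epochs       -- number of components, number of outer epochs
        m, h, nu        -- cap on stochastic steps per epoch, stepsize,
                           lower bound on the strong convexity parameter
        """
        rng = np.random.default_rng() if rng is None else rng
        x = x0.copy()
        # Geometric law over the epoch length t in {1, ..., m}:
        # P(t) proportional to (1 - nu*h)^(m - t).
        probs = (1.0 - nu * h) ** (m - np.arange(1, m + 1))
        probs /= probs.sum()
        for _ in range(epochs):
            g = full_grad(x)                      # one full gradient per epoch
            t = rng.choice(np.arange(1, m + 1), p=probs)
            y = x.copy()
            for _ in range(t):                    # random number of inner steps
                i = rng.integers(n)
                # variance-reduced stochastic gradient step
                y -= h * (g + comp_grad(i, y) - comp_grad(i, x))
            x = y
        return x

    # Illustrative problem (an assumption, not from the paper):
    # ridge-regularized least squares, f_i(w) = 0.5*(a_i@w - b_i)^2 + 0.5*lam*||w||^2.
    rng = np.random.default_rng(0)
    n, d, lam = 200, 10, 0.1
    A = rng.standard_normal((n, d))
    b = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)

    full_grad = lambda w: A.T @ (A @ w - b) / n + lam * w
    comp_grad = lambda i, w: A[i] * (A[i] @ w - b[i]) + lam * w

    w = s2gd(full_grad, comp_grad, np.zeros(d), n,
             epochs=30, m=2 * n, h=0.02, nu=lam, rng=rng)
    print("gradient norm at output:", np.linalg.norm(full_grad(w)))

Setting nu = 0 recovers a uniform distribution over the epoch length and corresponds to the SVRG special case mentioned above; a positive lower bound nu on the strong convexity parameter biases the geometric law toward longer epochs.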