Few studies on speaker verification have directly used a deep neural network (DNN) as a classifier. Applying a DNN as a discriminative model to speaker-verification tasks is difficult because the training data available for each speaker are very limited. The b-vector was proposed to address this problem; however, the DNN with b-vectors performed worse than the conventional i-vector probabilistic linear-discriminant analysis (PLDA) system. In this paper, we propose an improved version of the b-vector DNN system that incorporates background speakers' information into the DNN. In this study, each input feature is paired with the feature vectors of representative background speakers, and a b-vector is extracted from each pair, thus feeding background information into the DNN. Experiments on the National Institute of Standards and Technology (NIST) 2008 Speaker-Recognition Evaluation tests confirm that the performance improvements of the proposed system compensate for the shortcomings of conventional b-vectors.
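The pairing step described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the `b_vector` construction (element-wise binary comparisons between two feature vectors) and the dimensionality are assumptions chosen only to show how one input vector is paired with each background speaker's representative vector to produce one b-vector per pair.

```python
import numpy as np

def b_vector(x, y):
    # Hypothetical b-vector: element-wise binary comparisons of two
    # feature vectors (illustrative construction, not the paper's exact one).
    return np.concatenate([
        (x > y).astype(np.float32),                   # element-wise dominance
        (x * y > 0).astype(np.float32),               # sign agreement
        (np.abs(x) > np.abs(y)).astype(np.float32),   # magnitude dominance
    ])

def paired_inputs(x, background):
    # Pair one input feature vector with each background speaker's
    # representative vector; one b-vector per pair becomes a DNN input row.
    return np.stack([b_vector(x, b) for b in background])

rng = np.random.default_rng(0)
x = rng.standard_normal(8)           # test-utterance feature (e.g., an i-vector)
bg = rng.standard_normal((5, 8))     # representatives of 5 background speakers
B = paired_inputs(x, bg)             # shape (5, 24): one binary row per pair
```

Each row of `B` carries the relation between the input and one background speaker, so the DNN sees the input in the context of the background population rather than in isolation.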