Classifier ensembles, in which multiple predictive models are combined to produce predictions for new cases, generally perform better than a single classifier. Most existing methods construct static ensembles, in which one collection of classifiers is used for all test cases. Recently, some researchers have proposed dynamic ensemble construction algorithms, which select an ensemble from a large pool of classifiers specifically for each test point. Ensemble performance is generally seen as having two factors: the accuracy of the individual classifiers and the diversity of the ensemble. In this study we employ heuristic optimization to examine the role of a third factor: the confidence of each classifier's prediction on the specific data point. We experiment with genetic algorithms and various hill-climbing algorithms, in both single- and multi-objective settings, to choose locally optimal sets of 25 classifiers from a large pool to classify each new example. We focus on dynamic ensemble construction, analyzing how diversity, accuracy and confidence interact with each other and how they affect the performance of the ensemble on new examples.
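The per-point selection described above can be illustrated with a minimal sketch. This is not the paper's implementation: the pool size, the random per-classifier accuracy and confidence scores, and the additive fitness function are all placeholder assumptions; a real system would score diversity as well and use a trained validation set. It shows only the generic idea of hill climbing over fixed-size 25-classifier subsets for one test point.

```python
import random

POOL_SIZE = 200     # hypothetical size of the classifier pool
ENSEMBLE_SIZE = 25  # fixed ensemble size used in the study

# Placeholder per-classifier scores for a single test point:
# validation accuracy and prediction confidence on that point.
random.seed(0)
accuracy = [random.random() for _ in range(POOL_SIZE)]
confidence = [random.random() for _ in range(POOL_SIZE)]

def fitness(ensemble):
    """Toy single-objective fitness: mean accuracy plus mean confidence.
    A fuller version would also include a diversity term."""
    acc = sum(accuracy[i] for i in ensemble) / len(ensemble)
    conf = sum(confidence[i] for i in ensemble) / len(ensemble)
    return acc + conf

def hill_climb(steps=500):
    """Hill climbing over 25-classifier subsets: repeatedly swap one
    member for one non-member and keep the swap if fitness improves."""
    current = set(random.sample(range(POOL_SIZE), ENSEMBLE_SIZE))
    best = fitness(current)
    for _ in range(steps):
        out = random.choice(sorted(current))
        inn = random.choice([i for i in range(POOL_SIZE) if i not in current])
        candidate = (current - {out}) | {inn}
        score = fitness(candidate)
        if score > best:
            current, best = candidate, score
    return current, best

ensemble, score = hill_climb()
print(len(ensemble))  # locally optimal subset of 25 classifiers
```

In a dynamic setting this search would be rerun (with confidence recomputed) for every new test example, which is what distinguishes it from a static ensemble chosen once for all cases.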