Extensive research on time series classification in the last decade has produced fast and accurate algorithms for the single-dimensional case. However, the increasing prevalence of inexpensive sensors has reinforced the need for algorithms to handle multi-dimensional time series. For example, modern smartphones have at least a dozen sensors capable of producing streaming time series, and hospital-based (and increasingly, home-based) medical devices can produce time series streams from more than twenty sensors. The two most common ways to generalize from single to multi-dimensional data are to use all of the streams, or to use only the single best stream as determined at training time. However, as we show here, both approaches can be very brittle. Moreover, neither approach exploits the observation that different sensors may be considered "experts" on different classes. In this work, we introduce a novel framework for multi-dimensional time series classification that weights the class prediction from each time series stream. These weights are based not only on each stream's previous track record on the class it is currently predicting, but also on its distance to the unlabeled object. As we demonstrate with extensive experiments on real data, our method is more accurate than current approaches and particularly robust in the face of concept drift or sensor noise.
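The weighting idea above can be sketched in a few lines of code. This is a minimal illustrative sketch, not the paper's actual algorithm: it assumes a 1-nearest-neighbor classifier per stream under Euclidean distance, and the names `track_record` (a per-stream, per-class accuracy estimate standing in for each stream's "track record") and the `1 / (1 + distance)` damping term are hypothetical choices made here for concreteness.

```python
import math


def nn_predict(query, train):
    """1-NN under Euclidean distance for a single stream.

    train is a list of (series, label) pairs; returns the nearest
    neighbor's label and its distance to the query series.
    """
    best_label, best_dist = None, math.inf
    for series, label in train:
        d = math.dist(series, query)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label, best_dist


def weighted_vote(query_streams, train_per_stream, track_record):
    """Combine per-stream class predictions into one label.

    track_record[s][c] is a hypothetical estimate (e.g. measured on
    training data) of how accurate stream s is when it predicts class c.
    Each stream's vote is scaled by that estimate and damped by its
    nearest-neighbor distance, so distant, unreliable matches count less.
    """
    votes = {}
    for s, stream in enumerate(query_streams):
        label, dist = nn_predict(stream, train_per_stream[s])
        weight = track_record[s].get(label, 0.0) / (1.0 + dist)
        votes[label] = votes.get(label, 0.0) + weight
    return max(votes, key=votes.get)


# Toy example: stream 0 is an "expert" on class A, stream 1 on class B.
train_per_stream = [
    [([0.0, 0.0], "A"), ([5.0, 5.0], "B")],  # training data, stream 0
    [([1.0, 1.0], "A"), ([9.0, 9.0], "B")],  # training data, stream 1
]
track_record = [{"A": 0.9, "B": 0.2}, {"A": 0.3, "B": 0.8}]
query = [[0.1, 0.1], [8.0, 8.0]]  # one unlabeled object, two streams
print(weighted_vote(query, train_per_stream, track_record))  # → A
```

In the example the two streams disagree (stream 0 votes A, stream 1 votes B), but stream 0's vote carries more weight because it is both a close match and a reliable predictor of the class it is voting for, so the combined prediction is "A".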