Active machine learning algorithms are used when large numbers of unlabeled examples are available and getting labels for them is costly (e.g., labeling requires consulting a human expert). Many conventional active learning algorithms focus on refining the decision boundary, at the expense of exploring new regions that the current hypothesis misclassifies. We …
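As an illustration of the boundary-refining strategy such conventional methods use (a generic sketch, not the algorithm proposed here; the function and variable names are ours), pool-based uncertainty sampling queries the unlabeled points closest to the current decision surface:

import numpy as np
from sklearn.linear_model import LogisticRegression

def uncertainty_sampling(X_pool, X_labeled, y_labeled, n_queries=10):
    # Fit the current hypothesis on the labeled seed set (assumed to contain
    # both classes), then rank pool points by how close their predicted
    # probability is to 0.5, i.e. by proximity to the decision boundary.
    model = LogisticRegression().fit(X_labeled, y_labeled)
    margin = np.abs(model.predict_proba(X_pool)[:, 1] - 0.5)
    return np.argsort(margin)[:n_queries]  # indices of the most ambiguous points

A query rule like this concentrates labeling effort near the boundary, which is exactly the behavior the abstract contrasts with exploring regions the current hypothesis misclassifies.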
This paper is concerned with model reduction for complex Markov chain models. The Kullback–Leibler divergence rate is employed as a metric to measure the difference between the Markov model and its approximation. For a certain relaxation of the bi-partition model reduction problem, the solution is shown to be characterized by an associated eigenvalue …
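For reference, the Kullback–Leibler divergence rate used as the reduction metric can be written in standard notation (ours, not necessarily the paper's) for an ergodic chain with transition matrix P and stationary distribution π and an approximating chain Q on the same state space as
\[
R(P \,\|\, Q) \;=\; \sum_{i} \pi_i \sum_{j} P_{ij} \,\log \frac{P_{ij}}{Q_{ij}} .
\]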
Significant changes in the instance distribution or associated cost function of a learning problem require one to reoptimize a previously learned classifier to work under new conditions. We study the problem of reoptimizing a multi-class classifier based on its ROC hypersurface and a matrix describing the costs of each type of prediction error. For a binary …
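A hedged sketch of the binary special case (the names and the scikit-learn-style ROC inputs are our own assumptions, not the paper's notation): given an ROC curve and new costs and class prior, reoptimization amounts to picking the operating point with minimum expected cost.

import numpy as np

def reoptimize_threshold(fpr, tpr, thresholds, p_pos, cost_fn, cost_fp):
    # Expected cost at each ROC point: false negatives weighted by the positive
    # prior and false-negative cost, false positives by the complement.
    expected_cost = p_pos * cost_fn * (1.0 - tpr) + (1.0 - p_pos) * cost_fp * fpr
    return thresholds[np.argmin(expected_cost)]  # cheapest operating point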
In active learning, where a learning algorithm has to purchase the labels of its training examples, it is often assumed that there is only one labeler available to label examples, and that this labeler is noise-free. In reality, it is possible that there are multiple labelers available (such as human labelers in the online annotation tool Amazon Mechanical Turk) …
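To make the multiple-noisy-labeler setting concrete (a toy sketch under our own assumptions, not the algorithm studied in the paper): each labeler flips the true binary label with its own error rate, and a naive aggregation strategy simply majority-votes the purchased labels.

import random

def noisy_label(true_label, error_rate):
    # A labeler returns the wrong binary label with probability error_rate.
    return 1 - true_label if random.random() < error_rate else true_label

def majority_vote(true_label, labeler_error_rates):
    # Buy one label from each labeler and take the majority as the estimate.
    votes = [noisy_label(true_label, e) for e in labeler_error_rates]
    return int(sum(votes) > len(votes) / 2)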
We explore the problem of budgeted machine learning, in which the learning algorithm has free access to the training examples' labels but has to pay for each attribute that is specified. This learning model is appropriate in many areas, including medical applications. We present new algorithms for choosing which attributes to purchase of which examples in …
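A minimal sketch of the budgeted setting (illustrative only; the random policy below is a baseline of our own, not one of the paper's algorithms): labels are free, each attribute value costs one unit, and the learner decides which cells of the example-by-attribute matrix to reveal.

import numpy as np

def random_purchase_policy(n_examples, n_attributes, budget, rng=None):
    # Reveal `budget` uniformly random (example, attribute) cells and return a
    # boolean mask of which attribute values are now known to the learner.
    rng = rng or np.random.default_rng()
    mask = np.zeros((n_examples, n_attributes), dtype=bool)
    cells = rng.choice(n_examples * n_attributes, size=budget, replace=False)
    mask[np.unravel_index(cells, mask.shape)] = True
    return mask

Smarter policies would weigh the expected benefit of each individual purchase, which is the kind of choice the algorithms described here address.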
This paper addresses model reduction for a Markov chain on a large state space. A simulation-based framework is introduced to perform state aggregation of the Markov chain based on observations of a single sample path. The Kullback–Leibler (K-L) divergence rate is employed as a metric to measure the distance between two stationary Markov chains. Model …
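One ingredient such a simulation-based framework needs, sketched here under our own naming (the aggregation step itself is not shown), is an empirical estimate of the transition matrix from the single observed sample path:

import numpy as np

def estimate_transition_matrix(path, n_states):
    # Count observed transitions along the sample path, then normalize each
    # row; rows for states never visited are left as all zeros.
    counts = np.zeros((n_states, n_states))
    for s, s_next in zip(path[:-1], path[1:]):
        counts[s, s_next] += 1.0
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)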
This paper is concerned with modeling, analysis, and optimization/control of occupancy evolution in a large building. The main concern is efficient evacuation of a building in the event of a threat or emergency. Complexity arises from the curse of dimensionality in a large building, as well as the uncertain and nonlinear dynamics of individuals. In this …