Corpus ID: 44078499

Active and Adaptive Sequential Learning

@article{Bu2018ActiveAA,
  title={Active and Adaptive Sequential Learning},
  author={Yuheng Bu and Jiaxun Lu and Venugopal V. Veeravalli},
  journal={ArXiv},
  year={2018},
  volume={abs/1805.11710}
}
A framework is introduced for actively and adaptively solving a sequence of machine learning problems that change in a bounded manner from one time step to the next. An algorithm is developed that actively queries the labels of the most informative samples from an unlabeled data pool, and that adapts to the change by utilizing the information acquired in the previous steps. Our analysis shows that the proposed active learning algorithm based on stochastic gradient descent achieves a near…
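The loop structure the abstract describes can be illustrated concretely. Below is a minimal Python sketch, not the authors' algorithm: it assumes a logistic model, a hypothetical oracle(t, x) labeling function, an uncertainty-based query rule (scores nearest 0.5), and warm-starting each step's SGD from the previous solution; the step size and query budget are arbitrary illustrative choices.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def query_most_uncertain(w, pool, budget):
    # "Most informative" here means logistic score closest to 0.5
    # (an assumed criterion, not necessarily the paper's).
    scores = sigmoid(pool @ w)
    return np.argsort(np.abs(scores - 0.5))[:budget]

def sgd_step(w, x, y, lr=0.1):
    # One stochastic gradient step on the logistic loss for sample (x, y).
    return w - lr * (sigmoid(x @ w) - y) * x

def active_adaptive_sequence(pools, oracle, budget=10, dim=5):
    # pools: one unlabeled pool (n x dim array) per time step.
    # oracle(t, x): hypothetical labeling oracle, returns 0 or 1.
    w = np.zeros(dim)  # adaptation: warm-start each step from the last solution
    for t, pool in enumerate(pools):
        for i in query_most_uncertain(w, pool, budget):
            w = sgd_step(w, pool[i], oracle(t, pool[i]))
    return w
```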
Active and Adaptive Sequential Learning with Per Time-step Excess Risk Guarantees
TLDR
An active and adaptive learning framework is proposed, in which an active querying algorithm queries the labels of the most informative samples from an unlabeled data pool and adapts to the change by utilizing the information acquired in previous steps, so as to satisfy a pre-specified bound on the excess risk at each time step.
Model Change Detection with Application to Machine Learning
  • Yuheng Bu, Jiaxun Lu, V. Veeravalli
  • Computer Science, Mathematics
  • ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
  • 2019
TLDR
An empirical difference test (EDT) is constructed that approximates the generalized likelihood ratio test (GLRT) with low computational complexity, along with an approximation method for setting the threshold of the EDT to meet the false alarm constraint.
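For intuition, here is a hedged sketch of an empirical difference test in this spirit: declare a model change when the empirical risk of the old parameter exceeds that of a freshly refit parameter by a threshold. The linear model, squared loss, and least-squares refit are assumptions for illustration, not the paper's exact statistic or threshold-setting method.

```python
import numpy as np

def empirical_risk(theta, X, y):
    # Average squared error of a linear model on the batch from the new step.
    return np.mean((X @ theta - y) ** 2)

def edt(theta_old, X, y, threshold):
    # Refit on the new batch, then compare empirical risks: a large gap
    # between the old and refit models suggests the model has changed.
    theta_new, *_ = np.linalg.lstsq(X, y, rcond=None)
    stat = empirical_risk(theta_old, X, y) - empirical_risk(theta_new, X, y)
    return stat > threshold, stat
```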
A Constructivist Approach and Tool for Autonomous Agent Bottom-up Sequential Learning
During the initial phase of cognitive development, infants exhibit amazing abilities to generate novel behaviors in unfamiliar situations, and explore actively to learn the best while lacking…

References

Showing 1-10 of 16 references
Agnostic active learning
TLDR
The first active learning algorithm that works in the presence of arbitrary forms of noise is stated and analyzed, and it is shown that A2 achieves an exponential improvement over the usual sample complexity of supervised learning.
Convergence Rates of Active Learning for Maximum Likelihood Estimation
TLDR
This paper provides an upper bound on the label requirement of the algorithm, along with a lower bound that matches it up to lower-order terms, and shows that, unlike binary classification in the realizable case, just a single extra round of interaction is sufficient to achieve near-optimal performance in maximum likelihood estimation.
A Survey on Transfer Learning
TLDR
The relationship between transfer learning and other related machine learning techniques, such as domain adaptation, multitask learning, sample selection bias, and covariate shift, is discussed.
Competing with the Empirical Risk Minimizer in a Single Pass
TLDR
This work provides a simple streaming algorithm which, under standard regularity assumptions on the underlying problem, can be implemented in linear time with a single pass over the observed data, using space linear in the size of a single sample.
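As a rough illustration of such a single-pass scheme, the sketch below runs plain averaged SGD on a stream of (x, y) pairs under squared loss; the step size and iterate averaging are illustrative assumptions rather than the paper's exact algorithm, but the memory use is linear in the size of a single sample.

```python
import numpy as np

def streaming_fit(stream, dim, lr=0.05):
    # Single pass, O(dim) memory: one SGD step per sample on squared loss,
    # returning the running average of the iterates.
    w = np.zeros(dim)
    w_avg = np.zeros(dim)
    for n, (x, y) in enumerate(stream, start=1):
        w -= lr * (x @ w - y) * x
        w_avg += (w - w_avg) / n
    return w_avg
```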
Adaptive Sequential Stochastic Optimization
A framework is introduced for sequentially solving convex stochastic minimization problems, where the objective functions change slowly, in the sense that the distance between successive minimizers is bounded.
Adaptive importance sampling in general mixture classes
TLDR
An adaptive algorithm is proposed that iteratively updates both the weights and component parameters of a mixture importance sampling density so as to optimise the performance of importance sampling, as measured by an entropy criterion.
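A toy version of this idea, assuming a one-dimensional Gaussian target and a three-component Gaussian proposal mixture (neither from the paper), updates the mixture weights and means with an importance-weighted EM-style step:

```python
import numpy as np

rng = np.random.default_rng(0)
target = lambda x: np.exp(-0.5 * (x - 3.0) ** 2) / np.sqrt(2 * np.pi)  # assumed target

def mixture_pdf(x, weights, means, sigma=1.0):
    comps = np.exp(-0.5 * ((x[:, None] - means) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
    return comps @ weights

weights = np.full(3, 1.0 / 3.0)        # initial mixture weights
means = np.array([-2.0, 0.0, 2.0])     # initial component means
for _ in range(20):
    comp = rng.choice(3, size=500, p=weights)
    x = rng.normal(means[comp], 1.0)                   # draw from the mixture
    iw = target(x) / mixture_pdf(x, weights, means)    # importance weights
    iw /= iw.sum()
    resp = np.exp(-0.5 * (x[:, None] - means) ** 2)    # component responsibilities
    resp /= resp.sum(axis=1, keepdims=True)
    wr = iw[:, None] * resp
    weights = wr.sum(axis=0)                           # weighted-EM weight update
    means = (wr * x[:, None]).sum(axis=0) / weights    # weighted-EM mean update
```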
Matrix regularization techniques for online multitask learning
TLDR
This paper examines the problem of prediction with expert advice in a setup where the learner is presented with a sequence of examples coming from different tasks, and proposes regularization techniques to enforce the constraints.
Regularized multi-task learning
TLDR
An approach to multi-task learning is presented, based on the minimization of regularization functionals similar to existing ones, such as the one for Support Vector Machines, that have been successfully used in the past for single-task learning.
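The regularization idea can be sketched as follows, under illustrative assumptions: squared loss, and each task's weight vector decomposed as a shared part plus a penalized task-specific offset (the penalty strengths, learning rate, and decomposition details here are hypothetical, not the paper's exact functional):

```python
import numpy as np

def multitask_fit(tasks, dim, lam_shared=0.1, lam_task=1.0, lr=0.01, iters=500):
    # tasks: list of (X, y) pairs, one per task. Each task's weights are
    # w0 + V[t]; the penalty pulls the task-specific offsets V[t] (and,
    # more weakly, the shared part w0) toward zero, coupling the tasks.
    w0 = np.zeros(dim)
    V = np.zeros((len(tasks), dim))
    for _ in range(iters):
        g0 = lam_shared * w0
        for t, (X, y) in enumerate(tasks):
            r = X @ (w0 + V[t]) - y          # residuals for task t
            g = X.T @ r / len(y)
            g0 += g
            V[t] -= lr * (g + lam_task * V[t])
        w0 -= lr * g0
    return w0 + V                            # per-task weight vectors
```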
Learning Multiple Tasks using Manifold Regularization
TLDR
An approximation of the manifold regularization scheme is presented that preserves the convexity of the single-task learning problem and makes the proposed MTL framework efficient and easy to implement.
A Convex Formulation for Learning Task Relationships in Multi-Task Learning
TLDR
This paper proposes a regularization formulation, called MTRL, for learning the relationships between tasks in multi-task learning, which can also describe negative task correlations and identify outlier tasks based on the same underlying principle.