HC-Search: Learning Heuristics and Cost Functions for Structured Prediction

@inproceedings{Doppa2013HCSearchLH,
  title={HC-Search: Learning Heuristics and Cost Functions for Structured Prediction},
  author={Janardhan Rao Doppa and Alan Fern and Prasad Tadepalli},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2013}
}
Structured prediction is the problem of learning a function from structured inputs to structured outputs, with prototypical examples being part-of-speech tagging and image labeling. Inspired by the recent successes of search-based structured prediction, we introduce a new framework for structured prediction called HC-Search. Given a structured input, the framework uses a search procedure guided by a learned heuristic H to uncover high quality candidate outputs and then uses a separate…
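The two-stage idea in the abstract — a learned heuristic H guides search toward promising candidate outputs, and a separate learned cost function C selects the final prediction — can be sketched as a best-first search. This is a minimal illustration, not the paper's implementation; `successors`, `H`, and `C` are assumed caller-supplied callables, and lower scores are taken to be better for both functions:

```python
import heapq

def hc_search(x, initial_output, successors, H, C, budget=100):
    """Stage one: best-first search guided by heuristic H collects
    candidate outputs.  Stage two: cost function C picks the winner.

    successors(x, y) yields outputs reachable from y; outputs must be
    hashable (e.g. tuples).  All names here are illustrative.
    """
    start = initial_output(x)
    frontier = [(H(x, start), start)]
    visited = {start}
    candidates = [start]
    while frontier and len(candidates) < budget:
        _, y = heapq.heappop(frontier)
        for y_next in successors(x, y):
            if y_next not in visited:
                visited.add(y_next)
                candidates.append(y_next)
                heapq.heappush(frontier, (H(x, y_next), y_next))
    # Stage two: the cost function, not the heuristic, makes the final choice.
    return min(candidates, key=lambda y: C(x, y))
```

The point of the separation is visible in the last line: H only decides which outputs get generated, while C alone decides which one is returned, so the two functions can be trained for their distinct roles.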


HC-Search: A Learning Framework for Search-based Structured Prediction
TLDR
This work introduces a new framework for structured prediction called HC-Search, which significantly outperforms several state-of-the-art methods and is sensitive to the particular loss function of interest and the time-bound allowed for predictions.
Structured prediction via output space search
TLDR
A novel approach to automatically defining an effective search space over structured outputs, which is able to leverage the availability of powerful classification learning algorithms, is described, and the limited-discrepancy search space is defined and related to the quality of learned classifiers.
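The limited-discrepancy search space mentioned here starts from a base classifier's greedy output and considers outputs that deviate from it in a bounded number of positions. A rough illustration of that neighborhood, under the assumption that `alternatives[i]` lists the other labels the base classifier considered plausible at position i (names and construction are hypothetical, not the paper's exact definition):

```python
from itertools import combinations, product

def limited_discrepancy_outputs(greedy_output, alternatives, max_d=2):
    """Enumerate outputs differing from the greedy output in at most
    max_d positions ("discrepancies"), fewest discrepancies first."""
    n = len(greedy_output)
    yield tuple(greedy_output)                      # 0 discrepancies
    for d in range(1, max_d + 1):
        for positions in combinations(range(n), d):
            for choices in product(*(alternatives[i] for i in positions)):
                y = list(greedy_output)
                for i, c in zip(positions, choices):
                    y[i] = c                        # deviate at position i
                yield tuple(y)
```

Ordering by discrepancy count means the outputs the base classifier is most confident about are generated first, which is what makes the space quality depend on the learned classifier.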
HC-Search for Multi-Label Prediction: An Empirical Study
TLDR
This paper empirically evaluates the instantiation of the HC-Search framework along with many existing multi-label learning algorithms on a variety of benchmarks by employing diverse task loss functions, and demonstrates that the performance of existing algorithms tends to be very similar in most cases and that the HC-Search approach is comparable to and often better than all the other algorithms across different loss functions.
Output Feature Augmented Lasso
TLDR
This paper proposes to augment Lasso with output features by decoupling the joint feature mapping function of traditional structured learning, and uses the Augmented Lagrangian Method with Alternating Direction Minimizing to find the optimal model parameters.
Adversarial Structured Output Prediction by
TLDR
This oral exam studies the state-of-the-art methods for solving the problem of structured learning and output prediction in adversarial settings, discusses the strengths and weaknesses of the existing methods, and points to the open problems in the field.
New Directions in Search-based Structured Prediction: Multi-Task Learning and Integration of Deep Models (Ph.D. thesis of Chao Ma, Computer Science, presented July 15, 2019)
TLDR
A search-based learning approach called “Prune-and-Score” to improve the accuracy of greedy-policy-based structured prediction for search spaces with large action spaces, and the HC-Nets framework, which allows incorporating prior knowledge in the form of constraints.
Extreme classification under limited space and time budget
TLDR
A new framework for solving extreme classification is discussed, in which the original problem is reduced to a structured prediction problem, and learning algorithms that work under a strict time and space budget are obtained.
Learning to control a structured-prediction decoder for detection of HTTP-layer DDoS attackers
TLDR
An online policy-gradient method is derived that finds the parameters of the controller and of the structured-prediction model in a joint optimization problem and obtains a convergence guarantee for the latter method.
Mixed heuristic search for sketch prediction on chemical structure drawing
TLDR
This approach transforms the sketch prediction problem into a search problem to find a Hamiltonian path in the corresponding sub-graph with polynomial time complexity and introduces mixed heuristics to guide the search procedure.
Rectifying Classifier Chains for Multi-Label Classification
TLDR
This work analyzes the influence of a potential pitfall of the learning process, namely the discrepancy between the feature spaces used in training and testing, and proposes two modifications of classifier chains that are meant to overcome this problem.

References

Showing 1–10 of 28 references
Output Space Search for Structured Prediction
TLDR
This paper defines the limited-discrepancy search space over structured outputs, which is able to leverage powerful classification learning algorithms to improve the search space quality and gives a generic cost function learning approach.
Structured Prediction Cascades
TLDR
It is shown that the learned cascades are capable of reducing the complexity of inference by up to five orders of magnitude, enabling the use of models which incorporate higher order features and yield higher accuracy.
Learning Linear Ranking Functions for Beam Search with Application to Planning
Beam search is commonly used to help maintain tractability in large search spaces at the expense of completeness and optimality. Here we study supervised learning of linear ranking functions for…
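The idea this reference studies — beam search whose pruning decisions are driven by a learned linear ranking function — can be sketched minimally as follows; `expand` and `features` are assumed caller-supplied, and the names are illustrative rather than the paper's formulation:

```python
def beam_search(start, expand, features, w, beam_width=3, depth=5):
    """Layered beam search: at each depth, keep only the top-`beam_width`
    nodes ranked by the learned linear function w . features(node)."""
    beam = [start]
    for _ in range(depth):
        children = [child for node in beam for child in expand(node)]
        if not children:
            break
        # Rank candidates by the linear scoring function (higher is better),
        # then prune to the beam width -- this is where learning matters.
        children.sort(key=lambda c: -sum(wi * fi
                                         for wi, fi in zip(w, features(c))))
        beam = children[:beam_width]
    return beam[0]
```

Because only `beam_width` nodes survive each layer, a poorly ranked weight vector can prune the optimal path irrecoverably, which is exactly why the ranking function is worth learning.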
Sidestepping Intractable Inference with Structured Ensemble Cascades
TLDR
This work proposes sidestepping intractable inference altogether by learning ensembles of tractable sub-models as part of a structured prediction cascade, focusing in particular on problems with high-treewidth and large state-spaces, which occur in many computer vision tasks.
A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning
TLDR
This paper proposes a new iterative algorithm, which trains a stationary deterministic policy, that can be seen as a no-regret algorithm in an online learning setting, and demonstrates that this new approach outperforms previous approaches on two challenging imitation learning problems and a benchmark sequence labeling problem.
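The iterative scheme summarized above (DAgger-style) alternates between running the current policy, collecting expert labels on the states it actually visits, and retraining on all data gathered so far. A toy sketch under assumed `expert`, `learn`, and `rollout` callables (all names hypothetical):

```python
def dagger(expert, learn, rollout, rounds=5):
    """Dataset-aggregation imitation learning loop.

    expert(state)  -> action label from the expert
    learn(data)    -> a policy trained on (state, action) pairs
    rollout(policy)-> states visited when executing the policy
    """
    data = []                # aggregated (state, expert action) pairs
    policy = expert          # the first rollout follows the expert
    for _ in range(rounds):
        for state in rollout(policy):
            data.append((state, expert(state)))   # expert labels visited states
        policy = learn(data)  # retrain one stationary policy on all data
    return policy
```

Labeling the states the learner itself reaches, rather than only expert trajectories, is what gives the no-regret reduction its guarantee against compounding errors.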
Training Factor Graphs with Reinforcement Learning for Efficient MAP Inference
TLDR
This paper presents a new method of explicitly selecting fruitful downward jumps by leveraging reinforcement learning (RL). Rather than setting parameters to maximize the likelihood of the training data, the parameters of the factor graph are treated as a log-linear function approximator and learned with methods of temporal difference (TD); MAP inference is performed by executing the resulting policy on held-out test data.
Search-based structured prediction
TLDR
Searn is an algorithm for integrating search and learning to solve complex structured prediction problems such as those that occur in natural language, speech, computational biology, and vision and comes with a strong, natural theoretical guarantee: good performance on the derived classification problems implies good performance on the structured prediction problem.
Learned Prioritization for Trading Off Accuracy and Speed
TLDR
This work proposes a hybrid reinforcement/apprenticeship learning algorithm that learns to speed up an initial policy, trading off accuracy for speed according to various settings of a speed term in the loss function.
Efficient Reductions for Imitation Learning
TLDR
This work proposes two alternative algorithms for imitation learning where training occurs over several episodes of interaction and shows that this leads to stronger performance guarantees and improved performance on two challenging problems: training a learner to play a 3D racing game and Mario Bros.
Support vector machine learning for interdependent and structured output spaces
TLDR
This paper proposes to generalize multiclass Support Vector Machine learning in a formulation that involves features extracted jointly from inputs and outputs, and demonstrates the versatility and effectiveness of the method on problems ranging from supervised grammar learning and named-entity recognition, to taxonomic text classification and sequence alignment.