Corpus ID: 88515464

Shape Parameter Estimation

@article{Zheng2017ShapePE,
  title={Shape Parameter Estimation},
  author={Peng Zheng and Aleksandr Y. Aravkin and Karthikeyan Natesan Ramamurthy},
  journal={arXiv: Machine Learning},
  year={2017}
}
Performance of machine learning approaches depends strongly on the choice of misfit penalty and on the correct setting of penalty parameters, such as the threshold of the Huber function. These parameters are typically chosen using expert knowledge, cross-validation, or black-box optimization, all of which are time-consuming for large-scale applications. We present a principled, data-driven approach that simultaneously learns the model parameters and the misfit penalty parameters. We discuss theoretical…
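To make the approach concrete, here is a minimal sketch of joint estimation for the Huber case (a hand-rolled illustration, not the paper's algorithm). Interpreting the Huber penalty with threshold kappa as a negative log-density, the kappa-dependent normalizing constant Z(kappa) must enter the objective; it is what keeps the joint problem over the model weights and kappa well-posed. The synthetic data and all names below are illustrative.

```python
# Minimal sketch (not the paper's algorithm): jointly fit linear weights w
# and the Huber threshold kappa by minimizing the negative log-likelihood
# of the density p(r) ∝ exp(-huber(r, kappa)). The n * log Z(kappa) term is
# essential: without it, the objective is minimized by the degenerate
# limit kappa -> 0.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def huber(r, kappa):
    """Huber penalty: quadratic on [-kappa, kappa], linear outside."""
    a = np.abs(r)
    return np.where(a <= kappa, 0.5 * r**2, kappa * a - 0.5 * kappa**2)

def log_partition(kappa):
    """log Z(kappa) for exp(-huber(r, kappa)), available in closed form."""
    gauss_part = np.sqrt(2 * np.pi) * (2 * norm.cdf(kappa) - 1)
    tail_part = (2.0 / kappa) * np.exp(-0.5 * kappa**2)
    return np.log(gauss_part + tail_part)

def joint_nll(theta, X, y):
    """Negative log-likelihood in (w, log kappa); kappa = exp(theta[-1]) > 0."""
    w, kappa = theta[:-1], np.exp(theta[-1])
    r = y - X @ w
    return huber(r, kappa).sum() + len(y) * log_partition(kappa)

# Illustrative synthetic problem with heavy-tailed noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.standard_t(df=2, size=200)

res = minimize(joint_nll, np.zeros(4), args=(X, y), method="L-BFGS-B")
print("w =", res.x[:-1], "kappa =", np.exp(res.x[-1]))
```

Parameterizing kappa through its logarithm keeps the threshold positive without requiring a constrained solver.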


References

Showing 1-10 of 24 references
Practical Bayesian Optimization of Machine Learning Algorithms
Describes new algorithms that account for the variable cost of learning-algorithm experiments and can leverage multiple cores for parallel experimentation, and shows that they improve on previous automatic procedures and can reach or surpass human expert-level optimization for many algorithms.
Fast Bayesian Optimization of Machine Learning Hyperparameters on Large Datasets
Proposes a generative model for the validation error as a function of training-set size, learned during the optimization process, which allows cheap exploration of preliminary configurations on small subsets by extrapolating to the full dataset.
Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization
Introduces a novel algorithm, Hyperband, which casts hyperparameter optimization as a pure-exploration, non-stochastic, infinite-armed bandit problem in which a predefined resource such as iterations, data samples, or features is allocated to randomly sampled configurations.
Efficient Hyperparameter Optimization and Infinitely Many Armed Bandits
Introduces Hyperband for hyperparameter optimization as a pure-exploration, non-stochastic, infinitely-many-armed bandit problem in which allocating additional resources to an arm corresponds to training a configuration on larger subsets of the data.
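Since the two entries above describe Hyperband's resource-allocation scheme, a compact sketch may help. It follows the published pseudocode at a high level but is only illustrative; `get_config` and `run_then_return_val_loss` are hypothetical stand-ins for sampling a configuration and training it under a given budget.

```python
import math
import random

def get_config():
    # Hypothetical: sample a random hyperparameter configuration.
    return {"lr": 10 ** random.uniform(-4, -1)}

def run_then_return_val_loss(config, resource):
    # Hypothetical: train with the given resource and report validation
    # loss; here a synthetic score whose noise shrinks with more resource.
    return abs(math.log10(config["lr"]) + 2.5) + random.random() / resource

def hyperband(R=81, eta=3):
    s_max = math.floor(math.log(R, eta) + 1e-9)  # guard against float error
    B = (s_max + 1) * R                          # budget per bracket
    best = (float("inf"), None)
    for s in range(s_max, -1, -1):               # brackets: aggressive -> conservative
        n = math.ceil(B / R * eta**s / (s + 1))  # initial number of configurations
        r = R * eta**-s                          # initial resource per configuration
        configs = [get_config() for _ in range(n)]
        for i in range(s + 1):                   # successive halving within a bracket
            n_i = math.floor(n * eta**-i)
            r_i = r * eta**i
            losses = [run_then_return_val_loss(c, r_i) for c in configs]
            order = sorted(range(len(configs)), key=lambda j: losses[j])
            if losses[order[0]] < best[0]:
                best = (losses[order[0]], configs[order[0]])
            keep = max(1, math.floor(n_i / eta))
            configs = [configs[j] for j in order[:keep]]
    return best

print(hyperband())
```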
Sparse/robust estimation and Kalman smoothing with nonsmooth log-concave densities: modeling, computation, and theory
Shows that the extended framework admits arbitrary piecewise linear-quadratic (PLQ) densities, and that the proposed interior-point approach solves the generalized Kalman smoothing problem while maintaining complexity linear in the length of the time series, just as in the Gaussian case.
Automatic Inference of the Quantile Parameter
Proposes to jointly infer the quantile parameter and the unknown model parameters for the asymmetric quantile Huber and quantile losses, and gives an algorithm for the joint estimation.
Orthogonal Matching Pursuit for Sparse Quantile Regression
Proposes a generalized Orthogonal Matching Pursuit algorithm for variable selection, taking the misfit loss to be either the traditional quantile loss or a smoothed version the authors call the quantile Huber, and applies a recently proposed interior-point methodology to solve all formulations efficiently.
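The two entries above both use the quantile Huber loss. The following sketch states one common parameterization (worth checking against each paper's exact convention): the pinball loss with asymmetry tau in (0, 1), smoothed by a quadratic region whose width is set by kappa > 0.

```python
import numpy as np

def quantile_huber(r, tau, kappa):
    """One common quantile Huber parameterization (illustrative)."""
    r = np.asarray(r, dtype=float)
    out = np.empty_like(r)
    upper = r >= kappa * tau             # linear branch, slope tau
    lower = r <= -kappa * (1.0 - tau)    # linear branch, slope -(1 - tau)
    mid = ~(upper | lower)               # quadratic smoothing near zero
    out[upper] = tau * (r[upper] - 0.5 * kappa * tau)
    out[lower] = (1.0 - tau) * (-r[lower] - 0.5 * kappa * (1.0 - tau))
    out[mid] = r[mid] ** 2 / (2.0 * kappa)
    return out
```

As kappa -> 0 this recovers the standard quantile (pinball) loss, and tau = 1/2 gives a symmetric Huber-type penalty.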
Bayesian Quantile Regression
Recent work by Schennach (2005) has opened the way to a Bayesian treatment of quantile regression. Her method, called Bayesian exponentially tilted empirical likelihood (BETEL), provides a likelihood…
Bayesian variable selection and estimation in maximum entropy quantile regression
Develops an efficient Gibbs sampler and shows, through simulation studies and a real-data analysis, that the proposed method outperforms the Bayesian adaptive Lasso and the Bayesian Lasso.
RASL: Robust alignment by sparse and low-rank decomposition for linearly correlated images
Reduces this extremely challenging optimization problem to a sequence of convex programs that minimize the sum of the ℓ1 norm and the nuclear norm of the two component matrices, which can be solved efficiently by scalable convex optimization techniques with guaranteed fast convergence.
…