Corpus ID: 239016316

A Bayesian approach to multi-task learning with network lasso

@article{Shimamura2021ABA,
  title={A Bayesian approach to multi-task learning with network lasso},
  author={Kaito Shimamura and Shuichi Kawano},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.09040}
}
Network lasso is a method for solving multi-task learning problems through regularized maximum likelihood estimation. A characteristic of network lasso is that it assigns a different model to each sample, with the relationships among the models represented by relational coefficients. A crucial issue in network lasso is providing appropriate values for these relational coefficients. In this paper, we propose a Bayesian approach to solving multi-task learning problems with network lasso. This approach… 
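For context, a rough sketch of the network lasso objective in the formulation of Hallac et al. (2015), where the edge weights play the role of the relational coefficients discussed above (the notation here is illustrative and not taken from this paper):

\min_{\beta_1, \dots, \beta_n} \; \sum_{i=1}^{n} \ell_i(\beta_i) \;+\; \lambda \sum_{(j,k) \in \mathcal{E}} w_{jk} \, \lVert \beta_j - \beta_k \rVert_2 ,

where each sample i has its own parameter vector \beta_i, \ell_i is the loss on sample i, \mathcal{E} is the edge set of a graph over the samples, and w_{jk} \ge 0 are the relational coefficients whose specification the proposed Bayesian approach addresses.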


References

Showing 1–10 of 18 references
The Bayesian Lasso
The Lasso estimate for linear regression parameters can be interpreted as a Bayesian posterior mode estimate when the regression parameters have independent Laplace (i.e., double-exponential) priors.
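As a quick illustration of that equivalence (a sketch; the notation is not taken from the reference): for the linear model y = X\beta + \varepsilon with independent Laplace priors p(\beta_j) \propto \exp(-\lambda \lvert \beta_j \rvert / \sigma), the negative log posterior is, up to constants,

\frac{1}{2\sigma^2} \lVert y - X\beta \rVert_2^2 + \frac{\lambda}{\sigma} \sum_j \lvert \beta_j \rvert ,

so the posterior mode coincides with a lasso estimate whose penalty level depends on \lambda and \sigma.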
Classifying Partially Labeled Networked Data via Logistic Network Lasso
Nguyen Tran, Henrik Ambos, A. Jung. ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020.
This work applies the network Lasso to classify partially labeled data points characterized by high-dimensional feature vectors, solving a regularized empirical risk minimization problem that uses the total variation of a classifier as a regularizer.
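A minimal sketch of that kind of objective, assuming a semi-supervised setup in which only a subset \mathcal{L} of nodes is labeled (notation illustrative, not from the reference):

\min_{\beta_1, \dots, \beta_n} \; \sum_{i \in \mathcal{L}} \log\!\left(1 + \exp(-y_i x_i^{\top} \beta_i)\right) \;+\; \lambda \sum_{(j,k) \in \mathcal{E}} w_{jk} \, \lVert \beta_j - \beta_k \rVert_2 ,

where the second term is the weighted total variation of the node-wise classifiers over the graph.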
Localized Lasso for High-Dimensional Regression
The localized Lasso is introduced, which is suited for learning models that are both interpretable and highly predictive in problems with high dimensionality and small sample size; a simple yet efficient iterative least-squares based optimization procedure is also proposed.
Multi-Task Feature Learning
The method builds upon the well-known 1-norm regularization problem, using a new regularizer that controls the number of learned features common to all the tasks, and develops an iterative algorithm for solving it.
Inference with normal-gamma prior distributions in regression problems
This paper considers the effects of placing an absolutely continuous prior distribution on the regression coefficients of a linear model. We show that the posterior expectation is a matrix-shrunken…
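For reference, one common way to write the normal-gamma hierarchy (a sketch; the exact parameterization in the reference may differ):

\beta_j \mid \psi_j \sim \mathcal{N}(0, \psi_j), \qquad \psi_j \sim \mathrm{Gamma}\!\left(\lambda, \tfrac{1}{2\gamma^2}\right) \ \text{(shape–rate)},

so that each coefficient marginally follows a normal-gamma (variance-gamma) distribution, whose mass near zero and tail behaviour are controlled by the shape parameter \lambda.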
Convex multi-task feature learning
It is proved that the method for learning sparse representations shared across multiple tasks is equivalent to solving a convex optimization problem, for which an iterative algorithm that converges to an optimal solution is given.
Network Lasso: Clustering and Optimization in Large Graphs
The network lasso is introduced, a generalization of the group lasso to a network setting that allows for simultaneous clustering and optimization on graphs, and an algorithm based on the Alternating Direction Method of Multipliers (ADMM) is developed to solve this problem in a distributed and scalable manner.
Shrink Globally, Act Locally: Sparse Bayesian Regularization and Prediction
We study the classic problem of choosing a prior distribution for a location parameter β = (β_1, …, β_p) as p grows large. First, we study the standard “global-local shrinkage” approach, based on…
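The generic global-local form referred to here can be sketched as (notation illustrative):

\beta_j \mid \lambda_j, \tau \sim \mathcal{N}(0, \lambda_j^2 \tau^2), \qquad \lambda_j \sim \pi(\lambda_j), \qquad \tau \sim \pi(\tau),

with a local scale \lambda_j shrinking individual coefficients and a global scale \tau controlling the overall level of sparsity.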
Alternative prior distributions for variable selection with very many more variables than observations
The problem of variable selection in regression and the generalised linear model is addressed. We adopt a Bayesian approach with priors for the regression coefficients that are scale mixtures of…
Dirichlet–Laplace Priors for Optimal Shrinkage
This article proposes a new class of Dirichlet–Laplace priors, which possess optimal posterior concentration and lead to efficient posterior computation.
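A sketch of the Dirichlet–Laplace hierarchy, assuming the usual presentation (the hyperprior on the global scale is omitted here, and the notation is illustrative):

\beta_j \mid \phi_j, \tau \sim \mathrm{DE}(\phi_j \tau), \qquad (\phi_1, \dots, \phi_p) \sim \mathrm{Dirichlet}(a, \dots, a),

where DE(b) denotes a double-exponential (Laplace) distribution with scale b, the Dirichlet weights \phi_j allocate the global scale \tau across coefficients, and \tau is given its own prior.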