
When training and test samples follow different input distributions (i.e., the situation called covariate shift), the maximum likelihood estimator is known to lose its consistency. For regaining consistency, the log-likelihood terms need to be weighted according to the importance (i.e., the ratio of test and training input densities). Thus, accurately…

- Masashi Sugiyama, Tsuyoshi Idé, Shinichi Nakajima, Jun Sese
- Machine Learning
- 2008
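The importance-weighting idea in the abstract above can be illustrated numerically. The following is a minimal sketch, assuming the training and test input densities are known 1-D Gaussians (in practice the density ratio must itself be estimated from samples, which is what the paper addresses); it fits a misspecified linear model by importance-weighted least squares, i.e., maximizing the weighted Gaussian log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)

# Covariate shift: training inputs are centered at 0, test inputs at 1.
# Both densities are assumed known here (an illustrative simplification).
mu_tr, mu_te, sigma = 0.0, 1.0, 0.5

def gauss_pdf(x, mu, s):
    return np.exp(-(x - mu) ** 2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

x_tr = rng.normal(mu_tr, sigma, 500)
y_tr = np.sin(np.pi * x_tr) + 0.1 * rng.normal(size=500)  # nonlinear truth

# Importance weights w(x) = p_test(x) / p_train(x)
w = gauss_pdf(x_tr, mu_te, sigma) / gauss_pdf(x_tr, mu_tr, sigma)

# Fit a (misspecified) linear model two ways: plain least squares, and
# importance-weighted least squares, which minimizes sum_i w_i * r_i^2.
X = np.c_[np.ones_like(x_tr), x_tr]
theta_plain = np.linalg.lstsq(X, y_tr, rcond=None)[0]
sw = np.sqrt(w)
theta_iw = np.linalg.lstsq(sw[:, None] * X, sw * y_tr, rcond=None)[0]

# Evaluate on the test distribution: the weighted fit tracks the
# test region (around x = 1) more closely than the unweighted one.
x_te = rng.normal(mu_te, sigma, 1000)
y_te = np.sin(np.pi * x_te)
X_te = np.c_[np.ones_like(x_te), x_te]
err_plain = np.mean((X_te @ theta_plain - y_te) ** 2)
err_iw = np.mean((X_te @ theta_iw - y_te) ** 2)
print(err_plain, err_iw)
```

Under this shift the unweighted fit extrapolates the shape of the function near the training mean, while the weighted fit concentrates on the region the test inputs actually occupy.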

When only a small number of labeled samples are available, supervised dimensionality reduction methods tend to perform poorly because of overfitting. In such cases, unlabeled samples could be useful in improving the performance. In this paper, we propose a semi-supervised dimensionality reduction method which preserves the global structure of unlabeled…

- Nils Plath, Marc Toussaint, Shinichi Nakajima
- ICML
- 2009
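The trade-off described in the semi-supervised abstract above can be sketched with a hypothetical blend (not the paper's actual estimator): mix a between-class scatter computed on the few labeled samples with the total covariance of all samples, which carries the global structure of the unlabeled data, and project onto the top eigenvector:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: two Gaussian classes in 3-D, separated along the first axis;
# only 6 samples are labeled, 200 are unlabeled.
n_lab, n_unlab = 6, 200
means = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
X_lab = np.vstack([rng.normal(means[k], 1.0, (n_lab // 2, 3)) for k in (0, 1)])
y_lab = np.repeat([0, 1], n_lab // 2)
X_unlab = np.vstack([rng.normal(means[k], 1.0, (n_unlab // 2, 3)) for k in (0, 1)])

# Blend a supervised between-class scatter (labeled samples only)
# with the total covariance of ALL samples (PCA's criterion).
beta = 0.5
mu = X_lab.mean(axis=0)
S_b = sum((X_lab[y_lab == k].mean(axis=0) - mu)[:, None]
          @ (X_lab[y_lab == k].mean(axis=0) - mu)[None, :] for k in (0, 1))
S_t = np.cov(np.vstack([X_lab, X_unlab]).T)

S = (1 - beta) * S_b + beta * S_t
eigvals, eigvecs = np.linalg.eigh(S)
projection = eigvecs[:, -1]   # 1-D embedding direction
print(np.abs(projection))     # dominated by the class-separating axis
```

With so few labels the purely supervised scatter is noisy; mixing in the unlabeled covariance regularizes the embedding direction toward the dominant global structure.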

A key aspect of semantic image segmentation is to integrate local and global features for the prediction of local segment labels. We present an approach to multi-class segmentation which combines two methods for this integration: a Conditional Random Field (CRF) which couples to local image features and an image classification method which considers global…

- Shinichi Nakajima, Masashi Sugiyama, S. Derin Babacan, Ryota Tomioka
- Journal of Machine Learning Research
- 2013

The variational Bayesian (VB) approximation is known to be a promising approach to Bayesian estimation, when the rigorous calculation of the Bayes posterior is intractable. The VB approximation has been successfully applied to matrix factorization (MF), offering automatic dimensionality selection for principal component analysis. Generally, finding the VB…

- S. Derin Babacan, Shinichi Nakajima, Minh N. Do
- IEEE Transactions on Signal Processing
- 2014

In this paper, we present a general class of multivariate priors for group-sparse modeling within the Bayesian framework. We show that special cases of this class correspond to multivariate versions of several classical priors used for sparse modeling. Hence, this general prior formulation is helpful in analyzing the properties of different modeling…

- Shinichi Nakajima, Masashi Sugiyama, S. Derin Babacan
- Machine Learning
- 2013

Principal component analysis (PCA) approximates a data matrix with a low-rank one by imposing sparsity on its singular values. Its robust variant can cope with spiky noise by introducing an element-wise sparse term. In this paper, we extend such sparse matrix learning methods, and propose a novel framework called sparse additive matrix factorization (SAMF)…

In order to achieve good performance in object classification problems, it is necessary to combine information from various image features. Because large margin classifiers are constructed on the basis of similarity measures between samples, called kernels, finding appropriate feature combinations boils down to designing good kernels among a set of candidates…

- Shinichi Nakajima, Masashi Sugiyama
- Journal of Machine Learning Research
- 2011

- Masashi Sugiyama, Shinichi Nakajima
- Machine Learning
- 2009

The goal of pool-based active learning is to choose the best input points at which to gather output values from a ‘pool’ of input samples. We develop two pool-based active learning criteria for linear regression. The first criterion admits a closed-form solution, so it is computationally very efficient. However, this solution is not necessarily optimal…
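A classical closed-form pool-based criterion, used here as a stand-in for the paper's first criterion, is A-optimal design for linear regression: greedily pick the pool point that most reduces trace[(ΦᵀΦ)⁻¹], the expected parameter variance:

```python
import numpy as np

rng = np.random.default_rng(3)

# Pool of candidate 1-D inputs; linear model with features phi(x) = (1, x).
pool = rng.uniform(-1, 1, 50)
Phi = np.c_[np.ones_like(pool), pool]

# Greedy A-optimal selection: at each step, add the pool point that
# minimizes trace of the inverse information matrix of the chosen set.
selected = []
for _ in range(5):
    best, best_score = None, np.inf
    for i in range(len(pool)):
        if i in selected:
            continue
        S = Phi[selected + [i]]
        M = S.T @ S + 1e-6 * np.eye(2)   # small ridge for invertibility
        score = np.trace(np.linalg.inv(M))
        if score < best_score:
            best, best_score = i, score
    selected.append(best)

# For a linear model, extreme inputs are most informative: the chosen
# points concentrate near the boundary of the pool.
print(sorted(np.round(pool[selected], 2)))
```

The greedy loop is O(pool size) matrix inversions per pick; the closed form of the criterion is what keeps this cheap compared with retraining-based selection.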

- Shinichi Nakajima, Alexander Binder, +4 authors Motoaki Kawanabe
- 2009

Combining information from various image descriptors has become a standard technique for image classification tasks. Multiple kernel learning (MKL) approaches allow one to determine the optimal combination of such similarity matrices and the optimal classifier simultaneously. Most MKL approaches employ an ℓ1-regularization on the mixing coefficients to promote…
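The kernel-combination idea can be sketched as follows. Instead of solving the joint MKL optimization with an ℓ1 penalty on the mixing coefficients (which drives many of them to exactly zero), this toy simply scans convex mixing weights of two candidate RBF kernels inside kernel ridge regression:

```python
import numpy as np

rng = np.random.default_rng(4)

def rbf(X, Y, gamma):
    # Gaussian kernel matrix between two sets of 1-D inputs.
    return np.exp(-gamma * (X[:, None] - Y[None, :]) ** 2)

x = rng.uniform(-3, 3, 80)
y = np.sin(x) + 0.1 * rng.normal(size=80)
x_val = rng.uniform(-3, 3, 200)
y_val = np.sin(x_val)

def fit_predict(beta, gamma_a=0.1, gamma_b=10.0, lam=1e-2):
    # Convex combination K = beta*K_a + (1-beta)*K_b, then kernel ridge.
    K = beta * rbf(x, x, gamma_a) + (1 - beta) * rbf(x, x, gamma_b)
    alpha = np.linalg.solve(K + lam * np.eye(len(x)), y)
    K_val = beta * rbf(x_val, x, gamma_a) + (1 - beta) * rbf(x_val, x, gamma_b)
    return K_val @ alpha

errs = {b: np.mean((fit_predict(b) - y_val) ** 2) for b in (0.0, 0.5, 1.0)}
print(errs)  # a mixture can beat either kernel alone
```

With many candidate kernels (one per image descriptor), learning the mixing coefficients jointly with the classifier, under a sparsity-promoting penalty, is what MKL automates.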