
- Junya Honda, Hirosuke Yamamoto
- IEEE Transactions on Information Theory
- 2013

This paper considers polar coding for asymmetric settings, that is, channel coding for asymmetric channels and lossy source coding for nonuniform sources and/or asymmetric distortion measures. The…

- Junya Honda, Akimichi Takemura
- COLT
- 2010

The multiarmed bandit problem is a typical example of the dilemma between exploration and exploitation in reinforcement learning. This problem is expressed as a model of a gambler playing a slot machine…
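The exploration/exploitation dilemma described in this abstract can be sketched with a minimal ε-greedy simulation. This is an illustration of the general bandit setting only, not the policy analyzed in the paper; the arm means, ε value, and horizon below are hypothetical:

```python
import random

def epsilon_greedy(means, epsilon=0.1, horizon=10000, seed=0):
    """Minimal epsilon-greedy simulation on Bernoulli arms (illustrative only)."""
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k        # pulls per arm
    estimates = [0.0] * k   # running empirical mean per arm
    total = 0.0
    for _ in range(horizon):
        if rng.random() < epsilon:
            arm = rng.randrange(k)  # explore: pick a uniformly random arm
        else:
            arm = max(range(k), key=lambda a: estimates[a])  # exploit: best estimate
        reward = 1.0 if rng.random() < means[arm] else 0.0   # Bernoulli reward
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return counts, total

counts, total = epsilon_greedy([0.3, 0.5, 0.7])
```

With enough rounds the empirical estimates concentrate around the true means, so the exploit step settles on the best arm while the ε fraction of rounds keeps exploring the others.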

We study the $K$-armed dueling bandit problem, a variation of the standard stochastic bandit problem where the feedback is limited to relative comparisons of a pair of arms. We introduce a tight…

- Junya Honda, Hirosuke Yamamoto
- International Symposium on Information Theory and…
- 2012

Recently, linear programming (LP) decoding has been attracting much attention as an alternative to belief propagation (BP) decoding for LDPC codes. It is well known for BP decoding that nonbinary LDPC…

- Hirosuke Yamamoto, Masato Tsuchihashi, Junya Honda
- IEEE Transactions on Information Theory
- 2015

We propose almost instantaneous fixed-to-variable length (AIFV) codes such that two (resp. K - 1) code trees are used if code symbols are binary (resp. K-ary for K ≥ 3), and source symbols are…

- Junya Honda, Akimichi Takemura
- AISTATS
- 2013

In stochastic bandit problems, a Bayesian policy called Thompson sampling (TS) has recently attracted much attention for its excellent empirical performance. However, the theoretical analysis of this…
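Thompson sampling itself can be sketched in a few lines for Bernoulli arms with Beta posteriors. This is a standard textbook form of the policy named in the abstract, not the paper's analysis; the arm means and horizon are hypothetical:

```python
import random

def thompson_sampling(means, horizon=10000, seed=0):
    """Thompson sampling for Bernoulli arms with Beta(1, 1) priors (sketch)."""
    rng = random.Random(seed)
    k = len(means)
    # Beta posterior parameters per arm: alpha = successes + 1, beta = failures + 1.
    alpha = [1] * k
    beta = [1] * k
    counts = [0] * k
    for _ in range(horizon):
        # Draw one sample from each arm's posterior and play the argmax.
        samples = [rng.betavariate(alpha[a], beta[a]) for a in range(k)]
        arm = max(range(k), key=lambda a: samples[a])
        reward = rng.random() < means[arm]  # Bernoulli reward
        alpha[arm] += int(reward)
        beta[arm] += int(not reward)
        counts[arm] += 1
    return counts

counts = thompson_sampling([0.4, 0.6])
```

Because each round plays the arm whose posterior sample is largest, arms with uncertain posteriors still get sampled occasionally, which is how the policy balances exploration and exploitation without an explicit exploration parameter.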

- Junya Honda
- IEEE International Symposium on Information…
- 2015

Error probabilities of random codes for memoryless channels are considered in this paper. In the area of communication systems, admissible error probability is very small and it is sometimes more…

- Junya Honda, Akimichi Takemura
- Machine Learning
- 2009

In the multiarmed bandit problem, the dilemma between exploration and exploitation in reinforcement learning is expressed as a model of a gambler playing a slot machine with multiple arms. A policy…

- Junpei Komiyama, Junya Honda, Hiroshi Nakagawa
- ICML
- 2016

We study the K-armed dueling bandit problem, a variation of the standard stochastic bandit problem where the feedback is limited to relative comparisons of a pair of arms. The hardness of…
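The relative-comparison feedback model described here can be illustrated with a naive simulation: a pairwise duel oracle plus an estimator that duels every pair a fixed number of times and returns the Copeland winner (the arm with the most estimated pairwise wins). This sketch only shows the feedback model; it is not the algorithm from the paper, and the preference matrix below is hypothetical:

```python
import random

def duel(i, j, pref, rng):
    """Pairwise-comparison oracle: True iff arm i beats arm j in one duel.
    pref[i][j] is the (hypothetical) probability that i wins against j."""
    return rng.random() < pref[i][j]

def copeland_estimate(pref, duels_per_pair=2000, seed=0):
    """Naive estimator: duel all pairs equally often, return the Copeland winner."""
    rng = random.Random(seed)
    k = len(pref)
    wins = [0] * k
    for i in range(k):
        for j in range(i + 1, k):
            w = sum(duel(i, j, pref, rng) for _ in range(duels_per_pair))
            # Credit the arm that won the majority of duels in this pair.
            if w > duels_per_pair - w:
                wins[i] += 1
            else:
                wins[j] += 1
    return wins.index(max(wins))

# Hypothetical preference matrix: arm 0 beats every other arm (Condorcet winner).
pref = [[0.5, 0.6, 0.7],
        [0.4, 0.5, 0.6],
        [0.3, 0.4, 0.5]]
best = copeland_estimate(pref)
```

The point of the dueling setting is that only these relative outcomes are observed; no absolute reward for a single arm is ever revealed, which is what makes regret analysis in this model distinct from the standard bandit problem.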

- Junya Honda, Akimichi Takemura
- J. Mach. Learn. Res.
- 2015

In this paper we consider a stochastic multiarmed bandit problem. It is known in this problem that the Deterministic Minimum Empirical Divergence (DMED) policy achieves the asymptotic theoretical bound…