
- Michael Kearns
- J. ACM
- 1993

In this paper, we study the problem of learning in the presence of classification noise in the probabilistic learning model of Valiant and its variants. In order to identify the class of “robust” learning algorithms in the most general way, we formalize a new but related model of learning from *statistical queries*. Intuitively,…
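The statistical-query model mentioned in the abstract can be illustrated with a small sketch: instead of seeing labeled examples, the learner asks an oracle for the expectation of a predicate over labeled draws, answered only to within an additive tolerance. The function and parameter names below are illustrative, and the oracle is simulated by sampling; the paper's definition allows any answer within the tolerance of the true expectation.

```python
import random

def sq_oracle(chi, target, draw, tau, rng=random.Random(0)):
    """Simulate a statistical-query oracle: estimate E[chi(x, f(x))]
    over the example distribution to within additive tolerance tau,
    here by averaging over enough random samples."""
    n = max(1, int(1.0 / (tau * tau)))  # crude sample-size heuristic
    total = sum(chi(x, target(x)) for x in (draw(rng) for _ in range(n)))
    return total / n

# Toy usage: uniform random bits, target f(x) = x,
# query "what fraction of examples are labeled 1?"
estimate = sq_oracle(
    chi=lambda x, y: 1.0 if y == 1 else 0.0,
    target=lambda x: x,
    draw=lambda rng: rng.randint(0, 1),
    tau=0.05,
)
```

A learner restricted to such queries never touches individual labels, which is the intuition behind its robustness to classification noise.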

- Michael Kearns, Satinder P. Singh
- Machine Learning
- 1998

We present new algorithms for reinforcement learning and prove that they have polynomial bounds on the resources required to achieve near-optimal return in general Markov decision processes. After observing that the number of actions required to approach the optimal return is lower bounded by the mixing time T of the optimal policy (in the undiscounted…

- Michael Kearns, Leslie G. Valiant
- J. ACM
- 1989

In this paper, we prove the intractability of learning several classes of Boolean functions in the distribution-free model (also called the Probably Approximately Correct or PAC model) of learning from examples. These results are *representation independent*, in that they hold regardless of the syntactic form in which the learner chooses to…

- Michael Kearns, Yishay Mansour, Andrew Y. Ng
- Machine Learning
- 1999

A critical issue for the application of Markov decision processes (MDPs) to realistic problems is how the complexity of planning scales with the size of the MDP. In stochastic environments with very large or infinite state spaces, traditional planning and reinforcement learning algorithms may be inapplicable, since their running time typically grows…

- Sally A. Goldman, Michael Kearns
- J. Comput. Syst. Sci.
- 1991

While most theoretical work in machine learning has focused on the complexity of learning, recently there has been increasing interest in formally studying the complexity of teaching. In this paper we study the complexity of teaching by considering a variant of the on-line learning model in which a helpful teacher selects the instances. We measure the…

We introduce a compact graph-theoretic representation for multi-party game theory. Our main result is a provably correct and efficient algorithm for computing approximate Nash equilibria in one-stage games represented by trees or sparse graphs.

- Michael Kearns, Diane J. Litman, Satinder P. Singh, Marilyn A. Walker
- J. Artif. Intell. Res.
- 2002

Designing the dialogue policy of a spoken dialogue system involves many nontrivial choices. This paper presents a reinforcement learning approach for automatically optimizing a dialogue policy, which addresses the technical challenges in applying reinforcement learning to a working dialogue system with human users. We report on the design, construction and…

Multi-agent games are becoming an increasingly prevalent formalism for the study of electronic commerce and auctions. The speed at which transactions can take place and the growing complexity of electronic marketplaces make the study of computationally simple agents an appealing direction. In this work, we analyze the behavior of agents that incrementally…

- Michael Kearns, Robert E. Schapire, Linda Sellie
- Machine Learning
- 1992

In this paper we initiate an investigation of generalizations of the Probably Approximately Correct (PAC) learning model that attempt to significantly weaken the target function assumptions. The ultimate goal in this direction is informally termed *agnostic learning*, in which we make virtually no assumptions on the target function. The name…