
- Alexander L. Strehl, Carlos Diuk, Michael L. Littman
- AAAI
- 2007

We consider the problem of reinforcement learning in factored-state MDPs in the setting in which learning is conducted in one long trial with no resets allowed. We show how to extend existing efficient algorithms that learn the conditional probability tables of dynamic Bayesian networks (DBNs) given their structure to the case in which DBN structure is not…
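To make the factored representation concrete: in a DBN transition model, each next-state factor depends only on a small parent set, so P(s′ | s) is a product of per-factor conditional-probability-table (CPT) entries. A minimal illustrative sketch (the structure and probabilities below are hypothetical, not taken from the paper):

```python
# Sketch of a factored transition model over two binary state factors.
# DBN structure: next value of factor 0 depends on factor 0;
# next value of factor 1 depends on factors 0 and 1.
parents = {0: (0,), 1: (0, 1)}

# Conditional probability tables: P(factor i' = 1 | parent values).
cpt = {
    0: {(0,): 0.9, (1,): 0.2},
    1: {(0, 0): 0.1, (0, 1): 0.5, (1, 0): 0.4, (1, 1): 0.8},
}

def transition_prob(s, s_next):
    """P(s' | s) factorizes over the DBN as a product of CPT entries."""
    p = 1.0
    for i, pa in parents.items():
        p1 = cpt[i][tuple(s[j] for j in pa)]
        p *= p1 if s_next[i] == 1 else 1.0 - p1
    return p

print(transition_prob((1, 0), (1, 1)))  # 0.2 * 0.4
```

The work summarized above concerns the harder problem of learning the CPT entries, and ultimately the parent structure itself, from experience; the sketch only shows why the factored form keeps the model compact.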

- Carlos Diuk, Lihong Li, Bethany R. Leffler
- ICML
- 2009

The purpose of this paper is three-fold. First, we formalize and study a problem of learning probabilistic concepts in the recently proposed KWIK framework. We give details of an algorithm, known as the Adaptive *k*-Meteorologists Algorithm, analyze its sample-complexity upper bound, and give a *matching* lower bound. Second, this algorithm is…

- Carlos Diuk, Andre Cohen, Michael L. Littman
- ICML
- 2008

Rich representations in reinforcement learning have been studied for the purpose of enabling generalization and making learning feasible in large state spaces. We introduce Object-Oriented MDPs (OO-MDPs), a representation based on objects and their interactions, which is a natural way of modeling environments and offers important generalization…
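As an illustration of the object-oriented idea, a state can be modeled as a collection of typed objects whose dynamics are expressed through relations between object attributes. A minimal sketch (the class names and the `touching` relation are hypothetical illustrations, not the paper's formal definitions):

```python
# Minimal sketch of an Object-Oriented MDP state: a collection of typed
# objects with attributes.
from dataclasses import dataclass

@dataclass
class Obj:
    cls: str     # object class, e.g. "taxi" or "passenger"
    attrs: dict  # attribute name -> value

def touching(a: Obj, b: Obj) -> bool:
    """A relation between objects: grid adjacency of their positions."""
    return abs(a.attrs["x"] - b.attrs["x"]) + abs(a.attrs["y"] - b.attrs["y"]) == 1

# A state is just a set of objects; action effects are described in terms
# of relations like `touching`, so learned dynamics can transfer across
# object instances rather than being tied to individual grid cells.
taxi = Obj("taxi", {"x": 0, "y": 0})
passenger = Obj("passenger", {"x": 0, "y": 1})
print(touching(taxi, passenger))  # True
```

The generalization benefit comes from the relational description: a rule learned about one taxi–passenger pair applies to any pair of objects of those classes.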

- José J.F. Ribas-Fernandes, Alec Solway, +4 authors Matthew M. Botvinick
- Neuron
- 2011

Human behavior displays hierarchical structure: simple actions cohere into subtask sequences, which work together to accomplish overall task goals. Although the neural substrates of such hierarchy have been the target of increasing research, they remain poorly understood. We propose that the computations supporting hierarchical behavior may relate to those…

This paper presents a new algorithm for online linear regression whose efficiency guarantees satisfy the requirements of the KWIK (Knows What It Knows) framework. The algorithm improves on the computational and storage complexity bounds of the current state-of-the-art procedure in this setting. We explore several applications of this algorithm for learning…
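The KWIK requirement can be illustrated with a toy learner: at each query it must either make an accurate prediction or admit "I don't know" and receive a labeled example. A simplified sketch of this protocol for linear regression (this is not the paper's algorithm; the span-based confidence test below is an assumption for illustration):

```python
import numpy as np

class KWIKLinearRegression:
    """Toy KWIK-style learner: predicts y = w.x only when the query lies
    in the span of previously observed inputs, else returns None."""

    def __init__(self, dim, tol=1e-6):
        self.X = np.empty((0, dim))
        self.y = np.empty(0)
        self.tol = tol

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        if len(self.X) == 0:
            return None
        # Project x onto the span of observed inputs; a large residual
        # means x has a component we have no information about.
        coeffs, *_ = np.linalg.lstsq(self.X.T, x, rcond=None)
        if np.linalg.norm(self.X.T @ coeffs - x) > self.tol:
            return None  # "I don't know"
        w, *_ = np.linalg.lstsq(self.X, self.y, rcond=None)
        return float(x @ w)

    def observe(self, x, y):
        self.X = np.vstack([self.X, np.asarray(x, dtype=float)])
        self.y = np.append(self.y, y)

learner = KWIKLinearRegression(dim=2)
print(learner.predict([1.0, 0.0]))  # None: no data yet
learner.observe([1.0, 0.0], 2.0)
print(learner.predict([2.0, 0.0]))  # close to 4.0: query is in the span
print(learner.predict([0.0, 1.0]))  # None: direction never observed
```

The point of the framework is that admissions of ignorance are bounded: a KWIK algorithm may say "I don't know" only a polynomial number of times, which is what makes such learners usable inside exploration-efficient RL methods.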

- Alec Solway, Carlos Diuk, +4 authors Matthew Botvinick
- PLoS Computational Biology
- 2014

Human behavior has long been recognized to display hierarchical structure: actions fit together into subtasks, which cohere into extended goal-directed activities. Arranging actions hierarchically has well established benefits, allowing behaviors to be represented efficiently by the brain, and allowing solutions to new tasks to be discovered easily. …

- Carlos Diuk, Karin Tsai, Jonathan Wallis, Matthew Botvinick, Yael Niv
The Journal of Neuroscience
- 2013

Studies suggest that dopaminergic neurons report a unitary, global reward prediction error signal. However, learning in complex real-life tasks, in particular tasks that show hierarchical structure, requires multiple prediction errors that may coincide in time. We used functional neuroimaging to measure prediction error signals in humans performing such a…

This paper develops a generalized apprenticeship learning protocol for reinforcement-learning agents with access to a teacher who provides policy traces (transition and reward observations). We characterize sufficient conditions of the underlying models for efficient apprenticeship learning and link these criteria to two established learnability classes (KWIK…

- Carlos Diuk, Alexander L. Strehl, Michael L. Littman
- AAMAS
- 2006

Factored representations, model-based learning, and hierarchies are well-studied techniques for improving the learning efficiency of reinforcement-learning algorithms in large-scale state spaces. We bring these three ideas together in a new algorithm. Our algorithm tackles two open problems from the reinforcement-learning literature, and provides a solution…

- Carlos Diuk, Anna C. Schapiro, Natalia Córdova, José Ribas-Fernandes, Yael Niv, Matthew Botvinick
- Computational and Robotic Models of the…
- 2013

The field of computational reinforcement learning (RL) has proved extremely useful in research on human and animal behavior and brain function. However, the simple forms of RL considered in most empirical research do not scale well, making their relevance to complex, real-world behavior unclear. In computational RL, one strategy for addressing the scaling…