  • Monica Babeş-Vroman, Vukosi Marivate, Kaushik Subramanian, Michael L. Littman +18 others
  • 2015
Learning desirable behavior from a limited number of demonstrations, also known as inverse reinforcement learning, is a challenging task in machine learning. I apply maximum likelihood estimation to the problem of inverse reinforcement learning, and show that it quickly and successfully identifies the unknown reward function from traces of optimal or …
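The core recipe behind maximum-likelihood IRL is compact enough to sketch. The toy below is an illustrative stand-in, not the paper's implementation: it assumes a known transition model, a reward linear in one-hot state features, a Boltzmann policy obtained by soft value iteration, and finite-difference gradient ascent on the likelihood of a handful of invented (state, action) demonstrations.

```python
import numpy as np

# Minimal maximum-likelihood IRL sketch (illustrative, not the paper's method).
# Assumptions: known dynamics, reward linear in one-hot state features,
# Boltzmann policy from soft value iteration, gradient ascent on the
# log-likelihood of demonstrated (state, action) pairs.

n_states, n_actions, gamma = 5, 2, 0.9
rng = np.random.default_rng(0)

P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))  # P[a, s, s']
phi = np.eye(n_states)                                            # state features

def soft_q(w, iters=100):
    """Soft value iteration: Q(s,a) = r(s) + gamma * E[V(s')], V = logsumexp Q."""
    r = phi @ w
    Q = np.zeros((n_states, n_actions))
    for _ in range(iters):
        m = Q.max(axis=1)
        V = m + np.log(np.exp(Q - m[:, None]).sum(axis=1))  # stable logsumexp
        Q = r[:, None] + gamma * np.einsum('ast,t->sa', P, V)
    return Q

def policy(w):
    Q = soft_q(w)
    e = np.exp(Q - Q.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)  # Boltzmann action probabilities

def log_lik(w, demos):
    pi = policy(w)
    return sum(np.log(pi[s, a]) for s, a in demos)

# Hypothetical demonstrations assumed to come from a near-optimal expert.
demos = [(0, 1), (1, 1), (2, 0), (3, 1)]

w, lr, eps = np.zeros(n_states), 0.05, 1e-4
for _ in range(150):
    grad = np.zeros_like(w)
    base = log_lik(w, demos)
    for i in range(n_states):  # finite-difference gradient, for brevity
        w2 = w.copy(); w2[i] += eps
        grad[i] = (log_lik(w2, demos) - base) / eps
    w += lr * grad             # ascend the demonstration likelihood

print("learned reward weights:", np.round(w, 2))
```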
Controlling particle swarm optimization is typically an unintuitive task, involving the adjustment of low-level system parameters that often have no obvious correlation with the emergent properties of the optimization process. We propose a method for controlling particle swarm optimization with non-explicit control parameters: parameters …
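For context, here is plain particle swarm optimization showing exactly the low-level knobs the abstract calls unintuitive (inertia `w`, cognitive `c1`, social `c2`); the paper's non-explicit control layer on top of them is not reproduced, and the sphere objective is only a placeholder.

```python
import numpy as np

# Vanilla PSO, shown only to illustrate the low-level parameters (inertia w,
# cognitive c1, social c2) the abstract describes as hard to tune by hand.

def pso(f, dim=2, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                         # particle velocities
    pbest = x.copy()                             # per-particle best positions
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()     # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

sphere = lambda p: float((p ** 2).sum())         # placeholder objective
best, val = pso(sphere)
print("best point:", np.round(best, 3), "value:", round(val, 6))
```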
This paper addresses the problem of training an artificial agent to follow verbal instructions representing high-level tasks, using a set of instructions paired with demonstration traces of appropriate behavior. From this data, a mapping from instructions to tasks is learned, enabling the agent to carry out new instructions in novel environments.
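As a rough illustration of learning an instruction-to-task mapping from paired data, the sketch below trains a naive-Bayes text classifier on a few invented (instruction, task) pairs; the paper's actual model and the demonstration traces are not represented here.

```python
from collections import Counter, defaultdict
import math

# Toy stand-in for "mapping instructions to tasks": a Laplace-smoothed
# naive-Bayes classifier over paired (instruction, task) examples. Only the
# pairing of language with tasks is taken from the abstract.

pairs = [
    ("go to the red door", "navigate"),
    ("walk to the blue room", "navigate"),
    ("pick up the green block", "fetch"),
    ("bring me the small cup", "fetch"),
]

prior = Counter(task for _, task in pairs)
word_counts = defaultdict(Counter)
for text, task in pairs:
    word_counts[task].update(text.split())

def classify(instruction, alpha=1.0):
    """Score each task by log P(task) + sum_w log P(w | task)."""
    vocab = {w for c in word_counts.values() for w in c}
    scores = {}
    for task in prior:
        total = sum(word_counts[task].values())
        score = math.log(prior[task] / len(pairs))
        for w in instruction.split():
            score += math.log((word_counts[task][w] + alpha) /
                              (total + alpha * len(vocab)))
        scores[task] = score
    return max(scores, key=scores.get)

print(classify("go to the green room"))  # classified as: navigate
```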
We consider the problem of inference in a probabilistic model for transient populations where we wish to learn about arrivals, departures, and population size over all time, but the only available data are periodic counts of the population size at specific observation times. The underlying model arises in queueing theory (as an M_t/G/∞ queue) and also in …
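To make the model concrete, the sketch below forward-simulates an M_t/G/∞ population and reports only periodic counts, the same kind of data the inference problem observes. The sinusoidal arrival rate, lognormal service times, and thinning sampler are illustrative assumptions, not the paper's inference procedure.

```python
import numpy as np

# Forward simulation of an M_t/G/infinity population: Poisson arrivals with a
# time-varying rate, i.i.d. service (residence) durations, and the population
# observed only as counts at fixed times.

rng = np.random.default_rng(1)
T = 24.0
rate = lambda t: 5.0 + 4.0 * np.sin(np.pi * t / 12.0)  # arrivals per hour
rate_max = 9.0                                          # upper bound on rate(t)

# Thinning (Lewis-Shedler) to sample inhomogeneous Poisson arrival times.
t, arrivals = 0.0, []
while True:
    t += rng.exponential(1.0 / rate_max)
    if t > T:
        break
    if rng.random() < rate(t) / rate_max:
        arrivals.append(t)
arrivals = np.array(arrivals)

# General service times: lognormal residence durations (an assumption).
departures = arrivals + rng.lognormal(mean=0.5, sigma=0.6, size=arrivals.size)

# The only "data" the inference problem sees: counts at observation times.
obs_times = np.arange(0.0, T + 1e-9, 3.0)
counts = [int(((arrivals <= u) & (departures > u)).sum()) for u in obs_times]
print(dict(zip(obs_times.tolist(), counts)))
```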
  • Penny Rheingans, Marie Desjardins, Wallace Brown, Alex Morrow, Doug Stull, Kevin Winner +1 other
  • 2011
In many scientific fields, models are used to characterize relationships and processes, as well as to predict outcomes from initial conditions and inputs. These models can support the decision-making process by allowing investigators to consider the likely effects of possible interventions and identify efficient ways to achieve desired outcomes. Machine …