Jacob W. Crandall

The ability of robots to autonomously perform tasks is increasing. More autonomy in robots means that the human managing the robot may have free time available. It is desirable to use this free time productively, and a current trend is to use it to manage multiple robots. We present the notion of neglect tolerance as a means for …
It is often desirable for a human to manage multiple robots. Autonomy is required to keep workload within tolerable ranges, and dynamically adapting the type of autonomy may be useful for responding to environment and workload changes. We identify two management styles for managing multiple robots and present results from four experiments that have …
Efforts are underway to make it possible for a single operator to effectively control multiple robots. In these high-workload situations, many questions arise, including how many robots should be in the team (Fan-out), what level of autonomy the robots should have, and when this level of autonomy should change (i.e., dynamic autonomy). We propose that a set …
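Fan-out is often approximated, in the broader human-robot interaction literature, by comparing how long a robot can safely be neglected with how long an operator must interact with it. The minimal sketch below (Python) illustrates that back-of-the-envelope estimate; the function name and the approximation FO ≈ (neglect time + interaction time) / interaction time are illustrative assumptions, not the specific model proposed in this paper.

    # Illustrative sketch (not this paper's model): a commonly cited fan-out
    # approximation treats the number of robots one operator can manage as
    # roughly (neglect time + interaction time) / interaction time.

    def estimate_fan_out(neglect_time_s: float, interaction_time_s: float) -> float:
        """Estimate how many robots a single operator can service.

        neglect_time_s: average time a robot performs acceptably while ignored.
        interaction_time_s: average time the operator needs to re-task a robot.
        """
        if interaction_time_s <= 0:
            raise ValueError("interaction time must be positive")
        return (neglect_time_s + interaction_time_s) / interaction_time_s

    if __name__ == "__main__":
        # Example: a robot can be neglected for 60 s and needs 15 s of attention.
        print(estimate_fan_out(60.0, 15.0))  # -> 5.0 robots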
Human-robot interaction is becoming an increasingly important research area. In this paper, we present our work on designing a human-robot system with adjustable autonomy and describe not only the prototype interface but also the corresponding robot behaviors. In our approach, we grant the human meta-level control over the level of robot autonomy, but we …
Human-robot interaction is becoming an increasingly important research area. In this paper, we present a theoretical characterization of interaction efficiency with an eye towards designing a human-robot system with adjustable robot autonomy. In our approach, we analyze how modifying robot control schemes for a given autonomy mode can increase system …
With reduced radar signatures, increased endurance, and the removal of humans from immediate threat, uninhabited (also known as unmanned) aerial vehicles (UAVs) have become indispensable assets to military forces. UAVs require human guidance to varying degrees, often through several operators. However, with the current military focus on streamlining …
Learning algorithms often obtain relatively low average payoffs in repeated general-sum games against other learning agents due to a focus on myopic best-response and one-shot Nash equilibrium (NE) strategies. A less myopic approach focuses on NEs of the repeated game, which suggests that (at the least) a learning agent should possess two properties. …
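The gap between one-shot NE play and repeated-game strategies is easy to see in the iterated prisoner's dilemma. The sketch below (Python, illustrative only, not the algorithm developed in the paper) compares two always-defect players, who follow the one-shot NE, with two tit-for-tat players, who sustain cooperation and earn a higher average payoff per round.

    # Minimal illustration: in the repeated prisoner's dilemma, two players who
    # follow the one-shot NE (always defect) earn less per round than two
    # reciprocal (tit-for-tat) players who sustain cooperation.

    PAYOFFS = {  # (my action, their action) -> my payoff; C = cooperate, D = defect
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    def play(strategy_a, strategy_b, rounds=100):
        """Return average per-round payoffs for two history-based strategies."""
        hist_a, hist_b, total_a, total_b = [], [], 0, 0
        for _ in range(rounds):
            a, b = strategy_a(hist_b), strategy_b(hist_a)
            total_a += PAYOFFS[(a, b)]
            total_b += PAYOFFS[(b, a)]
            hist_a.append(a)
            hist_b.append(b)
        return total_a / rounds, total_b / rounds

    always_defect = lambda opp_hist: "D"                              # one-shot NE strategy
    tit_for_tat = lambda opp_hist: opp_hist[-1] if opp_hist else "C"  # reciprocal strategy

    print(play(always_defect, always_defect))  # (1.0, 1.0) -> mutual defection
    print(play(tit_for_tat, tit_for_tat))      # (3.0, 3.0) -> sustained cooperation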
In this paper we develop a method for predicting the performance of human-robot teams consisting of a single user and multiple robots. To predict the performance of a team, we first measure the neglect tolerance and interface efficiency of the interaction schemes employed by the team. We then describe a method that shows how these measurements can be used(More)
We consider the problem of learning in repeated general-sum matrix games when a learning algorithm can observe the actions but not the payoffs of its associates. Due to the non-stationarity of the environment caused by learning associates in these games, most state-of-the-art algorithms perform poorly in some important repeated games due to an inability to …
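As a concrete picture of this setting (not of the algorithm developed in the paper), the sketch below shows a learner in a repeated 2x2 matrix game that observes its associate's actions and its own payoffs, but never the associate's payoffs, and best-responds to the empirical frequency of the observed actions, a fictitious-play-style baseline. The payoff matrix, names, and random stand-in associate are assumptions for illustration.

    import random
    from collections import Counter

    # Sketch of the setting only: the learner observes the associate's chosen
    # actions and its own payoffs, never the associate's payoff matrix, and
    # best-responds to the empirical action frequencies seen so far.

    MY_PAYOFFS = [[3, 0],   # my payoff matrix: MY_PAYOFFS[my_action][their_action]
                  [5, 1]]

    def best_response(opponent_counts, n_actions=2):
        """Pick my action maximizing expected payoff against observed frequencies."""
        total = sum(opponent_counts.values()) or 1
        expected = [
            sum(MY_PAYOFFS[a][b] * opponent_counts[b] / total for b in range(n_actions))
            for a in range(n_actions)
        ]
        return max(range(n_actions), key=lambda a: expected[a])

    def run(rounds=50):
        opponent_counts = Counter()
        for _ in range(rounds):
            my_action = best_response(opponent_counts)
            their_action = random.randrange(2)               # stand-in for the associate
            my_payoff = MY_PAYOFFS[my_action][their_action]  # own payoff is observed
            opponent_counts[their_action] += 1               # their payoff is never seen
        return opponent_counts

    print(run())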