Matthew Klenk

Dynamic changes in complex, real-time environments, such as modern video games, can violate an agent's expectations. We describe a system that responds competently to such violations by changing its own goals, using an algorithm based on a conceptual model for goal-driven autonomy. We describe this model, clarify when such behavior is beneficial, and …
Understanding common sense reasoning about the physical world is one of the goals of qualitative reasoning research. This paper describes how we combine qualitative mechanics and analogy to solve everyday physical reasoning problems posed as sketches. The problems are drawn from the Bennett Mechanical Comprehension Test, which is used to evaluate technician …
Planning in dynamic continuous environments requires reasoning about nonlinear continuous effects, which previous Hierarchical Task Network (HTN) planners do not support. In this paper, we extend an existing HTN planner with a new state projection algorithm. To our knowledge, this is the first HTN planner that can reason about nonlinear continuous effects. …
Transfer learning is the ability of an agent to apply knowledge learned in previous tasks to new problems or domains. We approach this problem by focusing on model formulation, i.e., how to move from the unruly, broad set of concepts used in everyday life to a concise, formal vocabulary of abstractions that can be used effectively for problem solving. This …
We present a cognitively motivated model of moral decision-making, MoralDM, which models psychological findings about utilitarian and deontological modes of reasoning. Current theories of moral decision-making extend beyond pure utilitarian models by including contextual factors that vary culturally. Our model employs both first-principles reasoning and …
To operate autonomously in complex environments, an agent must monitor its environment and determine how to respond to new situations. To be considered intelligent, an agent should select actions in pursuit of its goals, and adapt accordingly when its goals need revision. However, most agents assume that their goals are given to them; they cannot recognize …
The Companion cognitive architecture supports experiments in achieving human-level intelligence. This article describes seven key design goals of Companions, relating them to properties of human reasoning and learning, and to engineering concerns raised by attempting to build large-scale cognitive systems. We summarize our experiences with Companions in two …
We present a computational model, MoralDM, which integrates several AI techniques to model recent psychological findings on moral decision-making. Current theories of moral decision-making extend beyond pure utilitarian models by relying on contextual factors that vary with culture. MoralDM uses a natural language system to produce formal …