This paper describes an architecture for enabling robust autonomous decision making and task execution. A key feature of the architecture is that agent behavior is constrained by sets of agent societal laws similar to Asimov's laws of robotics. In accordance with embedded philosophical principles, agents use decision theory in their negotiations to evaluate the expected utility of proposed actions and uses of resources. The result is planning and task execution that is dynamic, rational, distributed, and trustworthy, and that occurs at multiple levels of granularity. We report on our initial investigations of agent architectures that embody philosophical and social layers. Our investigations have included the effect of misinformation among cooperative agents in worth-oriented domains, and active countermeasures for dealing with that misinformation. We examine the agents' use of philosophical principles for mission preeminence and rational progress towards goals.
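As a minimal sketch of the decision-theoretic evaluation the abstract mentions, the snippet below scores each proposed action by its expected utility (the probability-weighted sum of outcome utilities) and selects the best one. The action names, outcome probabilities, and utilities are hypothetical illustrations, not drawn from the paper's architecture.

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one proposed action."""
    return sum(p * u for p, u in outcomes)

def choose_action(proposals):
    """proposals: dict mapping a proposed action to its (probability, utility) outcomes.
    Returns the action with the highest expected utility."""
    return max(proposals, key=lambda a: expected_utility(proposals[a]))

# Hypothetical proposals an agent might weigh during negotiation.
proposals = {
    "deliver_cargo": [(0.8, 10.0), (0.2, -5.0)],    # likely success, some risk
    "recharge_first": [(0.95, 6.0), (0.05, -1.0)],  # safer, lower payoff
}
print(choose_action(proposals))  # -> deliver_cargo (EU 7.0 vs 5.65)
```

In a multi-agent setting, each agent would compute such scores for proposals received during negotiation and accept or counter accordingly.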