Yusen Zhan

When systems scale to hundreds or thousands of agents, it becomes increasingly difficult for agents to observe their environment and to coordinate during decision making. This added complexity frequently degrades system performance as the number of agents grows. We address this by introducing the …
In this paper, we study the computational complexity of solution concepts in the context of coalitional games. First, we distinguish two different kinds of core, the undominated core and the excess core, and investigate the difference and relationship between them. Second, we thoroughly investigate the computational complexity of the undominated core and …
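For reference, both variants are built on the classical core of a transferable-utility coalitional game $(N, v)$; the notation below is the textbook convention and is assumed here, not drawn from the paper's own definitions:
\[
\mathrm{Core}(N, v) \;=\; \Bigl\{\, x \in \mathbb{R}^{N} \;\Bigm|\; \sum_{i \in N} x_i = v(N) \ \text{ and } \ \sum_{i \in S} x_i \ge v(S) \ \text{ for all } S \subseteq N \,\Bigr\},
\]
where the excess $e(S, x) = v(S) - \sum_{i \in S} x_i$ of a coalition $S$ at a payoff vector $x$ is the quantity underlying the excess-based variant, while the undominated core is defined via domination between imputations.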
This paper extends our existing teacher-student framework to allow a knowledgeable agent to teach human students. An agent teacher instructs a human student by suggesting actions the student should take as the student learns. Previous algorithms for agents teaching other agents are extended into several new algorithms for agents teaching humans. …
Policy advice is a transfer learning method in which a student agent learns faster by receiving advice from a teacher. However, this and other reinforcement learning transfer methods have received little theoretical analysis. This paper formally defines a setting in which multiple teacher agents can provide advice to a student and introduces an algorithm to leverage …
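Purely as an illustration of the setting (this is not the algorithm introduced in the paper), the sketch below shows a student consuming action advice from several teachers, each with its own advice budget; every class and function name here is hypothetical.

# Minimal illustrative sketch of a student taking action advice from several
# teachers under per-teacher budgets. All names are hypothetical; this is
# NOT the paper's algorithm, only a picture of the policy-advice setting.
import random

class Teacher:
    def __init__(self, policy, budget):
        self.policy = policy      # callable: state -> suggested action
        self.budget = budget      # remaining pieces of advice

    def advise(self, state):
        """Give advice only while budget remains."""
        if self.budget <= 0:
            return None
        self.budget -= 1
        return self.policy(state)

def student_action(state, q_values, actions, teachers, epsilon=0.1):
    """Follow a majority vote over available teacher advice; otherwise act
    epsilon-greedily from the student's own Q-values."""
    advice = [a for a in (t.advise(state) for t in teachers) if a is not None]
    if advice:
        return max(set(advice), key=advice.count)
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q_values.get((state, a), 0.0))

# Example usage with two hand-coded teacher policies on a toy state space.
teachers = [Teacher(lambda s: "left", budget=3), Teacher(lambda s: "right", budget=5)]
print(student_action(0, q_values={}, actions=["left", "right"], teachers=teachers))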
This paper proposes an online transfer framework to capture the interaction among agents and shows that current transfer learning in reinforcement learning is a special case of online transfer. Furthermore, it re-characterizes existing agents-teaching-agents methods as online transfer and analyzes one such teaching method in three ways. First, the …
Interactions in multiagent systems are generally more complicated than single-agent ones. Game theory prescribes how to act in multiagent scenarios; however, it assumes that all agents will act rationally. Moreover, some works also assume the opponent will use a stationary strategy. These assumptions usually do not hold in real-world scenarios …
The success or failure of any learning algorithm is partially due to the exploration strategy it employs. However, most exploration strategies assume that the environment is stationary and non-strategic. In this work we shed light on how to design exploration strategies in non-stationary and adversarial environments. Our proposed adversarial drift …
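As background only (the adversarial drift strategy proposed in the paper is not shown here), the sketch below is a generic sliding-window epsilon-greedy bandit, one common baseline that keeps exploring when the environment is non-stationary by discarding stale reward estimates; all names are hypothetical.

# Generic sliding-window epsilon-greedy bandit for non-stationary settings.
# Background illustration only, NOT the paper's adversarial drift strategy.
import random
from collections import deque

class SlidingWindowEpsilonGreedy:
    def __init__(self, n_arms, window=100, epsilon=0.1):
        self.epsilon = epsilon
        # Keep only the most recent rewards per arm so old observations
        # stop influencing the value estimates once the environment drifts.
        self.history = [deque(maxlen=window) for _ in range(n_arms)]

    def select(self):
        """Explore with probability epsilon; otherwise pick the arm with the
        best windowed mean (unpulled arms are tried first)."""
        if random.random() < self.epsilon:
            return random.randrange(len(self.history))
        means = [sum(h) / len(h) if h else float("inf") for h in self.history]
        return max(range(len(means)), key=means.__getitem__)

    def update(self, arm, reward):
        self.history[arm].append(reward)

# Example usage on a toy two-armed problem.
bandit = SlidingWindowEpsilonGreedy(n_arms=2)
arm = bandit.select()
bandit.update(arm, reward=1.0)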
Coalition formation has been investigated from many angles, and increasing attention has recently been paid to overlapping coalition formation. The (optimal) coalition structure generation (CSG) problem is one of the essential problems in coalition formation, an important topic of cooperation in multiagent systems. In this paper, we …
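To make the CSG problem concrete, a brute-force sketch under assumed, hypothetical names: enumerate every partition of the agent set and return the coalition structure with the highest total value for a given characteristic function. This is exponential and illustrative only, not a method from the paper.

# Brute-force optimal coalition structure generation (CSG) for a small agent
# set: enumerate every partition and keep the most valuable one.
def partitions(agents):
    """Yield all set partitions of the agent list."""
    if not agents:
        yield []
        return
    first, rest = agents[0], agents[1:]
    for smaller in partitions(rest):
        # Put `first` into each existing coalition in turn...
        for i, coalition in enumerate(smaller):
            yield smaller[:i] + [[first] + coalition] + smaller[i + 1:]
        # ...or into a coalition of its own.
        yield [[first]] + smaller

def optimal_structure(agents, value):
    """Return the partition maximizing the sum of coalition values."""
    return max(partitions(agents),
               key=lambda cs: sum(value(frozenset(c)) for c in cs))

# Toy characteristic function: a coalition is worth the square of its size,
# so the grand coalition is optimal here.
v = lambda coalition: len(coalition) ** 2
print(optimal_structure(["a", "b", "c"], v))   # -> [['a', 'b', 'c']]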