Brian Swenson

The paper is concerned with distributed learning in large-scale games. It addresses the well-known fictitious play (FP) algorithm, which, despite theoretical convergence results, can be impractical to implement in large-scale settings due to intensive computation and communication requirements. An adaptation of the FP algorithm, designated as the empirical …
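As background for the entries below, the classical (centralized) FP update can be sketched as follows: each player best-responds to the empirical frequency of the opponent's past actions. The bimatrix game, uniform prior, and round count are illustrative assumptions, not taken from the papers.

```python
import numpy as np

def fictitious_play(A, B, rounds=2000):
    """Classical two-player fictitious play on a bimatrix game.

    A: row player's payoff matrix, B: column player's payoff matrix.
    Each round, both players best-respond to the empirical mixed
    strategy (action frequencies) of the other player's past play.
    """
    n, m = A.shape
    counts1 = np.ones(n)  # player 1's action counts (uniform prior, an assumption)
    counts2 = np.ones(m)  # player 2's action counts
    for _ in range(rounds):
        emp1 = counts1 / counts1.sum()  # empirical mixed strategy of player 1
        emp2 = counts2 / counts2.sum()  # empirical mixed strategy of player 2
        a1 = int(np.argmax(A @ emp2))   # player 1 best-responds to emp2
        a2 = int(np.argmax(emp1 @ B))   # player 2 best-responds to emp1
        counts1[a1] += 1
        counts2[a2] += 1
    return counts1 / counts1.sum(), counts2 / counts2.sum()

# Matching pennies (zero-sum), where FP's empirical frequencies are known
# to converge to the unique mixed equilibrium (1/2, 1/2).
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
p1, p2 = fictitious_play(A, -A)
```

Note the scaling problem the papers target: in an n-player game, this centralized update requires every player to observe every other player's action each round, which is exactly the computation and communication burden that variants such as ECFP and Sampled FP are designed to reduce.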
The paper concerns the development of distributed equilibrium-learning strategies in large-scale multi-agent games with repeated play. With inter-agent information exchange restricted to a preassigned communication graph, the paper presents a modified version of the fictitious play algorithm that relies only on local neighborhood information exchange …
The paper is concerned with learning in large-scale multi-agent games. The empirical centroid fictitious play (ECFP) algorithm is a variant of the well-known fictitious play algorithm that is practical and computationally tractable in large-scale games. ECFP has been shown to be an effective tool for learning consensus equilibria (a subset of the Nash …
The paper is concerned with distributed learning and optimization in large-scale settings. The well-known Fictitious Play (FP) algorithm has been shown to achieve Nash equilibrium learning in certain classes of multi-agent games. However, FP can be computationally difficult to implement when the number of players is large. Sampled FP is a variant of FP that …
The paper studies the highly prototypical Fictitious Play (FP) algorithm, as well as a broad class of learning processes based on best-response dynamics, that we refer to as FP-type algorithms. A well-known shortcoming of FP is that, while players may learn an equilibrium strategy in some abstract sense, there are no guarantees that the period-by-period …
Empirical Centroid Fictitious Play (ECFP) is a generalization of the well-known Fictitious Play (FP) algorithm designed for implementation in large-scale games. In ECFP, the set of players is subdivided into equivalence classes with players in the same class possessing similar properties. Players choose a next-stage action by tracking and responding to …
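The centroid step that gives ECFP its tractability can be illustrated with a small sketch: rather than tracking every opponent's empirical distribution individually, a player tracks one averaged (centroid) distribution per equivalence class and best-responds to that. The number of players, action set, empirical distributions, and payoff matrix `U` below are all illustrative assumptions.

```python
import numpy as np

def empirical_centroid(empirical_dists):
    """Average per-player empirical action distributions into one centroid.

    empirical_dists: array of shape (players, actions), one row per player
    in the equivalence class. Returns a single distribution over actions.
    """
    return np.mean(empirical_dists, axis=0)

# Four players in one class, three actions; rows are each player's
# empirical action frequencies (illustrative numbers).
dists = np.array([
    [0.6, 0.3, 0.1],
    [0.5, 0.4, 0.1],
    [0.7, 0.2, 0.1],
    [0.6, 0.2, 0.2],
])
centroid = empirical_centroid(dists)  # one 3-vector, regardless of class size

# Best response against the centroid via an assumed payoff matrix:
# U[a, b] = payoff for playing action a when a representative class
# member plays action b.
U = np.array([[1.0, 0.0, 0.5],
              [0.2, 0.8, 0.4],
              [0.3, 0.3, 0.9]])
best_response = int(np.argmax(U @ centroid))
```

The point of the construction is that the per-round bookkeeping scales with the number of equivalence classes rather than the number of players, which is what makes the algorithm practical in large-scale games.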
Learning processes that converge to mixed-strategy equilibria often exhibit learning only in a weak sense, in that the time-averaged empirical distribution of players' actions converges to a set of equilibria. A stronger notion of learning mixed equilibria is to require that players' period-by-period strategies converge to a set of equilibria. A simple and …
The paper studies algorithms for learning pure-strategy Nash equilibria (NE) in networked multi-agent systems with uncertainty. In many such real-world systems, information is naturally distributed among agents and must be disseminated using a sparse inter-agent communication infrastructure. The paper considers a scenario in which (i) each agent may …