Authors: Ardeshir Kianercy, Aram Galstyan
ArXiv: 1308.1049
Abstract URL: http://arxiv.org/abs/1308.1049v1
This paper presents a model of network formation in repeated games where the
players adapt their strategies and network ties simultaneously using a simple
reinforcement-learning scheme. We show that the coevolutionary
dynamics of such systems can be described by coupled replicator equations. We
provide a comprehensive analysis for three-player two-action games, which is
the minimum system size with nontrivial structural dynamics. In particular, we
characterize the Nash equilibria (NE) in such games and examine the local
stability of the rest points corresponding to those equilibria. We also study
general n-player networks via both simulations and analytical methods and find
that in the absence of exploration, the stable equilibria consist of star
motifs as the main building blocks of the network. Furthermore, in all stable
equilibria the agents play pure strategies, even when the game allows mixed NE.
Finally, we study the impact of exploration on learning outcomes, and observe
that there is a critical exploration rate above which the symmetric and
uniformly connected network topology becomes stable.
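The learning dynamics the abstract refers to can be illustrated with the standard Boltzmann Q-learning replicator equations for a single two-player, two-action game: each action probability changes according to a selection term (relative payoff) plus an exploration term weighted by a rate T. The sketch below is a minimal, hedged illustration of that general form only; the payoff matrix, initial conditions, step size, and iteration count are illustrative assumptions, and the paper's full model additionally couples these equations with link-selection dynamics across n players.

```python
import numpy as np

def replicator_with_exploration(A, B, x0, y0, T=0.1, dt=0.01, steps=20000):
    """Euler-integrate Boltzmann Q-learning replicator dynamics for a
    two-player, two-action game (a simplified, single-edge sketch; the
    paper's model also evolves the network ties themselves).

    dx_i/dt = x_i * [ (A y)_i - x.(A y) + T * (sum_j x_j ln x_j - ln x_i) ]
    and symmetrically for y, where T is the exploration rate.
    """
    x, y = np.array(x0, float), np.array(y0, float)
    for _ in range(steps):
        fx = A @ y        # expected payoff of each row-player action
        fy = B.T @ x      # expected payoff of each column-player action
        # selection term plus entropy-driven exploration term
        dx = x * (fx - x @ fx + T * (x @ np.log(x) - np.log(x)))
        dy = y * (fy - y @ fy + T * (y @ np.log(y) - np.log(y)))
        # Euler step; clip and renormalize to stay on the simplex
        x = np.clip(x + dt * dx, 1e-12, None); x /= x.sum()
        y = np.clip(y + dt * dy, 1e-12, None); y /= y.sum()
    return x, y

# Illustrative coordination game (an assumed example, not from the paper):
A = np.array([[1.0, 0.0],
              [0.0, 1.0]])
x_lo, y_lo = replicator_with_exploration(A, A, [0.6, 0.4], [0.6, 0.4], T=0.05)
x_hi, y_hi = replicator_with_exploration(A, A, [0.6, 0.4], [0.6, 0.4], T=1.0)
print(x_lo, x_hi)  # low T: near-pure strategy; high T: near-uniform mixing
```

In this toy setting the same qualitative transition the abstract reports appears: below a critical exploration rate the dynamics settle on a (near-)pure strategy, while above it the symmetric mixed point becomes the attractor.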