Authors: Hao Tang, Dan Xu, Wei Wang, Yan Yan, Nicu Sebe
ArXiv: 1901.04604
Abstract URL: http://arxiv.org/abs/1901.04604v1
State-of-the-art methods for image-to-image translation with Generative
Adversarial Networks (GANs) can learn a mapping from one domain to another
domain using unpaired image data. However, these methods require the training
of one specific model for every pair of image domains, which limits the
scalability in dealing with more than two image domains. In addition, the
training stage of these methods commonly suffers from mode collapse, which
degrades the quality of the generated images. To tackle these issues, we
propose a Dual Generator Generative Adversarial Network (G$^2$GAN), which is a
robust and scalable approach that performs unpaired image-to-image
translation for multiple domains using only dual generators within a single
model. Moreover, we explore different optimization losses for better training
of G$^2$GAN, thereby achieving unpaired image-to-image translation with higher
consistency and better stability. Extensive experiments on six publicly
available datasets covering different scenarios, i.e., architectural buildings,
seasons, landscapes, and human faces, demonstrate that the proposed G$^2$GAN
achieves superior model capacity and better generation performance compared
with existing image-to-image translation GAN models.
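To make the dual-generator idea concrete, the following is a minimal, hypothetical PyTorch-style sketch of the scheme the abstract describes: a single pair of generators shared across all domains, where one generator translates an image to a target domain given a domain label and the other maps the result back to the source domain under a cycle-consistency term. The layer sizes, one-hot domain conditioning, and loss shown here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a dual-generator, multi-domain translation setup
# (assumed details: tiny conv encoder-decoder, one-hot domain conditioning,
# L1 cycle-consistency loss). Not the authors' G^2GAN architecture.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Tiny conv encoder-decoder conditioned on a one-hot domain label."""
    def __init__(self, num_domains: int, base_channels: int = 32):
        super().__init__()
        in_channels = 3 + num_domains  # RGB image + tiled domain label
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, base_channels, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(base_channels, base_channels * 2, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base_channels * 2, base_channels, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base_channels, 3, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, x: torch.Tensor, domain: torch.Tensor) -> torch.Tensor:
        # Tile the one-hot domain label spatially, then concatenate with the image.
        label_map = domain[:, :, None, None].expand(-1, -1, x.size(2), x.size(3))
        return self.net(torch.cat([x, label_map], dim=1))

# Only two generators are shared across all domain pairs: one for translation
# to a target domain, one for mapping the translation back to its source domain.
num_domains = 4
G_translate = Generator(num_domains)
G_reconstruct = Generator(num_domains)

x = torch.randn(2, 3, 64, 64)                        # source images
src = torch.eye(num_domains)[torch.tensor([0, 1])]   # source-domain labels
tgt = torch.eye(num_domains)[torch.tensor([2, 3])]   # target-domain labels

fake = G_translate(x, tgt)         # translate to the target domain
recon = G_reconstruct(fake, src)   # map back to the source domain
cycle_loss = torch.nn.functional.l1_loss(recon, x)   # cycle-consistency term
```

Because the generators take the domain label as an input rather than being instantiated per domain pair, adding a new domain only extends the label vector instead of requiring a new model, which is the scalability benefit the abstract emphasizes.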