Unsupervised Person Image Synthesis in Arbitrary Poses

[Figure: person-synthesis — teaser image]

Given an original image of a person (left) and a desired body pose defined by a 2D skeleton (bottom row), our model generates new photo-realistic images of the person under that pose (top row). The main contribution of our work is that this generative model is trained with unlabeled data.

Method

[Figure: person-synthesis — model overview]

The proposed model consists of four main components: (1) A generator $G(I|\mathbf{p})$ that acts as a differentiable renderer, mapping an input image of a person under a specific pose to an output image of the same person under a different pose. Note that $G$ is used twice in our network: first to map the input image $I_{p_o}\rightarrow I_{p_f}$, and then to render the latter back to the original pose, $I_{p_f}\rightarrow \hat{I}_{p_o}$; (2) A regressor $\Phi$ responsible for estimating the 2D joint locations of a given image; (3) A discriminator $D_{\text{I}}(I)$ that seeks to discriminate between generated and real samples; (4) A loss function, computed without ground truth, that aims to preserve the person's identity. For this purpose, we devise a novel loss function that enforces semantic content similarity between $I_{p_o}$ and $\hat{I}_{p_o}$, and style similarity between $I_{p_o}$ and $I_{p_f}$.
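The cycle structure above can be sketched in a few lines. This is a toy, self-contained illustration of the data flow only: `G`, `Phi`, and `D_I` below are hypothetical stand-ins (simple NumPy functions, not the deep networks of the paper), and the content/style/adversarial terms are simplified proxies for the actual losses, which in the paper are computed on deep features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the paper's networks (toy functions only).
def G(image, pose):
    """Generator: renders `image` under the target `pose` (toy blend)."""
    return 0.9 * image + 0.1 * pose.mean()

def Phi(image):
    """Pose regressor: estimates 14 2D joints (toy projection)."""
    return image.reshape(-1)[:28].reshape(14, 2)

def D_I(image):
    """Discriminator: realism score in (0, 1) (toy sigmoid of the mean)."""
    return 1.0 / (1.0 + np.exp(-image.mean()))

I_po = rng.random((16, 16))   # input image under the original pose p_o
p_f = rng.random((14, 2))     # desired target pose (2D skeleton)

# The same generator G is applied twice (render, then render back):
I_pf = G(I_po, p_f)           # I_{p_o} -> I_{p_f}
p_o_hat = Phi(I_po)           # estimated original pose
I_po_hat = G(I_pf, p_o_hat)   # I_{p_f} -> \hat{I}_{p_o}

# Unsupervised identity loss: no ground-truth image under p_f is needed.
content_loss = np.mean((I_po - I_po_hat) ** 2)  # content: I_po vs. I_po_hat
style_loss = abs(I_po.std() - I_pf.std())       # style proxy: I_po vs. I_pf
adv_loss = -np.log(D_I(I_pf) + 1e-8)            # fool the discriminator
pose_loss = np.mean((Phi(I_pf) - p_f) ** 2)     # rendered image must show p_f

total = content_loss + style_loss + adv_loss + pose_loss
```

The key point the sketch captures is that supervision comes from the round trip (comparing $I_{p_o}$ with $\hat{I}_{p_o}$) plus adversarial and pose-consistency terms, rather than from a paired ground-truth image under the target pose.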

Results

[Figure: person-synthesis — qualitative results]

BibTeX

@inproceedings{pumarola2018unsupervised,
    title={{Unsupervised Person Image Synthesis in Arbitrary Poses}},
    author={A. Pumarola and A. Agudo and A. Sanfeliu and F. Moreno-Noguer},
    booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    year={2018}
}

Publications

2018

  • Unsupervised Person Image Synthesis in Arbitrary Poses
    • A. Pumarola, A. Agudo, A. Sanfeliu and F. Moreno-Noguer
    • Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
    • Spotlight

Acknowledgments

This work has been partially supported by the Spanish Ministry of Science and Innovation under projects HuMoUR TIN2017-90086-R and ColRobTransp DPI2016-78957; by the European project AEROARMS (H2020-ICT-2014-1-644271); by a Google faculty award; and by the Spanish State Research Agency through the María de Maeztu Seal of Excellence to IRI MDM-2016-0656.