
RESEARCH PRODUCT

Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks

Michael Wand, Chuan Li

subject

Pixel, Artificial neural network, Computer science, Image processing and computer vision, Markov process, Software engineering, Pattern recognition, Texture, Margin (machine learning), Feature (machine learning), Artificial intelligence & image processing, Deconvolution, Artificial intelligence, Texture synthesis

description

This paper proposes Markovian Generative Adversarial Networks (MGANs), a method for training generative networks for efficient texture synthesis. While deep neural network approaches have recently demonstrated remarkable results in terms of synthesis quality, they still come at considerable computational cost (minutes of run-time for low-resolution images). Our paper addresses this efficiency issue. Instead of the numerical deconvolution used in previous work, we precompute a feed-forward, strided convolutional network that captures the feature statistics of Markovian patches and can directly generate outputs of arbitrary dimensions. Such a network can directly decode brown noise to realistic texture, or photos to artistic paintings. With adversarial training, we obtain quality comparable to recent neural texture synthesis methods. As no optimization is required at generation time, our run-time performance (0.25-megapixel images at 25 Hz) surpasses previous neural texture synthesizers by a significant margin (at least 500 times faster). We apply this idea to texture synthesis, style transfer, and video stylization.
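To make the described architecture concrete, the following is a minimal PyTorch-style sketch of the two ingredients the abstract names: a fully convolutional, feed-forward generator built only from strided (de)convolutions, so it can decode noise or photos of arbitrary spatial size in a single forward pass, and a patch-level ("Markovian") discriminator that scores local patches rather than whole images. All layer counts, channel widths, and class names here are illustrative assumptions, not the authors' released architecture.

```python
# Hedged sketch of an MGAN-style generator/discriminator pair (illustrative only).
import torch
import torch.nn as nn

class TextureGenerator(nn.Module):
    """Fully convolutional generator: strided convs down, strided transposed convs up."""
    def __init__(self, in_channels=3, base=64):
        super().__init__()
        self.net = nn.Sequential(
            # Encode the input (e.g. brown noise or a photo) with strided convolutions.
            nn.Conv2d(in_channels, base, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(base),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(base * 2),
            nn.LeakyReLU(0.2, inplace=True),
            # Decode back to image resolution with strided transposed convolutions.
            nn.ConvTranspose2d(base * 2, base, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(base),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, 3, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),  # output image in [-1, 1]
        )

    def forward(self, x):
        return self.net(x)

class PatchDiscriminator(nn.Module):
    """Markovian discriminator: its output is a grid of real/fake scores, one per patch."""
    def __init__(self, in_channels=3, base=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, base, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(base * 2),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 2, 1, kernel_size=4, stride=1, padding=1),  # per-patch score map
        )

    def forward(self, x):
        return self.net(x)

if __name__ == "__main__":
    gen = TextureGenerator()
    noise = torch.randn(1, 3, 256, 256)          # any spatial size divisible by 4 works
    synthesized = gen(noise)                      # (1, 3, 256, 256) texture image
    scores = PatchDiscriminator()(synthesized)    # (1, 1, H', W') patch-wise scores
    print(synthesized.shape, scores.shape)
```

Because both networks are purely convolutional, no optimization runs at synthesis time; a single forward pass through the generator produces the output, which is what enables the real-time performance the abstract reports.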

https://doi.org/10.1007/978-3-319-46487-9_43