Deep Terrains – Code and data

Here you will find everything you need to train and run our terrain synthesizer from the article Interactive Example-Based Terrain Authoring with Conditional Generative Adversarial Networks.


The code is an adaptation of a TensorFlow implementation of pix2pix. It is available here. Follow the original instructions from the pix2pix implementation to install, train, and test the code. Don't forget to use the option --png16bits, which enables the use of 16-bit PNG images.
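The 16-bit option matters because 8-bit PNGs only offer 256 elevation levels, which quantizes heightmaps visibly. Not part of the original instructions, but as an illustration, here is a minimal numpy sketch of the kind of [-1, 1] scaling a pix2pix-style network applies to 16-bit heightmaps (the helper names are hypothetical, not from the actual code):

```python
import numpy as np

# Hypothetical helpers illustrating 16-bit heightmap scaling.
# pix2pix-style networks work on values in [-1, 1]; uint16 PNGs
# store elevations in [0, 65535].

def to_network_range(height_u16):
    """Map uint16 elevations [0, 65535] to float32 in [-1, 1]."""
    return height_u16.astype(np.float32) / 32767.5 - 1.0

def to_uint16(height_net):
    """Map network output in [-1, 1] back to uint16 [0, 65535]."""
    scaled = np.rint((height_net + 1.0) * 32767.5)
    return np.clip(scaled, 0, 65535).astype(np.uint16)

# Round-tripping preserves elevations up to rounding.
h = np.array([[0, 1000], [32768, 65535]], dtype=np.uint16)
assert np.array_equal(to_uint16(to_network_range(h)), h)
```

With only 8-bit data, the same pipeline would collapse 256 elevations into one level, which is why --png16bits is required here.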


We only provide the training data for the sketch-based terrain synthesizer. Download the archive, uncompress it, and use the following command:

  python pix2pix.py --png16bits --mode train
      --output_dir multi_train --max_epochs 500
      --input_dir multi/train --which_direction AtoB

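pix2pix training images pair the input and the target side by side in a single file: with --which_direction AtoB, the left half (A) is the condition (the sketch) and the right half (B) is the target heightmap. As an illustration only (the function name is hypothetical, not part of the release), pairing two same-sized images can be sketched as:

```python
import numpy as np

def make_pair(sketch_u16, height_u16):
    """Concatenate A (sketch) and B (heightmap) side by side
    into a single AtoB training image, pix2pix-style."""
    assert sketch_u16.shape == height_u16.shape, "A and B must match"
    return np.concatenate([sketch_u16, height_u16], axis=1)

# A 256x256 sketch and its 256x256 heightmap become one 256x512 image.
a = np.zeros((256, 256), dtype=np.uint16)
b = np.full((256, 256), 65535, dtype=np.uint16)
pair = make_pair(a, b)
assert pair.shape == (256, 512)
```

The provided archive already contains images in this paired layout, so this step is only needed if you build your own training set.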

For those who don't have a Titan X, we also provide a pre-trained model that can be used directly. Download it, uncompress it, and use the following command:

  python pix2pix.py --png16bits --mode test
     --output_dir multi_test --input_dir multi/val
     --checkpoint multi_train

In this example, your input data should be placed in the multi/val directory.


This code and data are provided without any warranty. If you use them, please credit the article they were drawn from as well as the original pix2pix implementation.