Deep Terrains – Code and data

Here you will find everything you need to train and run our terrain synthesizer from the article Interactive Example-Based Terrain Authoring with Conditional Generative Adversarial Networks.

TensorFlow code

The code is an adaptation of a TensorFlow implementation of pix2pix. It is available here. Follow the original instructions from the pix2pix implementation to install, train, and test the code. Don't forget to use the --png16bits option, which enables 16-bit PNG images (8-bit PNGs would quantize elevation to only 256 levels).
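If your own images are 8-bit, here is a minimal conversion sketch, assuming OpenCV is available (the file names are placeholders, not part of the provided code):

# Hypothetical sketch: promote an 8-bit PNG to the 16-bit depth expected
# when running with --png16bits. File names are placeholders.
import cv2

img8 = cv2.imread('input_8bit.png', cv2.IMREAD_UNCHANGED)
img16 = img8.astype('uint16') * 257  # maps 0..255 onto 0..65535
cv2.imwrite('input_16bit.png', img16)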

Training data

We only provide the training data for the sketch-based terrain synthesizer. Download the archive, uncompress it, and use the following command:

python pix2pix.py --png16bits --mode train \
    --output_dir multi_train --max_epochs 500 \
    --input_dir multi/train --which_direction AtoB
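As a quick sanity check before training, every file in the uncompressed archive should be a 512×256, 3-channel, 16-bit paired image (pix2pix places input and target side by side). This is only a sketch, assuming OpenCV and that the archive uncompressed to multi/train:

# Hypothetical sanity check over the uncompressed training data.
import glob
import cv2

for path in glob.glob('multi/train/*.png'):
    img = cv2.imread(path, cv2.IMREAD_UNCHANGED)  # preserve the 16-bit depth
    assert img is not None, path
    assert img.dtype.name == 'uint16', (path, img.dtype)
    assert img.shape == (256, 512, 3), (path, img.shape)  # (height, width, channels)
print('all training pairs look OK')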

Pre-trained model

For those who don’t have a Titan X… we provide a pre-trained model that can be used directly. Download and uncompress it, then use the following command:

python pix2pix.py --png16bits --mode test \
    --output_dir multi_test --input_dir multi/val \
    --checkpoint multi_train

Your input data should be in the multi/val directory in this example.
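The input layout is described in the reader comment below: a 512×256 image with the sketch in the left 256×256 half (black background, red ridges, blue rivers, green elevation cues) and the target in the right half. Here is a minimal sketch for composing such an input, assuming OpenCV and NumPy; the stroke coordinates are arbitrary, and using a black placeholder for the right half is an assumption (the generator's output does not depend on it):

# Hypothetical sketch: compose a 512x256 paired image for multi/val.
import cv2
import numpy as np

SIZE = 256
sketch = np.zeros((SIZE, SIZE, 3), dtype=np.uint8)  # black background

# OpenCV stores channels as BGR: (blue, green, red).
cv2.line(sketch, (40, 200), (180, 60), (0, 0, 255), 3)   # red ridge stroke
cv2.line(sketch, (20, 40), (230, 120), (255, 0, 0), 3)   # blue river stroke
cv2.circle(sketch, (120, 120), 4, (0, 180, 0), -1)       # green elevation cue

# Right half: black placeholder for the unknown target.
pair = np.concatenate([sketch, np.zeros_like(sketch)], axis=1)

# Scale to the full 16-bit range and save where --input_dir will look.
cv2.imwrite('multi/val/example.png', pair.astype(np.uint16) * 257)

Note that the image is saved as plain, fully opaque 3-channel color, which also sidesteps the alpha-channel failures reported in the comment below.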

Important notes

This code and data are provided without any warranty. If you use them, please credit the article they are drawn from, as well as the original pix2pix implementation.

One response to “Deep Terrains – Code and data”

  1. Thanks for posting this code and pre-trained model! I think the use of GANs for terrain synthesis is quite clever, and it looks like you’re getting excellent results in the paper. I spent some time playing with your code, and here is an example result I generated: https://imgur.com/a/WhlkF

     For others who want to try it out, a detail that was tripping me up for a while was that I was making my source image have a white background as the paper shows, and this wasn’t working at all. The source image needs to be a 512×256 resolution .png file with a black background, red ridges, and blue rivers to ensure good results. You also have to be careful to use opaque colors; it seems like any time I used a ‘brush’ that utilized an alpha channel when drawing, the code errored out. The input belongs in the 256×256 square on the left. I don’t really know what I should set the ‘target’ image (the right 256×256 pixels) to when I just want to synthesize terrain… what did you do for the paper? Is it easy to just pull out the generator/synthesizer from the code and not use the discriminator?

     Also, it appears based on the training images you provided that the green channel of the image is how you implement the ‘altitude’ or ‘elevation cues’ mentioned in the paper, but I’m having a hard time discerning exactly how they work. Do brighter green dots indicate higher elevation, and darker green lower elevation? In Fig. 3 the paper refers to “Points of Interest”… does that mean there is more going on with the green dots (some kind of separate or special feature set you’re drawing from for those POIs), or are they merely elevation cues?

     Again, thanks so much for your time, and for posting the code and pre-trained model. It’s quite fun to play with!
