I began testing a few different methods of creating the landscape within the GauGAN program. I wanted the process to go back and forth between myself and the program if possible, as I thought this had created interesting results when I tried it previously, and it also makes for a more engaging collaboration with the AI.
First I tested feeding the generated image straight back into the machine with no input from myself.
This definitely produces some fascinating results; however, I found that the image often seems to lose definition over time and become quite nondescript.
Next I tried painting over the top of the image each time.
The results were more defined using this method.
And lastly, I tried repainting the scene from scratch each time.
This method seemed to produce the most unique result
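The three loops above can be sketched in code. GauGAN itself is an interactive tool with no public API, so `gaugan_generate` below is purely hypothetical: a stand-in that simulates one generation pass as a local average, which happens to mimic the loss of definition I saw with the first method. The second and third methods would correspond to supplying a `repaint` step between passes.

```python
import random
import statistics

def gaugan_generate(pixels):
    """Hypothetical stand-in for one GauGAN pass (the real tool is
    interactive and has no public API). Simulated as a 3-wide local
    average, which smooths detail away on each iteration."""
    out = []
    for i, p in enumerate(pixels):
        left = pixels[i - 1] if i > 0 else p
        right = pixels[i + 1] if i < len(pixels) - 1 else p
        out.append((left + p + right) / 3)
    return out

def feedback_loop(pixels, iterations, repaint=None):
    """Method 1 when repaint is None: feed the output straight back in.
    Methods 2 and 3 would pass a repaint function that edits the image
    (paints over it, or replaces it entirely) between passes."""
    for _ in range(iterations):
        pixels = gaugan_generate(pixels)
        if repaint is not None:
            pixels = repaint(pixels)
    return pixels

random.seed(0)
start = [random.random() for _ in range(64)]  # a noisy 1-D "image"
end = feedback_loop(start, iterations=10)

# The spread of pixel values shrinks each pass: the "nondescript" effect.
print(round(statistics.stdev(start), 3), round(statistics.stdev(end), 3))
```

Running this shows the standard deviation of the pixel values collapsing over the iterations, which is a rough analogue of the definition the landscapes lost when I fed them back in untouched.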
I would be curious to know how these would be interpreted had they been painted much more realistically, with a lot more time spent on each piece. That, however, could be quite a lengthy process.
Some other potential approaches to test:
- Photobashing – this is typically how concept art pieces are created, and it could yield a more realistic-looking piece
- Physical painting
- The program has the option for style filters – I could input some concept art into this to get more variance, so that the program isn't limited to making landscape images that work within our planet (i.e. it assumes grass must be green, sky must be blue, etc.)