Artificially Imagining Trees

To fill out the line, we developed a machine-learning system that generates new 2D digital trees for all 44k points along The Common Line. The idea here is to create “Artificially Imagined” trees for the immersive experiences, which will eventually be replaced by “Human Imagined” and then actual trees as the project runs – reclaiming the line from the digital systems and bringing it into reality.

To do this, we trained a version of the Pix2Pix Generative Adversarial Network (or GAN) on around 4k Creative Commons licensed images from Flickr tagged “tree”, so we get a big mix of images as input. You can take a look at how this works here: https://affinelayer.com/pixsrv/ (works best in Chrome) – it takes in a line drawing of edges and attempts to generate a new image from it.
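
If you're curious what preparing that training data can look like, here is a rough sketch – the folder names, image size and the use of OpenCV's Canny edge detector are illustrative assumptions rather than our exact pipeline:

```python
# A rough sketch of preparing edge/photo training pairs for Pix2Pix.
# The folder names, image size and the use of OpenCV's Canny detector
# are illustrative assumptions, not the project's exact pipeline.
import cv2
from pathlib import Path

src = Path("flickr_trees")        # ~4k Creative Commons images tagged "tree"
dst = Path("pix2pix_pairs")
dst.mkdir(exist_ok=True)

for photo_path in sorted(src.glob("*.jpg")):
    photo = cv2.resize(cv2.imread(str(photo_path)), (256, 256))
    gray = cv2.cvtColor(photo, cv2.COLOR_BGR2GRAY)
    edges = cv2.cvtColor(cv2.Canny(gray, 100, 200), cv2.COLOR_GRAY2BGR)
    # Pix2Pix trains on side-by-side pairs: input (edges) next to target (photo).
    cv2.imwrite(str(dst / photo_path.name), cv2.hconcat([edges, photo]))
```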

Example training images…

GANs use two neural networks, or AIs, to generate new images based on an input. The first AI is trained on a set of images to recognize something (for us – a tree), while the second attempts to generate a new image. The first AI then judges the second’s image, which is only accepted when the first AI thinks it’s been presented with something that looks like what it has learned about.

Imagine this conversation between two neural networks:

“I’ve made a tree!”
“That’s not a tree, try again”
Repeating until…
“Yeah, that looks like a tree to me”

And you understand the basics of how a GAN works.
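
In code, that back-and-forth looks roughly like the training loop below – a minimal, generic sketch in PyTorch rather than our actual Pix2Pix setup, with placeholder network shapes and hyperparameters:

```python
# A minimal, generic sketch of the generator/discriminator "conversation".
# The actual training used a Pix2Pix implementation, so the network shapes
# and hyperparameters here are placeholder assumptions.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 28 * 28), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def training_step(real_images):
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # "That's not a tree": the first AI learns to tell real images from generated ones.
    fake_images = generator(torch.randn(batch, 64))
    d_loss = (loss_fn(discriminator(real_images), real_labels)
              + loss_fn(discriminator(fake_images.detach()), fake_labels))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # "I've made a tree!": the second AI adjusts until its images get accepted.
    g_loss = loss_fn(discriminator(fake_images), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```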

Once that’s trained, we end up with a neural network which we can run separately. On a high-end PC with a powerful graphics card, training took just under two days of compute time. This is not something you can do on your laptop.

Next, we used a Processing script to generate 44k line drawings with an L-System algorithm to use as input for our trained GAN. Each line drawing starts from a random input, with the algorithm working recursively to generate trees based on fractal patterns: it divides a line again and again until it reaches a stopping point – we then draw that line into an image file, ready to feed into our trained neural network.
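
As a rough illustration of that recursive branching (sketched here in Python rather than Processing, with made-up angles, lengths and depths):

```python
# A sketch of recursive fractal branching in Python; the project's script was
# written in Processing, and these angles, lengths and depths are illustrative.
import math
import random
from PIL import Image, ImageDraw

def branch(draw, x, y, angle, length, depth):
    """Split a line into two shorter branches until a stopping point is reached."""
    if depth == 0 or length < 2:
        return
    x2 = x + length * math.cos(angle)
    y2 = y - length * math.sin(angle)
    draw.line([(x, y), (x2, y2)], fill="black", width=max(1, depth // 2))
    spread = random.uniform(0.3, 0.6)            # the random input per drawing
    branch(draw, x2, y2, angle - spread, length * 0.7, depth - 1)
    branch(draw, x2, y2, angle + spread, length * 0.7, depth - 1)

img = Image.new("RGB", (256, 256), "white")
branch(ImageDraw.Draw(img), 128, 250, math.pi / 2, 60, depth=8)
img.save("tree_input_00001.png")                 # one of the 44k input drawings
```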

Generating input images…

Eight hours later on our high-end PC, we end up with 44,000 new images filled with detail – but they come out at a very low resolution, which doesn’t really work for our immersive environment. To resolve this we use one more image-based network – Neural Enhance. This one is pre-trained, so we can use it right away. Neural Enhance takes any image and uses its own neural network to quadruple the resolution – effectively hallucinating the missing detail.
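
Pushing 44,000 images through it is essentially a batch job, something like the sketch below – though the exact Neural Enhance command line is an assumption, so check the flags your installed version expects:

```python
# A sketch of the batch upscaling step. The exact Neural Enhance invocation
# and flags are an assumption here -- check the project's README for the
# options your installed version expects.
import subprocess
from pathlib import Path

for image_path in sorted(Path("gan_output").glob("*.png")):
    subprocess.run(
        ["python3", "enhance.py", "--zoom=2", str(image_path)],  # assumed command line
        check=True,
    )
```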

A selection of resulting trees…

The result is our 44k “Artificially Imagined” trees – intentionally unrealistic and dreamlike. The mix of input images warped this further – under the Flickr tag there are pictures of people, buildings and planes mixed in with the trees. There is still more to do to get them looking right within the augmented reality experience, but with this, we can consider the line “complete” – there is now some form of content accessible at every point along the line.

Christopher Hunt, November 2018
