Circuit-bending neural network layers to produce ZALGO hellscapes.

Inceptionism: Going Deeper into Neural Networks

We train an artificial neural network by showing it millions of training examples and gradually adjusting the network parameters until it gives the classifications we want. The network typically consists of 10-30 stacked layers of artificial neurons. Each image is fed into the input layer, which then talks to the next layer, until eventually the "output" layer is reached. The network's "answer" comes from this final output layer.
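The "stacked layers" pipeline that quote describes is just function composition: image in, each layer transforms the previous layer's output, final layer emits class scores. A toy sketch with random weights (layer sizes and names here are made up for illustration, not Google's actual network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy stack: a 64-pixel "image" in, 10 class scores out.
layer_sizes = [64, 32, 16, 10]
weights = [rng.normal(0, 0.1, (m, n))
           for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(image):
    """Feed the input through each layer in turn; the final
    layer's activations are the network's 'answer'."""
    x = image
    for i, w in enumerate(weights):
        x = x @ w
        if i < len(weights) - 1:
            x = np.maximum(x, 0)  # ReLU nonlinearity on hidden layers
    e = np.exp(x - x.max())       # softmax: scores -> probabilities
    return e / e.sum()

probs = forward(rng.normal(size=64))
```

A real classifier would learn `weights` by gradient descent over those millions of examples; the forward pass itself is this simple.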

So then they reach into one of the layers and spin the knob randomly to fuck it up. Lower layers are edges and curves. Higher layers are faces, eyes and shoggoth ovipositors.

So here's one surprise: neural networks that were trained to discriminate between different kinds of images have quite a bit of the information needed to generate images too.

That part, they kinda gloss over...

But the best part is not when they just glitch an image -- which is a fun kind of embossing at one end, and the "extra eyes" filter at the other -- but is when they take a net trained on some particular set of objects and feed it static, then zoom in, and feed the output back in repeatedly. That's when you converge upon the platonic ideal of those objects, which -- it turns out -- tend to be Giger nightmare landscapes. Who knew. (I knew.)
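The feed-static-zoom-repeat loop is just iterated gradient ascent on a layer's activations, with a center-crop-and-rescale between steps. A stand-in sketch, assuming a toy differentiable objective in place of a trained net (`layer_response` and everything else here is hypothetical, not the actual DeepDream code):

```python
import numpy as np

rng = np.random.default_rng(1)

def layer_response(img):
    # Hypothetical stand-in for a real layer's mean activation;
    # the real thing uses a trained convnet and backprop.
    return np.sin(img).mean()

def zoom(img, factor=1.05):
    # Crop the center, then stretch it back to full size
    # (nearest-neighbor resampling).
    n = img.shape[0]
    m = int(n / factor)
    start = (n - m) // 2
    crop = img[start:start + m, start:start + m]
    idx = np.arange(n) * m // n
    return crop[np.ix_(idx, idx)]

img = rng.normal(size=(32, 32))        # "feed it static"
for _ in range(100):
    img = img + 0.1 * np.cos(img)      # gradient ascent on layer_response
    img = zoom(img)                    # "then zoom in"
# After enough rounds the image converges toward whatever the
# layer responds to most strongly -- its "platonic ideal".
```

With a real network the ascent amplifies whatever features that layer detects, which is why a net trained on animals turns noise into fractal dog-slug terrain.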

PS: Inception sucked.

Previously, previously, previously, previously.


12 Responses:

  1. brone says:

    Why do you not call out dogsaddle?

    Once you've had a good, long look at dogsaddle, and your brain is nicely warmed up, you might be ready for stage 2, maybe, after a few stiff drinks.

    Stage 2 pictures:



  2. crowding says:

    So Hieronymus Bosch works at a higher level than Van Gogh. Noted.

    Also, what are humans going to be good for if computers can even hallucinate better?

  3. Middle left: I'm pretty sure that's the house where Terrence McKenna's DMT elves live.

  4. It's not a surprise to anyone who has read the papers on deep learning. I haven't, but at least I've read the titles. The third paper on Hinton's website is called To recognize shapes, first learn to generate images.

  5. Owen W. says:

    The number of people on different reddit threads who say this is what LSD or DMT hallucinations look like is pretty compelling. Which makes me wonder if there's such a thing as a not-bad trip.

  6. zompist says:

    At the Laundry, they have dossiers of these images-- black & white because the British government can't afford color-- with appended instructions on how to call up the associated demon.

  7. crowding says:

    You realize that inside google there is almost certainly a CNN that's trained on dicks. Just, sees dicks everywhere, generates nightmare dickscapes.

  8. Dave says:

    This is what I expect the screensaver to look like:
    Large Scale Deep neural net (shout objects to dream about)

    From the QnA:
    Q: Can I haz source?
    A: In a couple of days, we'll share the source online.

    but oh the noes:
    Q: What do you need to run this?
    A: 1 GPU (we used alternatingly a 980, 680 or Tesla K40), 32 Gigabytes of RAM, a 6 core CPU

  • Previously