GMU:Artists Lab/Rachel Smith


Intercepting Neural Networks

Convolutional neural networks have a strong ability to recognise and describe images. They are behind Google’s ‘Deep Dream’ and ‘Style Transfer’, and are also used in practical applications such as driverless cars and text-to-speech. Images are abstracted through many ‘hidden’ layers of the network in order to extract high-level information (such as the presence of leaves, eyes, etc.) as efficiently as possible. When inspected by human eyes, these layers bear little or no resemblance to the original image and, in later stages, are reduced to completely abstract forms. What happens when these layers are reinterpreted in a human way? How will the network function with subjective intruders?

In this project I will intercept the hidden layers of an image-classifying neural network, extract several images, draw them myself, and reinsert them into the process.
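A minimal sketch of how this interception could look in code, assuming PyTorch and a pretrained VGG16 from torchvision; the cut point inside the network, the file names, and the simple rescaling of the redrawn channels are placeholder choices for illustration, not details fixed by the project itself.

import numpy as np
import torch
from PIL import Image
from torchvision import models, transforms

# Load a pretrained image classifier and switch it to evaluation mode.
vgg = models.vgg16(pretrained=True).eval()

# Standard ImageNet preprocessing for the input photograph.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

x = preprocess(Image.open("input.jpg").convert("RGB")).unsqueeze(0)

split = 10  # hypothetical cut point inside vgg.features

with torch.no_grad():
    # 1. Run the image through the early layers only.
    hidden = vgg.features[:split](x)

    # 2. Export every channel of the hidden layer as a grayscale image,
    #    so it can be printed out and redrawn by hand.
    for i, channel in enumerate(hidden[0]):
        arr = channel.numpy()
        arr = (arr - arr.min()) / (arr.max() - arr.min() + 1e-8)
        Image.fromarray((arr * 255).astype("uint8")).save(f"channel_{i:03d}.png")

    # 3. Load the (redrawn) channel images back into a tensor of the same
    #    shape and continue the forward pass through the remaining layers.
    #    The values are simply rescaled to [0, 1], which discards the
    #    original activation scale: part of the subjective intrusion.
    redrawn = torch.empty_like(hidden)
    h, w = hidden.shape[2], hidden.shape[3]
    for i in range(hidden.shape[1]):
        img = Image.open(f"channel_{i:03d}.png").convert("L").resize((w, h))
        redrawn[0, i] = torch.from_numpy(np.array(img, dtype="float32") / 255.0)

    rest = vgg.features[split:](redrawn)
    pooled = torch.flatten(vgg.avgpool(rest), 1)
    logits = vgg.classifier(pooled)

print("predicted class index:", logits.argmax(1).item())

In this sketch the hand-drawn channels replace the exported ones one for one, so the only change the network sees is whatever the redrawing introduces.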



[Images: reference works by Jason Yosinski and Gene Kogan]