Deep Dream: a Machine Dreamily Considers a row of DIMMs

Microway showed this Deep Dream Video at the NVIDIA GTC 2016 conference in San Jose, CA. A computer looks at a picture of computer memory and then "dreams" about what it thinks it sees in the picture.
A Google Research blog post from 2015 introduced dream-like, surreal visuals that could be generated by artificial neural networks trained for image classification. Here, similar dream-like imagery is created using the same methodology. Using the Berkeley Vision and Learning Center's Caffe neural network training software, a GoogLeNet Inception artificial neural network was trained on the 205 scene categories of the MIT Places 205 image set.
Dream-like imagery can be created from images consisting of pure noise, frosted glass, clouds, or walls with hanging vines, for example. Much as humans improvise visually and imagine "seeing" features such as faces in clouds, the methodology demonstrated here extracts similarly improvisational images from an artificial convolutional neural network trained for image classification.
Essentially, the method selects a particular feature layer of the network and reinforces whichever features are initially activated by the input image: the algorithm iteratively adjusts the image so those activations grow stronger, amplifying what the network "thinks" it is seeing.
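The amplification loop described above can be sketched in miniature. This is a toy illustration, not the Caffe/GoogLeNet implementation used for the video: a single hypothetical linear "feature layer" (weights W) stands in for the chosen network layer, and gradient ascent on the input strengthens whatever activations that layer already produces.

```python
import numpy as np

# Toy sketch of the Deep Dream idea, assuming a one-layer stand-in
# for the real network: activations a = W @ x. We do gradient ascent
# on the input x to maximize 0.5 * ||a||^2, i.e. we reinforce the
# features the layer is already activated by.

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))   # hypothetical feature-layer weights
x = rng.standard_normal(16) * 0.1  # the "input image" (here a vector)

def objective(x):
    a = W @ x
    return 0.5 * float(a @ a)      # how strongly the layer responds

step = 0.01
before = objective(x)
for _ in range(100):
    a = W @ x
    grad = W.T @ a                 # gradient of 0.5*||W x||^2 w.r.t. x
    x = x + step * grad            # ascend: amplify existing activations
after = objective(x)

print(after > before)              # the layer's response has grown
```

In the real method the objective is a deep layer of a trained convolutional network and x is an image, so the amplified activations render as the surreal textures and objects seen in the video; the ascent loop is the same in spirit.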