DeepDream is a small, educational repository that accompanies Google’s original “Inceptionism” blog post by providing a runnable IPython/Jupyter notebook that demonstrates how to “dream” through a convolutional neural network. The notebook shows how to take a trained vision model and iteratively amplify patterns the network detects, producing the technique’s hallmark surreal, hallucinatory visuals. It walks through loading a pretrained network, selecting layers and channels to maximize, computing gradients with respect to the input image, and applying multi-scale “octave” processing to reveal both fine and coarse patterns. The code is intentionally compact and exploratory, encouraging users to tweak layers, step sizes, and scales to influence the aesthetic. Although minimal, it illustrates important concepts like feature visualization, activation maximization, and the effect of different receptive fields on the final image.
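
At its core, the workflow is gradient ascent on the input pixels. The sketch below illustrates a single “dream” step in that spirit; it is a hedged illustration written against PyTorch and torchvision’s GoogLeNet rather than the notebook’s own stack, and the layer choice (`inception4c`), step size, and iteration count are illustrative assumptions, not values taken from the notebook.

```python
# Minimal sketch of single-scale gradient ascent on input pixels.
# Assumes PyTorch/torchvision; the layer name and hyperparameters are
# illustrative, not the notebook's own settings.
import torch
from torchvision import models

model = models.googlenet(weights="DEFAULT").eval()

activations = {}
model.inception4c.register_forward_hook(
    lambda module, inp, out: activations.update(target=out)
)

def dream_step(img, step_size=0.05, channel=None):
    """One activation-maximization step: nudge the image so the chosen
    layer (or a single channel of it) activates more strongly."""
    img = img.detach().requires_grad_(True)
    model(img)
    act = activations["target"]
    loss = act.mean() if channel is None else act[:, channel].mean()
    loss.backward()
    # Normalize the gradient so step_size behaves consistently across layers.
    grad = img.grad / (img.grad.abs().mean() + 1e-8)
    return (img + step_size * grad).detach()

# Start from a preprocessed photo (or noise) and amplify what the layer "sees".
img = torch.rand(1, 3, 224, 224)
for _ in range(20):
    img = dream_step(img)
```

Choosing an earlier layer or a single channel (via the `channel` argument) changes which learned features get amplified, which is what drives the different visual styles.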
Features
- Runnable Jupyter notebook for activation-maximization workflows
- Layer/channel selection to control visual aesthetics
- Multi-scale “octave” processing for fine and coarse detail (see the octave-loop sketch after this list)
- Step-by-step gradient ascent on input pixels
- Works with pretrained CNNs to visualize learned features
- Minimal, hackable code to encourage experimentation
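
The octave processing mentioned above can be sketched as follows, again as a hedged illustration rather than the notebook’s exact procedure: the image is downscaled into a pyramid, gradient ascent runs from the coarsest scale to the finest, and the detail dreamed in at each scale is upsampled and carried forward. It reuses the hypothetical `dream_step` from the earlier sketch; the octave count and scale factor are illustrative defaults.

```python
# Hedged sketch of multi-scale "octave" processing.
# Reuses the dream_step() sketch above; parameters are illustrative.
import torch
import torch.nn.functional as F

def deep_dream(img, n_octaves=4, octave_scale=1.4, steps_per_octave=10):
    # Build a pyramid of progressively smaller copies of the image.
    octaves = [img]
    for _ in range(n_octaves - 1):
        octaves.append(F.interpolate(octaves[-1],
                                     scale_factor=1 / octave_scale,
                                     mode="bilinear", align_corners=False))

    detail = torch.zeros_like(octaves[-1])
    for octave in reversed(octaves):
        # Upscale the accumulated detail to the current octave's size.
        detail = F.interpolate(detail, size=octave.shape[-2:],
                               mode="bilinear", align_corners=False)
        dreamed = octave + detail
        for _ in range(steps_per_octave):
            dreamed = dream_step(dreamed)
        # Keep only the dreamed-in detail, relative to the clean octave.
        detail = dreamed - octave
    return dreamed
```

More octaves and a larger scale factor emphasize coarse structure, while fewer octaves and smaller step sizes keep the result closer to the input photo; these are exactly the knobs the notebook encourages you to experiment with.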