Code360 powered by Coding Ninjas X
Table of contents
Base Of Google Deep Dream
Digging deeper Into Google Deep Dream
Key Takeaways
Last Updated: Mar 27, 2024

Google Deep Dream

Author Mayank Goyal

DeepDream is one of the more intriguing deep learning applications in computer vision. It was created by Alexander Mordvintsev, a Google engineer. The application uses a convolutional neural network to find and enhance patterns in images via algorithmic pareidolia, producing dream-like, psychedelic visuals in the processed images. The model has been applied in art history, and researchers have also used it to let people explore virtual reality environments that simulate the effects of psychoactive substances. The term (deep) "dreaming" was popularised by Google's program, referring to the generation of images that produce desired activations in a trained deep network; the phrase has since come to cover a family of related methods.

Base Of Google Deep Dream

Google Deep Dream is built on Inception, a deep convolutional neural network architecture first presented at the ILSVRC in 2014, where it set a new state of the art for classification and detection. Its hallmark is the improved utilisation of computing resources inside the network: it increases the network's depth and width while keeping the computational budget constant, at roughly 1.5 billion multiply-adds at inference time.



We train an artificial neural network by showing it millions of training examples and gradually adjusting the network parameters until it produces the classifications we want. The network typically consists of 10-30 stacked layers of artificial neurons. Each image is fed into the input layer, which then passes its result through the subsequent layers until the "output" layer is reached. This final layer provides the network's "answer".
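This forward pass can be sketched with a toy network: a few stacked fully connected layers whose final layer reports an argmax "answer". The sizes and random weights below are purely illustrative stand-ins, not a trained model.

```python
import numpy as np

# Toy forward pass: an image enters the input layer, flows through stacked
# layers, and the output layer gives the "answer". The sizes and random
# weights are purely illustrative, not a trained network.
rng = np.random.default_rng(3)
layer_sizes = [64, 32, 16, 3]          # 8x8 input -> two hidden layers -> 3 classes
weights = [rng.normal(scale=0.3, size=(m, n))
           for m, n in zip(layer_sizes[1:], layer_sizes[:-1])]

def forward(img):
    x = img.ravel()
    for W in weights[:-1]:
        x = np.maximum(W @ x, 0.0)     # ReLU hidden layers
    logits = weights[-1] @ x           # final "output" layer
    return int(np.argmax(logits))      # the network's "answer"

label = forward(rng.normal(size=(8, 8)))
```

A real classifier such as Inception follows the same shape, only with convolutional layers and weights learned from millions of examples.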

Understanding what happens at each layer is one of the challenges of neural networks. We know that after training, each layer extracts progressively higher-level image features until the final layer essentially decides what the picture shows. The first layer, for example, may search for edges or corners. Intermediate layers interpret those basic features to look for overall shapes or components, such as a door or a leaf. The final few layers assemble these into complete interpretations; their neurons activate in response to very complex things such as entire buildings or trees.

One way to visualise what is going on is to turn the network upside down and ask it to enhance an input image so as to elicit a particular interpretation. Say we want to know what kind of image would result in "banana". Start with an image full of random noise, then gradually tweak it toward what the neural network considers a banana. By itself this doesn't work very well, but it does if we impose a prior constraint: the image should have statistics similar to natural images, such as neighbouring pixels being correlated.
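The gradient-ascent idea can be sketched with a stand-in scorer. Here a fixed random linear map plays the role of the network's "banana" score (hypothetical; a real setup would backpropagate through a deep CNN), and a mild neighbour-averaging step plays the role of the natural-image prior.

```python
import numpy as np

# Toy sketch of "what does the network think a banana looks like?".
# A fixed random linear map stands in for the trained network's class
# score (hypothetical); real DeepDream backpropagates through a deep CNN.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))            # stand-in "banana" class weights

def score(img):
    return float(np.sum(w * img))      # class activation for this image

img = rng.normal(scale=0.1, size=(8, 8))   # start with random noise
start_score = score(img)
lr, smooth = 0.1, 0.01
for _ in range(100):
    grad = w                           # d(score)/d(img) for a linear scorer
    img += lr * grad                   # gradient ascent on the input
    # Natural-image prior: nudge each pixel toward its neighbours' mean,
    # enforcing the "neighbouring pixels are correlated" constraint.
    blurred = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
               np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4
    img = (1 - smooth) * img + smooth * blurred
```

With a real network the gradient step would come from backpropagation rather than a fixed `w`, but the structure, ascend the class score while regularising toward natural-image statistics, is the same.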



We teach networks by showing them many examples of what we want them to learn, in the hope that they extract the essence of the subject (e.g., a fork requires a handle and 2-4 tines) and learn to ignore what doesn't matter (a fork can be any shape, size, colour, or orientation). But how can you tell whether the network has learned the right features? Visualising the network's idea of a fork can help.

Rather than prescribing which feature we want the network to amplify, we can let the network make that decision. In this scenario, we simply feed an arbitrary image or photo to the network and let it analyse the picture. We then pick a layer and ask the network to enhance whatever it detected. Because each layer of the network deals with features at a different level of abstraction, the complexity of the features we generate depends on which layer we choose to enhance.

Again, we take an existing image and feed it into our neural network. "Whatever you see there, I want more of it!" we implore the network. This generates a feedback loop: if a cloud resembles a bird, the network will make it resemble a bird even more. This causes the network to recognize the bird even more clearly on the next run until a highly detailed bird pops out of nowhere.
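This feedback loop can be sketched in a few lines. A fixed random projection stands in for the chosen CNN layer (hypothetical; DeepDream proper uses the features of a real network such as Inception), and we do gradient ascent on the input so that whatever the "layer" already responds to gets amplified.

```python
import numpy as np

# Feedback-loop sketch: pick a "layer" and do gradient ascent on the INPUT
# so the layer's activations grow ("whatever you see, I want more of it").
# A fixed random projection stands in for a real CNN layer (hypothetical).
rng = np.random.default_rng(1)
W = rng.normal(size=(16, 64)) / 8.0    # "layer": 16 features of an 8x8 image

def layer_activations(img):
    return W @ img.ravel()

def objective(img):                    # total activation energy of the layer
    a = layer_activations(img)
    return float(np.sum(a * a)) / 2

img = rng.normal(size=(8, 8))          # the photo we feed in
start = objective(img)
for _ in range(50):
    a = layer_activations(img)
    grad = (W.T @ a).reshape(8, 8)     # d(objective)/d(img)
    grad /= np.abs(grad).mean() + 1e-8 # normalise the step size for stability
    img += 0.05 * grad                 # amplify whatever the layer detects
```

Each pass makes the layer "see" its favourite patterns more strongly, which is exactly the loop that turns a cloud that vaguely resembles a bird into an increasingly detailed bird.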

The results are intriguing: even a relatively simple neural network can be used to over-interpret an image, much as children enjoy watching clouds and reading shapes into their random forms. Because this network was trained mainly on images of animals, it naturally interprets shapes as animals. And because the data is stored at such a high level of abstraction, the outputs are an intriguing remix of these learned features.

Digging deeper Into Google Deep Dream

If we apply the algorithm recursively to its own outputs and add some zooming after each iteration, we receive an endless stream of fresh impressions, exploring the set of things the network knows about. We can even start from a random-noise image, so that the final result is purely the product of the neural network.
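The recursive variant can be sketched as a crop-and-zoom loop. `dream_step` below is a hypothetical placeholder for one round of activation-maximising gradient ascent on a real network; after each step we crop the centre and scale it back up before dreaming again.

```python
import numpy as np

# Sketch of the recursive "zoom" pipeline: dream, crop the centre,
# upscale back to the original size, and dream again, yielding an
# endless stream of frames. dream_step is a hypothetical stand-in
# for one activation-maximising gradient-ascent pass.
rng = np.random.default_rng(2)

def dream_step(img):
    # Placeholder: a tiny deterministic perturbation, so only the
    # pipeline's shape is shown here, not real dreaming.
    return img + 0.01 * np.cos(img)

def zoom(img, crop=2):
    h, w = img.shape
    center = img[crop:h - crop, crop:w - crop]   # crop the borders
    # Nearest-neighbour upscale back to the original size.
    ys = np.linspace(0, center.shape[0] - 1, h).round().astype(int)
    xs = np.linspace(0, center.shape[1] - 1, w).round().astype(int)
    return center[np.ix_(ys, xs)]

frame = rng.normal(size=(32, 32))    # can even start from pure noise
frames = [frame]
for _ in range(5):
    frame = zoom(dream_step(frame))
    frames.append(frame)
```

Played back in sequence, such frames give the continuous "flying into the dream" effect seen in DeepDream videos.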


The approaches described here help us understand and visualise how neural networks carry out challenging classification tasks, improve network architecture, and check what the network has learned during training. They also make us wonder whether neural networks could become a tool for artists, a new way to remix visual concepts, or perhaps even shed some light on the roots of the creative process in general.


The idea of dreaming can be applied to hidden (internal) neurons rather than just those in the output layer, allowing researchers to investigate the roles and representations of different parts of the network. It is also possible to optimise the input to excite a single neuron (a technique known as Activity Maximisation) or an entire layer of neurons. While dreaming is most commonly used to visualise networks or create computer art, it has recently been claimed that adding "dreamed" inputs to the training set can reduce training times for abstractions in computer science. The DeepDream model has also found use in art history, and Foster the People's music video for "Doing It for the Money" featured DeepDream imagery.
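Activity Maximisation for a single neuron can be sketched the same way as before. The "neuron" below is one fixed random weight vector with a tanh nonlinearity, a hypothetical stand-in rather than part of any real network.

```python
import numpy as np

# Activity Maximisation sketch: tailor the input so a SINGLE chosen unit
# fires strongly. The "neuron" is one fixed random weight vector with a
# tanh nonlinearity (hypothetical, not taken from a real network).
rng = np.random.default_rng(4)
neuron_w = rng.normal(size=64)         # the unit's incoming weights

def activation(img):
    return float(np.tanh(neuron_w @ img.ravel()))

img = np.zeros((8, 8))                 # start from a blank input
for _ in range(200):
    pre = neuron_w @ img.ravel()
    # Chain rule: d(tanh(pre))/d(img) = (1 - tanh(pre)^2) * neuron_w
    grad = ((1.0 - np.tanh(pre) ** 2) * neuron_w).reshape(8, 8)
    img += 0.05 * grad
```

The resulting input is the pattern this one unit responds to most strongly; done against a real hidden neuron, this is how researchers visualise what individual network components have learned.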


Frequently Asked Questions

  1. What is the purpose of Google DeepDream?
    DeepDream is a computer vision program developed by Google engineer Alexander Mordvintsev. It uses a convolutional neural network to discover and enhance patterns in photographs via algorithmic pareidolia, giving the images a dream-like appearance reminiscent of a psychedelic trip.
  2. Why is it that DeepDream sees dogs?
    The surreal effects of Deep Dream arise from feeding it an initial image and then triggering a feedback loop, in which the network repeatedly enhances whatever it recognizes in what it sees. In other words, Google's Deep Dream sees dog faces everywhere because it was largely trained on images of dogs.
  3. What is the DeepDream generator, and how does it work?
    The Deep Dream Generator is a computer vision application that lets users upload photographs and transform them using artificial intelligence. In other words, the images fed into the software are processed by many layers of a neural network.

Key Takeaways

Let us briefly recap the article.

Firstly, we saw what Google's Deep Dream is, its history, and how it became known to the world. Later, we saw how Deep Dream works, along with a few of its applications.

Well, that's the end of the article. 

I hope you all liked it.

If you need a perfect guiding partner in your learning phase, we have an ideal partner for you.

Happy Learning Ninjas!
