Table of contents
What is GoogLeNet?
GoogleNet Model Architecture
Features of GoogleNet
Inception Module
Auxiliary Classifier
Advantages of GoogleNet Model
Frequently Asked Questions
What are the applications of GoogleNet in today’s world?
How deep is the GoogleNet Neural Network?
What is the benefit of using Inception Units?
Last Updated: May 24, 2024

GoogLeNet Model


What is GoogLeNet?

GoogLeNet, also widely known as Inception Net, is a deep learning model built by researchers at Google. It was first introduced in the paper "Going Deeper with Convolutions".


The ILSVRC (ImageNet Large Scale Visual Recognition Challenge) evaluates models every year on their object detection and image classification performance. GoogLeNet won the image classification challenge in ILSVRC 2014, performing significantly better than the VGG model (the runner-up). Its error rate was also much lower than those of the 2013 and 2012 winners, ZFNet and AlexNet, respectively.

GoogleNet introduced the inception module in deep learning, which we’ll study more in this blog.

GoogleNet Model Architecture

The GoogLeNet architecture has 22 parameterized layers (convolutional and fully connected layers). If we also count non-parameterized layers such as max-pooling, the model has 27 layers in total.

In the architecture diagram below, every box represents a layer:

  • Blue Box - Convolutional Layer
  • Green Box - Feature Concatenation
  • Red Box - MaxPool Layer
  • Yellow Box - Softmax Layer


GoogleNet Architecture



GoogLeNet Layer Description


Input - The GoogLeNet model takes an RGB input image of size 224 x 224.


Output - The output layer (the softmax layer) has 1000 nodes, corresponding to the 1000 object classes of the ImageNet dataset.
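As a small sketch of what that output layer does, the 1000 raw scores (logits) produced by the final fully connected layer are converted into class probabilities by the softmax function. The NumPy snippet below uses random values as stand-in logits:

```python
import numpy as np

# Hypothetical logits for the 1000 ImageNet classes, standing in for
# the scores produced by GoogLeNet's final fully connected layer.
rng = np.random.default_rng(0)
logits = rng.normal(size=1000)

# Softmax turns the 1000 raw scores into class probabilities.
exp = np.exp(logits - logits.max())  # subtract the max for numerical stability
probs = exp / exp.sum()

print(probs.shape)             # (1000,) -- one probability per class
print(round(probs.sum(), 6))   # 1.0    -- probabilities sum to one
```

The predicted class is then simply the index with the highest probability, e.g. `probs.argmax()`.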


Features of GoogleNet

GoogleNet, also known as Inception v1, is a deep convolutional neural network architecture designed by Google for image classification tasks. It won the ILSVRC 2014 competition. Here are its key features:

Inception Module

The idea of the Inception module is to bring down the number of parameters in a deep neural network; to do so, the module is built out of many small, parallel convolutions.

Models like AlexNet have 60 million parameters, whereas GoogleNet has only about 4 million, even though the GoogleNet architecture is much deeper than AlexNet.

The GoogleNet model stacks nine inception units (also known as inception modules); they share the same structure but use different numbers of filters.

Inception Unit


The Inception module contains convolutional kernels of different sizes: 1x1, 3x3, and 5x5 (alongside a 3x3 max-pooling branch). The outputs of all these kernels are stacked together along the channel axis at the end of the inception unit. Larger convolutional kernels cover a larger area of the image to gather information, while smaller kernels work on a smaller area.

So, a 1x1 convolutional kernel gives us finer detail in the image when compared with a 5x5 convolutional kernel.
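Because every branch preserves the spatial size of its input (via padding), the branch outputs can be concatenated along the channel axis, and the module's output depth is just the sum of the branch filter counts. The sketch below checks this arithmetic using the filter counts of the first inception module, inception(3a), from the "Going Deeper with Convolutions" paper:

```python
def inception_output_channels(n1x1, n3x3, n5x5, n_pool_proj):
    """Channels after concatenating the four parallel inception branches.

    Each branch keeps the same spatial size (e.g. 28x28) via padding,
    so concatenation along the channel axis simply adds the depths.
    """
    return n1x1 + n3x3 + n5x5 + n_pool_proj

# Filter counts of inception(3a): 64 1x1 filters, 128 3x3 filters,
# 32 5x5 filters, and a 32-filter 1x1 projection after max pooling.
out = inception_output_channels(n1x1=64, n3x3=128, n5x5=32, n_pool_proj=32)
print(out)  # 256 channels feed the next layer
```

The same arithmetic applies to every inception unit in the network; only the per-branch filter counts change from module to module.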

Auxiliary Classifier

An Auxiliary Classifier is a classification unit added at two points in the middle of the GoogleNet network. Auxiliary classifiers are used to tackle the vanishing-gradient problem.



Each auxiliary classifier has a 5x5 average pooling layer, a 1x1 convolutional layer with 128 filters, a fully connected layer with 1024 units followed by dropout, and a softmax layer with 1000 units.

The auxiliary classifier units are attached to the outputs of intermediate inception units. When we train the neural network, the loss of each auxiliary classifier is multiplied by 0.3 and then added to the total network loss.
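The weighting above can be sketched in a couple of lines of plain Python. The loss values here are hypothetical, stand-ins for the cross-entropy losses a training batch might produce:

```python
# Hypothetical per-classifier losses for one training batch.
main_loss = 1.20   # loss of the final softmax classifier
aux_loss_1 = 1.55  # first auxiliary classifier
aux_loss_2 = 1.48  # second auxiliary classifier

# Each auxiliary loss is scaled by 0.3 before being added to the
# main loss; at inference time the auxiliary heads are discarded.
total_loss = main_loss + 0.3 * (aux_loss_1 + aux_loss_2)
print(round(total_loss, 3))  # 2.109
```

Because the auxiliary heads sit earlier in the network, their gradients reach the lower layers along a shorter path, which is what counteracts the vanishing gradient during training.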


Advantages of GoogleNet Model

In today’s world, GoogleNet is used for various computer-vision applications, including object detection and image classification.





Around 18% of GoogleNet applications are based on image classification, and about 10% each are based on object detection and quantization.

  • GoogleNet has proven to be faster than other image-classification models such as VGG.
  • GoogleNet is also much more compact: a pre-trained VGG16 model is 528 MB and VGG19 is 549 MB, whereas a pre-trained GoogleNet is only 96 MB and InceptionV3 is 92 MB.
  • GoogleNet achieves higher efficiency by compressing the input image while retaining the important features/information.


Frequently Asked Questions

What are the applications of GoogleNet in today’s world?

We perform many computer-vision tasks using GoogleNet, such as:

  • Image Classification,
  • Object Detection,
  • Object Recognition,
  • Quantization,
  • Face Recognition,
  • Object Classification, and many more.

How deep is the GoogleNet Neural Network?

The GoogleNet architecture has 22 parameterized layers, and 27 layers in total including the max-pooling layers.

What is the benefit of using Inception Units?

Inception units help decrease the number of parameters in a network, allowing us to train a deeper network.


Recommended Readings: ResNet Architecture


In today’s scenario, more and more industries are adopting AutoML in their products; with this rise, it has become clear that AutoML could be the next boon in technology. Check this article to learn more about AutoML applications.

Check out this link if you are a Machine Learning enthusiast or want to brush up your knowledge with ML blogs.

If you are preparing for the upcoming campus placements, don't worry. Visit this link for a carefully crafted and designed course on campus placements and interview preparation.
