Last updated: Feb 7, 2022

Regularization

Regularization is a family of deep learning techniques that constrain a model's complexity so that it fits the underlying signal rather than the noise in the training data, thereby reducing overfitting. Applied well, it can significantly improve a model's ability to generalize to unseen data.
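As a minimal taste of the idea (a sketch of our own, not code from any article below): L2 regularization adds a penalty on weight size to the loss, which shrinks the weights during gradient descent. The data, the penalty strength lam, and the learning rate here are all made-up illustrative values.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.1, size=100)

    lam = 0.1   # penalty strength -- an arbitrary illustrative value
    w = np.zeros(5)
    for _ in range(1000):
        # gradient of the data loss plus the gradient of (lam/2) * ||w||^2
        grad = X.T @ (X @ w - y) / len(y) + lam * w
        w -= 0.1 * grad
    print(w.round(2))   # weights are shrunk slightly toward zero

With lam = 0 this reduces to ordinary least squares; larger lam trades training fit for smaller, more stable weights.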
Bias-Variance Tradeoff EASY
This blog discusses the bias-variance tradeoff in detail and how to manage it when choosing model complexity.
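A quick simulation can make the tradeoff concrete (a hedged sketch of our own, not the blog's example): refitting a simple and a flexible polynomial model on many noisy resamples of the same function shows the simple model's error dominated by bias and the flexible model's by variance. The function, noise level, and degrees are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 30)

    def true_f(t):
        return np.sin(2 * np.pi * t)

    def bias_variance_at(degree, t=0.25, trials=200):
        # refit the model on fresh noisy samples of the same function each trial
        preds = []
        for _ in range(trials):
            y = true_f(x) + rng.normal(scale=0.3, size=x.size)
            preds.append(np.polyval(np.polyfit(x, y, degree), t))
        preds = np.array(preds)
        return (preds.mean() - true_f(t)) ** 2, preds.var()

    for degree in (1, 9):   # an underfitting model vs. a flexible one
        b2, var = bias_variance_at(degree)
        print(f"degree {degree}: bias^2 = {b2:.4f}, variance = {var:.4f}")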
Early Stopping In Deep Learning
This article covers early stopping in deep learning and how it helps solve the problem of overfitting.
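The pattern itself is simple; here is a minimal sketch (toy data and a patience value chosen arbitrarily, not the article's code): track validation loss each epoch, snapshot the best weights, and stop once there has been no improvement for patience epochs.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    y = X[:, 0] + rng.normal(scale=0.5, size=200)
    X_tr, y_tr, X_val, y_val = X[:150], y[:150], X[150:], y[150:]

    w = np.zeros(10)
    best_val, best_w, wait, patience = np.inf, w.copy(), 0, 10
    for epoch in range(500):
        w -= 0.05 * X_tr.T @ (X_tr @ w - y_tr) / len(y_tr)   # one training step
        val = np.mean((X_val @ w - y_val) ** 2)              # validation loss
        if val < best_val - 1e-6:
            best_val, best_w, wait = val, w.copy(), 0        # snapshot the best weights
        else:
            wait += 1
            if wait >= patience:    # no improvement for `patience` epochs: stop
                break
    w = best_w                      # restore the best snapshot
    print(f"stopped at epoch {epoch} with validation MSE {best_val:.4f}")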
Parameter Sharing and Tying
In this article, we will learn about parameter sharing and tying and their uses. Further, we will see the issues faced by L1 regularization and how GROWL overcomes them.
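For a flavor of tying (a toy sketch under our own assumptions, unrelated to GROWL itself): a linear autoencoder can tie its decoder to the transpose of its encoder, so the two layers share one parameter matrix and the parameter count is halved.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(64, 20))
    W = rng.normal(scale=0.1, size=(20, 8))   # one matrix shared by encoder and decoder

    for _ in range(500):
        E = X @ W @ W.T - X                   # reconstruction error; the decoder is W.T
        # both the encoder and decoder uses of W contribute to its gradient
        W -= 0.01 * 2 * (X.T @ E + E.T @ X) @ W / len(X)
    print(np.mean((X @ W @ W.T - X) ** 2))    # reconstruction MSE after training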
Noise Injection in Neural Networks EASY
This article discusses noise injection in neural networks: adding noise to the inputs, weights, or activations during training in order to reduce overfitting.
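A minimal sketch of the simplest variant, input noise (illustrative data and noise scale, not the article's code): fresh Gaussian noise is added to the inputs at every training step. For linear models this is known to act like an L2 penalty, shrinking the learned weights.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0])

    w = np.zeros(5)
    for _ in range(1000):
        X_noisy = X + rng.normal(scale=0.3, size=X.shape)   # fresh noise every step
        w -= 0.05 * X_noisy.T @ (X_noisy @ w - y) / len(y)
    print(w.round(2))   # weights end up shrunk relative to [1, -2, 0.5, 0, 3]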
Ensemble Method
This blog looks at the ensemble learning technique and implements it from scratch.
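As a rough sketch of one classic ensemble recipe, bagging (our own toy example, not the blog's implementation): train several base learners on bootstrap resamples of the data and average their predictions.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)

    def fit_logistic(X, y, steps=500, lr=0.1):
        # a simple logistic-regression base learner trained by gradient descent
        w = np.zeros(X.shape[1])
        for _ in range(steps):
            p = 1 / (1 + np.exp(-X @ w))
            w -= lr * X.T @ (p - y) / len(y)
        return w

    # bagging: train each base learner on a bootstrap resample, then average
    models = []
    for _ in range(10):
        idx = rng.integers(0, len(X), size=len(X))
        models.append(fit_logistic(X[idx], y[idx]))
    probs = np.mean([1 / (1 + np.exp(-X @ w)) for w in models], axis=0)
    print("ensemble training accuracy:", (probs.round() == y).mean())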
Dropout - Regularization Method
In this article, we will learn about dropout, a powerful regularization technique, and discuss its uses.
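The core of (inverted) dropout fits in a few lines; this is a generic sketch rather than the article's code. During training each unit is zeroed with probability p, and the survivors are rescaled by 1/(1-p) so that no rescaling is needed at inference time.

    import numpy as np

    rng = np.random.default_rng(0)

    def dropout(h, p=0.5, training=True):
        # inverted dropout: zero units with probability p, rescale the survivors
        if not training:
            return h                        # a no-op at inference time
        mask = rng.random(h.shape) >= p     # keep each unit with probability 1 - p
        return h * mask / (1 - p)           # keeps the expected activation unchanged

    h = np.ones((2, 8))
    print(dropout(h, p=0.5))   # about half the entries zeroed, the rest doubled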
Greedy Layer-wise Pre-Training HARD
In this article, we will learn about greedy layer-wise pre-training of deep neural networks and implement it in code.
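As a simplified sketch of the greedy schedule (using linear tied-weight autoencoders purely for brevity; real pre-training typically uses nonlinear autoencoders or RBMs): each layer is pre-trained on the codes produced by the layer below it, after which the whole stack would be fine-tuned end to end.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(128, 32))

    def pretrain_layer(H, n_out, steps=500, lr=0.01):
        # train one layer as a tied-weight linear autoencoder on its input H
        W = rng.normal(scale=0.1, size=(H.shape[1], n_out))
        for _ in range(steps):
            E = H @ W @ W.T - H
            W -= lr * 2 * (H.T @ E + E.T @ H) @ W / len(H)
        return W

    # greedy stage: each layer is pre-trained on the codes from the layer below;
    # the full stack would then be fine-tuned end to end on the supervised task
    weights, H = [], X
    for n_out in (16, 8):
        W = pretrain_layer(H, n_out)
        weights.append(W)
        H = H @ W   # this layer's codes become the next layer's training input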
Activation Functions - Introduction
Activation functions are an essential component of any neural network: they introduce the non-linearity that lets the network learn complex patterns from the given data.
Neural Network Activation Functions EASY
This article examines various activation functions and their properties.
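For quick reference, here are a few common activation functions in plain NumPy (a generic sketch; the article's own selection may differ):

    import numpy as np

    def sigmoid(x):
        return 1 / (1 + np.exp(-x))          # squashes inputs into (0, 1)

    def tanh(x):
        return np.tanh(x)                    # squashes inputs into (-1, 1)

    def relu(x):
        return np.maximum(0, x)              # zero for negatives, identity otherwise

    def leaky_relu(x, a=0.01):
        return np.where(x > 0, x, a * x)     # small slope instead of a hard zero

    x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
    for f in (sigmoid, tanh, relu, leaky_relu):
        print(f.__name__, f(x).round(3))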
Batch Normalization - Introduction
In this blog, you will learn about the batch normalization method used to accelerate the training of deep neural networks.
Batch Normalization - Implementation
This blog focuses on batch normalization and its implementation.
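The training-time forward pass is short enough to sketch here (a generic illustration, not the blog's implementation): normalize each feature to zero mean and unit variance over the batch, then apply a learnable scale gamma and shift beta.

    import numpy as np

    def batch_norm(X, gamma, beta, eps=1e-5):
        # normalize each feature over the batch, then scale and shift
        mu = X.mean(axis=0)                   # per-feature batch mean
        var = X.var(axis=0)                   # per-feature batch variance
        X_hat = (X - mu) / np.sqrt(var + eps)
        return gamma * X_hat + beta           # learnable scale and shift

    rng = np.random.default_rng(0)
    X = rng.normal(loc=5.0, scale=3.0, size=(32, 4))
    out = batch_norm(X, gamma=np.ones(4), beta=np.zeros(4))
    print(out.mean(axis=0).round(6))   # ~0 per feature
    print(out.std(axis=0).round(3))    # ~1 per feature

At inference time, running estimates of the mean and variance gathered during training are used instead, since a single example has no batch statistics.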