Last updated: Feb 7, 2022

Optimization of Neural Networks

Learn about different algorithms, architectures, and techniques for optimizing neural networks to improve prediction, analysis, and model accuracy.
Gradient Descent with Momentum
This article focuses on the limitations of gradient descent and how momentum overcomes those shortcomings, closing with the mathematics behind the method.
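As a quick preview of the update rule the article derives, here is a minimal NumPy sketch of one momentum step; the function name and the default learning rate and beta are illustrative assumptions, not values from the article.

```python
import numpy as np

def momentum_step(w, grad, velocity, lr=0.01, beta=0.9):
    """One gradient-descent-with-momentum update.

    velocity keeps an exponentially decaying average of past gradients,
    which damps oscillations across steep directions and speeds up
    progress along consistent ones.
    """
    velocity = beta * velocity - lr * grad   # v_t = beta * v_{t-1} - lr * g_t
    w = w + velocity                         # w_t = w_{t-1} + v_t
    return w, velocity
```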
Nesterov Accelerated Gradient
Nesterov Accelerated Gradient (NAG) is an optimization technique that improves on momentum-based gradient descent by evaluating the gradient at a look-ahead position, giving faster convergence and reduced oscillations.
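For orientation, below is a minimal NumPy sketch of one NAG step; grad_fn, the function name, and the default hyperparameters are illustrative assumptions. The distinguishing detail is that the gradient is taken at the look-ahead point rather than at the current parameters.

```python
import numpy as np

def nag_step(w, grad_fn, velocity, lr=0.01, beta=0.9):
    """One Nesterov Accelerated Gradient update.

    grad_fn(w) returns the gradient at w. Unlike classical momentum,
    the gradient is evaluated at the look-ahead point w + beta * velocity.
    """
    lookahead_grad = grad_fn(w + beta * velocity)
    velocity = beta * velocity - lr * lookahead_grad
    w = w + velocity
    return w, velocity
```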
What is AdaGrad?
This blog covers AdaGrad: why it is used and the concept behind it.
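A minimal NumPy sketch of one AdaGrad step is shown below, assuming a per-parameter running sum of squared gradients; the function name and default values are illustrative, not taken from the blog.

```python
import numpy as np

def adagrad_step(w, grad, grad_sq_sum, lr=0.01, eps=1e-8):
    """One AdaGrad update.

    Each parameter gets its own effective learning rate, scaled down by
    the square root of the accumulated sum of its past squared gradients.
    """
    grad_sq_sum = grad_sq_sum + grad ** 2
    w = w - lr * grad / (np.sqrt(grad_sq_sum) + eps)
    return w, grad_sq_sum
```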
AdaDelta
In this blog, we review the gradient descent algorithm and its drawbacks, and how AdaDelta and related optimizers address them.
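Here is a minimal NumPy sketch of one AdaDelta step, using decaying averages of squared gradients and squared updates so that no global learning rate needs tuning; the names and defaults are illustrative assumptions.

```python
import numpy as np

def adadelta_step(w, grad, avg_sq_grad, avg_sq_update, rho=0.95, eps=1e-6):
    """One AdaDelta update.

    Replaces AdaGrad's ever-growing sum of squared gradients with a
    decaying average, and scales the step by the RMS of past updates.
    """
    avg_sq_grad = rho * avg_sq_grad + (1 - rho) * grad ** 2
    update = -np.sqrt(avg_sq_update + eps) / np.sqrt(avg_sq_grad + eps) * grad
    avg_sq_update = rho * avg_sq_update + (1 - rho) * update ** 2
    w = w + update
    return w, avg_sq_grad, avg_sq_update
```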
RMSProp
This article focuses on the RMSProp optimizer: why it is used, how it works, and the mathematics behind it, followed by a Python implementation.
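As a preview, a minimal NumPy sketch of one RMSProp step is given below; the names and default values are illustrative and not taken from the article's own Python implementation.

```python
import numpy as np

def rmsprop_step(w, grad, avg_sq_grad, lr=0.001, beta=0.9, eps=1e-8):
    """One RMSProp update.

    A decaying average of squared gradients normalizes the step size per
    parameter, avoiding AdaGrad's monotonically shrinking learning rate.
    """
    avg_sq_grad = beta * avg_sq_grad + (1 - beta) * grad ** 2
    w = w - lr * grad / (np.sqrt(avg_sq_grad) + eps)
    return w, avg_sq_grad
```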
Adaptive Moment Estimation (ADAM)
In this article, we cover the ADAM (Adaptive Moment Estimation) optimizer, its derivation, and its implementation.
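Below is a minimal NumPy sketch of one ADAM step with bias-corrected first and second moment estimates; the function name and defaults are illustrative assumptions, and t is the 1-based timestep.

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One ADAM update (t starts at 1)."""
    m = beta1 * m + (1 - beta1) * grad        # first moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment (uncentred variance)
    m_hat = m / (1 - beta1 ** t)              # bias-corrected estimates
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```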
Limited-Memory Broyden-Fletcher-Goldfarb-Shanno Algorithm
In this article, we discuss the L-BFGS (Limited-Memory Broyden-Fletcher-Goldfarb-Shanno) algorithm and the mathematics behind it.
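To give a flavour of that mathematics, here is a sketch of the two-loop recursion at the core of L-BFGS, which approximates the inverse-Hessian-gradient product from a short history of parameter and gradient differences; all names are illustrative assumptions, and a complete optimizer would wrap this in a line search.

```python
import numpy as np

def lbfgs_direction(grad, s_history, y_history):
    """Two-loop recursion used by L-BFGS.

    s_history holds recent parameter differences s_k = x_{k+1} - x_k and
    y_history the matching gradient differences y_k = g_{k+1} - g_k.
    Returns an approximation of H^{-1} @ grad; the search direction is
    the negative of the returned vector.
    """
    if not s_history:                 # no curvature pairs yet: plain gradient
        return grad.copy()
    q = grad.copy()
    rhos = [1.0 / np.dot(y, s) for s, y in zip(s_history, y_history)]
    alphas = []
    # First loop: newest to oldest stored pair.
    for s, y, rho in zip(reversed(s_history), reversed(y_history), reversed(rhos)):
        alpha = rho * np.dot(s, q)
        q -= alpha * y
        alphas.append(alpha)
    # Scale by an initial Hessian estimate from the most recent pair.
    s_last, y_last = s_history[-1], y_history[-1]
    gamma = np.dot(s_last, y_last) / np.dot(y_last, y_last)
    r = gamma * q
    # Second loop: oldest to newest stored pair.
    for (s, y, rho), alpha in zip(zip(s_history, y_history, rhos), reversed(alphas)):
        beta = rho * np.dot(y, r)
        r += s * (alpha - beta)
    return r
```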