What is AutoGrad?
We use AutoGrad in the backpropagation step while training neural networks to calculate the gradients of the error with respect to all the parameters.
AutoGrad builds an acyclic dynamic computational graph by recording every operation performed on a tensor. It then calculates the gradients by applying the chain rule, multiplying the local gradients along the path from the root of the graph back to each leaf.
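To see this graph recording in action, here is a minimal sketch (the variable names are purely illustrative):

import torch

# requires_grad=True tells AutoGrad to record operations on this tensor
x = torch.tensor([2.0], requires_grad=True)

# Each operation adds a node to the dynamic computational graph
y = x * 3        # recorded as a MulBackward0 node
z = y ** 2       # recorded as a PowBackward0 node

# grad_fn points to the graph node that produced the tensor
print(z.grad_fn)   # <PowBackward0 object at ...>

# backward() walks the graph from z back to the leaf x, applying the chain rule
z.backward()
print(x.grad)      # dz/dx = 18x = tensor([36.])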
PyTorch AutoGrad has a huge application in neural networks. In a neural network, we first forward pass the data through the neurons to get an output. The output is then compared with the target output to compute the loss. After this, we backpropagate through the network and calculate the gradient of the loss for each weight. Finally, the weights are updated to reduce the loss. These steps are repeated until the loss stops decreasing or reaches an acceptable value.
In this training loop, AutoGrad performs the backpropagation step that calculates the gradients.
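As a rough sketch of how these steps fit together in code (the toy data, the single weight, and the learning rate below are hypothetical, chosen only for illustration):

import torch

# Hypothetical toy data: we want the model to learn y = 2x
x = torch.tensor([1.0, 2.0, 3.0])
y_true = torch.tensor([2.0, 4.0, 6.0])
w = torch.tensor([0.0], requires_grad=True)  # single trainable weight
lr = 0.01                                    # assumed learning rate

for step in range(100):
    # Forward pass: predictions and loss
    y_pred = w * x
    loss = ((y_pred - y_true) ** 2).mean()

    # Backward pass: AutoGrad computes d(loss)/dw
    loss.backward()

    # Update the weight to reduce the loss, then reset the gradient
    with torch.no_grad():
        w -= lr * w.grad
        w.grad.zero_()

print(w)  # approaches tensor([2.])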
AutoGrad Demonstration
We will take three different tensors, which will act as parameters.
import torch

# Gradient-enabled tensors
p = torch.tensor([10.], requires_grad=True)
q = torch.tensor([8.], requires_grad=True)
r = torch.tensor([5.], requires_grad=True)

print("Tensor P: ",p,"\n")
print("Tensor Q: ",q,"\n")
print("Tensor R: ",r,"\n")

Output
Tensor P:  tensor([10.], requires_grad=True)

Tensor Q:  tensor([8.], requires_grad=True)

Tensor R:  tensor([5.], requires_grad=True)
We’ll perform a mathematical operation on the parameters and store the result in an output variable, out.

out = (r**3) * (q**2) - p**3
print("Output Tensor: ",out,"\n")


Manually calculating the gradients with respect to the different parameters, for out = r³q² − p³:

∂out/∂p = −3p²
∂out/∂q = 2r³q
∂out/∂r = 3r²q²

After substituting the values p=10, q=8, r=5:

∂out/∂p = −3(10)² = −300
∂out/∂q = 2(5)³(8) = 2000
∂out/∂r = 3(5)²(8)² = 4800
Now, let’s use AutoGrad to calculate the same derivatives.
# backward() computes the gradients by traversing the graph backwards from the output tensor to the leaf nodes
out.backward()

print(p.grad)

Output
tensor([-300.])
print(q.grad)

Output
tensor([2000.])
print(r.grad)

Output
tensor([4800.])
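As a cross-check, the same gradients can be obtained without touching the .grad attributes by using torch.autograd.grad. A sketch, assuming the graph is rebuilt with the same p, q, and r as above (the earlier backward() call has already freed the original graph):

import torch

p = torch.tensor([10.], requires_grad=True)
q = torch.tensor([8.], requires_grad=True)
r = torch.tensor([5.], requires_grad=True)

out = (r**3) * (q**2) - p**3

# torch.autograd.grad returns the gradients directly instead of
# accumulating them into the .grad attributes
dp, dq, dr = torch.autograd.grad(out, (p, q, r))
print(dp, dq, dr)  # tensor([-300.]) tensor([2000.]) tensor([4800.])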
FAQs
1. What are Tensors?
Tensors are N-dimensional containers for data, similar to NumPy arrays. A tensor can have any number of dimensions: a one-dimensional tensor is a vector, a two-dimensional tensor is a matrix, a three-dimensional tensor is a cube, and so on.
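For example (with illustrative values):

import torch

vector = torch.tensor([1, 2, 3])            # 1-D tensor (vector)
matrix = torch.tensor([[1, 2], [3, 4]])     # 2-D tensor (matrix)
cube = torch.ones(2, 2, 2)                  # 3-D tensor (cube)

print(vector.ndim, matrix.ndim, cube.ndim)  # 1 2 3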
2. What makes PyTorch different from other ML Libraries?
The following factors distinguish PyTorch from other ML libraries:
- It offers dynamic computation graphs.
- It can make use of standard Python flow control (see the sketch below).
- It allows dynamic inspection of variables and gradients.
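For instance, here is a minimal sketch of the second point, where an ordinary Python if decides which operations get recorded in the graph (the function f is purely illustrative):

import torch

def f(x):
    # Ordinary Python control flow; the graph is rebuilt on every call
    if x.sum() > 0:
        return x ** 2
    return -x

x = torch.tensor([1.0, 2.0], requires_grad=True)
y = f(x).sum()
y.backward()
print(x.grad)  # tensor([2., 4.]) -- the squared branch was recorded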
3. How can we increase the size of a tensor?
We can expand a tensor using the Tensor.expand() method. Dimensions of size 1 are stretched to the sizes provided as arguments; the result is a view of the original tensor, so no data is copied.
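A short sketch (shapes and values are illustrative):

import torch

t = torch.tensor([[1.], [2.], [3.]])  # shape (3, 1)

# expand() stretches singleton dimensions and returns a view (no copy)
big = t.expand(3, 4)                  # shape (3, 4)
print(big)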
4. What is the use of AutoGrad in Deep Learning?
We use AutoGrad in the backpropagation step while training neural networks to calculate the gradients of the error with respect to all the parameters.
Key Takeaways
Congratulations on making it to the end of the blog. If you want to read about the basics of PyTorch, check out this blog.
Check out this link if you are a Machine Learning enthusiast or want to brush up on your knowledge with ML blogs.
If you are preparing for the upcoming Campus Placements, don't worry. Coding Ninjas has your back. Visit this link for a carefully crafted and designed course for campus placements and interview preparation.