Table of contents
1. Introduction
2. Why do we need PyTorch?
3. Basic Terminologies
4. PyTorch v/s TensorFlow
4.1. PyTorch
4.2. TensorFlow
5. Basic Implementations
6. Frequently Asked Questions
7. Key Takeaways
Last Updated: Mar 27, 2024

Introduction to PyTorch

Author Rajkeshav

Introduction

Deep learning is a collection of statistical machine learning algorithms used to learn feature hierarchies based on artificial neural networks. Python provides four main deep learning libraries: Theano, TensorFlow, Keras, and PyTorch. In this article, we will be discussing PyTorch.

 

Why do we need PyTorch?

  1. PyTorch is a scientific computing package that offers native support for Python and its ecosystem of libraries.
  2. It is actively used in development at Facebook (now Meta) and its subsidiary companies working on similar technology.
  3. PyTorch offers an easy-to-use API, which makes code easier to write and understand.
  4. PyTorch builds dynamic computation graphs: the graph is constructed as the code executes and can be manipulated at runtime based on our needs.
  5. PyTorch is fast, which ensures more effortless coding and faster processing.
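To make point 4 concrete, here is a minimal sketch (not from the original article) showing that the graph is built as the code runs, so an ordinary Python branch can steer its shape:

```python
import torch

# The graph is constructed at execution time, so a plain Python `if`
# decides which operations become part of it.
x = torch.tensor(2.0, requires_grad=True)

y = x * x          # graph node created here, as this line runs
if y.item() > 1:   # ordinary Python control flow steering the graph
    z = y * 3
else:
    z = y + 1

z.backward()       # autograd walks the graph that was just built
print(x.grad)      # dz/dx = 6x = 12.0 on the branch taken here
```

Because the branch taken depends on the data, a different input could produce a differently shaped graph on the next run.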

 

Basic Terminologies

Tensor - A tensor is an n-dimensional array that can run on the GPU.

Variables - Variables are nodes in the computation graph used to store the data and the gradients. (Since PyTorch 0.4, Variables have been merged into tensors; a tensor created with requires_grad=True plays this role.)

Module - Modules in neural networks are used to store state, such as learnable weights.
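The three terms above can be illustrated with a short sketch (the layer shape here is chosen arbitrarily for illustration):

```python
import torch
import torch.nn as nn

# Tensor: an n-dimensional array, movable to the GPU when one is available.
t = torch.ones(2, 3)
if torch.cuda.is_available():
    t = t.to("cuda")

# A tensor with requires_grad=True fills the role of a "Variable":
# a graph node that stores data and, after backward(), its gradient.
v = torch.tensor([1.0, 2.0], requires_grad=True)
v.sum().backward()
print(v.grad)              # tensor([1., 1.])

# Module: stores state (learnable weights) for a network layer.
layer = nn.Linear(3, 2)
print(layer.weight.shape)  # torch.Size([2, 3])
```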

 

PyTorch v/s TensorFlow

PyTorch

  • PyTorch offers a dynamic computation graph.
  • PyTorch can make use of standard Python flow control.
  • PyTorch supports the native Python debugger.
  • PyTorch allows dynamic inspection of variables and gradients.
  • PyTorch is mainly used for research at this point in time.
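The second bullet, standard Python flow control, can be sketched as follows: the number of loop iterations depends on the data, yet autograd still tracks every step (the values here are illustrative):

```python
import torch

# A data-dependent Python `while` loop inside the computation -
# the loop count is decided at runtime, not fixed in a static graph.
x = torch.tensor(0.5, requires_grad=True)
y = x
while y.item() < 4.0:   # runs until the value reaches 4.0
    y = y * 2

y.backward()
print(y.item(), x.grad.item())  # 4.0 8.0 (y = 8x after three doublings)
```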

 

TensorFlow

  • TensorFlow (1.x) does not offer a dynamic computation graph; graphs are defined statically before execution.
  • TensorFlow cannot make use of standard Python flow control inside a graph.
  • TensorFlow cannot use the native Python debugger.
  • TensorFlow does not support dynamic graph inspection.
  • TensorFlow is mainly used in production.

 

Basic Implementations 

The first step is to install and import the required packages; here we import the torch package. Let's start by constructing an uninitialized 5x3 matrix.

from __future__ import print_function
import torch
# constructing a 5x3 matrix, uninitialized:
x = torch.empty(5,3)
print(x)

 

Output

 

The output is an uninitialized tensor: it contains whatever values happened to be in the allocated memory. Similarly, let's construct another 5x3 matrix, but this time we will fill it with random values and check the output.

 

x = torch.rand(5, 3)
print(x)

 

Output

 

Every time we run the code, we'll get a different output. Next, let's construct a matrix filled with zeros, and this time we'll explicitly specify the data type as long.

 

# constructing a matrix filled with zeroes and of dtype long
x = torch.zeros(5,3,dtype=torch.long)
print(x)

 

Output

 

We can even construct the tensor directly from the data. 

 

x = torch.tensor([5.5, 3])
print(x)

 

Output

 

Here 5.5 and 3 are not dimensions but the tensor's data. Next, we will create a tensor based on an existing tensor. These methods reuse the properties of the input tensor, such as its data type, unless the user provides new values.

 

x = x.new_ones(5, 3, dtype=torch.double)
print(x)
x = torch.randn_like(x, dtype=torch.float)
print(x)

 

Output

 

Here we fill the tensor with ones and then override the data type from double to float, so the output is a new tensor of float values.
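We can verify the property change by inspecting the dtype attribute directly, for example:

```python
import torch

x = torch.empty(5, 3)
x = x.new_ones(5, 3, dtype=torch.double)    # reuses x's properties, all ones
print(x.dtype)                              # torch.float64

x = torch.randn_like(x, dtype=torch.float)  # same shape, dtype overridden
print(x.dtype)                              # torch.float32
```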

We can print the size of the tensor like:

 

print(x.size())

 

Output

 

The size is 5x3. Note that torch.Size is in fact a tuple, so it supports all tuple operations.
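For instance, torch.Size supports tuple operations such as unpacking and indexing; a quick sketch:

```python
import torch

x = torch.rand(5, 3)

# torch.Size behaves like an ordinary Python tuple.
rows, cols = x.size()   # tuple unpacking
print(rows, cols)       # 5 3
print(x.size()[0])      # indexing works too: 5
```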

Next, for simplicity, let's consider an essential tensor operation: addition. We'll construct another 5x3 tensor, fill it with random values, and add the two.

 

y = torch.rand(5, 3)
print(x+y)

 

Output

In this way, we can perform various operations on tensor objects.
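As a sketch of a few such operations, addition alone has several equivalent spellings (the variable names below are just for illustration):

```python
import torch

x = torch.rand(5, 3)
y = torch.rand(5, 3)

s1 = x + y                 # operator syntax
s2 = torch.add(x, y)       # functional syntax

out = torch.empty(5, 3)
torch.add(x, y, out=out)   # write the result into an existing tensor

y.add_(x)                  # in-place: the trailing underscore mutates y

print(torch.equal(s1, s2), torch.equal(s1, y))  # True True
```

Any method with a trailing underscore (add_, copy_, t_, ...) mutates its tensor in place.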

Frequently Asked Questions

  1. How is PyTorch designed?
    PyTorch is based on Torch, an earlier Lua-based framework, and is actively developed and used at Facebook (Meta).
     
  2. What are the main elements of PyTorch?
    1) PyTorch tensors
    2) PyTorch NumPy
    3) Mathematical operations
    4) Autograd Module
    5) Optim Module
    6) nn Module
     
  3. What are Tensors in PyTorch?
    Tensors in PyTorch are similar to NumPy arrays: multi-dimensional arrays of a single data type. Unlike NumPy arrays, they can also run on GPUs.
     
  4. What are the Features of PyTorch?
    The main Features of PyTorch are-
    1) PyTorch offers dynamic computation graphs.
    2) PyTorch can make use of standard Python flow control.
    3) PyTorch ensures dynamic inspection of Variables and Gradients.
     
  5. What is an Activation function?
    A neuron computes a weighted sum of its inputs and adds a bias; the activation function then applies a (usually non-linear) transformation to this result to produce the neuron's output.
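As a small sketch of the last answer, using ReLU as the activation function (the weights, inputs, and bias below are made-up illustrative values):

```python
import torch

# Pre-activation: weighted sum of inputs plus a bias.
w = torch.tensor([0.5, -1.0])   # weights
x = torch.tensor([2.0, 1.0])    # inputs
b = 0.25                        # bias

z = torch.dot(w, x) + b   # 0.5*2 + (-1)*1 + 0.25 = 0.25
out = torch.relu(z)       # non-linearity: max(0, z)
print(out.item())         # 0.25
```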

Key Takeaways

This blog covered the concepts and the basic implementation behind PyTorch. PyTorch is widely used in research, and various models have been implemented using it.

Check my other blog- Logistic Regression.                                                                                                         
