Code360 powered by Coding Ninjas X Naukri.com
Last Updated: Mar 27, 2024

Cloud TPU


Introduction

Google created hardware accelerators called Tensor Processing Units (TPUs) for machine learning workloads. Cloud TPU is the web service that makes TPUs available as scalable computing resources on Google Cloud.

In this blog, we will see what cloud TPU is, how it works, and when we need to use it.

So, let’s begin.


Working of Cloud TPU

TPUs train your models more efficiently because their hardware is designed for the huge matrix operations frequently encountered in machine learning methods. Thanks to TPUs' on-chip high-bandwidth memory (HBM), larger models and batch sizes are possible. You can scale up your workloads by connecting TPUs in groups called Pods.


To understand how TPUs work, we first need to understand how CPUs and GPUs work.

Working of CPU

A CPU is a general-purpose processor based on the von Neumann architecture, which means it executes software instructions against data held in memory. The main advantage of CPUs is their flexibility: any kind of software can be loaded onto a CPU for a wide range of purposes.


For every calculation, a CPU loads values from memory, runs the calculation on those values, and then stores the outcome back in memory. Because memory access is slow compared to calculation speed, this round trip limits the overall throughput of CPUs. This is known as the von Neumann bottleneck.
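This load-compute-store cycle can be sketched in plain Python. This is a toy illustration only: the memory-access counter is invented for the example, not a real CPU model.

```python
# Illustrative sketch of the von Neumann load-compute-store cycle.
# Every arithmetic step pays for two loads and one store, so memory
# traffic, not arithmetic, dominates the cost.

def elementwise_add(a, b):
    """Add two lists the way a scalar CPU would, counting memory accesses."""
    result = [0] * len(a)
    memory_accesses = 0
    for i in range(len(a)):
        x = a[i]           # load operand 1 from memory
        y = b[i]           # load operand 2 from memory
        memory_accesses += 2
        s = x + y          # one ALU operation
        result[i] = s      # store the outcome back to memory
        memory_accesses += 1
    return result, memory_accesses

total, accesses = elementwise_add([1, 2, 3], [4, 5, 6])
# three additions cost nine memory accesses: the von Neumann bottleneck
```

Three cheap additions required nine trips to memory, which is exactly the imbalance the TPU design tries to eliminate.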

Working of GPU

A GPU increases throughput by embedding thousands of Arithmetic Logic Units (ALUs) into a single processor. A modern GPU typically contains around 2,500 to 5,000 ALUs, so it can execute thousands of multiplications and additions simultaneously, giving it orders of magnitude more parallel arithmetic than a CPU.
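As a back-of-the-envelope illustration (deliberately ignoring memory latency, scheduling, and occupancy), the throughput gain from adding ALUs is simply the degree of parallelism:

```python
import math

def cycles_needed(num_elements, num_alus):
    """Idealized cycle count to finish num_elements independent
    operations when num_alus of them can execute per cycle.
    No memory stalls are modeled; this is only a thought experiment."""
    return math.ceil(num_elements / num_alus)

# One ALU (scalar CPU-like) vs. 2,500 ALUs (GPU-like) on a million ops:
cpu_cycles = cycles_needed(1_000_000, 1)      # 1,000,000 cycles
gpu_cycles = cycles_needed(1_000_000, 2500)   # 400 cycles
```

In practice the speedup is smaller, because (as the next paragraph explains) every one of those ALUs still has to fetch operands from registers or shared memory.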


However, a GPU is still a general-purpose processor that must support a wide range of software and applications. Consequently, GPUs share the same fundamental issue as CPUs: for every calculation performed by one of the thousands of ALUs, the GPU must access registers or shared memory to read operands and store the results of intermediate calculations.

Working of TPU

The main job of a TPU is matrix processing, which combines multiply and accumulate operations. A TPU contains a huge physical matrix made up of thousands of multiply-accumulators that are directly connected to one another. This arrangement is known as a systolic array architecture. On a single processor, Cloud TPU v3 has two systolic arrays of 128 × 128 ALUs.

The TPU host streams data into an infeed queue. The TPU loads the data from the infeed queue into HBM memory. When the computation is finished, the TPU loads the results into an outfeed queue. The TPU host then reads the results from the outfeed queue and stores them in host memory.
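The infeed/outfeed flow described above can be mimicked with ordinary Python queues. All names here (`tpu_step`, the `hbm` dict) are invented for illustration; on real hardware these queues are managed by the TPU runtime, not by user code.

```python
import queue

# Toy model of the host <-> TPU dataflow: host streams batches into an
# infeed queue, the "TPU" computes from HBM, results come back via outfeed.

infeed = queue.Queue()
outfeed = queue.Queue()

def tpu_step(hbm):
    """Simulated TPU side: pull a batch from the infeed into HBM,
    compute, and push the result onto the outfeed."""
    batch = infeed.get()                          # 1. infeed -> HBM
    hbm["activations"] = batch
    result = [x * hbm["weight"] for x in batch]   # 2. compute
    outfeed.put(result)                           # 3. results -> outfeed

# Host side: stream data in, then read results back into host memory.
hbm = {"weight": 2}
infeed.put([1, 2, 3])
tpu_step(hbm)
host_memory = outfeed.get()   # [2, 4, 6]
```

The key point the model captures is that the host only ever touches the queues; the computation itself happens entirely out of the accelerator's own memory.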
 

To execute the matrix operations, the TPU loads the parameters from HBM memory into the Matrix Multiplication Unit (MXU).

 

The TPU then loads data from HBM memory. As each multiplication is executed, its outcome is passed on to the next multiply-accumulator. The output is the sum of all the results of multiplying the data by the parameters. No memory access is required during the matrix multiplication process.
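Functionally, this multiply-accumulate dataflow computes an ordinary matrix product. A minimal sketch in plain Python (in hardware the multiply-accumulators all run in parallel; this sequential loop is only a functional model):

```python
def matmul_mac(params, data):
    """Matrix multiply expressed purely as multiply-accumulate steps,
    the operation a systolic array performs in hardware."""
    n, k = len(params), len(params[0])
    m = len(data[0])
    out = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            acc = 0
            for p in range(k):
                # each multiplication's outcome is accumulated, mirroring
                # how partial sums flow between neighbouring MAC units
                acc += params[i][p] * data[p][j]
            out[i][j] = acc
    return out

weights = [[1, 2], [3, 4]]
inputs  = [[5, 6], [7, 8]]
matmul_mac(weights, inputs)   # [[19, 22], [43, 50]]
```

Notice that once `params` and `data` are in place, the inner loop touches no external memory at all, which is what lets the hardware version sustain such high throughput.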

 

TPUs can therefore do calculations involving neural networks at high computational throughput.


When to use TPU


Cloud TPUs are optimized for specific workloads; in some cases, machine learning workloads are better run on Compute Engine instances with GPUs or CPUs. In general, you can decide which hardware suits your workload based on the following guidelines:

GPUs

  • Models that need to run at least partially on CPUs and have a significant number of custom TensorFlow/PyTorch/JAX operations.
     
  • Models with TensorFlow/PyTorch ops that are not available on Cloud TPU.
     
  • Larger effective batch sizes for medium-to-large models
     

CPUs

  • Rapid prototyping with the highest degree of flexibility.
     
  • Simple models with short training times.
     
  • Smaller models and efficient batch sizes.
     
  • Models with a significant number of custom TensorFlow, PyTorch, or JAX operations written in C++.
     
  • Models with I/O or networking bandwidth restrictions on the host system.

 

TPUs

  • Models that heavily rely on matrix calculations.
     
  • Models without specialized TensorFlow, PyTorch, or JAX operations during the primary training loop.
     
  • Models that train for weeks or months.
     
  • Big models and big effective batch sizes.
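These guidelines can be summed up in a toy decision helper. The rules, thresholds, and parameter names below are invented for illustration and are not an official Google recommendation:

```python
def suggest_hardware(custom_ops, matrix_heavy, training_weeks, prototyping):
    """Illustrative decision helper echoing the guidelines above.
    custom_ops: "none", "some", or "many_cpp" (amount of custom
    TF/PyTorch/JAX operations); all rules here are simplifications."""
    if prototyping or custom_ops == "many_cpp":
        return "CPU"       # flexibility beats raw throughput
    if custom_ops == "some":
        return "GPU"       # ops unavailable on Cloud TPU fall back to GPU
    if matrix_heavy and training_weeks >= 1:
        return "TPU"       # matrix-dominated, long-running training
    return "GPU"

suggest_hardware(custom_ops="none", matrix_heavy=True,
                 training_weeks=4, prototyping=False)   # "TPU"
```

A real decision would also weigh cost, batch size, and I/O limits on the host system, but the shape of the trade-off is the same.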
     

Frequently Asked Questions


Is TPU faster than a GPU?

For neural network inference in commercial AI applications, Google has reported that its TPU is 15 to 30 times faster than contemporary GPUs and CPUs.
 

What is Cloud TPU?

Cloud TPU is a machine learning ASIC custom-designed by Google. It powers Google products including Translate, Photos, Search, Assistant, and Gmail.
 

What is better GPU or TPU?

TPUs were created specifically for neural network workloads and can run faster than GPUs while using fewer resources. GPUs, on the other hand, can break complex problems into thousands or millions of smaller tasks and solve them all at once.

Conclusion

In this article, we have extensively discussed what Cloud TPU is, how it works, and when to use it.
 

If you think this blog has helped you enhance your knowledge about Cloud TPU, and if you would like to learn more, check out our articles Introduction to Google Cloud Platform, Google Prediction API, Using API in Cloud Monitoring, Utility API, and many more on our Website.
 

Refer to our Guided Path on Coding Ninjas Studio to upskill yourself in Data Structures and Algorithms, Competitive Programming, JavaScript, System Design, and many more! If you have just started your learning process and are looking for questions from tech giants like Amazon, Microsoft, Uber, etc., you must look at the problems, interview experiences, and interview bundle for placement preparations.

Thank You

Please upvote our blog to help other ninjas grow.

Happy Learning!

Topics covered
1. Introduction
2. Working of Cloud TPU
2.1. Working of CPU
2.2. Working of GPU
2.3. Working of TPU
3. When to use TPU
3.1. GPUs
3.2. CPUs
3.3. TPUs
4. Frequently Asked Questions
4.1. Is TPU faster than a GPU?
4.2. What is Cloud TPU?
4.3. What is better GPU or TPU?
5. Conclusion