Table of contents
1. Introduction
2. Load Balancing
3. Need for Load Balancing
4. Load Balancing Options
5. Choosing the Right Load Balancer
   5.1. External Load Balancer
   5.2. Network load balancer
   5.3. HTTP(S) load balancers
   5.4. Proxy-based load balancers (TCP and SSL)
   5.5. Internal Load Balancer
6. Frequently Asked Questions
   6.1. Is a load balancer software?
   6.2. How is load balancing done?
   6.3. What is load balancing in virtualization?
   6.4. What is the difference between a load balancer and an API gateway?
   6.5. Is AWS API gateway a load balancer?
7. Conclusion
Last Updated: Mar 27, 2024

Load Balancing in Cloud

Author Amit Singh

Introduction

Load balancing is an essential part of any cloud environment. It is critical to the availability of your cloud-based applications to customers, business partners, and end users.

Load balancers are extremely useful in cloud environments, where massive workloads can easily overwhelm a single server and high levels of service availability and response times are critical to certain business processes or mandated by SLAs.


In this article, we will learn about Load Balancing in the Google Cloud Platform in detail. We will also discuss its features, the types of load balancing, and the interfaces involved. So, keep reading till the end!


Load Balancing

In Cloud Computing, load balancing is the division of workloads and computing resources. By allocating resources among numerous computers, networks, or servers, it lets businesses manage workload or application demands effectively. Managing workload traffic and demand over the Internet is a component of cloud load balancing.

A load balancer receives incoming traffic and routes those requests to active targets based on a configured policy. A load balancing service also monitors the health of the individual targets to ensure that those resources are fully operational.
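As a very rough illustration of that behaviour, here is a minimal Python sketch (not tied to any cloud provider; the backend addresses and health flags are invented for the example) of a balancer that routes each request to the next healthy target:

```python
import itertools

# Hypothetical backend targets; in a real deployment these would be the VM
# instances or instance groups sitting behind the load balancer.
BACKENDS = ["10.0.0.2", "10.0.0.3", "10.0.0.4"]

# Health state that a load balancing service would keep fresh via periodic health checks.
health = {backend: True for backend in BACKENDS}

_rotation = itertools.cycle(BACKENDS)

def route_request(request_id: str) -> str:
    """Pick the next backend in round-robin order, skipping unhealthy targets."""
    for _ in range(len(BACKENDS)):
        backend = next(_rotation)
        if health[backend]:
            print(f"request {request_id} -> {backend}")
            return backend
    raise RuntimeError("no healthy backends available")

# Simulate a failed health check, then route a couple of requests.
health["10.0.0.3"] = False
route_request("r1")
route_request("r2")
```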


Need for Load Balancing

There are many reasons why load balancing was developed. 

  • One reason is to increase speed and performance; another is to reduce the load on each individual device so that it never reaches its performance limits.
     
  • Cloud Load Balancing is built on the same frontend-serving technology that runs Google itself. It consistently delivers high speed and low latency while supporting 1 million or more queries per second.
     
  • Traffic enters Cloud Load Balancing at 80+ worldwide load balancing locations and then travels over Google's fast private network backbone.
     
  • As a result, you can serve content as close to your users as possible, on an infrastructure that can handle over 1 million requests per second.
     


Load Balancing Options

If you want to figure out which load balancer is best suited for your implementation, you'll need to think about things like:

  1. Whether you need Global or Regional load balancing. Global load balancing refers to backend endpoints that live in more than one region. Regional load balancing refers to backend endpoints that live in a single region.
     
  2. Whether you need External load balancing or Internal load balancing.
     
  3. The type of traffic you are dealing with, such as HTTP, HTTPS, TCP, UDP, or SSL.

Choosing the Right Load Balancer

Assume you are in charge of a complicated website, Coding Heroes (your one-stop platform for programmers).


And it's a big hit on the internet! It continues to receive a high volume of traffic, and you are unsure whether your website's backend can handle the volume of traffic from all over the world. 

You're aware that load balancers are required, but the options can be bewildering. It can be difficult to decide on a load balancing architecture that meets your needs, and to determine the prerequisites for the best performance, without breaking the bank.

In this section, we will go over how to choose the right load balancer for your website.

There are many load balancing alternatives, depending on the precise location in the architecture where a load balancer is required.

Google Cloud load balancers are divided into external load balancers and internal load balancers.

You would employ internal load balancers to distribute traffic within your Google Cloud Platform network, and external load balancers to distribute traffic entering your Google Cloud network from the internet.


External Load Balancer

External load balancing includes four options (a rough decision sketch follows the list):

  1. HTTP(S) Load Balancing for HTTP or HTTPS traffic.
     
  2. TCP Proxy for TCP traffic on ports other than 80 and 8080, without SSL offload.
     
  3. SSL Proxy for SSL offload on ports other than 80 or 8080.
     
  4. Network Load Balancing for TCP/UDP traffic.
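As a rough rule of thumb (plain Python; the function name and parameters are made up for illustration, and real deployments weigh more factors), the four options above can be encoded like this:

```python
def choose_external_load_balancer(traffic: str, ssl_offload: bool = False) -> str:
    """Map the traffic type to one of the four external options listed above."""
    traffic = traffic.upper()
    if traffic in ("HTTP", "HTTPS"):
        return "HTTP(S) Load Balancing"
    if traffic in ("TCP", "SSL") and ssl_offload:
        return "SSL Proxy (SSL offload, ports other than 80 or 8080)"
    if traffic == "TCP":
        return "TCP Proxy (no SSL offload, ports other than 80 and 8080)"
    # UDP (and TCP that must stay un-proxied) falls through to Network Load Balancing.
    return "Network Load Balancing (pass-through TCP/UDP)"

print(choose_external_load_balancer("https"))                   # HTTP(S) Load Balancing
print(choose_external_load_balancer("tcp"))                     # TCP Proxy ...
print(choose_external_load_balancer("ssl", ssl_offload=True))   # SSL Proxy ...
print(choose_external_load_balancer("udp"))                     # Network Load Balancing ...
```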

Network load balancer

The foundation of network load balancing is Maglev. This load balancer lets you balance traffic across your systems based on incoming IP protocol data, such as the address, protocol, and (optionally) port. This load balancing system is regional and unproxied, so a network load balancer is a pass-through load balancer, not a proxy for client connections.
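To picture what "pass-through, based on incoming IP protocol data" means, here is a small Python sketch (not GCP code; the backend addresses are made up) in which a deterministic hash of the connection's address, port, and protocol keeps every packet of a connection on the same backend without proxying the connection:

```python
import hashlib

BACKENDS = ["10.0.1.2", "10.0.1.3", "10.0.1.4"]  # hypothetical backend VMs

def pick_backend(src_ip: str, src_port: int, dst_ip: str, dst_port: int,
                 protocol: str) -> str:
    """Hash the connection's IP protocol data and map it to one backend.

    Because the hash is deterministic, every packet of the same connection
    lands on the same backend even though the balancer never terminates or
    proxies the connection (pass-through behaviour).
    """
    key = f"{src_ip}:{src_port}:{dst_ip}:{dst_port}:{protocol}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return BACKENDS[digest % len(BACKENDS)]

print(pick_backend("203.0.113.7", 51000, "198.51.100.10", 443, "TCP"))
print(pick_backend("203.0.113.7", 51000, "198.51.100.10", 443, "TCP"))  # same backend again
```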

Network load balancers for backend services can handle TCP, UDP, ESP, GRE, ICMP, and ICMPv6 traffic.

Network load balancers that use target pools can only handle TCP or UDP traffic.

The regional Network Load Balancer is for Layer-4 traffic and is constructed using Maglev, whereas the global HTTP(S) load balancer is for Layer-7 traffic and is built using the Google Front End Engines at the edge of Google's network.

What is Maglev?

Google designed Maglev in 2008 to load balance all traffic entering its data centers and deliver it to the front-end engines at the network edges, with a number of regional backend instances receiving the traffic.

Due to its software foundation and active-active scale-out architecture, Maglev differs from conventional load balancers. Maglev minimizes the detrimental effects of unexpected failures on connection-oriented protocols by distributing traffic evenly across hundreds of backends. It is well suited to lightweight L4-based load balancing when you want to preserve the client IP address all the way to the backend instance and perform TLS termination on those instances.
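The heart of Maglev is a consistent-hashing lookup table that spreads table slots evenly across backends and keeps most connections on the same backend when the backend set changes. The sketch below is a heavily simplified Python rendering of that idea (the backend names, hash choice, and small table size are illustrative only, not Google's production implementation):

```python
import hashlib

def _h(value: str, seed: str) -> int:
    """Stable 64-bit hash for the demo (any good hash function would do)."""
    return int.from_bytes(hashlib.sha256(f"{seed}:{value}".encode()).digest()[:8], "big")

def build_maglev_table(backends, table_size=251):
    """Build a Maglev-style lookup table (table_size should be prime)."""
    n = len(backends)
    # Each backend gets its own pseudo-random permutation of the table slots.
    permutations = []
    for name in backends:
        offset = _h(name, "offset") % table_size
        skip = _h(name, "skip") % (table_size - 1) + 1
        permutations.append([(offset + j * skip) % table_size for j in range(table_size)])

    table = [-1] * table_size   # slot -> backend index
    next_pref = [0] * n         # how far each backend has walked its permutation
    filled = 0
    while filled < table_size:
        for i in range(n):
            # Walk backend i's preference list until a free slot is found.
            slot = permutations[i][next_pref[i]]
            while table[slot] >= 0:
                next_pref[i] += 1
                slot = permutations[i][next_pref[i]]
            table[slot] = i
            next_pref[i] += 1
            filled += 1
            if filled == table_size:
                break
    return table

def lookup(table, backends, connection_key: str) -> str:
    """Map a connection to a backend through the lookup table."""
    return backends[table[_h(connection_key, "conn") % len(table)]]

backends = ["backend-a", "backend-b", "backend-c"]   # hypothetical names
table = build_maglev_table(backends)
print(lookup(table, backends, "203.0.113.7:51000->443/TCP"))
```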

HTTP(S) load balancers

Global HTTP(S) Load Balancing is a proxy-based Layer 7 load balancer that allows you to run and scale your services behind a single external IP address.

As opposed to employing the conventional DNS-based strategy, Google extended load balancing out to the edge network on front-end servers. As a result, a single Anycast virtual IPv4 or IPv6 address can provide worldwide load-balancing capability. It means that you can distribute resources across several areas without changing the DNS records or adding new load balancer IP addresses for additional regions. 

This means that cross-region failover and overflow are possible with global HTTP(S) load balancing.

If the instances in the region nearest to the end user fail or have insufficient capacity, the distribution algorithm automatically routes traffic to the next closest instance with available capacity.
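A toy version of that failover and overflow behaviour might look like the following (plain Python; the region names, round-trip times, and capacity numbers are invented for the example):

```python
# Hypothetical per-region state: distance from the user (RTT), current load,
# and configured capacity.
regions = [
    {"name": "europe-west1", "rtt_ms": 12, "in_flight": 950, "capacity": 1000},
    {"name": "europe-north1", "rtt_ms": 28, "in_flight": 200, "capacity": 1000},
    {"name": "us-east1", "rtt_ms": 95, "in_flight": 100, "capacity": 1000},
]

def pick_region(required_slots: int = 1) -> str:
    """Prefer the closest region; spill over to the next closest region with room."""
    for region in sorted(regions, key=lambda r: r["rtt_ms"]):
        if region["in_flight"] + required_slots <= region["capacity"]:
            region["in_flight"] += required_slots
            return region["name"]
    raise RuntimeError("all regions are at capacity")

print(pick_region())                  # europe-west1 while it still has headroom
regions[0]["in_flight"] = 1000        # simulate the nearest region filling up
print(pick_region())                  # overflows to europe-north1
```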

Proxy-based load balancers (TCP and SSL)

Google Cloud Platform offers load balancers that are proxy-based for TCP and SSL traffic.  The fascinating part is that both of them use the same globally distributed infrastructure.

When you need to deal with TCP traffic and don't need SSL offload, you can use a TCP proxy load balancer.


Whether or not you need SSL offload will often determine which of the two you employ.

If you need SSL offload and are dealing with TCP traffic, use an SSL proxy load balancer.


Let's imagine a case in which you will be receiving only HTTP and HTTPS traffic. 

In this case, you will select an HTTPS load balancer because you will only be receiving HTTP and HTTPS traffic. 

However, the website, like the majority of businesses, also has private workloads, such as application servers, that must be shielded from the public web. These services must expand and scale behind a private virtual IP that only internal instances can access.

Regional Layer-7 internal load balancing, built on Google's Andromeda network virtualization stack, is your best choice for this.

Internal Load Balancer

Internal HTTP(S) Load Balancing is a managed service based on the free and open-source Envoy proxy and built on the Andromeda network virtualization stack. This load balancer offers internal, proxy-based load balancing of Layer 7 application data.

Use URL maps to specify the traffic routing. 

The load balancer operates as the front end to your backends using an internal IP address.
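Conceptually, a URL map is a set of path rules that pick a backend service for each request. A minimal sketch of that routing logic (plain Python with hypothetical backend service names, not the actual GCP URL map resource) could look like this:

```python
# Hypothetical URL map: the longest matching path prefix wins, and a default
# backend service catches everything else.
URL_MAP = {
    "/api/": "backend-service-api",
    "/static/": "backend-service-static",
}
DEFAULT_SERVICE = "backend-service-web"

def route(path: str) -> str:
    """Return the backend service for a request path, as a URL map would."""
    matches = [prefix for prefix in URL_MAP if path.startswith(prefix)]
    if not matches:
        return DEFAULT_SERVICE
    return URL_MAP[max(matches, key=len)]

print(route("/api/v1/users"))   # backend-service-api
print(route("/index.html"))     # backend-service-web
```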

An Internal L7 load balancer is similar to the HTTP(S) Load Balancer and Network Load Balancer in that it is neither a hardware appliance nor an instance-based solution. 

It can support as many connections per second as you require because there is no load balancer in the path between the client and the backend instances.


Frequently Asked Questions

Is a load balancer software?

Software load balancing is how administrators route network traffic to different servers. Load balancers evaluate client requests by examining application-level characteristics (the IP address, the HTTP header, and the contents of the request).

How is load balancing done?

The most popular technique for a stateless load balancer is to hash the client's IP address down to a small number. The balancer then uses that number to determine which server should receive the request. Alternatively, it can choose a server at random or perform a round-robin process.
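A bare-bones illustration of that idea (plain Python; the server names are made up) hashes the client's IP address down to an index into the server pool, so the same client consistently reaches the same server:

```python
import hashlib

SERVERS = ["server-1", "server-2", "server-3"]  # hypothetical server pool

def pick_by_ip_hash(client_ip: str) -> str:
    """Hash the client's IP down to a small number and use it to index the pool."""
    small = int(hashlib.md5(client_ip.encode()).hexdigest(), 16) % len(SERVERS)
    return SERVERS[small]

print(pick_by_ip_hash("203.0.113.7"))   # the same IP always maps to the same server
print(pick_by_ip_hash("203.0.113.7"))
```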

What is load balancing in virtualization?

A Virtual Load Balancer provides more flexibility to balance the workload of a server by distributing traffic across multiple network servers. The aim of Virtual load balancing is to mimic software-driven infrastructure through virtualization.

What is the difference between a load balancer and an API gateway?

An API gateway connects clients to microservices, while a load balancer distributes traffic across multiple instances of the same microservice component as it scales out.

Is AWS API gateway a load balancer?

An API Gateway can manage and balance network traffic just as a load balancer does, only in a different way. Instead of distributing requests evenly across a set of backend resources (e.g., a cluster of servers), an API Gateway can be configured to direct requests to specific resources based on the endpoints being requested.

Conclusion

In this article, we have studied Load Balancing in the Cloud in detail, along with the different types of load balancers and their features.

We hope that this article has helped enhance your knowledge of Load Balancing in the Cloud. If you would like to learn more, check out our articles on cloud domains and cloud hypervisors.


Refer to our guided paths on Coding Ninjas Studio to learn more about DSA, Competitive Programming, JavaScript, System Design, etc. Enrol in our courses and refer to the mock tests and problems available. Take a look at the interview experiences and interview bundle for placement preparation.

Do upvote our blog to help other ninjas grow.

Merry Learning!
