Table of contents
1. Introduction
2. Configuring Reverse Proxy
3. Configuring TCP or UDP Load Balancing
4. Example
5. FAQs
6. Key Takeaways
Last Updated: Mar 27, 2024

NGINX TCP And UDP Load Balancing


Introduction

TCP stands for Transmission Control Protocol, and UDP stands for User Datagram Protocol. TCP is a connection-oriented protocol used by many popular day-to-day applications and services, such as LDAP, RTMP, and MySQL. UDP, on the other hand, is frequently used in non-transactional applications such as DNS, Syslog, and RADIUS.

NGINX is a web server, and it can also be used as a reverse proxy, mail proxy, HTTP cache, and load balancer. The Nginx Load Balancer attempts to distribute the workload evenly among various instances, either stand-alone or clustered. By doing so, it increases the overall throughput of the system. Nginx can balance both UDP and TCP traffic.


Configuring Reverse Proxy

To forward TCP connections or UDP datagrams from clients to an upstream (proxied) server, we first need to configure Nginx Open Source or Nginx Plus as a reverse proxy.

To configure a reverse proxy on Nginx or Nginx Plus, we will need to follow these steps:

  1. First, create a top-level stream block. All the virtual servers will be defined inside this block.
  2. Define one or more server blocks inside the stream block, one for each virtual server.
  3. Add the listen directive to each server block. The listen directive defines the IP address and/or port on which the server listens. For UDP, we further need to add the udp parameter. TCP is the default protocol, so no additional parameter is required for TCP.
  4. As the final step, add the proxy_pass directive to define the upstream or proxied server to which traffic is forwarded.

A complete reverse proxy configuration will look something like this:

stream {
    server {
        listen     3300;                       # TCP is the default protocol
        proxy_pass stream_back;                # forward to an upstream group
    }
    server {
        listen     3301;
        proxy_pass back.example.com:3301;      # forward directly to one server
    }
    server {
        listen     40 udp;                     # the udp parameter marks a UDP server
        proxy_pass dns_ser;
    }
    # ...
}

Configuring TCP or UDP Load Balancing

Now, after configuring the reverse proxy servers, we can move on to configuring load balancing. To configure the load balancer, we will need to follow these steps.

1.) First, we will need to define the groups of servers to be load balanced. To do so, we create one or more upstream blocks inside the stream block. Let us create two upstream blocks, one for TCP and one for UDP.

stream {  
    upstream stream_back {  
        # ...  
    }  
    upstream dns_ser {  
        # ...  
    }  
    # ...  
}


2.) Now, we will populate our upstream blocks with servers. Each upstream block contains a server directive for every upstream server, specifying its hostname or IP address and a mandatory port number.

stream {  
    upstream stream_back {  
        server back1.example.com:3300;  
        server back2.example.com:3301;  
        server back3.example.com:3302;  
        # ...  
    }  
    upstream dns_ser {  
        server 192.168.136.130:40;  
        server 192.168.136.131:41;  
        # ...  
    }   
    # ...  
} 


3.) Now, we will configure the load-balancing method that will be used to choose among the upstream servers. Nginx provides us with the following algorithms.


Round Robin: This is the default load-balancing method in Nginx and Nginx Plus. Requests are distributed across the servers in turn, with server weights taken into account. No directive is needed to enable it.

upstream back {  
    #we don’t need to define a balancing method for Round Robin
    server back1.example.com:3300;  
    server back2.example.com:3301;  
    server back3.example.com:3302; 
}
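As an illustration (this is a simulation of the idea, not Nginx's implementation), unweighted round robin simply cycles through the server list; the backend names below are hypothetical:

```python
from itertools import cycle

# Hypothetical backends, matching the config sketch above
servers = ["back1:3300", "back2:3301", "back3:3302"]
picker = cycle(servers)  # endless round-robin iterator

def next_server():
    """Return the next server in strict rotation."""
    return next(picker)
```

Six consecutive picks visit each of the three servers exactly twice, in order.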


Least Connections: As the name suggests, in this load-balancing method, a request is sent to the server with the fewest active connections, with server weights taken into account.

upstream back {  
    least_conn;
    server back1.example.com:3300;  
    server back2.example.com:3301;  
}
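The selection rule behind least_conn can be sketched as picking the backend with the fewest currently tracked connections (a simplified simulation with made-up connection counts; Nginx tracks connections internally and also factors in weights):

```python
def pick_least_conn(active):
    """active maps server name -> number of active connections.
    Return the server currently handling the fewest connections."""
    return min(active, key=active.get)

# Example state: back2 is the least loaded, so it gets the next request
connections = {"back1:3300": 7, "back2:3301": 2}
```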


Least Time: This method is only provided by Nginx Plus. A request is sent to the server with the lowest average latency and the fewest active connections. The latency measurement is chosen with a required parameter: connect (time to connect to the server), first_byte (time to receive the first byte of the response), or last_byte (time to receive the full response).

upstream back {
    least_time first_byte;
    server back1.example.com:3300;
    server back2.example.com:3301;
}


Hash: In this method, the server is determined using a user-defined key, which can be a string, a variable such as the client address ($remote_addr), or a combination of both. Adding the consistent parameter enables consistent hashing, which minimizes the remapping of keys when servers are added to or removed from the group.

upstream back{  
    hash $remote_addr;
    server back1.example.com:3301;  
    server back2.example.com:3300;  
}
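The effect of hash $remote_addr is that a given client IP always maps to the same backend. A simple modulo-based sketch of that mapping (Nginx's actual hash, especially with the consistent parameter, is more elaborate):

```python
import hashlib

servers = ["back1.example.com:3301", "back2.example.com:3300"]

def pick_by_hash(client_ip):
    """Deterministically map a client IP to one server via a stable hash."""
    digest = hashlib.md5(client_ip.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(servers)
    return servers[index]
```

Because the mapping depends only on the key, repeated requests from the same address land on the same server.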


Random: In this method, every request is passed to a randomly selected server. If the two parameter is specified, Nginx first randomly selects two servers, taking the server weights into account, and then chooses between them using the specified method. Three methods can be specified (the least_time variants require Nginx Plus):

least_conn - the server with the least number of active connections.

least_time=first_byte - the server with the least average time to receive the first byte of the response.

least_time=last_byte - the server with the least average time to receive the complete response.

upstream back {
    random two least_conn;
    server back1.example.com:3300;
    server back2.example.com:3301;
}
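The two-step selection described above is the classic "power of two choices" technique. A sketch with least_conn as the deciding criterion (a simulation with hypothetical connection counts, not Nginx internals):

```python
import random

def pick_random_two(active, rng=random):
    """Sample two distinct servers at random, then return the one
    with fewer active connections (power of two choices)."""
    a, b = rng.sample(list(active), 2)
    return a if active[a] <= active[b] else b
```

With only two servers in the group, both are always sampled, so the less loaded one always wins the comparison.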


4.) We can also add parameters to the server directives for finer-grained control. The most commonly used parameters are weight (the server's weight), max_conns (the maximum number of simultaneous connections), max_fails, and fail_timeout.

upstream stream_back {  
    hash   $remote_addr consistent;  
    server back1.example.com:3300 weight=4;  
    server back2.example.com:3301;  
    server back3.example.com:3302 max_conns=2;  
}  
upstream dns_ser {  
    least_conn;  
    server 192.168.136.130:40;  
    server 192.168.136.131:41;  
    # ...  
}  
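The weight parameter makes a server receive proportionally more traffic. A naive illustration expands each server into the rotation weight times (Nginx actually uses a smoother weighted round robin that interleaves picks, so this is a sketch of the proportions only):

```python
from itertools import cycle
from collections import Counter

def weighted_rotation(weights):
    """weights maps server -> integer weight.  Build a rotation in
    which each server appears `weight` times, then cycle through it."""
    expanded = [s for s, w in weights.items() for _ in range(w)]
    return cycle(expanded)

rotation = weighted_rotation({"back1:3300": 4, "back2:3301": 1, "back3:3302": 1})
```

Over 60 picks from this rotation, back1 (weight 4) receives 40 requests and the other two receive 10 each.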

Example

A complete configuration for TCP and UDP load balancing will look something like this:

stream {  
    upstream stream_back {  
        least_conn;  
        server back1.example.com:3300 weight=4;  
        server back2.example.com:3300 max_fails=4 fail_timeout=30s;  
        server back3.example.com:3300 max_conns=2;  
    }    
    upstream dns_ser {  
        least_conn;  
        server 192.168.136.130:40;  
        server 192.168.136.131:40;  
        server 192.168.136.132:40;  
    }   
    server {  
        listen        3300;  
        proxy_pass    stream_back;  
        proxy_timeout 3s;  
        proxy_connect_timeout 1s;  
    }  
    server {  
        listen     40 udp;  
        proxy_pass dns_ser;  
    }  
    server {  
        listen     3301;  
        proxy_pass back4.example.com:3301;  
    }  
} 

All UDP and TCP proxy-related functionalities are configured inside our stream block in the above example.

There are two upstream blocks, each containing three servers that host the same content as one another. The server name is followed by the port number. The upstream block named stream_back declares the TCP servers, whereas the block named dns_ser declares the UDP servers. Different parameters have been added to different servers to illustrate the variety of options available.


FAQs

  1. What is Nginx?
    NGINX is a web server, and it can also be used as a reverse proxy, mail proxy, HTTP cache, and load balancer.
     
  2. What is Load Balancing?
    The Nginx Load Balancer attempts to distribute the workload evenly among various instances, either stand-alone or clustered. By doing so, it increases the overall throughput of the system.
     
  3. What is Session Persistence?
    Nginx Plus provides the feature of Session Persistence. In Session Persistence, Nginx Plus identifies all the user sessions, and then it routes all the requests in a session to the same upstream server.
     
  4. What are the benefits of using NGINX?
    NGINX provides a lot of benefits like,
    Standardized JSON Configuration Files
    HTTP configuration API
    Consistent networking layer
    Reconfiguration without restarts
     
  5. What methods of Load Balancing does Nginx provide for TCP and UDP?
    Nginx provides the following methods of Load Balancing,
    Round Robin
    Least Connections
    Least Time (Nginx Plus only)
    Hash
    Random

Key Takeaways

This Blog covered all the necessary points about NGINX TCP and UDP Load Balancing and all the TCP and UDP Load Balancing methods present in Nginx and Nginx Plus.

Don’t stop here; check out Coding Ninjas for more unique courses and guided paths. Also, try Coding Ninjas Studio for more exciting articles, interview experiences, and fantastic Data Structures and Algorithms problems.
