Web Servers

How to configure a load balancer in Nginx

Set up an Nginx upstream pool and reverse proxy to distribute traffic across multiple backend servers. This guide covers configuring upstream blocks, health checks, and failover settings on Ubuntu 24.04.

Roy S
Updated 14h ago

You will configure Nginx to act as a load balancer distributing traffic across multiple backend application servers. These steps target Ubuntu 24.04 with Nginx 1.24.0 installed via the official repository or APT. You will define an upstream block, configure health checks, and set up a reverse proxy to forward requests to the pool.

Prerequisites

  • Ubuntu 24.04 LTS server with root or sudo privileges.
  • Nginx 1.24.0 installed and running (verify with nginx -v).
  • At least two backend application servers reachable via private IP.
  • Firewall rules allowing traffic on ports 80 and 443.
  • Basic knowledge of Nginx configuration file syntax.

Step 1: Create the upstream block

Open the Nginx site configuration file to define the upstream pool. You will create a block named backend_servers that lists the IP addresses of your backend servers. Each server entry can include a weight parameter to control traffic distribution ratios.

sudo nano /etc/nginx/sites-available/default

Insert the following configuration into the file (files under sites-available are already included within the http block, so no extra wrapper is needed). Replace 192.168.1.10, 192.168.1.11, and 192.168.1.12 with your actual backend server IPs, and adjust weights as needed for traffic balancing.

upstream backend_servers {
    server 192.168.1.10:80 weight=5;
    server 192.168.1.11:80 weight=5;
    server 192.168.1.12:80 backup;
}

server {
    listen 80;
    server_name your-domain.com;

    location / {
        proxy_pass http://backend_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
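The weight parameters translate directly into traffic share: under weighted round-robin, each non-backup server receives roughly weight divided by the total weight of the requests. A quick back-of-the-envelope check, using hypothetical weights of 5 and 3 rather than the equal weights above:

```shell
# Assumed model: share = weight / sum(weights) for non-backup servers.
awk 'BEGIN {
    w1 = 5; w2 = 3; total = w1 + w2        # hypothetical weights
    printf "server1=%.1f%% server2=%.1f%%\n", 100*w1/total, 100*w2/total
}'
# Prints: server1=62.5% server2=37.5%
```

With the equal weight=5 entries in the example above, the two primaries simply split traffic 50/50.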

Step 2: Configure health checks and failover

Add passive health check parameters to the upstream block so Nginx removes failed servers automatically. Use the max_fails and fail_timeout directives to define how many failed requests mark a server as unavailable and how long it stays out of rotation before being retried. Note that open-source Nginx performs only passive checks based on real client requests; active health probes are an NGINX Plus feature.

upstream backend_servers {
    server 192.168.1.10:80 max_fails=3 fail_timeout=30s weight=5;
    server 192.168.1.11:80 max_fails=3 fail_timeout=30s weight=5;
    server 192.168.1.12:80 max_fails=3 fail_timeout=30s backup;
}

The backup keyword ensures the third server only receives traffic when both primary servers are unavailable. This setup provides automatic failover without manual intervention.
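Round-robin is only the default strategy. If your backends keep per-client session state, the standard ip_hash directive pins each client IP to one server; least_conn instead routes each request to the server with the fewest active connections. A minimal sketch using least_conn, with the same hypothetical IPs:

```nginx
upstream backend_servers {
    least_conn;                      # or: ip_hash; for sticky client mapping
    server 192.168.1.10:80 max_fails=3 fail_timeout=30s;
    server 192.168.1.11:80 max_fails=3 fail_timeout=30s;
    server 192.168.1.12:80 backup;
}
```

The weight parameter can still be combined with least_conn if some servers have more capacity than others.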

Step 3: Add rate limiting and connection limits

Protect your backend servers from overload by adding rate limiting and connection limits to the proxy configuration. Use the limit_conn and limit_req directives to control concurrent connections and request rates. The limit_conn_zone and limit_req_zone directives must appear at the http level; in a site file under sites-available, place them above the server block.

limit_conn_zone $binary_remote_addr zone=addr:10m;
limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s;

server {
    listen 80;
    server_name your-domain.com;

    location / {
        limit_conn addr 10;
        limit_req zone=one burst=5 nodelay;

        proxy_pass http://backend_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Adjust the rate and burst values based on your backend server capacity and expected traffic volume.
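To build intuition for the burst value: with nodelay, roughly 1 + burst simultaneous requests from a single client are served immediately and the rest are rejected with 503. A simplified simulation of that accounting (an approximation of Nginx's leaky-bucket algorithm, not its exact implementation):

```shell
# Simplified model (assumption): rate=10r/s, burst=5, nodelay,
# and 20 requests from one client arriving in the same instant.
awk 'BEGIN {
    burst = 5; excess = 0; accepted = 0; rejected = 0
    for (i = 1; i <= 20; i++) {
        if (i == 1)            { accepted++ }           # first request fits the rate itself
        else if (excess < burst) { excess++; accepted++ } # absorbed by the burst allowance
        else                   { rejected++ }           # over the burst: 503 returned
    }
    printf "accepted=%d rejected=%d\n", accepted, rejected
}'
# Prints: accepted=6 rejected=14
```

Requests spread out over time drain the bucket at the configured rate, so steady traffic below 10r/s is never rejected.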

Step 4: Test and reload Nginx

Validate the configuration syntax before applying changes to live traffic. Run the test command to check for errors in the configuration file. If successful, reload Nginx to apply the new settings without dropping active connections.

sudo nginx -t

Expected output:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Once the test passes, reload Nginx to activate the load balancer configuration.

sudo nginx -s reload

Verify the installation

Send multiple requests to the load balancer to confirm traffic is distributed across backend servers. Use curl with the -i flag to include the response headers in the output.

curl -i -H "Host: your-domain.com" http://your-domain.com

Check the access logs on each backend server to verify which one handled each request, and confirm that the X-Forwarded-For header they receive contains the original client IP, which shows the proxy headers are set correctly.

sudo tail -f /var/log/nginx/access.log
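The default combined access log does not record which upstream answered a request. One way to capture that, using Nginx's built-in $upstream_addr and $upstream_response_time variables (the log file name here is an arbitrary choice):

```nginx
# http context (e.g. /etc/nginx/nginx.conf):
log_format upstreamlog '$remote_addr -> $upstream_addr [$time_local] '
                       '"$request" $status $upstream_response_time';

# server block:
access_log /var/log/nginx/lb-access.log upstreamlog;
```

After a reload, each log line names the backend that served the request.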

Observe that requests alternate between the primary servers and skip the backup server unless a failure occurs.

Troubleshooting

If the load balancer does not distribute traffic evenly, verify the upstream server IPs are reachable. Use ping or curl to test connectivity to each backend server from the Nginx server.

curl -I http://192.168.1.10:80
curl -I http://192.168.1.11:80

Ensure the backend servers respond with a valid HTTP status code. A 502 Bad Gateway error indicates an upstream server is unreachable or returning an invalid response.

Check Nginx error logs for specific failure messages. Look for lines containing upstream prematurely closed connection or connect() failed.

sudo tail -f /var/log/nginx/error.log

When a backend server exceeds max_fails, Nginx stops routing traffic to it for the fail_timeout period. Open-source Nginx does not expose a live upstream status endpoint, so verify failover by sending test requests while watching the error log for messages about the failed peer.

sudo grep -i "upstream" /var/log/nginx/error.log | tail

If configuration changes do not take effect, re-test the configuration and reload Nginx. Ensure no syntax errors exist in the configuration file before reloading.

sudo nginx -t && sudo nginx -s reload

Confirm firewall rules allow traffic on ports 80 and 443. Use ufw or iptables to verify open ports.

sudo ufw status

Ensure the standby server is marked with the backup keyword. Without it, Nginx treats the server as a regular pool member and routes traffic to it according to its weight.
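A related upstream parameter worth knowing for maintenance is down, which drains a server without deleting its entry from the pool (shown here against the same hypothetical IPs):

```nginx
upstream backend_servers {
    server 192.168.1.10:80 weight=5;
    server 192.168.1.11:80 down;     # temporarily excluded from rotation
    server 192.168.1.12:80 backup;
}
```

Reload Nginx after the change; new requests skip the down server until the parameter is removed.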


Tags: Ubuntu, Nginx, Web Server, Load Balancer