Nginx Load Balancing

1. Load Balancing Overview

1.1 What Is Load Balancing

Load balancing distributes incoming network traffic across multiple servers, improving an application's availability, reliability, and performance.

text
                    ┌─────────────┐
                    │   Client    │
                    └──────┬──────┘
                           │
                    ┌──────▼──────┐
                    │    Nginx    │
                    │ Load Balancer│
                    └──────┬──────┘
            ┌──────────────┼──────────────┐
            │              │              │
     ┌──────▼──────┐┌──────▼──────┐┌──────▼──────┐
     │  Server 1   ││  Server 2   ││  Server 3   │
     └─────────────┘└─────────────┘└─────────────┘

1.2 Benefits of Load Balancing

  • High availability: a single server failure does not take down the whole service
  • Scalability: add servers easily to absorb traffic growth
  • Performance: requests are spread out so no single machine is overloaded
  • Flexible deployment: enables canary releases and blue-green deployments

2. upstream Configuration

2.1 Basic Configuration

nginx
upstream backend {
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
    server 192.168.1.12:8080;
}

server {
    listen 80;
    server_name api.example.com;
    
    location / {
        proxy_pass http://backend;
    }
}

2.2 Server Parameters

nginx
upstream backend {
    server 192.168.1.10:8080 weight=3;
    server 192.168.1.11:8080 weight=2;
    server 192.168.1.12:8080 backup;
    server 192.168.1.13:8080 down;
    server 192.168.1.14:8080 max_fails=3 fail_timeout=30s;
    server 192.168.1.15:8080 max_conns=1000;
}

Parameter     Description                                Default
weight        Server weight                              1
max_fails     Failed attempts before marking down        1
fail_timeout  Failure window and down time               10s
max_conns     Max concurrent connections                 0 (unlimited)
backup        Backup server (used when others are down)  -
down          Permanently marks the server offline       -

2.3 Dynamic upstream (NGINX Plus)

Note: the upstream_conf directive used below is the legacy NGINX Plus interface for on-the-fly upstream changes; newer NGINX Plus releases replace it with the api directive.

nginx
upstream backend {
    zone backend 64k;
    
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
}

server {
    location /api {
        proxy_pass http://backend;
    }
    
    location /upstream_conf {
        upstream_conf;
        allow 127.0.0.1;
        deny all;
    }
}

3. Load-Balancing Strategies

3.1 Round Robin

The default strategy distributes requests to the servers in sequence:

nginx
upstream backend {
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
    server 192.168.1.12:8080;
}

3.2 Weighted Round Robin

Requests are distributed in proportion to each server's weight:

nginx
upstream backend {
    server 192.168.1.10:8080 weight=5;
    server 192.168.1.11:8080 weight=3;
    server 192.168.1.12:8080 weight=2;
}

Request distribution ratio: 5:3:2
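
The weight-based rotation above can be sketched in Python with the smooth weighted round-robin algorithm that nginx implements (an illustrative reimplementation, not nginx's C code; server names mirror the config):

```python
def smooth_wrr(servers: dict[str, int], n: int) -> list[str]:
    """Smooth weighted round robin: each pick raises every server's
    current weight by its configured weight, chooses the largest,
    then subtracts the total weight from the winner. Over
    sum(weights) consecutive picks each server is chosen exactly
    `weight` times, and the picks are evenly interleaved."""
    current = {name: 0 for name in servers}
    total = sum(servers.values())
    picks = []
    for _ in range(n):
        for name, weight in servers.items():
            current[name] += weight
        chosen = max(current, key=current.get)
        current[chosen] -= total
        picks.append(chosen)
    return picks

if __name__ == "__main__":
    backends = {"192.168.1.10:8080": 5,
                "192.168.1.11:8080": 3,
                "192.168.1.12:8080": 2}
    print(smooth_wrr(backends, 10))  # 5:3:2 mix, smoothly interleaved
```

Note how the heavier server is not hammered with five requests in a row: the "smooth" variant spreads its turns across the cycle.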

3.3 Least Connections

Requests go to the server with the fewest active connections (scaled by weight when weights are configured):

nginx
upstream backend {
    least_conn;
    
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
    server 192.168.1.12:8080;
}
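
A sketch of the selection rule: when weights are set, least_conn compares active connections per unit of weight rather than raw counts (the connection counts below are invented for illustration):

```python
def pick_least_conn(active: dict[str, int], weights: dict[str, int]) -> str:
    """Pick the backend with the fewest active connections per unit
    of weight, mirroring least_conn's weighted comparison."""
    return min(active, key=lambda s: active[s] / weights.get(s, 1))

if __name__ == "__main__":
    active = {"192.168.1.10:8080": 6, "192.168.1.11:8080": 4, "192.168.1.12:8080": 5}
    weights = {"192.168.1.10:8080": 3, "192.168.1.11:8080": 1, "192.168.1.12:8080": 1}
    # 6 conns / weight 3 = 2.0 beats 4.0 and 5.0, so the weighted server wins
    print(pick_least_conn(active, weights))
```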

3.4 IP Hash

Requests are routed by client IP, providing simple session stickiness. For IPv4, nginx hashes only the first three octets of the address, so every client in the same /24 network lands on the same server:

nginx
upstream backend {
    ip_hash;
    
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
    server 192.168.1.12:8080;
}
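
The /24 behavior can be demonstrated with a toy picker (nginx's internal hash function differs; md5 stands in for it here, but the property that only the first three octets matter is the same):

```python
import hashlib

def ip_hash_pick(client_ip: str, servers: list[str]) -> str:
    """Route by client IPv4 address the way ip_hash does: only the
    first three octets feed the hash, so every address in a /24
    shares one backend. md5 is a stand-in for nginx's own hash."""
    network = ".".join(client_ip.split(".")[:3])
    digest = int(hashlib.md5(network.encode()).hexdigest(), 16)
    return servers[digest % len(servers)]

if __name__ == "__main__":
    servers = ["192.168.1.10:8080", "192.168.1.11:8080", "192.168.1.12:8080"]
    # two clients from the same /24 always hit the same backend
    print(ip_hash_pick("203.0.113.7", servers))
    print(ip_hash_pick("203.0.113.200", servers))
```

This is also why ip_hash can skew load: many clients behind one NAT or one corporate network all map to a single backend.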

3.5 Consistent Hashing

Requests are hashed by a key of your choice; the consistent parameter enables ketama consistent hashing, so adding or removing a server remaps only a small share of keys:

nginx
upstream backend {
    hash $request_uri consistent;
    
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
    server 192.168.1.12:8080;
}

Commonly used keys:

  • $remote_addr: client IP
  • $request_uri: request URI
  • $cookie_sessionid: session cookie
  • $arg_user_id: URL query parameter
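
A minimal hash ring shows what the consistent parameter buys: when a server is removed, only the keys that its ring segments owned get remapped (md5 and the virtual-node count are illustrative stand-ins for nginx's ketama implementation):

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent-hash ring with virtual nodes. Removing a
    server remaps only the keys its ring segments owned; every
    other key keeps its assignment."""

    def __init__(self, servers: list[str], vnodes: int = 100) -> None:
        self.ring = []
        for server in servers:
            for i in range(vnodes):
                self.ring.append((self._hash(f"{server}#{i}"), server))
        self.ring.sort()
        self._points = [point for point, _ in self.ring]

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def pick(self, key: str) -> str:
        # first ring point clockwise from the key's hash (wrapping around)
        idx = bisect.bisect(self._points, self._hash(key)) % len(self.ring)
        return self.ring[idx][1]

if __name__ == "__main__":
    servers = ["192.168.1.10:8080", "192.168.1.11:8080", "192.168.1.12:8080"]
    full = HashRing(servers)
    smaller = HashRing(servers[:2])  # drop the third server
    keys = [f"/api/item/{i}" for i in range(1000)]
    moved = sum(full.pick(k) != smaller.pick(k) for k in keys)
    print(f"{moved}/1000 keys remapped")  # roughly a third, never all
```

With a plain modulo hash, dropping one of three servers would remap about two thirds of all keys; here only the removed server's share moves, which is exactly the cache-friendliness the comparison table below credits to consistent hashing.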

3.6 Random

Picks servers at random; with "two least_conn", nginx samples two servers at random and routes the request to the one with fewer active connections:

nginx
upstream backend {
    random two least_conn;
    
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
    server 192.168.1.12:8080;
}

4. Strategy Comparison

Strategy            Best for                         Pros                          Cons
Round robin         Servers with similar capacity    Simple and fair               Ignores server differences
Weighted RR         Servers with differing capacity  Accounts for capacity         Weights must be tuned by hand
Least connections   Highly variable request times    Balances dynamically          Must track connection counts
IP hash             Session stickiness               Easy session persistence      Load can become uneven
Consistent hashing  Caching                          Minimizes cache invalidation  More complex to configure

5. Health Checks

5.1 Passive Health Checks

Open-source nginx supports passive health checks out of the box:

nginx
upstream backend {
    server 192.168.1.10:8080 max_fails=3 fail_timeout=30s;
    server 192.168.1.11:8080 max_fails=3 fail_timeout=30s;
    server 192.168.1.12:8080 max_fails=3 fail_timeout=30s;
}

How it works:

  1. Each failed request increments the server's failure counter
  2. When the counter reaches max_fails within fail_timeout, the server is marked unavailable for fail_timeout
  3. After fail_timeout elapses, nginx tries the server again
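
The three steps can be modeled as a small state machine (an illustrative simulation of max_fails/fail_timeout with an injected clock, not nginx's actual bookkeeping):

```python
class PassiveCheck:
    """Toy model of max_fails / fail_timeout: failures inside the
    window mark the server down for fail_timeout; afterwards it is
    eligible for traffic again."""

    def __init__(self, max_fails: int = 3, fail_timeout: float = 30.0) -> None:
        self.max_fails = max_fails
        self.fail_timeout = fail_timeout
        self.fails = 0
        self.window_start = 0.0
        self.down_until = 0.0

    def available(self, now: float) -> bool:
        return now >= self.down_until

    def report_failure(self, now: float) -> None:
        if now - self.window_start > self.fail_timeout:
            self.fails = 0            # stale window: start counting afresh
            self.window_start = now
        self.fails += 1
        if self.fails >= self.max_fails:
            self.down_until = now + self.fail_timeout  # step 2: mark down
            self.fails = 0

    def report_success(self) -> None:
        self.fails = 0                # a success resets the counter

if __name__ == "__main__":
    check = PassiveCheck(max_fails=3, fail_timeout=30.0)
    for t in (0.0, 1.0, 2.0):
        check.report_failure(t)
    print(check.available(10.0), check.available(40.0))  # False True
```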

5.2 Active Health Checks (NGINX Plus)

nginx
upstream backend {
    zone backend 64k;
    
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
    server 192.168.1.12:8080;
}

server {
    location / {
        proxy_pass http://backend;
        health_check interval=5s fails=3 passes=2 uri=/health;
    }
}

5.3 Third-Party Modules

The check directives below come from the third-party nginx_upstream_check_module (by yaoweibin), which adds active health checks to open-source nginx:

nginx
upstream backend {
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
    
    check interval=3000 rise=2 fall=3 timeout=1000 type=http;
    check_http_send "GET /health HTTP/1.0\r\n\r\n";
    check_http_expect_alive http_2xx http_3xx;
}

5.4 Custom Health-Check Endpoint

nginx
location /health {
    access_log off;
    return 200 "OK\n";
    add_header Content-Type text/plain;
}
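
On the application side, the endpoint being probed can be equally small. A stand-alone sketch using only the Python standard library (a hypothetical stand-in for your real backend service):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = b"OK\n"
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, fmt, *args):
        pass  # mirror nginx's "access_log off" for the probe path

if __name__ == "__main__":
    server = HTTPServer(("127.0.0.1", 0), HealthHandler)  # port 0: any free port
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = "http://127.0.0.1:%d/health" % server.server_address[1]
    with urllib.request.urlopen(url) as resp:
        print(resp.status, resp.read().decode().strip())
    server.shutdown()
```

A real health endpoint usually also verifies downstream dependencies (database, cache) before answering 200, so the load balancer stops sending traffic when the instance cannot actually serve.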

6. Session Persistence

6.1 IP Hash

nginx
upstream backend {
    ip_hash;
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
}

6.2 Consistent Hashing

nginx
upstream backend {
    hash $cookie_sessionid consistent;
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
}

6.3 Sticky Cookie

The sticky cookie directive is available in NGINX Plus only; open-source nginx needs a third-party sticky module for equivalent behavior:

nginx
upstream backend {
    sticky cookie srv_id expires=1h domain=.example.com path=/;
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
}

6.4 Session-Persistence Comparison

Method              How it works           Pros                       Cons
IP hash             Route by client IP     Simple                     Load can become uneven
Cookie              Route by cookie value  Precise control            Clients must accept cookies
Consistent hashing  Route by hashed key    Little churn when scaling  Needs a well-chosen key

7. Upstream Keepalive

7.1 keepalive Configuration

Connection reuse only works when proxied requests use HTTP/1.1 with the Connection header cleared, as the server block below does; keepalive_timeout and keepalive_requests are accepted inside upstream since nginx 1.15.3:

nginx
upstream backend {
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
    keepalive 32;
    keepalive_timeout 60s;
    keepalive_requests 1000;
}

server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
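
The effect being configured here, several requests riding one TCP connection, can be observed with Python's standard library alone; this is an illustrative local demo, not nginx itself (handler name and addresses are made up):

```python
import threading
from http.client import HTTPConnection
from http.server import BaseHTTPRequestHandler, HTTPServer

class KeepAliveHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # like proxy_http_version 1.1: enables keep-alive

    def do_GET(self):
        body = b"hi"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))  # required for reuse
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet

if __name__ == "__main__":
    server = HTTPServer(("127.0.0.1", 0), KeepAliveHandler)  # port 0: any free port
    threading.Thread(target=server.serve_forever, daemon=True).start()

    conn = HTTPConnection("127.0.0.1", server.server_address[1])
    for _ in range(3):  # three requests over a single TCP connection
        conn.request("GET", "/")
        conn.getresponse().read()
    conn.close()
    server.shutdown()
```

Between nginx and its backends the keepalive pool does the same thing at scale: it skips the TCP (and possibly TLS) handshake for every proxied request.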

7.2 Parameter Reference

Parameter           Description
keepalive           Number of idle connections kept per worker process
keepalive_timeout   How long an idle connection stays open
keepalive_requests  Max requests served over one connection

8. Canary Releases

8.1 Weight-Based Canary

nginx
upstream backend {
    server 192.168.1.10:8080 weight=90;
    server 192.168.1.11:8080 weight=10;
}

8.2 Header-Based Canary

nginx
upstream stable {
    server 192.168.1.10:8080;
}

upstream canary {
    server 192.168.1.11:8080;
}

map $http_x_canary $backend {
    default stable;
    "true" canary;
}

server {
    location / {
        proxy_pass http://$backend;
    }
}

8.3 Cookie-Based Canary

nginx
map $cookie_version $backend {
    default stable;
    "v2" canary;
}

upstream stable {
    server 192.168.1.10:8080;
}

upstream canary {
    server 192.168.1.11:8080;
}

server {
    location / {
        proxy_pass http://$backend;
    }
}

8.4 IP-Based Canary

split_clients hashes the given value (here the client address) and deterministically routes a fixed percentage of clients to each variant:

nginx
split_clients "${remote_addr}" $backend {
    10% canary;
    * stable;
}

upstream stable {
    server 192.168.1.10:8080;
}

upstream canary {
    server 192.168.1.11:8080;
}

server {
    location / {
        proxy_pass http://$backend;
    }
}
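
split_clients can be mimicked in a few lines: hash the key into a fixed bucket space and compare against the percentage threshold (nginx uses MurmurHash2 over a 32-bit space; md5 is an illustrative stand-in):

```python
import hashlib

def split_client(key: str, canary_percent: float = 10.0) -> str:
    """Stable percentage split like split_clients: hash the key into
    10,000 buckets and send the lowest buckets to the canary. nginx
    hashes with MurmurHash2 over 32 bits; md5 stands in here."""
    bucket = int(hashlib.md5(key.encode()).hexdigest(), 16) % 10_000
    return "canary" if bucket < canary_percent * 100 else "stable"

if __name__ == "__main__":
    ips = [f"10.0.{i // 256}.{i % 256}" for i in range(10_000)]
    share = sum(split_client(ip) == "canary" for ip in ips) / len(ips)
    print(f"canary share: {share:.1%}")  # close to 10%
```

Because the split is a pure function of the key, a given client always sees the same variant across requests, which is what makes hash-based canarying safe for stateful flows.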

9. Multi-Protocol Support

9.1 TCP Load Balancing

The stream block sits at the same level as http in nginx.conf and requires nginx built with the stream module (--with-stream):

nginx
stream {
    upstream mysql_servers {
        server 192.168.1.10:3306;
        server 192.168.1.11:3306;
    }
    
    server {
        listen 3306;
        proxy_pass mysql_servers;
    }
}

9.2 UDP Load Balancing

nginx
stream {
    upstream dns_servers {
        server 192.168.1.10:53;
        server 192.168.1.11:53;
    }
    
    server {
        listen 53 udp;
        proxy_pass dns_servers;
    }
}

9.3 TCP Health Checks

Active health checks in the stream module (the health_check directive) are NGINX Plus only; open-source nginx relies on passive failure detection, so the example keeps timeouts tight for fast failover:

nginx
stream {
    upstream mysql_servers {
        zone mysql_servers 64k;
        server 192.168.1.10:3306;
        server 192.168.1.11:3306;
    }
    
    server {
        listen 3306;
        proxy_pass mysql_servers;
        proxy_timeout 3s;
        proxy_connect_timeout 1s;
    }
}

10. Monitoring and Statistics

10.1 Status Monitoring

Note that stub_status exposes only server-wide connection counters, not per-upstream statistics; per-upstream metrics require the NGINX Plus api or a third-party module such as nginx-module-vts:

nginx
upstream backend {
    zone backend 64k;
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
}

server {
    location /nginx_status {
        stub_status on;
        allow 127.0.0.1;
        deny all;
    }
}

10.2 Prometheus Monitoring

Using the third-party nginx-module-vts; besides the HTML page shown here, it can emit Prometheus text format via vhost_traffic_status_display_format prometheus:

nginx
vhost_traffic_status_zone;

server {
    location /status {
        vhost_traffic_status_display;
        vhost_traffic_status_display_format html;
    }
}

11. Complete Configuration Example

nginx
upstream api_backend {
    zone api_backend 64k;
    
    least_conn;
    
    server 192.168.1.10:8080 weight=3 max_fails=3 fail_timeout=30s;
    server 192.168.1.11:8080 weight=3 max_fails=3 fail_timeout=30s;
    server 192.168.1.12:8080 weight=2 max_fails=3 fail_timeout=30s;
    server 192.168.1.13:8080 backup;
    
    keepalive 32;
    keepalive_timeout 60s;
    keepalive_requests 1000;
}

server {
    listen 80;
    server_name api.example.com;
    
    location / {
        proxy_pass http://api_backend;
        
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Connection "";
        
        proxy_connect_timeout 5s;
        proxy_send_timeout 30s;
        proxy_read_timeout 30s;
        
        proxy_next_upstream error timeout http_500 http_502 http_503 http_504;
        proxy_next_upstream_tries 3;
        proxy_next_upstream_timeout 10s;
    }
    
    location /health {
        access_log off;
        return 200 "OK\n";
        add_header Content-Type text/plain;
    }
    
    location /nginx_status {
        stub_status on;
        allow 127.0.0.1;
        deny all;
    }
}

12. Summary

In this chapter we covered:

  1. Load-balancing basics: distributing requests across multiple servers
  2. upstream configuration: server parameters and weights
  3. Balancing strategies: round robin, least connections, IP hash, consistent hashing
  4. Health checks: passive and active
  5. Session persistence: several stickiness options
  6. Keepalive: tuning upstream connection reuse
  7. Canary releases: by weight, header, cookie, or IP
  8. Multi-protocol support: TCP/UDP load balancing

With load balancing under your belt, let's move on to the next chapter: virtual host configuration!

Last updated: 2026-03-27