Tuning NGINX for Performance
Original article: http://nginx.com/blog/tuning-nginx/
NGINX is well known as a high performance load balancer, cache and web server, powering over 40% of the busiest websites in the world. Most of the default NGINX and Linux settings work well for most use cases, but it can be necessary to do some tuning to achieve optimal performance. This blog post will discuss some of the NGINX and Linux settings to consider when tuning a system. There are many settings available, but for this post we will cover the few settings recommended for most users to consider adjusting. The settings not covered in this post are ones that should only be considered by those with a deep understanding of NGINX and Linux, or after a recommendation by the NGINX support or professional services teams. NGINX professional services has worked with some of the world’s busiest websites to tune NGINX for the maximum level of performance and is available to work with any customer who needs to get the most out of their system.
Introduction
A basic understanding of the NGINX architecture and configuration concepts is assumed. This post will not attempt to duplicate the NGINX documentation, but will provide an overview of the various options with links to the relevant documentation.
A good rule to follow when doing tuning is to change one setting at a time and if it does not result in a positive change in performance, then to set it back to the default value.
We will start with a discussion of Linux tuning since some of these values can impact some of the values you will use for your NGINX configuration.
Linux Configuration
Modern Linux kernels (2.6+) do a good job in sizing the various settings, but there are some settings that you may want to change. If the operating system settings are too low then you will see errors in the kernel log indicating that you should adjust them. There are many possible Linux settings but we will cover those settings that are most likely in need of tuning for normal workloads. Please refer to the Linux documentation for details on adjusting these settings.
The Backlog Queue
The following settings relate directly to connections and how they are queued. If you have a high rate of incoming connections and you are seeing uneven levels of performance, for example some connections appear to be stalling, then tuning these settings may help.
net.core.somaxconn: This sets the size of the queue for connections waiting for NGINX to accept them. Since NGINX accepts connections very quickly, this value does not usually need to be very large, but the default can be very low, so increasing it can be a good idea if you have a high traffic website. If the setting is too low then you should see error messages in the kernel log; increase this value until the errors stop. Note: if you set this to a value greater than 512, you should change your NGINX configuration using the backlog parameter of the listen directive to match this number.
(Translator’s note: the backlog parameter of the listen directive sets the maximum length of the queue of pending connections. By default, backlog is set to -1 on FreeBSD and Mac OS X, and to 511 on other platforms.)
net.core.netdev_max_backlog: This sets the maximum number of packets that can be queued by the kernel after being received from the network card, before being handed off to the CPU. For machines with a high amount of bandwidth this value may need to be increased. Check the documentation for your network card for advice on this setting or check the kernel log for errors relating to this setting.
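Taken together, these kernel settings can be applied persistently through /etc/sysctl.conf. The values below are a sketch, not a recommendation; tune them against your own kernel log and traffic:

```
# /etc/sysctl.conf -- example values only; raise until kernel-log errors stop
# Queue of connections waiting for NGINX to accept them
net.core.somaxconn = 4096
# Packets queued by the kernel before CPU processing
net.core.netdev_max_backlog = 4096
```

Apply with `sysctl -p`. If somaxconn is raised above 512, remember to raise the backlog parameter of the NGINX listen directive to match.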
File Descriptors
File descriptors are operating system resources used to handle things such as connections and open files. NGINX can use up to two file descriptors per connection; for example, if it is proxying, it can have one for the client connection and another for the connection to the proxied server, although if HTTP keepalives are used this ratio will be much lower. For a system that will see a large number of connections, these settings may need to be adjusted:
fs.file-max: This is the system wide limit for file descriptors.
nofile: This is the user file descriptor limit and is set in the /etc/security/limits.conf file.
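As a sketch, the two limits might be raised like this (the values and the `nginx` user name are illustrative):

```
# /etc/sysctl.conf -- system-wide file descriptor limit (example value)
fs.file-max = 200000

# /etc/security/limits.conf -- per-user limit for the account NGINX runs as
nginx  soft  nofile  100000
nginx  hard  nofile  100000
```

NGINX also provides the worker_rlimit_nofile directive, which lets worker processes raise their own descriptor limit without editing limits.conf.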
Ephemeral ports
When NGINX is acting as a proxy, each connection to an upstream server uses a temporary, or ephemeral port.
net.ipv4.ip_local_port_range: This specifies the starting and ending port value to use. If you see that you are running out of ports, you can increase this range. A common setting is to use ports 1024 to 65000.
net.ipv4.tcp_fin_timeout: This specifies how long a port must be unused before it can be used again for another connection. This usually defaults to 60 seconds but can usually be safely reduced to 30 or even 15 seconds.
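A minimal sysctl sketch for both ephemeral-port settings (values are the common ones mentioned above, not universal recommendations):

```
# /etc/sysctl.conf -- ephemeral port tuning (illustrative values)
# Widen the range of local ports available for upstream connections
net.ipv4.ip_local_port_range = 1024 65000
# Free ports sooner after a connection closes (default is 60 seconds)
net.ipv4.tcp_fin_timeout = 30
```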
NGINX Configuration
The following are some NGINX directives that can impact performance. As stated above, we will only be discussing those directives that we recommend most users look at adjusting. Any directive not mentioned here is one that we recommend not to be changed without direction from the NGINX team.
Worker Processes
NGINX can run multiple worker processes, each capable of processing a large number of connections. You can control how many worker processes are run and how connections are handled with the following directives:
worker_processes: This controls the number of worker processes that NGINX will run. In most cases, running one worker process per CPU core works well. This can be achieved by setting this directive to “auto”. There are times when you may want to increase this number, such as when the worker processes have to do a lot of disk I/O. The default is 1.
worker_connections: This is the maximum number of connections that can be processed at one time by each worker process. The default is 512, but most systems can handle a larger number. What this number should be set to will depend on the size of the server and the nature of the traffic and can be discovered through testing.
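In nginx.conf, these two directives might look like the following sketch (the connection count is illustrative and should be found through testing):

```
# nginx.conf -- worker tuning (example values)
worker_processes auto;    # one worker process per CPU core

events {
    worker_connections 4096;    # per-worker limit; the default is 512
}
```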
Keepalives
Keepalive connections can have a major impact on performance by reducing the CPU and network overhead needed for opening and closing connections. NGINX terminates all client connections and has separate and independent connections to the upstream servers. NGINX supports keepalives for the client and upstream servers. The following directives deal with client keepalives:
keepalive_requests: This is the number of requests a client can make over a single keepalive connection. The default is 100, but can be set to a much higher value and this can be especially useful for testing where the load generating tool is sending many requests from a single client.
keepalive_timeout: How long a keepalive connection will remain open once it becomes idle.
The following directives deal with upstream keepalives:
keepalive: This specifies the number of idle keepalive connections to an upstream server that remain open for each worker process. There is no default value for this directive.
To enable keepalive connections to the upstream you must add the following directives:
proxy_http_version 1.1;
proxy_set_header Connection “”;
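Combining the directives above, an upstream keepalive configuration might be sketched as follows; the upstream name and server addresses are hypothetical:

```
upstream backend {
    server 10.0.0.1:8080;    # example addresses
    server 10.0.0.2:8080;
    keepalive 32;            # idle keepalive connections kept per worker
}

server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;          # keepalive requires HTTP/1.1
        proxy_set_header Connection "";  # clear the Connection header
    }
}
```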
Access Logging
Logging each request takes both CPU and I/O cycles, and one way to reduce this impact is to enable access log buffering. This will cause NGINX to buffer a series of log entries and write them to the file at one time rather than as separate write operations. Access log buffering is enabled by specifying the “buffer=size” option of the access_log directive. This sets the size of the buffer to be used. You can also use the “flush=time” option to tell NGINX to write the entries in the buffer after this amount of time. With these two options defined, NGINX will write entries to the log file when the next log entry will not fit into the buffer or if the entries in the buffer are older than the time specified for the flush parameter. Log entries will also be written when a worker process is re-opening log files or is shutting down. It is also possible to disable access logging completely.
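Both options can be sketched in a single access_log line (buffer size, flush interval, and log path are example values):

```
# Flush when the 32k buffer fills or entries are older than 5 seconds
access_log /var/log/nginx/access.log combined buffer=32k flush=5s;

# Or disable access logging entirely
access_log off;
```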
Sendfile
Sendfile is an operating system feature that can be enabled in NGINX. It can provide for faster TCP data transfers by doing in-kernel copying of data from one file descriptor to another, often achieving zero-copy. NGINX can use it to write cached or on-disk content down a socket, without any context switching to user space, making it extremely fast and using less CPU overhead. Because the data never touches user space, it’s not possible to insert filters that need to access the data into the processing chain, so you cannot use any of the NGINX filters that change the content, e.g. the gzip filter. It is disabled by default.
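Enabling it is a one-line change in nginx.conf:

```
# Use in-kernel copies when serving static or cached files
sendfile on;
```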
Limits
NGINX and NGINX Plus allow you to set various limits that can be used to help control the resources consumed by clients and therefore impact the performance of your system and also affect user experience and security. The following are some of these directives:
limit_conn/limit_conn_zone: These directives can be used to limit the number of connections NGINX will allow, for example from a single client IP address. This can help prevent individual clients from opening too many connections and consuming too many resources.
limit_rate: This will limit the amount of bandwidth allowed for a client on a single connection. This can prevent the system from being overloaded by certain clients and can help to ensure that all clients receive good quality of service.
limit_req/limit_req_zone: These directives can be used to limit the rate of requests being processed by NGINX. As with limit_rate this can help prevent the system from being overloaded by certain clients and can help to ensure that all clients receive good quality of service. They can also be used to improve security, especially for login pages, by limiting the request rate so that it is adequate for a human user but one that will slow programs trying to access your application.
max_conns: This is set for a server in an upstream group and is the maximum number of simultaneous connections allowed to that server. This can help prevent the upstream servers from being overloaded. The default is zero, meaning that there is no limit.
queue: If max_conns is set for any upstream servers, then the queue directive governs what happens when a request cannot be processed because there are no available servers in the upstream group and some of those servers have reached the max_conns limit. This directive can be set to the number of requests to queue and for how long. If this directive is not set, then no queueing will occur.
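A sketch of how these directives fit together inside the http block; zone names, sizes, rates, and the upstream are all example values, and the queue directive is available in NGINX Plus:

```
# Shared-memory zones keyed by client IP address
limit_conn_zone $binary_remote_addr zone=per_ip:10m;
limit_req_zone  $binary_remote_addr zone=req_per_ip:10m rate=10r/s;

server {
    limit_conn per_ip 10;                  # max 10 connections per client IP
    limit_req  zone=req_per_ip burst=20;   # throttle the request rate
    limit_rate 500k;                       # cap bandwidth per connection

    location / {
        proxy_pass http://backend;         # hypothetical upstream
    }
}

upstream backend {
    server 10.0.0.1:8080 max_conns=100;    # cap concurrent upstream connections
    queue 100 timeout=30s;                 # NGINX Plus: queue excess requests
}
```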
Additional considerations
There are additional features of NGINX that can be used to increase the performance of a web application that don’t really fall under the heading of tuning but are worth mentioning because their impact can be considerable. We will discuss two of these features.
Caching
By enabling caching on an NGINX instance that is load balancing a set of web or application servers, you can dramatically improve the response time to the client while at the same time dramatically reducing the load on the backend servers. Caching is a subject of its own and will not be covered here. For more information on configuring NGINX for caching please see: NGINX Admin Guide – Caching.
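As a minimal sketch only (the cache path, zone name, sizes, and upstream are placeholders; see the Admin Guide for real guidance):

```
# Cache proxied responses on disk, indexed by a 10m shared-memory zone
proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m max_size=1g inactive=60m;

server {
    location / {
        proxy_cache app_cache;
        proxy_pass  http://backend;    # hypothetical upstream
    }
}
```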
Compression
Compressing the responses to clients can greatly reduce the size of the responses, requiring less bandwidth; however, it does require CPU resources to do the compression, so it is best used when there is value in reducing bandwidth. It is important to note that you should not enable compression for objects that are already compressed, such as JPEGs. For more information on configuring NGINX for compression please see: NGINX Admin Guide – Compression and Decompression
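A basic gzip sketch that compresses text-based responses while leaving already-compressed types such as JPEG untouched (the MIME type list is an example selection):

```
gzip on;
# text/html is always compressed when gzip is on; add other text types
gzip_types text/plain text/css application/json application/javascript;
```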