What is nginx
Nginx (pronounced "engine X") is an asynchronous, event-driven web server that can also be used as a reverse proxy, load balancer, and HTTP cache. The software was created by Igor Sysoev and first released to the public in 2004. A company of the same name was founded in 2011 to provide support. On March 11, 2019, Nginx was acquired by F5 Networks for US$670 million.
Application scenarios of nginx
nginx command line commonly used commands
- nginx # start nginx
- nginx -s reload # signal the master process to reload the configuration file (hot reload, no downtime)
- nginx -s reopen # signal the master process to reopen the log files
- nginx -s stop # fast shutdown
- nginx -s quit # graceful shutdown: wait for worker processes to finish, then exit
- nginx -t # check the current nginx configuration for errors
- nginx -t -c # check whether a given configuration file has problems; if it is the default configuration, -c is not required
Detailed nginx configuration file
Production deployments often host several domain names on one nginx instance, with each domain name in its own configuration file. These files are then referenced from nginx.conf. In other words, every time nginx starts it loads nginx.conf by default, and nginx.conf includes the per-server configuration files, forming one large effective configuration.
- main: global settings
- events: settings that affect network connections between the Nginx server and its clients
- http: HTTP module settings
- upstream: load-balancing settings
- server: HTTP virtual server configuration; one http block can contain multiple server blocks
- location: URL matching configuration; one server block can contain multiple location blocks
The structure of an nginx configuration file is as shown in nginx.conf. The syntax rules of the configuration file:
- The configuration file is composed of blocks (modules)
- Use # to add comments
- Use $ to reference variables
- Use include to reference other configuration files
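A minimal sketch of this structure (all paths, names, and addresses below are illustrative, not from the source):

```nginx
# main: global settings
user  nginx;
worker_processes  auto;

# events: client connection handling
events {
    worker_connections  1024;
}

# http: HTTP module settings
http {
    # upstream: load-balancing settings
    upstream backend {
        server 127.0.0.1:9501;
    }

    # include: pull in one configuration file per domain name
    include /etc/nginx/conf.d/*.conf;

    # server: one of possibly many virtual servers
    server {
        listen 80;
        # location: URL matching inside a server block
        location / {
            proxy_pass http://backend;
        }
    }
}
```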
nginx and php communication
Access path for www.sziiit.cn/index.php:
1. The request for www.sziiit.cn/index.php arrives at Nginx.
2. Nginx's fastcgi module maps the HTTP request to a FastCGI request.
3. The FastCGI request is forwarded to 127.0.0.1:9000, the address php-fpm is listening on.
4. php-fpm receives the request and handles it through a worker process.
5. php-fpm returns the result to nginx, which sends the response back to the client.
Communication between nginx and php
tcp-socket
For tcp socket communication, the nginx configuration file must contain the IP address and port number that php-fpm listens on. This method works across servers, i.e. when nginx and php-fpm are not on the same machine.
location ~ \.php$ {
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
}
unix-socket
For unix socket communication, the nginx configuration file must contain the path of the unix socket file that php-fpm listens on. A unix socket, also called an IPC (inter-process communication) socket, is used for communication between processes on the same host.
location ~ \.php$ {
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
}
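On the php-fpm side, the matching listen address is set in the pool configuration. A sketch of the relevant lines (the file path below is a common default and may differ on your system):

```ini
; e.g. /etc/php5/fpm/pool.d/www.conf (path varies by distribution)

; tcp socket mode: matches fastcgi_pass 127.0.0.1:9000;
listen = 127.0.0.1:9000

; unix socket mode: matches fastcgi_pass unix:/var/run/php5-fpm.sock;
; listen = /var/run/php5-fpm.sock
```

Whichever mode is chosen, the `listen` value here and the `fastcgi_pass` value in nginx must agree, or nginx will return a 502.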
The difference between the two
Unix sockets do not go through the network protocol stack: there is no packing and unpacking, no checksum calculation, no sequence numbers or acknowledgements; application-layer data is simply copied from one process to another. They are therefore more efficient than tcp sockets and avoid unnecessary tcp overhead. However, unix sockets can be unstable under high concurrency: when the number of connections spikes, large amounts of data are buffered, and without the guarantees of a connection-oriented protocol, large packets may be corrupted without any error being reported. A connection-oriented protocol such as tcp better guarantees the correctness and integrity of the communication.
Therefore, for a high-concurrency service, prefer the more reliable tcp socket; efficiency can then be improved through load balancing, kernel tuning, and so on.
nginx configuration dynamic and static separation
What is dynamic and static separation
In web development, dynamic resources generally refer to back-end resources, while static resources are files such as HTML, JavaScript, CSS, and images.
Separating the two greatly improves the access speed of static resources. It also lets front-end and back-end development proceed in parallel, which effectively shortens development time and reduces joint-debugging time.
Dynamic and static separation scheme
- Use separate domain names and place static resources on an independent cloud server. This approach is currently the most widely recommended.
- Keep dynamic requests and static files together and separate them through the nginx configuration
server {
    # nginx prepends root to the full request URI, so with root /data:
    # /www/a.html  serves /data/www/a.html
    # /image/b.png serves /data/image/b.png
    location /www/ {
        root  /data;
        index index.html index.htm;
    }
    location /image/ {
        root  /data;
    }
}
nginx configure reverse proxy
A reverse proxy is often used so that requests can be handled by visiting the domain name directly, without exposing back-end ports.
server {
listen 80;
server_name www.sziiit.cn;
location /swoole/ {
proxy_pass http://127.0.0.1:9501;
}
location /node/ {
proxy_pass http://127.0.0.1:9502;
}
}
nginx configuration load balancing
upstream phpServer{
server 127.0.0.1:9501;
server 127.0.0.1:9502;
server 127.0.0.1:9503;
}
server {
listen 80;
server_name www.sziiit.cn;
location / {
proxy_pass http://phpServer;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_next_upstream error timeout invalid_header;
proxy_max_temp_file_size 0;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
}
}
Common load balancing strategies
round-robin / polling: requests are distributed across the application servers in turn
upstream phpServer{
server 127.0.0.1:9501 weight=3;
server 127.0.0.1:9502;
server 127.0.0.1:9503;
}
With this configuration, out of every 5 new requests, 3 are dispatched to 9501, one to 9502, and one to 9503.
least-connected: the next request is dispatched to the server with the fewest active connections
upstream phpServer{
least_conn;
server 127.0.0.1:9501;
server 127.0.0.1:9502;
server 127.0.0.1:9503;
}
When some requests take longer to complete, least-connected balancing distributes the load across application instances more fairly.
ip-hash / IP hash: a hash of the client IP address determines which server receives the next request
upstream phpServer{
ip_hash;
server 127.0.0.1:9501;
server 127.0.0.1:9502;
server 127.0.0.1:9503;
}
This binds each client to a specific application server, which is useful for session persistence.
nginx configuration cross-domain
Because of the browser's same-origin policy, a page from one origin is restricted from loading resources from another origin, so cross-origin requests are blocked.
"Same origin" means the same protocol, domain name, and port.
- Access-Control-Allow-Origin: Allowed domain name, only * (wildcard) or single domain name can be filled in.
- Access-Control-Allow-Methods: Allowed methods, multiple methods are separated by commas.
- Access-Control-Allow-Headers: Allowed headers, multiple headers are separated by commas.
- Access-Control-Allow-Credentials: Whether to allow sending cookies.
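A sketch of setting these headers in nginx (the origin, methods, and header list below are placeholders to adapt to your own front end):

```nginx
server {
    listen 80;
    server_name www.sziiit.cn;

    location /api/ {
        # allow a single trusted origin; use * only for public APIs without cookies
        add_header Access-Control-Allow-Origin https://front.example.com;
        add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS";
        add_header Access-Control-Allow-Headers "Content-Type, Authorization";
        add_header Access-Control-Allow-Credentials true;

        # answer CORS preflight requests directly, without hitting the back end
        if ($request_method = OPTIONS) {
            return 204;
        }

        proxy_pass http://127.0.0.1:9501;
    }
}
```

Note that Access-Control-Allow-Origin must name a concrete origin (not *) whenever Access-Control-Allow-Credentials is true, or browsers will reject the response.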
nginx pseudo static
Application scenario
- SEO optimization
- Security
- Traffic forwarding
location ^~ /saas {
    root  /home/work/php/saas/public;
    index index.php;
    # rewrite /saas/path?query to /index.php/path?query;
    # "last" ends this rewrite pass and re-matches the new URI against locations,
    # so a break after it would never be reached
    rewrite ^/saas(/[^\?]*)?((\?.*)?)$ /index.php$1$2 last;
}