June 13, 2013
Nginx in front of Apache to act as a reverse caching and balancing proxy, lowering the load on the servers.
NGINX – APACHE 2
At our company, we want to make sure we provide clients with the best service available.
Finally, last night we made the first deployment and tried it out with a Drupal website [this Drupal website], testing whether it failed at any point.
We are now working on automated scripts that should take over the infrastructure in a network outage, power failure, or hardware failure.

That is what we're trying to accomplish here, so I thought I'd share what we've been doing. Of course, we'll skip the parts that rely on programs and helpers developed in-house.
So our testbed for this was: Debian 7 with Apache 2 and MySQL [Apache server] => check the Server entry in the menu.
The Nginx server is in Secaucus, NJ; it runs Debian 6 with Apache, MySQL and Nginx.
The first step was to change our DNS so that www.derekdemuro.com and the bare domain pointed to the server in Secaucus [ping this website for the IP]; the second was to configure Nginx to reverse-proxy that server.
There's a trick, though: both servers run DNS servers, and the server in New Jersey holds a slave zone for www.derekdemuro.com. Why? To avoid having Nginx perform external lookups for the domain.
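If you run BIND 9, a slave zone declaration roughly like the one below does the job; the master IP and file path here are placeholders, not our real values:

# /etc/bind/named.conf.local on the Nginx/proxy box (placeholder values)
zone "derekdemuro.com" {
    type slave;                 # this box only mirrors the zone
    masters { 192.0.2.10; };    # placeholder: IP of the primary DNS server
    file "/var/cache/bind/db.derekdemuro.com";
};

With the proxy box also listed in its own /etc/resolv.conf, Nginx resolves the backend name locally instead of going out to the network.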
So let me share the configs!
Files:
Reverse Caching Proxy = /etc/nginx/proxy.conf
Nginx Configuration = /etc/nginx/nginx.conf
Mime Types = /etc/nginx/mime.types
Sites Enabled Folder = /etc/nginx/sites-enabled/ => per-site configs
Reverse Caching Config.
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
#proxy_set_header Cookie $http_cookie;

proxy_cache dynamic;
proxy_cache_key "$scheme://$host$proxy_host$uri$is_args$args";
proxy_cache_valid 200 301 302 20m;             # Cache pages for 20 minutes
proxy_cache_valid 200 304 7d;                  # Cache pages for 7 days
proxy_cache_valid 301 302 1h;                  # Cache redirects for 1 hour
proxy_cache_valid 404 403 402 401 504 502 20m; # Cache other errors for 20 minutes
proxy_cache_valid any 15m;                     # Cache everything else for 15 minutes
proxy_cache_valid 404 1m;                      # Cache 404 errors for 1 minute
proxy_cache_use_stale error timeout invalid_header updating;

proxy_connect_timeout 180;
proxy_send_timeout 180;
proxy_read_timeout 180;

proxy_buffers 8 16k;
proxy_buffer_size 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;

proxy_pass_header Expires;
proxy_pass_header Cache-Control;
proxy_pass_header Last-Modified;
proxy_pass_header ETag;
proxy_pass_header Content-Length;

# Only honor internal caching policies
proxy_ignore_headers X-Accel-Expires;
# Let cookies from the backend pass
proxy_pass_header Set-Cookie;
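One handy extra while testing (not part of the files above): expose Nginx's built-in $upstream_cache_status variable as a response header so you can see hits and misses from the outside. Something like this inside the server{} block of the vhost:

# Optional, for debugging only
add_header X-Cache-Status $upstream_cache_status;

# Then check whether a page was served from the cache:
#   curl -sI http://www.site.me/ | grep -i x-cache-status
#   X-Cache-Status: MISS   (first request, fetched from Apache)
#   X-Cache-Status: HIT    (subsequent requests within the cache validity)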
nginx.conf
# Run as the web user to minimize permission conflicts
user www-data www-data;

# For high performance you'll need one worker process per disk spindle,
# but in most cases 1 or 2 is fine.
worker_processes 1;

error_log /var/log/nginx/error.log debug;
pid /var/run/nginx.pid;

events {
    # Max concurrent connections = worker_processes * worker_connections
    # You can increase this past 1024, but you must set the rlimit before starting
    # nginx using the ulimit command (say ulimit -n 8192)
    worker_connections 1024;
    # Linux performance awesomeness
    use epoll;
}

http {
    #server_name_hash_bucket_size 64;

    # Mime-type table
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Size limits
    client_max_body_size 200M;
    client_body_buffer_size 8M;
    client_header_buffer_size 128k;
    large_client_header_buffers 1 8k;
    server_names_hash_max_size 10240;
    server_names_hash_bucket_size 1024;

    # Timeouts
    client_body_timeout 120;
    client_header_timeout 120;
    send_timeout 120;
    keepalive_timeout 60 60;

    ignore_invalid_headers on;
    recursive_error_pages on;

    # On-disk cache storage; these zones ("dynamic" and "static") are the ones
    # referenced by proxy_cache in proxy.conf and in the virtual hosts
    proxy_cache_path /home/cache/dynamic levels=1:2 keys_zone=dynamic:1024m inactive=7d max_size=1000m;
    proxy_cache_path /home/cache/static levels=1:2 keys_zone=static:1024m inactive=7d max_size=1000m;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    # Compression
    gzip on;
    gzip_http_version 1.0;
    gzip_comp_level 2;
    gzip_proxied any;
    gzip_min_length 1100;
    gzip_buffers 16 8k;
    # Some versions of IE6 don't handle compression well on some mime-types, so just disable it for them
    gzip_disable "MSIE [1-6].(?!.*SV1)";
    # Set a Vary header so downstream proxies don't send cached gzipped content to IE6
    gzip_vary on;

    server_tokens off;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    #keepalive_requests 0;

    include /etc/nginx/proxy.conf;
    include /etc/nginx/sites-enabled/*;
}
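One practical note: the cache and log directories referenced above (/home/cache/...) have to exist and be writable by the www-data user before Nginx will start with this config. Roughly:

# Create the cache and log directories used by nginx.conf and the vhost below
mkdir -p /home/cache/dynamic /home/cache/static /home/cache/logs
chown -R www-data:www-data /home/cache

# Test the configuration for syntax errors, then reload Nginx
nginx -t
/etc/init.d/nginx reload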
/etc/nginx/sites-enabled/site.com
## Apache (vm02) backend for www.example.com ##
upstream mdsite {
    server static.site.me:80; # Apache1
}

server {
    listen 80;
    server_name www.site.me site.me;

    access_log /home/cache/logs/site.me.access.log;
    error_log /home/cache/logs/site.me.error.log debug;

    # If the request method is anything other than GET, HEAD or POST, return error code 444
    if ($request_method !~ ^(GET|HEAD|POST)$ ) {
        return 444;
    }

    # Dynamic pages go through the "dynamic" cache zone; clients are told to cache for 2 hours
    location / {
        proxy_cache dynamic;
        expires 2h;
        proxy_pass http://mdsite;
        # If the backend server returns one of these errors, Nginx will serve from the cached files
        proxy_cache_use_stale error timeout invalid_header updating;
    }

    # Static files go through the "static" cache zone
    location ~* \.(swf|jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$ {
        proxy_cache static;
        proxy_pass http://mdsite;
        expires max;
    }

    # Redirect server error pages to the static page /50x.html
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
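Finally, a small note on the Apache side: since every request now reaches the backend from the proxy's address, Apache's access logs will show the Nginx server as the client unless you log the X-Forwarded-For header that proxy.conf passes along. A minimal example for the backend's Apache config (our actual setup may differ):

# Log the original client IP passed by Nginx instead of the proxy's address
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" proxied
CustomLog /var/log/apache2/access.log proxied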