Stefanos Vardalos


Serving your application using Nginx

You have created a web application and now you are searching for the right web server to serve it. Your application might consist of multiple static files (HTML, CSS, JavaScript, etc.), a backend API service, or even multiple web services. NGINX might be what you are looking for, and there are a couple of reasons for that. NGINX is a powerful web server that uses a non-threaded, event-driven architecture, which enables it to outperform Apache under many workloads. It can also do other important things, such as load balancing, caching, or acting as a reverse proxy.

Below we will go through the basic steps of installing NGINX and then configuring its most important parts.

Basic Installation - Architecture

There are two ways to install NGINX: using a pre-built binary or building it from source. The first is obviously the easier and faster option, but building from source gives you the ability to include various third-party modules that make NGINX even more powerful and tailored to your needs. Depending on the needs of your application, you might gain a lot from building your own binary.

To install the prebuilt Debian package, you only have to run:

sudo apt-get update
sudo apt-get install nginx

After the installation process has finished, you can verify that everything is OK by running the command below, which should print the installed version of NGINX:

sudo nginx -v
nginx version: nginx/1.6.2

Once that is done, your new web server will be installed under /etc/nginx/. If you go inside this folder you will see several files and folders; the most important ones, which will require our attention later, are the nginx.conf file and the sites-available folder.

Configuration Settings

The core settings of NGINX are in the nginx.conf file, which by default looks like this:

user www-data;
worker_processes 4;
pid /run/nginx.pid;

events {
	worker_connections 768;
	# multi_accept on;
}

http {

	sendfile on;
	tcp_nopush on;
	tcp_nodelay on;
	keepalive_timeout 65;
	types_hash_max_size 2048;
	# server_tokens off;

	# server_names_hash_bucket_size 64;
	# server_name_in_redirect off;

	include /etc/nginx/mime.types;
	default_type application/octet-stream;

	access_log /var/log/nginx/access.log;
	error_log /var/log/nginx/error.log;

	gzip on;
	gzip_disable "msie6";

	# gzip_vary on;
	# gzip_proxied any;
	# gzip_comp_level 6;
	# gzip_buffers 16 8k;
	# gzip_http_version 1.1;
	# gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

	include /etc/nginx/conf.d/*.conf;
	include /etc/nginx/sites-enabled/*;
}

The file is structured into contexts. The first is the events context and the second is the http context. This structure enables some advanced layering of your configuration, as each context can contain other nested contexts that inherit everything from their parent but can also override settings as needed.

Various things in this file can be tweaked based on your needs, but NGINX is so simple to use that you can get along even with the default settings. Some of the most important settings are the following:

worker_processes : The number of worker processes that NGINX will use. Because each NGINX worker is single-threaded, this number should usually be equal to the number of CPU cores.

worker_connections : The maximum number of simultaneous connections for each worker process. This number can be limited by your host machine, but the bigger it is, the more simultaneous users NGINX will be able to serve.
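As a sketch, these two settings in nginx.conf could be tuned like this (the values here are illustrative, not a universal recommendation):

```nginx
# Illustrative tuning: one worker per CPU core, and a higher per-worker
# connection cap. The rough ceiling of concurrent clients is
# worker_processes x worker_connections.
worker_processes auto;   # lets NGINX match the core count itself (since 1.3.8)

events {
	worker_connections 1024;
}
```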

access_log & error_log : The files where NGINX logs requests and errors; they are the first place to look when debugging and troubleshooting.

gzip : The settings for GZIP compression of NGINX responses. Enabling this, along with the various sub-settings that are commented out by default, results in quite a big performance upgrade. Of the GZIP sub-settings, care should be taken with gzip_comp_level, the compression level (1-9): it generally should not go above 6, since the gain in size reduction becomes insignificant while CPU usage rises considerably. Also note gzip_types, the list of response types that compression will be applied to.
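A minimal sketch of what the uncommented GZIP section could look like (the values are illustrative):

```nginx
gzip on;
gzip_disable "msie6";
gzip_vary on;        # add "Vary: Accept-Encoding" so caches store both variants
gzip_proxied any;    # also compress responses to proxied requests
gzip_comp_level 6;   # good size/CPU trade-off; higher levels gain little
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
```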

Lastly, we see at the end that everything from the sites-enabled folder is included. This folder holds symlinks to files inside the sites-available folder, so that is where the actual work should be done.
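To see how the symlinking works, here is a sketch that uses a scratch directory instead of /etc/nginx so it runs without root. On a real Debian server the paths would be /etc/nginx/sites-available and /etc/nginx/sites-enabled, and you would reload NGINX afterwards:

```shell
# Scratch directory standing in for /etc/nginx (illustrative path).
NGINX_HOME=/tmp/nginx-demo
mkdir -p "$NGINX_HOME/sites-available" "$NGINX_HOME/sites-enabled"

# A placeholder virtual host file lives in sites-available...
printf 'server {\n    listen 80;\n}\n' > "$NGINX_HOME/sites-available/customapp"

# ...and enabling the site is just creating the symlink:
ln -sf "$NGINX_HOME/sites-available/customapp" "$NGINX_HOME/sites-enabled/customapp"
ls -l "$NGINX_HOME/sites-enabled"
```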

Inside the sites-available folder we keep our virtual host configuration files. Either create a new file for your application or edit the default one. A typical configuration looks like the following:

upstream remoteApplicationServer {
    server 10.10.10.10;
}

upstream remoteAPIServer {
    server 20.20.20.20;
    server 20.20.20.21;
    server 20.20.20.22;
    server 20.20.20.23;
}


server {
    listen 80;
    server_name www.customapp.com customapp.com;
    root /var/www/html;
    index index.html;

    location / {
        alias /var/www/html/customapp/;
        try_files $uri $uri/ =404;
    }

    location /remoteapp {
        proxy_set_header   Host             $host:$server_port;
        proxy_set_header   X-Real-IP        $remote_addr;
        proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for;
        proxy_pass http://remoteApplicationServer/;
    }

    location /api/v1/ {
        proxy_pass https://remoteAPIServer/api/v1/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_redirect http:// https://;
    }
}

Much like nginx.conf, this file also uses the concept of nested contexts (and all of these are in turn nested inside the http context of nginx.conf, so they inherit everything from it).

The server context defines a specific virtual server to handle your clients' requests. You can have multiple server blocks, and NGINX will choose between them based on the listen and server_name directives.
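For example, two server blocks could share port 80 and be told apart purely by the Host header of the request (the host names and document roots below are hypothetical):

```nginx
# Both servers listen on port 80; NGINX routes by server_name.
server {
    listen 80;
    server_name customapp.com www.customapp.com;
    root /var/www/customapp;
}

server {
    listen 80;
    server_name blog.customapp.com;
    root /var/www/blog;
}
```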

Inside a server block, we define multiple location contexts that decide how to handle client requests. Whenever a request comes in, NGINX will try to match its URI to one of those location definitions and handle it accordingly. Many important directives can be used under the location context, such as:

try_files : will try to serve a static file found under the folder that the root directive points to.

proxy_pass : will send the request to a specified proxied server.

rewrite : will rewrite the incoming URI based on a regular expression so that another location block will be able to handle it.
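As a small hypothetical example combining two of these, a rewrite could map legacy URIs onto the /api/v1/ prefix so that the existing proxying location handles them:

```nginx
location /old-api/ {
    # "last" re-runs location matching with the rewritten URI,
    # so the /api/v1/ block above picks it up.
    rewrite ^/old-api/(.*)$ /api/v1/$1 last;
}
```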

The upstream context defines a pool of servers that NGINX will proxy requests to. After we create an upstream block and define a server inside it, we can reference it by name inside our location blocks. Furthermore, an upstream context can have many servers assigned to it, so that NGINX will do some load balancing when proxying the requests.
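A sketch of what a more tuned upstream pool could look like (the balancing method, weights, and backup flag here are illustrative choices, not part of the original configuration):

```nginx
upstream remoteAPIServer {
    least_conn;                  # pick the server with the fewest active connections
    server 20.20.20.20 weight=2; # receives roughly twice the traffic of the others
    server 20.20.20.21;
    server 20.20.20.22;
    server 20.20.20.23 backup;   # only used when the other servers are unavailable
}
```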

Start NGINX

After we have finished with the configuration and moved our web application to the appropriate folder, we can start up NGINX using the command below:

sudo service nginx start

After that, whenever we change something in our configuration, we only have to reload it (without downtime) using:

service nginx reload

Lastly, we can check NGINX's status using:

service nginx status

Conclusion

[Figure: NGINX architecture]

With so many features out of the box, NGINX can be a great way to serve your application, or even act solely as a proxy and/or load balancer in front of your other web servers. Understanding how NGINX works and handles requests gives a lot of power to whoever administers the application.
