Nginx Apache Tomcat



You can use Nginx as a load balancer in front of your web application.

With that all working, we currently have two web sites being served: the Tomcat 8 welcome page on port 8080 and the Nginx welcome page on the default port 80.

For example, if your enterprise application is running on Apache (or Tomcat), you can set up a second instance of it on Apache (or Tomcat) on a different server.

Then you can put Nginx at the front end, which will load balance between the two Apache (or Tomcat, or JBoss) servers.

If you are new to Nginx, it is important to understand the difference between Nginx and Apache, and the Nginx architecture.

Nginx supports the following three types of load balancing:

  1. round-robin – The default for Nginx. It uses the typical round-robin algorithm to decide where to send each incoming request.
  2. least-connected – As the name suggests, the incoming request is sent to the server with the fewest active connections.
  3. ip-hash – Helpful when you want persistent (sticky) connections. The client's IP address is used to decide which server the request should be sent to.

1. Define upstream and proxy_pass in Nginx Config File

For load balancing, you need to add two things to the nginx configuration file: 1) upstream 2) proxy_pass

First, upstream: specify a unique name (this can be the name of your application) and list all the servers that will be load-balanced by this Nginx.

In the following example, 'crmdev' is the name of the upstream; it is the name of the application running on both of the individual Apache web servers (101.1 and 102.2 as shown below). Instead of 'crmdev', you can specify any name you like here.
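The configuration example is missing from this copy of the article; a minimal sketch of such an upstream block, placed inside the http context, would look like this (the backend addresses 192.168.101.1 and 192.168.102.2 are assumptions for illustration):

```nginx
http {
    upstream crmdev {
        server 192.168.101.1;
        server 192.168.102.2;
    }
}
```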

Note: The upstream should be defined inside your 'http' context of Nginx.

Note: If the individual servers are running on a port other than 80, specify the port number in the upstream as shown below.
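For example, if both backend servers were listening on port 8080 (an assumed port for illustration), the upstream would look like this:

```nginx
upstream crmdev {
    server 192.168.101.1:8080;
    server 192.168.102.2:8080;
}
```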

Second, proxy_pass: specify the unique upstream name defined in the previous step as the proxy_pass inside your 'location' section, which will be under the 'server' section as shown below.
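A sketch of the corresponding server block, assuming the upstream is named crmdev:

```nginx
server {
    listen 80;

    location / {
        # forward every request to one of the servers in the crmdev upstream
        proxy_pass http://crmdev;
    }
}
```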

Note: In this example, nginx itself is listening on port 80 as shown above by the listen parameter.

Please note that you can also use proxy_pass to Setup Nginx as a Reverse Proxy to Apache/PHP.

2. Define upstream and proxy_pass in Nginx Default Config File

Note: Typically, the above should be under http or https as shown below.

But, if you are using the default.conf that comes with the default nginx.conf file, you don't need the 'http' wrapper, as it is already defined in the http context.

Note: For HTTPS, replace the 'http' context (in the 1st line above) with https. Also, in the proxy_pass line, use https://{your-upstream-name}


In this case, if you use the 'http' context as shown above, you might get the following "http directive is not allowed" error message:


This is the copy of my default.conf file, where I've added upstream at the top (without the http), and then commented out the default location, and added the new location using the proxy_pass.
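The file itself is missing from this copy of the article; a sketch of what that default.conf might look like follows (the path, IPs, and commented-out defaults are assumptions for illustration):

```nginx
# /etc/nginx/conf.d/default.conf
# upstream goes at the top, without an enclosing http block,
# because default.conf is already included inside the http context
upstream crmdev {
    server 192.168.101.1;
    server 192.168.102.2;
}

server {
    listen 80;
    server_name localhost;

    # default location, commented out:
    #location / {
    #    root /usr/share/nginx/html;
    #    index index.html index.htm;
    #}

    location / {
        proxy_pass http://crmdev;
    }
}
```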

3. Setup least-connected Algorithm for Nginx Load Balancer

In this algorithm, the incoming request is sent to the server with the least number of existing active connections.

For this, add the keyword 'least_conn' at the top of the upstream as shown below.
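Sketching this with the same assumed backends as before:

```nginx
upstream crmdev {
    least_conn;
    server 192.168.101.1;
    server 192.168.102.2;
}
```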

If multiple servers in the least_conn pool have a similarly low number of existing active connections, then among those servers one is picked based on weighted round-robin.

4. Setup Persistence or Sticky Algorithm for Nginx Load Balancer

The disadvantage of the round-robin and least-connected methods is that subsequent connections from a client will not necessarily go to the same server in the pool. This may be OK for a session-independent application.

But if your application depends on a session, then once an initial connection is established with a particular server, you want all future connections from that particular client to go to the same server. For this, use the ip_hash algorithm as shown below.
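Sketching this with the same assumed backends:

```nginx
upstream crmdev {
    ip_hash;
    server 192.168.101.1;
    server 192.168.102.2;
}
```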

For the hash, the first three octets of an IPv4 address are used. For an IPv6 address, the entire address is used.

5. Weight Options for the Individual Servers

You can also specify a weight for a particular server in your pool. By default, all servers have equal priority (weight); i.e. the default value of weight is 1.

But you can change this behavior by assigning a weight to a server as shown below.
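A sketch with five assumed backends, giving the 3rd server a weight of 2:

```nginx
upstream crmdev {
    server 192.168.101.1;
    server 192.168.101.2;
    server 192.168.101.3 weight=2;
    server 192.168.101.4;
    server 192.168.101.5;
}
```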

In this example, we have a total of 5 servers, and the weight on the 3rd server is 2. This means that out of every 6 new requests, 2 go to the 3rd server and each of the remaining servers gets 1.

So this is helpful to distribute more load to a specific server that has more horsepower.

Even though in the above example the weight is used with the default round-robin algorithm, you can use weight with least_conn and ip_hash as well.

6. Timeout Options for the Individual Servers – max_fails and fail_timeout

You can also specify max_fails and fail_timeout to a particular server as shown below.
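For example (the values of 3 attempts and 30 seconds match the discussion that follows; the backend addresses are assumptions):

```nginx
upstream crmdev {
    server 192.168.101.1 max_fails=3 fail_timeout=30s;
    server 192.168.101.2;
}
```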

In the above:

  • The default fail_timeout is 10 seconds. In the above example, it is set to 30 seconds. This means that if the number of failed attempts defined by max_fails occurs within 30 seconds, the server is marked unavailable, and it remains unavailable for another 30 seconds.
  • The default max_fails is 1 attempt. In the above example, it is set to 3. This means that after 3 unsuccessful attempts to connect to this particular server, Nginx considers it unavailable for the duration of fail_timeout, i.e. 30 seconds.

7. Assign a Backup Server in Nginx LoadBalancer Pool

In the following example, the 5th server is marked as a backup using the 'backup' keyword at the end of the server parameter.
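Sketching this with five assumed backends:

```nginx
upstream crmdev {
    server 192.168.101.1;
    server 192.168.101.2;
    server 192.168.101.3;
    server 192.168.101.4;
    server 192.168.101.5 backup;
}
```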


The above makes the 5th server (192.168.101.5) a backup server. Incoming requests will not be passed to this server unless all of the other four servers are down.

This walkthrough assumes you have deployed an API built with Java/Spring to a Virtual Machine and have configured the application to run on port 8080 as a SystemD service.

If you need a refresher check out the Running a Java Application as a Linux Service walkthrough.


That walkthrough ended with us running the application as a SystemD service on port 8080. Our current EC2 instance has a security group allowing traffic on port 80. We could just open up port 8080, but a best practice is to set up an intermediary web server that proxies requests from port 80 to our application on port 8080.

In this case we will be using NGINX as the web server to catch requests on port 80 and then pass them to our application's Tomcat web server running on port 8080. The response from the Tomcat web server is passed back to NGINX, which then sends it back to the origin of the request.

We will need to:

  • install NGINX
  • configure NGINX

Install NGINX
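The install command is missing from this copy of the walkthrough; on a yum-based EC2 instance it would look something like the following (the package manager is an assumption; use apt on Debian/Ubuntu):

```shell
sudo yum install -y nginx
sudo systemctl enable --now nginx
```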

You can verify it installed properly with:
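For example:

```shell
nginx -v
# prints the installed version, e.g. "nginx version: nginx/1.x.x"
```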

While this worked for us in this educational environment, it is not a reliable solution for running our application. If the server crashed, the EC2 instance went down, or there were a power outage, the application would need to be manually started with the command used above.

Configure NGINX

We are simply using NGINX to proxy requests on port 80 to port 8080. We will need to update NGINX configuration information so that it points to our running application.

We will be overwriting /etc/nginx/nginx.conf with the following lines:
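The file contents are missing from this copy of the walkthrough; a minimal sketch of that nginx.conf, assuming its only job is proxying port 80 to the app on port 8080, would look like this:

```nginx
events {}

http {
    server {
        listen 80;

        location / {
            # forward every request to the Spring app's Tomcat server
            proxy_pass http://localhost:8080;
        }
    }
}
```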


This simply informs the NGINX web server listening on port 80 to send all requests to http://localhost:8080.

After making the change we will need to reload the NGINX configuration file:
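For example (assuming systemd manages NGINX):

```shell
sudo systemctl reload nginx
# or, equivalently:
sudo nginx -s reload
```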


Now you can make a request from the EC2 instance with:
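For example:

```shell
curl localhost:80
```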

You can do the same thing from another machine since there is a security group allowing inbound traffic from anywhere on port 80.


Review

This walkthrough installed and configured the NGINX web server to handle HTTP requests on port 80 and forward them to our running application on port 8080.

This is a common practice you see in operations, as it separates concerns. Our NGINX web server has one responsibility: handling HTTP traffic. If we needed to do something with that traffic before passing it to our Java/Spring application, we would do it with NGINX. An example of this would be enforcing SSL/TLS, a responsibility of NGINX, not Java/Spring.




