What are the use cases for nginx?

Nemanja Tomic

Aug 14, 2025 · 6 min read


When it comes to serving websites on the public internet, there is one clear winner - nginx. According to W3Techs, nginx is used by around 33.6% of all websites whose web server is known.

Created by Russian developer Igor Sysoev and first released in 2004, nginx is open-source software that can fill several roles on your server, the most common one being a web server. It also works well as a reverse proxy, load balancer, content cache, or TCP/UDP proxy server. It even supports container platforms like Kubernetes or Azure Kubernetes Service. Let's dive into what nginx has to offer in detail. We will analyze nginx for the following use cases:

  1. Host your own website.
  2. Proxy your requests through an external server.
  3. Load balance your application.
  4. Expose your Kubernetes application with nginx Gateway Fabric.

Host your own website

The use case that most people working in tech have tackled at some point in their career is making a website publicly accessible on the internet. And it couldn't get simpler. The only thing you have to do is install the nginx package on your server and... that's it. This will start the nginx service and open port 80. Consequently, when you curl the IP of your server, you should get the standard nginx welcome page.
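
A minimal sketch of that whole setup, assuming a Debian-based distribution with apt and systemd (package manager and service commands vary by system):

sudo apt update
sudo apt install nginx

# nginx starts automatically on most Debian-based systems; otherwise:
sudo systemctl start nginx

# Verify the welcome page is served
curl http://127.0.0.1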

Of course, you still have to configure a few additional things that are vital for every website, for example the path to your actual application or a secure connection with TLS. You would also want to redirect all HTTP traffic to HTTPS to ensure users can only connect over an encrypted connection. To do this, modify the configuration file named "nginx.conf" under "/etc/nginx". A typical configuration for a web server with a redirect to HTTPS could look like this.

server {
    listen 80;
    server_name example.com www.example.com;

    # Redirect all HTTP requests to HTTPS
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com www.example.com;

    # SSL Configuration
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Location of the Web Application
    location / {
        root /var/www/example.com;
        index index.html index.htm;
    }
}
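
After editing the configuration, it is a good idea to validate it with "nginx -t" and then apply it with "systemctl reload nginx", so nginx picks up the changes without dropping active connections.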

Proxy your requests through an external server

Say, for example, you want to search for something on Google and make a search request. What normally happens is that Google's server receives your request and sends you back a response. But Google often sends back not only the results you wanted, but also things you did not ask for.

With a proxy, however, you can block out the things you do not want. This is done by routing your request through a server running an nginx proxy. The proxy server then performs the request for you and, after it gets a response, filters out everything you don't want on your home network, such as ads or malware. It is therefore a great and efficient way of strengthening your security and privacy.

It is also very beneficial if a lot of people on the same network make the same request several times within a short period. You can configure the proxy to cache the response, and if another user on your network makes the same request again, the proxy simply answers with the cached response instead of repeating the request over and over again. This type of proxy is also called a forward proxy.

server {
    listen 8888;

    location / {
        # Specify the DNS you want your proxy to use
        resolver 8.8.8.8;

        # Perform the forward proxy
        proxy_pass http://$http_host$uri$is_args$args;
    }
}
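
Note that this configuration only handles plain HTTP; proxying HTTPS traffic requires the CONNECT method, which stock nginx does not support (third-party modules exist for this). To add the caching described above, you can extend the setup with nginx's proxy_cache directives. A minimal sketch, where the cache path and the zone name "proxycache" are just placeholders:

# Reserve disk space for cached responses and a memory zone for cache keys
proxy_cache_path /var/cache/nginx keys_zone=proxycache:10m max_size=1g;

server {
    listen 8888;

    location / {
        resolver 8.8.8.8;

        # Answer repeated requests from the cache
        proxy_cache proxycache;
        # Cache successful responses for 10 minutes
        proxy_cache_valid 200 10m;

        proxy_pass http://$http_host$uri$is_args$args;
    }
}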

Load balance your application

We just discussed the forward proxy; let's now get to a similar technique you can achieve with nginx - the load balancer. Just like the forward proxy, it forwards requests and responses. However, instead of forwarding requests from your network to other servers, it forwards requests made from the internet to your network. This is particularly useful if you have a lot of traffic on your application and need to scale out horizontally. This means that you don't host your app on one server, but on three or more servers, each with a different IP address.

Now you somehow have to distribute your traffic in equal proportions across all three servers, with only one hostname. A load balancer is how you achieve this. The load balancer acts as the gateway for all traffic, and its only job is to distribute incoming requests evenly across your remaining servers. This makes your infrastructure highly scalable, since you can always add more servers and connect them to your load balancer.

Load balancers also make your infrastructure more secure if you run a lot of different applications on different servers. Because you have one point of entry for all your servers, you only have to configure your firewall and operational security once, namely on the reverse proxy. It will then block all unwanted traffic and protect your whole infrastructure. This is how I configure my load balancers.

http {
    upstream myapp1 {
        # Use the least-connections algorithm instead of the default round-robin
        least_conn;
        server srv1.example.com;
        server srv2.example.com;
        server srv3.example.com;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://myapp1;
        }
    }
}

Keep in mind that for the load balancer to work correctly, you should also remove or comment out the default server configuration (depending on your distribution, the default site under "/etc/nginx/sites-enabled" or "/etc/nginx/conf.d/default.conf"), so it doesn't catch requests on port 80 first.
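
On top of that, nginx performs passive health checks on the upstream servers: if a server fails to respond, it is temporarily taken out of rotation. You can tune this per server; a small sketch with illustrative values:

upstream myapp1 {
    least_conn;
    # Take a server out of rotation for 30s after 3 failed attempts
    server srv1.example.com max_fails=3 fail_timeout=30s;
    server srv2.example.com max_fails=3 fail_timeout=30s;
    server srv3.example.com max_fails=3 fail_timeout=30s;
}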

Expose your Kubernetes application with nginx Gateway Fabric

This one is a bit trickier. It requires not only expertise in web server administration but also in container orchestration. If you don't have any experience with Kubernetes, I recommend you check out this course from KodeKloud. It is a great starting point if you want to get into the topic. Anyway, let's get to the interesting part.

Let's say you have an application inside a Kubernetes cluster. The way it works is that each pod has its own IP address and is isolated from the internet until you expose it. You could of course use a NodePort service, but then you would expose it on one of the high, non-standard ports (30000-32767) instead of standard HTTP/HTTPS (80/443). Your external access would break every time the node IP changes. And features like TLS termination, routing rules, and load balancing across multiple backends are not supported. No bueno.

Instead, you would want to use the Gateway API. And for this, you need an implementation of the Gateway API, which nginx Gateway Fabric (NGF) provides. It lets you configure an HTTP or TCP/UDP load balancer, reverse proxy, or API gateway for applications running on Kubernetes. And the best thing is, NGF operates the gateway to your cluster at a very high level of abstraction. There are virtually no configuration files you have to edit manually. It also works alongside cloud load balancers on platforms like Google Kubernetes Engine or Azure Kubernetes Service. It is very easy to set up once you understand containerization. If you want to try it out yourself, I can recommend this great blog post from nginx, describing how to set up a gateway for your cluster for the first time.
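
To give you an idea, here is a minimal sketch of the two Gateway API resources involved. The hostname, the service name "myapp", and the resource names are placeholders, and it assumes NGF is installed and provides the "nginx" GatewayClass:

# The Gateway defines the entry point into the cluster
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: my-gateway
spec:
  gatewayClassName: nginx
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
# The HTTPRoute attaches to the Gateway and routes traffic to a service
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: myapp-route
spec:
  parentRefs:
    - name: my-gateway
  hostnames:
    - "example.com"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: myapp
          port: 80

Once applied with kubectl, NGF picks up these resources and configures its nginx data plane accordingly.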
