I manage over 13 Docker containers on my own VPS. These include my projects like hesapciyiz.com, islistesi.com, this blog site, and several background services. Exposing each one to the internet on a separate port is impractical, both for security and for management.
The question of how to publish so many services on a single IP address using the standard ports 80 and 443 initially gave me a lot to think about. The solution lay in migrating the Nginx reverse proxy architecture I've been using for years into a Docker environment. In this post, I'll explain step by step how I set up this structure and what I learned along the way.
Why is Managing Multiple Services on a Single VPS a Problem?
When you want to run multiple web applications on a single server, you hit a fundamental problem: port conflicts. Ports 80 (HTTP) and 443 (HTTPS) are the standard ports, and only one process on the server can bind to each of them at a time, so it's impossible for every service to listen on them directly.
When I first encountered this situation, I tried assigning each service a different port (e.g., site1.com:3000, site2.com:4000). However, this resulted in a poor user experience and created additional complexity in areas like SSL certificate management. As my projects grew, I realized this approach was unsustainable and began searching for a more centralized solution.
The Fundamentals of Nginx Reverse Proxy
Nginx reverse proxy acts as a gatekeeper layer to solve this complex situation. It first receives all incoming HTTP/HTTPS requests, then checks the domain name (Host header) the request came from and forwards it to the relevant Docker container. This way, only Nginx's ports 80 and 443 are exposed to the outside world, while the internal Docker services continue to run on their own private ports.
Thanks to this structure, I can manage SSL certificates from a single point (on Nginx). At the same time, I can achieve performance improvements by utilizing Nginx's caching capabilities in conjunction with CDN services like Cloudflare. In essence, Nginx becomes not just a router for me, but also a performance and security layer.
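To make the routing idea concrete, here is a minimal sketch of Host-based routing with two stripped-down server blocks. The domain names, container names, and ports are placeholders for illustration, not my actual configuration:

```nginx
# Nginx picks the server block whose server_name matches the request's Host header.
server {
    listen 80;
    server_name site1.example.com;       # placeholder domain

    location / {
        proxy_pass http://site1_container:3000;  # container's internal port
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name site2.example.com;       # placeholder domain

    location / {
        proxy_pass http://site2_container:4000;
        proxy_set_header Host $host;
    }
}
```

Both blocks listen on the same port 80; the server_name match alone decides which container receives the request.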
My Nginx + Docker Environment Setup
On my own VPS, I run Nginx inside a Docker container. This provides me with isolation and allows me to easily manage Nginx configurations and dependencies. Below, you will find the main steps and important details of this setup.
ℹ️ Why Nginx Inside a Docker Container?
Running Nginx inside Docker instead of installing it directly on the host system prevents dependency conflicts, allows you to easily test different Nginx versions, and helps you keep your Nginx configurations under version control alongside your application. This isolation is especially valuable for someone like me managing over 13 containers.
Docker Network Structure
To enable secure and name-based communication between the Nginx container and other application containers, creating a dedicated Docker network is critical. This allows containers to reach each other by name rather than IP addresses.
```shell
docker network create nginx-proxy-net
```
With this command, I create a bridge network named nginx-proxy-net. I connect all my web application containers and the Nginx container to this network. This way, Nginx can reach my application with a simple expression like http://my_app_container_name:3000 in the proxy_pass directive.
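On the application side, joining that network from a project's own docker-compose.yml looks roughly like this — the service name, image, and port here are hypothetical:

```yaml
# Hypothetical compose file for one application stack.
services:
  blog_app_container_name:
    image: my-blog:latest     # placeholder image name
    expose:
      - "3000"                # internal port only; no host port mapping needed
    networks:
      - nginx-proxy-net

networks:
  nginx-proxy-net:
    external: true            # the network created above with docker network create
```

Because the network is marked external, Compose attaches the container to the existing nginx-proxy-net instead of creating a project-scoped one, and the service name becomes resolvable by Nginx.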
Running the Nginx Container
I typically use a docker-compose.yml file to run the Nginx container. This allows me to mount Nginx configuration files from my host system and manage port mappings and network connections from a single place.
Here's a simple docker-compose.yml example:
```yaml
version: '3.8'

services:
  nginx:
    image: nginx:latest
    container_name: nginx-proxy
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - ./certbot/conf:/etc/letsencrypt
      - ./certbot/www:/var/www/certbot
    networks:
      - nginx-proxy-net
    # The shell's $! is written as $${!} so Compose doesn't try to interpolate it
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"

networks:
  nginx-proxy-net:
    external: true
```
In this configuration, Nginx's main configuration file (nginx.conf) and the per-domain virtual host configurations I keep under conf.d are read from my host system. I also mount the letsencrypt directory for the SSL certificates that Certbot obtains automatically. The command line makes Nginx reload its configuration every six hours, which is important for picking up newly issued certificates without manual intervention.
SSL Certificate Management (with Certbot)
SSL certificates are indispensable nowadays. I generally use Let's Encrypt and Certbot. I also run Certbot as a separate Docker container and share the same letsencrypt and certbot/www directories with the Nginx container.
💡 Certificate Renewal Automation
Running Certbot regularly with cron or a systemd timer ensures that your certificates do not expire. The nginx -s reload part in the command line of my nginx-proxy container is critical for Nginx to start using a new certificate once Certbot obtains it.
To obtain the initial certificate, I typically use the command certbot certonly --webroot -w /var/www/certbot -d domain.com. Then, I update my Nginx configuration to use this certificate.
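Running Certbot as its own container can be sketched like this — a widely used pattern rather than my exact service definition, sharing the same two directories with the nginx-proxy container:

```yaml
# Sketch of a Certbot companion service; volumes match those mounted by nginx-proxy.
services:
  certbot:
    image: certbot/certbot
    volumes:
      - ./certbot/conf:/etc/letsencrypt
      - ./certbot/www:/var/www/certbot
    # Attempt a renewal twice a day; $! is escaped as $${!} for Compose.
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
```

The renewal loop here pairs with the six-hour reload loop on the Nginx side: Certbot writes renewed certificates into the shared volume, and Nginx picks them up on its next reload.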
Configuring Services with Nginx
I create a separate .conf file for each Docker service within Nginx's conf.d directory. This keeps the configuration organized and means that when I want to add a new service, I only need to add its corresponding file.
Example Configuration for an Application (Astro Blog)
This blog site also runs within a Docker container on my VPS. Here's a simplified version of the blog.mustafaerbay.com.conf file:
```nginx
server {
    listen 80;
    server_name blog.mustafaerbay.com;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl http2;
    server_name blog.mustafaerbay.com;

    ssl_certificate /etc/letsencrypt/live/blog.mustafaerbay.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/blog.mustafaerbay.com/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/blog.mustafaerbay.com/chain.pem;
    include /etc/nginx/snippets/ssl-params.conf; # My common SSL settings

    location / {
        proxy_pass http://blog_app_container_name:3000; # The port the application listens on
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_redirect off;
    }

    # Cache-Control override for Astro's static assets
    location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|webp|woff2|woff|ttf|eot)$ {
        expires 1y;
        add_header Cache-Control "public, max-age=31536000, immutable";
        proxy_pass http://blog_app_container_name:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_redirect off;
    }
}
```
In this configuration, requests arriving on port 80 are automatically redirected to port 443. After defining my SSL certificates, I forward all requests to the / path to port 3000 of the Docker container named blog_app_container_name. The proxy_set_header directives ensure the application receives the correct Host, IP, and protocol information.
Adding Multiple Applications
When I want to add a new application, I essentially follow the same steps:
- I connect the application, as a Docker container, to the nginx-proxy-net network.
- I create a new domain.com.conf file for the application and add it to the conf.d directory.
- I run docker exec nginx-proxy nginx -s reload to have the Nginx container reload its configuration.
With this method, I can seamlessly publish different projects like hesapciyiz.com, islistesi.com, and spamkalkani.com on the same physical server through the same Nginx reverse proxy. Each has its own domain name, and Nginx directs the request to the correct container based on the Host header.
Common Problems and Their Solutions
Of course, I've encountered some problems while setting up and managing this structure. Field experience is partly about learning from these mistakes.
Nginx Configuration Errors
It's very easy to make a syntax error in Nginx configuration files. A forgotten semicolon (;) or an incorrect directive can prevent Nginx from starting or cause it to malfunction.
⚠️ Nginx Config Test
I always run the nginx -t command before deploying an Nginx configuration. It detects syntax errors and many configuration problems in advance. If Nginx runs inside a Docker container, you can execute it as docker exec nginx-proxy nginx -t.
If Nginx cannot start or doesn't work as expected, the first place I look is the docker logs nginx-proxy output. The error messages here usually directly indicate the source of the problem.
Docker Network Communication Issues
Sometimes Nginx cannot reach the target Docker container via proxy_pass. This usually stems from the following reasons:
- Incorrect container name: using the wrong container name in the proxy_pass directive.
- Different networks: the Nginx and application containers not being on the same Docker network. On my VPS, I once attached an application container to a different network, and Nginx naturally returned an "upstream timed out" error.
- Wrong application port: specifying a port the application isn't actually listening on inside Docker (e.g., writing 8080 instead of 3000).
In such cases, I check which networks the container is connected to and its IP address using the docker inspect <container_name> command. I also test if I can reach the application container by running curl http://<app_container_name>:<port> from within the Nginx container.
Cache and Header Management (with Cloudflare)
Sometimes I need to fine-tune Cloudflare's cache for my Astro sites. Astro can default to returning headers like Cache-Control: public, max-age=0, must-revalidate, which causes Cloudflare to always go to the origin.
To overcome this, I add specific Cache-Control headers for static assets in the Nginx configuration. As shown in the Astro blog example above, for URLs with file extensions like .js, .css, .png, I ensure Nginx adds the expires 1y; and add_header Cache-Control "public, max-age=31536000, immutable"; headers. This way, Cloudflare can cache these assets for a long time, and fewer requests reach the server.
Performance and Security Tips
I don't just use Nginx as a reverse proxy; I also make some adjustments to improve my server's performance and security.
Nginx Worker Settings
The number of worker_processes Nginx uses should be adjusted based on your server's CPU core count. I generally use worker_processes auto;, which allows Nginx to automatically detect the number of CPU cores. worker_connections determines how many concurrent connections each worker process can handle. On my VPS, I usually keep this value at 1024 or 2048.
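These settings live at the top level of nginx.conf; the worker_connections value below is one I mention using, not a universal default:

```nginx
# Top-level worker tuning in nginx.conf
worker_processes auto;        # one worker per CPU core, detected automatically

events {
    worker_connections 2048;  # max simultaneous connections per worker
}
```

The theoretical concurrency ceiling is roughly worker_processes × worker_connections, though each proxied request consumes two connections (client side and upstream side).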
Security Headers
I add some HTTP security headers through Nginx to make my web applications more secure. These headers help prevent browsers from exploiting certain security vulnerabilities.
Here are some examples I keep in a file like snippets/security-headers.conf:
```nginx
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "no-referrer-when-downgrade" always;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
```
By including this snippets file in each of my server blocks, I automatically add these headers to all my sites.
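Applying the snippet is then a one-line include per server block — the domain below is a placeholder:

```nginx
server {
    listen 443 ssl http2;
    server_name example.com;  # placeholder

    include /etc/nginx/snippets/security-headers.conf;

    # ... ssl_certificate and location blocks ...
}
```

One nginx subtlety worth knowing: a block that declares its own add_header (such as a location that sets Cache-Control) does not inherit add_header directives from the server level, so the snippet may need to be re-included inside such locations.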
Rate Limiting
Spam bots and malicious requests have always been a problem for me, especially for my API-based services like spamkalkani.com. Nginx's rate limiting features are a great tool for limiting requests from specific IP addresses.
```nginx
# Add to the http block in nginx.conf
limit_req_zone $binary_remote_addr zone=mylimit:10m rate=5r/s;
```
The configuration above defines a limit allowing 5 requests per second per IP address. I can then apply this to a location block:
```nginx
location /api/ {
    limit_req zone=mylimit burst=10 nodelay;
    proxy_pass http://api_app_container_name:3001;
    # ... other proxy settings
}
```
This helps me keep bot requests to spamkalkani.com's API under control. Last month, my CPU spiked to 90% due to a bot attack; after implementing this setting, I could finally breathe easy.
Conclusion
Managing multiple web services on a single VPS using Nginx reverse proxy and Docker has been a cost-effective and highly organized solution for me. With this setup, I can efficiently utilize my server resources, easily scale my services, and manage security and performance from a central point.
I had a lot of headaches setting up this system, especially with inter-container network issues at first. But now it works like a charm, and adding a new project only takes a few minutes. Do you have a different approach you use for managing multiple services on a single VPS? Or did you get stuck somewhere while applying the steps in this guide? We can discuss it in the comments. Perhaps in my next post, I'll explain how I added monitoring on top of this structure.