Basic Load Balancer using an nginx container

A blog about setting up a load balancer to distribute load across backend servers.


Quick Overview of Load Balancing

Load balancing refers to efficiently distributing incoming network traffic across a group of backend servers. It adds reliability to the system by sending traffic only to the servers that are online.

Basic architecture of the system that we are going to build

Diagram showing the basic design of our final build.



Creating the root directory of the project

# mkdir ~/Desktop/container_load_balancing

Directory Structure

Create this structure before proceeding further

Tree structure of the project
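
The tree image is not reproduced here; based on the steps that follow, the layout is presumably:

```
container_load_balancing/
├── app1/
│   ├── app.py
│   ├── requirements.txt
│   └── Dockerfile
├── app2/
│   ├── app.py
│   ├── requirements.txt
│   └── Dockerfile
├── nginx/
│   ├── nginx.conf
│   └── Dockerfile
└── docker-compose.yml
```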

Building App1

Navigate to the app1 directory
# cd ~/Desktop/container_load_balancing/app1

Create a Python virtual environment to prevent package conflicts with the locally installed packages.
# python3 -m venv project_venv
# source project_venv/bin/activate

Install Flask if it's not already installed
# python3 -m pip install flask

Edit and write the following code, which simply responds with “Hi. welcome to App1” when it is requested by a client.
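
The code screenshot is not reproduced here, but a minimal sketch of what app1’s app.py might look like follows. The file name and route are assumptions, and this sketch assumes the container starts the app with the Flask CLI (`python3 -m flask run`); only the response text comes from the post:

```python
# app1/app.py -- a minimal sketch; file name and route are assumptions
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # The response identifies which backend served the request,
    # which is how we observe load balancing later
    return "Hi. welcome to App1"
```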

Generate requirements.txt; this contains all the packages and their versions needed for app1 to work properly
# pip freeze > requirements.txt

requirements.txt of app1

The next step is to Dockerize the app

Create a Dockerfile within the app1 dir and configure it.

This is the file that is used when building a container image. We use python as the base image, copy the requirements.txt generated in the previous step into the image, install the packages listed in it, and finally run the app with python3.

Dockerfile of app1
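
The Dockerfile image is not reproduced here; a sketch of what it might contain, following the description above (the base-image tag and internal port are assumptions):

```dockerfile
# app1/Dockerfile -- a sketch; image tag and port are assumptions
FROM python:3.10-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code
COPY app.py .

# Run the app with python3 via the Flask CLI
CMD ["python3", "-m", "flask", "run", "--host=0.0.0.0", "--port=5000"]
```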

Building App2

Follow the same process in the app2 directory as we did for app1.

[Just make the necessary changes: replace “app1” with “app2”.]
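
For completeness, app2’s app.py would presumably differ only in its response text:

```python
# app2/app.py -- same sketch as app1, with the name swapped
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hi. welcome to App2"
```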

Building nginx

Just to give a brief overview, nginx is one of the most popular tools for building load balancers, web servers, and reverse proxies. For more details, check the official page.

Navigate to nginx dir
# cd ~/Desktop/container_load_balancing/nginx

We need to configure nginx to orchestrate the traffic; with the help of the file “nginx.conf” we will achieve this.

We name the upstream “loadbalancer”; this can be anything, but make sure to give the same name to proxy_pass as that of the upstream. Then list the IPs of the containers running app1 and app2, and mention the route that needs to be load balanced.
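
The nginx.conf screenshot is not reproduced here; a sketch of what it might look like, based on the description above and the 6/4 weights mentioned later in the post (the backend hostnames and port are assumptions — with docker-compose, service names resolve via Docker’s internal DNS, so they can stand in for IPs):

```nginx
# nginx/nginx.conf -- a sketch; hostnames and ports are assumptions
events {}

http {
    # Backends; weights match the 6/4 split described later in the post
    upstream loadbalancer {
        server app1:5000 weight=6;
        server app2:5000 weight=4;
    }

    server {
        listen 80;

        # The route to be load balanced
        location / {
            proxy_pass http://loadbalancer;
        }
    }
}
```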


Create nginx image

Add the following code to build the nginx container image.

We use the “nginx” base image, remove the default nginx.conf file, and add the configured nginx.conf file created in the previous step.

Dockerfile of nginx
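
The Dockerfile image is not reproduced here; a sketch following the description above (the config path is the standard nginx location):

```dockerfile
# nginx/Dockerfile -- a sketch
FROM nginx

# Replace the default config with our load-balancer config
RUN rm /etc/nginx/nginx.conf
COPY nginx.conf /etc/nginx/nginx.conf
```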

Building our required system

docker-compose.yml will help us in creating our required architecture

Navigate to the root of this project and create docker-compose file
# cd ~/Desktop/container_load_balancing

Add the following to the docker-compose.yml file.

We are using version 3. We can think of services as Docker containers, so we need 3 containers (one for app1, one for app2, and one for nginx). build: locates the “Dockerfile” and executes it. Port mapping is also done: for app1 and app2 we expose ports 5001 and 5002 respectively, and for nginx we map port 80 of the container to port 8080 of our local system.
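
The docker-compose.yml screenshot is not reproduced here; a sketch matching the description above (the service names and the containers’ internal port 5000 are assumptions):

```yaml
# docker-compose.yml -- a sketch; service names and internal ports are assumptions
version: "3"

services:
  app1:
    build: ./app1
    ports:
      - "5001:5000"   # host 5001 -> container 5000

  app2:
    build: ./app2
    ports:
      - "5002:5000"   # host 5002 -> container 5000

  nginx:
    build: ./nginx
    ports:
      - "8080:80"     # host 8080 -> nginx port 80
    depends_on:
      - app1
      - app2
```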


Time to build
# docker-compose up --build

Testing the load balancer

Once the build is completed, we have a total of 3 containers (verify it by running # docker ps)

  1. app1
  2. app2
  3. nginx (for load balancing)

nginx is accessible on localhost’s port 8080, as we have configured it that way (check the docker-compose file)

We will send 10 requests to the load balancer and observe how it reacts. From the image below, we can see that sometimes app1 responds and sometimes app2. This shows the load balancer is distributing the requests.

Sending 10 requests to loadbalancer
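
One way to send the 10 requests from the command line, assuming the stack is up and nginx is listening on localhost:8080:

```shell
# Send 10 requests to the load balancer and print each response
for i in $(seq 1 10); do
  curl -s http://localhost:8080/ || echo "request $i failed (is the stack running?)"
  echo    # newline after each response
done
```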

app1 is configured with weight as 6
app2 is configured with weight as 4

So, the load balancer will send more requests to app1 since its weight is higher. We are using the default distribution algorithm (round robin), weighted by the values above.

Logs while requests are served



This blog simulates HTTP load balancing by creating 2 containerized apps and 1 nginx container, which sits between the client and the servers and distributes the incoming requests.

Code for the above is in this repo:




Site Reliability Engineer, Cohesity | RHCSA
