Scaling Microservices with Docker Compose, Interlock, and HAProxy/NGINX

Back in the day, we had a monolithic application running on a heavy VM. We knew everything about it: where it was running, its IP address, which TCP/UDP port it was listening on, its status and health, etc. When we wanted to find that app, we knew where to look. We even gave it a static IP address and advertised its DNS name for other apps to refer to it. But things have changed, and the monolith is now a bunch of microservices running in Docker containers.

The new microservices architecture has many advantages: flexibility, portability, agility, and scalability, to name a few. But it also introduces some challenges, especially for operations teams used to the monolithic approach. One of these is stochasticity: there is a notion of randomness when deploying microservices, especially on a cluster of nodes (e.g., a Swarm), as attributes that were well-defined in the old days become unknown or hard to find under the new architecture.

In this article, I will describe how the challenge of microservices stochasticity can be addressed with a service-discovery service called Interlock. I will also describe how to scale these microservices by combining Interlock with Docker Compose and an HAProxy/NGINX service.

Let’s start with a simple app composed of two microservices: web (Python) and db (Mongo). Each of these microservices does one and only one thing. The web microservice receives HTTP requests and responds to them. The db microservice stores data persistently and serves it on the web microservice’s request. These two microservices are described in a docker-compose.yml file. If you’re not familiar with Docker Compose, please visit this site to learn more.

# Mongo DB
db:
  image: mongo
  ports:
    - 27017
  command: --smallfiles
# Python App
web:
  build: .
  ports:
    - "5000:5000"
  links:
    - db:db
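For illustration, here is a minimal stand-in for the web microservice using only the Python standard library. This is a sketch of the pattern, not the actual dockchat code: an in-memory list stands in for the Mongo-backed db service, and the handler and `serve` helper are my own simplifications.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# In-memory store standing in for the db (Mongo) microservice.
MESSAGES = []

class ChatHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Return all stored messages as JSON.
        body = json.dumps(MESSAGES).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):
        # Store the posted JSON message.
        length = int(self.headers.get("Content-Length", 0))
        MESSAGES.append(json.loads(self.rfile.read(length)))
        self.send_response(201)
        self.send_header("Content-Length", "0")
        self.end_headers()

    def log_message(self, *args):
        # Silence per-request logging for the example.
        pass

def serve(port=5000):
    # Bind the server and handle requests on a background thread.
    server = HTTPServer(("127.0.0.1", port), ChatHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Passing `port=0` binds an ephemeral port, which is convenient for local experiments; in the compose file above, the container listens on 5000.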

By installing Docker Compose and pointing it at this docker-compose.yml file, we can run the app on a single Docker Engine or on a Docker Swarm cluster. In this example, I will be running the app on a Swarm cluster. Docker Compose will take care of building/pulling the images and spinning up the two containers on one of the Swarm nodes. Note: since these two services are linked, Swarm will ensure they get placed on the same node.

dockchat# docker-compose up -d
Recreating dockchat_db_1...
Recreating dockchat_web_1...

dockchat# docker-compose ps
     Name            Command          State    Ports
dockchat_db_1    / --smallfiles       Up       27017/tcp
dockchat_web_1   python               Up       ->5000/tcp

Now let’s use the awesome “scale” feature that Docker Compose has. This feature creates new containers from the same service description. In this case, let’s try to scale our web service to 5.

# docker-compose scale web=5
Creating dockchat_web_2...
Creating dockchat_web_3...
Creating dockchat_web_4...
Creating dockchat_web_5...
Starting dockchat_web_2...
Starting dockchat_web_3...
Starting dockchat_web_4...
Starting dockchat_web_5...

# docker-compose ps
     Name            Command          State    Ports
dockchat_db_1    / --smallfiles       Up       27017/tcp
dockchat_web_1   python               Up       ->5000/tcp
dockchat_web_2   python               Up       ->5000/tcp
dockchat_web_3   python               Up       ->5000/tcp
dockchat_web_4   python               Up       ->5000/tcp
dockchat_web_5   python               Up       ->5000/tcp

Looks good so far! The app is up on one of the Swarm nodes, but how can we access it? Which node/IP is it running on? What happens if we re-deploy this app? Are all the web containers being utilized?

All of these are valid questions, and they highlight the challenge of stochasticity I described earlier. There needs to be a better way to access these microservices without manually searching for their deployment attributes such as IP, port, node, and so on. We need a dynamic, event-driven system to discover these services and register their attributes for other services to use, and that’s exactly what Interlock does.

Interlock is a service that detects microservice containers as they come up or go down and registers/removes them with other services that need to know about them. An example would be a load-balancing service such as HAProxy or NGINX. Interlock supports both NGINX and HAProxy, but anyone can write a new plugin for their preferred service.
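Interlock itself is written in Go, so purely to illustrate the plugin-dispatch pattern, here is a hypothetical Python sketch. The class and method names are mine, not Interlock’s actual API: a dispatcher fans each container event out to every registered plugin, and the HAProxy plugin keeps its upstream list in sync.

```python
class ProxyPlugin:
    """Base class: a plugin reacts to container start/die events."""
    def handle_event(self, event):
        raise NotImplementedError

class HAProxyPlugin(ProxyPlugin):
    def __init__(self):
        self.upstreams = set()

    def handle_event(self, event):
        # Register on "start", deregister on "die", then return the
        # current upstream list (a real plugin would reload the proxy).
        container = event["container"]
        if event["type"] == "start":
            self.upstreams.add(container)
        elif event["type"] == "die":
            self.upstreams.discard(container)
        return sorted(self.upstreams)

class EventDispatcher:
    """Fans each Docker event out to every registered plugin."""
    def __init__(self, plugins):
        self.plugins = plugins

    def dispatch(self, event):
        results = {}
        for plugin in self.plugins:
            results[type(plugin).__name__] = plugin.handle_event(event)
        return results
```

Adding support for another proxy then amounts to writing one more `ProxyPlugin` subclass, which mirrors how Interlock’s NGINX and HAProxy plugins coexist.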

Let’s pick up where we left off and see how Interlock can solve this problem. In this example, I’m using the HAProxy plugin (you can easily substitute NGINX instead). I added a new service called ‘interlock’ to the docker-compose.yml file with the following parameters:

# Interlock/HAProxy
interlock:
  image: ehazlett/interlock:latest
  ports:
    - "80:80"
  volumes:
    - /var/lib/docker:/etc/docker
  command: "--swarm-url tcp://<SWARM_MASTER_IP>:3375 --debug --plugin haproxy start"
# DB
db:
  image: mongo
  ports:
    - 27017
  command: --smallfiles
# Web
web:
  build: .
  environment:
    - INTERLOCK_DATA={"hostname":"demo","domain":"interlock.com"}
  ports:
    - "5000"
  links:
    - db:db

The main thing to note here is that Interlock is launched like any other service, in a container. I also needed to add INTERLOCK_DATA as an environment variable on the web service’s container; INTERLOCK_DATA tells Interlock how to register the new service.

One of the great features of Docker is the Events API. From the Docker client, ‘docker events’ will stream all events happening on a specific Docker Engine or Docker Swarm cluster. Many projects utilize the Events API for visibility into and monitoring of container operations. Similarly, Interlock uses the Events API to discover when containers come up or go down. It then collects attributes of those containers, such as service name, hostname, host IP, and port. This info is used to generate a new config for HAProxy/NGINX. The following diagram and log output show how Interlock works:

[Diagram: how Interlock works]

dockchat-interlock# docker-compose up -d
Creating dockchatinterlock_db_1...
Creating dockchatinterlock_web_1...
Creating dockchatinterlock_interlock_1...

dockchat-interlock# docker-compose scale web=2
Creating dockchatinterlock_web_2...
Starting dockchatinterlock_web_2...

# docker logs dockchatinterlock_interlock_1
time="2015-09-19T20:44:32Z" level="debug" msg="[interlock] event: date=1442695472 type=create image=dockchatinterlock_web node:ip-10-0-0-80 container=dab9acc32f274710e0a2dc17d2546f1735993cd1b9b1e5354bd7e1a9a27ce83a"
time="2015-09-19T20:44:32Z" level="info" msg="[interlock] dispatching event to plugin: name=haproxy version=0.1"
time="2015-09-19T20:44:32Z" level="debug" msg="[interlock] event: date=1442695472 type=start image=dockchatinterlock_web node:ip-10-0-0-80 container=dab9acc32f274710e0a2dc17d2546f1735993cd1b9b1e5354bd7e1a9a27ce83a"
time="2015-09-19T20:44:32Z" level="info" msg="[interlock] dispatching event to plugin: name=haproxy version=0.1"
time="2015-09-19T20:44:32Z" level="debug" msg="[haproxy] update request received"
time="2015-09-19T20:44:32Z" level="debug" msg="[haproxy] generating proxy config"
time="2015-09-19T20:44:32Z" level="info" msg="[haproxy] upstream= container=dockchatinterlock_web_2"
time="2015-09-19T20:44:32Z" level="info" msg="[haproxy] upstream= container=dockchatinterlock_web_1"
time="2015-09-19T20:44:32Z" level="debug" msg="[haproxy] adding host name=demo_interlock_com"
time="2015-09-19T20:44:32Z" level="debug" msg="[haproxy] jobs: 0"
time="2015-09-19T20:44:32Z" level="debug" msg="[haproxy] reload triggered"
time="2015-09-19T20:44:32Z" level="info" msg="[haproxy] proxy reloaded and ready"
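The behavior in these logs can be sketched as a small event-replay loop. This is an illustrative simplification: the field names and the HAProxy config template below are my assumptions, not Interlock’s actual event payload or template.

```python
import json

def upstreams_from_events(event_lines):
    """Replay Docker-style event records and return live containers.

    Each line is JSON with "status", a container "name", and an
    optional host "addr" (simplified relative to the real Events API).
    """
    live = {}
    for line in event_lines:
        event = json.loads(line)
        name = event["name"]
        if event["status"] == "start":
            live[name] = event.get("addr", "")
        elif event["status"] in ("die", "stop"):
            live.pop(name, None)
    return live

def render_haproxy_backend(live):
    """Render an HAProxy backend stanza for the surviving containers."""
    lines = ["backend dockchat", "    balance roundrobin"]
    for name, addr in sorted(live.items()):
        lines.append(f"    server {name} {addr} check")
    return "\n".join(lines)
```

The key point is that the proxy config is derived entirely from the event stream, so a container that dies simply drops out of the next rendered config; no manual bookkeeping is needed.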

We can see that when we scaled the web app, Interlock detected the new containers and registered them with HAProxy. To verify which backend the containers were registered under, we can look at the HAProxy stats page. Another point to note here is that a single Interlock service can register multiple sites automatically: the moment a new container comes up with a different hostname in its INTERLOCK_DATA, Interlock will add the new config to HAProxy. Pretty neat!

[Screenshot: HAProxy stats page showing the registered web containers]

Finally, we can see from the stats page and from the web service itself that traffic hitting the service was load-balanced across the new containers:

[Screenshot: load-balanced responses from the web containers]
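The per-hostname registration described above can be sketched roughly as follows. The function and the HAProxy frontend template are illustrative assumptions, not Interlock’s actual template: containers are grouped by the hostname/domain in their INTERLOCK_DATA, and each distinct site gets its own host-matching ACL.

```python
import json

def render_frontend(containers):
    """Given (container_name, interlock_data_json) pairs, emit the
    host-matching ACLs an HAProxy frontend would need, one per site."""
    sites = {}
    for name, raw in containers:
        data = json.loads(raw)
        # Join hostname and domain into an FQDN, skipping empty parts.
        fqdn = ".".join(p for p in (data["hostname"], data["domain"]) if p)
        sites.setdefault(fqdn, []).append(name)
    lines = ["frontend http-in", "    bind *:80"]
    for fqdn in sorted(sites):
        backend = fqdn.replace(".", "_")
        lines.append(f"    acl is_{backend} hdr(host) -i {fqdn}")
        lines.append(f"    use_backend {backend} if is_{backend}")
    return "\n".join(lines)
```

A second site only requires launching containers with different INTERLOCK_DATA; the next rendered config picks it up with no manual proxy changes.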

Note: to access the site from your browser, make sure you add an /etc/hosts entry mapping the site’s hostname to the Interlock container’s host IP address.
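For example, if the site’s hostname were demo.interlock.com (as the HAProxy log line above suggests), the entry might look like the following, with the placeholder IP replaced by the host actually running the Interlock container:

```
# /etc/hosts — point the demo site at the Interlock container's host
<INTERLOCK_HOST_IP>   demo.interlock.com
```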

Conclusion: In this article, I went over the challenge of stochasticity when deploying microservice containers on a cluster of nodes. I described how Interlock addresses this challenge by dynamically discovering these services and registering them with other services like HAProxy. Finally, I demonstrated how combining Interlock with HAProxy can ease scaling your applications on a Swarm cluster.

Source repo can be found here.

Special thanks to Evan Hazlett for creating Interlock.

5 thoughts on “Scaling Microservices with Docker Compose, Interlock, and HAProxy/NGINX”

  1. Great read, thanks! This is def a missing component in the Docker Toolbox so happy to see a nice solution for filling the gap. Would love to hear what approach you have to updating the containers that are load balanced without having HAProxy redirect traffic to the container(s) that are currently being updated.


    1. Nicola Kabar

Interlock watches the Events API, so the only way to remove a container from the HAProxy config is when Interlock detects it went down. You can either inject a new update and restart the container, or fork Interlock and adjust how it handles plugin re-configs.


