Thursday, 17 November 2016

How to Scale a Web Application with Docker


Scaling a Web Application with Docker


In the previous post I discussed building a Laravel application with Docker. In this post I will scale the same application. If you haven't read that post yet, I recommend starting there.

We will clone the same project from GitHub and make the necessary changes.

Scaling a web application is very simple with Docker: we can scale our application with a single docker-compose command.

But before scaling our application we have to set up a load balancer, which will evenly distribute requests across the instances of the scaled service. Fortunately, Docker Hub has HAProxy images that can act as a load balancer. We will use one of those images in our docker-compose.yml file and build a service from it. Let's see it in action.


What is HAProxy?

Official docs
HAProxy is a free, open source high availability solution, providing load balancing and proxying for TCP and HTTP-based applications by spreading requests across multiple servers. It is written in C and has a reputation for being fast and efficient (in terms of processor and memory usage).

To learn more about HAProxy, read the official docs.
We will modify our docker-compose.yml file to configure HAProxy. Copy the code given below and paste it into your docker-compose.yml file.

docker-compose.yml file

loadbalancer:
  image: eeacms/haproxy
  container_name: loadbalancer
  links:
    - web
  ports:
    - "80:5000"
    - "1936:1936"
web:
  build: .
  # container_name: laravel_container
  volumes:
    - .:/var/www/html/app
  links:
    - mysql
mysql:
  image: mysql:latest
  container_name: mysql_laravel
  ports:
    - "3306:3306"
  environment:
    MYSQL_ROOT_PASSWORD: hrhk
    MYSQL_PASSWORD: hrhk
    MYSQL_DATABASE: laravel_db

In this docker-compose.yml file I have created three services:
  1. loadbalancer
  2. web
  3. mysql

Web Service
The definition and configuration of the web service in this yml file is almost the same as in the previous post/project, except that I have commented out the container_name directive. When we scale the web service, every instance would otherwise get the same container name; the names would conflict with each other, which would prevent the service from scaling. We leave it to the Docker engine to assign a default name to each container. If you pay attention to the web service in the docker-compose.yml file above, you will also notice that I haven't used the ports directive that I used in the previous project. That is because we want to scale only this particular service, not the mysql and loadbalancer services. Publishing a static port would result in a port-conflict error, because we cannot run multiple containers on the same host port. I hope you get my point. Anyway, questions are welcome.
Load Balancer Service
The purpose of this whole project is to teach you about horizontally scaling web servers and managing load balancing.
The loadbalancer is the main service: it receives all requests from clients and distributes them evenly across the multiple web servers running on the host. I have used the eeacms/haproxy image to build this service. In the previous project our Laravel container listened on port 80, but in this project the load balancer container is published on port 80, while inside the container the load balancer service listens on port 5000. Apart from publishing this container on port 80, I have linked the loadbalancer service to the web service, so the load balancer can automatically discover the web containers and route all incoming connections to healthy web servers.

Note
The loadbalancer service runs inside the loadbalancer container on port 5000, whereas the loadbalancer container is published on the host system at port 80.
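For reference, the ports entries in the compose file follow Docker's "HOST:CONTAINER" convention, so the mapping described above reads as:

```yaml
ports:
  - "80:5000"    # host port 80 -> port 5000 inside the container (HAProxy frontend)
  - "1936:1936"  # HAProxy stats page, same port on host and container
```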

Now the question arises: why am I using the eeacms/haproxy image instead of the official haproxy or the tutum/haproxy image?

If I used the official haproxy image, I would have to manually configure the haproxy.cfg file located in the /etc/haproxy directory, which becomes a real headache when we have a large number of containers. The other problem is that whenever we run docker-compose up, all containers are assigned new IPs, so we would have to reconfigure haproxy.cfg each time. To overcome this, I am using eeacms/haproxy, which automates all of this with a Python script.
This image generates a pre-configured haproxy.cfg file listing every instance of the scaled web service.
Note that you must force-recreate the loadbalancer service after scaling.

The HAProxy configuration file looks like this:

global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon
    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private
    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL).
    ssl-default-bind-ciphers kEECDH+aRSA+AES:kRSA+AES:+AES256:RC4-SHA:!kEDH:!LOW:!EXP:!MD5:!aNULL:!eNULL

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000
    timeout client  50000
    timeout server  50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend localnodes
    bind *:80
    mode http
    default_backend nodes

backend nodes
    mode http
    balance roundrobin
    option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    option httpchk HEAD / HTTP/1.1\r\nHost:localhost
    server http-server1 127.0.0.1:9000 check
    server http-server2 127.0.0.1:9001 check
    server http-server3 127.0.0.1:9002 check

listen stats *:1936
    stats enable
    stats uri /
    stats hide-version
    stats auth someuser:password




Load Balancing Configuration

To balance traffic between our HTTP listeners, we need to set a few sections within HAProxy:
  • frontend - where HAProxy listens for connections
  • backend - where HAProxy sends incoming connections
  • stats - optionally, set up the HAProxy web tool for monitoring the load balancer and its nodes
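The balance roundrobin directive in the backend section is what makes HAProxy cycle through the servers in order. As a rough illustration (plain shell, not HAProxy code), round-robin selection over three web containers looks like this:

```shell
# Round-robin: request N goes to server (N mod number-of-servers)
servers=(web_1 web_2 web_3)
for request in 0 1 2 3 4 5; do
  echo "request $request -> ${servers[$((request % ${#servers[@]}))]}"
done
# request 0 -> web_1, request 1 -> web_2, request 2 -> web_3, then wraps to web_1
```

With 15 scaled instances the same pattern holds: each new request goes to the next server in the list, so load spreads evenly when requests are similar in cost.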



Mysql Service

This is the same service I used in the previous project; please see that post for details about it.
That covers the theory; now it is time for action. Run the following commands:


cd laravel-docker-app


sudo docker-compose up -d

Note:
  1. docker-compose is a tool that does not ship with Docker itself; you have to install it separately (see the official installation instructions).
  2. Make sure you have stopped Apache and MySQL on localhost to free ports 80 and 3306,
     because we will run the loadbalancer container on port 80 and the mysql container on port 3306:
        sudo service apache2 stop
        sudo service mysql stop
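Before running docker-compose up, you can confirm the ports are actually free. A quick sketch using ss from iproute2 (assumes a Linux host):

```shell
# Report whether ports 80 and 3306 are currently bound on the host
for port in 80 3306; do
  if ss -tln 2>/dev/null | grep -q ":$port "; then
    echo "port $port is in use"
  else
    echo "port $port is free"
  fi
done
```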


If no error occurred, three Docker containers were deployed successfully. We can check with the following command:


sudo docker ps    # or: sudo docker-compose ps


CONTAINER ID        IMAGE                 COMMAND                 CREATED             STATUS              PORTS               NAMES
748a76f5242f    eeacms/haproxy:latest   "python /haproxy/main"   23 seconds ago Up 20 seconds       443/tcp, 0.0.0.0:80->80/tcp, 1936/tcp   laraveldockerapp_loadbalancer_1

7cb3ba02a3ee        laraveldockerapp_web   "/my_init"               25 seconds ago      Up 23 seconds       80/tcp                                  laraveldockerapp_web_1

7c306b02411f        mysql:latest           "docker-entrypoint.sh"   26 seconds ago      Up 25 seconds       0.0.0.0:3306->3306/tcp                  mysql_laravel



I have already given an overview of this project in the previous post; if you have no idea about this Laravel project, go and have a look.

Now we will test our web API. For that we need to insert some fake records into the MySQL database. In the previous post, scroll down until you reach "Two containers are running" and follow the rest of the instructions. Your Laravel API project built with Docker will then be up and running.

Now we will scale our web service. Run this command:
docker-compose scale web=15



Creating and starting laraveldockerapp_web_1 ... done
Creating and starting laraveldockerapp_web_2 ... done
Creating and starting laraveldockerapp_web_3 ... done
Creating and starting laraveldockerapp_web_4 ... done
Creating and starting laraveldockerapp_web_5 ... done
Creating and starting laraveldockerapp_web_6 ... done
Creating and starting laraveldockerapp_web_7 ... done
Creating and starting laraveldockerapp_web_8 ... done
Creating and starting laraveldockerapp_web_9 ... done
Creating and starting laraveldockerapp_web_10 ... done
Creating and starting laraveldockerapp_web_11 ... done
Creating and starting laraveldockerapp_web_12 ... done
Creating and starting laraveldockerapp_web_13 ... done
Creating and starting laraveldockerapp_web_14 ... done
Creating and starting laraveldockerapp_web_15 ... done

After scaling, you must recreate the haproxy container with the following command:

docker-compose -f docker-compose.yml up --force-recreate

After recreating it, you can check your HAProxy configuration by getting into the loadbalancer container:

sudo docker exec -it loadbalancer bash
cd /etc/haproxy
cat haproxy.cfg

or

sudo docker exec -it loadbalancer cat /etc/haproxy/haproxy.cfg
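If you only want the list of backends HAProxy knows about, you can filter the server lines out of the generated config. The snippet below runs against an inline sample (the container names and IP addresses here are made up for illustration); inside the container you would read /etc/haproxy/haproxy.cfg instead:

```shell
# Extract the 'server' entries from a haproxy.cfg-style config
grep -E '^[[:space:]]*server[[:space:]]' <<'EOF'
backend nodes
  balance roundrobin
  server laraveldockerapp_web_1 172.17.0.3:80 check
  server laraveldockerapp_web_2 172.17.0.4:80 check
EOF
```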


HAProxy also has a UI stats page where you can see all the servers and their state:

http://localhost:1936



We have scaled our web service. Check with the sudo docker ps command and you will see 15 instances of our web service running on localhost. How cool is that? We scaled our web server with a single command; this is the power of Docker.

BUT we have only scaled it on our local host. In a real scenario, scaling is done across multiple host machines, maybe 10, 20, 50 or more, depending on the load on the server.

Scaling a dockerized application across different nodes is done in two main ways:
  1. Docker Swarm (Docker's native clustering solution)
  2. Kubernetes (based on Google's experience with clustering applications)
We will see both of them in action, but for that we need a real environment to experiment in. We will use Google Compute Engine to build a cluster of nodes, but for that you will have to wait for my next post.

Thanks










