How to Create a Web Application that Auto Scales with AWS


This is the fourth and last post from Chapter 5, "Elastic Load Balancing, Amazon CloudWatch, and Auto Scaling". In this exercise we will combine everything we already know about AWS and show its potential. It is a long exercise, but it is worth it.

We will create a load balancer and a set of web servers that scale out and in depending on the amount of traffic they receive. The web servers will deploy, bootstrap, and register with the load balancer automatically, without any user intervention.

In this post we continue preparing for the AWS Certified Solutions Architect exam. Remember, the exercises we are solving here are from AWS Certified Solutions Architect Official Study Guide: Associate Exam by Joe Baron and others.

EXERCISE 5.6: Create a Web Application That Scales

1. Create a small web application architected with an Elastic Load Balancing load balancer, an Auto Scaling group spanning two Availability Zones that uses an Amazon CloudWatch metric, and an alarm attached to a scaling policy used by the Auto Scaling group.

First, we create the security group that will hold the rules for the web servers and the load balancer:

aws ec2 create-security-group --group-name autoscale --description "chapter5.6"
     "GroupId": "sg-0f9b8ab2a7298cb55"

Add rules allowing SSH and HTTP access from the IP address of your workstation, and HTTP access from the security group itself so the ELB can connect to the web servers (replace <your-ip> with your workstation's public IP address):

aws ec2 authorize-security-group-ingress --group-id sg-0f9b8ab2a7298cb55 --protocol tcp --port 22 --cidr <your-ip>/32
aws ec2 authorize-security-group-ingress --group-id sg-0f9b8ab2a7298cb55 --protocol tcp --port 80 --cidr <your-ip>/32
aws ec2 authorize-security-group-ingress --group-id sg-0f9b8ab2a7298cb55 --protocol tcp --port 80 --source-group sg-0f9b8ab2a7298cb55
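Copy-pasting resource IDs between commands is error-prone; the JSON output can be captured into a shell variable instead. A small sketch (the aws call is shown commented out, since it needs real credentials; the extraction itself runs on the sample output above):

```shell
# With credentials configured, the ID can be captured directly via JMESPath:
#   SG_ID=$(aws ec2 create-security-group --group-name autoscale \
#           --description "chapter5.6" --query GroupId --output text)
# The same extraction from saved JSON output, using only standard tools:
json='{ "GroupId": "sg-0f9b8ab2a7298cb55" }'
SG_ID=$(printf '%s' "$json" | sed -n 's/.*"GroupId": "\([^"]*\)".*/\1/p')
echo "$SG_ID"   # prints sg-0f9b8ab2a7298cb55
```

With the ID in a variable, the authorize-security-group-ingress calls above can use $SG_ID instead of the literal group ID.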

Next, we create a bash script that we will use for bootstrapping the Linux instances. Create a file with the following content on the machine where you run the AWS CLI and save it under a name of your choice (we will refer to it as user-data.sh below):

#!/bin/bash
apt-get -y update
apt-get -y install apache2
echo $HOSTNAME > /var/www/html/index.html
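The script can be created and sanity-checked from the shell. A sketch (user-data.sh is just our example file name; EC2 limits user data to 16 KB, so the size check is a cheap guard):

```shell
# Write the bootstrap script (the file name is our choice) and check its
# size; EC2 rejects user data larger than 16 KB.
cat > user-data.sh <<'EOF'
#!/bin/bash
apt-get -y update
apt-get -y install apache2
echo $HOSTNAME > /var/www/html/index.html
EOF
size=$(( $(wc -c < user-data.sh) ))
echo "user-data.sh is $size bytes"
[ "$size" -le 16384 ] && echo "within the 16 KB user-data limit"
```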

We will create the launch configuration which we will use for scaling out and in:

aws autoscaling create-launch-configuration --launch-configuration-name my-launch-config --image-id ami-09f4cd7c0b533b081 --instance-type t2.micro --key-name AWSKey --security-groups sg-0f9b8ab2a7298cb55 --user-data file://user-data.sh

We will create the target group for load balancing, which we will use in the Auto Scaling group:

aws elbv2 create-target-group --name my-targets --protocol HTTP --port 80 --vpc-id vpc-da024ebd

Take note of the target group ARN for the next step:

"TargetGroupArn": "arn:aws:elasticloadbalancing:sa-east-1:772378070873:targetgroup/my-targets/44ee8a3cdc360921"

Now we create the Auto Scaling group:

aws autoscaling create-auto-scaling-group --auto-scaling-group-name my-auto-scaling-group --launch-configuration-name my-launch-config --min-size 1 --max-size 3 --availability-zones sa-east-1a --target-group-arns "arn:aws:elasticloadbalancing:sa-east-1:772378070873:targetgroup/my-targets/44ee8a3cdc360921"

This configuration will auto scale nodes and register them to the target group which we will use for load balancing.

Now let's create the load balancer and a listener to receive traffic:

aws elbv2 create-load-balancer --name my-load-balancer --subnets subnet-39d81c62 subnet-9c6c92fa --security-groups sg-0f9b8ab2a7298cb55

Take note of the LoadBalancer ARN for the next step:

aws elbv2 create-listener --load-balancer-arn arn:aws:elasticloadbalancing:sa-east-1:772378070873:loadbalancer/app/my-load-balancer/c9b7adbf73460b84 --protocol HTTP --port 80 --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:sa-east-1:772378070873:targetgroup/my-targets/44ee8a3cdc360921
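Before curling the load balancer, it is worth confirming the instance has passed its health checks with describe-target-health. A sketch (the aws call is commented out since it needs credentials; the filter runs on a trimmed sample of its output):

```shell
# aws elbv2 describe-target-health \
#   --target-group-arn arn:aws:elasticloadbalancing:sa-east-1:772378070873:targetgroup/my-targets/44ee8a3cdc360921
# A trimmed sample of the JSON it returns, so the filter below is runnable:
sample='{"TargetHealthDescriptions":[{"Target":{"Id":"i-0bf32d2b6913cd35e","Port":80},"TargetHealth":{"State":"healthy"}}]}'
printf '%s\n' "$sample" | grep -o '"State":"[a-z]*"'
```

Wait until the state is "healthy" before testing; a freshly launched instance reports "initial" while the bootstrap script is still running.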

After all these steps we should have one Linux instance up and running, associated with the target group and load balancer. Let's check that this is the case by connecting to the load balancer's DNS name:

$ curl http://<load-balancer-dns-name>/

That is exactly what we were expecting: the hostname of the Linux instance.

2. Verify that Auto Scaling is operating correctly by removing instances and driving the metric up and down to force Auto Scaling.

Now, in order to verify Auto Scaling is working, we will kill the current instance and check whether the Auto Scaling policy kicks in and launches a new one.
Our Linux instance ID is i-0bf32d2b6913cd35e:

aws ec2 terminate-instances --instance-ids i-0bf32d2b6913cd35e

Let's wait 5 minutes now and try to connect to the ELB DNS name again to see if we get a new hostname; that would show everything is running smoothly.

(Go and get a coffee…)
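Or, instead of a fixed coffee break, a small loop can poll until the hostname changes. Below is a runnable sketch: fake_curl stands in for curl against the ELB DNS name (which needs the live setup) and serves a canned sequence of responses, the old hostname twice, then the replacement's:

```shell
# Canned responses standing in for the ELB: the old backend answers twice,
# then the replacement instance's hostname appears.
printf 'ip-172-31-0-10\nip-172-31-0-10\nip-172-31-5-99\n' > responses.txt
fake_curl() {
  # pop and print the next canned response (swap in: curl http://<elb-dns>/)
  head -n 1 responses.txt
  tail -n +2 responses.txt > responses.tmp && mv responses.tmp responses.txt
}
old=$(fake_curl)
while true; do
  sleep 1                    # poll interval; use e.g. 30s against a real ELB
  new=$(fake_curl)
  [ "$new" != "$old" ] && break
done
echo "replacement instance is serving: $new"
```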

OK, let's try now:

$ curl http://<load-balancer-dns-name>/

That is awesome! Note that the backend hostname is different although we are connecting to the same load balancer DNS name. We simulated an instance failure, the Auto Scaling group policy kicked in and launched a new instance, the bootstrap script installed Apache, and the instance registered itself with the target group so it is accessible through the ELB. Sweet.

For the final test, we will add an Auto Scaling policy that scales out if the request count goes above 5 per target. We will use the console for this operation, as it is much easier than the AWS CLI. So, let's go to Auto Scaling Groups -> Scaling Policies -> Add Policy and enter values like the ones below:
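For reference, one way to express the same policy from the CLI is put-scaling-policy with a target-tracking configuration on the ALBRequestCountPerTarget metric. A sketch using the ARNs from the earlier steps (the aws call is commented out, as it needs the live resources):

```shell
# Target tracking keeps the request rate near 5 per target; the resource
# label combines the load balancer and target group ARN suffixes above.
cat > scaling-policy.json <<'EOF'
{
  "TargetValue": 5.0,
  "PredefinedMetricSpecification": {
    "PredefinedMetricType": "ALBRequestCountPerTarget",
    "ResourceLabel": "app/my-load-balancer/c9b7adbf73460b84/targetgroup/my-targets/44ee8a3cdc360921"
  }
}
EOF
# aws autoscaling put-scaling-policy \
#   --auto-scaling-group-name my-auto-scaling-group \
#   --policy-name request-count-policy \
#   --policy-type TargetTrackingScaling \
#   --target-tracking-configuration file://scaling-policy.json
echo "wrote scaling-policy.json"
```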

Now that the policy is created, we are going to repeat the curl command in a while loop and see what we get:

$ while true; do
>   curl http://<load-balancer-dns-name>/
> done

If we keep the while loop running, after 5 minutes or so the Auto Scaling policy should kick in and deploy new instances, and you should see the output of two additional backend servers as shown below:


That is exactly what we were expecting. Let's look at the activity history of the Auto Scaling group:

That is exactly what we were waiting for: the Auto Scaling group created two additional nodes, so we have a total of 3 available.

Now, if we stop the while loop (Ctrl+C), the Auto Scaling group should scale in, and after 10 minutes or so we should be left with just 1 node, as when we started.

This concludes the exercise. It was a challenging one, but it shows the power of an elastic infrastructure that scales out and in automatically based on the amount of traffic our application receives. Awesome, right?