AWS: How to Create a Launch Configuration and Auto Scaling Group

auto scaling

This is the third post of exercises from Chapter 5, Elastic Load Balancing, Amazon CloudWatch, and Auto Scaling. This post is an interesting one, as we will show how to auto scale resources based on load. This is one of the great features of cloud computing: you can scale resources out and in quickly, optimising costs and matching your resources to your requirements at any time.

In this post we continue with the preparation for the AWS Certified Solutions Architect exam. Remember, the exercises we are solving here are from AWS Certified Solutions Architect Official Study Guide: Associate Exam by Joe Baron and others.

EXERCISE 5.4: Create a Launch Configuration and Auto Scaling Group

Create a launch configuration using an existing AMI.

The Launch Configuration we will create has these settings:

  • Subnet: subnet-39d81c62
  • AMI: ami-09f4cd7c0b533b081
  • Instance type: t2.micro
  • SSH key: AWSKey
  • Security group: sg-07803a04e87e1e0ae

aws autoscaling create-launch-configuration --launch-configuration-name my-launch-config --image-id ami-09f4cd7c0b533b081 --instance-type t2.micro --key-name AWSKey --security-groups sg-07803a04e87e1e0ae
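To confirm the launch configuration was created before moving on, a quick check can be run (a sketch; it assumes the AWS CLI is configured with credentials for the same region):

```shell
# Describe the launch configuration we just created.
# --launch-configuration-names takes a list; here we filter by name.
aws autoscaling describe-launch-configurations \
    --launch-configuration-names my-launch-config
```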

Create an Auto Scaling group using this launch configuration with a group size of three and spanning two Availability Zones. Do not use a scaling policy. Keep the group at its initial size.

aws autoscaling create-auto-scaling-group --auto-scaling-group-name my-auto-scaling-group --launch-configuration-name my-launch-config --min-size 1 --max-size 3 --availability-zones sa-east-1a
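The command above keeps things minimal with a single Availability Zone and a minimum size of 1. To match the exercise text literally (a group size of three spanning two Availability Zones), a variant like the following could be used instead (a sketch; sa-east-1b is an assumed choice for the second zone):

```shell
# Three instances, pinned at that size (min = max = desired = 3),
# spread across two Availability Zones.
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name my-auto-scaling-group \
    --launch-configuration-name my-launch-config \
    --min-size 3 --max-size 3 --desired-capacity 3 \
    --availability-zones sa-east-1a sa-east-1b
```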

The Auto Scaling group should now have created one instance (we set the minimum size to 1, so the group starts smaller than the three instances the exercise calls for). Let's check that is the case:

$ aws autoscaling describe-auto-scaling-instances
 {
     "AutoScalingInstances": [
         {
             "ProtectedFromScaleIn": false, 
             "AvailabilityZone": "sa-east-1a", 
             "InstanceId": "i-000e7e400aebca7fc", 
             "AutoScalingGroupName": "my-auto-scaling-group", 
             "HealthStatus": "HEALTHY", 
             "LifecycleState": "InService", 
             "LaunchConfigurationName": "my-launch-config"
         }
     ]
 }

Manually terminate an Amazon EC2 instance, and observe Auto Scaling launch a new Amazon EC2 instance.

We will now terminate the instance and check whether the Auto Scaling group replaces it.

Terminate instance:

$ aws ec2 terminate-instances --instance-ids i-000e7e400aebca7fc
 {
     "TerminatingInstances": [
         {
             "InstanceId": "i-000e7e400aebca7fc", 
             "CurrentState": {
                 "Code": 32, 
                 "Name": "shutting-down"
             }, 
             "PreviousState": {
                 "Code": 16, 
                 "Name": "running"
             }
         }
     ]
 }
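Instead of waiting a fixed five minutes, the CLI can block until the instance is actually gone (a sketch using the AWS CLI's built-in waiter):

```shell
# Blocks until the instance reaches the 'terminated' state,
# polling periodically and timing out after several minutes.
aws ec2 wait instance-terminated --instance-ids i-000e7e400aebca7fc
```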


Let's wait 5 minutes and check whether a new instance has been created by the Auto Scaling group; we should see a new instance with a different ID:

$ aws autoscaling describe-auto-scaling-instances
 {
     "AutoScalingInstances": [
         {
             "ProtectedFromScaleIn": false, 
             "AvailabilityZone": "sa-east-1a", 
             "InstanceId": "i-0b8996f47ad79029e", 
             "AutoScalingGroupName": "my-auto-scaling-group", 
             "HealthStatus": "HEALTHY", 
             "LifecycleState": "InService", 
             "LaunchConfigurationName": "my-launch-config"
         }
     ]
 }

EXERCISE 5.5: Create a Scaling Policy

For this exercise we will use the AWS Management Console, because the equivalent CLI commands are quite long and hard to put together; in this scenario the console is the more practical choice.

1. Create an Amazon CloudWatch metric and alarm for CPU utilization using the AWS Management Console.

Go to Services -> Management & Governance -> CloudWatch, select 'Alarms' and then 'Create Alarm'. A pop-up will appear where we can populate the fields.
In our example we want to send a notification by email when CPU utilization goes above 15% on a Linux instance.

CPU alarm notification
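For reference, the same alarm can also be created from the CLI (a sketch; the SNS topic ARN below is a hypothetical placeholder for a topic subscribed to your email address):

```shell
# CloudWatch alarm: average CPU of the Auto Scaling group >= 15%
# over one 5-minute period. Notifies the (hypothetical) SNS topic.
aws cloudwatch put-metric-alarm \
    --alarm-name "CPU utilization above 15" \
    --namespace AWS/EC2 --metric-name CPUUtilization \
    --dimensions Name=AutoScalingGroupName,Value=my-auto-scaling-group \
    --statistic Average --period 300 --evaluation-periods 1 \
    --threshold 15 --comparison-operator GreaterThanOrEqualToThreshold \
    --alarm-actions arn:aws:sns:sa-east-1:123456789012:cpu-alarm-topic
```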

In order to drive the CPU utilization up we can use a handy trick: run the 'yes' utility, which just prints 'y' in a loop and pushes CPU usage up dramatically. To avoid cluttering our console we redirect the output to /dev/null, as shown below:

yes > /dev/null &
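One caveat: a single 'yes' only saturates one core, so on instance types with more than one vCPU the average CPU may not cross the threshold. A sketch that starts one 'yes' per core and records the PIDs so they can be killed later:

```shell
# Start one CPU-burning 'yes' per vCPU and remember the PIDs.
pids=""
for _ in $(seq "$(nproc)"); do
    yes > /dev/null &
    pids="$pids $!"
done
echo "started$pids"
# ...later, instead of 'killall yes':
kill $pids
```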

We can confirm that CPU utilization went up with the top command:

top command output

We will kill this process with 'killall yes' when we are done; don't do it yet, though. We will wait 5 minutes and see whether the alarm fires.

If the alarm is triggered, you will get an email similar to the following:

You are receiving this email because your Amazon CloudWatch Alarm
"CPU utilization above 15" in the South America (Sao Paulo) region
has entered the ALARM state, because "Threshold Crossed: 1 datapoint
[16.16666666666665 (04/04/19 14:45:00)] was greater than or equal to
the threshold (15.0)." at "Thursday 04 April, 2019 14:50:20 UTC".

If that is the case, you can stop the yes process with 'killall yes'.

2. Using the Auto Scaling group from Exercise 5.4, edit the Auto Scaling group to include a policy that uses the CPU utilization alarm.

For this, let's go to the Auto Scaling group and, under Scaling Policies, populate the values as shown in the image below:

auto scaling cpu 15
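For completeness, the console steps above roughly correspond to this CLI sketch: create a simple scaling policy, then attach the returned policy ARN to the CloudWatch alarm as an alarm action (names follow this post; the wiring step is only described in a comment because it needs the ARN the command prints):

```shell
# Simple scaling policy: add 2 instances when triggered.
# The command prints a PolicyARN; setting that ARN as the CloudWatch
# alarm's --alarm-actions lets the alarm drive the scale-out.
aws autoscaling put-scaling-policy \
    --auto-scaling-group-name my-auto-scaling-group \
    --policy-name scale-out-on-cpu \
    --adjustment-type ChangeInCapacity \
    --scaling-adjustment 2
```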

3. Drive CPU utilization on the monitored Amazon EC2 instance(s) up to observe Auto Scaling.

Like in the previous exercise, we will run the ‘yes’ utility:

yes > /dev/null &

We will kill this process with 'killall yes' when we are done; don't do it yet, though. We will wait 5 minutes and check what is happening with the Auto Scaling policy, as shown below:

auto scaling out cpu

We can see the policy has scaled out two additional instances due to the CPU increase. That is exactly what we wanted!

Now we will kill the yes process on the first instance to see whether the Auto Scaling policy scales in. Run:

killall yes

Let's wait 5 minutes and check again:

auto scaling in cpu

Great, that is exactly what we were expecting: the Auto Scaling policy terminated the two extra instances as CPU utilization dropped after we stopped the yes process.