.. _autoscale_tut:

=============================================
An Introduction to boto's Autoscale interface
=============================================

This tutorial focuses on the boto interface to the Autoscale service. It
assumes you are familiar with boto's EC2 interface and concepts.

Autoscale Concepts
------------------

The AWS Autoscale service is comprised of three core concepts:

#. *Autoscale Group (AG):* An AG can be viewed as a collection of criteria for
   maintaining or scaling a set of EC2 instances over one or more availability
   zones. An AG is limited to a single region.
#. *Launch Configuration (LC):* An LC is the set of information needed by the
   AG to launch new instances - this can encompass image ids, startup data,
   security groups and keys. Only one LC is attached to an AG.
#. *Triggers*: A trigger is essentially a set of rules for determining when to
   scale an AG up or down. These rules can encompass a set of metrics such as
   average CPU usage across instances or incoming requests, a threshold for
   when an action will take place, as well as parameters to control how long
   to wait after a threshold is crossed.

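Each of these concepts maps onto a class in the boto.ec2.autoscale module. As
a quick orientation, these are the autoscale-side imports used in the rest of
this tutorial (triggers are expressed below as scaling policies paired with
CloudWatch metric alarms):

>>> from boto.ec2.autoscale import AutoScaleConnection
>>> from boto.ec2.autoscale import LaunchConfiguration
>>> from boto.ec2.autoscale import AutoScalingGroup
>>> from boto.ec2.autoscale import ScalingPolicy
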
Creating a Connection
---------------------
The first step in accessing autoscaling is to create a connection to the service.
There are two ways to do this in boto. The first is:

>>> from boto.ec2.autoscale import AutoScaleConnection
>>> conn = AutoScaleConnection('<aws access key>', '<aws secret key>')

Alternatively, you can use the shortcut:

>>> import boto
>>> conn = boto.connect_autoscale()

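The shortcut picks your credentials up from the usual boto sources (environment
variables or a boto config file). If you would rather pass them explicitly, the
keyword form below should also work (the key values are placeholders):

>>> conn = boto.connect_autoscale(aws_access_key_id='<aws access key>',
                                  aws_secret_access_key='<aws secret key>')
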
A Note About Regions and Endpoints
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Like EC2, the Autoscale service has a different endpoint for each region. By
default the US East (us-east-1) endpoint is used. To choose a specific region,
use the connect_to_region function of the boto.ec2.autoscale module:

>>> import boto.ec2.autoscale
>>> conn = boto.ec2.autoscale.connect_to_region('eu-west-1')

Alternatively, edit your boto.cfg with the default Autoscale endpoint to use::

    [Boto]
    autoscale_endpoint = autoscaling.eu-west-1.amazonaws.com

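If you are not sure which regions offer the service, boto can list them for
you. A quick sketch (name and endpoint are the attributes boto's RegionInfo
objects expose):

>>> import boto.ec2.autoscale
>>> [(r.name, r.endpoint) for r in boto.ec2.autoscale.regions()]
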
Getting Existing AutoScale Groups
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To retrieve existing autoscale groups:

>>> conn.get_all_groups()

You will get back a list of AutoScalingGroup objects, one for each AG you have.

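Each returned object carries the group's configuration, so you can quickly
survey what already exists. For example, to list each group's name and size
limits (a small sketch):

>>> groups = conn.get_all_groups()
>>> [(g.name, g.min_size, g.max_size) for g in groups]
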
Creating Autoscaling Groups
---------------------------
An Autoscaling group has a number of parameters associated with it.

#. *Name*: The name of the AG.
#. *Availability Zones*: The list of availability zones it is defined over.
#. *Minimum Size*: Minimum number of instances running at one time.
#. *Maximum Size*: Maximum number of instances running at one time.
#. *Launch Configuration (LC)*: A set of instructions on how to launch an
   instance.
#. *Load Balancer*: An optional ELB load balancer to use. See the ELB tutorial
   for information on how to create a load balancer.

For the purposes of this tutorial, let's assume we want to create one autoscale
group over the us-east-1a and us-east-1b availability zones. We want to have
two instances in each availability zone, thus a minimum size of 4. For now we
won't worry about scaling up or down - we'll introduce that later when we talk
about scaling policies - but we'll set a maximum size of 8 so the group has
room to grow when we do. We'll also associate the AG with a load balancer
which we assume we've already created, called 'my-lb'.

Our LC tells us how to start an instance. This will at least include the image
id to use, the security group, and key information. We assume the image id,
key name and security groups have already been defined elsewhere - see the EC2
tutorial for information on how to create these.

>>> from boto.ec2.autoscale import LaunchConfiguration
>>> from boto.ec2.autoscale import AutoScalingGroup
>>> lc = LaunchConfiguration(name='my-launch-config', image_id='my-ami',
                             key_name='my_key_name',
                             security_groups=['my_security_groups'])
>>> conn.create_launch_configuration(lc)

We have now created a launch configuration called 'my-launch-config'. We are
now ready to associate it with our new autoscale group.

>>> ag = AutoScalingGroup(group_name='my_group', load_balancers=['my-lb'],
                          availability_zones=['us-east-1a', 'us-east-1b'],
                          launch_config=lc, min_size=4, max_size=8,
                          connection=conn)
>>> conn.create_auto_scaling_group(ag)

We now have a new autoscaling group defined! At this point instances should be
starting to launch. To view activity on an autoscale group:

>>> ag.get_activities()
[Activity:Launching a new EC2 instance status:Successful progress:100,
 ...]

or alternatively:

>>> conn.get_all_activities(ag)

This autoscale group is fairly useful in that it will maintain the minimum
size without breaching the maximum size defined. That means if one instance
crashes, the autoscale group will use the launch configuration to start a new
one in an attempt to maintain its minimum defined size. It knows instance
health using the health check defined on its associated load balancer.

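You can see the instances the group is currently maintaining, and their
health, directly on the group object. A quick sketch (instance_id,
lifecycle_state and health_status are the attributes boto reports for each
member instance):

>>> group = conn.get_all_groups(names=['my_group'])[0]
>>> [(i.instance_id, i.lifecycle_state, i.health_status) for i in group.instances]
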
Scaling a Group Up or Down
^^^^^^^^^^^^^^^^^^^^^^^^^^
It can also be useful to scale a group up or down depending on certain criteria.
For example, if the average CPU utilization of the group goes above 70%, you may
want to scale up the number of instances to deal with demand. Likewise, you
might want to scale down if usage drops again.
These rules for **how** to scale are defined by *Scaling Policies*, and the
rules for **when** to scale are defined by CloudWatch *Metric Alarms*.

For example, let's configure scaling for the above group based on CPU
utilization. We'll say it should scale up if the average CPU usage goes above
70% and scale down if it goes below 40%.

Firstly, define some Scaling Policies. These tell Auto Scaling how to scale
the group (but not when to do it; we'll specify that later).

We need one policy for scaling up and one for scaling down.

>>> from boto.ec2.autoscale import ScalingPolicy
>>> scale_up_policy = ScalingPolicy(
            name='scale_up', adjustment_type='ChangeInCapacity',
            as_name='my_group', scaling_adjustment=1, cooldown=180)
>>> scale_down_policy = ScalingPolicy(
            name='scale_down', adjustment_type='ChangeInCapacity',
            as_name='my_group', scaling_adjustment=-1, cooldown=180)

The policy objects are now defined locally. Let's submit them to AWS.

>>> conn.create_scaling_policy(scale_up_policy)
>>> conn.create_scaling_policy(scale_down_policy)

Now that the policies have been digested by AWS, they have extra properties
that we aren't aware of locally. We need to refresh them by requesting them
back again.

>>> scale_up_policy = conn.get_all_policies(
            as_group='my_group', policy_names=['scale_up'])[0]
>>> scale_down_policy = conn.get_all_policies(
            as_group='my_group', policy_names=['scale_down'])[0]

Specifically, we'll need the Amazon Resource Name (ARN) of each policy, which
will now be a property of our ScalingPolicy objects.

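If you want to double-check what came back, you can list the policies on the
group together with their ARNs (a quick sketch; policy_arn is the attribute
the alarm definitions below rely on):

>>> [(p.name, p.policy_arn) for p in conn.get_all_policies(as_group='my_group')]
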
Next we'll create CloudWatch alarms that will define when to run the
Auto Scaling Policies.

>>> cloudwatch = boto.connect_cloudwatch()

It makes sense to measure the average CPU usage across the whole Auto Scaling
Group, rather than individual instances. We express that as CloudWatch
*Dimensions*.

>>> alarm_dimensions = {"AutoScalingGroupName": 'my_group'}

Create an alarm for when to scale up, and one for when to scale down.

>>> from boto.ec2.cloudwatch import MetricAlarm
>>> scale_up_alarm = MetricAlarm(
            name='scale_up_on_cpu', namespace='AWS/EC2',
            metric='CPUUtilization', statistic='Average',
            comparison='>', threshold='70',
            period='60', evaluation_periods=2,
            alarm_actions=[scale_up_policy.policy_arn],
            dimensions=alarm_dimensions)
>>> cloudwatch.create_alarm(scale_up_alarm)

>>> scale_down_alarm = MetricAlarm(
            name='scale_down_on_cpu', namespace='AWS/EC2',
            metric='CPUUtilization', statistic='Average',
            comparison='<', threshold='40',
            period='60', evaluation_periods=2,
            alarm_actions=[scale_down_policy.policy_arn],
            dimensions=alarm_dimensions)
>>> cloudwatch.create_alarm(scale_down_alarm)

Auto Scaling will now create a new instance if the existing cluster averages
more than 70% CPU for two minutes. Similarly, it will terminate an instance
when CPU usage sits below 40% for two minutes. Auto Scaling will not add or
remove instances beyond the limits of the Auto Scaling Group's 'max_size' and
'min_size' properties.

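Those limits are not fixed forever. If you later want to give the group more
(or less) headroom, you can change them on the group object and push the
update back to AWS. A minimal sketch using boto's AutoScalingGroup.update()
(the new sizes are just examples):

>>> group = conn.get_all_groups(names=['my_group'])[0]
>>> group.min_size = 2
>>> group.max_size = 10
>>> group.update()
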
To retrieve the instances in your autoscale group:

>>> ec2 = boto.connect_ec2()
>>> group = conn.get_all_groups(names=['my_group'])[0]
>>> instance_ids = [i.instance_id for i in group.instances]
>>> reservations = ec2.get_all_instances(instance_ids)
>>> instances = [i for r in reservations for i in r.instances]

To delete your autoscale group, we first need to shut down all the
instances:

>>> ag.shutdown_instances()

Once the instances have been shut down, you can delete the autoscale
group:

>>> ag.delete()

You can also delete your launch configuration:

>>> lc.delete()
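
If you want to clean up everything this tutorial created, you may also want to
remove the CloudWatch alarms and scaling policies. A sketch, assuming the names
used above (delete_alarms takes a list of alarm names, and delete_policy takes
the policy name plus the group it belongs to):

>>> cloudwatch.delete_alarms(['scale_up_on_cpu', 'scale_down_on_cpu'])
>>> conn.delete_policy('scale_up', autoscale_group='my_group')
>>> conn.delete_policy('scale_down', autoscale_group='my_group')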