





<!DOCTYPE html>
<html><body>
<h1>Auto Scaling on AWS</h1>
<p><em>Exotel engineering blog, published 2017-02-14</em></p>
<p>Exotel is growing faster than it ever has. We now handle more than 4 million phone calls per day, and that number only keeps growing. A few months ago we decided to pay down some technical debt by overhauling parts of our DevOps practice. In this post, we describe the problems we faced while re-architecting bits of our infrastructure and the approach we took to address them. Our hope is that you can apply some of our learnings and reuse some of the <a href="https://github.com/exotel/aws-auto-scaling" target="_blank" rel="noopener">code we wrote</a> to minimize the time and effort required to set up an auto-scaling infrastructure on Amazon AWS.</p>
<p><img class="aligncenter size-full wp-image-353293" src="https://exotel.com/wp-content/uploads/sites/6/2017/03/Grafana-new-obelix-appengine-asg.png" alt="Grafana dashboard showing auto-scaling group metrics" width="920" height="262"></p>
<h3 style="margin-top: 30px;">Getting Started</h3>
<p>Here's how things worked earlier: we'd bake service-based AMIs, then manually update the codebase and make other changes while adding new machines behind the ELBs. This severely limited the rate at which we could add instances or push and revert code and environment changes. 
Besides, there was always scope for human error because the whole process was manual. We would usually add instances when we anticipated higher volumes, or simply when we observed a high load average across the instances.</p>
<p>We started at the sidelines: setting up a build pipeline and storing artifacts on S3. This eliminated human error while updating or reverting code.</p>
<p>With builds formalized, we needed to formalize deployments. This is tricky because there are several ways of doing it and no one-size-fits-all solution. We experimented with a number of approaches. Pre-baked or vanilla AMIs? wget plus unarchive, or rsync? More machines or larger machines? What eventually worked best for us was semi-baked AMIs containing only the packages that take too long to install; everything else is taken care of by Ansible.</p>
<h3 style="margin-top: 30px;">Build pipeline and deployment</h3>
<p>We used Jenkins to set up a build pipeline. The Ansible scripts and the actual service code have separate pipelines.</p>
<p>Each service has a Jenkins job which builds the project and uploads the build artifacts to S3.</p>
<pre class="theme:github lang:sh decode:true" title="Script to create the build artifact of a service">#!/bin/bash

# Fetch the metadata of the most recent build; start at version 1 if none exists.
cd $WORKSPACE
if aws s3 cp s3://build/$ARTIFACT/prod/latest.txt RELEASE; then
    VERSION=$(( $(grep VERSION RELEASE | cut -d'=' -f2) + 1 ))
else
    VERSION=1
fi
echo "MODULE=$ARTIFACT
VERSION=$VERSION
BUILD=$BUILD_NUMBER
GIT_COMMIT=$GIT_COMMIT
TIMESTAMP=$BUILD_TIMESTAMP" > RELEASE
rm -rf $ARTIFACT.tar
tar cf $ARTIFACT.tar obelix commonix RELEASE --exclude="*/.git"
aws s3 cp $ARTIFACT.tar s3://build/$ARTIFACT/prod/$ARTIFACT-${VERSION}.tar --storage-class REDUCED_REDUNDANCY --sse AES256
aws s3 cp $WORKSPACE/RELEASE s3://build/$ARTIFACT/prod/latest.txt --storage-class REDUCED_REDUNDANCY --sse AES256</pre>
<p>This Jenkins job takes the git branch to be used as a parameter. This allows us to deploy patches by forking a patch branch off the "release" branch, applying the patch, and deploying the patch branch to production.</p>
<p><img class="aligncenter size-full wp-image-353159" src="https://exotel.com/wp-content/uploads/sites/6/2017/02/Jenkins.png" alt="Jenkins" width="947" height="183"></p>
<p>The newly created build is considered the "latest" build.</p>
<p>When an ASG cluster scales out, the new instance is configured with the "latest-stable" build of the service.</p>
<p>Another Jenkins job per service promotes any of its builds to "latest-stable".</p>
<pre class="theme:github lang:sh decode:true" title="Script to promote a build to latest-stable">#!/bin/bash

set -e
if [ "latest" == "$RELEASE_VERSION" ]; then
    # Promote the most recent build: tag its commit and copy its metadata.
    aws s3 cp s3://build/$ARTIFACT/prod/latest.txt RELEASE
    GIT_COMMIT_ID=$(grep GIT_COMMIT RELEASE | cut -d'=' -f2)
    cd $WORKSPACE
    git config user.email "xxx@yyy.com"
    git config user.name "Demo user"
    git tag -d $GIT_TAG_NAME || echo "Tag doesn't exist. Creating one"
    git push origin :refs/tags/$GIT_TAG_NAME || echo "Tag doesn't exist. Creating one"
    git tag -a $GIT_TAG_NAME -m "$GIT_TAG_NAME" $GIT_COMMIT_ID
    git push --tags
    aws s3 cp s3://build/$ARTIFACT/prod/latest.txt s3://build/$ARTIFACT/prod/latest-stable.txt --storage-class REDUCED_REDUNDANCY --sse AES256
elif [[ $RELEASE_VERSION =~ ^[0-9]+$ ]]; then
    # Promote a specific build: fetch its artifact, tag its commit and
    # publish its RELEASE file as the new latest-stable metadata.
    if ! aws s3 cp s3://build/$ARTIFACT/prod/$ARTIFACT-$RELEASE_VERSION.tar $ARTIFACT-$RELEASE_VERSION.tar; then
        echo "ERROR: s3://build/$ARTIFACT/prod/$ARTIFACT-$RELEASE_VERSION.tar does not exist."
        exit 1
    fi
    rm -rf $ARTIFACT-$RELEASE_VERSION
    mkdir $ARTIFACT-$RELEASE_VERSION
    cd $ARTIFACT-$RELEASE_VERSION
    tar xf ../$ARTIFACT-$RELEASE_VERSION.tar
    GIT_COMMIT_ID=$(grep GIT_COMMIT RELEASE | cut -d'=' -f2)
    cd $WORKSPACE
    git config user.email "xxx@yyy.com"
    git config user.name "Demo user"
    git tag -d $GIT_TAG_NAME || echo "Tag doesn't exist. Creating one"
    git push origin :refs/tags/$GIT_TAG_NAME || echo "Tag doesn't exist. Creating one"
    git tag -a $GIT_TAG_NAME -m "$GIT_TAG_NAME" $GIT_COMMIT_ID
    git push --tags
    aws s3 cp $ARTIFACT-$RELEASE_VERSION/RELEASE s3://build/$ARTIFACT/prod/latest-stable.txt
fi
else
    echo "Invalid release number. Please specify a valid one."
    exit 1
fi</pre>
<p>Either the latest build or a specific build version of a service can be made the stable version. 
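As an aside, the VERSION bookkeeping that the build script performs on the RELEASE file can be illustrated in isolation. In this sketch, <code>bump_version</code> is a made-up helper name and the file paths are placeholders; the real job inlines the same logic:

```shell
#!/bin/bash
# Illustration of the RELEASE-file version bump used by the build job.
# Reads VERSION=<n> from an existing RELEASE file (if any) and increments it.

bump_version() {
    local release_file="$1"
    local current
    current=$(grep '^VERSION=' "$release_file" 2>/dev/null | cut -d'=' -f2)
    echo $(( ${current:-0} + 1 ))
}

# First build of a module: no RELEASE file fetched from S3 yet.
echo "VERSION=$(bump_version /nonexistent/RELEASE)"   # -> VERSION=1

# Subsequent build: bump whatever latest.txt recorded.
printf 'MODULE=demo\nVERSION=41\n' > /tmp/RELEASE
echo "VERSION=$(bump_version /tmp/RELEASE)"           # -> VERSION=42
```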
When a build is promoted to "latest-stable", we also tag the corresponding commit ID in our git repository so that we can track the version of code running in production.</p>
<p><img class="aligncenter size-full wp-image-353164" src="https://exotel.com/wp-content/uploads/sites/6/2017/02/Jenkins-1.png" alt="Jenkins promote-to-stable job" width="941" height="238"></p>
<p>Code can be deployed to a specific instance or to an entire ASG cluster by choosing the relevant options in a Jenkins job:</p>
<p><img class="size-full wp-image-353166 alignleft" src="https://exotel.com/wp-content/uploads/sites/6/2017/02/Jenkins-1-1.png" alt="Jenkins deployment job parameters" width="621" height="681"></p>
<p>This job internally uses the <a href="https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.py" target="_blank" rel="noopener">EC2 external inventory script</a> to get the IPs of the instances in the ASG to which the deployment has to be done. <a href="https://github.com/exotel/aws-auto-scaling/blob/master/jenkins-jobs/asg-code-push.sh" target="_blank" rel="noopener">Here</a> is the script triggered by the Jenkins job.</p>
<h3 style="margin-top: 30px;">Auto Scaling</h3>
<p>Breaking from our earlier bad practice of configuring infrastructure through the AWS management console, we decided to use <a href="https://www.terraform.io/" target="_blank" rel="noopener">Terraform</a> so that the infrastructure setup is codified and versioned. Following the lessons Netflix <a href="http://techblog.netflix.com/2012/01/auto-scaling-in-amazon-cloud.html" target="_blank" rel="noopener">shared</a> from their auto-scaling implementation, we scale up early and scale down slowly. At steady state, every web server cluster has exactly one On-Demand m4.large instance and one or more m4.large Spot instances.</p>
<p>We use Spot instances heavily; they are around 80% cheaper than On-Demand instances of the same configuration. We set Spot bid prices higher than the On-Demand price for the same instance type, so our Spot instances are almost never terminated: the Spot price rarely rises above the corresponding On-Demand price. 
AWS triggers a warning two minutes before a Spot instance is terminated because the Spot price has risen above the bid (the <a href="https://aws.amazon.com/blogs/aws/new-ec2-spot-instance-termination-notices/" target="_blank" rel="noopener">Spot Instance Termination Notice</a>). We have set up a <a href="https://github.com/exotel/aws-auto-scaling/blob/master/ansible/roles/spot-notifier/files/spot_termination_notifier.sh" target="_blank" rel="noopener">cron job</a> that polls this endpoint every 5 seconds. If it learns that the instance is scheduled for termination, we increase the desired count of the corresponding On-Demand cluster by one, so the service is not disrupted even when Spot instances are terminated abruptly.</p>
<p>Setting up the scaling policies was trickier than we expected. We tried various combinations before settling on the one that worked best for us:</p>
<p><img class="aligncenter size-full wp-image-353202" src="https://exotel.com/wp-content/uploads/sites/6/2017/02/obelix-appengine-scaling-policies.png" alt="Spot cluster scaling policies" width="722" height="573"></p>
<p><img class="aligncenter size-full wp-image-353203" src="https://exotel.com/wp-content/uploads/sites/6/2017/02/obelix-appengine-scaling-policies-ondemand.png" alt="On-Demand cluster scaling policies" width="712" height="368"></p>
<p>Most of our web server clusters currently run Apache. 
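A single iteration of the termination-notice poller described above can be sketched as follows. The URL is the standard EC2 instance-metadata endpoint for Spot termination notices; the actual desired-capacity bump is left as a comment because it depends on ASG names we don't show here:

```shell
#!/bin/bash
# One iteration of a Spot termination-notice check. On a Spot instance that
# is marked for termination, the metadata endpoint returns the termination
# timestamp; otherwise the request fails.

METADATA_URL="http://169.254.169.254/latest/meta-data/spot/termination-time"

check_termination() {
    local when
    when=$(curl -s --max-time 2 "$METADATA_URL" 2>/dev/null)
    if [[ "$when" =~ ^[0-9]{4}-[0-9]{2}-[0-9]{2}T ]]; then
        echo "termination scheduled at $when"
        # Real handler: bump the On-Demand ASG's desired count, e.g.
        # aws autoscaling set-desired-capacity \
        #     --auto-scaling-group-name "$ONDEMAND_ASG" --desired-capacity N
    else
        echo "no termination scheduled"
    fi
}

check_termination
```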
We found that 50% CPU utilization over 5 minutes is a safe threshold above which the cluster should scale out. To handle sudden spikes in traffic, another policy scales out when the Spot cluster's CPU utilization exceeds 85% over two consecutive 1-minute periods. The scale-out policy for the On-Demand cluster has a higher CPU threshold than the Spot cluster's, because On-Demand instances should only be needed when Spot instances cannot be spawned for some reason.</p>
<p>We noticed that AWS's default scale-down policies don't work well. Although there is an option to specify "Seconds to warm up after each step", a scale-in activity is triggered every minute, so the cluster scales in too aggressively and is sometimes left without enough capacity. A "Simple scaling" policy, where "Seconds before allowing another scaling activity" can be specified, works as expected for scaling down. We remove 10% of the cluster's instances when its CPU utilization stays below 30% for 10 consecutive 60-second periods.</p>
<p>Each ASG is associated with a target group, which in turn is associated with an Application Load Balancer (ALB). The instances are in a private subnet and accept requests on port 80 only from the corresponding ALB. The ALB sits in a public subnet and accepts requests from everywhere on ports 80 and 443. 
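As a rough illustration, the simple scale-in policy described above maps onto AWS CLI calls like these. This is a dry-run sketch: the <code>run</code> helper only prints each command, the group and policy names are illustrative, the cooldown value is a guess, and wiring the alarm to the policy (<code>--alarm-actions</code> with the policy ARN) is elided:

```shell
#!/bin/bash
# Dry-run sketch of the simple scale-in policy and its CloudWatch alarm.
# "run" echoes instead of executing, so this is safe to run anywhere.

run() { echo "+ $*"; }

ASG="obelix-appengine-spot"   # illustrative group name

# Simple scaling: remove 10% of the cluster, then wait before
# allowing another scaling activity (cooldown value illustrative).
run aws autoscaling put-scaling-policy \
    --auto-scaling-group-name "$ASG" \
    --policy-name "${ASG}-scale-in" \
    --adjustment-type PercentChangeInCapacity \
    --scaling-adjustment -10 \
    --cooldown 600

# Alarm: average CPU below 30% for 10 consecutive 60-second periods.
run aws cloudwatch put-metric-alarm \
    --alarm-name "${ASG}-low-cpu" \
    --namespace AWS/EC2 --metric-name CPUUtilization \
    --dimensions Name=AutoScalingGroupName,Value="$ASG" \
    --statistic Average --comparison-operator LessThanThreshold \
    --threshold 30 --period 60 --evaluation-periods 10
```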
SSL termination is done only at the ALB.</p>
<p>When an instance is spawned by an auto-scaling group, its user data is set to download a setup script from S3.</p>
<pre class="theme:github lang:sh decode:true" title="AWS launch configuration user data">#!/bin/bash

SERVICE=obelix-appengine
S3BUCKET="s3://build/deploy-scripts/prod"

# Default to the latest-stable versions of the Ansible scripts and the service.
if [ -z "$ANS_VERSION" ]; then
    ANS_VERSION=latest-stable
fi
if [ -z "$VERSION" ]; then
    VERSION=latest-stable
fi

aws s3 cp ${S3BUCKET}/${SERVICE}.sh obelix-appengine.sh
chmod +x obelix-appengine.sh
./obelix-appengine.sh $ANS_VERSION $VERSION</pre>
<p><a href="https://github.com/exotel/aws-auto-scaling/blob/master/ansible/deployment/obelix-appengine.sh" target="_blank" rel="noopener">This script</a>, in turn, downloads the "latest-stable" version of the Ansible scripts and the service code or binary from S3, then runs the service's Ansible playbook to configure the newly launched instance. The playbook's final step, on successful execution, copies a health-check file into place; once the target group's health check passes, the fully configured instance is brought into service.</p>
<h3 style="margin-top: 30px;">Logging and Monitoring</h3>
<p>We still had two things to figure out: logging and monitoring. With instances spawned and terminated dynamically depending on traffic, we needed a way to ship logs off the machines to a centralized location. We tried Filebeat and Heka for shipping but eventually settled on good old rsyslog, which ships logs to a Kafka cluster. 
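Conceptually, the rsyslog side is just an output action pointing at a Kafka broker. A sketch of such a configuration, assuming rsyslog 8+ with the omkafka output module installed (broker addresses and the topic name are placeholders, not our actual setup):

```
# /etc/rsyslog.d/50-kafka.conf (sketch)
module(load="omkafka")

action(
    type="omkafka"
    broker=["kafka-1.internal:9092", "kafka-2.internal:9092"]
    topic="service-logs"
)
```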
The log messages are later consumed from Kafka and indexed into Elasticsearch.</p>
<p>For monitoring, the major problem was maintaining a dynamic inventory in Nagios. Since a short delay in adding new hosts to monitoring was acceptable, we poll for ASG changes every few minutes and <a href="https://github.com/exotel/aws-auto-scaling/tree/master/nagios-config-generator" target="_blank" rel="noopener">update the hosts</a> in the monitoring config.</p>
<p>We also created a <a href="http://grafana.org/" target="_blank" rel="noopener">Grafana</a> dashboard for each web server cluster, fed by AWS CloudWatch metrics, to visualize the cluster's status:</p>
<p><img class="aligncenter wp-image-353206" src="https://exotel.com/wp-content/uploads/sites/6/2017/02/Grafana-new-obelix-appengine-asg-1024x601.png" alt="Grafana new obelix appengine asg" width="891" height="523"></p>
<p>The setup described above ran smoothly for a few months, until one day the default YUM repository mirrors used by the AWS instances went down. Deployments failed and the cluster couldn't scale up as expected. To prevent such incidents in the future, we now host the YUM repositories ourselves in an S3 bucket.</p>
<p>That pretty much sums up our little adventure with auto scaling at Exotel. 
Our key takeaways:</p>
<ul>
<li>Traffic-based scaling</li>
<li>A whopping 79% reduction in AWS costs by leveraging Spot instances</li>
<li>Consistency in environment and code</li>
<li>Rolling updates and quick reverts</li>
<li>No human errors</li>
<li>Centralized logging: (just about) no one needs to access production machines</li>
</ul>
<p>Don't like this approach, or have ideas to make it better? Come join us. We're <a href="/careers/">hiring</a>!</p>
</body></html>