10 Ways to Optimize AWS Costs


Users of Amazon Web Services are likely familiar with some AWS cost optimization best practices, but probably not all of them.

It’s not unusual to read headlines claiming that businesses are overspending in the cloud – that a double-digit percentage of spend is wasted on unused services, or that huge numbers of companies provision resources with more capacity than they need.

The most common “solutions” offered for these problems are rightsizing, scheduling, and purchasing Reserved Instances or Savings Plans for predictable workloads. Most AWS users are familiar with these practices, but they are not always the most effective ones. Sometimes they deliver only a fraction of the savings claimed for them, while plenty of other, often overlooked, AWS cost optimization best practices can save a lot more.

Here’s a compiled list of ten best practices, each with a proposed approach, for optimizing AWS costs.

The 10 AWS cost optimization best practices 

Rightsizing EC2 Instances: The purpose of rightsizing is to match instance sizes to their workloads. Unfortunately, it doesn’t work quite as neatly as it sounds, because instance capacity doubles with each step up in size and halves with each step down. Consequently, rightsizing is only a worthwhile best practice for instances whose peak utilization does not exceed roughly 45%, since halving the capacity would then still leave headroom at peak. It’s still worth analyzing utilization metrics to find opportunities to move workloads to families other than “General Purpose” that better suit their needs. A utilization-scanning sketch follows below.
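
As an illustration, utilization can be pulled from CloudWatch to flag instances whose peak CPU stays below that ~45% mark. This is a minimal boto3 sketch; the 14-day lookback and the 45% threshold are assumptions for illustration, not AWS recommendations, and memory metrics (which CloudWatch does not collect by default) should also be reviewed before resizing.

# Sketch: flag running EC2 instances whose peak CPU over the last 14 days stayed below 45%.
# Assumes default AWS credentials/region; the lookback window and threshold are illustrative.
from datetime import datetime, timedelta, timezone
import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

end = datetime.now(timezone.utc)
start = end - timedelta(days=14)

paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate(Filters=[{"Name": "instance-state-name", "Values": ["running"]}]):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            instance_id = instance["InstanceId"]
            stats = cloudwatch.get_metric_statistics(
                Namespace="AWS/EC2",
                MetricName="CPUUtilization",
                Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
                StartTime=start,
                EndTime=end,
                Period=3600,          # hourly data points
                Statistics=["Maximum"],
            )
            datapoints = stats["Datapoints"]
            if datapoints and max(dp["Maximum"] for dp in datapoints) < 45.0:
                print(f"{instance_id} ({instance['InstanceType']}) peaked below 45% CPU - rightsizing candidate")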

Scheduling on/off times: It’s worth scheduling on/off times for non-production instances such as those used for development, staging, testing, and QA; applying an “on” schedule of 8:00 a.m. to 8:00 p.m., Monday to Friday, saves around 65% of the cost of running those instances around the clock. However, it’s possible to save a lot more, particularly if development teams work irregular patterns or hours. More aggressive schedules can be applied by analyzing utilization metrics to determine when the instances are most frequently used, or by applying an always-stopped schedule that is interrupted only when access to the instances is required. It’s worth pointing out that while instances are scheduled off, you are still charged for EBS volumes and other components attached to them. A simple tag-driven scheduling sketch follows below.
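
One lightweight way to enforce such a schedule is a small script, run for example from a cron job or a scheduled Lambda function, that stops or starts instances carrying a specific tag. The sketch below assumes a hypothetical tag key of Schedule with the value office-hours; the tag name and the triggering mechanism are assumptions, not AWS conventions.

# Sketch: stop or start all instances tagged Schedule=office-hours.
# The tag key/value are hypothetical; invoke with action="stop" in the evening
# and action="start" in the morning (e.g. from two scheduled jobs).
import boto3

ec2 = boto3.client("ec2")

def set_schedule_state(action: str) -> None:
    # Only touch instances currently in the opposite state.
    state = "running" if action == "stop" else "stopped"
    response = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Schedule", "Values": ["office-hours"]},
            {"Name": "instance-state-name", "Values": [state]},
        ]
    )
    instance_ids = [
        i["InstanceId"]
        for r in response["Reservations"]
        for i in r["Instances"]
    ]
    if not instance_ids:
        return
    if action == "stop":
        ec2.stop_instances(InstanceIds=instance_ids)
    else:
        ec2.start_instances(InstanceIds=instance_ids)
    print(f"{action}: {instance_ids}")

if __name__ == "__main__":
    set_schedule_state("stop")   # e.g. scheduled for 8:00 p.m., Monday to Friday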

Purchasing Reserved Instances and Savings Plans: Purchasing Reserved Instances is an easy way to reduce AWS costs. It can just as easily increase AWS costs if the Reserved Instance is utilized less than expected. Choosing the wrong instance type, or locking into a “standard” Reserved Instance while on-demand prices fall over the term of the reservation, can cost more than the reservation “saves.” Reserved Instances should therefore be managed actively as an AWS cost optimization best practice: weigh up all the variables before making a purchase, then monitor utilization throughout the reservation’s lifecycle. The break-even arithmetic is sketched below.
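
Part of “weighing up all the variables” is simple break-even arithmetic: how many hours per month an instance must actually run before the reservation beats paying on-demand. The hourly rates in this sketch are placeholders, not current AWS prices.

# Sketch: break-even utilization for a 1-year, no-upfront Reserved Instance.
# The hourly rates below are placeholders; substitute real prices from the
# AWS pricing pages before relying on the result.
HOURS_PER_MONTH = 730

on_demand_hourly = 0.0960      # placeholder on-demand rate ($/hour)
reserved_hourly = 0.0600       # placeholder effective RI rate ($/hour, billed every hour)

# A no-upfront RI is billed for every hour of the term whether or not the
# instance runs, so the reservation only pays off above this usage level:
break_even_hours = HOURS_PER_MONTH * reserved_hourly / on_demand_hourly
print(f"Reservation breaks even at {break_even_hours:.0f} hours/month "
      f"({break_even_hours / HOURS_PER_MONTH:.0%} utilization)")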

Delete unattached EBS volumes: When an EC2 instance is launched, an EBS volume is attached to it to act as its local block storage. When the instance is terminated, the EBS volume is deleted only if the “delete on termination” box was checked at launch. Otherwise, the volume lives on and keeps contributing to the monthly AWS bill. Depending on how long the business has been operating in the cloud and how many instances were launched without that box checked, there could be thousands of unattached EBS volumes in the AWS Cloud. Deleting them is certainly one of the AWS cost optimization best practices, even for businesses relatively new to AWS; a listing sketch follows below.
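
Unattached volumes are easy to enumerate because their status is “available” rather than “in-use.” A minimal boto3 sketch (report-only, with the delete call left commented out as a deliberate manual step):

# Sketch: list EBS volumes that are not attached to any instance.
# Volumes report status "available" when nothing is attached to them.
import boto3

ec2 = boto3.client("ec2")
paginator = ec2.get_paginator("describe_volumes")

for page in paginator.paginate(Filters=[{"Name": "status", "Values": ["available"]}]):
    for volume in page["Volumes"]:
        print(f"{volume['VolumeId']}: {volume['Size']} GiB, created {volume['CreateTime']:%Y-%m-%d}")
        # After reviewing, the volume can be removed with:
        # ec2.delete_volume(VolumeId=volume["VolumeId"])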

Delete obsolete snapshots: Snapshots are an efficient way to back up data on an EBS volume to S3 because each snapshot stores only the data that has changed since the previous one, avoiding duplication. Even so, every snapshot contains all the information needed to restore the data (as of the moment it was taken) to a new EBS volume. Usually only the most recent snapshot is needed to restore data if something goes wrong, although it’s advisable to keep snapshots for a couple of weeks, depending on how frequently they are taken. Individual snapshots don’t cost much, but thousands of dollars can be saved by deleting those no longer needed; a sketch for finding old snapshots follows below.
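
A sketch for surfacing deletion candidates; the 14-day retention window is an assumption to match the “couple of weeks” guideline above, and the script only reports, it does not delete.

# Sketch: list snapshots owned by this account that are older than 14 days.
# The retention window is illustrative; confirm nothing (AMIs, restore
# procedures) still depends on a snapshot before deleting it.
from datetime import datetime, timedelta, timezone
import boto3

ec2 = boto3.client("ec2")
cutoff = datetime.now(timezone.utc) - timedelta(days=14)

paginator = ec2.get_paginator("describe_snapshots")
for page in paginator.paginate(OwnerIds=["self"]):
    for snapshot in page["Snapshots"]:
        if snapshot["StartTime"] < cutoff:
            print(f"{snapshot['SnapshotId']} from {snapshot['StartTime']:%Y-%m-%d} "
                  f"(volume {snapshot['VolumeId']})")
            # ec2.delete_snapshot(SnapshotId=snapshot["SnapshotId"])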

Release unattached Elastic IP addresses: Elastic IP addresses are public IPv4 addresses from Amazon’s pool that are allocated to an instance so it can be reached from the internet. Accounts are limited to five Elastic IP addresses per Region by default, because Amazon doesn’t have an unlimited pool of IPv4 addresses. They are free of charge while attached to a running instance. Exceptions to the free-of-charge rule occur if an IP address is remapped more than 100 times a month, or if a business hangs on to an unattached Elastic IP address after terminating the instance it was once attached to. The charge for an unattached Elastic IP address may only be $0.01 per hour, but if fifty AWS accounts each hold back two unattached addresses, that amounts to $8,760 of waste per year. A sketch for finding them follows below.
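
Finding these addresses is straightforward: an Elastic IP with no association is not attached to anything. A minimal sketch (report-only, with the release call left commented out):

# Sketch: list Elastic IP addresses that are allocated but not associated
# with any instance or network interface.
import boto3

ec2 = boto3.client("ec2")

for address in ec2.describe_addresses()["Addresses"]:
    if "AssociationId" not in address:
        print(f"Unattached Elastic IP: {address['PublicIp']}")
        # ec2.release_address(AllocationId=address["AllocationId"])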

Upgrade instances to the latest generation: Because Amazon Web Services offers such a broad array of products and services, there are frequent announcements about products being upgraded or features being introduced to support specific services. For AWS cost optimization, the announcements to look out for are those relating to latest-generation instances. When AWS releases a new generation of instances, they typically offer better performance and functionality than their predecessors. That means either upgrading existing instances to the latest generation, or downsizing instances with borderline utilization metrics to get the same level of performance at a lower cost. A sketch for spotting older-generation instances follows below.
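
One way to spot upgrade candidates is to compare each instance’s type family against a map of previous-generation families. The mapping in this sketch is an illustrative sample only, not an authoritative AWS list.

# Sketch: flag instances that belong to older instance families.
# The family mapping is an illustrative sample; check the AWS "previous
# generation instances" page for the families relevant to your account.
import boto3

OLDER_FAMILIES = {"m3": "m5", "m4": "m5", "c3": "c5", "c4": "c5", "r3": "r5", "r4": "r5", "t2": "t3"}

ec2 = boto3.client("ec2")
paginator = ec2.get_paginator("describe_instances")

for page in paginator.paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            family = instance["InstanceType"].split(".")[0]
            if family in OLDER_FAMILIES:
                print(f"{instance['InstanceId']} is {instance['InstanceType']}; "
                      f"consider the {OLDER_FAMILIES[family]} family")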

Purchase reserved nodes for Redshift and ElastiCache Services: One AWS announcement detailed how the discount program for Amazon Redshift and ElastiCache has changed. Previously, businesses could purchase advance-payment “Heavy Utilization” discounts, but these have now switched to (almost) mirror Reserved Instance purchases for EC2 and RDS. Reserved nodes can be purchased for Redshift and for ElastiCache (both the Redis and Memcached engines) on 1-year or 3-year terms, with the option of paying the full amount upfront, partially upfront, or monthly. One important note: to take advantage of reservations on the ElastiCache service, nodes must first be upgraded to the latest generation. A sketch for reviewing the available offerings follows below.
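
Before committing to a purchase, the available offerings (node type, term, payment option) can be reviewed programmatically. This is a minimal sketch for Redshift; ElastiCache exposes an equivalent DescribeReservedCacheNodesOfferings call.

# Sketch: list Redshift reserved node offerings so terms and payment
# options can be compared before committing to a purchase.
import boto3

redshift = boto3.client("redshift")

offerings = redshift.describe_reserved_node_offerings()["ReservedNodeOfferings"]
for offering in offerings:
    years = offering["Duration"] // (365 * 24 * 3600)   # Duration is returned in seconds
    print(f"{offering['NodeType']}: {years}-year, {offering['OfferingType']}, "
          f"upfront ${offering['FixedPrice']:.2f}")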

Terminate zombie assets: The term “zombie assets” describes any unused asset that contributes to the cost of operating in the AWS Cloud; many typical zombie assets have already been mentioned (unattached EBS volumes, obsolete snapshots, and so on). Other assets in this category include components left behind when an instance failed to launch and unused Elastic Load Balancers. Businesses often struggle to implement this best practice because some unused assets are hard to find; unattached IP addresses, for example, are notoriously difficult to locate through AWS Systems Manager or the AWS Console. A sketch for finding idle Classic Load Balancers follows below.
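
As one example of hunting for zombies, a Classic Load Balancer with no registered instances is usually being billed while doing nothing. The sketch below covers Classic ELBs only; Application and Network Load Balancers would need a separate check through the elbv2 API and their target groups.

# Sketch: list Classic Load Balancers that have no instances registered.
# ALBs/NLBs require a similar check via the elbv2 client instead.
import boto3

elb = boto3.client("elb")

for lb in elb.describe_load_balancers()["LoadBalancerDescriptions"]:
    if not lb["Instances"]:
        print(f"Classic ELB with no registered instances: {lb['LoadBalancerName']}")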

Move infrequently-accessed data to lower-cost tiers: Amazon Web Services currently offers six tiers of S3 storage at different price points. Which tier is most suitable for a given dataset depends on factors such as how often the data is accessed (retrieval fees apply to the lower tiers) and how quickly the business would need to retrieve it in a disaster (retrieval from the lowest tiers can take hours). The savings from storing infrequently accessed, non-critical data in a lower-cost tier can be substantial: storing up to 50 TB in a standard S3 bucket costs $0.023 per GB per month (US East Region), whereas the same data in S3 Glacier Deep Archive costs $0.00099 per GB per month. A lifecycle-rule sketch follows the list of tiers below.

The six tiers of storage are:
- S3 Standard
- S3 Intelligent-Tiering
- S3 Standard-Infrequent Access
- S3 One Zone-Infrequent Access
- S3 Glacier
- S3 Glacier Deep Archive
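
The usual way to move data down the tiers automatically is an S3 lifecycle rule. The sketch below transitions objects under a hypothetical logs/ prefix in a hypothetical bucket to Glacier Deep Archive after 90 days; the bucket name, prefix, and 90-day threshold are all assumptions.

# Sketch: add a lifecycle rule that moves objects under logs/ to the
# Glacier Deep Archive storage class 90 days after creation.
# Bucket name, prefix, and the 90-day threshold are placeholders.
# Note: this call replaces any existing lifecycle configuration on the bucket.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",                 # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [
                    {"Days": 90, "StorageClass": "DEEP_ARCHIVE"}
                ],
            }
        ]
    },
)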

AWS cost optimization is an ongoing process

Applying AWS cost optimization best practices is an ongoing process. The AWS environment needs to be monitored continuously to identify assets that are under-utilized (or not utilized at all) and opportunities to reduce costs by deleting, terminating, or releasing zombie assets. It’s also important to stay on top of Reserved Instances to ensure they are being fully utilized.

 

Article Credits: Adit Modi

Adit is a Cloud, DevOps & Big Data Evangelist | 4x AWS Certified | 3x OCI Certified | 3x Azure Certified | AWS Community Builder | AWS Educate Cloud Ambassador.
