Cloud Life Team

Guide to RDS Cost Optimization


In this post, we’ll review the different points to take into consideration when optimizing your RDS resources for cost. The goal is to have the most cost-effective infrastructure without sacrificing performance. We will go over architecture decisions, instance type selection, storage options, scaling options, and finally reserved instance purchases.

Architecture Decisions

There are several points to take into consideration when it comes to architecting a cost-optimized RDS implementation.

Engine Type Selection

First and foremost, RDS engine selection plays into cost. Without naming names, the majority of commercial enterprise databases are quite a bit more expensive than open-source databases. If you have the opportunity to select your database engine, it's generally more cost-effective to choose an open-source engine such as PostgreSQL or MySQL.

Pick the correct database for the job

This is an entire blog post on its own. The idea here is that you should do the homework to make sure you are using the appropriate database for the job at hand. That could mean a NoSQL solution like DynamoDB, or a MongoDB-compatible database like DocumentDB. Typically, we see unnecessary costs incurred when customers try to use the wrong database for the job.

Optimize your usage

Along similar lines to the previous topic, we often see poor query construction adversely affect the performance of the database. This in turn usually means the database has to be scaled vertically to accommodate the increased load. AWS tools like RDS Performance Insights and Amazon CodeGuru can help you understand which queries and lines of code are hurting performance.
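To make the query-construction point concrete, here's a small sketch of one of the most common offenders, the N+1 query pattern. It uses an in-memory SQLite database purely as a stand-in for an RDS engine, and the schema and names are made up for illustration:

```python
import sqlite3

# In-memory SQLite stands in for an RDS engine; the schema is hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (1, 1, 10.0), (2, 1, 5.0), (3, 2, 7.5);
""")

# Anti-pattern: one query per customer (N+1 round trips to the database).
def totals_n_plus_one(conn):
    totals = {}
    for (cid, name) in conn.execute("SELECT id, name FROM customers"):
        row = conn.execute(
            "SELECT COALESCE(SUM(total), 0) FROM orders WHERE customer_id = ?",
            (cid,),
        ).fetchone()
        totals[name] = row[0]
    return totals

# Better: one joined, aggregated query does the same work server-side.
def totals_joined(conn):
    rows = conn.execute("""
        SELECT c.name, COALESCE(SUM(o.total), 0)
        FROM customers c LEFT JOIN orders o ON o.customer_id = c.id
        GROUP BY c.name
    """)
    return dict(rows)
```

Both functions return the same result, but the first issues one query per customer, which is exactly the kind of load that pushes teams into unnecessary vertical scaling.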

Query Caching

Caching in front of a database in order to help increase performance and reduce costs is frequently overlooked. There are three main types of caching.

  1. Database Integrated Caches: Some databases, such as Amazon Aurora, have integrated caching. This caching happens in the background, which means you do not need to update your application to take advantage of it. When data in the underlying database is updated, the cache is automatically updated as well.

  2. Local Caches: Also known as application-level caching, this requires the application itself to handle the caching. The result is a very performant application, but the downside is that there are no economies of scale: if an application runs several instances, each one caches independently.

  3. Remote Caches: Using an in-memory store such as Redis or Memcached allows for sub-millisecond response times. Additionally, you are able to leverage the shared nature of this implementation: every application instance reads from the same cache. It's sort of a happy medium between database-integrated and local caching.
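A minimal sketch of the local-cache approach, using Python's standard-library `functools.lru_cache` (the "expensive" lookup here is a made-up stand-in for a database query):

```python
import functools

CALLS = {"count": 0}

# Hypothetical expensive lookup standing in for a database query.
@functools.lru_cache(maxsize=128)
def get_customer_name(customer_id: int) -> str:
    CALLS["count"] += 1            # each real hit would be a database round trip
    return f"customer-{customer_id}"

get_customer_name(42)   # miss: goes to the "database"
get_customer_name(42)   # hit: served from the process-local cache
get_customer_name(7)    # miss: new key, another real lookup
```

Three calls, but only two real lookups. Note the economies-of-scale caveat from point 2: this cache lives inside one process, so ten application instances would each warm their own copy, which is exactly what a remote cache avoids.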

There's plenty more information on caching in the AWS caching documentation.

Instance Type Selection

Compared to the previous section, this is a bit more straightforward. There are two main instance classes to choose from: General Purpose and Memory Optimized. More often than not we end up with a general-purpose instance, but there are some use cases where the memory-optimized instances make more sense. The best way to understand which is best suited for your use case is to actually test them out in a development environment. Proving out infrastructure selections is an often-overlooked use of the development environment. Obviously, running a large production-sized instance in a development environment is ill-advised, but understanding how an application can affect database performance metrics is important.

Once you understand the proper instance class, we will typically pick a Graviton2-based instance type. We've found that the price performance on these CPUs has been outstanding in most cases.
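To make "price performance" concrete, here's a back-of-the-envelope comparison. The hourly rates and relative-performance figures below are illustrative placeholders, not current AWS prices; check the RDS pricing page for real numbers:

```python
# Illustrative price/performance comparison; the rates and perf figures
# are placeholders, NOT current AWS prices.
instances = {
    "db.m5.large":  {"hourly": 0.171, "relative_perf": 1.00},
    "db.m6g.large": {"hourly": 0.152, "relative_perf": 1.10},  # Graviton2
}

def cost_per_perf_unit(spec):
    return spec["hourly"] / spec["relative_perf"]

x86 = cost_per_perf_unit(instances["db.m5.large"])
arm = cost_per_perf_unit(instances["db.m6g.large"])
savings_pct = (1 - arm / x86) * 100
print(f"Graviton2 cost per unit of performance is ~{savings_pct:.0f}% lower")
```

The point is that a lower hourly rate and higher throughput compound: even a modest per-hour discount becomes a meaningful per-unit-of-work saving.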

Storage Options

RDS is backed by Amazon EBS. There are three storage types to choose from: General Purpose SSD, Provisioned IOPS SSD, and Magnetic. General Purpose SSD is a good default choice and comes in two generations, gp2 and gp3; gp3 typically offers better performance and is a good choice for most workloads. Use Provisioned IOPS for I/O-intensive workloads. Magnetic shouldn't be used except in cases where you need backward compatibility.
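A rough sketch of why the choice matters for cost. The per-GB and per-IOPS monthly rates below are illustrative placeholders, not current AWS prices, but the structure of the math is the point: gp3 includes a baseline of IOPS, while Provisioned IOPS bills every IOPS you provision:

```python
# Rough monthly storage cost sketch; rates are placeholders, NOT current AWS prices.
def gp3_monthly(gb, iops, gb_rate=0.115, free_iops=3000, iops_rate=0.02):
    extra_iops = max(0, iops - free_iops)   # gp3 includes a baseline IOPS allotment
    return gb * gb_rate + extra_iops * iops_rate

def io1_monthly(gb, iops, gb_rate=0.125, iops_rate=0.10):
    return gb * gb_rate + iops * iops_rate  # every provisioned IOPS is billed

gb, iops = 400, 3000
print(f"gp3: ${gp3_monthly(gb, iops):.2f}/mo   io1: ${io1_monthly(gb, iops):.2f}/mo")
```

For workloads whose IOPS needs fit inside the gp3 baseline, paying for Provisioned IOPS buys you nothing; it earns its cost only when you genuinely need sustained, guaranteed high I/O.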

There are detailed explanations in the RDS User Guide.

Scaling Options

As with all the other aspects of RDS and Aurora, there are plenty of options to consider when it comes to properly scaling RDS. First and foremost, we see a lot of customers that have provisioned a reader endpoint, yet the application doesn't utilize it.
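Actually using the reader endpoint can be a very small change. The sketch below shows the simplest possible routing rule; the endpoint hostnames are made-up placeholders, and a real implementation would also need to keep transactions and locking reads (e.g. `SELECT ... FOR UPDATE`) on the writer:

```python
# Minimal read/write routing sketch; the hostnames are hypothetical placeholders.
WRITER = "mycluster.cluster-abc123.us-east-1.rds.amazonaws.com"
READER = "mycluster.cluster-ro-abc123.us-east-1.rds.amazonaws.com"

def pick_endpoint(sql: str) -> str:
    """Send plain read statements to the reader endpoint, everything else to the writer."""
    first_word = sql.lstrip().split(None, 1)[0].upper()
    return READER if first_word == "SELECT" else WRITER
```

Even this naive rule offloads read traffic that would otherwise force the writer instance to be scaled up.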

Aurora has more options for scaling in general and has plenty of options that will allow you to fully optimize your workloads. Some of the options available to you are:

  • Scaling read replicas to allow for a higher read volume

  • Aurora Serverless, which automatically scales database capacity up and down to match demand

  • RDS Proxy, which pools connections and reduces CPU and memory usage on the database instance
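The idea behind connection pooling, which RDS Proxy provides as a managed service, can be sketched in a few lines: many units of work share a small, fixed set of database connections instead of each opening its own. This toy version uses dummy objects in place of real connections:

```python
import queue

# Toy connection pool illustrating the idea behind RDS Proxy.
class ConnectionPool:
    def __init__(self, factory, size):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(factory())   # open the fixed set of connections up front

    def acquire(self):
        return self._pool.get()         # blocks when all connections are busy

    def release(self, conn):
        self._pool.put(conn)

opened = []
pool = ConnectionPool(lambda: opened.append(object()) or opened[-1], size=2)

for _ in range(10):                     # 10 units of work...
    conn = pool.acquire()
    pool.release(conn)
```

Ten units of work, but only two connections ever opened. On a real database, each open connection costs memory and CPU, which is why pooling lets you run a smaller instance.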

Reserved Instances

Purchasing reserved instances is the easiest way to save money, with the discount depending on the commitment (one or three years) you're willing to make. However, this should be the last step, taken after you have optimized the other aspects first; otherwise you risk committing to capacity you don't actually need. Beyond that, it's pretty straightforward and definitely should be done.
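A back-of-the-envelope comparison of on-demand versus reserved pricing. The hourly rates below are hypothetical placeholders, not current AWS prices; plug in real numbers from the RDS pricing page for your instance class and region:

```python
# Illustrative reserved instance comparison; rates are placeholders,
# NOT current AWS prices.
HOURS_PER_YEAR = 8760
on_demand_hourly = 0.171    # hypothetical on-demand rate
reserved_hourly  = 0.109    # hypothetical 1-year, no-upfront effective rate

on_demand_annual = on_demand_hourly * HOURS_PER_YEAR
reserved_annual  = reserved_hourly * HOURS_PER_YEAR
savings_pct = (1 - reserved_annual / on_demand_annual) * 100
print(f"~{savings_pct:.0f}% annual savings for a 1-year commitment")
```

Note that the math only works in your favor if the instance actually runs for the committed term, which is why right-sizing comes before reserving.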

💡 Key takeaways and other tidbits:

1. It’s important to take into account all of the different aspects of your RDS architecture to make sure you are optimizing from the beginning. 

2. Cache your queries to reduce the impact of repetitive and expensive queries.

3. Use Single-AZ deployments for non-production workloads.

4. Consider using Aurora Serverless to save on costs during periods of low usage.

5. Consider using Amazon Aurora instead of traditional RDS engines to save on storage costs.

6. Consider using Amazon RDS On-Demand Backup instead of Continuous Backup if you do not need point-in-time recovery.

7. Use Amazon RDS Proxy to pool connections to the database, which can save on CPU and memory resources.

8. Use Graviton2 unless you have specific reasons not to. 
