From Manual to Dynamic: The Evolution of Database Scaling with Aurora Serverless

Why Serverless Database?

Serverless databases, such as Aurora Serverless, are increasingly favored for their capability to dynamically adapt capacity in accordance with real-time workloads. This eliminates the necessity for manual provisioning and management of database resources, resulting in cost-efficiency and streamlined operations. The inherent ability to seamlessly scale up or down in response to demand remains a pivotal advantage of serverless databases.

In this article, we will explore the dynamic world of serverless databases, with a specific focus on Aurora Serverless. These databases have been gaining immense popularity for their unique ability to adapt capacity in real-time, reducing the need for manual resource management and significantly streamlining operations. We will delve into the advantages of serverless databases and help you understand when it makes sense to go completely serverless or opt for a mixed configuration to strike the right balance between cost efficiency and performance. So, let’s dive into the intricacies of serverless database deployment and discover the best strategies for your unique database needs.

When to go completely Serverless?

Different firms have unique usage patterns influenced by factors like their target audience and operational hours. However, adopting a purely serverless approach may not be ideal for all. Since serverless instances tend to be approximately 1.5 times more expensive than regular instances, it’s often a prudent strategy to allocate the baseline load to standalone instances and leverage serverless resources only when scaling is truly needed.

Application Scenario 1: For an application that needs to maintain a 15% baseline load throughout the day with occasional surges, a full serverless approach may not offer the most cost-effective solution.

Exhibit 1

Application Scenario 2: On the other hand, consider an application that experiences a fluctuating load with no predictable pattern and requires varying resources throughout the day. In such cases, the serverless approach is often the most cost-effective choice.

Exhibit 2

Exhibit 1 & 2: tables demonstrating ACU usage and cost in a serverless database throughout the day for an application

Exhibit 3

Exhibit 4

Exhibit 3 & 4: tables demonstrating ACU usage and cost in a serverless and a standalone instance-based database throughout the day

Exhibit 5

Exhibit 6

Exhibit 5 & 6: figures demonstrating ACU usage in a serverless and a standalone instance-based database throughout the day

Analyzing the cost data for Aurora Serverless and standalone instances in the two application scenarios above reveals divergent expense profiles. Application 1 incurs greater costs with Aurora Serverless, while for application 2, Aurora Serverless is the more cost-effective option. These differences arise from the distinct usage patterns each application encounters, and they underscore that there is no universal, one-size-fits-all configuration for database deployment. Instead, the choice between Aurora Serverless and standalone instances demands a careful assessment of what aligns best with the specific needs and usage patterns of your application.
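
One way to reason about this break-even point is to use the rough 1.5x price premium mentioned earlier and compare hour-by-hour capacity needs. The sketch below does exactly that; the per-capacity prices and hourly loads are made-up placeholders, not AWS pricing, so treat it as a template for plugging in your own monitoring data.

```python
# Rough break-even sketch based on the ~1.5x serverless price premium mentioned above.
# All numbers are illustrative placeholders, not AWS pricing.
SERVERLESS_PRICE_PER_CAPACITY_HOUR = 0.12          # assumed cost per ACU-hour
STANDALONE_PRICE_PER_CAPACITY_HOUR = 0.12 / 1.5    # same capacity on a provisioned instance

def daily_cost_serverless(hourly_acus):
    """Serverless bills for the capacity actually consumed each hour."""
    return sum(hourly_acus) * SERVERLESS_PRICE_PER_CAPACITY_HOUR

def daily_cost_standalone(hourly_acus):
    """A standalone instance must be provisioned for the peak, 24 hours a day."""
    return max(hourly_acus) * 24 * STANDALONE_PRICE_PER_CAPACITY_HOUR

# Scenario 1-style load: steady baseline with mild surges.
steady = [4] * 22 + [6] * 2
# Scenario 2-style load: mostly idle with sharp, unpredictable spikes.
spiky = [1] * 20 + [16] * 4

for name, load in [("steady", steady), ("spiky", spiky)]:
    print(f"{name}: serverless ${daily_cost_serverless(load):.2f}/day, "
          f"standalone ${daily_cost_standalone(load):.2f}/day")
```

With these made-up figures, the steady workload comes out cheaper on a standalone instance while the spiky one clearly favors serverless, mirroring the two scenarios above.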

The Mixed Configuration

Suppose a food delivery company's database experiences two usage peaks: one from 12 p.m. to 1 p.m. and another from 8 p.m. to 10 p.m. These are the time slots when the majority of its customers are actively using the application.

Exhibit 7

Exhibit 7: figure demonstrating ACU usage in a serverless and a standalone instance-based database throughout the day

In this scenario, it’s prudent to handle the constant baseline load with more cost-effective standalone instances and utilize serverless instances exclusively when you require scaling.

How to build a Mixed Configuration

A mixed-configuration RDS database is an Aurora cluster that contains both serverless and standalone (provisioned) instances working together. You can build one through the console steps below, or script it as shown in the sketch that follows the steps.

First, create an RDS Instance

• Open the Amazon RDS console and choose Create database.
• For Engine type, choose Aurora (MySQL Compatible) or Aurora (PostgreSQL Compatible).
• For Capacity settings, select the minimum and maximum ACUs based on your requirements.

• Choose Create database to create your Aurora cluster with an Aurora Serverless v2 DB instance as the writer instance, also known as the primary DB instance.
• Now, on your database homepage, choose `Add reader`.

• Now, on the Instance configuration tab, choose Memory optimized and select `Add reader` at the bottom.
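
If you prefer to script this setup rather than clicking through the console, the sketch below shows one way to do it with boto3. The cluster identifier, engine version, instance class, capacity range, and region are illustrative placeholders, not recommendations.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # example region

# 1. Create the Aurora cluster with a Serverless v2 capacity range (example values).
rds.create_db_cluster(
    DBClusterIdentifier="mixed-demo-cluster",             # placeholder name
    Engine="aurora-mysql",
    EngineVersion="8.0.mysql_aurora.3.04.0",              # example Serverless v2-compatible version
    MasterUsername="admin",
    ManageMasterUserPassword=True,                        # let RDS keep the password in Secrets Manager
    ServerlessV2ScalingConfiguration={"MinCapacity": 0.5, "MaxCapacity": 16},
)

# 2. The first instance added becomes the writer: make it a Serverless v2 instance.
rds.create_db_instance(
    DBInstanceIdentifier="mixed-demo-serverless-writer",
    DBClusterIdentifier="mixed-demo-cluster",
    Engine="aurora-mysql",
    DBInstanceClass="db.serverless",
    PromotionTier=2,                                      # failover tier (lower number = higher priority)
)

# 3. Add a provisioned (standalone) reader to carry the baseline load.
rds.create_db_instance(
    DBInstanceIdentifier="mixed-demo-provisioned-reader",
    DBClusterIdentifier="mixed-demo-cluster",
    Engine="aurora-mysql",
    DBInstanceClass="db.r6g.large",                       # example memory-optimized class
    PromotionTier=1,                                      # promote this instance first on failover
)
```

Note that the Serverless v2 capacity range is set once at the cluster level and applies to every db.serverless instance in the cluster.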

Now, by defining a higher priority for the standalone instance, you route most of your traffic to the cheaper instance and only push the serverless instances to their full capacity when required, thus achieving maximum throughput.

Apart from cost, this configuration also helps with load handling: the standalone instance absorbs the load while the serverless instances consume resources as they scale out.

Choosing the right parameters for your Database

Configuring ACU Limits for Aurora Serverless

Aurora Serverless offers the flexibility to set both minimum and maximum ACUs (Aurora Capacity Units) for your database, and making the right choices for these limits is vital to your database’s performance:
1. Assess Baseline and Peak Usage: Start by assessing your application’s typical workload and peak demands, including any add-ons and services, to determine the necessary baseline and maximum capacity. This information can be gathered by monitoring your existing RDS instance.
2. Performance Testing: Conduct performance testing to pinpoint the minimum capacity required to meet your performance expectations. This step ensures smooth application operation even during periods of low demand.
3. Scalability Considerations: Equally important is selecting the maximum ACUs. Evaluate your application’s scalability needs and opt for a maximum ACU limit that can comfortably handle peak traffic without incurring unnecessary cost overhead.
4. Real-time Adjustment: Continuously monitor your database’s performance through real-time metrics and adapt the maximum ACUs as needed, factoring in the time the database needs to adjust its capacity and leaving room to accommodate additional load when necessary (see the sketch after this list).
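
As a concrete starting point for that adjustment loop, the sketch below pulls recent Serverless v2 capacity usage from CloudWatch and then widens the ACU range with some headroom. The cluster identifier, look-back window, headroom factor, and floor values are all illustrative assumptions.

```python
import boto3
from datetime import datetime, timedelta, timezone

REGION = "us-east-1"                 # example region
CLUSTER_ID = "mixed-demo-cluster"    # placeholder cluster identifier

cloudwatch = boto3.client("cloudwatch", region_name=REGION)
rds = boto3.client("rds", region_name=REGION)

# Look at the last 24 hours of ServerlessDatabaseCapacity (ACUs actually consumed).
now = datetime.now(timezone.utc)
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/RDS",
    MetricName="ServerlessDatabaseCapacity",
    Dimensions=[{"Name": "DBClusterIdentifier", "Value": CLUSTER_ID}],
    StartTime=now - timedelta(hours=24),
    EndTime=now,
    Period=3600,
    Statistics=["Maximum"],
)
peak_acu = max((p["Maximum"] for p in stats["Datapoints"]), default=0.0)

# Give the observed peak ~50% headroom, rounded to the 0.5-ACU step, with a floor of 8 ACUs (example policy).
new_max = max(8.0, round(peak_acu * 1.5 * 2) / 2)

rds.modify_db_cluster(
    DBClusterIdentifier=CLUSTER_ID,
    ServerlessV2ScalingConfiguration={"MinCapacity": 1.0, "MaxCapacity": new_max},
    ApplyImmediately=True,
)
print(f"Observed 24h peak: {peak_acu} ACUs; new maximum set to {new_max} ACUs")
```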

Scaling Strategy

Vertical Scaling: Vertical scaling enhances database performance through resource allocation adjustments, either increasing or decreasing Aurora Capacity Units (ACUs).

Advantages:

Speed: Rapid implementation without the need for creating new instances.
Simplicity: Straightforward management of configuration settings.

Considerations:

Resource Limits: Vertical scaling is capped by the maximum capacity a single instance can provide in its current region.

Horizontal Scaling: Horizontal scaling involves adding read-only Aurora Replicas to your Aurora cluster to distribute read traffic across multiple instances.

Advantages:

High Availability: Enhances fault tolerance and availability.
Scalability: Enables easier expansion and scalability across multiple regions.

Considerations:

Latency: Potential introduction of additional latency due to data replication.

The decision between vertical and horizontal scaling hinges on your application’s needs, traffic behavior, and budget considerations. Vertical scaling offers a swift solution for adapting to predictable workloads and abrupt surges in traffic. Meanwhile, horizontal scaling proves cost-effective for handling steady read-heavy workloads and ensuring high availability. Opting for a hybrid approach can provide an optimal balance between performance and cost-efficiency for your Aurora Serverless database.
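
As a rough illustration of that hybrid decision, the sketch below reads two cluster-level CloudWatch metrics and suggests a scaling direction. The thresholds are arbitrary examples, and the metric names and dimensions should be verified against the CloudWatch metrics your cluster actually publishes.

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # example region
CLUSTER_ID = "mixed-demo-cluster"                                 # placeholder

def avg_metric(name: str, hours: int = 1) -> float:
    """Average of a cluster-level AWS/RDS metric over the last `hours` hours."""
    now = datetime.now(timezone.utc)
    points = cloudwatch.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName=name,
        Dimensions=[{"Name": "DBClusterIdentifier", "Value": CLUSTER_ID}],
        StartTime=now - timedelta(hours=hours),
        EndTime=now,
        Period=300,
        Statistics=["Average"],
    )["Datapoints"]
    return sum(p["Average"] for p in points) / len(points) if points else 0.0

acu_utilization = avg_metric("ACUUtilization")  # % of the configured maximum ACUs in use
cpu_utilization = avg_metric("CPUUtilization")  # overall CPU load across the cluster

# Arbitrary example thresholds; tune them to your own workload.
if acu_utilization > 80:
    print("Serverless instances are near their ACU ceiling: scale vertically (raise MaxCapacity).")
elif cpu_utilization > 70:
    print("Cluster is CPU-bound on reads: scale horizontally (add an Aurora Replica).")
else:
    print("Current capacity looks sufficient.")
```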

How to create an Autoscaling Policy?

On your RDS database main page, go to the Logs & events tab and click Add to create a new auto scaling policy.

Configure the target metric and threshold, then click Add policy, and your policy will be ready to use.

Auto scaling operates through a policy that leverages metrics such as average CPU utilization, the average number of database connections, or custom metrics to fine-tune the read capacity. When auto scaling is triggered, a fresh reader instance is added to the cluster; conversely, when the target metric falls below the set threshold, the newly added instance is removed. It’s worth noting that the scale-out process may take a few minutes, as it involves launching a new reader DB instance. If you are already using the auto scaling feature, you have the option to replace it with the vertical scaling provided by Aurora Serverless v2.
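
The same policy can also be defined programmatically through the Application Auto Scaling API. The sketch below registers the cluster’s replica count as a scalable target and attaches a target-tracking policy on average reader CPU; the cluster name, capacity bounds, target value, and cooldowns are example values.

```python
import boto3

autoscaling = boto3.client("application-autoscaling", region_name="us-east-1")  # example region
RESOURCE_ID = "cluster:mixed-demo-cluster"  # "cluster:" + your DB cluster identifier (placeholder)

# Register the number of Aurora Replicas as a scalable target (1 to 5 readers here).
autoscaling.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId=RESOURCE_ID,
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=5,
)

# Attach a target-tracking policy that keeps average reader CPU around 60%.
autoscaling.put_scaling_policy(
    PolicyName="reader-cpu-target-tracking",  # example policy name
    ServiceNamespace="rds",
    ResourceId=RESOURCE_ID,
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
        "ScaleInCooldown": 300,   # seconds to wait before removing a reader
        "ScaleOutCooldown": 300,  # seconds to wait before adding another reader
    },
)
```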

How to tune In-Database Hyperparameters

Connection Pooling: Modify the maximum allowable simultaneous connections to prevent resource depletion and enhance performance. Avoid excessive over-provisioning.
Caching Strategies: Employ the query cache and buffer pool to enhance the performance of read-intensive workloads. Configure cache sizes thoughtfully, taking available memory into account.
InnoDB Buffer Pool: Fine-tune the InnoDB buffer pool size to optimize storage and access performance while closely monitoring read and write requirements.
Log File Size: Adapt the transaction log size to align with the database’s write workload, preventing performance issues related to log space.
Disabling Metrics: Turning off metric streams you do not need conserves valuable resources over time, allowing you to extract more efficiency from your database.
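
In RDS, most of these settings live in parameter groups rather than in a local configuration file. The snippet below shows one way to adjust a couple of them on a custom DB parameter group that is already attached to your instances; the group name, parameter names, and values are Aurora MySQL-flavored examples and should be checked against your engine’s documentation before use.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # example region

# Apply example values to a custom (non-default) parameter group attached to your instances.
rds.modify_db_parameter_group(
    DBParameterGroupName="mixed-demo-params",              # placeholder custom group
    Parameters=[
        {
            "ParameterName": "max_connections",             # ceiling for simultaneous connections
            "ParameterValue": "500",                        # example value
            "ApplyMethod": "immediate",                     # dynamic parameter, applies without a reboot
        },
        {
            "ParameterName": "innodb_buffer_pool_size",     # buffer pool size in bytes
            "ParameterValue": str(4 * 1024 * 1024 * 1024),  # example: 4 GiB
            "ApplyMethod": "pending-reboot",                # takes effect at the next reboot
        },
    ],
)
```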

How to build your optimal database in 4 steps

Step 1: Choose a configuration based on requirement

Begin by thoroughly assessing your application’s usage patterns, daily demands, and frequency of resource throttling. This evaluation will guide you in making a crucial decision: choosing between serverless, standalone, or a mixed RDS configuration, depending on your specific requirements.

Step 2: Look at your usage pattern to build scaling policies

Analyze your usage patterns in depth to formulate effective scaling policies. Define the minimum and maximum load variations over time and establish the ideal Aurora Capacity Unit (ACU) requirements. This allows you to fine-tune your RDS scaling strategies for optimal performance.

Step 3: Selecting ideal parameters of your database

To ensure the efficiency of your database, consider the number of reader instances required to support your intended regional coverage. Determine the optimal configuration based on the geographic regions in which you want to serve your audience. Additionally, assess the fail-safe needs to guarantee reliable and responsive database performance.

Step 4: Fine-Tuning for Peak Performance

To further optimize your database’s performance, apply a range of adjustments:

• Configure Connection Pooling to prevent resource exhaustion.
• Implement Caching Policies to enhance read-heavy workloads.
• Optimize InnoDB settings to balance storage and access performance.
• Adjust Log File Size in line with your database’s write workload.
• Fine-tune Analytics settings to ensure optimal performance.

Conclusion

The world of database deployment is not one-size-fits-all. As we’ve seen, the decision to go fully serverless or opt for a mixed configuration, and whether to scale horizontally or vertically, depends on your specific usage patterns and needs. By carefully assessing your requirements, fine-tuning scalability policies, and selecting the right parameters, you can create an optimal database setup that delivers peak performance and cost-efficiency. Whether you choose the agility of serverless or the reliability of a mixed approach, making an informed decision is crucial in catering to your unique application demands. In the ever-evolving realm of database management, adaptability and precision are key to ensuring your database seamlessly supports your growth and operational goals.
