Free Tool

AWS RDS Max Connection Calculator

See the default max_connections for any RDS instance class — Postgres, MySQL, Aurora, SQL Server, Oracle. Plan capacity before you scale.


Which database engine?

The engine determines the max_connections formula in the AWS-default parameter group.

Who This Tool Is For

Backend engineers, SREs, and DBAs sizing RDS or Aurora for production. If you've ever hit FATAL: remaining connection slots are reserved at 3am during a deploy, this tool tells you exactly how much headroom your current instance gives you and when to reach for RDS Proxy.

Why We Built This Tool

The AWS-default max_connections value is a parameter-group expression — LEAST({DBInstanceClassMemory/9531392}, 5000) — that engineers paste into Slack threads and get wrong as often as right. The math depends on engine-specific divisors and instance memory. We built this to remove the 10 minutes of "wait, is that bytes or KB?" every time someone scales an instance class up or down.

What Problem It Solves

  • Capacity sizing before scaling. Know if r6i.xlarge gives you the connections you need before you provision it.
  • App-side pool math. Convert max_connections into per-pod or per-process pool sizes with safe headroom.
  • RDS Proxy decision support. See where the multiplexing cost-benefit kicks in based on your fan-out.
  • Engine comparison. Quantify why moving from MySQL → Postgres or to Aurora changes your connection envelope.
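The app-side pool math in the list above can be sketched in a few lines. This is an illustrative sketch, not AWS guidance: the reserved-slot count (~3 superuser/rdsadmin slots) and the 80% headroom fraction are assumptions you should tune for your workload.

```python
def per_pod_pool_size(max_connections: int, pods: int,
                      reserved: int = 3, headroom: float = 0.8) -> int:
    """Split an instance's max_connections budget across app pods.

    reserved: slots held back for superuser/rdsadmin sessions (assumption).
    headroom: fraction of the remaining slots to actually hand out,
              leaving room for migrations, cron jobs, and ad-hoc psql.
    """
    usable = int((max_connections - reserved) * headroom)
    return max(1, usable // pods)

# Example: a Postgres instance with max_connections = 1802, 12 app pods
print(per_pod_pool_size(1802, 12))  # → 119 connections per pod
```

If pods autoscale, size the pool for the maximum pod count, not the current one.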

When connection caps become a real bottleneck, our RDS consulting service is the next step — instance right-sizing, parameter group tuning, and RDS Proxy rollout.

Frequently Asked Questions

Where does the max_connections formula come from?

It's the AWS default parameter group expression. For Postgres and Aurora Postgres it's LEAST({DBInstanceClassMemory/9531392}, 5000). MySQL/MariaDB use {DBInstanceClassMemory/12582880}. AWS evaluates this at instance start, so changing the instance class re-runs the formula on the new memory size automatically — unless you've created a custom parameter group with a literal value.
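To make the arithmetic concrete, here is a minimal Python sketch of how the default formula evaluates. The divisors come from the AWS default parameter groups quoted above; the 5000 cap shown is Postgres-specific, and other engines cap at different values.

```python
GiB = 1024 ** 3

# Engine-specific divisors from the AWS default parameter groups
DIVISOR = {
    "postgres": 9531392, "aurora-postgres": 9531392,
    "mysql": 12582880, "mariadb": 12582880,
}
# Hard caps (Postgres family shown; MySQL-family caps differ)
CAP = {"postgres": 5000, "aurora-postgres": 5000}

def default_max_connections(engine: str, memory_gib: float) -> int:
    """Evaluate LEAST({DBInstanceClassMemory/divisor}, cap) for an engine."""
    raw = int(memory_gib * GiB / DIVISOR[engine])
    return min(raw, CAP.get(engine, raw))

# db.r6i.large has 16 GiB of memory
print(default_max_connections("postgres", 16))  # → 1802
print(default_max_connections("mysql", 16))    # → 1365
```

Note that DBInstanceClassMemory is in bytes, which is exactly the "bytes or KB?" confusion this tool exists to remove.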

Can I override max_connections to a higher value?

Yes — create a custom parameter group with a static integer for max_connections. But you're trading RAM-per-connection for connection count: each Postgres backend uses ~10 MB plus per-query memory, and at high override values you'll OOM the instance before you ever reach the new ceiling. The default formula is calibrated for the instance's memory; override only with measured evidence.
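A back-of-the-envelope sanity check for an override, assuming the ~10 MB-per-backend figure above. The 25% shared_buffers allowance is a rough assumption; measure your actual per-connection memory under load before trusting any override.

```python
def override_ram_check(target_connections: int, instance_gib: float,
                       per_backend_mb: float = 10.0,
                       shared_buffers_fraction: float = 0.25) -> bool:
    """Rough check: can the instance hold this many Postgres backends?

    Assumes ~10 MB per mostly-idle backend (more under real query load)
    and ~25% of memory reserved for shared_buffers. Both are assumptions.
    """
    available_mb = instance_gib * 1024 * (1 - shared_buffers_fraction)
    return target_connections * per_backend_mb <= available_mb

# Overriding to 8000 connections on a 32 GiB instance:
print(override_ram_check(8000, 32))  # → False: ~80 GB of backends vs ~24 GB free
```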

When should I use RDS Proxy?

RDS Proxy multiplexes many short-lived application connections onto a small pool of database connections. It's worth it when (1) you have many app instances/Lambda functions opening connections, (2) deploys cause connection storms, or (3) you're running serverless workloads. Cost is ~$0.015/vCPU-hour of the underlying instance. The rough rule: 10+ application pods or any Lambda traffic.
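A rough monthly cost estimate from the per-vCPU rate above. The rate and the 730-hour month are assumptions for illustration; check current AWS pricing for your region and any minimum billable vCPU count.

```python
def monthly_proxy_cost(vcpus: int, price_per_vcpu_hour: float = 0.015,
                       hours: float = 730) -> float:
    """Estimate monthly RDS Proxy cost from the underlying instance's vCPUs."""
    return vcpus * price_per_vcpu_hour * hours

# An 8-vCPU instance works out to roughly $87.60/month
print(round(monthly_proxy_cost(8), 2))
```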

Do read replicas count separately?

Yes. Each read replica has its own max_connections envelope. If you balance reads across two r6i.large replicas, your read-side connection pool can be twice as large as a single instance suggests. Just remember that during failover a promoted replica must absorb the writer's connection load on top of its existing read traffic; size with that in mind.

Why does the cap exist (5000, 16000, 20000)?

Postgres and Aurora hard-cap max_connections at safe maximums to protect against pathological cases on very large instances. On a 768 GB r6i.24xlarge, the formula alone would yield 80,000+ connections, far beyond what the engine can serve responsively. The caps reflect engineering reality, not memory limits.

What about Aurora Serverless v2?

Aurora Serverless v2 sizes by ACU (Aurora Capacity Unit). Each ACU = ~2 GB of memory, so the same formula applies — multiply your max ACU setting by 2 GB and run the engine's divisor. A serverless v2 cluster with a 16 ACU max behaves like a 32 GB provisioned instance for connection-count purposes.
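The ACU conversion can be sketched for Aurora Postgres, reusing the divisor and cap quoted earlier and the ~2 GB-per-ACU ratio above:

```python
ACU_GIB = 2            # each Aurora Capacity Unit ≈ 2 GiB of memory
PG_DIVISOR = 9531392   # Aurora Postgres divisor from the default parameter group
PG_CAP = 5000

def serverless_v2_max_connections(max_acu: float) -> int:
    """Translate a serverless v2 max-ACU setting into a connection ceiling."""
    memory_bytes = max_acu * ACU_GIB * 1024 ** 3
    return min(int(memory_bytes / PG_DIVISOR), PG_CAP)

# A 16 ACU max behaves like a 32 GiB provisioned instance
print(serverless_v2_max_connections(16))  # → 3604
```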

200+ RDS deployments · 7 engines covered · 50+ AWS certifications