
AWS Application Load Balancer cost estimation

I’ve been helping a customer who recently migrated their application to AWS. They have a “big monolithic web server”. One of the obvious next incremental steps is to put an application load balancer (ALB) in front of it.

The customer asked the obvious question: how much will the ALB cost? This is a surprisingly hard question to answer confidently. Or perhaps not surprising to those who work in this space and read blogs such as Corey Quinn’s excellent exploration of why data transfer between AZs is twice as expensive as most people think!

Architecture Overview

We’ll focus only on the parts of the application architecture that are relevant to this specific cost estimation, and we’ll simplify the numbers a bit. Our application is in eu-west-1 (Ireland), so all prices are calculated for this region. Prices are correct at the time of writing, but AWS may change them.

Currently there is a “big monolithic web server” in eu-west-1 with an Elastic IP. It receives HTTPS traffic directly from end-users, including a lot of large files. For simplicity, I’ve taken ingress as 15,000GB/month and egress as 20,000GB/month.

We want to add an application load balancer in front of the “big monolithic web server” (for many reasons - see appendix). We’ll remove the Elastic IP.

This blog focuses on these immediate costs. It does not cover the many other improvements, such as fixing the single-point-of-failure by having an auto-scaling group of web-servers.

Many other significant costs stay the same, such as EC2 instance costs and the data transfer OUT from Amazon EC2 to the Internet. They are therefore not considered in these cost estimates.

Cost Estimates

For this use-case, I estimate that adding the load balancer adds an extra $300 per month: expensive, but worth it for all the benefits it brings.

Application Load Balancer

AWS pricing gives the Application Load Balancer costs as:

  • $0.0252 per ALB-hour (or partial hour)
  • $0.008 per LCU-hour (or partial hour)

The number of LCU-hours, described as “the least intuitive unit known to humankind”, is based on the maximum of four dimensions: new connections, active connections, processed bytes, and rule evaluations. For our use-case, “processed bytes” is the biggie: one LCU covers 1GB per hour of processed bytes. At 35,000GB per month (the sum of ingress and egress), this averages out at 49GB per hour, so an average of 49 LCUs.

That gives an approximate cost of $300 per month: ($0.0252 + $0.008 * 49) * 24 hours * 30 days.
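
For those who like to sanity-check the arithmetic, here is the same estimate as a quick Python sketch, using the prices and traffic figures above:

```python
ALB_HOURLY = 0.0252       # $ per ALB-hour, eu-west-1
LCU_HOURLY = 0.008        # $ per LCU-hour, eu-west-1
HOURS_PER_MONTH = 24 * 30

processed_gb_per_month = 15_000 + 20_000   # ingress + egress
processed_gb_per_hour = processed_gb_per_month / HOURS_PER_MONTH

# The processed-bytes dimension allows 1GB per hour per LCU; assume it
# dominates the other three LCU dimensions, as it does for this workload.
lcus = processed_gb_per_hour / 1.0

monthly_cost = (ALB_HOURLY + LCU_HOURLY * lcus) * HOURS_PER_MONTH
print(f"{lcus:.0f} LCUs -> ${monthly_cost:.2f}/month")   # 49 LCUs -> ~$300/month
```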

 

Costs that Don’t Apply

Cross-availability Zone Data Transfer

The ALB is highly available across two availability zones, but because it sends traffic to just one VM in one AZ, half of the inbound/outbound traffic will go cross-availability-zone. At $0.01/GB charged on each side of the transfer ($0.02/GB in total), half of our 35,000GB/month would be an extra $350 a month. Does it cost that?

The answer is no. To quote the EC2 pricing docs for Data Transfer within the same AWS Region:

Data transferred "in" to and "out" from Amazon Classic and Application Elastic Load Balancers using private IP addresses, between EC2 instances and the load balancer in the same AWS Region is free.

Interestingly, it sounds like if our use-case had required a Network Load Balancer we’d have been billed for this (cross-zone load balancing is configurable for Network Load Balancers, but always enabled for ALBs).

Removal of Elastic IP Data Transfer Costs

There is currently a lot of traffic to the Elastic IP. To quote the EC2 pricing docs:

IPv4: Data transferred “in” to and “out” from public or Elastic IPv4 address is charged at $0.01/GB in each direction.

Will we save $350 per month (35,000GB at $0.01/GB)?

The answer is no. This charge sits under the heading “Data Transfer within the same AWS Region”. Because the requests/responses are user traffic originating from outside of AWS, we were never being charged for this in the first place.

Measuring the Costs

Experimentation

Inspired by Corey Quinn’s blog on data transfer between AZs, in which he ran a test in a new AWS account to see the real prices, I decided to do something similar. Unfortunately my setup is way more complicated!

In two new non-free-tier AWS accounts, I set up the test. The first had a web-server with an Elastic IP. The second had a web-server behind a load balancer. I hit each of these with 10GB of requests and 10GB of responses (in hindsight I should have used 15GB and 20GB, to be more realistic). I waited a day for AWS Billing to tell me the costs.

You can find the test code in this GitHub repo (disclaimer: it’s just test code that I knocked together, rather than production quality; I was happy to cut corners to run my experiments sooner).

In brief, here’s what I did to replicate the existing setup:

  • In account one, I deployed (using Terraform) a VPC with two public subnets, a VM in a public subnet, associated an Elastic IP, and created a security group to allow only my CIDR block to access it.
  • I set up Apache on this, so it would accept GET requests for different object sizes, and POST requests to upload big files.
  • I hit the Elastic IP with lots of GET and POST requests, originating from outside AWS, to get my desired level of data transfer (see the sketch below).
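
The load-generation loop looked roughly like this (a minimal sketch, not the actual repo code; the IP address, paths and object size are illustrative):

```python
import os
import requests

ELASTIC_IP = "203.0.113.10"    # illustrative address, not the real test IP
CHUNK_MB = 100
TARGET_GB = 10

upload_body = os.urandom(CHUNK_MB * 1024 * 1024)   # incompressible POST body

for _ in range((TARGET_GB * 1024) // CHUNK_MB):
    # ~100MB of egress per GET (Apache serves pre-created files of known sizes)
    requests.get(f"http://{ELASTIC_IP}/files/100mb")
    # ~100MB of ingress per POST (Apache configured to accept large uploads)
    requests.post(f"http://{ELASTIC_IP}/upload", data=upload_body)
```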

And for the new desired setup:

  • In account two, I deployed (using Terraform) a VPC with two public subnets, a VM in a public subnet, an application load balancer with a target group that has the VM associated, and appropriate security groups.
  • As was done in account one, I set up Apache on the VM.
  • To ensure my traffic was evenly distributed across the load balancer’s two availability zones, I did `nslookup ${ALB_DNS}` to get the ALB’s two IP addresses and then alternated requests between the two IPs (see the sketch below).
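
A minimal sketch of that alternation (the ALB DNS name here is illustrative):

```python
import itertools
import socket
import requests

ALB_DNS = "test-alb-1234567890.eu-west-1.elb.amazonaws.com"  # illustrative

# Resolve the ALB's two IP addresses (one per AZ), like `nslookup ${ALB_DNS}`
addrinfo = socket.getaddrinfo(ALB_DNS, 80, proto=socket.IPPROTO_TCP)
ips = sorted({info[4][0] for info in addrinfo})

# Alternate requests between the two IPs, keeping the Host header so the
# ALB listener still sees the expected host.
for ip in itertools.islice(itertools.cycle(ips), 100):
    requests.get(f"http://{ip}/files/100mb", headers={"Host": ALB_DNS})
```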

I used AWS Cost Explorer to see the cost breakdown, to compare the two accounts.
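
The Cost Explorer console was enough here, but the same per-usage-type breakdown can also be pulled programmatically; a minimal sketch using boto3 (the dates are placeholders):

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer API

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2020-06-01", "End": "2020-06-02"},  # placeholder dates
    Granularity="DAILY",
    Metrics=["UnblendedCost", "UsageQuantity"],
    GroupBy=[{"Type": "DIMENSION", "Key": "USAGE_TYPE"}],
)

# Print cost per usage type, matching the rows in the table below
for group in response["ResultsByTime"][0]["Groups"]:
    usage_type = group["Keys"][0]
    cost = group["Metrics"]["UnblendedCost"]["Amount"]
    print(f"{usage_type}: ${cost}")
```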

This simplified load test approximately matched the price calculations described above (see table below). The small differences show how careful you have to be when running such tests. For example:

  • The VM setup (e.g. installing httpd) contributed towards data transfer costs.
  • The ALB and VMs were left running for just under 8 hours.
  • Logging into the VMs and running test commands before applying the load caused some variation.
  • The 10GB of ingress/egress was not applied evenly over the hour, which may have affected the number of LCUs slightly.
  • I should have used the real 15GB/20GB of ingress/egress rather than having to extrapolate!

 

UsageType                             Standalone       With ALB
EU-BoxUsage:t3.medium ($)             0.3269140152     0.3260020152
EU-EBS:VolumeUsage.gp2 ($)            0.0084931302     0.0084829451
EU-DataTransfer-Out-Bytes ($)         0.910953434709   0.915158082015
EU-DataTransfer-Regional-Bytes ($)    0.0000002414     0.0000000437
EU-LoadBalancerUsage ($)              -                0.2016
EU-LCUUsage ($)                       -                0.1634343296144
Total cost ($)                        1.246360821509   1.6146774156294

EU-DataTransfer-Out-Bytes (GB)        10.1217048301    10.1684231335
EU-DataTransfer-Regional-Bytes (GB)   0.000024136      0.0000043642
EU-DataTransfer-In-Bytes (GB)         10.603882962     10.5041905397
Total usage (GB)                      20.7256119281    20.6726180374

Real Production Bills

We look forward to seeing the customer’s real production bills once this change is rolled out! 

The ALB costs should be easy to pick out now that we've identified which types of charges are relevant. However, it can still be difficult to read and interpret the production bill - for example, identifying how much of the egress traffic went through the load balancer. There are additional production application components running in the same AWS account that contribute to the data transfer costs (e.g. with an Aurora database you’d expect some cross-AZ data transfer, and we'll see some egress traffic via the NAT Gateway). Even with tagging for cost allocation, it can be hard to tease out the difference caused by an architectural change, particularly when the load varies from month to month.

 

Other Cost Considerations

Various other differences in cost have been ignored in this blog. These include:

  • DNS costs. We’ll create an alias record pointing at the load balancer. The TTL for an alias record is 60 seconds; if this is lower than the existing A record’s TTL, clients will make more DNS queries, which could increase costs.
  • ALB Access Logs. When access logs are enabled they are written to S3, which incurs additional costs.

Costs that remain the same include:

  • Data transfer OUT from Amazon Region to internet at $1,750 per month (20,000GB egress; see the tiered calculation after this list).
  • EC2 instance costs for the web-server.
  • Storage and access costs (for EFS and S3).
  • Other components of the application (e.g. database, etc)
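
For completeness, here’s how that $1,750 egress figure falls out of the tiered data-transfer-out pricing; a sketch assuming the eu-west-1 tiers at the time of writing (first 10TB/month at $0.09/GB, then $0.085/GB):

```python
def egress_cost(gb: float) -> float:
    """Tiered data-transfer-out pricing for eu-west-1 (at time of writing)."""
    tiers = [
        (10 * 1024, 0.09),    # first 10TB/month
        (40 * 1024, 0.085),   # next 40TB/month
    ]
    cost, remaining = 0.0, gb
    for tier_gb, price in tiers:
        billed = min(remaining, tier_gb)
        cost += billed * price
        remaining -= billed
    return cost

print(f"${egress_cost(20_000):,.2f}/month")  # ~$1,750 for 20,000GB egress
```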

 

Future Changes and Costs

There are many other architectural changes that will likely affect costs in interesting ways. Some of these are touched on below.

File Upload/Download Direct to S3

The large files could be downloaded direct from S3 using pre-signed URLs, rather than going through the web-server. Similarly, the files could be uploaded direct to S3 using pre-signed URLs (see the sketch after this list). This would improve performance and reliability. The cost benefits would also be big:

  • It would dramatically reduce the load balancer bytes processed (and thus LCUs).
  • It would reduce the load on the EC2 VM(s), allowing smaller instance sizes to be used.
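
For illustration, generating such pre-signed URLs is only a few lines with boto3 (the bucket and key names here are made up):

```python
import boto3

s3 = boto3.client("s3")

# URL the browser can GET to download the file directly from S3,
# bypassing the ALB and web-server entirely.
download_url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-app-files", "Key": "reports/big-file.zip"},
    ExpiresIn=3600,  # valid for one hour
)

# URL the browser can PUT to upload directly to S3.
upload_url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "my-app-files", "Key": "uploads/big-file.zip"},
    ExpiresIn=3600,
)
```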

CloudFront

CloudFront would give only small cost benefits for our use-case: the large files are not downloaded often (on average, less than twice per file), so caching at CloudFront edge locations would not help much.

Web Application Firewall

Using an application load balancer unlocks a lot of security benefits. One of these is the ability to associate an AWS Web Application Firewall (WAF).

However, calculating the pricing for the new AWS managed rules and the WAF Capacity Units (WCUs) would require a blog post all of its own!

Private Subnets

These first changes include locking down the VM with tighter security groups.

The VM could be further locked down by moving it to a private subnet, and thus removing its public IP address. If not already present, this would require a NAT Gateway per availability zone. With two AZs, this would be approximately $69 per month: $0.048 * 24 hours * 30 days * 2 AZs.

Very little traffic would go over the NAT Gateways (data processing is priced at $0.048 per GB) - only traffic where the VM has to communicate directly with the internet, such as for yum update. A NAT instance could be used instead, to lower this cost.
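
As a rough sketch of the combined NAT Gateway cost (the 50GB/month of VM-to-internet traffic is an illustrative guess, not a measured figure):

```python
NAT_HOURLY = 0.048        # $ per NAT Gateway hour, eu-west-1
NAT_PER_GB = 0.048        # $ per GB processed
HOURS_PER_MONTH = 24 * 30
AZS = 2

vm_internet_gb = 50       # illustrative: yum updates etc., not user traffic

monthly = NAT_HOURLY * HOURS_PER_MONTH * AZS + NAT_PER_GB * vm_internet_gb
print(f"${monthly:.2f}/month")   # ~$69 in hourly charges + ~$2.40 data processing
```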

 

Appendix: Why Add an Application Load Balancer?

For those wondering why we want to add an application load balancer, here are a few reasons:

  • Security improvements:
    • Application Load Balancer forwards only valid HTTP requests. This protects you against attacks such as TCP SYN Flood.
    • AWS Shield Standard offers some defence against DDoS.
    • Can make use of the AWS Web Application Firewall (WAF).
    • The VM can be much more locked down (no direct access); we can combine this with the use of Systems Manager Session Manager to entirely remove the need to ever reach the VM directly.
  • Highly available cluster:
    • The load balancer can forward requests to multiple web servers, which can be provisioned across multiple AZs. This removes the single point of failure.
    • The load balancer can handle health checks to forward requests only to healthy web servers.
    • This enables lots of benefits, including better upgrade processes where we provision new VMs rather than reconfiguring the existing production VMs (part of the “pets vs cattle” argument).
  • Simplified operations:
    • The Application Load Balancer is highly-available, auto-scaling, fault-tolerant and fully managed by AWS.
    • Can use AWS Certificate Manager for the SSL/TLS Certificates on the Load Balancer, including auto-renewal.
    • SSL termination can be done at the ALB, with plain HTTP (port 80) from the ALB to the web servers (if infosec don’t require end-to-end encryption). This reduces the responsibilities (and thus the configuration) of the primary server.
    • Putting the site into maintenance mode is simpler: it can be done at the load-balancer level, allowing the VM to be taken entirely offline.
  • Decreased load on the web-servers:
    • By using HTTP keep alive, the same connection can be reused from the load balancer to the web-servers.
    • It removes the cost of the SSL handshake from the web-server, though this benefit is negligible: the handshake isn’t that expensive.

 
