AWS vs DigitalOcean - A Performance Comparison

Amazon Web Services (AWS) is a juggernaut in the public cloud industry, holding 45% market share according to Synergy Research Group's Q3 2016 data. It was one of the first providers to make cloud infrastructure available to the masses. DigitalOcean is an up-and-coming cloud host that offers SSD-backed servers and a simple API for quick and easy developer integrations.

I've used both and found that each has its pros and cons, but one thing I was curious about was performance. To test, I used Nanobox to set up two identical test apps.

The Setup

As with any experiment, it's important to remove as many variables as possible. Nanobox lets me deploy a single codebase to servers on both AWS and DigitalOcean. On deploy, Nanobox uses settings defined in the boxfile.yml to build and configure my app's runtime and environment, both locally and on live servers.

Essentially it lets me easily create two identical apps on separate hosts.

Here's what the process looked like (feel free to duplicate it):

Simple Test App

Since I really only wanted to test the baseline performance of the hardware, I didn't need a complex app to test against. I used a simple "to-do" app built with Phoenix (you can view the source code here) that includes basic CRUD functionality. The app consists of an Elixir runtime, a Phoenix web server, and a Postgres database.
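Nanobox builds all of this from the boxfile.yml mentioned earlier. As a rough sketch (not the exact file from the test app; the engine settings and start command depend on the Phoenix version you're running), it looks something like this:

# boxfile.yml (illustrative sketch; exact settings depend on the app)
run.config:
  engine: elixir

web.main:
  # start command for a Phoenix 1.2-era app; newer versions use `mix phx.server`
  start: mix phoenix.server

data.db:
  # Postgres data component managed by Nanobox
  image: nanobox/postgresql:9.5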

Connect My Hosting Provider Accounts

Nanobox has official integrations with both AWS and DigitalOcean. In my Nanobox dashboard, I connected my Nanobox account to my AWS and DigitalOcean accounts so Nanobox could provision and deploy the apps on my behalf.

Setup Hosting Providers

The Setting up a Hosting Account documentation walks through the process of connecting to a hosting provider.

Create the Two Apps

In the Nanobox dashboard, I created two separate apps – benchmark-aws and benchmark-do. During the app creation process, Nanobox lets you select which of your connected providers to use as well as a region.

Select your Region

I wanted the two apps to be as geographically close as possible to remove any significant latency potential, so I provisioned benchmark-aws in Amazon's US West datacenter in Northern California and benchmark-do in DigitalOcean's San Francisco 1 datacenter.

Connect the Project to the Apps & Deploy

In my local project, I added each of my apps as remotes and deployed to each.

# add benchmark-aws app as a remote with the 'aws' alias
nanobox remote add benchmark-aws aws

# add benchmark-do app as a remote with the 'do' alias
nanobox remote add benchmark-do do

# deploy to the app on aws
nanobox deploy aws

# deploy to the app on digitalocean
nanobox deploy do  

Nanobox uses Docker to build and network all of the necessary app components on each server. Code and environment-wise, the two apps were identical. The only difference was the underlying hardware.
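As a quick sanity check that both deploys were live, a couple of HEAD requests against the two hostnames is enough (this check is my own addition, not part of the original process):

# confirm both apps respond on their nanoapp.io hostnames
curl -sI http://benchmark-aws.nanoapp.io/ | head -n 1
curl -sI http://benchmark-do.nanoapp.io/ | head -n 1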

Add SSL/TLS Encryption

I also wanted to test the apps using both HTTP and HTTPS connections, so I used Nanobox to install Let's Encrypt certificates on each app. This process is covered in the Adding SSL/TLS documentation; it only took about 3 minutes to install both.
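If you want to confirm the certificates are actually being served before testing, a quick check with openssl works (hostname taken from the app above; this step isn't part of the Nanobox docs):

# print the issuer and validity dates of the certificate served by the AWS app
echo | openssl s_client -connect benchmark-aws.nanoapp.io:443 \
  -servername benchmark-aws.nanoapp.io 2>/dev/null | openssl x509 -noout -issuer -dates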

Server Specs

There isn't a one-to-one match between DigitalOcean's droplets and AWS's EC2 instance types, but I used the most closely spec'd servers I could.

                      AWS EC2 t2.nano                 DigitalOcean Standard Small
Memory                512MB                           512MB
CPU                   Variable (.25 Core baseline)    1 Core
Disk                  20GB                            20GB SSD
Transfer              Unlimited                       1TB
Monthly Cost          ≈$6 USD                         $5 USD

Amazon Lightsail does provide one-to-one server specs with DigitalOcean, but is still considered experimental and is not available in all AWS datacenters... yet.

The Tests

I used Siege to test the two apps with simulated traffic. Siege provides a summary after each test.
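Siege is available from most package managers if you'd like to reproduce the tests, for example:

# install Siege
brew install siege          # macOS (Homebrew)
sudo apt-get install siege  # Debian / Ubuntu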

I tested two pages:

  • / - A static landing page
  • /todos - A populated list of to-dos pulled from the database.

I ran Siege in "benchmark" mode which disables the default behavior of including a delay between requests. Each test ran for 1 minute with 10 concurrent users. Below are the commands I used:

# Static landing page, 10 concurrent users over 1 minute, http
siege -b -c 10 -t1m http://benchmark-aws.nanoapp.io/  
siege -b -c 10 -t1m http://benchmark-do.nanoapp.io/

# Static landing page, 10 concurrent users over 1 minute, https
siege -b -c 10 -t1m https://benchmark-aws.nanoapp.io/  
siege -b -c 10 -t1m https://benchmark-do.nanoapp.io/

# Dynamic page, 10 concurrent users over 1 minute, http
siege -b -c 10 -t1m http://benchmark-aws.nanoapp.io/todos  
siege -b -c 10 -t1m http://benchmark-do.nanoapp.io/todos

# Dynamic page, 10 concurrent users over 1 minute, https
siege -b -c 10 -t1m https://benchmark-aws.nanoapp.io/todos  
siege -b -c 10 -t1m https://benchmark-do.nanoapp.io/todos  

I ran each benchmark five times with breaks between each to allow the server time to return to its baseline resource usage. The data below represents averages of these tests.
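The repeated runs are easy to script if you're following along. Here's a rough sketch of how the five runs for one test could be automated; the log file name and the five-minute cooldown are my own choices, not taken from the original runs:

# run one siege command five times with a cooldown between runs
# siege prints its summary to stderr, so append that to a log file
for i in 1 2 3 4 5; do
  siege -b -c 10 -t1m http://benchmark-aws.nanoapp.io/ 2>> aws-static-http.log
  sleep 300  # give the server time to return to baseline resource usage
done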

The Results

Siege returned the following data after each test:

Transactions:
The number of server hits during the test.

Availability:
The percentage of successful transactions.

Data Transferred:
Total data transferred during the test.

Response Time:
Average response time of all requests.

Transaction Rate:
Number of transactions per second.

Throughput:
Average throughput throughout the duration of the test.

Concurrency:
Average number of simultaneous connections (the lower the better).

Longest Transaction:
Duration of the longest transaction throughout the test.

Shortest Transaction:
Duration of the shortest transaction throughout the test.
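Since the numbers below are averages of five runs, pulling an average out of the saved Siege logs is a one-liner. For example, using the hypothetical log file from the loop sketch above:

# average the "Transaction rate" lines across the five saved runs
grep "Transaction rate" aws-static-http.log | awk '{sum += $3; n++} END {printf "%.2f trans/sec\n", sum / n}'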

Test 1

Static landing page, 10 concurrent users over 1 minute, http

                      AWS                DigitalOcean
Transactions          4668               4227
Availability          100%               100%
Data Transferred      4.93 MB            4.47 MB
Response Time         0.13 secs          0.14 secs
Transaction Rate      77.92 trans/sec    71.25 trans/sec
Throughput            0.08 MB/sec        0.08 MB/sec
Concurrency           9.97               9.97
Longest Transaction   1.23 secs          3.12 secs
Shortest Transaction  0.11 secs          0.12 secs

[Resource usage graphs: Test 1 EC2 | Test 1 DigitalOcean]

Test 2

Static landing page, 10 concurrent users over 1 minute, https

                      AWS                DigitalOcean
Transactions          1432               792
Availability          100%               100%
Data Transferred      1.51 MB            0.84 MB
Response Time         0.41 secs          0.75 secs
Transaction Rate      23.92 trans/sec    13.34 trans/sec
Throughput            0.03 MB/sec        0.01 MB/sec
Concurrency           9.80               9.95
Longest Transaction   0.81 secs          1.66 secs
Shortest Transaction  0.30 secs          0.38 secs

[Resource usage graphs: Test 2 EC2 | Test 2 DigitalOcean]

Test 3

Dynamic page, 10 concurrent users over 1 minute, http

                      AWS                DigitalOcean
Transactions          3989               2274
Availability          100%               100%
Data Transferred      15.98 MB           9.11 MB
Response Time         0.22 secs          0.24 secs
Transaction Rate      67.38 trans/sec    38.54 trans/sec
Throughput            0.27 MB/sec        0.15 MB/sec
Concurrency           9.96               9.36
Longest Transaction   1.65 secs          3.86 secs
Shortest Transaction  0.11 secs          0.12 secs

[Resource usage graphs: Test 3 EC2 | Test 3 DigitalOcean]

Test 4

Dynamic page, 10 concurrent users over 1 minute, https

                      AWS                DigitalOcean
Transactions          1369               845
Availability          100%               100%
Data Transferred      5.48 MB            3.39 MB
Response Time         0.43 secs          0.70 secs
Transaction Rate      23.14 trans/sec    14.11 trans/sec
Throughput            0.09 MB/sec        0.06 MB/sec
Concurrency           9.94               9.93
Longest Transaction   1.91 secs          2.22 secs
Shortest Transaction  0.28 secs          0.37 secs

[Resource usage graphs: Test 4 EC2 | Test 4 DigitalOcean]

The Take-Aways

I'll be honest, I'm a little surprised at how one-sided the results are. I thought DigitalOcean's SSDs would give it a significant advantage, but it doesn't appear they did.

It was close...except when it wasn't

In many of the metrics, AWS had only a slight edge over DigitalOcean, but when it wasn't close, it wasn't close at all. This is most apparent in the HTTPS tests.

AWS was more consistent

AWS's response times and transaction lengths were much more consistent than DigitalOcean's. While the majority of DigitalOcean's response times were low, I saw many more outliers than I did with AWS.

The app on AWS used a tiny bit more RAM

This is more interesting than significant, but in all the tests, the app on AWS consistently used 2-5% more memory than the app running on DigitalOcean. I don't think this would have a significant impact in a production app, but it is interesting.

Variable CPU for the win!

CPU generally isn't a bottleneck for most web applications, but terminating SSL/TLS connections is a CPU-heavy task. In the tests using HTTPS, the DigitalOcean droplet consistently used 100% of its available CPU while the EC2 instance used somewhere in the 80% range.
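If you want to see this outside of the dashboard graphs, sampling CPU on each server during a run makes it obvious. A minimal sketch using a standard Linux tool (not something the original tests relied on):

# on the server, sample CPU and memory once per second for the length of a one-minute test
vmstat 1 60 > cpu-during-test.txt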

The effects are evident in the results. AWS hands-down outperformed DigitalOcean when handling requests over HTTPS, in everything from response times and transfer rates to the number of transactions completed during the test.

AWS provides "burstable" CPU with a guaranteed baseline of .25 cores, meaning you can (and do) use more when needed, potentially up to the physical limit of the bare-metal machine on which the instance is running. In other words, the EC2 instance could actually have access to 1, 2, 4, or 8+ cores, depending on what the machine has and what other instances on the same machine are using.

In Conclusion

AWS came out the clear winner in these tests. There are a lot of factors that could play into this including datacenter nuances, network I/O, network latency, etc. I'd be interested to see others run the same tests in other datacenters. Any takers?

Scott Anderson

Designer, code-dabbler, writer, foodie, husband, and father. Core Team Member at Nanobox.
