Amazon Web Services (AWS) is a juggernaut in the public cloud industry, holding 45% market share according to Synergy Research Group's Q3 2016 data. AWS was one of the first to make cloud infrastructure available to the masses. Digital Ocean is an up-and-coming cloud host that offers SSD performance and a really simple API for quick and easy developer integrations.
I've used both and have found that each has its pros and cons, but one thing I was curious about was performance. To test, I used Nanobox to set up two identical test apps.
As with any experiment, it's important to remove as many variables as possible. Nanobox lets me deploy a single codebase to servers on both AWS and Digital Ocean. On deploy, Nanobox uses settings defined in the boxfile.yml to build and configure my app's runtime and environment, both locally and on live servers.
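For reference, a boxfile.yml for an Elixir/Phoenix app with a Postgres database might look roughly like this. This is a minimal sketch based on Nanobox's conventions; the exact engine name, start command, and image tag are assumptions, not copied from the app used in this article:

```yaml
run.config:
  # build the app with Nanobox's Elixir engine (assumed engine name)
  engine: elixir

web.main:
  # start the Phoenix web server (assumed start command)
  start: mix phx.server

data.db:
  # provision a Postgres data component (assumed image tag)
  image: nanobox/postgresql:9.6
```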
Essentially it lets me easily create two identical apps on separate hosts.
Here's what the process looked like (feel free to duplicate it):
Simple Test App
Since I really only wanted to test the baseline performance of the hardware, I didn't need a super-complex app to test against. I used a simple "to-do" app built with Phoenix (you can view the source code here) that includes basic CRUD functionality. The app consists of an Elixir runtime, a Phoenix web server, and a Postgres database.
Connect My Hosting Provider Accounts
Nanobox has official integrations with both AWS and Digital Ocean. In my Nanobox dashboard, I connected my Nanobox account to my AWS and Digital Ocean accounts so Nanobox could provision and deploy the apps on my behalf.
The Setting up a Hosting Account documentation walks through the process of connecting to a hosting provider.
Create the Two Apps
In the Nanobox dashboard, I created two separate apps: benchmark-aws and benchmark-do. During the app creation process, Nanobox lets you select which of your connected providers to use as well as a region.
I wanted the two apps to be as geographically close as possible to remove any significant latency potential, so I provisioned benchmark-aws in Amazon's US West datacenter in Northern California and benchmark-do in Digital Ocean's San Francisco 1 datacenter.
Connect the Project to the Apps & Deploy
In my local project, I added each of my apps as remotes and deployed to each.
```
# add the benchmark-aws app as a remote with the 'aws' alias
nanobox remote add benchmark-aws aws

# add the benchmark-do app as a remote with the 'do' alias
nanobox remote add benchmark-do do

# deploy to the app on AWS
nanobox deploy aws

# deploy to the app on Digital Ocean
nanobox deploy do
```
Nanobox uses Docker to build and network all of the necessary app components on each server. Code and environment-wise, the two apps were identical. The only difference was the underlying hardware.
Add SSL/TLS Encryption
I also wanted to test the apps using both HTTP and HTTPS connections, so I used Nanobox to install Let's Encrypt certificates on each app. This process is covered in the Adding SSL/TLS documentation, and it only took about 3 minutes to install both.
There isn't a one-to-one match between Digital Ocean's droplets and AWS EC2's offerings, but I used the most closely spec'd servers I could.
| | AWS EC2 t2.nano | Digital Ocean Standard Small |
| --- | --- | --- |
| CPU | Variable (0.25 core baseline) | 1 core |
| Monthly Cost | ≈$6 USD | $5 USD |
Amazon Lightsail does offer servers whose specs map one-to-one to Digital Ocean's, but it is still considered experimental and is not available in all AWS datacenters... yet.
I used Siege to test the two apps with simulated traffic. Siege provides a summary after each test.
I tested two pages:
- `/`: a static landing page
- `/todos`: a populated list of to-dos pulled from the database
I ran Siege in "benchmark" mode which disables the default behavior of including a delay between requests. Each test ran for 1 minute with 10 concurrent users. Below are the commands I used:
```
# Static landing page, 10 concurrent users over 1 minute, http
siege -b -c 10 -t1m http://benchmark-aws.nanoapp.io/
siege -b -c 10 -t1m http://benchmark-do.nanoapp.io/

# Static landing page, 10 concurrent users over 1 minute, https
siege -b -c 10 -t1m https://benchmark-aws.nanoapp.io/
siege -b -c 10 -t1m https://benchmark-do.nanoapp.io/

# Dynamic page, 10 concurrent users over 1 minute, http
siege -b -c 10 -t1m http://benchmark-aws.nanoapp.io/todos
siege -b -c 10 -t1m http://benchmark-do.nanoapp.io/todos

# Dynamic page, 10 concurrent users over 1 minute, https
siege -b -c 10 -t1m https://benchmark-aws.nanoapp.io/todos
siege -b -c 10 -t1m https://benchmark-do.nanoapp.io/todos
```
I ran each benchmark five times with breaks between each to allow the server time to return to its baseline resource usage. The data below represents averages of these tests.
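Averaging the runs is straightforward. As a sketch, five transaction-rate readings from one test can be averaged with a short pipeline (the values below are hypothetical examples, chosen to average to the AWS HTTP figure reported later, not the actual recorded data):

```shell
# Average five transaction-rate readings from repeated Siege runs
# (hypothetical example values, not the recorded data)
rates="78.1 77.5 78.3 77.8 77.9"
echo "$rates" | tr ' ' '\n' | awk '{ sum += $1 } END { printf "%.2f\n", sum / NR }'
# → 77.92
```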
Siege returned the following data after each test:

- Transactions: the number of server hits during the test.
- Availability: the percentage of successful transactions.
- Data transferred: the total data transferred during the test.
- Response time: the average response time of all requests.
- Transaction rate: the number of transactions per second.
- Throughput: the average throughput over the duration of the test.
- Concurrency: the average number of simultaneous connections (the lower the better).
- Longest transaction: the duration of the longest transaction in the test.
- Shortest transaction: the duration of the shortest transaction in the test.
Static landing page, 10 concurrent users over 1 minute, http
| | AWS | Digital Ocean |
| --- | --- | --- |
| Data Transferred | 4.93 MB | 4.47 MB |
| Response Time | 0.13 secs | 0.14 secs |
| Transaction Rate | 77.92 trans/sec | 71.25 trans/sec |
| Throughput | 0.08 MB/sec | 0.08 MB/sec |
| Longest Transaction | 1.23 secs | 3.12 secs |
| Shortest Transaction | 0.11 secs | 0.12 secs |
Static landing page, 10 concurrent users over 1 minute, https
| | AWS | Digital Ocean |
| --- | --- | --- |
| Data Transferred | 1.51 MB | 0.84 MB |
| Response Time | 0.41 secs | 0.75 secs |
| Transaction Rate | 23.92 trans/sec | 13.34 trans/sec |
| Throughput | 0.03 MB/sec | 0.01 MB/sec |
| Longest Transaction | 0.81 secs | 1.66 secs |
| Shortest Transaction | 0.30 secs | 0.38 secs |
Dynamic page, 10 concurrent users over 1 minute, http
| | AWS | Digital Ocean |
| --- | --- | --- |
| Data Transferred | 15.98 MB | 9.11 MB |
| Response Time | 0.22 secs | 0.24 secs |
| Transaction Rate | 67.38 trans/sec | 38.54 trans/sec |
| Throughput | 0.27 MB/sec | 0.15 MB/sec |
| Longest Transaction | 1.65 secs | 3.86 secs |
| Shortest Transaction | 0.11 secs | 0.12 secs |
Dynamic page, 10 concurrent users over 1 minute, https
| | AWS | Digital Ocean |
| --- | --- | --- |
| Data Transferred | 5.48 MB | 3.39 MB |
| Response Time | 0.43 secs | 0.70 secs |
| Transaction Rate | 23.14 trans/sec | 14.11 trans/sec |
| Throughput | 0.09 MB/sec | 0.06 MB/sec |
| Longest Transaction | 1.91 secs | 2.22 secs |
| Shortest Transaction | 0.28 secs | 0.37 secs |
I'll be honest, I'm a little surprised at how one-sided the results are. I thought Digital Ocean's SSDs would give it a significant advantage, but it doesn't appear that they did.
It was close...except when it wasn't
In many of the metrics, AWS had only a slight edge over Digital Ocean, but when it wasn't close, it wasn't close at all. This is most apparent in the HTTPS tests.
AWS was more consistent
AWS's response times and transaction lengths were much more consistent than Digital Ocean's. While the majority of Digital Ocean's response times were low, I saw many more outliers than I did with AWS.
The app on AWS used a tiny bit more RAM
This is more interesting than significant, but in all the tests, the app on AWS consistently used 2-5% more memory than the app running on Digital Ocean. I don't think this would have a significant impact in a production app, but it is interesting.
Variable CPU for the win!
CPU generally isn't a bottleneck for most web applications, but terminating SSL/TLS connections is a CPU-heavy task. In the tests using HTTPS, the Digital Ocean droplet consistently used 100% of its available CPU while the EC2 instance used somewhere in the 80% range.
The effects are evident in the results. AWS, hands down, outperformed Digital Ocean when handling requests over HTTPS: everything from response times and transfer rates to the number of transactions completed during the test.
AWS provides "burstable" CPU with a guaranteed baseline of 0.25 cores, meaning the instance can (and does) use more when needed, potentially up to the physical limit of the bare-metal machine on which it runs. Depending on what hardware the machine actually has and what other instances on the same machine are using, the EC2 instance could have access to 1, 2, 4, or 8+ cores.
AWS came out the clear winner in these tests. There are a lot of factors that could play into this including datacenter nuances, network I/O, network latency, etc. I'd be interested to see others run the same tests in other datacenters. Any takers?