Ruby App Deployment with Nanobox

Ruby is so fun. Seriously, of all the languages I’ve ever learned and used, none come close to matching the developer productivity of Ruby. I wish I could pretend to be a super responsible developer and say that was the primary reason I use it, but the truth is it’s just freaking fun. There is a weird satisfaction I get with Ruby applications where I start to see them more as pieces of art to be admired than applications with functionality. Over the past decade, I can’t tell you how many times our team has sat down just to show off how clean or fancy some Ruby project is. It’s hard to describe the feeling; Ruby is just special.

So naturally, as early adopters of Ruby on Rails, we went all in and didn’t give much thought to how to run these applications in production. Ruby had us on a natural high, and Rails captivated our wildest imaginations. We just assumed running a Ruby app in production would be simple, sort of like our previous experience with PHP. We were, well… wrong. Needless to say, we spent the majority of the next few years studying Ruby web servers, forking vs. threaded models, and how to get these apps to scream.

Fast-forward about eight years, and now our team has a completely new focus: Nanobox – a portable, micro platform for developing and deploying apps. Our mission is to help developers be more productive and, as a result, help organizations succeed. As Ruby is a language designed for developer productivity, it’s a natural fit. So much so that we’ve spent a significant amount of time developing a native Ruby development environment and deployment engine.

Ruby Deployment Options

So you have a Ruby app and you want to deploy it. At this point, most of the hard parts have already been standardized by projects like Rack. So really, the only decision to be made is which web server to use. I’ll recommend an option, but let’s consider the available concurrency models:

Forking

A forking web server delegates concurrency to the Unix/Linux kernel. The concept of forking refers to a Unix process copying itself, and all of its memory contents, to a new process. In this model, the web server “forks” the app into a new Unix process to handle each web request.

This is perhaps the most thread-safe model, as none of the processes share a memory space. Each web request does its own thing in an isolated process, which exits after the request has been served. While this approach can be very fast, it generally requires more resource overhead.
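
To make the model concrete, here’s a bare-bones sketch of the fork-per-request idea in plain Ruby. This is just an illustration of the concept; Unicorn actually pre-forks a pool of long-lived workers rather than forking on every request.

require "socket"

server = TCPServer.new(8080)

loop do
  client = server.accept

  # Each request is handled in its own forked process with its own memory space.
  pid = fork do
    client.gets                                   # read the request line
    client.write "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok"
    client.close
  end

  client.close          # the parent no longer needs this socket
  Process.detach(pid)   # reap the child when it exits
end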

If you know your app isn’t thread-safe, this is the recommended approach. If you’re not sure, but you’re using a new-ish Ruby framework, your app is probably thread-safe. Unicorn is the most popular forking web server, and seriously, its implementation rocks. I’ve spent hours studying the Unicorn implementation and, to be honest, I learned most of what I know about Unix process management from studying this project. It’s pretty boss.
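
If you do go the Unicorn route, a minimal config/unicorn.rb might look something like the sketch below. This is just an example with placeholder values; the production setup later in this tutorial uses Puma instead.

# config/unicorn.rb (example sketch; not used later in this tutorial)
worker_processes Integer(ENV.fetch("WEB_CONCURRENCY", 2))
listen "127.0.0.1:3000"
timeout 30
preload_app true

before_fork do |server, worker|
  # Close the shared connection before forking so each worker opens its own.
  ActiveRecord::Base.connection.disconnect! if defined?(ActiveRecord)
end

after_fork do |server, worker|
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord)
end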

Threaded

The threaded model leverages Ruby’s threads and fibers. Instead of forking a new process for each request, requests are handled in threads or fibers within a single process. Threads and fibers are closely related; the main difference is how pausing and resuming execution is handled: threads are scheduled by the Ruby VM, while fibers are paused and resumed explicitly by the application.
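
Here’s the same bare-bones server sketched with the threaded model, again just as an illustration; a real server like Puma uses a bounded thread pool rather than spawning an unbounded thread per request.

require "socket"

server = TCPServer.new(8080)

loop do
  client = server.accept

  # Each request is handled in a thread; all threads share one process and memory space.
  Thread.new(client) do |conn|
    conn.gets                                     # read the request line
    conn.write "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok"
    conn.close
  end
end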

For a long time, Rails apps were not thread-safe, so this model wasn’t common until recently. At this point, though, it seems to be the most common. There are a few good web servers that implement this model, but I’ve had a lot of success with Puma.

Chances are, you’ll be just fine with Puma. I personally recommend it, and in this tutorial we’ll configure the production app to use Puma. If you’re interested in looking at alternatives, there is a great comparison of the available Ruby web servers here: A Comparison of (Rack) Web Servers for Ruby Web Applications
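
One practical note: to use Puma, it needs to be in your Gemfile. New Rails apps include it by default; if yours doesn’t, add something like the line below (the version constraint is only an example) and run bundle install.

# Gemfile
gem 'puma', '~> 3.0'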

Ruby Local Development

As with any deployment workflow, I need to first get my app running in a local development environment. The app I’ll be using is available on GitHub if you’d like to explore: https://github.com/nanobox-quickstarts/nanobox-rails

Download Nanobox

Nanobox uses Docker to provision an isolated Ruby development environment on my local machine. Once I’m ready, it’ll make deploying my Rails app to a live server really simple.

Create a boxfile.yml

The boxfile.yml defines my environment configuration, including my Ruby runtime and supporting data services. I’ll create a boxfile.yml with the following and place it in the root of my project.

run.config:  
  engine: ruby

  # add extra packages
  extra_packages:
    - nodejs
    - nginx

# run commands at specific points of a deploy
deploy.config:  
  # compile assets on deploy
  extra_steps:
    - rake assets:precompile
  # seed or run db migrations on deploy
  before_live:
    web.main:
      - rake db:setup_or_migrate

# add a database
data.db:  
  image: nanobox/postgresql:9.4

# add a web component and give it a "start" command
web.main:  
  start:
    nginx: nginx -c /app/config/nginx.conf
    puma: bundle exec puma -C /app/config/puma.rb

  # add writable dirs to the web component
  writable_dirs:
    - tmp
    - log

  # the path to a logfile you want streamed to the nanobox dashboard
  log_watch:
    rails: 'log/production.log'

The boxfile.yml will give me a Ruby runtime with a Postgres database in both local and production environments. It will also provide a web server running Nginx, which proxies down to Puma (more on that below).
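
One thing worth pointing out: the before_live hook references a db:setup_or_migrate rake task, which isn’t something Rails ships with; the quickstart repo defines its own. Here’s a rough sketch of what such a task could look like (hypothetical, the actual task in the repo may differ):

# lib/tasks/setup_or_migrate.rake (hypothetical sketch)
namespace :db do
  desc "Set up the database on the first deploy, migrate on subsequent deploys"
  task setup_or_migrate: :environment do
    begin
      # If the schema_migrations table exists, the database has been set up before.
      if ActiveRecord::Base.connection.table_exists?("schema_migrations")
        Rake::Task["db:migrate"].invoke
      else
        Rake::Task["db:setup"].invoke
      end
    rescue ActiveRecord::NoDatabaseError
      # The database doesn't exist yet; create it, load the schema, and seed it.
      Rake::Task["db:setup"].invoke
    end
  end
end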

Modify the Postgres Connection Config

Because my app needs to be portable between environments, I’m going to use environment variables to populate my Postgres connection settings. Nanobox will auto-generate these environment variables in each environment. In my config/database.yml:

default: &default  
  adapter: postgresql
  encoding: unicode
  pool: 5
  timeout: 5000
  host: <%= ENV['DATA_DB_HOST'] %>
  username: <%= ENV['DATA_DB_USER'] %>
  password: <%= ENV['DATA_DB_PASS'] %>

Setup the Local Environment

To set up my local environment, I’ll first add a DNS alias for my local app, just as a matter of convenience, so I can access it easily from a browser. I’ll then use the nanobox run command to spin up my app locally and drop into a console inside my Ruby environment. Once there, I’ll seed my local database.

# Add a convenient way to access the app from the browser
nanobox dns add local rails.dev

# Create the dev environment and drop into a console
nanobox run

# Seed the database
rake db:setup  

Run the App Locally

With the database seeded, I’m ready to fire up Rails. Inside the Nanobox console, I’ll just run:

rails s  

Deploying Ruby

I’m going to go ahead and deploy my Rails app to live servers with Nanobox, but before I do, I’ll need to configure the Nginx proxy. I also want to test the deploy process locally.

Configure Nginx & Puma

In production, I’m going to run an Nginx web server that proxies down to my Puma process. To do this, I need to provide an nginx.conf and a puma.rb in my project. I’ll store them in my config directory.

config/nginx.conf:

worker_processes 1;  
daemon off;

events {  
    worker_connections 1024;
}

http {  
    include /data/etc/nginx/mime.types;
    sendfile on;

    gzip              on;
    gzip_http_version 1.0;
    gzip_proxied      any;
    gzip_min_length   500;
    gzip_disable      "MSIE [1-6]\.";
    gzip_types        text/plain text/xml text/css
                      text/comma-separated-values
                      text/javascript
                      application/x-javascript
                      application/atom+xml;

    # Proxy upstream to the puma process
    upstream rails {
        server 127.0.0.1:3000;
    }

    # Configuration for Nginx
    server {

        # Listen on port 8080
        listen 8080;

        root /app/public;

        try_files $uri/index.html $uri @rails;

        # Proxy connections to rails
        location @rails {
            proxy_pass         http://rails;
            proxy_redirect     off;
            proxy_set_header   Host $host;
        }
    }
}

config/puma.rb:

# Puma can serve each request in a thread from an internal thread pool.
# The `threads` method setting takes two numbers: a minimum and maximum.
# Any libraries that use thread pools should be configured to match
# the maximum value specified for Puma. Default is set to 5 threads for minimum
# and maximum; this matches the default thread size of Active Record.
#
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }.to_i  
threads threads_count, threads_count

# Specifies the `port` that Puma will listen on to receive requests, default is 3000.
#
port        ENV.fetch("PORT") { 3000 }

# Specifies the `environment` that Puma will run in.
#
environment ENV.fetch("RAILS_ENV") { "development" }

# Specifies the number of `workers` to boot in clustered mode.
# Workers are forked webserver processes. If using threads and workers together
# the concurrency of the application would be max `threads` * `workers`.
# Workers do not work on JRuby or Windows (both of which do not support
# processes).
#
# workers ENV.fetch("WEB_CONCURRENCY") { 2 }

# Use the `preload_app!` method when specifying a `workers` number.
# This directive tells Puma to first boot the application and load code
# before forking the application. This takes advantage of Copy On Write
# process behavior so workers use less memory. If you use this option
# you need to make sure to reconnect any threads in the `on_worker_boot`
# block.
#
# preload_app!

# The code in the `on_worker_boot` will be called if you are using
# clustered mode by specifying a number of `workers`. After each worker
# process is booted this block will be run, if you are using `preload_app!`
# option you will want to use this block to reconnect to any threads
# or connections that may have been created at application boot, Ruby
# cannot share connections between processes.
#
# on_worker_boot do
#   ActiveRecord::Base.establish_connection if defined?(ActiveRecord)
# end

# Allow puma to be restarted by `rails restart` command.
plugin :tmp_restart  

These are used by the start commands under web.main in my `boxfile.yml`.  

Preview a Production Deploy

Nanobox allows you to stage a production deploy on your local machine through its “dry-run” functionality. I’ll first add a DNS alias for my dry-run app, just to make it easy to access from the browser after it’s running.

# add a convenient way to access the app
nanobox dns add dry-run rails.st

# do a dry-run deploy
nanobox deploy dry-run  

This will spin up the app locally just as if it were being deployed to production. Once complete, I can visit and test the app at http://rails.st.

Deploy Rails to Production Servers

After testing everything in a dry-run environment, I’m ready to deploy my app to live servers. I’m going to create a new app with Nanobox, add the new app as a remote on my project, then deploy.

# add my new app as a remote
nanobox remote add app-name

# deploy to my new app
nanobox deploy  

Nanobox will provision a live server using my cloud provider account, deploy my local codebase to the server, create containers for each of my app’s components (web and database), seed my database and start Rails.

Get Started With Nanobox

Nanobox is free for personal use and open source projects. You can learn about our paid plans by visiting our pricing page.

To get started with Nanobox, sign up for an account, set up Nanobox following one of our language or framework guides, and watch your life as a software creator and publisher become simpler and easier.

If you have other questions or need some help getting started email us at hello@nanobox.io, call us at 801-396-7422, or message us through any of our social media channels.
