How to Deploy Rails Applications to Google Cloud Platform with Nanobox

Ruby on Rails (RoR or Rails) came into the world and won the hearts and minds of developers with its elegant, readable syntax and ease of use. It has since become the go-to framework for many web developers and powers many of today's web applications. Google Cloud Platform (GCP) is a robust collection of cloud-based tools and services covering everything from Infrastructure as a Service (IaaS) to machine learning and security.

In this article, I'm going to walk through deploying a Rails application to GCP's Compute Engine using Nanobox. Nanobox uses Docker to build local development and staging environments, as well as scalable, highly-available production environments on GCP.

Before You Begin

If you haven't already, create a free Nanobox account and download Nanobox Desktop.

Set Up Your Rails Project

Whether you have an existing Rails project or are starting from scratch, the process of configuring it for Nanobox is the same.

Note: This tutorial is specific to Rails 5.

Add a boxfile.yml

Nanobox uses the boxfile.yml to build and configure your app's environment both locally and in production. Create a boxfile.yml in the root of your project with the following:

run.config:
  engine: ruby
  engine.config:
    runtime: ruby-2.4
  extra_packages:
    - nodejs
    - nginx
    - pkgconf
    - libxml2
    - libxslt

deploy.config:
  extra_steps:
    - rake assets:precompile RAILS_ENV=production
  before_live:
    web.main:
      - rake db:setup_or_migrate

data.db:
  image: nanobox/postgresql:9.5

web.main:
  start:
    nginx: nginx -c /app/config/nginx.conf
    puma: bundle exec puma -C /app/config/puma.rb
  writable_dirs:
    - tmp
    - db
  log_watch:
    rails: 'log/production.log'

This includes everything Rails needs to install and run. You may need to update a few items specific to your project, but in this walk-through, I'm going to use:

  • A Ruby runtime
  • Node.js for asset compilation
  • Static asset digest generation on deploy
  • Database creation/migration on deploy
  • A Postgres database
  • A web component that will serve Rails through an Nginx reverse proxy

If you need Sidekiq, I've included a section on that below.

Start the Local Dev Environment

With the boxfile.yml in place, you can fire up a virtualized local development environment. I recommend adding a DNS alias just so the app will be easier to access from a browser.

# Add a convenient way to access the app from a browser
nanobox dns add local rails.local

# Start the dev environment
nanobox run

Nanobox will provision a local development environment, spin up a containerized Postgres database, mount your local codebase into the VM, load your app's dependencies, then drop you into a console inside the VM.

Generate a New Rails Project

If you have an existing Rails project, you can skip this section. To generate a new Rails project from scratch, run the following from inside the Nanobox console:

# Install Nokogiri
gem install nokogiri -- --use-system-libraries --with-xml2-config=/data/bin/xml2-config --with-xslt-config=/data/bin/xslt-config

# Install Rails
gem install rails

# Generate a new Rails application
rails new .

Your project's current working directory is mounted into the /app directory in the VM, so all the Rails files written there will propagate back down to your machine's filesystem and vice versa.

Install the DB Adapter

Since we're using Postgres in this walkthrough, go ahead and install the pg gem. If you're using a different database, install the appropriate gem/adapter.

# Install the Postgres 9.6 client
pkgin in postgresql96-client

# Install the pg gem
bundle add pg

Update the Database Connection

When Nanobox spins up a Postgres database, it generates environment variables for the necessary connection credentials. Update the database connection in your config/database.yml to use the provided environment variables.

default: &default
  adapter: postgresql
  encoding: unicode
  pool: 5
  timeout: 5000
  host: <%= ENV['DATA_DB_HOST'] %>
  username: <%= ENV['DATA_DB_USER'] %>
  password: <%= ENV['DATA_DB_PASS'] %>
  database: gonano

development:
  <<: *default

test:
  <<: *default

production:
  <<: *default
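To see what Rails does with those ERB tags, here's a quick sketch using Ruby's standard ERB library. The values below are stand-ins for illustration; in a real environment Nanobox sets these variables for you:

```ruby
require 'erb'

# Stand-in values — Nanobox generates these for the data.db component.
ENV['DATA_DB_HOST'] = '192.168.0.55'
ENV['DATA_DB_USER'] = 'nanobox'

# The same ERB tags used in database.yml, rendered at load time.
snippet = "host: <%= ENV['DATA_DB_HOST'] %>\nusername: <%= ENV['DATA_DB_USER'] %>"
puts ERB.new(snippet).result
# host: 192.168.0.55
# username: nanobox
```

Because the values are read when the config is loaded, the same database.yml works unchanged in dev, dry-run, and production.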

Run Rails Locally

With your database connection updated and dependencies loaded, you're ready to start Rails in your local dev environment. Inside Nanobox, web processes need to bind to 0.0.0.0 so they're reachable from outside the container. From the /app directory in your Nanobox console:

rails s -b 0.0.0.0

You'll then be able to access your running Rails app at rails.local:3000.

Whenever you exit out of the Nanobox console, it'll shut your VM down and drop you back into your host OS.

Set Up Sidekiq

This section is only necessary if you're using Sidekiq. If you're not, feel free to skip to the next section.

Install Sidekiq

To include Sidekiq in your project, install the sidekiq gem.

bundle add sidekiq

Add a Sidekiq Worker & Redis to Your boxfile.yml

Add a Sidekiq worker component to your boxfile.yml along with a Redis component.

worker.sidekiq:
  start: sidekiq

data.queue:
  image: nanobox/redis:4.0

Update Sidekiq's Redis Connection

Add a Sidekiq initializer at config/initializers/sidekiq.rb and include the following to connect to Redis using the auto-generated environment variables.

Sidekiq.configure_client do |config|
  config.redis = { url: "redis://#{ENV['DATA_QUEUE_HOST']}:6379/5" }
end

Sidekiq.configure_server do |config|
  config.redis = { url: "redis://#{ENV['DATA_QUEUE_HOST']}:6379/5" }
end
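For clarity, here's how that URL expands. The host value below is a stand-in for what Nanobox generates for the Redis component, and the trailing /5 selects Redis logical database 5:

```ruby
# Stand-in value — Nanobox sets DATA_QUEUE_HOST for the data.queue component.
ENV['DATA_QUEUE_HOST'] = '192.168.0.56'

redis_url = "redis://#{ENV['DATA_QUEUE_HOST']}:6379/5"
puts redis_url
# redis://192.168.0.56:6379/5
```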

Run Sidekiq Locally

When using Nanobox locally, web and worker processes need to be started manually. To run Rails and Sidekiq together, start Rails in one terminal session and Sidekiq in another.

Terminal 1
nanobox run rails s -b 0.0.0.0
Terminal 2
nanobox run sidekiq

Both run sessions operate inside the same local container, just in separate terminal sessions.

Prepare Rails for Deploy

There are just a few things you need to do before you deploy Rails with Nanobox.

Configure Nginx & Puma

When deployed, Nanobox is going to start an Nginx reverse proxy and serve Rails through Puma as specified in the boxfile.yml above. Add the following nginx.conf and puma.rb into the config directory in your project. Rails may have already generated a puma.rb. If so, you don't need to replace it.

worker_processes 1;
daemon off;

events {
    worker_connections 1024;
}

http {
    include /data/etc/nginx/mime.types;
    sendfile on;

    gzip              on;
    gzip_http_version 1.0;
    gzip_proxied      any;
    gzip_min_length   500;
    gzip_disable      "MSIE [1-6]\.";
    gzip_types        text/plain text/xml text/css;

    # Proxy upstream to the puma process
    upstream rails {
        server 127.0.0.1:3000;
    }

    # Configuration for Nginx
    server {

        # Listen on port 8080
        listen 8080;

        root /app/public;

        try_files $uri/index.html $uri @rails;

        # Proxy connections to rails
        location @rails {
            proxy_pass         http://rails;
            proxy_redirect     off;
            proxy_set_header   Host $host;
        }
    }
}

And here's the puma.rb:

# Puma can serve each request in a thread from an internal thread pool.
# The `threads` method setting takes two numbers: a minimum and maximum.
# Any libraries that use thread pools should be configured to match
# the maximum value specified for Puma. Default is set to 5 threads for minimum
# and maximum, this matches the default thread size of Active Record.
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }.to_i
threads threads_count, threads_count

# Specifies the `port` that Puma will listen on to receive requests, default is 3000.
port        ENV.fetch("PORT") { 3000 }

# Specifies the `environment` that Puma will run in.
environment ENV.fetch("RAILS_ENV") { "development" }

# Specifies the number of `workers` to boot in clustered mode.
# Workers are forked webserver processes. If using threads and workers together
# the concurrency of the application would be max `threads` * `workers`.
# Workers do not work on JRuby or Windows (both of which do not support
# processes).
# workers ENV.fetch("WEB_CONCURRENCY") { 2 }

# Use the `preload_app!` method when specifying a `workers` number.
# This directive tells Puma to first boot the application and load code
# before forking the application. This takes advantage of Copy On Write
# process behavior so workers use less memory. If you use this option
# you need to make sure to reconnect any threads in the `on_worker_boot`
# block.
# preload_app!

# The code in the `on_worker_boot` will be called if you are using
# clustered mode by specifying a number of `workers`. After each worker
# process is booted this block will be run, if you are using `preload_app!`
# option you will want to use this block to reconnect to any threads
# or connections that may have been created at application boot, Ruby
# cannot share connections between processes.
# on_worker_boot do
#   ActiveRecord::Base.establish_connection if defined?(ActiveRecord)
# end

# Allow puma to be restarted by `rails restart` command.
plugin :tmp_restart
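As the comments above note, clustered-mode concurrency is workers × threads. A quick sanity check using the example values referenced in the commented settings (WEB_CONCURRENCY of 2, RAILS_MAX_THREADS of 5):

```ruby
# Example values from the commented defaults in puma.rb above.
workers     = 2 # WEB_CONCURRENCY
max_threads = 5 # RAILS_MAX_THREADS

# Maximum simultaneous requests the app could serve in clustered mode.
max_concurrency = workers * max_threads
puts max_concurrency
# 10
```

This is also the number your database connection pool should be able to cover per worker process, which is why the thread count defaults match Active Record's pool size.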

Add a Database Setup or Migrate Rake Task

The boxfile.yml above includes a rake db:setup_or_migrate deploy hook that is designed to check the status of your database. If it doesn't exist yet, it'll run the db:setup task. If it does, it'll run the db:migrate task.

Add this task to lib/tasks/db.rake:

namespace :db do
  desc 'Setup the db or migrate depending on state of db'
  task setup_or_migrate: :environment do
    begin
      # The database exists — run any pending migrations
      ActiveRecord::Migrator.current_version
      Rake::Task['db:migrate'].invoke
    rescue ActiveRecord::NoDatabaseError
      # No database yet — create it, load the schema, and seed
      Rake::Task['db:setup'].invoke
    end
  end
end

The purpose of this rake task is to make your app as portable as possible across different environments without overwriting existing data or starting Rails with an unseeded database.
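The task's control flow in miniature — this sketch stands in a plain error class for ActiveRecord::NoDatabaseError so it runs outside Rails:

```ruby
# Stand-in for ActiveRecord::NoDatabaseError, so the sketch runs without Rails.
class NoDatabaseError < StandardError; end

def setup_or_migrate(db_exists)
  raise NoDatabaseError unless db_exists # the probe, like Migrator.current_version
  'db:migrate'                           # database present: run pending migrations
rescue NoDatabaseError
  'db:setup'                             # no database: create, load schema, seed
end

puts setup_or_migrate(true)  # db:migrate
puts setup_or_migrate(false) # db:setup
```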

Alright! Now to the fun part!

Set Up Your GCP Account

If you haven't already, create a GCP account. In your admin panel's left-nav, go to the "IAM & admin" > "Service Accounts" section.

GCP IAM & Admin > Service Accounts

Create a new service account with at least the following roles enabled:

  • Compute Instance Admin (v1)
  • Compute Network Admin
  • Compute Security Admin
  • Service Account Actor

GCP Service Account Roles

Select the "Furnish a new private key" option, save, and download the private key.

Add a New Provider to Your Nanobox Account

Add New Provider Account

Select Google Compute and click "Proceed."

Select Google Compute

Nanobox needs your GCP service email, service key, and project ID to authenticate with your GCP account and provision compute instances on your behalf. Paste in your credentials and click "Verify & Proceed."

Enter your GCP auth credentials

Name your provider and choose a default region. The name is arbitrary and only meant to help you identify it in your list of provider accounts.

Name your provider and select a default region

Launch a New App

Go to the home page of your Nanobox dashboard and click the "Launch New App" button. Select your GCP provider from the dropdown and choose the region in which you'd like to deploy your app.

Select your GCP provider

Confirm and click "Let's Go!" Nanobox will order a Compute instance under your GCP account. When the instance is up, Nanobox will provision the platform components necessary for your app to run:

  • Load-Balancer: The public endpoint for your application. Routes and load-balances requests to web nodes.
  • Monitor: Monitors the health of your server(s) and application components.
  • Logger: Streams and stores your app's aggregated log stream.
  • Message Bus: Sends app information to the Nanobox dashboard.
  • Warehouse: Storage used for deploy packages, backups, etc.

Once all the platform components are provisioned and running, you're ready to deploy your app.

Stage Your App Locally

Nanobox provides "dry-run" functionality that simulates a full production deploy on your local machine. This step is optional, but recommended. If the app deploys successfully in a dry-run environment, it should also deploy successfully to your live environment.

nanobox deploy dry-run

More information about dry-run environments is available in the Dry-Run documentation.


Add Your New App as a Remote

From the root of your project directory, add your newly created app as a remote.

nanobox remote add app-name

This connects your local codebase to your live app. More information about the remote command is available in the Nanobox Documentation.

Deploy to Your Live App

With your app added as a remote, you're ready to deploy.

nanobox deploy

Nanobox will compile and package your application code, send it up to your live app, provision all your app's components inside your live compute instance, network everything together, and BOOM! Your app will be live on GCP.

Manage & Scale

Once your app is deployed, Nanobox makes it easy to manage and scale your production infrastructure. In your Nanobox dashboard you'll find health metrics for all your app's instances/containers. Your application logs are viewable in your dashboard and can be streamed using the Nanobox CLI.

Although every app starts out on a single compute instance with containerized components, you can break components out into individual instances and/or scalable clusters through the Nanobox dashboard. Nanobox handles the deep DevOps stuff so you don't have to. Enjoy!



Why Listen on 0.0.0.0?

Nanobox uses Docker to containerize your application within its own private network. Using 127.0.0.1 or localhost (the "loopback" IP) inside of a container loops back to the container itself, not the host machine. In order for requests to reach your application, it needs to listen on all available IPs (0.0.0.0).
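A tiny sketch of the difference, using Ruby's standard TCPServer (port 0 asks the OS for any free port):

```ruby
require 'socket'

# Bind to loopback only — reachable solely from inside this container.
loopback = TCPServer.new('127.0.0.1', 0)

# Bind to all interfaces — reachable from the container's private network.
everywhere = TCPServer.new('0.0.0.0', 0)

puts loopback.addr[3]   # 127.0.0.1
puts everywhere.addr[3] # 0.0.0.0

loopback.close
everywhere.close
```

This is why `rails s -b 0.0.0.0` is used throughout this walkthrough rather than a bare `rails s`.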

Why Port 8080?

Nanobox provides your application with a router that directs traffic through a private network created for your app. The router listens on ports 80 and 443, terminates SSL, and forwards all requests to port 8080.

Note: Your app/framework can listen on a port other than 8080, but you will need to implement a proxy that listens on 8080 and forwards to your custom port.

Why Use Environment Variables?

Environment variables serve two purposes:

  • They obscure sensitive information in your codebase. Environment variables referenced in your code are populated at runtime, keeping potentially sensitive values out of your codebase.

  • Due to the dynamic nature of containerized applications, it's hard to predict the host IP of running services, and these IPs are subject to change as your infrastructure changes. Nanobox knows what these values are when it creates your infrastructure and generates environment variables for the necessary connection details.
