This guide will show you how you can easily move away from Heroku and on to Google Cloud Platform (GCP). GCP is a suite of cloud computing services that runs on the same infrastructure that Google uses internally for its end-user products, such as Google Search and YouTube. GCP gives you options for computing and hosting. You can choose to work with a managed application platform, leverage container technologies to gain lots of flexibility, or build your own cloud-based infrastructure to have the most control and flexibility.
If you're interested in learning all the complexities of provisioning, configuring, and managing a production infrastructure, this guide probably isn't for you. The idea here is to make migrating from Heroku to GCP as simple and painless as possible. Since you're already using Heroku, I'm betting you're not really interested in all the nitty-gritty details. You just want a way to get all of the great things Heroku has to offer, without Heroku. Nanobox is a powerful Heroku alternative that gives you the same great workflow you're used to with Heroku, with all the flexibility and control of GCP.
Configure Your App
Every web application has an underlying infrastructure. This infrastructure is typically made up of web servers, databases, and workers. The purpose of Nanobox is to provision, configure, and deploy your application's infrastructure so you don't have to.
Similar to Heroku's Procfile, Nanobox reads a config file called `boxfile.yml` that describes your application's infrastructure. This file lives at the root of your project and tells Nanobox everything your application needs so Nanobox can build and configure your infrastructure for you.
Procfile to boxfile.yml Example
Converting a `Procfile` to a `boxfile.yml` is very simple. A typical `Procfile` might look like this:
```yaml
web: bundle exec puma -C config/puma.rb
worker: bundle exec sidekiq
```
The equivalent `boxfile.yml` would look like this:
```yaml
run.config:
  engine: ruby

web.site:
  start: bundle exec puma -C config/puma.rb

worker.sidekiq:
  start: bundle exec sidekiq

data.db:
  image: nanobox/postgresql:9.6
```
Understanding the boxfile.yml
There are a few obvious differences between the `Procfile` and the `boxfile.yml`. The following section explains each node individually to give you a better understanding of what everything means. If you aren't interested in the details, feel free to skip ahead.
An engine is comparable to a Heroku buildpack. It's a set of scripts that build and configure your app's environment and runtime. These scripts retrieve dependencies, compile your application code, and more.
Every `boxfile.yml` must have an engine specified:
```yaml
run.config:
  engine: ruby
```
The `ruby` engine tells Nanobox to build an environment that includes Ruby, Gem, and Bundler. It also tells Nanobox to run `bundle install` for you during the build process.
Nanobox has engines available for many different languages, including Ruby, Python, Node.js, PHP, Go, and Java. Alongside the engine, your `boxfile.yml` defines your app's components. There are three component types:
- web: Receives HTTP, TCP, and UDP requests from your app’s router.
- worker: Background process inaccessible from your app’s router.
- data: Components designed for handling data of any kind – databases, caches, job queues, etc.
Web & Worker Components
Web and worker components only require a `start` command, although other config options are available. The `start` command is the same command you'd use for the process in a `Procfile`:
```yaml
# Procfile
web: bundle exec puma -C config/puma.rb
worker: bundle exec sidekiq
```

```yaml
# boxfile.yml
web.site:
  start: bundle exec puma -C config/puma.rb

worker.sidekiq:
  start: bundle exec sidekiq
```
Note: Each component in your `boxfile.yml` has a component ID that follows the pattern `type.name`, for example `worker.sidekiq`. The type is one of the three listed above, but the name is completely arbitrary. Any additional component settings are nested under each ID.
Data Components

Until recently, Heroku didn't offer any internally managed database options, instead integrating with external service providers through "Add-Ons". With Nanobox, databases are internally managed as part of your app's private network and come with monitoring and scaling options.
Note: Nanobox still allows for you to use externally hosted/managed data services such as S3 or RDS if you prefer.
If you choose to manage your data components with Nanobox, simply include them in your `boxfile.yml`. The only required option for a data component is the `image`, referencing the Docker image used to provision the component:
```yaml
data.postgres:
  image: nanobox/postgresql:9.6
```
Note: You're free to use your own Docker images, but we recommend Nanobox's official images, which include Nanobox-specific functionality.
If you're already running on Heroku, there aren't many changes you'll need to make to your app, but there are some important things to note:
Listen on 0.0.0.0:8080
With Nanobox, in order for your app to receive requests from the public network, it must listen on `0.0.0.0:8080`. This is generally configured in the app itself, as part of the web server config, when the app is started, or with Nginx.
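With Puma, for example, this is a single line in your server config. A sketch, assuming the `config/puma.rb` referenced in the `Procfile` above; adjust to your own setup:

```ruby
# config/puma.rb
# Bind Puma to all interfaces on port 8080 so the Nanobox router can
# reach the app from the public network.
bind 'tcp://0.0.0.0:8080'
```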
If you want to use a custom port, you'll need to set up a proxy that forwards from `8080` down to your custom port. We actually recommend this method. However, you still need to listen on `0.0.0.0` rather than `localhost` or `127.0.0.1`.
Nanobox's flexibility allows you to easily connect to externally hosted/managed data services such as S3 or RDS. Or you can choose to use officially supported data components and have them managed through your Nanobox platform.
Externally Managed Database
If you choose to use externally managed databases or services, you don't need to update your connection credentials unless those credentials are changing. Chances are you're populating those with environment variables. If that's the case, just be sure to add those variables to your environments.
Nanobox Data Component
When using officially supported data services, Nanobox will auto-generate environment variables for each required credential, using the component's ID.
For example, with the following data components in a `boxfile.yml`:
```yaml
data.postgres:
  image: nanobox/postgresql:9.5

data.redis:
  image: nanobox/redis:3.0
```
...the following environment variables will be generated:
```text
# Postgres Connection Variables
DATA_POSTGRES_HOST
DATA_POSTGRES_USER
DATA_POSTGRES_PASS

# Redis Connection Variable
DATA_REDIS_HOST
```
Note: For data services that require a database name, we create a default `gonano` database, but you can also create your own. Also, the port will always be the service's default port.
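As a sketch of how these generated variables might be consumed in your app, assuming the `data.postgres` component above, the default `gonano` database, and Postgres's default port (this helper is hypothetical, not part of Nanobox):

```ruby
# Build a Postgres connection URL from the Nanobox-generated environment
# variables for a data.postgres component. 'gonano' is the default
# database name, and 5432 is Postgres's default port.
def postgres_url(env = ENV)
  format(
    'postgres://%s:%s@%s:5432/gonano',
    env.fetch('DATA_POSTGRES_USER'),
    env.fetch('DATA_POSTGRES_PASS'),
    env.fetch('DATA_POSTGRES_HOST')
  )
end
```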
Update Database Connections
Using environment variables for service connections ensures your app is portable across environments. You'll need to update your application's database config to connect using the Nanobox environment variables.
In Rails, for example, you might update your `config/database.yml` to something similar to this:
```yaml
default: &default
  adapter: postgresql
  encoding: unicode
  pool: 5
  timeout: 5000
  host: <%= ENV['DATA_DB_HOST'] %>
  username: <%= ENV['DATA_DB_USER'] %>
  password: <%= ENV['DATA_DB_PASS'] %>
```
Heroku's recommended method for storing files that need to persist between deploys is using Amazon S3. You can do the same with Nanobox, but you also have another option with Nanobox storage components.
If you're going to stick with S3, just be sure to add the required auth credentials and your `S3_BUCKET_NAME` as environment variables.
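For instance, a small helper that pulls those credentials from the environment might look like this. The `AWS_*` variable names and the helper itself are assumptions for illustration; use whatever names your app already expects:

```ruby
# Hypothetical helper: assemble S3 settings from environment variables.
# The AWS_* variable names are assumed; match them to your own config.
def s3_config(env = ENV)
  {
    bucket: env.fetch('S3_BUCKET_NAME'),
    access_key_id: env.fetch('AWS_ACCESS_KEY_ID'),
    secret_access_key: env.fetch('AWS_SECRET_ACCESS_KEY')
  }
end
```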
Setup Your GCP Account
If you haven't already, create a GCP account. In your admin panel's left nav, go to the "IAM & admin" > "Service Accounts" section.
Create a new service account with at least the following roles enabled:
- Compute Instance Admin (v1)
- Compute Network Admin
- Compute Security Admin
- Service Account Actor
Select the "Furnish a new private key" option, save, and download the private key.
Add a New Provider to Your Nanobox Account
In your Nanobox dashboard, select Google Compute as your provider and click "Proceed."
Nanobox needs your GCP service email, service key, and project ID to authenticate with your GCP account and provision compute instances on your behalf. Paste in your key and click "Verify & Proceed."
Name your provider and choose a default region. The name is arbitrary and only meant to help you identify it in your list of provider accounts.
Launch a New App
Go to the home page of your Nanobox dashboard and click the "Launch New App" button. Select your GCP provider from the dropdown and choose the region in which you'd like to deploy your app.
Confirm and click "Let's Go!" Nanobox will order a Compute Engine instance under your GCP account. When the instance is up, Nanobox will provision the platform components necessary for your app to run:
- Load-Balancer: The public endpoint for your application. Routes and load-balances requests to web nodes.
- Monitor: Monitors the health of your server(s) and application components.
- Logger: Streams and stores your app's aggregated log stream.
- Message Bus: Sends app information to the Nanobox dashboard.
- Warehouse: Storage used for deploy packages, backups, etc.
Once all the platform components are provisioned and running, you're ready to deploy your app.
Stage Your App Locally
Nanobox provides "dry-run" functionality that simulates a full production deploy on your local machine. This step is optional, but recommended. If the app deploys successfully in a dry-run environment, it should work when deployed to your live environment.
```shell
nanobox deploy dry-run
```
More information about dry-run environments is available in the Dry-Run documentation.
Add Your New App as a Remote
From the root of your project directory, add your newly created app as a remote.
```shell
nanobox remote add app-name
```
This connects your local codebase to your live app. More information about the `remote` command is available in the Nanobox documentation.
Deploy to Your Live App
With your app added as a remote, you're ready to deploy.
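The deploy itself is a single command, run from the root of your project (the same CLI used for the dry-run above):

```shell
nanobox deploy
```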
Nanobox will compile and package your application code, send it up to your live app, provision all your app's components inside your live compute instance, network everything together, and BOOM! Your app will be live on GCP.
Manage & Scale
Once your app is deployed, Nanobox makes it easy to manage and scale your production infrastructure. Your Nanobox dashboard shows health metrics for all of your app's instances and containers, and your application logs are streamed to the dashboard and can also be tailed using the Nanobox CLI.
Although every app starts out on a single compute instance with containerized components, you can break components out into individual instances and/or scalable clusters through the Nanobox dashboard. Nanobox handles the deep DevOps stuff so you don't have to. Enjoy!
Moving can be a daunting task. Hopefully this guide has given you a good place to start. If you have any questions, please reach out!
Get in Touch