How to Deploy Python Applications to OVH with Nanobox
Python is "a programming language that lets you work quickly and integrate systems more effectively," as python.org puts it. Developers love it for its ease of use and flexibility. OVH delivers high-performance, secure cloud infrastructure across four continents.
In this article, I'm going to walk through deploying a Python application to OVH using Nanobox. Nanobox uses Docker to build local development and staging environments, as well as scalable, highly-available production environments on OVH.
Download Nanobox
Go ahead and create a Nanobox account and download Nanobox Desktop, the Nanobox CLI.
Setup Your Python Project
Whether you have an existing project or are starting from scratch, configuring it for Nanobox is simple.
Add a boxfile.yml
Nanobox uses the boxfile.yml to build and configure your app's environment both locally and in production. Create a boxfile.yml in the root of your project with the following:
run.config:
  engine: python
This will give you a bare-bones Python environment in which to work. By default, it will use the most recent version of Python (3.x), but if you haven't made the jump to Python 3 yet, you can specify your Python runtime in your boxfile.yml.
run.config:
  engine: python
  engine.config:
    runtime: python-2.7
Start the Local Dev Environment
With the boxfile.yml in place, you can fire up a virtualized local development environment. I recommend adding a DNS alias so the app will be easier to access from a browser.
# Add a convenient way to access the app from a browser
nanobox dns add local python.local
# Start the dev environment
nanobox run
Nanobox will provision a local development environment, mount your local codebase into the VM, load your app's dependencies (if a requirements.txt is present), then drop you into a console inside the VM.
Create Your App
If you have an existing app, you can skip this section. If not, go ahead and create a new app. As a basic example, I'm going to create a simple web.py "Hello Nanobox!" app.
# Install webpy
pip install web.py
# Freeze your requirements.txt
pip freeze > requirements.txt
For this example, I'll create an app.py in the root of my project with the following contents:
import web

urls = (
    '/(.*)', 'hello'
)

app = web.application(urls, globals())

class hello:
    def GET(self, name):
        if not name:
            name = 'Nanobox'
        return 'Hello, ' + name + '!'

# get the wsgi app from the web.py application object
# required for gunicorn
wsgiapp = app.wsgifunc()

if __name__ == "__main__":
    import sys; sys.argv.append('3000')
    app.run()
Notice that I've configured the app to run on port 3000 rather than 8080. This will be important when I add gunicorn and an Nginx proxy later.
Your project's current working directory is mounted into the /app directory in the VM, so any files written locally will propagate into the VM and vice versa.
Configure Your App to Run on 0.0.0.0
In order for requests to reach your Python app running with Nanobox, the app needs to listen on 0.0.0.0. Below is an example. If you're using my example, you don't need to do this, since web.py binds to 0.0.0.0 by default.
if __name__ == "__main__":
    app.run(host='0.0.0.0')
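To see what binding to all interfaces looks like without any framework, here's a minimal sketch using Python's standard-library WSGI server. The handler and the use of port 0 (so the OS picks a free port) are mine, purely for illustration, not from the original setup:

```python
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # Minimal WSGI handler, for demonstration only
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello, Nanobox!']

# Bind to 0.0.0.0 so requests from outside the container can reach the app;
# port 0 lets the OS pick a free port for this demonstration
server = make_server('0.0.0.0', 0, app)
host, port = server.server_address
server.server_close()
```

The same idea applies whatever server you use: the bind address must be 0.0.0.0, not 127.0.0.1.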
Run the App Locally
With your app configured to run on 0.0.0.0, you're ready to start it in your local dev environment. With my example, you'd just run:
python app.py
You'll then be able to access your running Python app at python.local:3000. The port may change depending on your application.
If your app needs a database, go ahead and exit out of the running app and the Nanobox console. This will shut down your VM and drop you back into your host OS.
Add a Database
When it comes to databases, you can pick your poison. Check out the Nanobox Guides to see which databases are officially supported. All you need to do is add a data component to your boxfile.yml with a Nanobox Docker image for your database of choice.
Below is an example boxfile.yml config for a Postgres database.
data.db:
  image: nanobox/postgresql:9.5
The next time you run nanobox run, Nanobox will build a containerized Postgres database in your local development environment.
Update Your Database Connection
When Nanobox spins up a data component, it generates environment variables for the necessary connection credentials. In the case of Postgres, Nanobox provides environment variables for the host, user, and password.
Python is pretty free-form when it comes to configuring a database connection. However you choose to configure yours, you should use the auto-generated environment variables.
import os
host = os.environ.get('DATA_DB_HOST')
user = os.environ.get('DATA_DB_USER')
passwd = os.environ.get('DATA_DB_PASS')
Nanobox also provides a default database named gonano for most data components, but you're welcome to create your own.
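However your framework consumes these values, the pattern is the same. Here's a hedged sketch (the db_settings helper is my own, not part of Nanobox) that gathers the generated credentials into a dict of keyword arguments:

```python
import os

# Hypothetical helper: collect the connection credentials Nanobox generates
# for a data.db component into one dict. The dbname default is Nanobox's
# auto-created database; everything else comes from the environment.
def db_settings(env=None):
    env = env if env is not None else os.environ
    return {
        'host': env.get('DATA_DB_HOST'),
        'user': env.get('DATA_DB_USER'),
        'password': env.get('DATA_DB_PASS'),
        'dbname': 'gonano',
    }

# Example with stand-in values, as they might appear inside a container
settings = db_settings({'DATA_DB_HOST': '10.0.0.3',
                        'DATA_DB_USER': 'nanobox',
                        'DATA_DB_PASS': 'secret'})
```

With Postgres, for instance, you could then connect with psycopg2.connect(**settings).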
Install Necessary Adapters
In order for Python to connect to your data service, you'll need to install the appropriate adapter. Using Postgres as an example, you'd need psycopg2, the Python-Postgres adapter. If it doesn't already exist in your requirements.txt, start your dev environment, drop into a console, and use pip to install the package.
# Start the dev environment and drop into a console
nanobox run
# Install psycopg2
pip install psycopg2
# Freeze your requirements.txt
pip freeze > requirements.txt
Prepare Your App for Deploy
Before you deploy your project, there are a few things you need to do to make sure everything runs properly in production.
Add a Web Component
When running your app locally, everything runs inside a code container in your local VM. When deploying, you need to tell Nanobox to create a publicly accessible web component and include nginx in your runtime. This is done in your boxfile.yml.
Include nginx in your project by adding it as an extra_package in the run.config section of your boxfile.yml. Your web component only needs one or more start commands, which start your production web service. I highly recommend using gunicorn behind an Nginx proxy.
run.config:
  engine: python
  extra_packages:
    - nginx

web.site:
  start:
    nginx: nginx -c /app/etc/nginx.conf
    python: gunicorn -c /app/etc/gunicorn.py app:wsgiapp
Install gunicorn
If you don't already have gunicorn in your requirements.txt, install it. From the root of your project:
# Start the local dev environment
nanobox run
# Install gunicorn
pip install gunicorn
# Freeze dependencies
pip freeze > requirements.txt
# Exit Nanobox
exit
Add Nginx & gunicorn Config Files
Create two files in your project: etc/nginx.conf and etc/gunicorn.py.
Note: The important settings in the config files below are the ports: the upstream server port in etc/nginx.conf and the bind port in etc/gunicorn.py. These must match the port on which your app runs. If your app runs on 8080, it won't work behind the Nginx proxy, because Nginx itself listens on 8080 and proxies upstream. It can't listen on and proxy to the same port.
worker_processes 1;
daemon off;

events {
    worker_connections 1024;
}

http {
    include /data/etc/nginx/mime.types;
    sendfile on;

    gzip on;
    gzip_http_version 1.0;
    gzip_proxied any;
    gzip_min_length 500;
    gzip_disable "MSIE [1-6]\.";
    gzip_types text/plain text/xml text/css
               text/comma-separated-values
               text/javascript
               application/x-javascript
               application/atom+xml;

    # Proxy upstream to the python process
    upstream python {
        server 127.0.0.1:3000;
    }

    # Configuration for Nginx
    server {
        # Listen on port 8080
        listen 8080;

        root /app/public;
        try_files $uri/index.html $uri @python;

        # Proxy connections to python
        location @python {
            proxy_pass http://python;
            proxy_redirect off;
            proxy_set_header Host $host;
        }
    }
}
# Server mechanics
bind = '0.0.0.0:3000'
backlog = 2048
daemon = False
pidfile = None
umask = 0
user = None
group = None
tmp_upload_dir = None
proc_name = None
# Logging
errorlog = '-'
loglevel = 'info'
accesslog = '-'
access_log_format = '%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"'
#
# Worker processes
#
# workers - The number of worker processes that this server
# should keep alive for handling requests.
#
# A positive integer generally in the 2-4 x $(NUM_CORES)
# range. You'll want to vary this a bit to find the best
# for your particular application's work load.
#
# worker_class - The type of workers to use. The default
# sync class should handle most 'normal' types of work
# loads. You'll want to read
# http://docs.gunicorn.org/en/latest/design.html#choosing-a-worker-type
# for information on when you might want to choose one
# of the other worker classes.
#
# A string referring to a 'gunicorn.workers' entry point
# or a python path to a subclass of
# gunicorn.workers.base.Worker. The default provided values
# are:
#
# egg:gunicorn#sync
# egg:gunicorn#eventlet - Requires eventlet >= 0.9.7
# egg:gunicorn#gevent - Requires gevent >= 0.12.2 (?)
# egg:gunicorn#tornado - Requires tornado >= 0.2
#
# worker_connections - For the eventlet and gevent worker classes
# this limits the maximum number of simultaneous clients that
# a single process can handle.
#
# A positive integer generally set to around 1000.
#
# timeout - If a worker does not notify the master process in this
# number of seconds it is killed and a new worker is spawned
# to replace it.
#
# Generally set to thirty seconds. Only set this noticeably
# higher if you're sure of the repercussions for sync workers.
# For the non sync workers it just means that the worker
# process is still communicating and is not tied to the length
# of time required to handle a single request.
#
# keepalive - The number of seconds to wait for the next request
# on a Keep-Alive HTTP connection.
#
# A positive integer. Generally set in the 1-5 seconds range.
#
workers = 1
worker_class = 'sync'
worker_connections = 1000
timeout = 30
keepalive = 2
spew = False
#
# Server hooks
#
# post_fork - Called just after a worker has been forked.
#
# A callable that takes a server and worker instance
# as arguments.
#
# pre_fork - Called just prior to forking the worker subprocess.
#
# A callable that accepts the same arguments as after_fork
#
# pre_exec - Called just prior to forking off a secondary
# master process during things like config reloading.
#
# A callable that takes a server instance as the sole argument.
#
def post_fork(server, worker):
    server.log.info("Worker spawned (pid: %s)", worker.pid)

def pre_fork(server, worker):
    pass

def pre_exec(server):
    server.log.info("Forked child, re-executing.")

def when_ready(server):
    server.log.info("Server is ready. Spawning workers")

def worker_int(worker):
    worker.log.info("worker received INT or QUIT signal")

    ## get traceback info
    import threading, sys, traceback
    id2name = dict([(th.ident, th.name) for th in threading.enumerate()])
    code = []
    for threadId, stack in sys._current_frames().items():
        code.append("\n# Thread: %s(%d)" % (id2name.get(threadId, ""), threadId))
        for filename, lineno, name, line in traceback.extract_stack(stack):
            code.append('File: "%s", line %d, in %s' % (filename, lineno, name))
            if line:
                code.append("  %s" % (line.strip()))
    worker.log.debug("\n".join(code))

def worker_abort(worker):
    worker.log.info("worker received SIGABRT signal")
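The comment block in the config suggests 2-4 workers per core. If you'd rather compute that than hard-code workers = 1, here's a small sketch (my own, not from the Nanobox or gunicorn docs) using the common (2 x cores) + 1 rule of thumb:

```python
import multiprocessing

# Rule-of-thumb worker count: (2 x cores) + 1.
# Falls back to the machine's CPU count when cores isn't given.
def suggested_workers(cores=None):
    cores = cores if cores is not None else multiprocessing.cpu_count()
    return cores * 2 + 1

# e.g. on a 2-core box:
workers = suggested_workers(2)  # 5
```

You could drop a function like this straight into etc/gunicorn.py and assign its result to workers; just remember your container's CPU allocation may differ from the host's.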
Alright! Now to the fun stuff!
Setup Your OVH Account
If you haven't already, create an OVH account.
Generate API Credentials
Generate API credentials by visiting one of OVH's token creation pages. Which page you visit depends on your location.
Europe & Africa
Everywhere Else
Input the email and password associated with your OVH account. Give your API key a name, description, validity timeframe, and full rights on each HTTP method by specifying /* as the allowed path for each. Then click "Create Keys".
Copy and store your Application Key, Application Secret, and Consumer Key. You'll need these later.
Order a New Cloud Project
Visit your OVH control panel and, under the "Cloud" section, click "Order" and select "Cloud Project." Agree to OVH's terms and conditions, enter a project name, provide a payment method, and click "Activate My Cloud Account".
Go to the home page of your new project and copy the project ID from the URL. It's just after project/. You will need this later.
Create a New Provider Account
In your Nanobox dashboard, go to the Hosting Accounts section of your account admin, click "Add Account", select OVH, and click "Proceed".
Enter the required credentials. If you're located in Europe or Africa, specify the European region; if located anywhere else, specify ca as the region.
Click "Verify & Proceed". Name your provider, select your default region, then click "Finalize/Create".
Launch a New App
Go to the home page of your Nanobox dashboard and click the "Launch New App" button. Select your OVH provider from the dropdown and choose the region in which you'd like to deploy your app.
Confirm and click "Let's Go!" Nanobox will order a server on OVH under your account. When the server is up, Nanobox will provision the platform components necessary for your app to run:
- Load-Balancer: The public endpoint for your application. Routes and load-balances requests to web nodes.
- Monitor: Monitors the health of your server(s) and application components.
- Logger: Streams and stores your app's aggregated log stream.
- Message Bus: Sends app information to the Nanobox dashboard.
- Warehouse: Storage used for deploy packages, backups, etc.
Once all the platform components are provisioned and running, you're ready to deploy your app.
Stage Your App Locally
Nanobox provides "dry-run" functionality that simulates a full production deploy on your local machine. This step is optional, but recommended. If the app deploys successfully in a dry-run environment, it will work when deployed to your live environment.
nanobox deploy dry-run
More information about dry-run environments is available in the Dry-Run documentation.
Deploy
Add Your New App as a Remote
From the root of your project directory, add your newly created app as a remote.
nanobox remote add app-name
This connects your local codebase to your live app. More information about the remote command is available in the Nanobox Documentation.
Deploy to Your Live App
With your app added as a remote, you're ready to deploy.
nanobox deploy
Nanobox will compile and package your application code, send it up to your live app, provision all your app's components inside your live server, network everything together, and voilà! Your app will be live.
Manage & Scale
Once your app is deployed, Nanobox makes it easy to manage and scale your production infrastructure. In your Nanobox dashboard you'll find health metrics for all your app's servers/containers. Your application logs are streamed in your dashboard and can be streamed using the Nanobox CLI.
Although every app starts out on a single server with containerized components, you can break components out into individual servers and/or scalable clusters through the Nanobox dashboard. Nanobox handles the deep DevOps stuff so you don't have to. Enjoy!