How to Deploy Django Applications to Scaleway with Nanobox

Django, for many, is the go-to framework for web applications. It's known for its clean design and its emphasis on rapid development. Scaleway provides high-performance virtual and bare-metal servers in multiple EU-based datacenters.

In this article, I'm going to walk through deploying a Django application to Scaleway using Nanobox. Nanobox uses Docker to build local development and staging environments, as well as scalable, highly-available production environments on Scaleway.

Download Nanobox

Go ahead and create a Nanobox account and download Nanobox Desktop, the Nanobox CLI.

Setup Your Django Project

Whether you have an existing Django project or are starting from scratch, the process of configuring it for Nanobox is the same.

Add a boxfile.yml

Nanobox uses the boxfile.yml to build and configure your app's environment both locally and in production. Create a boxfile.yml in the root of your project with the following:

run.config:
  engine: python
  extra_packages:
    - nginx

deploy.config:
  extra_steps:
    - python manage.py collectstatic --noinput --clear
  before_live:
    web.main:
      - python manage.py migrate --fake-initial

web.main:
  start:
    nginx: nginx -c /app/etc/nginx.conf
    django: gunicorn -c /app/etc/gunicorn.py app.wsgi

data.db:
  image: nanobox/postgresql:9.5

This includes everything Django needs to run. You may need to update a few items specific to your project, but in this walk-through, I'm going to use:

  • A Python runtime.
  • A web component running a gunicorn web server and an Nginx proxy.
  • A Postgres database.
  • Static asset collection and database migrations on deploy.

Start the Local Dev Environment

With the boxfile.yml in place, you can fire up a virtualized local development environment. I recommend adding a DNS alias just so the app will be easier to access from a browser.

# Add a convenient way to access the app from a browser
nanobox dns add local django.local

# Start the dev environment
nanobox run

Nanobox will provision a local development environment, spin up a containerized Postgres database, mount your local codebase into the VM, load your app's dependencies, then drop you into a console inside the VM.

Generate a New Django Project

If you have an existing Django project, you can skip this section. To generate a new Django project from scratch, run the following from inside the Nanobox console:

# Install django so we can use it to generate our application
pip install Django

# Freeze the pip modules into the requirements.txt
pip freeze > requirements.txt

# cd into the /tmp dir to create an app
cd /tmp

# Generate the django app
django-admin startproject app

# cd back into the /app dir
# Enable the hidden files shell option
# Copy the generated app into the project dir
cd -
shopt -s dotglob
cp -a /tmp/app/* .

Your project's current working directory is mounted into the /app directory in the VM, so all the Django files written there will propagate back down to your machine's filesystem and vice versa.

Update Django's Allowed Hosts

Django whitelists domains via the ALLOWED_HOSTS setting in your app/settings.py file. Add the DNS alias you added earlier to this list:

ALLOWED_HOSTS = ['django.local']

Update the Database Connection

When Nanobox spins up a Postgres database, it generates environment variables for the necessary connection credentials. Update the database connection in your app/settings.py:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'gonano',
        'USER': os.environ.get('DATA_DB_USER'),
        'PASSWORD': os.environ.get('DATA_DB_PASS'),
        'HOST': os.environ.get('DATA_DB_HOST'),
        'PORT': '',
    }
}
Install psycopg2

If you don't already have psycopg2, the Python-Postgres adapter, in your requirements.txt, you'll need to install it. From the /app directory in your Nanobox console:

# Install your psycopg2
pip install psycopg2

# Freeze your requirements.txt
pip freeze > requirements.txt

Run Data Migrations

Run a data migration for any remaining INSTALLED_APPS. You'll need to decide which apps you want enabled by default; you can disable apps by commenting them out in the INSTALLED_APPS section of your app/settings.py file. Unless you've commented out all of the INSTALLED_APPS, run any pending data migrations:

python manage.py migrate

Run Django Locally

With your ALLOWED_HOSTS and database connection updated, you're ready to start Django in your local dev environment. When running web apps inside Nanobox, they should listen on 0.0.0.0 so they're reachable from outside the container. From the /app directory in your Nanobox console:

python manage.py runserver 0.0.0.0:8000

You'll then be able to access your running Django app at django.local:8000.

You don't need to use Nginx or gunicorn when running Django locally. Those will just be used when the app is deployed.

Whenever you exit out of the Nanobox console, it'll shut your VM down and drop you back into your host OS.

Prepare Django for Deploy

Before you deploy the project, you need to make sure gunicorn is installed and include Nginx and gunicorn config files to use in production.

Install gunicorn

If you don't already have gunicorn in your requirements.txt, you'll need to install it. From the root of your project:

# Start the local dev environment
nanobox run

# Install gunicorn
pip install gunicorn

# Freeze dependencies
pip freeze > requirements.txt

# Exit Nanobox
exit

Add Nginx & gunicorn Config Files

Create two files in your project, etc/nginx.conf and etc/gunicorn.py. First, etc/nginx.conf:

worker_processes 1;
daemon off;

events {
    worker_connections 1024;
}

http {
    include /data/etc/nginx/mime.types;
    sendfile on;

    gzip              on;
    gzip_http_version 1.0;
    gzip_proxied      any;
    gzip_min_length   500;
    gzip_disable      "MSIE [1-6]\.";
    gzip_types        text/plain text/xml text/css;

    # Proxy upstream to the gunicorn process
    upstream django {
        # gunicorn binds here (see etc/gunicorn.py)
        server 127.0.0.1:8000;
    }

    # Configuration for Nginx
    server {

        # Listen on port 8080
        listen 8080;

        # Settings to serve static files
        location ^~ /static/  {
            root /app/;
        }

        # Serve a static file (ex. favico)
        # outside /static directory
        location = /favico.ico  {
            root /app/;
        }

        # Proxy connections to django
        location / {
            proxy_pass         http://django;
            proxy_redirect     off;
            proxy_set_header   Host $host;
        }
    }
}

And etc/gunicorn.py:

# Server mechanics
bind = '127.0.0.1:8000'  # matches the nginx upstream
backlog = 2048
daemon = False
pidfile = None
umask = 0
user = None
group = None
tmp_upload_dir = None
proc_name = None

# Logging
errorlog = '-'
loglevel = 'info'
accesslog = '-'
access_log_format = '%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"'

# Worker processes
#   workers - The number of worker processes that this server
#       should keep alive for handling requests.
#       A positive integer generally in the 2-4 x $(NUM_CORES)
#       range. You'll want to vary this a bit to find the best
#       for your particular application's work load.
#   worker_class - The type of workers to use. The default
#       sync class should handle most 'normal' types of work
#       loads. You'll want to read the gunicorn design docs
#       for information on when you might want to choose one
#       of the other worker classes.
#       A string referring to a 'gunicorn.workers' entry point
#       or a python path to a subclass of
#       gunicorn.workers.base.Worker. The default provided values
#       are:
#           egg:gunicorn#sync
#           egg:gunicorn#eventlet   - Requires eventlet >= 0.9.7
#           egg:gunicorn#gevent     - Requires gevent >= 0.12.2 (?)
#           egg:gunicorn#tornado    - Requires tornado >= 0.2
#   worker_connections - For the eventlet and gevent worker classes
#       this limits the maximum number of simultaneous clients that
#       a single process can handle.
#       A positive integer generally set to around 1000.
#   timeout - If a worker does not notify the master process in this
#       number of seconds it is killed and a new worker is spawned
#       to replace it.
#       Generally set to thirty seconds. Only set this noticeably
#       higher if you're sure of the repercussions for sync workers.
#       For the non sync workers it just means that the worker
#       process is still communicating and is not tied to the length
#       of time required to handle a single request.
#   keepalive - The number of seconds to wait for the next request
#       on a Keep-Alive HTTP connection.
#       A positive integer. Generally set in the 1-5 seconds range.

workers = 1
worker_class = 'sync'
worker_connections = 1000
timeout = 30
keepalive = 2

spew = False

# Server hooks
#   post_fork - Called just after a worker has been forked.
#       A callable that takes a server and worker instance
#       as arguments.
#   pre_fork - Called just prior to forking the worker subprocess.
#       A callable that accepts the same arguments as after_fork
#   pre_exec - Called just prior to forking off a secondary
#       master process during things like config reloading.
#       A callable that takes a server instance as the sole argument.

def post_fork(server, worker):
    server.log.info("Worker spawned (pid: %s)", worker.pid)

def pre_fork(server, worker):
    pass

def pre_exec(server):
    server.log.info("Forked child, re-executing.")

def when_ready(server):
    server.log.info("Server is ready. Spawning workers")

def worker_int(worker):
    worker.log.info("worker received INT or QUIT signal")

    ## get traceback info
    import threading, sys, traceback
    id2name = dict([(th.ident, th.name) for th in threading.enumerate()])
    code = []
    for threadId, stack in sys._current_frames().items():
        code.append("\n# Thread: %s(%d)" % (id2name.get(threadId, ""), threadId))
        for filename, lineno, name, line in traceback.extract_stack(stack):
            code.append('File: "%s", line %d, in %s' % (filename, lineno, name))
            if line:
                code.append("  %s" % (line.strip()))
    worker.log.debug("\n".join(code))

def worker_abort(worker):
    worker.log.info("worker received SIGABRT signal")
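The workers = 1 value above is deliberately conservative for a single small server. The config comments suggest 2-4 workers per core; one common way to size that (the "2n + 1" heuristic is an assumption here, not a gunicorn default) is:

```python
import multiprocessing

def suggested_workers(cores=None):
    # Rule of thumb from the comments above: keep workers in the
    # 2-4 x cores range. "2n + 1" is one popular choice within it.
    if cores is None:
        cores = multiprocessing.cpu_count()
    return cores * 2 + 1
```

If you scale your web component to a larger Scaleway instance later, bump workers accordingly and redeploy.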

Update Your Django Asset Path for Nginx

To allow Nginx to serve your static assets, set STATIC_ROOT in your app/settings.py:

STATIC_ROOT = os.path.join(BASE_DIR, 'static/')
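As a sanity check, with BASE_DIR pointing at the project root (/app inside the container, an assumption based on where Nanobox mounts your code), the collected assets land exactly where the nginx location ^~ /static/ block looks for them:

```python
import os

BASE_DIR = '/app'  # hypothetical: the project root inside the container
STATIC_ROOT = os.path.join(BASE_DIR, 'static/')

# `python manage.py collectstatic` copies every app's static files into
# STATIC_ROOT; nginx's `root /app/;` then maps /static/... requests onto it.
```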

Alright! Now to the fun stuff!

Setup Your Scaleway Account

If you haven't already, create a Scaleway account. In your Scaleway dashboard, click on your user in the upper-right corner and go to "Credentials".

Account Credentials

Once there, click the "Create new token" button.

Create New Token

Copy and store your Access Key and your new Token.

Copy Access Key & Token

Create a New Provider Account

In your Nanobox dashboard, go to the Hosting Accounts section of your account admin and click "Add Account", select Scaleway, and click "Proceed".

Add a New Scaleway Provider

Enter your Scaleway access key and API token.

Enter Scaleway Auth Credentials

Click "Verify & Proceed". Name your provider, select your default region, then click "Finalize/Create".

Name Your Provider & Select a Default Region

Launch a New App

Go to the home page of your Nanobox dashboard and click the "Launch New App" button. Select your Scaleway provider from the dropdown and choose the region in which you'd like to deploy your app.

Select your Scaleway provider

Confirm and click "Let's Go!" Nanobox will order a server on Scaleway under your account. When the server is up, Nanobox will provision the platform components necessary for your app to run:

  • Load-Balancer: The public endpoint for your application. Routes and load-balances requests to web nodes.
  • Monitor: Monitors the health of your server(s) and application components.
  • Logger: Streams and stores your app's aggregated log stream.
  • Message Bus: Sends app information to the Nanobox dashboard.
  • Warehouse: Storage used for deploy packages, backups, etc.

Once all the platform components are provisioned and running, you're ready to deploy your app.

Stage Your App Locally

Nanobox provides "dry-run" functionality that simulates a full production deploy on your local machine. This step is optional, but recommended. If the app deploys successfully in a dry-run environment, it will work when deployed to your live environment.

nanobox deploy dry-run

More information about dry-run environments is available in the Dry-Run documentation.


Add Your New App as a Remote

From the root of your project directory, add your newly created app as a remote.

nanobox remote add app-name

This connects your local codebase to your live app. More information about the remote command is available in the Nanobox Documentation.

Deploy to Your Live App

With your app added as a remote, you're ready to deploy.

nanobox deploy

Nanobox will compile and package your application code, send it up to your live app, provision all your app's components inside your live server, network everything together, and voilà! Your app will be live.

Manage & Scale

Once your app is deployed, Nanobox makes it easy to manage and scale your production infrastructure. In your Nanobox dashboard you'll find health metrics for all your app's servers and containers. Your application logs are aggregated in your dashboard and can be streamed live using the Nanobox CLI.

Although every app starts out on a single server with containerized components, you can break components out into individual servers and/or scalable clusters through the Nanobox dashboard. Nanobox handles the deep DevOps stuff so you don't have to. Enjoy!

Posted in Django, Python, Scaleway, Deployment