In this tutorial we will create a new Django project using Docker and PostgreSQL.

Docker is a tool for running applications inside isolated environments called containers. The easiest way to think of it is as a large virtual environment that contains everything needed for our Django project: dependencies, database, caching services, and any other tools required.

A big reason to use Docker is that it completely removes any issues around local development setup. Instead of worrying about which software packages are installed or running a local database alongside a project, you simply run a Docker image of the entire project. Best of all, that image can be shared with teammates, which makes team development much simpler.

Install Docker

The first step is to install the desktop Docker app for your local machine.

This might take some time to download. It’s a big file. We can go ahead and install and configure our Django project locally while we’re waiting.

Django project

We will use the Message Board app from Django for Beginners. It provides the code for a basic message board app using SQLite that can be updated in the admin.

From the command line, navigate to your Desktop and clone the repo.

$ cd ~/Desktop
$ git clone https://github.com/wsvincent/djangoforbeginners.git
$ cd djangoforbeginners
$ cd ch4-message-board-app

Then install the software packages specified by Pipenv and start a new shell.

$ pipenv install
$ pipenv shell

The actual name of your virtual environment will be (ch4-message-board-app-XXX), where XXX is a string of random characters. I'll shorten this to (mb) going forward.

Make sure to migrate the database to create its initial tables.

(mb) $ ./manage.py migrate

If you now use the ./manage.py runserver command, you can see a working version of our application at http://localhost:8000.
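
That is, still inside the Pipenv shell:

(mb) $ ./manage.py runserver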

Docker (again)

Hopefully Docker is done installing by this point. To confirm the installation was successful, quit the local server with Control+c and then type docker run hello-world on the command line. You should see a response like this:

$ docker run hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://cloud.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/
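
Since we'll lean on Docker Compose later in this tutorial, it's also worth confirming that it came bundled with the desktop app, which it does by default:

$ docker-compose --version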

Images and Containers

There are two important concepts to grasp in Docker: images and containers. An image is the list of instructions for a project (software packages, where the code is located, and so on) and a container is the actual "runtime instance of an image." In other words, an image describes what will happen and a container is what actually runs.
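
You can already see both on your own machine: the hello-world test above pulled down an image and then ran a short-lived container from it. Listing them makes the distinction concrete:

$ docker images
$ docker ps -a

The first command lists the images stored locally, while the second lists all containers, including ones that have already exited.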

To configure Docker images and containers we use two files: Dockerfile and docker-compose.yml.

The Dockerfile contains the list of instructions for building the image, in other words, what actually goes on in the environment of the container.

Create a new Dockerfile.

(mb) $ touch Dockerfile

Then add the following code in your text editor.

FROM python:3.6

ENV PYTHONUNBUFFERED 1

COPY . /code/
WORKDIR /code/

RUN pip install pipenv
RUN pipenv install --system

EXPOSE 8000

On the top line we're using the official Docker image for Python 3.6. Because the python:3.6 tag always points to the latest 3.6.x release, we automatically get bug fixes and security updates.
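
If you're curious which 3.6.x release that currently is, you can ask the image directly (this will pull the image if you don't already have it locally):

$ docker run --rm python:3.6 python --version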

Next we set the environment variable PYTHONUNBUFFERED so that Python output is sent straight to the console instead of being buffered, which means log output shows up in real time the way we're familiar with.

The next two lines copy the code from our current directory, represented by ., into the directory /code/ in our Docker image. We also set WORKDIR to /code/ so that future commands, such as manage.py, run from that directory and we don't need to remember where in the image our code actually lives.

The RUN command lets us run commands in Docker just as we would on the command line. We first install Pipenv and then use it to install the packages specified in our Pipfile. Note that it's important to add the --system flag so the packages are installed system-wide in the Docker container rather than inside a nested virtual environment; the container itself already provides the isolation we need.

Finally we expose port 8000 so our Docker container can serve the application on that port, just like a normal local Django development server.

We can't run a Docker container until it has an image, so let's build one.

$ docker build .

You will see a lot of output if successful!
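
As an optional aside, you can give the image a memorable name with the -t flag; the tag mb below is just an arbitrary example. The second command then runs pip freeze inside a throwaway container, a quick way to confirm that the packages from our Pipfile really were installed system-wide:

$ docker build -t mb .
$ docker run --rm mb pip freeze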

Next we need a new docker-compose.yml file. This tells Docker Compose how to run the containers that make up our project: the web application and its database.

(mb) $ touch docker-compose.yml

Then type in the following code.

version: '3'

services:
  db:
    image: postgres:10.1
    volumes:
      - postgres_data:/var/lib/postgresql/data/
  web:
    build: .
    command: bash -c "python /code/manage.py migrate --noinput && python /code/manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db

volumes:
  postgres_data:

On the top line we specify that we're using version 3 of the Compose file format, the most recent version.

Under db for the database we use the official Docker image for Postgres 10.1 and a named volume, postgres_data, so the database's data persists even when the container is destroyed and recreated.

For web we're specifying how the web service will run. First Compose builds an image from the current directory and then runs our command: apply migrations without prompting for input, then start up the server at 0.0.0.0:8000. We use volumes to mount our current directory into the Docker container at /code/, so local code changes show up inside the container immediately. The ports config maps our own port 8000 to port 8000 in the Docker container. And finally depends_on says that the db service should be started before the web service.

The last section, volumes, is needed because Compose requires named volumes such as postgres_data to be declared in a top-level volumes key.
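
Before moving on, it's a good idea to validate the file. The docker-compose config command parses docker-compose.yml and prints the resolved configuration, so indentation mistakes or typos surface here rather than at runtime:

(mb) $ docker-compose config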

Docker is all set!

Update to PostgreSQL

We need to update our Message Board app to use PostgreSQL instead of SQLite. First install psycopg2 for our database bindings to PostgreSQL.

(mb) $ pipenv install psycopg2

Then update the settings.py file to specify we’ll be using PostgreSQL not SQLite.

# settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'postgres',
        'USER': 'postgres',
        'HOST': 'db', # set in docker-compose.yml
        'PORT': 5432, # default postgres port
    }
}

Now migrate our database, this time within Docker.

(mb) $ docker-compose run web python /code/manage.py migrate --noinput

Also, since the Message Board app relies on the admin, create a superuser within Docker. Fill out the prompts after running the command below.

(mb) $ docker-compose run web python /code/manage.py createsuperuser

Run Docker

We're finally ready to run Docker itself! The first time you execute docker-compose up it might take a while, as Docker has to download all the required images. But it caches this information, so future spin-ups will be much faster.

Type the following command:

(mb) $ docker-compose up

We can confirm it works by navigating to http://127.0.0.1:8000/ where you’ll see the same homepage as before.
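
To double-check from the command line as well, open a second terminal window, navigate to the project directory, and list the project's containers; you should see both the db and web services up:

$ docker-compose ps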

Now go to http://127.0.0.1:8000/admin and log in. You can add new posts and then see them on the homepage just as described in Django for Beginners.

When you’re done, don’t forget to close down your Docker container.

(mb) $ docker-compose down
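
As a final optional tip, docker-compose can also run in the background so the server doesn't occupy a terminal window: -d detaches the containers, logs shows their output afterwards, and down still stops everything. Stopping the containers this way leaves the named postgres_data volume intact, so your posts survive between runs; docker volume ls will show it, prefixed with the project directory name.

(mb) $ docker-compose up -d
(mb) $ docker-compose logs
(mb) $ docker-compose down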

There are a lot of good tutorials on Docker on the web, but fewer on using Django and Docker together. I recommend the following links for further study: