In this tutorial we will create a new Django project using Docker and PostgreSQL. Django ships with built-in SQLite support, but even for local development you are better off using a “real” database like PostgreSQL that matches what is in production.

It’s possible to run PostgreSQL locally with a standalone installer; however, the preferred choice among many developers today is to use Docker, a tool for creating isolated environments. The easiest way to think of it is as a large virtual environment that contains everything needed for our Django project: dependencies, database, caching services, and any other tools needed.

A big reason to use Docker is that it completely removes any issues around local development setup. Instead of worrying about which software packages are installed or running a local database alongside a project, you simply run a Docker image of the entire project. Best of all, this image can be shared with teammates, which makes team development much simpler.

Install Docker

The first step is to install the desktop Docker app for your local machine:

If you are on Linux there are multiple download options available. You will also need to install Docker Compose which is automatically included with Mac/Windows downloads but not on Linux. You can do this with sudo pip install docker-compose after your Docker installation is complete.

The initial download of Docker might take some time. It’s a big file. We can go ahead and install and configure our Django project locally while we’re waiting.

Django project

We will use the Message Board app from Django for Beginners. It provides the code for a basic message board app using SQLite that can be updated in the admin.

Create a new directory on your Desktop and clone the repo into it.

$ cd ~/Desktop
$ git clone
$ cd djangoforbeginners
$ cd ch4-message-board-app

Then install the software packages specified by Pipenv and start a new shell.

$ pipenv install
$ pipenv shell

The actual name of your virtual environment will now be (ch4-message-board-app-XXX) where the XXX will be random. I’ll shorten this to (mb) going forward.

Make sure to migrate our database after these changes.

(mb) $ python manage.py migrate

If you now use the python manage.py runserver command you can see a working version of our application at http://localhost:8000.

Docker (again)

Hopefully Docker is done installing by this point. To confirm the installation was successful quit the local server with Control+c and then type docker run hello-world on the command line. You should see a response like this:

$ docker run hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:

For more examples and ideas, visit:

Images and Containers

There are two important concepts to grasp in Docker: images and containers.

  • Image: the list of instructions for all the software packages in your project
  • Container: a running copy of the image, aka a runtime instance of the image

In other words, an image describes what will happen and a container is what actually runs.
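A loose analogy in Python may help here (this is just an illustration, not part of the project code): an image is like a class, a recipe that describes what to build, while a container is like an instance, a running copy created from that recipe.

```python
# Illustrative analogy only: Image is the recipe, Container is a running copy.
class Image:
    def __init__(self, instructions):
        # The "Dockerfile" of our pretend image
        self.instructions = instructions

    def run(self):
        # Starting a container does not modify the image itself
        return Container(self)

class Container:
    def __init__(self, image):
        self.image = image
        self.running = True

web_image = Image(["FROM python:3.6-slim", "COPY . /code/"])
container_a = web_image.run()
container_b = web_image.run()  # many containers can run from one image
print(container_a is container_b)  # False
```

Just as one class can produce many instances, one image can produce many independent containers.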

To configure Docker images and containers we use two files: Dockerfile and docker-compose.yml.

The Dockerfile contains the list of instructions for the image, i.e., what actually goes on in the environment of the container.

Create a new Dockerfile file.

(mb) $ touch Dockerfile

Then add the following code in your text editor.

# Pull base image
FROM python:3.6-slim

# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

# Set work directory
WORKDIR /code

# Install dependencies
RUN pip install --upgrade pip
RUN pip install pipenv
COPY ./Pipfile /code/Pipfile
RUN pipenv install --deploy --system --skip-lock --dev

# Copy project
COPY . /code/

On the top line we’re using an official Docker image for Python 3.6. We could specify the exact version number such as 3.6.5 but by using just 3.6 we get bug fixes and security updates for the latest Python 3.6.x version automatically.

Next we set two environment variables. PYTHONUNBUFFERED ensures our console output is not buffered by Docker, so it looks familiar. PYTHONDONTWRITEBYTECODE means Python won’t write .pyc files inside the container, which we also don’t want.
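These variables behave the same way outside Docker, so you can sanity-check one of them locally. This sketch launches a child Python interpreter with PYTHONDONTWRITEBYTECODE set, then confirms the interpreter picked it up:

```python
import os
import subprocess
import sys

# Run a child interpreter with the same env var the Dockerfile sets,
# and confirm Python reports bytecode writing as disabled.
env = dict(os.environ, PYTHONDONTWRITEBYTECODE="1")
result = subprocess.check_output(
    [sys.executable, "-c", "import sys; print(sys.dont_write_bytecode)"],
    env=env,
    text=True,
).strip()
print(result)  # True
```

The same check without the variable set would print False, which is why we bake it into the image rather than rely on remembering a flag.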

The next line sets the working directory to /code. This means future commands run against /code, so we don’t have to remember exactly where on Docker our code is actually located.

Then we install our dependencies, making sure we have the latest version of pip, installing pipenv, copying our local Pipfile onto Docker and then running it to install our dependencies. The RUN command lets us run commands in Docker just as we would on the command line.

We can’t run a Docker container until it has an image, so let’s build the image for the first time.

$ docker build .

You will see a lot of output if successful!

Next we need a new docker-compose.yml file. This tells Docker how to run our Docker container.

(mb) $ touch docker-compose.yml

Then type in the following code.

version: '3.6'

services:
  db:
    image: postgres:10.1-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
  web:
    build: .
    command: sh -c "python /code/manage.py migrate --noinput && python /code/manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/code
    ports:
      - 8000:8000
    environment:
      - SECRET_KEY=changemeinprod
    depends_on:
      - db

volumes:
  postgres_data:

On the top line we’re using the most recent version of Compose which is “3.6”.

Under db for the database we want the Docker image for Postgres 10.1 and use volumes to tell Compose to persist the database data in a named volume, postgres_data, so it survives container restarts.

For web we’re specifying how the web service will run. First Compose builds an image from the current directory, automatically runs migrations while hiding their output, and then starts up the development server. We use volumes to mount our code into the Docker container at /code/. The ports config maps our own port 8000 to port 8000 in the Docker container, which is the default Django port. We use environment to set a SECRET_KEY, and finally depends_on says the db service should start before our web service runs.

The last section, volumes, is there because Compose has a rule that named volumes must be listed in a top-level volumes key.

Docker is all set!

Update to PostgreSQL

We need to update our Message Board app to use PostgreSQL instead of SQLite. First install psycopg2 for our database bindings to PostgreSQL.

(mb) $ pipenv install psycopg2

Then update the settings.py file to specify we’ll be using PostgreSQL, not SQLite.

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'postgres',
        'USER': 'postgres',
        'HOST': 'db',  # set in docker-compose.yml
        'PORT': 5432,  # default postgres port
    }
}
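A common refinement, not part of the original app, is to choose the database from an environment variable so the same settings work both inside Docker (where the host is the db service) and outside it. A sketch, where the DOCKER variable and the helper name are hypothetical conventions:

```python
import os

# Hypothetical helper: pick database settings based on an environment
# variable so settings.py works inside Docker (host "db") and locally.
def database_config():
    if os.environ.get("DOCKER") == "1":
        return {
            "ENGINE": "django.db.backends.postgresql",
            "NAME": "postgres",
            "USER": "postgres",
            "HOST": "db",  # the service name from docker-compose.yml
            "PORT": 5432,
        }
    # Fall back to SQLite for quick local runs without Docker
    return {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": "db.sqlite3",
    }

os.environ["DOCKER"] = "1"
print(database_config()["HOST"])  # db
```

In docker-compose.yml you would then add DOCKER=1 under the web service’s environment key, alongside SECRET_KEY.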

We should migrate our database at this point on Docker.

(mb) $ docker-compose run web python /code/manage.py migrate --noinput

Also since the Message Board app requires using the admin, create a superuser on Docker. Fill out the prompts after running the command below.

(mb) $ docker-compose run web python /code/manage.py createsuperuser

Run Docker

We’re finally ready to run Docker itself! The first time you execute the command might take a while as Docker has to download all the required content. But it will cache this information so future spinups will be much faster.

Type the following command:

(mb) $ docker-compose up -d --build

We can confirm it works by navigating to http://localhost:8000 where you’ll see the same homepage as before.

Now go to the admin at http://localhost:8000/admin/ and log in. You can add new posts and then see them on the homepage just as described in Django for Beginners.

When you’re done, don’t forget to close down your Docker container.

(mb) $ docker-compose down

Quick Review

Here is a short version of the terms and concepts we’ve covered in this post:

  • Image: the “definition” of your project
  • Container: what your project actually runs in (an instance of the image)
  • Dockerfile: defines what your image looks like
  • docker-compose.yml: a YAML file that takes the Dockerfile and adds additional instructions for how our Docker container should behave

We use the Dockerfile to tell Docker how to build our image. Then we run our actual project within a container. The docker-compose.yml file provides additional instructions for how our Docker container should behave.

There are many good tutorials on Docker on the web, though fewer on using Django and Docker together. I recommend the following links for further study:

Looking for a way to go from beginner to intermediate Django developer? Check out my books Django for Beginners and REST APIs with Django.