Using Docker we can deploy a Python project along with all its dependencies in completely isolated containers, exposing only the ports strictly necessary to run our apps. The only packages we need to install on the server are Docker and SSH; this way we keep our production server in mint condition.
Here you will learn how to deploy your Python project to a production server using Docker, and how to configure and link together all of these components:
- Pyramid/Pylons (Python Framework)
- Postgres (Main database)
- Cassandra (Non SQL database)
- Elasticsearch (Search engine)
- Redis (Caching system)
- Nginx (Proxy server)
- uWSGI (App Server)
This is just a Docker tutorial; for more involved deployments, where more than one production server is involved, you should combine Docker with orchestration software such as Ansible, Puppet, Chef or Salt.
For this guide I am using a MacBook Pro (OS X El Capitan) and an Ubuntu Server (Ubuntu Trusty 14.04.3) running in VirtualBox. We will use the Ubuntu Server installed in VirtualBox as a fake production server, and we will connect to it through SSH to perform the deployments.
Installing VirtualBox + Ubuntu server
You will find here all the information regarding the VirtualBox installation.
Create Virtual machine
After installing VirtualBox on your system, you should create a new virtual machine with a Linux distribution (I am using Ubuntu 64-bit).
Configure and install Linux in your Virtual machine
Once the virtual machine has been created, you need to download a Linux distribution (I got Ubuntu Server 14.04.3). Then go to the VirtualBox menu: Settings -> Storage, click on the CD icon and browse for the Linux image you have downloaded. You should also check the “Live CD/DVD” option. This way it will use the Linux image as a Live CD, and the next time you start the virtual machine you should get the Linux installation screen. Go ahead and install Linux.
Note: If after the installation you still get the installation screen, go to Menu -> System -> Boot Order and place the hard drive before the CD. Otherwise it will try to boot from the CD every time you run the virtual machine.
Configure VirtualBox to allow SSH connection
We just need to open our Ubuntu’s VirtualBox settings, go to Network -> Port Forwarding, and add the following rule:
Name | Protocol | Host IP | Host Port | Guest IP | Guest Port |
---|---|---|---|---|---|
SSH | TCP | | 3022 | | 22 |
We will also need to install the SSH server in our virtual machine. Just boot the Ubuntu server normally and install the required packages. Run apt-get update first to fetch the latest versions of all packages:
sudo apt-get update
sudo apt-get install openssh-server
Now we should be able to connect to the virtual machine using port 3022 and the user created during the Ubuntu installation.
ssh -p 3022 my_user_name@127.0.0.1
Install Docker in the VirtualBox server
To install Docker in the Ubuntu server, please follow these instructions from the Docker documentation.
Once it is installed, add your user to the docker group so you don't need to write sudo every single time you use docker:
sudo usermod -aG docker my_user_name
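The group change only applies to new sessions, so log out and back in (or start a new shell with newgrp) before continuing. A quick way to check that it worked:

# Pick up the new "docker" group without rebooting (alternatively, log out and back in).
newgrp docker
# If the group change worked, this runs without sudo and without permission errors.
docker info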
Create folders for releases and docker files
We will need to add some folders for storing the releases and the docker files. The docker files will be stored in a folder called docker inside the data folder; you will understand why afterwards. I created these folders in /var/data/, but the location is really up to you.
sudo mkdir /var/data
sudo mkdir /var/data/releases
sudo mkdir /var/data/docker
sudo mkdir /var/data/docker/elasticsearch
sudo mkdir /var/data/docker/cassandra
sudo mkdir /var/data/docker/postgres
sudo mkdir /var/data/docker/postgres/data
sudo mkdir /var/data/docker/nginx
sudo mkdir /var/data/docker/redis
sudo mkdir /var/data/docker/app
Now we change the group of these folders to docker and add write permissions for the group members. This way we will be able to use the scp command to create files inside these folders without getting permission errors.
sudo chgrp -R docker /var/data
sudo chmod -R 0775 /var/data
This sets the setgid bit, so the docker group is applied to all new files created within the directory:
sudo chmod -R g+s /var/data
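If you want to verify that the setgid bit behaves as expected, you can create a throwaway file and check its group (an optional sanity check, assuming your user is already in the docker group as set up above):

touch /var/data/releases/permissions_test
ls -l /var/data/releases/permissions_test   # the group column should show "docker"
rm /var/data/releases/permissions_test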
Exporting Python project to the Production server
In this tutorial I am using a Pylons/Pyramid project, but the release process should be fairly similar for Bottle, Flask, Django, Falcon or any other framework. I am assuming that everybody knows how to create releases with Git, so I will try to be brief here: I use the Gitflow workflow and create a tag for each release in the master branch. All the dependencies of the project should be defined in the requirements.txt file, which you can generate in the root folder of the project by running pip freeze:
pip freeze > requirements.txt
So make sure this file is up to date in the release branch before going any further.
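For reference, a release tag like the 1.0.0 used below is typically created roughly like this under the Gitflow workflow (branch and tag names here are just illustrative):

git checkout master
git merge --no-ff release/1.0.0
git tag -a 1.0.0 -m "Release 1.0.0"
git push origin master --tags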
Exporting App
Using git archive we can export our app as a gzip file. In this case I am exporting the project using the tag 1.0.0:
git archive 1.0.0 --format=tar | gzip > /tmp/my_app.tar.gz
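If you want to double-check what went into the archive before shipping it, you can list its contents:

tar -tzf /tmp/my_app.tar.gz | head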
Copy the app to the production server:
scp -P 3022 /tmp/my_app.tar.gz your_user_name@127.0.0.1:/var/data/releases/1.0.0/my_app.tar.gz
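Note that scp will not create the release directory for you; if /var/data/releases/1.0.0 does not exist yet on the server, create it first (same connection details as above):

ssh -p 3022 your_user_name@127.0.0.1 "mkdir -p /var/data/releases/1.0.0"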
Exporting database
We also need to export the database data; with postgres this can be achieved easily like this:
# Exporting all database data:
pg_dump my_db > /tmp/db_import.psql
Now we send the file to the production server:
scp -P 3022 /tmp/db_import.psql your_user_name@127.0.0.1:/var/data/docker/postgres/data/db_import.psql
Project, dependencies and database are now in the production server, so it is time to use Docker.
Creating docker files for the App
Now it’s time to connect to the production server and start creating our docker files.
ssh -p 3022 my_user_name@127.0.0.1
The folder /var/data/docker will be in charge of building our releases. It contains 6 folders: elasticsearch, nginx, app, postgres, cassandra, redis. Each of these folders will contain at least one docker configuration file which will take care of creating the corresponding component along with its respective configuration.
Copy app and create initialization script
We start with our app. The first thing we need is a fresh copy of the current release located at /var/data/releases/1.0.0, placed in the /var/data/docker/app folder. Don't use a symbolic link here; Docker cannot follow links that point outside the build context.
cp /var/data/releases/1.0.0/my_app.tar.gz /var/data/docker/app/app.tar.gz
Now we add the docker initialization script using our favorite text editor:
vim /var/data/docker/app/docker-entrypoint.sh
Add the following content:
#!/bin/bash
set -e

if [ "$1" = 'uwsgi' ]; then
    . /env/bin/activate
    exec gosu www-data "$@"
fi

exec "$@"
Maybe you are wondering what the gosu thing is: it's a command widely used in the Docker world, similar to sudo but with better signal handling. Basically it will execute the commands passed to the script as the www-data user.
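Once the image is built (in the sections below), you can see what gosu does by overriding the entrypoint for a one-off run; something along these lines (the expected output is only indicative):

docker run --rm --entrypoint gosu my_app www-data id
# should print something like: uid=33(www-data) gid=33(www-data) groups=33(www-data)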
Creating the app docker file
Create the docker file:
vim /var/data/docker/app/my_app.docker
Write the docker file:
# my_app.docker
# Use Ubuntu Trusty v14.04.3
FROM ubuntu:14.04.3

# Get gosu, we will use it in the entrypoint instead of sudo.
RUN gpg --keyserver pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4
RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates wget && rm -rf /var/lib/apt/lists/* \
    && wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/1.2/gosu-$(dpkg --print-architecture)" \
    && wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/1.2/gosu-$(dpkg --print-architecture).asc" \
    && gpg --verify /usr/local/bin/gosu.asc \
    && rm /usr/local/bin/gosu.asc \
    && chmod +x /usr/local/bin/gosu \
    && apt-get purge -y --auto-remove ca-certificates wget

# Refresh all packages, install virtualenv and all the required packages.
RUN apt-get update && apt-get install -qy \
    build-essential python3-dev libev4 libev-dev libpq-dev python-virtualenv \
    openssl libffi-dev libssl-dev libpcre3 libpcre3-dev

# Python 3.4 is already installed in the current Ubuntu version,
# so we just need to create a virtual environment.
RUN virtualenv -p /usr/bin/python3.4 /env

# Deploy application in the app folder
ADD app.tar.gz /var/www/my_app

# Add group and permissions for the uwsgi server (running with the user: www-data)
RUN chown -R www-data:www-data /var/www/my_app \
    && chmod -R 775 /var/www/my_app/server /var/www/my_app/logs \
    && chmod 770 /var/www/my_app/production.ini /var/www/my_app/development.ini

# Activate the Python 3.4 environment and install all required pip packages.
RUN . /env/bin/activate; \
    pip install pip==7.1.2 \
    && pip install setuptools==18.4 \
    && pip install -r /var/www/my_app/requirements.txt

# Add the Docker initialization script for running uWSGI
COPY docker-entrypoint.sh /entrypoint.sh

# Set root to the app directory.
WORKDIR /var/www/my_app

# Create a volume for the app directory. It will be used by nginx
# for accessing the static files and connecting to the uwsgi server.
VOLUME /var/www/my_app

ENTRYPOINT ["/entrypoint.sh", "uwsgi"]
CMD ["--ini-paste", "/var/www/my_app/production.ini"]
This docker file is based on ubuntu:14.04.3. It installs all the required libraries and creates a virtualenv where our app will be executed. The ADD command extracts all files from our compressed app.tar.gz into the folder /var/www/my_app (inside the docker context), and the following RUN changes the ownership of /var/www/my_app to the user www-data. This is really important because the uwsgi server will run under the www-data user; otherwise it would not be able to reach the unix socket and the connection with the nginx server would fail.
It also sets 770 permissions on files that could contain sensitive information, like production.ini. Afterwards the script installs all pip dependencies based on the requirements.txt file.
COPY docker-entrypoint.sh adds the initialization script to the image so it can be used as the entrypoint, meaning that this script will be executed every time we run the docker container.
The VOLUME instruction creates a volume for the app directory; this way we will be able to share the app between docker containers. For example, nginx will also use this volume to connect through the unix socket (/var/www/my_app/server/uwsgi.sock) and to serve static content.
Finally, ENTRYPOINT and CMD set the default parameters that will be used when we run the container, starting the uWSGI server:
uwsgi --ini-paste /var/www/my_app/production.ini
uWSGI configuration
This is the uwsgi configuration in the production.ini file.
[uwsgi]
uid = www-data
gid = www-data
chmod-socket = 664
chdir = /var/www/my_app
socket = /var/www/my_app/server/uwsgi.sock
master = true
workers = 1
harakiri = 60
harakiri-verbose = true
limit-post = 65536
post-buffering = 8192
pidfile = ./server/pid_5000.pid
listen = 128
max-requests = 1000
reload-on-as = 128
reload-on-rss = 96
no-orphans = true
log-slow = true
virtualenv = /env
thunder-lock = true
It is important that uid and gid are www-data, because the folder containing the socket is owned by this user. There is no need to use the daemonize option; docker will take care of running the app and collecting the logs.
Creating docker files for Postgres
First we create the initialization script:
vim /var/data/docker/postgres/data/init.sh
It will take care of creating the database and adding restricted roles for the different users.
#!/bin/bash

psql --username deployer <<-EOSQL
    CREATE DATABASE "my_db_production"
    WITH OWNER deployer
    TEMPLATE template0
    ENCODING 'UTF8'
    LC_COLLATE 'en_US.UTF-8';
EOSQL
echo

psql -U deployer my_db_production < /docker-entrypoint-initdb.d/db_import.psql

psql --username deployer my_db_production <<-EOSQL
    -- Create role "write" with basic read/write permissions:
    CREATE ROLE write WITH NOSUPERUSER CREATEDB ENCRYPTED NOCREATEROLE NOREPLICATION;
    GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO write;
    GRANT ALL ON ALL SEQUENCES IN SCHEMA public TO write;

    -- Create user "production" and grant it all the privileges of the role "write".
    CREATE ROLE production WITH NOSUPERUSER NOCREATEDB NOCREATEROLE NOREPLICATION ENCRYPTED PASSWORD 'j973jDdioosPDufO';
    GRANT write TO production;
EOSQL
echo
I have used the deployer user, which has superuser privileges, to take care of creating the database and performing all the pertinent modifications. Looking at the last part of the file you may wonder why I added those database users. It is good practice to create database users with limited privileges. If you are using postgres you can follow this approach: I have created a role called write with permission only for reading and writing on existing tables, so it cannot modify the database structure whatsoever:
CREATE ROLE write WITH NOSUPERUSER CREATEDB ENCRYPTED NOCREATEROLE NOREPLICATION;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO write;

/* We also need to add sequence permissions. The sequences are the objects
   generating unique numeric identifiers (usually for primary keys). */
GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO write;
Then we grant the role write to the user production, which will inherit all its privileges (you should use the production user in your app configuration to connect to the database):
/* Create a role called production with a secret password */
CREATE ROLE production WITH NOSUPERUSER NOCREATEDB NOCREATEROLE NOREPLICATION ENCRYPTED PASSWORD 'jdikejss934j48DufO';

/* Grant write's privileges to the production user */
GRANT write TO production;
We also add the file db_import.psql, which contains the whole schema of our database, to the same folder:
/var/data/docker/postgres/data/db_import.psql
Writing the postgres docker file
Let's create the file:
vim /var/data/docker/postgres/postgres.docker
This is the content:
# postgres.docker
# Use postgres version 9.4.5
FROM postgres:9.4.5

ENV POSTGRES_USER deployer
ENV POSTGRES_PASSWORD your_secret_password

COPY data/ /docker-entrypoint-initdb.d/
The script pulls postgres v9.4.5 and creates a superuser with the data defined in POSTGRES_USER and POSTGRES_PASSWORD. In this case I am creating the user deployer with the password your_secret_password. The last step just copies the scripts db_import.psql and init.sh into the folder /docker-entrypoint-initdb.d inside the docker container. This folder has the peculiarity of executing all the .sh files within the directory during initialization.
Creating docker files for Nginx
Nginx is the proxy server that will send all http/https requests to the uWSGI server.
Create config file
We will need to modify the default Nginx configuration to set the correct paths to our application, configure the unix socket, etc. This file will be copied by the nginx.docker file, overwriting the default configuration.
vim /var/data/docker/nginx/nginx.conf
This is the content of my nginx.conf file:
# I strongly recommend to use "www-data" as the nginx and uWSGI user.
# This way we will avoid permission issues when connecting through unix sockets.
user www-data;
worker_processes 1;

error_log /var/log/nginx/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;

pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    gzip on;
    gzip_http_version 1.0;
    gzip_proxied any;
    gzip_min_length 500;
    gzip_disable "MSIE [1-6]\.";
    gzip_types text/plain text/xml text/css text/comma-separated-values text/javascript application/x-javascript application/atom+xml;

    # Configuration for Nginx
    server {
        listen 80;
        server_name localhost;
        charset utf-8;

        #access_log logs/host.access.log main;

        # Settings to by-pass for static files
        location ^~ /static/ {
            root /var/www/my_app/my_app;
        }

        # Serve a static file (ex. favico) outside static dir.
        location = /favico.ico {
            root /var/www/my_app/my_app/static;
        }

        location / {
            uwsgi_pass unix:///var/www/my_app/server/uwsgi.sock;
            include uwsgi_params;
        }

        #error_page 404 /404.html;

        # redirect server error pages to the static page /50x.html
        # error_page 500 502 503 504 /error.html;
        location = /error.html {
            root /var/www/my_app/my_app/static/page;
        }
    }
}
First we set the Nginx user to www-data (the same as we did with uwsgi); this way we make sure there won't be permission issues when connecting through the unix socket. The location ^~ /static/ block is just a bypass, so Nginx will load the static files directly from the filesystem without making unnecessary calls to the uWSGI server.
Note that you may need to change these paths according to your app structure.
Writing the nginx docker file
Let's create the file:
vim /var/data/docker/nginx/nginx.docker
With the following content:
# nginx.docker
# Use nginx
FROM nginx:1.9.5

# Update nginx configuration
COPY nginx.conf /etc/nginx/nginx.conf
This docker file simply pulls nginx v1.9.5, installed on a lightweight Debian distribution, and applies the Nginx configuration we created in the previous step.
Creating docker files for Elasticsearch
Let’s create the docker file for the search engine:
vim /var/data/docker/elasticsearch/elasticsearch.docker
Content of the file:
# elasticsearch.docker
# Use elasticsearch 1.7.3
FROM elasticsearch:1.7.3

RUN /usr/share/elasticsearch/bin/plugin install license
RUN /usr/share/elasticsearch/bin/plugin install marvel-agent
This docker file simply retrieves Elasticsearch 1.7.3, installed on a lightweight Debian distribution, and also installs Marvel, which is an Elasticsearch monitoring tool.
Creating docker files for Redis
For Redis we don’t really need a docker file, we will create the Redis container using the default image: redis:3.0.5.
Creating docker files for Cassandra
Add Cassandra’s docker file:
vim /var/data/docker/cassandra/cassandra.docker
Content of the file:
# cassandra.docker
# Use Cassandra version 2.2.3
FROM cassandra:2.2.3

ENV CASSANDRA_CLUSTER_NAME my_app_cluster
ENV CASSANDRA_ENDPOINT_SNITCH PropertyFileSnitch
This docker file retrieves Cassandra 2.2.3 installed in a lightweight Debian distribution and sets the cluster name and snitch configuration. You can learn more about how to configure Cassandra here.
Building the docker images
Now it is time to create the images out of our docker files. I will use the nomenclature my_app/component_name for all components; for instance, the Elasticsearch image will be created as my_app/elasticsearch.
Build the app
To build the app we just need to run the docker build command in the root directory of the docker app folder.
cd /var/data/docker/app
docker build -t my_app -f my_app.docker .;
Build Postgres
Building the images is pretty much the same for all docker files. Just go to the root directory of each docker component and execute docker build.
cd /var/data/docker/postgres
docker build -t my_app/postgres -f postgres.docker .;
Build Nginx
cd /var/data/docker/nginx
docker build -t my_app/nginx -f nginx.docker .;
Build Elasticsearch
cd /var/data/docker/elasticsearch
docker build -t my_app/elasticsearch -f elasticsearch.docker .;
Build cassandra
cd /var/data/docker/cassandra
docker build -t my_app/cassandra -f cassandra.docker .;
Check the new images
You can see all the images created with the command:
docker images
Run the components/Creating containers
With all the images already in place, we can now create the containers for each of the components. The app and Nginx will be created last, in that order, because the app relies on the other containers and Nginx needs the volume /var/www/my_app defined in the app container. Once the containers are created you can display them using docker ps.
If you have any problems running the containers, just check the logs.
Run Postgres
For creating the Postgres database container just run:
docker run --name my_app_postgres -d my_app/postgres
The name is actually up to you, but it’s needed for linking the containers afterwards.
Now that the Postgres database is running you can connect to the PSQL console at any time:
docker run -it --link my_app_postgres:postgres --rm my_app/postgres sh -c 'exec psql -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U deployer'
This will connect to the database using the deployer user, who has superuser privileges. The --rm option is just for creating a temporary container to access the console; it will be deleted when the console is closed.
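From that console (or with a one-off command like the one below) you can verify that the init script created the roles and imported the schema. The database and role names are the ones used earlier in this guide; adjust them to yours:

docker run -it --link my_app_postgres:postgres --rm my_app/postgres \
    sh -c 'exec psql -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U deployer -d my_db_production -c "\du"'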
Run Elasticsearch
Create Elasticsearch container:
docker run --name my_app_elasticsearch -d my_app/elasticsearch
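To check that Elasticsearch is actually up (give it a few seconds to start), you can hit its HTTP endpoint from a temporary linked container. Here I use the public busybox image just because it ships with wget; it is not part of this setup, so treat this as an optional check:

docker run --rm --link my_app_elasticsearch:elasticsearch busybox \
    wget -qO- http://elasticsearch:9200/_cluster/health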
Run Cassandra
Create Cassandra container:
docker run --name my_app_cassandra -d my_app/cassandra
If this is the first time you run the container and you want to access the cqlsh console to add default keyspaces and tables, this could be a good moment. If you just created the container you may need to wait a few seconds for Cassandra to initialize, then run this command:
docker run -it --link my_app_cassandra:cassandra --rm my_app/cassandra cqlsh cassandra
The --rm option is just for creating a temporary container to access the console; it will be deleted when the console is closed.
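For example, a keyspace could be created non-interactively like this (the keyspace name and replication settings here are just an illustration, not something your app necessarily expects):

docker run -it --link my_app_cassandra:cassandra --rm my_app/cassandra \
    cqlsh cassandra -e "CREATE KEYSPACE IF NOT EXISTS my_app WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};"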
Run Redis
Create Redis container:
docker run --name my_app_redis -d redis:3.0.5
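You can confirm Redis is responding with a throwaway container that runs redis-cli against the linked alias (same pattern as the psql and cqlsh consoles above):

docker run -it --link my_app_redis:redis --rm redis:3.0.5 redis-cli -h redis ping
# should answer: PONG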
Run the App
Now that we have all the required containers in place we can run the app and link it to the components.
docker run --name my_app --link my_app_redis:redis --link my_app_elasticsearch:elasticsearch --link my_app_cassandra:cassandra --link my_app_postgres:postgres -d my_app
There is a lot of magic going on here. First, it creates our app container from the my_app image we built for release 1.0.0. Then it links all the existing containers to the app. What does linking mean? If you look at the --link options, they all have a colon “:” followed by a name. For example, for Redis we have my_app_redis:redis; the name following the colon is an alias that will be made available as a hostname inside our app container. If we access the app container and take a look at the /etc/hosts file, we should see something like this:
172.17.0.12    postgres 4e70c95bb61a my_app_postgres
172.17.0.18    redis e39bcae5be48 my_app_redis
172.17.0.17    cassandra e3d756eb63f1 my_app_cassandra
172.17.0.20    elasticsearch cfc93af8c3b4 my_app_elasticsearch
Docker is dynamically associating the IP of each container with the aliases and names that we defined. You must take this into account when writing your production.ini file.
For instance, if you have the following postgres connection defined in your development.ini:
postgresql://my_user:my_pass@127.0.0.1/my_db
In production.ini the IP must be replaced by the postgres alias defined in --link my_app_postgres:postgres:
postgresql://my_user:my_pass@postgres/my_db
Do the same with all components.
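If you want to double-check which hostnames are available inside the app container before editing the configuration, you can resolve the aliases directly (container name as created above):

docker exec -it my_app getent hosts postgres redis cassandra elasticsearch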
Run Nginx
Now that the app container is running we need HTTP/HTTPS access to test it, and the Nginx server will take care of that. We cannot run Nginx the same way as we did with the other components because Nginx also needs to be accessed externally through HTTP/HTTPS. That’s why we use the -p (port) option, to define which port of our production server will be linked to the HTTP/HTTPS ports of the Nginx server.
docker run --name my_app_nginx -d -p 0.0.0.0:32769:80 -p 0.0.0.0:32768:443 --volumes-from my_app my_app/nginx
We have bound the addresses 0.0.0.0:32769 and 0.0.0.0:32768 of our production server to ports 80 (HTTP) and 443 (HTTPS) of the Nginx container respectively. The --volumes-from my_app part is also extremely important: it tells Nginx to use the volumes of my_app, meaning that both containers share the folder that contains our app, /var/www/my_app. Otherwise Nginx would not be able to connect to uwsgi through the unix socket located at /var/www/my_app/server/.
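A quick way to confirm the volume is really shared is to list the socket directory from inside the Nginx container (container names as created above):

docker exec -it my_app_nginx ls -l /var/www/my_app/server/
# you should see uwsgi.sock owned by www-data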
Now we should be able to check whether Nginx is working just by typing these addresses in the web browser of the production server. But wait, there is no browser there… so let's try with curl:
curl 0.0.0.0:32769
You should see your app’s main page.
Testing with the web browser
Testing webpages with curl is a bit tiresome, isn't it? So we are going to forward those ports to our main computer; this way we can test the app using our favorite web browser. We have done this already with the SSH port: just open the Ubuntu VirtualBox settings, go to Network -> Port Forwarding, and add the HTTP/HTTPS ports:
Name | Protocol | Host IP | Host Port | Guest IP | Guest Port |
---|---|---|---|---|---|
SSH | TCP | | 3022 | | 22 |
HTTP | TCP | | 32769 | | 32769 |
HTTPS | TCP | | 32768 | | 32768 |
You may need to reboot the production server afterwards. Once the server is ready, the containers will probably be stopped, so we just need to start them again:
docker start my_app_elasticsearch
docker start my_app_redis
docker start my_app_cassandra
docker start my_app_postgres
docker start my_app
docker start my_app_nginx
And that’s all, just open any browser in your computer and check the address http://127.0.0.1:32769.
Notice that on a real production server we won't use strange addresses like 0.0.0.0:32769. I have used these ports because VirtualBox only allows forwarding ports higher than 1024. On a real production server we would probably use the domain name and port 80 (HTTP) to direct all HTTP requests to our app.
Some useful docker commands
Just some useful commands that you will end up using sooner or later.
Access to a running container
You can get a bash console in any running container using the following command:
docker exec -it container_name bash
Execute command into a running container
You can execute any command inside a running container using the following command:
docker exec -it container_name echo "I'm inside the container!"
Display containers
The running containers can be displayed with ps:
docker ps
If you need to see all containers just add -a:
docker ps -a
Inspect containers
This command is extremely useful: it displays everything about the container; IP address, volumes, linked ports, environment variables, etc.
docker inspect my_container_name
Docker logs
If you have any problems creating containers, or if a container stops automatically every time you run it, you should take a look at the logs:
docker logs my_container_name
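If you want to keep watching the output while you poke at the container, the standard -f flag follows the log stream:

docker logs -f my_container_name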
Debug your image
When the container stops automatically due to some error during initialization, we can override the entrypoint for debugging purposes:
docker run -it --rm --entrypoint=/bin/bash my_image:latest
Delete all containers
Forces deletion of all existing containers.
docker rm -f $(docker ps -a -q)
Delete all images
Forces deletion of all existing images.
docker rmi -f $(docker images -q)
Delete unused volumes
Removes all unused local volumes. Unused local volumes are those which are not referenced by any container.
docker volume prune