wait for db with docker and python

In this post we will see how to solve the classic problem of making a Python application wait for a MySQL Docker container to be ready.

There are two places where the waiting logic can live:

In the Python application

import logging
import time

logger = logging.getLogger(__name__)

# assume 'db' is a peewee database object

def wait_for_db_connection(max_tries, sleep_duration_in_seconds):
    try_count = 0
    while True:
        try_count += 1
        try:
            db.connect()
            logger.info("database server connection try {0}: OK".format(try_count))
            return
        except Exception as error:
            if try_count < max_tries:
                logger.info("database server connection try {0}: FAILED".format(try_count))
                time.sleep(sleep_duration_in_seconds)
            else:
                logger.error("database server connection reached max tries. Unable to connect to DB")
                logger.exception(error)
                raise

Then call wait_for_db_connection(...) at application startup, before the first query is issued, e.g. wait_for_db_connection(max_tries=10, sleep_duration_in_seconds=3).

In the Dockerfile

COPY ./wait-for-db.sh /wait-for-db.sh
RUN chmod +x /wait-for-db.sh
CMD [ "/wait-for-db.sh", "python", "app.py" ]

Where wait-for-db.sh is a Jinja2-like template:

#!/bin/sh
# wait-for-db.sh

set -e

host="{{ database_host }}"
port={{ database_port }}

# host and port come from the template, so every argument is part of the command
cmd="$@"

until nc -z -v -w30 $host $port; do
  >&2 echo "Database server is unavailable - sleeping"
  sleep 2
done

>&2 echo "Database server is up - executing command"
exec $cmd
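
Once the template is rendered (say with host db and port 3306), the script can be exercised by hand; python app.py here just stands in for whatever the container should run:

./wait-for-db.sh python app.py
# prints "Database server is unavailable - sleeping" until the port opens,
# then "Database server is up - executing command" and execs the given command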

docker could not resolve deb.debian.org

While building a Dockerfile that installs some packages, the following error is raised:

Could not resolve 'deb.debian.org'

In fact, the Docker daemon can't resolve the specified host, because no DNS server is set in its configuration.

To fix the issue, at least one valid DNS server must be configured in /etc/docker/daemon.json, like so (Google DNS as an example):

{
    "dns": ["8.8.8.8"]
}

Then restart the Docker daemon:

sudo service docker restart
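
To verify the fix, resolve the host from inside a throwaway container (busybox ships an nslookup applet):

docker run --rm busybox nslookup deb.debian.org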

register and update docker containers ips in etc hosts

Use the following script to update /etc/hosts with entries for the Docker containers named mycontainersXXXXwitr.local:

#!/bin/bash
# delete lines between '# docker-compose containers start' and '# docker-compose containers end' in /etc/hosts
sudo sed -i.bak '/^# docker-compose containers start/,/# docker-compose containers end/d' /etc/hosts


echo -e "\nyour /etc/hosts is updated with following:"
echo "======================"
echo "" | sudo tee -a /etc/hosts
echo "# docker-compose containers start" | sudo tee -a /etc/hosts
echo "" | sudo tee -a /etc/hosts
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}} {{.Name}}' $(docker ps --filter name=mycontainers.*witr.local -q) | sed 's/\///' | sudo tee -a /etc/hosts
echo "" | sudo tee -a /etc/hosts
echo "# docker-compose containers end" | sudo tee -a /etc/hosts
echo "======================"

ansible with docker containers as target

Ansible needs SSH access to its target machines, but spinning up a Vagrant/VirtualBox VM just to test a playbook is heavyweight.

Even though Docker containers are not the intended targets for Ansible, their light weight and fast startup make them handy for quickly testing playbooks.

To do so, we will run an SSH server inside our Docker container.

Create the Dockerfile:

FROM debian:jessie

RUN apt update && DEBIAN_FRONTEND=noninteractive apt install -y openssh-server sudo python python-apt apt-transport-https

RUN apt install -y unzip

RUN mkdir -p /var/run/sshd && sed -i "s/UsePrivilegeSeparation.*/UsePrivilegeSeparation no/g" /etc/ssh/sshd_config \
  && sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config \
  && touch /root/.Xauthority \
  && true

RUN useradd myuser \
        && passwd -d myuser \
        && mkdir /home/myuser \
        && chown myuser:myuser /home/myuser \
        && addgroup myuser staff \
        && addgroup myuser sudo \
        && true

RUN echo "%sudo ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

ADD ./entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

EXPOSE 22
ENTRYPOINT ["/entrypoint.sh"]

Create entrypoint.sh:

#!/bin/bash
set -e

if [ -z "${SSH_KEY}" ]; then
        echo "ERROR: missing public key in the SSH_KEY environment variable"
        exit 1
fi

for MYHOME in /root /home/myuser; do
        echo "=> Adding SSH key to ${MYHOME}"
        mkdir -p ${MYHOME}/.ssh
        chmod go-rwx ${MYHOME}/.ssh
        echo "${SSH_KEY}" > ${MYHOME}/.ssh/authorized_keys
        chmod go-rw ${MYHOME}/.ssh/authorized_keys
        echo "${MYHOME} ssh configured: OK"
done
chown -R myuser:myuser /home/myuser/.ssh

echo "========================================================================"
echo "You can now connect to this container via SSH using:"
echo ""
echo "    ssh root@<host>"
echo "    ssh myuser@<host>"
echo ""
echo "========================================================================"

exec /usr/sbin/sshd -D

Build and run the container

docker build -t my/image .
docker run -d -e SSH_KEY="$(cat ~/.ssh/id_rsa.pub)" my/image
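
With the container up, Ansible can target it like any SSH host. A minimal sketch, assuming the container's IP is 172.17.0.2 (check with docker inspect) and a recent Ansible (older versions use ansible_ssh_user instead of ansible_user):

# inventory.ini
[containers]
172.17.0.2 ansible_user=myuser

# ad-hoc connectivity test
ansible containers -i inventory.ini -m ping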

how to access the host machine from a docker container

First, create a network:

docker network create -d bridge --subnet 192.168.0.0/24 --gateway 192.168.0.1 witrnet

Then start the container on the newly created network:

docker run --net=witrnet my/image

Or use it in docker-compose:

version: '2'

services:
  myservice:
    image: my/image
    networks:
      - witrnet

networks:
  witrnet:
    external: true
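
The fixed gateway is the point of this setup: from inside any container on witrnet, the host machine answers at the gateway address. For example (assuming ping is available in the image):

# run from inside the container
ping -c 1 192.168.0.1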

migrate local docker images to a remote docker host

A. New remote host
– install docker
– set up a simple configuration to make the Docker daemon accessible from other machines:
Edit /etc/default/docker, and add the following line

DOCKER_OPTS="-H tcp://0.0.0.0:2375"

– restart docker daemon

sudo service docker restart

IMPORTANT:
On systemd systems (vivid and later), we must add a new file
/etc/systemd/system/docker.service.d/docker.conf
with the following content:

[Service]
EnvironmentFile=-/etc/default/docker
ExecStart=
ExecStart=/usr/bin/docker -d $DOCKER_OPTS

Then reload systemd and restart Docker:

sudo systemctl daemon-reload
sudo service docker restart

Now, when trying:

docker ps

we'll get the following error:

Get http:///var/run/docker.sock/v1.19/containers/json: dial unix /var/run/docker.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS?

That's because the Docker client tries to connect to the local Unix socket by default. So, from now on, we must specify which host to connect to when running the Docker client, like so:

docker -H localhost:2375 ps
# or
docker -H 127.0.0.1:2375 ps
# or simply
docker -H :2375 ps

OK, but it's annoying to specify the host every time! The solution is to define the DOCKER_HOST environment variable:

export DOCKER_HOST=0.0.0.0:2375

Then we can just run:

docker ps

B. Local host

We must set the DOCKER_HOST environment variable as follows (assume the remote host machine's IP is 192.168.33.10):

export DOCKER_HOST=192.168.33.10:2375

C. Transfer images from the local docker host to the remote docker host
Now, to move the local images to the new Docker host, we proceed in three steps:
1. On the local machine, export the images as tar files

# DOCKER_HOST now points at the remote daemon, so address the local one explicitly
docker -H unix:///var/run/docker.sock save -o /tmp/myimage.tar myimage

2. Transfer the tar files from the local machine to the remote one (with scp, for example)

scp /tmp/myimage.tar user@192.168.33.10:/tmp/

3. On the remote machine, load the images

docker load -i /tmp/myimage.tar
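
As a quick check from the local machine, the image should now appear on the remote daemon:

docker -H 192.168.33.10:2375 images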

Hope this helps.

access to a dockerized apache from the local machine

Get Ubuntu

docker pull ubuntu:vivid

Start the Ubuntu container

docker run -ti ubuntu:vivid /bin/bash

Install Apache

root@54a954be4ca3:/# apt-get update && apt-get install -y apache2

Type CTRL+P then CTRL+Q: this detaches from the container without killing the process.
Now, on the local machine, look up the container name

docker ps

Save the container with Apache installed (suppose your container name is happy_pasteur)

docker commit -a "witr" -m "install apache2" happy_pasteur witr/myapache

Stop the container

docker stop happy_pasteur

Start Apache in a new container from the witr/myapache image and bind the ports

docker run -d -p 9999:80 witr/myapache /usr/sbin/apache2ctl -D FOREGROUND

Finally, browse http://localhost:9999.
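
To confirm from the command line that Apache answers (assuming curl is installed on the host):

curl -I http://localhost:9999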