Running multiple web applications on a Docker host with Apache

Note: This guide tends to fall out of date quickly, and because it was intended as an introduction to Docker for those transitioning from an environment that does not use it, it’s difficult to maintain. For a more practical use case for Docker (and Docker Compose), I recommend checking out this updated guide to building and scaling an application in Docker.

Create a Docker-based Service on DigitalOcean with Docker Compose
Quickly Build and Deploy your Web Application environment with Docker and Compose (medium.com)

This is a topic I see a lot of users new to Docker struggle with, so for the sake of keeping this simple, and for the benefit of those who learn by doing (or just need a push over the deceptively steep learning curve), I’ll outline how to configure a Docker host to run multiple web applications (in separate containers) through a single Apache installation, using Apache as a proxy.

This is especially helpful if you’d like to map a domain to an application running in a Docker container. For the sake of simplicity, this tutorial uses Apache rather than a load-balancing package like HAProxy, which is also excellent and would be a great fit for this use case as well.

There are a ton of options out there to do this on a very sophisticated level (with PaaS solutions like Deis and Dokku and larger scale deployment environments like CoreOS — which can be used with Deis — and Rancher), but those tools can be intimidating, and in the end, knowing how a solution like that works is half the battle in making the most of it. Docker is at the core of this, and it never hurts to start at the beginning.

The benefits of using Docker (or containers in general) are pretty well documented, so I won’t go into them here; let’s be practical for now.

The configuration covered in this guide does not make use of Docker’s more intricate features, such as container linking or more advanced networking (we stick to simple port forwarding); it is just intended to be a simple, practical guide to using Docker for something that is incredibly common, and it can serve as the basis for digging deeper into containerization.

So, let’s get started.

The first step is to install Docker, or, with many cloud providers, a pre-built image may be available to you. DigitalOcean has a one-click image built on Ubuntu 14.04 with Docker preinstalled.
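
If you’re installing Docker yourself rather than using a pre-built image, the convenience script Docker publishes is one quick way to do it on a stock Ubuntu 14.04 server (review the script first if piping it straight to a shell makes you uncomfortable):

wget -qO- https://get.docker.com/ | sh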

Once you are logged in, you can start by creating your Apache container:

docker run -it -p 80:80 -p 443:443 --name apache ubuntu:latest /bin/bash

You’ve just created a container named “apache”, and forwarded ports 80 and 443 on the container to ports 80 and 443 on the host. This means that if you connect to the server’s IP address on port 80, the request maps to port 80 in the container. You will use this same pattern again shortly, with more obvious port mappings.
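
From the host (in another terminal, or after you detach later), you can verify these mappings at any time with docker port; for example:

docker port apache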

Once you run the above, you’ll get dropped into a prompt like this (your container ID will be different):

root@<container_id>:~#

You can configure the container using the following commands:

apt-get update
apt-get install apache2 -y
a2enmod proxy
a2enmod proxy_http
service apache2 restart
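
Before moving on, you can confirm that the proxy modules actually loaded by listing Apache’s active modules:

apache2ctl -M | grep proxy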

Now, let’s detach from this first container by hitting CTRL+P and then CTRL+Q; this will leave the container running in the background. We will come back later to configure Apache to connect to each container over port 80.

You can check the status of running containers using:

docker ps

The output, by the end of this tutorial, should look similar to:

user@dockerhost ~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
676e2a55504c ubuntu:latest "/bin/bash" About an hour ago Up About an hour 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp apache
b8f7244f2639 ubuntu:latest "/bin/bash" About an hour ago Up About an hour 0.0.0.0:8080->80/tcp, 0.0.0.0:4443->443/tcp client1
a5b7354e2534 ubuntu:latest "/bin/bash" About an hour ago Up About an hour 0.0.0.0:8081->80/tcp, 0.0.0.0:4444->443/tcp client2

For now, it should only have the “apache” container listed. If a container has stopped and you need to start it again, you can get it going with:

docker start <container name>
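
For example, to bring the proxy container back up:

docker start apache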

Next, create the client container:

docker run -it -p 8080:80 -p 4443:443 --name client1 ubuntu:latest /bin/bash

In the shell for this container, run the following commands:

apt-get update
apt-get install apache2 php5 -y
echo "<?php phpinfo(); ?>" >> /var/www/html/info.php
service apache2 restart

Now you can test the PHP info page by connecting directly to the container. Remember that in the command above you mapped port 80 in the container to port 8080 on the host (and 443 to 4443), which means you can reach your test site at:

http://your_IP_address:8080/info.php
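
If you’d rather check from the Docker host’s command line (assuming curl is installed), the same page is reachable there as well:

curl http://localhost:8080/info.php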

If you reached the PHP info page, then you were successful! Exit the container (CTRL+P and then CTRL+Q once more), and move on to creating your second container.

There are two options. The first is to basically repeat the process:

docker run -it -p 8081:80 -p 4444:443 --name client2 ubuntu:latest /bin/bash

Since you cannot forward a single host port to more than one container, you’ll notice this second container uses host ports 8081 and 4444.

Run the commands to install Apache and PHP again on this container, and test the new URL:

http://your_IP_address:8081/info.php

The second option is to leverage containerization:

First create an image of your first container:

docker commit client1 client-template
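
You can confirm the new image was created by listing your local images:

docker images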

Then you can redeploy using that image and skip having to configure the container, moving straight on to testing the URL, by creating the new container from the client-template image:

docker run -it -p 8081:80 -p 4444:443 --name client2 client-template /bin/bash

If the method you tried worked, detach from the container again (leaving it running), and we’ll move on to mapping your domains to the containers.

Connect to your Apache container once again:

docker attach apache

You may need to hit “Enter” once to jump to a new line.

Using the text editor of your choice, create a Virtual Host file for your first container:

nano /etc/apache2/sites-available/site1.docker.biz.conf

A quick note before writing the file: inside the Apache container, localhost refers to that container itself, not to the Docker host, so the proxy has to point at an address where the client container can actually be reached. The simplest option is the client container’s internal IP address on port 80 (finding it is shown below). Your VirtualHost file should look like this, substituting client1’s IP address:

<VirtualHost *:80>
    ServerName site1.docker.biz
    <Proxy *>
        Require all granted
    </Proxy>
    ProxyPass / http://<client1_IP_address>:80/
    ProxyPassReverse / http://<client1_IP_address>:80/
</VirtualHost>
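
To find the IP address to substitute into the ProxyPass lines above, you can ask Docker for client1’s address from the host (in a separate terminal, or after detaching); the exact value will differ on your system:

docker inspect --format '{{ .NetworkSettings.IPAddress }}' client1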

Save the file, and then enable the VirtualHost:

a2ensite site1.docker.biz.conf
service apache2 reload
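
Any time you edit these files, you can also have Apache validate the configuration before reloading it:

apache2ctl configtest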

You can repeat this process for the second container with a second hostname; a sample VirtualHost for the second site is shown below. For additional sites in additional containers, just keep repeating the same steps (as space and resources on your server allow).
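
For reference, the second VirtualHost would look almost identical, just with its own hostname (site2.docker.biz here is only an example) and client2’s IP address:

<VirtualHost *:80>
    ServerName site2.docker.biz
    <Proxy *>
        Require all granted
    </Proxy>
    ProxyPass / http://<client2_IP_address>:80/
    ProxyPassReverse / http://<client2_IP_address>:80/
</VirtualHost>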

After you enable the virtual hosts and reload Apache, you should be able to access your domains and have Apache pass the requests through to the containers.
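
If your DNS records aren’t set up yet, you can still test the proxying from any machine by sending the hostname explicitly with curl:

curl -H "Host: site1.docker.biz" http://your_IP_address/info.php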

The above covers just the very basics of using Docker; there is a much fuller feature set you can use to shape how your containers interact with the system.

For example, you can use the following command line options to set a shared directory between the host and container:

docker run -it \
    ...
    -v /home/docker_clod:/home/container_huehue \
    ...

and to set environment variables:

docker run -it \
    ...
    -e HELLO=HELLO \
    ...

and some resource limiting:

docker run -it \
    --memory 512m \
    ...

among many other options.
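
Putting a few of those together, a complete invocation might look something like this (the ports, paths, names, and values here are just placeholders):

docker run -it \
    -p 8082:80 \
    -v /home/docker_clod:/home/container_huehue \
    -e HELLO=HELLO \
    --memory 512m \
    --name client3 ubuntu:latest /bin/bash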