I have always wished for an easy-to-follow tutorial on Docker, so I decided to create one myself.
I hope it will help you understand why Docker is such a popular tool and why more and more developers are choosing Docker over Vagrant and other solutions.
Development environments
You basically have three options for preparing a development environment for your project:
- Manual approach
- Virtual machines
- Virtual services
1. Manual approach
The manual approach is the oldest one: you install individual services directly on your development machine. But with different versions of these services on the staging and production environments, you can run into dependency problems rather quickly.
It’s also quite challenging to manage different projects with different requirements on a single computer. One application might need PHP 5.5 and MySQL 5.7, while another won’t run unless you have PHP 7.0 and MySQL 5.1.

2. Virtual machines
You can solve this problem with virtual machines. Download the free VirtualBox or VMware Workstation Player, use the built-in Hyper-V on a Windows PC, or buy Parallels Desktop for Mac, and set up your environment individually for every project so they are totally separated and won’t influence each other.
When you need to deploy your project with its specific environment to a remote server, you just provision the whole virtual machine.
Vagrant can help you with this process because it allows you to deploy your project directly to the cloud, for example to a DigitalOcean droplet.
But the problem is that you are working with full-blown operating systems even though they are virtualized.

3. Virtual services
What if there is another way? What if you don’t need full operating system encapsulated in virtual machines to keep your projects separated?
What if you could still have the same development environment everywhere: on your local machine, on a testing server, and even on a production server?
This is all possible thanks to Docker.

To better understand the difference between Docker and solutions based on virtual machines, take a look at the image below:

Docker can help you with your development workflow by:
- eliminating the “it works on my machine” problem once and for all because it will package dependencies with your apps in the container for portability and predictability during development, testing, and deployment,
- allowing you to deploy both microservices and traditional apps anywhere without costly rewrites by isolating apps in containers; this will eliminate conflicts and enhance security,
- streamlining collaboration with system operators and getting updates into production faster.
Interested? Great! Let’s give it a try.
Install Docker
This part is very easy, go to Get Started with Docker and download the package for your operating system.

You will install Docker as any other application. On Mac, it’s as easy as dragging the app to the Application folder. On Windows, it’s a standard installer you’re probably familiar with.


When you start Docker on Mac for the first time, you will have to grant it privileged access by providing your password. That’s perfectly fine, don’t worry and do it.


That’s it. Now you should see the dashboard with a tutorial, which you can take or skip. I suggest skipping it for now so we can move on.

You won’t need to do anything else in the Docker app, just keep it open. You can easily check out if it’s running from the menu bar on Mac or from the system tray on Windows.

Install Visual Studio Code
Next, you will need a code editor that lets you edit plain text without adding any formatting.
I personally moved from Atom (which will soon be discontinued anyway) to Visual Studio Code from Microsoft.
You can use whatever code editor you already have, but if you’re new to programming, you definitely don’t want to use Word or Pages. Just make sure that your editor works with a terminal.
Visual Studio Code is available for many platforms including Linux. Download and install the package for your operating system.
Create your first Docker image
The easiest way to create a Docker image is with Dockerfile which is something like a recipe for building an image.
Image is something like a blueprint. You can use one blueprint to create many objects like cars or houses. Similarly, you can use one image to create many containers.
Let’s take a look at how easy it is to create a development environment based on PHP 7 and the Apache web server.
Create a new folder on your Desktop and name it docker-apache-php7.

Open Visual Studio Code, choose File -> Open Folder… and select the docker-apache-php7 folder from your Desktop:

Next, choose Terminal -> New Terminal to open the terminal window right at the bottom of VS Code:

On Windows, you’ll get PowerShell instead, but that’s just fine:

Inside docker-apache-php7 folder, create a new folder called src.

Next, we need to create a new file called phpinfo.php inside the src folder.
If you’re on Mac, you can quickly create this file by executing one command from your terminal:
touch ~/Desktop/docker-apache-php7/src/phpinfo.php
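If you started in the terminal instead, the folder, the src subfolder, and the empty file can all be created in one go. A sketch using relative paths (adjust them if your project lives elsewhere, e.g. on the Desktop):

```shell
# Create the project folder, the src subfolder, and an empty phpinfo.php
mkdir -p docker-apache-php7/src
touch docker-apache-php7/src/phpinfo.php

# Verify the result
ls docker-apache-php7/src
```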

Select the phpinfo.php file, put this code inside and save the changes:
<?php phpinfo(); ?>
This simple code will call just one simple PHP function that outputs a detailed table with configuration settings of the development environment we are just creating.
Here’s what it should look like on your computer:

Now we’ll need another file, this time it must be stored directly inside the docker-apache-php7 folder. Let’s call it Dockerfile (just like this, without any extension).
Again, you can create this file on your Mac with this terminal command:
touch ~/Desktop/docker-apache-php7/Dockerfile

We want to build our development environment on PHP 7 and Apache web server. The best way is to start with the official Docker image that’s available on Docker Hub.

We want to use the 7.4.30-apache version because it includes the Apache web server so we won’t have to install Apache separately.
Select php official image and type 7.4.30-apache into Filter Tags search field:

Go ahead and click this link, which will take you to the GitHub repository of the image. Just take a look at that huge Dockerfile!

Don’t panic, our Dockerfile will have just a couple of lines because we will take advantage of the hard work of PHP team and use their image.
This doesn’t mean that you can’t just sit down and write your own image based on Debian Linux, but why would you want to waste your time, when you can just use what’s already prepared, right?
In order to use this image, go to your Dockerfile and write this line of code:
FROM php:7.4.30-apache
This says that our own Docker image will be based on the 7.4.30-apache image created and maintained by PHP team. Sweet.
Next, add a second line so that your Dockerfile looks like this:
FROM php:7.4.30-apache
COPY src/ /var/www/html/
Make sure there is a space between src/ and /var. It’s very important!
This line says that we want the content of the src folder we created a few minutes ago to be copied to /var/www/html/. But you might wonder why, and where that folder is located.
This folder structure is created while our image is built, or more specifically, while we create a container from that image.
I showed you the Dockerfile for 7.4.30-apache image on purpose. Remember the line FROM debian:bullseye?

Our image will be based on the 7.4.30-apache image, but even this image is based on another Docker image named debian:bullseye-slim.
So PHP team grabbed debian:bullseye-slim (a Linux distribution also known as Debian 11) and used their Dockerfile to add some features and modifications.
Similarly, we will add our own features and modifications to the 7.4.30-apache image with our Dockerfile.
The point is that we are all adding layers of new functionality on top of vanilla Debian Linux, and as you probably know, the Linux file system starts with the root (/) followed by specific subfolders.
macOS is based on UNIX and works similarly. Your Desktop, for example, is actually located in /Users/your-name/Desktop. In my case, it is /Users/zavrelj/Desktop.
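You can see this root-based layout from any terminal; a quick sketch (the exact output will differ per machine):

```shell
# Absolute paths always start at the root directory /
echo "$HOME"   # your home folder, e.g. /Users/your-name on macOS
ls /           # top-level directories; on Debian these include var, etc, and usr
```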
For web content, the Apache web server uses a directory called html, which is stored inside the www directory inside the var directory.
Because we know this, we can say that once our Docker image with the Apache web server is initialized (or spun up) to create a container, we can safely copy the content of our src folder to the /var/www/html folder on Debian Linux.
That’s because this folder will already exist: the Apache web server creates it during its own installation.
Give yourself a pause and let this all sink in. It’s a really important concept.
Ok, the last line we will add to our Dockerfile looks like this:
FROM php:7.4.30-apache
COPY src/ /var/www/html/
EXPOSE 80
This says that we want port 80 to be available for incoming HTTP requests.
Your Dockerfile should look like this now:

Make sure you wrote everything correctly and save the changes.
It’s important to be in the directory where the Dockerfile is saved so we can build our image from it.
In the terminal below, set your docker-apache-php7 folder as a working directory with this command:
cd ~/Desktop/docker-apache-php7
If you followed me step by step and you are on a Mac computer, you already are in the right directory. But if you’re somewhere else, this command will get you there.
To make sure you’re at the right place, type this command to print your current directory to the terminal:
pwd
Let’s see the content of your current directory and check that Dockerfile is there:
ls
If you see this, you’re good to go:

Type this command in terminal and hit Enter:
docker build -t php-image .
This command builds our image; the -t option lets you give the image a custom name. It will be php-image in our case because I want you to distinguish between images and containers.
Finally, the trailing dot at the end of this command means that the Dockerfile is located in the current directory. That’s why we wanted to save it there!
If everything went right, you should see something similar in your terminal:

Docker has just created our image. To see the list of all your images, just type this command in the terminal:
docker image ls
You should see a table with one row and there’s your php-image you’ve just created:

You can see its name, tag, id, size and when it was created.
Create your first Docker container
Now that we have our custom Docker image, we can create a container.
A Docker image is something like a snapshot. If you want to work with services like PHP and the Apache web server, you need to spin up a container from your image first.
Type this command in your terminal to create a new container from your php-image:
docker run -p 80:80 -d --name php-container php-image
Let’s take a look at all parameters:
- -p 80:80 is port mapping. Remember how we exposed 80 in the Dockerfile? Well, now we need to tell the container to use the exposed port 80 and deliver its content to the port 80 of our localhost.
- -d stands for a detached mode which will bring the process to the background so you can still use the same terminal window.
- --name php-container allows us to give our container a custom name; otherwise Docker would pick a random one for us.
- php-image is the name of our image from which we want to spin up our new container.
Remember that the name of the image must come after all the other options, as the last parameter!
If you get an error that port 80 is already taken, you can use any other port, like 90 for example. In such a case, use -p 90:80 as the parameter.
To make sure that your new container is up and running, type this command in terminal:
docker ps
You should see container ID, the image it was created from, ports, name and status:

Now that our container is up and running, let’s make it do its job. Open your web browser and type localhost/phpinfo.php in the address bar.
Again, if port 80 didn’t work for you, you must specify the port you’ve chosen manually now: localhost:90/phpinfo.php. That’s because port 80 is a default port so you don’t have to specify it, but any other port must be specified.
You should see this web page:

This means that everything works and you are running PHP 7.4.30 on your local web server! Great work!
Adding database
Our php-container can’t work with a database because the image we used contains only PHP and the Apache web server.
To add a database server like MySQL to our development environment, we need to create another container for the database and connect it to our php-container, so the services inside those containers can talk to each other.
We will start again in Docker Hub and search for mysql:

And sure enough, there is an official image maintained by MySQL team.
Let’s create our own mysql-container with this command:
docker run -d --name mysql-container -e MYSQL_ROOT_PASSWORD=secret mysql
When you run this command, this is what will happen:
- Docker will look for mysql:latest image on your computer.
- If it’s not available, it will pull it from Docker Hub and then spin up the container from it.
Docker is trying to save your disk space. If an image is already on your machine, it will not download yet another copy. Remember this concept. We will make use of it later.

Run docker ps command again. You should see two containers, both are running.

This demonstrates that you can immediately spin up a container from an already existing image. Only when you want to create your own image do you need to build it first and then spin up a container from it.
Let’s write some PHP code to test whether the MySQL server is really working. Since we are running PHP 7.4.30, we can use the mysqli extension to connect to our database.
In src directory, create a new file and call it mysql.php.
touch ~/Desktop/docker-apache-php7/src/mysql.php
Let’s place this code inside:
<?php $servername = "mysql"; $username = "root"; $password = "secret"; // Create connection $conn = mysqli_connect($servername, $username, $password); // Check connection if (!$conn) { die("Connection failed: " . mysqli_connect_error()); } echo "Connected successfully"; ?>
This is a very simple php code. It tries to connect to the database server with the credentials we provided.
If the connection cannot be established, it will display an error message “Connection failed”. If the connection is successful, it will display a success message “Connected successfully”.
Now go to localhost/mysql.php in your web browser. You should get this message:

Our mysql.php file can not be found, even though it is in the same directory as phpinfo.php file which works just fine. What’s going on?
We added a new file and thus changed the content of our project, but we are still running the old php-container based on the original php-image which has no clue about the changes we have just made.
To fix this, we need to rebuild our php-image and spin up a new container from the updated image.
If you think that this is a lot of hassle, you’re right, but stay with me just for now. You will truly appreciate the feature called volumes I will explain later, once we get through this inconvenience.
First, let’s stop the php-container we have created from php-image by running this command:
docker stop php-container
To get the list of all existing containers, including those that are not running, use the docker ps -a command. You can see that php-container has exited:

Once the container is stopped, we can remove it with this command:
docker rm php-container
You can remove the container even while it is running. In that case, you need to add an -f option to the end of the command above.
Once the php-container is removed, you can remove the php-image as well.
docker rmi php-image
If you tried to remove php-image while the php-container still existed, Docker would protest.
Now we can rebuild our php-image again and the only reason for that is to copy our new mysql.php file into /var/www/html folder.
Remember the instruction from Dockerfile? Here it is again: COPY src/ /var/www/html/
This is why we did all of this. To get our new mysql.php copied from the src folder to the /var/www/html folder. Luckily, there is a much quicker way, and you will learn it soon.
Let’s build our image again:
docker build -t php-image .
and spin up the updated container from it:
docker run -p 80:80 -d --name php-container php-image
Navigate to localhost/mysql.php from your web browser. The file apparently exists, but we have another problem:

The official PHP image is very lightweight, so it doesn’t include many extensions, and mysqli is one of those missing. This means that our PHP doesn’t know about any function called mysqli_connect().
To fix this, we need to add mysqli extension to our php-image. But don’t be scared, we won’t undergo again the same painful process of deleting and recreating everything.
You can rebuild the image directly and then recreate the container, without first going through the whole process of stopping the container, removing it, and removing the image.
But I didn’t tell you sooner because I wanted you to try all these commands so you know how to manage containers and images. I hope you will forgive me this pesky move 🙂
Go to your Dockerfile and add this line at the bottom:
FROM php:7.4.30-apache
COPY src/ /var/www/html/
EXPOSE 80
RUN docker-php-ext-install mysqli
This will add mysqli extension to our PHP image. Now you can just run this command in terminal:
docker build -t php-image .
You can see that in Step 3/3, the mysqli extension has been added to our image:

If you list all images with docker image ls, you can see that php-image has been created only about a minute ago.

This means that if you build an image with the same name, the name is simply moved to the new image; the original one loses its tag and shows up as <none> in the list (a so-called dangling image).
The same can’t be done with the container, though. If you try and run this command now while the original container is still running…
docker run -p 80:80 -d --name php-container php-image
…you will get the error message that the container with the same name already exists.
You need to stop and remove the currently running php-container first. As I mentioned already, you can do both of these two steps at the same time by using an -f option:
docker rm php-container -f
Now you can spin up the php-container again, but this time from the updated php-image:
docker run -p 80:80 -d --name php-container php-image
Alternatively, you can spin up a new container with a different name and keep the original container running. But since the original port 80 is already taken, you’ll need to use a different port (-p 90:80) and specify it explicitly when accessing localhost (localhost:90/mysql.php).
Navigate to localhost/mysql.php from your web browser again. Even though we get another warning, we are getting closer because the new error message comes directly from mysqli_connect() function.
That means that it exists and PHP knows about it. But it seems like there is a problem with a network address:

This is caused by the fact that we have two separate containers. One for PHP and Apache server (php-container) and another one for MySQL server (mysql-container).
The problem is that they don’t know about each other, they don’t talk to each other. Let’s fix this.
Stop the php-container once again and remove it at the same time:
docker rm php-container -f
Now run this command:
docker run -p 80:80 -d --name php-container --link mysql-container:mysql php-image
You are already familiar with this command except for the --link parameter, which says that we want to link our php-container to the mysql-container.
Again, navigate to localhost/mysql.php from your web browser, this time you should see the message that you are connected to the database server:

In order to be able to modify the content of our src folder without the need to rebuild images all the time, we will add the -v option to our command.
So for the last time, stop and remove php-container:
docker rm php-container -f
Run the command with this new parameter:
docker run -p 80:80 -d -v ~/Desktop/docker-apache-php7/src/:/var/www/html/ --name php-container --link mysql-container:mysql php-image
This option should be quite familiar. We used something similar in our Dockerfile to tell our image to copy the content of our src folder to the default directory of Apache web server inside the container.
This time, we create a volume, which means those two locations will stay in sync. In fact, we mount our folder from the Desktop to a location inside the container.
Once you make any kind of change in src folder, it will be automatically available in /var/www/html folder in Apache web server.
Let’s test this! Go to your mysql.php file and add “**AMAZING!**” at the end of the echo command like this:
<?php $servername = "mysql"; $username = "root"; $password = "secret"; // Create connection $conn = mysqli_connect($servername, $username, $password); // Check connection if (!$conn) { die("Connection failed: " . mysqli_connect_error()); } echo "Connected successfully **AMAZING!**"; ?>
Save the file and refresh the browser! Isn’t that amazing? 🙂

Docker Compose

So far, we did it all manually. We configured and built images, created containers, and linked them together. If you work with two or three containers, it is doable, even though we have spent quite some time on it.
However, if you need to set up the environment with many more containers, it will become very tedious to go through all those steps manually every time.
Luckily, there is a better way. Docker Compose is a tool for defining and running multi-container Docker applications.
It allows you to create a YAML configuration file where you configure your application’s services and define all the steps necessary to build images, spin up containers, and link them together. Finally, once all this is done, you set it all in motion with a single command.
Let’s take a look at how this works. This time, we will create a LEMP stack, which consists of Linux, Nginx, MySQL, and PHP 7.
It is generally recommended to have one process or microservice per container, so we will separate things here. We will create six containers and orchestrate them with Docker Compose.
As we already did in the previous section, we will again use official images and extend them with our Dockerfiles. First, let’s delete all the containers and images we have created so far and start with a clean slate.
To list all containers:
docker ps -a
To delete both containers:
docker rm php-container -f
docker rm mysql-container -f
To list all images:
docker image ls
To delete all unused images:
docker image prune -a
To make sure everything is gone, list all containers (docker ps -a) and all images (docker image ls) once again.
All clear? Great! Let’s begin!
Go to your Desktop, create a new folder called docker-nginx-php7, and open this folder in VS Code.
Nginx web server
Let’s start with a web server. Instead of Apache, we will use Nginx this time. First, we will check if there is any official image on Docker Hub. And sure enough, here it is:

We will choose the latest tag. I hope you remember that the name of the image and the tag go together like this: nginx:latest

Create a new file in your docker-nginx-php7 directory and save it as docker-compose.yml:
touch ~/Desktop/docker-nginx-php7/docker-compose.yml
Write this text inside:
nginx:
  image: nginx:latest
  container_name: nginx-container
  ports:
    - 80:80
This should be familiar. Remember when we spun up our php-container? We used this command in the terminal: docker run -p 80:80 -d --name php-container php-image.
Now, instead of running this command, we will take the options and save them in a configuration file. Then, we will let Docker Compose run commands for us by following the instructions in this file.
Save the file. This is what it should look like:

Make sure you’re in the docker-nginx-php7 directory in your terminal and run this command:
docker-compose up -d
Docker Compose will pull Nginx image from Docker Hub, create a container and give it a name we specified. Then, it will start the container for us. Docker Compose will do all of these steps automatically.
I gave the container a specific name just for educational purposes here, so we can easily identify it. But it’s not a good practice in general because container names must be unique. If you specify a custom name, you won’t be able to scale that service beyond one container. So it’s probably better to let Docker assign automatically generated names. In this case, I wanted you to understand how things are working.
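For comparison, here is what the same service would look like without a fixed name (a sketch; Compose would then generate a name such as docker-nginx-php7_nginx_1, and the service could be scaled to several containers):

```yaml
nginx:
  image: nginx:latest
  ports:
    - 80:80
```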
Use the familiar docker ps command to see the list of running containers.
In your web browser, navigate to localhost and you should see this welcome message:

You don’t have to specify the port number since 80 is a default value, but if your default port is already taken by another service (like in my case) you need to specify the custom port you use (localhost:90).
PHP
Let’s say that we want to add PHP to the mix and we want it to be automatically downloaded, configured, and started.
We also want to modify our Nginx web server a bit. You know the drill. If you want to modify the official image and add your own changes, you need to use Dockerfile as we already did in the previous section.
Let’s do this again. First, we will create an nginx directory inside our docker-nginx-php7 folder.
In this directory, we will create a new Dockerfile:
touch ~/Desktop/docker-nginx-php7/nginx/Dockerfile
Next, we will create a new index.php file, saved in the www/html directory inside the docker-nginx-php7 folder, with this content:
<!DOCTYPE html>
<html>
<head>
    <title>Hello World!</title>
</head>
<body>
    <h1>Hello World!</h1>
    <p><?php echo 'We are running PHP, version: ' . phpversion(); ?></p>
</body>
</html>
In VS Code, you can quickly create a new file and the whole new directory structure at the same time. Instead of typing just the name of the file, include the whole path www/html/index.php. VS Code will create the file for you and both directories as well!
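If you prefer the terminal, the equivalent of that VS Code shortcut is mkdir -p plus touch (relative paths shown; adjust them if your project folder lives elsewhere):

```shell
# Create both nested directories and the empty index.php in one go
mkdir -p docker-nginx-php7/www/html
touch docker-nginx-php7/www/html/index.php
```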

Your folder structure should look like this now:

Nginx configuration
To configure our Nginx web server, we will use a file called default.conf. Create this file in the nginx folder and add this content inside:
server {
    listen 80 default_server;
    root /var/www/html;
    index index.html index.php;

    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt  { access_log off; log_not_found off; }

    access_log off;
    error_log /var/log/nginx/error.log error;

    sendfile off;

    client_max_body_size 100m;

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_intercept_errors off;
        fastcgi_buffer_size 16k;
        fastcgi_buffers 4 16k;
    }

    location ~ /\.ht {
        deny all;
    }
}
Now back to the Dockerfile for Nginx. Write these two lines in it and save the changes:
FROM nginx:latest
COPY ./default.conf /etc/nginx/conf.d/default.conf
This means that we will start with the default nginx image (nginx:latest), but then, we will use our own configuration we have just saved in the default.conf file and copy it to the location of the original configuration.
Now we need to tell Docker to use our own Dockerfile instead of downloading the original image. Since the Dockerfile is inside the nginx directory, we need to point to that directory. So instead of image: nginx:latest in the docker-compose.yml file, we will use build: ./nginx/
nginx:
  build: ./nginx/
  container_name: nginx-container
  ports:
    - 80:80
We will also create a volume so that both the Nginx web server and PHP can see the content of our www/html/ directory, namely the index.php file:
nginx:
  build: ./nginx/
  container_name: nginx-container
  ports:
    - 80:80
  volumes:
    - ./www/html/:/var/www/html/
This content will stay in sync with the container’s directory /var/www/html/ and, more importantly, it will persist even when we decide to destroy the containers.
Next, we will create a new php-container using the official PHP image, this time with FPM, which minimizes memory consumption and improves performance. We also declare port 9000, the one we set in the default.conf file, so the nginx-container can reach PHP-FPM:
nginx:
  build: ./nginx/
  container_name: nginx-container
  ports:
    - 80:80
  volumes:
    - ./www/html/:/var/www/html/
php:
  image: php:7.4.30-fpm
  container_name: php-container
  expose:
    - 9000
  volumes:
    - ./www/html/:/var/www/html/
And finally, we need to link our nginx-container to php-container:
nginx:
  build: ./nginx/
  container_name: nginx-container
  ports:
    - 80:80
  links:
    - php
  volumes:
    - ./www/html/:/var/www/html/
php:
  image: php:7.4.30-fpm
  container_name: php-container
  expose:
    - 9000
  volumes:
    - ./www/html/:/var/www/html/
You might wonder what the difference is between ports and expose. Exposed ports are accessible only by the containers they were exposed to. In our case, the php-container exposes port 9000 only to the linked container, which happens to be the nginx-container. Ports defined under ports are also accessible from the host machine, so in my case my MacBook, or rather the web browser I will use to access those ports.
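To make the distinction concrete, here is the relevant fragment of our Compose file with comments added (the comments are mine, not part of the earlier listing):

```yaml
php:
  expose:
    - 9000    # reachable only from other containers, here nginx via the link
nginx:
  ports:
    - 80:80   # host:container mapping, published to the host so your browser can use localhost
```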
Even though our nginx-container is still running, we can run this command:
docker-compose up -d
This time, Docker will pull the php:7.4.30-fpm image from Docker Hub and create a new image based on the instructions in our Dockerfile.
As you can see, Docker notes that it built the image for the nginx service only because it didn’t exist yet. If this image had already existed, Docker wouldn’t build it and would use the existing image instead.

This is very important because even if you change the Dockerfile in the future, Docker will ignore those changes unless you specifically tell it to rebuild the existing image using the command docker-compose build.
Go ahead and take a look at the list of all images:
docker image ls
You should see two images in the list: the official php image that has been pulled from Docker Hub, and our modified version of the official nginx image, whose name is docker-nginx-php7_nginx.
This name is based on the name of the directory where our docker-compose.yml file is located. The part after the underscore (_) is the name of the service the image was built for, in our case nginx.

Run docker ps in terminal to see the list of containers:
docker ps

If your nginx-container is not running, use the docker logs nginx-container command in the terminal to see what the problem is. It will probably be some kind of typo in the default.conf file.
Even though we didn’t stop the original nginx-container based on the official nginx image, it’s not only stopped, it’s completely gone.
Instead, we have our new modified nginx-container running, but this one is spun up from our custom docker-nginx-php7_nginx image.
If you go back to your web browser and refresh the page, you should see this:

Let’s see if the mounted directory works as expected. Go to your index.php file and write AMAZING! inside the <h1> tag like this:
<!DOCTYPE html>
<html>
<head>
    <title>Hello World!</title>
</head>
<body>
    <h1>Hello World! AMAZING!</h1>
    <p><?php echo 'We are running PHP, version: ' . phpversion(); ?></p>
</body>
</html>
When you refresh the page, AMAZING! will appear:

Data container
As you might have noticed, we have mounted the same directory www/html/ to both nginx-container and php-container.
While this is perfectly legit, it is a common practice to have a special data container for this purpose. A data container holds the data, and all the other containers are connected (linked) to it.
In order to set this up, we need to change our docker-compose.yml file once again:
nginx:
  build: ./nginx/
  container_name: nginx-container
  ports:
    - 80:80
  links:
    - php
  volumes_from:
    - app-data
php:
  image: php:7.4.30-fpm
  container_name: php-container
  expose:
    - 9000
  volumes_from:
    - app-data
app-data:
  image: php:7.4.30-fpm
  container_name: app-data-container
  volumes:
    - ./www/html/:/var/www/html/
  command: "true"
As you can see, we added the app-data-container, which uses the same volumes parameters we used for php-container and nginx-container so far.
This data container will hold the application code only, so it doesn’t need to run. It just needs to exist to be accessible, but since it won’t serve any other purpose, there is no need to keep it running and thus wasting resources.
To save some disk space, we use the same official PHP image we have already pulled. We don't need to pull any new image for this purpose; the PHP image will work just fine.
Finally, we told Docker to mount volumes from app-data-container for both nginx-container and php-container (the volumes_from option), so those two services no longer need a volumes option of their own.
Run docker-compose up -d once again:
docker-compose up -d
As you can see in the terminal, Docker has just created a new app-data-container and recreated php-container and nginx-container:

Now, let’s see the list of containers, but this time, let’s display all containers, not just the those that are running:
docker ps -a

As you can see, the app-data-container has been created but it’s not running because there is no reason for it to run. It only holds data.
And it has been created from the same image as php-container, so we saved the hundreds of megabytes we would otherwise need if we pulled a dedicated data-only image from Docker Hub.
MySQL database
We need to modify our PHP image because we need to install the extension that will allow PHP to connect to MySQL. This time, we will use the PDO connector instead of the mysqli connector.
To do so, we will create a new folder named php inside our docker-nginx-php7 directory and we will place a new Dockerfile inside with this content:
FROM php:7.4.30-fpm
RUN docker-php-ext-install pdo_mysql
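As a side note, docker-php-ext-install accepts several extensions at once, so if you ever wanted the classic mysqli driver next to PDO, a variant of this Dockerfile (not needed for this tutorial) could look like this:

```dockerfile
FROM php:7.4.30-fpm
# Install both the PDO and mysqli MySQL drivers in a single layer
RUN docker-php-ext-install pdo_mysql mysqli
```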
Your folder structure should look like this now:

Now we need to change our docker-compose.yml file again. We will change the way the php-container is built.
Next, we will add mysql-container and mysql-data-container and finally, we will link php-container to mysql-container.
nginx:
  build: ./nginx/
  container_name: nginx-container
  ports:
    - 80:80
  links:
    - php
  volumes_from:
    - app-data
php:
  build: ./php/
  container_name: php-container
  expose:
    - 9000
  links:
    - mysql
  volumes_from:
    - app-data
app-data:
  image: php:7.4.30-fpm
  container_name: app-data-container
  volumes:
    - ./www/html/:/var/www/html/
  command: "true"
mysql:
  image: mysql:latest
  container_name: mysql-container
  volumes_from:
    - mysql-data
  environment:
    MYSQL_ROOT_PASSWORD: secret
    MYSQL_DATABASE: zavrel_db
    MYSQL_USER: user
    MYSQL_PASSWORD: password
mysql-data:
  image: mysql:latest
  container_name: mysql-data-container
  volumes:
    - /var/lib/mysql
  command: "true"
To test our MySQL setup, we will modify our index.php as well, so we can try to access our database:
<!DOCTYPE html>
<head>
  <title>Hello World!</title>
</head>
<body>
  <h1>Hello World! AMAZING!</h1>
  <p><?php echo 'We are running PHP, version: ' . phpversion(); ?></p>
  <?php
  $database = "zavrel_db";
  $user = "user";
  $password = "password";
  $host = "mysql";

  $connection = new PDO("mysql:host={$host};dbname={$database};charset=utf8", $user, $password);
  $query = $connection->query("SELECT TABLE_NAME FROM information_schema.TABLES WHERE TABLE_TYPE='BASE TABLE'");
  $tables = $query->fetchAll(PDO::FETCH_COLUMN);

  if (empty($tables)) {
      echo "<p>There are no tables in database " . $database . ".</p>";
  } else {
      echo "<p>Database " . $database . " has the following tables:</p>";
      echo "<ul>";
      foreach ($tables as $table) {
          echo "<li>$table</li>";
      }
      echo "</ul>";
  }
  ?>
</body>
This new script will take values we defined for the database and try to establish the database connection.
Notice that database, user and password variables are the same as environment values we set for our mysql-container.
Once the connection is established, the script will try to select all tables from INFORMATION_SCHEMA where table type is BASE TABLE.
Now, if you’re not familiar with MySQL, this might be a bit confusing for you.
Every MySQL instance has a special database that stores information about all the other databases the server maintains. This special database is called INFORMATION_SCHEMA and it contains a number of read-only tables. They are actually views, not base tables, so they won't match our query. Ordinary tables that you create yourself are of the BASE TABLE type.
So when we select tables of type BASE TABLE, we are looking only for regular user-created tables, and those we have yet to create.
If it’s too much for you, don’t worry, it will all make sense soon.
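For reference, this is essentially the query the script runs; adding a TABLE_SCHEMA condition (an optional refinement, not in the script above) restricts it to our database:

```sql
SELECT TABLE_NAME
FROM information_schema.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
  AND TABLE_SCHEMA = 'zavrel_db';
```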
Anyway, once you have the Dockerfile and index.php updated, run docker-compose up -d again. Docker will pull the mysql image and build a new PHP image with the PDO extension needed for the database connection.
Finally, the script will:
- start app-data-container,
- create mysql-data-container and mysql-container,
- recreate php-container.

Check with docker ps -a that you have five containers now, two of them exited (mysql-data-container and app-data-container).

Refresh index.php in your web browser. You should see the list of tables in the database:

There are even more tables, but those are not visible to a regular user. If you want to see them, go to index.php and change the value of $user to "root" and $password to "secret":

This way, you will get access to everything!
Refresh the browser once more:

Let’s put back our regular user who can see only what he should see:

Deep down the rabbit hole
So far, containers have been like black boxes for us. We ran them, we listed them, but we never looked inside.
That’s about to change now. I will show you how you can get right inside mysql-container and work with MySQL server from within.
Run this command in terminal:
docker exec -it mysql-container /bin/bash
Now you are inside the container (the -i flag keeps STDIN open and -t allocates a terminal, which together give you an interactive shell)! You can tell by the new prompt in your terminal:

You can take a look around as you would in any other Linux system:
- ls command will show you the list of files and directories,
- pwd command will show the current directory, which is the root directory (/),
- uname -or command will show you the kernel release and that this is actually a Linux operating system (5.10.104-linuxkit GNU/Linux).
Do you remember how we defined the volumes for mysql-data-container in the docker-compose.yml file?

Let’s take a look at this directory:
cd /var/lib/mysql
ls command will show you its content and you can spot our zavrel_db database there:

All right, let’s end this quick trip by going back to the root directory:
cd /
I want you to stop for a while now to let this sink in and appreciate it. You are working on your physical computer. This computer is running an operating system, Windows or macOS (if you're on Linux, it's a bit different). Inside your operating system, you are running a Docker container, which is basically a Linux machine.
Now, we will go even deeper and run another interface to work with the database server. Can you see how we go deeper and deeper, layer after layer, down the rabbit hole? 🙂
To get access to the MySQL CLI (command line interface), we need a user and a password. Luckily for us, we already created both when we set up the environment variables for our mysql-container inside the docker-compose.yml file.
I hope you noticed that we also set up the root password as an environment variable. Remember this line?

You might ask how we know that there is a user named root. Well, there is always this user. That's why we were able to set a password for it with the MYSQL_ROOT_PASSWORD variable without even questioning its existence.
To sign in to the MySQL server, though, we won't use root access, because that would give us too many results, since root can see everything.
Run this command from the terminal to sign in to the MySQL server as a regular user:
mysql -uuser -ppassword
-uuser means the user is "user", -ppassword means the password is "password" (with the -p option, the password must follow immediately, with no space).
Once you run this command, you will get deeper inside the world of MySQL server.
Again, you can tell by the prompt which changed now from bash-4.4# to mysql> that we are somewhere else:

Inside mysql, there are different rules and different commands.
Start with the command show databases;
show databases;

You should see a table with a list of three databases. One of them is our zavrel_db database.
Remember when we created it? We defined it while preparing our mysql-container in the docker-compose.yml file:

Let’s create a new table in our database. First, we need to select it, so MySQL knows which database we want to work with:
use zavrel_db
You will get a message that the database has been changed.
Now, we can create a new table:
CREATE TABLE users (id int);
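If you want to convince yourself that the table is really there before leaving the CLI, a few more optional statements will show it and put a row inside:

```sql
SHOW TABLES;                       -- lists the "users" table
DESCRIBE users;                    -- shows the single "id" column
INSERT INTO users (id) VALUES (1);
SELECT * FROM users;               -- returns the row we just inserted
```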
Go to your web browser and refresh the page, you will see the users table at the bottom of the list:

Ok, we are done here, let's get all the way back to the familiar terminal of our computer. First, we need to leave the MySQL CLI. This can be done with the exit command:
exit
Go ahead and run it! MySQL will say Bye and you are back inside your mysql-container. Again, you can tell by the prompt bash-4.4#.
Let’s go one layer up. To leave mysql-container, just use the shortcut CTRL + D or type exit and hit Enter.
exit
See? We are finally back to our computer terminal! How was it? Did you like the trip? I hope you did!

I wanted to show you this rather complicated way of working with databases and tables so you can truly appreciate the web client we will learn about in a minute.
But first, I want to go back to volumes once again, because we need to address a few more things about them.
Inspecting containers
Do you remember how I told you that we didn't really care where Docker stores the volumes of mysql-data-container on our computer, because we won't access them directly anyway?
Well, if you are curious where they are located, there is a way to find out.
Run this command:
docker inspect mysql-data-container
Look for the Mounts section in the output. The Source attribute shows the location of the database data on your computer. If you're on Mac, it should be something like /var/lib/docker/volumes/ and so on:

Dangling volumes
When you create a container with mounted volumes and destroy that container later, mounted volumes will stay intact unless you specifically say you want them destroyed as well. Such orphan volumes are called dangling volumes.
So far we have used the command docker rm container-name -f to remove containers, but if you want to destroy the mounted volumes as well, you need to add the -v option.
So the command will look like this: docker rm -v container-name -f.
But what about the containers we have already destroyed without destroying their mounted volumes? Let's check whether there are any such volumes.
First, let’s list all the volumes we have created so far:
docker volume ls
Now let’s narrow the list by adding the filter for dangling volumes only:
docker volume ls -qf dangling=true
-q stands for quiet, which displays only the volume names, and -f stands for filter; you can write both options together as -qf.
It seems that we have quite a lot of dangling volumes here:

To delete them all, use this command that will remove volumes not used by any container:
docker volume prune
Notice how much free space has been reclaimed. It’s a good idea to run this command frequently if you’re running out of disk storage.

Now if you check the volumes again with docker volume ls, you should have only one volume left.
phpMyAdmin
Let’s move on and spin up our last container. phpMyAdmin is a great tool for managing MySQL databases directly from the web browser.
No one will force you to stop your trips deep inside MySQL CLI if that’s what you like, but a web interface is much more convenient in my opinion.
Add the following lines at the end of your docker-compose.yml file:
phpmyadmin:
  image: phpmyadmin/phpmyadmin
  container_name: phpmyadmin-container
  ports:
    - 8080:80
  links:
    - mysql
  environment:
    PMA_HOST: mysql
By now, everything should be fairly clear. We start with the official phpMyAdmin image and publish the container's port 80 to port 8080 of our computer, so we can access phpMyAdmin from the web browser.
We can't use port 80 on the host, because it's already taken by the Nginx web server. That's why we use port 8080 instead.
Finally, we will link this container to our mysql-container and set the PMA_HOST environment variable.
Go ahead and run this command once again:
docker-compose up -d
Docker will pull phpMyAdmin image and create phpmyadmin-container.
Go to your web browser, type localhost:8080 and hit Enter. You should be presented with this login screen:

Log in as a regular user (user / password). You’re in MySQL server! Check the list of databases on the left pane and click on zavrel_db. Can you see the table users we have recently created inside MySQL CLI?

Give yourself a little break, maybe a cup of coffee, and let it all digest a bit. When you’re ready come back and we will continue with even more exciting stuff!
GitHub Volume
Mounting a local directory to make it accessible for nginx-container and php-container is fine until you need to deploy your application to some remote virtual private server (VPS).
In such a case, it would be great to have your code copied to a remote volume automatically. In this section, I will show you how to use GitHub for this.
Let’s make a copy of our docker-compose.yml file and save it as docker-compose-github.yml. We will make some changes to our app-data-container so it won’t mount a local directory but rather get a repository from GitHub.
In case you have your code on GitHub in a public repository, this will make it very easy to spin up your development environment on a remote server with the code cloned from your repository.
First, we need to create a Dockerfile for app-data image. Create a new folder called app-data and save the Dockerfile there with this content:
FROM php:7.4.30-fpm
RUN apt-get update && apt-get install -y git
RUN git clone https://github.com/zavrelj/docker-tutorial/ /var/www/html/
VOLUME /var/www/html/
We are using the official PHP image we already pulled, but on top of that, we update the package index of the underlying Debian (bullseye) distro and then install git.
Next, we will clone my public repository I have created for this article and save it inside the /var/www/html directory inside our container.
Finally, we will create a volume from this directory, so other containers, namely nginx-container and php-container can access it.
Your folder structure should look like this now:

Now, we need to change app-data image instructions in our docker-compose-github.yml file like this:
app-data:
  build: ./app-data/
  container_name: app-data-container
  command: "true"
Let’s clean up everything and start from scratch.
Stop all containers created with the docker-compose command:
docker-compose stop
Remove all those stopped containers including volumes that were attached to them:
docker-compose rm -v
Clean dangling volumes:
docker volume prune
In order to use our new docker-compose-github.yml file, we need to tell docker-compose about it, otherwise, it would use the default docker-compose.yml as always.
Rebuild the images with the new configuration file:
docker-compose -f docker-compose-github.yml build
And spin up all containers again:
docker-compose -f docker-compose-github.yml up -d
Navigate to localhost in the web browser and you should see this:

Digital Ocean
Let’s get our local development environment to a remote server. Digital Ocean is a great service for that. If you don’t have an account yet, sign up with my referral link and you will get $100 in credit over 60 days.
Once you’re in, create a new Droplet:

Select Marketplace tab and search for Docker:

Choose the smallest available size of droplet, it’s more than enough for our purposes:

Since I want you to use SSH for the remote access to your droplet, you need to set it up, unless you already have it.
The whole process is quite easy. Type this command in terminal:
ssh-keygen -t rsa
When you’re asked where to save the key, just hit Enter. Next, choose the password for the newly generated key.
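By the way, ssh-keygen can also run non-interactively, which is handy in scripts. This sketch generates a throwaway key pair in a temporary directory, so it won't touch your real ~/.ssh (the empty -N passphrase is just for the demo):

```shell
# -f: output file, -N: passphrase (empty here), -q: quiet
tmpdir=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -f "${tmpdir}/id_rsa" -N "" -q
# The .pub half is what you would paste into DigitalOcean:
cat "${tmpdir}/id_rsa.pub"
```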
Once you see this, your key is ready:

Run this command to display the public key:
cat ~/.ssh/id_rsa.pub
Select it and use ⌘ + C (or Ctrl + C on Windows) shortcut to copy it to the clipboard:

Scroll down in your Droplet setup to Authentication and click the New SSH Key button:

Paste the public key from the clipboard to the form, fill out the name of your computer and click the Add SSH Key button:

Make sure your computer is selected for SSH access:

Finally, hit that green button at the bottom to Create Droplet:

Once your Droplet is created, write down its IP address:

Transferring the project folder
If you have followed me step by step, you should have the docker-nginx-php7 folder on your Desktop.
We will copy this folder to our Droplet so we can run Docker Compose with our YML configuration file remotely from the Droplet.
To copy the folder, we will use the rsync command:
rsync -r -e ssh ~/Desktop/docker-nginx-php7 root@165.227.82.24:~/
Instead of my IP in the command, use the actual IP address of your Droplet.
Terminal will ask for your SSH key password and then it will create a copy of docker-nginx-php7 folder inside the home folder of the user root (/root).
Now, let's check if everything has been transferred. SSH into your remote server with ssh root@165.227.82.24 (use your actual IP address instead of mine). Then navigate to the docker-nginx-php7 directory and check its content:
ssh root@165.227.82.24
cd docker-nginx-php7
ls
Can you see your familiar directory structure including two configuration files?

Nice! Everything seems to be in place!
There's no Docker Compose on this particular droplet, but it's fairly easy to install it. First, we need to install python3-pip:
apt-get update
apt-get -y install python3-pip
Next, we can install Docker Compose via pip:
pip install docker-compose
Now we are ready to let Docker Compose do its magic. Let’s run our familiar command that will automate the whole process of pulling and building images, getting the code from GitHub and spinning up all containers.
Since there are no images to rebuild, we can use the up command directly:
docker-compose -f docker-compose-github.yml up -d
Once everything is done and all containers are running, you can navigate to the IP address of your Droplet in your web browser.
Octocat should be waiting for you:

And if you add port 8080 behind the IP address, you will get phpMyAdmin welcome screen:

Go ahead and log in with user / password or root / secret; both will work. Make sure that our zavrel_db database is there:

Once you're done with Digital Ocean, make sure to destroy your running Droplet so you won't be billed. And if you used my referral link and received the $100 in credit, don't waste it by leaving a Droplet you no longer need running after you finish this tutorial.
Alright! That’s all. I hope you have learned something useful today.