Docker Compose is a tool that was developed to help define and share multi-container applications. With Compose, you can create a YAML file to define the services and, with a single command, spin everything up or tear it all down.
The big advantage of using Compose is you can define your application stack in a file, keep it at the root of your project repo (it's now version controlled), and easily enable someone else to contribute to your project. Someone would only need to clone your repo and start the compose app. In fact, you might see quite a few projects on GitHub/GitLab doing exactly this now.
The Docker extension includes several Visual Studio Code tasks to control the behavior of Docker build and run, and form the basis of container startup for debugging. The tasks allow for a great deal of control and customization.
So, how do you get started?
If you installed Docker Desktop for either Windows or Mac, you already have Docker Compose! Play-with-Docker instances already have Docker Compose installed as well. If you are on a Linux machine, you will need to install Docker Compose using the instructions here.
After installation, you should be able to run the following and see version information.
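The exact command isn't preserved in this copy, but the standard version check is:

```bash
docker-compose version
```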
At the root of the app project, create a file named `docker-compose.yml`.
In the compose file, we'll start off by defining the schema version. In most cases, it's best to use the latest supported version. You can look at the Compose file reference for the current schema versions and the compatibility matrix.
Next, define the list of services (or containers) you want to run as part of your application.
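A minimal skeleton, assuming schema version 3.8 (any currently supported version works):

```yaml
version: "3.8"

services:
  # service definitions will go here
```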
And now, you'll start migrating a service at a time into the compose file.
As a reminder, you started your app container with a single `docker run` command (in Windows PowerShell, replace the `\` line-continuation characters with backticks).
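That command isn't reproduced in this copy; for the Node-based todo app used earlier in the tutorial, it would have looked roughly like this (the image name and the startup command are assumptions):

```bash
# Assumed original app container command; node:12-alpine and the yarn
# commands are illustrative, not confirmed by this copy of the tutorial.
docker run -dp 3000:3000 \
  -w /app -v ${PWD}:/app \
  node:12-alpine \
  sh -c "yarn install && yarn run dev"
```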
First, define the service entry and the image for the container. You can pick any name for the service. The name will automatically become a network alias, which will be useful when defining the MySQL service.
Typically, you'll see the `command` close to the `image` definition, although there is no requirement on ordering. So, go ahead and move that into the file.
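At this stage, the service entry might look like the following sketch (the image and command carry over the assumptions from the `docker run` command above):

```yaml
# docker-compose.yml (in progress) -- image and command are assumed
services:
  app:
    image: node:12-alpine
    command: sh -c "yarn install && yarn run dev"
```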
Migrate the `-p 3000:3000` part of the command by defining the `ports` for the service. You'll use the short syntax here, but there is also a more verbose long syntax available.
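With the port mapping added, the sketch becomes:

```yaml
services:
  app:
    image: node:12-alpine              # assumed image
    command: sh -c "yarn install && yarn run dev"
    ports:
      - "3000:3000"
```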
Next, migrate both the working directory (`-w /app`) and the volume mapping (`-v ${PWD}:/app`) by using the `working_dir` and `volumes` definitions. `volumes` also has a short and long syntax.
One advantage of Docker Compose volume definitions is you can use relative paths from the current directory.
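Using the short volume syntax with a relative path, the service now looks like this (image and command are still assumed):

```yaml
services:
  app:
    image: node:12-alpine              # assumed image
    command: sh -c "yarn install && yarn run dev"
    ports:
      - "3000:3000"
    working_dir: /app
    volumes:
      - ./:/app
```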
Finally, migrate the environment variable definitions using the `environment` key.
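The variable names and values depend on your app; for the todo app's MySQL connection they would look something like this (all names and values here are assumptions):

```yaml
    # nested under services.app; names and values are illustrative
    environment:
      MYSQL_HOST: mysql
      MYSQL_USER: root
      MYSQL_PASSWORD: secret
      MYSQL_DB: todos
```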
Now, it's time to define the MySQL service. You also started that container with a single `docker run` command (again, in Windows PowerShell, replace the `\` line-continuation characters with backticks).
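That command isn't preserved here either; a rough reconstruction (image tag, volume name, and environment values are assumptions) would be:

```bash
# Assumed original MySQL container command; the tag, volume name,
# and environment values are illustrative.
docker run -d \
  -v todo-mysql-data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=secret \
  -e MYSQL_DATABASE=todos \
  mysql:5.7
```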
First, define the new service and name it `mysql` so it automatically gets the network alias. Specify the image to use as well.
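For example (the image tag is an assumption):

```yaml
services:
  mysql:
    image: mysql:5.7   # assumed tag
```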
Next, define the volume mapping. When you ran the container with `docker run`, the named volume was created automatically. However, that doesn't happen when running with Compose. You need to define the volume in the top-level `volumes:` section and then specify the mount point in the service config. By providing only the volume name, the default options are used, although there are many more options available.
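With the named volume declared and mounted (the volume name and mount path are assumptions), the MySQL service looks like this:

```yaml
services:
  mysql:
    image: mysql:5.7                     # assumed tag
    volumes:
      - todo-mysql-data:/var/lib/mysql   # assumed volume name and path

volumes:
  todo-mysql-data:
```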
Finally, you only need to specify the environment variables.
At this point, the complete `docker-compose.yml` should look like this:
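The completed file isn't preserved in this copy; assembling the pieces above gives a sketch like the following (image names, environment values, and the volume name remain assumptions):

```yaml
version: "3.8"

services:
  app:
    image: node:12-alpine                  # assumed image
    command: sh -c "yarn install && yarn run dev"
    ports:
      - "3000:3000"
    working_dir: /app
    volumes:
      - ./:/app
    environment:
      MYSQL_HOST: mysql
      MYSQL_USER: root
      MYSQL_PASSWORD: secret
      MYSQL_DB: todos

  mysql:
    image: mysql:5.7                       # assumed tag
    volumes:
      - todo-mysql-data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: todos

volumes:
  todo-mysql-data:
```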
Now that you have the `docker-compose.yml` file, you can start it up!
First, make sure no other copies of the app and database are running (`docker ps` and `docker rm -f <ids>`).
Start up the application stack using the `docker-compose up` command. Add the `-d` flag to run everything in the background. Alternatively, you can right-click on your Compose file and select the Compose Up option from the VS Code side bar.
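For example, to start the stack in the background:

```bash
docker-compose up -d
```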
When you run this, you should see output showing the volume, the network, and the containers being created.
You'll notice that the volume was created as well as a network! By default, Docker Compose automatically creates a network specifically for the application stack (which is why you didn't define one in the compose file).
Look at the logs using the `docker-compose logs -f` command. You'll see the logs from each of the services interleaved into a single stream. This is incredibly useful when you want to watch for timing-related issues. The `-f` flag 'follows' the log, so it will give you live output as it's generated.
Each log line is prefixed with the service name (often colored) to help distinguish messages. If you want to view the logs for a specific service, add the service name to the end of the logs command (for example, `docker-compose logs -f app`).
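Putting both together:

```bash
# Follow the interleaved logs for all services...
docker-compose logs -f

# ...or just for the app service
docker-compose logs -f app
```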
Tip
Waiting for the DB before starting the app: when the app is starting up, it actually sits and waits for MySQL to be up and ready before trying to connect. Docker doesn't have any built-in support to wait for another container to be fully up, running, and ready before starting another container. For Node-based projects, you can use the wait-port dependency. Similar projects exist for other languages/frameworks.
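As a rough sketch (assuming the `wait-port` CLI and the `mysql` service alias defined above), the app's startup command could block until the database port accepts connections:

```yaml
# Sketch only: wait for the mysql service's port before starting the app
services:
  app:
    command: sh -c "yarn install && npx wait-port mysql:3306 && yarn run dev"
```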
At this point, you should be able to open your app and see it running. And hey! You're down to a single command!
If you look at the Docker extension, you can change the grouping options using the 'cog' and 'group by'. In this instance, you want to see containers grouped by Compose Project name:
If you twirl down the network, you will see the two containers you defined in the compose file.
When you're ready to tear it all down, simply run `docker-compose down`, or right-click on the application in the containers list in the VS Code Docker extension and select Compose Down. The containers will stop and the network will be removed.
Warning
Removing volumes: by default, named volumes in your compose file are NOT removed when running `docker-compose down`. If you want to remove the volumes, you will need to add the `--volumes` flag.
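For example, to tear down the stack and remove the named volumes:

```bash
docker-compose down --volumes
```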
Once torn down, you can switch to another project, run `docker-compose up`, and be ready to contribute to that project! It really doesn't get much simpler than that!
In this section, you learned about Docker Compose and how it helps dramatically simplify the defining and sharing of multi-service applications. You created a Compose file by translating the commands you were using into the appropriate compose format.
At this point, you're starting to wrap up the tutorial. However, there are a few best practices about image building to cover, as there is a big issue with the Dockerfile you've been using. So, let's take a look!
Continue with the tutorial!
This is a guest post from Jochen Zehnder. Jochen is a Docker Community Leader and working as a Site Reliability Engineer for 56K.Cloud. He started his career as a Software Developer, where he learned the ins and outs of creating software. He is not only focused on development but also on the automation to bridge the gap to the operations side. At 56K.Cloud he helps companies to adapt technologies and concepts like Cloud, Containers, and DevOps. 56K.Cloud is a technology company from Switzerland focusing on Automation, IoT, Containerization, and DevOps.
Jochen Zehnder joined 56K.Cloud in February, after working as a software developer for several years. He always tries to make life easier for everybody involved in the development process. One VS Code feature that excels at this is the Visual Studio Code Remote – Containers extension. It is one of many extensions of the Visual Studio Remote Development feature.
This post is based on the work Jochen did for the 56K.Cloud internal handbook. It uses Jekyll to generate a static website out of markdown files. This is a perfect example of how to make life easier for everybody. Nobody should need to know how to install, configure, … Jekyll to make changes to the handbook. With the Remote Development feature, you add all the configurations and customizations to the version control system of your project. This means a small group implements it, and the whole team benefits.
One thing I need to mention is that, as of now, this feature is still in preview. However, I never ran into any issues while using it, and I hope that it will get out of preview soon.
Prerequisites
You need to fulfil the following prerequisites to use this feature:
Enable it for an existing folder
The Remote – Containers extension provides several ways to develop in a container. You can find more information in the documentation, with several Quick start sections. In this post, I will focus on how to enable this feature for an existing local folder.
As with all the other VS Code extensions, you also manage this with the Command Palette. You can either use the shortcut or the green button in the bottom left corner to open it. In the popup, search for Remote-Containers and select Open Folder in Container…
In the next popup, you have to select the folder which you want to open in the container. For this folder, you then need to Add the Development Container Configuration Files. VS Code shows you a list with predefined container configurations. In my case, I selected the Jekyll configuration. After that, VS Code starts building the container image and opens the folder in the container.
If you now have a look at the Explorer, you can see that there is a new folder called `.devcontainer`. In my case, it added two files. The `Dockerfile` contains all the instructions to build the container image. The `devcontainer.json` contains all the needed runtime configurations. Some of the predefined containers will add more files, for example in the `.vscode` folder to add useful Tasks. You can have a look at the GitHub repo to find out more about the existing configurations. There you can also find information about how to use the provided template to write your own.
Customizations
The predefined container definitions provide a basic configuration, but you can customize them. Making these adjustments is easy, and I explain the two changes I had to make below. The first was to install extra packages in the operating system; to do so, I added the instructions to the `Dockerfile`. The second change was to configure the port mappings: in the `devcontainer.json`, I uncommented the `forwardPorts` attribute and added the needed ports. Be aware that for some changes you just need to restart the container, whereas for others you need to rebuild the container image.
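For a Jekyll site, the port-forwarding change might look like this (port 4000 is an assumption based on Jekyll's default serve port):

```jsonc
// .devcontainer/devcontainer.json (excerpt)
{
  "forwardPorts": [4000]
}
```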
Using and sharing
After you open the folder in the container, you can keep working as you are used to. Even the terminal connects to the shell in the container. Whenever you open a new terminal, it will set the working directory to the folder you opened in the container. In my case, it allows me to type in the Jekyll commands to build and serve the site.
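For a standard Bundler-based Jekyll project, those commands would typically be:

```bash
# Build the site, or serve it locally while watching for changes
bundle exec jekyll build
bundle exec jekyll serve
```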
After I made all the configurations and customizations, I committed and pushed the new files to the git repository. This made them available to my colleagues, and they can benefit from my work.
Summary
Visual Studio Code supports multiple ways to do remote development. The Visual Studio Code Remote – Containers extension allows you to develop inside a container. The configuration and customizations are all part of your code. You can add them to the version control system and share them with everybody working on the project.
More Information
For more information about the topic, you can head over to the following links:
The Remote – Containers extension uses Docker as the container runtime. There is also a Docker extension, called Docker for Visual Studio Code. Brian gave a very good introduction at DockerCon LIVE 2020. The recording of his talk Become a Docker Power User With Microsoft Visual Studio Code is available online.
Find out more about 56K.Cloud
We love Cloud, IoT, Containers, DevOps, and Infrastructure as Code. If you are interested in chatting, connect with us on Twitter or drop us an email: info@56K.Cloud. We hope you found this article helpful. If there is anything you would like to contribute or you have questions, please let us know!
This post originally appeared here.