A tragic waste of company capital and resources! Hello VMware! Amid all of this, VMware, Inc. gave the world the virtual machine (VM), and almost overnight the world changed into a much better place! Cue wild celebrations! This was a game changer! IT no longer needed to procure a brand-new oversized server every time the business asked for a new application. More often than not, new apps could run on existing servers that were sitting around with spare capacity.
The fact that every VM requires its own dedicated OS is a major flaw. Every OS needs patching and monitoring. And in some cases, every OS requires a license. All of this is a waste of op-ex and cap-ex. The VM model has other challenges too. Hello Containers! For a long time, the big web-scale players, like Google, have been using container technologies to address the shortcomings of the VM model. In the container model, the container is roughly analogous to the VM.
The major difference is that every container does not require its own full-blown OS. In fact, all containers on a single host share a single OS. It also reduces potential licensing costs and reduces the overhead of OS patching and other maintenance. Net result: savings on the cap-ex and op-ex fronts. Containers are also fast to start and ultra-portable.
Moving container workloads from your laptop, to the cloud, and then to VMs or bare metal in your data center is a breeze. Linux containers. Modern containers started in the Linux world, and are the product of an immense amount of work from a wide variety of people over a long period of time.
Just as one example, Google LLC has contributed many container-related technologies to the Linux kernel. Some of the major technologies that enabled the massive growth of containers in recent years include: kernel namespaces, control groups, union filesystems, and of course Docker.
To re-emphasize what was said earlier — the modern container ecosystem is deeply indebted to the many individuals and organizations that laid the strong foundations that we currently build on. Thank you! Despite all of this, containers remained complex and outside of the reach of most organizations. However, in this book we are restricting our conversation and comments to modern containers that have been made popular by Docker.
Hello Docker! Put another way, Docker was the magic that made Linux containers usable for mere mortals. Windows containers. Over the past few years, Microsoft Corp. has worked hard to bring Docker and container technologies to the Windows platform. At the time of writing, Windows containers are available on the Windows 10 and Windows Server platforms. In achieving this, Microsoft has worked closely with Docker, Inc. The core Windows kernel technologies required to implement containers are collectively referred to as Windows Containers.
The user-space tooling to work with these Windows Containers is Docker. This makes the Docker experience on Windows almost exactly the same as Docker on Linux.
This way developers and sysadmins familiar with the Docker toolset from the Linux platform will feel at home using Windows containers. This revision of the book includes Linux and Windows examples for many of the lab exercises cited throughout the book.
It is vital to understand that a running container shares the kernel of the host it is running on. This means that a containerized app designed to run on a host with a Windows kernel will not run on a Linux host. At a high level, you can think of it like this: Windows containers require a Windows host, and Linux containers require a Linux host. However, it is not quite that simple. For example, Docker for Windows, a product offering from Docker, Inc., can run Linux containers inside of a lightweight VM. This is an area that is developing fast and you should consult the Docker documentation for the latest.
What about Mac containers? There is currently no such thing as Mac containers. However, you can run Linux containers on your Mac using Docker for Mac.
This works by seamlessly running your containers inside of a lightweight Linux VM on your Mac. What about Kubernetes? Kubernetes is an open-source project out of Google that has quickly emerged as the leading orchestrator of containerized apps.
At the time of writing, Kubernetes uses Docker as its default container runtime (the piece of Kubernetes that starts and stops containers, as well as pulls images, etc.). However, Kubernetes has a pluggable container runtime interface called the CRI. This makes it easy to swap out Docker for a different container runtime. In the future, Docker might be replaced by containerd as the default container runtime in Kubernetes.
More on containerd later in the book. Check out my Kubernetes book and my Getting Started with Kubernetes video training course for more info on Kubernetes.
Chapter Summary. We used to live in a world where every time the business wanted a new application, we had to buy a brand-new server for it. Following the success of VMware and hypervisors came a newer, more efficient, and more lightweight virtualization technology called containers.
But containers were initially hard to implement and were only found in the data centers of web giants that had Linux kernel engineers on staff. Then along came Docker Inc.
When people talk about Docker, they can be referring to any of three things:
1. Docker, Inc. the company
2. Docker the container runtime and orchestration technology
3. Docker the open-source project (this is now called Moby)
The Docker runtime creates, manages and orchestrates containers. The software is developed in the open as part of the Moby open-source project on GitHub. Interestingly, Docker, Inc. started out as a platform-as-a-service (PaaS) provider called dotCloud. Behind the scenes, the dotCloud platform leveraged Linux containers.
The dotCloud PaaS business was struggling and the company needed a new lease of life, so it pivoted to focus on Docker. Today, Docker, Inc. has raised significant venture funding, almost all of it after the company pivoted to become Docker, Inc. The company also hosts the annual DockerCon conference. The goal of DockerCon is to bring together the growing container ecosystem and drive the adoption of Docker and container technologies.
The Docker Engine is the infrastructure plumbing software that runs and orchestrates containers. In the same way that ESXi is the core hypervisor technology that runs virtual machines, the Docker Engine is the core container runtime that runs containers.
All other Docker, Inc. products build on the Docker Engine, as shown in Figure 2. All of the other products in the diagram build on top of the Engine and leverage its core capabilities. The Enterprise Edition and the Community Edition both have a stable release channel with quarterly releases. Each Community Edition will be supported for 4 months and each Enterprise Edition will be supported for 12 months.
The Community Edition has an additional monthly release via an edge channel. Starting from the switch to quarterly releases, Docker version numbers follow the YY.MM-xx versioning scheme, similar to Ubuntu and other projects. For example, the first release of the Community Edition in June of a given year would carry that year and month in its version number. Note: Prior to the switch, Docker version numbers followed the major.minor versioning scheme. The last versions prior to the new scheme were in the Docker 1.x series.
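As a quick illustration of the scheme, here is a sketch that derives a YY.MM-style version string for the current month (assumes a POSIX `date`; the `-ce` suffix denotes a Community Edition build):

```shell
# Illustrative only: build a YY.MM-xx version string for the current month.
yy=$(date +%y)   # two-digit year
mm=$(date +%m)   # two-digit month
echo "A CE release cut this month would be versioned ${yy}.${mm}.0-ce"
```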
The Moby project is the set of tools that get combined into things like the Docker daemon and client you can download and install from docker.com. The goal of the Moby project is to be the upstream for Docker, and to break Docker down into more modular components, and to do this in the open. As an open-source project, the source code is publicly available, and you are free to download it, contribute to it, tweak it, and use it, as long as you adhere to the terms of the Apache License 2.0.
Most of the project and its tools are written in Golang, the relatively new systems programming language from Google, also known as Go. This does away with a lot of the old ways where code was proprietary and locked behind closed doors. It also means that release cycles are published and worked on in the open. No more uncertain release cycles that are kept a secret and then pre-announced months in advance with ridiculous pomp and ceremony.
Most things are done in the open for all to see and all to contribute to. The Moby project, and the wider Docker movement, is huge and gaining momentum. It has thousands of GitHub pull requests, tens of thousands of Dockerized projects, not to mention the billions of image pulls from Docker Hub. The project literally is taking the industry by storm! Be under no illusion, Docker is being used! The container ecosystem. One of the core philosophies at Docker, Inc. is often summed up as "batteries included but removable."
This is a way of saying you can swap out a lot of the native Docker stuff and replace it with stuff from 3rd-parties. A good example of this is the networking stack. The core Docker product ships with built-in networking. But the networking stack is pluggable meaning you can rip out the native Docker networking and replace it with something else from a 3rd-party. Plenty of people do that.
In the early days, it was common for 3rd-party plugins to be better than the native offerings that shipped with Docker. However, this presented some business model challenges for Docker, Inc. After all, Docker, Inc. has to turn a profit at some point in order to be a viable company. As a result, the batteries that are included are getting better and better. This has caused tension and raised the level of competition within the ecosystem.
Despite this, the container ecosystem is flourishing with a healthy balance of cooperation and competition. This is great! Healthy competition is the mother of innovation! So, this is container history according to Nigel :-D From day one, use of Docker has grown like crazy. More and more people used it in more and more ways for more and more things.
So, it was inevitable that some parties would get frustrated. This is normal and healthy. So they did something about it! They created a new open standard called appc that defined things like image format and container runtime. This put the container ecosystem in an awkward position with two competing standards, and it threatened to fracture the ecosystem and present users and customers with a dilemma. While competition is usually a good thing, competing standards is usually not.
They cause confusion and slow down user adoption. Not good for anybody. With this in mind, everybody did their best to act like adults and came together to form the OCI, a lightweight agile council to govern container standards. At the time of writing, the OCI has published two specifications (standards): the image-spec and the runtime-spec.
These two standards are like agreeing on standard sizes and properties of rail tracks. Nobody wants two competing standards for rail track sizes! Docker has implemented the OCI specifications since the 1.x releases. So far, the OCI has achieved good things and gone some way to bringing the ecosystem together.
However, standards always slow innovation! Especially with new technologies that are developing at close to warp speed. This has resulted in some raging arguments (ahem, passionate discussions) in the container community. In the opinion of your author, this is a good thing! Expect more passionate discussions about standards and innovation! Chapter summary. In this chapter, we learned a bit about Docker, Inc. They were arguably the first movers and instigators of the modern container revolution.
But a huge ecosystem of partners and competitors now exists. The Open Container Initiative (OCI) has been instrumental in standardizing the container runtime format and container image format. Docker for Windows spins up a single-engine Docker environment on a 64-bit Windows 10 desktop or laptop. The second thing to note is that it is a Community Edition (CE) app. The third thing of note is that it might suffer some feature-lag. This is because Docker, Inc. takes a stability-first approach with this product. All three points add up to a quick and easy installation, but one that is not intended for production.
Enough waffle. First up, pre-requisites. Docker for Windows requires a 64-bit edition of Windows 10 with hardware virtualization support enabled in the system BIOS. If it is not enabled, you should carefully follow the procedure for your particular machine. The first thing to do in Windows 10 is make sure the Hyper-V and Containers features are installed and enabled. Right-click the Windows Start button and choose Apps and Features.
Click the Programs and Features link (a small link on the right). Click Turn Windows features on or off. This will install and enable the Hyper-V and Containers features. Your system may require a restart. The Containers feature is only available if you are running the summer 2016 Windows 10 Anniversary Update build or later.
Click one of the Get Docker download links. Docker for Windows has a stable and an edge channel. The edge channel contains newer features but may not be as stable. An installer package called Docker for Windows Installer.exe will be downloaded. Locate and launch the installer package downloaded in the previous step.
Step through the installation wizard and provide local administrator credentials to complete the installation. Docker will automatically start, as a system service, and a Moby Dock whale icon will appear in the Windows notifications tray. You have installed Docker for Windows. Open a command prompt or PowerShell terminal and try the following commands:
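Typical verification commands look like this (a sketch; output omitted, and your versions will differ):

```
> docker version
> docker info
```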
Notice that the Server section of the docker version output reports the OS as linux. This is because the default installation currently installs the Docker daemon inside of a lightweight Linux Hyper-V VM. In this scenario, you will only be able to run Linux containers on your Docker for Windows install. If you want to run native Windows containers, you can right-click the Docker whale icon in the Windows notifications tray and select Switch to Windows containers. If you already have the Windows Containers feature enabled, it will only take a few seconds to make the switch.
Once the switch has been made, the output of the docker version command will report the Server OS as windows. This means the daemon is running natively on the Windows kernel and will only run Windows containers.
Also note that the system is now running the experimental version of Docker (Experimental: true). As previously mentioned, Docker for Windows has a stable and an edge channel. At the time of writing, Windows Containers is an experimental feature of the edge channel. You can check which channel you are running with the docker version command.
Installing Docker for Mac (DfM) is ridiculously easy. What is Docker for Mac? First up, Docker for Mac is a packaged product from Docker, Inc. Behind the scenes, the Docker daemon is running inside a lightweight Linux VM.
It then seamlessly exposes the daemon and API to your Mac environment. This means you can open a terminal on your Mac and use the regular Docker commands.
This architecture is shown in Figure 3. Note: For the curious reader, Docker for Mac leverages HyperKit to implement an extremely lightweight hypervisor. HyperKit is based on the xhyve hypervisor. Docker for Mac also leverages features from DataKit and runs a highly tuned Linux distro called Moby that is based on Alpine Linux. To download, click one of the Get Docker CE download links.
Docker for Mac has a stable and an edge channel. Edge has newer features, at the expense of stability. A Docker.dmg installation package will be downloaded. Launch the Docker.dmg file. You will be asked to drag and drop the Moby Dock whale image into the Applications folder. Open your Applications folder (it may open automatically) and double-click the Docker application icon to start it. You may be asked to confirm the action because the application was downloaded from the internet.
Enter your password so that the installer can create the components that require elevated privileges. The Docker daemon will now start. An animated whale icon will appear in the status bar at the top of your screen while Docker starts. Once Docker has successfully started, the whale will stop being animated. You can click the whale icon to manage DfM. Now that DfM is installed, you can open a terminal window and run some regular Docker commands.
Try the following. Notice that the Server section of the docker version output reports the OS as linux. This is because the daemon is running inside of the Linux VM we mentioned earlier. Also note that the system is running the experimental version (Experimental: true) of Docker.
This is because the system is running the edge channel, which comes with experimental features turned on. Run some more Docker commands. The following three commands show you how to verify that all of these components installed successfully, as well as which versions you have. The next section covers installing Docker on Linux. The procedure assumes Ubuntu, but it should also work on CentOS and its upstream and downstream forks.
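The three version checks typically look like this (a sketch; this assumes the install bundles the Docker engine CLI, Docker Compose, and Docker Machine, and output is omitted):

```
$ docker --version
$ docker-compose --version
$ docker-machine --version
```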
It makes absolutely no difference if your Linux machine is a physical server in your own data center, on the other side of the planet in a public cloud, or a VM on your laptop.
The first thing you need to decide is which edition to install. There are currently two editions: the Community Edition (CE) and the Enterprise Edition (EE). Note: You should ensure that your system is up-to-date with the latest packages and security patches before continuing.
Open a new shell on your Linux machine. It is best practice to use non-root users when working with Docker. To do this, you need to add your non-root users to the local docker Unix group.
The following command shows you how to add the npoulton user to the docker group and verify that the operation succeeded. You will need to use a valid user account on your own system. If you are already logged in as the user that you just added to the docker group, you will need to log out and log back in for the group membership to take effect.
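On most distros, the add-and-verify commands look something like this (a sketch; npoulton is the example user from the text, and usermod and id are standard Linux tools):

```
$ sudo usermod -aG docker npoulton
$ id npoulton
```

The id output should list docker among the user's groups.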
Docker is now installed on your Linux machine. Run the following commands to verify the installation. This will take you to the official Docker installation instructions, which are usually kept up to date. Be warned though, the instructions on the Docker website tend to use package managers that require a lot more steps than the procedure we used above.
Warning: If you install Docker from a source other than the official Docker repositories, you may end up with a forked version of Docker. In the past, some vendors and distros chose to fork the Docker project and develop their own slightly customized versions. You need to watch out for things like this, as you could unwittingly end up in a situation where you are running a fork that has diverged from the official Docker project. If this is not what you intend, it can lead to situations where modifications and fixes your vendor makes do not make it back upstream into the official Docker project.
In these situations, you will not be able to get commercial support for your installation from Docker, Inc. Installing Docker on Windows Server is a three-step process: 1. Install the Windows Containers feature. 2. Install Docker. 3. Verify the installation. Before proceeding, you should ensure that your system is up-to-date with the latest package versions and security updates.
You can do this quickly with the sconfig command, choosing option 6 to install updates. This may require a system restart. Next, ensure that the Containers feature is installed and enabled. Right-click the Windows Start button and select Programs and Features. This will open the Programs and Features console. Click Turn Windows features on or off. This will open the Server Manager app.
Make sure the Dashboard is selected and choose Add Roles and Features. Click through the wizard until you get to the Features page. Make sure that the Containers feature is checked, then complete the wizard. Your system may require a system restart. Now that the Windows Containers feature is installed, you can install Docker.
Open a new PowerShell Administrator terminal. Use the following command to install the Docker package management provider. If prompted, accept the request to install the NuGet provider.
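The provider and package installation commands typically look like the following (these are the commonly documented commands for the Microsoft-maintained Docker package; verify against the current documentation):

```
> Install-Module DockerMsftProvider -Force
> Install-Package Docker -ProviderName DockerMsftProvider -Force
```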
Install Docker. The output of the install command lists the package Name, Version, Source, and a Summary. Docker is now installed and configured to automatically start when the system boots. You may want to restart your system to make sure that none of the changes have introduced issues that cause your system not to boot.
You can also check that Docker automatically starts after the reboot. Docker is now installed, and the following two commands are good ways to verify that the installation succeeded and that you are ready to start using Windows containers. Upgrading the Docker Engine. Upgrading the Docker Engine is an important task in any Docker environment, especially production.
This section of the chapter will give you the high-level process of upgrading the Docker engine, as well as some general tips and a couple of upgrade examples. The high-level process of upgrading the Docker Engine is this: take care of any pre-requisites, then:
1. Stop the Docker daemon
2. Remove the old version
3. Install the new version
4. Ensure containers and services have restarted
Each version of Linux has its own slightly different commands for upgrading Docker. Upgrading Docker CE on Ubuntu. Running commands as root is obviously not recommended, but it does keep the examples in the book simpler. If you are not running as root, you will have to prepend the following commands with sudo.
Update your apt package list. The Docker engine has had several different package names in the past. This command makes sure all older versions get removed.
Install the new version. There are different versions of Docker and different ways to install each one. For example, Docker CE can be installed from apt or deb packages, or using a script on docker.com. At this point you might want to restart the node. This will make sure that no issues have been introduced that prevent your system from booting in the future.
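A sketch of the remove-and-install steps on Ubuntu (the package names cover the engine's historical names; the get.docker.com convenience script is one of several install methods, so check the Docker docs for your release):

```
$ sudo apt-get update
$ sudo apt-get remove docker docker-engine docker.io -y
$ wget -qO- https://get.docker.com/ | sh
```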
Make sure any containers and services have restarted. Remember, other methods of upgrading and installing Docker exist. Upgrading Docker on Windows Server: all commands should be run from a PowerShell terminal. Uninstall any potentially older modules provided by Microsoft, and install the module from Docker. Then update the docker package. This command will force the update (no uninstall is required) and configure Docker to automatically start each time the system boots.
You might want to reboot your server at this point to make sure the changes have not introduced any issues that prevent it from restarting in the future. Check that containers and services have restarted. Docker and storage drivers. Every Docker container gets its own area of local storage where image layers are stacked and the container filesystem is mounted. Historically, this local storage area has been managed by the storage driver, which we sometimes call the graph driver or graphdriver.
Although the high-level concepts of stacking image layers and using copy-on-write technologies are constant, Docker on Linux supports several different storage drivers, each of which implements layering and copy-on-write in its own way. While these implementation differences do not affect the way we interact with Docker, they can have a significant impact on performance and stability. Docker on Windows only supports a single storage driver, the windowsfilter driver.
Selecting a storage driver is a per-node decision. This means a single Docker host can only run a single storage driver; you cannot select the storage driver per-container.
The storage driver is set in the Docker daemon configuration file, which on most Linux systems is /etc/docker/daemon.json. The following snippet shows the storage driver set to overlay2. Note: If the configuration line is not the last line in the configuration file, you will need to add a comma to the end. If you change the storage driver on an already-running Docker host, existing images and containers will not be available after Docker is restarted. Changing the storage driver obviously changes where Docker looks for images and containers.
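A minimal sketch of such a configuration (on most Linux installs the file is /etc/docker/daemon.json; this example writes to a temporary file so it is safe to run anywhere):

```shell
# Sketch: a daemon.json that sets the storage driver to overlay2.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "storage-driver": "overlay2"
}
EOF
# Sanity-check the setting before pointing a real Docker daemon at it.
grep '"storage-driver": "overlay2"' "$cfg" && echo "config OK"
```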
Reverting the storage driver to the previous configuration will make the older images and containers available again. If you need to change your storage driver, and you need your images and containers to be available after the change, you need to save them with docker save, push the saved images to a repo, change the storage driver, restart Docker, pull the images locally, and restart your containers.
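The preserve-your-data workflow described above can be sketched as follows, using the simpler local save/load variant (the image name and the systemctl restart command are placeholders; adapt them to your images, distro, and registry workflow):

```
$ docker save -o myimages.tar web:latest
# edit the storage-driver setting in the daemon config, then:
$ sudo systemctl restart docker
$ docker load -i myimages.tar
```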
Choosing which storage driver, and configuring it properly, is important in any Docker environment — especially production. The following list can be used as a guide to help you choose which storage driver to use. However, you should always consult the latest support documentation from Docker, as well as your Linux provider. Again, this list should only be used as a guide.
Always check the latest support and compatibility matrixes in the Docker documentation, and with your Linux provider. Devicemapper configuration. Most of the Linux storage drivers require little or no configuration. However, devicemapper needs configuring in order to perform well. By default, devicemapper uses loopback mounted sparse files to underpin the storage it provides to Docker.
To get the best performance out of devicemapper, as well as production support, you must configure it in direct-lvm mode. This significantly increases performance by leveraging an LVM thinpool backed by raw block devices. Recent versions of Docker can configure direct-lvm mode automatically. However, at the time of writing, the automatic configuration has some limitations.
The main ones being: it will only configure a single block device, and it only works for fresh installations. This might change in the future, but a single block device will not give you the best in terms of performance and resiliency. The following simple procedure will let Docker automatically configure devicemapper for direct-lvm.
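As a sketch, the automatic direct-lvm configuration is driven from the daemon configuration file. The option names below follow the Docker storage-driver documentation; the block device /dev/xvdf and the percentage values are placeholders you must adapt to your environment:

```json
{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.directlvm_device=/dev/xvdf",
    "dm.thinp_percent=95",
    "dm.thinp_metapercent=1",
    "dm.thinp_autoextend_threshold=80",
    "dm.thinp_autoextend_percent=20"
  ]
}
```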
Device Mapper and LVM are complex topics, and beyond the scope of a heterogeneous Docker book like this. Restart Docker. Verify that Docker is running and the devicemapper configuration is correctly loaded.
Although Docker will only configure direct-lvm mode with a single block device, it will still perform significantly better than loopback mode! Walking you through the entire process of manually configuring device mapper direct-lvm is beyond the scope of this book. It is also something that can change and vary between OS versions. However, the following items are things you should know and consider when performing a configuration. You need to have block devices available in order to configure direct-lvm mode.
If your Docker environment is in the public cloud, these can be any form of high performance block storage (usually SSD-based) supported by your cloud provider. This means you will need to configure the required physical devices (pdev), volume group (vg), logical volumes (lv), and thinpool (tp).
You should use dedicated physical volumes and form them into a new volume group. You should not share the volume group with non-Docker workloads. You will also need to configure two logical volumes: one for data and the other for metadata. Create an LVM profile specifying the auto-extend threshold and auto-extend values, and configure monitoring so that auto-extend operations can happen.
Point the daemon at your thinpool via the relevant dm.* storage option in the daemon configuration. Once the configuration is saved you can start the Docker daemon. For more detailed information, see the Docker documentation or talk to your Docker technical account manager. Chapter summary: we looked at how to upgrade the Docker Engine on Ubuntu and on Windows Server. We also learned that selecting the right storage driver is essential when using Docker on Linux in production environments. These two sections will give you a good idea of what Docker is all about and how some of the major components fit together.
It is recommended that you read both sections to get the dev and the ops perspectives. DevOps anyone? This is about giving you a feel of things — setting you up so that when we get into the details in later chapters, you have an idea of how the pieces fit together.
All you need to follow along is a single Docker host with an internet connection. It can be physical or virtual; all it needs is to be running Docker with a connection to the internet. Play With Docker is a web-based Docker playground that you can use for free. You can use the docker version command to test that the client and daemon (server) are running and talking to each other. If you are using Linux and get an error response from the Server component, try the command again with sudo in front of it: sudo docker version.
If it works with sudo, you will need to add your user account to the local docker group, or prefix the remainder of the commands in the book with sudo. It can help to think of a Docker image as being similar to a VM template. A virtual machine template is essentially a stopped virtual machine.
In the Docker world, an image is effectively a stopped container. Run the docker image ls command on your Docker host. If you are working from a freshly installed Docker host or Play With Docker, you will have no images and the list will be empty. If you are following along with Linux, pull the ubuntu:latest image.
Run the docker image ls command again to see the image you just pulled. When working with images, you can refer to them using either IDs or names. Containers. Now that we have an image pulled locally, we can use the docker container run command to launch a container from it.
Look closely at the output from the previous commands. You should notice that the shell prompt has changed in each instance. This is because the -it flags switch your shell into the terminal of the container; you are literally inside of the new container! Finally, we tell Docker which process we want to run inside of the container. Run a ps command from inside of the container to list all running processes.
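For the Linux example, the run command looks like this (a sketch; the hex ID in the new prompt is illustrative and will differ on your system):

```
$ docker container run -it ubuntu:latest /bin/bash
root@3027eb644874:/#
```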
In the Linux example you will see very few processes. The presence of the ps -elf process in the Linux output can be a bit misleading, as it is a short-lived process that dies as soon as the ps command exits. The Windows container has a lot more going on. This is an artefact of the way the Windows operating system works. Press Ctrl-PQ to exit the container without terminating it. This will land your shell back at the terminal of your Docker host. You can verify this by looking at your shell prompt. Now that you are back at the shell prompt of your Docker host, run the ps command again.
Notice how many more processes are running on your Docker host compared to their respective containers. Windows containers run far fewer processes than Windows hosts, and Linux containers run far fewer than Linux hosts. In a previous step, you pressed Ctrl-PQ to exit from the container. Doing this from inside of a container will exit you from the container without killing it. You can see all running containers on your system using the docker container ls command.
The output above shows a single running container. This is the container that you created earlier. You can also see that it was created 7 minutes ago and has been running for 7 minutes. Attaching to running containers. You can attach your shell to the terminal of a running container with the docker container exec command. Notice that your shell prompt has changed again. You are logged in to the container again. We referenced the container by name, and told it to run the bash shell (PowerShell in the Windows example).
We could easily have referenced the container by its hex ID. Exit the container again by pressing Ctrl-PQ. Your shell prompt should be back to your Docker host. Run the docker container ls command again to verify that your container is still running. Stop the container and kill it using the docker container stop and docker container rm commands. Verify that the container was successfully deleted by running the docker container ls command with the -a flag.
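The attach-and-clean-up sequence, sketched (substitute your container's actual name or ID for the placeholder):

```
$ docker container exec -it <container> bash
$ docker container stop <container>
$ docker container rm <container>
$ docker container ls -a
```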
Adding -a tells Docker to list all containers, even those in the stopped state. The Dev Perspective. Containers are all about the apps! The Linux and Windows examples differ slightly; however, both examples are containerizing simple web apps, so the process is the same. Where there are differences in the Windows example we will highlight them to help you follow along. Run all of the following commands from a terminal on your Docker host. Clone the repo locally. This will pull the application code to your local Docker host ready for you to containerize it.
Be sure to substitute the repo with the Windows repo if you are following along with the Windows example. The Linux example is a simple Node.js web app.
The Windows example is a simple ASP.NET Core web app. Both Git repos contain a file called Dockerfile. A Dockerfile is a plain-text document describing how to build an app into a Docker image. List the contents of the Dockerfile. The contents of the Dockerfile in the Windows example are different.
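The exact contents depend on the repo you cloned, but a Dockerfile for a simple Node.js web app generally looks something like this sketch (the base image, exposed port, and file names are illustrative, not necessarily the repo's actual values):

```
FROM alpine
RUN apk add --update nodejs nodejs-npm
COPY . /src
WORKDIR /src
RUN npm install
EXPOSE 8080
ENTRYPOINT ["node", "./app.js"]
```

Each instruction adds a layer (or metadata) to the resulting image.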
At this point we have pulled some application code from a remote Git repo. We also have a Dockerfile containing instructions on how to build the app into a Docker image. Use the docker image build command to create a new image using the instructions in the Dockerfile.
This example creates a new Docker image called test:latest. Be sure to perform this command from within the directory containing the app code and Dockerfile. The build output starts with a line like "Sending build context to Docker daemon". Note: It may take a long time for the build to finish in the Windows example. This is because of the size and complexity of the image being pulled. Once the build is complete, check to make sure that the new test:latest image exists on your host. You now have a newly-built Docker image with the app inside.
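The build-and-verify commands look like this (run from the directory containing the Dockerfile; output omitted):

```
$ docker image build -t test:latest .
$ docker image ls
```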
Run a container from the image and test the app. Open a web browser and navigate to the DNS name or IP address of the Docker host that you are running the container from, and point it to the port you published when starting the container. You will see the following web page. If you are following along with Docker for Windows or Docker for Mac, you will be able to use localhost or 127.0.0.1. Well done.
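A sketch of the run-and-test step. The 8080:8080 port mapping and the container name web1 are assumptions for illustration; use whatever port your app actually exposes:

```shell
# Run a detached container from the new image, publishing the app's
# port to the same port on the Docker host
docker container run -d --name web1 -p 8080:8080 test:latest

# Test the app from the Docker host itself
curl http://localhost:8080
```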
You then ran a container from it.

Chapter Summary

In the Op section of the chapter you downloaded a Docker image, launched a container from it, logged into the container, executed a command inside of it, and then stopped and deleted the container.
In the Dev section, you containerized a simple application by pulling some source code from GitHub and building it into an image using instructions in a Dockerfile. You then ran the containerized app.
This big picture view should help you with the upcoming chapters where we will dig deeper into images and containers. The detail in this chapter is not strictly required for day-to-day work with Docker, so feel free to skip it. However, to be a real Docker master, you need to know the stuff in this chapter. This will be a theory-based chapter with no hands-on exercises. The Docker engine is the core software that runs and manages containers. We often refer to it simply as Docker, or the Docker platform.
The Docker engine is modular in design with many swappable components. At the time of writing, the major components that make up the Docker engine are: the Docker client, the Docker daemon, containerd, and runc. Together, these create and run containers.
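On a working installation you can see several of these components and their versions. The exact output varies by Docker version:

```shell
# Show client and daemon versions, plus the bundled containerd and
# runc component versions
docker version

# Show more detail about the engine, including the default runtime
# (runc) and the available runtimes
docker system info
```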
Figure 5 shows the original architecture; what it depicts is intentional and not a mistake. When Docker was first released, the Docker daemon was a monolithic binary. It contained all of the code for the Docker client, the Docker API, the container runtime, image builds, and much more. Under the hood, the daemon relied on LXC, which provided it with access to the fundamental building-blocks of containers that existed in the Linux kernel: things like namespaces and control groups (cgroups). However, relying on LXC had problems. First up, LXC is Linux-specific. This was a problem for a project that had aspirations of being multi-platform.
Second up, being reliant on an external tool for something so core to the project was a huge risk that could hinder development. As a result, Docker, Inc. developed its own tool, called libcontainer, as a replacement for LXC. The goal of libcontainer was to be a platform-agnostic tool that provided Docker with access to the fundamental container building-blocks that exist inside the kernel. Libcontainer replaced LXC as the default execution driver in Docker 0.9.

Getting rid of the monolithic Docker daemon

Over time, the monolithic nature of the Docker daemon became more and more problematic:
It got harder to innovate on, it got slower, and it wasn't what the wider ecosystem wanted. In response, Docker, Inc. began a huge effort to break apart and modularize the daemon. The aim of this work was to break out as much of the functionality as possible from the daemon, and re-implement it in smaller specialized tools. These specialized tools can be swapped out, as well as easily re-used by third parties to build other tools. This plan follows the tried-and-tested Unix philosophy of building small specialized tools that can be pieced together into larger tools. This work of breaking apart and re-factoring the Docker engine is an ongoing process.
However, it has already seen all of the container execution and container runtime code entirely removed from the daemon and refactored into small, specialized tools. Much of this work aligns with the two specifications published by the Open Container Initiative (OCI): the image-spec and the runtime-spec. Both specifications were released as version 1.0. For example, the Docker daemon no longer contains any container runtime code; all container runtime code is implemented in a separate OCI-compliant layer. By default, Docker uses a tool called runc for this.
This is the runc container runtime layer in Figure 5. A goal of the runc project is to be in line with the OCI runtime-spec.
As well as this, the containerd component of the Docker Engine makes sure Docker images are presented to runc as valid OCI bundles. Note: The Docker engine implemented portions of the OCI specs before the specs were officially released as version 1.0. If you strip everything else away, runc is a small, lightweight CLI wrapper for libcontainer (remember that libcontainer originally replaced LXC in the early Docker architecture).
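Because runc is a standalone CLI, you can use it without Docker at all. A minimal sketch, assuming runc is installed and using a busybox filesystem export to populate the rootfs (both assumptions for illustration):

```shell
# Create the OCI bundle layout: a directory containing a rootfs
mkdir -p mybundle/rootfs

# Populate the rootfs by exporting a busybox container's filesystem
docker export $(docker create busybox) | tar -C mybundle/rootfs -xf -

# Generate a default OCI runtime spec (config.json) in the bundle directory
cd mybundle && runc spec

# Create and start a container directly with runc; no Docker daemon involved
sudo runc run mycontainer
```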
And fast! See Figure 5. You can see runc release information on the opencontainers/runc GitHub releases page.

containerd

All of the container lifecycle logic stripped out of the daemon ended up in a tool called containerd. Its sole purpose in life was to manage container lifecycle operations: start, stop, pause, rm. In the Docker engine stack, containerd sits between the daemon and runc at the OCI layer. Kubernetes can also use containerd via cri-containerd.
As previously stated, containerd was originally intended to be small, lightweight, and designed for a single task in life: container lifecycle operations. However, over time it has branched out and taken on more functionality, such as image management. One of the reasons for this is to make it easier to use in other projects. For example, containerd is a popular container runtime in Kubernetes. However, in projects like Kubernetes, it was beneficial for containerd to be able to do additional things like push and pull images.
For these reasons, containerd now does a lot more than simple container lifecycle management. However, all the extra functionality is modular and optional, meaning you can pick and choose which bits you want.
containerd released version 1.0 in December 2017; you can see release information on the containerd GitHub releases page.

Starting a new container (example)

The most common way of starting containers is using the Docker CLI. The following docker container run command will start a simple new container based on the alpine:latest image. When you type commands like this into the Docker CLI, the Docker client converts them into the appropriate API payload and POSTs them to the API endpoint exposed by the daemon. The API is implemented in the daemon. Once the daemon receives the command to create a new container, it makes a call to containerd. Remember that the daemon no longer contains any code to create containers!
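A sketch of the kind of command referred to above (the container name ctr1 is an illustrative assumption):

```shell
# Start an interactive container from the alpine:latest image,
# running a shell as the container's main process
docker container run --name ctr1 -it alpine:latest sh
```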
Despite its name, containerd cannot actually create containers; it uses runc to do that. It converts the required Docker image into an OCI bundle and tells runc to use this to create a new container. The container process is started as a child-process of runc, and as soon as the container is started, runc exits.
The container is now started. The process is summarized in Figure 5. One huge benefit of this model: having all of the logic and code to start and manage containers removed from the daemon means that the entire container runtime is decoupled from the Docker daemon.
In the old model, where all of the container runtime logic was implemented in the daemon, starting and stopping the daemon would kill all running containers on the host. This was a huge problem in production environments, especially when you consider how frequently new versions of Docker are released! Every daemon upgrade would kill all containers on that host; not good! Fortunately, this is no longer a problem. Some of the diagrams in the chapter have shown a shim component.
The shim is integral to the implementation of daemonless containers (what we just mentioned about decoupling running containers from the daemon for things like daemon upgrades). We mentioned earlier that containerd uses runc to create new containers.
In fact, it forks a new instance of runc for every container it creates. However, once each container is created, its parent runc process exits and the associated shim process becomes the container's parent, keeping the container's STDIN and STDOUT streams open and reporting the exit status back to the daemon. This means we can run hundreds of containers without having to run hundreds of runc instances.
You can see all of these on a Linux system by running a ps command on the Docker host. Obviously, some of them will only be present when the system has running containers.
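One way to inspect the engine processes on a Linux host is with a ps pipeline like this. The exact process names (dockerd, containerd, the shim) vary between Docker versions, so treat the pattern as illustrative:

```shell
# List engine-related processes on the Docker host; shim and runc
# processes only appear while containers are running or starting
ps -elf | grep -E 'dockerd|containerd|shim|runc' | grep -v grep
```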
So what is left in the daemon? Obviously, the answer will change over time as more and more functionality is stripped out and modularized. However, at the time of writing, some of the major functionality that still exists in the daemon includes: image management, image builds, the REST API, authentication, security, core networking, and orchestration.
Chapter summary The Docker engine is modular in design and based heavily on open-standards from the OCI.
Container execution is handled by containerd. You can think of it as a container supervisor that handles container lifecycle operations. It is small and lightweight and can be used by other projects and third-party tools. Docker uses runc as its default container runtime. There is still a lot of functionality implemented in the Docker daemon. More of this may be broken out over time.
Functionality currently still inside of the Docker daemon includes, but is not limited to: the API, image management, authentication, security features, core networking, and volumes. The work of modularizing the Docker engine is ongoing.

The aim of the game in this chapter is to give you a solid understanding of what Docker images are, and how to perform basic operations. You start by pulling images from an image registry. The most popular registry is Docker Hub, but others do exist.
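Pulling and listing an image can be sketched as follows (ubuntu:latest is just an example image):

```shell
# Pull an image from Docker Hub, the default registry
docker image pull ubuntu:latest

# Verify the image is now in the host's local image repository
docker image ls
```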