Docker architecture
Before getting into the Docker architecture, let us have a look at the traditional/virtualised environment. The picture below will give us a clear idea of it.
A Quick Summary
Now let us go through the Containerized Environment.
This picture illustrates the containerised environment. Here, on top of the operating system, we install a container runtime/container engine that manages the containers. On top of that, the containers run. Each container has its own binaries and libraries, and it can also hold its own packages. So each container is a completely isolated environment, which helps containers avoid version-mismatch issues. For example, application 1 needs Java 8 to function properly, whereas application 2 is not compatible with Java 8; it requires Java 6. On a traditional setup, we would need some fine-tuning to make both applications work. However, it is pretty easy with containers: we can run both applications in separate containers.
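As a small sketch of the idea (official Java 6 images predate Docker Hub's current catalogue, so the `openjdk:8` and `openjdk:11` tags here stand in for the two conflicting versions):

```shell
# Each application gets its own Java runtime, fully isolated from the other.
# Application 1's environment:
docker run --rm openjdk:8 java -version

# Application 2's environment, on the same host, with no conflict:
docker run --rm openjdk:11 java -version
```

Both containers run side by side on the same machine, each seeing only its own libraries and packages.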
Further isolation is achieved using different features of Linux. For resource limiting, we use cgroups (memory, CPU, etc.). Network isolation is done with the help of network namespaces, and so on. Let us continue our discussion; we will plan another series for the advanced concepts.
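As a sketch of how Docker exposes these kernel features, the run flags below translate into cgroup limits (the values are arbitrary examples):

```shell
# --memory sets a cgroup memory limit; --cpus sets a cgroup CPU quota.
# Each container also gets its own network namespace by default.
docker run -d --name limited --memory=512m --cpus=1.5 nginx

# Inspect the limits Docker recorded for the container
docker inspect --format '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}' limited
```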
The container runtime is the software that is responsible for running containers. Containers can run on several container runtimes, like Docker, containerd, CRI-O, etc.
If you have started looking into Kubernetes, you might have heard that Kubernetes is dropping support for Docker from Kubernetes version 1.22. No need to worry about it: only Docker as an underlying runtime is deprecated. The Docker concepts and Docker images remain the same, which means we still need to understand containers.
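If you are curious which runtime your Kubernetes nodes are actually using, `kubectl` can show it (this assumes you already have a cluster and a working kubeconfig):

```shell
# The CONTAINER-RUNTIME column shows e.g. docker://20.10.x or containerd://1.5.x
kubectl get nodes -o wide
```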
Skip this part if you are not a Linux admin
In simple words, if you are a Linux administrator, the only change is that instead of installing the docker package, you need to install some other container runtime.
For Ubuntu/ Debian
apt-get install <docker>, where <docker> will be replaced with the <container runtime> of your choice
Or if you are using Redhat/ Centos
"yum install <docker>", where <docker> will be replaced with the <container runtime> of your choice
And the service start command changes according to the container runtime of your choice.
systemctl start <service name>
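For example, with containerd as the runtime of choice, the steps above could look like this (package names assumed from the standard Ubuntu and Docker/CentOS repositories):

```shell
# Ubuntu/Debian
apt-get install containerd

# RedHat/CentOS
yum install containerd.io

# Start the service matching the runtime you installed
systemctl start containerd
```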
We need to have a clear understanding of what a container is and what a container runtime is; otherwise, it creates a lot of confusion, especially when we go through some of the Kubernetes discussion forums. Let me add a blog post that gives a clear idea about the changes.
Containers vs Virtual Machine
This picture shows a comparison of containers with virtual machines. Conceptually, containers and VMs have similarities, since both provide an isolated environment for the application. The difference also lies here, because they provide different levels of isolation.
A Docker container is a process running on your system with some level of isolation.
A virtual machine is a complete operating system running on top of a hypervisor, irrespective of the base OS. We can run a Linux virtual machine on top of a Windows machine, but that is not the case with containers.
In a virtual environment, we virtualise the hardware: we allocate CPU, memory, disk and network to a VM. In the container world, containers share the host OS kernel. The kernel allocates resources like CPU cycles and memory to the containers (basically, the kernel allocates resources to each process).
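One quick way to see the shared kernel in practice (assuming Docker is installed and the `alpine` image is available):

```shell
# Both commands report the same kernel release, because the container
# uses the host kernel rather than booting its own.
uname -r
docker run --rm alpine uname -r
```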
Containers are much smaller in size and faster; starting a virtual machine is time-consuming compared to starting a container.
Pictorial Representation of a Production Application
Web Frontend
Database
Reporting
Payment application
Logging
Variables
Secret Keys
These are just examples. Depending on the complexity of the application, this structure may vary.
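A sketch of how such an application might be wired together with Docker Compose; the service names mirror the list above, and every image tag, variable and file name here is a made-up placeholder:

```yaml
# docker-compose.yml -- all names below are illustrative placeholders
services:
  frontend:
    image: example/web-frontend:1.0
    ports:
      - "80:80"
    environment:            # variables
      - DB_HOST=database
  database:
    image: postgres:13
    secrets:
      - db_password         # secret keys
  reporting:
    image: example/reporting:1.0
  payment:
    image: example/payment:1.0
  logging:
    image: example/logging:1.0
secrets:
  db_password:
    file: ./db_password.txt
```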
Application Scaling
Application Scaling traditional method
If we find a resource constraint on a physical or virtual machine, the traditional method to handle the situation is to increase resources such as CPU or memory. But if the server hosts multiple applications and only one application needs additional resources, allocating resources to that particular application alone is nearly impossible.
This method, adding resources to the existing setup, is called vertical scaling.
Application Scaling in Docker Environment
Let us examine the container setup. If a container is unable to serve the load, we spin up another set of containers; that is called horizontal scaling. Here we can scale down once the utilisation becomes normal again. All these processes are much easier if we pick the right container orchestrator.
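With Docker Compose, for instance, horizontal scaling is a one-liner (the service name `web` is a placeholder for a service defined in your compose file):

```shell
# Spin up two more replicas when the load grows
docker compose up -d --scale web=3

# Scale back down when utilisation is normal again
docker compose up -d --scale web=1
```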
Patching and security fixes
Starting
The payment application came up with a new image
The first payment application container is brought down for patching
The second container is brought down for patching
The same way, this continues until the patching is complete
Ideally, we will not patch a container. Instead, we create a new image that has all the security fixes and perform the update. Here, the update means bringing down the old container and spinning up a new container from the newly built image. By implementing a container orchestrator and choosing the appropriate methods, we can avoid application downtime.
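As a sketch of that update flow (the image and container names are placeholders), the fix goes into a new image build, and containers are replaced rather than patched in place:

```shell
# Build a new image that already contains the security fixes
docker build -t payment-app:v2 .

# Replace the running container with one from the new image
docker stop payment-1
docker rm payment-1
docker run -d --name payment-1 payment-app:v2
```

An orchestrator automates exactly this stop-and-replace cycle, one container at a time, so the application keeps serving traffic.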
Application Development
Since the application is modular, it is easy to develop each part individually and pack them together. Patching the application and adding new features can be performed independently. Each part of the application can be handled by a different team, since they run in separate containers.
What is Next?
More on Docker
Dockerfile
Docker build
Docker images
Container Orchestration
Kubernetes