Introduction to Docker in 10 Minutes
This is the first part of the "Introduction to the World of Docker" series. In this module, we will go through a high-level overview of Docker from a non-technical perspective.
A Quick Summary
Comparison with the existing setup
Comparison with the shipping industry
How Docker bundles everything
Docker in detail
What is the problem with the existing model?
To understand this, let us walk through the current way of working.
What is the marketing keyword of an IT service?
or
What is the bottom line of a support manager's review meeting?
Both talk about service uptime: "We provide 99.99% uptime." Depending on the client requirement and the criticality of the business, the last two digits may vary, but basically everyone is racing to achieve this number.
Let us see how application builds happen in IT today.
It involves two steps:
Install
Configure
Once the server is procured and racked in the data centre with the necessary cabling, it is handed over to the corresponding build team. Depending on the application requirements, the build team installs the operating system, along with the remaining OS-level configuration.
Once the OS configuration is complete, the server is handed over to the application team, who install and configure the application.
But does the story end here?
No, only the initial setup for the development environment is ready. We need to repeat the same process for the test and production setups.
Now that we have completed all the builds, let us take some rest. But what if something goes wrong?
Does the story end here?
No. We need to troubleshoot and fix it. It is a never-ending story, and most of us have plenty of such experiences, along with the sleepless nights.
Would you like to spend the rest of your professional life like this?
What is the solution to this problem?
How does Docker overcome this?
Docker packs the application code, supporting packages, binaries and other dependencies along with the operating system. A microservice is a bundle of all of these. Once it is packed and built as an image, we can reproduce it anywhere and make multiple copies of it within a fraction of a second. So we do not need a separate build cycle for development, test and production; the build happens only once. It is a huge time saver.
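As a minimal sketch of that idea, assuming a hypothetical image called myapp built from a Dockerfile in the current directory:

# Build the image once; everything the application needs is packed inside it.
docker build -t myapp:1.0 .

# Run as many identical copies as required, in development, test or production.
docker run -d --name myapp-dev myapp:1.0
docker run -d --name myapp-test myapp:1.0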
Comparison with the shipping industry
The idea of Docker evolved from the shipping industry. It uses the same thinking that is behind cargo containers.
The Problem faced in the shipping industry
Each goods item needs to be treated separately
If we need to transport a car, we have to take it to the port using a suitable method of transport and then load it onto the ship. The same applies to spices or fish; each item has to be handled as it is. Most of the time, we can not use modern equipment like cranes or other moving machinery.
It consumes a lot of time
Since each item has to be handled separately, we can not use any common equipment to move it. Because of this, loading and unloading the goods takes a lot of time.
Loss and damage of goods were higher
For example, we can not keep spices and fish next to each other, as that would damage both. So while placing the items, we need to take the utmost care.
The Solution to the Problem
They introduced a standard container. We can load any goods item into the container and seal it at the manufacturing unit. It is then easy to move it with standard handling equipment such as cranes, trucks, trains and ships. Each item is loaded in an isolated container, so we do not need to worry much about what is inside each one.
How does Docker fit into this?
In the above example, the shipping industry introduced a standard container and packed everything inside it. That made the containers easy to manage.
In the same way, Docker packs the application code, supporting packages, binaries and other dependencies along with the operating system, whether it is MySQL, MongoDB, Nginx, Redis or any other application. Once we pack it inside a Docker image, we can treat it like a container.
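To see what that looks like in practice, here is a small sketch, assuming Docker is already installed; the image names below are the official ones published on Docker Hub:

# Any application packaged as an image starts the same way, as an isolated container.
docker run -d --name cache redis:7
docker run -d --name web -p 8080:80 nginx:1.25

# MySQL, MongoDB and the rest follow the same pattern, differing only in their options.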
What is a Container?
We have a web application
We install the webserver
Install its dependencies
Copy the web content
With this, we build a container image
Push it to the Docker registry
Pull it from the registry
Run it as a container
When we need to run the web server, we pull the image from the registry and run it as a container. Since the image is lightweight, it is quick to pull and run.
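As a rough sketch, a Dockerfile for such a web application could look like the one below; the nginx base image and its default content directory are real, while the local ./site folder is just an assumption for illustration:

# Start from an image that already contains the OS layer and the Nginx web server.
FROM nginx:1.25

# Copy the web content into the directory Nginx serves by default.
COPY ./site/ /usr/share/nginx/html/

# The base image already exposes port 80 and starts Nginx, so nothing more is needed.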
The below scenario is common in application development.
The developer spends the whole day and finally comes up with code that works on her laptop. The tester takes the code and tests it on the test server, but it breaks. Sometimes the developer's laptop has a dependency package or library that she installed for a previous application and did not list as a dependency for this one. Or some of the supporting package versions do not match on the test server. In a traditional setup, troubleshooting and fixing such an issue takes a long time.
In the Docker world, we pack the code along with its dependencies into a container image. The containers run from those images, so we do not end up with dependency issues. It is as if we asked the developer to send the application code along with her laptop, and then used that laptop to serve the production application. In reality, we can not use a laptop in place of a production-grade server; the point is just to show how containerisation helps avoid dependency issues.
We have come to the end of this module. This is what Docker does in a nutshell:
Build Image → Pack everything you need
Ship the Image → Push it to the central registry
Pull & run the Image → Run it wherever you need
Build Image
Start from OS
Install the application
Copy your application code
Copy the supporting files
Build the application
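A minimal sketch of the build step, assuming the Dockerfile shown earlier sits in the current directory and we call the image mysite (a placeholder name):

# Build the image from the Dockerfile in the current directory and tag it.
docker build -t mysite:1.0 .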
Ship the Image
Push image to a repository
The default is Docker Hub
All the major cloud providers offer container registries
It can also be a local repository
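As a sketch, the push step could look like this, assuming a Docker Hub account named example (the account and image names are placeholders):

# Tag the image with the repository it should go to, log in, and push it.
docker tag mysite:1.0 example/mysite:1.0
docker login
docker push example/mysite:1.0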
Run the Image
Pull it into your server
Run it anywhere; we can run it anywhere a container runtime is available
It can be AWS, Azure, GCP
It can run on Kubernetes, Mesos, Docker Swarm or OpenShift
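And the run step, again with placeholder names. The same two commands work on any host where Docker is installed, and the same image can equally be deployed to a Kubernetes, Swarm or OpenShift cluster:

# Pull the image from the registry onto the server.
docker pull example/mysite:1.0

# Run it as a container, mapping port 8080 on the host to port 80 inside the container.
docker run -d --name mysite -p 8080:80 example/mysite:1.0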