
Docker tutorial for beginners

Overview

Docker is one of the hottest products on the DevOps scene today. The open-source platform deploys and updates applications inside containers that run on a single Linux operating system (OS). Those unfamiliar with the platform may find the concepts anchored to the technology bewildering. To demystify matters, let's look at the fundamental building blocks that make up Docker, alongside their advantages and disadvantages.

What are containers?

Containers enable developers to package an application and its parts in a box. The box is an isolated environment, meaning it 'contains' all the resources the application needs to function. With Docker, each container runs independently while sharing the resources of a single OS. Major companies rely on container-based technology to run their businesses. Google, for example, is reported to launch over two billion containers each week across its data centers.
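To make that concrete, here is a minimal sketch of packaging an application with Docker's own tooling. The app.py and requirements.txt files, as well as the image name, are hypothetical placeholders; the pattern, not the specifics, is the point.

    # Minimal sketch: package a hypothetical Python app and its dependencies into an image
    cat > Dockerfile <<'EOF'
    # Base image supplies the OS userland and the Python runtime
    FROM python:3.11-slim
    WORKDIR /app
    # Dependencies are installed inside the image, not on the host
    COPY requirements.txt .
    RUN pip install -r requirements.txt
    COPY . .
    CMD ["python", "app.py"]
    EOF

    docker build -t my-app:latest .   # bundle the app and its parts into an image
    docker run --rm my-app:latest     # run it as an isolated container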

Docker standardizes containers

Containers have been around for a while, but only recently made headway in the telecom industry. Docker helped popularize containers by offering a simplified and secure way to launch them in comparison to alternative methods. In addition, by collaborating with tech giants like Google and Red Hat, and providing an open source platform, the company helped standardize containers in the IT industry.

Architecture

Docker leverages a client-server architecture, consisting of a client and a daemon. The client communicates with the daemon, which is responsible for building, running and distributing the containers. The client and daemon can run on the same system, or the client can connect to a daemon running on a remote machine. The client conveys tasks to the daemon through a REST application program interface (API), over Unix sockets or a network interface.
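As a rough illustration of that split, the commands below poke at a local daemon. The socket path and port are Docker's defaults, but treat the remote-host endpoint as a placeholder rather than a recommendation.

    # The docker CLI is the client; dockerd is the daemon it talks to
    docker version                 # reports both the client and server (daemon) versions

    # The same REST API can be reached directly over the daemon's Unix socket
    # (default path /var/run/docker.sock; typically requires root or docker-group membership)
    curl --unix-socket /var/run/docker.sock http://localhost/version

    # Pointing the client at a remote daemon is a matter of changing the endpoint.
    # remote-host is a placeholder; an unencrypted TCP endpoint should not be exposed in practice.
    docker -H tcp://remote-host:2375 ps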

Benefits

One of the main advantages of Docker containers is that they address what is known as "dependency hell," a term used to describe the vexation software developers feel whenever a software package depends on another software package. Containers sidestep the problem by providing an isolated environment with all the resources needed to run an application. Furthermore, containers are incredibly lightweight, making them easily portable. Service providers such as Amazon Web Services (AWS) and Google Cloud Platform (GCP) have welcomed Docker with open arms because of that portability. Docker containers can run inside an Amazon EC2 instance or a Google Compute Engine instance, as long as the host OS supports Docker, and a container can conveniently move between those environments.
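A sketch of what that portability looks like in practice: build once, push to a registry, then pull the same image on any Docker-capable host, whether it is an EC2 instance, a GCE instance or a laptop. The registry and image names below are placeholders.

    # Build and publish the image from one environment
    docker build -t registry.example.com/team/my-app:1.0 .
    docker push registry.example.com/team/my-app:1.0

    # Pull and run the identical image on any other host that runs Docker
    docker pull registry.example.com/team/my-app:1.0
    docker run -d -p 8080:8080 registry.example.com/team/my-app:1.0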

Challenges

One of the drawbacks of Docker is that it can be difficult to manage all of the containers on the platform effectively. Some of the tools necessary to manage containers at scale are still maturing. In addition, security can be an issue. Since all containers share the host OS kernel, an attacker who breaks out of one container could potentially gain control of the entire system. Moreover, because containers are lightweight, they are very easy to duplicate. While this has its advantages, users may accidentally spin up too many containers, exhausting physical resources such as CPU and memory.
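Docker's own CLI does offer some basic guardrails against runaway containers. The commands below are a sketch, with illustrative limits rather than recommended values.

    # Cap a container's memory and CPU so a duplicate-happy workload cannot starve the host
    docker run -d --name my-app --memory=256m --cpus=0.5 my-app:latest

    docker stats --no-stream   # one-off snapshot of per-container CPU and memory usage
    docker ps -a               # list every container, including stopped ones piling up
    docker system prune        # remove stopped containers and unused data (asks for confirmation)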

ABOUT AUTHOR

Nathan Cranford
Nathan Cranford joined RCR Wireless News as a Technology Writer in 2017. Prior to his current position, he served as a content producer for GateHouse Media, and as a freelance science and tech reporter. His work has been published by a myriad of news outlets, including COEUS Magazine, dailyRx News, The Oklahoma Daily, Texas Writers Journal and VETTA Magazine. Nathan earned a bachelor’s from the University of Oklahoma in 2013. He lives in Austin, Texas.