
What is Docker? Definition, Tools, and Docker Containers


Docker is an open source platform for building, deploying, and managing containerized applications. Learn about containers, how they compare to VMs, and why Docker is so widely adopted.

What is Docker?

Docker is an open source containerization platform. It enables developers to package applications into containers—standardized executable components combining application source code with the operating system (OS) libraries and dependencies required to run that code in any environment. Containers simplify delivery of distributed applications, and have become increasingly popular as organizations shift to cloud-native development and hybrid multicloud environments.

Developers can create containers without Docker, but the platform makes it simpler and safer to build, deploy, and manage them. Docker is essentially a toolkit that enables developers to build, deploy, run, update, and stop containers using simple commands and work-saving automation through a single API.

How Do You Use Docker?

One of the biggest advantages of VMs is that they can be snapshotted and restored to a known state later. Docker containers take this lightweight process virtualization further: rather than virtualizing hardware, they use the Linux kernel's own isolation features, so an identical environment can be recreated in seconds.

Docker for Hackers

  • This all sounds good, but what does it mean for us? While it's considered "bad practice" for developers to interact with a container while it's running, we have the option to drop to an interactive prompt for debugging containers.
  • So what we now have is a container that is easily configurable, launches in seconds, always spins up into the same state, and gives us a shell.
  • We can use Docker to build and configure an environment containing just the tools we need, and then launch a container and work from that (see the sketch after this list).
  • Our environment will always be exactly the same, as it's launched from an image, and we can easily launch multiple containers at the same time because they don't consume many resources. Additionally, as we're starting from a lightweight image and adding only the tools we need, and as containers don't create virtual disks or virtualize hardware, the resulting footprint on the hard drive is far smaller than a traditional VM's.
  • This allows us to have a self-contained testing environment for each job or test or random-tinker, where any processes, installs, and so on are all local to that container and don’t pollute your “Testing VM” or host OS. Any data can be written out to the host via the shared volume where it’s saved and can be used by tools on the host, and we can still have connections back to the container for webservers, remote shells, and similar.
  • As the container configuration is just the Dockerfile, our whole environment can also be backed up or shared, and we can get up and running with a very specific configuration on a new box in minutes.
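
As a minimal sketch of this workflow (the image name pentest-env and the host directory ~/engagements are hypothetical placeholders):

    # Build the environment image once from the Dockerfile in the current directory
    docker build -t pentest-env .

    # Launch a disposable interactive container; --rm removes it on exit, and
    # -v shares ~/engagements on the host as /data inside the container
    docker run --rm -it -v ~/engagements:/data pentest-env /bin/bash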

How containers work, and why they’re so popular


Containers are made possible by process isolation and virtualization capabilities built into the Linux kernel. These capabilities, such as control groups (cgroups) for allocating resources among processes and namespaces for restricting a process's access or visibility into other resources or areas of the system, enable multiple application components to share the resources of a single instance of the host operating system, in much the same way that a hypervisor enables multiple virtual machines (VMs) to share the CPU, memory, and other resources of a single hardware server.
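
As a small illustration of the cgroup side of this, Docker exposes resource allocation directly as command-line flags (the limits below are arbitrary examples):

    # Cap the container at 512 MB of RAM and half a CPU core; Docker
    # translates these flags into cgroup settings on the host kernel
    docker run --memory=512m --cpus=0.5 --rm -it ubuntu:22.04 /bin/bash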

As a result, container technology offers all the functionality and benefits of VMs – including application isolation, cost-effective scalability, and disposability – plus important additional advantages:

  • Lighter weight: Unlike VMs, containers don’t carry the payload of an entire OS instance and hypervisor; they include only the OS processes and dependencies necessary to execute the code. Container sizes are measured in megabytes (vs. gigabytes for some VMs), so containers make better use of hardware capacity and have faster startup times.
  • Greater resource efficiency: With containers, you can run several times as many copies of an application on the same hardware as you can using VMs. This can reduce your cloud spending.
  • Improved developer productivity: Compared to VMs, containers are faster and easier to deploy, provision and restart. This makes them ideal for use in continuous integration and continuous delivery (CI/CD) pipelines and a better fit for development teams adopting Agile and DevOps practices.

Companies using containers report other benefits as well, including improved app quality and faster response to market changes.

Handling Docker containers

  • By default, a container is stopped as soon as its “main” process ends. In our case, this process is the shell, so if you exit the shell the container will stop.
  • We can list all containers, running and stopped, with docker ps -a. From there we can restart a container with docker start -i container_name or remove it with docker container rm container_name.
  • If we want a temporary container that is automatically removed when it stops, we can add the --rm option to the docker run command.
  • We can clean up Docker using the relevant prune commands. For example, if we have lots of stopped containers to clean up, we can run docker container prune.
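
Put together, the lifecycle described above looks something like this (the container name my_container is a placeholder):

    # List all containers, including stopped ones
    docker ps -a

    # Restart a stopped container interactively, then remove it once done
    docker start -i my_container
    docker container rm my_container

    # Run a throwaway container that deletes itself on exit
    docker run --rm -it ubuntu:22.04 /bin/bash

    # Remove all stopped containers in one go
    docker container prune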

Updating the image

As we build the image once and then run containers from that image, we’ll find our environment slowly going out of date if we don’t update it. We can always update an individual container in the usual fashion (for example, with apt) if we need a quick fix, but we’ll want to rebuild the image occasionally to avoid doing this every time.

We can rebuild the image using the original build command.
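
For example, reusing the hypothetical image name from earlier (--no-cache forces every layer, including package installs, to be rebuilt fresh):

    # Rebuild the image from the same Dockerfile, ignoring cached layers
    docker build --no-cache -t pentest-env .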

Docker Commands

  • docker run – Runs a command in a new container
  • docker start – Starts one or more stopped containers
  • docker stop – Stops one or more running containers
  • docker build – Builds an image from a Dockerfile
  • docker pull – Pulls an image or a repository from a registry
  • docker push – Pushes an image or a repository to a registry
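
A quick tour of these commands (the repository name myrepo/nginx:custom is hypothetical, and pushing assumes you own it):

    # Pull an image from a registry and run it in the background
    docker pull nginx
    docker run -d --name web nginx

    # Stop the running container, then start it again
    docker stop web
    docker start web

    # Tag an image for a registry you control, then push it
    docker tag nginx myrepo/nginx:custom
    docker push myrepo/nginx:custom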

Why use Docker?

Docker is so popular today that “Docker” and “containers” are used almost interchangeably. But the first container-related technologies were available for years, even decades, before Docker was released to the public in 2013.

Most notably, in 2008, Linux Containers (LXC) was implemented in the Linux kernel, fully enabling virtualization for a single instance of Linux. While LXC is still used today, newer technologies using the Linux kernel are available. Ubuntu, a modern, open-source Linux operating system, also provides this capability.

Docker enhanced the native Linux containerization capabilities with technologies that enable:

  • Improved and seamless portability: While LXC containers often reference machine-specific configurations, Docker containers run without modification across any desktop, data center, and cloud environment.
  • Even lighter weight and more granular updates: With LXC, multiple processes can be combined within a single container. With Docker containers, only one process runs in each container. This makes it possible to build an application that can continue running while one of its parts is taken down for an update or repair.
  • Automated container creation: Docker can automatically build a container based on application source code.
  • Container versioning: Docker can track versions of a container image, roll back to previous versions, and trace who built a version and how. It can even upload only the deltas between an existing version and a new one.
  • Container reuse: Existing containers can be used as base images—essentially like templates for building new containers.
  • Shared container libraries: Developers can access an open-source registry containing thousands of user-contributed containers.
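
As a small sketch of the versioning and reuse points above (the tag myteam/base:1.0 is an arbitrary example):

    # Pull a specific tagged version of an image from a shared library
    docker pull ubuntu:22.04

    # Inspect the layers that make up the image
    docker history ubuntu:22.04

    # Re-tag it for reuse as a project-specific base image
    docker tag ubuntu:22.04 myteam/base:1.0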

Today, Docker containerization also works with Microsoft Windows Server, and most cloud providers offer specific services to help developers build, ship, and run applications containerized with Docker.

Docker tools and terms

Some of the tools and terminology you’ll encounter when using Docker include:

1. Dockerfile

Every Docker container starts with a simple text file containing instructions for how to build the container image: the Dockerfile. The Dockerfile automates the process of image creation. It’s essentially a list of command-line interface (CLI) instructions that Docker Engine runs, in order, to assemble the image.
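
A minimal sketch of such a file, written from the shell for illustration (the base image and packages are arbitrary choices):

    # Write a minimal Dockerfile: start from a base image, install tools,
    # and set the default command the container will run
    cat > Dockerfile <<'EOF'
    FROM ubuntu:22.04
    RUN apt-get update && apt-get install -y curl git
    CMD ["/bin/bash"]
    EOF

    # Docker Engine assembles the image by running these instructions in order
    docker build -t my-base-env .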

2. Docker images

  • Docker images contain executable application source code as well as all the tools, libraries, and dependencies that the application code needs to run as a container. When you run the Docker image, it becomes one instance (or multiple instances) of the container.
  • It’s possible to build a Docker image from scratch, but most developers pull them down from common repositories. Multiple Docker images can be created from a single base image, and they’ll share the commonalities of their stack.
  • Docker images are made up of layers, and each layer corresponds to a version of the image. Whenever a developer makes changes to the image, a new top layer is created, and this top layer replaces the previous top layer as the current version of the image. Previous layers are saved for rollbacks or to be re-used in other projects.
  • Each time a container is created from a Docker image, yet another new layer called the container layer is created. Changes made to the container—such as the addition or deletion of files—are saved to the container layer only and exist only while the container is running. This iterative image-creation process enables increased overall efficiency since multiple live container instances can run from just a single base image, and when they do so, they leverage a common stack.
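
The container layer described in the last point can be observed directly; docker diff lists the changes a container has made on top of its image:

    # Create a file inside a new container
    docker run --name demo ubuntu:22.04 touch /tmp/scratch

    # Show what changed in the container's writable layer (the image is untouched)
    docker diff demo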

3. Docker containers

Docker containers are the live, running instances of Docker images. While Docker images are read-only files, containers are live, ephemeral, executable content. Users can interact with them, and administrators can adjust their settings and conditions using commands.

4. Docker Hub

  • Docker Hub is the public repository of Docker images that calls itself the “world’s largest library and community for container images.” It holds over 100,000 container images sourced from commercial software vendors, open-source projects, and individual developers. It includes images produced by Docker, Inc., certified images from the Docker Trusted Registry, and many thousands of other images.
  • All Docker Hub users can share their images at will. They can also download predefined base images to use as a starting point for any containerization project.

5. Docker daemon

The Docker daemon is a service that runs on your host operating system, such as Microsoft Windows, Apple macOS, or Linux. It creates and manages your Docker images using the commands from the client, acting as the control center of your Docker implementation.

6. Docker registry

A Docker registry is a scalable, open-source storage and distribution system for Docker images. The registry enables you to track image versions in repositories, using tags for identification, so you can push and pull specific versions much as you would with a version control tool such as git.
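
For instance, you can run the open-source registry yourself and push tagged versions to it (the tag names below are illustrative):

    # Run a local open-source registry on port 5000
    docker run -d -p 5000:5000 --name registry registry:2

    # Tag an image for that registry, then push it
    docker tag ubuntu:22.04 localhost:5000/myteam/base:1.0
    docker push localhost:5000/myteam/base:1.0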

Docker deployment

If you’re running only a few containers, it’s fairly simple to manage your application within Docker Engine, the industry’s de facto runtime. But if your deployment comprises thousands of containers and hundreds of services, it’s nearly impossible to manage that workflow without the help of purpose-built tools such as Docker Compose.

Docker Compose

  • If you’re building an application out of processes in multiple containers that all reside on the same host, you can use Compose to manage the application’s architecture.
  • Compose reads a YAML file that specifies which services make up the application and can deploy and run their containers with a single command (see the sketch after this list).
  • Using Compose, you can also define persistent volumes for storage, specify base nodes, and document and configure service dependencies.
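
A minimal sketch of such a file and the single command that brings it up (service names, images, and the password are illustrative only):

    # Describe a two-service application in a compose file
    cat > docker-compose.yml <<'EOF'
    services:
      web:
        image: nginx
        ports:
          - "8080:80"
        depends_on:
          - db
      db:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: example
        volumes:
          - db-data:/var/lib/postgresql/data
    volumes:
      db-data:
    EOF

    # Deploy and run both containers with a single command
    docker compose up -d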

Docker container security

Securing a Docker container is no different from securing other containers. It requires an all-inclusive approach, hardening everything from the host to the network and all points in between. Because of their many moving parts, ensuring the security of containers is difficult for many organizations, and it requires more than a rudimentary level of vigilance.

Things to consider

Here are some things to consider when securing your containers:

  • Use resource quotas
  • Don’t run containers as root
  • Ensure the security of your container registries
  • Use a trusted source
  • Go to the source of the code
  • Design APIs and networks with security in mind

Not running containers as root is crucial to the security of your containers. For example, if your containerized application is vulnerable to an exploit and the process is running as the root user, the attack surface expands and an attacker has a simple path to privilege escalation.
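
One way to follow this advice from the command line (UID 1000 is just an example of an unprivileged user):

    # Run the container process as an unprivileged user instead of root
    docker run --user 1000:1000 --rm -it ubuntu:22.04 id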

Ensure the security of your Docker container registries

A container registry is a central point of attack: whoever can push images to it influences what you later deploy, so restrict who can access, and especially publish to, your registries.

Use a trusted source

Now that you have the container registry secured, you don’t want to infect it with container images obtained from an untrusted source. It may seem convenient to simply download publicly available container images at the click of a mouse, but it is extremely important to ensure that the source of the download is trusted or verified.
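
One concrete safeguard Docker provides is content trust, which refuses unsigned images (nginx is shown here as an arbitrary example):

    # With content trust enabled, docker pull only accepts signed images
    export DOCKER_CONTENT_TRUST=1
    docker pull nginx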

Go to the source of the code

As discussed above, it is important to source reliable and trusted container images for your Docker containers. However, it is also good practice to investigate the code within the image to ensure it does not contain infected code—even if that image came from a trusted registry. Docker images have a combination of original code and packages from outside sources, which may not be derived from trusted sources.

In this scenario, it is best to use source code analysis tools. Once you have your images, you can download the sources of all packages within them and scan those packages to determine where the code came from. This reveals whether any of the images contain known security vulnerabilities, keeping you secure from the first build.
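
Any image scanner can do this first pass; Trivy, one widely used open-source option, is shown here as an example (the image tag is the hypothetical one from earlier):

    # Scan every package layer in the image for known CVEs
    trivy image myteam/base:1.0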

Design APIs and networks with security in mind

  • In order for Docker containers to communicate with one another, they use application programming interfaces (APIs) and networks.
  • That communication is essential for containers to run properly, but it requires proper security and monitoring. Even though APIs and networks are not actually part of the Docker container, but rather resources you use alongside it, they still present a risk to the container’s security.
  • With that in mind, to be able to stop an intrusion quickly, you need to design your APIs and networks for easy monitoring and with security in mind (see the sketch after this list).
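
As one sketch of network design along these lines (names, images, and the password are illustrative):

    # Put backend containers on their own user-defined bridge network
    docker network create --driver bridge backend
    docker run -d --name db --network backend -e POSTGRES_PASSWORD=example postgres:16

    # Only the web container publishes a port to the outside world; the
    # database is reachable solely from containers on the backend network
    docker run -d --name web --network backend -p 8080:80 nginx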

Wrapping it up

Securing your Docker containers is no picnic, but the payoff is well worth the work. It takes a holistic approach, hardening the container environment at every level. And while the best practices above may seem like a lot, they will save you an immense amount of time down the road and protect you from major security risks.
