Docker Containers

In the rapidly evolving landscape of software development and deployment, containerization has emerged as a pivotal technology, with Docker leading the way. Docker containers have revolutionized the way applications are built, shipped, and deployed, offering unparalleled efficiency and consistency across environments. This tutorial explores the fundamentals of Docker containers, outlines their advantages, and walks through a practical implementation to showcase their real-world applicability.

What is a Docker Container?

Docker containers are lightweight, portable, and self-sufficient units that encapsulate an application and its dependencies, ensuring consistent execution across different environments. These containers are built from pre-packaged images, stand-alone executable packages that include everything needed to run a piece of software, from the code to the libraries and runtime.

Why Do We Need Docker Containers?

Docker containers offer several key benefits that make them an essential tool for modern software development and deployment:

  1. Simplified Deployment: Docker containers simplify the deployment of applications. Instead of installing all of an application's requirements on each target platform, you can deploy it with a single command. This makes it easier to deploy complex applications with many dependencies.

  2. Scalability: Docker containers make it easier to scale an application to meet demand. For example, you could easily scale a WordPress site from a single node to multiple nodes to better handle user demands.

  3. Efficient Resource Utilization: Docker containers use less memory than virtual machines, start up and stop more quickly, and can be packed more densely on their host hardware. This leads to more efficient use of system resources and potentially lower costs.

  4. Faster Software Delivery Cycles: Docker containers enable quick deployment of new versions of software and easy rollback to a previous version if needed. This makes it easier to implement strategies like blue/green deployments.

  5. Application Portability: Docker containers encapsulate everything an application needs to run, allowing applications to be easily moved between environments. Any host with the Docker runtime installed can run a Docker container.

  6. Cost Savings: Running multiple apps with different dependencies on a single server can lead to clutter. Docker allows you to run multiple separate containers, each with its own dependencies, leading to a cleaner and more organized server setup.

  7. Consistent Environments: Using containers ensures that every environment is identical, reducing the gap between your development environment and your production servers. This eliminates the "it works on my machine" scenarios.

Virtual Machines

A Virtual Machine (VM) is a virtual representation or emulation of a physical computer. It is a software construct that simulates a full computer system, including the hardware, operating system, and even the peripheral devices. Each VM operates independently of other VMs, even when they are all running on the same physical host machine.

VMs are created and managed by a piece of software called a hypervisor. The hypervisor communicates directly with the physical server's disk space and CPU to manage the VMs. It allows for multiple environments that are isolated from one another yet exist on the same physical machine.

There are two main types of virtual machines:

  1. System Virtual Machines: These provide a substitute for a real machine. They provide the functionality needed to execute entire operating systems. This allows one machine to simultaneously run different operating systems.
  2. Process Virtual Machines: These are designed to execute a single program or process in a platform-independent environment. A classic example is the Java Virtual Machine, which runs the same compiled bytecode on any host regardless of the underlying hardware and OS.

VMs are used for many purposes, including server virtualization, which enables IT teams to consolidate their computing resources and improve efficiency. They can also perform specific tasks considered too risky to carry out in a host environment, such as accessing virus-infected data or testing operating systems.

Docker Containers vs Virtual Machines

Docker containers and virtual machines (VMs) are both powerful tools for creating isolated environments for applications, but they serve different purposes and have unique characteristics.

Here's a comparison:

  1. Efficiency and Performance: Docker containers are more lightweight and efficient than VMs. They share the host OS kernel (and often binaries and libraries), resulting in lower resource utilization. This makes containers smaller in size (usually in the range of megabytes) and faster to start. VMs, on the other hand, are larger (usually in the range of gigabytes) and take longer to start.
  2. Isolation: Both Docker containers and VMs provide isolation, but they do it differently. VMs provide stronger isolation as they run their own OS, but this comes at the cost of increased resource usage. Docker containers, on the other hand, share the host OS, providing less isolation but using resources more efficiently.
  3. Portability and Consistency: Docker containers are highly portable and ensure consistency across environments. Developers can create a portable, packaged unit that contains all of the dependencies needed for that unit to run in any environment. This eliminates the "it works on my machine" problems often encountered in software development and deployment.
  4. Use Cases: VMs are well suited to applications that are static and don't change very often, or that require strong isolation. Docker containers are more flexible and are ideal for applications that need to be updated and scaled frequently.
  5. Security: VMs provide a stronger security boundary, since each runs its own OS isolated from the host. Docker containers offer isolation too, but because they share the host kernel, they are less secure compared to VMs.

How Does a Docker Container Work?

Understanding how a Docker container works involves grasping key concepts such as images, containers, and the underlying technology that makes containerization possible.

Let's break down the process step by step:

Docker Images:

  • Building Blocks: At the core of Docker is the concept of an image. An image is a lightweight, stand-alone, executable package that includes everything needed to run a piece of software, including the code, runtime, libraries, and system tools.
  • Layered File System: Docker images are built using a layered file system. Each layer represents a specific instruction in the Dockerfile (the blueprint for creating an image). Layers are cached, making subsequent builds faster by only re-executing the steps that have changed.
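
For example, once an image exists locally, you can inspect its layers with the docker history command (shown here for the docker-image-example image built later in this tutorial):

docker history docker-image-example

Each row corresponds to one Dockerfile instruction; unchanged layers are reused from cache on subsequent builds.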

Containerization:

  • Container as an Instance: A container is an instance of a Docker image. It can be run, started, stopped, moved, and deleted. Containers share the host machine's OS kernel but run in isolation, encapsulating the application and its dependencies.
  • File System Isolation: Each container has its own file system, separate from the host and other containers. However, it shares the same kernel with the host OS, making containers more lightweight than traditional virtual machines.
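
A quick way to observe this file system isolation (assuming the public alpine image is available) is to write a file inside a throwaway container and note that it never appears on the host:

# Create a file inside a temporary Alpine container; --rm deletes the container on exit
docker run --rm alpine sh -c "echo hello > /tmp/demo.txt && cat /tmp/demo.txt"

The file lives only in that container's writable layer, not on the host file system.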

Docker Engine:

  • Runtime Environment: The Docker Engine is a client-server application that builds and runs Docker containers. It consists of a daemon process called the Docker daemon, a REST API that specifies interfaces for interacting with the daemon, and a command-line interface (CLI) that communicates with the daemon.
  • Daemon Responsibilities: The Docker daemon is responsible for building, running, and managing containers. It communicates with the Docker client (through the CLI or API) to execute commands and perform various operations.
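
You can see this client-server split for yourself with the docker version command, which prints separate Client and Server (daemon) sections:

docker version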

Container Lifecycle:

  • Creation: When a container is started, the Docker Engine pulls the required image (if not already present) and creates a writable container layer on top of it. This writable layer allows the container to modify or add files during its runtime.
  • Execution: The container runs the specified command, whether it's an application or a process defined in the Dockerfile. The container has its isolated environment, including its own process space, network interfaces, and file system.
  • Modification: Any changes made to the container, such as writing new files or modifying existing ones, are stored in the writable layer. This layer is ephemeral, meaning changes are discarded when the container stops unless explicitly committed to a new image.
  • Termination: When the container stops, it goes through a shutdown process, and its resources are released. The container, along with its writable layer, is preserved unless explicitly deleted.
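
These lifecycle stages map directly onto CLI commands. A minimal sketch, using an illustrative container named demo based on the public nginx image:

docker create --name demo nginx   # creation: writable layer allocated, not yet started
docker start demo                 # execution
docker commit demo demo-snapshot  # modification: preserve writable-layer changes as a new image
docker stop demo                  # termination: graceful shutdown
docker rm demo                    # delete the container and its writable layer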

Networking and Port Mapping:

  • Isolated Networking: Containers have their own isolated network stack, allowing them to run services on different ports without conflicting with the host or other containers.
  • Port Mapping: Docker allows mapping ports between the host and the container, enabling external access to services within the container.
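
For instance, the following command (assuming the public nginx image, which serves HTTP on port 80) maps container port 80 to host port 8080:

docker run -d -p 8080:80 nginx

The web server is then reachable from the host at http://localhost:8080/.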

Volumes:

  • Persistent Data: Docker provides a way to persist data beyond the container's lifecycle using volumes. Volumes can be mounted into a container, allowing data to be shared and retained even if the container is removed.
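
A minimal sketch of volume usage, with an illustrative volume name app-data:

docker volume create app-data
docker run --rm -v app-data:/data alpine sh -c "echo saved > /data/note.txt"
docker run --rm -v app-data:/data alpine cat /data/note.txt   # prints "saved"

Even though both containers are removed on exit, the data persists in the app-data volume and can be mounted into any new container.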

For example, imagine you have several containers: one running a web app, another running PostgreSQL, and another running Redis. You can describe all of them in a single YAML file, called a Docker Compose file, and then start them together with a single command, as sketched below.

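A minimal sketch of such a Compose file (the service names, images, and ports here are illustrative):

# docker-compose.yml
version: "3.8"
services:
  web:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - db
      - cache
  db:
    image: postgres:14
    environment:
      POSTGRES_PASSWORD: example
  cache:
    image: redis:7

With this file in place, docker compose up -d (or docker-compose up -d on older installations) starts all three containers with a single command.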

How to Create a Docker Container?

For this example, let's assume you have a basic Node.js application with the following structure:

docker-image-example/
|-- app.js
|-- package.json
|-- Dockerfile

Create Node.js Application:

Create a simple Node.js application. For example, you might have an app.js file with the following content:

// app.js
const http = require('http');

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello, Docker!\n');
});

const PORT = process.env.PORT || 3000;
server.listen(PORT, () => {
  console.log(`Server running on http://localhost:${PORT}/`);
});

Ensure you also have a package.json file, created with the npm init command.
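
A minimal package.json for this example (as generated by npm init and trimmed here; your exact fields may differ) could look like:

{
  "name": "docker-image-example",
  "version": "1.0.0",
  "main": "app.js",
  "scripts": {
    "start": "node app.js"
  }
}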

Create Dockerfile:

Create a file named Dockerfile in the same directory as your Node.js application. This file will contain instructions for building the Docker image.

# Use an official Node.js runtime as a parent image
FROM node:14

# Set the working directory to /app
WORKDIR /app

# Copy package.json and package-lock.json to the working directory
COPY package*.json ./

# Install app dependencies
RUN npm install

# Copy the current directory contents to the container at /app
COPY . .

# Make port 3000 available to the world outside this container
EXPOSE 3000

# Define environment variable
ENV NODE_ENV=production

# Run app.js when the container starts
CMD ["node", "app.js"]

This Dockerfile does the following:

  • Uses the official Node.js 14 image as the base image.
  • Sets the working directory to /app.
  • Copies package.json and package-lock.json to the working directory and installs dependencies.
  • Copies the application code into the container.
  • Exposes port 3000.
  • Sets the NODE_ENV environment variable to "production".
  • Specifies the command to run when the container starts (node app.js).

Build the Docker Image:

Open a terminal in the directory where your Dockerfile is located and run the following command to build the Docker image:

docker build -t docker-image-example .

This command builds the Docker image and tags it with the name "docker-image-example". The trailing dot tells Docker to use the current directory as the build context.

Run the Docker Container:

After successfully building the image, you can run a container based on that image:

docker run -p 3000:3000 docker-image-example

This command starts a container from your image, mapping port 3000 inside the container to port 3000 on your host machine.

Access the Application:

Open your web browser and navigate to http://localhost:3000/. You should see the "Hello, Docker!" message from your Node.js application running inside the Docker container.
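
You can also verify from a terminal (assuming curl is installed):

curl http://localhost:3000/

This should print "Hello, Docker!".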

How to Stop a Docker Container?

To stop a Docker container, you can use the docker stop command.

Here's a simple guide:

  1. Find the Container ID or Name:
    • Open a terminal.
    • Use the following command to list all running containers along with their IDs and names:
docker ps
    • Locate the Container ID or Name of the container you want to stop.
  2. Stop the Container:
    • Use the following command to stop the container. Replace <container_id_or_name> with the actual ID or name of your container:
docker stop <container_id_or_name>
    • If the container was running a process, it will be gracefully stopped.

If you want to forcefully stop a container (killing its processes immediately instead of allowing a graceful shutdown), you can use the docker kill command:

docker kill <container_id_or_name>

Remember to replace <container_id_or_name> with the actual ID or name of your container.

After stopping or killing a container, you can check its status using docker ps -a to confirm that it is no longer running. The -a flag shows all containers, not just the running ones.

Docker Container Commands

Docker provides a variety of commands for managing containers.

Here's a list of some commonly used Docker container commands:

1. List Running Containers: Show a list of running Docker containers.

docker ps

2. List All Containers (including stopped ones): Display a list of all containers, both running and stopped.

docker ps -a

3. Run a Container: Create and start a new container based on an image.

docker run [options] <image>

4. Stop a Container: Stop a running container.

docker stop <container_id_or_name>

5. Remove a Docker Container: Remove a stopped container.

docker rm <container_id_or_name>

6. Forcefully Remove a Docker Container: Forcefully remove a running container.

docker rm -f <container_id_or_name>

7. Restart a Docker Container: Restart a running or stopped container. You can use the -t option to specify a timeout (in seconds) for how long Docker should wait for the container to stop gracefully before killing and restarting it:

docker restart <container_id_or_name>
docker restart -t 10 <container_id_or_name>

8. View Docker Container Logs: View the logs of a specific container.

docker logs <container_id_or_name>

9. Follow Container Logs in Real-Time: Follow the logs of a container in real-time.

docker logs -f <container_id_or_name>

Press Ctrl + C to stop following the logs.

10. Execute a Command in a Running Container: Run a command inside a running container.

docker exec [options] <container_id_or_name> <command>
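
A common use is opening an interactive shell inside a running container (assuming the image includes /bin/sh):

docker exec -it <container_id_or_name> /bin/sh

The -i flag keeps STDIN open and -t allocates a pseudo-terminal.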

11. Inspect a Container: Display detailed information about a container, including configuration and network details.

docker inspect <container_id_or_name>

12. Pause and Unpause a Container: Pause and unpause a running container.

docker pause <container_id_or_name>
docker unpause <container_id_or_name>

13. Attach to a Running Container: Attach to the STDIN, STDOUT, and STDERR of a running container.

docker attach <container_id_or_name>

14. Copy Files to/from a Container: Copy files or directories between a container and the local file system.

docker cp <local_path> <container_id_or_name>:<container_path>
docker cp <container_id_or_name>:<container_path> <local_path>

Conclusion

Docker containers, at the forefront of modern software development, provide lightweight, portable, and self-contained environments for applications and their dependencies. Their key advantages include simplified deployment, scalability, efficient resource utilization, faster software delivery cycles, application portability, cost savings, and consistent environments.

Virtual Machines (VMs) and Docker containers serve different purposes. Containers are more efficient, lightweight, and portable, while VMs offer stronger isolation. Docker containers excel at flexibility and quick updates, making them ideal for frequently changing workloads, while VMs remain a better fit for static applications that demand strong isolation.

Understanding Docker involves grasping images, containers, and the Docker Engine. Docker containers run instances of images, encapsulating applications. Docker Engine comprises a daemon, REST API, and CLI for building and managing containers.

A practical Node.js example illustrates creating a Docker container. Steps include writing a Dockerfile, building an image, running a container, and accessing the application. Essential Docker commands facilitate container management, including listing, running, stopping, removing, and restarting containers.

In conclusion, Docker containers revolutionize software deployment by providing efficiency, consistency, and portability. Developers and operators benefit from Docker's advantages and practical implementation, enhancing their ability to navigate the evolving landscape of software development and deployment.

Key Takeaways

Docker Containers:

  • Lightweight, portable, self-sufficient units encapsulating applications and dependencies.
  • Built from images, which are stand-alone executable packages including code, libraries, and runtime.

Advantages of Docker Containers:

  • Simplified deployment with a single command.
  • Scalability and efficient resource utilization.
  • Faster software delivery cycles and easy rollbacks.
  • Application portability across different environments.
  • Cost savings and consistent environments.

Virtual Machines vs Docker Containers:

  • Efficiency: Containers are more lightweight and start faster.
  • Isolation: VMs provide stronger isolation but with higher resource usage.
  • Portability: Docker containers are highly portable and consistent.

How Docker Containers Work:

  • Docker Images: Lightweight, layered file system packages.
  • Containerization: Instances of Docker images, isolated with their own file systems.
  • Docker Engine: Client-server application for building and running containers.
  • Container Lifecycle: Creation, execution, modification, and termination.
  • Networking and Port Mapping: Isolated networking and port mapping for external access.
  • Volumes: Persistent data storage beyond container lifecycle.

Creating a Docker Container (Node.js Example):

  • Create Node.js application files (app.js, package.json).
  • Write Dockerfile specifying image, working directory, dependencies, and commands.
  • Build Docker image: docker build -t docker-image-example .
  • Run Docker container: docker run -p 3000:3000 docker-image-example
  • Access the application in a web browser.

Stopping Docker Container:

  • Find container ID or name: docker ps
  • Stop a container: docker stop <container_id_or_name>

Docker Container Commands:

  • List, run, stop, remove, and restart containers.
  • View logs, execute commands, inspect, pause, unpause, attach, and copy files.

This tutorial equips developers and operators with foundational knowledge and practical insights into leveraging Docker containers for efficient and consistent application deployment in the dynamic realm of software development.
