For information on how to install Docker, please refer to: Docker Series (Chapter 0): Installing docker-engine in Ubuntu.

For guidance on how to create and use Docker images, please refer to: Docker Series (Chapter 1): Building and using images.

For guidance on how to create and use Docker containers, please refer to: Docker Series (Chapter 2): Creating and Using containers.

So far, you can install the docker-engine on an Ubuntu operating system, pull Docker images from public or private image repositories, write a Dockerfile, build images and create containers from it, and interact with running containers through various commands.

However, do you know how the resources of a newly created container are bounded? How much memory and disk can it use, and how many IOPS is it allocated?

In this Chapter 3, let’s find the answers.

Memory Quotas

Identifying issues by observing memory usage

First, let’s create two containers.

shy@flash-shy:/HDD/learn/docker/apache2$ docker run -d --name=http1 httpd
9871abf3948eeec881498356073c4ce2b11c1631d533a92ce44be64b088af880

shy@flash-shy:/HDD/learn/docker/apache2$ docker run -d --name=http2 httpd
92cfd46a822b69bdc55fc1460e8698a79a4af7382cda9b038b45654017aef0a2

Then, let’s observe the memory size of each container.

shy@flash-shy:/HDD/learn/docker/apache2$ docker exec -it http1 grep MemTotal /proc/meminfo
MemTotal: 32741384 kB

shy@flash-shy:/HDD/learn/docker/apache2$ docker exec -it http2 grep MemTotal /proc/meminfo
MemTotal: 32741384 kB

Finally, let’s check the memory size of the local physical machine.

shy@flash-shy:/HDD/learn/docker/apache2$ free -h
total used free shared buff/cache available
Mem: 31Gi 7.5Gi 1.7Gi 13Gi 22Gi 9.2Gi
Swap: 31Gi 335Mi 31Gi

Based on this hands-on exercise, it is obvious that, by default, each container sees the total memory of the physical host; no per-container limit is applied. This can mean better performance, but it also means that as business demands grow, containers may contend for the same resources, which is something that should generally be avoided during capacity planning.

Managing memory quotas

We can use the -m (or --memory) parameter to specify a memory usage limit, thereby allocating a memory quota to a container.

shy@flash-shy:/HDD/learn/docker/apache2$ docker run -it --name=memlimit -m 500M progrium/stress \
--vm 1 --vm-bytes 400M
stress: dbug: [1] using backoff sleep of 3000us
stress: info: [1] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: dbug: [1] --> hogvm worker 1 [8] forked
stress: dbug: [8] allocating 524288000 bytes ...
stress: dbug: [8] touching bytes in strides of 4096 bytes ...
stress: dbug: [8] freed 524288000 bytes
stress: dbug: [8] allocating 524288000 bytes ...
stress: dbug: [8] touching bytes in strides of 4096 bytes ...
shy@flash-shy:/HDD/learn/docker/apache2$ docker run -it --name=memfailed -m 500M progrium/stress \
--vm 1 --vm-bytes 700M
stress: info: [1] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: dbug: [1] using backoff sleep of 3000us
stress: dbug: [1] --> hogvm worker 1 [8] forked
stress: dbug: [8] allocating 734003200 bytes ...
stress: dbug: [8] touching bytes in strides of 4096 bytes ...
stress: FAIL: [1] (416) <-- worker 8 got signal 9
stress: WARN: [1] (418) now reaping child worker processes
stress: FAIL: [1] (422) kill error: No such process
stress: FAIL: [1] (452) failed run completed in 0s
  1. -it

    -i: Keep the container’s standard input (stdin) open, allowing you to interact with the container.

    -t: Allocate a pseudo-TTY (terminal) for the container, so you can see the output and interact with the container. These two flags are usually used together, meaning you want to enter the container and interact with it.

  2. --name=memfailed

    Assigns a name to the container. In this case, the container is named memfailed. This allows you to reference the container by its name rather than its ID in future commands.

  3. -m 500M

    Limits the container’s maximum memory usage to 500MB. If the container exceeds this limit, it will be killed, and an “Out of Memory” (OOM) error will occur.

  4. progrium/stress

    Specifies the Docker image to run. In this case, it’s the progrium/stress image, which contains the stress tool that can be used to generate load on the system.

  5. --vm 1

    This means that the stress tool will start one virtual memory load (vm) process. The number 1 indicates that one such process will be started.

  6. --vm-bytes 700M

    The --vm-bytes parameter specifies the amount of memory to allocate for each virtual memory worker. In this case it is set to 700MB, so the stress tool will attempt to allocate 700MB of virtual memory for the worker process.

Now it is clear why the two runs behave differently: 400M fits within the 500M limit, while 700M exceeds it, so the kernel’s OOM killer terminates the worker with signal 9 (SIGKILL), as the FAIL lines in the log show.

In short, the -m (or --memory) parameter is how we apply memory quotas to our containers.
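To confirm that a limit is actually in place, we can query Docker directly. The following is a sketch that assumes the memlimit container from the example above is still present:

```shell
# Inspect the configured memory limit (in bytes) of the "memlimit" container.
# 500M corresponds to 524288000 bytes (500 * 1024 * 1024).
docker inspect --format '{{.HostConfig.Memory}}' memlimit

# Show live per-container usage against the limit
# (the "MEM USAGE / LIMIT" column).
docker stats --no-stream memlimit
```

docker inspect reads the limit from the container’s configuration, so it works even when the container has already exited.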

Managing CPU Quotas

By default, all containers can use the same CPU resources without any restrictions.

Similar to memory, when the CPU demand of a container increases, it will lead to CPU resource contention. However, unlike memory, where an absolute amount is specified, CPU allocation is done by specifying a relative weight.

The --cpu-shares parameter is used to allocate CPU resources.

By default, this value is set to 1024.

Note that when the workload in a container is idle, other containers are allowed to use its spare CPU cycles, which keeps overall utilization and workload performance high.

CPU resource limits only take effect when the physical machine’s resources are insufficient, and the allocation is based on priority. When other containers are idle, the busy containers can utilize all available CPU resources.
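To see the relative weights in action, we need containers that actually contend for the same CPU. The following sketch (container names and the 2:1 ratio are illustrative) pins two CPU-bound stress containers to one core with weights 1024 and 512:

```shell
# Pin both containers to CPU 0 so they genuinely contend,
# then assign relative weights of 1024 and 512.
docker run -d --name cpu1024 --cpuset-cpus 0 --cpu-shares 1024 progrium/stress --cpu 1
docker run -d --name cpu512  --cpuset-cpus 0 --cpu-shares 512  progrium/stress --cpu 1

# Under contention, the CPU % column should settle near a 2:1 split.
docker stats --no-stream cpu1024 cpu512
```

If either container goes idle, the other is free to take the whole core, which is exactly the behavior described above.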

Managing I/O Quotas

We can use the --blkio-weight parameter (for example, --blkio-weight 300) to set a container’s relative block I/O weight.

Under normal circumstances, a container with a weight of 600 should get roughly twice the I/O bandwidth of one with a weight of 300. You can test the I/O performance using the following commands.

In the actual tests below, however, there is no real contention for the disk, and the weight only takes effect during I/O contention. So the 600-weight container is faster than the 300-weight one, but not by a factor of two.

shy@flash-shy:/HDD/learn/docker/apache2$ docker run -d --name 600io --blkio-weight 600 httpd
shy@flash-shy:/HDD/learn/docker/apache2$ docker exec -it 600io /bin/bash
root@fda33d6184a5:/usr/local/apache2# time dd if=/dev/zero of=test.out bs=1M count=10240
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 35.8148 s, 300 MB/s

real 0m35.959s
user 0m0.011s
sys 0m14.162s

shy@flash-shy:/HDD/learn/docker/apache2$ docker run -d --name 300io --blkio-weight 300 httpd
shy@flash-shy:/HDD/learn/docker/apache2$ docker exec -it 300io /bin/bash
root@7d406deec8ac:/usr/local/apache2# time dd if=/dev/zero of=test.out bs=1M count=10240
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 44.8088 s, 240 MB/s

real 0m44.897s
user 0m0.025s
sys 0m16.777s

The underlying implementation of resource limits

Linux uses cgroups (control groups) to allocate CPU, memory, and I/O quotas to processes. For Docker containers, cgroups are the mechanism that enforces the limits we set above: memory caps, CPU shares, and block I/O weights. The corresponding settings are exposed as files under /sys/fs/cgroup/.

You can find a container’s specific limits by navigating to the directories under /sys/fs/cgroup/ that correspond to the container’s cgroup and reading the values for CPU, memory, and I/O.

shy@flash-shy:/HDD/learn/docker/apache2$ docker exec -it http1 /bin/bash
root@9871abf3948e:/usr/local/apache2# ls /sys/fs/cgroup/
blkio cpu cpu,cpuacct cpuacct cpuset devices freezer hugetlb memory misc net_cls net_cls,net_prio net_prio perf_event pids rdma systemd
root@9871abf3948e:/usr/local/apache2# ls /sys/fs/cgroup/blkio/
blkio.prio.class blkio.throttle.io_service_bytes_recursive blkio.throttle.read_bps_device blkio.throttle.write_iops_device notify_on_release
blkio.reset_stats blkio.throttle.io_serviced blkio.throttle.read_iops_device cgroup.clone_children tasks
blkio.throttle.io_service_bytes blkio.throttle.io_serviced_recursive blkio.throttle.write_bps_device cgroup.procs
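From the host side, the same limits can be read directly out of the container’s cgroup directory. This is a sketch assuming a cgroup v1 hierarchy with the default cgroupfs driver (as in the controller listing above); with the systemd cgroup driver the paths differ:

```shell
# Resolve the full container ID of the "memlimit" container.
CID=$(docker inspect --format '{{.Id}}' memlimit)

# For a container started with -m 500M, this should show
# 524288000 (500 * 1024 * 1024 bytes).
cat /sys/fs/cgroup/memory/docker/$CID/memory.limit_in_bytes

# CPU shares and block-I/O weight live under their own controllers.
cat /sys/fs/cgroup/cpu/docker/$CID/cpu.shares
```

Writing to these files changes the limits on the fly, which is essentially what docker update does under the hood.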