Docker Series (Chapter 3/4): Managing Container Resources
For information on how to install Docker, please refer to: Docker Series (Chapter 0): Installing docker-engine in Ubuntu.
For guidance on how to create and use Docker images, please refer to: Docker Series (Chapter 1): Building and using images.
For guidance on how to create and use Docker containers, please refer to: Docker Series (Chapter 2): Creating and Using containers.
Up to now, you can install docker-engine on an Ubuntu operating system, pull Docker images from public or private image repositories, write a Dockerfile, use it to build images and create containers, and run commands to interact with those containers.
However, do you know how the resources of a created container are bounded? How much memory and disk does it get, and how many IOPS is it allocated?
In this Chapter 3, let's find the answer.
Memory Quotas
Identifying issues by observing memory usage
First, let me create two containers.
shy@flash-shy:/HDD/learn/docker/apache2$ docker run -d --name=http1 httpd
Then, let’s observe the memory size of each container.
shy@flash-shy:/HDD/learn/docker/apache2$ docker exec -it http1 grep MemTotal /proc/meminfo
Finally, let’s check the memory size of local physical machine.
shy@flash-shy:/HDD/learn/docker/apache2$ free -h
Based on the hands-on exercise above, it is obvious that the memory available to each container equals the total memory of the physical host. This means better performance, but it also means that as business demands increase, resource contention may occur. This is something that should generally be avoided during operational planning.
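To double-check that no memory limit is applied by default, we can look at the container's configuration and live statistics. This is a quick sketch using the http1 container from above, not part of the original session:

# HostConfig.Memory is 0 when no memory limit has been set.
docker inspect -f '{{.HostConfig.Memory}}' http1

# In docker stats, the LIMIT column then simply shows the host's total memory.
docker stats --no-stream http1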
Managing memory quotas
We can use the -m or --memory parameter to limit memory usage and allocate a memory quota to a container.
shy@flash-shy:/HDD/learn/docker/apache2$ docker run -it --name=memlimit -m 500M progrium/stress \
    --vm 1 --vm-bytes 400M
shy@flash-shy:/HDD/learn/docker/apache2$ docker run -it --name=memfailed -m 500M progrium/stress \
    --vm 1 --vm-bytes 700M
-it
-i: Keep the container’s standard input (stdin) open, allowing you to interact with the container.
-t: Allocate a pseudo-TTY (terminal) for the container, so you can see the output and interact with the container. These two flags are usually used together, meaning you want to enter the container and interact with it.
--name=memfailed
Assigns a name to the container. In this case, the container is named memfailed. This allows you to reference the container by its name rather than its ID in future commands.
-m 500M
Limits the container’s maximum memory usage to 500MB. If the container exceeds this limit, it will be killed, and an “Out of Memory” (OOM) error will occur.
progrium/stress
Specifies the Docker image to run. In this case, it’s the progrium/stress image, which contains the stress tool that can be used to generate load on the system.
--vm 1
This means that the stress tool will start one virtual memory load (vm) process. The number 1 indicates that one such process will be started.
--vm-bytes 700M
The --vm-bytes parameter specifies the amount of memory to allocate for each virtual memory load process. In this case, it is set to 700MB. Therefore, the stress tool will attempt to allocate 700MB of virtual memory for the load process.
Now it is clear why the two results differ: 400M stays within the 500M limit, while 700M exceeds it and the container is killed with an OOM error.
This confirms that the -m (or --memory) parameter we set is enforced on our containers.
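To verify the quota from outside the container, the limit can also be read back from the container's configuration. This is a sketch using the memlimit container from the example above; the value is reported in bytes:

# 500M corresponds to 524288000 bytes; 0 would mean "no limit".
docker inspect -f '{{.HostConfig.Memory}}' memlimit

# docker stats shows live usage against the 500MiB limit.
docker stats --no-stream memlimit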
Managing CPU Quotas
By default, all containers can use the host's CPU resources without any restrictions.
Similar to memory, when the CPU demand of a container increases, it will lead to CPU resource contention. However, unlike memory, where an absolute amount is specified, CPU allocation is done by specifying a relative weight.
The --cpu-shares parameter is used to allocate CPU resources. By default, this value is set to 1024.
Note that when the workload in a container is idle, other containers are allowed to use its idle CPU cycles, which keeps overall workload performance high.
CPU resource limits only take effect when the physical machine’s resources are insufficient, and the allocation is based on priority. When other containers are idle, the busy containers can utilize all available CPU resources.
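As a rough illustration of the relative weights, the following sketch is not part of the original session: the container names are made up, and --cpuset-cpus 0 pins both containers to the same core so that they actually compete.

# Weight 1024: should receive roughly twice the CPU time of the second container under contention.
docker run -d --name cpu1024 --cpuset-cpus 0 --cpu-shares 1024 progrium/stress --cpu 1

# Weight 512: still gets the whole core whenever cpu1024 is idle.
docker run -d --name cpu512 --cpuset-cpus 0 --cpu-shares 512 progrium/stress --cpu 1

# Compare the CPU % column of the two containers; it should settle near a 2:1 split.
docker stats --no-stream cpu1024 cpu512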
Managing I/O Quotas
We can use the --blkio-weight parameter (for example, --blkio-weight 300) to set an I/O quota.
Under normal circumstances, a container with a weight of 600 will have twice the I/O capacity compared to one with a weight of 300. You can test the I/O performance using the commands below.
In my actual tests there was no resource contention, and this setting is only reflected when I/O contention occurs. So the 600-weight container is faster than the 300-weight one, but not by a factor of two.
shy@flash-shy:/HDD/learn/docker/apache2$ docker run -d --name 600io --blkio-weight 600 httpd
shy@flash-shy:/HDD/learn/docker/apache2$ docker run -d --name 300io --blkio-weight 300 httpd
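The comparison itself can be done with a direct-I/O write inside each container while both run at the same time. This is a sketch assuming dd is available in the httpd image; the weights only matter while the two writes actually compete, and the effect also depends on the host's I/O scheduler and cgroup version.

# oflag=direct bypasses the page cache so that the block-level weights are visible.
docker exec -it 600io dd if=/dev/zero of=/tmp/test.data bs=1M count=1024 oflag=direct

# Run the same write in the 300-weight container in a second terminal and compare the reported speeds.
docker exec -it 300io dd if=/dev/zero of=/tmp/test.data bs=1M count=1024 oflag=direct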
The underlying implementation of resource limits
Linux uses cgroups to allocate CPU, memory, and I/O resource quotas to processes. We can view the resource quotas of containers through the settings under /sys/fs/cgroup/.
In Linux, cgroups (control groups) allow you to manage and allocate resources to processes. For Docker containers, cgroups are used to enforce resource limitations like CPU usage, memory consumption, and disk I/O. The resource settings for these containers can be viewed in the /sys/fs/cgroup/ directory, where cgroup-related files and parameters are exposed.
You can find specific resource limit details for a container by navigating to directories under /sys/fs/cgroup/ that correspond to the container’s cgroup, and checking the values for CPU, memory, and I/O usage.
shy@flash-shy:/HDD/learn/docker/apache2$ docker exec -it http1 /bin/bash
root@9871abf3948e:/usr/local/apache2# ls /sys/fs/cgroup/blkio/
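On the host side, the same settings live under a per-container directory. This is a sketch assuming cgroup v1 with the default cgroupfs driver; replace <container-id> with the full ID from docker ps --no-trunc.

# blkio settings that Docker created for this container.
ls /sys/fs/cgroup/blkio/docker/<container-id>/

# The memory limit set with -m, in bytes (a very large number means "no limit").
cat /sys/fs/cgroup/memory/docker/<container-id>/memory.limit_in_bytes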