How to set a memory limit for Docker containers

Docker containers run by default with no resource restrictions. Processes running in containers are free to use unlimited amounts of memory, which could impact neighboring containers and other workloads on your host.

This is dangerous in production environments. Each container should be configured with an appropriate memory limit to prevent runaway resource consumption. This reduces contention and improves overall system stability.

How Docker memory limits work

Docker allows you to set hard and soft memory limits on individual containers. These have different effects on the amount of memory available and the behavior when the limit is reached.

  • Hard memory limits set an absolute cap on the memory available to the container. Exceeding this limit will normally cause the kernel's out-of-memory (OOM) killer to terminate the container process.
  • Soft memory limits indicate the amount of memory a container is expected to use. The container can use more memory when capacity is available, but it may be terminated first if it exceeds its soft limit during a low-memory condition.

Docker also provides controls to set swap memory restrictions and change what happens when a memory limit is reached. You will see how to use them in the following sections.

Setting hard and soft memory limits

A hard memory limit is set with the docker run command's -m or --memory flag. It takes a value such as 512m (for megabytes) or 2g (for gigabytes):

$ docker run --memory=512m my-app:latest

Containers have a minimum memory requirement of 6 MB. Trying to use --memory values less than 6m will cause an error.

Soft memory limits are set with the --memory-reservation flag. This value must be less than --memory. The limit is only enforced when resource contention occurs or the host is low on physical memory.

$ docker run --memory=512m --memory-reservation=256m my-app:latest

This example starts a container with a 256 MB memory reservation. The process could be terminated if it's using 300 MB while capacity is running low. It will always be stopped if usage exceeds 512 MB.
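
If you manage containers with Docker Compose, the same limits can be declared in the service definition. This is a sketch assuming a service named my-app built from the article's example image; mem_limit and mem_reservation correspond to --memory and --memory-reservation:

```yaml
services:
  my-app:
    image: my-app:latest
    mem_limit: 512m          # hard limit, equivalent to --memory
    mem_reservation: 256m    # soft limit, equivalent to --memory-reservation
```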

Managing swap memory

Containers can be allocated swap memory to accommodate high usage without affecting physical memory consumption. Swap allows the contents of memory to be written to disk once the available RAM has been exhausted.

The --memory-swap flag controls the amount of swap space available. It only works in conjunction with --memory. When you set --memory and --memory-swap to different values, the swap value controls the total amount of memory available to the container, including swap space. The value of --memory determines the portion of that total that is physical memory.

$ docker run --memory=512m --memory-swap=762m my-app:latest

This container has access to 762 MB of memory, of which 512 MB is physical RAM. The remaining 250 MB is swap space stored on disk.

Setting --memory without --memory-swap gives the container access to an amount of swap space equal to its physical memory:

$ docker run --memory=512m my-app:latest

This container has a total of 1024 MB of memory, comprising 512 MB of RAM and 512 MB of swap.
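
The arithmetic behind the two cases above can be sketched in plain shell (the values mirror the article's examples; Docker performs this split internally):

```shell
# How the quota splits between RAM and swap (values in MB).
mem=512

# --memory-swap set explicitly to 762m: swap is the difference.
memory_swap=762
echo "swap=$((memory_swap - mem))MB"        # prints swap=250MB

# --memory-swap unset: the total defaults to twice --memory.
echo "total=$((mem * 2))MB swap=${mem}MB"   # prints total=1024MB swap=512MB
```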

Swap can be disabled for a container by setting the --memory-swap flag to the same value as --memory. As --memory-swap sets the total amount of memory and --memory allocates the physical portion, you are telling Docker that 100% of the available memory should be RAM.
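
Continuing the article's example, disabling swap looks like this:

```shell
# --memory-swap equals --memory, so the entire 512 MB quota must be RAM.
$ docker run --memory=512m --memory-swap=512m my-app:latest
```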

In all cases, swap only works when it is enabled on your host. Swap reports from inside containers are unreliable and should not be used. Commands like free run inside a container will show the total amount of swap space on your Docker host, not the swap space accessible to the container.

Disabling process deaths due to out of memory

Out of memory errors in a container usually cause the kernel to kill the process. This results in the container stopping with exit code 137.
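
One way to confirm that a stopped container was OOM-killed (the container name my-app here is a placeholder) is to read its state back with docker inspect:

```shell
# Exit code 137 = 128 + 9 (SIGKILL); OOMKilled is true when the kernel
# terminated the process for exceeding its memory limit.
$ docker inspect --format '{{.State.ExitCode}} {{.State.OOMKilled}}' my-app
```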

Including the optional --oom-kill-disable flag with your docker run command disables this behavior. Instead of killing the process, the kernel will simply block new memory allocations. The process will appear to hang until you reduce its memory usage, it cancels new memory allocations, or you manually restart the container.

This flag should not be used unless you have implemented mechanisms to resolve out-of-memory conditions yourself. It’s usually best to let the kernel kill the process, which causes a container restart that restores normal memory consumption.
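
If you do have such mechanisms in place, the flag is passed alongside the memory limit (Docker warns when --oom-kill-disable is used without --memory):

```shell
# Allocations beyond 512 MB will block instead of triggering an OOM kill.
$ docker run --memory=512m --oom-kill-disable my-app:latest
```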

Summary

Docker containers come with no pre-applied resource restrictions. This leaves container processes free to consume unlimited memory, threatening the stability of your host.

In this article, you learned how to set hard and soft container memory limits to reduce the chance of running into an out-of-memory situation. Setting these limits on all your containers will reduce resource contention and help you stay within the physical memory capacity of your host. You should consider CPU usage limits alongside your memory limits; this prevents individual containers with high CPU demand from negatively affecting their neighbors.
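
As a sketch of pairing the two, the --cpus flag caps a container's CPU time in the same way --memory caps its RAM:

```shell
# Limit the container to at most 1.5 CPU cores and 512 MB of RAM.
$ docker run --cpus=1.5 --memory=512m my-app:latest
```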
