I have a script using the docker Python library (the Docker Client API). I would like to limit each Docker container to use only 10 CPUs (out of 30 CPUs on the instance), but I couldn't find a way to achieve that. I know Docker has the --cpus flag, but the docker library only has a cpu_shares (int): CPU shares (relative weight) parameter. Does anyone have experience setting a limit on CPU usage with the docker library?
import docker

client = docker.DockerClient(base_url='unix://var/run/docker.sock')
# mem_limit expects a string with a unit suffix, e.g. '30g' for 30 GB
container = client.containers.run(my_docker_image, mem_limit='30g')
Edit:

I tried nano_cpus as suggested here, e.g. client.containers.run(my_docker_image, nano_cpus=10000000000) to set 10 CPUs. When I inspected the container, it did show "NanoCpus": 10000000000. However, if I run R in the container and call parallel::detectCores(), it still shows 30, which confuses me. I have also added the R tag now.
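For completeness, a minimal sketch of what I ran (my_docker_image is a placeholder for the actual image; detach=True is assumed here so the call returns the container object to inspect):

import docker

client = docker.DockerClient(base_url='unix://var/run/docker.sock')
# nano_cpus is in units of 1e-9 CPUs, so 10 CPUs = 10 * 10**9
container = client.containers.run(my_docker_image, nano_cpus=10 * 10**9, detach=True)
container.reload()  # refresh cached attributes from the Docker daemon
print(container.attrs['HostConfig']['NanoCpus'])  # prints 10000000000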
Thank you!
2 Answers
Setting nano_cpus works, and you can use parallelly::availableCores() to detect the number of CPUs set by the cgroup in R.
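For example, inside the container (a sketch, assuming the parallelly package is installed):

parallel::detectCores()       # counts the host's cores, so still reports 30
parallelly::availableCores()  # honors cgroup limits such as NanoCpus, so reports 10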
If you can use the command line, then you can set the limit with the --cpus flag.
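For instance (a sketch; my_docker_image stands in for the actual image name):

docker run --cpus=10 my_docker_image

--cpus is the CLI counterpart of the nano_cpus parameter in the Python client, so inspecting a container started this way shows the same "NanoCpus": 10000000000.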