
I am trying to limit the memory bandwidth of a process. I have two cgroups (cgroup1 and cgroup2) with processes in them. I can limit the amount of memory that is used by each cgroup, but if the processes keep calling malloc() and free() and saturating the memory bus, they still influence each other.

How do I limit this? I have only a single memory node and no NUMA.
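For reference, this is roughly how I set the per-cgroup memory limits (a minimal sketch assuming the cgroup v1 memory controller is mounted at /sys/fs/cgroup/memory; the cgroup names and the 512 MiB limit are just examples):

```c
#include <stdio.h>

/* Write a byte limit into a cgroup's memory.limit_in_bytes file
 * (cgroup v1 layout assumed). */
static int set_memory_limit(const char *cgroup, const char *bytes)
{
    char path[256];
    snprintf(path, sizeof(path),
             "/sys/fs/cgroup/memory/%s/memory.limit_in_bytes", cgroup);

    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return -1; }
    fprintf(f, "%s\n", bytes);
    fclose(f);
    return 0;
}

int main(void)
{
    /* Cap each cgroup at 512 MiB. This limits how much memory the
     * processes may use, not how fast they hammer the memory bus. */
    set_memory_limit("cgroup1", "536870912");
    set_memory_limit("cgroup2", "536870912");
    return 0;
}
```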

1 Answer


This is not possible with the standard cgroup controllers. The supported subsystems are:

  • blkio — this subsystem sets limits on input/output access to and from block devices such as physical drives (disk, solid state, USB, etc.).
  • cpu — this subsystem uses the scheduler to provide cgroup tasks access to the CPU.
  • cpuacct — this subsystem generates automatic reports on CPU resources used by tasks in a cgroup.
  • cpuset — this subsystem assigns individual CPUs (on a multicore system) and memory nodes to tasks in a cgroup.
  • devices — this subsystem allows or denies access to devices by tasks in a cgroup.
  • freezer — this subsystem suspends or resumes tasks in a cgroup.
  • memory — this subsystem sets limits on memory use by tasks in a cgroup, and generates automatic reports on memory resources used by those tasks.
  • net_cls — this subsystem tags network packets with a class identifier (classid) that allows the Linux traffic controller (tc) to identify packets originating from a particular cgroup task.
  • net_prio — this subsystem provides a way to dynamically set the priority of network traffic per network interface.
  • ns — the namespace subsystem.

A recent systems research paper introduced a new controller for exactly this, and its authors have made the associated code available.

An alternative would be to mmap a file and apply blkio throttling to the device backing it (not sure if this would work), then have your program read and write its data through the mapping instead of using malloc (ugly!).
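A rough, untested sketch of that idea; the backing file path and buffer size are placeholders, and the blkio throttling on the underlying device would have to be configured separately:

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define BUF_SIZE (64 * 1024 * 1024)   /* 64 MiB backing file */

int main(void)
{
    /* Open (or create) a backing file on the throttled device. */
    int fd = open("/mnt/throttled/buffer.dat", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* Size the file, then map it and use the mapping as the
     * program's working buffer instead of calling malloc(). */
    if (ftruncate(fd, BUF_SIZE) < 0) { perror("ftruncate"); return 1; }
    char *buf = mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    /* Work against the mapping; dirty pages are written back to the
     * file, which is where the blkio limits would (in theory) bite. */
    memset(buf, 0xAB, BUF_SIZE);
    msync(buf, BUF_SIZE, MS_SYNC);

    munmap(buf, BUF_SIZE);
    close(fd);
    return 0;
}
```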


1 Comment

I did indeed notice what you said above. I was hoping someone like you would have come across a paper with a possible solution. Thank you very much, much appreciated.