270

I want to do some low-resource testing and for that I need to have 90% of the free memory full.

How can I do this on a *nix system?

5
  • 3
    Does it really have to work on any *nix system? Commented Nov 8, 2013 at 12:31
  • 37
Instead of just filling memory, could you instead create a VM (using docker, or vagrant, or something similar) that has a limited amount of memory? Commented Nov 8, 2013 at 13:27
  • 4
@abendigo For QA, many of the solutions presented here are useful: for a general-purpose OS without a specific platform, the VM or kernel boot parameters could be useful, but for an embedded system where you know the memory specification of the targeted system I would go for filling the free memory. Commented Nov 9, 2013 at 17:40
  • 2
    In case anyone else is a little shocked by the scoring here: meta.unix.stackexchange.com/questions/1513/…? Commented Nov 13, 2013 at 14:46
  • See also: unix.stackexchange.com/a/1368/52956 Commented Jun 18, 2015 at 18:42

16 Answers

210

stress-ng is a workload generator that simulates cpu/mem/io/hdd stress on POSIX systems. This call should do the trick on Linux < 3.14:

stress-ng --vm-bytes $(awk '/MemFree/{printf "%d\n", $2 * 0.9;}' < /proc/meminfo)k --vm-keep -m 1 

For Linux >= 3.14, you may use MemAvailable instead to estimate available memory for new processes without swapping:

stress-ng --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 * 0.9;}' < /proc/meminfo)k --vm-keep -m 1 

Adapt the /proc/meminfo call with free(1)/vm_stat(1)/etc. if you need it portable. See also the reference wiki for stress-ng for further usage examples.

7
  • 3
    stress --vm-bytes $(awk '/MemFree/{printf "%d\n", $2 * 0.097;}' < /proc/meminfo)k --vm-keep -m 10 Commented Oct 23, 2015 at 16:47
  • 2
Most of MemFree is kept by the OS, so I used MemAvailable instead. This gave me 92% usage on CentOS 7. stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 * 0.98;}' < /proc/meminfo)k --vm-keep -m 1 Commented Feb 8, 2018 at 0:36
  • good to know, MemAvailable was added to "estimate of how much memory is available for starting new applications, without swapping", see git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/… and git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/… Commented Feb 8, 2018 at 9:11
  • 2
Just as an added note, providing both --vm 1 and --vm-keep is very important. Simply --vm-bytes does nothing and you might be misled into thinking you can allocate as much memory as you need/want. I got bit by this until I tried to sanity-check myself by allocating 256G of memory. This is not a flaw in the answer, which provides the correct flags, just an additional caution. Commented Mar 26, 2019 at 12:56
  • 1
    This is why there is -m 1. According to the stress manpage, -m N is short for --vm N: spawn N workers spinning on malloc()/free() Commented Mar 27, 2019 at 3:03
184

If you have a basic Unix/POSIX system with support for head -c (e.g. GNU coreutils or BusyBox), you can fill a certain amount of memory for a certain duration like this:

head -c BYTES /dev/zero | tail | sleep SECONDS
head -c 5000m /dev/zero | tail | sleep 60    # ~5GB
head -c 5G /dev/zero | tail | sleep 60       # 5GiB, requires GNU (not busybox)

This works because tail needs to keep the current line in memory, in case it turns out to be the last line. The line, read from /dev/zero which outputs only null bytes and no newlines, will be limited by head to BYTES bytes, thus tail will use only that much memory. For a more precise amount, you will need to check how much RAM the commands themselves use on your system and subtract that.

The sleep command is a tricky hack that does two things: it keeps tail alive for the desired duration and it prevents the null bytes from being written to the terminal (they wouldn't be visible but they'd use lots of CPU). The way it works is that tail reads all the data from the first pipe into memory, and when it starts writing the data to the second pipe it will quickly block because sleep isn't reading. When sleep exits (after the specified duration), it causes tail to be killed with SIGPIPE.

Here is a less tricky but slightly longer way:

{ head -c BYTES /dev/zero; sleep SECONDS; } | tail > /dev/null 

This version runs the sleep right after head, in order to keep the first pipe open for the specified duration and hence keep tail alive.

To keep it running indefinitely (until killed using Ctrl+C), change the sleep command to sleep infinity (if supported) or to tail -f /dev/null.

To just quickly run out of RAM completely, you can use this simple command:

tail /dev/zero 

If you have pv and want to slowly increase RAM use:

head -c TOTAL /dev/zero | pv -L BYTES_PER_SEC | tail > /dev/null
head -c 1000m /dev/zero | pv -L 10m | tail > /dev/null

The latter will use up to one gigabyte at a rate of ten megabytes per second. As an added bonus, pv will show the current rate of use and the total use so far. Of course this can also be done with previous variants:

head -c 500m /dev/zero | pv | tail | sleep 60 

Just inserting the | pv | part will show you the current status (throughput and total by default). If you don't have pv but you do have GNU dd, you can use dd status=progress bs=4096 instead.

Compatibility hints and alternatives
If you do not have a /dev/zero device, the standard yes and tr tools might substitute: yes | tr \\n x | head -c BYTES | tail (yes outputs an infinite amount of "yes"es, tr substitutes the newline such that everything becomes one huge line and tail needs to keep all that in memory).
Another, simpler alternative is using dd: dd if=/dev/zero bs=1G | sleep 60 uses 1GB of memory on GNU and BusyBox.
Finally, if your head does not accept a multiplier suffix, you can calculate the number of bytes inline, for example 50MiB: head -c $((1024*1024*50)). If it doesn't even support -c (e.g. OpenBSD), you can use dd bs=1024 count=$((1024*50)) instead for 50MiB (bs is the block size and has a big impact on performance; the total size is the block size times the count).


Credits to:

  • falstaff for contributing a variant that is even simpler and more broadly compatible (like with BusyBox),
  • James Scriven for pointing out the CPU usage if the output is not redirected, and
  • tom for iterating further and improving large parts of the answer to increase compatibility and reduce CPU load while keeping the options simple to use.

Why another answer when there are eleven solutions already? The accepted answer recommends installing a package (not available everywhere); the top voted answer recommends compiling a C program (I did not have a compiler or toolchain installed on/for the target platform); the second top-voted answer recommends running the application in a VM (copying my phone's internal sdcard and creating a virtualbox image to run an ARM VM is quite involved); the third suggests modifying something in the boot sequence (won't work on my phone); the fourth only works in so far as the /dev/shm mountpoint exists and is large enough (remounting needs root); the fifth combines many of the above without sample code; the sixth is a great answer but I did not see this answer before coming up with my own approach, so I thought I'd add my own, also because it's shorter to remember or type over if you don't see that the memblob line is actually the crux of the matter; the seventh again does not answer the question (uses ulimit to limit a process instead); the eighth requires installing python; the ninth thinks we're all very uncreative and finally the tenth wrote their own C++ program which causes the same issue as the top-voted answer.

17
  • lovely solution. Only glitch is that the exit code of the construct is 1 because grep does not find a match. None of the solutions from stackoverflow.com/questions/6550484/… seem to fix it. Commented May 5, 2016 at 18:50
  • @HolgerBrandl Good point, I wouldn't know how to fix that. This is the first time I heard of set -e, so I just learned something :) Commented May 5, 2016 at 19:51
  • 1
    Cool technique! time yes | tr \\n x | head -c $((1024*1024*1024*10)) | grep n (use 10 GiB memory) takes 1 minute 46 seconds. Running julman99's eatmemory program at github.com/julman99/eatmemory takes 6 seconds. ...Well, plus the download and compile time, but it compiled with no issue... and very quickly... on my RHEL6.4 machine. Still, I like this solution. Why reinvent the wheel? Commented Apr 7, 2017 at 20:53
  • 1
    This is a great technique, but I would suggest piping the result to /dev/null. Writing all those null bytes to the terminal costs a huge amount of CPU. Instead of head -c 500m /dev/zero | tail, use head -c 500m /dev/zero | tail > /dev/null. For me, this reduced runtime from about 12s to about 0.27s. Commented Feb 20, 2024 at 17:42
  • 1
    Great answer, two suggestions: (1) To add a duration without bash, just use { head -c 500m /dev/zero; sleep SECONDS; } | tail. (2) I second the suggestion to redirect to /dev/null; if you want to save characters, you can instead use ... | tail | : (note that in general COMMAND | : is not equivalent to COMMAND > /dev/null, because the former kills COMMAND when it tries to write; but in this particular case that doesn't matter because tail only starts writing after it reads all the data, and at that point it's okay for it to be killed). I'm happy to create a suggested edit. Commented Jul 29, 2024 at 9:41
97

You can write a C program to malloc() the required memory and then use mlock() to prevent the memory from being swapped out.

Then just let the program wait for keyboard input; when a key is pressed, unlock the memory, free it, and exit.

7
  • 31
A long time back I had to test a similar use case. I observed that until you write something to that memory it will not actually be allocated (i.e. until a page fault happens). I am not sure whether mlock() takes care of that. Commented Nov 8, 2013 at 13:31
  • 2
I concur with @siri; however, it depends on which variant of UNIX you are using. Commented Nov 8, 2013 at 13:34
  • 3
    Some inspiration for the code. Furthermore, I think you don't need to unlock/free the memory. The OS is going to do that for you when your process has ended. Commented Nov 8, 2013 at 13:44
  • 12
You probably have to actually write to the memory; the kernel might just overcommit if you only malloc it. If configured to, Linux, for example, will let malloc return successfully without actually having the memory free, and only actually allocate the memory when it is being written to. See win.tue.nl/~aeb/linux/lk/lk-9.html Commented Nov 8, 2013 at 14:32
  • 10
    @Sebastian: calloc will run into the same problem IIRC. All the memory will just point to the same read-only zeroed page. It won't actually get allocated until you try to write to it (which won't work since it is read-only). The only way of being really sure that I know is to do a memset of the whole buffer. See the following answer for more info stackoverflow.com/a/2688522/713554 Commented Nov 8, 2013 at 16:43
50

I would suggest running a VM with limited memory and testing the software in that; it would be a more efficient test than trying to fill memory on the host machine.

That method also has the advantage that if the low-memory situation causes OOM errors elsewhere and hangs the whole OS, you only hang the VM you are testing in, not the machine you might have other useful processes running on.

Also if your testing is not CPU or IO intensive, you could concurrently run instances of the tests on a family of VMs with a variety of low memory sizes.
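For example, here is a minimal sketch using Docker (assuming Docker is installed; the image tag and the ./run-tests.sh entry point are placeholders). The --memory and --memory-swap flags cap the container's RAM, and setting them equal disables extra swap:

# hypothetical example: run a test script in a container capped at 256 MiB of RAM
# --memory-swap equal to --memory means the container gets no additional swap
docker run --rm -it \
    --memory=256m --memory-swap=256m \
    -v "$PWD":/work -w /work \
    ubuntu:22.04 ./run-tests.sh

The same idea works with vagrant or a full VM by setting the guest's RAM size in its configuration.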

39

From this HN comment: https://news.ycombinator.com/item?id=6695581

Just fill /dev/shm via dd or similar.

swapoff -a
dd if=/dev/zero of=/dev/shm/fill bs=1k count=1024k
4
  • 9
    Not all *nixes have /dev/shm. Any more portable idea? Commented Nov 8, 2013 at 12:24
  • 2
    If pv is installed, it helps to see the count: dd if=/dev/zero bs=1024 |pv -b -B 1024 | dd of=/dev/shm/fill bs=1024 Commented Sep 26, 2017 at 20:01
  • 1
If you want speed, this method is the right choice, because it allocates the desired amount of RAM in a matter of seconds. Don't rely on /dev/urandom; it will use 100% of a CPU and take several minutes if your RAM is big. That said, /dev/shm has a relative size in modern Ubuntu/Debian distros: it defaults to 50% of physical RAM. Hopefully you can remount /dev/shm or maybe create a new mount point. Just make sure it has the actual size you want to allocate. Commented Dec 8, 2017 at 19:25
note to self: don't yes > /dev/shm/asdf, as it will crash your system (even with swapping enabled) Commented Jul 25, 2020 at 0:21
32
  1. run linux;
  2. boot with mem=nn[KMG] kernel boot parameter

(look in linux/Documentation/kernel-parameters.txt for details).
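For example, a sketch for a GRUB-based distribution (file path, update command, and the 512M value are illustrative and vary by distro):

# /etc/default/grub: make the kernel see only 512 MiB of RAM
GRUB_CMDLINE_LINUX_DEFAULT="quiet mem=512M"

# then regenerate the GRUB configuration and reboot
sudo update-grub    # Debian/Ubuntu; other distros use e.g. grub2-mkconfig -o /boot/grub2/grub.cfg
sudo reboot

For a one-off test you can instead press e at the GRUB boot menu and append mem=512M to the kernel command line for that boot only.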

25

I keep a function to do something similar in my dotfiles. https://github.com/sagotsky/.dotfiles/blob/master/.functions#L248

function malloc() {
    if [[ $# -eq 0 || $1 == '-h' || $1 -lt 0 ]] ; then
        echo -e "usage: malloc N\n\nAllocate N mb, wait, then release it."
    else
        N=$(free -m | grep Mem: | awk '{print int($2/10)}')
        if [[ $N -gt $1 ]] ; then
            N=$1
        fi
        sh -c "MEMBLOB=\$(dd if=/dev/urandom bs=1MB count=$N) ; sleep 1"
    fi
}
2
  • 1
This is the nicest solution IMHO, as it essentially only needs dd to work; all the other stuff can be worked around in any shell. Note that it actually claims twice as much memory as the data dd produces, at least temporarily. Tested on Debian 9, dash 0.5.8-2.4. If you use bash for running the MEMBLOB part, it becomes really slow and uses four times the amount that dd produces. Commented Oct 16, 2018 at 7:46
  • Indeed a nice idea. Link to Github example is broken. Consider updating it Commented Jul 10, 2022 at 7:16
22

How about a simple Python solution?

#!/usr/bin/env python
import sys
import time

if len(sys.argv) != 2:
    print "usage: fillmem <number-of-megabytes>"
    sys.exit()

count = int(sys.argv[1])
megabyte = (0,) * (1024 * 1024 / 8)

data = megabyte * count

while True:
    time.sleep(1)
6
  • 8
    That will probably quickly be swapped out, having very little actual impact on memory pressure (unless you fill up all the swap as well, which will take a while, usually) Commented Nov 8, 2013 at 13:22
  • 1
    Why would a unix swap while there is available RAM? This is actually a plausible way to evict disk cache when need be. Commented Nov 8, 2013 at 23:04
  • @AlexanderShcheblikin This question isn't about evicting disk cache (which is useful for performance testing but not for low resources testing). Commented Nov 9, 2013 at 14:40
  • 1
    This solution worked to cobble up a Gig or two in my tests, though I didn't try to stress my memory. But, @JoachimSauer, one could set sysctl vm.swappiness=0 and furthermore set vm.min_free_kbytes to a small number, maybe 1024. I haven't tried it, but the docs say that this is how you control the quickness of swapping out... you should be able to make it quite slow indeed, to the point of causing an OOM condition on your machine. See kernel.org/doc/Documentation/sysctl/vm.txt and kernel.org/doc/gorman/html/understand/understand005.html Commented Apr 4, 2017 at 20:03
  • simply one liner for 1GB: python -c "x=(1*1024*1024*1024/8)*(0,); raw_input()" Commented Sep 19, 2019 at 10:51
12

How about ramfs, if it exists? Mount it and copy over a large file. If there's no /dev/shm and no ramfs, I guess a tiny C program that does a large malloc based on some input value would do. You might have to run it a few times at once on a 32-bit system with a lot of memory.
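A rough sketch of the ramfs part (needs root; /mnt/ram is just an example mount point). Unlike tmpfs, ramfs enforces no size limit and its pages cannot be swapped out, so choose the count carefully:

mkdir -p /mnt/ram
mount -t ramfs ramfs /mnt/ram
# everything written here stays pinned in RAM until the file is deleted or the filesystem is unmounted
dd if=/dev/zero of=/mnt/ram/fill bs=1M count=2048    # ~2 GiB; adjust count as needed
umount /mnt/ram    # release the memory when done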

9

If you want to test a particular process with limited memory you might be better off using ulimit to restrict the amount of allocatable memory.
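For example, a sketch using ulimit -v, which sets the virtual address-space limit (RLIMIT_AS) for the shell and its children; unlike the resident-set limit discussed in the comment below, this one is enforced on Linux, so allocations beyond it simply fail (./program-under-test is a placeholder):

# run the program with its address space capped at 256 MiB (ulimit -v takes KiB);
# the subshell keeps the limit from affecting your interactive shell
( ulimit -v $((256 * 1024)); ./program-under-test )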

1
  • 2
    Actually this does not work on linux (dunno about other *nixes). man setrlimit: RLIMIT_RSS Specifies the limit (in pages) of the process's resident set (the number of virtual pages resident in RAM). This limit only has effect in Linux 2.4.x, x < 30, and there only affects calls to madvise(2) specifying MADV_WILLNEED. Commented Nov 8, 2013 at 13:46
7

I think this is a case of asking the wrong question and sanity being drowned out by people competing for the most creative answer. If you only need to simulate OOM conditions, you don't need to fill memory. Just use a custom allocator and have it fail after a certain number of allocations. This approach seems to work well enough for SQLite.

6

I need to have 90% of the free memory full

In case there are not enough answers already, one I did not see is using a ramdisk, or technically a tmpfs. This will map RAM to a folder in Linux, and then you just create or dump however many files of whatever size in there to take up however much RAM you want. The one downside is that you need to be root to use the mount command.

# first, as root, make the folder where the tmpfs mount is going to be; place it wherever you like
mkdir /ramdisk
chmod 777 /ramdisk
# change 500G to whatever size makes sense; in my case my server has 512GB of RAM installed
mount -t tmpfs -o size=500G tmpfs /ramdisk

Obtain, copy, or create a file of reasonable size; for example, create a 1GB file, then:

cp my1gbfile /ramdisk/file001
cp my1gbfile /ramdisk/file002
# do this 450 times; 450GB of 512GB is approx 90%

Use free -g to observe how much RAM is allocated.

Note: if you have, say, 512GB of physical RAM and you make the tmpfs larger than 512GB, it will work, and will allow you to freeze/crash the system by allocating 100% of the RAM. For that reason it is advisable to give tmpfs only as much RAM as leaves a reasonable amount free for the system.

To create a single file of a given size:

truncate -s 450G my450gbfile    # see man truncate
# dd also works well
dd if=/dev/zero of=my456gbfile bs=1GB count=456
4

I wrote this little C++ program for that: https://github.com/rmetzger/dynamic-ballooner

The advantage of this implementation is that it periodically checks whether it needs to free or re-allocate memory.

1

With just dd: this continuously reads and allocates 10GB of resident memory (RES):

dd if=/dev/zero of=/dev/null iflag=fullblock bs=10G 

To allocate just once, add count=1. The downside is that it is CPU-heavy.

1

This expands @tkrennwa's answer:

You may not wish to spin the CPU at 100% during the test, which stress-ng does by default.

This invocation will not spin the CPU, but it will allocate 4G of RAM, page-lock it (so it can't swap), and then wait forever (i.e., until Ctrl-C):

stress-ng --vm-bytes 4g --vm-keep -m 1 --vm-ops 1 --vm-hang 0 --vm-locked 
  • --vm-ops N - stop vm workers after N bogo operations.
  • --vm-hang N - sleep N seconds before unmapping memory, the default is zero seconds. Specifying 0 will do an infinite wait.
  • --vm-locked - Lock the pages of the mapped region into memory using mmap MAP_LOCKED (since Linux 2.5.37).

Also, since you are just eating memory, you might want to add --vm-madvise hugepage to use "huge pages" (typically 2MB instead of 4k). This is notably faster when freeing pages after Ctrl-C because far fewer pages occupy the page table:

]# time stress-ng --vm-bytes 16g --vm-keep -m 1 --vm-ops 1 --vm-locked --vm-madvise hugepage
stress-ng: info: [3107579] defaulting to a 86400 second (1 day, 0.00 secs) run per stressor
stress-ng: info: [3107579] dispatching hogs: 1 vm
stress-ng: info: [3107579] successful run completed in 17.15s

real    0m17.186s    <<<<<< with huge pages
user    0m2.481s
sys     0m14.453s

]# time stress-ng --vm-bytes 16g --vm-keep -m 1 --vm-ops 1 --vm-locked
stress-ng: info: [3108342] defaulting to a 86400 second (1 day, 0.00 secs) run per stressor
stress-ng: info: [3108342] dispatching hogs: 1 vm
stress-ng: info: [3108342] successful run completed in 36.52s

real    0m36.555s    <<<<<< without huge pages
user    0m2.598s
sys     0m33.538s
-1

This program works very well for allocating a fixed amount of memory:

https://github.com/julman99/eatmemory

1
  • 1
    Please don't post link-only answers. Also, the source code just does malloc so it's a duplicate of Chris' and nemo's answer which were both posted in 2013 and recommend making a C program that does malloc. Or arguably of tkrennwa's (also 2013) that recommends using a tool. Why do we need another tool? Commented Nov 22, 2021 at 9:29
