I want to do some low-resource testing and for that I need to have 90% of the free memory full.
How can I do this on a *nix system?
stress-ng is a workload generator that simulates cpu/mem/io/hdd stress on POSIX systems. This call should do the trick on Linux < 3.14:
stress-ng --vm-bytes $(awk '/MemFree/{printf "%d\n", $2 * 0.9;}' < /proc/meminfo)k --vm-keep -m 1

For Linux >= 3.14, you may use MemAvailable instead to estimate available memory for new processes without swapping:
stress-ng --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 * 0.9;}' < /proc/meminfo)k --vm-keep -m 1

Adapt the /proc/meminfo call with free(1)/vm_stat(1)/etc. if you need it portable. See also the reference wiki for stress-ng for further usage examples.
stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 * 0.98;}' < /proc/meminfo)k --vm-keep -m 1

--vm 1 and --vm-keep are very important. Simply --vm-bytes does nothing, and you might be misled into thinking you can allocate as much memory as you need/want. I got bit by this until I tried to sanity-check myself by allocating 256G of memory. This is not a flaw in the answer, which provides the correct flags; it is just an additional caution. According to the stress manpage, -m N is short for --vm N: spawn N workers spinning on malloc()/free().

If you have a basic Unix/POSIX system with support for head -c (e.g. GNU coreutils or BusyBox), you can fill a certain amount of memory for a certain duration like this:
head -c BYTES /dev/zero | tail | sleep SECONDS
head -c 5000m /dev/zero | tail | sleep 60   # ~5GB
head -c 5G /dev/zero | tail | sleep 60      # 5GiB, requires GNU (not busybox)

This works because tail needs to keep the current line in memory, in case it turns out to be the last line. The line, read from /dev/zero which outputs only null bytes and no newlines, will be limited by head to BYTES bytes, thus tail will use only that much memory. For a more precise amount, you will need to check how much RAM the commands themselves use on your system and subtract that.

The sleep command is a tricky hack that does two things: it keeps tail alive for the desired duration and it prevents the null bytes from being written to the terminal (they wouldn't be visible but they'd use lots of CPU). The way it works is that tail reads all the data from the first pipe into memory, and when it starts writing the data to the second pipe it will quickly block because sleep isn't reading. When sleep exits (after the specified duration), it causes tail to be killed with SIGPIPE.
Here is a less tricky but slightly longer way:
{ head -c BYTES /dev/zero; sleep SECONDS; } | tail > /dev/null

This version runs the sleep right after head, in order to keep the first pipe open for the specified duration and hence keep tail alive.
To keep it running indefinitely (until killed using Ctrl+C), change the sleep command to sleep infinity (if supported) or to tail -f /dev/null.
To just quickly run out of RAM completely, you can use this simple command:
tail /dev/zero

If you have pv and want to slowly increase RAM use:
head -c TOTAL /dev/zero | pv -L BYTES_PER_SEC | tail > /dev/null
head -c 1000m /dev/zero | pv -L 10m | tail > /dev/null

The latter will use up to one gigabyte at a rate of ten megabytes per second. As an added bonus, pv will show the current rate of use and the total use so far. Of course this can also be done with previous variants:
head -c 500m /dev/zero | pv | tail | sleep 60

Just inserting the | pv | part will show you the current status (throughput and total by default). If you don't have pv but you do have GNU dd, you can use dd status=progress bs=4096 instead.
Compatibility hints and alternatives
If you do not have a /dev/zero device, the standard yes and tr tools might substitute:

yes | tr \\n x | head -c BYTES | tail

(yes outputs an infinite amount of "yes"es, tr substitutes the newline such that everything becomes one huge line and tail needs to keep all that in memory).
Another, simpler alternative is using dd:

dd if=/dev/zero bs=1G | sleep 60

This uses 1GB of memory on GNU and BusyBox.
Finally, if your head does not accept a multiplier suffix, you can calculate an amount of bytes inline, for example 50MiB: head -c $((1024*1024*50)). If it doesn't even support -c (e.g. OpenBSD), you can use dd bs=1024 count=$((1024*50)) instead for 50MiB (bs is the block size, it has a big impact on performance; the total size is the block size times the count).
Why another answer when there are eleven solutions already?

- The accepted answer recommends installing a package (not available everywhere).
- The top-voted answer recommends compiling a C program (I did not have a compiler or toolchain installed on/for the target platform).
- The second top-voted answer recommends running the application in a VM (copying my phone's internal sdcard and creating a virtualbox image to run an ARM VM is quite involved).
- The third suggests modifying something in the boot sequence (won't work on my phone).
- The fourth only works insofar as the /dev/shm mountpoint exists and is large enough (remounting needs root).
- The fifth combines many of the above without sample code.
- The sixth is a great answer, but I did not see it before coming up with my own approach, so I thought I'd add my own, also because mine is shorter to remember or type if you don't see that the memblob line is actually the crux of the matter.
- The seventh again does not answer the question (it uses ulimit to limit a process instead).
- The eighth requires installing python.
- The ninth thinks we're all very uncreative.
- And finally the tenth wrote their own C++ program, which causes the same issue as the top-voted answer.
time yes | tr \\n x | head -c $((1024*1024*1024*10)) | grep n (to use 10 GiB of memory) takes 1 minute 46 seconds. Running julman99's eatmemory program at github.com/julman99/eatmemory takes 6 seconds. ...Well, plus the download and compile time, but it compiled with no issue... and very quickly... on my RHEL6.4 machine. Still, I like this solution. Why reinvent the wheel?

Writing all those null bytes to the terminal costs a huge amount of CPU. Instead of head -c 500m /dev/zero | tail, use head -c 500m /dev/zero | tail > /dev/null. For me, this reduced runtime from about 12s to about 0.27s. The same applies to { head -c 500m /dev/zero; sleep SECONDS; } | tail. If you want to save characters, you can instead use ... | tail | : (note that in general COMMAND | : is not equivalent to COMMAND > /dev/null, because the former kills COMMAND when it tries to write; but in this particular case that doesn't matter, because tail only starts writing after it reads all the data, and at that point it's okay for it to be killed).

You can write a C program to malloc() the required memory and then use mlock() to prevent the memory from being swapped out.
Then just let the program wait for keyboard input, then unlock the memory, free it, and exit.
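A minimal sketch of that approach (the size argument, the memset touch, and the error handling are illustrative assumptions, not from the original answer):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

int main(int argc, char *argv[])
{
    size_t mb = (argc > 1) ? strtoul(argv[1], NULL, 10) : 1024;
    size_t len = mb * 1024 * 1024;

    char *buf = malloc(len);
    if (buf == NULL) {
        perror("malloc");
        return 1;
    }
    /* Touch every page so the kernel actually commits the memory. */
    memset(buf, 1, len);
    /* Pin the pages so they cannot be swapped out. */
    if (mlock(buf, len) != 0)
        perror("mlock");  /* may fail due to RLIMIT_MEMLOCK */

    printf("Holding %zu MiB; press Enter to release...\n", mb);
    getchar();

    munlock(buf, len);
    free(buf);
    return 0;
}

Compile with cc -o memhog memhog.c and run, e.g., ./memhog 4096 to hold 4 GiB. Note that mlock() is subject to RLIMIT_MEMLOCK for unprivileged users.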
calloc will run into the same problem IIRC. All the memory will just point to the same read-only zeroed page; it won't actually get allocated until you write to it (the write faults on the read-only page, and only then is a real page allocated). The only way I know of to be really sure is to do a memset of the whole buffer. See the following answer for more info: stackoverflow.com/a/2688522/713554

I would suggest running a VM with limited memory and testing the software in that; it would be a more efficient test than trying to fill memory on the host machine.
That method also has the advantage that if the low-memory situation causes OOM errors elsewhere and hangs the whole OS, you only hang the VM you are testing in, not your machine, where you might have other useful processes running.
Also if your testing is not CPU or IO intensive, you could concurrently run instances of the tests on a family of VMs with a variety of low memory sizes.
From this HN comment: https://news.ycombinator.com/item?id=6695581
Just fill /dev/shm via dd or similar.
swapoff -a
dd if=/dev/zero of=/dev/shm/fill bs=1k count=1024k
If pv is installed, it helps to see the count:

dd if=/dev/zero bs=1024 | pv -b -B 1024 | dd of=/dev/shm/fill bs=1024

Avoid something like yes > /dev/shm/asdf, as it will crash your system (even with swapping enabled). There is also the mem=nn[KMG] kernel boot parameter (look in linux/Documentation/kernel-parameters.txt for details).
I keep a function to do something similar in my dotfiles. https://github.com/sagotsky/.dotfiles/blob/master/.functions#L248
function malloc() {
    if [[ $# -eq 0 || $1 == '-h' || $1 -lt 0 ]] ; then
        echo -e "usage: malloc N\n\nAllocate N mb, wait, then release it."
    else
        N=$(free -m | grep Mem: | awk '{print int($2/10)}')
        if [[ $N -gt $1 ]] ; then
            N=$1
        fi
        sh -c "MEMBLOB=\$(dd if=/dev/urandom bs=1MB count=$N) ; sleep 1"
    fi
}

How about a simple Python solution?
#!/usr/bin/env python
import sys
import time

if len(sys.argv) != 2:
    print("usage: fillmem <number-of-megabytes>")
    sys.exit()

count = int(sys.argv[1])
# A tuple of this many 8-byte references occupies about one megabyte.
megabyte = (0,) * (1024 * 1024 // 8)
data = megabyte * count

while True:
    time.sleep(1)

To influence how quickly that memory gets swapped out, set sysctl vm.swappiness=0 and furthermore set vm.min_free_kbytes to a small number, maybe 1024. I haven't tried it, but the docs say that this is how you control the quickness of swapping out... you should be able to make it quite slow indeed, to the point of causing an OOM condition on your machine. See kernel.org/doc/Documentation/sysctl/vm.txt and kernel.org/doc/gorman/html/understand/understand005.html

How about ramfs, if it exists? Mount it and copy over a large file. If there's no /dev/shm and no ramfs, I guess a tiny C program that does a large malloc based on some input value could do it. You might have to run it a few times at once on a 32-bit system with a lot of memory.
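Such a tiny C program might look like this (a hedged sketch; the megabyte argument and the page-touching memset are illustrative):

#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    /* Allocate the requested number of megabytes (default 512). */
    size_t mb = (argc > 1) ? strtoul(argv[1], NULL, 10) : 512;
    char *p = malloc(mb << 20);
    if (p == NULL)
        return 1;
    memset(p, 0xff, mb << 20);  /* touch every page so it is really committed */
    pause();                    /* hold the memory until the process is killed */
    return 0;
}

On a 32-bit system each process tops out at a few GB of address space, hence the suggestion above to run several instances at once.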
If you want to test a particular process with limited memory you might be better off using ulimit to restrict the amount of allocatable memory.
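As a hedged illustration of the same mechanism from C (the shell's ulimit builtin calls setrlimit under the hood), here is a sketch using RLIMIT_AS; the 100 MiB cap is an arbitrary assumption:

#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void)
{
    /* Cap this process's address space at 100 MiB. */
    struct rlimit rl = { 100 << 20, 100 << 20 };
    if (setrlimit(RLIMIT_AS, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }
    /* This allocation exceeds the cap and should fail. */
    void *p = malloc(200 << 20);
    printf("malloc(200 MiB) %s\n", p ? "succeeded" : "failed");
    free(p);
    return 0;
}

From a shell, (ulimit -v 102400; ./yourprogram) sets the same cap for an existing binary. Note the man page excerpt below on why the RSS-based limit, by contrast, is ineffective on modern kernels.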
From man setrlimit:

RLIMIT_RSS
    Specifies the limit (in pages) of the process's resident set (the number of virtual pages resident in RAM). This limit only has effect in Linux 2.4.x, x < 30, and there only affects calls to madvise(2) specifying MADV_WILLNEED.

I think this is a case of asking the wrong question and sanity being drowned out by people competing for the most creative answer. If you only need to simulate OOM conditions, you don't need to fill memory. Just use a custom allocator and have it fail after a certain number of allocations. This approach seems to work well enough for SQLite.
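A hedged sketch of such a failure-injecting allocator (the name failing_malloc and the fixed budget are illustrative; SQLite's actual test harness is more elaborate):

#include <stdlib.h>

/* Grant a fixed budget of allocations, then report OOM forever after. */
static int allocations_left = 1000;

void *failing_malloc(size_t size)
{
    if (allocations_left <= 0)
        return NULL;        /* simulate out-of-memory */
    allocations_left--;
    return malloc(size);
}

The code under test calls failing_malloc in place of malloc (or the allocator is swapped in through a function pointer), so OOM handling paths can be exercised deterministically without actually exhausting RAM.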
I need to have 90% of the free memory full
In case there are not enough answers already, one I did not see is doing a ramdisk, or technically a tmpfs. This will map RAM to a folder in Linux; you then just create or dump however many files of whatever size in there to take up however much RAM you want. The one downside is that you need to be root to use the mount command.
# First, as root, make the folder where the tmpfs is going to be mounted.
mkdir /ramdisk
chmod 777 /ramdisk
mount -t tmpfs -o size=500G tmpfs /ramdisk
# Change 500G to whatever size makes sense; in my case the server has 512GB of RAM installed.

Obtain or copy or create a file of reasonable size, for example a 1GB file, then:
cp my1gbfile /ramdisk/file001
cp my1gbfile /ramdisk/file002
# repeat 450 times; 450GB of 512GB is approx 90%

Use free -g to observe how much RAM is allocated.
Note: with 512GB of physical RAM, for example, if you tmpfs more than 512GB it will work and will let you freeze/crash the system by allocating 100% of the RAM. For that reason it is advisable to only tmpfs so much RAM that you leave a reasonable amount free for the system.
To create a single file of a given size:
truncate -s 450G my450gbfile   # see man truncate
dd if=/dev/zero of=my456gbfile bs=1GB count=456   # dd also works well

I wrote this little C++ program for that: https://github.com/rmetzger/dynamic-ballooner
The advantage of this implementation is that it periodically checks whether it needs to free or re-allocate memory.
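The core loop of such a ballooner might look like this (a hedged sketch of the idea, not the linked program's code; target_bytes and the one-second poll are illustrative):

#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    size_t target_bytes = 1UL << 30;   /* illustrative 1 GiB target */
    char *balloon = NULL;
    size_t held = 0;

    for (;;) {
        /* A real ballooner would recompute target_bytes here, e.g. from
         * /proc/meminfo, to keep tracking 90% of free memory over time. */
        if (held != target_bytes) {
            free(balloon);
            balloon = malloc(target_bytes);
            if (balloon != NULL) {
                memset(balloon, 1, target_bytes);  /* commit the pages */
                held = target_bytes;
            } else {
                held = 0;  /* allocation failed; retry on the next tick */
            }
        }
        sleep(1);
    }
}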
You can also do it with just dd. This continuously reads and allocates 10GB of resident memory (RES):
dd if=/dev/zero of=/dev/null iflag=fullblock bs=10G

To just allocate once, add count=1. The downside is that it is CPU heavy.
This expands @tkrennwa's answer:
You may not wish to spin the CPU at 100% during the test, which stress-ng does by default.
This invocation will not spin the CPU, but it will allocate 4g of RAM, page-lock it (so it can't swap), and then wait forever (i.e., until Ctrl-C):
stress-ng --vm-bytes 4g --vm-keep -m 1 --vm-ops 1 --vm-hang 0 --vm-locked

- --vm-ops N: stop vm workers after N bogo operations.
- --vm-hang N: sleep N seconds before unmapping memory; the default is zero seconds. Specifying 0 will do an infinite wait.
- --vm-locked: lock the pages of the mapped region into memory using mmap MAP_LOCKED (since Linux 2.5.37).

Also, since you are just eating memory, you might want to add --vm-madvise hugepage to use "huge pages" (typically 2MB instead of 4k). This is notably faster when freeing pages after Ctrl-C because far fewer pages occupy the pagetable:
]# time stress-ng --vm-bytes 16g --vm-keep -m 1 --vm-ops 1 --vm-locked --vm-madvise hugepage
stress-ng: info: [3107579] defaulting to a 86400 second (1 day, 0.00 secs) run per stressor
stress-ng: info: [3107579] dispatching hogs: 1 vm
stress-ng: info: [3107579] successful run completed in 17.15s

real    0m17.186s    <<<<<< with huge pages
user    0m2.481s
sys     0m14.453s

]# time stress-ng --vm-bytes 16g --vm-keep -m 1 --vm-ops 1 --vm-locked
stress-ng: info: [3108342] defaulting to a 86400 second (1 day, 0.00 secs) run per stressor
stress-ng: info: [3108342] dispatching hogs: 1 vm
stress-ng: info: [3108342] successful run completed in 36.52s

real    0m36.555s    <<<<<< without huge pages
user    0m2.598s
sys     0m33.538s

This program works very well for allocating a fixed amount of memory.
This arguably duplicates the 2013 answer that recommends malloc, or arguably tkrennwa's (also 2013) that recommends using a tool. Why do we need another tool?