
I recently upgraded a Virtual Machine from 12GB to 64GB of RAM and found that, without any applications running, half of the memory was allocated. Since the upgrade, the VM load has been chaotic and most of the time the VM is unresponsive.

I couldn't find, either with htop or with ps, which process was allocating this memory, but in the df -h output I saw that some partitions such as /tmp, /sys/fs/cgroup, /run and /dev/shm have a size of 32G on the tmpfs filesystem, and /dev has 32G on devtmpfs.

I understand that this memory is shared, that this is the cause of the memory usage, and that new applications may use the memory occupied by these partitions; please correct me if I'm wrong. However, the free -mh command reports about 20GB of memory available for new applications, and the same for the "free" column.

The df -h output.

Filesystem      Size  Used Avail Use% Mounted on
...
devtmpfs         32G     0   32G   0% /dev
tmpfs            32G     0   32G   0% /dev/shm
tmpfs            32G   49M   31G   1% /run
tmpfs            32G     0   32G   0% /sys/fs/cgroup
tmpfs            10G   17M   10G   1% /tmp
...

The free -mh output.

              total        used        free      shared  buff/cache   available
Mem:            62G         41G         21G         24M        106M         21G

According to ps aux, the process using the most memory was /usr/lib/systemd/systemd-journald, at 75MB. I don't have the output from that moment, nor the output of the vmstat and top -b 1 commands.

On another CentOS machine with 128GB, I noticed that the tmpfs size is 64GB but, still, the "available" column of the free -mh output indicates that new applications can allocate up to 123GB, even though there are only 59GB free according to the "free" column.

This latter example seems correct and understandable. But I can't understand the former.
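(For what it's worth, as far as I understand, the "available" column is taken from the MemAvailable field in /proc/meminfo, which the RHEL/CentOS 7 kernel exposes; it estimates how much memory new applications could use without swapping, counting reclaimable page cache, whereas the "free" column only counts completely unused pages. It can be checked directly:

    grep -E 'MemFree|MemAvailable' /proc/meminfo
)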

I'm having problems assigning more than 12GB of memory to a Java application (ES_HEAP_SIZE=12g) and I would like to know if there is anything I should consider to improve the memory behavior. I would also like to better understand the reason behind these tmpfs partitions and why they take half of the system memory. Is there any way to reduce the size of devtmpfs and tmpfs? And how does it affect the system?

The system is CentOS 7.1.1503 with kernel version 3.10.0-229.el7.x86_64. Thank you very much in advance.

Postscript: the Java application hangs in such a way that I can't even run ps or htop, and the only solution is killall -9 java. The system becomes quite unresponsive as well.

2017/01/11 update

Now more processes are running, since more applications have been started. The lsof -n | grep deleted output was empty. I ran ps aux | awk '{print $6/1024 " MB\t\t" $11}' | sort -n, which reported:

  • 143 processes using less than 1MB each.
  • 55 processes between 1 and 10MB that add up to 221MB.
  • Only 5 processes over 10MB, which are:
    • python with 14.89 MB
    • rsyslogd with 26 MB
    • systemd-journald with 47.83 MB
    • kibana with 78.72 MB
    • java with 13456 MB

Still, the free -mh command reports the following, and I don't know where the rest of the memory is being consumed.

              total        used        free      shared  buff/cache   available
Mem:            62G         54G        5.5G        478M        3.2G        7.8G
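As a rough cross-check (only a sketch, and RSS over-counts shared pages, so it is just an approximation), the per-process resident memory can be summed and compared against the "used" figure from free, and /proc/meminfo can be checked for memory that never shows up attributed to any process in ps:

    # total resident set size of all processes, in MB
    ps aux | awk 'NR>1 {rss += $6} END {printf "Total RSS: %.0f MB\n", rss/1024}'

    # kernel-side or tmpfs memory not attributed to any process
    grep -E 'Slab|SUnreclaim|Shmem|HugePages_Total' /proc/meminfo

In this case the processes listed above add up to roughly 14GB, nowhere near the 54G reported as used, which already suggested the memory was being consumed outside of user-space processes.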

2017/01/16 update

The problem is now solved. To begin with, there were several different issues mixed into this question.

The memory usage wasn't related to the tmpfs or devtmpfs but to memory ballooning from the VMware host. This, together with an 8GB quota (which contradicts the memory assigned to the VM), resulted in the strange behavior reported, in which the VM load was chaotic. There were errors in dmesg mentioning vmballoon_work.
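For anyone running into the same thing, a couple of ways to check whether the balloon driver is inflating inside the guest (assuming open-vm-tools is installed; availability of these paths may vary with the tools and kernel version):

    # memory currently claimed by the VMware balloon driver, in MB
    vmware-toolbox-cmd stat balloon

    # the vmw_balloon module exposes its target/current pages via debugfs (if mounted)
    cat /sys/kernel/debug/vmmemctl

    # the kernel messages that pointed at the balloon worker in my case
    dmesg | grep -i balloon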

I couldn't find any information about this problem suggesting it could be a problem on the host side, so I think this question/answer could be useful for similar problems in the future. The key point is these dmesg messages:

CPU: 6 PID: 10033 Comm: kworker/6:0 Not tainted 3.10.0-229.el7.x86_64 #1
Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 09/17/2015
Workqueue: events_freezable vmballoon_work [vmw_balloon]
task: ffff88001d4ead80 ti: ffff880b9bad8000 task.ti: ffff880b9bad8000
RIP: 0010:[<ffffffff812edd71>]  [<ffffffff812edd71>] __list_del_entry+0x31/0xd0
RSP: 0000:ffff880b9badbd68  EFLAGS: 00010246
RAX: ffffffffa032f3c0 RBX: ffffea0000000003 RCX: dead000000200200
RDX: ffffea001107ffe0 RSI: ffff88103fd969f0 RDI: ffffea0011040020
RBP: ffff880b9badbd68 R08: ffffea0011040020 R09: ffff88103fb94000
R10: 0000000000000020 R11: 0000000000000002 R12: ffff88103ff9d0d0
R13: 0000000000000002 R14: ffffff8000000001 R15: 0000000000000002
FS:  0000000000000000(0000) GS:ffff88103fd80000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 00000000016ba024 CR3: 0000000267e1c000 CR4: 00000000000407e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Stack:
 ffff880b9badbd80 ffffffff812ede1d ffffffffa032f3c0 ffff880b9badbdb0
 ffffffffa032d04e ffffffffa032f4c0 ffff880155bd4180 ffff88103fd92ec0
 ffff88103fd97300 ffff880b9badbe18 ffffffffa032d388 ffffffffa032f4c8
Call Trace:
 [<ffffffff812ede1d>] list_del+0xd/0x30
 [<ffffffffa032d04e>] vmballoon_pop+0x4e/0x90 [vmw_balloon]
 [<ffffffffa032d388>] vmballoon_work+0xe8/0x720 [vmw_balloon]
 [<ffffffff8108f1db>] process_one_work+0x17b/0x470
 [<ffffffff8108ffbb>] worker_thread+0x11b/0x400
 [<ffffffff8108fea0>] ? rescuer_thread+0x400/0x400
 [<ffffffff8109739f>] kthread+0xcf/0xe0
 [<ffffffff810972d0>] ? kthread_create_on_node+0x140/0x140
 [<ffffffff8161497c>] ret_from_fork+0x7c/0xb0
 [<ffffffff810972d0>] ? kthread_create_on_node+0x140/0x140

I want to thank Rui F Ribeiro for his answer about tmpfs and devtmpfs. I changed the title from "Why CentOS uses half of the memory for devtmpfs or tmpfs?" to "Where is my CentOS Virtual Machine using half of the memory?" and added some tags.

  • Would you please add the output of df -h, ps uax, free, vmstat and top -b 1? Your numbers seem strange. Also please include the values of the variables used to set up the Java environment. Without them, the question reads more like a rant than like hard technical data. Commented Jan 10, 2018 at 16:41
  • You're right. I added the outputs I have from that moment. Thank you. What seems strange? Commented Jan 10, 2018 at 17:08
  • Add the vars passed to the JVM, they might be off, and while you are at it, the other commands I suggested. If you look closely, /run is using only 49 MB and /tmp 17 MB; your reported values were way off from normal values. They are not the ones eating memory, I am afraid. Commented Jan 10, 2018 at 17:11
  • You are also asking multiple things in a single question here. I would advise breaking this into several questions if possible. Commented Jan 10, 2018 at 17:17
  • Of course, the "used" column of the df -h output looks normal, but I don't understand why the size of these partitions is 32G in the first place and how it affects the available memory. Commented Jan 10, 2018 at 17:19

1 Answer


Your devtmpfs and tmpfs filesystems are not really using gigabytes of memory; they can potentially grow to 32GB, but that figure is just the upper limit of how far they can grow (by default half of the physical RAM, which is why you see 32GB on a 64GB machine). That upper limit is configurable, and it is not what they use; they only eat the part of the RAM where they actually have content.

If you look closely at your df output:
  • /dev is using less than 1M and thus appears as 0,
  • /dev/shm the same thing,
  • /sys/fs/cgroup the same situation,
  • /tmp is using 17 MB,
  • /run is using 49 MB.

So your devtmpfs and tmpfs filesystems combined are using less than 70MB of your RAM (megabytes, mind you).

Whatever is eating your RAM, it is surely not those filesystems.
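You can see this for yourself: tmpfs memory is only consumed once you write something into it, and it is given back as soon as the file is removed. A quick sketch, using a throwaway 1GB file in /dev/shm:

    free -m                                                    # note "used" and "shared"
    dd if=/dev/zero of=/dev/shm/tmpfs-test bs=1M count=1024    # write 1GB into tmpfs
    free -m                                                    # "used" and "shared" grew by ~1GB
    rm /dev/shm/tmpfs-test                                     # the memory is released immediately
    free -m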

As I said, you can change their upper limits if that bothers you; however, for the moment I would focus on how much RAM your JVM parameters are configured to use.
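If you do decide to lower those limits, the tmpfs size is just a mount option (a sketch with example values; adapt the mount point and size to your setup):

    # one-off: shrink the upper limit of an already-mounted tmpfs
    mount -o remount,size=2G /dev/shm

    # persistent: set the size in /etc/fstab
    tmpfs   /dev/shm   tmpfs   defaults,size=2G   0 0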

Ultimately, as per the OP's feedback, there were issues with memory ballooning from the VMware host, and there were errors in dmesg mentioning vmballoon_work.

This, together with an 8GB quota in the VM hypervisor, resulted in the reported strange behavior in which the VM load was chaotic, so it was indeed confirmed that those filesystems were not the culprit.

  • Thank you. The problem is that, without Java even running, the system (htop as well) reports a usage of 40G. But there is no process using that much RAM, not even a GB. Commented Jan 10, 2018 at 17:37
  • Do a ps uax as root. Supposing there are no containers or other (pseudo-)virtualisation technology involved, you should see something else. Also do lsof -n | grep deleted. Commented Jan 10, 2018 at 17:43
  • @CarlosVega Keep in mind also that it's entirely possible for a lot of small processes to be running that together eat up memory. Many UNIX applications, especially older ones, favor process-based parallelization over threading, and each process's memory is accounted for separately. Commented Jan 10, 2018 at 19:57
  • The problem is already solved. First, there were different issues. The memory usage wasn't related to the tmpfs but to the memory ballooning from the VMware host. This, together with an 8GB quota (which contradicts the memory assigned to the VM), resulted in the reported strange behavior in which the VM load was chaotic. There were errors in dmesg mentioning vmballoon_work. Would you like to add this info to your answer so I can mark it as right? :-) Commented Jan 16, 2018 at 11:14
  • Well done. Quite interesting investigative work. Updated the answer with your feedback. Commented Jan 16, 2018 at 11:50
