13

I ran the program with root privilege, but it keeps complaining that mmap cannot allocate memory. Code snippet is below:

#define PROTECTION (PROT_READ | PROT_WRITE)
#define LENGTH (4*1024)

#ifndef MAP_HUGETLB
#define MAP_HUGETLB 0x40000
#endif

#define ADDR (void *) (0x0UL)
#define FLAGS (MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB)

int main(int argc, char *argv[]) {
    ...
    // allocate a buffer with the same size as the LLC using huge pages
    buf = mmap(ADDR, LENGTH, PROTECTION, FLAGS, 0, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        exit(1);
    }
    ...
}

Hardware: I have 8 GB of RAM. The processor is an Ivy Bridge.

Uname output:

Linux mymachine 3.13.0-43-generic #72-Ubuntu SMP Mon Dec 8 19:35:06 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux 

EDIT 1: The output of perror

mmap: Cannot allocate memory 

I also added one line to print errno:

printf("something is wrong: %d\n", errno); 

But the output is:

something is wrong: 12 

EDIT 2: The hugetlb-related information from /proc/meminfo:

HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
  • did you check the errno? [or] what is the output of perror()? Commented Dec 24, 2014 at 8:53
  • No problem on my OS X. Commented Dec 24, 2014 at 8:58
  • You shouldn't define that at all; it's a flag, not something you change. Your system doesn't have any huge pages configured, so the error looks normal. Commented Dec 24, 2014 at 15:58
  • kernel.org/doc/Documentation/vm/hugetlbpage.txt Commented Dec 24, 2014 at 16:05
  • Just FYI: since the huge page size is 2 MB but you're only allocating 4 KB, you're wasting 2044 KB. Commented Dec 24, 2014 at 16:09

3 Answers

22

Well, as Documentation/vm/hugetlbpage.txt suggests, running

echo 20 > /proc/sys/vm/nr_hugepages 

solved the problem. Tested on Ubuntu 14.04. See also Why I can't map memory with mmap.
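If you want to check at run time whether the pool actually has pages before calling mmap with MAP_HUGETLB, a minimal sketch along these lines can read the same procfs entry (read_nr_hugepages is a hypothetical helper name I made up for illustration; HugePages_Free in /proc/meminfo would work as an alternative source):

#include <stdio.h>

/* Returns the configured huge page pool size, or -1 on error. */
static long read_nr_hugepages(void)
{
    FILE *f = fopen("/proc/sys/vm/nr_hugepages", "r");
    long n = -1;

    if (f == NULL)
        return -1;
    if (fscanf(f, "%ld", &n) != 1)
        n = -1;
    fclose(f);
    return n;
}

int main(void)
{
    long n = read_nr_hugepages();

    if (n <= 0)
        fprintf(stderr, "no huge pages reserved; "
                        "try: echo 20 > /proc/sys/vm/nr_hugepages\n");
    else
        printf("huge page pool size: %ld\n", n);
    return 0;
}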


1 Comment

I used VHM, went to server configuration, then searched for proc. I selected unlimited and my problems were solved.
10

When you use MAP_HUGETLB, the mmap(2) call can fail (e.g. if your system does not have huge pages configured, or if some resource is exhausted), so you should almost always call mmap without MAP_HUGETLB as a fallback. Also, you should not define MAP_HUGETLB yourself. It is provided by the system headers pulled in by <sys/mman.h> (its value can differ between architectures and kernel versions); if they don't define it, don't use it!

// allocate a buffer with the same size as the LLC using huge pages
buf = mmap(NULL, LENGTH, PROTECTION,
#ifdef MAP_HUGETLB
           MAP_HUGETLB |
#endif
           MAP_PRIVATE | MAP_ANONYMOUS, 0, 0);
#ifdef MAP_HUGETLB
if (buf == MAP_FAILED) {
    // try again without huge pages:
    buf = mmap(NULL, LENGTH, PROTECTION,
               MAP_PRIVATE | MAP_ANONYMOUS, 0, 0);
}
#endif
if (buf == MAP_FAILED) {
    perror("mmap");
    exit(EXIT_FAILURE);
}

The kernel's Documentation/vm/hugetlbpage.txt mentions that huge pages are (or may be) limited: e.g. you have to pass hugepages=N to the kernel or configure the pool through /proc/ or /sys/, and support may not be compiled into the kernel at all. So you are not guaranteed to get them. And using huge pages for a small mapping of only 4 Kbytes is a mistake (and may simply fail). Huge pages are worthwhile only when asking for many megabytes (e.g. a gigabyte or more), and they are always an optimization: you would want your application to be able to run on a kernel without them.
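To avoid the waste noted above (a 4 KB request still ties up a whole 2 MB huge page), one option is to size the mapping as a multiple of the huge page size from the start. A minimal sketch, assuming 2 MiB huge pages as reported by Hugepagesize in the question; HUGE_PAGE_SIZE and round_up_to_huge_page are illustrative names, not part of the original code, and reading Hugepagesize from /proc/meminfo would be more robust than hard-coding it:

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

/* Assumption: 2 MiB huge pages (Hugepagesize: 2048 kB above). */
#define HUGE_PAGE_SIZE (2UL * 1024 * 1024)

/* Round len up to the next multiple of the huge page size. */
static size_t round_up_to_huge_page(size_t len)
{
    return (len + HUGE_PAGE_SIZE - 1) & ~(HUGE_PAGE_SIZE - 1);
}

int main(void)
{
    size_t len = round_up_to_huge_page(4 * 1024); /* 4 KB request -> 2 MiB */
    int flags = MAP_PRIVATE | MAP_ANONYMOUS;
#ifdef MAP_HUGETLB
    flags |= MAP_HUGETLB;
#endif
    void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE, flags, -1, 0);

    if (buf == MAP_FAILED) {
        perror("mmap");
        exit(EXIT_FAILURE);
    }
    printf("mapped %zu bytes at %p\n", len, buf);
    munmap(buf, len);
    return 0;
}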

10 Comments

I want mmap to use huge pages. How should I make this error go away?
Why can mmap with MAP_HUGETLB fail? Please give some link about that; neither the mmap man page nor hugetlbpage.txt mentions that mmap with MAP_HUGETLB can fail.
@jujj and Basile Starynkevitch, I changed the mapping size to 4*1024*1024 and applied the echo command provided by jujj. It is working now. But since both of you provided part of the solution, I don't know whose answer to accept. I think Basile posted the kernel document and explanation first. Note: with only the echo command, I still got seg faults later in my program.
But I still don't understand why I cannot define MAP_HUGETLB.
Because it is provided by system headers. If they don't define it, your system doesn't have it.
-4

A practical workaround, if you are sure physical memory is sufficient:

echo 1 > /proc/sys/vm/overcommit_memory 

1 Comment

This won't fix the problem. If the kernel has no huge pages available (they need to be contiguous in physical memory), allocating with MAP_HUGETLB will fail, even if you still have a terabyte of free memory.
