
Commit 78f3908

Muchun Song authored; committed by akpm00

mm: hugetlb_vmemmap: add hugetlb_optimize_vmemmap sysctl
We must add hugetlb_free_vmemmap=on (or "off") to the boot cmdline and reboot
the server to enable or disable the feature of optimizing vmemmap pages
associated with HugeTLB pages. However, rebooting usually takes a long time,
so add a sysctl to enable or disable the feature at runtime without rebooting.

Why do we need this? There are three use cases.

1) The feature of minimizing the overhead of struct page associated with each
   HugeTLB page is disabled by default unless "hugetlb_free_vmemmap=on" is
   passed on the boot cmdline. When we (ByteDance) deliver servers to users
   who want to enable this feature, they have to configure grub (change the
   boot cmdline) and reboot the servers, and rebooting usually takes a long
   time (we have thousands of servers). It is a very bad experience for the
   users, so we need an approach to enable this feature without rebooting.
   This is a use case from our practical environment.

2) In some use cases, HugeTLB pages are allocated 'on the fly' instead of
   being pulled from the HugeTLB pool; those workloads would be affected with
   this feature enabled. Such workloads can be identified by the fact that
   they never explicitly allocate huge pages with 'nr_hugepages' but only set
   'nr_overcommit_hugepages' and then let the pages be allocated from the
   buddy allocator at fault time. Commit 099730d confirms this is a real use
   case. For those workloads, the page fault time could be ~2x slower than
   before. We suspect those users would want to disable this feature if the
   system has it enabled and they do not think the memory savings benefit is
   enough to make up for the performance drop.

3) A workload that wants vmemmap pages to be optimized and a workload that
   sets 'nr_overcommit_hugepages' and does not want the extra overhead at
   fault time (when the overcommitted pages are allocated from the buddy
   allocator) may be deployed on the same server. The user could enable this
   feature, set 'nr_hugepages' and 'nr_overcommit_hugepages', and then
   disable the feature. In this case, the overcommitted HugeTLB pages will
   not incur the extra overhead at fault time.

Link: https://lkml.kernel.org/r/20220512041142.39501-5-songmuchun@bytedance.com
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Luis Chamberlain <mcgrof@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Iurii Zaikin <yzaikin@google.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: David Hildenbrand <david@redhat.com>
Cc: Masahiro Yamada <masahiroy@kernel.org>
Cc: Xiongchun Duan <duanxiongchun@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
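The use-case-3 workflow above can be sketched as a small script. The sysctl
paths are the ones this commit adds or relies on; the `apply` helper and the
`DRYRUN` guard are hypothetical conveniences so the sketch only prints the
writes instead of touching a live system.

```shell
#!/bin/sh
# Sketch of use case 3: enable optimization, pre-allocate the pool, allow
# overcommit, then disable so fault-time allocations skip the remap overhead.
# DRYRUN=1 (the default here) only prints what would be written.
: "${DRYRUN:=1}"

apply() {  # apply <sysctl file> <value>
	if [ "$DRYRUN" = 1 ] || [ ! -w "$1" ]; then
		echo "would write $2 to $1"
	else
		echo "$2" > "$1"
	fi
}

apply /proc/sys/vm/hugetlb_optimize_vmemmap 1  # enable at runtime, no reboot
apply /proc/sys/vm/nr_hugepages 8              # pool pages get optimized vmemmap
apply /proc/sys/vm/nr_overcommit_hugepages 8   # permit fault-time allocation
apply /proc/sys/vm/hugetlb_optimize_vmemmap 0  # overcommitted pages now avoid
                                               # the extra remap overhead
```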
1 parent 9c54c52 commit 78f3908

File tree: 4 files changed, +133 −15 lines

Documentation/admin-guide/sysctl/vm.rst

Lines changed: 39 additions & 0 deletions

@@ -562,6 +562,45 @@ Change the minimum size of the hugepage pool.
 See Documentation/admin-guide/mm/hugetlbpage.rst
 
 
+hugetlb_optimize_vmemmap
+========================
+
+This knob is not available when memory_hotplug.memmap_on_memory (kernel parameter)
+is configured or the size of 'struct page' (a structure defined in
+include/linux/mm_types.h) is not a power of two (an unusual system configuration
+could result in this).
+
+Enable (set to 1) or disable (set to 0) the feature of optimizing vmemmap pages
+associated with each HugeTLB page.
+
+Once enabled, the vmemmap pages of subsequent allocations of HugeTLB pages from
+the buddy allocator will be optimized (7 pages per 2MB HugeTLB page and 4095
+pages per 1GB HugeTLB page), whereas already allocated HugeTLB pages will not
+be optimized. When those optimized HugeTLB pages are freed from the HugeTLB
+pool to the buddy allocator, the vmemmap pages representing that range need to
+be remapped again and the vmemmap pages discarded earlier need to be allocated
+again. If your use case is that HugeTLB pages are allocated 'on the fly' (e.g.
+never explicitly allocating HugeTLB pages with 'nr_hugepages' but only setting
+'nr_overcommit_hugepages', so that overcommitted HugeTLB pages are allocated
+'on the fly') instead of being pulled from the HugeTLB pool, you should weigh
+the benefit of memory savings against the extra overhead (~2x slower than
+before) of allocating or freeing HugeTLB pages between the HugeTLB pool and
+the buddy allocator. Another behavior to note is that if the system is under
+heavy memory pressure, the user may be prevented from freeing HugeTLB pages
+from the HugeTLB pool to the buddy allocator, since the allocation of vmemmap
+pages could fail; you have to retry later if your system encounters this
+situation.
+
+Once disabled, the vmemmap pages of subsequent allocations of HugeTLB pages
+from the buddy allocator will not be optimized, meaning the extra overhead at
+allocation time from the buddy allocator disappears, whereas already optimized
+HugeTLB pages will not be affected. If you want to make sure there are no
+optimized HugeTLB pages, you can set "nr_hugepages" to 0 first and then
+disable this. Note that writing 0 to nr_hugepages will make any "in use"
+HugeTLB pages become surplus pages. Those surplus pages remain optimized
+until they are no longer in use; you would need to wait for those surplus
+pages to be released before there are no optimized pages in the system.
+
+
 nr_hugepages_mempolicy
 ======================
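The "7 pages per 2MB HugeTLB page and 4095 pages per 1GB HugeTLB page" figures
in the documentation above can be re-derived arithmetically, assuming a 4 KB
base page size and a 64-byte struct page (the power-of-two size the text
requires); one vmemmap page per HugeTLB page is always retained:

```shell
#!/bin/sh
# Derive how many vmemmap (struct page metadata) pages a HugeTLB page needs,
# and how many of them optimization can free (all but one retained page).
page=4096   # base page size in bytes (assumed)
sp=64       # sizeof(struct page) in bytes (assumed, must be a power of two)

for hp in $((2*1024*1024)) $((1024*1024*1024)); do
	subpages=$(( hp / page ))                 # struct pages per HugeTLB page
	vmemmap_pages=$(( subpages * sp / page )) # pages holding that metadata
	echo "$hp bytes: $((vmemmap_pages - 1)) of $vmemmap_pages vmemmap pages freed"
done
```

For a 2MB HugeTLB page this yields 7 of 8 vmemmap pages freed, and for a 1GB
HugeTLB page 4095 of 4096, matching the documentation.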
include/linux/memory_hotplug.h

Lines changed: 9 additions & 0 deletions

@@ -351,4 +351,13 @@ void arch_remove_linear_mapping(u64 start, u64 size);
 extern bool mhp_supports_memmap_on_memory(unsigned long size);
 #endif /* CONFIG_MEMORY_HOTPLUG */
 
+#ifdef CONFIG_MHP_MEMMAP_ON_MEMORY
+bool mhp_memmap_on_memory(void);
+#else
+static inline bool mhp_memmap_on_memory(void)
+{
+	return false;
+}
+#endif
+
 #endif /* __LINUX_MEMORY_HOTPLUG_H */

mm/hugetlb_vmemmap.c

Lines changed: 84 additions & 9 deletions

@@ -10,6 +10,7 @@
  */
 #define pr_fmt(fmt)	"HugeTLB: " fmt
 
+#include <linux/memory_hotplug.h>
 #include "hugetlb_vmemmap.h"
 
 /*
@@ -22,21 +23,40 @@
 #define RESERVE_VMEMMAP_NR		1U
 #define RESERVE_VMEMMAP_SIZE		(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
 
+enum vmemmap_optimize_mode {
+	VMEMMAP_OPTIMIZE_OFF,
+	VMEMMAP_OPTIMIZE_ON,
+};
+
 DEFINE_STATIC_KEY_MAYBE(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON,
 			hugetlb_optimize_vmemmap_key);
 EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
 
+static enum vmemmap_optimize_mode vmemmap_optimize_mode =
+	IS_ENABLED(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP_DEFAULT_ON);
+
+static void vmemmap_optimize_mode_switch(enum vmemmap_optimize_mode to)
+{
+	if (vmemmap_optimize_mode == to)
+		return;
+
+	if (to == VMEMMAP_OPTIMIZE_OFF)
+		static_branch_dec(&hugetlb_optimize_vmemmap_key);
+	else
+		static_branch_inc(&hugetlb_optimize_vmemmap_key);
+	WRITE_ONCE(vmemmap_optimize_mode, to);
+}
+
 static int __init hugetlb_vmemmap_early_param(char *buf)
 {
 	bool enable;
+	enum vmemmap_optimize_mode mode;
 
 	if (kstrtobool(buf, &enable))
 		return -EINVAL;
 
-	if (enable)
-		static_branch_enable(&hugetlb_optimize_vmemmap_key);
-	else
-		static_branch_disable(&hugetlb_optimize_vmemmap_key);
+	mode = enable ? VMEMMAP_OPTIMIZE_ON : VMEMMAP_OPTIMIZE_OFF;
+	vmemmap_optimize_mode_switch(mode);
 
 	return 0;
 }
@@ -69,8 +89,10 @@ int hugetlb_vmemmap_alloc(struct hstate *h, struct page *head)
 	 */
 	ret = vmemmap_remap_alloc(vmemmap_addr, vmemmap_end, vmemmap_reuse,
 				  GFP_KERNEL | __GFP_NORETRY | __GFP_THISNODE);
-	if (!ret)
+	if (!ret) {
 		ClearHPageVmemmapOptimized(head);
+		static_branch_dec(&hugetlb_optimize_vmemmap_key);
+	}
 
 	return ret;
 }
@@ -84,6 +106,11 @@ void hugetlb_vmemmap_free(struct hstate *h, struct page *head)
 	if (!vmemmap_pages)
 		return;
 
+	if (READ_ONCE(vmemmap_optimize_mode) == VMEMMAP_OPTIMIZE_OFF)
+		return;
+
+	static_branch_inc(&hugetlb_optimize_vmemmap_key);
+
 	vmemmap_addr	+= RESERVE_VMEMMAP_SIZE;
 	vmemmap_end	= vmemmap_addr + (vmemmap_pages << PAGE_SHIFT);
 	vmemmap_reuse	= vmemmap_addr - PAGE_SIZE;
@@ -93,7 +120,9 @@ void hugetlb_vmemmap_free(struct hstate *h, struct page *head)
 	 * to the page which @vmemmap_reuse is mapped to, then free the pages
 	 * which the range [@vmemmap_addr, @vmemmap_end] is mapped to.
 	 */
-	if (!vmemmap_remap_free(vmemmap_addr, vmemmap_end, vmemmap_reuse))
+	if (vmemmap_remap_free(vmemmap_addr, vmemmap_end, vmemmap_reuse))
+		static_branch_dec(&hugetlb_optimize_vmemmap_key);
+	else
 		SetHPageVmemmapOptimized(head);
 }
 
@@ -110,9 +139,6 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	BUILD_BUG_ON(__NR_USED_SUBPAGE >=
 		     RESERVE_VMEMMAP_SIZE / sizeof(struct page));
 
-	if (!hugetlb_optimize_vmemmap_enabled())
-		return;
-
 	if (!is_power_of_2(sizeof(struct page))) {
 		pr_warn_once("cannot optimize vmemmap pages because \"struct page\" crosses page boundaries\n");
 		static_branch_disable(&hugetlb_optimize_vmemmap_key);
@@ -134,3 +160,52 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	pr_info("can optimize %d vmemmap pages for %s\n",
 		h->optimize_vmemmap_pages, h->name);
 }
+
+#ifdef CONFIG_PROC_SYSCTL
+static int hugetlb_optimize_vmemmap_handler(struct ctl_table *table, int write,
+					    void *buffer, size_t *length,
+					    loff_t *ppos)
+{
+	int ret;
+	enum vmemmap_optimize_mode mode;
+	static DEFINE_MUTEX(sysctl_mutex);
+
+	if (write && !capable(CAP_SYS_ADMIN))
+		return -EPERM;
+
+	mutex_lock(&sysctl_mutex);
+	mode = vmemmap_optimize_mode;
+	table->data = &mode;
+	ret = proc_dointvec_minmax(table, write, buffer, length, ppos);
+	if (write && !ret)
+		vmemmap_optimize_mode_switch(mode);
+	mutex_unlock(&sysctl_mutex);
+
+	return ret;
+}
+
+static struct ctl_table hugetlb_vmemmap_sysctls[] = {
+	{
+		.procname	= "hugetlb_optimize_vmemmap",
+		.maxlen		= sizeof(enum vmemmap_optimize_mode),
+		.mode		= 0644,
+		.proc_handler	= hugetlb_optimize_vmemmap_handler,
+		.extra1		= SYSCTL_ZERO,
+		.extra2		= SYSCTL_ONE,
+	},
+	{ }
+};
+
+static __init int hugetlb_vmemmap_sysctls_init(void)
+{
+	/*
+	 * If "memory_hotplug.memmap_on_memory" is enabled or "struct page"
+	 * crosses page boundaries, the vmemmap pages cannot be optimized.
+	 */
+	if (!mhp_memmap_on_memory() && is_power_of_2(sizeof(struct page)))
+		register_sysctl_init("vm", hugetlb_vmemmap_sysctls);
+
+	return 0;
+}
+late_initcall(hugetlb_vmemmap_sysctls_init);
+#endif /* CONFIG_PROC_SYSCTL */

mm/memory_hotplug.c

Lines changed: 1 addition & 6 deletions

@@ -63,15 +63,10 @@ static bool memmap_on_memory __ro_after_init;
 module_param_cb(memmap_on_memory, &memmap_on_memory_ops, &memmap_on_memory, 0444);
 MODULE_PARM_DESC(memmap_on_memory, "Enable memmap on memory for memory hotplug");
 
-static inline bool mhp_memmap_on_memory(void)
+bool mhp_memmap_on_memory(void)
 {
 	return memmap_on_memory;
 }
-#else
-static inline bool mhp_memmap_on_memory(void)
-{
-	return false;
-}
 #endif
 
 enum {
