* [patch 1/5] MAINTAINERS: change hardening mailing list
2020-10-11 6:15 incoming Andrew Morton
@ 2020-10-11 6:16 ` Andrew Morton
2020-10-11 6:16 ` [patch 2/5] MAINTAINERS: Antoine Tenart's email address Andrew Morton
` (3 subsequent siblings)
4 siblings, 0 replies; 7+ messages in thread
From: Andrew Morton @ 2020-10-11 6:16 UTC (permalink / raw)
To: akpm, corbet, keescook, linux-mm, me, mm-commits, rdunlap,
re.emese, torvalds, tycho
From: Kees Cook <keescook@chromium.org>
Subject: MAINTAINERS: change hardening mailing list
As more email from git history gets aimed at the OpenWall
kernel-hardening@ list, there has been a desire to separate "new topics"
from "ongoing" work. To handle this, the superset of hardening email
topics is now to be directed to linux-hardening@vger.kernel.org. Update
the MAINTAINERS file and .mailmap accordingly, so that linux-hardening@
can be treated like any other regular upstream kernel development list.
Link: https://lore.kernel.org/linux-hardening/202010051443.279CC265D@keescook/
Link: https://lkml.kernel.org/r/20201006000012.2768958-1-keescook@chromium.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Emese Revfy <re.emese@gmail.com>
Cc: "Tobin C. Harding" <me@tobin.cc>
Cc: Tycho Andersen <tycho@tycho.pizza>
Cc: Jonathan Corbet <corbet@lwn.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
.mailmap | 1 +
MAINTAINERS | 4 ++--
2 files changed, 3 insertions(+), 2 deletions(-)
--- a/.mailmap~maintainers-change-hardening-mailing-list
+++ a/.mailmap
@@ -188,6 +188,7 @@ Leon Romanovsky <leon@kernel.org> <leonr
Linas Vepstas <linas@austin.ibm.com>
Linus Lüssing <linus.luessing@c0d3.blue> <linus.luessing@ascom.ch>
Linus Lüssing <linus.luessing@c0d3.blue> <linus.luessing@web.de>
+<linux-hardening@vger.kernel.org> <kernel-hardening@lists.openwall.com>
Li Yang <leoyang.li@nxp.com> <leoli@freescale.com>
Li Yang <leoyang.li@nxp.com> <leo@zh-kernel.org>
Lukasz Luba <lukasz.luba@arm.com> <l.luba@partner.samsung.com>
--- a/MAINTAINERS~maintainers-change-hardening-mailing-list
+++ a/MAINTAINERS
@@ -7240,7 +7240,7 @@ F: drivers/staging/gasket/
GCC PLUGINS
M: Kees Cook <keescook@chromium.org>
R: Emese Revfy <re.emese@gmail.com>
-L: kernel-hardening@lists.openwall.com
+L: linux-hardening@vger.kernel.org
S: Maintained
F: Documentation/kbuild/gcc-plugins.rst
F: scripts/Makefile.gcc-plugins
@@ -9802,7 +9802,7 @@ F: drivers/scsi/53c700*
LEAKING_ADDRESSES
M: Tobin C. Harding <me@tobin.cc>
M: Tycho Andersen <tycho@tycho.pizza>
-L: kernel-hardening@lists.openwall.com
+L: linux-hardening@vger.kernel.org
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/tobin/leaks.git
F: scripts/leaking_addresses.pl
_
* [patch 2/5] MAINTAINERS: Antoine Tenart's email address
2020-10-11 6:15 incoming Andrew Morton
2020-10-11 6:16 ` [patch 1/5] MAINTAINERS: change hardening mailing list Andrew Morton
@ 2020-10-11 6:16 ` Andrew Morton
2020-10-11 6:16 ` [patch 3/5] mm: mmap: fix general protection fault in unlink_file_vma() Andrew Morton
` (2 subsequent siblings)
4 siblings, 0 replies; 7+ messages in thread
From: Andrew Morton @ 2020-10-11 6:16 UTC (permalink / raw)
To: akpm, atenart, linux-mm, mm-commits, torvalds
From: Antoine Tenart <atenart@kernel.org>
Subject: MAINTAINERS: Antoine Tenart's email address
Use my kernel.org address instead of my bootlin.com one.
Link: https://lkml.kernel.org/r/20201005164533.16811-1-atenart@kernel.org
Signed-off-by: Antoine Tenart <atenart@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
.mailmap | 3 ++-
MAINTAINERS | 4 ++--
2 files changed, 4 insertions(+), 3 deletions(-)
--- a/.mailmap~maintainers-update-my-email-address
+++ a/.mailmap
@@ -41,7 +41,8 @@ Andrew Murray <amurray@thegoodpenguin.co
Andrew Vasquez <andrew.vasquez@qlogic.com>
Andrey Ryabinin <ryabinin.a.a@gmail.com> <a.ryabinin@samsung.com>
Andy Adamson <andros@citi.umich.edu>
-Antoine Tenart <antoine.tenart@free-electrons.com>
+Antoine Tenart <atenart@kernel.org> <antoine.tenart@bootlin.com>
+Antoine Tenart <atenart@kernel.org> <antoine.tenart@free-electrons.com>
Antonio Ospite <ao2@ao2.it> <ao2@amarulasolutions.com>
Archit Taneja <archit@ti.com>
Ard Biesheuvel <ardb@kernel.org> <ard.biesheuvel@linaro.org>
--- a/MAINTAINERS~maintainers-update-my-email-address
+++ a/MAINTAINERS
@@ -1628,7 +1628,7 @@ N: meson
ARM/Annapurna Labs ALPINE ARCHITECTURE
M: Tsahee Zidenberg <tsahee@annapurnalabs.com>
-M: Antoine Tenart <antoine.tenart@bootlin.com>
+M: Antoine Tenart <atenart@kernel.org>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: arch/arm/boot/dts/alpine*
@@ -8678,7 +8678,7 @@ F: drivers/input/input-mt.c
K: \b(ABS|SYN)_MT_
INSIDE SECURE CRYPTO DRIVER
-M: Antoine Tenart <antoine.tenart@bootlin.com>
+M: Antoine Tenart <atenart@kernel.org>
L: linux-crypto@vger.kernel.org
S: Maintained
F: drivers/crypto/inside-secure/
_
* [patch 3/5] mm: mmap: fix general protection fault in unlink_file_vma()
2020-10-11 6:15 incoming Andrew Morton
2020-10-11 6:16 ` [patch 1/5] MAINTAINERS: change hardening mailing list Andrew Morton
2020-10-11 6:16 ` [patch 2/5] MAINTAINERS: Antoine Tenart's email address Andrew Morton
@ 2020-10-11 6:16 ` Andrew Morton
2020-10-12 12:19 ` Christian König
2020-10-11 6:16 ` [patch 4/5] mm: validate inode in mapping_set_error() Andrew Morton
2020-10-11 6:16 ` [patch 5/5] mm: khugepaged: recalculate min_free_kbytes after memory hotplug as expected by khugepaged Andrew Morton
4 siblings, 1 reply; 7+ messages in thread
From: Andrew Morton @ 2020-10-11 6:16 UTC (permalink / raw)
To: airlied, akpm, chris, ckoenig.leichtzumerken, daniel, jhubbard,
linmiaohe, linux-mm, louhongxiang, mm-commits, sumit.semwal,
torvalds, willy
From: Miaohe Lin <linmiaohe@huawei.com>
Subject: mm: mmap: Fix general protection fault in unlink_file_vma()
syzbot reported the general protection fault below:
general protection fault, probably for non-canonical address
0xe00eeaee0000003b: 0000 [#1] PREEMPT SMP KASAN
KASAN: maybe wild-memory-access in range
[0x00777770000001d8-0x00777770000001df]
CPU: 1 PID: 10488 Comm: syz-executor721 Not tainted 5.9.0-rc3-syzkaller #0
RIP: 0010:unlink_file_vma+0x57/0xb0 mm/mmap.c:164
Code: 4c 8b a5 a0 00 00 00 4d 85 e4 74 4e e8 92 d7 cd ff 49 8d bc 24
d8 01 00 00 48 b8 00 00 00 00 00 fc ff df 48 89 fa 48 c1 ea 03 <80> 3c
02 00 75 3d 4d 8b b4 24 d8 01 00 00 4d 8d 6e 78 4c 89 ef e8
RSP: 0018:ffffc9000ac0f9b0 EFLAGS: 00010202
RAX: dffffc0000000000 RBX: ffff88800010ceb0 RCX: ffffffff81592421
RDX: 000eeeee0000003b RSI: ffffffff81a6736e RDI: 00777770000001d8
RBP: ffff88800010ceb0 R08: 0000000000000001 R09: ffff88801291a50f
R10: ffffed10025234a1 R11: 0000000000000001 R12: 0077777000000000
R13: 00007f1eea0da000 R14: 00007f1eea0d9000 R15: 0000000000000000
FS: 0000000000000000(0000) GS:ffff8880ae700000(0000)
knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f1eea11a9d0 CR3: 000000000007e000 CR4: 00000000001506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
free_pgtables+0x1b3/0x2f0 mm/memory.c:415
exit_mmap+0x2c0/0x530 mm/mmap.c:3184
__mmput+0x122/0x470 kernel/fork.c:1076
mmput+0x53/0x60 kernel/fork.c:1097
exit_mm kernel/exit.c:483 [inline]
do_exit+0xa8b/0x29f0 kernel/exit.c:793
do_group_exit+0x125/0x310 kernel/exit.c:903
get_signal+0x428/0x1f00 kernel/signal.c:2757
arch_do_signal+0x82/0x2520 arch/x86/kernel/signal.c:811
exit_to_user_mode_loop kernel/entry/common.c:136 [inline]
exit_to_user_mode_prepare+0x1ae/0x200 kernel/entry/common.c:167
syscall_exit_to_user_mode+0x7e/0x2e0 kernel/entry/common.c:242
entry_SYSCALL_64_after_hwframe+0x44/0xa9
This happens because the ->mmap() callback can change vma->vm_file and
fput() the original file. But commit d70cec898324 ("mm: mmap: merge vma
after call_mmap() if possible") failed to catch this case and always
fput() the original file, resulting in an extra fput().
[ Thanks to Hillf for pointing this extra fput() out. ]
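[ Editorial illustration, not part of the patch: example_driver_mmap()
  and get_backing_file() are hypothetical names, used only to show how an
  ->mmap() implementation can legitimately swap vma->vm_file; fput() and
  get_file() are the usual kernel helpers. ]
#include <linux/fs.h>
#include <linux/mm.h>
static struct file *get_backing_file(struct file *filp);	/* hypothetical */
static int example_driver_mmap(struct file *filp, struct vm_area_struct *vma)
{
	struct file *backing = get_backing_file(filp);
	/* Swap the vma's file: drop the reference mmap_region() installed
	 * on the original file and take one on the replacement. */
	fput(vma->vm_file);
	vma->vm_file = get_file(backing);
	return 0;
}
/* Back in mmap_region(), when vma_merge() succeeds after call_mmap(), the
 * reference to release is the one the vma actually holds.  fput(file) would
 * drop a reference on the original file that the vma no longer owns (the
 * ->mmap() above already released it), freeing the file prematurely and
 * leading to the fault in unlink_file_vma(); hence fput(vma->vm_file) in
 * the hunk below. */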
Link: https://lkml.kernel.org/r/20200916090733.31427-1-linmiaohe@huawei.com
Fixes: d70cec898324 ("mm: mmap: merge vma after call_mmap() if possible")
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Reported-by: syzbot+c5d5a51dcbb558ca0cb5@syzkaller.appspotmail.com
Cc: Christian König <ckoenig.leichtzumerken@gmail.com>
Cc: Hongxiang Lou <louhongxiang@huawei.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Dave Airlie <airlied@redhat.com>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
mm/mmap.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
--- a/mm/mmap.c~mm-mmap-fix-general-protection-fault-in-unlink_file_vma
+++ a/mm/mmap.c
@@ -1781,7 +1781,11 @@ unsigned long mmap_region(struct file *f
merge = vma_merge(mm, prev, vma->vm_start, vma->vm_end, vma->vm_flags,
NULL, vma->vm_file, vma->vm_pgoff, NULL, NULL_VM_UFFD_CTX);
if (merge) {
- fput(file);
+ /* ->mmap() can change vma->vm_file and fput the original file. So
+ * fput the vma->vm_file here or we would add an extra fput for file
+ * and cause general protection fault ultimately.
+ */
+ fput(vma->vm_file);
vm_area_free(vma);
vma = merge;
/* Update vm_flags and possible addr to pick up the change. We don't
_
* Re: [patch 3/5] mm: mmap: fix general protection fault in unlink_file_vma()
2020-10-11 6:16 ` [patch 3/5] mm: mmap: fix general protection fault in unlink_file_vma() Andrew Morton
@ 2020-10-12 12:19 ` Christian König
0 siblings, 0 replies; 7+ messages in thread
From: Christian König @ 2020-10-12 12:19 UTC (permalink / raw)
To: Andrew Morton, airlied, chris, daniel, jhubbard, linmiaohe,
linux-mm, louhongxiang, mm-commits, sumit.semwal, torvalds, willy
On 11.10.20 08:16, Andrew Morton wrote:
> From: Miaohe Lin <linmiaohe@huawei.com>
> Subject: mm: mmap: Fix general protection fault in unlink_file_vma()
>
> syzbot reported the general protection fault below:
>
> general protection fault, probably for non-canonical address
> 0xe00eeaee0000003b: 0000 [#1] PREEMPT SMP KASAN
> KASAN: maybe wild-memory-access in range
> [0x00777770000001d8-0x00777770000001df]
> CPU: 1 PID: 10488 Comm: syz-executor721 Not tainted 5.9.0-rc3-syzkaller #0
> RIP: 0010:unlink_file_vma+0x57/0xb0 mm/mmap.c:164
> Code: 4c 8b a5 a0 00 00 00 4d 85 e4 74 4e e8 92 d7 cd ff 49 8d bc 24
> d8 01 00 00 48 b8 00 00 00 00 00 fc ff df 48 89 fa 48 c1 ea 03 <80> 3c
> 02 00 75 3d 4d 8b b4 24 d8 01 00 00 4d 8d 6e 78 4c 89 ef e8
> RSP: 0018:ffffc9000ac0f9b0 EFLAGS: 00010202
> RAX: dffffc0000000000 RBX: ffff88800010ceb0 RCX: ffffffff81592421
> RDX: 000eeeee0000003b RSI: ffffffff81a6736e RDI: 00777770000001d8
> RBP: ffff88800010ceb0 R08: 0000000000000001 R09: ffff88801291a50f
> R10: ffffed10025234a1 R11: 0000000000000001 R12: 0077777000000000
> R13: 00007f1eea0da000 R14: 00007f1eea0d9000 R15: 0000000000000000
> FS: 0000000000000000(0000) GS:ffff8880ae700000(0000)
> knlGS:0000000000000000
> CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 00007f1eea11a9d0 CR3: 000000000007e000 CR4: 00000000001506e0
> DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
> Call Trace:
> free_pgtables+0x1b3/0x2f0 mm/memory.c:415
> exit_mmap+0x2c0/0x530 mm/mmap.c:3184
> __mmput+0x122/0x470 kernel/fork.c:1076
> mmput+0x53/0x60 kernel/fork.c:1097
> exit_mm kernel/exit.c:483 [inline]
> do_exit+0xa8b/0x29f0 kernel/exit.c:793
> do_group_exit+0x125/0x310 kernel/exit.c:903
> get_signal+0x428/0x1f00 kernel/signal.c:2757
> arch_do_signal+0x82/0x2520 arch/x86/kernel/signal.c:811
> exit_to_user_mode_loop kernel/entry/common.c:136 [inline]
> exit_to_user_mode_prepare+0x1ae/0x200 kernel/entry/common.c:167
> syscall_exit_to_user_mode+0x7e/0x2e0 kernel/entry/common.c:242
> entry_SYSCALL_64_after_hwframe+0x44/0xa9
>
> This happens because the ->mmap() callback can change vma->vm_file and
> fput() the original file. But commit d70cec898324 ("mm: mmap: merge vma
> after call_mmap() if possible") failed to catch this case and always
> fput() the original file, resulting in an extra fput().
>
> [ Thanks to Hillf for pointing this extra fput() out. ]
>
> Link: https://lkml.kernel.org/r/20200916090733.31427-1-linmiaohe@huawei.com
> Fixes: d70cec898324 ("mm: mmap: merge vma after call_mmap() if possible")
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> Reported-by: syzbot+c5d5a51dcbb558ca0cb5@syzkaller.appspotmail.com
> Cc: Christian König <ckoenig.leichtzumerken@gmail.com>
> Cc: Hongxiang Lou <louhongxiang@huawei.com>
> Cc: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Dave Airlie <airlied@redhat.com>
> Cc: Daniel Vetter <daniel@ffwll.ch>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
> Cc: John Hubbard <jhubbard@nvidia.com>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Christian König <christian.koenig@amd.com>
> ---
>
> mm/mmap.c | 6 +++++-
> 1 file changed, 5 insertions(+), 1 deletion(-)
>
> --- a/mm/mmap.c~mm-mmap-fix-general-protection-fault-in-unlink_file_vma
> +++ a/mm/mmap.c
> @@ -1781,7 +1781,11 @@ unsigned long mmap_region(struct file *f
> merge = vma_merge(mm, prev, vma->vm_start, vma->vm_end, vma->vm_flags,
> NULL, vma->vm_file, vma->vm_pgoff, NULL, NULL_VM_UFFD_CTX);
> if (merge) {
> - fput(file);
> + /* ->mmap() can change vma->vm_file and fput the original file. So
> + * fput the vma->vm_file here or we would add an extra fput for file
> + * and cause general protection fault ultimately.
> + */
> + fput(vma->vm_file);
> vm_area_free(vma);
> vma = merge;
> /* Update vm_flags and possible addr to pick up the change. We don't
> _
* [patch 4/5] mm: validate inode in mapping_set_error()
2020-10-11 6:15 incoming Andrew Morton
` (2 preceding siblings ...)
2020-10-11 6:16 ` [patch 3/5] mm: mmap: fix general protection fault in unlink_file_vma() Andrew Morton
@ 2020-10-11 6:16 ` Andrew Morton
2020-10-11 6:16 ` [patch 5/5] mm: khugepaged: recalculate min_free_kbytes after memory hotplug as expected by khugepaged Andrew Morton
4 siblings, 0 replies; 7+ messages in thread
From: Andrew Morton @ 2020-10-11 6:16 UTC (permalink / raw)
To: akpm, andres, david, dhowells, hch, jack, jlayton, linux-mm,
minchan, mm-commits, stable, torvalds, viro, willy
From: Minchan Kim <minchan@kernel.org>
Subject: mm: validate inode in mapping_set_error()
The swap address_space doesn't have a host inode, so the kernel crashes
on a NULL mapping->host dereference once a swap write meets an error.
Fix it by recording the error in the superblock only when a host inode
exists.
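[ Editorial illustration, not part of the patch: record_wb_error_sketch()
  is a hypothetical name; __filemap_set_wb_err() and errseq_set() are the
  helpers used by the real mapping_set_error() in the hunk below. ]
#include <linux/fs.h>
#include <linux/pagemap.h>
/* Pages under swap-out belong to a swap address_space whose ->host inode
 * is never set, so an unconditional
 *	errseq_set(&mapping->host->i_sb->s_wb_err, error);
 * dereferences NULL when a swap write fails.  The per-mapping error is
 * always safe to record; the per-superblock error (consumed by syncfs())
 * only makes sense when a host inode exists. */
static inline void record_wb_error_sketch(struct address_space *mapping,
					  int error)
{
	if (likely(!error))
		return;
	__filemap_set_wb_err(mapping, error);		/* per-mapping */
	if (mapping->host)				/* swap has no host */
		errseq_set(&mapping->host->i_sb->s_wb_err, error);
}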
Link: https://lkml.kernel.org/r/20201010000650.750063-1-minchan@kernel.org
Fixes: 735e4ae5ba28 ("vfs: track per-sb writeback errors and report them to syncfs")
Signed-off-by: Minchan Kim <minchan@kernel.org>
Acked-by: Jeff Layton <jlayton@kernel.org>
Cc: Jan Kara <jack@suse.cz>
Cc: Andres Freund <andres@anarazel.de>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Dave Chinner <david@fromorbit.com>
Cc: David Howells <dhowells@redhat.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
include/linux/pagemap.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
--- a/include/linux/pagemap.h~mm-validate-inode-in-mapping_set_error
+++ a/include/linux/pagemap.h
@@ -54,7 +54,8 @@ static inline void mapping_set_error(str
__filemap_set_wb_err(mapping, error);
/* Record it in superblock */
- errseq_set(&mapping->host->i_sb->s_wb_err, error);
+ if (mapping->host)
+ errseq_set(&mapping->host->i_sb->s_wb_err, error);
/* Record it in flags for now, for legacy callers */
if (error == -ENOSPC)
_
* [patch 5/5] mm: khugepaged: recalculate min_free_kbytes after memory hotplug as expected by khugepaged
2020-10-11 6:15 incoming Andrew Morton
` (3 preceding siblings ...)
2020-10-11 6:16 ` [patch 4/5] mm: validate inode in mapping_set_error() Andrew Morton
@ 2020-10-11 6:16 ` Andrew Morton
4 siblings, 0 replies; 7+ messages in thread
From: Andrew Morton @ 2020-10-11 6:16 UTC (permalink / raw)
To: aarcange, akpm, apais, kirill.shutemov, linux-mm, mhocko,
mm-commits, oleg, pasha.tatashin, songliubraving, stable,
torvalds, vijayb
From: Vijay Balakrishna <vijayb@linux.microsoft.com>
Subject: mm: khugepaged: recalculate min_free_kbytes after memory hotplug as expected by khugepaged
When memory is hotplug added or removed, min_free_kbytes should be
recalculated based on what khugepaged expects. Currently, after hotplug,
min_free_kbytes is reset to the lower generic default and the higher
default set when THP is enabled is lost. This change restores
min_free_kbytes as expected for THP consumers.
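[ Paraphrased sketch, not the literal mm/page_alloc.c code: the function
  name below is hypothetical; khugepaged_min_free_kbytes_update(),
  khugepaged_enabled(), khugepaged_thread and
  set_recommended_min_free_kbytes() are the names used in the hunks
  below. ]
#include <linux/khugepaged.h>
/* After memory hot-add/remove the kernel re-runs the watermark
 * initialisation (init_per_zone_wmark_min() in the last hunk), which
 * recomputes min_free_kbytes from zone sizes alone and so discards the
 * higher value khugepaged had installed for THP. */
static void recompute_watermarks_sketch(void)
{
	/* ... existing code recalculates min_free_kbytes and the per-zone
	 * watermarks from the currently present memory ... */

	/* New with this patch: let khugepaged re-raise min_free_kbytes.
	 * The helper takes khugepaged_mutex and only acts when
	 * khugepaged_enabled() and khugepaged_thread show the daemon is
	 * running, which is why both were moved from
	 * start_stop_khugepaged() to file scope. */
	khugepaged_min_free_kbytes_update();
}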
[vijayb@linux.microsoft.com: v5]
Link: https://lkml.kernel.org/r/1601398153-5517-1-git-send-email-vijayb@linux.microsoft.com
Link: https://lkml.kernel.org/r/1600305709-2319-2-git-send-email-vijayb@linux.microsoft.com
Link: https://lkml.kernel.org/r/1600204258-13683-1-git-send-email-vijayb@linux.microsoft.com
Fixes: f000565adb77 ("thp: set recommended min free kbytes")
Signed-off-by: Vijay Balakrishna <vijayb@linux.microsoft.com>
Reviewed-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Allen Pais <apais@microsoft.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Song Liu <songliubraving@fb.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
include/linux/khugepaged.h | 5 +++++
mm/khugepaged.c | 13 +++++++++++--
mm/page_alloc.c | 3 +++
3 files changed, 19 insertions(+), 2 deletions(-)
--- a/include/linux/khugepaged.h~mm-khugepaged-recalculate-min_free_kbytes-after-memory-hotplug-as-expected-by-khugepaged
+++ a/include/linux/khugepaged.h
@@ -15,6 +15,7 @@ extern int __khugepaged_enter(struct mm_
extern void __khugepaged_exit(struct mm_struct *mm);
extern int khugepaged_enter_vma_merge(struct vm_area_struct *vma,
unsigned long vm_flags);
+extern void khugepaged_min_free_kbytes_update(void);
#ifdef CONFIG_SHMEM
extern void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr);
#else
@@ -85,6 +86,10 @@ static inline void collapse_pte_mapped_t
unsigned long addr)
{
}
+
+static inline void khugepaged_min_free_kbytes_update(void)
+{
+}
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
#endif /* _LINUX_KHUGEPAGED_H */
--- a/mm/khugepaged.c~mm-khugepaged-recalculate-min_free_kbytes-after-memory-hotplug-as-expected-by-khugepaged
+++ a/mm/khugepaged.c
@@ -56,6 +56,9 @@ enum scan_result {
#define CREATE_TRACE_POINTS
#include <trace/events/huge_memory.h>
+static struct task_struct *khugepaged_thread __read_mostly;
+static DEFINE_MUTEX(khugepaged_mutex);
+
/* default scan 8*512 pte (or vmas) every 30 second */
static unsigned int khugepaged_pages_to_scan __read_mostly;
static unsigned int khugepaged_pages_collapsed;
@@ -2304,8 +2307,6 @@ static void set_recommended_min_free_kby
int start_stop_khugepaged(void)
{
- static struct task_struct *khugepaged_thread __read_mostly;
- static DEFINE_MUTEX(khugepaged_mutex);
int err = 0;
mutex_lock(&khugepaged_mutex);
@@ -2332,3 +2333,11 @@ fail:
mutex_unlock(&khugepaged_mutex);
return err;
}
+
+void khugepaged_min_free_kbytes_update(void)
+{
+ mutex_lock(&khugepaged_mutex);
+ if (khugepaged_enabled() && khugepaged_thread)
+ set_recommended_min_free_kbytes();
+ mutex_unlock(&khugepaged_mutex);
+}
--- a/mm/page_alloc.c~mm-khugepaged-recalculate-min_free_kbytes-after-memory-hotplug-as-expected-by-khugepaged
+++ a/mm/page_alloc.c
@@ -69,6 +69,7 @@
#include <linux/nmi.h>
#include <linux/psi.h>
#include <linux/padata.h>
+#include <linux/khugepaged.h>
#include <asm/sections.h>
#include <asm/tlbflush.h>
@@ -7904,6 +7905,8 @@ int __meminit init_per_zone_wmark_min(vo
setup_min_slab_ratio();
#endif
+ khugepaged_min_free_kbytes_update();
+
return 0;
}
postcore_initcall(init_per_zone_wmark_min)
_