* [merged mm-stable] mm-switch-the-rmap-lock-held-option-off-in-compat-layer.patch removed from -mm tree
@ 2026-03-31 0:43 Andrew Morton
From: Andrew Morton @ 2026-03-31 0:43 UTC (permalink / raw)
To: mm-commits, wei.liu, viro, vigneshr, vbabka, surenb, ryan.roberts,
rppt, richard, pfalcato, miquel.raynal, mhocko, mcoquelin.stm32,
martin.petersen, marc.dionne, longli, liam.howlett, kys, jannh,
jack, haiyangz, gregkh, dhowells, decui, david, corbet, clemens,
brauner, bostroesser, arnd, alexandre.torgue, alexander.shishkin,
ljs, akpm
The quilt patch titled
Subject: mm: switch the rmap lock held option off in compat layer
has been removed from the -mm tree. Its filename was
mm-switch-the-rmap-lock-held-option-off-in-compat-layer.patch
This patch was dropped because it was merged into the mm-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
------------------------------------------------------
From: "Lorenzo Stoakes (Oracle)" <ljs@kernel.org>
Subject: mm: switch the rmap lock held option off in compat layer
Date: Fri, 20 Mar 2026 22:39:31 +0000
In the mmap_prepare compatibility layer, we don't need to hold the rmap
lock, as we are being called from an .mmap handler.
The .mmap_prepare hook, when invoked in the VMA logic, is called prior to
the VMA being instantiated, but the completion hook is called after the VMA
is linked into the maple tree, meaning rmap walkers can reach it.
The mmap hook does not link the VMA into the tree, so this cannot happen.
Therefore it's safe to simply disable this in the mmap_prepare
compatibility layer.
Also update the VMA test code to reflect the current state of the
compatibility layer.
[akpm@linux-foundation.org: fix comment typo, per Vlastimil]
Link: https://lkml.kernel.org/r/dda74230d26a1fcd79a3efab61fa4101dd1cac64.1774045440.git.ljs@kernel.org
Signed-off-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
Acked-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Alexandre Torgue <alexandre.torgue@foss.st.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Bodo Stroesser <bostroesser@gmail.com>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Clemens Ladisch <clemens@ladisch.de>
Cc: David Hildenbrand <david@kernel.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Dexuan Cui <decui@microsoft.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Jann Horn <jannh@google.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: K. Y. Srinivasan <kys@microsoft.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Long Li <longli@microsoft.com>
Cc: Marc Dionne <marc.dionne@auristor.com>
Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
Cc: Maxime Coquelin <mcoquelin.stm32@gmail.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Miquel Raynal <miquel.raynal@bootlin.com>
Cc: Pedro Falcato <pfalcato@suse.de>
Cc: Richard Weinberger <richard@nod.at>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Vignesh Raghavendra <vigneshr@ti.com>
Cc: Wei Liu <wei.liu@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
mm/util.c | 6 +++-
tools/testing/vma/include/dup.h | 42 +++++++++++++++---------------
2 files changed, 27 insertions(+), 21 deletions(-)
--- a/mm/util.c~mm-switch-the-rmap-lock-held-option-off-in-compat-layer
+++ a/mm/util.c
@@ -1204,6 +1204,7 @@ int compat_vma_mmap(struct file *file, s
.action.type = MMAP_NOTHING, /* Default */
};
+ struct mmap_action *action = &desc.action;
int err;
err = vfs_mmap_prepare(file, &desc);
@@ -1214,8 +1215,11 @@ int compat_vma_mmap(struct file *file, s
if (err)
return err;
+ /* being invoked from .mmap means we don't have to enforce this. */
+ action->hide_from_rmap_until_complete = false;
+
set_vma_from_desc(vma, &desc);
- err = mmap_action_complete(vma, &desc.action);
+ err = mmap_action_complete(vma, action);
if (err) {
const size_t len = vma_pages(vma) << PAGE_SHIFT;
--- a/tools/testing/vma/include/dup.h~mm-switch-the-rmap-lock-held-option-off-in-compat-layer
+++ a/tools/testing/vma/include/dup.h
@@ -1260,8 +1260,17 @@ static inline void vma_set_anonymous(str
static inline void set_vma_from_desc(struct vm_area_struct *vma,
struct vm_area_desc *desc);
-static inline int __compat_vma_mmap(const struct file_operations *f_op,
- struct file *file, struct vm_area_struct *vma)
+static inline int vfs_mmap_prepare(struct file *file, struct vm_area_desc *desc)
+{
+ return file->f_op->mmap_prepare(desc);
+}
+
+static inline unsigned long vma_pages(struct vm_area_struct *vma)
+{
+ return (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
+}
+
+static inline int compat_vma_mmap(struct file *file, struct vm_area_struct *vma)
{
struct vm_area_desc desc = {
.mm = vma->vm_mm,
@@ -1276,9 +1285,10 @@ static inline int __compat_vma_mmap(cons
.action.type = MMAP_NOTHING, /* Default */
};
+ struct mmap_action *action = &desc.action;
int err;
- err = f_op->mmap_prepare(&desc);
+ err = vfs_mmap_prepare(file, &desc);
if (err)
return err;
@@ -1286,28 +1296,25 @@ static inline int __compat_vma_mmap(cons
if (err)
return err;
+ /* being invoked from .mmap means we don't have to enforce this. */
+ action->hide_from_rmap_until_complete = false;
+
set_vma_from_desc(vma, &desc);
- return mmap_action_complete(vma, &desc.action);
-}
+ err = mmap_action_complete(vma, action);
+ if (err) {
+ const size_t len = vma_pages(vma) << PAGE_SHIFT;
-static inline int compat_vma_mmap(struct file *file,
- struct vm_area_struct *vma)
-{
- return __compat_vma_mmap(file->f_op, file, vma);
+ do_munmap(current->mm, vma->vm_start, len, NULL);
+ }
+ return err;
}
-
static inline void vma_iter_init(struct vma_iterator *vmi,
struct mm_struct *mm, unsigned long addr)
{
mas_init(&vmi->mas, &mm->mm_mt, addr);
}
-static inline unsigned long vma_pages(struct vm_area_struct *vma)
-{
- return (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
-}
-
static inline void mmap_assert_locked(struct mm_struct *);
static inline struct vm_area_struct *find_vma_intersection(struct mm_struct *mm,
unsigned long start_addr,
@@ -1477,11 +1484,6 @@ static inline int vfs_mmap(struct file *
return file->f_op->mmap(file, vma);
}
-static inline int vfs_mmap_prepare(struct file *file, struct vm_area_desc *desc)
-{
- return file->f_op->mmap_prepare(desc);
-}
-
static inline void vma_set_file(struct vm_area_struct *vma, struct file *file)
{
/* Changing an anonymous vma with this is illegal */
_
Patches currently in -mm which might be from ljs@kernel.org are
maintainers-update-mglru-entry-to-reflect-current-status.patch
selftests-mm-add-merge-test-for-partial-msealed-range.patch