* [PATCH v7 00/21] Avoid MAP_FIXED gap exposure
@ 2024-08-22 19:25 Liam R. Howlett
  2024-08-22 19:25 ` [PATCH v7 01/21] mm/vma: Correctly position vma_iterator in __split_vma() Liam R. Howlett
                   ` (20 more replies)
  0 siblings, 21 replies; 32+ messages in thread
From: Liam R. Howlett @ 2024-08-22 19:25 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Suren Baghdasaryan, Lorenzo Stoakes,
	Matthew Wilcox, Vlastimil Babka, sidhartha.kumar, Bert Karwatzki,
	Jiri Olsa, Kees Cook, Paul E . McKenney, Liam R. Howlett

It is now possible to walk the vma tree using the rcu read lock, and
doing so is beneficial for reducing lock contention.  Walking the tree
while a MAP_FIXED mapping is executing means that a reader may see a gap
in the vma tree that should never logically exist - and does not exist
when the mmap lock is held in read mode.  The temporal gap exists
because mmap_region() calls munmap() prior to installing the new
mapping.

This patch set stops rcu readers from seeing the temporal gap by
splitting up the munmap() function into two parts.  The first part
prepares the vma tree for modifications by doing the necessary splits
and tracks the vmas marked for removal in a side tree.  The second part
completes the munmapping of the vmas after the vma tree has been
overwritten (either by a MAP_FIXED replacement vma or by a NULL in the
munmap() case).
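
For reference, a rough sketch of the resulting two-phase flow, using
the function names introduced in the patches below (simplified, error
handling omitted, and surrounding variables assumed from context; see
the patches for the real code):

	struct vma_munmap_struct vms;
	struct maple_tree mt_detach;
	MA_STATE(mas_detach, &mt_detach, 0, 0);

	init_vma_munmap(&vms, &vmi, vma, addr, end, uf, /* unlock = */ false);
	/* Part one: split boundary vmas, track victims in the side tree */
	error = vms_gather_munmap_vmas(&vms, &mas_detach);

	/* The range is then overwritten in one step: with the
	 * replacement vma for MAP_FIXED, or with NULL for munmap(), eg: */
	error = vma_iter_clear_gfp(&vmi, addr, end, GFP_KERNEL);

	/* Part two: unmap the region, update counters, free the vmas */
	vms_complete_munmap_vmas(&vms, &mas_detach);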

Please note that rcu walkers will still be able to see a temporary
state of split vmas that may be in the process of being removed, but
the temporal gap will not be exposed.  vma_start_write() is called on
both parts of the split vma, so this state is detectable.
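
For illustration, a lockless walker of the sort this series is
concerned with might look roughly like the following (a minimal sketch,
not code from this series):

	MA_STATE(mas, &mm->mm_mt, 0, 0);
	struct vm_area_struct *vma;

	rcu_read_lock();
	mas_for_each(&mas, vma, ULONG_MAX) {
		/*
		 * The walk may still observe freshly split vmas, which
		 * are detectable because vma_start_write() was called
		 * on both halves, but it will no longer observe a
		 * transient gap where the MAP_FIXED range used to be.
		 */
	}
	rcu_read_unlock();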

If existing vmas have a vm_ops->close(), then it will be called prior
to mapping the new vmas (and the ptes are cleared out).  Without
calling ->close(), hugetlbfs tests fail (hugemmap06 specifically) due
to resources still being marked as 'busy'.  Unfortunately, calling the
corresponding ->open() may not restore the state of the vmas, so it is
safer to keep the existing failure scenario where a gap is inserted and
never replaced.  The failure scenario is in its own patch (0015) for
traceability.

RFC: https://lore.kernel.org/linux-mm/20240531163217.1584450-1-Liam.Howlett@oracle.com/
v1: https://lore.kernel.org/linux-mm/20240611180200.711239-1-Liam.Howlett@oracle.com/
v2: https://lore.kernel.org/all/20240625191145.3382793-1-Liam.Howlett@oracle.com/
v3: https://lore.kernel.org/linux-mm/20240704182718.2653918-1-Liam.Howlett@oracle.com/
v4: https://lore.kernel.org/linux-mm/20240710192250.4114783-1-Liam.Howlett@oracle.com/
v5: https://lore.kernel.org/linux-mm/20240717200709.1552558-1-Liam.Howlett@oracle.com/
v6: https://lore.kernel.org/all/20240820235730.2852400-1-Liam.Howlett@oracle.com/

Changes since v6:
 - Added ack by Paul Moore
 - Added some more SoB from Lorenzo
 - Fixed some minor comment language
 - Dropped extern from header
 - Removed constant from argument list of vms_clean_up_area()
 - Added VM_WARN_ON() to stat counting
 - Removed duplicate counting of VM_LOCKED vmas
 - Renamed abort_munmap_vmas() to reattach_vmas() when other code is
   removed
 - Added description to vms_abort_munmap_vmas()
 - Removed mm pointer from vma_munmap_struct
 - Added last patch to make vma_munmap_struct 2 cachelines

Liam R. Howlett (21):
  mm/vma: Correctly position vma_iterator in __split_vma()
  mm/vma: Introduce abort_munmap_vmas()
  mm/vma: Introduce vmi_complete_munmap_vmas()
  mm/vma: Extract the gathering of vmas from do_vmi_align_munmap()
  mm/vma: Introduce vma_munmap_struct for use in munmap operations
  mm/vma: Change munmap to use vma_munmap_struct() for accounting and
    surrounding vmas
  mm/vma: Extract validate_mm() from vma_complete()
  mm/vma: Inline munmap operation in mmap_region()
  mm/vma: Expand mmap_region() munmap call
  mm/vma: Support vma == NULL in init_vma_munmap()
  mm/mmap: Reposition vma iterator in mmap_region()
  mm/vma: Track start and end for munmap in vma_munmap_struct
  mm: Clean up unmap_region() argument list
  mm/mmap: Avoid zeroing vma tree in mmap_region()
  mm: Change failure of MAP_FIXED to restoring the gap on failure
  mm/mmap: Use PHYS_PFN in mmap_region()
  mm/mmap: Use vms accounted pages in mmap_region()
  ipc/shm, mm: Drop do_vma_munmap()
  mm: Move may_expand_vm() check in mmap_region()
  mm/vma: Drop incorrect comment from vms_gather_munmap_vmas()
  mm/vma.h: Optimise vma_munmap_struct

 include/linux/mm.h |   6 +-
 ipc/shm.c          |   8 +-
 mm/mmap.c          | 138 +++++++++---------
 mm/vma.c           | 357 +++++++++++++++++++++++++++------------------
 mm/vma.h           | 164 ++++++++++++++++++---
 5 files changed, 428 insertions(+), 245 deletions(-)

-- 
2.43.0




* [PATCH v7 01/21] mm/vma: Correctly position vma_iterator in __split_vma()
  2024-08-22 19:25 [PATCH v7 00/21] Avoid MAP_FIXED gap exposure Liam R. Howlett
@ 2024-08-22 19:25 ` Liam R. Howlett
  2024-08-22 19:25 ` [PATCH v7 02/21] mm/vma: Introduce abort_munmap_vmas() Liam R. Howlett
                   ` (19 subsequent siblings)
  20 siblings, 0 replies; 32+ messages in thread
From: Liam R. Howlett @ 2024-08-22 19:25 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Suren Baghdasaryan, Lorenzo Stoakes,
	Matthew Wilcox, Vlastimil Babka, sidhartha.kumar, Bert Karwatzki,
	Jiri Olsa, Kees Cook, Paul E . McKenney, Liam R. Howlett,
	Lorenzo Stoakes

From: "Liam R. Howlett" <Liam.Howlett@Oracle.com>

The vma iterator may be left pointing to the newly created vma.  This
happens when inserting the new vma at the end of the old vma
(!new_below).

The incorrect position in the vma iterator is not exposed currently
since the vma iterator is repositioned in the munmap path and is not
reused in any of the other paths.

This has limited impact in the current code, but is required for future
changes.
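
For example, a hypothetical caller relying on the corrected position
(illustration only, not part of this patch):

	struct vm_area_struct *upper;
	int error;

	/* Split the tail of @vma off at @addr */
	error = __split_vma(vmi, vma, addr, 0 /* !new_below */);
	if (error)
		return error;
	/*
	 * The iterator now points at the original (lower) vma, so the
	 * newly created upper part is a single step away:
	 */
	upper = vma_next(vmi);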

Fixes: b2b3b886738f ("mm: don't use __vma_adjust() in __split_vma()")
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Reviewed-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: Lorenzo Stoakes <lstoakes@gmail.com>
---
 mm/vma.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/mm/vma.c b/mm/vma.c
index 5850f7c0949b..066de79b7b73 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -177,7 +177,7 @@ void unmap_region(struct mm_struct *mm, struct ma_state *mas,
 /*
  * __split_vma() bypasses sysctl_max_map_count checking.  We use this where it
  * has already been checked or doesn't make sense to fail.
- * VMA Iterator will point to the end VMA.
+ * VMA Iterator will point to the original VMA.
  */
 static int __split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
 		       unsigned long addr, int new_below)
@@ -246,6 +246,9 @@ static int __split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	/* Success. */
 	if (new_below)
 		vma_next(vmi);
+	else
+		vma_prev(vmi);
+
 	return 0;
 
 out_free_mpol:
-- 
2.43.0




* [PATCH v7 02/21] mm/vma: Introduce abort_munmap_vmas()
  2024-08-22 19:25 [PATCH v7 00/21] Avoid MAP_FIXED gap exposure Liam R. Howlett
  2024-08-22 19:25 ` [PATCH v7 01/21] mm/vma: Correctly position vma_iterator in __split_vma() Liam R. Howlett
@ 2024-08-22 19:25 ` Liam R. Howlett
  2024-08-22 19:25 ` [PATCH v7 03/21] mm/vma: Introduce vmi_complete_munmap_vmas() Liam R. Howlett
                   ` (18 subsequent siblings)
  20 siblings, 0 replies; 32+ messages in thread
From: Liam R. Howlett @ 2024-08-22 19:25 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Suren Baghdasaryan, Lorenzo Stoakes,
	Matthew Wilcox, Vlastimil Babka, sidhartha.kumar, Bert Karwatzki,
	Jiri Olsa, Kees Cook, Paul E . McKenney, Liam R. Howlett,
	Liam R . Howlett

From: "Liam R. Howlett" <Liam.Howlett@Oracle.com>

Extract the cleanup of failed munmap() operations from
do_vmi_align_munmap().  This simplifies later patches in the series.

It is worth noting that the mas_for_each() loop now has a different
upper limit.  This should not change the number of vmas visited for
reattaching to the main vma tree (mm_mt), as all vmas are reattached in
both scenarios: the detached tree is indexed by vma count, so the old
address-based limit and the new ULONG_MAX limit both lie beyond the

Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Suren Baghdasaryan <surenb@google.com>
---
 mm/vma.c | 22 +++++++++++++++++-----
 1 file changed, 17 insertions(+), 5 deletions(-)

diff --git a/mm/vma.c b/mm/vma.c
index 066de79b7b73..58ecd447670d 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -668,6 +668,22 @@ void vma_complete(struct vma_prepare *vp,
 	validate_mm(mm);
 }
 
+/*
+ * abort_munmap_vmas - Undo any munmap work and free resources
+ *
+ * Reattach any detached vmas and free up the maple tree used to track the vmas.
+ */
+static inline void abort_munmap_vmas(struct ma_state *mas_detach)
+{
+	struct vm_area_struct *vma;
+
+	mas_set(mas_detach, 0);
+	mas_for_each(mas_detach, vma, ULONG_MAX)
+		vma_mark_detached(vma, false);
+
+	__mt_destroy(mas_detach->tree);
+}
+
 /*
  * do_vmi_align_munmap() - munmap the aligned region from @start to @end.
  * @vmi: The vma iterator
@@ -834,11 +850,7 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
 userfaultfd_error:
 munmap_gather_failed:
 end_split_failed:
-	mas_set(&mas_detach, 0);
-	mas_for_each(&mas_detach, next, end)
-		vma_mark_detached(next, false);
-
-	__mt_destroy(&mt_detach);
+	abort_munmap_vmas(&mas_detach);
 start_split_failed:
 map_count_exceeded:
 	validate_mm(mm);
-- 
2.43.0




* [PATCH v7 03/21] mm/vma: Introduce vmi_complete_munmap_vmas()
  2024-08-22 19:25 [PATCH v7 00/21] Avoid MAP_FIXED gap exposure Liam R. Howlett
  2024-08-22 19:25 ` [PATCH v7 01/21] mm/vma: Correctly position vma_iterator in __split_vma() Liam R. Howlett
  2024-08-22 19:25 ` [PATCH v7 02/21] mm/vma: Introduce abort_munmap_vmas() Liam R. Howlett
@ 2024-08-22 19:25 ` Liam R. Howlett
  2024-08-22 19:25 ` [PATCH v7 04/21] mm/vma: Extract the gathering of vmas from do_vmi_align_munmap() Liam R. Howlett
                   ` (17 subsequent siblings)
  20 siblings, 0 replies; 32+ messages in thread
From: Liam R. Howlett @ 2024-08-22 19:25 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Suren Baghdasaryan, Lorenzo Stoakes,
	Matthew Wilcox, Vlastimil Babka, sidhartha.kumar, Bert Karwatzki,
	Jiri Olsa, Kees Cook, Paul E . McKenney, Liam R. Howlett,
	Liam R . Howlett

From: "Liam R. Howlett" <Liam.Howlett@Oracle.com>

Extract from the munmap() path all the operations that need to be
completed after the vma maple tree has been updated.  Extracting this
makes later patches in the series easier to understand.

Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Suren Baghdasaryan <surenb@google.com>
---
 mm/vma.c | 80 ++++++++++++++++++++++++++++++++++++++------------------
 1 file changed, 55 insertions(+), 25 deletions(-)

diff --git a/mm/vma.c b/mm/vma.c
index 58ecd447670d..3a2098464b8f 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -684,6 +684,58 @@ static inline void abort_munmap_vmas(struct ma_state *mas_detach)
 	__mt_destroy(mas_detach->tree);
 }
 
+/*
+ * vmi_complete_munmap_vmas() - Finish the munmap() operation
+ * @vmi: The vma iterator
+ * @vma: The first vma to be munmapped
+ * @mm: The mm struct
+ * @start: The start address
+ * @end: The end address
+ * @unlock: Unlock the mm or not
+ * @mas_detach: the maple state of the detached vma maple tree
+ * @locked_vm: The locked_vm count in the detached vmas
+ *
+ * This function updates the mm_struct, unmaps the region, frees the resources
+ * used for the munmap() and may downgrade the lock - if requested.  Everything
+ * needed to be done once the vma maple tree is updated.
+ */
+static void
+vmi_complete_munmap_vmas(struct vma_iterator *vmi, struct vm_area_struct *vma,
+		struct mm_struct *mm, unsigned long start, unsigned long end,
+		bool unlock, struct ma_state *mas_detach,
+		unsigned long locked_vm)
+{
+	struct vm_area_struct *prev, *next;
+	int count;
+
+	count = mas_detach->index + 1;
+	mm->map_count -= count;
+	mm->locked_vm -= locked_vm;
+	if (unlock)
+		mmap_write_downgrade(mm);
+
+	prev = vma_iter_prev_range(vmi);
+	next = vma_next(vmi);
+	if (next)
+		vma_iter_prev_range(vmi);
+
+	/*
+	 * We can free page tables without write-locking mmap_lock because VMAs
+	 * were isolated before we downgraded mmap_lock.
+	 */
+	mas_set(mas_detach, 1);
+	unmap_region(mm, mas_detach, vma, prev, next, start, end, count,
+		     !unlock);
+	/* Statistics and freeing VMAs */
+	mas_set(mas_detach, 0);
+	remove_mt(mm, mas_detach);
+	validate_mm(mm);
+	if (unlock)
+		mmap_read_unlock(mm);
+
+	__mt_destroy(mas_detach->tree);
+}
+
 /*
  * do_vmi_align_munmap() - munmap the aligned region from @start to @end.
  * @vmi: The vma iterator
@@ -703,7 +755,7 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
 		    struct mm_struct *mm, unsigned long start,
 		    unsigned long end, struct list_head *uf, bool unlock)
 {
-	struct vm_area_struct *prev, *next = NULL;
+	struct vm_area_struct *next = NULL;
 	struct maple_tree mt_detach;
 	int count = 0;
 	int error = -ENOMEM;
@@ -818,31 +870,9 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
 		goto clear_tree_failed;
 
 	/* Point of no return */
-	mm->locked_vm -= locked_vm;
-	mm->map_count -= count;
-	if (unlock)
-		mmap_write_downgrade(mm);
-
-	prev = vma_iter_prev_range(vmi);
-	next = vma_next(vmi);
-	if (next)
-		vma_iter_prev_range(vmi);
-
-	/*
-	 * We can free page tables without write-locking mmap_lock because VMAs
-	 * were isolated before we downgraded mmap_lock.
-	 */
-	mas_set(&mas_detach, 1);
-	unmap_region(mm, &mas_detach, vma, prev, next, start, end, count,
-		     !unlock);
-	/* Statistics and freeing VMAs */
-	mas_set(&mas_detach, 0);
-	remove_mt(mm, &mas_detach);
-	validate_mm(mm);
-	if (unlock)
-		mmap_read_unlock(mm);
+	vmi_complete_munmap_vmas(vmi, vma, mm, start, end, unlock, &mas_detach,
+				 locked_vm);
 
-	__mt_destroy(&mt_detach);
 	return 0;
 
 modify_vma_failed:
-- 
2.43.0




* [PATCH v7 04/21] mm/vma: Extract the gathering of vmas from do_vmi_align_munmap()
  2024-08-22 19:25 [PATCH v7 00/21] Avoid MAP_FIXED gap exposure Liam R. Howlett
                   ` (2 preceding siblings ...)
  2024-08-22 19:25 ` [PATCH v7 03/21] mm/vma: Introduce vmi_complete_munmap_vmas() Liam R. Howlett
@ 2024-08-22 19:25 ` Liam R. Howlett
  2024-08-22 19:25 ` [PATCH v7 05/21] mm/vma: Introduce vma_munmap_struct for use in munmap operations Liam R. Howlett
                   ` (16 subsequent siblings)
  20 siblings, 0 replies; 32+ messages in thread
From: Liam R. Howlett @ 2024-08-22 19:25 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Suren Baghdasaryan, Lorenzo Stoakes,
	Matthew Wilcox, Vlastimil Babka, sidhartha.kumar, Bert Karwatzki,
	Jiri Olsa, Kees Cook, Paul E . McKenney, Liam R. Howlett,
	Liam R . Howlett

From: "Liam R. Howlett" <Liam.Howlett@Oracle.com>

Create vmi_gather_munmap_vmas() to handle the gathering of vmas into a
detached maple tree for removal later.  Part of the gathering is the
splitting of vmas that span the boundary.
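
As a worked example (hypothetical addresses, for illustration only):

	/*
	 *   vma A: [0x1000, 0x4000)      vma B: [0x4000, 0x7000)
	 *   munmap range:     [0x2000, 0x6000)
	 *
	 * vmi_gather_munmap_vmas() splits A at 0x2000 and B at 0x6000,
	 * then stores A' [0x2000, 0x4000) at index 0 and B' [0x4000,
	 * 0x6000) at index 1 of mas_detach, marking both as detached
	 * for removal after the vma tree is updated.
	 */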

Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
---
 mm/vma.c | 82 +++++++++++++++++++++++++++++++++++++++-----------------
 1 file changed, 58 insertions(+), 24 deletions(-)

diff --git a/mm/vma.c b/mm/vma.c
index 3a2098464b8f..da489063b2de 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -737,32 +737,30 @@ vmi_complete_munmap_vmas(struct vma_iterator *vmi, struct vm_area_struct *vma,
 }
 
 /*
- * do_vmi_align_munmap() - munmap the aligned region from @start to @end.
+ * vmi_gather_munmap_vmas() - Put all VMAs within a range into a maple tree
+ * for removal at a later date.  Handles splitting first and last if necessary
+ * and marking the vmas as isolated.
+ *
  * @vmi: The vma iterator
  * @vma: The starting vm_area_struct
  * @mm: The mm_struct
  * @start: The aligned start address to munmap.
  * @end: The aligned end address to munmap.
  * @uf: The userfaultfd list_head
- * @unlock: Set to true to drop the mmap_lock.  unlocking only happens on
- * success.
+ * @mas_detach: The maple state tracking the detached tree
+ * @locked_vm: a pointer to store the VM_LOCKED pages count.
  *
- * Return: 0 on success and drops the lock if so directed, error and leaves the
- * lock held otherwise.
+ * Return: 0 on success
  */
-int
-do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
+static int
+vmi_gather_munmap_vmas(struct vma_iterator *vmi, struct vm_area_struct *vma,
 		    struct mm_struct *mm, unsigned long start,
-		    unsigned long end, struct list_head *uf, bool unlock)
+		    unsigned long end, struct list_head *uf,
+		    struct ma_state *mas_detach, unsigned long *locked_vm)
 {
 	struct vm_area_struct *next = NULL;
-	struct maple_tree mt_detach;
 	int count = 0;
 	int error = -ENOMEM;
-	unsigned long locked_vm = 0;
-	MA_STATE(mas_detach, &mt_detach, 0, 0);
-	mt_init_flags(&mt_detach, vmi->mas.tree->ma_flags & MT_FLAGS_LOCK_MASK);
-	mt_on_stack(mt_detach);
 
 	/*
 	 * If we need to split any vma, do it now to save pain later.
@@ -812,15 +810,15 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
 				goto end_split_failed;
 		}
 		vma_start_write(next);
-		mas_set(&mas_detach, count);
-		error = mas_store_gfp(&mas_detach, next, GFP_KERNEL);
+		mas_set(mas_detach, count++);
+		error = mas_store_gfp(mas_detach, next, GFP_KERNEL);
 		if (error)
 			goto munmap_gather_failed;
+
 		vma_mark_detached(next, true);
 		if (next->vm_flags & VM_LOCKED)
-			locked_vm += vma_pages(next);
+			*locked_vm += vma_pages(next);
 
-		count++;
 		if (unlikely(uf)) {
 			/*
 			 * If userfaultfd_unmap_prep returns an error the vmas
@@ -845,7 +843,7 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
 #if defined(CONFIG_DEBUG_VM_MAPLE_TREE)
 	/* Make sure no VMAs are about to be lost. */
 	{
-		MA_STATE(test, &mt_detach, 0, 0);
+		MA_STATE(test, mas_detach->tree, 0, 0);
 		struct vm_area_struct *vma_mas, *vma_test;
 		int test_count = 0;
 
@@ -865,6 +863,47 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	while (vma_iter_addr(vmi) > start)
 		vma_iter_prev_range(vmi);
 
+	return 0;
+
+userfaultfd_error:
+munmap_gather_failed:
+end_split_failed:
+	abort_munmap_vmas(mas_detach);
+start_split_failed:
+map_count_exceeded:
+	return error;
+}
+
+/*
+ * do_vmi_align_munmap() - munmap the aligned region from @start to @end.
+ * @vmi: The vma iterator
+ * @vma: The starting vm_area_struct
+ * @mm: The mm_struct
+ * @start: The aligned start address to munmap.
+ * @end: The aligned end address to munmap.
+ * @uf: The userfaultfd list_head
+ * @unlock: Set to true to drop the mmap_lock.  unlocking only happens on
+ * success.
+ *
+ * Return: 0 on success and drops the lock if so directed, error and leaves the
+ * lock held otherwise.
+ */
+int do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
+		struct mm_struct *mm, unsigned long start, unsigned long end,
+		struct list_head *uf, bool unlock)
+{
+	struct maple_tree mt_detach;
+	MA_STATE(mas_detach, &mt_detach, 0, 0);
+	mt_init_flags(&mt_detach, vmi->mas.tree->ma_flags & MT_FLAGS_LOCK_MASK);
+	mt_on_stack(mt_detach);
+	int error;
+	unsigned long locked_vm = 0;
+
+	error = vmi_gather_munmap_vmas(vmi, vma, mm, start, end, uf,
+				       &mas_detach, &locked_vm);
+	if (error)
+		goto gather_failed;
+
 	error = vma_iter_clear_gfp(vmi, start, end, GFP_KERNEL);
 	if (error)
 		goto clear_tree_failed;
@@ -872,17 +911,12 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	/* Point of no return */
 	vmi_complete_munmap_vmas(vmi, vma, mm, start, end, unlock, &mas_detach,
 				 locked_vm);
-
 	return 0;
 
 modify_vma_failed:
 clear_tree_failed:
-userfaultfd_error:
-munmap_gather_failed:
-end_split_failed:
 	abort_munmap_vmas(&mas_detach);
-start_split_failed:
-map_count_exceeded:
+gather_failed:
 	validate_mm(mm);
 	return error;
 }
-- 
2.43.0




* [PATCH v7 05/21] mm/vma: Introduce vma_munmap_struct for use in munmap operations
  2024-08-22 19:25 [PATCH v7 00/21] Avoid MAP_FIXED gap exposure Liam R. Howlett
                   ` (3 preceding siblings ...)
  2024-08-22 19:25 ` [PATCH v7 04/21] mm/vma: Extract the gathering of vmas from do_vmi_align_munmap() Liam R. Howlett
@ 2024-08-22 19:25 ` Liam R. Howlett
  2024-08-22 19:25 ` [PATCH v7 06/21] mm/vma: Change munmap to use vma_munmap_struct() for accounting and surrounding vmas Liam R. Howlett
                   ` (15 subsequent siblings)
  20 siblings, 0 replies; 32+ messages in thread
From: Liam R. Howlett @ 2024-08-22 19:25 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Suren Baghdasaryan, Lorenzo Stoakes,
	Matthew Wilcox, Vlastimil Babka, sidhartha.kumar, Bert Karwatzki,
	Jiri Olsa, Kees Cook, Paul E . McKenney, Liam R. Howlett,
	Liam R . Howlett

From: "Liam R. Howlett" <Liam.Howlett@Oracle.com>

Use a structure to pass along all the necessary information and counters
involved in removing vmas from the mm_struct.

Update the vmi_ function name prefixes to vms_ to indicate the change
in the type of the first argument.

Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Reviewed-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
---
 mm/vma.c | 142 +++++++++++++++++++++++++++++--------------------------
 mm/vma.h |  16 +++++++
 2 files changed, 91 insertions(+), 67 deletions(-)

diff --git a/mm/vma.c b/mm/vma.c
index da489063b2de..e1aee43a3dc4 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -80,6 +80,32 @@ static void init_multi_vma_prep(struct vma_prepare *vp,
 
 }
 
+/*
+ * init_vma_munmap() - Initializer wrapper for vma_munmap_struct
+ * @vms: The vma munmap struct
+ * @vmi: The vma iterator
+ * @vma: The first vm_area_struct to munmap
+ * @start: The aligned start address to munmap
+ * @end: The aligned end address to munmap
+ * @uf: The userfaultfd list_head
+ * @unlock: Unlock after the operation.  Only unlocked on success
+ */
+static inline void init_vma_munmap(struct vma_munmap_struct *vms,
+		struct vma_iterator *vmi, struct vm_area_struct *vma,
+		unsigned long start, unsigned long end, struct list_head *uf,
+		bool unlock)
+{
+	vms->vmi = vmi;
+	vms->vma = vma;
+	vms->mm = vma->vm_mm;
+	vms->start = start;
+	vms->end = end;
+	vms->unlock = unlock;
+	vms->uf = uf;
+	vms->vma_count = 0;
+	vms->nr_pages = vms->locked_vm = 0;
+}
+
 /*
  * Return true if we can merge this (vm_flags,anon_vma,file,vm_pgoff)
  * in front of (at a lower virtual address and file offset than) the vma.
@@ -685,81 +711,62 @@ static inline void abort_munmap_vmas(struct ma_state *mas_detach)
 }
 
 /*
- * vmi_complete_munmap_vmas() - Finish the munmap() operation
- * @vmi: The vma iterator
- * @vma: The first vma to be munmapped
- * @mm: The mm struct
- * @start: The start address
- * @end: The end address
- * @unlock: Unlock the mm or not
- * @mas_detach: the maple state of the detached vma maple tree
- * @locked_vm: The locked_vm count in the detached vmas
+ * vms_complete_munmap_vmas() - Finish the munmap() operation
+ * @vms: The vma munmap struct
+ * @mas_detach: The maple state of the detached vmas
  *
- * This function updates the mm_struct, unmaps the region, frees the resources
+ * This updates the mm_struct, unmaps the region, frees the resources
  * used for the munmap() and may downgrade the lock - if requested.  Everything
  * needed to be done once the vma maple tree is updated.
  */
-static void
-vmi_complete_munmap_vmas(struct vma_iterator *vmi, struct vm_area_struct *vma,
-		struct mm_struct *mm, unsigned long start, unsigned long end,
-		bool unlock, struct ma_state *mas_detach,
-		unsigned long locked_vm)
+static void vms_complete_munmap_vmas(struct vma_munmap_struct *vms,
+		struct ma_state *mas_detach)
 {
 	struct vm_area_struct *prev, *next;
-	int count;
+	struct mm_struct *mm;
 
-	count = mas_detach->index + 1;
-	mm->map_count -= count;
-	mm->locked_vm -= locked_vm;
-	if (unlock)
+	mm = vms->mm;
+	mm->map_count -= vms->vma_count;
+	mm->locked_vm -= vms->locked_vm;
+	if (vms->unlock)
 		mmap_write_downgrade(mm);
 
-	prev = vma_iter_prev_range(vmi);
-	next = vma_next(vmi);
+	prev = vma_iter_prev_range(vms->vmi);
+	next = vma_next(vms->vmi);
 	if (next)
-		vma_iter_prev_range(vmi);
+		vma_iter_prev_range(vms->vmi);
 
 	/*
 	 * We can free page tables without write-locking mmap_lock because VMAs
 	 * were isolated before we downgraded mmap_lock.
 	 */
 	mas_set(mas_detach, 1);
-	unmap_region(mm, mas_detach, vma, prev, next, start, end, count,
-		     !unlock);
+	unmap_region(mm, mas_detach, vms->vma, prev, next, vms->start, vms->end,
+		     vms->vma_count, !vms->unlock);
 	/* Statistics and freeing VMAs */
 	mas_set(mas_detach, 0);
 	remove_mt(mm, mas_detach);
 	validate_mm(mm);
-	if (unlock)
+	if (vms->unlock)
 		mmap_read_unlock(mm);
 
 	__mt_destroy(mas_detach->tree);
 }
 
 /*
- * vmi_gather_munmap_vmas() - Put all VMAs within a range into a maple tree
+ * vms_gather_munmap_vmas() - Put all VMAs within a range into a maple tree
  * for removal at a later date.  Handles splitting first and last if necessary
  * and marking the vmas as isolated.
  *
- * @vmi: The vma iterator
- * @vma: The starting vm_area_struct
- * @mm: The mm_struct
- * @start: The aligned start address to munmap.
- * @end: The aligned end address to munmap.
- * @uf: The userfaultfd list_head
+ * @vms: The vma munmap struct
  * @mas_detach: The maple state tracking the detached tree
- * @locked_vm: a pointer to store the VM_LOCKED pages count.
  *
  * Return: 0 on success
  */
-static int
-vmi_gather_munmap_vmas(struct vma_iterator *vmi, struct vm_area_struct *vma,
-		    struct mm_struct *mm, unsigned long start,
-		    unsigned long end, struct list_head *uf,
-		    struct ma_state *mas_detach, unsigned long *locked_vm)
+static int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
+		struct ma_state *mas_detach)
 {
 	struct vm_area_struct *next = NULL;
-	int count = 0;
 	int error = -ENOMEM;
 
 	/*
@@ -771,23 +778,24 @@ vmi_gather_munmap_vmas(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	 */
 
 	/* Does it split the first one? */
-	if (start > vma->vm_start) {
+	if (vms->start > vms->vma->vm_start) {
 
 		/*
 		 * Make sure that map_count on return from munmap() will
 		 * not exceed its limit; but let map_count go just above
 		 * its limit temporarily, to help free resources as expected.
 		 */
-		if (end < vma->vm_end && mm->map_count >= sysctl_max_map_count)
+		if (vms->end < vms->vma->vm_end &&
+		    vms->mm->map_count >= sysctl_max_map_count)
 			goto map_count_exceeded;
 
 		/* Don't bother splitting the VMA if we can't unmap it anyway */
-		if (!can_modify_vma(vma)) {
+		if (!can_modify_vma(vms->vma)) {
 			error = -EPERM;
 			goto start_split_failed;
 		}
 
-		error = __split_vma(vmi, vma, start, 1);
+		error = __split_vma(vms->vmi, vms->vma, vms->start, 1);
 		if (error)
 			goto start_split_failed;
 	}
@@ -796,7 +804,7 @@ vmi_gather_munmap_vmas(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	 * Detach a range of VMAs from the mm. Using next as a temp variable as
 	 * it is always overwritten.
 	 */
-	next = vma;
+	next = vms->vma;
 	do {
 		if (!can_modify_vma(next)) {
 			error = -EPERM;
@@ -804,22 +812,22 @@ vmi_gather_munmap_vmas(struct vma_iterator *vmi, struct vm_area_struct *vma,
 		}
 
 		/* Does it split the end? */
-		if (next->vm_end > end) {
-			error = __split_vma(vmi, next, end, 0);
+		if (next->vm_end > vms->end) {
+			error = __split_vma(vms->vmi, next, vms->end, 0);
 			if (error)
 				goto end_split_failed;
 		}
 		vma_start_write(next);
-		mas_set(mas_detach, count++);
+		mas_set(mas_detach, vms->vma_count++);
 		error = mas_store_gfp(mas_detach, next, GFP_KERNEL);
 		if (error)
 			goto munmap_gather_failed;
 
 		vma_mark_detached(next, true);
 		if (next->vm_flags & VM_LOCKED)
-			*locked_vm += vma_pages(next);
+			vms->locked_vm += vma_pages(next);
 
-		if (unlikely(uf)) {
+		if (unlikely(vms->uf)) {
 			/*
 			 * If userfaultfd_unmap_prep returns an error the vmas
 			 * will remain split, but userland will get a
@@ -829,16 +837,17 @@ vmi_gather_munmap_vmas(struct vma_iterator *vmi, struct vm_area_struct *vma,
 			 * split, despite we could. This is unlikely enough
 			 * failure that it's not worth optimizing it for.
 			 */
-			error = userfaultfd_unmap_prep(next, start, end, uf);
+			error = userfaultfd_unmap_prep(next, vms->start,
+						       vms->end, vms->uf);
 
 			if (error)
 				goto userfaultfd_error;
 		}
 #ifdef CONFIG_DEBUG_VM_MAPLE_TREE
-		BUG_ON(next->vm_start < start);
-		BUG_ON(next->vm_start > end);
+		BUG_ON(next->vm_start < vms->start);
+		BUG_ON(next->vm_start > vms->end);
 #endif
-	} for_each_vma_range(*vmi, next, end);
+	} for_each_vma_range(*(vms->vmi), next, vms->end);
 
 #if defined(CONFIG_DEBUG_VM_MAPLE_TREE)
 	/* Make sure no VMAs are about to be lost. */
@@ -847,27 +856,28 @@ vmi_gather_munmap_vmas(struct vma_iterator *vmi, struct vm_area_struct *vma,
 		struct vm_area_struct *vma_mas, *vma_test;
 		int test_count = 0;
 
-		vma_iter_set(vmi, start);
+		vma_iter_set(vms->vmi, vms->start);
 		rcu_read_lock();
-		vma_test = mas_find(&test, count - 1);
-		for_each_vma_range(*vmi, vma_mas, end) {
+		vma_test = mas_find(&test, vms->vma_count - 1);
+		for_each_vma_range(*(vms->vmi), vma_mas, vms->end) {
 			BUG_ON(vma_mas != vma_test);
 			test_count++;
-			vma_test = mas_next(&test, count - 1);
+			vma_test = mas_next(&test, vms->vma_count - 1);
 		}
 		rcu_read_unlock();
-		BUG_ON(count != test_count);
+		BUG_ON(vms->vma_count != test_count);
 	}
 #endif
 
-	while (vma_iter_addr(vmi) > start)
-		vma_iter_prev_range(vmi);
+	while (vma_iter_addr(vms->vmi) > vms->start)
+		vma_iter_prev_range(vms->vmi);
 
 	return 0;
 
 userfaultfd_error:
 munmap_gather_failed:
 end_split_failed:
+modify_vma_failed:
 	abort_munmap_vmas(mas_detach);
 start_split_failed:
 map_count_exceeded:
@@ -896,11 +906,11 @@ int do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	MA_STATE(mas_detach, &mt_detach, 0, 0);
 	mt_init_flags(&mt_detach, vmi->mas.tree->ma_flags & MT_FLAGS_LOCK_MASK);
 	mt_on_stack(mt_detach);
+	struct vma_munmap_struct vms;
 	int error;
-	unsigned long locked_vm = 0;
 
-	error = vmi_gather_munmap_vmas(vmi, vma, mm, start, end, uf,
-				       &mas_detach, &locked_vm);
+	init_vma_munmap(&vms, vmi, vma, start, end, uf, unlock);
+	error = vms_gather_munmap_vmas(&vms, &mas_detach);
 	if (error)
 		goto gather_failed;
 
@@ -909,11 +919,9 @@ int do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
 		goto clear_tree_failed;
 
 	/* Point of no return */
-	vmi_complete_munmap_vmas(vmi, vma, mm, start, end, unlock, &mas_detach,
-				 locked_vm);
+	vms_complete_munmap_vmas(&vms, &mas_detach);
 	return 0;
 
-modify_vma_failed:
 clear_tree_failed:
 	abort_munmap_vmas(&mas_detach);
 gather_failed:
diff --git a/mm/vma.h b/mm/vma.h
index da31d0f62157..cb67acf59012 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -26,6 +26,22 @@ struct unlink_vma_file_batch {
 	struct vm_area_struct *vmas[8];
 };
 
+/*
+ * vma munmap operation
+ */
+struct vma_munmap_struct {
+	struct vma_iterator *vmi;
+	struct mm_struct *mm;
+	struct vm_area_struct *vma;     /* The first vma to munmap */
+	struct list_head *uf;           /* Userfaultfd list_head */
+	unsigned long start;            /* Aligned start addr (inclusive) */
+	unsigned long end;              /* Aligned end addr (exclusive) */
+	int vma_count;                  /* Number of vmas that will be removed */
+	unsigned long nr_pages;         /* Number of pages being removed */
+	unsigned long locked_vm;        /* Number of locked pages */
+	bool unlock;                    /* Unlock after the munmap */
+};
+
 #ifdef CONFIG_DEBUG_VM_MAPLE_TREE
 void validate_mm(struct mm_struct *mm);
 #else
-- 
2.43.0




* [PATCH v7 06/21] mm/vma: Change munmap to use vma_munmap_struct() for accounting and surrounding vmas
  2024-08-22 19:25 [PATCH v7 00/21] Avoid MAP_FIXED gap exposure Liam R. Howlett
                   ` (4 preceding siblings ...)
  2024-08-22 19:25 ` [PATCH v7 05/21] mm/vma: Introduce vma_munmap_struct for use in munmap operations Liam R. Howlett
@ 2024-08-22 19:25 ` Liam R. Howlett
  2024-08-23  8:43   ` Bert Karwatzki
  2024-08-23 13:30   ` [PATCH] mm/vma: fix bookkeeping checks Liam R. Howlett
  2024-08-22 19:25 ` [PATCH v7 07/21] mm/vma: Extract validate_mm() from vma_complete() Liam R. Howlett
                   ` (14 subsequent siblings)
  20 siblings, 2 replies; 32+ messages in thread
From: Liam R. Howlett @ 2024-08-22 19:25 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Suren Baghdasaryan, Lorenzo Stoakes,
	Matthew Wilcox, Vlastimil Babka, sidhartha.kumar, Bert Karwatzki,
	Jiri Olsa, Kees Cook, Paul E . McKenney, Liam R. Howlett,
	Liam R . Howlett

From: "Liam R. Howlett" <Liam.Howlett@Oracle.com>

Clean up the code by changing the munmap operation to use a structure
for the accounting and munmap variables.

Since remove_mt() is only called in one location and its contents will
be reduced to almost nothing, the remains of the function can be added
to vms_complete_munmap_vmas().

Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Suren Baghdasaryan <surenb@google.com>
---
 mm/vma.c | 83 +++++++++++++++++++++++++++++---------------------------
 mm/vma.h |  6 ++++
 2 files changed, 49 insertions(+), 40 deletions(-)

diff --git a/mm/vma.c b/mm/vma.c
index e1aee43a3dc4..58604fe3bd03 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -103,7 +103,8 @@ static inline void init_vma_munmap(struct vma_munmap_struct *vms,
 	vms->unlock = unlock;
 	vms->uf = uf;
 	vms->vma_count = 0;
-	vms->nr_pages = vms->locked_vm = 0;
+	vms->nr_pages = vms->locked_vm = vms->nr_accounted = 0;
+	vms->exec_vm = vms->stack_vm = vms->data_vm = 0;
 }
 
 /*
@@ -299,30 +300,6 @@ static int split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	return __split_vma(vmi, vma, addr, new_below);
 }
 
-/*
- * Ok - we have the memory areas we should free on a maple tree so release them,
- * and do the vma updates.
- *
- * Called with the mm semaphore held.
- */
-static inline void remove_mt(struct mm_struct *mm, struct ma_state *mas)
-{
-	unsigned long nr_accounted = 0;
-	struct vm_area_struct *vma;
-
-	/* Update high watermark before we lower total_vm */
-	update_hiwater_vm(mm);
-	mas_for_each(mas, vma, ULONG_MAX) {
-		long nrpages = vma_pages(vma);
-
-		if (vma->vm_flags & VM_ACCOUNT)
-			nr_accounted += nrpages;
-		vm_stat_account(mm, vma->vm_flags, -nrpages);
-		remove_vma(vma, false);
-	}
-	vm_unacct_memory(nr_accounted);
-}
-
 /*
  * init_vma_prep() - Initializer wrapper for vma_prepare struct
  * @vp: The vma_prepare struct
@@ -722,7 +699,7 @@ static inline void abort_munmap_vmas(struct ma_state *mas_detach)
 static void vms_complete_munmap_vmas(struct vma_munmap_struct *vms,
 		struct ma_state *mas_detach)
 {
-	struct vm_area_struct *prev, *next;
+	struct vm_area_struct *vma;
 	struct mm_struct *mm;
 
 	mm = vms->mm;
@@ -731,21 +708,31 @@ static void vms_complete_munmap_vmas(struct vma_munmap_struct *vms,
 	if (vms->unlock)
 		mmap_write_downgrade(mm);
 
-	prev = vma_iter_prev_range(vms->vmi);
-	next = vma_next(vms->vmi);
-	if (next)
-		vma_iter_prev_range(vms->vmi);
-
 	/*
 	 * We can free page tables without write-locking mmap_lock because VMAs
 	 * were isolated before we downgraded mmap_lock.
 	 */
 	mas_set(mas_detach, 1);
-	unmap_region(mm, mas_detach, vms->vma, prev, next, vms->start, vms->end,
-		     vms->vma_count, !vms->unlock);
-	/* Statistics and freeing VMAs */
+	unmap_region(mm, mas_detach, vms->vma, vms->prev, vms->next,
+		     vms->start, vms->end, vms->vma_count, !vms->unlock);
+	/* Update high watermark before we lower total_vm */
+	update_hiwater_vm(mm);
+	/* Stat accounting */
+	WRITE_ONCE(mm->total_vm, READ_ONCE(mm->total_vm) - vms->nr_pages);
+	/* Paranoid bookkeeping; check before the counters are lowered */
+	VM_WARN_ON(vms->exec_vm > mm->exec_vm);
+	VM_WARN_ON(vms->stack_vm > mm->stack_vm);
+	VM_WARN_ON(vms->data_vm > mm->data_vm);
+	mm->exec_vm -= vms->exec_vm;
+	mm->stack_vm -= vms->stack_vm;
+	mm->data_vm -= vms->data_vm;
+
+	/* Remove and clean up vmas */
 	mas_set(mas_detach, 0);
-	remove_mt(mm, mas_detach);
+	mas_for_each(mas_detach, vma, ULONG_MAX)
+		remove_vma(vma, false);
+
+	vm_unacct_memory(vms->nr_accounted);
 	validate_mm(mm);
 	if (vms->unlock)
 		mmap_read_unlock(mm);
@@ -799,18 +786,19 @@ static int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
 		if (error)
 			goto start_split_failed;
 	}
+	vms->prev = vma_prev(vms->vmi);
 
 	/*
 	 * Detach a range of VMAs from the mm. Using next as a temp variable as
 	 * it is always overwritten.
 	 */
-	next = vms->vma;
-	do {
+	for_each_vma_range(*(vms->vmi), next, vms->end) {
+		long nrpages;
+
 		if (!can_modify_vma(next)) {
 			error = -EPERM;
 			goto modify_vma_failed;
 		}
-
 		/* Does it split the end? */
 		if (next->vm_end > vms->end) {
 			error = __split_vma(vms->vmi, next, vms->end, 0);
@@ -824,8 +812,21 @@ static int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
 			goto munmap_gather_failed;
 
 		vma_mark_detached(next, true);
+		nrpages = vma_pages(next);
+
+		vms->nr_pages += nrpages;
 		if (next->vm_flags & VM_LOCKED)
-			vms->locked_vm += vma_pages(next);
+			vms->locked_vm += nrpages;
+
+		if (next->vm_flags & VM_ACCOUNT)
+			vms->nr_accounted += nrpages;
+
+		if (is_exec_mapping(next->vm_flags))
+			vms->exec_vm += nrpages;
+		else if (is_stack_mapping(next->vm_flags))
+			vms->stack_vm += nrpages;
+		else if (is_data_mapping(next->vm_flags))
+			vms->data_vm += nrpages;
 
 		if (unlikely(vms->uf)) {
 			/*
@@ -847,7 +848,9 @@ static int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
 		BUG_ON(next->vm_start < vms->start);
 		BUG_ON(next->vm_start > vms->end);
 #endif
-	} for_each_vma_range(*(vms->vmi), next, vms->end);
+	}
+
+	vms->next = vma_next(vms->vmi);
 
 #if defined(CONFIG_DEBUG_VM_MAPLE_TREE)
 	/* Make sure no VMAs are about to be lost. */
diff --git a/mm/vma.h b/mm/vma.h
index cb67acf59012..cbf55e0e0c4f 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -33,12 +33,18 @@ struct vma_munmap_struct {
 	struct vma_iterator *vmi;
 	struct mm_struct *mm;
 	struct vm_area_struct *vma;     /* The first vma to munmap */
+	struct vm_area_struct *prev;    /* vma before the munmap area */
+	struct vm_area_struct *next;    /* vma after the munmap area */
 	struct list_head *uf;           /* Userfaultfd list_head */
 	unsigned long start;            /* Aligned start addr (inclusive) */
 	unsigned long end;              /* Aligned end addr (exclusive) */
 	int vma_count;                  /* Number of vmas that will be removed */
 	unsigned long nr_pages;         /* Number of pages being removed */
 	unsigned long locked_vm;        /* Number of locked pages */
+	unsigned long nr_accounted;     /* Number of VM_ACCOUNT pages */
+	unsigned long exec_vm;
+	unsigned long stack_vm;
+	unsigned long data_vm;
 	bool unlock;                    /* Unlock after the munmap */
 };
 
-- 
2.43.0




* [PATCH v7 07/21] mm/vma: Extract validate_mm() from vma_complete()
  2024-08-22 19:25 [PATCH v7 00/21] Avoid MAP_FIXED gap exposure Liam R. Howlett
                   ` (5 preceding siblings ...)
  2024-08-22 19:25 ` [PATCH v7 06/21] mm/vma: Change munmap to use vma_munmap_struct() for accounting and surrounding vmas Liam R. Howlett
@ 2024-08-22 19:25 ` Liam R. Howlett
  2024-08-22 19:25 ` [PATCH v7 08/21] mm/vma: Inline munmap operation in mmap_region() Liam R. Howlett
                   ` (13 subsequent siblings)
  20 siblings, 0 replies; 32+ messages in thread
From: Liam R. Howlett @ 2024-08-22 19:25 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Suren Baghdasaryan, Lorenzo Stoakes,
	Matthew Wilcox, Vlastimil Babka, sidhartha.kumar, Bert Karwatzki,
	Jiri Olsa, Kees Cook, Paul E . McKenney, Liam R. Howlett,
	Liam R . Howlett

From: "Liam R. Howlett" <Liam.Howlett@Oracle.com>

vma_complete() will need to be called at a time when it is unsafe to
call validate_mm().  Extract the validate_mm() call out to all the
callers now so that only one location needs to be modified in the next
change.

Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
---
 mm/mmap.c | 1 +
 mm/vma.c  | 5 ++++-
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 30ae4cb5cec9..112f2111c457 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1796,6 +1796,7 @@ static int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
 		vma_iter_store(vmi, vma);
 
 		vma_complete(&vp, vmi, mm);
+		validate_mm(mm);
 		khugepaged_enter_vma(vma, flags);
 		goto out;
 	}
diff --git a/mm/vma.c b/mm/vma.c
index 58604fe3bd03..f061aa402f92 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -269,6 +269,7 @@ static int __split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
 
 	/* vma_complete stores the new vma */
 	vma_complete(&vp, vmi, vma->vm_mm);
+	validate_mm(vma->vm_mm);
 
 	/* Success. */
 	if (new_below)
@@ -548,6 +549,7 @@ int vma_expand(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	vma_iter_store(vmi, vma);
 
 	vma_complete(&vp, vmi, vma->vm_mm);
+	validate_mm(vma->vm_mm);
 	return 0;
 
 nomem:
@@ -589,6 +591,7 @@ int vma_shrink(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	vma_iter_clear(vmi);
 	vma_set_range(vma, start, end, pgoff);
 	vma_complete(&vp, vmi, vma->vm_mm);
+	validate_mm(vma->vm_mm);
 	return 0;
 }
 
@@ -668,7 +671,6 @@ void vma_complete(struct vma_prepare *vp,
 	}
 	if (vp->insert && vp->file)
 		uprobe_mmap(vp->insert);
-	validate_mm(mm);
 }
 
 /*
@@ -1202,6 +1204,7 @@ static struct vm_area_struct
 	}
 
 	vma_complete(&vp, vmi, mm);
+	validate_mm(mm);
 	khugepaged_enter_vma(res, vm_flags);
 	return res;
 
-- 
2.43.0




* [PATCH v7 08/21] mm/vma: Inline munmap operation in mmap_region()
  2024-08-22 19:25 [PATCH v7 00/21] Avoid MAP_FIXED gap exposure Liam R. Howlett
                   ` (6 preceding siblings ...)
  2024-08-22 19:25 ` [PATCH v7 07/21] mm/vma: Extract validate_mm() from vma_complete() Liam R. Howlett
@ 2024-08-22 19:25 ` Liam R. Howlett
  2024-08-22 19:25 ` [PATCH v7 09/21] mm/vma: Expand mmap_region() munmap call Liam R. Howlett
                   ` (12 subsequent siblings)
  20 siblings, 0 replies; 32+ messages in thread
From: Liam R. Howlett @ 2024-08-22 19:25 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Suren Baghdasaryan, Lorenzo Stoakes,
	Matthew Wilcox, Vlastimil Babka, sidhartha.kumar, Bert Karwatzki,
	Jiri Olsa, Kees Cook, Paul E . McKenney, Liam R. Howlett

From: "Liam R. Howlett" <Liam.Howlett@Oracle.com>

mmap_region() is already passed a sanitized addr and len, so change the
call to do_vmi_munmap() to do_vmi_align_munmap() and inline the other
checks.

The inlining of the function and checks is an intermediate step in the
series that makes future patches easier to follow.

Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
---
 mm/mmap.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 112f2111c457..0f5be29d48b6 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1388,12 +1388,14 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 			return -ENOMEM;
 	}
 
-	/* Unmap any existing mapping in the area */
-	error = do_vmi_munmap(&vmi, mm, addr, len, uf, false);
-	if (error == -EPERM)
-		return error;
-	else if (error)
-		return -ENOMEM;
+	/* Find the first overlapping VMA */
+	vma = vma_find(&vmi, end);
+	if (vma) {
+		/* Unmap any existing mapping in the area */
+		if (do_vmi_align_munmap(&vmi, vma, mm, addr, end, uf, false))
+			return -ENOMEM;
+		vma = NULL;
+	}
 
 	/*
 	 * Private writable mapping: check memory availability
-- 
2.43.0




* [PATCH v7 09/21] mm/vma: Expand mmap_region() munmap call
  2024-08-22 19:25 [PATCH v7 00/21] Avoid MAP_FIXED gap exposure Liam R. Howlett
                   ` (7 preceding siblings ...)
  2024-08-22 19:25 ` [PATCH v7 08/21] mm/vma: Inline munmap operation in mmap_region() Liam R. Howlett
@ 2024-08-22 19:25 ` Liam R. Howlett
  2024-08-22 19:25 ` [PATCH v7 10/21] mm/vma: Support vma == NULL in init_vma_munmap() Liam R. Howlett
                   ` (11 subsequent siblings)
  20 siblings, 0 replies; 32+ messages in thread
From: Liam R. Howlett @ 2024-08-22 19:25 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Suren Baghdasaryan, Lorenzo Stoakes,
	Matthew Wilcox, Vlastimil Babka, sidhartha.kumar, Bert Karwatzki,
	Jiri Olsa, Kees Cook, Paul E . McKenney, Liam R. Howlett

From: "Liam R. Howlett" <Liam.Howlett@Oracle.com>

Open code the do_vmi_align_munmap() call so that it can be broken up
later in the series.

This requires exposing a few more vma operations.

Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
---
 mm/mmap.c | 26 ++++++++++++++++++++++----
 mm/vma.c  | 31 ++-----------------------------
 mm/vma.h  | 33 +++++++++++++++++++++++++++++++++
 3 files changed, 57 insertions(+), 33 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 0f5be29d48b6..e7e6bf09b558 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1366,6 +1366,9 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 	struct vm_area_struct *next, *prev, *merge;
 	pgoff_t pglen = len >> PAGE_SHIFT;
 	unsigned long charged = 0;
+	struct vma_munmap_struct vms;
+	struct ma_state mas_detach;
+	struct maple_tree mt_detach;
 	unsigned long end = addr + len;
 	unsigned long merge_start = addr, merge_end = end;
 	bool writable_file_mapping = false;
@@ -1391,10 +1394,27 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 	/* Find the first overlapping VMA */
 	vma = vma_find(&vmi, end);
 	if (vma) {
-		/* Unmap any existing mapping in the area */
-		if (do_vmi_align_munmap(&vmi, vma, mm, addr, end, uf, false))
+		mt_init_flags(&mt_detach, vmi.mas.tree->ma_flags & MT_FLAGS_LOCK_MASK);
+		mt_on_stack(mt_detach);
+		mas_init(&mas_detach, &mt_detach, /* addr = */ 0);
+		init_vma_munmap(&vms, &vmi, vma, addr, end, uf, /* unlock = */ false);
+		/* Prepare to unmap any existing mapping in the area */
+		if (vms_gather_munmap_vmas(&vms, &mas_detach))
+			return -ENOMEM;
+
+		/* Remove any existing mappings from the vma tree */
+		if (vma_iter_clear_gfp(&vmi, addr, end, GFP_KERNEL))
 			return -ENOMEM;
+
+		/* Unmap any existing mapping in the area */
+		vms_complete_munmap_vmas(&vms, &mas_detach);
+		next = vms.next;
+		prev = vms.prev;
+		vma_prev(&vmi);
 		vma = NULL;
+	} else {
+		next = vma_next(&vmi);
+		prev = vma_prev(&vmi);
 	}
 
 	/*
@@ -1407,8 +1427,6 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 		vm_flags |= VM_ACCOUNT;
 	}
 
-	next = vma_next(&vmi);
-	prev = vma_prev(&vmi);
 	if (vm_flags & VM_SPECIAL) {
 		if (prev)
 			vma_iter_next_range(&vmi);
diff --git a/mm/vma.c b/mm/vma.c
index f061aa402f92..6b30f9748187 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -80,33 +80,6 @@ static void init_multi_vma_prep(struct vma_prepare *vp,
 
 }
 
-/*
- * init_vma_munmap() - Initializer wrapper for vma_munmap_struct
- * @vms: The vma munmap struct
- * @vmi: The vma iterator
- * @vma: The first vm_area_struct to munmap
- * @start: The aligned start address to munmap
- * @end: The aligned end address to munmap
- * @uf: The userfaultfd list_head
- * @unlock: Unlock after the operation.  Only unlocked on success
- */
-static inline void init_vma_munmap(struct vma_munmap_struct *vms,
-		struct vma_iterator *vmi, struct vm_area_struct *vma,
-		unsigned long start, unsigned long end, struct list_head *uf,
-		bool unlock)
-{
-	vms->vmi = vmi;
-	vms->vma = vma;
-	vms->mm = vma->vm_mm;
-	vms->start = start;
-	vms->end = end;
-	vms->unlock = unlock;
-	vms->uf = uf;
-	vms->vma_count = 0;
-	vms->nr_pages = vms->locked_vm = vms->nr_accounted = 0;
-	vms->exec_vm = vms->stack_vm = vms->data_vm = 0;
-}
-
 /*
  * Return true if we can merge this (vm_flags,anon_vma,file,vm_pgoff)
  * in front of (at a lower virtual address and file offset than) the vma.
@@ -698,7 +671,7 @@ static inline void abort_munmap_vmas(struct ma_state *mas_detach)
  * used for the munmap() and may downgrade the lock - if requested.  Everything
  * needed to be done once the vma maple tree is updated.
  */
-static void vms_complete_munmap_vmas(struct vma_munmap_struct *vms,
+void vms_complete_munmap_vmas(struct vma_munmap_struct *vms,
 		struct ma_state *mas_detach)
 {
 	struct vm_area_struct *vma;
@@ -752,7 +725,7 @@ static void vms_complete_munmap_vmas(struct vma_munmap_struct *vms,
  *
  * Return: 0 on success
  */
-static int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
+int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
 		struct ma_state *mas_detach)
 {
 	struct vm_area_struct *next = NULL;
diff --git a/mm/vma.h b/mm/vma.h
index cbf55e0e0c4f..e78b24d1cf83 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -78,6 +78,39 @@ int vma_expand(struct vma_iterator *vmi, struct vm_area_struct *vma,
 int vma_shrink(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	       unsigned long start, unsigned long end, pgoff_t pgoff);
 
+/*
+ * init_vma_munmap() - Initializer wrapper for vma_munmap_struct
+ * @vms: The vma munmap struct
+ * @vmi: The vma iterator
+ * @vma: The first vm_area_struct to munmap
+ * @start: The aligned start address to munmap
+ * @end: The aligned end address to munmap
+ * @uf: The userfaultfd list_head
+ * @unlock: Unlock after the operation.  Only unlocked on success
+ */
+static inline void init_vma_munmap(struct vma_munmap_struct *vms,
+		struct vma_iterator *vmi, struct vm_area_struct *vma,
+		unsigned long start, unsigned long end, struct list_head *uf,
+		bool unlock)
+{
+	vms->vmi = vmi;
+	vms->vma = vma;
+	vms->mm = vma->vm_mm;
+	vms->start = start;
+	vms->end = end;
+	vms->unlock = unlock;
+	vms->uf = uf;
+	vms->vma_count = 0;
+	vms->nr_pages = vms->locked_vm = vms->nr_accounted = 0;
+	vms->exec_vm = vms->stack_vm = vms->data_vm = 0;
+}
+
+int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
+		struct ma_state *mas_detach);
+
+void vms_complete_munmap_vmas(struct vma_munmap_struct *vms,
+		struct ma_state *mas_detach);
+
 int
 do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
 		    struct mm_struct *mm, unsigned long start,
-- 
2.43.0




* [PATCH v7 10/21] mm/vma: Support vma == NULL in init_vma_munmap()
  2024-08-22 19:25 [PATCH v7 00/21] Avoid MAP_FIXED gap exposure Liam R. Howlett
                   ` (8 preceding siblings ...)
  2024-08-22 19:25 ` [PATCH v7 09/21] mm/vma: Expand mmap_region() munmap call Liam R. Howlett
@ 2024-08-22 19:25 ` Liam R. Howlett
  2024-08-22 19:25 ` [PATCH v7 11/21] mm/mmap: Reposition vma iterator in mmap_region() Liam R. Howlett
                   ` (10 subsequent siblings)
  20 siblings, 0 replies; 32+ messages in thread
From: Liam R. Howlett @ 2024-08-22 19:25 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Suren Baghdasaryan, Lorenzo Stoakes,
	Matthew Wilcox, Vlastimil Babka, sidhartha.kumar, Bert Karwatzki,
	Jiri Olsa, Kees Cook, Paul E . McKenney, Liam R. Howlett

From: "Liam R. Howlett" <Liam.Howlett@Oracle.com>

Adding support for a NULL vma means that init_vma_munmap() can always
be called, which makes the later call to vms_complete_munmap_vmas()
less error-prone.

Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
---
 mm/mmap.c |  2 +-
 mm/vma.h  | 11 ++++++++---
 2 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index e7e6bf09b558..2b7445a002dc 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1393,11 +1393,11 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 
 	/* Find the first overlapping VMA */
 	vma = vma_find(&vmi, end);
+	init_vma_munmap(&vms, &vmi, vma, addr, end, uf, /* unlock = */ false);
 	if (vma) {
 		mt_init_flags(&mt_detach, vmi.mas.tree->ma_flags & MT_FLAGS_LOCK_MASK);
 		mt_on_stack(mt_detach);
 		mas_init(&mas_detach, &mt_detach, /* addr = */ 0);
-		init_vma_munmap(&vms, &vmi, vma, addr, end, uf, /* unlock = */ false);
 		/* Prepare to unmap any existing mapping in the area */
 		if (vms_gather_munmap_vmas(&vms, &mas_detach))
 			return -ENOMEM;
diff --git a/mm/vma.h b/mm/vma.h
index e78b24d1cf83..0e214bbf443e 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -95,9 +95,14 @@ static inline void init_vma_munmap(struct vma_munmap_struct *vms,
 {
 	vms->vmi = vmi;
 	vms->vma = vma;
-	vms->mm = vma->vm_mm;
-	vms->start = start;
-	vms->end = end;
+	if (vma) {
+		vms->mm = vma->vm_mm;
+		vms->start = start;
+		vms->end = end;
+	} else {
+		vms->mm = NULL;
+		vms->start = vms->end = 0;
+	}
 	vms->unlock = unlock;
 	vms->uf = uf;
 	vms->vma_count = 0;
-- 
2.43.0




* [PATCH v7 11/21] mm/mmap: Reposition vma iterator in mmap_region()
  2024-08-22 19:25 [PATCH v7 00/21] Avoid MAP_FIXED gap exposure Liam R. Howlett
                   ` (9 preceding siblings ...)
  2024-08-22 19:25 ` [PATCH v7 10/21] mm/vma: Support vma == NULL in init_vma_munmap() Liam R. Howlett
@ 2024-08-22 19:25 ` Liam R. Howlett
  2024-08-22 19:25 ` [PATCH v7 12/21] mm/vma: Track start and end for munmap in vma_munmap_struct Liam R. Howlett
                   ` (9 subsequent siblings)
  20 siblings, 0 replies; 32+ messages in thread
From: Liam R. Howlett @ 2024-08-22 19:25 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Suren Baghdasaryan, Lorenzo Stoakes,
	Matthew Wilcox, Vlastimil Babka, sidhartha.kumar, Bert Karwatzki,
	Jiri Olsa, Kees Cook, Paul E . McKenney, Liam R. Howlett

From: "Liam R. Howlett" <Liam.Howlett@Oracle.com>

Instead of moving (or leaving) the vma iterator pointing at the previous
vma, leave it pointing at the insert location.  Pointing the vma
iterator at the insert location allows for a cleaner walk of the vma
tree for MAP_FIXED and the no expansion cases.

The vma_prev() call in the case of merging the previous vma is
equivalent to vma_iter_prev_range(), since the vma iterator will be
pointing to the location just after the previous vma.

This change requires exporting abort_munmap_vmas() from mm/vma.

Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
---
 mm/mmap.c | 38 ++++++++++++++++++++++----------------
 mm/vma.c  | 16 ----------------
 mm/vma.h  | 16 ++++++++++++++++
 3 files changed, 38 insertions(+), 32 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 2b7445a002dc..9285bdf14c4f 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1400,21 +1400,22 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 		mas_init(&mas_detach, &mt_detach, /* addr = */ 0);
 		/* Prepare to unmap any existing mapping in the area */
 		if (vms_gather_munmap_vmas(&vms, &mas_detach))
-			return -ENOMEM;
+			goto gather_failed;
 
 		/* Remove any existing mappings from the vma tree */
 		if (vma_iter_clear_gfp(&vmi, addr, end, GFP_KERNEL))
-			return -ENOMEM;
+			goto clear_tree_failed;
 
 		/* Unmap any existing mapping in the area */
 		vms_complete_munmap_vmas(&vms, &mas_detach);
 		next = vms.next;
 		prev = vms.prev;
-		vma_prev(&vmi);
 		vma = NULL;
 	} else {
 		next = vma_next(&vmi);
 		prev = vma_prev(&vmi);
+		if (prev)
+			vma_iter_next_range(&vmi);
 	}
 
 	/*
@@ -1427,11 +1428,8 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 		vm_flags |= VM_ACCOUNT;
 	}
 
-	if (vm_flags & VM_SPECIAL) {
-		if (prev)
-			vma_iter_next_range(&vmi);
+	if (vm_flags & VM_SPECIAL)
 		goto cannot_expand;
-	}
 
 	/* Attempt to expand an old mapping */
 	/* Check next */
@@ -1452,19 +1450,21 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 		merge_start = prev->vm_start;
 		vma = prev;
 		vm_pgoff = prev->vm_pgoff;
-	} else if (prev) {
-		vma_iter_next_range(&vmi);
+		vma_prev(&vmi); /* Equivalent to going to the previous range */
 	}
 
-	/* Actually expand, if possible */
-	if (vma &&
-	    !vma_expand(&vmi, vma, merge_start, merge_end, vm_pgoff, next)) {
-		khugepaged_enter_vma(vma, vm_flags);
-		goto expanded;
+	if (vma) {
+		/* Actually expand, if possible */
+		if (!vma_expand(&vmi, vma, merge_start, merge_end, vm_pgoff, next)) {
+			khugepaged_enter_vma(vma, vm_flags);
+			goto expanded;
+		}
+
+		/* If the expand fails, then reposition the vma iterator */
+		if (unlikely(vma == prev))
+			vma_iter_set(&vmi, addr);
 	}
 
-	if (vma == prev)
-		vma_iter_set(&vmi, addr);
 cannot_expand:
 
 	/*
@@ -1625,6 +1625,12 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 		vm_unacct_memory(charged);
 	validate_mm(mm);
 	return error;
+
+clear_tree_failed:
+	abort_munmap_vmas(&mas_detach);
+gather_failed:
+	validate_mm(mm);
+	return -ENOMEM;
 }
 
 static int __vm_munmap(unsigned long start, size_t len, bool unlock)
diff --git a/mm/vma.c b/mm/vma.c
index 6b30f9748187..9de41e1bf3b2 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -646,22 +646,6 @@ void vma_complete(struct vma_prepare *vp,
 		uprobe_mmap(vp->insert);
 }
 
-/*
- * abort_munmap_vmas - Undo any munmap work and free resources
- *
- * Reattach any detached vmas and free up the maple tree used to track the vmas.
- */
-static inline void abort_munmap_vmas(struct ma_state *mas_detach)
-{
-	struct vm_area_struct *vma;
-
-	mas_set(mas_detach, 0);
-	mas_for_each(mas_detach, vma, ULONG_MAX)
-		vma_mark_detached(vma, false);
-
-	__mt_destroy(mas_detach->tree);
-}
-
 /*
  * vms_complete_munmap_vmas() - Finish the munmap() operation
  * @vms: The vma munmap struct
diff --git a/mm/vma.h b/mm/vma.h
index 0e214bbf443e..c85fc7c888a8 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -116,6 +116,22 @@ int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
 void vms_complete_munmap_vmas(struct vma_munmap_struct *vms,
 		struct ma_state *mas_detach);
 
+/*
+ * abort_munmap_vmas - Undo any munmap work and free resources
+ *
+ * Reattach any detached vmas and free up the maple tree used to track the vmas.
+ */
+static inline void abort_munmap_vmas(struct ma_state *mas_detach)
+{
+	struct vm_area_struct *vma;
+
+	mas_set(mas_detach, 0);
+	mas_for_each(mas_detach, vma, ULONG_MAX)
+		vma_mark_detached(vma, false);
+
+	__mt_destroy(mas_detach->tree);
+}
+
 int
 do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
 		    struct mm_struct *mm, unsigned long start,
-- 
2.43.0




* [PATCH v7 12/21] mm/vma: Track start and end for munmap in vma_munmap_struct
  2024-08-22 19:25 [PATCH v7 00/21] Avoid MAP_FIXED gap exposure Liam R. Howlett
                   ` (10 preceding siblings ...)
  2024-08-22 19:25 ` [PATCH v7 11/21] mm/mmap: Reposition vma iterator in mmap_region() Liam R. Howlett
@ 2024-08-22 19:25 ` Liam R. Howlett
  2024-08-26 14:01   ` Geert Uytterhoeven
  2024-08-22 19:25 ` [PATCH v7 13/21] mm: Clean up unmap_region() argument list Liam R. Howlett
                   ` (8 subsequent siblings)
  20 siblings, 1 reply; 32+ messages in thread
From: Liam R. Howlett @ 2024-08-22 19:25 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Suren Baghdasaryan, Lorenzo Stoakes,
	Matthew Wilcox, Vlastimil Babka, sidhartha.kumar, Bert Karwatzki,
	Jiri Olsa, Kees Cook, Paul E . McKenney, Liam R. Howlett

From: "Liam R. Howlett" <Liam.Howlett@Oracle.com>

Set the start and end address for munmap when the prev and next are
gathered.  This is needed to avoid incorrect addresses being used during
the vms_complete_munmap_vmas() function if the prev or next vma is
expanded.

Add a new helper vms_complete_pte_clear(), which is needed later and
will avoid growing the argument list to unmap_region() beyond the 9 it
already has.
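
Condensed from the hunks below, the page table free range defaults to
the full user range and is clamped by the neighbours when they exist
(a sketch with comments added):

	vms->unmap_start = FIRST_USER_ADDRESS;		/* default lower bound */
	vms->unmap_end = USER_PGTABLES_CEILING;		/* default upper bound */
	...
	if (vms->prev)
		vms->unmap_start = vms->prev->vm_end;
	...
	if (vms->next)
		vms->unmap_end = vms->next->vm_start;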

Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
---
 mm/vma.c | 32 +++++++++++++++++++++++++-------
 mm/vma.h |  4 ++++
 2 files changed, 29 insertions(+), 7 deletions(-)

diff --git a/mm/vma.c b/mm/vma.c
index 9de41e1bf3b2..dda0dae069e2 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -646,6 +646,26 @@ void vma_complete(struct vma_prepare *vp,
 		uprobe_mmap(vp->insert);
 }
 
+static void vms_complete_pte_clear(struct vma_munmap_struct *vms,
+		struct ma_state *mas_detach, bool mm_wr_locked)
+{
+	struct mmu_gather tlb;
+
+	/*
+	 * We can free page tables without write-locking mmap_lock because VMAs
+	 * were isolated before we downgraded mmap_lock.
+	 */
+	mas_set(mas_detach, 1);
+	lru_add_drain();
+	tlb_gather_mmu(&tlb, vms->mm);
+	update_hiwater_rss(vms->mm);
+	unmap_vmas(&tlb, mas_detach, vms->vma, vms->start, vms->end, vms->vma_count, mm_wr_locked);
+	mas_set(mas_detach, 1);
+	/* start and end may be different if there is no prev or next vma. */
+	free_pgtables(&tlb, mas_detach, vms->vma, vms->unmap_start, vms->unmap_end, mm_wr_locked);
+	tlb_finish_mmu(&tlb);
+}
+
 /*
  * vms_complete_munmap_vmas() - Finish the munmap() operation
  * @vms: The vma munmap struct
@@ -667,13 +687,7 @@ void vms_complete_munmap_vmas(struct vma_munmap_struct *vms,
 	if (vms->unlock)
 		mmap_write_downgrade(mm);
 
-	/*
-	 * We can free page tables without write-locking mmap_lock because VMAs
-	 * were isolated before we downgraded mmap_lock.
-	 */
-	mas_set(mas_detach, 1);
-	unmap_region(mm, mas_detach, vms->vma, vms->prev, vms->next,
-		     vms->start, vms->end, vms->vma_count, !vms->unlock);
+	vms_complete_pte_clear(vms, mas_detach, !vms->unlock);
 	/* Update high watermark before we lower total_vm */
 	update_hiwater_vm(mm);
 	/* Stat accounting */
@@ -746,6 +760,8 @@ int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
 			goto start_split_failed;
 	}
 	vms->prev = vma_prev(vms->vmi);
+	if (vms->prev)
+		vms->unmap_start = vms->prev->vm_end;
 
 	/*
 	 * Detach a range of VMAs from the mm. Using next as a temp variable as
@@ -810,6 +826,8 @@ int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
 	}
 
 	vms->next = vma_next(vms->vmi);
+	if (vms->next)
+		vms->unmap_end = vms->next->vm_start;
 
 #if defined(CONFIG_DEBUG_VM_MAPLE_TREE)
 	/* Make sure no VMAs are about to be lost. */
diff --git a/mm/vma.h b/mm/vma.h
index c85fc7c888a8..7bc0f9e7751b 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -38,6 +38,8 @@ struct vma_munmap_struct {
 	struct list_head *uf;           /* Userfaultfd list_head */
 	unsigned long start;            /* Aligned start addr (inclusive) */
 	unsigned long end;              /* Aligned end addr (exclusive) */
+	unsigned long unmap_start;      /* Unmap PTE start */
+	unsigned long unmap_end;        /* Unmap PTE end */
 	int vma_count;                  /* Number of vmas that will be removed */
 	unsigned long nr_pages;         /* Number of pages being removed */
 	unsigned long locked_vm;        /* Number of locked pages */
@@ -108,6 +110,8 @@ static inline void init_vma_munmap(struct vma_munmap_struct *vms,
 	vms->vma_count = 0;
 	vms->nr_pages = vms->locked_vm = vms->nr_accounted = 0;
 	vms->exec_vm = vms->stack_vm = vms->data_vm = 0;
+	vms->unmap_start = FIRST_USER_ADDRESS;
+	vms->unmap_end = USER_PGTABLES_CEILING;
 }
 
 int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
-- 
2.43.0




* [PATCH v7 13/21] mm: Clean up unmap_region() argument list
  2024-08-22 19:25 [PATCH v7 00/21] Avoid MAP_FIXED gap exposure Liam R. Howlett
                   ` (11 preceding siblings ...)
  2024-08-22 19:25 ` [PATCH v7 12/21] mm/vma: Track start and end for munmap in vma_munmap_struct Liam R. Howlett
@ 2024-08-22 19:25 ` Liam R. Howlett
  2024-08-22 19:25 ` [PATCH v7 14/21] mm/mmap: Avoid zeroing vma tree in mmap_region() Liam R. Howlett
                   ` (7 subsequent siblings)
  20 siblings, 0 replies; 32+ messages in thread
From: Liam R. Howlett @ 2024-08-22 19:25 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Suren Baghdasaryan, Lorenzo Stoakes,
	Matthew Wilcox, Vlastimil Babka, sidhartha.kumar, Bert Karwatzki,
	Jiri Olsa, Kees Cook, Paul E . McKenney, Liam R. Howlett

From: "Liam R. Howlett" <Liam.Howlett@Oracle.com>

With the only caller to unmap_region() being the error path of
mmap_region(), the argument list can be significantly reduced.
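
The remaining call site, taken from the hunk below, becomes:

	vma_iter_set(&vmi, vma->vm_end);
	/* Undo any partial mapping done by a device driver. */
	unmap_region(&vmi.mas, vma, prev, next);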

Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
---
 mm/mmap.c |  3 +--
 mm/vma.c  | 17 ++++++++---------
 mm/vma.h  |  6 ++----
 3 files changed, 11 insertions(+), 15 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 9285bdf14c4f..71b2bad717b6 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1613,8 +1613,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 
 		vma_iter_set(&vmi, vma->vm_end);
 		/* Undo any partial mapping done by a device driver. */
-		unmap_region(mm, &vmi.mas, vma, prev, next, vma->vm_start,
-			     vma->vm_end, vma->vm_end, true);
+		unmap_region(&vmi.mas, vma, prev, next);
 	}
 	if (writable_file_mapping)
 		mapping_unmap_writable(file->f_mapping);
diff --git a/mm/vma.c b/mm/vma.c
index dda0dae069e2..9e11892b0a2f 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -155,22 +155,21 @@ void remove_vma(struct vm_area_struct *vma, bool unreachable)
  *
  * Called with the mm semaphore held.
  */
-void unmap_region(struct mm_struct *mm, struct ma_state *mas,
-		struct vm_area_struct *vma, struct vm_area_struct *prev,
-		struct vm_area_struct *next, unsigned long start,
-		unsigned long end, unsigned long tree_end, bool mm_wr_locked)
+void unmap_region(struct ma_state *mas, struct vm_area_struct *vma,
+		struct vm_area_struct *prev, struct vm_area_struct *next)
 {
+	struct mm_struct *mm = vma->vm_mm;
 	struct mmu_gather tlb;
-	unsigned long mt_start = mas->index;
 
 	lru_add_drain();
 	tlb_gather_mmu(&tlb, mm);
 	update_hiwater_rss(mm);
-	unmap_vmas(&tlb, mas, vma, start, end, tree_end, mm_wr_locked);
-	mas_set(mas, mt_start);
+	unmap_vmas(&tlb, mas, vma, vma->vm_start, vma->vm_end, vma->vm_end,
+		   /* mm_wr_locked = */ true);
+	mas_set(mas, vma->vm_end);
 	free_pgtables(&tlb, mas, vma, prev ? prev->vm_end : FIRST_USER_ADDRESS,
-				 next ? next->vm_start : USER_PGTABLES_CEILING,
-				 mm_wr_locked);
+		      next ? next->vm_start : USER_PGTABLES_CEILING,
+		      /* mm_wr_locked = */ true);
 	tlb_finish_mmu(&tlb);
 }
 
diff --git a/mm/vma.h b/mm/vma.h
index 7bc0f9e7751b..6028fdf79257 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -147,10 +147,8 @@ int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm,
 
 void remove_vma(struct vm_area_struct *vma, bool unreachable);
 
-void unmap_region(struct mm_struct *mm, struct ma_state *mas,
-		struct vm_area_struct *vma, struct vm_area_struct *prev,
-		struct vm_area_struct *next, unsigned long start,
-		unsigned long end, unsigned long tree_end, bool mm_wr_locked);
+void unmap_region(struct ma_state *mas, struct vm_area_struct *vma,
+		struct vm_area_struct *prev, struct vm_area_struct *next);
 
 /* Required by mmap_region(). */
 bool
-- 
2.43.0




* [PATCH v7 14/21] mm/mmap: Avoid zeroing vma tree in mmap_region()
  2024-08-22 19:25 [PATCH v7 00/21] Avoid MAP_FIXED gap exposure Liam R. Howlett
                   ` (12 preceding siblings ...)
  2024-08-22 19:25 ` [PATCH v7 13/21] mm: Clean up unmap_region() argument list Liam R. Howlett
@ 2024-08-22 19:25 ` Liam R. Howlett
  2024-08-22 19:25 ` [PATCH v7 15/21] mm: Change failure of MAP_FIXED to restoring the gap on failure Liam R. Howlett
                   ` (6 subsequent siblings)
  20 siblings, 0 replies; 32+ messages in thread
From: Liam R. Howlett @ 2024-08-22 19:25 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Suren Baghdasaryan, Lorenzo Stoakes,
	Matthew Wilcox, Vlastimil Babka, sidhartha.kumar, Bert Karwatzki,
	Jiri Olsa, Kees Cook, Paul E . McKenney, Liam R. Howlett

From: "Liam R. Howlett" <Liam.Howlett@Oracle.com>

Instead of zeroing the vma tree and then overwriting the area, let the
area be overwritten and then clean up the gathered vmas using
vms_complete_munmap_vmas().

To ensure locking is downgraded correctly, the mm is set in the vms
regardless of MAP_FIXED or not (that is, even when vma is NULL).

If a driver is mapping over an existing vma, then clear the ptes before
the call_mmap() invocation.  This is done using the vms_clean_up_area()
helper.  If there is a close vm_ops, that must also be called to ensure
any cleanup is done before mapping over the area.  This also means that
a call to vm_ops->open() has been added to the abort path of an unmap
operation, for now.

Since vm_ops->open() and vm_ops->close() do not always undo each other
(state cleanup may exist in ->close() that is lost forever), the code
cannot be left in this way, but that change has been isolated to another
commit to make this point very obvious for traceability.

Temporarily keep track of the number of pages that will be removed and
reduce the charged amount.

This also drops the validate_mm() call in the vma_expand() function.
It is necessary to drop the validate as it would fail since the mm
map_count would be incorrect during a vma expansion, prior to the
cleanup from vms_complete_munmap_vmas().

Clean up the error handling of vms_gather_munmap_vmas() by calling the
verification within the function.
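
A condensed sketch of the resulting mmap_region() ordering for the
MAP_FIXED case (error paths elided, comments added):

	vms_gather_munmap_vmas(&vms, &mas_detach);	/* prepare, commit nothing */
	/* ... the new vma is built over the old range ... */
	vms_clean_up_area(&vms, &mas_detach, true);	/* clear ptes, ->close() */
	error = call_mmap(file, vma);
	/* ... */
	vms_complete_munmap_vmas(&vms, &mas_detach);	/* free the old vmas */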

Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
---
 mm/mmap.c | 62 ++++++++++++++++++++++++++-----------------------------
 mm/vma.c  | 54 +++++++++++++++++++++++++++++++++++++-----------
 mm/vma.h  | 22 ++++++++++++++------
 3 files changed, 87 insertions(+), 51 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 71b2bad717b6..6550d9470d3a 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1373,23 +1373,19 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 	unsigned long merge_start = addr, merge_end = end;
 	bool writable_file_mapping = false;
 	pgoff_t vm_pgoff;
-	int error;
+	int error = -ENOMEM;
 	VMA_ITERATOR(vmi, mm, addr);
+	unsigned long nr_pages, nr_accounted;
 
-	/* Check against address space limit. */
-	if (!may_expand_vm(mm, vm_flags, len >> PAGE_SHIFT)) {
-		unsigned long nr_pages;
+	nr_pages = count_vma_pages_range(mm, addr, end, &nr_accounted);
 
-		/*
-		 * MAP_FIXED may remove pages of mappings that intersects with
-		 * requested mapping. Account for the pages it would unmap.
-		 */
-		nr_pages = count_vma_pages_range(mm, addr, end);
-
-		if (!may_expand_vm(mm, vm_flags,
-					(len >> PAGE_SHIFT) - nr_pages))
-			return -ENOMEM;
-	}
+	/*
+	 * Check against address space limit.
+	 * MAP_FIXED may remove pages of mappings that intersects with requested
+	 * mapping. Account for the pages it would unmap.
+	 */
+	if (!may_expand_vm(mm, vm_flags, (len >> PAGE_SHIFT) - nr_pages))
+		return -ENOMEM;
 
 	/* Find the first overlapping VMA */
 	vma = vma_find(&vmi, end);
@@ -1400,14 +1396,8 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 		mas_init(&mas_detach, &mt_detach, /* addr = */ 0);
 		/* Prepare to unmap any existing mapping in the area */
 		if (vms_gather_munmap_vmas(&vms, &mas_detach))
-			goto gather_failed;
-
-		/* Remove any existing mappings from the vma tree */
-		if (vma_iter_clear_gfp(&vmi, addr, end, GFP_KERNEL))
-			goto clear_tree_failed;
+			return -ENOMEM;
 
-		/* Unmap any existing mapping in the area */
-		vms_complete_munmap_vmas(&vms, &mas_detach);
 		next = vms.next;
 		prev = vms.prev;
 		vma = NULL;
@@ -1423,8 +1413,10 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 	 */
 	if (accountable_mapping(file, vm_flags)) {
 		charged = len >> PAGE_SHIFT;
+		charged -= nr_accounted;
 		if (security_vm_enough_memory_mm(mm, charged))
-			return -ENOMEM;
+			goto abort_munmap;
+		vms.nr_accounted = 0;
 		vm_flags |= VM_ACCOUNT;
 	}
 
@@ -1473,10 +1465,8 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 	 * not unmapped, but the maps are removed from the list.
 	 */
 	vma = vm_area_alloc(mm);
-	if (!vma) {
-		error = -ENOMEM;
+	if (!vma)
 		goto unacct_error;
-	}
 
 	vma_iter_config(&vmi, addr, end);
 	vma_set_range(vma, addr, end, pgoff);
@@ -1485,6 +1475,11 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 
 	if (file) {
 		vma->vm_file = get_file(file);
+		/*
+		 * call_mmap() may map PTE, so ensure there are no existing PTEs
+		 * call the vm_ops close function if one exists.
+		 */
+		vms_clean_up_area(&vms, &mas_detach, true);
 		error = call_mmap(file, vma);
 		if (error)
 			goto unmap_and_free_vma;
@@ -1575,6 +1570,9 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 expanded:
 	perf_event_mmap(vma);
 
+	/* Unmap any existing mapping in the area */
+	vms_complete_munmap_vmas(&vms, &mas_detach);
+
 	vm_stat_account(mm, vm_flags, len >> PAGE_SHIFT);
 	if (vm_flags & VM_LOCKED) {
 		if ((vm_flags & VM_SPECIAL) || vma_is_dax(vma) ||
@@ -1603,7 +1601,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 	return addr;
 
 close_and_free_vma:
-	if (file && vma->vm_ops && vma->vm_ops->close)
+	if (file && !vms.closed_vm_ops && vma->vm_ops && vma->vm_ops->close)
 		vma->vm_ops->close(vma);
 
 	if (file || vma->vm_file) {
@@ -1622,14 +1620,12 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 unacct_error:
 	if (charged)
 		vm_unacct_memory(charged);
-	validate_mm(mm);
-	return error;
 
-clear_tree_failed:
-	abort_munmap_vmas(&mas_detach);
-gather_failed:
+abort_munmap:
+	if (vms.nr_pages)
+		abort_munmap_vmas(&mas_detach, vms.closed_vm_ops);
 	validate_mm(mm);
-	return -ENOMEM;
+	return error;
 }
 
 static int __vm_munmap(unsigned long start, size_t len, bool unlock)
@@ -1959,7 +1955,7 @@ void exit_mmap(struct mm_struct *mm)
 	do {
 		if (vma->vm_flags & VM_ACCOUNT)
 			nr_accounted += vma_pages(vma);
-		remove_vma(vma, true);
+		remove_vma(vma, /* unreachable = */ true, /* closed = */ false);
 		count++;
 		cond_resched();
 		vma = vma_next(&vmi);
diff --git a/mm/vma.c b/mm/vma.c
index 9e11892b0a2f..3715c5c17ab3 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -136,10 +136,10 @@ can_vma_merge_after(struct vm_area_struct *vma, unsigned long vm_flags,
 /*
  * Close a vm structure and free it.
  */
-void remove_vma(struct vm_area_struct *vma, bool unreachable)
+void remove_vma(struct vm_area_struct *vma, bool unreachable, bool closed)
 {
 	might_sleep();
-	if (vma->vm_ops && vma->vm_ops->close)
+	if (!closed && vma->vm_ops && vma->vm_ops->close)
 		vma->vm_ops->close(vma);
 	if (vma->vm_file)
 		fput(vma->vm_file);
@@ -521,7 +521,6 @@ int vma_expand(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	vma_iter_store(vmi, vma);
 
 	vma_complete(&vp, vmi, vma->vm_mm);
-	validate_mm(vma->vm_mm);
 	return 0;
 
 nomem:
@@ -645,11 +644,14 @@ void vma_complete(struct vma_prepare *vp,
 		uprobe_mmap(vp->insert);
 }
 
-static void vms_complete_pte_clear(struct vma_munmap_struct *vms,
-		struct ma_state *mas_detach, bool mm_wr_locked)
+static inline void vms_clear_ptes(struct vma_munmap_struct *vms,
+		    struct ma_state *mas_detach, bool mm_wr_locked)
 {
 	struct mmu_gather tlb;
 
+	if (!vms->clear_ptes) /* Nothing to do */
+		return;
+
 	/*
 	 * We can free page tables without write-locking mmap_lock because VMAs
 	 * were isolated before we downgraded mmap_lock.
@@ -658,11 +660,31 @@ static void vms_complete_pte_clear(struct vma_munmap_struct *vms,
 	lru_add_drain();
 	tlb_gather_mmu(&tlb, vms->mm);
 	update_hiwater_rss(vms->mm);
-	unmap_vmas(&tlb, mas_detach, vms->vma, vms->start, vms->end, vms->vma_count, mm_wr_locked);
+	unmap_vmas(&tlb, mas_detach, vms->vma, vms->start, vms->end,
+		   vms->vma_count, mm_wr_locked);
+
 	mas_set(mas_detach, 1);
 	/* start and end may be different if there is no prev or next vma. */
-	free_pgtables(&tlb, mas_detach, vms->vma, vms->unmap_start, vms->unmap_end, mm_wr_locked);
+	free_pgtables(&tlb, mas_detach, vms->vma, vms->unmap_start,
+		      vms->unmap_end, mm_wr_locked);
 	tlb_finish_mmu(&tlb);
+	vms->clear_ptes = false;
+}
+
+void vms_clean_up_area(struct vma_munmap_struct *vms,
+		struct ma_state *mas_detach, bool mm_wr_locked)
+{
+	struct vm_area_struct *vma;
+
+	if (!vms->nr_pages)
+		return;
+
+	vms_clear_ptes(vms, mas_detach, mm_wr_locked);
+	mas_set(mas_detach, 0);
+	mas_for_each(mas_detach, vma, ULONG_MAX)
+		if (vma->vm_ops && vma->vm_ops->close)
+			vma->vm_ops->close(vma);
+	vms->closed_vm_ops = true;
 }
 
 /*
@@ -686,7 +708,10 @@ void vms_complete_munmap_vmas(struct vma_munmap_struct *vms,
 	if (vms->unlock)
 		mmap_write_downgrade(mm);
 
-	vms_complete_pte_clear(vms, mas_detach, !vms->unlock);
+	if (!vms->nr_pages)
+		return;
+
+	vms_clear_ptes(vms, mas_detach, !vms->unlock);
 	/* Update high watermark before we lower total_vm */
 	update_hiwater_vm(mm);
 	/* Stat accounting */
@@ -702,7 +727,7 @@ void vms_complete_munmap_vmas(struct vma_munmap_struct *vms,
 	/* Remove and clean up vmas */
 	mas_set(mas_detach, 0);
 	mas_for_each(mas_detach, vma, ULONG_MAX)
-		remove_vma(vma, false);
+		remove_vma(vma, /* = */ false, vms->closed_vm_ops);
 
 	vm_unacct_memory(vms->nr_accounted);
 	validate_mm(mm);
@@ -851,13 +876,14 @@ int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
 	while (vma_iter_addr(vms->vmi) > vms->start)
 		vma_iter_prev_range(vms->vmi);
 
+	vms->clear_ptes = true;
 	return 0;
 
 userfaultfd_error:
 munmap_gather_failed:
 end_split_failed:
 modify_vma_failed:
-	abort_munmap_vmas(mas_detach);
+	abort_munmap_vmas(mas_detach, /* closed = */ false);
 start_split_failed:
 map_count_exceeded:
 	return error;
@@ -902,7 +928,7 @@ int do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	return 0;
 
 clear_tree_failed:
-	abort_munmap_vmas(&mas_detach);
+	abort_munmap_vmas(&mas_detach, /* closed = */ false);
 gather_failed:
 	validate_mm(mm);
 	return error;
@@ -1620,17 +1646,21 @@ bool vma_wants_writenotify(struct vm_area_struct *vma, pgprot_t vm_page_prot)
 }
 
 unsigned long count_vma_pages_range(struct mm_struct *mm,
-				    unsigned long addr, unsigned long end)
+		unsigned long addr, unsigned long end,
+		unsigned long *nr_accounted)
 {
 	VMA_ITERATOR(vmi, mm, addr);
 	struct vm_area_struct *vma;
 	unsigned long nr_pages = 0;
 
+	*nr_accounted = 0;
 	for_each_vma_range(vmi, vma, end) {
 		unsigned long vm_start = max(addr, vma->vm_start);
 		unsigned long vm_end = min(end, vma->vm_end);
 
 		nr_pages += PHYS_PFN(vm_end - vm_start);
+		if (vma->vm_flags & VM_ACCOUNT)
+			*nr_accounted += PHYS_PFN(vm_end - vm_start);
 	}
 
 	return nr_pages;
diff --git a/mm/vma.h b/mm/vma.h
index 6028fdf79257..756dd42a6ec4 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -48,6 +48,8 @@ struct vma_munmap_struct {
 	unsigned long stack_vm;
 	unsigned long data_vm;
 	bool unlock;                    /* Unlock after the munmap */
+	bool clear_ptes;                /* If there are outstanding PTE to be cleared */
+	bool closed_vm_ops;		/* call_mmap() was encountered, so vmas may be closed */
 };
 
 #ifdef CONFIG_DEBUG_VM_MAPLE_TREE
@@ -95,14 +97,13 @@ static inline void init_vma_munmap(struct vma_munmap_struct *vms,
 		unsigned long start, unsigned long end, struct list_head *uf,
 		bool unlock)
 {
+	vms->mm = current->mm;
 	vms->vmi = vmi;
 	vms->vma = vma;
 	if (vma) {
-		vms->mm = vma->vm_mm;
 		vms->start = start;
 		vms->end = end;
 	} else {
-		vms->mm = NULL;
 		vms->start = vms->end = 0;
 	}
 	vms->unlock = unlock;
@@ -112,6 +113,8 @@ static inline void init_vma_munmap(struct vma_munmap_struct *vms,
 	vms->exec_vm = vms->stack_vm = vms->data_vm = 0;
 	vms->unmap_start = FIRST_USER_ADDRESS;
 	vms->unmap_end = USER_PGTABLES_CEILING;
+	vms->clear_ptes = false;
+	vms->closed_vm_ops = false;
 }
 
 int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
@@ -120,18 +123,24 @@ int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
 void vms_complete_munmap_vmas(struct vma_munmap_struct *vms,
 		struct ma_state *mas_detach);
 
+void vms_clean_up_area(struct vma_munmap_struct *vms,
+		struct ma_state *mas_detach, bool mm_wr_locked);
+
 /*
  * abort_munmap_vmas - Undo any munmap work and free resources
  *
  * Reattach any detached vmas and free up the maple tree used to track the vmas.
  */
-static inline void abort_munmap_vmas(struct ma_state *mas_detach)
+static inline void abort_munmap_vmas(struct ma_state *mas_detach, bool closed)
 {
 	struct vm_area_struct *vma;
 
 	mas_set(mas_detach, 0);
-	mas_for_each(mas_detach, vma, ULONG_MAX)
+	mas_for_each(mas_detach, vma, ULONG_MAX) {
 		vma_mark_detached(vma, false);
+		if (closed && vma->vm_ops && vma->vm_ops->open)
+			vma->vm_ops->open(vma);
+	}
 
 	__mt_destroy(mas_detach->tree);
 }
@@ -145,7 +154,7 @@ int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm,
 		  unsigned long start, size_t len, struct list_head *uf,
 		  bool unlock);
 
-void remove_vma(struct vm_area_struct *vma, bool unreachable);
+void remove_vma(struct vm_area_struct *vma, bool unreachable, bool closed);
 
 void unmap_region(struct ma_state *mas, struct vm_area_struct *vma,
 		struct vm_area_struct *prev, struct vm_area_struct *next);
@@ -259,7 +268,8 @@ bool vma_wants_writenotify(struct vm_area_struct *vma, pgprot_t vm_page_prot);
 int mm_take_all_locks(struct mm_struct *mm);
 void mm_drop_all_locks(struct mm_struct *mm);
 unsigned long count_vma_pages_range(struct mm_struct *mm,
-				    unsigned long addr, unsigned long end);
+				    unsigned long addr, unsigned long end,
+				    unsigned long *nr_accounted);
 
 static inline bool vma_wants_manual_pte_write_upgrade(struct vm_area_struct *vma)
 {
-- 
2.43.0




* [PATCH v7 15/21] mm: Change failure of MAP_FIXED to restoring the gap on failure
  2024-08-22 19:25 [PATCH v7 00/21] Avoid MAP_FIXED gap exposure Liam R. Howlett
                   ` (13 preceding siblings ...)
  2024-08-22 19:25 ` [PATCH v7 14/21] mm/mmap: Avoid zeroing vma tree in mmap_region() Liam R. Howlett
@ 2024-08-22 19:25 ` Liam R. Howlett
  2024-08-27 17:15   ` [PATCH] mm/vma: Fix null pointer dereference in vms_abort_munmap_vmas() Liam R. Howlett
  2024-08-22 19:25 ` [PATCH v7 16/21] mm/mmap: Use PHYS_PFN in mmap_region() Liam R. Howlett
                   ` (5 subsequent siblings)
  20 siblings, 1 reply; 32+ messages in thread
From: Liam R. Howlett @ 2024-08-22 19:25 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Suren Baghdasaryan, Lorenzo Stoakes,
	Matthew Wilcox, Vlastimil Babka, sidhartha.kumar, Bert Karwatzki,
	Jiri Olsa, Kees Cook, Paul E . McKenney, Liam R. Howlett

From: "Liam R. Howlett" <Liam.Howlett@Oracle.com>

Prior to call_mmap(), the vmas that will be replaced need to clear the
way for what may happen during call_mmap().  This cleanup work includes
clearing the ptes and calling the close() vm_ops.  Some users do more
setup than can be restored by calling the vm_ops open() function.  It is
safer to store the gap in the vma tree in these cases.

That is to say that the failure scenario that existed before the
MAP_FIXED gap exposure is restored as it is safer than trying to undo a
partial mapping.

Since abort_munmap_vmas() is only reattaching vmas with this change, the
function is renamed to reattach_vmas().

There is also a secondary failure that may occur if there is not enough
memory to store the gap.  In this case, the vmas are reattached and
resources freed.  If the system cannot complete the call_mmap() and
fails to allocate with GFP_KERNEL, then the system will print a warning
about the failure.
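
The decision in vms_abort_munmap_vmas() (shown in full in the diff
below) can be summarised as:

	if (vms->clear_ptes)		/* nothing destructive has happened yet */
		return reattach_vmas(mas_detach);
	/* ptes cleared and ->close() possibly called: store a NULL gap;
	 * if even that allocation fails, warn and reattach anyway. */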

Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
---
 mm/mmap.c |  3 +--
 mm/vma.c  |  4 +--
 mm/vma.h  | 78 +++++++++++++++++++++++++++++++++++++++----------------
 3 files changed, 59 insertions(+), 26 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 6550d9470d3a..217da37ef71d 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1622,8 +1622,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 		vm_unacct_memory(charged);
 
 abort_munmap:
-	if (vms.nr_pages)
-		abort_munmap_vmas(&mas_detach, vms.closed_vm_ops);
+	vms_abort_munmap_vmas(&vms, &mas_detach);
 	validate_mm(mm);
 	return error;
 }
diff --git a/mm/vma.c b/mm/vma.c
index 3715c5c17ab3..8dc60dcb6e8d 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -883,7 +883,7 @@ int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
 munmap_gather_failed:
 end_split_failed:
 modify_vma_failed:
-	abort_munmap_vmas(mas_detach, /* closed = */ false);
+	reattach_vmas(mas_detach);
 start_split_failed:
 map_count_exceeded:
 	return error;
@@ -928,7 +928,7 @@ int do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	return 0;
 
 clear_tree_failed:
-	abort_munmap_vmas(&mas_detach, /* closed = */ false);
+	reattach_vmas(&mas_detach);
 gather_failed:
 	validate_mm(mm);
 	return error;
diff --git a/mm/vma.h b/mm/vma.h
index 756dd42a6ec4..f710812482a1 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -82,6 +82,22 @@ int vma_expand(struct vma_iterator *vmi, struct vm_area_struct *vma,
 int vma_shrink(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	       unsigned long start, unsigned long end, pgoff_t pgoff);
 
+static inline int vma_iter_store_gfp(struct vma_iterator *vmi,
+			struct vm_area_struct *vma, gfp_t gfp)
+
+{
+	if (vmi->mas.status != ma_start &&
+	    ((vmi->mas.index > vma->vm_start) || (vmi->mas.last < vma->vm_start)))
+		vma_iter_invalidate(vmi);
+
+	__mas_set_range(&vmi->mas, vma->vm_start, vma->vm_end - 1);
+	mas_store_gfp(&vmi->mas, vma, gfp);
+	if (unlikely(mas_is_err(&vmi->mas)))
+		return -ENOMEM;
+
+	return 0;
+}
+
 /*
  * init_vma_munmap() - Initializer wrapper for vma_munmap_struct
  * @vms: The vma munmap struct
@@ -127,24 +143,58 @@ void vms_clean_up_area(struct vma_munmap_struct *vms,
 		struct ma_state *mas_detach, bool mm_wr_locked);
 
 /*
- * abort_munmap_vmas - Undo any munmap work and free resources
+ * reattach_vmas() - Undo any munmap work and free resources
+ * @mas_detach: The maple state with the detached maple tree
  *
  * Reattach any detached vmas and free up the maple tree used to track the vmas.
  */
-static inline void abort_munmap_vmas(struct ma_state *mas_detach, bool closed)
+static inline void reattach_vmas(struct ma_state *mas_detach)
 {
 	struct vm_area_struct *vma;
 
 	mas_set(mas_detach, 0);
-	mas_for_each(mas_detach, vma, ULONG_MAX) {
+	mas_for_each(mas_detach, vma, ULONG_MAX)
 		vma_mark_detached(vma, false);
-		if (closed && vma->vm_ops && vma->vm_ops->open)
-			vma->vm_ops->open(vma);
-	}
 
 	__mt_destroy(mas_detach->tree);
 }
 
+/*
+ * vms_abort_munmap_vmas() - Undo as much as possible from an aborted munmap()
+ * operation.
+ * @vms: The vma unmap structure
+ * @mas_detach: The maple state with the detached maple tree
+ *
+ * Reattach any detached vmas, free up the maple tree used to track the vmas.
+ * If that's not possible because the ptes are cleared (and vm_ops->close() may
+ * have been called), then a NULL is written over the vmas and the vmas are
+ * removed (munmap() completed).
+ */
+static inline void vms_abort_munmap_vmas(struct vma_munmap_struct *vms,
+		struct ma_state *mas_detach)
+{
+	if (!vms->nr_pages)
+		return;
+
+	if (vms->clear_ptes)
+		return reattach_vmas(mas_detach);
+
+	/*
+	 * Aborting cannot just call the vm_ops open() because they are often
+	 * not symmetrical and state data has been lost.  Resort to the old
+	 * failure method of leaving a gap where the MAP_FIXED mapping failed.
+	 */
+	if (unlikely(vma_iter_store_gfp(vms->vmi, NULL, GFP_KERNEL))) {
+		pr_warn_once("%s: (%d) Unable to abort munmap() operation\n",
+			     current->comm, current->pid);
+		/* Leaving vmas detached and in-tree may hamper recovery */
+		reattach_vmas(mas_detach);
+	} else {
+		/* Clean up the insertion of the unfortunate gap */
+		vms_complete_munmap_vmas(vms, mas_detach);
+	}
+}
+
 int
 do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
 		    struct mm_struct *mm, unsigned long start,
@@ -297,22 +347,6 @@ static inline struct vm_area_struct *vma_prev_limit(struct vma_iterator *vmi,
 	return mas_prev(&vmi->mas, min);
 }
 
-static inline int vma_iter_store_gfp(struct vma_iterator *vmi,
-			struct vm_area_struct *vma, gfp_t gfp)
-{
-	if (vmi->mas.status != ma_start &&
-	    ((vmi->mas.index > vma->vm_start) || (vmi->mas.last < vma->vm_start)))
-		vma_iter_invalidate(vmi);
-
-	__mas_set_range(&vmi->mas, vma->vm_start, vma->vm_end - 1);
-	mas_store_gfp(&vmi->mas, vma, gfp);
-	if (unlikely(mas_is_err(&vmi->mas)))
-		return -ENOMEM;
-
-	return 0;
-}
-
-
 /*
  * These three helpers classifies VMAs for virtual memory accounting.
  */
-- 
2.43.0




* [PATCH v7 16/21] mm/mmap: Use PHYS_PFN in mmap_region()
  2024-08-22 19:25 [PATCH v7 00/21] Avoid MAP_FIXED gap exposure Liam R. Howlett
                   ` (14 preceding siblings ...)
  2024-08-22 19:25 ` [PATCH v7 15/21] mm: Change failure of MAP_FIXED to restoring the gap on failure Liam R. Howlett
@ 2024-08-22 19:25 ` Liam R. Howlett
  2024-08-22 19:25 ` [PATCH v7 17/21] mm/mmap: Use vms accounted pages " Liam R. Howlett
                   ` (4 subsequent siblings)
  20 siblings, 0 replies; 32+ messages in thread
From: Liam R. Howlett @ 2024-08-22 19:25 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Suren Baghdasaryan, Lorenzo Stoakes,
	Matthew Wilcox, Vlastimil Babka, sidhartha.kumar, Bert Karwatzki,
	Jiri Olsa, Kees Cook, Paul E . McKenney, Liam R. Howlett

From: "Liam R. Howlett" <Liam.Howlett@Oracle.com>

Instead of shifting the length by PAGE_SHIFT, use PHYS_PFN.  Also use
the existing local variable everywhere instead of only some of the time.
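
For reference, PHYS_PFN() is defined in include/linux/pfn.h as a plain
shift, so the two forms are equivalent:

	#define PHYS_PFN(x)	((unsigned long)((x) >> PAGE_SHIFT))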

Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Suren Baghdasaryan <surenb@google.com>
---
 mm/mmap.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 217da37ef71d..f8515126e435 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1364,7 +1364,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma = NULL;
 	struct vm_area_struct *next, *prev, *merge;
-	pgoff_t pglen = len >> PAGE_SHIFT;
+	pgoff_t pglen = PHYS_PFN(len);
 	unsigned long charged = 0;
 	struct vma_munmap_struct vms;
 	struct ma_state mas_detach;
@@ -1384,7 +1384,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 	 * MAP_FIXED may remove pages of mappings that intersects with requested
 	 * mapping. Account for the pages it would unmap.
 	 */
-	if (!may_expand_vm(mm, vm_flags, (len >> PAGE_SHIFT) - nr_pages))
+	if (!may_expand_vm(mm, vm_flags, pglen - nr_pages))
 		return -ENOMEM;
 
 	/* Find the first overlapping VMA */
@@ -1412,7 +1412,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 	 * Private writable mapping: check memory availability
 	 */
 	if (accountable_mapping(file, vm_flags)) {
-		charged = len >> PAGE_SHIFT;
+		charged = pglen;
 		charged -= nr_accounted;
 		if (security_vm_enough_memory_mm(mm, charged))
 			goto abort_munmap;
@@ -1573,14 +1573,14 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 	/* Unmap any existing mapping in the area */
 	vms_complete_munmap_vmas(&vms, &mas_detach);
 
-	vm_stat_account(mm, vm_flags, len >> PAGE_SHIFT);
+	vm_stat_account(mm, vm_flags, pglen);
 	if (vm_flags & VM_LOCKED) {
 		if ((vm_flags & VM_SPECIAL) || vma_is_dax(vma) ||
 					is_vm_hugetlb_page(vma) ||
 					vma == get_gate_vma(current->mm))
 			vm_flags_clear(vma, VM_LOCKED_MASK);
 		else
-			mm->locked_vm += (len >> PAGE_SHIFT);
+			mm->locked_vm += pglen;
 	}
 
 	if (file)
-- 
2.43.0




* [PATCH v7 17/21] mm/mmap: Use vms accounted pages in mmap_region()
  2024-08-22 19:25 [PATCH v7 00/21] Avoid MAP_FIXED gap exposure Liam R. Howlett
                   ` (15 preceding siblings ...)
  2024-08-22 19:25 ` [PATCH v7 16/21] mm/mmap: Use PHYS_PFN in mmap_region() Liam R. Howlett
@ 2024-08-22 19:25 ` Liam R. Howlett
  2024-08-22 19:25 ` [PATCH v7 18/21] ipc/shm, mm: Drop do_vma_munmap() Liam R. Howlett
                   ` (3 subsequent siblings)
  20 siblings, 0 replies; 32+ messages in thread
From: Liam R. Howlett @ 2024-08-22 19:25 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Suren Baghdasaryan, Lorenzo Stoakes,
	Matthew Wilcox, Vlastimil Babka, sidhartha.kumar, Bert Karwatzki,
	Jiri Olsa, Kees Cook, Paul E . McKenney, Liam R. Howlett,
	linux-security-module, Paul Moore

From: "Liam R. Howlett" <Liam.Howlett@Oracle.com>

Change from nr_pages variable to vms.nr_accounted for the charged pages
calculation.  This is necessary for a future patch.

This also avoids checking security_vm_enough_memory_mm() if the amount
of memory won't change.
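
As a worked example, a MAP_FIXED mapping of 16 pages placed over 16
already-accounted pages yields charged = pglen - vms.nr_accounted = 0,
so security_vm_enough_memory_mm() is not called at all rather than
being asked to account zero pages.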

Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Cc: Kees Cook <kees@kernel.org>
Cc: linux-security-module@vger.kernel.org
Reviewed-by: Kees Cook <kees@kernel.org>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Suren Baghdasaryan <surenb@google.com>
Acked-by: Paul Moore <paul@paul-moore.com> (LSM)
---
 mm/mmap.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index f8515126e435..aa4aa49f3b97 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1413,9 +1413,10 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 	 */
 	if (accountable_mapping(file, vm_flags)) {
 		charged = pglen;
-		charged -= nr_accounted;
-		if (security_vm_enough_memory_mm(mm, charged))
+		charged -= vms.nr_accounted;
+		if (charged && security_vm_enough_memory_mm(mm, charged))
 			goto abort_munmap;
+
 		vms.nr_accounted = 0;
 		vm_flags |= VM_ACCOUNT;
 	}
-- 
2.43.0




* [PATCH v7 18/21] ipc/shm, mm: Drop do_vma_munmap()
  2024-08-22 19:25 [PATCH v7 00/21] Avoid MAP_FIXED gap exposure Liam R. Howlett
                   ` (16 preceding siblings ...)
  2024-08-22 19:25 ` [PATCH v7 17/21] mm/mmap: Use vms accounted pages " Liam R. Howlett
@ 2024-08-22 19:25 ` Liam R. Howlett
  2024-08-22 19:25 ` [PATCH v7 19/21] mm: Move may_expand_vm() check in mmap_region() Liam R. Howlett
                   ` (2 subsequent siblings)
  20 siblings, 0 replies; 32+ messages in thread
From: Liam R. Howlett @ 2024-08-22 19:25 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Suren Baghdasaryan, Lorenzo Stoakes,
	Matthew Wilcox, Vlastimil Babka, sidhartha.kumar, Bert Karwatzki,
	Jiri Olsa, Kees Cook, Paul E . McKenney, Liam R. Howlett

From: "Liam R. Howlett" <Liam.Howlett@Oracle.com>

The do_vma_munmap() wrapper existed for callers that didn't have a vma
iterator and needed to check the vma mseal status prior to calling the
underlying munmap().  All callers now use a vma iterator and since the
mseal check has been moved to do_vmi_align_munmap() and the vmas are
aligned, this function can just be called instead.

do_vmi_align_munmap() can no longer be static as ipc/shm is using it and
it is exported via the mm.h header.

Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
---
 include/linux/mm.h |  6 +++---
 ipc/shm.c          |  8 ++++----
 mm/mmap.c          | 33 ++++++---------------------------
 mm/vma.c           | 12 ++++++------
 mm/vma.h           |  4 +---
 5 files changed, 20 insertions(+), 43 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index b1eed30fdc06..6f1835e3b430 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3292,14 +3292,14 @@ extern unsigned long do_mmap(struct file *file, unsigned long addr,
 extern int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm,
 			 unsigned long start, size_t len, struct list_head *uf,
 			 bool unlock);
+int do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
+		    struct mm_struct *mm, unsigned long start,
+		    unsigned long end, struct list_head *uf, bool unlock);
 extern int do_munmap(struct mm_struct *, unsigned long, size_t,
 		     struct list_head *uf);
 extern int do_madvise(struct mm_struct *mm, unsigned long start, size_t len_in, int behavior);
 
 #ifdef CONFIG_MMU
-extern int do_vma_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
-			 unsigned long start, unsigned long end,
-			 struct list_head *uf, bool unlock);
 extern int __mm_populate(unsigned long addr, unsigned long len,
 			 int ignore_errors);
 static inline void mm_populate(unsigned long addr, unsigned long len)
diff --git a/ipc/shm.c b/ipc/shm.c
index 3e3071252dac..99564c870084 100644
--- a/ipc/shm.c
+++ b/ipc/shm.c
@@ -1778,8 +1778,8 @@ long ksys_shmdt(char __user *shmaddr)
 			 */
 			file = vma->vm_file;
 			size = i_size_read(file_inode(vma->vm_file));
-			do_vma_munmap(&vmi, vma, vma->vm_start, vma->vm_end,
-				      NULL, false);
+			do_vmi_align_munmap(&vmi, vma, mm, vma->vm_start,
+					    vma->vm_end, NULL, false);
 			/*
 			 * We discovered the size of the shm segment, so
 			 * break out of here and fall through to the next
@@ -1803,8 +1803,8 @@ long ksys_shmdt(char __user *shmaddr)
 		if ((vma->vm_ops == &shm_vm_ops) &&
 		    ((vma->vm_start - addr)/PAGE_SIZE == vma->vm_pgoff) &&
 		    (vma->vm_file == file)) {
-			do_vma_munmap(&vmi, vma, vma->vm_start, vma->vm_end,
-				      NULL, false);
+			do_vmi_align_munmap(&vmi, vma, mm, vma->vm_start,
+					    vma->vm_end, NULL, false);
 		}
 
 		vma = vma_next(&vmi);
diff --git a/mm/mmap.c b/mm/mmap.c
index aa4aa49f3b97..51ab0bdb856c 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -169,11 +169,12 @@ SYSCALL_DEFINE1(brk, unsigned long, brk)
 			goto out; /* mapping intersects with an existing non-brk vma. */
 		/*
 		 * mm->brk must be protected by write mmap_lock.
-		 * do_vma_munmap() will drop the lock on success,  so update it
-		 * before calling do_vma_munmap().
+		 * do_vmi_align_munmap() will drop the lock on success,  so
+		 * update it before calling do_vmi_align_munmap().
 		 */
 		mm->brk = brk;
-		if (do_vma_munmap(&vmi, brkvma, newbrk, oldbrk, &uf, true))
+		if (do_vmi_align_munmap(&vmi, brkvma, mm, newbrk, oldbrk, &uf,
+					/* unlock = */ true))
 			goto out;
 
 		goto success_unlocked;
@@ -1478,9 +1479,9 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 		vma->vm_file = get_file(file);
 		/*
 		 * call_mmap() may map PTE, so ensure there are no existing PTEs
-		 * call the vm_ops close function if one exists.
+		 * and call the vm_ops close function if one exists.
 		 */
-		vms_clean_up_area(&vms, &mas_detach, true);
+		vms_clean_up_area(&vms, &mas_detach);
 		error = call_mmap(file, vma);
 		if (error)
 			goto unmap_and_free_vma;
@@ -1742,28 +1743,6 @@ SYSCALL_DEFINE5(remap_file_pages, unsigned long, start, unsigned long, size,
 	return ret;
 }
 
-/*
- * do_vma_munmap() - Unmap a full or partial vma.
- * @vmi: The vma iterator pointing at the vma
- * @vma: The first vma to be munmapped
- * @start: the start of the address to unmap
- * @end: The end of the address to unmap
- * @uf: The userfaultfd list_head
- * @unlock: Drop the lock on success
- *
- * unmaps a VMA mapping when the vma iterator is already in position.
- * Does not handle alignment.
- *
- * Return: 0 on success drops the lock of so directed, error on failure and will
- * still hold the lock.
- */
-int do_vma_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
-		unsigned long start, unsigned long end, struct list_head *uf,
-		bool unlock)
-{
-	return do_vmi_align_munmap(vmi, vma, vma->vm_mm, start, end, uf, unlock);
-}
-
 /*
  * do_brk_flags() - Increase the brk vma if the flags match.
  * @vmi: The vma iterator
diff --git a/mm/vma.c b/mm/vma.c
index 8dc60dcb6e8d..91b027eb9a38 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -658,8 +658,8 @@ static inline void vms_clear_ptes(struct vma_munmap_struct *vms,
 	 */
 	mas_set(mas_detach, 1);
 	lru_add_drain();
-	tlb_gather_mmu(&tlb, vms->mm);
-	update_hiwater_rss(vms->mm);
+	tlb_gather_mmu(&tlb, vms->vma->vm_mm);
+	update_hiwater_rss(vms->vma->vm_mm);
 	unmap_vmas(&tlb, mas_detach, vms->vma, vms->start, vms->end,
 		   vms->vma_count, mm_wr_locked);
 
@@ -672,14 +672,14 @@ static inline void vms_clear_ptes(struct vma_munmap_struct *vms,
 }
 
 void vms_clean_up_area(struct vma_munmap_struct *vms,
-		struct ma_state *mas_detach, bool mm_wr_locked)
+		struct ma_state *mas_detach)
 {
 	struct vm_area_struct *vma;
 
 	if (!vms->nr_pages)
 		return;
 
-	vms_clear_ptes(vms, mas_detach, mm_wr_locked);
+	vms_clear_ptes(vms, mas_detach, true);
 	mas_set(mas_detach, 0);
 	mas_for_each(mas_detach, vma, ULONG_MAX)
 		if (vma->vm_ops && vma->vm_ops->close)
@@ -702,7 +702,7 @@ void vms_complete_munmap_vmas(struct vma_munmap_struct *vms,
 	struct vm_area_struct *vma;
 	struct mm_struct *mm;
 
-	mm = vms->mm;
+	mm = current->mm;
 	mm->map_count -= vms->vma_count;
 	mm->locked_vm -= vms->locked_vm;
 	if (vms->unlock)
@@ -770,7 +770,7 @@ int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
 		 * its limit temporarily, to help free resources as expected.
 		 */
 		if (vms->end < vms->vma->vm_end &&
-		    vms->mm->map_count >= sysctl_max_map_count)
+		    vms->vma->vm_mm->map_count >= sysctl_max_map_count)
 			goto map_count_exceeded;
 
 		/* Don't bother splitting the VMA if we can't unmap it anyway */
diff --git a/mm/vma.h b/mm/vma.h
index f710812482a1..8ca32d7cb846 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -31,7 +31,6 @@ struct unlink_vma_file_batch {
  */
 struct vma_munmap_struct {
 	struct vma_iterator *vmi;
-	struct mm_struct *mm;
 	struct vm_area_struct *vma;     /* The first vma to munmap */
 	struct vm_area_struct *prev;    /* vma before the munmap area */
 	struct vm_area_struct *next;    /* vma after the munmap area */
@@ -113,7 +112,6 @@ static inline void init_vma_munmap(struct vma_munmap_struct *vms,
 		unsigned long start, unsigned long end, struct list_head *uf,
 		bool unlock)
 {
-	vms->mm = current->mm;
 	vms->vmi = vmi;
 	vms->vma = vma;
 	if (vma) {
@@ -140,7 +138,7 @@ void vms_complete_munmap_vmas(struct vma_munmap_struct *vms,
 		struct ma_state *mas_detach);
 
 void vms_clean_up_area(struct vma_munmap_struct *vms,
-		struct ma_state *mas_detach, bool mm_wr_locked);
+		struct ma_state *mas_detach);
 
 /*
  * reattach_vmas() - Undo any munmap work and free resources
-- 
2.43.0




* [PATCH v7 19/21] mm: Move may_expand_vm() check in mmap_region()
  2024-08-22 19:25 [PATCH v7 00/21] Avoid MAP_FIXED gap exposure Liam R. Howlett
                   ` (17 preceding siblings ...)
  2024-08-22 19:25 ` [PATCH v7 18/21] ipc/shm, mm: Drop do_vma_munmap() Liam R. Howlett
@ 2024-08-22 19:25 ` Liam R. Howlett
  2024-08-22 19:25 ` [PATCH v7 20/21] mm/vma: Drop incorrect comment from vms_gather_munmap_vmas() Liam R. Howlett
  2024-08-22 19:25 ` [PATCH v7 21/21] mm/vma.h: Optimise vma_munmap_struct Liam R. Howlett
  20 siblings, 0 replies; 32+ messages in thread
From: Liam R. Howlett @ 2024-08-22 19:25 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Suren Baghdasaryan, Lorenzo Stoakes,
	Matthew Wilcox, Vlastimil Babka, sidhartha.kumar, Bert Karwatzki,
	Jiri Olsa, Kees Cook, Paul E . McKenney, Liam R. Howlett

From: "Liam R. Howlett" <Liam.Howlett@Oracle.com>

The may_expand_vm() check requires the count of the pages within the
munmap range.  Since this is needed for accounting and obtained later,
the reordering of may_expand_vm() to later in the call stack, after the
vma munmap struct (vms) is initialised and the gather stage is
potentially run, will allow for a single loop over the vmas.  The gather
stage does not commit any work and so everything can be undone in the
case of a failure.

The MAP_FIXED page count is available after the vms_gather_munmap_vmas()
call, so use it instead of looping over the vmas twice.
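
The moved check, as it appears after the gather stage in the diff
below (comment added):

	/* Check against address space limit. */
	if (!may_expand_vm(mm, vm_flags, pglen - vms.nr_pages))
		goto abort_munmap;	/* the gather work is fully undoable */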

Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
---
 mm/mmap.c | 15 ++++-----------
 mm/vma.c  | 21 ---------------------
 mm/vma.h  |  3 ---
 3 files changed, 4 insertions(+), 35 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 51ab0bdb856c..5937607f6949 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1376,17 +1376,6 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 	pgoff_t vm_pgoff;
 	int error = -ENOMEM;
 	VMA_ITERATOR(vmi, mm, addr);
-	unsigned long nr_pages, nr_accounted;
-
-	nr_pages = count_vma_pages_range(mm, addr, end, &nr_accounted);
-
-	/*
-	 * Check against address space limit.
-	 * MAP_FIXED may remove pages of mappings that intersects with requested
-	 * mapping. Account for the pages it would unmap.
-	 */
-	if (!may_expand_vm(mm, vm_flags, pglen - nr_pages))
-		return -ENOMEM;
 
 	/* Find the first overlapping VMA */
 	vma = vma_find(&vmi, end);
@@ -1409,6 +1398,10 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 			vma_iter_next_range(&vmi);
 	}
 
+	/* Check against address space limit. */
+	if (!may_expand_vm(mm, vm_flags, pglen - vms.nr_pages))
+		goto abort_munmap;
+
 	/*
 	 * Private writable mapping: check memory availability
 	 */
diff --git a/mm/vma.c b/mm/vma.c
index 91b027eb9a38..61d51677eaaf 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -1645,27 +1645,6 @@ bool vma_wants_writenotify(struct vm_area_struct *vma, pgprot_t vm_page_prot)
 	return vma_fs_can_writeback(vma);
 }
 
-unsigned long count_vma_pages_range(struct mm_struct *mm,
-		unsigned long addr, unsigned long end,
-		unsigned long *nr_accounted)
-{
-	VMA_ITERATOR(vmi, mm, addr);
-	struct vm_area_struct *vma;
-	unsigned long nr_pages = 0;
-
-	*nr_accounted = 0;
-	for_each_vma_range(vmi, vma, end) {
-		unsigned long vm_start = max(addr, vma->vm_start);
-		unsigned long vm_end = min(end, vma->vm_end);
-
-		nr_pages += PHYS_PFN(vm_end - vm_start);
-		if (vma->vm_flags & VM_ACCOUNT)
-			*nr_accounted += PHYS_PFN(vm_end - vm_start);
-	}
-
-	return nr_pages;
-}
-
 static DEFINE_MUTEX(mm_all_locks_mutex);
 
 static void vm_lock_anon_vma(struct mm_struct *mm, struct anon_vma *anon_vma)
diff --git a/mm/vma.h b/mm/vma.h
index 8ca32d7cb846..7047fedce459 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -315,9 +315,6 @@ bool vma_wants_writenotify(struct vm_area_struct *vma, pgprot_t vm_page_prot);
 
 int mm_take_all_locks(struct mm_struct *mm);
 void mm_drop_all_locks(struct mm_struct *mm);
-unsigned long count_vma_pages_range(struct mm_struct *mm,
-				    unsigned long addr, unsigned long end,
-				    unsigned long *nr_accounted);
 
 static inline bool vma_wants_manual_pte_write_upgrade(struct vm_area_struct *vma)
 {
-- 
2.43.0




* [PATCH v7 20/21] mm/vma: Drop incorrect comment from vms_gather_munmap_vmas()
  2024-08-22 19:25 [PATCH v7 00/21] Avoid MAP_FIXED gap exposure Liam R. Howlett
                   ` (18 preceding siblings ...)
  2024-08-22 19:25 ` [PATCH v7 19/21] mm: Move may_expand_vm() check in mmap_region() Liam R. Howlett
@ 2024-08-22 19:25 ` Liam R. Howlett
  2024-08-22 19:25 ` [PATCH v7 21/21] mm/vma.h: Optimise vma_munmap_struct Liam R. Howlett
  20 siblings, 0 replies; 32+ messages in thread
From: Liam R. Howlett @ 2024-08-22 19:25 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Suren Baghdasaryan, Lorenzo Stoakes,
	Matthew Wilcox, Vlastimil Babka, sidhartha.kumar, Bert Karwatzki,
	Jiri Olsa, Kees Cook, Paul E . McKenney, Liam R. Howlett

From: "Liam R. Howlett" <Liam.Howlett@Oracle.com>

The comment has been outdated since 6b73cff239e52 ("mm: change munmap
splitting order and move_vma()").  move_vma() was altered to fix the
fragile state of the accounting since then.

Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
---
 mm/vma.c | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/mm/vma.c b/mm/vma.c
index 61d51677eaaf..ca87d30cb185 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -755,13 +755,8 @@ int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
 
 	/*
 	 * If we need to split any vma, do it now to save pain later.
-	 *
-	 * Note: mremap's move_vma VM_ACCOUNT handling assumes a partially
-	 * unmapped vm_area_struct will remain in use: so lower split_vma
-	 * places tmp vma above, and higher split_vma places tmp vma below.
+	 * Does it split the first one?
 	 */
-
-	/* Does it split the first one? */
 	if (vms->start > vms->vma->vm_start) {
 
 		/*
-- 
2.43.0




* [PATCH v7 21/21] mm/vma.h: Optimise vma_munmap_struct
  2024-08-22 19:25 [PATCH v7 00/21] Avoid MAP_FIXED gap exposure Liam R. Howlett
                   ` (19 preceding siblings ...)
  2024-08-22 19:25 ` [PATCH v7 20/21] mm/vma: Drop incorrect comment from vms_gather_munmap_vmas() Liam R. Howlett
@ 2024-08-22 19:25 ` Liam R. Howlett
  2024-08-22 19:41   ` Lorenzo Stoakes
  20 siblings, 1 reply; 32+ messages in thread
From: Liam R. Howlett @ 2024-08-22 19:25 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Suren Baghdasaryan, Lorenzo Stoakes,
	Matthew Wilcox, Vlastimil Babka, sidhartha.kumar, Bert Karwatzki,
	Jiri Olsa, Kees Cook, Paul E . McKenney, Liam R. Howlett

From: "Liam R. Howlett" <Liam.Howlett@Oracle.com>

The vma_munmap_struct has a hole of 4 bytes and pushes the struct to
three cachelines.  Relocating the three booleans upwards allows for the
struct to only use two cachelines (as reported by pahole on amd64).
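
The layout reports below can be regenerated with pahole on a kernel
built with debug info, for example:

	$ pahole -C vma_munmap_struct vmlinux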

Before:
struct vma_munmap_struct {
        struct vma_iterator *      vmi;                  /*     0     8 */
        struct vm_area_struct *    vma;                  /*     8     8 */
        struct vm_area_struct *    prev;                 /*    16     8 */
        struct vm_area_struct *    next;                 /*    24     8 */
        struct list_head *         uf;                   /*    32     8 */
        long unsigned int          start;                /*    40     8 */
        long unsigned int          end;                  /*    48     8 */
        long unsigned int          unmap_start;          /*    56     8 */
        /* --- cacheline 1 boundary (64 bytes) --- */
        long unsigned int          unmap_end;            /*    64     8 */
        int                        vma_count;            /*    72     4 */

        /* XXX 4 bytes hole, try to pack */

        long unsigned int          nr_pages;             /*    80     8 */
        long unsigned int          locked_vm;            /*    88     8 */
        long unsigned int          nr_accounted;         /*    96     8 */
        long unsigned int          exec_vm;              /*   104     8 */
        long unsigned int          stack_vm;             /*   112     8 */
        long unsigned int          data_vm;              /*   120     8 */
        /* --- cacheline 2 boundary (128 bytes) --- */
        bool                       unlock;               /*   128     1 */
        bool                       clear_ptes;           /*   129     1 */
        bool                       closed_vm_ops;        /*   130     1 */

        /* size: 136, cachelines: 3, members: 19 */
        /* sum members: 127, holes: 1, sum holes: 4 */
        /* padding: 5 */
        /* last cacheline: 8 bytes */
};

After:
struct vma_munmap_struct {
        struct vma_iterator *      vmi;                  /*     0     8 */
        struct vm_area_struct *    vma;                  /*     8     8 */
        struct vm_area_struct *    prev;                 /*    16     8 */
        struct vm_area_struct *    next;                 /*    24     8 */
        struct list_head *         uf;                   /*    32     8 */
        long unsigned int          start;                /*    40     8 */
        long unsigned int          end;                  /*    48     8 */
        long unsigned int          unmap_start;          /*    56     8 */
        /* --- cacheline 1 boundary (64 bytes) --- */
        long unsigned int          unmap_end;            /*    64     8 */
        int                        vma_count;            /*    72     4 */
        bool                       unlock;               /*    76     1 */
        bool                       clear_ptes;           /*    77     1 */
        bool                       closed_vm_ops;        /*    78     1 */

        /* XXX 1 byte hole, try to pack */

        long unsigned int          nr_pages;             /*    80     8 */
        long unsigned int          locked_vm;            /*    88     8 */
        long unsigned int          nr_accounted;         /*    96     8 */
        long unsigned int          exec_vm;              /*   104     8 */
        long unsigned int          stack_vm;             /*   112     8 */
        long unsigned int          data_vm;              /*   120     8 */

        /* size: 128, cachelines: 2, members: 19 */
        /* sum members: 127, holes: 1, sum holes: 1 */
};
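
A minimal userspace sketch (hypothetical members, not the kernel struct
itself) shows the same mechanics: a 4-byte int before an 8-byte member
leaves a 4-byte alignment hole on LP64, and three 1-byte bools fit
inside it:

#include <stdbool.h>
#include <stdio.h>

struct demo_before {
        long a;
        int count;      /* 4-byte hole follows on LP64 */
        long b;
        bool x, y, z;   /* tail padding rounds the struct up to 8 */
};

struct demo_after {
        long a;
        int count;
        bool x, y, z;   /* fills the hole, 1 byte left spare */
        long b;
};

int main(void)
{
        /* Prints "before=32 after=24" on amd64. */
        printf("before=%zu after=%zu\n",
               sizeof(struct demo_before), sizeof(struct demo_after));
        return 0;
}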

Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
---
 mm/vma.h | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/mm/vma.h b/mm/vma.h
index 7047fedce459..c774642697a0 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -40,15 +40,16 @@ struct vma_munmap_struct {
 	unsigned long unmap_start;      /* Unmap PTE start */
 	unsigned long unmap_end;        /* Unmap PTE end */
 	int vma_count;                  /* Number of vmas that will be removed */
+	bool unlock;                    /* Unlock after the munmap */
+	bool clear_ptes;                /* If there are outstanding PTE to be cleared */
+	bool closed_vm_ops;		/* call_mmap() was encountered, so vmas may be closed */
+	/* 1 byte hole */
 	unsigned long nr_pages;         /* Number of pages being removed */
 	unsigned long locked_vm;        /* Number of locked pages */
 	unsigned long nr_accounted;     /* Number of VM_ACCOUNT pages */
 	unsigned long exec_vm;
 	unsigned long stack_vm;
 	unsigned long data_vm;
-	bool unlock;                    /* Unlock after the munmap */
-	bool clear_ptes;                /* If there are outstanding PTE to be cleared */
-	bool closed_vm_ops;		/* call_mmap() was encountered, so vmas may be closed */
 };
 
 #ifdef CONFIG_DEBUG_VM_MAPLE_TREE
-- 
2.43.0



^ permalink raw reply related	[flat|nested] 32+ messages in thread

* Re: [PATCH v7 21/21] mm/vma.h: Optimise vma_munmap_struct
  2024-08-22 19:25 ` [PATCH v7 21/21] mm/vma.h: Optimise vma_munmap_struct Liam R. Howlett
@ 2024-08-22 19:41   ` Lorenzo Stoakes
  0 siblings, 0 replies; 32+ messages in thread
From: Lorenzo Stoakes @ 2024-08-22 19:41 UTC (permalink / raw)
  To: Liam R. Howlett
  Cc: Andrew Morton, linux-mm, linux-kernel, Suren Baghdasaryan,
	Matthew Wilcox, Vlastimil Babka, sidhartha.kumar, Bert Karwatzki,
	Jiri Olsa, Kees Cook, Paul E . McKenney

On Thu, Aug 22, 2024 at 03:25:43PM GMT, Liam R. Howlett wrote:
> From: "Liam R. Howlett" <Liam.Howlett@Oracle.com>
>
> The vma_munmap_struct has a 4-byte hole that pushes the struct onto a
> third cacheline.  Relocating the three booleans into that hole lets the
> struct fit in two cachelines (as reported by pahole on amd64).
>
> Before:
> struct vma_munmap_struct {
>         struct vma_iterator *      vmi;                  /*     0     8 */
>         struct vm_area_struct *    vma;                  /*     8     8 */
>         struct vm_area_struct *    prev;                 /*    16     8 */
>         struct vm_area_struct *    next;                 /*    24     8 */
>         struct list_head *         uf;                   /*    32     8 */
>         long unsigned int          start;                /*    40     8 */
>         long unsigned int          end;                  /*    48     8 */
>         long unsigned int          unmap_start;          /*    56     8 */
>         /* --- cacheline 1 boundary (64 bytes) --- */
>         long unsigned int          unmap_end;            /*    64     8 */
>         int                        vma_count;            /*    72     4 */
>
>         /* XXX 4 bytes hole, try to pack */
>
>         long unsigned int          nr_pages;             /*    80     8 */
>         long unsigned int          locked_vm;            /*    88     8 */
>         long unsigned int          nr_accounted;         /*    96     8 */
>         long unsigned int          exec_vm;              /*   104     8 */
>         long unsigned int          stack_vm;             /*   112     8 */
>         long unsigned int          data_vm;              /*   120     8 */
>         /* --- cacheline 2 boundary (128 bytes) --- */
>         bool                       unlock;               /*   128     1 */
>         bool                       clear_ptes;           /*   129     1 */
>         bool                       closed_vm_ops;        /*   130     1 */
>
>         /* size: 136, cachelines: 3, members: 19 */
>         /* sum members: 127, holes: 1, sum holes: 4 */
>         /* padding: 5 */
>         /* last cacheline: 8 bytes */
> };
>
> After:
> struct vma_munmap_struct {
>         struct vma_iterator *      vmi;                  /*     0     8 */
>         struct vm_area_struct *    vma;                  /*     8     8 */
>         struct vm_area_struct *    prev;                 /*    16     8 */
>         struct vm_area_struct *    next;                 /*    24     8 */
>         struct list_head *         uf;                   /*    32     8 */
>         long unsigned int          start;                /*    40     8 */
>         long unsigned int          end;                  /*    48     8 */
>         long unsigned int          unmap_start;          /*    56     8 */
>         /* --- cacheline 1 boundary (64 bytes) --- */
>         long unsigned int          unmap_end;            /*    64     8 */
>         int                        vma_count;            /*    72     4 */
>         bool                       unlock;               /*    76     1 */
>         bool                       clear_ptes;           /*    77     1 */
>         bool                       closed_vm_ops;        /*    78     1 */
>
>         /* XXX 1 byte hole, try to pack */
>
>         long unsigned int          nr_pages;             /*    80     8 */
>         long unsigned int          locked_vm;            /*    88     8 */
>         long unsigned int          nr_accounted;         /*    96     8 */
>         long unsigned int          exec_vm;              /*   104     8 */
>         long unsigned int          stack_vm;             /*   112     8 */
>         long unsigned int          data_vm;              /*   120     8 */
>
>         /* size: 128, cachelines: 2, members: 19 */
>         /* sum members: 127, holes: 1, sum holes: 1 */
> };
>
> Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>

Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>

> ---
>  mm/vma.h | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/mm/vma.h b/mm/vma.h
> index 7047fedce459..c774642697a0 100644
> --- a/mm/vma.h
> +++ b/mm/vma.h
> @@ -40,15 +40,16 @@ struct vma_munmap_struct {
>  	unsigned long unmap_start;      /* Unmap PTE start */
>  	unsigned long unmap_end;        /* Unmap PTE end */
>  	int vma_count;                  /* Number of vmas that will be removed */
> +	bool unlock;                    /* Unlock after the munmap */
> +	bool clear_ptes;                /* If there are outstanding PTE to be cleared */
> +	bool closed_vm_ops;		/* call_mmap() was encountered, so vmas may be closed */
> +	/* 1 byte hole */
>  	unsigned long nr_pages;         /* Number of pages being removed */
>  	unsigned long locked_vm;        /* Number of locked pages */
>  	unsigned long nr_accounted;     /* Number of VM_ACCOUNT pages */
>  	unsigned long exec_vm;
>  	unsigned long stack_vm;
>  	unsigned long data_vm;
> -	bool unlock;                    /* Unlock after the munmap */
> -	bool clear_ptes;                /* If there are outstanding PTE to be cleared */
> -	bool closed_vm_ops;		/* call_mmap() was encountered, so vmas may be closed */
>  };
>
>  #ifdef CONFIG_DEBUG_VM_MAPLE_TREE
> --
> 2.43.0
>


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v7 06/21] mm/vma: Change munmap to use vma_munmap_struct() for accounting and surrounding vmas
  2024-08-22 19:25 ` [PATCH v7 06/21] mm/vma: Change munmap to use vma_munmap_struct() for accounting and surrounding vmas Liam R. Howlett
@ 2024-08-23  8:43   ` Bert Karwatzki
  2024-08-23  9:55     ` Lorenzo Stoakes
  2024-08-23 11:37     ` Lorenzo Stoakes
  2024-08-23 13:30   ` [PATCH] mm/vma: fix bookkeeping checks Liam R. Howlett
  1 sibling, 2 replies; 32+ messages in thread
From: Bert Karwatzki @ 2024-08-23  8:43 UTC (permalink / raw)
  To: Liam R. Howlett
  Cc: Andrew Morton, linux-mm, linux-kernel, Suren Baghdasaryan,
	Lorenzo Stoakes, Matthew Wilcox, Vlastimil Babka, sidhartha.kumar,
	Jiri Olsa, Kees Cook, Paul E . McKenney, spasswolf

On Thursday, 2024-08-22 at 15:25 -0400, Liam R. Howlett wrote:
> From: "Liam R. Howlett" <Liam.Howlett@Oracle.com>
>
> Clean up the code by changing the munmap operation to use a structure
> for the accounting and munmap variables.
>
> Since remove_mt() is only called in one location and its contents will
> be reduced to almost nothing, the remains of the function can be added
> to vms_complete_munmap_vmas().
>
> Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Reviewed-by: Suren Baghdasaryan <surenb@google.com>
> ---
>  mm/vma.c | 83 +++++++++++++++++++++++++++++---------------------------
>  mm/vma.h |  6 ++++
>  2 files changed, 49 insertions(+), 40 deletions(-)
>
> diff --git a/mm/vma.c b/mm/vma.c
> index e1aee43a3dc4..58604fe3bd03 100644
> --- a/mm/vma.c
> +++ b/mm/vma.c
> @@ -103,7 +103,8 @@ static inline void init_vma_munmap(struct vma_munmap_struct *vms,
>  	vms->unlock = unlock;
>  	vms->uf = uf;
>  	vms->vma_count = 0;
> -	vms->nr_pages = vms->locked_vm = 0;
> +	vms->nr_pages = vms->locked_vm = vms->nr_accounted = 0;
> +	vms->exec_vm = vms->stack_vm = vms->data_vm = 0;
>  }
>
>  /*
> @@ -299,30 +300,6 @@ static int split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
>  	return __split_vma(vmi, vma, addr, new_below);
>  }
>
> -/*
> - * Ok - we have the memory areas we should free on a maple tree so release them,
> - * and do the vma updates.
> - *
> - * Called with the mm semaphore held.
> - */
> -static inline void remove_mt(struct mm_struct *mm, struct ma_state *mas)
> -{
> -	unsigned long nr_accounted = 0;
> -	struct vm_area_struct *vma;
> -
> -	/* Update high watermark before we lower total_vm */
> -	update_hiwater_vm(mm);
> -	mas_for_each(mas, vma, ULONG_MAX) {
> -		long nrpages = vma_pages(vma);
> -
> -		if (vma->vm_flags & VM_ACCOUNT)
> -			nr_accounted += nrpages;
> -		vm_stat_account(mm, vma->vm_flags, -nrpages);
> -		remove_vma(vma, false);
> -	}
> -	vm_unacct_memory(nr_accounted);
> -}
> -
>  /*
>   * init_vma_prep() - Initializer wrapper for vma_prepare struct
>   * @vp: The vma_prepare struct
> @@ -722,7 +699,7 @@ static inline void abort_munmap_vmas(struct ma_state *mas_detach)
>  static void vms_complete_munmap_vmas(struct vma_munmap_struct *vms,
>  		struct ma_state *mas_detach)
>  {
> -	struct vm_area_struct *prev, *next;
> +	struct vm_area_struct *vma;
>  	struct mm_struct *mm;
>
>  	mm = vms->mm;
> @@ -731,21 +708,31 @@ static void vms_complete_munmap_vmas(struct vma_munmap_struct *vms,
>  	if (vms->unlock)
>  		mmap_write_downgrade(mm);
>
> -	prev = vma_iter_prev_range(vms->vmi);
> -	next = vma_next(vms->vmi);
> -	if (next)
> -		vma_iter_prev_range(vms->vmi);
> -
>  	/*
>  	 * We can free page tables without write-locking mmap_lock because VMAs
>  	 * were isolated before we downgraded mmap_lock.
>  	 */
>  	mas_set(mas_detach, 1);
> -	unmap_region(mm, mas_detach, vms->vma, prev, next, vms->start, vms->end,
> -		     vms->vma_count, !vms->unlock);
> -	/* Statistics and freeing VMAs */
> +	unmap_region(mm, mas_detach, vms->vma, vms->prev, vms->next,
> +		     vms->start, vms->end, vms->vma_count, !vms->unlock);
> +	/* Update high watermark before we lower total_vm */
> +	update_hiwater_vm(mm);
> +	/* Stat accounting */
> +	WRITE_ONCE(mm->total_vm, READ_ONCE(mm->total_vm) - vms->nr_pages);
> +	mm->exec_vm -= vms->exec_vm;
> +	mm->stack_vm -= vms->stack_vm;
> +	mm->data_vm -= vms->data_vm;
> +	/* Paranoid bookkeeping */
> +	VM_WARN_ON(vms->exec_vm > mm->exec_vm);
> +	VM_WARN_ON(vms->stack_vm > mm->stack_vm);
> +	VM_WARN_ON(vms->data_vm > mm->data_vm);
> +

I'm running the v7 patchset on linux-next-20240822 and I get lots of these
errors right on boot (both when using the complete patchset and when using
only the patches up to and including this one):

[  T620] WARNING: CPU: 6 PID: 620 at mm/vma.c:725
vms_complete_munmap_vmas+0x1d8/0x200
[  T620] Modules linked in: amd_atl ecc mc sparse_keymap wmi_bmof edac_mce_amd
snd snd_pci_acp3x k10temp soundcore ccp battery ac button hid_sensor_gyro_3d
hid_sensor_als hid_sensor_magn_3d hid_sensor_prox hid_sensor_accel_3d
hid_sensor_trigger industrialio_triggered_buffer kfifo_buf industrialio amd_pmc
hid_sensor_iio_common joydev evdev serio_raw mt7921e mt7921_common mt792x_lib
mt76_connac_lib mt76 mac80211 libarc4 cfg80211 rfkill msr nvme_fabrics fuse
efi_pstore configfs efivarfs autofs4 ext4 crc32c_generic mbcache jbd2 usbhid
amdgpu i2c_algo_bit drm_ttm_helper ttm drm_exec drm_suballoc_helper amdxcp
xhci_pci drm_buddy hid_sensor_hub xhci_hcd nvme mfd_core gpu_sched
hid_multitouch hid_generic crc32c_intel psmouse usbcore i2c_piix4
drm_display_helper amd_sfh i2c_hid_acpi i2c_smbus usb_common crc16 nvme_core
r8169 i2c_hid hid i2c_designware_platform i2c_designware_core
[  T620] CPU: 6 UID: 0 PID: 620 Comm: fsck.vfat Not tainted 6.11.0-rc4-next-
20240822-liamh-v7-00021-gc6686c81601f #322
[  T620] Hardware name: Micro-Star International Co., Ltd. Alpha 15 B5EEK/MS-
158L, BIOS E158LAMS.107 11/10/2021
[  T620] RIP: 0010:vms_complete_munmap_vmas+0x1d8/0x200
[  T620] Code: 8b 85 a8 00 00 00 a8 01 74 35 8b 85 e0 00 00 00 48 8d bd a8 00 00
00 83 c0 01 89 85 e0 00 00 00 e8 7d 39 e8 ff e9 63 fe ff ff <0f> 0b e9 eb fe ff
ff 0f 0b e9 d0 fe ff ff 0f 0b e9 d3 fe ff ff 0f
[  T620] RSP: 0018:ffffa415c09d7d10 EFLAGS: 00010283
[  T620] RAX: 00000000000000cd RBX: ffffa415c09d7d90 RCX: 000000000000018e
[  T620] RDX: 0000000000000021 RSI: 00000000000019d9 RDI: ffff9073ee7a6400
[  T620] RBP: ffff906541341f80 R08: 0000000000000000 R09: 000000000000080a
[  T620] R10: 000000000001d4de R11: 0000000000000140 R12: ffffa415c09d7d48
[  T620] R13: 00007fbd5ea5f000 R14: 00007fbd5eb5efff R15: ffffa415c09d7d90
[  T620] FS:  00007fbd5ec38740(0000) GS:ffff9073ee780000(0000)
knlGS:0000000000000000
[  T620] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  T620] CR2: 00007fc336339c90 CR3: 000000010a39e000 CR4: 0000000000750ef0
[  T620] PKRU: 55555554
[  T620] Call Trace:
[  T620]  <TASK>
[  T620]  ? __warn.cold+0x90/0x9e
[  T620]  ? vms_complete_munmap_vmas+0x1d8/0x200
[  T620]  ? report_bug+0xfa/0x140
[  T620]  ? handle_bug+0x53/0x90
[  T620]  ? exc_invalid_op+0x17/0x70
[  T620]  ? asm_exc_invalid_op+0x1a/0x20
[  T620]  ? vms_complete_munmap_vmas+0x1d8/0x200
[  T620]  do_vmi_align_munmap+0x1e0/0x260
[  T620]  do_vmi_munmap+0xbe/0x160
[  T620]  __vm_munmap+0x96/0x110
[  T620]  __x64_sys_munmap+0x16/0x20
[  T620]  do_syscall_64+0x5f/0x170
[  T620]  entry_SYSCALL_64_after_hwframe+0x55/0x5d
[  T620] RIP: 0033:0x7fbd5ed3ec57
[  T620] Code: 73 01 c3 48 8b 0d c1 71 0d 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e
0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 b8 0b 00 00 00 0f 05 <48> 3d 01 f0 ff ff
73 01 c3 48 8b 0d 91 71 0d 00 f7 d8 64 89 01 48
[  T620] RSP: 002b:00007fff0b04d298 EFLAGS: 00000202 ORIG_RAX: 000000000000000b
[  T620] RAX: ffffffffffffffda RBX: ffffffffffffff88 RCX: 00007fbd5ed3ec57
[  T620] RDX: 0000000000000000 RSI: 0000000000100000 RDI: 00007fbd5ea5f000
[  T620] RBP: 0000000000000002 R08: 0000000000100000 R09: 0000000000000007
[  T620] R10: 0000000000000007 R11: 0000000000000202 R12: 00007fff0b04d588
[  T620] R13: 000055b76c789fc6 R14: 00007fff0b04d360 R15: 00007fff0b04d3c0
[  T620]  </TASK>
[  T620] ---[ end trace 0000000000000000 ]---


Bert Karwatzki


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v7 06/21] mm/vma: Change munmap to use vma_munmap_struct() for accounting and surrounding vmas
  2024-08-23  8:43   ` Bert Karwatzki
@ 2024-08-23  9:55     ` Lorenzo Stoakes
  2024-08-23 10:42       ` Bert Karwatzki
  2024-08-23 11:37     ` Lorenzo Stoakes
  1 sibling, 1 reply; 32+ messages in thread
From: Lorenzo Stoakes @ 2024-08-23  9:55 UTC (permalink / raw)
  To: Bert Karwatzki
  Cc: Liam R. Howlett, Andrew Morton, linux-mm, linux-kernel,
	Suren Baghdasaryan, Matthew Wilcox, Vlastimil Babka,
	sidhartha.kumar, Jiri Olsa, Kees Cook, Paul E . McKenney

On Fri, Aug 23, 2024 at 10:43:11AM GMT, Bert Karwatzki wrote:

[snip]

> > @@ -731,21 +708,31 @@ static void vms_complete_munmap_vmas(struct vma_munmap_struct *vms,
> >  	if (vms->unlock)
> >  		mmap_write_downgrade(mm);
> >
> > -	prev = vma_iter_prev_range(vms->vmi);
> > -	next = vma_next(vms->vmi);
> > -	if (next)
> > -		vma_iter_prev_range(vms->vmi);
> > -
> >  	/*
> >  	 * We can free page tables without write-locking mmap_lock because VMAs
> >  	 * were isolated before we downgraded mmap_lock.
> >  	 */
> >  	mas_set(mas_detach, 1);
> > -	unmap_region(mm, mas_detach, vms->vma, prev, next, vms->start, vms->end,
> > -		     vms->vma_count, !vms->unlock);
> > -	/* Statistics and freeing VMAs */
> > +	unmap_region(mm, mas_detach, vms->vma, vms->prev, vms->next,
> > +		     vms->start, vms->end, vms->vma_count, !vms->unlock);
> > +	/* Update high watermark before we lower total_vm */
> > +	update_hiwater_vm(mm);
> > +	/* Stat accounting */
> > +	WRITE_ONCE(mm->total_vm, READ_ONCE(mm->total_vm) - vms->nr_pages);
> > +	mm->exec_vm -= vms->exec_vm;
> > +	mm->stack_vm -= vms->stack_vm;
> > +	mm->data_vm -= vms->data_vm;
> > +	/* Paranoid bookkeeping */
> > +	VM_WARN_ON(vms->exec_vm > mm->exec_vm);
> > +	VM_WARN_ON(vms->stack_vm > mm->stack_vm);
> > +	VM_WARN_ON(vms->data_vm > mm->data_vm);
> > +
>
> I'm running the v7 patchset on linux-next-20240822 and I get lots of these
> errors right on boot (both when using the complete patchset and when using
> only the patches up to and including this one):

Hm curious, I'm running this in qemu with CONFIG_DEBUG_VM set and don't see
this, at least on next-20240823.

Liam's series is based on the mseal series by Pedro, not sure if that wasn't in
next-20240822 somehow?

Can you try with next-20240823, from tip, and:

    b4 shazam 20240822192543.3359552-1-Liam.Howlett@oracle.com

to grab this series, just to be sure?

Because it'd definitely be very weird + concerning for mm->data_vm to be
incorrect - and something we hadn't seen before (I don't think?)...

>
> [snip - WARNING trace quoted in full above]
>
>
> Bert Karwatzki


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v7 06/21] mm/vma: Change munmap to use vma_munmap_struct() for accounting and surrounding vmas
  2024-08-23  9:55     ` Lorenzo Stoakes
@ 2024-08-23 10:42       ` Bert Karwatzki
  2024-08-23 11:39         ` Lorenzo Stoakes
  0 siblings, 1 reply; 32+ messages in thread
From: Bert Karwatzki @ 2024-08-23 10:42 UTC (permalink / raw)
  To: Lorenzo Stoakes
  Cc: Liam R. Howlett, Andrew Morton, linux-mm, linux-kernel,
	Suren Baghdasaryan, Matthew Wilcox, Vlastimil Babka,
	sidhartha.kumar, Jiri Olsa, Kees Cook, Paul E . McKenney,
	spasswolf

On Friday, 2024-08-23 at 10:55 +0100, Lorenzo Stoakes wrote:
> On Fri, Aug 23, 2024 at 10:43:11AM GMT, Bert Karwatzki wrote:
>
> [snip]
>
> > > @@ -731,21 +708,31 @@ static void vms_complete_munmap_vmas(struct vma_munmap_struct *vms,
> > >  	if (vms->unlock)
> > >  		mmap_write_downgrade(mm);
> > >
> > > -	prev = vma_iter_prev_range(vms->vmi);
> > > -	next = vma_next(vms->vmi);
> > > -	if (next)
> > > -		vma_iter_prev_range(vms->vmi);
> > > -
> > >  	/*
> > >  	 * We can free page tables without write-locking mmap_lock because VMAs
> > >  	 * were isolated before we downgraded mmap_lock.
> > >  	 */
> > >  	mas_set(mas_detach, 1);
> > > -	unmap_region(mm, mas_detach, vms->vma, prev, next, vms->start, vms->end,
> > > -		     vms->vma_count, !vms->unlock);
> > > -	/* Statistics and freeing VMAs */
> > > +	unmap_region(mm, mas_detach, vms->vma, vms->prev, vms->next,
> > > +		     vms->start, vms->end, vms->vma_count, !vms->unlock);
> > > +	/* Update high watermark before we lower total_vm */
> > > +	update_hiwater_vm(mm);
> > > +	/* Stat accounting */
> > > +	WRITE_ONCE(mm->total_vm, READ_ONCE(mm->total_vm) - vms->nr_pages);
> > > +	mm->exec_vm -= vms->exec_vm;
> > > +	mm->stack_vm -= vms->stack_vm;
> > > +	mm->data_vm -= vms->data_vm;
> > > +	/* Paranoid bookkeeping */
> > > +	VM_WARN_ON(vms->exec_vm > mm->exec_vm);
> > > +	VM_WARN_ON(vms->stack_vm > mm->stack_vm);
> > > +	VM_WARN_ON(vms->data_vm > mm->data_vm);
> > > +
> >
> > I'm running the v7 patchset on linux-next-20240822 and I get lots of these
> > errors right on boot (both when using the complete patchset and when using
> > only the patches up to and including this one):
>
> Hm curious, I'm running this in qemu with CONFIG_DEBUG_VM set and don't see
> this, at least on next-20240823.
>
> Liam's series is based on the mseal series by Pedro, not sure if that wasn't in
> next-20240822 somehow?
>
> Can you try with next-20240823, from tip, and:
>
>     b4 shazam 20240822192543.3359552-1-Liam.Howlett@oracle.com
>
> to grab this series, just to be sure?
>
> Because it'd definitely be very weird + concerning for mm->data_vm to be
> incorrect - and something we hadn't seen before (I don't think?)...
>
> >
> > [snip - WARNING trace quoted in full above]
> >
> >
> > Bert Karwatzki

I grabbed the patches by saving the v7 patch emails as an mbox file and using
git am to apply them (which worked without error) and git pull --rebase to
update the series to next-20240823 (which also worked, without conflicts).
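
For reference, the sequence was roughly the following (the mbox filename
is made up here):

    git checkout -b liamh_mmap_v7 next-20240822
    git am v7-series.mbox    # apply the 21 patches
    git pull --rebase        # move the series onto next-20240823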

$ git log HEAD~22..HEAD --oneline
a060ce2752a8 (HEAD -> liamh_mmap_v7) mm/vma.h: Optimise vma_munmap_struct
62fdaa7f747c mm/vma: Drop incorrect comment from vms_gather_munmap_vmas()
8606e70278c5 mm: Move may_expand_vm() check in mmap_region()
fada0fd73e66 ipc/shm, mm: Drop do_vma_munmap()
bc57e24e2564 mm/mmap: Use vms accounted pages in mmap_region()
26f203f001eb mm/mmap: Use PHYS_PFN in mmap_region()
efe56a49d0ef mm: Change failure of MAP_FIXED to restoring the gap on failure
494d21bcde64 mm/mmap: Avoid zeroing vma tree in mmap_region()
ff688d8cec39 mm: Clean up unmap_region() argument list
862b919b20a4 mm/vma: Track start and end for munmap in vma_munmap_struct
f406d75d8787 mm/mmap: Reposition vma iterator in mmap_region()
6548fe69d672 mm/vma: Support vma == NULL in init_vma_munmap()
2ff31a2341d2 mm/vma: Expand mmap_region() munmap call
7806ca6562c5 mm/vma: Inline munmap operation in mmap_region()
b9659761b35e mm/vma: Extract validate_mm() from vma_complete()
48fde0bebb75 mm/vma: Change munmap to use vma_munmap_struct() for accounting and
surrounding vmas
7bb7a27044f0 mm/vma: Introduce vma_munmap_struct for use in munmap operations
3b4885e2e6b2 mm/vma: Extract the gathering of vmas from do_vmi_align_munmap()
427cdb242d36 mm/vma: Introduce vmi_complete_munmap_vmas()
5035f0d0c68b mm/vma: Introduce abort_munmap_vmas()
717dcbdf7521 mm/vma: Correctly position vma_iterator in __split_vma()
c79c85875f1a (tag: next-20240823, origin/master, origin/HEAD) Add linux-next
specific files for 20240823

Here's a short extract from dmesg (the ring buffer had already wrapped by
this point)

[  206.641849] [   T3201] ------------[ cut here ]------------
[  206.641852] [   T3201] WARNING: CPU: 7 PID: 3201 at mm/vma.c:725
vms_complete_munmap_vmas+0x1d8/0x200
[  206.641859] [   T3201] Modules linked in: ccm snd_seq_dummy snd_hrtimer
snd_seq_midi snd_seq_midi_event snd_rawmidi snd_seq snd_seq_device rfcomm
cpufreq_userspace cpufreq_powersave cpufreq_conservative bnep nls_ascii
nls_cp437 vfat fat snd_ctl_led btusb btrtl snd_hda_codec_realtek btintel btbcm
snd_hda_codec_generic btmtk snd_hda_scodec_component snd_hda_codec_hdmi
snd_hda_intel snd_intel_dspcfg bluetooth amd_atl uvcvideo snd_hda_codec
videobuf2_vmalloc snd_acp3x_pdm_dma snd_soc_dmic snd_acp3x_rn uvc snd_hwdep
videobuf2_memops snd_soc_core snd_hda_core videobuf2_v4l2 snd_pcm_oss
snd_mixer_oss snd_rn_pci_acp3x videodev snd_acp_config videobuf2_common
snd_soc_acpi snd_pcm msi_wmi ecdh_generic ecc mc edac_mce_amd sparse_keymap
wmi_bmof snd_timer snd_pci_acp3x snd k10temp soundcore ccp ac battery button
hid_sensor_gyro_3d hid_sensor_magn_3d hid_sensor_prox hid_sensor_accel_3d
hid_sensor_als hid_sensor_trigger industrialio_triggered_buffer kfifo_buf
industrialio amd_pmc hid_sensor_iio_common joydev evdev serio_raw mt7921e
[  206.641927] [   T3201]  mt7921_common mt792x_lib mt76_connac_lib mt76
mac80211 libarc4 cfg80211 rfkill msr nvme_fabrics fuse efi_pstore configfs
efivarfs autofs4 ext4 crc32c_generic mbcache jbd2 usbhid amdgpu i2c_algo_bit
drm_ttm_helper xhci_pci ttm drm_exec drm_suballoc_helper xhci_hcd amdxcp
drm_buddy hid_sensor_hub usbcore i2c_piix4 nvme mfd_core gpu_sched
hid_multitouch hid_generic crc32c_intel psmouse i2c_hid_acpi i2c_smbus
usb_common amd_sfh drm_display_helper nvme_core i2c_hid crc16 r8169 hid
i2c_designware_platform i2c_designware_core
[  206.641971] [   T3201] CPU: 7 UID: 0 PID: 3201 Comm: apt-get Tainted: G
W          6.11.0-rc4-next-20240823-liamh-v7-00021-ga060ce2752a8 #325
[  206.641974] [   T3201] Tainted: [W]=WARN
[  206.641976] [   T3201] Hardware name: Micro-Star International Co., Ltd.
Alpha 15 B5EEK/MS-158L, BIOS E158LAMS.107 11/10/2021
[  206.641977] [   T3201] RIP: 0010:vms_complete_munmap_vmas+0x1d8/0x200
[  206.641980] [   T3201] Code: 8b 85 a8 00 00 00 a8 01 74 35 8b 85 e0 00 00 00
48 8d bd a8 00 00 00 83 c0 01 89 85 e0 00 00 00 e8 3d 43 e8 ff e9 63 fe ff ff
<0f> 0b e9 eb fe ff ff 0f 0b e9 d0 fe ff ff 0f 0b e9 d3 fe ff ff 0f
[  206.641982] [   T3201] RSP: 0018:ffffb05784eb7d10 EFLAGS: 00010287
[  206.641984] [   T3201] RAX: 000000000000015d RBX: ffffb05784eb7d90 RCX:
000000000000087b
[  206.641986] [   T3201] RDX: 0000000000000021 RSI: 00000000000007e2 RDI:
ffff9f56ae7e63c0
[  206.641987] [   T3201] RBP: ffff9f48030a0540 R08: 0000000000000000 R09:
000000000000070d
[  206.641988] [   T3201] R10: 000000000001d4de R11: 0000000000000048 R12:
ffffb05784eb7d48
[  206.641990] [   T3201] R13: 00007f2017000000 R14: 00007f201dbc9fff R15:
ffffb05784eb7d90
[  206.641991] [   T3201] FS:  00007f201e88d880(0000) GS:ffff9f56ae7c0000(0000)
knlGS:0000000000000000
[  206.641993] [   T3201] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  206.641994] [   T3201] CR2: 000055cfbbc10000 CR3: 00000001813b8000 CR4:
0000000000750ef0
[  206.641995] [   T3201] PKRU: 55555554
[  206.641996] [   T3201] Call Trace:
[  206.641998] [   T3201]  <TASK>
[  206.642001] [   T3201]  ? __warn.cold+0x90/0x9e
[  206.642004] [   T3201]  ? vms_complete_munmap_vmas+0x1d8/0x200
[  206.642007] [   T3201]  ? report_bug+0xfa/0x140
[  206.642010] [   T3201]  ? handle_bug+0x53/0x90
[  206.642012] [   T3201]  ? exc_invalid_op+0x17/0x70
[  206.642014] [   T3201]  ? asm_exc_invalid_op+0x1a/0x20
[  206.642018] [   T3201]  ? vms_complete_munmap_vmas+0x1d8/0x200
[  206.642021] [   T3201]  do_vmi_align_munmap+0x1e0/0x260
[  206.642025] [   T3201]  do_vmi_munmap+0xbe/0x160
[  206.642028] [   T3201]  __vm_munmap+0x96/0x110
[  206.642032] [   T3201]  __x64_sys_munmap+0x16/0x20
[  206.642034] [   T3201]  do_syscall_64+0x5f/0x170
[  206.642037] [   T3201]  entry_SYSCALL_64_after_hwframe+0x55/0x5d
[  206.642040] [   T3201] RIP: 0033:0x7f201e519c57
[  206.642042] [   T3201] Code: 73 01 c3 48 8b 0d c1 71 0d 00 f7 d8 64 89 01 48
83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 b8 0b 00 00 00 0f 05
<48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 91 71 0d 00 f7 d8 64 89 01 48
[  206.642044] [   T3201] RSP: 002b:00007ffd2e02c5f8 EFLAGS: 00000246 ORIG_RAX:
000000000000000b
[  206.642046] [   T3201] RAX: ffffffffffffffda RBX: 000055cfbbc554f0 RCX:
00007f201e519c57
[  206.642047] [   T3201] RDX: 0000000000000004 RSI: 0000000006bc97c1 RDI:
00007f2017000000
[  206.642048] [   T3201] RBP: 0000000000000000 R08: 0000000000000005 R09:
0000000000000004
[  206.642049] [   T3201] R10: 0000000000000007 R11: 0000000000000246 R12:
000055cfbbc54160
[  206.642050] [   T3201] R13: 000055cfbbbeb198 R14: 000055cfbbc554f0 R15:
00007ffd2e02c6c0
[  206.642053] [   T3201]  </TASK>
[  206.642054] [   T3201] ---[ end trace 0000000000000000 ]---
[  206.659454] [   T3201] ------------[ cut here ]------------
[  206.659458] [   T3201] WARNING: CPU: 7 PID: 3201 at mm/vma.c:725
vms_complete_munmap_vmas+0x1d8/0x200
[  206.659465] [   T3201] Modules linked in: ccm snd_seq_dummy snd_hrtimer
snd_seq_midi snd_seq_midi_event snd_rawmidi snd_seq snd_seq_device rfcomm
cpufreq_userspace cpufreq_powersave cpufreq_conservative bnep nls_ascii
nls_cp437 vfat fat snd_ctl_led btusb btrtl snd_hda_codec_realtek btintel btbcm
snd_hda_codec_generic btmtk snd_hda_scodec_component snd_hda_codec_hdmi
snd_hda_intel snd_intel_dspcfg bluetooth amd_atl uvcvideo snd_hda_codec
videobuf2_vmalloc snd_acp3x_pdm_dma snd_soc_dmic snd_acp3x_rn uvc snd_hwdep
videobuf2_memops snd_soc_core snd_hda_core videobuf2_v4l2 snd_pcm_oss
snd_mixer_oss snd_rn_pci_acp3x videodev snd_acp_config videobuf2_common
snd_soc_acpi snd_pcm msi_wmi ecdh_generic ecc mc edac_mce_amd sparse_keymap
wmi_bmof snd_timer snd_pci_acp3x snd k10temp soundcore ccp ac battery button
hid_sensor_gyro_3d hid_sensor_magn_3d hid_sensor_prox hid_sensor_accel_3d
hid_sensor_als hid_sensor_trigger industrialio_triggered_buffer kfifo_buf
industrialio amd_pmc hid_sensor_iio_common joydev evdev serio_raw mt7921e
[  206.659530] [   T3201]  mt7921_common mt792x_lib mt76_connac_lib mt76
mac80211 libarc4 cfg80211 rfkill msr nvme_fabrics fuse efi_pstore configfs
efivarfs autofs4 ext4 crc32c_generic mbcache jbd2 usbhid amdgpu i2c_algo_bit
drm_ttm_helper xhci_pci ttm drm_exec drm_suballoc_helper xhci_hcd amdxcp
drm_buddy hid_sensor_hub usbcore i2c_piix4 nvme mfd_core gpu_sched
hid_multitouch hid_generic crc32c_intel psmouse i2c_hid_acpi i2c_smbus
usb_common amd_sfh drm_display_helper nvme_core i2c_hid crc16 r8169 hid
i2c_designware_platform i2c_designware_core
[  206.659575] [   T3201] CPU: 7 UID: 0 PID: 3201 Comm: apt-get Tainted: G
W          6.11.0-rc4-next-20240823-liamh-v7-00021-ga060ce2752a8 #325
[  206.659578] [   T3201] Tainted: [W]=WARN
[  206.659580] [   T3201] Hardware name: Micro-Star International Co., Ltd.
Alpha 15 B5EEK/MS-158L, BIOS E158LAMS.107 11/10/2021
[  206.659581] [   T3201] RIP: 0010:vms_complete_munmap_vmas+0x1d8/0x200
[  206.659584] [   T3201] Code: 8b 85 a8 00 00 00 a8 01 74 35 8b 85 e0 00 00 00
48 8d bd a8 00 00 00 83 c0 01 89 85 e0 00 00 00 e8 3d 43 e8 ff e9 63 fe ff ff
<0f> 0b e9 eb fe ff ff 0f 0b e9 d0 fe ff ff 0f 0b e9 d3 fe ff ff 0f
[  206.659586] [   T3201] RSP: 0018:ffffb05784eb7d10 EFLAGS: 00010283
[  206.659588] [   T3201] RAX: 000000000000015d RBX: ffffb05784eb7d90 RCX:
000000000000087b
[  206.659589] [   T3201] RDX: 0000000000000021 RSI: 0000000000000821 RDI:
ffff9f56ae7e63c0
[  206.659591] [   T3201] RBP: ffff9f48030a0540 R08: 0000000000000000 R09:
00000000000006f2
[  206.659592] [   T3201] R10: 000000000001d4de R11: 0000000000000048 R12:
ffffb05784eb7d48
[  206.659593] [   T3201] R13: 00007f2017000000 R14: 00007f201dbaffff R15:
ffffb05784eb7d90
[  206.659594] [   T3201] FS:  00007f201e88d880(0000) GS:ffff9f56ae7c0000(0000)
knlGS:0000000000000000
[  206.659596] [   T3201] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  206.659597] [   T3201] CR2: 000055cfbbc28288 CR3: 00000001813b8000 CR4:
0000000000750ef0
[  206.659599] [   T3201] PKRU: 55555554
[  206.659600] [   T3201] Call Trace:
[  206.659602] [   T3201]  <TASK>
[  206.659604] [   T3201]  ? __warn.cold+0x90/0x9e
[  206.659607] [   T3201]  ? vms_complete_munmap_vmas+0x1d8/0x200
[  206.659610] [   T3201]  ? report_bug+0xfa/0x140
[  206.659613] [   T3201]  ? handle_bug+0x53/0x90
[  206.659615] [   T3201]  ? exc_invalid_op+0x17/0x70
[  206.659618] [   T3201]  ? asm_exc_invalid_op+0x1a/0x20
[  206.659621] [   T3201]  ? vms_complete_munmap_vmas+0x1d8/0x200
[  206.659624] [   T3201]  do_vmi_align_munmap+0x1e0/0x260
[  206.659628] [   T3201]  do_vmi_munmap+0xbe/0x160
[  206.659631] [   T3201]  __vm_munmap+0x96/0x110
[  206.659635] [   T3201]  __x64_sys_munmap+0x16/0x20
[  206.659637] [   T3201]  do_syscall_64+0x5f/0x170
[  206.659640] [   T3201]  entry_SYSCALL_64_after_hwframe+0x55/0x5d
[  206.659642] [   T3201] RIP: 0033:0x7f201e519c57
[  206.659644] [   T3201] Code: 73 01 c3 48 8b 0d c1 71 0d 00 f7 d8 64 89 01 48
83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 b8 0b 00 00 00 0f 05
<48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 91 71 0d 00 f7 d8 64 89 01 48
[  206.659646] [   T3201] RSP: 002b:00007ffd2e02c5f8 EFLAGS: 00000246 ORIG_RAX:
000000000000000b
[  206.659648] [   T3201] RAX: ffffffffffffffda RBX: 000055cfbbbff660 RCX:
00007f201e519c57
[  206.659649] [   T3201] RDX: 0000000000000004 RSI: 0000000006baf09c RDI:
00007f2017000000
[  206.659650] [   T3201] RBP: 0000000000000001 R08: 0000000000000007 R09:
0000000000000006
[  206.659651] [   T3201] R10: 0000000000000007 R11: 0000000000000246 R12:
000055cfbbc60e30
[  206.659652] [   T3201] R13: 0000000000000044 R14: 000055cfbbbff660 R15:
00007ffd2e02c6c0
[  206.659655] [   T3201]  </TASK>
[  206.659656] [   T3201] ---[ end trace 0000000000000000 ]---
[  212.679951] [   T3222] ------------[ cut here ]------------
[  212.679955] [   T3222] WARNING: CPU: 11 PID: 3222 at mm/vma.c:725
vms_complete_munmap_vmas+0x1d8/0x200
[  212.679963] [   T3222] Modules linked in: ccm snd_seq_dummy snd_hrtimer
snd_seq_midi snd_seq_midi_event snd_rawmidi snd_seq snd_seq_device rfcomm
cpufreq_userspace cpufreq_powersave cpufreq_conservative bnep nls_ascii
nls_cp437 vfat fat snd_ctl_led btusb btrtl snd_hda_codec_realtek btintel btbcm
snd_hda_codec_generic btmtk snd_hda_scodec_component snd_hda_codec_hdmi
snd_hda_intel snd_intel_dspcfg bluetooth amd_atl uvcvideo snd_hda_codec
videobuf2_vmalloc snd_acp3x_pdm_dma snd_soc_dmic snd_acp3x_rn uvc snd_hwdep
videobuf2_memops snd_soc_core snd_hda_core videobuf2_v4l2 snd_pcm_oss
snd_mixer_oss snd_rn_pci_acp3x videodev snd_acp_config videobuf2_common
snd_soc_acpi snd_pcm msi_wmi ecdh_generic ecc mc edac_mce_amd sparse_keymap
wmi_bmof snd_timer snd_pci_acp3x snd k10temp soundcore ccp ac battery button
hid_sensor_gyro_3d hid_sensor_magn_3d hid_sensor_prox hid_sensor_accel_3d
hid_sensor_als hid_sensor_trigger industrialio_triggered_buffer kfifo_buf
industrialio amd_pmc hid_sensor_iio_common joydev evdev serio_raw mt7921e
[  212.680030] [   T3222]  mt7921_common mt792x_lib mt76_connac_lib mt76
mac80211 libarc4 cfg80211 rfkill msr nvme_fabrics fuse efi_pstore configfs
efivarfs autofs4 ext4 crc32c_generic mbcache jbd2 usbhid amdgpu i2c_algo_bit
drm_ttm_helper xhci_pci ttm drm_exec drm_suballoc_helper xhci_hcd amdxcp
drm_buddy hid_sensor_hub usbcore i2c_piix4 nvme mfd_core gpu_sched
hid_multitouch hid_generic crc32c_intel psmouse i2c_hid_acpi i2c_smbus
usb_common amd_sfh drm_display_helper nvme_core i2c_hid crc16 r8169 hid
i2c_designware_platform i2c_designware_core
[  212.680071] [   T3222] CPU: 11 UID: 0 PID: 3222 Comm: apt-extracttemp
Tainted: G        W          6.11.0-rc4-next-20240823-liamh-v7-00021-
ga060ce2752a8 #325
[  212.680074] [   T3222] Tainted: [W]=WARN
[  212.680076] [   T3222] Hardware name: Micro-Star International Co., Ltd.
Alpha 15 B5EEK/MS-158L, BIOS E158LAMS.107 11/10/2021
[  212.680077] [   T3222] RIP: 0010:vms_complete_munmap_vmas+0x1d8/0x200
[  212.680080] [   T3222] Code: 8b 85 a8 00 00 00 a8 01 74 35 8b 85 e0 00 00 00
48 8d bd a8 00 00 00 83 c0 01 89 85 e0 00 00 00 e8 3d 43 e8 ff e9 63 fe ff ff
<0f> 0b e9 eb fe ff ff 0f 0b e9 d0 fe ff ff 0f 0b e9 d3 fe ff ff 0f
[  212.680082] [   T3222] RSP: 0018:ffffb05785b7fd10 EFLAGS: 00010283
[  212.680084] [   T3222] RAX: 000000000000093a RBX: ffffb05785b7fd90 RCX:
0000000000000877
[  212.680085] [   T3222] RDX: 0000000000000021 RSI: 0000000000000ad6 RDI:
ffff9f56ae8e63c0
[  212.680086] [   T3222] RBP: ffff9f4889d178c0 R08: 0000000000000000 R09:
00000000000006a4
[  212.680088] [   T3222] R10: 000000000001d4de R11: 0000000000000048 R12:
ffffb05785b7fd48
[  212.680089] [   T3222] R13: 00007f1474400000 R14: 00007f147afc9fff R15:
ffffb05785b7fd90
[  212.680090] [   T3222] FS:  00007f147bc6e880(0000) GS:ffff9f56ae8c0000(0000)
knlGS:0000000000000000
[  212.680092] [   T3222] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  212.680093] [   T3222] CR2: 0000564415171000 CR3: 0000000223e48000 CR4:
0000000000750ef0
[  212.680094] [   T3222] PKRU: 55555554
[  212.680095] [   T3222] Call Trace:
[  212.680097] [   T3222]  <TASK>
[  212.680100] [   T3222]  ? __warn.cold+0x90/0x9e
[  212.680103] [   T3222]  ? vms_complete_munmap_vmas+0x1d8/0x200
[  212.680106] [   T3222]  ? report_bug+0xfa/0x140
[  212.680109] [   T3222]  ? handle_bug+0x53/0x90
[  212.680111] [   T3222]  ? exc_invalid_op+0x17/0x70
[  212.680113] [   T3222]  ? asm_exc_invalid_op+0x1a/0x20
[  212.680117] [   T3222]  ? vms_complete_munmap_vmas+0x1d8/0x200
[  212.680119] [   T3222]  do_vmi_align_munmap+0x1e0/0x260
[  212.680124] [   T3222]  do_vmi_munmap+0xbe/0x160
[  212.680126] [   T3222]  __vm_munmap+0x96/0x110
[  212.680130] [   T3222]  __x64_sys_munmap+0x16/0x20
[  212.680132] [   T3222]  do_syscall_64+0x5f/0x170
[  212.680135] [   T3222]  entry_SYSCALL_64_after_hwframe+0x55/0x5d
[  212.680137] [   T3222] RIP: 0033:0x7f147b919c57
[  212.680139] [   T3222] Code: 73 01 c3 48 8b 0d c1 71 0d 00 f7 d8 64 89 01 48
83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 b8 0b 00 00 00 0f 05
<48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 91 71 0d 00 f7 d8 64 89 01 48
[  212.680140] [   T3222] RSP: 002b:00007ffc1a6d5ae8 EFLAGS: 00000246 ORIG_RAX:
000000000000000b
[  212.680142] [   T3222] RAX: ffffffffffffffda RBX: 00005644150b34a0 RCX:
00007f147b919c57
[  212.680144] [   T3222] RDX: 0000000000000004 RSI: 0000000006bc983b RDI:
00007f1474400000
[  212.680145] [   T3222] RBP: 00007ffc1a6d5d90 R08: 0000000564415156 R09:
0000000000000007
[  212.680146] [   T3222] R10: 0000000000000007 R11: 0000000000000246 R12:
00007ffc1a6d5c40
[  212.680147] [   T3222] R13: 0000000000000011 R14: 0000000000000010 R15:
00007ffc1a6d5bc0
[  212.680150] [   T3222]  </TASK>
[  212.680150] [   T3222] ---[ end trace 0000000000000000 ]---


These messages aside, everything seems to work (I'm sending this email using
the affected kernel), so I'm wondering if the checks aren't a little too
paranoid.

By the way: These 6 patches by Pedro Falcato are present in linux-next-20240822,
too:

mm: remove can_modify_mm()	Pedro Falcato
mseal: replace can_modify_mm_madv with a vma variant	Pedro Falcato
mm/mremap: replace can_modify_mm with can_modify_vma	Pedro Falcato
mm/mprotect: replace can_modify_mm with can_modify_vma	Pedro Falcato
mm/munmap: replace can_modify_mm with can_modify_vma	Pedro Falcato
mm: move can_modify_vma to mm/vma.h			Pedro Falcato

Bert Karwatzki



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v7 06/21] mm/vma: Change munmap to use vma_munmap_struct() for accounting and surrounding vmas
  2024-08-23  8:43   ` Bert Karwatzki
  2024-08-23  9:55     ` Lorenzo Stoakes
@ 2024-08-23 11:37     ` Lorenzo Stoakes
  1 sibling, 0 replies; 32+ messages in thread
From: Lorenzo Stoakes @ 2024-08-23 11:37 UTC (permalink / raw)
  To: Bert Karwatzki
  Cc: Liam R. Howlett, Andrew Morton, linux-mm, linux-kernel,
	Suren Baghdasaryan, Matthew Wilcox, Vlastimil Babka,
	sidhartha.kumar, Jiri Olsa, Kees Cook, Paul E . McKenney

On Fri, Aug 23, 2024 at 10:43:11AM GMT, Bert Karwatzki wrote:

[snip]

> > @@ -731,21 +708,31 @@ static void vms_complete_munmap_vmas(struct vma_munmap_struct *vms,
> >  	if (vms->unlock)
> >  		mmap_write_downgrade(mm);
> >
> > -	prev = vma_iter_prev_range(vms->vmi);
> > -	next = vma_next(vms->vmi);
> > -	if (next)
> > -		vma_iter_prev_range(vms->vmi);
> > -
> >  	/*
> >  	 * We can free page tables without write-locking mmap_lock because VMAs
> >  	 * were isolated before we downgraded mmap_lock.
> >  	 */
> >  	mas_set(mas_detach, 1);
> > -	unmap_region(mm, mas_detach, vms->vma, prev, next, vms->start, vms->end,
> > -		     vms->vma_count, !vms->unlock);
> > -	/* Statistics and freeing VMAs */
> > +	unmap_region(mm, mas_detach, vms->vma, vms->prev, vms->next,
> > +		     vms->start, vms->end, vms->vma_count, !vms->unlock);
> > +	/* Update high watermark before we lower total_vm */
> > +	update_hiwater_vm(mm);
> > +	/* Stat accounting */
> > +	WRITE_ONCE(mm->total_vm, READ_ONCE(mm->total_vm) - vms->nr_pages);
> > +	mm->exec_vm -= vms->exec_vm;
> > +	mm->stack_vm -= vms->stack_vm;
> > +	mm->data_vm -= vms->data_vm;
> > +	/* Paranoid bookkeeping */
> > +	VM_WARN_ON(vms->exec_vm > mm->exec_vm);
> > +	VM_WARN_ON(vms->stack_vm > mm->stack_vm);
> > +	VM_WARN_ON(vms->data_vm > mm->data_vm);

Hang on... I didn't read this closely enough (clearly!): we're doing these checks
_after_ we decrement the counters, which is... not correct :)

Your processes must be munmapping more than half of their data_vm, so the
remaining counter ends up smaller than the amount just removed.

Liam - I suggest we put these checks before we decrement.
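
To spell it out with made-up numbers (a userspace sketch, not kernel
code): unmapping more than half of a counter makes the post-decrement
comparison fire even though the accounting is correct.

#include <stdio.h>

int main(void)
{
        /* Hypothetical mm with 100 data pages, munmapping 60 of them. */
        unsigned long data_vm = 100, removed = 60;

        data_vm -= removed;             /* decrement first: now 40 */
        if (removed > data_vm)          /* 60 > 40: spurious warning */
                printf("warning fires despite correct accounting\n");

        /* Checked before the decrement, 60 > 100 is false, so the
         * warning would only fire on a genuine over-count. */
        return 0;
}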

> > +
>
> I'm running the v7 patchset on linux-next-20240822 and I get lots of these
> errors right on boot (both when using the complete patchset and when using
> only the patches up to and including this one):
>
> [snip - WARNING trace quoted in full above]
>
>
> Bert Karwatzki


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v7 06/21] mm/vma: Change munmap to use vma_munmap_struct() for accounting and surrounding vmas
  2024-08-23 10:42       ` Bert Karwatzki
@ 2024-08-23 11:39         ` Lorenzo Stoakes
  0 siblings, 0 replies; 32+ messages in thread
From: Lorenzo Stoakes @ 2024-08-23 11:39 UTC (permalink / raw)
  To: Bert Karwatzki
  Cc: Liam R. Howlett, Andrew Morton, linux-mm, linux-kernel,
	Suren Baghdasaryan, Matthew Wilcox, Vlastimil Babka,
	sidhartha.kumar, Jiri Olsa, Kees Cook, Paul E . McKenney

On Fri, Aug 23, 2024 at 12:42:18PM GMT, Bert Karwatzki wrote:
> On Friday, 2024-08-23 at 10:55 +0100, Lorenzo Stoakes wrote:

[snip]

> > On Fri, Aug 23, 2024 at 10:43:11AM GMT, Bert Karwatzki wrote:
> >
> > [snip]
> > > I'm running the v7 patchset on linux-next-20240822 and I get lots of these
> > > errors right on boot (both when using the complete patchset and when using
> > > only the patches up to and including this one):
> >
> > Hm curious, I'm running this in qemu with CONFIG_DEBUG_VM set and don't see
> > this, at least on next-20240823.
> >
> > Liam's series is based on the mseal series by Pedro, not sure if that wasn't in
> > next-20240822 somehow?
> >
> > Can you try with next-20240823, from tip, and:
> >
> >     b4 shazam 20240822192543.3359552-1-Liam.Howlett@oracle.com
> >
> > to grab this series, just to be sure?
> >
> > Because it'd definitely be very weird + concerning for mm->data_vm to be
> > incorrect - and something we hadn't seen before (I don't think?)...
> >
> > >

[snip]

>
> I grabbed the patches by saving the v7 patch emails as an mbox file and using
> git am to apply them (which worked without error) and git pull --rebase to
> update the series to next-20240823 (which also worked, without conflicts).

Thanks, you are right, see other thread for an explanation. Good spot!

[snip - for brevity cutting dmesg logs, but much appreciated!]

> These messages aside, everything seems to work (I'm sending this email using
> the affected kernel), so I'm wondering if the checks aren't a little too
> paranoid.

Just to make the point - if these checks were the correct way around, this
would indicate that key mm counters are underflowing, which would be very
serious indeed - so just paranoid enough :) The paranoia is because this
should never happen.
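
As a sketch of what that underflow would look like (illustrative numbers
only, not kernel code): the counters are unsigned, so going below zero
wraps to a huge value rather than a negative one, which is exactly what
the pre-decrement check is there to catch.

#include <stdio.h>

int main(void)
{
        /* Hypothetical over-count: removing 11 pages from a counter
         * that only holds 10. */
        unsigned long data_vm = 10, removed = 11;

        if (removed > data_vm)          /* pre-decrement check trips */
                printf("counter about to underflow\n");

        data_vm -= removed;             /* wraps to ULONG_MAX */
        printf("data_vm = %lu\n", data_vm);
        return 0;
}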

>
> By the way: These 6 patches by Pedro Falcato are present in linux-next-20240822,
> too:
>
> mm: remove can_modify_mm()	Pedro Falcato
> mseal: replace can_modify_mm_madv with a vma variant	Pedro Falcato
> mm/mremap: replace can_modify_mm with can_modify_vma	Pedro Falcato
> mm/mprotect: replace can_modify_mm with can_modify_vma	Pedro Falcato
> mm/munmap: replace can_modify_mm with can_modify_vma	Pedro Falcato
> mm: move can_modify_vma to mm/vma.h			Pedro Falcato
>
> Bert Karwatzki
>

Thanks for confirming!


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH] mm/vma: fix bookkeeping checks
  2024-08-22 19:25 ` [PATCH v7 06/21] mm/vma: Change munmap to use vma_munmap_struct() for accounting and surrounding vmas Liam R. Howlett
  2024-08-23  8:43   ` Bert Karwatzki
@ 2024-08-23 13:30   ` Liam R. Howlett
  1 sibling, 0 replies; 32+ messages in thread
From: Liam R. Howlett @ 2024-08-23 13:30 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Suren Baghdasaryan, Lorenzo Stoakes,
	Matthew Wilcox, Vlastimil Babka, sidhartha.kumar, Bert Karwatzki,
	Jiri Olsa, Kees Cook, Paul E . McKenney, Liam R. Howlett

From: "Liam R. Howlett" <Liam.Howlett@Oracle.com>

Switch the order of the checks so the paranoid bookkeeping runs before the
counters are decremented; checking after the subtraction fires spuriously
whenever a munmap removes more than half of a counter.

Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
---
 mm/vma.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

Please squash this into 06/21.

diff --git a/mm/vma.c b/mm/vma.c
index 58604fe3bd03..b0c481d08612 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -719,13 +719,13 @@ static void vms_complete_munmap_vmas(struct vma_munmap_struct *vms,
 	update_hiwater_vm(mm);
 	/* Stat accounting */
 	WRITE_ONCE(mm->total_vm, READ_ONCE(mm->total_vm) - vms->nr_pages);
-	mm->exec_vm -= vms->exec_vm;
-	mm->stack_vm -= vms->stack_vm;
-	mm->data_vm -= vms->data_vm;
 	/* Paranoid bookkeeping */
 	VM_WARN_ON(vms->exec_vm > mm->exec_vm);
 	VM_WARN_ON(vms->stack_vm > mm->stack_vm);
 	VM_WARN_ON(vms->data_vm > mm->data_vm);
+	mm->exec_vm -= vms->exec_vm;
+	mm->stack_vm -= vms->stack_vm;
+	mm->data_vm -= vms->data_vm;
 
 	/* Remove and clean up vmas */
 	mas_set(mas_detach, 0);
-- 
2.43.0



^ permalink raw reply related	[flat|nested] 32+ messages in thread

* Re: [PATCH v7 12/21] mm/vma: Track start and end for munmap in vma_munmap_struct
  2024-08-22 19:25 ` [PATCH v7 12/21] mm/vma: Track start and end for munmap in vma_munmap_struct Liam R. Howlett
@ 2024-08-26 14:01   ` Geert Uytterhoeven
  2024-08-26 14:12     ` Lorenzo Stoakes
  0 siblings, 1 reply; 32+ messages in thread
From: Geert Uytterhoeven @ 2024-08-26 14:01 UTC (permalink / raw)
  To: Liam R. Howlett
  Cc: Andrew Morton, linux-mm, linux-kernel, Suren Baghdasaryan,
	Lorenzo Stoakes, Matthew Wilcox, Vlastimil Babka, sidhartha.kumar,
	Bert Karwatzki, Jiri Olsa, Kees Cook, Paul E . McKenney

Hi Liam,

On Thu, Aug 22, 2024 at 9:27 PM Liam R. Howlett <Liam.Howlett@oracle.com> wrote:
> From: "Liam R. Howlett" <Liam.Howlett@Oracle.com>
>
> Set the start and end address for munmap when the prev and next are
> gathered.  This is needed to avoid incorrect addresses being used during
> the vms_complete_munmap_vmas() function if the prev/next vma are
> expanded.
>
> Add a new helper vms_complete_pte_clear(), which is needed later and
> will avoid growing the argument list to unmap_region() beyond the 9 it
> already has.
>
> Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>

Thanks for your patch, which is now commit ca39aca8db2d78ff ("mm/vma:
track start and end for munmap in vma_munmap_struct") in next-20240826.

> --- a/mm/vma.h
> +++ b/mm/vma.h
> @@ -38,6 +38,8 @@ struct vma_munmap_struct {
>         struct list_head *uf;           /* Userfaultfd list_head */
>         unsigned long start;            /* Aligned start addr (inclusive) */
>         unsigned long end;              /* Aligned end addr (exclusive) */
> +       unsigned long unmap_start;      /* Unmap PTE start */
> +       unsigned long unmap_end;        /* Unmap PTE end */
>         int vma_count;                  /* Number of vmas that will be removed */
>         unsigned long nr_pages;         /* Number of pages being removed */
>         unsigned long locked_vm;        /* Number of locked pages */
> @@ -108,6 +110,8 @@ static inline void init_vma_munmap(struct vma_munmap_struct *vms,
>         vms->vma_count = 0;
>         vms->nr_pages = vms->locked_vm = vms->nr_accounted = 0;
>         vms->exec_vm = vms->stack_vm = vms->data_vm = 0;
> +       vms->unmap_start = FIRST_USER_ADDRESS;
> +       vms->unmap_end = USER_PGTABLES_CEILING;

noreply@ellerman.id.au reported build failures for m5272c3_defconfig
http://kisskb.ellerman.id.au/kisskb/buildresult/15224802/

$ make ARCH=m68k m5272c3_defconfig mm/filemap.o
In file included from mm/internal.h:22,
                 from mm/filemap.c:52:
mm/vma.h: In function ‘init_vma_munmap’:
mm/vma.h:113:21: error: ‘FIRST_USER_ADDRESS’ undeclared (first use in
this function)
  113 |  vms->unmap_start = FIRST_USER_ADDRESS;
      |                     ^~~~~~~~~~~~~~~~~~
mm/vma.h:113:21: note: each undeclared identifier is reported only
once for each function it appears in
mm/vma.h:114:19: error: ‘USER_PGTABLES_CEILING’ undeclared (first use
in this function)
  114 |  vms->unmap_end = USER_PGTABLES_CEILING;
      |                   ^~~~~~~~~~~~~~~~~~~~~

Both are defined in include/linux/pgtable.h inside #ifdef CONFIG_MMU,
so they are not available on nommu.
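
A minimal sketch of one possible way out (hypothetical - the actual
fix-patch referenced in the follow-up may take a different approach) would
be to give the two constants nommu fallbacks:

/*
 * Sketch only: on nommu there are no user page tables to bound, so
 * fall back to covering the whole range.
 */
#ifndef CONFIG_MMU
#define FIRST_USER_ADDRESS	0UL
#define USER_PGTABLES_CEILING	0UL
#endif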


>  }
>
>  int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,

Gr{oetje,eeting}s,

                        Geert


--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v7 12/21] mm/vma: Track start and end for munmap in vma_munmap_struct
  2024-08-26 14:01   ` Geert Uytterhoeven
@ 2024-08-26 14:12     ` Lorenzo Stoakes
  0 siblings, 0 replies; 32+ messages in thread
From: Lorenzo Stoakes @ 2024-08-26 14:12 UTC (permalink / raw)
  To: Geert Uytterhoeven
  Cc: Liam R. Howlett, Andrew Morton, linux-mm, linux-kernel,
	Suren Baghdasaryan, Matthew Wilcox, Vlastimil Babka,
	sidhartha.kumar, Bert Karwatzki, Jiri Olsa, Kees Cook,
	Paul E . McKenney

On Mon, Aug 26, 2024 at 04:01:10PM GMT, Geert Uytterhoeven wrote:
> Hi Liam,
>
> On Thu, Aug 22, 2024 at 9:27 PM Liam R. Howlett <Liam.Howlett@oracle.com> wrote:
> > From: "Liam R. Howlett" <Liam.Howlett@Oracle.com>
> >
> > Set the start and end address for munmap when the prev and next are
> > gathered.  This is needed to avoid incorrect addresses being used during
> > the vms_complete_munmap_vmas() function if the prev/next vma are
> > expanded.
> >
> > Add a new helper vms_complete_pte_clear(), which is needed later and
> > will avoid growing the argument list to unmap_region() beyond the 9 it
> > already has.
> >
> > Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
> > Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>
> Thanks for your patch, which is now commit ca39aca8db2d78ff ("mm/vma:
> track start and end for munmap in vma_munmap_struct") in next-20240826.
>
> > --- a/mm/vma.h
> > +++ b/mm/vma.h
> > @@ -38,6 +38,8 @@ struct vma_munmap_struct {
> >         struct list_head *uf;           /* Userfaultfd list_head */
> >         unsigned long start;            /* Aligned start addr (inclusive) */
> >         unsigned long end;              /* Aligned end addr (exclusive) */
> > +       unsigned long unmap_start;      /* Unmap PTE start */
> > +       unsigned long unmap_end;        /* Unmap PTE end */
> >         int vma_count;                  /* Number of vmas that will be removed */
> >         unsigned long nr_pages;         /* Number of pages being removed */
> >         unsigned long locked_vm;        /* Number of locked pages */
> > @@ -108,6 +110,8 @@ static inline void init_vma_munmap(struct vma_munmap_struct *vms,
> >         vms->vma_count = 0;
> >         vms->nr_pages = vms->locked_vm = vms->nr_accounted = 0;
> >         vms->exec_vm = vms->stack_vm = vms->data_vm = 0;
> > +       vms->unmap_start = FIRST_USER_ADDRESS;
> > +       vms->unmap_end = USER_PGTABLES_CEILING;
>
> noreply@ellerman.id.au reported build failures for m5272c3_defconfig
> http://kisskb.ellerman.id.au/kisskb/buildresult/15224802/
>
> $ make ARCH=m68k m5272c3_defconfig mm/filemap.o
> In file included from mm/internal.h:22,
>                  from mm/filemap.c:52:
> mm/vma.h: In function ‘init_vma_munmap’:
> mm/vma.h:113:21: error: ‘FIRST_USER_ADDRESS’ undeclared (first use in
> this function)
>   113 |  vms->unmap_start = FIRST_USER_ADDRESS;
>       |                     ^~~~~~~~~~~~~~~~~~
> mm/vma.h:113:21: note: each undeclared identifier is reported only
> once for each function it appears in
> mm/vma.h:114:19: error: ‘USER_PGTABLES_CEILING’ undeclared (first use
> in this function)
>   114 |  vms->unmap_end = USER_PGTABLES_CEILING;
>       |                   ^~~~~~~~~~~~~~~~~~~~~
>
> Both are defined in include/linux/pgtable.h inside #ifdef CONFIG_MMU,
> so they are not available on nommu.

Thanks for the report, this was already resolved (or should be :) via the
fix-patch in [0].

[0]: https://lore.kernel.org/all/7d0ea994-f750-49c5-b392-ae7117369cf3@lucifer.local/


>
>
> >  }
> >
> >  int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
>
> Gr{oetje,eeting}s,
>
>                         Geert
>
>
> --
> Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org
>
> In personal conversations with technical people, I call myself a hacker. But
> when I'm talking to journalists I just say "programmer" or something like that.
>                                 -- Linus Torvalds


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH] mm/vma: Fix null pointer dereference in vms_abort_munmap_vmas()
  2024-08-22 19:25 ` [PATCH v7 15/21] mm: Change failure of MAP_FIXED to restoring the gap on failure Liam R. Howlett
@ 2024-08-27 17:15   ` Liam R. Howlett
  0 siblings, 0 replies; 32+ messages in thread
From: Liam R. Howlett @ 2024-08-27 17:15 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Suren Baghdasaryan, Lorenzo Stoakes,
	Matthew Wilcox, Vlastimil Babka, sidhartha.kumar, Bert Karwatzki,
	Jiri Olsa, Kees Cook, Paul E . McKenney, Liam R. Howlett,
	Dan Carpenter

From: "Liam R. Howlett" <Liam.Howlett@Oracle.com>

Don't pass a NULL vma to vma_iter_store(); instead, set up the maple state
for the store and do it manually.  vma_iter_clear() cannot be used here as
it needs preallocations.

Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
---
 mm/vma.h | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/mm/vma.h b/mm/vma.h
index f710812482a1..5f525d723390 100644

Andrew,

Please squash this into Commit 131e4ef350fa ("mm: change failure of
MAP_FIXED to restoring the gap on failure")

--- a/mm/vma.h
+++ b/mm/vma.h
@@ -173,6 +173,7 @@ static inline void reattach_vmas(struct ma_state *mas_detach)
 static inline void vms_abort_munmap_vmas(struct vma_munmap_struct *vms,
 		struct ma_state *mas_detach)
 {
+	struct ma_state *mas = &vms->vmi->mas;
 	if (!vms->nr_pages)
 		return;
 
@@ -184,13 +185,14 @@ static inline void vms_abort_munmap_vmas(struct vma_munmap_struct *vms,
 	 * not symmetrical and state data has been lost.  Resort to the old
 	 * failure method of leaving a gap where the MAP_FIXED mapping failed.
 	 */
-	if (unlikely(vma_iter_store_gfp(vms->vmi, NULL, GFP_KERNEL))) {
+	mas_set_range(mas, vms->start, vms->end);
+	if (unlikely(mas_store_gfp(mas, NULL, GFP_KERNEL))) {
 		pr_warn_once("%s: (%d) Unable to abort munmap() operation\n",
 			     current->comm, current->pid);
 		/* Leaving vmas detached and in-tree may hamper recovery */
 		reattach_vmas(mas_detach);
 	} else {
-		/* Clean up the insertion of unfortunate the gap */
+		/* Clean up the insertion of the unfortunate gap */
 		vms_complete_munmap_vmas(vms, mas_detach);
 	}
 }
-- 
2.43.0
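
For readers less familiar with the maple tree side, the hunk above boils
down to the following pattern (a sketch assembled from the calls visible in
the diff, with the unlikely() annotation and the warning message elided):

	/*
	 * Store a NULL entry (i.e. a gap) over the failed range via the
	 * raw maple state.  vma_iter_store() cannot be used here because
	 * it takes its range from the vma being stored, which is NULL on
	 * this path.
	 */
	struct ma_state *mas = &vms->vmi->mas;

	mas_set_range(mas, vms->start, vms->end);
	if (mas_store_gfp(mas, NULL, GFP_KERNEL))
		reattach_vmas(mas_detach);	/* recovery: reattach vmas */
	else
		vms_complete_munmap_vmas(vms, mas_detach);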



^ permalink raw reply	[flat|nested] 32+ messages in thread

end of thread, other threads:[~2024-08-27 17:17 UTC | newest]

Thread overview: 32+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-08-22 19:25 [PATCH v7 00/21] Avoid MAP_FIXED gap exposure Liam R. Howlett
2024-08-22 19:25 ` [PATCH v7 01/21] mm/vma: Correctly position vma_iterator in __split_vma() Liam R. Howlett
2024-08-22 19:25 ` [PATCH v7 02/21] mm/vma: Introduce abort_munmap_vmas() Liam R. Howlett
2024-08-22 19:25 ` [PATCH v7 03/21] mm/vma: Introduce vmi_complete_munmap_vmas() Liam R. Howlett
2024-08-22 19:25 ` [PATCH v7 04/21] mm/vma: Extract the gathering of vmas from do_vmi_align_munmap() Liam R. Howlett
2024-08-22 19:25 ` [PATCH v7 05/21] mm/vma: Introduce vma_munmap_struct for use in munmap operations Liam R. Howlett
2024-08-22 19:25 ` [PATCH v7 06/21] mm/vma: Change munmap to use vma_munmap_struct() for accounting and surrounding vmas Liam R. Howlett
2024-08-23  8:43   ` Bert Karwatzki
2024-08-23  9:55     ` Lorenzo Stoakes
2024-08-23 10:42       ` Bert Karwatzki
2024-08-23 11:39         ` Lorenzo Stoakes
2024-08-23 11:37     ` Lorenzo Stoakes
2024-08-23 13:30   ` [PATCH] mm/vma: fix bookkeeping checks Liam R. Howlett
2024-08-22 19:25 ` [PATCH v7 07/21] mm/vma: Extract validate_mm() from vma_complete() Liam R. Howlett
2024-08-22 19:25 ` [PATCH v7 08/21] mm/vma: Inline munmap operation in mmap_region() Liam R. Howlett
2024-08-22 19:25 ` [PATCH v7 09/21] mm/vma: Expand mmap_region() munmap call Liam R. Howlett
2024-08-22 19:25 ` [PATCH v7 10/21] mm/vma: Support vma == NULL in init_vma_munmap() Liam R. Howlett
2024-08-22 19:25 ` [PATCH v7 11/21] mm/mmap: Reposition vma iterator in mmap_region() Liam R. Howlett
2024-08-22 19:25 ` [PATCH v7 12/21] mm/vma: Track start and end for munmap in vma_munmap_struct Liam R. Howlett
2024-08-26 14:01   ` Geert Uytterhoeven
2024-08-26 14:12     ` Lorenzo Stoakes
2024-08-22 19:25 ` [PATCH v7 13/21] mm: Clean up unmap_region() argument list Liam R. Howlett
2024-08-22 19:25 ` [PATCH v7 14/21] mm/mmap: Avoid zeroing vma tree in mmap_region() Liam R. Howlett
2024-08-22 19:25 ` [PATCH v7 15/21] mm: Change failure of MAP_FIXED to restoring the gap on failure Liam R. Howlett
2024-08-27 17:15   ` [PATCH] mm/vma: Fix null pointer dereference in vms_abort_munmap_vmas() Liam R. Howlett
2024-08-22 19:25 ` [PATCH v7 16/21] mm/mmap: Use PHYS_PFN in mmap_region() Liam R. Howlett
2024-08-22 19:25 ` [PATCH v7 17/21] mm/mmap: Use vms accounted pages " Liam R. Howlett
2024-08-22 19:25 ` [PATCH v7 18/21] ipc/shm, mm: Drop do_vma_munmap() Liam R. Howlett
2024-08-22 19:25 ` [PATCH v7 19/21] mm: Move may_expand_vm() check in mmap_region() Liam R. Howlett
2024-08-22 19:25 ` [PATCH v7 20/21] mm/vma: Drop incorrect comment from vms_gather_munmap_vmas() Liam R. Howlett
2024-08-22 19:25 ` [PATCH v7 21/21] mm/vma.h: Optimise vma_munmap_struct Liam R. Howlett
2024-08-22 19:41   ` Lorenzo Stoakes
