public inbox for linux-mm@kvack.org
* [PATCH v2] mm/mseal: update VMA end correctly on merge
@ 2026-03-27 17:31 Lorenzo Stoakes (Oracle)
  2026-03-27 17:39 ` Lorenzo Stoakes (Oracle)
  2026-03-27 18:29 ` David Hildenbrand (Arm)
  0 siblings, 2 replies; 3+ messages in thread
From: Lorenzo Stoakes (Oracle) @ 2026-03-27 17:31 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Liam R . Howlett, Vlastimil Babka, Jann Horn, Pedro Falcato,
	Jeff Xu, David Hildenbrand, linux-mm, linux-kernel, antonius

Previously we stored the end of the current VMA in curr_end and, upon
iterating to the next VMA, set curr_start to curr_end in order to advance.

However, this does not account for the fact that the VMA may be updated by
a merge in vma_modify_flags(), which can leave curr_end stale; assigning
curr_start from it then yields an incorrect curr_start on the next
iteration.

Resolve the issue by setting curr_end to vma->vm_end unconditionally, so
the value remains correct should such a merge occur.

While we're here, eliminate this entire class of bug by declaring const
curr_start/curr_end clamped to the intersection of the input range and the
current VMA, which also happens to simplify the logic.

Reported-by: Antonius <antonius@bluedragonsec.com>
Closes: https://lore.kernel.org/linux-mm/CAK8a0jwWGj9-SgFk0yKFh7i8jMkwKm5b0ao9=kmXWjO54veX2g@mail.gmail.com/
Suggested-by: David Hildenbrand (ARM) <david@kernel.org>
Acked-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>
Reviewed-by: Pedro Falcato <pfalcato@suse.de>
Signed-off-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
Fixes: 6c2da14ae1e0 ("mm/mseal: rework mseal apply logic")
Cc: <stable@vger.kernel.org>
---
v2:
* Correct Closes: tag
* Use David's excellent idea to improve the patch

v1:
https://lore.kernel.org/all/20260327090640.146308-1-ljs@kernel.org/

 mm/mseal.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/mm/mseal.c b/mm/mseal.c
index 316b5e1dec78..ac58643181f7 100644
--- a/mm/mseal.c
+++ b/mm/mseal.c
@@ -56,7 +56,6 @@ static int mseal_apply(struct mm_struct *mm,
 		unsigned long start, unsigned long end)
 {
 	struct vm_area_struct *vma, *prev;
-	unsigned long curr_start = start;
 	VMA_ITERATOR(vmi, mm, start);

 	/* We know there are no gaps so this will be non-NULL. */
@@ -66,6 +65,7 @@ static int mseal_apply(struct mm_struct *mm,
 		prev = vma;

 	for_each_vma_range(vmi, vma, end) {
+		const unsigned long curr_start = MAX(vma->vm_start, start);
 		const unsigned long curr_end = MIN(vma->vm_end, end);

 		if (!(vma->vm_flags & VM_SEALED)) {
@@ -79,7 +79,6 @@ static int mseal_apply(struct mm_struct *mm,
 		}

 		prev = vma;
-		curr_start = curr_end;
 	}

 	return 0;
--
2.53.0



* Re: [PATCH v2] mm/mseal: update VMA end correctly on merge
  2026-03-27 17:31 [PATCH v2] mm/mseal: update VMA end correctly on merge Lorenzo Stoakes (Oracle)
@ 2026-03-27 17:39 ` Lorenzo Stoakes (Oracle)
  2026-03-27 18:29 ` David Hildenbrand (Arm)
  1 sibling, 0 replies; 3+ messages in thread
From: Lorenzo Stoakes (Oracle) @ 2026-03-27 17:39 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Liam R . Howlett, Vlastimil Babka, Jann Horn, Pedro Falcato,
	Jeff Xu, David Hildenbrand, linux-mm, linux-kernel, antonius

Note - I tested this locally against the repro and confirmed it resolved it
correctly, and I also ran it through AI review as a double-check.

(Secondary, less important note - I plan to refactor all of these loops as
they're all quite bug-prone :)

Cheers, Lorenzo

On Fri, Mar 27, 2026 at 05:31:04PM +0000, Lorenzo Stoakes (Oracle) wrote:
> [snip]



* Re: [PATCH v2] mm/mseal: update VMA end correctly on merge
  2026-03-27 17:31 [PATCH v2] mm/mseal: update VMA end correctly on merge Lorenzo Stoakes (Oracle)
  2026-03-27 17:39 ` Lorenzo Stoakes (Oracle)
@ 2026-03-27 18:29 ` David Hildenbrand (Arm)
  1 sibling, 0 replies; 3+ messages in thread
From: David Hildenbrand (Arm) @ 2026-03-27 18:29 UTC (permalink / raw)
  To: Lorenzo Stoakes (Oracle), Andrew Morton
  Cc: Liam R . Howlett, Vlastimil Babka, Jann Horn, Pedro Falcato,
	Jeff Xu, linux-mm, linux-kernel, antonius

On 3/27/26 18:31, Lorenzo Stoakes (Oracle) wrote:
> [snip]
> ---

Acked-by: David Hildenbrand (Arm) <david@kernel.org>

-- 
Cheers,

David


