public inbox for linux-mm@kvack.org
 help / color / mirror / Atom feed
* [PATCH mm-hotfixes] mm/mseal: update VMA end correctly on merge
@ 2026-03-27  9:06 Lorenzo Stoakes (Oracle)
  2026-03-27  9:15 ` Pedro Falcato
                   ` (4 more replies)
  0 siblings, 5 replies; 8+ messages in thread
From: Lorenzo Stoakes (Oracle) @ 2026-03-27  9:06 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Liam R . Howlett, Vlastimil Babka, Jann Horn, Pedro Falcato,
	Jeff Xu, David Hildenbrand, linux-mm, linux-kernel, antonius

Previously we stored the end of the current VMA in curr_end, then upon
advancing to the next VMA set curr_start to curr_end.

However, this doesn't account for the fact that the VMA might be expanded
by a merge in vma_modify_flags(), which leaves curr_end stale; setting
curr_start to the stale curr_end then yields an incorrect curr_start on
the next iteration.

Resolve the issue by refreshing curr_end from vma->vm_end after modifying
the VMA, ensuring the value remains correct should a merge occur.

Signed-off-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
Fixes: 6c2da14ae1e0 ("mm/mseal: rework mseal apply logic")
Cc: <stable@vger.kernel.org>
Reported-by: Antonius <antonius@bluedragonsec.com>
Closes: https://lore.kernel.org/linux-mm/CAK8a0jyHXqBpt8Xe8v9SNDbnRiwz7OthA8SKY=NLRY7smPEP3Q@mail.gmail.com/
---
 mm/mseal.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/mseal.c b/mm/mseal.c
index 316b5e1dec78..2d72a15d8ea1 100644
--- a/mm/mseal.c
+++ b/mm/mseal.c
@@ -66,7 +66,7 @@ static int mseal_apply(struct mm_struct *mm,
 		prev = vma;

 	for_each_vma_range(vmi, vma, end) {
-		const unsigned long curr_end = MIN(vma->vm_end, end);
+		unsigned long curr_end = MIN(vma->vm_end, end);

 		if (!(vma->vm_flags & VM_SEALED)) {
 			vm_flags_t vm_flags = vma->vm_flags | VM_SEALED;
@@ -76,6 +76,7 @@ static int mseal_apply(struct mm_struct *mm,
 			if (IS_ERR(vma))
 				return PTR_ERR(vma);
 			vm_flags_set(vma, VM_SEALED);
+			curr_end = vma->vm_end; /* Merge may have updated. */
 		}

 		prev = vma;
--
2.53.0


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [PATCH mm-hotfixes] mm/mseal: update VMA end correctly on merge
  2026-03-27  9:06 [PATCH mm-hotfixes] mm/mseal: update VMA end correctly on merge Lorenzo Stoakes (Oracle)
@ 2026-03-27  9:15 ` Pedro Falcato
  2026-03-27  9:16 ` Lorenzo Stoakes (Oracle)
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 8+ messages in thread
From: Pedro Falcato @ 2026-03-27  9:15 UTC (permalink / raw)
  To: Lorenzo Stoakes (Oracle)
  Cc: Andrew Morton, Liam R . Howlett, Vlastimil Babka, Jann Horn,
	Jeff Xu, David Hildenbrand, linux-mm, linux-kernel, antonius

On Fri, Mar 27, 2026 at 09:06:40AM +0000, Lorenzo Stoakes (Oracle) wrote:
> Previously we stored the end of the current VMA in curr_end, and then upon
> iterating to the next VMA updated curr_start to curr_end to advance to the
> next VMA.
> 
> However, this doesn't take into account the fact that a VMA might be
> updated due to a merge by vma_modify_flags(), which can result in curr_end
> being stale and thus, upon setting curr_start to curr_end, ending up with
> an incorrect curr_start on the next iteration.
> 
> Resolve the issue by setting curr_end to vma->vm_end unconditionally to
> ensure this value remains updated should this occur.
> 
> Signed-off-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
> Fixes: 6c2da14ae1e0 ("mm/mseal: rework mseal apply logic")
> Cc: <stable@vger.kernel.org>
> Reported-by: Antonius <antonius@bluedragonsec.com>
> Closes: https://lore.kernel.org/linux-mm/CAK8a0jyHXqBpt8Xe8v9SNDbnRiwz7OthA8SKY=NLRY7smPEP3Q@mail.gmail.com/

Reviewed-by: Pedro Falcato <pfalcato@suse.de>

-- 
Pedro


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH mm-hotfixes] mm/mseal: update VMA end correctly on merge
  2026-03-27  9:06 [PATCH mm-hotfixes] mm/mseal: update VMA end correctly on merge Lorenzo Stoakes (Oracle)
  2026-03-27  9:15 ` Pedro Falcato
@ 2026-03-27  9:16 ` Lorenzo Stoakes (Oracle)
  2026-03-27 13:22 ` Vlastimil Babka (SUSE)
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 8+ messages in thread
From: Lorenzo Stoakes (Oracle) @ 2026-03-27  9:16 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Liam R . Howlett, Vlastimil Babka, Jann Horn, Pedro Falcato,
	Jeff Xu, David Hildenbrand, linux-mm, linux-kernel, antonius

On Fri, Mar 27, 2026 at 09:06:40AM +0000, Lorenzo Stoakes (Oracle) wrote:
> Previously we stored the end of the current VMA in curr_end, and then upon
> iterating to the next VMA updated curr_start to curr_end to advance to the
> next VMA.
>
> However, this doesn't take into account the fact that a VMA might be
> updated due to a merge by vma_modify_flags(), which can result in curr_end
> being stale and thus, upon setting curr_start to curr_end, ending up with
> an incorrect curr_start on the next iteration.
>
> Resolve the issue by setting curr_end to vma->vm_end unconditionally to
> ensure this value remains updated should this occur.
>
> Signed-off-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
> Fixes: 6c2da14ae1e0 ("mm/mseal: rework mseal apply logic")
> Cc: <stable@vger.kernel.org>
> Reported-by: Antonius <antonius@bluedragonsec.com>
> Closes: https://lore.kernel.org/linux-mm/CAK8a0jyHXqBpt8Xe8v9SNDbnRiwz7OthA8SKY=NLRY7smPEP3Q@mail.gmail.com/

Oops, Closes should be:

https://lore.kernel.org/linux-mm/CAK8a0jwWGj9-SgFk0yKFh7i8jMkwKm5b0ao9=kmXWjO54veX2g@mail.gmail.com/

Cheers, Lorenzo

> ---
>  mm/mseal.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/mm/mseal.c b/mm/mseal.c
> index 316b5e1dec78..2d72a15d8ea1 100644
> --- a/mm/mseal.c
> +++ b/mm/mseal.c
> @@ -66,7 +66,7 @@ static int mseal_apply(struct mm_struct *mm,
>  		prev = vma;
>
>  	for_each_vma_range(vmi, vma, end) {
> -		const unsigned long curr_end = MIN(vma->vm_end, end);
> +		unsigned long curr_end = MIN(vma->vm_end, end);
>
>  		if (!(vma->vm_flags & VM_SEALED)) {
>  			vm_flags_t vm_flags = vma->vm_flags | VM_SEALED;
> @@ -76,6 +76,7 @@ static int mseal_apply(struct mm_struct *mm,
>  			if (IS_ERR(vma))
>  				return PTR_ERR(vma);
>  			vm_flags_set(vma, VM_SEALED);
> +			curr_end = vma->vm_end; /* Merge may have updated. */
>  		}
>
>  		prev = vma;
> --
> 2.53.0


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH mm-hotfixes] mm/mseal: update VMA end correctly on merge
  2026-03-27  9:06 [PATCH mm-hotfixes] mm/mseal: update VMA end correctly on merge Lorenzo Stoakes (Oracle)
  2026-03-27  9:15 ` Pedro Falcato
  2026-03-27  9:16 ` Lorenzo Stoakes (Oracle)
@ 2026-03-27 13:22 ` Vlastimil Babka (SUSE)
  2026-03-27 15:24 ` Andrew Morton
  2026-03-27 16:57 ` David Hildenbrand (Arm)
  4 siblings, 0 replies; 8+ messages in thread
From: Vlastimil Babka (SUSE) @ 2026-03-27 13:22 UTC (permalink / raw)
  To: Lorenzo Stoakes (Oracle), Andrew Morton
  Cc: Liam R . Howlett, Jann Horn, Pedro Falcato, Jeff Xu,
	David Hildenbrand, linux-mm, linux-kernel, antonius

On 3/27/26 10:06, Lorenzo Stoakes (Oracle) wrote:
> Previously we stored the end of the current VMA in curr_end, and then upon
> iterating to the next VMA updated curr_start to curr_end to advance to the
> next VMA.
> 
> However, this doesn't take into account the fact that a VMA might be
> updated due to a merge by vma_modify_flags(), which can result in curr_end
> being stale and thus, upon setting curr_start to curr_end, ending up with
> an incorrect curr_start on the next iteration.
> 
> Resolve the issue by setting curr_end to vma->vm_end unconditionally to
> ensure this value remains updated should this occur.
> 
> Signed-off-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
> Fixes: 6c2da14ae1e0 ("mm/mseal: rework mseal apply logic")
> Cc: <stable@vger.kernel.org>
> Reported-by: Antonius <antonius@bluedragonsec.com>
> Closes: https://lore.kernel.org/linux-mm/CAK8a0jyHXqBpt8Xe8v9SNDbnRiwz7OthA8SKY=NLRY7smPEP3Q@mail.gmail.com/

Acked-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>

Thanks!

> ---
>  mm/mseal.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/mseal.c b/mm/mseal.c
> index 316b5e1dec78..2d72a15d8ea1 100644
> --- a/mm/mseal.c
> +++ b/mm/mseal.c
> @@ -66,7 +66,7 @@ static int mseal_apply(struct mm_struct *mm,
>  		prev = vma;
> 
>  	for_each_vma_range(vmi, vma, end) {
> -		const unsigned long curr_end = MIN(vma->vm_end, end);
> +		unsigned long curr_end = MIN(vma->vm_end, end);
> 
>  		if (!(vma->vm_flags & VM_SEALED)) {
>  			vm_flags_t vm_flags = vma->vm_flags | VM_SEALED;
> @@ -76,6 +76,7 @@ static int mseal_apply(struct mm_struct *mm,
>  			if (IS_ERR(vma))
>  				return PTR_ERR(vma);
>  			vm_flags_set(vma, VM_SEALED);
> +			curr_end = vma->vm_end; /* Merge may have updated. */
>  		}
> 
>  		prev = vma;
> --
> 2.53.0



^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH mm-hotfixes] mm/mseal: update VMA end correctly on merge
  2026-03-27  9:06 [PATCH mm-hotfixes] mm/mseal: update VMA end correctly on merge Lorenzo Stoakes (Oracle)
                   ` (2 preceding siblings ...)
  2026-03-27 13:22 ` Vlastimil Babka (SUSE)
@ 2026-03-27 15:24 ` Andrew Morton
  2026-03-27 15:52   ` Lorenzo Stoakes (Oracle)
  2026-03-27 16:57 ` David Hildenbrand (Arm)
  4 siblings, 1 reply; 8+ messages in thread
From: Andrew Morton @ 2026-03-27 15:24 UTC (permalink / raw)
  To: Lorenzo Stoakes (Oracle)
  Cc: Liam R . Howlett, Vlastimil Babka, Jann Horn, Pedro Falcato,
	Jeff Xu, David Hildenbrand, linux-mm, linux-kernel, antonius

On Fri, 27 Mar 2026 09:06:40 +0000 "Lorenzo Stoakes (Oracle)" <ljs@kernel.org> wrote:

> --- a/mm/mseal.c
> +++ b/mm/mseal.c
> @@ -66,7 +66,7 @@ static int mseal_apply(struct mm_struct *mm,
>  		prev = vma;
> 
>  	for_each_vma_range(vmi, vma, end) {
> -		const unsigned long curr_end = MIN(vma->vm_end, end);
> +		unsigned long curr_end = MIN(vma->vm_end, end);
> 
>  		if (!(vma->vm_flags & VM_SEALED)) {
>  			vm_flags_t vm_flags = vma->vm_flags | VM_SEALED;
> @@ -76,6 +76,7 @@ static int mseal_apply(struct mm_struct *mm,
>  			if (IS_ERR(vma))
>  				return PTR_ERR(vma);
>  			vm_flags_set(vma, VM_SEALED);
> +			curr_end = vma->vm_end; /* Merge may have updated. */
>  		}
> 
>  		prev = vma;

This led to some rework in your "mm/vma: convert
vma_modify_flags[_uffd]() to use vma_flags_t".  Please check my
handiwork.

reject:

--- mm/mseal.c~mm-vma-convert-vma_modify_flags-to-use-vma_flags_t
+++ mm/mseal.c
@@ -68,14 +68,17 @@ static int mseal_apply(struct mm_struct
 	for_each_vma_range(vmi, vma, end) {
 		const unsigned long curr_end = MIN(vma->vm_end, end);
 
-		if (!(vma->vm_flags & VM_SEALED)) {
-			vm_flags_t vm_flags = vma->vm_flags | VM_SEALED;
+		if (!vma_test(vma, VMA_SEALED_BIT)) {
+			vma_flags_t vma_flags = vma->flags;
+
+			vma_flags_set(&vma_flags, VMA_SEALED_BIT);
 
 			vma = vma_modify_flags(&vmi, prev, vma, curr_start,
-					       curr_end, &vm_flags);
+					       curr_end, &vma_flags);
 			if (IS_ERR(vma))
 				return PTR_ERR(vma);
-			vm_flags_set(vma, VM_SEALED);
+			vma_start_write(vma);
+			vma_set_flags(vma, VMA_SEALED_BIT);
 		}
 
 		prev = vma;

resolution:

static int mseal_apply(struct mm_struct *mm,
		unsigned long start, unsigned long end)
{
	struct vm_area_struct *vma, *prev;
	unsigned long curr_start = start;
	VMA_ITERATOR(vmi, mm, start);

	/* We know there are no gaps so this will be non-NULL. */
	vma = vma_iter_load(&vmi);
	prev = vma_prev(&vmi);
	if (start > vma->vm_start)
		prev = vma;

	for_each_vma_range(vmi, vma, end) {
		unsigned long curr_end = MIN(vma->vm_end, end);

		if (!vma_test(vma, VMA_SEALED_BIT)) {
			vma_flags_t vma_flags = vma->flags;

			vma_flags_set(&vma_flags, VMA_SEALED_BIT);

			vma = vma_modify_flags(&vmi, prev, vma, curr_start,
					       curr_end, &vma_flags);
			if (IS_ERR(vma))
				return PTR_ERR(vma);
			vma_start_write(vma);
			vma_set_flags(vma, VMA_SEALED_BIT);
			curr_end = vma->vm_end; /* Merge may have updated. */
		}

		prev = vma;
		curr_start = curr_end;
	}

	return 0;
}



^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH mm-hotfixes] mm/mseal: update VMA end correctly on merge
  2026-03-27 15:24 ` Andrew Morton
@ 2026-03-27 15:52   ` Lorenzo Stoakes (Oracle)
  0 siblings, 0 replies; 8+ messages in thread
From: Lorenzo Stoakes (Oracle) @ 2026-03-27 15:52 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Liam R . Howlett, Vlastimil Babka, Jann Horn, Pedro Falcato,
	Jeff Xu, David Hildenbrand, linux-mm, linux-kernel, antonius

On Fri, Mar 27, 2026 at 08:24:46AM -0700, Andrew Morton wrote:
> On Fri, 27 Mar 2026 09:06:40 +0000 "Lorenzo Stoakes (Oracle)" <ljs@kernel.org> wrote:
>
> > --- a/mm/mseal.c
> > +++ b/mm/mseal.c
> > @@ -66,7 +66,7 @@ static int mseal_apply(struct mm_struct *mm,
> >  		prev = vma;
> >
> >  	for_each_vma_range(vmi, vma, end) {
> > -		const unsigned long curr_end = MIN(vma->vm_end, end);
> > +		unsigned long curr_end = MIN(vma->vm_end, end);
> >
> >  		if (!(vma->vm_flags & VM_SEALED)) {
> >  			vm_flags_t vm_flags = vma->vm_flags | VM_SEALED;
> > @@ -76,6 +76,7 @@ static int mseal_apply(struct mm_struct *mm,
> >  			if (IS_ERR(vma))
> >  				return PTR_ERR(vma);
> >  			vm_flags_set(vma, VM_SEALED);
> > +			curr_end = vma->vm_end; /* Merge may have updated. */
> >  		}
> >
> >  		prev = vma;
>
> This led to some rework in your "mm/vma: convert
> vma_modify_flags[_uffd]() to use vma_flags_t".  Please check my
> handiwork.
>
> reject:
>
> --- mm/mseal.c~mm-vma-convert-vma_modify_flags-to-use-vma_flags_t
> +++ mm/mseal.c
> @@ -68,14 +68,17 @@ static int mseal_apply(struct mm_struct
>  	for_each_vma_range(vmi, vma, end) {
>  		const unsigned long curr_end = MIN(vma->vm_end, end);
>
> -		if (!(vma->vm_flags & VM_SEALED)) {
> -			vm_flags_t vm_flags = vma->vm_flags | VM_SEALED;
> +		if (!vma_test(vma, VMA_SEALED_BIT)) {
> +			vma_flags_t vma_flags = vma->flags;
> +
> +			vma_flags_set(&vma_flags, VMA_SEALED_BIT);
>
>  			vma = vma_modify_flags(&vmi, prev, vma, curr_start,
> -					       curr_end, &vm_flags);
> +					       curr_end, &vma_flags);
>  			if (IS_ERR(vma))
>  				return PTR_ERR(vma);
> -			vm_flags_set(vma, VM_SEALED);
> +			vma_start_write(vma);
> +			vma_set_flags(vma, VMA_SEALED_BIT);
>  		}
>
>  		prev = vma;
>
> resolution:
>
> static int mseal_apply(struct mm_struct *mm,
> 		unsigned long start, unsigned long end)
> {
> 	struct vm_area_struct *vma, *prev;
> 	unsigned long curr_start = start;
> 	VMA_ITERATOR(vmi, mm, start);
>
> 	/* We know there are no gaps so this will be non-NULL. */
> 	vma = vma_iter_load(&vmi);
> 	prev = vma_prev(&vmi);
> 	if (start > vma->vm_start)
> 		prev = vma;
>
> 	for_each_vma_range(vmi, vma, end) {
> 		unsigned long curr_end = MIN(vma->vm_end, end);
>
> 		if (!vma_test(vma, VMA_SEALED_BIT)) {
> 			vma_flags_t vma_flags = vma->flags;
>
> 			vma_flags_set(&vma_flags, VMA_SEALED_BIT);
>
> 			vma = vma_modify_flags(&vmi, prev, vma, curr_start,
> 					       curr_end, &vma_flags);
> 			if (IS_ERR(vma))
> 				return PTR_ERR(vma);
> 			vma_start_write(vma);
> 			vma_set_flags(vma, VMA_SEALED_BIT);
> 			curr_end = vma->vm_end; /* Merge may have updated. */
> 		}
>
> 		prev = vma;
> 		curr_start = curr_end;
> 	}
>
> 	return 0;
> }
>

Thanks, that looks correct!

Cheers, Lorenzo


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH mm-hotfixes] mm/mseal: update VMA end correctly on merge
  2026-03-27  9:06 [PATCH mm-hotfixes] mm/mseal: update VMA end correctly on merge Lorenzo Stoakes (Oracle)
                   ` (3 preceding siblings ...)
  2026-03-27 15:24 ` Andrew Morton
@ 2026-03-27 16:57 ` David Hildenbrand (Arm)
  2026-03-27 17:23   ` Lorenzo Stoakes (Oracle)
  4 siblings, 1 reply; 8+ messages in thread
From: David Hildenbrand (Arm) @ 2026-03-27 16:57 UTC (permalink / raw)
  To: Lorenzo Stoakes (Oracle), Andrew Morton
  Cc: Liam R . Howlett, Vlastimil Babka, Jann Horn, Pedro Falcato,
	Jeff Xu, linux-mm, linux-kernel, antonius

On 3/27/26 10:06, Lorenzo Stoakes (Oracle) wrote:
> Previously we stored the end of the current VMA in curr_end, and then upon
> iterating to the next VMA updated curr_start to curr_end to advance to the
> next VMA.
> 
> However, this doesn't take into account the fact that a VMA might be
> updated due to a merge by vma_modify_flags(), which can result in curr_end
> being stale and thus, upon setting curr_start to curr_end, ending up with
> an incorrect curr_start on the next iteration.
> 
> Resolve the issue by setting curr_end to vma->vm_end unconditionally to
> ensure this value remains updated should this occur.
> 
> Signed-off-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
> Fixes: 6c2da14ae1e0 ("mm/mseal: rework mseal apply logic")
> Cc: <stable@vger.kernel.org>
> Reported-by: Antonius <antonius@bluedragonsec.com>
> Closes: https://lore.kernel.org/linux-mm/CAK8a0jyHXqBpt8Xe8v9SNDbnRiwz7OthA8SKY=NLRY7smPEP3Q@mail.gmail.com/
> ---
>  mm/mseal.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/mseal.c b/mm/mseal.c
> index 316b5e1dec78..2d72a15d8ea1 100644
> --- a/mm/mseal.c
> +++ b/mm/mseal.c
> @@ -66,7 +66,7 @@ static int mseal_apply(struct mm_struct *mm,
>  		prev = vma;
> 
>  	for_each_vma_range(vmi, vma, end) {
> -		const unsigned long curr_end = MIN(vma->vm_end, end);
> +		unsigned long curr_end = MIN(vma->vm_end, end);
> 
>  		if (!(vma->vm_flags & VM_SEALED)) {
>  			vm_flags_t vm_flags = vma->vm_flags | VM_SEALED;
> @@ -76,6 +76,7 @@ static int mseal_apply(struct mm_struct *mm,
>  			if (IS_ERR(vma))
>  				return PTR_ERR(vma);
>  			vm_flags_set(vma, VM_SEALED);
> +			curr_end = vma->vm_end; /* Merge may have updated. */
>  		}


I was a bit confused about why curr_start is allowed to lie before the VMA
rather than within it. Then I recalled that range_contains_unmapped() checks
that there are no holes.


Would the following also sort out the problem and even simplify the code?

diff --git a/mm/mseal.c b/mm/mseal.c
index 603df53ad267..e2093ae3d25c 100644
--- a/mm/mseal.c
+++ b/mm/mseal.c
@@ -56,7 +56,6 @@ static int mseal_apply(struct mm_struct *mm,
                unsigned long start, unsigned long end)
 {
        struct vm_area_struct *vma, *prev;
-       unsigned long curr_start = start;
        VMA_ITERATOR(vmi, mm, start);
 
        /* We know there are no gaps so this will be non-NULL. */
@@ -66,6 +65,7 @@ static int mseal_apply(struct mm_struct *mm,
                prev = vma;
 
        for_each_vma_range(vmi, vma, end) {
+               const unsigned long curr_start = MAX(vma->vm_start, start);
                const unsigned long curr_end = MIN(vma->vm_end, end);
 
                if (!vma_test(vma, VMA_SEALED_BIT)) {
@@ -82,7 +82,6 @@ static int mseal_apply(struct mm_struct *mm,
                }
 
                prev = vma;
-               curr_start = curr_end;
        }
 
        return 0;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c


I might be wrong about that; I've been staring at the screen for too long today.


-- 
Cheers,

David


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [PATCH mm-hotfixes] mm/mseal: update VMA end correctly on merge
  2026-03-27 16:57 ` David Hildenbrand (Arm)
@ 2026-03-27 17:23   ` Lorenzo Stoakes (Oracle)
  0 siblings, 0 replies; 8+ messages in thread
From: Lorenzo Stoakes (Oracle) @ 2026-03-27 17:23 UTC (permalink / raw)
  To: David Hildenbrand (Arm)
  Cc: Andrew Morton, Liam R . Howlett, Vlastimil Babka, Jann Horn,
	Pedro Falcato, Jeff Xu, linux-mm, linux-kernel, antonius

On Fri, Mar 27, 2026 at 05:57:06PM +0100, David Hildenbrand (Arm) wrote:
> On 3/27/26 10:06, Lorenzo Stoakes (Oracle) wrote:
> > Previously we stored the end of the current VMA in curr_end, and then upon
> > iterating to the next VMA updated curr_start to curr_end to advance to the
> > next VMA.
> >
> > However, this doesn't take into account the fact that a VMA might be
> > updated due to a merge by vma_modify_flags(), which can result in curr_end
> > being stale and thus, upon setting curr_start to curr_end, ending up with
> > an incorrect curr_start on the next iteration.
> >
> > Resolve the issue by setting curr_end to vma->vm_end unconditionally to
> > ensure this value remains updated should this occur.
> >
> > Signed-off-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
> > Fixes: 6c2da14ae1e0 ("mm/mseal: rework mseal apply logic")
> > Cc: <stable@vger.kernel.org>
> > Reported-by: Antonius <antonius@bluedragonsec.com>
> > Closes: https://lore.kernel.org/linux-mm/CAK8a0jyHXqBpt8Xe8v9SNDbnRiwz7OthA8SKY=NLRY7smPEP3Q@mail.gmail.com/
> > ---
> >  mm/mseal.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/mseal.c b/mm/mseal.c
> > index 316b5e1dec78..2d72a15d8ea1 100644
> > --- a/mm/mseal.c
> > +++ b/mm/mseal.c
> > @@ -66,7 +66,7 @@ static int mseal_apply(struct mm_struct *mm,
> >  		prev = vma;
> >
> >  	for_each_vma_range(vmi, vma, end) {
> > -		const unsigned long curr_end = MIN(vma->vm_end, end);
> > +		unsigned long curr_end = MIN(vma->vm_end, end);
> >
> >  		if (!(vma->vm_flags & VM_SEALED)) {
> >  			vm_flags_t vm_flags = vma->vm_flags | VM_SEALED;
> > @@ -76,6 +76,7 @@ static int mseal_apply(struct mm_struct *mm,
> >  			if (IS_ERR(vma))
> >  				return PTR_ERR(vma);
> >  			vm_flags_set(vma, VM_SEALED);
> > +			curr_end = vma->vm_end; /* Merge may have updated. */
> >  		}
>
>
> I was a bit confused why curr_start is allowed to not start within the VMA,
> but before it. Then I recalled that range_contains_unmapped() checks for no holes.
>
>
> Would the following also sort out the problem and even simplify the code?
>
> diff --git a/mm/mseal.c b/mm/mseal.c
> index 603df53ad267..e2093ae3d25c 100644
> --- a/mm/mseal.c
> +++ b/mm/mseal.c
> @@ -56,7 +56,6 @@ static int mseal_apply(struct mm_struct *mm,
>                 unsigned long start, unsigned long end)
>  {
>         struct vm_area_struct *vma, *prev;
> -       unsigned long curr_start = start;
>         VMA_ITERATOR(vmi, mm, start);
>
>         /* We know there are no gaps so this will be non-NULL. */
> @@ -66,6 +65,7 @@ static int mseal_apply(struct mm_struct *mm,
>                 prev = vma;
>
>         for_each_vma_range(vmi, vma, end) {
> +               const unsigned long curr_start = MAX(vma->vm_start, start);

Yeah that's nice :)


>                 const unsigned long curr_end = MIN(vma->vm_end, end);
>
>                 if (!vma_test(vma, VMA_SEALED_BIT)) {
> @@ -82,7 +82,6 @@ static int mseal_apply(struct mm_struct *mm,
>                 }
>
>                 prev = vma;
> -               curr_start = curr_end;
>         }
>
>         return 0;
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>
>
> I might be wrong about that, I've been staring at the screen for too long today.

No, this is better. I've mulled it over, and this is much better :)

Sorry to be a pain, Andrew - let me respin this right now.

I will retain the tags as this is functionally equivalent; I have checked it
locally, confirmed the repro is solved, and checked it against AI review as
well.

>
>
> --
> Cheers,
>
> David

Cheers, Lorenzo


^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2026-03-27 17:23 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-03-27  9:06 [PATCH mm-hotfixes] mm/mseal: update VMA end correctly on merge Lorenzo Stoakes (Oracle)
2026-03-27  9:15 ` Pedro Falcato
2026-03-27  9:16 ` Lorenzo Stoakes (Oracle)
2026-03-27 13:22 ` Vlastimil Babka (SUSE)
2026-03-27 15:24 ` Andrew Morton
2026-03-27 15:52   ` Lorenzo Stoakes (Oracle)
2026-03-27 16:57 ` David Hildenbrand (Arm)
2026-03-27 17:23   ` Lorenzo Stoakes (Oracle)

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox