public inbox for linux-mm@kvack.org
 help / color / mirror / Atom feed
* [PATCH] mm: prevent droppable mappings from being locked
@ 2026-03-06 20:45 Anthony Yznaga
  2026-03-09 14:15 ` David Hildenbrand (Arm)
  2026-03-09 14:28 ` Lorenzo Stoakes (Oracle)
  0 siblings, 2 replies; 9+ messages in thread
From: Anthony Yznaga @ 2026-03-06 20:45 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: akpm, david, lorenzo.stoakes, Liam.Howlett, vbabka, rppt, surenb,
	mhocko, jannh, pfalcato, Jason

Mappings created with MAP_DROPPABLE cannot be locked via mlock() due
to the check in mlock_fixup(). However, they will be locked indirectly
if they are created after mlockall(MCL_FUTURE).

Fixes: 9651fcedf7b9 ("mm: add MAP_DROPPABLE for designating always lazily freeable mappings")
Signed-off-by: Anthony Yznaga <anthony.yznaga@oracle.com>
---
 include/linux/mm.h | 3 +++
 mm/mlock.c         | 4 ++--
 mm/vma.c           | 2 +-
 3 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 5be3d8a8f806..bb830574d112 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -574,6 +574,9 @@ enum {
 /* This mask represents all the VMA flag bits used by mlock */
 #define VM_LOCKED_MASK	(VM_LOCKED | VM_LOCKONFAULT)
 
+/* This mask prevents VMAs from being mlock'd */
+#define VM_NO_MLOCK_MASK	(VM_SPECIAL | VM_DROPPABLE)
+
 /* These flags can be updated atomically via VMA/mmap read lock. */
 #define VM_ATOMIC_SET_ALLOWED VM_MAYBE_GUARD
 
diff --git a/mm/mlock.c b/mm/mlock.c
index 2f699c3497a5..fd35c1e88c4c 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -472,9 +472,9 @@ static int mlock_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	int ret = 0;
 	vm_flags_t oldflags = vma->vm_flags;
 
-	if (newflags == oldflags || (oldflags & VM_SPECIAL) ||
+	if (newflags == oldflags || (oldflags & VM_NO_MLOCK_MASK) ||
 	    is_vm_hugetlb_page(vma) || vma == get_gate_vma(current->mm) ||
-	    vma_is_dax(vma) || vma_is_secretmem(vma) || (oldflags & VM_DROPPABLE))
+	    vma_is_dax(vma) || vma_is_secretmem(vma))
 		/* don't set VM_LOCKED or VM_LOCKONFAULT and don't count */
 		goto out;
 
diff --git a/mm/vma.c b/mm/vma.c
index be64f781a3aa..1334622e4a03 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -2589,7 +2589,7 @@ static void __mmap_complete(struct mmap_state *map, struct vm_area_struct *vma)
 
 	vm_stat_account(mm, vma->vm_flags, map->pglen);
 	if (vm_flags & VM_LOCKED) {
-		if ((vm_flags & VM_SPECIAL) || vma_is_dax(vma) ||
+		if ((vm_flags & VM_NO_MLOCK_MASK) || vma_is_dax(vma) ||
 					is_vm_hugetlb_page(vma) ||
 					vma == get_gate_vma(mm))
 			vm_flags_clear(vma, VM_LOCKED_MASK);
-- 
2.47.3



^ permalink raw reply related	[flat|nested] 9+ messages in thread

* Re: [PATCH] mm: prevent droppable mappings from being locked
  2026-03-06 20:45 [PATCH] mm: prevent droppable mappings from being locked Anthony Yznaga
@ 2026-03-09 14:15 ` David Hildenbrand (Arm)
  2026-03-09 14:31   ` Lorenzo Stoakes (Oracle)
                     ` (2 more replies)
  2026-03-09 14:28 ` Lorenzo Stoakes (Oracle)
  1 sibling, 3 replies; 9+ messages in thread
From: David Hildenbrand (Arm) @ 2026-03-09 14:15 UTC (permalink / raw)
  To: Anthony Yznaga, linux-mm, linux-kernel
  Cc: akpm, lorenzo.stoakes, Liam.Howlett, vbabka, rppt, surenb, mhocko,
	jannh, pfalcato, Jason

On 3/6/26 21:45, Anthony Yznaga wrote:
> Mappings created with MAP_DROPPABLE cannot be locked via mlock() due
> to the check in mlock_fixup(). However, they will be locked indirectly
> if they are created after mlockall(MCL_FUTURE).
> 
> Fixes: 9651fcedf7b9 ("mm: add MAP_DROPPABLE for designating always lazily freeable mappings")
> Signed-off-by: Anthony Yznaga <anthony.yznaga@oracle.com>
> ---
>  include/linux/mm.h | 3 +++
>  mm/mlock.c         | 4 ++--
>  mm/vma.c           | 2 +-
>  3 files changed, 6 insertions(+), 3 deletions(-)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 5be3d8a8f806..bb830574d112 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -574,6 +574,9 @@ enum {
>  /* This mask represents all the VMA flag bits used by mlock */
>  #define VM_LOCKED_MASK	(VM_LOCKED | VM_LOCKONFAULT)
>  
> +/* This mask prevents VMAs from being mlock'd */
> +#define VM_NO_MLOCK_MASK	(VM_SPECIAL | VM_DROPPABLE)

Instead of adding that, could we cleanup further by doing something like the following?

The usage of "vma->vm_mm" must be double checked, and we'll have to take care of making
the tools/testing/vma test happy.

Not even compile tested, so will require some more work.


diff --git a/include/linux/hugetlb_inline.h b/include/linux/hugetlb_inline.h
index 593f5d4e108b..755281fab23d 100644
--- a/include/linux/hugetlb_inline.h
+++ b/include/linux/hugetlb_inline.h
@@ -30,7 +30,7 @@ static inline bool is_vma_hugetlb_flags(const vma_flags_t *flags)
 
 #endif
 
-static inline bool is_vm_hugetlb_page(struct vm_area_struct *vma)
+static inline bool is_vm_hugetlb_page(const struct vm_area_struct *vma)
 {
 	return is_vm_hugetlb_flags(vma->vm_flags);
 }
diff --git a/mm/internal.h b/mm/internal.h
index 6e1162e13289..b70ebbdafe00 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1242,6 +1242,15 @@ static inline struct file *maybe_unlock_mmap_for_io(struct vm_fault *vmf,
 	}
 	return fpin;
 }
+
+static inline bool vma_supports_mlock(const struct vm_area_struct *vma)
+{
+	if (vma->vm_flags & (VM_SPECIAL | VM_DROPPABLE))
+		return false;
+	if (vma_is_dax(vma) || is_vm_hugetlb_page(vma))
+		return false;
+	return vma != get_gate_vma(vma->vm_mm);
+}
 #else /* !CONFIG_MMU */
 static inline void unmap_mapping_folio(struct folio *folio) { }
 static inline void mlock_new_folio(struct folio *folio) { }
diff --git a/mm/mlock.c b/mm/mlock.c
index 1a92d16f3684..e16b2ea234f7 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -472,9 +472,7 @@ static int mlock_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	int ret = 0;
 	vm_flags_t oldflags = vma->vm_flags;
 
-	if (newflags == oldflags || (oldflags & VM_SPECIAL) ||
-	    is_vm_hugetlb_page(vma) || vma == get_gate_vma(current->mm) ||
-	    vma_is_dax(vma) || vma_is_secretmem(vma) || (oldflags & VM_DROPPABLE))
+	if (newflags == oldflags || !vma_supports_mlock(vma))
 		/* don't set VM_LOCKED or VM_LOCKONFAULT and don't count */
 		goto out;
 
diff --git a/mm/vma.c b/mm/vma.c
index e95fd5a5fe5c..b7055c264b5d 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -2589,9 +2589,7 @@ static void __mmap_complete(struct mmap_state *map, struct vm_area_struct *vma)
 
 	vm_stat_account(mm, vma->vm_flags, map->pglen);
 	if (vm_flags & VM_LOCKED) {
-		if ((vm_flags & VM_SPECIAL) || vma_is_dax(vma) ||
-					is_vm_hugetlb_page(vma) ||
-					vma == get_gate_vma(mm))
+		if (!vma_supports_mlock(vma))
 			vm_flags_clear(vma, VM_LOCKED_MASK);
 		else
 			mm->locked_vm += map->pglen;
-- 
2.43.0

-- 
Cheers,

David


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* Re: [PATCH] mm: prevent droppable mappings from being locked
  2026-03-06 20:45 [PATCH] mm: prevent droppable mappings from being locked Anthony Yznaga
  2026-03-09 14:15 ` David Hildenbrand (Arm)
@ 2026-03-09 14:28 ` Lorenzo Stoakes (Oracle)
  2026-03-09 15:54   ` anthony.yznaga
  1 sibling, 1 reply; 9+ messages in thread
From: Lorenzo Stoakes (Oracle) @ 2026-03-09 14:28 UTC (permalink / raw)
  To: Anthony Yznaga
  Cc: linux-mm, linux-kernel, akpm, david, Liam.Howlett, vbabka, rppt,
	surenb, mhocko, jannh, pfalcato, Jason

-cc old mail (this is going to take some time to propagate I realise :P)

On Fri, Mar 06, 2026 at 12:45:50PM -0800, Anthony Yznaga wrote:
> Mappings created with MAP_DROPPABLE cannot be locked via mlock() due
> to the check in mlock_fixup(). However, they will be locked indirectly
> if they are created after mlockall(MCL_FUTURE).

You need to add more details here.

For e.g.: 'in apply_mlockall_flags(), if the flags parameter has MCL_FUTURE set,
the current task's mm's default VMA flag field mm->def_flags has VM_LOCKED
applied to it. Therefore, in __mmap_complete(), extend the test for VM_SPECIAL
to include a test for VM_DROPPABLE'.

Do you have a test that can check for this? It'd be good to have a regression
test to assert that it now behaves correctly.

You could extend either tools/testing/selftests/mm/mlock2-tests.c or
droppable.c?

It's worth mentioning that mlockall(MCL_ONFAULT) is handled too, as
VM_LOCKONFAULT is always set with VM_LOCKED (the only difference being that,
when trying to fault in memory for VM_LOCKED ranges, gup exits early in
populate_vma_page_range(), which has an explicit test for VM_LOCKONFAULT), and
apply_mlockall_flags() will invoke mlock_fixup() which already has the
VM_DROPPABLE check.

>
> Fixes: 9651fcedf7b9 ("mm: add MAP_DROPPABLE for designating always lazily freeable mappings")

Do we want to cc: stable here?

> Signed-off-by: Anthony Yznaga <anthony.yznaga@oracle.com>
> ---
>  include/linux/mm.h | 3 +++
>  mm/mlock.c         | 4 ++--
>  mm/vma.c           | 2 +-
>  3 files changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 5be3d8a8f806..bb830574d112 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -574,6 +574,9 @@ enum {
>  /* This mask represents all the VMA flag bits used by mlock */
>  #define VM_LOCKED_MASK	(VM_LOCKED | VM_LOCKONFAULT)
>
> +/* This mask prevents VMAs from being mlock'd */
> +#define VM_NO_MLOCK_MASK	(VM_SPECIAL | VM_DROPPABLE)
> +

It'd be preferable to not use the legacy VMA flags implementation, but if we're
backporting I guess... However there's only one place you need to update, the
other already manually checks droppable, and it'd make my life easier for the
VMA flags conversions to not define a flag like this also :)

>  /* These flags can be updated atomically via VMA/mmap read lock. */
>  #define VM_ATOMIC_SET_ALLOWED VM_MAYBE_GUARD
>
> diff --git a/mm/mlock.c b/mm/mlock.c
> index 2f699c3497a5..fd35c1e88c4c 100644
> --- a/mm/mlock.c
> +++ b/mm/mlock.c
> @@ -472,9 +472,9 @@ static int mlock_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
>  	int ret = 0;
>  	vm_flags_t oldflags = vma->vm_flags;
>
> -	if (newflags == oldflags || (oldflags & VM_SPECIAL) ||
> +	if (newflags == oldflags || (oldflags & VM_NO_MLOCK_MASK) ||
>  	    is_vm_hugetlb_page(vma) || vma == get_gate_vma(current->mm) ||
> -	    vma_is_dax(vma) || vma_is_secretmem(vma) || (oldflags & VM_DROPPABLE))
> +	    vma_is_dax(vma) || vma_is_secretmem(vma))

This obviously wouldn't be necessary without adding a new VM_xxx...

>  		/* don't set VM_LOCKED or VM_LOCKONFAULT and don't count */
>  		goto out;
>
> diff --git a/mm/vma.c b/mm/vma.c
> index be64f781a3aa..1334622e4a03 100644
> --- a/mm/vma.c
> +++ b/mm/vma.c
> @@ -2589,7 +2589,7 @@ static void __mmap_complete(struct mmap_state *map, struct vm_area_struct *vma)
>
>  	vm_stat_account(mm, vma->vm_flags, map->pglen);
>  	if (vm_flags & VM_LOCKED) {
> -		if ((vm_flags & VM_SPECIAL) || vma_is_dax(vma) ||
> +		if ((vm_flags & VM_NO_MLOCK_MASK) || vma_is_dax(vma) ||

For backport maybe just put an additional vm_flags & VM_DROPPABLE here?

>  					is_vm_hugetlb_page(vma) ||
>  					vma == get_gate_vma(mm))
>  			vm_flags_clear(vma, VM_LOCKED_MASK);
> --
> 2.47.3
>

Though I saw David suggested something different so that also addresses my review here :)

Cheers, Lorenzo


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH] mm: prevent droppable mappings from being locked
  2026-03-09 14:15 ` David Hildenbrand (Arm)
@ 2026-03-09 14:31   ` Lorenzo Stoakes (Oracle)
  2026-03-09 15:55     ` anthony.yznaga
  2026-03-09 15:39   ` anthony.yznaga
  2026-03-10  2:04   ` anthony.yznaga
  2 siblings, 1 reply; 9+ messages in thread
From: Lorenzo Stoakes (Oracle) @ 2026-03-09 14:31 UTC (permalink / raw)
  To: David Hildenbrand (Arm)
  Cc: Anthony Yznaga, linux-mm, linux-kernel, akpm, lorenzo.stoakes,
	Liam.Howlett, vbabka, rppt, surenb, mhocko, jannh, pfalcato,
	Jason

On Mon, Mar 09, 2026 at 03:15:24PM +0100, David Hildenbrand (Arm) wrote:
> On 3/6/26 21:45, Anthony Yznaga wrote:
> > Mappings created with MAP_DROPPABLE cannot be locked via mlock() due
> > to the check in mlock_fixup(). However, they will be locked indirectly
> > if they are created after mlockall(MCL_FUTURE).
> >
> > Fixes: 9651fcedf7b9 ("mm: add MAP_DROPPABLE for designating always lazily freeable mappings")
> > Signed-off-by: Anthony Yznaga <anthony.yznaga@oracle.com>
> > ---
> >  include/linux/mm.h | 3 +++
> >  mm/mlock.c         | 4 ++--
> >  mm/vma.c           | 2 +-
> >  3 files changed, 6 insertions(+), 3 deletions(-)
> >
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index 5be3d8a8f806..bb830574d112 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -574,6 +574,9 @@ enum {
> >  /* This mask represents all the VMA flag bits used by mlock */
> >  #define VM_LOCKED_MASK	(VM_LOCKED | VM_LOCKONFAULT)
> >
> > +/* This mask prevents VMAs from being mlock'd */
> > +#define VM_NO_MLOCK_MASK	(VM_SPECIAL | VM_DROPPABLE)
>
> Instead of adding that, could we cleanup further by doing something like the following?
>
> The usage of "vma->vm_mm" must be double checked, and we'll have to take care of making
> the tools/testing/vma test happy.

Yeah Anthony - please do a simple:

$ cd tools/testing/vma
$ make && ./vma

To make sure that your changes don't introduce anything that breaks that.

If you need to add duplicate header defines put them in
tools/testing/vma/include/dup.h, if you need to stub stuff out put in stubs.h
and if you need to customise something for testing purposes, put it in custom.h.

>
> Not even compile tested, so will require some more work.
>
>
> diff --git a/include/linux/hugetlb_inline.h b/include/linux/hugetlb_inline.h
> index 593f5d4e108b..755281fab23d 100644
> --- a/include/linux/hugetlb_inline.h
> +++ b/include/linux/hugetlb_inline.h
> @@ -30,7 +30,7 @@ static inline bool is_vma_hugetlb_flags(const vma_flags_t *flags)
>
>  #endif
>
> -static inline bool is_vm_hugetlb_page(struct vm_area_struct *vma)
> +static inline bool is_vm_hugetlb_page(const struct vm_area_struct *vma)
>  {
>  	return is_vm_hugetlb_flags(vma->vm_flags);
>  }
> diff --git a/mm/internal.h b/mm/internal.h
> index 6e1162e13289..b70ebbdafe00 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -1242,6 +1242,15 @@ static inline struct file *maybe_unlock_mmap_for_io(struct vm_fault *vmf,
>  	}
>  	return fpin;
>  }
> +
> +static inline bool vma_supports_mlock(const struct vm_area_struct *vma)
> +{
> +	if (vma->vm_flags & (VM_SPECIAL | VM_DROPPABLE))
> +		return false;
> +	if (vma_is_dax(vma) || is_vm_hugetlb_page(vma))
> +		return false;
> +	return vma != get_gate_vma(vma->vm_mm);
> +}

Yeah this is nice.

>  #else /* !CONFIG_MMU */
>  static inline void unmap_mapping_folio(struct folio *folio) { }
>  static inline void mlock_new_folio(struct folio *folio) { }
> diff --git a/mm/mlock.c b/mm/mlock.c
> index 1a92d16f3684..e16b2ea234f7 100644
> --- a/mm/mlock.c
> +++ b/mm/mlock.c
> @@ -472,9 +472,7 @@ static int mlock_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
>  	int ret = 0;
>  	vm_flags_t oldflags = vma->vm_flags;
>
> -	if (newflags == oldflags || (oldflags & VM_SPECIAL) ||
> -	    is_vm_hugetlb_page(vma) || vma == get_gate_vma(current->mm) ||
> -	    vma_is_dax(vma) || vma_is_secretmem(vma) || (oldflags & VM_DROPPABLE))
> +	if (newflags == oldflags || !vma_supports_mlock(vma))
>  		/* don't set VM_LOCKED or VM_LOCKONFAULT and don't count */
>  		goto out;
>
> diff --git a/mm/vma.c b/mm/vma.c
> index e95fd5a5fe5c..b7055c264b5d 100644
> --- a/mm/vma.c
> +++ b/mm/vma.c
> @@ -2589,9 +2589,7 @@ static void __mmap_complete(struct mmap_state *map, struct vm_area_struct *vma)
>
>  	vm_stat_account(mm, vma->vm_flags, map->pglen);
>  	if (vm_flags & VM_LOCKED) {
> -		if ((vm_flags & VM_SPECIAL) || vma_is_dax(vma) ||
> -					is_vm_hugetlb_page(vma) ||
> -					vma == get_gate_vma(mm))
> +		if (!vma_supports_mlock(vma))
>  			vm_flags_clear(vma, VM_LOCKED_MASK);
>  		else
>  			mm->locked_vm += map->pglen;

Very much preferable!

> --
> 2.43.0
>
> --
> Cheers,
>
> David

Cheers, Lorenzo


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH] mm: prevent droppable mappings from being locked
  2026-03-09 14:15 ` David Hildenbrand (Arm)
  2026-03-09 14:31   ` Lorenzo Stoakes (Oracle)
@ 2026-03-09 15:39   ` anthony.yznaga
  2026-03-10  2:04   ` anthony.yznaga
  2 siblings, 0 replies; 9+ messages in thread
From: anthony.yznaga @ 2026-03-09 15:39 UTC (permalink / raw)
  To: David Hildenbrand (Arm), linux-mm, linux-kernel
  Cc: akpm, lorenzo.stoakes, Liam.Howlett, vbabka, rppt, surenb, mhocko,
	jannh, pfalcato, Jason


On 3/9/26 7:15 AM, David Hildenbrand (Arm) wrote:
> On 3/6/26 21:45, Anthony Yznaga wrote:
>> Mappings created with MAP_DROPPABLE cannot be locked via mlock() due
>> to the check in mlock_fixup(). However, they will be locked indirectly
>> if they are created after mlockall(MCL_FUTURE).
>>
>> Fixes: 9651fcedf7b9 ("mm: add MAP_DROPPABLE for designating always lazily freeable mappings")
>> Signed-off-by: Anthony Yznaga <anthony.yznaga@oracle.com>
>> ---
>>   include/linux/mm.h | 3 +++
>>   mm/mlock.c         | 4 ++--
>>   mm/vma.c           | 2 +-
>>   3 files changed, 6 insertions(+), 3 deletions(-)
>>
>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>> index 5be3d8a8f806..bb830574d112 100644
>> --- a/include/linux/mm.h
>> +++ b/include/linux/mm.h
>> @@ -574,6 +574,9 @@ enum {
>>   /* This mask represents all the VMA flag bits used by mlock */
>>   #define VM_LOCKED_MASK	(VM_LOCKED | VM_LOCKONFAULT)
>>   
>> +/* This mask prevents VMAs from being mlock'd */
>> +#define VM_NO_MLOCK_MASK	(VM_SPECIAL | VM_DROPPABLE)
> Instead of adding that, could we cleanup further by doing something like the following?
>
> The usage of "vma->vm_mm" must be double checked, and we'll have to take care of making
> the tools/testing/vma test happy.
>
> Not even compile tested, so will require some more work.

Thanks, David. This is a better approach; I'll implement it. One thing 
to note is that the secretmem check has to stay in mlock_fixup() because 
it prevents the always-locked memory from being unlocked. I can add an 
extra comment explaining that.

Anthony

>
>
> diff --git a/include/linux/hugetlb_inline.h b/include/linux/hugetlb_inline.h
> index 593f5d4e108b..755281fab23d 100644
> --- a/include/linux/hugetlb_inline.h
> +++ b/include/linux/hugetlb_inline.h
> @@ -30,7 +30,7 @@ static inline bool is_vma_hugetlb_flags(const vma_flags_t *flags)
>   
>   #endif
>   
> -static inline bool is_vm_hugetlb_page(struct vm_area_struct *vma)
> +static inline bool is_vm_hugetlb_page(const struct vm_area_struct *vma)
>   {
>   	return is_vm_hugetlb_flags(vma->vm_flags);
>   }
> diff --git a/mm/internal.h b/mm/internal.h
> index 6e1162e13289..b70ebbdafe00 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -1242,6 +1242,15 @@ static inline struct file *maybe_unlock_mmap_for_io(struct vm_fault *vmf,
>   	}
>   	return fpin;
>   }
> +
> +static inline bool vma_supports_mlock(const struct vm_area_struct *vma)
> +{
> +	if (vma->vm_flags & (VM_SPECIAL | VM_DROPPABLE))
> +		return false;
> +	if (vma_is_dax(vma) || is_vm_hugetlb_page(vma))
> +		return false;
> +	return vma != get_gate_vma(vma->vm_mm);
> +}
>   #else /* !CONFIG_MMU */
>   static inline void unmap_mapping_folio(struct folio *folio) { }
>   static inline void mlock_new_folio(struct folio *folio) { }
> diff --git a/mm/mlock.c b/mm/mlock.c
> index 1a92d16f3684..e16b2ea234f7 100644
> --- a/mm/mlock.c
> +++ b/mm/mlock.c
> @@ -472,9 +472,7 @@ static int mlock_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
>   	int ret = 0;
>   	vm_flags_t oldflags = vma->vm_flags;
>   
> -	if (newflags == oldflags || (oldflags & VM_SPECIAL) ||
> -	    is_vm_hugetlb_page(vma) || vma == get_gate_vma(current->mm) ||
> -	    vma_is_dax(vma) || vma_is_secretmem(vma) || (oldflags & VM_DROPPABLE))
> +	if (newflags == oldflags || !vma_supports_mlock(vma))
>   		/* don't set VM_LOCKED or VM_LOCKONFAULT and don't count */
>   		goto out;
>   
> diff --git a/mm/vma.c b/mm/vma.c
> index e95fd5a5fe5c..b7055c264b5d 100644
> --- a/mm/vma.c
> +++ b/mm/vma.c
> @@ -2589,9 +2589,7 @@ static void __mmap_complete(struct mmap_state *map, struct vm_area_struct *vma)
>   
>   	vm_stat_account(mm, vma->vm_flags, map->pglen);
>   	if (vm_flags & VM_LOCKED) {
> -		if ((vm_flags & VM_SPECIAL) || vma_is_dax(vma) ||
> -					is_vm_hugetlb_page(vma) ||
> -					vma == get_gate_vma(mm))
> +		if (!vma_supports_mlock(vma))
>   			vm_flags_clear(vma, VM_LOCKED_MASK);
>   		else
>   			mm->locked_vm += map->pglen;


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH] mm: prevent droppable mappings from being locked
  2026-03-09 14:28 ` Lorenzo Stoakes (Oracle)
@ 2026-03-09 15:54   ` anthony.yznaga
  0 siblings, 0 replies; 9+ messages in thread
From: anthony.yznaga @ 2026-03-09 15:54 UTC (permalink / raw)
  To: Lorenzo Stoakes (Oracle)
  Cc: linux-mm, linux-kernel, akpm, david, Liam.Howlett, vbabka, rppt,
	surenb, mhocko, jannh, pfalcato, Jason


On 3/9/26 7:28 AM, Lorenzo Stoakes (Oracle) wrote:
> -cc old mail (this is going to take some time to propagate I realise :P)
>
> On Fri, Mar 06, 2026 at 12:45:50PM -0800, Anthony Yznaga wrote:
>> Mappings created with MAP_DROPPABLE cannot be locked via mlock() due
>> to the check in mlock_fixup(). However, they will be locked indirectly
>> if they are created after mlockall(MCL_FUTURE).
> You need to add more details here.
>
> For e.g.: 'in apply_mlockall_flags(), if the flags parameter has MCL_FUTURE set,
> the current task's mm's default VMA flag field mm->def_flags has VM_LOCKED
> applied to it. Therefore, in __mmap_complete(), extend the test for VM_SPECIAL
> to include a test for VM_DROPPABLE'.
>
> Do you have a test that can check for this? It'd be good to have a regression
> test to assert that it now behaves correctly.
>
> You could extend either tools/testing/selftests/mm/mlock2-tests.c or
> droppable.c?
>
> It's worth mentioning that mlockall(MCL_ONFAULT) is handled too, as
> VM_LOCKONFAULT is always set with VM_LOCKED (the only difference being that,
> when trying to fault in memory for VM_LOCKED ranges, gup exits early in
> populate_vma_page_range() which has an explicit test for VM_LOCKONFAULT) , and
> apply_mlockall_flags() will invoke mlock_fixup() which already has the
> VM_DROPPABLE check.

I'll add a more detailed commit message and look at adding a test.


>
>> Fixes: 9651fcedf7b9 ("mm: add MAP_DROPPABLE for designating always lazily freeable mappings")
> Do we want to cc: stable here?

I don't think so? It seems unlikely to be hit in practice, but I 
couldn't say for sure.

Anthony

>
>> Signed-off-by: Anthony Yznaga <anthony.yznaga@oracle.com>
>> ---
>>   include/linux/mm.h | 3 +++
>>   mm/mlock.c         | 4 ++--
>>   mm/vma.c           | 2 +-
>>   3 files changed, 6 insertions(+), 3 deletions(-)
>>
>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>> index 5be3d8a8f806..bb830574d112 100644
>> --- a/include/linux/mm.h
>> +++ b/include/linux/mm.h
>> @@ -574,6 +574,9 @@ enum {
>>   /* This mask represents all the VMA flag bits used by mlock */
>>   #define VM_LOCKED_MASK	(VM_LOCKED | VM_LOCKONFAULT)
>>
>> +/* This mask prevents VMAs from being mlock'd */
>> +#define VM_NO_MLOCK_MASK	(VM_SPECIAL | VM_DROPPABLE)
>> +
> It'd be preferable to not use the legacy VMA flags implementation, but if we're
> backporting I guess... However there's only one place you need to update, the
> other already manually checks droppable, and it'd make my life easier for the
> VMA flags conversions to not define a flag like this also :)
>
>>   /* These flags can be updated atomically via VMA/mmap read lock. */
>>   #define VM_ATOMIC_SET_ALLOWED VM_MAYBE_GUARD
>>
>> diff --git a/mm/mlock.c b/mm/mlock.c
>> index 2f699c3497a5..fd35c1e88c4c 100644
>> --- a/mm/mlock.c
>> +++ b/mm/mlock.c
>> @@ -472,9 +472,9 @@ static int mlock_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
>>   	int ret = 0;
>>   	vm_flags_t oldflags = vma->vm_flags;
>>
>> -	if (newflags == oldflags || (oldflags & VM_SPECIAL) ||
>> +	if (newflags == oldflags || (oldflags & VM_NO_MLOCK_MASK) ||
>>   	    is_vm_hugetlb_page(vma) || vma == get_gate_vma(current->mm) ||
>> -	    vma_is_dax(vma) || vma_is_secretmem(vma) || (oldflags & VM_DROPPABLE))
>> +	    vma_is_dax(vma) || vma_is_secretmem(vma))
> This obviously wouldn't be necessary without adding a new VM_xxx...
>
>>   		/* don't set VM_LOCKED or VM_LOCKONFAULT and don't count */
>>   		goto out;
>>
>> diff --git a/mm/vma.c b/mm/vma.c
>> index be64f781a3aa..1334622e4a03 100644
>> --- a/mm/vma.c
>> +++ b/mm/vma.c
>> @@ -2589,7 +2589,7 @@ static void __mmap_complete(struct mmap_state *map, struct vm_area_struct *vma)
>>
>>   	vm_stat_account(mm, vma->vm_flags, map->pglen);
>>   	if (vm_flags & VM_LOCKED) {
>> -		if ((vm_flags & VM_SPECIAL) || vma_is_dax(vma) ||
>> +		if ((vm_flags & VM_NO_MLOCK_MASK) || vma_is_dax(vma) ||
> For backport maybe just put an additional vm_flags & VM_DROPPABLE here?
>
>>   					is_vm_hugetlb_page(vma) ||
>>   					vma == get_gate_vma(mm))
>>   			vm_flags_clear(vma, VM_LOCKED_MASK);
>> --
>> 2.47.3
>>
> Though I saw David suggested something different so that also addresses my review here :)
>
> Cheers, Lorenzo


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH] mm: prevent droppable mappings from being locked
  2026-03-09 14:31   ` Lorenzo Stoakes (Oracle)
@ 2026-03-09 15:55     ` anthony.yznaga
  0 siblings, 0 replies; 9+ messages in thread
From: anthony.yznaga @ 2026-03-09 15:55 UTC (permalink / raw)
  To: Lorenzo Stoakes (Oracle), David Hildenbrand (Arm)
  Cc: linux-mm, linux-kernel, akpm, lorenzo.stoakes, Liam.Howlett,
	vbabka, rppt, surenb, mhocko, jannh, pfalcato, Jason


On 3/9/26 7:31 AM, Lorenzo Stoakes (Oracle) wrote:
> On Mon, Mar 09, 2026 at 03:15:24PM +0100, David Hildenbrand (Arm) wrote:
>> On 3/6/26 21:45, Anthony Yznaga wrote:
>>> Mappings created with MAP_DROPPABLE cannot be locked via mlock() due
>>> to the check in mlock_fixup(). However, they will be locked indirectly
>>> if they are created after mlockall(MCL_FUTURE).
>>>
>>> Fixes: 9651fcedf7b9 ("mm: add MAP_DROPPABLE for designating always lazily freeable mappings")
>>> Signed-off-by: Anthony Yznaga <anthony.yznaga@oracle.com>
>>> ---
>>>   include/linux/mm.h | 3 +++
>>>   mm/mlock.c         | 4 ++--
>>>   mm/vma.c           | 2 +-
>>>   3 files changed, 6 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>>> index 5be3d8a8f806..bb830574d112 100644
>>> --- a/include/linux/mm.h
>>> +++ b/include/linux/mm.h
>>> @@ -574,6 +574,9 @@ enum {
>>>   /* This mask represents all the VMA flag bits used by mlock */
>>>   #define VM_LOCKED_MASK	(VM_LOCKED | VM_LOCKONFAULT)
>>>
>>> +/* This mask prevents VMAs from being mlock'd */
>>> +#define VM_NO_MLOCK_MASK	(VM_SPECIAL | VM_DROPPABLE)
>> Instead of adding that, could we cleanup further by doing something like the following?
>>
>> The usage of "vma->vm_mm" must be double checked, and we'll have to take care of making
>> the tools/testing/vma test happy.
> Yeah Anthony - please do a simple:
>
> $ cd tools/testing/vma
> $ make && ./vma
>
> To make sure that your changes don't introduce anything that breaks that.
>
> If you need to add duplicate header defines put them in
> tools/testing/vma/include/dup.h, if you need to stub stuff out put in stubs.h
> and if you need to customise something for testing purposes, put it in custom.h.

Thanks, Lorenzo. Much appreciated!

Anthony


>
>> Not even compile tested, so will require some more work.
>>
>>
>> diff --git a/include/linux/hugetlb_inline.h b/include/linux/hugetlb_inline.h
>> index 593f5d4e108b..755281fab23d 100644
>> --- a/include/linux/hugetlb_inline.h
>> +++ b/include/linux/hugetlb_inline.h
>> @@ -30,7 +30,7 @@ static inline bool is_vma_hugetlb_flags(const vma_flags_t *flags)
>>
>>   #endif
>>
>> -static inline bool is_vm_hugetlb_page(struct vm_area_struct *vma)
>> +static inline bool is_vm_hugetlb_page(const struct vm_area_struct *vma)
>>   {
>>   	return is_vm_hugetlb_flags(vma->vm_flags);
>>   }
>> diff --git a/mm/internal.h b/mm/internal.h
>> index 6e1162e13289..b70ebbdafe00 100644
>> --- a/mm/internal.h
>> +++ b/mm/internal.h
>> @@ -1242,6 +1242,15 @@ static inline struct file *maybe_unlock_mmap_for_io(struct vm_fault *vmf,
>>   	}
>>   	return fpin;
>>   }
>> +
>> +static inline bool vma_supports_mlock(const struct vm_area_struct *vma)
>> +{
>> +	if (vma->vm_flags & (VM_SPECIAL | VM_DROPPABLE))
>> +		return false;
>> +	if (vma_is_dax(vma) || is_vm_hugetlb_page(vma))
>> +		return false;
>> +	return vma != get_gate_vma(vma->vm_mm);
>> +}
> Yeah this is nice.
>
>>   #else /* !CONFIG_MMU */
>>   static inline void unmap_mapping_folio(struct folio *folio) { }
>>   static inline void mlock_new_folio(struct folio *folio) { }
>> diff --git a/mm/mlock.c b/mm/mlock.c
>> index 1a92d16f3684..e16b2ea234f7 100644
>> --- a/mm/mlock.c
>> +++ b/mm/mlock.c
>> @@ -472,9 +472,7 @@ static int mlock_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
>>   	int ret = 0;
>>   	vm_flags_t oldflags = vma->vm_flags;
>>
>> -	if (newflags == oldflags || (oldflags & VM_SPECIAL) ||
>> -	    is_vm_hugetlb_page(vma) || vma == get_gate_vma(current->mm) ||
>> -	    vma_is_dax(vma) || vma_is_secretmem(vma) || (oldflags & VM_DROPPABLE))
>> +	if (newflags == oldflags || !vma_supports_mlock(vma))
>>   		/* don't set VM_LOCKED or VM_LOCKONFAULT and don't count */
>>   		goto out;
>>
>> diff --git a/mm/vma.c b/mm/vma.c
>> index e95fd5a5fe5c..b7055c264b5d 100644
>> --- a/mm/vma.c
>> +++ b/mm/vma.c
>> @@ -2589,9 +2589,7 @@ static void __mmap_complete(struct mmap_state *map, struct vm_area_struct *vma)
>>
>>   	vm_stat_account(mm, vma->vm_flags, map->pglen);
>>   	if (vm_flags & VM_LOCKED) {
>> -		if ((vm_flags & VM_SPECIAL) || vma_is_dax(vma) ||
>> -					is_vm_hugetlb_page(vma) ||
>> -					vma == get_gate_vma(mm))
>> +		if (!vma_supports_mlock(vma))
>>   			vm_flags_clear(vma, VM_LOCKED_MASK);
>>   		else
>>   			mm->locked_vm += map->pglen;
> Very much preferable!
>
>> --
>> 2.43.0
>>
>> --
>> Cheers,
>>
>> David
> Cheers, Lorenzo


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH] mm: prevent droppable mappings from being locked
  2026-03-09 14:15 ` David Hildenbrand (Arm)
  2026-03-09 14:31   ` Lorenzo Stoakes (Oracle)
  2026-03-09 15:39   ` anthony.yznaga
@ 2026-03-10  2:04   ` anthony.yznaga
  2026-03-10  8:25     ` David Hildenbrand (Arm)
  2 siblings, 1 reply; 9+ messages in thread
From: anthony.yznaga @ 2026-03-10  2:04 UTC (permalink / raw)
  To: David Hildenbrand (Arm), linux-mm, linux-kernel
  Cc: akpm, lorenzo.stoakes, Liam.Howlett, vbabka, rppt, surenb, mhocko,
	jannh, pfalcato, Jason


On 3/9/26 7:15 AM, David Hildenbrand (Arm) wrote:
> On 3/6/26 21:45, Anthony Yznaga wrote:
>> Mappings created with MAP_DROPPABLE cannot be locked via mlock() due
>> to the check in mlock_fixup(). However, they will be locked indirectly
>> if they are created after mlockall(MCL_FUTURE).
>>
>> Fixes: 9651fcedf7b9 ("mm: add MAP_DROPPABLE for designating always lazily freeable mappings")
>> Signed-off-by: Anthony Yznaga <anthony.yznaga@oracle.com>
>> ---
>>   include/linux/mm.h | 3 +++
>>   mm/mlock.c         | 4 ++--
>>   mm/vma.c           | 2 +-
>>   3 files changed, 6 insertions(+), 3 deletions(-)
>>
>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>> index 5be3d8a8f806..bb830574d112 100644
>> --- a/include/linux/mm.h
>> +++ b/include/linux/mm.h
>> @@ -574,6 +574,9 @@ enum {
>>   /* This mask represents all the VMA flag bits used by mlock */
>>   #define VM_LOCKED_MASK	(VM_LOCKED | VM_LOCKONFAULT)
>>   
>> +/* This mask prevents VMAs from being mlock'd */
>> +#define VM_NO_MLOCK_MASK	(VM_SPECIAL | VM_DROPPABLE)
> Instead of adding that, could we cleanup further by doing something like the following?
>
> The usage of "vma->vm_mm" must be double-checked,

This sent me down an interesting rabbit hole since gate_vma->vm_mm is 
initialized to NULL. I can't see how the gate VMA could ever be passed 
to mlock_fixup() or __mmap_complete() if it's not part of the VMA tree 
of an mm and is not mapped through mmap. There are a couple of other 
places in the kernel that assume the gate VMA may be encountered when 
iterating VMAs, too. Am I missing something? Happy to clean these up if 
it makes sense.

Anthony

>   and we'll have to take care of making
> the tools/testing/vma test happy.
>
> Not even compile tested, so will require some more work.
>
>
> diff --git a/include/linux/hugetlb_inline.h b/include/linux/hugetlb_inline.h
> index 593f5d4e108b..755281fab23d 100644
> --- a/include/linux/hugetlb_inline.h
> +++ b/include/linux/hugetlb_inline.h
> @@ -30,7 +30,7 @@ static inline bool is_vma_hugetlb_flags(const vma_flags_t *flags)
>   
>   #endif
>   
> -static inline bool is_vm_hugetlb_page(struct vm_area_struct *vma)
> +static inline bool is_vm_hugetlb_page(const struct vm_area_struct *vma)
>   {
>   	return is_vm_hugetlb_flags(vma->vm_flags);
>   }
> diff --git a/mm/internal.h b/mm/internal.h
> index 6e1162e13289..b70ebbdafe00 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -1242,6 +1242,15 @@ static inline struct file *maybe_unlock_mmap_for_io(struct vm_fault *vmf,
>   	}
>   	return fpin;
>   }
> +
> +static inline bool vma_supports_mlock(const struct vm_area_struct *vma)
> +{
> +	if (vma->vm_flags & (VM_SPECIAL | VM_DROPPABLE))
> +		return false;
> +	if (vma_is_dax(vma) || is_vm_hugetlb_page(vma))
> +		return false;
> +	return vma != get_gate_vma(vma->vm_mm);
> +}
>   #else /* !CONFIG_MMU */
>   static inline void unmap_mapping_folio(struct folio *folio) { }
>   static inline void mlock_new_folio(struct folio *folio) { }
> diff --git a/mm/mlock.c b/mm/mlock.c
> index 1a92d16f3684..e16b2ea234f7 100644
> --- a/mm/mlock.c
> +++ b/mm/mlock.c
> @@ -472,9 +472,7 @@ static int mlock_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
>   	int ret = 0;
>   	vm_flags_t oldflags = vma->vm_flags;
>   
> -	if (newflags == oldflags || (oldflags & VM_SPECIAL) ||
> -	    is_vm_hugetlb_page(vma) || vma == get_gate_vma(current->mm) ||
> -	    vma_is_dax(vma) || vma_is_secretmem(vma) || (oldflags & VM_DROPPABLE))
> +	if (newflags == oldflags || !vma_supports_mlock(vma))
>   		/* don't set VM_LOCKED or VM_LOCKONFAULT and don't count */
>   		goto out;
>   
> diff --git a/mm/vma.c b/mm/vma.c
> index e95fd5a5fe5c..b7055c264b5d 100644
> --- a/mm/vma.c
> +++ b/mm/vma.c
> @@ -2589,9 +2589,7 @@ static void __mmap_complete(struct mmap_state *map, struct vm_area_struct *vma)
>   
>   	vm_stat_account(mm, vma->vm_flags, map->pglen);
>   	if (vm_flags & VM_LOCKED) {
> -		if ((vm_flags & VM_SPECIAL) || vma_is_dax(vma) ||
> -					is_vm_hugetlb_page(vma) ||
> -					vma == get_gate_vma(mm))
> +		if (!vma_supports_mlock(vma))
>   			vm_flags_clear(vma, VM_LOCKED_MASK);
>   		else
>   			mm->locked_vm += map->pglen;


* Re: [PATCH] mm: prevent droppable mappings from being locked
  2026-03-10  2:04   ` anthony.yznaga
@ 2026-03-10  8:25     ` David Hildenbrand (Arm)
  0 siblings, 0 replies; 9+ messages in thread
From: David Hildenbrand (Arm) @ 2026-03-10  8:25 UTC (permalink / raw)
  To: anthony.yznaga, linux-mm, linux-kernel
  Cc: akpm, lorenzo.stoakes, Liam.Howlett, vbabka, rppt, surenb, mhocko,
	jannh, pfalcato, Jason

On 3/10/26 03:04, anthony.yznaga@oracle.com wrote:
> 
> On 3/9/26 7:15 AM, David Hildenbrand (Arm) wrote:
>> On 3/6/26 21:45, Anthony Yznaga wrote:
>>> Mappings created with MAP_DROPPABLE cannot be locked via mlock() due
>>> to the check in mlock_fixup(). However, they will be locked indirectly
>>> if they are created after mlockall(MCL_FUTURE).
>>>
>>> Fixes: 9651fcedf7b9 ("mm: add MAP_DROPPABLE for designating always
>>> lazily freeable mappings")
>>> Signed-off-by: Anthony Yznaga <anthony.yznaga@oracle.com>
>>> ---
>>>   include/linux/mm.h | 3 +++
>>>   mm/mlock.c         | 4 ++--
>>>   mm/vma.c           | 2 +-
>>>   3 files changed, 6 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>>> index 5be3d8a8f806..bb830574d112 100644
>>> --- a/include/linux/mm.h
>>> +++ b/include/linux/mm.h
>>> @@ -574,6 +574,9 @@ enum {
>>>   /* This mask represents all the VMA flag bits used by mlock */
>>>   #define VM_LOCKED_MASK    (VM_LOCKED | VM_LOCKONFAULT)
>>>   +/* This mask prevents VMAs from being mlock'd */
>>> +#define VM_NO_MLOCK_MASK    (VM_SPECIAL | VM_DROPPABLE)
>> Instead of adding that, could we clean up further by doing something
>> like the following?
>>
>> The usage of "vma->vm_mm" must be double-checked,
> 
> This sent me down an interesting rabbit hole since gate_vma->vm_mm is
> initialized to NULL. I can't see how the gate VMA could ever be passed
> to mlock_fixup() or __mmap_complete() if it's not part of the VMA tree
> of an mm and is not mapped through mmap. 

Right, the gate VMA would be shared across all processes.

Wouldn't code like the following be questionable as well?

fs/coredump.c:  if (vma == get_gate_vma(vma->vm_mm))
mm/vmscan.c:    if (vma == get_gate_vma(vma->vm_mm))


I mean, that cannot possibly be true unless I am missing something.

> There are a couple of other
> places in the kernel that assume the gate VMA may be encountered when
> iterating VMAs, too. Am I missing something? Happy to clean these up if
> it makes sense.

Yes, please look into that. As an alternative, we could maybe pass
current->mm to get_gate_vma() ... but it'd be best to just clean that up.

-- 
Cheers,

David


end of thread, other threads:[~2026-03-10  8:25 UTC | newest]

Thread overview: 9+ messages
2026-03-06 20:45 [PATCH] mm: prevent droppable mappings from being locked Anthony Yznaga
2026-03-09 14:15 ` David Hildenbrand (Arm)
2026-03-09 14:31   ` Lorenzo Stoakes (Oracle)
2026-03-09 15:55     ` anthony.yznaga
2026-03-09 15:39   ` anthony.yznaga
2026-03-10  2:04   ` anthony.yznaga
2026-03-10  8:25     ` David Hildenbrand (Arm)
2026-03-09 14:28 ` Lorenzo Stoakes (Oracle)
2026-03-09 15:54   ` anthony.yznaga
