* Re: [PATCH] mm: prevent droppable mappings from being locked
2026-03-09 14:15 ` David Hildenbrand (Arm)
@ 2026-03-09 14:31 ` Lorenzo Stoakes (Oracle)
2026-03-09 15:55 ` anthony.yznaga
2026-03-09 15:39 ` anthony.yznaga
2026-03-10 2:04 ` anthony.yznaga
2 siblings, 1 reply; 9+ messages in thread
From: Lorenzo Stoakes (Oracle) @ 2026-03-09 14:31 UTC (permalink / raw)
To: David Hildenbrand (Arm)
Cc: Anthony Yznaga, linux-mm, linux-kernel, akpm, lorenzo.stoakes,
Liam.Howlett, vbabka, rppt, surenb, mhocko, jannh, pfalcato,
Jason
On Mon, Mar 09, 2026 at 03:15:24PM +0100, David Hildenbrand (Arm) wrote:
> On 3/6/26 21:45, Anthony Yznaga wrote:
> > Mappings created with MAP_DROPPABLE cannot be locked via mlock() due
> > to the check in mlock_fixup(). However, they will be locked indirectly
> > if they are created after mlockall(MCL_FUTURE).
> >
> > Fixes: 9651fcedf7b9 ("mm: add MAP_DROPPABLE for designating always lazily freeable mappings")
> > Signed-off-by: Anthony Yznaga <anthony.yznaga@oracle.com>
> > ---
> > include/linux/mm.h | 3 +++
> > mm/mlock.c | 4 ++--
> > mm/vma.c | 2 +-
> > 3 files changed, 6 insertions(+), 3 deletions(-)
> >
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index 5be3d8a8f806..bb830574d112 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -574,6 +574,9 @@ enum {
> > /* This mask represents all the VMA flag bits used by mlock */
> > #define VM_LOCKED_MASK (VM_LOCKED | VM_LOCKONFAULT)
> >
> > +/* This mask prevents VMAs from being mlock'd */
> > +#define VM_NO_MLOCK_MASK (VM_SPECIAL | VM_DROPPABLE)
>
> Instead of adding that, could we cleanup further by doing something like the following?
>
> The usage of "vma->vm_mm" must be double checked, and we'll have to take care of making
> the tools/testing/vma test happy.
Yeah Anthony - please do a simple:

$ cd tools/testing/vma
$ make && ./vma

To make sure that your changes don't introduce anything that breaks that.

If you need to add duplicate header defines put them in
tools/testing/vma/include/dup.h, if you need to stub stuff out put in stubs.h
and if you need to customise something for testing purposes, put it in custom.h.
>
> Not even compile tested, so will require some more work.
>
>
> diff --git a/include/linux/hugetlb_inline.h b/include/linux/hugetlb_inline.h
> index 593f5d4e108b..755281fab23d 100644
> --- a/include/linux/hugetlb_inline.h
> +++ b/include/linux/hugetlb_inline.h
> @@ -30,7 +30,7 @@ static inline bool is_vma_hugetlb_flags(const vma_flags_t *flags)
>
> #endif
>
> -static inline bool is_vm_hugetlb_page(struct vm_area_struct *vma)
> +static inline bool is_vm_hugetlb_page(const struct vm_area_struct *vma)
> {
> return is_vm_hugetlb_flags(vma->vm_flags);
> }
> diff --git a/mm/internal.h b/mm/internal.h
> index 6e1162e13289..b70ebbdafe00 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -1242,6 +1242,15 @@ static inline struct file *maybe_unlock_mmap_for_io(struct vm_fault *vmf,
> }
> return fpin;
> }
> +
> +static inline bool vma_supports_mlock(const struct vm_area_struct *vma)
> +{
> + if (vma->vm_flags & (VM_SPECIAL | VM_DROPPABLE))
> + return false;
> + if (vma_is_dax(vma) || is_vm_hugetlb_page(vma))
> + return false;
> + return vma != get_gate_vma(vma->vm_mm);
> +}
Yeah this is nice.
> #else /* !CONFIG_MMU */
> static inline void unmap_mapping_folio(struct folio *folio) { }
> static inline void mlock_new_folio(struct folio *folio) { }
> diff --git a/mm/mlock.c b/mm/mlock.c
> index 1a92d16f3684..e16b2ea234f7 100644
> --- a/mm/mlock.c
> +++ b/mm/mlock.c
> @@ -472,9 +472,7 @@ static int mlock_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
> int ret = 0;
> vm_flags_t oldflags = vma->vm_flags;
>
> - if (newflags == oldflags || (oldflags & VM_SPECIAL) ||
> - is_vm_hugetlb_page(vma) || vma == get_gate_vma(current->mm) ||
> - vma_is_dax(vma) || vma_is_secretmem(vma) || (oldflags & VM_DROPPABLE))
> + if (newflags == oldflags || !vma_supports_mlock(vma))
> /* don't set VM_LOCKED or VM_LOCKONFAULT and don't count */
> goto out;
>
> diff --git a/mm/vma.c b/mm/vma.c
> index e95fd5a5fe5c..b7055c264b5d 100644
> --- a/mm/vma.c
> +++ b/mm/vma.c
> @@ -2589,9 +2589,7 @@ static void __mmap_complete(struct mmap_state *map, struct vm_area_struct *vma)
>
> vm_stat_account(mm, vma->vm_flags, map->pglen);
> if (vm_flags & VM_LOCKED) {
> - if ((vm_flags & VM_SPECIAL) || vma_is_dax(vma) ||
> - is_vm_hugetlb_page(vma) ||
> - vma == get_gate_vma(mm))
> + if (!vma_supports_mlock(vma))
> vm_flags_clear(vma, VM_LOCKED_MASK);
> else
> mm->locked_vm += map->pglen;
Very much preferable!
> --
> 2.43.0
>
> --
> Cheers,
>
> David
Cheers, Lorenzo
^ permalink raw reply	[flat|nested] 9+ messages in thread
* Re: [PATCH] mm: prevent droppable mappings from being locked
2026-03-09 14:31 ` Lorenzo Stoakes (Oracle)
@ 2026-03-09 15:55 ` anthony.yznaga
0 siblings, 0 replies; 9+ messages in thread
From: anthony.yznaga @ 2026-03-09 15:55 UTC (permalink / raw)
To: Lorenzo Stoakes (Oracle), David Hildenbrand (Arm)
Cc: linux-mm, linux-kernel, akpm, lorenzo.stoakes, Liam.Howlett,
vbabka, rppt, surenb, mhocko, jannh, pfalcato, Jason
On 3/9/26 7:31 AM, Lorenzo Stoakes (Oracle) wrote:
> On Mon, Mar 09, 2026 at 03:15:24PM +0100, David Hildenbrand (Arm) wrote:
>> On 3/6/26 21:45, Anthony Yznaga wrote:
>>> Mappings created with MAP_DROPPABLE cannot be locked via mlock() due
>>> to the check in mlock_fixup(). However, they will be locked indirectly
>>> if they are created after mlockall(MCL_FUTURE).
>>>
>>> Fixes: 9651fcedf7b9 ("mm: add MAP_DROPPABLE for designating always lazily freeable mappings")
>>> Signed-off-by: Anthony Yznaga <anthony.yznaga@oracle.com>
>>> ---
>>> include/linux/mm.h | 3 +++
>>> mm/mlock.c | 4 ++--
>>> mm/vma.c | 2 +-
>>> 3 files changed, 6 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>>> index 5be3d8a8f806..bb830574d112 100644
>>> --- a/include/linux/mm.h
>>> +++ b/include/linux/mm.h
>>> @@ -574,6 +574,9 @@ enum {
>>> /* This mask represents all the VMA flag bits used by mlock */
>>> #define VM_LOCKED_MASK (VM_LOCKED | VM_LOCKONFAULT)
>>>
>>> +/* This mask prevents VMAs from being mlock'd */
>>> +#define VM_NO_MLOCK_MASK (VM_SPECIAL | VM_DROPPABLE)
>> Instead of adding that, could we cleanup further by doing something like the following?
>>
>> The usage of "vma->vm_mm" must be double checked, and we'll have to take care of making
>> the tools/testing/vma test happy.
> Yeah Anthony - please do a simple:
>
> $ cd tools/testing/vma
> $ make && ./vma
>
> To make sure that your changes don't introduce anything that breaks that.
>
> If you need to add duplicate header defines put them in
> tools/testing/vma/include/dup.h, if you need to stub stuff out put in stubs.h
> and if you need to customise something for testing purposes, put it in custom.h.
Thanks, Lorenzo. Much appreciated!
Anthony
>
>> Not even compile tested, so will require some more work.
>>
>>
>> diff --git a/include/linux/hugetlb_inline.h b/include/linux/hugetlb_inline.h
>> index 593f5d4e108b..755281fab23d 100644
>> --- a/include/linux/hugetlb_inline.h
>> +++ b/include/linux/hugetlb_inline.h
>> @@ -30,7 +30,7 @@ static inline bool is_vma_hugetlb_flags(const vma_flags_t *flags)
>>
>> #endif
>>
>> -static inline bool is_vm_hugetlb_page(struct vm_area_struct *vma)
>> +static inline bool is_vm_hugetlb_page(const struct vm_area_struct *vma)
>> {
>> return is_vm_hugetlb_flags(vma->vm_flags);
>> }
>> diff --git a/mm/internal.h b/mm/internal.h
>> index 6e1162e13289..b70ebbdafe00 100644
>> --- a/mm/internal.h
>> +++ b/mm/internal.h
>> @@ -1242,6 +1242,15 @@ static inline struct file *maybe_unlock_mmap_for_io(struct vm_fault *vmf,
>> }
>> return fpin;
>> }
>> +
>> +static inline bool vma_supports_mlock(const struct vm_area_struct *vma)
>> +{
>> + if (vma->vm_flags & (VM_SPECIAL | VM_DROPPABLE))
>> + return false;
>> + if (vma_is_dax(vma) || is_vm_hugetlb_page(vma))
>> + return false;
>> + return vma != get_gate_vma(vma->vm_mm);
>> +}
> Yeah this is nice.
>
>> #else /* !CONFIG_MMU */
>> static inline void unmap_mapping_folio(struct folio *folio) { }
>> static inline void mlock_new_folio(struct folio *folio) { }
>> diff --git a/mm/mlock.c b/mm/mlock.c
>> index 1a92d16f3684..e16b2ea234f7 100644
>> --- a/mm/mlock.c
>> +++ b/mm/mlock.c
>> @@ -472,9 +472,7 @@ static int mlock_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
>> int ret = 0;
>> vm_flags_t oldflags = vma->vm_flags;
>>
>> - if (newflags == oldflags || (oldflags & VM_SPECIAL) ||
>> - is_vm_hugetlb_page(vma) || vma == get_gate_vma(current->mm) ||
>> - vma_is_dax(vma) || vma_is_secretmem(vma) || (oldflags & VM_DROPPABLE))
>> + if (newflags == oldflags || !vma_supports_mlock(vma))
>> /* don't set VM_LOCKED or VM_LOCKONFAULT and don't count */
>> goto out;
>>
>> diff --git a/mm/vma.c b/mm/vma.c
>> index e95fd5a5fe5c..b7055c264b5d 100644
>> --- a/mm/vma.c
>> +++ b/mm/vma.c
>> @@ -2589,9 +2589,7 @@ static void __mmap_complete(struct mmap_state *map, struct vm_area_struct *vma)
>>
>> vm_stat_account(mm, vma->vm_flags, map->pglen);
>> if (vm_flags & VM_LOCKED) {
>> - if ((vm_flags & VM_SPECIAL) || vma_is_dax(vma) ||
>> - is_vm_hugetlb_page(vma) ||
>> - vma == get_gate_vma(mm))
>> + if (!vma_supports_mlock(vma))
>> vm_flags_clear(vma, VM_LOCKED_MASK);
>> else
>> mm->locked_vm += map->pglen;
> Very much preferable!
>
>> --
>> 2.43.0
>>
>> --
>> Cheers,
>>
>> David
> Cheers, Lorenzo
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH] mm: prevent droppable mappings from being locked
2026-03-09 14:15 ` David Hildenbrand (Arm)
2026-03-09 14:31 ` Lorenzo Stoakes (Oracle)
@ 2026-03-09 15:39 ` anthony.yznaga
2026-03-10 2:04 ` anthony.yznaga
2 siblings, 0 replies; 9+ messages in thread
From: anthony.yznaga @ 2026-03-09 15:39 UTC (permalink / raw)
To: David Hildenbrand (Arm), linux-mm, linux-kernel
Cc: akpm, lorenzo.stoakes, Liam.Howlett, vbabka, rppt, surenb, mhocko,
jannh, pfalcato, Jason
On 3/9/26 7:15 AM, David Hildenbrand (Arm) wrote:
> On 3/6/26 21:45, Anthony Yznaga wrote:
>> Mappings created with MAP_DROPPABLE cannot be locked via mlock() due
>> to the check in mlock_fixup(). However, they will be locked indirectly
>> if they are created after mlockall(MCL_FUTURE).
>>
>> Fixes: 9651fcedf7b9 ("mm: add MAP_DROPPABLE for designating always lazily freeable mappings")
>> Signed-off-by: Anthony Yznaga <anthony.yznaga@oracle.com>
>> ---
>> include/linux/mm.h | 3 +++
>> mm/mlock.c | 4 ++--
>> mm/vma.c | 2 +-
>> 3 files changed, 6 insertions(+), 3 deletions(-)
>>
>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>> index 5be3d8a8f806..bb830574d112 100644
>> --- a/include/linux/mm.h
>> +++ b/include/linux/mm.h
>> @@ -574,6 +574,9 @@ enum {
>> /* This mask represents all the VMA flag bits used by mlock */
>> #define VM_LOCKED_MASK (VM_LOCKED | VM_LOCKONFAULT)
>>
>> +/* This mask prevents VMAs from being mlock'd */
>> +#define VM_NO_MLOCK_MASK (VM_SPECIAL | VM_DROPPABLE)
> Instead of adding that, could we cleanup further by doing something like the following?
>
> The usage of "vma->vm_mm" must be double checked, and we'll have to take care of making
> the tools/testing/vma test happy.
>
> Not even compile tested, so will require some more work.
Thanks, David. This is a better approach that I'll implement. One thing
to note is that the check for secretmem has to stay in mlock_fixup()
because it's preventing the always-locked memory from being unlocked. I
can add an extra comment for that.
Anthony
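
[Editorial aside: a sketch of how the mlock_fixup() condition could end up looking with David's helper plus the retained secretmem test, per Anthony's note above — hypothetical, not from any posted patch:]

```c
	/* Hypothetical: vma_supports_mlock() covers VMAs that can never be
	 * mlock'd, while the secretmem check stays here because secretmem
	 * is always locked and must not be *un*locked either. */
	if (newflags == oldflags || !vma_supports_mlock(vma) ||
	    vma_is_secretmem(vma))
		/* don't set VM_LOCKED or VM_LOCKONFAULT and don't count */
		goto out;
```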
>
>
> diff --git a/include/linux/hugetlb_inline.h b/include/linux/hugetlb_inline.h
> index 593f5d4e108b..755281fab23d 100644
> --- a/include/linux/hugetlb_inline.h
> +++ b/include/linux/hugetlb_inline.h
> @@ -30,7 +30,7 @@ static inline bool is_vma_hugetlb_flags(const vma_flags_t *flags)
>
> #endif
>
> -static inline bool is_vm_hugetlb_page(struct vm_area_struct *vma)
> +static inline bool is_vm_hugetlb_page(const struct vm_area_struct *vma)
> {
> return is_vm_hugetlb_flags(vma->vm_flags);
> }
> diff --git a/mm/internal.h b/mm/internal.h
> index 6e1162e13289..b70ebbdafe00 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -1242,6 +1242,15 @@ static inline struct file *maybe_unlock_mmap_for_io(struct vm_fault *vmf,
> }
> return fpin;
> }
> +
> +static inline bool vma_supports_mlock(const struct vm_area_struct *vma)
> +{
> + if (vma->vm_flags & (VM_SPECIAL | VM_DROPPABLE))
> + return false;
> + if (vma_is_dax(vma) || is_vm_hugetlb_page(vma))
> + return false;
> + return vma != get_gate_vma(vma->vm_mm);
> +}
> #else /* !CONFIG_MMU */
> static inline void unmap_mapping_folio(struct folio *folio) { }
> static inline void mlock_new_folio(struct folio *folio) { }
> diff --git a/mm/mlock.c b/mm/mlock.c
> index 1a92d16f3684..e16b2ea234f7 100644
> --- a/mm/mlock.c
> +++ b/mm/mlock.c
> @@ -472,9 +472,7 @@ static int mlock_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
> int ret = 0;
> vm_flags_t oldflags = vma->vm_flags;
>
> - if (newflags == oldflags || (oldflags & VM_SPECIAL) ||
> - is_vm_hugetlb_page(vma) || vma == get_gate_vma(current->mm) ||
> - vma_is_dax(vma) || vma_is_secretmem(vma) || (oldflags & VM_DROPPABLE))
> + if (newflags == oldflags || !vma_supports_mlock(vma))
> /* don't set VM_LOCKED or VM_LOCKONFAULT and don't count */
> goto out;
>
> diff --git a/mm/vma.c b/mm/vma.c
> index e95fd5a5fe5c..b7055c264b5d 100644
> --- a/mm/vma.c
> +++ b/mm/vma.c
> @@ -2589,9 +2589,7 @@ static void __mmap_complete(struct mmap_state *map, struct vm_area_struct *vma)
>
> vm_stat_account(mm, vma->vm_flags, map->pglen);
> if (vm_flags & VM_LOCKED) {
> - if ((vm_flags & VM_SPECIAL) || vma_is_dax(vma) ||
> - is_vm_hugetlb_page(vma) ||
> - vma == get_gate_vma(mm))
> + if (!vma_supports_mlock(vma))
> vm_flags_clear(vma, VM_LOCKED_MASK);
> else
> mm->locked_vm += map->pglen;
^ permalink raw reply	[flat|nested] 9+ messages in thread
* Re: [PATCH] mm: prevent droppable mappings from being locked
2026-03-09 14:15 ` David Hildenbrand (Arm)
2026-03-09 14:31 ` Lorenzo Stoakes (Oracle)
2026-03-09 15:39 ` anthony.yznaga
@ 2026-03-10 2:04 ` anthony.yznaga
2026-03-10 8:25 ` David Hildenbrand (Arm)
2 siblings, 1 reply; 9+ messages in thread
From: anthony.yznaga @ 2026-03-10 2:04 UTC (permalink / raw)
To: David Hildenbrand (Arm), linux-mm, linux-kernel
Cc: akpm, lorenzo.stoakes, Liam.Howlett, vbabka, rppt, surenb, mhocko,
jannh, pfalcato, Jason
On 3/9/26 7:15 AM, David Hildenbrand (Arm) wrote:
> On 3/6/26 21:45, Anthony Yznaga wrote:
>> Mappings created with MAP_DROPPABLE cannot be locked via mlock() due
>> to the check in mlock_fixup(). However, they will be locked indirectly
>> if they are created after mlockall(MCL_FUTURE).
>>
>> Fixes: 9651fcedf7b9 ("mm: add MAP_DROPPABLE for designating always lazily freeable mappings")
>> Signed-off-by: Anthony Yznaga <anthony.yznaga@oracle.com>
>> ---
>> include/linux/mm.h | 3 +++
>> mm/mlock.c | 4 ++--
>> mm/vma.c | 2 +-
>> 3 files changed, 6 insertions(+), 3 deletions(-)
>>
>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>> index 5be3d8a8f806..bb830574d112 100644
>> --- a/include/linux/mm.h
>> +++ b/include/linux/mm.h
>> @@ -574,6 +574,9 @@ enum {
>> /* This mask represents all the VMA flag bits used by mlock */
>> #define VM_LOCKED_MASK (VM_LOCKED | VM_LOCKONFAULT)
>>
>> +/* This mask prevents VMAs from being mlock'd */
>> +#define VM_NO_MLOCK_MASK (VM_SPECIAL | VM_DROPPABLE)
> Instead of adding that, could we cleanup further by doing something like the following?
>
> The usage of "vma->vm_mm" must be double checked,
This sent me down an interesting rabbit hole since gate_vma->vm_mm is
initialized to NULL. I can't see how the gate VMA could ever be passed
to mlock_fixup() or __mmap_complete() if it's not part of the VMA tree
of an mm and is not mapped through mmap. There are a couple of other
places in the kernel that assume the gate VMA may be encountered when
iterating VMAs, too. Am I missing something? Happy to clean these up if
it makes sense.
Anthony
> and we'll have to take care of making
> the tools/testing/vma test happy.
>
> Not even compile tested, so will require some more work.
>
>
> diff --git a/include/linux/hugetlb_inline.h b/include/linux/hugetlb_inline.h
> index 593f5d4e108b..755281fab23d 100644
> --- a/include/linux/hugetlb_inline.h
> +++ b/include/linux/hugetlb_inline.h
> @@ -30,7 +30,7 @@ static inline bool is_vma_hugetlb_flags(const vma_flags_t *flags)
>
> #endif
>
> -static inline bool is_vm_hugetlb_page(struct vm_area_struct *vma)
> +static inline bool is_vm_hugetlb_page(const struct vm_area_struct *vma)
> {
> return is_vm_hugetlb_flags(vma->vm_flags);
> }
> diff --git a/mm/internal.h b/mm/internal.h
> index 6e1162e13289..b70ebbdafe00 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -1242,6 +1242,15 @@ static inline struct file *maybe_unlock_mmap_for_io(struct vm_fault *vmf,
> }
> return fpin;
> }
> +
> +static inline bool vma_supports_mlock(const struct vm_area_struct *vma)
> +{
> + if (vma->vm_flags & (VM_SPECIAL | VM_DROPPABLE))
> + return false;
> + if (vma_is_dax(vma) || is_vm_hugetlb_page(vma))
> + return false;
> + return vma != get_gate_vma(vma->vm_mm);
> +}
> #else /* !CONFIG_MMU */
> static inline void unmap_mapping_folio(struct folio *folio) { }
> static inline void mlock_new_folio(struct folio *folio) { }
> diff --git a/mm/mlock.c b/mm/mlock.c
> index 1a92d16f3684..e16b2ea234f7 100644
> --- a/mm/mlock.c
> +++ b/mm/mlock.c
> @@ -472,9 +472,7 @@ static int mlock_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
> int ret = 0;
> vm_flags_t oldflags = vma->vm_flags;
>
> - if (newflags == oldflags || (oldflags & VM_SPECIAL) ||
> - is_vm_hugetlb_page(vma) || vma == get_gate_vma(current->mm) ||
> - vma_is_dax(vma) || vma_is_secretmem(vma) || (oldflags & VM_DROPPABLE))
> + if (newflags == oldflags || !vma_supports_mlock(vma))
> /* don't set VM_LOCKED or VM_LOCKONFAULT and don't count */
> goto out;
>
> diff --git a/mm/vma.c b/mm/vma.c
> index e95fd5a5fe5c..b7055c264b5d 100644
> --- a/mm/vma.c
> +++ b/mm/vma.c
> @@ -2589,9 +2589,7 @@ static void __mmap_complete(struct mmap_state *map, struct vm_area_struct *vma)
>
> vm_stat_account(mm, vma->vm_flags, map->pglen);
> if (vm_flags & VM_LOCKED) {
> - if ((vm_flags & VM_SPECIAL) || vma_is_dax(vma) ||
> - is_vm_hugetlb_page(vma) ||
> - vma == get_gate_vma(mm))
> + if (!vma_supports_mlock(vma))
> vm_flags_clear(vma, VM_LOCKED_MASK);
> else
> mm->locked_vm += map->pglen;
^ permalink raw reply	[flat|nested] 9+ messages in thread
* Re: [PATCH] mm: prevent droppable mappings from being locked
2026-03-10 2:04 ` anthony.yznaga
@ 2026-03-10 8:25 ` David Hildenbrand (Arm)
0 siblings, 0 replies; 9+ messages in thread
From: David Hildenbrand (Arm) @ 2026-03-10 8:25 UTC (permalink / raw)
To: anthony.yznaga, linux-mm, linux-kernel
Cc: akpm, lorenzo.stoakes, Liam.Howlett, vbabka, rppt, surenb, mhocko,
jannh, pfalcato, Jason
On 3/10/26 03:04, anthony.yznaga@oracle.com wrote:
>
> On 3/9/26 7:15 AM, David Hildenbrand (Arm) wrote:
>> On 3/6/26 21:45, Anthony Yznaga wrote:
>>> Mappings created with MAP_DROPPABLE cannot be locked via mlock() due
>>> to the check in mlock_fixup(). However, they will be locked indirectly
>>> if they are created after mlockall(MCL_FUTURE).
>>>
>>> Fixes: 9651fcedf7b9 ("mm: add MAP_DROPPABLE for designating always
>>> lazily freeable mappings")
>>> Signed-off-by: Anthony Yznaga <anthony.yznaga@oracle.com>
>>> ---
>>> include/linux/mm.h | 3 +++
>>> mm/mlock.c | 4 ++--
>>> mm/vma.c | 2 +-
>>> 3 files changed, 6 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>>> index 5be3d8a8f806..bb830574d112 100644
>>> --- a/include/linux/mm.h
>>> +++ b/include/linux/mm.h
>>> @@ -574,6 +574,9 @@ enum {
>>> /* This mask represents all the VMA flag bits used by mlock */
>>> #define VM_LOCKED_MASK (VM_LOCKED | VM_LOCKONFAULT)
>>> +/* This mask prevents VMAs from being mlock'd */
>>> +#define VM_NO_MLOCK_MASK (VM_SPECIAL | VM_DROPPABLE)
>> Instead of adding that, could we cleanup further by doing something
>> like the following?
>>
>> The usage of "vma->vm_mm" must be double checked,
>
> This sent me down an interesting rabbit hole since gate_vma->vm_mm is
> initialized to NULL. I can't see how the gate VMA could ever be passed
> to mlock_fixup() or __mmap_complete() if it's not part of the VMA tree
> of an mm and is not mapped through mmap.
Right, gate_vma() would be shared across all processes.
Wouldn't code like the following be questionable as well?
fs/coredump.c: if (vma == get_gate_vma(vma->vm_mm))
mm/vmscan.c: if (vma == get_gate_vma(vma->vm_mm))
I mean, that cannot possibly be true unless I am missing something.
> There are a couple of other
> places in the kernel that assume the gate VMA may be encountered when
> iterating VMAs, too. Am I missing something? Happy to clean these up if
> it makes sense.
Yes, please look into that. As an alternative, we could maybe pass
current->mm to get_gate_vma() ... but it'd be best to just clean that up.
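
[Editorial aside: the pass-the-mm alternative might look like the following — an assumed signature change, purely illustrative:]

```c
/* Hypothetical variant that takes the mm explicitly instead of
 * dereferencing vma->vm_mm, which is NULL for the shared gate VMA. */
static inline bool vma_supports_mlock(const struct vm_area_struct *vma,
				      struct mm_struct *mm)
{
	if (vma->vm_flags & (VM_SPECIAL | VM_DROPPABLE))
		return false;
	if (vma_is_dax(vma) || is_vm_hugetlb_page(vma))
		return false;
	return vma != get_gate_vma(mm);
}
```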
--
Cheers,
David
^ permalink raw reply [flat|nested] 9+ messages in thread