* [PATCH v2] mm/gup: honour FOLL_PIN in NOMMU __get_user_pages_locked()
@ 2026-04-23 14:28 Greg Kroah-Hartman
2026-04-23 15:55 ` David Hildenbrand (Arm)
2026-04-24 13:38 ` Andrew Morton
0 siblings, 2 replies; 10+ messages in thread
From: Greg Kroah-Hartman @ 2026-04-23 14:28 UTC (permalink / raw)
To: linux-mm
Cc: linux-kernel, Greg Kroah-Hartman, Andrew Morton,
David Hildenbrand, Jason Gunthorpe, John Hubbard, Peter Xu
The !CONFIG_MMU implementation of __get_user_pages_locked() takes a bare
get_page() reference for each page regardless of foll_flags:
if (pages[i])
get_page(pages[i]);
This is reached from pin_user_pages*() with FOLL_PIN set.
unpin_user_page() is shared between MMU and NOMMU configurations and
unconditionally calls gup_put_folio(..., FOLL_PIN), which subtracts
GUP_PIN_COUNTING_BIAS (1024) from the folio refcount.
This means that pin adds 1, and then unpin will subtract 1024.
If a user maps a page (refcount 1), registers it 1023 times as an
io_uring fixed buffer (1023 pin_user_pages calls -> refcount 1024), then
unregisters: the first unpin_user_page subtracts 1024, refcount hits 0,
the page is freed and returned to the buddy allocator. The remaining
1022 unpins write into whatever was reallocated, and the user's VMA
still maps the freed page (NOMMU has no MMU to invalidate it).
Reallocating the page for an io_uring pbuf_ring then lets userspace
corrupt the new owner's data through the stale mapping.
Use try_grab_folio(), which adds GUP_PIN_COUNTING_BIAS for FOLL_PIN and 1
for FOLL_GET, mirroring the CONFIG_MMU path so pin and unpin are
symmetric.
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: David Hildenbrand <david@kernel.org>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Peter Xu <peterx@redhat.com>
Reported-by: Anthropic
Assisted-by: gkh_clanker_t1000
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
v2: - drop huge comment
- rework error return value based on David's suggestion (heck,
pretty much the full patch was written by him now)
Link to v1: https://lore.kernel.org/r/2026042334-acutely-unadorned-e05c@gregkh
mm/gup.c | 13 ++++++++++---
1 file changed, 10 insertions(+), 3 deletions(-)
diff --git a/mm/gup.c b/mm/gup.c
index ad9ded39609c..2f6f95a167af 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1983,6 +1983,7 @@ static long __get_user_pages_locked(struct mm_struct *mm, unsigned long start,
struct vm_area_struct *vma;
bool must_unlock = false;
vm_flags_t vm_flags;
+ int ret, err = -EFAULT;
long i;
if (!nr_pages)
@@ -2019,8 +2020,14 @@ static long __get_user_pages_locked(struct mm_struct *mm, unsigned long start,
if (pages) {
pages[i] = virt_to_page((void *)start);
- if (pages[i])
- get_page(pages[i]);
+ if (!pages[i])
+ break;
+ ret = try_grab_folio(page_folio(pages[i]), 1, foll_flags);
+ if (unlikely(ret)) {
+ pages[i] = NULL;
+ err = ret;
+ break;
+ }
}
start = (start + PAGE_SIZE) & PAGE_MASK;
@@ -2031,7 +2038,7 @@ static long __get_user_pages_locked(struct mm_struct *mm, unsigned long start,
*locked = 0;
}
- return i ? : -EFAULT;
+ return i ? : err;
}
#endif /* !CONFIG_MMU */
--
2.54.0
^ permalink raw reply related [flat|nested] 10+ messages in thread
* Re: [PATCH v2] mm/gup: honour FOLL_PIN in NOMMU __get_user_pages_locked()
2026-04-23 14:28 [PATCH v2] mm/gup: honour FOLL_PIN in NOMMU __get_user_pages_locked() Greg Kroah-Hartman
@ 2026-04-23 15:55 ` David Hildenbrand (Arm)
2026-04-24 11:31 ` Greg Kroah-Hartman
2026-04-24 13:38 ` Andrew Morton
1 sibling, 1 reply; 10+ messages in thread
From: David Hildenbrand (Arm) @ 2026-04-23 15:55 UTC (permalink / raw)
To: Greg Kroah-Hartman, linux-mm
Cc: linux-kernel, Andrew Morton, Jason Gunthorpe, John Hubbard,
Peter Xu
On 4/23/26 16:28, Greg Kroah-Hartman wrote:
> The !CONFIG_MMU implementation of __get_user_pages_locked() takes a bare
> get_page() reference for each page regardless of foll_flags:
> if (pages[i])
> get_page(pages[i]);
>
> This is reached from pin_user_pages*() with FOLL_PIN set.
> unpin_user_page() is shared between MMU and NOMMU configurations and
> unconditionally calls gup_put_folio(..., FOLL_PIN), which subtracts
> GUP_PIN_COUNTING_BIAS (1024) from the folio refcount.
>
> This means that pin adds 1, and then unpin will subtract 1024.
>
> If a user maps a page (refcount 1), registers it 1023 times as an
> io_uring fixed buffer (1023 pin_user_pages calls -> refcount 1024), then
> unregisters: the first unpin_user_page subtracts 1024, refcount hits 0,
> the page is freed and returned to the buddy allocator. The remaining
> 1022 unpins write into whatever was reallocated, and the user's VMA
> still maps the freed page (NOMMU has no MMU to invalidate it).
> Reallocating the page for an io_uring pbuf_ring then lets userspace
> corrupt the new owner's data through the stale mapping.
>
> Use try_grab_folio() which adds GUP_PIN_COUNTING_BIAS for FOLL_PIN and 1
> for FOLL_GET, mirroring the CONFIG_MMU path so pin and unpin are
> symmetric.
>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: David Hildenbrand <david@kernel.org>
> Cc: Jason Gunthorpe <jgg@ziepe.ca>
> Cc: John Hubbard <jhubbard@nvidia.com>
> Cc: Peter Xu <peterx@redhat.com>
> Reported-by: Anthropic
> Assisted-by: gkh_clanker_t1000
Assisted-by: David :(
(no, I'm not a tool! :) )
> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> ---
> v2: - drop huge comment
> - rework error return value based on David's suggestion (heck,
> pretty much the full patch was written by him now)
> Link to v1: https://lore.kernel.org/r/2026042334-acutely-unadorned-e05c@gregkh
>
> mm/gup.c | 13 ++++++++++---
> 1 file changed, 10 insertions(+), 3 deletions(-)
>
> diff --git a/mm/gup.c b/mm/gup.c
> index ad9ded39609c..2f6f95a167af 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -1983,6 +1983,7 @@ static long __get_user_pages_locked(struct mm_struct *mm, unsigned long start,
> struct vm_area_struct *vma;
> bool must_unlock = false;
> vm_flags_t vm_flags;
> + int ret, err = -EFAULT;
> long i;
>
> if (!nr_pages)
> @@ -2019,8 +2020,14 @@ static long __get_user_pages_locked(struct mm_struct *mm, unsigned long start,
>
> if (pages) {
> pages[i] = virt_to_page((void *)start);
> - if (pages[i])
> - get_page(pages[i]);
> + if (!pages[i])
> + break;
Best to mention that change in the patch description. I really think this is the
right thing to do (returning NULL in the page array is just very dubious).
Acked-by: David Hildenbrand (Arm) <david@kernel.org>
--
Cheers,
David
* Re: [PATCH v2] mm/gup: honour FOLL_PIN in NOMMU __get_user_pages_locked()
2026-04-23 15:55 ` David Hildenbrand (Arm)
@ 2026-04-24 11:31 ` Greg Kroah-Hartman
2026-04-24 11:39 ` Andrew Morton
2026-04-24 12:16 ` David Hildenbrand (Arm)
0 siblings, 2 replies; 10+ messages in thread
From: Greg Kroah-Hartman @ 2026-04-24 11:31 UTC (permalink / raw)
To: David Hildenbrand (Arm)
Cc: linux-mm, linux-kernel, Andrew Morton, Jason Gunthorpe,
John Hubbard, Peter Xu
On Thu, Apr 23, 2026 at 05:55:56PM +0200, David Hildenbrand (Arm) wrote:
> On 4/23/26 16:28, Greg Kroah-Hartman wrote:
> > The !CONFIG_MMU implementation of __get_user_pages_locked() takes a bare
> > get_page() reference for each page regardless of foll_flags:
> > if (pages[i])
> > get_page(pages[i]);
> >
> > This is reached from pin_user_pages*() with FOLL_PIN set.
> > unpin_user_page() is shared between MMU and NOMMU configurations and
> > unconditionally calls gup_put_folio(..., FOLL_PIN), which subtracts
> > GUP_PIN_COUNTING_BIAS (1024) from the folio refcount.
> >
> > This means that pin adds 1, and then unpin will subtract 1024.
> >
> > If a user maps a page (refcount 1), registers it 1023 times as an
> > io_uring fixed buffer (1023 pin_user_pages calls -> refcount 1024), then
> > unregisters: the first unpin_user_page subtracts 1024, refcount hits 0,
> > the page is freed and returned to the buddy allocator. The remaining
> > 1022 unpins write into whatever was reallocated, and the user's VMA
> > still maps the freed page (NOMMU has no MMU to invalidate it).
> > Reallocating the page for an io_uring pbuf_ring then lets userspace
> > corrupt the new owner's data through the stale mapping.
> >
> > Use try_grab_folio() which adds GUP_PIN_COUNTING_BIAS for FOLL_PIN and 1
> > for FOLL_GET, mirroring the CONFIG_MMU path so pin and unpin are
> > symmetric.
> >
> > Cc: Andrew Morton <akpm@linux-foundation.org>
> > Cc: David Hildenbrand <david@kernel.org>
> > Cc: Jason Gunthorpe <jgg@ziepe.ca>
> > Cc: John Hubbard <jhubbard@nvidia.com>
> > Cc: Peter Xu <peterx@redhat.com>
> > Reported-by: Anthropic
> > Assisted-by: gkh_clanker_t1000
>
> Assisted-by: David :(
>
> (no, I'm not a tool! :) )
True, sorry, I guess people can "assist", I should have added that.
If Andrew's tools automatically pick this up then:
Assisted-by: David Hildenbrand <david@kernel.org>
> > Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> > ---
> > v2: - drop huge comment
> > - rework error return value based on David's suggestion (heck,
> > pretty much the full patch was written by him now)
> > Link to v1: https://lore.kernel.org/r/2026042334-acutely-unadorned-e05c@gregkh
> >
> > mm/gup.c | 13 ++++++++++---
> > 1 file changed, 10 insertions(+), 3 deletions(-)
> >
> > diff --git a/mm/gup.c b/mm/gup.c
> > index ad9ded39609c..2f6f95a167af 100644
> > --- a/mm/gup.c
> > +++ b/mm/gup.c
> > @@ -1983,6 +1983,7 @@ static long __get_user_pages_locked(struct mm_struct *mm, unsigned long start,
> > struct vm_area_struct *vma;
> > bool must_unlock = false;
> > vm_flags_t vm_flags;
> > + int ret, err = -EFAULT;
> > long i;
> >
> > if (!nr_pages)
> > @@ -2019,8 +2020,14 @@ static long __get_user_pages_locked(struct mm_struct *mm, unsigned long start,
> >
> > if (pages) {
> > pages[i] = virt_to_page((void *)start);
> > - if (pages[i])
> > - get_page(pages[i]);
> > + if (!pages[i])
> > + break;
>
> Best to mention that change in the patch description. I really think this is the
> right thing to do (returning NULL in the page array is just very dubious).
Ick, I see Andrew already grabbed this so I'll just leave it for now,
thanks for the help and review!
greg k-h
* Re: [PATCH v2] mm/gup: honour FOLL_PIN in NOMMU __get_user_pages_locked()
2026-04-24 11:31 ` Greg Kroah-Hartman
@ 2026-04-24 11:39 ` Andrew Morton
2026-04-24 12:16 ` David Hildenbrand (Arm)
1 sibling, 0 replies; 10+ messages in thread
From: Andrew Morton @ 2026-04-24 11:39 UTC (permalink / raw)
To: Greg Kroah-Hartman
Cc: David Hildenbrand (Arm), linux-mm, linux-kernel, Jason Gunthorpe,
John Hubbard, Peter Xu
On Fri, 24 Apr 2026 13:31:34 +0200 Greg Kroah-Hartman <gregkh@linuxfoundation.org> wrote:
> > > @@ -2019,8 +2020,14 @@ static long __get_user_pages_locked(struct mm_struct *mm, unsigned long start,
> > >
> > > if (pages) {
> > > pages[i] = virt_to_page((void *)start);
> > > - if (pages[i])
> > > - get_page(pages[i]);
> > > + if (!pages[i])
> > > + break;
> >
> > Best to mention that change in the patch description. I really think this is the
> > right thing to do (returning NULL in the page array is just very dubious).
>
> Ick, I see Andrew already grabbed this so I'll just leave it for now,
> thanks for the help and review!
Andrew does copy-n-paste ;)
* Re: [PATCH v2] mm/gup: honour FOLL_PIN in NOMMU __get_user_pages_locked()
2026-04-24 11:31 ` Greg Kroah-Hartman
2026-04-24 11:39 ` Andrew Morton
@ 2026-04-24 12:16 ` David Hildenbrand (Arm)
2026-04-24 12:41 ` Andrew Morton
1 sibling, 1 reply; 10+ messages in thread
From: David Hildenbrand (Arm) @ 2026-04-24 12:16 UTC (permalink / raw)
To: Greg Kroah-Hartman
Cc: linux-mm, linux-kernel, Andrew Morton, Jason Gunthorpe,
John Hubbard, Peter Xu
On 4/24/26 13:31, Greg Kroah-Hartman wrote:
> On Thu, Apr 23, 2026 at 05:55:56PM +0200, David Hildenbrand (Arm) wrote:
>> On 4/23/26 16:28, Greg Kroah-Hartman wrote:
>>> The !CONFIG_MMU implementation of __get_user_pages_locked() takes a bare
>>> get_page() reference for each page regardless of foll_flags:
>>> if (pages[i])
>>> get_page(pages[i]);
>>>
>>> This is reached from pin_user_pages*() with FOLL_PIN set.
>>> unpin_user_page() is shared between MMU and NOMMU configurations and
>>> unconditionally calls gup_put_folio(..., FOLL_PIN), which subtracts
>>> GUP_PIN_COUNTING_BIAS (1024) from the folio refcount.
>>>
>>> This means that pin adds 1, and then unpin will subtract 1024.
>>>
>>> If a user maps a page (refcount 1), registers it 1023 times as an
>>> io_uring fixed buffer (1023 pin_user_pages calls -> refcount 1024), then
>>> unregisters: the first unpin_user_page subtracts 1024, refcount hits 0,
>>> the page is freed and returned to the buddy allocator. The remaining
>>> 1022 unpins write into whatever was reallocated, and the user's VMA
>>> still maps the freed page (NOMMU has no MMU to invalidate it).
>>> Reallocating the page for an io_uring pbuf_ring then lets userspace
>>> corrupt the new owner's data through the stale mapping.
>>>
>>> Use try_grab_folio() which adds GUP_PIN_COUNTING_BIAS for FOLL_PIN and 1
>>> for FOLL_GET, mirroring the CONFIG_MMU path so pin and unpin are
>>> symmetric.
>>>
>>> Cc: Andrew Morton <akpm@linux-foundation.org>
>>> Cc: David Hildenbrand <david@kernel.org>
>>> Cc: Jason Gunthorpe <jgg@ziepe.ca>
>>> Cc: John Hubbard <jhubbard@nvidia.com>
>>> Cc: Peter Xu <peterx@redhat.com>
>>> Reported-by: Anthropic
>>> Assisted-by: gkh_clanker_t1000
>>
>> Assisted-by: David :(
>>
>> (no, I'm not a tool! :) )
>
> True, sorry, I guess people can "assist", I should have added that.
>
> If Andrew's tools automatically pick this up then:
>
> Assisted-by: David Hildenbrand <david@kernel.org>
I think we usually use Suggested-by:, but really I was just joking :)
>
>>> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
>>> ---
>>> v2: - drop huge comment
>>> - rework error return value based on David's suggestion (heck,
>>> pretty much the full patch was written by him now)
>>> Link to v1: https://lore.kernel.org/r/2026042334-acutely-unadorned-e05c@gregkh
>>>
>>> mm/gup.c | 13 ++++++++++---
>>> 1 file changed, 10 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/mm/gup.c b/mm/gup.c
>>> index ad9ded39609c..2f6f95a167af 100644
>>> --- a/mm/gup.c
>>> +++ b/mm/gup.c
>>> @@ -1983,6 +1983,7 @@ static long __get_user_pages_locked(struct mm_struct *mm, unsigned long start,
>>> struct vm_area_struct *vma;
>>> bool must_unlock = false;
>>> vm_flags_t vm_flags;
>>> + int ret, err = -EFAULT;
>>> long i;
>>>
>>> if (!nr_pages)
>>> @@ -2019,8 +2020,14 @@ static long __get_user_pages_locked(struct mm_struct *mm, unsigned long start,
>>>
>>> if (pages) {
>>> pages[i] = virt_to_page((void *)start);
>>> - if (pages[i])
>>> - get_page(pages[i]);
>>> + if (!pages[i])
>>> + break;
>>
>> Best to mention that change in the patch description. I really think this is the
>> right thing to do (returning NULL in the page array is just very dubious).
>
> Ick, I see Andrew already grabbed this so I'll just leave it for now,
> thanks for the help and review!
Andrew, can you add "While at it, don't return NULL pointers in the page array,
as this is really not expected for GUP users; instead, just fail and return
-EFAULT."? Thanks!
--
Cheers,
David
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [PATCH v2] mm/gup: honour FOLL_PIN in NOMMU __get_user_pages_locked()
2026-04-24 12:16 ` David Hildenbrand (Arm)
@ 2026-04-24 12:41 ` Andrew Morton
2026-04-24 12:49 ` David Hildenbrand (Arm)
0 siblings, 1 reply; 10+ messages in thread
From: Andrew Morton @ 2026-04-24 12:41 UTC (permalink / raw)
To: David Hildenbrand (Arm)
Cc: Greg Kroah-Hartman, linux-mm, linux-kernel, Jason Gunthorpe,
John Hubbard, Peter Xu
On Fri, 24 Apr 2026 14:16:21 +0200 "David Hildenbrand (Arm)" <david@kernel.org> wrote:
> >>> Cc: David Hildenbrand <david@kernel.org>
> >>> Cc: Jason Gunthorpe <jgg@ziepe.ca>
> >>> Cc: John Hubbard <jhubbard@nvidia.com>
> >>> Cc: Peter Xu <peterx@redhat.com>
> >>> Reported-by: Anthropic
> >>> Assisted-by: gkh_clanker_t1000
> >>
> >> Assisted-by: David :(
> >>
> >> (no, I'm not a tool! :) )
> >
> > True, sorry, I guess people can "assist", I should have added that.
> >
> > If Andrew's tools automatically pick this up then:
> >
> > Assisted-by: David Hildenbrand <david@kernel.org>
>
> I think we usually use Suggested-by:, but really I was just joking :)
I thought you were gkh_clanker_t1000
> Andrew, can you add "While at it, don't return NULL pointers in the page array,
> as this is really not expected for GUP users; instead, just fail and return
> -EFAULT."? Thanks!
Did.
* Re: [PATCH v2] mm/gup: honour FOLL_PIN in NOMMU __get_user_pages_locked()
2026-04-24 12:41 ` Andrew Morton
@ 2026-04-24 12:49 ` David Hildenbrand (Arm)
0 siblings, 0 replies; 10+ messages in thread
From: David Hildenbrand (Arm) @ 2026-04-24 12:49 UTC (permalink / raw)
To: Andrew Morton
Cc: Greg Kroah-Hartman, linux-mm, linux-kernel, Jason Gunthorpe,
John Hubbard, Peter Xu
On 4/24/26 14:41, Andrew Morton wrote:
> On Fri, 24 Apr 2026 14:16:21 +0200 "David Hildenbrand (Arm)" <david@kernel.org> wrote:
>
>>>
>>> True, sorry, I guess people can "assist", I should have added that.
>>>
>>> If Andrew's tools automatically pick this up then:
>>>
>>> Assisted-by: David Hildenbrand <david@kernel.org>
>>
>> I think we usually use Suggested-by:, but really I was just joking :)
>
> I thought you were gkh_clanker_t1000
That was supposed to be our tiny little secret. ;)
--
Cheers,
David
* Re: [PATCH v2] mm/gup: honour FOLL_PIN in NOMMU __get_user_pages_locked()
2026-04-23 14:28 [PATCH v2] mm/gup: honour FOLL_PIN in NOMMU __get_user_pages_locked() Greg Kroah-Hartman
2026-04-23 15:55 ` David Hildenbrand (Arm)
@ 2026-04-24 13:38 ` Andrew Morton
2026-04-24 14:04 ` Greg Kroah-Hartman
2026-04-24 14:19 ` David Hildenbrand (Arm)
1 sibling, 2 replies; 10+ messages in thread
From: Andrew Morton @ 2026-04-24 13:38 UTC (permalink / raw)
To: Greg Kroah-Hartman
Cc: linux-mm, linux-kernel, David Hildenbrand, Jason Gunthorpe,
John Hubbard, Peter Xu
On Thu, 23 Apr 2026 16:28:04 +0200 Greg Kroah-Hartman <gregkh@linuxfoundation.org> wrote:
> The !CONFIG_MMU implementation of __get_user_pages_locked() takes a bare
> get_page() reference for each page regardless of foll_flags:
> if (pages[i])
> get_page(pages[i]);
>
> This is reached from pin_user_pages*() with FOLL_PIN set.
> unpin_user_page() is shared between MMU and NOMMU configurations and
> unconditionally calls gup_put_folio(..., FOLL_PIN), which subtracts
> GUP_PIN_COUNTING_BIAS (1024) from the folio refcount.
>
> This means that pin adds 1, and then unpin will subtract 1024.
>
> If a user maps a page (refcount 1), registers it 1023 times as an
> io_uring fixed buffer (1023 pin_user_pages calls -> refcount 1024), then
> unregisters: the first unpin_user_page subtracts 1024, refcount hits 0,
> the page is freed and returned to the buddy allocator. The remaining
> 1022 unpins write into whatever was reallocated, and the user's VMA
> still maps the freed page (NOMMU has no MMU to invalidate it).
> Reallocating the page for an io_uring pbuf_ring then lets userspace
> corrupt the new owner's data through the stale mapping.
>
> Use try_grab_folio() which adds GUP_PIN_COUNTING_BIAS for FOLL_PIN and 1
> for FOLL_GET, mirroring the CONFIG_MMU path so pin and unpin are
> symmetric.
Battle of the bots?
https://sashiko.dev/#/patchset/2026042303-vendor-outright-b9d2@gregkh
* Re: [PATCH v2] mm/gup: honour FOLL_PIN in NOMMU __get_user_pages_locked()
2026-04-24 13:38 ` Andrew Morton
@ 2026-04-24 14:04 ` Greg Kroah-Hartman
2026-04-24 14:19 ` David Hildenbrand (Arm)
1 sibling, 0 replies; 10+ messages in thread
From: Greg Kroah-Hartman @ 2026-04-24 14:04 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-mm, linux-kernel, David Hildenbrand, Jason Gunthorpe,
John Hubbard, Peter Xu
On Fri, Apr 24, 2026 at 06:38:17AM -0700, Andrew Morton wrote:
> On Thu, 23 Apr 2026 16:28:04 +0200 Greg Kroah-Hartman <gregkh@linuxfoundation.org> wrote:
>
> > The !CONFIG_MMU implementation of __get_user_pages_locked() takes a bare
> > get_page() reference for each page regardless of foll_flags:
> > if (pages[i])
> > get_page(pages[i]);
> >
> > This is reached from pin_user_pages*() with FOLL_PIN set.
> > unpin_user_page() is shared between MMU and NOMMU configurations and
> > unconditionally calls gup_put_folio(..., FOLL_PIN), which subtracts
> > GUP_PIN_COUNTING_BIAS (1024) from the folio refcount.
> >
> > This means that pin adds 1, and then unpin will subtract 1024.
> >
> > If a user maps a page (refcount 1), registers it 1023 times as an
> > io_uring fixed buffer (1023 pin_user_pages calls -> refcount 1024), then
> > unregisters: the first unpin_user_page subtracts 1024, refcount hits 0,
> > the page is freed and returned to the buddy allocator. The remaining
> > 1022 unpins write into whatever was reallocated, and the user's VMA
> > still maps the freed page (NOMMU has no MMU to invalidate it).
> > Reallocating the page for an io_uring pbuf_ring then lets userspace
> > corrupt the new owner's data through the stale mapping.
> >
> > Use try_grab_folio() which adds GUP_PIN_COUNTING_BIAS for FOLL_PIN and 1
> > for FOLL_GET, mirroring the CONFIG_MMU path so pin and unpin are
> > symmetric.
>
> Battle of the bots?
> https://sashiko.dev/#/patchset/2026042303-vendor-outright-b9d2@gregkh
Odd, I really don't know the answer to that. I can provide my
reproducer if anyone wants to tell me this patch is wrong.
thanks,
greg k-h
* Re: [PATCH v2] mm/gup: honour FOLL_PIN in NOMMU __get_user_pages_locked()
2026-04-24 13:38 ` Andrew Morton
2026-04-24 14:04 ` Greg Kroah-Hartman
@ 2026-04-24 14:19 ` David Hildenbrand (Arm)
1 sibling, 0 replies; 10+ messages in thread
From: David Hildenbrand (Arm) @ 2026-04-24 14:19 UTC (permalink / raw)
To: Andrew Morton, Greg Kroah-Hartman
Cc: linux-mm, linux-kernel, Jason Gunthorpe, John Hubbard, Peter Xu
On 4/24/26 15:38, Andrew Morton wrote:
> On Thu, 23 Apr 2026 16:28:04 +0200 Greg Kroah-Hartman <gregkh@linuxfoundation.org> wrote:
>
>> The !CONFIG_MMU implementation of __get_user_pages_locked() takes a bare
>> get_page() reference for each page regardless of foll_flags:
>> if (pages[i])
>> get_page(pages[i]);
>>
>> This is reached from pin_user_pages*() with FOLL_PIN set.
>> unpin_user_page() is shared between MMU and NOMMU configurations and
>> unconditionally calls gup_put_folio(..., FOLL_PIN), which subtracts
>> GUP_PIN_COUNTING_BIAS (1024) from the folio refcount.
>>
>> This means that pin adds 1, and then unpin will subtract 1024.
>>
>> If a user maps a page (refcount 1), registers it 1023 times as an
>> io_uring fixed buffer (1023 pin_user_pages calls -> refcount 1024), then
>> unregisters: the first unpin_user_page subtracts 1024, refcount hits 0,
>> the page is freed and returned to the buddy allocator. The remaining
>> 1022 unpins write into whatever was reallocated, and the user's VMA
>> still maps the freed page (NOMMU has no MMU to invalidate it).
>> Reallocating the page for an io_uring pbuf_ring then lets userspace
>> corrupt the new owner's data through the stale mapping.
>>
>> Use try_grab_folio() which adds GUP_PIN_COUNTING_BIAS for FOLL_PIN and 1
>> for FOLL_GET, mirroring the CONFIG_MMU path so pin and unpin are
>> symmetric.
>
> Battle of the bots?
> https://sashiko.dev/#/patchset/2026042303-vendor-outright-b9d2@gregkh
It references the
if (pages && !(flags & FOLL_PIN))
flags |= FOLL_GET;
I'm not sure if there is actual NOMMU code that triggers it. For example,
uprobes uses that pattern, but I suspect that that's not a thing on NOMMU.
Probably best to just squash:
diff --git a/mm/gup.c b/mm/gup.c
index ad9ded39609c..44bd28cf6e00 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1988,6 +1988,9 @@ static long __get_user_pages_locked(struct mm_struct *mm,
unsigned long start,
if (!nr_pages)
return 0;
+ if (pages && !(foll_flags & FOLL_PIN))
+ foll_flags |= FOLL_GET;
+
/*
* The internal caller expects GUP to manage the lock internally and the
* lock must be released when this returns.
--
Cheers,
David
end of thread, other threads:[~2026-04-24 14:19 UTC | newest]
Thread overview: 10+ messages
2026-04-23 14:28 [PATCH v2] mm/gup: honour FOLL_PIN in NOMMU __get_user_pages_locked() Greg Kroah-Hartman
2026-04-23 15:55 ` David Hildenbrand (Arm)
2026-04-24 11:31 ` Greg Kroah-Hartman
2026-04-24 11:39 ` Andrew Morton
2026-04-24 12:16 ` David Hildenbrand (Arm)
2026-04-24 12:41 ` Andrew Morton
2026-04-24 12:49 ` David Hildenbrand (Arm)
2026-04-24 13:38 ` Andrew Morton
2026-04-24 14:04 ` Greg Kroah-Hartman
2026-04-24 14:19 ` David Hildenbrand (Arm)