The Linux Kernel Mailing List
* [PATCH v2] mm/page_vma_mapped_walk: Use ptep_get_lockless() for lockless access
@ 2026-05-04 13:04 Alexander Gordeev
  2026-05-07  9:34 ` Wei Yang
  0 siblings, 1 reply; 8+ messages in thread
From: Alexander Gordeev @ 2026-05-04 13:04 UTC (permalink / raw)
  To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Anshuman Khandual, Oscar Salvador
  Cc: linux-s390, linux-mm, linux-kernel, Gerald Schaefer,
	Heiko Carstens, Vasily Gorbik

Switch from the ptep_get() to the ptep_get_lockless() accessor for
PTE reads that are done without the page table lock held.

Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
---
 mm/page_vma_mapped.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index a4d52fdb3056..6559e17f11c2 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -41,7 +41,7 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw, pmd_t *pmdvalp,
 	if (!pvmw->pte)
 		return false;
 
-	ptent = ptep_get(pvmw->pte);
+	ptent = ptep_get_lockless(pvmw->pte);
 
 	if (pte_none(ptent)) {
 		return false;
@@ -310,7 +310,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 				goto restart;
 			}
 			pvmw->pte++;
-		} while (pte_none(ptep_get(pvmw->pte)));
+		} while (pte_none(ptep_get_lockless(pvmw->pte)));
 
 		if (!pvmw->ptl) {
 			spin_lock(ptl);
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [PATCH v2] mm/page_vma_mapped_walk: Use ptep_get_lockless() for lockless access
  2026-05-04 13:04 [PATCH v2] mm/page_vma_mapped_walk: Use ptep_get_lockless() for lockless access Alexander Gordeev
@ 2026-05-07  9:34 ` Wei Yang
  2026-05-07 10:32   ` Alexander Gordeev
  0 siblings, 1 reply; 8+ messages in thread
From: Wei Yang @ 2026-05-07  9:34 UTC (permalink / raw)
  To: Alexander Gordeev
  Cc: Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Anshuman Khandual, Oscar Salvador, linux-s390, linux-mm,
	linux-kernel, Gerald Schaefer, Heiko Carstens, Vasily Gorbik

On Mon, May 04, 2026 at 03:04:34PM +0200, Alexander Gordeev wrote:
>Switch from ptep_get() to ptep_get_lockless() accessor for
>PTE reads when no lock is taken.
>
>Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
>---
> mm/page_vma_mapped.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
>diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
>index a4d52fdb3056..6559e17f11c2 100644
>--- a/mm/page_vma_mapped.c
>+++ b/mm/page_vma_mapped.c
>@@ -41,7 +41,7 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw, pmd_t *pmdvalp,
> 	if (!pvmw->pte)
> 		return false;
> 
>-	ptent = ptep_get(pvmw->pte);
>+	ptent = ptep_get_lockless(pvmw->pte);
> 
> 	if (pte_none(ptent)) {
> 		return false;
>@@ -310,7 +310,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
> 				goto restart;
> 			}
> 			pvmw->pte++;
>-		} while (pte_none(ptep_get(pvmw->pte)));
>+		} while (pte_none(ptep_get_lockless(pvmw->pte)));

As Oscar mentioned in lkml.org/lkml/2026/4/27/630, map_pte() may take the
lock, so perhaps this is not right?

> 
> 		if (!pvmw->ptl) {
> 			spin_lock(ptl);
>-- 
>2.51.0
>

-- 
Wei Yang
Help you, Help me

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH v2] mm/page_vma_mapped_walk: Use ptep_get_lockless() for lockless access
  2026-05-07  9:34 ` Wei Yang
@ 2026-05-07 10:32   ` Alexander Gordeev
  2026-05-08  1:00     ` Wei Yang
  0 siblings, 1 reply; 8+ messages in thread
From: Alexander Gordeev @ 2026-05-07 10:32 UTC (permalink / raw)
  To: Wei Yang
  Cc: Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Anshuman Khandual, Oscar Salvador, linux-s390, linux-mm,
	linux-kernel, Gerald Schaefer, Heiko Carstens, Vasily Gorbik

On Thu, May 07, 2026 at 09:34:33AM +0000, Wei Yang wrote:
> >@@ -310,7 +310,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
> > 				goto restart;
> > 			}
> > 			pvmw->pte++;
> >-		} while (pte_none(ptep_get(pvmw->pte)));
> >+		} while (pte_none(ptep_get_lockless(pvmw->pte)));
> 
> As Oscar mentioned in lkml.org/lkml/2026/4/27/630, map_pte() may take the
> lock. So probably it is not right?

If I read the code correctly, map_pte() might take the lock, but it
also might not. If it takes the lock and uses ptep_get_lockless(),
that is fine. But if it does not take the lock and uses ptep_get(),
that is an issue.

> > 
> > 		if (!pvmw->ptl) {
> > 			spin_lock(ptl);
> >-- 
> >2.51.0
> >
> 
> -- 
> Wei Yang

Thanks!

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH v2] mm/page_vma_mapped_walk: Use ptep_get_lockless() for lockless access
  2026-05-07 10:32   ` Alexander Gordeev
@ 2026-05-08  1:00     ` Wei Yang
  2026-05-08  5:15       ` Alexander Gordeev
  0 siblings, 1 reply; 8+ messages in thread
From: Wei Yang @ 2026-05-08  1:00 UTC (permalink / raw)
  To: Alexander Gordeev
  Cc: Wei Yang, Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Anshuman Khandual, Oscar Salvador, linux-s390, linux-mm,
	linux-kernel, Gerald Schaefer, Heiko Carstens, Vasily Gorbik

On Thu, May 07, 2026 at 12:32:09PM +0200, Alexander Gordeev wrote:
>On Thu, May 07, 2026 at 09:34:33AM +0000, Wei Yang wrote:
>> >@@ -310,7 +310,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>> > 				goto restart;
>> > 			}
>> > 			pvmw->pte++;
>> >-		} while (pte_none(ptep_get(pvmw->pte)));
>> >+		} while (pte_none(ptep_get_lockless(pvmw->pte)));
>> 
>> As Oscar mentioned in lkml.org/lkml/2026/4/27/630, map_pte() may take the
>> lock. So probably it is not right?
>
>If I read the code correctly map_pte() might take the lock, but also
>might not take it. If it took the lock and uses ptep_get_lockless(),
>then it is fine. But if it did not take the lock and uses ptep_get(),
>then it is an issue.
>

So the rule here is:

  * ptep_get_lockless() can be used whether or not the lock is held
  * ptep_get() may only be used with the lock held

Right?

>> > 
>> > 		if (!pvmw->ptl) {
>> > 			spin_lock(ptl);
>> >-- 
>> >2.51.0
>> >
>> 
>> -- 
>> Wei Yang
>
>Thanks!

-- 
Wei Yang
Help you, Help me

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH v2] mm/page_vma_mapped_walk: Use ptep_get_lockless() for lockless access
  2026-05-08  1:00     ` Wei Yang
@ 2026-05-08  5:15       ` Alexander Gordeev
  2026-05-08  6:23         ` Wei Yang
  2026-05-08  8:17         ` David Hildenbrand (Arm)
  0 siblings, 2 replies; 8+ messages in thread
From: Alexander Gordeev @ 2026-05-08  5:15 UTC (permalink / raw)
  To: Wei Yang
  Cc: Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Anshuman Khandual, Oscar Salvador, linux-s390, linux-mm,
	linux-kernel, Gerald Schaefer, Heiko Carstens, Vasily Gorbik

On Fri, May 08, 2026 at 01:00:40AM +0000, Wei Yang wrote:
> On Thu, May 07, 2026 at 12:32:09PM +0200, Alexander Gordeev wrote:
> >On Thu, May 07, 2026 at 09:34:33AM +0000, Wei Yang wrote:
> >> >@@ -310,7 +310,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
> >> > 				goto restart;
> >> > 			}
> >> > 			pvmw->pte++;
> >> >-		} while (pte_none(ptep_get(pvmw->pte)));
> >> >+		} while (pte_none(ptep_get_lockless(pvmw->pte)));
> >> 
> >> As Oscar mentioned in lkml.org/lkml/2026/4/27/630, map_pte() may take the
> >> lock. So probably it is not right?
> >
> >If I read the code correctly map_pte() might take the lock, but also
> >might not take it. If it took the lock and uses ptep_get_lockless(),
> >then it is fine. But if it did not take the lock and uses ptep_get(),
> >then it is an issue.
> >
> 
> So the rule here is:
> 
>   * ptep_get_lockless() could be used for locked and not locked
>   * ptep_get() only used when locked
> 
> Right?

Yes, this is my assumption.

> >> > 
> >> > 		if (!pvmw->ptl) {
> >> > 			spin_lock(ptl);
> >> >-- 
> >> >2.51.0
> >> >
> >> 
> >> -- 
> >> Wei Yang
> >
> >Thanks!
> 
> -- 
> Wei Yang
> Help you, Help me

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH v2] mm/page_vma_mapped_walk: Use ptep_get_lockless() for lockless access
  2026-05-08  5:15       ` Alexander Gordeev
@ 2026-05-08  6:23         ` Wei Yang
  2026-05-08  8:17         ` David Hildenbrand (Arm)
  1 sibling, 0 replies; 8+ messages in thread
From: Wei Yang @ 2026-05-08  6:23 UTC (permalink / raw)
  To: Alexander Gordeev
  Cc: Wei Yang, Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Anshuman Khandual, Oscar Salvador, linux-s390, linux-mm,
	linux-kernel, Gerald Schaefer, Heiko Carstens, Vasily Gorbik

On Fri, May 08, 2026 at 07:15:45AM +0200, Alexander Gordeev wrote:
>On Fri, May 08, 2026 at 01:00:40AM +0000, Wei Yang wrote:
>> On Thu, May 07, 2026 at 12:32:09PM +0200, Alexander Gordeev wrote:
>> >On Thu, May 07, 2026 at 09:34:33AM +0000, Wei Yang wrote:
>> >> >@@ -310,7 +310,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>> >> > 				goto restart;
>> >> > 			}
>> >> > 			pvmw->pte++;
>> >> >-		} while (pte_none(ptep_get(pvmw->pte)));
>> >> >+		} while (pte_none(ptep_get_lockless(pvmw->pte)));
>> >> 
>> >> As Oscar mentioned in lkml.org/lkml/2026/4/27/630, map_pte() may take the
>> >> lock. So probably it is not right?
>> >
>> >If I read the code correctly map_pte() might take the lock, but also
>> >might not take it. If it took the lock and uses ptep_get_lockless(),
>> >then it is fine. But if it did not take the lock and uses ptep_get(),
>> >then it is an issue.
>> >
>> 
>> So the rule here is:
>> 
>>   * ptep_get_lockless() could be used for locked and not locked
>>   * ptep_get() only used when locked
>> 
>> Right?
>
>Yes, this is my assumption.
>

Thanks. If so, it looks good.

>> >> > 
>> >> > 		if (!pvmw->ptl) {
>> >> > 			spin_lock(ptl);
>> >> >-- 
>> >> >2.51.0
>> >> >
>> >> 
>> >> -- 
>> >> Wei Yang
>> >
>> >Thanks!
>> 
>> -- 
>> Wei Yang
>> Help you, Help me

-- 
Wei Yang
Help you, Help me

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH v2] mm/page_vma_mapped_walk: Use ptep_get_lockless() for lockless access
  2026-05-08  5:15       ` Alexander Gordeev
  2026-05-08  6:23         ` Wei Yang
@ 2026-05-08  8:17         ` David Hildenbrand (Arm)
  2026-05-08  8:34           ` Alexander Gordeev
  1 sibling, 1 reply; 8+ messages in thread
From: David Hildenbrand (Arm) @ 2026-05-08  8:17 UTC (permalink / raw)
  To: Alexander Gordeev, Wei Yang
  Cc: Andrew Morton, Lorenzo Stoakes, Anshuman Khandual, Oscar Salvador,
	linux-s390, linux-mm, linux-kernel, Gerald Schaefer,
	Heiko Carstens, Vasily Gorbik

On 5/8/26 07:15, Alexander Gordeev wrote:
> On Fri, May 08, 2026 at 01:00:40AM +0000, Wei Yang wrote:
>> On Thu, May 07, 2026 at 12:32:09PM +0200, Alexander Gordeev wrote:
>>>
>>> If I read the code correctly map_pte() might take the lock, but also
>>> might not take it. If it took the lock and uses ptep_get_lockless(),
>>> then it is fine. But if it did not take the lock and uses ptep_get(),
>>> then it is an issue.
>>>
>>
>> So the rule here is:
>>
>>   * ptep_get_lockless() could be used for locked and not locked
>>   * ptep_get() only used when locked
>>
>> Right?
> 
> Yes, this is my assumption.

I agree, ptep_get_lockless() is simply meant to return something sensible even
if there are concurrent modifications (which cannot happen while the PTL is held).

That's why only 32-bit with 64-bit PTEs and arm64 even have to special-case it.


We should clarify in the patch description that in the do-while loop, we might
or might not hold the PTL, and that calling ptep_get_lockless() with the PTL
held is OK.

I wonder if it would be more efficient and clearer to use the correct variant, though?


diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index a4d52fdb3056..36d97661a4e5 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -187,6 +187,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
        p4d_t *p4d;
        pud_t *pud;
        pmd_t pmde;
+       pte_t pteval;

        /* The only possible pmd mapping has been handled on last iteration */
        if (pvmw->pmd && !pvmw->pte)
@@ -310,7 +311,11 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
                                goto restart;
                        }
                        pvmw->pte++;
-               } while (pte_none(ptep_get(pvmw->pte)));
+                       if (!pvmw->ptl)
+                               pteval = ptep_get_lockless(pvmw->pte);
+                       else
+                               pteval = ptep_get(pvmw->pte);
+               } while (pte_none(pteval));

                if (!pvmw->ptl) {
                        spin_lock(ptl);


-- 
Cheers,

David

^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [PATCH v2] mm/page_vma_mapped_walk: Use ptep_get_lockless() for lockless access
  2026-05-08  8:17         ` David Hildenbrand (Arm)
@ 2026-05-08  8:34           ` Alexander Gordeev
  0 siblings, 0 replies; 8+ messages in thread
From: Alexander Gordeev @ 2026-05-08  8:34 UTC (permalink / raw)
  To: David Hildenbrand (Arm)
  Cc: Wei Yang, Andrew Morton, Lorenzo Stoakes, Anshuman Khandual,
	Oscar Salvador, linux-s390, linux-mm, linux-kernel,
	Gerald Schaefer, Heiko Carstens, Vasily Gorbik

On Fri, May 08, 2026 at 10:17:16AM +0200, David Hildenbrand (Arm) wrote:
> >> So the rule here is:
> >>
> >>   * ptep_get_lockless() could be used for locked and not locked
> >>   * ptep_get() only used when locked
> >>
> >> Right?
> > 
> > Yes, this is my assumption.
> 
> I agree, ptep_get_lockless() simply makes sense to return something sensible if
> there are concurrent modifications (which cannot happen when the PTL is held).
> 
> That's why only 32bit with 64bit PTEs and arm64 even has to special-case it.
> 
> 
> We should clarify in the patch description that in the do-while loop, we might
> or might not hold the PTL, and that calling ptep_get_lockless() with the PTL
> held is OK.
> 
> I wonder if it's more efficient and clearer, to use the correct variant, though?
> 
> 
> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> index a4d52fdb3056..36d97661a4e5 100644
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -187,6 +187,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>         p4d_t *p4d;
>         pud_t *pud;
>         pmd_t pmde;
> +       pte_t pteval;
> 
>         /* The only possible pmd mapping has been handled on last iteration */
>         if (pvmw->pmd && !pvmw->pte)
> @@ -310,7 +311,11 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>                                 goto restart;
>                         }
>                         pvmw->pte++;
> -               } while (pte_none(ptep_get(pvmw->pte)));
> +                       if (!pvmw->ptl)
> +                               pteval = ptep_get_lockless(pvmw->pte);
> +                       else
> +                               pteval = ptep_get(pvmw->pte);
> +               } while (pte_none(pteval));

Looks fine to me. I will try and add it to the next version.

>                 if (!pvmw->ptl) {
>                         spin_lock(ptl);
> 
> 
> -- 
> Cheers,
> 
> David

Thanks!

^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2026-05-08  8:34 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-05-04 13:04 [PATCH v2] mm/page_vma_mapped_walk: Use ptep_get_lockless() for lockless access Alexander Gordeev
2026-05-07  9:34 ` Wei Yang
2026-05-07 10:32   ` Alexander Gordeev
2026-05-08  1:00     ` Wei Yang
2026-05-08  5:15       ` Alexander Gordeev
2026-05-08  6:23         ` Wei Yang
2026-05-08  8:17         ` David Hildenbrand (Arm)
2026-05-08  8:34           ` Alexander Gordeev
