* [PATCH] mm: move mask update out of the atomic context
From: Alexander Gordeev @ 2025-06-23 8:04 UTC (permalink / raw)
To: Andrew Morton; +Cc: linux-mm, linux-kernel
There is no need to modify the page table synchronization mask
while apply_to_pte_range() holds the user page table spinlock.
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
---
mm/memory.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/memory.c b/mm/memory.c
index 8eba595056fe..6849ab4e44bf 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3035,12 +3035,13 @@ static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
}
} while (pte++, addr += PAGE_SIZE, addr != end);
}
- *mask |= PGTBL_PTE_MODIFIED;
arch_leave_lazy_mmu_mode();
if (mm != &init_mm)
pte_unmap_unlock(mapped_pte, ptl);
+ *mask |= PGTBL_PTE_MODIFIED;
+
return err;
}
--
2.48.1
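The reordering in the hunk above can be modeled outside the kernel. Below is a minimal C sketch (the helper names are hypothetical; the real code is the tail of apply_to_pte_range() in mm/memory.c) illustrating that *mask is a caller-local accumulator, so its final value is the same whether the update happens before or after the spinlock is released:

```c
/* Page-table modification flags, mirroring include/linux/pgtable.h
 * (values here are illustrative) */
#define PGTBL_PTE_MODIFIED (1U << 0)
#define PGTBL_PMD_MODIFIED (1U << 1)

/* Hypothetical model of the old tail: the mask is updated while the
 * page-table spinlock is still held. */
static unsigned int tail_update_under_lock(unsigned int mask)
{
	mask |= PGTBL_PTE_MODIFIED;	/* under ptl */
	/* arch_leave_lazy_mmu_mode(); pte_unmap_unlock(...); */
	return mask;
}

/* Hypothetical model of the patched tail: the mask is updated after
 * the lock is dropped; the result is identical because the mask is
 * private to the caller, not state protected by the lock. */
static unsigned int tail_update_after_unlock(unsigned int mask)
{
	/* arch_leave_lazy_mmu_mode(); pte_unmap_unlock(...); */
	mask |= PGTBL_PTE_MODIFIED;	/* outside ptl */
	return mask;
}
```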
* Re: [PATCH] mm: move mask update out of the atomic context
From: Dev Jain @ 2025-06-23 8:56 UTC (permalink / raw)
To: Alexander Gordeev, Andrew Morton; +Cc: linux-mm, linux-kernel
On 23/06/25 1:34 pm, Alexander Gordeev wrote:
> There is no need to modify the page table synchronization mask
> while apply_to_pte_range() holds the user page table spinlock.
I don't get you, what is the problem with the current code?
Are you just concerned about the duration of holding the
lock?
>
> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
> ---
> mm/memory.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 8eba595056fe..6849ab4e44bf 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3035,12 +3035,13 @@ static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
> }
> } while (pte++, addr += PAGE_SIZE, addr != end);
> }
> - *mask |= PGTBL_PTE_MODIFIED;
>
> arch_leave_lazy_mmu_mode();
>
> if (mm != &init_mm)
> pte_unmap_unlock(mapped_pte, ptl);
> + *mask |= PGTBL_PTE_MODIFIED;
> +
> return err;
> }
>
* Re: [PATCH] mm: move mask update out of the atomic context
From: Alexander Gordeev @ 2025-06-23 9:37 UTC (permalink / raw)
To: Dev Jain; +Cc: Andrew Morton, linux-mm, linux-kernel
On Mon, Jun 23, 2025 at 02:26:29PM +0530, Dev Jain wrote:
>
> On 23/06/25 1:34 pm, Alexander Gordeev wrote:
> > There is no need to modify the page table synchronization mask
> > while apply_to_pte_range() holds the user page table spinlock.
>
> I don't get you, what is the problem with the current code?
> Are you just concerned about the duration of holding the
> lock?
Yes.
> > Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
> > ---
> > mm/memory.c | 3 ++-
> > 1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/memory.c b/mm/memory.c
> > index 8eba595056fe..6849ab4e44bf 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -3035,12 +3035,13 @@ static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
> > }
> > } while (pte++, addr += PAGE_SIZE, addr != end);
> > }
> > - *mask |= PGTBL_PTE_MODIFIED;
> > arch_leave_lazy_mmu_mode();
> > if (mm != &init_mm)
> > pte_unmap_unlock(mapped_pte, ptl);
> > + *mask |= PGTBL_PTE_MODIFIED;
> > +
> > return err;
> > }
* Re: [PATCH] mm: move mask update out of the atomic context
From: Dev Jain @ 2025-06-23 10:09 UTC (permalink / raw)
To: Alexander Gordeev; +Cc: Andrew Morton, linux-mm, linux-kernel
On 23/06/25 3:07 pm, Alexander Gordeev wrote:
> On Mon, Jun 23, 2025 at 02:26:29PM +0530, Dev Jain wrote:
>> On 23/06/25 1:34 pm, Alexander Gordeev wrote:
>>> There is no need to modify the page table synchronization mask
>>> while apply_to_pte_range() holds the user page table spinlock.
>> I don't get you, what is the problem with the current code?
>> Are you just concerned about the duration of holding the
>> lock?
> Yes.
Doesn't really matter but still a correct change:
Reviewed-by: Dev Jain <dev.jain@arm.com>
>
>>> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
>>> ---
>>> mm/memory.c | 3 ++-
>>> 1 file changed, 2 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/mm/memory.c b/mm/memory.c
>>> index 8eba595056fe..6849ab4e44bf 100644
>>> --- a/mm/memory.c
>>> +++ b/mm/memory.c
>>> @@ -3035,12 +3035,13 @@ static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
>>> }
>>> } while (pte++, addr += PAGE_SIZE, addr != end);
>>> }
>>> - *mask |= PGTBL_PTE_MODIFIED;
>>> arch_leave_lazy_mmu_mode();
>>> if (mm != &init_mm)
>>> pte_unmap_unlock(mapped_pte, ptl);
>>> + *mask |= PGTBL_PTE_MODIFIED;
>>> +
>>> return err;
>>> }
* Re: [PATCH] mm: move mask update out of the atomic context
From: David Hildenbrand @ 2025-06-23 19:45 UTC (permalink / raw)
To: Dev Jain, Alexander Gordeev; +Cc: Andrew Morton, linux-mm, linux-kernel
On 23.06.25 12:09, Dev Jain wrote:
>
> On 23/06/25 3:07 pm, Alexander Gordeev wrote:
>> On Mon, Jun 23, 2025 at 02:26:29PM +0530, Dev Jain wrote:
>>> On 23/06/25 1:34 pm, Alexander Gordeev wrote:
>>>> There is no need to modify the page table synchronization mask
>>>> while apply_to_pte_range() holds the user page table spinlock.
>>> I don't get you, what is the problem with the current code?
>>> Are you just concerned about the duration of holding the
>>> lock?
>> Yes.
>
> Doesn't really matter but still a correct change:
Let's ask the real questions: who checks PGTBL_PTE_MODIFIED?
I see
if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
arch_sync_kernel_mappings(start, start + size);
And then
arch/arm/include/asm/page.h:#define ARCH_PAGE_TABLE_SYNC_MASK PGTBL_PMD_MODIFIED
arch/x86/include/asm/pgtable-2level_types.h:#define ARCH_PAGE_TABLE_SYNC_MASK PGTBL_PMD_MODIFIED
arch/x86/include/asm/pgtable-3level_types.h:#define ARCH_PAGE_TABLE_SYNC_MASK PGTBL_PMD_MODIFIED
Which makes me wonder why we need PGTBL_PTE_MODIFIED at all? Is there some other check I am missing?
(same question regarding everything except PGTBL_PMD_MODIFIED, because that actually seems to be used)
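The grep results above can be condensed into a sketch. With the value of ARCH_PAGE_TABLE_SYNC_MASK that arm and 32-bit x86 define, a mask that only has PGTBL_PTE_MODIFIED set never triggers arch_sync_kernel_mappings() (flag values here are illustrative, mirroring include/linux/pgtable.h):

```c
/* Illustrative flag values, mirroring include/linux/pgtable.h */
#define PGTBL_PTE_MODIFIED (1U << 0)
#define PGTBL_PMD_MODIFIED (1U << 1)
/* Value defined by arm and 32-bit x86, per the grep above */
#define ARCH_PAGE_TABLE_SYNC_MASK PGTBL_PMD_MODIFIED

/* Mirrors the check quoted above from the generic vmalloc code:
 * sync only if a level the architecture cares about was modified. */
static int needs_kernel_mapping_sync(unsigned int mask)
{
	return (mask & ARCH_PAGE_TABLE_SYNC_MASK) != 0;
}
```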
--
Cheers,
David / dhildenb
* Re: [PATCH] mm: move mask update out of the atomic context
From: Alexander Gordeev @ 2025-06-24 9:37 UTC (permalink / raw)
To: David Hildenbrand; +Cc: Dev Jain, Andrew Morton, linux-mm, linux-kernel
On Mon, Jun 23, 2025 at 09:45:34PM +0200, David Hildenbrand wrote:
...
> Let's ask the real questions: who checks PGTBL_PTE_MODIFIED?
>
> I see
>
> if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
> arch_sync_kernel_mappings(start, start + size);
>
> And then
>
> arch/arm/include/asm/page.h:#define ARCH_PAGE_TABLE_SYNC_MASK PGTBL_PMD_MODIFIED
> arch/x86/include/asm/pgtable-2level_types.h:#define ARCH_PAGE_TABLE_SYNC_MASK PGTBL_PMD_MODIFIED
> arch/x86/include/asm/pgtable-3level_types.h:#define ARCH_PAGE_TABLE_SYNC_MASK PGTBL_PMD_MODIFIED
>
>
> Which makes me wonder why we need PGTBL_PTE_MODIFIED at all? Is there some other check I am missing?
>
> (same question regarding everything except PGTBL_PMD_MODIFIED, because that actually seems to be used)
AFAICT it was conceived as architecture-specific:
/*
* Architectures can set this mask to a combination of PGTBL_P?D_MODIFIED values
* and let generic vmalloc and ioremap code know when arch_sync_kernel_mappings()
* needs to be called.
*/
#ifndef ARCH_PAGE_TABLE_SYNC_MASK
#define ARCH_PAGE_TABLE_SYNC_MASK 0
#endif
Not sure if that needs to be addressed at all.
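In other words, with the default definition quoted above the check compiles down to a constant false: on architectures without an override, no accumulated bit, PTE or otherwise, ever causes a sync. A sketch of that default path:

```c
/* Default from include/linux/pgtable.h when no architecture
 * provides an override */
#ifndef ARCH_PAGE_TABLE_SYNC_MASK
#define ARCH_PAGE_TABLE_SYNC_MASK 0
#endif

/* With the mask defaulting to 0, this predicate can never be true,
 * regardless of which PGTBL_*_MODIFIED bits were accumulated. */
static int default_needs_sync(unsigned int mask)
{
	return (mask & ARCH_PAGE_TABLE_SYNC_MASK) != 0;
}
```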
> --
> Cheers,
>
> David / dhildenb
Thanks!
* Re: [PATCH] mm: move mask update out of the atomic context
From: David Hildenbrand @ 2025-06-24 9:40 UTC (permalink / raw)
To: Alexander Gordeev; +Cc: Dev Jain, Andrew Morton, linux-mm, linux-kernel
On 24.06.25 11:37, Alexander Gordeev wrote:
> On Mon, Jun 23, 2025 at 09:45:34PM +0200, David Hildenbrand wrote:
> ...
>> Let's ask the real questions: who checks PGTBL_PTE_MODIFIED?
>>
>> I see
>>
>> if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
>> arch_sync_kernel_mappings(start, start + size);
>>
>> And then
>>
>> arch/arm/include/asm/page.h:#define ARCH_PAGE_TABLE_SYNC_MASK PGTBL_PMD_MODIFIED
>> arch/x86/include/asm/pgtable-2level_types.h:#define ARCH_PAGE_TABLE_SYNC_MASK PGTBL_PMD_MODIFIED
>> arch/x86/include/asm/pgtable-3level_types.h:#define ARCH_PAGE_TABLE_SYNC_MASK PGTBL_PMD_MODIFIED
>>
>>
>> Which makes me wonder why we need PGTBL_PTE_MODIFIED at all? Is there some other check I am missing?
>>
>> (same question regarding everything except PGTBL_PMD_MODIFIED, because that actually seems to be used)
>
> AFAICT it was conceived as architecture-specific:
>
> /*
> * Architectures can set this mask to a combination of PGTBL_P?D_MODIFIED values
> * and let generic vmalloc and ioremap code know when arch_sync_kernel_mappings()
> * needs to be called.
> */
> #ifndef ARCH_PAGE_TABLE_SYNC_MASK
> #define ARCH_PAGE_TABLE_SYNC_MASK 0
> #endif
>
> Not sure if that needs to be addressed at all.
Okay, if there are no users of PGTBL_PTE_MODIFIED we could just ...
remove it. Dead code.
--
Cheers,
David / dhildenb
* Re: [PATCH] mm: move mask update out of the atomic context
From: Alexander Gordeev @ 2025-06-24 12:00 UTC (permalink / raw)
To: David Hildenbrand; +Cc: Dev Jain, Andrew Morton, linux-mm, linux-kernel
On Tue, Jun 24, 2025 at 11:40:05AM +0200, David Hildenbrand wrote:
> On 24.06.25 11:37, Alexander Gordeev wrote:
> > On Mon, Jun 23, 2025 at 09:45:34PM +0200, David Hildenbrand wrote:
> > ...
> > > Let's ask the real questions: who checks PGTBL_PTE_MODIFIED?
> > >
> > > I see
> > >
> > > if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
> > > arch_sync_kernel_mappings(start, start + size);
> > >
> > > And then
> > >
> > > arch/arm/include/asm/page.h:#define ARCH_PAGE_TABLE_SYNC_MASK PGTBL_PMD_MODIFIED
> > > arch/x86/include/asm/pgtable-2level_types.h:#define ARCH_PAGE_TABLE_SYNC_MASK PGTBL_PMD_MODIFIED
> > > arch/x86/include/asm/pgtable-3level_types.h:#define ARCH_PAGE_TABLE_SYNC_MASK PGTBL_PMD_MODIFIED
> > >
> > >
> > > Which makes me wonder why we need PGTBL_PTE_MODIFIED at all? Is there some other check I am missing?
> > >
> > > (same question regarding everything except PGTBL_PMD_MODIFIED, because that actually seems to be used)
> >
> > AFAICT it was conceived as architecture-specific:
> >
> > /*
> > * Architectures can set this mask to a combination of PGTBL_P?D_MODIFIED values
> > * and let generic vmalloc and ioremap code know when arch_sync_kernel_mappings()
> > * needs to be called.
> > */
> > #ifndef ARCH_PAGE_TABLE_SYNC_MASK
> > #define ARCH_PAGE_TABLE_SYNC_MASK 0
> > #endif
> >
> > Not sure if that needs to be addressed at all.
>
> Okay, if there are no users of PGTBL_PTE_MODIFIED we could just ... remove
> it. Dead code.
As you noticed, only the PGTBL_PMD_MODIFIED bit is used. Thus, all the
other bits would have to be removed as well, not just PGTBL_PTE_MODIFIED?
That is more or less a revert of at least the commits below and a rewrite
in a PMD-focused manner:
2ba3e6947aed ("mm/vmalloc: track which page-table levels were modified")
d8626138009b ("mm: add functions to track page directory modifications")
That would be a completely different effort, which I am not aiming at ;)
> --
> Cheers,
>
> David / dhildenb
Thanks!
* Re: [PATCH] mm: move mask update out of the atomic context
From: David Hildenbrand @ 2025-06-24 12:03 UTC (permalink / raw)
To: Alexander Gordeev; +Cc: Dev Jain, Andrew Morton, linux-mm, linux-kernel
On 24.06.25 14:00, Alexander Gordeev wrote:
> On Tue, Jun 24, 2025 at 11:40:05AM +0200, David Hildenbrand wrote:
>> On 24.06.25 11:37, Alexander Gordeev wrote:
>>> On Mon, Jun 23, 2025 at 09:45:34PM +0200, David Hildenbrand wrote:
>>> ...
>>>> Let's ask the real questions: who checks PGTBL_PTE_MODIFIED?
>>>>
>>>> I see
>>>>
>>>> if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
>>>> arch_sync_kernel_mappings(start, start + size);
>>>>
>>>> And then
>>>>
>>>> arch/arm/include/asm/page.h:#define ARCH_PAGE_TABLE_SYNC_MASK PGTBL_PMD_MODIFIED
>>>> arch/x86/include/asm/pgtable-2level_types.h:#define ARCH_PAGE_TABLE_SYNC_MASK PGTBL_PMD_MODIFIED
>>>> arch/x86/include/asm/pgtable-3level_types.h:#define ARCH_PAGE_TABLE_SYNC_MASK PGTBL_PMD_MODIFIED
>>>>
>>>>
>>>> Which makes me wonder why we need PGTBL_PTE_MODIFIED at all? Is there some other check I am missing?
>>>>
>>>> (same question regarding everything except PGTBL_PMD_MODIFIED, because that actually seems to be used)
>>>
>>> AFAICT it was conceived as architecture-specific:
>>>
>>> /*
>>> * Architectures can set this mask to a combination of PGTBL_P?D_MODIFIED values
>>> * and let generic vmalloc and ioremap code know when arch_sync_kernel_mappings()
>>> * needs to be called.
>>> */
>>> #ifndef ARCH_PAGE_TABLE_SYNC_MASK
>>> #define ARCH_PAGE_TABLE_SYNC_MASK 0
>>> #endif
>>>
>>> Not sure if that needs to be addressed at all.
>>
>> Okay, if there are no users of PGTBL_PTE_MODIFIED we could just ... remove
>> it. Dead code.
>
> As you noticed, only the PGTBL_PMD_MODIFIED bit is used. Thus, all the
> other bits would have to be removed as well, not just PGTBL_PTE_MODIFIED?
Likely, yes.
>
> That is more or less a revert of at least the commits below and a rewrite
> in a PMD-focused manner:
>
> 2ba3e6947aed ("mm/vmalloc: track which page-table levels were modified")
> d8626138009b ("mm: add functions to track page directory modifications")
>
> That would be a completely different effort, which I am not aiming at ;)
Well, I don't think we should be taking this micro-optimization patch
here TBH. The real optimization is getting rid of dead code, not
optimizing dead code.
--
Cheers,
David / dhildenb