From: Dev Jain <dev.jain@arm.com>
To: David Hildenbrand <david@redhat.com>,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: akpm@linux-foundation.org, ryan.roberts@arm.com,
willy@infradead.org, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, catalin.marinas@arm.com,
will@kernel.org, Liam.Howlett@oracle.com, vbabka@suse.cz,
jannh@google.com, anshuman.khandual@arm.com, peterx@redhat.com,
joey.gouly@arm.com, ioworker0@gmail.com, baohua@kernel.org,
kevin.brodsky@arm.com, quic_zhenhuah@quicinc.com,
christophe.leroy@csgroup.eu, yangyicong@hisilicon.com,
linux-arm-kernel@lists.infradead.org, hughd@google.com,
yang@os.amperecomputing.com, ziy@nvidia.com
Subject: Re: [PATCH v5 6/7] mm: Optimize mprotect() by PTE batching
Date: Wed, 6 Aug 2025 15:50:21 +0530 [thread overview]
Message-ID: <2d0d58a4-bd27-41ac-9b25-1cd989c02383@arm.com> (raw)
In-Reply-To: <e3d13396-8408-49c0-9ec9-1b02790959aa@redhat.com>
On 06/08/25 3:41 pm, David Hildenbrand wrote:
> On 06.08.25 11:50, Lorenzo Stoakes wrote:
>> On Wed, Aug 06, 2025 at 03:07:49PM +0530, Dev Jain wrote:
>>>>>
>>>>> You mean in _this_ PTE of the batch right? As we're invoking these
>>>>> on each part
>>>>> of the PTE table.
>>>>>
>>>>> I mean I guess we can simply do:
>>>>>
>>>>> struct page *first_page = pte_page(ptent);
>>>>>
>>>>> Right?
>>>>
>>>> Yes, but we should forward the result from vm_normal_page(), which
>>>> does
>>>> exactly that for you, and increment the page accordingly as required,
>>>> just like with the pte we are processing.
>>>
>>> Makes sense, so I guess I will have to change the signature of
>>> prot_numa_skip()
>>>
>>> to pass a double ptr to a page instead of folio and derive the folio
>>> in the
>>> caller,
>>>
>>> and pass down both the folio and the page to
>>> set_write_prot_commit_flush_ptes.
>>
>> I already don't love how we psas the folio back from there for very
>> dubious
>> benefit. I really hate the idea of having a struct **page parameter...
>>
>> I wonder if we should just have a quick fixup for hotfix, and refine
>> this more
>> later?
>
> This is not an issue in any released kernel, so we can do this properly.
>
> We should just remove that nested vm_normal_folio().
>
> Untested, but should give an idea what we can do.
This puts the overhead of vm_normal_folio() unconditionally into the
pte_present path.
Although I am guessing that is already happening assuming prot_numa case
is not the
hot path. This is fine by me. So I guess I shouldn't have done that
"reuse the folio
from prot_numa case if possible" thingy at all :)
>
>
> diff --git a/mm/mprotect.c b/mm/mprotect.c
> index 78bded7acf795..4e0a22f7db495 100644
> --- a/mm/mprotect.c
> +++ b/mm/mprotect.c
> @@ -120,7 +120,7 @@ static int mprotect_folio_pte_batch(struct folio
> *folio, pte_t *ptep,
>
> static bool prot_numa_skip(struct vm_area_struct *vma, unsigned long
> addr,
> pte_t oldpte, pte_t *pte, int target_node,
> - struct folio **foliop)
> + struct folio *folio)
> {
> struct folio *folio = NULL;
> bool ret = true;
> @@ -131,7 +131,6 @@ static bool prot_numa_skip(struct vm_area_struct
> *vma, unsigned long addr,
> if (pte_protnone(oldpte))
> goto skip;
>
> - folio = vm_normal_folio(vma, addr, oldpte);
> if (!folio)
> goto skip;
>
> @@ -172,8 +171,6 @@ static bool prot_numa_skip(struct vm_area_struct
> *vma, unsigned long addr,
> if (folio_use_access_time(folio))
> folio_xchg_access_time(folio, jiffies_to_msecs(jiffies));
>
> -skip:
> - *foliop = folio;
> return ret;
> }
>
> @@ -231,10 +228,9 @@ static int page_anon_exclusive_sub_batch(int
> start_idx, int max_len,
> * retrieve sub-batches.
> */
> static void commit_anon_folio_batch(struct vm_area_struct *vma,
> - struct folio *folio, unsigned long addr, pte_t *ptep,
> + struct folio *folio, struct page *first_page, unsigned long
> addr, pte_t *ptep,
> pte_t oldpte, pte_t ptent, int nr_ptes, struct mmu_gather *tlb)
> {
> - struct page *first_page = folio_page(folio, 0);
> bool expected_anon_exclusive;
> int sub_batch_idx = 0;
> int len;
> @@ -243,7 +239,7 @@ static void commit_anon_folio_batch(struct
> vm_area_struct *vma,
> expected_anon_exclusive = PageAnonExclusive(first_page +
> sub_batch_idx);
> len = page_anon_exclusive_sub_batch(sub_batch_idx, nr_ptes,
> first_page, expected_anon_exclusive);
> - prot_commit_flush_ptes(vma, addr, ptep, oldpte, ptent, len,
> + prot_commit_flush_ptes(vma, addr, ptep, page, oldpte, ptent,
> len,
> sub_batch_idx, expected_anon_exclusive, tlb);
> sub_batch_idx += len;
> nr_ptes -= len;
> @@ -251,7 +247,7 @@ static void commit_anon_folio_batch(struct
> vm_area_struct *vma,
> }
>
> static void set_write_prot_commit_flush_ptes(struct vm_area_struct *vma,
> - struct folio *folio, unsigned long addr, pte_t *ptep,
> + struct folio *folio, struct page *page, unsigned long addr,
> pte_t *ptep,
> pte_t oldpte, pte_t ptent, int nr_ptes, struct mmu_gather *tlb)
> {
> bool set_write;
> @@ -270,7 +266,7 @@ static void
> set_write_prot_commit_flush_ptes(struct vm_area_struct *vma,
> /* idx = */ 0, set_write, tlb);
> return;
> }
> - commit_anon_folio_batch(vma, folio, addr, ptep, oldpte, ptent,
> nr_ptes, tlb);
> + commit_anon_folio_batch(vma, folio, page, addr, ptep, oldpte,
> ptent, nr_ptes, tlb);
> }
>
> static long change_pte_range(struct mmu_gather *tlb,
> @@ -305,15 +301,20 @@ static long change_pte_range(struct mmu_gather
> *tlb,
> const fpb_t flags = FPB_RESPECT_SOFT_DIRTY |
> FPB_RESPECT_WRITE;
> int max_nr_ptes = (end - addr) >> PAGE_SHIFT;
> struct folio *folio = NULL;
> + struct page *page;
> pte_t ptent;
>
> + page = vm_normal_folio(vma, addr, oldpte);
> + if (page)
> + folio = page_folio(page);
> +
> /*
> * Avoid trapping faults against the zero or KSM
> * pages. See similar comment in change_huge_pmd.
> */
> if (prot_numa) {
> int ret = prot_numa_skip(vma, addr, oldpte, pte,
> - target_node, &folio);
> + target_node, folio);
> if (ret) {
>
> /* determine batch to skip */
> @@ -323,9 +324,6 @@ static long change_pte_range(struct mmu_gather *tlb,
> }
> }
>
> - if (!folio)
> - folio = vm_normal_folio(vma, addr, oldpte);
> -
> nr_ptes = mprotect_folio_pte_batch(folio, pte, oldpte,
> max_nr_ptes, flags);
>
> oldpte = modify_prot_start_ptes(vma, addr, pte, nr_ptes);
> @@ -351,7 +349,7 @@ static long change_pte_range(struct mmu_gather *tlb,
> */
> if ((cp_flags & MM_CP_TRY_CHANGE_WRITABLE) &&
> !pte_write(ptent))
> - set_write_prot_commit_flush_ptes(vma, folio,
> + set_write_prot_commit_flush_ptes(vma, folio, page,
> addr, pte, oldpte, ptent, nr_ptes, tlb);
> else
> prot_commit_flush_ptes(vma, addr, pte, oldpte, ptent,
next prev parent reply other threads:[~2025-08-06 10:21 UTC|newest]
Thread overview: 53+ messages / expand[flat|nested] mbox.gz Atom feed top
2025-07-18 9:02 [PATCH v5 0/7] Optimize mprotect() for large folios Dev Jain
2025-07-18 9:02 ` [PATCH v5 1/7] mm: Refactor MM_CP_PROT_NUMA skipping case into new function Dev Jain
2025-07-18 16:19 ` Lorenzo Stoakes
2025-07-20 23:44 ` Barry Song
2025-07-21 3:44 ` Dev Jain
2025-07-22 11:05 ` Dev Jain
2025-07-22 11:25 ` Ryan Roberts
2025-07-23 13:57 ` Zi Yan
2025-07-18 9:02 ` [PATCH v5 2/7] mm: Optimize mprotect() for MM_CP_PROT_NUMA by batch-skipping PTEs Dev Jain
2025-07-18 16:40 ` Lorenzo Stoakes
2025-07-22 11:26 ` Ryan Roberts
2025-07-23 14:25 ` Zi Yan
2025-07-18 9:02 ` [PATCH v5 3/7] mm: Add batched versions of ptep_modify_prot_start/commit Dev Jain
2025-07-18 17:05 ` Lorenzo Stoakes
2025-07-20 23:59 ` Barry Song
2025-07-22 11:35 ` Ryan Roberts
2025-07-23 15:09 ` Zi Yan
2025-07-18 9:02 ` [PATCH v5 4/7] mm: Introduce FPB_RESPECT_WRITE for PTE batching infrastructure Dev Jain
2025-07-18 17:12 ` Lorenzo Stoakes
2025-07-22 11:37 ` Ryan Roberts
2025-07-23 15:28 ` Zi Yan
2025-07-23 15:32 ` Dev Jain
2025-07-18 9:02 ` [PATCH v5 5/7] mm: Split can_change_pte_writable() into private and shared parts Dev Jain
2025-07-18 17:27 ` Lorenzo Stoakes
2025-07-23 15:40 ` Zi Yan
2025-07-18 9:02 ` [PATCH v5 6/7] mm: Optimize mprotect() by PTE batching Dev Jain
2025-07-18 18:49 ` Lorenzo Stoakes
2025-07-19 13:46 ` Dev Jain
2025-07-20 11:20 ` Lorenzo Stoakes
2025-07-20 14:39 ` Dev Jain
2025-07-24 19:55 ` Zi Yan
2025-08-06 8:08 ` David Hildenbrand
2025-08-06 8:12 ` David Hildenbrand
2025-08-06 8:15 ` Will Deacon
2025-08-06 8:19 ` David Hildenbrand
2025-08-06 8:53 ` Dev Jain
2025-08-06 8:56 ` David Hildenbrand
2025-08-06 9:12 ` Lorenzo Stoakes
2025-08-06 9:21 ` David Hildenbrand
2025-08-06 9:37 ` Dev Jain
2025-08-06 9:50 ` Lorenzo Stoakes
[not found] ` <1b3d4799-2a57-4f16-973b-82fc7b438862@arm.com>
2025-08-06 10:07 ` Dev Jain
2025-08-06 10:12 ` David Hildenbrand
2025-08-06 10:11 ` David Hildenbrand
2025-08-06 10:20 ` Dev Jain [this message]
2025-08-06 10:28 ` David Hildenbrand
2025-08-06 10:45 ` Lorenzo Stoakes
2025-08-06 10:45 ` Lorenzo Stoakes
2025-07-18 9:02 ` [PATCH v5 7/7] arm64: Add batched versions of ptep_modify_prot_start/commit Dev Jain
2025-07-18 18:50 ` Lorenzo Stoakes
2025-07-21 15:57 ` Catalin Marinas
2025-07-18 9:50 ` [PATCH v5 0/7] Optimize mprotect() for large folios Dev Jain
2025-07-18 18:53 ` Lorenzo Stoakes
Reply instructions:
You may reply publicly to this message via plain-text email
using any one of the following methods:
* Save the following mbox file, import it into your mail client,
and reply-to-all from there: mbox
Avoid top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
* Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
git send-email \
--in-reply-to=2d0d58a4-bd27-41ac-9b25-1cd989c02383@arm.com \
--to=dev.jain@arm.com \
--cc=Liam.Howlett@oracle.com \
--cc=akpm@linux-foundation.org \
--cc=anshuman.khandual@arm.com \
--cc=baohua@kernel.org \
--cc=catalin.marinas@arm.com \
--cc=christophe.leroy@csgroup.eu \
--cc=david@redhat.com \
--cc=hughd@google.com \
--cc=ioworker0@gmail.com \
--cc=jannh@google.com \
--cc=joey.gouly@arm.com \
--cc=kevin.brodsky@arm.com \
--cc=linux-arm-kernel@lists.infradead.org \
--cc=linux-kernel@vger.kernel.org \
--cc=linux-mm@kvack.org \
--cc=lorenzo.stoakes@oracle.com \
--cc=peterx@redhat.com \
--cc=quic_zhenhuah@quicinc.com \
--cc=ryan.roberts@arm.com \
--cc=vbabka@suse.cz \
--cc=will@kernel.org \
--cc=willy@infradead.org \
--cc=yang@os.amperecomputing.com \
--cc=yangyicong@hisilicon.com \
--cc=ziy@nvidia.com \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
* If your mail client supports setting the In-Reply-To header
via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).