From: David Hildenbrand <david@redhat.com>
To: Dev Jain <dev.jain@arm.com>, akpm@linux-foundation.org
Cc: ryan.roberts@arm.com, willy@infradead.org, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, catalin.marinas@arm.com,
will@kernel.org, Liam.Howlett@oracle.com,
lorenzo.stoakes@oracle.com, vbabka@suse.cz, jannh@google.com,
anshuman.khandual@arm.com, peterx@redhat.com, joey.gouly@arm.com,
ioworker0@gmail.com, baohua@kernel.org, kevin.brodsky@arm.com,
quic_zhenhuah@quicinc.com, christophe.leroy@csgroup.eu,
yangyicong@hisilicon.com, linux-arm-kernel@lists.infradead.org,
hughd@google.com, yang@os.amperecomputing.com, ziy@nvidia.com
Subject: Re: [PATCH v5 6/7] mm: Optimize mprotect() by PTE batching
Date: Wed, 6 Aug 2025 10:08:33 +0200
Message-ID: <7567c594-7588-49e0-8b09-2a591181b24d@redhat.com>
In-Reply-To: <20250718090244.21092-7-dev.jain@arm.com>
On 18.07.25 11:02, Dev Jain wrote:
> Use folio_pte_batch() to batch-process a large folio. Note that PTE
> batching here saves a few function calls and, in certain cases (not
> this one), this strategy batches atomic operations in general, so we
> get a performance win for all arches. This patch paves the way for
> patch 7, which will help us elide the TLBI per contig block on arm64.
>
> The correctness of this patch rests on the correctness of setting the
> new ptes based only on information from the first pte of the batch
> (which may also have accumulated a/d bits via modify_prot_start_ptes()).
>
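
(Aside for readers of this excerpt: the batched helpers come from patch 3
of this series. Roughly, and simplified from memory rather than the exact
upstream code, the generic modify_prot_start_ptes() clears each PTE and
folds its a/d bits into the value returned for the first one:)

	static pte_t modify_prot_start_ptes_sketch(struct vm_area_struct *vma,
			unsigned long addr, pte_t *ptep, unsigned int nr_ptes)
	{
		pte_t pte, tmp_pte;
		unsigned int i;

		/* The first PTE is the batch's representative. */
		pte = ptep_modify_prot_start(vma, addr, ptep);
		for (i = 1; i < nr_ptes; i++) {
			tmp_pte = ptep_modify_prot_start(vma, addr + i * PAGE_SIZE,
							 ptep + i);
			/* Fold accumulated a/d bits into the representative. */
			if (pte_dirty(tmp_pte))
				pte = pte_mkdirty(pte);
			if (pte_young(tmp_pte))
				pte = pte_mkyoung(pte);
		}
		return pte;
	}
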
> Observe that the flag combination we pass to mprotect_folio_pte_batch()
> guarantees that the batch is uniform w.r.t. the soft-dirty bit and the
> writable bit. Therefore, the only bits which may differ are the a/d
> bits, so we only need to worry about code that is concerned with the
> a/d bits of the PTEs.
>
> Setting extra a/d bits on the new ptes where they were previously clear
> is fine. Setting the access bit when it was not set is not a
> correctness problem; it will at most delay the reclaim of the page
> mapped by the pte (which is in fact intended, because the kernel just
> operated on this region via mprotect()!). Setting the dirty bit when it
> was not set is likewise not a correctness problem; it will at most
> force an unnecessary writeback.
>
> So now we need to reason about whether something can go wrong via
> can_change_pte_writable(). The pte_protnone, pte_needs_soft_dirty_wp,
> and userfaultfd_pte_wp cases are handled by the uniformity in the
> corresponding bits guaranteed by the flag combination. The ptes all
> belong to the same VMA (since callers guarantee that [start, end) lies
> within the VMA), therefore the conditional based on the VMA is also
> safe to batch around.
>
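
(Conceptually, that uniformity guarantee works because folio_pte_batch()
normalizes each pte before comparing, and the FPB_RESPECT_* flags keep
the corresponding bits in the comparison. A simplified sketch of the
normalization; see __pte_batch_clear_ignored() for the real thing:)

	static pte_t normalize_for_compare_sketch(pte_t pte, fpb_t flags)
	{
		/* a/d bits are always ignored when comparing ptes. */
		pte = pte_mkclean(pte_mkold(pte));
		/* Without FPB_RESPECT_*, these bits are masked out too. */
		if (!(flags & FPB_RESPECT_SOFT_DIRTY))
			pte = pte_clear_soft_dirty(pte);
		if (!(flags & FPB_RESPECT_WRITE))
			pte = pte_wrprotect(pte);
		return pte;
	}
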
> Since the dirty bit on the PTE is really just an indication that the
> folio got written to, the wp-fault optimization can be made even if a
> given PTE is not actually dirty but one of the other PTEs in the batch
> is. Therefore, it is safe to batch around pte_dirty() in
> can_change_shared_pte_writable(). In fact this is better: without
> batching, some ptes may not be changed to writable just because they
> are not dirty, even though other ptes mapping the same large folio are
> dirty, and a subsequent write through such a pte would then take a
> spurious wp-fault.
>
> To batch around the PageAnonExclusive case, we must check the
> corresponding condition for every single page. Therefore, from the
> large folio batch, we carve out sub-batches of ptes mapping pages with
> the same PageAnonExclusive value, process one sub-batch, then determine
> and process the next sub-batch, and so on. Note that this does not add
> any extra overhead: if, say, the folio batch spans 512 ptes, the
> sub-batch processing still takes 512 iterations in total, the same as
> what we would have done before.
>
> For pte_needs_flush():
>
> ppc does not care about the a/d bits.
>
> For x86, PAGE_SAVED_DIRTY is ignored. We flush only when a/d bits get
> cleared; since batching can only add extra a/d bits, at worst we do an
> extra flush, and there is no case where we elide a flush that we should
> have done.
>
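
(As an illustration of why that direction is safe, not the actual x86
code: a flush is needed when a flush-relevant bit that was set in the
old PTE is clear in the new one, so accumulated a/d bits can only cause
a spurious extra flush, never skip a needed one:)

	/* Illustrative only; not the real pte_needs_flush(). */
	static bool needs_flush_example(unsigned long old_flags,
					unsigned long new_flags,
					unsigned long flush_relevant_mask)
	{
		/* Only set->clear transitions of relevant bits need a flush. */
		return (old_flags & ~new_flags & flush_relevant_mask) != 0;
	}
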
> Signed-off-by: Dev Jain <dev.jain@arm.com>
I wanted to review this, but it looks like it's already upstream, and I
suspect it's buggy (see the upstream report I CC'ed you on).
[...]
> +
> +/*
> + * This function is a result of trying our very best to retain the
> + * "avoid the write-fault handler" optimization. In can_change_pte_writable(),
> + * if the vma is a private vma, and we cannot determine whether to change
> + * the pte to writable just from the vma and the pte, we then need to look
> + * at the actual page pointed to by the pte. Unfortunately, if we have a
> + * batch of ptes pointing to consecutive pages of the same anon large folio,
> + * the anon-exclusivity (or the negation) of the first page does not guarantee
> + * the anon-exclusivity (or the negation) of the other pages corresponding to
> + * the pte batch; hence in this case it is incorrect to decide to change or
> + * not change the ptes to writable just by using information from the first
> + * pte of the batch. Therefore, we must individually check all pages and
> + * retrieve sub-batches.
> + */
> +static void commit_anon_folio_batch(struct vm_area_struct *vma,
> + struct folio *folio, unsigned long addr, pte_t *ptep,
> + pte_t oldpte, pte_t ptent, int nr_ptes, struct mmu_gather *tlb)
> +{
> + struct page *first_page = folio_page(folio, 0);
Who says that we have the first page of the folio mapped into the first
PTE of the batch?
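
A batch can start mid-folio, e.g., after a partial munmap() or an
mremap() left only the folio's tail mapped here. Untested, but deriving
the page actually mapped by the first PTE would look something like:

	/* Sketch: use the page the first pte actually maps, not page 0. */
	struct page *first_page = vm_normal_page(vma, addr, oldpte);
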
> + bool expected_anon_exclusive;
> + int sub_batch_idx = 0;
> + int len;
> +
> + while (nr_ptes) {
> + expected_anon_exclusive = PageAnonExclusive(first_page + sub_batch_idx);
> + len = page_anon_exclusive_sub_batch(sub_batch_idx, nr_ptes,
> + first_page, expected_anon_exclusive);
> + prot_commit_flush_ptes(vma, addr, ptep, oldpte, ptent, len,
> + sub_batch_idx, expected_anon_exclusive, tlb);
> + sub_batch_idx += len;
> + nr_ptes -= len;
> + }
> +}
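
(Side note for readers of this excerpt: page_anon_exclusive_sub_batch()
is added earlier in this patch and is not quoted here. Judging from the
call site, it presumably returns the length of the run of pages sharing
the same PageAnonExclusive value; a sketch of what that would look like:)

	/* Sketch only; the real helper lives in the unquoted hunk. */
	static int page_anon_exclusive_sub_batch(int start_idx, int max_len,
			struct page *first_page, bool expected_anon_exclusive)
	{
		int idx;

		/* Extend the run while pages match the expected value. */
		for (idx = start_idx + 1; idx < start_idx + max_len; idx++) {
			if (PageAnonExclusive(first_page + idx) !=
			    expected_anon_exclusive)
				break;
		}
		return idx - start_idx;
	}
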
> +
> +static void set_write_prot_commit_flush_ptes(struct vm_area_struct *vma,
> + struct folio *folio, unsigned long addr, pte_t *ptep,
> + pte_t oldpte, pte_t ptent, int nr_ptes, struct mmu_gather *tlb)
> +{
> + bool set_write;
> +
> + if (vma->vm_flags & VM_SHARED) {
> + set_write = can_change_shared_pte_writable(vma, ptent);
> + prot_commit_flush_ptes(vma, addr, ptep, oldpte, ptent, nr_ptes,
> + /* idx = */ 0, set_write, tlb);
> + return;
> + }
> +
> + set_write = maybe_change_pte_writable(vma, ptent) &&
> + (folio && folio_test_anon(folio));
> + if (!set_write) {
> + prot_commit_flush_ptes(vma, addr, ptep, oldpte, ptent, nr_ptes,
> + /* idx = */ 0, set_write, tlb);
> + return;
> + }
> + commit_anon_folio_batch(vma, folio, addr, ptep, oldpte, ptent, nr_ptes, tlb);
> +}
> +
> static long change_pte_range(struct mmu_gather *tlb,
> struct vm_area_struct *vma, pmd_t *pmd, unsigned long addr,
> unsigned long end, pgprot_t newprot, unsigned long cp_flags)
> @@ -206,8 +302,9 @@ static long change_pte_range(struct mmu_gather *tlb,
> nr_ptes = 1;
> oldpte = ptep_get(pte);
> if (pte_present(oldpte)) {
> + const fpb_t flags = FPB_RESPECT_SOFT_DIRTY | FPB_RESPECT_WRITE;
> int max_nr_ptes = (end - addr) >> PAGE_SHIFT;
> - struct folio *folio;
> + struct folio *folio = NULL;
> pte_t ptent;
>
> /*
> @@ -221,11 +318,16 @@ static long change_pte_range(struct mmu_gather *tlb,
>
> /* determine batch to skip */
> nr_ptes = mprotect_folio_pte_batch(folio,
> - pte, oldpte, max_nr_ptes);
> + pte, oldpte, max_nr_ptes, /* flags = */ 0);
> continue;
> }
> }
>
> + if (!folio)
> + folio = vm_normal_folio(vma, addr, oldpte);
> +
> + nr_ptes = mprotect_folio_pte_batch(folio, pte, oldpte, max_nr_ptes, flags);
> +
> oldpte = modify_prot_start_ptes(vma, addr, pte, nr_ptes);
> ptent = pte_modify(oldpte, newprot);
>
> @@ -248,14 +350,13 @@ static long change_pte_range(struct mmu_gather *tlb,
> * COW or special handling is required.
> */
> if ((cp_flags & MM_CP_TRY_CHANGE_WRITABLE) &&
> - !pte_write(ptent) &&
> - can_change_pte_writable(vma, addr, ptent))
> - ptent = pte_mkwrite(ptent, vma);
> -
> - modify_prot_commit_ptes(vma, addr, pte, oldpte, ptent, nr_ptes);
> - if (pte_needs_flush(oldpte, ptent))
> - tlb_flush_pte_range(tlb, addr, PAGE_SIZE);
> - pages++;
> + !pte_write(ptent))
> + set_write_prot_commit_flush_ptes(vma, folio,
> + addr, pte, oldpte, ptent, nr_ptes, tlb);
While staring at this:
Very broken indentation.
> + else
> + prot_commit_flush_ptes(vma, addr, pte, oldpte, ptent,
> + nr_ptes, /* idx = */ 0, /* set_write = */ false, tlb);
Semi-broken indentation.
> + pages += nr_ptes;
> } else if (is_swap_pte(oldpte)) {
> swp_entry_t entry = pte_to_swp_entry(oldpte);
> pte_t newpte;
--
Cheers,
David / dhildenb