From: Andrew Morton <akpm@linux-foundation.org>
To: "David Hildenbrand (Arm)" <david@kernel.org>
Cc: Pedro Falcato <pfalcato@suse.de>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
Lorenzo Stoakes <ljs@kernel.org>,
Vlastimil Babka <vbabka@kernel.org>, Jann Horn <jannh@google.com>,
Dev Jain <dev.jain@arm.com>, Luke Yang <luyang@redhat.com>,
jhladky@redhat.com, linux-mm@kvack.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 2/2] mm/mprotect: special-case small folios when applying write permissions
Date: Wed, 1 Apr 2026 17:09:31 -0700 [thread overview]
Message-ID: <20260401170931.9e455bbf679792b039d9770a@linux-foundation.org> (raw)
In-Reply-To: <a7a903bf-10c2-4d2c-872d-9b31fefb5d1f@kernel.org>
On Mon, 30 Mar 2026 17:16:51 +0200 "David Hildenbrand (Arm)" <david@kernel.org> wrote:
> > That could also work, but then set_write_prot_commit_flush_ptes (holy cow
> > what a long name) would definitely need inlining. And might be a little uglier
> > overall.
>
> Right. The idea is that you __always_inline any code that has PTE
> loops, such that all loops for nr_pages == 1 get optimized out.
>
> We do that for zap and fork logic.
>
> >
> > This is the part where having data points other than my giga-fast-giga-powerful
> > zen5 could prove handy :/
> I just recently lost access to my reliable, well-tuned system ...
>
> Is it just the following benchmark?
>
> https://gist.github.com/heatd/1450d273005aba91fa5744f44dfcd933
>
>
> I can easily extend
>
> https://gitlab.com/davidhildenbrand/scratchspace/-/blob/main/pte-mapped-folio-benchmarks.c
>
> to have an "mprotect" mode. I had that in the past but discarded it.
>
> Then, we can easily measure the effect on various folio sizes when
> mprotect'ing a larger memory area.
>
> With order-0 we can then benchmark small folios exclusively.
It sounds like this is all possible future work?
We have Lorenzo's R-b on this [2/2]. I'm reading this discussion as
"upstream both"?
--- a/mm/mprotect.c~mm-mprotect-special-case-small-folios-when-applying-write-permissions
+++ a/mm/mprotect.c
@@ -103,7 +103,7 @@ bool can_change_pte_writable(struct vm_a
return can_change_shared_pte_writable(vma, pte);
}
-static int mprotect_folio_pte_batch(struct folio *folio, pte_t *ptep,
+static __always_inline int mprotect_folio_pte_batch(struct folio *folio, pte_t *ptep,
pte_t pte, int max_nr_ptes, fpb_t flags)
{
/* No underlying folio, so cannot batch */
@@ -117,9 +117,9 @@ static int mprotect_folio_pte_batch(stru
}
/* Set nr_ptes number of ptes, starting from idx */
-static void prot_commit_flush_ptes(struct vm_area_struct *vma, unsigned long addr,
- pte_t *ptep, pte_t oldpte, pte_t ptent, int nr_ptes,
- int idx, bool set_write, struct mmu_gather *tlb)
+static __always_inline void prot_commit_flush_ptes(struct vm_area_struct *vma,
+ unsigned long addr, pte_t *ptep, pte_t oldpte, pte_t ptent,
+ int nr_ptes, int idx, bool set_write, struct mmu_gather *tlb)
{
/*
* Advance the position in the batch by idx; note that if idx > 0,
@@ -169,7 +169,7 @@ static int page_anon_exclusive_sub_batch
* pte of the batch. Therefore, we must individually check all pages and
* retrieve sub-batches.
*/
-static void commit_anon_folio_batch(struct vm_area_struct *vma,
+static __always_inline void commit_anon_folio_batch(struct vm_area_struct *vma,
struct folio *folio, struct page *first_page, unsigned long addr, pte_t *ptep,
pte_t oldpte, pte_t ptent, int nr_ptes, struct mmu_gather *tlb)
{
@@ -177,6 +177,13 @@ static void commit_anon_folio_batch(stru
int sub_batch_idx = 0;
int len;
+ /* Optimize for the common order-0 case. */
+ if (likely(nr_ptes == 1)) {
+ prot_commit_flush_ptes(vma, addr, ptep, oldpte, ptent, 1,
+ 0, PageAnonExclusive(first_page), tlb);
+ return;
+ }
+
while (nr_ptes) {
expected_anon_exclusive = PageAnonExclusive(first_page + sub_batch_idx);
len = page_anon_exclusive_sub_batch(sub_batch_idx, nr_ptes,
_
Thread overview: 17+ messages
2026-03-24 15:43 [PATCH v2 0/2] mm/mprotect: micro-optimization work Pedro Falcato
2026-03-24 15:43 ` [PATCH v2 1/2] mm/mprotect: move softleaf code out of the main function Pedro Falcato
2026-03-24 20:12 ` David Hildenbrand (Arm)
2026-03-24 15:43 ` [PATCH v2 2/2] mm/mprotect: special-case small folios when applying write permissions Pedro Falcato
2026-03-24 20:18 ` David Hildenbrand (Arm)
2026-03-25 11:37 ` Pedro Falcato
2026-03-30 15:16 ` David Hildenbrand (Arm)
2026-04-02 0:09 ` Andrew Morton [this message]
2026-04-02 3:44 ` Andrew Morton
2026-04-02 7:11 ` David Hildenbrand (Arm)
2026-03-30 19:55 ` [PATCH v2 0/2] mm/mprotect: micro-optimization work Luke Yang
2026-03-30 20:06 ` Andrew Morton
2026-04-01 8:25 ` David Hildenbrand (Arm)
2026-04-01 14:14 ` Pedro Falcato
2026-04-01 14:10 ` Pedro Falcato
2026-04-02 13:55 ` Luke Yang
2026-04-06 14:32 ` Luke Yang