From: Pedro Falcato <pfalcato@suse.de>
To: "David Hildenbrand (Arm)" <david@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
Lorenzo Stoakes <ljs@kernel.org>,
Vlastimil Babka <vbabka@kernel.org>,
Jann Horn <jannh@google.com>, Dev Jain <dev.jain@arm.com>,
Luke Yang <luyang@redhat.com>,
jhladky@redhat.com, linux-mm@kvack.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 2/2] mm/mprotect: special-case small folios when applying write permissions
Date: Wed, 25 Mar 2026 11:37:39 +0000 [thread overview]
Message-ID: <dccxy3i3kaprljloxbyo2u346bu6xwev42cpvt5lfbbli2hbap@6n2rd5mftffh> (raw)
In-Reply-To: <fbabc728-a87b-48e2-87dc-68ac21eb02d4@kernel.org>
On Tue, Mar 24, 2026 at 09:18:42PM +0100, David Hildenbrand (Arm) wrote:
> On 3/24/26 16:43, Pedro Falcato wrote:
> > The common order-0 case is important enough to want its own branch, and
> > avoids the hairy, large loop logic that the CPU does not seem to handle
> > particularly well.
> >
> > While at it, encourage the compiler to inline batch PTE logic and resolve
> > constant branches by adding __always_inline strategically.
> >
> > Reviewed-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
> > Signed-off-by: Pedro Falcato <pfalcato@suse.de>
> > ---
> > mm/mprotect.c | 17 ++++++++++++-----
> > 1 file changed, 12 insertions(+), 5 deletions(-)
> >
> > diff --git a/mm/mprotect.c b/mm/mprotect.c
> > index 2eaf862e5734..2fda26107066 100644
> > --- a/mm/mprotect.c
> > +++ b/mm/mprotect.c
> > @@ -103,7 +103,7 @@ bool can_change_pte_writable(struct vm_area_struct *vma, unsigned long addr,
> > return can_change_shared_pte_writable(vma, pte);
> > }
> >
> > -static int mprotect_folio_pte_batch(struct folio *folio, pte_t *ptep,
> > +static __always_inline int mprotect_folio_pte_batch(struct folio *folio, pte_t *ptep,
> > pte_t pte, int max_nr_ptes, fpb_t flags)
> > {
> > /* No underlying folio, so cannot batch */
> > @@ -117,9 +117,9 @@ static int mprotect_folio_pte_batch(struct folio *folio, pte_t *ptep,
> > }
> >
> > /* Set nr_ptes number of ptes, starting from idx */
> > -static void prot_commit_flush_ptes(struct vm_area_struct *vma, unsigned long addr,
> > - pte_t *ptep, pte_t oldpte, pte_t ptent, int nr_ptes,
> > - int idx, bool set_write, struct mmu_gather *tlb)
> > +static __always_inline void prot_commit_flush_ptes(struct vm_area_struct *vma,
> > + unsigned long addr, pte_t *ptep, pte_t oldpte, pte_t ptent,
> > + int nr_ptes, int idx, bool set_write, struct mmu_gather *tlb)
> > {
> > /*
> > * Advance the position in the batch by idx; note that if idx > 0,
> > @@ -169,7 +169,7 @@ static int page_anon_exclusive_sub_batch(int start_idx, int max_len,
> > * pte of the batch. Therefore, we must individually check all pages and
> > * retrieve sub-batches.
> > */
> > -static void commit_anon_folio_batch(struct vm_area_struct *vma,
> > +static __always_inline void commit_anon_folio_batch(struct vm_area_struct *vma,
> > struct folio *folio, struct page *first_page, unsigned long addr, pte_t *ptep,
> > pte_t oldpte, pte_t ptent, int nr_ptes, struct mmu_gather *tlb)
> > {
> > @@ -177,6 +177,13 @@ static void commit_anon_folio_batch(struct vm_area_struct *vma,
> > int sub_batch_idx = 0;
> > int len;
> >
> > + /* Optimize for the common order-0 case. */
> > + if (likely(nr_ptes == 1)) {
> > + prot_commit_flush_ptes(vma, addr, ptep, oldpte, ptent, 1,
> > + 0, PageAnonExclusive(first_page), tlb);
>
> To optimize that one, inlining prot_commit_flush_ptes() would be
> sufficient. Does inlining the other two really help? I don't think we
> can optimize out loops etc. for them?
Well, I'm getting meaningful (if smaller) wins from adding those
__always_inlines. (I also get a small win from __always_inline on
set_write_prot_commit_flush_ptes, but I didn't notice that until now.)
>
> I would have thought that specializing on nr_ptes == 1 at an even higher
> level, where we call
> set_write_prot_commit_flush_ptes()/prot_commit_flush_ptes(), would allow
> optimizing out the loops entirely for nr_ptes == 1?
That could also work, but then set_write_prot_commit_flush_ptes (holy cow,
what a long name) would definitely need inlining, and it might be a little
uglier overall.
This is the part where having data points from machines other than my
giga-fast-giga-powerful Zen 5 could prove handy :/
--
Pedro
Thread overview:
2026-03-24 15:43 [PATCH v2 0/2] mm/mprotect: micro-optimization work Pedro Falcato
2026-03-24 15:43 ` [PATCH v2 1/2] mm/mprotect: move softleaf code out of the main function Pedro Falcato
2026-03-24 20:12 ` David Hildenbrand (Arm)
2026-03-24 15:43 ` [PATCH v2 2/2] mm/mprotect: special-case small folios when applying write permissions Pedro Falcato
2026-03-24 20:18 ` David Hildenbrand (Arm)
2026-03-25 11:37 ` Pedro Falcato [this message]