From: "David Hildenbrand (Arm)" <david@kernel.org>
To: Pedro Falcato <pfalcato@suse.de>, Lorenzo Stoakes <ljs@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	"Liam R. Howlett" <Liam.Howlett@oracle.com>,
	Vlastimil Babka <vbabka@kernel.org>, Jann Horn <jannh@google.com>,
	Dev Jain <dev.jain@arm.com>, Luke Yang <luyang@redhat.com>,
	jhladky@redhat.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/4] mm/mprotect: encourage inlining with __always_inline
Date: Fri, 20 Mar 2026 11:08:50 +0100
Message-ID: <5732e382-3ebb-43e2-8aaa-4729387cea5e@kernel.org>
In-Reply-To: <hmme2zpc4klftq56t7cfuvrzrp2k4t22g264fs3zloohuwelqe@ubrzpz37qlvm>

On 3/20/26 10:59, Pedro Falcato wrote:
> On Thu, Mar 19, 2026 at 10:28:47PM +0100, David Hildenbrand (Arm) wrote:
>> On 3/19/26 19:31, Pedro Falcato wrote:
>>> Encourage the compiler to inline batch PTE logic and resolve constant
>>> branches by adding __always_inline strategically.
>>>
>>> Signed-off-by: Pedro Falcato <pfalcato@suse.de>
>>> ---
>>>  mm/mprotect.c | 10 +++++-----
>>>  1 file changed, 5 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/mm/mprotect.c b/mm/mprotect.c
>>> index 9681f055b9fc..1bd0d4aa07c2 100644
>>> --- a/mm/mprotect.c
>>> +++ b/mm/mprotect.c
>>> @@ -103,7 +103,7 @@ bool can_change_pte_writable(struct vm_area_struct *vma, unsigned long addr,
>>>  	return can_change_shared_pte_writable(vma, pte);
>>>  }
>>>  
>>> -static int mprotect_folio_pte_batch(struct folio *folio, pte_t *ptep,
>>> +static __always_inline int mprotect_folio_pte_batch(struct folio *folio, pte_t *ptep,
>>>  				    pte_t pte, int max_nr_ptes, fpb_t flags)
>>>  {
>>>  	/* No underlying folio, so cannot batch */
>>> @@ -117,9 +117,9 @@ static int mprotect_folio_pte_batch(struct folio *folio, pte_t *ptep,
>>>  }
>>>  
>>>  /* Set nr_ptes number of ptes, starting from idx */
>>> -static void prot_commit_flush_ptes(struct vm_area_struct *vma, unsigned long addr,
>>> -		pte_t *ptep, pte_t oldpte, pte_t ptent, int nr_ptes,
>>> -		int idx, bool set_write, struct mmu_gather *tlb)
>>> +static __always_inline void prot_commit_flush_ptes(struct vm_area_struct *vma,
>>> +		unsigned long addr, pte_t *ptep, pte_t oldpte, pte_t ptent,
>>> +		int nr_ptes, int idx, bool set_write, struct mmu_gather *tlb)
>>>  {
>>>  	/*
>>>  	 * Advance the position in the batch by idx; note that if idx > 0,
>>> @@ -169,7 +169,7 @@ static int page_anon_exclusive_sub_batch(int start_idx, int max_len,
>>>   * pte of the batch. Therefore, we must individually check all pages and
>>>   * retrieve sub-batches.
>>>   */
>>> -static void commit_anon_folio_batch(struct vm_area_struct *vma,
>>> +static __always_inline void commit_anon_folio_batch(struct vm_area_struct *vma,
>>>  		struct folio *folio, struct page *first_page, unsigned long addr, pte_t *ptep,
>>>  		pte_t oldpte, pte_t ptent, int nr_ptes, struct mmu_gather *tlb)
>>>  {
>>
>> From my micro-optimization work on zapping and fork, I learned that
>> these batching functions are best optimized for order-0 pages by
>> explicitly calling them from the code with "nr_ptes == 1" and then
>> force-inlining them. nr_ptes and all loops will essentially be optimized
>> out.
>>
>> With no such explicit constants, is there really a real benefit to be
>> had here?
> 
> Per my measurements, there is a real, measurable speedup here. Of course
> things may heavily depend on the microarchitecture you use. I want to
> note that these three functions are part of the hot loop and thus we
> definitely want them inlined, particularly if we start special-casing
> stuff. You can cut down _a lot_ of code if you simply tell the compiler
> "yeah, don't bother, you're looking at 1 pte only".

That's why I think this change is a lot more valuable once squashed into
patch #4.

-- 
Cheers,

David



Thread overview: 32+ messages
2026-03-19 18:31 [PATCH 0/4] mm/mprotect: micro-optimization work Pedro Falcato
2026-03-19 18:31 ` [PATCH 1/4] mm/mprotect: encourage inlining with __always_inline Pedro Falcato
2026-03-19 18:59   ` Lorenzo Stoakes (Oracle)
2026-03-19 19:00     ` Lorenzo Stoakes (Oracle)
2026-03-19 21:28   ` David Hildenbrand (Arm)
2026-03-20  9:59     ` Pedro Falcato
2026-03-20 10:08       ` David Hildenbrand (Arm) [this message]
2026-03-19 18:31 ` [PATCH 2/4] mm/mprotect: move softleaf code out of the main function Pedro Falcato
2026-03-19 19:06   ` Lorenzo Stoakes (Oracle)
2026-03-19 21:33   ` David Hildenbrand (Arm)
2026-03-20 10:04     ` Pedro Falcato
2026-03-20 10:07       ` David Hildenbrand (Arm)
2026-03-20 10:54         ` Lorenzo Stoakes (Oracle)
2026-03-19 18:31 ` [PATCH 3/4] mm/mprotect: un-inline folio_pte_batch_flags() Pedro Falcato
2026-03-19 19:14   ` Lorenzo Stoakes (Oracle)
2026-03-19 21:41     ` David Hildenbrand (Arm)
2026-03-20 10:36       ` Lorenzo Stoakes (Oracle)
2026-03-20 10:59         ` Pedro Falcato
2026-03-20 11:02           ` David Hildenbrand (Arm)
2026-03-20 11:27           ` Lorenzo Stoakes (Oracle)
2026-03-20 11:01         ` David Hildenbrand (Arm)
2026-03-20 11:45           ` Lorenzo Stoakes (Oracle)
2026-03-23 12:56             ` David Hildenbrand (Arm)
2026-03-20 10:34     ` Pedro Falcato
2026-03-20 10:51       ` Lorenzo Stoakes (Oracle)
2026-03-19 18:31 ` [PATCH 4/4] mm/mprotect: special-case small folios when applying write permissions Pedro Falcato
2026-03-19 19:17   ` Lorenzo Stoakes (Oracle)
2026-03-20 10:36     ` Pedro Falcato
2026-03-20 10:42       ` Lorenzo Stoakes (Oracle)
2026-03-19 21:43   ` David Hildenbrand (Arm)
2026-03-20 10:37     ` Pedro Falcato
2026-03-20  2:42 ` [PATCH 0/4] mm/mprotect: micro-optimization work Andrew Morton
