From: Xie Yuanbin <qq570070308@gmail.com>
To: linux@armlinux.org.uk, mathieu.desnoyers@efficios.com,
paulmck@kernel.org, pjw@kernel.org, palmer@dabbelt.com,
aou@eecs.berkeley.edu, alex@ghiti.fr, hca@linux.ibm.com,
gor@linux.ibm.com, agordeev@linux.ibm.com,
borntraeger@linux.ibm.com, svens@linux.ibm.com,
davem@davemloft.net, andreas@gaisler.com, tglx@linutronix.de,
mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com,
hpa@zytor.com, luto@kernel.org, peterz@infradead.org,
acme@kernel.org, namhyung@kernel.org, mark.rutland@arm.com,
alexander.shishkin@linux.intel.com, jolsa@kernel.org,
irogers@google.com, adrian.hunter@intel.com,
anna-maria@linutronix.de, frederic@kernel.org,
juri.lelli@redhat.com, vincent.guittot@linaro.org,
dietmar.eggemann@arm.com, rostedt@goodmis.org,
bsegall@google.com, mgorman@suse.de, vschneid@redhat.com,
qq570070308@gmail.com, thuth@redhat.com, riel@surriel.com,
akpm@linux-foundation.org, david@redhat.com,
lorenzo.stoakes@oracle.com, segher@kernel.crashing.org,
ryan.roberts@arm.com, max.kellermann@ionos.com, urezki@gmail.com,
nysal@linux.ibm.com
Cc: x86@kernel.org, linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org,
linux-s390@vger.kernel.org, sparclinux@vger.kernel.org,
linux-perf-users@vger.kernel.org, will@kernel.org
Subject: [PATCH 1/3] Change enter_lazy_tlb to inline on x86
Date: Sat, 25 Oct 2025 02:26:26 +0800 [thread overview]
Message-ID: <20251024182628.68921-2-qq570070308@gmail.com> (raw)
In-Reply-To: <20251024182628.68921-1-qq570070308@gmail.com>
This function is very short and sits on the context-switch path, so it
is called very frequently.
Make it an inline function on x86 to improve performance, matching how
it is implemented on other architectures.
Signed-off-by: Xie Yuanbin <qq570070308@gmail.com>
---
arch/x86/include/asm/mmu_context.h | 22 +++++++++++++++++++++-
arch/x86/mm/tlb.c | 21 ---------------------
2 files changed, 21 insertions(+), 22 deletions(-)
diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index 73bf3b1b44e8..30e68c5ef798 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -129,22 +129,42 @@ static inline unsigned long mm_lam_cr3_mask(struct mm_struct *mm)
static inline void dup_lam(struct mm_struct *oldmm, struct mm_struct *mm)
{
}
static inline void mm_reset_untag_mask(struct mm_struct *mm)
{
}
#endif
+/*
+ * Please ignore the name of this function. It should be called
+ * switch_to_kernel_thread().
+ *
+ * enter_lazy_tlb() is a hint from the scheduler that we are entering a
+ * kernel thread or other context without an mm. Acceptable implementations
+ * include doing nothing whatsoever, switching to init_mm, or various clever
+ * lazy tricks to try to minimize TLB flushes.
+ *
+ * The scheduler reserves the right to call enter_lazy_tlb() several times
+ * in a row. It will notify us that we're going back to a real mm by
+ * calling switch_mm_irqs_off().
+ */
#define enter_lazy_tlb enter_lazy_tlb
-extern void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk);
+static __always_inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
+{
+ if (this_cpu_read(cpu_tlbstate.loaded_mm) == &init_mm)
+ return;
+
+ this_cpu_write(cpu_tlbstate_shared.is_lazy, true);
+}
+
#define mm_init_global_asid mm_init_global_asid
extern void mm_init_global_asid(struct mm_struct *mm);
extern void mm_free_global_asid(struct mm_struct *mm);
/*
* Init a new mm. Used on mm copies, like at fork()
* and on mm's that are brand-new, like at execve().
*/
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 5d221709353e..cb715e8e75e4 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -963,41 +963,20 @@ void switch_mm_irqs_off(struct mm_struct *unused, struct mm_struct *next,
this_cpu_write(cpu_tlbstate.loaded_mm, next);
this_cpu_write(cpu_tlbstate.loaded_mm_asid, ns.asid);
cpu_tlbstate_update_lam(new_lam, mm_untag_mask(next));
if (next != prev) {
cr4_update_pce_mm(next);
switch_ldt(prev, next);
}
}
-/*
- * Please ignore the name of this function. It should be called
- * switch_to_kernel_thread().
- *
- * enter_lazy_tlb() is a hint from the scheduler that we are entering a
- * kernel thread or other context without an mm. Acceptable implementations
- * include doing nothing whatsoever, switching to init_mm, or various clever
- * lazy tricks to try to minimize TLB flushes.
- *
- * The scheduler reserves the right to call enter_lazy_tlb() several times
- * in a row. It will notify us that we're going back to a real mm by
- * calling switch_mm_irqs_off().
- */
-void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
-{
- if (this_cpu_read(cpu_tlbstate.loaded_mm) == &init_mm)
- return;
-
- this_cpu_write(cpu_tlbstate_shared.is_lazy, true);
-}
-
/*
* Using a temporary mm allows to set temporary mappings that are not accessible
* by other CPUs. Such mappings are needed to perform sensitive memory writes
* that override the kernel memory protections (e.g., W^X), without exposing the
* temporary page-table mappings that are required for these write operations to
* other CPUs. Using a temporary mm also allows to avoid TLB shootdowns when the
* mapping is torn down. Temporary mms can also be used for EFI runtime service
* calls or similar functionality.
*
* It is illegal to schedule while using a temporary mm -- the context switch
--
2.51.0