From: Xie Yuanbin <qq570070308@gmail.com>
To: david@redhat.com, tglx@linutronix.de, segher@kernel.crashing.org,
	riel@surriel.com, peterz@infradead.org, linux@armlinux.org.uk,
	mathieu.desnoyers@efficios.com, paulmck@kernel.org, pjw@kernel.org,
	palmer@dabbelt.com, aou@eecs.berkeley.edu, alex@ghiti.fr,
	hca@linux.ibm.com, gor@linux.ibm.com, agordeev@linux.ibm.com,
	borntraeger@linux.ibm.com, svens@linux.ibm.com, davem@davemloft.net,
	andreas@gaisler.com, luto@kernel.org, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, hpa@zytor.com, acme@kernel.org,
	namhyung@kernel.org, mark.rutland@arm.com,
	alexander.shishkin@linux.intel.com, jolsa@kernel.org,
	irogers@google.com, adrian.hunter@intel.com, james.clark@linaro.org,
	anna-maria@linutronix.de, frederic@kernel.org, juri.lelli@redhat.com,
	vincent.guittot@linaro.org, dietmar.eggemann@arm.com,
	rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de,
	vschneid@redhat.com, nathan@kernel.org,
	nick.desaulniers+lkml@gmail.com, morbo@google.com,
	justinstitt@google.com, qq570070308@gmail.com, thuth@redhat.com,
	brauner@kernel.org, arnd@arndb.de, jlayton@kernel.org,
	aalbersh@redhat.com, akpm@linux-foundation.org, david@kernel.org,
	lorenzo.stoakes@oracle.com, max.kellermann@ionos.com,
	ryan.roberts@arm.com, nysal@linux.ibm.com, urezki@gmail.com
Cc: x86@kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-perf-users@vger.kernel.org,
	llvm@lists.linux.dev, will@kernel.org
Subject: [PATCH v2 1/4] Make enter_lazy_tlb inline on x86
Date: Sun, 9 Nov 2025 01:23:43 +0800
Message-ID: <20251108172346.263590-2-qq570070308@gmail.com>
X-Mailer: git-send-email 2.51.0
In-Reply-To: <20251108172346.263590-1-qq570070308@gmail.com>
References: <20251108172346.263590-1-qq570070308@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This function is very short and is called during context switching, which
is a hot code path. Make it an inline function on x86 to improve
performance, matching its implementation on other architectures.

Signed-off-by: Xie Yuanbin <qq570070308@gmail.com>
Reviewed-by: Rik van Riel <riel@surriel.com>
---
 arch/x86/include/asm/mmu_context.h | 21 ++++++++++++++++++++-
 arch/x86/mm/tlb.c                  | 21 ---------------------
 2 files changed, 20 insertions(+), 22 deletions(-)

diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index 73bf3b1b44e8..263e18bc5b3d 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -136,8 +136,27 @@ static inline void mm_reset_untag_mask(struct mm_struct *mm)
 }
 #endif
 
+/*
+ * Please ignore the name of this function. It should be called
+ * switch_to_kernel_thread().
+ *
+ * enter_lazy_tlb() is a hint from the scheduler that we are entering a
+ * kernel thread or other context without an mm. Acceptable implementations
+ * include doing nothing whatsoever, switching to init_mm, or various clever
+ * lazy tricks to try to minimize TLB flushes.
+ *
+ * The scheduler reserves the right to call enter_lazy_tlb() several times
+ * in a row. It will notify us that we're going back to a real mm by
+ * calling switch_mm_irqs_off().
+ */
 #define enter_lazy_tlb enter_lazy_tlb
-extern void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk);
+static __always_inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
+{
+	if (this_cpu_read(cpu_tlbstate.loaded_mm) == &init_mm)
+		return;
+
+	this_cpu_write(cpu_tlbstate_shared.is_lazy, true);
+}
 
 #define mm_init_global_asid mm_init_global_asid
 extern void mm_init_global_asid(struct mm_struct *mm);
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 5d221709353e..cb715e8e75e4 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -970,27 +970,6 @@ void switch_mm_irqs_off(struct mm_struct *unused, struct mm_struct *next,
 	}
 }
 
-/*
- * Please ignore the name of this function. It should be called
- * switch_to_kernel_thread().
- *
- * enter_lazy_tlb() is a hint from the scheduler that we are entering a
- * kernel thread or other context without an mm. Acceptable implementations
- * include doing nothing whatsoever, switching to init_mm, or various clever
- * lazy tricks to try to minimize TLB flushes.
- *
- * The scheduler reserves the right to call enter_lazy_tlb() several times
- * in a row. It will notify us that we're going back to a real mm by
- * calling switch_mm_irqs_off().
- */
-void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
-{
-	if (this_cpu_read(cpu_tlbstate.loaded_mm) == &init_mm)
-		return;
-
-	this_cpu_write(cpu_tlbstate_shared.is_lazy, true);
-}
-
 /*
  * Using a temporary mm allows to set temporary mappings that are not accessible
  * by other CPUs. Such mappings are needed to perform sensitive memory writes
-- 
2.51.0