From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 9 Jul 2024 12:57:54 +0100
From: Will Deacon
To: Steven Price
Cc: kvm@vger.kernel.org, kvmarm@lists.linux.dev, Catalin Marinas,
	Marc Zyngier, James Morse, Oliver Upton, Suzuki K Poulose,
	Zenghui Yu, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, Joey Gouly, Alexandru Elisei,
	Christoffer Dall, Fuad Tabba, linux-coco@lists.linux.dev,
	Ganapatrao Kulkarni
Subject: Re: [PATCH v4 08/15] arm64: mm: Avoid TLBI when marking pages as valid
Message-ID: <20240709115754.GD13242@willie-the-truck>
References: <20240701095505.165383-1-steven.price@arm.com>
	<20240701095505.165383-9-steven.price@arm.com>
In-Reply-To: <20240701095505.165383-9-steven.price@arm.com>

On Mon, Jul 01, 2024 at 10:54:58AM +0100, Steven Price wrote:
> When __change_memory_common() is purely setting the valid bit on a PTE
> (e.g.
> via the set_memory_valid() call) there is no need for a TLBI as
> either the entry isn't changing (the valid bit was already set) or the
> entry was invalid and so should not have been cached in the TLB.
>
> Signed-off-by: Steven Price
> ---
> v4: New patch
> ---
>  arch/arm64/mm/pageattr.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index 0e270a1c51e6..547a9e0b46c2 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -60,7 +60,13 @@ static int __change_memory_common(unsigned long start, unsigned long size,
> 	ret = apply_to_page_range(&init_mm, start, size, change_page_range,
> 				  &data);
>
> -	flush_tlb_kernel_range(start, start + size);
> +	/*
> +	 * If the memory is being made valid without changing any other bits
> +	 * then a TLBI isn't required as a non-valid entry cannot be cached in
> +	 * the TLB.
> +	 */
> +	if (pgprot_val(set_mask) != PTE_VALID || pgprot_val(clear_mask))
> +		flush_tlb_kernel_range(start, start + size);
> 	return ret;

Can you elaborate on when this actually happens, please? It feels like
a case of "Doctor, it hurts when I do this" rather than something we
should be trying to short-circuit in the low-level code.

Will