From mboxrd@z Thu Jan 1 00:00:00 1970
From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Peter Zijlstra, Will Deacon, Linus Torvalds, Sasha Levin,
	aneesh.kumar@linux.ibm.com, npiggin@gmail.com,
	linux-arch@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH AUTOSEL 5.15 5/8] mmu_gather: Let there be one tlb_{start,end}_vma() implementation
Date: Mon, 1 Aug 2022 15:02:40 -0400
Message-Id: <20220801190243.3818811-5-sashal@kernel.org>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20220801190243.3818811-1-sashal@kernel.org>
References: <20220801190243.3818811-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit
From: Peter Zijlstra

[ Upstream commit 18ba064e42df3661e196ab58a23931fc732a420b ]

Now that architectures are no longer allowed to override
tlb_{start,end}_vma(), rearrange the code so that there is only one
implementation for each of these functions. This greatly simplifies
figuring out what they actually do.

Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Will Deacon
Signed-off-by: Linus Torvalds
Signed-off-by: Sasha Levin
---
 include/asm-generic/tlb.h | 15 ++-------------
 1 file changed, 2 insertions(+), 13 deletions(-)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 71942a1c642d..17815e9d38b7 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -334,8 +334,8 @@ static inline void __tlb_reset_range(struct mmu_gather *tlb)
 
 #ifdef CONFIG_MMU_GATHER_NO_RANGE
 
-#if defined(tlb_flush) || defined(tlb_start_vma) || defined(tlb_end_vma)
-#error MMU_GATHER_NO_RANGE relies on default tlb_flush(), tlb_start_vma() and tlb_end_vma()
+#if defined(tlb_flush)
+#error MMU_GATHER_NO_RANGE relies on default tlb_flush()
 #endif
 
 /*
@@ -355,17 +355,10 @@ static inline void tlb_flush(struct mmu_gather *tlb)
 static inline void
 tlb_update_vma_flags(struct mmu_gather *tlb, struct vm_area_struct *vma) { }
 
-#define tlb_end_vma tlb_end_vma
-static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma) { }
-
 #else /* CONFIG_MMU_GATHER_NO_RANGE */
 
 #ifndef tlb_flush
 
-#if defined(tlb_start_vma) || defined(tlb_end_vma)
-#error Default tlb_flush() relies on default tlb_start_vma() and tlb_end_vma()
-#endif
-
 /*
  * When an architecture does not provide its own tlb_flush() implementation
  * but does have a reasonably efficient flush_vma_range() implementation
@@ -486,7 +479,6 @@ static inline unsigned long tlb_get_unmap_size(struct mmu_gather *tlb)
  * case where we're doing a full MM flush.  When we're doing a munmap,
  * the vmas are adjusted to only cover the region to be torn down.
  */
-#ifndef tlb_start_vma
 static inline void tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
 {
 	if (tlb->fullmm)
@@ -495,9 +487,7 @@ static inline void tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *
 	tlb_update_vma_flags(tlb, vma);
 	flush_cache_range(vma, vma->vm_start, vma->vm_end);
 }
-#endif
 
-#ifndef tlb_end_vma
 static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
 {
 	if (tlb->fullmm)
@@ -511,7 +501,6 @@ static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vm
 	 */
 	tlb_flush_mmu_tlbonly(tlb);
 }
-#endif
 
 /*
  * tlb_flush_{pte|pmd|pud|p4d}_range() adjust the tlb->start and tlb->end,
-- 
2.35.1
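
A note for reviewers of the backport: with the hunks above applied, the
generic header carries the only definition of each helper. The following
is a sketch of the resulting functions, assembled from the diff context
above; the lines hidden between hunk boundaries (the early returns and
the comment body inside tlb_end_vma()) are filled in from the upstream
commit, and the comments are descriptive rather than verbatim, so treat
this as a reconstruction, not a quote of the final file.

static inline void tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
{
	if (tlb->fullmm)
		return;

	/* Record the VMA flags and flush caches before TLB entries go away. */
	tlb_update_vma_flags(tlb, vma);
	flush_cache_range(vma, vma->vm_start, vma->vm_end);
}

static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
{
	if (tlb->fullmm)
		return;

	/*
	 * Flush at VMA boundaries so tlb->start/tlb->end do not grow to
	 * span the unused gap between consecutive VMAs.
	 */
	tlb_flush_mmu_tlbonly(tlb);
}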