From mboxrd@z Thu Jan  1 00:00:00 1970
From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Steve Capper, David Hildenbrand, Peter Zijlstra, Anshuman Khandual,
 Catalin Marinas, Will Deacon, Sasha Levin, aneesh.kumar@linux.ibm.com,
 npiggin@gmail.com, linux-arch@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH AUTOSEL 5.15 23/41] tlb: hugetlb: Add more sizes to tlb_remove_huge_tlb_entry
Date: Mon, 11 Apr 2022 20:46:35 -0400
Message-Id: <20220412004656.350101-23-sashal@kernel.org>
In-Reply-To: <20220412004656.350101-1-sashal@kernel.org>
References: <20220412004656.350101-1-sashal@kernel.org>
MIME-Version: 1.0
From: Steve Capper

[ Upstream commit 697a1d44af8ba0477ee729e632f4ade37999249a ]

tlb_remove_huge_tlb_entry only considers PMD_SIZE and PUD_SIZE when
updating the mmu_gather structure. Unfortunately, on arm64 there are two
additional huge page sizes that need to be covered: CONT_PTE_SIZE and
CONT_PMD_SIZE. When a user employs contiguous huge pages, a VM_BUG_ON can
be triggered because the tlb structure has not been correctly updated by
the relevant tlb_flush_p.._range() call from tlb_remove_huge_tlb_entry.

This patch adds inequality logic to the generic implementation of
tlb_remove_huge_tlb_entry so that CONT_PTE_SIZE and CONT_PMD_SIZE are
effectively covered on arm64. In addition to ptes, pmds and puds, p4ds
are now considered too.

Reported-by: David Hildenbrand
Suggested-by: Peter Zijlstra (Intel)
Cc: Anshuman Khandual
Cc: Catalin Marinas
Cc: Will Deacon
Link: https://lore.kernel.org/linux-mm/811c5c8e-b3a2-85d2-049c-717f17c3a03a@redhat.com/
Signed-off-by: Steve Capper
Acked-by: David Hildenbrand
Reviewed-by: Anshuman Khandual
Reviewed-by: Catalin Marinas
Acked-by: Peter Zijlstra (Intel)
Link: https://lore.kernel.org/r/20220330112543.863-1-steve.capper@arm.com
Signed-off-by: Will Deacon
Signed-off-by: Sasha Levin
---
 include/asm-generic/tlb.h | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 2c68a545ffa7..71942a1c642d 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -565,10 +565,14 @@ static inline void tlb_flush_p4d_range(struct mmu_gather *tlb,
 #define tlb_remove_huge_tlb_entry(h, tlb, ptep, address)	\
 	do {							\
 		unsigned long _sz = huge_page_size(h);		\
-		if (_sz == PMD_SIZE)				\
-			tlb_flush_pmd_range(tlb, address, _sz);	\
-		else if (_sz == PUD_SIZE)			\
+		if (_sz >= P4D_SIZE)				\
+			tlb_flush_p4d_range(tlb, address, _sz);	\
+		else if (_sz >= PUD_SIZE)			\
 			tlb_flush_pud_range(tlb, address, _sz);	\
+		else if (_sz >= PMD_SIZE)			\
+			tlb_flush_pmd_range(tlb, address, _sz);	\
+		else						\
+			tlb_flush_pte_range(tlb, address, _sz);	\
 		__tlb_remove_tlb_entry(tlb, ptep, address);	\
 	} while (0)
 
-- 
2.35.1
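
[Editorial note, not part of the patch: the standalone userspace sketch
below models the new size-dispatch cascade so the effect of switching
from equality to inequality checks is easy to see. The size values and
the flush_for() helper name are assumptions for an arm64 4K-granule,
48-bit-VA configuration; in the kernel these constants come from the
page-table headers and the flush targets update the mmu_gather range
rather than print anything.]

#include <stdio.h>

/* Assumed arm64 sizes for a 4K translation granule (illustrative only). */
#define CONT_PTE_SIZE	(64UL << 10)	/* 16 contiguous PTEs = 64K */
#define PMD_SIZE	(2UL << 20)	/* one PMD entry      = 2M  */
#define CONT_PMD_SIZE	(32UL << 20)	/* 16 contiguous PMDs = 32M */
#define PUD_SIZE	(1UL << 30)	/* one PUD entry      = 1G  */
#define P4D_SIZE	(512UL << 30)	/* p4d folded into pgd here */

/* Mirrors the inequality cascade introduced by the patch. */
static const char *flush_for(unsigned long sz)
{
	if (sz >= P4D_SIZE)
		return "tlb_flush_p4d_range";
	else if (sz >= PUD_SIZE)
		return "tlb_flush_pud_range";
	else if (sz >= PMD_SIZE)
		return "tlb_flush_pmd_range";
	else
		return "tlb_flush_pte_range";
}

int main(void)
{
	unsigned long sizes[] = { CONT_PTE_SIZE, PMD_SIZE, CONT_PMD_SIZE, PUD_SIZE };
	const char *names[]   = { "CONT_PTE_SIZE", "PMD_SIZE", "CONT_PMD_SIZE", "PUD_SIZE" };

	for (unsigned int i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
		printf("%-14s (%8luK) -> %s\n", names[i],
		       sizes[i] >> 10, flush_for(sizes[i]));
	return 0;
}

[With the old equality checks, CONT_PTE_SIZE and CONT_PMD_SIZE matched
neither branch, so no tlb_flush_*_range() call updated the mmu_gather
range, which is what tripped the VM_BUG_ON described above; with the
inequality cascade they fall through to the pte and pmd range flushes
respectively.]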