From mboxrd@z Thu Jan  1 00:00:00 1970
From: Anshuman Khandual <anshuman.khandual@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: Anshuman Khandual, Catalin Marinas, Will Deacon, James Morse,
	Marc Zyngier, Mark Rutland, linux-kernel@vger.kernel.org
Subject: [PATCH V2] arm64/mm: Fix idmap on [16K|36VA|48PA]
Date: Tue, 28 Sep 2021 11:03:45 +0530
Message-Id: <1632807225-20189-1-git-send-email-anshuman.khandual@arm.com>

When creating the idmap, the kernel may add one extra level to idmap
memory outside the VA range. But for [16K|36VA|48PA], we need two extra
levels to reach 48 bits. If the bootloader places the kernel in memory
above (1 << 46), the kernel will fail to enable the MMU. Although we are
not aware of a platform where this happens, it is worth accommodating
such scenarios to prevent a possible kernel crash.
Let's fix this problem by carefully analyzing the existing VA_BITS with
respect to the maximum mapping possible with the existing PGDIR level,
i.e. (PGDIR_SHIFT + PAGE_SHIFT - 3), and then evaluating how many extra
page table levels are required to accommodate the reduced idmap_t0sz to
map __idmap_text_end.

Cc: Catalin Marinas
Cc: Will Deacon
Cc: James Morse
Cc: Marc Zyngier
Cc: Mark Rutland
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Fixes: 215399392fe4 ("arm64: 36 bit VA")
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
This applies on v5.15-rc3. This is a different approach compared to V1,
which still applies on the latest mainline. Besides, this enables all
upcoming FEAT_LPA2 combinations as well. Please do suggest which
approach would be preferred.

- Anshuman

V1:  https://lore.kernel.org/all/1627879359-30303-1-git-send-email-anshuman.khandual@arm.com/
RFC: https://lore.kernel.org/lkml/1627019894-14819-1-git-send-email-anshuman.khandual@arm.com/

 arch/arm64/include/asm/assembler.h |  9 ++++++++
 arch/arm64/kernel/head.S           | 46 +++++++++++++++++++++++---------------
 2 files changed, 37 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index bfa5840..e5b5d3a 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -25,6 +25,15 @@
 #include
 #include

+	.macro shift_to_ptrs, ptrs, shift, tmp, tmp1
+	ldr_l	\tmp1, idmap_t0sz
+	add	\tmp1, \tmp1, \shift
+	mov	\tmp, #64
+	sub	\tmp, \tmp, \tmp1
+	mov	\ptrs, #1
+	lsr	\ptrs, \ptrs, \tmp
+	.endm
+
 /*
  * Provide a wxN alias for each wN register so what we can paste a xN
  * reference after a 'w' to obtain the 32-bit version.
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 1796245..b93d50d 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -328,30 +328,40 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
 	dmb	sy
 	dc	ivac, x6		// Invalidate potentially stale cache line
 
-#if (VA_BITS < 48)
 #define EXTRA_SHIFT	(PGDIR_SHIFT + PAGE_SHIFT - 3)
-#define EXTRA_PTRS	(1 << (PHYS_MASK_SHIFT - EXTRA_SHIFT))
-
-	/*
-	 * If VA_BITS < 48, we have to configure an additional table level.
-	 * First, we have to verify our assumption that the current value of
-	 * VA_BITS was chosen such that all translation levels are fully
-	 * utilised, and that lowering T0SZ will always result in an additional
-	 * translation level to be configured.
-	 */
-#if VA_BITS != EXTRA_SHIFT
+#define EXTRA_SHIFT_1	(EXTRA_SHIFT + PAGE_SHIFT - 3)
+#if (VA_BITS > EXTRA_SHIFT)
 #error "Mismatch between VA_BITS and page size/number of translation levels"
 #endif
 
-	mov	x4, EXTRA_PTRS
+#if (VA_BITS == EXTRA_SHIFT)
+	mov	x6, #TCR_T0SZ(VA_BITS_MIN)
+	sub	x6, x6, x5
+	cmp	x6, #(PAGE_SHIFT - 3)
+	b.gt	8f
+
+	shift_to_ptrs	x4, EXTRA_SHIFT, x5, x6
 	create_table_entry x0, x3, EXTRA_SHIFT, x4, x5, x6
-#else
-	/*
-	 * If VA_BITS == 48, we don't have to configure an additional
-	 * translation level, but the top-level table has more entries.
-	 */
-	mov	x4, #1 << (PHYS_MASK_SHIFT - PGDIR_SHIFT)
+	b	1f
+8:
+	shift_to_ptrs	x4, EXTRA_SHIFT_1, x5, x6
+	create_table_entry x0, x3, EXTRA_SHIFT_1, x4, x5, x6
+
+	mov	x4, PTRS_PER_PTE
+	create_table_entry x0, x3, EXTRA_SHIFT, x4, x5, x6
+#elif (VA_BITS < EXTRA_SHIFT)
+	mov	x6, #64
+	sub	x6, x6, x5
+	cmp	x6, EXTRA_SHIFT
+	b.eq	1f
+	b.gt	9f
+
+	shift_to_ptrs	x4, PGDIR_SHIFT, x5, x6
 	str_l	x4, idmap_ptrs_per_pgd, x5
+	b	1f
+9:
+	shift_to_ptrs	x4, EXTRA_SHIFT, x5, x6
+	create_table_entry x0, x3, EXTRA_SHIFT, x4, x5, x6
 #endif
 1:
 	ldr_l	x4, idmap_ptrs_per_pgd
-- 
2.7.4

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel