From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: anshuman.khandual@arm.com, ard.biesheuvel@linaro.org, catalin.marinas@arm.com, mark.rutland@arm.com, steve.capper@arm.com, will@kernel.org
Subject: [PATCH] arm64: head: avoid over-mapping in map_memory
Date: Tue, 10 Aug 2021 16:27:56 +0100
Message-Id: <20210810152756.23903-1-mark.rutland@arm.com>

The `compute_indices` and `populate_entries` macros operate on inclusive bounds, and thus the `map_memory` macro which uses them also operates on inclusive bounds.
We pass `_end` and `_idmap_text_end` to `map_memory`, but these are exclusive bounds, and if one of these is sufficiently aligned (as a result of kernel configuration, physical placement, and KASLR), then:

* In `compute_indices`, the computed `iend` will be in the page/block *after* the final byte of the intended mapping.

* In `populate_entries`, an unnecessary entry will be created at the end of each level of table. At the leaf level, this entry will map up to SWAPPER_BLOCK_SIZE bytes of physical addresses that we did not intend to map.

As we may map up to SWAPPER_BLOCK_SIZE bytes more than intended, we may violate the boot protocol and map physical addresses past the 2MiB-aligned end address we are permitted to map. As we map these with Normal memory attributes, this may result in further problems depending on what these physical addresses correspond to.

Fix this by subtracting one from the end address in both cases, such that we always use inclusive bounds. For clarity, comments are updated to more clearly document that the macros expect inclusive bounds.

Fixes: 0370b31e48454d8c ("arm64: Extend early page table code to allow for larger kernel")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Steve Capper <steve.capper@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/head.S | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

I spotted this while working on some rework of the early page table code. While the rest isn't ready yet, I thought I'd send this out on its own as it's a fix.

Mark.

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index c5c994a73a64..f0826be4c104 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -176,8 +176,8 @@ SYM_CODE_END(preserve_boot_args)
  * were needed in the previous page table level then the next page table level is assumed
  * to be composed of multiple pages. (This effectively scales the end index).
  *
- * vstart:	virtual address of start of range
- * vend:	virtual address of end of range
+ * vstart:	virtual address of start of range (inclusive)
+ * vend:	virtual address of end of range (inclusive)
  * shift:	shift used to transform virtual address into index
  * ptrs:	number of entries in page table
  * istart:	index in table corresponding to vstart
@@ -214,8 +214,8 @@ SYM_CODE_END(preserve_boot_args)
  *
  * tbl:	location of page table
  * rtbl:	address to be used for first level page table entry (typically tbl + PAGE_SIZE)
- * vstart:	start address to map
- * vend:	end address to map - we map [vstart, vend]
+ * vstart:	virtual address of start of mapping (inclusive)
+ * vend:	virtual address of end of mapping (inclusive)
  * flags:	flags to use to map last level entries
  * phys:	physical address corresponding to vstart - physical memory is contiguous
  * pgds:	the number of pgd entries
@@ -355,6 +355,7 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
 1:	ldr_l	x4, idmap_ptrs_per_pgd
 	adr_l	x6, __idmap_text_end		// __pa(__idmap_text_end)
+	sub	x6, x6, #1

 	map_memory x0, x1, x3, x6, x7, x3, x4, x10, x11, x12, x13, x14

@@ -366,6 +367,7 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
 	add	x5, x5, x23			// add KASLR displacement
 	mov	x4, PTRS_PER_PGD
 	adrp	x6, _end			// runtime __pa(_end)
+	sub	x6, x6, #1
 	adrp	x3, _text			// runtime __pa(_text)
 	sub	x6, x6, x3			// _end - _text
 	add	x6, x6, x5			// runtime __va(_end)
--
2.11.0

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel