Date: Tue, 10 Aug 2021 17:28:29 +0100
From: Mark Rutland
To: Ard Biesheuvel
Cc: Linux ARM, Anshuman Khandual, Ard Biesheuvel, Catalin Marinas, Steve Capper, Will Deacon
Subject: Re: [PATCH] arm64: head: avoid over-mapping in map_memory
Message-ID: <20210810162829.GC52842@C02TD0UTHF1T.local>
References: <20210810152756.23903-1-mark.rutland@arm.com>

On Tue, Aug 10, 2021 at 06:16:49PM +0200, Ard Biesheuvel wrote:
> On Tue, 10 Aug 2021 at 17:29, Mark Rutland wrote:
> >
> > The `compute_indices` and `populate_entries` macros operate on inclusive
> > bounds, and thus the `map_memory` macro which uses them also operates
> > on inclusive bounds.
> >
> > We pass `_end` and `__idmap_text_end` to `map_memory`, but these are
> > exclusive bounds, and if one of these is sufficiently aligned (as a
> > result of kernel configuration, physical placement, and KASLR), then:
> >
> > * In `compute_indices`, the computed `iend` will be in the page/block *after*
> >   the final byte of the intended mapping.
> >
> > * In `populate_entries`, an unnecessary entry will be created at the end
> >   of each level of table. At the leaf level, this entry will map up to
> >   SWAPPER_BLOCK_SIZE bytes of physical addresses that we did not intend
> >   to map.
> >
> > As we may map up to SWAPPER_BLOCK_SIZE bytes more than intended, we may
> > violate the boot protocol and map physical addresses past the 2MiB-aligned
> > end address we are permitted to map. As we map these with Normal memory
> > attributes, this may result in further problems depending on what these
> > physical addresses correspond to.
> >
> > Fix this by subtracting one from the end address in both cases, such
> > that we always use inclusive bounds. For clarity, comments are updated
> > to more clearly document that the macros expect inclusive bounds.
> >
> > Fixes: 0370b31e48454d8c ("arm64: Extend early page table code to allow for larger kernel")
> > Signed-off-by: Mark Rutland
> > Cc: Anshuman Khandual
> > Cc: Ard Biesheuvel
> > Cc: Catalin Marinas
> > Cc: Steve Capper
> > Cc: Will Deacon
> > ---
> >  arch/arm64/kernel/head.S | 10 ++++++----
> >  1 file changed, 6 insertions(+), 4 deletions(-)
> >
> > I spotted this while working on some rework of the early page table code.
> > While the rest isn't ready yet, I thought I'd send this out on its own as it's
> > a fix.
> >
> > Mark.
> >
> > diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> > index c5c994a73a64..f0826be4c104 100644
> > --- a/arch/arm64/kernel/head.S
> > +++ b/arch/arm64/kernel/head.S
> > @@ -176,8 +176,8 @@ SYM_CODE_END(preserve_boot_args)
> >   * were needed in the previous page table level then the next page table level is assumed
> >   * to be composed of multiple pages. (This effectively scales the end index).
> >   *
> > - * vstart:	virtual address of start of range
> > - * vend:	virtual address of end of range
> > + * vstart:	virtual address of start of range (inclusive)
> > + * vend:	virtual address of end of range (inclusive)
> >   * shift:	shift used to transform virtual address into index
> >   * ptrs:	number of entries in page table
> >   * istart:	index in table corresponding to vstart
> > @@ -214,8 +214,8 @@ SYM_CODE_END(preserve_boot_args)
> >   *
> >   * tbl:	location of page table
> >   * rtbl:	address to be used for first level page table entry (typically tbl + PAGE_SIZE)
> > - * vstart:	start address to map
> > - * vend:	end address to map - we map [vstart, vend]
> > + * vstart:	virtual address of start of mapping (inclusive)
> > + * vend:	virtual address of end of mapping (inclusive)
> >   * flags:	flags to use to map last level entries
> >   * phys:	physical address corresponding to vstart - physical memory is contiguous
> >   * pgds:	the number of pgd entries
> > @@ -355,6 +355,7 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
> >  1:
> >  	ldr_l	x4, idmap_ptrs_per_pgd
> >  	adr_l	x6, __idmap_text_end		// __pa(__idmap_text_end)
> > +	sub	x6, x6, #1
>
> __idmap_text_end-1 should do the trick as well, no?

Yup. If you want, I can make that:

	adr_l	x6, __idmap_text_end - 1	// __pa(__idmap_text_end - 1)

> > @@ -366,6 +367,7 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
> >  	add	x5, x5, x23			// add KASLR displacement
> >  	mov	x4, PTRS_PER_PGD
> >  	adrp	x6, _end			// runtime __pa(_end)
> > +	sub	x6, x6, #1

...
and likewise here:

	adr_l	x6, _end - 1			// runtime __pa(_end - 1)

Thanks,
Mark.

> >  	adrp	x3, _text			// runtime __pa(_text)
> >  	sub	x6, x6, x3			// _end - _text
> >  	add	x6, x6, x5			// runtime __va(_end)
> > --
> > 2.11.0
> >
> >
> > _______________________________________________
> > linux-arm-kernel mailing list
> > linux-arm-kernel@lists.infradead.org
> > http://lists.infradead.org/mailman/listinfo/linux-arm-kernel