From mboxrd@z Thu Jan  1 00:00:00 1970
Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by smtp.subspace.kernel.org (Postfix) with ESMTPS id D82861C31
	for ; Wed, 23 Nov 2022 09:42:32 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3AC1EC433D6;
	Wed, 23 Nov 2022 09:42:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org;
	s=korg; t=1669196552;
	bh=FOH9V6sTon2UB/EUH/j7qS5FK+cZfzbzoXKUKQ6sG1o=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=hjh0yWWFQYEn/Onj8Tm+vFKL/lMyxbz2v83QozL6zTGLkhi/gWC/2iF1sW2sFc8u1
	 muuBY+FzFFEvfNU1xAPNlsvn9eqa2FFCcv/EQQYKQNtI4/NGVhxBN/HBUZh8xX/7By
	 ZNN4DhJzYILs84sxA520c0EJqQIrALODY5IL9jx8=
From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman,
	patches@lists.linux.dev,
	Mike Rapoport,
	Anshuman Khandual,
	Catalin Marinas,
	Sasha Levin
Subject: [PATCH 6.0 063/314] arm64/mm: fold check for KFENCE into can_set_direct_map()
Date: Wed, 23 Nov 2022 09:48:28 +0100
Message-Id: <20221123084628.338225237@linuxfoundation.org>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20221123084625.457073469@linuxfoundation.org>
References: <20221123084625.457073469@linuxfoundation.org>
User-Agent: quilt/0.67
Precedence: bulk
X-Mailing-List: patches@lists.linux.dev
List-Id:
List-Subscribe:
List-Unsubscribe:
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Mike Rapoport

[ Upstream commit b9dd04a20f81333e4b99662f1bbaf7c9e3a1e137 ]

KFENCE requires the linear map to be mapped at page granularity, so that
it is possible to protect/unprotect single pages, just like with
rodata_full and DEBUG_PAGEALLOC.

Instead of repeating can_set_direct_map() || IS_ENABLED(CONFIG_KFENCE),
make can_set_direct_map() handle the KFENCE case.

This also prevents potential false positives in kernel_page_present()
that may return true for a non-present page if CONFIG_KFENCE is enabled.

Signed-off-by: Mike Rapoport
Reviewed-by: Anshuman Khandual
Link: https://lore.kernel.org/r/20220921074841.382615-1-rppt@kernel.org
Signed-off-by: Catalin Marinas
Stable-dep-of: 2081b3bd0c11 ("arm64: fix rodata=full again")
Signed-off-by: Sasha Levin
---
 arch/arm64/mm/mmu.c      | 8 ++------
 arch/arm64/mm/pageattr.c | 8 +++++++-
 2 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index eb489302c28a..e8de94dd5a60 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -539,7 +539,7 @@ static void __init map_mem(pgd_t *pgdp)
 	 */
 	BUILD_BUG_ON(pgd_index(direct_map_end - 1) == pgd_index(direct_map_end));
 
-	if (can_set_direct_map() || IS_ENABLED(CONFIG_KFENCE))
+	if (can_set_direct_map())
 		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
 
 	/*
@@ -1551,11 +1551,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
 
 	VM_BUG_ON(!mhp_range_allowed(start, size, true));
 
-	/*
-	 * KFENCE requires linear map to be mapped at page granularity, so that
-	 * it is possible to protect/unprotect single pages in the KFENCE pool.
-	 */
-	if (can_set_direct_map() || IS_ENABLED(CONFIG_KFENCE))
+	if (can_set_direct_map())
 		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
 
 	__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 64e985eaa52d..d107c3d434e2 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -21,7 +21,13 @@ bool rodata_full __ro_after_init = IS_ENABLED(CONFIG_RODATA_FULL_DEFAULT_ENABLED
 
 bool can_set_direct_map(void)
 {
-	return rodata_full || debug_pagealloc_enabled();
+	/*
+	 * rodata_full, DEBUG_PAGEALLOC and KFENCE require linear map to be
+	 * mapped at page granularity, so that it is possible to
+	 * protect/unprotect single pages.
+	 */
+	return rodata_full || debug_pagealloc_enabled() ||
+		IS_ENABLED(CONFIG_KFENCE);
 }
 
 static int change_page_range(pte_t *ptep, unsigned long addr, void *data)
-- 
2.35.1
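
For context, the net effect of the patch is that every caller queries a single
predicate and the KFENCE requirement lives inside it. Below is a minimal,
standalone C sketch of that shape -- it is not the kernel code itself; the
knobs and flag macros are stand-ins for rodata_full, debug_pagealloc_enabled()
and IS_ENABLED(CONFIG_KFENCE), chosen only to make the example compile on its
own.

/* Standalone model of the refactoring above -- not kernel code. */
#include <stdbool.h>
#include <stdio.h>

#define NO_BLOCK_MAPPINGS	(1u << 0)	/* stand-in for the kernel flag */
#define NO_CONT_MAPPINGS	(1u << 1)	/* stand-in for the kernel flag */

/* Assumed runtime/config knobs; in the kernel these are rodata_full,
 * debug_pagealloc_enabled() and IS_ENABLED(CONFIG_KFENCE). */
static bool rodata_full = true;
static bool debug_pagealloc = false;
static bool kfence_enabled = true;

/* After the patch, the page-granularity requirement of KFENCE is folded
 * into this one predicate instead of being repeated at each call site. */
static bool can_set_direct_map(void)
{
	return rodata_full || debug_pagealloc || kfence_enabled;
}

int main(void)
{
	unsigned int flags = 0;

	/* Callers (map_mem() and arch_add_memory() in the real patch) now ask
	 * only the predicate, with no "|| IS_ENABLED(CONFIG_KFENCE)" tail. */
	if (can_set_direct_map())
		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;

	printf("flags = %#x\n", flags);
	return 0;
}

Centralizing the condition this way is also what lets kernel_page_present()
and the other pageattr helpers stay consistent with the mapping policy, which
is the false-positive issue the commit message mentions.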