From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <3d6b2d69-ba7c-4da0-80e1-d1b80da47696@arm.com>
Date: Tue, 27 Jan 2026 10:50:19 +0000
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v2 10/10] arm64: mm: Unmap kernel data/bss entirely from the linear map
Content-Language: en-GB
To: Ard Biesheuvel, linux-kernel@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, will@kernel.org, catalin.marinas@arm.com, mark.rutland@arm.com, Ard Biesheuvel, Anshuman Khandual, Liz Prucka, Seth Jenkins, Kees Cook, linux-hardening@vger.kernel.org
References: <20260126092630.1800589-12-ardb+git@google.com> <20260126092630.1800589-22-ardb+git@google.com>
From: Ryan Roberts
In-Reply-To: <20260126092630.1800589-22-ardb+git@google.com>
X-Mailing-List: linux-hardening@vger.kernel.org
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 26/01/2026 09:26, Ard Biesheuvel wrote:
> From: Ard Biesheuvel
>
> The linear aliases of the kernel text and rodata are mapped read-only in
> the linear map as well. Given that the contents of these regions are
> mostly identical to the version in the loadable image, mapping them
> read-only and leaving their contents visible is a reasonable hardening
> measure.

What about ro_after_init? Could that contain secrets that we don't want to
leak? And what is the advantage of leaving text/rodata read-only in the
linear map vs just unmapping the whole lot?

> Data and bss, however, are now also mapped read-only, but the contents of
> these regions are more likely to contain data that we'd rather not leak.
> So let's unmap these entirely in the linear map when the kernel is
> running normally.
>
> When going into hibernation or waking up from it, these regions need to
> be mapped, so map the region initially, and toggle the valid bit to
> map/unmap the region as needed.
>
> Signed-off-by: Ard Biesheuvel
> ---
>  arch/arm64/mm/mmu.c | 40 ++++++++++++++++++--
>  1 file changed, 37 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index fdbbb018adc5..06b2d11b4561 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -24,6 +24,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>  #include
>  #include
> @@ -1027,6 +1028,31 @@ static void __init __map_memblock(phys_addr_t start, phys_addr_t end,
>  			 end - start, prot, early_pgtable_alloc, flags);
>  }
>
> +static void remap_linear_data_alias(bool unmap)
> +{
> +	set_memory_valid((unsigned long)lm_alias(__init_end),
> +			 (unsigned long)(__pgdir_start - __init_end) / PAGE_SIZE,
> +			 !unmap);
> +}
> +
> +static int arm64_hibernate_pm_notify(struct notifier_block *nb,
> +				     unsigned long mode, void *unused)
> +{
> +	switch (mode) {
> +	default:
> +		break;
> +	case PM_POST_HIBERNATION:
> +	case PM_POST_RESTORE:
> +		remap_linear_data_alias(true);
> +		break;
> +	case PM_HIBERNATION_PREPARE:
> +	case PM_RESTORE_PREPARE:
> +		remap_linear_data_alias(false);
> +		break;
> +	}
> +	return 0;
> +}
> +
>  void __init mark_linear_text_alias_ro(void)
>  {
>  	/*
> @@ -1035,6 +1061,16 @@ void __init mark_linear_text_alias_ro(void)
>  	update_mapping_prot(__pa_symbol(_text), (unsigned long)lm_alias(_text),
>  			    (unsigned long)__init_begin - (unsigned long)_text,
>  			    PAGE_KERNEL_RO);
> +
> +	remap_linear_data_alias(true);
> +
> +	if (IS_ENABLED(CONFIG_HIBERNATION)) {
> +		static struct notifier_block nb = {
> +			.notifier_call = arm64_hibernate_pm_notify
> +		};
> +
> +		register_pm_notifier(&nb);
> +	}
>  }
>
>  #ifdef CONFIG_KFENCE
> @@ -1163,7 +1199,7 @@ static void __init map_mem(void)
>  	__map_memblock(kernel_start, init_begin, PAGE_KERNEL,
>  		       flags | NO_CONT_MAPPINGS);
>  	__map_memblock(init_end, kernel_end, PAGE_KERNEL,
> -		       flags | NO_CONT_MAPPINGS);
> +		       flags | NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS);
>
>  	/* map all the memory banks */
>  	for_each_mem_range(i, &start, &end) {
> @@ -1176,8 +1212,6 @@ static void __init map_mem(void)
>  			       flags);
>  	}
>
> -	__map_memblock(init_end, kernel_end, PAGE_KERNEL_RO,
> -		       flags | NO_CONT_MAPPINGS);

In the previous patch, perhaps this should be moved to
mark_linear_text_alias_ro() and combined with the update_mapping_prot()
call there?

Thanks,
Ryan

>  	arm64_kfence_map_pool(early_kfence_pool);
>  }
>