From: Aboorva Devarajan <aboorvad@linux.ibm.com>
To: stable@vger.kernel.org
Cc: hbathini@linux.ibm.com, mpe@ellerman.id.au,
ritesh.list@gmail.com, aboorvad@linux.ibm.com
Subject: Re: [PATCH] powerpc/64s/radix/kfence: map __kfence_pool at page granularity
Date: Mon, 29 Sep 2025 07:07:58 +0530 [thread overview]
Message-ID: <149c66a94a28f33330e2016e50e4f3faad4dd59d.camel@linux.ibm.com> (raw)
In-Reply-To: <20250910110245.123817-1-aboorvad@linux.ibm.com>
On Wed, 2025-09-10 at 16:32 +0530, Aboorva Devarajan wrote:
> From: Hari Bathini <hbathini@linux.ibm.com>
>
> When KFENCE is enabled, total system memory is mapped at page level
> granularity. But in radix MMU mode, ~3GB additional memory is needed
> to map 100GB of system memory at page level granularity when compared
> to using 2MB direct mapping. This is not desired considering KFENCE is
> designed to be enabled in production kernels [1].
>
> Mapping only the memory allocated for KFENCE pool at page granularity is
> sufficient to enable KFENCE support. So, allocate __kfence_pool during
> bootup and map it at page granularity instead of mapping all system
> memory at page granularity.
>
> Without patch:
> # cat /proc/meminfo
> MemTotal: 101201920 kB
>
> With patch:
> # cat /proc/meminfo
> MemTotal: 104483904 kB
>
> Note that enabling KFENCE at runtime is disabled for radix MMU for now,
> as it depends on the ability to split page table mappings and such APIs
> are not currently implemented for radix MMU.
>
> All kfence_test.c testcases passed with this patch.
>
> [1] https://lore.kernel.org/all/20201103175841.3495947-2-elver@google.com/
>
> Fixes: a5edf9815dd7 ("powerpc/64s: Enable KFENCE on book3s64")
> Cc: stable@vger.kernel.org
> Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
> Signed-off-by: Aboorva Devarajan <aboorvad@linux.ibm.com>
> Link: https://msgid.link/20240701130021.578240-1-hbathini@linux.ibm.com
>
> ---
>
> Upstream commit 353d7a84c214 ("powerpc/64s/radix/kfence: map __kfence_pool at page granularity")
>
> This has already been merged upstream and is required in stable kernels as well.
>
> ---
> arch/powerpc/include/asm/kfence.h | 11 +++-
> arch/powerpc/mm/book3s64/radix_pgtable.c | 84 ++++++++++++++++++++++--
> arch/powerpc/mm/init-common.c | 3 +
> 3 files changed, 93 insertions(+), 5 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/kfence.h b/arch/powerpc/include/asm/kfence.h
> index 424ceef82ae615..fab124ada1c7f2 100644
> --- a/arch/powerpc/include/asm/kfence.h
> +++ b/arch/powerpc/include/asm/kfence.h
> @@ -15,10 +15,19 @@
> #define ARCH_FUNC_PREFIX "."
> #endif
>
> +#ifdef CONFIG_KFENCE
> +extern bool kfence_disabled;
> +
> +static inline void disable_kfence(void)
> +{
> + kfence_disabled = true;
> +}
> +
> static inline bool arch_kfence_init_pool(void)
> {
> - return true;
> + return !kfence_disabled;
> }
> +#endif
>
> #ifdef CONFIG_PPC64
> static inline bool kfence_protect_page(unsigned long addr, bool protect)
> diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
> index 15e88f1439ec20..b0d927009af83c 100644
> --- a/arch/powerpc/mm/book3s64/radix_pgtable.c
> +++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
> @@ -17,6 +17,7 @@
> #include <linux/hugetlb.h>
> #include <linux/string_helpers.h>
> #include <linux/memory.h>
> +#include <linux/kfence.h>
>
> #include <asm/pgalloc.h>
> #include <asm/mmu_context.h>
> @@ -31,6 +32,7 @@
> #include <asm/uaccess.h>
> #include <asm/ultravisor.h>
> #include <asm/set_memory.h>
> +#include <asm/kfence.h>
>
> #include <trace/events/thp.h>
>
> @@ -293,7 +295,8 @@ static unsigned long next_boundary(unsigned long addr, unsigned long end)
>
> static int __meminit create_physical_mapping(unsigned long start,
> unsigned long end,
> - int nid, pgprot_t _prot)
> + int nid, pgprot_t _prot,
> + unsigned long mapping_sz_limit)
> {
> unsigned long vaddr, addr, mapping_size = 0;
> bool prev_exec, exec = false;
> @@ -301,7 +304,10 @@ static int __meminit create_physical_mapping(unsigned long start,
> int psize;
> unsigned long max_mapping_size = memory_block_size;
>
> - if (debug_pagealloc_enabled_or_kfence())
> + if (mapping_sz_limit < max_mapping_size)
> + max_mapping_size = mapping_sz_limit;
> +
> + if (debug_pagealloc_enabled())
> max_mapping_size = PAGE_SIZE;
>
> start = ALIGN(start, PAGE_SIZE);
> @@ -356,8 +362,74 @@ static int __meminit create_physical_mapping(unsigned long start,
> return 0;
> }
>
> +#ifdef CONFIG_KFENCE
> +static bool __ro_after_init kfence_early_init = !!CONFIG_KFENCE_SAMPLE_INTERVAL;
> +
> +static int __init parse_kfence_early_init(char *arg)
> +{
> + int val;
> +
> + if (get_option(&arg, &val))
> + kfence_early_init = !!val;
> + return 0;
> +}
> +early_param("kfence.sample_interval", parse_kfence_early_init);
> +
> +static inline phys_addr_t alloc_kfence_pool(void)
> +{
> + phys_addr_t kfence_pool;
> +
> + /*
> + * TODO: Support to enable KFENCE after bootup depends on the ability to
> + * split page table mappings. As such support is not currently
> + * implemented for radix pagetables, support enabling KFENCE
> + * only at system startup for now.
> + *
> + * After support for splitting mappings is available on radix,
> + * alloc_kfence_pool() & map_kfence_pool() can be dropped and
> + * mapping for __kfence_pool memory can be
> + * split during arch_kfence_init_pool().
> + */
> + if (!kfence_early_init)
> + goto no_kfence;
> +
> + kfence_pool = memblock_phys_alloc(KFENCE_POOL_SIZE, PAGE_SIZE);
> + if (!kfence_pool)
> + goto no_kfence;
> +
> + memblock_mark_nomap(kfence_pool, KFENCE_POOL_SIZE);
> + return kfence_pool;
> +
> +no_kfence:
> + disable_kfence();
> + return 0;
> +}
> +
> +static inline void map_kfence_pool(phys_addr_t kfence_pool)
> +{
> + if (!kfence_pool)
> + return;
> +
> + if (create_physical_mapping(kfence_pool, kfence_pool + KFENCE_POOL_SIZE,
> + -1, PAGE_KERNEL, PAGE_SIZE))
> + goto err;
> +
> + memblock_clear_nomap(kfence_pool, KFENCE_POOL_SIZE);
> + __kfence_pool = __va(kfence_pool);
> + return;
> +
> +err:
> + memblock_phys_free(kfence_pool, KFENCE_POOL_SIZE);
> + disable_kfence();
> +}
> +#else
> +static inline phys_addr_t alloc_kfence_pool(void) { return 0; }
> +static inline void map_kfence_pool(phys_addr_t kfence_pool) { }
> +#endif
> +
> static void __init radix_init_pgtable(void)
> {
> + phys_addr_t kfence_pool;
> unsigned long rts_field;
> phys_addr_t start, end;
> u64 i;
> @@ -365,6 +437,8 @@ static void __init radix_init_pgtable(void)
> /* We don't support slb for radix */
> slb_set_size(0);
>
> + kfence_pool = alloc_kfence_pool();
> +
> /*
> * Create the linear mapping
> */
> @@ -381,9 +455,11 @@ static void __init radix_init_pgtable(void)
> }
>
> WARN_ON(create_physical_mapping(start, end,
> - -1, PAGE_KERNEL));
> + -1, PAGE_KERNEL, ~0UL));
> }
>
> + map_kfence_pool(kfence_pool);
> +
> if (!cpu_has_feature(CPU_FTR_HVMODE) &&
> cpu_has_feature(CPU_FTR_P9_RADIX_PREFETCH_BUG)) {
> /*
> @@ -875,7 +951,7 @@ int __meminit radix__create_section_mapping(unsigned long start,
> }
>
> return create_physical_mapping(__pa(start), __pa(end),
> - nid, prot);
> + nid, prot, ~0UL);
> }
>
> int __meminit radix__remove_section_mapping(unsigned long start, unsigned long end)
> diff --git a/arch/powerpc/mm/init-common.c b/arch/powerpc/mm/init-common.c
> index d3a7726ecf512c..21131b96d20901 100644
> --- a/arch/powerpc/mm/init-common.c
> +++ b/arch/powerpc/mm/init-common.c
> @@ -31,6 +31,9 @@ EXPORT_SYMBOL_GPL(kernstart_virt_addr);
>
> bool disable_kuep = !IS_ENABLED(CONFIG_PPC_KUEP);
> bool disable_kuap = !IS_ENABLED(CONFIG_PPC_KUAP);
> +#ifdef CONFIG_KFENCE
> +bool __ro_after_init kfence_disabled;
> +#endif
>
> static int __init parse_nosmep(char *p)
> {
Hi,
Just a gentle reminder, this patch is required in the stable kernels.
Please let me know if there are any comments.
Thanks,
Aboorva
Thread overview: 5+ messages
2025-09-10 11:02 [PATCH] powerpc/64s/radix/kfence: map __kfence_pool at page granularity Aboorva Devarajan
2025-09-29 1:37 ` Aboorva Devarajan [this message]
2025-12-01 10:58 ` Aboorva Devarajan
2025-12-01 11:07 ` Greg KH
2025-12-17 5:15 ` Aboorva Devarajan