* [PATCH v3 0/3] KHO: kfence + KHO memory corruption fix
@ 2025-10-21  0:08 Pasha Tatashin
  2025-10-21  0:08 ` [PATCH v3 1/3] liveupdate: kho: warn and fail on metadata or preserved memory in scratch area Pasha Tatashin
                   ` (3 more replies)
  0 siblings, 4 replies; 21+ messages in thread
From: Pasha Tatashin @ 2025-10-21  0:08 UTC (permalink / raw)
  To: akpm, brauner, corbet, graf, jgg, linux-kernel, linux-kselftest,
	linux-mm, masahiroy, ojeda, pasha.tatashin, pratyush, rdunlap,
	rppt, tj, jasonmiu, dmatlack, skhawaja

This series fixes a memory corruption bug in KHO that occurs when KFENCE
is enabled.

The root cause is that KHO metadata, allocated via kzalloc(), can be
randomly serviced by kfence_alloc(). When a kernel boots via KHO, the
early memblock allocator is restricted to a "scratch area". This forces
the KFENCE pool to be allocated within this scratch area, creating a
conflict. If KHO metadata is subsequently placed in this pool, it gets
corrupted during the next kexec operation.
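
As a minimal sketch of the interception path (illustration only, not
code from this series; kfence_alloc() is the real upstream hook on the
slab allocation path, while slab_fastpath_alloc() is a hypothetical
stand-in for the normal slab logic):

	static void *slab_alloc_sketch(struct kmem_cache *s, size_t size,
				       gfp_t flags)
	{
		/* KFENCE occasionally services the request from its pool. */
		void *obj = kfence_alloc(s, size, flags);

		/* That pool may sit inside the KHO scratch area. */
		if (obj)
			return obj;

		return slab_fastpath_alloc(s, size, flags); /* stand-in */
	}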

Patch 1/3 introduces a debug-only feature (CONFIG_KEXEC_HANDOVER_DEBUG)
that adds checks to detect and fail any operation that attempts to place
KHO metadata or preserved memory within the scratch area. This serves as
a validation and diagnostic tool to confirm the problem without
affecting production builds.

Patch 2/3 increases the metadata bitmap size to PAGE_SIZE so the buddy
allocator can be used.

Patch 3/3 Provides the fix by modifying KHO to allocate its metadata
directly from the buddy allocator instead of slab. This bypasses the
KFENCE interception entirely.

Pasha Tatashin (3):
  liveupdate: kho: warn and fail on metadata or preserved memory in
    scratch area
  liveupdate: kho: Increase metadata bitmap size to PAGE_SIZE
  liveupdate: kho: allocate metadata directly from the buddy allocator

 include/linux/gfp.h              |  3 ++
 kernel/Kconfig.kexec             |  9 ++++
 kernel/Makefile                  |  1 +
 kernel/kexec_handover.c          | 72 ++++++++++++++++++++------------
 kernel/kexec_handover_debug.c    | 25 +++++++++++
 kernel/kexec_handover_internal.h | 16 +++++++
 6 files changed, 100 insertions(+), 26 deletions(-)
 create mode 100644 kernel/kexec_handover_debug.c
 create mode 100644 kernel/kexec_handover_internal.h


base-commit: 6548d364a3e850326831799d7e3ea2d7bb97ba08
-- 
2.51.0.869.ge66316f041-goog




* [PATCH v3 1/3] liveupdate: kho: warn and fail on metadata or preserved memory in scratch area
  2025-10-21  0:08 [PATCH v3 0/3] KHO: kfence + KHO memory corruption fix Pasha Tatashin
@ 2025-10-21  0:08 ` Pasha Tatashin
  2025-10-22 10:22   ` Pratyush Yadav
  2025-10-27 22:29   ` David Matlack
  2025-10-21  0:08 ` [PATCH v3 2/3] liveupdate: kho: Increase metadata bitmap size to PAGE_SIZE Pasha Tatashin
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 21+ messages in thread
From: Pasha Tatashin @ 2025-10-21  0:08 UTC (permalink / raw)
  To: akpm, brauner, corbet, graf, jgg, linux-kernel, linux-kselftest,
	linux-mm, masahiroy, ojeda, pasha.tatashin, pratyush, rdunlap,
	rppt, tj, jasonmiu, dmatlack, skhawaja

It is invalid for KHO metadata or preserved memory regions to be located
within the KHO scratch area, as this area is overwritten when the next
kernel is loaded, and used early in boot by the next kernel. This can
lead to memory corruption.

Add checks to kho_preserve_* and KHO's internal metadata allocators
(xa_load_or_alloc, new_chunk) to verify that the physical address of the
memory does not overlap with any defined scratch region. If an overlap
is detected, the operation will fail and a WARN_ON is triggered. To
avoid performance overhead in production kernels, these checks are
enabled only when CONFIG_KEXEC_HANDOVER_DEBUG is selected.
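
The check treats each scratch region as a half-open interval
[addr, addr + size). A standalone userspace sketch of the same
predicate used by kho_scratch_overlap() below (not kernel code):

	#include <assert.h>
	#include <stdbool.h>
	#include <stdint.h>

	/* Same test as kho_scratch_overlap(): [a, a+la) vs [b, b+lb). */
	static bool ranges_overlap(uint64_t a, uint64_t la,
				   uint64_t b, uint64_t lb)
	{
		return a < b + lb && a + la > b;
	}

	int main(void)
	{
		/* Scratch region at [0x1000, 0x3000). */
		assert(ranges_overlap(0x0800, 0x1000, 0x1000, 0x2000));
		assert(!ranges_overlap(0x3000, 0x1000, 0x1000, 0x2000));
		assert(ranges_overlap(0x1800, 0x0100, 0x1000, 0x2000));
		return 0;
	}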

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 kernel/Kconfig.kexec             |  9 ++++++
 kernel/Makefile                  |  1 +
 kernel/kexec_handover.c          | 53 ++++++++++++++++++++++----------
 kernel/kexec_handover_debug.c    | 25 +++++++++++++++
 kernel/kexec_handover_internal.h | 16 ++++++++++
 5 files changed, 87 insertions(+), 17 deletions(-)
 create mode 100644 kernel/kexec_handover_debug.c
 create mode 100644 kernel/kexec_handover_internal.h

diff --git a/kernel/Kconfig.kexec b/kernel/Kconfig.kexec
index 422270d64820..c94d36b5fcd9 100644
--- a/kernel/Kconfig.kexec
+++ b/kernel/Kconfig.kexec
@@ -109,6 +109,15 @@ config KEXEC_HANDOVER
 	  to keep data or state alive across the kexec. For this to work,
 	  both source and target kernels need to have this option enabled.
 
+config KEXEC_HANDOVER_DEBUG
+	bool "Enable Kexec Handover debug checks"
+	depends on KEXEC_HANDOVER_DEBUGFS
+	help
+	  This option enables extra sanity checks for the Kexec Handover
+	  subsystem. Since KHO performance is crucial in live update
+	  scenarios and the extra code might add overhead, it is only
+	  optionally enabled.
+
 config CRASH_DUMP
 	bool "kernel crash dumps"
 	default ARCH_DEFAULT_CRASH_DUMP
diff --git a/kernel/Makefile b/kernel/Makefile
index df3dd8291bb6..9fe722305c9b 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -83,6 +83,7 @@ obj-$(CONFIG_KEXEC) += kexec.o
 obj-$(CONFIG_KEXEC_FILE) += kexec_file.o
 obj-$(CONFIG_KEXEC_ELF) += kexec_elf.o
 obj-$(CONFIG_KEXEC_HANDOVER) += kexec_handover.o
+obj-$(CONFIG_KEXEC_HANDOVER_DEBUG) += kexec_handover_debug.o
 obj-$(CONFIG_BACKTRACE_SELF_TEST) += backtracetest.o
 obj-$(CONFIG_COMPAT) += compat.o
 obj-$(CONFIG_CGROUPS) += cgroup/
diff --git a/kernel/kexec_handover.c b/kernel/kexec_handover.c
index 76f0940fb485..7b460806ef4f 100644
--- a/kernel/kexec_handover.c
+++ b/kernel/kexec_handover.c
@@ -8,6 +8,7 @@
 
 #define pr_fmt(fmt) "KHO: " fmt
 
+#include <linux/cleanup.h>
 #include <linux/cma.h>
 #include <linux/count_zeros.h>
 #include <linux/debugfs.h>
@@ -22,6 +23,7 @@
 
 #include <asm/early_ioremap.h>
 
+#include "kexec_handover_internal.h"
 /*
  * KHO is tightly coupled with mm init and needs access to some of mm
  * internal APIs.
@@ -133,26 +135,26 @@ static struct kho_out kho_out = {
 
 static void *xa_load_or_alloc(struct xarray *xa, unsigned long index, size_t sz)
 {
-	void *elm, *res;
+	void *res = xa_load(xa, index);
 
-	elm = xa_load(xa, index);
-	if (elm)
-		return elm;
+	if (res)
+		return res;
+
+	void *elm __free(kfree) = kzalloc(sz, GFP_KERNEL);
 
-	elm = kzalloc(sz, GFP_KERNEL);
 	if (!elm)
 		return ERR_PTR(-ENOMEM);
 
+	if (WARN_ON(kho_scratch_overlap(virt_to_phys(elm), sz)))
+		return ERR_PTR(-EINVAL);
+
 	res = xa_cmpxchg(xa, index, NULL, elm, GFP_KERNEL);
 	if (xa_is_err(res))
-		res = ERR_PTR(xa_err(res));
-
-	if (res) {
-		kfree(elm);
+		return ERR_PTR(xa_err(res));
+	else if (res)
 		return res;
-	}
 
-	return elm;
+	return no_free_ptr(elm);
 }
 
 static void __kho_unpreserve(struct kho_mem_track *track, unsigned long pfn,
@@ -345,15 +347,19 @@ static_assert(sizeof(struct khoser_mem_chunk) == PAGE_SIZE);
 static struct khoser_mem_chunk *new_chunk(struct khoser_mem_chunk *cur_chunk,
 					  unsigned long order)
 {
-	struct khoser_mem_chunk *chunk;
+	struct khoser_mem_chunk *chunk __free(kfree) = NULL;
 
 	chunk = kzalloc(PAGE_SIZE, GFP_KERNEL);
 	if (!chunk)
-		return NULL;
+		return ERR_PTR(-ENOMEM);
+
+	if (WARN_ON(kho_scratch_overlap(virt_to_phys(chunk), PAGE_SIZE)))
+		return ERR_PTR(-EINVAL);
+
 	chunk->hdr.order = order;
 	if (cur_chunk)
 		KHOSER_STORE_PTR(cur_chunk->hdr.next, chunk);
-	return chunk;
+	return no_free_ptr(chunk);
 }
 
 static void kho_mem_ser_free(struct khoser_mem_chunk *first_chunk)
@@ -374,14 +380,17 @@ static int kho_mem_serialize(struct kho_serialization *ser)
 	struct khoser_mem_chunk *chunk = NULL;
 	struct kho_mem_phys *physxa;
 	unsigned long order;
+	int err = -ENOMEM;
 
 	xa_for_each(&ser->track.orders, order, physxa) {
 		struct kho_mem_phys_bits *bits;
 		unsigned long phys;
 
 		chunk = new_chunk(chunk, order);
-		if (!chunk)
+		if (IS_ERR(chunk)) {
+			err = PTR_ERR(chunk);
 			goto err_free;
+		}
 
 		if (!first_chunk)
 			first_chunk = chunk;
@@ -391,8 +400,10 @@ static int kho_mem_serialize(struct kho_serialization *ser)
 
 			if (chunk->hdr.num_elms == ARRAY_SIZE(chunk->bitmaps)) {
 				chunk = new_chunk(chunk, order);
-				if (!chunk)
+				if (IS_ERR(chunk)) {
+					err = PTR_ERR(chunk);
 					goto err_free;
+				}
 			}
 
 			elm = &chunk->bitmaps[chunk->hdr.num_elms];
@@ -409,7 +420,7 @@ static int kho_mem_serialize(struct kho_serialization *ser)
 
 err_free:
 	kho_mem_ser_free(first_chunk);
-	return -ENOMEM;
+	return err;
 }
 
 static void __init deserialize_bitmap(unsigned int order,
@@ -752,6 +763,9 @@ int kho_preserve_folio(struct folio *folio)
 	const unsigned int order = folio_order(folio);
 	struct kho_mem_track *track = &kho_out.ser.track;
 
+	if (WARN_ON(kho_scratch_overlap(pfn << PAGE_SHIFT, PAGE_SIZE << order)))
+		return -EINVAL;
+
 	return __kho_preserve_order(track, pfn, order);
 }
 EXPORT_SYMBOL_GPL(kho_preserve_folio);
@@ -775,6 +789,11 @@ int kho_preserve_pages(struct page *page, unsigned int nr_pages)
 	unsigned long failed_pfn = 0;
 	int err = 0;
 
+	if (WARN_ON(kho_scratch_overlap(start_pfn << PAGE_SHIFT,
+					nr_pages << PAGE_SHIFT))) {
+		return -EINVAL;
+	}
+
 	while (pfn < end_pfn) {
 		const unsigned int order =
 			min(count_trailing_zeros(pfn), ilog2(end_pfn - pfn));
diff --git a/kernel/kexec_handover_debug.c b/kernel/kexec_handover_debug.c
new file mode 100644
index 000000000000..6efb696f5426
--- /dev/null
+++ b/kernel/kexec_handover_debug.c
@@ -0,0 +1,25 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * kexec_handover_debug.c - kexec handover optional debug functionality
+ * Copyright (C) 2025 Google LLC, Pasha Tatashin <pasha.tatashin@soleen.com>
+ */
+
+#define pr_fmt(fmt) "KHO: " fmt
+
+#include "kexec_handover_internal.h"
+
+bool kho_scratch_overlap(phys_addr_t phys, size_t size)
+{
+	phys_addr_t scratch_start, scratch_end;
+	unsigned int i;
+
+	for (i = 0; i < kho_scratch_cnt; i++) {
+		scratch_start = kho_scratch[i].addr;
+		scratch_end = kho_scratch[i].addr + kho_scratch[i].size;
+
+		if (phys < scratch_end && (phys + size) > scratch_start)
+			return true;
+	}
+
+	return false;
+}
diff --git a/kernel/kexec_handover_internal.h b/kernel/kexec_handover_internal.h
new file mode 100644
index 000000000000..05e9720ba7b9
--- /dev/null
+++ b/kernel/kexec_handover_internal.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef LINUX_KEXEC_HANDOVER_INTERNAL_H
+#define LINUX_KEXEC_HANDOVER_INTERNAL_H
+
+#include <linux/types.h>
+
+#ifdef CONFIG_KEXEC_HANDOVER_DEBUG
+bool kho_scratch_overlap(phys_addr_t phys, size_t size);
+#else
+static inline bool kho_scratch_overlap(phys_addr_t phys, size_t size)
+{
+	return false;
+}
+#endif /* CONFIG_KEXEC_HANDOVER_DEBUG */
+
+#endif /* LINUX_KEXEC_HANDOVER_INTERNAL_H */
-- 
2.51.0.869.ge66316f041-goog




* [PATCH v3 2/3] liveupdate: kho: Increase metadata bitmap size to PAGE_SIZE
  2025-10-21  0:08 [PATCH v3 0/3] KHO: kfence + KHO memory corruption fix Pasha Tatashin
  2025-10-21  0:08 ` [PATCH v3 1/3] liveupdate: kho: warn and fail on metadata or preserved memory in scratch area Pasha Tatashin
@ 2025-10-21  0:08 ` Pasha Tatashin
  2025-10-22 10:25   ` Pratyush Yadav
                     ` (2 more replies)
  2025-10-21  0:08 ` [PATCH v3 3/3] liveupdate: kho: allocate metadata directly from the buddy allocator Pasha Tatashin
  2025-10-21  6:00 ` [PATCH v3 0/3] KHO: kfence + KHO memory corruption fix Mike Rapoport
  3 siblings, 3 replies; 21+ messages in thread
From: Pasha Tatashin @ 2025-10-21  0:08 UTC (permalink / raw)
  To: akpm, brauner, corbet, graf, jgg, linux-kernel, linux-kselftest,
	linux-mm, masahiroy, ojeda, pasha.tatashin, pratyush, rdunlap,
	rppt, tj, jasonmiu, dmatlack, skhawaja

KHO memory preservation metadata is preserved in 512 byte chunks, which
requires allocating it from the slab allocator. Slabs are not safe to
use with KHO because of KFENCE, and because partial slabs may leak into
the next kernel. Change the size to PAGE_SIZE.

KFENCE specifically may cause memory corruption: it randomly provides
slab objects that can lie within the scratch area, because KFENCE
allocates its object pool before the KHO scratch area is marked as a
CMA region.

While this change could potentially increase metadata overhead on
systems with sparsely preserved memory, this is being mitigated by
ongoing work to reduce sparseness during preservation via 1G guest
pages. Furthermore, this change aligns with future work on a stateless
KHO, which will also use page-sized bitmaps for its radix tree metadata.
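
As a worked example of the coverage arithmetic behind the updated
comment in the diff below (assuming 4K pages and a 64-bit build):

	#include <stdio.h>

	int main(void)
	{
		unsigned long page = 4096, bits = page * 8; /* 32768 bits */

		/* Order 0: one bit per 4K page -> 128 MiB per bitmap. */
		printf("%lu MiB\n", bits * page >> 20);

		/* 16 GiB of order-0 memory -> 128 bitmaps = 512 KiB. */
		printf("%lu KiB\n",
		       ((16UL << 30) / (bits * page)) * page >> 10);

		/* 1G-order: one bit per 1 GiB page -> 32 TiB per bitmap. */
		printf("%lu TiB\n", bits >> 10);
		return 0;
	}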

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 kernel/kexec_handover.c | 21 +++++++++++----------
 1 file changed, 11 insertions(+), 10 deletions(-)

diff --git a/kernel/kexec_handover.c b/kernel/kexec_handover.c
index 7b460806ef4f..e5b91761fbfe 100644
--- a/kernel/kexec_handover.c
+++ b/kernel/kexec_handover.c
@@ -69,10 +69,10 @@ early_param("kho", kho_parse_enable);
  * Keep track of memory that is to be preserved across KHO.
  *
  * The serializing side uses two levels of xarrays to manage chunks of per-order
- * 512 byte bitmaps. For instance if PAGE_SIZE = 4096, the entire 1G order of a
- * 1TB system would fit inside a single 512 byte bitmap. For order 0 allocations
- * each bitmap will cover 16M of address space. Thus, for 16G of memory at most
- * 512K of bitmap memory will be needed for order 0.
+ * PAGE_SIZE byte bitmaps. For instance if PAGE_SIZE = 4096, the entire 1G order
+ * of an 8TB system would fit inside a single 4096 byte bitmap. For order 0
+ * allocations each bitmap will cover 128M of address space. Thus, for 16G of
+ * memory at most 512K of bitmap memory will be needed for order 0.
  *
  * This approach is fully incremental, as the serialization progresses folios
  * can continue be aggregated to the tracker. The final step, immediately prior
@@ -80,12 +80,14 @@ early_param("kho", kho_parse_enable);
  * successor kernel to parse.
  */
 
-#define PRESERVE_BITS (512 * 8)
+#define PRESERVE_BITS (PAGE_SIZE * 8)
 
 struct kho_mem_phys_bits {
 	DECLARE_BITMAP(preserve, PRESERVE_BITS);
 };
 
+static_assert(sizeof(struct kho_mem_phys_bits) == PAGE_SIZE);
+
 struct kho_mem_phys {
 	/*
 	 * Points to kho_mem_phys_bits, a sparse bitmap array. Each bit is sized
@@ -133,19 +135,19 @@ static struct kho_out kho_out = {
 	.finalized = false,
 };
 
-static void *xa_load_or_alloc(struct xarray *xa, unsigned long index, size_t sz)
+static void *xa_load_or_alloc(struct xarray *xa, unsigned long index)
 {
 	void *res = xa_load(xa, index);
 
 	if (res)
 		return res;
 
-	void *elm __free(kfree) = kzalloc(sz, GFP_KERNEL);
+	void *elm __free(kfree) = kzalloc(PAGE_SIZE, GFP_KERNEL);
 
 	if (!elm)
 		return ERR_PTR(-ENOMEM);
 
-	if (WARN_ON(kho_scratch_overlap(virt_to_phys(elm), sz)))
+	if (WARN_ON(kho_scratch_overlap(virt_to_phys(elm), PAGE_SIZE)))
 		return ERR_PTR(-EINVAL);
 
 	res = xa_cmpxchg(xa, index, NULL, elm, GFP_KERNEL);
@@ -218,8 +220,7 @@ static int __kho_preserve_order(struct kho_mem_track *track, unsigned long pfn,
 		}
 	}
 
-	bits = xa_load_or_alloc(&physxa->phys_bits, pfn_high / PRESERVE_BITS,
-				sizeof(*bits));
+	bits = xa_load_or_alloc(&physxa->phys_bits, pfn_high / PRESERVE_BITS);
 	if (IS_ERR(bits))
 		return PTR_ERR(bits);
 
-- 
2.51.0.869.ge66316f041-goog




* [PATCH v3 3/3] liveupdate: kho: allocate metadata directly from the buddy allocator
  2025-10-21  0:08 [PATCH v3 0/3] KHO: kfence + KHO memory corruption fix Pasha Tatashin
  2025-10-21  0:08 ` [PATCH v3 1/3] liveupdate: kho: warn and fail on metadata or preserved memory in scratch area Pasha Tatashin
  2025-10-21  0:08 ` [PATCH v3 2/3] liveupdate: kho: Increase metadata bitmap size to PAGE_SIZE Pasha Tatashin
@ 2025-10-21  0:08 ` Pasha Tatashin
  2025-10-27 23:04   ` David Matlack
  2025-10-21  6:00 ` [PATCH v3 0/3] KHO: kfence + KHO memory corruption fix Mike Rapoport
  3 siblings, 1 reply; 21+ messages in thread
From: Pasha Tatashin @ 2025-10-21  0:08 UTC (permalink / raw)
  To: akpm, brauner, corbet, graf, jgg, linux-kernel, linux-kselftest,
	linux-mm, masahiroy, ojeda, pasha.tatashin, pratyush, rdunlap,
	rppt, tj, jasonmiu, dmatlack, skhawaja

KHO allocates metadata for its preserved memory map using the slab
allocator via kzalloc(). This metadata is temporary and is used by the
next kernel during early boot to find preserved memory.

A problem arises when KFENCE is enabled. kzalloc() calls can be
randomly intercepted by kfence_alloc(), which services the allocation
from a dedicated KFENCE memory pool. This pool is allocated early in
boot via memblock.

When booting via KHO, the memblock allocator is restricted to a "scratch
area", forcing the KFENCE pool to be allocated within it. This creates a
conflict, as the scratch area is expected to be ephemeral and
overwritable by a subsequent kexec. If KHO metadata is placed in this
KFENCE pool, it leads to memory corruption when the next kernel is
loaded.

To fix this, modify KHO to allocate its metadata directly from the buddy
allocator instead of slab.
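
In sketch form, the swap looks like this (both calls are standard
kernel APIs; KFENCE hooks only the slab paths, so the buddy path cannot
be intercepted):

	/* Before: may be redirected into the KFENCE pool. */
	void *p = kzalloc(PAGE_SIZE, GFP_KERNEL);
	kfree(p);

	/* After: the page comes straight from the buddy allocator. */
	void *q = (void *)get_zeroed_page(GFP_KERNEL);
	free_page((unsigned long)q);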

Fixes: fc33e4b44b27 ("kexec: enable KHO support for memory preservation")
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: Pratyush Yadav <pratyush@kernel.org>
---
 include/linux/gfp.h     | 3 +++
 kernel/kexec_handover.c | 6 +++---
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 0ceb4e09306c..623bee335383 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -7,6 +7,7 @@
 #include <linux/mmzone.h>
 #include <linux/topology.h>
 #include <linux/alloc_tag.h>
+#include <linux/cleanup.h>
 #include <linux/sched.h>
 
 struct vm_area_struct;
@@ -463,4 +464,6 @@ static inline struct folio *folio_alloc_gigantic_noprof(int order, gfp_t gfp,
 /* This should be paired with folio_put() rather than free_contig_range(). */
 #define folio_alloc_gigantic(...) alloc_hooks(folio_alloc_gigantic_noprof(__VA_ARGS__))
 
+DEFINE_FREE(free_page, void *, free_page((unsigned long)_T))
+
 #endif /* __LINUX_GFP_H */
diff --git a/kernel/kexec_handover.c b/kernel/kexec_handover.c
index e5b91761fbfe..de4466b47455 100644
--- a/kernel/kexec_handover.c
+++ b/kernel/kexec_handover.c
@@ -142,7 +142,7 @@ static void *xa_load_or_alloc(struct xarray *xa, unsigned long index)
 	if (res)
 		return res;
 
-	void *elm __free(kfree) = kzalloc(PAGE_SIZE, GFP_KERNEL);
+	void *elm __free(free_page) = (void *)get_zeroed_page(GFP_KERNEL);
 
 	if (!elm)
 		return ERR_PTR(-ENOMEM);
@@ -348,9 +348,9 @@ static_assert(sizeof(struct khoser_mem_chunk) == PAGE_SIZE);
 static struct khoser_mem_chunk *new_chunk(struct khoser_mem_chunk *cur_chunk,
 					  unsigned long order)
 {
-	struct khoser_mem_chunk *chunk __free(kfree) = NULL;
+	struct khoser_mem_chunk *chunk __free(free_page) = NULL;
 
-	chunk = kzalloc(PAGE_SIZE, GFP_KERNEL);
+	chunk = (void *)get_zeroed_page(GFP_KERNEL);
 	if (!chunk)
 		return ERR_PTR(-ENOMEM);
 
-- 
2.51.0.869.ge66316f041-goog




* Re: [PATCH v3 0/3] KHO: kfence + KHO memory corruption fix
  2025-10-21  0:08 [PATCH v3 0/3] KHO: kfence + KHO memory corruption fix Pasha Tatashin
                   ` (2 preceding siblings ...)
  2025-10-21  0:08 ` [PATCH v3 3/3] liveupdate: kho: allocate metadata directly from the buddy allocator Pasha Tatashin
@ 2025-10-21  6:00 ` Mike Rapoport
  2025-10-21 16:04   ` Pasha Tatashin
  3 siblings, 1 reply; 21+ messages in thread
From: Mike Rapoport @ 2025-10-21  6:00 UTC (permalink / raw)
  To: Pasha Tatashin
  Cc: akpm, brauner, corbet, graf, jgg, linux-kernel, linux-kselftest,
	linux-mm, masahiroy, ojeda, pratyush, rdunlap, tj, jasonmiu,
	dmatlack, skhawaja

On Mon, Oct 20, 2025 at 08:08:49PM -0400, Pasha Tatashin wrote:
> This series fixes a memory corruption bug in KHO that occurs when KFENCE
> is enabled.
> 
> The root cause is that KHO metadata, allocated via kzalloc(), can be
> randomly serviced by kfence_alloc(). When a kernel boots via KHO, the
> early memblock allocator is restricted to a "scratch area". This forces
> the KFENCE pool to be allocated within this scratch area, creating a
> conflict. If KHO metadata is subsequently placed in this pool, it gets
> corrupted during the next kexec operation.
> 
> Patch 1/3 introduces a debug-only feature (CONFIG_KEXEC_HANDOVER_DEBUG)
> that adds checks to detect and fail any operation that attempts to place
> KHO metadata or preserved memory within the scratch area. This serves as
> a validation and diagnostic tool to confirm the problem without
> affecting production builds.
> 
> Patch 2/3 increases the metadata bitmap size to PAGE_SIZE so the buddy
> allocator can be used.
> 
> Patch 3/3 Provides the fix by modifying KHO to allocate its metadata
> directly from the buddy allocator instead of slab. This bypasses the
> KFENCE interception entirely.
> 
> Pasha Tatashin (3):
>   liveupdate: kho: warn and fail on metadata or preserved memory in
>     scratch area
>   liveupdate: kho: Increase metadata bitmap size to PAGE_SIZE
>   liveupdate: kho: allocate metadata directly from the buddy allocator

With liveupdate: dropped from the subjects

Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
 
>  include/linux/gfp.h              |  3 ++
>  kernel/Kconfig.kexec             |  9 ++++
>  kernel/Makefile                  |  1 +
>  kernel/kexec_handover.c          | 72 ++++++++++++++++++++------------
>  kernel/kexec_handover_debug.c    | 25 +++++++++++
>  kernel/kexec_handover_internal.h | 16 +++++++
>  6 files changed, 100 insertions(+), 26 deletions(-)
>  create mode 100644 kernel/kexec_handover_debug.c
>  create mode 100644 kernel/kexec_handover_internal.h
> 
> 
> base-commit: 6548d364a3e850326831799d7e3ea2d7bb97ba08
> -- 
> 2.51.0.869.ge66316f041-goog
> 

-- 
Sincerely yours,
Mike.



* Re: [PATCH v3 0/3] KHO: kfence + KHO memory corruption fix
  2025-10-21  6:00 ` [PATCH v3 0/3] KHO: kfence + KHO memory corruption fix Mike Rapoport
@ 2025-10-21 16:04   ` Pasha Tatashin
  2025-10-21 20:53     ` Andrew Morton
  0 siblings, 1 reply; 21+ messages in thread
From: Pasha Tatashin @ 2025-10-21 16:04 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: akpm, brauner, corbet, graf, jgg, linux-kernel, linux-kselftest,
	linux-mm, masahiroy, ojeda, pratyush, rdunlap, tj, jasonmiu,
	dmatlack, skhawaja

On Tue, Oct 21, 2025 at 2:01 AM Mike Rapoport <rppt@kernel.org> wrote:
>
> On Mon, Oct 20, 2025 at 08:08:49PM -0400, Pasha Tatashin wrote:
> > This series fixes a memory corruption bug in KHO that occurs when KFENCE
> > is enabled.
> >
> > The root cause is that KHO metadata, allocated via kzalloc(), can be
> > randomly serviced by kfence_alloc(). When a kernel boots via KHO, the
> > early memblock allocator is restricted to a "scratch area". This forces
> > the KFENCE pool to be allocated within this scratch area, creating a
> > conflict. If KHO metadata is subsequently placed in this pool, it gets
> > corrupted during the next kexec operation.
> >
> > Patch 1/3 introduces a debug-only feature (CONFIG_KEXEC_HANDOVER_DEBUG)
> > that adds checks to detect and fail any operation that attempts to place
> > KHO metadata or preserved memory within the scratch area. This serves as
> > a validation and diagnostic tool to confirm the problem without
> > affecting production builds.
> >
> > Patch 2/3 increases the metadata bitmap size to PAGE_SIZE so the buddy
> > allocator can be used.
> >
> > Patch 3/3 Provides the fix by modifying KHO to allocate its metadata
> > directly from the buddy allocator instead of slab. This bypasses the
> > KFENCE interception entirely.
> >
> > Pasha Tatashin (3):
> >   liveupdate: kho: warn and fail on metadata or preserved memory in
> >     scratch area
> >   liveupdate: kho: Increase metadata bitmap size to PAGE_SIZE
> >   liveupdate: kho: allocate metadata directly from the buddy allocator
>
> With liveupdate: dropped from the subjects

I noticed the "liveupdate: " subject prefix was left over only after
sending these patches. Andrew, would you like me to resend them, or
could you remove the prefix from these patches?

> Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
>
> >  include/linux/gfp.h              |  3 ++
> >  kernel/Kconfig.kexec             |  9 ++++
> >  kernel/Makefile                  |  1 +
> >  kernel/kexec_handover.c          | 72 ++++++++++++++++++++------------
> >  kernel/kexec_handover_debug.c    | 25 +++++++++++
> >  kernel/kexec_handover_internal.h | 16 +++++++
> >  6 files changed, 100 insertions(+), 26 deletions(-)
> >  create mode 100644 kernel/kexec_handover_debug.c
> >  create mode 100644 kernel/kexec_handover_internal.h
> >
> >
> > base-commit: 6548d364a3e850326831799d7e3ea2d7bb97ba08
> > --
> > 2.51.0.869.ge66316f041-goog
> >
>
> --
> Sincerely yours,
> Mike.



* Re: [PATCH v3 0/3] KHO: kfence + KHO memory corruption fix
  2025-10-21 16:04   ` Pasha Tatashin
@ 2025-10-21 20:53     ` Andrew Morton
  2025-10-22  0:15       ` Pasha Tatashin
  0 siblings, 1 reply; 21+ messages in thread
From: Andrew Morton @ 2025-10-21 20:53 UTC (permalink / raw)
  To: Pasha Tatashin
  Cc: Mike Rapoport, brauner, corbet, graf, jgg, linux-kernel,
	linux-kselftest, linux-mm, masahiroy, ojeda, pratyush, rdunlap,
	tj, jasonmiu, dmatlack, skhawaja

On Tue, 21 Oct 2025 12:04:47 -0400 Pasha Tatashin <pasha.tatashin@soleen.com> wrote:

> > With liveupdate: dropped from the subjects
> 
> I noticed the "liveupdate: " subject prefix was left over only after
> sending these patches. Andrew, would you like me to resend them, or
> could you remove the prefix from these patches?

No problem.

What should we do about -stable kernels?

It doesn't seem worthwhile to backport a 3-patch series for a pretty
obscure bug.  Perhaps we could merge a patch which disables this
combination in Kconfig, as a 6.18-rcX hotfix with a cc:stable.

Then for 6.19-rc1 we add this series and a fourth patch which undoes
that Kconfig change?



* Re: [PATCH v3 0/3] KHO: kfence + KHO memory corruption fix
  2025-10-21 20:53     ` Andrew Morton
@ 2025-10-22  0:15       ` Pasha Tatashin
  2025-10-22  5:48         ` Mike Rapoport
  2025-10-23  2:45         ` Andrew Morton
  0 siblings, 2 replies; 21+ messages in thread
From: Pasha Tatashin @ 2025-10-22  0:15 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mike Rapoport, brauner, corbet, graf, jgg, linux-kernel,
	linux-kselftest, linux-mm, masahiroy, ojeda, pratyush, rdunlap,
	tj, jasonmiu, dmatlack, skhawaja

On Tue, Oct 21, 2025 at 4:53 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Tue, 21 Oct 2025 12:04:47 -0400 Pasha Tatashin <pasha.tatashin@soleen.com> wrote:
>
> > > With liveupdate: dropped from the subjects
> >
> > I noticed "liveupdate: " subject prefix left over only after sending
> > these patches. Andrew, would you like me to resend them, or could you
> > remove the prefix from these patches?
>
> No problem.
>
> What should we do about -stable kernels?
>
> It doesn't seem worthwhile to backport a 3-patch series for a pretty
> obscure bug.  Perhaps we could merge a patch which disables this

We are using KHO and have had obscure crashes due to this memory
corruption, with stacks all over the place. I would prefer this fix to
be properly backported to stable so we can also automatically consume
it once we switch to the upstream KHO. I do not think disabling kfence
in the Google fleet to resolve this problem would work for us, so if
it is not going to be part of stable, we would have to backport it
manually anyway.

Thanks,
Pasha

> combination in Kconfig, as a 6.18-rcX hotfix with a cc:stable.
>
> Then for 6.19-rc1 we add this series and a fourth patch which undoes
> that Kconfig change?



* Re: [PATCH v3 0/3] KHO: kfence + KHO memory corruption fix
  2025-10-22  0:15       ` Pasha Tatashin
@ 2025-10-22  5:48         ` Mike Rapoport
  2025-10-22 18:24           ` Andrew Morton
  2025-10-23  2:45         ` Andrew Morton
  1 sibling, 1 reply; 21+ messages in thread
From: Mike Rapoport @ 2025-10-22  5:48 UTC (permalink / raw)
  To: Pasha Tatashin
  Cc: Andrew Morton, brauner, corbet, graf, jgg, linux-kernel,
	linux-kselftest, linux-mm, masahiroy, ojeda, pratyush, rdunlap,
	tj, jasonmiu, dmatlack, skhawaja

On Tue, Oct 21, 2025 at 08:15:04PM -0400, Pasha Tatashin wrote:
> On Tue, Oct 21, 2025 at 4:53 PM Andrew Morton <akpm@linux-foundation.org> wrote:
> >
> > On Tue, 21 Oct 2025 12:04:47 -0400 Pasha Tatashin <pasha.tatashin@soleen.com> wrote:
> >
> > > > With liveupdate: dropped from the subjects
> > >
> > > I noticed the "liveupdate: " subject prefix was left over only after
> > > sending these patches. Andrew, would you like me to resend them, or
> > > could you remove the prefix from these patches?
> >
> > No problem.
> >
> > What should we do about -stable kernels?
> >
> > It doesn't seem worthwhile to backport a 3-patch series for a pretty
> > obscure bug.  Perhaps we could merge a patch which disables this
> 
> We are using KHO and have had obscure crashes due to this memory
> corruption, with stacks all over the place. I would prefer this fix to
> be properly backported to stable so we can also automatically consume
> it once we switch to the upstream KHO. I do not think disabling kfence
> in the Google fleet to resolve this problem would work for us, so if
> it is not going to be part of stable, we would have to backport it
> manually anyway.

The backport to stable is only relevant to 6.17, which is going to be
EOL soon anyway. Do you really think it's worth the effort?
 
> Thanks,
> Pasha
> 
> > combination in Kconfig, as a 6.18-rcX hotfix with a cc:stable.
> >
> > Then for 6.19-rc1 we add this series and a fourth patch which undoes
> > that Kconfig change?

-- 
Sincerely yours,
Mike.



* Re: [PATCH v3 1/3] liveupdate: kho: warn and fail on metadata or preserved memory in scratch area
  2025-10-21  0:08 ` [PATCH v3 1/3] liveupdate: kho: warn and fail on metadata or preserved memory in scratch area Pasha Tatashin
@ 2025-10-22 10:22   ` Pratyush Yadav
  2025-10-27 22:29   ` David Matlack
  1 sibling, 0 replies; 21+ messages in thread
From: Pratyush Yadav @ 2025-10-22 10:22 UTC (permalink / raw)
  To: Pasha Tatashin
  Cc: akpm, brauner, corbet, graf, jgg, linux-kernel, linux-kselftest,
	linux-mm, masahiroy, ojeda, pratyush, rdunlap, rppt, tj, jasonmiu,
	dmatlack, skhawaja

On Mon, Oct 20 2025, Pasha Tatashin wrote:

> It is invalid for KHO metadata or preserved memory regions to be located
> within the KHO scratch area, as this area is overwritten when the next
> kernel is loaded, and used early in boot by the next kernel. This can
> lead to memory corruption.
>
> Add checks to kho_preserve_* and KHO's internal metadata allocators
> (xa_load_or_alloc, new_chunk) to verify that the physical address of the
> memory does not overlap with any defined scratch region. If an overlap
> is detected, the operation will fail and a WARN_ON is triggered. To
> avoid performance overhead in production kernels, these checks are
> enabled only when CONFIG_KEXEC_HANDOVER_DEBUG is selected.
>
> Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
[...]
> @@ -133,26 +135,26 @@ static struct kho_out kho_out = {
>  
>  static void *xa_load_or_alloc(struct xarray *xa, unsigned long index, size_t sz)
>  {
> -	void *elm, *res;
> +	void *res = xa_load(xa, index);
>  
> -	elm = xa_load(xa, index);
> -	if (elm)
> -		return elm;
> +	if (res)
> +		return res;
> +
> +	void *elm __free(kfree) = kzalloc(sz, GFP_KERNEL);
>  
> -	elm = kzalloc(sz, GFP_KERNEL);
>  	if (!elm)
>  		return ERR_PTR(-ENOMEM);
>  
> +	if (WARN_ON(kho_scratch_overlap(virt_to_phys(elm), sz)))
> +		return ERR_PTR(-EINVAL);
> +
>  	res = xa_cmpxchg(xa, index, NULL, elm, GFP_KERNEL);
>  	if (xa_is_err(res))
> -		res = ERR_PTR(xa_err(res));
> -
> -	if (res) {
> -		kfree(elm);
> +		return ERR_PTR(xa_err(res));
> +	else if (res)
>  		return res;
> -	}
>  
> -	return elm;
> +	return no_free_ptr(elm);

Super small nit: there exists return_ptr(p) which is a tiny bit neater
IMO but certainly not worth doing a new revision over. So,

Reviewed-by: Pratyush Yadav <pratyush@kernel.org>

[...]
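
For reference, a hedged sketch of the cleanup.h pattern this refers to
(__free(), no_free_ptr() and return_ptr() are the real upstream macros;
register_somewhere() is a hypothetical failure point):

	static void *alloc_and_register(size_t sz)
	{
		void *buf __free(kfree) = kzalloc(sz, GFP_KERNEL);

		if (!buf)
			return ERR_PTR(-ENOMEM); /* kfree(NULL) is a no-op */

		if (register_somewhere(buf))     /* hypothetical */
			return ERR_PTR(-EINVAL); /* buf auto-freed on exit */

		return_ptr(buf); /* i.e. return no_free_ptr(buf); */
	}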

-- 
Regards,
Pratyush Yadav



* Re: [PATCH v3 2/3] liveupdate: kho: Increase metadata bitmap size to PAGE_SIZE
  2025-10-21  0:08 ` [PATCH v3 2/3] liveupdate: kho: Increase metadata bitmap size to PAGE_SIZE Pasha Tatashin
@ 2025-10-22 10:25   ` Pratyush Yadav
  2025-10-27 22:44   ` David Matlack
  2025-10-27 22:56   ` David Matlack
  2 siblings, 0 replies; 21+ messages in thread
From: Pratyush Yadav @ 2025-10-22 10:25 UTC (permalink / raw)
  To: Pasha Tatashin
  Cc: akpm, brauner, corbet, graf, jgg, linux-kernel, linux-kselftest,
	linux-mm, masahiroy, ojeda, pratyush, rdunlap, rppt, tj, jasonmiu,
	dmatlack, skhawaja

On Mon, Oct 20 2025, Pasha Tatashin wrote:

> KHO memory preservation metadata is preserved in 512 byte chunks, which
> requires allocating it from the slab allocator. Slabs are not safe to
> use with KHO because of KFENCE, and because partial slabs may leak into
> the next kernel. Change the size to PAGE_SIZE.
>
> KFENCE specifically may cause memory corruption: it randomly provides
> slab objects that can lie within the scratch area, because KFENCE
> allocates its object pool before the KHO scratch area is marked as a
> CMA region.
>
> While this change could potentially increase metadata overhead on
> systems with sparsely preserved memory, this is being mitigated by
> ongoing work to reduce sparseness during preservation via 1G guest
> pages. Furthermore, this change aligns with future work on a stateless
> KHO, which will also use page-sized bitmaps for its radix tree metadata.
>
> Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>

Reviewed-by: Pratyush Yadav <pratyush@kernel.org>

[...]

-- 
Regards,
Pratyush Yadav



* Re: [PATCH v3 0/3] KHO: kfence + KHO memory corruption fix
  2025-10-22  5:48         ` Mike Rapoport
@ 2025-10-22 18:24           ` Andrew Morton
  0 siblings, 0 replies; 21+ messages in thread
From: Andrew Morton @ 2025-10-22 18:24 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Pasha Tatashin, brauner, corbet, graf, jgg, linux-kernel,
	linux-kselftest, linux-mm, masahiroy, ojeda, pratyush, rdunlap,
	tj, jasonmiu, dmatlack, skhawaja

On Wed, 22 Oct 2025 08:48:34 +0300 Mike Rapoport <rppt@kernel.org> wrote:

> > We are using KHO and have had obscure crashes due to this memory
> > corruption, with stacks all over the place. I would prefer this fix to
> > be properly backported to stable so we can also automatically consume
> > it once we switch to the upstream KHO. I do not think disabling kfence
> > in the Google fleet to resolve this problem would work for us, so if
> > it is not going to be part of stable, we would have to backport it
> > manually anyway.
> 
> The backport to stable is only relevant to 6.17, which is going to be
> EOL soon anyway. Do you really think it's worth the effort?

If some organization is basing their next kernel on 6.17 then they'd
like it.

Do we assume that all organizations follow the LTS schedule?  I haven't
been doing that.



* Re: [PATCH v3 0/3] KHO: kfence + KHO memory corruption fix
  2025-10-22  0:15       ` Pasha Tatashin
  2025-10-22  5:48         ` Mike Rapoport
@ 2025-10-23  2:45         ` Andrew Morton
  1 sibling, 0 replies; 21+ messages in thread
From: Andrew Morton @ 2025-10-23  2:45 UTC (permalink / raw)
  To: Pasha Tatashin
  Cc: Mike Rapoport, brauner, corbet, graf, jgg, linux-kernel,
	linux-kselftest, linux-mm, masahiroy, ojeda, pratyush, rdunlap,
	tj, jasonmiu, dmatlack, skhawaja

On Tue, 21 Oct 2025 20:15:04 -0400 Pasha Tatashin <pasha.tatashin@soleen.com> wrote:

> On Tue, Oct 21, 2025 at 4:53 PM Andrew Morton <akpm@linux-foundation.org> wrote:
> >
> > On Tue, 21 Oct 2025 12:04:47 -0400 Pasha Tatashin <pasha.tatashin@soleen.com> wrote:
> >
> > > > With liveupdate: dropped from the subjects
> > >
> > > I noticed "liveupdate: " subject prefix left over only after sending
> > > these patches. Andrew, would you like me to resend them, or could you
> > > remove the prefix from these patches?
> >
> > No problem.
> >
> > What should we do about -stable kernels?
> >
> > It doesn't seem worthwhile to backport a 3-patch series for a pretty
> > obscure bug.  Perhaps we could merge a patch which disables this
> 
> We are using KHO and have had obscure crashes due to this memory
> corruption, with stacks all over the place. I would prefer this fix to
> be properly backported to stable so we can also automatically consume
> it once we switch to the upstream KHO.

Oh.

I added this important info to the [0/N] changelog, added

Fixes: fc33e4b44b27 ("kexec: enable KHO support for memory preservation")
Cc: <stable@vger.kernel.org>

to all three and moved this into mm.git's mm-hotfixes branch.




* Re: [PATCH v3 1/3] liveupdate: kho: warn and fail on metadata or preserved memory in scratch area
  2025-10-21  0:08 ` [PATCH v3 1/3] liveupdate: kho: warn and fail on metadata or preserved memory in scratch area Pasha Tatashin
  2025-10-22 10:22   ` Pratyush Yadav
@ 2025-10-27 22:29   ` David Matlack
  2025-10-28  0:01     ` Pasha Tatashin
  1 sibling, 1 reply; 21+ messages in thread
From: David Matlack @ 2025-10-27 22:29 UTC (permalink / raw)
  To: Pasha Tatashin
  Cc: akpm, brauner, corbet, graf, jgg, linux-kernel, linux-kselftest,
	linux-mm, masahiroy, ojeda, pratyush, rdunlap, rppt, tj, jasonmiu,
	skhawaja

On Mon, Oct 20, 2025 at 5:08 PM Pasha Tatashin
<pasha.tatashin@soleen.com> wrote:
>
> It is invalid for KHO metadata or preserved memory regions to be located
> within the KHO scratch area, as this area is overwritten when the next
> kernel is loaded, and used early in boot by the next kernel. This can
> lead to memory corruption.
>
> Add checks to kho_preserve_* and KHO's internal metadata allocators
> (xa_load_or_alloc, new_chunk) to verify that the physical address of the
> memory does not overlap with any defined scratch region. If an overlap
> is detected, the operation will fail and a WARN_ON is triggered. To
> avoid performance overhead in production kernels, these checks are
> enabled only when CONFIG_KEXEC_HANDOVER_DEBUG is selected.

How many scratch regions are there in practice? Checking
unconditionally seems like a small price to pay to avoid possible
memory corruption. Especially since most KHO preservation should
happen while the VM is still running (so does not have to be
hyper-optimized).

>  static void *xa_load_or_alloc(struct xarray *xa, unsigned long index, size_t sz)
>  {
> -       void *elm, *res;
> +       void *res = xa_load(xa, index);
>
> -       elm = xa_load(xa, index);
> -       if (elm)
> -               return elm;
> +       if (res)
> +               return res;
> +
> +       void *elm __free(kfree) = kzalloc(sz, GFP_KERNEL);

nit: This breaks the local style of always declaring variables at the
beginning of blocks.



* Re: [PATCH v3 2/3] liveupdate: kho: Increase metadata bitmap size to PAGE_SIZE
  2025-10-21  0:08 ` [PATCH v3 2/3] liveupdate: kho: Increase metadata bitmap size to PAGE_SIZE Pasha Tatashin
  2025-10-22 10:25   ` Pratyush Yadav
@ 2025-10-27 22:44   ` David Matlack
  2025-10-27 22:56   ` David Matlack
  2 siblings, 0 replies; 21+ messages in thread
From: David Matlack @ 2025-10-27 22:44 UTC (permalink / raw)
  To: Pasha Tatashin
  Cc: akpm, brauner, corbet, graf, jgg, linux-kernel, linux-kselftest,
	linux-mm, masahiroy, ojeda, pratyush, rdunlap, rppt, tj, jasonmiu,
	skhawaja

On Mon, Oct 20, 2025 at 5:09 PM Pasha Tatashin
<pasha.tatashin@soleen.com> wrote:
>
> KHO memory preservation metadata is preserved in 512 byte chunks, which
> requires allocating it from the slab allocator. Slabs are not safe to
> use with KHO because of KFENCE, and because partial slabs may leak into
> the next kernel. Change the size to PAGE_SIZE.

> -#define PRESERVE_BITS (512 * 8)
> +#define PRESERVE_BITS (PAGE_SIZE * 8)

nit: A comment somewhere (maybe here?) about the requirement that KHO
metadata are not stored on slabs would be helpful to avoid someone
"optimizing" this back to 512 in the future.



* Re: [PATCH v3 2/3] liveupdate: kho: Increase metadata bitmap size to PAGE_SIZE
  2025-10-21  0:08 ` [PATCH v3 2/3] liveupdate: kho: Increase metadata bitmap size to PAGE_SIZE Pasha Tatashin
  2025-10-22 10:25   ` Pratyush Yadav
  2025-10-27 22:44   ` David Matlack
@ 2025-10-27 22:56   ` David Matlack
  2025-10-27 23:01     ` David Matlack
  2 siblings, 1 reply; 21+ messages in thread
From: David Matlack @ 2025-10-27 22:56 UTC (permalink / raw)
  To: Pasha Tatashin
  Cc: akpm, brauner, corbet, graf, jgg, linux-kernel, linux-kselftest,
	linux-mm, masahiroy, ojeda, pratyush, rdunlap, rppt, tj, jasonmiu,
	skhawaja

On Mon, Oct 20, 2025 at 5:09 PM Pasha Tatashin
<pasha.tatashin@soleen.com> wrote:

> -static void *xa_load_or_alloc(struct xarray *xa, unsigned long index, size_t sz)
> +static void *xa_load_or_alloc(struct xarray *xa, unsigned long index)
>  {
>         void *res = xa_load(xa, index);
>
>         if (res)
>                 return res;
>
> -       void *elm __free(kfree) = kzalloc(sz, GFP_KERNEL);
> +       void *elm __free(kfree) = kzalloc(PAGE_SIZE, GFP_KERNEL);
>
>         if (!elm)
>                 return ERR_PTR(-ENOMEM);
>
> -       if (WARN_ON(kho_scratch_overlap(virt_to_phys(elm), sz)))
> +       if (WARN_ON(kho_scratch_overlap(virt_to_phys(elm), PAGE_SIZE)))
>                 return ERR_PTR(-EINVAL);

Reading xa_load_or_alloc() is a bit confusing now.

It seems very generic (returns a void *) but now hard-codes a size
(PAGE_SIZE). You have to look at the caller to see it is allocating
for a struct kho_mem_phys_bits, and then at the definition of struct
kho_mem_phys_bits to see the static_assert() that this struct is
always PAGE_SIZE.

I would either keep letting the caller pass in the size (if you think
this code is going to be re-used) or just commit to making
xa_load_or_alloc() specific to kho_mem_phys_bits. e.g. Change the
return type to struct kho_mem_phys_bits * and use sizeof() instead of
PAGE_SIZE.



* Re: [PATCH v3 2/3] liveupdate: kho: Increase metadata bitmap size to PAGE_SIZE
  2025-10-27 22:56   ` David Matlack
@ 2025-10-27 23:01     ` David Matlack
  2025-10-28  0:03       ` Pasha Tatashin
  0 siblings, 1 reply; 21+ messages in thread
From: David Matlack @ 2025-10-27 23:01 UTC (permalink / raw)
  To: Pasha Tatashin
  Cc: akpm, brauner, corbet, graf, jgg, linux-kernel, linux-kselftest,
	linux-mm, masahiroy, ojeda, pratyush, rdunlap, rppt, tj, jasonmiu,
	skhawaja

On Mon, Oct 27, 2025 at 3:56 PM David Matlack <dmatlack@google.com> wrote:
>
> On Mon, Oct 20, 2025 at 5:09 PM Pasha Tatashin
> <pasha.tatashin@soleen.com> wrote:
>
> > -static void *xa_load_or_alloc(struct xarray *xa, unsigned long index, size_t sz)
> > +static void *xa_load_or_alloc(struct xarray *xa, unsigned long index)
> >  {
> >         void *res = xa_load(xa, index);
> >
> >         if (res)
> >                 return res;
> >
> > -       void *elm __free(kfree) = kzalloc(sz, GFP_KERNEL);
> > +       void *elm __free(kfree) = kzalloc(PAGE_SIZE, GFP_KERNEL);
> >
> >         if (!elm)
> >                 return ERR_PTR(-ENOMEM);
> >
> > -       if (WARN_ON(kho_scratch_overlap(virt_to_phys(elm), sz)))
> > +       if (WARN_ON(kho_scratch_overlap(virt_to_phys(elm), PAGE_SIZE)))
> >                 return ERR_PTR(-EINVAL);
>
> Reading xa_load_or_alloc() is a bit confusing now.
>
> It seems very generic (returns a void *) but now hard-codes a size
> (PAGE_SIZE). You have to look at the caller to see it is allocating
> for a struct kho_mem_phys_bits, and then at the definition of struct
> kho_mem_phys_bits to see the static_assert() that this struct is
> always PAGE_SIZE.
>
> I would either keep letting the caller pass in the size (if you think
> this code is going to be re-used) or just commit to making
> xa_load_or_alloc() specific to kho_mem_phys_bits. e.g. Change the
> return type to struct kho_mem_phys_bits * and use sizeof() instead of
> PAGE_SIZE.

I see that you replace kzalloc() with get_zeroed_page() in the next
patch. So the latter option is probably better, and maybe move static
assert down here and use BUILD_BUG_ON()? That way readers can easily
see that we are allocating for struct kho_mem_phys_bits *and* that
that struct is guaranteed to be PAGE_SIZE'd.



* Re: [PATCH v3 3/3] liveupdate: kho: allocate metadata directly from the buddy allocator
  2025-10-21  0:08 ` [PATCH v3 3/3] liveupdate: kho: allocate metadata directly from the buddy allocator Pasha Tatashin
@ 2025-10-27 23:04   ` David Matlack
  2025-10-28  0:03     ` Pasha Tatashin
  0 siblings, 1 reply; 21+ messages in thread
From: David Matlack @ 2025-10-27 23:04 UTC (permalink / raw)
  To: Pasha Tatashin
  Cc: akpm, brauner, corbet, graf, jgg, linux-kernel, linux-kselftest,
	linux-mm, masahiroy, ojeda, pratyush, rdunlap, rppt, tj, jasonmiu,
	skhawaja

On Mon, Oct 20, 2025 at 5:09 PM Pasha Tatashin
<pasha.tatashin@soleen.com> wrote:
>
> KHO allocates metadata for its preserved memory map using the slab
> allocator via kzalloc(). This metadata is temporary and is used by the
> next kernel during early boot to find preserved memory.
>
> A problem arises when KFENCE is enabled. kzalloc() calls can be
> randomly intercepted by kfence_alloc(), which services the allocation
> from a dedicated KFENCE memory pool. This pool is allocated early in
> boot via memblock.
>
> When booting via KHO, the memblock allocator is restricted to a "scratch
> area", forcing the KFENCE pool to be allocated within it. This creates a
> conflict, as the scratch area is expected to be ephemeral and
> overwritable by a subsequent kexec. If KHO metadata is placed in this
> KFENCE pool, it leads to memory corruption when the next kernel is
> loaded.
>
> To fix this, modify KHO to allocate its metadata directly from the buddy
> allocator instead of slab.
>
> Fixes: fc33e4b44b27 ("kexec: enable KHO support for memory preservation")
> Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
> Reviewed-by: Pratyush Yadav <pratyush@kernel.org>

Reviewed-by: David Matlack <dmatlack@google.com>



* Re: [PATCH v3 1/3] liveupdate: kho: warn and fail on metadata or preserved memory in scratch area
  2025-10-27 22:29   ` David Matlack
@ 2025-10-28  0:01     ` Pasha Tatashin
  0 siblings, 0 replies; 21+ messages in thread
From: Pasha Tatashin @ 2025-10-28  0:01 UTC (permalink / raw)
  To: David Matlack
  Cc: akpm, brauner, corbet, graf, jgg, linux-kernel, linux-kselftest,
	linux-mm, masahiroy, ojeda, pratyush, rdunlap, rppt, tj, jasonmiu,
	skhawaja

On Mon, Oct 27, 2025 at 6:29 PM David Matlack <dmatlack@google.com> wrote:
>
> On Mon, Oct 20, 2025 at 5:08 PM Pasha Tatashin
> <pasha.tatashin@soleen.com> wrote:
> >
> > It is invalid for KHO metadata or preserved memory regions to be located
> > within the KHO scratch area, as this area is overwritten when the next
> > kernel is loaded, and used early in boot by the next kernel. This can
> > lead to memory corruption.
> >
> > Add checks to kho_preserve_* and KHO's internal metadata allocators
> > (xa_load_or_alloc, new_chunk) to verify that the physical address of the
> > memory does not overlap with any defined scratch region. If an overlap
> > is detected, the operation will fail and a WARN_ON is triggered. To
> > avoid performance overhead in production kernels, these checks are
> > enabled only when CONFIG_KEXEC_HANDOVER_DEBUG is selected.
>
> How many scratch regions are there in practice? Checking
> unconditionally seems like a small price to pay to avoid possible
> memory corruption. Especially since most KHO preservation should
> happen while the VM is still running (so does not have to be
> hyper-optimized).

The debug option can be enabled on production systems as well; we have
some debug options enabled. But I do not see a reason to make this a
fixed cost that can add up: the runtime cost scares me, as we might be
using KHO preserve/unpreserve often in some allocation paths once
stateless KHO + slab preservation is implemented. Let's keep it
optional.

>
> >  static void *xa_load_or_alloc(struct xarray *xa, unsigned long index, size_t sz)
> >  {
> > -       void *elm, *res;
> > +       void *res = xa_load(xa, index);
> >
> > -       elm = xa_load(xa, index);
> > -       if (elm)
> > -               return elm;
> > +       if (res)
> > +               return res;
> > +
> > +       void *elm __free(kfree) = kzalloc(sz, GFP_KERNEL);
>
> nit: This breaks the local style of always declaring variables at the
> beginning of blocks.

I think this suggestion came from Mike, to me it looks alright, as it
is only part of the clean-up path.

Pasha



* Re: [PATCH v3 2/3] liveupdate: kho: Increase metadata bitmap size to PAGE_SIZE
  2025-10-27 23:01     ` David Matlack
@ 2025-10-28  0:03       ` Pasha Tatashin
  0 siblings, 0 replies; 21+ messages in thread
From: Pasha Tatashin @ 2025-10-28  0:03 UTC (permalink / raw)
  To: David Matlack
  Cc: akpm, brauner, corbet, graf, jgg, linux-kernel, linux-kselftest,
	linux-mm, masahiroy, ojeda, pratyush, rdunlap, rppt, tj, jasonmiu,
	skhawaja

On Mon, Oct 27, 2025 at 7:02 PM David Matlack <dmatlack@google.com> wrote:
>
> On Mon, Oct 27, 2025 at 3:56 PM David Matlack <dmatlack@google.com> wrote:
> >
> > On Mon, Oct 20, 2025 at 5:09 PM Pasha Tatashin
> > <pasha.tatashin@soleen.com> wrote:
> >
> > > -static void *xa_load_or_alloc(struct xarray *xa, unsigned long index, size_t sz)
> > > +static void *xa_load_or_alloc(struct xarray *xa, unsigned long index)
> > >  {
> > >         void *res = xa_load(xa, index);
> > >
> > >         if (res)
> > >                 return res;
> > >
> > > -       void *elm __free(kfree) = kzalloc(sz, GFP_KERNEL);
> > > +       void *elm __free(kfree) = kzalloc(PAGE_SIZE, GFP_KERNEL);
> > >
> > >         if (!elm)
> > >                 return ERR_PTR(-ENOMEM);
> > >
> > > -       if (WARN_ON(kho_scratch_overlap(virt_to_phys(elm), sz)))
> > > +       if (WARN_ON(kho_scratch_overlap(virt_to_phys(elm), PAGE_SIZE)))
> > >                 return ERR_PTR(-EINVAL);
> >
> > Reading xa_load_or_alloc() is a bit confusing now.
> >
> > It seems very generic (returns a void *) but now hard-codes a size
> > (PAGE_SIZE). You have to look at the caller to see it is allocating
> > for a struct kho_mem_phys_bits, and then at the definition of struct
> > kho_mem_phys_bits to see the static_assert() that this struct is
> > always PAGE_SIZE.
> >
> > I would either keep letting the caller pass in the size (if you think
> > this code is going to be re-used) or just commit to making
> > xa_load_or_alloc() specific to kho_mem_phys_bits. e.g. Change the
> > return type to struct kho_mem_phys_bits * and use sizeof() instead of
> > PAGE_SIZE.
>
> I see that you replace kzalloc() with get_zeroed_page() in the next
> patch. So the latter option is probably better, and maybe move static
> assert down here and use BUILD_BUG_ON()? That way readers can easily
> see that we are allocating for struct kho_mem_phys_bits *and* that
> that struct is guaranteed to be PAGE_SIZE'd.

The size is verified at build time via:
+static_assert(sizeof(struct kho_mem_phys_bits) == PAGE_SIZE);



* Re: [PATCH v3 3/3] liveupdate: kho: allocate metadata directly from the buddy allocator
  2025-10-27 23:04   ` David Matlack
@ 2025-10-28  0:03     ` Pasha Tatashin
  0 siblings, 0 replies; 21+ messages in thread
From: Pasha Tatashin @ 2025-10-28  0:03 UTC (permalink / raw)
  To: David Matlack
  Cc: akpm, brauner, corbet, graf, jgg, linux-kernel, linux-kselftest,
	linux-mm, masahiroy, ojeda, pratyush, rdunlap, rppt, tj, jasonmiu,
	skhawaja

On Mon, Oct 27, 2025 at 7:05 PM David Matlack <dmatlack@google.com> wrote:
>
> On Mon, Oct 20, 2025 at 5:09 PM Pasha Tatashin
> <pasha.tatashin@soleen.com> wrote:
> >
> > KHO allocates metadata for its preserved memory map using the slab
> > allocator via kzalloc(). This metadata is temporary and is used by the
> > next kernel during early boot to find preserved memory.
> >
> > A problem arises when KFENCE is enabled. kzalloc() calls can be
> > randomly intercepted by kfence_alloc(), which services the allocation
> > from a dedicated KFENCE memory pool. This pool is allocated early in
> > boot via memblock.
> >
> > When booting via KHO, the memblock allocator is restricted to a "scratch
> > area", forcing the KFENCE pool to be allocated within it. This creates a
> > conflict, as the scratch area is expected to be ephemeral and
> > overwritable by a subsequent kexec. If KHO metadata is placed in this
> > KFENCE pool, it leads to memory corruption when the next kernel is
> > loaded.
> >
> > To fix this, modify KHO to allocate its metadata directly from the buddy
> > allocator instead of slab.
> >
> > Fixes: fc33e4b44b27 ("kexec: enable KHO support for memory preservation")
> > Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
> > Reviewed-by: Pratyush Yadav <pratyush@kernel.org>
>
> Reviewed-by: David Matlack <dmatlack@google.com>

Thank you,
Pasha


