* [PATCH v1] liveupdate: sanitize incoming session count
@ 2026-01-30 8:46 Li Chen
2026-02-10 13:26 ` Pratyush Yadav
0 siblings, 1 reply; 2+ messages in thread
From: Li Chen @ 2026-01-30 8:46 UTC (permalink / raw)
To: Pasha Tatashin, Mike Rapoport, Pratyush Yadav, linux-kernel; +Cc: Li Chen
luo_session_deserialize() iterates incoming sessions using
luo_session_header_ser::count. The header physical address is provided by
the previous kernel via the KHO FDT node.

If the header is corrupted, count may become arbitrarily large and the new
kernel can read past the preserved session array (sh->ser[i]). This is an
OOB read that can crash or hang early boot.

This can happen if the FDT node is corrupted or mis-parsed and points to a
wrong header address, if stale/incompatible handover data is interpreted
with the wrong layout, or if the preserved region is scribbled by memory
corruption or DMA after kexec.

Clamp the incoming count to LUO_SESSION_MAX before iterating.

Signed-off-by: Li Chen <me@linux.beauty>
---
kernel/liveupdate/luo_session.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/kernel/liveupdate/luo_session.c b/kernel/liveupdate/luo_session.c
index dbdbc3bd7929..9d6c3ad990d9 100644
--- a/kernel/liveupdate/luo_session.c
+++ b/kernel/liveupdate/luo_session.c
@@ -515,6 +515,7 @@ int luo_session_deserialize(void)
struct luo_session_header *sh = &luo_session_global.incoming;
static bool is_deserialized;
static int err;
+ u64 count;
/* If has been deserialized, always return the same error code */
if (is_deserialized)
@@ -524,6 +525,13 @@ int luo_session_deserialize(void)
if (!sh->active)
return 0;
+ count = sh->header_ser->count;
+ if (count > LUO_SESSION_MAX) {
+ pr_warn("incoming session count %llu exceeds max %lu\n",
+ count, (unsigned long)LUO_SESSION_MAX);
+ count = LUO_SESSION_MAX;
+ }
+
/*
* Note on error handling:
*
@@ -539,7 +547,7 @@ int luo_session_deserialize(void)
* userspace to detect the failure and trigger a reboot, which will
* reliably reset devices and reclaim memory.
*/
- for (int i = 0; i < sh->header_ser->count; i++) {
+ for (u64 i = 0; i < count; i++) {
struct luo_session *session;
session = luo_session_alloc(sh->ser[i].name);
--
2.52.0
^ permalink raw reply related [flat|nested] 2+ messages in thread

* Re: [PATCH v1] liveupdate: sanitize incoming session count
2026-01-30 8:46 [PATCH v1] liveupdate: sanitize incoming session count Li Chen
@ 2026-02-10 13:26 ` Pratyush Yadav
0 siblings, 0 replies; 2+ messages in thread
From: Pratyush Yadav @ 2026-02-10 13:26 UTC (permalink / raw)
To: Li Chen; +Cc: Pasha Tatashin, Mike Rapoport, Pratyush Yadav, linux-kernel
Hi Li,
On Fri, Jan 30 2026, Li Chen wrote:
> luo_session_deserialize() iterates incoming sessions using
> luo_session_header_ser::count. The header physical address is provided by
> the previous kernel via the KHO FDT node.
>
> If the header is corrupted, count may become arbitrarily large and the new
> kernel can read past the preserved session array (sh->ser[i]). This is an
> OOB read that can crash or hang early boot.
>
> This can happen if the FDT node is corrupted or mis-parsed and points to a
> wrong header address, if stale/incompatible handover data is interpreted
> with the wrong layout, or if the preserved region is scribbled by memory
> corruption or DMA after kexec.
If the header is corrupted, won't the FDT magic checks fail when doing
any of the FDT operations like getting the compatible? Or perhaps we
should call fdt_check_header() in luo_early_startup()?
I think the sanity check might still be a useful thing, but I'd like to
clarify _why_ we are doing this.
>
> Clamp the incoming count to LUO_SESSION_MAX before iterating.
>
> Signed-off-by: Li Chen <me@linux.beauty>
[...]
--
Regards,
Pratyush Yadav