From mboxrd@z Thu Jan 1 00:00:00 1970
From: Pratyush Yadav
To: Pasha Tatashin
Cc: pratyush@kernel.org, akpm@linux-foundation.org, david@redhat.com,
	lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com, vbabka@kernel.org,
	rppt@kernel.org, graf@amazon.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, surenb@google.com, mhocko@suse.com,
	urezki@gmail.com
Subject: Re: [PATCH v1 2/2] kho: fix KASAN support for restored vmalloc regions
In-Reply-To: <20260225223857.1714801-3-pasha.tatashin@soleen.com> (Pasha
	Tatashin's message of "Wed, 25 Feb 2026 17:38:57 -0500")
References: <20260225223857.1714801-1-pasha.tatashin@soleen.com>
	<20260225223857.1714801-3-pasha.tatashin@soleen.com>
Date: Thu, 26 Feb 2026 11:06:55 +0100
Message-ID: <2vxz7brzddj4.fsf@kernel.org>
User-Agent: Gnus/5.13 (Gnus v5.13)
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain

Hi Pasha,

On Wed, Feb 25 2026, Pasha Tatashin wrote:

> Restored vmalloc regions are currently not properly marked for KASAN,
> causing KASAN to treat accesses to these regions as out-of-bounds.
>
> Fix this by properly unpoisoning the restored vmalloc area using
> kasan_unpoison_vmalloc(). This requires setting the VM_UNINITIALIZED
> flag during the initial area allocation and clearing it after the pages
> have been mapped and unpoisoned, using the clear_vm_uninitialized_flag()
> helper.
>
> Reported-by: Pratyush Yadav
> Fixes: a667300bd53f ("kho: add support for preserving vmalloc allocations")
> Signed-off-by: Pasha Tatashin
> ---
>  kernel/liveupdate/kexec_handover.c | 12 +++++++++++-
>  1 file changed, 11 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
> index 410098bae0bf..747a35107c84 100644
> --- a/kernel/liveupdate/kexec_handover.c
> +++ b/kernel/liveupdate/kexec_handover.c
> @@ -14,6 +14,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>  #include
>  #include
> @@ -1077,6 +1078,7 @@ EXPORT_SYMBOL_GPL(kho_unpreserve_vmalloc);
>  void *kho_restore_vmalloc(const struct kho_vmalloc *preservation)
>  {
>  	struct kho_vmalloc_chunk *chunk = KHOSER_LOAD_PTR(preservation->first);
> +	kasan_vmalloc_flags_t kasan_flags = KASAN_VMALLOC_PROT_NORMAL;
>  	unsigned int align, order, shift, vm_flags;
>  	unsigned long total_pages, contig_pages;
>  	unsigned long addr, size;
> @@ -1128,7 +1130,8 @@ void *kho_restore_vmalloc(const struct kho_vmalloc *preservation)
>  		goto err_free_pages_array;
>
>  	area = __get_vm_area_node(total_pages * PAGE_SIZE, align, shift,
> -				  vm_flags, VMALLOC_START, VMALLOC_END,
> +				  vm_flags | VM_UNINITIALIZED,
> +				  VMALLOC_START, VMALLOC_END,
>  				  NUMA_NO_NODE, GFP_KERNEL,
>  				  __builtin_return_address(0));
>  	if (!area)
> @@ -1143,6 +1146,13 @@ void *kho_restore_vmalloc(const struct kho_vmalloc *preservation)
>  	area->nr_pages = total_pages;
>  	area->pages = pages;
>
> +	if (vm_flags & VM_ALLOC)
> +		kasan_flags |= KASAN_VMALLOC_VM_ALLOC;
> +
> +	area->addr = kasan_unpoison_vmalloc(area->addr, total_pages * PAGE_SIZE,
> +					    kasan_flags);

Ugh, this is tricky. Say I do vmalloc(sizeof(unsigned long)). After KHO,
this would unpoison the whole page, effectively missing all out-of-bounds
accesses within that page. We need to either store the buffer size in
struct kho_vmalloc, or only allow preserving PAGE_SIZE-aligned
allocations, or just live with this missed coverage.
I kind of prefer the second option, but no strong opinions. Anyway, I
think this is a clear improvement regardless of this problem. So,

Reviewed-by: Pratyush Yadav (Google)
Tested-by: Pratyush Yadav (Google)

Thanks for fixing it.

> +	clear_vm_uninitialized_flag(area);
> +
>  	return area->addr;
>
>  err_free_vm_area:

--
Regards,
Pratyush Yadav