From: Sourabh Jain <sourabhjain@linux.ibm.com>
To: "Ritesh Harjani (IBM)" <ritesh.list@gmail.com>,
linuxppc-dev@lists.ozlabs.org
Cc: Aditya Gupta <adityag@linux.ibm.com>,
Daniel Axtens <dja@axtens.net>,
Hari Bathini <hbathini@linux.ibm.com>,
Madhavan Srinivasan <maddy@linux.ibm.com>,
Mahesh Salgaonkar <mahesh@linux.ibm.com>,
Michael Ellerman <mpe@ellerman.id.au>,
Shivang Upadhyay <shivangu@linux.ibm.com>,
Venkat Rao Bagalkote <venkat88@linux.ibm.com>,
Aboorva Devarajan <aboorvad@linux.ibm.com>
Subject: Re: [PATCH 2/2] powerpc/kexec: Disable KASAN for VMX helpers used in MMU-off path
Date: Thu, 2 Apr 2026 09:29:16 +0530 [thread overview]
Message-ID: <255cc5b6-e1f3-4fc7-b100-5db4635a04eb@linux.ibm.com> (raw)
In-Reply-To: <wlyvmo0a.ritesh.list@gmail.com>
On 29/03/26 06:48, Ritesh Harjani (IBM) wrote:
> Sourabh Jain <sourabhjain@linux.ibm.com> writes:
>
>> The kexec sequence invokes enter_vmx_ops() and exit_vmx_ops() with the
>> MMU disabled. In this context, code must not rely on normal virtual
>> address translations or trigger page faults.
>> With KASAN enabled, these functions get instrumented and may access
>> shadow memory using regular address translation. When executed with
>> the MMU off, this can lead to page faults (bad_page_fault) from which
>> the kernel cannot recover in the kexec path, resulting in a hang.
> Right, so with the MMU off, the kernel can't access KASAN shadow memory.
>
> So, let me trace down the path a bit... you skipped an important detail
> i.e. preempt_count() is always inline, and we play a few tricks in kexec
> path to tell enter_vmx_ops() that we are in HARDIRQ mode.
>
> default_machine_kexec(image)
> current_thread_info()->preempt_count = HARDIRQ_OFFSET
>
> kexec_sequence(..., copy_with_mmu_off = 1)
> if (copy_with_mmu_off) bl real_mode
>
> bl kexec_copy_flush(image)
> memcpy(ranges, image->segment, ...)
>
> copy_segments()
> copy_page(dest, addr)
>
> bl enter_vmx_ops()
> if (in_interrupt() == true) return 0 // preempt_count == HARDIRQ_OFFSET
> beq .Lnonvmx_copy
Yes, since preempt_count for the current thread is set to HARDIRQ_OFFSET,
we return early from copy_page() -> copypage_power7() -> enter_vmx_ops(),
and the call to exit_vmx_ops() is skipped.
>
>> Mark enter_vmx_ops() and exit_vmx_ops() with __no_sanitize_address to
>> avoid KASAN instrumentation and ensure kexec boots fine with KASAN
>> enabled.
>>
> IIUC, preempt_count() is always inline, and since you are disabling kasan
> instrumentation on enter_vmx_ops(), hence it just works for this reason.
> But you missed adding that detail here.
Yeah, it is worth adding that to the commit message. I will add it in v2.
>
> enter_vmx_ops()
> if (in_interrupt()) // return 0
> preempt_count() & ... | HARDIRQ_OFFSET // preempt_count() is __always_inline
>
> static __always_inline int preempt_count(void)
> {
> return READ_ONCE(current_thread_info()->preempt_count);
> }
>
>> Cc: Aditya Gupta <adityag@linux.ibm.com>
>> Cc: Daniel Axtens <dja@axtens.net>
>> Cc: Hari Bathini <hbathini@linux.ibm.com>
>> Cc: Madhavan Srinivasan <maddy@linux.ibm.com>
>> Cc: Mahesh Salgaonkar <mahesh@linux.ibm.com>
>> Cc: Michael Ellerman <mpe@ellerman.id.au>
>> Cc: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
>> Cc: Shivang Upadhyay <shivangu@linux.ibm.com>
>> Cc: Venkat Rao Bagalkote <venkat88@linux.ibm.com>
>> Reported-by: Aboorva Devarajan <aboorvad@linux.ibm.com>
>> Signed-off-by: Sourabh Jain <sourabhjain@linux.ibm.com>
>> ---
>> arch/powerpc/lib/vmx-helper.c | 4 ++--
>> 1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/powerpc/lib/vmx-helper.c b/arch/powerpc/lib/vmx-helper.c
>> index 554b248002b4..c01b2d856650 100644
>> --- a/arch/powerpc/lib/vmx-helper.c
>> +++ b/arch/powerpc/lib/vmx-helper.c
>> @@ -52,7 +52,7 @@ int exit_vmx_usercopy(void)
>> }
>> EXPORT_SYMBOL(exit_vmx_usercopy);
>>
>> -int enter_vmx_ops(void)
> In that case, should we add a comment here saying:
>
> /*
> * Can be called from kexec copy_page() path with MMU off. The kexec
> * code sets preempt_count to HARDIRQ_OFFSET so we return early here.
> * Since in_interrupt() is always inline, __no_sanitize_address on this
> * function is sufficient to avoid KASAN shadow memory accesses in real
> * mode.
> */
Thanks for the write-up, I will add it in v2.
>> +int __no_sanitize_address enter_vmx_ops(void)
>> {
>> if (in_interrupt())
>> return 0;
>> @@ -69,7 +69,7 @@ int enter_vmx_ops(void)
>> * passed a pointer to the destination which we return as required by a
>> * memcpy implementation.
>> */
>> -void *exit_vmx_ops(void *dest)
>> +void __no_sanitize_address *exit_vmx_ops(void *dest)
> I am assuming that since we never enter VMX in the kexec path, the kexec path
> must not be calling exit_vmx_ops() anyway? So do we need __no_sanitize_address here?
Agreed, in copypage_power7() we jump to the .Lnonvmx_copy label and do
not call exit_vmx_ops(). I will remove __no_sanitize_address from
exit_vmx_ops().
Thanks for the detailed review, Ritesh.
- Sourabh Jain
Thread overview: 10+ messages
2026-03-21 5:31 [PATCH 1/2] powerpc/kdump: fix KASAN sanitization flag for core_$(BITS).o Sourabh Jain
2026-03-21 5:31 ` [PATCH 2/2] powerpc/kexec: Disable KASAN for VMX helpers used in MMU-off path Sourabh Jain
2026-03-29 1:18 ` Ritesh Harjani
2026-04-02 0:04 ` Ritesh Harjani
2026-04-02 3:59 ` Sourabh Jain [this message]
2026-03-23 6:11 ` [PATCH 1/2] powerpc/kdump: fix KASAN sanitization flag for core_$(BITS).o Mahesh J Salgaonkar
2026-03-23 10:36 ` Sourabh Jain
2026-03-23 8:53 ` Venkat Rao Bagalkote
2026-03-29 1:56 ` Ritesh Harjani
2026-04-01 13:42 ` Sourabh Jain