Message-ID: <3cd719bb-334a-d05a-d44a-f68982a76a9d@linux.microsoft.com>
Date: Fri, 27 Feb 2026 15:03:55 -0800
Subject: Re: [PATCH v2] x86/hyperv: Use __naked attribute to fix stackless C function
From: Mukesh R
To: Ard Biesheuvel , linux-kernel@vger.kernel.org
Cc: x86@kernel.org, Wei Liu , Uros Bizjak , Andrew Cooper , linux-hyperv@vger.kernel.org
References: <20260227224030.299993-2-ardb@kernel.org>
In-Reply-To: <20260227224030.299993-2-ardb@kernel.org>

On 2/27/26 14:40, Ard Biesheuvel wrote:
> hv_crash_c_entry() is a C function that is entered without a stack,
> and this is only allowed for functions that have the __naked attribute,
> which informs the compiler that it must not emit the usual prologue and
> epilogue or emit any other kind of instrumentation that relies on a
> stack frame.
>
> So split up the function, and set the __naked attribute on the initial
> part that sets up the stack, GDT, IDT and other pieces that are needed
> for ordinary C execution. Given that function calls are not permitted
> either, use the existing long return coded in an asm() block to call the
> second part of the function, which is an ordinary function that is
> permitted to call other functions as usual.

Thank you for the patch. I'll start a build on the side and test it out and let you know.
Thanks,
-Mukesh

> Cc: Mukesh Rathor
> Cc: Wei Liu
> Cc: Uros Bizjak
> Cc: Andrew Cooper
> Cc: linux-hyperv@vger.kernel.org
> Fixes: 94212d34618c ("x86/hyperv: Implement hypervisor RAM collection into vmcore")
> Signed-off-by: Ard Biesheuvel
> ---
> v2: apply some asm tweaks suggested by Uros and Andrew
>
>  arch/x86/hyperv/hv_crash.c | 79 ++++++++++----------
>  1 file changed, 41 insertions(+), 38 deletions(-)
>
> diff --git a/arch/x86/hyperv/hv_crash.c b/arch/x86/hyperv/hv_crash.c
> index 92da1b4f2e73..1c0965eb346e 100644
> --- a/arch/x86/hyperv/hv_crash.c
> +++ b/arch/x86/hyperv/hv_crash.c
> @@ -107,14 +107,12 @@ static void __noreturn hv_panic_timeout_reboot(void)
>  		cpu_relax();
>  }
>
> -/* This cannot be inlined as it needs stack */
> -static noinline __noclone void hv_crash_restore_tss(void)
> +static void hv_crash_restore_tss(void)
>  {
>  	load_TR_desc();
>  }
>
> -/* This cannot be inlined as it needs stack */
> -static noinline void hv_crash_clear_kernpt(void)
> +static void hv_crash_clear_kernpt(void)
>  {
>  	pgd_t *pgd;
>  	p4d_t *p4d;
> @@ -125,6 +123,25 @@ static noinline void hv_crash_clear_kernpt(void)
>  	native_p4d_clear(p4d);
>  }
>
> +
> +static void __noreturn hv_crash_handle(void)
> +{
> +	hv_crash_restore_tss();
> +	hv_crash_clear_kernpt();
> +
> +	/* we are now fully in devirtualized normal kernel mode */
> +	__crash_kexec(NULL);
> +
> +	hv_panic_timeout_reboot();
> +}
> +
> +/*
> + * __naked functions do not permit function calls, not even to __always_inline
> + * functions that only contain asm() blocks themselves. So use a macro instead.
> + */
> +#define hv_wrmsr(msr, val) \
> +	asm("wrmsr" :: "c"(msr), "a"((u32)val), "d"((u32)(val >> 32)) : "memory")
> +
>  /*
>   * This is the C entry point from the asm glue code after the disable hypercall.
>   * We enter here in IA32-e long mode, ie, full 64bit mode running on kernel
> @@ -133,49 +150,35 @@ static noinline void hv_crash_clear_kernpt(void)
>   * available. We restore kernel GDT, and rest of the context, and continue
>   * to kexec.
>   */
> -static asmlinkage void __noreturn hv_crash_c_entry(void)
> +static void __naked hv_crash_c_entry(void)
>  {
> -	struct hv_crash_ctxt *ctxt = &hv_crash_ctxt;
> -
>  	/* first thing, restore kernel gdt */
> -	native_load_gdt(&ctxt->gdtr);
> +	asm volatile("lgdt %0" : : "m" (hv_crash_ctxt.gdtr));
>
> -	asm volatile("movw %%ax, %%ss" : : "a"(ctxt->ss));
> -	asm volatile("movq %0, %%rsp" : : "m"(ctxt->rsp));
> +	asm volatile("movw %0, %%ss" : : "m"(hv_crash_ctxt.ss));
> +	asm volatile("movq %0, %%rsp" : : "m"(hv_crash_ctxt.rsp));
>
> -	asm volatile("movw %%ax, %%ds" : : "a"(ctxt->ds));
> -	asm volatile("movw %%ax, %%es" : : "a"(ctxt->es));
> -	asm volatile("movw %%ax, %%fs" : : "a"(ctxt->fs));
> -	asm volatile("movw %%ax, %%gs" : : "a"(ctxt->gs));
> +	asm volatile("movw %0, %%ds" : : "m"(hv_crash_ctxt.ds));
> +	asm volatile("movw %0, %%es" : : "m"(hv_crash_ctxt.es));
> +	asm volatile("movw %0, %%fs" : : "m"(hv_crash_ctxt.fs));
> +	asm volatile("movw %0, %%gs" : : "m"(hv_crash_ctxt.gs));
>
> -	native_wrmsrq(MSR_IA32_CR_PAT, ctxt->pat);
> -	asm volatile("movq %0, %%cr0" : : "r"(ctxt->cr0));
> +	hv_wrmsr(MSR_IA32_CR_PAT, hv_crash_ctxt.pat);
> +	asm volatile("movq %0, %%cr0" : : "r"(hv_crash_ctxt.cr0));
>
> -	asm volatile("movq %0, %%cr8" : : "r"(ctxt->cr8));
> -	asm volatile("movq %0, %%cr4" : : "r"(ctxt->cr4));
> -	asm volatile("movq %0, %%cr2" : : "r"(ctxt->cr4));
> +	asm volatile("movq %0, %%cr8" : : "r"(hv_crash_ctxt.cr8));
> +	asm volatile("movq %0, %%cr4" : : "r"(hv_crash_ctxt.cr4));
> +	asm volatile("movq %0, %%cr2" : : "r"(hv_crash_ctxt.cr4));
>
> -	native_load_idt(&ctxt->idtr);
> -	native_wrmsrq(MSR_GS_BASE, ctxt->gsbase);
> -	native_wrmsrq(MSR_EFER, ctxt->efer);
> +	asm volatile("lidt %0" : : "m" (hv_crash_ctxt.idtr));
> +	hv_wrmsr(MSR_GS_BASE, hv_crash_ctxt.gsbase);
> +	hv_wrmsr(MSR_EFER, hv_crash_ctxt.efer);
>
>  	/* restore the original kernel CS now via far return */
> -	asm volatile("movzwq %0, %%rax\n\t"
> -		     "pushq %%rax\n\t"
> -		     "pushq $1f\n\t"
> -		     "lretq\n\t"
> -		     "1:nop\n\t" : : "m"(ctxt->cs) : "rax");
> -
> -	/* We are in asmlinkage without stack frame, hence make C function
> -	 * calls which will buy stack frames.
> -	 */
> -	hv_crash_restore_tss();
> -	hv_crash_clear_kernpt();
> -
> -	/* we are now fully in devirtualized normal kernel mode */
> -	__crash_kexec(NULL);
> -
> -	hv_panic_timeout_reboot();
> +	asm volatile("pushq %q0\n\t"
> +		     "pushq %q1\n\t"
> +		     "lretq"
> +		     :: "r"(hv_crash_ctxt.cs), "r"(hv_crash_handle));
>  }
>  /* Tell gcc we are using lretq long jump in the above function intentionally */
>  STACK_FRAME_NON_STANDARD(hv_crash_c_entry);