* [PATCH v2 00/15] Add arm64 support in MSHV_VTL
@ 2026-04-23 12:41 Naman Jain
2026-04-23 12:41 ` [PATCH v2 01/15] arm64: smp: Export arch_smp_send_reschedule for mshv_vtl module Naman Jain
` (14 more replies)
0 siblings, 15 replies; 31+ messages in thread
From: Naman Jain @ 2026-04-23 12:41 UTC (permalink / raw)
To: K . Y . Srinivasan, Haiyang Zhang, Wei Liu, Dexuan Cui, Long Li,
Catalin Marinas, Will Deacon, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H . Peter Anvin, Arnd Bergmann,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti,
Michael Kelley
Cc: Marc Zyngier, Timothy Hayes, Lorenzo Pieralisi, Sascha Bischoff,
mrigendrachaubey, Naman Jain, linux-hyperv, linux-arm-kernel,
linux-kernel, linux-arch, linux-riscv, vdso, ssengar
This series adds ARM64 support to the mshv_vtl driver.
To do so, common Hyper-V code is refactored, the necessary arm64
support is added, mshv_vtl_main.c is refactored, and finally the
support is enabled in Kconfig.
Changes since v1:
https://lore.kernel.org/all/20260316121241.910764-1-namjain@linux.microsoft.com/
Patch 1: arm64: smp: Export arch_smp_send_reschedule for mshv_vtl module
* Changed prefix in subject (Michael)
* Sashiko - no issues
Patch 2:
* Add #include <linux/io.h> in hv_common.c (Michael)
* Remove ms_hyperv.hints change from the non-TDX case,
as it won't matter in the failure case (Michael)
* Add ms_hyperv.hints &=
~HV_X64_ENLIGHTENED_VMCS_RECOMMENDED for TDX
case, to maintain parity with existing code.
(Sashiko)
* Handle synic_eventring_tail -ENOMEM issue by
returning early (Michael|Sashiko)
* Only 4k page is used here, so add dependency on
PAGE_SIZE_4KB for MSHV_VTL as well in a later
Kconfig patch (Sashiko|Michael)
* Use HV_HYP_PAGE_SIZE instead of PAGE_SIZE to avoid
page size mismatch issues (Sashiko)
* s/"vmalloc_to_pfn(*hvp)"/
"page_to_hvpfn(vmalloc_to_page(*hvp))" in
hv_common.c (Sashiko|Michael)
* s/GFP_KERNEL/flags in __vmalloc. (Sashiko|Michael)
* Limit lines to 80 columns in hv_common_cpu_init (Mukesh R.)
* Move arch based definition of
HV_VP_ASSIST_PAGE_ADDRESS_SHIFT to
hvgdk_mini.h (Michael)
* Added a comment about the x64 vmalloc_to_pfn(*hvp) usage (Michael)
* Move remaining hv_vp_assist_page code from
arch/x86/include/asm/mshyperv.h to
include/asm-generic/mshyperv.h (Michael)
* s/HV_SYN_REG_VP_ASSIST_PAGE/HV_MSR_VP_ASSIST_PAGE (Michael)
Patch 3:
* Rework the code and remove these new APIs. Move
the vmbus_handler global variable and
hv_setup_vmbus_handler()/hv_remove_vmbus_handler()
from arch/x86 to drivers/hv/hv_common.c so that
the same APIs can be used to setup per-cpu vmbus
handlers as well for arm64. (Michael)
Patch 4:
* Sashiko's comments are generic and outside the
scope of the refactoring this patch is doing.
Will take them up separately.
Patch 6:
* Sashiko's comment regarding a race condition is a false positive.
* Regarding the memory leak on CPU offline/online -
beyond the scope of this series, I will fix it
separately.
Patch 7:
* Subject s/"arch: arm64:"/"arm64: hyperv:" (Michael)
* Changed commit msg as per Michael's suggestion
* Add kernel_neon_begin(), kernel_neon_end() calls (Sashiko)
* Removed Note prefix from comments (Michael)
* Added compile time check for cpu context to be
within 1024 bytes of mshv_vtl_run
* Moved the declarations of mshv_vtl_return_call to generic file
Patch 8:
* Split the patch into three patches - number 8-10 (Michael)
* Moved hv_vtl_configure_reg_page declaration to asm-generic header
* Sashiko's other reviews are for existing code,
I will take them separately
Patch 9: (now patch 11)
No changes required for Sashiko's comments, as most
of these controls are intentionally delegated to
OpenVMM to keep the kernel driver simpler.
Patch 10: (now patch 13)
* Remove hv_setup_percpu_vmbus_handler invocations,
after redesign in previous patchsets (Michael)
* Simplified mshv_vtl_get_vsm_regs() by moving arch
specific code (for x86) to hv_vtl -
mshv_vtl_return_call_init(). This removes arch
checks in mshv_vtl driver. Add a separate patch
for this (now patch 12)
* Sashiko's other reviews are related to existing
code - can be taken up separately
Patch 11 (now patch 15):
* Only 4k page is supported, so add dependency on
PAGE_SIZE_4KB for MSHV_VTL (Sashiko|Michael)
* Remove "Kconfig: " from subject line. (Michael)
New patch 14:
Add a Kconfig dependency on 4K PAGE_SIZE for
MSHV_VTL to manage assumptions in MSHV_VTL driver
Change the prefix in subject lines as per the naming convention below:
mshv_vtl_main changes - "mshv_vtl: "
arch/arm64 Hyper-V changes - "arm64: hyperv: "
arch/x86 Hyper-V changes - "x86/hyperv: "
Add Reviewed-by on already reviewed patches.
Naman Jain (15):
arm64: smp: Export arch_smp_send_reschedule for mshv_vtl module
Drivers: hv: Move hv_vp_assist_page to common files
Drivers: hv: Move vmbus_handler to common code
mshv_vtl: Refactor the driver for ARM64 support to be added
Drivers: hv: Export vmbus_interrupt for mshv_vtl module
mshv_vtl: Make sint vector architecture neutral
arm64: hyperv: Add support for mshv_vtl_return_call
Drivers: hv: Move hv_call_(get|set)_vp_registers() declarations
Drivers: hv: mshv_vtl: Move hv_vtl_configure_reg_page() to x86
arm64: hyperv: Add hv_vtl_configure_reg_page() stub
mshv_vtl: Let userspace do VSM configuration
mshv_vtl: Move VSM code page offset logic to x86 files
mshv_vtl: Add remaining support for arm64
Drivers: hv: Add 4K page dependency in MSHV_VTL
Drivers: hv: Add ARM64 support for MSHV_VTL in Kconfig
arch/arm64/hyperv/Makefile | 1 +
arch/arm64/hyperv/hv_vtl.c | 165 ++++++++++++++++++++++++
arch/arm64/include/asm/mshyperv.h | 25 ++++
arch/arm64/kernel/smp.c | 1 +
arch/x86/hyperv/hv_init.c | 88 +------------
arch/x86/hyperv/hv_vtl.c | 149 ++++++++++++++++++++-
arch/x86/include/asm/mshyperv.h | 21 +--
arch/x86/kernel/cpu/mshyperv.c | 12 --
drivers/hv/Kconfig | 7 +-
drivers/hv/hv_common.c | 103 ++++++++++++++-
drivers/hv/mshv.h | 8 --
drivers/hv/mshv_vtl.h | 3 +
drivers/hv/mshv_vtl_main.c | 208 +++---------------------------
drivers/hv/vmbus_drv.c | 18 +--
include/asm-generic/mshyperv.h | 62 +++++++++
include/hyperv/hvgdk_mini.h | 6 +-
16 files changed, 550 insertions(+), 327 deletions(-)
create mode 100644 arch/arm64/hyperv/hv_vtl.c
--
2.43.0
^ permalink raw reply [flat|nested] 31+ messages in thread
* [PATCH v2 01/15] arm64: smp: Export arch_smp_send_reschedule for mshv_vtl module
From: Naman Jain @ 2026-04-23 12:41 UTC (permalink / raw)
mshv_vtl_main.c calls smp_send_reschedule(), which expands to
arch_smp_send_reschedule(). When CONFIG_MSHV_VTL=m, the module cannot
access this symbol since it is not exported on arm64.
smp_send_reschedule() is used in mshv_vtl_cancel() to interrupt a vCPU
thread running on another CPU. When a vCPU is looping in
mshv_vtl_ioctl_return_to_lower_vtl(), it checks a per-CPU cancel flag
before each VTL0 entry. Setting cancel=1 alone is not enough if the
target CPU thread is sleeping - the IPI from smp_send_reschedule() kicks
the remote CPU out of idle/sleep so it re-checks the cancel flag and
exits the loop promptly.
Other architectures (riscv, loongarch, powerpc) already export this
symbol. Add the same EXPORT_SYMBOL_GPL for arm64. This is required
for adding arm64 support in MSHV_VTL.
Reviewed-by: Michael Kelley <mhklinux@outlook.com>
Reviewed-by: Roman Kisel <vdso@mailbox.org>
Signed-off-by: Naman Jain <namjain@linux.microsoft.com>
---
arch/arm64/kernel/smp.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 1aa324104afb..26b1a4456ceb 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -1152,6 +1152,7 @@ void arch_smp_send_reschedule(int cpu)
{
smp_cross_call(cpumask_of(cpu), IPI_RESCHEDULE);
}
+EXPORT_SYMBOL_GPL(arch_smp_send_reschedule);
#ifdef CONFIG_ARM64_ACPI_PARKING_PROTOCOL
void arch_send_wakeup_ipi(unsigned int cpu)
--
2.43.0
* [PATCH v2 02/15] Drivers: hv: Move hv_vp_assist_page to common files
From: Naman Jain @ 2026-04-23 12:41 UTC (permalink / raw)
Move the logic to initialize and export hv_vp_assist_page from x86
architecture code to Hyper-V common code, to allow it to be used for
the upcoming arm64 support in the MSHV_VTL driver.
Note: This change also improves error handling - if VP assist page
allocation fails, hyperv_init() now returns early instead of
continuing with partial initialization.
Signed-off-by: Roman Kisel <romank@linux.microsoft.com>
Reviewed-by: Roman Kisel <vdso@mailbox.org>
Signed-off-by: Naman Jain <namjain@linux.microsoft.com>
---
arch/x86/hyperv/hv_init.c | 88 +-----------------------------
arch/x86/include/asm/mshyperv.h | 14 -----
drivers/hv/hv_common.c | 94 ++++++++++++++++++++++++++++++++-
include/asm-generic/mshyperv.h | 16 ++++++
include/hyperv/hvgdk_mini.h | 6 ++-
5 files changed, 115 insertions(+), 103 deletions(-)
diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
index 323adc93f2dc..75a98b5e451b 100644
--- a/arch/x86/hyperv/hv_init.c
+++ b/arch/x86/hyperv/hv_init.c
@@ -81,9 +81,6 @@ union hv_ghcb * __percpu *hv_ghcb_pg;
/* Storage to save the hypercall page temporarily for hibernation */
static void *hv_hypercall_pg_saved;
-struct hv_vp_assist_page **hv_vp_assist_page;
-EXPORT_SYMBOL_GPL(hv_vp_assist_page);
-
static int hyperv_init_ghcb(void)
{
u64 ghcb_gpa;
@@ -117,59 +114,12 @@ static int hyperv_init_ghcb(void)
static int hv_cpu_init(unsigned int cpu)
{
- union hv_vp_assist_msr_contents msr = { 0 };
- struct hv_vp_assist_page **hvp;
int ret;
ret = hv_common_cpu_init(cpu);
if (ret)
return ret;
- if (!hv_vp_assist_page)
- return 0;
-
- hvp = &hv_vp_assist_page[cpu];
- if (hv_root_partition()) {
- /*
- * For root partition we get the hypervisor provided VP assist
- * page, instead of allocating a new page.
- */
- rdmsrq(HV_X64_MSR_VP_ASSIST_PAGE, msr.as_uint64);
- *hvp = memremap(msr.pfn << HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_SHIFT,
- PAGE_SIZE, MEMREMAP_WB);
- } else {
- /*
- * The VP assist page is an "overlay" page (see Hyper-V TLFS's
- * Section 5.2.1 "GPA Overlay Pages"). Here it must be zeroed
- * out to make sure we always write the EOI MSR in
- * hv_apic_eoi_write() *after* the EOI optimization is disabled
- * in hv_cpu_die(), otherwise a CPU may not be stopped in the
- * case of CPU offlining and the VM will hang.
- */
- if (!*hvp) {
- *hvp = __vmalloc(PAGE_SIZE, GFP_KERNEL | __GFP_ZERO);
-
- /*
- * Hyper-V should never specify a VM that is a Confidential
- * VM and also running in the root partition. Root partition
- * is blocked to run in Confidential VM. So only decrypt assist
- * page in non-root partition here.
- */
- if (*hvp && !ms_hyperv.paravisor_present && hv_isolation_type_snp()) {
- WARN_ON_ONCE(set_memory_decrypted((unsigned long)(*hvp), 1));
- memset(*hvp, 0, PAGE_SIZE);
- }
- }
-
- if (*hvp)
- msr.pfn = vmalloc_to_pfn(*hvp);
-
- }
- if (!WARN_ON(!(*hvp))) {
- msr.enable = 1;
- wrmsrq(HV_X64_MSR_VP_ASSIST_PAGE, msr.as_uint64);
- }
-
/* Allow Hyper-V stimer vector to be injected from Hypervisor. */
if (ms_hyperv.misc_features & HV_STIMER_DIRECT_MODE_AVAILABLE)
apic_update_vector(cpu, HYPERV_STIMER0_VECTOR, true);
@@ -286,23 +236,6 @@ static int hv_cpu_die(unsigned int cpu)
hv_common_cpu_die(cpu);
- if (hv_vp_assist_page && hv_vp_assist_page[cpu]) {
- union hv_vp_assist_msr_contents msr = { 0 };
- if (hv_root_partition()) {
- /*
- * For root partition the VP assist page is mapped to
- * hypervisor provided page, and thus we unmap the
- * page here and nullify it, so that in future we have
- * correct page address mapped in hv_cpu_init.
- */
- memunmap(hv_vp_assist_page[cpu]);
- hv_vp_assist_page[cpu] = NULL;
- rdmsrq(HV_X64_MSR_VP_ASSIST_PAGE, msr.as_uint64);
- msr.enable = 0;
- }
- wrmsrq(HV_X64_MSR_VP_ASSIST_PAGE, msr.as_uint64);
- }
-
if (hv_reenlightenment_cb == NULL)
return 0;
@@ -460,21 +393,6 @@ void __init hyperv_init(void)
if (hv_common_init())
return;
- /*
- * The VP assist page is useless to a TDX guest: the only use we
- * would have for it is lazy EOI, which can not be used with TDX.
- */
- if (hv_isolation_type_tdx())
- hv_vp_assist_page = NULL;
- else
- hv_vp_assist_page = kzalloc_objs(*hv_vp_assist_page, nr_cpu_ids);
- if (!hv_vp_assist_page) {
- ms_hyperv.hints &= ~HV_X64_ENLIGHTENED_VMCS_RECOMMENDED;
-
- if (!hv_isolation_type_tdx())
- goto common_free;
- }
-
if (ms_hyperv.paravisor_present && hv_isolation_type_snp()) {
/* Negotiate GHCB Version. */
if (!hv_ghcb_negotiate_protocol())
@@ -483,7 +401,7 @@ void __init hyperv_init(void)
hv_ghcb_pg = alloc_percpu(union hv_ghcb *);
if (!hv_ghcb_pg)
- goto free_vp_assist_page;
+ goto free_ghcb_page;
}
cpuhp = cpuhp_setup_state(CPUHP_AP_HYPERV_ONLINE, "x86/hyperv_init:online",
@@ -613,10 +531,6 @@ void __init hyperv_init(void)
cpuhp_remove_state(CPUHP_AP_HYPERV_ONLINE);
free_ghcb_page:
free_percpu(hv_ghcb_pg);
-free_vp_assist_page:
- kfree(hv_vp_assist_page);
- hv_vp_assist_page = NULL;
-common_free:
hv_common_free();
}
diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
index f64393e853ee..95b452387969 100644
--- a/arch/x86/include/asm/mshyperv.h
+++ b/arch/x86/include/asm/mshyperv.h
@@ -155,16 +155,6 @@ static inline u64 hv_do_fast_hypercall16(u16 code, u64 input1, u64 input2)
return _hv_do_fast_hypercall16(control, input1, input2);
}
-extern struct hv_vp_assist_page **hv_vp_assist_page;
-
-static inline struct hv_vp_assist_page *hv_get_vp_assist_page(unsigned int cpu)
-{
- if (!hv_vp_assist_page)
- return NULL;
-
- return hv_vp_assist_page[cpu];
-}
-
void __init hyperv_init(void);
void hyperv_setup_mmu_ops(void);
void set_hv_tscchange_cb(void (*cb)(void));
@@ -254,10 +244,6 @@ static inline void hyperv_setup_mmu_ops(void) {}
static inline void set_hv_tscchange_cb(void (*cb)(void)) {}
static inline void clear_hv_tscchange_cb(void) {}
static inline void hyperv_stop_tsc_emulation(void) {};
-static inline struct hv_vp_assist_page *hv_get_vp_assist_page(unsigned int cpu)
-{
- return NULL;
-}
static inline int hyperv_flush_guest_mapping(u64 as) { return -1; }
static inline int hyperv_flush_guest_mapping_range(u64 as,
hyperv_fill_flush_list_func fill_func, void *data)
diff --git a/drivers/hv/hv_common.c b/drivers/hv/hv_common.c
index 6b67ac616789..e8633bc51d56 100644
--- a/drivers/hv/hv_common.c
+++ b/drivers/hv/hv_common.c
@@ -28,7 +28,11 @@
#include <linux/slab.h>
#include <linux/dma-map-ops.h>
#include <linux/set_memory.h>
+#include <linux/vmalloc.h>
+#include <linux/io.h>
+#include <linux/hyperv.h>
#include <hyperv/hvhdk.h>
+#include <hyperv/hvgdk.h>
#include <asm/mshyperv.h>
u64 hv_current_partition_id = HV_PARTITION_ID_SELF;
@@ -78,6 +82,8 @@ static struct ctl_table_header *hv_ctl_table_hdr;
u8 * __percpu *hv_synic_eventring_tail;
EXPORT_SYMBOL_GPL(hv_synic_eventring_tail);
+struct hv_vp_assist_page **hv_vp_assist_page;
+EXPORT_SYMBOL_GPL(hv_vp_assist_page);
/*
* Hyper-V specific initialization and shutdown code that is
* common across all architectures. Called from architecture
@@ -92,6 +98,9 @@ void __init hv_common_free(void)
if (ms_hyperv.misc_features & HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE)
hv_kmsg_dump_unregister();
+ kfree(hv_vp_assist_page);
+ hv_vp_assist_page = NULL;
+
kfree(hv_vp_index);
hv_vp_index = NULL;
@@ -394,6 +403,23 @@ int __init hv_common_init(void)
for (i = 0; i < nr_cpu_ids; i++)
hv_vp_index[i] = VP_INVAL;
+ /*
+ * The VP assist page is useless to a TDX guest: the only use we
+ * would have for it is lazy EOI, which can not be used with TDX.
+ */
+ if (hv_isolation_type_tdx()) {
+ hv_vp_assist_page = NULL;
+#ifdef CONFIG_X86_64
+ ms_hyperv.hints &= ~HV_X64_ENLIGHTENED_VMCS_RECOMMENDED;
+#endif
+ } else {
+ hv_vp_assist_page = kzalloc_objs(*hv_vp_assist_page, nr_cpu_ids);
+ if (!hv_vp_assist_page) {
+ hv_common_free();
+ return -ENOMEM;
+ }
+ }
+
return 0;
}
@@ -471,6 +497,8 @@ void __init ms_hyperv_late_init(void)
int hv_common_cpu_init(unsigned int cpu)
{
+ union hv_vp_assist_msr_contents msr = { 0 };
+ struct hv_vp_assist_page **hvp;
void **inputarg, **outputarg;
u8 **synic_eventring_tail;
u64 msr_vp_index;
@@ -539,7 +567,53 @@ int hv_common_cpu_init(unsigned int cpu)
sizeof(u8), flags);
/* No need to unwind any of the above on failure here */
if (unlikely(!*synic_eventring_tail))
- ret = -ENOMEM;
+ return -ENOMEM;
+ }
+
+ if (!hv_vp_assist_page)
+ return ret;
+
+ hvp = &hv_vp_assist_page[cpu];
+ if (hv_root_partition()) {
+ /*
+ * For root partition we get the hypervisor provided VP assist
+ * page, instead of allocating a new page.
+ */
+ msr.as_uint64 = hv_get_msr(HV_MSR_VP_ASSIST_PAGE);
+ *hvp = memremap(msr.pfn << HV_VP_ASSIST_PAGE_ADDRESS_SHIFT,
+ HV_HYP_PAGE_SIZE, MEMREMAP_WB);
+ } else {
+ /*
+ * The VP assist page is an "overlay" page (see Hyper-V TLFS's
+ * Section 5.2.1 "GPA Overlay Pages"). Here it must be zeroed
+ * out to make sure that on x86/x64, we always write the EOI MSR in
+ * hv_apic_eoi_write() *after* the EOI optimization is disabled
+ * in hv_cpu_die(), otherwise a CPU may not be stopped in the
+ * case of CPU offlining and the VM will hang.
+ */
+ if (!*hvp) {
+ *hvp = __vmalloc(HV_HYP_PAGE_SIZE, flags | __GFP_ZERO);
+
+ /*
+ * Hyper-V should never specify a VM that is a Confidential
+ * VM and also running in the root partition. Root partition
+ * is blocked to run in Confidential VM. So only decrypt assist
+ * page in non-root partition here.
+ */
+ if (*hvp &&
+ !ms_hyperv.paravisor_present &&
+ hv_isolation_type_snp()) {
+ WARN_ON_ONCE(set_memory_decrypted((unsigned long)(*hvp), 1));
+ memset(*hvp, 0, HV_HYP_PAGE_SIZE);
+ }
+ }
+
+ if (*hvp)
+ msr.pfn = page_to_hvpfn(vmalloc_to_page(*hvp));
+ }
+ if (!WARN_ON(!(*hvp))) {
+ msr.enable = 1;
+ hv_set_msr(HV_MSR_VP_ASSIST_PAGE, msr.as_uint64);
}
return ret;
@@ -566,6 +640,24 @@ int hv_common_cpu_die(unsigned int cpu)
*synic_eventring_tail = NULL;
}
+ if (hv_vp_assist_page && hv_vp_assist_page[cpu]) {
+ union hv_vp_assist_msr_contents msr = { 0 };
+
+ if (hv_root_partition()) {
+ /*
+ * For root partition the VP assist page is mapped to
+ * hypervisor provided page, and thus we unmap the
+ * page here and nullify it, so that in future we have
+ * correct page address mapped in hv_cpu_init.
+ */
+ memunmap(hv_vp_assist_page[cpu]);
+ hv_vp_assist_page[cpu] = NULL;
+ msr.as_uint64 = hv_get_msr(HV_MSR_VP_ASSIST_PAGE);
+ msr.enable = 0;
+ }
+ hv_set_msr(HV_MSR_VP_ASSIST_PAGE, msr.as_uint64);
+ }
+
return 0;
}
diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h
index d37b68238c97..2810aa05dc73 100644
--- a/include/asm-generic/mshyperv.h
+++ b/include/asm-generic/mshyperv.h
@@ -25,6 +25,7 @@
#include <linux/nmi.h>
#include <asm/ptrace.h>
#include <hyperv/hvhdk.h>
+#include <hyperv/hvgdk.h>
#define VTPM_BASE_ADDRESS 0xfed40000
@@ -299,6 +300,16 @@ do { \
#define hv_status_debug(status, fmt, ...) \
hv_status_printk(debug, status, fmt, ##__VA_ARGS__)
+extern struct hv_vp_assist_page **hv_vp_assist_page;
+
+static inline struct hv_vp_assist_page *hv_get_vp_assist_page(unsigned int cpu)
+{
+ if (!hv_vp_assist_page)
+ return NULL;
+
+ return hv_vp_assist_page[cpu];
+}
+
const char *hv_result_to_string(u64 hv_status);
int hv_result_to_errno(u64 status);
void hyperv_report_panic(struct pt_regs *regs, long err, bool in_die);
@@ -327,6 +338,11 @@ static inline enum hv_isolation_type hv_get_isolation_type(void)
{
return HV_ISOLATION_TYPE_NONE;
}
+
+static inline struct hv_vp_assist_page *hv_get_vp_assist_page(unsigned int cpu)
+{
+ return NULL;
+}
#endif /* CONFIG_HYPERV */
#if IS_ENABLED(CONFIG_MSHV_ROOT)
diff --git a/include/hyperv/hvgdk_mini.h b/include/hyperv/hvgdk_mini.h
index 056ef7b6b360..c72d04cd5ae4 100644
--- a/include/hyperv/hvgdk_mini.h
+++ b/include/hyperv/hvgdk_mini.h
@@ -149,6 +149,7 @@ struct hv_u128 {
#define HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_SHIFT 12
#define HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_MASK \
(~((1ull << HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_SHIFT) - 1))
+#define HV_MSR_VP_ASSIST_PAGE (HV_X64_MSR_VP_ASSIST_PAGE)
/* Hyper-V Enlightened VMCS version mask in nested features CPUID */
#define HV_X64_ENLIGHTENED_VMCS_VERSION 0xff
@@ -410,6 +411,7 @@ union hv_x64_msr_hypercall_contents {
#if defined(CONFIG_ARM64)
#define HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE BIT(8)
#define HV_STIMER_DIRECT_MODE_AVAILABLE BIT(13)
+#define HV_VP_ASSIST_PAGE_ADDRESS_SHIFT 12
#endif /* CONFIG_ARM64 */
#if defined(CONFIG_X86)
@@ -1163,6 +1165,8 @@ enum hv_register_name {
#define HV_MSR_STIMER0_CONFIG (HV_X64_MSR_STIMER0_CONFIG)
#define HV_MSR_STIMER0_COUNT (HV_X64_MSR_STIMER0_COUNT)
+#define HV_VP_ASSIST_PAGE_ADDRESS_SHIFT HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_SHIFT
+
#elif defined(CONFIG_ARM64) /* CONFIG_X86 */
#define HV_MSR_CRASH_P0 (HV_REGISTER_GUEST_CRASH_P0)
@@ -1185,7 +1189,7 @@ enum hv_register_name {
#define HV_MSR_STIMER0_CONFIG (HV_REGISTER_STIMER0_CONFIG)
#define HV_MSR_STIMER0_COUNT (HV_REGISTER_STIMER0_COUNT)
-
+#define HV_MSR_VP_ASSIST_PAGE (HV_REGISTER_VP_ASSIST_PAGE)
#endif /* CONFIG_ARM64 */
union hv_explicit_suspend_register {
--
2.43.0
* [PATCH v2 03/15] Drivers: hv: Move vmbus_handler to common code
From: Naman Jain @ 2026-04-23 12:41 UTC (permalink / raw)
Move the vmbus_handler global variable and hv_setup_vmbus_handler()/
hv_remove_vmbus_handler() from arch/x86 to drivers/hv/hv_common.c.
hv_setup_vmbus_handler() is called unconditionally in vmbus_bus_init()
and works for both x86 (sysvec handler) and arm64 (vmbus_percpu_isr).
This eliminates the need for separate per-cpu vmbus handler setup
functions and __weak stubs, which would otherwise be needed for adding
ARM64 support in the MSHV_VTL driver, where a custom per-cpu vmbus
handler must be set.
Signed-off-by: Naman Jain <namjain@linux.microsoft.com>
---
arch/x86/kernel/cpu/mshyperv.c | 12 ------------
drivers/hv/hv_common.c | 9 +++++++--
drivers/hv/vmbus_drv.c | 17 +++++++++--------
include/asm-generic/mshyperv.h | 1 +
4 files changed, 17 insertions(+), 22 deletions(-)
diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
index 89a2eb8a0722..68706ff5880e 100644
--- a/arch/x86/kernel/cpu/mshyperv.c
+++ b/arch/x86/kernel/cpu/mshyperv.c
@@ -145,7 +145,6 @@ void hv_set_msr(unsigned int reg, u64 value)
EXPORT_SYMBOL_GPL(hv_set_msr);
static void (*mshv_handler)(void);
-static void (*vmbus_handler)(void);
static void (*hv_stimer0_handler)(void);
static void (*hv_kexec_handler)(void);
static void (*hv_crash_handler)(struct pt_regs *regs);
@@ -172,17 +171,6 @@ void hv_setup_mshv_handler(void (*handler)(void))
mshv_handler = handler;
}
-void hv_setup_vmbus_handler(void (*handler)(void))
-{
- vmbus_handler = handler;
-}
-
-void hv_remove_vmbus_handler(void)
-{
- /* We have no way to deallocate the interrupt gate */
- vmbus_handler = NULL;
-}
-
/*
* Routines to do per-architecture handling of stimer0
* interrupts when in Direct Mode
diff --git a/drivers/hv/hv_common.c b/drivers/hv/hv_common.c
index e8633bc51d56..eb7b0028b45d 100644
--- a/drivers/hv/hv_common.c
+++ b/drivers/hv/hv_common.c
@@ -758,13 +758,18 @@ bool __weak hv_isolation_type_tdx(void)
}
EXPORT_SYMBOL_GPL(hv_isolation_type_tdx);
-void __weak hv_setup_vmbus_handler(void (*handler)(void))
+void (*vmbus_handler)(void);
+EXPORT_SYMBOL_GPL(vmbus_handler);
+
+void hv_setup_vmbus_handler(void (*handler)(void))
{
+ vmbus_handler = handler;
}
EXPORT_SYMBOL_GPL(hv_setup_vmbus_handler);
-void __weak hv_remove_vmbus_handler(void)
+void hv_remove_vmbus_handler(void)
{
+ vmbus_handler = NULL;
}
EXPORT_SYMBOL_GPL(hv_remove_vmbus_handler);
diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
index bc4fc1951ae1..052ca8b11cee 100644
--- a/drivers/hv/vmbus_drv.c
+++ b/drivers/hv/vmbus_drv.c
@@ -1415,7 +1415,8 @@ EXPORT_SYMBOL_FOR_MODULES(vmbus_isr, "mshv_vtl");
static irqreturn_t vmbus_percpu_isr(int irq, void *dev_id)
{
- vmbus_isr();
+ if (vmbus_handler)
+ vmbus_handler();
return IRQ_HANDLED;
}
@@ -1517,8 +1518,10 @@ static int vmbus_bus_init(void)
vmbus_irq_initialized = true;
}
+ hv_setup_vmbus_handler(vmbus_isr);
+
if (vmbus_irq == -1) {
- hv_setup_vmbus_handler(vmbus_isr);
+ /* x86: sysvec handler uses vmbus_handler directly */
} else {
ret = request_percpu_irq(vmbus_irq, vmbus_percpu_isr,
"Hyper-V VMbus", &vmbus_evt);
@@ -1553,9 +1556,8 @@ static int vmbus_bus_init(void)
return 0;
err_connect:
- if (vmbus_irq == -1)
- hv_remove_vmbus_handler();
- else
+ hv_remove_vmbus_handler();
+ if (vmbus_irq != -1)
free_percpu_irq(vmbus_irq, &vmbus_evt);
err_setup:
if (IS_ENABLED(CONFIG_PREEMPT_RT) && vmbus_irq_initialized) {
@@ -3026,9 +3028,8 @@ static void __exit vmbus_exit(void)
vmbus_connection.conn_state = DISCONNECTED;
hv_stimer_global_cleanup();
vmbus_disconnect();
- if (vmbus_irq == -1)
- hv_remove_vmbus_handler();
- else
+ hv_remove_vmbus_handler();
+ if (vmbus_irq != -1)
free_percpu_irq(vmbus_irq, &vmbus_evt);
if (IS_ENABLED(CONFIG_PREEMPT_RT) && vmbus_irq_initialized) {
smpboot_unregister_percpu_thread(&vmbus_irq_threads);
diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h
index 2810aa05dc73..db183c8cfb95 100644
--- a/include/asm-generic/mshyperv.h
+++ b/include/asm-generic/mshyperv.h
@@ -179,6 +179,7 @@ static inline u64 hv_generate_guest_id(u64 kernel_version)
int hv_get_hypervisor_version(union hv_hypervisor_version_info *info);
+extern void (*vmbus_handler)(void);
void hv_setup_vmbus_handler(void (*handler)(void));
void hv_remove_vmbus_handler(void);
void hv_setup_stimer0_handler(void (*handler)(void));
--
2.43.0
* [PATCH v2 04/15] mshv_vtl: Refactor the driver for ARM64 support to be added
From: Naman Jain @ 2026-04-23 12:41 UTC (permalink / raw)
Refactor the MSHV_VTL driver to move some of the x86-specific code to
arch-specific files, and add corresponding functions for arm64.
Signed-off-by: Roman Kisel <romank@linux.microsoft.com>
Reviewed-by: Roman Kisel <vdso@mailbox.org>
Signed-off-by: Naman Jain <namjain@linux.microsoft.com>
---
arch/arm64/include/asm/mshyperv.h | 10 +++
arch/x86/hyperv/hv_vtl.c | 98 ++++++++++++++++++++++++++++
arch/x86/include/asm/mshyperv.h | 1 +
drivers/hv/mshv_vtl_main.c | 102 +-----------------------------
4 files changed, 111 insertions(+), 100 deletions(-)
diff --git a/arch/arm64/include/asm/mshyperv.h b/arch/arm64/include/asm/mshyperv.h
index b721d3134ab6..585b23a26f1b 100644
--- a/arch/arm64/include/asm/mshyperv.h
+++ b/arch/arm64/include/asm/mshyperv.h
@@ -60,6 +60,16 @@ static inline u64 hv_get_non_nested_msr(unsigned int reg)
ARM_SMCCC_SMC_64, \
ARM_SMCCC_OWNER_VENDOR_HYP, \
HV_SMCCC_FUNC_NUMBER)
+#ifdef CONFIG_HYPERV_VTL_MODE
+/*
+ * Get/Set the register. If the function returns `1`, that must be done via
+ * a hypercall. Returning `0` means success.
+ */
+static inline int hv_vtl_get_set_reg(struct hv_register_assoc *regs, bool set, bool shared)
+{
+ return 1;
+}
+#endif
#include <asm-generic/mshyperv.h>
diff --git a/arch/x86/hyperv/hv_vtl.c b/arch/x86/hyperv/hv_vtl.c
index 9b6a9bc4ab76..09d81f9b853c 100644
--- a/arch/x86/hyperv/hv_vtl.c
+++ b/arch/x86/hyperv/hv_vtl.c
@@ -17,6 +17,8 @@
#include <asm/realmode.h>
#include <asm/reboot.h>
#include <asm/smap.h>
+#include <uapi/asm/mtrr.h>
+#include <asm/debugreg.h>
#include <linux/export.h>
#include <../kernel/smpboot.h>
#include "../../kernel/fpu/legacy.h"
@@ -281,3 +283,99 @@ void mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0)
kernel_fpu_end();
}
EXPORT_SYMBOL(mshv_vtl_return_call);
+
+/* Static table mapping register names to their corresponding actions */
+static const struct {
+ enum hv_register_name reg_name;
+ int debug_reg_num; /* -1 if not a debug register */
+ u32 msr_addr; /* 0 if not an MSR */
+} reg_table[] = {
+ /* Debug registers */
+ {HV_X64_REGISTER_DR0, 0, 0},
+ {HV_X64_REGISTER_DR1, 1, 0},
+ {HV_X64_REGISTER_DR2, 2, 0},
+ {HV_X64_REGISTER_DR3, 3, 0},
+ {HV_X64_REGISTER_DR6, 6, 0},
+ /* MTRR MSRs */
+ {HV_X64_REGISTER_MSR_MTRR_CAP, -1, MSR_MTRRcap},
+ {HV_X64_REGISTER_MSR_MTRR_DEF_TYPE, -1, MSR_MTRRdefType},
+ {HV_X64_REGISTER_MSR_MTRR_PHYS_BASE0, -1, MTRRphysBase_MSR(0)},
+ {HV_X64_REGISTER_MSR_MTRR_PHYS_BASE1, -1, MTRRphysBase_MSR(1)},
+ {HV_X64_REGISTER_MSR_MTRR_PHYS_BASE2, -1, MTRRphysBase_MSR(2)},
+ {HV_X64_REGISTER_MSR_MTRR_PHYS_BASE3, -1, MTRRphysBase_MSR(3)},
+ {HV_X64_REGISTER_MSR_MTRR_PHYS_BASE4, -1, MTRRphysBase_MSR(4)},
+ {HV_X64_REGISTER_MSR_MTRR_PHYS_BASE5, -1, MTRRphysBase_MSR(5)},
+ {HV_X64_REGISTER_MSR_MTRR_PHYS_BASE6, -1, MTRRphysBase_MSR(6)},
+ {HV_X64_REGISTER_MSR_MTRR_PHYS_BASE7, -1, MTRRphysBase_MSR(7)},
+ {HV_X64_REGISTER_MSR_MTRR_PHYS_BASE8, -1, MTRRphysBase_MSR(8)},
+ {HV_X64_REGISTER_MSR_MTRR_PHYS_BASE9, -1, MTRRphysBase_MSR(9)},
+ {HV_X64_REGISTER_MSR_MTRR_PHYS_BASEA, -1, MTRRphysBase_MSR(0xa)},
+ {HV_X64_REGISTER_MSR_MTRR_PHYS_BASEB, -1, MTRRphysBase_MSR(0xb)},
+ {HV_X64_REGISTER_MSR_MTRR_PHYS_BASEC, -1, MTRRphysBase_MSR(0xc)},
+ {HV_X64_REGISTER_MSR_MTRR_PHYS_BASED, -1, MTRRphysBase_MSR(0xd)},
+ {HV_X64_REGISTER_MSR_MTRR_PHYS_BASEE, -1, MTRRphysBase_MSR(0xe)},
+ {HV_X64_REGISTER_MSR_MTRR_PHYS_BASEF, -1, MTRRphysBase_MSR(0xf)},
+ {HV_X64_REGISTER_MSR_MTRR_PHYS_MASK0, -1, MTRRphysMask_MSR(0)},
+ {HV_X64_REGISTER_MSR_MTRR_PHYS_MASK1, -1, MTRRphysMask_MSR(1)},
+ {HV_X64_REGISTER_MSR_MTRR_PHYS_MASK2, -1, MTRRphysMask_MSR(2)},
+ {HV_X64_REGISTER_MSR_MTRR_PHYS_MASK3, -1, MTRRphysMask_MSR(3)},
+ {HV_X64_REGISTER_MSR_MTRR_PHYS_MASK4, -1, MTRRphysMask_MSR(4)},
+ {HV_X64_REGISTER_MSR_MTRR_PHYS_MASK5, -1, MTRRphysMask_MSR(5)},
+ {HV_X64_REGISTER_MSR_MTRR_PHYS_MASK6, -1, MTRRphysMask_MSR(6)},
+ {HV_X64_REGISTER_MSR_MTRR_PHYS_MASK7, -1, MTRRphysMask_MSR(7)},
+ {HV_X64_REGISTER_MSR_MTRR_PHYS_MASK8, -1, MTRRphysMask_MSR(8)},
+ {HV_X64_REGISTER_MSR_MTRR_PHYS_MASK9, -1, MTRRphysMask_MSR(9)},
+ {HV_X64_REGISTER_MSR_MTRR_PHYS_MASKA, -1, MTRRphysMask_MSR(0xa)},
+ {HV_X64_REGISTER_MSR_MTRR_PHYS_MASKB, -1, MTRRphysMask_MSR(0xb)},
+ {HV_X64_REGISTER_MSR_MTRR_PHYS_MASKC, -1, MTRRphysMask_MSR(0xc)},
+ {HV_X64_REGISTER_MSR_MTRR_PHYS_MASKD, -1, MTRRphysMask_MSR(0xd)},
+ {HV_X64_REGISTER_MSR_MTRR_PHYS_MASKE, -1, MTRRphysMask_MSR(0xe)},
+ {HV_X64_REGISTER_MSR_MTRR_PHYS_MASKF, -1, MTRRphysMask_MSR(0xf)},
+ {HV_X64_REGISTER_MSR_MTRR_FIX64K00000, -1, MSR_MTRRfix64K_00000},
+ {HV_X64_REGISTER_MSR_MTRR_FIX16K80000, -1, MSR_MTRRfix16K_80000},
+ {HV_X64_REGISTER_MSR_MTRR_FIX16KA0000, -1, MSR_MTRRfix16K_A0000},
+ {HV_X64_REGISTER_MSR_MTRR_FIX4KC0000, -1, MSR_MTRRfix4K_C0000},
+ {HV_X64_REGISTER_MSR_MTRR_FIX4KC8000, -1, MSR_MTRRfix4K_C8000},
+ {HV_X64_REGISTER_MSR_MTRR_FIX4KD0000, -1, MSR_MTRRfix4K_D0000},
+ {HV_X64_REGISTER_MSR_MTRR_FIX4KD8000, -1, MSR_MTRRfix4K_D8000},
+ {HV_X64_REGISTER_MSR_MTRR_FIX4KE0000, -1, MSR_MTRRfix4K_E0000},
+ {HV_X64_REGISTER_MSR_MTRR_FIX4KE8000, -1, MSR_MTRRfix4K_E8000},
+ {HV_X64_REGISTER_MSR_MTRR_FIX4KF0000, -1, MSR_MTRRfix4K_F0000},
+ {HV_X64_REGISTER_MSR_MTRR_FIX4KF8000, -1, MSR_MTRRfix4K_F8000},
+};
+
+int hv_vtl_get_set_reg(struct hv_register_assoc *regs, bool set, bool shared)
+{
+ u64 *reg64;
+ enum hv_register_name gpr_name;
+ int i;
+
+ gpr_name = regs->name;
+ reg64 = &regs->value.reg64;
+
+ /* Search for the register in the table */
+ for (i = 0; i < ARRAY_SIZE(reg_table); i++) {
+ if (reg_table[i].reg_name != gpr_name)
+ continue;
+ if (reg_table[i].debug_reg_num != -1) {
+ /* Handle debug registers */
+ if (gpr_name == HV_X64_REGISTER_DR6 && !shared)
+ goto hypercall;
+ if (set)
+ native_set_debugreg(reg_table[i].debug_reg_num, *reg64);
+ else
+ *reg64 = native_get_debugreg(reg_table[i].debug_reg_num);
+ } else {
+ /* Handle MSRs */
+ if (set)
+ wrmsrl(reg_table[i].msr_addr, *reg64);
+ else
+ rdmsrl(reg_table[i].msr_addr, *reg64);
+ }
+ return 0;
+ }
+
+hypercall:
+ return 1;
+}
+EXPORT_SYMBOL(hv_vtl_get_set_reg);
diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
index 95b452387969..08278547b84c 100644
--- a/arch/x86/include/asm/mshyperv.h
+++ b/arch/x86/include/asm/mshyperv.h
@@ -290,6 +290,7 @@ void mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0);
void mshv_vtl_return_call_init(u64 vtl_return_offset);
void mshv_vtl_return_hypercall(void);
void __mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0);
+int hv_vtl_get_set_reg(struct hv_register_assoc *regs, bool set, bool shared);
#else
static inline void __init hv_vtl_init_platform(void) {}
static inline int __init hv_vtl_early_init(void) { return 0; }
diff --git a/drivers/hv/mshv_vtl_main.c b/drivers/hv/mshv_vtl_main.c
index 5856975f32e1..b607b6e7e121 100644
--- a/drivers/hv/mshv_vtl_main.c
+++ b/drivers/hv/mshv_vtl_main.c
@@ -19,10 +19,8 @@
#include <linux/poll.h>
#include <linux/file.h>
#include <linux/vmalloc.h>
-#include <asm/debugreg.h>
#include <asm/mshyperv.h>
#include <trace/events/ipi.h>
-#include <uapi/asm/mtrr.h>
#include <uapi/linux/mshv.h>
#include <hyperv/hvhdk.h>
@@ -505,102 +503,6 @@ static int mshv_vtl_ioctl_set_poll_file(struct mshv_vtl_set_poll_file __user *us
return 0;
}
-/* Static table mapping register names to their corresponding actions */
-static const struct {
- enum hv_register_name reg_name;
- int debug_reg_num; /* -1 if not a debug register */
- u32 msr_addr; /* 0 if not an MSR */
-} reg_table[] = {
- /* Debug registers */
- {HV_X64_REGISTER_DR0, 0, 0},
- {HV_X64_REGISTER_DR1, 1, 0},
- {HV_X64_REGISTER_DR2, 2, 0},
- {HV_X64_REGISTER_DR3, 3, 0},
- {HV_X64_REGISTER_DR6, 6, 0},
- /* MTRR MSRs */
- {HV_X64_REGISTER_MSR_MTRR_CAP, -1, MSR_MTRRcap},
- {HV_X64_REGISTER_MSR_MTRR_DEF_TYPE, -1, MSR_MTRRdefType},
- {HV_X64_REGISTER_MSR_MTRR_PHYS_BASE0, -1, MTRRphysBase_MSR(0)},
- {HV_X64_REGISTER_MSR_MTRR_PHYS_BASE1, -1, MTRRphysBase_MSR(1)},
- {HV_X64_REGISTER_MSR_MTRR_PHYS_BASE2, -1, MTRRphysBase_MSR(2)},
- {HV_X64_REGISTER_MSR_MTRR_PHYS_BASE3, -1, MTRRphysBase_MSR(3)},
- {HV_X64_REGISTER_MSR_MTRR_PHYS_BASE4, -1, MTRRphysBase_MSR(4)},
- {HV_X64_REGISTER_MSR_MTRR_PHYS_BASE5, -1, MTRRphysBase_MSR(5)},
- {HV_X64_REGISTER_MSR_MTRR_PHYS_BASE6, -1, MTRRphysBase_MSR(6)},
- {HV_X64_REGISTER_MSR_MTRR_PHYS_BASE7, -1, MTRRphysBase_MSR(7)},
- {HV_X64_REGISTER_MSR_MTRR_PHYS_BASE8, -1, MTRRphysBase_MSR(8)},
- {HV_X64_REGISTER_MSR_MTRR_PHYS_BASE9, -1, MTRRphysBase_MSR(9)},
- {HV_X64_REGISTER_MSR_MTRR_PHYS_BASEA, -1, MTRRphysBase_MSR(0xa)},
- {HV_X64_REGISTER_MSR_MTRR_PHYS_BASEB, -1, MTRRphysBase_MSR(0xb)},
- {HV_X64_REGISTER_MSR_MTRR_PHYS_BASEC, -1, MTRRphysBase_MSR(0xc)},
- {HV_X64_REGISTER_MSR_MTRR_PHYS_BASED, -1, MTRRphysBase_MSR(0xd)},
- {HV_X64_REGISTER_MSR_MTRR_PHYS_BASEE, -1, MTRRphysBase_MSR(0xe)},
- {HV_X64_REGISTER_MSR_MTRR_PHYS_BASEF, -1, MTRRphysBase_MSR(0xf)},
- {HV_X64_REGISTER_MSR_MTRR_PHYS_MASK0, -1, MTRRphysMask_MSR(0)},
- {HV_X64_REGISTER_MSR_MTRR_PHYS_MASK1, -1, MTRRphysMask_MSR(1)},
- {HV_X64_REGISTER_MSR_MTRR_PHYS_MASK2, -1, MTRRphysMask_MSR(2)},
- {HV_X64_REGISTER_MSR_MTRR_PHYS_MASK3, -1, MTRRphysMask_MSR(3)},
- {HV_X64_REGISTER_MSR_MTRR_PHYS_MASK4, -1, MTRRphysMask_MSR(4)},
- {HV_X64_REGISTER_MSR_MTRR_PHYS_MASK5, -1, MTRRphysMask_MSR(5)},
- {HV_X64_REGISTER_MSR_MTRR_PHYS_MASK6, -1, MTRRphysMask_MSR(6)},
- {HV_X64_REGISTER_MSR_MTRR_PHYS_MASK7, -1, MTRRphysMask_MSR(7)},
- {HV_X64_REGISTER_MSR_MTRR_PHYS_MASK8, -1, MTRRphysMask_MSR(8)},
- {HV_X64_REGISTER_MSR_MTRR_PHYS_MASK9, -1, MTRRphysMask_MSR(9)},
- {HV_X64_REGISTER_MSR_MTRR_PHYS_MASKA, -1, MTRRphysMask_MSR(0xa)},
- {HV_X64_REGISTER_MSR_MTRR_PHYS_MASKB, -1, MTRRphysMask_MSR(0xb)},
- {HV_X64_REGISTER_MSR_MTRR_PHYS_MASKC, -1, MTRRphysMask_MSR(0xc)},
- {HV_X64_REGISTER_MSR_MTRR_PHYS_MASKD, -1, MTRRphysMask_MSR(0xd)},
- {HV_X64_REGISTER_MSR_MTRR_PHYS_MASKE, -1, MTRRphysMask_MSR(0xe)},
- {HV_X64_REGISTER_MSR_MTRR_PHYS_MASKF, -1, MTRRphysMask_MSR(0xf)},
- {HV_X64_REGISTER_MSR_MTRR_FIX64K00000, -1, MSR_MTRRfix64K_00000},
- {HV_X64_REGISTER_MSR_MTRR_FIX16K80000, -1, MSR_MTRRfix16K_80000},
- {HV_X64_REGISTER_MSR_MTRR_FIX16KA0000, -1, MSR_MTRRfix16K_A0000},
- {HV_X64_REGISTER_MSR_MTRR_FIX4KC0000, -1, MSR_MTRRfix4K_C0000},
- {HV_X64_REGISTER_MSR_MTRR_FIX4KC8000, -1, MSR_MTRRfix4K_C8000},
- {HV_X64_REGISTER_MSR_MTRR_FIX4KD0000, -1, MSR_MTRRfix4K_D0000},
- {HV_X64_REGISTER_MSR_MTRR_FIX4KD8000, -1, MSR_MTRRfix4K_D8000},
- {HV_X64_REGISTER_MSR_MTRR_FIX4KE0000, -1, MSR_MTRRfix4K_E0000},
- {HV_X64_REGISTER_MSR_MTRR_FIX4KE8000, -1, MSR_MTRRfix4K_E8000},
- {HV_X64_REGISTER_MSR_MTRR_FIX4KF0000, -1, MSR_MTRRfix4K_F0000},
- {HV_X64_REGISTER_MSR_MTRR_FIX4KF8000, -1, MSR_MTRRfix4K_F8000},
-};
-
-static int mshv_vtl_get_set_reg(struct hv_register_assoc *regs, bool set)
-{
- u64 *reg64;
- enum hv_register_name gpr_name;
- int i;
-
- gpr_name = regs->name;
- reg64 = &regs->value.reg64;
-
- /* Search for the register in the table */
- for (i = 0; i < ARRAY_SIZE(reg_table); i++) {
- if (reg_table[i].reg_name != gpr_name)
- continue;
- if (reg_table[i].debug_reg_num != -1) {
- /* Handle debug registers */
- if (gpr_name == HV_X64_REGISTER_DR6 &&
- !mshv_vsm_capabilities.dr6_shared)
- goto hypercall;
- if (set)
- native_set_debugreg(reg_table[i].debug_reg_num, *reg64);
- else
- *reg64 = native_get_debugreg(reg_table[i].debug_reg_num);
- } else {
- /* Handle MSRs */
- if (set)
- wrmsrl(reg_table[i].msr_addr, *reg64);
- else
- rdmsrl(reg_table[i].msr_addr, *reg64);
- }
- return 0;
- }
-
-hypercall:
- return 1;
-}
-
static void mshv_vtl_return(struct mshv_vtl_cpu_context *vtl0)
{
struct hv_vp_assist_page *hvp;
@@ -720,7 +622,7 @@ mshv_vtl_ioctl_get_regs(void __user *user_args)
sizeof(reg)))
return -EFAULT;
- ret = mshv_vtl_get_set_reg(&reg, false);
+ ret = hv_vtl_get_set_reg(&reg, false, mshv_vsm_capabilities.dr6_shared);
if (!ret)
goto copy_args; /* No need of hypercall */
ret = vtl_get_vp_register(&reg);
@@ -751,7 +653,7 @@ mshv_vtl_ioctl_set_regs(void __user *user_args)
if (copy_from_user(&reg, (void __user *)args.regs_ptr, sizeof(reg)))
return -EFAULT;
- ret = mshv_vtl_get_set_reg(&reg, true);
+ ret = hv_vtl_get_set_reg(&reg, true, mshv_vsm_capabilities.dr6_shared);
if (!ret)
return ret; /* No need of hypercall */
ret = vtl_set_vp_register(&reg);
--
2.43.0
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v2 05/15] Drivers: hv: Export vmbus_interrupt for mshv_vtl module
2026-04-23 12:41 [PATCH v2 00/15] Add arm64 support in MSHV_VTL Naman Jain
` (3 preceding siblings ...)
2026-04-23 12:41 ` [PATCH v2 04/15] mshv_vtl: Refactor the driver for ARM64 support to be added Naman Jain
@ 2026-04-23 12:41 ` Naman Jain
2026-04-23 12:41 ` [PATCH v2 06/15] mshv_vtl: Make sint vector architecture neutral Naman Jain
` (9 subsequent siblings)
14 siblings, 0 replies; 31+ messages in thread
From: Naman Jain @ 2026-04-23 12:41 UTC (permalink / raw)
To: K . Y . Srinivasan, Haiyang Zhang, Wei Liu, Dexuan Cui, Long Li,
Catalin Marinas, Will Deacon, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H . Peter Anvin, Arnd Bergmann,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti,
Michael Kelley
Cc: Marc Zyngier, Timothy Hayes, Lorenzo Pieralisi, Sascha Bischoff,
mrigendrachaubey, Naman Jain, linux-hyperv, linux-arm-kernel,
linux-kernel, linux-arch, linux-riscv, vdso, ssengar
vmbus_interrupt is used in mshv_vtl_main.c to set the SINT vector.
When CONFIG_MSHV_VTL=m and CONFIG_HYPERV_VMBUS=y (built-in), the module
cannot access vmbus_interrupt at load time since it is not exported.
Export it using EXPORT_SYMBOL_FOR_MODULES, consistent with the existing
pattern used for vmbus_isr.
Reviewed-by: Michael Kelley <mhklinux@outlook.com>
Reviewed-by: Roman Kisel <vdso@mailbox.org>
Signed-off-by: Naman Jain <namjain@linux.microsoft.com>
---
drivers/hv/vmbus_drv.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
index 052ca8b11cee..047ad2848782 100644
--- a/drivers/hv/vmbus_drv.c
+++ b/drivers/hv/vmbus_drv.c
@@ -57,6 +57,7 @@ static DEFINE_PER_CPU(long, vmbus_evt);
/* Values parsed from ACPI DSDT */
int vmbus_irq;
int vmbus_interrupt;
+EXPORT_SYMBOL_FOR_MODULES(vmbus_interrupt, "mshv_vtl");
/*
* If the Confidential VMBus is used, the data on the "wire" is not
--
2.43.0
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v2 06/15] mshv_vtl: Make sint vector architecture neutral
2026-04-23 12:41 [PATCH v2 00/15] Add arm64 support in MSHV_VTL Naman Jain
` (4 preceding siblings ...)
2026-04-23 12:41 ` [PATCH v2 05/15] Drivers: hv: Export vmbus_interrupt for mshv_vtl module Naman Jain
@ 2026-04-23 12:41 ` Naman Jain
2026-04-23 12:41 ` [PATCH v2 07/15] arm64: hyperv: Add support for mshv_vtl_return_call Naman Jain
` (8 subsequent siblings)
14 siblings, 0 replies; 31+ messages in thread
From: Naman Jain @ 2026-04-23 12:41 UTC (permalink / raw)
To: K . Y . Srinivasan, Haiyang Zhang, Wei Liu, Dexuan Cui, Long Li,
Catalin Marinas, Will Deacon, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H . Peter Anvin, Arnd Bergmann,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti,
Michael Kelley
Cc: Marc Zyngier, Timothy Hayes, Lorenzo Pieralisi, Sascha Bischoff,
mrigendrachaubey, Naman Jain, linux-hyperv, linux-arm-kernel,
linux-kernel, linux-arch, linux-riscv, vdso, ssengar
Generalize the synthetic interrupt source (SINT) vector to use the
vmbus_interrupt variable instead, which automatically takes care of
architectures where HYPERVISOR_CALLBACK_VECTOR is not present (arm64).
Signed-off-by: Roman Kisel <romank@linux.microsoft.com>
Reviewed-by: Michael Kelley <mhklinux@outlook.com>
Reviewed-by: Roman Kisel <vdso@mailbox.org>
Signed-off-by: Naman Jain <namjain@linux.microsoft.com>
---
drivers/hv/mshv_vtl_main.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/hv/mshv_vtl_main.c b/drivers/hv/mshv_vtl_main.c
index b607b6e7e121..91517b45d526 100644
--- a/drivers/hv/mshv_vtl_main.c
+++ b/drivers/hv/mshv_vtl_main.c
@@ -234,7 +234,7 @@ static void mshv_vtl_synic_enable_regs(unsigned int cpu)
union hv_synic_sint sint;
sint.as_uint64 = 0;
- sint.vector = HYPERVISOR_CALLBACK_VECTOR;
+ sint.vector = vmbus_interrupt;
sint.masked = false;
sint.auto_eoi = hv_recommend_using_aeoi();
@@ -753,7 +753,7 @@ static void mshv_vtl_synic_mask_vmbus_sint(void *info)
const u8 *mask = info;
sint.as_uint64 = 0;
- sint.vector = HYPERVISOR_CALLBACK_VECTOR;
+ sint.vector = vmbus_interrupt;
sint.masked = (*mask != 0);
sint.auto_eoi = hv_recommend_using_aeoi();
--
2.43.0
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v2 07/15] arm64: hyperv: Add support for mshv_vtl_return_call
2026-04-23 12:41 [PATCH v2 00/15] Add arm64 support in MSHV_VTL Naman Jain
` (5 preceding siblings ...)
2026-04-23 12:41 ` [PATCH v2 06/15] mshv_vtl: Make sint vector architecture neutral Naman Jain
@ 2026-04-23 12:41 ` Naman Jain
2026-04-23 13:56 ` Mark Rutland
` (2 more replies)
2026-04-23 12:41 ` [PATCH v2 08/15] Drivers: hv: Move hv_call_(get|set)_vp_registers() declarations Naman Jain
` (7 subsequent siblings)
14 siblings, 3 replies; 31+ messages in thread
From: Naman Jain @ 2026-04-23 12:41 UTC (permalink / raw)
To: K . Y . Srinivasan, Haiyang Zhang, Wei Liu, Dexuan Cui, Long Li,
Catalin Marinas, Will Deacon, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H . Peter Anvin, Arnd Bergmann,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti,
Michael Kelley
Cc: Marc Zyngier, Timothy Hayes, Lorenzo Pieralisi, Sascha Bischoff,
mrigendrachaubey, Naman Jain, linux-hyperv, linux-arm-kernel,
linux-kernel, linux-arch, linux-riscv, vdso, ssengar
Add the arm64 variant of mshv_vtl_return_call() to support the MSHV_VTL
driver on arm64. This function enables the transition between Virtual
Trust Levels (VTLs) in MSHV_VTL when the kernel acts as a paravisor.
Signed-off-by: Roman Kisel <romank@linux.microsoft.com>
Reviewed-by: Roman Kisel <vdso@mailbox.org>
Signed-off-by: Naman Jain <namjain@linux.microsoft.com>
---
arch/arm64/hyperv/Makefile | 1 +
arch/arm64/hyperv/hv_vtl.c | 158 ++++++++++++++++++++++++++++++
arch/arm64/include/asm/mshyperv.h | 13 +++
arch/x86/include/asm/mshyperv.h | 2 -
drivers/hv/mshv_vtl.h | 3 +
include/asm-generic/mshyperv.h | 2 +
6 files changed, 177 insertions(+), 2 deletions(-)
create mode 100644 arch/arm64/hyperv/hv_vtl.c
diff --git a/arch/arm64/hyperv/Makefile b/arch/arm64/hyperv/Makefile
index 87c31c001da9..9701a837a6e1 100644
--- a/arch/arm64/hyperv/Makefile
+++ b/arch/arm64/hyperv/Makefile
@@ -1,2 +1,3 @@
# SPDX-License-Identifier: GPL-2.0
obj-y := hv_core.o mshyperv.o
+obj-$(CONFIG_HYPERV_VTL_MODE) += hv_vtl.o
diff --git a/arch/arm64/hyperv/hv_vtl.c b/arch/arm64/hyperv/hv_vtl.c
new file mode 100644
index 000000000000..59cbeb74e7b9
--- /dev/null
+++ b/arch/arm64/hyperv/hv_vtl.c
@@ -0,0 +1,158 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2026, Microsoft, Inc.
+ *
+ * Authors:
+ * Roman Kisel <romank@linux.microsoft.com>
+ * Naman Jain <namjain@linux.microsoft.com>
+ */
+
+#include <asm/mshyperv.h>
+#include <asm/neon.h>
+#include <linux/export.h>
+
+void mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0)
+{
+ struct user_fpsimd_state fpsimd_state;
+ u64 base_ptr = (u64)vtl0->x;
+
+ /*
+ * Obtain the CPU FPSIMD registers for VTL context switch.
+ * This saves the current task's FP/NEON state and allows us to
+ * safely load VTL0's FP/NEON context for the hypercall.
+ */
+ kernel_neon_begin(&fpsimd_state);
+
+ /*
+ * VTL switch for ARM64 platform - managing VTL0's CPU context.
+ * We explicitly use the stack to save the base pointer, and use x16
+ * as our working register for accessing the context structure.
+ *
+ * Register Handling:
+ * - X0-X17: Saved/restored (general-purpose, shared for VTL communication)
+ * - X18: NOT touched - hypervisor-managed per-VTL (platform register)
+ * - X19-X30: Saved/restored (part of VTL0's execution context)
+ * - Q0-Q31: Saved/restored (128-bit NEON/floating-point registers, shared)
+ * - SP: Not in structure, hypervisor-managed per-VTL
+ *
+ * X29 (FP) and X30 (LR) are in the structure and must be saved/restored
+ * as part of VTL0's complete execution state.
+ */
+ asm __volatile__ (
+ /* Save base pointer to stack explicitly, then load into x16 */
+ "str %0, [sp, #-16]!\n\t" /* Push base pointer onto stack */
+ "mov x16, %0\n\t" /* Load base pointer into x16 */
+ /* Volatile registers (Windows ARM64 ABI: x0-x17) */
+ "ldp x0, x1, [x16]\n\t"
+ "ldp x2, x3, [x16, #(2*8)]\n\t"
+ "ldp x4, x5, [x16, #(4*8)]\n\t"
+ "ldp x6, x7, [x16, #(6*8)]\n\t"
+ "ldp x8, x9, [x16, #(8*8)]\n\t"
+ "ldp x10, x11, [x16, #(10*8)]\n\t"
+ "ldp x12, x13, [x16, #(12*8)]\n\t"
+ "ldp x14, x15, [x16, #(14*8)]\n\t"
+ /* x16 will be loaded last, after saving base pointer */
+ "ldr x17, [x16, #(17*8)]\n\t"
+ /* x18 is hypervisor-managed per-VTL - DO NOT LOAD */
+
+ /* General-purpose registers: x19-x30 */
+ "ldp x19, x20, [x16, #(19*8)]\n\t"
+ "ldp x21, x22, [x16, #(21*8)]\n\t"
+ "ldp x23, x24, [x16, #(23*8)]\n\t"
+ "ldp x25, x26, [x16, #(25*8)]\n\t"
+ "ldp x27, x28, [x16, #(27*8)]\n\t"
+
+ /* Frame pointer and link register */
+ "ldp x29, x30, [x16, #(29*8)]\n\t"
+
+ /* Shared NEON/FP registers: Q0-Q31 (128-bit) */
+ "ldp q0, q1, [x16, #(32*8)]\n\t"
+ "ldp q2, q3, [x16, #(32*8 + 2*16)]\n\t"
+ "ldp q4, q5, [x16, #(32*8 + 4*16)]\n\t"
+ "ldp q6, q7, [x16, #(32*8 + 6*16)]\n\t"
+ "ldp q8, q9, [x16, #(32*8 + 8*16)]\n\t"
+ "ldp q10, q11, [x16, #(32*8 + 10*16)]\n\t"
+ "ldp q12, q13, [x16, #(32*8 + 12*16)]\n\t"
+ "ldp q14, q15, [x16, #(32*8 + 14*16)]\n\t"
+ "ldp q16, q17, [x16, #(32*8 + 16*16)]\n\t"
+ "ldp q18, q19, [x16, #(32*8 + 18*16)]\n\t"
+ "ldp q20, q21, [x16, #(32*8 + 20*16)]\n\t"
+ "ldp q22, q23, [x16, #(32*8 + 22*16)]\n\t"
+ "ldp q24, q25, [x16, #(32*8 + 24*16)]\n\t"
+ "ldp q26, q27, [x16, #(32*8 + 26*16)]\n\t"
+ "ldp q28, q29, [x16, #(32*8 + 28*16)]\n\t"
+ "ldp q30, q31, [x16, #(32*8 + 30*16)]\n\t"
+
+ /* Now load x16 itself */
+ "ldr x16, [x16, #(16*8)]\n\t"
+
+ /* Return to the lower VTL */
+ "hvc #3\n\t"
+
+ /* Save context after return - reload base pointer from stack */
+ "stp x16, x17, [sp, #-16]!\n\t" /* Save x16, x17 temporarily */
+ "ldr x16, [sp, #16]\n\t" /* Reload base pointer (skip saved x16,x17) */
+
+ /* Volatile registers */
+ "stp x0, x1, [x16]\n\t"
+ "stp x2, x3, [x16, #(2*8)]\n\t"
+ "stp x4, x5, [x16, #(4*8)]\n\t"
+ "stp x6, x7, [x16, #(6*8)]\n\t"
+ "stp x8, x9, [x16, #(8*8)]\n\t"
+ "stp x10, x11, [x16, #(10*8)]\n\t"
+ "stp x12, x13, [x16, #(12*8)]\n\t"
+ "stp x14, x15, [x16, #(14*8)]\n\t"
+ "ldp x0, x1, [sp], #16\n\t" /* Recover saved x16, x17 */
+ "stp x0, x1, [x16, #(16*8)]\n\t"
+ /* x18 is hypervisor-managed - DO NOT SAVE */
+
+ /* General-purpose registers: x19-x30 */
+ "stp x19, x20, [x16, #(19*8)]\n\t"
+ "stp x21, x22, [x16, #(21*8)]\n\t"
+ "stp x23, x24, [x16, #(23*8)]\n\t"
+ "stp x25, x26, [x16, #(25*8)]\n\t"
+ "stp x27, x28, [x16, #(27*8)]\n\t"
+ "stp x29, x30, [x16, #(29*8)]\n\t" /* Frame pointer and link register */
+
+ /* Shared NEON/FP registers: Q0-Q31 (128-bit) */
+ "stp q0, q1, [x16, #(32*8)]\n\t"
+ "stp q2, q3, [x16, #(32*8 + 2*16)]\n\t"
+ "stp q4, q5, [x16, #(32*8 + 4*16)]\n\t"
+ "stp q6, q7, [x16, #(32*8 + 6*16)]\n\t"
+ "stp q8, q9, [x16, #(32*8 + 8*16)]\n\t"
+ "stp q10, q11, [x16, #(32*8 + 10*16)]\n\t"
+ "stp q12, q13, [x16, #(32*8 + 12*16)]\n\t"
+ "stp q14, q15, [x16, #(32*8 + 14*16)]\n\t"
+ "stp q16, q17, [x16, #(32*8 + 16*16)]\n\t"
+ "stp q18, q19, [x16, #(32*8 + 18*16)]\n\t"
+ "stp q20, q21, [x16, #(32*8 + 20*16)]\n\t"
+ "stp q22, q23, [x16, #(32*8 + 22*16)]\n\t"
+ "stp q24, q25, [x16, #(32*8 + 24*16)]\n\t"
+ "stp q26, q27, [x16, #(32*8 + 26*16)]\n\t"
+ "stp q28, q29, [x16, #(32*8 + 28*16)]\n\t"
+ "stp q30, q31, [x16, #(32*8 + 30*16)]\n\t"
+
+ /* Clean up stack - pop base pointer */
+ "add sp, sp, #16\n\t"
+
+ : /* No outputs */
+ : /* Input */ "r"(base_ptr)
+ : /* Clobber list - x16 used as base, x18 is hypervisor-managed (not touched) */
+ "memory", "cc",
+ "x0", "x1", "x2", "x3", "x4", "x5",
+ "x6", "x7", "x8", "x9", "x10", "x11", "x12", "x13",
+ "x14", "x15", "x16", "x17", "x19", "x20", "x21",
+ "x22", "x23", "x24", "x25", "x26", "x27", "x28",
+ "x29", "x30",
+ "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7",
+ "v8", "v9", "v10", "v11", "v12", "v13", "v14", "v15",
+ "v16", "v17", "v18", "v19", "v20", "v21", "v22", "v23",
+ "v24", "v25", "v26", "v27", "v28", "v29", "v30", "v31");
+
+ /*
+ * Restore the task's FP/SIMD state and return CPU FPSIMD registers
+ * back to normal kernel use.
+ */
+ kernel_neon_end(&fpsimd_state);
+}
+EXPORT_SYMBOL(mshv_vtl_return_call);
diff --git a/arch/arm64/include/asm/mshyperv.h b/arch/arm64/include/asm/mshyperv.h
index 585b23a26f1b..9eb0e5999f29 100644
--- a/arch/arm64/include/asm/mshyperv.h
+++ b/arch/arm64/include/asm/mshyperv.h
@@ -60,6 +60,18 @@ static inline u64 hv_get_non_nested_msr(unsigned int reg)
ARM_SMCCC_SMC_64, \
ARM_SMCCC_OWNER_VENDOR_HYP, \
HV_SMCCC_FUNC_NUMBER)
+
+struct mshv_vtl_cpu_context {
+/*
+ * x18 is managed by the hypervisor. It won't be reloaded from this array.
+ * It is included here for convenience in array indexing.
+ * 'rsvd' field serves as alignment padding so q[] starts at offset 32*8=256.
+ */
+ __u64 x[31];
+ __u64 rsvd;
+ __uint128_t q[32];
+};
+
#ifdef CONFIG_HYPERV_VTL_MODE
/*
* Get/Set the register. If the function returns `1`, that must be done via
@@ -69,6 +81,7 @@ static inline int hv_vtl_get_set_reg(struct hv_register_assoc *regs, bool set, b
{
return 1;
}
+
#endif
#include <asm-generic/mshyperv.h>
diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
index 08278547b84c..b4d80c9a673a 100644
--- a/arch/x86/include/asm/mshyperv.h
+++ b/arch/x86/include/asm/mshyperv.h
@@ -286,7 +286,6 @@ struct mshv_vtl_cpu_context {
#ifdef CONFIG_HYPERV_VTL_MODE
void __init hv_vtl_init_platform(void);
int __init hv_vtl_early_init(void);
-void mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0);
void mshv_vtl_return_call_init(u64 vtl_return_offset);
void mshv_vtl_return_hypercall(void);
void __mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0);
@@ -294,7 +293,6 @@ int hv_vtl_get_set_reg(struct hv_register_assoc *regs, bool set, bool shared);
#else
static inline void __init hv_vtl_init_platform(void) {}
static inline int __init hv_vtl_early_init(void) { return 0; }
-static inline void mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0) {}
static inline void mshv_vtl_return_call_init(u64 vtl_return_offset) {}
static inline void mshv_vtl_return_hypercall(void) {}
static inline void __mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0) {}
diff --git a/drivers/hv/mshv_vtl.h b/drivers/hv/mshv_vtl.h
index a6eea52f7aa2..103f07371f3f 100644
--- a/drivers/hv/mshv_vtl.h
+++ b/drivers/hv/mshv_vtl.h
@@ -22,4 +22,7 @@ struct mshv_vtl_run {
char vtl_ret_actions[MSHV_MAX_RUN_MSG_SIZE];
};
+static_assert(sizeof(struct mshv_vtl_cpu_context) <= 1024,
+ "struct mshv_vtl_cpu_context exceeds reserved space in struct mshv_vtl_run");
+
#endif /* _MSHV_VTL_H */
diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h
index db183c8cfb95..8cdf2a9fbdfb 100644
--- a/include/asm-generic/mshyperv.h
+++ b/include/asm-generic/mshyperv.h
@@ -396,8 +396,10 @@ static inline int hv_deposit_memory(u64 partition_id, u64 status)
#if IS_ENABLED(CONFIG_HYPERV_VTL_MODE)
u8 __init get_vtl(void);
+void mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0);
#else
static inline u8 get_vtl(void) { return 0; }
+static inline void mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0) {}
#endif
#endif
--
2.43.0
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v2 08/15] Drivers: hv: Move hv_call_(get|set)_vp_registers() declarations
2026-04-23 12:41 [PATCH v2 00/15] Add arm64 support in MSHV_VTL Naman Jain
` (6 preceding siblings ...)
2026-04-23 12:41 ` [PATCH v2 07/15] arm64: hyperv: Add support for mshv_vtl_return_call Naman Jain
@ 2026-04-23 12:41 ` Naman Jain
2026-04-27 5:39 ` Michael Kelley
2026-04-23 12:41 ` [PATCH v2 09/15] Drivers: hv: mshv_vtl: Move hv_vtl_configure_reg_page() to x86 Naman Jain
` (6 subsequent siblings)
14 siblings, 1 reply; 31+ messages in thread
From: Naman Jain @ 2026-04-23 12:41 UTC (permalink / raw)
To: K . Y . Srinivasan, Haiyang Zhang, Wei Liu, Dexuan Cui, Long Li,
Catalin Marinas, Will Deacon, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H . Peter Anvin, Arnd Bergmann,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti,
Michael Kelley
Cc: Marc Zyngier, Timothy Hayes, Lorenzo Pieralisi, Sascha Bischoff,
mrigendrachaubey, Naman Jain, linux-hyperv, linux-arm-kernel,
linux-kernel, linux-arch, linux-riscv, vdso, ssengar
Move hv_call_get_vp_registers() and hv_call_set_vp_registers()
declarations from drivers/hv/mshv.h to include/asm-generic/mshyperv.h.
These functions are defined in mshv_common.c and are going to be called
from both drivers/hv/ and arch/x86/hyperv/hv_vtl.c. The latter never
included mshv.h, relying on implicit declaration visibility. Moving the
declarations to the arch-generic Hyper-V header makes them properly
visible to all architecture-specific callers.
Provide static inline stubs returning -EOPNOTSUPP when neither
CONFIG_MSHV_ROOT nor CONFIG_MSHV_VTL is enabled.
Signed-off-by: Naman Jain <namjain@linux.microsoft.com>
---
drivers/hv/mshv.h | 8 --------
include/asm-generic/mshyperv.h | 26 ++++++++++++++++++++++++++
2 files changed, 26 insertions(+), 8 deletions(-)
diff --git a/drivers/hv/mshv.h b/drivers/hv/mshv.h
index d4813df92b9c..0fcb7f9ba6a9 100644
--- a/drivers/hv/mshv.h
+++ b/drivers/hv/mshv.h
@@ -14,14 +14,6 @@
memchr_inv(&((STRUCT).MEMBER), \
0, sizeof_field(typeof(STRUCT), MEMBER))
-int hv_call_get_vp_registers(u32 vp_index, u64 partition_id, u16 count,
- union hv_input_vtl input_vtl,
- struct hv_register_assoc *registers);
-
-int hv_call_set_vp_registers(u32 vp_index, u64 partition_id, u16 count,
- union hv_input_vtl input_vtl,
- struct hv_register_assoc *registers);
-
int hv_call_get_partition_property(u64 partition_id, u64 property_code,
u64 *property_value);
diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h
index 8cdf2a9fbdfb..ef0b9466808c 100644
--- a/include/asm-generic/mshyperv.h
+++ b/include/asm-generic/mshyperv.h
@@ -394,6 +394,32 @@ static inline int hv_deposit_memory(u64 partition_id, u64 status)
return hv_deposit_memory_node(NUMA_NO_NODE, partition_id, status);
}
+#if IS_ENABLED(CONFIG_MSHV_ROOT) || IS_ENABLED(CONFIG_MSHV_VTL)
+int hv_call_get_vp_registers(u32 vp_index, u64 partition_id, u16 count,
+ union hv_input_vtl input_vtl,
+ struct hv_register_assoc *registers);
+
+int hv_call_set_vp_registers(u32 vp_index, u64 partition_id, u16 count,
+ union hv_input_vtl input_vtl,
+ struct hv_register_assoc *registers);
+#else
+static inline int hv_call_get_vp_registers(u32 vp_index, u64 partition_id,
+ u16 count,
+ union hv_input_vtl input_vtl,
+ struct hv_register_assoc *registers)
+{
+ return -EOPNOTSUPP;
+}
+
+static inline int hv_call_set_vp_registers(u32 vp_index, u64 partition_id,
+ u16 count,
+ union hv_input_vtl input_vtl,
+ struct hv_register_assoc *registers)
+{
+ return -EOPNOTSUPP;
+}
+#endif /* CONFIG_MSHV_ROOT || CONFIG_MSHV_VTL */
+
#if IS_ENABLED(CONFIG_HYPERV_VTL_MODE)
u8 __init get_vtl(void);
void mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0);
--
2.43.0
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v2 09/15] Drivers: hv: mshv_vtl: Move hv_vtl_configure_reg_page() to x86
2026-04-23 12:41 [PATCH v2 00/15] Add arm64 support in MSHV_VTL Naman Jain
` (7 preceding siblings ...)
2026-04-23 12:41 ` [PATCH v2 08/15] Drivers: hv: Move hv_call_(get|set)_vp_registers() declarations Naman Jain
@ 2026-04-23 12:41 ` Naman Jain
2026-04-27 5:40 ` Michael Kelley
2026-04-23 12:42 ` [PATCH v2 10/15] arm64: hyperv: Add hv_vtl_configure_reg_page() stub Naman Jain
` (5 subsequent siblings)
14 siblings, 1 reply; 31+ messages in thread
From: Naman Jain @ 2026-04-23 12:41 UTC (permalink / raw)
To: K . Y . Srinivasan, Haiyang Zhang, Wei Liu, Dexuan Cui, Long Li,
Catalin Marinas, Will Deacon, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H . Peter Anvin, Arnd Bergmann,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti,
Michael Kelley
Cc: Marc Zyngier, Timothy Hayes, Lorenzo Pieralisi, Sascha Bischoff,
mrigendrachaubey, Naman Jain, linux-hyperv, linux-arm-kernel,
linux-kernel, linux-arch, linux-riscv, vdso, ssengar
Move hv_vtl_configure_reg_page() from drivers/hv/mshv_vtl_main.c to
arch/x86/hyperv/hv_vtl.c. The register page overlay is an x86-specific
feature that uses HV_X64_REGISTER_REG_PAGE, so its configuration belongs
in architecture-specific code.
Move struct mshv_vtl_per_cpu and union hv_synic_overlay_page_msr to
include/asm-generic/mshyperv.h so they are visible to both arch and
driver code.
Change the return type from void to bool so the caller can determine
whether the register page was successfully configured and set
mshv_has_reg_page accordingly.
Signed-off-by: Naman Jain <namjain@linux.microsoft.com>
---
arch/x86/hyperv/hv_vtl.c | 32 ++++++++++++++++++++++
drivers/hv/mshv_vtl_main.c | 49 +++-------------------------------
include/asm-generic/mshyperv.h | 17 ++++++++++++
3 files changed, 53 insertions(+), 45 deletions(-)
diff --git a/arch/x86/hyperv/hv_vtl.c b/arch/x86/hyperv/hv_vtl.c
index 09d81f9b853c..f3ffb6a7cb2d 100644
--- a/arch/x86/hyperv/hv_vtl.c
+++ b/arch/x86/hyperv/hv_vtl.c
@@ -20,6 +20,7 @@
#include <uapi/asm/mtrr.h>
#include <asm/debugreg.h>
#include <linux/export.h>
+#include <linux/hyperv.h>
#include <../kernel/smpboot.h>
#include "../../kernel/fpu/legacy.h"
@@ -259,6 +260,37 @@ int __init hv_vtl_early_init(void)
return 0;
}
+static const union hv_input_vtl input_vtl_zero;
+
+bool hv_vtl_configure_reg_page(struct mshv_vtl_per_cpu *per_cpu)
+{
+ struct hv_register_assoc reg_assoc = {};
+ union hv_synic_overlay_page_msr overlay = {};
+ struct page *reg_page;
+
+ reg_page = alloc_page(GFP_KERNEL | __GFP_ZERO | __GFP_RETRY_MAYFAIL);
+ if (!reg_page) {
+ WARN(1, "failed to allocate register page\n");
+ return false;
+ }
+
+ overlay.enabled = 1;
+ overlay.pfn = page_to_hvpfn(reg_page);
+ reg_assoc.name = HV_X64_REGISTER_REG_PAGE;
+ reg_assoc.value.reg64 = overlay.as_uint64;
+
+ if (hv_call_set_vp_registers(HV_VP_INDEX_SELF, HV_PARTITION_ID_SELF,
+ 1, input_vtl_zero, &reg_assoc)) {
+ WARN(1, "failed to setup register page\n");
+ __free_page(reg_page);
+ return false;
+ }
+
+ per_cpu->reg_page = reg_page;
+ return true;
+}
+EXPORT_SYMBOL_GPL(hv_vtl_configure_reg_page);
+
DEFINE_STATIC_CALL_NULL(__mshv_vtl_return_hypercall, void (*)(void));
void mshv_vtl_return_call_init(u64 vtl_return_offset)
diff --git a/drivers/hv/mshv_vtl_main.c b/drivers/hv/mshv_vtl_main.c
index 91517b45d526..c79d24317b8e 100644
--- a/drivers/hv/mshv_vtl_main.c
+++ b/drivers/hv/mshv_vtl_main.c
@@ -78,21 +78,6 @@ struct mshv_vtl {
u64 id;
};
-struct mshv_vtl_per_cpu {
- struct mshv_vtl_run *run;
- struct page *reg_page;
-};
-
-/* SYNIC_OVERLAY_PAGE_MSR - internal, identical to hv_synic_simp */
-union hv_synic_overlay_page_msr {
- u64 as_uint64;
- struct {
- u64 enabled: 1;
- u64 reserved: 11;
- u64 pfn: 52;
- } __packed;
-};
-
static struct mutex mshv_vtl_poll_file_lock;
static union hv_register_vsm_page_offsets mshv_vsm_page_offsets;
static union hv_register_vsm_capabilities mshv_vsm_capabilities;
@@ -201,34 +186,6 @@ static struct page *mshv_vtl_cpu_reg_page(int cpu)
return *per_cpu_ptr(&mshv_vtl_per_cpu.reg_page, cpu);
}
-static void mshv_vtl_configure_reg_page(struct mshv_vtl_per_cpu *per_cpu)
-{
- struct hv_register_assoc reg_assoc = {};
- union hv_synic_overlay_page_msr overlay = {};
- struct page *reg_page;
-
- reg_page = alloc_page(GFP_KERNEL | __GFP_ZERO | __GFP_RETRY_MAYFAIL);
- if (!reg_page) {
- WARN(1, "failed to allocate register page\n");
- return;
- }
-
- overlay.enabled = 1;
- overlay.pfn = page_to_hvpfn(reg_page);
- reg_assoc.name = HV_X64_REGISTER_REG_PAGE;
- reg_assoc.value.reg64 = overlay.as_uint64;
-
- if (hv_call_set_vp_registers(HV_VP_INDEX_SELF, HV_PARTITION_ID_SELF,
- 1, input_vtl_zero, &reg_assoc)) {
- WARN(1, "failed to setup register page\n");
- __free_page(reg_page);
- return;
- }
-
- per_cpu->reg_page = reg_page;
- mshv_has_reg_page = true;
-}
-
static void mshv_vtl_synic_enable_regs(unsigned int cpu)
{
union hv_synic_sint sint;
@@ -329,8 +286,10 @@ static int mshv_vtl_alloc_context(unsigned int cpu)
if (!per_cpu->run)
return -ENOMEM;
- if (mshv_vsm_capabilities.intercept_page_available)
- mshv_vtl_configure_reg_page(per_cpu);
+ if (mshv_vsm_capabilities.intercept_page_available) {
+ if (hv_vtl_configure_reg_page(per_cpu))
+ mshv_has_reg_page = true;
+ }
mshv_vtl_synic_enable_regs(cpu);
diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h
index ef0b9466808c..9e86178c182e 100644
--- a/include/asm-generic/mshyperv.h
+++ b/include/asm-generic/mshyperv.h
@@ -420,12 +420,29 @@ static inline int hv_call_set_vp_registers(u32 vp_index, u64 partition_id,
}
#endif /* CONFIG_MSHV_ROOT || CONFIG_MSHV_VTL */
+struct mshv_vtl_per_cpu {
+ struct mshv_vtl_run *run;
+ struct page *reg_page;
+};
+
#if IS_ENABLED(CONFIG_HYPERV_VTL_MODE)
+/* SYNIC_OVERLAY_PAGE_MSR - internal, identical to hv_synic_simp */
+union hv_synic_overlay_page_msr {
+ u64 as_uint64;
+ struct {
+ u64 enabled: 1;
+ u64 reserved: 11;
+ u64 pfn: 52;
+ } __packed;
+};
+
u8 __init get_vtl(void);
void mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0);
+bool hv_vtl_configure_reg_page(struct mshv_vtl_per_cpu *per_cpu);
#else
static inline u8 get_vtl(void) { return 0; }
static inline void mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0) {}
+static inline bool hv_vtl_configure_reg_page(struct mshv_vtl_per_cpu *per_cpu) { return false; }
#endif
#endif
--
2.43.0
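An aside on the union being moved into asm-generic/mshyperv.h above: it packs an enable bit, 11 reserved bits, and a 52-bit PFN into a single 64-bit MSR value. A standalone sketch of that packing (illustrative names, not the kernel's; bitfield ordering assumes the LSB-first layout used by GCC/Clang on little-endian targets):

```c
#include <stdint.h>

/* Illustrative copy of the hv_synic_overlay_page_msr layout:
 * bit 0 = enabled, bits 1..11 = reserved, bits 12..63 = PFN. */
union overlay_msr {
	uint64_t as_uint64;
	struct {
		uint64_t enabled : 1;
		uint64_t reserved : 11;
		uint64_t pfn : 52;
	};
};

/* Build the MSR value for a given PFN, the way
 * hv_vtl_configure_reg_page() does with page_to_hvpfn(). */
static inline uint64_t overlay_msr_value(uint64_t pfn)
{
	union overlay_msr m = { .as_uint64 = 0 };

	m.enabled = 1;
	m.pfn = pfn;
	return m.as_uint64;
}
```

Because the PFN field starts at bit 12, the packed value is simply the 4K-aligned overlay address with the enable bit set in the low bits.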
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v2 10/15] arm64: hyperv: Add hv_vtl_configure_reg_page() stub
2026-04-23 12:41 [PATCH v2 00/15] Add arm64 support in MSHV_VTL Naman Jain
` (8 preceding siblings ...)
2026-04-23 12:41 ` [PATCH v2 09/15] Drivers: hv: mshv_vtl: Move hv_vtl_configure_reg_page() to x86 Naman Jain
@ 2026-04-23 12:42 ` Naman Jain
2026-04-23 12:42 ` [PATCH v2 11/15] mshv_vtl: Let userspace do VSM configuration Naman Jain
` (4 subsequent siblings)
14 siblings, 0 replies; 31+ messages in thread
From: Naman Jain @ 2026-04-23 12:42 UTC (permalink / raw)
To: K . Y . Srinivasan, Haiyang Zhang, Wei Liu, Dexuan Cui, Long Li,
Catalin Marinas, Will Deacon, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H . Peter Anvin, Arnd Bergmann,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti,
Michael Kelley
Cc: Marc Zyngier, Timothy Hayes, Lorenzo Pieralisi, Sascha Bischoff,
mrigendrachaubey, Naman Jain, linux-hyperv, linux-arm-kernel,
linux-kernel, linux-arch, linux-riscv, vdso, ssengar
ARM64 does not support the register page overlay, so provide a stub
that returns false. This pairs with the preceding commit that moved
hv_vtl_configure_reg_page() out of common code into architecture-
specific files.
Signed-off-by: Naman Jain <namjain@linux.microsoft.com>
---
arch/arm64/hyperv/hv_vtl.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/arch/arm64/hyperv/hv_vtl.c b/arch/arm64/hyperv/hv_vtl.c
index 59cbeb74e7b9..e07f6a865350 100644
--- a/arch/arm64/hyperv/hv_vtl.c
+++ b/arch/arm64/hyperv/hv_vtl.c
@@ -156,3 +156,10 @@ void mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0)
kernel_neon_end(&fpsimd_state);
}
EXPORT_SYMBOL(mshv_vtl_return_call);
+
+bool hv_vtl_configure_reg_page(struct mshv_vtl_per_cpu *per_cpu)
+{
+ pr_debug("Register page not supported on ARM64\n");
+ return false;
+}
+EXPORT_SYMBOL_GPL(hv_vtl_configure_reg_page);
--
2.43.0
* [PATCH v2 11/15] mshv_vtl: Let userspace do VSM configuration
2026-04-23 12:41 [PATCH v2 00/15] Add arm64 support in MSHV_VTL Naman Jain
` (9 preceding siblings ...)
2026-04-23 12:42 ` [PATCH v2 10/15] arm64: hyperv: Add hv_vtl_configure_reg_page() stub Naman Jain
@ 2026-04-23 12:42 ` Naman Jain
2026-04-23 12:42 ` [PATCH v2 12/15] mshv_vtl: Move VSM code page offset logic to x86 files Naman Jain
` (3 subsequent siblings)
14 siblings, 0 replies; 31+ messages in thread
From: Naman Jain @ 2026-04-23 12:42 UTC (permalink / raw)
To: K . Y . Srinivasan, Haiyang Zhang, Wei Liu, Dexuan Cui, Long Li,
Catalin Marinas, Will Deacon, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H . Peter Anvin, Arnd Bergmann,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti,
Michael Kelley
Cc: Marc Zyngier, Timothy Hayes, Lorenzo Pieralisi, Sascha Bischoff,
mrigendrachaubey, Naman Jain, linux-hyperv, linux-arm-kernel,
linux-kernel, linux-arch, linux-riscv, vdso, ssengar
The kernel currently sets the VSM partition configuration register,
thereby imposing a particular VSM configuration on userspace (OpenVMM).
Userspace is capable of configuring this register itself, and already
does so through the generic hypercall interface.
The configuration can vary by use case and architecture, so let
userspace take care of it and remove this logic from the kernel driver.
Signed-off-by: Roman Kisel <romank@linux.microsoft.com>
Reviewed-by: Michael Kelley <mhklinux@outlook.com>
Reviewed-by: Roman Kisel <vdso@mailbox.org>
Signed-off-by: Naman Jain <namjain@linux.microsoft.com>
---
drivers/hv/mshv_vtl_main.c | 29 -----------------------------
1 file changed, 29 deletions(-)
diff --git a/drivers/hv/mshv_vtl_main.c b/drivers/hv/mshv_vtl_main.c
index c79d24317b8e..4c9ae65ad3e8 100644
--- a/drivers/hv/mshv_vtl_main.c
+++ b/drivers/hv/mshv_vtl_main.c
@@ -222,30 +222,6 @@ static int mshv_vtl_get_vsm_regs(void)
return ret;
}
-static int mshv_vtl_configure_vsm_partition(struct device *dev)
-{
- union hv_register_vsm_partition_config config;
- struct hv_register_assoc reg_assoc;
-
- config.as_uint64 = 0;
- config.default_vtl_protection_mask = HV_MAP_GPA_PERMISSIONS_MASK;
- config.enable_vtl_protection = 1;
- config.zero_memory_on_reset = 1;
- config.intercept_vp_startup = 1;
- config.intercept_cpuid_unimplemented = 1;
-
- if (mshv_vsm_capabilities.intercept_page_available) {
- dev_dbg(dev, "using intercept page\n");
- config.intercept_page = 1;
- }
-
- reg_assoc.name = HV_REGISTER_VSM_PARTITION_CONFIG;
- reg_assoc.value.reg64 = config.as_uint64;
-
- return hv_call_set_vp_registers(HV_VP_INDEX_SELF, HV_PARTITION_ID_SELF,
- 1, input_vtl_zero, &reg_assoc);
-}
-
static void mshv_vtl_vmbus_isr(void)
{
struct hv_per_cpu_context *per_cpu;
@@ -1168,11 +1144,6 @@ static int __init mshv_vtl_init(void)
ret = -ENODEV;
goto free_dev;
}
- if (mshv_vtl_configure_vsm_partition(dev)) {
- dev_emerg(dev, "VSM configuration failed !!\n");
- ret = -ENODEV;
- goto free_dev;
- }
mshv_vtl_return_call_init(mshv_vsm_page_offsets.vtl_return_offset);
ret = hv_vtl_setup_synic();
--
2.43.0
* [PATCH v2 12/15] mshv_vtl: Move VSM code page offset logic to x86 files
2026-04-23 12:41 [PATCH v2 00/15] Add arm64 support in MSHV_VTL Naman Jain
` (10 preceding siblings ...)
2026-04-23 12:42 ` [PATCH v2 11/15] mshv_vtl: Let userspace do VSM configuration Naman Jain
@ 2026-04-23 12:42 ` Naman Jain
2026-04-27 5:40 ` Michael Kelley
2026-04-23 12:42 ` [PATCH v2 13/15] mshv_vtl: Add remaining support for arm64 Naman Jain
` (2 subsequent siblings)
14 siblings, 1 reply; 31+ messages in thread
From: Naman Jain @ 2026-04-23 12:42 UTC (permalink / raw)
To: K . Y . Srinivasan, Haiyang Zhang, Wei Liu, Dexuan Cui, Long Li,
Catalin Marinas, Will Deacon, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H . Peter Anvin, Arnd Bergmann,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti,
Michael Kelley
Cc: Marc Zyngier, Timothy Hayes, Lorenzo Pieralisi, Sascha Bischoff,
mrigendrachaubey, Naman Jain, linux-hyperv, linux-arm-kernel,
linux-kernel, linux-arch, linux-riscv, vdso, ssengar
The VSM code page offset register (HV_REGISTER_VSM_CODE_PAGE_OFFSETS)
is x86-specific; its value configures the static call used to return
to VTL0 via the hypercall page. Move the register read from the common
mshv_vtl_get_vsm_regs() into the x86 mshv_vtl_return_call_init(),
which is the sole consumer of the offset.
Change mshv_vtl_return_call_init() from taking a u64 parameter
to taking no arguments, and rename mshv_vtl_get_vsm_regs() to
mshv_vtl_get_vsm_cap_reg() since it now only fetches
HV_REGISTER_VSM_CAPABILITIES.
No functional change on x86. This prepares the common driver code for
ARM64 where VSM code page offsets do not apply.
Signed-off-by: Naman Jain <namjain@linux.microsoft.com>
---
arch/x86/hyperv/hv_vtl.c | 19 +++++++++++++++++--
arch/x86/include/asm/mshyperv.h | 4 ++--
drivers/hv/mshv_vtl_main.c | 24 +++++++++++++-----------
3 files changed, 32 insertions(+), 15 deletions(-)
diff --git a/arch/x86/hyperv/hv_vtl.c b/arch/x86/hyperv/hv_vtl.c
index f3ffb6a7cb2d..7c10b34cf8a4 100644
--- a/arch/x86/hyperv/hv_vtl.c
+++ b/arch/x86/hyperv/hv_vtl.c
@@ -293,10 +293,25 @@ EXPORT_SYMBOL_GPL(hv_vtl_configure_reg_page);
DEFINE_STATIC_CALL_NULL(__mshv_vtl_return_hypercall, void (*)(void));
-void mshv_vtl_return_call_init(u64 vtl_return_offset)
+int mshv_vtl_return_call_init(void)
{
+ struct hv_register_assoc vsm_pg_offset_reg;
+ union hv_register_vsm_page_offsets offsets;
+ int ret;
+
+ vsm_pg_offset_reg.name = HV_REGISTER_VSM_CODE_PAGE_OFFSETS;
+
+ ret = hv_call_get_vp_registers(HV_VP_INDEX_SELF, HV_PARTITION_ID_SELF,
+ 1, input_vtl_zero, &vsm_pg_offset_reg);
+ if (ret)
+ return ret;
+
+ offsets.as_uint64 = vsm_pg_offset_reg.value.reg64;
+
static_call_update(__mshv_vtl_return_hypercall,
- (void *)((u8 *)hv_hypercall_pg + vtl_return_offset));
+ (void *)((u8 *)hv_hypercall_pg + offsets.vtl_return_offset));
+
+ return 0;
}
EXPORT_SYMBOL(mshv_vtl_return_call_init);
diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
index b4d80c9a673a..b48f115c1292 100644
--- a/arch/x86/include/asm/mshyperv.h
+++ b/arch/x86/include/asm/mshyperv.h
@@ -286,14 +286,14 @@ struct mshv_vtl_cpu_context {
#ifdef CONFIG_HYPERV_VTL_MODE
void __init hv_vtl_init_platform(void);
int __init hv_vtl_early_init(void);
-void mshv_vtl_return_call_init(u64 vtl_return_offset);
+int mshv_vtl_return_call_init(void);
void mshv_vtl_return_hypercall(void);
void __mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0);
int hv_vtl_get_set_reg(struct hv_register_assoc *regs, bool set, bool shared);
#else
static inline void __init hv_vtl_init_platform(void) {}
static inline int __init hv_vtl_early_init(void) { return 0; }
-static inline void mshv_vtl_return_call_init(u64 vtl_return_offset) {}
+static inline int mshv_vtl_return_call_init(void) { return 0; }
static inline void mshv_vtl_return_hypercall(void) {}
static inline void __mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0) {}
#endif
diff --git a/drivers/hv/mshv_vtl_main.c b/drivers/hv/mshv_vtl_main.c
index 4c9ae65ad3e8..be498c9234fd 100644
--- a/drivers/hv/mshv_vtl_main.c
+++ b/drivers/hv/mshv_vtl_main.c
@@ -79,7 +79,6 @@ struct mshv_vtl {
};
static struct mutex mshv_vtl_poll_file_lock;
-static union hv_register_vsm_page_offsets mshv_vsm_page_offsets;
static union hv_register_vsm_capabilities mshv_vsm_capabilities;
static DEFINE_PER_CPU(struct mshv_vtl_poll_file, mshv_vtl_poll_file);
@@ -203,21 +202,19 @@ static void mshv_vtl_synic_enable_regs(unsigned int cpu)
/* VTL2 Host VSP SINT is (un)masked when the user mode requests that */
}
-static int mshv_vtl_get_vsm_regs(void)
+static int mshv_vtl_get_vsm_cap_reg(void)
{
- struct hv_register_assoc registers[2];
- int ret, count = 2;
+ struct hv_register_assoc vsm_capability_reg;
+ int ret;
- registers[0].name = HV_REGISTER_VSM_CODE_PAGE_OFFSETS;
- registers[1].name = HV_REGISTER_VSM_CAPABILITIES;
+ vsm_capability_reg.name = HV_REGISTER_VSM_CAPABILITIES;
ret = hv_call_get_vp_registers(HV_VP_INDEX_SELF, HV_PARTITION_ID_SELF,
- count, input_vtl_zero, registers);
+ 1, input_vtl_zero, &vsm_capability_reg);
if (ret)
return ret;
- mshv_vsm_page_offsets.as_uint64 = registers[0].value.reg64;
- mshv_vsm_capabilities.as_uint64 = registers[1].value.reg64;
+ mshv_vsm_capabilities.as_uint64 = vsm_capability_reg.value.reg64;
return ret;
}
@@ -1139,13 +1136,18 @@ static int __init mshv_vtl_init(void)
tasklet_init(&msg_dpc, mshv_vtl_sint_on_msg_dpc, 0);
init_waitqueue_head(&fd_wait_queue);
- if (mshv_vtl_get_vsm_regs()) {
+ if (mshv_vtl_get_vsm_cap_reg()) {
dev_emerg(dev, "Unable to get VSM capabilities !!\n");
ret = -ENODEV;
goto free_dev;
}
- mshv_vtl_return_call_init(mshv_vsm_page_offsets.vtl_return_offset);
+ ret = mshv_vtl_return_call_init();
+ if (ret) {
+ dev_err(dev, "mshv_vtl_return_call_init failed: %d\n", ret);
+ goto free_dev;
+ }
+
ret = hv_vtl_setup_synic();
if (ret)
goto free_dev;
--
2.43.0
* [PATCH v2 13/15] mshv_vtl: Add remaining support for arm64
2026-04-23 12:41 [PATCH v2 00/15] Add arm64 support in MSHV_VTL Naman Jain
` (11 preceding siblings ...)
2026-04-23 12:42 ` [PATCH v2 12/15] mshv_vtl: Move VSM code page offset logic to x86 files Naman Jain
@ 2026-04-23 12:42 ` Naman Jain
2026-04-23 12:42 ` [PATCH v2 14/15] Drivers: hv: Add 4K page dependency in MSHV_VTL Naman Jain
2026-04-23 12:42 ` [PATCH v2 15/15] Drivers: hv: Add ARM64 support for MSHV_VTL in Kconfig Naman Jain
14 siblings, 0 replies; 31+ messages in thread
From: Naman Jain @ 2026-04-23 12:42 UTC (permalink / raw)
To: K . Y . Srinivasan, Haiyang Zhang, Wei Liu, Dexuan Cui, Long Li,
Catalin Marinas, Will Deacon, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H . Peter Anvin, Arnd Bergmann,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti,
Michael Kelley
Cc: Marc Zyngier, Timothy Hayes, Lorenzo Pieralisi, Sascha Bischoff,
mrigendrachaubey, Naman Jain, linux-hyperv, linux-arm-kernel,
linux-kernel, linux-arch, linux-riscv, vdso, ssengar
Add the remaining support needed to make MSHV_VTL work on the arm64
architecture:
* Add a stub implementation of mshv_vtl_return_call_init(), as it is
not required on arm64.
* Guard the hugepage fault handling with config checks, as PUD-level
huge mappings are x86-specific here.
* Remove the fpu/legacy.h header inclusion: it was needed when x86
assembly code was present in this file, and was left behind when that
code moved to arch files (unrelated to arm64).
Signed-off-by: Roman Kisel <romank@linux.microsoft.com>
Signed-off-by: Naman Jain <namjain@linux.microsoft.com>
---
arch/arm64/include/asm/mshyperv.h | 2 ++
drivers/hv/mshv_vtl_main.c | 4 ++--
2 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/mshyperv.h b/arch/arm64/include/asm/mshyperv.h
index 9eb0e5999f29..6f668ec68b2f 100644
--- a/arch/arm64/include/asm/mshyperv.h
+++ b/arch/arm64/include/asm/mshyperv.h
@@ -82,6 +82,8 @@ static inline int hv_vtl_get_set_reg(struct hv_register_assoc *regs, bool set, b
return 1;
}
+/* Stubbed for arm64 */
+static inline int mshv_vtl_return_call_init(void) { return 0; }
#endif
#include <asm-generic/mshyperv.h>
diff --git a/drivers/hv/mshv_vtl_main.c b/drivers/hv/mshv_vtl_main.c
index be498c9234fd..d5308956dfb6 100644
--- a/drivers/hv/mshv_vtl_main.c
+++ b/drivers/hv/mshv_vtl_main.c
@@ -23,8 +23,6 @@
#include <trace/events/ipi.h>
#include <uapi/linux/mshv.h>
#include <hyperv/hvhdk.h>
-
-#include "../../kernel/fpu/legacy.h"
#include "mshv.h"
#include "mshv_vtl.h"
#include "hyperv_vmbus.h"
@@ -1077,10 +1075,12 @@ static vm_fault_t mshv_vtl_low_huge_fault(struct vm_fault *vmf, unsigned int ord
ret = vmf_insert_pfn_pmd(vmf, pfn, vmf->flags & FAULT_FLAG_WRITE);
return ret;
+#if defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)
case PUD_ORDER:
if (can_fault(vmf, PUD_SIZE, &pfn))
ret = vmf_insert_pfn_pud(vmf, pfn, vmf->flags & FAULT_FLAG_WRITE);
return ret;
+#endif
default:
return VM_FAULT_SIGBUS;
--
2.43.0
* [PATCH v2 14/15] Drivers: hv: Add 4K page dependency in MSHV_VTL
2026-04-23 12:41 [PATCH v2 00/15] Add arm64 support in MSHV_VTL Naman Jain
` (12 preceding siblings ...)
2026-04-23 12:42 ` [PATCH v2 13/15] mshv_vtl: Add remaining support for arm64 Naman Jain
@ 2026-04-23 12:42 ` Naman Jain
2026-04-23 12:42 ` [PATCH v2 15/15] Drivers: hv: Add ARM64 support for MSHV_VTL in Kconfig Naman Jain
14 siblings, 0 replies; 31+ messages in thread
From: Naman Jain @ 2026-04-23 12:42 UTC (permalink / raw)
To: K . Y . Srinivasan, Haiyang Zhang, Wei Liu, Dexuan Cui, Long Li,
Catalin Marinas, Will Deacon, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H . Peter Anvin, Arnd Bergmann,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti,
Michael Kelley
Cc: Marc Zyngier, Timothy Hayes, Lorenzo Pieralisi, Sascha Bischoff,
mrigendrachaubey, Naman Jain, linux-hyperv, linux-arm-kernel,
linux-kernel, linux-arch, linux-riscv, vdso, ssengar
Add a dependency on 4K page size to the MSHV_VTL Kconfig entry to back
the assumptions present in the code. x86 supports only a 4K page size
anyway, and for arm64, larger page sizes have not been validated.
This dependency can be removed once larger page sizes are supported.
Signed-off-by: Naman Jain <namjain@linux.microsoft.com>
---
drivers/hv/Kconfig | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/hv/Kconfig b/drivers/hv/Kconfig
index 7937ac0cbd0f..115821cc535c 100644
--- a/drivers/hv/Kconfig
+++ b/drivers/hv/Kconfig
@@ -96,6 +96,11 @@ config MSHV_VTL
# MTRRs are controlled by VTL0, and are not specific to individual VTLs.
# Therefore, do not attempt to access or modify MTRRs here.
depends on !MTRR
+ # The hypervisor interface operates on 4k pages. Enforcing it here
+ # simplifies many assumptions in the mshv_vtl code.
+ # VTL0 VMs can still use larger page sizes on ARM64 and are not
+ # limited by this setting.
+ depends on PAGE_SIZE_4KB
select CPUMASK_OFFSTACK
select VIRT_XFER_TO_GUEST_WORK
default n
--
2.43.0
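To illustrate the kind of assumption this dependency backs: the hypervisor's page granularity (HV_HYP_PAGE_SIZE) is always 4K, so with a 4K kernel page size the kernel-PFN to Hyper-V-PFN conversion is the identity. A minimal standalone sketch (illustrative, not the kernel's actual helpers):

```c
#include <stdint.h>

#define HV_HYP_PAGE_SHIFT 12			/* hypervisor pages are always 4K */
#define HV_HYP_PAGE_SIZE  (1UL << HV_HYP_PAGE_SHIFT)

/* Hypothetical 4K kernel build, as enforced by PAGE_SIZE_4KB */
#define PAGE_SHIFT 12

/* With PAGE_SHIFT == HV_HYP_PAGE_SHIFT this is the identity; with 16K
 * or 64K kernel pages a non-trivial shift would be required at every
 * PFN hand-off to the hypervisor. */
static inline uint64_t kernel_pfn_to_hvpfn(uint64_t pfn)
{
	return pfn << (PAGE_SHIFT - HV_HYP_PAGE_SHIFT);
}
```

Enforcing PAGE_SIZE_4KB in Kconfig sidesteps auditing every such conversion in the mshv_vtl code.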
* [PATCH v2 15/15] Drivers: hv: Add ARM64 support for MSHV_VTL in Kconfig
2026-04-23 12:41 [PATCH v2 00/15] Add arm64 support in MSHV_VTL Naman Jain
` (13 preceding siblings ...)
2026-04-23 12:42 ` [PATCH v2 14/15] Drivers: hv: Add 4K page dependency in MSHV_VTL Naman Jain
@ 2026-04-23 12:42 ` Naman Jain
14 siblings, 0 replies; 31+ messages in thread
From: Naman Jain @ 2026-04-23 12:42 UTC (permalink / raw)
To: K . Y . Srinivasan, Haiyang Zhang, Wei Liu, Dexuan Cui, Long Li,
Catalin Marinas, Will Deacon, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H . Peter Anvin, Arnd Bergmann,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti,
Michael Kelley
Cc: Marc Zyngier, Timothy Hayes, Lorenzo Pieralisi, Sascha Bischoff,
mrigendrachaubey, Naman Jain, linux-hyperv, linux-arm-kernel,
linux-kernel, linux-arch, linux-riscv, vdso, ssengar
Enable ARM64 support in MSHV_VTL Kconfig now that all the necessary
support is present.
Signed-off-by: Roman Kisel <romank@linux.microsoft.com>
Reviewed-by: Michael Kelley <mhklinux@outlook.com>
Reviewed-by: Roman Kisel <vdso@mailbox.org>
Signed-off-by: Naman Jain <namjain@linux.microsoft.com>
---
drivers/hv/Kconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/hv/Kconfig b/drivers/hv/Kconfig
index 115821cc535c..0bec3bc81a1a 100644
--- a/drivers/hv/Kconfig
+++ b/drivers/hv/Kconfig
@@ -87,7 +87,7 @@ config MSHV_ROOT
config MSHV_VTL
tristate "Microsoft Hyper-V VTL driver"
- depends on X86_64 && HYPERV_VTL_MODE
+ depends on (X86_64 || ARM64) && HYPERV_VTL_MODE
depends on HYPERV_VMBUS
# Mapping VTL0 memory to a userspace process in VTL2 is supported in OpenHCL.
# VTL2 for OpenHCL makes use of Huge Pages to improve performance on VMs,
--
2.43.0
* Re: [PATCH v2 07/15] arm64: hyperv: Add support for mshv_vtl_return_call
2026-04-23 12:41 ` [PATCH v2 07/15] arm64: hyperv: Add support for mshv_vtl_return_call Naman Jain
@ 2026-04-23 13:56 ` Mark Rutland
2026-04-29 9:56 ` Naman Jain
2026-04-23 14:00 ` Marc Zyngier
2026-04-27 5:38 ` Michael Kelley
2 siblings, 1 reply; 31+ messages in thread
From: Mark Rutland @ 2026-04-23 13:56 UTC (permalink / raw)
To: Naman Jain
Cc: K . Y . Srinivasan, Haiyang Zhang, Wei Liu, Dexuan Cui, Long Li,
Catalin Marinas, Will Deacon, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H . Peter Anvin, Arnd Bergmann,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti,
Michael Kelley, Marc Zyngier, Timothy Hayes, Lorenzo Pieralisi,
Sascha Bischoff, mrigendrachaubey, linux-hyperv, linux-arm-kernel,
linux-kernel, linux-arch, linux-riscv, vdso, ssengar
On Thu, Apr 23, 2026 at 12:41:57PM +0000, Naman Jain wrote:
> Add the arm64 variant of mshv_vtl_return_call() to support the MSHV_VTL
> driver on arm64. This function enables the transition between Virtual
> Trust Levels (VTLs) in MSHV_VTL when the kernel acts as a paravisor.
>
> Signed-off-by: Roman Kisel <romank@linux.microsoft.com>
> Reviewed-by: Roman Kisel <vdso@mailbox.org>
> Signed-off-by: Naman Jain <namjain@linux.microsoft.com>
> ---
> arch/arm64/hyperv/Makefile | 1 +
> arch/arm64/hyperv/hv_vtl.c | 158 ++++++++++++++++++++++++++++++
> arch/arm64/include/asm/mshyperv.h | 13 +++
> arch/x86/include/asm/mshyperv.h | 2 -
> drivers/hv/mshv_vtl.h | 3 +
> include/asm-generic/mshyperv.h | 2 +
> 6 files changed, 177 insertions(+), 2 deletions(-)
> create mode 100644 arch/arm64/hyperv/hv_vtl.c
>
> diff --git a/arch/arm64/hyperv/Makefile b/arch/arm64/hyperv/Makefile
> index 87c31c001da9..9701a837a6e1 100644
> --- a/arch/arm64/hyperv/Makefile
> +++ b/arch/arm64/hyperv/Makefile
> @@ -1,2 +1,3 @@
> # SPDX-License-Identifier: GPL-2.0
> obj-y := hv_core.o mshyperv.o
> +obj-$(CONFIG_HYPERV_VTL_MODE) += hv_vtl.o
> diff --git a/arch/arm64/hyperv/hv_vtl.c b/arch/arm64/hyperv/hv_vtl.c
> new file mode 100644
> index 000000000000..59cbeb74e7b9
> --- /dev/null
> +++ b/arch/arm64/hyperv/hv_vtl.c
> @@ -0,0 +1,158 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2026, Microsoft, Inc.
> + *
> + * Authors:
> + * Roman Kisel <romank@linux.microsoft.com>
> + * Naman Jain <namjain@linux.microsoft.com>
> + */
> +
> +#include <asm/mshyperv.h>
> +#include <asm/neon.h>
> +#include <linux/export.h>
> +
> +void mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0)
> +{
> + struct user_fpsimd_state fpsimd_state;
> + u64 base_ptr = (u64)vtl0->x;
> +
> + /*
> + * Obtain the CPU FPSIMD registers for VTL context switch.
> + * This saves the current task's FP/NEON state and allows us to
> + * safely load VTL0's FP/NEON context for the hypercall.
> + */
> + kernel_neon_begin(&fpsimd_state);
> +
> + /*
> + * VTL switch for ARM64 platform - managing VTL0's CPU context.
> + * We explicitly use the stack to save the base pointer, and use x16
> + * as our working register for accessing the context structure.
> + *
> + * Register Handling:
> + * - X0-X17: Saved/restored (general-purpose, shared for VTL communication)
> + * - X18: NOT touched - hypervisor-managed per-VTL (platform register)
> + * - X19-X30: Saved/restored (part of VTL0's execution context)
> + * - Q0-Q31: Saved/restored (128-bit NEON/floating-point registers, shared)
> + * - SP: Not in structure, hypervisor-managed per-VTL
> + *
> + * X29 (FP) and X30 (LR) are in the structure and must be saved/restored
> + * as part of VTL0's complete execution state.
> + */
> + asm __volatile__ (
> + /* Save base pointer to stack explicitly, then load into x16 */
> + "str %0, [sp, #-16]!\n\t" /* Push base pointer onto stack */
> + "mov x16, %0\n\t" /* Load base pointer into x16 */
> + /* Volatile registers (Windows ARM64 ABI: x0-x17) */
> + "ldp x0, x1, [x16]\n\t"
> + "ldp x2, x3, [x16, #(2*8)]\n\t"
> + "ldp x4, x5, [x16, #(4*8)]\n\t"
> + "ldp x6, x7, [x16, #(6*8)]\n\t"
> + "ldp x8, x9, [x16, #(8*8)]\n\t"
> + "ldp x10, x11, [x16, #(10*8)]\n\t"
> + "ldp x12, x13, [x16, #(12*8)]\n\t"
> + "ldp x14, x15, [x16, #(14*8)]\n\t"
> + /* x16 will be loaded last, after saving base pointer */
> + "ldr x17, [x16, #(17*8)]\n\t"
> + /* x18 is hypervisor-managed per-VTL - DO NOT LOAD */
> +
> + /* General-purpose registers: x19-x30 */
> + "ldp x19, x20, [x16, #(19*8)]\n\t"
> + "ldp x21, x22, [x16, #(21*8)]\n\t"
> + "ldp x23, x24, [x16, #(23*8)]\n\t"
> + "ldp x25, x26, [x16, #(25*8)]\n\t"
> + "ldp x27, x28, [x16, #(27*8)]\n\t"
> +
> + /* Frame pointer and link register */
> + "ldp x29, x30, [x16, #(29*8)]\n\t"
> +
> + /* Shared NEON/FP registers: Q0-Q31 (128-bit) */
> + "ldp q0, q1, [x16, #(32*8)]\n\t"
> + "ldp q2, q3, [x16, #(32*8 + 2*16)]\n\t"
> + "ldp q4, q5, [x16, #(32*8 + 4*16)]\n\t"
> + "ldp q6, q7, [x16, #(32*8 + 6*16)]\n\t"
> + "ldp q8, q9, [x16, #(32*8 + 8*16)]\n\t"
> + "ldp q10, q11, [x16, #(32*8 + 10*16)]\n\t"
> + "ldp q12, q13, [x16, #(32*8 + 12*16)]\n\t"
> + "ldp q14, q15, [x16, #(32*8 + 14*16)]\n\t"
> + "ldp q16, q17, [x16, #(32*8 + 16*16)]\n\t"
> + "ldp q18, q19, [x16, #(32*8 + 18*16)]\n\t"
> + "ldp q20, q21, [x16, #(32*8 + 20*16)]\n\t"
> + "ldp q22, q23, [x16, #(32*8 + 22*16)]\n\t"
> + "ldp q24, q25, [x16, #(32*8 + 24*16)]\n\t"
> + "ldp q26, q27, [x16, #(32*8 + 26*16)]\n\t"
> + "ldp q28, q29, [x16, #(32*8 + 28*16)]\n\t"
> + "ldp q30, q31, [x16, #(32*8 + 30*16)]\n\t"
> +
> + /* Now load x16 itself */
> + "ldr x16, [x16, #(16*8)]\n\t"
> +
> + /* Return to the lower VTL */
> + "hvc #3\n\t"
NAK to this.
* This is a non-SMCCC hypercall, which we have NAK'd in general in the
past for various reasons that I am not going to rehash here.
* It's not clear how this is going to be extended with necessary
architecture state in future (e.g. SVE, SME). This is not
future-proof, and I don't believe this is maintainable.
* This breaks general requirements for reliable stacktracing by
clobbering state (e.g. x29) that we depend upon being valid AT ALL
TIMES outside of entry code.
* IMO, if this needs to be saved/restored, that should happen in
whatever you are calling.
Mark.
* Re: [PATCH v2 07/15] arm64: hyperv: Add support for mshv_vtl_return_call
2026-04-23 12:41 ` [PATCH v2 07/15] arm64: hyperv: Add support for mshv_vtl_return_call Naman Jain
2026-04-23 13:56 ` Mark Rutland
@ 2026-04-23 14:00 ` Marc Zyngier
2026-04-27 5:38 ` Michael Kelley
2 siblings, 0 replies; 31+ messages in thread
From: Marc Zyngier @ 2026-04-23 14:00 UTC (permalink / raw)
To: Naman Jain
Cc: K . Y . Srinivasan, Haiyang Zhang, Wei Liu, Dexuan Cui, Long Li,
Catalin Marinas, Will Deacon, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H . Peter Anvin, Arnd Bergmann,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti,
Michael Kelley, Timothy Hayes, Lorenzo Pieralisi, Sascha Bischoff,
mrigendrachaubey, linux-hyperv, linux-arm-kernel, linux-kernel,
linux-arch, linux-riscv, vdso, ssengar
On Thu, 23 Apr 2026 13:41:57 +0100,
Naman Jain <namjain@linux.microsoft.com> wrote:
>
> Add the arm64 variant of mshv_vtl_return_call() to support the MSHV_VTL
> driver on arm64. This function enables the transition between Virtual
> Trust Levels (VTLs) in MSHV_VTL when the kernel acts as a paravisor.
>
> Signed-off-by: Roman Kisel <romank@linux.microsoft.com>
> Reviewed-by: Roman Kisel <vdso@mailbox.org>
> Signed-off-by: Naman Jain <namjain@linux.microsoft.com>
> ---
> arch/arm64/hyperv/Makefile | 1 +
> arch/arm64/hyperv/hv_vtl.c | 158 ++++++++++++++++++++++++++++++
> arch/arm64/include/asm/mshyperv.h | 13 +++
> arch/x86/include/asm/mshyperv.h | 2 -
> drivers/hv/mshv_vtl.h | 3 +
> include/asm-generic/mshyperv.h | 2 +
> 6 files changed, 177 insertions(+), 2 deletions(-)
> create mode 100644 arch/arm64/hyperv/hv_vtl.c
>
> diff --git a/arch/arm64/hyperv/Makefile b/arch/arm64/hyperv/Makefile
> index 87c31c001da9..9701a837a6e1 100644
> --- a/arch/arm64/hyperv/Makefile
> +++ b/arch/arm64/hyperv/Makefile
> @@ -1,2 +1,3 @@
> # SPDX-License-Identifier: GPL-2.0
> obj-y := hv_core.o mshyperv.o
> +obj-$(CONFIG_HYPERV_VTL_MODE) += hv_vtl.o
> diff --git a/arch/arm64/hyperv/hv_vtl.c b/arch/arm64/hyperv/hv_vtl.c
> new file mode 100644
> index 000000000000..59cbeb74e7b9
> --- /dev/null
> +++ b/arch/arm64/hyperv/hv_vtl.c
> @@ -0,0 +1,158 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2026, Microsoft, Inc.
> + *
> + * Authors:
> + * Roman Kisel <romank@linux.microsoft.com>
> + * Naman Jain <namjain@linux.microsoft.com>
> + */
> +
> +#include <asm/mshyperv.h>
> +#include <asm/neon.h>
> +#include <linux/export.h>
> +
> +void mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0)
> +{
> + struct user_fpsimd_state fpsimd_state;
> + u64 base_ptr = (u64)vtl0->x;
> +
> + /*
> + * Obtain the CPU FPSIMD registers for VTL context switch.
> + * This saves the current task's FP/NEON state and allows us to
> + * safely load VTL0's FP/NEON context for the hypercall.
> + */
> + kernel_neon_begin(&fpsimd_state);
> +
> + /*
> + * VTL switch for ARM64 platform - managing VTL0's CPU context.
> + * We explicitly use the stack to save the base pointer, and use x16
> + * as our working register for accessing the context structure.
> + *
> + * Register Handling:
> + * - X0-X17: Saved/restored (general-purpose, shared for VTL communication)
> + * - X18: NOT touched - hypervisor-managed per-VTL (platform register)
> + * - X19-X30: Saved/restored (part of VTL0's execution context)
> + * - Q0-Q31: Saved/restored (128-bit NEON/floating-point registers, shared)
> + * - SP: Not in structure, hypervisor-managed per-VTL
> + *
> + * X29 (FP) and X30 (LR) are in the structure and must be saved/restored
> + * as part of VTL0's complete execution state.
> + */
> + asm __volatile__ (
> + /* Save base pointer to stack explicitly, then load into x16 */
> + "str %0, [sp, #-16]!\n\t" /* Push base pointer onto stack */
> + "mov x16, %0\n\t" /* Load base pointer into x16 */
> + /* Volatile registers (Windows ARM64 ABI: x0-x17) */
> + "ldp x0, x1, [x16]\n\t"
> + "ldp x2, x3, [x16, #(2*8)]\n\t"
> + "ldp x4, x5, [x16, #(4*8)]\n\t"
> + "ldp x6, x7, [x16, #(6*8)]\n\t"
> + "ldp x8, x9, [x16, #(8*8)]\n\t"
> + "ldp x10, x11, [x16, #(10*8)]\n\t"
> + "ldp x12, x13, [x16, #(12*8)]\n\t"
> + "ldp x14, x15, [x16, #(14*8)]\n\t"
> + /* x16 will be loaded last, after saving base pointer */
> + "ldr x17, [x16, #(17*8)]\n\t"
> + /* x18 is hypervisor-managed per-VTL - DO NOT LOAD */
Wut? Does it mean the kernel is not free to use x18?
> + /* General-purpose registers: x19-x30 */
> + "ldp x19, x20, [x16, #(19*8)]\n\t"
> + "ldp x21, x22, [x16, #(21*8)]\n\t"
> + "ldp x23, x24, [x16, #(23*8)]\n\t"
> + "ldp x25, x26, [x16, #(25*8)]\n\t"
> + "ldp x27, x28, [x16, #(27*8)]\n\t"
> +
> + /* Frame pointer and link register */
> + "ldp x29, x30, [x16, #(29*8)]\n\t"
> +
> + /* Shared NEON/FP registers: Q0-Q31 (128-bit) */
> + "ldp q0, q1, [x16, #(32*8)]\n\t"
> + "ldp q2, q3, [x16, #(32*8 + 2*16)]\n\t"
> + "ldp q4, q5, [x16, #(32*8 + 4*16)]\n\t"
> + "ldp q6, q7, [x16, #(32*8 + 6*16)]\n\t"
> + "ldp q8, q9, [x16, #(32*8 + 8*16)]\n\t"
> + "ldp q10, q11, [x16, #(32*8 + 10*16)]\n\t"
> + "ldp q12, q13, [x16, #(32*8 + 12*16)]\n\t"
> + "ldp q14, q15, [x16, #(32*8 + 14*16)]\n\t"
> + "ldp q16, q17, [x16, #(32*8 + 16*16)]\n\t"
> + "ldp q18, q19, [x16, #(32*8 + 18*16)]\n\t"
> + "ldp q20, q21, [x16, #(32*8 + 20*16)]\n\t"
> + "ldp q22, q23, [x16, #(32*8 + 22*16)]\n\t"
> + "ldp q24, q25, [x16, #(32*8 + 24*16)]\n\t"
> + "ldp q26, q27, [x16, #(32*8 + 26*16)]\n\t"
> + "ldp q28, q29, [x16, #(32*8 + 28*16)]\n\t"
> + "ldp q30, q31, [x16, #(32*8 + 30*16)]\n\t"
> +
> + /* Now load x16 itself */
> + "ldr x16, [x16, #(16*8)]\n\t"
> +
> + /* Return to the lower VTL */
> + "hvc #3\n\t"
No. Absolutely not. If you need to do context switching, do it in the
hypervisor. Entirely in the hypervisor. You don't even handle SVE, let
alone SME. How is that going to work?
And please use the SMCCC. Only that. Which mandates that the HVC
immediate is 0, 0 or zero.
M.
--
Without deviation from the norm, progress is not possible.
* RE: [PATCH v2 02/15] Drivers: hv: Move hv_vp_assist_page to common files
From: Michael Kelley @ 2026-04-27 5:37 UTC
From: Naman Jain <namjain@linux.microsoft.com> Sent: Thursday, April 23, 2026 5:42 AM
>
> Move the logic to initialize and export hv_vp_assist_page from x86
> architecture code to Hyper-V common code to allow it to be used for
> upcoming arm64 support in MSHV_VTL driver.
> Note: This change also improves error handling - if VP assist page
> allocation fails, hyperv_init() now returns early instead of
> continuing with partial initialization.
>
> Signed-off-by: Roman Kisel <romank@linux.microsoft.com>
> Reviewed-by: Roman Kisel <vdso@mailbox.org>
> Signed-off-by: Naman Jain <namjain@linux.microsoft.com>
> ---
> arch/x86/hyperv/hv_init.c | 88 +-----------------------------
> arch/x86/include/asm/mshyperv.h | 14 -----
> drivers/hv/hv_common.c | 94 ++++++++++++++++++++++++++++++++-
> include/asm-generic/mshyperv.h | 16 ++++++
> include/hyperv/hvgdk_mini.h | 6 ++-
> 5 files changed, 115 insertions(+), 103 deletions(-)
>
> diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
> index 323adc93f2dc..75a98b5e451b 100644
> --- a/arch/x86/hyperv/hv_init.c
> +++ b/arch/x86/hyperv/hv_init.c
> @@ -81,9 +81,6 @@ union hv_ghcb * __percpu *hv_ghcb_pg;
> /* Storage to save the hypercall page temporarily for hibernation */
> static void *hv_hypercall_pg_saved;
>
> -struct hv_vp_assist_page **hv_vp_assist_page;
> -EXPORT_SYMBOL_GPL(hv_vp_assist_page);
> -
> static int hyperv_init_ghcb(void)
> {
> u64 ghcb_gpa;
> @@ -117,59 +114,12 @@ static int hyperv_init_ghcb(void)
>
> static int hv_cpu_init(unsigned int cpu)
> {
> - union hv_vp_assist_msr_contents msr = { 0 };
> - struct hv_vp_assist_page **hvp;
> int ret;
>
> ret = hv_common_cpu_init(cpu);
> if (ret)
> return ret;
>
> - if (!hv_vp_assist_page)
> - return 0;
> -
> - hvp = &hv_vp_assist_page[cpu];
> - if (hv_root_partition()) {
> - /*
> - * For root partition we get the hypervisor provided VP assist
> - * page, instead of allocating a new page.
> - */
> - rdmsrq(HV_X64_MSR_VP_ASSIST_PAGE, msr.as_uint64);
> - *hvp = memremap(msr.pfn << HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_SHIFT,
> - PAGE_SIZE, MEMREMAP_WB);
> - } else {
> - /*
> - * The VP assist page is an "overlay" page (see Hyper-V TLFS's
> - * Section 5.2.1 "GPA Overlay Pages"). Here it must be zeroed
> - * out to make sure we always write the EOI MSR in
> - * hv_apic_eoi_write() *after* the EOI optimization is disabled
> - * in hv_cpu_die(), otherwise a CPU may not be stopped in the
> - * case of CPU offlining and the VM will hang.
> - */
> - if (!*hvp) {
> - *hvp = __vmalloc(PAGE_SIZE, GFP_KERNEL | __GFP_ZERO);
> -
> - /*
> - * Hyper-V should never specify a VM that is a Confidential
> - * VM and also running in the root partition. Root partition
> - * is blocked to run in Confidential VM. So only decrypt assist
> - * page in non-root partition here.
> - */
> - if (*hvp && !ms_hyperv.paravisor_present && hv_isolation_type_snp()) {
> - WARN_ON_ONCE(set_memory_decrypted((unsigned long)(*hvp), 1));
> - memset(*hvp, 0, PAGE_SIZE);
> - }
> - }
> -
> - if (*hvp)
> - msr.pfn = vmalloc_to_pfn(*hvp);
> -
> - }
> - if (!WARN_ON(!(*hvp))) {
> - msr.enable = 1;
> - wrmsrq(HV_X64_MSR_VP_ASSIST_PAGE, msr.as_uint64);
> - }
> -
> /* Allow Hyper-V stimer vector to be injected from Hypervisor. */
> if (ms_hyperv.misc_features & HV_STIMER_DIRECT_MODE_AVAILABLE)
> apic_update_vector(cpu, HYPERV_STIMER0_VECTOR, true);
> @@ -286,23 +236,6 @@ static int hv_cpu_die(unsigned int cpu)
>
> hv_common_cpu_die(cpu);
>
> - if (hv_vp_assist_page && hv_vp_assist_page[cpu]) {
> - union hv_vp_assist_msr_contents msr = { 0 };
> - if (hv_root_partition()) {
> - /*
> - * For root partition the VP assist page is mapped to
> - * hypervisor provided page, and thus we unmap the
> - * page here and nullify it, so that in future we have
> - * correct page address mapped in hv_cpu_init.
> - */
> - memunmap(hv_vp_assist_page[cpu]);
> - hv_vp_assist_page[cpu] = NULL;
> - rdmsrq(HV_X64_MSR_VP_ASSIST_PAGE, msr.as_uint64);
> - msr.enable = 0;
> - }
> - wrmsrq(HV_X64_MSR_VP_ASSIST_PAGE, msr.as_uint64);
> - }
> -
> if (hv_reenlightenment_cb == NULL)
> return 0;
>
> @@ -460,21 +393,6 @@ void __init hyperv_init(void)
> if (hv_common_init())
> return;
>
> - /*
> - * The VP assist page is useless to a TDX guest: the only use we
> - * would have for it is lazy EOI, which can not be used with TDX.
> - */
> - if (hv_isolation_type_tdx())
> - hv_vp_assist_page = NULL;
> - else
> - hv_vp_assist_page = kzalloc_objs(*hv_vp_assist_page, nr_cpu_ids);
> - if (!hv_vp_assist_page) {
> - ms_hyperv.hints &= ~HV_X64_ENLIGHTENED_VMCS_RECOMMENDED;
> -
> - if (!hv_isolation_type_tdx())
> - goto common_free;
> - }
> -
> if (ms_hyperv.paravisor_present && hv_isolation_type_snp()) {
> /* Negotiate GHCB Version. */
> if (!hv_ghcb_negotiate_protocol())
> @@ -483,7 +401,7 @@ void __init hyperv_init(void)
>
> hv_ghcb_pg = alloc_percpu(union hv_ghcb *);
> if (!hv_ghcb_pg)
> - goto free_vp_assist_page;
> + goto free_ghcb_page;
Seems like this should be "goto common_free". The allocation of
hv_ghcb_pg has failed, so going to a label where hv_ghcb_pg is
freed seems redundant. It works since free_percpu() checks for
a NULL argument, but it's a bit unexpected since the common_free
label is already there.
> }
>
> cpuhp = cpuhp_setup_state(CPUHP_AP_HYPERV_ONLINE, "x86/hyperv_init:online",
> @@ -613,10 +531,6 @@ void __init hyperv_init(void)
> cpuhp_remove_state(CPUHP_AP_HYPERV_ONLINE);
> free_ghcb_page:
> free_percpu(hv_ghcb_pg);
> -free_vp_assist_page:
> - kfree(hv_vp_assist_page);
> - hv_vp_assist_page = NULL;
> -common_free:
> hv_common_free();
> }
>
> diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
> index f64393e853ee..95b452387969 100644
> --- a/arch/x86/include/asm/mshyperv.h
> +++ b/arch/x86/include/asm/mshyperv.h
> @@ -155,16 +155,6 @@ static inline u64 hv_do_fast_hypercall16(u16 code, u64 input1, u64 input2)
> return _hv_do_fast_hypercall16(control, input1, input2);
> }
>
> -extern struct hv_vp_assist_page **hv_vp_assist_page;
> -
> -static inline struct hv_vp_assist_page *hv_get_vp_assist_page(unsigned int cpu)
> -{
> - if (!hv_vp_assist_page)
> - return NULL;
> -
> - return hv_vp_assist_page[cpu];
> -}
> -
> void __init hyperv_init(void);
> void hyperv_setup_mmu_ops(void);
> void set_hv_tscchange_cb(void (*cb)(void));
> @@ -254,10 +244,6 @@ static inline void hyperv_setup_mmu_ops(void) {}
> static inline void set_hv_tscchange_cb(void (*cb)(void)) {}
> static inline void clear_hv_tscchange_cb(void) {}
> static inline void hyperv_stop_tsc_emulation(void) {};
> -static inline struct hv_vp_assist_page *hv_get_vp_assist_page(unsigned int cpu)
> -{
> - return NULL;
> -}
> static inline int hyperv_flush_guest_mapping(u64 as) { return -1; }
> static inline int hyperv_flush_guest_mapping_range(u64 as,
> hyperv_fill_flush_list_func fill_func, void *data)
> diff --git a/drivers/hv/hv_common.c b/drivers/hv/hv_common.c
> index 6b67ac616789..e8633bc51d56 100644
> --- a/drivers/hv/hv_common.c
> +++ b/drivers/hv/hv_common.c
> @@ -28,7 +28,11 @@
> #include <linux/slab.h>
> #include <linux/dma-map-ops.h>
> #include <linux/set_memory.h>
> +#include <linux/vmalloc.h>
> +#include <linux/io.h>
> +#include <linux/hyperv.h>
> #include <hyperv/hvhdk.h>
> +#include <hyperv/hvgdk.h>
> #include <asm/mshyperv.h>
>
> u64 hv_current_partition_id = HV_PARTITION_ID_SELF;
> @@ -78,6 +82,8 @@ static struct ctl_table_header *hv_ctl_table_hdr;
> u8 * __percpu *hv_synic_eventring_tail;
> EXPORT_SYMBOL_GPL(hv_synic_eventring_tail);
>
> +struct hv_vp_assist_page **hv_vp_assist_page;
> +EXPORT_SYMBOL_GPL(hv_vp_assist_page);
> /*
> * Hyper-V specific initialization and shutdown code that is
> * common across all architectures. Called from architecture
> @@ -92,6 +98,9 @@ void __init hv_common_free(void)
> if (ms_hyperv.misc_features & HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE)
> hv_kmsg_dump_unregister();
>
> + kfree(hv_vp_assist_page);
> + hv_vp_assist_page = NULL;
> +
> kfree(hv_vp_index);
> hv_vp_index = NULL;
>
> @@ -394,6 +403,23 @@ int __init hv_common_init(void)
> for (i = 0; i < nr_cpu_ids; i++)
> hv_vp_index[i] = VP_INVAL;
>
> + /*
> + * The VP assist page is useless to a TDX guest: the only use we
> + * would have for it is lazy EOI, which can not be used with TDX.
> + */
> + if (hv_isolation_type_tdx()) {
> + hv_vp_assist_page = NULL;
> +#ifdef CONFIG_X86_64
> + ms_hyperv.hints &= ~HV_X64_ENLIGHTENED_VMCS_RECOMMENDED;
> +#endif
I realize that this #ifdef went away for the reason I flagged in v1 of
this patch set, but it's back again for a different reason.
Let me suggest another approach. hv_common_init() is called from
both the x86/64 and arm64 hyperv_init() functions. Immediately after
the call to hv_common_init() in the x86/64 hyperv_init(), test
hv_vp_assist_page for NULL and clear
HV_X64_ENLIGHTENED_VMCS_RECOMMENDED if it is. No #ifdef is
needed, and x86/64 specific hackery stays under arch/x86 instead of
being in common code.
> + } else {
> + hv_vp_assist_page = kzalloc_objs(*hv_vp_assist_page, nr_cpu_ids);
> + if (!hv_vp_assist_page) {
> + hv_common_free();
> + return -ENOMEM;
> + }
> + }
> +
> return 0;
> }
>
> @@ -471,6 +497,8 @@ void __init ms_hyperv_late_init(void)
>
> int hv_common_cpu_init(unsigned int cpu)
> {
> + union hv_vp_assist_msr_contents msr = { 0 };
> + struct hv_vp_assist_page **hvp;
> void **inputarg, **outputarg;
> u8 **synic_eventring_tail;
> u64 msr_vp_index;
> @@ -539,7 +567,53 @@ int hv_common_cpu_init(unsigned int cpu)
> sizeof(u8), flags);
> /* No need to unwind any of the above on failure here */
> if (unlikely(!*synic_eventring_tail))
> - ret = -ENOMEM;
> + return -ENOMEM;
> + }
> +
> + if (!hv_vp_assist_page)
> + return ret;
> +
> + hvp = &hv_vp_assist_page[cpu];
> + if (hv_root_partition()) {
> + /*
> + * For root partition we get the hypervisor provided VP assist
> + * page, instead of allocating a new page.
> + */
> + msr.as_uint64 = hv_get_msr(HV_MSR_VP_ASSIST_PAGE);
> + *hvp = memremap(msr.pfn << HV_VP_ASSIST_PAGE_ADDRESS_SHIFT,
> + HV_HYP_PAGE_SIZE, MEMREMAP_WB);
> + } else {
> + /*
> + * The VP assist page is an "overlay" page (see Hyper-V TLFS's
> + * Section 5.2.1 "GPA Overlay Pages"). Here it must be zeroed
> + * out to make sure that on x86/x64, we always write the EOI MSR in
> + * hv_apic_eoi_write() *after* the EOI optimization is disabled
> + * in hv_cpu_die(), otherwise a CPU may not be stopped in the
> + * case of CPU offlining and the VM will hang.
> + */
> + if (!*hvp) {
> + *hvp = __vmalloc(HV_HYP_PAGE_SIZE, flags | __GFP_ZERO);
> +
> + /*
> + * Hyper-V should never specify a VM that is a Confidential
> + * VM and also running in the root partition. Root partition
> + * is blocked to run in Confidential VM. So only decrypt assist
> + * page in non-root partition here.
> + */
> + if (*hvp &&
> + !ms_hyperv.paravisor_present &&
> + hv_isolation_type_snp()) {
> + WARN_ON_ONCE(set_memory_decrypted((unsigned long)(*hvp), 1));
> + memset(*hvp, 0, HV_HYP_PAGE_SIZE);
> + }
> + }
> +
> + if (*hvp)
> + msr.pfn = page_to_hvpfn(vmalloc_to_page(*hvp));
Your Patch 0 changelog mentions adding a comment about vmalloc_to_pfn(), which
I didn't see anywhere. I'm not sure what that comment would say, so maybe it
became unnecessary.
> + }
> + if (!WARN_ON(!(*hvp))) {
> + msr.enable = 1;
> + hv_set_msr(HV_MSR_VP_ASSIST_PAGE, msr.as_uint64);
> }
>
> return ret;
> @@ -566,6 +640,24 @@ int hv_common_cpu_die(unsigned int cpu)
> *synic_eventring_tail = NULL;
> }
>
> + if (hv_vp_assist_page && hv_vp_assist_page[cpu]) {
> + union hv_vp_assist_msr_contents msr = { 0 };
> +
> + if (hv_root_partition()) {
> + /*
> + * For root partition the VP assist page is mapped to
> + * hypervisor provided page, and thus we unmap the
> + * page here and nullify it, so that in future we have
> + * correct page address mapped in hv_cpu_init.
> + */
> + memunmap(hv_vp_assist_page[cpu]);
> + hv_vp_assist_page[cpu] = NULL;
> + msr.as_uint64 = hv_get_msr(HV_MSR_VP_ASSIST_PAGE);
> + msr.enable = 0;
> + }
> + hv_set_msr(HV_MSR_VP_ASSIST_PAGE, msr.as_uint64);
> + }
> +
> return 0;
> }
>
> diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h
> index d37b68238c97..2810aa05dc73 100644
> --- a/include/asm-generic/mshyperv.h
> +++ b/include/asm-generic/mshyperv.h
> @@ -25,6 +25,7 @@
> #include <linux/nmi.h>
> #include <asm/ptrace.h>
> #include <hyperv/hvhdk.h>
> +#include <hyperv/hvgdk.h>
>
> #define VTPM_BASE_ADDRESS 0xfed40000
>
> @@ -299,6 +300,16 @@ do { \
> #define hv_status_debug(status, fmt, ...) \
> hv_status_printk(debug, status, fmt, ##__VA_ARGS__)
>
> +extern struct hv_vp_assist_page **hv_vp_assist_page;
> +
> +static inline struct hv_vp_assist_page *hv_get_vp_assist_page(unsigned int cpu)
> +{
> + if (!hv_vp_assist_page)
> + return NULL;
> +
> + return hv_vp_assist_page[cpu];
> +}
> +
> const char *hv_result_to_string(u64 hv_status);
> int hv_result_to_errno(u64 status);
> void hyperv_report_panic(struct pt_regs *regs, long err, bool in_die);
> @@ -327,6 +338,11 @@ static inline enum hv_isolation_type hv_get_isolation_type(void)
> {
> return HV_ISOLATION_TYPE_NONE;
> }
> +
> +static inline struct hv_vp_assist_page *hv_get_vp_assist_page(unsigned int cpu)
> +{
> + return NULL;
> +}
> #endif /* CONFIG_HYPERV */
>
> #if IS_ENABLED(CONFIG_MSHV_ROOT)
> diff --git a/include/hyperv/hvgdk_mini.h b/include/hyperv/hvgdk_mini.h
> index 056ef7b6b360..c72d04cd5ae4 100644
> --- a/include/hyperv/hvgdk_mini.h
> +++ b/include/hyperv/hvgdk_mini.h
> @@ -149,6 +149,7 @@ struct hv_u128 {
> #define HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_SHIFT 12
Can this X64 specific definition of the shift be eliminated entirely,
and a single common definition for x86/64 and arm64 be used?
As I understand it, the MSR layout is the same on both architectures.
The one gotcha is that kvm_hv_set_msr() would need to be updated.
HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_MASK defined below isn't
used anywhere, so it could go away too. (The KVM selftest usage has
its own definition.)
I realize these are changes to a source code file that is derived from
Windows, and I'm not sure of the guidelines for such changes. So maybe
these suggestions have to be ignored ....
> #define HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_MASK \
> (~((1ull << HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_SHIFT) - 1))
> +#define HV_MSR_VP_ASSIST_PAGE (HV_X64_MSR_VP_ASSIST_PAGE)
This is the correct file for this #define, but it should be placed down around
line 1148 or so with the other HV_MSR_* definitions in terms of HV_X64_MSR_*
>
> /* Hyper-V Enlightened VMCS version mask in nested features CPUID */
> #define HV_X64_ENLIGHTENED_VMCS_VERSION 0xff
> @@ -410,6 +411,7 @@ union hv_x64_msr_hypercall_contents {
> #if defined(CONFIG_ARM64)
> #define HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE BIT(8)
> #define HV_STIMER_DIRECT_MODE_AVAILABLE BIT(13)
> +#define HV_VP_ASSIST_PAGE_ADDRESS_SHIFT 12
> #endif /* CONFIG_ARM64 */
>
> #if defined(CONFIG_X86)
> @@ -1163,6 +1165,8 @@ enum hv_register_name {
> #define HV_MSR_STIMER0_CONFIG (HV_X64_MSR_STIMER0_CONFIG)
> #define HV_MSR_STIMER0_COUNT (HV_X64_MSR_STIMER0_COUNT)
>
> +#define HV_VP_ASSIST_PAGE_ADDRESS_SHIFT HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_SHIFT
> +
> #elif defined(CONFIG_ARM64) /* CONFIG_X86 */
>
> #define HV_MSR_CRASH_P0 (HV_REGISTER_GUEST_CRASH_P0)
> @@ -1185,7 +1189,7 @@ enum hv_register_name {
>
> #define HV_MSR_STIMER0_CONFIG (HV_REGISTER_STIMER0_CONFIG)
> #define HV_MSR_STIMER0_COUNT (HV_REGISTER_STIMER0_COUNT)
> -
> +#define HV_MSR_VP_ASSIST_PAGE (HV_REGISTER_VP_ASSIST_PAGE)
Nit: This definition is slightly mis-aligned. It has spaces where there
should be a tab to match the similar definitions above it.
> #endif /* CONFIG_ARM64 */
>
> union hv_explicit_suspend_register {
> --
> 2.43.0
>
* RE: [PATCH v2 03/15] Drivers: hv: Move vmbus_handler to common code
From: Michael Kelley @ 2026-04-27 5:38 UTC
From: Naman Jain <namjain@linux.microsoft.com> Sent: Thursday, April 23, 2026 5:42 AM
>
> Move the vmbus_handler global variable and hv_setup_vmbus_handler()/
> hv_remove_vmbus_handler() from arch/x86 to drivers/hv/hv_common.c.
>
> hv_setup_vmbus_handler() is called unconditionally in vmbus_bus_init()
> and works for both x86 (sysvec handler) and arm64 (vmbus_percpu_isr).
>
> This eliminates the need for separate percpu vmbus handler setup
> functions and __weak stubs, that are needed for adding ARM64 support
> in MSHV_VTL driver where we need to set a custom per-cpu vmbus handler.
>
> Signed-off-by: Naman Jain <namjain@linux.microsoft.com>
> ---
> arch/x86/kernel/cpu/mshyperv.c | 12 ------------
> drivers/hv/hv_common.c | 9 +++++++--
> drivers/hv/vmbus_drv.c | 17 +++++++++--------
> include/asm-generic/mshyperv.h | 1 +
> 4 files changed, 17 insertions(+), 22 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
> index 89a2eb8a0722..68706ff5880e 100644
> --- a/arch/x86/kernel/cpu/mshyperv.c
> +++ b/arch/x86/kernel/cpu/mshyperv.c
> @@ -145,7 +145,6 @@ void hv_set_msr(unsigned int reg, u64 value)
> EXPORT_SYMBOL_GPL(hv_set_msr);
>
> static void (*mshv_handler)(void);
> -static void (*vmbus_handler)(void);
> static void (*hv_stimer0_handler)(void);
> static void (*hv_kexec_handler)(void);
> static void (*hv_crash_handler)(struct pt_regs *regs);
> @@ -172,17 +171,6 @@ void hv_setup_mshv_handler(void (*handler)(void))
> mshv_handler = handler;
> }
>
> -void hv_setup_vmbus_handler(void (*handler)(void))
> -{
> - vmbus_handler = handler;
> -}
> -
> -void hv_remove_vmbus_handler(void)
> -{
> - /* We have no way to deallocate the interrupt gate */
> - vmbus_handler = NULL;
> -}
> -
> /*
> * Routines to do per-architecture handling of stimer0
> * interrupts when in Direct Mode
> diff --git a/drivers/hv/hv_common.c b/drivers/hv/hv_common.c
> index e8633bc51d56..eb7b0028b45d 100644
> --- a/drivers/hv/hv_common.c
> +++ b/drivers/hv/hv_common.c
> @@ -758,13 +758,18 @@ bool __weak hv_isolation_type_tdx(void)
> }
> EXPORT_SYMBOL_GPL(hv_isolation_type_tdx);
>
> -void __weak hv_setup_vmbus_handler(void (*handler)(void))
> +void (*vmbus_handler)(void);
> +EXPORT_SYMBOL_GPL(vmbus_handler);
> +
> +void hv_setup_vmbus_handler(void (*handler)(void))
> {
> + vmbus_handler = handler;
> }
> EXPORT_SYMBOL_GPL(hv_setup_vmbus_handler);
>
> -void __weak hv_remove_vmbus_handler(void)
> +void hv_remove_vmbus_handler(void)
> {
> + vmbus_handler = NULL;
> }
> EXPORT_SYMBOL_GPL(hv_remove_vmbus_handler);
I'd suggest moving hv_setup_vmbus_handler() and
hv_remove_vmbus_handler() above or below the group
of __weak stubs in this source code file. There's a comment
describing the purpose of these __weak functions, and
intermixing these two functions that are no longer __weak
produces something of a jumble.
>
> diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
> index bc4fc1951ae1..052ca8b11cee 100644
> --- a/drivers/hv/vmbus_drv.c
> +++ b/drivers/hv/vmbus_drv.c
> @@ -1415,7 +1415,8 @@ EXPORT_SYMBOL_FOR_MODULES(vmbus_isr, "mshv_vtl");
>
> static irqreturn_t vmbus_percpu_isr(int irq, void *dev_id)
> {
> - vmbus_isr();
> + if (vmbus_handler)
> + vmbus_handler();
Is it necessary to test vmbus_handler first? From what I can
see, it is always set before the per-cpu interrupt is set up.
> return IRQ_HANDLED;
> }
>
> @@ -1517,8 +1518,10 @@ static int vmbus_bus_init(void)
> vmbus_irq_initialized = true;
> }
>
> + hv_setup_vmbus_handler(vmbus_isr);
> +
> if (vmbus_irq == -1) {
> - hv_setup_vmbus_handler(vmbus_isr);
> + /* x86: sysvec handler uses vmbus_handler directly */
> } else {
> ret = request_percpu_irq(vmbus_irq, vmbus_percpu_isr,
> "Hyper-V VMbus", &vmbus_evt);
> @@ -1553,9 +1556,8 @@ static int vmbus_bus_init(void)
> return 0;
>
> err_connect:
> - if (vmbus_irq == -1)
> - hv_remove_vmbus_handler();
> - else
> + hv_remove_vmbus_handler();
> + if (vmbus_irq != -1)
> free_percpu_irq(vmbus_irq, &vmbus_evt);
These operations should be reordered so they are the inverse
of how they are setup. I.e., free_percpu_irq() first, then remove
the VMBus handler. That's just good standard practice unless
there's a specific reason to do the cleanup ordering differently. In
fact, hv_remove_vmbus_handler() needs to be moved down
to the err_setup label so it's done if request_percpu_irq()
fails.
> err_setup:
> if (IS_ENABLED(CONFIG_PREEMPT_RT) && vmbus_irq_initialized) {
> @@ -3026,9 +3028,8 @@ static void __exit vmbus_exit(void)
> vmbus_connection.conn_state = DISCONNECTED;
> hv_stimer_global_cleanup();
> vmbus_disconnect();
> - if (vmbus_irq == -1)
> - hv_remove_vmbus_handler();
> - else
> + hv_remove_vmbus_handler();
> + if (vmbus_irq != -1)
> free_percpu_irq(vmbus_irq, &vmbus_evt);
Ordering should be changed here as well so it is the inverse
of how things are set up.
> if (IS_ENABLED(CONFIG_PREEMPT_RT) && vmbus_irq_initialized) {
> smpboot_unregister_percpu_thread(&vmbus_irq_threads);
> diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h
> index 2810aa05dc73..db183c8cfb95 100644
> --- a/include/asm-generic/mshyperv.h
> +++ b/include/asm-generic/mshyperv.h
> @@ -179,6 +179,7 @@ static inline u64 hv_generate_guest_id(u64 kernel_version)
>
> int hv_get_hypervisor_version(union hv_hypervisor_version_info *info);
>
> +extern void (*vmbus_handler)(void);
> void hv_setup_vmbus_handler(void (*handler)(void));
> void hv_remove_vmbus_handler(void);
> void hv_setup_stimer0_handler(void (*handler)(void));
> --
> 2.43.0
>
^ permalink raw reply [flat|nested] 31+ messages in thread
* RE: [PATCH v2 07/15] arm64: hyperv: Add support for mshv_vtl_return_call
From: Michael Kelley @ 2026-04-27 5:38 UTC
From: Naman Jain <namjain@linux.microsoft.com> Sent: Thursday, April 23, 2026 5:42 AM
>
> Add the arm64 variant of mshv_vtl_return_call() to support the MSHV_VTL
> driver on arm64. This function enables the transition between Virtual
> Trust Levels (VTLs) in MSHV_VTL when the kernel acts as a paravisor.
>
> Signed-off-by: Roman Kisel <romank@linux.microsoft.com>
> Reviewed-by: Roman Kisel <vdso@mailbox.org>
> Signed-off-by: Naman Jain <namjain@linux.microsoft.com>
> ---
> arch/arm64/hyperv/Makefile | 1 +
> arch/arm64/hyperv/hv_vtl.c | 158 ++++++++++++++++++++++++++++++
> arch/arm64/include/asm/mshyperv.h | 13 +++
> arch/x86/include/asm/mshyperv.h | 2 -
> drivers/hv/mshv_vtl.h | 3 +
> include/asm-generic/mshyperv.h | 2 +
> 6 files changed, 177 insertions(+), 2 deletions(-)
> create mode 100644 arch/arm64/hyperv/hv_vtl.c
>
[snip]
> diff --git a/arch/arm64/include/asm/mshyperv.h b/arch/arm64/include/asm/mshyperv.h
> index 585b23a26f1b..9eb0e5999f29 100644
> --- a/arch/arm64/include/asm/mshyperv.h
> +++ b/arch/arm64/include/asm/mshyperv.h
> @@ -60,6 +60,18 @@ static inline u64 hv_get_non_nested_msr(unsigned int reg)
> ARM_SMCCC_SMC_64, \
> ARM_SMCCC_OWNER_VENDOR_HYP, \
> HV_SMCCC_FUNC_NUMBER)
> +
> +struct mshv_vtl_cpu_context {
> +/*
> + * x18 is managed by the hypervisor. It won't be reloaded from this array.
> + * It is included here for convenience in array indexing.
> + * 'rsvd' field serves as alignment padding so q[] starts at offset 32*8=256.
> + */
> + __u64 x[31];
> + __u64 rsvd;
> + __uint128_t q[32];
> +};
> +
> #ifdef CONFIG_HYPERV_VTL_MODE
> /*
> * Get/Set the register. If the function returns `1`, that must be done via
> @@ -69,6 +81,7 @@ static inline int hv_vtl_get_set_reg(struct hv_register_assoc *regs, bool set, bool shared)
> {
> return 1;
> }
> +
This appears to be a spurious blank line being added since there
are no other changes in the vicinity.
> #endif
>
> #include <asm-generic/mshyperv.h>
> diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
> index 08278547b84c..b4d80c9a673a 100644
> --- a/arch/x86/include/asm/mshyperv.h
> +++ b/arch/x86/include/asm/mshyperv.h
> @@ -286,7 +286,6 @@ struct mshv_vtl_cpu_context {
> #ifdef CONFIG_HYPERV_VTL_MODE
> void __init hv_vtl_init_platform(void);
> int __init hv_vtl_early_init(void);
> -void mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0);
> void mshv_vtl_return_call_init(u64 vtl_return_offset);
> void mshv_vtl_return_hypercall(void);
> void __mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0);
> @@ -294,7 +293,6 @@ int hv_vtl_get_set_reg(struct hv_register_assoc *regs, bool set, bool shared);
> #else
> static inline void __init hv_vtl_init_platform(void) {}
> static inline int __init hv_vtl_early_init(void) { return 0; }
> -static inline void mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0) {}
> static inline void mshv_vtl_return_call_init(u64 vtl_return_offset) {}
> static inline void mshv_vtl_return_hypercall(void) {}
> static inline void __mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0) {}
> diff --git a/drivers/hv/mshv_vtl.h b/drivers/hv/mshv_vtl.h
> index a6eea52f7aa2..103f07371f3f 100644
> --- a/drivers/hv/mshv_vtl.h
> +++ b/drivers/hv/mshv_vtl.h
> @@ -22,4 +22,7 @@ struct mshv_vtl_run {
> char vtl_ret_actions[MSHV_MAX_RUN_MSG_SIZE];
> };
>
> +static_assert(sizeof(struct mshv_vtl_cpu_context) <= 1024,
> +	      "struct mshv_vtl_cpu_context exceeds reserved space in struct mshv_vtl_run");
> +
> #endif /* _MSHV_VTL_H */
> diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h
> index db183c8cfb95..8cdf2a9fbdfb 100644
> --- a/include/asm-generic/mshyperv.h
> +++ b/include/asm-generic/mshyperv.h
> @@ -396,8 +396,10 @@ static inline int hv_deposit_memory(u64 partition_id, u64 status)
>
> #if IS_ENABLED(CONFIG_HYPERV_VTL_MODE)
> u8 __init get_vtl(void);
> +void mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0);
> #else
> static inline u8 get_vtl(void) { return 0; }
> +static inline void mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0) {}
Is this stub needed? Maybe I missed something, but it looks to me like none
of the code that calls this gets built unless CONFIG_HYPERV_VTL_MODE is set.
See further comments about stubs in Patch 8 of this series.
> #endif
>
> #endif
> --
> 2.43.0
>
* RE: [PATCH v2 08/15] Drivers: hv: Move hv_call_(get|set)_vp_registers() declarations
From: Michael Kelley @ 2026-04-27 5:39 UTC
From: Naman Jain <namjain@linux.microsoft.com> Sent: Thursday, April 23, 2026 5:42 AM
>
> Move hv_call_get_vp_registers() and hv_call_set_vp_registers()
> declarations from drivers/hv/mshv.h to include/asm-generic/mshyperv.h.
>
> These functions are defined in mshv_common.c and are going to be called
> from both drivers/hv/ and arch/x86/hyperv/hv_vtl.c. The latter never
> included mshv.h, relying on implicit declaration visibility. Moving the
> declarations to the arch-generic Hyper-V header makes them properly
> visible to all architecture-specific callers.
>
> Provide static inline stubs returning -EOPNOTSUPP when neither
> CONFIG_MSHV_ROOT nor CONFIG_MSHV_VTL is enabled.
Looking at the drivers/hv/Kconfig, it's possible to build with
CONFIG_HYPERV_VTL_MODE=y, but not CONFIG_MSHV_VTL. In such a
case, mshv_common.o doesn't get built, which is why the stubs are
needed. Is such a configuration desirable for some scenarios?
I wonder if having CONFIG_HYPERV_VTL_MODE force the building of
mshv_common.o would be a better approach. Then the stubs wouldn't
be needed. The "ifneq" statement in drivers/hv/Makefile could use
CONFIG_HYPERV_VTL_MODE instead of CONFIG_MSHV_VTL, and
everything would be good since CONFIG_MSHV_VTL depends on
CONFIG_HYPERV_VTL_MODE.
>
> Signed-off-by: Naman Jain <namjain@linux.microsoft.com>
> ---
> drivers/hv/mshv.h | 8 --------
> include/asm-generic/mshyperv.h | 26 ++++++++++++++++++++++++++
> 2 files changed, 26 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/hv/mshv.h b/drivers/hv/mshv.h
> index d4813df92b9c..0fcb7f9ba6a9 100644
> --- a/drivers/hv/mshv.h
> +++ b/drivers/hv/mshv.h
> @@ -14,14 +14,6 @@
> memchr_inv(&((STRUCT).MEMBER), \
> 0, sizeof_field(typeof(STRUCT), MEMBER))
>
> -int hv_call_get_vp_registers(u32 vp_index, u64 partition_id, u16 count,
> - union hv_input_vtl input_vtl,
> - struct hv_register_assoc *registers);
> -
> -int hv_call_set_vp_registers(u32 vp_index, u64 partition_id, u16 count,
> - union hv_input_vtl input_vtl,
> - struct hv_register_assoc *registers);
> -
> int hv_call_get_partition_property(u64 partition_id, u64 property_code,
> u64 *property_value);
>
> diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h
> index 8cdf2a9fbdfb..ef0b9466808c 100644
> --- a/include/asm-generic/mshyperv.h
> +++ b/include/asm-generic/mshyperv.h
> @@ -394,6 +394,32 @@ static inline int hv_deposit_memory(u64 partition_id, u64 status)
> return hv_deposit_memory_node(NUMA_NO_NODE, partition_id, status);
> }
>
> +#if IS_ENABLED(CONFIG_MSHV_ROOT) || IS_ENABLED(CONFIG_MSHV_VTL)
> +int hv_call_get_vp_registers(u32 vp_index, u64 partition_id, u16 count,
> + union hv_input_vtl input_vtl,
> + struct hv_register_assoc *registers);
> +
> +int hv_call_set_vp_registers(u32 vp_index, u64 partition_id, u16 count,
> + union hv_input_vtl input_vtl,
> + struct hv_register_assoc *registers);
> +#else
> +static inline int hv_call_get_vp_registers(u32 vp_index, u64 partition_id,
> + u16 count,
> + union hv_input_vtl input_vtl,
> + struct hv_register_assoc *registers)
> +{
> + return -EOPNOTSUPP;
> +}
> +
> +static inline int hv_call_set_vp_registers(u32 vp_index, u64 partition_id,
> + u16 count,
> + union hv_input_vtl input_vtl,
> + struct hv_register_assoc *registers)
> +{
> + return -EOPNOTSUPP;
> +}
> +#endif /* CONFIG_MSHV_ROOT || CONFIG_MSHV_VTL */
> +
> #if IS_ENABLED(CONFIG_HYPERV_VTL_MODE)
> u8 __init get_vtl(void);
> void mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0);
> --
> 2.43.0
>
^ permalink raw reply [flat|nested] 31+ messages in thread
* RE: [PATCH v2 09/15] Drivers: hv: mshv_vtl: Move hv_vtl_configure_reg_page() to x86
2026-04-23 12:41 ` [PATCH v2 09/15] Drivers: hv: mshv_vtl: Move hv_vtl_configure_reg_page() to x86 Naman Jain
@ 2026-04-27 5:40 ` Michael Kelley
2026-04-29 9:57 ` Naman Jain
0 siblings, 1 reply; 31+ messages in thread
From: Michael Kelley @ 2026-04-27 5:40 UTC (permalink / raw)
To: Naman Jain, K . Y . Srinivasan, Haiyang Zhang, Wei Liu,
Dexuan Cui, Long Li, Catalin Marinas, Will Deacon,
Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
x86@kernel.org, H . Peter Anvin, Arnd Bergmann, Paul Walmsley,
Palmer Dabbelt, Albert Ou, Alexandre Ghiti, Michael Kelley
Cc: Marc Zyngier, Timothy Hayes, Lorenzo Pieralisi, Sascha Bischoff,
mrigendrachaubey, linux-hyperv@vger.kernel.org,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
linux-riscv@lists.infradead.org, vdso@mailbox.org,
ssengar@linux.microsoft.com
From: Naman Jain <namjain@linux.microsoft.com> Sent: Thursday, April 23, 2026 5:42 AM
>
> Move hv_vtl_configure_reg_page() from drivers/hv/mshv_vtl_main.c to
> arch/x86/hyperv/hv_vtl.c. The register page overlay is an x86-specific
> feature that uses HV_X64_REGISTER_REG_PAGE, so its configuration belongs
> in architecture-specific code.
>
> Move struct mshv_vtl_per_cpu and union hv_synic_overlay_page_msr to
> include/asm-generic/mshyperv.h so they are visible to both arch and
> driver code.
>
> Change the return type from void to bool so the caller can determine
> whether the register page was successfully configured and set
> mshv_has_reg_page accordingly.
>
> Signed-off-by: Naman Jain <namjain@linux.microsoft.com>
> ---
> arch/x86/hyperv/hv_vtl.c | 32 ++++++++++++++++++++++
> drivers/hv/mshv_vtl_main.c | 49 +++-------------------------------
> include/asm-generic/mshyperv.h | 17 ++++++++++++
> 3 files changed, 53 insertions(+), 45 deletions(-)
>
> diff --git a/arch/x86/hyperv/hv_vtl.c b/arch/x86/hyperv/hv_vtl.c
> index 09d81f9b853c..f3ffb6a7cb2d 100644
> --- a/arch/x86/hyperv/hv_vtl.c
> +++ b/arch/x86/hyperv/hv_vtl.c
> @@ -20,6 +20,7 @@
> #include <uapi/asm/mtrr.h>
> #include <asm/debugreg.h>
> #include <linux/export.h>
> +#include <linux/hyperv.h>
> #include <../kernel/smpboot.h>
> #include "../../kernel/fpu/legacy.h"
>
> @@ -259,6 +260,37 @@ int __init hv_vtl_early_init(void)
> return 0;
> }
>
> +static const union hv_input_vtl input_vtl_zero;
> +
> +bool hv_vtl_configure_reg_page(struct mshv_vtl_per_cpu *per_cpu)
> +{
> + struct hv_register_assoc reg_assoc = {};
> + union hv_synic_overlay_page_msr overlay = {};
> + struct page *reg_page;
> +
> + reg_page = alloc_page(GFP_KERNEL | __GFP_ZERO | __GFP_RETRY_MAYFAIL);
> + if (!reg_page) {
> + WARN(1, "failed to allocate register page\n");
> + return false;
> + }
> +
> + overlay.enabled = 1;
> + overlay.pfn = page_to_hvpfn(reg_page);
> + reg_assoc.name = HV_X64_REGISTER_REG_PAGE;
> + reg_assoc.value.reg64 = overlay.as_uint64;
> +
> + if (hv_call_set_vp_registers(HV_VP_INDEX_SELF, HV_PARTITION_ID_SELF,
> + 1, input_vtl_zero, &reg_assoc)) {
> + WARN(1, "failed to setup register page\n");
> + __free_page(reg_page);
> + return false;
> + }
> +
> + per_cpu->reg_page = reg_page;
> + return true;
> +}
> +EXPORT_SYMBOL_GPL(hv_vtl_configure_reg_page);
> +
> DEFINE_STATIC_CALL_NULL(__mshv_vtl_return_hypercall, void (*)(void));
>
> void mshv_vtl_return_call_init(u64 vtl_return_offset)
> diff --git a/drivers/hv/mshv_vtl_main.c b/drivers/hv/mshv_vtl_main.c
> index 91517b45d526..c79d24317b8e 100644
> --- a/drivers/hv/mshv_vtl_main.c
> +++ b/drivers/hv/mshv_vtl_main.c
> @@ -78,21 +78,6 @@ struct mshv_vtl {
> u64 id;
> };
>
> -struct mshv_vtl_per_cpu {
> - struct mshv_vtl_run *run;
> - struct page *reg_page;
> -};
> -
> -/* SYNIC_OVERLAY_PAGE_MSR - internal, identical to hv_synic_simp */
> -union hv_synic_overlay_page_msr {
> - u64 as_uint64;
> - struct {
> - u64 enabled: 1;
> - u64 reserved: 11;
> - u64 pfn: 52;
> - } __packed;
> -};
> -
> static struct mutex mshv_vtl_poll_file_lock;
> static union hv_register_vsm_page_offsets mshv_vsm_page_offsets;
> static union hv_register_vsm_capabilities mshv_vsm_capabilities;
> @@ -201,34 +186,6 @@ static struct page *mshv_vtl_cpu_reg_page(int cpu)
> return *per_cpu_ptr(&mshv_vtl_per_cpu.reg_page, cpu);
> }
>
> -static void mshv_vtl_configure_reg_page(struct mshv_vtl_per_cpu *per_cpu)
> -{
> - struct hv_register_assoc reg_assoc = {};
> - union hv_synic_overlay_page_msr overlay = {};
> - struct page *reg_page;
> -
> - reg_page = alloc_page(GFP_KERNEL | __GFP_ZERO | __GFP_RETRY_MAYFAIL);
> - if (!reg_page) {
> - WARN(1, "failed to allocate register page\n");
> - return;
> - }
> -
> - overlay.enabled = 1;
> - overlay.pfn = page_to_hvpfn(reg_page);
> - reg_assoc.name = HV_X64_REGISTER_REG_PAGE;
> - reg_assoc.value.reg64 = overlay.as_uint64;
> -
> - if (hv_call_set_vp_registers(HV_VP_INDEX_SELF, HV_PARTITION_ID_SELF,
> - 1, input_vtl_zero, &reg_assoc)) {
> - WARN(1, "failed to setup register page\n");
> - __free_page(reg_page);
> - return;
> - }
> -
> - per_cpu->reg_page = reg_page;
> - mshv_has_reg_page = true;
> -}
> -
> static void mshv_vtl_synic_enable_regs(unsigned int cpu)
> {
> union hv_synic_sint sint;
> @@ -329,8 +286,10 @@ static int mshv_vtl_alloc_context(unsigned int cpu)
> if (!per_cpu->run)
> return -ENOMEM;
>
> - if (mshv_vsm_capabilities.intercept_page_available)
> - mshv_vtl_configure_reg_page(per_cpu);
> + if (mshv_vsm_capabilities.intercept_page_available) {
> + if (hv_vtl_configure_reg_page(per_cpu))
> + mshv_has_reg_page = true;
> + }
>
> mshv_vtl_synic_enable_regs(cpu);
>
> diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h
> index ef0b9466808c..9e86178c182e 100644
> --- a/include/asm-generic/mshyperv.h
> +++ b/include/asm-generic/mshyperv.h
> @@ -420,12 +420,29 @@ static inline int hv_call_set_vp_registers(u32 vp_index, u64 partition_id,
> }
> #endif /* CONFIG_MSHV_ROOT || CONFIG_MSHV_VTL */
>
> +struct mshv_vtl_per_cpu {
> + struct mshv_vtl_run *run;
> + struct page *reg_page;
> +};
> +
> #if IS_ENABLED(CONFIG_HYPERV_VTL_MODE)
> +/* SYNIC_OVERLAY_PAGE_MSR - internal, identical to hv_synic_simp */
This comment pre-dates your patch, but I don't understand the point
it is trying to make. The comment is factually true, but I don't know
why calling that out is relevant. The REG_PAGE MSR seems to be
conceptually separate and distinct from the SIMP MSR, so the fact
that the layouts are the same is just a coincidence. Or is there some
relationship between the two MSRs that I'm not aware of, which the
comment is trying (and failing?) to point out?
> +union hv_synic_overlay_page_msr {
> + u64 as_uint64;
> + struct {
> + u64 enabled: 1;
> + u64 reserved: 11;
> + u64 pfn: 52;
> + } __packed;
> +};
> +
> u8 __init get_vtl(void);
> void mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0);
> +bool hv_vtl_configure_reg_page(struct mshv_vtl_per_cpu *per_cpu);
> #else
> static inline u8 get_vtl(void) { return 0; }
> static inline void mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0) {}
> +static inline bool hv_vtl_configure_reg_page(struct mshv_vtl_per_cpu *per_cpu) { return false; }
As with Patch 8, if CONFIG_HYPERV_VTL_MODE caused mshv_common.o
to be built, this stub wouldn't be needed.
> #endif
>
> #endif
> --
> 2.43.0
>
* RE: [PATCH v2 12/15] mshv_vtl: Move VSM code page offset logic to x86 files
2026-04-23 12:42 ` [PATCH v2 12/15] mshv_vtl: Move VSM code page offset logic to x86 files Naman Jain
@ 2026-04-27 5:40 ` Michael Kelley
2026-04-29 10:00 ` Naman Jain
0 siblings, 1 reply; 31+ messages in thread
From: Michael Kelley @ 2026-04-27 5:40 UTC (permalink / raw)
To: Naman Jain, K . Y . Srinivasan, Haiyang Zhang, Wei Liu,
Dexuan Cui, Long Li, Catalin Marinas, Will Deacon,
Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
x86@kernel.org, H . Peter Anvin, Arnd Bergmann, Paul Walmsley,
Palmer Dabbelt, Albert Ou, Alexandre Ghiti, Michael Kelley
Cc: Marc Zyngier, Timothy Hayes, Lorenzo Pieralisi, Sascha Bischoff,
mrigendrachaubey, linux-hyperv@vger.kernel.org,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
linux-riscv@lists.infradead.org, vdso@mailbox.org,
ssengar@linux.microsoft.com
From: Naman Jain <namjain@linux.microsoft.com> Sent: Thursday, April 23, 2026 5:42 AM
>
> The VSM code page offset register (HV_REGISTER_VSM_CODE_PAGE_OFFSETS)
> is x86-specific; its value configures the static call used to return
> to VTL0 via the hypercall page. Move the register read from the common
> mshv_vtl_get_vsm_regs() into the x86 mshv_vtl_return_call_init(),
> which is the sole consumer of the offset.
>
> Change mshv_vtl_return_call_init() from taking a u64 parameter
> to taking no arguments, and rename mshv_vtl_get_vsm_regs() to
> mshv_vtl_get_vsm_cap_reg() since it now only fetches
> HV_REGISTER_VSM_CAPABILITIES.
>
> No functional change on x86. This prepares the common driver code for
> ARM64 where VSM code page offsets do not apply.
>
> Signed-off-by: Naman Jain <namjain@linux.microsoft.com>
> ---
> arch/x86/hyperv/hv_vtl.c | 19 +++++++++++++++++--
> arch/x86/include/asm/mshyperv.h | 4 ++--
> drivers/hv/mshv_vtl_main.c | 24 +++++++++++++-----------
> 3 files changed, 32 insertions(+), 15 deletions(-)
>
> diff --git a/arch/x86/hyperv/hv_vtl.c b/arch/x86/hyperv/hv_vtl.c
> index f3ffb6a7cb2d..7c10b34cf8a4 100644
> --- a/arch/x86/hyperv/hv_vtl.c
> +++ b/arch/x86/hyperv/hv_vtl.c
> @@ -293,10 +293,25 @@ EXPORT_SYMBOL_GPL(hv_vtl_configure_reg_page);
>
> DEFINE_STATIC_CALL_NULL(__mshv_vtl_return_hypercall, void (*)(void));
>
> -void mshv_vtl_return_call_init(u64 vtl_return_offset)
> +int mshv_vtl_return_call_init(void)
> {
> + struct hv_register_assoc vsm_pg_offset_reg;
> + union hv_register_vsm_page_offsets offsets;
> + int ret;
> +
> + vsm_pg_offset_reg.name = HV_REGISTER_VSM_CODE_PAGE_OFFSETS;
> +
> + ret = hv_call_get_vp_registers(HV_VP_INDEX_SELF, HV_PARTITION_ID_SELF,
> + 1, input_vtl_zero, &vsm_pg_offset_reg);
> + if (ret)
> + return ret;
> +
> + offsets.as_uint64 = vsm_pg_offset_reg.value.reg64;
> +
> static_call_update(__mshv_vtl_return_hypercall,
> - (void *)((u8 *)hv_hypercall_pg + vtl_return_offset));
> + (void *)((u8 *)hv_hypercall_pg + offsets.vtl_return_offset));
> +
> + return 0;
> }
> EXPORT_SYMBOL(mshv_vtl_return_call_init);
>
> diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
> index b4d80c9a673a..b48f115c1292 100644
> --- a/arch/x86/include/asm/mshyperv.h
> +++ b/arch/x86/include/asm/mshyperv.h
> @@ -286,14 +286,14 @@ struct mshv_vtl_cpu_context {
> #ifdef CONFIG_HYPERV_VTL_MODE
> void __init hv_vtl_init_platform(void);
> int __init hv_vtl_early_init(void);
> -void mshv_vtl_return_call_init(u64 vtl_return_offset);
> +int mshv_vtl_return_call_init(void);
> void mshv_vtl_return_hypercall(void);
> void __mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0);
> int hv_vtl_get_set_reg(struct hv_register_assoc *regs, bool set, bool shared);
> #else
> static inline void __init hv_vtl_init_platform(void) {}
> static inline int __init hv_vtl_early_init(void) { return 0; }
> -static inline void mshv_vtl_return_call_init(u64 vtl_return_offset) {}
> +static inline int mshv_vtl_return_call_init(void) { return 0; }
> static inline void mshv_vtl_return_hypercall(void) {}
> static inline void __mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0) {}
> #endif
> diff --git a/drivers/hv/mshv_vtl_main.c b/drivers/hv/mshv_vtl_main.c
> index 4c9ae65ad3e8..be498c9234fd 100644
> --- a/drivers/hv/mshv_vtl_main.c
> +++ b/drivers/hv/mshv_vtl_main.c
> @@ -79,7 +79,6 @@ struct mshv_vtl {
> };
>
> static struct mutex mshv_vtl_poll_file_lock;
> -static union hv_register_vsm_page_offsets mshv_vsm_page_offsets;
> static union hv_register_vsm_capabilities mshv_vsm_capabilities;
>
> static DEFINE_PER_CPU(struct mshv_vtl_poll_file, mshv_vtl_poll_file);
> @@ -203,21 +202,19 @@ static void mshv_vtl_synic_enable_regs(unsigned int cpu)
> /* VTL2 Host VSP SINT is (un)masked when the user mode requests that */
> }
>
> -static int mshv_vtl_get_vsm_regs(void)
> +static int mshv_vtl_get_vsm_cap_reg(void)
> {
> - struct hv_register_assoc registers[2];
> - int ret, count = 2;
> + struct hv_register_assoc vsm_capability_reg;
> + int ret;
>
> - registers[0].name = HV_REGISTER_VSM_CODE_PAGE_OFFSETS;
> - registers[1].name = HV_REGISTER_VSM_CAPABILITIES;
> + vsm_capability_reg.name = HV_REGISTER_VSM_CAPABILITIES;
>
> ret = hv_call_get_vp_registers(HV_VP_INDEX_SELF, HV_PARTITION_ID_SELF,
> - count, input_vtl_zero, registers);
> + 1, input_vtl_zero, &vsm_capability_reg);
> if (ret)
> return ret;
>
> - mshv_vsm_page_offsets.as_uint64 = registers[0].value.reg64;
> - mshv_vsm_capabilities.as_uint64 = registers[1].value.reg64;
> + mshv_vsm_capabilities.as_uint64 = vsm_capability_reg.value.reg64;
>
> return ret;
Nit: This could be just "return 0".
> }
> @@ -1139,13 +1136,18 @@ static int __init mshv_vtl_init(void)
> tasklet_init(&msg_dpc, mshv_vtl_sint_on_msg_dpc, 0);
> init_waitqueue_head(&fd_wait_queue);
>
> - if (mshv_vtl_get_vsm_regs()) {
> + if (mshv_vtl_get_vsm_cap_reg()) {
> dev_emerg(dev, "Unable to get VSM capabilities !!\n");
Why is this failure an emergency message, while the other failures
here in mshv_vtl_init() are just error messages? When there's lack
of consistency, I always wonder if there is a reason ..... :-)
> ret = -ENODEV;
> goto free_dev;
> }
>
> - mshv_vtl_return_call_init(mshv_vsm_page_offsets.vtl_return_offset);
> + ret = mshv_vtl_return_call_init();
> + if (ret) {
> + dev_err(dev, "mshv_vtl_return_call_init failed: %d\n", ret);
> + goto free_dev;
> + }
> +
> ret = hv_vtl_setup_synic();
> if (ret)
> goto free_dev;
> --
> 2.43.0
>
* Re: [PATCH v2 02/15] Drivers: hv: Move hv_vp_assist_page to common files
2026-04-27 5:37 ` Michael Kelley
@ 2026-04-29 9:55 ` Naman Jain
0 siblings, 0 replies; 31+ messages in thread
From: Naman Jain @ 2026-04-29 9:55 UTC (permalink / raw)
To: Michael Kelley, K . Y . Srinivasan, Haiyang Zhang, Wei Liu,
Dexuan Cui, Long Li, Catalin Marinas, Will Deacon,
Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
x86@kernel.org, H . Peter Anvin, Arnd Bergmann, Paul Walmsley,
Palmer Dabbelt, Albert Ou, Alexandre Ghiti
Cc: Marc Zyngier, Timothy Hayes, Lorenzo Pieralisi, Sascha Bischoff,
mrigendrachaubey, linux-hyperv@vger.kernel.org,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
linux-riscv@lists.infradead.org, vdso@mailbox.org,
ssengar@linux.microsoft.com
On 4/27/2026 11:07 AM, Michael Kelley wrote:
> From: Naman Jain <namjain@linux.microsoft.com> Sent: Thursday, April 23, 2026 5:42 AM
>>
>> Move the logic to initialize and export hv_vp_assist_page from x86
>> architecture code to Hyper-V common code to allow it to be used for
>> upcoming arm64 support in MSHV_VTL driver.
>> Note: This change also improves error handling - if VP assist page
>> allocation fails, hyperv_init() now returns early instead of
>> continuing with partial initialization.
>>
>> Signed-off-by: Roman Kisel <romank@linux.microsoft.com>
>> Reviewed-by: Roman Kisel <vdso@mailbox.org>
>> Signed-off-by: Naman Jain <namjain@linux.microsoft.com>
>> ---
>> arch/x86/hyperv/hv_init.c | 88 +-----------------------------
>> arch/x86/include/asm/mshyperv.h | 14 -----
>> drivers/hv/hv_common.c | 94 ++++++++++++++++++++++++++++++++-
>> include/asm-generic/mshyperv.h | 16 ++++++
>> include/hyperv/hvgdk_mini.h | 6 ++-
>> 5 files changed, 115 insertions(+), 103 deletions(-)
>>
>> diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
>> index 323adc93f2dc..75a98b5e451b 100644
>> --- a/arch/x86/hyperv/hv_init.c
>> +++ b/arch/x86/hyperv/hv_init.c
>> @@ -81,9 +81,6 @@ union hv_ghcb * __percpu *hv_ghcb_pg;
>> /* Storage to save the hypercall page temporarily for hibernation */
>> static void *hv_hypercall_pg_saved;
>>
>> -struct hv_vp_assist_page **hv_vp_assist_page;
>> -EXPORT_SYMBOL_GPL(hv_vp_assist_page);
>> -
>> static int hyperv_init_ghcb(void)
>> {
>> u64 ghcb_gpa;
>> @@ -117,59 +114,12 @@ static int hyperv_init_ghcb(void)
>>
>> static int hv_cpu_init(unsigned int cpu)
>> {
>> - union hv_vp_assist_msr_contents msr = { 0 };
>> - struct hv_vp_assist_page **hvp;
>> int ret;
>>
>> ret = hv_common_cpu_init(cpu);
>> if (ret)
>> return ret;
>>
>> - if (!hv_vp_assist_page)
>> - return 0;
>> -
>> - hvp = &hv_vp_assist_page[cpu];
>> - if (hv_root_partition()) {
>> - /*
>> - * For root partition we get the hypervisor provided VP assist
>> - * page, instead of allocating a new page.
>> - */
>> - rdmsrq(HV_X64_MSR_VP_ASSIST_PAGE, msr.as_uint64);
>> - *hvp = memremap(msr.pfn << HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_SHIFT,
>> - PAGE_SIZE, MEMREMAP_WB);
>> - } else {
>> - /*
>> - * The VP assist page is an "overlay" page (see Hyper-V TLFS's
>> - * Section 5.2.1 "GPA Overlay Pages"). Here it must be zeroed
>> - * out to make sure we always write the EOI MSR in
>> - * hv_apic_eoi_write() *after* the EOI optimization is disabled
>> - * in hv_cpu_die(), otherwise a CPU may not be stopped in the
>> - * case of CPU offlining and the VM will hang.
>> - */
>> - if (!*hvp) {
>> - *hvp = __vmalloc(PAGE_SIZE, GFP_KERNEL | __GFP_ZERO);
>> -
>> - /*
>> - * Hyper-V should never specify a VM that is a Confidential
>> - * VM and also running in the root partition. Root partition
>> - * is blocked to run in Confidential VM. So only decrypt assist
>> - * page in non-root partition here.
>> - */
>> - if (*hvp && !ms_hyperv.paravisor_present && hv_isolation_type_snp()) {
>> - WARN_ON_ONCE(set_memory_decrypted((unsigned long)(*hvp), 1));
>> - memset(*hvp, 0, PAGE_SIZE);
>> - }
>> - }
>> -
>> - if (*hvp)
>> - msr.pfn = vmalloc_to_pfn(*hvp);
>> -
>> - }
>> - if (!WARN_ON(!(*hvp))) {
>> - msr.enable = 1;
>> - wrmsrq(HV_X64_MSR_VP_ASSIST_PAGE, msr.as_uint64);
>> - }
>> -
>> /* Allow Hyper-V stimer vector to be injected from Hypervisor. */
>> if (ms_hyperv.misc_features & HV_STIMER_DIRECT_MODE_AVAILABLE)
>> apic_update_vector(cpu, HYPERV_STIMER0_VECTOR, true);
>> @@ -286,23 +236,6 @@ static int hv_cpu_die(unsigned int cpu)
>>
>> hv_common_cpu_die(cpu);
>>
>> - if (hv_vp_assist_page && hv_vp_assist_page[cpu]) {
>> - union hv_vp_assist_msr_contents msr = { 0 };
>> - if (hv_root_partition()) {
>> - /*
>> - * For root partition the VP assist page is mapped to
>> - * hypervisor provided page, and thus we unmap the
>> - * page here and nullify it, so that in future we have
>> - * correct page address mapped in hv_cpu_init.
>> - */
>> - memunmap(hv_vp_assist_page[cpu]);
>> - hv_vp_assist_page[cpu] = NULL;
>> - rdmsrq(HV_X64_MSR_VP_ASSIST_PAGE, msr.as_uint64);
>> - msr.enable = 0;
>> - }
>> - wrmsrq(HV_X64_MSR_VP_ASSIST_PAGE, msr.as_uint64);
>> - }
>> -
>> if (hv_reenlightenment_cb == NULL)
>> return 0;
>>
>> @@ -460,21 +393,6 @@ void __init hyperv_init(void)
>> if (hv_common_init())
>> return;
>>
>> - /*
>> - * The VP assist page is useless to a TDX guest: the only use we
>> - * would have for it is lazy EOI, which can not be used with TDX.
>> - */
>> - if (hv_isolation_type_tdx())
>> - hv_vp_assist_page = NULL;
>> - else
>> - hv_vp_assist_page = kzalloc_objs(*hv_vp_assist_page, nr_cpu_ids);
>> - if (!hv_vp_assist_page) {
>> - ms_hyperv.hints &= ~HV_X64_ENLIGHTENED_VMCS_RECOMMENDED;
>> -
>> - if (!hv_isolation_type_tdx())
>> - goto common_free;
>> - }
>> -
>> if (ms_hyperv.paravisor_present && hv_isolation_type_snp()) {
>> /* Negotiate GHCB Version. */
>> if (!hv_ghcb_negotiate_protocol())
>> @@ -483,7 +401,7 @@ void __init hyperv_init(void)
>>
>> hv_ghcb_pg = alloc_percpu(union hv_ghcb *);
>> if (!hv_ghcb_pg)
>> - goto free_vp_assist_page;
>> + goto free_ghcb_page;
>
> Seems like this should be "goto common_free". The allocation of
> hv_ghcb_pg has failed, so going to a label where hv_ghcb_pg is
> freed seems redundant. It works since free_percpu() checks for
> a NULL argument, but it's a bit unexpected since the common_free
> label is already there.
Thanks for catching this, I'll fix it.
>
>> }
>>
>> cpuhp = cpuhp_setup_state(CPUHP_AP_HYPERV_ONLINE, "x86/hyperv_init:online",
>> @@ -613,10 +531,6 @@ void __init hyperv_init(void)
>> cpuhp_remove_state(CPUHP_AP_HYPERV_ONLINE);
>> free_ghcb_page:
>> free_percpu(hv_ghcb_pg);
>> -free_vp_assist_page:
>> - kfree(hv_vp_assist_page);
>> - hv_vp_assist_page = NULL;
>> -common_free:
>> hv_common_free();
>> }
>>
>> diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
>> index f64393e853ee..95b452387969 100644
>> --- a/arch/x86/include/asm/mshyperv.h
>> +++ b/arch/x86/include/asm/mshyperv.h
>> @@ -155,16 +155,6 @@ static inline u64 hv_do_fast_hypercall16(u16 code, u64 input1, u64 input2)
>> return _hv_do_fast_hypercall16(control, input1, input2);
>> }
>>
>> -extern struct hv_vp_assist_page **hv_vp_assist_page;
>> -
>> -static inline struct hv_vp_assist_page *hv_get_vp_assist_page(unsigned int cpu)
>> -{
>> - if (!hv_vp_assist_page)
>> - return NULL;
>> -
>> - return hv_vp_assist_page[cpu];
>> -}
>> -
>> void __init hyperv_init(void);
>> void hyperv_setup_mmu_ops(void);
>> void set_hv_tscchange_cb(void (*cb)(void));
>> @@ -254,10 +244,6 @@ static inline void hyperv_setup_mmu_ops(void) {}
>> static inline void set_hv_tscchange_cb(void (*cb)(void)) {}
>> static inline void clear_hv_tscchange_cb(void) {}
>> static inline void hyperv_stop_tsc_emulation(void) {};
>> -static inline struct hv_vp_assist_page *hv_get_vp_assist_page(unsigned int cpu)
>> -{
>> - return NULL;
>> -}
>> static inline int hyperv_flush_guest_mapping(u64 as) { return -1; }
>> static inline int hyperv_flush_guest_mapping_range(u64 as,
>> hyperv_fill_flush_list_func fill_func, void *data)
>> diff --git a/drivers/hv/hv_common.c b/drivers/hv/hv_common.c
>> index 6b67ac616789..e8633bc51d56 100644
>> --- a/drivers/hv/hv_common.c
>> +++ b/drivers/hv/hv_common.c
>> @@ -28,7 +28,11 @@
>> #include <linux/slab.h>
>> #include <linux/dma-map-ops.h>
>> #include <linux/set_memory.h>
>> +#include <linux/vmalloc.h>
>> +#include <linux/io.h>
>> +#include <linux/hyperv.h>
>> #include <hyperv/hvhdk.h>
>> +#include <hyperv/hvgdk.h>
>> #include <asm/mshyperv.h>
>>
>> u64 hv_current_partition_id = HV_PARTITION_ID_SELF;
>> @@ -78,6 +82,8 @@ static struct ctl_table_header *hv_ctl_table_hdr;
>> u8 * __percpu *hv_synic_eventring_tail;
>> EXPORT_SYMBOL_GPL(hv_synic_eventring_tail);
>>
>> +struct hv_vp_assist_page **hv_vp_assist_page;
>> +EXPORT_SYMBOL_GPL(hv_vp_assist_page);
>> /*
>> * Hyper-V specific initialization and shutdown code that is
>> * common across all architectures. Called from architecture
>> @@ -92,6 +98,9 @@ void __init hv_common_free(void)
>> if (ms_hyperv.misc_features & HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE)
>> hv_kmsg_dump_unregister();
>>
>> + kfree(hv_vp_assist_page);
>> + hv_vp_assist_page = NULL;
>> +
>> kfree(hv_vp_index);
>> hv_vp_index = NULL;
>>
>> @@ -394,6 +403,23 @@ int __init hv_common_init(void)
>> for (i = 0; i < nr_cpu_ids; i++)
>> hv_vp_index[i] = VP_INVAL;
>>
>> + /*
>> + * The VP assist page is useless to a TDX guest: the only use we
>> + * would have for it is lazy EOI, which can not be used with TDX.
>> + */
>> + if (hv_isolation_type_tdx()) {
>> + hv_vp_assist_page = NULL;
>> +#ifdef CONFIG_X86_64
>> + ms_hyperv.hints &= ~HV_X64_ENLIGHTENED_VMCS_RECOMMENDED;
>> +#endif
>
> I realize that this #ifdef went away for the reason I flagged in v1 of
> this patch set, but it's back again for a different reason.
>
> Let me suggest another approach. hv_common_init() is called from
> both the x86/64 and arm64 hyperv_init() functions. Immediately after
> the call to hv_common_init() in the x86/64 hyperv_init(), test
> hv_vp_assist_page for NULL and clear
> HV_X64_ENLIGHTENED_VMCS_RECOMMENDED if it is. No #ifdef is
> needed, and x86/64 specific hackery stays under arch/x86 instead of
> being in common code.
Acked. Thanks.
>
>> + } else {
>> + hv_vp_assist_page = kzalloc_objs(*hv_vp_assist_page, nr_cpu_ids);
>> + if (!hv_vp_assist_page) {
>> + hv_common_free();
>> + return -ENOMEM;
>> + }
>> + }
>> +
>> return 0;
>> }
>>
>> @@ -471,6 +497,8 @@ void __init ms_hyperv_late_init(void)
>>
>> int hv_common_cpu_init(unsigned int cpu)
>> {
>> + union hv_vp_assist_msr_contents msr = { 0 };
>> + struct hv_vp_assist_page **hvp;
>> void **inputarg, **outputarg;
>> u8 **synic_eventring_tail;
>> u64 msr_vp_index;
>> @@ -539,7 +567,53 @@ int hv_common_cpu_init(unsigned int cpu)
>> sizeof(u8), flags);
>> /* No need to unwind any of the above on failure here */
>> if (unlikely(!*synic_eventring_tail))
>> - ret = -ENOMEM;
>> + return -ENOMEM;
>> + }
>> +
>> + if (!hv_vp_assist_page)
>> + return ret;
>> +
>> + hvp = &hv_vp_assist_page[cpu];
>> + if (hv_root_partition()) {
>> + /*
>> + * For root partition we get the hypervisor provided VP assist
>> + * page, instead of allocating a new page.
>> + */
>> + msr.as_uint64 = hv_get_msr(HV_MSR_VP_ASSIST_PAGE);
>> + *hvp = memremap(msr.pfn << HV_VP_ASSIST_PAGE_ADDRESS_SHIFT,
>> + HV_HYP_PAGE_SIZE, MEMREMAP_WB);
>> + } else {
>> + /*
>> + * The VP assist page is an "overlay" page (see Hyper-V TLFS's
>> + * Section 5.2.1 "GPA Overlay Pages"). Here it must be zeroed
>> + * out to make sure that on x86/x64, we always write the EOI MSR in
>> + * hv_apic_eoi_write() *after* the EOI optimization is disabled
>> + * in hv_cpu_die(), otherwise a CPU may not be stopped in the
>> + * case of CPU offlining and the VM will hang.
>> + */
>> + if (!*hvp) {
>> + *hvp = __vmalloc(HV_HYP_PAGE_SIZE, flags | __GFP_ZERO);
>> +
>> + /*
>> + * Hyper-V should never specify a VM that is a Confidential
>> + * VM and also running in the root partition. Root partition
>> + * is blocked to run in Confidential VM. So only decrypt assist
>> + * page in non-root partition here.
>> + */
>> + if (*hvp &&
>> + !ms_hyperv.paravisor_present &&
>> + hv_isolation_type_snp()) {
>> + WARN_ON_ONCE(set_memory_decrypted((unsigned long)(*hvp), 1));
>> + memset(*hvp, 0, HV_HYP_PAGE_SIZE);
>> + }
>> + }
>> +
>> + if (*hvp)
>> + msr.pfn = page_to_hvpfn(vmalloc_to_page(*hvp));
>
> Your Patch 0 changelog mentions adding a comment about vmalloc_to_pfn(), which
> I didn't see anywhere. I'm not sure what that comment would say, so maybe it
> became unnecessary.
I think I mixed up two things. The changelog entry was about your
suggestion to add "x86/x64" in the above comment about GPA Overlay
Pages. I also changed this call to page_to_hvpfn(vmalloc_to_page(*hvp))
as per your suggestion.
Apologies for the confusion.
>
>> + }
>> + if (!WARN_ON(!(*hvp))) {
>> + msr.enable = 1;
>> + hv_set_msr(HV_MSR_VP_ASSIST_PAGE, msr.as_uint64);
>> }
>>
>> return ret;
>> @@ -566,6 +640,24 @@ int hv_common_cpu_die(unsigned int cpu)
>> *synic_eventring_tail = NULL;
>> }
>>
>> + if (hv_vp_assist_page && hv_vp_assist_page[cpu]) {
>> + union hv_vp_assist_msr_contents msr = { 0 };
>> +
>> + if (hv_root_partition()) {
>> + /*
>> + * For root partition the VP assist page is mapped to
>> + * hypervisor provided page, and thus we unmap the
>> + * page here and nullify it, so that in future we have
>> + * correct page address mapped in hv_cpu_init.
>> + */
>> + memunmap(hv_vp_assist_page[cpu]);
>> + hv_vp_assist_page[cpu] = NULL;
>> + msr.as_uint64 = hv_get_msr(HV_MSR_VP_ASSIST_PAGE);
>> + msr.enable = 0;
>> + }
>> + hv_set_msr(HV_MSR_VP_ASSIST_PAGE, msr.as_uint64);
>> + }
>> +
>> return 0;
>> }
>>
>> diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h
>> index d37b68238c97..2810aa05dc73 100644
>> --- a/include/asm-generic/mshyperv.h
>> +++ b/include/asm-generic/mshyperv.h
>> @@ -25,6 +25,7 @@
>> #include <linux/nmi.h>
>> #include <asm/ptrace.h>
>> #include <hyperv/hvhdk.h>
>> +#include <hyperv/hvgdk.h>
>>
>> #define VTPM_BASE_ADDRESS 0xfed40000
>>
>> @@ -299,6 +300,16 @@ do { \
>> #define hv_status_debug(status, fmt, ...) \
>> hv_status_printk(debug, status, fmt, ##__VA_ARGS__)
>>
>> +extern struct hv_vp_assist_page **hv_vp_assist_page;
>> +
>> +static inline struct hv_vp_assist_page *hv_get_vp_assist_page(unsigned int cpu)
>> +{
>> + if (!hv_vp_assist_page)
>> + return NULL;
>> +
>> + return hv_vp_assist_page[cpu];
>> +}
>> +
>> const char *hv_result_to_string(u64 hv_status);
>> int hv_result_to_errno(u64 status);
>> void hyperv_report_panic(struct pt_regs *regs, long err, bool in_die);
>> @@ -327,6 +338,11 @@ static inline enum hv_isolation_type hv_get_isolation_type(void)
>> {
>> return HV_ISOLATION_TYPE_NONE;
>> }
>> +
>> +static inline struct hv_vp_assist_page *hv_get_vp_assist_page(unsigned int cpu)
>> +{
>> + return NULL;
>> +}
>> #endif /* CONFIG_HYPERV */
>>
>> #if IS_ENABLED(CONFIG_MSHV_ROOT)
>> diff --git a/include/hyperv/hvgdk_mini.h b/include/hyperv/hvgdk_mini.h
>> index 056ef7b6b360..c72d04cd5ae4 100644
>> --- a/include/hyperv/hvgdk_mini.h
>> +++ b/include/hyperv/hvgdk_mini.h
>> @@ -149,6 +149,7 @@ struct hv_u128 {
>> #define HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_SHIFT 12
>
> Can this X64 specific definition of the shift be eliminated entirely,
> and a single common definition for x86/64 and arm64 be used?
> As I understand it, the MSR layout is the same on both architectures.
> The one gotcha is that kvm_hv_set_msr() would need to be updated.
>
> HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_MASK defined below isn't
> used anywhere, so it could go away too. (The KVM selftest usage has
> its own definition.)
>
> I realize these are changes to a source code file that is derived from
> Windows, and I'm not sure of the guidelines for such changes. So maybe
> these suggestions have to be ignored ....
The VP assist page definition is common to both x86 and arm64, so the
address mask and shift can be shared. Also, I don't see the shift and
mask definitions in the Hyper-V header, so they seem to be specific to
in-kernel usage.
>
>> #define HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_MASK \
>> (~((1ull << HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_SHIFT) - 1))
>> +#define HV_MSR_VP_ASSIST_PAGE (HV_X64_MSR_VP_ASSIST_PAGE)
>
> This is the correct file for this #define, but it should be placed down around
> line 1148 or so with the other HV_MSR_* definitions in terms of HV_X64_MSR_*
>
Acked.
>>
>> /* Hyper-V Enlightened VMCS version mask in nested features CPUID */
>> #define HV_X64_ENLIGHTENED_VMCS_VERSION 0xff
>> @@ -410,6 +411,7 @@ union hv_x64_msr_hypercall_contents {
>> #if defined(CONFIG_ARM64)
>> #define HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE BIT(8)
>> #define HV_STIMER_DIRECT_MODE_AVAILABLE BIT(13)
>> +#define HV_VP_ASSIST_PAGE_ADDRESS_SHIFT 12
>> #endif /* CONFIG_ARM64 */
>>
>> #if defined(CONFIG_X86)
>> @@ -1163,6 +1165,8 @@ enum hv_register_name {
>> #define HV_MSR_STIMER0_CONFIG (HV_X64_MSR_STIMER0_CONFIG)
>> #define HV_MSR_STIMER0_COUNT (HV_X64_MSR_STIMER0_COUNT)
>>
>> +#define HV_VP_ASSIST_PAGE_ADDRESS_SHIFT HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_SHIFT
>> +
>> #elif defined(CONFIG_ARM64) /* CONFIG_X86 */
>>
>> #define HV_MSR_CRASH_P0 (HV_REGISTER_GUEST_CRASH_P0)
>> @@ -1185,7 +1189,7 @@ enum hv_register_name {
>>
>> #define HV_MSR_STIMER0_CONFIG (HV_REGISTER_STIMER0_CONFIG)
>> #define HV_MSR_STIMER0_COUNT (HV_REGISTER_STIMER0_COUNT)
>> -
>> +#define HV_MSR_VP_ASSIST_PAGE (HV_REGISTER_VP_ASSIST_PAGE)
>
> Nit: This definition is slightly mis-aligned. It has spaces where there
> should be a tab to match the similar definitions above it.
>
Acked.
>> #endif /* CONFIG_ARM64 */
>>
>> union hv_explicit_suspend_register {
>> --
>> 2.43.0
>>
Regards,
Naman
^ permalink raw reply [flat|nested] 31+ messages in thread
* Re: [PATCH v2 03/15] Drivers: hv: Move vmbus_handler to common code
2026-04-27 5:38 ` Michael Kelley
@ 2026-04-29 9:55 ` Naman Jain
0 siblings, 0 replies; 31+ messages in thread
From: Naman Jain @ 2026-04-29 9:55 UTC (permalink / raw)
To: Michael Kelley, K . Y . Srinivasan, Haiyang Zhang, Wei Liu,
Dexuan Cui, Long Li, Catalin Marinas, Will Deacon,
Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
x86@kernel.org, H . Peter Anvin, Arnd Bergmann, Paul Walmsley,
Palmer Dabbelt, Albert Ou, Alexandre Ghiti
Cc: Marc Zyngier, Timothy Hayes, Lorenzo Pieralisi, Sascha Bischoff,
mrigendrachaubey, linux-hyperv@vger.kernel.org,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
linux-riscv@lists.infradead.org, vdso@mailbox.org,
ssengar@linux.microsoft.com
On 4/27/2026 11:08 AM, Michael Kelley wrote:
> From: Naman Jain <namjain@linux.microsoft.com> Sent: Thursday, April 23, 2026 5:42 AM
>>
>> Move the vmbus_handler global variable and hv_setup_vmbus_handler()/
>> hv_remove_vmbus_handler() from arch/x86 to drivers/hv/hv_common.c.
>>
>> hv_setup_vmbus_handler() is called unconditionally in vmbus_bus_init()
>> and works for both x86 (sysvec handler) and arm64 (vmbus_percpu_isr).
>>
>> This eliminates the need for the separate percpu vmbus handler setup
>> functions and __weak stubs that would otherwise be needed for adding
>> ARM64 support in the MSHV_VTL driver, where we need to set a custom
>> per-cpu vmbus handler.
>>
>> Signed-off-by: Naman Jain <namjain@linux.microsoft.com>
>> ---
>> arch/x86/kernel/cpu/mshyperv.c | 12 ------------
>> drivers/hv/hv_common.c | 9 +++++++--
>> drivers/hv/vmbus_drv.c | 17 +++++++++--------
>> include/asm-generic/mshyperv.h | 1 +
>> 4 files changed, 17 insertions(+), 22 deletions(-)
>>
>> diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
>> index 89a2eb8a0722..68706ff5880e 100644
>> --- a/arch/x86/kernel/cpu/mshyperv.c
>> +++ b/arch/x86/kernel/cpu/mshyperv.c
>> @@ -145,7 +145,6 @@ void hv_set_msr(unsigned int reg, u64 value)
>> EXPORT_SYMBOL_GPL(hv_set_msr);
>>
>> static void (*mshv_handler)(void);
>> -static void (*vmbus_handler)(void);
>> static void (*hv_stimer0_handler)(void);
>> static void (*hv_kexec_handler)(void);
>> static void (*hv_crash_handler)(struct pt_regs *regs);
>> @@ -172,17 +171,6 @@ void hv_setup_mshv_handler(void (*handler)(void))
>> mshv_handler = handler;
>> }
>>
>> -void hv_setup_vmbus_handler(void (*handler)(void))
>> -{
>> - vmbus_handler = handler;
>> -}
>> -
>> -void hv_remove_vmbus_handler(void)
>> -{
>> - /* We have no way to deallocate the interrupt gate */
>> - vmbus_handler = NULL;
>> -}
>> -
>> /*
>> * Routines to do per-architecture handling of stimer0
>> * interrupts when in Direct Mode
>> diff --git a/drivers/hv/hv_common.c b/drivers/hv/hv_common.c
>> index e8633bc51d56..eb7b0028b45d 100644
>> --- a/drivers/hv/hv_common.c
>> +++ b/drivers/hv/hv_common.c
>> @@ -758,13 +758,18 @@ bool __weak hv_isolation_type_tdx(void)
>> }
>> EXPORT_SYMBOL_GPL(hv_isolation_type_tdx);
>>
>> -void __weak hv_setup_vmbus_handler(void (*handler)(void))
>> +void (*vmbus_handler)(void);
>> +EXPORT_SYMBOL_GPL(vmbus_handler);
>> +
>> +void hv_setup_vmbus_handler(void (*handler)(void))
>> {
>> + vmbus_handler = handler;
>> }
>> EXPORT_SYMBOL_GPL(hv_setup_vmbus_handler);
>>
>> -void __weak hv_remove_vmbus_handler(void)
>> +void hv_remove_vmbus_handler(void)
>> {
>> + vmbus_handler = NULL;
>> }
>> EXPORT_SYMBOL_GPL(hv_remove_vmbus_handler);
>
> I'd suggest moving hv_setup_vmbus_handler() and
> hv_remove_vmbus_handler() above or below the group
> of __weak stubs in this source code file. There's a comment
> describing the purpose of these __weak functions, and
> intermixing these two functions that are no longer __weak
> produces something of a jumble.
>
Acked.
>>
>> diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
>> index bc4fc1951ae1..052ca8b11cee 100644
>> --- a/drivers/hv/vmbus_drv.c
>> +++ b/drivers/hv/vmbus_drv.c
>> @@ -1415,7 +1415,8 @@ EXPORT_SYMBOL_FOR_MODULES(vmbus_isr, "mshv_vtl");
>>
>> static irqreturn_t vmbus_percpu_isr(int irq, void *dev_id)
>> {
>> - vmbus_isr();
>> + if (vmbus_handler)
>> + vmbus_handler();
>
> Is it necessary to test vmbus_handler first? From what I can
> see, it is always set before the per-cpu interrupt is setup.
After reordering hv_remove_vmbus_handler() to run after the IRQ is
freed, the check can safely be removed. It was required when I was
setting vmbus_handler to NULL first, before freeing the IRQ.
>
>> return IRQ_HANDLED;
>> }
>>
>> @@ -1517,8 +1518,10 @@ static int vmbus_bus_init(void)
>> vmbus_irq_initialized = true;
>> }
>>
>> + hv_setup_vmbus_handler(vmbus_isr);
>> +
>> if (vmbus_irq == -1) {
>> - hv_setup_vmbus_handler(vmbus_isr);
>> + /* x86: sysvec handler uses vmbus_handler directly */
>> } else {
>> ret = request_percpu_irq(vmbus_irq, vmbus_percpu_isr,
>> "Hyper-V VMbus", &vmbus_evt);
>> @@ -1553,9 +1556,8 @@ static int vmbus_bus_init(void)
>> return 0;
>>
>> err_connect:
>> - if (vmbus_irq == -1)
>> - hv_remove_vmbus_handler();
>> - else
>> + hv_remove_vmbus_handler();
>> + if (vmbus_irq != -1)
>> free_percpu_irq(vmbus_irq, &vmbus_evt);
>
> These operations should be reordered so they are the inverse
> of how they are setup. I.e., free_percpu_irq() first, then remove
> the VMBus handler. That's just good standard practice unless
> there's a specific reason to do the cleanup ordering differently. In
> fact, hv_remove_vmbus_handler() needs to be moved down
> to the err_setup label so it's done if request_percpu_irq()
> fails.
Acked. I will do the same for other hv_remove_vmbus_handler() as well.
>
>> err_setup:
>> if (IS_ENABLED(CONFIG_PREEMPT_RT) && vmbus_irq_initialized) {
>> @@ -3026,9 +3028,8 @@ static void __exit vmbus_exit(void)
>> vmbus_connection.conn_state = DISCONNECTED;
>> hv_stimer_global_cleanup();
>> vmbus_disconnect();
>> - if (vmbus_irq == -1)
>> - hv_remove_vmbus_handler();
>> - else
>> + hv_remove_vmbus_handler();
>> + if (vmbus_irq != -1)
>> free_percpu_irq(vmbus_irq, &vmbus_evt);
>
> Ordering should be changed here as well so it is the inverse
> of how things are set up.
>
>> if (IS_ENABLED(CONFIG_PREEMPT_RT) && vmbus_irq_initialized) {
>> smpboot_unregister_percpu_thread(&vmbus_irq_threads);
>> diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h
>> index 2810aa05dc73..db183c8cfb95 100644
>> --- a/include/asm-generic/mshyperv.h
>> +++ b/include/asm-generic/mshyperv.h
>> @@ -179,6 +179,7 @@ static inline u64 hv_generate_guest_id(u64 kernel_version)
>>
>> int hv_get_hypervisor_version(union hv_hypervisor_version_info *info);
>>
>> +extern void (*vmbus_handler)(void);
>> void hv_setup_vmbus_handler(void (*handler)(void));
>> void hv_remove_vmbus_handler(void);
>> void hv_setup_stimer0_handler(void (*handler)(void));
>> --
>> 2.43.0
>>
Regards,
Naman
* Re: [PATCH v2 07/15] arm64: hyperv: Add support for mshv_vtl_return_call
2026-04-23 13:56 ` Mark Rutland
@ 2026-04-29 9:56 ` Naman Jain
0 siblings, 0 replies; 31+ messages in thread
From: Naman Jain @ 2026-04-29 9:56 UTC (permalink / raw)
To: Mark Rutland, Marc Zyngier
Cc: K . Y . Srinivasan, Haiyang Zhang, Wei Liu, Dexuan Cui, Long Li,
Catalin Marinas, Will Deacon, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, x86, H . Peter Anvin, Arnd Bergmann,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti,
Michael Kelley, Marc Zyngier, Timothy Hayes, Lorenzo Pieralisi,
Sascha Bischoff, mrigendrachaubey, linux-hyperv, linux-arm-kernel,
linux-kernel, linux-arch, linux-riscv, vdso, ssengar
On 4/23/2026 7:26 PM, Mark Rutland wrote:
> On Thu, Apr 23, 2026 at 12:41:57PM +0000, Naman Jain wrote:
>> Add the arm64 variant of mshv_vtl_return_call() to support the MSHV_VTL
>> driver on arm64. This function enables the transition between Virtual
>> Trust Levels (VTLs) in MSHV_VTL when the kernel acts as a paravisor.
>>
>> Signed-off-by: Roman Kisel <romank@linux.microsoft.com>
>> Reviewed-by: Roman Kisel <vdso@mailbox.org>
>> Signed-off-by: Naman Jain <namjain@linux.microsoft.com>
>> ---
>> arch/arm64/hyperv/Makefile | 1 +
>> arch/arm64/hyperv/hv_vtl.c | 158 ++++++++++++++++++++++++++++++
>> arch/arm64/include/asm/mshyperv.h | 13 +++
>> arch/x86/include/asm/mshyperv.h | 2 -
>> drivers/hv/mshv_vtl.h | 3 +
>> include/asm-generic/mshyperv.h | 2 +
>> 6 files changed, 177 insertions(+), 2 deletions(-)
>> create mode 100644 arch/arm64/hyperv/hv_vtl.c
>>
>> diff --git a/arch/arm64/hyperv/Makefile b/arch/arm64/hyperv/Makefile
>> index 87c31c001da9..9701a837a6e1 100644
>> --- a/arch/arm64/hyperv/Makefile
>> +++ b/arch/arm64/hyperv/Makefile
>> @@ -1,2 +1,3 @@
>> # SPDX-License-Identifier: GPL-2.0
>> obj-y := hv_core.o mshyperv.o
>> +obj-$(CONFIG_HYPERV_VTL_MODE) += hv_vtl.o
>> diff --git a/arch/arm64/hyperv/hv_vtl.c b/arch/arm64/hyperv/hv_vtl.c
>> new file mode 100644
>> index 000000000000..59cbeb74e7b9
>> --- /dev/null
>> +++ b/arch/arm64/hyperv/hv_vtl.c
>> @@ -0,0 +1,158 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/*
>> + * Copyright (C) 2026, Microsoft, Inc.
>> + *
>> + * Authors:
>> + * Roman Kisel <romank@linux.microsoft.com>
>> + * Naman Jain <namjain@linux.microsoft.com>
>> + */
>> +
>> +#include <asm/mshyperv.h>
>> +#include <asm/neon.h>
>> +#include <linux/export.h>
>> +
>> +void mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0)
>> +{
>> + struct user_fpsimd_state fpsimd_state;
>> + u64 base_ptr = (u64)vtl0->x;
>> +
>> + /*
>> + * Obtain the CPU FPSIMD registers for VTL context switch.
>> + * This saves the current task's FP/NEON state and allows us to
>> + * safely load VTL0's FP/NEON context for the hypercall.
>> + */
>> + kernel_neon_begin(&fpsimd_state);
>> +
>> + /*
>> + * VTL switch for ARM64 platform - managing VTL0's CPU context.
>> + * We explicitly use the stack to save the base pointer, and use x16
>> + * as our working register for accessing the context structure.
>> + *
>> + * Register Handling:
>> + * - X0-X17: Saved/restored (general-purpose, shared for VTL communication)
>> + * - X18: NOT touched - hypervisor-managed per-VTL (platform register)
>> + * - X19-X30: Saved/restored (part of VTL0's execution context)
>> + * - Q0-Q31: Saved/restored (128-bit NEON/floating-point registers, shared)
>> + * - SP: Not in structure, hypervisor-managed per-VTL
>> + *
>> + * X29 (FP) and X30 (LR) are in the structure and must be saved/restored
>> + * as part of VTL0's complete execution state.
>> + */
>> + asm __volatile__ (
>> + /* Save base pointer to stack explicitly, then load into x16 */
>> + "str %0, [sp, #-16]!\n\t" /* Push base pointer onto stack */
>> + "mov x16, %0\n\t" /* Load base pointer into x16 */
>> + /* Volatile registers (Windows ARM64 ABI: x0-x17) */
>> + "ldp x0, x1, [x16]\n\t"
>> + "ldp x2, x3, [x16, #(2*8)]\n\t"
>> + "ldp x4, x5, [x16, #(4*8)]\n\t"
>> + "ldp x6, x7, [x16, #(6*8)]\n\t"
>> + "ldp x8, x9, [x16, #(8*8)]\n\t"
>> + "ldp x10, x11, [x16, #(10*8)]\n\t"
>> + "ldp x12, x13, [x16, #(12*8)]\n\t"
>> + "ldp x14, x15, [x16, #(14*8)]\n\t"
>> + /* x16 will be loaded last, after saving base pointer */
>> + "ldr x17, [x16, #(17*8)]\n\t"
>> + /* x18 is hypervisor-managed per-VTL - DO NOT LOAD */
>> +
>> + /* General-purpose registers: x19-x30 */
>> + "ldp x19, x20, [x16, #(19*8)]\n\t"
>> + "ldp x21, x22, [x16, #(21*8)]\n\t"
>> + "ldp x23, x24, [x16, #(23*8)]\n\t"
>> + "ldp x25, x26, [x16, #(25*8)]\n\t"
>> + "ldp x27, x28, [x16, #(27*8)]\n\t"
>> +
>> + /* Frame pointer and link register */
>> + "ldp x29, x30, [x16, #(29*8)]\n\t"
>> +
>> + /* Shared NEON/FP registers: Q0-Q31 (128-bit) */
>> + "ldp q0, q1, [x16, #(32*8)]\n\t"
>> + "ldp q2, q3, [x16, #(32*8 + 2*16)]\n\t"
>> + "ldp q4, q5, [x16, #(32*8 + 4*16)]\n\t"
>> + "ldp q6, q7, [x16, #(32*8 + 6*16)]\n\t"
>> + "ldp q8, q9, [x16, #(32*8 + 8*16)]\n\t"
>> + "ldp q10, q11, [x16, #(32*8 + 10*16)]\n\t"
>> + "ldp q12, q13, [x16, #(32*8 + 12*16)]\n\t"
>> + "ldp q14, q15, [x16, #(32*8 + 14*16)]\n\t"
>> + "ldp q16, q17, [x16, #(32*8 + 16*16)]\n\t"
>> + "ldp q18, q19, [x16, #(32*8 + 18*16)]\n\t"
>> + "ldp q20, q21, [x16, #(32*8 + 20*16)]\n\t"
>> + "ldp q22, q23, [x16, #(32*8 + 22*16)]\n\t"
>> + "ldp q24, q25, [x16, #(32*8 + 24*16)]\n\t"
>> + "ldp q26, q27, [x16, #(32*8 + 26*16)]\n\t"
>> + "ldp q28, q29, [x16, #(32*8 + 28*16)]\n\t"
>> + "ldp q30, q31, [x16, #(32*8 + 30*16)]\n\t"
>> +
>> + /* Now load x16 itself */
>> + "ldr x16, [x16, #(16*8)]\n\t"
>> +
>> + /* Return to the lower VTL */
>> + "hvc #3\n\t"
>
> NAK to this.
>
> * This is a non-SMCCC hypercall, which we have NAK'd in general in the
> past for various reasons that I am not going to rehash here.
>
> * It's not clear how this is going to be extended with necessary
> architecture state in future (e.g. SVE, SME). This is not
> future-proof, and I don't believe this is maintainable.
>
> * This breaks general requirements for reliable stacktracing by
> clobbering state (e.g. x29) that we depend upon being valid AT ALL
> TIMES outside of entry code.
>
> * IMO, if this needs to be saved/restored, that should happen in
> whatever you are calling.
>
> Mark.
Merging threads to address comments from Mark Rutland and Marc
Zyngier on this patch.
Thanks for reviewing the changes. Please allow me to briefly explain the
use case here and then address your comments.
Hyper-V's Virtual Trust Levels (VTLs) provide hardware-enforced
isolation within a single VM, analogous to ARM TrustZone. The kernel
runs in VTL2 (higher privilege) as a "paravisor", a security monitor
that handles intercepts for the primary OS in VTL0 (lower privilege).
The VTL switch (mshv_vtl_return_call) is functionally equivalent to
KVM's guest enter/exit. It saves VTL2 state, loads VTL0's GPRs and other
registers from a shared context structure, issues hvc #3 to let VTL0
run, and on return saves VTL0's updated state back.
Coming to the problems with the code, I have identified a few ways to
address them.
I can put the assembly code in a separate .S file with
SYM_FUNC_START/SYM_FUNC_END, marked noinstr, to prevent ftrace/kprobes
from instrumenting between the GPR load and the hvc, which could
corrupt VTL0 register state. This should solve the x29 clobbering and
stack-tracing problems.
I should use kernel_neon_begin()/kernel_neon_end() to save/restore the
full extended FP state of the current task in VTL2. VTL0's Q0-Q31 can be
loaded/saved separately via fpsimd_load_state()/fpsimd_save_state().
This way, the assembly touches none of the SIMD registers and is
SVE/SME-safe for VTL2's task state. VTL0 still only carries Q0-Q31 in
the context struct; extending it to SVE/SME is a future context-struct
change, which will need Hyper-V arm64 ABI support.
With this split, VTL2's callee-saved regs (x19-x28, x29, x30) are
explicitly saved to the stack frame at the top of the assembly code and
restored at the bottom, so the call from C (in hv_vtl.c) behaves as a
clean function call.
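For concreteness, a rough pseudocode sketch of that split (helper names
are assumed for illustration and are not the final kernel API):

```c
/* Pseudocode sketch only - not the actual kernel implementation. */
void mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0)
{
	struct user_fpsimd_state vtl2_fpsimd;

	/* Save VTL2's task FP/SIMD state (SVE/SME-safe for VTL2). */
	kernel_neon_begin(&vtl2_fpsimd);

	/* Load VTL0's Q0-Q31 from the shared context. */
	fpsimd_load_state(&vtl0_fpsimd);

	/*
	 * noinstr .S routine: push x19-x30 to the stack, load VTL0's
	 * GPRs from vtl0->x[], issue "hvc #3", store VTL0's updated
	 * GPRs back, then restore x19-x30.
	 */
	__mshv_vtl_return(vtl0);

	/* Capture VTL0's updated Q regs, then restore VTL2's state. */
	fpsimd_save_state(&vtl0_fpsimd);
	kernel_neon_end();
}
```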
Regarding the non-SMCCC "hvc #3" call, I have a limitation here owing
to the ABI defined by the Hyper-V hypervisor. Fixing this requires a
hypervisor-side change to support SMCCC-style dispatch for VTL return.
Until then, hvc #3 is the only working interface. Moreover, if such a
new ABI interface were ever added, it would raise backward-compatibility
issues.
Link to TLFS:
https://learn.microsoft.com/en-us/virtualization/hyper-v-on-windows/tlfs/vsm#on-arm64-platforms-3
Please correct me if any of the above is incorrect or if I should be
looking at some other existing examples to solve these problems.
Regards,
Naman
* Re: [PATCH v2 07/15] arm64: hyperv: Add support for mshv_vtl_return_call
2026-04-27 5:38 ` Michael Kelley
@ 2026-04-29 9:56 ` Naman Jain
0 siblings, 0 replies; 31+ messages in thread
From: Naman Jain @ 2026-04-29 9:56 UTC (permalink / raw)
To: Michael Kelley, K . Y . Srinivasan, Haiyang Zhang, Wei Liu,
Dexuan Cui, Long Li, Catalin Marinas, Will Deacon,
Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
x86@kernel.org, H . Peter Anvin, Arnd Bergmann, Paul Walmsley,
Palmer Dabbelt, Albert Ou, Alexandre Ghiti
Cc: Marc Zyngier, Timothy Hayes, Lorenzo Pieralisi, Sascha Bischoff,
mrigendrachaubey, linux-hyperv@vger.kernel.org,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
linux-riscv@lists.infradead.org, vdso@mailbox.org,
ssengar@linux.microsoft.com
On 4/27/2026 11:08 AM, Michael Kelley wrote:
> From: Naman Jain <namjain@linux.microsoft.com> Sent: Thursday, April 23, 2026 5:42 AM
>>
>> Add the arm64 variant of mshv_vtl_return_call() to support the MSHV_VTL
>> driver on arm64. This function enables the transition between Virtual
>> Trust Levels (VTLs) in MSHV_VTL when the kernel acts as a paravisor.
>>
>> Signed-off-by: Roman Kisel <romank@linux.microsoft.com>
>> Reviewed-by: Roman Kisel <vdso@mailbox.org>
>> Signed-off-by: Naman Jain <namjain@linux.microsoft.com>
>> ---
>> arch/arm64/hyperv/Makefile | 1 +
>> arch/arm64/hyperv/hv_vtl.c | 158 ++++++++++++++++++++++++++++++
>> arch/arm64/include/asm/mshyperv.h | 13 +++
>> arch/x86/include/asm/mshyperv.h | 2 -
>> drivers/hv/mshv_vtl.h | 3 +
>> include/asm-generic/mshyperv.h | 2 +
>> 6 files changed, 177 insertions(+), 2 deletions(-)
>> create mode 100644 arch/arm64/hyperv/hv_vtl.c
>>
>
> [snip]
>
>> diff --git a/arch/arm64/include/asm/mshyperv.h b/arch/arm64/include/asm/mshyperv.h
>> index 585b23a26f1b..9eb0e5999f29 100644
>> --- a/arch/arm64/include/asm/mshyperv.h
>> +++ b/arch/arm64/include/asm/mshyperv.h
>> @@ -60,6 +60,18 @@ static inline u64 hv_get_non_nested_msr(unsigned int reg)
>> ARM_SMCCC_SMC_64, \
>> ARM_SMCCC_OWNER_VENDOR_HYP, \
>> HV_SMCCC_FUNC_NUMBER)
>> +
>> +struct mshv_vtl_cpu_context {
>> +/*
>> + * x18 is managed by the hypervisor. It won't be reloaded from this array.
>> + * It is included here for convenience in array indexing.
>> + * 'rsvd' field serves as alignment padding so q[] starts at offset 32*8=256.
>> + */
>> + __u64 x[31];
>> + __u64 rsvd;
>> + __uint128_t q[32];
>> +};
>> +
>> #ifdef CONFIG_HYPERV_VTL_MODE
>> /*
>> * Get/Set the register. If the function returns `1`, that must be done via
>> @@ -69,6 +81,7 @@ static inline int hv_vtl_get_set_reg(struct hv_register_assoc *regs, bool set, bool shared)
>> {
>> return 1;
>> }
>> +
>
> This appears to be a spurious blank line being added since there
> are no other changes in the vicinity.
Acked.
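Incidentally, the padding claim in the struct comment above (q[]
starting at offset 32*8 = 256) is easy to sanity-check with a host-side
mirror of the layout (uint64_t/__uint128_t standing in for the kernel
types):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Host-side mirror of the proposed arm64 context layout. */
struct mshv_vtl_cpu_context {
	uint64_t x[31];
	uint64_t rsvd;		/* pads x[] so q[] lands at offset 256 */
	__uint128_t q[32];	/* 128-bit NEON/FP registers */
};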
>
>> #endif
>>
>> #include <asm-generic/mshyperv.h>
>> diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
>> index 08278547b84c..b4d80c9a673a 100644
>> --- a/arch/x86/include/asm/mshyperv.h
>> +++ b/arch/x86/include/asm/mshyperv.h
>> @@ -286,7 +286,6 @@ struct mshv_vtl_cpu_context {
>> #ifdef CONFIG_HYPERV_VTL_MODE
>> void __init hv_vtl_init_platform(void);
>> int __init hv_vtl_early_init(void);
>> -void mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0);
>> void mshv_vtl_return_call_init(u64 vtl_return_offset);
>> void mshv_vtl_return_hypercall(void);
>> void __mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0);
>> @@ -294,7 +293,6 @@ int hv_vtl_get_set_reg(struct hv_register_assoc *regs, bool set, bool shared);
>> #else
>> static inline void __init hv_vtl_init_platform(void) {}
>> static inline int __init hv_vtl_early_init(void) { return 0; }
>> -static inline void mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0) {}
>> static inline void mshv_vtl_return_call_init(u64 vtl_return_offset) {}
>> static inline void mshv_vtl_return_hypercall(void) {}
>> static inline void __mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0) {}
>> diff --git a/drivers/hv/mshv_vtl.h b/drivers/hv/mshv_vtl.h
>> index a6eea52f7aa2..103f07371f3f 100644
>> --- a/drivers/hv/mshv_vtl.h
>> +++ b/drivers/hv/mshv_vtl.h
>> @@ -22,4 +22,7 @@ struct mshv_vtl_run {
>> char vtl_ret_actions[MSHV_MAX_RUN_MSG_SIZE];
>> };
>>
>> +static_assert(sizeof(struct mshv_vtl_cpu_context) <= 1024,
>> +	      "struct mshv_vtl_cpu_context exceeds reserved space in struct mshv_vtl_run");
>> +
>> #endif /* _MSHV_VTL_H */
>> diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h
>> index db183c8cfb95..8cdf2a9fbdfb 100644
>> --- a/include/asm-generic/mshyperv.h
>> +++ b/include/asm-generic/mshyperv.h
>> @@ -396,8 +396,10 @@ static inline int hv_deposit_memory(u64 partition_id, u64 status)
>>
>> #if IS_ENABLED(CONFIG_HYPERV_VTL_MODE)
>> u8 __init get_vtl(void);
>> +void mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0);
>> #else
>> static inline u8 get_vtl(void) { return 0; }
>> +static inline void mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0) {}
>
> Is this stub needed? Maybe I missed something, but it looks to me like none
> of the code that calls this gets built unless CONFIG_HYPERV_VTL_MODE is set.
> See further comments about stubs in Patch 8 of this series.
>
Config dependencies would handle such cases, so this stub is not
required. I saw similar stubs elsewhere in the code, so I assumed adding
them was the norm to follow rather than relying on config dependencies.
I can remove it.
Regards,
Naman
* Re: [PATCH v2 08/15] Drivers: hv: Move hv_call_(get|set)_vp_registers() declarations
2026-04-27 5:39 ` Michael Kelley
@ 2026-04-29 9:57 ` Naman Jain
0 siblings, 0 replies; 31+ messages in thread
From: Naman Jain @ 2026-04-29 9:57 UTC (permalink / raw)
To: Michael Kelley, K . Y . Srinivasan, Haiyang Zhang, Wei Liu,
Dexuan Cui, Long Li, Catalin Marinas, Will Deacon,
Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
x86@kernel.org, H . Peter Anvin, Arnd Bergmann, Paul Walmsley,
Palmer Dabbelt, Albert Ou, Alexandre Ghiti
Cc: Marc Zyngier, Timothy Hayes, Lorenzo Pieralisi, Sascha Bischoff,
mrigendrachaubey, linux-hyperv@vger.kernel.org,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
linux-riscv@lists.infradead.org, vdso@mailbox.org,
ssengar@linux.microsoft.com
On 4/27/2026 11:09 AM, Michael Kelley wrote:
> From: Naman Jain <namjain@linux.microsoft.com> Sent: Thursday, April 23, 2026 5:42 AM
>>
>> Move hv_call_get_vp_registers() and hv_call_set_vp_registers()
>> declarations from drivers/hv/mshv.h to include/asm-generic/mshyperv.h.
>>
>> These functions are defined in mshv_common.c and are going to be called
>> from both drivers/hv/ and arch/x86/hyperv/hv_vtl.c. The latter never
>> included mshv.h, relying on implicit declaration visibility. Moving the
>> declarations to the arch-generic Hyper-V header makes them properly
>> visible to all architecture-specific callers.
>>
>> Provide static inline stubs returning -EOPNOTSUPP when neither
>> CONFIG_MSHV_ROOT nor CONFIG_MSHV_VTL is enabled.
>
> Looking at the drivers/hv/Kconfig, it's possible to build with
> CONFIG_HYPERV_VTL_MODE=y, but not CONFIG_MSHV_VTL. In such a
> case, mshv_common.o doesn't get built, which is why the stubs are
> needed. Is such a configuration desirable for some scenarios?
>
> I wonder if having CONFIG_HYPERV_VTL_MODE force the building of
> mshv_common.o would be a better approach. Then the stubs wouldn't
> be needed. The "ifneq" statement in drivers/hv/Makefile could use
> CONFIG_HYPERV_VTL_MODE instead of CONFIG_MSHV_VTL, and
> everything would be good since CONFIG_MSHV_VTL depends on
> CONFIG_HYPERV_VTL_MODE.
>
This looks good. I'll try this and make the changes. If I run into
challenges with that approach, I'll get back to you.
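If it helps, the suggested Makefile change might look roughly like this
(a sketch of drivers/hv/Makefile assuming the current "ifneq" form; not
verified against the tree):

```make
# Build mshv_common.o whenever VTL mode is enabled, so no stubs are
# needed; CONFIG_MSHV_VTL depends on CONFIG_HYPERV_VTL_MODE, so this
# covers the MSHV_VTL case as well.
ifneq ($(CONFIG_MSHV_ROOT)$(CONFIG_HYPERV_VTL_MODE),)
obj-y	+= mshv_common.o
endif
```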
Regards,
Naman
* Re: [PATCH v2 09/15] Drivers: hv: mshv_vtl: Move hv_vtl_configure_reg_page() to x86
2026-04-27 5:40 ` Michael Kelley
@ 2026-04-29 9:57 ` Naman Jain
0 siblings, 0 replies; 31+ messages in thread
From: Naman Jain @ 2026-04-29 9:57 UTC (permalink / raw)
To: Michael Kelley, K . Y . Srinivasan, Haiyang Zhang, Wei Liu,
Dexuan Cui, Long Li, Catalin Marinas, Will Deacon,
Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
x86@kernel.org, H . Peter Anvin, Arnd Bergmann, Paul Walmsley,
Palmer Dabbelt, Albert Ou, Alexandre Ghiti
Cc: Marc Zyngier, Timothy Hayes, Lorenzo Pieralisi, Sascha Bischoff,
mrigendrachaubey, linux-hyperv@vger.kernel.org,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
linux-riscv@lists.infradead.org, vdso@mailbox.org,
ssengar@linux.microsoft.com
On 4/27/2026 11:10 AM, Michael Kelley wrote:
> From: Naman Jain <namjain@linux.microsoft.com> Sent: Thursday, April 23, 2026 5:42 AM
>>
>> Move hv_vtl_configure_reg_page() from drivers/hv/mshv_vtl_main.c to
>> arch/x86/hyperv/hv_vtl.c. The register page overlay is an x86-specific
>> feature that uses HV_X64_REGISTER_REG_PAGE, so its configuration belongs
>> in architecture-specific code.
>>
>> Move struct mshv_vtl_per_cpu and union hv_synic_overlay_page_msr to
>> include/asm-generic/mshyperv.h so they are visible to both arch and
>> driver code.
>>
>> Change the return type from void to bool so the caller can determine
>> whether the register page was successfully configured and set
>> mshv_has_reg_page accordingly.
>>
>> Signed-off-by: Naman Jain <namjain@linux.microsoft.com>
>> ---
>> arch/x86/hyperv/hv_vtl.c | 32 ++++++++++++++++++++++
>> drivers/hv/mshv_vtl_main.c | 49 +++-------------------------------
>> include/asm-generic/mshyperv.h | 17 ++++++++++++
>> 3 files changed, 53 insertions(+), 45 deletions(-)
>>
<snip>
>> #if IS_ENABLED(CONFIG_HYPERV_VTL_MODE)
>> +/* SYNIC_OVERLAY_PAGE_MSR - internal, identical to hv_synic_simp */
>
> This comment pre-dates your patch, but I don't understand the point
> it is trying to make. The comment is factually true, but I don't know
> why calling that out is relevant. The REG_PAGE MSR seems to be
> conceptually separate and distinct from the SIMP MSR, so the fact
> that the layouts are the same is just a coincidence. Or is there some
> relationship between the two MSRs that I'm not aware of, and the
> comment is trying (and failing?) to point out?
This was added per a suggestion from Nuno on my initial MSHV_VTL
series. If the "identical to" reference is misleading, I can remove it.
https://lore.kernel.org/all/68143eb0-e6a7-4579-bedb-4c2ec5aaef6b@linux.microsoft.com/
Quoting:
"""
it is a generic structure that
appears to be used for several overlay page MSRs (SIMP, SIEF, etc).
But, the type doesn't appear in the hv*dk headers explicitly; it's just
used internally by the hypervisor.
I think it should be renamed with a hv_ prefix to indicate it's part of
the hypervisor ABI, and a brief comment with the provenance:
/* SYNIC_OVERLAY_PAGE_MSR - internal, identical to hv_synic_simp */
union hv_synic_overlay_page_msr {
/* <snip> */
};
"""
>
>> +union hv_synic_overlay_page_msr {
>> + u64 as_uint64;
>> + struct {
>> + u64 enabled: 1;
>> + u64 reserved: 11;
>> + u64 pfn: 52;
>> + } __packed;
>> +};
>> +
>> u8 __init get_vtl(void);
>> void mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0);
>> +bool hv_vtl_configure_reg_page(struct mshv_vtl_per_cpu *per_cpu);
>> #else
>> static inline u8 get_vtl(void) { return 0; }
>> static inline void mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0) {}
>> +static inline bool hv_vtl_configure_reg_page(struct mshv_vtl_per_cpu *per_cpu) { return false; }
>
> As with Patch 8, if CONFIG_HYPERV_VTL_MODE caused mshv_common.o
> to be built, this stub wouldn't be needed.
>
Acked.
>> #endif
>>
>> #endif
>> --
>> 2.43.0
>>
Regards,
Naman
* Re: [PATCH v2 12/15] mshv_vtl: Move VSM code page offset logic to x86 files
2026-04-27 5:40 ` Michael Kelley
@ 2026-04-29 10:00 ` Naman Jain
0 siblings, 0 replies; 31+ messages in thread
From: Naman Jain @ 2026-04-29 10:00 UTC (permalink / raw)
To: Michael Kelley, K . Y . Srinivasan, Haiyang Zhang, Wei Liu,
Dexuan Cui, Long Li, Catalin Marinas, Will Deacon,
Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
x86@kernel.org, H . Peter Anvin, Arnd Bergmann, Paul Walmsley,
Palmer Dabbelt, Albert Ou, Alexandre Ghiti
Cc: Marc Zyngier, Timothy Hayes, Lorenzo Pieralisi, Sascha Bischoff,
mrigendrachaubey, linux-hyperv@vger.kernel.org,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
linux-riscv@lists.infradead.org, vdso@mailbox.org,
ssengar@linux.microsoft.com
On 4/27/2026 11:10 AM, Michael Kelley wrote:
> From: Naman Jain <namjain@linux.microsoft.com> Sent: Thursday, April 23, 2026 5:42 AM
>>
>> The VSM code page offset register (HV_REGISTER_VSM_CODE_PAGE_OFFSETS)
>> is x86 specific, its value configures the static call used to return
>> to VTL0 via the hypercall page. Move the register read from the common
>> mshv_vtl_get_vsm_regs() into the x86 mshv_vtl_return_call_init(),
>> which is the sole consumer of the offset.
>>
>> Change mshv_vtl_return_call_init() from taking a u64 parameter
>> to taking no arguments, and rename mshv_vtl_get_vsm_regs() to
>> mshv_vtl_get_vsm_cap_reg() since it now only fetches
>> HV_REGISTER_VSM_CAPABILITIES.
>>
>> No functional change on x86. This prepares the common driver code for
>> ARM64 where VSM code page offsets do not apply.
>>
>> Signed-off-by: Naman Jain <namjain@linux.microsoft.com>
>> ---
>> arch/x86/hyperv/hv_vtl.c | 19 +++++++++++++++++--
>> arch/x86/include/asm/mshyperv.h | 4 ++--
>> drivers/hv/mshv_vtl_main.c | 24 +++++++++++++-----------
>> 3 files changed, 32 insertions(+), 15 deletions(-)
>>
>> diff --git a/arch/x86/hyperv/hv_vtl.c b/arch/x86/hyperv/hv_vtl.c
>> index f3ffb6a7cb2d..7c10b34cf8a4 100644
>> --- a/arch/x86/hyperv/hv_vtl.c
>> +++ b/arch/x86/hyperv/hv_vtl.c
>> @@ -293,10 +293,25 @@ EXPORT_SYMBOL_GPL(hv_vtl_configure_reg_page);
>>
>> DEFINE_STATIC_CALL_NULL(__mshv_vtl_return_hypercall, void (*)(void));
>>
>> -void mshv_vtl_return_call_init(u64 vtl_return_offset)
>> +int mshv_vtl_return_call_init(void)
>> {
>> + struct hv_register_assoc vsm_pg_offset_reg;
>> + union hv_register_vsm_page_offsets offsets;
>> + int ret;
>> +
>> + vsm_pg_offset_reg.name = HV_REGISTER_VSM_CODE_PAGE_OFFSETS;
>> +
>> + ret = hv_call_get_vp_registers(HV_VP_INDEX_SELF, HV_PARTITION_ID_SELF,
>> + 1, input_vtl_zero, &vsm_pg_offset_reg);
>> + if (ret)
>> + return ret;
>> +
>> + offsets.as_uint64 = vsm_pg_offset_reg.value.reg64;
>> +
>> static_call_update(__mshv_vtl_return_hypercall,
>> - (void *)((u8 *)hv_hypercall_pg + vtl_return_offset));
>> + (void *)((u8 *)hv_hypercall_pg + offsets.vtl_return_offset));
>> +
>> + return 0;
>> }
>> EXPORT_SYMBOL(mshv_vtl_return_call_init);
>>
>> diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
>> index b4d80c9a673a..b48f115c1292 100644
>> --- a/arch/x86/include/asm/mshyperv.h
>> +++ b/arch/x86/include/asm/mshyperv.h
>> @@ -286,14 +286,14 @@ struct mshv_vtl_cpu_context {
>> #ifdef CONFIG_HYPERV_VTL_MODE
>> void __init hv_vtl_init_platform(void);
>> int __init hv_vtl_early_init(void);
>> -void mshv_vtl_return_call_init(u64 vtl_return_offset);
>> +int mshv_vtl_return_call_init(void);
>> void mshv_vtl_return_hypercall(void);
>> void __mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0);
>> int hv_vtl_get_set_reg(struct hv_register_assoc *regs, bool set, bool shared);
>> #else
>> static inline void __init hv_vtl_init_platform(void) {}
>> static inline int __init hv_vtl_early_init(void) { return 0; }
>> -static inline void mshv_vtl_return_call_init(u64 vtl_return_offset) {}
>> +static inline int mshv_vtl_return_call_init(void) { return 0; }
>> static inline void mshv_vtl_return_hypercall(void) {}
>> static inline void __mshv_vtl_return_call(struct mshv_vtl_cpu_context *vtl0) {}
>> #endif
>> diff --git a/drivers/hv/mshv_vtl_main.c b/drivers/hv/mshv_vtl_main.c
>> index 4c9ae65ad3e8..be498c9234fd 100644
>> --- a/drivers/hv/mshv_vtl_main.c
>> +++ b/drivers/hv/mshv_vtl_main.c
>> @@ -79,7 +79,6 @@ struct mshv_vtl {
>> };
>>
>> static struct mutex mshv_vtl_poll_file_lock;
>> -static union hv_register_vsm_page_offsets mshv_vsm_page_offsets;
>> static union hv_register_vsm_capabilities mshv_vsm_capabilities;
>>
>> static DEFINE_PER_CPU(struct mshv_vtl_poll_file, mshv_vtl_poll_file);
>> @@ -203,21 +202,19 @@ static void mshv_vtl_synic_enable_regs(unsigned int cpu)
>> /* VTL2 Host VSP SINT is (un)masked when the user mode requests that */
>> }
>>
>> -static int mshv_vtl_get_vsm_regs(void)
>> +static int mshv_vtl_get_vsm_cap_reg(void)
>> {
>> - struct hv_register_assoc registers[2];
>> - int ret, count = 2;
>> + struct hv_register_assoc vsm_capability_reg;
>> + int ret;
>>
>> - registers[0].name = HV_REGISTER_VSM_CODE_PAGE_OFFSETS;
>> - registers[1].name = HV_REGISTER_VSM_CAPABILITIES;
>> + vsm_capability_reg.name = HV_REGISTER_VSM_CAPABILITIES;
>>
>> ret = hv_call_get_vp_registers(HV_VP_INDEX_SELF, HV_PARTITION_ID_SELF,
>> - count, input_vtl_zero, registers);
>> + 1, input_vtl_zero, &vsm_capability_reg);
>> if (ret)
>> return ret;
>>
>> - mshv_vsm_page_offsets.as_uint64 = registers[0].value.reg64;
>> - mshv_vsm_capabilities.as_uint64 = registers[1].value.reg64;
>> + mshv_vsm_capabilities.as_uint64 = vsm_capability_reg.value.reg64;
>>
>> return ret;
>
> Nit: This could be just "return 0".
Acked.
>
>> }
>> @@ -1139,13 +1136,18 @@ static int __init mshv_vtl_init(void)
>> tasklet_init(&msg_dpc, mshv_vtl_sint_on_msg_dpc, 0);
>> init_waitqueue_head(&fd_wait_queue);
>>
>> - if (mshv_vtl_get_vsm_regs()) {
>> + if (mshv_vtl_get_vsm_cap_reg()) {
>> dev_emerg(dev, "Unable to get VSM capabilities !!\n");
>
> Why is this failure an emergency message, while the other failures
> here in mshv_vtl_init() are just error messages? When there's lack
> of consistency, I always wonder if there is a reason ..... :-)
It might be because I didn’t pay enough attention to the old code :)
dev_err() should work just fine, I'll change it.
>
>> ret = -ENODEV;
>> goto free_dev;
>> }
>>
>> - mshv_vtl_return_call_init(mshv_vsm_page_offsets.vtl_return_offset);
>> + ret = mshv_vtl_return_call_init();
>> + if (ret) {
>> + dev_err(dev, "mshv_vtl_return_call_init failed: %d\n", ret);
>> + goto free_dev;
>> + }
>> +
>> ret = hv_vtl_setup_synic();
>> if (ret)
>> goto free_dev;
>> --
>> 2.43.0
>>
Regards,
Naman
end of thread, other threads: [~2026-04-29 10:00 UTC | newest]

Thread overview: 31+ messages (links below jump to the message on this page):
2026-04-23 12:41 [PATCH v2 00/15] Add arm64 support in MSHV_VTL Naman Jain
2026-04-23 12:41 ` [PATCH v2 01/15] arm64: smp: Export arch_smp_send_reschedule for mshv_vtl module Naman Jain
2026-04-23 12:41 ` [PATCH v2 02/15] Drivers: hv: Move hv_vp_assist_page to common files Naman Jain
2026-04-27 5:37 ` Michael Kelley
2026-04-29 9:55 ` Naman Jain
2026-04-23 12:41 ` [PATCH v2 03/15] Drivers: hv: Move vmbus_handler to common code Naman Jain
2026-04-27 5:38 ` Michael Kelley
2026-04-29 9:55 ` Naman Jain
2026-04-23 12:41 ` [PATCH v2 04/15] mshv_vtl: Refactor the driver for ARM64 support to be added Naman Jain
2026-04-23 12:41 ` [PATCH v2 05/15] Drivers: hv: Export vmbus_interrupt for mshv_vtl module Naman Jain
2026-04-23 12:41 ` [PATCH v2 06/15] mshv_vtl: Make sint vector architecture neutral Naman Jain
2026-04-23 12:41 ` [PATCH v2 07/15] arm64: hyperv: Add support for mshv_vtl_return_call Naman Jain
2026-04-23 13:56 ` Mark Rutland
2026-04-29 9:56 ` Naman Jain
2026-04-23 14:00 ` Marc Zyngier
2026-04-27 5:38 ` Michael Kelley
2026-04-29 9:56 ` Naman Jain
2026-04-23 12:41 ` [PATCH v2 08/15] Drivers: hv: Move hv_call_(get|set)_vp_registers() declarations Naman Jain
2026-04-27 5:39 ` Michael Kelley
2026-04-29 9:57 ` Naman Jain
2026-04-23 12:41 ` [PATCH v2 09/15] Drivers: hv: mshv_vtl: Move hv_vtl_configure_reg_page() to x86 Naman Jain
2026-04-27 5:40 ` Michael Kelley
2026-04-29 9:57 ` Naman Jain
2026-04-23 12:42 ` [PATCH v2 10/15] arm64: hyperv: Add hv_vtl_configure_reg_page() stub Naman Jain
2026-04-23 12:42 ` [PATCH v2 11/15] mshv_vtl: Let userspace do VSM configuration Naman Jain
2026-04-23 12:42 ` [PATCH v2 12/15] mshv_vtl: Move VSM code page offset logic to x86 files Naman Jain
2026-04-27 5:40 ` Michael Kelley
2026-04-29 10:00 ` Naman Jain
2026-04-23 12:42 ` [PATCH v2 13/15] mshv_vtl: Add remaining support for arm64 Naman Jain
2026-04-23 12:42 ` [PATCH v2 14/15] Drivers: hv: Add 4K page dependency in MSHV_VTL Naman Jain
2026-04-23 12:42 ` [PATCH v2 15/15] Drivers: hv: Add ARM64 support for MSHV_VTL in Kconfig Naman Jain