* [PATCH 00/27] Nested virtualization for KVM RISC-V
From: Anup Patel @ 2026-01-20 7:59 UTC
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Shuah Khan,
Anup Patel, Andrew Jones, kvm-riscv, kvm, linux-riscv,
linux-kernel, linux-kselftest, Anup Patel
This series adds initial nested virtualization support for KVM RISC-V.
With this series, we can boot Xvisor inside a KVM guest, and the KVM
RISC-V module also loads (insmod) inside a KVM guest, but we can't run
a nested guest yet because the G-stage emulation (i.e. the G-stage page
table walker) is still a work in progress.
Patches 01 to 09: Fixes and preparatory changes
Patches 10 to 23: Actual nested virtualization support
Patches 24 to 27: ONE_REG interface and get-reg-list selftest
Upcoming work on top of this series includes:
* Software MMU emulation for nested guest (aka swtlb)
* HLV/HSV emulation
* Sstc emulation for nested guest
* SBI NACL for guest hypervisor
* ... and more ...
These patches can also be found in the riscv_kvm_nested_v1
branch at: https://github.com/avpatel/linux.git
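For example, with plain git:

	git clone -b riscv_kvm_nested_v1 https://github.com/avpatel/linux.git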
Anup Patel (27):
RISC-V: KVM: Fix error code returned for Smstateen ONE_REG
RISC-V: KVM: Fix error code returned for Ssaia ONE_REG
RISC-V: KVM: Check host Ssaia extension when creating AIA irqchip
RISC-V: KVM: Introduce common kvm_riscv_isa_check_host()
RISC-V: KVM: Factor-out ISA checks into separate sources
RISC-V: KVM: Move timer state defines closer to struct in UAPI header
RISC-V: KVM: Add hideleg to struct kvm_vcpu_config
RISC-V: KVM: Factor-out VCPU config into separate sources
RISC-V: KVM: Don't check hstateen0 when updating sstateen0 CSR
RISC-V: KVM: Initial skeletal nested virtualization support
RISC-V: KVM: Use half VMID space for nested guest
RISC-V: KVM: Extend kvm_riscv_mmu_update_hgatp() for nested
virtualization
RISC-V: KVM: Extend kvm_riscv_vcpu_config_load() for nested
virtualization
RISC-V: KVM: Extend kvm_riscv_vcpu_update_timedelta() for nested virt
RISC-V: KVM: Extend trap redirection for nested virtualization
RISC-V: KVM: Check and inject nested virtual interrupts
RISC-V: KVM: Extend kvm_riscv_isa_check_host() for nested virt
RISC-V: KVM: Trap-n-emulate SRET for Guest HS-mode
RISC-V: KVM: Redirect nested supervisor ecall and breakpoint traps
RISC-V: KVM: Redirect nested WFI and WRS traps
RISC-V: KVM: Implement remote HFENCE SBI calls for guest
RISC-V: KVM: Add CSR emulation for nested virtualization
RISC-V: KVM: Add HFENCE emulation for nested virtualization
RISC-V: KVM: Add ONE_REG interface for nested virtualization state
RISC-V: KVM: selftests: Add nested virt state to get-reg-list test
RISC-V: KVM: Add ONE_REG interface for nested virtualization CSRs
RISC-V: KVM: selftests: Add nested virt CSRs to get-reg-list test
arch/riscv/include/asm/csr.h | 17 +
arch/riscv/include/asm/insn.h | 9 +
arch/riscv/include/asm/kvm_gstage.h | 2 +
arch/riscv/include/asm/kvm_host.h | 29 +-
arch/riscv/include/asm/kvm_isa.h | 20 +
arch/riscv/include/asm/kvm_mmu.h | 2 +-
arch/riscv/include/asm/kvm_tlb.h | 37 +-
arch/riscv/include/asm/kvm_vcpu_config.h | 25 ++
arch/riscv/include/asm/kvm_vcpu_nested.h | 163 ++++++++
arch/riscv/include/asm/kvm_vcpu_timer.h | 1 +
arch/riscv/include/asm/kvm_vmid.h | 1 +
arch/riscv/include/uapi/asm/kvm.h | 36 +-
arch/riscv/kvm/Makefile | 6 +
arch/riscv/kvm/aia.c | 4 +
arch/riscv/kvm/aia_device.c | 5 +
arch/riscv/kvm/gstage.c | 14 +
arch/riscv/kvm/isa.c | 259 ++++++++++++
arch/riscv/kvm/main.c | 13 +-
arch/riscv/kvm/mmu.c | 18 +-
arch/riscv/kvm/tlb.c | 135 +++++-
arch/riscv/kvm/vcpu.c | 117 ++----
arch/riscv/kvm/vcpu_config.c | 130 ++++++
arch/riscv/kvm/vcpu_exit.c | 62 ++-
arch/riscv/kvm/vcpu_fp.c | 9 +-
arch/riscv/kvm/vcpu_insn.c | 46 +++
arch/riscv/kvm/vcpu_nested.c | 258 ++++++++++++
arch/riscv/kvm/vcpu_nested_csr.c | 389 ++++++++++++++++++
arch/riscv/kvm/vcpu_nested_insn.c | 140 +++++++
arch/riscv/kvm/vcpu_nested_swtlb.c | 146 +++++++
arch/riscv/kvm/vcpu_onereg.c | 334 +++------------
arch/riscv/kvm/vcpu_pmu.c | 5 +-
arch/riscv/kvm/vcpu_sbi_replace.c | 63 ++-
arch/riscv/kvm/vcpu_timer.c | 24 +-
arch/riscv/kvm/vcpu_vector.c | 5 +-
arch/riscv/kvm/vmid.c | 33 +-
.../selftests/kvm/riscv/get-reg-list.c | 106 ++++-
36 files changed, 2244 insertions(+), 419 deletions(-)
create mode 100644 arch/riscv/include/asm/kvm_isa.h
create mode 100644 arch/riscv/include/asm/kvm_vcpu_config.h
create mode 100644 arch/riscv/include/asm/kvm_vcpu_nested.h
create mode 100644 arch/riscv/kvm/isa.c
create mode 100644 arch/riscv/kvm/vcpu_config.c
create mode 100644 arch/riscv/kvm/vcpu_nested.c
create mode 100644 arch/riscv/kvm/vcpu_nested_csr.c
create mode 100644 arch/riscv/kvm/vcpu_nested_insn.c
create mode 100644 arch/riscv/kvm/vcpu_nested_swtlb.c
--
2.43.0
* [PATCH 01/27] RISC-V: KVM: Fix error code returned for Smstateen ONE_REG
From: Anup Patel @ 2026-01-20 7:59 UTC
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Shuah Khan,
Anup Patel, Andrew Jones, kvm-riscv, kvm, linux-riscv,
linux-kernel, linux-kselftest, Anup Patel
Return -ENOENT for the Smstateen ONE_REG when:
1) Smstateen is not enabled for a VCPU
2) The ONE_REG id is out of range
This makes Smstateen ONE_REG error codes consistent with the other
ONE_REG interfaces of KVM RISC-V.
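With this change, a user-space probe of sstateen0 on a VCPU without
Smstateen observes ENOENT instead of EINVAL. A minimal sketch for rv64
(assumes <linux/kvm.h>, <sys/ioctl.h> and <errno.h>; vcpu_fd is a
hypothetical open VCPU fd; error handling elided):

	unsigned long reg_val;
	struct kvm_one_reg reg = {
		.id = KVM_REG_RISCV | KVM_REG_SIZE_U64 |
		      KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_SMSTATEEN |
		      KVM_REG_RISCV_CSR_SMSTATEEN_REG(sstateen0),
		.addr = (unsigned long)&reg_val,
	};

	if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg) < 0 && errno == ENOENT)
		; /* Smstateen not enabled for this VCPU or reg out of range */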
Fixes: c04913f2b54e ("RISCV: KVM: Add sstateen0 to ONE_REG")
Signed-off-by: Anup Patel <anup.patel@oss.qualcomm.com>
---
arch/riscv/kvm/vcpu_onereg.c | 18 ++++++++----------
1 file changed, 8 insertions(+), 10 deletions(-)
diff --git a/arch/riscv/kvm/vcpu_onereg.c b/arch/riscv/kvm/vcpu_onereg.c
index e7ab6cb00646..6dab4deed86d 100644
--- a/arch/riscv/kvm/vcpu_onereg.c
+++ b/arch/riscv/kvm/vcpu_onereg.c
@@ -549,9 +549,11 @@ static inline int kvm_riscv_vcpu_smstateen_set_csr(struct kvm_vcpu *vcpu,
{
struct kvm_vcpu_smstateen_csr *csr = &vcpu->arch.smstateen_csr;
+ if (!riscv_isa_extension_available(vcpu->arch.isa, SMSTATEEN))
+ return -ENOENT;
if (reg_num >= sizeof(struct kvm_riscv_smstateen_csr) /
sizeof(unsigned long))
- return -EINVAL;
+ return -ENOENT;
((unsigned long *)csr)[reg_num] = reg_val;
return 0;
@@ -563,9 +565,11 @@ static int kvm_riscv_vcpu_smstateen_get_csr(struct kvm_vcpu *vcpu,
{
struct kvm_vcpu_smstateen_csr *csr = &vcpu->arch.smstateen_csr;
+ if (!riscv_isa_extension_available(vcpu->arch.isa, SMSTATEEN))
+ return -ENOENT;
if (reg_num >= sizeof(struct kvm_riscv_smstateen_csr) /
sizeof(unsigned long))
- return -EINVAL;
+ return -ENOENT;
*out_val = ((unsigned long *)csr)[reg_num];
return 0;
@@ -595,10 +599,7 @@ static int kvm_riscv_vcpu_get_reg_csr(struct kvm_vcpu *vcpu,
rc = kvm_riscv_vcpu_aia_get_csr(vcpu, reg_num, &reg_val);
break;
case KVM_REG_RISCV_CSR_SMSTATEEN:
- rc = -EINVAL;
- if (riscv_has_extension_unlikely(RISCV_ISA_EXT_SMSTATEEN))
- rc = kvm_riscv_vcpu_smstateen_get_csr(vcpu, reg_num,
- &reg_val);
+ rc = kvm_riscv_vcpu_smstateen_get_csr(vcpu, reg_num, &reg_val);
break;
default:
rc = -ENOENT;
@@ -640,10 +641,7 @@ static int kvm_riscv_vcpu_set_reg_csr(struct kvm_vcpu *vcpu,
rc = kvm_riscv_vcpu_aia_set_csr(vcpu, reg_num, reg_val);
break;
case KVM_REG_RISCV_CSR_SMSTATEEN:
- rc = -EINVAL;
- if (riscv_has_extension_unlikely(RISCV_ISA_EXT_SMSTATEEN))
- rc = kvm_riscv_vcpu_smstateen_set_csr(vcpu, reg_num,
- reg_val);
+ rc = kvm_riscv_vcpu_smstateen_set_csr(vcpu, reg_num, reg_val);
break;
default:
rc = -ENOENT;
--
2.43.0
* [PATCH 02/27] RISC-V: KVM: Fix error code returned for Ssaia ONE_REG
From: Anup Patel @ 2026-01-20 7:59 UTC
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Shuah Khan,
Anup Patel, Andrew Jones, kvm-riscv, kvm, linux-riscv,
linux-kernel, linux-kselftest, Anup Patel
Return -ENOENT for the Ssaia ONE_REG when Ssaia is not enabled
for a VCPU.
This makes Ssaia ONE_REG error codes consistent with the other
ONE_REG interfaces of KVM RISC-V.
Fixes: 2a88f38cd58d ("RISC-V: KVM: return ENOENT in *_one_reg() when reg is unknown")
Signed-off-by: Anup Patel <anup.patel@oss.qualcomm.com>
---
arch/riscv/kvm/aia.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/arch/riscv/kvm/aia.c b/arch/riscv/kvm/aia.c
index dad318185660..31baea9f0589 100644
--- a/arch/riscv/kvm/aia.c
+++ b/arch/riscv/kvm/aia.c
@@ -183,6 +183,8 @@ int kvm_riscv_vcpu_aia_get_csr(struct kvm_vcpu *vcpu,
{
struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia_context.guest_csr;
+ if (!riscv_isa_extension_available(vcpu->arch.isa, SSAIA))
+ return -ENOENT;
if (reg_num >= sizeof(struct kvm_riscv_aia_csr) / sizeof(unsigned long))
return -ENOENT;
@@ -199,6 +201,8 @@ int kvm_riscv_vcpu_aia_set_csr(struct kvm_vcpu *vcpu,
{
struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia_context.guest_csr;
+ if (!riscv_isa_extension_available(vcpu->arch.isa, SSAIA))
+ return -ENOENT;
if (reg_num >= sizeof(struct kvm_riscv_aia_csr) / sizeof(unsigned long))
return -ENOENT;
--
2.43.0
* [PATCH 03/27] RISC-V: KVM: Check host Ssaia extension when creating AIA irqchip
From: Anup Patel @ 2026-01-20 7:59 UTC
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Shuah Khan,
Anup Patel, Andrew Jones, kvm-riscv, kvm, linux-riscv,
linux-kernel, linux-kselftest, Anup Patel
KVM user space may create the KVM AIA irqchip before checking VCPU
Ssaia extension availability, so KVM AIA irqchip creation must fail
when the host does not have the Ssaia extension.
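With this change, user space that creates the in-kernel AIA device on a
host without Ssaia gets a clean ENODEV. A minimal sketch (vm_fd is a
hypothetical open VM fd; assumes <linux/kvm.h>, <sys/ioctl.h> and
<errno.h>):

	struct kvm_create_device cd = {
		.type = KVM_DEV_TYPE_RISCV_AIA,
	};

	if (ioctl(vm_fd, KVM_CREATE_DEVICE, &cd) < 0 && errno == ENODEV)
		; /* host has no Ssaia, fall back to a user-space irqchip */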
Fixes: 89d01306e34d ("RISC-V: KVM: Implement device interface for AIA irqchip")
Signed-off-by: Anup Patel <anup.patel@oss.qualcomm.com>
---
arch/riscv/kvm/aia_device.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/arch/riscv/kvm/aia_device.c b/arch/riscv/kvm/aia_device.c
index b195a93add1c..bed4d2c8c44c 100644
--- a/arch/riscv/kvm/aia_device.c
+++ b/arch/riscv/kvm/aia_device.c
@@ -11,6 +11,7 @@
#include <linux/irqchip/riscv-imsic.h>
#include <linux/kvm_host.h>
#include <linux/uaccess.h>
+#include <linux/cpufeature.h>
static int aia_create(struct kvm_device *dev, u32 type)
{
@@ -22,6 +23,9 @@ static int aia_create(struct kvm_device *dev, u32 type)
if (irqchip_in_kernel(kvm))
return -EEXIST;
+ if (!riscv_isa_extension_available(NULL, SSAIA))
+ return -ENODEV;
+
ret = -EBUSY;
if (kvm_trylock_all_vcpus(kvm))
return ret;
--
2.43.0
* [PATCH 04/27] RISC-V: KVM: Introduce common kvm_riscv_isa_check_host()
From: Anup Patel @ 2026-01-20 7:59 UTC
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Shuah Khan,
Anup Patel, Andrew Jones, kvm-riscv, kvm, linux-riscv,
linux-kernel, linux-kselftest, Anup Patel
Rename kvm_riscv_vcpu_isa_check_host() to kvm_riscv_isa_check_host()
and use it as a common function within KVM RISC-V to check the ISA
extensions supported by the host.
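Note the inverted polarity relative to riscv_isa_extension_available():
the new helper returns 0 when the host has the extension and -ENOENT
otherwise. A sketch of the two call styles introduced below:

	/* host availability check by KVM ISA extension name */
	if (kvm_riscv_isa_check_host(SSAIA))
		return -ENODEV;

	/* check by KVM ISA extension ID, also reporting the guest ISA ID */
	if (__kvm_riscv_isa_check_host(i, &guest_ext))
		continue;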
Signed-off-by: Anup Patel <anup.patel@oss.qualcomm.com>
---
arch/riscv/include/asm/kvm_host.h | 4 ++++
arch/riscv/kvm/aia_device.c | 2 +-
arch/riscv/kvm/vcpu_fp.c | 8 +++----
arch/riscv/kvm/vcpu_onereg.c | 38 ++++++++++++++++---------------
arch/riscv/kvm/vcpu_pmu.c | 2 +-
arch/riscv/kvm/vcpu_timer.c | 2 +-
arch/riscv/kvm/vcpu_vector.c | 4 ++--
7 files changed, 33 insertions(+), 27 deletions(-)
diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index 24585304c02b..47a350c25555 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -308,6 +308,10 @@ int kvm_riscv_vcpu_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
void __kvm_riscv_switch_to(struct kvm_vcpu_arch *vcpu_arch);
+int __kvm_riscv_isa_check_host(unsigned long kvm_ext, unsigned long *guest_ext);
+#define kvm_riscv_isa_check_host(ext) \
+ __kvm_riscv_isa_check_host(KVM_RISCV_ISA_EXT_##ext, NULL)
+
void kvm_riscv_vcpu_setup_isa(struct kvm_vcpu *vcpu);
unsigned long kvm_riscv_vcpu_num_regs(struct kvm_vcpu *vcpu);
int kvm_riscv_vcpu_copy_reg_indices(struct kvm_vcpu *vcpu,
diff --git a/arch/riscv/kvm/aia_device.c b/arch/riscv/kvm/aia_device.c
index bed4d2c8c44c..4cecab9bf102 100644
--- a/arch/riscv/kvm/aia_device.c
+++ b/arch/riscv/kvm/aia_device.c
@@ -23,7 +23,7 @@ static int aia_create(struct kvm_device *dev, u32 type)
if (irqchip_in_kernel(kvm))
return -EEXIST;
- if (!riscv_isa_extension_available(NULL, SSAIA))
+ if (kvm_riscv_isa_check_host(SSAIA))
return -ENODEV;
ret = -EBUSY;
diff --git a/arch/riscv/kvm/vcpu_fp.c b/arch/riscv/kvm/vcpu_fp.c
index 030904d82b58..32ab5938a2ec 100644
--- a/arch/riscv/kvm/vcpu_fp.c
+++ b/arch/riscv/kvm/vcpu_fp.c
@@ -59,17 +59,17 @@ void kvm_riscv_vcpu_guest_fp_restore(struct kvm_cpu_context *cntx,
void kvm_riscv_vcpu_host_fp_save(struct kvm_cpu_context *cntx)
{
/* No need to check host sstatus as it can be modified outside */
- if (riscv_isa_extension_available(NULL, d))
+ if (!kvm_riscv_isa_check_host(D))
__kvm_riscv_fp_d_save(cntx);
- else if (riscv_isa_extension_available(NULL, f))
+ else if (!kvm_riscv_isa_check_host(F))
__kvm_riscv_fp_f_save(cntx);
}
void kvm_riscv_vcpu_host_fp_restore(struct kvm_cpu_context *cntx)
{
- if (riscv_isa_extension_available(NULL, d))
+ if (!kvm_riscv_isa_check_host(D))
__kvm_riscv_fp_d_restore(cntx);
- else if (riscv_isa_extension_available(NULL, f))
+ else if (!kvm_riscv_isa_check_host(F))
__kvm_riscv_fp_f_restore(cntx);
}
#endif
diff --git a/arch/riscv/kvm/vcpu_onereg.c b/arch/riscv/kvm/vcpu_onereg.c
index 6dab4deed86d..f0f8c293d950 100644
--- a/arch/riscv/kvm/vcpu_onereg.c
+++ b/arch/riscv/kvm/vcpu_onereg.c
@@ -119,7 +119,7 @@ static unsigned long kvm_riscv_vcpu_base2isa_ext(unsigned long base_ext)
return KVM_RISCV_ISA_EXT_MAX;
}
-static int kvm_riscv_vcpu_isa_check_host(unsigned long kvm_ext, unsigned long *guest_ext)
+int __kvm_riscv_isa_check_host(unsigned long kvm_ext, unsigned long *base_ext)
{
unsigned long host_ext;
@@ -127,8 +127,7 @@ static int kvm_riscv_vcpu_isa_check_host(unsigned long kvm_ext, unsigned long *g
kvm_ext >= ARRAY_SIZE(kvm_isa_ext_arr))
return -ENOENT;
- *guest_ext = kvm_isa_ext_arr[kvm_ext];
- switch (*guest_ext) {
+ switch (kvm_isa_ext_arr[kvm_ext]) {
case RISCV_ISA_EXT_SMNPM:
/*
* Pointer masking effective in (H)S-mode is provided by the
@@ -139,13 +138,16 @@ static int kvm_riscv_vcpu_isa_check_host(unsigned long kvm_ext, unsigned long *g
host_ext = RISCV_ISA_EXT_SSNPM;
break;
default:
- host_ext = *guest_ext;
+ host_ext = kvm_isa_ext_arr[kvm_ext];
break;
}
if (!__riscv_isa_extension_available(NULL, host_ext))
return -ENOENT;
+ if (base_ext)
+ *base_ext = kvm_isa_ext_arr[kvm_ext];
+
return 0;
}
@@ -156,7 +158,7 @@ static bool kvm_riscv_vcpu_isa_enable_allowed(unsigned long ext)
return false;
case KVM_RISCV_ISA_EXT_SSCOFPMF:
/* Sscofpmf depends on interrupt filtering defined in ssaia */
- return __riscv_isa_extension_available(NULL, RISCV_ISA_EXT_SSAIA);
+ return !kvm_riscv_isa_check_host(SSAIA);
case KVM_RISCV_ISA_EXT_SVADU:
/*
* The henvcfg.ADUE is read-only zero if menvcfg.ADUE is zero.
@@ -263,7 +265,7 @@ void kvm_riscv_vcpu_setup_isa(struct kvm_vcpu *vcpu)
unsigned long guest_ext, i;
for (i = 0; i < ARRAY_SIZE(kvm_isa_ext_arr); i++) {
- if (kvm_riscv_vcpu_isa_check_host(i, &guest_ext))
+ if (__kvm_riscv_isa_check_host(i, &guest_ext))
continue;
if (kvm_riscv_vcpu_isa_enable_allowed(i))
set_bit(guest_ext, vcpu->arch.isa);
@@ -288,17 +290,17 @@ static int kvm_riscv_vcpu_get_reg_config(struct kvm_vcpu *vcpu,
reg_val = vcpu->arch.isa[0] & KVM_RISCV_BASE_ISA_MASK;
break;
case KVM_REG_RISCV_CONFIG_REG(zicbom_block_size):
- if (!riscv_isa_extension_available(NULL, ZICBOM))
+ if (kvm_riscv_isa_check_host(ZICBOM))
return -ENOENT;
reg_val = riscv_cbom_block_size;
break;
case KVM_REG_RISCV_CONFIG_REG(zicboz_block_size):
- if (!riscv_isa_extension_available(NULL, ZICBOZ))
+ if (kvm_riscv_isa_check_host(ZICBOZ))
return -ENOENT;
reg_val = riscv_cboz_block_size;
break;
case KVM_REG_RISCV_CONFIG_REG(zicbop_block_size):
- if (!riscv_isa_extension_available(NULL, ZICBOP))
+ if (kvm_riscv_isa_check_host(ZICBOP))
return -ENOENT;
reg_val = riscv_cbop_block_size;
break;
@@ -382,19 +384,19 @@ static int kvm_riscv_vcpu_set_reg_config(struct kvm_vcpu *vcpu,
}
break;
case KVM_REG_RISCV_CONFIG_REG(zicbom_block_size):
- if (!riscv_isa_extension_available(NULL, ZICBOM))
+ if (kvm_riscv_isa_check_host(ZICBOM))
return -ENOENT;
if (reg_val != riscv_cbom_block_size)
return -EINVAL;
break;
case KVM_REG_RISCV_CONFIG_REG(zicboz_block_size):
- if (!riscv_isa_extension_available(NULL, ZICBOZ))
+ if (kvm_riscv_isa_check_host(ZICBOZ))
return -ENOENT;
if (reg_val != riscv_cboz_block_size)
return -EINVAL;
break;
case KVM_REG_RISCV_CONFIG_REG(zicbop_block_size):
- if (!riscv_isa_extension_available(NULL, ZICBOP))
+ if (kvm_riscv_isa_check_host(ZICBOP))
return -ENOENT;
if (reg_val != riscv_cbop_block_size)
return -EINVAL;
@@ -660,7 +662,7 @@ static int riscv_vcpu_get_isa_ext_single(struct kvm_vcpu *vcpu,
unsigned long guest_ext;
int ret;
- ret = kvm_riscv_vcpu_isa_check_host(reg_num, &guest_ext);
+ ret = __kvm_riscv_isa_check_host(reg_num, &guest_ext);
if (ret)
return ret;
@@ -678,7 +680,7 @@ static int riscv_vcpu_set_isa_ext_single(struct kvm_vcpu *vcpu,
unsigned long guest_ext;
int ret;
- ret = kvm_riscv_vcpu_isa_check_host(reg_num, &guest_ext);
+ ret = __kvm_riscv_isa_check_host(reg_num, &guest_ext);
if (ret)
return ret;
@@ -837,13 +839,13 @@ static int copy_config_reg_indices(const struct kvm_vcpu *vcpu,
* was not available.
*/
if (i == KVM_REG_RISCV_CONFIG_REG(zicbom_block_size) &&
- !riscv_isa_extension_available(NULL, ZICBOM))
+ kvm_riscv_isa_check_host(ZICBOM))
continue;
else if (i == KVM_REG_RISCV_CONFIG_REG(zicboz_block_size) &&
- !riscv_isa_extension_available(NULL, ZICBOZ))
+ kvm_riscv_isa_check_host(ZICBOZ))
continue;
else if (i == KVM_REG_RISCV_CONFIG_REG(zicbop_block_size) &&
- !riscv_isa_extension_available(NULL, ZICBOP))
+ kvm_riscv_isa_check_host(ZICBOP))
continue;
size = IS_ENABLED(CONFIG_32BIT) ? KVM_REG_SIZE_U32 : KVM_REG_SIZE_U64;
@@ -1064,7 +1066,7 @@ static int copy_isa_ext_reg_indices(const struct kvm_vcpu *vcpu,
KVM_REG_SIZE_U32 : KVM_REG_SIZE_U64;
u64 reg = KVM_REG_RISCV | size | KVM_REG_RISCV_ISA_EXT | i;
- if (kvm_riscv_vcpu_isa_check_host(i, &guest_ext))
+ if (__kvm_riscv_isa_check_host(i, &guest_ext))
continue;
if (uindices) {
diff --git a/arch/riscv/kvm/vcpu_pmu.c b/arch/riscv/kvm/vcpu_pmu.c
index 4d8d5e9aa53d..9759143c1785 100644
--- a/arch/riscv/kvm/vcpu_pmu.c
+++ b/arch/riscv/kvm/vcpu_pmu.c
@@ -819,7 +819,7 @@ void kvm_riscv_vcpu_pmu_init(struct kvm_vcpu *vcpu)
* filtering is available in the host. Otherwise, guest will always count
* events while the execution is in hypervisor mode.
*/
- if (!riscv_isa_extension_available(NULL, SSCOFPMF))
+ if (kvm_riscv_isa_check_host(SSCOFPMF))
return;
ret = riscv_pmu_get_hpm_info(&hpm_width, &num_hw_ctrs);
diff --git a/arch/riscv/kvm/vcpu_timer.c b/arch/riscv/kvm/vcpu_timer.c
index f36247e4c783..cac4f3a5f213 100644
--- a/arch/riscv/kvm/vcpu_timer.c
+++ b/arch/riscv/kvm/vcpu_timer.c
@@ -253,7 +253,7 @@ int kvm_riscv_vcpu_timer_init(struct kvm_vcpu *vcpu)
t->next_set = false;
/* Enable sstc for every vcpu if available in hardware */
- if (riscv_isa_extension_available(NULL, SSTC)) {
+ if (!kvm_riscv_isa_check_host(SSTC)) {
t->sstc_enabled = true;
hrtimer_setup(&t->hrt, kvm_riscv_vcpu_vstimer_expired, CLOCK_MONOTONIC,
HRTIMER_MODE_REL);
diff --git a/arch/riscv/kvm/vcpu_vector.c b/arch/riscv/kvm/vcpu_vector.c
index 05f3cc2d8e31..8c7315a96b9e 100644
--- a/arch/riscv/kvm/vcpu_vector.c
+++ b/arch/riscv/kvm/vcpu_vector.c
@@ -63,13 +63,13 @@ void kvm_riscv_vcpu_guest_vector_restore(struct kvm_cpu_context *cntx,
void kvm_riscv_vcpu_host_vector_save(struct kvm_cpu_context *cntx)
{
/* No need to check host sstatus as it can be modified outside */
- if (riscv_isa_extension_available(NULL, v))
+ if (!kvm_riscv_isa_check_host(V))
__kvm_riscv_vector_save(cntx);
}
void kvm_riscv_vcpu_host_vector_restore(struct kvm_cpu_context *cntx)
{
- if (riscv_isa_extension_available(NULL, v))
+ if (!kvm_riscv_isa_check_host(V))
__kvm_riscv_vector_restore(cntx);
}
--
2.43.0
* [PATCH 05/27] RISC-V: KVM: Factor-out ISA checks into separate sources
From: Anup Patel @ 2026-01-20 7:59 UTC
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Shuah Khan,
Anup Patel, Andrew Jones, kvm-riscv, kvm, linux-riscv,
linux-kernel, linux-kselftest, Anup Patel
The KVM ISA extension checks are not VCPU-specific, so factor them
out of vcpu_onereg.c into separate sources.
Signed-off-by: Anup Patel <anup.patel@oss.qualcomm.com>
---
arch/riscv/include/asm/kvm_host.h | 4 -
arch/riscv/include/asm/kvm_isa.h | 20 +++
arch/riscv/kvm/Makefile | 1 +
arch/riscv/kvm/aia_device.c | 1 +
arch/riscv/kvm/isa.c | 251 +++++++++++++++++++++++++++++
arch/riscv/kvm/vcpu_fp.c | 1 +
arch/riscv/kvm/vcpu_onereg.c | 257 +-----------------------------
arch/riscv/kvm/vcpu_pmu.c | 3 +-
arch/riscv/kvm/vcpu_timer.c | 1 +
arch/riscv/kvm/vcpu_vector.c | 1 +
10 files changed, 286 insertions(+), 254 deletions(-)
create mode 100644 arch/riscv/include/asm/kvm_isa.h
create mode 100644 arch/riscv/kvm/isa.c
diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index 47a350c25555..24585304c02b 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -308,10 +308,6 @@ int kvm_riscv_vcpu_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
void __kvm_riscv_switch_to(struct kvm_vcpu_arch *vcpu_arch);
-int __kvm_riscv_isa_check_host(unsigned long kvm_ext, unsigned long *guest_ext);
-#define kvm_riscv_isa_check_host(ext) \
- __kvm_riscv_isa_check_host(KVM_RISCV_ISA_EXT_##ext, NULL)
-
void kvm_riscv_vcpu_setup_isa(struct kvm_vcpu *vcpu);
unsigned long kvm_riscv_vcpu_num_regs(struct kvm_vcpu *vcpu);
int kvm_riscv_vcpu_copy_reg_indices(struct kvm_vcpu *vcpu,
diff --git a/arch/riscv/include/asm/kvm_isa.h b/arch/riscv/include/asm/kvm_isa.h
new file mode 100644
index 000000000000..bc4b956d5f17
--- /dev/null
+++ b/arch/riscv/include/asm/kvm_isa.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2026 Qualcomm Technologies, Inc.
+ */
+
+#ifndef __KVM_RISCV_ISA_H
+#define __KVM_RISCV_ISA_H
+
+#include <linux/types.h>
+
+unsigned long kvm_riscv_base2isa_ext(unsigned long base_ext);
+
+int __kvm_riscv_isa_check_host(unsigned long ext, unsigned long *base_ext);
+#define kvm_riscv_isa_check_host(ext) \
+ __kvm_riscv_isa_check_host(KVM_RISCV_ISA_EXT_##ext, NULL)
+
+bool kvm_riscv_isa_enable_allowed(unsigned long ext);
+bool kvm_riscv_isa_disable_allowed(unsigned long ext);
+
+#endif
diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile
index 3b8afb038b35..07eab96189e7 100644
--- a/arch/riscv/kvm/Makefile
+++ b/arch/riscv/kvm/Makefile
@@ -15,6 +15,7 @@ kvm-y += aia_aplic.o
kvm-y += aia_device.o
kvm-y += aia_imsic.o
kvm-y += gstage.o
+kvm-y += isa.o
kvm-y += main.o
kvm-y += mmu.o
kvm-y += nacl.o
diff --git a/arch/riscv/kvm/aia_device.c b/arch/riscv/kvm/aia_device.c
index 4cecab9bf102..77629b7eac09 100644
--- a/arch/riscv/kvm/aia_device.c
+++ b/arch/riscv/kvm/aia_device.c
@@ -12,6 +12,7 @@
#include <linux/kvm_host.h>
#include <linux/uaccess.h>
#include <linux/cpufeature.h>
+#include <asm/kvm_isa.h>
static int aia_create(struct kvm_device *dev, u32 type)
{
diff --git a/arch/riscv/kvm/isa.c b/arch/riscv/kvm/isa.c
new file mode 100644
index 000000000000..e860f6d79bb0
--- /dev/null
+++ b/arch/riscv/kvm/isa.c
@@ -0,0 +1,251 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2026 Qualcomm Technologies, Inc.
+ */
+
+#include <linux/errno.h>
+#include <linux/kvm_host.h>
+#include <linux/cpufeature.h>
+#include <linux/pgtable.h>
+#include <asm/kvm_isa.h>
+#include <asm/vector.h>
+
+#define KVM_ISA_EXT_ARR(ext) \
+[KVM_RISCV_ISA_EXT_##ext] = RISCV_ISA_EXT_##ext
+
+/* Mapping between KVM ISA Extension ID & guest ISA extension ID */
+static const unsigned long kvm_isa_ext_arr[] = {
+ /* Single letter extensions (alphabetically sorted) */
+ [KVM_RISCV_ISA_EXT_A] = RISCV_ISA_EXT_a,
+ [KVM_RISCV_ISA_EXT_C] = RISCV_ISA_EXT_c,
+ [KVM_RISCV_ISA_EXT_D] = RISCV_ISA_EXT_d,
+ [KVM_RISCV_ISA_EXT_F] = RISCV_ISA_EXT_f,
+ [KVM_RISCV_ISA_EXT_H] = RISCV_ISA_EXT_h,
+ [KVM_RISCV_ISA_EXT_I] = RISCV_ISA_EXT_i,
+ [KVM_RISCV_ISA_EXT_M] = RISCV_ISA_EXT_m,
+ [KVM_RISCV_ISA_EXT_V] = RISCV_ISA_EXT_v,
+ /* Multi letter extensions (alphabetically sorted) */
+ KVM_ISA_EXT_ARR(SMNPM),
+ KVM_ISA_EXT_ARR(SMSTATEEN),
+ KVM_ISA_EXT_ARR(SSAIA),
+ KVM_ISA_EXT_ARR(SSCOFPMF),
+ KVM_ISA_EXT_ARR(SSNPM),
+ KVM_ISA_EXT_ARR(SSTC),
+ KVM_ISA_EXT_ARR(SVADE),
+ KVM_ISA_EXT_ARR(SVADU),
+ KVM_ISA_EXT_ARR(SVINVAL),
+ KVM_ISA_EXT_ARR(SVNAPOT),
+ KVM_ISA_EXT_ARR(SVPBMT),
+ KVM_ISA_EXT_ARR(SVVPTC),
+ KVM_ISA_EXT_ARR(ZAAMO),
+ KVM_ISA_EXT_ARR(ZABHA),
+ KVM_ISA_EXT_ARR(ZACAS),
+ KVM_ISA_EXT_ARR(ZALASR),
+ KVM_ISA_EXT_ARR(ZALRSC),
+ KVM_ISA_EXT_ARR(ZAWRS),
+ KVM_ISA_EXT_ARR(ZBA),
+ KVM_ISA_EXT_ARR(ZBB),
+ KVM_ISA_EXT_ARR(ZBC),
+ KVM_ISA_EXT_ARR(ZBKB),
+ KVM_ISA_EXT_ARR(ZBKC),
+ KVM_ISA_EXT_ARR(ZBKX),
+ KVM_ISA_EXT_ARR(ZBS),
+ KVM_ISA_EXT_ARR(ZCA),
+ KVM_ISA_EXT_ARR(ZCB),
+ KVM_ISA_EXT_ARR(ZCD),
+ KVM_ISA_EXT_ARR(ZCF),
+ KVM_ISA_EXT_ARR(ZCLSD),
+ KVM_ISA_EXT_ARR(ZCMOP),
+ KVM_ISA_EXT_ARR(ZFA),
+ KVM_ISA_EXT_ARR(ZFBFMIN),
+ KVM_ISA_EXT_ARR(ZFH),
+ KVM_ISA_EXT_ARR(ZFHMIN),
+ KVM_ISA_EXT_ARR(ZICBOM),
+ KVM_ISA_EXT_ARR(ZICBOP),
+ KVM_ISA_EXT_ARR(ZICBOZ),
+ KVM_ISA_EXT_ARR(ZICCRSE),
+ KVM_ISA_EXT_ARR(ZICNTR),
+ KVM_ISA_EXT_ARR(ZICOND),
+ KVM_ISA_EXT_ARR(ZICSR),
+ KVM_ISA_EXT_ARR(ZIFENCEI),
+ KVM_ISA_EXT_ARR(ZIHINTNTL),
+ KVM_ISA_EXT_ARR(ZIHINTPAUSE),
+ KVM_ISA_EXT_ARR(ZIHPM),
+ KVM_ISA_EXT_ARR(ZILSD),
+ KVM_ISA_EXT_ARR(ZIMOP),
+ KVM_ISA_EXT_ARR(ZKND),
+ KVM_ISA_EXT_ARR(ZKNE),
+ KVM_ISA_EXT_ARR(ZKNH),
+ KVM_ISA_EXT_ARR(ZKR),
+ KVM_ISA_EXT_ARR(ZKSED),
+ KVM_ISA_EXT_ARR(ZKSH),
+ KVM_ISA_EXT_ARR(ZKT),
+ KVM_ISA_EXT_ARR(ZTSO),
+ KVM_ISA_EXT_ARR(ZVBB),
+ KVM_ISA_EXT_ARR(ZVBC),
+ KVM_ISA_EXT_ARR(ZVFBFMIN),
+ KVM_ISA_EXT_ARR(ZVFBFWMA),
+ KVM_ISA_EXT_ARR(ZVFH),
+ KVM_ISA_EXT_ARR(ZVFHMIN),
+ KVM_ISA_EXT_ARR(ZVKB),
+ KVM_ISA_EXT_ARR(ZVKG),
+ KVM_ISA_EXT_ARR(ZVKNED),
+ KVM_ISA_EXT_ARR(ZVKNHA),
+ KVM_ISA_EXT_ARR(ZVKNHB),
+ KVM_ISA_EXT_ARR(ZVKSED),
+ KVM_ISA_EXT_ARR(ZVKSH),
+ KVM_ISA_EXT_ARR(ZVKT),
+};
+
+unsigned long kvm_riscv_base2isa_ext(unsigned long base_ext)
+{
+ unsigned long i;
+
+ for (i = 0; i < KVM_RISCV_ISA_EXT_MAX; i++) {
+ if (kvm_isa_ext_arr[i] == base_ext)
+ return i;
+ }
+
+ return KVM_RISCV_ISA_EXT_MAX;
+}
+
+int __kvm_riscv_isa_check_host(unsigned long ext, unsigned long *base_ext)
+{
+ unsigned long host_ext;
+
+ if (ext >= KVM_RISCV_ISA_EXT_MAX ||
+ ext >= ARRAY_SIZE(kvm_isa_ext_arr))
+ return -ENOENT;
+
+ switch (kvm_isa_ext_arr[ext]) {
+ case RISCV_ISA_EXT_SMNPM:
+ /*
+ * Pointer masking effective in (H)S-mode is provided by the
+ * Smnpm extension, so that extension is reported to the guest,
+ * even though the CSR bits for configuring VS-mode pointer
+ * masking on the host side are part of the Ssnpm extension.
+ */
+ host_ext = RISCV_ISA_EXT_SSNPM;
+ break;
+ default:
+ host_ext = kvm_isa_ext_arr[ext];
+ break;
+ }
+
+ if (!__riscv_isa_extension_available(NULL, host_ext))
+ return -ENOENT;
+
+ if (base_ext)
+ *base_ext = kvm_isa_ext_arr[ext];
+
+ return 0;
+}
+
+bool kvm_riscv_isa_enable_allowed(unsigned long ext)
+{
+ switch (ext) {
+ case KVM_RISCV_ISA_EXT_H:
+ return false;
+ case KVM_RISCV_ISA_EXT_SSCOFPMF:
+ /* Sscofpmf depends on interrupt filtering defined in ssaia */
+ return !kvm_riscv_isa_check_host(SSAIA);
+ case KVM_RISCV_ISA_EXT_SVADU:
+ /*
+ * The henvcfg.ADUE is read-only zero if menvcfg.ADUE is zero.
+ * Guest OS can use Svadu only when host OS enable Svadu.
+ */
+ return arch_has_hw_pte_young();
+ case KVM_RISCV_ISA_EXT_V:
+ return riscv_v_vstate_ctrl_user_allowed();
+ default:
+ break;
+ }
+
+ return true;
+}
+
+bool kvm_riscv_isa_disable_allowed(unsigned long ext)
+{
+ switch (ext) {
+ /* Extensions which don't have any mechanism to disable */
+ case KVM_RISCV_ISA_EXT_A:
+ case KVM_RISCV_ISA_EXT_C:
+ case KVM_RISCV_ISA_EXT_I:
+ case KVM_RISCV_ISA_EXT_M:
+ /* There is not architectural config bit to disable sscofpmf completely */
+ case KVM_RISCV_ISA_EXT_SSCOFPMF:
+ case KVM_RISCV_ISA_EXT_SSNPM:
+ case KVM_RISCV_ISA_EXT_SSTC:
+ case KVM_RISCV_ISA_EXT_SVINVAL:
+ case KVM_RISCV_ISA_EXT_SVNAPOT:
+ case KVM_RISCV_ISA_EXT_SVVPTC:
+ case KVM_RISCV_ISA_EXT_ZAAMO:
+ case KVM_RISCV_ISA_EXT_ZABHA:
+ case KVM_RISCV_ISA_EXT_ZACAS:
+ case KVM_RISCV_ISA_EXT_ZALASR:
+ case KVM_RISCV_ISA_EXT_ZALRSC:
+ case KVM_RISCV_ISA_EXT_ZAWRS:
+ case KVM_RISCV_ISA_EXT_ZBA:
+ case KVM_RISCV_ISA_EXT_ZBB:
+ case KVM_RISCV_ISA_EXT_ZBC:
+ case KVM_RISCV_ISA_EXT_ZBKB:
+ case KVM_RISCV_ISA_EXT_ZBKC:
+ case KVM_RISCV_ISA_EXT_ZBKX:
+ case KVM_RISCV_ISA_EXT_ZBS:
+ case KVM_RISCV_ISA_EXT_ZCA:
+ case KVM_RISCV_ISA_EXT_ZCB:
+ case KVM_RISCV_ISA_EXT_ZCD:
+ case KVM_RISCV_ISA_EXT_ZCF:
+ case KVM_RISCV_ISA_EXT_ZCMOP:
+ case KVM_RISCV_ISA_EXT_ZFA:
+ case KVM_RISCV_ISA_EXT_ZFBFMIN:
+ case KVM_RISCV_ISA_EXT_ZFH:
+ case KVM_RISCV_ISA_EXT_ZFHMIN:
+ case KVM_RISCV_ISA_EXT_ZICBOP:
+ case KVM_RISCV_ISA_EXT_ZICCRSE:
+ case KVM_RISCV_ISA_EXT_ZICNTR:
+ case KVM_RISCV_ISA_EXT_ZICOND:
+ case KVM_RISCV_ISA_EXT_ZICSR:
+ case KVM_RISCV_ISA_EXT_ZIFENCEI:
+ case KVM_RISCV_ISA_EXT_ZIHINTNTL:
+ case KVM_RISCV_ISA_EXT_ZIHINTPAUSE:
+ case KVM_RISCV_ISA_EXT_ZIHPM:
+ case KVM_RISCV_ISA_EXT_ZIMOP:
+ case KVM_RISCV_ISA_EXT_ZKND:
+ case KVM_RISCV_ISA_EXT_ZKNE:
+ case KVM_RISCV_ISA_EXT_ZKNH:
+ case KVM_RISCV_ISA_EXT_ZKR:
+ case KVM_RISCV_ISA_EXT_ZKSED:
+ case KVM_RISCV_ISA_EXT_ZKSH:
+ case KVM_RISCV_ISA_EXT_ZKT:
+ case KVM_RISCV_ISA_EXT_ZTSO:
+ case KVM_RISCV_ISA_EXT_ZVBB:
+ case KVM_RISCV_ISA_EXT_ZVBC:
+ case KVM_RISCV_ISA_EXT_ZVFBFMIN:
+ case KVM_RISCV_ISA_EXT_ZVFBFWMA:
+ case KVM_RISCV_ISA_EXT_ZVFH:
+ case KVM_RISCV_ISA_EXT_ZVFHMIN:
+ case KVM_RISCV_ISA_EXT_ZVKB:
+ case KVM_RISCV_ISA_EXT_ZVKG:
+ case KVM_RISCV_ISA_EXT_ZVKNED:
+ case KVM_RISCV_ISA_EXT_ZVKNHA:
+ case KVM_RISCV_ISA_EXT_ZVKNHB:
+ case KVM_RISCV_ISA_EXT_ZVKSED:
+ case KVM_RISCV_ISA_EXT_ZVKSH:
+ case KVM_RISCV_ISA_EXT_ZVKT:
+ return false;
+ /* Extensions which can be disabled using Smstateen */
+ case KVM_RISCV_ISA_EXT_SSAIA:
+ return riscv_has_extension_unlikely(RISCV_ISA_EXT_SMSTATEEN);
+ case KVM_RISCV_ISA_EXT_SVADE:
+ /*
+ * The henvcfg.ADUE is read-only zero if menvcfg.ADUE is zero.
+ * Svade can't be disabled unless we support Svadu.
+ */
+ return arch_has_hw_pte_young();
+ default:
+ break;
+ }
+
+ return true;
+}
diff --git a/arch/riscv/kvm/vcpu_fp.c b/arch/riscv/kvm/vcpu_fp.c
index 32ab5938a2ec..49ad7446d2bb 100644
--- a/arch/riscv/kvm/vcpu_fp.c
+++ b/arch/riscv/kvm/vcpu_fp.c
@@ -12,6 +12,7 @@
#include <linux/kvm_host.h>
#include <linux/uaccess.h>
#include <asm/cpufeature.h>
+#include <asm/kvm_isa.h>
#ifdef CONFIG_FPU
void kvm_riscv_vcpu_fp_reset(struct kvm_vcpu *vcpu)
diff --git a/arch/riscv/kvm/vcpu_onereg.c b/arch/riscv/kvm/vcpu_onereg.c
index f0f8c293d950..6b16eee2c833 100644
--- a/arch/riscv/kvm/vcpu_onereg.c
+++ b/arch/riscv/kvm/vcpu_onereg.c
@@ -14,260 +14,19 @@
#include <linux/kvm_host.h>
#include <asm/cacheflush.h>
#include <asm/cpufeature.h>
+#include <asm/kvm_isa.h>
#include <asm/kvm_vcpu_vector.h>
-#include <asm/pgtable.h>
-#include <asm/vector.h>
#define KVM_RISCV_BASE_ISA_MASK GENMASK(25, 0)
-#define KVM_ISA_EXT_ARR(ext) \
-[KVM_RISCV_ISA_EXT_##ext] = RISCV_ISA_EXT_##ext
-
-/* Mapping between KVM ISA Extension ID & guest ISA extension ID */
-static const unsigned long kvm_isa_ext_arr[] = {
- /* Single letter extensions (alphabetically sorted) */
- [KVM_RISCV_ISA_EXT_A] = RISCV_ISA_EXT_a,
- [KVM_RISCV_ISA_EXT_C] = RISCV_ISA_EXT_c,
- [KVM_RISCV_ISA_EXT_D] = RISCV_ISA_EXT_d,
- [KVM_RISCV_ISA_EXT_F] = RISCV_ISA_EXT_f,
- [KVM_RISCV_ISA_EXT_H] = RISCV_ISA_EXT_h,
- [KVM_RISCV_ISA_EXT_I] = RISCV_ISA_EXT_i,
- [KVM_RISCV_ISA_EXT_M] = RISCV_ISA_EXT_m,
- [KVM_RISCV_ISA_EXT_V] = RISCV_ISA_EXT_v,
- /* Multi letter extensions (alphabetically sorted) */
- KVM_ISA_EXT_ARR(SMNPM),
- KVM_ISA_EXT_ARR(SMSTATEEN),
- KVM_ISA_EXT_ARR(SSAIA),
- KVM_ISA_EXT_ARR(SSCOFPMF),
- KVM_ISA_EXT_ARR(SSNPM),
- KVM_ISA_EXT_ARR(SSTC),
- KVM_ISA_EXT_ARR(SVADE),
- KVM_ISA_EXT_ARR(SVADU),
- KVM_ISA_EXT_ARR(SVINVAL),
- KVM_ISA_EXT_ARR(SVNAPOT),
- KVM_ISA_EXT_ARR(SVPBMT),
- KVM_ISA_EXT_ARR(SVVPTC),
- KVM_ISA_EXT_ARR(ZAAMO),
- KVM_ISA_EXT_ARR(ZABHA),
- KVM_ISA_EXT_ARR(ZACAS),
- KVM_ISA_EXT_ARR(ZALASR),
- KVM_ISA_EXT_ARR(ZALRSC),
- KVM_ISA_EXT_ARR(ZAWRS),
- KVM_ISA_EXT_ARR(ZBA),
- KVM_ISA_EXT_ARR(ZBB),
- KVM_ISA_EXT_ARR(ZBC),
- KVM_ISA_EXT_ARR(ZBKB),
- KVM_ISA_EXT_ARR(ZBKC),
- KVM_ISA_EXT_ARR(ZBKX),
- KVM_ISA_EXT_ARR(ZBS),
- KVM_ISA_EXT_ARR(ZCA),
- KVM_ISA_EXT_ARR(ZCB),
- KVM_ISA_EXT_ARR(ZCD),
- KVM_ISA_EXT_ARR(ZCF),
- KVM_ISA_EXT_ARR(ZCLSD),
- KVM_ISA_EXT_ARR(ZCMOP),
- KVM_ISA_EXT_ARR(ZFA),
- KVM_ISA_EXT_ARR(ZFBFMIN),
- KVM_ISA_EXT_ARR(ZFH),
- KVM_ISA_EXT_ARR(ZFHMIN),
- KVM_ISA_EXT_ARR(ZICBOM),
- KVM_ISA_EXT_ARR(ZICBOP),
- KVM_ISA_EXT_ARR(ZICBOZ),
- KVM_ISA_EXT_ARR(ZICCRSE),
- KVM_ISA_EXT_ARR(ZICNTR),
- KVM_ISA_EXT_ARR(ZICOND),
- KVM_ISA_EXT_ARR(ZICSR),
- KVM_ISA_EXT_ARR(ZIFENCEI),
- KVM_ISA_EXT_ARR(ZIHINTNTL),
- KVM_ISA_EXT_ARR(ZIHINTPAUSE),
- KVM_ISA_EXT_ARR(ZIHPM),
- KVM_ISA_EXT_ARR(ZILSD),
- KVM_ISA_EXT_ARR(ZIMOP),
- KVM_ISA_EXT_ARR(ZKND),
- KVM_ISA_EXT_ARR(ZKNE),
- KVM_ISA_EXT_ARR(ZKNH),
- KVM_ISA_EXT_ARR(ZKR),
- KVM_ISA_EXT_ARR(ZKSED),
- KVM_ISA_EXT_ARR(ZKSH),
- KVM_ISA_EXT_ARR(ZKT),
- KVM_ISA_EXT_ARR(ZTSO),
- KVM_ISA_EXT_ARR(ZVBB),
- KVM_ISA_EXT_ARR(ZVBC),
- KVM_ISA_EXT_ARR(ZVFBFMIN),
- KVM_ISA_EXT_ARR(ZVFBFWMA),
- KVM_ISA_EXT_ARR(ZVFH),
- KVM_ISA_EXT_ARR(ZVFHMIN),
- KVM_ISA_EXT_ARR(ZVKB),
- KVM_ISA_EXT_ARR(ZVKG),
- KVM_ISA_EXT_ARR(ZVKNED),
- KVM_ISA_EXT_ARR(ZVKNHA),
- KVM_ISA_EXT_ARR(ZVKNHB),
- KVM_ISA_EXT_ARR(ZVKSED),
- KVM_ISA_EXT_ARR(ZVKSH),
- KVM_ISA_EXT_ARR(ZVKT),
-};
-
-static unsigned long kvm_riscv_vcpu_base2isa_ext(unsigned long base_ext)
-{
- unsigned long i;
-
- for (i = 0; i < KVM_RISCV_ISA_EXT_MAX; i++) {
- if (kvm_isa_ext_arr[i] == base_ext)
- return i;
- }
-
- return KVM_RISCV_ISA_EXT_MAX;
-}
-
-int __kvm_riscv_isa_check_host(unsigned long kvm_ext, unsigned long *base_ext)
-{
- unsigned long host_ext;
-
- if (kvm_ext >= KVM_RISCV_ISA_EXT_MAX ||
- kvm_ext >= ARRAY_SIZE(kvm_isa_ext_arr))
- return -ENOENT;
-
- switch (kvm_isa_ext_arr[kvm_ext]) {
- case RISCV_ISA_EXT_SMNPM:
- /*
- * Pointer masking effective in (H)S-mode is provided by the
- * Smnpm extension, so that extension is reported to the guest,
- * even though the CSR bits for configuring VS-mode pointer
- * masking on the host side are part of the Ssnpm extension.
- */
- host_ext = RISCV_ISA_EXT_SSNPM;
- break;
- default:
- host_ext = kvm_isa_ext_arr[kvm_ext];
- break;
- }
-
- if (!__riscv_isa_extension_available(NULL, host_ext))
- return -ENOENT;
-
- if (base_ext)
- *base_ext = kvm_isa_ext_arr[kvm_ext];
-
- return 0;
-}
-
-static bool kvm_riscv_vcpu_isa_enable_allowed(unsigned long ext)
-{
- switch (ext) {
- case KVM_RISCV_ISA_EXT_H:
- return false;
- case KVM_RISCV_ISA_EXT_SSCOFPMF:
- /* Sscofpmf depends on interrupt filtering defined in ssaia */
- return !kvm_riscv_isa_check_host(SSAIA);
- case KVM_RISCV_ISA_EXT_SVADU:
- /*
- * The henvcfg.ADUE is read-only zero if menvcfg.ADUE is zero.
- * Guest OS can use Svadu only when host OS enable Svadu.
- */
- return arch_has_hw_pte_young();
- case KVM_RISCV_ISA_EXT_V:
- return riscv_v_vstate_ctrl_user_allowed();
- default:
- break;
- }
-
- return true;
-}
-
-static bool kvm_riscv_vcpu_isa_disable_allowed(unsigned long ext)
-{
- switch (ext) {
- /* Extensions which don't have any mechanism to disable */
- case KVM_RISCV_ISA_EXT_A:
- case KVM_RISCV_ISA_EXT_C:
- case KVM_RISCV_ISA_EXT_I:
- case KVM_RISCV_ISA_EXT_M:
- /* There is not architectural config bit to disable sscofpmf completely */
- case KVM_RISCV_ISA_EXT_SSCOFPMF:
- case KVM_RISCV_ISA_EXT_SSNPM:
- case KVM_RISCV_ISA_EXT_SSTC:
- case KVM_RISCV_ISA_EXT_SVINVAL:
- case KVM_RISCV_ISA_EXT_SVNAPOT:
- case KVM_RISCV_ISA_EXT_SVVPTC:
- case KVM_RISCV_ISA_EXT_ZAAMO:
- case KVM_RISCV_ISA_EXT_ZABHA:
- case KVM_RISCV_ISA_EXT_ZACAS:
- case KVM_RISCV_ISA_EXT_ZALASR:
- case KVM_RISCV_ISA_EXT_ZALRSC:
- case KVM_RISCV_ISA_EXT_ZAWRS:
- case KVM_RISCV_ISA_EXT_ZBA:
- case KVM_RISCV_ISA_EXT_ZBB:
- case KVM_RISCV_ISA_EXT_ZBC:
- case KVM_RISCV_ISA_EXT_ZBKB:
- case KVM_RISCV_ISA_EXT_ZBKC:
- case KVM_RISCV_ISA_EXT_ZBKX:
- case KVM_RISCV_ISA_EXT_ZBS:
- case KVM_RISCV_ISA_EXT_ZCA:
- case KVM_RISCV_ISA_EXT_ZCB:
- case KVM_RISCV_ISA_EXT_ZCD:
- case KVM_RISCV_ISA_EXT_ZCF:
- case KVM_RISCV_ISA_EXT_ZCMOP:
- case KVM_RISCV_ISA_EXT_ZFA:
- case KVM_RISCV_ISA_EXT_ZFBFMIN:
- case KVM_RISCV_ISA_EXT_ZFH:
- case KVM_RISCV_ISA_EXT_ZFHMIN:
- case KVM_RISCV_ISA_EXT_ZICBOP:
- case KVM_RISCV_ISA_EXT_ZICCRSE:
- case KVM_RISCV_ISA_EXT_ZICNTR:
- case KVM_RISCV_ISA_EXT_ZICOND:
- case KVM_RISCV_ISA_EXT_ZICSR:
- case KVM_RISCV_ISA_EXT_ZIFENCEI:
- case KVM_RISCV_ISA_EXT_ZIHINTNTL:
- case KVM_RISCV_ISA_EXT_ZIHINTPAUSE:
- case KVM_RISCV_ISA_EXT_ZIHPM:
- case KVM_RISCV_ISA_EXT_ZIMOP:
- case KVM_RISCV_ISA_EXT_ZKND:
- case KVM_RISCV_ISA_EXT_ZKNE:
- case KVM_RISCV_ISA_EXT_ZKNH:
- case KVM_RISCV_ISA_EXT_ZKR:
- case KVM_RISCV_ISA_EXT_ZKSED:
- case KVM_RISCV_ISA_EXT_ZKSH:
- case KVM_RISCV_ISA_EXT_ZKT:
- case KVM_RISCV_ISA_EXT_ZTSO:
- case KVM_RISCV_ISA_EXT_ZVBB:
- case KVM_RISCV_ISA_EXT_ZVBC:
- case KVM_RISCV_ISA_EXT_ZVFBFMIN:
- case KVM_RISCV_ISA_EXT_ZVFBFWMA:
- case KVM_RISCV_ISA_EXT_ZVFH:
- case KVM_RISCV_ISA_EXT_ZVFHMIN:
- case KVM_RISCV_ISA_EXT_ZVKB:
- case KVM_RISCV_ISA_EXT_ZVKG:
- case KVM_RISCV_ISA_EXT_ZVKNED:
- case KVM_RISCV_ISA_EXT_ZVKNHA:
- case KVM_RISCV_ISA_EXT_ZVKNHB:
- case KVM_RISCV_ISA_EXT_ZVKSED:
- case KVM_RISCV_ISA_EXT_ZVKSH:
- case KVM_RISCV_ISA_EXT_ZVKT:
- return false;
- /* Extensions which can be disabled using Smstateen */
- case KVM_RISCV_ISA_EXT_SSAIA:
- return riscv_has_extension_unlikely(RISCV_ISA_EXT_SMSTATEEN);
- case KVM_RISCV_ISA_EXT_SVADE:
- /*
- * The henvcfg.ADUE is read-only zero if menvcfg.ADUE is zero.
- * Svade can't be disabled unless we support Svadu.
- */
- return arch_has_hw_pte_young();
- default:
- break;
- }
-
- return true;
-}
-
void kvm_riscv_vcpu_setup_isa(struct kvm_vcpu *vcpu)
{
unsigned long guest_ext, i;
- for (i = 0; i < ARRAY_SIZE(kvm_isa_ext_arr); i++) {
+ for (i = 0; i < KVM_RISCV_ISA_EXT_MAX; i++) {
if (__kvm_riscv_isa_check_host(i, &guest_ext))
continue;
- if (kvm_riscv_vcpu_isa_enable_allowed(i))
+ if (kvm_riscv_isa_enable_allowed(i))
set_bit(guest_ext, vcpu->arch.isa);
}
}
@@ -361,15 +120,15 @@ static int kvm_riscv_vcpu_set_reg_config(struct kvm_vcpu *vcpu,
if (!vcpu->arch.ran_atleast_once) {
/* Ignore the enable/disable request for certain extensions */
for (i = 0; i < RISCV_ISA_EXT_BASE; i++) {
- isa_ext = kvm_riscv_vcpu_base2isa_ext(i);
+ isa_ext = kvm_riscv_base2isa_ext(i);
if (isa_ext >= KVM_RISCV_ISA_EXT_MAX) {
reg_val &= ~BIT(i);
continue;
}
- if (!kvm_riscv_vcpu_isa_enable_allowed(isa_ext))
+ if (!kvm_riscv_isa_enable_allowed(isa_ext))
if (reg_val & BIT(i))
reg_val &= ~BIT(i);
- if (!kvm_riscv_vcpu_isa_disable_allowed(isa_ext))
+ if (!kvm_riscv_isa_disable_allowed(isa_ext))
if (!(reg_val & BIT(i)))
reg_val |= BIT(i);
}
@@ -693,10 +452,10 @@ static int riscv_vcpu_set_isa_ext_single(struct kvm_vcpu *vcpu,
* extension can be disabled
*/
if (reg_val == 1 &&
- kvm_riscv_vcpu_isa_enable_allowed(reg_num))
+ kvm_riscv_isa_enable_allowed(reg_num))
set_bit(guest_ext, vcpu->arch.isa);
else if (!reg_val &&
- kvm_riscv_vcpu_isa_disable_allowed(reg_num))
+ kvm_riscv_isa_disable_allowed(reg_num))
clear_bit(guest_ext, vcpu->arch.isa);
else
return -EINVAL;
diff --git a/arch/riscv/kvm/vcpu_pmu.c b/arch/riscv/kvm/vcpu_pmu.c
index 9759143c1785..5d37830c59b6 100644
--- a/arch/riscv/kvm/vcpu_pmu.c
+++ b/arch/riscv/kvm/vcpu_pmu.c
@@ -7,15 +7,16 @@
*/
#define pr_fmt(fmt) "riscv-kvm-pmu: " fmt
+#include <linux/bitops.h>
#include <linux/errno.h>
#include <linux/err.h>
#include <linux/kvm_host.h>
#include <linux/perf/riscv_pmu.h>
#include <asm/csr.h>
+#include <asm/kvm_isa.h>
#include <asm/kvm_vcpu_sbi.h>
#include <asm/kvm_vcpu_pmu.h>
#include <asm/sbi.h>
-#include <linux/bitops.h>
#define kvm_pmu_num_counters(pmu) ((pmu)->num_hw_ctrs + (pmu)->num_fw_ctrs)
#define get_event_type(x) (((x) & SBI_PMU_EVENT_IDX_TYPE_MASK) >> 16)
diff --git a/arch/riscv/kvm/vcpu_timer.c b/arch/riscv/kvm/vcpu_timer.c
index cac4f3a5f213..9817ff802821 100644
--- a/arch/riscv/kvm/vcpu_timer.c
+++ b/arch/riscv/kvm/vcpu_timer.c
@@ -12,6 +12,7 @@
#include <linux/uaccess.h>
#include <clocksource/timer-riscv.h>
#include <asm/delay.h>
+#include <asm/kvm_isa.h>
#include <asm/kvm_nacl.h>
#include <asm/kvm_vcpu_timer.h>
diff --git a/arch/riscv/kvm/vcpu_vector.c b/arch/riscv/kvm/vcpu_vector.c
index 8c7315a96b9e..a36e9e2c28df 100644
--- a/arch/riscv/kvm/vcpu_vector.c
+++ b/arch/riscv/kvm/vcpu_vector.c
@@ -12,6 +12,7 @@
#include <linux/kvm_host.h>
#include <linux/uaccess.h>
#include <asm/cpufeature.h>
+#include <asm/kvm_isa.h>
#include <asm/kvm_vcpu_vector.h>
#include <asm/vector.h>
--
2.43.0
* [PATCH 06/27] RISC-V: KVM: Move timer state defines closer to struct in UAPI header
From: Anup Patel @ 2026-01-20 7:59 UTC
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Shuah Khan,
Anup Patel, Andrew Jones, kvm-riscv, kvm, linux-riscv,
linux-kernel, linux-kselftest, Anup Patel
The KVM_RISCV_TIMER_STATE_xyz defines specify the possible values of
the "state" member of struct kvm_riscv_timer, so move these defines
closer to struct kvm_riscv_timer in uapi/asm/kvm.h.
Signed-off-by: Anup Patel <anup.patel@oss.qualcomm.com>
---
arch/riscv/include/uapi/asm/kvm.h | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h
index 6a89c1d00a72..504e73305343 100644
--- a/arch/riscv/include/uapi/asm/kvm.h
+++ b/arch/riscv/include/uapi/asm/kvm.h
@@ -110,6 +110,10 @@ struct kvm_riscv_timer {
__u64 state;
};
+/* Possible states for kvm_riscv_timer */
+#define KVM_RISCV_TIMER_STATE_OFF 0
+#define KVM_RISCV_TIMER_STATE_ON 1
+
/*
* ISA extension IDs specific to KVM. This is not the same as the host ISA
* extension IDs as that is internal to the host and should not be exposed
@@ -238,10 +242,6 @@ struct kvm_riscv_sbi_fwft {
struct kvm_riscv_sbi_fwft_feature pointer_masking;
};
-/* Possible states for kvm_riscv_timer */
-#define KVM_RISCV_TIMER_STATE_OFF 0
-#define KVM_RISCV_TIMER_STATE_ON 1
-
/* If you need to interpret the index values, here is the key: */
#define KVM_REG_RISCV_TYPE_MASK 0x00000000FF000000
#define KVM_REG_RISCV_TYPE_SHIFT 24
--
2.43.0
* [PATCH 07/27] RISC-V: KVM: Add hideleg to struct kvm_vcpu_config
From: Anup Patel @ 2026-01-20 7:59 UTC
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Shuah Khan,
Anup Patel, Andrew Jones, kvm-riscv, kvm, linux-riscv,
linux-kernel, linux-kselftest, Anup Patel
The hideleg CSR state when the VCPU is running in guest VS/VU-mode
will differ from when it is running in guest HS-mode. To support
this, add hideleg to struct kvm_vcpu_config and re-program the
hideleg CSR upon every kvm_arch_vcpu_load().
Signed-off-by: Anup Patel <anup.patel@oss.qualcomm.com>
---
arch/riscv/include/asm/kvm_host.h | 1 +
arch/riscv/kvm/vcpu.c | 3 +++
2 files changed, 4 insertions(+)
diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index 24585304c02b..f3a41a1be678 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -171,6 +171,7 @@ struct kvm_vcpu_config {
u64 henvcfg;
u64 hstateen0;
unsigned long hedeleg;
+ unsigned long hideleg;
};
struct kvm_vcpu_smstateen_csr {
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index a55a95da54d0..494e0517ca4e 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -134,6 +134,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
vcpu->arch.ran_atleast_once = false;
vcpu->arch.cfg.hedeleg = KVM_HEDELEG_DEFAULT;
+ vcpu->arch.cfg.hideleg = KVM_HIDELEG_DEFAULT;
vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
bitmap_zero(vcpu->arch.isa, RISCV_ISA_EXT_MAX);
@@ -591,6 +592,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
nacl_csr_write(nsh, CSR_VSCAUSE, csr->vscause);
nacl_csr_write(nsh, CSR_VSTVAL, csr->vstval);
nacl_csr_write(nsh, CSR_HEDELEG, cfg->hedeleg);
+ nacl_csr_write(nsh, CSR_HIDELEG, cfg->hideleg);
nacl_csr_write(nsh, CSR_HVIP, csr->hvip);
nacl_csr_write(nsh, CSR_VSATP, csr->vsatp);
nacl_csr_write(nsh, CSR_HENVCFG, cfg->henvcfg);
@@ -610,6 +612,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
csr_write(CSR_VSCAUSE, csr->vscause);
csr_write(CSR_VSTVAL, csr->vstval);
csr_write(CSR_HEDELEG, cfg->hedeleg);
+ csr_write(CSR_HIDELEG, cfg->hideleg);
csr_write(CSR_HVIP, csr->hvip);
csr_write(CSR_VSATP, csr->vsatp);
csr_write(CSR_HENVCFG, cfg->henvcfg);
--
2.43.0
* [PATCH 08/27] RISC-V: KVM: Factor-out VCPU config into separate sources
From: Anup Patel @ 2026-01-20 7:59 UTC
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Shuah Khan,
Anup Patel, Andrew Jones, kvm-riscv, kvm, linux-riscv,
linux-kernel, linux-kselftest, Anup Patel
The VCPU config deals with the hideleg, hedeleg, henvcfg, and hstateenX
CSR configuration of each VCPU. Factor out the VCPU config into
separate sources so that it can be handled differently for guest
HS-mode and guest VS/VU-mode.
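After this patch, the VCPU config lifecycle maps onto the existing
VCPU paths as follows (summarizing the diff below):

	kvm_arch_vcpu_create()                -> kvm_riscv_vcpu_config_init()
	kvm_arch_vcpu_ioctl_set_guest_debug() -> kvm_riscv_vcpu_config_guest_debug()
	first KVM_RUN                         -> kvm_riscv_vcpu_config_ran_once()
	kvm_arch_vcpu_load()                  -> kvm_riscv_vcpu_config_load()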
Signed-off-by: Anup Patel <anup.patel@oss.qualcomm.com>
---
arch/riscv/include/asm/kvm_host.h | 20 +----
arch/riscv/include/asm/kvm_vcpu_config.h | 25 ++++++
arch/riscv/kvm/Makefile | 1 +
arch/riscv/kvm/main.c | 4 +-
arch/riscv/kvm/vcpu.c | 79 ++++--------------
arch/riscv/kvm/vcpu_config.c | 101 +++++++++++++++++++++++
6 files changed, 144 insertions(+), 86 deletions(-)
create mode 100644 arch/riscv/include/asm/kvm_vcpu_config.h
create mode 100644 arch/riscv/kvm/vcpu_config.c
diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index f3a41a1be678..11c3566318ae 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -18,6 +18,7 @@
#include <asm/ptrace.h>
#include <asm/kvm_tlb.h>
#include <asm/kvm_vmid.h>
+#include <asm/kvm_vcpu_config.h>
#include <asm/kvm_vcpu_fp.h>
#include <asm/kvm_vcpu_insn.h>
#include <asm/kvm_vcpu_sbi.h>
@@ -47,18 +48,6 @@
#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE
-#define KVM_HEDELEG_DEFAULT (BIT(EXC_INST_MISALIGNED) | \
- BIT(EXC_INST_ILLEGAL) | \
- BIT(EXC_BREAKPOINT) | \
- BIT(EXC_SYSCALL) | \
- BIT(EXC_INST_PAGE_FAULT) | \
- BIT(EXC_LOAD_PAGE_FAULT) | \
- BIT(EXC_STORE_PAGE_FAULT))
-
-#define KVM_HIDELEG_DEFAULT (BIT(IRQ_VS_SOFT) | \
- BIT(IRQ_VS_TIMER) | \
- BIT(IRQ_VS_EXT))
-
#define KVM_DIRTY_LOG_MANUAL_CAPS (KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE | \
KVM_DIRTY_LOG_INITIALLY_SET)
@@ -167,13 +156,6 @@ struct kvm_vcpu_csr {
unsigned long senvcfg;
};
-struct kvm_vcpu_config {
- u64 henvcfg;
- u64 hstateen0;
- unsigned long hedeleg;
- unsigned long hideleg;
-};
-
struct kvm_vcpu_smstateen_csr {
unsigned long sstateen0;
};
diff --git a/arch/riscv/include/asm/kvm_vcpu_config.h b/arch/riscv/include/asm/kvm_vcpu_config.h
new file mode 100644
index 000000000000..fcc15a0296b3
--- /dev/null
+++ b/arch/riscv/include/asm/kvm_vcpu_config.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2026 Qualcomm Technologies, Inc.
+ */
+
+#ifndef __KVM_VCPU_RISCV_CONFIG_H
+#define __KVM_VCPU_RISCV_CONFIG_H
+
+#include <linux/types.h>
+
+struct kvm_vcpu;
+
+struct kvm_vcpu_config {
+ u64 henvcfg;
+ u64 hstateen0;
+ unsigned long hedeleg;
+ unsigned long hideleg;
+};
+
+void kvm_riscv_vcpu_config_init(struct kvm_vcpu *vcpu);
+void kvm_riscv_vcpu_config_guest_debug(struct kvm_vcpu *vcpu);
+void kvm_riscv_vcpu_config_ran_once(struct kvm_vcpu *vcpu);
+void kvm_riscv_vcpu_config_load(struct kvm_vcpu *vcpu);
+
+#endif
diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile
index 07eab96189e7..296c2ba05089 100644
--- a/arch/riscv/kvm/Makefile
+++ b/arch/riscv/kvm/Makefile
@@ -21,6 +21,7 @@ kvm-y += mmu.o
kvm-y += nacl.o
kvm-y += tlb.o
kvm-y += vcpu.o
+kvm-y += vcpu_config.o
kvm-y += vcpu_exit.o
kvm-y += vcpu_fp.o
kvm-y += vcpu_insn.o
diff --git a/arch/riscv/kvm/main.c b/arch/riscv/kvm/main.c
index 45536af521f0..588a84783dff 100644
--- a/arch/riscv/kvm/main.c
+++ b/arch/riscv/kvm/main.c
@@ -41,8 +41,8 @@ int kvm_arch_enable_virtualization_cpu(void)
if (rc)
return rc;
- csr_write(CSR_HEDELEG, KVM_HEDELEG_DEFAULT);
- csr_write(CSR_HIDELEG, KVM_HIDELEG_DEFAULT);
+ csr_write(CSR_HEDELEG, 0);
+ csr_write(CSR_HIDELEG, 0);
/* VS should access only the time counter directly. Everything else should trap */
csr_write(CSR_HCOUNTEREN, 0x02);
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 494e0517ca4e..62599fc002e8 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -133,11 +133,12 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
/* Mark this VCPU never ran */
vcpu->arch.ran_atleast_once = false;
- vcpu->arch.cfg.hedeleg = KVM_HEDELEG_DEFAULT;
- vcpu->arch.cfg.hideleg = KVM_HIDELEG_DEFAULT;
vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
bitmap_zero(vcpu->arch.isa, RISCV_ISA_EXT_MAX);
+ /* Setup VCPU config */
+ kvm_riscv_vcpu_config_init(vcpu);
+
/* Setup ISA features available to VCPU */
kvm_riscv_vcpu_setup_isa(vcpu);
@@ -530,57 +531,25 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
struct kvm_guest_debug *dbg)
{
- if (dbg->control & KVM_GUESTDBG_ENABLE) {
+ if (dbg->control & KVM_GUESTDBG_ENABLE)
vcpu->guest_debug = dbg->control;
- vcpu->arch.cfg.hedeleg &= ~BIT(EXC_BREAKPOINT);
- } else {
+ else
vcpu->guest_debug = 0;
- vcpu->arch.cfg.hedeleg |= BIT(EXC_BREAKPOINT);
- }
-
+ kvm_riscv_vcpu_config_guest_debug(vcpu);
return 0;
}
-static void kvm_riscv_vcpu_setup_config(struct kvm_vcpu *vcpu)
-{
- const unsigned long *isa = vcpu->arch.isa;
- struct kvm_vcpu_config *cfg = &vcpu->arch.cfg;
-
- if (riscv_isa_extension_available(isa, SVPBMT))
- cfg->henvcfg |= ENVCFG_PBMTE;
-
- if (riscv_isa_extension_available(isa, SSTC))
- cfg->henvcfg |= ENVCFG_STCE;
-
- if (riscv_isa_extension_available(isa, ZICBOM))
- cfg->henvcfg |= (ENVCFG_CBIE | ENVCFG_CBCFE);
-
- if (riscv_isa_extension_available(isa, ZICBOZ))
- cfg->henvcfg |= ENVCFG_CBZE;
-
- if (riscv_isa_extension_available(isa, SVADU) &&
- !riscv_isa_extension_available(isa, SVADE))
- cfg->henvcfg |= ENVCFG_ADUE;
-
- if (riscv_has_extension_unlikely(RISCV_ISA_EXT_SMSTATEEN)) {
- cfg->hstateen0 |= SMSTATEEN0_HSENVCFG;
- if (riscv_isa_extension_available(isa, SSAIA))
- cfg->hstateen0 |= SMSTATEEN0_AIA_IMSIC |
- SMSTATEEN0_AIA |
- SMSTATEEN0_AIA_ISEL;
- if (riscv_isa_extension_available(isa, SMSTATEEN))
- cfg->hstateen0 |= SMSTATEEN0_SSTATEEN0;
- }
-
- if (vcpu->guest_debug)
- cfg->hedeleg &= ~BIT(EXC_BREAKPOINT);
-}
-
void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
{
void *nsh;
struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
- struct kvm_vcpu_config *cfg = &vcpu->arch.cfg;
+
+ /*
+ * Load VCPU config CSRs before other CSRs because
+ * the read/write behaviour of certain CSRs change
+ * based on VCPU config CSRs.
+ */
+ kvm_riscv_vcpu_config_load(vcpu);
if (kvm_riscv_nacl_sync_csr_available()) {
nsh = nacl_shmem();
@@ -591,18 +560,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
nacl_csr_write(nsh, CSR_VSEPC, csr->vsepc);
nacl_csr_write(nsh, CSR_VSCAUSE, csr->vscause);
nacl_csr_write(nsh, CSR_VSTVAL, csr->vstval);
- nacl_csr_write(nsh, CSR_HEDELEG, cfg->hedeleg);
- nacl_csr_write(nsh, CSR_HIDELEG, cfg->hideleg);
nacl_csr_write(nsh, CSR_HVIP, csr->hvip);
nacl_csr_write(nsh, CSR_VSATP, csr->vsatp);
- nacl_csr_write(nsh, CSR_HENVCFG, cfg->henvcfg);
- if (IS_ENABLED(CONFIG_32BIT))
- nacl_csr_write(nsh, CSR_HENVCFGH, cfg->henvcfg >> 32);
- if (riscv_has_extension_unlikely(RISCV_ISA_EXT_SMSTATEEN)) {
- nacl_csr_write(nsh, CSR_HSTATEEN0, cfg->hstateen0);
- if (IS_ENABLED(CONFIG_32BIT))
- nacl_csr_write(nsh, CSR_HSTATEEN0H, cfg->hstateen0 >> 32);
- }
} else {
csr_write(CSR_VSSTATUS, csr->vsstatus);
csr_write(CSR_VSIE, csr->vsie);
@@ -611,18 +570,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
csr_write(CSR_VSEPC, csr->vsepc);
csr_write(CSR_VSCAUSE, csr->vscause);
csr_write(CSR_VSTVAL, csr->vstval);
- csr_write(CSR_HEDELEG, cfg->hedeleg);
- csr_write(CSR_HIDELEG, cfg->hideleg);
csr_write(CSR_HVIP, csr->hvip);
csr_write(CSR_VSATP, csr->vsatp);
- csr_write(CSR_HENVCFG, cfg->henvcfg);
- if (IS_ENABLED(CONFIG_32BIT))
- csr_write(CSR_HENVCFGH, cfg->henvcfg >> 32);
- if (riscv_has_extension_unlikely(RISCV_ISA_EXT_SMSTATEEN)) {
- csr_write(CSR_HSTATEEN0, cfg->hstateen0);
- if (IS_ENABLED(CONFIG_32BIT))
- csr_write(CSR_HSTATEEN0H, cfg->hstateen0 >> 32);
- }
}
kvm_riscv_mmu_update_hgatp(vcpu);
@@ -871,7 +820,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
struct kvm_run *run = vcpu->run;
if (!vcpu->arch.ran_atleast_once)
- kvm_riscv_vcpu_setup_config(vcpu);
+ kvm_riscv_vcpu_config_ran_once(vcpu);
/* Mark this VCPU ran at least once */
vcpu->arch.ran_atleast_once = true;
diff --git a/arch/riscv/kvm/vcpu_config.c b/arch/riscv/kvm/vcpu_config.c
new file mode 100644
index 000000000000..eb7374402b07
--- /dev/null
+++ b/arch/riscv/kvm/vcpu_config.c
@@ -0,0 +1,101 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2026 Qualcomm Technologies, Inc.
+ */
+
+#include <linux/kvm_host.h>
+#include <asm/kvm_nacl.h>
+
+#define KVM_HEDELEG_DEFAULT (BIT(EXC_INST_MISALIGNED) | \
+ BIT(EXC_INST_ILLEGAL) | \
+ BIT(EXC_BREAKPOINT) | \
+ BIT(EXC_SYSCALL) | \
+ BIT(EXC_INST_PAGE_FAULT) | \
+ BIT(EXC_LOAD_PAGE_FAULT) | \
+ BIT(EXC_STORE_PAGE_FAULT))
+
+#define KVM_HIDELEG_DEFAULT (BIT(IRQ_VS_SOFT) | \
+ BIT(IRQ_VS_TIMER) | \
+ BIT(IRQ_VS_EXT))
+
+void kvm_riscv_vcpu_config_init(struct kvm_vcpu *vcpu)
+{
+ vcpu->arch.cfg.hedeleg = KVM_HEDELEG_DEFAULT;
+ vcpu->arch.cfg.hideleg = KVM_HIDELEG_DEFAULT;
+}
+
+void kvm_riscv_vcpu_config_guest_debug(struct kvm_vcpu *vcpu)
+{
+ struct kvm_vcpu_config *cfg = &vcpu->arch.cfg;
+
+ if (vcpu->guest_debug)
+ cfg->hedeleg &= ~BIT(EXC_BREAKPOINT);
+ else
+ cfg->hedeleg |= BIT(EXC_BREAKPOINT);
+}
+
+void kvm_riscv_vcpu_config_ran_once(struct kvm_vcpu *vcpu)
+{
+ const unsigned long *isa = vcpu->arch.isa;
+ struct kvm_vcpu_config *cfg = &vcpu->arch.cfg;
+
+ if (riscv_isa_extension_available(isa, SVPBMT))
+ cfg->henvcfg |= ENVCFG_PBMTE;
+
+ if (riscv_isa_extension_available(isa, SSTC))
+ cfg->henvcfg |= ENVCFG_STCE;
+
+ if (riscv_isa_extension_available(isa, ZICBOM))
+ cfg->henvcfg |= (ENVCFG_CBIE | ENVCFG_CBCFE);
+
+ if (riscv_isa_extension_available(isa, ZICBOZ))
+ cfg->henvcfg |= ENVCFG_CBZE;
+
+ if (riscv_isa_extension_available(isa, SVADU) &&
+ !riscv_isa_extension_available(isa, SVADE))
+ cfg->henvcfg |= ENVCFG_ADUE;
+
+ if (riscv_has_extension_unlikely(RISCV_ISA_EXT_SMSTATEEN)) {
+ cfg->hstateen0 |= SMSTATEEN0_HSENVCFG;
+ if (riscv_isa_extension_available(isa, SSAIA))
+ cfg->hstateen0 |= SMSTATEEN0_AIA_IMSIC |
+ SMSTATEEN0_AIA |
+ SMSTATEEN0_AIA_ISEL;
+ if (riscv_isa_extension_available(isa, SMSTATEEN))
+ cfg->hstateen0 |= SMSTATEEN0_SSTATEEN0;
+ }
+
+ if (vcpu->guest_debug)
+ cfg->hedeleg &= ~BIT(EXC_BREAKPOINT);
+}
+
+void kvm_riscv_vcpu_config_load(struct kvm_vcpu *vcpu)
+{
+ struct kvm_vcpu_config *cfg = &vcpu->arch.cfg;
+ void *nsh;
+
+ if (kvm_riscv_nacl_sync_csr_available()) {
+ nsh = nacl_shmem();
+ nacl_csr_write(nsh, CSR_HEDELEG, cfg->hedeleg);
+ nacl_csr_write(nsh, CSR_HIDELEG, cfg->hideleg);
+ nacl_csr_write(nsh, CSR_HENVCFG, cfg->henvcfg);
+ if (IS_ENABLED(CONFIG_32BIT))
+ nacl_csr_write(nsh, CSR_HENVCFGH, cfg->henvcfg >> 32);
+ if (riscv_has_extension_unlikely(RISCV_ISA_EXT_SMSTATEEN)) {
+ nacl_csr_write(nsh, CSR_HSTATEEN0, cfg->hstateen0);
+ if (IS_ENABLED(CONFIG_32BIT))
+ nacl_csr_write(nsh, CSR_HSTATEEN0H, cfg->hstateen0 >> 32);
+ }
+ } else {
+ csr_write(CSR_HEDELEG, cfg->hedeleg);
+ csr_write(CSR_HIDELEG, cfg->hideleg);
+ csr_write(CSR_HENVCFG, cfg->henvcfg);
+ if (IS_ENABLED(CONFIG_32BIT))
+ csr_write(CSR_HENVCFGH, cfg->henvcfg >> 32);
+ if (riscv_has_extension_unlikely(RISCV_ISA_EXT_SMSTATEEN)) {
+ csr_write(CSR_HSTATEEN0, cfg->hstateen0);
+ if (IS_ENABLED(CONFIG_32BIT))
+ csr_write(CSR_HSTATEEN0H, cfg->hstateen0 >> 32);
+ }
+ }
+}
--
2.43.0
^ permalink raw reply related [flat|nested] 38+ messages in thread
* [PATCH 09/27] RISC-V: KVM: Don't check hstateen0 when updating sstateen0 CSR
2026-01-20 7:59 [PATCH 00/27] Nested virtualization for KVM RISC-V Anup Patel
` (7 preceding siblings ...)
2026-01-20 7:59 ` [PATCH 08/27] RISC-V: KVM: Factor-out VCPU config into separate sources Anup Patel
@ 2026-01-20 7:59 ` Anup Patel
2026-03-13 13:27 ` Radim Krčmář
2026-01-20 7:59 ` [PATCH 10/27] RISC-V: KVM: Initial skeletal nested virtualization support Anup Patel
` (18 subsequent siblings)
27 siblings, 1 reply; 38+ messages in thread
From: Anup Patel @ 2026-01-20 7:59 UTC (permalink / raw)
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Shuah Khan,
Anup Patel, Andrew Jones, kvm-riscv, kvm, linux-riscv,
linux-kernel, linux-kselftest, Anup Patel
The hstateen0 CSR will be programmed differently for guest HS-mode
and guest VS/VU-mode, so don't check the hstateen0.SSTATEEN0 bit
when updating the sstateen0 CSR in kvm_riscv_vcpu_swap_in_guest_state()
and kvm_riscv_vcpu_swap_in_host_state().
Signed-off-by: Anup Patel <anup.patel@oss.qualcomm.com>
---
arch/riscv/kvm/vcpu.c | 14 ++++----------
1 file changed, 4 insertions(+), 10 deletions(-)
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 62599fc002e8..93c731da67f6 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -702,28 +702,22 @@ static __always_inline void kvm_riscv_vcpu_swap_in_guest_state(struct kvm_vcpu *
{
struct kvm_vcpu_smstateen_csr *smcsr = &vcpu->arch.smstateen_csr;
struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
- struct kvm_vcpu_config *cfg = &vcpu->arch.cfg;
vcpu->arch.host_scounteren = csr_swap(CSR_SCOUNTEREN, csr->scounteren);
vcpu->arch.host_senvcfg = csr_swap(CSR_SENVCFG, csr->senvcfg);
- if (riscv_has_extension_unlikely(RISCV_ISA_EXT_SMSTATEEN) &&
- (cfg->hstateen0 & SMSTATEEN0_SSTATEEN0))
- vcpu->arch.host_sstateen0 = csr_swap(CSR_SSTATEEN0,
- smcsr->sstateen0);
+ if (riscv_has_extension_unlikely(RISCV_ISA_EXT_SMSTATEEN))
+ vcpu->arch.host_sstateen0 = csr_swap(CSR_SSTATEEN0, smcsr->sstateen0);
}
static __always_inline void kvm_riscv_vcpu_swap_in_host_state(struct kvm_vcpu *vcpu)
{
struct kvm_vcpu_smstateen_csr *smcsr = &vcpu->arch.smstateen_csr;
struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
- struct kvm_vcpu_config *cfg = &vcpu->arch.cfg;
csr->scounteren = csr_swap(CSR_SCOUNTEREN, vcpu->arch.host_scounteren);
csr->senvcfg = csr_swap(CSR_SENVCFG, vcpu->arch.host_senvcfg);
- if (riscv_has_extension_unlikely(RISCV_ISA_EXT_SMSTATEEN) &&
- (cfg->hstateen0 & SMSTATEEN0_SSTATEEN0))
- smcsr->sstateen0 = csr_swap(CSR_SSTATEEN0,
- vcpu->arch.host_sstateen0);
+ if (riscv_has_extension_unlikely(RISCV_ISA_EXT_SMSTATEEN))
+ smcsr->sstateen0 = csr_swap(CSR_SSTATEEN0, vcpu->arch.host_sstateen0);
}
/*
--
2.43.0
^ permalink raw reply related [flat|nested] 38+ messages in thread
* [PATCH 10/27] RISC-V: KVM: Initial skeletal nested virtualization support
2026-01-20 7:59 [PATCH 00/27] Nested virtualization for KVM RISC-V Anup Patel
` (8 preceding siblings ...)
2026-01-20 7:59 ` [PATCH 09/27] RISC-V: KVM: Don't check hstateen0 when updating sstateen0 CSR Anup Patel
@ 2026-01-20 7:59 ` Anup Patel
2026-01-20 7:59 ` [PATCH 11/27] RISC-V: KVM: Use half VMID space for nested guest Anup Patel
` (17 subsequent siblings)
27 siblings, 0 replies; 38+ messages in thread
From: Anup Patel @ 2026-01-20 7:59 UTC (permalink / raw)
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Shuah Khan,
Anup Patel, Andrew Jones, kvm-riscv, kvm, linux-riscv,
linux-kernel, linux-kselftest, Anup Patel
Add initial skeletal nested virtualization support, which is disabled
by default and must be explicitly enabled using a module parameter.
Subsequent patches will further improve and complete the nested
virtualization support.
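Usage note (illustrative, not from the patch): since vcpu_nested.o is
built into the kvm module (see the Makefile change below), nested
virtualization would be enabled at load time with something like
"modprobe kvm enable_nested_virt=1". kvm_riscv_nested_init() samples
the parameter once during module init, so writing
/sys/module/kvm/parameters/enable_nested_virt later will not flip the
static key for an already-loaded module.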
Signed-off-by: Anup Patel <anup.patel@oss.qualcomm.com>
---
arch/riscv/include/asm/kvm_host.h | 5 ++
arch/riscv/include/asm/kvm_vcpu_nested.h | 83 ++++++++++++++++++++++++
arch/riscv/kvm/Makefile | 2 +
arch/riscv/kvm/isa.c | 2 +-
arch/riscv/kvm/main.c | 5 ++
arch/riscv/kvm/vcpu.c | 20 +++++-
arch/riscv/kvm/vcpu_exit.c | 22 ++++++-
arch/riscv/kvm/vcpu_nested.c | 48 ++++++++++++++
arch/riscv/kvm/vcpu_nested_swtlb.c | 70 ++++++++++++++++++++
9 files changed, 253 insertions(+), 4 deletions(-)
create mode 100644 arch/riscv/include/asm/kvm_vcpu_nested.h
create mode 100644 arch/riscv/kvm/vcpu_nested.c
create mode 100644 arch/riscv/kvm/vcpu_nested_swtlb.c
diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index 11c3566318ae..3b58953eb4eb 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -25,6 +25,7 @@
#include <asm/kvm_vcpu_sbi_fwft.h>
#include <asm/kvm_vcpu_timer.h>
#include <asm/kvm_vcpu_pmu.h>
+#include <asm/kvm_vcpu_nested.h>
#define KVM_MAX_VCPUS 1024
@@ -45,6 +46,7 @@
#define KVM_REQ_HFENCE \
KVM_ARCH_REQ_FLAGS(5, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
#define KVM_REQ_STEAL_UPDATE KVM_ARCH_REQ(6)
+#define KVM_REQ_NESTED_SWTLB KVM_ARCH_REQ(7)
#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE
@@ -203,6 +205,9 @@ struct kvm_vcpu_arch {
/* CPU reset state of Guest VCPU */
struct kvm_vcpu_reset_state reset_state;
+ /* CPU nested virtualization context of Guest VCPU */
+ struct kvm_vcpu_nested nested;
+
/*
* VCPU interrupts
*
diff --git a/arch/riscv/include/asm/kvm_vcpu_nested.h b/arch/riscv/include/asm/kvm_vcpu_nested.h
new file mode 100644
index 000000000000..4234c6e81bb6
--- /dev/null
+++ b/arch/riscv/include/asm/kvm_vcpu_nested.h
@@ -0,0 +1,83 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2026 Qualcomm Technologies, Inc.
+ */
+
+#ifndef __RISCV_VCPU_NESTED_H__
+#define __RISCV_VCPU_NESTED_H__
+
+#include <linux/jump_label.h>
+#include <linux/kvm_types.h>
+#include <asm/kvm_mmu.h>
+
+DECLARE_STATIC_KEY_FALSE(kvm_riscv_nested_available);
+#define kvm_riscv_nested_available() \
+ static_branch_unlikely(&kvm_riscv_nested_available)
+
+struct kvm_vcpu_nested_swtlb {
+ /* Software TLB request */
+ struct {
+ bool pending;
+ struct kvm_gstage_mapping guest;
+ struct kvm_gstage_mapping host;
+ } request;
+
+ /* Shadow G-stage page table for guest VS/VU-mode */
+ pgd_t *shadow_pgd;
+ phys_addr_t shadow_pgd_phys;
+};
+
+struct kvm_vcpu_nested_csr {
+ unsigned long hstatus;
+ unsigned long hedeleg;
+ unsigned long hideleg;
+ unsigned long hvip;
+ unsigned long hcounteren;
+ unsigned long htimedelta;
+ unsigned long htimedeltah;
+ unsigned long htval;
+ unsigned long htinst;
+ unsigned long henvcfg;
+ unsigned long henvcfgh;
+ unsigned long hgatp;
+ unsigned long vsstatus;
+ unsigned long vsie;
+ unsigned long vstvec;
+ unsigned long vsscratch;
+ unsigned long vsepc;
+ unsigned long vscause;
+ unsigned long vstval;
+ unsigned long vsatp;
+};
+
+struct kvm_vcpu_nested {
+ /* Nested virt state */
+ bool virt;
+
+ /* Nested software TLB request */
+ struct kvm_vcpu_nested_swtlb swtlb;
+
+ /* Nested CSR state */
+ struct kvm_vcpu_nested_csr csr;
+};
+
+#define kvm_riscv_vcpu_nested_virt(__vcpu) ((__vcpu)->arch.nested.virt)
+
+int kvm_riscv_vcpu_nested_swtlb_xlate(struct kvm_vcpu *vcpu,
+ const struct kvm_cpu_trap *trap,
+ struct kvm_gstage_mapping *out_map,
+ struct kvm_cpu_trap *out_trap);
+void kvm_riscv_vcpu_nested_swtlb_process(struct kvm_vcpu *vcpu);
+void kvm_riscv_vcpu_nested_swtlb_request(struct kvm_vcpu *vcpu,
+ const struct kvm_gstage_mapping *guest_map,
+ const struct kvm_gstage_mapping *host_map);
+void kvm_riscv_vcpu_nested_swtlb_reset(struct kvm_vcpu *vcpu);
+int kvm_riscv_vcpu_nested_swtlb_init(struct kvm_vcpu *vcpu);
+void kvm_riscv_vcpu_nested_swtlb_deinit(struct kvm_vcpu *vcpu);
+
+void kvm_riscv_vcpu_nested_reset(struct kvm_vcpu *vcpu);
+int kvm_riscv_vcpu_nested_init(struct kvm_vcpu *vcpu);
+void kvm_riscv_vcpu_nested_deinit(struct kvm_vcpu *vcpu);
+void kvm_riscv_nested_init(void);
+
+#endif
diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile
index 296c2ba05089..a8806b69205f 100644
--- a/arch/riscv/kvm/Makefile
+++ b/arch/riscv/kvm/Makefile
@@ -25,6 +25,8 @@ kvm-y += vcpu_config.o
kvm-y += vcpu_exit.o
kvm-y += vcpu_fp.o
kvm-y += vcpu_insn.o
+kvm-y += vcpu_nested.o
+kvm-y += vcpu_nested_swtlb.o
kvm-y += vcpu_onereg.o
kvm-$(CONFIG_RISCV_PMU_SBI) += vcpu_pmu.o
kvm-y += vcpu_sbi.o
diff --git a/arch/riscv/kvm/isa.c b/arch/riscv/kvm/isa.c
index e860f6d79bb0..1566d01fc52e 100644
--- a/arch/riscv/kvm/isa.c
+++ b/arch/riscv/kvm/isa.c
@@ -145,7 +145,7 @@ bool kvm_riscv_isa_enable_allowed(unsigned long ext)
{
switch (ext) {
case KVM_RISCV_ISA_EXT_H:
- return false;
+ return kvm_riscv_nested_available();
case KVM_RISCV_ISA_EXT_SSCOFPMF:
/* Sscofpmf depends on interrupt filtering defined in ssaia */
return !kvm_riscv_isa_check_host(SSAIA);
diff --git a/arch/riscv/kvm/main.c b/arch/riscv/kvm/main.c
index 588a84783dff..5b4bf972d242 100644
--- a/arch/riscv/kvm/main.c
+++ b/arch/riscv/kvm/main.c
@@ -131,6 +131,8 @@ static int __init riscv_kvm_init(void)
return rc;
}
+ kvm_riscv_nested_init();
+
kvm_info("hypervisor extension available\n");
if (kvm_riscv_nacl_available()) {
@@ -172,6 +174,9 @@ static int __init riscv_kvm_init(void)
kvm_info("AIA available with %d guest external interrupts\n",
kvm_riscv_aia_nr_hgei);
+ if (kvm_riscv_nested_available())
+ kvm_info("nested virtualization available\n");
+
kvm_riscv_setup_vendor_features();
kvm_register_perf_callbacks(NULL);
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 93c731da67f6..859c8e71df65 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -94,6 +94,8 @@ static void kvm_riscv_reset_vcpu(struct kvm_vcpu *vcpu, bool kvm_sbi_reset)
kvm_riscv_vcpu_context_reset(vcpu, kvm_sbi_reset);
+ kvm_riscv_vcpu_nested_reset(vcpu);
+
kvm_riscv_vcpu_fp_reset(vcpu);
kvm_riscv_vcpu_vector_reset(vcpu);
@@ -152,10 +154,16 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
spin_lock_init(&vcpu->arch.reset_state.lock);
- rc = kvm_riscv_vcpu_alloc_vector_context(vcpu);
+ rc = kvm_riscv_vcpu_nested_init(vcpu);
if (rc)
return rc;
+ rc = kvm_riscv_vcpu_alloc_vector_context(vcpu);
+ if (rc) {
+ kvm_riscv_vcpu_nested_deinit(vcpu);
+ return rc;
+ }
+
/* Setup VCPU timer */
kvm_riscv_vcpu_timer_init(vcpu);
@@ -205,6 +213,9 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
/* Free vector context space for host and guest kernel */
kvm_riscv_vcpu_free_vector_context(vcpu);
+
+ /* Cleanup VCPU nested state */
+ kvm_riscv_vcpu_nested_deinit(vcpu);
}
int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu)
@@ -683,6 +694,13 @@ static int kvm_riscv_check_vcpu_requests(struct kvm_vcpu *vcpu)
if (kvm_check_request(KVM_REQ_STEAL_UPDATE, vcpu))
kvm_riscv_vcpu_record_steal_time(vcpu);
+ /*
+ * Process nested software TLB request after handling
+ * various HFENCE requests.
+ */
+ if (kvm_check_request(KVM_REQ_NESTED_SWTLB, vcpu))
+ kvm_riscv_vcpu_nested_swtlb_process(vcpu);
+
if (kvm_dirty_ring_check_request(vcpu))
return 0;
}
diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c
index 0bb0c51e3c89..4f63548e582f 100644
--- a/arch/riscv/kvm/vcpu_exit.c
+++ b/arch/riscv/kvm/vcpu_exit.c
@@ -15,14 +15,29 @@
static int gstage_page_fault(struct kvm_vcpu *vcpu, struct kvm_run *run,
struct kvm_cpu_trap *trap)
{
- struct kvm_gstage_mapping host_map;
+ struct kvm_gstage_mapping guest_map, host_map;
struct kvm_memory_slot *memslot;
unsigned long hva, fault_addr;
+ struct kvm_cpu_trap out_trap;
bool writable;
gfn_t gfn;
int ret;
- fault_addr = (trap->htval << 2) | (trap->stval & 0x3);
+ if (kvm_riscv_vcpu_nested_virt(vcpu)) {
+ memset(&out_trap, 0, sizeof(out_trap));
+ ret = kvm_riscv_vcpu_nested_swtlb_xlate(vcpu, trap, &guest_map, &out_trap);
+ if (ret <= 0)
+ return ret;
+ fault_addr = __page_val_to_pfn(pte_val(guest_map.pte)) << PAGE_SHIFT;
+
+ if (out_trap.scause) {
+ kvm_riscv_vcpu_trap_redirect(vcpu, &out_trap);
+ return 1;
+ }
+ } else {
+ fault_addr = (trap->htval << 2) | (trap->stval & 0x3);
+ }
+
gfn = fault_addr >> PAGE_SHIFT;
memslot = gfn_to_memslot(vcpu->kvm, gfn);
hva = gfn_to_hva_memslot_prot(memslot, gfn, &writable);
@@ -49,6 +64,9 @@ static int gstage_page_fault(struct kvm_vcpu *vcpu, struct kvm_run *run,
if (ret < 0)
return ret;
+ if (kvm_riscv_vcpu_nested_virt(vcpu) && !pte_none(host_map.pte))
+ kvm_riscv_vcpu_nested_swtlb_request(vcpu, &guest_map, &host_map);
+
return 1;
}
diff --git a/arch/riscv/kvm/vcpu_nested.c b/arch/riscv/kvm/vcpu_nested.c
new file mode 100644
index 000000000000..3c30d35b3b39
--- /dev/null
+++ b/arch/riscv/kvm/vcpu_nested.c
@@ -0,0 +1,48 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2026 Qualcomm Technologies, Inc.
+ */
+
+#include <linux/kvm_host.h>
+
+DEFINE_STATIC_KEY_FALSE(kvm_riscv_nested_available);
+
+static bool __read_mostly enable_nested_virt;
+module_param(enable_nested_virt, bool, 0644);
+
+void kvm_riscv_vcpu_nested_reset(struct kvm_vcpu *vcpu)
+{
+ struct kvm_vcpu_nested *ns = &vcpu->arch.nested;
+ struct kvm_vcpu_nested_csr *ncsr = &vcpu->arch.nested.csr;
+
+ ns->virt = false;
+ kvm_riscv_vcpu_nested_swtlb_reset(vcpu);
+ memset(ncsr, 0, sizeof(*ncsr));
+}
+
+int kvm_riscv_vcpu_nested_init(struct kvm_vcpu *vcpu)
+{
+ return kvm_riscv_vcpu_nested_swtlb_init(vcpu);
+}
+
+void kvm_riscv_vcpu_nested_deinit(struct kvm_vcpu *vcpu)
+{
+ kvm_riscv_vcpu_nested_swtlb_deinit(vcpu);
+}
+
+void kvm_riscv_nested_init(void)
+{
+ /*
+ * Nested virtualization uses the hvictl CSR, hence it is only
+ * available when AIA is available.
+ */
+ if (!kvm_riscv_aia_available())
+ return;
+
+ /* Check state of module parameter */
+ if (!enable_nested_virt)
+ return;
+
+ /* Enable KVM nested virtualization support */
+ static_branch_enable(&kvm_riscv_nested_available);
+}
diff --git a/arch/riscv/kvm/vcpu_nested_swtlb.c b/arch/riscv/kvm/vcpu_nested_swtlb.c
new file mode 100644
index 000000000000..1d9faf50a61f
--- /dev/null
+++ b/arch/riscv/kvm/vcpu_nested_swtlb.c
@@ -0,0 +1,70 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2026 Qualcomm Technologies, Inc.
+ */
+
+#include <linux/kvm_host.h>
+
+int kvm_riscv_vcpu_nested_swtlb_xlate(struct kvm_vcpu *vcpu,
+ const struct kvm_cpu_trap *trap,
+ struct kvm_gstage_mapping *out_map,
+ struct kvm_cpu_trap *out_trap)
+{
+ /* TODO: */
+ return 0;
+}
+
+void kvm_riscv_vcpu_nested_swtlb_process(struct kvm_vcpu *vcpu)
+{
+ struct kvm_vcpu_nested_swtlb *nst = &vcpu->arch.nested.swtlb;
+
+ WARN_ON(!nst->request.pending);
+
+ /* TODO: */
+
+ nst->request.pending = false;
+}
+
+void kvm_riscv_vcpu_nested_swtlb_request(struct kvm_vcpu *vcpu,
+ const struct kvm_gstage_mapping *guest_map,
+ const struct kvm_gstage_mapping *host_map)
+{
+ struct kvm_vcpu_nested_swtlb *nst = &vcpu->arch.nested.swtlb;
+
+ WARN_ON(nst->request.pending);
+
+ nst->request.pending = true;
+ memcpy(&nst->request.guest, guest_map, sizeof(*guest_map));
+ memcpy(&nst->request.host, host_map, sizeof(*host_map));
+
+ kvm_make_request(KVM_REQ_NESTED_SWTLB, vcpu);
+}
+
+void kvm_riscv_vcpu_nested_swtlb_reset(struct kvm_vcpu *vcpu)
+{
+ struct kvm_vcpu_nested_swtlb *nst = &vcpu->arch.nested.swtlb;
+
+ memset(nst, 0, sizeof(*nst));
+}
+
+int kvm_riscv_vcpu_nested_swtlb_init(struct kvm_vcpu *vcpu)
+{
+ struct kvm_vcpu_nested_swtlb *nst = &vcpu->arch.nested.swtlb;
+ struct page *pgd_page;
+
+ pgd_page = alloc_pages(GFP_KERNEL | __GFP_ZERO,
+ get_order(kvm_riscv_gstage_pgd_size));
+ if (!pgd_page)
+ return -ENOMEM;
+ nst->shadow_pgd = page_to_virt(pgd_page);
+ nst->shadow_pgd_phys = page_to_phys(pgd_page);
+
+ return 0;
+}
+
+void kvm_riscv_vcpu_nested_swtlb_deinit(struct kvm_vcpu *vcpu)
+{
+ struct kvm_vcpu_nested_swtlb *nst = &vcpu->arch.nested.swtlb;
+
+ free_pages((unsigned long)nst->shadow_pgd, get_order(kvm_riscv_gstage_pgd_size));
+}
--
2.43.0
^ permalink raw reply related [flat|nested] 38+ messages in thread
* [PATCH 11/27] RISC-V: KVM: Use half VMID space for nested guest
2026-01-20 7:59 [PATCH 00/27] Nested virtualization for KVM RISC-V Anup Patel
` (9 preceding siblings ...)
2026-01-20 7:59 ` [PATCH 10/27] RISC-V: KVM: Initial skeletal nested virtualization support Anup Patel
@ 2026-01-20 7:59 ` Anup Patel
2026-01-20 7:59 ` [PATCH 12/27] RISC-V: KVM: Extend kvm_riscv_mmu_update_hgatp() for nested virtualization Anup Patel
` (16 subsequent siblings)
27 siblings, 0 replies; 38+ messages in thread
From: Anup Patel @ 2026-01-20 7:59 UTC (permalink / raw)
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Shuah Khan,
Anup Patel, Andrew Jones, kvm-riscv, kvm, linux-riscv,
linux-kernel, linux-kselftest, Anup Patel
A single guest with nested virtualization needs two VMIDs: one for
the guest hypervisor (L1) and another for the nested guest (L2).
To support this, divide the VMID space into two equal parts when
nested virtualization is enabled.
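As a worked example (a minimal standalone sketch, not part of the
patch; the hardware VMID width is assumed), the split reserves the top
half of the VMID space for nested-guest VMIDs by setting the most
significant VMID bit, matching kvm_riscv_gstage_nested_vmid() and the
vmid_next wrap in the diff below:

#include <stdio.h>

#define BIT(n) (1UL << (n))

int main(void)
{
    unsigned long vmid_bits = 7;                      /* assumed HW VMID width */
    unsigned long l1_mask = BIT(vmid_bits - 1) - 1;   /* L1 VMIDs: 0..63 */
    unsigned long vmid = 5;                           /* example L1 VMID */
    unsigned long nested = vmid | BIT(vmid_bits - 1); /* paired L2 VMID: 69 */

    printf("L1 vmid = %lu, L2 vmid = %lu (L1 mask = %#lx)\n",
           vmid, nested, l1_mask);
    return 0;
}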
Signed-off-by: Anup Patel <anup.patel@oss.qualcomm.com>
---
arch/riscv/include/asm/kvm_vmid.h | 1 +
arch/riscv/kvm/main.c | 4 ++--
arch/riscv/kvm/tlb.c | 11 +++++++++--
arch/riscv/kvm/vmid.c | 33 ++++++++++++++++++++++++++++---
4 files changed, 42 insertions(+), 7 deletions(-)
diff --git a/arch/riscv/include/asm/kvm_vmid.h b/arch/riscv/include/asm/kvm_vmid.h
index db61b0525a8d..3048e12a639c 100644
--- a/arch/riscv/include/asm/kvm_vmid.h
+++ b/arch/riscv/include/asm/kvm_vmid.h
@@ -19,6 +19,7 @@ struct kvm_vmid {
void __init kvm_riscv_gstage_vmid_detect(void);
unsigned long kvm_riscv_gstage_vmid_bits(void);
+unsigned long kvm_riscv_gstage_nested_vmid(unsigned long vmid);
int kvm_riscv_gstage_vmid_init(struct kvm *kvm);
bool kvm_riscv_gstage_vmid_ver_changed(struct kvm_vmid *vmid);
void kvm_riscv_gstage_vmid_update(struct kvm_vcpu *vcpu);
diff --git a/arch/riscv/kvm/main.c b/arch/riscv/kvm/main.c
index 5b4bf972d242..28044eefda47 100644
--- a/arch/riscv/kvm/main.c
+++ b/arch/riscv/kvm/main.c
@@ -123,8 +123,6 @@ static int __init riscv_kvm_init(void)
return -ENODEV;
}
- kvm_riscv_gstage_vmid_detect();
-
rc = kvm_riscv_aia_init();
if (rc && rc != -ENODEV) {
kvm_riscv_nacl_exit();
@@ -133,6 +131,8 @@ static int __init riscv_kvm_init(void)
kvm_riscv_nested_init();
+ kvm_riscv_gstage_vmid_detect();
+
kvm_info("hypervisor extension available\n");
if (kvm_riscv_nacl_available()) {
diff --git a/arch/riscv/kvm/tlb.c b/arch/riscv/kvm/tlb.c
index ff1aeac4eb8e..a95aa5336560 100644
--- a/arch/riscv/kvm/tlb.c
+++ b/arch/riscv/kvm/tlb.c
@@ -160,7 +160,7 @@ void kvm_riscv_local_hfence_vvma_all(unsigned long vmid)
void kvm_riscv_local_tlb_sanitize(struct kvm_vcpu *vcpu)
{
- unsigned long vmid;
+ unsigned long vmid, nvmid;
if (!kvm_riscv_gstage_vmid_bits() ||
vcpu->arch.last_exit_cpu == vcpu->cpu)
@@ -180,12 +180,19 @@ void kvm_riscv_local_tlb_sanitize(struct kvm_vcpu *vcpu)
vmid = READ_ONCE(vcpu->kvm->arch.vmid.vmid);
kvm_riscv_local_hfence_gvma_vmid_all(vmid);
+ nvmid = kvm_riscv_gstage_nested_vmid(vmid);
+ if (vmid != nvmid)
+ kvm_riscv_local_hfence_gvma_vmid_all(nvmid);
+
/*
* Flush VS-stage TLB entries for implementations where the VS-stage
* TLB does not cache guest physical address and VMID.
*/
- if (static_branch_unlikely(&kvm_riscv_vsstage_tlb_no_gpa))
+ if (static_branch_unlikely(&kvm_riscv_vsstage_tlb_no_gpa)) {
kvm_riscv_local_hfence_vvma_all(vmid);
+ if (vmid != nvmid)
+ kvm_riscv_local_hfence_vvma_all(nvmid);
+ }
}
void kvm_riscv_fence_i_process(struct kvm_vcpu *vcpu)
diff --git a/arch/riscv/kvm/vmid.c b/arch/riscv/kvm/vmid.c
index cf34d448289d..2ddd95fe2d9c 100644
--- a/arch/riscv/kvm/vmid.c
+++ b/arch/riscv/kvm/vmid.c
@@ -25,6 +25,8 @@ static DEFINE_SPINLOCK(vmid_lock);
void __init kvm_riscv_gstage_vmid_detect(void)
{
+ unsigned long min_vmids;
+
/* Figure-out number of VMID bits in HW */
csr_write(CSR_HGATP, (kvm_riscv_gstage_mode << HGATP_MODE_SHIFT) | HGATP_VMID);
vmid_bits = csr_read(CSR_HGATP);
@@ -35,8 +37,23 @@ void __init kvm_riscv_gstage_vmid_detect(void)
/* We polluted local TLB so flush all guest TLB */
kvm_riscv_local_hfence_gvma_all();
- /* We don't use VMID bits if they are not sufficient */
- if ((1UL << vmid_bits) < num_possible_cpus())
+ /*
+ * A single guest with nested virtualization needs two
+ * VMIDs: one for the guest hypervisor (L1) and another
+ * for the nested guest (L2).
+ *
+ * Potentially, we can have a separate guest running on
+ * each host CPU so the number of VMIDs should not be:
+ *
+ * 1. less than the number of host CPUs when
+ * nested virtualization is disabled
+ * 2. less than twice the number of host CPUs when
+ * nested virtualization is enabled
+ */
+ min_vmids = num_possible_cpus();
+ if (kvm_riscv_nested_available())
+ min_vmids = min_vmids * 2;
+ if (BIT(vmid_bits) < min_vmids)
vmid_bits = 0;
}
@@ -45,6 +62,13 @@ unsigned long kvm_riscv_gstage_vmid_bits(void)
return vmid_bits;
}
+unsigned long kvm_riscv_gstage_nested_vmid(unsigned long vmid)
+{
+ if (kvm_riscv_nested_available())
+ return vmid | BIT(vmid_bits - 1);
+ return vmid;
+}
+
int kvm_riscv_gstage_vmid_init(struct kvm *kvm)
{
/* Mark the initial VMID and VMID version invalid */
@@ -112,7 +136,10 @@ void kvm_riscv_gstage_vmid_update(struct kvm_vcpu *vcpu)
vmid->vmid = vmid_next;
vmid_next++;
- vmid_next &= (1 << vmid_bits) - 1;
+ if (kvm_riscv_nested_available())
+ vmid_next &= BIT(vmid_bits - 1) - 1;
+ else
+ vmid_next &= BIT(vmid_bits) - 1;
WRITE_ONCE(vmid->vmid_version, READ_ONCE(vmid_version));
--
2.43.0
^ permalink raw reply related [flat|nested] 38+ messages in thread
* [PATCH 12/27] RISC-V: KVM: Extend kvm_riscv_mmu_update_hgatp() for nested virtualization
2026-01-20 7:59 [PATCH 00/27] Nested virtualization for KVM RISC-V Anup Patel
` (10 preceding siblings ...)
2026-01-20 7:59 ` [PATCH 11/27] RISC-V: KVM: Use half VMID space for nested guest Anup Patel
@ 2026-01-20 7:59 ` Anup Patel
2026-01-20 7:59 ` [PATCH 13/27] RISC-V: KVM: Extend kvm_riscv_vcpu_config_load() " Anup Patel
` (15 subsequent siblings)
27 siblings, 0 replies; 38+ messages in thread
From: Anup Patel @ 2026-01-20 7:59 UTC (permalink / raw)
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Shuah Khan,
Anup Patel, Andrew Jones, kvm-riscv, kvm, linux-riscv,
linux-kernel, linux-kselftest, Anup Patel
The kvm_riscv_mmu_update_hgatp() function will also be used when
switching between guest HS-mode and guest VS/VU-mode, so extend it
accordingly.
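For reference, the new kvm_riscv_gstage_update_hgatp() composes hgatp
per the privileged-spec layout; below is a standalone sketch with
assumed example values, using RV64 field positions from the spec (not
the kernel code itself):

#include <stdio.h>

/* RV64 hgatp field positions per the RISC-V privileged spec */
#define HGATP_MODE_SHIFT  60
#define HGATP_VMID_SHIFT  44
#define PAGE_SHIFT        12
#define HGATP_MODE_SV39X4 8UL

int main(void)
{
    unsigned long pgd_phys = 0x80200000UL; /* example PGD physical address */
    unsigned long vmid = 5;                /* example VMID */
    unsigned long hgatp = (HGATP_MODE_SV39X4 << HGATP_MODE_SHIFT) |
                          (vmid << HGATP_VMID_SHIFT) |
                          (pgd_phys >> PAGE_SHIFT);

    printf("hgatp = %#lx\n", hgatp);
    return 0;
}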
Signed-off-by: Anup Patel <anup.patel@oss.qualcomm.com>
---
arch/riscv/include/asm/kvm_gstage.h | 2 ++
arch/riscv/include/asm/kvm_mmu.h | 2 +-
arch/riscv/kvm/gstage.c | 14 ++++++++++++++
arch/riscv/kvm/mmu.c | 18 ++++++++----------
arch/riscv/kvm/vcpu.c | 4 ++--
5 files changed, 27 insertions(+), 13 deletions(-)
diff --git a/arch/riscv/include/asm/kvm_gstage.h b/arch/riscv/include/asm/kvm_gstage.h
index 595e2183173e..007a5fd7a526 100644
--- a/arch/riscv/include/asm/kvm_gstage.h
+++ b/arch/riscv/include/asm/kvm_gstage.h
@@ -67,6 +67,8 @@ void kvm_riscv_gstage_unmap_range(struct kvm_gstage *gstage,
void kvm_riscv_gstage_wp_range(struct kvm_gstage *gstage, gpa_t start, gpa_t end);
+void kvm_riscv_gstage_update_hgatp(phys_addr_t pgd_phys, unsigned long vmid);
+
void kvm_riscv_gstage_mode_detect(void);
#endif
diff --git a/arch/riscv/include/asm/kvm_mmu.h b/arch/riscv/include/asm/kvm_mmu.h
index 5439e76f0a96..cc5994ec2805 100644
--- a/arch/riscv/include/asm/kvm_mmu.h
+++ b/arch/riscv/include/asm/kvm_mmu.h
@@ -16,6 +16,6 @@ int kvm_riscv_mmu_map(struct kvm_vcpu *vcpu, struct kvm_memory_slot *memslot,
struct kvm_gstage_mapping *out_map);
int kvm_riscv_mmu_alloc_pgd(struct kvm *kvm);
void kvm_riscv_mmu_free_pgd(struct kvm *kvm);
-void kvm_riscv_mmu_update_hgatp(struct kvm_vcpu *vcpu);
+void kvm_riscv_mmu_update_hgatp(struct kvm_vcpu *vcpu, bool nested_virt);
#endif
diff --git a/arch/riscv/kvm/gstage.c b/arch/riscv/kvm/gstage.c
index b67d60d722c2..7834e1178b68 100644
--- a/arch/riscv/kvm/gstage.c
+++ b/arch/riscv/kvm/gstage.c
@@ -10,6 +10,7 @@
#include <linux/module.h>
#include <linux/pgtable.h>
#include <asm/kvm_gstage.h>
+#include <asm/kvm_nacl.h>
#ifdef CONFIG_64BIT
unsigned long kvm_riscv_gstage_mode __ro_after_init = HGATP_MODE_SV39X4;
@@ -313,6 +314,19 @@ void kvm_riscv_gstage_wp_range(struct kvm_gstage *gstage, gpa_t start, gpa_t end
}
}
+void kvm_riscv_gstage_update_hgatp(phys_addr_t pgd_phys, unsigned long vmid)
+{
+ unsigned long hgatp = kvm_riscv_gstage_mode << HGATP_MODE_SHIFT;
+
+ hgatp |= (vmid << HGATP_VMID_SHIFT) & HGATP_VMID;
+ hgatp |= (pgd_phys >> PAGE_SHIFT) & HGATP_PPN;
+
+ ncsr_write(CSR_HGATP, hgatp);
+
+ if (!kvm_riscv_gstage_vmid_bits())
+ kvm_riscv_local_hfence_gvma_all();
+}
+
void __init kvm_riscv_gstage_mode_detect(void)
{
#ifdef CONFIG_64BIT
diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index 0b75eb2a1820..250606f5aa41 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -14,7 +14,6 @@
#include <linux/kvm_host.h>
#include <linux/sched/signal.h>
#include <asm/kvm_mmu.h>
-#include <asm/kvm_nacl.h>
static void mmu_wp_memory_region(struct kvm *kvm, int slot)
{
@@ -597,16 +596,15 @@ void kvm_riscv_mmu_free_pgd(struct kvm *kvm)
free_pages((unsigned long)pgd, get_order(kvm_riscv_gstage_pgd_size));
}
-void kvm_riscv_mmu_update_hgatp(struct kvm_vcpu *vcpu)
+void kvm_riscv_mmu_update_hgatp(struct kvm_vcpu *vcpu, bool nested_virt)
{
- unsigned long hgatp = kvm_riscv_gstage_mode << HGATP_MODE_SHIFT;
+ struct kvm_vcpu_nested_swtlb *nst = &vcpu->arch.nested.swtlb;
struct kvm_arch *k = &vcpu->kvm->arch;
+ unsigned long vmid = READ_ONCE(k->vmid.vmid);
- hgatp |= (READ_ONCE(k->vmid.vmid) << HGATP_VMID_SHIFT) & HGATP_VMID;
- hgatp |= (k->pgd_phys >> PAGE_SHIFT) & HGATP_PPN;
-
- ncsr_write(CSR_HGATP, hgatp);
-
- if (!kvm_riscv_gstage_vmid_bits())
- kvm_riscv_local_hfence_gvma_all();
+ if (nested_virt)
+ kvm_riscv_gstage_update_hgatp(nst->shadow_pgd_phys,
+ kvm_riscv_gstage_nested_vmid(vmid));
+ else
+ kvm_riscv_gstage_update_hgatp(k->pgd_phys, vmid);
}
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 859c8e71df65..178a4409d4e9 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -585,7 +585,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
csr_write(CSR_VSATP, csr->vsatp);
}
- kvm_riscv_mmu_update_hgatp(vcpu);
+ kvm_riscv_mmu_update_hgatp(vcpu, kvm_riscv_vcpu_nested_virt(vcpu));
kvm_riscv_vcpu_timer_restore(vcpu);
@@ -677,7 +677,7 @@ static int kvm_riscv_check_vcpu_requests(struct kvm_vcpu *vcpu)
kvm_riscv_reset_vcpu(vcpu, true);
if (kvm_check_request(KVM_REQ_UPDATE_HGATP, vcpu))
- kvm_riscv_mmu_update_hgatp(vcpu);
+ kvm_riscv_mmu_update_hgatp(vcpu, kvm_riscv_vcpu_nested_virt(vcpu));
if (kvm_check_request(KVM_REQ_FENCE_I, vcpu))
kvm_riscv_fence_i_process(vcpu);
--
2.43.0
^ permalink raw reply related [flat|nested] 38+ messages in thread
* [PATCH 13/27] RISC-V: KVM: Extend kvm_riscv_vcpu_config_load() for nested virtualization
2026-01-20 7:59 [PATCH 00/27] Nested virtualization for KVM RISC-V Anup Patel
` (11 preceding siblings ...)
2026-01-20 7:59 ` [PATCH 12/27] RISC-V: KVM: Extend kvm_riscv_mmu_update_hgatp() for nested virtualization Anup Patel
@ 2026-01-20 7:59 ` Anup Patel
2026-01-20 8:00 ` [PATCH 14/27] RISC-V: KVM: Extend kvm_riscv_vcpu_update_timedelta() for nested virt Anup Patel
` (14 subsequent siblings)
27 siblings, 0 replies; 38+ messages in thread
From: Anup Patel @ 2026-01-20 7:59 UTC (permalink / raw)
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Shuah Khan,
Anup Patel, Andrew Jones, kvm-riscv, kvm, linux-riscv,
linux-kernel, linux-kselftest, Anup Patel
The kvm_riscv_vcpu_config_load() function will also be used when
switching between guest HS-mode and guest VS/VU-mode, so extend it
accordingly.
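One observation (explanatory note, not from the patch text): the diff
below also sets hvictl.VTI while nested virtualization is on; per the
RISC-V AIA spec, VTI=1 causes VS-mode accesses to the interrupt CSRs
(sip/sie and friends) to trap, which lets KVM fully emulate the guest
hypervisor's interrupt state.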
Signed-off-by: Anup Patel <anup.patel@oss.qualcomm.com>
---
arch/riscv/include/asm/kvm_vcpu_config.h | 2 +-
arch/riscv/kvm/vcpu.c | 2 +-
arch/riscv/kvm/vcpu_config.c | 55 ++++++++++++++++++------
3 files changed, 44 insertions(+), 15 deletions(-)
diff --git a/arch/riscv/include/asm/kvm_vcpu_config.h b/arch/riscv/include/asm/kvm_vcpu_config.h
index fcc15a0296b3..be7bffb6a428 100644
--- a/arch/riscv/include/asm/kvm_vcpu_config.h
+++ b/arch/riscv/include/asm/kvm_vcpu_config.h
@@ -20,6 +20,6 @@ struct kvm_vcpu_config {
void kvm_riscv_vcpu_config_init(struct kvm_vcpu *vcpu);
void kvm_riscv_vcpu_config_guest_debug(struct kvm_vcpu *vcpu);
void kvm_riscv_vcpu_config_ran_once(struct kvm_vcpu *vcpu);
-void kvm_riscv_vcpu_config_load(struct kvm_vcpu *vcpu);
+void kvm_riscv_vcpu_config_load(struct kvm_vcpu *vcpu, bool nested_virt);
#endif
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 178a4409d4e9..077637aff9a2 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -560,7 +560,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
* the read/write behaviour of certain CSRs change
* based on VCPU config CSRs.
*/
- kvm_riscv_vcpu_config_load(vcpu);
+ kvm_riscv_vcpu_config_load(vcpu, kvm_riscv_vcpu_nested_virt(vcpu));
if (kvm_riscv_nacl_sync_csr_available()) {
nsh = nacl_shmem();
diff --git a/arch/riscv/kvm/vcpu_config.c b/arch/riscv/kvm/vcpu_config.c
index eb7374402b07..6c49bd6f83c5 100644
--- a/arch/riscv/kvm/vcpu_config.c
+++ b/arch/riscv/kvm/vcpu_config.c
@@ -69,33 +69,62 @@ void kvm_riscv_vcpu_config_ran_once(struct kvm_vcpu *vcpu)
cfg->hedeleg &= ~BIT(EXC_BREAKPOINT);
}
-void kvm_riscv_vcpu_config_load(struct kvm_vcpu *vcpu)
+void kvm_riscv_vcpu_config_load(struct kvm_vcpu *vcpu, bool nested_virt)
{
+ struct kvm_vcpu_nested_csr *nsc = &vcpu->arch.nested.csr;
struct kvm_vcpu_config *cfg = &vcpu->arch.cfg;
+ unsigned long hedeleg, hideleg, tmp;
+ u64 henvcfg, hstateen0;
void *nsh;
+ if (nested_virt) {
+ hedeleg = nsc->hedeleg;
+ hideleg = 0;
+ henvcfg = 0;
+ hstateen0 = 0;
+ } else {
+ hedeleg = cfg->hedeleg;
+ hideleg = cfg->hideleg;
+ henvcfg = cfg->henvcfg;
+ hstateen0 = cfg->hstateen0;
+ }
+
if (kvm_riscv_nacl_sync_csr_available()) {
nsh = nacl_shmem();
- nacl_csr_write(nsh, CSR_HEDELEG, cfg->hedeleg);
- nacl_csr_write(nsh, CSR_HIDELEG, cfg->hideleg);
- nacl_csr_write(nsh, CSR_HENVCFG, cfg->henvcfg);
+ nacl_csr_write(nsh, CSR_HEDELEG, hedeleg);
+ nacl_csr_write(nsh, CSR_HIDELEG, hideleg);
+ nacl_csr_write(nsh, CSR_HENVCFG, henvcfg);
if (IS_ENABLED(CONFIG_32BIT))
- nacl_csr_write(nsh, CSR_HENVCFGH, cfg->henvcfg >> 32);
+ nacl_csr_write(nsh, CSR_HENVCFGH, henvcfg >> 32);
if (riscv_has_extension_unlikely(RISCV_ISA_EXT_SMSTATEEN)) {
- nacl_csr_write(nsh, CSR_HSTATEEN0, cfg->hstateen0);
+ nacl_csr_write(nsh, CSR_HSTATEEN0, hstateen0);
if (IS_ENABLED(CONFIG_32BIT))
- nacl_csr_write(nsh, CSR_HSTATEEN0H, cfg->hstateen0 >> 32);
+ nacl_csr_write(nsh, CSR_HSTATEEN0H, hstateen0 >> 32);
+ }
+ if (kvm_riscv_aia_available()) {
+ tmp = nacl_csr_read(nsh, CSR_HVICTL);
+ if (nested_virt)
+ tmp |= HVICTL_VTI;
+ else
+ tmp &= ~HVICTL_VTI;
+ nacl_csr_write(nsh, CSR_HVICTL, tmp);
}
} else {
- csr_write(CSR_HEDELEG, cfg->hedeleg);
- csr_write(CSR_HIDELEG, cfg->hideleg);
- csr_write(CSR_HENVCFG, cfg->henvcfg);
+ csr_write(CSR_HEDELEG, hedeleg);
+ csr_write(CSR_HIDELEG, hideleg);
+ csr_write(CSR_HENVCFG, henvcfg);
if (IS_ENABLED(CONFIG_32BIT))
- csr_write(CSR_HENVCFGH, cfg->henvcfg >> 32);
+ csr_write(CSR_HENVCFGH, henvcfg >> 32);
if (riscv_has_extension_unlikely(RISCV_ISA_EXT_SMSTATEEN)) {
- csr_write(CSR_HSTATEEN0, cfg->hstateen0);
+ csr_write(CSR_HSTATEEN0, hstateen0);
if (IS_ENABLED(CONFIG_32BIT))
- csr_write(CSR_HSTATEEN0H, cfg->hstateen0 >> 32);
+ csr_write(CSR_HSTATEEN0H, hstateen0 >> 32);
+ }
+ if (kvm_riscv_aia_available()) {
+ if (nested_virt)
+ csr_set(CSR_HVICTL, HVICTL_VTI);
+ else
+ csr_clear(CSR_HVICTL, HVICTL_VTI);
}
}
}
--
2.43.0
^ permalink raw reply related [flat|nested] 38+ messages in thread
* [PATCH 14/27] RISC-V: KVM: Extend kvm_riscv_vcpu_update_timedelta() for nested virt
2026-01-20 7:59 [PATCH 00/27] Nested virtualization for KVM RISC-V Anup Patel
` (12 preceding siblings ...)
2026-01-20 7:59 ` [PATCH 13/27] RISC-V: KVM: Extend kvm_riscv_vcpu_config_load() " Anup Patel
@ 2026-01-20 8:00 ` Anup Patel
2026-01-20 8:00 ` [PATCH 15/27] RISC-V: KVM: Extend trap redirection for nested virtualization Anup Patel
` (13 subsequent siblings)
27 siblings, 0 replies; 38+ messages in thread
From: Anup Patel @ 2026-01-20 8:00 UTC (permalink / raw)
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Shuah Khan,
Anup Patel, Andrew Jones, kvm-riscv, kvm, linux-riscv,
linux-kernel, linux-kselftest, Anup Patel
The kvm_riscv_vcpu_update_timedelta() function will also be used
when switching between guest HS-mode and guest VS/VU-mode, so
extend it accordingly.
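The net effect is additive: while the nested guest runs, the
htimedelta programmed into hardware is the host's delta for L1 plus
the delta that L1 programmed for L2. A minimal sketch with assumed
values (not the kernel code):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    uint64_t host_time_delta = 0x1000; /* gt->time_delta, assumed */
    uint64_t l1_htimedelta = 0x40;     /* nsc->htimedelta, assumed */
    uint64_t effective = l1_htimedelta + host_time_delta;

    printf("effective htimedelta = %#" PRIx64 "\n", effective);
    return 0;
}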
Signed-off-by: Anup Patel <anup.patel@oss.qualcomm.com>
---
arch/riscv/include/asm/kvm_vcpu_timer.h | 1 +
arch/riscv/kvm/vcpu_timer.c | 21 ++++++++++++++++-----
2 files changed, 17 insertions(+), 5 deletions(-)
diff --git a/arch/riscv/include/asm/kvm_vcpu_timer.h b/arch/riscv/include/asm/kvm_vcpu_timer.h
index 82f7260301da..f97cf1d7d760 100644
--- a/arch/riscv/include/asm/kvm_vcpu_timer.h
+++ b/arch/riscv/include/asm/kvm_vcpu_timer.h
@@ -43,6 +43,7 @@ int kvm_riscv_vcpu_set_reg_timer(struct kvm_vcpu *vcpu,
int kvm_riscv_vcpu_timer_init(struct kvm_vcpu *vcpu);
int kvm_riscv_vcpu_timer_deinit(struct kvm_vcpu *vcpu);
int kvm_riscv_vcpu_timer_reset(struct kvm_vcpu *vcpu);
+void kvm_riscv_vcpu_update_timedelta(struct kvm_vcpu *vcpu, bool nested_virt);
void kvm_riscv_vcpu_timer_restore(struct kvm_vcpu *vcpu);
void kvm_riscv_guest_timer_init(struct kvm *kvm);
void kvm_riscv_vcpu_timer_sync(struct kvm_vcpu *vcpu);
diff --git a/arch/riscv/kvm/vcpu_timer.c b/arch/riscv/kvm/vcpu_timer.c
index 9817ff802821..eda530228b05 100644
--- a/arch/riscv/kvm/vcpu_timer.c
+++ b/arch/riscv/kvm/vcpu_timer.c
@@ -287,15 +287,26 @@ int kvm_riscv_vcpu_timer_reset(struct kvm_vcpu *vcpu)
return kvm_riscv_vcpu_timer_cancel(&vcpu->arch.timer);
}
-static void kvm_riscv_vcpu_update_timedelta(struct kvm_vcpu *vcpu)
+void kvm_riscv_vcpu_update_timedelta(struct kvm_vcpu *vcpu, bool nested_virt)
{
+ struct kvm_vcpu_nested_csr *nsc = &vcpu->arch.nested.csr;
struct kvm_guest_timer *gt = &vcpu->kvm->arch.timer;
+ u64 ndelta = 0;
+ if (nested_virt) {
+ ndelta = nsc->htimedelta;
#if defined(CONFIG_32BIT)
- ncsr_write(CSR_HTIMEDELTA, (u32)(gt->time_delta));
- ncsr_write(CSR_HTIMEDELTAH, (u32)(gt->time_delta >> 32));
+ ndelta |= ((u64)nsc->htimedeltah) << 32;
+#endif
+ }
+
+ ndelta += gt->time_delta;
+
+#if defined(CONFIG_32BIT)
+ ncsr_write(CSR_HTIMEDELTA, (u32)ndelta);
+ ncsr_write(CSR_HTIMEDELTAH, (u32)(ndelta >> 32));
#else
- ncsr_write(CSR_HTIMEDELTA, gt->time_delta);
+ ncsr_write(CSR_HTIMEDELTA, ndelta);
#endif
}
@@ -303,7 +314,7 @@ void kvm_riscv_vcpu_timer_restore(struct kvm_vcpu *vcpu)
{
struct kvm_vcpu_timer *t = &vcpu->arch.timer;
- kvm_riscv_vcpu_update_timedelta(vcpu);
+ kvm_riscv_vcpu_update_timedelta(vcpu, kvm_riscv_vcpu_nested_virt(vcpu));
if (!t->sstc_enabled)
return;
--
2.43.0
^ permalink raw reply related [flat|nested] 38+ messages in thread
* [PATCH 15/27] RISC-V: KVM: Extend trap redirection for nested virtualization
2026-01-20 7:59 [PATCH 00/27] Nested virtualization for KVM RISC-V Anup Patel
` (13 preceding siblings ...)
2026-01-20 8:00 ` [PATCH 14/27] RISC-V: KVM: Extend kvm_riscv_vcpu_update_timedelta() for nested virt Anup Patel
@ 2026-01-20 8:00 ` Anup Patel
2026-01-20 8:00 ` [PATCH 16/27] RISC-V: KVM: Check and inject nested virtual interrupts Anup Patel
` (12 subsequent siblings)
27 siblings, 0 replies; 38+ messages in thread
From: Anup Patel @ 2026-01-20 8:00 UTC (permalink / raw)
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Shuah Khan,
Anup Patel, Andrew Jones, kvm-riscv, kvm, linux-riscv,
linux-kernel, linux-kselftest, Anup Patel
The L0/host hypervisor must always redirect traps to the L1/guest
hypervisor, so extend KVM RISC-V to perform the necessary nested
world-switch when redirecting traps.
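For reference, the S-mode trap-entry behaviour that
kvm_riscv_vcpu_trap_smode_redirect() emulates can be modelled as
follows (a minimal sketch per the privileged spec, not the kernel
code; sstatus bit positions SIE=1, SPIE=5, SPP=8):

#include <stdio.h>

#define SR_SIE  (1UL << 1)
#define SR_SPIE (1UL << 5)
#define SR_SPP  (1UL << 8)

static unsigned long trap_entry_sstatus(unsigned long s, int prev_priv_smode)
{
    s = (s & ~SR_SPP) | (prev_priv_smode ? SR_SPP : 0); /* SPP <= prev priv */
    s = (s & ~SR_SPIE) | ((s & SR_SIE) ? SR_SPIE : 0);  /* SPIE <= SIE */
    s &= ~SR_SIE;                                       /* SIE <= 0 */
    return s;
}

int main(void)
{
    /* Trap from S-mode with interrupts enabled: SPP=1, SPIE=1, SIE=0 */
    printf("sstatus = %#lx\n", trap_entry_sstatus(SR_SIE, 1));
    return 0;
}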
Signed-off-by: Anup Patel <anup.patel@oss.qualcomm.com>
---
arch/riscv/include/asm/kvm_host.h | 3 +
arch/riscv/include/asm/kvm_vcpu_nested.h | 12 ++
arch/riscv/kvm/vcpu_exit.c | 28 +++-
arch/riscv/kvm/vcpu_nested.c | 162 +++++++++++++++++++++++
4 files changed, 201 insertions(+), 4 deletions(-)
diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index 3b58953eb4eb..c510564a09a2 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -289,6 +289,9 @@ unsigned long kvm_riscv_vcpu_unpriv_read(struct kvm_vcpu *vcpu,
bool read_insn,
unsigned long guest_addr,
struct kvm_cpu_trap *trap);
+void kvm_riscv_vcpu_trap_smode_redirect(struct kvm_vcpu *vcpu,
+ struct kvm_cpu_trap *trap,
+ bool prev_priv);
void kvm_riscv_vcpu_trap_redirect(struct kvm_vcpu *vcpu,
struct kvm_cpu_trap *trap);
int kvm_riscv_vcpu_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
diff --git a/arch/riscv/include/asm/kvm_vcpu_nested.h b/arch/riscv/include/asm/kvm_vcpu_nested.h
index 4234c6e81bb6..6bfb67702610 100644
--- a/arch/riscv/include/asm/kvm_vcpu_nested.h
+++ b/arch/riscv/include/asm/kvm_vcpu_nested.h
@@ -75,6 +75,18 @@ void kvm_riscv_vcpu_nested_swtlb_reset(struct kvm_vcpu *vcpu);
int kvm_riscv_vcpu_nested_swtlb_init(struct kvm_vcpu *vcpu);
void kvm_riscv_vcpu_nested_swtlb_deinit(struct kvm_vcpu *vcpu);
+enum kvm_vcpu_nested_set_virt_event {
+ NESTED_SET_VIRT_EVENT_TRAP = 0,
+ NESTED_SET_VIRT_EVENT_SRET
+};
+
+void kvm_riscv_vcpu_nested_set_virt(struct kvm_vcpu *vcpu,
+ enum kvm_vcpu_nested_set_virt_event event,
+ bool virt, bool spvp, bool gva);
+void kvm_riscv_vcpu_nested_trap_redirect(struct kvm_vcpu *vcpu,
+ struct kvm_cpu_trap *trap,
+ bool prev_priv);
+
void kvm_riscv_vcpu_nested_reset(struct kvm_vcpu *vcpu);
int kvm_riscv_vcpu_nested_init(struct kvm_vcpu *vcpu);
void kvm_riscv_vcpu_nested_deinit(struct kvm_vcpu *vcpu);
diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c
index 4f63548e582f..aeec4c4eee06 100644
--- a/arch/riscv/kvm/vcpu_exit.c
+++ b/arch/riscv/kvm/vcpu_exit.c
@@ -149,19 +149,21 @@ unsigned long kvm_riscv_vcpu_unpriv_read(struct kvm_vcpu *vcpu,
}
/**
- * kvm_riscv_vcpu_trap_redirect -- Redirect trap to Guest
+ * kvm_riscv_vcpu_trap_smode_redirect -- Redirect S-mode trap to Guest
*
* @vcpu: The VCPU pointer
* @trap: Trap details
+ * @prev_priv: Previous privilege mode (true: S-mode, false: U-mode)
*/
-void kvm_riscv_vcpu_trap_redirect(struct kvm_vcpu *vcpu,
- struct kvm_cpu_trap *trap)
+void kvm_riscv_vcpu_trap_smode_redirect(struct kvm_vcpu *vcpu,
+ struct kvm_cpu_trap *trap,
+ bool prev_priv)
{
unsigned long vsstatus = ncsr_read(CSR_VSSTATUS);
/* Change Guest SSTATUS.SPP bit */
vsstatus &= ~SR_SPP;
- if (vcpu->arch.guest_context.sstatus & SR_SPP)
+ if (prev_priv)
vsstatus |= SR_SPP;
/* Change Guest SSTATUS.SPIE bit */
@@ -187,6 +189,24 @@ void kvm_riscv_vcpu_trap_redirect(struct kvm_vcpu *vcpu,
vcpu->arch.guest_context.sstatus |= SR_SPP;
}
+/**
+ * kvm_riscv_vcpu_trap_redirect -- Redirect HS-mode trap to Guest
+ *
+ * @vcpu: The VCPU pointer
+ * @trap: Trap details
+ */
+void kvm_riscv_vcpu_trap_redirect(struct kvm_vcpu *vcpu,
+ struct kvm_cpu_trap *trap)
+{
+ bool prev_priv = (vcpu->arch.guest_context.sstatus & SR_SPP) ? true : false;
+
+ /* Update Guest nested state */
+ kvm_riscv_vcpu_nested_trap_redirect(vcpu, trap, prev_priv);
+
+ /* Update Guest supervisor state */
+ kvm_riscv_vcpu_trap_smode_redirect(vcpu, trap, prev_priv);
+}
+
static inline int vcpu_redirect(struct kvm_vcpu *vcpu, struct kvm_cpu_trap *trap)
{
int ret = -EFAULT;
diff --git a/arch/riscv/kvm/vcpu_nested.c b/arch/riscv/kvm/vcpu_nested.c
index 3c30d35b3b39..214206fc28bb 100644
--- a/arch/riscv/kvm/vcpu_nested.c
+++ b/arch/riscv/kvm/vcpu_nested.c
@@ -3,13 +3,175 @@
* Copyright (c) 2026 Qualcomm Technologies, Inc.
*/
+#include <linux/smp.h>
#include <linux/kvm_host.h>
+#include <asm/kvm_nacl.h>
+#include <asm/kvm_mmu.h>
DEFINE_STATIC_KEY_FALSE(kvm_riscv_nested_available);
static bool __read_mostly enable_nested_virt;
module_param(enable_nested_virt, bool, 0644);
+void kvm_riscv_vcpu_nested_set_virt(struct kvm_vcpu *vcpu,
+ enum kvm_vcpu_nested_set_virt_event event,
+ bool virt, bool spvp, bool gva)
+{
+ struct kvm_vcpu_nested *ns = &vcpu->arch.nested;
+ struct kvm_vcpu_nested_csr *nsc = &ns->csr;
+ unsigned long tmp, sr_fs_vs_mask = 0;
+ int cpu;
+
+ /* If H-extension is not available for VCPU then do nothing */
+ if (!riscv_isa_extension_available(vcpu->arch.isa, h))
+ return;
+
+ /* Grab the CPU to ensure we remain on same CPU */
+ cpu = get_cpu();
+
+ /* Skip hardware CSR update if no change in virt state */
+ if (virt == ns->virt)
+ goto skip_csr_update;
+
+ /* Update config CSRs (aka hedeleg, hideleg, henvcfg, and hstateeX) */
+ kvm_riscv_vcpu_config_load(vcpu, virt);
+
+ /* Update time delta */
+ kvm_riscv_vcpu_update_timedelta(vcpu, virt);
+
+ /* Update G-stage page table */
+ kvm_riscv_mmu_update_hgatp(vcpu, virt);
+
+ /* Swap hardware vs<xyz> CSRs except vsie and vsstatus */
+ nsc->vstvec = ncsr_swap(CSR_VSTVEC, nsc->vstvec);
+ nsc->vsscratch = ncsr_swap(CSR_VSSCRATCH, nsc->vsscratch);
+ nsc->vsepc = ncsr_swap(CSR_VSEPC, nsc->vsepc);
+ nsc->vscause = ncsr_swap(CSR_VSCAUSE, nsc->vscause);
+ nsc->vstval = ncsr_swap(CSR_VSTVAL, nsc->vstval);
+ nsc->vsatp = ncsr_swap(CSR_VSATP, nsc->vsatp);
+
+ /* Update vsstatus CSR */
+ if (riscv_isa_extension_available(vcpu->arch.isa, f) ||
+ riscv_isa_extension_available(vcpu->arch.isa, d))
+ sr_fs_vs_mask |= SR_FS;
+ if (riscv_isa_extension_available(vcpu->arch.isa, v))
+ sr_fs_vs_mask |= SR_VS;
+ if (virt) {
+ /*
+ * Update vsstatus in the following manner:
+ * 1) Swap hardware vsstatus (i.e. virtual-HS mode sstatus) with
+ * vsstatus in nested virtualization context (i.e. virtual-VS
+ * mode sstatus)
+ * 2) Swap host sstatus.[FS|VS] (i.e. HS mode sstatus.[FS|VS])
+ * with the vsstatus.[FS|VS] saved in nested virtualization
+ * context (i.e. virtual-HS mode sstatus.[FS|VS])
+ */
+ nsc->vsstatus = ncsr_swap(CSR_VSSTATUS, nsc->vsstatus);
+ tmp = vcpu->arch.guest_context.sstatus & sr_fs_vs_mask;
+ vcpu->arch.guest_context.sstatus &= ~sr_fs_vs_mask;
+ vcpu->arch.guest_context.sstatus |= (nsc->vsstatus & sr_fs_vs_mask);
+ nsc->vsstatus &= ~sr_fs_vs_mask;
+ nsc->vsstatus |= tmp;
+ } else {
+ /*
+ * Update vsstatus in the following manner:
+ * 1) Swap host sstatus.[FS|VS] (i.e. virtual-HS mode sstatus.[FS|VS])
+ * with vsstatus.[FS|VS] saved in the nested virtualization
+ * context (i.e. HS mode sstatus.[FS|VS])
+ * 2) Swap hardware vsstatus (i.e. virtual-VS mode sstatus) with
+ * vsstatus in nested virtualization context (i.e. virtual-HS
+ * mode sstatus)
+ */
+ tmp = vcpu->arch.guest_context.sstatus & sr_fs_vs_mask;
+ vcpu->arch.guest_context.sstatus &= ~sr_fs_vs_mask;
+ vcpu->arch.guest_context.sstatus |= (nsc->vsstatus & sr_fs_vs_mask);
+ nsc->vsstatus &= ~sr_fs_vs_mask;
+ nsc->vsstatus |= tmp;
+ nsc->vsstatus = ncsr_swap(CSR_VSSTATUS, nsc->vsstatus);
+ }
+
+skip_csr_update:
+ if (event != NESTED_SET_VIRT_EVENT_SRET) {
+ /* Update guest hstatus.SPV bit */
+ nsc->hstatus &= ~HSTATUS_SPV;
+ nsc->hstatus |= (ns->virt) ? HSTATUS_SPV : 0;
+
+ /* Update guest hstatus.SPVP bit */
+ if (ns->virt) {
+ nsc->hstatus &= ~HSTATUS_SPVP;
+ if (spvp)
+ nsc->hstatus |= HSTATUS_SPVP;
+ }
+
+ /* Update guest hstatus.GVA bit */
+ if (event == NESTED_SET_VIRT_EVENT_TRAP) {
+ nsc->hstatus &= ~HSTATUS_GVA;
+ nsc->hstatus |= (gva) ? HSTATUS_GVA : 0;
+ }
+ }
+
+ /* Update host SRET trapping */
+ vcpu->arch.guest_context.hstatus &= ~HSTATUS_VTSR;
+ if (virt) {
+ if (nsc->hstatus & HSTATUS_VTSR)
+ vcpu->arch.guest_context.hstatus |= HSTATUS_VTSR;
+ } else {
+ if (nsc->hstatus & HSTATUS_SPV)
+ vcpu->arch.guest_context.hstatus |= HSTATUS_VTSR;
+ }
+
+ /* Update host VM trapping */
+ vcpu->arch.guest_context.hstatus &= ~HSTATUS_VTVM;
+ if (virt && (nsc->hstatus & HSTATUS_VTVM))
+ vcpu->arch.guest_context.hstatus |= HSTATUS_VTVM;
+
+ /* Update virt flag */
+ ns->virt = virt;
+
+ /* Release CPU */
+ put_cpu();
+}
+
+void kvm_riscv_vcpu_nested_trap_redirect(struct kvm_vcpu *vcpu,
+ struct kvm_cpu_trap *trap,
+ bool prev_priv)
+{
+ bool gva;
+
+ /* Do nothing if H-extension is not available for VCPU */
+ if (!riscv_isa_extension_available(vcpu->arch.isa, h))
+ return;
+
+ /* Determine GVA bit state */
+ gva = false;
+ switch (trap->scause) {
+ case EXC_INST_MISALIGNED:
+ case EXC_INST_ACCESS:
+ case EXC_LOAD_MISALIGNED:
+ case EXC_LOAD_ACCESS:
+ case EXC_STORE_MISALIGNED:
+ case EXC_STORE_ACCESS:
+ case EXC_INST_PAGE_FAULT:
+ case EXC_LOAD_PAGE_FAULT:
+ case EXC_STORE_PAGE_FAULT:
+ case EXC_INST_GUEST_PAGE_FAULT:
+ case EXC_LOAD_GUEST_PAGE_FAULT:
+ case EXC_STORE_GUEST_PAGE_FAULT:
+ gva = true;
+ break;
+ default:
+ break;
+ }
+
+ /* Update Guest HTVAL and HTINST */
+ vcpu->arch.nested.csr.htval = trap->htval;
+ vcpu->arch.nested.csr.htinst = trap->htinst;
+
+ /* Turn-off nested virtualization for virtual-HS mode */
+ kvm_riscv_vcpu_nested_set_virt(vcpu, NESTED_SET_VIRT_EVENT_TRAP,
+ false, prev_priv, gva);
+}
+
void kvm_riscv_vcpu_nested_reset(struct kvm_vcpu *vcpu)
{
struct kvm_vcpu_nested *ns = &vcpu->arch.nested;
--
2.43.0
^ permalink raw reply related [flat|nested] 38+ messages in thread
* [PATCH 16/27] RISC-V: KVM: Check and inject nested virtual interrupts
2026-01-20 7:59 [PATCH 00/27] Nested virtualization for KVM RISC-V Anup Patel
` (14 preceding siblings ...)
2026-01-20 8:00 ` [PATCH 15/27] RISC-V: KVM: Extend trap redirection for nested virtualization Anup Patel
@ 2026-01-20 8:00 ` Anup Patel
2026-01-20 8:00 ` [PATCH 17/27] RISC-V: KVM: Extend kvm_riscv_isa_check_host() for nested virt Anup Patel
` (11 subsequent siblings)
27 siblings, 0 replies; 38+ messages in thread
From: Anup Patel @ 2026-01-20 8:00 UTC (permalink / raw)
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Shuah Khan,
Anup Patel, Andrew Jones, kvm-riscv, kvm, linux-riscv,
linux-kernel, linux-kselftest, Anup Patel
When entering the guest in virtual-VS/VU mode (aka the nested guest),
check for and inject a nested virtual interrupt right before guest
entry.
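The injected interrupt is picked with the usual fixed priority
(external, then timer, then software), mirroring the order in the diff
below; a standalone sketch of the selection, using interrupt numbers
from the privileged spec:

#include <stdio.h>

#define BIT(n) (1UL << (n))

/* Interrupt numbers per the RISC-V privileged spec */
enum { IRQ_S_SOFT = 1, IRQ_S_TIMER = 5, IRQ_S_EXT = 9 };
enum { IRQ_VS_SOFT = 2, IRQ_VS_TIMER = 6, IRQ_VS_EXT = 10 };

static int pick_vsirq(unsigned long pending)
{
    /* External > timer > software */
    if (pending & BIT(IRQ_VS_EXT))
        return IRQ_S_EXT;
    if (pending & BIT(IRQ_VS_TIMER))
        return IRQ_S_TIMER;
    if (pending & BIT(IRQ_VS_SOFT))
        return IRQ_S_SOFT;
    return 0;
}

int main(void)
{
    /* Timer and software both pending: timer (5) wins */
    printf("vsirq = %d\n", pick_vsirq(BIT(IRQ_VS_TIMER) | BIT(IRQ_VS_SOFT)));
    return 0;
}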
Signed-off-by: Anup Patel <anup.patel@oss.qualcomm.com>
---
arch/riscv/include/asm/kvm_vcpu_nested.h | 1 +
arch/riscv/kvm/vcpu.c | 3 ++
arch/riscv/kvm/vcpu_nested.c | 49 ++++++++++++++++++++++++
3 files changed, 53 insertions(+)
diff --git a/arch/riscv/include/asm/kvm_vcpu_nested.h b/arch/riscv/include/asm/kvm_vcpu_nested.h
index 6bfb67702610..6d9d252a378c 100644
--- a/arch/riscv/include/asm/kvm_vcpu_nested.h
+++ b/arch/riscv/include/asm/kvm_vcpu_nested.h
@@ -86,6 +86,7 @@ void kvm_riscv_vcpu_nested_set_virt(struct kvm_vcpu *vcpu,
void kvm_riscv_vcpu_nested_trap_redirect(struct kvm_vcpu *vcpu,
struct kvm_cpu_trap *trap,
bool prev_priv);
+void kvm_riscv_vcpu_nested_vsirq_process(struct kvm_vcpu *vcpu);
void kvm_riscv_vcpu_nested_reset(struct kvm_vcpu *vcpu);
int kvm_riscv_vcpu_nested_init(struct kvm_vcpu *vcpu);
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 077637aff9a2..f8c4344c2b1f 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -934,6 +934,9 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
*/
kvm_riscv_local_tlb_sanitize(vcpu);
+ /* Check and inject nested virtual interrupts */
+ kvm_riscv_vcpu_nested_vsirq_process(vcpu);
+
trace_kvm_entry(vcpu);
guest_timing_enter_irqoff();
diff --git a/arch/riscv/kvm/vcpu_nested.c b/arch/riscv/kvm/vcpu_nested.c
index 214206fc28bb..9b2b3369a232 100644
--- a/arch/riscv/kvm/vcpu_nested.c
+++ b/arch/riscv/kvm/vcpu_nested.c
@@ -172,6 +172,55 @@ void kvm_riscv_vcpu_nested_trap_redirect(struct kvm_vcpu *vcpu,
false, prev_priv, gva);
}
+void kvm_riscv_vcpu_nested_vsirq_process(struct kvm_vcpu *vcpu)
+{
+ struct kvm_vcpu_nested *ns = &vcpu->arch.nested;
+ struct kvm_vcpu_nested_csr *nsc = &ns->csr;
+ struct kvm_cpu_trap trap;
+ unsigned long irqs;
+ bool next_spp;
+ int vsirq;
+
+ /* Do nothing if nested virtualization is OFF */
+ if (!ns->virt)
+ return;
+
+ /* Determine the virtual-VS mode interrupt number */
+ vsirq = 0;
+ irqs = nsc->hvip;
+ irqs &= nsc->vsie << VSIP_TO_HVIP_SHIFT;
+ irqs &= nsc->hideleg;
+ if (irqs & BIT(IRQ_VS_EXT))
+ vsirq = IRQ_S_EXT;
+ else if (irqs & BIT(IRQ_VS_TIMER))
+ vsirq = IRQ_S_TIMER;
+ else if (irqs & BIT(IRQ_VS_SOFT))
+ vsirq = IRQ_S_SOFT;
+ if (vsirq <= 0)
+ return;
+
+ /*
+ * Determine whether we are resuming in virtual-VS mode
+ * or virtual-VU mode.
+ */
+ next_spp = !!(vcpu->arch.guest_context.sstatus & SR_SPP);
+
+ /*
+ * If we are going to virtual-VS mode and interrupts are
+ * disabled then do nothing.
+ */
+ if (next_spp && !(ncsr_read(CSR_VSSTATUS) & SR_SIE))
+ return;
+
+ /* Take virtual-VS mode interrupt */
+ trap.scause = CAUSE_IRQ_FLAG | vsirq;
+ trap.sepc = vcpu->arch.guest_context.sepc;
+ trap.stval = 0;
+ trap.htval = 0;
+ trap.htinst = 0;
+ kvm_riscv_vcpu_trap_smode_redirect(vcpu, &trap, next_spp);
+}
+
void kvm_riscv_vcpu_nested_reset(struct kvm_vcpu *vcpu)
{
struct kvm_vcpu_nested *ns = &vcpu->arch.nested;
--
2.43.0
^ permalink raw reply related [flat|nested] 38+ messages in thread
* [PATCH 17/27] RISC-V: KVM: Extend kvm_riscv_isa_check_host() for nested virt
2026-01-20 7:59 [PATCH 00/27] Nested virtualization for KVM RISC-V Anup Patel
` (15 preceding siblings ...)
2026-01-20 8:00 ` [PATCH 16/27] RISC-V: KVM: Check and inject nested virtual interrupts Anup Patel
@ 2026-01-20 8:00 ` Anup Patel
2026-01-20 8:00 ` [PATCH 18/27] RISC-V: KVM: Trap-n-emulate SRET for Guest HS-mode Anup Patel
` (10 subsequent siblings)
27 siblings, 0 replies; 38+ messages in thread
From: Anup Patel @ 2026-01-20 8:00 UTC (permalink / raw)
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Shuah Khan,
Anup Patel, Andrew Jones, kvm-riscv, kvm, linux-riscv,
linux-kernel, linux-kselftest, Anup Patel
Nested virtualization support for various ISA extensions will be
enabled gradually, so extend kvm_riscv_isa_check_host() such that
certain ISA extensions can be disabled when nested virtualization
is available.
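A minimal model of the gating this adds (illustrative only; the table
contents here are assumed for the example):

#include <stdio.h>
#include <stdbool.h>
#include <errno.h>

struct ext { const char *name; bool nested_ok; };

/* Example entries, mirroring the nested flag added in the diff */
static const struct ext exts[] = {
    { "Ssaia", false }, /* hidden while nested virt is enabled */
    { "Zba",   true  }, /* still allowed under nested virt */
};

static int check_ext(const struct ext *e, bool nested_available)
{
    if (nested_available && !e->nested_ok)
        return -ENOENT; /* extension hidden from guests */
    return 0;
}

int main(void)
{
    printf("Ssaia: %d, Zba: %d\n",
           check_ext(&exts[0], true), check_ext(&exts[1], true));
    return 0;
}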
Signed-off-by: Anup Patel <anup.patel@oss.qualcomm.com>
---
arch/riscv/kvm/isa.c | 178 ++++++++++++++++++++++---------------------
1 file changed, 93 insertions(+), 85 deletions(-)
diff --git a/arch/riscv/kvm/isa.c b/arch/riscv/kvm/isa.c
index 1566d01fc52e..e469c350f5bb 100644
--- a/arch/riscv/kvm/isa.c
+++ b/arch/riscv/kvm/isa.c
@@ -10,91 +10,96 @@
#include <asm/kvm_isa.h>
#include <asm/vector.h>
-#define KVM_ISA_EXT_ARR(ext) \
-[KVM_RISCV_ISA_EXT_##ext] = RISCV_ISA_EXT_##ext
+struct kvm_isa_ext {
+ unsigned long ext;
+ bool nested;
+};
+
+#define KVM_ISA_EXT_ARR(ext, nested) \
+[KVM_RISCV_ISA_EXT_##ext] = { RISCV_ISA_EXT_##ext, nested }
/* Mapping between KVM ISA Extension ID & guest ISA extension ID */
-static const unsigned long kvm_isa_ext_arr[] = {
+static const struct kvm_isa_ext kvm_isa_ext_arr[] = {
/* Single letter extensions (alphabetically sorted) */
- [KVM_RISCV_ISA_EXT_A] = RISCV_ISA_EXT_a,
- [KVM_RISCV_ISA_EXT_C] = RISCV_ISA_EXT_c,
- [KVM_RISCV_ISA_EXT_D] = RISCV_ISA_EXT_d,
- [KVM_RISCV_ISA_EXT_F] = RISCV_ISA_EXT_f,
- [KVM_RISCV_ISA_EXT_H] = RISCV_ISA_EXT_h,
- [KVM_RISCV_ISA_EXT_I] = RISCV_ISA_EXT_i,
- [KVM_RISCV_ISA_EXT_M] = RISCV_ISA_EXT_m,
- [KVM_RISCV_ISA_EXT_V] = RISCV_ISA_EXT_v,
+ [KVM_RISCV_ISA_EXT_A] = { RISCV_ISA_EXT_a, true },
+ [KVM_RISCV_ISA_EXT_C] = { RISCV_ISA_EXT_c, true },
+ [KVM_RISCV_ISA_EXT_D] = { RISCV_ISA_EXT_d, true },
+ [KVM_RISCV_ISA_EXT_F] = { RISCV_ISA_EXT_f, true },
+ [KVM_RISCV_ISA_EXT_H] = { RISCV_ISA_EXT_h, true },
+ [KVM_RISCV_ISA_EXT_I] = { RISCV_ISA_EXT_i, true },
+ [KVM_RISCV_ISA_EXT_M] = { RISCV_ISA_EXT_m, true },
+ [KVM_RISCV_ISA_EXT_V] = { RISCV_ISA_EXT_v, true },
/* Multi letter extensions (alphabetically sorted) */
- KVM_ISA_EXT_ARR(SMNPM),
- KVM_ISA_EXT_ARR(SMSTATEEN),
- KVM_ISA_EXT_ARR(SSAIA),
- KVM_ISA_EXT_ARR(SSCOFPMF),
- KVM_ISA_EXT_ARR(SSNPM),
- KVM_ISA_EXT_ARR(SSTC),
- KVM_ISA_EXT_ARR(SVADE),
- KVM_ISA_EXT_ARR(SVADU),
- KVM_ISA_EXT_ARR(SVINVAL),
- KVM_ISA_EXT_ARR(SVNAPOT),
- KVM_ISA_EXT_ARR(SVPBMT),
- KVM_ISA_EXT_ARR(SVVPTC),
- KVM_ISA_EXT_ARR(ZAAMO),
- KVM_ISA_EXT_ARR(ZABHA),
- KVM_ISA_EXT_ARR(ZACAS),
- KVM_ISA_EXT_ARR(ZALASR),
- KVM_ISA_EXT_ARR(ZALRSC),
- KVM_ISA_EXT_ARR(ZAWRS),
- KVM_ISA_EXT_ARR(ZBA),
- KVM_ISA_EXT_ARR(ZBB),
- KVM_ISA_EXT_ARR(ZBC),
- KVM_ISA_EXT_ARR(ZBKB),
- KVM_ISA_EXT_ARR(ZBKC),
- KVM_ISA_EXT_ARR(ZBKX),
- KVM_ISA_EXT_ARR(ZBS),
- KVM_ISA_EXT_ARR(ZCA),
- KVM_ISA_EXT_ARR(ZCB),
- KVM_ISA_EXT_ARR(ZCD),
- KVM_ISA_EXT_ARR(ZCF),
- KVM_ISA_EXT_ARR(ZCLSD),
- KVM_ISA_EXT_ARR(ZCMOP),
- KVM_ISA_EXT_ARR(ZFA),
- KVM_ISA_EXT_ARR(ZFBFMIN),
- KVM_ISA_EXT_ARR(ZFH),
- KVM_ISA_EXT_ARR(ZFHMIN),
- KVM_ISA_EXT_ARR(ZICBOM),
- KVM_ISA_EXT_ARR(ZICBOP),
- KVM_ISA_EXT_ARR(ZICBOZ),
- KVM_ISA_EXT_ARR(ZICCRSE),
- KVM_ISA_EXT_ARR(ZICNTR),
- KVM_ISA_EXT_ARR(ZICOND),
- KVM_ISA_EXT_ARR(ZICSR),
- KVM_ISA_EXT_ARR(ZIFENCEI),
- KVM_ISA_EXT_ARR(ZIHINTNTL),
- KVM_ISA_EXT_ARR(ZIHINTPAUSE),
- KVM_ISA_EXT_ARR(ZIHPM),
- KVM_ISA_EXT_ARR(ZILSD),
- KVM_ISA_EXT_ARR(ZIMOP),
- KVM_ISA_EXT_ARR(ZKND),
- KVM_ISA_EXT_ARR(ZKNE),
- KVM_ISA_EXT_ARR(ZKNH),
- KVM_ISA_EXT_ARR(ZKR),
- KVM_ISA_EXT_ARR(ZKSED),
- KVM_ISA_EXT_ARR(ZKSH),
- KVM_ISA_EXT_ARR(ZKT),
- KVM_ISA_EXT_ARR(ZTSO),
- KVM_ISA_EXT_ARR(ZVBB),
- KVM_ISA_EXT_ARR(ZVBC),
- KVM_ISA_EXT_ARR(ZVFBFMIN),
- KVM_ISA_EXT_ARR(ZVFBFWMA),
- KVM_ISA_EXT_ARR(ZVFH),
- KVM_ISA_EXT_ARR(ZVFHMIN),
- KVM_ISA_EXT_ARR(ZVKB),
- KVM_ISA_EXT_ARR(ZVKG),
- KVM_ISA_EXT_ARR(ZVKNED),
- KVM_ISA_EXT_ARR(ZVKNHA),
- KVM_ISA_EXT_ARR(ZVKNHB),
- KVM_ISA_EXT_ARR(ZVKSED),
- KVM_ISA_EXT_ARR(ZVKSH),
- KVM_ISA_EXT_ARR(ZVKT),
+ KVM_ISA_EXT_ARR(SMNPM, false),
+ KVM_ISA_EXT_ARR(SMSTATEEN, false),
+ KVM_ISA_EXT_ARR(SSAIA, false),
+ KVM_ISA_EXT_ARR(SSCOFPMF, false),
+ KVM_ISA_EXT_ARR(SSNPM, false),
+ KVM_ISA_EXT_ARR(SSTC, false),
+ KVM_ISA_EXT_ARR(SVADE, true),
+ KVM_ISA_EXT_ARR(SVADU, true),
+ KVM_ISA_EXT_ARR(SVINVAL, false),
+ KVM_ISA_EXT_ARR(SVNAPOT, false),
+ KVM_ISA_EXT_ARR(SVPBMT, false),
+ KVM_ISA_EXT_ARR(SVVPTC, true),
+ KVM_ISA_EXT_ARR(ZAAMO, true),
+ KVM_ISA_EXT_ARR(ZABHA, true),
+ KVM_ISA_EXT_ARR(ZACAS, true),
+ KVM_ISA_EXT_ARR(ZALASR, true),
+ KVM_ISA_EXT_ARR(ZALRSC, true),
+ KVM_ISA_EXT_ARR(ZAWRS, false),
+ KVM_ISA_EXT_ARR(ZBA, true),
+ KVM_ISA_EXT_ARR(ZBB, true),
+ KVM_ISA_EXT_ARR(ZBC, true),
+ KVM_ISA_EXT_ARR(ZBKB, true),
+ KVM_ISA_EXT_ARR(ZBKC, true),
+ KVM_ISA_EXT_ARR(ZBKX, true),
+ KVM_ISA_EXT_ARR(ZBS, true),
+ KVM_ISA_EXT_ARR(ZCA, true),
+ KVM_ISA_EXT_ARR(ZCB, true),
+ KVM_ISA_EXT_ARR(ZCD, true),
+ KVM_ISA_EXT_ARR(ZCF, true),
+ KVM_ISA_EXT_ARR(ZCLSD, true),
+ KVM_ISA_EXT_ARR(ZCMOP, true),
+ KVM_ISA_EXT_ARR(ZFA, true),
+ KVM_ISA_EXT_ARR(ZFBFMIN, true),
+ KVM_ISA_EXT_ARR(ZFH, true),
+ KVM_ISA_EXT_ARR(ZFHMIN, true),
+ KVM_ISA_EXT_ARR(ZICBOM, false),
+ KVM_ISA_EXT_ARR(ZICBOP, false),
+ KVM_ISA_EXT_ARR(ZICBOZ, false),
+ KVM_ISA_EXT_ARR(ZICCRSE, true),
+ KVM_ISA_EXT_ARR(ZICNTR, true),
+ KVM_ISA_EXT_ARR(ZICOND, true),
+ KVM_ISA_EXT_ARR(ZICSR, true),
+ KVM_ISA_EXT_ARR(ZIFENCEI, true),
+ KVM_ISA_EXT_ARR(ZIHINTNTL, true),
+ KVM_ISA_EXT_ARR(ZIHINTPAUSE, true),
+ KVM_ISA_EXT_ARR(ZIHPM, true),
+ KVM_ISA_EXT_ARR(ZILSD, true),
+ KVM_ISA_EXT_ARR(ZIMOP, true),
+ KVM_ISA_EXT_ARR(ZKND, true),
+ KVM_ISA_EXT_ARR(ZKNE, true),
+ KVM_ISA_EXT_ARR(ZKNH, true),
+ KVM_ISA_EXT_ARR(ZKR, true),
+ KVM_ISA_EXT_ARR(ZKSED, true),
+ KVM_ISA_EXT_ARR(ZKSH, true),
+ KVM_ISA_EXT_ARR(ZKT, true),
+ KVM_ISA_EXT_ARR(ZTSO, true),
+ KVM_ISA_EXT_ARR(ZVBB, true),
+ KVM_ISA_EXT_ARR(ZVBC, true),
+ KVM_ISA_EXT_ARR(ZVFBFMIN, true),
+ KVM_ISA_EXT_ARR(ZVFBFWMA, true),
+ KVM_ISA_EXT_ARR(ZVFH, true),
+ KVM_ISA_EXT_ARR(ZVFHMIN, true),
+ KVM_ISA_EXT_ARR(ZVKB, true),
+ KVM_ISA_EXT_ARR(ZVKG, true),
+ KVM_ISA_EXT_ARR(ZVKNED, true),
+ KVM_ISA_EXT_ARR(ZVKNHA, true),
+ KVM_ISA_EXT_ARR(ZVKNHB, true),
+ KVM_ISA_EXT_ARR(ZVKSED, true),
+ KVM_ISA_EXT_ARR(ZVKSH, true),
+ KVM_ISA_EXT_ARR(ZVKT, true),
};
unsigned long kvm_riscv_base2isa_ext(unsigned long base_ext)
@@ -102,7 +107,7 @@ unsigned long kvm_riscv_base2isa_ext(unsigned long base_ext)
unsigned long i;
for (i = 0; i < KVM_RISCV_ISA_EXT_MAX; i++) {
- if (kvm_isa_ext_arr[i] == base_ext)
+ if (kvm_isa_ext_arr[i].ext == base_ext)
return i;
}
@@ -117,7 +122,10 @@ int __kvm_riscv_isa_check_host(unsigned long ext, unsigned long *base_ext)
ext >= ARRAY_SIZE(kvm_isa_ext_arr))
return -ENOENT;
- switch (kvm_isa_ext_arr[ext]) {
+ if (kvm_riscv_nested_available() && !kvm_isa_ext_arr[ext].nested)
+ return -ENOENT;
+
+ switch (kvm_isa_ext_arr[ext].ext) {
case RISCV_ISA_EXT_SMNPM:
/*
* Pointer masking effective in (H)S-mode is provided by the
@@ -128,7 +136,7 @@ int __kvm_riscv_isa_check_host(unsigned long ext, unsigned long *base_ext)
host_ext = RISCV_ISA_EXT_SSNPM;
break;
default:
- host_ext = kvm_isa_ext_arr[ext];
+ host_ext = kvm_isa_ext_arr[ext].ext;
break;
}
@@ -136,7 +144,7 @@ int __kvm_riscv_isa_check_host(unsigned long ext, unsigned long *base_ext)
return -ENOENT;
if (base_ext)
- *base_ext = kvm_isa_ext_arr[ext];
+ *base_ext = kvm_isa_ext_arr[ext].ext;
return 0;
}
--
2.43.0
* [PATCH 18/27] RISC-V: KVM: Trap-n-emulate SRET for Guest HS-mode
2026-01-20 7:59 [PATCH 00/27] Nested virtualization for KVM RISC-V Anup Patel
` (16 preceding siblings ...)
2026-01-20 8:00 ` [PATCH 17/27] RISC-V: KVM: Extend kvm_riscv_isa_check_host() for nested virt Anup Patel
@ 2026-01-20 8:00 ` Anup Patel
2026-01-20 8:00 ` [PATCH 19/27] RISC-V: KVM: Redirect nested supervisor ecall and breakpoint traps Anup Patel
` (9 subsequent siblings)
27 siblings, 0 replies; 38+ messages in thread
From: Anup Patel @ 2026-01-20 8:00 UTC (permalink / raw)
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Shuah Khan,
Anup Patel, Andrew Jones, kvm-riscv, kvm, linux-riscv,
linux-kernel, linux-kselftest, Anup Patel
The guest HS-mode (aka L1/guest hypervisor) can enter Guest VS/VU-mode
(aka L2/nested guest) using the SRET instruction. To achieve this, the
host hypervisor must trap-n-emulate the SRET instruction for guest
HS-mode (aka L1/guest hypervisor) using the host hstatus.VTSR bit.
Trapping every SRET instruction executed by guest HS-mode (aka L1/guest
hypervisor) would hurt performance, so the host hypervisor sets the
hstatus.VTSR bit only when the guest sets its hstatus.SPV bit, and the
SRET emulation clears hstatus.VTSR again upon entry into guest
VS/VU-mode.
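As a rough sketch of this policy (the helper name below is made up for
illustration; the actual toggle lands in the hstatus CSR emulation
added later in this series):

  /* Illustrative only: mirror guest hstatus.SPV into host hstatus.VTSR */
  static void sketch_sync_vtsr(struct kvm_vcpu *vcpu, unsigned long guest_hstatus)
  {
          if (guest_hstatus & HSTATUS_SPV)
                  vcpu->arch.guest_context.hstatus |= HSTATUS_VTSR;
          else
                  vcpu->arch.guest_context.hstatus &= ~HSTATUS_VTSR;
  }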
Signed-off-by: Anup Patel <anup.patel@oss.qualcomm.com>
---
arch/riscv/include/asm/insn.h | 3 ++
arch/riscv/include/asm/kvm_vcpu_nested.h | 2 +
arch/riscv/kvm/Makefile | 1 +
arch/riscv/kvm/vcpu_insn.c | 6 +++
arch/riscv/kvm/vcpu_nested_insn.c | 54 ++++++++++++++++++++++++
5 files changed, 66 insertions(+)
create mode 100644 arch/riscv/kvm/vcpu_nested_insn.c
diff --git a/arch/riscv/include/asm/insn.h b/arch/riscv/include/asm/insn.h
index c3005573e8c9..24a8abb3283c 100644
--- a/arch/riscv/include/asm/insn.h
+++ b/arch/riscv/include/asm/insn.h
@@ -331,6 +331,9 @@ static __always_inline bool riscv_insn_is_c_jalr(u32 code)
#define INSN_OPCODE_SHIFT 2
#define INSN_OPCODE_SYSTEM 28
+#define INSN_MASK_SRET 0xffffffff
+#define INSN_MATCH_SRET 0x10200073
+
#define INSN_MASK_WFI 0xffffffff
#define INSN_MATCH_WFI 0x10500073
diff --git a/arch/riscv/include/asm/kvm_vcpu_nested.h b/arch/riscv/include/asm/kvm_vcpu_nested.h
index 6d9d252a378c..665c60f09ee6 100644
--- a/arch/riscv/include/asm/kvm_vcpu_nested.h
+++ b/arch/riscv/include/asm/kvm_vcpu_nested.h
@@ -63,6 +63,8 @@ struct kvm_vcpu_nested {
#define kvm_riscv_vcpu_nested_virt(__vcpu) ((__vcpu)->arch.nested.virt)
+int kvm_riscv_vcpu_nested_insn_sret(struct kvm_vcpu *vcpu, struct kvm_run *run, ulong insn);
+
int kvm_riscv_vcpu_nested_swtlb_xlate(struct kvm_vcpu *vcpu,
const struct kvm_cpu_trap *trap,
struct kvm_gstage_mapping *out_map,
diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile
index a8806b69205f..c0534d4a469e 100644
--- a/arch/riscv/kvm/Makefile
+++ b/arch/riscv/kvm/Makefile
@@ -26,6 +26,7 @@ kvm-y += vcpu_exit.o
kvm-y += vcpu_fp.o
kvm-y += vcpu_insn.o
kvm-y += vcpu_nested.o
+kvm-y += vcpu_nested_insn.o
kvm-y += vcpu_nested_swtlb.o
kvm-y += vcpu_onereg.o
kvm-$(CONFIG_RISCV_PMU_SBI) += vcpu_pmu.o
diff --git a/arch/riscv/kvm/vcpu_insn.c b/arch/riscv/kvm/vcpu_insn.c
index 4d89b94128ae..745cd654df94 100644
--- a/arch/riscv/kvm/vcpu_insn.c
+++ b/arch/riscv/kvm/vcpu_insn.c
@@ -9,6 +9,7 @@
#include <asm/cpufeature.h>
#include <asm/insn.h>
+#include <asm/kvm_vcpu_nested.h>
struct insn_func {
unsigned long mask;
@@ -257,6 +258,11 @@ static const struct insn_func system_opcode_funcs[] = {
.match = INSN_MATCH_CSRRCI,
.func = csr_insn,
},
+ {
+ .mask = INSN_MASK_SRET,
+ .match = INSN_MATCH_SRET,
+ .func = kvm_riscv_vcpu_nested_insn_sret,
+ },
{
.mask = INSN_MASK_WFI,
.match = INSN_MATCH_WFI,
diff --git a/arch/riscv/kvm/vcpu_nested_insn.c b/arch/riscv/kvm/vcpu_nested_insn.c
new file mode 100644
index 000000000000..8f5b2992dbb9
--- /dev/null
+++ b/arch/riscv/kvm/vcpu_nested_insn.c
@@ -0,0 +1,54 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2026 Qualcomm Technologies, Inc.
+ */
+
+#include <linux/kvm_host.h>
+#include <asm/kvm_nacl.h>
+#include <asm/kvm_vcpu_insn.h>
+
+int kvm_riscv_vcpu_nested_insn_sret(struct kvm_vcpu *vcpu, struct kvm_run *run, ulong insn)
+{
+ unsigned long vsstatus, next_sepc, next_spp;
+ bool next_virt;
+
+ /*
+ * Trap from virtual-VS/VU modes should be forwarded to
+ * virtual-HS mode as a virtual instruction trap.
+ */
+ if (kvm_riscv_vcpu_nested_virt(vcpu))
+ return KVM_INSN_VIRTUAL_TRAP;
+
+ /*
+ * Trap from virtual-U mode should be forwarded to
+ * virtual-HS mode as illegal instruction trap.
+ */
+ if (!(vcpu->arch.guest_context.hstatus & HSTATUS_SPVP))
+ return KVM_INSN_ILLEGAL_TRAP;
+
+ vsstatus = ncsr_read(CSR_VSSTATUS);
+
+ /*
+ * Find next nested virtualization mode, next privilege mode,
+ * and next sepc
+ */
+ next_virt = (vcpu->arch.nested.csr.hstatus & HSTATUS_SPV) ? true : false;
+ next_sepc = ncsr_read(CSR_VSEPC);
+ next_spp = vsstatus & SR_SPP;
+
+ /* Update Guest sstatus.sie */
+ vsstatus &= ~SR_SIE;
+ vsstatus |= (vsstatus & SR_SPIE) ? SR_SIE : 0;
+ ncsr_write(CSR_VSSTATUS, vsstatus);
+
+ /* Update return address and return privilege mode */
+ vcpu->arch.guest_context.sepc = next_sepc;
+ vcpu->arch.guest_context.sstatus &= ~SR_SPP;
+ vcpu->arch.guest_context.sstatus |= next_spp;
+
+ /* Set nested virtualization state based on guest hstatus.SPV */
+ kvm_riscv_vcpu_nested_set_virt(vcpu, NESTED_SET_VIRT_EVENT_SRET,
+ next_virt, false, false);
+
+ return KVM_INSN_CONTINUE_SAME_SEPC;
+}
--
2.43.0
* [PATCH 19/27] RISC-V: KVM: Redirect nested supervisor ecall and breakpoint traps
2026-01-20 7:59 [PATCH 00/27] Nested virtualization for KVM RISC-V Anup Patel
` (17 preceding siblings ...)
2026-01-20 8:00 ` [PATCH 18/27] RISC-V: KVM: Trap-n-emulate SRET for Guest HS-mode Anup Patel
@ 2026-01-20 8:00 ` Anup Patel
2026-01-20 8:00 ` [PATCH 20/27] RISC-V: KVM: Redirect nested WFI and WRS traps Anup Patel
` (8 subsequent siblings)
27 siblings, 0 replies; 38+ messages in thread
From: Anup Patel @ 2026-01-20 8:00 UTC (permalink / raw)
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Shuah Khan,
Anup Patel, Andrew Jones, kvm-riscv, kvm, linux-riscv,
linux-kernel, linux-kselftest, Anup Patel
The supervisor ecall and breakpoint traps from Guest VS/VU-mode
(aka L2/nested guest) should be redirected to Guest HS-mode (aka
L1/guest hypervisor).
Signed-off-by: Anup Patel <anup.patel@oss.qualcomm.com>
---
arch/riscv/kvm/vcpu_exit.c | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c
index aeec4c4eee06..6627c2c25a71 100644
--- a/arch/riscv/kvm/vcpu_exit.c
+++ b/arch/riscv/kvm/vcpu_exit.c
@@ -274,12 +274,18 @@ int kvm_riscv_vcpu_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
ret = gstage_page_fault(vcpu, run, trap);
break;
case EXC_SUPERVISOR_SYSCALL:
- if (vcpu->arch.guest_context.hstatus & HSTATUS_SPV)
+ if (kvm_riscv_vcpu_nested_virt(vcpu))
+ ret = vcpu_redirect(vcpu, trap);
+ else if (vcpu->arch.guest_context.hstatus & HSTATUS_SPV)
ret = kvm_riscv_vcpu_sbi_ecall(vcpu, run);
break;
case EXC_BREAKPOINT:
- run->exit_reason = KVM_EXIT_DEBUG;
- ret = 0;
+ if (kvm_riscv_vcpu_nested_virt(vcpu)) {
+ ret = vcpu_redirect(vcpu, trap);
+ } else {
+ run->exit_reason = KVM_EXIT_DEBUG;
+ ret = 0;
+ }
break;
default:
break;
--
2.43.0
* [PATCH 20/27] RISC-V: KVM: Redirect nested WFI and WRS traps
2026-01-20 7:59 [PATCH 00/27] Nested virtualization for KVM RISC-V Anup Patel
` (18 preceding siblings ...)
2026-01-20 8:00 ` [PATCH 19/27] RISC-V: KVM: Redirect nested supervisor ecall and breakpoint traps Anup Patel
@ 2026-01-20 8:00 ` Anup Patel
2026-01-20 8:00 ` [PATCH 21/27] RISC-V: KVM: Implement remote HFENCE SBI calls for guest Anup Patel
` (7 subsequent siblings)
27 siblings, 0 replies; 38+ messages in thread
From: Anup Patel @ 2026-01-20 8:00 UTC (permalink / raw)
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Shuah Khan,
Anup Patel, Andrew Jones, kvm-riscv, kvm, linux-riscv,
linux-kernel, linux-kselftest, Anup Patel
The WFI and WRS virtual instruction traps from Guest VS/VU-mode
(aka L2/nested guest) should be redirected to Guest HS-mode (aka
L1/guest hypervisor).
Signed-off-by: Anup Patel <anup.patel@oss.qualcomm.com>
---
arch/riscv/kvm/vcpu_insn.c | 28 ++++++++++++++++++++++++++++
1 file changed, 28 insertions(+)
diff --git a/arch/riscv/kvm/vcpu_insn.c b/arch/riscv/kvm/vcpu_insn.c
index 745cd654df94..ebd0cfc1bf30 100644
--- a/arch/riscv/kvm/vcpu_insn.c
+++ b/arch/riscv/kvm/vcpu_insn.c
@@ -76,6 +76,20 @@ void kvm_riscv_vcpu_wfi(struct kvm_vcpu *vcpu)
static int wfi_insn(struct kvm_vcpu *vcpu, struct kvm_run *run, ulong insn)
{
+ /*
+ * Trap from virtual-VS/VU modes should be forwarded to
+ * virtual-HS mode as a virtual instruction trap.
+ */
+ if (kvm_riscv_vcpu_nested_virt(vcpu))
+ return KVM_INSN_VIRTUAL_TRAP;
+
+ /*
+ * Trap from virtual-U mode should be forwarded to
+ * virtual-HS mode as illegal instruction trap.
+ */
+ if (!(vcpu->arch.guest_context.hstatus & HSTATUS_SPVP))
+ return KVM_INSN_ILLEGAL_TRAP;
+
vcpu->stat.wfi_exit_stat++;
kvm_riscv_vcpu_wfi(vcpu);
return KVM_INSN_CONTINUE_NEXT_SEPC;
@@ -83,6 +97,20 @@ static int wfi_insn(struct kvm_vcpu *vcpu, struct kvm_run *run, ulong insn)
static int wrs_insn(struct kvm_vcpu *vcpu, struct kvm_run *run, ulong insn)
{
+ /*
+ * Trap from virtual-VS/VU modes should be forwarded to
+ * virtual-HS mode as a virtual instruction trap.
+ */
+ if (kvm_riscv_vcpu_nested_virt(vcpu))
+ return KVM_INSN_VIRTUAL_TRAP;
+
+ /*
+ * Trap from virtual-U mode should be forwarded to
+ * virtual-HS mode as illegal instruction trap.
+ */
+ if (!(vcpu->arch.guest_context.hstatus & HSTATUS_SPVP))
+ return KVM_INSN_ILLEGAL_TRAP;
+
vcpu->stat.wrs_exit_stat++;
kvm_vcpu_on_spin(vcpu, vcpu->arch.guest_context.sstatus & SR_SPP);
return KVM_INSN_CONTINUE_NEXT_SEPC;
--
2.43.0
* [PATCH 21/27] RISC-V: KVM: Implement remote HFENCE SBI calls for guest
2026-01-20 7:59 [PATCH 00/27] Nested virtualization for KVM RISC-V Anup Patel
` (19 preceding siblings ...)
2026-01-20 8:00 ` [PATCH 20/27] RISC-V: KVM: Redirect nested WFI and WRS traps Anup Patel
@ 2026-01-20 8:00 ` Anup Patel
2026-01-20 8:00 ` [PATCH 22/27] RISC-V: KVM: Add CSR emulation for nested virtualization Anup Patel
` (6 subsequent siblings)
27 siblings, 0 replies; 38+ messages in thread
From: Anup Patel @ 2026-01-20 8:00 UTC (permalink / raw)
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Shuah Khan,
Anup Patel, Andrew Jones, kvm-riscv, kvm, linux-riscv,
linux-kernel, linux-kselftest, Anup Patel
The remote HFENCE SBI calls can now be implemented as operations
on the nested G-stage page table emulated for the guest.
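For reference, an L1 guest hypervisor kernel reaches this handler via
the SBI RFENCE extension; a minimal sketch using Linux's sbi_ecall()
helper (the wrapper name is hypothetical; the a0-a4 register layout
matches the cp->a0..a4 usage in the handler below):

  #include <asm/sbi.h>

  /* Hypothetical wrapper: flush a GPA range for one guest VMID on remote harts */
  static int sketch_remote_hfence_gvma_vmid(unsigned long hmask, unsigned long hbase,
                                            unsigned long gpa, unsigned long size,
                                            unsigned long vmid)
  {
          struct sbiret ret;

          ret = sbi_ecall(SBI_EXT_RFENCE, SBI_EXT_RFENCE_REMOTE_HFENCE_GVMA_VMID,
                          hmask, hbase, gpa, size, vmid, 0);
          return ret.error ? sbi_err_map_linux_errno(ret.error) : 0;
  }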
Signed-off-by: Anup Patel <anup.patel@oss.qualcomm.com>
---
arch/riscv/include/asm/kvm_host.h | 2 +
arch/riscv/include/asm/kvm_tlb.h | 37 ++++++-
arch/riscv/include/asm/kvm_vcpu_nested.h | 14 +++
arch/riscv/kvm/tlb.c | 124 +++++++++++++++++++++++
arch/riscv/kvm/vcpu_nested_swtlb.c | 76 ++++++++++++++
arch/riscv/kvm/vcpu_sbi_replace.c | 63 +++++++++++-
6 files changed, 310 insertions(+), 6 deletions(-)
diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index c510564a09a2..2f097459ee14 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -47,6 +47,8 @@
KVM_ARCH_REQ_FLAGS(5, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
#define KVM_REQ_STEAL_UPDATE KVM_ARCH_REQ(6)
#define KVM_REQ_NESTED_SWTLB KVM_ARCH_REQ(7)
+#define KVM_REQ_NESTED_HFENCE_GVMA_ALL KVM_ARCH_REQ(8)
+#define KVM_REQ_NESTED_HFENCE_VVMA_ALL KVM_ARCH_REQ(9)
#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE
diff --git a/arch/riscv/include/asm/kvm_tlb.h b/arch/riscv/include/asm/kvm_tlb.h
index a0e7099bcb85..591b8735000f 100644
--- a/arch/riscv/include/asm/kvm_tlb.h
+++ b/arch/riscv/include/asm/kvm_tlb.h
@@ -15,7 +15,11 @@ enum kvm_riscv_hfence_type {
KVM_RISCV_HFENCE_VVMA_ASID_GVA,
KVM_RISCV_HFENCE_VVMA_ASID_ALL,
KVM_RISCV_HFENCE_VVMA_GVA,
- KVM_RISCV_HFENCE_VVMA_ALL
+ KVM_RISCV_HFENCE_VVMA_ALL,
+ KVM_RISCV_NESTED_HFENCE_GVMA_GPA,
+ KVM_RISCV_NESTED_HFENCE_GVMA_VMID_GPA,
+ KVM_RISCV_NESTED_HFENCE_VVMA_GVA,
+ KVM_RISCV_NESTED_HFENCE_VVMA_ASID_GVA,
};
struct kvm_riscv_hfence {
@@ -56,6 +60,8 @@ void kvm_riscv_tlb_flush_process(struct kvm_vcpu *vcpu);
void kvm_riscv_fence_i_process(struct kvm_vcpu *vcpu);
void kvm_riscv_hfence_vvma_all_process(struct kvm_vcpu *vcpu);
void kvm_riscv_hfence_process(struct kvm_vcpu *vcpu);
+void kvm_riscv_nested_hfence_gvma_all_process(struct kvm_vcpu *vcpu);
+void kvm_riscv_nested_hfence_vvma_all_process(struct kvm_vcpu *vcpu);
void kvm_riscv_fence_i(struct kvm *kvm,
unsigned long hbase, unsigned long hmask);
@@ -82,4 +88,33 @@ void kvm_riscv_hfence_vvma_all(struct kvm *kvm,
unsigned long hbase, unsigned long hmask,
unsigned long vmid);
+void kvm_riscv_nested_hfence_gvma_gpa(struct kvm *kvm,
+ unsigned long hbase, unsigned long hmask,
+ gpa_t gpa, gpa_t gpsz,
+ unsigned long order);
+void kvm_riscv_nested_hfence_gvma_all(struct kvm *kvm,
+ unsigned long hbase, unsigned long hmask);
+void kvm_riscv_nested_hfence_gvma_vmid_gpa(struct kvm *kvm,
+ unsigned long hbase, unsigned long hmask,
+ gpa_t gpa, gpa_t gpsz,
+ unsigned long order, unsigned long vmid);
+void kvm_riscv_nested_hfence_gvma_vmid_all(struct kvm *kvm,
+ unsigned long hbase, unsigned long hmask,
+ unsigned long vmid);
+void kvm_riscv_nested_hfence_vvma_gva(struct kvm *kvm,
+ unsigned long hbase, unsigned long hmask,
+ unsigned long gva, unsigned long gvsz,
+ unsigned long order, unsigned long vmid);
+void kvm_riscv_nested_hfence_vvma_all(struct kvm *kvm,
+ unsigned long hbase, unsigned long hmask,
+ unsigned long vmid);
+void kvm_riscv_nested_hfence_vvma_asid_gva(struct kvm *kvm,
+ unsigned long hbase, unsigned long hmask,
+ unsigned long gva, unsigned long gvsz,
+ unsigned long order, unsigned long asid,
+ unsigned long vmid);
+void kvm_riscv_nested_hfence_vvma_asid_all(struct kvm *kvm,
+ unsigned long hbase, unsigned long hmask,
+ unsigned long asid, unsigned long vmid);
+
#endif
diff --git a/arch/riscv/include/asm/kvm_vcpu_nested.h b/arch/riscv/include/asm/kvm_vcpu_nested.h
index 665c60f09ee6..4935ab0db1a2 100644
--- a/arch/riscv/include/asm/kvm_vcpu_nested.h
+++ b/arch/riscv/include/asm/kvm_vcpu_nested.h
@@ -69,6 +69,20 @@ int kvm_riscv_vcpu_nested_swtlb_xlate(struct kvm_vcpu *vcpu,
const struct kvm_cpu_trap *trap,
struct kvm_gstage_mapping *out_map,
struct kvm_cpu_trap *out_trap);
+void kvm_riscv_vcpu_nested_swtlb_vvma_flush(struct kvm_vcpu *vcpu,
+ unsigned long vaddr, unsigned long size,
+ unsigned long order, unsigned long vmid);
+void kvm_riscv_vcpu_nested_swtlb_vvma_flush_asid(struct kvm_vcpu *vcpu,
+ unsigned long vaddr, unsigned long size,
+ unsigned long order, unsigned long vmid,
+ unsigned long asid);
+void kvm_riscv_vcpu_nested_swtlb_gvma_flush(struct kvm_vcpu *vcpu,
+ gpa_t addr, gpa_t size, unsigned long order);
+void kvm_riscv_vcpu_nested_swtlb_gvma_flush_vmid(struct kvm_vcpu *vcpu,
+ gpa_t addr, gpa_t size, unsigned long order,
+ unsigned long vmid);
+void kvm_riscv_vcpu_nested_swtlb_host_flush(struct kvm_vcpu *vcpu,
+ gpa_t addr, gpa_t size, unsigned long order);
void kvm_riscv_vcpu_nested_swtlb_process(struct kvm_vcpu *vcpu);
void kvm_riscv_vcpu_nested_swtlb_request(struct kvm_vcpu *vcpu,
const struct kvm_gstage_mapping *guest_map,
diff --git a/arch/riscv/kvm/tlb.c b/arch/riscv/kvm/tlb.c
index a95aa5336560..1b48a5ff81d1 100644
--- a/arch/riscv/kvm/tlb.c
+++ b/arch/riscv/kvm/tlb.c
@@ -210,6 +210,7 @@ void kvm_riscv_tlb_flush_process(struct kvm_vcpu *vcpu)
nacl_hfence_gvma_vmid_all(nacl_shmem(), vmid);
else
kvm_riscv_local_hfence_gvma_vmid_all(vmid);
+ kvm_riscv_vcpu_nested_swtlb_host_flush(vcpu, 0, 0, 0);
}
void kvm_riscv_hfence_vvma_all_process(struct kvm_vcpu *vcpu)
@@ -223,6 +224,16 @@ void kvm_riscv_hfence_vvma_all_process(struct kvm_vcpu *vcpu)
kvm_riscv_local_hfence_vvma_all(vmid);
}
+void kvm_riscv_nested_hfence_gvma_all_process(struct kvm_vcpu *vcpu)
+{
+ kvm_riscv_vcpu_nested_swtlb_gvma_flush(vcpu, 0, 0, 0);
+}
+
+void kvm_riscv_nested_hfence_vvma_all_process(struct kvm_vcpu *vcpu)
+{
+ kvm_riscv_vcpu_nested_swtlb_vvma_flush(vcpu, 0, 0, 0, -1UL);
+}
+
static bool vcpu_hfence_dequeue(struct kvm_vcpu *vcpu,
struct kvm_riscv_hfence *out_data)
{
@@ -287,12 +298,14 @@ void kvm_riscv_hfence_process(struct kvm_vcpu *vcpu)
else
kvm_riscv_local_hfence_gvma_vmid_gpa(d.vmid, d.addr,
d.size, d.order);
+ kvm_riscv_vcpu_nested_swtlb_host_flush(vcpu, d.addr, d.size, d.order);
break;
case KVM_RISCV_HFENCE_GVMA_VMID_ALL:
if (kvm_riscv_nacl_available())
nacl_hfence_gvma_vmid_all(nacl_shmem(), d.vmid);
else
kvm_riscv_local_hfence_gvma_vmid_all(d.vmid);
+ kvm_riscv_vcpu_nested_swtlb_host_flush(vcpu, 0, 0, 0);
break;
case KVM_RISCV_HFENCE_VVMA_ASID_GVA:
kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_ASID_RCVD);
@@ -464,6 +477,117 @@ void kvm_riscv_hfence_vvma_all(struct kvm *kvm,
KVM_REQ_HFENCE_VVMA_ALL, &data);
}
+void kvm_riscv_nested_hfence_gvma_gpa(struct kvm *kvm,
+ unsigned long hbase, unsigned long hmask,
+ gpa_t gpa, gpa_t gpsz,
+ unsigned long order)
+{
+ struct kvm_riscv_hfence data = {0};
+
+ data.type = KVM_RISCV_NESTED_HFENCE_GVMA_GPA;
+ data.addr = gpa;
+ data.size = gpsz;
+ data.order = order;
+ make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE,
+ KVM_REQ_NESTED_HFENCE_GVMA_ALL, &data);
+}
+
+void kvm_riscv_nested_hfence_gvma_all(struct kvm *kvm,
+ unsigned long hbase, unsigned long hmask)
+{
+ make_xfence_request(kvm, hbase, hmask, KVM_REQ_NESTED_HFENCE_GVMA_ALL,
+ KVM_REQ_NESTED_HFENCE_GVMA_ALL, NULL);
+}
+
+void kvm_riscv_nested_hfence_gvma_vmid_gpa(struct kvm *kvm,
+ unsigned long hbase, unsigned long hmask,
+ gpa_t gpa, gpa_t gpsz,
+ unsigned long order, unsigned long vmid)
+{
+ struct kvm_riscv_hfence data;
+
+ data.type = KVM_RISCV_NESTED_HFENCE_GVMA_VMID_GPA;
+ data.asid = 0;
+ data.vmid = vmid;
+ data.addr = gpa;
+ data.size = gpsz;
+ data.order = order;
+ make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE,
+ KVM_REQ_NESTED_HFENCE_GVMA_ALL, &data);
+}
+
+void kvm_riscv_nested_hfence_gvma_vmid_all(struct kvm *kvm,
+ unsigned long hbase, unsigned long hmask,
+ unsigned long vmid)
+{
+ struct kvm_riscv_hfence data = {0};
+
+ data.type = KVM_RISCV_NESTED_HFENCE_GVMA_VMID_GPA;
+ data.vmid = vmid;
+ make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE,
+ KVM_REQ_NESTED_HFENCE_GVMA_ALL, &data);
+}
+
+void kvm_riscv_nested_hfence_vvma_gva(struct kvm *kvm,
+ unsigned long hbase, unsigned long hmask,
+ unsigned long gva, unsigned long gvsz,
+ unsigned long order, unsigned long vmid)
+{
+ struct kvm_riscv_hfence data;
+
+ data.type = KVM_RISCV_NESTED_HFENCE_VVMA_GVA;
+ data.asid = 0;
+ data.vmid = vmid;
+ data.addr = gva;
+ data.size = gvsz;
+ data.order = order;
+ make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE,
+ KVM_REQ_NESTED_HFENCE_VVMA_ALL, &data);
+}
+
+void kvm_riscv_nested_hfence_vvma_all(struct kvm *kvm,
+ unsigned long hbase, unsigned long hmask,
+ unsigned long vmid)
+{
+ struct kvm_riscv_hfence data = {0};
+
+ data.type = KVM_RISCV_NESTED_HFENCE_VVMA_GVA;
+ data.vmid = vmid;
+ make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE,
+ KVM_REQ_NESTED_HFENCE_VVMA_ALL, &data);
+}
+
+void kvm_riscv_nested_hfence_vvma_asid_gva(struct kvm *kvm,
+ unsigned long hbase, unsigned long hmask,
+ unsigned long gva, unsigned long gvsz,
+ unsigned long order, unsigned long asid,
+ unsigned long vmid)
+{
+ struct kvm_riscv_hfence data;
+
+ data.type = KVM_RISCV_NESTED_HFENCE_VVMA_ASID_GVA;
+ data.asid = asid;
+ data.vmid = vmid;
+ data.addr = gva;
+ data.size = gvsz;
+ data.order = order;
+ make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE,
+ KVM_REQ_NESTED_HFENCE_VVMA_ALL, &data);
+}
+
+void kvm_riscv_nested_hfence_vvma_asid_all(struct kvm *kvm,
+ unsigned long hbase, unsigned long hmask,
+ unsigned long asid, unsigned long vmid)
+{
+ struct kvm_riscv_hfence data = {0};
+
+ data.type = KVM_RISCV_NESTED_HFENCE_VVMA_ASID_GVA;
+ data.asid = asid;
+ data.vmid = vmid;
+ make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE,
+ KVM_REQ_NESTED_HFENCE_VVMA_ALL, &data);
+}
+
int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 nr_pages)
{
kvm_riscv_hfence_gvma_vmid_gpa(kvm, -1UL, 0,
diff --git a/arch/riscv/kvm/vcpu_nested_swtlb.c b/arch/riscv/kvm/vcpu_nested_swtlb.c
index 1d9faf50a61f..7dabfc1c3e16 100644
--- a/arch/riscv/kvm/vcpu_nested_swtlb.c
+++ b/arch/riscv/kvm/vcpu_nested_swtlb.c
@@ -4,6 +4,7 @@
*/
#include <linux/kvm_host.h>
+#include <asm/kvm_nacl.h>
int kvm_riscv_vcpu_nested_swtlb_xlate(struct kvm_vcpu *vcpu,
const struct kvm_cpu_trap *trap,
@@ -14,6 +15,81 @@ int kvm_riscv_vcpu_nested_swtlb_xlate(struct kvm_vcpu *vcpu,
return 0;
}
+void kvm_riscv_vcpu_nested_swtlb_vvma_flush(struct kvm_vcpu *vcpu,
+ unsigned long vaddr, unsigned long size,
+ unsigned long order, unsigned long vmid)
+{
+ struct kvm_vcpu_nested *ns = &vcpu->arch.nested;
+ struct kvm_vmid *v = &vcpu->kvm->arch.vmid;
+
+ if (vmid != -1UL && ((ns->csr.hgatp & HGATP_VMID) >> HGATP_VMID_SHIFT) != vmid)
+ return;
+
+ vmid = kvm_riscv_gstage_nested_vmid(READ_ONCE(v->vmid));
+ if (!vaddr && !size && !order) {
+ if (kvm_riscv_nacl_available())
+ nacl_hfence_vvma_all(nacl_shmem(), vmid);
+ else
+ kvm_riscv_local_hfence_vvma_all(vmid);
+ } else {
+ if (kvm_riscv_nacl_available())
+ nacl_hfence_vvma(nacl_shmem(), vmid, vaddr, size, order);
+ else
+ kvm_riscv_local_hfence_vvma_gva(vmid, vaddr, size, order);
+ }
+}
+
+void kvm_riscv_vcpu_nested_swtlb_vvma_flush_asid(struct kvm_vcpu *vcpu,
+ unsigned long vaddr, unsigned long size,
+ unsigned long order, unsigned long vmid,
+ unsigned long asid)
+{
+ struct kvm_vcpu_nested *ns = &vcpu->arch.nested;
+ struct kvm_vmid *v = &vcpu->kvm->arch.vmid;
+
+ if (vmid != -1UL && ((ns->csr.hgatp & HGATP_VMID) >> HGATP_VMID_SHIFT) != vmid)
+ return;
+
+ vmid = kvm_riscv_gstage_nested_vmid(READ_ONCE(v->vmid));
+ if (!vaddr && !size && !order) {
+ if (kvm_riscv_nacl_available())
+ nacl_hfence_vvma_asid_all(nacl_shmem(), vmid, asid);
+ else
+ kvm_riscv_local_hfence_vvma_asid_all(vmid, asid);
+ } else {
+ if (kvm_riscv_nacl_available())
+ nacl_hfence_vvma_asid(nacl_shmem(), vmid, asid,
+ vaddr, size, order);
+ else
+ kvm_riscv_local_hfence_vvma_asid_gva(vmid, asid, vaddr,
+ size, order);
+ }
+}
+
+void kvm_riscv_vcpu_nested_swtlb_gvma_flush(struct kvm_vcpu *vcpu,
+ gpa_t addr, gpa_t size, unsigned long order)
+{
+ /* TODO: */
+}
+
+void kvm_riscv_vcpu_nested_swtlb_gvma_flush_vmid(struct kvm_vcpu *vcpu,
+ gpa_t addr, gpa_t size, unsigned long order,
+ unsigned long vmid)
+{
+ struct kvm_vcpu_nested *ns = &vcpu->arch.nested;
+
+ if (vmid != -1UL && ((ns->csr.hgatp & HGATP_VMID) >> HGATP_VMID_SHIFT) != vmid)
+ return;
+
+ kvm_riscv_vcpu_nested_swtlb_gvma_flush(vcpu, addr, size, order);
+}
+
+void kvm_riscv_vcpu_nested_swtlb_host_flush(struct kvm_vcpu *vcpu,
+ gpa_t addr, gpa_t size, unsigned long order)
+{
+ /* TODO: */
+}
+
void kvm_riscv_vcpu_nested_swtlb_process(struct kvm_vcpu *vcpu)
{
struct kvm_vcpu_nested_swtlb *nst = &vcpu->arch.nested.swtlb;
diff --git a/arch/riscv/kvm/vcpu_sbi_replace.c b/arch/riscv/kvm/vcpu_sbi_replace.c
index 506a510b6bff..d60c7b05cd02 100644
--- a/arch/riscv/kvm/vcpu_sbi_replace.c
+++ b/arch/riscv/kvm/vcpu_sbi_replace.c
@@ -123,14 +123,67 @@ static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run
kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_ASID_SENT);
break;
case SBI_EXT_RFENCE_REMOTE_HFENCE_GVMA:
+ /* Not supported if VCPU does not have H-extension */
+ if (!riscv_isa_extension_available(vcpu->arch.isa, h)) {
+ retdata->err_val = SBI_ERR_NOT_SUPPORTED;
+ break;
+ }
+
+ if ((cp->a2 == 0 && cp->a3 == 0) || cp->a3 == -1UL)
+ kvm_riscv_nested_hfence_gvma_all(vcpu->kvm, hbase, hmask);
+ else
+ kvm_riscv_nested_hfence_gvma_gpa(vcpu->kvm, hbase, hmask,
+ cp->a2, cp->a3, PAGE_SHIFT);
+ kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_GVMA_SENT);
+ break;
case SBI_EXT_RFENCE_REMOTE_HFENCE_GVMA_VMID:
+ /* Not supported if VCPU does not have H-extension */
+ if (!riscv_isa_extension_available(vcpu->arch.isa, h)) {
+ retdata->err_val = SBI_ERR_NOT_SUPPORTED;
+ break;
+ }
+
+ if ((cp->a2 == 0 && cp->a3 == 0) || cp->a3 == -1UL)
+ kvm_riscv_nested_hfence_gvma_vmid_all(vcpu->kvm,
+ hbase, hmask, cp->a4);
+ else
+ kvm_riscv_nested_hfence_gvma_vmid_gpa(vcpu->kvm, hbase, hmask,
+ cp->a2, cp->a3,
+ PAGE_SHIFT, cp->a4);
+ kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_GVMA_VMID_SENT);
+ break;
case SBI_EXT_RFENCE_REMOTE_HFENCE_VVMA:
+ /* Not supported if VCPU does not have H-extension */
+ if (!riscv_isa_extension_available(vcpu->arch.isa, h)) {
+ retdata->err_val = SBI_ERR_NOT_SUPPORTED;
+ break;
+ }
+
+ vmid = (vcpu->arch.nested.csr.hgatp & HGATP_VMID) >> HGATP_VMID_SHIFT;
+ if ((cp->a2 == 0 && cp->a3 == 0) || cp->a3 == -1UL)
+ kvm_riscv_nested_hfence_vvma_all(vcpu->kvm, hbase, hmask, vmid);
+ else
+ kvm_riscv_nested_hfence_vvma_gva(vcpu->kvm, hbase, hmask,
+ cp->a2, cp->a3, PAGE_SHIFT, vmid);
+ kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_SENT);
+ break;
case SBI_EXT_RFENCE_REMOTE_HFENCE_VVMA_ASID:
- /*
- * Until nested virtualization is implemented, the
- * SBI HFENCE calls should return not supported
- * hence fallthrough.
- */
+ /* Not supported if VCPU does not have H-extension */
+ if (!riscv_isa_extension_available(vcpu->arch.isa, h)) {
+ retdata->err_val = SBI_ERR_NOT_SUPPORTED;
+ break;
+ }
+
+ vmid = (vcpu->arch.nested.csr.hgatp & HGATP_VMID) >> HGATP_VMID_SHIFT;
+ if ((cp->a2 == 0 && cp->a3 == 0) || cp->a3 == -1UL)
+ kvm_riscv_nested_hfence_vvma_asid_all(vcpu->kvm, hbase, hmask,
+ cp->a4, vmid);
+ else
+ kvm_riscv_nested_hfence_vvma_asid_gva(vcpu->kvm, hbase, hmask,
+ cp->a2, cp->a3, PAGE_SHIFT,
+ cp->a4, vmid);
+ kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_ASID_SENT);
+ break;
default:
retdata->err_val = SBI_ERR_NOT_SUPPORTED;
}
--
2.43.0
* [PATCH 22/27] RISC-V: KVM: Add CSR emulation for nested virtualization
2026-01-20 7:59 [PATCH 00/27] Nested virtualization for KVM RISC-V Anup Patel
` (20 preceding siblings ...)
2026-01-20 8:00 ` [PATCH 21/27] RISC-V: KVM: Implement remote HFENCE SBI calls for guest Anup Patel
@ 2026-01-20 8:00 ` Anup Patel
2026-01-20 8:00 ` [PATCH 23/27] RISC-V: KVM: Add HFENCE " Anup Patel
` (5 subsequent siblings)
27 siblings, 0 replies; 38+ messages in thread
From: Anup Patel @ 2026-01-20 8:00 UTC (permalink / raw)
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Shuah Khan,
Anup Patel, Andrew Jones, kvm-riscv, kvm, linux-riscv,
linux-kernel, linux-kselftest, Anup Patel
The Guest HS-mode (aka L1/guest hypervisor) needs the H-extension CSRs
for its hypervisor functionality, so add corresponding CSR emulation.
Both Guest HS-mode (aka L1/guest hypervisor) and Guest VS-mode (aka
L2/nested guest) run in actual VS-mode, which complicates receiving
Guest HS-mode interrupts while Guest VS-mode is running. To simplify
this, trap-n-emulate the SIE and SIP CSRs for Guest VS-mode (aka
L2/nested guest) using the hvictl.VTI bit.
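A minimal sketch of the hvictl.VTI policy (assuming the ncsr_write()
NACL helper used elsewhere in this series; the exact place where this
write happens is not shown in this patch):

  /* Illustrative only: force SIE/SIP traps while the nested guest runs */
  if (kvm_riscv_vcpu_nested_virt(vcpu))
          ncsr_write(CSR_HVICTL, HVICTL_VTI);
  else
          ncsr_write(CSR_HVICTL, 0);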
Signed-off-by: Anup Patel <anup.patel@oss.qualcomm.com>
---
arch/riscv/include/asm/csr.h | 17 ++
arch/riscv/include/asm/kvm_vcpu_nested.h | 42 +++
arch/riscv/kvm/Makefile | 1 +
arch/riscv/kvm/vcpu_insn.c | 2 +
arch/riscv/kvm/vcpu_nested.c | 3 +-
arch/riscv/kvm/vcpu_nested_csr.c | 361 +++++++++++++++++++++++
6 files changed, 424 insertions(+), 2 deletions(-)
create mode 100644 arch/riscv/kvm/vcpu_nested_csr.c
diff --git a/arch/riscv/include/asm/csr.h b/arch/riscv/include/asm/csr.h
index 4a37a98398ad..7fba082d4a26 100644
--- a/arch/riscv/include/asm/csr.h
+++ b/arch/riscv/include/asm/csr.h
@@ -17,6 +17,7 @@
#define SR_SPP _AC(0x00000100, UL) /* Previously Supervisor */
#define SR_MPP _AC(0x00001800, UL) /* Previously Machine */
#define SR_SUM _AC(0x00040000, UL) /* Supervisor User Memory Access */
+#define SR_MXR _AC(0x00080000, UL) /* Make eXecutable Readable */
#define SR_FS _AC(0x00006000, UL) /* Floating-point Status */
#define SR_FS_OFF _AC(0x00000000, UL)
@@ -59,6 +60,7 @@
/* SATP flags */
#ifndef CONFIG_64BIT
#define SATP_PPN _AC(0x003FFFFF, UL)
+#define SATP_MODE _AC(0x80000000, UL)
#define SATP_MODE_32 _AC(0x80000000, UL)
#define SATP_MODE_SHIFT 31
#define SATP_ASID_BITS 9
@@ -66,6 +68,7 @@
#define SATP_ASID_MASK _AC(0x1FF, UL)
#else
#define SATP_PPN _AC(0x00000FFFFFFFFFFF, UL)
+#define SATP_MODE _AC(0xF000000000000000, UL)
#define SATP_MODE_39 _AC(0x8000000000000000, UL)
#define SATP_MODE_48 _AC(0x9000000000000000, UL)
#define SATP_MODE_57 _AC(0xa000000000000000, UL)
@@ -74,6 +77,8 @@
#define SATP_ASID_SHIFT 44
#define SATP_ASID_MASK _AC(0xFFFF, UL)
#endif
+#define SATP_MODE_OFF _AC(0, UL)
+#define SATP_ASID (SATP_ASID_MASK << SATP_ASID_SHIFT)
/* Exception cause high bit - is an interrupt if set */
#define CAUSE_IRQ_FLAG (_AC(1, UL) << (__riscv_xlen - 1))
@@ -151,11 +156,13 @@
#define HGATP_MODE_SV57X4 _AC(10, UL)
#define HGATP32_MODE_SHIFT 31
+#define HGATP32_MODE GENMASK(31, 31)
#define HGATP32_VMID_SHIFT 22
#define HGATP32_VMID GENMASK(28, 22)
#define HGATP32_PPN GENMASK(21, 0)
#define HGATP64_MODE_SHIFT 60
+#define HGATP64_MODE GENMASK(63, 60)
#define HGATP64_VMID_SHIFT 44
#define HGATP64_VMID GENMASK(57, 44)
#define HGATP64_PPN GENMASK(43, 0)
@@ -167,11 +174,13 @@
#define HGATP_VMID_SHIFT HGATP64_VMID_SHIFT
#define HGATP_VMID HGATP64_VMID
#define HGATP_MODE_SHIFT HGATP64_MODE_SHIFT
+#define HGATP_MODE HGATP64_MODE
#else
#define HGATP_PPN HGATP32_PPN
#define HGATP_VMID_SHIFT HGATP32_VMID_SHIFT
#define HGATP_VMID HGATP32_VMID
#define HGATP_MODE_SHIFT HGATP32_MODE_SHIFT
+#define HGATP_MODE HGATP32_MODE
#endif
/* VSIP & HVIP relation */
@@ -237,6 +246,14 @@
#define MSECCFG_PMM_PMLEN_7 ENVCFG_PMM_PMLEN_7
#define MSECCFG_PMM_PMLEN_16 ENVCFG_PMM_PMLEN_16
+#define CSR_NUM_PRIV_SHIFT 8
+#define CSR_NUM_PRIV_MASK 0x3
+
+#define CSR_PRIV_USER 0
+#define CSR_PRIV_SUPERVISOR 1
+#define CSR_PRIV_HYPERVISOR 2
+#define CSR_PRIV_MACHINE 3
+
/* symbolic CSR names: */
#define CSR_CYCLE 0xc00
#define CSR_TIME 0xc01
diff --git a/arch/riscv/include/asm/kvm_vcpu_nested.h b/arch/riscv/include/asm/kvm_vcpu_nested.h
index 4935ab0db1a2..5262ec4f37b7 100644
--- a/arch/riscv/include/asm/kvm_vcpu_nested.h
+++ b/arch/riscv/include/asm/kvm_vcpu_nested.h
@@ -65,6 +65,48 @@ struct kvm_vcpu_nested {
int kvm_riscv_vcpu_nested_insn_sret(struct kvm_vcpu *vcpu, struct kvm_run *run, ulong insn);
+int kvm_riscv_vcpu_nested_smode_csr_rmw(struct kvm_vcpu *vcpu, unsigned int csr_num,
+ unsigned long *val, unsigned long new_val,
+ unsigned long wr_mask);
+int kvm_riscv_vcpu_nested_hext_csr_rmw(struct kvm_vcpu *vcpu, unsigned int csr_num,
+ unsigned long *val, unsigned long new_val,
+ unsigned long wr_mask);
+
+#define KVM_RISCV_VCPU_NESTED_SMODE_CSR_FUNCS \
+{ .base = CSR_SIE, .count = 1, .func = kvm_riscv_vcpu_nested_smode_csr_rmw }, \
+{ .base = CSR_SIEH, .count = 1, .func = kvm_riscv_vcpu_nested_smode_csr_rmw }, \
+{ .base = CSR_SIP, .count = 1, .func = kvm_riscv_vcpu_nested_smode_csr_rmw }, \
+{ .base = CSR_SIPH, .count = 1, .func = kvm_riscv_vcpu_nested_smode_csr_rmw },
+
+#define KVM_RISCV_VCPU_NESTED_HEXT_CSR_FUNCS \
+{ .base = CSR_HSTATUS, .count = 1, .func = kvm_riscv_vcpu_nested_hext_csr_rmw }, \
+{ .base = CSR_HEDELEG, .count = 1, .func = kvm_riscv_vcpu_nested_hext_csr_rmw }, \
+{ .base = CSR_HIDELEG, .count = 1, .func = kvm_riscv_vcpu_nested_hext_csr_rmw }, \
+{ .base = CSR_HIE, .count = 1, .func = kvm_riscv_vcpu_nested_hext_csr_rmw }, \
+{ .base = CSR_HTIMEDELTA, .count = 1, .func = kvm_riscv_vcpu_nested_hext_csr_rmw }, \
+{ .base = CSR_HCOUNTEREN, .count = 1, .func = kvm_riscv_vcpu_nested_hext_csr_rmw }, \
+{ .base = CSR_HGEIE, .count = 1, .func = kvm_riscv_vcpu_nested_hext_csr_rmw }, \
+{ .base = CSR_HENVCFG, .count = 1, .func = kvm_riscv_vcpu_nested_hext_csr_rmw }, \
+{ .base = CSR_HTIMEDELTAH, .count = 1, .func = kvm_riscv_vcpu_nested_hext_csr_rmw }, \
+{ .base = CSR_HENVCFGH, .count = 1, .func = kvm_riscv_vcpu_nested_hext_csr_rmw }, \
+{ .base = CSR_HTVAL, .count = 1, .func = kvm_riscv_vcpu_nested_hext_csr_rmw }, \
+{ .base = CSR_HIP, .count = 1, .func = kvm_riscv_vcpu_nested_hext_csr_rmw }, \
+{ .base = CSR_HVIP, .count = 1, .func = kvm_riscv_vcpu_nested_hext_csr_rmw }, \
+{ .base = CSR_HTINST, .count = 1, .func = kvm_riscv_vcpu_nested_hext_csr_rmw }, \
+{ .base = CSR_HGATP, .count = 1, .func = kvm_riscv_vcpu_nested_hext_csr_rmw }, \
+{ .base = CSR_HGEIP, .count = 1, .func = kvm_riscv_vcpu_nested_hext_csr_rmw }, \
+{ .base = CSR_VSSTATUS, .count = 1, .func = kvm_riscv_vcpu_nested_hext_csr_rmw }, \
+{ .base = CSR_VSIE, .count = 1, .func = kvm_riscv_vcpu_nested_hext_csr_rmw }, \
+{ .base = CSR_VSTVEC, .count = 1, .func = kvm_riscv_vcpu_nested_hext_csr_rmw }, \
+{ .base = CSR_VSSCRATCH, .count = 1, .func = kvm_riscv_vcpu_nested_hext_csr_rmw }, \
+{ .base = CSR_VSEPC, .count = 1, .func = kvm_riscv_vcpu_nested_hext_csr_rmw }, \
+{ .base = CSR_VSCAUSE, .count = 1, .func = kvm_riscv_vcpu_nested_hext_csr_rmw }, \
+{ .base = CSR_VSTVAL, .count = 1, .func = kvm_riscv_vcpu_nested_hext_csr_rmw }, \
+{ .base = CSR_VSIP, .count = 1, .func = kvm_riscv_vcpu_nested_hext_csr_rmw }, \
+{ .base = CSR_VSATP, .count = 1, .func = kvm_riscv_vcpu_nested_hext_csr_rmw },
+
+void kvm_riscv_vcpu_nested_csr_reset(struct kvm_vcpu *vcpu);
+
int kvm_riscv_vcpu_nested_swtlb_xlate(struct kvm_vcpu *vcpu,
const struct kvm_cpu_trap *trap,
struct kvm_gstage_mapping *out_map,
diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile
index c0534d4a469e..40f385f229f4 100644
--- a/arch/riscv/kvm/Makefile
+++ b/arch/riscv/kvm/Makefile
@@ -26,6 +26,7 @@ kvm-y += vcpu_exit.o
kvm-y += vcpu_fp.o
kvm-y += vcpu_insn.o
kvm-y += vcpu_nested.o
+kvm-y += vcpu_nested_csr.o
kvm-y += vcpu_nested_insn.o
kvm-y += vcpu_nested_swtlb.o
kvm-y += vcpu_onereg.o
diff --git a/arch/riscv/kvm/vcpu_insn.c b/arch/riscv/kvm/vcpu_insn.c
index ebd0cfc1bf30..0246ca2d5e93 100644
--- a/arch/riscv/kvm/vcpu_insn.c
+++ b/arch/riscv/kvm/vcpu_insn.c
@@ -142,6 +142,8 @@ static const struct csr_func csr_funcs[] = {
KVM_RISCV_VCPU_AIA_CSR_FUNCS
KVM_RISCV_VCPU_HPMCOUNTER_CSR_FUNCS
{ .base = CSR_SEED, .count = 1, .func = seed_csr_rmw },
+ KVM_RISCV_VCPU_NESTED_SMODE_CSR_FUNCS
+ KVM_RISCV_VCPU_NESTED_HEXT_CSR_FUNCS
};
/**
diff --git a/arch/riscv/kvm/vcpu_nested.c b/arch/riscv/kvm/vcpu_nested.c
index 9b2b3369a232..1b4898d9c72c 100644
--- a/arch/riscv/kvm/vcpu_nested.c
+++ b/arch/riscv/kvm/vcpu_nested.c
@@ -224,11 +224,10 @@ void kvm_riscv_vcpu_nested_vsirq_process(struct kvm_vcpu *vcpu)
void kvm_riscv_vcpu_nested_reset(struct kvm_vcpu *vcpu)
{
struct kvm_vcpu_nested *ns = &vcpu->arch.nested;
- struct kvm_vcpu_nested_csr *ncsr = &vcpu->arch.nested.csr;
ns->virt = false;
kvm_riscv_vcpu_nested_swtlb_reset(vcpu);
- memset(ncsr, 0, sizeof(*ncsr));
+ kvm_riscv_vcpu_nested_csr_reset(vcpu);
}
int kvm_riscv_vcpu_nested_init(struct kvm_vcpu *vcpu)
diff --git a/arch/riscv/kvm/vcpu_nested_csr.c b/arch/riscv/kvm/vcpu_nested_csr.c
new file mode 100644
index 000000000000..0e427f224954
--- /dev/null
+++ b/arch/riscv/kvm/vcpu_nested_csr.c
@@ -0,0 +1,361 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2026 Qualcomm Technologies, Inc.
+ */
+
+#include <linux/kvm_host.h>
+#include <linux/pgtable.h>
+#include <asm/csr.h>
+
+#define NESTED_SIE_WRITEABLE (BIT(IRQ_S_SOFT) | BIT(IRQ_S_TIMER) | BIT(IRQ_S_EXT))
+#define NESTED_HVIP_WRITEABLE (BIT(IRQ_VS_SOFT) | BIT(IRQ_VS_TIMER) | BIT(IRQ_VS_EXT))
+#define NESTED_HIDELEG_WRITEABLE NESTED_HVIP_WRITEABLE
+#define NESTED_HEDELEG_WRITEABLE \
+ (BIT(EXC_INST_MISALIGNED) | \
+ BIT(EXC_INST_ACCESS) | \
+ BIT(EXC_INST_ILLEGAL) | \
+ BIT(EXC_BREAKPOINT) | \
+ BIT(EXC_LOAD_MISALIGNED) | \
+ BIT(EXC_LOAD_ACCESS) | \
+ BIT(EXC_STORE_MISALIGNED) | \
+ BIT(EXC_STORE_ACCESS) | \
+ BIT(EXC_SYSCALL) | \
+ BIT(EXC_INST_PAGE_FAULT) | \
+ BIT(EXC_LOAD_PAGE_FAULT) | \
+ BIT(EXC_STORE_PAGE_FAULT))
+#define NESTED_HCOUNTEREN_WRITEABLE -1UL
+#define NESTED_VSIE_WRITEABLE NESTED_SIE_WRITEABLE
+#define NESTED_VSCAUSE_WRITEABLE GENMASK(4, 0)
+
+int kvm_riscv_vcpu_nested_smode_csr_rmw(struct kvm_vcpu *vcpu, unsigned int csr_num,
+ unsigned long *val, unsigned long new_val,
+ unsigned long wr_mask)
+{
+ struct kvm_vcpu_nested_csr *nsc = &vcpu->arch.nested.csr;
+ unsigned long *csr, csr_rdor = 0;
+ unsigned long writeable_mask = 0;
+#ifdef CONFIG_32BIT
+ unsigned long zero = 0;
+#endif
+ int csr_shift = 0;
+
+ /*
+ * These CSRs should never trap for virtual-HS/U modes because
+ * we only emulate these CSRs for virtual-VS/VU modes.
+ */
+ if (!kvm_riscv_vcpu_nested_virt(vcpu))
+ return -EINVAL;
+
+ /*
+ * Access of these CSRs from virtual-VU mode should be forwarded
+ * as illegal instruction trap to virtual-HS mode.
+ */
+ if (!(vcpu->arch.guest_context.hstatus & HSTATUS_SPVP))
+ return KVM_INSN_ILLEGAL_TRAP;
+
+ switch (csr_num) {
+ case CSR_SIE:
+ csr = &nsc->vsie;
+ writeable_mask = NESTED_SIE_WRITEABLE & (nsc->hideleg >> VSIP_TO_HVIP_SHIFT);
+ break;
+#ifdef CONFIG_32BIT
+ case CSR_SIEH:
+ csr = &zero;
+ break;
+#endif
+ case CSR_SIP:
+ csr = &nsc->hvip;
+ csr_shift = VSIP_TO_HVIP_SHIFT;
+ writeable_mask = BIT(IRQ_VS_EXT) & nsc->hideleg;
+ break;
+#ifdef CONFIG_32BIT
+ case CSR_SIPH:
+ csr = &zero;
+ break;
+#endif
+ default:
+ return KVM_INSN_ILLEGAL_TRAP;
+ }
+
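+ /*
+ * A positive csr_shift means the backing field keeps these bits at
+ * a higher position than the emulated CSR (e.g. sip is backed by
+ * hvip), so reads shift down and writes shift up; a negative shift
+ * works the other way around.
+ */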
+ if (val)
+ *val = (csr_shift < 0) ? (*csr | csr_rdor) << -csr_shift :
+ (*csr | csr_rdor) >> csr_shift;
+
+ if (wr_mask) {
+ writeable_mask = (csr_shift < 0) ?
+ writeable_mask >> -csr_shift :
+ writeable_mask << csr_shift;
+ wr_mask = (csr_shift < 0) ?
+ wr_mask >> -csr_shift : wr_mask << csr_shift;
+ new_val = (csr_shift < 0) ?
+ new_val >> -csr_shift : new_val << csr_shift;
+ wr_mask &= writeable_mask;
+ *csr = (*csr & ~wr_mask) | (new_val & wr_mask);
+ }
+
+ return KVM_INSN_CONTINUE_NEXT_SEPC;
+}
+
+static int __riscv_vcpu_nested_hext_csr_rmw(struct kvm_vcpu *vcpu,
+ bool priv_check, unsigned int csr_num,
+ unsigned long *val, unsigned long new_val,
+ unsigned long wr_mask)
+{
+ unsigned int csr_priv = (csr_num >> CSR_NUM_PRIV_SHIFT) & CSR_NUM_PRIV_MASK;
+ struct kvm_vcpu_nested_csr *nsc = &vcpu->arch.nested.csr;
+ unsigned long mode, zero = 0, writeable_mask = 0;
+ bool read_only = false, nuke_swtlb = false;
+ unsigned long *csr, csr_rdor = 0;
+ int csr_shift = 0;
+
+ /*
+ * If H-extension is not available for VCPU then forward trap
+ * as illegal instruction trap to virtual-HS mode.
+ */
+ if (!riscv_isa_extension_available(vcpu->arch.isa, h))
+ return KVM_INSN_ILLEGAL_TRAP;
+
+ /*
+ * Trap from virtual-VS and virtual-VU modes should be forwarded
+ * to virtual-HS mode as a virtual instruction trap.
+ */
+ if (priv_check && kvm_riscv_vcpu_nested_virt(vcpu))
+ return (csr_priv == CSR_PRIV_HYPERVISOR) ?
+ KVM_INSN_VIRTUAL_TRAP : KVM_INSN_ILLEGAL_TRAP;
+
+ /*
+ * H-extension CSRs not allowed in virtual-U mode so forward trap
+ * as illegal instruction trap to virtual-HS mode.
+ */
+ if (priv_check && !(vcpu->arch.guest_context.hstatus & HSTATUS_SPVP))
+ return KVM_INSN_ILLEGAL_TRAP;
+
+ switch (csr_num) {
+ case CSR_HSTATUS:
+ csr = &nsc->hstatus;
+ writeable_mask = HSTATUS_VTSR | HSTATUS_VTW | HSTATUS_VTVM |
+ HSTATUS_HU | HSTATUS_SPVP | HSTATUS_SPV |
+ HSTATUS_GVA;
+ if (wr_mask & HSTATUS_SPV) {
+ /*
+ * If hstatus.SPV == 1 then enable host SRET
+ * trapping for the virtual-HS mode which will
+ * allow host to do nested world-switch upon
+ * next SRET instruction executed by the
+ * virtual-HS-mode.
+ *
+ * If hstatus.SPV == 0 then disable host SRET
+ * trapping for the virtual-HS mode which will
+ * ensure that host does not do any nested
+ * world-switch for SRET instruction executed
+ * virtual-HS mode for general interrupt and
+ * trap handling.
+ */
+ vcpu->arch.guest_context.hstatus &= ~HSTATUS_VTSR;
+ vcpu->arch.guest_context.hstatus |= (new_val & HSTATUS_SPV) ?
+ HSTATUS_VTSR : 0;
+ }
+ break;
+ case CSR_HEDELEG:
+ csr = &nsc->hedeleg;
+ writeable_mask = NESTED_HEDELEG_WRITEABLE;
+ break;
+ case CSR_HIDELEG:
+ csr = &nsc->hideleg;
+ writeable_mask = NESTED_HIDELEG_WRITEABLE;
+ break;
+ case CSR_HVIP:
+ csr = &nsc->hvip;
+ writeable_mask = NESTED_HVIP_WRITEABLE;
+ break;
+ case CSR_HIE:
+ csr = &nsc->vsie;
+ csr_shift = -VSIP_TO_HVIP_SHIFT;
+ writeable_mask = NESTED_HVIP_WRITEABLE;
+ break;
+ case CSR_HIP:
+ csr = &nsc->hvip;
+ writeable_mask = BIT(IRQ_VS_SOFT);
+ break;
+ case CSR_HGEIP:
+ csr = &zero;
+ read_only = true;
+ break;
+ case CSR_HGEIE:
+ csr = &zero;
+ break;
+ case CSR_HCOUNTEREN:
+ csr = &nsc->hcounteren;
+ writeable_mask = NESTED_HCOUNTEREN_WRITEABLE;
+ break;
+ case CSR_HTIMEDELTA:
+ csr = &nsc->htimedelta;
+ writeable_mask = -1UL;
+ break;
+#ifndef CONFIG_64BIT
+ case CSR_HTIMEDELTAH:
+ csr = &nsc->htimedeltah;
+ writeable_mask = -1UL;
+ break;
+#endif
+ case CSR_HTVAL:
+ csr = &nsc->htval;
+ writeable_mask = -1UL;
+ break;
+ case CSR_HTINST:
+ csr = &nsc->htinst;
+ writeable_mask = -1UL;
+ break;
+ case CSR_HGATP:
+ csr = &nsc->hgatp;
+ writeable_mask = HGATP_MODE | HGATP_VMID | HGATP_PPN;
+ if (wr_mask & HGATP_MODE) {
+ mode = (new_val & HGATP_MODE) >> HGATP_MODE_SHIFT;
+ switch (mode) {
+ /*
+ * Intentionally support only Sv39x4 on RV64 and
+ * Sv32x4 on RV32 for guest G-stage so that software
+ * page table walks on guest G-stage are faster.
+ */
+#ifdef CONFIG_64BIT
+ case HGATP_MODE_SV39X4:
+ if (kvm_riscv_gstage_mode != HGATP_MODE_SV57X4 &&
+ kvm_riscv_gstage_mode != HGATP_MODE_SV48X4 &&
+ kvm_riscv_gstage_mode != HGATP_MODE_SV39X4)
+ mode = HGATP_MODE_OFF;
+ break;
+#else
+ case HGATP_MODE_SV32X4:
+ if (kvm_riscv_gstage_mode != HGATP_MODE_SV32X4)
+ mode = HGATP_MODE_OFF;
+ break;
+#endif
+ default:
+ mode = HGATP_MODE_OFF;
+ break;
+ }
+ new_val &= ~HGATP_MODE;
+ new_val |= (mode << HGATP_MODE_SHIFT) & HGATP_MODE;
+ if ((new_val ^ nsc->hgatp) & HGATP_MODE)
+ nuke_swtlb = true;
+ }
+ if (wr_mask & HGATP_VMID) {
+ if ((new_val ^ nsc->hgatp) & HGATP_VMID)
+ nuke_swtlb = true;
+ }
+ break;
+ case CSR_HENVCFG:
+ csr = &nsc->henvcfg;
+#ifdef CONFIG_64BIT
+ writeable_mask = ENVCFG_STCE;
+#endif
+ break;
+#ifdef CONFIG_32BIT
+ case CSR_HENVCFGH:
+ csr = &nsc->henvcfgh;
+ writeable_mask = ENVCFG_STCE >> 32;
+ break;
+#endif
+ case CSR_VSSTATUS:
+ csr = &nsc->vsstatus;
+ writeable_mask = SR_SIE | SR_SPIE | SR_SPP | SR_SUM | SR_MXR | SR_FS | SR_VS;
+ break;
+ case CSR_VSIP:
+ csr = &nsc->hvip;
+ csr_shift = VSIP_TO_HVIP_SHIFT;
+ writeable_mask = BIT(IRQ_VS_SOFT) & nsc->hideleg;
+ break;
+ case CSR_VSIE:
+ csr = &nsc->vsie;
+ writeable_mask = NESTED_VSIE_WRITEABLE & (nsc->hideleg >> VSIP_TO_HVIP_SHIFT);
+ break;
+ case CSR_VSTVEC:
+ csr = &nsc->vstvec;
+ writeable_mask = -1UL;
+ break;
+ case CSR_VSSCRATCH:
+ csr = &nsc->vsscratch;
+ writeable_mask = -1UL;
+ break;
+ case CSR_VSEPC:
+ csr = &nsc->vsepc;
+ writeable_mask = -1UL;
+ break;
+ case CSR_VSCAUSE:
+ csr = &nsc->vscause;
+ writeable_mask = NESTED_VSCAUSE_WRITEABLE;
+ break;
+ case CSR_VSTVAL:
+ csr = &nsc->vstval;
+ writeable_mask = -1UL;
+ break;
+ case CSR_VSATP:
+ csr = &nsc->vsatp;
+ writeable_mask = SATP_MODE | SATP_ASID | SATP_PPN;
+ if (wr_mask & SATP_MODE) {
+ mode = new_val & SATP_MODE;
+ switch (mode) {
+#ifdef CONFIG_64BIT
+ case SATP_MODE_57:
+ if (!pgtable_l5_enabled)
+ mode = SATP_MODE_OFF;
+ break;
+ case SATP_MODE_48:
+ if (!pgtable_l5_enabled && !pgtable_l4_enabled)
+ mode = SATP_MODE_OFF;
+ break;
+ case SATP_MODE_39:
+ break;
+#else
+ case SATP_MODE_32:
+ break;
+#endif
+ default:
+ mode = SATP_MODE_OFF;
+ break;
+ }
+ new_val &= ~SATP_MODE;
+ new_val |= mode & SATP_MODE;
+ }
+ break;
+ default:
+ return KVM_INSN_ILLEGAL_TRAP;
+ }
+
+ if (val)
+ *val = (csr_shift < 0) ? (*csr | csr_rdor) << -csr_shift :
+ (*csr | csr_rdor) >> csr_shift;
+
+ if (read_only && wr_mask)
+ return KVM_INSN_ILLEGAL_TRAP;
+
+ if (wr_mask) {
+ writeable_mask = (csr_shift < 0) ?
+ writeable_mask >> -csr_shift :
+ writeable_mask << csr_shift;
+ wr_mask = (csr_shift < 0) ?
+ wr_mask >> -csr_shift : wr_mask << csr_shift;
+ new_val = (csr_shift < 0) ?
+ new_val >> -csr_shift : new_val << csr_shift;
+ wr_mask &= writeable_mask;
+ *csr = (*csr & ~wr_mask) | (new_val & wr_mask);
+ }
+
+ if (nuke_swtlb)
+ kvm_riscv_vcpu_nested_swtlb_gvma_flush(vcpu, 0, 0, 0);
+
+ return KVM_INSN_CONTINUE_NEXT_SEPC;
+}
+
+int kvm_riscv_vcpu_nested_hext_csr_rmw(struct kvm_vcpu *vcpu, unsigned int csr_num,
+ unsigned long *val, unsigned long new_val,
+ unsigned long wr_mask)
+{
+ return __riscv_vcpu_nested_hext_csr_rmw(vcpu, true, csr_num, val, new_val, wr_mask);
+}
+
+void kvm_riscv_vcpu_nested_csr_reset(struct kvm_vcpu *vcpu)
+{
+ struct kvm_vcpu_nested_csr *nsc = &vcpu->arch.nested.csr;
+
+ memset(nsc, 0, sizeof(*nsc));
+}
--
2.43.0
* [PATCH 23/27] RISC-V: KVM: Add HFENCE emulation for nested virtualization
2026-01-20 7:59 [PATCH 00/27] Nested virtualization for KVM RISC-V Anup Patel
` (21 preceding siblings ...)
2026-01-20 8:00 ` [PATCH 22/27] RISC-V: KVM: Add CSR emulation for nested virtualization Anup Patel
@ 2026-01-20 8:00 ` Anup Patel
2026-01-20 8:00 ` [PATCH 24/27] RISC-V: KVM: Add ONE_REG interface for nested virtualization state Anup Patel
` (4 subsequent siblings)
27 siblings, 0 replies; 38+ messages in thread
From: Anup Patel @ 2026-01-20 8:00 UTC (permalink / raw)
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Shuah Khan,
Anup Patel, Andrew Jones, kvm-riscv, kvm, linux-riscv,
linux-kernel, linux-kselftest, Anup Patel
The Guest HS-mode (aka L1/guest hypervisor) needs the HFENCE
instructions for TLB maintenance of nested guest address translations,
so add corresponding HFENCE emulation.
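For context, these instructions encode the guest-physical address
right-shifted by 2 in rs1 (hence the '<< 2' when decoding below). A
hedged example of how L1 would issue one, assuming a toolchain that
accepts the hypervisor mnemonics:

  /* Illustrative only: flush one guest-physical page across all VMIDs */
  static inline void sketch_hfence_gvma_gpa(unsigned long gpa)
  {
          asm volatile ("hfence.gvma %0, zero" : : "r" (gpa >> 2) : "memory");
  }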
Signed-off-by: Anup Patel <anup.patel@oss.qualcomm.com>
---
arch/riscv/include/asm/insn.h | 6 ++
arch/riscv/include/asm/kvm_vcpu_nested.h | 4 ++
arch/riscv/kvm/vcpu_insn.c | 10 +++
arch/riscv/kvm/vcpu_nested_insn.c | 86 ++++++++++++++++++++++++
4 files changed, 106 insertions(+)
diff --git a/arch/riscv/include/asm/insn.h b/arch/riscv/include/asm/insn.h
index 24a8abb3283c..6896ba0581b5 100644
--- a/arch/riscv/include/asm/insn.h
+++ b/arch/riscv/include/asm/insn.h
@@ -340,6 +340,12 @@ static __always_inline bool riscv_insn_is_c_jalr(u32 code)
#define INSN_MASK_WRS 0xffffffff
#define INSN_MATCH_WRS 0x00d00073
+#define INSN_MASK_HFENCE_VVMA 0xfe007fff
+#define INSN_MATCH_HFENCE_VVMA 0x22000073
+
+#define INSN_MASK_HFENCE_GVMA 0xfe007fff
+#define INSN_MATCH_HFENCE_GVMA 0x62000073
+
#define INSN_MATCH_CSRRW 0x1073
#define INSN_MASK_CSRRW 0x707f
#define INSN_MATCH_CSRRS 0x2073
diff --git a/arch/riscv/include/asm/kvm_vcpu_nested.h b/arch/riscv/include/asm/kvm_vcpu_nested.h
index 5262ec4f37b7..db6d89cf9771 100644
--- a/arch/riscv/include/asm/kvm_vcpu_nested.h
+++ b/arch/riscv/include/asm/kvm_vcpu_nested.h
@@ -64,6 +64,10 @@ struct kvm_vcpu_nested {
#define kvm_riscv_vcpu_nested_virt(__vcpu) ((__vcpu)->arch.nested.virt)
int kvm_riscv_vcpu_nested_insn_sret(struct kvm_vcpu *vcpu, struct kvm_run *run, ulong insn);
+int kvm_riscv_vcpu_nested_insn_hfence_vvma(struct kvm_vcpu *vcpu, struct kvm_run *run,
+ ulong insn);
+int kvm_riscv_vcpu_nested_insn_hfence_gvma(struct kvm_vcpu *vcpu, struct kvm_run *run,
+ ulong insn);
int kvm_riscv_vcpu_nested_smode_csr_rmw(struct kvm_vcpu *vcpu, unsigned int csr_num,
unsigned long *val, unsigned long new_val,
diff --git a/arch/riscv/kvm/vcpu_insn.c b/arch/riscv/kvm/vcpu_insn.c
index 0246ca2d5e93..8f11cda133ac 100644
--- a/arch/riscv/kvm/vcpu_insn.c
+++ b/arch/riscv/kvm/vcpu_insn.c
@@ -303,6 +303,16 @@ static const struct insn_func system_opcode_funcs[] = {
.match = INSN_MATCH_WRS,
.func = wrs_insn,
},
+ {
+ .mask = INSN_MASK_HFENCE_VVMA,
+ .match = INSN_MATCH_HFENCE_VVMA,
+ .func = kvm_riscv_vcpu_nested_insn_hfence_vvma,
+ },
+ {
+ .mask = INSN_MASK_HFENCE_GVMA,
+ .match = INSN_MATCH_HFENCE_GVMA,
+ .func = kvm_riscv_vcpu_nested_insn_hfence_gvma,
+ },
};
static int system_opcode_insn(struct kvm_vcpu *vcpu, struct kvm_run *run,
diff --git a/arch/riscv/kvm/vcpu_nested_insn.c b/arch/riscv/kvm/vcpu_nested_insn.c
index 8f5b2992dbb9..7e57d3215930 100644
--- a/arch/riscv/kvm/vcpu_nested_insn.c
+++ b/arch/riscv/kvm/vcpu_nested_insn.c
@@ -4,6 +4,7 @@
*/
#include <linux/kvm_host.h>
+#include <asm/insn.h>
#include <asm/kvm_nacl.h>
#include <asm/kvm_vcpu_insn.h>
@@ -52,3 +53,88 @@ int kvm_riscv_vcpu_nested_insn_sret(struct kvm_vcpu *vcpu, struct kvm_run *run,
return KVM_INSN_CONTINUE_SAME_SEPC;
}
+
+int kvm_riscv_vcpu_nested_insn_hfence_vvma(struct kvm_vcpu *vcpu, struct kvm_run *run,
+ ulong insn)
+{
+ unsigned int vmid = (vcpu->arch.nested.csr.hgatp & HGATP_VMID) >> HGATP_VMID_SHIFT;
+ unsigned long vaddr = GET_RS1(insn, &vcpu->arch.guest_context);
+ unsigned int asid = GET_RS2(insn, &vcpu->arch.guest_context);
+ unsigned int rs1_num = (insn >> SH_RS1) & MASK_RX;
+ unsigned int rs2_num = (insn >> SH_RS2) & MASK_RX;
+
+ /*
+ * If H-extension is not available for VCPU then forward trap
+ * as illegal instruction trap to virtual-HS mode.
+ */
+ if (!riscv_isa_extension_available(vcpu->arch.isa, h))
+ return KVM_INSN_ILLEGAL_TRAP;
+
+ /*
+ * Trap from virtual-VS and virtual-VU modes should be forwarded
+ * to virtual-HS mode as a virtual instruction trap.
+ */
+ if (kvm_riscv_vcpu_nested_virt(vcpu))
+ return KVM_INSN_VIRTUAL_TRAP;
+
+ /*
+ * H-extension instructions not allowed in virtual-U mode so
+ * forward trap as illegal instruction trap to virtual-HS mode.
+ */
+ if (!(vcpu->arch.guest_context.hstatus & HSTATUS_SPVP))
+ return KVM_INSN_ILLEGAL_TRAP;
+
+ if (!rs1_num && !rs2_num)
+ kvm_riscv_vcpu_nested_swtlb_vvma_flush(vcpu, 0, 0, 0, vmid);
+ else if (!rs1_num && rs2_num)
+ kvm_riscv_vcpu_nested_swtlb_vvma_flush_asid(vcpu, 0, 0, 0, vmid, asid);
+ else if (rs1_num && !rs2_num)
+ kvm_riscv_vcpu_nested_swtlb_vvma_flush(vcpu, vaddr, PAGE_SIZE, PAGE_SHIFT, vmid);
+ else
+ kvm_riscv_vcpu_nested_swtlb_vvma_flush_asid(vcpu, vaddr, PAGE_SIZE, PAGE_SHIFT,
+ vmid, asid);
+
+ return KVM_INSN_CONTINUE_NEXT_SEPC;
+}
+
+int kvm_riscv_vcpu_nested_insn_hfence_gvma(struct kvm_vcpu *vcpu, struct kvm_run *run,
+ ulong insn)
+{
+ unsigned int vmid = GET_RS2(insn, &vcpu->arch.guest_context);
+ gpa_t gaddr = GET_RS1(insn, &vcpu->arch.guest_context) << 2;
+ unsigned int rs1_num = (insn >> SH_RS1) & MASK_RX;
+ unsigned int rs2_num = (insn >> SH_RS2) & MASK_RX;
+
+ /*
+ * If H-extension is not available for VCPU then forward trap
+ * as illegal instruction trap to virtual-HS mode.
+ */
+ if (!riscv_isa_extension_available(vcpu->arch.isa, h))
+ return KVM_INSN_ILLEGAL_TRAP;
+
+ /*
+ * Trap from virtual-VS and virtual-VU modes should be forwarded
+ * to virtual-HS mode as a virtual instruction trap.
+ */
+ if (kvm_riscv_vcpu_nested_virt(vcpu))
+ return KVM_INSN_VIRTUAL_TRAP;
+
+ /*
+ * H-extension instructions not allowed in virtual-U mode so
+ * forward trap as illegal instruction trap to virtual-HS mode.
+ */
+ if (!(vcpu->arch.guest_context.hstatus & HSTATUS_SPVP))
+ return KVM_INSN_ILLEGAL_TRAP;
+
+ if (!rs1_num && !rs2_num)
+ kvm_riscv_vcpu_nested_swtlb_gvma_flush(vcpu, 0, 0, 0);
+ else if (!rs1_num && rs2_num)
+ kvm_riscv_vcpu_nested_swtlb_gvma_flush_vmid(vcpu, 0, 0, 0, vmid);
+ else if (rs1_num && !rs2_num)
+ kvm_riscv_vcpu_nested_swtlb_gvma_flush(vcpu, gaddr, PAGE_SIZE, PAGE_SHIFT);
+ else
+ kvm_riscv_vcpu_nested_swtlb_gvma_flush_vmid(vcpu, gaddr, PAGE_SIZE, PAGE_SHIFT,
+ vmid);
+
+ return KVM_INSN_CONTINUE_NEXT_SEPC;
+}
--
2.43.0
* [PATCH 24/27] RISC-V: KVM: Add ONE_REG interface for nested virtualization state
2026-01-20 7:59 [PATCH 00/27] Nested virtualization for KVM RISC-V Anup Patel
` (22 preceding siblings ...)
2026-01-20 8:00 ` [PATCH 23/27] RISC-V: KVM: Add HFENCE " Anup Patel
@ 2026-01-20 8:00 ` Anup Patel
2026-01-20 8:00 ` [PATCH 25/27] RISC-V: KVM: selftests: Add nested virt state to get-reg-list test Anup Patel
` (3 subsequent siblings)
27 siblings, 0 replies; 38+ messages in thread
From: Anup Patel @ 2026-01-20 8:00 UTC (permalink / raw)
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Shuah Khan,
Anup Patel, Andrew Jones, kvm-riscv, kvm, linux-riscv,
linux-kernel, linux-kselftest, Anup Patel
Add the nested virtualization state to the CORE registers of the KVM
RISC-V ONE_REG interface so that it can be updated from KVM user-space
in the same way as the privilege mode register.
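A hedged user-space sketch of reading the new register (the register
id composition matches the get-reg-list selftest updated later in this
series; error handling is elided):

  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  static int sketch_get_nested_virt(int vcpu_fd, unsigned long *virt)
  {
          struct kvm_one_reg reg = {
                  .id = KVM_REG_RISCV | KVM_REG_SIZE_ULONG |
                        KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(virt),
                  .addr = (uintptr_t)virt,
          };

          return ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
  }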
Signed-off-by: Anup Patel <anup.patel@oss.qualcomm.com>
---
arch/riscv/include/uapi/asm/kvm.h | 1 +
arch/riscv/kvm/vcpu_onereg.c | 5 +++++
2 files changed, 6 insertions(+)
diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h
index 504e73305343..f62eaa47745b 100644
--- a/arch/riscv/include/uapi/asm/kvm.h
+++ b/arch/riscv/include/uapi/asm/kvm.h
@@ -65,6 +65,7 @@ struct kvm_riscv_config {
struct kvm_riscv_core {
struct user_regs_struct regs;
unsigned long mode;
+ unsigned long virt;
};
/* Possible privilege modes for kvm_riscv_core */
diff --git a/arch/riscv/kvm/vcpu_onereg.c b/arch/riscv/kvm/vcpu_onereg.c
index 6b16eee2c833..5f0d10beeb98 100644
--- a/arch/riscv/kvm/vcpu_onereg.c
+++ b/arch/riscv/kvm/vcpu_onereg.c
@@ -219,6 +219,8 @@ static int kvm_riscv_vcpu_get_reg_core(struct kvm_vcpu *vcpu,
else if (reg_num == KVM_REG_RISCV_CORE_REG(mode))
reg_val = (cntx->sstatus & SR_SPP) ?
KVM_RISCV_MODE_S : KVM_RISCV_MODE_U;
+ else if (reg_num == KVM_REG_RISCV_CORE_REG(virt))
+ reg_val = kvm_riscv_vcpu_nested_virt(vcpu);
else
return -ENOENT;
@@ -257,6 +259,9 @@ static int kvm_riscv_vcpu_set_reg_core(struct kvm_vcpu *vcpu,
cntx->sstatus |= SR_SPP;
else
cntx->sstatus &= ~SR_SPP;
+ } else if (reg_num == KVM_REG_RISCV_CORE_REG(virt)) {
+ if (riscv_isa_extension_available(vcpu->arch.isa, h))
+ kvm_riscv_vcpu_nested_virt(vcpu) = !!reg_val;
} else
return -ENOENT;
--
2.43.0
* [PATCH 25/27] RISC-V: KVM: selftests: Add nested virt state to get-reg-list test
2026-01-20 7:59 [PATCH 00/27] Nested virtualization for KVM RISC-V Anup Patel
` (23 preceding siblings ...)
2026-01-20 8:00 ` [PATCH 24/27] RISC-V: KVM: Add ONE_REG interface for nested virtualization state Anup Patel
@ 2026-01-20 8:00 ` Anup Patel
2026-01-20 8:00 ` [PATCH 26/27] RISC-V: KVM: Add ONE_REG interface for nested virtualization CSRs Anup Patel
` (2 subsequent siblings)
27 siblings, 0 replies; 38+ messages in thread
From: Anup Patel @ 2026-01-20 8:00 UTC (permalink / raw)
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Shuah Khan,
Anup Patel, Andrew Jones, kvm-riscv, kvm, linux-riscv,
linux-kernel, linux-kselftest, Anup Patel
KVM RISC-V allows the Guest/VM nested virtualization state to be
accessed via ONE_REG so add it to the get-reg-list test.
Signed-off-by: Anup Patel <anup.patel@oss.qualcomm.com>
---
tools/testing/selftests/kvm/riscv/get-reg-list.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/tools/testing/selftests/kvm/riscv/get-reg-list.c b/tools/testing/selftests/kvm/riscv/get-reg-list.c
index 8d6b951434eb..53af7a453327 100644
--- a/tools/testing/selftests/kvm/riscv/get-reg-list.c
+++ b/tools/testing/selftests/kvm/riscv/get-reg-list.c
@@ -314,6 +314,8 @@ static const char *core_id_to_str(const char *prefix, __u64 id)
reg_off - KVM_REG_RISCV_CORE_REG(regs.t3) + 3);
case KVM_REG_RISCV_CORE_REG(mode):
return "KVM_REG_RISCV_CORE_REG(mode)";
+ case KVM_REG_RISCV_CORE_REG(virt):
+ return "KVM_REG_RISCV_CORE_REG(virt)";
}
return strdup_printf("%lld /* UNKNOWN */", reg_off);
@@ -855,6 +857,7 @@ static __u64 base_regs[] = {
KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(regs.t5),
KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(regs.t6),
KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(mode),
+ KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CORE | KVM_REG_RISCV_CORE_REG(virt),
KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_GENERAL | KVM_REG_RISCV_CSR_REG(sstatus),
KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_GENERAL | KVM_REG_RISCV_CSR_REG(sie),
KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_GENERAL | KVM_REG_RISCV_CSR_REG(stvec),
--
2.43.0
^ permalink raw reply related [flat|nested] 38+ messages in thread
* [PATCH 26/27] RISC-V: KVM: Add ONE_REG interface for nested virtualization CSRs
2026-01-20 7:59 [PATCH 00/27] Nested virtualization for KVM RISC-V Anup Patel
` (24 preceding siblings ...)
2026-01-20 8:00 ` [PATCH 25/27] RISC-V: KVM: selftests: Add nested virt state to get-reg-list test Anup Patel
@ 2026-01-20 8:00 ` Anup Patel
2026-01-20 8:00 ` [PATCH 27/27] RISC-V: KVM: selftests: Add nested virt CSRs to get-reg-list test Anup Patel
2026-04-03 12:36 ` [PATCH 00/27] Nested virtualization for KVM RISC-V Anup Patel
27 siblings, 0 replies; 38+ messages in thread
From: Anup Patel @ 2026-01-20 8:00 UTC (permalink / raw)
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Shuah Khan,
Anup Patel, Andrew Jones, kvm-riscv, kvm, linux-riscv,
linux-kernel, linux-kselftest, Anup Patel
Add nested virtualization CSRs to the KVM RISC-V ONE_REG interface
so that they can be saved and restored from KVM user-space.
Signed-off-by: Anup Patel <anup.patel@oss.qualcomm.com>
---
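Note: for illustration, a VMM could restore one of these CSRs along the
lines of the sketch below (hypothetical helper, rv64 assumed; the
KVM_REG_RISCV_CSR_HEXT encoding matches the copy_csr_reg_indices()
hunk in this patch):

	#include <sys/ioctl.h>
	#include <linux/kvm.h>
	#include <asm/kvm.h>

	/* Restore the guest hypervisor's hgatp CSR, e.g. after migration. */
	static int set_nested_hgatp(int vcpu_fd, unsigned long val)
	{
		struct kvm_one_reg reg = {
			.id = KVM_REG_RISCV | KVM_REG_SIZE_U64 |
			      KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_HEXT |
			      KVM_REG_RISCV_CSR_HEXT_REG(hgatp),
			.addr = (unsigned long)&val,
		};

		return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
	}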
arch/riscv/include/asm/kvm_vcpu_nested.h | 5 ++++
arch/riscv/include/uapi/asm/kvm.h | 27 +++++++++++++++++++++
arch/riscv/kvm/vcpu_nested_csr.c | 28 ++++++++++++++++++++++
arch/riscv/kvm/vcpu_onereg.c | 30 ++++++++++++++++++++++--
4 files changed, 88 insertions(+), 2 deletions(-)
diff --git a/arch/riscv/include/asm/kvm_vcpu_nested.h b/arch/riscv/include/asm/kvm_vcpu_nested.h
index db6d89cf9771..9ae0e3795522 100644
--- a/arch/riscv/include/asm/kvm_vcpu_nested.h
+++ b/arch/riscv/include/asm/kvm_vcpu_nested.h
@@ -111,6 +111,11 @@ int kvm_riscv_vcpu_nested_hext_csr_rmw(struct kvm_vcpu *vcpu, unsigned int csr_n
void kvm_riscv_vcpu_nested_csr_reset(struct kvm_vcpu *vcpu);
+int kvm_riscv_vcpu_nested_set_csr(struct kvm_vcpu *vcpu, unsigned long reg_num,
+ unsigned long reg_val);
+int kvm_riscv_vcpu_nested_get_csr(struct kvm_vcpu *vcpu, unsigned long reg_num,
+ unsigned long *out_val);
+
int kvm_riscv_vcpu_nested_swtlb_xlate(struct kvm_vcpu *vcpu,
const struct kvm_cpu_trap *trap,
struct kvm_gstage_mapping *out_map,
diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h
index f62eaa47745b..a486d73e64ce 100644
--- a/arch/riscv/include/uapi/asm/kvm.h
+++ b/arch/riscv/include/uapi/asm/kvm.h
@@ -103,6 +103,30 @@ struct kvm_riscv_smstateen_csr {
unsigned long sstateen0;
};
+/* H-extension CSR for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
+struct kvm_riscv_hext_csr {
+ unsigned long hstatus;
+ unsigned long hedeleg;
+ unsigned long hideleg;
+ unsigned long hvip;
+ unsigned long hcounteren;
+ unsigned long htimedelta;
+ unsigned long htimedeltah;
+ unsigned long htval;
+ unsigned long htinst;
+ unsigned long henvcfg;
+ unsigned long henvcfgh;
+ unsigned long hgatp;
+ unsigned long vsstatus;
+ unsigned long vsie;
+ unsigned long vstvec;
+ unsigned long vsscratch;
+ unsigned long vsepc;
+ unsigned long vscause;
+ unsigned long vstval;
+ unsigned long vsatp;
+};
+
/* TIMER registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */
struct kvm_riscv_timer {
__u64 frequency;
@@ -264,12 +288,15 @@ struct kvm_riscv_sbi_fwft {
#define KVM_REG_RISCV_CSR_GENERAL (0x0 << KVM_REG_RISCV_SUBTYPE_SHIFT)
#define KVM_REG_RISCV_CSR_AIA (0x1 << KVM_REG_RISCV_SUBTYPE_SHIFT)
#define KVM_REG_RISCV_CSR_SMSTATEEN (0x2 << KVM_REG_RISCV_SUBTYPE_SHIFT)
+#define KVM_REG_RISCV_CSR_HEXT (0x3 << KVM_REG_RISCV_SUBTYPE_SHIFT)
#define KVM_REG_RISCV_CSR_REG(name) \
(offsetof(struct kvm_riscv_csr, name) / sizeof(unsigned long))
#define KVM_REG_RISCV_CSR_AIA_REG(name) \
(offsetof(struct kvm_riscv_aia_csr, name) / sizeof(unsigned long))
#define KVM_REG_RISCV_CSR_SMSTATEEN_REG(name) \
(offsetof(struct kvm_riscv_smstateen_csr, name) / sizeof(unsigned long))
+#define KVM_REG_RISCV_CSR_HEXT_REG(name) \
+ (offsetof(struct kvm_riscv_hext_csr, name) / sizeof(unsigned long))
/* Timer registers are mapped as type 4 */
#define KVM_REG_RISCV_TIMER (0x04 << KVM_REG_RISCV_TYPE_SHIFT)
diff --git a/arch/riscv/kvm/vcpu_nested_csr.c b/arch/riscv/kvm/vcpu_nested_csr.c
index 0e427f224954..887e84d15321 100644
--- a/arch/riscv/kvm/vcpu_nested_csr.c
+++ b/arch/riscv/kvm/vcpu_nested_csr.c
@@ -359,3 +359,31 @@ void kvm_riscv_vcpu_nested_csr_reset(struct kvm_vcpu *vcpu)
memset(nsc, 0, sizeof(*nsc));
}
+
+int kvm_riscv_vcpu_nested_set_csr(struct kvm_vcpu *vcpu, unsigned long reg_num,
+ unsigned long reg_val)
+{
+ struct kvm_vcpu_nested_csr *nsc = &vcpu->arch.nested.csr;
+
+ if (!riscv_isa_extension_available(vcpu->arch.isa, h))
+ return -ENOENT;
+ if (reg_num >= sizeof(struct kvm_riscv_hext_csr) / sizeof(unsigned long))
+ return -ENOENT;
+
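+ /*
+ * reg_num indexes the UAPI struct kvm_riscv_hext_csr, which is
+ * assumed to mirror the layout of the leading unsigned long
+ * fields of the internal nested CSR state, so a flat indexed
+ * access is sufficient.
+ */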
+ ((unsigned long *)nsc)[reg_num] = reg_val;
+ return 0;
+}
+
+int kvm_riscv_vcpu_nested_get_csr(struct kvm_vcpu *vcpu, unsigned long reg_num,
+ unsigned long *out_val)
+{
+ struct kvm_vcpu_nested_csr *nsc = &vcpu->arch.nested.csr;
+
+ if (!riscv_isa_extension_available(vcpu->arch.isa, h))
+ return -ENOENT;
+ if (reg_num >= sizeof(struct kvm_riscv_hext_csr) / sizeof(unsigned long))
+ return -ENOENT;
+
+ *out_val = ((unsigned long *)nsc)[reg_num];
+ return 0;
+}
diff --git a/arch/riscv/kvm/vcpu_onereg.c b/arch/riscv/kvm/vcpu_onereg.c
index 5f0d10beeb98..6bae3753b924 100644
--- a/arch/riscv/kvm/vcpu_onereg.c
+++ b/arch/riscv/kvm/vcpu_onereg.c
@@ -367,6 +367,9 @@ static int kvm_riscv_vcpu_get_reg_csr(struct kvm_vcpu *vcpu,
case KVM_REG_RISCV_CSR_SMSTATEEN:
rc = kvm_riscv_vcpu_smstateen_get_csr(vcpu, reg_num, &reg_val);
break;
+ case KVM_REG_RISCV_CSR_HEXT:
+ rc = kvm_riscv_vcpu_nested_get_csr(vcpu, reg_num, &reg_val);
+ break;
default:
rc = -ENOENT;
break;
@@ -409,6 +412,9 @@ static int kvm_riscv_vcpu_set_reg_csr(struct kvm_vcpu *vcpu,
case KVM_REG_RISCV_CSR_SMSTATEEN:
rc = kvm_riscv_vcpu_smstateen_set_csr(vcpu, reg_num, reg_val);
break;
+ case KVM_REG_RISCV_CSR_HEXT:
+ rc = kvm_riscv_vcpu_nested_set_csr(vcpu, reg_num, reg_val);
+ break;
default:
rc = -ENOENT;
break;
@@ -664,6 +670,8 @@ static inline unsigned long num_csr_regs(const struct kvm_vcpu *vcpu)
n += sizeof(struct kvm_riscv_aia_csr) / sizeof(unsigned long);
if (riscv_isa_extension_available(vcpu->arch.isa, SMSTATEEN))
n += sizeof(struct kvm_riscv_smstateen_csr) / sizeof(unsigned long);
+ if (riscv_isa_extension_available(vcpu->arch.isa, h))
+ n += sizeof(struct kvm_riscv_hext_csr) / sizeof(unsigned long);
return n;
}
@@ -672,7 +680,7 @@ static int copy_csr_reg_indices(const struct kvm_vcpu *vcpu,
u64 __user *uindices)
{
int n1 = sizeof(struct kvm_riscv_csr) / sizeof(unsigned long);
- int n2 = 0, n3 = 0;
+ int n2 = 0, n3 = 0, n4 = 0;
/* copy general csr regs */
for (int i = 0; i < n1; i++) {
@@ -724,7 +732,25 @@ static int copy_csr_reg_indices(const struct kvm_vcpu *vcpu,
}
}
- return n1 + n2 + n3;
+ /* copy H-extension csr regs */
+ if (riscv_isa_extension_available(vcpu->arch.isa, h)) {
+ n4 = sizeof(struct kvm_riscv_hext_csr) / sizeof(unsigned long);
+
+ for (int i = 0; i < n4; i++) {
+ u64 size = IS_ENABLED(CONFIG_32BIT) ?
+ KVM_REG_SIZE_U32 : KVM_REG_SIZE_U64;
+ u64 reg = KVM_REG_RISCV | size | KVM_REG_RISCV_CSR |
+ KVM_REG_RISCV_CSR_HEXT | i;
+
+ if (uindices) {
+ if (put_user(reg, uindices))
+ return -EFAULT;
+ uindices++;
+ }
+ }
+ }
+
+ return n1 + n2 + n3 + n4;
}
static inline unsigned long num_timer_regs(void)
--
2.43.0
^ permalink raw reply related [flat|nested] 38+ messages in thread
* [PATCH 27/27] RISC-V: KVM: selftests: Add nested virt CSRs to get-reg-list test
2026-01-20 7:59 [PATCH 00/27] Nested virtualization for KVM RISC-V Anup Patel
` (25 preceding siblings ...)
2026-01-20 8:00 ` [PATCH 26/27] RISC-V: KVM: Add ONE_REG interface for nested virtualization CSRs Anup Patel
@ 2026-01-20 8:00 ` Anup Patel
2026-04-03 12:36 ` [PATCH 00/27] Nested virtualization for KVM RISC-V Anup Patel
27 siblings, 0 replies; 38+ messages in thread
From: Anup Patel @ 2026-01-20 8:00 UTC (permalink / raw)
To: Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Shuah Khan,
Anup Patel, Andrew Jones, kvm-riscv, kvm, linux-riscv,
linux-kernel, linux-kselftest, Anup Patel
KVM RISC-V allows the Guest/VM nested virtualization CSRs to be
accessed via ONE_REG so add them to the get-reg-list test.
Signed-off-by: Anup Patel <anup.patel@oss.qualcomm.com>
---
.../selftests/kvm/riscv/get-reg-list.c | 103 +++++++++++++++++-
1 file changed, 102 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/kvm/riscv/get-reg-list.c b/tools/testing/selftests/kvm/riscv/get-reg-list.c
index 53af7a453327..88dc08c611cd 100644
--- a/tools/testing/selftests/kvm/riscv/get-reg-list.c
+++ b/tools/testing/selftests/kvm/riscv/get-reg-list.c
@@ -327,6 +327,8 @@ static const char *core_id_to_str(const char *prefix, __u64 id)
"KVM_REG_RISCV_CSR_AIA | KVM_REG_RISCV_CSR_REG(" #csr ")"
#define RISCV_CSR_SMSTATEEN(csr) \
"KVM_REG_RISCV_CSR_SMSTATEEN | KVM_REG_RISCV_CSR_REG(" #csr ")"
+#define RISCV_CSR_HEXT(csr) \
+ "KVM_REG_RISCV_CSR_HEXT | KVM_REG_RISCV_CSR_REG(" #csr ")"
static const char *general_csr_id_to_str(__u64 reg_off)
{
@@ -394,6 +396,56 @@ static const char *smstateen_csr_id_to_str(__u64 reg_off)
return NULL;
}
+static const char *hext_csr_id_to_str(__u64 reg_off)
+{
+ /* reg_off is the offset into struct kvm_riscv_hext_csr */
+ switch (reg_off) {
+ case KVM_REG_RISCV_CSR_HEXT_REG(hstatus):
+ return RISCV_CSR_HEXT(hstatus);
+ case KVM_REG_RISCV_CSR_HEXT_REG(hedeleg):
+ return RISCV_CSR_HEXT(hedeleg);
+ case KVM_REG_RISCV_CSR_HEXT_REG(hideleg):
+ return RISCV_CSR_HEXT(hideleg);
+ case KVM_REG_RISCV_CSR_HEXT_REG(hvip):
+ return RISCV_CSR_HEXT(hvip);
+ case KVM_REG_RISCV_CSR_HEXT_REG(hcounteren):
+ return RISCV_CSR_HEXT(hcounteren);
+ case KVM_REG_RISCV_CSR_HEXT_REG(htimedelta):
+ return RISCV_CSR_HEXT(htimedelta);
+ case KVM_REG_RISCV_CSR_HEXT_REG(htimedeltah):
+ return RISCV_CSR_HEXT(htimedeltah);
+ case KVM_REG_RISCV_CSR_HEXT_REG(htval):
+ return RISCV_CSR_HEXT(htval);
+ case KVM_REG_RISCV_CSR_HEXT_REG(htinst):
+ return RISCV_CSR_HEXT(htinst);
+ case KVM_REG_RISCV_CSR_HEXT_REG(henvcfg):
+ return RISCV_CSR_HEXT(henvcfg);
+ case KVM_REG_RISCV_CSR_HEXT_REG(henvcfgh):
+ return RISCV_CSR_HEXT(henvcfgh);
+ case KVM_REG_RISCV_CSR_HEXT_REG(hgatp):
+ return RISCV_CSR_HEXT(hgatp);
+ case KVM_REG_RISCV_CSR_HEXT_REG(vsstatus):
+ return RISCV_CSR_HEXT(vsstatus);
+ case KVM_REG_RISCV_CSR_HEXT_REG(vsie):
+ return RISCV_CSR_HEXT(vsie);
+ case KVM_REG_RISCV_CSR_HEXT_REG(vstvec):
+ return RISCV_CSR_HEXT(vstvec);
+ case KVM_REG_RISCV_CSR_HEXT_REG(vsscratch):
+ return RISCV_CSR_HEXT(vsscratch);
+ case KVM_REG_RISCV_CSR_HEXT_REG(vsepc):
+ return RISCV_CSR_HEXT(vsepc);
+ case KVM_REG_RISCV_CSR_HEXT_REG(vscause):
+ return RISCV_CSR_HEXT(vscause);
+ case KVM_REG_RISCV_CSR_HEXT_REG(vstval):
+ return RISCV_CSR_HEXT(vstval);
+ case KVM_REG_RISCV_CSR_HEXT_REG(vsatp):
+ return RISCV_CSR_HEXT(vsatp);
+ }
+
+ TEST_FAIL("Unknown h-extension csr reg: 0x%llx", reg_off);
+ return NULL;
+}
+
static const char *csr_id_to_str(const char *prefix, __u64 id)
{
__u64 reg_off = id & ~(REG_MASK | KVM_REG_RISCV_CSR);
@@ -410,6 +462,8 @@ static const char *csr_id_to_str(const char *prefix, __u64 id)
return aia_csr_id_to_str(reg_off);
case KVM_REG_RISCV_CSR_SMSTATEEN:
return smstateen_csr_id_to_str(reg_off);
+ case KVM_REG_RISCV_CSR_HEXT:
+ return hext_csr_id_to_str(reg_off);
}
return strdup_printf("%lld | %lld /* UNKNOWN */", reg_subtype, reg_off);
@@ -941,6 +995,51 @@ static __u64 smstateen_regs[] = {
KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_SMSTATEEN,
};
+static __u64 h_regs[] = {
+ KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_HEXT |
+ KVM_REG_RISCV_CSR_HEXT_REG(hstatus),
+ KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_HEXT |
+ KVM_REG_RISCV_CSR_HEXT_REG(hedeleg),
+ KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_HEXT |
+ KVM_REG_RISCV_CSR_HEXT_REG(hideleg),
+ KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_HEXT |
+ KVM_REG_RISCV_CSR_HEXT_REG(hvip),
+ KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_HEXT |
+ KVM_REG_RISCV_CSR_HEXT_REG(hcounteren),
+ KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_HEXT |
+ KVM_REG_RISCV_CSR_HEXT_REG(htimedelta),
+ KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_HEXT |
+ KVM_REG_RISCV_CSR_HEXT_REG(htimedeltah),
+ KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_HEXT |
+ KVM_REG_RISCV_CSR_HEXT_REG(htval),
+ KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_HEXT |
+ KVM_REG_RISCV_CSR_HEXT_REG(htinst),
+ KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_HEXT |
+ KVM_REG_RISCV_CSR_HEXT_REG(henvcfg),
+ KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_HEXT |
+ KVM_REG_RISCV_CSR_HEXT_REG(henvcfgh),
+ KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_HEXT |
+ KVM_REG_RISCV_CSR_HEXT_REG(hgatp),
+ KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_HEXT |
+ KVM_REG_RISCV_CSR_HEXT_REG(vsstatus),
+ KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_HEXT |
+ KVM_REG_RISCV_CSR_HEXT_REG(vsie),
+ KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_HEXT |
+ KVM_REG_RISCV_CSR_HEXT_REG(vstvec),
+ KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_HEXT |
+ KVM_REG_RISCV_CSR_HEXT_REG(vsscratch),
+ KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_HEXT |
+ KVM_REG_RISCV_CSR_HEXT_REG(vsepc),
+ KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_HEXT |
+ KVM_REG_RISCV_CSR_HEXT_REG(vscause),
+ KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_HEXT |
+ KVM_REG_RISCV_CSR_HEXT_REG(vstval),
+ KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_HEXT |
+ KVM_REG_RISCV_CSR_HEXT_REG(vsatp),
+ KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE |
+ KVM_RISCV_ISA_EXT_H,
+};
+
static __u64 fp_f_regs[] = {
KVM_REG_RISCV | KVM_REG_SIZE_U32 | KVM_REG_RISCV_FP_F | KVM_REG_RISCV_FP_F_REG(f[0]),
KVM_REG_RISCV | KVM_REG_SIZE_U32 | KVM_REG_RISCV_FP_F | KVM_REG_RISCV_FP_F_REG(f[1]),
@@ -1079,6 +1178,8 @@ static __u64 vector_regs[] = {
{"aia", .feature = KVM_RISCV_ISA_EXT_SSAIA, .regs = aia_regs, .regs_n = ARRAY_SIZE(aia_regs),}
#define SUBLIST_SMSTATEEN \
{"smstateen", .feature = KVM_RISCV_ISA_EXT_SMSTATEEN, .regs = smstateen_regs, .regs_n = ARRAY_SIZE(smstateen_regs),}
+#define SUBLIST_H \
+ {"h", .feature = KVM_RISCV_ISA_EXT_H, .regs = h_regs, .regs_n = ARRAY_SIZE(h_regs),}
#define SUBLIST_FP_F \
{"fp_f", .feature = KVM_RISCV_ISA_EXT_F, .regs = fp_f_regs, \
.regs_n = ARRAY_SIZE(fp_f_regs),}
@@ -1160,7 +1261,7 @@ KVM_ISA_EXT_SUBLIST_CONFIG(aia, AIA);
KVM_ISA_EXT_SUBLIST_CONFIG(fp_f, FP_F);
KVM_ISA_EXT_SUBLIST_CONFIG(fp_d, FP_D);
KVM_ISA_EXT_SUBLIST_CONFIG(v, V);
-KVM_ISA_EXT_SIMPLE_CONFIG(h, H);
+KVM_ISA_EXT_SUBLIST_CONFIG(h, H);
KVM_ISA_EXT_SIMPLE_CONFIG(smnpm, SMNPM);
KVM_ISA_EXT_SUBLIST_CONFIG(smstateen, SMSTATEEN);
KVM_ISA_EXT_SIMPLE_CONFIG(sscofpmf, SSCOFPMF);
--
2.43.0
^ permalink raw reply related [flat|nested] 38+ messages in thread
* Re: [PATCH 01/27] RISC-V: KVM: Fix error code returned for Smstateen ONE_REG
2026-01-20 7:59 ` [PATCH 01/27] RISC-V: KVM: Fix error code returned for Smstateen ONE_REG Anup Patel
@ 2026-03-06 7:04 ` Anup Patel
0 siblings, 0 replies; 38+ messages in thread
From: Anup Patel @ 2026-03-06 7:04 UTC (permalink / raw)
To: Anup Patel
Cc: Paolo Bonzini, Atish Patra, Palmer Dabbelt, Paul Walmsley,
Alexandre Ghiti, Shuah Khan, Andrew Jones, kvm-riscv, kvm,
linux-riscv, linux-kernel, linux-kselftest
On Tue, Jan 20, 2026 at 1:30 PM Anup Patel <anup.patel@oss.qualcomm.com> wrote:
>
> Return -ENOENT for Smstateen ONE_REG when:
> 1) Smstateen is not enabled for a VCPU
> 2) When ONE_REG id is out of range
>
> This will make Smstateen ONE_REG error codes consistent
> with other ONE_REG interfaces of KVM RISC-V.
>
> Fixes: c04913f2b54e ("RISCV: KVM: Add sstateen0 to ONE_REG")
> Signed-off-by: Anup Patel <anup.patel@oss.qualcomm.com>
Queued this as fix for Linux-7.0-rcX
Regards,
Anup
> ---
> arch/riscv/kvm/vcpu_onereg.c | 18 ++++++++----------
> 1 file changed, 8 insertions(+), 10 deletions(-)
>
> diff --git a/arch/riscv/kvm/vcpu_onereg.c b/arch/riscv/kvm/vcpu_onereg.c
> index e7ab6cb00646..6dab4deed86d 100644
> --- a/arch/riscv/kvm/vcpu_onereg.c
> +++ b/arch/riscv/kvm/vcpu_onereg.c
> @@ -549,9 +549,11 @@ static inline int kvm_riscv_vcpu_smstateen_set_csr(struct kvm_vcpu *vcpu,
> {
> struct kvm_vcpu_smstateen_csr *csr = &vcpu->arch.smstateen_csr;
>
> + if (!riscv_isa_extension_available(vcpu->arch.isa, SMSTATEEN))
> + return -ENOENT;
> if (reg_num >= sizeof(struct kvm_riscv_smstateen_csr) /
> sizeof(unsigned long))
> - return -EINVAL;
> + return -ENOENT;
>
> ((unsigned long *)csr)[reg_num] = reg_val;
> return 0;
> @@ -563,9 +565,11 @@ static int kvm_riscv_vcpu_smstateen_get_csr(struct kvm_vcpu *vcpu,
> {
> struct kvm_vcpu_smstateen_csr *csr = &vcpu->arch.smstateen_csr;
>
> + if (!riscv_isa_extension_available(vcpu->arch.isa, SMSTATEEN))
> + return -ENOENT;
> if (reg_num >= sizeof(struct kvm_riscv_smstateen_csr) /
> sizeof(unsigned long))
> - return -EINVAL;
> + return -ENOENT;
>
> *out_val = ((unsigned long *)csr)[reg_num];
> return 0;
> @@ -595,10 +599,7 @@ static int kvm_riscv_vcpu_get_reg_csr(struct kvm_vcpu *vcpu,
> rc = kvm_riscv_vcpu_aia_get_csr(vcpu, reg_num, &reg_val);
> break;
> case KVM_REG_RISCV_CSR_SMSTATEEN:
> - rc = -EINVAL;
> - if (riscv_has_extension_unlikely(RISCV_ISA_EXT_SMSTATEEN))
> - rc = kvm_riscv_vcpu_smstateen_get_csr(vcpu, reg_num,
> - &reg_val);
> + rc = kvm_riscv_vcpu_smstateen_get_csr(vcpu, reg_num, &reg_val);
> break;
> default:
> rc = -ENOENT;
> @@ -640,10 +641,7 @@ static int kvm_riscv_vcpu_set_reg_csr(struct kvm_vcpu *vcpu,
> rc = kvm_riscv_vcpu_aia_set_csr(vcpu, reg_num, reg_val);
> break;
> case KVM_REG_RISCV_CSR_SMSTATEEN:
> - rc = -EINVAL;
> - if (riscv_has_extension_unlikely(RISCV_ISA_EXT_SMSTATEEN))
> - rc = kvm_riscv_vcpu_smstateen_set_csr(vcpu, reg_num,
> - reg_val);
> + rc = kvm_riscv_vcpu_smstateen_set_csr(vcpu, reg_num, reg_val);
> break;
> default:
> rc = -ENOENT;
> --
> 2.43.0
>
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [PATCH 02/27] RISC-V: KVM: Fix error code returned for Ssaia ONE_REG
2026-01-20 7:59 ` [PATCH 02/27] RISC-V: KVM: Fix error code returned for Ssaia ONE_REG Anup Patel
@ 2026-03-06 7:04 ` Anup Patel
0 siblings, 0 replies; 38+ messages in thread
From: Anup Patel @ 2026-03-06 7:04 UTC (permalink / raw)
To: Anup Patel
Cc: Paolo Bonzini, Atish Patra, Palmer Dabbelt, Paul Walmsley,
Alexandre Ghiti, Shuah Khan, Andrew Jones, kvm-riscv, kvm,
linux-riscv, linux-kernel, linux-kselftest
On Tue, Jan 20, 2026 at 1:30 PM Anup Patel <anup.patel@oss.qualcomm.com> wrote:
>
> Return -ENOENT for Ssaia ONE_REG when Ssaia is not enabled
> for a VCPU.
>
> This will make Ssaia ONE_REG error codes consistent with
> other ONE_REG interfaces of KVM RISC-V.
>
> Fixes: 2a88f38cd58d ("RISC-V: KVM: return ENOENT in *_one_reg() when reg is unknown")
> Signed-off-by: Anup Patel <anup.patel@oss.qualcomm.com>
Queued this as fix for Linux-7.0-rcX
Regards,
Anup
> ---
> arch/riscv/kvm/aia.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/arch/riscv/kvm/aia.c b/arch/riscv/kvm/aia.c
> index dad318185660..31baea9f0589 100644
> --- a/arch/riscv/kvm/aia.c
> +++ b/arch/riscv/kvm/aia.c
> @@ -183,6 +183,8 @@ int kvm_riscv_vcpu_aia_get_csr(struct kvm_vcpu *vcpu,
> {
> struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia_context.guest_csr;
>
> + if (!riscv_isa_extension_available(vcpu->arch.isa, SSAIA))
> + return -ENOENT;
> if (reg_num >= sizeof(struct kvm_riscv_aia_csr) / sizeof(unsigned long))
> return -ENOENT;
>
> @@ -199,6 +201,8 @@ int kvm_riscv_vcpu_aia_set_csr(struct kvm_vcpu *vcpu,
> {
> struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia_context.guest_csr;
>
> + if (!riscv_isa_extension_available(vcpu->arch.isa, SSAIA))
> + return -ENOENT;
> if (reg_num >= sizeof(struct kvm_riscv_aia_csr) / sizeof(unsigned long))
> return -ENOENT;
>
> --
> 2.43.0
>
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [PATCH 03/27] RISC-V: KVM: Check host Ssaia extension when creating AIA irqchip
2026-01-20 7:59 ` [PATCH 03/27] RISC-V: KVM: Check host Ssaia extension when creating AIA irqchip Anup Patel
@ 2026-03-06 7:04 ` Anup Patel
0 siblings, 0 replies; 38+ messages in thread
From: Anup Patel @ 2026-03-06 7:04 UTC (permalink / raw)
To: Anup Patel
Cc: Paolo Bonzini, Atish Patra, Palmer Dabbelt, Paul Walmsley,
Alexandre Ghiti, Shuah Khan, Andrew Jones, kvm-riscv, kvm,
linux-riscv, linux-kernel, linux-kselftest
On Tue, Jan 20, 2026 at 1:30 PM Anup Patel <anup.patel@oss.qualcomm.com> wrote:
>
> The KVM user-space may create the KVM AIA irqchip before checking
> VCPU Ssaia extension availability, so KVM AIA irqchip creation must
> fail when the host does not have the Ssaia extension.
>
> Fixes: 89d01306e34d ("RISC-V: KVM: Implement device interface for AIA irqchip")
> Signed-off-by: Anup Patel <anup.patel@oss.qualcomm.com>
Queued this as fix for Linux-7.0-rcX
Regards,
Anup
> ---
> arch/riscv/kvm/aia_device.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/arch/riscv/kvm/aia_device.c b/arch/riscv/kvm/aia_device.c
> index b195a93add1c..bed4d2c8c44c 100644
> --- a/arch/riscv/kvm/aia_device.c
> +++ b/arch/riscv/kvm/aia_device.c
> @@ -11,6 +11,7 @@
> #include <linux/irqchip/riscv-imsic.h>
> #include <linux/kvm_host.h>
> #include <linux/uaccess.h>
> +#include <linux/cpufeature.h>
>
> static int aia_create(struct kvm_device *dev, u32 type)
> {
> @@ -22,6 +23,9 @@ static int aia_create(struct kvm_device *dev, u32 type)
> if (irqchip_in_kernel(kvm))
> return -EEXIST;
>
> + if (!riscv_isa_extension_available(NULL, SSAIA))
> + return -ENODEV;
> +
> ret = -EBUSY;
> if (kvm_trylock_all_vcpus(kvm))
> return ret;
> --
> 2.43.0
>
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [PATCH 09/27] RISC-V: KVM: Don't check hstateen0 when updating sstateen0 CSR
2026-01-20 7:59 ` [PATCH 09/27] RISC-V: KVM: Don't check hstateen0 when updating sstateen0 CSR Anup Patel
@ 2026-03-13 13:27 ` Radim Krčmář
0 siblings, 0 replies; 38+ messages in thread
From: Radim Krčmář @ 2026-03-13 13:27 UTC (permalink / raw)
To: Anup Patel, Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Shuah Khan,
Anup Patel, Andrew Jones, kvm-riscv, kvm, linux-riscv,
linux-kernel, linux-kselftest
2026-01-20T13:29:55+05:30, Anup Patel <anup.patel@oss.qualcomm.com>:
> The hstateen0 will be programmed differently for guest HS-mode
> and guest VS/VU-mode so don't check hstateen0.SSTATEEN0 bit when
> updating sstateen0 CSR in kvm_riscv_vcpu_swap_in_guest_state()
> and kvm_riscv_vcpu_swap_in_host_state().
>
> Signed-off-by: Anup Patel <anup.patel@oss.qualcomm.com>
> ---
> diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
> @@ -702,28 +702,22 @@ static __always_inline void kvm_riscv_vcpu_swap_in_guest_state(struct kvm_vcpu *
> - if (riscv_has_extension_unlikely(RISCV_ISA_EXT_SMSTATEEN) &&
> - (cfg->hstateen0 & SMSTATEEN0_SSTATEEN0))
> - vcpu->arch.host_sstateen0 = csr_swap(CSR_SSTATEEN0,
> - smcsr->sstateen0);
> + if (riscv_has_extension_unlikely(RISCV_ISA_EXT_SMSTATEEN))
> + vcpu->arch.host_sstateen0 = csr_swap(CSR_SSTATEEN0, smcsr->sstateen0);
This could even be considered a fix, although there is no bug at the
moment (both host and guest sstateen are always 0).
In the future, running a guest could have tampered with the host
sstateen, because sstateen remains active even when hstateen.SE0=0.
Reviewed-by: Radim Krčmář <radim.krcmar@oss.qualcomm.com>
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [PATCH 08/27] RISC-V: KVM: Factor-out VCPU config into separate sources
2026-01-20 7:59 ` [PATCH 08/27] RISC-V: KVM: Factor-out VCPU config into separate sources Anup Patel
@ 2026-03-13 13:46 ` Radim Krčmář
0 siblings, 0 replies; 38+ messages in thread
From: Radim Krčmář @ 2026-03-13 13:46 UTC (permalink / raw)
To: Anup Patel, Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Shuah Khan,
Anup Patel, Andrew Jones, kvm-riscv, kvm, linux-riscv,
linux-kernel, linux-kselftest
2026-01-20T13:29:54+05:30, Anup Patel <anup.patel@oss.qualcomm.com>:
> The VCPU config deals with hideleg, hedeleg, henvcfg, and hstateenX
> CSR configuration for each VCPU. Factor-out VCPU config into separate
> sources so that VCPU config can do things differently for guest HS-mode
> and guest VS/VU-mode.
>
> Signed-off-by: Anup Patel <anup.patel@oss.qualcomm.com>
> ---
> diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
> @@ -871,7 +820,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
> struct kvm_run *run = vcpu->run;
>
> if (!vcpu->arch.ran_atleast_once)
> - kvm_riscv_vcpu_setup_config(vcpu);
> + kvm_riscv_vcpu_config_ran_once(vcpu);
>
> /* Mark this VCPU ran at least once */
> vcpu->arch.ran_atleast_once = true;
> diff --git a/arch/riscv/kvm/vcpu_config.c b/arch/riscv/kvm/vcpu_config.c
> +void kvm_riscv_vcpu_config_ran_once(struct kvm_vcpu *vcpu)
ran_once is a bit of an awkward name since it hasn't run once yet at that point...
Maybe _once or _first_run? Not that it matters,
Reviewed-by: Radim Krčmář <radim.krcmar@oss.qualcomm.com>
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [PATCH 07/27] RISC-V: KVM: Add hideleg to struct kvm_vcpu_config
2026-01-20 7:59 ` [PATCH 07/27] RISC-V: KVM: Add hideleg to struct kvm_vcpu_config Anup Patel
@ 2026-03-13 13:49 ` Radim Krčmář
0 siblings, 0 replies; 38+ messages in thread
From: Radim Krčmář @ 2026-03-13 13:49 UTC (permalink / raw)
To: Anup Patel, Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Shuah Khan,
Anup Patel, Andrew Jones, kvm-riscv, kvm, linux-riscv,
linux-kernel, linux-kselftest
2026-01-20T13:29:53+05:30, Anup Patel <anup.patel@oss.qualcomm.com>:
> The hideleg CSR state when VCPU is running in guest VS/VU-mode will
> be different from when it is running in guest HS-mode. To achieve
> this, add hideleg to struct kvm_vcpu_config and re-program hideleg
> CSR upon every kvm_arch_vcpu_load().
>
> Signed-off-by: Anup Patel <anup.patel@oss.qualcomm.com>
> ---
Reviewed-by: Radim Krčmář <radim.krcmar@oss.qualcomm.com>
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [PATCH 05/27] RISC-V: KVM: Factor-out ISA checks into separate sources
2026-01-20 7:59 ` [PATCH 05/27] RISC-V: KVM: Factor-out ISA checks into separate sources Anup Patel
@ 2026-03-13 14:14 ` Radim Krčmář
2026-04-03 12:34 ` Anup Patel
0 siblings, 1 reply; 38+ messages in thread
From: Radim Krčmář @ 2026-03-13 14:14 UTC (permalink / raw)
To: Anup Patel, Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Shuah Khan,
Anup Patel, Andrew Jones, kvm-riscv, kvm, linux-riscv,
linux-kernel, linux-kselftest
2026-01-20T13:29:51+05:30, Anup Patel <anup.patel@oss.qualcomm.com>:
> The KVM ISA extension related checks are not VCPU specific and
> should be factored out of vcpu_onereg.c into separate sources.
>
> Signed-off-by: Anup Patel <anup.patel@oss.qualcomm.com>
> ---
> diff --git a/arch/riscv/kvm/aia_device.c b/arch/riscv/kvm/aia_device.c
> @@ -12,6 +12,7 @@
> #include <linux/kvm_host.h>
> #include <linux/uaccess.h>
> #include <linux/cpufeature.h>
> +#include <asm/kvm_isa.h>
I guess <cpufeature.h> isn't needed anymore,
Reviewed-by: Radim Krčmář <radim.krcmar@oss.qualcomm.com>
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [PATCH 04/27] RISC-V: KVM: Introduce common kvm_riscv_isa_check_host()
2026-01-20 7:59 ` [PATCH 04/27] RISC-V: KVM: Introduce common kvm_riscv_isa_check_host() Anup Patel
@ 2026-03-13 14:22 ` Radim Krčmář
0 siblings, 0 replies; 38+ messages in thread
From: Radim Krčmář @ 2026-03-13 14:22 UTC (permalink / raw)
To: Anup Patel, Paolo Bonzini, Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Shuah Khan,
Anup Patel, Andrew Jones, kvm-riscv, kvm, linux-riscv,
linux-kernel, linux-kselftest
2026-01-20T13:29:50+05:30, Anup Patel <anup.patel@oss.qualcomm.com>:
> Rename kvm_riscv_vcpu_isa_check_host() to kvm_riscv_isa_check_host()
> and use it as common function with KVM RISC-V to check isa extensions
> supported by host.
>
> Signed-off-by: Anup Patel <anup.patel@oss.qualcomm.com>
> ---
Reviewed-by: Radim Krčmář <radim.krcmar@oss.qualcomm.com>
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [PATCH 05/27] RISC-V: KVM: Factor-out ISA checks into separate sources
2026-03-13 14:14 ` Radim Krčmář
@ 2026-04-03 12:34 ` Anup Patel
0 siblings, 0 replies; 38+ messages in thread
From: Anup Patel @ 2026-04-03 12:34 UTC (permalink / raw)
To: Radim Krčmář
Cc: Anup Patel, Paolo Bonzini, Atish Patra, Palmer Dabbelt,
Paul Walmsley, Alexandre Ghiti, Shuah Khan, Andrew Jones,
kvm-riscv, kvm, linux-riscv, linux-kernel, linux-kselftest
On Fri, Mar 13, 2026 at 7:44 PM Radim Krčmář
<radim.krcmar@oss.qualcomm.com> wrote:
>
> 2026-01-20T13:29:51+05:30, Anup Patel <anup.patel@oss.qualcomm.com>:
> > The KVM ISA extension related checks are not VCPU specific and
> > should be factored out of vcpu_onereg.c into separate sources.
> >
> > Signed-off-by: Anup Patel <anup.patel@oss.qualcomm.com>
> > ---
> > diff --git a/arch/riscv/kvm/aia_device.c b/arch/riscv/kvm/aia_device.c
> > @@ -12,6 +12,7 @@
> > #include <linux/kvm_host.h>
> > #include <linux/uaccess.h>
> > #include <linux/cpufeature.h>
> > +#include <asm/kvm_isa.h>
>
> I guess <cpufeature.h> isn't needed anymore,
Okay, I will drop this include at the time of merging this patch.
>
> Reviewed-by: Radim Krčmář <radim.krcmar@oss.qualcomm.com>
Regards,
Anup
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [PATCH 00/27] Nested virtualization for KVM RISC-V
2026-01-20 7:59 [PATCH 00/27] Nested virtualization for KVM RISC-V Anup Patel
` (26 preceding siblings ...)
2026-01-20 8:00 ` [PATCH 27/27] RISC-V: KVM: selftests: Add nested virt CSRs to get-reg-list test Anup Patel
@ 2026-04-03 12:36 ` Anup Patel
27 siblings, 0 replies; 38+ messages in thread
From: Anup Patel @ 2026-04-03 12:36 UTC (permalink / raw)
To: Anup Patel
Cc: Paolo Bonzini, Atish Patra, Palmer Dabbelt, Paul Walmsley,
Alexandre Ghiti, Shuah Khan, Andrew Jones, kvm-riscv, kvm,
linux-riscv, linux-kernel, linux-kselftest
On Tue, Jan 20, 2026 at 1:30 PM Anup Patel <anup.patel@oss.qualcomm.com> wrote:
>
> Initial nested virtualization support for KVM RISC-V. Using
> this series, we can boot Xvisor inside KVM guest and KVM RISC-V
> insmod also works inside KVM guest but we can't run nested guest
> at the moment due to work-in-progress G-stage emulation (or
> G-stage page table walker).
>
> Patch01-to-Patch09: Fixes and preparatory changes
> Patch10-to-Patch23: Actual nested virtualization support
> Patch24-to-Patch27: ONE_REG interface and get-reg-list selftest
>
> Upcoming work on-top-of this series include:
> * Software MMU emulation for nested guest (aka swtlb)
> * HLV/HSV emulation
> * Sstc emulation for nested guest
> * SBI NACL for guest hypervisor
> * ... and more ...
>
> These patches can also be found in the riscv_kvm_nested_v1
> branch at: https://github.com/avpatel/linux.git
>
> Anup Patel (27):
> RISC-V: KVM: Fix error code returned for Smstateen ONE_REG
> RISC-V: KVM: Fix error code returned for Ssaia ONE_REG
> RISC-V: KVM: Check host Ssaia extension when creating AIA irqchip
> RISC-V: KVM: Introduce common kvm_riscv_isa_check_host()
> RISC-V: KVM: Factor-out ISA checks into separate sources
> RISC-V: KVM: Move timer state defines closer to struct in UAPI header
> RISC-V: KVM: Add hideleg to struct kvm_vcpu_config
> RISC-V: KVM: Factor-out VCPU config into separate sources
> RISC-V: KVM: Don't check hstateen0 when updating sstateen0 CSR
> RISC-V: KVM: Initial skeletal nested virtualization support
> RISC-V: KVM: Use half VMID space for nested guest
> RISC-V: KVM: Extend kvm_riscv_mmu_update_hgatp() for nested
> virtualization
> RISC-V: KVM: Extend kvm_riscv_vcpu_config_load() for nested
> virtualization
> RISC-V: KVM: Extend kvm_riscv_vcpu_update_timedelta() for nested virt
> RISC-V: KVM: Extend trap redirection for nested virtualization
> RISC-V: KVM: Check and inject nested virtual interrupts
> RISC-V: KVM: Extend kvm_riscv_isa_check_host() for nested virt
> RISC-V: KVM: Trap-n-emulate SRET for Guest HS-mode
> RISC-V: KVM: Redirect nested supervisor ecall and breakpoint traps
> RISC-V: KVM: Redirect nested WFI and WRS traps
> RISC-V: KVM: Implement remote HFENCE SBI calls for guest
> RISC-V: KVM: Add CSR emulation for nested virtualization
> RISC-V: KVM: Add HFENCE emulation for nested virtualization
> RISC-V: KVM: Add ONE_REG interface for nested virtualization state
> RISC-V: KVM: selftests: Add nested virt state to get-reg-list test
> RISC-V: KVM: Add ONE_REG interface for nested virtualization CSRs
> RISC-V: KVM: selftests: Add nested virt CSRs to get-reg-list test
Patches 1-to-3 are already merged as fixes.
Queued patches 4-to-9 for Linux-7.1
Thanks,
Anup
>
> arch/riscv/include/asm/csr.h | 17 +
> arch/riscv/include/asm/insn.h | 9 +
> arch/riscv/include/asm/kvm_gstage.h | 2 +
> arch/riscv/include/asm/kvm_host.h | 29 +-
> arch/riscv/include/asm/kvm_isa.h | 20 +
> arch/riscv/include/asm/kvm_mmu.h | 2 +-
> arch/riscv/include/asm/kvm_tlb.h | 37 +-
> arch/riscv/include/asm/kvm_vcpu_config.h | 25 ++
> arch/riscv/include/asm/kvm_vcpu_nested.h | 163 ++++++++
> arch/riscv/include/asm/kvm_vcpu_timer.h | 1 +
> arch/riscv/include/asm/kvm_vmid.h | 1 +
> arch/riscv/include/uapi/asm/kvm.h | 36 +-
> arch/riscv/kvm/Makefile | 6 +
> arch/riscv/kvm/aia.c | 4 +
> arch/riscv/kvm/aia_device.c | 5 +
> arch/riscv/kvm/gstage.c | 14 +
> arch/riscv/kvm/isa.c | 259 ++++++++++++
> arch/riscv/kvm/main.c | 13 +-
> arch/riscv/kvm/mmu.c | 18 +-
> arch/riscv/kvm/tlb.c | 135 +++++-
> arch/riscv/kvm/vcpu.c | 117 ++----
> arch/riscv/kvm/vcpu_config.c | 130 ++++++
> arch/riscv/kvm/vcpu_exit.c | 62 ++-
> arch/riscv/kvm/vcpu_fp.c | 9 +-
> arch/riscv/kvm/vcpu_insn.c | 46 +++
> arch/riscv/kvm/vcpu_nested.c | 258 ++++++++++++
> arch/riscv/kvm/vcpu_nested_csr.c | 389 ++++++++++++++++++
> arch/riscv/kvm/vcpu_nested_insn.c | 140 +++++++
> arch/riscv/kvm/vcpu_nested_swtlb.c | 146 +++++++
> arch/riscv/kvm/vcpu_onereg.c | 334 +++------------
> arch/riscv/kvm/vcpu_pmu.c | 5 +-
> arch/riscv/kvm/vcpu_sbi_replace.c | 63 ++-
> arch/riscv/kvm/vcpu_timer.c | 24 +-
> arch/riscv/kvm/vcpu_vector.c | 5 +-
> arch/riscv/kvm/vmid.c | 33 +-
> .../selftests/kvm/riscv/get-reg-list.c | 106 ++++-
> 36 files changed, 2244 insertions(+), 419 deletions(-)
> create mode 100644 arch/riscv/include/asm/kvm_isa.h
> create mode 100644 arch/riscv/include/asm/kvm_vcpu_config.h
> create mode 100644 arch/riscv/include/asm/kvm_vcpu_nested.h
> create mode 100644 arch/riscv/kvm/isa.c
> create mode 100644 arch/riscv/kvm/vcpu_config.c
> create mode 100644 arch/riscv/kvm/vcpu_nested.c
> create mode 100644 arch/riscv/kvm/vcpu_nested_csr.c
> create mode 100644 arch/riscv/kvm/vcpu_nested_insn.c
> create mode 100644 arch/riscv/kvm/vcpu_nested_swtlb.c
>
> --
> 2.43.0
>
^ permalink raw reply [flat|nested] 38+ messages in thread
end of thread
Thread overview: 38+ messages
2026-01-20 7:59 [PATCH 00/27] Nested virtualization for KVM RISC-V Anup Patel
2026-01-20 7:59 ` [PATCH 01/27] RISC-V: KVM: Fix error code returned for Smstateen ONE_REG Anup Patel
2026-03-06 7:04 ` Anup Patel
2026-01-20 7:59 ` [PATCH 02/27] RISC-V: KVM: Fix error code returned for Ssaia ONE_REG Anup Patel
2026-03-06 7:04 ` Anup Patel
2026-01-20 7:59 ` [PATCH 03/27] RISC-V: KVM: Check host Ssaia extension when creating AIA irqchip Anup Patel
2026-03-06 7:04 ` Anup Patel
2026-01-20 7:59 ` [PATCH 04/27] RISC-V: KVM: Introduce common kvm_riscv_isa_check_host() Anup Patel
2026-03-13 14:22 ` Radim Krčmář
2026-01-20 7:59 ` [PATCH 05/27] RISC-V: KVM: Factor-out ISA checks into separate sources Anup Patel
2026-03-13 14:14 ` Radim Krčmář
2026-04-03 12:34 ` Anup Patel
2026-01-20 7:59 ` [PATCH 06/27] RISC-V: KVM: Move timer state defines closer to struct in UAPI header Anup Patel
2026-01-20 7:59 ` [PATCH 07/27] RISC-V: KVM: Add hideleg to struct kvm_vcpu_config Anup Patel
2026-03-13 13:49 ` Radim Krčmář
2026-01-20 7:59 ` [PATCH 08/27] RISC-V: KVM: Factor-out VCPU config into separate sources Anup Patel
2026-03-13 13:46 ` Radim Krčmář
2026-01-20 7:59 ` [PATCH 09/27] RISC-V: KVM: Don't check hstateen0 when updating sstateen0 CSR Anup Patel
2026-03-13 13:27 ` Radim Krčmář
2026-01-20 7:59 ` [PATCH 10/27] RISC-V: KVM: Initial skeletal nested virtualization support Anup Patel
2026-01-20 7:59 ` [PATCH 11/27] RISC-V: KVM: Use half VMID space for nested guest Anup Patel
2026-01-20 7:59 ` [PATCH 12/27] RISC-V: KVM: Extend kvm_riscv_mmu_update_hgatp() for nested virtualization Anup Patel
2026-01-20 7:59 ` [PATCH 13/27] RISC-V: KVM: Extend kvm_riscv_vcpu_config_load() " Anup Patel
2026-01-20 8:00 ` [PATCH 14/27] RISC-V: KVM: Extend kvm_riscv_vcpu_update_timedelta() for nested virt Anup Patel
2026-01-20 8:00 ` [PATCH 15/27] RISC-V: KVM: Extend trap redirection for nested virtualization Anup Patel
2026-01-20 8:00 ` [PATCH 16/27] RISC-V: KVM: Check and inject nested virtual interrupts Anup Patel
2026-01-20 8:00 ` [PATCH 17/27] RISC-V: KVM: Extend kvm_riscv_isa_check_host() for nested virt Anup Patel
2026-01-20 8:00 ` [PATCH 18/27] RISC-V: KVM: Trap-n-emulate SRET for Guest HS-mode Anup Patel
2026-01-20 8:00 ` [PATCH 19/27] RISC-V: KVM: Redirect nested supervisor ecall and breakpoint traps Anup Patel
2026-01-20 8:00 ` [PATCH 20/27] RISC-V: KVM: Redirect nested WFI and WRS traps Anup Patel
2026-01-20 8:00 ` [PATCH 21/27] RISC-V: KVM: Implement remote HFENCE SBI calls for guest Anup Patel
2026-01-20 8:00 ` [PATCH 22/27] RISC-V: KVM: Add CSR emulation for nested virtualization Anup Patel
2026-01-20 8:00 ` [PATCH 23/27] RISC-V: KVM: Add HFENCE " Anup Patel
2026-01-20 8:00 ` [PATCH 24/27] RISC-V: KVM: Add ONE_REG interface for nested virtualization state Anup Patel
2026-01-20 8:00 ` [PATCH 25/27] RISC-V: KVM: selftests: Add nested virt state to get-reg-list test Anup Patel
2026-01-20 8:00 ` [PATCH 26/27] RISC-V: KVM: Add ONE_REG interface for nested virtualization CSRs Anup Patel
2026-01-20 8:00 ` [PATCH 27/27] RISC-V: KVM: selftests: Add nested virt CSRs to get-reg-list test Anup Patel
2026-04-03 12:36 ` [PATCH 00/27] Nested virtualization for KVM RISC-V Anup Patel