* [PATCH v1 0/3] KVM: arm64: Fix initializing HCRX_EL2 and other traps in pKVM
@ 2025-02-14 15:02 Fuad Tabba
2025-02-14 15:02 ` [PATCH v1 1/3] KVM: arm64: Initialize HCRX_EL2 " Fuad Tabba
` (2 more replies)
0 siblings, 3 replies; 15+ messages in thread
From: Fuad Tabba @ 2025-02-14 15:02 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: maz, oliver.upton, mark.rutland, will, joey.gouly, suzuki.poulose,
yuzenghui, catalin.marinas, broonie, qperret, vdonnefort, tabba
Since the introduction of initializing and setting HCRX_EL2 traps
in KVM, the pKVM code upstream hasn't been updated to do the same
for the hyp view.
Additionally, the current behavior of pKVM is to initialize the
hyp view of the vm and of _all_ its vcpus when the _first_ vcpu
is run. However, with the introduction of kvm_calculate_traps()
[*], some of the host trap values are not calculated until the
corresponding vcpu is run for the first time. This causes pKVM to
get the wrong view of some of the system registers, e.g.,
HCRX_EL2, for vcpus other than the first-run vcpu, since it uses
the host's values as a starting point, particularly for
non-protected vms. Because of these issues, it might not be
possible to run a non-protected VM when certain features (e.g.,
MOPS) are supported by the system.
This series initializes HCRX_EL2 for VMs in protected mode. It
also initializes the hyp view of each vcpu after its
corresponding host vcpu has been fully initialized, i.e., run for
the first time.
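For reference, a simplified sketch of the resulting first-run path
(lifted from the arm.c hunk in patch 3, with the surrounding
protected-mode checks elided):

	/* in kvm_arch_vcpu_run_pid_change(), on a vcpu's first run */
	ret = pkvm_create_hyp_vm(kvm);	/* VM-wide hyp state; a no-op if already created */
	if (ret)
		return ret;

	/*
	 * Per-vcpu hyp view, now created only after the host vcpu has
	 * been fully initialized, so trap values such as HCRX_EL2 are
	 * already final.
	 */
	ret = pkvm_create_hyp_vcpu(vcpu);
	if (ret)
		return ret;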
This series is based on Linux 6.14-rc2.
Cheers,
/fuad
[*] Commit f1ff3fc5209a ("KVM: arm64: unify code to prepare traps")
Fuad Tabba (3):
KVM: arm64: Initialize HCRX_EL2 traps in pKVM
KVM: arm64: Factor out pkvm hyp vcpu creation to separate function
KVM: arm64: Create each pKVM hyp vcpu after its corresponding host
vcpu
arch/arm64/include/asm/kvm_host.h | 2 +
arch/arm64/include/asm/kvm_pkvm.h | 1 +
arch/arm64/kvm/arm.c | 4 +
arch/arm64/kvm/hyp/include/nvhe/pkvm.h | 6 --
arch/arm64/kvm/hyp/nvhe/pkvm.c | 100 +++++++++++++++++++------
arch/arm64/kvm/pkvm.c | 70 +++++++++--------
6 files changed, 118 insertions(+), 65 deletions(-)
base-commit: a64dcfb451e254085a7daee5fe51bf22959d52d3
--
2.48.1.601.g30ceb7b040-goog
* [PATCH v1 1/3] KVM: arm64: Initialize HCRX_EL2 traps in pKVM
2025-02-14 15:02 [PATCH v1 0/3] KVM: arm64: Fix initializing HCRX_EL2 and other traps in pKVM Fuad Tabba
@ 2025-02-14 15:02 ` Fuad Tabba
2025-02-26 10:07 ` Oliver Upton
2025-02-14 15:02 ` [PATCH v1 2/3] KVM: arm64: Factor out pkvm hyp vcpu creation to separate function Fuad Tabba
2025-02-14 15:02 ` [PATCH v1 3/3] KVM: arm64: Create each pKVM hyp vcpu after its corresponding host vcpu Fuad Tabba
2 siblings, 1 reply; 15+ messages in thread
From: Fuad Tabba @ 2025-02-14 15:02 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: maz, oliver.upton, mark.rutland, will, joey.gouly, suzuki.poulose,
yuzenghui, catalin.marinas, broonie, qperret, vdonnefort, tabba
Initialize and set the traps controlled by the HCRX_EL2 in pKVM
when the register is supported by the system.
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/arm64/kvm/hyp/nvhe/pkvm.c | 46 ++++++++++++++++++++++++++++++++++
1 file changed, 46 insertions(+)
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 3927fe52a3dd..668ebec27f1b 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -58,6 +58,30 @@ static void pkvm_vcpu_reset_hcr(struct kvm_vcpu *vcpu)
vcpu->arch.hcr_el2 |= HCR_ATA;
}
+static void pkvm_vcpu_reset_hcrx(struct pkvm_hyp_vcpu *hyp_vcpu)
+{
+ struct kvm_vcpu *host_vcpu = hyp_vcpu->host_vcpu;
+ struct kvm_vcpu *vcpu = &hyp_vcpu->vcpu;
+
+ if (!cpus_have_final_cap(ARM64_HAS_HCX))
+ return;
+
+ /*
+ * In general, all HCRX_EL2 bits are gated by a feature.
+ * The only reason we can set SMPME without checking any
+ * feature is that its effects are not directly observable
+ * from the guest.
+ */
+ vcpu->arch.hcrx_el2 = HCRX_EL2_SMPME;
+
+ /*
+ * For non-protected VMs, the host is responsible for the guest's
+ * features, so use the remaining host HCRX_EL2 bits.
+ */
+ if ((!pkvm_hyp_vcpu_is_protected(hyp_vcpu)))
+ vcpu->arch.hcrx_el2 |= host_vcpu->arch.hcrx_el2;
+}
+
static void pvm_init_traps_hcr(struct kvm_vcpu *vcpu)
{
struct kvm *kvm = vcpu->kvm;
@@ -92,6 +116,26 @@ static void pvm_init_traps_hcr(struct kvm_vcpu *vcpu)
vcpu->arch.hcr_el2 = val;
}
+static void pvm_init_traps_hcrx(struct kvm_vcpu *vcpu)
+{
+ struct kvm *kvm = vcpu->kvm;
+ u64 hcrx_set = 0;
+
+ if (!cpus_have_final_cap(ARM64_HAS_HCX))
+ return;
+
+ if (kvm_has_feat(kvm, ID_AA64ISAR2_EL1, MOPS, IMP))
+ hcrx_set |= (HCRX_EL2_MSCEn | HCRX_EL2_MCE2);
+
+ if (kvm_has_feat(kvm, ID_AA64MMFR3_EL1, TCRX, IMP))
+ hcrx_set |= HCRX_EL2_TCR2En;
+
+ if (kvm_has_fpmr(kvm))
+ hcrx_set |= HCRX_EL2_EnFPM;
+
+ vcpu->arch.hcrx_el2 |= hcrx_set;
+}
+
static void pvm_init_traps_mdcr(struct kvm_vcpu *vcpu)
{
struct kvm *kvm = vcpu->kvm;
@@ -165,6 +209,7 @@ static int pkvm_vcpu_init_traps(struct pkvm_hyp_vcpu *hyp_vcpu)
vcpu->arch.mdcr_el2 = 0;
pkvm_vcpu_reset_hcr(vcpu);
+ pkvm_vcpu_reset_hcrx(hyp_vcpu);
if ((!pkvm_hyp_vcpu_is_protected(hyp_vcpu)))
return 0;
@@ -174,6 +219,7 @@ static int pkvm_vcpu_init_traps(struct pkvm_hyp_vcpu *hyp_vcpu)
return ret;
pvm_init_traps_hcr(vcpu);
+ pvm_init_traps_hcrx(vcpu);
pvm_init_traps_mdcr(vcpu);
return 0;
--
2.48.1.601.g30ceb7b040-goog
* [PATCH v1 2/3] KVM: arm64: Factor out pkvm hyp vcpu creation to separate function
2025-02-14 15:02 [PATCH v1 0/3] KVM: arm64: Fix initializing HCRX_EL2 and other traps in pKVM Fuad Tabba
2025-02-14 15:02 ` [PATCH v1 1/3] KVM: arm64: Initialize HCRX_EL2 " Fuad Tabba
@ 2025-02-14 15:02 ` Fuad Tabba
2025-02-14 15:02 ` [PATCH v1 3/3] KVM: arm64: Create each pKVM hyp vcpu after its corresponding host vcpu Fuad Tabba
2 siblings, 0 replies; 15+ messages in thread
From: Fuad Tabba @ 2025-02-14 15:02 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: maz, oliver.upton, mark.rutland, will, joey.gouly, suzuki.poulose,
yuzenghui, catalin.marinas, broonie, qperret, vdonnefort, tabba
Move the code that creates and initializes the hyp view of a vcpu
in pKVM to its own function. This is meant to make the transition
to initializing every vcpu individually clearer.
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/arm64/kvm/pkvm.c | 48 ++++++++++++++++++++-----------------------
1 file changed, 22 insertions(+), 26 deletions(-)
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index 930b677eb9b0..06665cf221bf 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -113,6 +113,24 @@ static void __pkvm_destroy_hyp_vm(struct kvm *host_kvm)
free_hyp_memcache(&host_kvm->arch.pkvm.teardown_mc);
}
+static int __pkvm_create_hyp_vcpu(struct kvm_vcpu *vcpu)
+{
+ size_t hyp_vcpu_sz = PAGE_ALIGN(PKVM_HYP_VCPU_SIZE);
+ pkvm_handle_t handle = vcpu->kvm->arch.pkvm.handle;
+ void *hyp_vcpu;
+ int ret;
+
+ hyp_vcpu = alloc_pages_exact(hyp_vcpu_sz, GFP_KERNEL_ACCOUNT);
+ if (!hyp_vcpu)
+ return -ENOMEM;
+
+ ret = kvm_call_hyp_nvhe(__pkvm_init_vcpu, handle, vcpu, hyp_vcpu);
+ if (ret)
+ free_pages_exact(hyp_vcpu, hyp_vcpu_sz);
+
+ return ret;
+}
+
/*
* Allocates and donates memory for hypervisor VM structs at EL2.
*
@@ -125,9 +143,8 @@ static void __pkvm_destroy_hyp_vm(struct kvm *host_kvm)
*/
static int __pkvm_create_hyp_vm(struct kvm *host_kvm)
{
- size_t pgd_sz, hyp_vm_sz, hyp_vcpu_sz;
+ size_t pgd_sz, hyp_vm_sz;
struct kvm_vcpu *host_vcpu;
- pkvm_handle_t handle;
void *pgd, *hyp_vm;
unsigned long idx;
int ret;
@@ -161,33 +178,12 @@ static int __pkvm_create_hyp_vm(struct kvm *host_kvm)
if (ret < 0)
goto free_vm;
- handle = ret;
+ WRITE_ONCE(host_kvm->arch.pkvm.handle, ret);
- host_kvm->arch.pkvm.handle = handle;
-
- /* Donate memory for the vcpus at hyp and initialize it. */
- hyp_vcpu_sz = PAGE_ALIGN(PKVM_HYP_VCPU_SIZE);
kvm_for_each_vcpu(idx, host_vcpu, host_kvm) {
- void *hyp_vcpu;
-
- /* Indexing of the vcpus to be sequential starting at 0. */
- if (WARN_ON(host_vcpu->vcpu_idx != idx)) {
- ret = -EINVAL;
- goto destroy_vm;
- }
-
- hyp_vcpu = alloc_pages_exact(hyp_vcpu_sz, GFP_KERNEL_ACCOUNT);
- if (!hyp_vcpu) {
- ret = -ENOMEM;
- goto destroy_vm;
- }
-
- ret = kvm_call_hyp_nvhe(__pkvm_init_vcpu, handle, host_vcpu,
- hyp_vcpu);
- if (ret) {
- free_pages_exact(hyp_vcpu, hyp_vcpu_sz);
+ ret = __pkvm_create_hyp_vcpu(host_vcpu);
+ if (ret)
goto destroy_vm;
- }
}
return 0;
--
2.48.1.601.g30ceb7b040-goog
* [PATCH v1 3/3] KVM: arm64: Create each pKVM hyp vcpu after its corresponding host vcpu
2025-02-14 15:02 [PATCH v1 0/3] KVM: arm64: Fix initializing HCRX_EL2 and other traps in pKVM Fuad Tabba
2025-02-14 15:02 ` [PATCH v1 1/3] KVM: arm64: Initialize HCRX_EL2 " Fuad Tabba
2025-02-14 15:02 ` [PATCH v1 2/3] KVM: arm64: Factor out pkvm hyp vcpu creation to separate function Fuad Tabba
@ 2025-02-14 15:02 ` Fuad Tabba
2025-02-17 15:30 ` Will Deacon
2 siblings, 1 reply; 15+ messages in thread
From: Fuad Tabba @ 2025-02-14 15:02 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: maz, oliver.upton, mark.rutland, will, joey.gouly, suzuki.poulose,
yuzenghui, catalin.marinas, broonie, qperret, vdonnefort, tabba
Instead of creating and initializing _all_ hyp vcpus in pKVM when
the first host vcpu runs for the first time, initialize _each_
hyp vcpu in conjunction with its corresponding host vcpu.
Some of the host vcpu state (e.g., system register and trap
values) is not initialized until the first time the host vcpu is
run. Therefore, a hyp vcpu initialized before its corresponding
host vcpu has run for the first time might not see the complete
host state of that vcpu.
Additionally, this behavior is in line with that of non-protected mode.
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/arm64/include/asm/kvm_host.h | 2 +
arch/arm64/include/asm/kvm_pkvm.h | 1 +
arch/arm64/kvm/arm.c | 4 ++
arch/arm64/kvm/hyp/include/nvhe/pkvm.h | 6 ---
arch/arm64/kvm/hyp/nvhe/pkvm.c | 54 +++++++++++++++-----------
arch/arm64/kvm/pkvm.c | 28 ++++++-------
6 files changed, 53 insertions(+), 42 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 7cfa024de4e3..d35ce3c860fc 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -881,6 +881,8 @@ struct kvm_vcpu_arch {
#define VCPU_INITIALIZED __vcpu_single_flag(cflags, BIT(0))
/* SVE config completed */
#define VCPU_SVE_FINALIZED __vcpu_single_flag(cflags, BIT(1))
+/* pKVM VCPU setup completed */
+#define VCPU_PKVM_FINALIZED __vcpu_single_flag(cflags, BIT(2))
/* Exception pending */
#define PENDING_EXCEPTION __vcpu_single_flag(iflags, BIT(0))
diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm_pkvm.h
index eb65f12e81d9..abd693ce5b93 100644
--- a/arch/arm64/include/asm/kvm_pkvm.h
+++ b/arch/arm64/include/asm/kvm_pkvm.h
@@ -19,6 +19,7 @@
int pkvm_init_host_vm(struct kvm *kvm);
int pkvm_create_hyp_vm(struct kvm *kvm);
void pkvm_destroy_hyp_vm(struct kvm *kvm);
+int pkvm_create_hyp_vcpu(struct kvm_vcpu *vcpu);
/*
* This functions as an allow-list of protected VM capabilities.
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 071a7d75be68..e4661c2f0423 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -833,6 +833,10 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
ret = pkvm_create_hyp_vm(kvm);
if (ret)
return ret;
+
+ ret = pkvm_create_hyp_vcpu(vcpu);
+ if (ret)
+ return ret;
}
mutex_lock(&kvm->arch.config_lock);
diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
index e42bf68c8848..ce31d3b73603 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
@@ -43,12 +43,6 @@ struct pkvm_hyp_vm {
struct hyp_pool pool;
hyp_spinlock_t lock;
- /*
- * The number of vcpus initialized and ready to run.
- * Modifying this is protected by 'vm_table_lock'.
- */
- unsigned int nr_vcpus;
-
/* Array of the hyp vCPU structures for this VM. */
struct pkvm_hyp_vcpu *vcpus[];
};
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 668ebec27f1b..20abd46e03b8 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -285,10 +285,12 @@ struct pkvm_hyp_vcpu *pkvm_load_hyp_vcpu(pkvm_handle_t handle,
hyp_spin_lock(&vm_table_lock);
hyp_vm = get_vm_by_handle(handle);
- if (!hyp_vm || hyp_vm->nr_vcpus <= vcpu_idx)
+ if (!hyp_vm || hyp_vm->kvm.created_vcpus <= vcpu_idx)
goto unlock;
hyp_vcpu = hyp_vm->vcpus[vcpu_idx];
+ if (!hyp_vcpu)
+ goto unlock;
/* Ensure vcpu isn't loaded on more than one cpu simultaneously. */
if (unlikely(hyp_vcpu->loaded_hyp_vcpu)) {
@@ -407,8 +409,14 @@ static void unpin_host_vcpus(struct pkvm_hyp_vcpu *hyp_vcpus[],
{
int i;
- for (i = 0; i < nr_vcpus; i++)
- unpin_host_vcpu(hyp_vcpus[i]->host_vcpu);
+ for (i = 0; i < nr_vcpus; i++) {
+ struct pkvm_hyp_vcpu *hyp_vcpu = hyp_vcpus[i];
+
+ if (!hyp_vcpu)
+ continue;
+
+ unpin_host_vcpu(hyp_vcpu->host_vcpu);
+ }
}
static void init_pkvm_hyp_vm(struct kvm *host_kvm, struct pkvm_hyp_vm *hyp_vm,
@@ -432,24 +440,18 @@ static void pkvm_vcpu_init_sve(struct pkvm_hyp_vcpu *hyp_vcpu, struct kvm_vcpu *
static int init_pkvm_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu,
struct pkvm_hyp_vm *hyp_vm,
- struct kvm_vcpu *host_vcpu,
- unsigned int vcpu_idx)
+ struct kvm_vcpu *host_vcpu)
{
int ret = 0;
if (hyp_pin_shared_mem(host_vcpu, host_vcpu + 1))
return -EBUSY;
- if (host_vcpu->vcpu_idx != vcpu_idx) {
- ret = -EINVAL;
- goto done;
- }
-
hyp_vcpu->host_vcpu = host_vcpu;
hyp_vcpu->vcpu.kvm = &hyp_vm->kvm;
hyp_vcpu->vcpu.vcpu_id = READ_ONCE(host_vcpu->vcpu_id);
- hyp_vcpu->vcpu.vcpu_idx = vcpu_idx;
+ hyp_vcpu->vcpu.vcpu_idx = READ_ONCE(host_vcpu->vcpu_idx);
hyp_vcpu->vcpu.arch.hw_mmu = &hyp_vm->kvm.arch.mmu;
hyp_vcpu->vcpu.arch.cflags = READ_ONCE(host_vcpu->arch.cflags);
@@ -687,27 +689,28 @@ int __pkvm_init_vcpu(pkvm_handle_t handle, struct kvm_vcpu *host_vcpu,
goto unlock;
}
- idx = hyp_vm->nr_vcpus;
+ ret = init_pkvm_hyp_vcpu(hyp_vcpu, hyp_vm, host_vcpu);
+ if (ret)
+ goto unlock;
+
+ idx = hyp_vcpu->vcpu.vcpu_idx;
if (idx >= hyp_vm->kvm.created_vcpus) {
ret = -EINVAL;
goto unlock;
}
- ret = init_pkvm_hyp_vcpu(hyp_vcpu, hyp_vm, host_vcpu, idx);
- if (ret)
+ if (hyp_vm->vcpus[idx]) {
+ ret = -EINVAL;
goto unlock;
+ }
hyp_vm->vcpus[idx] = hyp_vcpu;
- hyp_vm->nr_vcpus++;
unlock:
hyp_spin_unlock(&vm_table_lock);
- if (ret) {
+ if (ret)
unmap_donated_memory(hyp_vcpu, sizeof(*hyp_vcpu));
- return ret;
- }
-
- return 0;
+ return ret;
}
static void
@@ -753,12 +756,17 @@ int __pkvm_teardown_vm(pkvm_handle_t handle)
/* Reclaim guest pages (including page-table pages) */
mc = &host_kvm->arch.pkvm.teardown_mc;
reclaim_guest_pages(hyp_vm, mc);
- unpin_host_vcpus(hyp_vm->vcpus, hyp_vm->nr_vcpus);
+ unpin_host_vcpus(hyp_vm->vcpus, hyp_vm->kvm.created_vcpus);
/* Push the metadata pages to the teardown memcache */
- for (idx = 0; idx < hyp_vm->nr_vcpus; ++idx) {
+ for (idx = 0; idx < hyp_vm->kvm.created_vcpus; ++idx) {
struct pkvm_hyp_vcpu *hyp_vcpu = hyp_vm->vcpus[idx];
- struct kvm_hyp_memcache *vcpu_mc = &hyp_vcpu->vcpu.arch.pkvm_memcache;
+ struct kvm_hyp_memcache *vcpu_mc;
+
+ if (!hyp_vcpu)
+ continue;
+
+ vcpu_mc = &hyp_vcpu->vcpu.arch.pkvm_memcache;
while (vcpu_mc->nr_pages) {
void *addr = pop_hyp_memcache(vcpu_mc, hyp_phys_to_virt);
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index 06665cf221bf..f2a09dc50676 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -125,7 +125,9 @@ static int __pkvm_create_hyp_vcpu(struct kvm_vcpu *vcpu)
return -ENOMEM;
ret = kvm_call_hyp_nvhe(__pkvm_init_vcpu, handle, vcpu, hyp_vcpu);
- if (ret)
+ if (!ret)
+ vcpu_set_flag(vcpu, VCPU_PKVM_FINALIZED);
+ else
free_pages_exact(hyp_vcpu, hyp_vcpu_sz);
return ret;
@@ -144,9 +146,7 @@ static int __pkvm_create_hyp_vcpu(struct kvm_vcpu *vcpu)
static int __pkvm_create_hyp_vm(struct kvm *host_kvm)
{
size_t pgd_sz, hyp_vm_sz;
- struct kvm_vcpu *host_vcpu;
void *pgd, *hyp_vm;
- unsigned long idx;
int ret;
if (host_kvm->created_vcpus < 1)
@@ -180,17 +180,7 @@ static int __pkvm_create_hyp_vm(struct kvm *host_kvm)
WRITE_ONCE(host_kvm->arch.pkvm.handle, ret);
- kvm_for_each_vcpu(idx, host_vcpu, host_kvm) {
- ret = __pkvm_create_hyp_vcpu(host_vcpu);
- if (ret)
- goto destroy_vm;
- }
-
return 0;
-
-destroy_vm:
- __pkvm_destroy_hyp_vm(host_kvm);
- return ret;
free_vm:
free_pages_exact(hyp_vm, hyp_vm_sz);
free_pgd:
@@ -210,6 +200,18 @@ int pkvm_create_hyp_vm(struct kvm *host_kvm)
return ret;
}
+int pkvm_create_hyp_vcpu(struct kvm_vcpu *vcpu)
+{
+ int ret = 0;
+
+ mutex_lock(&vcpu->kvm->arch.config_lock);
+ if (!vcpu_get_flag(vcpu, VCPU_PKVM_FINALIZED))
+ ret = __pkvm_create_hyp_vcpu(vcpu);
+ mutex_unlock(&vcpu->kvm->arch.config_lock);
+
+ return ret;
+}
+
void pkvm_destroy_hyp_vm(struct kvm *host_kvm)
{
mutex_lock(&host_kvm->arch.config_lock);
--
2.48.1.601.g30ceb7b040-goog
* Re: [PATCH v1 3/3] KVM: arm64: Create each pKVM hyp vcpu after its corresponding host vcpu
2025-02-14 15:02 ` [PATCH v1 3/3] KVM: arm64: Create each pKVM hyp vcpu after its corresponding host vcpu Fuad Tabba
@ 2025-02-17 15:30 ` Will Deacon
2025-02-17 15:41 ` Fuad Tabba
0 siblings, 1 reply; 15+ messages in thread
From: Will Deacon @ 2025-02-17 15:30 UTC (permalink / raw)
To: Fuad Tabba
Cc: kvmarm, linux-arm-kernel, maz, oliver.upton, mark.rutland,
joey.gouly, suzuki.poulose, yuzenghui, catalin.marinas, broonie,
qperret, vdonnefort
On Fri, Feb 14, 2025 at 03:02:58PM +0000, Fuad Tabba wrote:
> Instead of creating and initializing _all_ hyp vcpus in pKVM when
> the first host vcpu runs for the first time, initialize _each_
> hyp vcpu in conjunction with its corresponding host vcpu.
>
> Some of the host vcpu state (e.g., system registers and traps
> values) are not initialized until the first time the host vcpu is
> run. Therefore, initializing a hyp vcpu before its corresponding
> host vcpu has run for the first time might not view the complete
> host state of these vcpus.
>
> Additionally, this behavior is inline with non-protected modes.
>
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
> arch/arm64/include/asm/kvm_host.h | 2 +
> arch/arm64/include/asm/kvm_pkvm.h | 1 +
> arch/arm64/kvm/arm.c | 4 ++
> arch/arm64/kvm/hyp/include/nvhe/pkvm.h | 6 ---
> arch/arm64/kvm/hyp/nvhe/pkvm.c | 54 +++++++++++++++-----------
> arch/arm64/kvm/pkvm.c | 28 ++++++-------
> 6 files changed, 53 insertions(+), 42 deletions(-)
[...]
> static int init_pkvm_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu,
> struct pkvm_hyp_vm *hyp_vm,
> - struct kvm_vcpu *host_vcpu,
> - unsigned int vcpu_idx)
> + struct kvm_vcpu *host_vcpu)
> {
> int ret = 0;
>
> if (hyp_pin_shared_mem(host_vcpu, host_vcpu + 1))
> return -EBUSY;
>
> - if (host_vcpu->vcpu_idx != vcpu_idx) {
> - ret = -EINVAL;
> - goto done;
> - }
> -
> hyp_vcpu->host_vcpu = host_vcpu;
>
> hyp_vcpu->vcpu.kvm = &hyp_vm->kvm;
> hyp_vcpu->vcpu.vcpu_id = READ_ONCE(host_vcpu->vcpu_id);
> - hyp_vcpu->vcpu.vcpu_idx = vcpu_idx;
> + hyp_vcpu->vcpu.vcpu_idx = READ_ONCE(host_vcpu->vcpu_idx);
>
> hyp_vcpu->vcpu.arch.hw_mmu = &hyp_vm->kvm.arch.mmu;
> hyp_vcpu->vcpu.arch.cflags = READ_ONCE(host_vcpu->arch.cflags);
> @@ -687,27 +689,28 @@ int __pkvm_init_vcpu(pkvm_handle_t handle, struct kvm_vcpu *host_vcpu,
> goto unlock;
> }
>
> - idx = hyp_vm->nr_vcpus;
> + ret = init_pkvm_hyp_vcpu(hyp_vcpu, hyp_vm, host_vcpu);
> + if (ret)
> + goto unlock;
> +
> + idx = hyp_vcpu->vcpu.vcpu_idx;
> if (idx >= hyp_vm->kvm.created_vcpus) {
> ret = -EINVAL;
> goto unlock;
> }
>
> - ret = init_pkvm_hyp_vcpu(hyp_vcpu, hyp_vm, host_vcpu, idx);
> - if (ret)
> + if (hyp_vm->vcpus[idx]) {
> + ret = -EINVAL;
> goto unlock;
> + }
I'm not sure how much we care at EL2, but it looks like there's a
potential spectre gadget here given that 'idx' is now untrusted.
Perhaps chuck something like:
idx = array_index_nospec(idx, hyp_vm->kvm.created_vcpus);
before indexing into 'hyp_vm->vcpus[]'?
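For illustration, a minimal sketch of how that might slot into
__pkvm_init_vcpu(), following the shape of the hunk quoted above:

	idx = hyp_vcpu->vcpu.vcpu_idx;
	if (idx >= hyp_vm->kvm.created_vcpus) {
		ret = -EINVAL;
		goto unlock;
	}

	/* 'idx' comes from the host and is untrusted; clamp speculation. */
	idx = array_index_nospec(idx, hyp_vm->kvm.created_vcpus);

	if (hyp_vm->vcpus[idx]) {
		ret = -EINVAL;
		goto unlock;
	}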
Either way:
Acked-by: Will Deacon <will@kernel.org>
Will
* Re: [PATCH v1 3/3] KVM: arm64: Create each pKVM hyp vcpu after its corresponding host vcpu
2025-02-17 15:30 ` Will Deacon
@ 2025-02-17 15:41 ` Fuad Tabba
2025-02-17 15:56 ` Fuad Tabba
0 siblings, 1 reply; 15+ messages in thread
From: Fuad Tabba @ 2025-02-17 15:41 UTC (permalink / raw)
To: Will Deacon
Cc: kvmarm, linux-arm-kernel, maz, oliver.upton, mark.rutland,
joey.gouly, suzuki.poulose, yuzenghui, catalin.marinas, broonie,
qperret, vdonnefort
Hi Will,
On Mon, 17 Feb 2025 at 15:30, Will Deacon <will@kernel.org> wrote:
>
> On Fri, Feb 14, 2025 at 03:02:58PM +0000, Fuad Tabba wrote:
> > Instead of creating and initializing _all_ hyp vcpus in pKVM when
> > the first host vcpu runs for the first time, initialize _each_
> > hyp vcpu in conjunction with its corresponding host vcpu.
> >
> > Some of the host vcpu state (e.g., system registers and traps
> > values) are not initialized until the first time the host vcpu is
> > run. Therefore, initializing a hyp vcpu before its corresponding
> > host vcpu has run for the first time might not view the complete
> > host state of these vcpus.
> >
> > Additionally, this behavior is inline with non-protected modes.
> >
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> > ---
> > arch/arm64/include/asm/kvm_host.h | 2 +
> > arch/arm64/include/asm/kvm_pkvm.h | 1 +
> > arch/arm64/kvm/arm.c | 4 ++
> > arch/arm64/kvm/hyp/include/nvhe/pkvm.h | 6 ---
> > arch/arm64/kvm/hyp/nvhe/pkvm.c | 54 +++++++++++++++-----------
> > arch/arm64/kvm/pkvm.c | 28 ++++++-------
> > 6 files changed, 53 insertions(+), 42 deletions(-)
>
> [...]
>
> > static int init_pkvm_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu,
> > struct pkvm_hyp_vm *hyp_vm,
> > - struct kvm_vcpu *host_vcpu,
> > - unsigned int vcpu_idx)
> > + struct kvm_vcpu *host_vcpu)
> > {
> > int ret = 0;
> >
> > if (hyp_pin_shared_mem(host_vcpu, host_vcpu + 1))
> > return -EBUSY;
> >
> > - if (host_vcpu->vcpu_idx != vcpu_idx) {
> > - ret = -EINVAL;
> > - goto done;
> > - }
> > -
> > hyp_vcpu->host_vcpu = host_vcpu;
> >
> > hyp_vcpu->vcpu.kvm = &hyp_vm->kvm;
> > hyp_vcpu->vcpu.vcpu_id = READ_ONCE(host_vcpu->vcpu_id);
> > - hyp_vcpu->vcpu.vcpu_idx = vcpu_idx;
> > + hyp_vcpu->vcpu.vcpu_idx = READ_ONCE(host_vcpu->vcpu_idx);
> >
> > hyp_vcpu->vcpu.arch.hw_mmu = &hyp_vm->kvm.arch.mmu;
> > hyp_vcpu->vcpu.arch.cflags = READ_ONCE(host_vcpu->arch.cflags);
> > @@ -687,27 +689,28 @@ int __pkvm_init_vcpu(pkvm_handle_t handle, struct kvm_vcpu *host_vcpu,
> > goto unlock;
> > }
> >
> > - idx = hyp_vm->nr_vcpus;
> > + ret = init_pkvm_hyp_vcpu(hyp_vcpu, hyp_vm, host_vcpu);
> > + if (ret)
> > + goto unlock;
> > +
> > + idx = hyp_vcpu->vcpu.vcpu_idx;
> > if (idx >= hyp_vm->kvm.created_vcpus) {
> > ret = -EINVAL;
> > goto unlock;
> > }
> >
> > - ret = init_pkvm_hyp_vcpu(hyp_vcpu, hyp_vm, host_vcpu, idx);
> > - if (ret)
> > + if (hyp_vm->vcpus[idx]) {
> > + ret = -EINVAL;
> > goto unlock;
> > + }
>
> I'm not sure how much we care at EL2, but it looks like there's a
> potential spectre gadget here given that 'idx' is now untrusted.
> Perhaps chuck something like:
>
> idx = array_index_nospec(idx, hyp_vm->kvm.created_vcpus);
>
> before indexing into 'hyp_vm->vcpus[]'?
I'll add that when I respin.
> Either way:
>
> Acked-by: Will Deacon <will@kernel.org>
Thanks,
/fuad
> Will
* Re: [PATCH v1 3/3] KVM: arm64: Create each pKVM hyp vcpu after its corresponding host vcpu
2025-02-17 15:41 ` Fuad Tabba
@ 2025-02-17 15:56 ` Fuad Tabba
2025-02-18 9:27 ` Will Deacon
0 siblings, 1 reply; 15+ messages in thread
From: Fuad Tabba @ 2025-02-17 15:56 UTC (permalink / raw)
To: Will Deacon
Cc: kvmarm, linux-arm-kernel, maz, oliver.upton, mark.rutland,
joey.gouly, suzuki.poulose, yuzenghui, catalin.marinas, broonie,
qperret, vdonnefort
Hi Will,
On Mon, 17 Feb 2025 at 15:41, Fuad Tabba <tabba@google.com> wrote:
>
> Hi Will,
>
> On Mon, 17 Feb 2025 at 15:30, Will Deacon <will@kernel.org> wrote:
> >
> > On Fri, Feb 14, 2025 at 03:02:58PM +0000, Fuad Tabba wrote:
> > > Instead of creating and initializing _all_ hyp vcpus in pKVM when
> > > the first host vcpu runs for the first time, initialize _each_
> > > hyp vcpu in conjunction with its corresponding host vcpu.
> > >
> > > Some of the host vcpu state (e.g., system registers and traps
> > > values) are not initialized until the first time the host vcpu is
> > > run. Therefore, initializing a hyp vcpu before its corresponding
> > > host vcpu has run for the first time might not view the complete
> > > host state of these vcpus.
> > >
> > > Additionally, this behavior is inline with non-protected modes.
> > >
> > > Signed-off-by: Fuad Tabba <tabba@google.com>
> > > ---
> > > arch/arm64/include/asm/kvm_host.h | 2 +
> > > arch/arm64/include/asm/kvm_pkvm.h | 1 +
> > > arch/arm64/kvm/arm.c | 4 ++
> > > arch/arm64/kvm/hyp/include/nvhe/pkvm.h | 6 ---
> > > arch/arm64/kvm/hyp/nvhe/pkvm.c | 54 +++++++++++++++-----------
> > > arch/arm64/kvm/pkvm.c | 28 ++++++-------
> > > 6 files changed, 53 insertions(+), 42 deletions(-)
> >
> > [...]
> >
> > > static int init_pkvm_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu,
> > > struct pkvm_hyp_vm *hyp_vm,
> > > - struct kvm_vcpu *host_vcpu,
> > > - unsigned int vcpu_idx)
> > > + struct kvm_vcpu *host_vcpu)
> > > {
> > > int ret = 0;
> > >
> > > if (hyp_pin_shared_mem(host_vcpu, host_vcpu + 1))
> > > return -EBUSY;
> > >
> > > - if (host_vcpu->vcpu_idx != vcpu_idx) {
> > > - ret = -EINVAL;
> > > - goto done;
> > > - }
> > > -
> > > hyp_vcpu->host_vcpu = host_vcpu;
> > >
> > > hyp_vcpu->vcpu.kvm = &hyp_vm->kvm;
> > > hyp_vcpu->vcpu.vcpu_id = READ_ONCE(host_vcpu->vcpu_id);
> > > - hyp_vcpu->vcpu.vcpu_idx = vcpu_idx;
> > > + hyp_vcpu->vcpu.vcpu_idx = READ_ONCE(host_vcpu->vcpu_idx);
> > >
> > > hyp_vcpu->vcpu.arch.hw_mmu = &hyp_vm->kvm.arch.mmu;
> > > hyp_vcpu->vcpu.arch.cflags = READ_ONCE(host_vcpu->arch.cflags);
> > > @@ -687,27 +689,28 @@ int __pkvm_init_vcpu(pkvm_handle_t handle, struct kvm_vcpu *host_vcpu,
> > > goto unlock;
> > > }
> > >
> > > - idx = hyp_vm->nr_vcpus;
> > > + ret = init_pkvm_hyp_vcpu(hyp_vcpu, hyp_vm, host_vcpu);
> > > + if (ret)
> > > + goto unlock;
> > > +
> > > + idx = hyp_vcpu->vcpu.vcpu_idx;
> > > if (idx >= hyp_vm->kvm.created_vcpus) {
> > > ret = -EINVAL;
> > > goto unlock;
> > > }
> > >
> > > - ret = init_pkvm_hyp_vcpu(hyp_vcpu, hyp_vm, host_vcpu, idx);
> > > - if (ret)
> > > + if (hyp_vm->vcpus[idx]) {
> > > + ret = -EINVAL;
> > > goto unlock;
> > > + }
> >
> > I'm not sure how much we care at EL2, but it looks like there's a
> > potential spectre gadget here given that 'idx' is now untrusted.
> > Perhaps chuck something like:
> >
> > idx = array_index_nospec(idx, hyp_vm->kvm.created_vcpus);
> >
> > before indexing into 'hyp_vm->vcpus[]'?
>
> I'll add that when I respin.
pKVM is riddled with these kinds of accesses (e.g.,
get_vm_by_handle()). It might be better if I address this in a
separate series.
Cheers,
/fuad
>
> > Either way:
> >
> > Acked-by: Will Deacon <will@kernel.org>
>
> Thanks,
> /fuad
>
> > Will
* Re: [PATCH v1 3/3] KVM: arm64: Create each pKVM hyp vcpu after its corresponding host vcpu
2025-02-17 15:56 ` Fuad Tabba
@ 2025-02-18 9:27 ` Will Deacon
0 siblings, 0 replies; 15+ messages in thread
From: Will Deacon @ 2025-02-18 9:27 UTC (permalink / raw)
To: Fuad Tabba
Cc: kvmarm, linux-arm-kernel, maz, oliver.upton, mark.rutland,
joey.gouly, suzuki.poulose, yuzenghui, catalin.marinas, broonie,
qperret, vdonnefort
On Mon, Feb 17, 2025 at 03:56:53PM +0000, Fuad Tabba wrote:
> On Mon, 17 Feb 2025 at 15:41, Fuad Tabba <tabba@google.com> wrote:
> > On Mon, 17 Feb 2025 at 15:30, Will Deacon <will@kernel.org> wrote:
> > >
> > > On Fri, Feb 14, 2025 at 03:02:58PM +0000, Fuad Tabba wrote:
> > > > Instead of creating and initializing _all_ hyp vcpus in pKVM when
> > > > the first host vcpu runs for the first time, initialize _each_
> > > > hyp vcpu in conjunction with its corresponding host vcpu.
> > > >
> > > > Some of the host vcpu state (e.g., system registers and traps
> > > > values) are not initialized until the first time the host vcpu is
> > > > run. Therefore, initializing a hyp vcpu before its corresponding
> > > > host vcpu has run for the first time might not view the complete
> > > > host state of these vcpus.
> > > >
> > > > Additionally, this behavior is inline with non-protected modes.
> > > >
> > > > Signed-off-by: Fuad Tabba <tabba@google.com>
> > > > ---
> > > > arch/arm64/include/asm/kvm_host.h | 2 +
> > > > arch/arm64/include/asm/kvm_pkvm.h | 1 +
> > > > arch/arm64/kvm/arm.c | 4 ++
> > > > arch/arm64/kvm/hyp/include/nvhe/pkvm.h | 6 ---
> > > > arch/arm64/kvm/hyp/nvhe/pkvm.c | 54 +++++++++++++++-----------
> > > > arch/arm64/kvm/pkvm.c | 28 ++++++-------
> > > > 6 files changed, 53 insertions(+), 42 deletions(-)
> > >
> > > [...]
> > >
> > > > static int init_pkvm_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu,
> > > > struct pkvm_hyp_vm *hyp_vm,
> > > > - struct kvm_vcpu *host_vcpu,
> > > > - unsigned int vcpu_idx)
> > > > + struct kvm_vcpu *host_vcpu)
> > > > {
> > > > int ret = 0;
> > > >
> > > > if (hyp_pin_shared_mem(host_vcpu, host_vcpu + 1))
> > > > return -EBUSY;
> > > >
> > > > - if (host_vcpu->vcpu_idx != vcpu_idx) {
> > > > - ret = -EINVAL;
> > > > - goto done;
> > > > - }
> > > > -
> > > > hyp_vcpu->host_vcpu = host_vcpu;
> > > >
> > > > hyp_vcpu->vcpu.kvm = &hyp_vm->kvm;
> > > > hyp_vcpu->vcpu.vcpu_id = READ_ONCE(host_vcpu->vcpu_id);
> > > > - hyp_vcpu->vcpu.vcpu_idx = vcpu_idx;
> > > > + hyp_vcpu->vcpu.vcpu_idx = READ_ONCE(host_vcpu->vcpu_idx);
> > > >
> > > > hyp_vcpu->vcpu.arch.hw_mmu = &hyp_vm->kvm.arch.mmu;
> > > > hyp_vcpu->vcpu.arch.cflags = READ_ONCE(host_vcpu->arch.cflags);
> > > > @@ -687,27 +689,28 @@ int __pkvm_init_vcpu(pkvm_handle_t handle, struct kvm_vcpu *host_vcpu,
> > > > goto unlock;
> > > > }
> > > >
> > > > - idx = hyp_vm->nr_vcpus;
> > > > + ret = init_pkvm_hyp_vcpu(hyp_vcpu, hyp_vm, host_vcpu);
> > > > + if (ret)
> > > > + goto unlock;
> > > > +
> > > > + idx = hyp_vcpu->vcpu.vcpu_idx;
> > > > if (idx >= hyp_vm->kvm.created_vcpus) {
> > > > ret = -EINVAL;
> > > > goto unlock;
> > > > }
> > > >
> > > > - ret = init_pkvm_hyp_vcpu(hyp_vcpu, hyp_vm, host_vcpu, idx);
> > > > - if (ret)
> > > > + if (hyp_vm->vcpus[idx]) {
> > > > + ret = -EINVAL;
> > > > goto unlock;
> > > > + }
> > >
> > > I'm not sure how much we care at EL2, but it looks like there's a
> > > potential spectre gadget here given that 'idx' is now untrusted.
> > > Perhaps chuck something like:
> > >
> > > idx = array_index_nospec(idx, hyp_vm->kvm.created_vcpus);
> > >
> > > before indexing into 'hyp_vm->vcpus[]'?
> >
> > I'll add that when I respin.
>
> pKVM is riddled with these kinds of accesses (e.g.,
> get_vm_by_handle(). It might be better if I address this in a separate
> series.
Yeah, makes sense to me. Like I said, I'm not entirely sure how much we
care about spectre at EL2, given that it's using a separate translation
regime, but having a series to fix up some of the potential issues might
provide a useful basis for discussion.
Will
* Re: [PATCH v1 1/3] KVM: arm64: Initialize HCRX_EL2 traps in pKVM
2025-02-14 15:02 ` [PATCH v1 1/3] KVM: arm64: Initialize HCRX_EL2 " Fuad Tabba
@ 2025-02-26 10:07 ` Oliver Upton
2025-02-26 10:45 ` Marc Zyngier
2025-02-26 12:36 ` Fuad Tabba
0 siblings, 2 replies; 15+ messages in thread
From: Oliver Upton @ 2025-02-26 10:07 UTC (permalink / raw)
To: Fuad Tabba
Cc: kvmarm, linux-arm-kernel, maz, mark.rutland, will, joey.gouly,
suzuki.poulose, yuzenghui, catalin.marinas, broonie, qperret,
vdonnefort
Hi Fuad,
Series LGTM overall, one comment:
On Fri, Feb 14, 2025 at 03:02:56PM +0000, Fuad Tabba wrote:
> Initialize and set the traps controlled by the HCRX_EL2 in pKVM
> when the register is supported by the system.
>
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
> arch/arm64/kvm/hyp/nvhe/pkvm.c | 46 ++++++++++++++++++++++++++++++++++
> 1 file changed, 46 insertions(+)
>
> diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
> index 3927fe52a3dd..668ebec27f1b 100644
> --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
> +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
> @@ -58,6 +58,30 @@ static void pkvm_vcpu_reset_hcr(struct kvm_vcpu *vcpu)
> vcpu->arch.hcr_el2 |= HCR_ATA;
> }
>
> +static void pkvm_vcpu_reset_hcrx(struct pkvm_hyp_vcpu *hyp_vcpu)
> +{
> + struct kvm_vcpu *host_vcpu = hyp_vcpu->host_vcpu;
> + struct kvm_vcpu *vcpu = &hyp_vcpu->vcpu;
> +
> + if (!cpus_have_final_cap(ARM64_HAS_HCX))
> + return;
> +
> + /*
> + * In general, all HCRX_EL2 bits are gated by a feature.
> + * The only reason we can set SMPME without checking any
> + * feature is that its effects are not directly observable
> + * from the guest.
> + */
> + vcpu->arch.hcrx_el2 = HCRX_EL2_SMPME;
> +
The comment isn't wrong, but we don't support SME at all in KVM at this
point.
Any objection to dropping this bit? I can fix it when applying the
series, no need to respin.
Thanks,
Oliver
* Re: [PATCH v1 1/3] KVM: arm64: Initialize HCRX_EL2 traps in pKVM
2025-02-26 10:07 ` Oliver Upton
@ 2025-02-26 10:45 ` Marc Zyngier
2025-02-26 12:44 ` Fuad Tabba
2025-02-26 12:36 ` Fuad Tabba
1 sibling, 1 reply; 15+ messages in thread
From: Marc Zyngier @ 2025-02-26 10:45 UTC (permalink / raw)
To: Oliver Upton
Cc: Fuad Tabba, kvmarm, linux-arm-kernel, mark.rutland, will,
joey.gouly, suzuki.poulose, yuzenghui, catalin.marinas, broonie,
qperret, vdonnefort
On Wed, 26 Feb 2025 10:07:56 +0000,
Oliver Upton <oliver.upton@linux.dev> wrote:
>
> Hi Fuad,
>
> Series LGTM overall, one comment:
>
> On Fri, Feb 14, 2025 at 03:02:56PM +0000, Fuad Tabba wrote:
> > Initialize and set the traps controlled by the HCRX_EL2 in pKVM
> > when the register is supported by the system.
> >
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> > ---
> > arch/arm64/kvm/hyp/nvhe/pkvm.c | 46 ++++++++++++++++++++++++++++++++++
> > 1 file changed, 46 insertions(+)
> >
> > diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
> > index 3927fe52a3dd..668ebec27f1b 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
> > +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
> > @@ -58,6 +58,30 @@ static void pkvm_vcpu_reset_hcr(struct kvm_vcpu *vcpu)
> > vcpu->arch.hcr_el2 |= HCR_ATA;
> > }
> >
> > +static void pkvm_vcpu_reset_hcrx(struct pkvm_hyp_vcpu *hyp_vcpu)
> > +{
> > + struct kvm_vcpu *host_vcpu = hyp_vcpu->host_vcpu;
> > + struct kvm_vcpu *vcpu = &hyp_vcpu->vcpu;
> > +
> > + if (!cpus_have_final_cap(ARM64_HAS_HCX))
> > + return;
> > +
> > + /*
> > + * In general, all HCRX_EL2 bits are gated by a feature.
> > + * The only reason we can set SMPME without checking any
> > + * feature is that its effects are not directly observable
> > + * from the guest.
> > + */
> > + vcpu->arch.hcrx_el2 = HCRX_EL2_SMPME;
> > +
>
> The comment isn't wrong, but we don't support SME at all in KVM at this
> point.
This is a copy/paste of what we have in kvm_calculate_traps(), and the
result of the removal of the dreaded HCRX_GUEST_FLAGS.
> Any objection to dropping this bit? I can fix it when applying the
> series, no need to respin.
Whatever we do, I think we should keep the two sides of the trap
configuration in sync, as this is otherwise a cause of bugs.
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v1 1/3] KVM: arm64: Initialize HCRX_EL2 traps in pKVM
2025-02-26 10:07 ` Oliver Upton
2025-02-26 10:45 ` Marc Zyngier
@ 2025-02-26 12:36 ` Fuad Tabba
1 sibling, 0 replies; 15+ messages in thread
From: Fuad Tabba @ 2025-02-26 12:36 UTC (permalink / raw)
To: Oliver Upton
Cc: kvmarm, linux-arm-kernel, maz, mark.rutland, will, joey.gouly,
suzuki.poulose, yuzenghui, catalin.marinas, broonie, qperret,
vdonnefort
Hi Oliver,
On Wed, 26 Feb 2025 at 02:08, Oliver Upton <oliver.upton@linux.dev> wrote:
>
> Hi Fuad,
>
> Series LGTM overall, one comment:
>
> On Fri, Feb 14, 2025 at 03:02:56PM +0000, Fuad Tabba wrote:
> > Initialize and set the traps controlled by the HCRX_EL2 in pKVM
> > when the register is supported by the system.
> >
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> > ---
> > arch/arm64/kvm/hyp/nvhe/pkvm.c | 46 ++++++++++++++++++++++++++++++++++
> > 1 file changed, 46 insertions(+)
> >
> > diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
> > index 3927fe52a3dd..668ebec27f1b 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
> > +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
> > @@ -58,6 +58,30 @@ static void pkvm_vcpu_reset_hcr(struct kvm_vcpu *vcpu)
> > vcpu->arch.hcr_el2 |= HCR_ATA;
> > }
> >
> > +static void pkvm_vcpu_reset_hcrx(struct pkvm_hyp_vcpu *hyp_vcpu)
> > +{
> > + struct kvm_vcpu *host_vcpu = hyp_vcpu->host_vcpu;
> > + struct kvm_vcpu *vcpu = &hyp_vcpu->vcpu;
> > +
> > + if (!cpus_have_final_cap(ARM64_HAS_HCX))
> > + return;
> > +
> > + /*
> > + * In general, all HCRX_EL2 bits are gated by a feature.
> > + * The only reason we can set SMPME without checking any
> > + * feature is that its effects are not directly observable
> > + * from the guest.
> > + */
> > + vcpu->arch.hcrx_el2 = HCRX_EL2_SMPME;
> > +
>
> The comment isn't wrong, but we don't support SME at all in KVM at this
> point.
>
> Any objection to dropping this bit? I can fix it when applying the
> series, no need to respin.
No objections to dropping it. Regarding respinning though, it depends
on what I need to do to address Marc's comment. More on that in my
reply to his email.
Thanks,
/fuad
> Thanks,
> Oliver
* Re: [PATCH v1 1/3] KVM: arm64: Initialize HCRX_EL2 traps in pKVM
2025-02-26 10:45 ` Marc Zyngier
@ 2025-02-26 12:44 ` Fuad Tabba
2025-02-26 15:28 ` Marc Zyngier
0 siblings, 1 reply; 15+ messages in thread
From: Fuad Tabba @ 2025-02-26 12:44 UTC (permalink / raw)
To: Marc Zyngier
Cc: Oliver Upton, kvmarm, linux-arm-kernel, mark.rutland, will,
joey.gouly, suzuki.poulose, yuzenghui, catalin.marinas, broonie,
qperret, vdonnefort
Hi Marc,
On Wed, 26 Feb 2025 at 02:45, Marc Zyngier <maz@kernel.org> wrote:
>
> On Wed, 26 Feb 2025 10:07:56 +0000,
> Oliver Upton <oliver.upton@linux.dev> wrote:
> >
> > Hi Fuad,
> >
> > Series LGTM overall, one comment:
> >
> > On Fri, Feb 14, 2025 at 03:02:56PM +0000, Fuad Tabba wrote:
> > > Initialize and set the traps controlled by the HCRX_EL2 in pKVM
> > > when the register is supported by the system.
> > >
> > > Signed-off-by: Fuad Tabba <tabba@google.com>
> > > ---
> > > arch/arm64/kvm/hyp/nvhe/pkvm.c | 46 ++++++++++++++++++++++++++++++++++
> > > 1 file changed, 46 insertions(+)
> > >
> > > diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
> > > index 3927fe52a3dd..668ebec27f1b 100644
> > > --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
> > > +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
> > > @@ -58,6 +58,30 @@ static void pkvm_vcpu_reset_hcr(struct kvm_vcpu *vcpu)
> > > vcpu->arch.hcr_el2 |= HCR_ATA;
> > > }
> > >
> > > +static void pkvm_vcpu_reset_hcrx(struct pkvm_hyp_vcpu *hyp_vcpu)
> > > +{
> > > + struct kvm_vcpu *host_vcpu = hyp_vcpu->host_vcpu;
> > > + struct kvm_vcpu *vcpu = &hyp_vcpu->vcpu;
> > > +
> > > + if (!cpus_have_final_cap(ARM64_HAS_HCX))
> > > + return;
> > > +
> > > + /*
> > > + * In general, all HCRX_EL2 bits are gated by a feature.
> > > + * The only reason we can set SMPME without checking any
> > > + * feature is that its effects are not directly observable
> > > + * from the guest.
> > > + */
> > > + vcpu->arch.hcrx_el2 = HCRX_EL2_SMPME;
> > > +
> >
> > The comment isn't wrong, but we don't support SME at all in KVM at this
> > point.
>
> This is a copy/paste of what we have in kvm_calculate_traps(), and the
> result of the removal of the dreaded HCRX_GUEST_FLAGS.
>
> > Any objection to dropping this bit? I can fix it when applying the
> > series, no need to respin.
>
> Whatever we do, I think we should keep the two side of the trap
> configuration in sync, as this is otherwise a cause of bugs.
I could take the hcrx initialization part of kvm_calculate_traps()
and place it in an inline function in kvm_emulate.h. This would
then be called by both kvm_calculate_traps() and
pkvm_vcpu_init_traps().
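Roughly something like the sketch below (the helper name and exact
placement are illustrative, not final), mirroring the bits handled
in patch 1:

	/* kvm_emulate.h */
	static inline void vcpu_set_hcrx(struct kvm_vcpu *vcpu)
	{
		struct kvm *kvm = vcpu->kvm;

		if (!cpus_have_final_cap(ARM64_HAS_HCX))
			return;

		vcpu->arch.hcrx_el2 = HCRX_EL2_SMPME;

		if (kvm_has_feat(kvm, ID_AA64ISAR2_EL1, MOPS, IMP))
			vcpu->arch.hcrx_el2 |= (HCRX_EL2_MSCEn | HCRX_EL2_MCE2);

		if (kvm_has_feat(kvm, ID_AA64MMFR3_EL1, TCRX, IMP))
			vcpu->arch.hcrx_el2 |= HCRX_EL2_TCR2En;

		if (kvm_has_fpmr(kvm))
			vcpu->arch.hcrx_el2 |= HCRX_EL2_EnFPM;
	}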
What do you think?
Thanks,
/fuad
> Thanks,
>
> M.
>
> --
> Without deviation from the norm, progress is not possible.
>
* Re: [PATCH v1 1/3] KVM: arm64: Initialize HCRX_EL2 traps in pKVM
2025-02-26 12:44 ` Fuad Tabba
@ 2025-02-26 15:28 ` Marc Zyngier
2025-02-26 18:53 ` Oliver Upton
0 siblings, 1 reply; 15+ messages in thread
From: Marc Zyngier @ 2025-02-26 15:28 UTC (permalink / raw)
To: Fuad Tabba
Cc: Oliver Upton, kvmarm, linux-arm-kernel, mark.rutland, will,
joey.gouly, suzuki.poulose, yuzenghui, catalin.marinas, broonie,
qperret, vdonnefort
On Wed, 26 Feb 2025 12:44:49 +0000,
Fuad Tabba <tabba@google.com> wrote:
>
> Hi Marc,
>
> On Wed, 26 Feb 2025 at 02:45, Marc Zyngier <maz@kernel.org> wrote:
> >
> > On Wed, 26 Feb 2025 10:07:56 +0000,
> > Oliver Upton <oliver.upton@linux.dev> wrote:
> > >
> > > Hi Fuad,
> > >
> > > Series LGTM overall, one comment:
> > >
> > > On Fri, Feb 14, 2025 at 03:02:56PM +0000, Fuad Tabba wrote:
> > > > Initialize and set the traps controlled by the HCRX_EL2 in pKVM
> > > > when the register is supported by the system.
> > > >
> > > > Signed-off-by: Fuad Tabba <tabba@google.com>
> > > > ---
> > > > arch/arm64/kvm/hyp/nvhe/pkvm.c | 46 ++++++++++++++++++++++++++++++++++
> > > > 1 file changed, 46 insertions(+)
> > > >
> > > > diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
> > > > index 3927fe52a3dd..668ebec27f1b 100644
> > > > --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
> > > > +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
> > > > @@ -58,6 +58,30 @@ static void pkvm_vcpu_reset_hcr(struct kvm_vcpu *vcpu)
> > > > vcpu->arch.hcr_el2 |= HCR_ATA;
> > > > }
> > > >
> > > > +static void pkvm_vcpu_reset_hcrx(struct pkvm_hyp_vcpu *hyp_vcpu)
> > > > +{
> > > > + struct kvm_vcpu *host_vcpu = hyp_vcpu->host_vcpu;
> > > > + struct kvm_vcpu *vcpu = &hyp_vcpu->vcpu;
> > > > +
> > > > + if (!cpus_have_final_cap(ARM64_HAS_HCX))
> > > > + return;
> > > > +
> > > > + /*
> > > > + * In general, all HCRX_EL2 bits are gated by a feature.
> > > > + * The only reason we can set SMPME without checking any
> > > > + * feature is that its effects are not directly observable
> > > > + * from the guest.
> > > > + */
> > > > + vcpu->arch.hcrx_el2 = HCRX_EL2_SMPME;
> > > > +
> > >
> > > The comment isn't wrong, but we don't support SME at all in KVM at this
> > > point.
> >
> > This is a copy/paste of what we have in kvm_calculate_traps(), and the
> > result of the removal of the dreaded HCRX_GUEST_FLAGS.
> >
> > > Any objection to dropping this bit? I can fix it when applying the
> > > series, no need to respin.
> >
> > Whatever we do, I think we should keep the two side of the trap
> > configuration in sync, as this is otherwise a cause of bugs.
>
> I could take the initialization of hcrx part from
> kvm_calculate_traps() and place it in an inline function in
> kvm_emulate.h. This then would be called by kvm_calculate_traps() and
> pkvm_vcpu_init_traps().
>
> What do you think?
That could be a good option indeed, irrespective of the fate of SMPME.
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v1 1/3] KVM: arm64: Initialize HCRX_EL2 traps in pKVM
2025-02-26 15:28 ` Marc Zyngier
@ 2025-02-26 18:53 ` Oliver Upton
2025-02-26 18:54 ` Fuad Tabba
0 siblings, 1 reply; 15+ messages in thread
From: Oliver Upton @ 2025-02-26 18:53 UTC (permalink / raw)
To: Marc Zyngier
Cc: Fuad Tabba, kvmarm, linux-arm-kernel, mark.rutland, will,
joey.gouly, suzuki.poulose, yuzenghui, catalin.marinas, broonie,
qperret, vdonnefort
Hey,
On Wed, Feb 26, 2025 at 03:28:22PM +0000, Marc Zyngier wrote:
> On Wed, 26 Feb 2025 12:44:49 +0000, Fuad Tabba <tabba@google.com> wrote:
> > On Wed, 26 Feb 2025 at 02:45, Marc Zyngier <maz@kernel.org> wrote:
> > > This is a copy/paste of what we have in kvm_calculate_traps(), and the
> > > result of the removal of the dreaded HCRX_GUEST_FLAGS.
Ah, I missed that.
We probably should've just tossed it at that point; doing configuration
for features we don't support is a bit confusing.
> > > > Any objection to dropping this bit? I can fix it when applying the
> > > > series, no need to respin.
> > >
> > > Whatever we do, I think we should keep the two side of the trap
> > > configuration in sync, as this is otherwise a cause of bugs.
> >
> > I could take the initialization of hcrx part from
> > kvm_calculate_traps() and place it in an inline function in
> > kvm_emulate.h. This then would be called by kvm_calculate_traps() and
> > pkvm_vcpu_init_traps().
> >
> > What do you think?
>
> That could be a good option indeed, irrespective of the fate of SMPME.
>
Agreed. Fuad, would you mind taking another pass at this with the shared
helper?
Thanks,
Oliver
* Re: [PATCH v1 1/3] KVM: arm64: Initialize HCRX_EL2 traps in pKVM
2025-02-26 18:53 ` Oliver Upton
@ 2025-02-26 18:54 ` Fuad Tabba
0 siblings, 0 replies; 15+ messages in thread
From: Fuad Tabba @ 2025-02-26 18:54 UTC (permalink / raw)
To: Oliver Upton
Cc: Marc Zyngier, kvmarm, linux-arm-kernel, mark.rutland, will,
joey.gouly, suzuki.poulose, yuzenghui, catalin.marinas, broonie,
qperret, vdonnefort
Hi,
On Wed, 26 Feb 2025 at 10:53, Oliver Upton <oliver.upton@linux.dev> wrote:
>
> Hey,
>
> On Wed, Feb 26, 2025 at 03:28:22PM +0000, Marc Zyngier wrote:
> > On Wed, 26 Feb 2025 12:44:49 +0000, Fuad Tabba <tabba@google.com> wrote:
> > > On Wed, 26 Feb 2025 at 02:45, Marc Zyngier <maz@kernel.org> wrote:
> > > > This is a copy/paste of what we have in kvm_calculate_traps(), and the
> > > > result of the removal of the dreaded HCRX_GUEST_FLAGS.
>
> Ah, I missed that.
>
> We probably should've just tossed it at that point, doing configuration
> for features we don't support is a bit confusing.
>
> > > > > Any objection to dropping this bit? I can fix it when applying the
> > > > > series, no need to respin.
> > > >
> > > > Whatever we do, I think we should keep the two side of the trap
> > > > configuration in sync, as this is otherwise a cause of bugs.
> > >
> > > I could take the initialization of hcrx part from
> > > kvm_calculate_traps() and place it in an inline function in
> > > kvm_emulate.h. This then would be called by kvm_calculate_traps() and
> > > pkvm_vcpu_init_traps().
> > >
> > > What do you think?
> >
> > That could be a good option indeed, irrespective of the fate of SMPME.
> >
>
> Agreed. Fuad, would you mind taking another pass at this with the shared
> helper?
I'm on it!
Cheers,
/fuad
> Thanks,
> Oliver
>