* [PATCH v9 0/5] Support writable CPU ID registers from userspace
@ 2023-05-17 6:10 Jing Zhang
2023-05-17 6:10 ` [PATCH v9 1/5] KVM: arm64: Save ID registers' sanitized value per guest Jing Zhang
` (4 more replies)
0 siblings, 5 replies; 16+ messages in thread
From: Jing Zhang @ 2023-05-17 6:10 UTC (permalink / raw)
To: KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton
Cc: Will Deacon, Paolo Bonzini, James Morse, Alexandru Elisei,
Suzuki K Poulose, Fuad Tabba, Reiji Watanabe,
Raghavendra Rao Ananta, Jing Zhang
This patchset refactors/adds code to support writable per-guest CPU ID feature
registers. Part of the code/ideas are taken from
https://lore.kernel.org/all/20220419065544.3616948-1-reijiw@google.com .
No functional change is intended in this patchset. With the new CPU ID feature
register infrastructure, only writes to ID_AA64PFR0_EL1.[CSV2|CSV3],
ID_AA64DFR0_EL1.PMUVer and ID_DFR0_EL1.PerfMon are allowed, as KVM allows today.
Writable (configurable) per-guest CPU ID feature registers are useful for
creating/migrating guests on ARM CPUs with different feature sets.
This patchset uses kvm->arch.config_lock from Oliver's lock inversion fixes at
https://lore.kernel.org/linux-arm-kernel/20230327164747.2466958-1-oliver.upton@linux.dev/
---
* v8 -> v9
- Rebased to v6.4-rc2.
- Don't create a new file id_regs.c and don't move ID regs out of the
sys_reg_descs array, to reduce the changes.
* v7 -> v8
- Move idregs table sanity check to kvm_sys_reg_table_init.
- Only allow userspace writes before the VM starts running.
- No lock is held for guest accesses to idregs.
- Addressed some other comments from Reiji and Oliver.
* v6 -> v7
- Rebased to v6.3-rc7.
- Add helpers for idregs read/write.
- Guard all idregs reads/writes.
- Add code to fix the safe-value type for features whose safe value is
different for KVM than for the host.
* v5 -> v6
- Rebased to v6.3-rc5.
- Reuse struct sys_reg_desc's reset() callback and val field for the KVM
sanitisation function and writable mask instead of creating a new data
structure for idregs.
- Use get_arm64_ftr_reg() instead of exposing idregs ftr_bits array.
* v4 -> v5
- Rebased to 2fad20ae05cb (kvmarm/next)
Merge branch kvm-arm64/selftest/misc-6.4 into kvmarm-master/next
- Use kvm->arch.config_lock to guard update to multiple VM scope idregs
to avoid lock inversion
- Add back IDREG() macro for idregs access
- Refactor struct id_reg_desc by using existing infrastructure.
- Addressed many other comments from Marc.
* v3 -> v4
- Remove IDREG() macro for ID reg access, use simple array access instead
- Rename kvm_arm_read_id_reg_with_encoding() to kvm_arm_read_id_reg()
- Save perfmon value in ID_DFR0_EL1 instead of pmuver
- Update perfmon in ID_DFR0_EL1 and pmuver in ID_AA64DFR0_EL1 atomically
- Remove kvm_vcpu_has_pmu() in macro kvm_pmu_is_3p5()
- Improve ID register sanity checking in kvm_arm_check_idreg_table()
* v2 -> v3
- Rebased to 96a4627dbbd4 (kvmarm/next)
Merge tag 'kvmarm-6.3' from https://github.com/oupton/linux into kvmarm-master/next
- Add ID register emulation entry point function emulate_id_reg
- Fix consistency for ID_AA64DFR0_EL1.PMUVer and ID_DFR0_EL1.PerfMon
- Improve the checking for id register table by ensuring that every entry has
the correct id register encoding.
- Addressed other comments from Reiji and Marc.
* v1 -> v2
- Rebase to 7121a2e1d107 (kvmarm/next) Merge branch kvm-arm64/nv-prefix into kvmarm/next
- Address writing issue for PMUVer
[1] https://lore.kernel.org/all/20230201025048.205820-1-jingzhangos@google.com
[2] https://lore.kernel.org/all/20230212215830.2975485-1-jingzhangos@google.com
[3] https://lore.kernel.org/all/20230228062246.1222387-1-jingzhangos@google.com
[4] https://lore.kernel.org/all/20230317050637.766317-1-jingzhangos@google.com
[5] https://lore.kernel.org/all/20230402183735.3011540-1-jingzhangos@google.com
[6] https://lore.kernel.org/all/20230404035344.4043856-1-jingzhangos@google.com
[7] https://lore.kernel.org/all/20230424234704.2571444-1-jingzhangos@google.com
[8] https://lore.kernel.org/all/20230503171618.2020461-1-jingzhangos@google.com
---
Jing Zhang (5):
KVM: arm64: Save ID registers' sanitized value per guest
KVM: arm64: Use per guest ID register for ID_AA64PFR0_EL1.[CSV2|CSV3]
KVM: arm64: Use per guest ID register for ID_AA64DFR0_EL1.PMUVer
KVM: arm64: Reuse fields of sys_reg_desc for idreg
KVM: arm64: Refactor writings for PMUVer/CSV2/CSV3
arch/arm64/include/asm/cpufeature.h | 1 +
arch/arm64/include/asm/kvm_host.h | 34 +-
arch/arm64/kernel/cpufeature.c | 2 +-
arch/arm64/kvm/arm.c | 24 +-
arch/arm64/kvm/sys_regs.c | 469 +++++++++++++++++++++++-----
arch/arm64/kvm/sys_regs.h | 22 +-
include/kvm/arm_pmu.h | 5 +-
7 files changed, 437 insertions(+), 120 deletions(-)
base-commit: f1fcbaa18b28dec10281551dfe6ed3a3ed80e3d6
--
2.40.1.606.ga4b1b128d6-goog
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
* [PATCH v9 1/5] KVM: arm64: Save ID registers' sanitized value per guest
2023-05-17 6:10 [PATCH v9 0/5] Support writable CPU ID registers from userspace Jing Zhang
@ 2023-05-17 6:10 ` Jing Zhang
2023-05-18 7:17 ` Shameerali Kolothum Thodi
2023-05-17 6:10 ` [PATCH v9 2/5] KVM: arm64: Use per guest ID register for ID_AA64PFR0_EL1.[CSV2|CSV3] Jing Zhang
` (3 subsequent siblings)
4 siblings, 1 reply; 16+ messages in thread
From: Jing Zhang @ 2023-05-17 6:10 UTC (permalink / raw)
To: KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton
Cc: Will Deacon, Paolo Bonzini, James Morse, Alexandru Elisei,
Suzuki K Poulose, Fuad Tabba, Reiji Watanabe,
Raghavendra Rao Ananta, Jing Zhang
Introduce id_regs[] in kvm_arch as storage for the guest's ID registers,
and save the ID registers' sanitized values in the array at KVM_CREATE_VM.
Use the saved values when ID registers are read by the guest or by
userspace (via KVM_GET_ONE_REG).
No functional change intended.
Co-developed-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Jing Zhang <jingzhangos@google.com>
---
arch/arm64/include/asm/kvm_host.h | 20 +++++++++
arch/arm64/kvm/arm.c | 1 +
arch/arm64/kvm/sys_regs.c | 69 +++++++++++++++++++++++++------
arch/arm64/kvm/sys_regs.h | 7 ++++
4 files changed, 85 insertions(+), 12 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 7e7e19ef6993..949a4a782844 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -178,6 +178,21 @@ struct kvm_smccc_features {
unsigned long vendor_hyp_bmap;
};
+/*
+ * Emulated CPU ID registers per VM
+ * (Op0, Op1, CRn, CRm, Op2) of the ID registers to be saved in it
+ * is (3, 0, 0, crm, op2), where 1<=crm<8, 0<=op2<8.
+ *
+ * These emulated idregs are VM-wide, but accessed from the context of a vCPU.
+ * Accesses to idregs are guarded by kvm_arch.config_lock.
+ */
+#define KVM_ARM_ID_REG_NUM 56
+#define IDREG_IDX(id) (((sys_reg_CRm(id) - 1) << 3) | sys_reg_Op2(id))
+#define IDREG(kvm, id) ((kvm)->arch.idregs.regs[IDREG_IDX(id)])
+struct kvm_idregs {
+ u64 regs[KVM_ARM_ID_REG_NUM];
+};
+
typedef unsigned int pkvm_handle_t;
struct kvm_protected_vm {
@@ -253,6 +268,9 @@ struct kvm_arch {
struct kvm_smccc_features smccc_feat;
struct maple_tree smccc_filter;
+ /* Emulated CPU ID registers */
+ struct kvm_idregs idregs;
+
/*
* For an untrusted host VM, 'pkvm.handle' is used to lookup
* the associated pKVM instance in the hypervisor.
@@ -1045,6 +1063,8 @@ int kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
int kvm_vm_ioctl_set_counter_offset(struct kvm *kvm,
struct kvm_arm_counter_offset *offset);
+void kvm_arm_init_id_regs(struct kvm *kvm);
+
/* Guest/host FPSIMD coordination helpers */
int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 14391826241c..774656a0718d 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -163,6 +163,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
set_default_spectre(kvm);
kvm_arm_init_hypercalls(kvm);
+ kvm_arm_init_id_regs(kvm);
/*
* Initialise the default PMUver before there is a chance to
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 71b12094d613..d2ee3a1c7f03 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -41,6 +41,7 @@
* 64bit interface.
*/
+static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id);
static u64 sys_reg_to_index(const struct sys_reg_desc *reg);
static bool read_from_write_only(struct kvm_vcpu *vcpu,
@@ -364,7 +365,7 @@ static bool trap_loregion(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
const struct sys_reg_desc *r)
{
- u64 val = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
+ u64 val = kvm_arm_read_id_reg(vcpu, SYS_ID_AA64MMFR1_EL1);
u32 sr = reg_to_encoding(r);
if (!(val & (0xfUL << ID_AA64MMFR1_EL1_LO_SHIFT))) {
@@ -1208,16 +1209,9 @@ static u8 pmuver_to_perfmon(u8 pmuver)
}
}
-/* Read a sanitised cpufeature ID register by sys_reg_desc */
-static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r)
+static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id)
{
- u32 id = reg_to_encoding(r);
- u64 val;
-
- if (sysreg_visible_as_raz(vcpu, r))
- return 0;
-
- val = read_sanitised_ftr_reg(id);
+ u64 val = IDREG(vcpu->kvm, id);
switch (id) {
case SYS_ID_AA64PFR0_EL1:
@@ -1280,6 +1274,26 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r
return val;
}
+/* Read a sanitised cpufeature ID register by sys_reg_desc */
+static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r)
+{
+ if (sysreg_visible_as_raz(vcpu, r))
+ return 0;
+
+ return kvm_arm_read_id_reg(vcpu, reg_to_encoding(r));
+}
+
+/*
+ * Return true if the register's (Op0, Op1, CRn, CRm, Op2) is
+ * (3, 0, 0, crm, op2), where 1<=crm<8, 0<=op2<8.
+ */
+static inline bool is_id_reg(u32 id)
+{
+ return (sys_reg_Op0(id) == 3 && sys_reg_Op1(id) == 0 &&
+ sys_reg_CRn(id) == 0 && sys_reg_CRm(id) >= 1 &&
+ sys_reg_CRm(id) < 8);
+}
+
static unsigned int id_visibility(const struct kvm_vcpu *vcpu,
const struct sys_reg_desc *r)
{
@@ -2244,8 +2258,8 @@ static bool trap_dbgdidr(struct kvm_vcpu *vcpu,
if (p->is_write) {
return ignore_write(vcpu, p);
} else {
- u64 dfr = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
- u64 pfr = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
+ u64 dfr = kvm_arm_read_id_reg(vcpu, SYS_ID_AA64DFR0_EL1);
+ u64 pfr = kvm_arm_read_id_reg(vcpu, SYS_ID_AA64PFR0_EL1);
u32 el3 = !!cpuid_feature_extract_unsigned_field(pfr, ID_AA64PFR0_EL1_EL3_SHIFT);
p->regval = ((((dfr >> ID_AA64DFR0_EL1_WRPs_SHIFT) & 0xf) << 28) |
@@ -3343,6 +3357,37 @@ int kvm_arm_copy_sys_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
return write_demux_regids(uindices);
}
+/*
+ * Set the guest's ID registers with ID_SANITISED() to the host's sanitized value.
+ */
+void kvm_arm_init_id_regs(struct kvm *kvm)
+{
+ const struct sys_reg_desc *idreg;
+ struct sys_reg_params params;
+ u32 id;
+
+ /* Find the first idreg (SYS_ID_PFR0_EL1) in sys_reg_descs. */
+ id = SYS_ID_PFR0_EL1;
+ params = encoding_to_params(id);
+ idreg = find_reg(¶ms, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
+ if (WARN_ON(!idreg))
+ return;
+
+ /* Initialize all idregs */
+ while (is_id_reg(id)) {
+ /*
+ * Some hidden ID registers which are not in arm64_ftr_regs[]
+ * would cause warnings from read_sanitised_ftr_reg().
+ * Skip those ID registers to avoid the warnings.
+ */
+ if (idreg->visibility != raz_visibility)
+ IDREG(kvm, id) = read_sanitised_ftr_reg(id);
+
+ idreg++;
+ id = reg_to_encoding(idreg);
+ }
+}
+
int __init kvm_sys_reg_table_init(void)
{
bool valid = true;
diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
index 6b11f2cc7146..eba10de2e7ae 100644
--- a/arch/arm64/kvm/sys_regs.h
+++ b/arch/arm64/kvm/sys_regs.h
@@ -27,6 +27,13 @@ struct sys_reg_params {
bool is_write;
};
+#define encoding_to_params(reg) \
+ ((struct sys_reg_params){ .Op0 = sys_reg_Op0(reg), \
+ .Op1 = sys_reg_Op1(reg), \
+ .CRn = sys_reg_CRn(reg), \
+ .CRm = sys_reg_CRm(reg), \
+ .Op2 = sys_reg_Op2(reg) })
+
#define esr_sys64_to_params(esr) \
((struct sys_reg_params){ .Op0 = ((esr) >> 20) & 3, \
.Op1 = ((esr) >> 14) & 0x7, \
--
2.40.1.606.ga4b1b128d6-goog
* [PATCH v9 2/5] KVM: arm64: Use per guest ID register for ID_AA64PFR0_EL1.[CSV2|CSV3]
2023-05-17 6:10 [PATCH v9 0/5] Support writable CPU ID registers from userspace Jing Zhang
2023-05-17 6:10 ` [PATCH v9 1/5] KVM: arm64: Save ID registers' sanitized value per guest Jing Zhang
@ 2023-05-17 6:10 ` Jing Zhang
2023-05-19 23:52 ` Reiji Watanabe
2023-05-17 6:10 ` [PATCH v9 3/5] KVM: arm64: Use per guest ID register for ID_AA64DFR0_EL1.PMUVer Jing Zhang
` (2 subsequent siblings)
4 siblings, 1 reply; 16+ messages in thread
From: Jing Zhang @ 2023-05-17 6:10 UTC (permalink / raw)
To: KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton
Cc: Will Deacon, Paolo Bonzini, James Morse, Alexandru Elisei,
Suzuki K Poulose, Fuad Tabba, Reiji Watanabe,
Raghavendra Rao Ananta, Jing Zhang
With per guest ID registers, ID_AA64PFR0_EL1.[CSV2|CSV3] settings from
userspace can be stored in the corresponding ID register.
The setting of the CSV bits for protected VMs is removed according to the
discussion from Fuad below:
https://lore.kernel.org/all/CA+EHjTwXA9TprX4jeG+-D+c8v9XG+oFdU1o6TSkvVye145_OvA@mail.gmail.com
Besides the removal of the CSV bits setting for protected VMs, no other
functional change is intended.
Signed-off-by: Jing Zhang <jingzhangos@google.com>
---
arch/arm64/include/asm/kvm_host.h | 2 --
arch/arm64/kvm/arm.c | 17 ----------
arch/arm64/kvm/sys_regs.c | 55 +++++++++++++++++++++++++------
3 files changed, 45 insertions(+), 29 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 949a4a782844..07f0e091ae48 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -257,8 +257,6 @@ struct kvm_arch {
cpumask_var_t supported_cpus;
- u8 pfr0_csv2;
- u8 pfr0_csv3;
struct {
u8 imp:4;
u8 unimp:4;
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 774656a0718d..5114521ace60 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -102,22 +102,6 @@ static int kvm_arm_default_max_vcpus(void)
return vgic_present ? kvm_vgic_get_max_vcpus() : KVM_MAX_VCPUS;
}
-static void set_default_spectre(struct kvm *kvm)
-{
- /*
- * The default is to expose CSV2 == 1 if the HW isn't affected.
- * Although this is a per-CPU feature, we make it global because
- * asymmetric systems are just a nuisance.
- *
- * Userspace can override this as long as it doesn't promise
- * the impossible.
- */
- if (arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED)
- kvm->arch.pfr0_csv2 = 1;
- if (arm64_get_meltdown_state() == SPECTRE_UNAFFECTED)
- kvm->arch.pfr0_csv3 = 1;
-}
-
/**
* kvm_arch_init_vm - initializes a VM data structure
* @kvm: pointer to the KVM struct
@@ -161,7 +145,6 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
/* The maximum number of VCPUs is limited by the host's GIC model */
kvm->max_vcpus = kvm_arm_default_max_vcpus();
- set_default_spectre(kvm);
kvm_arm_init_hypercalls(kvm);
kvm_arm_init_id_regs(kvm);
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index d2ee3a1c7f03..3c52b136ade3 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1218,10 +1218,6 @@ static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id)
if (!vcpu_has_sve(vcpu))
val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE);
val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU);
- val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2);
- val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), (u64)vcpu->kvm->arch.pfr0_csv2);
- val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3);
- val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), (u64)vcpu->kvm->arch.pfr0_csv3);
if (kvm_vgic_global_state.type == VGIC_V3) {
val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC);
val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC), 1);
@@ -1359,7 +1355,10 @@ static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
const struct sys_reg_desc *rd,
u64 val)
{
+ struct kvm_arch *arch = &vcpu->kvm->arch;
+ u64 sval = val;
u8 csv2, csv3;
+ int ret = 0;
/*
* Allow AA64PFR0_EL1.CSV2 to be set from userspace as long as
@@ -1377,17 +1376,26 @@ static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
(csv3 && arm64_get_meltdown_state() != SPECTRE_UNAFFECTED))
return -EINVAL;
+ mutex_lock(&arch->config_lock);
/* We can only differ with CSV[23], and anything else is an error */
val ^= read_id_reg(vcpu, rd);
val &= ~(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2) |
ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3));
- if (val)
- return -EINVAL;
-
- vcpu->kvm->arch.pfr0_csv2 = csv2;
- vcpu->kvm->arch.pfr0_csv3 = csv3;
+ if (val) {
+ ret = -EINVAL;
+ goto out;
+ }
- return 0;
+ /* Only allow userspace to change the idregs before VM running */
+ if (test_bit(KVM_ARCH_FLAG_HAS_RAN_ONCE, &vcpu->kvm->arch.flags)) {
+ if (sval != read_id_reg(vcpu, rd))
+ ret = -EBUSY;
+ } else {
+ IDREG(vcpu->kvm, reg_to_encoding(rd)) = sval;
+ }
+out:
+ mutex_unlock(&arch->config_lock);
+ return ret;
}
static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
@@ -1479,7 +1487,12 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
static int get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
u64 *val)
{
+ struct kvm_arch *arch = &vcpu->kvm->arch;
+
+ mutex_lock(&arch->config_lock);
*val = read_id_reg(vcpu, rd);
+ mutex_unlock(&arch->config_lock);
+
return 0;
}
@@ -3364,6 +3377,7 @@ void kvm_arm_init_id_regs(struct kvm *kvm)
{
const struct sys_reg_desc *idreg;
struct sys_reg_params params;
+ u64 val;
u32 id;
/* Find the first idreg (SYS_ID_PFR0_EL1) in sys_reg_descs. */
@@ -3386,6 +3400,27 @@ void kvm_arm_init_id_regs(struct kvm *kvm)
idreg++;
id = reg_to_encoding(idreg);
}
+
+ /*
+ * The default is to expose CSV2 == 1 if the HW isn't affected.
+ * Although this is a per-CPU feature, we make it global because
+ * asymmetric systems are just a nuisance.
+ *
+ * Userspace can override this as long as it doesn't promise
+ * the impossible.
+ */
+ val = IDREG(kvm, SYS_ID_AA64PFR0_EL1);
+
+ if (arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED) {
+ val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2);
+ val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), 1);
+ }
+ if (arm64_get_meltdown_state() == SPECTRE_UNAFFECTED) {
+ val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3);
+ val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), 1);
+ }
+
+ IDREG(kvm, SYS_ID_AA64PFR0_EL1) = val;
}
int __init kvm_sys_reg_table_init(void)
--
2.40.1.606.ga4b1b128d6-goog
* [PATCH v9 3/5] KVM: arm64: Use per guest ID register for ID_AA64DFR0_EL1.PMUVer
2023-05-17 6:10 [PATCH v9 0/5] Support writable CPU ID registers from userspace Jing Zhang
2023-05-17 6:10 ` [PATCH v9 1/5] KVM: arm64: Save ID registers' sanitized value per guest Jing Zhang
2023-05-17 6:10 ` [PATCH v9 2/5] KVM: arm64: Use per guest ID register for ID_AA64PFR0_EL1.[CSV2|CSV3] Jing Zhang
@ 2023-05-17 6:10 ` Jing Zhang
2023-05-17 6:10 ` [PATCH v9 4/5] KVM: arm64: Reuse fields of sys_reg_desc for idreg Jing Zhang
2023-05-17 6:10 ` [PATCH v9 5/5] KVM: arm64: Refactor writings for PMUVer/CSV2/CSV3 Jing Zhang
4 siblings, 0 replies; 16+ messages in thread
From: Jing Zhang @ 2023-05-17 6:10 UTC (permalink / raw)
To: KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton
Cc: Will Deacon, Paolo Bonzini, James Morse, Alexandru Elisei,
Suzuki K Poulose, Fuad Tabba, Reiji Watanabe,
Raghavendra Rao Ananta, Jing Zhang
With per guest ID registers, PMUver settings from userspace can be
stored in the corresponding ID register.
No functional change intended.
Signed-off-by: Jing Zhang <jingzhangos@google.com>
---
arch/arm64/include/asm/kvm_host.h | 12 ++--
arch/arm64/kvm/arm.c | 6 --
arch/arm64/kvm/sys_regs.c | 94 +++++++++++++++++++++++++------
include/kvm/arm_pmu.h | 5 +-
4 files changed, 88 insertions(+), 29 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 07f0e091ae48..9a5f82161083 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -246,6 +246,13 @@ struct kvm_arch {
#define KVM_ARCH_FLAG_TIMER_PPIS_IMMUTABLE 7
/* SMCCC filter initialized for the VM */
#define KVM_ARCH_FLAG_SMCCC_FILTER_CONFIGURED 8
+ /*
+ * AA64DFR0_EL1.PMUver was set as ID_AA64DFR0_EL1_PMUVer_IMP_DEF
+ * or DFR0_EL1.PerfMon was set as ID_DFR0_EL1_PerfMon_IMPDEF from
+ * userspace for VCPUs without PMU.
+ */
+#define KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU 9
+
unsigned long flags;
/*
@@ -257,11 +264,6 @@ struct kvm_arch {
cpumask_var_t supported_cpus;
- struct {
- u8 imp:4;
- u8 unimp:4;
- } dfr0_pmuver;
-
/* Hypercall features firmware registers' descriptor */
struct kvm_smccc_features smccc_feat;
struct maple_tree smccc_filter;
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 5114521ace60..ca18c09ccf82 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -148,12 +148,6 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
kvm_arm_init_hypercalls(kvm);
kvm_arm_init_id_regs(kvm);
- /*
- * Initialise the default PMUver before there is a chance to
- * create an actual PMU.
- */
- kvm->arch.dfr0_pmuver.imp = kvm_arm_pmu_get_pmuver_limit();
-
return 0;
err_free_cpumask:
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 3c52b136ade3..fefe83f8deda 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1178,9 +1178,12 @@ static bool access_arch_timer(struct kvm_vcpu *vcpu,
static u8 vcpu_pmuver(const struct kvm_vcpu *vcpu)
{
if (kvm_vcpu_has_pmu(vcpu))
- return vcpu->kvm->arch.dfr0_pmuver.imp;
+ return FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer),
+ IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1));
+ else if (test_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags))
+ return ID_AA64DFR0_EL1_PMUVer_IMP_DEF;
- return vcpu->kvm->arch.dfr0_pmuver.unimp;
+ return 0;
}
static u8 perfmon_to_pmuver(u8 perfmon)
@@ -1402,8 +1405,11 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
const struct sys_reg_desc *rd,
u64 val)
{
+ struct kvm_arch *arch = &vcpu->kvm->arch;
u8 pmuver, host_pmuver;
bool valid_pmu;
+ u64 sval = val;
+ int ret = 0;
host_pmuver = kvm_arm_pmu_get_pmuver_limit();
@@ -1423,26 +1429,50 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
if (kvm_vcpu_has_pmu(vcpu) != valid_pmu)
return -EINVAL;
+ mutex_lock(&arch->config_lock);
/* We can only differ with PMUver, and anything else is an error */
val ^= read_id_reg(vcpu, rd);
val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer);
- if (val)
- return -EINVAL;
+ if (val) {
+ ret = -EINVAL;
+ goto out;
+ }
- if (valid_pmu)
- vcpu->kvm->arch.dfr0_pmuver.imp = pmuver;
- else
- vcpu->kvm->arch.dfr0_pmuver.unimp = pmuver;
+ /* Only allow userspace to change the idregs before VM running */
+ if (test_bit(KVM_ARCH_FLAG_HAS_RAN_ONCE, &vcpu->kvm->arch.flags)) {
+ if (sval != read_id_reg(vcpu, rd))
+ ret = -EBUSY;
+ } else {
+ if (valid_pmu) {
+ val = IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1);
+ val &= ~ID_AA64DFR0_EL1_PMUVer_MASK;
+ val |= FIELD_PREP(ID_AA64DFR0_EL1_PMUVer_MASK, pmuver);
+ IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1) = val;
+
+ val = IDREG(vcpu->kvm, SYS_ID_DFR0_EL1);
+ val &= ~ID_DFR0_EL1_PerfMon_MASK;
+ val |= FIELD_PREP(ID_DFR0_EL1_PerfMon_MASK, pmuver_to_perfmon(pmuver));
+ IDREG(vcpu->kvm, SYS_ID_DFR0_EL1) = val;
+ } else {
+ assign_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags,
+ pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF);
+ }
+ }
- return 0;
+out:
+ mutex_unlock(&arch->config_lock);
+ return ret;
}
static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
const struct sys_reg_desc *rd,
u64 val)
{
+ struct kvm_arch *arch = &vcpu->kvm->arch;
u8 perfmon, host_perfmon;
bool valid_pmu;
+ u64 sval = val;
+ int ret = 0;
host_perfmon = pmuver_to_perfmon(kvm_arm_pmu_get_pmuver_limit());
@@ -1463,18 +1493,39 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
if (kvm_vcpu_has_pmu(vcpu) != valid_pmu)
return -EINVAL;
+ mutex_lock(&arch->config_lock);
/* We can only differ with PerfMon, and anything else is an error */
val ^= read_id_reg(vcpu, rd);
val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon);
- if (val)
- return -EINVAL;
+ if (val) {
+ ret = -EINVAL;
+ goto out;
+ }
- if (valid_pmu)
- vcpu->kvm->arch.dfr0_pmuver.imp = perfmon_to_pmuver(perfmon);
- else
- vcpu->kvm->arch.dfr0_pmuver.unimp = perfmon_to_pmuver(perfmon);
+ /* Only allow userspace to change the idregs before VM running */
+ if (test_bit(KVM_ARCH_FLAG_HAS_RAN_ONCE, &vcpu->kvm->arch.flags)) {
+ if (sval != read_id_reg(vcpu, rd))
+ ret = -EBUSY;
+ } else {
+ if (valid_pmu) {
+ val = IDREG(vcpu->kvm, SYS_ID_DFR0_EL1);
+ val &= ~ID_DFR0_EL1_PerfMon_MASK;
+ val |= FIELD_PREP(ID_DFR0_EL1_PerfMon_MASK, perfmon);
+ IDREG(vcpu->kvm, SYS_ID_DFR0_EL1) = val;
+
+ val = IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1);
+ val &= ~ID_AA64DFR0_EL1_PMUVer_MASK;
+ val |= FIELD_PREP(ID_AA64DFR0_EL1_PMUVer_MASK, perfmon_to_pmuver(perfmon));
+ IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1) = val;
+ } else {
+ assign_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags,
+ perfmon == ID_DFR0_EL1_PerfMon_IMPDEF);
+ }
+ }
- return 0;
+out:
+ mutex_unlock(&arch->config_lock);
+ return ret;
}
/*
@@ -3421,6 +3472,17 @@ void kvm_arm_init_id_regs(struct kvm *kvm)
}
IDREG(kvm, SYS_ID_AA64PFR0_EL1) = val;
+ /*
+ * Initialise the default PMUver before there is a chance to
+ * create an actual PMU.
+ */
+ val = IDREG(kvm, SYS_ID_AA64DFR0_EL1);
+
+ val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer);
+ val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer),
+ kvm_arm_pmu_get_pmuver_limit());
+
+ IDREG(kvm, SYS_ID_AA64DFR0_EL1) = val;
}
int __init kvm_sys_reg_table_init(void)
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 1a6a695ca67a..8d70dbdc1e0a 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -92,8 +92,9 @@ void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
/*
* Evaluates as true when emulating PMUv3p5, and false otherwise.
*/
-#define kvm_pmu_is_3p5(vcpu) \
- (vcpu->kvm->arch.dfr0_pmuver.imp >= ID_AA64DFR0_EL1_PMUVer_V3P5)
+#define kvm_pmu_is_3p5(vcpu) \
+ (FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), \
+ IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1)) >= ID_AA64DFR0_EL1_PMUVer_V3P5)
u8 kvm_arm_pmu_get_pmuver_limit(void);
--
2.40.1.606.ga4b1b128d6-goog
* [PATCH v9 4/5] KVM: arm64: Reuse fields of sys_reg_desc for idreg
2023-05-17 6:10 [PATCH v9 0/5] Support writable CPU ID registers from userspace Jing Zhang
` (2 preceding siblings ...)
2023-05-17 6:10 ` [PATCH v9 3/5] KVM: arm64: Use per guest ID register for ID_AA64DFR0_EL1.PMUVer Jing Zhang
@ 2023-05-17 6:10 ` Jing Zhang
2023-05-17 6:10 ` [PATCH v9 5/5] KVM: arm64: Refactor writings for PMUVer/CSV2/CSV3 Jing Zhang
4 siblings, 0 replies; 16+ messages in thread
From: Jing Zhang @ 2023-05-17 6:10 UTC (permalink / raw)
To: KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton
Cc: Will Deacon, Paolo Bonzini, James Morse, Alexandru Elisei,
Suzuki K Poulose, Fuad Tabba, Reiji Watanabe,
Raghavendra Rao Ananta, Jing Zhang
Since reset() and val are not used for idregs in sys_reg_desc, they can
be repurposed for idregs.
The reset() callback is now used to return the KVM sanitised value of an
ID register, and the u64 val is used as a mask for the writable fields
of the idreg. Only bits with 1 in val are writable from userspace.
Signed-off-by: Jing Zhang <jingzhangos@google.com>
---
arch/arm64/kvm/sys_regs.c | 101 +++++++++++++++++++++++++++-----------
arch/arm64/kvm/sys_regs.h | 15 ++++--
2 files changed, 82 insertions(+), 34 deletions(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index fefe83f8deda..1b5dada9aad7 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -541,10 +541,11 @@ static int get_bvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
return 0;
}
-static void reset_bvr(struct kvm_vcpu *vcpu,
+static u64 reset_bvr(struct kvm_vcpu *vcpu,
const struct sys_reg_desc *rd)
{
vcpu->arch.vcpu_debug_state.dbg_bvr[rd->CRm] = rd->val;
+ return rd->val;
}
static bool trap_bcr(struct kvm_vcpu *vcpu,
@@ -577,10 +578,11 @@ static int get_bcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
return 0;
}
-static void reset_bcr(struct kvm_vcpu *vcpu,
+static u64 reset_bcr(struct kvm_vcpu *vcpu,
const struct sys_reg_desc *rd)
{
vcpu->arch.vcpu_debug_state.dbg_bcr[rd->CRm] = rd->val;
+ return rd->val;
}
static bool trap_wvr(struct kvm_vcpu *vcpu,
@@ -614,10 +616,11 @@ static int get_wvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
return 0;
}
-static void reset_wvr(struct kvm_vcpu *vcpu,
+static u64 reset_wvr(struct kvm_vcpu *vcpu,
const struct sys_reg_desc *rd)
{
vcpu->arch.vcpu_debug_state.dbg_wvr[rd->CRm] = rd->val;
+ return rd->val;
}
static bool trap_wcr(struct kvm_vcpu *vcpu,
@@ -650,25 +653,28 @@ static int get_wcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
return 0;
}
-static void reset_wcr(struct kvm_vcpu *vcpu,
+static u64 reset_wcr(struct kvm_vcpu *vcpu,
const struct sys_reg_desc *rd)
{
vcpu->arch.vcpu_debug_state.dbg_wcr[rd->CRm] = rd->val;
+ return rd->val;
}
-static void reset_amair_el1(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+static u64 reset_amair_el1(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
{
u64 amair = read_sysreg(amair_el1);
vcpu_write_sys_reg(vcpu, amair, AMAIR_EL1);
+ return amair;
}
-static void reset_actlr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+static u64 reset_actlr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
{
u64 actlr = read_sysreg(actlr_el1);
vcpu_write_sys_reg(vcpu, actlr, ACTLR_EL1);
+ return actlr;
}
-static void reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+static u64 reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
{
u64 mpidr;
@@ -682,7 +688,10 @@ static void reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
mpidr = (vcpu->vcpu_id & 0x0f) << MPIDR_LEVEL_SHIFT(0);
mpidr |= ((vcpu->vcpu_id >> 4) & 0xff) << MPIDR_LEVEL_SHIFT(1);
mpidr |= ((vcpu->vcpu_id >> 12) & 0xff) << MPIDR_LEVEL_SHIFT(2);
- vcpu_write_sys_reg(vcpu, (1ULL << 31) | mpidr, MPIDR_EL1);
+ mpidr |= (1ULL << 31);
+ vcpu_write_sys_reg(vcpu, mpidr, MPIDR_EL1);
+
+ return mpidr;
}
static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu,
@@ -694,13 +703,13 @@ static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu,
return REG_HIDDEN;
}
-static void reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+static u64 reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
{
u64 n, mask = BIT(ARMV8_PMU_CYCLE_IDX);
/* No PMU available, any PMU reg may UNDEF... */
if (!kvm_arm_support_pmu_v3())
- return;
+ return 0;
n = read_sysreg(pmcr_el0) >> ARMV8_PMU_PMCR_N_SHIFT;
n &= ARMV8_PMU_PMCR_N_MASK;
@@ -709,33 +718,41 @@ static void reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
reset_unknown(vcpu, r);
__vcpu_sys_reg(vcpu, r->reg) &= mask;
+
+ return __vcpu_sys_reg(vcpu, r->reg);
}
-static void reset_pmevcntr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+static u64 reset_pmevcntr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
{
reset_unknown(vcpu, r);
__vcpu_sys_reg(vcpu, r->reg) &= GENMASK(31, 0);
+
+ return __vcpu_sys_reg(vcpu, r->reg);
}
-static void reset_pmevtyper(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+static u64 reset_pmevtyper(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
{
reset_unknown(vcpu, r);
__vcpu_sys_reg(vcpu, r->reg) &= ARMV8_PMU_EVTYPE_MASK;
+
+ return __vcpu_sys_reg(vcpu, r->reg);
}
-static void reset_pmselr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+static u64 reset_pmselr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
{
reset_unknown(vcpu, r);
__vcpu_sys_reg(vcpu, r->reg) &= ARMV8_PMU_COUNTER_MASK;
+
+ return __vcpu_sys_reg(vcpu, r->reg);
}
-static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+static u64 reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
{
u64 pmcr;
/* No PMU available, PMCR_EL0 may UNDEF... */
if (!kvm_arm_support_pmu_v3())
- return;
+ return 0;
/* Only preserve PMCR_EL0.N, and reset the rest to 0 */
pmcr = read_sysreg(pmcr_el0) & (ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
@@ -743,6 +760,8 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
pmcr |= ARMV8_PMU_PMCR_LC;
__vcpu_sys_reg(vcpu, r->reg) = pmcr;
+
+ return __vcpu_sys_reg(vcpu, r->reg);
}
static bool check_pmu_access_disabled(struct kvm_vcpu *vcpu, u64 flags)
@@ -1212,6 +1231,11 @@ static u8 pmuver_to_perfmon(u8 pmuver)
}
}
+static u64 general_read_kvm_sanitised_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd)
+{
+ return read_sanitised_ftr_reg(reg_to_encoding(rd));
+}
+
static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id)
{
u64 val = IDREG(vcpu->kvm, id);
@@ -1594,7 +1618,7 @@ static bool access_clidr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
* Fabricate a CLIDR_EL1 value instead of using the real value, which can vary
* by the physical CPU which the vcpu currently resides in.
*/
-static void reset_clidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+static u64 reset_clidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
{
u64 ctr_el0 = read_sanitised_ftr_reg(SYS_CTR_EL0);
u64 clidr;
@@ -1642,6 +1666,8 @@ static void reset_clidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
clidr |= 2 << CLIDR_TTYPE_SHIFT(loc);
__vcpu_sys_reg(vcpu, r->reg) = clidr;
+
+ return __vcpu_sys_reg(vcpu, r->reg);
}
static int set_clidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
@@ -1741,6 +1767,17 @@ static unsigned int elx2_visibility(const struct kvm_vcpu *vcpu,
.visibility = elx2_visibility, \
}
+/*
+ * Since the reset() callback and field val are otherwise unused for idregs,
+ * they are repurposed for idregs as follows:
+ * The reset() callback returns the KVM sanitised register value, which is
+ * the same as the host kernel sanitised value if there is no KVM-specific
+ * sanitisation.
+ * The val field is used as a mask of the idreg fields writable from
+ * userspace: only bits set to 1 are writable. This mask may become
+ * unnecessary once all ID registers are writable from userspace.
+ */
+
/* sys_reg_desc initialiser for known cpufeature ID registers */
#define ID_SANITISED(name) { \
SYS_DESC(SYS_##name), \
@@ -1748,6 +1785,8 @@ static unsigned int elx2_visibility(const struct kvm_vcpu *vcpu,
.get_user = get_id_reg, \
.set_user = set_id_reg, \
.visibility = id_visibility, \
+ .reset = general_read_kvm_sanitised_reg,\
+ .val = 0, \
}
/* sys_reg_desc initialiser for known cpufeature ID registers */
@@ -1757,6 +1796,8 @@ static unsigned int elx2_visibility(const struct kvm_vcpu *vcpu,
.get_user = get_id_reg, \
.set_user = set_id_reg, \
.visibility = aa32_id_visibility, \
+ .reset = general_read_kvm_sanitised_reg,\
+ .val = 0, \
}
/*
@@ -1769,7 +1810,9 @@ static unsigned int elx2_visibility(const struct kvm_vcpu *vcpu,
.access = access_id_reg, \
.get_user = get_id_reg, \
.set_user = set_id_reg, \
- .visibility = raz_visibility \
+ .visibility = raz_visibility, \
+ .reset = NULL, \
+ .val = 0, \
}
/*
@@ -1783,6 +1826,8 @@ static unsigned int elx2_visibility(const struct kvm_vcpu *vcpu,
.get_user = get_id_reg, \
.set_user = set_id_reg, \
.visibility = raz_visibility, \
+ .reset = NULL, \
+ .val = 0, \
}
static bool access_sp_el1(struct kvm_vcpu *vcpu,
@@ -3119,19 +3164,21 @@ id_to_sys_reg_desc(struct kvm_vcpu *vcpu, u64 id,
*/
#define FUNCTION_INVARIANT(reg) \
- static void get_##reg(struct kvm_vcpu *v, \
+ static u64 get_##reg(struct kvm_vcpu *v, \
const struct sys_reg_desc *r) \
{ \
((struct sys_reg_desc *)r)->val = read_sysreg(reg); \
+ return ((struct sys_reg_desc *)r)->val; \
}
FUNCTION_INVARIANT(midr_el1)
FUNCTION_INVARIANT(revidr_el1)
FUNCTION_INVARIANT(aidr_el1)
-static void get_ctr_el0(struct kvm_vcpu *v, const struct sys_reg_desc *r)
+static u64 get_ctr_el0(struct kvm_vcpu *v, const struct sys_reg_desc *r)
{
((struct sys_reg_desc *)r)->val = read_sanitised_ftr_reg(SYS_CTR_EL0);
+ return ((struct sys_reg_desc *)r)->val;
}
/* ->val is filled in by kvm_sys_reg_table_init() */
@@ -3421,9 +3468,7 @@ int kvm_arm_copy_sys_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
return write_demux_regids(uindices);
}
-/*
- * Set the guest's ID registers with ID_SANITISED() to the host's sanitized value.
- */
+/* Initialize the guest's ID registers with KVM sanitised values. */
void kvm_arm_init_id_regs(struct kvm *kvm)
{
const struct sys_reg_desc *idreg;
@@ -3440,13 +3485,11 @@ void kvm_arm_init_id_regs(struct kvm *kvm)
/* Initialize all idregs */
while (is_id_reg(id)) {
- /*
- * Some hidden ID registers which are not in arm64_ftr_regs[]
- * would cause warnings from read_sanitised_ftr_reg().
- * Skip those ID registers to avoid the warnings.
- */
- if (idreg->visibility != raz_visibility)
- IDREG(kvm, id) = read_sanitised_ftr_reg(id);
+ val = 0;
+ /* Read KVM sanitised register value if available */
+ if (idreg->reset)
+ val = idreg->reset(NULL, idreg);
+ IDREG(kvm, id) = val;
idreg++;
id = reg_to_encoding(idreg);
diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
index eba10de2e7ae..c65c129b3500 100644
--- a/arch/arm64/kvm/sys_regs.h
+++ b/arch/arm64/kvm/sys_regs.h
@@ -71,13 +71,16 @@ struct sys_reg_desc {
struct sys_reg_params *,
const struct sys_reg_desc *);
- /* Initialization for vcpu. */
- void (*reset)(struct kvm_vcpu *, const struct sys_reg_desc *);
+	/*
+	 * Initialization for vcpu. Returns the initialized value, or the
+	 * KVM sanitised value for ID registers.
+	 */
+ u64 (*reset)(struct kvm_vcpu *, const struct sys_reg_desc *);
/* Index into sys_reg[], or 0 if we don't need to save it. */
int reg;
- /* Value (usually reset value) */
+ /* Value (usually reset value), or write mask for idregs */
u64 val;
/* Custom get/set_user functions, fallback to generic if NULL */
@@ -130,19 +133,21 @@ static inline bool read_zero(struct kvm_vcpu *vcpu,
}
/* Reset functions */
-static inline void reset_unknown(struct kvm_vcpu *vcpu,
+static inline u64 reset_unknown(struct kvm_vcpu *vcpu,
const struct sys_reg_desc *r)
{
BUG_ON(!r->reg);
BUG_ON(r->reg >= NR_SYS_REGS);
__vcpu_sys_reg(vcpu, r->reg) = 0x1de7ec7edbadc0deULL;
+ return __vcpu_sys_reg(vcpu, r->reg);
}
-static inline void reset_val(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+static inline u64 reset_val(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
{
BUG_ON(!r->reg);
BUG_ON(r->reg >= NR_SYS_REGS);
__vcpu_sys_reg(vcpu, r->reg) = r->val;
+ return __vcpu_sys_reg(vcpu, r->reg);
}
static inline unsigned int sysreg_visibility(const struct kvm_vcpu *vcpu,
--
2.40.1.606.ga4b1b128d6-goog
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
* [PATCH v9 5/5] KVM: arm64: Refactor writings for PMUVer/CSV2/CSV3
2023-05-17 6:10 [PATCH v9 0/5] Support writable CPU ID registers from userspace Jing Zhang
` (3 preceding siblings ...)
2023-05-17 6:10 ` [PATCH v9 4/5] KVM: arm64: Reuse fields of sys_reg_desc for idreg Jing Zhang
@ 2023-05-17 6:10 ` Jing Zhang
2023-06-02 1:03 ` Suraj Jitindar Singh
4 siblings, 1 reply; 16+ messages in thread
From: Jing Zhang @ 2023-05-17 6:10 UTC (permalink / raw)
To: KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton
Cc: Will Deacon, Paolo Bonzini, James Morse, Alexandru Elisei,
Suzuki K Poulose, Fuad Tabba, Reiji Watanabe,
Raghavendra Rao Ananta, Jing Zhang
Refactor writes to ID_AA64PFR0_EL1.[CSV2|CSV3],
ID_AA64DFR0_EL1.PMUVer and ID_DFR0_EL1.PerfMon based on the new
utilities specific to ID registers.
Signed-off-by: Jing Zhang <jingzhangos@google.com>
---
arch/arm64/include/asm/cpufeature.h | 1 +
arch/arm64/kernel/cpufeature.c | 2 +-
arch/arm64/kvm/sys_regs.c | 362 ++++++++++++++++++----------
3 files changed, 243 insertions(+), 122 deletions(-)
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 6bf013fb110d..dc769c2eb7a4 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -915,6 +915,7 @@ static inline unsigned int get_vmid_bits(u64 mmfr1)
return 8;
}
+s64 arm64_ftr_safe_value(const struct arm64_ftr_bits *ftrp, s64 new, s64 cur);
struct arm64_ftr_reg *get_arm64_ftr_reg(u32 sys_id);
extern struct arm64_ftr_override id_aa64mmfr1_override;
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 7d7128c65161..3317a7b6deac 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -798,7 +798,7 @@ static u64 arm64_ftr_set_value(const struct arm64_ftr_bits *ftrp, s64 reg,
return reg;
}
-static s64 arm64_ftr_safe_value(const struct arm64_ftr_bits *ftrp, s64 new,
+s64 arm64_ftr_safe_value(const struct arm64_ftr_bits *ftrp, s64 new,
s64 cur)
{
s64 ret = 0;
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 1b5dada9aad7..bec02ba45ee7 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -41,6 +41,7 @@
* 64bit interface.
*/
+static int set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, u64 val);
static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id);
static u64 sys_reg_to_index(const struct sys_reg_desc *reg);
@@ -1194,6 +1195,86 @@ static bool access_arch_timer(struct kvm_vcpu *vcpu,
return true;
}
+static s64 kvm_arm64_ftr_safe_value(u32 id, const struct arm64_ftr_bits *ftrp,
+ s64 new, s64 cur)
+{
+ struct arm64_ftr_bits kvm_ftr = *ftrp;
+
+	/* Some features have a different safe-value type in KVM than on the host */
+ switch (id) {
+ case SYS_ID_AA64DFR0_EL1:
+ if (kvm_ftr.shift == ID_AA64DFR0_EL1_PMUVer_SHIFT)
+ kvm_ftr.type = FTR_LOWER_SAFE;
+ break;
+ case SYS_ID_DFR0_EL1:
+ if (kvm_ftr.shift == ID_DFR0_EL1_PerfMon_SHIFT)
+ kvm_ftr.type = FTR_LOWER_SAFE;
+ break;
+ }
+
+ return arm64_ftr_safe_value(&kvm_ftr, new, cur);
+}
+
+/**
+ * arm64_check_features() - Check if a feature register value constitutes
+ * a subset of features indicated by the idreg's KVM sanitised limit.
+ *
+ * This function checks that each feature field of @val is the "safe" value
+ * against the idreg's KVM sanitised limit returned by the reset() callback.
+ * If a field value in @val is the same as the one in the limit, it is
+ * always considered safe. For register fields that are not writable, only
+ * the value in the limit is considered safe.
+ *
+ * Return: 0 if all the fields are safe. Otherwise, return negative errno.
+ */
+static int arm64_check_features(struct kvm_vcpu *vcpu,
+ const struct sys_reg_desc *rd,
+ u64 val)
+{
+ const struct arm64_ftr_reg *ftr_reg;
+ const struct arm64_ftr_bits *ftrp = NULL;
+ u32 id = reg_to_encoding(rd);
+ u64 writable_mask = rd->val;
+ u64 limit = 0;
+ u64 mask = 0;
+
+ /* For hidden and unallocated idregs without reset, only val = 0 is allowed. */
+ if (rd->reset) {
+ limit = rd->reset(vcpu, rd);
+ ftr_reg = get_arm64_ftr_reg(id);
+ if (!ftr_reg)
+ return -EINVAL;
+ ftrp = ftr_reg->ftr_bits;
+ }
+
+ for (; ftrp && ftrp->width; ftrp++) {
+ s64 f_val, f_lim, safe_val;
+ u64 ftr_mask;
+
+ ftr_mask = arm64_ftr_mask(ftrp);
+ if ((ftr_mask & writable_mask) != ftr_mask)
+ continue;
+
+ f_val = arm64_ftr_value(ftrp, val);
+ f_lim = arm64_ftr_value(ftrp, limit);
+ mask |= ftr_mask;
+
+ if (f_val == f_lim)
+ safe_val = f_val;
+ else
+ safe_val = kvm_arm64_ftr_safe_value(id, ftrp, f_val, f_lim);
+
+ if (safe_val != f_val)
+ return -E2BIG;
+ }
+
+ /* For fields that are not writable, values in limit are the safe values. */
+ if ((val & ~mask) != (limit & ~mask))
+ return -E2BIG;
+
+ return 0;
+}
+
static u8 vcpu_pmuver(const struct kvm_vcpu *vcpu)
{
if (kvm_vcpu_has_pmu(vcpu))
@@ -1244,7 +1325,6 @@ static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id)
case SYS_ID_AA64PFR0_EL1:
if (!vcpu_has_sve(vcpu))
val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE);
- val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU);
if (kvm_vgic_global_state.type == VGIC_V3) {
val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC);
val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC), 1);
@@ -1271,15 +1351,10 @@ static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id)
val &= ~ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_WFxT);
break;
case SYS_ID_AA64DFR0_EL1:
- /* Limit debug to ARMv8.0 */
- val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer);
- val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), 6);
/* Set PMUver to the required version */
val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer);
val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer),
vcpu_pmuver(vcpu));
- /* Hide SPE from guests */
- val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMSVer);
break;
case SYS_ID_DFR0_EL1:
val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon);
@@ -1378,14 +1453,40 @@ static unsigned int sve_visibility(const struct kvm_vcpu *vcpu,
return REG_HIDDEN;
}
+static u64 read_sanitised_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
+ const struct sys_reg_desc *rd)
+{
+ u64 val;
+ u32 id = reg_to_encoding(rd);
+
+ val = read_sanitised_ftr_reg(id);
+ /*
+ * The default is to expose CSV2 == 1 if the HW isn't affected.
+ * Although this is a per-CPU feature, we make it global because
+ * asymmetric systems are just a nuisance.
+ *
+ * Userspace can override this as long as it doesn't promise
+ * the impossible.
+ */
+ if (arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED) {
+ val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2);
+ val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), 1);
+ }
+ if (arm64_get_meltdown_state() == SPECTRE_UNAFFECTED) {
+ val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3);
+ val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), 1);
+ }
+
+ val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU);
+
+ return val;
+}
+
static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
const struct sys_reg_desc *rd,
u64 val)
{
- struct kvm_arch *arch = &vcpu->kvm->arch;
- u64 sval = val;
u8 csv2, csv3;
- int ret = 0;
/*
* Allow AA64PFR0_EL1.CSV2 to be set from userspace as long as
@@ -1403,26 +1504,30 @@ static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
(csv3 && arm64_get_meltdown_state() != SPECTRE_UNAFFECTED))
return -EINVAL;
- mutex_lock(&arch->config_lock);
- /* We can only differ with CSV[23], and anything else is an error */
- val ^= read_id_reg(vcpu, rd);
- val &= ~(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2) |
- ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3));
- if (val) {
- ret = -EINVAL;
- goto out;
- }
+ return set_id_reg(vcpu, rd, val);
+}
- /* Only allow userspace to change the idregs before VM running */
- if (test_bit(KVM_ARCH_FLAG_HAS_RAN_ONCE, &vcpu->kvm->arch.flags)) {
- if (sval != read_id_reg(vcpu, rd))
- ret = -EBUSY;
- } else {
- IDREG(vcpu->kvm, reg_to_encoding(rd)) = sval;
- }
-out:
- mutex_unlock(&arch->config_lock);
- return ret;
+static u64 read_sanitised_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
+ const struct sys_reg_desc *rd)
+{
+ u64 val;
+ u32 id = reg_to_encoding(rd);
+
+ val = read_sanitised_ftr_reg(id);
+ /* Limit debug to ARMv8.0 */
+ val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer);
+ val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), 6);
+ /*
+ * Initialise the default PMUver before there is a chance to
+ * create an actual PMU.
+ */
+ val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer);
+ val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer),
+ kvm_arm_pmu_get_pmuver_limit());
+ /* Hide SPE from guests */
+ val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMSVer);
+
+ return val;
}
static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
@@ -1432,7 +1537,6 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
struct kvm_arch *arch = &vcpu->kvm->arch;
u8 pmuver, host_pmuver;
bool valid_pmu;
- u64 sval = val;
int ret = 0;
host_pmuver = kvm_arm_pmu_get_pmuver_limit();
@@ -1454,40 +1558,61 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
return -EINVAL;
mutex_lock(&arch->config_lock);
- /* We can only differ with PMUver, and anything else is an error */
- val ^= read_id_reg(vcpu, rd);
- val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer);
- if (val) {
- ret = -EINVAL;
- goto out;
- }
-
/* Only allow userspace to change the idregs before VM running */
if (test_bit(KVM_ARCH_FLAG_HAS_RAN_ONCE, &vcpu->kvm->arch.flags)) {
- if (sval != read_id_reg(vcpu, rd))
+ if (val != read_id_reg(vcpu, rd))
ret = -EBUSY;
- } else {
- if (valid_pmu) {
- val = IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1);
- val &= ~ID_AA64DFR0_EL1_PMUVer_MASK;
- val |= FIELD_PREP(ID_AA64DFR0_EL1_PMUVer_MASK, pmuver);
- IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1) = val;
-
- val = IDREG(vcpu->kvm, SYS_ID_DFR0_EL1);
- val &= ~ID_DFR0_EL1_PerfMon_MASK;
- val |= FIELD_PREP(ID_DFR0_EL1_PerfMon_MASK, pmuver_to_perfmon(pmuver));
- IDREG(vcpu->kvm, SYS_ID_DFR0_EL1) = val;
- } else {
- assign_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags,
- pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF);
- }
+ goto out;
}
+ if (!valid_pmu) {
+		/*
+		 * Ignore the PMUVer field in @val. The PMUVer is determined
+		 * by the arch flag KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU.
+		 */
+ pmuver = FIELD_GET(ID_AA64DFR0_EL1_PMUVer_MASK,
+ IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1));
+ val &= ~ID_AA64DFR0_EL1_PMUVer_MASK;
+ val |= FIELD_PREP(ID_AA64DFR0_EL1_PMUVer_MASK, pmuver);
+ }
+
+ ret = arm64_check_features(vcpu, rd, val);
+ if (ret)
+ goto out;
+
+ IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1) = val;
+
+ val = IDREG(vcpu->kvm, SYS_ID_DFR0_EL1);
+ val &= ~ID_DFR0_EL1_PerfMon_MASK;
+ val |= FIELD_PREP(ID_DFR0_EL1_PerfMon_MASK, pmuver_to_perfmon(pmuver));
+ IDREG(vcpu->kvm, SYS_ID_DFR0_EL1) = val;
+
+ if (!valid_pmu)
+ assign_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags,
+ pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF);
+
out:
mutex_unlock(&arch->config_lock);
return ret;
}
+static u64 read_sanitised_id_dfr0_el1(struct kvm_vcpu *vcpu,
+ const struct sys_reg_desc *rd)
+{
+ u64 val;
+ u32 id = reg_to_encoding(rd);
+
+ val = read_sanitised_ftr_reg(id);
+	/*
+	 * Initialise the default PerfMon before there is a chance to
+	 * create an actual PMU.
+	 */
+ val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon);
+ val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon), kvm_arm_pmu_get_pmuver_limit());
+
+ return val;
+}
+
static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
const struct sys_reg_desc *rd,
u64 val)
@@ -1495,7 +1620,6 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
struct kvm_arch *arch = &vcpu->kvm->arch;
u8 perfmon, host_perfmon;
bool valid_pmu;
- u64 sval = val;
int ret = 0;
host_perfmon = pmuver_to_perfmon(kvm_arm_pmu_get_pmuver_limit());
@@ -1518,35 +1642,39 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
return -EINVAL;
mutex_lock(&arch->config_lock);
- /* We can only differ with PerfMon, and anything else is an error */
- val ^= read_id_reg(vcpu, rd);
- val &= ~ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon);
- if (val) {
- ret = -EINVAL;
- goto out;
- }
-
/* Only allow userspace to change the idregs before VM running */
if (test_bit(KVM_ARCH_FLAG_HAS_RAN_ONCE, &vcpu->kvm->arch.flags)) {
- if (sval != read_id_reg(vcpu, rd))
+ if (val != read_id_reg(vcpu, rd))
ret = -EBUSY;
- } else {
- if (valid_pmu) {
- val = IDREG(vcpu->kvm, SYS_ID_DFR0_EL1);
- val &= ~ID_DFR0_EL1_PerfMon_MASK;
- val |= FIELD_PREP(ID_DFR0_EL1_PerfMon_MASK, perfmon);
- IDREG(vcpu->kvm, SYS_ID_DFR0_EL1) = val;
-
- val = IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1);
- val &= ~ID_AA64DFR0_EL1_PMUVer_MASK;
- val |= FIELD_PREP(ID_AA64DFR0_EL1_PMUVer_MASK, perfmon_to_pmuver(perfmon));
- IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1) = val;
- } else {
- assign_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags,
- perfmon == ID_DFR0_EL1_PerfMon_IMPDEF);
- }
+ goto out;
}
+ if (!valid_pmu) {
+		/*
+		 * Ignore the PerfMon field in @val. The PerfMon is determined
+		 * by the arch flag KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU.
+		 */
+ perfmon = FIELD_GET(ID_DFR0_EL1_PerfMon_MASK,
+ IDREG(vcpu->kvm, SYS_ID_DFR0_EL1));
+ val &= ~ID_DFR0_EL1_PerfMon_MASK;
+ val |= FIELD_PREP(ID_DFR0_EL1_PerfMon_MASK, perfmon);
+ }
+
+ ret = arm64_check_features(vcpu, rd, val);
+ if (ret)
+ goto out;
+
+ IDREG(vcpu->kvm, SYS_ID_DFR0_EL1) = val;
+
+ val = IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1);
+ val &= ~ID_AA64DFR0_EL1_PMUVer_MASK;
+ val |= FIELD_PREP(ID_AA64DFR0_EL1_PMUVer_MASK, perfmon_to_pmuver(perfmon));
+ IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1) = val;
+
+ if (!valid_pmu)
+ assign_bit(KVM_ARCH_FLAG_VCPU_HAS_IMP_DEF_PMU, &vcpu->kvm->arch.flags,
+ perfmon == ID_DFR0_EL1_PerfMon_IMPDEF);
+
out:
mutex_unlock(&arch->config_lock);
return ret;
@@ -1574,11 +1702,23 @@ static int get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
static int set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
u64 val)
{
- /* This is what we mean by invariant: you can't change it. */
- if (val != read_id_reg(vcpu, rd))
- return -EINVAL;
+ struct kvm_arch *arch = &vcpu->kvm->arch;
+ u32 id = reg_to_encoding(rd);
+ int ret = 0;
- return 0;
+ mutex_lock(&arch->config_lock);
+ /* Only allow userspace to change the idregs before VM running */
+ if (test_bit(KVM_ARCH_FLAG_HAS_RAN_ONCE, &vcpu->kvm->arch.flags)) {
+ if (val != read_id_reg(vcpu, rd))
+ ret = -EBUSY;
+ } else {
+ ret = arm64_check_features(vcpu, rd, val);
+ if (!ret)
+ IDREG(vcpu->kvm, id) = val;
+ }
+ mutex_unlock(&arch->config_lock);
+
+ return ret;
}
static int get_raz_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
@@ -1929,9 +2069,13 @@ static const struct sys_reg_desc sys_reg_descs[] = {
/* CRm=1 */
AA32_ID_SANITISED(ID_PFR0_EL1),
AA32_ID_SANITISED(ID_PFR1_EL1),
- { SYS_DESC(SYS_ID_DFR0_EL1), .access = access_id_reg,
- .get_user = get_id_reg, .set_user = set_id_dfr0_el1,
- .visibility = aa32_id_visibility, },
+ { SYS_DESC(SYS_ID_DFR0_EL1),
+ .access = access_id_reg,
+ .get_user = get_id_reg,
+ .set_user = set_id_dfr0_el1,
+ .visibility = aa32_id_visibility,
+ .reset = read_sanitised_id_dfr0_el1,
+ .val = ID_DFR0_EL1_PerfMon_MASK, },
ID_HIDDEN(ID_AFR0_EL1),
AA32_ID_SANITISED(ID_MMFR0_EL1),
AA32_ID_SANITISED(ID_MMFR1_EL1),
@@ -1960,8 +2104,12 @@ static const struct sys_reg_desc sys_reg_descs[] = {
/* AArch64 ID registers */
/* CRm=4 */
- { SYS_DESC(SYS_ID_AA64PFR0_EL1), .access = access_id_reg,
- .get_user = get_id_reg, .set_user = set_id_aa64pfr0_el1, },
+ { SYS_DESC(SYS_ID_AA64PFR0_EL1),
+ .access = access_id_reg,
+ .get_user = get_id_reg,
+ .set_user = set_id_aa64pfr0_el1,
+ .reset = read_sanitised_id_aa64pfr0_el1,
+ .val = ID_AA64PFR0_EL1_CSV2_MASK | ID_AA64PFR0_EL1_CSV3_MASK, },
ID_SANITISED(ID_AA64PFR1_EL1),
ID_UNALLOCATED(4,2),
ID_UNALLOCATED(4,3),
@@ -1971,8 +2119,12 @@ static const struct sys_reg_desc sys_reg_descs[] = {
ID_UNALLOCATED(4,7),
/* CRm=5 */
- { SYS_DESC(SYS_ID_AA64DFR0_EL1), .access = access_id_reg,
- .get_user = get_id_reg, .set_user = set_id_aa64dfr0_el1, },
+ { SYS_DESC(SYS_ID_AA64DFR0_EL1),
+ .access = access_id_reg,
+ .get_user = get_id_reg,
+ .set_user = set_id_aa64dfr0_el1,
+ .reset = read_sanitised_id_aa64dfr0_el1,
+ .val = ID_AA64DFR0_EL1_PMUVer_MASK, },
ID_SANITISED(ID_AA64DFR1_EL1),
ID_UNALLOCATED(5,2),
ID_UNALLOCATED(5,3),
@@ -3494,38 +3646,6 @@ void kvm_arm_init_id_regs(struct kvm *kvm)
idreg++;
id = reg_to_encoding(idreg);
}
-
- /*
- * The default is to expose CSV2 == 1 if the HW isn't affected.
- * Although this is a per-CPU feature, we make it global because
- * asymmetric systems are just a nuisance.
- *
- * Userspace can override this as long as it doesn't promise
- * the impossible.
- */
- val = IDREG(kvm, SYS_ID_AA64PFR0_EL1);
-
- if (arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED) {
- val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2);
- val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), 1);
- }
- if (arm64_get_meltdown_state() == SPECTRE_UNAFFECTED) {
- val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3);
- val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), 1);
- }
-
- IDREG(kvm, SYS_ID_AA64PFR0_EL1) = val;
- /*
- * Initialise the default PMUver before there is a chance to
- * create an actual PMU.
- */
- val = IDREG(kvm, SYS_ID_AA64DFR0_EL1);
-
- val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer);
- val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer),
- kvm_arm_pmu_get_pmuver_limit());
-
- IDREG(kvm, SYS_ID_AA64DFR0_EL1) = val;
}
int __init kvm_sys_reg_table_init(void)
--
2.40.1.606.ga4b1b128d6-goog
* RE: [PATCH v9 1/5] KVM: arm64: Save ID registers' sanitized value per guest
2023-05-17 6:10 ` [PATCH v9 1/5] KVM: arm64: Save ID registers' sanitized value per guest Jing Zhang
@ 2023-05-18 7:17 ` Shameerali Kolothum Thodi
2023-05-18 19:48 ` Jing Zhang
0 siblings, 1 reply; 16+ messages in thread
From: Shameerali Kolothum Thodi @ 2023-05-18 7:17 UTC (permalink / raw)
To: Jing Zhang, KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton
Cc: Will Deacon, Paolo Bonzini, James Morse, Alexandru Elisei,
Suzuki K Poulose, Fuad Tabba, Reiji Watanabe,
Raghavendra Rao Ananta
> -----Original Message-----
> From: Jing Zhang [mailto:jingzhangos@google.com]
> Sent: 17 May 2023 07:10
> To: KVM <kvm@vger.kernel.org>; KVMARM <kvmarm@lists.linux.dev>;
> ARMLinux <linux-arm-kernel@lists.infradead.org>; Marc Zyngier
> <maz@kernel.org>; Oliver Upton <oupton@google.com>
> Cc: Will Deacon <will@kernel.org>; Paolo Bonzini <pbonzini@redhat.com>;
> James Morse <james.morse@arm.com>; Alexandru Elisei
> <alexandru.elisei@arm.com>; Suzuki K Poulose <suzuki.poulose@arm.com>;
> Fuad Tabba <tabba@google.com>; Reiji Watanabe <reijiw@google.com>;
> Raghavendra Rao Ananta <rananta@google.com>; Jing Zhang
> <jingzhangos@google.com>
> Subject: [PATCH v9 1/5] KVM: arm64: Save ID registers' sanitized value per
> guest
>
> Introduce id_regs[] in kvm_arch as a storage of guest's ID registers,
> and save ID registers' sanitized value in the array at KVM_CREATE_VM.
> Use the saved ones when ID registers are read by the guest or
> userspace (via KVM_GET_ONE_REG).
>
> No functional change intended.
>
> Co-developed-by: Reiji Watanabe <reijiw@google.com>
> Signed-off-by: Reiji Watanabe <reijiw@google.com>
> Signed-off-by: Jing Zhang <jingzhangos@google.com>
> ---
> arch/arm64/include/asm/kvm_host.h | 20 +++++++++
> arch/arm64/kvm/arm.c | 1 +
> arch/arm64/kvm/sys_regs.c | 69 +++++++++++++++++++++++++------
> arch/arm64/kvm/sys_regs.h | 7 ++++
> 4 files changed, 85 insertions(+), 12 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_host.h
> b/arch/arm64/include/asm/kvm_host.h
> index 7e7e19ef6993..949a4a782844 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -178,6 +178,21 @@ struct kvm_smccc_features {
> unsigned long vendor_hyp_bmap;
> };
>
> +/*
> + * Emulated CPU ID registers per VM
> + * (Op0, Op1, CRn, CRm, Op2) of the ID registers to be saved in it
> + * is (3, 0, 0, crm, op2), where 1<=crm<8, 0<=op2<8.
> + *
> + * These emulated idregs are VM-wide, but accessed from the context of a
> vCPU.
> + * Access to id regs are guarded by kvm_arch.config_lock.
> + */
> +#define KVM_ARM_ID_REG_NUM 56
> +#define IDREG_IDX(id) (((sys_reg_CRm(id) - 1) << 3) | sys_reg_Op2(id))
> +#define IDREG(kvm, id) ((kvm)->arch.idregs.regs[IDREG_IDX(id)])
> +struct kvm_idregs {
> + u64 regs[KVM_ARM_ID_REG_NUM];
> +};
>
Not sure we really need this struct here. Why can't this array be moved to
struct kvm_arch directly?
> typedef unsigned int pkvm_handle_t;
>
> struct kvm_protected_vm {
> @@ -253,6 +268,9 @@ struct kvm_arch {
> struct kvm_smccc_features smccc_feat;
> struct maple_tree smccc_filter;
>
> + /* Emulated CPU ID registers */
> + struct kvm_idregs idregs;
> +
> /*
> * For an untrusted host VM, 'pkvm.handle' is used to lookup
> * the associated pKVM instance in the hypervisor.
> int kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
> int kvm_vm_ioctl_set_counter_offset(struct kvm *kvm,
> struct kvm_arm_counter_offset *offset);
>
> +void kvm_arm_init_id_regs(struct kvm *kvm);
> +
> /* Guest/host FPSIMD coordination helpers */
> int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
> void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu);
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 14391826241c..774656a0718d 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -163,6 +163,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
>
> set_default_spectre(kvm);
> kvm_arm_init_hypercalls(kvm);
> + kvm_arm_init_id_regs(kvm);
>
> /*
> * Initialise the default PMUver before there is a chance to
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 71b12094d613..d2ee3a1c7f03 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -41,6 +41,7 @@
> * 64bit interface.
> */
>
> +static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id);
> static u64 sys_reg_to_index(const struct sys_reg_desc *reg);
>
> static bool read_from_write_only(struct kvm_vcpu *vcpu,
> @@ -364,7 +365,7 @@ static bool trap_loregion(struct kvm_vcpu *vcpu,
> struct sys_reg_params *p,
> const struct sys_reg_desc *r)
> {
> - u64 val = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
> + u64 val = kvm_arm_read_id_reg(vcpu, SYS_ID_AA64MMFR1_EL1);
> u32 sr = reg_to_encoding(r);
>
> if (!(val & (0xfUL << ID_AA64MMFR1_EL1_LO_SHIFT))) {
> @@ -1208,16 +1209,9 @@ static u8 pmuver_to_perfmon(u8 pmuver)
> }
> }
>
> -/* Read a sanitised cpufeature ID register by sys_reg_desc */
> -static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r)
> +static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id)
> {
> - u32 id = reg_to_encoding(r);
> - u64 val;
> -
> - if (sysreg_visible_as_raz(vcpu, r))
> - return 0;
> -
> - val = read_sanitised_ftr_reg(id);
> + u64 val = IDREG(vcpu->kvm, id);
>
> switch (id) {
> case SYS_ID_AA64PFR0_EL1:
> @@ -1280,6 +1274,26 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r
> return val;
> }
>
> +/* Read a sanitised cpufeature ID register by sys_reg_desc */
> +static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r)
> +{
> + if (sysreg_visible_as_raz(vcpu, r))
> + return 0;
> +
> + return kvm_arm_read_id_reg(vcpu, reg_to_encoding(r));
> +}
> +
> +/*
> + * Return true if the register's (Op0, Op1, CRn, CRm, Op2) is
> + * (3, 0, 0, crm, op2), where 1<=crm<8, 0<=op2<8.
> + */
> +static inline bool is_id_reg(u32 id)
> +{
> + return (sys_reg_Op0(id) == 3 && sys_reg_Op1(id) == 0 &&
> + sys_reg_CRn(id) == 0 && sys_reg_CRm(id) >= 1 &&
> + sys_reg_CRm(id) < 8);
> +}
> +
> static unsigned int id_visibility(const struct kvm_vcpu *vcpu,
> const struct sys_reg_desc *r)
> {
> @@ -2244,8 +2258,8 @@ static bool trap_dbgdidr(struct kvm_vcpu *vcpu,
> if (p->is_write) {
> return ignore_write(vcpu, p);
> } else {
> - u64 dfr = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
> - u64 pfr = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
> + u64 dfr = kvm_arm_read_id_reg(vcpu, SYS_ID_AA64DFR0_EL1);
> + u64 pfr = kvm_arm_read_id_reg(vcpu, SYS_ID_AA64PFR0_EL1);
Does this change the behavior slightly as now within the kvm_arm_read_id_reg()
the val will be further adjusted based on KVM/vCPU?
Thanks,
Shameer
> u32 el3 = !!cpuid_feature_extract_unsigned_field(pfr,
> ID_AA64PFR0_EL1_EL3_SHIFT);
>
> p->regval = ((((dfr >> ID_AA64DFR0_EL1_WRPs_SHIFT) & 0xf) << 28) |
> @@ -3343,6 +3357,37 @@ int kvm_arm_copy_sys_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
> return write_demux_regids(uindices);
> }
>
> +/*
> + * Set the guest's ID registers with ID_SANITISED() to the host's sanitized value.
> + */
> +void kvm_arm_init_id_regs(struct kvm *kvm)
> +{
> + const struct sys_reg_desc *idreg;
> + struct sys_reg_params params;
> + u32 id;
> +
> + /* Find the first idreg (SYS_ID_PFR0_EL1) in sys_reg_descs. */
> + id = SYS_ID_PFR0_EL1;
> + params = encoding_to_params(id);
> + idreg = find_reg(&params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
> + if (WARN_ON(!idreg))
> + return;
> +
> + /* Initialize all idregs */
> + while (is_id_reg(id)) {
> + /*
> + * Some hidden ID registers which are not in arm64_ftr_regs[]
> + * would cause warnings from read_sanitised_ftr_reg().
> + * Skip those ID registers to avoid the warnings.
> + */
> + if (idreg->visibility != raz_visibility)
> + IDREG(kvm, id) = read_sanitised_ftr_reg(id);
> +
> + idreg++;
> + id = reg_to_encoding(idreg);
> + }
> +}
> +
> int __init kvm_sys_reg_table_init(void)
> {
> bool valid = true;
> diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
> index 6b11f2cc7146..eba10de2e7ae 100644
> --- a/arch/arm64/kvm/sys_regs.h
> +++ b/arch/arm64/kvm/sys_regs.h
> @@ -27,6 +27,13 @@ struct sys_reg_params {
> bool is_write;
> };
>
> +#define encoding_to_params(reg) \
> + ((struct sys_reg_params){ .Op0 = sys_reg_Op0(reg), \
> + .Op1 = sys_reg_Op1(reg), \
> + .CRn = sys_reg_CRn(reg), \
> + .CRm = sys_reg_CRm(reg), \
> + .Op2 = sys_reg_Op2(reg) })
> +
> #define esr_sys64_to_params(esr)					\
> 	((struct sys_reg_params){ .Op0 = ((esr) >> 20) & 3,	\
> 	  .Op1 = ((esr) >> 14) & 0x7, \
> --
> 2.40.1.606.ga4b1b128d6-goog
>
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
* Re: [PATCH v9 1/5] KVM: arm64: Save ID registers' sanitized value per guest
2023-05-18 7:17 ` Shameerali Kolothum Thodi
@ 2023-05-18 19:48 ` Jing Zhang
2023-05-19 8:08 ` Shameerali Kolothum Thodi
0 siblings, 1 reply; 16+ messages in thread
From: Jing Zhang @ 2023-05-18 19:48 UTC (permalink / raw)
To: Shameerali Kolothum Thodi
Cc: KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton, Will Deacon,
Paolo Bonzini, James Morse, Alexandru Elisei, Suzuki K Poulose,
Fuad Tabba, Reiji Watanabe, Raghavendra Rao Ananta
Hi Shameerali,
On Thu, May 18, 2023 at 12:17 AM Shameerali Kolothum Thodi
<shameerali.kolothum.thodi@huawei.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Jing Zhang [mailto:jingzhangos@google.com]
> > Sent: 17 May 2023 07:10
> > To: KVM <kvm@vger.kernel.org>; KVMARM <kvmarm@lists.linux.dev>;
> > ARMLinux <linux-arm-kernel@lists.infradead.org>; Marc Zyngier
> > <maz@kernel.org>; Oliver Upton <oupton@google.com>
> > Cc: Will Deacon <will@kernel.org>; Paolo Bonzini <pbonzini@redhat.com>;
> > James Morse <james.morse@arm.com>; Alexandru Elisei
> > <alexandru.elisei@arm.com>; Suzuki K Poulose <suzuki.poulose@arm.com>;
> > Fuad Tabba <tabba@google.com>; Reiji Watanabe <reijiw@google.com>;
> > Raghavendra Rao Ananta <rananta@google.com>; Jing Zhang
> > <jingzhangos@google.com>
> > Subject: [PATCH v9 1/5] KVM: arm64: Save ID registers' sanitized value per
> > guest
> >
> > Introduce id_regs[] in kvm_arch as a storage of guest's ID registers,
> > and save ID registers' sanitized value in the array at KVM_CREATE_VM.
> > Use the saved ones when ID registers are read by the guest or
> > userspace (via KVM_GET_ONE_REG).
> >
> > No functional change intended.
> >
> > Co-developed-by: Reiji Watanabe <reijiw@google.com>
> > Signed-off-by: Reiji Watanabe <reijiw@google.com>
> > Signed-off-by: Jing Zhang <jingzhangos@google.com>
> > ---
> > arch/arm64/include/asm/kvm_host.h | 20 +++++++++
> > arch/arm64/kvm/arm.c | 1 +
> > arch/arm64/kvm/sys_regs.c | 69
> > +++++++++++++++++++++++++------
> > arch/arm64/kvm/sys_regs.h | 7 ++++
> > 4 files changed, 85 insertions(+), 12 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/kvm_host.h
> > b/arch/arm64/include/asm/kvm_host.h
> > index 7e7e19ef6993..949a4a782844 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -178,6 +178,21 @@ struct kvm_smccc_features {
> > unsigned long vendor_hyp_bmap;
> > };
> >
> > +/*
> > + * Emulated CPU ID registers per VM
> > + * (Op0, Op1, CRn, CRm, Op2) of the ID registers to be saved in it
> > + * is (3, 0, 0, crm, op2), where 1<=crm<8, 0<=op2<8.
> > + *
> > + * These emulated idregs are VM-wide, but accessed from the context of a
> > vCPU.
> > + * Access to id regs are guarded by kvm_arch.config_lock.
> > + */
> > +#define KVM_ARM_ID_REG_NUM 56
> > +#define IDREG_IDX(id) (((sys_reg_CRm(id) - 1) << 3) | sys_reg_Op2(id))
> > +#define IDREG(kvm, id) ((kvm)->arch.idregs.regs[IDREG_IDX(id)])
> > +struct kvm_idregs {
> > + u64 regs[KVM_ARM_ID_REG_NUM];
> > +};
> >
>
> Not sure we really need this struct here. Why can't this array be moved to
> struct kvm_arch directly?
It was put in kvm_arch directly before, then moved into its own
structure in v5 based on the comments here:
https://lore.kernel.org/all/861qlaxzyw.wl-maz@kernel.org/#t
>
> > typedef unsigned int pkvm_handle_t;
> >
> > struct kvm_protected_vm {
> > @@ -253,6 +268,9 @@ struct kvm_arch {
> > struct kvm_smccc_features smccc_feat;
> > struct maple_tree smccc_filter;
> >
> > + /* Emulated CPU ID registers */
> > + struct kvm_idregs idregs;
> > +
> > /*
> > * For an untrusted host VM, 'pkvm.handle' is used to lookup
> > * the associated pKVM instance in the hypervisor.
> > @@ -1045,6 +1063,8 @@ int kvm_vm_ioctl_mte_copy_tags(struct kvm
> > *kvm,
> > int kvm_vm_ioctl_set_counter_offset(struct kvm *kvm,
> > struct kvm_arm_counter_offset *offset);
> >
> > +void kvm_arm_init_id_regs(struct kvm *kvm);
> > +
> > /* Guest/host FPSIMD coordination helpers */
> > int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
> > void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu);
> > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > index 14391826241c..774656a0718d 100644
> > --- a/arch/arm64/kvm/arm.c
> > +++ b/arch/arm64/kvm/arm.c
> > @@ -163,6 +163,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned
> > long type)
> >
> > set_default_spectre(kvm);
> > kvm_arm_init_hypercalls(kvm);
> > + kvm_arm_init_id_regs(kvm);
> >
> > /*
> > * Initialise the default PMUver before there is a chance to
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index 71b12094d613..d2ee3a1c7f03 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -41,6 +41,7 @@
> > * 64bit interface.
> > */
> >
> > +static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id);
> > static u64 sys_reg_to_index(const struct sys_reg_desc *reg);
> >
> > static bool read_from_write_only(struct kvm_vcpu *vcpu,
> > @@ -364,7 +365,7 @@ static bool trap_loregion(struct kvm_vcpu *vcpu,
> > struct sys_reg_params *p,
> > const struct sys_reg_desc *r)
> > {
> > - u64 val = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
> > + u64 val = kvm_arm_read_id_reg(vcpu, SYS_ID_AA64MMFR1_EL1);
> > u32 sr = reg_to_encoding(r);
> >
> > if (!(val & (0xfUL << ID_AA64MMFR1_EL1_LO_SHIFT))) {
> > @@ -1208,16 +1209,9 @@ static u8 pmuver_to_perfmon(u8 pmuver)
> > }
> > }
> >
> > -/* Read a sanitised cpufeature ID register by sys_reg_desc */
> > -static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc
> > const *r)
> > +static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id)
> > {
> > - u32 id = reg_to_encoding(r);
> > - u64 val;
> > -
> > - if (sysreg_visible_as_raz(vcpu, r))
> > - return 0;
> > -
> > - val = read_sanitised_ftr_reg(id);
> > + u64 val = IDREG(vcpu->kvm, id);
> >
> > switch (id) {
> > case SYS_ID_AA64PFR0_EL1:
> > @@ -1280,6 +1274,26 @@ static u64 read_id_reg(const struct kvm_vcpu
> > *vcpu, struct sys_reg_desc const *r
> > return val;
> > }
> >
> > +/* Read a sanitised cpufeature ID register by sys_reg_desc */
> > +static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc
> > const *r)
> > +{
> > + if (sysreg_visible_as_raz(vcpu, r))
> > + return 0;
> > +
> > + return kvm_arm_read_id_reg(vcpu, reg_to_encoding(r));
> > +}
> > +
> > +/*
> > + * Return true if the register's (Op0, Op1, CRn, CRm, Op2) is
> > + * (3, 0, 0, crm, op2), where 1<=crm<8, 0<=op2<8.
> > + */
> > +static inline bool is_id_reg(u32 id)
> > +{
> > + return (sys_reg_Op0(id) == 3 && sys_reg_Op1(id) == 0 &&
> > + sys_reg_CRn(id) == 0 && sys_reg_CRm(id) >= 1 &&
> > + sys_reg_CRm(id) < 8);
> > +}
> > +
> > static unsigned int id_visibility(const struct kvm_vcpu *vcpu,
> > const struct sys_reg_desc *r)
> > {
> > @@ -2244,8 +2258,8 @@ static bool trap_dbgdidr(struct kvm_vcpu *vcpu,
> > if (p->is_write) {
> > return ignore_write(vcpu, p);
> > } else {
> > - u64 dfr = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
> > - u64 pfr = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
> > + u64 dfr = kvm_arm_read_id_reg(vcpu, SYS_ID_AA64DFR0_EL1);
> > + u64 pfr = kvm_arm_read_id_reg(vcpu, SYS_ID_AA64PFR0_EL1);
>
> Does this change the behavior slightly as now within the kvm_arm_read_id_reg()
> the val will be further adjusted based on KVM/vCPU?
That's a good question. Although the actual behavior is the same today
whether the idreg is read via read_sanitised_ftr_reg() or
kvm_arm_read_id_reg(), the behavior could change in the future.
Since every guest now has its own idregs, each guest's idregs should
be read via kvm_arm_read_id_reg() instead of
read_sanitised_ftr_reg().
The open question is whether, for trap_dbgdidr, we should read
AA64DFR0/AA64PFR0 from the host or from the VM scope.
>
> Thanks,
> Shameer
>
> > u32 el3 = !!cpuid_feature_extract_unsigned_field(pfr,
> > ID_AA64PFR0_EL1_EL3_SHIFT);
> >
> > p->regval = ((((dfr >> ID_AA64DFR0_EL1_WRPs_SHIFT) & 0xf) <<
> > 28) |
> > @@ -3343,6 +3357,37 @@ int kvm_arm_copy_sys_reg_indices(struct
> > kvm_vcpu *vcpu, u64 __user *uindices)
> > return write_demux_regids(uindices);
> > }
> >
> > +/*
> > + * Set the guest's ID registers with ID_SANITISED() to the host's sanitized
> > value.
> > + */
> > +void kvm_arm_init_id_regs(struct kvm *kvm)
> > +{
> > + const struct sys_reg_desc *idreg;
> > + struct sys_reg_params params;
> > + u32 id;
> > +
> > + /* Find the first idreg (SYS_ID_PFR0_EL1) in sys_reg_descs. */
> > + id = SYS_ID_PFR0_EL1;
> > + params = encoding_to_params(id);
> > > + idreg = find_reg(&params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
> > + if (WARN_ON(!idreg))
> > + return;
> > +
> > + /* Initialize all idregs */
> > + while (is_id_reg(id)) {
> > + /*
> > + * Some hidden ID registers which are not in arm64_ftr_regs[]
> > + * would cause warnings from read_sanitised_ftr_reg().
> > + * Skip those ID registers to avoid the warnings.
> > + */
> > + if (idreg->visibility != raz_visibility)
> > + IDREG(kvm, id) = read_sanitised_ftr_reg(id);
> > +
> > + idreg++;
> > + id = reg_to_encoding(idreg);
> > + }
> > +}
> > +
> > int __init kvm_sys_reg_table_init(void)
> > {
> > bool valid = true;
> > diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
> > index 6b11f2cc7146..eba10de2e7ae 100644
> > --- a/arch/arm64/kvm/sys_regs.h
> > +++ b/arch/arm64/kvm/sys_regs.h
> > @@ -27,6 +27,13 @@ struct sys_reg_params {
> > bool is_write;
> > };
> >
> > +#define encoding_to_params(reg) \
> > + ((struct sys_reg_params){ .Op0 = sys_reg_Op0(reg), \
> > + .Op1 = sys_reg_Op1(reg), \
> > + .CRn = sys_reg_CRn(reg), \
> > + .CRm = sys_reg_CRm(reg), \
> > + .Op2 = sys_reg_Op2(reg) })
> > +
> > #define esr_sys64_to_params(esr)
> > \
> > ((struct sys_reg_params){ .Op0 = ((esr) >> 20) & 3,
> > \
> > .Op1 = ((esr) >> 14) & 0x7, \
> > --
> > 2.40.1.606.ga4b1b128d6-goog
> >
>
Thanks,
Jing
* RE: [PATCH v9 1/5] KVM: arm64: Save ID registers' sanitized value per guest
2023-05-18 19:48 ` Jing Zhang
@ 2023-05-19 8:08 ` Shameerali Kolothum Thodi
2023-05-19 17:44 ` Jing Zhang
0 siblings, 1 reply; 16+ messages in thread
From: Shameerali Kolothum Thodi @ 2023-05-19 8:08 UTC (permalink / raw)
To: Jing Zhang
Cc: KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton, Will Deacon,
Paolo Bonzini, James Morse, Alexandru Elisei, Suzuki K Poulose,
Fuad Tabba, Reiji Watanabe, Raghavendra Rao Ananta
> -----Original Message-----
> From: Jing Zhang [mailto:jingzhangos@google.com]
> Sent: 18 May 2023 20:49
> To: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>
> Cc: KVM <kvm@vger.kernel.org>; KVMARM <kvmarm@lists.linux.dev>;
> ARMLinux <linux-arm-kernel@lists.infradead.org>; Marc Zyngier
> <maz@kernel.org>; Oliver Upton <oupton@google.com>; Will Deacon
> <will@kernel.org>; Paolo Bonzini <pbonzini@redhat.com>; James Morse
> <james.morse@arm.com>; Alexandru Elisei <alexandru.elisei@arm.com>;
> Suzuki K Poulose <suzuki.poulose@arm.com>; Fuad Tabba
> <tabba@google.com>; Reiji Watanabe <reijiw@google.com>; Raghavendra
> Rao Ananta <rananta@google.com>
> Subject: Re: [PATCH v9 1/5] KVM: arm64: Save ID registers' sanitized value
> per guest
>
> Hi Shameerali,
>
> On Thu, May 18, 2023 at 12:17 AM Shameerali Kolothum Thodi
> <shameerali.kolothum.thodi@huawei.com> wrote:
> >
> >
> >
> > > -----Original Message-----
> > > From: Jing Zhang [mailto:jingzhangos@google.com]
> > > Sent: 17 May 2023 07:10
> > > To: KVM <kvm@vger.kernel.org>; KVMARM <kvmarm@lists.linux.dev>;
> > > ARMLinux <linux-arm-kernel@lists.infradead.org>; Marc Zyngier
> > > <maz@kernel.org>; Oliver Upton <oupton@google.com>
> > > Cc: Will Deacon <will@kernel.org>; Paolo Bonzini
> <pbonzini@redhat.com>;
> > > James Morse <james.morse@arm.com>; Alexandru Elisei
> > > <alexandru.elisei@arm.com>; Suzuki K Poulose
> <suzuki.poulose@arm.com>;
> > > Fuad Tabba <tabba@google.com>; Reiji Watanabe <reijiw@google.com>;
> > > Raghavendra Rao Ananta <rananta@google.com>; Jing Zhang
> > > <jingzhangos@google.com>
> > > Subject: [PATCH v9 1/5] KVM: arm64: Save ID registers' sanitized value
> per
> > > guest
> > >
> > > Introduce id_regs[] in kvm_arch as a storage of guest's ID registers,
> > > and save ID registers' sanitized value in the array at KVM_CREATE_VM.
> > > Use the saved ones when ID registers are read by the guest or
> > > userspace (via KVM_GET_ONE_REG).
> > >
> > > No functional change intended.
> > >
> > > Co-developed-by: Reiji Watanabe <reijiw@google.com>
> > > Signed-off-by: Reiji Watanabe <reijiw@google.com>
> > > Signed-off-by: Jing Zhang <jingzhangos@google.com>
> > > ---
> > > arch/arm64/include/asm/kvm_host.h | 20 +++++++++
> > > arch/arm64/kvm/arm.c | 1 +
> > > arch/arm64/kvm/sys_regs.c | 69
> > > +++++++++++++++++++++++++------
> > > arch/arm64/kvm/sys_regs.h | 7 ++++
> > > 4 files changed, 85 insertions(+), 12 deletions(-)
> > >
> > > diff --git a/arch/arm64/include/asm/kvm_host.h
> > > b/arch/arm64/include/asm/kvm_host.h
> > > index 7e7e19ef6993..949a4a782844 100644
> > > --- a/arch/arm64/include/asm/kvm_host.h
> > > +++ b/arch/arm64/include/asm/kvm_host.h
> > > @@ -178,6 +178,21 @@ struct kvm_smccc_features {
> > > unsigned long vendor_hyp_bmap;
> > > };
> > >
> > > +/*
> > > + * Emulated CPU ID registers per VM
> > > + * (Op0, Op1, CRn, CRm, Op2) of the ID registers to be saved in it
> > > + * is (3, 0, 0, crm, op2), where 1<=crm<8, 0<=op2<8.
> > > + *
> > > + * These emulated idregs are VM-wide, but accessed from the context of
> a
> > > vCPU.
> > > + * Access to id regs are guarded by kvm_arch.config_lock.
> > > + */
> > > +#define KVM_ARM_ID_REG_NUM 56
> > > +#define IDREG_IDX(id) (((sys_reg_CRm(id) - 1) << 3) |
> sys_reg_Op2(id))
> > > +#define IDREG(kvm, id)
> ((kvm)->arch.idregs.regs[IDREG_IDX(id)])
> > > +struct kvm_idregs {
> > > + u64 regs[KVM_ARM_ID_REG_NUM];
> > > +};
> > >
> >
> > Not sure we really need this struct here. Why can't this array be moved to
> > struct kvm_arch directly?
> It was put in kvm_arch directly before, then got into its own
> structure in v5 according to the comments here:
> https://lore.kernel.org/all/861qlaxzyw.wl-maz@kernel.org/#t
Ok.
> > > typedef unsigned int pkvm_handle_t;
> > >
> > > struct kvm_protected_vm {
> > > @@ -253,6 +268,9 @@ struct kvm_arch {
> > > struct kvm_smccc_features smccc_feat;
> > > struct maple_tree smccc_filter;
> > >
> > > + /* Emulated CPU ID registers */
> > > + struct kvm_idregs idregs;
> > > +
> > > /*
> > > * For an untrusted host VM, 'pkvm.handle' is used to lookup
> > > * the associated pKVM instance in the hypervisor.
> > > @@ -1045,6 +1063,8 @@ int kvm_vm_ioctl_mte_copy_tags(struct kvm
> > > *kvm,
> > > int kvm_vm_ioctl_set_counter_offset(struct kvm *kvm,
> > > struct kvm_arm_counter_offset
> *offset);
> > >
> > > +void kvm_arm_init_id_regs(struct kvm *kvm);
> > > +
> > > /* Guest/host FPSIMD coordination helpers */
> > > int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
> > > void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu);
> > > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > > index 14391826241c..774656a0718d 100644
> > > --- a/arch/arm64/kvm/arm.c
> > > +++ b/arch/arm64/kvm/arm.c
> > > @@ -163,6 +163,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned
> > > long type)
> > >
> > > set_default_spectre(kvm);
> > > kvm_arm_init_hypercalls(kvm);
> > > + kvm_arm_init_id_regs(kvm);
> > >
> > > /*
> > > * Initialise the default PMUver before there is a chance to
> > > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > > index 71b12094d613..d2ee3a1c7f03 100644
> > > --- a/arch/arm64/kvm/sys_regs.c
> > > +++ b/arch/arm64/kvm/sys_regs.c
> > > @@ -41,6 +41,7 @@
> > > * 64bit interface.
> > > */
> > >
> > > +static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id);
> > > static u64 sys_reg_to_index(const struct sys_reg_desc *reg);
> > >
> > > static bool read_from_write_only(struct kvm_vcpu *vcpu,
> > > @@ -364,7 +365,7 @@ static bool trap_loregion(struct kvm_vcpu *vcpu,
> > > struct sys_reg_params *p,
> > > const struct sys_reg_desc *r)
> > > {
> > > - u64 val = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
> > > + u64 val = kvm_arm_read_id_reg(vcpu, SYS_ID_AA64MMFR1_EL1);
> > > u32 sr = reg_to_encoding(r);
> > >
> > > if (!(val & (0xfUL << ID_AA64MMFR1_EL1_LO_SHIFT))) {
> > > @@ -1208,16 +1209,9 @@ static u8 pmuver_to_perfmon(u8 pmuver)
> > > }
> > > }
> > >
> > > -/* Read a sanitised cpufeature ID register by sys_reg_desc */
> > > -static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc
> > > const *r)
> > > +static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id)
> > > {
> > > - u32 id = reg_to_encoding(r);
> > > - u64 val;
> > > -
> > > - if (sysreg_visible_as_raz(vcpu, r))
> > > - return 0;
> > > -
> > > - val = read_sanitised_ftr_reg(id);
> > > + u64 val = IDREG(vcpu->kvm, id);
> > >
> > > switch (id) {
> > > case SYS_ID_AA64PFR0_EL1:
> > > @@ -1280,6 +1274,26 @@ static u64 read_id_reg(const struct
> kvm_vcpu
> > > *vcpu, struct sys_reg_desc const *r
> > > return val;
> > > }
> > >
> > > +/* Read a sanitised cpufeature ID register by sys_reg_desc */
> > > +static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct
> sys_reg_desc
> > > const *r)
> > > +{
> > > + if (sysreg_visible_as_raz(vcpu, r))
> > > + return 0;
> > > +
> > > + return kvm_arm_read_id_reg(vcpu, reg_to_encoding(r));
> > > +}
> > > +
> > > +/*
> > > + * Return true if the register's (Op0, Op1, CRn, CRm, Op2) is
> > > + * (3, 0, 0, crm, op2), where 1<=crm<8, 0<=op2<8.
> > > + */
> > > +static inline bool is_id_reg(u32 id)
> > > +{
> > > + return (sys_reg_Op0(id) == 3 && sys_reg_Op1(id) == 0 &&
> > > + sys_reg_CRn(id) == 0 && sys_reg_CRm(id) >= 1 &&
> > > + sys_reg_CRm(id) < 8);
> > > +}
> > > +
> > > static unsigned int id_visibility(const struct kvm_vcpu *vcpu,
> > > const struct sys_reg_desc *r)
> > > {
> > > @@ -2244,8 +2258,8 @@ static bool trap_dbgdidr(struct kvm_vcpu
> *vcpu,
> > > if (p->is_write) {
> > > return ignore_write(vcpu, p);
> > > } else {
> > > - u64 dfr =
> read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
> > > - u64 pfr =
> read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
> > > + u64 dfr = kvm_arm_read_id_reg(vcpu,
> SYS_ID_AA64DFR0_EL1);
> > > + u64 pfr = kvm_arm_read_id_reg(vcpu,
> SYS_ID_AA64PFR0_EL1);
> >
> > Does this change the behavior slightly as now within the
> kvm_arm_read_id_reg()
> > the val will be further adjusted based on KVM/vCPU?
> That's a good question. Although the actual behavior would be the same
> no matter read idreg with read_sanitised_ftr_reg or
> kvm_arm_read_id_reg, it is possible that the behavior would change
> potentially in the future.
> Since now every guest has its own idregs, for every guest, the idregs
> should be read from kvm_arm_read_id_reg instead of
> read_sanitised_ftr_reg.
> The point is, for trap_dbgdidr, we should read AA64DFR0/AA64PFR0 from
> host or the VM-scope?
Ok. I was just double-checking whether it changes the behavior now,
since we claim no functional changes in this series. As for host vs VM
scope, I am not sure either. From a quick look through the history of
debug support, I couldn't find anything that mandates host values though.
Thanks,
Shameer
* Re: [PATCH v9 1/5] KVM: arm64: Save ID registers' sanitized value per guest
2023-05-19 8:08 ` Shameerali Kolothum Thodi
@ 2023-05-19 17:44 ` Jing Zhang
2023-05-19 22:16 ` Reiji Watanabe
0 siblings, 1 reply; 16+ messages in thread
From: Jing Zhang @ 2023-05-19 17:44 UTC (permalink / raw)
To: Shameerali Kolothum Thodi
Cc: KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton, Will Deacon,
Paolo Bonzini, James Morse, Alexandru Elisei, Suzuki K Poulose,
Fuad Tabba, Reiji Watanabe, Raghavendra Rao Ananta
Hi Shameerali,
On Fri, May 19, 2023 at 1:08 AM Shameerali Kolothum Thodi
<shameerali.kolothum.thodi@huawei.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Jing Zhang [mailto:jingzhangos@google.com]
> > Sent: 18 May 2023 20:49
> > To: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>
> > Cc: KVM <kvm@vger.kernel.org>; KVMARM <kvmarm@lists.linux.dev>;
> > ARMLinux <linux-arm-kernel@lists.infradead.org>; Marc Zyngier
> > <maz@kernel.org>; Oliver Upton <oupton@google.com>; Will Deacon
> > <will@kernel.org>; Paolo Bonzini <pbonzini@redhat.com>; James Morse
> > <james.morse@arm.com>; Alexandru Elisei <alexandru.elisei@arm.com>;
> > Suzuki K Poulose <suzuki.poulose@arm.com>; Fuad Tabba
> > <tabba@google.com>; Reiji Watanabe <reijiw@google.com>; Raghavendra
> > Rao Ananta <rananta@google.com>
> > Subject: Re: [PATCH v9 1/5] KVM: arm64: Save ID registers' sanitized value
> > per guest
> >
> > Hi Shameerali,
> >
> > On Thu, May 18, 2023 at 12:17 AM Shameerali Kolothum Thodi
> > <shameerali.kolothum.thodi@huawei.com> wrote:
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: Jing Zhang [mailto:jingzhangos@google.com]
> > > > Sent: 17 May 2023 07:10
> > > > To: KVM <kvm@vger.kernel.org>; KVMARM <kvmarm@lists.linux.dev>;
> > > > ARMLinux <linux-arm-kernel@lists.infradead.org>; Marc Zyngier
> > > > <maz@kernel.org>; Oliver Upton <oupton@google.com>
> > > > Cc: Will Deacon <will@kernel.org>; Paolo Bonzini
> > <pbonzini@redhat.com>;
> > > > James Morse <james.morse@arm.com>; Alexandru Elisei
> > > > <alexandru.elisei@arm.com>; Suzuki K Poulose
> > <suzuki.poulose@arm.com>;
> > > > Fuad Tabba <tabba@google.com>; Reiji Watanabe <reijiw@google.com>;
> > > > Raghavendra Rao Ananta <rananta@google.com>; Jing Zhang
> > > > <jingzhangos@google.com>
> > > > Subject: [PATCH v9 1/5] KVM: arm64: Save ID registers' sanitized value
> > per
> > > > guest
> > > >
> > > > Introduce id_regs[] in kvm_arch as a storage of guest's ID registers,
> > > > and save ID registers' sanitized value in the array at KVM_CREATE_VM.
> > > > Use the saved ones when ID registers are read by the guest or
> > > > userspace (via KVM_GET_ONE_REG).
> > > >
> > > > No functional change intended.
> > > >
> > > > Co-developed-by: Reiji Watanabe <reijiw@google.com>
> > > > Signed-off-by: Reiji Watanabe <reijiw@google.com>
> > > > Signed-off-by: Jing Zhang <jingzhangos@google.com>
> > > > ---
> > > > arch/arm64/include/asm/kvm_host.h | 20 +++++++++
> > > > arch/arm64/kvm/arm.c | 1 +
> > > > arch/arm64/kvm/sys_regs.c | 69
> > > > +++++++++++++++++++++++++------
> > > > arch/arm64/kvm/sys_regs.h | 7 ++++
> > > > 4 files changed, 85 insertions(+), 12 deletions(-)
> > > >
> > > > diff --git a/arch/arm64/include/asm/kvm_host.h
> > > > b/arch/arm64/include/asm/kvm_host.h
> > > > index 7e7e19ef6993..949a4a782844 100644
> > > > --- a/arch/arm64/include/asm/kvm_host.h
> > > > +++ b/arch/arm64/include/asm/kvm_host.h
> > > > @@ -178,6 +178,21 @@ struct kvm_smccc_features {
> > > > unsigned long vendor_hyp_bmap;
> > > > };
> > > >
> > > > +/*
> > > > + * Emulated CPU ID registers per VM
> > > > + * (Op0, Op1, CRn, CRm, Op2) of the ID registers to be saved in it
> > > > + * is (3, 0, 0, crm, op2), where 1<=crm<8, 0<=op2<8.
> > > > + *
> > > > + * These emulated idregs are VM-wide, but accessed from the context of
> > a
> > > > vCPU.
> > > > + * Access to id regs are guarded by kvm_arch.config_lock.
> > > > + */
> > > > +#define KVM_ARM_ID_REG_NUM 56
> > > > +#define IDREG_IDX(id) (((sys_reg_CRm(id) - 1) << 3) |
> > sys_reg_Op2(id))
> > > > +#define IDREG(kvm, id)
> > ((kvm)->arch.idregs.regs[IDREG_IDX(id)])
> > > > +struct kvm_idregs {
> > > > + u64 regs[KVM_ARM_ID_REG_NUM];
> > > > +};
> > > >
> > >
> > > Not sure we really need this struct here. Why can't this array be moved to
> > > struct kvm_arch directly?
> > It was put in kvm_arch directly before, then got into its own
> > structure in v5 according to the comments here:
> > https://lore.kernel.org/all/861qlaxzyw.wl-maz@kernel.org/#t
>
> Ok.
>
> > > > typedef unsigned int pkvm_handle_t;
> > > >
> > > > struct kvm_protected_vm {
> > > > @@ -253,6 +268,9 @@ struct kvm_arch {
> > > > struct kvm_smccc_features smccc_feat;
> > > > struct maple_tree smccc_filter;
> > > >
> > > > + /* Emulated CPU ID registers */
> > > > + struct kvm_idregs idregs;
> > > > +
> > > > /*
> > > > * For an untrusted host VM, 'pkvm.handle' is used to lookup
> > > > * the associated pKVM instance in the hypervisor.
> > > > @@ -1045,6 +1063,8 @@ int kvm_vm_ioctl_mte_copy_tags(struct kvm
> > > > *kvm,
> > > > int kvm_vm_ioctl_set_counter_offset(struct kvm *kvm,
> > > > struct kvm_arm_counter_offset
> > *offset);
> > > >
> > > > +void kvm_arm_init_id_regs(struct kvm *kvm);
> > > > +
> > > > /* Guest/host FPSIMD coordination helpers */
> > > > int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
> > > > void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu);
> > > > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > > > index 14391826241c..774656a0718d 100644
> > > > --- a/arch/arm64/kvm/arm.c
> > > > +++ b/arch/arm64/kvm/arm.c
> > > > @@ -163,6 +163,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned
> > > > long type)
> > > >
> > > > set_default_spectre(kvm);
> > > > kvm_arm_init_hypercalls(kvm);
> > > > + kvm_arm_init_id_regs(kvm);
> > > >
> > > > /*
> > > > * Initialise the default PMUver before there is a chance to
> > > > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > > > index 71b12094d613..d2ee3a1c7f03 100644
> > > > --- a/arch/arm64/kvm/sys_regs.c
> > > > +++ b/arch/arm64/kvm/sys_regs.c
> > > > @@ -41,6 +41,7 @@
> > > > * 64bit interface.
> > > > */
> > > >
> > > > +static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id);
> > > > static u64 sys_reg_to_index(const struct sys_reg_desc *reg);
> > > >
> > > > static bool read_from_write_only(struct kvm_vcpu *vcpu,
> > > > @@ -364,7 +365,7 @@ static bool trap_loregion(struct kvm_vcpu *vcpu,
> > > > struct sys_reg_params *p,
> > > > const struct sys_reg_desc *r)
> > > > {
> > > > - u64 val = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
> > > > + u64 val = kvm_arm_read_id_reg(vcpu, SYS_ID_AA64MMFR1_EL1);
> > > > u32 sr = reg_to_encoding(r);
> > > >
> > > > if (!(val & (0xfUL << ID_AA64MMFR1_EL1_LO_SHIFT))) {
> > > > @@ -1208,16 +1209,9 @@ static u8 pmuver_to_perfmon(u8 pmuver)
> > > > }
> > > > }
> > > >
> > > > -/* Read a sanitised cpufeature ID register by sys_reg_desc */
> > > > -static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc
> > > > const *r)
> > > > +static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id)
> > > > {
> > > > - u32 id = reg_to_encoding(r);
> > > > - u64 val;
> > > > -
> > > > - if (sysreg_visible_as_raz(vcpu, r))
> > > > - return 0;
> > > > -
> > > > - val = read_sanitised_ftr_reg(id);
> > > > + u64 val = IDREG(vcpu->kvm, id);
> > > >
> > > > switch (id) {
> > > > case SYS_ID_AA64PFR0_EL1:
> > > > @@ -1280,6 +1274,26 @@ static u64 read_id_reg(const struct
> > kvm_vcpu
> > > > *vcpu, struct sys_reg_desc const *r
> > > > return val;
> > > > }
> > > >
> > > > +/* Read a sanitised cpufeature ID register by sys_reg_desc */
> > > > +static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct
> > sys_reg_desc
> > > > const *r)
> > > > +{
> > > > + if (sysreg_visible_as_raz(vcpu, r))
> > > > + return 0;
> > > > +
> > > > + return kvm_arm_read_id_reg(vcpu, reg_to_encoding(r));
> > > > +}
> > > > +
> > > > +/*
> > > > + * Return true if the register's (Op0, Op1, CRn, CRm, Op2) is
> > > > + * (3, 0, 0, crm, op2), where 1<=crm<8, 0<=op2<8.
> > > > + */
> > > > +static inline bool is_id_reg(u32 id)
> > > > +{
> > > > + return (sys_reg_Op0(id) == 3 && sys_reg_Op1(id) == 0 &&
> > > > + sys_reg_CRn(id) == 0 && sys_reg_CRm(id) >= 1 &&
> > > > + sys_reg_CRm(id) < 8);
> > > > +}
> > > > +
> > > > static unsigned int id_visibility(const struct kvm_vcpu *vcpu,
> > > > const struct sys_reg_desc *r)
> > > > {
> > > > @@ -2244,8 +2258,8 @@ static bool trap_dbgdidr(struct kvm_vcpu
> > *vcpu,
> > > > if (p->is_write) {
> > > > return ignore_write(vcpu, p);
> > > > } else {
> > > > - u64 dfr =
> > read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
> > > > - u64 pfr =
> > read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
> > > > + u64 dfr = kvm_arm_read_id_reg(vcpu,
> > SYS_ID_AA64DFR0_EL1);
> > > > + u64 pfr = kvm_arm_read_id_reg(vcpu,
> > SYS_ID_AA64PFR0_EL1);
> > >
> > > Does this change the behavior slightly as now within the
> > kvm_arm_read_id_reg()
> > > the val will be further adjusted based on KVM/vCPU?
> > That's a good question. Although the actual behavior would be the same
> > no matter read idreg with read_sanitised_ftr_reg or
> > kvm_arm_read_id_reg, it is possible that the behavior would change
> > potentially in the future.
> > Since now every guest has its own idregs, for every guest, the idregs
> > should be read from kvm_arm_read_id_reg instead of
> > read_sanitised_ftr_reg.
> > The point is, for trap_dbgdidr, we should read AA64DFR0/AA64PFR0 from
> > host or the VM-scope?
>
> Ok. I was just double checking whether it changes the behavior now itself or
> not as we claim no functional changes in this series. As far as host vs VM
> scope, I am not sure as well. From a quick look through the history of debug
> support, couldn’t find anything that mandates host values though.
>
> Thanks,
> Shameer
>
Thanks for the investigation. Let's keep it this way for now and see if
there are any other comments.
Jing
* Re: [PATCH v9 1/5] KVM: arm64: Save ID registers' sanitized value per guest
2023-05-19 17:44 ` Jing Zhang
@ 2023-05-19 22:16 ` Reiji Watanabe
2023-05-22 17:27 ` Jing Zhang
0 siblings, 1 reply; 16+ messages in thread
From: Reiji Watanabe @ 2023-05-19 22:16 UTC (permalink / raw)
To: Jing Zhang
Cc: Shameerali Kolothum Thodi, KVM, KVMARM, ARMLinux, Marc Zyngier,
Oliver Upton, Will Deacon, Paolo Bonzini, James Morse,
Alexandru Elisei, Suzuki K Poulose, Fuad Tabba,
Raghavendra Rao Ananta
On Fri, May 19, 2023 at 10:44:41AM -0700, Jing Zhang wrote:
> Hi Shameerali,
>
> On Fri, May 19, 2023 at 1:08 AM Shameerali Kolothum Thodi
> <shameerali.kolothum.thodi@huawei.com> wrote:
> >
> >
> >
> > > -----Original Message-----
> > > From: Jing Zhang [mailto:jingzhangos@google.com]
> > > Sent: 18 May 2023 20:49
> > > To: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>
> > > Cc: KVM <kvm@vger.kernel.org>; KVMARM <kvmarm@lists.linux.dev>;
> > > ARMLinux <linux-arm-kernel@lists.infradead.org>; Marc Zyngier
> > > <maz@kernel.org>; Oliver Upton <oupton@google.com>; Will Deacon
> > > <will@kernel.org>; Paolo Bonzini <pbonzini@redhat.com>; James Morse
> > > <james.morse@arm.com>; Alexandru Elisei <alexandru.elisei@arm.com>;
> > > Suzuki K Poulose <suzuki.poulose@arm.com>; Fuad Tabba
> > > <tabba@google.com>; Reiji Watanabe <reijiw@google.com>; Raghavendra
> > > Rao Ananta <rananta@google.com>
> > > Subject: Re: [PATCH v9 1/5] KVM: arm64: Save ID registers' sanitized value
> > > per guest
> > >
> > > Hi Shameerali,
> > >
> > > On Thu, May 18, 2023 at 12:17 AM Shameerali Kolothum Thodi
> > > <shameerali.kolothum.thodi@huawei.com> wrote:
> > > >
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: Jing Zhang [mailto:jingzhangos@google.com]
> > > > > Sent: 17 May 2023 07:10
> > > > > To: KVM <kvm@vger.kernel.org>; KVMARM <kvmarm@lists.linux.dev>;
> > > > > ARMLinux <linux-arm-kernel@lists.infradead.org>; Marc Zyngier
> > > > > <maz@kernel.org>; Oliver Upton <oupton@google.com>
> > > > > Cc: Will Deacon <will@kernel.org>; Paolo Bonzini
> > > <pbonzini@redhat.com>;
> > > > > James Morse <james.morse@arm.com>; Alexandru Elisei
> > > > > <alexandru.elisei@arm.com>; Suzuki K Poulose
> > > <suzuki.poulose@arm.com>;
> > > > > Fuad Tabba <tabba@google.com>; Reiji Watanabe <reijiw@google.com>;
> > > > > Raghavendra Rao Ananta <rananta@google.com>; Jing Zhang
> > > > > <jingzhangos@google.com>
> > > > > Subject: [PATCH v9 1/5] KVM: arm64: Save ID registers' sanitized value
> > > per
> > > > > guest
> > > > >
> > > > > Introduce id_regs[] in kvm_arch as a storage of guest's ID registers,
> > > > > and save ID registers' sanitized value in the array at KVM_CREATE_VM.
> > > > > Use the saved ones when ID registers are read by the guest or
> > > > > userspace (via KVM_GET_ONE_REG).
> > > > >
> > > > > No functional change intended.
> > > > >
> > > > > Co-developed-by: Reiji Watanabe <reijiw@google.com>
> > > > > Signed-off-by: Reiji Watanabe <reijiw@google.com>
> > > > > Signed-off-by: Jing Zhang <jingzhangos@google.com>
> > > > > ---
> > > > > arch/arm64/include/asm/kvm_host.h | 20 +++++++++
> > > > > arch/arm64/kvm/arm.c | 1 +
> > > > > arch/arm64/kvm/sys_regs.c | 69
> > > > > +++++++++++++++++++++++++------
> > > > > arch/arm64/kvm/sys_regs.h | 7 ++++
> > > > > 4 files changed, 85 insertions(+), 12 deletions(-)
> > > > >
> > > > > diff --git a/arch/arm64/include/asm/kvm_host.h
> > > > > b/arch/arm64/include/asm/kvm_host.h
> > > > > index 7e7e19ef6993..949a4a782844 100644
> > > > > --- a/arch/arm64/include/asm/kvm_host.h
> > > > > +++ b/arch/arm64/include/asm/kvm_host.h
> > > > > @@ -178,6 +178,21 @@ struct kvm_smccc_features {
> > > > > unsigned long vendor_hyp_bmap;
> > > > > };
> > > > >
> > > > > +/*
> > > > > + * Emulated CPU ID registers per VM
> > > > > + * (Op0, Op1, CRn, CRm, Op2) of the ID registers to be saved in it
> > > > > + * is (3, 0, 0, crm, op2), where 1<=crm<8, 0<=op2<8.
> > > > > + *
> > > > > + * These emulated idregs are VM-wide, but accessed from the context of
> > > a
> > > > > vCPU.
> > > > > + * Access to id regs are guarded by kvm_arch.config_lock.
Nit: This statement doesn't seem to be true yet :)
> > > > > + */
> > > > > +#define KVM_ARM_ID_REG_NUM 56
> > > > > +#define IDREG_IDX(id) (((sys_reg_CRm(id) - 1) << 3) |
> > > sys_reg_Op2(id))
> > > > > +#define IDREG(kvm, id)
> > > ((kvm)->arch.idregs.regs[IDREG_IDX(id)])
> > > > > +struct kvm_idregs {
> > > > > + u64 regs[KVM_ARM_ID_REG_NUM];
> > > > > +};
> > > > >
> > > >
> > > > Not sure we really need this struct here. Why can't this array be moved to
> > > > struct kvm_arch directly?
> > > It was put in kvm_arch directly before, then got into its own
> > > structure in v5 according to the comments here:
> > > https://lore.kernel.org/all/861qlaxzyw.wl-maz@kernel.org/#t
> >
> > Ok.
> >
> > > > > typedef unsigned int pkvm_handle_t;
> > > > >
> > > > > struct kvm_protected_vm {
> > > > > @@ -253,6 +268,9 @@ struct kvm_arch {
> > > > > struct kvm_smccc_features smccc_feat;
> > > > > struct maple_tree smccc_filter;
> > > > >
> > > > > + /* Emulated CPU ID registers */
> > > > > + struct kvm_idregs idregs;
> > > > > +
> > > > > /*
> > > > > * For an untrusted host VM, 'pkvm.handle' is used to lookup
> > > > > * the associated pKVM instance in the hypervisor.
> > > > > @@ -1045,6 +1063,8 @@ int kvm_vm_ioctl_mte_copy_tags(struct kvm
> > > > > *kvm,
> > > > > int kvm_vm_ioctl_set_counter_offset(struct kvm *kvm,
> > > > > struct kvm_arm_counter_offset
> > > *offset);
> > > > >
> > > > > +void kvm_arm_init_id_regs(struct kvm *kvm);
> > > > > +
> > > > > /* Guest/host FPSIMD coordination helpers */
> > > > > int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
> > > > > void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu);
> > > > > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > > > > index 14391826241c..774656a0718d 100644
> > > > > --- a/arch/arm64/kvm/arm.c
> > > > > +++ b/arch/arm64/kvm/arm.c
> > > > > @@ -163,6 +163,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned
> > > > > long type)
> > > > >
> > > > > set_default_spectre(kvm);
> > > > > kvm_arm_init_hypercalls(kvm);
> > > > > + kvm_arm_init_id_regs(kvm);
> > > > >
> > > > > /*
> > > > > * Initialise the default PMUver before there is a chance to
> > > > > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > > > > index 71b12094d613..d2ee3a1c7f03 100644
> > > > > --- a/arch/arm64/kvm/sys_regs.c
> > > > > +++ b/arch/arm64/kvm/sys_regs.c
> > > > > @@ -41,6 +41,7 @@
> > > > > * 64bit interface.
> > > > > */
> > > > >
> > > > > +static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id);
> > > > > static u64 sys_reg_to_index(const struct sys_reg_desc *reg);
> > > > >
> > > > > static bool read_from_write_only(struct kvm_vcpu *vcpu,
> > > > > @@ -364,7 +365,7 @@ static bool trap_loregion(struct kvm_vcpu *vcpu,
> > > > > struct sys_reg_params *p,
> > > > > const struct sys_reg_desc *r)
> > > > > {
> > > > > - u64 val = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
> > > > > + u64 val = kvm_arm_read_id_reg(vcpu, SYS_ID_AA64MMFR1_EL1);
> > > > > u32 sr = reg_to_encoding(r);
> > > > >
> > > > > if (!(val & (0xfUL << ID_AA64MMFR1_EL1_LO_SHIFT))) {
> > > > > @@ -1208,16 +1209,9 @@ static u8 pmuver_to_perfmon(u8 pmuver)
> > > > > }
> > > > > }
> > > > >
> > > > > -/* Read a sanitised cpufeature ID register by sys_reg_desc */
> > > > > -static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc
> > > > > const *r)
> > > > > +static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id)
> > > > > {
> > > > > - u32 id = reg_to_encoding(r);
> > > > > - u64 val;
> > > > > -
> > > > > - if (sysreg_visible_as_raz(vcpu, r))
> > > > > - return 0;
> > > > > -
> > > > > - val = read_sanitised_ftr_reg(id);
> > > > > + u64 val = IDREG(vcpu->kvm, id);
> > > > >
> > > > > switch (id) {
> > > > > case SYS_ID_AA64PFR0_EL1:
> > > > > @@ -1280,6 +1274,26 @@ static u64 read_id_reg(const struct
> > > kvm_vcpu
> > > > > *vcpu, struct sys_reg_desc const *r
> > > > > return val;
> > > > > }
> > > > >
> > > > > +/* Read a sanitised cpufeature ID register by sys_reg_desc */
> > > > > +static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct
> > > sys_reg_desc
> > > > > const *r)
> > > > > +{
> > > > > + if (sysreg_visible_as_raz(vcpu, r))
> > > > > + return 0;
> > > > > +
> > > > > + return kvm_arm_read_id_reg(vcpu, reg_to_encoding(r));
> > > > > +}
> > > > > +
> > > > > +/*
> > > > > + * Return true if the register's (Op0, Op1, CRn, CRm, Op2) is
> > > > > + * (3, 0, 0, crm, op2), where 1<=crm<8, 0<=op2<8.
> > > > > + */
> > > > > +static inline bool is_id_reg(u32 id)
> > > > > +{
> > > > > + return (sys_reg_Op0(id) == 3 && sys_reg_Op1(id) == 0 &&
> > > > > + sys_reg_CRn(id) == 0 && sys_reg_CRm(id) >= 1 &&
> > > > > + sys_reg_CRm(id) < 8);
> > > > > +}
> > > > > +
> > > > > static unsigned int id_visibility(const struct kvm_vcpu *vcpu,
> > > > > const struct sys_reg_desc *r)
> > > > > {
> > > > > @@ -2244,8 +2258,8 @@ static bool trap_dbgdidr(struct kvm_vcpu
> > > *vcpu,
> > > > > if (p->is_write) {
> > > > > return ignore_write(vcpu, p);
> > > > > } else {
> > > > > - u64 dfr =
> > > read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
> > > > > - u64 pfr =
> > > read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
> > > > > + u64 dfr = kvm_arm_read_id_reg(vcpu,
> > > SYS_ID_AA64DFR0_EL1);
> > > > > + u64 pfr = kvm_arm_read_id_reg(vcpu,
> > > SYS_ID_AA64PFR0_EL1);
> > > >
> > > > Does this change the behavior slightly as now within the
> > > kvm_arm_read_id_reg()
> > > > the val will be further adjusted based on KVM/vCPU?
> > > That's a good question. Although the actual behavior is currently the
> > > same whether the ID register is read with read_sanitised_ftr_reg() or
> > > kvm_arm_read_id_reg(), the two could potentially diverge in the
> > > future.
> > > Since every guest now has its own idregs, each guest's idregs should
> > > be read with kvm_arm_read_id_reg() instead of
> > > read_sanitised_ftr_reg().
> > > The question is: for trap_dbgdidr, should we read AA64DFR0/AA64PFR0
> > > from the host or from the VM scope?
> >
> > Ok. I was just double-checking whether it changes the behavior now,
> > since we claim no functional changes in this series. As for host vs VM
> > scope, I am not sure either. From a quick look through the history of
> > debug support, I couldn't find anything that mandates host values though.
We should use the VM-scope AA64DFR0/AA64PFR0 values here.
As trap_dbgdidr() is the emulation code for the guest's reading DBGDIDR,
its WRPs, BRPs, CTX_CMPs, and EL3 field must be consistent with the ones
in the guest's AA64DFR0_EL1/AA64PFR0_EL1 values.
As Jing said, it doesn't matter practically until we allow userspace to
modify those fields though :)
Thank you,
Reiji
* Re: [PATCH v9 2/5] KVM: arm64: Use per guest ID register for ID_AA64PFR0_EL1.[CSV2|CSV3]
2023-05-17 6:10 ` [PATCH v9 2/5] KVM: arm64: Use per guest ID register for ID_AA64PFR0_EL1.[CSV2|CSV3] Jing Zhang
@ 2023-05-19 23:52 ` Reiji Watanabe
2023-05-22 17:23 ` Jing Zhang
0 siblings, 1 reply; 16+ messages in thread
From: Reiji Watanabe @ 2023-05-19 23:52 UTC (permalink / raw)
To: Jing Zhang
Cc: KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton, Will Deacon,
Paolo Bonzini, James Morse, Alexandru Elisei, Suzuki K Poulose,
Fuad Tabba, Raghavendra Rao Ananta
Hi Jing,
On Wed, May 17, 2023 at 06:10:11AM +0000, Jing Zhang wrote:
> With per guest ID registers, ID_AA64PFR0_EL1.[CSV2|CSV3] settings from
> userspace can be stored in its corresponding ID register.
>
> The setting of CSV bits for protected VMs is removed according to the
> discussion from Fuad below:
> https://lore.kernel.org/all/CA+EHjTwXA9TprX4jeG+-D+c8v9XG+oFdU1o6TSkvVye145_OvA@mail.gmail.com
>
> Besides the removal of the CSV bits setting for protected VMs, no other
> functional change is intended.
>
> Signed-off-by: Jing Zhang <jingzhangos@google.com>
> ---
> arch/arm64/include/asm/kvm_host.h | 2 --
> arch/arm64/kvm/arm.c | 17 ----------
> arch/arm64/kvm/sys_regs.c | 55 +++++++++++++++++++++++++------
> 3 files changed, 45 insertions(+), 29 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 949a4a782844..07f0e091ae48 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -257,8 +257,6 @@ struct kvm_arch {
>
> cpumask_var_t supported_cpus;
>
> - u8 pfr0_csv2;
> - u8 pfr0_csv3;
> struct {
> u8 imp:4;
> u8 unimp:4;
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 774656a0718d..5114521ace60 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -102,22 +102,6 @@ static int kvm_arm_default_max_vcpus(void)
> return vgic_present ? kvm_vgic_get_max_vcpus() : KVM_MAX_VCPUS;
> }
>
> -static void set_default_spectre(struct kvm *kvm)
> -{
> - /*
> - * The default is to expose CSV2 == 1 if the HW isn't affected.
> - * Although this is a per-CPU feature, we make it global because
> - * asymmetric systems are just a nuisance.
> - *
> - * Userspace can override this as long as it doesn't promise
> - * the impossible.
> - */
> - if (arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED)
> - kvm->arch.pfr0_csv2 = 1;
> - if (arm64_get_meltdown_state() == SPECTRE_UNAFFECTED)
> - kvm->arch.pfr0_csv3 = 1;
> -}
> -
> /**
> * kvm_arch_init_vm - initializes a VM data structure
> * @kvm: pointer to the KVM struct
> @@ -161,7 +145,6 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
> /* The maximum number of VCPUs is limited by the host's GIC model */
> kvm->max_vcpus = kvm_arm_default_max_vcpus();
>
> - set_default_spectre(kvm);
> kvm_arm_init_hypercalls(kvm);
> kvm_arm_init_id_regs(kvm);
>
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index d2ee3a1c7f03..3c52b136ade3 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1218,10 +1218,6 @@ static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id)
> if (!vcpu_has_sve(vcpu))
> val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE);
> val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU);
> - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2);
> - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), (u64)vcpu->kvm->arch.pfr0_csv2);
> - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3);
> - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), (u64)vcpu->kvm->arch.pfr0_csv3);
> if (kvm_vgic_global_state.type == VGIC_V3) {
> val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC);
> val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC), 1);
> @@ -1359,7 +1355,10 @@ static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
> const struct sys_reg_desc *rd,
> u64 val)
> {
> + struct kvm_arch *arch = &vcpu->kvm->arch;
> + u64 sval = val;
> u8 csv2, csv3;
> + int ret = 0;
>
> /*
> * Allow AA64PFR0_EL1.CSV2 to be set from userspace as long as
> @@ -1377,17 +1376,26 @@ static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
> (csv3 && arm64_get_meltdown_state() != SPECTRE_UNAFFECTED))
> return -EINVAL;
>
> + mutex_lock(&arch->config_lock);
> /* We can only differ with CSV[23], and anything else is an error */
> val ^= read_id_reg(vcpu, rd);
> val &= ~(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2) |
> ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3));
> - if (val)
> - return -EINVAL;
> -
> - vcpu->kvm->arch.pfr0_csv2 = csv2;
> - vcpu->kvm->arch.pfr0_csv3 = csv3;
> + if (val) {
> + ret = -EINVAL;
> + goto out;
> + }
>
> - return 0;
> + /* Only allow userspace to change the idregs before VM running */
> + if (test_bit(KVM_ARCH_FLAG_HAS_RAN_ONCE, &vcpu->kvm->arch.flags)) {
How about using kvm_vm_has_ran_once() instead ?
> + if (sval != read_id_reg(vcpu, rd))
Rather than calling read_id_reg() twice in this function,
perhaps you might want to save the original val we got earlier
and re-use it here ?
Thank you,
Reiji
> + ret = -EBUSY;
> + } else {
> + IDREG(vcpu->kvm, reg_to_encoding(rd)) = sval;
> + }
> +out:
> + mutex_unlock(&arch->config_lock);
> + return ret;
> }
>
> static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
> @@ -1479,7 +1487,12 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
> static int get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
> u64 *val)
> {
> + struct kvm_arch *arch = &vcpu->kvm->arch;
> +
> + mutex_lock(&arch->config_lock);
> *val = read_id_reg(vcpu, rd);
> + mutex_unlock(&arch->config_lock);
> +
> return 0;
> }
>
> @@ -3364,6 +3377,7 @@ void kvm_arm_init_id_regs(struct kvm *kvm)
> {
> const struct sys_reg_desc *idreg;
> struct sys_reg_params params;
> + u64 val;
> u32 id;
>
> /* Find the first idreg (SYS_ID_PFR0_EL1) in sys_reg_descs. */
> @@ -3386,6 +3400,27 @@ void kvm_arm_init_id_regs(struct kvm *kvm)
> idreg++;
> id = reg_to_encoding(idreg);
> }
> +
> + /*
> + * The default is to expose CSV2 == 1 if the HW isn't affected.
> + * Although this is a per-CPU feature, we make it global because
> + * asymmetric systems are just a nuisance.
> + *
> + * Userspace can override this as long as it doesn't promise
> + * the impossible.
> + */
> + val = IDREG(kvm, SYS_ID_AA64PFR0_EL1);
> +
> + if (arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED) {
> + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2);
> + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), 1);
> + }
> + if (arm64_get_meltdown_state() == SPECTRE_UNAFFECTED) {
> + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3);
> + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), 1);
> + }
> +
> + IDREG(kvm, SYS_ID_AA64PFR0_EL1) = val;
> }
>
> int __init kvm_sys_reg_table_init(void)
> --
> 2.40.1.606.ga4b1b128d6-goog
>
* Re: [PATCH v9 2/5] KVM: arm64: Use per guest ID register for ID_AA64PFR0_EL1.[CSV2|CSV3]
2023-05-19 23:52 ` Reiji Watanabe
@ 2023-05-22 17:23 ` Jing Zhang
0 siblings, 0 replies; 16+ messages in thread
From: Jing Zhang @ 2023-05-22 17:23 UTC (permalink / raw)
To: Reiji Watanabe
Cc: KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton, Will Deacon,
Paolo Bonzini, James Morse, Alexandru Elisei, Suzuki K Poulose,
Fuad Tabba, Raghavendra Rao Ananta
Hi Reiji,
On Fri, May 19, 2023 at 4:52 PM Reiji Watanabe <reijiw@google.com> wrote:
>
> Hi Jing,
>
> On Wed, May 17, 2023 at 06:10:11AM +0000, Jing Zhang wrote:
> > With per guest ID registers, ID_AA64PFR0_EL1.[CSV2|CSV3] settings from
> > userspace can be stored in its corresponding ID register.
> >
> > The setting of CSV bits for protected VMs is removed according to the
> > discussion from Fuad below:
> > https://lore.kernel.org/all/CA+EHjTwXA9TprX4jeG+-D+c8v9XG+oFdU1o6TSkvVye145_OvA@mail.gmail.com
> >
> > Besides the removal of the CSV bits setting for protected VMs, no other
> > functional change is intended.
> >
> > Signed-off-by: Jing Zhang <jingzhangos@google.com>
> > ---
> > arch/arm64/include/asm/kvm_host.h | 2 --
> > arch/arm64/kvm/arm.c | 17 ----------
> > arch/arm64/kvm/sys_regs.c | 55 +++++++++++++++++++++++++------
> > 3 files changed, 45 insertions(+), 29 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index 949a4a782844..07f0e091ae48 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -257,8 +257,6 @@ struct kvm_arch {
> >
> > cpumask_var_t supported_cpus;
> >
> > - u8 pfr0_csv2;
> > - u8 pfr0_csv3;
> > struct {
> > u8 imp:4;
> > u8 unimp:4;
> > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > index 774656a0718d..5114521ace60 100644
> > --- a/arch/arm64/kvm/arm.c
> > +++ b/arch/arm64/kvm/arm.c
> > @@ -102,22 +102,6 @@ static int kvm_arm_default_max_vcpus(void)
> > return vgic_present ? kvm_vgic_get_max_vcpus() : KVM_MAX_VCPUS;
> > }
> >
> > -static void set_default_spectre(struct kvm *kvm)
> > -{
> > - /*
> > - * The default is to expose CSV2 == 1 if the HW isn't affected.
> > - * Although this is a per-CPU feature, we make it global because
> > - * asymmetric systems are just a nuisance.
> > - *
> > - * Userspace can override this as long as it doesn't promise
> > - * the impossible.
> > - */
> > - if (arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED)
> > - kvm->arch.pfr0_csv2 = 1;
> > - if (arm64_get_meltdown_state() == SPECTRE_UNAFFECTED)
> > - kvm->arch.pfr0_csv3 = 1;
> > -}
> > -
> > /**
> > * kvm_arch_init_vm - initializes a VM data structure
> > * @kvm: pointer to the KVM struct
> > @@ -161,7 +145,6 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
> > /* The maximum number of VCPUs is limited by the host's GIC model */
> > kvm->max_vcpus = kvm_arm_default_max_vcpus();
> >
> > - set_default_spectre(kvm);
> > kvm_arm_init_hypercalls(kvm);
> > kvm_arm_init_id_regs(kvm);
> >
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index d2ee3a1c7f03..3c52b136ade3 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -1218,10 +1218,6 @@ static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id)
> > if (!vcpu_has_sve(vcpu))
> > val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE);
> > val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU);
> > - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2);
> > - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), (u64)vcpu->kvm->arch.pfr0_csv2);
> > - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3);
> > - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), (u64)vcpu->kvm->arch.pfr0_csv3);
> > if (kvm_vgic_global_state.type == VGIC_V3) {
> > val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC);
> > val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC), 1);
> > @@ -1359,7 +1355,10 @@ static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
> > const struct sys_reg_desc *rd,
> > u64 val)
> > {
> > + struct kvm_arch *arch = &vcpu->kvm->arch;
> > + u64 sval = val;
> > u8 csv2, csv3;
> > + int ret = 0;
> >
> > /*
> > * Allow AA64PFR0_EL1.CSV2 to be set from userspace as long as
> > @@ -1377,17 +1376,26 @@ static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
> > (csv3 && arm64_get_meltdown_state() != SPECTRE_UNAFFECTED))
> > return -EINVAL;
> >
> > + mutex_lock(&arch->config_lock);
> > /* We can only differ with CSV[23], and anything else is an error */
> > val ^= read_id_reg(vcpu, rd);
> > val &= ~(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2) |
> > ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3));
> > - if (val)
> > - return -EINVAL;
> > -
> > - vcpu->kvm->arch.pfr0_csv2 = csv2;
> > - vcpu->kvm->arch.pfr0_csv3 = csv3;
> > + if (val) {
> > + ret = -EINVAL;
> > + goto out;
> > + }
> >
> > - return 0;
> > + /* Only allow userspace to change the idregs before VM running */
> > + if (test_bit(KVM_ARCH_FLAG_HAS_RAN_ONCE, &vcpu->kvm->arch.flags)) {
>
> How about using kvm_vm_has_ran_once() instead ?
Sure.
>
>
> > + if (sval != read_id_reg(vcpu, rd))
>
> Rather than calling read_id_reg() twice in this function,
> perhaps you might want to save the original val we got earlier
> and re-use it here ?
Will do.
>
> Thank you,
> Reiji
>
>
>
>
> > + ret = -EBUSY;
> > + } else {
> > + IDREG(vcpu->kvm, reg_to_encoding(rd)) = sval;
> > + }
> > +out:
> > + mutex_unlock(&arch->config_lock);
> > + return ret;
> > }
> >
> > static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
> > @@ -1479,7 +1487,12 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
> > static int get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
> > u64 *val)
> > {
> > + struct kvm_arch *arch = &vcpu->kvm->arch;
> > +
> > + mutex_lock(&arch->config_lock);
> > *val = read_id_reg(vcpu, rd);
> > + mutex_unlock(&arch->config_lock);
> > +
> > return 0;
> > }
> >
> > @@ -3364,6 +3377,7 @@ void kvm_arm_init_id_regs(struct kvm *kvm)
> > {
> > const struct sys_reg_desc *idreg;
> > struct sys_reg_params params;
> > + u64 val;
> > u32 id;
> >
> > /* Find the first idreg (SYS_ID_PFR0_EL1) in sys_reg_descs. */
> > @@ -3386,6 +3400,27 @@ void kvm_arm_init_id_regs(struct kvm *kvm)
> > idreg++;
> > id = reg_to_encoding(idreg);
> > }
> > +
> > + /*
> > + * The default is to expose CSV2 == 1 if the HW isn't affected.
> > + * Although this is a per-CPU feature, we make it global because
> > + * asymmetric systems are just a nuisance.
> > + *
> > + * Userspace can override this as long as it doesn't promise
> > + * the impossible.
> > + */
> > + val = IDREG(kvm, SYS_ID_AA64PFR0_EL1);
> > +
> > + if (arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED) {
> > + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2);
> > + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2), 1);
> > + }
> > + if (arm64_get_meltdown_state() == SPECTRE_UNAFFECTED) {
> > + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3);
> > + val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), 1);
> > + }
> > +
> > + IDREG(kvm, SYS_ID_AA64PFR0_EL1) = val;
> > }
> >
> > int __init kvm_sys_reg_table_init(void)
> > --
> > 2.40.1.606.ga4b1b128d6-goog
> >
Thanks,
Jing
* Re: [PATCH v9 1/5] KVM: arm64: Save ID registers' sanitized value per guest
2023-05-19 22:16 ` Reiji Watanabe
@ 2023-05-22 17:27 ` Jing Zhang
0 siblings, 0 replies; 16+ messages in thread
From: Jing Zhang @ 2023-05-22 17:27 UTC (permalink / raw)
To: Reiji Watanabe
Cc: Shameerali Kolothum Thodi, KVM, KVMARM, ARMLinux, Marc Zyngier,
Oliver Upton, Will Deacon, Paolo Bonzini, James Morse,
Alexandru Elisei, Suzuki K Poulose, Fuad Tabba,
Raghavendra Rao Ananta
Hi Reiji,
On Fri, May 19, 2023 at 3:16 PM Reiji Watanabe <reijiw@google.com> wrote:
>
> On Fri, May 19, 2023 at 10:44:41AM -0700, Jing Zhang wrote:
> > Hi Shameerali,
> >
> > On Fri, May 19, 2023 at 1:08 AM Shameerali Kolothum Thodi
> > <shameerali.kolothum.thodi@huawei.com> wrote:
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: Jing Zhang [mailto:jingzhangos@google.com]
> > > > Sent: 18 May 2023 20:49
> > > > To: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>
> > > > Cc: KVM <kvm@vger.kernel.org>; KVMARM <kvmarm@lists.linux.dev>;
> > > > ARMLinux <linux-arm-kernel@lists.infradead.org>; Marc Zyngier
> > > > <maz@kernel.org>; Oliver Upton <oupton@google.com>; Will Deacon
> > > > <will@kernel.org>; Paolo Bonzini <pbonzini@redhat.com>; James Morse
> > > > <james.morse@arm.com>; Alexandru Elisei <alexandru.elisei@arm.com>;
> > > > Suzuki K Poulose <suzuki.poulose@arm.com>; Fuad Tabba
> > > > <tabba@google.com>; Reiji Watanabe <reijiw@google.com>; Raghavendra
> > > > Rao Ananta <rananta@google.com>
> > > > Subject: Re: [PATCH v9 1/5] KVM: arm64: Save ID registers' sanitized value
> > > > per guest
> > > >
> > > > Hi Shameerali,
> > > >
> > > > On Thu, May 18, 2023 at 12:17 AM Shameerali Kolothum Thodi
> > > > <shameerali.kolothum.thodi@huawei.com> wrote:
> > > > >
> > > > >
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: Jing Zhang [mailto:jingzhangos@google.com]
> > > > > > Sent: 17 May 2023 07:10
> > > > > > To: KVM <kvm@vger.kernel.org>; KVMARM <kvmarm@lists.linux.dev>;
> > > > > > ARMLinux <linux-arm-kernel@lists.infradead.org>; Marc Zyngier
> > > > > > <maz@kernel.org>; Oliver Upton <oupton@google.com>
> > > > > > Cc: Will Deacon <will@kernel.org>; Paolo Bonzini
> > > > <pbonzini@redhat.com>;
> > > > > > James Morse <james.morse@arm.com>; Alexandru Elisei
> > > > > > <alexandru.elisei@arm.com>; Suzuki K Poulose
> > > > <suzuki.poulose@arm.com>;
> > > > > > Fuad Tabba <tabba@google.com>; Reiji Watanabe <reijiw@google.com>;
> > > > > > Raghavendra Rao Ananta <rananta@google.com>; Jing Zhang
> > > > > > <jingzhangos@google.com>
> > > > > > Subject: [PATCH v9 1/5] KVM: arm64: Save ID registers' sanitized value
> > > > per
> > > > > > guest
> > > > > >
> > > > > > Introduce id_regs[] in kvm_arch as a storage of guest's ID registers,
> > > > > > and save ID registers' sanitized value in the array at KVM_CREATE_VM.
> > > > > > Use the saved ones when ID registers are read by the guest or
> > > > > > userspace (via KVM_GET_ONE_REG).
> > > > > >
> > > > > > No functional change intended.
> > > > > >
> > > > > > Co-developed-by: Reiji Watanabe <reijiw@google.com>
> > > > > > Signed-off-by: Reiji Watanabe <reijiw@google.com>
> > > > > > Signed-off-by: Jing Zhang <jingzhangos@google.com>
> > > > > > ---
> > > > > > arch/arm64/include/asm/kvm_host.h | 20 +++++++++
> > > > > > arch/arm64/kvm/arm.c | 1 +
> > > > > > arch/arm64/kvm/sys_regs.c | 69
> > > > > > +++++++++++++++++++++++++------
> > > > > > arch/arm64/kvm/sys_regs.h | 7 ++++
> > > > > > 4 files changed, 85 insertions(+), 12 deletions(-)
> > > > > >
> > > > > > diff --git a/arch/arm64/include/asm/kvm_host.h
> > > > > > b/arch/arm64/include/asm/kvm_host.h
> > > > > > index 7e7e19ef6993..949a4a782844 100644
> > > > > > --- a/arch/arm64/include/asm/kvm_host.h
> > > > > > +++ b/arch/arm64/include/asm/kvm_host.h
> > > > > > @@ -178,6 +178,21 @@ struct kvm_smccc_features {
> > > > > > unsigned long vendor_hyp_bmap;
> > > > > > };
> > > > > >
> > > > > > +/*
> > > > > > + * Emulated CPU ID registers per VM
> > > > > > + * (Op0, Op1, CRn, CRm, Op2) of the ID registers to be saved in it
> > > > > > + * is (3, 0, 0, crm, op2), where 1<=crm<8, 0<=op2<8.
> > > > > > + *
> > > > > > + * These emulated idregs are VM-wide, but accessed from the context of
> > > > a
> > > > > > vCPU.
> > > > > > + * Access to id regs are guarded by kvm_arch.config_lock.
>
> Nit: This statement doesn't seem to be true yet :)
Will amend it.
>
>
> > > > > > + */
> > > > > > +#define KVM_ARM_ID_REG_NUM 56
> > > > > > +#define IDREG_IDX(id) (((sys_reg_CRm(id) - 1) << 3) |
> > > > sys_reg_Op2(id))
> > > > > > +#define IDREG(kvm, id)
> > > > ((kvm)->arch.idregs.regs[IDREG_IDX(id)])
> > > > > > +struct kvm_idregs {
> > > > > > + u64 regs[KVM_ARM_ID_REG_NUM];
> > > > > > +};
> > > > > >
> > > > >
> > > > > Not sure we really need this struct here. Why can't this array be moved to
> > > > > struct kvm_arch directly?
> > > > It was put in kvm_arch directly before, then got into its own
> > > > structure in v5 according to the comments here:
> > > > https://lore.kernel.org/all/861qlaxzyw.wl-maz@kernel.org/#t
> > >
> > > Ok.
> > >
> > > > > > typedef unsigned int pkvm_handle_t;
> > > > > >
> > > > > > struct kvm_protected_vm {
> > > > > > @@ -253,6 +268,9 @@ struct kvm_arch {
> > > > > > struct kvm_smccc_features smccc_feat;
> > > > > > struct maple_tree smccc_filter;
> > > > > >
> > > > > > + /* Emulated CPU ID registers */
> > > > > > + struct kvm_idregs idregs;
> > > > > > +
> > > > > > /*
> > > > > > * For an untrusted host VM, 'pkvm.handle' is used to lookup
> > > > > > * the associated pKVM instance in the hypervisor.
> > > > > > @@ -1045,6 +1063,8 @@ int kvm_vm_ioctl_mte_copy_tags(struct kvm
> > > > > > *kvm,
> > > > > > int kvm_vm_ioctl_set_counter_offset(struct kvm *kvm,
> > > > > > struct kvm_arm_counter_offset
> > > > *offset);
> > > > > >
> > > > > > +void kvm_arm_init_id_regs(struct kvm *kvm);
> > > > > > +
> > > > > > /* Guest/host FPSIMD coordination helpers */
> > > > > > int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
> > > > > > void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu);
> > > > > > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > > > > > index 14391826241c..774656a0718d 100644
> > > > > > --- a/arch/arm64/kvm/arm.c
> > > > > > +++ b/arch/arm64/kvm/arm.c
> > > > > > @@ -163,6 +163,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned
> > > > > > long type)
> > > > > >
> > > > > > set_default_spectre(kvm);
> > > > > > kvm_arm_init_hypercalls(kvm);
> > > > > > + kvm_arm_init_id_regs(kvm);
> > > > > >
> > > > > > /*
> > > > > > * Initialise the default PMUver before there is a chance to
> > > > > > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > > > > > index 71b12094d613..d2ee3a1c7f03 100644
> > > > > > --- a/arch/arm64/kvm/sys_regs.c
> > > > > > +++ b/arch/arm64/kvm/sys_regs.c
> > > > > > @@ -41,6 +41,7 @@
> > > > > > * 64bit interface.
> > > > > > */
> > > > > >
> > > > > > +static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id);
> > > > > > static u64 sys_reg_to_index(const struct sys_reg_desc *reg);
> > > > > >
> > > > > > static bool read_from_write_only(struct kvm_vcpu *vcpu,
> > > > > > @@ -364,7 +365,7 @@ static bool trap_loregion(struct kvm_vcpu *vcpu,
> > > > > > struct sys_reg_params *p,
> > > > > > const struct sys_reg_desc *r)
> > > > > > {
> > > > > > - u64 val = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
> > > > > > + u64 val = kvm_arm_read_id_reg(vcpu, SYS_ID_AA64MMFR1_EL1);
> > > > > > u32 sr = reg_to_encoding(r);
> > > > > >
> > > > > > if (!(val & (0xfUL << ID_AA64MMFR1_EL1_LO_SHIFT))) {
> > > > > > @@ -1208,16 +1209,9 @@ static u8 pmuver_to_perfmon(u8 pmuver)
> > > > > > }
> > > > > > }
> > > > > >
> > > > > > -/* Read a sanitised cpufeature ID register by sys_reg_desc */
> > > > > > -static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc
> > > > > > const *r)
> > > > > > +static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id)
> > > > > > {
> > > > > > - u32 id = reg_to_encoding(r);
> > > > > > - u64 val;
> > > > > > -
> > > > > > - if (sysreg_visible_as_raz(vcpu, r))
> > > > > > - return 0;
> > > > > > -
> > > > > > - val = read_sanitised_ftr_reg(id);
> > > > > > + u64 val = IDREG(vcpu->kvm, id);
> > > > > >
> > > > > > switch (id) {
> > > > > > case SYS_ID_AA64PFR0_EL1:
> > > > > > @@ -1280,6 +1274,26 @@ static u64 read_id_reg(const struct
> > > > kvm_vcpu
> > > > > > *vcpu, struct sys_reg_desc const *r
> > > > > > return val;
> > > > > > }
> > > > > >
> > > > > > +/* Read a sanitised cpufeature ID register by sys_reg_desc */
> > > > > > +static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct
> > > > sys_reg_desc
> > > > > > const *r)
> > > > > > +{
> > > > > > + if (sysreg_visible_as_raz(vcpu, r))
> > > > > > + return 0;
> > > > > > +
> > > > > > + return kvm_arm_read_id_reg(vcpu, reg_to_encoding(r));
> > > > > > +}
> > > > > > +
> > > > > > +/*
> > > > > > + * Return true if the register's (Op0, Op1, CRn, CRm, Op2) is
> > > > > > + * (3, 0, 0, crm, op2), where 1<=crm<8, 0<=op2<8.
> > > > > > + */
> > > > > > +static inline bool is_id_reg(u32 id)
> > > > > > +{
> > > > > > + return (sys_reg_Op0(id) == 3 && sys_reg_Op1(id) == 0 &&
> > > > > > + sys_reg_CRn(id) == 0 && sys_reg_CRm(id) >= 1 &&
> > > > > > + sys_reg_CRm(id) < 8);
> > > > > > +}
> > > > > > +
> > > > > > static unsigned int id_visibility(const struct kvm_vcpu *vcpu,
> > > > > > const struct sys_reg_desc *r)
> > > > > > {
> > > > > > @@ -2244,8 +2258,8 @@ static bool trap_dbgdidr(struct kvm_vcpu
> > > > *vcpu,
> > > > > > if (p->is_write) {
> > > > > > return ignore_write(vcpu, p);
> > > > > > } else {
> > > > > > - u64 dfr =
> > > > read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
> > > > > > - u64 pfr =
> > > > read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
> > > > > > + u64 dfr = kvm_arm_read_id_reg(vcpu,
> > > > SYS_ID_AA64DFR0_EL1);
> > > > > > + u64 pfr = kvm_arm_read_id_reg(vcpu,
> > > > SYS_ID_AA64PFR0_EL1);
> > > > >
> > > > > Does this change the behavior slightly as now within the
> > > > kvm_arm_read_id_reg()
> > > > > the val will be further adjusted based on KVM/vCPU?
> > > > That's a good question. Although the actual behavior would be the
> > > > same no matter whether the idreg is read with
> > > > read_sanitised_ftr_reg() or kvm_arm_read_id_reg(), it is possible
> > > > that the behavior could change in the future.
> > > > Since every guest now has its own idregs, each guest's idregs
> > > > should be read with kvm_arm_read_id_reg() instead of
> > > > read_sanitised_ftr_reg().
> > > > The point is: for trap_dbgdidr(), should we read AA64DFR0/AA64PFR0
> > > > from the host or at VM scope?
> > >
> > > Ok. I was just double checking whether it changes the behavior now itself or
> > > not as we claim no functional changes in this series. As far as host vs VM
> > > scope, I am not sure as well. From a quick look through the history of debug
> > > support, couldn’t find anything that mandates host values though.
>
> We should use the VM-scope AA64DFR0/AA64PFR0 values here.
> As trap_dbgdidr() is the emulation code for the guest's reading DBGDIDR,
> its WRPs, BRPs, CTX_CMPs, and EL3 field must be consistent with the ones
> in the guest's AA64DFR0_EL1/AA64PFR0_EL1 values.
>
> As Jing said, it doesn't matter practically until we allow userspace to
> modify those fields though :)
>
> Thank you,
> Reiji
Thanks,
Jing
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH v9 5/5] KVM: arm64: Refactor writings for PMUVer/CSV2/CSV3
2023-05-17 6:10 ` [PATCH v9 5/5] KVM: arm64: Refactor writings for PMUVer/CSV2/CSV3 Jing Zhang
@ 2023-06-02 1:03 ` Suraj Jitindar Singh
2023-06-02 8:15 ` Marc Zyngier
0 siblings, 1 reply; 16+ messages in thread
From: Suraj Jitindar Singh @ 2023-06-02 1:03 UTC (permalink / raw)
To: Jing Zhang, KVM, KVMARM, ARMLinux, Marc Zyngier, Oliver Upton
Cc: Will Deacon, Paolo Bonzini, James Morse, Alexandru Elisei,
Suzuki K Poulose, Fuad Tabba, Reiji Watanabe,
Raghavendra Rao Ananta, Suraj Jitindar Singh
Hi,
With the patch set you posted I get some kvm-unit-tests failures due to
being unable to update register values from userspace. I propose the
following patch as a solution:
[PATCH 1/2] KVM: arm64: Update id_reg limit value based on per vcpu flags
There are multiple features whose availability is enabled/disabled and
tracked on a per-vcpu level in vcpu->arch.flagset, e.g. sve, ptrauth,
and pmu. While the VM-wide value of the id regs which represent the
availability of these features is stored in the id_regs kvm struct,
their value needs to be manipulated on a per-vcpu basis. This is done at
read time in kvm_arm_read_id_reg().

The value of these per-vcpu flags needs to be factored in when
calculating the id_reg limit value in check_features(), as otherwise we
can run into the following scenario.

[ running on a cpu which supports sve ]
1. AA64PFR0.SVE is set in the id_reg by kvm_arm_init_id_regs() (the cpu
   supports it and so it is set in the value returned from
   read_sanitised_ftr_reg())
2. vcpus are created without the sve feature enabled
3. the vmm reads AA64PFR0 and attempts to write the same value back
   (writing the same value back is allowed)
4. the write fails in check_features() as the limit has AA64PFR0.SVE
   set; however it is not set in the value being written, and although a
   lower value is allowed for this feature it is not in the mask of bits
   which can be modified and so must match exactly.

Thus add a step in check_features() to update the limit returned from
id_reg->reset() with the per-vcpu features which may have been
enabled/disabled at vcpu creation time after the id_regs were
initialised.
Split this update into a new function named kvm_arm_update_id_reg() so
it can be called from check_features() as well as kvm_arm_read_id_reg()
to dedup code.

While we're here, there are features which are masked in
kvm_arm_update_id_reg() that cannot change throughout a VM's lifecycle.
Thus, rather than masking them each time the register is read, mask them
at id_reg init time so that the value in the kvm id_reg reflects the
state of support for that feature.

Move masking of AA64PFR0_EL1.GIC and AA64PFR0_EL1.AMU into
read_sanitised_id_aa64pfr0_el1().
Create read_sanitised_id_aa64pfr1_el1() and mask AA64PFR1_EL1.SME.
Create read_sanitised_id_[mmfr4|aa64mmfr2] and mask CCIDX.

Finally, remove set_id_aa64pfr0_el1(), as all it does is mask
AA64PFR0_EL1_CS[2|3]. The limit for these fields is already set
according to cpu support in read_sanitised_id_aa64pfr0_el1() and then
checked when writing the register in check_features(); as such, there is
no need to perform the check twice.
Signed-off-by: Suraj Jitindar Singh <surajjs@amazon.com>
---
arch/arm64/kvm/sys_regs.c | 113 ++++++++++++++++++++++++--------------
1 file changed, 73 insertions(+), 40 deletions(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index bec02ba45ee7..ca793cd692fe 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -42,6 +42,7 @@
*/
static int set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc
*rd, u64 val);
+static u64 kvm_arm_update_id_reg(const struct kvm_vcpu *vcpu, u32 id,
u64 val);
static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id);
static u64 sys_reg_to_index(const struct sys_reg_desc *reg);
@@ -1241,6 +1242,7 @@ static int arm64_check_features(struct kvm_vcpu
*vcpu,
/* For hidden and unallocated idregs without reset, only val =
0 is allowed. */
if (rd->reset) {
limit = rd->reset(vcpu, rd);
+ limit = kvm_arm_update_id_reg(vcpu, id, limit);
ftr_reg = get_arm64_ftr_reg(id);
if (!ftr_reg)
return -EINVAL;
@@ -1317,24 +1319,17 @@ static u64
general_read_kvm_sanitised_reg(struct kvm_vcpu *vcpu, const struct sy
return read_sanitised_ftr_reg(reg_to_encoding(rd));
}
-static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id)
+/* Provide an updated value for an ID register based on per vcpu flags
*/
+static u64 kvm_arm_update_id_reg(const struct kvm_vcpu *vcpu, u32 id,
u64 val)
{
- u64 val = IDREG(vcpu->kvm, id);
-
switch (id) {
case SYS_ID_AA64PFR0_EL1:
if (!vcpu_has_sve(vcpu))
val &=
~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE);
- if (kvm_vgic_global_state.type == VGIC_V3) {
- val &=
~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC);
- val |=
FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC), 1);
- }
break;
case SYS_ID_AA64PFR1_EL1:
if (!kvm_has_mte(vcpu->kvm))
val &=
~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE);
-
- val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_SME);
break;
case SYS_ID_AA64ISAR1_EL1:
if (!vcpu_has_ptrauth(vcpu))
@@ -1347,8 +1342,6 @@ static u64 kvm_arm_read_id_reg(const struct
kvm_vcpu *vcpu, u32 id)
if (!vcpu_has_ptrauth(vcpu))
val &=
~(ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_APA3) |
ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_GPA3));
- if (!cpus_have_final_cap(ARM64_HAS_WFXT))
- val &=
~ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_WFxT);
break;
case SYS_ID_AA64DFR0_EL1:
/* Set PMUver to the required version */
@@ -1361,17 +1354,18 @@ static u64 kvm_arm_read_id_reg(const struct
kvm_vcpu *vcpu, u32 id)
val |=
FIELD_PREP(ARM64_FEATURE_MASK(ID_DFR0_EL1_PerfMon),
pmuver_to_perfmon(vcpu_pmuver(vcpu)));
break;
- case SYS_ID_AA64MMFR2_EL1:
- val &= ~ID_AA64MMFR2_EL1_CCIDX_MASK;
- break;
- case SYS_ID_MMFR4_EL1:
- val &= ~ARM64_FEATURE_MASK(ID_MMFR4_EL1_CCIDX);
- break;
}
return val;
}
+static u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id)
+{
+ u64 val = IDREG(vcpu->kvm, id);
+
+ return kvm_arm_update_id_reg(vcpu, id, val);
+}
+
/* Read a sanitised cpufeature ID register by sys_reg_desc */
static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct
sys_reg_desc const *r)
{
@@ -1477,34 +1471,28 @@ static u64
read_sanitised_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
val |=
FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3), 1);
}
+ if (kvm_vgic_global_state.type == VGIC_V3) {
+ val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC);
+ val |=
FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC), 1);
+ }
+
val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU);
return val;
}
-static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
- const struct sys_reg_desc *rd,
- u64 val)
+static u64 read_sanitised_id_aa64pfr1_el1(struct kvm_vcpu *vcpu,
+ const struct sys_reg_desc
*rd)
{
- u8 csv2, csv3;
+ u64 val;
+ u32 id = reg_to_encoding(rd);
- /*
- * Allow AA64PFR0_EL1.CSV2 to be set from userspace as long as
- * it doesn't promise more than what is actually provided (the
- * guest could otherwise be covered in ectoplasmic residue).
- */
- csv2 = cpuid_feature_extract_unsigned_field(val,
ID_AA64PFR0_EL1_CSV2_SHIFT);
- if (csv2 > 1 ||
- (csv2 && arm64_get_spectre_v2_state() !=
SPECTRE_UNAFFECTED))
- return -EINVAL;
+ val = read_sanitised_ftr_reg(id);
- /* Same thing for CSV3 */
- csv3 = cpuid_feature_extract_unsigned_field(val,
ID_AA64PFR0_EL1_CSV3_SHIFT);
- if (csv3 > 1 ||
- (csv3 && arm64_get_meltdown_state() !=
SPECTRE_UNAFFECTED))
- return -EINVAL;
+ /* SME is not supported */
+ val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_SME);
- return set_id_reg(vcpu, rd, val);
+ return val;
}
static u64 read_sanitised_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
@@ -1680,6 +1668,34 @@ static int set_id_dfr0_el1(struct kvm_vcpu
*vcpu,
return ret;
}
+static u64 read_sanitised_id_mmfr4_el1(struct kvm_vcpu *vcpu,
+ const struct sys_reg_desc *rd)
+{
+ u64 val;
+ u32 id = reg_to_encoding(rd);
+
+ val = read_sanitised_ftr_reg(id);
+
+ /* CCIDX is not supported */
+ val &= ~ARM64_FEATURE_MASK(ID_MMFR4_EL1_CCIDX);
+
+ return val;
+}
+
+static u64 read_sanitised_id_aa64mmfr2_el1(struct kvm_vcpu *vcpu,
+ const struct sys_reg_desc
*rd)
+{
+ u64 val;
+ u32 id = reg_to_encoding(rd);
+
+ val = read_sanitised_ftr_reg(id);
+
+ /* CCIDX is not supported */
+ val &= ~ID_AA64MMFR2_EL1_CCIDX_MASK;
+
+ return val;
+}
+
/*
* cpufeature ID register user accessors
*
@@ -2089,7 +2105,14 @@ static const struct sys_reg_desc sys_reg_descs[]
= {
AA32_ID_SANITISED(ID_ISAR3_EL1),
AA32_ID_SANITISED(ID_ISAR4_EL1),
AA32_ID_SANITISED(ID_ISAR5_EL1),
- AA32_ID_SANITISED(ID_MMFR4_EL1),
+ { SYS_DESC(SYS_ID_MMFR4_EL1),
+ .access = access_id_reg,
+ .get_user = get_id_reg,
+ .set_user = set_id_reg,
+ .visibility = aa32_id_visibility,
+ .reset = read_sanitised_id_mmfr4_el1,
+ .val = 0, },
+ ID_HIDDEN(ID_AFR0_EL1),
AA32_ID_SANITISED(ID_ISAR6_EL1),
/* CRm=3 */
@@ -2107,10 +2130,15 @@ static const struct sys_reg_desc
sys_reg_descs[] = {
{ SYS_DESC(SYS_ID_AA64PFR0_EL1),
.access = access_id_reg,
.get_user = get_id_reg,
- .set_user = set_id_aa64pfr0_el1,
+ .set_user = set_id_reg,
.reset = read_sanitised_id_aa64pfr0_el1,
.val = ID_AA64PFR0_EL1_CSV2_MASK |
ID_AA64PFR0_EL1_CSV3_MASK, },
- ID_SANITISED(ID_AA64PFR1_EL1),
+ { SYS_DESC(SYS_ID_AA64PFR1_EL1),
+ .access = access_id_reg,
+ .get_user = get_id_reg,
+ .set_user = set_id_reg,
+ .reset = read_sanitised_id_aa64pfr1_el1,
+ .val = 0, },
ID_UNALLOCATED(4,2),
ID_UNALLOCATED(4,3),
ID_SANITISED(ID_AA64ZFR0_EL1),
@@ -2146,7 +2174,12 @@ static const struct sys_reg_desc sys_reg_descs[]
= {
/* CRm=7 */
ID_SANITISED(ID_AA64MMFR0_EL1),
ID_SANITISED(ID_AA64MMFR1_EL1),
- ID_SANITISED(ID_AA64MMFR2_EL1),
+ { SYS_DESC(SYS_ID_AA64MMFR2_EL1),
+ .access = access_id_reg,
+ .get_user = get_id_reg,
+ .set_user = set_id_reg,
+ .reset = read_sanitised_id_aa64mmfr2_el1,
+ .val = 0, },
ID_UNALLOCATED(7,3),
ID_UNALLOCATED(7,4),
ID_UNALLOCATED(7,5),
* Re: [PATCH v9 5/5] KVM: arm64: Refactor writings for PMUVer/CSV2/CSV3
2023-06-02 1:03 ` Suraj Jitindar Singh
@ 2023-06-02 8:15 ` Marc Zyngier
0 siblings, 0 replies; 16+ messages in thread
From: Marc Zyngier @ 2023-06-02 8:15 UTC (permalink / raw)
To: Suraj Jitindar Singh
Cc: Jing Zhang, KVM, KVMARM, ARMLinux, Oliver Upton, Will Deacon,
Paolo Bonzini, James Morse, Alexandru Elisei, Suzuki K Poulose,
Fuad Tabba, Reiji Watanabe, Raghavendra Rao Ananta,
Suraj Jitindar Singh
On Fri, 02 Jun 2023 02:03:30 +0100,
Suraj Jitindar Singh <sjitindarsingh@gmail.com> wrote:
>
> Hi,
>
> With the patch set you posted I get some kvm unit tests failures due to
> being unable to update register values from userspace. I propose the
> following patch as a solution:
>
> [PATCH 1/2] KVM: arm64: Update id_reg limit value based on per vcpu
> flags
While I really appreciate your help here, can you please try and reply
to the latest version of the series? You're still commenting on v9 while
v10 has been out for a while, and there is now a v11 on the list.
This will help make your comments and potential fixes relevant.
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
Thread overview: 16+ messages
2023-05-17 6:10 [PATCH v9 0/5] Support writable CPU ID registers from userspace Jing Zhang
2023-05-17 6:10 ` [PATCH v9 1/5] KVM: arm64: Save ID registers' sanitized value per guest Jing Zhang
2023-05-18 7:17 ` Shameerali Kolothum Thodi
2023-05-18 19:48 ` Jing Zhang
2023-05-19 8:08 ` Shameerali Kolothum Thodi
2023-05-19 17:44 ` Jing Zhang
2023-05-19 22:16 ` Reiji Watanabe
2023-05-22 17:27 ` Jing Zhang
2023-05-17 6:10 ` [PATCH v9 2/5] KVM: arm64: Use per guest ID register for ID_AA64PFR0_EL1.[CSV2|CSV3] Jing Zhang
2023-05-19 23:52 ` Reiji Watanabe
2023-05-22 17:23 ` Jing Zhang
2023-05-17 6:10 ` [PATCH v9 3/5] KVM: arm64: Use per guest ID register for ID_AA64DFR0_EL1.PMUVer Jing Zhang
2023-05-17 6:10 ` [PATCH v9 4/5] KVM: arm64: Reuse fields of sys_reg_desc for idreg Jing Zhang
2023-05-17 6:10 ` [PATCH v9 5/5] KVM: arm64: Refactor writings for PMUVer/CSV2/CSV3 Jing Zhang
2023-06-02 1:03 ` Suraj Jitindar Singh
2023-06-02 8:15 ` Marc Zyngier