* [PATCH v6 00/43] arm64: Support for Arm CCA in KVM
From: Steven Price @ 2024-12-12 15:55 UTC
To: kvm, kvmarm
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V
This series adds support for running protected VMs using KVM under the
Arm Confidential Compute Architecture (CCA).
The related guest support was merged in v6.13-rc1, so it is no longer
needed separately.
There are numerous changes since v5; many thanks for the review
comments. The highlights are below, and the individual patches have
changelogs.
* KVM_VCPU_MAX_FEATURES is incremented. *NOTE*: This effectively
exposes the nested virtualisation feature, so this series as it
stands depends on nested virtualisation support being finished
before it can be merged. See [2] for more details.
* Properly support the RMM providing a more limited number of GIC LRs
than the hardware.
* Drop the patch tracking the number of pages for the top-level PGD;
it's easier to calculate it when we need it.
* Drop the "spare page" - it was causing code complexity and it's a bit
premature to start making such optimisations before we've got real
platforms to benchmark.
Major limitations:
* Only supports 4k host PAGE_SIZE (if PAGE_SIZE != 4k then the realm
extensions are disabled). I'm looking at removing this restriction,
but I didn't want to delay this posting.
* No support for huge pages when mapping the guest's pages; this is
partly a current limitation of guest_memfd. There is some 'dead'
code left over from before guest_memfd was supported. Improving
this is also on my todo list.
The ABI to the RMM (the RMI) is based on the RMM v1.0-rel0 specification[1].
This series is based on v6.13-rc1. It is also available as a git
repository:
https://gitlab.arm.com/linux-arm/linux-cca cca-host/v6
Work in progress changes for kvmtool are available from the git
repository below:
https://gitlab.arm.com/linux-arm/kvmtool-cca cca/v4
[1] https://developer.arm.com/documentation/den0137/1-0rel0/
[2] https://lore.kernel.org/r/a7011738-a084-46fa-947f-395d90b37f8b%40arm.com
Jean-Philippe Brucker (7):
arm64: RME: Propagate number of breakpoints and watchpoints to
userspace
arm64: RME: Set breakpoint parameters through SET_ONE_REG
arm64: RME: Initialize PMCR.N with number counter supported by RMM
arm64: RME: Propagate max SVE vector length from RMM
arm64: RME: Configure max SVE vector length for a Realm
arm64: RME: Provide register list for unfinalized RME RECs
arm64: RME: Provide accurate register list
Joey Gouly (2):
arm64: rme: allow userspace to inject aborts
arm64: rme: support RSI_HOST_CALL
Sean Christopherson (1):
KVM: Prepare for handling only shared mappings in mmu_notifier events
Steven Price (30):
arm64: RME: Handle Granule Protection Faults (GPFs)
arm64: RME: Add SMC definitions for calling the RMM
arm64: RME: Add wrappers for RMI calls
arm64: RME: Check for RME support at KVM init
arm64: RME: Define the user ABI
arm64: RME: ioctls to create and configure realms
arm64: kvm: Allow passing machine type in KVM creation
arm64: RME: RTT tear down
arm64: RME: Allocate/free RECs to match vCPUs
KVM: arm64: vgic: Provide helper for number of list registers
arm64: RME: Support for the VGIC in realms
KVM: arm64: Support timers in realm RECs
arm64: RME: Allow VMM to set RIPAS
arm64: RME: Handle realm enter/exit
KVM: arm64: Handle realm MMIO emulation
arm64: RME: Allow populating initial contents
arm64: RME: Runtime faulting of memory
KVM: arm64: Handle realm VCPU load
KVM: arm64: Validate register access for a Realm VM
KVM: arm64: Handle Realm PSCI requests
KVM: arm64: WARN on injected undef exceptions
arm64: Don't expose stolen time for realm guests
arm64: RME: Always use 4k pages for realms
arm64: rme: Prevent Device mappings for Realms
arm_pmu: Provide a mechanism for disabling the physical IRQ
arm64: rme: Enable PMU support with a realm guest
kvm: rme: Hide KVM_CAP_READONLY_MEM for realm guests
arm64: kvm: Expose support for private memory
KVM: arm64: Expose KVM_ARM_VCPU_REC to user space
KVM: arm64: Allow activating realms
Suzuki K Poulose (3):
kvm: arm64: Include kvm_emulate.h in kvm/arm_psci.h
kvm: arm64: Expose debug HW register numbers for Realm
arm64: rme: Allow checking SVE on VM instance
Documentation/virt/kvm/api.rst | 3 +
arch/arm64/include/asm/kvm_emulate.h | 40 +
arch/arm64/include/asm/kvm_host.h | 17 +-
arch/arm64/include/asm/kvm_rme.h | 152 +++
arch/arm64/include/asm/rmi_cmds.h | 508 ++++++++
arch/arm64/include/asm/rmi_smc.h | 259 +++++
arch/arm64/include/asm/virt.h | 1 +
arch/arm64/include/uapi/asm/kvm.h | 49 +
arch/arm64/kvm/Kconfig | 1 +
arch/arm64/kvm/Makefile | 3 +-
arch/arm64/kvm/arch_timer.c | 45 +-
arch/arm64/kvm/arm.c | 174 ++-
arch/arm64/kvm/guest.c | 107 +-
arch/arm64/kvm/hypercalls.c | 4 +-
arch/arm64/kvm/inject_fault.c | 6 +-
arch/arm64/kvm/mmio.c | 16 +-
arch/arm64/kvm/mmu.c | 195 +++-
arch/arm64/kvm/pmu-emul.c | 6 +
arch/arm64/kvm/psci.c | 29 +
arch/arm64/kvm/reset.c | 23 +-
arch/arm64/kvm/rme-exit.c | 195 ++++
arch/arm64/kvm/rme.c | 1588 ++++++++++++++++++++++++++
arch/arm64/kvm/sys_regs.c | 79 +-
arch/arm64/kvm/vgic/vgic-init.c | 2 +-
arch/arm64/kvm/vgic/vgic-v3.c | 5 +
arch/arm64/kvm/vgic/vgic.c | 54 +-
arch/arm64/mm/fault.c | 31 +-
drivers/perf/arm_pmu.c | 15 +
include/kvm/arm_arch_timer.h | 2 +
include/kvm/arm_pmu.h | 4 +
include/kvm/arm_psci.h | 2 +
include/linux/kvm_host.h | 2 +
include/linux/perf/arm_pmu.h | 5 +
include/uapi/linux/kvm.h | 31 +-
virt/kvm/kvm_main.c | 7 +
35 files changed, 3557 insertions(+), 103 deletions(-)
create mode 100644 arch/arm64/include/asm/kvm_rme.h
create mode 100644 arch/arm64/include/asm/rmi_cmds.h
create mode 100644 arch/arm64/include/asm/rmi_smc.h
create mode 100644 arch/arm64/kvm/rme-exit.c
create mode 100644 arch/arm64/kvm/rme.c
--
2.43.0
* [PATCH v6 01/43] KVM: Prepare for handling only shared mappings in mmu_notifier events
From: Steven Price @ 2024-12-12 15:55 UTC
To: kvm, kvmarm
Cc: Sean Christopherson, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V,
Steven Price
From: Sean Christopherson <seanjc@google.com>
Add flags to "struct kvm_gfn_range" to let notifier events target only
shared and only private mappings, and wire up the existing mmu_notifier
events to be shared-only (private memory is never associated with a
userspace virtual address, i.e. can't be reached via mmu_notifiers).
Add two flags so that KVM can handle the three possibilities (shared,
private, and shared+private) without needing something like a tri-state
enum.
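As an illustration (not part of this patch), a range handler could
dispatch on the two flags roughly as below; kvm_zap_private_range()
and kvm_zap_shared_range() are hypothetical helpers:

  static bool example_unmap_gfn_range(struct kvm *kvm,
                                      struct kvm_gfn_range *range)
  {
          bool flush = false;

          if (!range->only_shared)        /* private or shared+private */
                  flush |= kvm_zap_private_range(kvm, range->start, range->end);
          if (!range->only_private)       /* shared or shared+private */
                  flush |= kvm_zap_shared_range(kvm, range->start, range->end);

          return flush;
  }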
Link: https://lore.kernel.org/all/ZJX0hk+KpQP0KUyB@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Steven Price <steven.price@arm.com>
---
include/linux/kvm_host.h | 2 ++
virt/kvm/kvm_main.c | 7 +++++++
2 files changed, 9 insertions(+)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 401439bb21e3..86088968107a 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -260,6 +260,8 @@ struct kvm_gfn_range {
gfn_t start;
gfn_t end;
union kvm_mmu_notifier_arg arg;
+ bool only_private;
+ bool only_shared;
bool may_block;
};
bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index de2c11dae231..17727f1330ff 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -593,6 +593,13 @@ static __always_inline kvm_mn_ret_t __kvm_handle_hva_range(struct kvm *kvm,
* the second or later invocation of the handler).
*/
gfn_range.arg = range->arg;
+
+ /*
+ * HVA-based notifications aren't relevant to private
+ * mappings as they don't have a userspace mapping.
+ */
+ gfn_range.only_private = false;
+ gfn_range.only_shared = true;
gfn_range.may_block = range->may_block;
/*
--
2.43.0
* [PATCH v6 02/43] kvm: arm64: Include kvm_emulate.h in kvm/arm_psci.h
From: Steven Price @ 2024-12-12 15:55 UTC
To: kvm, kvmarm
Cc: Suzuki K Poulose, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Zenghui Yu, linux-arm-kernel,
linux-kernel, Joey Gouly, Alexandru Elisei, Christoffer Dall,
Fuad Tabba, linux-coco, Ganapatrao Kulkarni, Gavin Shan,
Shanker Donthineni, Alper Gun, Aneesh Kumar K . V, Steven Price
From: Suzuki K Poulose <suzuki.poulose@arm.com>
Fix a potential build error (like the one below, which occurs when
asm/kvm_emulate.h is included after kvm/arm_psci.h) by including the
missing header file in kvm/arm_psci.h:
./include/kvm/arm_psci.h: In function ‘kvm_psci_version’:
./include/kvm/arm_psci.h:29:13: error: implicit declaration of function
‘vcpu_has_feature’; did you mean ‘cpu_have_feature’? [-Werror=implicit-function-declaration]
29 | if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_PSCI_0_2)) {
| ^~~~~~~~~~~~~~~~
| cpu_have_feature
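A minimal (hypothetical) reproducer of the ordering problem:

  /* Translation unit that fails to build without this patch: */
  #include <kvm/arm_psci.h>    /* kvm_psci_version() uses vcpu_has_feature() */
  #include <asm/kvm_emulate.h> /* ...but vcpu_has_feature() is declared here */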
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Steven Price <steven.price@arm.com>
---
include/kvm/arm_psci.h | 2 ++
1 file changed, 2 insertions(+)
diff --git a/include/kvm/arm_psci.h b/include/kvm/arm_psci.h
index cbaec804eb83..38dab7add79b 100644
--- a/include/kvm/arm_psci.h
+++ b/include/kvm/arm_psci.h
@@ -10,6 +10,8 @@
#include <linux/kvm_host.h>
#include <uapi/linux/psci.h>
+#include <asm/kvm_emulate.h>
+
#define KVM_ARM_PSCI_0_1 PSCI_VERSION(0, 1)
#define KVM_ARM_PSCI_0_2 PSCI_VERSION(0, 2)
#define KVM_ARM_PSCI_1_0 PSCI_VERSION(1, 0)
--
2.43.0
* [PATCH v6 03/43] arm64: RME: Handle Granule Protection Faults (GPFs)
From: Steven Price @ 2024-12-12 15:55 UTC
To: kvm, kvmarm
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V
If the host attempts to access granules that have been delegated for use
in a realm, these accesses will be caught and will trigger a Granule
Protection Fault (GPF).
A fault during a page walk signals a kernel bug and is handled by
oopsing the kernel. A fault that is not part of a page walk could be
caused by user space having access to a page which has been delegated
to the kernel; this triggers a SIGBUS so that it is possible to debug
why user space is trying to access a delegated page.
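For reference, fault_info[] below is indexed by the fault status code
in ESR_ELx (esr & ESR_ELx_FSC), so the GPF encodings land at indices
35-40:

  /*
   * ESR_ELx fault status code -> fault_info[] index:
   *   0x23 (35)          GPF on table walk, level -1
   *   0x24-0x27 (36-39)  GPF on table walk, levels 0-3
   *   0x28 (40)          GPF not on table walk
   */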
Signed-off-by: Steven Price <steven.price@arm.com>
---
Changes since v2:
* Include missing "Granule Protection Fault at level -1"
---
arch/arm64/mm/fault.c | 31 +++++++++++++++++++++++++------
1 file changed, 25 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index ef63651099a9..29821dc89f73 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -844,6 +844,25 @@ static int do_tag_check_fault(unsigned long far, unsigned long esr,
return 0;
}
+static int do_gpf_ptw(unsigned long far, unsigned long esr, struct pt_regs *regs)
+{
+ const struct fault_info *inf = esr_to_fault_info(esr);
+
+ die_kernel_fault(inf->name, far, esr, regs);
+ return 0;
+}
+
+static int do_gpf(unsigned long far, unsigned long esr, struct pt_regs *regs)
+{
+ const struct fault_info *inf = esr_to_fault_info(esr);
+
+ if (!is_el1_instruction_abort(esr) && fixup_exception(regs))
+ return 0;
+
+ arm64_notify_die(inf->name, regs, inf->sig, inf->code, far, esr);
+ return 0;
+}
+
static const struct fault_info fault_info[] = {
{ do_bad, SIGKILL, SI_KERNEL, "ttbr address size fault" },
{ do_bad, SIGKILL, SI_KERNEL, "level 1 address size fault" },
@@ -880,12 +899,12 @@ static const struct fault_info fault_info[] = {
{ do_bad, SIGKILL, SI_KERNEL, "unknown 32" },
{ do_alignment_fault, SIGBUS, BUS_ADRALN, "alignment fault" },
{ do_bad, SIGKILL, SI_KERNEL, "unknown 34" },
- { do_bad, SIGKILL, SI_KERNEL, "unknown 35" },
- { do_bad, SIGKILL, SI_KERNEL, "unknown 36" },
- { do_bad, SIGKILL, SI_KERNEL, "unknown 37" },
- { do_bad, SIGKILL, SI_KERNEL, "unknown 38" },
- { do_bad, SIGKILL, SI_KERNEL, "unknown 39" },
- { do_bad, SIGKILL, SI_KERNEL, "unknown 40" },
+ { do_gpf_ptw, SIGKILL, SI_KERNEL, "Granule Protection Fault at level -1" },
+ { do_gpf_ptw, SIGKILL, SI_KERNEL, "Granule Protection Fault at level 0" },
+ { do_gpf_ptw, SIGKILL, SI_KERNEL, "Granule Protection Fault at level 1" },
+ { do_gpf_ptw, SIGKILL, SI_KERNEL, "Granule Protection Fault at level 2" },
+ { do_gpf_ptw, SIGKILL, SI_KERNEL, "Granule Protection Fault at level 3" },
+ { do_gpf, SIGBUS, SI_KERNEL, "Granule Protection Fault not on table walk" },
{ do_bad, SIGKILL, SI_KERNEL, "level -1 address size fault" },
{ do_bad, SIGKILL, SI_KERNEL, "unknown 42" },
{ do_translation_fault, SIGSEGV, SEGV_MAPERR, "level -1 translation fault" },
--
2.43.0
* [PATCH v6 04/43] arm64: RME: Add SMC definitions for calling the RMM
From: Steven Price @ 2024-12-12 15:55 UTC
To: kvm, kvmarm
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V
The RMM (Realm Management Monitor) provides functionality that can be
accessed by SMC calls from the host.
The SMC definitions are based on DEN0137[1] version 1.0-rel0.
[1] https://developer.arm.com/documentation/den0137/1-0rel0/
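As a worked example of the encoding (illustration only), the RMI calls
sit in the SMC64 fast-call range owned by the standard service:

  /*
   * SMC_RMI_CALL(0x0150) = 0x80000000   ARM_SMCCC_FAST_CALL
   *                      | 0x40000000   ARM_SMCCC_SMC_64
   *                      | 0x04000000   ARM_SMCCC_OWNER_STANDARD (4 << 24)
   *                      | 0x0150
   *                      = 0xC4000150   (SMC_RMI_VERSION)
   *
   * Likewise RMI_ABI_VERSION(1, 0) packs to 0x10000: major in the
   * upper 16 bits, minor in the lower 16.
   */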
Signed-off-by: Steven Price <steven.price@arm.com>
---
Changes since v5:
* Sorted the SMC #defines by value.
* Renamed SMI_RxI_CALL to SMI_RMI_CALL since the macro is only used for
RMI calls.
* Renamed REC_GIC_NUM_LRS to REC_MAX_GIC_NUM_LRS since the actual
number of available list registers could be lower.
* Provided a define for the reserved fields of FeatureRegister0.
* Fix inconsistent names for padding fields.
Changes since v4:
* Update to point to final released RMM spec.
* Minor rearrangements.
Changes since v3:
* Update to match RMM spec v1.0-rel0-rc1.
Changes since v2:
* Fix specification link.
* Rename rec_entry->rec_enter to match spec.
* Fix size of pmu_ovf_status to match spec.
---
arch/arm64/include/asm/rmi_smc.h | 259 +++++++++++++++++++++++++++++++
1 file changed, 259 insertions(+)
create mode 100644 arch/arm64/include/asm/rmi_smc.h
diff --git a/arch/arm64/include/asm/rmi_smc.h b/arch/arm64/include/asm/rmi_smc.h
new file mode 100644
index 000000000000..eec659e284cd
--- /dev/null
+++ b/arch/arm64/include/asm/rmi_smc.h
@@ -0,0 +1,259 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2023-2024 ARM Ltd.
+ *
+ * The values and structures in this file are from the Realm Management Monitor
+ * specification (DEN0137) version 1.0-rel0:
+ * https://developer.arm.com/documentation/den0137/1-0rel0/
+ */
+
+#ifndef __ASM_RME_SMC_H
+#define __ASM_RME_SMC_H
+
+#include <linux/arm-smccc.h>
+
+#define SMC_RMI_CALL(func) \
+ ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
+ ARM_SMCCC_SMC_64, \
+ ARM_SMCCC_OWNER_STANDARD, \
+ (func))
+
+#define SMC_RMI_VERSION SMC_RMI_CALL(0x0150)
+#define SMC_RMI_GRANULE_DELEGATE SMC_RMI_CALL(0x0151)
+#define SMC_RMI_GRANULE_UNDELEGATE SMC_RMI_CALL(0x0152)
+#define SMC_RMI_DATA_CREATE SMC_RMI_CALL(0x0153)
+#define SMC_RMI_DATA_CREATE_UNKNOWN SMC_RMI_CALL(0x0154)
+#define SMC_RMI_DATA_DESTROY SMC_RMI_CALL(0x0155)
+
+#define SMC_RMI_REALM_ACTIVATE SMC_RMI_CALL(0x0157)
+#define SMC_RMI_REALM_CREATE SMC_RMI_CALL(0x0158)
+#define SMC_RMI_REALM_DESTROY SMC_RMI_CALL(0x0159)
+#define SMC_RMI_REC_CREATE SMC_RMI_CALL(0x015a)
+#define SMC_RMI_REC_DESTROY SMC_RMI_CALL(0x015b)
+#define SMC_RMI_REC_ENTER SMC_RMI_CALL(0x015c)
+#define SMC_RMI_RTT_CREATE SMC_RMI_CALL(0x015d)
+#define SMC_RMI_RTT_DESTROY SMC_RMI_CALL(0x015e)
+#define SMC_RMI_RTT_MAP_UNPROTECTED SMC_RMI_CALL(0x015f)
+
+#define SMC_RMI_RTT_READ_ENTRY SMC_RMI_CALL(0x0161)
+#define SMC_RMI_RTT_UNMAP_UNPROTECTED SMC_RMI_CALL(0x0162)
+
+#define SMC_RMI_PSCI_COMPLETE SMC_RMI_CALL(0x0164)
+#define SMC_RMI_FEATURES SMC_RMI_CALL(0x0165)
+#define SMC_RMI_RTT_FOLD SMC_RMI_CALL(0x0166)
+#define SMC_RMI_REC_AUX_COUNT SMC_RMI_CALL(0x0167)
+#define SMC_RMI_RTT_INIT_RIPAS SMC_RMI_CALL(0x0168)
+#define SMC_RMI_RTT_SET_RIPAS SMC_RMI_CALL(0x0169)
+
+#define RMI_ABI_MAJOR_VERSION 1
+#define RMI_ABI_MINOR_VERSION 0
+
+#define RMI_ABI_VERSION_GET_MAJOR(version) ((version) >> 16)
+#define RMI_ABI_VERSION_GET_MINOR(version) ((version) & 0xFFFF)
+#define RMI_ABI_VERSION(major, minor) (((major) << 16) | (minor))
+
+#define RMI_UNASSIGNED 0
+#define RMI_ASSIGNED 1
+#define RMI_TABLE 2
+
+#define RMI_RETURN_STATUS(ret) ((ret) & 0xFF)
+#define RMI_RETURN_INDEX(ret) (((ret) >> 8) & 0xFF)
+
+#define RMI_SUCCESS 0
+#define RMI_ERROR_INPUT 1
+#define RMI_ERROR_REALM 2
+#define RMI_ERROR_REC 3
+#define RMI_ERROR_RTT 4
+
+enum rmi_ripas {
+ RMI_EMPTY = 0,
+ RMI_RAM = 1,
+ RMI_DESTROYED = 2,
+};
+
+#define RMI_NO_MEASURE_CONTENT 0
+#define RMI_MEASURE_CONTENT 1
+
+#define RMI_FEATURE_REGISTER_0_S2SZ GENMASK(7, 0)
+#define RMI_FEATURE_REGISTER_0_LPA2 BIT(8)
+#define RMI_FEATURE_REGISTER_0_SVE_EN BIT(9)
+#define RMI_FEATURE_REGISTER_0_SVE_VL GENMASK(13, 10)
+#define RMI_FEATURE_REGISTER_0_NUM_BPS GENMASK(19, 14)
+#define RMI_FEATURE_REGISTER_0_NUM_WPS GENMASK(25, 20)
+#define RMI_FEATURE_REGISTER_0_PMU_EN BIT(26)
+#define RMI_FEATURE_REGISTER_0_PMU_NUM_CTRS GENMASK(31, 27)
+#define RMI_FEATURE_REGISTER_0_HASH_SHA_256 BIT(32)
+#define RMI_FEATURE_REGISTER_0_HASH_SHA_512 BIT(33)
+#define RMI_FEATURE_REGISTER_0_GICV3_NUM_LRS GENMASK(37, 34)
+#define RMI_FEATURE_REGISTER_0_MAX_RECS_ORDER GENMASK(41, 38)
+#define RMI_FEATURE_REGISTER_0_Reserved GENMASK(63, 42)
+
+#define RMI_REALM_PARAM_FLAG_LPA2 BIT(0)
+#define RMI_REALM_PARAM_FLAG_SVE BIT(1)
+#define RMI_REALM_PARAM_FLAG_PMU BIT(2)
+
+/*
+ * Note many of these fields are smaller than u64 but all fields have u64
+ * alignment, so use u64 to ensure correct alignment.
+ */
+struct realm_params {
+ union { /* 0x0 */
+ struct {
+ u64 flags;
+ u64 s2sz;
+ u64 sve_vl;
+ u64 num_bps;
+ u64 num_wps;
+ u64 pmu_num_ctrs;
+ u64 hash_algo;
+ };
+ u8 padding0[0x400];
+ };
+ union { /* 0x400 */
+ u8 rpv[64];
+ u8 padding1[0x400];
+ };
+ union { /* 0x800 */
+ struct {
+ u64 vmid;
+ u64 rtt_base;
+ s64 rtt_level_start;
+ u64 rtt_num_start;
+ };
+ u8 padding2[0x800];
+ };
+};
+
+/*
+ * The number of GPRs (starting from X0) that are
+ * configured by the host when a REC is created.
+ */
+#define REC_CREATE_NR_GPRS 8
+
+#define REC_PARAMS_FLAG_RUNNABLE BIT_ULL(0)
+
+#define REC_PARAMS_AUX_GRANULES 16
+
+struct rec_params {
+ union { /* 0x0 */
+ u64 flags;
+ u8 padding0[0x100];
+ };
+ union { /* 0x100 */
+ u64 mpidr;
+ u8 padding1[0x100];
+ };
+ union { /* 0x200 */
+ u64 pc;
+ u8 padding2[0x100];
+ };
+ union { /* 0x300 */
+ u64 gprs[REC_CREATE_NR_GPRS];
+ u8 padding3[0x500];
+ };
+ union { /* 0x800 */
+ struct {
+ u64 num_rec_aux;
+ u64 aux[REC_PARAMS_AUX_GRANULES];
+ };
+ u8 padding4[0x800];
+ };
+};
+
+#define REC_ENTER_EMULATED_MMIO BIT(0)
+#define REC_ENTER_INJECT_SEA BIT(1)
+#define REC_ENTER_TRAP_WFI BIT(2)
+#define REC_ENTER_TRAP_WFE BIT(3)
+#define REC_ENTER_RIPAS_RESPONSE BIT(4)
+
+#define REC_RUN_GPRS 31
+#define REC_MAX_GIC_NUM_LRS 16
+
+struct rec_enter {
+ union { /* 0x000 */
+ u64 flags;
+ u8 padding0[0x200];
+ };
+ union { /* 0x200 */
+ u64 gprs[REC_RUN_GPRS];
+ u8 padding1[0x100];
+ };
+ union { /* 0x300 */
+ struct {
+ u64 gicv3_hcr;
+ u64 gicv3_lrs[REC_MAX_GIC_NUM_LRS];
+ };
+ u8 padding2[0x100];
+ };
+ u8 padding3[0x400];
+};
+
+#define RMI_EXIT_SYNC 0x00
+#define RMI_EXIT_IRQ 0x01
+#define RMI_EXIT_FIQ 0x02
+#define RMI_EXIT_PSCI 0x03
+#define RMI_EXIT_RIPAS_CHANGE 0x04
+#define RMI_EXIT_HOST_CALL 0x05
+#define RMI_EXIT_SERROR 0x06
+
+struct rec_exit {
+ union { /* 0x000 */
+ u8 exit_reason;
+ u8 padding0[0x100];
+ };
+ union { /* 0x100 */
+ struct {
+ u64 esr;
+ u64 far;
+ u64 hpfar;
+ };
+ u8 padding1[0x100];
+ };
+ union { /* 0x200 */
+ u64 gprs[REC_RUN_GPRS];
+ u8 padding2[0x100];
+ };
+ union { /* 0x300 */
+ struct {
+ u64 gicv3_hcr;
+ u64 gicv3_lrs[REC_MAX_GIC_NUM_LRS];
+ u64 gicv3_misr;
+ u64 gicv3_vmcr;
+ };
+ u8 padding3[0x100];
+ };
+ union { /* 0x400 */
+ struct {
+ u64 cntp_ctl;
+ u64 cntp_cval;
+ u64 cntv_ctl;
+ u64 cntv_cval;
+ };
+ u8 padding4[0x100];
+ };
+ union { /* 0x500 */
+ struct {
+ u64 ripas_base;
+ u64 ripas_top;
+ u64 ripas_value;
+ };
+ u8 padding5[0x100];
+ };
+ union { /* 0x600 */
+ u16 imm;
+ u8 padding6[0x100];
+ };
+ union { /* 0x700 */
+ struct {
+ u8 pmu_ovf_status;
+ };
+ u8 padding7[0x100];
+ };
+};
+
+struct rec_run {
+ struct rec_enter enter;
+ struct rec_exit exit;
+};
+
+#endif
--
2.43.0
* [PATCH v6 05/43] arm64: RME: Add wrappers for RMI calls
From: Steven Price @ 2024-12-12 15:55 UTC
To: kvm, kvmarm
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V
The wrappers make the call sites easier to read and deal with the
boilerplate of handling the error codes from the RMM.
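For illustration, a hypothetical call site using the wrappers along
with the status helpers from the previous patch:

  /* Sketch only: delegate a freshly allocated granule to the RMM. */
  struct page *page = alloc_page(GFP_KERNEL);
  int ret;

  if (!page)
          return -ENOMEM;

  ret = rmi_granule_delegate(page_to_phys(page));
  if (RMI_RETURN_STATUS(ret) != RMI_SUCCESS) {
          __free_page(page);      /* still owned by the normal world */
          return -ENXIO;
  }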
Signed-off-by: Steven Price <steven.price@arm.com>
---
Changes from v5:
* Further improve comments
Changes from v4:
* Improve comments
Changes from v2:
* Make output arguments optional.
* Mask RIPAS value in rmi_rtt_read_entry()
* Drop unused rmi_rtt_get_phys()
---
arch/arm64/include/asm/rmi_cmds.h | 508 ++++++++++++++++++++++++++++++
1 file changed, 508 insertions(+)
create mode 100644 arch/arm64/include/asm/rmi_cmds.h
diff --git a/arch/arm64/include/asm/rmi_cmds.h b/arch/arm64/include/asm/rmi_cmds.h
new file mode 100644
index 000000000000..043b7ff278ee
--- /dev/null
+++ b/arch/arm64/include/asm/rmi_cmds.h
@@ -0,0 +1,508 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2023 ARM Ltd.
+ */
+
+#ifndef __ASM_RMI_CMDS_H
+#define __ASM_RMI_CMDS_H
+
+#include <linux/arm-smccc.h>
+
+#include <asm/rmi_smc.h>
+
+struct rtt_entry {
+ unsigned long walk_level;
+ unsigned long desc;
+ int state;
+ int ripas;
+};
+
+/**
+ * rmi_data_create() - Create a data granule
+ * @rd: PA of the RD
+ * @data: PA of the target granule
+ * @ipa: IPA at which the granule will be mapped in the guest
+ * @src: PA of the source granule
+ * @flags: RMI_MEASURE_CONTENT if the contents should be measured
+ *
+ * Create a new data granule, copying contents from a non-secure granule.
+ *
+ * Return: RMI return code
+ */
+static inline int rmi_data_create(unsigned long rd, unsigned long data,
+ unsigned long ipa, unsigned long src,
+ unsigned long flags)
+{
+ struct arm_smccc_res res;
+
+ arm_smccc_1_1_invoke(SMC_RMI_DATA_CREATE, rd, data, ipa, src,
+ flags, &res);
+
+ return res.a0;
+}
+
+/**
+ * rmi_data_create_unknown() - Create a data granule with unknown contents
+ * @rd: PA of the RD
+ * @data: PA of the target granule
+ * @ipa: IPA at which the granule will be mapped in the guest
+ *
+ * Return: RMI return code
+ */
+static inline int rmi_data_create_unknown(unsigned long rd,
+ unsigned long data,
+ unsigned long ipa)
+{
+ struct arm_smccc_res res;
+
+ arm_smccc_1_1_invoke(SMC_RMI_DATA_CREATE_UNKNOWN, rd, data, ipa, &res);
+
+ return res.a0;
+}
+
+/**
+ * rmi_data_destroy() - Destroy a data granule
+ * @rd: PA of the RD
+ * @ipa: IPA at which the granule is mapped in the guest
+ * @data_out: PA of the granule which was destroyed
+ * @top_out: Top IPA of non-live RTT entries
+ *
+ * Unmap a protected IPA from stage 2, transitioning it to DESTROYED.
+ * The IPA cannot be used by the guest unless it is transitioned to RAM again
+ * by the realm guest.
+ *
+ * Return: RMI return code
+ */
+static inline int rmi_data_destroy(unsigned long rd, unsigned long ipa,
+ unsigned long *data_out,
+ unsigned long *top_out)
+{
+ struct arm_smccc_res res;
+
+ arm_smccc_1_1_invoke(SMC_RMI_DATA_DESTROY, rd, ipa, &res);
+
+ if (data_out)
+ *data_out = res.a1;
+ if (top_out)
+ *top_out = res.a2;
+
+ return res.a0;
+}
+
+/**
+ * rmi_features() - Read feature register
+ * @index: Feature register index
+ * @out: Feature register value is written to this pointer
+ *
+ * Return: RMI return code
+ */
+static inline int rmi_features(unsigned long index, unsigned long *out)
+{
+ struct arm_smccc_res res;
+
+ arm_smccc_1_1_invoke(SMC_RMI_FEATURES, index, &res);
+
+ if (out)
+ *out = res.a1;
+ return res.a0;
+}
+
+/**
+ * rmi_granule_delegate() - Delegate a granule
+ * @phys: PA of the granule
+ *
+ * Delegate a granule for use by the realm world.
+ *
+ * Return: RMI return code
+ */
+static inline int rmi_granule_delegate(unsigned long phys)
+{
+ struct arm_smccc_res res;
+
+ arm_smccc_1_1_invoke(SMC_RMI_GRANULE_DELEGATE, phys, &res);
+
+ return res.a0;
+}
+
+/**
+ * rmi_granule_undelegate() - Undelegate a granule
+ * @phys: PA of the granule
+ *
+ * Undelegate a granule to allow use by the normal world. Will fail if the
+ * granule is in use.
+ *
+ * Return: RMI return code
+ */
+static inline int rmi_granule_undelegate(unsigned long phys)
+{
+ struct arm_smccc_res res;
+
+ arm_smccc_1_1_invoke(SMC_RMI_GRANULE_UNDELEGATE, phys, &res);
+
+ return res.a0;
+}
+
+/**
+ * rmi_psci_complete() - Complete pending PSCI command
+ * @calling_rec: PA of the calling REC
+ * @target_rec: PA of the target REC
+ * @status: Status of the PSCI request
+ *
+ * Completes a pending PSCI command which was called with an MPIDR argument, by
+ * providing the corresponding REC.
+ *
+ * Return: RMI return code
+ */
+static inline int rmi_psci_complete(unsigned long calling_rec,
+ unsigned long target_rec,
+ unsigned long status)
+{
+ struct arm_smccc_res res;
+
+ arm_smccc_1_1_invoke(SMC_RMI_PSCI_COMPLETE, calling_rec, target_rec,
+ status, &res);
+
+ return res.a0;
+}
+
+/**
+ * rmi_realm_activate() - Activate a realm
+ * @rd: PA of the RD
+ *
+ * Mark a realm as Active signalling that creation is complete and allowing
+ * execution of the realm.
+ *
+ * Return: RMI return code
+ */
+static inline int rmi_realm_activate(unsigned long rd)
+{
+ struct arm_smccc_res res;
+
+ arm_smccc_1_1_invoke(SMC_RMI_REALM_ACTIVATE, rd, &res);
+
+ return res.a0;
+}
+
+/**
+ * rmi_realm_create() - Create a realm
+ * @rd: PA of the RD
+ * @params_ptr: PA of realm parameters
+ *
+ * Create a new realm using the given parameters.
+ *
+ * Return: RMI return code
+ */
+static inline int rmi_realm_create(unsigned long rd, unsigned long params_ptr)
+{
+ struct arm_smccc_res res;
+
+ arm_smccc_1_1_invoke(SMC_RMI_REALM_CREATE, rd, params_ptr, &res);
+
+ return res.a0;
+}
+
+/**
+ * rmi_realm_destroy() - Destroy a realm
+ * @rd: PA of the RD
+ *
+ * Destroys a realm; all objects belonging to the realm must be destroyed first.
+ *
+ * Return: RMI return code
+ */
+static inline int rmi_realm_destroy(unsigned long rd)
+{
+ struct arm_smccc_res res;
+
+ arm_smccc_1_1_invoke(SMC_RMI_REALM_DESTROY, rd, &res);
+
+ return res.a0;
+}
+
+/**
+ * rmi_rec_aux_count() - Get number of auxiliary granules required
+ * @rd: PA of the RD
+ * @aux_count: Number of pages written to this pointer
+ *
+ * A REC may require extra auxiliary pages to be delegated for the RMM to
+ * store metadata (not visible to the normal world) in. This function provides
+ * the number of pages that are required.
+ *
+ * Return: RMI return code
+ */
+static inline int rmi_rec_aux_count(unsigned long rd, unsigned long *aux_count)
+{
+ struct arm_smccc_res res;
+
+ arm_smccc_1_1_invoke(SMC_RMI_REC_AUX_COUNT, rd, &res);
+
+ if (aux_count)
+ *aux_count = res.a1;
+ return res.a0;
+}
+
+/**
+ * rmi_rec_create() - Create a REC
+ * @rd: PA of the RD
+ * @rec: PA of the target REC
+ * @params_ptr: PA of REC parameters
+ *
+ * Create a REC using the parameters specified in the struct rec_params pointed
+ * to by @params_ptr.
+ *
+ * Return: RMI return code
+ */
+static inline int rmi_rec_create(unsigned long rd, unsigned long rec,
+ unsigned long params_ptr)
+{
+ struct arm_smccc_res res;
+
+ arm_smccc_1_1_invoke(SMC_RMI_REC_CREATE, rd, rec, params_ptr, &res);
+
+ return res.a0;
+}
+
+/**
+ * rmi_rec_destroy() - Destroy a REC
+ * @rec: PA of the target REC
+ *
+ * Destroys a REC. The REC must not be running.
+ *
+ * Return: RMI return code
+ */
+static inline int rmi_rec_destroy(unsigned long rec)
+{
+ struct arm_smccc_res res;
+
+ arm_smccc_1_1_invoke(SMC_RMI_REC_DESTROY, rec, &res);
+
+ return res.a0;
+}
+
+/**
+ * rmi_rec_enter() - Enter a REC
+ * @rec: PA of the target REC
+ * @run_ptr: PA of RecRun structure
+ *
+ * Starts (or continues) execution within a REC.
+ *
+ * Return: RMI return code
+ */
+static inline int rmi_rec_enter(unsigned long rec, unsigned long run_ptr)
+{
+ struct arm_smccc_res res;
+
+ arm_smccc_1_1_invoke(SMC_RMI_REC_ENTER, rec, run_ptr, &res);
+
+ return res.a0;
+}
+
+/**
+ * rmi_rtt_create() - Creates an RTT
+ * @rd: PA of the RD
+ * @rtt: PA of the target RTT
+ * @ipa: Base of the IPA range described by the RTT
+ * @level: Depth of the RTT within the tree
+ *
+ * Creates an RTT (Realm Translation Table) at the specified level for the
+ * translation of the specified address within the realm.
+ *
+ * Return: RMI return code
+ */
+static inline int rmi_rtt_create(unsigned long rd, unsigned long rtt,
+ unsigned long ipa, long level)
+{
+ struct arm_smccc_res res;
+
+ arm_smccc_1_1_invoke(SMC_RMI_RTT_CREATE, rd, rtt, ipa, level, &res);
+
+ return res.a0;
+}
+
+/**
+ * rmi_rtt_destroy() - Destroy an RTT
+ * @rd: PA of the RD
+ * @ipa: Base of the IPA range described by the RTT
+ * @level: Depth of the RTT within the tree
+ * @out_rtt: Pointer to write the PA of the RTT which was destroyed
+ * @out_top: Pointer to write the top IPA of non-live RTT entries
+ *
+ * Destroys an RTT. The RTT must be non-live, i.e. none of the entries in the
+ * table are in ASSIGNED or TABLE state.
+ *
+ * Return: RMI return code.
+ */
+static inline int rmi_rtt_destroy(unsigned long rd,
+ unsigned long ipa,
+ long level,
+ unsigned long *out_rtt,
+ unsigned long *out_top)
+{
+ struct arm_smccc_res res;
+
+ arm_smccc_1_1_invoke(SMC_RMI_RTT_DESTROY, rd, ipa, level, &res);
+
+ if (out_rtt)
+ *out_rtt = res.a1;
+ if (out_top)
+ *out_top = res.a2;
+
+ return res.a0;
+}
+
+/**
+ * rmi_rtt_fold() - Fold an RTT
+ * @rd: PA of the RD
+ * @ipa: Base of the IPA range described by the RTT
+ * @level: Depth of the RTT within the tree
+ * @out_rtt: Pointer to write the PA of the RTT which was destroyed
+ *
+ * Folds an RTT. If all entries within the RTT are 'homogeneous' the RTT can be
+ * folded into the parent and the RTT destroyed.
+ *
+ * Return: RMI return code
+ */
+static inline int rmi_rtt_fold(unsigned long rd, unsigned long ipa,
+ long level, unsigned long *out_rtt)
+{
+ struct arm_smccc_res res;
+
+ arm_smccc_1_1_invoke(SMC_RMI_RTT_FOLD, rd, ipa, level, &res);
+
+ if (out_rtt)
+ *out_rtt = res.a1;
+
+ return res.a0;
+}
+
+/**
+ * rmi_rtt_init_ripas() - Set RIPAS for new realm
+ * @rd: PA of the RD
+ * @base: Base of target IPA region
+ * @top: Top of target IPA region
+ * @out_top: Top IPA of range whose RIPAS was modified
+ *
+ * Sets the RIPAS of a target IPA range to RAM, for a realm in the NEW state.
+ *
+ * Return: RMI return code
+ */
+static inline int rmi_rtt_init_ripas(unsigned long rd, unsigned long base,
+ unsigned long top, unsigned long *out_top)
+{
+ struct arm_smccc_res res;
+
+ arm_smccc_1_1_invoke(SMC_RMI_RTT_INIT_RIPAS, rd, base, top, &res);
+
+ if (out_top)
+ *out_top = res.a1;
+
+ return res.a0;
+}
+
+/**
+ * rmi_rtt_map_unprotected() - Map NS pages into a realm
+ * @rd: PA of the RD
+ * @ipa: Base IPA of the mapping
+ * @level: Depth within the RTT tree
+ * @desc: RTTE descriptor
+ *
+ * Create a mapping from an Unprotected IPA to a Non-secure PA.
+ *
+ * Return: RMI return code
+ */
+static inline int rmi_rtt_map_unprotected(unsigned long rd,
+ unsigned long ipa,
+ long level,
+ unsigned long desc)
+{
+ struct arm_smccc_res res;
+
+ arm_smccc_1_1_invoke(SMC_RMI_RTT_MAP_UNPROTECTED, rd, ipa, level,
+ desc, &res);
+
+ return res.a0;
+}
+
+/**
+ * rmi_rtt_read_entry() - Read an RTTE
+ * @rd: PA of the RD
+ * @ipa: IPA for which to read the RTTE
+ * @level: RTT level at which to read the RTTE
+ * @rtt: Output structure describing the RTTE
+ *
+ * Reads a RTTE (Realm Translation Table Entry).
+ *
+ * Return: RMI return code
+ */
+static inline int rmi_rtt_read_entry(unsigned long rd, unsigned long ipa,
+ long level, struct rtt_entry *rtt)
+{
+ struct arm_smccc_1_2_regs regs = {
+ SMC_RMI_RTT_READ_ENTRY,
+ rd, ipa, level
+ };
+
+ arm_smccc_1_2_smc(&regs, &regs);
+
+ rtt->walk_level = regs.a1;
+ rtt->state = regs.a2 & 0xFF;
+ rtt->desc = regs.a3;
+ rtt->ripas = regs.a4 & 0xFF;
+
+ return regs.a0;
+}
+
+/**
+ * rmi_rtt_set_ripas() - Set RIPAS for a running realm
+ * @rd: PA of the RD
+ * @rec: PA of the REC making the request
+ * @base: Base of target IPA region
+ * @top: Top of target IPA region
+ * @out_top: Pointer to write top IPA of range whose RIPAS was modified
+ *
+ * Completes a request made by the realm to change the RIPAS of a target IPA
+ * range.
+ *
+ * Return: RMI return code
+ */
+static inline int rmi_rtt_set_ripas(unsigned long rd, unsigned long rec,
+ unsigned long base, unsigned long top,
+ unsigned long *out_top)
+{
+ struct arm_smccc_res res;
+
+ arm_smccc_1_1_invoke(SMC_RMI_RTT_SET_RIPAS, rd, rec, base, top, &res);
+
+ if (out_top)
+ *out_top = res.a1;
+
+ return res.a0;
+}
+
+/**
+ * rmi_rtt_unmap_unprotected() - Remove a NS mapping
+ * @rd: PA of the RD
+ * @ipa: Base IPA of the mapping
+ * @level: Depth within the RTT tree
+ * @out_top: Pointer to write top IPA of non-live RTT entries
+ *
+ * Removes a mapping at an Unprotected IPA.
+ *
+ * Return: RMI return code
+ */
+static inline int rmi_rtt_unmap_unprotected(unsigned long rd,
+ unsigned long ipa,
+ long level,
+ unsigned long *out_top)
+{
+ struct arm_smccc_res res;
+
+ arm_smccc_1_1_invoke(SMC_RMI_RTT_UNMAP_UNPROTECTED, rd, ipa,
+ level, &res);
+
+ if (out_top)
+ *out_top = res.a1;
+
+ return res.a0;
+}
+
+#endif /* __ASM_RMI_CMDS_H */
--
2.43.0
* [PATCH v6 06/43] arm64: RME: Check for RME support at KVM init
From: Steven Price @ 2024-12-12 15:55 UTC
To: kvm, kvmarm
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V
Query the RMI version number and check if it is a compatible version. A
static key is also provided to signal that a supported RMM is available.
Functions are provided to query if a VM or VCPU is a realm (or REC);
these currently always return false.
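As an illustration (hypothetical, not part of this patch) of how the
helpers are intended to guard realm-specific paths:

  /*
   * Realm-specific behaviour stays dead code until a later patch
   * enables the kvm_rme_is_available static key.
   */
  static bool example_realm_is_runnable(struct kvm *kvm)
  {
          if (kvm_is_realm(kvm))
                  return kvm_realm_state(kvm) == REALM_STATE_ACTIVE;
          return false;
  }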
Signed-off-by: Steven Price <steven.price@arm.com>
---
Changes since v5:
* Reword "unsupported" message from "host supports" to "we want" to
clarify that 'we' are the 'host'.
Changes since v2:
* Drop return value from kvm_init_rme(), it was always 0.
* Rely on the RMM return value to identify whether the RSI ABI is
compatible.
---
arch/arm64/include/asm/kvm_emulate.h | 18 +++++++++
arch/arm64/include/asm/kvm_host.h | 4 ++
arch/arm64/include/asm/kvm_rme.h | 56 ++++++++++++++++++++++++++++
arch/arm64/include/asm/virt.h | 1 +
arch/arm64/kvm/Makefile | 3 +-
arch/arm64/kvm/arm.c | 6 +++
arch/arm64/kvm/rme.c | 50 +++++++++++++++++++++++++
7 files changed, 137 insertions(+), 1 deletion(-)
create mode 100644 arch/arm64/include/asm/kvm_rme.h
create mode 100644 arch/arm64/kvm/rme.c
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index cf811009a33c..4d417e39763e 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -702,4 +702,22 @@ static inline void kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)
{
vcpu_set_flag(vcpu, GUEST_HAS_PTRAUTH);
}
+
+static inline bool kvm_is_realm(struct kvm *kvm)
+{
+ if (static_branch_unlikely(&kvm_rme_is_available) && kvm)
+ return kvm->arch.is_realm;
+ return false;
+}
+
+static inline enum realm_state kvm_realm_state(struct kvm *kvm)
+{
+ return READ_ONCE(kvm->arch.realm.state);
+}
+
+static inline bool vcpu_is_rec(struct kvm_vcpu *vcpu)
+{
+ return false;
+}
+
#endif /* __ARM64_KVM_EMULATE_H__ */
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index e18e9244d17a..8482638dce3b 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -27,6 +27,7 @@
#include <asm/fpsimd.h>
#include <asm/kvm.h>
#include <asm/kvm_asm.h>
+#include <asm/kvm_rme.h>
#include <asm/vncr_mapping.h>
#define __KVM_HAVE_ARCH_INTC_INITIALIZED
@@ -380,6 +381,9 @@ struct kvm_arch {
* the associated pKVM instance in the hypervisor.
*/
struct kvm_protected_vm pkvm;
+
+ bool is_realm;
+ struct realm realm;
};
struct kvm_vcpu_fault_info {
diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/asm/kvm_rme.h
new file mode 100644
index 000000000000..69af5c3a1e44
--- /dev/null
+++ b/arch/arm64/include/asm/kvm_rme.h
@@ -0,0 +1,56 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2023 ARM Ltd.
+ */
+
+#ifndef __ASM_KVM_RME_H
+#define __ASM_KVM_RME_H
+
+/**
+ * enum realm_state - State of a Realm
+ */
+enum realm_state {
+ /**
+ * @REALM_STATE_NONE:
+ * Realm has not yet been created. rmi_realm_create() may be
+ * called to create the realm.
+ */
+ REALM_STATE_NONE,
+ /**
+ * @REALM_STATE_NEW:
+ * Realm is under construction, not eligible for execution. Pages
+ * may be populated with rmi_data_create().
+ */
+ REALM_STATE_NEW,
+ /**
+ * @REALM_STATE_ACTIVE:
+ * Realm has been created and is eligible for execution with
+ * rmi_rec_enter(). Pages may no longer be populated with
+ * rmi_data_create().
+ */
+ REALM_STATE_ACTIVE,
+ /**
+ * @REALM_STATE_DYING:
+ * Realm is in the process of being destroyed or has already been
+ * destroyed.
+ */
+ REALM_STATE_DYING,
+ /**
+ * @REALM_STATE_DEAD:
+ * Realm has been destroyed.
+ */
+ REALM_STATE_DEAD
+};
+
+/**
+ * struct realm - Additional per VM data for a Realm
+ *
+ * @state: The lifetime state machine for the realm
+ */
+struct realm {
+ enum realm_state state;
+};
+
+void kvm_init_rme(void);
+
+#endif
diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
index ebf4a9f943ed..e45d47156dcf 100644
--- a/arch/arm64/include/asm/virt.h
+++ b/arch/arm64/include/asm/virt.h
@@ -81,6 +81,7 @@ void __hyp_reset_vectors(void);
bool is_kvm_arm_initialised(void);
DECLARE_STATIC_KEY_FALSE(kvm_protected_mode_initialized);
+DECLARE_STATIC_KEY_FALSE(kvm_rme_is_available);
static inline bool is_pkvm_initialized(void)
{
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 3cf7adb2b503..ce8a10d3161d 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -23,7 +23,8 @@ kvm-y += arm.o mmu.o mmio.o psci.o hypercalls.o pvtime.o \
vgic/vgic-v3.o vgic/vgic-v4.o \
vgic/vgic-mmio.o vgic/vgic-mmio-v2.o \
vgic/vgic-mmio-v3.o vgic/vgic-kvm-device.o \
- vgic/vgic-its.o vgic/vgic-debug.o
+ vgic/vgic-its.o vgic/vgic-debug.o \
+ rme.o
kvm-$(CONFIG_HW_PERF_EVENTS) += pmu-emul.o pmu.o
kvm-$(CONFIG_ARM64_PTR_AUTH) += pauth.o
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index a102c3aebdbc..93087eca9c51 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -40,6 +40,7 @@
#include <asm/kvm_nested.h>
#include <asm/kvm_pkvm.h>
#include <asm/kvm_ptrauth.h>
+#include <asm/kvm_rme.h>
#include <asm/sections.h>
#include <kvm/arm_hypercalls.h>
@@ -59,6 +60,8 @@ enum kvm_wfx_trap_policy {
static enum kvm_wfx_trap_policy kvm_wfi_trap_policy __read_mostly = KVM_WFX_NOTRAP_SINGLE_TASK;
static enum kvm_wfx_trap_policy kvm_wfe_trap_policy __read_mostly = KVM_WFX_NOTRAP_SINGLE_TASK;
+DEFINE_STATIC_KEY_FALSE(kvm_rme_is_available);
+
DECLARE_KVM_HYP_PER_CPU(unsigned long, kvm_hyp_vector);
DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
@@ -2769,6 +2772,9 @@ static __init int kvm_arm_init(void)
in_hyp_mode = is_kernel_in_hyp_mode();
+ if (in_hyp_mode)
+ kvm_init_rme();
+
if (cpus_have_final_cap(ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE) ||
cpus_have_final_cap(ARM64_WORKAROUND_1508412))
kvm_info("Guests without required CPU erratum workarounds can deadlock system!\n" \
diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
new file mode 100644
index 000000000000..b88269b80c11
--- /dev/null
+++ b/arch/arm64/kvm/rme.c
@@ -0,0 +1,50 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2023 ARM Ltd.
+ */
+
+#include <linux/kvm_host.h>
+
+#include <asm/rmi_cmds.h>
+#include <asm/virt.h>
+
+static int rmi_check_version(void)
+{
+ struct arm_smccc_res res;
+ int version_major, version_minor;
+ unsigned long host_version = RMI_ABI_VERSION(RMI_ABI_MAJOR_VERSION,
+ RMI_ABI_MINOR_VERSION);
+
+ arm_smccc_1_1_invoke(SMC_RMI_VERSION, host_version, &res);
+
+ if (res.a0 == SMCCC_RET_NOT_SUPPORTED)
+ return -ENXIO;
+
+ version_major = RMI_ABI_VERSION_GET_MAJOR(res.a1);
+ version_minor = RMI_ABI_VERSION_GET_MINOR(res.a1);
+
+ if (res.a0 != RMI_SUCCESS) {
+ kvm_err("Unsupported RMI ABI (v%d.%d) we want v%d.%d\n",
+ version_major, version_minor,
+ RMI_ABI_MAJOR_VERSION,
+ RMI_ABI_MINOR_VERSION);
+ return -ENXIO;
+ }
+
+ kvm_info("RMI ABI version %d.%d\n", version_major, version_minor);
+
+ return 0;
+}
+
+void kvm_init_rme(void)
+{
+ if (PAGE_SIZE != SZ_4K)
+ /* Only 4k page size on the host is supported */
+ return;
+
+ if (rmi_check_version())
+ /* Continue without realm support */
+ return;
+
+ /* Future patch will enable static branch kvm_rme_is_available */
+}
--
2.43.0
* [PATCH v6 07/43] arm64: RME: Define the user ABI
From: Steven Price @ 2024-12-12 15:55 UTC
To: kvm, kvmarm
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V
There is one (multiplexed) CAP which can be used to create, populate and
then activate the realm.
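For illustration, a VMM might drive the multiplexed CAP as sketched
below; this assumes the sub-command is passed in args[0] and a pointer
to the per-command struct in args[1], which is not spelled out in this
patch:

  /* Hypothetical VMM-side sketch: select the measurement hash. */
  struct kvm_cap_arm_rme_config_item cfg = {
          .cfg       = KVM_CAP_ARM_RME_CFG_HASH_ALGO,
          .hash_algo = KVM_CAP_ARM_RME_MEASUREMENT_ALGO_SHA256,
  };
  struct kvm_enable_cap cap = {
          .cap  = KVM_CAP_ARM_RME,
          .args = { KVM_CAP_ARM_RME_CONFIG_REALM, (__u64)(uintptr_t)&cfg },
  };

  if (ioctl(vm_fd, KVM_ENABLE_CAP, &cap))
          err(1, "KVM_CAP_ARM_RME_CONFIG_REALM");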
Co-developed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Steven Price <steven.price@arm.com>
---
Changes from v5:
* Actually expose the new VCPU capability (KVM_ARM_VCPU_REC) by bumping
KVM_VCPU_MAX_FEATURES - note this also exposes KVM_ARM_VCPU_HAS_EL2!
---
Documentation/virt/kvm/api.rst | 1 +
arch/arm64/include/uapi/asm/kvm.h | 49 +++++++++++++++++++++++++++++++
include/uapi/linux/kvm.h | 12 ++++++++
3 files changed, 62 insertions(+)
diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 454c2aaa155e..df4679415a4c 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -5088,6 +5088,7 @@ Recognised values for feature:
===== ===========================================
arm64 KVM_ARM_VCPU_SVE (requires KVM_CAP_ARM_SVE)
+ arm64 KVM_ARM_VCPU_REC (requires KVM_CAP_ARM_RME)
===== ===========================================
Finalizes the configuration of the specified vcpu feature.
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 66736ff04011..8810719523ec 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -108,6 +108,7 @@ struct kvm_regs {
#define KVM_ARM_VCPU_PTRAUTH_ADDRESS 5 /* VCPU uses address authentication */
#define KVM_ARM_VCPU_PTRAUTH_GENERIC 6 /* VCPU uses generic authentication */
#define KVM_ARM_VCPU_HAS_EL2 7 /* Support nested virtualization */
+#define KVM_ARM_VCPU_REC 8 /* VCPU REC state as part of Realm */
struct kvm_vcpu_init {
__u32 target;
@@ -418,6 +419,54 @@ enum {
#define KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES 3
#define KVM_DEV_ARM_ITS_CTRL_RESET 4
+/* KVM_CAP_ARM_RME on VM fd */
+#define KVM_CAP_ARM_RME_CONFIG_REALM 0
+#define KVM_CAP_ARM_RME_CREATE_RD 1
+#define KVM_CAP_ARM_RME_INIT_IPA_REALM 2
+#define KVM_CAP_ARM_RME_POPULATE_REALM 3
+#define KVM_CAP_ARM_RME_ACTIVATE_REALM 4
+
+#define KVM_CAP_ARM_RME_MEASUREMENT_ALGO_SHA256 0
+#define KVM_CAP_ARM_RME_MEASUREMENT_ALGO_SHA512 1
+
+#define KVM_CAP_ARM_RME_RPV_SIZE 64
+
+/* List of configuration items accepted for KVM_CAP_ARM_RME_CONFIG_REALM */
+#define KVM_CAP_ARM_RME_CFG_RPV 0
+#define KVM_CAP_ARM_RME_CFG_HASH_ALGO 1
+
+struct kvm_cap_arm_rme_config_item {
+ __u32 cfg;
+ union {
+ /* cfg == KVM_CAP_ARM_RME_CFG_RPV */
+ struct {
+ __u8 rpv[KVM_CAP_ARM_RME_RPV_SIZE];
+ };
+
+ /* cfg == KVM_CAP_ARM_RME_CFG_HASH_ALGO */
+ struct {
+ __u32 hash_algo;
+ };
+
+ /* Fix the size of the union */
+ __u8 reserved[256];
+ };
+};
+
+#define KVM_ARM_RME_POPULATE_FLAGS_MEASURE BIT(0)
+struct kvm_cap_arm_rme_populate_realm_args {
+ __u64 populate_ipa_base;
+ __u64 populate_ipa_size;
+ __u32 flags;
+ __u32 reserved[3];
+};
+
+struct kvm_cap_arm_rme_init_ipa_args {
+ __u64 init_ipa_base;
+ __u64 init_ipa_size;
+ __u32 reserved[4];
+};
+
/* Device Control API on vcpu fd */
#define KVM_ARM_VCPU_PMU_V3_CTRL 0
#define KVM_ARM_VCPU_PMU_V3_IRQ 0
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 502ea63b5d2e..f448198838cf 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -934,6 +934,8 @@ struct kvm_enable_cap {
#define KVM_CAP_X86_APIC_BUS_CYCLES_NS 237
#define KVM_CAP_X86_GUEST_MODE 238
+#define KVM_CAP_ARM_RME 300 /* FIXME: Large number to prevent conflicts */
+
struct kvm_irq_routing_irqchip {
__u32 irqchip;
__u32 pin;
@@ -1581,4 +1583,14 @@ struct kvm_pre_fault_memory {
__u64 padding[5];
};
+/* Available with KVM_CAP_ARM_RME, only for VMs with KVM_VM_TYPE_ARM_REALM */
+struct kvm_arm_rmm_psci_complete {
+ __u64 target_mpidr;
+ __u32 psci_status;
+ __u32 padding[3];
+};
+
+/* FIXME: Update nr (0xd2) when merging */
+#define KVM_ARM_VCPU_RMM_PSCI_COMPLETE _IOW(KVMIO, 0xd2, struct kvm_arm_rmm_psci_complete)
+
#endif /* __LINUX_KVM_H */
--
2.43.0
* [PATCH v6 08/43] arm64: RME: ioctls to create and configure realms
From: Steven Price @ 2024-12-12 15:55 UTC
To: kvm, kvmarm
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V
Add the KVM_CAP_ARM_RME_CREATE_RD ioctl to create a realm. This involves
delegating pages to the RMM to hold the Realm Descriptor (RD) and for
the base level of the Realm Translation Tables (RTT). A VMID also needs
to be picked; since the RMM has a separate VMID address space, a
dedicated allocator is added for this purpose.
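(Sketch only: the dedicated allocator could be as simple as an IDA
over the RMM's VMID space; the names below are hypothetical.)

  static DEFINE_IDA(realm_vmid_ida);

  /* RMM VMIDs form their own namespace, separate from the host's. */
  static int realm_vmid_reserve(unsigned int vmid_bits)
  {
          return ida_alloc_max(&realm_vmid_ida, BIT(vmid_bits) - 1,
                               GFP_KERNEL);
  }

  static void realm_vmid_release(unsigned int vmid)
  {
          ida_free(&realm_vmid_ida, vmid);
  }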
KVM_CAP_ARM_RME_CONFIG_REALM is provided to allow configuring the realm
before it is created. Configuration options can be classified as:
1. Parameters specific to the Realm stage2 (e.g. IPA size, VMID, stage2
entry level, entry level RTTs, number of RTTs in the start level,
LPA2). Most of these are not measured by the RMM and come from KVM
bookkeeping.
2. Parameters controlling "Arm Architecture features for the VM" (e.g.
SVE VL, PMU counters, number of HW BRPs/WPs), configured by the VMM
using the "user ID register write" mechanism. These will be
supported in later patches.
3. Parameters that are not part of the core Arm architecture but are
defined by the RMM spec (e.g. hash algorithm for measurement,
personalisation value). These are programmed via
KVM_CAP_ARM_RME_CONFIG_REALM.
For the IPA size there is the possibility that the RMM supports a
different size to the IPA size supported by KVM for normal guests. At
the moment the 'normal limit' is exposed by KVM_CAP_ARM_VM_IPA_SIZE and
the IPA size is configured by the bottom bits of vm_type in
KVM_CREATE_VM. This means that it isn't easy for the VMM to discover
what IPA sizes are supported for Realm guests. Since the IPA is part of
the measurement of the realm guest, the current expectation is that the
VMM will be required to pick the IPA size demanded by attestation, and
therefore simply failing if this isn't available is fine. An option
would be to expose a new capability ioctl to obtain the RMM's maximum
IPA size if this is needed in the future.
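For illustration, the VMM picks the IPA size (and realm-ness) at VM
creation time, roughly as below; KVM_VM_TYPE_ARM_REALM is introduced
later in this series, so this is a sketch only:

  /* Hypothetical VMM sketch: create a realm VM with a 40-bit IPA. */
  int vm_fd = ioctl(kvm_fd, KVM_CREATE_VM,
                    KVM_VM_TYPE_ARM_REALM | KVM_VM_TYPE_ARM_IPA_SIZE(40));
  if (vm_fd < 0)
          err(1, "KVM_CREATE_VM"); /* e.g. RMM doesn't support this size */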
Co-developed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Steven Price <steven.price@arm.com>
---
Changes since v5:
* Introduce free_delegated_granule() to replace many
undelegate/free_page() instances and centralise the comment on
leaking when the undelegate fails.
* Several other minor improvements suggested by reviews - thanks for
the feedback!
Changes since v2:
* Improved commit description.
* Improved return failures for rmi_check_version().
* Clear contents of PGD after it has been undelegated in case the RMM
left stale data.
* Minor changes to reflect changes in previous patches.
---
arch/arm64/include/asm/kvm_emulate.h | 5 +
arch/arm64/include/asm/kvm_rme.h | 19 ++
arch/arm64/kvm/arm.c | 18 ++
arch/arm64/kvm/mmu.c | 22 +-
arch/arm64/kvm/rme.c | 296 +++++++++++++++++++++++++++
5 files changed, 358 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 4d417e39763e..27f54a7778aa 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -715,6 +715,11 @@ static inline enum realm_state kvm_realm_state(struct kvm *kvm)
return READ_ONCE(kvm->arch.realm.state);
}
+static inline bool kvm_realm_is_created(struct kvm *kvm)
+{
+ return kvm_is_realm(kvm) && kvm_realm_state(kvm) != REALM_STATE_NONE;
+}
+
static inline bool vcpu_is_rec(struct kvm_vcpu *vcpu)
{
return false;
diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/asm/kvm_rme.h
index 69af5c3a1e44..209cd99f03dd 100644
--- a/arch/arm64/include/asm/kvm_rme.h
+++ b/arch/arm64/include/asm/kvm_rme.h
@@ -6,6 +6,8 @@
#ifndef __ASM_KVM_RME_H
#define __ASM_KVM_RME_H
+#include <uapi/linux/kvm.h>
+
/**
* enum realm_state - State of a Realm
*/
@@ -46,11 +48,28 @@ enum realm_state {
* struct realm - Additional per VM data for a Realm
*
* @state: The lifetime state machine for the realm
+ * @rd: Kernel mapping of the Realm Descriptor (RD)
+ * @params: Parameters for the RMI_REALM_CREATE command
+ * @num_aux: The number of auxiliary pages required by the RMM
+ * @vmid: VMID to be used by the RMM for the realm
+ * @ia_bits: Number of valid Input Address bits in the IPA
*/
struct realm {
enum realm_state state;
+
+ void *rd;
+ struct realm_params *params;
+
+ unsigned long num_aux;
+ unsigned int vmid;
+ unsigned int ia_bits;
};
void kvm_init_rme(void);
+u32 kvm_realm_ipa_limit(void);
+
+int kvm_realm_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap);
+int kvm_init_realm_vm(struct kvm *kvm);
+void kvm_destroy_realm(struct kvm *kvm);
#endif
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 93087eca9c51..2bbf30d66424 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -153,6 +153,13 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
}
mutex_unlock(&kvm->slots_lock);
break;
+ case KVM_CAP_ARM_RME:
+ if (!kvm_is_realm(kvm))
+ return -EINVAL;
+ mutex_lock(&kvm->lock);
+ r = kvm_realm_enable_cap(kvm, cap);
+ mutex_unlock(&kvm->lock);
+ break;
default:
break;
}
@@ -215,6 +222,13 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
bitmap_zero(kvm->arch.vcpu_features, KVM_VCPU_MAX_FEATURES);
+ /* Initialise the realm bits after the generic bits are enabled */
+ if (kvm_is_realm(kvm)) {
+ ret = kvm_init_realm_vm(kvm);
+ if (ret)
+ goto err_free_cpumask;
+ }
+
return 0;
err_free_cpumask:
@@ -274,6 +288,7 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
kvm_unshare_hyp(kvm, kvm + 1);
kvm_arm_teardown_hypercalls(kvm);
+ kvm_destroy_realm(kvm);
}
static bool kvm_has_full_ptr_auth(void)
@@ -421,6 +436,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
case KVM_CAP_ARM_SUPPORTED_REG_MASK_RANGES:
r = BIT(0);
break;
+ case KVM_CAP_ARM_RME:
+ r = static_key_enabled(&kvm_rme_is_available);
+ break;
default:
r = 0;
}
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index c9d46ad57e52..f09d580c12ad 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -863,12 +863,16 @@ static struct kvm_pgtable_mm_ops kvm_s2_mm_ops = {
.icache_inval_pou = invalidate_icache_guest_page,
};
-static int kvm_init_ipa_range(struct kvm_s2_mmu *mmu, unsigned long type)
+static int kvm_init_ipa_range(struct kvm *kvm,
+ struct kvm_s2_mmu *mmu, unsigned long type)
{
u32 kvm_ipa_limit = get_kvm_ipa_limit();
u64 mmfr0, mmfr1;
u32 phys_shift;
+ if (kvm_is_realm(kvm))
+ kvm_ipa_limit = kvm_realm_ipa_limit();
+
if (type & ~KVM_VM_TYPE_ARM_IPA_SIZE_MASK)
return -EINVAL;
@@ -933,7 +937,7 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t
return -EINVAL;
}
- err = kvm_init_ipa_range(mmu, type);
+ err = kvm_init_ipa_range(kvm, mmu, type);
if (err)
return err;
@@ -1056,6 +1060,20 @@ void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu)
struct kvm_pgtable *pgt = NULL;
write_lock(&kvm->mmu_lock);
+ if (kvm_is_realm(kvm) &&
+ (kvm_realm_state(kvm) != REALM_STATE_DEAD &&
+ kvm_realm_state(kvm) != REALM_STATE_NONE)) {
+ /* Tearing down RTTs will be added in a later patch */
+ write_unlock(&kvm->mmu_lock);
+
+ /*
+ * The physical PGD pages are delegated to the RMM, so cannot
+ * be freed at this point. This function will be called again
+ * from kvm_destroy_realm() after the physical pages have been
+ * returned at which point the memory can be freed.
+ */
+ return;
+ }
pgt = mmu->pgt;
if (pgt) {
mmu->pgd_phys = 0;
diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
index b88269b80c11..5530ec2653b7 100644
--- a/arch/arm64/kvm/rme.c
+++ b/arch/arm64/kvm/rme.c
@@ -5,9 +5,20 @@
#include <linux/kvm_host.h>
+#include <asm/kvm_emulate.h>
+#include <asm/kvm_mmu.h>
#include <asm/rmi_cmds.h>
#include <asm/virt.h>
+#include <asm/kvm_pgtable.h>
+
+static unsigned long rmm_feat_reg0;
+
+static bool rme_supports(unsigned long feature)
+{
+ return !!u64_get_bits(rmm_feat_reg0, feature);
+}
+
static int rmi_check_version(void)
{
struct arm_smccc_res res;
@@ -36,6 +47,285 @@ static int rmi_check_version(void)
return 0;
}
+u32 kvm_realm_ipa_limit(void)
+{
+ return u64_get_bits(rmm_feat_reg0, RMI_FEATURE_REGISTER_0_S2SZ);
+}
+
+static int get_start_level(struct realm *realm)
+{
+ return 4 - stage2_pgtable_levels(realm->ia_bits);
+}
+
+static void free_delegated_granule(phys_addr_t phys)
+{
+ if (WARN_ON(rmi_granule_undelegate(phys))) {
+ /* Undelegate failed: leak the page */
+ return;
+ }
+
+ free_page((unsigned long)phys_to_virt(phys));
+}
+
+static int realm_create_rd(struct kvm *kvm)
+{
+ struct realm *realm = &kvm->arch.realm;
+ struct realm_params *params = realm->params;
+ void *rd = NULL;
+ phys_addr_t rd_phys, params_phys;
+ size_t pgd_size = kvm_pgtable_stage2_pgd_size(kvm->arch.mmu.vtcr);
+ int i, r;
+
+ if (WARN_ON(realm->rd) || WARN_ON(!realm->params))
+ return -EEXIST;
+
+ rd = (void *)__get_free_page(GFP_KERNEL);
+ if (!rd)
+ return -ENOMEM;
+
+ rd_phys = virt_to_phys(rd);
+ if (rmi_granule_delegate(rd_phys)) {
+ r = -ENXIO;
+ goto free_rd;
+ }
+
+ for (i = 0; i < pgd_size; i += PAGE_SIZE) {
+ phys_addr_t pgd_phys = kvm->arch.mmu.pgd_phys + i;
+
+ if (rmi_granule_delegate(pgd_phys)) {
+ r = -ENXIO;
+ goto out_undelegate_tables;
+ }
+ }
+
+ realm->ia_bits = VTCR_EL2_IPA(kvm->arch.mmu.vtcr);
+
+ params->s2sz = VTCR_EL2_IPA(kvm->arch.mmu.vtcr);
+ params->rtt_level_start = get_start_level(realm);
+ params->rtt_num_start = pgd_size / PAGE_SIZE;
+ params->rtt_base = kvm->arch.mmu.pgd_phys;
+ params->vmid = realm->vmid;
+
+ params_phys = virt_to_phys(params);
+
+ if (rmi_realm_create(rd_phys, params_phys)) {
+ r = -ENXIO;
+ goto out_undelegate_tables;
+ }
+
+	if (WARN_ON(rmi_rec_aux_count(rd_phys, &realm->num_aux))) {
+		WARN_ON(rmi_realm_destroy(rd_phys));
+		r = -ENXIO;
+		goto out_undelegate_tables;
+	}
+
+ realm->rd = rd;
+
+ return 0;
+
+out_undelegate_tables:
+	while (i > 0) {
+		phys_addr_t pgd_phys;
+
+		/* i is a byte offset; walk back over the delegated pages */
+		i -= PAGE_SIZE;
+		pgd_phys = kvm->arch.mmu.pgd_phys + i;
+
+		if (WARN_ON(rmi_granule_undelegate(pgd_phys))) {
+			/* Leak the pages if they cannot be returned */
+			kvm->arch.mmu.pgt = NULL;
+			break;
+		}
+	}
+ if (WARN_ON(rmi_granule_undelegate(rd_phys))) {
+ /* Leak the page if it isn't returned */
+ return r;
+ }
+free_rd:
+ free_page((unsigned long)rd);
+ return r;
+}
+
+/* Protects access to rme_vmid_bitmap */
+static DEFINE_SPINLOCK(rme_vmid_lock);
+static unsigned long *rme_vmid_bitmap;
+
+static int rme_vmid_init(void)
+{
+ unsigned int vmid_count = 1 << kvm_get_vmid_bits();
+
+ rme_vmid_bitmap = bitmap_zalloc(vmid_count, GFP_KERNEL);
+ if (!rme_vmid_bitmap) {
+ kvm_err("%s: Couldn't allocate rme vmid bitmap\n", __func__);
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static int rme_vmid_reserve(void)
+{
+ int ret;
+ unsigned int vmid_count = 1 << kvm_get_vmid_bits();
+
+ spin_lock(&rme_vmid_lock);
+ ret = bitmap_find_free_region(rme_vmid_bitmap, vmid_count, 0);
+ spin_unlock(&rme_vmid_lock);
+
+ return ret;
+}
+
+static void rme_vmid_release(unsigned int vmid)
+{
+ spin_lock(&rme_vmid_lock);
+ bitmap_release_region(rme_vmid_bitmap, vmid, 0);
+ spin_unlock(&rme_vmid_lock);
+}
+
+static int kvm_create_realm(struct kvm *kvm)
+{
+ struct realm *realm = &kvm->arch.realm;
+ int ret;
+
+ if (!kvm_is_realm(kvm))
+ return -EINVAL;
+ if (kvm_realm_is_created(kvm))
+ return -EEXIST;
+
+ ret = rme_vmid_reserve();
+ if (ret < 0)
+ return ret;
+ realm->vmid = ret;
+
+ ret = realm_create_rd(kvm);
+ if (ret) {
+ rme_vmid_release(realm->vmid);
+ return ret;
+ }
+
+ WRITE_ONCE(realm->state, REALM_STATE_NEW);
+
+ /* The realm is up, free the parameters. */
+ free_page((unsigned long)realm->params);
+ realm->params = NULL;
+
+ return 0;
+}
+
+static int config_realm_hash_algo(struct realm *realm,
+ struct kvm_cap_arm_rme_config_item *cfg)
+{
+ switch (cfg->hash_algo) {
+ case KVM_CAP_ARM_RME_MEASUREMENT_ALGO_SHA256:
+ if (!rme_supports(RMI_FEATURE_REGISTER_0_HASH_SHA_256))
+ return -EINVAL;
+ break;
+ case KVM_CAP_ARM_RME_MEASUREMENT_ALGO_SHA512:
+ if (!rme_supports(RMI_FEATURE_REGISTER_0_HASH_SHA_512))
+ return -EINVAL;
+ break;
+ default:
+ return -EINVAL;
+ }
+ realm->params->hash_algo = cfg->hash_algo;
+ return 0;
+}
+
+static int kvm_rme_config_realm(struct kvm *kvm, struct kvm_enable_cap *cap)
+{
+ struct kvm_cap_arm_rme_config_item cfg;
+ struct realm *realm = &kvm->arch.realm;
+ int r = 0;
+
+ if (kvm_realm_is_created(kvm))
+ return -EBUSY;
+
+ if (copy_from_user(&cfg, (void __user *)cap->args[1], sizeof(cfg)))
+ return -EFAULT;
+
+ switch (cfg.cfg) {
+ case KVM_CAP_ARM_RME_CFG_RPV:
+ memcpy(&realm->params->rpv, &cfg.rpv, sizeof(cfg.rpv));
+ break;
+ case KVM_CAP_ARM_RME_CFG_HASH_ALGO:
+ r = config_realm_hash_algo(realm, &cfg);
+ break;
+ default:
+ r = -EINVAL;
+ }
+
+ return r;
+}
+
+int kvm_realm_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap)
+{
+ int r = 0;
+
+ if (!kvm_is_realm(kvm))
+ return -EINVAL;
+
+ switch (cap->args[0]) {
+ case KVM_CAP_ARM_RME_CONFIG_REALM:
+ r = kvm_rme_config_realm(kvm, cap);
+ break;
+ case KVM_CAP_ARM_RME_CREATE_RD:
+ r = kvm_create_realm(kvm);
+ break;
+ default:
+ r = -EINVAL;
+ break;
+ }
+
+ return r;
+}
+
+void kvm_destroy_realm(struct kvm *kvm)
+{
+ struct realm *realm = &kvm->arch.realm;
+ size_t pgd_size = kvm_pgtable_stage2_pgd_size(kvm->arch.mmu.vtcr);
+ int i;
+
+ if (realm->params) {
+ free_page((unsigned long)realm->params);
+ realm->params = NULL;
+ }
+
+ if (!kvm_realm_is_created(kvm))
+ return;
+
+ WRITE_ONCE(realm->state, REALM_STATE_DYING);
+
+ if (realm->rd) {
+ phys_addr_t rd_phys = virt_to_phys(realm->rd);
+
+ if (WARN_ON(rmi_realm_destroy(rd_phys)))
+ return;
+ free_delegated_granule(rd_phys);
+ realm->rd = NULL;
+ }
+
+ rme_vmid_release(realm->vmid);
+
+ for (i = 0; i < pgd_size; i += PAGE_SIZE) {
+ phys_addr_t pgd_phys = kvm->arch.mmu.pgd_phys + i;
+
+ if (WARN_ON(rmi_granule_undelegate(pgd_phys)))
+ return;
+ }
+
+ WRITE_ONCE(realm->state, REALM_STATE_DEAD);
+
+ /* Now that the Realm is destroyed, free the entry level RTTs */
+ kvm_free_stage2_pgd(&kvm->arch.mmu);
+}
+
+int kvm_init_realm_vm(struct kvm *kvm)
+{
+ struct realm_params *params;
+
+ params = (struct realm_params *)get_zeroed_page(GFP_KERNEL);
+ if (!params)
+ return -ENOMEM;
+
+ kvm->arch.realm.params = params;
+ return 0;
+}
+
void kvm_init_rme(void)
{
if (PAGE_SIZE != SZ_4K)
@@ -46,5 +336,11 @@ void kvm_init_rme(void)
/* Continue without realm support */
return;
+ if (WARN_ON(rmi_features(0, &rmm_feat_reg0)))
+ return;
+
+ if (rme_vmid_init())
+ return;
+
/* Future patch will enable static branch kvm_rme_is_available */
}
--
2.43.0
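[Editor's note: for illustration only, not part of the patch. The
userspace side of the new KVM_CAP_ARM_RME capability would look roughly
like the sketch below. The VM must already have been created as a realm
(see the machine type patch later in the series); error handling is
omitted:

    struct kvm_cap_arm_rme_config_item cfg = {
        .cfg       = KVM_CAP_ARM_RME_CFG_HASH_ALGO,
        .hash_algo = KVM_CAP_ARM_RME_MEASUREMENT_ALGO_SHA256,
    };
    struct kvm_enable_cap cap = {
        .cap  = KVM_CAP_ARM_RME,
        .args = { KVM_CAP_ARM_RME_CONFIG_REALM,
                  (__u64)(unsigned long)&cfg },
    };

    /* Configure the realm parameters before the RD is created */
    ioctl(vm_fd, KVM_ENABLE_CAP, &cap);

    /* Then create the Realm Descriptor; no further config is allowed */
    cap.args[0] = KVM_CAP_ARM_RME_CREATE_RD;
    cap.args[1] = 0;
    ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
]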
* [PATCH v6 09/43] kvm: arm64: Expose debug HW register numbers for Realm
2024-12-12 15:55 [PATCH v6 00/43] arm64: Support for Arm CCA in KVM Steven Price
` (7 preceding siblings ...)
2024-12-12 15:55 ` [PATCH v6 08/43] arm64: RME: ioctls to create and configure realms Steven Price
@ 2024-12-12 15:55 ` Steven Price
2024-12-12 15:55 ` [PATCH v6 10/43] arm64: kvm: Allow passing machine type in KVM creation Steven Price
` (33 subsequent siblings)
42 siblings, 0 replies; 90+ messages in thread
From: Steven Price @ 2024-12-12 15:55 UTC (permalink / raw)
To: kvm, kvmarm
Cc: Suzuki K Poulose, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Zenghui Yu, linux-arm-kernel,
linux-kernel, Joey Gouly, Alexandru Elisei, Christoffer Dall,
Fuad Tabba, linux-coco, Ganapatrao Kulkarni, Gavin Shan,
Shanker Donthineni, Alper Gun, Aneesh Kumar K . V, Steven Price
From: Suzuki K Poulose <suzuki.poulose@arm.com>
Expose the VM-specific debug hardware register numbers to userspace. A
realm guest is not debuggable from the host, so for a realm VM report
zero breakpoints and watchpoints and hide KVM_CAP_SET_GUEST_DEBUG.
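As an illustration (not part of the patch), a VMM can query the per-VM
values with KVM_CHECK_EXTENSION on the VM file descriptor; for a realm
VM all three return zero:

    int nr_bps = ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_GUEST_DEBUG_HW_BPS);
    int nr_wps = ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_GUEST_DEBUG_HW_WPS);
    int can_dbg = ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_SET_GUEST_DEBUG);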
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Steven Price <steven.price@arm.com>
---
arch/arm64/kvm/arm.c | 24 +++++++++++++++++++++---
1 file changed, 21 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 2bbf30d66424..c505ec61180a 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -78,6 +78,22 @@ bool is_kvm_arm_initialised(void)
return kvm_arm_initialised;
}
+static u32 kvm_arm_get_num_brps(struct kvm *kvm)
+{
+ if (!kvm_is_realm(kvm))
+ return get_num_brps();
+ /* Realm guest is not debuggable. */
+ return 0;
+}
+
+static u32 kvm_arm_get_num_wrps(struct kvm *kvm)
+{
+ if (!kvm_is_realm(kvm))
+ return get_num_wrps();
+ /* Realm guest is not debuggable. */
+ return 0;
+}
+
int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
{
return kvm_vcpu_exiting_guest_mode(vcpu) == IN_GUEST_MODE;
@@ -350,7 +366,6 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
case KVM_CAP_ARM_IRQ_LINE_LAYOUT_2:
case KVM_CAP_ARM_NISV_TO_USER:
case KVM_CAP_ARM_INJECT_EXT_DABT:
- case KVM_CAP_SET_GUEST_DEBUG:
case KVM_CAP_VCPU_ATTRIBUTES:
case KVM_CAP_PTP_KVM:
case KVM_CAP_ARM_SYSTEM_SUSPEND:
@@ -358,6 +373,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
case KVM_CAP_COUNTER_OFFSET:
r = 1;
break;
+ case KVM_CAP_SET_GUEST_DEBUG:
+ r = !kvm_is_realm(kvm);
+ break;
case KVM_CAP_SET_GUEST_DEBUG2:
return KVM_GUESTDBG_VALID_MASK;
case KVM_CAP_ARM_SET_DEVICE_ADDR:
@@ -403,10 +421,10 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
r = cpus_have_final_cap(ARM64_HAS_32BIT_EL1);
break;
case KVM_CAP_GUEST_DEBUG_HW_BPS:
- r = get_num_brps();
+ r = kvm_arm_get_num_brps(kvm);
break;
case KVM_CAP_GUEST_DEBUG_HW_WPS:
- r = get_num_wrps();
+ r = kvm_arm_get_num_wrps(kvm);
break;
case KVM_CAP_ARM_PMU_V3:
r = kvm_arm_support_pmu_v3();
--
2.43.0
* [PATCH v6 10/43] arm64: kvm: Allow passing machine type in KVM creation
2024-12-12 15:55 [PATCH v6 00/43] arm64: Support for Arm CCA in KVM Steven Price
` (8 preceding siblings ...)
2024-12-12 15:55 ` [PATCH v6 09/43] kvm: arm64: Expose debug HW register numbers for Realm Steven Price
@ 2024-12-12 15:55 ` Steven Price
2025-01-29 4:07 ` Gavin Shan
2024-12-12 15:55 ` [PATCH v6 11/43] arm64: RME: RTT tear down Steven Price
` (32 subsequent siblings)
42 siblings, 1 reply; 90+ messages in thread
From: Steven Price @ 2024-12-12 15:55 UTC (permalink / raw)
To: kvm, kvmarm
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V
Previously the machine type was used purely to specify the physical
address size of the guest. Reserve the higher bits to specify an
Arm-specific machine type and declare a new type 'KVM_VM_TYPE_ARM_REALM'
used to create a realm guest.
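As an illustration (not part of the patch), a VMM creating a realm with
a 40-bit IPA space would combine the two fields like so:

    unsigned long type = KVM_VM_TYPE_ARM_REALM |
                         KVM_VM_TYPE_ARM_IPA_SIZE(40);
    int vm_fd = ioctl(kvm_fd, KVM_CREATE_VM, type);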
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Steven Price <steven.price@arm.com>
---
arch/arm64/kvm/arm.c | 17 +++++++++++++++++
arch/arm64/kvm/mmu.c | 3 ---
include/uapi/linux/kvm.h | 19 +++++++++++++++----
3 files changed, 32 insertions(+), 7 deletions(-)
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index c505ec61180a..73016e1e0067 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -207,6 +207,23 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
mutex_unlock(&kvm->lock);
#endif
+ if (type & ~(KVM_VM_TYPE_ARM_MASK | KVM_VM_TYPE_ARM_IPA_SIZE_MASK))
+ return -EINVAL;
+
+ switch (type & KVM_VM_TYPE_ARM_MASK) {
+ case KVM_VM_TYPE_ARM_NORMAL:
+ break;
+ case KVM_VM_TYPE_ARM_REALM:
+ kvm->arch.is_realm = true;
+ if (!kvm_is_realm(kvm)) {
+ /* Realm support unavailable */
+ return -EINVAL;
+ }
+ break;
+ default:
+ return -EINVAL;
+ }
+
kvm_init_nested(kvm);
ret = kvm_share_hyp(kvm, kvm + 1);
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index f09d580c12ad..f3b48895aee5 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -873,9 +873,6 @@ static int kvm_init_ipa_range(struct kvm *kvm,
if (kvm_is_realm(kvm))
kvm_ipa_limit = kvm_realm_ipa_limit();
- if (type & ~KVM_VM_TYPE_ARM_IPA_SIZE_MASK)
- return -EINVAL;
-
phys_shift = KVM_VM_TYPE_ARM_IPA_SIZE(type);
if (is_protected_kvm_enabled()) {
phys_shift = kvm_ipa_limit;
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index f448198838cf..05fb31ced849 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -648,14 +648,25 @@ struct kvm_enable_cap {
#define KVM_S390_SIE_PAGE_OFFSET 1
/*
- * On arm64, machine type can be used to request the physical
- * address size for the VM. Bits[7-0] are reserved for the guest
- * PA size shift (i.e, log2(PA_Size)). For backward compatibility,
- * value 0 implies the default IPA size, 40bits.
+ * On arm64, machine type can be used to request both the machine type and
+ * the physical address size for the VM.
+ *
+ * Bits[11-8] are reserved for the ARM specific machine type.
+ *
+ * Bits[7-0] are reserved for the guest PA size shift (i.e, log2(PA_Size)).
+ * For backward compatibility, value 0 implies the default IPA size, 40bits.
*/
+#define KVM_VM_TYPE_ARM_SHIFT 8
+#define KVM_VM_TYPE_ARM_MASK (0xfULL << KVM_VM_TYPE_ARM_SHIFT)
+#define KVM_VM_TYPE_ARM(_type) \
+ (((_type) << KVM_VM_TYPE_ARM_SHIFT) & KVM_VM_TYPE_ARM_MASK)
+#define KVM_VM_TYPE_ARM_NORMAL KVM_VM_TYPE_ARM(0)
+#define KVM_VM_TYPE_ARM_REALM KVM_VM_TYPE_ARM(1)
+
#define KVM_VM_TYPE_ARM_IPA_SIZE_MASK 0xffULL
#define KVM_VM_TYPE_ARM_IPA_SIZE(x) \
((x) & KVM_VM_TYPE_ARM_IPA_SIZE_MASK)
+
/*
* ioctls for /dev/kvm fds:
*/
--
2.43.0
* [PATCH v6 11/43] arm64: RME: RTT tear down
2024-12-12 15:55 [PATCH v6 00/43] arm64: Support for Arm CCA in KVM Steven Price
` (9 preceding siblings ...)
2024-12-12 15:55 ` [PATCH v6 10/43] arm64: kvm: Allow passing machine type in KVM creation Steven Price
@ 2024-12-12 15:55 ` Steven Price
2025-01-29 23:06 ` Gavin Shan
2024-12-12 15:55 ` [PATCH v6 12/43] arm64: RME: Allocate/free RECs to match vCPUs Steven Price
` (31 subsequent siblings)
42 siblings, 1 reply; 90+ messages in thread
From: Steven Price @ 2024-12-12 15:55 UTC (permalink / raw)
To: kvm, kvmarm
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V
The RMM owns the stage 2 page tables for a realm, and KVM must request
that the RMM creates/destroys entries as necessary. The physical pages
to store the page tables are delegated to the realm as required, and can
be undelegated when no longer used.
Creating new RTTs is the easy part; tearing them down is a little
trickier. The return value of realm_rtt_destroy() can be used to walk
the tree and destroy the entries (undelegating pages that were given to
the realm).
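For reference, with the RMM's fixed 4k granule the per-level map sizes
used during tear down work out as follows (from RMM_RTT_LEVEL_SHIFT
introduced below):

    RMM_RTT_LEVEL_SHIFT(l) = (RMM_PAGE_SHIFT - 3) * (4 - l) + 3
                           = (12 - 3) * (4 - l) + 3

    level 1: shift 30 -> 1GB per entry
    level 2: shift 21 -> 2MB per entry (RMM_RTT_BLOCK_LEVEL)
    level 3: shift 12 -> 4KB per entry (RMM_RTT_MAX_LEVEL)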
Signed-off-by: Steven Price <steven.price@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
Changes since v5:
* Rename some page-size related RME_xxx defines to RMM_xxx - they are
a property of the RMM specification, not the RME architecture.
Changes since v2:
* Moved {alloc,free}_delegated_page() and ensure_spare_page() to a
later patch when they are actually used.
* Some simplifications now rmi_xxx() functions allow NULL as an output
parameter.
* Improved comments and code layout.
---
arch/arm64/include/asm/kvm_rme.h | 19 ++++++
arch/arm64/kvm/mmu.c | 6 +-
arch/arm64/kvm/rme.c | 112 +++++++++++++++++++++++++++++++
3 files changed, 134 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/asm/kvm_rme.h
index 209cd99f03dd..32bdedf1d866 100644
--- a/arch/arm64/include/asm/kvm_rme.h
+++ b/arch/arm64/include/asm/kvm_rme.h
@@ -71,5 +71,24 @@ u32 kvm_realm_ipa_limit(void);
int kvm_realm_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap);
int kvm_init_realm_vm(struct kvm *kvm);
void kvm_destroy_realm(struct kvm *kvm);
+void kvm_realm_destroy_rtts(struct kvm *kvm, u32 ia_bits);
+
+#define RMM_RTT_BLOCK_LEVEL 2
+#define RMM_RTT_MAX_LEVEL 3
+
+#define RMM_PAGE_SHIFT 12
+#define RMM_PAGE_SIZE BIT(RMM_PAGE_SHIFT)
+/* See ARM64_HW_PGTABLE_LEVEL_SHIFT() */
+#define RMM_RTT_LEVEL_SHIFT(l) \
+ ((RMM_PAGE_SHIFT - 3) * (4 - (l)) + 3)
+#define RMM_L2_BLOCK_SIZE BIT(RMM_RTT_LEVEL_SHIFT(2))
+
+static inline unsigned long rme_rtt_level_mapsize(int level)
+{
+ if (WARN_ON(level > RMM_RTT_MAX_LEVEL))
+ return RMM_PAGE_SIZE;
+
+ return (1UL << RMM_RTT_LEVEL_SHIFT(level));
+}
#endif
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index f3b48895aee5..325b578c734d 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1054,14 +1054,15 @@ void stage2_unmap_vm(struct kvm *kvm)
void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu)
{
struct kvm *kvm = kvm_s2_mmu_to_kvm(mmu);
- struct kvm_pgtable *pgt = NULL;
+ struct kvm_pgtable *pgt;
write_lock(&kvm->mmu_lock);
+ pgt = mmu->pgt;
if (kvm_is_realm(kvm) &&
(kvm_realm_state(kvm) != REALM_STATE_DEAD &&
kvm_realm_state(kvm) != REALM_STATE_NONE)) {
- /* Tearing down RTTs will be added in a later patch */
write_unlock(&kvm->mmu_lock);
+ kvm_realm_destroy_rtts(kvm, pgt->ia_bits);
/*
* The physical PGD pages are delegated to the RMM, so cannot
@@ -1071,7 +1072,6 @@ void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu)
*/
return;
}
- pgt = mmu->pgt;
if (pgt) {
mmu->pgd_phys = 0;
mmu->pgt = NULL;
diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
index 5530ec2653b7..5e719235dd77 100644
--- a/arch/arm64/kvm/rme.c
+++ b/arch/arm64/kvm/rme.c
@@ -141,6 +141,118 @@ static int realm_create_rd(struct kvm *kvm)
return r;
}
+static int realm_rtt_destroy(struct realm *realm, unsigned long addr,
+ int level, phys_addr_t *rtt_granule,
+ unsigned long *next_addr)
+{
+ unsigned long out_rtt;
+ int ret;
+
+ ret = rmi_rtt_destroy(virt_to_phys(realm->rd), addr, level,
+ &out_rtt, next_addr);
+
+ *rtt_granule = out_rtt;
+
+ return ret;
+}
+
+static int realm_tear_down_rtt_level(struct realm *realm, int level,
+ unsigned long start, unsigned long end)
+{
+ ssize_t map_size;
+ unsigned long addr, next_addr;
+
+ if (WARN_ON(level > RMM_RTT_MAX_LEVEL))
+ return -EINVAL;
+
+ map_size = rme_rtt_level_mapsize(level - 1);
+
+ for (addr = start; addr < end; addr = next_addr) {
+ phys_addr_t rtt_granule;
+ int ret;
+ unsigned long align_addr = ALIGN(addr, map_size);
+
+ next_addr = ALIGN(addr + 1, map_size);
+
+ if (next_addr > end || align_addr != addr) {
+ /*
+ * The target range is smaller than what this level
+ * covers, recurse deeper.
+ */
+ ret = realm_tear_down_rtt_level(realm,
+ level + 1,
+ addr,
+ min(next_addr, end));
+ if (ret)
+ return ret;
+ continue;
+ }
+
+ ret = realm_rtt_destroy(realm, addr, level,
+ &rtt_granule, &next_addr);
+
+ switch (RMI_RETURN_STATUS(ret)) {
+ case RMI_SUCCESS:
+ free_delegated_granule(rtt_granule);
+ break;
+ case RMI_ERROR_RTT:
+ if (next_addr > addr) {
+ /* Missing RTT, skip */
+ break;
+ }
+			/*
+			 * We tear down the RTT range for the full IPA
+			 * space, after everything is unmapped. Also we
+			 * descend only if we cannot tear down a top-level
+			 * RTT. Thus the RMM must be able to walk to the
+			 * requested level, e.g. a block mapping exists at
+			 * L1 or L2.
+			 */
+ if (WARN_ON(RMI_RETURN_INDEX(ret) != level))
+ return -EBUSY;
+ if (WARN_ON(level == RMM_RTT_MAX_LEVEL))
+ return -EBUSY;
+
+ /*
+ * The table has active entries in it, recurse deeper
+ * and tear down the RTTs.
+ */
+ next_addr = ALIGN(addr + 1, map_size);
+ ret = realm_tear_down_rtt_level(realm,
+ level + 1,
+ addr,
+ next_addr);
+ if (ret)
+ return ret;
+ /*
+ * Now that the child RTTs are destroyed,
+ * retry at this level.
+ */
+ next_addr = addr;
+ break;
+ default:
+ WARN_ON(1);
+ return -ENXIO;
+ }
+ }
+
+ return 0;
+}
+
+static int realm_tear_down_rtt_range(struct realm *realm,
+ unsigned long start, unsigned long end)
+{
+ return realm_tear_down_rtt_level(realm, get_start_level(realm) + 1,
+ start, end);
+}
+
+void kvm_realm_destroy_rtts(struct kvm *kvm, u32 ia_bits)
+{
+ struct realm *realm = &kvm->arch.realm;
+
+ WARN_ON(realm_tear_down_rtt_range(realm, 0, (1UL << ia_bits)));
+}
+
/* Protects access to rme_vmid_bitmap */
static DEFINE_SPINLOCK(rme_vmid_lock);
static unsigned long *rme_vmid_bitmap;
--
2.43.0
* [PATCH v6 12/43] arm64: RME: Allocate/free RECs to match vCPUs
2024-12-12 15:55 [PATCH v6 00/43] arm64: Support for Arm CCA in KVM Steven Price
` (10 preceding siblings ...)
2024-12-12 15:55 ` [PATCH v6 11/43] arm64: RME: RTT tear down Steven Price
@ 2024-12-12 15:55 ` Steven Price
2024-12-19 5:37 ` Aneesh Kumar K.V
2025-01-29 4:50 ` Gavin Shan
2024-12-12 15:55 ` [PATCH v6 13/43] KVM: arm64: vgic: Provide helper for number of list registers Steven Price
` (30 subsequent siblings)
42 siblings, 2 replies; 90+ messages in thread
From: Steven Price @ 2024-12-12 15:55 UTC (permalink / raw)
To: kvm, kvmarm
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V
The RMM maintains a data structure known as the Realm Execution Context
(or REC). It is similar to struct kvm_vcpu and tracks the state of the
virtual CPUs. KVM must delegate memory to the RMM and request that the
structures are created when vCPUs are created, and likewise tear them
down when vCPUs are destroyed.
RECs must also be supplied with additional pages - auxiliary (or AUX)
granules - for storing the larger register state (e.g. for SVE). The
number of AUX granules for a REC depends on the parameters with which
the Realm was created - the RMM makes this information available via the
RMI_REC_AUX_COUNT call performed after creating the Realm Descriptor (RD).
Note that only some of the register state for the REC can be set by KVM;
the rest is defined by the RMM (zeroed). The register state cannot be
changed by KVM after the REC is created (except when the guest
explicitly requests this, e.g. by performing a PSCI call). The RMM also
requires that the VMM creates RECs in ascending order of MPIDR.
See Realm Management Monitor specification (DEN0137) for more information:
https://developer.arm.com/documentation/den0137/
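For illustration (not part of the patch), the VMM-side flow for bringing
up a realm vCPU is roughly the sketch below. Because the initial
register state is measured into the REC, registers must be set before
finalizing, and vCPUs must be finalized in ascending MPIDR order; error
handling is omitted:

    struct kvm_vcpu_init init;
    int feature = KVM_ARM_VCPU_REC;

    ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &init);
    init.features[0] |= (1 << KVM_ARM_VCPU_PSCI_0_2) |
                        (1 << KVM_ARM_VCPU_REC);
    ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);

    /* ... set the initial PC and GPRs with KVM_SET_ONE_REG ... */

    /* Create the REC; register state is frozen from here on */
    ioctl(vcpu_fd, KVM_ARM_VCPU_FINALIZE, &feature);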
Signed-off-by: Steven Price <steven.price@arm.com>
---
Changes since v5:
* Separate the concept of vcpu_is_rec() and
kvm_arm_vcpu_rec_finalized() by using the KVM_ARM_VCPU_REC feature as
the indication that the VCPU is a REC.
Changes since v2:
* Free rec->run earlier in kvm_destroy_realm() and adapt to previous patches.
---
arch/arm64/include/asm/kvm_emulate.h | 7 ++
arch/arm64/include/asm/kvm_host.h | 3 +
arch/arm64/include/asm/kvm_rme.h | 18 ++++
arch/arm64/kvm/arm.c | 9 ++
arch/arm64/kvm/reset.c | 11 ++
arch/arm64/kvm/rme.c | 144 +++++++++++++++++++++++++++
6 files changed, 192 insertions(+)
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 27f54a7778aa..ec2b6d9c9c07 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -722,7 +722,14 @@ static inline bool kvm_realm_is_created(struct kvm *kvm)
static inline bool vcpu_is_rec(struct kvm_vcpu *vcpu)
{
+ if (static_branch_unlikely(&kvm_rme_is_available))
+ return vcpu_has_feature(vcpu, KVM_ARM_VCPU_REC);
return false;
}
+static inline bool kvm_arm_vcpu_rec_finalized(struct kvm_vcpu *vcpu)
+{
+ return vcpu->arch.rec.mpidr != INVALID_HWID;
+}
+
#endif /* __ARM64_KVM_EMULATE_H__ */
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 8482638dce3b..220195c727ef 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -789,6 +789,9 @@ struct kvm_vcpu_arch {
/* Per-vcpu CCSIDR override or NULL */
u32 *ccsidr;
+
+ /* Realm meta data */
+ struct realm_rec rec;
};
/*
diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/asm/kvm_rme.h
index 32bdedf1d866..62d4a63d3035 100644
--- a/arch/arm64/include/asm/kvm_rme.h
+++ b/arch/arm64/include/asm/kvm_rme.h
@@ -6,6 +6,7 @@
#ifndef __ASM_KVM_RME_H
#define __ASM_KVM_RME_H
+#include <asm/rmi_smc.h>
#include <uapi/linux/kvm.h>
/**
@@ -65,6 +66,21 @@ struct realm {
unsigned int ia_bits;
};
+/**
+ * struct realm_rec - Additional per VCPU data for a Realm
+ *
+ * @mpidr: MPIDR (Multiprocessor Affinity Register) value to identify this VCPU
+ * @rec_page: Kernel VA of the RMM's private page for this REC
+ * @aux_pages: Additional pages private to the RMM for this REC
+ * @run: Kernel VA of the RmiRecRun structure shared with the RMM
+ */
+struct realm_rec {
+ unsigned long mpidr;
+ void *rec_page;
+ struct page *aux_pages[REC_PARAMS_AUX_GRANULES];
+ struct rec_run *run;
+};
+
void kvm_init_rme(void);
u32 kvm_realm_ipa_limit(void);
@@ -72,6 +88,8 @@ int kvm_realm_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap);
int kvm_init_realm_vm(struct kvm *kvm);
void kvm_destroy_realm(struct kvm *kvm);
void kvm_realm_destroy_rtts(struct kvm *kvm, u32 ia_bits);
+int kvm_create_rec(struct kvm_vcpu *vcpu);
+void kvm_destroy_rec(struct kvm_vcpu *vcpu);
#define RMM_RTT_BLOCK_LEVEL 2
#define RMM_RTT_MAX_LEVEL 3
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 73016e1e0067..2d97147651be 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -525,6 +525,8 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
/* Force users to call KVM_ARM_VCPU_INIT */
vcpu_clear_flag(vcpu, VCPU_INITIALIZED);
+ vcpu->arch.rec.mpidr = INVALID_HWID;
+
vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
/* Set up the timer */
@@ -1467,6 +1469,9 @@ static unsigned long system_supported_vcpu_features(void)
if (!cpus_have_final_cap(ARM64_HAS_NESTED_VIRT))
clear_bit(KVM_ARM_VCPU_HAS_EL2, &features);
+ if (!static_branch_unlikely(&kvm_rme_is_available))
+ clear_bit(KVM_ARM_VCPU_REC, &features);
+
return features;
}
@@ -1506,6 +1511,10 @@ static int kvm_vcpu_init_check_features(struct kvm_vcpu *vcpu,
if (test_bit(KVM_ARM_VCPU_HAS_EL2, &features))
return -EINVAL;
+ /* RME is incompatible with AArch32 */
+ if (test_bit(KVM_ARM_VCPU_REC, &features))
+ return -EINVAL;
+
return 0;
}
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 470524b31951..16a5d11fc75a 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -137,6 +137,11 @@ int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature)
return -EPERM;
return kvm_vcpu_finalize_sve(vcpu);
+ case KVM_ARM_VCPU_REC:
+ if (!kvm_is_realm(vcpu->kvm) || !vcpu_is_rec(vcpu))
+ return -EINVAL;
+
+ return kvm_create_rec(vcpu);
}
return -EINVAL;
@@ -147,6 +152,11 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu)
if (vcpu_has_sve(vcpu) && !kvm_arm_vcpu_sve_finalized(vcpu))
return false;
+ if (kvm_is_realm(vcpu->kvm) &&
+ !(vcpu_is_rec(vcpu) && kvm_arm_vcpu_rec_finalized(vcpu) &&
+ READ_ONCE(vcpu->kvm->arch.realm.state) == REALM_STATE_ACTIVE))
+ return false;
+
return true;
}
@@ -159,6 +169,7 @@ void kvm_arm_vcpu_destroy(struct kvm_vcpu *vcpu)
kvm_unshare_hyp(sve_state, sve_state + vcpu_sve_state_size(vcpu));
kfree(sve_state);
kfree(vcpu->arch.ccsidr);
+ kvm_destroy_rec(vcpu);
}
static void kvm_vcpu_reset_sve(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
index 5e719235dd77..ad984706f68c 100644
--- a/arch/arm64/kvm/rme.c
+++ b/arch/arm64/kvm/rme.c
@@ -426,6 +426,150 @@ void kvm_destroy_realm(struct kvm *kvm)
kvm_free_stage2_pgd(&kvm->arch.mmu);
}
+static void free_rec_aux(struct page **aux_pages,
+ unsigned int num_aux)
+{
+ unsigned int i;
+
+ for (i = 0; i < num_aux; i++)
+ free_delegated_granule(page_to_phys(aux_pages[i]));
+}
+
+static int alloc_rec_aux(struct page **aux_pages,
+ u64 *aux_phys_pages,
+ unsigned int num_aux)
+{
+ int ret;
+ unsigned int i;
+
+ for (i = 0; i < num_aux; i++) {
+ struct page *aux_page;
+ phys_addr_t aux_page_phys;
+
+ aux_page = alloc_page(GFP_KERNEL);
+ if (!aux_page) {
+ ret = -ENOMEM;
+ goto out_err;
+ }
+ aux_page_phys = page_to_phys(aux_page);
+ if (rmi_granule_delegate(aux_page_phys)) {
+ __free_page(aux_page);
+ ret = -ENXIO;
+ goto out_err;
+ }
+ aux_pages[i] = aux_page;
+ aux_phys_pages[i] = aux_page_phys;
+ }
+
+ return 0;
+out_err:
+ free_rec_aux(aux_pages, i);
+ return ret;
+}
+
+int kvm_create_rec(struct kvm_vcpu *vcpu)
+{
+ struct user_pt_regs *vcpu_regs = vcpu_gp_regs(vcpu);
+ unsigned long mpidr = kvm_vcpu_get_mpidr_aff(vcpu);
+ struct realm *realm = &vcpu->kvm->arch.realm;
+ struct realm_rec *rec = &vcpu->arch.rec;
+ unsigned long rec_page_phys;
+ struct rec_params *params;
+ int r, i;
+
+ if (kvm_realm_state(vcpu->kvm) != REALM_STATE_NEW)
+ return -ENOENT;
+
+ /*
+ * The RMM will report PSCI v1.0 to Realms and the KVM_ARM_VCPU_PSCI_0_2
+ * flag covers v0.2 and onwards.
+ */
+ if (!vcpu_has_feature(vcpu, KVM_ARM_VCPU_PSCI_0_2))
+ return -EINVAL;
+
+ BUILD_BUG_ON(sizeof(*params) > PAGE_SIZE);
+ BUILD_BUG_ON(sizeof(*rec->run) > PAGE_SIZE);
+
+ params = (struct rec_params *)get_zeroed_page(GFP_KERNEL);
+ rec->rec_page = (void *)__get_free_page(GFP_KERNEL);
+ rec->run = (void *)get_zeroed_page(GFP_KERNEL);
+ if (!params || !rec->rec_page || !rec->run) {
+ r = -ENOMEM;
+ goto out_free_pages;
+ }
+
+ for (i = 0; i < ARRAY_SIZE(params->gprs); i++)
+ params->gprs[i] = vcpu_regs->regs[i];
+
+ params->pc = vcpu_regs->pc;
+
+ if (vcpu->vcpu_id == 0)
+ params->flags |= REC_PARAMS_FLAG_RUNNABLE;
+
+ rec_page_phys = virt_to_phys(rec->rec_page);
+
+ if (rmi_granule_delegate(rec_page_phys)) {
+ r = -ENXIO;
+ goto out_free_pages;
+ }
+
+ r = alloc_rec_aux(rec->aux_pages, params->aux, realm->num_aux);
+ if (r)
+ goto out_undelegate_rmm_rec;
+
+ params->num_rec_aux = realm->num_aux;
+ params->mpidr = mpidr;
+
+ if (rmi_rec_create(virt_to_phys(realm->rd),
+ rec_page_phys,
+ virt_to_phys(params))) {
+ r = -ENXIO;
+ goto out_free_rec_aux;
+ }
+
+ rec->mpidr = mpidr;
+
+ free_page((unsigned long)params);
+ return 0;
+
+out_free_rec_aux:
+ free_rec_aux(rec->aux_pages, realm->num_aux);
+out_undelegate_rmm_rec:
+ if (WARN_ON(rmi_granule_undelegate(rec_page_phys)))
+ rec->rec_page = NULL;
+out_free_pages:
+ free_page((unsigned long)rec->run);
+ free_page((unsigned long)rec->rec_page);
+ free_page((unsigned long)params);
+ return r;
+}
+
+void kvm_destroy_rec(struct kvm_vcpu *vcpu)
+{
+ struct realm *realm = &vcpu->kvm->arch.realm;
+ struct realm_rec *rec = &vcpu->arch.rec;
+ unsigned long rec_page_phys;
+
+ if (!vcpu_is_rec(vcpu))
+ return;
+
+ free_page((unsigned long)rec->run);
+
+ rec_page_phys = virt_to_phys(rec->rec_page);
+
+ /*
+ * The REC and any AUX pages cannot be reclaimed until the REC is
+ * destroyed. So if the REC destroy fails then the REC page and any AUX
+ * pages will be leaked.
+ */
+ if (WARN_ON(rmi_rec_destroy(rec_page_phys)))
+ return;
+
+ free_rec_aux(rec->aux_pages, realm->num_aux);
+
+ free_delegated_granule(rec_page_phys);
+}
+
int kvm_init_realm_vm(struct kvm *kvm)
{
struct realm_params *params;
--
2.43.0
* [PATCH v6 13/43] KVM: arm64: vgic: Provide helper for number of list registers
2024-12-12 15:55 [PATCH v6 00/43] arm64: Support for Arm CCA in KVM Steven Price
` (11 preceding siblings ...)
2024-12-12 15:55 ` [PATCH v6 12/43] arm64: RME: Allocate/free RECs to match vCPUs Steven Price
@ 2024-12-12 15:55 ` Steven Price
2024-12-12 15:55 ` [PATCH v6 14/43] arm64: RME: Support for the VGIC in realms Steven Price
` (29 subsequent siblings)
42 siblings, 0 replies; 90+ messages in thread
From: Steven Price @ 2024-12-12 15:55 UTC (permalink / raw)
To: kvm, kvmarm
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V
Currently the number of available list registers is stored in a global
(kvm_vgic_global_state.nr_lr). With Arm CCA the RMM is permitted to
reserve list registers for its own use, so fewer list registers can be
available for a realm VM. Provide a wrapper function to fetch the
global, in preparation for restricting nr_lr when dealing with a realm
VM.
Signed-off-by: Steven Price <steven.price@arm.com>
---
New patch for v6
---
arch/arm64/kvm/vgic/vgic.c | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/kvm/vgic/vgic.c b/arch/arm64/kvm/vgic/vgic.c
index cc8c6b9b5dd8..1077fab2df4b 100644
--- a/arch/arm64/kvm/vgic/vgic.c
+++ b/arch/arm64/kvm/vgic/vgic.c
@@ -21,6 +21,11 @@ struct vgic_global kvm_vgic_global_state __ro_after_init = {
.gicv3_cpuif = STATIC_KEY_FALSE_INIT,
};
+static inline int kvm_vcpu_vgic_nr_lr(struct kvm_vcpu *vcpu)
+{
+ return kvm_vgic_global_state.nr_lr;
+}
+
/*
* Locking order is always:
* kvm->lock (mutex)
@@ -802,7 +807,7 @@ static void vgic_flush_lr_state(struct kvm_vcpu *vcpu)
lockdep_assert_held(&vgic_cpu->ap_list_lock);
count = compute_ap_list_depth(vcpu, &multi_sgi);
- if (count > kvm_vgic_global_state.nr_lr || multi_sgi)
+ if (count > kvm_vcpu_vgic_nr_lr(vcpu) || multi_sgi)
vgic_sort_ap_list(vcpu);
count = 0;
@@ -831,7 +836,7 @@ static void vgic_flush_lr_state(struct kvm_vcpu *vcpu)
raw_spin_unlock(&irq->irq_lock);
- if (count == kvm_vgic_global_state.nr_lr) {
+ if (count == kvm_vcpu_vgic_nr_lr(vcpu)) {
if (!list_is_last(&irq->ap_list,
&vgic_cpu->ap_list_head))
vgic_set_underflow(vcpu);
@@ -840,7 +845,7 @@ static void vgic_flush_lr_state(struct kvm_vcpu *vcpu)
}
/* Nuke remaining LRs */
- for (i = count ; i < kvm_vgic_global_state.nr_lr; i++)
+ for (i = count ; i < kvm_vcpu_vgic_nr_lr(vcpu); i++)
vgic_clear_lr(vcpu, i);
if (!static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif))
--
2.43.0
* [PATCH v6 14/43] arm64: RME: Support for the VGIC in realms
2024-12-12 15:55 [PATCH v6 00/43] arm64: Support for Arm CCA in KVM Steven Price
` (12 preceding siblings ...)
2024-12-12 15:55 ` [PATCH v6 13/43] KVM: arm64: vgic: Provide helper for number of list registers Steven Price
@ 2024-12-12 15:55 ` Steven Price
2024-12-12 15:55 ` [PATCH v6 15/43] KVM: arm64: Support timers in realm RECs Steven Price
` (28 subsequent siblings)
42 siblings, 0 replies; 90+ messages in thread
From: Steven Price @ 2024-12-12 15:55 UTC (permalink / raw)
To: kvm, kvmarm
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V
The RMM provides emulation of a VGIC to the realm guest but delegates
much of the handling to the host. Implement support in KVM for
saving/restoring state to/from the REC structure.
Signed-off-by: Steven Price <steven.price@arm.com>
---
Changes from v5:
* Handle RMM providing fewer GIC LRs than the hardware supports.
---
arch/arm64/include/asm/kvm_rme.h | 1 +
arch/arm64/kvm/arm.c | 16 +++++++++---
arch/arm64/kvm/rme.c | 5 ++++
arch/arm64/kvm/vgic/vgic-init.c | 2 +-
arch/arm64/kvm/vgic/vgic-v3.c | 5 ++++
arch/arm64/kvm/vgic/vgic.c | 43 ++++++++++++++++++++++++++++++--
6 files changed, 66 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/asm/kvm_rme.h
index 62d4a63d3035..be64b749fcac 100644
--- a/arch/arm64/include/asm/kvm_rme.h
+++ b/arch/arm64/include/asm/kvm_rme.h
@@ -83,6 +83,7 @@ struct realm_rec {
void kvm_init_rme(void);
u32 kvm_realm_ipa_limit(void);
+u32 kvm_realm_vgic_nr_lr(void);
int kvm_realm_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap);
int kvm_init_realm_vm(struct kvm *kvm);
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 2d97147651be..ab00fb7a8ece 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -683,19 +683,24 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
{
+ kvm_timer_vcpu_put(vcpu);
+ kvm_vgic_put(vcpu);
+
+ vcpu->cpu = -1;
+
+ if (vcpu_is_rec(vcpu))
+ return;
+
kvm_arch_vcpu_put_debug_state_flags(vcpu);
kvm_arch_vcpu_put_fp(vcpu);
if (has_vhe())
kvm_vcpu_put_vhe(vcpu);
- kvm_timer_vcpu_put(vcpu);
- kvm_vgic_put(vcpu);
kvm_vcpu_pmu_restore_host(vcpu);
if (vcpu_has_nv(vcpu))
kvm_vcpu_put_hw_mmu(vcpu);
kvm_arm_vmid_clear_active();
vcpu_clear_on_unsupported_cpu(vcpu);
- vcpu->cpu = -1;
}
static void __kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu)
@@ -902,6 +907,11 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
return ret;
}
+ if (!irqchip_in_kernel(kvm) && kvm_is_realm(vcpu->kvm)) {
+ /* Userspace irqchip not yet supported with Realms */
+ return -EOPNOTSUPP;
+ }
+
mutex_lock(&kvm->arch.config_lock);
set_bit(KVM_ARCH_FLAG_HAS_RAN_ONCE, &kvm->arch.flags);
mutex_unlock(&kvm->arch.config_lock);
diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
index ad984706f68c..72778d8ab52b 100644
--- a/arch/arm64/kvm/rme.c
+++ b/arch/arm64/kvm/rme.c
@@ -52,6 +52,11 @@ u32 kvm_realm_ipa_limit(void)
return u64_get_bits(rmm_feat_reg0, RMI_FEATURE_REGISTER_0_S2SZ);
}
+u32 kvm_realm_vgic_nr_lr(void)
+{
+ return u64_get_bits(rmm_feat_reg0, RMI_FEATURE_REGISTER_0_GICV3_NUM_LRS);
+}
+
static int get_start_level(struct realm *realm)
{
return 4 - stage2_pgtable_levels(realm->ia_bits);
diff --git a/arch/arm64/kvm/vgic/vgic-init.c b/arch/arm64/kvm/vgic/vgic-init.c
index bc7e22ab5d81..0ec9f6f62e86 100644
--- a/arch/arm64/kvm/vgic/vgic-init.c
+++ b/arch/arm64/kvm/vgic/vgic-init.c
@@ -79,7 +79,7 @@ int kvm_vgic_create(struct kvm *kvm, u32 type)
* the proper checks already.
*/
if (type == KVM_DEV_TYPE_ARM_VGIC_V2 &&
- !kvm_vgic_global_state.can_emulate_gicv2)
+ (!kvm_vgic_global_state.can_emulate_gicv2 || kvm_is_realm(kvm)))
return -ENODEV;
/* Must be held to avoid race with vCPU creation */
diff --git a/arch/arm64/kvm/vgic/vgic-v3.c b/arch/arm64/kvm/vgic/vgic-v3.c
index f267bc2486a1..9048e86e5775 100644
--- a/arch/arm64/kvm/vgic/vgic-v3.c
+++ b/arch/arm64/kvm/vgic/vgic-v3.c
@@ -7,9 +7,11 @@
#include <linux/kvm.h>
#include <linux/kvm_host.h>
#include <kvm/arm_vgic.h>
+#include <asm/kvm_emulate.h>
#include <asm/kvm_hyp.h>
#include <asm/kvm_mmu.h>
#include <asm/kvm_asm.h>
+#include <asm/rmi_smc.h>
#include "vgic.h"
@@ -746,6 +748,9 @@ void vgic_v3_put(struct kvm_vcpu *vcpu)
{
struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
+ if (vcpu_is_rec(vcpu))
+ cpu_if->vgic_vmcr = vcpu->arch.rec.run->exit.gicv3_vmcr;
+
kvm_call_hyp(__vgic_v3_save_vmcr_aprs, cpu_if);
WARN_ON(vgic_v4_put(vcpu));
diff --git a/arch/arm64/kvm/vgic/vgic.c b/arch/arm64/kvm/vgic/vgic.c
index 1077fab2df4b..4218de3ea9da 100644
--- a/arch/arm64/kvm/vgic/vgic.c
+++ b/arch/arm64/kvm/vgic/vgic.c
@@ -10,7 +10,9 @@
#include <linux/list_sort.h>
#include <linux/nospec.h>
+#include <asm/kvm_emulate.h>
#include <asm/kvm_hyp.h>
+#include <asm/kvm_rme.h>
#include "vgic.h"
@@ -23,6 +25,8 @@ struct vgic_global kvm_vgic_global_state __ro_after_init = {
static inline int kvm_vcpu_vgic_nr_lr(struct kvm_vcpu *vcpu)
{
+ if (unlikely(vcpu_is_rec(vcpu)))
+ return kvm_realm_vgic_nr_lr();
return kvm_vgic_global_state.nr_lr;
}
@@ -864,10 +868,23 @@ static inline bool can_access_vgic_from_kernel(void)
return !static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif) || has_vhe();
}
+static inline void vgic_rmm_save_state(struct kvm_vcpu *vcpu)
+{
+ struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
+ int i;
+
+ for (i = 0; i < kvm_vcpu_vgic_nr_lr(vcpu); i++) {
+ cpu_if->vgic_lr[i] = vcpu->arch.rec.run->exit.gicv3_lrs[i];
+ vcpu->arch.rec.run->enter.gicv3_lrs[i] = 0;
+ }
+}
+
static inline void vgic_save_state(struct kvm_vcpu *vcpu)
{
if (!static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif))
vgic_v2_save_state(vcpu);
+ else if (vcpu_is_rec(vcpu))
+ vgic_rmm_save_state(vcpu);
else
__vgic_v3_save_state(&vcpu->arch.vgic_cpu.vgic_v3);
}
@@ -894,10 +911,28 @@ void kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu)
vgic_prune_ap_list(vcpu);
}
+static inline void vgic_rmm_restore_state(struct kvm_vcpu *vcpu)
+{
+ struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
+ int i;
+
+ for (i = 0; i < kvm_vcpu_vgic_nr_lr(vcpu); i++) {
+ vcpu->arch.rec.run->enter.gicv3_lrs[i] = cpu_if->vgic_lr[i];
+ /*
+ * Also populate the rec.run->exit copies so that a late
+ * decision to back out from entering the realm doesn't cause
+ * the state to be lost
+ */
+ vcpu->arch.rec.run->exit.gicv3_lrs[i] = cpu_if->vgic_lr[i];
+ }
+}
+
static inline void vgic_restore_state(struct kvm_vcpu *vcpu)
{
if (!static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif))
vgic_v2_restore_state(vcpu);
+ else if (vcpu_is_rec(vcpu))
+ vgic_rmm_restore_state(vcpu);
else
__vgic_v3_restore_state(&vcpu->arch.vgic_cpu.vgic_v3);
}
@@ -938,7 +973,9 @@ void kvm_vgic_flush_hwstate(struct kvm_vcpu *vcpu)
void kvm_vgic_load(struct kvm_vcpu *vcpu)
{
- if (unlikely(!irqchip_in_kernel(vcpu->kvm) || !vgic_initialized(vcpu->kvm))) {
+ if (unlikely(!irqchip_in_kernel(vcpu->kvm) ||
+ !vgic_initialized(vcpu->kvm) ||
+ vcpu_is_rec(vcpu))) {
if (has_vhe() && static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif))
__vgic_v3_activate_traps(&vcpu->arch.vgic_cpu.vgic_v3);
return;
@@ -952,7 +989,9 @@ void kvm_vgic_load(struct kvm_vcpu *vcpu)
void kvm_vgic_put(struct kvm_vcpu *vcpu)
{
- if (unlikely(!irqchip_in_kernel(vcpu->kvm) || !vgic_initialized(vcpu->kvm))) {
+ if (unlikely(!irqchip_in_kernel(vcpu->kvm) ||
+ !vgic_initialized(vcpu->kvm) ||
+ vcpu_is_rec(vcpu))) {
if (has_vhe() && static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif))
__vgic_v3_deactivate_traps(&vcpu->arch.vgic_cpu.vgic_v3);
return;
--
2.43.0
* [PATCH v6 15/43] KVM: arm64: Support timers in realm RECs
2024-12-12 15:55 [PATCH v6 00/43] arm64: Support for Arm CCA in KVM Steven Price
` (13 preceding siblings ...)
2024-12-12 15:55 ` [PATCH v6 14/43] arm64: RME: Support for the VGIC in realms Steven Price
@ 2024-12-12 15:55 ` Steven Price
2024-12-12 15:55 ` [PATCH v6 16/43] arm64: RME: Allow VMM to set RIPAS Steven Price
` (27 subsequent siblings)
42 siblings, 0 replies; 90+ messages in thread
From: Steven Price @ 2024-12-12 15:55 UTC (permalink / raw)
To: kvm, kvmarm
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V
The RMM keeps track of the timer while the realm REC is running, but on
exit to the normal world KVM is responsible for handling the timers.
A later patch adds support for propagating the timer values from the
exit data structure and calling kvm_realm_timers_update().
Signed-off-by: Steven Price <steven.price@arm.com>
---
arch/arm64/kvm/arch_timer.c | 45 ++++++++++++++++++++++++++++++++----
include/kvm/arm_arch_timer.h | 2 ++
2 files changed, 43 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/kvm/arch_timer.c b/arch/arm64/kvm/arch_timer.c
index 1215df590418..46fa903ed7ab 100644
--- a/arch/arm64/kvm/arch_timer.c
+++ b/arch/arm64/kvm/arch_timer.c
@@ -162,6 +162,13 @@ static void timer_set_cval(struct arch_timer_context *ctxt, u64 cval)
static void timer_set_offset(struct arch_timer_context *ctxt, u64 offset)
{
+ struct kvm_vcpu *vcpu = ctxt->vcpu;
+
+ if (kvm_is_realm(vcpu->kvm)) {
+ WARN_ON(offset);
+ return;
+ }
+
if (!ctxt->offset.vm_offset) {
WARN(offset, "timer %ld\n", arch_timer_ctx_index(ctxt));
return;
@@ -459,6 +466,21 @@ static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
}
}
+void kvm_realm_timers_update(struct kvm_vcpu *vcpu)
+{
+ struct arch_timer_cpu *arch_timer = &vcpu->arch.timer_cpu;
+ int i;
+
+ for (i = 0; i < NR_KVM_EL0_TIMERS; i++) {
+ struct arch_timer_context *timer = &arch_timer->timers[i];
+ bool status = timer_get_ctl(timer) & ARCH_TIMER_CTRL_IT_STAT;
+ bool level = kvm_timer_irq_can_fire(timer) && status;
+
+ if (level != timer->irq.level)
+ kvm_timer_update_irq(vcpu, level, timer);
+ }
+}
+
/* Only called for a fully emulated timer */
static void timer_emulate(struct arch_timer_context *ctx)
{
@@ -830,6 +852,8 @@ void kvm_timer_vcpu_load(struct kvm_vcpu *vcpu)
if (unlikely(!timer->enabled))
return;
+ kvm_timer_unblocking(vcpu);
+
get_timer_map(vcpu, &map);
if (static_branch_likely(&has_gic_active_state)) {
@@ -843,8 +867,6 @@ void kvm_timer_vcpu_load(struct kvm_vcpu *vcpu)
kvm_timer_vcpu_load_nogic(vcpu);
}
- kvm_timer_unblocking(vcpu);
-
timer_restore_state(map.direct_vtimer);
if (map.direct_ptimer)
timer_restore_state(map.direct_ptimer);
@@ -987,7 +1009,9 @@ static void timer_context_init(struct kvm_vcpu *vcpu, int timerid)
ctxt->vcpu = vcpu;
- if (timerid == TIMER_VTIMER)
+ if (kvm_is_realm(vcpu->kvm))
+ ctxt->offset.vm_offset = NULL;
+ else if (timerid == TIMER_VTIMER)
ctxt->offset.vm_offset = &kvm->arch.timer_data.voffset;
else
ctxt->offset.vm_offset = &kvm->arch.timer_data.poffset;
@@ -1010,13 +1034,19 @@ static void timer_context_init(struct kvm_vcpu *vcpu, int timerid)
void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu)
{
struct arch_timer_cpu *timer = vcpu_timer(vcpu);
+ u64 cntvoff;
for (int i = 0; i < NR_KVM_TIMERS; i++)
timer_context_init(vcpu, i);
+ if (kvm_is_realm(vcpu->kvm))
+ cntvoff = 0;
+ else
+ cntvoff = kvm_phys_timer_read();
+
/* Synchronize offsets across timers of a VM if not already provided */
if (!test_bit(KVM_ARCH_FLAG_VM_COUNTER_OFFSET, &vcpu->kvm->arch.flags)) {
- timer_set_offset(vcpu_vtimer(vcpu), kvm_phys_timer_read());
+ timer_set_offset(vcpu_vtimer(vcpu), cntvoff);
timer_set_offset(vcpu_ptimer(vcpu), 0);
}
@@ -1524,6 +1554,13 @@ int kvm_timer_enable(struct kvm_vcpu *vcpu)
return -EINVAL;
}
+ /*
+ * We don't use mapped IRQs for Realms because the RMI doesn't allow
+ * us setting the LR.HW bit in the VGIC.
+ */
+ if (vcpu_is_rec(vcpu))
+ return 0;
+
get_timer_map(vcpu, &map);
ret = kvm_vgic_map_phys_irq(vcpu,
diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
index fd650a8789b9..21297643e529 100644
--- a/include/kvm/arm_arch_timer.h
+++ b/include/kvm/arm_arch_timer.h
@@ -112,6 +112,8 @@ int kvm_arm_timer_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr);
int kvm_arm_timer_get_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr);
int kvm_arm_timer_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr);
+void kvm_realm_timers_update(struct kvm_vcpu *vcpu);
+
u64 kvm_phys_timer_read(void);
void kvm_timer_vcpu_load(struct kvm_vcpu *vcpu);
--
2.43.0
* [PATCH v6 16/43] arm64: RME: Allow VMM to set RIPAS
2024-12-12 15:55 [PATCH v6 00/43] arm64: Support for Arm CCA in KVM Steven Price
` (14 preceding siblings ...)
2024-12-12 15:55 ` [PATCH v6 15/43] KVM: arm64: Support timers in realm RECs Steven Price
@ 2024-12-12 15:55 ` Steven Price
2025-01-29 23:25 ` Gavin Shan
2024-12-12 15:55 ` [PATCH v6 17/43] arm64: RME: Handle realm enter/exit Steven Price
` (26 subsequent siblings)
42 siblings, 1 reply; 90+ messages in thread
From: Steven Price @ 2024-12-12 15:55 UTC (permalink / raw)
To: kvm, kvmarm
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V
Each page within the protected region of the realm guest can be marked
as either RAM or EMPTY. Allow the VMM to control this before the guest
has started and provide the equivalent functions to change this (with
the guest's approval) at runtime.
When transitioning from RIPAS RAM (1) to RIPAS EMPTY (0) the memory is
unmapped from the guest and undelegated, allowing the memory to be
reused by the host. When transitioning to RIPAS RAM the actual
population of the leaf RTTs is done later, on stage 2 fault; however it
may be necessary to allocate additional RTTs to allow the RMM to track
the RIPAS for the requested range.
When freeing a block mapping it is necessary to temporarily unfold the
RTT, which requires delegating an extra page to the RMM; this page can
then be recovered once the contents of the block mapping have been
freed.
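For orientation (an illustrative sketch, not part of the patch): the
realm's IPA space is split in two by the top input-address bit - the
lower half is protected (where RIPAS applies) and the same address with
the top bit set is the shared (unprotected) alias - which is what
realm_is_addr_protected() below tests:

    /* Sketch: with realm->ia_bits == 40, for any ipa below BIT(39) */
    unsigned long shared_bit = BIT(40 - 1);

    realm_is_addr_protected(realm, ipa);              /* true: protected */
    realm_is_addr_protected(realm, ipa | shared_bit); /* false: shared */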
Signed-off-by: Steven Price <steven.price@arm.com>
---
Changes from v5:
* Adapt to rebasing.
* Introduce find_map_level()
* Rename some functions to be clearer.
* Drop the "spare page" functionality.
Changes from v2:
* {alloc,free}_delegated_page() moved from previous patch to this one.
* alloc_delegated_page() now takes a gfp_t flags parameter.
* Fix the reference counting of guestmem pages to avoid leaking memory.
* Several misc code improvements and extra comments.
---
arch/arm64/include/asm/kvm_rme.h | 17 ++
arch/arm64/kvm/mmu.c | 8 +-
arch/arm64/kvm/rme.c | 411 +++++++++++++++++++++++++++++++
3 files changed, 433 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/asm/kvm_rme.h
index be64b749fcac..4e7758f0e4b5 100644
--- a/arch/arm64/include/asm/kvm_rme.h
+++ b/arch/arm64/include/asm/kvm_rme.h
@@ -92,6 +92,15 @@ void kvm_realm_destroy_rtts(struct kvm *kvm, u32 ia_bits);
int kvm_create_rec(struct kvm_vcpu *vcpu);
void kvm_destroy_rec(struct kvm_vcpu *vcpu);
+void kvm_realm_unmap_range(struct kvm *kvm,
+ unsigned long ipa,
+ u64 size,
+ bool unmap_private);
+int realm_set_ipa_state(struct kvm_vcpu *vcpu,
+ unsigned long addr, unsigned long end,
+ unsigned long ripas,
+ unsigned long *top_ipa);
+
#define RMM_RTT_BLOCK_LEVEL 2
#define RMM_RTT_MAX_LEVEL 3
@@ -110,4 +119,12 @@ static inline unsigned long rme_rtt_level_mapsize(int level)
return (1UL << RMM_RTT_LEVEL_SHIFT(level));
}
+static inline bool realm_is_addr_protected(struct realm *realm,
+ unsigned long addr)
+{
+ unsigned int ia_bits = realm->ia_bits;
+
+ return !(addr & ~(BIT(ia_bits - 1) - 1));
+}
+
#endif
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 325b578c734d..b100d4b3aa29 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -310,6 +310,7 @@ static void invalidate_icache_guest_page(void *va, size_t size)
* @start: The intermediate physical base address of the range to unmap
* @size: The size of the area to unmap
* @may_block: Whether or not we are permitted to block
+ * @only_shared: If true then protected mappings should not be unmapped
*
* Clear a range of stage-2 mappings, lowering the various ref-counts. Must
* be called while holding mmu_lock (unless for freeing the stage2 pgd before
@@ -317,7 +318,7 @@ static void invalidate_icache_guest_page(void *va, size_t size)
* with things behind our backs.
*/
static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size,
- bool may_block)
+ bool may_block, bool only_shared)
{
struct kvm *kvm = kvm_s2_mmu_to_kvm(mmu);
phys_addr_t end = start + size;
@@ -331,7 +332,7 @@ static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64
void kvm_stage2_unmap_range(struct kvm_s2_mmu *mmu, phys_addr_t start,
u64 size, bool may_block)
{
- __unmap_stage2_range(mmu, start, size, may_block);
+ __unmap_stage2_range(mmu, start, size, may_block, false);
}
void kvm_stage2_flush_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end)
@@ -1932,7 +1933,8 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
__unmap_stage2_range(&kvm->arch.mmu, range->start << PAGE_SHIFT,
(range->end - range->start) << PAGE_SHIFT,
- range->may_block);
+ range->may_block,
+ range->only_shared);
kvm_nested_s2_unmap(kvm, range->may_block);
return false;
diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
index 72778d8ab52b..e8ad04405ecd 100644
--- a/arch/arm64/kvm/rme.c
+++ b/arch/arm64/kvm/rme.c
@@ -62,6 +62,51 @@ static int get_start_level(struct realm *realm)
return 4 - stage2_pgtable_levels(realm->ia_bits);
}
+static int find_map_level(struct realm *realm,
+ unsigned long start,
+ unsigned long end)
+{
+ int level = RMM_RTT_MAX_LEVEL;
+
+ while (level > get_start_level(realm)) {
+ unsigned long map_size = rme_rtt_level_mapsize(level - 1);
+
+ if (!IS_ALIGNED(start, map_size) ||
+ (start + map_size) > end)
+ break;
+
+ level--;
+ }
+
+ return level;
+}
+
+static phys_addr_t alloc_delegated_granule(struct kvm_mmu_memory_cache *mc,
+ gfp_t flags)
+{
+ phys_addr_t phys = PHYS_ADDR_MAX;
+ void *virt;
+
+ if (mc)
+ virt = kvm_mmu_memory_cache_alloc(mc);
+ else
+ virt = (void *)__get_free_page(flags);
+
+ if (!virt)
+ goto out;
+
+ phys = virt_to_phys(virt);
+
+ if (rmi_granule_delegate(phys)) {
+ free_page((unsigned long)virt);
+
+ phys = PHYS_ADDR_MAX;
+ }
+
+out:
+ return phys;
+}
+
static void free_delegated_granule(phys_addr_t phys)
{
if (WARN_ON(rmi_granule_undelegate(phys))) {
@@ -72,6 +117,132 @@ static void free_delegated_granule(phys_addr_t phys)
free_page((unsigned long)phys_to_virt(phys));
}
+static int realm_rtt_create(struct realm *realm,
+ unsigned long addr,
+ int level,
+ phys_addr_t phys)
+{
+ addr = ALIGN_DOWN(addr, rme_rtt_level_mapsize(level - 1));
+ return rmi_rtt_create(virt_to_phys(realm->rd), phys, addr, level);
+}
+
+static int realm_rtt_fold(struct realm *realm,
+ unsigned long addr,
+ int level,
+ phys_addr_t *rtt_granule)
+{
+ unsigned long out_rtt;
+ int ret;
+
+ ret = rmi_rtt_fold(virt_to_phys(realm->rd), addr, level, &out_rtt);
+
+ if (RMI_RETURN_STATUS(ret) == RMI_SUCCESS && rtt_granule)
+ *rtt_granule = out_rtt;
+
+ return ret;
+}
+
+static int realm_destroy_protected(struct realm *realm,
+ unsigned long ipa,
+ unsigned long *next_addr)
+{
+ unsigned long rd = virt_to_phys(realm->rd);
+ unsigned long addr;
+ phys_addr_t rtt;
+ int ret;
+
+loop:
+ ret = rmi_data_destroy(rd, ipa, &addr, next_addr);
+ if (RMI_RETURN_STATUS(ret) == RMI_ERROR_RTT) {
+ if (*next_addr > ipa)
+ return 0; /* UNASSIGNED */
+ rtt = alloc_delegated_granule(NULL, GFP_KERNEL);
+ if (WARN_ON(rtt == PHYS_ADDR_MAX))
+ return -1;
+ /*
+ * ASSIGNED - ipa is mapped as a block, so split. The index
+ * from the return code should be 2 otherwise it appears
+ * there's a huge page bigger than KVM currently supports
+ */
+ WARN_ON(RMI_RETURN_INDEX(ret) != 2);
+ ret = realm_rtt_create(realm, ipa, 3, rtt);
+ if (WARN_ON(ret)) {
+ free_delegated_granule(rtt);
+ return -1;
+ }
+ /* retry */
+ goto loop;
+ } else if (WARN_ON(ret)) {
+ return -1;
+ }
+ ret = rmi_granule_undelegate(addr);
+
+ /*
+ * If the undelegate fails then something has gone seriously
+ * wrong: take an extra reference to just leak the page
+ */
+ if (!WARN_ON(ret))
+ put_page(phys_to_page(addr));
+
+ return 0;
+}
+
+static void realm_unmap_shared_range(struct kvm *kvm,
+ int level,
+ unsigned long start,
+ unsigned long end)
+{
+ struct realm *realm = &kvm->arch.realm;
+ unsigned long rd = virt_to_phys(realm->rd);
+ ssize_t map_size = rme_rtt_level_mapsize(level);
+ unsigned long next_addr, addr;
+ unsigned long shared_bit = BIT(realm->ia_bits - 1);
+
+ if (WARN_ON(level > RMM_RTT_MAX_LEVEL))
+ return;
+
+ start |= shared_bit;
+ end |= shared_bit;
+
+ for (addr = start; addr < end; addr = next_addr) {
+ unsigned long align_addr = ALIGN(addr, map_size);
+ int ret;
+
+ next_addr = ALIGN(addr + 1, map_size);
+
+ if (align_addr != addr || next_addr > end) {
+ /* Need to recurse deeper */
+ if (addr < align_addr)
+ next_addr = align_addr;
+ realm_unmap_shared_range(kvm, level + 1, addr,
+ min(next_addr, end));
+ continue;
+ }
+
+ ret = rmi_rtt_unmap_unprotected(rd, addr, level, &next_addr);
+ switch (RMI_RETURN_STATUS(ret)) {
+ case RMI_SUCCESS:
+ break;
+ case RMI_ERROR_RTT:
+ if (next_addr == addr) {
+ /*
+ * There's a mapping here, but it's not a block
+ * mapping, so reset next_addr to the next block
+ * boundary and recurse to clear out the pages
+ * one level deeper.
+ */
+ next_addr = ALIGN(addr + 1, map_size);
+ realm_unmap_shared_range(kvm, level + 1, addr,
+ next_addr);
+ }
+ break;
+ default:
+ WARN_ON(1);
+ return;
+ }
+ }
+}
+
static int realm_create_rd(struct kvm *kvm)
{
struct realm *realm = &kvm->arch.realm;
@@ -161,6 +332,30 @@ static int realm_rtt_destroy(struct realm *realm, unsigned long addr,
return ret;
}
+static int realm_create_rtt_levels(struct realm *realm,
+ unsigned long ipa,
+ int level,
+ int max_level,
+ struct kvm_mmu_memory_cache *mc)
+{
+ if (WARN_ON(level == max_level))
+ return 0;
+
+ while (level++ < max_level) {
+ phys_addr_t rtt = alloc_delegated_granule(mc, GFP_KERNEL);
+
+ if (rtt == PHYS_ADDR_MAX)
+ return -ENOMEM;
+
+ if (realm_rtt_create(realm, ipa, level, rtt)) {
+ free_delegated_granule(rtt);
+ return -ENXIO;
+ }
+ }
+
+ return 0;
+}
+
static int realm_tear_down_rtt_level(struct realm *realm, int level,
unsigned long start, unsigned long end)
{
@@ -251,6 +446,61 @@ static int realm_tear_down_rtt_range(struct realm *realm,
start, end);
}
+/*
+ * Returns 0 on successful fold, a negative value on error, a positive value if
+ * we were not able to fold all tables at this level.
+ */
+static int realm_fold_rtt_level(struct realm *realm, int level,
+ unsigned long start, unsigned long end)
+{
+ int not_folded = 0;
+ ssize_t map_size;
+ unsigned long addr, next_addr;
+
+ if (WARN_ON(level > RMM_RTT_MAX_LEVEL))
+ return -EINVAL;
+
+ map_size = rme_rtt_level_mapsize(level - 1);
+
+ for (addr = start; addr < end; addr = next_addr) {
+ phys_addr_t rtt_granule;
+ int ret;
+ unsigned long align_addr = ALIGN(addr, map_size);
+
+ next_addr = ALIGN(addr + 1, map_size);
+
+ ret = realm_rtt_fold(realm, align_addr, level, &rtt_granule);
+
+ switch (RMI_RETURN_STATUS(ret)) {
+ case RMI_SUCCESS:
+ free_delegated_granule(rtt_granule);
+ break;
+ case RMI_ERROR_RTT:
+ if (level == RMM_RTT_MAX_LEVEL ||
+ RMI_RETURN_INDEX(ret) < level) {
+ not_folded++;
+ break;
+ }
+ /* Recurse a level deeper */
+ ret = realm_fold_rtt_level(realm,
+ level + 1,
+ addr,
+ next_addr);
+ if (ret < 0)
+ return ret;
+ else if (ret == 0)
+ /* Try again at this level */
+ next_addr = addr;
+ break;
+ default:
+ WARN_ON(1);
+ return -ENXIO;
+ }
+ }
+
+ return not_folded;
+}
+
void kvm_realm_destroy_rtts(struct kvm *kvm, u32 ia_bits)
{
struct realm *realm = &kvm->arch.realm;
@@ -258,6 +508,155 @@ void kvm_realm_destroy_rtts(struct kvm *kvm, u32 ia_bits)
WARN_ON(realm_tear_down_rtt_range(realm, 0, (1UL << ia_bits)));
}
+static void realm_unmap_private_range(struct kvm *kvm,
+ unsigned long start,
+ unsigned long end)
+{
+ struct realm *realm = &kvm->arch.realm;
+ unsigned long next_addr, addr;
+
+ for (addr = start; addr < end; addr = next_addr) {
+ int ret;
+
+ ret = realm_destroy_protected(realm, addr, &next_addr);
+
+ if (WARN_ON(ret))
+ break;
+ }
+
+ realm_fold_rtt_level(realm, get_start_level(realm) + 1,
+ start, end);
+}
+
+void kvm_realm_unmap_range(struct kvm *kvm, unsigned long start, u64 size,
+ bool unmap_private)
+{
+ unsigned long end = start + size;
+ struct realm *realm = &kvm->arch.realm;
+
+ end = min(BIT(realm->ia_bits - 1), end);
+
+ if (realm->state == REALM_STATE_NONE)
+ return;
+
+ realm_unmap_shared_range(kvm, find_map_level(realm, start, end),
+ start, end);
+ if (unmap_private)
+ realm_unmap_private_range(kvm, start, end);
+}
+
+int realm_set_ipa_state(struct kvm_vcpu *vcpu,
+ unsigned long start,
+ unsigned long end,
+ unsigned long ripas,
+ unsigned long *top_ipa)
+{
+ struct kvm *kvm = vcpu->kvm;
+ struct realm *realm = &kvm->arch.realm;
+ struct realm_rec *rec = &vcpu->arch.rec;
+ phys_addr_t rd_phys = virt_to_phys(realm->rd);
+ phys_addr_t rec_phys = virt_to_phys(rec->rec_page);
+ struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
+ unsigned long ipa = start;
+ int ret = 0;
+
+ while (ipa < end) {
+ unsigned long next;
+
+ ret = rmi_rtt_set_ripas(rd_phys, rec_phys, ipa, end, &next);
+
+ if (RMI_RETURN_STATUS(ret) == RMI_ERROR_RTT) {
+ int walk_level = RMI_RETURN_INDEX(ret);
+ int level = find_map_level(realm, ipa, end);
+
+ /*
+ * If the RMM walk ended early then more tables are
+ * needed to reach the required depth to set the RIPAS.
+ */
+ if (walk_level < level) {
+ ret = realm_create_rtt_levels(realm, ipa,
+ walk_level,
+ level,
+ memcache);
+ /* Retry with RTTs created */
+ if (!ret)
+ continue;
+ } else {
+ ret = -EINVAL;
+ }
+
+ break;
+ } else if (RMI_RETURN_STATUS(ret) != RMI_SUCCESS) {
+ WARN(1, "Unexpected error in %s: %#x\n", __func__,
+ ret);
+ ret = -EINVAL;
+ break;
+ }
+ ipa = next;
+ }
+
+ *top_ipa = ipa;
+
+ if (ripas == RMI_EMPTY && ipa != start)
+ realm_unmap_private_range(kvm, start, ipa);
+
+ return ret;
+}
+
+static int realm_init_ipa_state(struct realm *realm,
+ unsigned long ipa,
+ unsigned long end)
+{
+ phys_addr_t rd_phys = virt_to_phys(realm->rd);
+ int ret;
+
+ while (ipa < end) {
+ unsigned long next;
+
+ ret = rmi_rtt_init_ripas(rd_phys, ipa, end, &next);
+
+ if (RMI_RETURN_STATUS(ret) == RMI_ERROR_RTT) {
+ int err_level = RMI_RETURN_INDEX(ret);
+ int level = find_map_level(realm, ipa, end);
+
+ if (WARN_ON(err_level >= level))
+ return -ENXIO;
+
+ ret = realm_create_rtt_levels(realm, ipa,
+ err_level,
+ level, NULL);
+ if (ret)
+ return ret;
+ /* Retry with the RTT levels in place */
+ continue;
+ } else if (WARN_ON(ret)) {
+ return -ENXIO;
+ }
+
+ ipa = next;
+ }
+
+ return 0;
+}
+
+static int kvm_init_ipa_range_realm(struct kvm *kvm,
+ struct kvm_cap_arm_rme_init_ipa_args *args)
+{
+ gpa_t addr, end;
+ struct realm *realm = &kvm->arch.realm;
+
+ addr = args->init_ipa_base;
+ end = addr + args->init_ipa_size;
+
+ if (end < addr)
+ return -EINVAL;
+
+ if (kvm_realm_state(kvm) != REALM_STATE_NEW)
+ return -EINVAL;
+
+ return realm_init_ipa_state(realm, addr, end);
+}
+
/* Protects access to rme_vmid_bitmap */
static DEFINE_SPINLOCK(rme_vmid_lock);
static unsigned long *rme_vmid_bitmap;
@@ -383,6 +782,18 @@ int kvm_realm_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap)
case KVM_CAP_ARM_RME_CREATE_RD:
r = kvm_create_realm(kvm);
break;
+ case KVM_CAP_ARM_RME_INIT_IPA_REALM: {
+ struct kvm_cap_arm_rme_init_ipa_args args;
+ void __user *argp = u64_to_user_ptr(cap->args[1]);
+
+ if (copy_from_user(&args, argp, sizeof(args))) {
+ r = -EFAULT;
+ break;
+ }
+
+ r = kvm_init_ipa_range_realm(kvm, &args);
+ break;
+ }
default:
r = -EINVAL;
break;
--
2.43.0
* [PATCH v6 17/43] arm64: RME: Handle realm enter/exit
2024-12-12 15:55 [PATCH v6 00/43] arm64: Support for Arm CCA in KVM Steven Price
` (15 preceding siblings ...)
2024-12-12 15:55 ` [PATCH v6 16/43] arm64: RME: Allow VMM to set RIPAS Steven Price
@ 2024-12-12 15:55 ` Steven Price
2025-01-29 23:41 ` Gavin Shan
2024-12-12 15:55 ` [PATCH v6 18/43] KVM: arm64: Handle realm MMIO emulation Steven Price
` (25 subsequent siblings)
42 siblings, 1 reply; 90+ messages in thread
From: Steven Price @ 2024-12-12 15:55 UTC (permalink / raw)
To: kvm, kvmarm
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V
Entering a realm is done using an SMC call to the RMM. On exit the
exit codes need to be handled slightly differently to the normal KVM
path, so define our own functions for realm enter/exit and hook them
in if the guest is a realm guest.
Signed-off-by: Steven Price <steven.price@arm.com>
---
Changes since v5:
* For a RIPAS_CHANGE request from the guest perform the actual RIPAS
change on next entry rather than immediately on the exit. This allows
the VMM to 'reject' a RIPAS change by refusing to continue
scheduling.
Changes since v4:
* Rename handle_rme_exit() to handle_rec_exit()
* Move the loop copying registers into the REC enter structure from the
rec_exit_handlers callbacks to kvm_rec_enter(). This fixes a bug
where the handler exits to user space and user space wants to modify
the GPRs.
* Some code rearrangement in rec_exit_ripas_change().
Changes since v2:
* realm_set_ipa_state() now provides an output parameter for the
top_ipa that was changed. Use this to signal the VMM with the correct
range that has been transitioned.
* Adapt to previous patch changes.
---
arch/arm64/include/asm/kvm_rme.h | 3 +
arch/arm64/kvm/Makefile | 2 +-
arch/arm64/kvm/arm.c | 19 +++-
arch/arm64/kvm/rme-exit.c | 167 +++++++++++++++++++++++++++++++
arch/arm64/kvm/rme.c | 48 +++++++++
5 files changed, 233 insertions(+), 6 deletions(-)
create mode 100644 arch/arm64/kvm/rme-exit.c
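For reference when reading handle_rec_exit() below: each RMI call packs
a status code and an error-specific index (e.g. the RTT level at which
a walk stopped) into its return value. The authoritative macros live in
rmi_smc.h earlier in this series; a sketch of the assumed encoding, with
the status in the low byte and the index in the byte above it:

    /*
     * Sketch only - see rmi_smc.h in this series for the real
     * definitions of the RMI return value encoding.
     */
    #define RMI_RETURN_STATUS(ret)  ((ret) & 0xFF)
    #define RMI_RETURN_INDEX(ret)   (((ret) >> 8) & 0xFF)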
diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/asm/kvm_rme.h
index 4e7758f0e4b5..0410650cd545 100644
--- a/arch/arm64/include/asm/kvm_rme.h
+++ b/arch/arm64/include/asm/kvm_rme.h
@@ -92,6 +92,9 @@ void kvm_realm_destroy_rtts(struct kvm *kvm, u32 ia_bits);
int kvm_create_rec(struct kvm_vcpu *vcpu);
void kvm_destroy_rec(struct kvm_vcpu *vcpu);
+int kvm_rec_enter(struct kvm_vcpu *vcpu);
+int handle_rec_exit(struct kvm_vcpu *vcpu, int rec_run_status);
+
void kvm_realm_unmap_range(struct kvm *kvm,
unsigned long ipa,
u64 size,
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index ce8a10d3161d..0170e902fb63 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -24,7 +24,7 @@ kvm-y += arm.o mmu.o mmio.o psci.o hypercalls.o pvtime.o \
vgic/vgic-mmio.o vgic/vgic-mmio-v2.o \
vgic/vgic-mmio-v3.o vgic/vgic-kvm-device.o \
vgic/vgic-its.o vgic/vgic-debug.o \
- rme.o
+ rme.o rme-exit.o
kvm-$(CONFIG_HW_PERF_EVENTS) += pmu-emul.o pmu.o
kvm-$(CONFIG_ARM64_PTR_AUTH) += pauth.o
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index ab00fb7a8ece..6e83b2e3a6cf 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1264,7 +1264,10 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
trace_kvm_entry(*vcpu_pc(vcpu));
guest_timing_enter_irqoff();
- ret = kvm_arm_vcpu_enter_exit(vcpu);
+ if (vcpu_is_rec(vcpu))
+ ret = kvm_rec_enter(vcpu);
+ else
+ ret = kvm_arm_vcpu_enter_exit(vcpu);
vcpu->mode = OUTSIDE_GUEST_MODE;
vcpu->stat.exits++;
@@ -1318,10 +1321,13 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
local_irq_enable();
- trace_kvm_exit(ret, kvm_vcpu_trap_get_class(vcpu), *vcpu_pc(vcpu));
-
/* Exit types that need handling before we can be preempted */
- handle_exit_early(vcpu, ret);
+ if (!vcpu_is_rec(vcpu)) {
+ trace_kvm_exit(ret, kvm_vcpu_trap_get_class(vcpu),
+ *vcpu_pc(vcpu));
+
+ handle_exit_early(vcpu, ret);
+ }
preempt_enable();
@@ -1344,7 +1350,10 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
ret = ARM_EXCEPTION_IL;
}
- ret = handle_exit(vcpu, ret);
+ if (vcpu_is_rec(vcpu))
+ ret = handle_rec_exit(vcpu, ret);
+ else
+ ret = handle_exit(vcpu, ret);
}
/* Tell userspace about in-kernel device output levels */
diff --git a/arch/arm64/kvm/rme-exit.c b/arch/arm64/kvm/rme-exit.c
new file mode 100644
index 000000000000..e532aa80e10d
--- /dev/null
+++ b/arch/arm64/kvm/rme-exit.c
@@ -0,0 +1,167 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2023 ARM Ltd.
+ */
+
+#include <linux/kvm_host.h>
+#include <kvm/arm_hypercalls.h>
+#include <kvm/arm_psci.h>
+
+#include <asm/rmi_smc.h>
+#include <asm/kvm_emulate.h>
+#include <asm/kvm_rme.h>
+#include <asm/kvm_mmu.h>
+
+typedef int (*exit_handler_fn)(struct kvm_vcpu *vcpu);
+
+static int rec_exit_reason_notimpl(struct kvm_vcpu *vcpu)
+{
+ struct realm_rec *rec = &vcpu->arch.rec;
+
+ pr_err("[vcpu %d] Unhandled exit reason from realm (ESR: %#llx)\n",
+ vcpu->vcpu_id, rec->run->exit.esr);
+ return -ENXIO;
+}
+
+static int rec_exit_sync_dabt(struct kvm_vcpu *vcpu)
+{
+ return kvm_handle_guest_abort(vcpu);
+}
+
+static int rec_exit_sync_iabt(struct kvm_vcpu *vcpu)
+{
+ struct realm_rec *rec = &vcpu->arch.rec;
+
+ pr_err("[vcpu %d] Unhandled instruction abort (ESR: %#llx).\n",
+ vcpu->vcpu_id, rec->run->exit.esr);
+ return -ENXIO;
+}
+
+static int rec_exit_sys_reg(struct kvm_vcpu *vcpu)
+{
+ struct realm_rec *rec = &vcpu->arch.rec;
+ unsigned long esr = kvm_vcpu_get_esr(vcpu);
+ int rt = kvm_vcpu_sys_get_rt(vcpu);
+ bool is_write = !(esr & 1);
+ int ret;
+
+ if (is_write)
+ vcpu_set_reg(vcpu, rt, rec->run->exit.gprs[0]);
+
+ ret = kvm_handle_sys_reg(vcpu);
+
+ if (ret >= 0 && !is_write)
+ rec->run->enter.gprs[0] = vcpu_get_reg(vcpu, rt);
+
+ return ret;
+}
+
+static exit_handler_fn rec_exit_handlers[] = {
+ [0 ... ESR_ELx_EC_MAX] = rec_exit_reason_notimpl,
+ [ESR_ELx_EC_SYS64] = rec_exit_sys_reg,
+ [ESR_ELx_EC_DABT_LOW] = rec_exit_sync_dabt,
+ [ESR_ELx_EC_IABT_LOW] = rec_exit_sync_iabt
+};
+
+static int rec_exit_psci(struct kvm_vcpu *vcpu)
+{
+ struct realm_rec *rec = &vcpu->arch.rec;
+ int i;
+
+ for (i = 0; i < REC_RUN_GPRS; i++)
+ vcpu_set_reg(vcpu, i, rec->run->exit.gprs[i]);
+
+ return kvm_smccc_call_handler(vcpu);
+}
+
+static int rec_exit_ripas_change(struct kvm_vcpu *vcpu)
+{
+ struct kvm *kvm = vcpu->kvm;
+ struct realm *realm = &kvm->arch.realm;
+ struct realm_rec *rec = &vcpu->arch.rec;
+ unsigned long base = rec->run->exit.ripas_base;
+ unsigned long top = rec->run->exit.ripas_top;
+ unsigned long ripas = rec->run->exit.ripas_value;
+
+ if (!realm_is_addr_protected(realm, base) ||
+ !realm_is_addr_protected(realm, top - 1)) {
+ kvm_err("Invalid RIPAS_CHANGE for %#lx - %#lx, ripas: %#lx\n",
+ base, top, ripas);
+ return -EINVAL;
+ }
+
+ /* Exit to VMM, the actual RIPAS change is done on next entry */
+ kvm_prepare_memory_fault_exit(vcpu, base, top - base, false, false,
+ ripas == RMI_RAM);
+
+ return 0;
+}
+
+static void update_arch_timer_irq_lines(struct kvm_vcpu *vcpu)
+{
+ struct realm_rec *rec = &vcpu->arch.rec;
+
+ __vcpu_sys_reg(vcpu, CNTV_CTL_EL0) = rec->run->exit.cntv_ctl;
+ __vcpu_sys_reg(vcpu, CNTV_CVAL_EL0) = rec->run->exit.cntv_cval;
+ __vcpu_sys_reg(vcpu, CNTP_CTL_EL0) = rec->run->exit.cntp_ctl;
+ __vcpu_sys_reg(vcpu, CNTP_CVAL_EL0) = rec->run->exit.cntp_cval;
+
+ kvm_realm_timers_update(vcpu);
+}
+
+/*
+ * Return > 0 to return to guest, < 0 on error, 0 (and set exit_reason) on
+ * proper exit to userspace.
+ */
+int handle_rec_exit(struct kvm_vcpu *vcpu, int rec_run_ret)
+{
+ struct realm_rec *rec = &vcpu->arch.rec;
+ u8 esr_ec = ESR_ELx_EC(rec->run->exit.esr);
+ unsigned long status, index;
+
+ status = RMI_RETURN_STATUS(rec_run_ret);
+ index = RMI_RETURN_INDEX(rec_run_ret);
+
+ /*
+ * If a PSCI_SYSTEM_OFF request raced with a vcpu executing, we might
+ * see the following status code and index indicating an attempt to run
+ * a REC when the RD state is SYSTEM_OFF. In this case, we just need to
+ * return to user space which can deal with the system event or will try
+ * to run the KVM VCPU again, at which point we will no longer attempt
+ * to enter the Realm because we will have a sleep request pending on
+ * the VCPU as a result of KVM's PSCI handling.
+ */
+ if (status == RMI_ERROR_REALM && index == 1) {
+ vcpu->run->exit_reason = KVM_EXIT_UNKNOWN;
+ return 0;
+ }
+
+ if (rec_run_ret)
+ return -ENXIO;
+
+ vcpu->arch.fault.esr_el2 = rec->run->exit.esr;
+ vcpu->arch.fault.far_el2 = rec->run->exit.far;
+ vcpu->arch.fault.hpfar_el2 = rec->run->exit.hpfar;
+
+ update_arch_timer_irq_lines(vcpu);
+
+ /* Reset the emulation flags for the next run of the REC */
+ rec->run->enter.flags = 0;
+
+ switch (rec->run->exit.exit_reason) {
+ case RMI_EXIT_SYNC:
+ return rec_exit_handlers[esr_ec](vcpu);
+ case RMI_EXIT_IRQ:
+ case RMI_EXIT_FIQ:
+ return 1;
+ case RMI_EXIT_PSCI:
+ return rec_exit_psci(vcpu);
+ case RMI_EXIT_RIPAS_CHANGE:
+ return rec_exit_ripas_change(vcpu);
+ }
+
+ kvm_pr_unimpl("Unsupported exit reason: %u\n",
+ rec->run->exit.exit_reason);
+ vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+ return 0;
+}
diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
index e8ad04405ecd..22f0c74455af 100644
--- a/arch/arm64/kvm/rme.c
+++ b/arch/arm64/kvm/rme.c
@@ -842,6 +842,54 @@ void kvm_destroy_realm(struct kvm *kvm)
kvm_free_stage2_pgd(&kvm->arch.mmu);
}
+static void kvm_complete_ripas_change(struct kvm_vcpu *vcpu)
+{
+ struct kvm *kvm = vcpu->kvm;
+ struct realm_rec *rec = &vcpu->arch.rec;
+ unsigned long base = rec->run->exit.ripas_base;
+ unsigned long top = rec->run->exit.ripas_top;
+ unsigned long ripas = rec->run->exit.ripas_value;
+ unsigned long top_ipa;
+ int ret;
+
+ do {
+ kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_page_cache,
+ kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu));
+ write_lock(&kvm->mmu_lock);
+ ret = realm_set_ipa_state(vcpu, base, top, ripas, &top_ipa);
+ write_unlock(&kvm->mmu_lock);
+
+ if (WARN(ret && ret != -ENOMEM,
+ "Unable to satisfy RIPAS_CHANGE for %#lx - %#lx, ripas: %#lx\n",
+ base, top, ripas))
+ break;
+
+ base = top_ipa;
+ } while (top_ipa < top);
+}
+
+int kvm_rec_enter(struct kvm_vcpu *vcpu)
+{
+ struct realm_rec *rec = &vcpu->arch.rec;
+
+ switch (rec->run->exit.exit_reason) {
+ case RMI_EXIT_HOST_CALL:
+ case RMI_EXIT_PSCI:
+ for (int i = 0; i < REC_RUN_GPRS; i++)
+ rec->run->enter.gprs[i] = vcpu_get_reg(vcpu, i);
+ break;
+ case RMI_EXIT_RIPAS_CHANGE:
+ kvm_complete_ripas_change(vcpu);
+ break;
+ }
+
+ if (kvm_realm_state(vcpu->kvm) != REALM_STATE_ACTIVE)
+ return -EINVAL;
+
+ return rmi_rec_enter(virt_to_phys(rec->rec_page),
+ virt_to_phys(rec->run));
+}
+
static void free_rec_aux(struct page **aux_pages,
unsigned int num_aux)
{
--
2.43.0
* [PATCH v6 18/43] KVM: arm64: Handle realm MMIO emulation
2024-12-12 15:55 [PATCH v6 00/43] arm64: Support for Arm CCA in KVM Steven Price
` (16 preceding siblings ...)
2024-12-12 15:55 ` [PATCH v6 17/43] arm64: RME: Handle realm enter/exit Steven Price
@ 2024-12-12 15:55 ` Steven Price
2024-12-12 15:55 ` [PATCH v6 19/43] arm64: RME: Allow populating initial contents Steven Price
` (24 subsequent siblings)
42 siblings, 0 replies; 90+ messages in thread
From: Steven Price @ 2024-12-12 15:55 UTC (permalink / raw)
To: kvm, kvmarm
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V
MMIO emulation for a realm cannot be done directly with the VM's
registers as they are protected from the host. However, for emulatable
data aborts, the RMM uses GPRS[0] to provide the read/written value.
We can transfer this from/to the equivalent VCPU's register entry and
then depend on the generic MMIO handling code in KVM.
For an MMIO read, the value is placed in the shared RecEnter structure
during kvm_handle_mmio_return() rather than in the VCPU's register
entry.
Signed-off-by: Steven Price <steven.price@arm.com>
---
Changes since v5:
* Inject an SEA into the guest if an emulatable MMIO access triggers a data
abort.
* kvm_handle_mmio_return() - disable kvm_incr_pc() for a REC (as the PC
isn't under the host's control) and move the REC_ENTER_EMULATED_MMIO
flag setting to this location (as that tells the RMM to skip the
instruction).
---
arch/arm64/kvm/inject_fault.c | 4 +++-
arch/arm64/kvm/mmio.c | 16 ++++++++++++----
arch/arm64/kvm/rme-exit.c | 6 ++++++
3 files changed, 21 insertions(+), 5 deletions(-)
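From the VMM's point of view the flow is unchanged: an emulatable data
abort still surfaces as KVM_EXIT_MMIO, and for a read the VMM supplies
the value through kvm_run as usual; it is KVM that then routes the value
into the REC enter structure instead of the (inaccessible) VCPU register
file. A minimal sketch of the (unchanged) VMM side, where device_read()
is a hypothetical stand-in for the VMM's device model:

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    /* Hypothetical device-model hook, not part of this series */
    extern void device_read(__u64 addr, void *data, __u32 len);

    static void complete_mmio_read(int vcpu_fd, struct kvm_run *run)
    {
            if (run->exit_reason == KVM_EXIT_MMIO && !run->mmio.is_write) {
                    device_read(run->mmio.phys_addr, run->mmio.data,
                                run->mmio.len);
                    /*
                     * On the next KVM_RUN the value is copied into
                     * enter.gprs[0] and the RMM skips the instruction.
                     */
                    ioctl(vcpu_fd, KVM_RUN, 0);
            }
    }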
diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
index a640e839848e..611867b81ac2 100644
--- a/arch/arm64/kvm/inject_fault.c
+++ b/arch/arm64/kvm/inject_fault.c
@@ -165,7 +165,9 @@ static void inject_abt32(struct kvm_vcpu *vcpu, bool is_pabt, u32 addr)
*/
void kvm_inject_dabt(struct kvm_vcpu *vcpu, unsigned long addr)
{
- if (vcpu_el1_is_32bit(vcpu))
+ if (unlikely(vcpu_is_rec(vcpu)))
+ vcpu->arch.rec.run->enter.flags |= REC_ENTER_INJECT_SEA;
+ else if (vcpu_el1_is_32bit(vcpu))
inject_abt32(vcpu, false, addr);
else
inject_abt64(vcpu, false, addr);
diff --git a/arch/arm64/kvm/mmio.c b/arch/arm64/kvm/mmio.c
index ab365e839874..e0889f17c3f9 100644
--- a/arch/arm64/kvm/mmio.c
+++ b/arch/arm64/kvm/mmio.c
@@ -6,6 +6,7 @@
#include <linux/kvm_host.h>
#include <asm/kvm_emulate.h>
+#include <asm/rmi_smc.h>
#include <trace/events/kvm.h>
#include "trace.h"
@@ -136,14 +137,21 @@ int kvm_handle_mmio_return(struct kvm_vcpu *vcpu)
trace_kvm_mmio(KVM_TRACE_MMIO_READ, len, run->mmio.phys_addr,
&data);
data = vcpu_data_host_to_guest(vcpu, data, len);
- vcpu_set_reg(vcpu, kvm_vcpu_dabt_get_rd(vcpu), data);
+
+ if (vcpu_is_rec(vcpu))
+ vcpu->arch.rec.run->enter.gprs[0] = data;
+ else
+ vcpu_set_reg(vcpu, kvm_vcpu_dabt_get_rd(vcpu), data);
}
/*
* The MMIO instruction is emulated and should not be re-executed
* in the guest.
*/
- kvm_incr_pc(vcpu);
+ if (vcpu_is_rec(vcpu))
+ vcpu->arch.rec.run->enter.flags |= REC_ENTER_EMULATED_MMIO;
+ else
+ kvm_incr_pc(vcpu);
return 1;
}
@@ -162,14 +170,14 @@ int io_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
* No valid syndrome? Ask userspace for help if it has
* volunteered to do so, and bail out otherwise.
*
- * In the protected VM case, there isn't much userspace can do
+ * In the protected/realm VM case, there isn't much userspace can do
* though, so directly deliver an exception to the guest.
*/
if (!kvm_vcpu_dabt_isvalid(vcpu)) {
trace_kvm_mmio_nisv(*vcpu_pc(vcpu), kvm_vcpu_get_esr(vcpu),
kvm_vcpu_get_hfar(vcpu), fault_ipa);
- if (vcpu_is_protected(vcpu)) {
+ if (vcpu_is_protected(vcpu) || vcpu_is_rec(vcpu)) {
kvm_inject_dabt(vcpu, kvm_vcpu_get_hfar(vcpu));
return 1;
}
diff --git a/arch/arm64/kvm/rme-exit.c b/arch/arm64/kvm/rme-exit.c
index e532aa80e10d..8f0f9ab57f28 100644
--- a/arch/arm64/kvm/rme-exit.c
+++ b/arch/arm64/kvm/rme-exit.c
@@ -25,6 +25,12 @@ static int rec_exit_reason_notimpl(struct kvm_vcpu *vcpu)
static int rec_exit_sync_dabt(struct kvm_vcpu *vcpu)
{
+ struct realm_rec *rec = &vcpu->arch.rec;
+
+ if (kvm_vcpu_dabt_iswrite(vcpu) && kvm_vcpu_dabt_isvalid(vcpu))
+ vcpu_set_reg(vcpu, kvm_vcpu_dabt_get_rd(vcpu),
+ rec->run->exit.gprs[0]);
+
return kvm_handle_guest_abort(vcpu);
}
--
2.43.0
* [PATCH v6 19/43] arm64: RME: Allow populating initial contents
2024-12-12 15:55 [PATCH v6 00/43] arm64: Support for Arm CCA in KVM Steven Price
` (17 preceding siblings ...)
2024-12-12 15:55 ` [PATCH v6 18/43] KVM: arm64: Handle realm MMIO emulation Steven Price
@ 2024-12-12 15:55 ` Steven Price
2025-01-30 4:38 ` Gavin Shan
2024-12-12 15:55 ` [PATCH v6 20/43] arm64: RME: Runtime faulting of memory Steven Price
` (23 subsequent siblings)
42 siblings, 1 reply; 90+ messages in thread
From: Steven Price @ 2024-12-12 15:55 UTC (permalink / raw)
To: kvm, kvmarm
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V
The VMM needs to populate the realm with some data before starting (e.g.
a kernel and initrd). This is measured by the RMM and used as part of
the attestation later on.
For now only 4k mappings are supported, future work may add support for
larger mappings.
Co-developed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Steven Price <steven.price@arm.com>
---
Changes since v5:
* Refactor to use PFNs rather than tracking struct page in
realm_create_protected_data_page().
* Pull changes from a later patch (in the v5 series) for accessing
pages from a guest memfd.
* Do the populate in chunks to avoid holding locks for too long and
triggering RCU stall warnings.
---
arch/arm64/kvm/rme.c | 243 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 243 insertions(+)
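From the VMM side this is driven through KVM_ENABLE_CAP on the VM fd, in
the same style as the other RME actions. A sketch of the expected call,
assuming the sub-command convention from the user ABI patch earlier in
this series (args[0] selects the action, args[1] points at the argument
struct); kernel_ipa and kernel_size are placeholders for the VMM's own
layout:

    struct kvm_cap_arm_rme_populate_realm_args args = {
            .populate_ipa_base = kernel_ipa,   /* must be page aligned */
            .populate_ipa_size = kernel_size,  /* must be page aligned */
            .flags = KVM_ARM_RME_POPULATE_FLAGS_MEASURE,
    };
    struct kvm_enable_cap cap = {
            .cap = KVM_CAP_ARM_RME,
            .args = { KVM_CAP_ARM_RME_POPULATE_REALM,
                      (__u64)(unsigned long)&args },
    };

    /* Only valid while the realm is still in the NEW state */
    ioctl(vm_fd, KVM_ENABLE_CAP, &cap);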
diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
index 22f0c74455af..d4561e368cd5 100644
--- a/arch/arm64/kvm/rme.c
+++ b/arch/arm64/kvm/rme.c
@@ -4,6 +4,7 @@
*/
#include <linux/kvm_host.h>
+#include <linux/hugetlb.h>
#include <asm/kvm_emulate.h>
#include <asm/kvm_mmu.h>
@@ -545,6 +546,236 @@ void kvm_realm_unmap_range(struct kvm *kvm, unsigned long start, u64 size,
realm_unmap_private_range(kvm, start, end);
}
+static int realm_create_protected_data_page(struct realm *realm,
+ unsigned long ipa,
+ kvm_pfn_t dst_pfn,
+ kvm_pfn_t src_pfn,
+ unsigned long flags)
+{
+ phys_addr_t dst_phys, src_phys;
+ int ret;
+
+ dst_phys = __pfn_to_phys(dst_pfn);
+ src_phys = __pfn_to_phys(src_pfn);
+
+ if (rmi_granule_delegate(dst_phys))
+ return -ENXIO;
+
+ ret = rmi_data_create(virt_to_phys(realm->rd), dst_phys, ipa, src_phys,
+ flags);
+
+ if (RMI_RETURN_STATUS(ret) == RMI_ERROR_RTT) {
+ /* Create missing RTTs and retry */
+ int level = RMI_RETURN_INDEX(ret);
+
+ ret = realm_create_rtt_levels(realm, ipa, level,
+ RMM_RTT_MAX_LEVEL, NULL);
+ if (ret)
+ goto err;
+
+ ret = rmi_data_create(virt_to_phys(realm->rd), dst_phys, ipa,
+ src_phys, flags);
+ }
+
+ if (!ret)
+ return 0;
+
+err:
+ if (WARN_ON(rmi_granule_undelegate(dst_phys))) {
+ /* Page can't be returned to NS world so is lost */
+ get_page(pfn_to_page(dst_pfn));
+ }
+ return -ENXIO;
+}
+
+static int fold_rtt(struct realm *realm, unsigned long addr, int level)
+{
+ phys_addr_t rtt_addr;
+ int ret;
+
+ ret = realm_rtt_fold(realm, addr, level + 1, &rtt_addr);
+ if (ret)
+ return ret;
+
+ free_delegated_granule(rtt_addr);
+
+ return 0;
+}
+
+static int populate_par_region(struct kvm *kvm,
+ phys_addr_t ipa_base,
+ phys_addr_t ipa_end,
+ u32 flags)
+{
+ struct realm *realm = &kvm->arch.realm;
+ struct kvm_memory_slot *memslot;
+ gfn_t base_gfn, end_gfn;
+ int idx;
+ phys_addr_t ipa;
+ int ret = 0;
+ unsigned long data_flags = 0;
+
+ base_gfn = gpa_to_gfn(ipa_base);
+ end_gfn = gpa_to_gfn(ipa_end);
+
+ if (flags & KVM_ARM_RME_POPULATE_FLAGS_MEASURE)
+ data_flags = RMI_MEASURE_CONTENT;
+
+ idx = srcu_read_lock(&kvm->srcu);
+ memslot = gfn_to_memslot(kvm, base_gfn);
+ if (!memslot) {
+ ret = -EFAULT;
+ goto out;
+ }
+
+ /* We require the region to be contained within a single memslot */
+ if (memslot->base_gfn + memslot->npages < end_gfn) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ if (!kvm_slot_can_be_private(memslot)) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ write_lock(&kvm->mmu_lock);
+
+ ipa = ipa_base;
+ while (ipa < ipa_end) {
+ struct vm_area_struct *vma;
+ unsigned long map_size;
+ unsigned int vma_shift;
+ unsigned long offset;
+ unsigned long hva;
+ struct page *page;
+ bool writeable;
+ kvm_pfn_t pfn;
+ int level, i;
+
+ hva = gfn_to_hva_memslot(memslot, gpa_to_gfn(ipa));
+ vma = vma_lookup(current->mm, hva);
+ if (!vma) {
+ ret = -EFAULT;
+ break;
+ }
+
+ /* FIXME: Currently we only support 4k sized mappings */
+ vma_shift = PAGE_SHIFT;
+
+ map_size = 1 << vma_shift;
+
+ ipa = ALIGN_DOWN(ipa, map_size);
+
+ switch (map_size) {
+ case RMM_L2_BLOCK_SIZE:
+ level = 2;
+ break;
+ case PAGE_SIZE:
+ level = 3;
+ break;
+ default:
+ WARN_ONCE(1, "Unsupported vma_shift %d", vma_shift);
+ ret = -EFAULT;
+ break;
+ }
+
+ pfn = __kvm_faultin_pfn(memslot, gpa_to_gfn(ipa), FOLL_WRITE,
+ &writeable, &page);
+
+ if (is_error_pfn(pfn)) {
+ ret = -EFAULT;
+ break;
+ }
+
+ if (level < RMM_RTT_MAX_LEVEL) {
+ /*
+ * A temporary RTT is needed during the map, precreate
+ * it, however if there is an error (e.g. missing
+ * parent tables) this will be handled in the
+ * realm_create_protected_data_page() call.
+ */
+ realm_create_rtt_levels(realm, ipa, level,
+ RMM_RTT_MAX_LEVEL, NULL);
+ }
+
+ for (offset = 0, i = 0; offset < map_size && !ret;
+ offset += PAGE_SIZE, i++) {
+ phys_addr_t page_ipa = ipa + offset;
+ kvm_pfn_t priv_pfn;
+ struct page *gmem_page;
+ int order;
+
+ ret = kvm_gmem_get_pfn(kvm, memslot,
+ page_ipa >> PAGE_SHIFT,
+ &priv_pfn, &gmem_page, &order);
+ if (ret)
+ break;
+
+ ret = realm_create_protected_data_page(realm, page_ipa,
+ priv_pfn,
+ pfn + i,
+ data_flags);
+ }
+
+ kvm_release_faultin_page(kvm, page, false, false);
+
+ if (ret)
+ break;
+
+ if (level == 2)
+ fold_rtt(realm, ipa, level);
+
+ ipa += map_size;
+ }
+
+ write_unlock(&kvm->mmu_lock);
+
+out:
+ srcu_read_unlock(&kvm->srcu, idx);
+ return ret;
+}
+
+static int kvm_populate_realm(struct kvm *kvm,
+ struct kvm_cap_arm_rme_populate_realm_args *args)
+{
+ phys_addr_t ipa_base, ipa_end;
+
+ if (kvm_realm_state(kvm) != REALM_STATE_NEW)
+ return -EINVAL;
+
+ if (!IS_ALIGNED(args->populate_ipa_base, PAGE_SIZE) ||
+ !IS_ALIGNED(args->populate_ipa_size, PAGE_SIZE))
+ return -EINVAL;
+
+ if (args->flags & ~RMI_MEASURE_CONTENT)
+ return -EINVAL;
+
+ ipa_base = args->populate_ipa_base;
+ ipa_end = ipa_base + args->populate_ipa_size;
+
+ if (ipa_end < ipa_base)
+ return -EINVAL;
+
+ /*
+ * Perform the populate in parts to ensure locks are not held for too
+ * long
+ */
+ while (ipa_base < ipa_end) {
+ phys_addr_t end = min(ipa_end, ipa_base + SZ_2M);
+
+ int ret = populate_par_region(kvm, ipa_base, end,
+ args->flags);
+
+ if (ret)
+ return ret;
+
+ ipa_base = end;
+ }
+
+ return 0;
+}
+
int realm_set_ipa_state(struct kvm_vcpu *vcpu,
unsigned long start,
unsigned long end,
@@ -794,6 +1025,18 @@ int kvm_realm_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap)
r = kvm_init_ipa_range_realm(kvm, &args);
break;
}
+ case KVM_CAP_ARM_RME_POPULATE_REALM: {
+ struct kvm_cap_arm_rme_populate_realm_args args;
+ void __user *argp = u64_to_user_ptr(cap->args[1]);
+
+ if (copy_from_user(&args, argp, sizeof(args))) {
+ r = -EFAULT;
+ break;
+ }
+
+ r = kvm_populate_realm(kvm, &args);
+ break;
+ }
default:
r = -EINVAL;
break;
--
2.43.0
* [PATCH v6 20/43] arm64: RME: Runtime faulting of memory
2024-12-12 15:55 [PATCH v6 00/43] arm64: Support for Arm CCA in KVM Steven Price
` (18 preceding siblings ...)
2024-12-12 15:55 ` [PATCH v6 19/43] arm64: RME: Allow populating initial contents Steven Price
@ 2024-12-12 15:55 ` Steven Price
2025-01-30 5:22 ` Gavin Shan
2024-12-12 15:55 ` [PATCH v6 21/43] KVM: arm64: Handle realm VCPU load Steven Price
` (22 subsequent siblings)
42 siblings, 1 reply; 90+ messages in thread
From: Steven Price @ 2024-12-12 15:55 UTC (permalink / raw)
To: kvm, kvmarm
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V
At runtime if the realm guest accesses memory which hasn't yet been
mapped then KVM needs to either populate the region or fault the guest.
For memory in the lower (protected) region of IPA a fresh page is
provided to the RMM which will zero the contents. For memory in the
upper (shared) region of IPA, the memory from the memslot is mapped
into the realm VM non secure.
Signed-off-by: Steven Price <steven.price@arm.com>
---
Changes since v5:
* Reduce use of struct page in preparation for supporting the RMM
having a different page size to the host.
* Handle a race when delegating a page where another CPU has faulted on
the same page (and already delegated the physical page) but not yet
mapped it. In this case simply return to the guest to either use the
mapping from the other CPU (or refault if the race is lost).
* The changes to populate_par_region() are moved into the previous
patch where they belong.
Changes since v4:
* Code cleanup following review feedback.
* Drop the PTE_SHARED bit when creating unprotected page table entries.
This is now set by the RMM and the host has no control of it and the
spec requires the bit to be set to zero.
Changes since v2:
* Avoid leaking memory if failing to map it in the realm.
* Correctly mask RTT based on LPA2 flag (see rtt_get_phys()).
* Adapt to changes in previous patches.
---
arch/arm64/include/asm/kvm_emulate.h | 10 ++
arch/arm64/include/asm/kvm_rme.h | 10 ++
arch/arm64/kvm/mmu.c | 124 +++++++++++++++++++--
arch/arm64/kvm/rme.c | 156 +++++++++++++++++++++++++++
4 files changed, 293 insertions(+), 7 deletions(-)
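The protected/shared split is driven entirely by the top IPA bit: with
ia_bits of guest address space, the shared region is an alias of the
protected region with bit (ia_bits - 1) set. A minimal sketch of the
arithmetic, mirroring kvm_gpa_from_fault() in the diff below:

    /*
     * Sketch: bit (ia_bits - 1) selects the shared (unprotected) alias;
     * clearing it recovers the canonical GPA used for memslot lookup.
     */
    static inline unsigned long realm_shared_bit(unsigned int ia_bits)
    {
            return 1UL << (ia_bits - 1);
    }

    static inline bool ipa_is_shared(unsigned long ipa, unsigned int ia_bits)
    {
            return ipa & realm_shared_bit(ia_bits);
    }

    static inline unsigned long ipa_to_gpa(unsigned long ipa,
                                           unsigned int ia_bits)
    {
            return ipa & ~realm_shared_bit(ia_bits);
    }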
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index ec2b6d9c9c07..b13e367b6972 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -720,6 +720,16 @@ static inline bool kvm_realm_is_created(struct kvm *kvm)
return kvm_is_realm(kvm) && kvm_realm_state(kvm) != REALM_STATE_NONE;
}
+static inline gpa_t kvm_gpa_from_fault(struct kvm *kvm, phys_addr_t fault_ipa)
+{
+ if (kvm_is_realm(kvm)) {
+ struct realm *realm = &kvm->arch.realm;
+
+ return fault_ipa & ~BIT(realm->ia_bits - 1);
+ }
+ return fault_ipa;
+}
+
static inline bool vcpu_is_rec(struct kvm_vcpu *vcpu)
{
if (static_branch_unlikely(&kvm_rme_is_available))
diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/asm/kvm_rme.h
index 0410650cd545..158f77e24a26 100644
--- a/arch/arm64/include/asm/kvm_rme.h
+++ b/arch/arm64/include/asm/kvm_rme.h
@@ -99,6 +99,16 @@ void kvm_realm_unmap_range(struct kvm *kvm,
unsigned long ipa,
u64 size,
bool unmap_private);
+int realm_map_protected(struct realm *realm,
+ unsigned long base_ipa,
+ kvm_pfn_t pfn,
+ unsigned long size,
+ struct kvm_mmu_memory_cache *memcache);
+int realm_map_non_secure(struct realm *realm,
+ unsigned long ipa,
+ kvm_pfn_t pfn,
+ unsigned long size,
+ struct kvm_mmu_memory_cache *memcache);
int realm_set_ipa_state(struct kvm_vcpu *vcpu,
unsigned long addr, unsigned long end,
unsigned long ripas,
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index b100d4b3aa29..e88714903ce5 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -325,8 +325,13 @@ static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64
lockdep_assert_held_write(&kvm->mmu_lock);
WARN_ON(size & ~PAGE_MASK);
- WARN_ON(stage2_apply_range(mmu, start, end, kvm_pgtable_stage2_unmap,
- may_block));
+
+ if (kvm_is_realm(kvm))
+ kvm_realm_unmap_range(kvm, start, size, !only_shared);
+ else
+ WARN_ON(stage2_apply_range(mmu, start, end,
+ kvm_pgtable_stage2_unmap,
+ may_block));
}
void kvm_stage2_unmap_range(struct kvm_s2_mmu *mmu, phys_addr_t start,
@@ -346,7 +351,10 @@ static void stage2_flush_memslot(struct kvm *kvm,
phys_addr_t addr = memslot->base_gfn << PAGE_SHIFT;
phys_addr_t end = addr + PAGE_SIZE * memslot->npages;
- kvm_stage2_flush_range(&kvm->arch.mmu, addr, end);
+ if (kvm_is_realm(kvm))
+ kvm_realm_unmap_range(kvm, addr, end - addr, false);
+ else
+ kvm_stage2_flush_range(&kvm->arch.mmu, addr, end);
}
/**
@@ -1037,6 +1045,10 @@ void stage2_unmap_vm(struct kvm *kvm)
struct kvm_memory_slot *memslot;
int idx, bkt;
+ /* For realms this is handled by the RMM so nothing to do here */
+ if (kvm_is_realm(kvm))
+ return;
+
idx = srcu_read_lock(&kvm->srcu);
mmap_read_lock(current->mm);
write_lock(&kvm->mmu_lock);
@@ -1062,6 +1074,7 @@ void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu)
if (kvm_is_realm(kvm) &&
(kvm_realm_state(kvm) != REALM_STATE_DEAD &&
kvm_realm_state(kvm) != REALM_STATE_NONE)) {
+ kvm_stage2_unmap_range(mmu, 0, (~0ULL) & PAGE_MASK, false);
write_unlock(&kvm->mmu_lock);
kvm_realm_destroy_rtts(kvm, pgt->ia_bits);
@@ -1446,6 +1459,76 @@ static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
return vma->vm_flags & VM_MTE_ALLOWED;
}
+static int realm_map_ipa(struct kvm *kvm, phys_addr_t ipa,
+ kvm_pfn_t pfn, unsigned long map_size,
+ enum kvm_pgtable_prot prot,
+ struct kvm_mmu_memory_cache *memcache)
+{
+ struct realm *realm = &kvm->arch.realm;
+
+ if (WARN_ON(!(prot & KVM_PGTABLE_PROT_W)))
+ return -EFAULT;
+
+ if (!realm_is_addr_protected(realm, ipa))
+ return realm_map_non_secure(realm, ipa, pfn, map_size,
+ memcache);
+
+ return realm_map_protected(realm, ipa, pfn, map_size, memcache);
+}
+
+static int private_memslot_fault(struct kvm_vcpu *vcpu,
+ phys_addr_t fault_ipa,
+ struct kvm_memory_slot *memslot)
+{
+ struct kvm *kvm = vcpu->kvm;
+ gpa_t gpa = kvm_gpa_from_fault(kvm, fault_ipa);
+ gfn_t gfn = gpa >> PAGE_SHIFT;
+ bool priv_exists = kvm_mem_is_private(kvm, gfn);
+ struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
+ struct page *page;
+ kvm_pfn_t pfn;
+ int ret;
+ /*
+ * For Realms, the shared address is an alias of the private GPA with
+ * the top bit set. Thus if the fault address matches the GPA then it
+ * is the private alias.
+ */
+ bool is_priv_gfn = (gpa == fault_ipa);
+
+ if (priv_exists != is_priv_gfn) {
+ kvm_prepare_memory_fault_exit(vcpu,
+ gpa,
+ PAGE_SIZE,
+ kvm_is_write_fault(vcpu),
+ false, is_priv_gfn);
+
+ return -EFAULT;
+ }
+
+ if (!is_priv_gfn) {
+ /* Not a private mapping, handling normally */
+ return -EINVAL;
+ }
+
+ ret = kvm_mmu_topup_memory_cache(memcache,
+ kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu));
+ if (ret)
+ return ret;
+
+ ret = kvm_gmem_get_pfn(kvm, memslot, gfn, &pfn, &page, NULL);
+ if (ret)
+ return ret;
+
+ /* FIXME: Should be able to use bigger than PAGE_SIZE mappings */
+ ret = realm_map_ipa(kvm, fault_ipa, pfn, PAGE_SIZE, KVM_PGTABLE_PROT_W,
+ memcache);
+ if (!ret)
+ return 1; /* Handled */
+
+ put_page(page);
+ return ret;
+}
+
static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
struct kvm_s2_trans *nested,
struct kvm_memory_slot *memslot, unsigned long hva,
@@ -1472,6 +1555,14 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
if (fault_is_perm)
fault_granule = kvm_vcpu_trap_get_perm_fault_granule(vcpu);
write_fault = kvm_is_write_fault(vcpu);
+
+ /*
+ * Realms cannot map protected pages read-only
+ * FIXME: It should be possible to map unprotected pages read-only
+ */
+ if (vcpu_is_rec(vcpu))
+ write_fault = true;
+
exec_fault = kvm_vcpu_trap_is_exec_fault(vcpu);
VM_BUG_ON(write_fault && exec_fault);
@@ -1579,7 +1670,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
ipa &= ~(vma_pagesize - 1);
}
- gfn = ipa >> PAGE_SHIFT;
+ gfn = kvm_gpa_from_fault(kvm, ipa) >> PAGE_SHIFT;
mte_allowed = kvm_vma_mte_allowed(vma);
vfio_allow_any_uc = vma->vm_flags & VM_ALLOW_ANY_UNCACHED;
@@ -1660,7 +1751,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
* If we are not forced to use page mapping, check if we are
* backed by a THP and thus use block mapping if possible.
*/
- if (vma_pagesize == PAGE_SIZE && !(force_pte || device)) {
+ /* FIXME: We shouldn't need to disable this for realms */
+ if (vma_pagesize == PAGE_SIZE && !(force_pte || device || kvm_is_realm(kvm))) {
if (fault_is_perm && fault_granule > PAGE_SIZE)
vma_pagesize = fault_granule;
else
@@ -1712,6 +1804,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
*/
prot &= ~KVM_NV_GUEST_MAP_SZ;
ret = kvm_pgtable_stage2_relax_perms(pgt, fault_ipa, prot);
+ } else if (kvm_is_realm(kvm)) {
+ ret = realm_map_ipa(kvm, fault_ipa, pfn, vma_pagesize,
+ prot, memcache);
} else {
ret = kvm_pgtable_stage2_map(pgt, fault_ipa, vma_pagesize,
__pfn_to_phys(pfn), prot,
@@ -1854,8 +1949,15 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
nested = &nested_trans;
}
- gfn = ipa >> PAGE_SHIFT;
+ gfn = kvm_gpa_from_fault(vcpu->kvm, ipa) >> PAGE_SHIFT;
memslot = gfn_to_memslot(vcpu->kvm, gfn);
+
+ if (kvm_slot_can_be_private(memslot)) {
+ ret = private_memslot_fault(vcpu, ipa, memslot);
+ if (ret != -EINVAL)
+ goto out;
+ }
+
hva = gfn_to_hva_memslot_prot(memslot, gfn, &writable);
write_fault = kvm_is_write_fault(vcpu);
if (kvm_is_error_hva(hva) || (write_fault && !writable)) {
@@ -1899,7 +2001,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
* of the page size.
*/
ipa |= kvm_vcpu_get_hfar(vcpu) & GENMASK(11, 0);
- ret = io_mem_abort(vcpu, ipa);
+ ret = io_mem_abort(vcpu, kvm_gpa_from_fault(vcpu->kvm, ipa));
goto out_unlock;
}
@@ -1947,6 +2049,10 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
if (!kvm->arch.mmu.pgt)
return false;
+ /* We don't support aging for Realms */
+ if (kvm_is_realm(kvm))
+ return true;
+
return kvm_pgtable_stage2_test_clear_young(kvm->arch.mmu.pgt,
range->start << PAGE_SHIFT,
size, true);
@@ -1963,6 +2069,10 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
if (!kvm->arch.mmu.pgt)
return false;
+ /* We don't support aging for Realms */
+ if (kvm_is_realm(kvm))
+ return true;
+
return kvm_pgtable_stage2_test_clear_young(kvm->arch.mmu.pgt,
range->start << PAGE_SHIFT,
size, false);
diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
index d4561e368cd5..146ef598a581 100644
--- a/arch/arm64/kvm/rme.c
+++ b/arch/arm64/kvm/rme.c
@@ -602,6 +602,162 @@ static int fold_rtt(struct realm *realm, unsigned long addr, int level)
return 0;
}
+int realm_map_protected(struct realm *realm,
+ unsigned long ipa,
+ kvm_pfn_t pfn,
+ unsigned long map_size,
+ struct kvm_mmu_memory_cache *memcache)
+{
+ phys_addr_t phys = __pfn_to_phys(pfn);
+ phys_addr_t rd = virt_to_phys(realm->rd);
+ unsigned long base_ipa = ipa;
+ unsigned long size;
+ int map_level;
+ int ret = 0;
+
+ if (WARN_ON(!IS_ALIGNED(ipa, map_size)))
+ return -EINVAL;
+
+ switch (map_size) {
+ case PAGE_SIZE:
+ map_level = 3;
+ break;
+ case RMM_L2_BLOCK_SIZE:
+ map_level = 2;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ if (map_level < RMM_RTT_MAX_LEVEL) {
+ /*
+ * A temporary RTT is needed during the map, precreate it,
+ * however if there is an error (e.g. missing parent tables)
+ * this will be handled below.
+ */
+ realm_create_rtt_levels(realm, ipa, map_level,
+ RMM_RTT_MAX_LEVEL, memcache);
+ }
+
+ for (size = 0; size < map_size; size += PAGE_SIZE) {
+ if (rmi_granule_delegate(phys)) {
+ /*
+ * It's likely we raced with another VCPU on the same
+ * fault. Assume the other VCPU has handled the fault
+ * and return to the guest.
+ */
+ return 0;
+ }
+
+ ret = rmi_data_create_unknown(rd, phys, ipa);
+
+ if (RMI_RETURN_STATUS(ret) == RMI_ERROR_RTT) {
+ /* Create missing RTTs and retry */
+ int level = RMI_RETURN_INDEX(ret);
+
+ ret = realm_create_rtt_levels(realm, ipa, level,
+ RMM_RTT_MAX_LEVEL,
+ memcache);
+ WARN_ON(ret);
+ if (ret)
+ goto err_undelegate;
+
+ ret = rmi_data_create_unknown(rd, phys, ipa);
+ }
+ WARN_ON(ret);
+
+ if (ret)
+ goto err_undelegate;
+
+ phys += PAGE_SIZE;
+ ipa += PAGE_SIZE;
+ }
+
+ if (map_size == RMM_L2_BLOCK_SIZE)
+ ret = fold_rtt(realm, base_ipa, map_level);
+ if (WARN_ON(ret))
+ goto err;
+
+ return 0;
+
+err_undelegate:
+ if (WARN_ON(rmi_granule_undelegate(phys))) {
+ /* Page can't be returned to NS world so is lost */
+ get_page(phys_to_page(phys));
+ }
+err:
+ while (size > 0) {
+ unsigned long data, top;
+
+ phys -= PAGE_SIZE;
+ size -= PAGE_SIZE;
+ ipa -= PAGE_SIZE;
+
+ WARN_ON(rmi_data_destroy(rd, ipa, &data, &top));
+
+ if (WARN_ON(rmi_granule_undelegate(phys))) {
+ /* Page can't be returned to NS world so is lost */
+ get_page(phys_to_page(phys));
+ }
+ }
+ return -ENXIO;
+}
+
+int realm_map_non_secure(struct realm *realm,
+ unsigned long ipa,
+ kvm_pfn_t pfn,
+ unsigned long map_size,
+ struct kvm_mmu_memory_cache *memcache)
+{
+ phys_addr_t rd = virt_to_phys(realm->rd);
+ int map_level;
+ int ret = 0;
+ unsigned long desc = __pfn_to_phys(pfn) |
+ PTE_S2_MEMATTR(MT_S2_FWB_NORMAL) |
+ /* FIXME: Read+Write permissions for now */
+ (3 << 6);
+
+ if (WARN_ON(!IS_ALIGNED(ipa, map_size)))
+ return -EINVAL;
+
+ switch (map_size) {
+ case PAGE_SIZE:
+ map_level = 3;
+ break;
+ case RMM_L2_BLOCK_SIZE:
+ map_level = 2;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ ret = rmi_rtt_map_unprotected(rd, ipa, map_level, desc);
+
+ if (RMI_RETURN_STATUS(ret) == RMI_ERROR_RTT) {
+ /* Create missing RTTs and retry */
+ int level = RMI_RETURN_INDEX(ret);
+
+ ret = realm_create_rtt_levels(realm, ipa, level, map_level,
+ memcache);
+ if (WARN_ON(ret))
+ return -ENXIO;
+
+ ret = rmi_rtt_map_unprotected(rd, ipa, map_level, desc);
+ }
+ /*
+ * RMI_ERROR_RTT can be reported for two reasons: either the RTT tables
+ * are not there, or there is an RTTE already present for the address.
+ * The call to realm_create_rtt_levels() above handles the first case,
+ * and in the second case this indicates that another thread has
+ * already populated the RTTE for us, so we can ignore the error and
+ * continue.
+ */
+ if (WARN_ON(ret && RMI_RETURN_STATUS(ret) != RMI_ERROR_RTT))
+ return -ENXIO;
+
+ return 0;
+}
+
static int populate_par_region(struct kvm *kvm,
phys_addr_t ipa_base,
phys_addr_t ipa_end,
--
2.43.0
* [PATCH v6 21/43] KVM: arm64: Handle realm VCPU load
2024-12-12 15:55 [PATCH v6 00/43] arm64: Support for Arm CCA in KVM Steven Price
` (19 preceding siblings ...)
2024-12-12 15:55 ` [PATCH v6 20/43] arm64: RME: Runtime faulting of memory Steven Price
@ 2024-12-12 15:55 ` Steven Price
2024-12-12 15:55 ` [PATCH v6 22/43] KVM: arm64: Validate register access for a Realm VM Steven Price
` (21 subsequent siblings)
42 siblings, 0 replies; 90+ messages in thread
From: Steven Price @ 2024-12-12 15:55 UTC (permalink / raw)
To: kvm, kvmarm
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V
When loading a realm VCPU much of the work is handled by the RMM, so only
some of the actions are required. Rearrange kvm_arch_vcpu_load()
slightly so we can bail out early for a realm guest.
Signed-off-by: Steven Price <steven.price@arm.com>
---
arch/arm64/kvm/arm.c | 13 +++++++++----
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 6e83b2e3a6cf..f588b528c3f9 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -656,10 +656,6 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
kvm_vgic_load(vcpu);
kvm_timer_vcpu_load(vcpu);
- if (has_vhe())
- kvm_vcpu_load_vhe(vcpu);
- kvm_arch_vcpu_load_fp(vcpu);
- kvm_vcpu_pmu_restore_guest(vcpu);
if (kvm_arm_is_pvtime_enabled(&vcpu->arch))
kvm_make_request(KVM_REQ_RECORD_STEAL, vcpu);
@@ -675,6 +671,15 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
vcpu_set_pauth_traps(vcpu);
+ /* No additional state needs to be loaded for Realm VMs */
+ if (vcpu_is_rec(vcpu))
+ return;
+
+ if (has_vhe())
+ kvm_vcpu_load_vhe(vcpu);
+ kvm_arch_vcpu_load_fp(vcpu);
+ kvm_vcpu_pmu_restore_guest(vcpu);
+
kvm_arch_vcpu_load_debug_state_flags(vcpu);
if (!cpumask_test_cpu(cpu, vcpu->kvm->arch.supported_cpus))
--
2.43.0
* [PATCH v6 22/43] KVM: arm64: Validate register access for a Realm VM
2024-12-12 15:55 [PATCH v6 00/43] arm64: Support for Arm CCA in KVM Steven Price
` (20 preceding siblings ...)
2024-12-12 15:55 ` [PATCH v6 21/43] KVM: arm64: Handle realm VCPU load Steven Price
@ 2024-12-12 15:55 ` Steven Price
2025-02-02 1:22 ` Gavin Shan
2024-12-12 15:55 ` [PATCH v6 23/43] KVM: arm64: Handle Realm PSCI requests Steven Price
` (20 subsequent siblings)
42 siblings, 1 reply; 90+ messages in thread
From: Steven Price @ 2024-12-12 15:55 UTC (permalink / raw)
To: kvm, kvmarm
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V
The RMM only allows setting the GPRS (x0-x30) and PC for a realm
guest. Check this in kvm_arm_set_reg() so that the VMM can receive a
suitable error return if other registers are accessed.
Signed-off-by: Steven Price <steven.price@arm.com>
---
Changes since v5:
* Upper GPRS can be set as part of a HOST_CALL return, so fix up the
test to allow them.
---
arch/arm64/kvm/guest.c | 43 ++++++++++++++++++++++++++++++++++++++++++
1 file changed, 43 insertions(+)
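In practice this means the VMM can still seed the boot environment (PC
and X0-X30) but nothing else. A sketch of setting the entry point using
the standard ONE_REG encoding; setting any other core register on a
realm vCPU now fails:

    #include <linux/kvm.h>
    #include <asm/kvm.h>
    #include <sys/ioctl.h>

    /*
     * Sketch: set a realm vCPU's entry point. regs.regs[0..30] and
     * regs.pc are the only core registers accepted for a realm.
     */
    static int set_realm_pc(int vcpu_fd, __u64 entry)
    {
            struct kvm_one_reg reg = {
                    .id = KVM_REG_ARM64 | KVM_REG_SIZE_U64 |
                          KVM_REG_ARM_CORE |
                          KVM_REG_ARM_CORE_REG(regs.pc),
                    .addr = (__u64)(unsigned long)&entry,
            };

            return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
    }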
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 12dad841f2a5..1ee2fe072f1a 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -73,6 +73,24 @@ static u64 core_reg_offset_from_id(u64 id)
return id & ~(KVM_REG_ARCH_MASK | KVM_REG_SIZE_MASK | KVM_REG_ARM_CORE);
}
+static bool kvm_realm_validate_core_reg(u64 off)
+{
+ /*
+ * Note that GPRs can only sometimes be controlled by the VMM.
+ * For PSCI only X0-X6 are used, higher registers are ignored (restored
+ * from the REC).
+ * For HOST_CALL all of X0-X30 are copied to the RsiHostCall structure.
+ * For emulated MMIO X0 is always used.
+ */
+ switch (off) {
+ case KVM_REG_ARM_CORE_REG(regs.regs[0]) ...
+ KVM_REG_ARM_CORE_REG(regs.regs[30]):
+ case KVM_REG_ARM_CORE_REG(regs.pc):
+ return true;
+ }
+ return false;
+}
+
static int core_reg_size_from_offset(const struct kvm_vcpu *vcpu, u64 off)
{
int size;
@@ -115,6 +133,9 @@ static int core_reg_size_from_offset(const struct kvm_vcpu *vcpu, u64 off)
if (vcpu_has_sve(vcpu) && core_reg_offset_is_vreg(off))
return -EINVAL;
+ if (kvm_is_realm(vcpu->kvm) && !kvm_realm_validate_core_reg(off))
+ return -EPERM;
+
return size;
}
@@ -783,12 +804,34 @@ int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
return kvm_arm_sys_reg_get_reg(vcpu, reg);
}
+/*
+ * The RMI ABI only enables setting some GPRs and PC. The selection of GPRs
+ * that are available depends on the Realm state and the reason for the last
+ * exit. All other registers are reset to architectural or otherwise defined
+ * reset values by the RMM, except for a few configuration fields that
+ * correspond to Realm parameters.
+ */
+static bool validate_realm_set_reg(struct kvm_vcpu *vcpu,
+ const struct kvm_one_reg *reg)
+{
+ if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_CORE) {
+ u64 off = core_reg_offset_from_id(reg->id);
+
+ return kvm_realm_validate_core_reg(off);
+ }
+
+ return false;
+}
+
int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
{
/* We currently use nothing arch-specific in upper 32 bits */
if ((reg->id & ~KVM_REG_SIZE_MASK) >> 32 != KVM_REG_ARM64 >> 32)
return -EINVAL;
+ if (kvm_is_realm(vcpu->kvm) && !validate_realm_set_reg(vcpu, reg))
+ return -EINVAL;
+
switch (reg->id & KVM_REG_ARM_COPROC_MASK) {
case KVM_REG_ARM_CORE: return set_core_reg(vcpu, reg);
case KVM_REG_ARM_FW:
--
2.43.0
* [PATCH v6 23/43] KVM: arm64: Handle Realm PSCI requests
2024-12-12 15:55 [PATCH v6 00/43] arm64: Support for Arm CCA in KVM Steven Price
` (21 preceding siblings ...)
2024-12-12 15:55 ` [PATCH v6 22/43] KVM: arm64: Validate register access for a Realm VM Steven Price
@ 2024-12-12 15:55 ` Steven Price
2025-02-02 2:06 ` Gavin Shan
2024-12-12 15:55 ` [PATCH v6 24/43] KVM: arm64: WARN on injected undef exceptions Steven Price
` (19 subsequent siblings)
42 siblings, 1 reply; 90+ messages in thread
From: Steven Price @ 2024-12-12 15:55 UTC (permalink / raw)
To: kvm, kvmarm
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V
The RMM needs to be informed of the target REC when a PSCI call is made
with an MPIDR argument. Expose an ioctl to userspace in case PSCI
is handled by it.
Co-developed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Steven Price <steven.price@arm.com>
---
arch/arm64/include/asm/kvm_rme.h | 3 +++
arch/arm64/kvm/arm.c | 25 +++++++++++++++++++++++++
arch/arm64/kvm/psci.c | 29 +++++++++++++++++++++++++++++
arch/arm64/kvm/rme.c | 15 +++++++++++++++
4 files changed, 72 insertions(+)
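If userspace handles the PSCI call it must report the outcome to the RMM
before the calling vCPU is run again. A sketch of the expected usage,
with the structure layout and ioctl name taken from this series' ABI;
note the ioctl is issued on the calling vCPU's fd:

    #include <linux/kvm.h>
    #include <linux/psci.h>
    #include <sys/ioctl.h>

    /*
     * Sketch: complete a PSCI call (e.g. CPU_ON) that userspace handled.
     * target_mpidr identifies the vCPU/REC the call was aimed at.
     */
    static int rmm_psci_complete(int calling_vcpu_fd, __u64 target_mpidr)
    {
            struct kvm_arm_rmm_psci_complete req = {
                    .target_mpidr = target_mpidr,
                    .psci_status = PSCI_RET_SUCCESS,
            };

            return ioctl(calling_vcpu_fd,
                         KVM_ARM_VCPU_RMM_PSCI_COMPLETE, &req);
    }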
diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/asm/kvm_rme.h
index 158f77e24a26..90a4537ad38d 100644
--- a/arch/arm64/include/asm/kvm_rme.h
+++ b/arch/arm64/include/asm/kvm_rme.h
@@ -113,6 +113,9 @@ int realm_set_ipa_state(struct kvm_vcpu *vcpu,
unsigned long addr, unsigned long end,
unsigned long ripas,
unsigned long *top_ipa);
+int realm_psci_complete(struct kvm_vcpu *calling,
+ struct kvm_vcpu *target,
+ unsigned long status);
#define RMM_RTT_BLOCK_LEVEL 2
#define RMM_RTT_MAX_LEVEL 3
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index f588b528c3f9..eff1a4ec892b 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1745,6 +1745,22 @@ static int kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
return __kvm_arm_vcpu_set_events(vcpu, events);
}
+static int kvm_arm_vcpu_rmm_psci_complete(struct kvm_vcpu *vcpu,
+ struct kvm_arm_rmm_psci_complete *arg)
+{
+ struct kvm_vcpu *target = kvm_mpidr_to_vcpu(vcpu->kvm, arg->target_mpidr);
+
+ if (!target)
+ return -EINVAL;
+
+ /*
+ * RMM v1.0 only supports PSCI_RET_SUCCESS or PSCI_RET_DENIED
+ * for the status. But let us leave the filtering to the RMM
+ * to keep this future proof.
+ */
+ return realm_psci_complete(vcpu, target, arg->psci_status);
+}
+
long kvm_arch_vcpu_ioctl(struct file *filp,
unsigned int ioctl, unsigned long arg)
{
@@ -1867,6 +1883,15 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
return kvm_arm_vcpu_finalize(vcpu, what);
}
+ case KVM_ARM_VCPU_RMM_PSCI_COMPLETE: {
+ struct kvm_arm_rmm_psci_complete req;
+
+ if (!kvm_is_realm(vcpu->kvm))
+ return -EINVAL;
+ if (copy_from_user(&req, argp, sizeof(req)))
+ return -EFAULT;
+ return kvm_arm_vcpu_rmm_psci_complete(vcpu, &req);
+ }
default:
r = -EINVAL;
}
diff --git a/arch/arm64/kvm/psci.c b/arch/arm64/kvm/psci.c
index 3b5dbe9a0a0e..9dc161abc30c 100644
--- a/arch/arm64/kvm/psci.c
+++ b/arch/arm64/kvm/psci.c
@@ -103,6 +103,12 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
reset_state->reset = true;
kvm_make_request(KVM_REQ_VCPU_RESET, vcpu);
+ /*
+ * Make sure we issue PSCI_COMPLETE before the VCPU can be
+ * scheduled.
+ */
+ if (vcpu_is_rec(vcpu))
+ realm_psci_complete(source_vcpu, vcpu, PSCI_RET_SUCCESS);
/*
* Make sure the reset request is observed if the RUNNABLE mp_state is
@@ -115,6 +121,10 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
out_unlock:
spin_unlock(&vcpu->arch.mp_state_lock);
+ if (vcpu_is_rec(vcpu) && ret != PSCI_RET_SUCCESS)
+ realm_psci_complete(source_vcpu, vcpu,
+ ret == PSCI_RET_ALREADY_ON ?
+ PSCI_RET_SUCCESS : PSCI_RET_DENIED);
return ret;
}
@@ -142,6 +152,25 @@ static unsigned long kvm_psci_vcpu_affinity_info(struct kvm_vcpu *vcpu)
/* Ignore other bits of target affinity */
target_affinity &= target_affinity_mask;
+ if (vcpu_is_rec(vcpu)) {
+ struct kvm_vcpu *target_vcpu;
+
+ /* RMM supports only zero affinity level */
+ if (lowest_affinity_level != 0)
+ return PSCI_RET_INVALID_PARAMS;
+
+ target_vcpu = kvm_mpidr_to_vcpu(kvm, target_affinity);
+ if (!target_vcpu)
+ return PSCI_RET_INVALID_PARAMS;
+
+ /*
+ * Provide the references of running and target RECs to the RMM
+ * so that the RMM can complete the PSCI request.
+ */
+ realm_psci_complete(vcpu, target_vcpu, PSCI_RET_SUCCESS);
+ return PSCI_RET_SUCCESS;
+ }
+
/*
* If one or more VCPU matching target affinity are running
* then ON else OFF
diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
index 146ef598a581..5831d379760a 100644
--- a/arch/arm64/kvm/rme.c
+++ b/arch/arm64/kvm/rme.c
@@ -118,6 +118,21 @@ static void free_delegated_granule(phys_addr_t phys)
free_page((unsigned long)phys_to_virt(phys));
}
+int realm_psci_complete(struct kvm_vcpu *calling, struct kvm_vcpu *target,
+ unsigned long status)
+{
+ int ret;
+
+ ret = rmi_psci_complete(virt_to_phys(calling->arch.rec.rec_page),
+ virt_to_phys(target->arch.rec.rec_page),
+ status);
+
+ if (ret)
+ return -EINVAL;
+
+ return 0;
+}
+
static int realm_rtt_create(struct realm *realm,
unsigned long addr,
int level,
--
2.43.0
* [PATCH v6 24/43] KVM: arm64: WARN on injected undef exceptions
2024-12-12 15:55 [PATCH v6 00/43] arm64: Support for Arm CCA in KVM Steven Price
` (22 preceding siblings ...)
2024-12-12 15:55 ` [PATCH v6 23/43] KVM: arm64: Handle Realm PSCI requests Steven Price
@ 2024-12-12 15:55 ` Steven Price
2025-02-02 2:11 ` Gavin Shan
2024-12-12 15:55 ` [PATCH v6 25/43] arm64: Don't expose stolen time for realm guests Steven Price
` (18 subsequent siblings)
42 siblings, 1 reply; 90+ messages in thread
From: Steven Price @ 2024-12-12 15:55 UTC (permalink / raw)
To: kvm, kvmarm
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V
The RMM doesn't allow injection of an undefined exception into a realm
guest. Add a WARN to catch it if this ever happens.
Signed-off-by: Steven Price <steven.price@arm.com>
---
arch/arm64/kvm/inject_fault.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
index 611867b81ac2..705463058ce0 100644
--- a/arch/arm64/kvm/inject_fault.c
+++ b/arch/arm64/kvm/inject_fault.c
@@ -226,6 +226,8 @@ void kvm_inject_size_fault(struct kvm_vcpu *vcpu)
*/
void kvm_inject_undefined(struct kvm_vcpu *vcpu)
{
+ if (vcpu_is_rec(vcpu))
+ WARN(1, "Cannot inject undefined exception into REC. Continuing with unknown behaviour");
if (vcpu_el1_is_32bit(vcpu))
inject_undef32(vcpu);
else
--
2.43.0
* [PATCH v6 25/43] arm64: Don't expose stolen time for realm guests
2024-12-12 15:55 [PATCH v6 00/43] arm64: Support for Arm CCA in KVM Steven Price
` (23 preceding siblings ...)
2024-12-12 15:55 ` [PATCH v6 24/43] KVM: arm64: WARN on injected undef exceptions Steven Price
@ 2024-12-12 15:55 ` Steven Price
2025-02-02 2:15 ` Gavin Shan
2024-12-12 15:55 ` [PATCH v6 26/43] arm64: rme: allow userspace to inject aborts Steven Price
` (17 subsequent siblings)
42 siblings, 1 reply; 90+ messages in thread
From: Steven Price @ 2024-12-12 15:55 UTC (permalink / raw)
To: kvm, kvmarm
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V
Exposing stolen time doesn't make much sense for a realm, as the guest
wouldn't want to trust the host. Supporting it would also need some
extra work to ensure that KVM only attempts to write into a shared
memory region. So for now just disable it.
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Steven Price <steven.price@arm.com>
---
arch/arm64/kvm/arm.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index eff1a4ec892b..134acb4ee26f 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -432,7 +432,10 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
r = system_supports_mte();
break;
case KVM_CAP_STEAL_TIME:
- r = kvm_arm_pvtime_supported();
+ if (kvm_is_realm(kvm))
+ r = 0;
+ else
+ r = kvm_arm_pvtime_supported();
break;
case KVM_CAP_ARM_EL1_32BIT:
r = cpus_have_final_cap(ARM64_HAS_32BIT_EL1);
--
2.43.0
* [PATCH v6 26/43] arm64: rme: allow userspace to inject aborts
2024-12-12 15:55 [PATCH v6 00/43] arm64: Support for Arm CCA in KVM Steven Price
` (24 preceding siblings ...)
2024-12-12 15:55 ` [PATCH v6 25/43] arm64: Don't expose stolen time for realm guests Steven Price
@ 2024-12-12 15:55 ` Steven Price
2024-12-12 15:55 ` [PATCH v6 27/43] arm64: rme: support RSI_HOST_CALL Steven Price
` (16 subsequent siblings)
42 siblings, 0 replies; 90+ messages in thread
From: Steven Price @ 2024-12-12 15:55 UTC (permalink / raw)
To: kvm, kvmarm
Cc: Joey Gouly, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V,
Steven Price
From: Joey Gouly <joey.gouly@arm.com>
Extend KVM_SET_VCPU_EVENTS to support realms, where KVM cannot set the
system registers directly and the RMM must perform the injection on the
next REC entry.
Signed-off-by: Joey Gouly <joey.gouly@arm.com>
Signed-off-by: Steven Price <steven.price@arm.com>
---
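As an illustration (not part of the patch), after an emulated MMIO exit
from a realm the VMM can request an SEA injection along these lines,
assuming vcpu_fd is an initialised vCPU fd:

	struct kvm_vcpu_events events = { 0 };

	/* Ask the RMM to inject a Synchronous External Abort on the next REC entry */
	events.exception.ext_dabt_pending = 1;

	if (ioctl(vcpu_fd, KVM_SET_VCPU_EVENTS, &events) < 0)
		perror("KVM_SET_VCPU_EVENTS"); /* EINVAL unless the last exit was emulated MMIO */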
Documentation/virt/kvm/api.rst | 2 ++
arch/arm64/kvm/guest.c | 24 ++++++++++++++++++++++++
2 files changed, 26 insertions(+)
diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index df4679415a4c..1c622d8ca602 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -1291,6 +1291,8 @@ User space may need to inject several types of events to the guest.
Set the pending SError exception state for this VCPU. It is not possible to
'cancel' an Serror that has been made pending.
+User space cannot inject SErrors into Realms.
+
If the guest performed an access to I/O memory which could not be handled by
userspace, for example because of missing instruction syndrome decode
information or because there is no device mapped at the accessed IPA, then
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 1ee2fe072f1a..920e0fd73698 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -883,6 +883,30 @@ int __kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
bool has_esr = events->exception.serror_has_esr;
bool ext_dabt_pending = events->exception.ext_dabt_pending;
+ if (vcpu_is_rec(vcpu)) {
+ /* Cannot inject SError into a Realm. */
+ if (serror_pending)
+ return -EINVAL;
+
+ /*
+ * If a data abort is pending, set the flag and let the RMM
+ * inject an SEA when the REC is scheduled to be run.
+ */
+ if (ext_dabt_pending) {
+ /*
+ * Can only inject SEA into a Realm if the previous exit
+ * was due to a data abort of an Unprotected IPA.
+ */
+ if (!(vcpu->arch.rec.run->enter.flags & REC_ENTER_EMULATED_MMIO))
+ return -EINVAL;
+
+ vcpu->arch.rec.run->enter.flags &= ~REC_ENTER_EMULATED_MMIO;
+ vcpu->arch.rec.run->enter.flags |= REC_ENTER_INJECT_SEA;
+ }
+
+ return 0;
+ }
+
if (serror_pending && has_esr) {
if (!cpus_have_final_cap(ARM64_HAS_RAS_EXTN))
return -EINVAL;
--
2.43.0
* [PATCH v6 27/43] arm64: rme: support RSI_HOST_CALL
2024-12-12 15:55 [PATCH v6 00/43] arm64: Support for Arm CCA in KVM Steven Price
` (25 preceding siblings ...)
2024-12-12 15:55 ` [PATCH v6 26/43] arm64: rme: allow userspace to inject aborts Steven Price
@ 2024-12-12 15:55 ` Steven Price
2025-02-02 6:41 ` Gavin Shan
2024-12-12 15:55 ` [PATCH v6 28/43] arm64: rme: Allow checking SVE on VM instance Steven Price
` (15 subsequent siblings)
42 siblings, 1 reply; 90+ messages in thread
From: Steven Price @ 2024-12-12 15:55 UTC (permalink / raw)
To: kvm, kvmarm
Cc: Joey Gouly, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V,
Steven Price
From: Joey Gouly <joey.gouly@arm.com>
Forward RSI_HOST_CALL exits to KVM's HVC handler.
Signed-off-by: Joey Gouly <joey.gouly@arm.com>
Signed-off-by: Steven Price <steven.price@arm.com>
---
Changes since v4:
* Setting GPRS is now done by kvm_rec_enter() rather than
rec_exit_host_call() (see previous patch - arm64: RME: Handle realm
enter/exit). This fixes a bug where the registers set by user space
were being ignored.
---
arch/arm64/kvm/rme-exit.c | 22 ++++++++++++++++++++++
1 file changed, 22 insertions(+)
diff --git a/arch/arm64/kvm/rme-exit.c b/arch/arm64/kvm/rme-exit.c
index 8f0f9ab57f28..b2a367474d74 100644
--- a/arch/arm64/kvm/rme-exit.c
+++ b/arch/arm64/kvm/rme-exit.c
@@ -103,6 +103,26 @@ static int rec_exit_ripas_change(struct kvm_vcpu *vcpu)
return 0;
}
+static int rec_exit_host_call(struct kvm_vcpu *vcpu)
+{
+ int ret, i;
+ struct realm_rec *rec = &vcpu->arch.rec;
+
+ vcpu->stat.hvc_exit_stat++;
+
+ for (i = 0; i < REC_RUN_GPRS; i++)
+ vcpu_set_reg(vcpu, i, rec->run->exit.gprs[i]);
+
+ ret = kvm_smccc_call_handler(vcpu);
+
+ if (ret < 0) {
+ vcpu_set_reg(vcpu, 0, ~0UL);
+ ret = 1;
+ }
+
+ return ret;
+}
+
static void update_arch_timer_irq_lines(struct kvm_vcpu *vcpu)
{
struct realm_rec *rec = &vcpu->arch.rec;
@@ -164,6 +184,8 @@ int handle_rec_exit(struct kvm_vcpu *vcpu, int rec_run_ret)
return rec_exit_psci(vcpu);
case RMI_EXIT_RIPAS_CHANGE:
return rec_exit_ripas_change(vcpu);
+ case RMI_EXIT_HOST_CALL:
+ return rec_exit_host_call(vcpu);
}
kvm_pr_unimpl("Unsupported exit reason: %u\n",
--
2.43.0
* [PATCH v6 28/43] arm64: rme: Allow checking SVE on VM instance
2024-12-12 15:55 [PATCH v6 00/43] arm64: Support for Arm CCA in KVM Steven Price
` (26 preceding siblings ...)
2024-12-12 15:55 ` [PATCH v6 27/43] arm64: rme: support RSI_HOST_CALL Steven Price
@ 2024-12-12 15:55 ` Steven Price
2025-02-02 6:00 ` Gavin Shan
2024-12-12 15:55 ` [PATCH v6 29/43] arm64: RME: Always use 4k pages for realms Steven Price
` (14 subsequent siblings)
42 siblings, 1 reply; 90+ messages in thread
From: Steven Price @ 2024-12-12 15:55 UTC (permalink / raw)
To: kvm, kvmarm
Cc: Suzuki K Poulose, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Zenghui Yu, linux-arm-kernel,
linux-kernel, Joey Gouly, Alexandru Elisei, Christoffer Dall,
Fuad Tabba, linux-coco, Ganapatrao Kulkarni, Gavin Shan,
Shanker Donthineni, Alper Gun, Aneesh Kumar K . V, Steven Price
From: Suzuki K Poulose <suzuki.poulose@arm.com>
Given that different types of VM are supported, check the support for
SVE for the given VM instance to accurately report the status.
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Steven Price <steven.price@arm.com>
---
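A consequence for VMMs: the capability should be queried on the VM fd
rather than the global /dev/kvm fd, so the answer reflects the VM type;
for example:

	/* Realms report the RMM's SVE support, other VMs the host's */
	int has_sve = ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_SVE);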
arch/arm64/include/asm/kvm_rme.h | 2 ++
arch/arm64/kvm/arm.c | 5 ++++-
arch/arm64/kvm/rme.c | 5 +++++
3 files changed, 11 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/asm/kvm_rme.h
index 90a4537ad38d..0d89ab1645c1 100644
--- a/arch/arm64/include/asm/kvm_rme.h
+++ b/arch/arm64/include/asm/kvm_rme.h
@@ -85,6 +85,8 @@ void kvm_init_rme(void);
u32 kvm_realm_ipa_limit(void);
u32 kvm_realm_vgic_nr_lr(void);
+bool kvm_rme_supports_sve(void);
+
int kvm_realm_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap);
int kvm_init_realm_vm(struct kvm *kvm);
void kvm_destroy_realm(struct kvm *kvm);
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 134acb4ee26f..6f7f96ab781d 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -456,7 +456,10 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
r = get_kvm_ipa_limit();
break;
case KVM_CAP_ARM_SVE:
- r = system_supports_sve();
+ if (kvm_is_realm(kvm))
+ r = kvm_rme_supports_sve();
+ else
+ r = system_supports_sve();
break;
case KVM_CAP_ARM_PTRAUTH_ADDRESS:
case KVM_CAP_ARM_PTRAUTH_GENERIC:
diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
index 5831d379760a..27a479feb907 100644
--- a/arch/arm64/kvm/rme.c
+++ b/arch/arm64/kvm/rme.c
@@ -20,6 +20,11 @@ static bool rme_supports(unsigned long feature)
return !!u64_get_bits(rmm_feat_reg0, feature);
}
+bool kvm_rme_supports_sve(void)
+{
+ return rme_supports(RMI_FEATURE_REGISTER_0_SVE_EN);
+}
+
static int rmi_check_version(void)
{
struct arm_smccc_res res;
--
2.43.0
* [PATCH v6 29/43] arm64: RME: Always use 4k pages for realms
2024-12-12 15:55 [PATCH v6 00/43] arm64: Support for Arm CCA in KVM Steven Price
` (27 preceding siblings ...)
2024-12-12 15:55 ` [PATCH v6 28/43] arm64: rme: Allow checking SVE on VM instance Steven Price
@ 2024-12-12 15:55 ` Steven Price
2025-02-02 6:52 ` Gavin Shan
2024-12-12 15:55 ` [PATCH v6 30/43] arm64: rme: Prevent Device mappings for Realms Steven Price
` (13 subsequent siblings)
42 siblings, 1 reply; 90+ messages in thread
From: Steven Price @ 2024-12-12 15:55 UTC (permalink / raw)
To: kvm, kvmarm
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V
Always split up huge pages to avoid the problems of managing them. There
are currently two issues:
1. The uABI for the VMM allows populating memory on 4k boundaries even
if the underlying allocator (e.g. hugetlbfs) is using a larger page
size. Using a memfd for private allocations will push this issue onto
the VMM as it will need to respect the granularity of the allocator.
2. The guest is able to request arbitrary ranges to be remapped as
shared. Again with a memfd approach it will be up to the VMM to deal
with the complexity and either overmap (need the huge mapping and add
an additional 'overlapping' shared mapping) or reject the request as
invalid due to the use of a huge page allocator.
For now just break everything down to 4k pages in the RMM-controlled
stage 2.
Signed-off-by: Steven Price <steven.price@arm.com>
---
arch/arm64/kvm/mmu.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index e88714903ce5..9ede143ccef1 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1603,6 +1603,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
if (logging_active) {
force_pte = true;
vma_shift = PAGE_SHIFT;
+ } else if (kvm_is_realm(kvm)) {
+ /* Force PTE level mappings for realms */
+ force_pte = true;
+ vma_shift = PAGE_SHIFT;
} else {
vma_shift = get_vma_page_shift(vma, hva);
}
--
2.43.0
* [PATCH v6 30/43] arm64: rme: Prevent Device mappings for Realms
2024-12-12 15:55 [PATCH v6 00/43] arm64: Support for Arm CCA in KVM Steven Price
` (28 preceding siblings ...)
2024-12-12 15:55 ` [PATCH v6 29/43] arm64: RME: Always use 4k pages for realms Steven Price
@ 2024-12-12 15:55 ` Steven Price
2025-02-02 7:12 ` Gavin Shan
2024-12-12 15:55 ` [PATCH v6 31/43] arm_pmu: Provide a mechanism for disabling the physical IRQ Steven Price
` (12 subsequent siblings)
42 siblings, 1 reply; 90+ messages in thread
From: Steven Price @ 2024-12-12 15:55 UTC (permalink / raw)
To: kvm, kvmarm
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V
Physical device assignment is not yet supported by the RMM, so it
doesn't make much sense to allow device mappings within the realm.
Prevent them when the guest is a realm.
Signed-off-by: Steven Price <steven.price@arm.com>
---
Changes from v5:
* Also prevent accesses in user_mem_abort()
---
arch/arm64/kvm/mmu.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 9ede143ccef1..cef7c3dcbf99 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1149,6 +1149,10 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
if (is_protected_kvm_enabled())
return -EPERM;
+ /* We don't support mapping special pages into a Realm */
+ if (kvm_is_realm(kvm))
+ return -EINVAL;
+
size += offset_in_page(guest_ipa);
guest_ipa &= PAGE_MASK;
@@ -1725,6 +1729,14 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
if (exec_fault && device)
return -ENOEXEC;
+ /*
+ * Don't allow device accesses to protected memory as we don't (yet)
+ * support protected devices.
+ */
+ if (device && kvm_is_realm(kvm) &&
+ kvm_gpa_from_fault(kvm, fault_ipa) == fault_ipa)
+ return -EINVAL;
+
/*
* Potentially reduce shadow S2 permissions to match the guest's own
* S2. For exec faults, we'd only reach this point if the guest
--
2.43.0
* [PATCH v6 31/43] arm_pmu: Provide a mechanism for disabling the physical IRQ
2024-12-12 15:55 [PATCH v6 00/43] arm64: Support for Arm CCA in KVM Steven Price
` (29 preceding siblings ...)
2024-12-12 15:55 ` [PATCH v6 30/43] arm64: rme: Prevent Device mappings for Realms Steven Price
@ 2024-12-12 15:55 ` Steven Price
2024-12-12 15:55 ` [PATCH v6 32/43] arm64: rme: Enable PMU support with a realm guest Steven Price
` (11 subsequent siblings)
42 siblings, 0 replies; 90+ messages in thread
From: Steven Price @ 2024-12-12 15:55 UTC (permalink / raw)
To: kvm, kvmarm
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V
Arm CCA assigns the physical PMU device to the guest running in realm
world; however, the IRQs are routed via the host. To enter a realm guest
while a PMU IRQ is pending, it is necessary to block the physical IRQ to
prevent an immediate exit. Provide a mechanism in the PMU driver for KVM
to control the physical IRQ.
Signed-off-by: Steven Price <steven.price@arm.com>
---
v3: Add a dummy function for the !CONFIG_ARM_PMU case.
---
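For context, the intended call pattern (as wired up for realm guests
later in this series) is roughly:

	/* Before entering the guest with a PMU interrupt pending: */
	arm_pmu_set_phys_irq(false);	/* mask the physical PMU IRQ on this CPU */

	/* ... enter and exit the guest ... */

	arm_pmu_set_phys_irq(true);	/* unmask once the pending IRQ has been handled */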
drivers/perf/arm_pmu.c | 15 +++++++++++++++
include/linux/perf/arm_pmu.h | 5 +++++
2 files changed, 20 insertions(+)
diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index 398cce3d76fc..2cdcdda8f638 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -735,6 +735,21 @@ static int arm_perf_teardown_cpu(unsigned int cpu, struct hlist_node *node)
return 0;
}
+void arm_pmu_set_phys_irq(bool enable)
+{
+ int cpu = get_cpu();
+ struct arm_pmu *pmu = per_cpu(cpu_armpmu, cpu);
+ int irq;
+
+ irq = armpmu_get_cpu_irq(pmu, cpu);
+ if (irq && !enable)
+ per_cpu(cpu_irq_ops, cpu)->disable_pmuirq(irq);
+ else if (irq && enable)
+ per_cpu(cpu_irq_ops, cpu)->enable_pmuirq(irq);
+
+ put_cpu();
+}
+
#ifdef CONFIG_CPU_PM
static void cpu_pm_pmu_setup(struct arm_pmu *armpmu, unsigned long cmd)
{
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index 4b5b83677e3f..6c2631e2cbd7 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -172,6 +172,7 @@ void kvm_host_pmu_init(struct arm_pmu *pmu);
#endif
bool arm_pmu_irq_is_nmi(void);
+void arm_pmu_set_phys_irq(bool enable);
/* Internal functions only for core arm_pmu code */
struct arm_pmu *armpmu_alloc(void);
@@ -182,6 +183,10 @@ void armpmu_free_irq(int irq, int cpu);
#define ARMV8_PMU_PDEV_NAME "armv8-pmu"
+#else /* CONFIG_ARM_PMU */
+
+static inline void arm_pmu_set_phys_irq(bool enable) {}
+
#endif /* CONFIG_ARM_PMU */
#define ARMV8_SPE_PDEV_NAME "arm,spe-v1"
--
2.43.0
* [PATCH v6 32/43] arm64: rme: Enable PMU support with a realm guest
2024-12-12 15:55 [PATCH v6 00/43] arm64: Support for Arm CCA in KVM Steven Price
` (30 preceding siblings ...)
2024-12-12 15:55 ` [PATCH v6 31/43] arm_pmu: Provide a mechanism for disabling the physical IRQ Steven Price
@ 2024-12-12 15:55 ` Steven Price
2024-12-12 16:54 ` Joey Gouly
2024-12-12 15:55 ` [PATCH v6 33/43] kvm: rme: Hide KVM_CAP_READONLY_MEM for realm guests Steven Price
` (10 subsequent siblings)
42 siblings, 1 reply; 90+ messages in thread
From: Steven Price @ 2024-12-12 15:55 UTC (permalink / raw)
To: kvm, kvmarm
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V
Use the PMU registers from the RmiRecExit structure to identify when an
overflow interrupt is due and inject it into the guest. Also hook up the
configuration option for enabling the PMU within the guest.
When entering a realm guest with a PMU interrupt pending, it is
necessary to disable the physical interrupt. Otherwise, when the RMM
restores the PMU state, the physical interrupt will trigger, causing an
immediate exit back to the host. The guest is expected to acknowledge
the interrupt, causing a host exit (to update the GIC state), which
gives the opportunity to re-enable the physical interrupt before the
next PMU event.
The number of PMU counters is configured by the VMM by writing to PMCR.N.
Signed-off-by: Steven Price <steven.price@arm.com>
---
Changes since v2:
* Add a macro kvm_pmu_get_irq_level() to avoid compile issues when PMU
support is disabled.
---
arch/arm64/kvm/arm.c | 11 +++++++++++
arch/arm64/kvm/guest.c | 7 +++++++
arch/arm64/kvm/pmu-emul.c | 3 +++
arch/arm64/kvm/rme.c | 8 ++++++++
arch/arm64/kvm/sys_regs.c | 2 +-
include/kvm/arm_pmu.h | 4 ++++
6 files changed, 34 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 6f7f96ab781d..ae3596a25272 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -15,6 +15,7 @@
#include <linux/vmalloc.h>
#include <linux/fs.h>
#include <linux/mman.h>
+#include <linux/perf/arm_pmu.h>
#include <linux/sched.h>
#include <linux/kvm.h>
#include <linux/kvm_irqfd.h>
@@ -1209,6 +1210,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
run->exit_reason = KVM_EXIT_UNKNOWN;
run->flags = 0;
while (ret > 0) {
+ bool pmu_stopped = false;
+
/*
* Check conditions before entering the guest
*/
@@ -1240,6 +1243,11 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
kvm_pmu_flush_hwstate(vcpu);
+ if (vcpu_is_rec(vcpu) && kvm_pmu_get_irq_level(vcpu)) {
+ pmu_stopped = true;
+ arm_pmu_set_phys_irq(false);
+ }
+
local_irq_disable();
kvm_vgic_flush_hwstate(vcpu);
@@ -1342,6 +1350,9 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
preempt_enable();
+ if (pmu_stopped)
+ arm_pmu_set_phys_irq(true);
+
/*
* The ARMv8 architecture doesn't give the hypervisor
* a mechanism to prevent a guest from dropping to AArch32 EL0
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 920e0fd73698..b4e3839f6076 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -804,6 +804,8 @@ int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
return kvm_arm_sys_reg_get_reg(vcpu, reg);
}
+#define KVM_REG_ARM_PMCR_EL0 ARM64_SYS_REG(3, 3, 9, 12, 0)
+
/*
* The RMI ABI only enables setting some GPRs and PC. The selection of GPRs
* that are available depends on the Realm state and the reason for the last
@@ -818,6 +820,11 @@ static bool validate_realm_set_reg(struct kvm_vcpu *vcpu,
u64 off = core_reg_offset_from_id(reg->id);
return kvm_realm_validate_core_reg(off);
+ } else {
+ switch (reg->id) {
+ case KVM_REG_ARM_PMCR_EL0:
+ return true;
+ }
}
return false;
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 456102bc0b55..5628b573ca41 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -397,6 +397,9 @@ static bool kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
{
u64 reg = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
+ if (vcpu_is_rec(vcpu))
+ return vcpu->arch.rec.run->exit.pmu_ovf_status;
+
reg &= __vcpu_sys_reg(vcpu, PMINTENSET_EL1);
/*
diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
index 27a479feb907..e562e77c1f94 100644
--- a/arch/arm64/kvm/rme.c
+++ b/arch/arm64/kvm/rme.c
@@ -303,6 +303,11 @@ static int realm_create_rd(struct kvm *kvm)
params->rtt_base = kvm->arch.mmu.pgd_phys;
params->vmid = realm->vmid;
+ if (kvm->arch.arm_pmu) {
+ params->pmu_num_ctrs = kvm->arch.pmcr_n;
+ params->flags |= RMI_REALM_PARAM_FLAG_PMU;
+ }
+
params_phys = virt_to_phys(params);
if (rmi_realm_create(rd_phys, params_phys)) {
@@ -1370,6 +1375,9 @@ int kvm_create_rec(struct kvm_vcpu *vcpu)
if (!vcpu_has_feature(vcpu, KVM_ARM_VCPU_PSCI_0_2))
return -EINVAL;
+ if (vcpu->kvm->arch.arm_pmu && !kvm_vcpu_has_pmu(vcpu))
+ return -EINVAL;
+
BUILD_BUG_ON(sizeof(*params) > PAGE_SIZE);
BUILD_BUG_ON(sizeof(*rec->run) > PAGE_SIZE);
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 83c6b4a07ef5..a4713609e230 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1324,7 +1324,7 @@ static int set_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
* implements. Ignore this error to maintain compatibility
* with the existing KVM behavior.
*/
- if (!kvm_vm_has_ran_once(kvm) &&
+ if (!kvm_vm_has_ran_once(kvm) && !kvm_realm_is_created(kvm) &&
new_n <= kvm_arm_pmu_get_max_counters(kvm))
kvm->arch.pmcr_n = new_n;
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index e61dd7dd2286..30625a6fd143 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -77,6 +77,8 @@ void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu);
void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
void kvm_vcpu_pmu_resync_el0(void);
+#define kvm_pmu_get_irq_level(vcpu) ((vcpu)->arch.pmu.irq_level)
+
#define kvm_vcpu_has_pmu(vcpu) \
(vcpu_has_feature(vcpu, KVM_ARM_VCPU_PMU_V3))
@@ -164,6 +166,8 @@ static inline u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1)
return 0;
}
+#define kvm_pmu_get_irq_level(vcpu) (false)
+
#define kvm_vcpu_has_pmu(vcpu) ({ false; })
static inline void kvm_pmu_update_vcpu_events(struct kvm_vcpu *vcpu) {}
static inline void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu) {}
--
2.43.0
* [PATCH v6 33/43] kvm: rme: Hide KVM_CAP_READONLY_MEM for realm guests
2024-12-12 15:55 [PATCH v6 00/43] arm64: Support for Arm CCA in KVM Steven Price
` (31 preceding siblings ...)
2024-12-12 15:55 ` [PATCH v6 32/43] arm64: rme: Enable PMU support with a realm guest Steven Price
@ 2024-12-12 15:55 ` Steven Price
2024-12-12 15:55 ` [PATCH v6 34/43] arm64: RME: Propagate number of breakpoints and watchpoints to userspace Steven Price
` (9 subsequent siblings)
42 siblings, 0 replies; 90+ messages in thread
From: Steven Price @ 2024-12-12 15:55 UTC (permalink / raw)
To: kvm, kvmarm
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V
Read-only memslots aren't supported for protected memory. While it may
be possible to support read-only mappings of unprotected memory, this
isn't supported at present.
Signed-off-by: Steven Price <steven.price@arm.com>
---
arch/arm64/kvm/arm.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index ae3596a25272..b6f63ab33354 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -377,7 +377,6 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
case KVM_CAP_ONE_REG:
case KVM_CAP_ARM_PSCI:
case KVM_CAP_ARM_PSCI_0_2:
- case KVM_CAP_READONLY_MEM:
case KVM_CAP_MP_STATE:
case KVM_CAP_IMMEDIATE_EXIT:
case KVM_CAP_VCPU_EVENTS:
@@ -391,6 +390,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
case KVM_CAP_COUNTER_OFFSET:
r = 1;
break;
+ case KVM_CAP_READONLY_MEM:
case KVM_CAP_SET_GUEST_DEBUG:
r = !kvm_is_realm(kvm);
break;
--
2.43.0
* [PATCH v6 34/43] arm64: RME: Propagate number of breakpoints and watchpoints to userspace
2024-12-12 15:55 [PATCH v6 00/43] arm64: Support for Arm CCA in KVM Steven Price
` (32 preceding siblings ...)
2024-12-12 15:55 ` [PATCH v6 33/43] kvm: rme: Hide KVM_CAP_READONLY_MEM for realm guests Steven Price
@ 2024-12-12 15:55 ` Steven Price
2025-02-02 23:15 ` Gavin Shan
2024-12-12 15:56 ` [PATCH v6 35/43] arm64: RME: Set breakpoint parameters through SET_ONE_REG Steven Price
` (8 subsequent siblings)
42 siblings, 1 reply; 90+ messages in thread
From: Steven Price @ 2024-12-12 15:55 UTC (permalink / raw)
To: kvm, kvmarm
Cc: Jean-Philippe Brucker, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V,
Steven Price
From: Jean-Philippe Brucker <jean-philippe@linaro.org>
The RMM describes the maximum number of BPs/WPs available to the guest
in Feature Register 0. Propagate those numbers into ID_AA64DFR0_EL1,
which is visible to userspace. A VMM needs this information in order to
set up realm parameters.
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Signed-off-by: Steven Price <steven.price@arm.com>
---
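A sketch of the VMM side, assuming vcpu_fd is an initialised vCPU fd
(the field offsets are the architectural ID_AA64DFR0_EL1 layout):

	uint64_t dfr0;
	struct kvm_one_reg reg = {
		.id = ARM64_SYS_REG(3, 0, 0, 5, 0),	/* ID_AA64DFR0_EL1 */
		.addr = (uint64_t)&dfr0,
	};

	if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg) < 0)
		perror("KVM_GET_ONE_REG");

	unsigned int bps = ((dfr0 >> 12) & 0xf) + 1;	/* BRPs encodes count - 1 */
	unsigned int wps = ((dfr0 >> 20) & 0xf) + 1;	/* WRPs encodes count - 1 */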
arch/arm64/include/asm/kvm_rme.h | 2 ++
arch/arm64/kvm/rme.c | 22 ++++++++++++++++++++++
arch/arm64/kvm/sys_regs.c | 2 +-
3 files changed, 25 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/asm/kvm_rme.h
index 0d89ab1645c1..f8e37907e2d5 100644
--- a/arch/arm64/include/asm/kvm_rme.h
+++ b/arch/arm64/include/asm/kvm_rme.h
@@ -85,6 +85,8 @@ void kvm_init_rme(void);
u32 kvm_realm_ipa_limit(void);
u32 kvm_realm_vgic_nr_lr(void);
+u64 kvm_realm_reset_id_aa64dfr0_el1(const struct kvm_vcpu *vcpu, u64 val);
+
bool kvm_rme_supports_sve(void);
int kvm_realm_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap);
diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
index e562e77c1f94..d21042d5ec9a 100644
--- a/arch/arm64/kvm/rme.c
+++ b/arch/arm64/kvm/rme.c
@@ -63,6 +63,28 @@ u32 kvm_realm_vgic_nr_lr(void)
return u64_get_bits(rmm_feat_reg0, RMI_FEATURE_REGISTER_0_GICV3_NUM_LRS);
}
+u64 kvm_realm_reset_id_aa64dfr0_el1(const struct kvm_vcpu *vcpu, u64 val)
+{
+ u32 bps = u64_get_bits(rmm_feat_reg0, RMI_FEATURE_REGISTER_0_NUM_BPS);
+ u32 wps = u64_get_bits(rmm_feat_reg0, RMI_FEATURE_REGISTER_0_NUM_WPS);
+ u32 ctx_cmps;
+
+ if (!kvm_is_realm(vcpu->kvm))
+ return val;
+
+ /* Ensure CTX_CMPs is still valid */
+ ctx_cmps = FIELD_GET(ID_AA64DFR0_EL1_CTX_CMPs, val);
+ ctx_cmps = min(bps, ctx_cmps);
+
+ val &= ~(ID_AA64DFR0_EL1_BRPs_MASK | ID_AA64DFR0_EL1_WRPs_MASK |
+ ID_AA64DFR0_EL1_CTX_CMPs);
+ val |= FIELD_PREP(ID_AA64DFR0_EL1_BRPs_MASK, bps) |
+ FIELD_PREP(ID_AA64DFR0_EL1_WRPs_MASK, wps) |
+ FIELD_PREP(ID_AA64DFR0_EL1_CTX_CMPs, ctx_cmps);
+
+ return val;
+}
+
static int get_start_level(struct realm *realm)
{
return 4 - stage2_pgtable_levels(realm->ia_bits);
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index a4713609e230..55cde43b36b9 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1806,7 +1806,7 @@ static u64 sanitise_id_aa64dfr0_el1(const struct kvm_vcpu *vcpu, u64 val)
/* Hide SPE from guests */
val &= ~ID_AA64DFR0_EL1_PMSVer_MASK;
- return val;
+ return kvm_realm_reset_id_aa64dfr0_el1(vcpu, val);
}
static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
--
2.43.0
* [PATCH v6 35/43] arm64: RME: Set breakpoint parameters through SET_ONE_REG
2024-12-12 15:55 [PATCH v6 00/43] arm64: Support for Arm CCA in KVM Steven Price
` (33 preceding siblings ...)
2024-12-12 15:55 ` [PATCH v6 34/43] arm64: RME: Propagate number of breakpoints and watchpoints to userspace Steven Price
@ 2024-12-12 15:56 ` Steven Price
2024-12-12 15:56 ` [PATCH v6 36/43] arm64: RME: Initialize PMCR.N with number counter supported by RMM Steven Price
` (7 subsequent siblings)
42 siblings, 0 replies; 90+ messages in thread
From: Steven Price @ 2024-12-12 15:56 UTC (permalink / raw)
To: kvm, kvmarm
Cc: Jean-Philippe Brucker, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V,
Steven Price
From: Jean-Philippe Brucker <jean-philippe@linaro.org>
Allow userspace to configure the number of breakpoints and watchpoints
of a Realm VM through KVM_SET_ONE_REG ID_AA64DFR0_EL1.
The KVM sys_reg handler checks the user value against the maximum value
given by RMM (arm64_check_features() gets it from the
read_sanitised_id_aa64dfr0_el1() reset handler).
Userspace discovers that it can write these fields by issuing a
KVM_ARM_GET_REG_WRITABLE_MASKS ioctl.
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Signed-off-by: Steven Price <steven.price@arm.com>
---
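For example, a VMM wanting fewer breakpoints than the RMM maximum could
write the field back before creating the Realm descriptor (sketch;
want_bps is a hypothetical value no larger than the advertised maximum):

	uint64_t dfr0;
	struct kvm_one_reg reg = {
		.id = ARM64_SYS_REG(3, 0, 0, 5, 0),	/* ID_AA64DFR0_EL1 */
		.addr = (uint64_t)&dfr0,
	};

	ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
	dfr0 &= ~(0xfULL << 12);			/* BRPs, bits [15:12] */
	dfr0 |= (uint64_t)(want_bps - 1) << 12;		/* field encodes count - 1 */
	if (ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg) < 0)
		perror("KVM_SET_ONE_REG");		/* rejected if above the RMM limit */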
arch/arm64/kvm/guest.c | 2 ++
arch/arm64/kvm/rme.c | 3 +++
arch/arm64/kvm/sys_regs.c | 17 +++++++++++------
3 files changed, 16 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index b4e3839f6076..60756b4c18fc 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -805,6 +805,7 @@ int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
}
#define KVM_REG_ARM_PMCR_EL0 ARM64_SYS_REG(3, 3, 9, 12, 0)
+#define KVM_REG_ARM_ID_AA64DFR0_EL1 ARM64_SYS_REG(3, 0, 0, 5, 0)
/*
* The RMI ABI only enables setting some GPRs and PC. The selection of GPRs
@@ -823,6 +824,7 @@ static bool validate_realm_set_reg(struct kvm_vcpu *vcpu,
} else {
switch (reg->id) {
case KVM_REG_ARM_PMCR_EL0:
+ case KVM_REG_ARM_ID_AA64DFR0_EL1:
return true;
}
}
diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
index d21042d5ec9a..d77754ce8d14 100644
--- a/arch/arm64/kvm/rme.c
+++ b/arch/arm64/kvm/rme.c
@@ -293,6 +293,7 @@ static int realm_create_rd(struct kvm *kvm)
void *rd = NULL;
phys_addr_t rd_phys, params_phys;
size_t pgd_size = kvm_pgtable_stage2_pgd_size(kvm->arch.mmu.vtcr);
+ u64 dfr0 = kvm_read_vm_id_reg(kvm, SYS_ID_AA64DFR0_EL1);
int i, r;
if (WARN_ON(realm->rd) || WARN_ON(!realm->params))
@@ -324,6 +325,8 @@ static int realm_create_rd(struct kvm *kvm)
params->rtt_num_start = pgd_size / PAGE_SIZE;
params->rtt_base = kvm->arch.mmu.pgd_phys;
params->vmid = realm->vmid;
+ params->num_bps = SYS_FIELD_GET(ID_AA64DFR0_EL1, BRPs, dfr0);
+ params->num_wps = SYS_FIELD_GET(ID_AA64DFR0_EL1, WRPs, dfr0);
if (kvm->arch.arm_pmu) {
params->pmu_num_ctrs = kvm->arch.pmcr_n;
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 55cde43b36b9..750f69abd3a5 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1815,6 +1815,9 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
{
u8 debugver = SYS_FIELD_GET(ID_AA64DFR0_EL1, DebugVer, val);
u8 pmuver = SYS_FIELD_GET(ID_AA64DFR0_EL1, PMUVer, val);
+ u8 bps = SYS_FIELD_GET(ID_AA64DFR0_EL1, BRPs, val);
+ u8 wps = SYS_FIELD_GET(ID_AA64DFR0_EL1, WRPs, val);
+ u8 ctx_cmps = SYS_FIELD_GET(ID_AA64DFR0_EL1, CTX_CMPs, val);
/*
* Prior to commit 3d0dba5764b9 ("KVM: arm64: PMU: Move the
@@ -1834,10 +1837,11 @@ static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
val &= ~ID_AA64DFR0_EL1_PMUVer_MASK;
/*
- * ID_AA64DFR0_EL1.DebugVer is one of those awkward fields with a
- * nonzero minimum safe value.
+ * ID_AA64DFR0_EL1.DebugVer, BRPs and WRPs all have to be greater than
+ * zero. CTX_CMPs is never greater than BRPs.
*/
- if (debugver < ID_AA64DFR0_EL1_DebugVer_IMP)
+ if (debugver < ID_AA64DFR0_EL1_DebugVer_IMP || !bps || !wps ||
+ ctx_cmps > bps)
return -EINVAL;
return set_id_reg(vcpu, rd, val);
@@ -1984,10 +1988,11 @@ static int set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
mutex_lock(&vcpu->kvm->arch.config_lock);
/*
- * Once the VM has started the ID registers are immutable. Reject any
- * write that does not match the final register value.
+ * Once the VM has started or the Realm descriptor is created, the ID
+ * registers are immutable. Reject any write that does not match the
+ * final register value.
*/
- if (kvm_vm_has_ran_once(vcpu->kvm)) {
+ if (kvm_vm_has_ran_once(vcpu->kvm) || kvm_realm_is_created(vcpu->kvm)) {
if (val != read_id_reg(vcpu, rd))
ret = -EBUSY;
else
--
2.43.0
* [PATCH v6 36/43] arm64: RME: Initialize PMCR.N with number counter supported by RMM
2024-12-12 15:55 [PATCH v6 00/43] arm64: Support for Arm CCA in KVM Steven Price
` (34 preceding siblings ...)
2024-12-12 15:56 ` [PATCH v6 35/43] arm64: RME: Set breakpoint parameters through SET_ONE_REG Steven Price
@ 2024-12-12 15:56 ` Steven Price
2024-12-12 15:56 ` [PATCH v6 37/43] arm64: RME: Propagate max SVE vector length from RMM Steven Price
` (6 subsequent siblings)
42 siblings, 0 replies; 90+ messages in thread
From: Steven Price @ 2024-12-12 15:56 UTC (permalink / raw)
To: kvm, kvmarm
Cc: Jean-Philippe Brucker, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V,
Steven Price
From: Jean-Philippe Brucker <jean-philippe@linaro.org>
Provide an accurate number of available PMU counters to userspace when
setting up a Realm.
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Signed-off-by: Steven Price <steven.price@arm.com>
---
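The VMM can then size the guest PMU before creating the Realm
descriptor, e.g. (sketch, reusing the PMCR_EL0 encoding introduced
earlier in this series):

	uint64_t pmcr;
	struct kvm_one_reg reg = {
		.id = ARM64_SYS_REG(3, 3, 9, 12, 0),	/* PMCR_EL0 */
		.addr = (uint64_t)&pmcr,
	};

	ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);	/* PMCR.N defaults to the RMM limit */
	pmcr &= ~(0x1fULL << 11);		/* PMCR_EL0.N, bits [15:11] */
	pmcr |= (uint64_t)num_ctrs << 11;	/* num_ctrs <= the advertised maximum */
	ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);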
arch/arm64/include/asm/kvm_rme.h | 1 +
arch/arm64/kvm/pmu-emul.c | 3 +++
arch/arm64/kvm/rme.c | 5 +++++
3 files changed, 9 insertions(+)
diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/asm/kvm_rme.h
index f8e37907e2d5..7839ef81c20e 100644
--- a/arch/arm64/include/asm/kvm_rme.h
+++ b/arch/arm64/include/asm/kvm_rme.h
@@ -84,6 +84,7 @@ struct realm_rec {
void kvm_init_rme(void);
u32 kvm_realm_ipa_limit(void);
u32 kvm_realm_vgic_nr_lr(void);
+u8 kvm_realm_max_pmu_counters(void);
u64 kvm_realm_reset_id_aa64dfr0_el1(const struct kvm_vcpu *vcpu, u64 val);
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 5628b573ca41..f201f86ed7c8 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -1016,6 +1016,9 @@ u8 kvm_arm_pmu_get_max_counters(struct kvm *kvm)
{
struct arm_pmu *arm_pmu = kvm->arch.arm_pmu;
+ if (kvm_is_realm(kvm))
+ return kvm_realm_max_pmu_counters();
+
/*
* The arm_pmu->cntr_mask considers the fixed counter(s) as well.
* Ignore those and return only the general-purpose counters.
diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
index d77754ce8d14..1230550e2a71 100644
--- a/arch/arm64/kvm/rme.c
+++ b/arch/arm64/kvm/rme.c
@@ -63,6 +63,11 @@ u32 kvm_realm_vgic_nr_lr(void)
return u64_get_bits(rmm_feat_reg0, RMI_FEATURE_REGISTER_0_GICV3_NUM_LRS);
}
+u8 kvm_realm_max_pmu_counters(void)
+{
+ return u64_get_bits(rmm_feat_reg0, RMI_FEATURE_REGISTER_0_PMU_NUM_CTRS);
+}
+
u64 kvm_realm_reset_id_aa64dfr0_el1(const struct kvm_vcpu *vcpu, u64 val)
{
u32 bps = u64_get_bits(rmm_feat_reg0, RMI_FEATURE_REGISTER_0_NUM_BPS);
--
2.43.0
* [PATCH v6 37/43] arm64: RME: Propagate max SVE vector length from RMM
2024-12-12 15:55 [PATCH v6 00/43] arm64: Support for Arm CCA in KVM Steven Price
` (35 preceding siblings ...)
2024-12-12 15:56 ` [PATCH v6 36/43] arm64: RME: Initialize PMCR.N with number counter supported by RMM Steven Price
@ 2024-12-12 15:56 ` Steven Price
2024-12-12 15:56 ` [PATCH v6 38/43] arm64: RME: Configure max SVE vector length for a Realm Steven Price
` (5 subsequent siblings)
42 siblings, 0 replies; 90+ messages in thread
From: Steven Price @ 2024-12-12 15:56 UTC (permalink / raw)
To: kvm, kvmarm
Cc: Jean-Philippe Brucker, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V,
Steven Price
From: Jean-Philippe Brucker <jean-philippe@linaro.org>
The RMM provides the maximum vector length it supports for a guest in
its feature register. Make it visible to the rest of KVM and to
userspace via KVM_REG_ARM64_SVE_VLS.
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Signed-off-by: Steven Price <steven.price@arm.com>
---
arch/arm64/include/asm/kvm_host.h | 2 +-
arch/arm64/include/asm/kvm_rme.h | 1 +
arch/arm64/kvm/guest.c | 2 +-
arch/arm64/kvm/reset.c | 12 ++++++++++--
arch/arm64/kvm/rme.c | 6 ++++++
5 files changed, 19 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 220195c727ef..5b16e7d61de8 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -75,9 +75,9 @@ enum kvm_mode kvm_get_mode(void);
static inline enum kvm_mode kvm_get_mode(void) { return KVM_MODE_NONE; };
#endif
-extern unsigned int __ro_after_init kvm_sve_max_vl;
extern unsigned int __ro_after_init kvm_host_sve_max_vl;
int __init kvm_arm_init_sve(void);
+unsigned int kvm_sve_get_max_vl(struct kvm *kvm);
u32 __attribute_const__ kvm_target_cpu(void);
void kvm_reset_vcpu(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/asm/kvm_rme.h
index 7839ef81c20e..4b2e21d5f37e 100644
--- a/arch/arm64/include/asm/kvm_rme.h
+++ b/arch/arm64/include/asm/kvm_rme.h
@@ -85,6 +85,7 @@ void kvm_init_rme(void);
u32 kvm_realm_ipa_limit(void);
u32 kvm_realm_vgic_nr_lr(void);
u8 kvm_realm_max_pmu_counters(void);
+unsigned int kvm_realm_sve_max_vl(void);
u64 kvm_realm_reset_id_aa64dfr0_el1(const struct kvm_vcpu *vcpu, u64 val);
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 60756b4c18fc..429c8f10b76a 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -377,7 +377,7 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
if (vq_present(vqs, vq))
max_vq = vq;
- if (max_vq > sve_vq_from_vl(kvm_sve_max_vl))
+ if (max_vq > sve_vq_from_vl(kvm_sve_get_max_vl(vcpu->kvm)))
return -EINVAL;
/*
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 16a5d11fc75a..26134c904865 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -46,7 +46,7 @@ unsigned int __ro_after_init kvm_host_sve_max_vl;
#define VCPU_RESET_PSTATE_SVC (PSR_AA32_MODE_SVC | PSR_AA32_A_BIT | \
PSR_AA32_I_BIT | PSR_AA32_F_BIT)
-unsigned int __ro_after_init kvm_sve_max_vl;
+static unsigned int __ro_after_init kvm_sve_max_vl;
int __init kvm_arm_init_sve(void)
{
@@ -76,9 +76,17 @@ int __init kvm_arm_init_sve(void)
return 0;
}
+unsigned int kvm_sve_get_max_vl(struct kvm *kvm)
+{
+ if (kvm_is_realm(kvm))
+ return kvm_realm_sve_max_vl();
+ else
+ return kvm_sve_max_vl;
+}
+
static void kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu)
{
- vcpu->arch.sve_max_vl = kvm_sve_max_vl;
+ vcpu->arch.sve_max_vl = kvm_sve_get_max_vl(vcpu->kvm);
/*
* Userspace can still customize the vector lengths by writing
diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
index 1230550e2a71..39dbc19e4a42 100644
--- a/arch/arm64/kvm/rme.c
+++ b/arch/arm64/kvm/rme.c
@@ -68,6 +68,12 @@ u8 kvm_realm_max_pmu_counters(void)
return u64_get_bits(rmm_feat_reg0, RMI_FEATURE_REGISTER_0_PMU_NUM_CTRS);
}
+unsigned int kvm_realm_sve_max_vl(void)
+{
+ return sve_vl_from_vq(u64_get_bits(rmm_feat_reg0,
+ RMI_FEATURE_REGISTER_0_SVE_VL) + 1);
+}
+
u64 kvm_realm_reset_id_aa64dfr0_el1(const struct kvm_vcpu *vcpu, u64 val)
{
u32 bps = u64_get_bits(rmm_feat_reg0, RMI_FEATURE_REGISTER_0_NUM_BPS);
--
2.43.0
* [PATCH v6 38/43] arm64: RME: Configure max SVE vector length for a Realm
2024-12-12 15:55 [PATCH v6 00/43] arm64: Support for Arm CCA in KVM Steven Price
` (36 preceding siblings ...)
2024-12-12 15:56 ` [PATCH v6 37/43] arm64: RME: Propagate max SVE vector length from RMM Steven Price
@ 2024-12-12 15:56 ` Steven Price
2025-02-02 23:26 ` Gavin Shan
2024-12-12 15:56 ` [PATCH v6 39/43] arm64: RME: Provide register list for unfinalized RME RECs Steven Price
` (4 subsequent siblings)
42 siblings, 1 reply; 90+ messages in thread
From: Steven Price @ 2024-12-12 15:56 UTC (permalink / raw)
To: kvm, kvmarm
Cc: Jean-Philippe Brucker, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V,
Steven Price
From: Jean-Philippe Brucker <jean-philippe@linaro.org>
Obtain the max vector length configured by userspace on the vCPUs, and
write it into the Realm parameters. By default the vCPU is configured
with the max vector length reported by RMM, and userspace can reduce it
with a write to KVM_REG_ARM64_SVE_VLS.
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Signed-off-by: Steven Price <steven.price@arm.com>
---
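Together with the previous patch, the user-space flow is approximately
(sketch; the vCPU is assumed to have been initialised with
KVM_ARM_VCPU_SVE in its features):

	uint64_t vqs[KVM_ARM64_SVE_VLS_WORDS];
	struct kvm_one_reg reg = {
		.id = KVM_REG_ARM64_SVE_VLS,
		.addr = (uint64_t)vqs,
	};

	ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);	/* defaults to the RMM maximum */
	/* ... optionally clear bits for unwanted vector lengths ... */
	ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);

	int feature = KVM_ARM_VCPU_SVE;
	ioctl(vcpu_fd, KVM_ARM_VCPU_FINALIZE, &feature);	/* before the Realm descriptor is created */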
arch/arm64/kvm/guest.c | 3 ++-
arch/arm64/kvm/rme.c | 42 ++++++++++++++++++++++++++++++++++++++++++
2 files changed, 44 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 429c8f10b76a..5562029368c5 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -363,7 +363,7 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
if (!vcpu_has_sve(vcpu))
return -ENOENT;
- if (kvm_arm_vcpu_sve_finalized(vcpu))
+ if (kvm_arm_vcpu_sve_finalized(vcpu) || kvm_realm_is_created(vcpu->kvm))
return -EPERM; /* too late! */
if (WARN_ON(vcpu->arch.sve_state))
@@ -825,6 +825,7 @@ static bool validate_realm_set_reg(struct kvm_vcpu *vcpu,
switch (reg->id) {
case KVM_REG_ARM_PMCR_EL0:
case KVM_REG_ARM_ID_AA64DFR0_EL1:
+ case KVM_REG_ARM64_SVE_VLS:
return true;
}
}
diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
index 39dbc19e4a42..3116ecee37a8 100644
--- a/arch/arm64/kvm/rme.c
+++ b/arch/arm64/kvm/rme.c
@@ -297,6 +297,44 @@ static void realm_unmap_shared_range(struct kvm *kvm,
}
}
+static int realm_init_sve_param(struct kvm *kvm, struct realm_params *params)
+{
+ int ret = 0;
+ unsigned long i;
+ struct kvm_vcpu *vcpu;
+ int max_vl, realm_max_vl = -1;
+
+ /*
+ * Get the preferred SVE configuration, set by userspace with the
+ * KVM_ARM_VCPU_SVE feature and KVM_REG_ARM64_SVE_VLS pseudo-register.
+ */
+ kvm_for_each_vcpu(i, vcpu, kvm) {
+ mutex_lock(&vcpu->mutex);
+ if (vcpu_has_sve(vcpu)) {
+ if (!kvm_arm_vcpu_sve_finalized(vcpu))
+ ret = -EINVAL;
+ max_vl = vcpu->arch.sve_max_vl;
+ } else {
+ max_vl = 0;
+ }
+ mutex_unlock(&vcpu->mutex);
+ if (ret)
+ return ret;
+
+ /* We need all vCPUs to have the same SVE config */
+ if (realm_max_vl >= 0 && realm_max_vl != max_vl)
+ return -EINVAL;
+
+ realm_max_vl = max_vl;
+ }
+
+ if (realm_max_vl > 0) {
+ params->sve_vl = sve_vq_from_vl(realm_max_vl) - 1;
+ params->flags |= RMI_REALM_PARAM_FLAG_SVE;
+ }
+ return 0;
+}
+
static int realm_create_rd(struct kvm *kvm)
{
struct realm *realm = &kvm->arch.realm;
@@ -344,6 +382,10 @@ static int realm_create_rd(struct kvm *kvm)
params->flags |= RMI_REALM_PARAM_FLAG_PMU;
}
+ r = realm_init_sve_param(kvm, params);
+ if (r)
+ goto out_undelegate_tables;
+
params_phys = virt_to_phys(params);
if (rmi_realm_create(rd_phys, params_phys)) {
--
2.43.0
* [PATCH v6 39/43] arm64: RME: Provide register list for unfinalized RME RECs
2024-12-12 15:55 [PATCH v6 00/43] arm64: Support for Arm CCA in KVM Steven Price
` (37 preceding siblings ...)
2024-12-12 15:56 ` [PATCH v6 38/43] arm64: RME: Configure max SVE vector length for a Realm Steven Price
@ 2024-12-12 15:56 ` Steven Price
2024-12-12 15:56 ` [PATCH v6 40/43] arm64: RME: Provide accurate register list Steven Price
` (3 subsequent siblings)
42 siblings, 0 replies; 90+ messages in thread
From: Steven Price @ 2024-12-12 15:56 UTC (permalink / raw)
To: kvm, kvmarm
Cc: Jean-Philippe Brucker, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V,
Steven Price
From: Jean-Philippe Brucker <jean-philippe@linaro.org>
KVM_GET_REG_LIST should not be called before SVE is finalized. The ioctl
handler currently returns -EPERM in this case. But because it uses
kvm_arm_vcpu_is_finalized(), it now also rejects the call for an
unfinalized REC, even though finalizing the REC can only be done late,
after Realm descriptor creation.
Move the check to copy_sve_reg_indices(). One adverse side effect of
this change is that a KVM_GET_REG_LIST call that only probes for the
array size will now succeed even if SVE is not finalized, but that seems
harmless since the following KVM_GET_REG_LIST with the full array will
fail.
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Signed-off-by: Steven Price <steven.price@arm.com>
---
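For reference, the probe-then-fetch pattern referred to above looks like
this (sketch, assuming an initialised vcpu_fd):

	struct kvm_reg_list probe = { .n = 0 };

	/* Fails with E2BIG but fills in the required count */
	ioctl(vcpu_fd, KVM_GET_REG_LIST, &probe);

	struct kvm_reg_list *list =
		malloc(sizeof(*list) + probe.n * sizeof(uint64_t));
	list->n = probe.n;
	if (ioctl(vcpu_fd, KVM_GET_REG_LIST, list) < 0)
		perror("KVM_GET_REG_LIST");	/* e.g. EPERM if SVE is not yet finalized */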
arch/arm64/kvm/arm.c | 4 ----
arch/arm64/kvm/guest.c | 9 +++------
2 files changed, 3 insertions(+), 10 deletions(-)
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index b6f63ab33354..9badb7764667 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1832,10 +1832,6 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
if (unlikely(!kvm_vcpu_initialized(vcpu)))
break;
- r = -EPERM;
- if (!kvm_arm_vcpu_is_finalized(vcpu))
- break;
-
r = -EFAULT;
if (copy_from_user(&reg_list, user_list, sizeof(reg_list)))
break;
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 5562029368c5..24be77888061 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -674,12 +674,9 @@ static unsigned long num_sve_regs(const struct kvm_vcpu *vcpu)
{
const unsigned int slices = vcpu_sve_slices(vcpu);
- if (!vcpu_has_sve(vcpu))
+ if (!vcpu_has_sve(vcpu) || !kvm_arm_vcpu_sve_finalized(vcpu))
return 0;
- /* Policed by KVM_GET_REG_LIST: */
- WARN_ON(!kvm_arm_vcpu_sve_finalized(vcpu));
-
return slices * (SVE_NUM_PREGS + SVE_NUM_ZREGS + 1 /* FFR */)
+ 1; /* KVM_REG_ARM64_SVE_VLS */
}
@@ -695,8 +692,8 @@ static int copy_sve_reg_indices(const struct kvm_vcpu *vcpu,
if (!vcpu_has_sve(vcpu))
return 0;
- /* Policed by KVM_GET_REG_LIST: */
- WARN_ON(!kvm_arm_vcpu_sve_finalized(vcpu));
+ if (!kvm_arm_vcpu_sve_finalized(vcpu))
+ return -EPERM;
/*
* Enumerate this first, so that userspace can save/restore in
--
2.43.0
* [PATCH v6 40/43] arm64: RME: Provide accurate register list
2024-12-12 15:55 [PATCH v6 00/43] arm64: Support for Arm CCA in KVM Steven Price
` (38 preceding siblings ...)
2024-12-12 15:56 ` [PATCH v6 39/43] arm64: RME: Provide register list for unfinalized RME RECs Steven Price
@ 2024-12-12 15:56 ` Steven Price
2024-12-12 15:56 ` [PATCH v6 41/43] arm64: kvm: Expose support for private memory Steven Price
` (2 subsequent siblings)
42 siblings, 0 replies; 90+ messages in thread
From: Steven Price @ 2024-12-12 15:56 UTC (permalink / raw)
To: kvm, kvmarm
Cc: Jean-Philippe Brucker, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V,
Steven Price
From: Jean-Philippe Brucker <jean-philippe@linaro.org>
Userspace can set a few registers with KVM_SET_ONE_REG (9 GP registers
at runtime, and 3 system registers during initialization). Update the
register list returned by KVM_GET_REG_LIST.
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Signed-off-by: Steven Price <steven.price@arm.com>
---
arch/arm64/kvm/guest.c | 19 ++++++++----
arch/arm64/kvm/hypercalls.c | 4 +--
arch/arm64/kvm/sys_regs.c | 58 ++++++++++++++++++++++++++++---------
3 files changed, 60 insertions(+), 21 deletions(-)
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 24be77888061..e8c38eb5c245 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -621,8 +621,6 @@ static const u64 timer_reg_list[] = {
KVM_REG_ARM_PTIMER_CVAL,
};
-#define NUM_TIMER_REGS ARRAY_SIZE(timer_reg_list)
-
static bool is_timer_reg(u64 index)
{
switch (index) {
@@ -637,9 +635,14 @@ static bool is_timer_reg(u64 index)
return false;
}
+static unsigned long num_timer_regs(struct kvm_vcpu *vcpu)
+{
+ return kvm_is_realm(vcpu->kvm) ? 0 : ARRAY_SIZE(timer_reg_list);
+}
+
static int copy_timer_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
{
- for (int i = 0; i < NUM_TIMER_REGS; i++) {
+ for (int i = 0; i < num_timer_regs(vcpu); i++) {
if (put_user(timer_reg_list[i], uindices))
return -EFAULT;
uindices++;
@@ -677,6 +680,9 @@ static unsigned long num_sve_regs(const struct kvm_vcpu *vcpu)
if (!vcpu_has_sve(vcpu) || !kvm_arm_vcpu_sve_finalized(vcpu))
return 0;
+ if (kvm_is_realm(vcpu->kvm))
+ return 1; /* KVM_REG_ARM64_SVE_VLS */
+
return slices * (SVE_NUM_PREGS + SVE_NUM_ZREGS + 1 /* FFR */)
+ 1; /* KVM_REG_ARM64_SVE_VLS */
}
@@ -704,6 +710,9 @@ static int copy_sve_reg_indices(const struct kvm_vcpu *vcpu,
return -EFAULT;
++num_regs;
+ if (kvm_is_realm(vcpu->kvm))
+ return num_regs;
+
for (i = 0; i < slices; i++) {
for (n = 0; n < SVE_NUM_ZREGS; n++) {
reg = KVM_REG_ARM64_SVE_ZREG(n, i);
@@ -742,7 +751,7 @@ unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu)
res += num_sve_regs(vcpu);
res += kvm_arm_num_sys_reg_descs(vcpu);
res += kvm_arm_get_fw_num_regs(vcpu);
- res += NUM_TIMER_REGS;
+ res += num_timer_regs(vcpu);
return res;
}
@@ -776,7 +785,7 @@ int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
ret = copy_timer_indices(vcpu, uindices);
if (ret < 0)
return ret;
- uindices += NUM_TIMER_REGS;
+ uindices += num_timer_regs(vcpu);
return kvm_arm_copy_sys_reg_indices(vcpu, uindices);
}
diff --git a/arch/arm64/kvm/hypercalls.c b/arch/arm64/kvm/hypercalls.c
index 27ce4cb44904..378576771125 100644
--- a/arch/arm64/kvm/hypercalls.c
+++ b/arch/arm64/kvm/hypercalls.c
@@ -407,14 +407,14 @@ void kvm_arm_teardown_hypercalls(struct kvm *kvm)
int kvm_arm_get_fw_num_regs(struct kvm_vcpu *vcpu)
{
- return ARRAY_SIZE(kvm_arm_fw_reg_ids);
+ return kvm_is_realm(vcpu->kvm) ? 0 : ARRAY_SIZE(kvm_arm_fw_reg_ids);
}
int kvm_arm_copy_fw_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
{
int i;
- for (i = 0; i < ARRAY_SIZE(kvm_arm_fw_reg_ids); i++) {
+ for (i = 0; i < kvm_arm_get_fw_num_regs(vcpu); i++) {
if (put_user(kvm_arm_fw_reg_ids[i], uindices++))
return -EFAULT;
}
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 750f69abd3a5..2fcc93d7310e 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -4706,18 +4706,18 @@ int kvm_arm_sys_reg_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg
sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
}
-static unsigned int num_demux_regs(void)
+static unsigned int num_demux_regs(struct kvm_vcpu *vcpu)
{
- return CSSELR_MAX;
+ return kvm_is_realm(vcpu->kvm) ? 0 : CSSELR_MAX;
}
-static int write_demux_regids(u64 __user *uindices)
+static int write_demux_regids(struct kvm_vcpu *vcpu, u64 __user *uindices)
{
u64 val = KVM_REG_ARM64 | KVM_REG_SIZE_U32 | KVM_REG_ARM_DEMUX;
unsigned int i;
val |= KVM_REG_ARM_DEMUX_ID_CCSIDR;
- for (i = 0; i < CSSELR_MAX; i++) {
+ for (i = 0; i < num_demux_regs(vcpu); i++) {
if (put_user(val | i, uindices))
return -EFAULT;
uindices++;
@@ -4725,6 +4725,23 @@ static int write_demux_regids(u64 __user *uindices)
return 0;
}
+static unsigned int num_invariant_regs(struct kvm_vcpu *vcpu)
+{
+ return kvm_is_realm(vcpu->kvm) ? 0 : ARRAY_SIZE(invariant_sys_regs);
+}
+
+static int write_invariant_regids(struct kvm_vcpu *vcpu, u64 __user *uindices)
+{
+ unsigned int i;
+
+ for (i = 0; i < num_invariant_regs(vcpu); i++) {
+ if (put_user(sys_reg_to_index(&invariant_sys_regs[i]), uindices))
+ return -EFAULT;
+ uindices++;
+ }
+ return 0;
+}
+
static u64 sys_reg_to_index(const struct sys_reg_desc *reg)
{
return (KVM_REG_ARM64 | KVM_REG_SIZE_U64 |
@@ -4748,11 +4765,27 @@ static bool copy_reg_to_user(const struct sys_reg_desc *reg, u64 __user **uind)
return true;
}
+static bool kvm_realm_sys_reg_hidden_user(const struct kvm_vcpu *vcpu, u64 reg)
+{
+ if (!kvm_is_realm(vcpu->kvm))
+ return false;
+
+ switch (reg) {
+ case SYS_ID_AA64DFR0_EL1:
+ case SYS_PMCR_EL0:
+ return false;
+ }
+ return true;
+}
+
static int walk_one_sys_reg(const struct kvm_vcpu *vcpu,
const struct sys_reg_desc *rd,
u64 __user **uind,
unsigned int *total)
{
+ if (kvm_realm_sys_reg_hidden_user(vcpu, reg_to_encoding(rd)))
+ return 0;
+
/*
* Ignore registers we trap but don't save,
* and for which no custom user accessor is provided.
@@ -4790,29 +4823,26 @@ static int walk_sys_regs(struct kvm_vcpu *vcpu, u64 __user *uind)
unsigned long kvm_arm_num_sys_reg_descs(struct kvm_vcpu *vcpu)
{
- return ARRAY_SIZE(invariant_sys_regs)
- + num_demux_regs()
+ return num_invariant_regs(vcpu)
+ + num_demux_regs(vcpu)
+ walk_sys_regs(vcpu, (u64 __user *)NULL);
}
int kvm_arm_copy_sys_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
{
- unsigned int i;
int err;
- /* Then give them all the invariant registers' indices. */
- for (i = 0; i < ARRAY_SIZE(invariant_sys_regs); i++) {
- if (put_user(sys_reg_to_index(&invariant_sys_regs[i]), uindices))
- return -EFAULT;
- uindices++;
- }
+ err = write_invariant_regids(vcpu, uindices);
+ if (err)
+ return err;
+ uindices += num_invariant_regs(vcpu);
err = walk_sys_regs(vcpu, uindices);
if (err < 0)
return err;
uindices += err;
- return write_demux_regids(uindices);
+ return write_demux_regids(vcpu, uindices);
}
#define KVM_ARM_FEATURE_ID_RANGE_INDEX(r) \
--
2.43.0
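For illustration, the trimmed lists above are what a VMM observes through
the standard KVM_GET_REG_LIST ioctl. A minimal userspace sketch (a
hypothetical helper, not part of the series) which would simply report
fewer indices for a realm vCPU:

	#include <stdio.h>
	#include <stdlib.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	static void dump_reg_count(int vcpu_fd)
	{
		struct kvm_reg_list probe = { .n = 0 };
		struct kvm_reg_list *list;

		/* The first call fails with E2BIG but fills in the required count. */
		ioctl(vcpu_fd, KVM_GET_REG_LIST, &probe);

		list = calloc(1, sizeof(*list) + probe.n * sizeof(__u64));
		if (!list)
			return;
		list->n = probe.n;
		if (ioctl(vcpu_fd, KVM_GET_REG_LIST, list) == 0)
			printf("%llu registers exposed\n",
			       (unsigned long long)list->n);
		free(list);
	}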
* [PATCH v6 41/43] arm64: kvm: Expose support for private memory
2024-12-12 15:55 [PATCH v6 00/43] arm64: Support for Arm CCA in KVM Steven Price
` (39 preceding siblings ...)
2024-12-12 15:56 ` [PATCH v6 40/43] arm64: RME: Provide accurate register list Steven Price
@ 2024-12-12 15:56 ` Steven Price
2024-12-12 15:56 ` [PATCH v6 42/43] KVM: arm64: Expose KVM_ARM_VCPU_REC to user space Steven Price
2024-12-12 15:56 ` [PATCH v6 43/43] KVM: arm64: Allow activating realms Steven Price
42 siblings, 0 replies; 90+ messages in thread
From: Steven Price @ 2024-12-12 15:56 UTC (permalink / raw)
To: kvm, kvmarm
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V
Select KVM_GENERIC_PRIVATE_MEM and provide the necessary support
functions.
Signed-off-by: Steven Price <steven.price@arm.com>
---
Changes since v2:
* Switch kvm_arch_has_private_mem() to a macro to avoid overhead of a
function call.
* Guard definitions of kvm_arch_{pre,post}_set_memory_attributes() with
#ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES.
* Early out in kvm_arch_post_set_memory_attributes() if the WARN_ON
should trigger.
---
arch/arm64/include/asm/kvm_host.h | 6 ++++++
arch/arm64/kvm/Kconfig | 1 +
arch/arm64/kvm/mmu.c | 22 ++++++++++++++++++++++
3 files changed, 29 insertions(+)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 5b16e7d61de8..837e2d29609d 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1408,6 +1408,12 @@ struct kvm *kvm_arch_alloc_vm(void);
#define vcpu_is_protected(vcpu) kvm_vm_is_protected((vcpu)->kvm)
+#ifdef CONFIG_KVM_PRIVATE_MEM
+#define kvm_arch_has_private_mem(kvm) ((kvm)->arch.is_realm)
+#else
+#define kvm_arch_has_private_mem(kvm) false
+#endif
+
int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature);
bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index ead632ad01b4..7bc1a2c89b3a 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -38,6 +38,7 @@ menuconfig KVM
select HAVE_KVM_VCPU_RUN_PID_CHANGE
select SCHED_INFO
select GUEST_PERF_EVENTS if PERF_EVENTS
+ select KVM_GENERIC_PRIVATE_MEM
help
Support hosting virtualized guest machines.
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index cef7c3dcbf99..8b2d803e558a 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -2314,6 +2314,28 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
return ret;
}
+#ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
+bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm,
+ struct kvm_gfn_range *range)
+{
+ WARN_ON_ONCE(!kvm_arch_has_private_mem(kvm));
+ return false;
+}
+
+bool kvm_arch_post_set_memory_attributes(struct kvm *kvm,
+ struct kvm_gfn_range *range)
+{
+ if (WARN_ON_ONCE(!kvm_arch_has_private_mem(kvm)))
+ return false;
+
+ if (range->arg.attributes & KVM_MEMORY_ATTRIBUTE_PRIVATE)
+ range->only_shared = true;
+ kvm_unmap_gfn_range(kvm, range);
+
+ return false;
+}
+#endif
+
void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
{
}
--
2.43.0
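A hedged sketch of the VMM side of the hooks above: marking a GPA range
as private with KVM_SET_MEMORY_ATTRIBUTES, which leads to
kvm_arch_post_set_memory_attributes() unmapping any shared mappings for
that range (vm_fd is assumed to be the fd of a realm VM):

	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	static int set_range_private(int vm_fd, __u64 gpa, __u64 size)
	{
		struct kvm_memory_attributes attrs = {
			.address = gpa,
			.size = size,
			.attributes = KVM_MEMORY_ATTRIBUTE_PRIVATE,
		};

		/* Returns 0 on success, -1 with errno set on failure. */
		return ioctl(vm_fd, KVM_SET_MEMORY_ATTRIBUTES, &attrs);
	}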
* [PATCH v6 42/43] KVM: arm64: Expose KVM_ARM_VCPU_REC to user space
2024-12-12 15:55 [PATCH v6 00/43] arm64: Support for Arm CCA in KVM Steven Price
` (40 preceding siblings ...)
2024-12-12 15:56 ` [PATCH v6 41/43] arm64: kvm: Expose support for private memory Steven Price
@ 2024-12-12 15:56 ` Steven Price
2024-12-12 15:56 ` [PATCH v6 43/43] KVM: arm64: Allow activating realms Steven Price
42 siblings, 0 replies; 90+ messages in thread
From: Steven Price @ 2024-12-12 15:56 UTC (permalink / raw)
To: kvm, kvmarm
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V
Increment KVM_VCPU_MAX_FEATURES to expose the new capability to user
space.
*NOTE*: This also exposes KVM_ARM_VCPU_HAS_EL2 (as it is one less than
KVM_ARM_VCPU_REC) - so this currently depends on nested virt being
'finished' before merging. See below for discussion:
https://lore.kernel.org/r/a7011738-a084-46fa-947f-395d90b37f8b%40arm.com
Signed-off-by: Steven Price <steven.price@arm.com>
---
arch/arm64/include/asm/kvm_host.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 837e2d29609d..df3168cbe70a 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -40,7 +40,7 @@
#define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
-#define KVM_VCPU_MAX_FEATURES 7
+#define KVM_VCPU_MAX_FEATURES 9
#define KVM_VCPU_VALID_FEATURES (BIT(KVM_VCPU_MAX_FEATURES) - 1)
#define KVM_REQ_SLEEP \
--
2.43.0
* [PATCH v6 43/43] KVM: arm64: Allow activating realms
2024-12-12 15:55 [PATCH v6 00/43] arm64: Support for Arm CCA in KVM Steven Price
` (41 preceding siblings ...)
2024-12-12 15:56 ` [PATCH v6 42/43] KVM: arm64: Expose KVM_ARM_VCPU_REC to user space Steven Price
@ 2024-12-12 15:56 ` Steven Price
42 siblings, 0 replies; 90+ messages in thread
From: Steven Price @ 2024-12-12 15:56 UTC (permalink / raw)
To: kvm, kvmarm
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V
Add the ioctl to activate a realm and set the static branch to enable
access to the realm functionality if the RMM is detected.
Signed-off-by: Steven Price <steven.price@arm.com>
---
arch/arm64/kvm/rme.c | 19 ++++++++++++++++++-
1 file changed, 18 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
index 3116ecee37a8..39d056e5c5b2 100644
--- a/arch/arm64/kvm/rme.c
+++ b/arch/arm64/kvm/rme.c
@@ -1147,6 +1147,20 @@ static int kvm_init_ipa_range_realm(struct kvm *kvm,
return realm_init_ipa_state(realm, addr, end);
}
+static int kvm_activate_realm(struct kvm *kvm)
+{
+ struct realm *realm = &kvm->arch.realm;
+
+ if (kvm_realm_state(kvm) != REALM_STATE_NEW)
+ return -EINVAL;
+
+ if (rmi_realm_activate(virt_to_phys(realm->rd)))
+ return -ENXIO;
+
+ WRITE_ONCE(realm->state, REALM_STATE_ACTIVE);
+ return 0;
+}
+
/* Protects access to rme_vmid_bitmap */
static DEFINE_SPINLOCK(rme_vmid_lock);
static unsigned long *rme_vmid_bitmap;
@@ -1296,6 +1310,9 @@ int kvm_realm_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap)
r = kvm_populate_realm(kvm, &args);
break;
}
+ case KVM_CAP_ARM_RME_ACTIVATE_REALM:
+ r = kvm_activate_realm(kvm);
+ break;
default:
r = -EINVAL;
break;
@@ -1567,5 +1584,5 @@ void kvm_init_rme(void)
if (rme_vmid_init())
return;
- /* Future patch will enable static branch kvm_rme_is_available */
+ static_branch_enable(&kvm_rme_is_available);
}
--
2.43.0
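For reference, a minimal sketch of how a VMM would invoke the new
sub-command through the multiplexed KVM_CAP_ARM_RME capability (names
per the user ABI patch earlier in the series; error handling omitted):

	static int activate_realm(int vm_fd)
	{
		struct kvm_enable_cap cap = {
			.cap = KVM_CAP_ARM_RME,
			.args = { KVM_CAP_ARM_RME_ACTIVATE_REALM },
		};

		return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
	}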
* Re: [PATCH v6 32/43] arm64: rme: Enable PMU support with a realm guest
2024-12-12 15:55 ` [PATCH v6 32/43] arm64: rme: Enable PMU support with a realm guest Steven Price
@ 2024-12-12 16:54 ` Joey Gouly
2025-02-10 11:58 ` Steven Price
0 siblings, 1 reply; 90+ messages in thread
From: Joey Gouly @ 2024-12-12 16:54 UTC (permalink / raw)
To: Steven Price
Cc: kvm, kvmarm, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V
On Thu, Dec 12, 2024 at 03:55:57PM +0000, Steven Price wrote:
> Use the PMU registers from the RmiRecExit structure to identify when an
> overflow interrupt is due and inject it into the guest. Also hook up the
> configuration option for enabling the PMU within the guest.
>
> When entering a realm guest with a PMU interrupt pending, it is
> necessary to disable the physical interrupt. Otherwise, when the RMM
> restores the PMU state, the physical interrupt will trigger, causing an
> immediate exit back to the host. The guest is expected to acknowledge
> the interrupt, causing a host exit (to update the GIC state), which gives
> the host the opportunity to re-enable the physical interrupt before the
> next PMU event.
>
> The number of PMU counters is configured by the VMM by writing to PMCR.N.
>
> Signed-off-by: Steven Price <steven.price@arm.com>
> ---
> Changes since v2:
> * Add a macro kvm_pmu_get_irq_level() to avoid compile issues when PMU
> support is disabled.
> ---
> arch/arm64/kvm/arm.c | 11 +++++++++++
> arch/arm64/kvm/guest.c | 7 +++++++
> arch/arm64/kvm/pmu-emul.c | 3 +++
> arch/arm64/kvm/rme.c | 8 ++++++++
> arch/arm64/kvm/sys_regs.c | 2 +-
> include/kvm/arm_pmu.h | 4 ++++
> 6 files changed, 34 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 6f7f96ab781d..ae3596a25272 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -15,6 +15,7 @@
> #include <linux/vmalloc.h>
> #include <linux/fs.h>
> #include <linux/mman.h>
> +#include <linux/perf/arm_pmu.h>
> #include <linux/sched.h>
> #include <linux/kvm.h>
> #include <linux/kvm_irqfd.h>
> @@ -1209,6 +1210,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
> run->exit_reason = KVM_EXIT_UNKNOWN;
> run->flags = 0;
> while (ret > 0) {
> + bool pmu_stopped = false;
> +
> /*
> * Check conditions before entering the guest
> */
> @@ -1240,6 +1243,11 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
>
> kvm_pmu_flush_hwstate(vcpu);
>
> + if (vcpu_is_rec(vcpu) && kvm_pmu_get_irq_level(vcpu)) {
> + pmu_stopped = true;
> + arm_pmu_set_phys_irq(false);
> + }
> +
> local_irq_disable();
>
> kvm_vgic_flush_hwstate(vcpu);
> @@ -1342,6 +1350,9 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
>
> preempt_enable();
>
> + if (pmu_stopped)
> + arm_pmu_set_phys_irq(true);
> +
> /*
> * The ARMv8 architecture doesn't give the hypervisor
> * a mechanism to prevent a guest from dropping to AArch32 EL0
> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
> index 920e0fd73698..b4e3839f6076 100644
> --- a/arch/arm64/kvm/guest.c
> +++ b/arch/arm64/kvm/guest.c
> @@ -804,6 +804,8 @@ int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
> return kvm_arm_sys_reg_get_reg(vcpu, reg);
> }
>
> +#define KVM_REG_ARM_PMCR_EL0 ARM64_SYS_REG(3, 3, 9, 12, 0)
There's already SYS_PMCR_EL0 defined, so it seems this is not needed?
> +
> /*
> * The RMI ABI only enables setting some GPRs and PC. The selection of GPRs
> * that are available depends on the Realm state and the reason for the last
> @@ -818,6 +820,11 @@ static bool validate_realm_set_reg(struct kvm_vcpu *vcpu,
> u64 off = core_reg_offset_from_id(reg->id);
>
> return kvm_realm_validate_core_reg(off);
> + } else {
> + switch (reg->id) {
> + case KVM_REG_ARM_PMCR_EL0:
> + return true;
> + }
> }
>
> return false;
> diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> index 456102bc0b55..5628b573ca41 100644
> --- a/arch/arm64/kvm/pmu-emul.c
> +++ b/arch/arm64/kvm/pmu-emul.c
> @@ -397,6 +397,9 @@ static bool kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
> {
> u64 reg = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
>
> + if (vcpu_is_rec(vcpu))
> + return vcpu->arch.rec.run->exit.pmu_ovf_status;
> +
> reg &= __vcpu_sys_reg(vcpu, PMINTENSET_EL1);
>
> /*
> diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
> index 27a479feb907..e562e77c1f94 100644
> --- a/arch/arm64/kvm/rme.c
> +++ b/arch/arm64/kvm/rme.c
> @@ -303,6 +303,11 @@ static int realm_create_rd(struct kvm *kvm)
> params->rtt_base = kvm->arch.mmu.pgd_phys;
> params->vmid = realm->vmid;
>
> + if (kvm->arch.arm_pmu) {
> + params->pmu_num_ctrs = kvm->arch.pmcr_n;
> + params->flags |= RMI_REALM_PARAM_FLAG_PMU;
> + }
> +
> params_phys = virt_to_phys(params);
>
> if (rmi_realm_create(rd_phys, params_phys)) {
> @@ -1370,6 +1375,9 @@ int kvm_create_rec(struct kvm_vcpu *vcpu)
> if (!vcpu_has_feature(vcpu, KVM_ARM_VCPU_PSCI_0_2))
> return -EINVAL;
>
> + if (vcpu->kvm->arch.arm_pmu && !kvm_vcpu_has_pmu(vcpu))
> + return -EINVAL;
> +
> BUILD_BUG_ON(sizeof(*params) > PAGE_SIZE);
> BUILD_BUG_ON(sizeof(*rec->run) > PAGE_SIZE);
>
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 83c6b4a07ef5..a4713609e230 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1324,7 +1324,7 @@ static int set_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
> * implements. Ignore this error to maintain compatibility
> * with the existing KVM behavior.
> */
> - if (!kvm_vm_has_ran_once(kvm) &&
> + if (!kvm_vm_has_ran_once(kvm) && !kvm_realm_is_created(kvm) &&
> new_n <= kvm_arm_pmu_get_max_counters(kvm))
> kvm->arch.pmcr_n = new_n;
>
> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
> index e61dd7dd2286..30625a6fd143 100644
> --- a/include/kvm/arm_pmu.h
> +++ b/include/kvm/arm_pmu.h
> @@ -77,6 +77,8 @@ void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu);
> void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
> void kvm_vcpu_pmu_resync_el0(void);
>
> +#define kvm_pmu_get_irq_level(vcpu) ((vcpu)->arch.pmu.irq_level)
> +
> #define kvm_vcpu_has_pmu(vcpu) \
> (vcpu_has_feature(vcpu, KVM_ARM_VCPU_PMU_V3))
>
> @@ -164,6 +166,8 @@ static inline u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1)
> return 0;
> }
>
> +#define kvm_pmu_get_irq_level(vcpu) (false)
> +
> #define kvm_vcpu_has_pmu(vcpu) ({ false; })
> static inline void kvm_pmu_update_vcpu_events(struct kvm_vcpu *vcpu) {}
> static inline void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu) {}
Thanks,
Joey
* Re: [PATCH v6 03/43] arm64: RME: Handle Granule Protection Faults (GPFs)
2024-12-12 15:55 ` [PATCH v6 03/43] arm64: RME: Handle Granule Protection Faults (GPFs) Steven Price
@ 2024-12-16 16:45 ` Suzuki K Poulose
0 siblings, 0 replies; 90+ messages in thread
From: Suzuki K Poulose @ 2024-12-16 16:45 UTC (permalink / raw)
To: Steven Price, kvm, kvmarm
Cc: Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
Oliver Upton, Zenghui Yu, linux-arm-kernel, linux-kernel,
Joey Gouly, Alexandru Elisei, Christoffer Dall, Fuad Tabba,
linux-coco, Ganapatrao Kulkarni, Gavin Shan, Shanker Donthineni,
Alper Gun, Aneesh Kumar K . V
On 12/12/2024 15:55, Steven Price wrote:
> If the host attempts to access granules that have been delegated for use
> in a realm these accesses will be caught and will trigger a Granule
> Protection Fault (GPF).
>
> A fault during a page walk signals a bug in the kernel and is handled by
> oopsing the kernel. A non-page walk fault could be caused by user space
> having access to a page which has been delegated to the kernel and will
> trigger a SIGBUS to allow debugging why user space is trying to access a
> delegated page.
>
> Signed-off-by: Steven Price <steven.price@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
* Re: [PATCH v6 04/43] arm64: RME: Add SMC definitions for calling the RMM
2024-12-12 15:55 ` [PATCH v6 04/43] arm64: RME: Add SMC definitions for calling the RMM Steven Price
@ 2024-12-16 17:51 ` Suzuki K Poulose
0 siblings, 0 replies; 90+ messages in thread
From: Suzuki K Poulose @ 2024-12-16 17:51 UTC (permalink / raw)
To: Steven Price, kvm, kvmarm
Cc: Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
Oliver Upton, Zenghui Yu, linux-arm-kernel, linux-kernel,
Joey Gouly, Alexandru Elisei, Christoffer Dall, Fuad Tabba,
linux-coco, Ganapatrao Kulkarni, Gavin Shan, Shanker Donthineni,
Alper Gun, Aneesh Kumar K . V
On 12/12/2024 15:55, Steven Price wrote:
> The RMM (Realm Management Monitor) provides functionality that can be
> accessed by SMC calls from the host.
>
> The SMC definitions are based on DEN0137[1] version 1.0-rel0
>
> [1] https://developer.arm.com/documentation/den0137/1-0rel0/
>
> Signed-off-by: Steven Price <steven.price@arm.com>
> ---
> Changes since v5:
> * Sorted the SMC #defines by value.
> * Renamed SMI_RxI_CALL to SMI_RMI_CALL since the macro is only used for
> RMI calls.
> * Renamed REC_GIC_NUM_LRS to REC_MAX_GIC_NUM_LRS since the actual
> number of available list registers could be lower.
> * Provided a define for the reserved fields of FeatureRegister0.
> * Fix inconsistent names for padding fields.
> Changes since v4:
> * Update to point to final released RMM spec.
> * Minor rearrangements.
> Changes since v3:
> * Update to match RMM spec v1.0-rel0-rc1.
> Changes since v2:
> * Fix specification link.
> * Rename rec_entry->rec_enter to match spec.
> * Fix size of pmu_ovf_status to match spec.
> ---
> arch/arm64/include/asm/rmi_smc.h | 259 +++++++++++++++++++++++++++++++
> 1 file changed, 259 insertions(+)
> create mode 100644 arch/arm64/include/asm/rmi_smc.h
>
> diff --git a/arch/arm64/include/asm/rmi_smc.h b/arch/arm64/include/asm/rmi_smc.h
> new file mode 100644
> index 000000000000..eec659e284cd
> --- /dev/null
> +++ b/arch/arm64/include/asm/rmi_smc.h
> @@ -0,0 +1,259 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2023-2024 ARM Ltd.
> + *
> + * The values and structures in this file are from the Realm Management Monitor
> + * specification (DEN0137) version 1.0-rel0:
> + * https://developer.arm.com/documentation/den0137/1-0rel0/
> + */
> +
> +#ifndef __ASM_RME_SMC_H
> +#define __ASM_RME_SMC_H
> +
> +#include <linux/arm-smccc.h>
> +
> +#define SMC_RMI_CALL(func) \
> + ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
> + ARM_SMCCC_SMC_64, \
> + ARM_SMCCC_OWNER_STANDARD, \
> + (func))
> +
> +#define SMC_RMI_VERSION SMC_RMI_CALL(0x0150)
> +#define SMC_RMI_GRANULE_DELEGATE SMC_RMI_CALL(0x0151)
> +#define SMC_RMI_GRANULE_UNDELEGATE SMC_RMI_CALL(0x0152)
> +#define SMC_RMI_DATA_CREATE SMC_RMI_CALL(0x0153)
> +#define SMC_RMI_DATA_CREATE_UNKNOWN SMC_RMI_CALL(0x0154)
> +#define SMC_RMI_DATA_DESTROY SMC_RMI_CALL(0x0155)
> +
> +#define SMC_RMI_REALM_ACTIVATE SMC_RMI_CALL(0x0157)
> +#define SMC_RMI_REALM_CREATE SMC_RMI_CALL(0x0158)
> +#define SMC_RMI_REALM_DESTROY SMC_RMI_CALL(0x0159)
> +#define SMC_RMI_REC_CREATE SMC_RMI_CALL(0x015a)
> +#define SMC_RMI_REC_DESTROY SMC_RMI_CALL(0x015b)
> +#define SMC_RMI_REC_ENTER SMC_RMI_CALL(0x015c)
> +#define SMC_RMI_RTT_CREATE SMC_RMI_CALL(0x015d)
> +#define SMC_RMI_RTT_DESTROY SMC_RMI_CALL(0x015e)
> +#define SMC_RMI_RTT_MAP_UNPROTECTED SMC_RMI_CALL(0x015f)
> +
> +#define SMC_RMI_RTT_READ_ENTRY SMC_RMI_CALL(0x0161)
> +#define SMC_RMI_RTT_UNMAP_UNPROTECTED SMC_RMI_CALL(0x0162)
> +
> +#define SMC_RMI_PSCI_COMPLETE SMC_RMI_CALL(0x0164)
> +#define SMC_RMI_FEATURES SMC_RMI_CALL(0x0165)
> +#define SMC_RMI_RTT_FOLD SMC_RMI_CALL(0x0166)
> +#define SMC_RMI_REC_AUX_COUNT SMC_RMI_CALL(0x0167)
> +#define SMC_RMI_RTT_INIT_RIPAS SMC_RMI_CALL(0x0168)
> +#define SMC_RMI_RTT_SET_RIPAS SMC_RMI_CALL(0x0169)
> +
> +#define RMI_ABI_MAJOR_VERSION 1
> +#define RMI_ABI_MINOR_VERSION 0
> +
> +#define RMI_ABI_VERSION_GET_MAJOR(version) ((version) >> 16)
> +#define RMI_ABI_VERSION_GET_MINOR(version) ((version) & 0xFFFF)
> +#define RMI_ABI_VERSION(major, minor) (((major) << 16) | (minor))
> +
> +#define RMI_UNASSIGNED 0
> +#define RMI_ASSIGNED 1
> +#define RMI_TABLE 2
> +
> +#define RMI_RETURN_STATUS(ret) ((ret) & 0xFF)
> +#define RMI_RETURN_INDEX(ret) (((ret) >> 8) & 0xFF)
> +
> +#define RMI_SUCCESS 0
> +#define RMI_ERROR_INPUT 1
> +#define RMI_ERROR_REALM 2
> +#define RMI_ERROR_REC 3
> +#define RMI_ERROR_RTT 4
> +
> +enum rmi_ripas {
> + RMI_EMPTY = 0,
> + RMI_RAM = 1,
> + RMI_DESTROYED = 2,
> +};
> +
> +#define RMI_NO_MEASURE_CONTENT 0
> +#define RMI_MEASURE_CONTENT 1
> +
> +#define RMI_FEATURE_REGISTER_0_S2SZ GENMASK(7, 0)
> +#define RMI_FEATURE_REGISTER_0_LPA2 BIT(8)
> +#define RMI_FEATURE_REGISTER_0_SVE_EN BIT(9)
> +#define RMI_FEATURE_REGISTER_0_SVE_VL GENMASK(13, 10)
> +#define RMI_FEATURE_REGISTER_0_NUM_BPS GENMASK(19, 14)
> +#define RMI_FEATURE_REGISTER_0_NUM_WPS GENMASK(25, 20)
> +#define RMI_FEATURE_REGISTER_0_PMU_EN BIT(26)
> +#define RMI_FEATURE_REGISTER_0_PMU_NUM_CTRS GENMASK(31, 27)
> +#define RMI_FEATURE_REGISTER_0_HASH_SHA_256 BIT(32)
> +#define RMI_FEATURE_REGISTER_0_HASH_SHA_512 BIT(33)
> +#define RMI_FEATURE_REGISTER_0_GICV3_NUM_LRS GENMASK(37, 34)
> +#define RMI_FEATURE_REGISTER_0_MAX_RECS_ORDER GENMASK(41, 38)
> +#define RMI_FEATURE_REGISTER_0_Reserved GENMASK(63, 42)
> +
> +#define RMI_REALM_PARAM_FLAG_LPA2 BIT(0)
> +#define RMI_REALM_PARAM_FLAG_SVE BIT(1)
> +#define RMI_REALM_PARAM_FLAG_PMU BIT(2)
> +
> +/*
> + * Note many of these fields are smaller than u64 but all fields have u64
> + * alignment, so use u64 to ensure correct alignment.
> + */
> +struct realm_params {
> + union { /* 0x0 */
> + struct {
> + u64 flags;
> + u64 s2sz;
> + u64 sve_vl;
> + u64 num_bps;
> + u64 num_wps;
> + u64 pmu_num_ctrs;
> + u64 hash_algo;
> + };
> + u8 padding0[0x400];
> + };
> + union { /* 0x400 */
> + u8 rpv[64];
> + u8 padding1[0x400];
> + };
> + union { /* 0x800 */
> + struct {
> + u64 vmid;
> + u64 rtt_base;
> + s64 rtt_level_start;
> + u64 rtt_num_start;
> + };
> + u8 padding2[0x800];
> + };
> +};
> +
> +/*
> + * The number of GPRs (starting from X0) that are
> + * configured by the host when a REC is created.
> + */
> +#define REC_CREATE_NR_GPRS 8
> +
> +#define REC_PARAMS_FLAG_RUNNABLE BIT_ULL(0)
> +
> +#define REC_PARAMS_AUX_GRANULES 16
> +
> +struct rec_params {
> + union { /* 0x0 */
> + u64 flags;
> + u8 padding0[0x100];
> + };
> + union { /* 0x100 */
> + u64 mpidr;
> + u8 padding1[0x100];
> + };
> + union { /* 0x200 */
> + u64 pc;
> + u8 padding2[0x100];
> + };
> + union { /* 0x300 */
> + u64 gprs[REC_CREATE_NR_GPRS];
> + u8 padding3[0x500];
> + };
> + union { /* 0x800 */
> + struct {
> + u64 num_rec_aux;
> + u64 aux[REC_PARAMS_AUX_GRANULES];
> + };
> + u8 padding4[0x800];
> + };
> +};
> +
> +#define REC_ENTER_EMULATED_MMIO BIT(0)
> +#define REC_ENTER_INJECT_SEA BIT(1)
> +#define REC_ENTER_TRAP_WFI BIT(2)
> +#define REC_ENTER_TRAP_WFE BIT(3)
> +#define REC_ENTER_RIPAS_RESPONSE BIT(4)
> +
Minor nit: everywhere else the flag bit definitions follow the pattern
OBJECT_FLAG_xyz. In line with that, should these be REC_ENTER_FLAG_xyz?
The rest looks fine to me.
Suzuki
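As a quick illustration of the encoding helpers quoted above: the
version word packs the major revision in the upper 16 bits and the minor
in the lower 16 bits, and an RMI return code packs a status byte and an
index byte:

	unsigned long ver = RMI_ABI_VERSION(1, 0);	/* 0x00010000 */
	int major = RMI_ABI_VERSION_GET_MAJOR(ver);	/* 1 */
	int minor = RMI_ABI_VERSION_GET_MINOR(ver);	/* 0 */

	/* e.g. an RTT error reported at level 2 */
	unsigned long ret = (2 << 8) | RMI_ERROR_RTT;
	WARN_ON(RMI_RETURN_STATUS(ret) != RMI_ERROR_RTT);
	WARN_ON(RMI_RETURN_INDEX(ret) != 2);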
* Re: [PATCH v6 12/43] arm64: RME: Allocate/free RECs to match vCPUs
2024-12-12 15:55 ` [PATCH v6 12/43] arm64: RME: Allocate/free RECs to match vCPUs Steven Price
@ 2024-12-19 5:37 ` Aneesh Kumar K.V
2025-01-29 4:50 ` Gavin Shan
1 sibling, 0 replies; 90+ messages in thread
From: Aneesh Kumar K.V @ 2024-12-19 5:37 UTC (permalink / raw)
To: Steven Price, kvm, kvmarm
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun
Steven Price <steven.price@arm.com> writes:
> +static int alloc_rec_aux(struct page **aux_pages,
> + u64 *aux_phys_pages,
> + unsigned int num_aux)
> +{
> + int ret;
> + unsigned int i;
> +
> + for (i = 0; i < num_aux; i++) {
> + struct page *aux_page;
> + phys_addr_t aux_page_phys;
> +
> + aux_page = alloc_page(GFP_KERNEL);
> + if (!aux_page) {
> + ret = -ENOMEM;
> + goto out_err;
> + }
> + aux_page_phys = page_to_phys(aux_page);
> + if (rmi_granule_delegate(aux_page_phys)) {
> + __free_page(aux_page);
> + ret = -ENXIO;
> + goto out_err;
> + }
> + aux_pages[i] = aux_page;
> + aux_phys_pages[i] = aux_page_phys;
> + }
> +
> + return 0;
> +out_err:
> + free_rec_aux(aux_pages, i);
> + return ret;
> +}
>
Could we switch the above to the alloc/free helpers, so that all
granule allocation is done by alloc_delegated_granule()?
static int alloc_rec_aux(struct realm *realm, u64 *aux_phys_pages,
			 unsigned int num_aux)
{
	int ret;
	unsigned int i;

	for (i = 0; i < num_aux; i++) {
		phys_addr_t aux_page_phys;

		aux_page_phys = alloc_delegated_granule(realm, NULL, GFP_KERNEL);
		if (aux_page_phys == PHYS_ADDR_MAX) {
			ret = -ENOMEM;
			goto out_err;
		}
		aux_phys_pages[i] = aux_page_phys;
	}

	return 0;
out_err:
	free_rec_aux(realm, aux_phys_pages, i);
	return ret;
}
* Re: [PATCH v6 06/43] arm64: RME: Check for RME support at KVM init
2024-12-12 15:55 ` [PATCH v6 06/43] arm64: RME: Check for RME support at KVM init Steven Price
@ 2024-12-19 5:44 ` Aneesh Kumar K.V
2025-01-28 5:47 ` Gavin Shan
2025-01-29 3:57 ` Gavin Shan
1 sibling, 1 reply; 90+ messages in thread
From: Aneesh Kumar K.V @ 2024-12-19 5:44 UTC (permalink / raw)
To: Steven Price, kvm, kvmarm
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun
Steven Price <steven.price@arm.com> writes:
> +static int rmi_check_version(void)
> +{
> + struct arm_smccc_res res;
> + int version_major, version_minor;
> + unsigned long host_version = RMI_ABI_VERSION(RMI_ABI_MAJOR_VERSION,
> + RMI_ABI_MINOR_VERSION);
> +
> + arm_smccc_1_1_invoke(SMC_RMI_VERSION, host_version, &res);
> +
> + if (res.a0 == SMCCC_RET_NOT_SUPPORTED)
> + return -ENXIO;
> +
> + version_major = RMI_ABI_VERSION_GET_MAJOR(res.a1);
> + version_minor = RMI_ABI_VERSION_GET_MINOR(res.a1);
> +
> + if (res.a0 != RMI_SUCCESS) {
> + kvm_err("Unsupported RMI ABI (v%d.%d) we want v%d.%d\n",
> + version_major, version_minor,
> + RMI_ABI_MAJOR_VERSION,
> + RMI_ABI_MINOR_VERSION);
> + return -ENXIO;
> + }
> +
> + kvm_info("RMI ABI version %d.%d\n", version_major, version_minor);
> +
> + return 0;
> +}
> +
Should we include both the high and low supported version numbers in the
kvm_err message on error? i.e.,
high_version_major = RMI_ABI_VERSION_GET_MAJOR(res.a2);
high_version_minor = RMI_ABI_VERSION_GET_MINOR(res.a2);
-aneesh
* Re: [PATCH v6 06/43] arm64: RME: Check for RME support at KVM init
2024-12-19 5:44 ` Aneesh Kumar K.V
@ 2025-01-28 5:47 ` Gavin Shan
0 siblings, 0 replies; 90+ messages in thread
From: Gavin Shan @ 2025-01-28 5:47 UTC (permalink / raw)
To: Aneesh Kumar K.V, Steven Price, kvm, kvmarm
Cc: Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
Oliver Upton, Suzuki K Poulose, Zenghui Yu, linux-arm-kernel,
linux-kernel, Joey Gouly, Alexandru Elisei, Christoffer Dall,
Fuad Tabba, linux-coco, Ganapatrao Kulkarni, Shanker Donthineni,
Alper Gun
On 12/19/24 3:44 PM, Aneesh Kumar K.V wrote:
> Steven Price <steven.price@arm.com> writes:
>
>> +static int rmi_check_version(void)
>> +{
>> + struct arm_smccc_res res;
>> + int version_major, version_minor;
>> + unsigned long host_version = RMI_ABI_VERSION(RMI_ABI_MAJOR_VERSION,
>> + RMI_ABI_MINOR_VERSION);
>> +
>> + arm_smccc_1_1_invoke(SMC_RMI_VERSION, host_version, &res);
>> +
>> + if (res.a0 == SMCCC_RET_NOT_SUPPORTED)
>> + return -ENXIO;
>> +
>> + version_major = RMI_ABI_VERSION_GET_MAJOR(res.a1);
>> + version_minor = RMI_ABI_VERSION_GET_MINOR(res.a1);
>> +
>> + if (res.a0 != RMI_SUCCESS) {
>> + kvm_err("Unsupported RMI ABI (v%d.%d) we want v%d.%d\n",
>> + version_major, version_minor,
>> + RMI_ABI_MAJOR_VERSION,
>> + RMI_ABI_MINOR_VERSION);
>> + return -ENXIO;
>> + }
>> +
>> + kvm_info("RMI ABI version %d.%d\n", version_major, version_minor);
>> +
>> + return 0;
>> +}
>> +
>
> Should we include both high and low version numbers in the kvm_err
> message on error? ie,
>
> high_version_major = RMI_ABI_VERSION_GET_MAJOR(res.a2);
> high_version_minor = RMI_ABI_VERSION_GET_MINOR(res.a2);
>
I think so, since a range of supported versions is returned in the failing case.
Besides, 'unsigned short' is more suitable for the local variables
version_{major, minor} since both are 16 bits wide; 'unsigned short' makes
their width explicit.
Thanks,
Gavin
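Putting the two suggestions together, the failure path could print the
full supported range. A sketch, assuming (per the RMM spec) that the
lower supported revision is returned in a1 and the higher revision in a2:

	if (res.a0 != RMI_SUCCESS) {
		kvm_err("Unsupported RMI ABI (v%d.%d - v%d.%d) we want v%d.%d\n",
			RMI_ABI_VERSION_GET_MAJOR(res.a1),
			RMI_ABI_VERSION_GET_MINOR(res.a1),
			RMI_ABI_VERSION_GET_MAJOR(res.a2),
			RMI_ABI_VERSION_GET_MINOR(res.a2),
			RMI_ABI_MAJOR_VERSION, RMI_ABI_MINOR_VERSION);
		return -ENXIO;
	}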
* Re: [PATCH v6 07/43] arm64: RME: Define the user ABI
2024-12-12 15:55 ` [PATCH v6 07/43] arm64: RME: Define the user ABI Steven Price
@ 2025-01-28 23:51 ` Gavin Shan
2025-01-29 16:31 ` Steven Price
0 siblings, 1 reply; 90+ messages in thread
From: Gavin Shan @ 2025-01-28 23:51 UTC (permalink / raw)
To: Steven Price, kvm, kvmarm
Cc: Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
Oliver Upton, Suzuki K Poulose, Zenghui Yu, linux-arm-kernel,
linux-kernel, Joey Gouly, Alexandru Elisei, Christoffer Dall,
Fuad Tabba, linux-coco, Ganapatrao Kulkarni, Shanker Donthineni,
Alper Gun, Aneesh Kumar K . V
On 12/13/24 1:55 AM, Steven Price wrote:
> There is one (multiplexed) CAP which can be used to create, populate and
> then activate the realm.
>
> Co-developed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> Signed-off-by: Steven Price <steven.price@arm.com>
> ---
> Changes from v5:
> * Actually expose the new VCPU capability (KVM_ARM_VCPU_REC) by bumping
> KVM_VCPU_MAX_FEATURES - note this also exposes KVM_ARM_VCPU_HAS_EL2!
> ---
> Documentation/virt/kvm/api.rst | 1 +
> arch/arm64/include/uapi/asm/kvm.h | 49 +++++++++++++++++++++++++++++++
> include/uapi/linux/kvm.h | 12 ++++++++
> 3 files changed, 62 insertions(+)
>
> diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
> index 454c2aaa155e..df4679415a4c 100644
> --- a/Documentation/virt/kvm/api.rst
> +++ b/Documentation/virt/kvm/api.rst
> @@ -5088,6 +5088,7 @@ Recognised values for feature:
>
> ===== ===========================================
> arm64 KVM_ARM_VCPU_SVE (requires KVM_CAP_ARM_SVE)
> + arm64 KVM_ARM_VCPU_REC (requires KVM_CAP_ARM_RME)
> ===== ===========================================
>
> Finalizes the configuration of the specified vcpu feature.
> diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
> index 66736ff04011..8810719523ec 100644
> --- a/arch/arm64/include/uapi/asm/kvm.h
> +++ b/arch/arm64/include/uapi/asm/kvm.h
> @@ -108,6 +108,7 @@ struct kvm_regs {
> #define KVM_ARM_VCPU_PTRAUTH_ADDRESS 5 /* VCPU uses address authentication */
> #define KVM_ARM_VCPU_PTRAUTH_GENERIC 6 /* VCPU uses generic authentication */
> #define KVM_ARM_VCPU_HAS_EL2 7 /* Support nested virtualization */
> +#define KVM_ARM_VCPU_REC 8 /* VCPU REC state as part of Realm */
>
> struct kvm_vcpu_init {
> __u32 target;
> @@ -418,6 +419,54 @@ enum {
> #define KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES 3
> #define KVM_DEV_ARM_ITS_CTRL_RESET 4
>
> +/* KVM_CAP_ARM_RME on VM fd */
> +#define KVM_CAP_ARM_RME_CONFIG_REALM 0
> +#define KVM_CAP_ARM_RME_CREATE_RD 1
> +#define KVM_CAP_ARM_RME_INIT_IPA_REALM 2
> +#define KVM_CAP_ARM_RME_POPULATE_REALM 3
> +#define KVM_CAP_ARM_RME_ACTIVATE_REALM 4
> +
I guess it would be nice to rename KVM_CAP_ARM_RME_CREATE_RD to KVM_CAP_ARM_RME_CREATE_REALM,
since it creates a realm and all the other macros have the suffix "_REALM". Besides,
KVM_CAP_ARM_RME_INIT_IPA_REALM would become KVM_CAP_ARM_RME_INIT_REALM_IPA, and
KVM_CAP_ARM_RME_POPULATE_REALM would become KVM_CAP_ARM_RME_POPULATE_REALM_IPA.
Something like below:
/* KVM_CAP_ARM_RME on VM fd */
#define KVM_CAP_ARM_RME_CONFIG_REALM 0
#define KVM_CAP_ARM_RME_CREATE_REALM 1
#define KVM_CAP_ARM_RME_INIT_REALM_IPA 2
#define KVM_CAP_ARM_RME_POPULATE_REALM_IPA 3
#define KVM_CAP_ARM_RME_ACTIVATE_REALM 4
> +#define KVM_CAP_ARM_RME_MEASUREMENT_ALGO_SHA256 0
> +#define KVM_CAP_ARM_RME_MEASUREMENT_ALGO_SHA512 1
> +
> +#define KVM_CAP_ARM_RME_RPV_SIZE 64
> +
> +/* List of configuration items accepted for KVM_CAP_ARM_RME_CONFIG_REALM */
> +#define KVM_CAP_ARM_RME_CFG_RPV 0
> +#define KVM_CAP_ARM_RME_CFG_HASH_ALGO 1
> +
The comment for the list of configuration items accepted by KVM_CAP_ARM_RME_CONFIG_REALM
should be moved before KVM_CAP_ARM_RME_MEASUREMENT_ALGO_SHA256, so that it covers all the
definitions that apply to KVM_CAP_ARM_RME_CONFIG_REALM.
Besides, the "KVM_CAP_" prefix in those definitions, except for the first 5 like
KVM_CAP_ARM_RME_CONFIG_REALM, is confusing. Macros with a "KVM_CAP_" prefix usually
indicate capabilities, but in this specific case they're applied to the argument (struct)
carried by the corresponding sub-command. So I would suggest eliminating the prefix,
something like below:
/* List of configurations accepted for KVM_CAP_ARM_RME_CONFIG_REALM */
#define ARM_RME_CONFIG_RPV 0
#define ARM_RME_CONFIG_HASH_ALGO 1
#define ARM_RME_CONFIG_MEASUREMENT_ALGO_SHA256 0
#define ARM_RME_CONFIG_MEASUREMENT_ALGO_SHA512 1
#define ARM_RME_CONFIG_RPV_SIZE 64
struct arm_rme_config {
:
};
> +struct kvm_cap_arm_rme_config_item {
> + __u32 cfg;
> + union {
> + /* cfg == KVM_CAP_ARM_RME_CFG_RPV */
> + struct {
> + __u8 rpv[KVM_CAP_ARM_RME_RPV_SIZE];
> + };
> +
> + /* cfg == KVM_CAP_ARM_RME_CFG_HASH_ALGO */
> + struct {
> + __u32 hash_algo;
> + };
> +
> + /* Fix the size of the union */
> + __u8 reserved[256];
> + };
> +};
> +
> +#define KVM_ARM_RME_POPULATE_FLAGS_MEASURE BIT(0)
> +struct kvm_cap_arm_rme_populate_realm_args {
> + __u64 populate_ipa_base;
> + __u64 populate_ipa_size;
> + __u32 flags;
> + __u32 reserved[3];
> +};
> +
BIT(0) has type 'unsigned long', which is inconsistent with '__u32 flags'. So it
would be something like below:
#define ARM_RME_POPULATE_REALM_IPA_FLAG_MEASURE	(1 << 0)

struct arm_rme_populate_realm_ipa {
	__u64 base;
	__u64 size;
	__u32 flags;
	__u32 reserved[3];
};
> +struct kvm_cap_arm_rme_init_ipa_args {
> + __u64 init_ipa_base;
> + __u64 init_ipa_size;
> + __u32 reserved[4];
> +};
> +
Similarly, it would be something like below:
struct arm_rme_init_realm_ipa {
	__u64 base;
	__u64 size;
	__u64 reserved[2];
};
> /* Device Control API on vcpu fd */
> #define KVM_ARM_VCPU_PMU_V3_CTRL 0
> #define KVM_ARM_VCPU_PMU_V3_IRQ 0
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index 502ea63b5d2e..f448198838cf 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -934,6 +934,8 @@ struct kvm_enable_cap {
> #define KVM_CAP_X86_APIC_BUS_CYCLES_NS 237
> #define KVM_CAP_X86_GUEST_MODE 238
>
> +#define KVM_CAP_ARM_RME 300 /* FIXME: Large number to prevent conflicts */
> +
> struct kvm_irq_routing_irqchip {
> __u32 irqchip;
> __u32 pin;
> @@ -1581,4 +1583,14 @@ struct kvm_pre_fault_memory {
> __u64 padding[5];
> };
>
> +/* Available with KVM_CAP_ARM_RME, only for VMs with KVM_VM_TYPE_ARM_REALM */
> +struct kvm_arm_rmm_psci_complete {
> + __u64 target_mpidr;
> + __u32 psci_status;
> + __u32 padding[3];
> +};
> +
> +/* FIXME: Update nr (0xd2) when merging */
> +#define KVM_ARM_VCPU_RMM_PSCI_COMPLETE _IOW(KVMIO, 0xd2, struct kvm_arm_rmm_psci_complete)
> +
> #endif /* __LINUX_KVM_H */
Thanks,
Gavin
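For concreteness, a hedged sketch of how a VMM would drive the
configuration interface discussed above, using the names as currently
posted (vm_fd is assumed to be the fd of a realm VM):

	struct kvm_cap_arm_rme_config_item item = {
		.cfg = KVM_CAP_ARM_RME_CFG_HASH_ALGO,
		.hash_algo = KVM_CAP_ARM_RME_MEASUREMENT_ALGO_SHA256,
	};
	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_ARM_RME,
		.args = { KVM_CAP_ARM_RME_CONFIG_REALM,
			  (__u64)(unsigned long)&item },
	};

	ioctl(vm_fd, KVM_ENABLE_CAP, &cap);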
* Re: [PATCH v6 08/43] arm64: RME: ioctls to create and configure realms
2024-12-12 15:55 ` [PATCH v6 08/43] arm64: RME: ioctls to create and configure realms Steven Price
@ 2025-01-29 0:43 ` Gavin Shan
2025-01-30 11:57 ` Steven Price
0 siblings, 1 reply; 90+ messages in thread
From: Gavin Shan @ 2025-01-29 0:43 UTC (permalink / raw)
To: Steven Price, kvm, kvmarm
Cc: Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
Oliver Upton, Suzuki K Poulose, Zenghui Yu, linux-arm-kernel,
linux-kernel, Joey Gouly, Alexandru Elisei, Christoffer Dall,
Fuad Tabba, linux-coco, Ganapatrao Kulkarni, Shanker Donthineni,
Alper Gun, Aneesh Kumar K . V
On 12/13/24 1:55 AM, Steven Price wrote:
> Add the KVM_CAP_ARM_RME_CREATE_RD ioctl to create a realm. This involves
> delegating pages to the RMM to hold the Realm Descriptor (RD) and for
> the base level of the Realm Translation Tables (RTT). A VMID also need
> to be picked, since the RMM has a separate VMID address space a
> dedicated allocator is added for this purpose.
>
> KVM_CAP_ARM_RME_CONFIG_REALM is provided to allow configuring the realm
> before it is created. Configuration options can be classified as:
>
> 1. Parameters specific to the Realm stage2 (e.g. IPA Size, vmid, stage2
> entry level, entry level RTTs, number of RTTs in start level, LPA2)
> Most of these are not measured by the RMM and come from KVM
> bookkeeping.
>
> 2. Parameters controlling "Arm Architecture features for the VM". (e.g.
> SVE VL, PMU counters, number of HW BRPs/WPs), configured by the VMM
> using the "user ID register write" mechanism. These will be
> supported in the later patches.
>
> 3. Parameters are not part of the core Arm architecture but defined
> by the RMM spec (e.g. Hash algorithm for measurement,
> Personalisation value). These are programmed via
> KVM_CAP_ARM_RME_CONFIG_REALM.
>
> For the IPA size there is the possibility that the RMM supports a
> different size to the IPA size supported by KVM for normal guests. At
> the moment the 'normal limit' is exposed by KVM_CAP_ARM_VM_IPA_SIZE and
> the IPA size is configured by the bottom bits of vm_type in
> KVM_CREATE_VM. This means that it isn't easy for the VMM to discover
> what IPA sizes are supported for Realm guests. Since the IPA is part of
> the measurement of the realm guest the current expectation is that the
> VMM will be required to pick the IPA size demanded by attestation and
> therefore simply failing if this isn't available is fine. An option
> would be to expose a new capability ioctl to obtain the RMM's maximum
> IPA size if this is needed in the future.
>
> Co-developed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> Signed-off-by: Steven Price <steven.price@arm.com>
> ---
> Changes since v5:
> * Introduce free_delegated_granule() to replace many
> undelegate/free_page() instances and centralise the comment on
> leaking when the undelegate fails.
> * Several other minor improvements suggested by reviews - thanks for
> the feedback!
> Changes since v2:
> * Improved commit description.
> * Improved return failures for rmi_check_version().
> * Clear contents of PGD after it has been undelegated in case the RMM
> left stale data.
> * Minor changes to reflect changes in previous patches.
> ---
> arch/arm64/include/asm/kvm_emulate.h | 5 +
> arch/arm64/include/asm/kvm_rme.h | 19 ++
> arch/arm64/kvm/arm.c | 18 ++
> arch/arm64/kvm/mmu.c | 22 +-
> arch/arm64/kvm/rme.c | 296 +++++++++++++++++++++++++++
> 5 files changed, 358 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
> index 4d417e39763e..27f54a7778aa 100644
> --- a/arch/arm64/include/asm/kvm_emulate.h
> +++ b/arch/arm64/include/asm/kvm_emulate.h
> @@ -715,6 +715,11 @@ static inline enum realm_state kvm_realm_state(struct kvm *kvm)
> return READ_ONCE(kvm->arch.realm.state);
> }
>
> +static inline bool kvm_realm_is_created(struct kvm *kvm)
> +{
> + return kvm_is_realm(kvm) && kvm_realm_state(kvm) != REALM_STATE_NONE;
> +}
> +
> static inline bool vcpu_is_rec(struct kvm_vcpu *vcpu)
> {
> return false;
> diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/asm/kvm_rme.h
> index 69af5c3a1e44..209cd99f03dd 100644
> --- a/arch/arm64/include/asm/kvm_rme.h
> +++ b/arch/arm64/include/asm/kvm_rme.h
> @@ -6,6 +6,8 @@
> #ifndef __ASM_KVM_RME_H
> #define __ASM_KVM_RME_H
>
> +#include <uapi/linux/kvm.h>
> +
> /**
> * enum realm_state - State of a Realm
> */
> @@ -46,11 +48,28 @@ enum realm_state {
> * struct realm - Additional per VM data for a Realm
> *
> * @state: The lifetime state machine for the realm
> + * @rd: Kernel mapping of the Realm Descriptor (RD)
> + * @params: Parameters for the RMI_REALM_CREATE command
> + * @num_aux: The number of auxiliary pages required by the RMM
> + * @vmid: VMID to be used by the RMM for the realm
> + * @ia_bits: Number of valid Input Address bits in the IPA
> */
> struct realm {
> enum realm_state state;
> +
> + void *rd;
> + struct realm_params *params;
> +
> + unsigned long num_aux;
> + unsigned int vmid;
> + unsigned int ia_bits;
> };
>
> void kvm_init_rme(void);
> +u32 kvm_realm_ipa_limit(void);
> +
> +int kvm_realm_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap);
> +int kvm_init_realm_vm(struct kvm *kvm);
> +void kvm_destroy_realm(struct kvm *kvm);
>
> #endif
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 93087eca9c51..2bbf30d66424 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -153,6 +153,13 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
> }
> mutex_unlock(&kvm->slots_lock);
> break;
> + case KVM_CAP_ARM_RME:
> + if (!kvm_is_realm(kvm))
> + return -EINVAL;
> + mutex_lock(&kvm->lock);
> + r = kvm_realm_enable_cap(kvm, cap);
> + mutex_unlock(&kvm->lock);
> + break;
> default:
> break;
> }
The !kvm_is_realm() check seems unnecessary because the same check is
done in kvm_realm_enable_cap().
> @@ -215,6 +222,13 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
>
> bitmap_zero(kvm->arch.vcpu_features, KVM_VCPU_MAX_FEATURES);
>
> + /* Initialise the realm bits after the generic bits are enabled */
> + if (kvm_is_realm(kvm)) {
> + ret = kvm_init_realm_vm(kvm);
> + if (ret)
> + goto err_free_cpumask;
> + }
> +
> return 0;
>
> err_free_cpumask:
> @@ -274,6 +288,7 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
> kvm_unshare_hyp(kvm, kvm + 1);
>
> kvm_arm_teardown_hypercalls(kvm);
> + kvm_destroy_realm(kvm);
> }
>
> static bool kvm_has_full_ptr_auth(void)
> @@ -421,6 +436,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
> case KVM_CAP_ARM_SUPPORTED_REG_MASK_RANGES:
> r = BIT(0);
> break;
> + case KVM_CAP_ARM_RME:
> + r = static_key_enabled(&kvm_rme_is_available);
> + break;
> default:
> r = 0;
> }
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index c9d46ad57e52..f09d580c12ad 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -863,12 +863,16 @@ static struct kvm_pgtable_mm_ops kvm_s2_mm_ops = {
> .icache_inval_pou = invalidate_icache_guest_page,
> };
>
> -static int kvm_init_ipa_range(struct kvm_s2_mmu *mmu, unsigned long type)
> +static int kvm_init_ipa_range(struct kvm *kvm,
> + struct kvm_s2_mmu *mmu, unsigned long type)
> {
> u32 kvm_ipa_limit = get_kvm_ipa_limit();
> u64 mmfr0, mmfr1;
> u32 phys_shift;
>
> + if (kvm_is_realm(kvm))
> + kvm_ipa_limit = kvm_realm_ipa_limit();
> +
> if (type & ~KVM_VM_TYPE_ARM_IPA_SIZE_MASK)
> return -EINVAL;
>
> @@ -933,7 +937,7 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t
> return -EINVAL;
> }
>
> - err = kvm_init_ipa_range(mmu, type);
> + err = kvm_init_ipa_range(kvm, mmu, type);
> if (err)
> return err;
>
> @@ -1056,6 +1060,20 @@ void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu)
> struct kvm_pgtable *pgt = NULL;
>
> write_lock(&kvm->mmu_lock);
> + if (kvm_is_realm(kvm) &&
> + (kvm_realm_state(kvm) != REALM_STATE_DEAD &&
> + kvm_realm_state(kvm) != REALM_STATE_NONE)) {
> + /* Tearing down RTTs will be added in a later patch */
> + write_unlock(&kvm->mmu_lock);
> +
> + /*
> + * The physical PGD pages are delegated to the RMM, so cannot
> + * be freed at this point. This function will be called again
> + * from kvm_destroy_realm() after the physical pages have been
> + * returned at which point the memory can be freed.
> + */
> + return;
> + }
> pgt = mmu->pgt;
> if (pgt) {
> mmu->pgd_phys = 0;
> diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
> index b88269b80c11..5530ec2653b7 100644
> --- a/arch/arm64/kvm/rme.c
> +++ b/arch/arm64/kvm/rme.c
> @@ -5,9 +5,20 @@
>
> #include <linux/kvm_host.h>
>
> +#include <asm/kvm_emulate.h>
> +#include <asm/kvm_mmu.h>
> #include <asm/rmi_cmds.h>
> #include <asm/virt.h>
>
> +#include <asm/kvm_pgtable.h>
> +
> +static unsigned long rmm_feat_reg0;
> +
> +static bool rme_supports(unsigned long feature)
> +{
> + return !!u64_get_bits(rmm_feat_reg0, feature);
> +}
> +
I don't have a strong opinion, but a better name might be rme_has_feature(),
since the source is RMM feature register 0.
> static int rmi_check_version(void)
> {
> struct arm_smccc_res res;
> @@ -36,6 +47,285 @@ static int rmi_check_version(void)
> return 0;
> }
>
> +u32 kvm_realm_ipa_limit(void)
> +{
> + return u64_get_bits(rmm_feat_reg0, RMI_FEATURE_REGISTER_0_S2SZ);
> +}
> +
> +static int get_start_level(struct realm *realm)
> +{
> + return 4 - stage2_pgtable_levels(realm->ia_bits);
> +}
> +
> +static void free_delegated_granule(phys_addr_t phys)
> +{
> + if (WARN_ON(rmi_granule_undelegate(phys))) {
> + /* Undelegate failed: leak the page */
> + return;
> + }
> +
> + free_page((unsigned long)phys_to_virt(phys));
> +}
> +
> +static int realm_create_rd(struct kvm *kvm)
> +{
> + struct realm *realm = &kvm->arch.realm;
> + struct realm_params *params = realm->params;
> + void *rd = NULL;
> + phys_addr_t rd_phys, params_phys;
> + size_t pgd_size = kvm_pgtable_stage2_pgd_size(kvm->arch.mmu.vtcr);
> + int i, r;
> +
> + if (WARN_ON(realm->rd) || WARN_ON(!realm->params))
> + return -EEXIST;
> +
> + rd = (void *)__get_free_page(GFP_KERNEL);
> + if (!rd)
> + return -ENOMEM;
> +
> + rd_phys = virt_to_phys(rd);
> + if (rmi_granule_delegate(rd_phys)) {
> + r = -ENXIO;
> + goto free_rd;
> + }
> +
> + for (i = 0; i < pgd_size; i += PAGE_SIZE) {
> + phys_addr_t pgd_phys = kvm->arch.mmu.pgd_phys + i;
> +
> + if (rmi_granule_delegate(pgd_phys)) {
> + r = -ENXIO;
> + goto out_undelegate_tables;
> + }
> + }
> +
The step of 'i' here is PAGE_SIZE, which is inconsistent with the code at the
'out_undelegate_tables' label, where 'i' is treated as a page index (stepped
by 1 and multiplied by PAGE_SIZE).
> + realm->ia_bits = VTCR_EL2_IPA(kvm->arch.mmu.vtcr);
> +
> + params->s2sz = VTCR_EL2_IPA(kvm->arch.mmu.vtcr);
> + params->rtt_level_start = get_start_level(realm);
> + params->rtt_num_start = pgd_size / PAGE_SIZE;
> + params->rtt_base = kvm->arch.mmu.pgd_phys;
> + params->vmid = realm->vmid;
> +
> + params_phys = virt_to_phys(params);
> +
> + if (rmi_realm_create(rd_phys, params_phys)) {
> + r = -ENXIO;
> + goto out_undelegate_tables;
> + }
> +
> + if (WARN_ON(rmi_rec_aux_count(rd_phys, &realm->num_aux))) {
> + WARN_ON(rmi_realm_destroy(rd_phys));
> + goto out_undelegate_tables;
> + }
> +
> + realm->rd = rd;
> +
> + return 0;
> +
> +out_undelegate_tables:
> + while (--i >= 0) {
> + phys_addr_t pgd_phys = kvm->arch.mmu.pgd_phys + i * PAGE_SIZE;
> +
> + if (WARN_ON(rmi_granule_undelegate(pgd_phys))) {
> + /* Leak the pages if they cannot be returned */
> + kvm->arch.mmu.pgt = NULL;
> + break;
> + }
> + }
> + if (WARN_ON(rmi_granule_undelegate(rd_phys))) {
> + /* Leak the page if it isn't returned */
> + return r;
> + }
> +free_rd:
> + free_page((unsigned long)rd);
> + return r;
> +}
> +
> +/* Protects access to rme_vmid_bitmap */
> +static DEFINE_SPINLOCK(rme_vmid_lock);
> +static unsigned long *rme_vmid_bitmap;
> +
> +static int rme_vmid_init(void)
> +{
> + unsigned int vmid_count = 1 << kvm_get_vmid_bits();
> +
> + rme_vmid_bitmap = bitmap_zalloc(vmid_count, GFP_KERNEL);
> + if (!rme_vmid_bitmap) {
> + kvm_err("%s: Couldn't allocate rme vmid bitmap\n", __func__);
> + return -ENOMEM;
> + }
> +
> + return 0;
> +}
> +
> +static int rme_vmid_reserve(void)
> +{
> + int ret;
> + unsigned int vmid_count = 1 << kvm_get_vmid_bits();
> +
> + spin_lock(&rme_vmid_lock);
> + ret = bitmap_find_free_region(rme_vmid_bitmap, vmid_count, 0);
> + spin_unlock(&rme_vmid_lock);
> +
> + return ret;
> +}
> +
> +static void rme_vmid_release(unsigned int vmid)
> +{
> + spin_lock(&rme_vmid_lock);
> + bitmap_release_region(rme_vmid_bitmap, vmid, 0);
> + spin_unlock(&rme_vmid_lock);
> +}
> +
> +static int kvm_create_realm(struct kvm *kvm)
> +{
> + struct realm *realm = &kvm->arch.realm;
> + int ret;
> +
> + if (!kvm_is_realm(kvm))
> + return -EINVAL;
This duplicates the check already done in kvm_realm_enable_cap(), so it
can be dropped safely.
> + if (kvm_realm_is_created(kvm))
> + return -EEXIST;
> +
> + ret = rme_vmid_reserve();
> + if (ret < 0)
> + return ret;
> + realm->vmid = ret;
> +
> + ret = realm_create_rd(kvm);
> + if (ret) {
> + rme_vmid_release(realm->vmid);
> + return ret;
> + }
> +
> + WRITE_ONCE(realm->state, REALM_STATE_NEW);
> +
> + /* The realm is up, free the parameters. */
> + free_page((unsigned long)realm->params);
> + realm->params = NULL;
> +
> + return 0;
> +}
> +
> +static int config_realm_hash_algo(struct realm *realm,
> + struct kvm_cap_arm_rme_config_item *cfg)
> +{
> + switch (cfg->hash_algo) {
> + case KVM_CAP_ARM_RME_MEASUREMENT_ALGO_SHA256:
> + if (!rme_supports(RMI_FEATURE_REGISTER_0_HASH_SHA_256))
> + return -EINVAL;
> + break;
> + case KVM_CAP_ARM_RME_MEASUREMENT_ALGO_SHA512:
> + if (!rme_supports(RMI_FEATURE_REGISTER_0_HASH_SHA_512))
> + return -EINVAL;
> + break;
> + default:
> + return -EINVAL;
> + }
> + realm->params->hash_algo = cfg->hash_algo;
> + return 0;
> +}
> +
> +static int kvm_rme_config_realm(struct kvm *kvm, struct kvm_enable_cap *cap)
> +{
> + struct kvm_cap_arm_rme_config_item cfg;
> + struct realm *realm = &kvm->arch.realm;
> + int r = 0;
> +
> + if (kvm_realm_is_created(kvm))
> + return -EBUSY;
> +
> + if (copy_from_user(&cfg, (void __user *)cap->args[1], sizeof(cfg)))
> + return -EFAULT;
> +
> + switch (cfg.cfg) {
> + case KVM_CAP_ARM_RME_CFG_RPV:
> + memcpy(&realm->params->rpv, &cfg.rpv, sizeof(cfg.rpv));
> + break;
> + case KVM_CAP_ARM_RME_CFG_HASH_ALGO:
> + r = config_realm_hash_algo(realm, &cfg);
> + break;
> + default:
> + r = -EINVAL;
> + }
> +
> + return r;
> +}
> +
> +int kvm_realm_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap)
> +{
> + int r = 0;
> +
> + if (!kvm_is_realm(kvm))
> + return -EINVAL;
> +
> + switch (cap->args[0]) {
> + case KVM_CAP_ARM_RME_CONFIG_REALM:
> + r = kvm_rme_config_realm(kvm, cap);
> + break;
> + case KVM_CAP_ARM_RME_CREATE_RD:
> + r = kvm_create_realm(kvm);
> + break;
> + default:
> + r = -EINVAL;
> + break;
> + }
> +
> + return r;
> +}
> +
> +void kvm_destroy_realm(struct kvm *kvm)
> +{
> + struct realm *realm = &kvm->arch.realm;
> + size_t pgd_size = kvm_pgtable_stage2_pgd_size(kvm->arch.mmu.vtcr);
> + int i;
> +
> + if (realm->params) {
> + free_page((unsigned long)realm->params);
> + realm->params = NULL;
> + }
> +
> + if (!kvm_realm_is_created(kvm))
> + return;
> +
> + WRITE_ONCE(realm->state, REALM_STATE_DYING);
> +
> + if (realm->rd) {
> + phys_addr_t rd_phys = virt_to_phys(realm->rd);
> +
> + if (WARN_ON(rmi_realm_destroy(rd_phys)))
> + return;
> + free_delegated_granule(rd_phys);
> + realm->rd = NULL;
> + }
> +
> + rme_vmid_release(realm->vmid);
> +
> + for (i = 0; i < pgd_size; i += PAGE_SIZE) {
> + phys_addr_t pgd_phys = kvm->arch.mmu.pgd_phys + i;
> +
> + if (WARN_ON(rmi_granule_undelegate(pgd_phys)))
> + return;
> + }
> +
> + WRITE_ONCE(realm->state, REALM_STATE_DEAD);
> +
> + /* Now that the Realm is destroyed, free the entry level RTTs */
> + kvm_free_stage2_pgd(&kvm->arch.mmu);
> +}
> +
> +int kvm_init_realm_vm(struct kvm *kvm)
> +{
> + struct realm_params *params;
> +
> + params = (struct realm_params *)get_zeroed_page(GFP_KERNEL);
> + if (!params)
> + return -ENOMEM;
> +
> + kvm->arch.realm.params = params;
> + return 0;
> +}
> +
> void kvm_init_rme(void)
> {
> if (PAGE_SIZE != SZ_4K)
> @@ -46,5 +336,11 @@ void kvm_init_rme(void)
> /* Continue without realm support */
> return;
>
> + if (WARN_ON(rmi_features(0, &rmm_feat_reg0)))
> + return;
> +
> + if (rme_vmid_init())
> + return;
> +
> /* Future patch will enable static branch kvm_rme_is_available */
> }
Thanks,
Gavin
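Stepping back, the realm lifecycle this patch begins to implement looks
roughly like the following from the VMM's point of view (ipa_bits is a
placeholder for the IPA size bits of the machine type; later patches add
the remaining sub-commands):

	int vm_fd = ioctl(kvm_fd, KVM_CREATE_VM,
			  KVM_VM_TYPE_ARM_REALM | ipa_bits);

	/* zero or more KVM_CAP_ARM_RME_CONFIG_REALM calls, then: */
	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_ARM_RME,
		.args = { KVM_CAP_ARM_RME_CREATE_RD },
	};
	ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
	/* ... populate memory, create RECs, and finally activate ... */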
* Re: [PATCH v6 06/43] arm64: RME: Check for RME support at KVM init
2024-12-12 15:55 ` [PATCH v6 06/43] arm64: RME: Check for RME support at KVM init Steven Price
2024-12-19 5:44 ` Aneesh Kumar K.V
@ 2025-01-29 3:57 ` Gavin Shan
2025-01-29 15:24 ` Steven Price
1 sibling, 1 reply; 90+ messages in thread
From: Gavin Shan @ 2025-01-29 3:57 UTC (permalink / raw)
To: Steven Price, kvm, kvmarm
Cc: Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
Oliver Upton, Suzuki K Poulose, Zenghui Yu, linux-arm-kernel,
linux-kernel, Joey Gouly, Alexandru Elisei, Christoffer Dall,
Fuad Tabba, linux-coco, Ganapatrao Kulkarni, Shanker Donthineni,
Alper Gun, Aneesh Kumar K . V
On 12/13/24 1:55 AM, Steven Price wrote:
> Query the RMI version number and check if it is a compatible version. A
> static key is also provided to signal that a supported RMM is available.
>
> Functions are provided to query if a VM or VCPU is a realm (or rec)
> which currently will always return false.
>
> Signed-off-by: Steven Price <steven.price@arm.com>
> ---
> Changes since v5:
> * Reword "unsupported" message from "host supports" to "we want" to
> clarify that 'we' are the 'host'.
> Changes since v2:
> * Drop return value from kvm_init_rme(), it was always 0.
> * Rely on the RMM return value to identify whether the RSI ABI is
> compatible.
> ---
> arch/arm64/include/asm/kvm_emulate.h | 18 +++++++++
> arch/arm64/include/asm/kvm_host.h | 4 ++
> arch/arm64/include/asm/kvm_rme.h | 56 ++++++++++++++++++++++++++++
> arch/arm64/include/asm/virt.h | 1 +
> arch/arm64/kvm/Makefile | 3 +-
> arch/arm64/kvm/arm.c | 6 +++
> arch/arm64/kvm/rme.c | 50 +++++++++++++++++++++++++
> 7 files changed, 137 insertions(+), 1 deletion(-)
> create mode 100644 arch/arm64/include/asm/kvm_rme.h
> create mode 100644 arch/arm64/kvm/rme.c
>
[...]
> +
> +static inline bool kvm_is_realm(struct kvm *kvm)
> +{
> + if (static_branch_unlikely(&kvm_rme_is_available) && kvm)
> + return kvm->arch.is_realm;
> + return false;
> +}
> +
kvm->arch.is_realm can't be true when kvm_rme_is_available is false; it's set
in kvm_arch_init_vm() in PATCH[10]. So this can safely be simplified as below,
provided the change to kvm_arch_init_vm() is slightly modified; more details
can be found in the comments on PATCH[10].
With that change, we don't have to check kvm_rme_is_available every time.
static inline bool kvm_is_realm(struct kvm *kvm)
{
	return (kvm && kvm->arch.is_realm);
}
Thanks,
Gavin
* Re: [PATCH v6 10/43] arm64: kvm: Allow passing machine type in KVM creation
2024-12-12 15:55 ` [PATCH v6 10/43] arm64: kvm: Allow passing machine type in KVM creation Steven Price
@ 2025-01-29 4:07 ` Gavin Shan
2025-01-30 14:14 ` Steven Price
0 siblings, 1 reply; 90+ messages in thread
From: Gavin Shan @ 2025-01-29 4:07 UTC (permalink / raw)
To: Steven Price, kvm, kvmarm
Cc: Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
Oliver Upton, Suzuki K Poulose, Zenghui Yu, linux-arm-kernel,
linux-kernel, Joey Gouly, Alexandru Elisei, Christoffer Dall,
Fuad Tabba, linux-coco, Ganapatrao Kulkarni, Shanker Donthineni,
Alper Gun, Aneesh Kumar K . V
On 12/13/24 1:55 AM, Steven Price wrote:
> Previously machine type was used purely for specifying the physical
> address size of the guest. Reserve the higher bits to specify an ARM
> specific machine type and declare a new type 'KVM_VM_TYPE_ARM_REALM'
> used to create a realm guest.
>
> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> Signed-off-by: Steven Price <steven.price@arm.com>
> ---
> arch/arm64/kvm/arm.c | 17 +++++++++++++++++
> arch/arm64/kvm/mmu.c | 3 ---
> include/uapi/linux/kvm.h | 19 +++++++++++++++----
> 3 files changed, 32 insertions(+), 7 deletions(-)
>
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index c505ec61180a..73016e1e0067 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -207,6 +207,23 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
> mutex_unlock(&kvm->lock);
> #endif
>
> + if (type & ~(KVM_VM_TYPE_ARM_MASK | KVM_VM_TYPE_ARM_IPA_SIZE_MASK))
> + return -EINVAL;
> +
> + switch (type & KVM_VM_TYPE_ARM_MASK) {
> + case KVM_VM_TYPE_ARM_NORMAL:
> + break;
> + case KVM_VM_TYPE_ARM_REALM:
> + kvm->arch.is_realm = true;
> + if (!kvm_is_realm(kvm)) {
> + /* Realm support unavailable */
> + return -EINVAL;
> + }
> + break;
> + default:
> + return -EINVAL;
> + }
> +
> kvm_init_nested(kvm);
>
> ret = kvm_share_hyp(kvm, kvm + 1);
Corresponding to my comments on PATCH[6], this block of code can be modified
to avoid using kvm_is_realm() here. In this way, kvm_is_realm() can be
simplified as I suggested for PATCH[6].
	case KVM_VM_TYPE_ARM_REALM:
		/* -EPERM may be more suitable than -EINVAL */
		if (!static_branch_unlikely(&kvm_rme_is_available))
			return -EPERM;
		kvm->arch.is_realm = true;
		break;
Thanks,
Gavin
* Re: [PATCH v6 12/43] arm64: RME: Allocate/free RECs to match vCPUs
2024-12-12 15:55 ` [PATCH v6 12/43] arm64: RME: Allocate/free RECs to match vCPUs Steven Price
2024-12-19 5:37 ` Aneesh Kumar K.V
@ 2025-01-29 4:50 ` Gavin Shan
2025-01-30 16:40 ` Steven Price
1 sibling, 1 reply; 90+ messages in thread
From: Gavin Shan @ 2025-01-29 4:50 UTC (permalink / raw)
To: Steven Price, kvm, kvmarm
Cc: Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
Oliver Upton, Suzuki K Poulose, Zenghui Yu, linux-arm-kernel,
linux-kernel, Joey Gouly, Alexandru Elisei, Christoffer Dall,
Fuad Tabba, linux-coco, Ganapatrao Kulkarni, Shanker Donthineni,
Alper Gun, Aneesh Kumar K . V
On 12/13/24 1:55 AM, Steven Price wrote:
> The RMM maintains a data structure known as the Realm Execution Context
> (or REC). It is similar to struct kvm_vcpu and tracks the state of the
> virtual CPUs. KVM must delegate memory and request that the structures are
> created when vCPUs are created, and suitably tear them down on destruction.
>
> RECs must also be supplied with additional pages - auxiliary (or AUX)
> granules - for storing the larger register state (e.g. for SVE). The
> number of AUX granules for a REC depends on the parameters with which
> the Realm was created - the RMM makes this information available via the
> RMI_REC_AUX_COUNT call performed after creating the Realm Descriptor (RD).
>
> Note that only some of the register state for the REC can be set by KVM; the
> rest is defined by the RMM (zeroed). The register state then cannot be
> changed by KVM after the REC is created (except when the guest
> explicitly requests this e.g. by performing a PSCI call). The RMM also
> requires that the VMM creates RECs in ascending order of the MPIDR.
>
> See Realm Management Monitor specification (DEN0137) for more information:
> https://developer.arm.com/documentation/den0137/
>
> Signed-off-by: Steven Price <steven.price@arm.com>
> ---
> Changes since v5:
> * Separate the concept of vcpu_is_rec() and
> kvm_arm_vcpu_rec_finalized() by using the KVM_ARM_VCPU_REC feature as
> the indication that the VCPU is a REC.
> Changes since v2:
> * Free rec->run earlier in kvm_destroy_realm() and adapt to previous patches.
> ---
> arch/arm64/include/asm/kvm_emulate.h | 7 ++
> arch/arm64/include/asm/kvm_host.h | 3 +
> arch/arm64/include/asm/kvm_rme.h | 18 ++++
> arch/arm64/kvm/arm.c | 9 ++
> arch/arm64/kvm/reset.c | 11 ++
> arch/arm64/kvm/rme.c | 144 +++++++++++++++++++++++++++
> 6 files changed, 192 insertions(+)
>
> diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
> index 27f54a7778aa..ec2b6d9c9c07 100644
> --- a/arch/arm64/include/asm/kvm_emulate.h
> +++ b/arch/arm64/include/asm/kvm_emulate.h
> @@ -722,7 +722,14 @@ static inline bool kvm_realm_is_created(struct kvm *kvm)
>
> static inline bool vcpu_is_rec(struct kvm_vcpu *vcpu)
> {
> + if (static_branch_unlikely(&kvm_rme_is_available))
> + return vcpu_has_feature(vcpu, KVM_ARM_VCPU_REC);
> return false;
> }
>
It seems the check on kvm_rme_is_available is unnecessary because KVM_ARM_VCPU_REC
can only be set when kvm_rme_is_available is true.
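i.e. presumably this could be reduced to something like:

	static inline bool vcpu_is_rec(struct kvm_vcpu *vcpu)
	{
		return vcpu_has_feature(vcpu, KVM_ARM_VCPU_REC);
	}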
> +static inline bool kvm_arm_vcpu_rec_finalized(struct kvm_vcpu *vcpu)
> +{
> + return vcpu->arch.rec.mpidr != INVALID_HWID;
> +}
> +
> #endif /* __ARM64_KVM_EMULATE_H__ */
I would suggest renaming this to kvm_arm_rec_finalized() since vCPU and REC are
similar objects at the same level. It'd be better to avoid referencing both
object names in the function name.
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 8482638dce3b..220195c727ef 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -789,6 +789,9 @@ struct kvm_vcpu_arch {
>
> /* Per-vcpu CCSIDR override or NULL */
> u32 *ccsidr;
> +
> + /* Realm meta data */
> + struct realm_rec rec;
> };
>
> /*
> diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/asm/kvm_rme.h
> index 32bdedf1d866..62d4a63d3035 100644
> --- a/arch/arm64/include/asm/kvm_rme.h
> +++ b/arch/arm64/include/asm/kvm_rme.h
> @@ -6,6 +6,7 @@
> #ifndef __ASM_KVM_RME_H
> #define __ASM_KVM_RME_H
>
> +#include <asm/rmi_smc.h>
> #include <uapi/linux/kvm.h>
>
> /**
> @@ -65,6 +66,21 @@ struct realm {
> unsigned int ia_bits;
> };
>
> +/**
> + * struct realm_rec - Additional per VCPU data for a Realm
> + *
> + * @mpidr: MPIDR (Multiprocessor Affinity Register) value to identify this VCPU
> + * @rec_page: Kernel VA of the RMM's private page for this REC
> + * @aux_pages: Additional pages private to the RMM for this REC
> + * @run: Kernel VA of the RmiRecRun structure shared with the RMM
> + */
> +struct realm_rec {
> + unsigned long mpidr;
> + void *rec_page;
> + struct page *aux_pages[REC_PARAMS_AUX_GRANULES];
> + struct rec_run *run;
> +};
> +
> void kvm_init_rme(void);
> u32 kvm_realm_ipa_limit(void);
>
> @@ -72,6 +88,8 @@ int kvm_realm_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap);
> int kvm_init_realm_vm(struct kvm *kvm);
> void kvm_destroy_realm(struct kvm *kvm);
> void kvm_realm_destroy_rtts(struct kvm *kvm, u32 ia_bits);
> +int kvm_create_rec(struct kvm_vcpu *vcpu);
> +void kvm_destroy_rec(struct kvm_vcpu *vcpu);
>
> #define RMM_RTT_BLOCK_LEVEL 2
> #define RMM_RTT_MAX_LEVEL 3
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 73016e1e0067..2d97147651be 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -525,6 +525,8 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
> /* Force users to call KVM_ARM_VCPU_INIT */
> vcpu_clear_flag(vcpu, VCPU_INITIALIZED);
>
> + vcpu->arch.rec.mpidr = INVALID_HWID;
> +
> vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
>
> /* Set up the timer */
> @@ -1467,6 +1469,9 @@ static unsigned long system_supported_vcpu_features(void)
> if (!cpus_have_final_cap(ARM64_HAS_NESTED_VIRT))
> clear_bit(KVM_ARM_VCPU_HAS_EL2, &features);
>
> + if (!static_branch_unlikely(&kvm_rme_is_available))
> + clear_bit(KVM_ARM_VCPU_REC, &features);
> +
> return features;
> }
>
> @@ -1506,6 +1511,10 @@ static int kvm_vcpu_init_check_features(struct kvm_vcpu *vcpu,
> if (test_bit(KVM_ARM_VCPU_HAS_EL2, &features))
> return -EINVAL;
>
> + /* RME is incompatible with AArch32 */
> + if (test_bit(KVM_ARM_VCPU_REC, &features))
> + return -EINVAL;
> +
> return 0;
> }
>
One corner case seems to be missed in kvm_vcpu_init_check_features(): it's
allowed to initialize a vCPU with the REC feature even when kvm_is_realm(kvm)
is false. Hopefully I didn't miss something.
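If so, a check along these lines (just a sketch) might close it:

	/* The REC feature only makes sense for a realm VM */
	if (test_bit(KVM_ARM_VCPU_REC, &features) && !kvm_is_realm(vcpu->kvm))
		return -EINVAL;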
[...]
Thanks,
Gavin
^ permalink raw reply [flat|nested] 90+ messages in thread
* Re: [PATCH v6 06/43] arm64: RME: Check for RME support at KVM init
2025-01-29 3:57 ` Gavin Shan
@ 2025-01-29 15:24 ` Steven Price
0 siblings, 0 replies; 90+ messages in thread
From: Steven Price @ 2025-01-29 15:24 UTC (permalink / raw)
To: Gavin Shan, kvm, kvmarm
Cc: Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
Oliver Upton, Suzuki K Poulose, Zenghui Yu, linux-arm-kernel,
linux-kernel, Joey Gouly, Alexandru Elisei, Christoffer Dall,
Fuad Tabba, linux-coco, Ganapatrao Kulkarni, Shanker Donthineni,
Alper Gun, Aneesh Kumar K . V
On 29/01/2025 03:57, Gavin Shan wrote:
> On 12/13/24 1:55 AM, Steven Price wrote:
>> Query the RMI version number and check if it is a compatible version. A
>> static key is also provided to signal that a supported RMM is available.
>>
>> Functions are provided to query if a VM or VCPU is a realm (or rec)
>> which currently will always return false.
>>
>> Signed-off-by: Steven Price <steven.price@arm.com>
>> ---
>> Changes since v5:
>> * Reword "unsupported" message from "host supports" to "we want" to
>> clarify that 'we' are the 'host'.
>> Changes since v2:
>> * Drop return value from kvm_init_rme(), it was always 0.
>> * Rely on the RMM return value to identify whether the RSI ABI is
>> compatible.
>> ---
>> arch/arm64/include/asm/kvm_emulate.h | 18 +++++++++
>> arch/arm64/include/asm/kvm_host.h | 4 ++
>> arch/arm64/include/asm/kvm_rme.h | 56 ++++++++++++++++++++++++++++
>> arch/arm64/include/asm/virt.h | 1 +
>> arch/arm64/kvm/Makefile | 3 +-
>> arch/arm64/kvm/arm.c | 6 +++
>> arch/arm64/kvm/rme.c | 50 +++++++++++++++++++++++++
>> 7 files changed, 137 insertions(+), 1 deletion(-)
>> create mode 100644 arch/arm64/include/asm/kvm_rme.h
>> create mode 100644 arch/arm64/kvm/rme.c
>>
>
> [...]
>
>> +
>> +static inline bool kvm_is_realm(struct kvm *kvm)
>> +{
>> + if (static_branch_unlikely(&kvm_rme_is_available) && kvm)
>> + return kvm->arch.is_realm;
>> + return false;
>> +}
>> +
>
> kvm->arch.is_realm can't be true when kvm_rme_is_available is false. It's
> set in kvm_arch_init_vm() in PATCH[10]. So it's safe to simplify this as
> below if the changes to kvm_arch_init_vm() are slightly modified; more
> details can be found in the comments on PATCH[10].
>
> With those changes, we don't have to check kvm_rme_is_available every time.
This is true. However, kvm_rme_is_available is a static key, which means
that the kernel uses runtime patching to change the value. So by
checking kvm_rme_is_available first, this check is short-circuited and
there's no (data) memory access.
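For reference, a rough sketch of the pattern (the real declarations live in
the earlier patches of this series):

	DEFINE_STATIC_KEY_FALSE(kvm_rme_is_available);

	/* Flipped once at init when a compatible RMM is detected */
	static_branch_enable(&kvm_rme_is_available);

	/*
	 * Each call site is runtime-patched to a NOP or a jump, so the
	 * common (no RME) case never loads the key from memory.
	 */
	if (static_branch_unlikely(&kvm_rme_is_available))
		return kvm->arch.is_realm;
	return false;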
I have to admit I've no idea whether this actually makes any practical
difference to any benchmark, but the intention was to have the minimum
possible overhead on systems which don't have the RME hardware.
Steve
> static inline bool kvm_is_realm(struct kvm *kvm)
> {
> 	return (kvm && kvm->arch.is_realm);
> }
>
> Thanks,
> Gavin
>
^ permalink raw reply [flat|nested] 90+ messages in thread
* Re: [PATCH v6 07/43] arm64: RME: Define the user ABI
2025-01-28 23:51 ` Gavin Shan
@ 2025-01-29 16:31 ` Steven Price
0 siblings, 0 replies; 90+ messages in thread
From: Steven Price @ 2025-01-29 16:31 UTC (permalink / raw)
To: Gavin Shan, kvm, kvmarm
Cc: Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
Oliver Upton, Suzuki K Poulose, Zenghui Yu, linux-arm-kernel,
linux-kernel, Joey Gouly, Alexandru Elisei, Christoffer Dall,
Fuad Tabba, linux-coco, Ganapatrao Kulkarni, Shanker Donthineni,
Alper Gun, Aneesh Kumar K . V
On 28/01/2025 23:51, Gavin Shan wrote:
> On 12/13/24 1:55 AM, Steven Price wrote:
>> There is one (multiplexed) CAP which can be used to create, populate and
>> then activate the realm.
>>
>> Co-developed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
>> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
>> Signed-off-by: Steven Price <steven.price@arm.com>
>> ---
>> Changes from v5:
>> * Actually expose the new VCPU capability (KVM_ARM_VCPU_REC) by bumping
>> KVM_VCPU_MAX_FEATURES - note this also exposes KVM_ARM_VCPU_HAS_EL2!
>> ---
>> Documentation/virt/kvm/api.rst | 1 +
>> arch/arm64/include/uapi/asm/kvm.h | 49 +++++++++++++++++++++++++++++++
>> include/uapi/linux/kvm.h | 12 ++++++++
>> 3 files changed, 62 insertions(+)
>>
>> diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/
>> api.rst
>> index 454c2aaa155e..df4679415a4c 100644
>> --- a/Documentation/virt/kvm/api.rst
>> +++ b/Documentation/virt/kvm/api.rst
>> @@ -5088,6 +5088,7 @@ Recognised values for feature:
>> ===== ===========================================
>> arm64 KVM_ARM_VCPU_SVE (requires KVM_CAP_ARM_SVE)
>> + arm64 KVM_ARM_VCPU_REC (requires KVM_CAP_ARM_RME)
>> ===== ===========================================
>> Finalizes the configuration of the specified vcpu feature.
>> diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/
>> uapi/asm/kvm.h
>> index 66736ff04011..8810719523ec 100644
>> --- a/arch/arm64/include/uapi/asm/kvm.h
>> +++ b/arch/arm64/include/uapi/asm/kvm.h
>> @@ -108,6 +108,7 @@ struct kvm_regs {
>> #define KVM_ARM_VCPU_PTRAUTH_ADDRESS 5 /* VCPU uses address
>> authentication */
>> #define KVM_ARM_VCPU_PTRAUTH_GENERIC 6 /* VCPU uses generic
>> authentication */
>> #define KVM_ARM_VCPU_HAS_EL2 7 /* Support nested
>> virtualization */
>> +#define KVM_ARM_VCPU_REC 8 /* VCPU REC state as part of Realm */
>> struct kvm_vcpu_init {
>> __u32 target;
>> @@ -418,6 +419,54 @@ enum {
>> #define KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES 3
>> #define KVM_DEV_ARM_ITS_CTRL_RESET 4
>> +/* KVM_CAP_ARM_RME on VM fd */
>> +#define KVM_CAP_ARM_RME_CONFIG_REALM 0
>> +#define KVM_CAP_ARM_RME_CREATE_RD 1
>> +#define KVM_CAP_ARM_RME_INIT_IPA_REALM 2
>> +#define KVM_CAP_ARM_RME_POPULATE_REALM 3
>> +#define KVM_CAP_ARM_RME_ACTIVATE_REALM 4
>> +
>
> I guess it would be nice to rename KVM_CAP_ARM_RME_CREATE_RD to
> KVM_CAP_ARM_RME_CREATE_REALM since it creates a realm. All the other
> macros have the suffix "_REALM".
This makes sense - the RMI call is RMI_REALM_CREATE so there's no need
to use the RMM jargon of 'RD' in the public interface here.
> Besides, KVM_CAP_ARM_RME_INIT_IPA_REALM would be
> KVM_CAP_ARM_RME_INIT_REALM_IPA, and KVM_CAP_ARM_RME_POPULATE_REALM
> would be KVM_CAP_ARM_RME_POPULATE_REALM_IPA. Something like below:
>
> /* KVM_CAP_ARM_RME on VM fd */
> #define KVM_CAP_ARM_RME_CONFIG_REALM 0
> #define KVM_CAP_ARM_RME_CREATE_RD 1
> #define KVM_CAP_ARM_RME_INIT_REALM_IPA 2
> #define KVM_CAP_ARM_RME_POPULATE_REALM_IPA 3
> #define KVM_CAP_ARM_RME_ACTIVATE_REALM 4
I'm a little confused here. You've got CREATE_RD again (was that a
mistake?). But the other two changes seem odd to me:
* INIT_IPA_REALM vs INIT_REALM_IPA. Both names are somewhat confusing,
but I read the 'IPA' as being an adjective attached to the init verb.
It's not the IPA (intermediate physical address) that we're
initialising. Perhaps KVM_CAP_ARM_RME_INIT_RIPAS_REALM (matching the RMI
name) would be best?
* POPULATE_REALM vs POPULATE_REALM_IPA. Again the IPA part is superfluous.
>> +#define KVM_CAP_ARM_RME_MEASUREMENT_ALGO_SHA256 0
>> +#define KVM_CAP_ARM_RME_MEASUREMENT_ALGO_SHA512 1
>> +
>> +#define KVM_CAP_ARM_RME_RPV_SIZE 64
>> +
>> +/* List of configuration items accepted for
>> KVM_CAP_ARM_RME_CONFIG_REALM */
>> +#define KVM_CAP_ARM_RME_CFG_RPV 0
>> +#define KVM_CAP_ARM_RME_CFG_HASH_ALGO 1
>> +
>
> The comment for the list of configuration items accepted for
> KVM_CAP_ARM_RME_CONFIG_REALM should be moved before
> KVM_CAP_ARM_RME_MEASUREMENT_ALGO_SHA256 so that it covers all the
> definitions that apply to KVM_CAP_ARM_RME_CONFIG_REALM.
Ack
> Besides, the "KVM_CAP_" prefix in those definitions, except for the first 5
> definitions like KVM_CAP_ARM_RME_CONFIG_REALM, is confusing. Macros with a
> "KVM_CAP_" prefix usually indicate capabilities. In this specific case,
> they're applied to the argument (struct) carried by the corresponding
> sub-command. So I would suggest eliminating the prefix, something like
> below:
>
> /* List of configurations accepted for KVM_CAP_ARM_RME_CONFIG_REALM */
> #define ARM_RME_CONFIG_RPV 0
> #define ARM_RME_CONFIG_HASH_ALGO 1
>
> #define ARM_RME_CONFIG_MEASUREMENT_ALGO_SHA256 0
> #define ARM_RME_CONFIG_MEASUREMENT_ALGO_SHA512 1
>
> #define ARM_RME_CONFIG_RPV_SIZE 64
Yes, that's clearer - thanks for the suggestion.
> struct arm_rme_config {
> :
> };
>
>
>> +struct kvm_cap_arm_rme_config_item {
>> + __u32 cfg;
>> + union {
>> + /* cfg == KVM_CAP_ARM_RME_CFG_RPV */
>> + struct {
>> + __u8 rpv[KVM_CAP_ARM_RME_RPV_SIZE];
>> + };
>> +
>> + /* cfg == KVM_CAP_ARM_RME_CFG_HASH_ALGO */
>> + struct {
>> + __u32 hash_algo;
>> + };
>> +
>> + /* Fix the size of the union */
>> + __u8 reserved[256];
>> + };
>> +};
>> +
>> +#define KVM_ARM_RME_POPULATE_FLAGS_MEASURE BIT(0)
>> +struct kvm_cap_arm_rme_populate_realm_args {
>> + __u64 populate_ipa_base;
>> + __u64 populate_ipa_size;
>> + __u32 flags;
>> + __u32 reserved[3];
>> +};
>> +
>
> BIT(0) has type 'unsigned long', which is inconsistent with '__u32 flags'.
> So it would be something like below:
>
> #define ARM_RME_POPULATE_REALM_IPA_FLAG_MEASURE (1 << 0)
Ok, I'll change, although personally I feel BIT(0) is clearer and it
will happily convert to a u32.
> struct arm_rme_populate_realm_ipa {
> 	__u64 base;
> 	__u64 size;
> 	__u32 flags;
> 	__u32 reserved[3];
> };
>
>> +struct kvm_cap_arm_rme_init_ipa_args {
>> + __u64 init_ipa_base;
>> + __u64 init_ipa_size;
>> + __u32 reserved[4];
>> +};
>> +
>
> Similiarly, it would be something like below:
>
> struct arm_rme_init_realm_ipa {
> 	__u64 base;
> 	__u64 size;
> 	__u64 reserved[2];
> };
Yes, that's better - they were somewhat repetitive before!
Thanks,
Steve
>> /* Device Control API on vcpu fd */
>> #define KVM_ARM_VCPU_PMU_V3_CTRL 0
>> #define KVM_ARM_VCPU_PMU_V3_IRQ 0
>> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
>> index 502ea63b5d2e..f448198838cf 100644
>> --- a/include/uapi/linux/kvm.h
>> +++ b/include/uapi/linux/kvm.h
>> @@ -934,6 +934,8 @@ struct kvm_enable_cap {
>> #define KVM_CAP_X86_APIC_BUS_CYCLES_NS 237
>> #define KVM_CAP_X86_GUEST_MODE 238
>> +#define KVM_CAP_ARM_RME 300 /* FIXME: Large number to prevent
>> conflicts */
>> +
>> struct kvm_irq_routing_irqchip {
>> __u32 irqchip;
>> __u32 pin;
>> @@ -1581,4 +1583,14 @@ struct kvm_pre_fault_memory {
>> __u64 padding[5];
>> };
>> +/* Available with KVM_CAP_ARM_RME, only for VMs with
>> KVM_VM_TYPE_ARM_REALM */
>> +struct kvm_arm_rmm_psci_complete {
>> + __u64 target_mpidr;
>> + __u32 psci_status;
>> + __u32 padding[3];
>> +};
>> +
>> +/* FIXME: Update nr (0xd2) when merging */
>> +#define KVM_ARM_VCPU_RMM_PSCI_COMPLETE _IOW(KVMIO, 0xd2, struct
>> kvm_arm_rmm_psci_complete)
>> +
>> #endif /* __LINUX_KVM_H */
>
> Thanks,
> Gavin
>
^ permalink raw reply [flat|nested] 90+ messages in thread
* Re: [PATCH v6 11/43] arm64: RME: RTT tear down
2024-12-12 15:55 ` [PATCH v6 11/43] arm64: RME: RTT tear down Steven Price
@ 2025-01-29 23:06 ` Gavin Shan
0 siblings, 0 replies; 90+ messages in thread
From: Gavin Shan @ 2025-01-29 23:06 UTC (permalink / raw)
To: Steven Price, kvm, kvmarm
Cc: Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
Oliver Upton, Suzuki K Poulose, Zenghui Yu, linux-arm-kernel,
linux-kernel, Joey Gouly, Alexandru Elisei, Christoffer Dall,
Fuad Tabba, linux-coco, Ganapatrao Kulkarni, Shanker Donthineni,
Alper Gun, Aneesh Kumar K . V
On 12/13/24 1:55 AM, Steven Price wrote:
> The RMM owns the stage 2 page tables for a realm, and KVM must request
> that the RMM creates/destroys entries as necessary. The physical pages
> to store the page tables are delegated to the realm as required, and can
> be undelegated when no longer used.
>
> Creating new RTTs is the easy part; tearing down is a little more
> tricky. The result of realm_rtt_destroy() can be used to effectively
> walk the tree and destroy the entries (undelegating pages that were
> given to the realm).
>
> Signed-off-by: Steven Price <steven.price@arm.com>
> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> ---
> Changes since v5:
> * Rename some RME_xxx defines to do with page sizes as RMM_xxx - they are
> a property of the RMM specification not the RME architecture.
> Changes since v2:
> * Moved {alloc,free}_delegated_page() and ensure_spare_page() to a
> later patch when they are actually used.
> * Some simplifications now that rmi_xxx() functions allow NULL as an output
> parameter.
> * Improved comments and code layout.
> ---
> arch/arm64/include/asm/kvm_rme.h | 19 ++++++
> arch/arm64/kvm/mmu.c | 6 +-
> arch/arm64/kvm/rme.c | 112 +++++++++++++++++++++++++++++++
> 3 files changed, 134 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/asm/kvm_rme.h
> index 209cd99f03dd..32bdedf1d866 100644
> --- a/arch/arm64/include/asm/kvm_rme.h
> +++ b/arch/arm64/include/asm/kvm_rme.h
> @@ -71,5 +71,24 @@ u32 kvm_realm_ipa_limit(void);
> int kvm_realm_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap);
> int kvm_init_realm_vm(struct kvm *kvm);
> void kvm_destroy_realm(struct kvm *kvm);
> +void kvm_realm_destroy_rtts(struct kvm *kvm, u32 ia_bits);
> +
> +#define RMM_RTT_BLOCK_LEVEL 2
> +#define RMM_RTT_MAX_LEVEL 3
> +
> +#define RMM_PAGE_SHIFT 12
> +#define RMM_PAGE_SIZE BIT(RMM_PAGE_SHIFT)
> +/* See ARM64_HW_PGTABLE_LEVEL_SHIFT() */
> +#define RMM_RTT_LEVEL_SHIFT(l) \
> + ((RMM_PAGE_SHIFT - 3) * (4 - (l)) + 3)
> +#define RMM_L2_BLOCK_SIZE BIT(RMM_RTT_LEVEL_SHIFT(2))
> +
> +static inline unsigned long rme_rtt_level_mapsize(int level)
> +{
> + if (WARN_ON(level > RMM_RTT_MAX_LEVEL))
> + return RMM_PAGE_SIZE;
> +
> + return (1UL << RMM_RTT_LEVEL_SHIFT(level));
> +}
>
> #endif
All those definitions can be moved to rme.c since they're only used in rme.c.
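(As a sanity check on those macros: RMM_RTT_LEVEL_SHIFT(3) = (12 - 3) * (4 - 3) + 3
= 12, i.e. a 4KiB page, and RMM_RTT_LEVEL_SHIFT(2) = (12 - 3) * (4 - 2) + 3 = 21,
i.e. the 2MiB RMM_L2_BLOCK_SIZE.)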
Thanks,
Gavin
^ permalink raw reply [flat|nested] 90+ messages in thread
* Re: [PATCH v6 16/43] arm64: RME: Allow VMM to set RIPAS
2024-12-12 15:55 ` [PATCH v6 16/43] arm64: RME: Allow VMM to set RIPAS Steven Price
@ 2025-01-29 23:25 ` Gavin Shan
2025-01-30 16:56 ` Steven Price
0 siblings, 1 reply; 90+ messages in thread
From: Gavin Shan @ 2025-01-29 23:25 UTC (permalink / raw)
To: Steven Price, kvm, kvmarm
Cc: Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
Oliver Upton, Suzuki K Poulose, Zenghui Yu, linux-arm-kernel,
linux-kernel, Joey Gouly, Alexandru Elisei, Christoffer Dall,
Fuad Tabba, linux-coco, Ganapatrao Kulkarni, Shanker Donthineni,
Alper Gun, Aneesh Kumar K . V
On 12/13/24 1:55 AM, Steven Price wrote:
> Each page within the protected region of the realm guest can be marked
> as either RAM or EMPTY. Allow the VMM to control this before the guest
> has started and provide the equivalent functions to change this (with
> the guest's approval) at runtime.
>
> When transitioning from RIPAS RAM (1) to RIPAS EMPTY (0) the memory is
> unmapped from the guest and undelegated, allowing the memory to be reused
> by the host. When transitioning to RIPAS RAM the actual population of
> the leaf RTTs is done later on stage 2 fault; however, it may be
> necessary to allocate additional RTTs to allow the RMM to track the RIPAS
> for the requested range.
>
> When freeing a block mapping it is necessary to temporarily unfold the
> RTT, which requires delegating an extra page to the RMM; this page can
> then be recovered once the contents of the block mapping have been
> freed.
>
> Signed-off-by: Steven Price <steven.price@arm.com>
> ---
> Changes from v5:
> * Adapt to rebasing.
> * Introduce find_map_level()
> * Rename some functions to be clearer.
> * Drop the "spare page" functionality.
> Changes from v2:
> * {alloc,free}_delegated_page() moved from previous patch to this one.
> * alloc_delegated_page() now takes a gfp_t flags parameter.
> * Fix the reference counting of guestmem pages to avoid leaking memory.
> * Several misc code improvements and extra comments.
> ---
> arch/arm64/include/asm/kvm_rme.h | 17 ++
> arch/arm64/kvm/mmu.c | 8 +-
> arch/arm64/kvm/rme.c | 411 +++++++++++++++++++++++++++++++
> 3 files changed, 433 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/asm/kvm_rme.h
> index be64b749fcac..4e7758f0e4b5 100644
> --- a/arch/arm64/include/asm/kvm_rme.h
> +++ b/arch/arm64/include/asm/kvm_rme.h
> @@ -92,6 +92,15 @@ void kvm_realm_destroy_rtts(struct kvm *kvm, u32 ia_bits);
> int kvm_create_rec(struct kvm_vcpu *vcpu);
> void kvm_destroy_rec(struct kvm_vcpu *vcpu);
>
> +void kvm_realm_unmap_range(struct kvm *kvm,
> + unsigned long ipa,
> + u64 size,
> + bool unmap_private);
> +int realm_set_ipa_state(struct kvm_vcpu *vcpu,
> + unsigned long addr, unsigned long end,
> + unsigned long ripas,
> + unsigned long *top_ipa);
> +
The declaration of realm_set_ipa_state() is unnecessary since its scope has
been limited to rme.c.
> #define RMM_RTT_BLOCK_LEVEL 2
> #define RMM_RTT_MAX_LEVEL 3
>
> @@ -110,4 +119,12 @@ static inline unsigned long rme_rtt_level_mapsize(int level)
> return (1UL << RMM_RTT_LEVEL_SHIFT(level));
> }
>
> +static inline bool realm_is_addr_protected(struct realm *realm,
> + unsigned long addr)
> +{
> + unsigned int ia_bits = realm->ia_bits;
> +
> + return !(addr & ~(BIT(ia_bits - 1) - 1));
> +}
> +
> #endif
The check on the specified address to determine its range seems a bit complicated
to me; it can be simplified as below. Besides, it may be a good idea to rename
the helper to have the "kvm_realm_" prefix.
static inline bool kvm_realm_is_{private | protected}_address(struct realm *realm,
							      unsigned long addr)
{
	return !(addr & BIT(realm->ia_bits - 1));
}
A question related to the terms used in this series to describe a granule's state:
"protected" or "private", "unprotected" or "shared". Those terms are all used in
the function names of this series. I guess it would be nice to unify them so that
"private" and "shared" are used, which is consistent with the terms used by
guest-memfd. For example, kvm_realm_is_protected_address() could be renamed to
kvm_realm_is_private_address().
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 325b578c734d..b100d4b3aa29 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -310,6 +310,7 @@ static void invalidate_icache_guest_page(void *va, size_t size)
> * @start: The intermediate physical base address of the range to unmap
> * @size: The size of the area to unmap
> * @may_block: Whether or not we are permitted to block
> + * @only_shared: If true then protected mappings should not be unmapped
> *
> * Clear a range of stage-2 mappings, lowering the various ref-counts. Must
> * be called while holding mmu_lock (unless for freeing the stage2 pgd before
> @@ -317,7 +318,7 @@ static void invalidate_icache_guest_page(void *va, size_t size)
> * with things behind our backs.
> */
> static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size,
> - bool may_block)
> + bool may_block, bool only_shared)
> {
> struct kvm *kvm = kvm_s2_mmu_to_kvm(mmu);
> phys_addr_t end = start + size;
> @@ -331,7 +332,7 @@ static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64
> void kvm_stage2_unmap_range(struct kvm_s2_mmu *mmu, phys_addr_t start,
> u64 size, bool may_block)
> {
> - __unmap_stage2_range(mmu, start, size, may_block);
> + __unmap_stage2_range(mmu, start, size, may_block, false);
> }
>
> void kvm_stage2_flush_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end)
> @@ -1932,7 +1933,8 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
>
> __unmap_stage2_range(&kvm->arch.mmu, range->start << PAGE_SHIFT,
> (range->end - range->start) << PAGE_SHIFT,
> - range->may_block);
> + range->may_block,
> + range->only_shared);
>
> kvm_nested_s2_unmap(kvm, range->may_block);
> return false;
> diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
> index 72778d8ab52b..e8ad04405ecd 100644
> --- a/arch/arm64/kvm/rme.c
> +++ b/arch/arm64/kvm/rme.c
> @@ -62,6 +62,51 @@ static int get_start_level(struct realm *realm)
> return 4 - stage2_pgtable_levels(realm->ia_bits);
> }
>
> +static int find_map_level(struct realm *realm,
> + unsigned long start,
> + unsigned long end)
> +{
> + int level = RMM_RTT_MAX_LEVEL;
> +
> + while (level > get_start_level(realm)) {
> + unsigned long map_size = rme_rtt_level_mapsize(level - 1);
> +
> + if (!IS_ALIGNED(start, map_size) ||
> + (start + map_size) > end)
> + break;
> +
> + level--;
> + }
> +
> + return level;
> +}
> +
> +static phys_addr_t alloc_delegated_granule(struct kvm_mmu_memory_cache *mc,
> + gfp_t flags)
> +{
> + phys_addr_t phys = PHYS_ADDR_MAX;
> + void *virt;
> +
> + if (mc)
> + virt = kvm_mmu_memory_cache_alloc(mc);
> + else
> + virt = (void *)__get_free_page(flags);
> +
> + if (!virt)
> + goto out;
> +
> + phys = virt_to_phys(virt);
> +
> + if (rmi_granule_delegate(phys)) {
> + free_page((unsigned long)virt);
> +
> + phys = PHYS_ADDR_MAX;
> + }
> +
> +out:
> + return phys;
> +}
> +
> static void free_delegated_granule(phys_addr_t phys)
> {
> if (WARN_ON(rmi_granule_undelegate(phys))) {
> @@ -72,6 +117,132 @@ static void free_delegated_granule(phys_addr_t phys)
> free_page((unsigned long)phys_to_virt(phys));
> }
>
> +static int realm_rtt_create(struct realm *realm,
> + unsigned long addr,
> + int level,
> + phys_addr_t phys)
> +{
> + addr = ALIGN_DOWN(addr, rme_rtt_level_mapsize(level - 1));
> + return rmi_rtt_create(virt_to_phys(realm->rd), phys, addr, level);
> +}
> +
> +static int realm_rtt_fold(struct realm *realm,
> + unsigned long addr,
> + int level,
> + phys_addr_t *rtt_granule)
> +{
> + unsigned long out_rtt;
> + int ret;
> +
> + ret = rmi_rtt_fold(virt_to_phys(realm->rd), addr, level, &out_rtt);
> +
> + if (RMI_RETURN_STATUS(ret) == RMI_SUCCESS && rtt_granule)
> + *rtt_granule = out_rtt;
> +
> + return ret;
> +}
> +
> +static int realm_destroy_protected(struct realm *realm,
> + unsigned long ipa,
> + unsigned long *next_addr)
> +{
> + unsigned long rd = virt_to_phys(realm->rd);
> + unsigned long addr;
> + phys_addr_t rtt;
> + int ret;
> +
> +loop:
> + ret = rmi_data_destroy(rd, ipa, &addr, next_addr);
> + if (RMI_RETURN_STATUS(ret) == RMI_ERROR_RTT) {
> + if (*next_addr > ipa)
> + return 0; /* UNASSIGNED */
> + rtt = alloc_delegated_granule(NULL, GFP_KERNEL);
> + if (WARN_ON(rtt == PHYS_ADDR_MAX))
> + return -1;
> + /*
> + * ASSIGNED - ipa is mapped as a block, so split. The index
> + * from the return code should be 2 otherwise it appears
> + * there's a huge page bigger than KVM currently supports
> + */
> + WARN_ON(RMI_RETURN_INDEX(ret) != 2);
> + ret = realm_rtt_create(realm, ipa, 3, rtt);
> + if (WARN_ON(ret)) {
> + free_delegated_granule(rtt);
> + return -1;
> + }
> + /* retry */
> + goto loop;
> + } else if (WARN_ON(ret)) {
> + return -1;
> + }
> + ret = rmi_granule_undelegate(addr);
> +
> + /*
> + * If the undelegate fails then something has gone seriously
> + * wrong: take an extra reference to just leak the page
> + */
> + if (!WARN_ON(ret))
> + put_page(phys_to_page(addr));
> +
> + return 0;
> +}
> +
> +static void realm_unmap_shared_range(struct kvm *kvm,
> + int level,
> + unsigned long start,
> + unsigned long end)
> +{
> + struct realm *realm = &kvm->arch.realm;
> + unsigned long rd = virt_to_phys(realm->rd);
> + ssize_t map_size = rme_rtt_level_mapsize(level);
> + unsigned long next_addr, addr;
> + unsigned long shared_bit = BIT(realm->ia_bits - 1);
> +
> + if (WARN_ON(level > RMM_RTT_MAX_LEVEL))
> + return;
> +
> + start |= shared_bit;
> + end |= shared_bit;
> +
> + for (addr = start; addr < end; addr = next_addr) {
> + unsigned long align_addr = ALIGN(addr, map_size);
> + int ret;
> +
> + next_addr = ALIGN(addr + 1, map_size);
> +
> + if (align_addr != addr || next_addr > end) {
> + /* Need to recurse deeper */
> + if (addr < align_addr)
> + next_addr = align_addr;
> + realm_unmap_shared_range(kvm, level + 1, addr,
> + min(next_addr, end));
> + continue;
> + }
> +
> + ret = rmi_rtt_unmap_unprotected(rd, addr, level, &next_addr);
> + switch (RMI_RETURN_STATUS(ret)) {
> + case RMI_SUCCESS:
> + break;
> + case RMI_ERROR_RTT:
> + if (next_addr == addr) {
> + /*
> + * There's a mapping here, but it's not a block
> + * mapping, so reset next_addr to the next block
> + * boundary and recurse to clear out the pages
> + * one level deeper.
> + */
> + next_addr = ALIGN(addr + 1, map_size);
> + realm_unmap_shared_range(kvm, level + 1, addr,
> + next_addr);
> + }
> + break;
> + default:
> + WARN_ON(1);
> + return;
> + }
> + }
> +}
> +
> static int realm_create_rd(struct kvm *kvm)
> {
> struct realm *realm = &kvm->arch.realm;
> @@ -161,6 +332,30 @@ static int realm_rtt_destroy(struct realm *realm, unsigned long addr,
> return ret;
> }
>
> +static int realm_create_rtt_levels(struct realm *realm,
> + unsigned long ipa,
> + int level,
> + int max_level,
> + struct kvm_mmu_memory_cache *mc)
> +{
> + if (WARN_ON(level == max_level))
> + return 0;
> +
> + while (level++ < max_level) {
> + phys_addr_t rtt = alloc_delegated_granule(mc, GFP_KERNEL);
> +
> + if (rtt == PHYS_ADDR_MAX)
> + return -ENOMEM;
> +
> + if (realm_rtt_create(realm, ipa, level, rtt)) {
> + free_delegated_granule(rtt);
> + return -ENXIO;
> + }
> + }
> +
> + return 0;
> +}
> +
> static int realm_tear_down_rtt_level(struct realm *realm, int level,
> unsigned long start, unsigned long end)
> {
> @@ -251,6 +446,61 @@ static int realm_tear_down_rtt_range(struct realm *realm,
> start, end);
> }
>
> +/*
> + * Returns 0 on successful fold, a negative value on error, a positive value if
> + * we were not able to fold all tables at this level.
> + */
> +static int realm_fold_rtt_level(struct realm *realm, int level,
> + unsigned long start, unsigned long end)
> +{
> + int not_folded = 0;
> + ssize_t map_size;
> + unsigned long addr, next_addr;
> +
> + if (WARN_ON(level > RMM_RTT_MAX_LEVEL))
> + return -EINVAL;
> +
> + map_size = rme_rtt_level_mapsize(level - 1);
> +
> + for (addr = start; addr < end; addr = next_addr) {
> + phys_addr_t rtt_granule;
> + int ret;
> + unsigned long align_addr = ALIGN(addr, map_size);
> +
> + next_addr = ALIGN(addr + 1, map_size);
> +
> + ret = realm_rtt_fold(realm, align_addr, level, &rtt_granule);
> +
> + switch (RMI_RETURN_STATUS(ret)) {
> + case RMI_SUCCESS:
> + free_delegated_granule(rtt_granule);
> + break;
> + case RMI_ERROR_RTT:
> + if (level == RMM_RTT_MAX_LEVEL ||
> + RMI_RETURN_INDEX(ret) < level) {
> + not_folded++;
> + break;
> + }
> + /* Recurse a level deeper */
> + ret = realm_fold_rtt_level(realm,
> + level + 1,
> + addr,
> + next_addr);
> + if (ret < 0)
> + return ret;
> + else if (ret == 0)
> + /* Try again at this level */
> + next_addr = addr;
> + break;
> + default:
> + WARN_ON(1);
> + return -ENXIO;
> + }
> + }
> +
> + return not_folded;
> +}
> +
> void kvm_realm_destroy_rtts(struct kvm *kvm, u32 ia_bits)
> {
> struct realm *realm = &kvm->arch.realm;
> @@ -258,6 +508,155 @@ void kvm_realm_destroy_rtts(struct kvm *kvm, u32 ia_bits)
> WARN_ON(realm_tear_down_rtt_range(realm, 0, (1UL << ia_bits)));
> }
>
> +static void realm_unmap_private_range(struct kvm *kvm,
> + unsigned long start,
> + unsigned long end)
> +{
> + struct realm *realm = &kvm->arch.realm;
> + unsigned long next_addr, addr;
> +
> + for (addr = start; addr < end; addr = next_addr) {
> + int ret;
> +
> + ret = realm_destroy_protected(realm, addr, &next_addr);
> +
> + if (WARN_ON(ret))
> + break;
> + }
> +
> + realm_fold_rtt_level(realm, get_start_level(realm) + 1,
> + start, end);
> +}
> +
> +void kvm_realm_unmap_range(struct kvm *kvm, unsigned long start, u64 size,
> + bool unmap_private)
> +{
> + unsigned long end = start + size;
> + struct realm *realm = &kvm->arch.realm;
> +
> + end = min(BIT(realm->ia_bits - 1), end);
> +
> + if (realm->state == REALM_STATE_NONE)
> + return;
> +
> + realm_unmap_shared_range(kvm, find_map_level(realm, start, end),
> + start, end);
> + if (unmap_private)
> + realm_unmap_private_range(kvm, start, end);
> +}
> +
> +int realm_set_ipa_state(struct kvm_vcpu *vcpu,
> + unsigned long start,
> + unsigned long end,
> + unsigned long ripas,
> + unsigned long *top_ipa)
> +{
> + struct kvm *kvm = vcpu->kvm;
> + struct realm *realm = &kvm->arch.realm;
> + struct realm_rec *rec = &vcpu->arch.rec;
> + phys_addr_t rd_phys = virt_to_phys(realm->rd);
> + phys_addr_t rec_phys = virt_to_phys(rec->rec_page);
> + struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
> + unsigned long ipa = start;
> + int ret = 0;
> +
> + while (ipa < end) {
> + unsigned long next;
> +
> + ret = rmi_rtt_set_ripas(rd_phys, rec_phys, ipa, end, &next);
> +
> + if (RMI_RETURN_STATUS(ret) == RMI_ERROR_RTT) {
> + int walk_level = RMI_RETURN_INDEX(ret);
> + int level = find_map_level(realm, ipa, end);
> +
> + /*
> + * If the RMM walk ended early then more tables are
> + * needed to reach the required depth to set the RIPAS.
> + */
> + if (walk_level < level) {
> + ret = realm_create_rtt_levels(realm, ipa,
> + walk_level,
> + level,
> + memcache);
> + /* Retry with RTTs created */
> + if (!ret)
> + continue;
> + } else {
> + ret = -EINVAL;
> + }
> +
> + break;
> + } else if (RMI_RETURN_STATUS(ret) != RMI_SUCCESS) {
> + WARN(1, "Unexpected error in %s: %#x\n", __func__,
> + ret);
> + ret = -EINVAL;
> + break;
> + }
> + ipa = next;
> + }
> +
> + *top_ipa = ipa;
> +
> + if (ripas == RMI_EMPTY && ipa != start)
> + realm_unmap_private_range(kvm, start, ipa);
> +
> + return ret;
> +}
> +
> +static int realm_init_ipa_state(struct realm *realm,
> + unsigned long ipa,
> + unsigned long end)
> +{
> + phys_addr_t rd_phys = virt_to_phys(realm->rd);
> + int ret;
> +
> + while (ipa < end) {
> + unsigned long next;
> +
> + ret = rmi_rtt_init_ripas(rd_phys, ipa, end, &next);
> +
> + if (RMI_RETURN_STATUS(ret) == RMI_ERROR_RTT) {
> + int err_level = RMI_RETURN_INDEX(ret);
> + int level = find_map_level(realm, ipa, end);
> +
> + if (WARN_ON(err_level >= level))
> + return -ENXIO;
> +
> + ret = realm_create_rtt_levels(realm, ipa,
> + err_level,
> + level, NULL);
> + if (ret)
> + return ret;
> + /* Retry with the RTT levels in place */
> + continue;
> + } else if (WARN_ON(ret)) {
> + return -ENXIO;
> + }
> +
> + ipa = next;
> + }
> +
> + return 0;
> +}
> +
> +static int kvm_init_ipa_range_realm(struct kvm *kvm,
> + struct kvm_cap_arm_rme_init_ipa_args *args)
> +{
> + gpa_t addr, end;
> + struct realm *realm = &kvm->arch.realm;
> +
> + addr = args->init_ipa_base;
> + end = addr + args->init_ipa_size;
> +
> + if (end < addr)
> + return -EINVAL;
> +
> + if (kvm_realm_state(kvm) != REALM_STATE_NEW)
> + return -EINVAL;
> +
> + return realm_init_ipa_state(realm, addr, end);
> +}
> +
> /* Protects access to rme_vmid_bitmap */
> static DEFINE_SPINLOCK(rme_vmid_lock);
> static unsigned long *rme_vmid_bitmap;
> @@ -383,6 +782,18 @@ int kvm_realm_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap)
> case KVM_CAP_ARM_RME_CREATE_RD:
> r = kvm_create_realm(kvm);
> break;
> + case KVM_CAP_ARM_RME_INIT_IPA_REALM: {
> + struct kvm_cap_arm_rme_init_ipa_args args;
> + void __user *argp = u64_to_user_ptr(cap->args[1]);
> +
> + if (copy_from_user(&args, argp, sizeof(args))) {
> + r = -EFAULT;
> + break;
> + }
> +
> + r = kvm_init_ipa_range_realm(kvm, &args);
> + break;
> + }
> default:
> r = -EINVAL;
> break;
Thanks,
Gavin
^ permalink raw reply [flat|nested] 90+ messages in thread
* Re: [PATCH v6 17/43] arm64: RME: Handle realm enter/exit
2024-12-12 15:55 ` [PATCH v6 17/43] arm64: RME: Handle realm enter/exit Steven Price
@ 2025-01-29 23:41 ` Gavin Shan
2025-02-03 16:34 ` Steven Price
0 siblings, 1 reply; 90+ messages in thread
From: Gavin Shan @ 2025-01-29 23:41 UTC (permalink / raw)
To: Steven Price, kvm, kvmarm
Cc: Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
Oliver Upton, Suzuki K Poulose, Zenghui Yu, linux-arm-kernel,
linux-kernel, Joey Gouly, Alexandru Elisei, Christoffer Dall,
Fuad Tabba, linux-coco, Ganapatrao Kulkarni, Shanker Donthineni,
Alper Gun, Aneesh Kumar K . V
On 12/13/24 1:55 AM, Steven Price wrote:
> Entering a realm is done using an SMC call to the RMM. On exit the
> exit-codes need to be handled slightly differently to the normal KVM
> path so define our own functions for realm enter/exit and hook them
> in if the guest is a realm guest.
>
> Signed-off-by: Steven Price <steven.price@arm.com>
> ---
> Changes since v5:
> * For a RIPAS_CHANGE request from the guest perform the actual RIPAS
> change on next entry rather than immediately on the exit. This allows
> the VMM to 'reject' a RIPAS change by refusing to continue
> scheduling.
> Changes since v4:
> * Rename handle_rme_exit() to handle_rec_exit()
> * Move the loop to copy registers into the REC enter structure from the
> rec_exit_handlers callbacks to kvm_rec_enter(). This fixes a bug
> where the handler exits to user space and user space wants to modify
> the GPRs.
> * Some code rearrangement in rec_exit_ripas_change().
> Changes since v2:
> * realm_set_ipa_state() now provides an output parameter for the
> top_ipa that was changed. Use this to signal the VMM with the correct
> range that has been transitioned.
> * Adapt to previous patch changes.
> ---
> arch/arm64/include/asm/kvm_rme.h | 3 +
> arch/arm64/kvm/Makefile | 2 +-
> arch/arm64/kvm/arm.c | 19 +++-
> arch/arm64/kvm/rme-exit.c | 167 +++++++++++++++++++++++++++++++
> arch/arm64/kvm/rme.c | 48 +++++++++
> 5 files changed, 233 insertions(+), 6 deletions(-)
> create mode 100644 arch/arm64/kvm/rme-exit.c
>
> diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/asm/kvm_rme.h
> index 4e7758f0e4b5..0410650cd545 100644
> --- a/arch/arm64/include/asm/kvm_rme.h
> +++ b/arch/arm64/include/asm/kvm_rme.h
> @@ -92,6 +92,9 @@ void kvm_realm_destroy_rtts(struct kvm *kvm, u32 ia_bits);
> int kvm_create_rec(struct kvm_vcpu *vcpu);
> void kvm_destroy_rec(struct kvm_vcpu *vcpu);
>
> +int kvm_rec_enter(struct kvm_vcpu *vcpu);
> +int handle_rec_exit(struct kvm_vcpu *vcpu, int rec_run_status);
> +
> void kvm_realm_unmap_range(struct kvm *kvm,
> unsigned long ipa,
> u64 size,
> diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
> index ce8a10d3161d..0170e902fb63 100644
> --- a/arch/arm64/kvm/Makefile
> +++ b/arch/arm64/kvm/Makefile
> @@ -24,7 +24,7 @@ kvm-y += arm.o mmu.o mmio.o psci.o hypercalls.o pvtime.o \
> vgic/vgic-mmio.o vgic/vgic-mmio-v2.o \
> vgic/vgic-mmio-v3.o vgic/vgic-kvm-device.o \
> vgic/vgic-its.o vgic/vgic-debug.o \
> - rme.o
> + rme.o rme-exit.o
>
> kvm-$(CONFIG_HW_PERF_EVENTS) += pmu-emul.o pmu.o
> kvm-$(CONFIG_ARM64_PTR_AUTH) += pauth.o
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index ab00fb7a8ece..6e83b2e3a6cf 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -1264,7 +1264,10 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
> trace_kvm_entry(*vcpu_pc(vcpu));
> guest_timing_enter_irqoff();
>
> - ret = kvm_arm_vcpu_enter_exit(vcpu);
> + if (vcpu_is_rec(vcpu))
> + ret = kvm_rec_enter(vcpu);
> + else
> + ret = kvm_arm_vcpu_enter_exit(vcpu);
>
> vcpu->mode = OUTSIDE_GUEST_MODE;
> vcpu->stat.exits++;
> @@ -1318,10 +1321,13 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
>
> local_irq_enable();
>
> - trace_kvm_exit(ret, kvm_vcpu_trap_get_class(vcpu), *vcpu_pc(vcpu));
> -
> /* Exit types that need handling before we can be preempted */
> - handle_exit_early(vcpu, ret);
> + if (!vcpu_is_rec(vcpu)) {
> + trace_kvm_exit(ret, kvm_vcpu_trap_get_class(vcpu),
> + *vcpu_pc(vcpu));
> +
> + handle_exit_early(vcpu, ret);
> + }
>
> preempt_enable();
>
> @@ -1344,7 +1350,10 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
> ret = ARM_EXCEPTION_IL;
> }
>
> - ret = handle_exit(vcpu, ret);
> + if (vcpu_is_rec(vcpu))
> + ret = handle_rec_exit(vcpu, ret);
> + else
> + ret = handle_exit(vcpu, ret);
> }
>
> /* Tell userspace about in-kernel device output levels */
> diff --git a/arch/arm64/kvm/rme-exit.c b/arch/arm64/kvm/rme-exit.c
> new file mode 100644
> index 000000000000..e532aa80e10d
> --- /dev/null
> +++ b/arch/arm64/kvm/rme-exit.c
> @@ -0,0 +1,167 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * Copyright (C) 2023 ARM Ltd.
> + */
> +
> +#include <linux/kvm_host.h>
> +#include <kvm/arm_hypercalls.h>
> +#include <kvm/arm_psci.h>
> +
> +#include <asm/rmi_smc.h>
> +#include <asm/kvm_emulate.h>
> +#include <asm/kvm_rme.h>
> +#include <asm/kvm_mmu.h>
> +
> +typedef int (*exit_handler_fn)(struct kvm_vcpu *vcpu);
> +
> +static int rec_exit_reason_notimpl(struct kvm_vcpu *vcpu)
> +{
> + struct realm_rec *rec = &vcpu->arch.rec;
> +
> + pr_err("[vcpu %d] Unhandled exit reason from realm (ESR: %#llx)\n",
> + vcpu->vcpu_id, rec->run->exit.esr);
> + return -ENXIO;
> +}
> +
vcpu_err() can be used here since 'vcpu->vcpu_id' is only meaningful within one
particular VM. The VMM's PID is also printed by vcpu_err().
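i.e. something like:

	vcpu_err(vcpu, "Unhandled exit reason from realm (ESR: %#llx)\n",
		 rec->run->exit.esr);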
> +static int rec_exit_sync_dabt(struct kvm_vcpu *vcpu)
> +{
> + return kvm_handle_guest_abort(vcpu);
> +}
> +
> +static int rec_exit_sync_iabt(struct kvm_vcpu *vcpu)
> +{
> + struct realm_rec *rec = &vcpu->arch.rec;
> +
> + pr_err("[vcpu %d] Unhandled instruction abort (ESR: %#llx).\n",
> + vcpu->vcpu_id, rec->run->exit.esr);
> + return -ENXIO;
> +}
> +
s/pr_err/vcpu_err
> +static int rec_exit_sys_reg(struct kvm_vcpu *vcpu)
> +{
> + struct realm_rec *rec = &vcpu->arch.rec;
> + unsigned long esr = kvm_vcpu_get_esr(vcpu);
> + int rt = kvm_vcpu_sys_get_rt(vcpu);
> + bool is_write = !(esr & 1);
> + int ret;
> +
> + if (is_write)
> + vcpu_set_reg(vcpu, rt, rec->run->exit.gprs[0]);
> +
> + ret = kvm_handle_sys_reg(vcpu);
> +
> + if (ret >= 0 && !is_write)
> + rec->run->enter.gprs[0] = vcpu_get_reg(vcpu, rt);
> +
> + return ret;
> +}
> +
> +static exit_handler_fn rec_exit_handlers[] = {
> + [0 ... ESR_ELx_EC_MAX] = rec_exit_reason_notimpl,
> + [ESR_ELx_EC_SYS64] = rec_exit_sys_reg,
> + [ESR_ELx_EC_DABT_LOW] = rec_exit_sync_dabt,
> + [ESR_ELx_EC_IABT_LOW] = rec_exit_sync_iabt
> +};
> +
> +static int rec_exit_psci(struct kvm_vcpu *vcpu)
> +{
> + struct realm_rec *rec = &vcpu->arch.rec;
> + int i;
> +
> + for (i = 0; i < REC_RUN_GPRS; i++)
> + vcpu_set_reg(vcpu, i, rec->run->exit.gprs[i]);
> +
> + return kvm_smccc_call_handler(vcpu);
> +}
> +
> +static int rec_exit_ripas_change(struct kvm_vcpu *vcpu)
> +{
> + struct kvm *kvm = vcpu->kvm;
> + struct realm *realm = &kvm->arch.realm;
> + struct realm_rec *rec = &vcpu->arch.rec;
> + unsigned long base = rec->run->exit.ripas_base;
> + unsigned long top = rec->run->exit.ripas_top;
> + unsigned long ripas = rec->run->exit.ripas_value;
> +
> + if (!realm_is_addr_protected(realm, base) ||
> + !realm_is_addr_protected(realm, top - 1)) {
> + kvm_err("Invalid RIPAS_CHANGE for %#lx - %#lx, ripas: %#lx\n",
> + base, top, ripas);
> + return -EINVAL;
> + }
s/kvm_err/vcpu_err
> +
> + /* Exit to VMM, the actual RIPAS change is done on next entry */
> + kvm_prepare_memory_fault_exit(vcpu, base, top - base, false, false,
> + ripas == RMI_RAM);
> +
> + return 0;
> +}
> +
> +static void update_arch_timer_irq_lines(struct kvm_vcpu *vcpu)
> +{
> + struct realm_rec *rec = &vcpu->arch.rec;
> +
> + __vcpu_sys_reg(vcpu, CNTV_CTL_EL0) = rec->run->exit.cntv_ctl;
> + __vcpu_sys_reg(vcpu, CNTV_CVAL_EL0) = rec->run->exit.cntv_cval;
> + __vcpu_sys_reg(vcpu, CNTP_CTL_EL0) = rec->run->exit.cntp_ctl;
> + __vcpu_sys_reg(vcpu, CNTP_CVAL_EL0) = rec->run->exit.cntp_cval;
> +
> + kvm_realm_timers_update(vcpu);
> +}
> +
> +/*
> + * Return > 0 to return to guest, < 0 on error, 0 (and set exit_reason) on
> + * proper exit to userspace.
> + */
> +int handle_rec_exit(struct kvm_vcpu *vcpu, int rec_run_ret)
> +{
> + struct realm_rec *rec = &vcpu->arch.rec;
> + u8 esr_ec = ESR_ELx_EC(rec->run->exit.esr);
> + unsigned long status, index;
> +
> + status = RMI_RETURN_STATUS(rec_run_ret);
> + index = RMI_RETURN_INDEX(rec_run_ret);
> +
> + /*
> + * If a PSCI_SYSTEM_OFF request raced with a vcpu executing, we might
> + * see the following status code and index indicating an attempt to run
> + * a REC when the RD state is SYSTEM_OFF. In this case, we just need to
> + * return to user space which can deal with the system event or will try
> + * to run the KVM VCPU again, at which point we will no longer attempt
> + * to enter the Realm because we will have a sleep request pending on
> + * the VCPU as a result of KVM's PSCI handling.
> + */
> + if (status == RMI_ERROR_REALM && index == 1) {
> + vcpu->run->exit_reason = KVM_EXIT_UNKNOWN;
> + return 0;
> + }
> +
> + if (rec_run_ret)
> + return -ENXIO;
> +
> + vcpu->arch.fault.esr_el2 = rec->run->exit.esr;
> + vcpu->arch.fault.far_el2 = rec->run->exit.far;
> + vcpu->arch.fault.hpfar_el2 = rec->run->exit.hpfar;
> +
> + update_arch_timer_irq_lines(vcpu);
> +
> + /* Reset the emulation flags for the next run of the REC */
> + rec->run->enter.flags = 0;
> +
> + switch (rec->run->exit.exit_reason) {
> + case RMI_EXIT_SYNC:
> + return rec_exit_handlers[esr_ec](vcpu);
> + case RMI_EXIT_IRQ:
> + case RMI_EXIT_FIQ:
> + return 1;
> + case RMI_EXIT_PSCI:
> + return rec_exit_psci(vcpu);
> + case RMI_EXIT_RIPAS_CHANGE:
> + return rec_exit_ripas_change(vcpu);
> + }
> +
> + kvm_pr_unimpl("Unsupported exit reason: %u\n",
> + rec->run->exit.exit_reason);
s/kvm_pr_unimpl/vcpu_pr_unimpl
> + vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
> + return 0;
> +}
> diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
> index e8ad04405ecd..22f0c74455af 100644
> --- a/arch/arm64/kvm/rme.c
> +++ b/arch/arm64/kvm/rme.c
> @@ -842,6 +842,54 @@ void kvm_destroy_realm(struct kvm *kvm)
> kvm_free_stage2_pgd(&kvm->arch.mmu);
> }
>
> +static void kvm_complete_ripas_change(struct kvm_vcpu *vcpu)
> +{
> + struct kvm *kvm = vcpu->kvm;
> + struct realm_rec *rec = &vcpu->arch.rec;
> + unsigned long base = rec->run->exit.ripas_base;
> + unsigned long top = rec->run->exit.ripas_top;
> + unsigned long ripas = rec->run->exit.ripas_value;
> + unsigned long top_ipa;
> + int ret;
> +
> + do {
> + kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_page_cache,
> + kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu));
> + write_lock(&kvm->mmu_lock);
> + ret = realm_set_ipa_state(vcpu, base, top, ripas, &top_ipa);
> + write_unlock(&kvm->mmu_lock);
> +
> + if (WARN(ret && ret != -ENOMEM,
> + "Unable to satisfy RIPAS_CHANGE for %#lx - %#lx, ripas: %#lx\n",
> + base, top, ripas))
> + break;
> +
> + base = top_ipa;
> + } while (top_ipa < top);
> +}
> +
> +int kvm_rec_enter(struct kvm_vcpu *vcpu)
> +{
> + struct realm_rec *rec = &vcpu->arch.rec;
> +
> + switch (rec->run->exit.exit_reason) {
> + case RMI_EXIT_HOST_CALL:
> + case RMI_EXIT_PSCI:
> + for (int i = 0; i < REC_RUN_GPRS; i++)
> + rec->run->enter.gprs[i] = vcpu_get_reg(vcpu, i);
> + break;
> + case RMI_EXIT_RIPAS_CHANGE:
> + kvm_complete_ripas_change(vcpu);
> + break;
> + }
> +
> + if (kvm_realm_state(vcpu->kvm) != REALM_STATE_ACTIVE)
> + return -EINVAL;
> +
> + return rmi_rec_enter(virt_to_phys(rec->rec_page),
> + virt_to_phys(rec->run));
> +}
> +
> static void free_rec_aux(struct page **aux_pages,
> unsigned int num_aux)
> {
Thanks,
Gavin
^ permalink raw reply [flat|nested] 90+ messages in thread
* Re: [PATCH v6 19/43] arm64: RME: Allow populating initial contents
2024-12-12 15:55 ` [PATCH v6 19/43] arm64: RME: Allow populating initial contents Steven Price
@ 2025-01-30 4:38 ` Gavin Shan
2025-02-03 16:52 ` Steven Price
0 siblings, 1 reply; 90+ messages in thread
From: Gavin Shan @ 2025-01-30 4:38 UTC (permalink / raw)
To: Steven Price, kvm, kvmarm
Cc: Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
Oliver Upton, Suzuki K Poulose, Zenghui Yu, linux-arm-kernel,
linux-kernel, Joey Gouly, Alexandru Elisei, Christoffer Dall,
Fuad Tabba, linux-coco, Ganapatrao Kulkarni, Shanker Donthineni,
Alper Gun, Aneesh Kumar K . V
On 12/13/24 1:55 AM, Steven Price wrote:
> The VMM needs to populate the realm with some data before starting (e.g.
> a kernel and initrd). This is measured by the RMM and used as part of
> the attestation later on.
>
> For now only 4k mappings are supported; future work may add support for
> larger mappings.
>
> Co-developed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> Signed-off-by: Steven Price <steven.price@arm.com>
> ---
> Changes since v5:
> * Refactor to use PFNs rather than tracking struct page in
> realm_create_protected_data_page().
> * Pull changes from a later patch (in the v5 series) for accessing
> pages from a guest memfd.
> * Do the populate in chunks to avoid holding locks for too long and
> triggering RCU stall warnings.
> ---
> arch/arm64/kvm/rme.c | 243 +++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 243 insertions(+)
>
> diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
> index 22f0c74455af..d4561e368cd5 100644
> --- a/arch/arm64/kvm/rme.c
> +++ b/arch/arm64/kvm/rme.c
> @@ -4,6 +4,7 @@
> */
>
> #include <linux/kvm_host.h>
> +#include <linux/hugetlb.h>
>
This shouldn't be needed since huge pages, in particular the hugetlb part,
aren't supported yet.
> #include <asm/kvm_emulate.h>
> #include <asm/kvm_mmu.h>
> @@ -545,6 +546,236 @@ void kvm_realm_unmap_range(struct kvm *kvm, unsigned long start, u64 size,
> realm_unmap_private_range(kvm, start, end);
> }
>
> +static int realm_create_protected_data_page(struct realm *realm,
> + unsigned long ipa,
> + kvm_pfn_t dst_pfn,
> + kvm_pfn_t src_pfn,
> + unsigned long flags)
> +{
> + phys_addr_t dst_phys, src_phys;
> + int ret;
> +
> + dst_phys = __pfn_to_phys(dst_pfn);
> + src_phys = __pfn_to_phys(src_pfn);
> +
> + if (rmi_granule_delegate(dst_phys))
> + return -ENXIO;
> +
> + ret = rmi_data_create(virt_to_phys(realm->rd), dst_phys, ipa, src_phys,
> + flags);
> +
> + if (RMI_RETURN_STATUS(ret) == RMI_ERROR_RTT) {
> + /* Create missing RTTs and retry */
> + int level = RMI_RETURN_INDEX(ret);
> +
> + ret = realm_create_rtt_levels(realm, ipa, level,
> + RMM_RTT_MAX_LEVEL, NULL);
> + if (ret)
> + goto err;
> +
> + ret = rmi_data_create(virt_to_phys(realm->rd), dst_phys, ipa,
> + src_phys, flags);
> + }
> +
> + if (!ret)
> + return 0;
> +
> +err:
> + if (WARN_ON(rmi_granule_undelegate(dst_phys))) {
> + /* Page can't be returned to NS world so is lost */
> + get_page(pfn_to_page(dst_pfn));
> + }
> + return -ENXIO;
> +}
> +
> +static int fold_rtt(struct realm *realm, unsigned long addr, int level)
> +{
> + phys_addr_t rtt_addr;
> + int ret;
> +
> + ret = realm_rtt_fold(realm, addr, level + 1, &rtt_addr);
> + if (ret)
> + return ret;
> +
> + free_delegated_granule(rtt_addr);
> +
> + return 0;
> +}
> +
> +static int populate_par_region(struct kvm *kvm,
> + phys_addr_t ipa_base,
> + phys_addr_t ipa_end,
> + u32 flags)
> +{
At first glance, I was wondering what's meant by 'par' in the function name.
It turns out to be a 2MB region, and I guess it represents 'part'. I think
this could be renamed to populate_sub_region() or just populate_region().
> + struct realm *realm = &kvm->arch.realm;
> + struct kvm_memory_slot *memslot;
> + gfn_t base_gfn, end_gfn;
> + int idx;
> + phys_addr_t ipa;
> + int ret = 0;
> + unsigned long data_flags = 0;
> +
> + base_gfn = gpa_to_gfn(ipa_base);
> + end_gfn = gpa_to_gfn(ipa_end);
> +
> + if (flags & KVM_ARM_RME_POPULATE_FLAGS_MEASURE)
> + data_flags = RMI_MEASURE_CONTENT;
> +
The 'data_flags' value can be worked out by the caller, kvm_populate_realm(),
and passed to populate_par_region(). That way, we don't need to recompute
'data_flags' on every call to populate_par_region().
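Something like this sketch:

	/* In kvm_populate_realm() */
	unsigned long data_flags = 0;

	if (args->flags & KVM_ARM_RME_POPULATE_FLAGS_MEASURE)
		data_flags = RMI_MEASURE_CONTENT;
	...
	ret = populate_par_region(kvm, ipa_base, end, data_flags);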
> + idx = srcu_read_lock(&kvm->srcu);
> + memslot = gfn_to_memslot(kvm, base_gfn);
> + if (!memslot) {
> + ret = -EFAULT;
> + goto out;
> + }
> +
> + /* We require the region to be contained within a single memslot */
> + if (memslot->base_gfn + memslot->npages < end_gfn) {
> + ret = -EINVAL;
> + goto out;
> + }
> +
> + if (!kvm_slot_can_be_private(memslot)) {
> + ret = -EINVAL;
> + goto out;
> + }
> +
> + write_lock(&kvm->mmu_lock);
> +
> + ipa = ipa_base;
> + while (ipa < ipa_end) {
> + struct vm_area_struct *vma;
> + unsigned long map_size;
> + unsigned int vma_shift;
> + unsigned long offset;
> + unsigned long hva;
> + struct page *page;
> + bool writeable;
> + kvm_pfn_t pfn;
> + int level, i;
> +
> + hva = gfn_to_hva_memslot(memslot, gpa_to_gfn(ipa));
> + vma = vma_lookup(current->mm, hva);
> + if (!vma) {
> + ret = -EFAULT;
> + break;
> + }
> +
> + /* FIXME: Currently we only support 4k sized mappings */
> + vma_shift = PAGE_SHIFT;
> +
> + map_size = 1 << vma_shift;
> +
> + ipa = ALIGN_DOWN(ipa, map_size);
> +
The blank lines in the 5 lines above can be dropped :)
> + switch (map_size) {
> + case RMM_L2_BLOCK_SIZE:
> + level = 2;
> + break;
> + case PAGE_SIZE:
> + level = 3;
> + break;
> + default:
> + WARN_ONCE(1, "Unsupported vma_shift %d", vma_shift);
> + ret = -EFAULT;
> + break;
> + }
> +
> + pfn = __kvm_faultin_pfn(memslot, gpa_to_gfn(ipa), FOLL_WRITE,
> + &writeable, &page);
> +
> + if (is_error_pfn(pfn)) {
> + ret = -EFAULT;
> + break;
> + }
> +
> + if (level < RMM_RTT_MAX_LEVEL) {
> + /*
> + * A temporary RTT is needed during the map, precreate
> + * it, however if there is an error (e.g. missing
> + * parent tables) this will be handled in the
> + * realm_create_protected_data_page() call.
> + */
> + realm_create_rtt_levels(realm, ipa, level,
> + RMM_RTT_MAX_LEVEL, NULL);
> + }
> +
This block of code to create the temporary RTT can be removed. With it removed,
we rely on realm_create_protected_data_page() to create the needed RTT in its
failure path. If the temporary RTT already exists, this call to
realm_create_rtt_levels() does nothing except issue multiple RMI calls. RMI
calls aren't cheap, so it costs performance, if you agree.
> + for (offset = 0, i = 0; offset < map_size && !ret;
> + offset += PAGE_SIZE, i++) {
> + phys_addr_t page_ipa = ipa + offset;
> + kvm_pfn_t priv_pfn;
> + struct page *gmem_page;
> + int order;
> +
> + ret = kvm_gmem_get_pfn(kvm, memslot,
> + page_ipa >> PAGE_SHIFT,
> + &priv_pfn, &gmem_page, &order);
> + if (ret)
> + break;
> +
> + ret = realm_create_protected_data_page(realm, page_ipa,
> + priv_pfn,
> + pfn + i,
> + data_flags);
> + }
> +
> + kvm_release_faultin_page(kvm, page, false, false);
> +
> + if (ret)
> + break;
> +
> + if (level == 2)
> + fold_rtt(realm, ipa, level);
> +
> + ipa += map_size;
> + }
> +
> + write_unlock(&kvm->mmu_lock);
> +
> +out:
> + srcu_read_unlock(&kvm->srcu, idx);
> + return ret;
> +}
> +
> +static int kvm_populate_realm(struct kvm *kvm,
> + struct kvm_cap_arm_rme_populate_realm_args *args)
> +{
> + phys_addr_t ipa_base, ipa_end;
> +
> + if (kvm_realm_state(kvm) != REALM_STATE_NEW)
> + return -EINVAL;
> +
> + if (!IS_ALIGNED(args->populate_ipa_base, PAGE_SIZE) ||
> + !IS_ALIGNED(args->populate_ipa_size, PAGE_SIZE))
> + return -EINVAL;
> +
> + if (args->flags & ~RMI_MEASURE_CONTENT)
> + return -EINVAL;
> +
> + ipa_base = args->populate_ipa_base;
> + ipa_end = ipa_base + args->populate_ipa_size;
> +
> + if (ipa_end < ipa_base)
> + return -EINVAL;
> +
> + /*
> + * Perform the populate in parts to ensure locks are not held for too
> + * long
> + */
> + while (ipa_base < ipa_end) {
> + phys_addr_t end = min(ipa_end, ipa_base + SZ_2M);
> +
> + int ret = populate_par_region(kvm, ipa_base, end,
> + args->flags);
> +
> + if (ret)
> + return ret;
> +
A cond_resched() would be nice to have here so that pending tasks get a
chance to run between chunks.
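i.e. something like (sketch of the loop above with the reschedule point added):

	while (ipa_base < ipa_end) {
		phys_addr_t end = min(ipa_end, ipa_base + SZ_2M);
		int ret = populate_par_region(kvm, ipa_base, end,
					      args->flags);

		if (ret)
			return ret;

		cond_resched();
		ipa_base = end;
	}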
> + ipa_base = end;
> + }
> +
> + return 0;
> +}
> +
> int realm_set_ipa_state(struct kvm_vcpu *vcpu,
> unsigned long start,
> unsigned long end,
> @@ -794,6 +1025,18 @@ int kvm_realm_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap)
> r = kvm_init_ipa_range_realm(kvm, &args);
> break;
> }
> + case KVM_CAP_ARM_RME_POPULATE_REALM: {
> + struct kvm_cap_arm_rme_populate_realm_args args;
> + void __user *argp = u64_to_user_ptr(cap->args[1]);
> +
> + if (copy_from_user(&args, argp, sizeof(args))) {
> + r = -EFAULT;
> + break;
> + }
> +
> + r = kvm_populate_realm(kvm, &args);
> + break;
> + }
> default:
> r = -EINVAL;
> break;
Thanks,
Gavin
* Re: [PATCH v6 20/43] arm64: RME: Runtime faulting of memory
2024-12-12 15:55 ` [PATCH v6 20/43] arm64: RME: Runtime faulting of memory Steven Price
@ 2025-01-30 5:22 ` Gavin Shan
2025-02-06 4:33 ` Aneesh Kumar K.V
2025-02-07 17:04 ` Steven Price
0 siblings, 2 replies; 90+ messages in thread
From: Gavin Shan @ 2025-01-30 5:22 UTC (permalink / raw)
To: Steven Price, kvm, kvmarm
Cc: Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
Oliver Upton, Suzuki K Poulose, Zenghui Yu, linux-arm-kernel,
linux-kernel, Joey Gouly, Alexandru Elisei, Christoffer Dall,
Fuad Tabba, linux-coco, Ganapatrao Kulkarni, Shanker Donthineni,
Alper Gun, Aneesh Kumar K . V
On 12/13/24 1:55 AM, Steven Price wrote:
> At runtime if the realm guest accesses memory which hasn't yet been
> mapped then KVM needs to either populate the region or fault the guest.
>
> For memory in the lower (protected) region of IPA a fresh page is
> provided to the RMM which will zero the contents. For memory in the
> upper (shared) region of IPA, the memory from the memslot is mapped
> into the realm VM as non-secure.
>
> Signed-off-by: Steven Price <steven.price@arm.com>
> ---
> Changes since v5:
> * Reduce use of struct page in preparation for supporting the RMM
> having a different page size to the host.
> * Handle a race when delegating a page where another CPU has faulted on
> the same page (and already delegated the physical page) but not yet
> mapped it. In this case simply return to the guest to either use the
> mapping from the other CPU (or refault if the race is lost).
> * The changes to populate_par_region() are moved into the previous
> patch where they belong.
> Changes since v4:
> * Code cleanup following review feedback.
> * Drop the PTE_SHARED bit when creating unprotected page table entries.
> This is now set by the RMM and the host has no control of it and the
> spec requires the bit to be set to zero.
> Changes since v2:
> * Avoid leaking memory if failing to map it in the realm.
> * Correctly mask RTT based on LPA2 flag (see rtt_get_phys()).
> * Adapt to changes in previous patches.
> ---
> arch/arm64/include/asm/kvm_emulate.h | 10 ++
> arch/arm64/include/asm/kvm_rme.h | 10 ++
> arch/arm64/kvm/mmu.c | 124 +++++++++++++++++++--
> arch/arm64/kvm/rme.c | 156 +++++++++++++++++++++++++++
> 4 files changed, 293 insertions(+), 7 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
> index ec2b6d9c9c07..b13e367b6972 100644
> --- a/arch/arm64/include/asm/kvm_emulate.h
> +++ b/arch/arm64/include/asm/kvm_emulate.h
> @@ -720,6 +720,16 @@ static inline bool kvm_realm_is_created(struct kvm *kvm)
> return kvm_is_realm(kvm) && kvm_realm_state(kvm) != REALM_STATE_NONE;
> }
>
> +static inline gpa_t kvm_gpa_from_fault(struct kvm *kvm, phys_addr_t fault_ipa)
> +{
> + if (kvm_is_realm(kvm)) {
> + struct realm *realm = &kvm->arch.realm;
> +
> + return fault_ipa & ~BIT(realm->ia_bits - 1);
> + }
> + return fault_ipa;
> +}
> +
'fault' appears in both 'kvm_gpa_from_fault' and 'fault_ipa'. To avoid the
duplication, 'fault_ipa' could be renamed to 'ipa'.
> static inline bool vcpu_is_rec(struct kvm_vcpu *vcpu)
> {
> if (static_branch_unlikely(&kvm_rme_is_available))
> diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/asm/kvm_rme.h
> index 0410650cd545..158f77e24a26 100644
> --- a/arch/arm64/include/asm/kvm_rme.h
> +++ b/arch/arm64/include/asm/kvm_rme.h
> @@ -99,6 +99,16 @@ void kvm_realm_unmap_range(struct kvm *kvm,
> unsigned long ipa,
> u64 size,
> bool unmap_private);
> +int realm_map_protected(struct realm *realm,
> + unsigned long base_ipa,
> + kvm_pfn_t pfn,
> + unsigned long size,
> + struct kvm_mmu_memory_cache *memcache);
> +int realm_map_non_secure(struct realm *realm,
> + unsigned long ipa,
> + kvm_pfn_t pfn,
> + unsigned long size,
> + struct kvm_mmu_memory_cache *memcache);
> int realm_set_ipa_state(struct kvm_vcpu *vcpu,
> unsigned long addr, unsigned long end,
> unsigned long ripas,
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index b100d4b3aa29..e88714903ce5 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -325,8 +325,13 @@ static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64
>
> lockdep_assert_held_write(&kvm->mmu_lock);
> WARN_ON(size & ~PAGE_MASK);
> - WARN_ON(stage2_apply_range(mmu, start, end, kvm_pgtable_stage2_unmap,
> - may_block));
> +
> + if (kvm_is_realm(kvm))
> + kvm_realm_unmap_range(kvm, start, size, !only_shared);
> + else
> + WARN_ON(stage2_apply_range(mmu, start, end,
> + kvm_pgtable_stage2_unmap,
> + may_block));
> }
>
> void kvm_stage2_unmap_range(struct kvm_s2_mmu *mmu, phys_addr_t start,
> @@ -346,7 +351,10 @@ static void stage2_flush_memslot(struct kvm *kvm,
> phys_addr_t addr = memslot->base_gfn << PAGE_SHIFT;
> phys_addr_t end = addr + PAGE_SIZE * memslot->npages;
>
> - kvm_stage2_flush_range(&kvm->arch.mmu, addr, end);
> + if (kvm_is_realm(kvm))
> + kvm_realm_unmap_range(kvm, addr, end - addr, false);
> + else
> + kvm_stage2_flush_range(&kvm->arch.mmu, addr, end);
> }
>
> /**
> @@ -1037,6 +1045,10 @@ void stage2_unmap_vm(struct kvm *kvm)
> struct kvm_memory_slot *memslot;
> int idx, bkt;
>
> + /* For realms this is handled by the RMM so nothing to do here */
> + if (kvm_is_realm(kvm))
> + return;
> +
> idx = srcu_read_lock(&kvm->srcu);
> mmap_read_lock(current->mm);
> write_lock(&kvm->mmu_lock);
> @@ -1062,6 +1074,7 @@ void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu)
> if (kvm_is_realm(kvm) &&
> (kvm_realm_state(kvm) != REALM_STATE_DEAD &&
> kvm_realm_state(kvm) != REALM_STATE_NONE)) {
> + kvm_stage2_unmap_range(mmu, 0, (~0ULL) & PAGE_MASK, false);
> write_unlock(&kvm->mmu_lock);
> kvm_realm_destroy_rtts(kvm, pgt->ia_bits);
>
> @@ -1446,6 +1459,76 @@ static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
> return vma->vm_flags & VM_MTE_ALLOWED;
> }
>
> +static int realm_map_ipa(struct kvm *kvm, phys_addr_t ipa,
> + kvm_pfn_t pfn, unsigned long map_size,
> + enum kvm_pgtable_prot prot,
> + struct kvm_mmu_memory_cache *memcache)
> +{
> + struct realm *realm = &kvm->arch.realm;
> +
> + if (WARN_ON(!(prot & KVM_PGTABLE_PROT_W)))
> + return -EFAULT;
> +
> + if (!realm_is_addr_protected(realm, ipa))
> + return realm_map_non_secure(realm, ipa, pfn, map_size,
> + memcache);
> +
> + return realm_map_protected(realm, ipa, pfn, map_size, memcache);
> +}
> +
> +static int private_memslot_fault(struct kvm_vcpu *vcpu,
> + phys_addr_t fault_ipa,
> + struct kvm_memory_slot *memslot)
> +{
> + struct kvm *kvm = vcpu->kvm;
> + gpa_t gpa = kvm_gpa_from_fault(kvm, fault_ipa);
> + gfn_t gfn = gpa >> PAGE_SHIFT;
> + bool priv_exists = kvm_mem_is_private(kvm, gfn);
> + struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
> + struct page *page;
> + kvm_pfn_t pfn;
> + int ret;
> + /*
> + * For Realms, the shared address is an alias of the private GPA with
> + * the top bit set. Thus if the fault address matches the GPA then it
> + * is the private alias.
> + */
> + bool is_priv_gfn = (gpa == fault_ipa);
> +
We could rename 'priv_exists' to 'was_priv_gfn', which is consistent with
'is_priv_gfn'. Alternatively, we could use 'was_private' and 'is_private'.
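i.e. (sketch of the first option applied to the lines above):

	bool was_priv_gfn = kvm_mem_is_private(kvm, gfn);
	...
	if (was_priv_gfn != is_priv_gfn) {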
> + if (priv_exists != is_priv_gfn) {
> + kvm_prepare_memory_fault_exit(vcpu,
> + gpa,
> + PAGE_SIZE,
> + kvm_is_write_fault(vcpu),
> + false, is_priv_gfn);
> +
> + return -EFAULT;
> + }
> +
> + if (!is_priv_gfn) {
> + /* Not a private mapping, handling normally */
> + return -EINVAL;
> + }
> +
> + ret = kvm_mmu_topup_memory_cache(memcache,
> + kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu));
> + if (ret)
> + return ret;
> +
> + ret = kvm_gmem_get_pfn(kvm, memslot, gfn, &pfn, &page, NULL);
> + if (ret)
> + return ret;
> +
> + /* FIXME: Should be able to use bigger than PAGE_SIZE mappings */
> + ret = realm_map_ipa(kvm, fault_ipa, pfn, PAGE_SIZE, KVM_PGTABLE_PROT_W,
> + memcache);
> + if (!ret)
> + return 1; /* Handled */
> +
> + put_page(page);
> + return ret;
> +}
> +
> static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> struct kvm_s2_trans *nested,
> struct kvm_memory_slot *memslot, unsigned long hva,
> @@ -1472,6 +1555,14 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> if (fault_is_perm)
> fault_granule = kvm_vcpu_trap_get_perm_fault_granule(vcpu);
> write_fault = kvm_is_write_fault(vcpu);
> +
> + /*
> + * Realms cannot map protected pages read-only
> + * FIXME: It should be possible to map unprotected pages read-only
> + */
> + if (vcpu_is_rec(vcpu))
> + write_fault = true;
> +
> exec_fault = kvm_vcpu_trap_is_exec_fault(vcpu);
> VM_BUG_ON(write_fault && exec_fault);
>
> @@ -1579,7 +1670,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> ipa &= ~(vma_pagesize - 1);
> }
>
> - gfn = ipa >> PAGE_SHIFT;
> + gfn = kvm_gpa_from_fault(kvm, ipa) >> PAGE_SHIFT;
> mte_allowed = kvm_vma_mte_allowed(vma);
>
> vfio_allow_any_uc = vma->vm_flags & VM_ALLOW_ANY_UNCACHED;
> @@ -1660,7 +1751,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> * If we are not forced to use page mapping, check if we are
> * backed by a THP and thus use block mapping if possible.
> */
> - if (vma_pagesize == PAGE_SIZE && !(force_pte || device)) {
> + /* FIXME: We shouldn't need to disable this for realms */
> + if (vma_pagesize == PAGE_SIZE && !(force_pte || device || kvm_is_realm(kvm))) {
> if (fault_is_perm && fault_granule > PAGE_SIZE)
> vma_pagesize = fault_granule;
> else
> @@ -1712,6 +1804,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> */
> prot &= ~KVM_NV_GUEST_MAP_SZ;
> ret = kvm_pgtable_stage2_relax_perms(pgt, fault_ipa, prot);
> + } else if (kvm_is_realm(kvm)) {
> + ret = realm_map_ipa(kvm, fault_ipa, pfn, vma_pagesize,
> + prot, memcache);
> } else {
> ret = kvm_pgtable_stage2_map(pgt, fault_ipa, vma_pagesize,
> __pfn_to_phys(pfn), prot,
> @@ -1854,8 +1949,15 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
> nested = &nested_trans;
> }
>
> - gfn = ipa >> PAGE_SHIFT;
> + gfn = kvm_gpa_from_fault(vcpu->kvm, ipa) >> PAGE_SHIFT;
> memslot = gfn_to_memslot(vcpu->kvm, gfn);
> +
> + if (kvm_slot_can_be_private(memslot)) {
> + ret = private_memslot_fault(vcpu, ipa, memslot);
> + if (ret != -EINVAL)
> + goto out;
> + }
> +
> hva = gfn_to_hva_memslot_prot(memslot, gfn, &writable);
> write_fault = kvm_is_write_fault(vcpu);
> if (kvm_is_error_hva(hva) || (write_fault && !writable)) {
> @@ -1899,7 +2001,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
> * of the page size.
> */
> ipa |= kvm_vcpu_get_hfar(vcpu) & GENMASK(11, 0);
> - ret = io_mem_abort(vcpu, ipa);
> + ret = io_mem_abort(vcpu, kvm_gpa_from_fault(vcpu->kvm, ipa));
> goto out_unlock;
> }
>
> @@ -1947,6 +2049,10 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
> if (!kvm->arch.mmu.pgt)
> return false;
>
> + /* We don't support aging for Realms */
> + if (kvm_is_realm(kvm))
> + return true;
> +
> return kvm_pgtable_stage2_test_clear_young(kvm->arch.mmu.pgt,
> range->start << PAGE_SHIFT,
> size, true);
> @@ -1963,6 +2069,10 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
> if (!kvm->arch.mmu.pgt)
> return false;
>
> + /* We don't support aging for Realms */
> + if (kvm_is_realm(kvm))
> + return true;
> +
> return kvm_pgtable_stage2_test_clear_young(kvm->arch.mmu.pgt,
> range->start << PAGE_SHIFT,
> size, false);
> diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
> index d4561e368cd5..146ef598a581 100644
> --- a/arch/arm64/kvm/rme.c
> +++ b/arch/arm64/kvm/rme.c
> @@ -602,6 +602,162 @@ static int fold_rtt(struct realm *realm, unsigned long addr, int level)
> return 0;
> }
>
> +int realm_map_protected(struct realm *realm,
> + unsigned long ipa,
> + kvm_pfn_t pfn,
> + unsigned long map_size,
> + struct kvm_mmu_memory_cache *memcache)
> +{
> + phys_addr_t phys = __pfn_to_phys(pfn);
> + phys_addr_t rd = virt_to_phys(realm->rd);
> + unsigned long base_ipa = ipa;
> + unsigned long size;
> + int map_level;
> + int ret = 0;
> +
> + if (WARN_ON(!IS_ALIGNED(ipa, map_size)))
> + return -EINVAL;
> +
> + switch (map_size) {
> + case PAGE_SIZE:
> + map_level = 3;
> + break;
> + case RMM_L2_BLOCK_SIZE:
> + map_level = 2;
> + break;
> + default:
> + return -EINVAL;
> + }
> +
The same block of code, mapping the map size to an RTT level, is used multiple
times. It would be nice to introduce a helper for this.
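For example (a minimal sketch; the helper name is just a suggestion):

	static int realm_map_size_to_level(unsigned long map_size)
	{
		switch (map_size) {
		case PAGE_SIZE:
			return 3;
		case RMM_L2_BLOCK_SIZE:
			return 2;
		default:
			return -EINVAL;
		}
	}

Then both call sites become a single call plus an error check.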
> + if (map_level < RMM_RTT_MAX_LEVEL) {
> + /*
> + * A temporary RTT is needed during the map, precreate it,
> + * however if there is an error (e.g. missing parent tables)
> + * this will be handled below.
> + */
> + realm_create_rtt_levels(realm, ipa, map_level,
> + RMM_RTT_MAX_LEVEL, memcache);
> + }
> +
This block of code could be dropped. If the RTTs already exist,
realm_create_rtt_levels() does nothing, but several RMI calls are still issued.
RMI calls aren't cheap and can cost performance.
> + for (size = 0; size < map_size; size += PAGE_SIZE) {
> + if (rmi_granule_delegate(phys)) {
> + /*
> + * It's likely we raced with another VCPU on the same
> + * fault. Assume the other VCPU has handled the fault
> + * and return to the guest.
> + */
> + return 0;
> + }
We probably can't bail out immediately when an error is returned from
rmi_granule_delegate(), because we intend to map a region whose size is
'map_size'. So a 'continue' instead of 'return 0' seems correct to me.
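Note that the phys/ipa increments at the bottom of the loop would also need to
move into the for clause for a 'continue' to be safe, e.g. (sketch):

	for (size = 0; size < map_size;
	     size += PAGE_SIZE, phys += PAGE_SIZE, ipa += PAGE_SIZE) {
		if (rmi_granule_delegate(phys)) {
			/* Likely raced with another VCPU; skip this granule */
			continue;
		}
		...
	}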
> +
> + ret = rmi_data_create_unknown(rd, phys, ipa);
> +
> + if (RMI_RETURN_STATUS(ret) == RMI_ERROR_RTT) {
> + /* Create missing RTTs and retry */
> + int level = RMI_RETURN_INDEX(ret);
> +
> + ret = realm_create_rtt_levels(realm, ipa, level,
> + RMM_RTT_MAX_LEVEL,
> + memcache);
> + WARN_ON(ret);
> + if (ret)
> + goto err_undelegate;
if (WARN_ON(ret))
> +
> + ret = rmi_data_create_unknown(rd, phys, ipa);
> + }
> + WARN_ON(ret);
> +
> + if (ret)
> + goto err_undelegate;
if (WARN_ON(ret))
> +
> + phys += PAGE_SIZE;
> + ipa += PAGE_SIZE;
> + }
> +
> + if (map_size == RMM_L2_BLOCK_SIZE)
> + ret = fold_rtt(realm, base_ipa, map_level);
> + if (WARN_ON(ret))
> + goto err;
> +
A nested if statement is needed here so that the WARN_ON() only applies to the
return value from fold_rtt():
if (map_size == RMM_L2_BLOCK_SIZE) {
ret = fold_rtt(realm, base_ipa, map_level);
if (WARN_ON(ret))
goto err;
}
> + return 0;
> +
> +err_undelegate:
> + if (WARN_ON(rmi_granule_undelegate(phys))) {
> + /* Page can't be returned to NS world so is lost */
> + get_page(phys_to_page(phys));
> + }
> +err:
> + while (size > 0) {
> + unsigned long data, top;
> +
> + phys -= PAGE_SIZE;
> + size -= PAGE_SIZE;
> + ipa -= PAGE_SIZE;
> +
> + WARN_ON(rmi_data_destroy(rd, ipa, &data, &top));
> +
> + if (WARN_ON(rmi_granule_undelegate(phys))) {
> + /* Page can't be returned to NS world so is lost */
> + get_page(phys_to_page(phys));
> + }
> + }
> + return -ENXIO;
> +}
> +
> +int realm_map_non_secure(struct realm *realm,
> + unsigned long ipa,
> + kvm_pfn_t pfn,
> + unsigned long map_size,
> + struct kvm_mmu_memory_cache *memcache)
> +{
> + phys_addr_t rd = virt_to_phys(realm->rd);
> + int map_level;
> + int ret = 0;
> + unsigned long desc = __pfn_to_phys(pfn) |
> + PTE_S2_MEMATTR(MT_S2_FWB_NORMAL) |
> + /* FIXME: Read+Write permissions for now */
> + (3 << 6);
> +
> + if (WARN_ON(!IS_ALIGNED(ipa, map_size)))
> + return -EINVAL;
> +
> + switch (map_size) {
> + case PAGE_SIZE:
> + map_level = 3;
> + break;
> + case RMM_L2_BLOCK_SIZE:
> + map_level = 2;
> + break;
> + default:
> + return -EINVAL;
> + }
> +
As above.
> + ret = rmi_rtt_map_unprotected(rd, ipa, map_level, desc);
> +
> + if (RMI_RETURN_STATUS(ret) == RMI_ERROR_RTT) {
> + /* Create missing RTTs and retry */
> + int level = RMI_RETURN_INDEX(ret);
> +
> + ret = realm_create_rtt_levels(realm, ipa, level, map_level,
> + memcache);
> + if (WARN_ON(ret))
> + return -ENXIO;
> +
> + ret = rmi_rtt_map_unprotected(rd, ipa, map_level, desc);
> + }
> + /*
> + * RMI_ERROR_RTT can be reported for two reasons: either the RTT tables
> + * are not there, or there is an RTTE already present for the address.
> + * The call to realm_create_rtt_levels() above handles the first case,
> + * and in the second case this indicates that another thread has
> + * already populated the RTTE for us, so we can ignore the error and
> + * continue.
> + */
> + if (WARN_ON(ret && RMI_RETURN_STATUS(ret) != RMI_ERROR_RTT))
> + return -ENXIO;
> +
> + return 0;
> +}
> +
> static int populate_par_region(struct kvm *kvm,
> phys_addr_t ipa_base,
> phys_addr_t ipa_end,
Thanks,
Gavin
* Re: [PATCH v6 08/43] arm64: RME: ioctls to create and configure realms
2025-01-29 0:43 ` Gavin Shan
@ 2025-01-30 11:57 ` Steven Price
0 siblings, 0 replies; 90+ messages in thread
From: Steven Price @ 2025-01-30 11:57 UTC (permalink / raw)
To: Gavin Shan, kvm, kvmarm
Cc: Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
Oliver Upton, Suzuki K Poulose, Zenghui Yu, linux-arm-kernel,
linux-kernel, Joey Gouly, Alexandru Elisei, Christoffer Dall,
Fuad Tabba, linux-coco, Ganapatrao Kulkarni, Shanker Donthineni,
Alper Gun, Aneesh Kumar K . V
On 29/01/2025 00:43, Gavin Shan wrote:
> On 12/13/24 1:55 AM, Steven Price wrote:
>> Add the KVM_CAP_ARM_RME_CREATE_RD ioctl to create a realm. This involves
>> delegating pages to the RMM to hold the Realm Descriptor (RD) and for
>> the base level of the Realm Translation Tables (RTT). A VMID also needs
>> to be picked; since the RMM has a separate VMID address space, a
>> dedicated allocator is added for this purpose.
>>
>> KVM_CAP_ARM_RME_CONFIG_REALM is provided to allow configuring the realm
>> before it is created. Configuration options can be classified as:
>>
>> 1. Parameters specific to the Realm stage2 (e.g. IPA Size, vmid, stage2
>> entry level, entry level RTTs, number of RTTs in start level, LPA2)
>> Most of these are not measured by RMM and come from KVM
>> bookkeeping.
>>
>> 2. Parameters controlling "Arm Architecture features for the VM". (e.g.
>> SVE VL, PMU counters, number of HW BRPs/WPs), configured by the VMM
>> using the "user ID register write" mechanism. These will be
>> supported in the later patches.
>>
>> 3. Parameters are not part of the core Arm architecture but defined
>> by the RMM spec (e.g. Hash algorithm for measurement,
>> Personalisation value). These are programmed via
>> KVM_CAP_ARM_RME_CONFIG_REALM.
>>
>> For the IPA size there is the possibility that the RMM supports a
>> different size to the IPA size supported by KVM for normal guests. At
>> the moment the 'normal limit' is exposed by KVM_CAP_ARM_VM_IPA_SIZE and
>> the IPA size is configured by the bottom bits of vm_type in
>> KVM_CREATE_VM. This means that it isn't easy for the VMM to discover
>> what IPA sizes are supported for Realm guests. Since the IPA is part of
>> the measurement of the realm guest the current expectation is that the
>> VMM will be required to pick the IPA size demanded by attestation and
>> therefore simply failing if this isn't available is fine. An option
>> would be to expose a new capability ioctl to obtain the RMM's maximum
>> IPA size if this is needed in the future.
>>
>> Co-developed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
>> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
>> Signed-off-by: Steven Price <steven.price@arm.com>
>> ---
>> Changes since v5:
>> * Introduce free_delegated_granule() to replace many
>> undelegate/free_page() instances and centralise the comment on
>> leaking when the undelegate fails.
>> * Several other minor improvements suggested by reviews - thanks for
>> the feedback!
>> Changes since v2:
>> * Improved commit description.
>> * Improved return failures for rmi_check_version().
>> * Clear contents of PGD after it has been undelegated in case the RMM
>> left stale data.
>> * Minor changes to reflect changes in previous patches.
>> ---
>> arch/arm64/include/asm/kvm_emulate.h | 5 +
>> arch/arm64/include/asm/kvm_rme.h | 19 ++
>> arch/arm64/kvm/arm.c | 18 ++
>> arch/arm64/kvm/mmu.c | 22 +-
>> arch/arm64/kvm/rme.c | 296 +++++++++++++++++++++++++++
>> 5 files changed, 358 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/
>> include/asm/kvm_emulate.h
>> index 4d417e39763e..27f54a7778aa 100644
>> --- a/arch/arm64/include/asm/kvm_emulate.h
>> +++ b/arch/arm64/include/asm/kvm_emulate.h
>> @@ -715,6 +715,11 @@ static inline enum realm_state
>> kvm_realm_state(struct kvm *kvm)
>> return READ_ONCE(kvm->arch.realm.state);
>> }
>> +static inline bool kvm_realm_is_created(struct kvm *kvm)
>> +{
>> + return kvm_is_realm(kvm) && kvm_realm_state(kvm) !=
>> REALM_STATE_NONE;
>> +}
>> +
>> static inline bool vcpu_is_rec(struct kvm_vcpu *vcpu)
>> {
>> return false;
>> diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/
>> asm/kvm_rme.h
>> index 69af5c3a1e44..209cd99f03dd 100644
>> --- a/arch/arm64/include/asm/kvm_rme.h
>> +++ b/arch/arm64/include/asm/kvm_rme.h
>> @@ -6,6 +6,8 @@
>> #ifndef __ASM_KVM_RME_H
>> #define __ASM_KVM_RME_H
>> +#include <uapi/linux/kvm.h>
>> +
>> /**
>> * enum realm_state - State of a Realm
>> */
>> @@ -46,11 +48,28 @@ enum realm_state {
>> * struct realm - Additional per VM data for a Realm
>> *
>> * @state: The lifetime state machine for the realm
>> + * @rd: Kernel mapping of the Realm Descriptor (RD)
>> + * @params: Parameters for the RMI_REALM_CREATE command
>> + * @num_aux: The number of auxiliary pages required by the RMM
>> + * @vmid: VMID to be used by the RMM for the realm
>> + * @ia_bits: Number of valid Input Address bits in the IPA
>> */
>> struct realm {
>> enum realm_state state;
>> +
>> + void *rd;
>> + struct realm_params *params;
>> +
>> + unsigned long num_aux;
>> + unsigned int vmid;
>> + unsigned int ia_bits;
>> };
>> void kvm_init_rme(void);
>> +u32 kvm_realm_ipa_limit(void);
>> +
>> +int kvm_realm_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap);
>> +int kvm_init_realm_vm(struct kvm *kvm);
>> +void kvm_destroy_realm(struct kvm *kvm);
>> #endif
>> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
>> index 93087eca9c51..2bbf30d66424 100644
>> --- a/arch/arm64/kvm/arm.c
>> +++ b/arch/arm64/kvm/arm.c
>> @@ -153,6 +153,13 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
>> }
>> mutex_unlock(&kvm->slots_lock);
>> break;
>> + case KVM_CAP_ARM_RME:
>> + if (!kvm_is_realm(kvm))
>> + return -EINVAL;
>> + mutex_lock(&kvm->lock);
>> + r = kvm_realm_enable_cap(kvm, cap);
>> + mutex_unlock(&kvm->lock);
>> + break;
>> default:
>> break;
>> }
>
> The check of !kvm_is_realm() seems unnecessary because the same check
> will be done in kvm_realm_enable_cap().
Indeed, I'll drop this check.
>> @@ -215,6 +222,13 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned
>> long type)
>> bitmap_zero(kvm->arch.vcpu_features, KVM_VCPU_MAX_FEATURES);
>> + /* Initialise the realm bits after the generic bits are enabled */
>> + if (kvm_is_realm(kvm)) {
>> + ret = kvm_init_realm_vm(kvm);
>> + if (ret)
>> + goto err_free_cpumask;
>> + }
>> +
>> return 0;
>> err_free_cpumask:
>> @@ -274,6 +288,7 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
>> kvm_unshare_hyp(kvm, kvm + 1);
>> kvm_arm_teardown_hypercalls(kvm);
>> + kvm_destroy_realm(kvm);
>> }
>> static bool kvm_has_full_ptr_auth(void)
>> @@ -421,6 +436,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm,
>> long ext)
>> case KVM_CAP_ARM_SUPPORTED_REG_MASK_RANGES:
>> r = BIT(0);
>> break;
>> + case KVM_CAP_ARM_RME:
>> + r = static_key_enabled(&kvm_rme_is_available);
>> + break;
>> default:
>> r = 0;
>> }
>> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>> index c9d46ad57e52..f09d580c12ad 100644
>> --- a/arch/arm64/kvm/mmu.c
>> +++ b/arch/arm64/kvm/mmu.c
>> @@ -863,12 +863,16 @@ static struct kvm_pgtable_mm_ops kvm_s2_mm_ops = {
>> .icache_inval_pou = invalidate_icache_guest_page,
>> };
>> -static int kvm_init_ipa_range(struct kvm_s2_mmu *mmu, unsigned long
>> type)
>> +static int kvm_init_ipa_range(struct kvm *kvm,
>> + struct kvm_s2_mmu *mmu, unsigned long type)
>> {
>> u32 kvm_ipa_limit = get_kvm_ipa_limit();
>> u64 mmfr0, mmfr1;
>> u32 phys_shift;
>> + if (kvm_is_realm(kvm))
>> + kvm_ipa_limit = kvm_realm_ipa_limit();
>> +
>> if (type & ~KVM_VM_TYPE_ARM_IPA_SIZE_MASK)
>> return -EINVAL;
>> @@ -933,7 +937,7 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct
>> kvm_s2_mmu *mmu, unsigned long t
>> return -EINVAL;
>> }
>> - err = kvm_init_ipa_range(mmu, type);
>> + err = kvm_init_ipa_range(kvm, mmu, type);
>> if (err)
>> return err;
>> @@ -1056,6 +1060,20 @@ void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu)
>> struct kvm_pgtable *pgt = NULL;
>> write_lock(&kvm->mmu_lock);
>> + if (kvm_is_realm(kvm) &&
>> + (kvm_realm_state(kvm) != REALM_STATE_DEAD &&
>> + kvm_realm_state(kvm) != REALM_STATE_NONE)) {
>> + /* Tearing down RTTs will be added in a later patch */
>> + write_unlock(&kvm->mmu_lock);
>> +
>> + /*
>> + * The physical PGD pages are delegated to the RMM, so cannot
>> + * be freed at this point. This function will be called again
>> + * from kvm_destroy_realm() after the physical pages have been
>> + * returned at which point the memory can be freed.
>> + */
>> + return;
>> + }
>> pgt = mmu->pgt;
>> if (pgt) {
>> mmu->pgd_phys = 0;
>> diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
>> index b88269b80c11..5530ec2653b7 100644
>> --- a/arch/arm64/kvm/rme.c
>> +++ b/arch/arm64/kvm/rme.c
>> @@ -5,9 +5,20 @@
>> #include <linux/kvm_host.h>
>> +#include <asm/kvm_emulate.h>
>> +#include <asm/kvm_mmu.h>
>> #include <asm/rmi_cmds.h>
>> #include <asm/virt.h>
>> +#include <asm/kvm_pgtable.h>
>> +
>> +static unsigned long rmm_feat_reg0;
>> +
>> +static bool rme_supports(unsigned long feature)
>> +{
>> + return !!u64_get_bits(rmm_feat_reg0, feature);
>> +}
>> +
>
> I don't feel strongly, but the name could be rme_has_feature() since
> the source is the RMM feature register-0.
Sure.
>> static int rmi_check_version(void)
>> {
>> struct arm_smccc_res res;
>> @@ -36,6 +47,285 @@ static int rmi_check_version(void)
>> return 0;
>> }
>> +u32 kvm_realm_ipa_limit(void)
>> +{
>> + return u64_get_bits(rmm_feat_reg0, RMI_FEATURE_REGISTER_0_S2SZ);
>> +}
>> +
>> +static int get_start_level(struct realm *realm)
>> +{
>> + return 4 - stage2_pgtable_levels(realm->ia_bits);
>> +}
>> +
>> +static void free_delegated_granule(phys_addr_t phys)
>> +{
>> + if (WARN_ON(rmi_granule_undelegate(phys))) {
>> + /* Undelegate failed: leak the page */
>> + return;
>> + }
>> +
>> + free_page((unsigned long)phys_to_virt(phys));
>> +}
>> +
>> +static int realm_create_rd(struct kvm *kvm)
>> +{
>> + struct realm *realm = &kvm->arch.realm;
>> + struct realm_params *params = realm->params;
>> + void *rd = NULL;
>> + phys_addr_t rd_phys, params_phys;
>> + size_t pgd_size = kvm_pgtable_stage2_pgd_size(kvm->arch.mmu.vtcr);
>> + int i, r;
>> +
>> + if (WARN_ON(realm->rd) || WARN_ON(!realm->params))
>> + return -EEXIST;
>> +
>> + rd = (void *)__get_free_page(GFP_KERNEL);
>> + if (!rd)
>> + return -ENOMEM;
>> +
>> + rd_phys = virt_to_phys(rd);
>> + if (rmi_granule_delegate(rd_phys)) {
>> + r = -ENXIO;
>> + goto free_rd;
>> + }
>> +
>> + for (i = 0; i < pgd_size; i += PAGE_SIZE) {
>> + phys_addr_t pgd_phys = kvm->arch.mmu.pgd_phys + i;
>> +
>> + if (rmi_granule_delegate(pgd_phys)) {
>> + r = -ENXIO;
>> + goto out_undelegate_tables;
>> + }
>> + }
>> +
>
> The step of 'i' is PAGE_SIZE here, inconsistent with the unwind loop
> at the 'out_undelegate_tables' label, where the step is 1 and the
> offset is multiplied by PAGE_SIZE.
Ouch, good spot!
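I'm thinking of something like this (sketch), stepping back by PAGE_SIZE
to match the delegate loop:

out_undelegate_tables:
	while (i > 0) {
		phys_addr_t pgd_phys;

		i -= PAGE_SIZE;
		pgd_phys = kvm->arch.mmu.pgd_phys + i;
		if (WARN_ON(rmi_granule_undelegate(pgd_phys))) {
			/* Leak the pages if they cannot be returned */
			kvm->arch.mmu.pgt = NULL;
			break;
		}
	}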
Thanks!
Steve
>> + realm->ia_bits = VTCR_EL2_IPA(kvm->arch.mmu.vtcr);
>> +
>> + params->s2sz = VTCR_EL2_IPA(kvm->arch.mmu.vtcr);
>> + params->rtt_level_start = get_start_level(realm);
>> + params->rtt_num_start = pgd_size / PAGE_SIZE;
>> + params->rtt_base = kvm->arch.mmu.pgd_phys;
>> + params->vmid = realm->vmid;
>> +
>> + params_phys = virt_to_phys(params);
>> +
>> + if (rmi_realm_create(rd_phys, params_phys)) {
>> + r = -ENXIO;
>> + goto out_undelegate_tables;
>> + }
>> +
>> + if (WARN_ON(rmi_rec_aux_count(rd_phys, &realm->num_aux))) {
>> + WARN_ON(rmi_realm_destroy(rd_phys));
>> + goto out_undelegate_tables;
>> + }
>> +
>> + realm->rd = rd;
>> +
>> + return 0;
>> +
>> +out_undelegate_tables:
>> + while (--i >= 0) {
>> + phys_addr_t pgd_phys = kvm->arch.mmu.pgd_phys + i * PAGE_SIZE;
>> +
>> + if (WARN_ON(rmi_granule_undelegate(pgd_phys))) {
>> + /* Leak the pages if they cannot be returned */
>> + kvm->arch.mmu.pgt = NULL;
>> + break;
>> + }
>> + }
>> + if (WARN_ON(rmi_granule_undelegate(rd_phys))) {
>> + /* Leak the page if it isn't returned */
>> + return r;
>> + }
>> +free_rd:
>> + free_page((unsigned long)rd);
>> + return r;
>> +}
>> +
>> +/* Protects access to rme_vmid_bitmap */
>> +static DEFINE_SPINLOCK(rme_vmid_lock);
>> +static unsigned long *rme_vmid_bitmap;
>> +
>> +static int rme_vmid_init(void)
>> +{
>> + unsigned int vmid_count = 1 << kvm_get_vmid_bits();
>> +
>> + rme_vmid_bitmap = bitmap_zalloc(vmid_count, GFP_KERNEL);
>> + if (!rme_vmid_bitmap) {
>> + kvm_err("%s: Couldn't allocate rme vmid bitmap\n", __func__);
>> + return -ENOMEM;
>> + }
>> +
>> + return 0;
>> +}
>> +
>> +static int rme_vmid_reserve(void)
>> +{
>> + int ret;
>> + unsigned int vmid_count = 1 << kvm_get_vmid_bits();
>> +
>> + spin_lock(&rme_vmid_lock);
>> + ret = bitmap_find_free_region(rme_vmid_bitmap, vmid_count, 0);
>> + spin_unlock(&rme_vmid_lock);
>> +
>> + return ret;
>> +}
>> +
>> +static void rme_vmid_release(unsigned int vmid)
>> +{
>> + spin_lock(&rme_vmid_lock);
>> + bitmap_release_region(rme_vmid_bitmap, vmid, 0);
>> + spin_unlock(&rme_vmid_lock);
>> +}
>> +
>> +static int kvm_create_realm(struct kvm *kvm)
>> +{
>> + struct realm *realm = &kvm->arch.realm;
>> + int ret;
>> +
>> + if (!kvm_is_realm(kvm))
>> + return -EINVAL;
>
> The duplicated check has been done in kvm_realm_enable_cap(). So it can be
> dropped safely.
>
>> + if (kvm_realm_is_created(kvm))
>> + return -EEXIST;
>> +
>> + ret = rme_vmid_reserve();
>> + if (ret < 0)
>> + return ret;
>> + realm->vmid = ret;
>> +
>> + ret = realm_create_rd(kvm);
>> + if (ret) {
>> + rme_vmid_release(realm->vmid);
>> + return ret;
>> + }
>> +
>> + WRITE_ONCE(realm->state, REALM_STATE_NEW);
>> +
>> + /* The realm is up, free the parameters. */
>> + free_page((unsigned long)realm->params);
>> + realm->params = NULL;
>> +
>> + return 0;
>> +}
>> +
>> +static int config_realm_hash_algo(struct realm *realm,
>> + struct kvm_cap_arm_rme_config_item *cfg)
>> +{
>> + switch (cfg->hash_algo) {
>> + case KVM_CAP_ARM_RME_MEASUREMENT_ALGO_SHA256:
>> + if (!rme_supports(RMI_FEATURE_REGISTER_0_HASH_SHA_256))
>> + return -EINVAL;
>> + break;
>> + case KVM_CAP_ARM_RME_MEASUREMENT_ALGO_SHA512:
>> + if (!rme_supports(RMI_FEATURE_REGISTER_0_HASH_SHA_512))
>> + return -EINVAL;
>> + break;
>> + default:
>> + return -EINVAL;
>> + }
>> + realm->params->hash_algo = cfg->hash_algo;
>> + return 0;
>> +}
>> +
>> +static int kvm_rme_config_realm(struct kvm *kvm, struct
>> kvm_enable_cap *cap)
>> +{
>> + struct kvm_cap_arm_rme_config_item cfg;
>> + struct realm *realm = &kvm->arch.realm;
>> + int r = 0;
>> +
>> + if (kvm_realm_is_created(kvm))
>> + return -EBUSY;
>> +
>> + if (copy_from_user(&cfg, (void __user *)cap->args[1], sizeof(cfg)))
>> + return -EFAULT;
>> +
>> + switch (cfg.cfg) {
>> + case KVM_CAP_ARM_RME_CFG_RPV:
>> + memcpy(&realm->params->rpv, &cfg.rpv, sizeof(cfg.rpv));
>> + break;
>> + case KVM_CAP_ARM_RME_CFG_HASH_ALGO:
>> + r = config_realm_hash_algo(realm, &cfg);
>> + break;
>> + default:
>> + r = -EINVAL;
>> + }
>> +
>> + return r;
>> +}
>> +
>> +int kvm_realm_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap)
>> +{
>> + int r = 0;
>> +
>> + if (!kvm_is_realm(kvm))
>> + return -EINVAL;
>> +
>> + switch (cap->args[0]) {
>> + case KVM_CAP_ARM_RME_CONFIG_REALM:
>> + r = kvm_rme_config_realm(kvm, cap);
>> + break;
>> + case KVM_CAP_ARM_RME_CREATE_RD:
>> + r = kvm_create_realm(kvm);
>> + break;
>> + default:
>> + r = -EINVAL;
>> + break;
>> + }
>> +
>> + return r;
>> +}
>> +
>> +void kvm_destroy_realm(struct kvm *kvm)
>> +{
>> + struct realm *realm = &kvm->arch.realm;
>> + size_t pgd_size = kvm_pgtable_stage2_pgd_size(kvm->arch.mmu.vtcr);
>> + int i;
>> +
>> + if (realm->params) {
>> + free_page((unsigned long)realm->params);
>> + realm->params = NULL;
>> + }
>> +
>> + if (!kvm_realm_is_created(kvm))
>> + return;
>> +
>> + WRITE_ONCE(realm->state, REALM_STATE_DYING);
>> +
>> + if (realm->rd) {
>> + phys_addr_t rd_phys = virt_to_phys(realm->rd);
>> +
>> + if (WARN_ON(rmi_realm_destroy(rd_phys)))
>> + return;
>> + free_delegated_granule(rd_phys);
>> + realm->rd = NULL;
>> + }
>> +
>> + rme_vmid_release(realm->vmid);
>> +
>> + for (i = 0; i < pgd_size; i += PAGE_SIZE) {
>> + phys_addr_t pgd_phys = kvm->arch.mmu.pgd_phys + i;
>> +
>> + if (WARN_ON(rmi_granule_undelegate(pgd_phys)))
>> + return;
>> + }
>> +
>> + WRITE_ONCE(realm->state, REALM_STATE_DEAD);
>> +
>> + /* Now that the Realm is destroyed, free the entry level RTTs */
>> + kvm_free_stage2_pgd(&kvm->arch.mmu);
>> +}
>> +
>> +int kvm_init_realm_vm(struct kvm *kvm)
>> +{
>> + struct realm_params *params;
>> +
>> + params = (struct realm_params *)get_zeroed_page(GFP_KERNEL);
>> + if (!params)
>> + return -ENOMEM;
>> +
>> + kvm->arch.realm.params = params;
>> + return 0;
>> +}
>> +
>> void kvm_init_rme(void)
>> {
>> if (PAGE_SIZE != SZ_4K)
>> @@ -46,5 +336,11 @@ void kvm_init_rme(void)
>> /* Continue without realm support */
>> return;
>> + if (WARN_ON(rmi_features(0, &rmm_feat_reg0)))
>> + return;
>> +
>> + if (rme_vmid_init())
>> + return;
>> +
>> /* Future patch will enable static branch kvm_rme_is_available */
>> }
>
> Thanks,
> Gavin
>
* Re: [PATCH v6 10/43] arm64: kvm: Allow passing machine type in KVM creation
2025-01-29 4:07 ` Gavin Shan
@ 2025-01-30 14:14 ` Steven Price
0 siblings, 0 replies; 90+ messages in thread
From: Steven Price @ 2025-01-30 14:14 UTC (permalink / raw)
To: Gavin Shan, kvm, kvmarm
Cc: Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
Oliver Upton, Suzuki K Poulose, Zenghui Yu, linux-arm-kernel,
linux-kernel, Joey Gouly, Alexandru Elisei, Christoffer Dall,
Fuad Tabba, linux-coco, Ganapatrao Kulkarni, Shanker Donthineni,
Alper Gun, Aneesh Kumar K . V
On 29/01/2025 04:07, Gavin Shan wrote:
> On 12/13/24 1:55 AM, Steven Price wrote:
>> Previously machine type was used purely for specifying the physical
>> address size of the guest. Reserve the higher bits to specify an ARM
>> specific machine type and declare a new type 'KVM_VM_TYPE_ARM_REALM'
>> used to create a realm guest.
>>
>> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
>> Signed-off-by: Steven Price <steven.price@arm.com>
>> ---
>> arch/arm64/kvm/arm.c | 17 +++++++++++++++++
>> arch/arm64/kvm/mmu.c | 3 ---
>> include/uapi/linux/kvm.h | 19 +++++++++++++++----
>> 3 files changed, 32 insertions(+), 7 deletions(-)
>>
>> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
>> index c505ec61180a..73016e1e0067 100644
>> --- a/arch/arm64/kvm/arm.c
>> +++ b/arch/arm64/kvm/arm.c
>> @@ -207,6 +207,23 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned
>> long type)
>> mutex_unlock(&kvm->lock);
>> #endif
>> + if (type & ~(KVM_VM_TYPE_ARM_MASK |
>> KVM_VM_TYPE_ARM_IPA_SIZE_MASK))
>> + return -EINVAL;
>> +
>> + switch (type & KVM_VM_TYPE_ARM_MASK) {
>> + case KVM_VM_TYPE_ARM_NORMAL:
>> + break;
>> + case KVM_VM_TYPE_ARM_REALM:
>> + kvm->arch.is_realm = true;
>> + if (!kvm_is_realm(kvm)) {
>> + /* Realm support unavailable */
>> + return -EINVAL;
>> + }
>> + break;
>> + default:
>> + return -EINVAL;
>> + }
>> +
>> kvm_init_nested(kvm);
>> ret = kvm_share_hyp(kvm, kvm + 1);
>
> As with my comments on PATCH[6], this block of code can be modified
> to avoid using kvm_is_realm() here. In this way, kvm_is_realm() can be
> simplified as I commented for PATCH[6].
>
> 	case KVM_VM_TYPE_ARM_REALM:
> 		if (!static_branch_unlikely(&kvm_rme_is_available))
> 			return -EPERM; /* -EPERM may be more suitable than -EINVAL */
>
> 		kvm->arch.is_realm = true;
> 		break;
Yes that's more readable. I'd used kvm_is_realm() because I wanted to
keep the check on kvm_rme_is_available to one place, but coming back to
the code there's definitely a "Huh?" moment from setting 'is_realm' and
then testing if it's a realm!
I also agree -EPERM is probably better to signify that the kernel
supports realms but the hardware doesn't.
Thanks,
Steve
* Re: [PATCH v6 12/43] arm64: RME: Allocate/free RECs to match vCPUs
2025-01-29 4:50 ` Gavin Shan
@ 2025-01-30 16:40 ` Steven Price
2025-02-03 11:18 ` Suzuki K Poulose
0 siblings, 1 reply; 90+ messages in thread
From: Steven Price @ 2025-01-30 16:40 UTC (permalink / raw)
To: Gavin Shan, kvm, kvmarm
Cc: Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
Oliver Upton, Suzuki K Poulose, Zenghui Yu, linux-arm-kernel,
linux-kernel, Joey Gouly, Alexandru Elisei, Christoffer Dall,
Fuad Tabba, linux-coco, Ganapatrao Kulkarni, Shanker Donthineni,
Alper Gun, Aneesh Kumar K . V
On 29/01/2025 04:50, Gavin Shan wrote:
> On 12/13/24 1:55 AM, Steven Price wrote:
>> The RMM maintains a data structure known as the Realm Execution Context
>> (or REC). It is similar to struct kvm_vcpu and tracks the state of the
>> virtual CPUs. KVM must delegate memory and request the structures are
>> created when vCPUs are created, and suitably tear down on destruction.
>>
>> RECs must also be supplied with additional pages - auxiliary (or AUX)
>> granules - for storing the larger registers state (e.g. for SVE). The
>> number of AUX granules for a REC depends on the parameters with which
>> the Realm was created - the RMM makes this information available via the
>> RMI_REC_AUX_COUNT call performed after creating the Realm Descriptor
>> (RD).
>>
>> Note that only some of register state for the REC can be set by KVM, the
>> rest is defined by the RMM (zeroed). The register state then cannot be
>> changed by KVM after the REC is created (except when the guest
>> explicitly requests this e.g. by performing a PSCI call). The RMM also
>> requires that the VMM creates RECs in ascending order of the MPIDR.
>>
>> See Realm Management Monitor specification (DEN0137) for more
>> information:
>> https://developer.arm.com/documentation/den0137/
>>
>> Signed-off-by: Steven Price <steven.price@arm.com>
>> ---
>> Changes since v5:
>> * Separate the concept of vcpu_is_rec() and
>> kvm_arm_vcpu_rec_finalized() by using the KVM_ARM_VCPU_REC feature as
>> the indication that the VCPU is a REC.
>> Changes since v2:
>> * Free rec->run earlier in kvm_destroy_realm() and adapt to previous
>> patches.
>> ---
>> arch/arm64/include/asm/kvm_emulate.h | 7 ++
>> arch/arm64/include/asm/kvm_host.h | 3 +
>> arch/arm64/include/asm/kvm_rme.h | 18 ++++
>> arch/arm64/kvm/arm.c | 9 ++
>> arch/arm64/kvm/reset.c | 11 ++
>> arch/arm64/kvm/rme.c | 144 +++++++++++++++++++++++++++
>> 6 files changed, 192 insertions(+)
>>
>> diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/
>> include/asm/kvm_emulate.h
>> index 27f54a7778aa..ec2b6d9c9c07 100644
>> --- a/arch/arm64/include/asm/kvm_emulate.h
>> +++ b/arch/arm64/include/asm/kvm_emulate.h
>> @@ -722,7 +722,14 @@ static inline bool kvm_realm_is_created(struct
>> kvm *kvm)
>> static inline bool vcpu_is_rec(struct kvm_vcpu *vcpu)
>> {
>> + if (static_branch_unlikely(&kvm_rme_is_available))
>> + return vcpu_has_feature(vcpu, KVM_ARM_VCPU_REC);
>> return false;
>> }
>>
>
> It seems the check on kvm_rme_is_available is unnecessary because
> KVM_ARM_VCPU_REC can only be true when kvm_rme_is_available is true.
Similar to a previous patch - the check of the static key is to avoid
overhead on systems without RME.
>> +static inline bool kvm_arm_vcpu_rec_finalized(struct kvm_vcpu *vcpu)
>> +{
>> + return vcpu->arch.rec.mpidr != INVALID_HWID;
>> +}
>> +
>> #endif /* __ARM64_KVM_EMULATE_H__ */
>
> I would suggest to rename to kvm_arm_rec_finalized() since vCPU and REC are
> similar objects at the same level. It'd better to avoid duplicate object
> name reference in the function name.
Ack
>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/
>> asm/kvm_host.h
>> index 8482638dce3b..220195c727ef 100644
>> --- a/arch/arm64/include/asm/kvm_host.h
>> +++ b/arch/arm64/include/asm/kvm_host.h
>> @@ -789,6 +789,9 @@ struct kvm_vcpu_arch {
>> /* Per-vcpu CCSIDR override or NULL */
>> u32 *ccsidr;
>> +
>> + /* Realm meta data */
>> + struct realm_rec rec;
>> };
>> /*
>> diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/
>> asm/kvm_rme.h
>> index 32bdedf1d866..62d4a63d3035 100644
>> --- a/arch/arm64/include/asm/kvm_rme.h
>> +++ b/arch/arm64/include/asm/kvm_rme.h
>> @@ -6,6 +6,7 @@
>> #ifndef __ASM_KVM_RME_H
>> #define __ASM_KVM_RME_H
>> +#include <asm/rmi_smc.h>
>> #include <uapi/linux/kvm.h>
>> /**
>> @@ -65,6 +66,21 @@ struct realm {
>> unsigned int ia_bits;
>> };
>> +/**
>> + * struct realm_rec - Additional per VCPU data for a Realm
>> + *
>> + * @mpidr: MPIDR (Multiprocessor Affinity Register) value to identify
>> this VCPU
>> + * @rec_page: Kernel VA of the RMM's private page for this REC
>> + * @aux_pages: Additional pages private to the RMM for this REC
>> + * @run: Kernel VA of the RmiRecRun structure shared with the RMM
>> + */
>> +struct realm_rec {
>> + unsigned long mpidr;
>> + void *rec_page;
>> + struct page *aux_pages[REC_PARAMS_AUX_GRANULES];
>> + struct rec_run *run;
>> +};
>> +
>> void kvm_init_rme(void);
>> u32 kvm_realm_ipa_limit(void);
>> @@ -72,6 +88,8 @@ int kvm_realm_enable_cap(struct kvm *kvm, struct
>> kvm_enable_cap *cap);
>> int kvm_init_realm_vm(struct kvm *kvm);
>> void kvm_destroy_realm(struct kvm *kvm);
>> void kvm_realm_destroy_rtts(struct kvm *kvm, u32 ia_bits);
>> +int kvm_create_rec(struct kvm_vcpu *vcpu);
>> +void kvm_destroy_rec(struct kvm_vcpu *vcpu);
>> #define RMM_RTT_BLOCK_LEVEL 2
>> #define RMM_RTT_MAX_LEVEL 3
>> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
>> index 73016e1e0067..2d97147651be 100644
>> --- a/arch/arm64/kvm/arm.c
>> +++ b/arch/arm64/kvm/arm.c
>> @@ -525,6 +525,8 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
>> /* Force users to call KVM_ARM_VCPU_INIT */
>> vcpu_clear_flag(vcpu, VCPU_INITIALIZED);
>> + vcpu->arch.rec.mpidr = INVALID_HWID;
>> +
>> vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
>> /* Set up the timer */
>> @@ -1467,6 +1469,9 @@ static unsigned long
>> system_supported_vcpu_features(void)
>> if (!cpus_have_final_cap(ARM64_HAS_NESTED_VIRT))
>> clear_bit(KVM_ARM_VCPU_HAS_EL2, &features);
>> + if (!static_branch_unlikely(&kvm_rme_is_available))
>> + clear_bit(KVM_ARM_VCPU_REC, &features);
>> +
>> return features;
>> }
>> @@ -1506,6 +1511,10 @@ static int
>> kvm_vcpu_init_check_features(struct kvm_vcpu *vcpu,
>> if (test_bit(KVM_ARM_VCPU_HAS_EL2, &features))
>> return -EINVAL;
>> + /* RME is incompatible with AArch32 */
>> + if (test_bit(KVM_ARM_VCPU_REC, &features))
>> + return -EINVAL;
>> +
>> return 0;
>> }
>>
>
> One corner case seems to be missed in kvm_vcpu_init_check_features():
> it's allowed to initialize a vCPU with the REC feature even when
> kvm_is_realm(kvm) is false. Hopefully I didn't miss something.
Ah, yes good point. I'll pass a kvm pointer to
kvm_vcpu_init_check_features() and use kvm_is_realm() in there.
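Roughly (a sketch only; whether that means a new parameter or just using
vcpu->kvm is a detail to sort out):

	/* In kvm_vcpu_init_check_features(): reject REC for non-realm VMs */
	if (test_bit(KVM_ARM_VCPU_REC, &features) && !kvm_is_realm(vcpu->kvm))
		return -EINVAL;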
Thanks,
Steve
> [...]
>
> Thanks,
> Gavin
>
* Re: [PATCH v6 16/43] arm64: RME: Allow VMM to set RIPAS
2025-01-29 23:25 ` Gavin Shan
@ 2025-01-30 16:56 ` Steven Price
0 siblings, 0 replies; 90+ messages in thread
From: Steven Price @ 2025-01-30 16:56 UTC (permalink / raw)
To: Gavin Shan, kvm, kvmarm
Cc: Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
Oliver Upton, Suzuki K Poulose, Zenghui Yu, linux-arm-kernel,
linux-kernel, Joey Gouly, Alexandru Elisei, Christoffer Dall,
Fuad Tabba, linux-coco, Ganapatrao Kulkarni, Shanker Donthineni,
Alper Gun, Aneesh Kumar K . V
On 29/01/2025 23:25, Gavin Shan wrote:
> On 12/13/24 1:55 AM, Steven Price wrote:
>> Each page within the protected region of the realm guest can be marked
>> as either RAM or EMPTY. Allow the VMM to control this before the guest
>> has started and provide the equivalent functions to change this (with
>> the guest's approval) at runtime.
>>
>> When transitioning from RIPAS RAM (1) to RIPAS EMPTY (0) the memory is
>> unmapped from the guest and undelegated allowing the memory to be reused
>> by the host. When transitioning to RIPAS RAM the actual population of
>> the leaf RTTs is done later on stage 2 fault, however it may be
>> necessary to allocate additional RTTs to allow the RMM to track the RIPAS
>> for the requested range.
>>
>> When freeing a block mapping it is necessary to temporarily unfold the
>> RTT which requires delegating an extra page to the RMM, this page can
>> then be recovered once the contents of the block mapping have been
>> freed.
>>
>> Signed-off-by: Steven Price <steven.price@arm.com>
>> ---
>> Changes from v5:
>> * Adapt to rebasing.
>> * Introduce find_map_level()
>> * Rename some functions to be clearer.
>> * Drop the "spare page" functionality.
>> Changes from v2:
>> * {alloc,free}_delegated_page() moved from previous patch to this one.
>> * alloc_delegated_page() now takes a gfp_t flags parameter.
>> * Fix the reference counting of guestmem pages to avoid leaking memory.
>> * Several misc code improvements and extra comments.
>> ---
>> arch/arm64/include/asm/kvm_rme.h | 17 ++
>> arch/arm64/kvm/mmu.c | 8 +-
>> arch/arm64/kvm/rme.c | 411 +++++++++++++++++++++++++++++++
>> 3 files changed, 433 insertions(+), 3 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/
>> asm/kvm_rme.h
>> index be64b749fcac..4e7758f0e4b5 100644
>> --- a/arch/arm64/include/asm/kvm_rme.h
>> +++ b/arch/arm64/include/asm/kvm_rme.h
>> @@ -92,6 +92,15 @@ void kvm_realm_destroy_rtts(struct kvm *kvm, u32
>> ia_bits);
>> int kvm_create_rec(struct kvm_vcpu *vcpu);
>> void kvm_destroy_rec(struct kvm_vcpu *vcpu);
>> +void kvm_realm_unmap_range(struct kvm *kvm,
>> + unsigned long ipa,
>> + u64 size,
>> + bool unmap_private);
>> +int realm_set_ipa_state(struct kvm_vcpu *vcpu,
>> + unsigned long addr, unsigned long end,
>> + unsigned long ripas,
>> + unsigned long *top_ipa);
>> +
>
> The declaration of realm_set_ipa_state() is unnecessary since its scope has
> been limited to rme.c
Ack, the function can be static too.
>> #define RMM_RTT_BLOCK_LEVEL 2
>> #define RMM_RTT_MAX_LEVEL 3
>> @@ -110,4 +119,12 @@ static inline unsigned long
>> rme_rtt_level_mapsize(int level)
>> return (1UL << RMM_RTT_LEVEL_SHIFT(level));
>> }
>> +static inline bool realm_is_addr_protected(struct realm *realm,
>> + unsigned long addr)
>> +{
>> + unsigned int ia_bits = realm->ia_bits;
>> +
>> + return !(addr & ~(BIT(ia_bits - 1) - 1));
>> +}
>> +
>> #endif
>
> The check on the specified address to determine its range seems a bit
> complicated to me; it can be simplified as below. Besides, it may be a
> good idea to rename it to have the prefix "kvm_realm_".
>
> static inline bool kvm_realm_is_{private|protected}_address(struct realm *realm,
>                                                             unsigned long addr)
> {
>         return !(addr & BIT(realm->ia_bits - 1));
> }
Ack
> A question related to the terms used in this series to describe a
> granule's state: "protected" or "private", "unprotected" or "shared".
> Those terms are all used in the function names of this series. It would
> be nice to unify on "private" and "shared", which is consistent with
> the terms used by guest-memfd. For example,
> kvm_realm_is_protected_address() can be renamed to
> kvm_realm_is_private_address().
Happy with the rename here. More generally it's a little awkward because
the RMM spec does refer to protected/unprotected (e.g.
RMI_RTT_MAP_UNPROTECTED). So there's always a choice between aligning
with the RMM spec or aligning with guest-memfd.
Thanks,
Steve
* Re: [PATCH v6 22/43] KVM: arm64: Validate register access for a Realm VM
2024-12-12 15:55 ` [PATCH v6 22/43] KVM: arm64: Validate register access for a Realm VM Steven Price
@ 2025-02-02 1:22 ` Gavin Shan
2025-02-07 17:04 ` Steven Price
0 siblings, 1 reply; 90+ messages in thread
From: Gavin Shan @ 2025-02-02 1:22 UTC (permalink / raw)
To: Steven Price, kvm, kvmarm
Cc: Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
Oliver Upton, Suzuki K Poulose, Zenghui Yu, linux-arm-kernel,
linux-kernel, Joey Gouly, Alexandru Elisei, Christoffer Dall,
Fuad Tabba, linux-coco, Ganapatrao Kulkarni, Shanker Donthineni,
Alper Gun, Aneesh Kumar K . V
On 12/13/24 1:55 AM, Steven Price wrote:
> The RMM only allows setting the GPRS (x0-x30) and PC for a realm
> guest. Check this in kvm_arm_set_reg() so that the VMM can receive a
> suitable error return if other registers are accessed.
>
> Signed-off-by: Steven Price <steven.price@arm.com>
> ---
> Changes since v5:
> * Upper GPRS can be set as part of a HOST_CALL return, so fix up the
> test to allow them.
> ---
> arch/arm64/kvm/guest.c | 43 ++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 43 insertions(+)
>
> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
> index 12dad841f2a5..1ee2fe072f1a 100644
> --- a/arch/arm64/kvm/guest.c
> +++ b/arch/arm64/kvm/guest.c
> @@ -73,6 +73,24 @@ static u64 core_reg_offset_from_id(u64 id)
> return id & ~(KVM_REG_ARCH_MASK | KVM_REG_SIZE_MASK | KVM_REG_ARM_CORE);
> }
>
> +static bool kvm_realm_validate_core_reg(u64 off)
> +{
> + /*
> + * Note that GPRs can only sometimes be controlled by the VMM.
> + * For PSCI only X0-X6 are used, higher registers are ignored (restored
> + * from the REC).
> + * For HOST_CALL all of X0-X30 are copied to the RsiHostCall structure.
> + * For emulated MMIO X0 is always used.
> + */
> + switch (off) {
> + case KVM_REG_ARM_CORE_REG(regs.regs[0]) ...
> + KVM_REG_ARM_CORE_REG(regs.regs[30]):
> + case KVM_REG_ARM_CORE_REG(regs.pc):
> + return true;
> + }
> + return false;
> +}
> +
> static int core_reg_size_from_offset(const struct kvm_vcpu *vcpu, u64 off)
> {
> int size;
> @@ -115,6 +133,9 @@ static int core_reg_size_from_offset(const struct kvm_vcpu *vcpu, u64 off)
> if (vcpu_has_sve(vcpu) && core_reg_offset_is_vreg(off))
> return -EINVAL;
>
> + if (kvm_is_realm(vcpu->kvm) && !kvm_realm_validate_core_reg(off))
> + return -EPERM;
> +
> return size;
> }
>
> @@ -783,12 +804,34 @@ int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
> return kvm_arm_sys_reg_get_reg(vcpu, reg);
> }
>
> +/*
> + * The RMI ABI only enables setting some GPRs and PC. The selection of GPRs
> + * that are available depends on the Realm state and the reason for the last
> + * exit. All other registers are reset to architectural or otherwise defined
> + * reset values by the RMM, except for a few configuration fields that
> + * correspond to Realm parameters.
> + */
> +static bool validate_realm_set_reg(struct kvm_vcpu *vcpu,
> + const struct kvm_one_reg *reg)
> +{
> + if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_CORE) {
> + u64 off = core_reg_offset_from_id(reg->id);
> +
> + return kvm_realm_validate_core_reg(off);
> + }
> +
> + return false;
> +}
> +
> int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
> {
> /* We currently use nothing arch-specific in upper 32 bits */
> if ((reg->id & ~KVM_REG_SIZE_MASK) >> 32 != KVM_REG_ARM64 >> 32)
> return -EINVAL;
>
> + if (kvm_is_realm(vcpu->kvm) && !validate_realm_set_reg(vcpu, reg))
> + return -EINVAL;
> +
> switch (reg->id & KVM_REG_ARM_COPROC_MASK) {
> case KVM_REG_ARM_CORE: return set_core_reg(vcpu, reg);
> case KVM_REG_ARM_FW:
It looks like the core registers in kvm_arm_set_reg() are validated twice:
ioctl(KVM_SET_ONE_REG)
  kvm_arm_set_reg
    validate_realm_set_reg
      kvm_realm_validate_core_reg        // 1
    set_core_reg
      core_reg_offset_from_id
      core_reg_addr
        core_reg_offset_from_id
        core_reg_size_from_offset
          kvm_realm_validate_core_reg    // 2
      copy_from_user
Besides, there are other types of registers that can be accessed by KVM_{GET, SET}_ONE_REG:
firmware and bitmap registers, SVE registers, timer registers, and system registers.
Do we need to hide any of them from user space? As I understand it, the SVE registers
at least are owned by the RMM and won't be exposed to user space.
Thanks,
Gavin
* Re: [PATCH v6 23/43] KVM: arm64: Handle Realm PSCI requests
2024-12-12 15:55 ` [PATCH v6 23/43] KVM: arm64: Handle Realm PSCI requests Steven Price
@ 2025-02-02 2:06 ` Gavin Shan
0 siblings, 0 replies; 90+ messages in thread
From: Gavin Shan @ 2025-02-02 2:06 UTC (permalink / raw)
To: Steven Price, kvm, kvmarm
Cc: Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
Oliver Upton, Suzuki K Poulose, Zenghui Yu, linux-arm-kernel,
linux-kernel, Joey Gouly, Alexandru Elisei, Christoffer Dall,
Fuad Tabba, linux-coco, Ganapatrao Kulkarni, Shanker Donthineni,
Alper Gun, Aneesh Kumar K . V
On 12/13/24 1:55 AM, Steven Price wrote:
> The RMM needs to be informed of the target REC when a PSCI call is made
> with an MPIDR argument. Expose an ioctl to the userspace in case the PSCI
> is handled by it.
>
> Co-developed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> Signed-off-by: Steven Price <steven.price@arm.com>
> ---
> arch/arm64/include/asm/kvm_rme.h | 3 +++
> arch/arm64/kvm/arm.c | 25 +++++++++++++++++++++++++
> arch/arm64/kvm/psci.c | 29 +++++++++++++++++++++++++++++
> arch/arm64/kvm/rme.c | 15 +++++++++++++++
> 4 files changed, 72 insertions(+)
>
> diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/asm/kvm_rme.h
> index 158f77e24a26..90a4537ad38d 100644
> --- a/arch/arm64/include/asm/kvm_rme.h
> +++ b/arch/arm64/include/asm/kvm_rme.h
> @@ -113,6 +113,9 @@ int realm_set_ipa_state(struct kvm_vcpu *vcpu,
> unsigned long addr, unsigned long end,
> unsigned long ripas,
> unsigned long *top_ipa);
> +int realm_psci_complete(struct kvm_vcpu *calling,
> + struct kvm_vcpu *target,
> + unsigned long status);
>
The 'calling' parameter could be renamed to 'source', consistent with the names in arch/arm64/kvm/psci.c.
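i.e.:

	int realm_psci_complete(struct kvm_vcpu *source,
				struct kvm_vcpu *target,
				unsigned long status);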
Other nitpicks can be found below.
> #define RMM_RTT_BLOCK_LEVEL 2
> #define RMM_RTT_MAX_LEVEL 3
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index f588b528c3f9..eff1a4ec892b 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -1745,6 +1745,22 @@ static int kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
> return __kvm_arm_vcpu_set_events(vcpu, events);
> }
>
> +static int kvm_arm_vcpu_rmm_psci_complete(struct kvm_vcpu *vcpu,
> + struct kvm_arm_rmm_psci_complete *arg)
> +{
> + struct kvm_vcpu *target = kvm_mpidr_to_vcpu(vcpu->kvm, arg->target_mpidr);
> +
> + if (!target)
> + return -EINVAL;
> +
> + /*
> + * RMM v1.0 only supports PSCI_RET_SUCCESS or PSCI_RET_DENIED
> + * for the status. But, let us leave it to the RMM to filter
> + * for making this future proof.
> + */
> + return realm_psci_complete(vcpu, target, arg->psci_status);
> +}
> +
> long kvm_arch_vcpu_ioctl(struct file *filp,
> unsigned int ioctl, unsigned long arg)
> {
> @@ -1867,6 +1883,15 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
>
> return kvm_arm_vcpu_finalize(vcpu, what);
> }
> + case KVM_ARM_VCPU_RMM_PSCI_COMPLETE: {
> + struct kvm_arm_rmm_psci_complete req;
> +
> + if (!kvm_is_realm(vcpu->kvm))
> + return -EINVAL;
We probably need to check with vcpu_is_rec(). -EPERM seems more precise
than -EINVAL.
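i.e. something like (untested):

	if (!vcpu_is_rec(vcpu))
		return -EPERM;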
> + if (copy_from_user(&req, argp, sizeof(req)))
> + return -EFAULT;
> + return kvm_arm_vcpu_rmm_psci_complete(vcpu, &req);
> + }
> default:
> r = -EINVAL;
> }
> diff --git a/arch/arm64/kvm/psci.c b/arch/arm64/kvm/psci.c
> index 3b5dbe9a0a0e..9dc161abc30c 100644
> --- a/arch/arm64/kvm/psci.c
> +++ b/arch/arm64/kvm/psci.c
> @@ -103,6 +103,12 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
>
> reset_state->reset = true;
> kvm_make_request(KVM_REQ_VCPU_RESET, vcpu);
> + /*
> + * Make sure we issue PSCI_COMPLETE before the VCPU can be
> + * scheduled.
> + */
> + if (vcpu_is_rec(vcpu))
> + realm_psci_complete(source_vcpu, vcpu, PSCI_RET_SUCCESS);
>
> /*
> * Make sure the reset request is observed if the RUNNABLE mp_state is
> @@ -115,6 +121,10 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
>
> out_unlock:
> spin_unlock(&vcpu->arch.mp_state_lock);
> + if (vcpu_is_rec(vcpu) && ret != PSCI_RET_SUCCESS)
> + realm_psci_complete(source_vcpu, vcpu,
> + ret == PSCI_RET_ALREADY_ON ?
> + PSCI_RET_SUCCESS : PSCI_RET_DENIED);
{} is needed here since the statement spans multiple lines.
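i.e. the hunk restated with braces:

	if (vcpu_is_rec(vcpu) && ret != PSCI_RET_SUCCESS) {
		realm_psci_complete(source_vcpu, vcpu,
				    ret == PSCI_RET_ALREADY_ON ?
				    PSCI_RET_SUCCESS : PSCI_RET_DENIED);
	}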
> return ret;
> }
>
> @@ -142,6 +152,25 @@ static unsigned long kvm_psci_vcpu_affinity_info(struct kvm_vcpu *vcpu)
> /* Ignore other bits of target affinity */
> target_affinity &= target_affinity_mask;
>
> + if (vcpu_is_rec(vcpu)) {
> + struct kvm_vcpu *target_vcpu;
> +
> + /* RMM supports only zero affinity level */
> + if (lowest_affinity_level != 0)
> + return PSCI_RET_INVALID_PARAMS;
> +
> + target_vcpu = kvm_mpidr_to_vcpu(kvm, target_affinity);
> + if (!target_vcpu)
> + return PSCI_RET_INVALID_PARAMS;
> +
> + /*
> + * Provide the references of running and target RECs to the RMM
^^^^^^^
the source
> + * so that the RMM can complete the PSCI request.
> + */
> + realm_psci_complete(vcpu, target_vcpu, PSCI_RET_SUCCESS);
> + return PSCI_RET_SUCCESS;
> + }
> +
> /*
> * If one or more VCPU matching target affinity are running
> * then ON else OFF
> diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
> index 146ef598a581..5831d379760a 100644
> --- a/arch/arm64/kvm/rme.c
> +++ b/arch/arm64/kvm/rme.c
> @@ -118,6 +118,21 @@ static void free_delegated_granule(phys_addr_t phys)
> free_page((unsigned long)phys_to_virt(phys));
> }
>
> +int realm_psci_complete(struct kvm_vcpu *calling, struct kvm_vcpu *target,
> + unsigned long status)
> +{
> + int ret;
> +
> + ret = rmi_psci_complete(virt_to_phys(calling->arch.rec.rec_page),
> + virt_to_phys(target->arch.rec.rec_page),
> + status);
> +
^^^^^
Unnecessary blank line.
> + if (ret)
> + return -EINVAL;
> +
> + return 0;
> +}
> +
> static int realm_rtt_create(struct realm *realm,
> unsigned long addr,
> int level,
Thanks,
Gavin
* Re: [PATCH v6 24/43] KVM: arm64: WARN on injected undef exceptions
2024-12-12 15:55 ` [PATCH v6 24/43] KVM: arm64: WARN on injected undef exceptions Steven Price
@ 2025-02-02 2:11 ` Gavin Shan
0 siblings, 0 replies; 90+ messages in thread
From: Gavin Shan @ 2025-02-02 2:11 UTC (permalink / raw)
To: Steven Price, kvm, kvmarm
Cc: Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
Oliver Upton, Suzuki K Poulose, Zenghui Yu, linux-arm-kernel,
linux-kernel, Joey Gouly, Alexandru Elisei, Christoffer Dall,
Fuad Tabba, linux-coco, Ganapatrao Kulkarni, Shanker Donthineni,
Alper Gun, Aneesh Kumar K . V
On 12/13/24 1:55 AM, Steven Price wrote:
> The RMM doesn't allow injection of an undefined exception into a realm
> guest. Add a WARN to catch if this ever happens.
>
> Signed-off-by: Steven Price <steven.price@arm.com>
> ---
> arch/arm64/kvm/inject_fault.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
> index 611867b81ac2..705463058ce0 100644
> --- a/arch/arm64/kvm/inject_fault.c
> +++ b/arch/arm64/kvm/inject_fault.c
> @@ -226,6 +226,8 @@ void kvm_inject_size_fault(struct kvm_vcpu *vcpu)
> */
> void kvm_inject_undefined(struct kvm_vcpu *vcpu)
> {
> + if (vcpu_is_rec(vcpu))
> + WARN(1, "Cannot inject undefined exception into REC. Continuing with unknown behaviour");
Maybe more compact with:
WARN(vcpu_is_rec(vcpu), "Unexpected undefined exception injection to REC");
> if (vcpu_el1_is_32bit(vcpu))
> inject_undef32(vcpu);
> else
Thanks,
Gavin
* Re: [PATCH v6 25/43] arm64: Don't expose stolen time for realm guests
2024-12-12 15:55 ` [PATCH v6 25/43] arm64: Don't expose stolen time for realm guests Steven Price
@ 2025-02-02 2:15 ` Gavin Shan
2025-02-07 17:05 ` Steven Price
0 siblings, 1 reply; 90+ messages in thread
From: Gavin Shan @ 2025-02-02 2:15 UTC (permalink / raw)
To: Steven Price, kvm, kvmarm
Cc: Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
Oliver Upton, Suzuki K Poulose, Zenghui Yu, linux-arm-kernel,
linux-kernel, Joey Gouly, Alexandru Elisei, Christoffer Dall,
Fuad Tabba, linux-coco, Ganapatrao Kulkarni, Shanker Donthineni,
Alper Gun, Aneesh Kumar K . V
On 12/13/24 1:55 AM, Steven Price wrote:
> It doesn't make much sense as a realm guest wouldn't want to trust the
> host. It will also need some extra work to ensure that KVM will only
> attempt to write into a shared memory region. So for now just disable
> it.
>
> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> Signed-off-by: Steven Price <steven.price@arm.com>
> ---
> arch/arm64/kvm/arm.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index eff1a4ec892b..134acb4ee26f 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -432,7 +432,10 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
> r = system_supports_mte();
> break;
> case KVM_CAP_STEAL_TIME:
> - r = kvm_arm_pvtime_supported();
> + if (kvm_is_realm(kvm))
> + r = 0;
> + else
> + r = kvm_arm_pvtime_supported();
> break;
kvm_vm_ioctl_check_extension() can be called on the file descriptor of "/dev/kvm".
In that case 'kvm' is NULL and kvm_is_realm() returns false, which is the missed corner case.
> case KVM_CAP_ARM_EL1_32BIT:
> r = cpus_have_final_cap(ARM64_HAS_32BIT_EL1);
Thanks,
Gavin
* Re: [PATCH v6 28/43] arm64: rme: Allow checking SVE on VM instance
2024-12-12 15:55 ` [PATCH v6 28/43] arm64: rme: Allow checking SVE on VM instance Steven Price
@ 2025-02-02 6:00 ` Gavin Shan
2025-02-07 17:05 ` Steven Price
0 siblings, 1 reply; 90+ messages in thread
From: Gavin Shan @ 2025-02-02 6:00 UTC (permalink / raw)
To: Steven Price, kvm, kvmarm
Cc: Suzuki K Poulose, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Zenghui Yu, linux-arm-kernel,
linux-kernel, Joey Gouly, Alexandru Elisei, Christoffer Dall,
Fuad Tabba, linux-coco, Ganapatrao Kulkarni, Shanker Donthineni,
Alper Gun, Aneesh Kumar K . V
On 12/13/24 1:55 AM, Steven Price wrote:
> From: Suzuki K Poulose <suzuki.poulose@arm.com>
>
> Given we have different types of VMs supported, check the
> support for SVE for the given instance of the VM to accurately
> report the status.
>
> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> Signed-off-by: Steven Price <steven.price@arm.com>
> ---
> arch/arm64/include/asm/kvm_rme.h | 2 ++
> arch/arm64/kvm/arm.c | 5 ++++-
> arch/arm64/kvm/rme.c | 5 +++++
> 3 files changed, 11 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/asm/kvm_rme.h
> index 90a4537ad38d..0d89ab1645c1 100644
> --- a/arch/arm64/include/asm/kvm_rme.h
> +++ b/arch/arm64/include/asm/kvm_rme.h
> @@ -85,6 +85,8 @@ void kvm_init_rme(void);
> u32 kvm_realm_ipa_limit(void);
> u32 kvm_realm_vgic_nr_lr(void);
>
> +bool kvm_rme_supports_sve(void);
> +
> int kvm_realm_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap);
> int kvm_init_realm_vm(struct kvm *kvm);
> void kvm_destroy_realm(struct kvm *kvm);
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 134acb4ee26f..6f7f96ab781d 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -456,7 +456,10 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
> r = get_kvm_ipa_limit();
> break;
> case KVM_CAP_ARM_SVE:
> - r = system_supports_sve();
> + if (kvm_is_realm(kvm))
> + r = kvm_rme_supports_sve();
> + else
> + r = system_supports_sve();
> break;
kvm_vm_ioctl_check_extension() can be called by ioctl(KVM_CHECK_EXTENSION) on the
file descriptor of '/dev/kvm'. kvm is NULL and kvm_is_realm() returns false in
this case.
kvm_dev_ioctl
kvm_vm_ioctl_check_extension_generic // kvm is NULL
kvm_vm_ioctl_check_extension
> case KVM_CAP_ARM_PTRAUTH_ADDRESS:
> case KVM_CAP_ARM_PTRAUTH_GENERIC:
> diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
> index 5831d379760a..27a479feb907 100644
> --- a/arch/arm64/kvm/rme.c
> +++ b/arch/arm64/kvm/rme.c
> @@ -20,6 +20,11 @@ static bool rme_supports(unsigned long feature)
> return !!u64_get_bits(rmm_feat_reg0, feature);
> }
>
> +bool kvm_rme_supports_sve(void)
> +{
> + return rme_supports(RMI_FEATURE_REGISTER_0_SVE_EN);
> +}
> +
If rme_supports() becomes a public helper, it can be used directly and
kvm_rme_supports_sve() can be dropped. RMI_FEATURE_REGISTER_0_SVE_EN is obvious
enough to indicate the corresponding feature.
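i.e. (sketch):

	case KVM_CAP_ARM_SVE:
		if (kvm_is_realm(kvm))
			r = rme_supports(RMI_FEATURE_REGISTER_0_SVE_EN);
		else
			r = system_supports_sve();
		break;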
> static int rmi_check_version(void)
> {
> struct arm_smccc_res res;
Thanks,
Gavin
* Re: [PATCH v6 27/43] arm64: rme: support RSI_HOST_CALL
2024-12-12 15:55 ` [PATCH v6 27/43] arm64: rme: support RSI_HOST_CALL Steven Price
@ 2025-02-02 6:41 ` Gavin Shan
2025-02-07 17:05 ` Steven Price
0 siblings, 1 reply; 90+ messages in thread
From: Gavin Shan @ 2025-02-02 6:41 UTC (permalink / raw)
To: Steven Price, kvm, kvmarm
Cc: Joey Gouly, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Shanker Donthineni, Alper Gun, Aneesh Kumar K . V
On 12/13/24 1:55 AM, Steven Price wrote:
> From: Joey Gouly <joey.gouly@arm.com>
>
> Forward RSI_HOST_CALLS to KVM's HVC handler.
>
> Signed-off-by: Joey Gouly <joey.gouly@arm.com>
> Signed-off-by: Steven Price <steven.price@arm.com>
> ---
> Changes since v4:
> * Setting GPRS is now done by kvm_rec_enter() rather than
> rec_exit_host_call() (see previous patch - arm64: RME: Handle realm
> enter/exit). This fixes a bug where the registers set by user space
> were being ignored.
> ---
> arch/arm64/kvm/rme-exit.c | 22 ++++++++++++++++++++++
> 1 file changed, 22 insertions(+)
>
> diff --git a/arch/arm64/kvm/rme-exit.c b/arch/arm64/kvm/rme-exit.c
> index 8f0f9ab57f28..b2a367474d74 100644
> --- a/arch/arm64/kvm/rme-exit.c
> +++ b/arch/arm64/kvm/rme-exit.c
> @@ -103,6 +103,26 @@ static int rec_exit_ripas_change(struct kvm_vcpu *vcpu)
> return 0;
> }
>
> +static int rec_exit_host_call(struct kvm_vcpu *vcpu)
> +{
> + int ret, i;
> + struct realm_rec *rec = &vcpu->arch.rec;
> +
> + vcpu->stat.hvc_exit_stat++;
> +
> + for (i = 0; i < REC_RUN_GPRS; i++)
> + vcpu_set_reg(vcpu, i, rec->run->exit.gprs[i]);
> +
> + ret = kvm_smccc_call_handler(vcpu);
> +
> + if (ret < 0) {
> + vcpu_set_reg(vcpu, 0, ~0UL);
> + ret = 1;
> + }
> +
> + return ret;
> +}
> +
It seems that the return value from kvm_smccc_call_handler() won't be negative. Besides,
the host call requests are currently handled by kvm_psci_call(), which isn't
what we want. So I think a new helper is needed and called from kvm_smccc_call_handler().
The new helper would simply push the error (NOT_SUPPORTED) to x0. Otherwise, an unexpected
return value will be seen by the guest.
handle_rec_exit
rec_exit_host_call
kvm_smccc_call_handler
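Something along these lines (rough sketch, helper name invented):

	static int kvm_realm_smccc_not_supported(struct kvm_vcpu *vcpu)
	{
		/* Surface NOT_SUPPORTED to the guest in x0 */
		vcpu_set_reg(vcpu, 0, SMCCC_RET_NOT_SUPPORTED);
		return 1;
	}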
> static void update_arch_timer_irq_lines(struct kvm_vcpu *vcpu)
> {
> struct realm_rec *rec = &vcpu->arch.rec;
> @@ -164,6 +184,8 @@ int handle_rec_exit(struct kvm_vcpu *vcpu, int rec_run_ret)
> return rec_exit_psci(vcpu);
> case RMI_EXIT_RIPAS_CHANGE:
> return rec_exit_ripas_change(vcpu);
> + case RMI_EXIT_HOST_CALL:
> + return rec_exit_host_call(vcpu);
> }
>
> kvm_pr_unimpl("Unsupported exit reason: %u\n",
Thanks,
Gavin
* Re: [PATCH v6 29/43] arm64: RME: Always use 4k pages for realms
2024-12-12 15:55 ` [PATCH v6 29/43] arm64: RME: Always use 4k pages for realms Steven Price
@ 2025-02-02 6:52 ` Gavin Shan
2025-02-07 17:05 ` Steven Price
0 siblings, 1 reply; 90+ messages in thread
From: Gavin Shan @ 2025-02-02 6:52 UTC (permalink / raw)
To: Steven Price, kvm, kvmarm
Cc: Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
Oliver Upton, Suzuki K Poulose, Zenghui Yu, linux-arm-kernel,
linux-kernel, Joey Gouly, Alexandru Elisei, Christoffer Dall,
Fuad Tabba, linux-coco, Ganapatrao Kulkarni, Shanker Donthineni,
Alper Gun, Aneesh Kumar K . V
On 12/13/24 1:55 AM, Steven Price wrote:
> Always split up huge pages to avoid problems managing huge pages. There
> are two issues currently:
>
> 1. The uABI for the VMM allows populating memory on 4k boundaries even
> if the underlying allocator (e.g. hugetlbfs) is using a larger page
> size. Using a memfd for private allocations will push this issue onto
> the VMM as it will need to respect the granularity of the allocator.
>
> 2. The guest is able to request arbitrary ranges to be remapped as
> shared. Again with a memfd approach it will be up to the VMM to deal
> with the complexity and either overmap (need the huge mapping and add
> an additional 'overlapping' shared mapping) or reject the request as
> invalid due to the use of a huge page allocator.
>
> For now just break everything down to 4k pages in the RMM controlled
> stage 2.
>
> Signed-off-by: Steven Price <steven.price@arm.com>
> ---
> arch/arm64/kvm/mmu.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index e88714903ce5..9ede143ccef1 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1603,6 +1603,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> if (logging_active) {
> force_pte = true;
> vma_shift = PAGE_SHIFT;
> + } else if (kvm_is_realm(kvm)) {
> + // Force PTE level mappings for realms
> + force_pte = true;
> + vma_shift = PAGE_SHIFT;
> } else {
> vma_shift = get_vma_page_shift(vma, hva);
> }
Since a memory abort is specific to a vCPU rather than a VM, vcpu_is_rec()
instead of kvm_is_realm() is more accurate for the check. Besides, it looks
like a duplicate of the check added by "PATCH[20/43] arm64: RME: Runtime faulting
of memory", which is as below.
/* FIXME: We shouldn't need to disable this for realms */
if (vma_pagesize == PAGE_SIZE && !(force_pte || device || kvm_is_realm(kvm))) {
^^^^^^^^^^^^^^^^^
Can be dropped now.
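i.e. the condition could go back to (sketch):

	if (vma_pagesize == PAGE_SIZE && !(force_pte || device)) {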
Thanks,
Gavin
* Re: [PATCH v6 30/43] arm64: rme: Prevent Device mappings for Realms
2024-12-12 15:55 ` [PATCH v6 30/43] arm64: rme: Prevent Device mappings for Realms Steven Price
@ 2025-02-02 7:12 ` Gavin Shan
2025-02-07 17:08 ` Steven Price
0 siblings, 1 reply; 90+ messages in thread
From: Gavin Shan @ 2025-02-02 7:12 UTC (permalink / raw)
To: Steven Price, kvm, kvmarm
Cc: Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
Oliver Upton, Suzuki K Poulose, Zenghui Yu, linux-arm-kernel,
linux-kernel, Joey Gouly, Alexandru Elisei, Christoffer Dall,
Fuad Tabba, linux-coco, Ganapatrao Kulkarni, Shanker Donthineni,
Alper Gun, Aneesh Kumar K . V
On 12/13/24 1:55 AM, Steven Price wrote:
> Physical device assignment is not yet supported by the RMM, so it
> doesn't make much sense to allow device mappings within the realm.
> Prevent them when the guest is a realm.
>
> Signed-off-by: Steven Price <steven.price@arm.com>
> ---
> Changes from v5:
> * Also prevent accesses in user_mem_abort()
> ---
> arch/arm64/kvm/mmu.c | 12 ++++++++++++
> 1 file changed, 12 insertions(+)
>
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 9ede143ccef1..cef7c3dcbf99 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1149,6 +1149,10 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
> if (is_protected_kvm_enabled())
> return -EPERM;
>
> + /* We don't support mapping special pages into a Realm */
> + if (kvm_is_realm(kvm))
> + return -EINVAL;
> +
return -EPERM;
> size += offset_in_page(guest_ipa);
> guest_ipa &= PAGE_MASK;
>
> @@ -1725,6 +1729,14 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> if (exec_fault && device)
> return -ENOEXEC;
>
> + /*
> + * Don't allow device accesses to protected memory as we don't (yet)
> + * support protected devices.
> + */
> + if (device && kvm_is_realm(kvm) &&
> + kvm_gpa_from_fault(kvm, fault_ipa) == fault_ipa)
> + return -EINVAL;
> +
s/kvm_is_realm/vcpu_is_rec
I don't understand the check very well. My understanding is that user_mem_abort() is
called only when kvm_gpa_from_fault(kvm, fault_ipa) != fault_ipa, meaning only page
faults in the shared address space are handled by user_mem_abort(). So I guess we
perhaps need something like below.
if (vcpu_is_rec(vcpu) && device)
return -EPERM;
kvm_handle_guest_abort
kvm_slot_can_be_private
private_memslot_fault // page fault in the private space is handled here
io_mem_abort // MMIO emulation is handled here
user_mem_abort // page fault in the shared space is handled here
> /*
> * Potentially reduce shadow S2 permissions to match the guest's own
> * S2. For exec faults, we'd only reach this point if the guest
Thanks,
Gavin
* Re: [PATCH v6 34/43] arm64: RME: Propagate number of breakpoints and watchpoints to userspace
2024-12-12 15:55 ` [PATCH v6 34/43] arm64: RME: Propagate number of breakpoints and watchpoints to userspace Steven Price
@ 2025-02-02 23:15 ` Gavin Shan
2025-02-02 23:17 ` Gavin Shan
0 siblings, 1 reply; 90+ messages in thread
From: Gavin Shan @ 2025-02-02 23:15 UTC (permalink / raw)
To: Steven Price, kvm, kvmarm
Cc: Jean-Philippe Brucker, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Shanker Donthineni, Alper Gun, Aneesh Kumar K . V
On 12/13/24 1:55 AM, Steven Price wrote:
> From: Jean-Philippe Brucker <jean-philippe@linaro.org>
>
> The RMM describes the maximum number of BPs/WPs available to the guest
> in the Feature Register 0. Propagate those numbers into ID_AA64DFR0_EL1,
> which is visible to userspace. A VMM needs this information in order to
> set up realm parameters.
>
> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
> Signed-off-by: Steven Price <steven.price@arm.com>
> ---
> arch/arm64/include/asm/kvm_rme.h | 2 ++
> arch/arm64/kvm/rme.c | 22 ++++++++++++++++++++++
> arch/arm64/kvm/sys_regs.c | 2 +-
> 3 files changed, 25 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/asm/kvm_rme.h
> index 0d89ab1645c1..f8e37907e2d5 100644
> --- a/arch/arm64/include/asm/kvm_rme.h
> +++ b/arch/arm64/include/asm/kvm_rme.h
> @@ -85,6 +85,8 @@ void kvm_init_rme(void);
> u32 kvm_realm_ipa_limit(void);
> u32 kvm_realm_vgic_nr_lr(void);
>
> +u64 kvm_realm_reset_id_aa64dfr0_el1(const struct kvm_vcpu *vcpu, u64 val);
> +
> bool kvm_rme_supports_sve(void);
>
> int kvm_realm_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap);
> diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
> index e562e77c1f94..d21042d5ec9a 100644
> --- a/arch/arm64/kvm/rme.c
> +++ b/arch/arm64/kvm/rme.c
> @@ -63,6 +63,28 @@ u32 kvm_realm_vgic_nr_lr(void)
> return u64_get_bits(rmm_feat_reg0, RMI_FEATURE_REGISTER_0_GICV3_NUM_LRS);
> }
>
> +u64 kvm_realm_reset_id_aa64dfr0_el1(const struct kvm_vcpu *vcpu, u64 val)
> +{
> + u32 bps = u64_get_bits(rmm_feat_reg0, RMI_FEATURE_REGISTER_0_NUM_BPS);
> + u32 wps = u64_get_bits(rmm_feat_reg0, RMI_FEATURE_REGISTER_0_NUM_WPS);
> + u32 ctx_cmps;
> +
> + if (!kvm_is_realm(vcpu->kvm))
> + return val;
> +
> + /* Ensure CTX_CMPs is still valid */
> + ctx_cmps = FIELD_GET(ID_AA64DFR0_EL1_CTX_CMPs, val);
> + ctx_cmps = min(bps, ctx_cmps);
> +
> + val &= ~(ID_AA64DFR0_EL1_BRPs_MASK | ID_AA64DFR0_EL1_WRPs_MASK |
> + ID_AA64DFR0_EL1_CTX_CMPs);
> + val |= FIELD_PREP(ID_AA64DFR0_EL1_BRPs_MASK, bps) |
> + FIELD_PREP(ID_AA64DFR0_EL1_WRPs_MASK, wps) |
> + FIELD_PREP(ID_AA64DFR0_EL1_CTX_CMPs, ctx_cmps);
> +
> + return val;
> +}
> +
The field ID_AA64DFR0_EL1_WRPs_MASK of the system register ID_AA64DFR0_EL1 is
writable, as declared in sys_regs.c. We need to consolidate the field when the
system register is written.
ID_FILTERED(ID_AA64DFR0_EL1, id_aa64dfr0_el1,
ID_AA64DFR0_EL1_DoubleLock_MASK |
ID_AA64DFR0_EL1_WRPs_MASK |
ID_AA64DFR0_EL1_PMUVer_MASK |
ID_AA64DFR0_EL1_DebugVer_MASK),
> static int get_start_level(struct realm *realm)
> {
> return 4 - stage2_pgtable_levels(realm->ia_bits);
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index a4713609e230..55cde43b36b9 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1806,7 +1806,7 @@ static u64 sanitise_id_aa64dfr0_el1(const struct kvm_vcpu *vcpu, u64 val)
> /* Hide SPE from guests */
> val &= ~ID_AA64DFR0_EL1_PMSVer_MASK;
>
> - return val;
> + return kvm_realm_reset_id_aa64dfr0_el1(vcpu, val);
> }
>
> static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
Thanks,
Gavin
* Re: [PATCH v6 34/43] arm64: RME: Propagate number of breakpoints and watchpoints to userspace
2025-02-02 23:15 ` Gavin Shan
@ 2025-02-02 23:17 ` Gavin Shan
0 siblings, 0 replies; 90+ messages in thread
From: Gavin Shan @ 2025-02-02 23:17 UTC (permalink / raw)
To: Steven Price, kvm, kvmarm
Cc: Jean-Philippe Brucker, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Shanker Donthineni, Alper Gun, Aneesh Kumar K . V
On 2/3/25 9:15 AM, Gavin Shan wrote:
> On 12/13/24 1:55 AM, Steven Price wrote:
>> From: Jean-Philippe Brucker <jean-philippe@linaro.org>
>>
>> The RMM describes the maximum number of BPs/WPs available to the guest
>> in the Feature Register 0. Propagate those numbers into ID_AA64DFR0_EL1,
>> which is visible to userspace. A VMM needs this information in order to
>> set up realm parameters.
>>
>> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
>> Signed-off-by: Steven Price <steven.price@arm.com>
>> ---
>> arch/arm64/include/asm/kvm_rme.h | 2 ++
>> arch/arm64/kvm/rme.c | 22 ++++++++++++++++++++++
>> arch/arm64/kvm/sys_regs.c | 2 +-
>> 3 files changed, 25 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/asm/kvm_rme.h
>> index 0d89ab1645c1..f8e37907e2d5 100644
>> --- a/arch/arm64/include/asm/kvm_rme.h
>> +++ b/arch/arm64/include/asm/kvm_rme.h
>> @@ -85,6 +85,8 @@ void kvm_init_rme(void);
>> u32 kvm_realm_ipa_limit(void);
>> u32 kvm_realm_vgic_nr_lr(void);
>> +u64 kvm_realm_reset_id_aa64dfr0_el1(const struct kvm_vcpu *vcpu, u64 val);
>> +
>> bool kvm_rme_supports_sve(void);
>> int kvm_realm_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap);
>> diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
>> index e562e77c1f94..d21042d5ec9a 100644
>> --- a/arch/arm64/kvm/rme.c
>> +++ b/arch/arm64/kvm/rme.c
>> @@ -63,6 +63,28 @@ u32 kvm_realm_vgic_nr_lr(void)
>> return u64_get_bits(rmm_feat_reg0, RMI_FEATURE_REGISTER_0_GICV3_NUM_LRS);
>> }
>> +u64 kvm_realm_reset_id_aa64dfr0_el1(const struct kvm_vcpu *vcpu, u64 val)
>> +{
>> + u32 bps = u64_get_bits(rmm_feat_reg0, RMI_FEATURE_REGISTER_0_NUM_BPS);
>> + u32 wps = u64_get_bits(rmm_feat_reg0, RMI_FEATURE_REGISTER_0_NUM_WPS);
>> + u32 ctx_cmps;
>> +
>> + if (!kvm_is_realm(vcpu->kvm))
>> + return val;
>> +
>> + /* Ensure CTX_CMPs is still valid */
>> + ctx_cmps = FIELD_GET(ID_AA64DFR0_EL1_CTX_CMPs, val);
>> + ctx_cmps = min(bps, ctx_cmps);
>> +
>> + val &= ~(ID_AA64DFR0_EL1_BRPs_MASK | ID_AA64DFR0_EL1_WRPs_MASK |
>> + ID_AA64DFR0_EL1_CTX_CMPs);
>> + val |= FIELD_PREP(ID_AA64DFR0_EL1_BRPs_MASK, bps) |
>> + FIELD_PREP(ID_AA64DFR0_EL1_WRPs_MASK, wps) |
>> + FIELD_PREP(ID_AA64DFR0_EL1_CTX_CMPs, ctx_cmps);
>> +
>> + return val;
>> +}
>> +
>
> The field ID_AA64DFR0_EL1_WRPs_MASK of the system register ID_AA64DFR0_EL1 is
> writable, as declared in sys_regs.c. We need to consolidate the field when the
> system register is written.
>
> ID_FILTERED(ID_AA64DFR0_EL1, id_aa64dfr0_el1,
> ID_AA64DFR0_EL1_DoubleLock_MASK |
> ID_AA64DFR0_EL1_WRPs_MASK |
> ID_AA64DFR0_EL1_PMUVer_MASK |
> ID_AA64DFR0_EL1_DebugVer_MASK),
>
Please ignore this comment. The consolidation when the system register is
written has been covered by PATCH[35/43].
>> static int get_start_level(struct realm *realm)
>> {
>> return 4 - stage2_pgtable_levels(realm->ia_bits);
>> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
>> index a4713609e230..55cde43b36b9 100644
>> --- a/arch/arm64/kvm/sys_regs.c
>> +++ b/arch/arm64/kvm/sys_regs.c
>> @@ -1806,7 +1806,7 @@ static u64 sanitise_id_aa64dfr0_el1(const struct kvm_vcpu *vcpu, u64 val)
>> /* Hide SPE from guests */
>> val &= ~ID_AA64DFR0_EL1_PMSVer_MASK;
>> - return val;
>> + return kvm_realm_reset_id_aa64dfr0_el1(vcpu, val);
>> }
>> static int set_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
Thanks,
Gavin
* Re: [PATCH v6 38/43] arm64: RME: Configure max SVE vector length for a Realm
2024-12-12 15:56 ` [PATCH v6 38/43] arm64: RME: Configure max SVE vector length for a Realm Steven Price
@ 2025-02-02 23:26 ` Gavin Shan
0 siblings, 0 replies; 90+ messages in thread
From: Gavin Shan @ 2025-02-02 23:26 UTC (permalink / raw)
To: Steven Price, kvm, kvmarm
Cc: Jean-Philippe Brucker, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Joey Gouly, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Shanker Donthineni, Alper Gun, Aneesh Kumar K . V
On 12/13/24 1:56 AM, Steven Price wrote:
> From: Jean-Philippe Brucker <jean-philippe@linaro.org>
>
> Obtain the max vector length configured by userspace on the vCPUs, and
> write it into the Realm parameters. By default the vCPU is configured
> with the max vector length reported by RMM, and userspace can reduce it
> with a write to KVM_REG_ARM64_SVE_VLS.
>
> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
> Signed-off-by: Steven Price <steven.price@arm.com>
> ---
> arch/arm64/kvm/guest.c | 3 ++-
> arch/arm64/kvm/rme.c | 42 ++++++++++++++++++++++++++++++++++++++++++
> 2 files changed, 44 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
> index 429c8f10b76a..5562029368c5 100644
> --- a/arch/arm64/kvm/guest.c
> +++ b/arch/arm64/kvm/guest.c
> @@ -363,7 +363,7 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
> if (!vcpu_has_sve(vcpu))
> return -ENOENT;
>
> - if (kvm_arm_vcpu_sve_finalized(vcpu))
> + if (kvm_arm_vcpu_sve_finalized(vcpu) || kvm_realm_is_created(vcpu->kvm))
> return -EPERM; /* too late! */
>
> if (WARN_ON(vcpu->arch.sve_state))
> @@ -825,6 +825,7 @@ static bool validate_realm_set_reg(struct kvm_vcpu *vcpu,
> switch (reg->id) {
> case KVM_REG_ARM_PMCR_EL0:
> case KVM_REG_ARM_ID_AA64DFR0_EL1:
> + case KVM_REG_ARM64_SVE_VLS:
> return true;
> }
> }
> diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
> index 39dbc19e4a42..3116ecee37a8 100644
> --- a/arch/arm64/kvm/rme.c
> +++ b/arch/arm64/kvm/rme.c
> @@ -297,6 +297,44 @@ static void realm_unmap_shared_range(struct kvm *kvm,
> }
> }
>
> +static int realm_init_sve_param(struct kvm *kvm, struct realm_params *params)
> +{
> + int ret = 0;
> + unsigned long i;
> + struct kvm_vcpu *vcpu;
> + int max_vl, realm_max_vl = -1;
> +
I would suggest renaming 'max_vl' and 'realm_max_vl' to 'vl' and 'last_vl'
since we're not looking for the maximal VLs. Instead, we're making sure the
VLs on all vCPUs are equal.
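Something like (sketch):

	int vl, last_vl = -1;
	...
	/* We need all vCPUs to have the same SVE config */
	if (last_vl >= 0 && last_vl != vl)
		return -EINVAL;
	last_vl = vl;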
> + /*
> + * Get the preferred SVE configuration, set by userspace with the
> + * KVM_ARM_VCPU_SVE feature and KVM_REG_ARM64_SVE_VLS pseudo-register.
> + */
> + kvm_for_each_vcpu(i, vcpu, kvm) {
> + mutex_lock(&vcpu->mutex);
> + if (vcpu_has_sve(vcpu)) {
> + if (!kvm_arm_vcpu_sve_finalized(vcpu))
> + ret = -EINVAL;
> + max_vl = vcpu->arch.sve_max_vl;
> + } else {
> + max_vl = 0;
> + }
> + mutex_unlock(&vcpu->mutex);
> + if (ret)
> + return ret;
> +
> + /* We need all vCPUs to have the same SVE config */
> + if (realm_max_vl >= 0 && realm_max_vl != max_vl)
> + return -EINVAL;
> +
> + realm_max_vl = max_vl;
> + }
> +
> + if (realm_max_vl > 0) {
> + params->sve_vl = sve_vq_from_vl(realm_max_vl) - 1;
> + params->flags |= RMI_REALM_PARAM_FLAG_SVE;
> + }
> + return 0;
> +}
> +
> static int realm_create_rd(struct kvm *kvm)
> {
> struct realm *realm = &kvm->arch.realm;
> @@ -344,6 +382,10 @@ static int realm_create_rd(struct kvm *kvm)
> params->flags |= RMI_REALM_PARAM_FLAG_PMU;
> }
>
> + r = realm_init_sve_param(kvm, params);
> + if (r)
> + goto out_undelegate_tables;
> +
> params_phys = virt_to_phys(params);
>
> if (rmi_realm_create(rd_phys, params_phys)) {
Thanks,
Gavin
* Re: [PATCH v6 12/43] arm64: RME: Allocate/free RECs to match vCPUs
2025-01-30 16:40 ` Steven Price
@ 2025-02-03 11:18 ` Suzuki K Poulose
2025-02-03 16:34 ` Steven Price
0 siblings, 1 reply; 90+ messages in thread
From: Suzuki K Poulose @ 2025-02-03 11:18 UTC (permalink / raw)
To: Steven Price, Gavin Shan, kvm, kvmarm
Cc: Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
Oliver Upton, Zenghui Yu, linux-arm-kernel, linux-kernel,
Joey Gouly, Alexandru Elisei, Christoffer Dall, Fuad Tabba,
linux-coco, Ganapatrao Kulkarni, Shanker Donthineni, Alper Gun,
Aneesh Kumar K . V
On 30/01/2025 16:40, Steven Price wrote:
> On 29/01/2025 04:50, Gavin Shan wrote:
>> On 12/13/24 1:55 AM, Steven Price wrote:
>>> The RMM maintains a data structure known as the Realm Execution Context
>>> (or REC). It is similar to struct kvm_vcpu and tracks the state of the
>>> virtual CPUs. KVM must delegate memory and request the structures are
>>> created when vCPUs are created, and suitably tear down on destruction.
>>>
>>> RECs must also be supplied with addition pages - auxiliary (or AUX)
>>> granules - for storing the larger registers state (e.g. for SVE). The
>>> number of AUX granules for a REC depends on the parameters with which
>>> the Realm was created - the RMM makes this information available via the
>>> RMI_REC_AUX_COUNT call performed after creating the Realm Descriptor
>>> (RD).
>>>
>>> Note that only some of register state for the REC can be set by KVM, the
>>> rest is defined by the RMM (zeroed). The register state then cannot be
>>> changed by KVM after the REC is created (except when the guest
>>> explicitly requests this e.g. by performing a PSCI call). The RMM also
>>> requires that the VMM creates RECs in ascending order of the MPIDR.
>>>
>>> See Realm Management Monitor specification (DEN0137) for more
>>> information:
>>> https://developer.arm.com/documentation/den0137/
>>>
>>> Signed-off-by: Steven Price <steven.price@arm.com>
>>> ---
>>> Changes since v5:
>>> * Separate the concept of vcpu_is_rec() and
>>> kvm_arm_vcpu_rec_finalized() by using the KVM_ARM_VCPU_REC feature as
>>> the indication that the VCPU is a REC.
>>> Changes since v2:
>>> * Free rec->run earlier in kvm_destroy_realm() and adapt to previous
>>> patches.
>>> ---
>>> arch/arm64/include/asm/kvm_emulate.h | 7 ++
>>> arch/arm64/include/asm/kvm_host.h | 3 +
>>> arch/arm64/include/asm/kvm_rme.h | 18 ++++
>>> arch/arm64/kvm/arm.c | 9 ++
>>> arch/arm64/kvm/reset.c | 11 ++
>>> arch/arm64/kvm/rme.c | 144 +++++++++++++++++++++++++++
>>> 6 files changed, 192 insertions(+)
>>>
>>> diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/
>>> include/asm/kvm_emulate.h
>>> index 27f54a7778aa..ec2b6d9c9c07 100644
>>> --- a/arch/arm64/include/asm/kvm_emulate.h
>>> +++ b/arch/arm64/include/asm/kvm_emulate.h
>>> @@ -722,7 +722,14 @@ static inline bool kvm_realm_is_created(struct
>>> kvm *kvm)
>>> static inline bool vcpu_is_rec(struct kvm_vcpu *vcpu)
>>> {
>>> + if (static_branch_unlikely(&kvm_rme_is_available))
>>> + return vcpu_has_feature(vcpu, KVM_ARM_VCPU_REC);
>>> return false;
>>> }
>>>
>>
>> It seems the check on kvm_rme_is_available is unnecessary because
>> KVM_ARM_VCPU_REC can only be true when kvm_rme_is_available is true.
>
> Similar to a previous patch - the check of the static key is to avoid
> overhead on systems without RME.
>
>>> +static inline bool kvm_arm_vcpu_rec_finalized(struct kvm_vcpu *vcpu)
>>> +{
>>> + return vcpu->arch.rec.mpidr != INVALID_HWID;
>>> +}
>>> +
>>> #endif /* __ARM64_KVM_EMULATE_H__ */
>>
>> I would suggest renaming it to kvm_arm_rec_finalized() since vCPU and REC are
>> similar objects at the same level. It'd be better to avoid duplicating the object
>> name in the function name.
>
> Ack
>
>>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/
>>> asm/kvm_host.h
>>> index 8482638dce3b..220195c727ef 100644
>>> --- a/arch/arm64/include/asm/kvm_host.h
>>> +++ b/arch/arm64/include/asm/kvm_host.h
>>> @@ -789,6 +789,9 @@ struct kvm_vcpu_arch {
>>> /* Per-vcpu CCSIDR override or NULL */
>>> u32 *ccsidr;
>>> +
>>> + /* Realm meta data */
>>> + struct realm_rec rec;
>>> };
>>> /*
>>> diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/
>>> asm/kvm_rme.h
>>> index 32bdedf1d866..62d4a63d3035 100644
>>> --- a/arch/arm64/include/asm/kvm_rme.h
>>> +++ b/arch/arm64/include/asm/kvm_rme.h
>>> @@ -6,6 +6,7 @@
>>> #ifndef __ASM_KVM_RME_H
>>> #define __ASM_KVM_RME_H
>>> +#include <asm/rmi_smc.h>
>>> #include <uapi/linux/kvm.h>
>>> /**
>>> @@ -65,6 +66,21 @@ struct realm {
>>> unsigned int ia_bits;
>>> };
>>> +/**
>>> + * struct realm_rec - Additional per VCPU data for a Realm
>>> + *
>>> + * @mpidr: MPIDR (Multiprocessor Affinity Register) value to identify
>>> this VCPU
>>> + * @rec_page: Kernel VA of the RMM's private page for this REC
>>> + * @aux_pages: Additional pages private to the RMM for this REC
>>> + * @run: Kernel VA of the RmiRecRun structure shared with the RMM
>>> + */
>>> +struct realm_rec {
>>> + unsigned long mpidr;
>>> + void *rec_page;
>>> + struct page *aux_pages[REC_PARAMS_AUX_GRANULES];
>>> + struct rec_run *run;
>>> +};
>>> +
>>> void kvm_init_rme(void);
>>> u32 kvm_realm_ipa_limit(void);
>>> @@ -72,6 +88,8 @@ int kvm_realm_enable_cap(struct kvm *kvm, struct
>>> kvm_enable_cap *cap);
>>> int kvm_init_realm_vm(struct kvm *kvm);
>>> void kvm_destroy_realm(struct kvm *kvm);
>>> void kvm_realm_destroy_rtts(struct kvm *kvm, u32 ia_bits);
>>> +int kvm_create_rec(struct kvm_vcpu *vcpu);
>>> +void kvm_destroy_rec(struct kvm_vcpu *vcpu);
>>> #define RMM_RTT_BLOCK_LEVEL 2
>>> #define RMM_RTT_MAX_LEVEL 3
>>> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
>>> index 73016e1e0067..2d97147651be 100644
>>> --- a/arch/arm64/kvm/arm.c
>>> +++ b/arch/arm64/kvm/arm.c
>>> @@ -525,6 +525,8 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
>>> /* Force users to call KVM_ARM_VCPU_INIT */
>>> vcpu_clear_flag(vcpu, VCPU_INITIALIZED);
>>> + vcpu->arch.rec.mpidr = INVALID_HWID;
>>> +
>>> vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO;
>>> /* Set up the timer */
>>> @@ -1467,6 +1469,9 @@ static unsigned long
>>> system_supported_vcpu_features(void)
>>> if (!cpus_have_final_cap(ARM64_HAS_NESTED_VIRT))
>>> clear_bit(KVM_ARM_VCPU_HAS_EL2, &features);
>>> + if (!static_branch_unlikely(&kvm_rme_is_available))
>>> + clear_bit(KVM_ARM_VCPU_REC, &features);
>>> +
>>> return features;
>>> }
>>> @@ -1506,6 +1511,10 @@ static int
>>> kvm_vcpu_init_check_features(struct kvm_vcpu *vcpu,
>>> if (test_bit(KVM_ARM_VCPU_HAS_EL2, &features))
>>> return -EINVAL;
>>> + /* RME is incompatible with AArch32 */
>>> + if (test_bit(KVM_ARM_VCPU_REC, &features))
>>> + return -EINVAL;
>>> +
>>> return 0;
>>> }
>>>
>>
>> One corner case seems missed in kvm_vcpu_init_check_features(). It's allowed to
>> initialize a vCPU with the REC feature even when kvm_is_realm(kvm) is false.
>> Hopefully I didn't miss something.
>
> Ah, yes good point. I'll pass a kvm pointer to
> kvm_vcpu_init_check_features() and use kvm_is_realm() in there.
nit: kvm is available from the VCPU via vcpu->kvm
Cheers
Suzuki
* Re: [PATCH v6 17/43] arm64: RME: Handle realm enter/exit
2025-01-29 23:41 ` Gavin Shan
@ 2025-02-03 16:34 ` Steven Price
0 siblings, 0 replies; 90+ messages in thread
From: Steven Price @ 2025-02-03 16:34 UTC (permalink / raw)
To: Gavin Shan, kvm, kvmarm
Cc: Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
Oliver Upton, Suzuki K Poulose, Zenghui Yu, linux-arm-kernel,
linux-kernel, Joey Gouly, Alexandru Elisei, Christoffer Dall,
Fuad Tabba, linux-coco, Ganapatrao Kulkarni, Shanker Donthineni,
Alper Gun, Aneesh Kumar K . V
On 29/01/2025 23:41, Gavin Shan wrote:
> On 12/13/24 1:55 AM, Steven Price wrote:
[...]
>> +
>> + kvm_pr_unimpl("Unsupported exit reason: %u\n",
>> + rec->run->exit.exit_reason);
>
> s/kvm_pr_unimpl/vcpu_pr_unimpl
It's vcpu_unimpl(), but sadly that attempts to print 'RIP' which is x86
specific (and would be nonsense for a realm guest anyway).
The other s/kvm_err/vcpu_err/ changes all make sense.
Thanks,
Steve
* Re: [PATCH v6 12/43] arm64: RME: Allocate/free RECs to match vCPUs
2025-02-03 11:18 ` Suzuki K Poulose
@ 2025-02-03 16:34 ` Steven Price
0 siblings, 0 replies; 90+ messages in thread
From: Steven Price @ 2025-02-03 16:34 UTC (permalink / raw)
To: Suzuki K Poulose, Gavin Shan, kvm, kvmarm
Cc: Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
Oliver Upton, Zenghui Yu, linux-arm-kernel, linux-kernel,
Joey Gouly, Alexandru Elisei, Christoffer Dall, Fuad Tabba,
linux-coco, Ganapatrao Kulkarni, Shanker Donthineni, Alper Gun,
Aneesh Kumar K . V
On 03/02/2025 11:18, Suzuki K Poulose wrote:
> On 30/01/2025 16:40, Steven Price wrote:
>> On 29/01/2025 04:50, Gavin Shan wrote:
[...]
>>> One corner case seems missed in kvm_vcpu_init_check_features(). It's
>>> allowed to
>>> initialize a vCPU with REC feature even kvm_is_realm(kvm) is false.
>>> Hopefully,
>>> I didn't miss something.
>>
>> Ah, yes good point. I'll pass a kvm pointer to
>> kvm_vcpu_init_check_features() and use kvm_is_realm() in there.
>
> nit: kvm is available from the VCPU via vcpu->kvm
Absolutely correct, and what I wrote was nonsense ;)
What I meant was to pass a kvm pointer *from*
kvm_vcpu_init_check_features() *to* system_supported_vcpu_features() and
use kvm_is_realm() in there!
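Something like this (untested sketch):

	static unsigned long system_supported_vcpu_features(struct kvm *kvm)
	{
		unsigned long features = KVM_VCPU_VALID_FEATURES;

		...

		if (!kvm_is_realm(kvm))
			clear_bit(KVM_ARM_VCPU_REC, &features);

		return features;
	}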
Thanks,
Steve
* Re: [PATCH v6 19/43] arm64: RME: Allow populating initial contents
2025-01-30 4:38 ` Gavin Shan
@ 2025-02-03 16:52 ` Steven Price
0 siblings, 0 replies; 90+ messages in thread
From: Steven Price @ 2025-02-03 16:52 UTC (permalink / raw)
To: Gavin Shan, kvm, kvmarm
Cc: Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
Oliver Upton, Suzuki K Poulose, Zenghui Yu, linux-arm-kernel,
linux-kernel, Joey Gouly, Alexandru Elisei, Christoffer Dall,
Fuad Tabba, linux-coco, Ganapatrao Kulkarni, Shanker Donthineni,
Alper Gun, Aneesh Kumar K . V
On 30/01/2025 04:38, Gavin Shan wrote:
> On 12/13/24 1:55 AM, Steven Price wrote:
>> The VMM needs to populate the realm with some data before starting (e.g.
>> a kernel and initrd). This is measured by the RMM and used as part of
>> the attestation later on.
>>
>> For now only 4k mappings are supported, future work may add support for
>> larger mappings.
>>
>> Co-developed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
>> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
>> Signed-off-by: Steven Price <steven.price@arm.com>
>> ---
>> Changes since v5:
>> * Refactor to use PFNs rather than tracking struct page in
>> realm_create_protected_data_page().
>> * Pull changes from a later patch (in the v5 series) for accessing
>> pages from a guest memfd.
>> * Do the populate in chunks to avoid holding locks for too long and
>> triggering RCU stall warnings.
>> ---
>> arch/arm64/kvm/rme.c | 243 +++++++++++++++++++++++++++++++++++++++++++
>> 1 file changed, 243 insertions(+)
>>
>> diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
>> index 22f0c74455af..d4561e368cd5 100644
>> --- a/arch/arm64/kvm/rme.c
>> +++ b/arch/arm64/kvm/rme.c
>> @@ -4,6 +4,7 @@
>> */
>> #include <linux/kvm_host.h>
>> +#include <linux/hugetlb.h>
>>
>
> This wouldn't be needed since huge pages, especially the hugetlb part,
> aren't supported yet.
Indeed, thanks for spotting.
>> #include <asm/kvm_emulate.h>
>> #include <asm/kvm_mmu.h>
>> @@ -545,6 +546,236 @@ void kvm_realm_unmap_range(struct kvm *kvm,
>> unsigned long start, u64 size,
>> realm_unmap_private_range(kvm, start, end);
>> }
>> +static int realm_create_protected_data_page(struct realm *realm,
>> + unsigned long ipa,
>> + kvm_pfn_t dst_pfn,
>> + kvm_pfn_t src_pfn,
>> + unsigned long flags)
>> +{
>> + phys_addr_t dst_phys, src_phys;
>> + int ret;
>> +
>> + dst_phys = __pfn_to_phys(dst_pfn);
>> + src_phys = __pfn_to_phys(src_pfn);
>> +
>> + if (rmi_granule_delegate(dst_phys))
>> + return -ENXIO;
>> +
>> + ret = rmi_data_create(virt_to_phys(realm->rd), dst_phys, ipa,
>> src_phys,
>> + flags);
>> +
>> + if (RMI_RETURN_STATUS(ret) == RMI_ERROR_RTT) {
>> + /* Create missing RTTs and retry */
>> + int level = RMI_RETURN_INDEX(ret);
>> +
>> + ret = realm_create_rtt_levels(realm, ipa, level,
>> + RMM_RTT_MAX_LEVEL, NULL);
>> + if (ret)
>> + goto err;
>> +
>> + ret = rmi_data_create(virt_to_phys(realm->rd), dst_phys, ipa,
>> + src_phys, flags);
>> + }
>> +
>> + if (!ret)
>> + return 0;
>> +
>> +err:
>> + if (WARN_ON(rmi_granule_undelegate(dst_phys))) {
>> + /* Page can't be returned to NS world so is lost */
>> + get_page(pfn_to_page(dst_pfn));
>> + }
>> + return -ENXIO;
>> +}
>> +
>> +static int fold_rtt(struct realm *realm, unsigned long addr, int level)
>> +{
>> + phys_addr_t rtt_addr;
>> + int ret;
>> +
>> + ret = realm_rtt_fold(realm, addr, level + 1, &rtt_addr);
>> + if (ret)
>> + return ret;
>> +
>> + free_delegated_granule(rtt_addr);
>> +
>> + return 0;
>> +}
>> +
>> +static int populate_par_region(struct kvm *kvm,
>> + phys_addr_t ipa_base,
>> + phys_addr_t ipa_end,
>> + u32 flags)
>> +{
>
> At first glance, I was wondering what's meant by 'par' in the function name.
> It turns out to be a 2MB region and I guess it represents 'part'. I think
> this may be renamed to populate_sub_region() or populate_region() directly.
Actually 'par' is outdated and refers to "protected address range", a
concept which existed in an older version of CCA realms. It's
definitely in need of a rename! ;) I'll just drop the 'par' and call it
populate_region().
>> + struct realm *realm = &kvm->arch.realm;
>> + struct kvm_memory_slot *memslot;
>> + gfn_t base_gfn, end_gfn;
>> + int idx;
>> + phys_addr_t ipa;
>> + int ret = 0;
>> + unsigned long data_flags = 0;
>> +
>> + base_gfn = gpa_to_gfn(ipa_base);
>> + end_gfn = gpa_to_gfn(ipa_end);
>> +
>> + if (flags & KVM_ARM_RME_POPULATE_FLAGS_MEASURE)
>> + data_flags = RMI_MEASURE_CONTENT;
>> +
>
> The 'data_flags' can be sorted out by its caller kvm_populate_realm() and passed
> to populate_par_region(). In that way, we needn't figure out 'data_flags' in
> every call to populate_par_region().
Ack
>> + idx = srcu_read_lock(&kvm->srcu);
>> + memslot = gfn_to_memslot(kvm, base_gfn);
>> + if (!memslot) {
>> + ret = -EFAULT;
>> + goto out;
>> + }
>> +
>> + /* We require the region to be contained within a single memslot */
>> + if (memslot->base_gfn + memslot->npages < end_gfn) {
>> + ret = -EINVAL;
>> + goto out;
>> + }
>> +
>> + if (!kvm_slot_can_be_private(memslot)) {
>> + ret = -EINVAL;
>> + goto out;
>> + }
>> +
>> + write_lock(&kvm->mmu_lock);
>> +
>> + ipa = ipa_base;
>> + while (ipa < ipa_end) {
>> + struct vm_area_struct *vma;
>> + unsigned long map_size;
>> + unsigned int vma_shift;
>> + unsigned long offset;
>> + unsigned long hva;
>> + struct page *page;
>> + bool writeable;
>> + kvm_pfn_t pfn;
>> + int level, i;
>> +
>> + hva = gfn_to_hva_memslot(memslot, gpa_to_gfn(ipa));
>> + vma = vma_lookup(current->mm, hva);
>> + if (!vma) {
>> + ret = -EFAULT;
>> + break;
>> + }
>> +
>> + /* FIXME: Currently we only support 4k sized mappings */
>> + vma_shift = PAGE_SHIFT;
>> +
>> + map_size = 1 << vma_shift;
>> +
>> + ipa = ALIGN_DOWN(ipa, map_size);
>> +
>
> The blank lines in the above 5 lines can be dropped :)
:) Agreed
>> + switch (map_size) {
>> + case RMM_L2_BLOCK_SIZE:
>> + level = 2;
>> + break;
>> + case PAGE_SIZE:
>> + level = 3;
>> + break;
>> + default:
>> + WARN_ONCE(1, "Unsupported vma_shift %d", vma_shift);
>> + ret = -EFAULT;
>> + break;
>> + }
>> +
>> + pfn = __kvm_faultin_pfn(memslot, gpa_to_gfn(ipa), FOLL_WRITE,
>> + &writeable, &page);
>> +
>> + if (is_error_pfn(pfn)) {
>> + ret = -EFAULT;
>> + break;
>> + }
>> +
>> + if (level < RMM_RTT_MAX_LEVEL) {
>> + /*
>> + * A temporary RTT is needed during the map, precreate
>> + * it, however if there is an error (e.g. missing
>> + * parent tables) this will be handled in the
>> + * realm_create_protected_data_page() call.
>> + */
>> + realm_create_rtt_levels(realm, ipa, level,
>> + RMM_RTT_MAX_LEVEL, NULL);
>> + }
>> +
>
> This block of code to create the temporary RTT can be removed. With it removed,
> we're going to rely on realm_create_protected_data_page() to create the needed
> RTT in its failing path. If the temporary RTT already exists, the function
> call to realm_create_rtt_levels() does nothing except issue multiple RMI calls.
> RMI calls aren't cheap and it causes performance loss, if you agree.
Actually this is to avoid extra RMI calls - albeit this is irrelevant
with the current FIXME above because everything is 4k.
For a block mapping it's still required to create RTTs to the full
depth, the individual 4k pages are then mapped, and finally the RTTs are
"folded" to remove the bottom table.
So in the usual case where RTTs exist to level 2 (because there are
other mappings nearby) then attempting to create the final level avoids
the first call to rmi_data_create() (in
realm_create_protected_data_page()) failing.
We don't expect tables to be created to RMM_RTT_MAX_LEVEL - because that
implies that there is something already mapped there (although this can
also happen in the situation where there was previously something mapped
but it has been unmapped without the RTTs being destroyed).
The comment is there because rather than adding another path for the
situation where parent tables don't exist, this is handled by the
rmi_data_create() failing in realm_create_protected_data_page() - the
error code lets us know which levels are needed. This is (of course)
inefficient, but it will be rare.
But given that I haven't got huge pages supported yet it's hard to make
any real judgements about the trade-offs and which is going to be most
performant.
>> + for (offset = 0, i = 0; offset < map_size && !ret;
>> + offset += PAGE_SIZE, i++) {
>> + phys_addr_t page_ipa = ipa + offset;
>> + kvm_pfn_t priv_pfn;
>> + struct page *gmem_page;
>> + int order;
>> +
>> + ret = kvm_gmem_get_pfn(kvm, memslot,
>> + page_ipa >> PAGE_SHIFT,
>> + &priv_pfn, &gmem_page, &order);
>> + if (ret)
>> + break;
>> +
>> + ret = realm_create_protected_data_page(realm, page_ipa,
>> + priv_pfn,
>> + pfn + i,
>> + data_flags);
>> + }
>> +
>> + kvm_release_faultin_page(kvm, page, false, false);
>> +
>> + if (ret)
>> + break;
>> +
>> + if (level == 2)
>> + fold_rtt(realm, ipa, level);
>> +
>> + ipa += map_size;
>> + }
>> +
>> + write_unlock(&kvm->mmu_lock);
>> +
>> +out:
>> + srcu_read_unlock(&kvm->srcu, idx);
>> + return ret;
>> +}
>> +
>> +static int kvm_populate_realm(struct kvm *kvm,
>> + struct kvm_cap_arm_rme_populate_realm_args *args)
>> +{
>> + phys_addr_t ipa_base, ipa_end;
>> +
>> + if (kvm_realm_state(kvm) != REALM_STATE_NEW)
>> + return -EINVAL;
>> +
>> + if (!IS_ALIGNED(args->populate_ipa_base, PAGE_SIZE) ||
>> + !IS_ALIGNED(args->populate_ipa_size, PAGE_SIZE))
>> + return -EINVAL;
>> +
>> + if (args->flags & ~RMI_MEASURE_CONTENT)
>> + return -EINVAL;
>> +
>> + ipa_base = args->populate_ipa_base;
>> + ipa_end = ipa_base + args->populate_ipa_size;
>> +
>> + if (ipa_end < ipa_base)
>> + return -EINVAL;
>> +
>> + /*
>> + * Perform the populate in parts to ensure locks are not held for
>> too
>> + * long
>> + */
>> + while (ipa_base < ipa_end) {
>> + phys_addr_t end = min(ipa_end, ipa_base + SZ_2M);
>> +
>> + int ret = populate_par_region(kvm, ipa_base, end,
>> + args->flags);
>> +
>> + if (ret)
>> + return ret;
>> +
>
> cond_resched() seems nice to have here so that those pending tasks can
> be run immediately.
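> e.g. (sketch):
>
> 	if (ret)
> 		return ret;
>
> 	cond_resched();
> 	ipa_base = end;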
Ack
Thanks,
Steve
>> + ipa_base = end;
>> + }
>> +
>> + return 0;
>> +}
>> +
>> int realm_set_ipa_state(struct kvm_vcpu *vcpu,
>> unsigned long start,
>> unsigned long end,
>> @@ -794,6 +1025,18 @@ int kvm_realm_enable_cap(struct kvm *kvm, struct
>> kvm_enable_cap *cap)
>> r = kvm_init_ipa_range_realm(kvm, &args);
>> break;
>> }
>> + case KVM_CAP_ARM_RME_POPULATE_REALM: {
>> + struct kvm_cap_arm_rme_populate_realm_args args;
>> + void __user *argp = u64_to_user_ptr(cap->args[1]);
>> +
>> + if (copy_from_user(&args, argp, sizeof(args))) {
>> + r = -EFAULT;
>> + break;
>> + }
>> +
>> + r = kvm_populate_realm(kvm, &args);
>> + break;
>> + }
>> default:
>> r = -EINVAL;
>> break;
>
> Thanks,
> Gavin
>
* Re: [PATCH v6 20/43] arm64: RME: Runtime faulting of memory
2025-01-30 5:22 ` Gavin Shan
@ 2025-02-06 4:33 ` Aneesh Kumar K.V
2025-02-07 17:04 ` Steven Price
1 sibling, 0 replies; 90+ messages in thread
From: Aneesh Kumar K.V @ 2025-02-06 4:33 UTC (permalink / raw)
To: Gavin Shan, Steven Price, kvm, kvmarm
Cc: Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
Oliver Upton, Suzuki K Poulose, Zenghui Yu, linux-arm-kernel,
linux-kernel, Joey Gouly, Alexandru Elisei, Christoffer Dall,
Fuad Tabba, linux-coco, Ganapatrao Kulkarni, Shanker Donthineni,
Alper Gun
Gavin Shan <gshan@redhat.com> writes:
> On 12/13/24 1:55 AM, Steven Price wrote:
....
>> +static int private_memslot_fault(struct kvm_vcpu *vcpu,
>> + phys_addr_t fault_ipa,
>> + struct kvm_memory_slot *memslot)
>> +{
>> + struct kvm *kvm = vcpu->kvm;
>> + gpa_t gpa = kvm_gpa_from_fault(kvm, fault_ipa);
>> + gfn_t gfn = gpa >> PAGE_SHIFT;
>> + bool priv_exists = kvm_mem_is_private(kvm, gfn);
>> + struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
>> + struct page *page;
>> + kvm_pfn_t pfn;
>> + int ret;
>> + /*
>> + * For Realms, the shared address is an alias of the private GPA with
>> + * the top bit set. Thus is the fault address matches the GPA then it
>> + * is the private alias.
>> + */
>> + bool is_priv_gfn = (gpa == fault_ipa);
>> +
>
> We may rename 'priv_exists' to 'was_priv_gfn', which is consistent with 'is_priv_gfn'.
> Alternatively, we may use 'was_private' and 'is_private'.
>
>> + if (priv_exists != is_priv_gfn) {
>> + kvm_prepare_memory_fault_exit(vcpu,
>> + gpa,
>> + PAGE_SIZE,
>> + kvm_is_write_fault(vcpu),
>> + false, is_priv_gfn);
>> +
>> + return -EFAULT;
>> + }
>> +
>> + if (!is_priv_gfn) {
>> + /* Not a private mapping, handling normally */
>> + return -EINVAL;
>> + }
>> +
>> + ret = kvm_mmu_topup_memory_cache(memcache,
>> + kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu));
>> + if (ret)
>> + return ret;
>> +
>> + ret = kvm_gmem_get_pfn(kvm, memslot, gfn, &pfn, &page, NULL);
>> + if (ret)
>> + return ret;
>> +
>> + /* FIXME: Should be able to use bigger than PAGE_SIZE mappings */
>> + ret = realm_map_ipa(kvm, fault_ipa, pfn, PAGE_SIZE, KVM_PGTABLE_PROT_W,
>> + memcache);
>> + if (!ret)
>> + return 1; /* Handled */
>> +
>> + put_page(page);
>> + return ret;
>> +}
I also found the names confusing. Can we do
modified arch/arm64/kvm/mmu.c
@@ -1487,7 +1487,7 @@ static int private_memslot_fault(struct kvm_vcpu *vcpu,
struct kvm *kvm = vcpu->kvm;
gpa_t gpa = kvm_gpa_from_fault(kvm, fault_ipa);
gfn_t gfn = gpa >> PAGE_SHIFT;
- bool priv_exists = kvm_mem_is_private(kvm, gfn);
+ bool is_priv_gfn = kvm_mem_is_private(kvm, gfn);
struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
struct page *page;
kvm_pfn_t pfn;
@@ -1497,19 +1497,19 @@ static int private_memslot_fault(struct kvm_vcpu *vcpu,
* the top bit set. Thus if the fault address matches the GPA then it
* is the private alias.
*/
- bool is_priv_gfn = (gpa == fault_ipa);
+ bool is_priv_fault = (gpa == fault_ipa);
- if (priv_exists != is_priv_gfn) {
+ if (is_priv_gfn != is_priv_fault) {
kvm_prepare_memory_fault_exit(vcpu,
gpa,
PAGE_SIZE,
kvm_is_write_fault(vcpu),
- false, is_priv_gfn);
+ false, is_priv_fault);
return 0;
}
- if (!is_priv_gfn) {
+ if (!is_priv_fault) {
/* Not a private mapping, handling normally */
return -EINVAL;
}
* Re: [PATCH v6 20/43] arm64: RME: Runtime faulting of memory
2025-01-30 5:22 ` Gavin Shan
2025-02-06 4:33 ` Aneesh Kumar K.V
@ 2025-02-07 17:04 ` Steven Price
1 sibling, 0 replies; 90+ messages in thread
From: Steven Price @ 2025-02-07 17:04 UTC (permalink / raw)
To: Gavin Shan, kvm, kvmarm
Cc: Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
Oliver Upton, Suzuki K Poulose, Zenghui Yu, linux-arm-kernel,
linux-kernel, Joey Gouly, Alexandru Elisei, Christoffer Dall,
Fuad Tabba, linux-coco, Ganapatrao Kulkarni, Shanker Donthineni,
Alper Gun, Aneesh Kumar K . V
On 30/01/2025 05:22, Gavin Shan wrote:
> On 12/13/24 1:55 AM, Steven Price wrote:
[...]
>> diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
>> index d4561e368cd5..146ef598a581 100644
>> --- a/arch/arm64/kvm/rme.c
>> +++ b/arch/arm64/kvm/rme.c
>> @@ -602,6 +602,162 @@ static int fold_rtt(struct realm *realm,
>> unsigned long addr, int level)
>> return 0;
>> }
>> +int realm_map_protected(struct realm *realm,
>> + unsigned long ipa,
>> + kvm_pfn_t pfn,
>> + unsigned long map_size,
>> + struct kvm_mmu_memory_cache *memcache)
>> +{
>> + phys_addr_t phys = __pfn_to_phys(pfn);
>> + phys_addr_t rd = virt_to_phys(realm->rd);
>> + unsigned long base_ipa = ipa;
>> + unsigned long size;
>> + int map_level;
>> + int ret = 0;
>> +
>> + if (WARN_ON(!IS_ALIGNED(ipa, map_size)))
>> + return -EINVAL;
>> +
>> + switch (map_size) {
>> + case PAGE_SIZE:
>> + map_level = 3;
>> + break;
>> + case RMM_L2_BLOCK_SIZE:
>> + map_level = 2;
>> + break;
>> + default:
>> + return -EINVAL;
>> + }
>> +
>
> The same block of code, to return the RTT level according to the map size,
> has been used multiple times. It would be nice to introduce a helper for this.
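> Perhaps something like this (sketch, name invented):
>
> static int realm_map_level(unsigned long map_size)
> {
> 	switch (map_size) {
> 	case PAGE_SIZE:
> 		return 3;
> 	case RMM_L2_BLOCK_SIZE:
> 		return 2;
> 	default:
> 		return -EINVAL;
> 	}
> }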
I have some changes to support larger host (and guest) page sizes which
rework this code which should help clean this up.
>> + if (map_level < RMM_RTT_MAX_LEVEL) {
>> + /*
>> + * A temporary RTT is needed during the map, precreate it,
>> + * however if there is an error (e.g. missing parent tables)
>> + * this will be handled below.
>> + */
>> + realm_create_rtt_levels(realm, ipa, map_level,
>> + RMM_RTT_MAX_LEVEL, memcache);
>> + }
>> +
>
> This block of code could be dropped. If the RTTs already exist,
> realm_create_rtt_levels() does nothing, but several RMI calls are issued.
> RMI calls aren't cheap and it can cause performance loss.
As mentioned in patch 19, this is to prevent the first call to
rmi_data_create_unknown() failing when we know we're going to need to
create the RTTs. So this should generally reduce the number of RMI
calls, not increase it.
>> + for (size = 0; size < map_size; size += PAGE_SIZE) {
>> + if (rmi_granule_delegate(phys)) {
>> + /*
>> + * It's likely we raced with another VCPU on the same
>> + * fault. Assume the other VCPU has handled the fault
>> + * and return to the guest.
>> + */
>> + return 0;
>> + }
>
> We probably can't bail immediately when an error is returned from
> rmi_granule_delegate(), because we intend to map a region whose size is
> 'map_size'. So a 'continue' instead of 'return 0' seems correct to me.
The logic here is that two (or more) vCPUs have faulted on the same
region. So if we get a failure here it means that we're the second vCPU
to hit the fault. While we could continue, it's highly likely that the
other vCPU will be faulting in the later parts of the region so we
expect to hit more delegation failures.
Returning to the guest and letting the guest retry the access means:
a) The access might now continue because the other vCPU has completed
(enough of) the mapping, or
b) We fault a second time, but give the other vCPU some more time to
progress in creating the mapping (and prevent the other vCPU being
tripped up by this vCPU's continued attempts to delegate the pages).
"continue" would work, but it's likely to waste CPU time with the two
vCPUs fighting each other to complete the mapping.
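If it helps, the rationale could be captured in the comment itself - a
sketch expanding the existing one:

		if (rmi_granule_delegate(phys)) {
			/*
			 * It's likely we raced with another VCPU on the same
			 * fault. Return to the guest rather than 'continue':
			 * either the other VCPU has completed enough of the
			 * mapping for the access to proceed, or we fault
			 * again having given it time to make progress without
			 * the two vCPUs fighting over delegation.
			 */
			return 0;
		}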
>> +
>> + ret = rmi_data_create_unknown(rd, phys, ipa);
>> +
>> + if (RMI_RETURN_STATUS(ret) == RMI_ERROR_RTT) {
>> + /* Create missing RTTs and retry */
>> + int level = RMI_RETURN_INDEX(ret);
>> +
>> + ret = realm_create_rtt_levels(realm, ipa, level,
>> + RMM_RTT_MAX_LEVEL,
>> + memcache);
>> + WARN_ON(ret);
>> + if (ret)
>> + goto err_undelegate;
>
> if (WARN_ON(ret))
ack
>> +
>> + ret = rmi_data_create_unknown(rd, phys, ipa);
>> + }
>> + WARN_ON(ret);
>> +
>> + if (ret)
>> + goto err_undelegate;
>
> if (WARN_ON(ret))
ack
>> +
>> + phys += PAGE_SIZE;
>> + ipa += PAGE_SIZE;
>> + }
>> +
>> + if (map_size == RMM_L2_BLOCK_SIZE)
>> + ret = fold_rtt(realm, base_ipa, map_level);
>> + if (WARN_ON(ret))
>> + goto err;
>> +
>
> The nested if statements are needed here because the WARN_ON() should
> only apply to the return value from fold_rtt().
>
> if (map_size == RMM_L2_BLOCK_SIZE) {
> ret = fold_rtt(realm, base_ipa, map_level);
> if (WARN_ON(ret))
> goto err;
> }
Technically, to get here ret must be 0, so there's no need to nest like
this. But I agree it makes the intent much clearer, so I'll update.
Thanks,
Steve
* Re: [PATCH v6 22/43] KVM: arm64: Validate register access for a Realm VM
2025-02-02 1:22 ` Gavin Shan
@ 2025-02-07 17:04 ` Steven Price
0 siblings, 0 replies; 90+ messages in thread
From: Steven Price @ 2025-02-07 17:04 UTC (permalink / raw)
To: Gavin Shan, kvm, kvmarm
Cc: Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
Oliver Upton, Suzuki K Poulose, Zenghui Yu, linux-arm-kernel,
linux-kernel, Joey Gouly, Alexandru Elisei, Christoffer Dall,
Fuad Tabba, linux-coco, Ganapatrao Kulkarni, Shanker Donthineni,
Alper Gun, Aneesh Kumar K . V
On 02/02/2025 01:22, Gavin Shan wrote:
> On 12/13/24 1:55 AM, Steven Price wrote:
>> The RMM only allows setting the GPRS (x0-x30) and PC for a realm
>> guest. Check this in kvm_arm_set_reg() so that the VMM can receive a
>> suitable error return if other registers are accessed.
>>
>> Signed-off-by: Steven Price <steven.price@arm.com>
>> ---
>> Changes since v5:
>> * Upper GPRS can be set as part of a HOST_CALL return, so fix up the
>> test to allow them.
>> ---
>> arch/arm64/kvm/guest.c | 43 ++++++++++++++++++++++++++++++++++++++++++
>> 1 file changed, 43 insertions(+)
>>
>> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
>> index 12dad841f2a5..1ee2fe072f1a 100644
>> --- a/arch/arm64/kvm/guest.c
>> +++ b/arch/arm64/kvm/guest.c
>> @@ -73,6 +73,24 @@ static u64 core_reg_offset_from_id(u64 id)
>> return id & ~(KVM_REG_ARCH_MASK | KVM_REG_SIZE_MASK |
>> KVM_REG_ARM_CORE);
>> }
>> +static bool kvm_realm_validate_core_reg(u64 off)
>> +{
>> + /*
>> + * Note that GPRs can only sometimes be controlled by the VMM.
>> + * For PSCI only X0-X6 are used, higher registers are ignored
>> (restored
>> + * from the REC).
>> + * For HOST_CALL all of X0-X30 are copied to the RsiHostCall
>> structure.
>> + * For emulated MMIO X0 is always used.
>> + */
>> + switch (off) {
>> + case KVM_REG_ARM_CORE_REG(regs.regs[0]) ...
>> + KVM_REG_ARM_CORE_REG(regs.regs[30]):
>> + case KVM_REG_ARM_CORE_REG(regs.pc):
>> + return true;
>> + }
>> + return false;
>> +}
>> +
>> static int core_reg_size_from_offset(const struct kvm_vcpu *vcpu,
>> u64 off)
>> {
>> int size;
>> @@ -115,6 +133,9 @@ static int core_reg_size_from_offset(const struct
>> kvm_vcpu *vcpu, u64 off)
>> if (vcpu_has_sve(vcpu) && core_reg_offset_is_vreg(off))
>> return -EINVAL;
>> + if (kvm_is_realm(vcpu->kvm) && !kvm_realm_validate_core_reg(off))
>> + return -EPERM;
>> +
>> return size;
>> }
>> @@ -783,12 +804,34 @@ int kvm_arm_get_reg(struct kvm_vcpu *vcpu,
>> const struct kvm_one_reg *reg)
>> return kvm_arm_sys_reg_get_reg(vcpu, reg);
>> }
>> +/*
>> + * The RMI ABI only enables setting some GPRs and PC. The selection
>> of GPRs
>> + * that are available depends on the Realm state and the reason for
>> the last
>> + * exit. All other registers are reset to architectural or otherwise
>> defined
>> + * reset values by the RMM, except for a few configuration fields that
>> + * correspond to Realm parameters.
>> + */
>> +static bool validate_realm_set_reg(struct kvm_vcpu *vcpu,
>> + const struct kvm_one_reg *reg)
>> +{
>> + if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_CORE) {
>> + u64 off = core_reg_offset_from_id(reg->id);
>> +
>> + return kvm_realm_validate_core_reg(off);
>> + }
>> +
>> + return false;
>> +}
>> +
>> int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg
>> *reg)
>> {
>> /* We currently use nothing arch-specific in upper 32 bits */
>> if ((reg->id & ~KVM_REG_SIZE_MASK) >> 32 != KVM_REG_ARM64 >> 32)
>> return -EINVAL;
>> + if (kvm_is_realm(vcpu->kvm) && !validate_realm_set_reg(vcpu, reg))
>> + return -EINVAL;
>> +
>> switch (reg->id & KVM_REG_ARM_COPROC_MASK) {
>> case KVM_REG_ARM_CORE: return set_core_reg(vcpu, reg);
>> case KVM_REG_ARM_FW:
>
> It looks like the core registers in kvm_arm_set_reg() are validated
> twice.
>
> ioctl(KVM_SET_ONE_REG)
>   kvm_arm_set_reg
>     validate_realm_set_reg
>       kvm_realm_validate_core_reg     // 1
>     set_core_reg
>       core_reg_offset_from_id
>       core_reg_addr
>         core_reg_offset_from_id
>         core_reg_size_from_offset
>           kvm_realm_validate_core_reg // 2
>       copy_from_user
Indeed, it looks like the check in core_reg_size_from_offset() is
unnecessary.
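That is, the hunk adding it can simply be reverted - a sketch:

	--- a/arch/arm64/kvm/guest.c
	+++ b/arch/arm64/kvm/guest.c
	@@ core_reg_size_from_offset @@
	-	if (kvm_is_realm(vcpu->kvm) && !kvm_realm_validate_core_reg(off))
	-		return -EPERM;
	-
	 	return size;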
> Besides, there are other types of registers that can be accessed by
> KVM_{GET, SET}_ONE_REG: firmware and bitmap registers, SVE registers,
> timer registers, system registers. Do we need to hide any of them from
> user space? As I understand it, the SVE registers are owned by the RMM
> at least and won't be exposed to user space.
validate_realm_set_reg() only allows core registers permitted by
kvm_realm_validate_core_reg() (x0-x30 plus PC) and the three other
specifically mentioned registers (PMCR_EL0, ID_AA64DFR0_EL1 and the
pseudo-register SVE_VLS).
In general most registers are hidden from the host (and user space);
even the general purpose registers are stale in most contexts. The only
real exceptions are during realm creation and PSCI. The registers are
also (ab)used during MMIO and HOST_CALL. For MMIO we use x0 for the
value to be read/written (even if the guest used a different register).
For HOST_CALL the guest provides a structure containing the values to be
placed in the GPRs (but the actual values of the guest's registers are
not exposed).
Steve
* Re: [PATCH v6 25/43] arm64: Don't expose stolen time for realm guests
2025-02-02 2:15 ` Gavin Shan
@ 2025-02-07 17:05 ` Steven Price
0 siblings, 0 replies; 90+ messages in thread
From: Steven Price @ 2025-02-07 17:05 UTC (permalink / raw)
To: Gavin Shan, kvm, kvmarm
Cc: Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
Oliver Upton, Suzuki K Poulose, Zenghui Yu, linux-arm-kernel,
linux-kernel, Joey Gouly, Alexandru Elisei, Christoffer Dall,
Fuad Tabba, linux-coco, Ganapatrao Kulkarni, Shanker Donthineni,
Alper Gun, Aneesh Kumar K . V
On 02/02/2025 02:15, Gavin Shan wrote:
> On 12/13/24 1:55 AM, Steven Price wrote:
>> It doesn't make much sense as a realm guest wouldn't want to trust the
>> host. It will also need some extra work to ensure that KVM will only
>> attempt to write into a shared memory region. So for now just disable
>> it.
>>
>> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
>> Signed-off-by: Steven Price <steven.price@arm.com>
>> ---
>> arch/arm64/kvm/arm.c | 5 ++++-
>> 1 file changed, 4 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
>> index eff1a4ec892b..134acb4ee26f 100644
>> --- a/arch/arm64/kvm/arm.c
>> +++ b/arch/arm64/kvm/arm.c
>> @@ -432,7 +432,10 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm,
>> long ext)
>> r = system_supports_mte();
>> break;
>> case KVM_CAP_STEAL_TIME:
>> - r = kvm_arm_pvtime_supported();
>> + if (kvm_is_realm(kvm))
>> + r = 0;
>> + else
>> + r = kvm_arm_pvtime_supported();
>> break;
>
> kvm_vm_ioctl_check_extension() can be called on the file descriptor of
> "/dev/kvm". In that case 'kvm' is NULL and kvm_is_realm() returns
> false, which is a missed corner case.
This is a general problem with checking extensions on the /dev/kvm file
descriptor. In this case the kernel does support stolen time, but it
doesn't support it for a realm guest. Because there's no context to know
whether the query is about realm guests or not, there's little the
kernel can do other than report support.
This is the same situation for other extensions that are keyed off
kvm_is_realm(). I'm not sure what we can do other than say that VMMs
shouldn't do that.
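As an illustration (not part of the patch), a VMM can avoid the
ambiguity by querying on the VM fd, where 'kvm' is non-NULL and
kvm_is_realm() is meaningful. A minimal userspace sketch, assuming
vm_fd came from ioctl(kvm_fd, KVM_CREATE_VM, 0):

	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* Returns 0 when unsupported (e.g. for a realm VM). */
	static int vm_supports_steal_time(int vm_fd)
	{
		return ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_STEAL_TIME);
	}

So the practical guidance is that VMMs should prefer the VM fd for any
realm-sensitive capability.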
Steve
>> case KVM_CAP_ARM_EL1_32BIT:
>> r = cpus_have_final_cap(ARM64_HAS_32BIT_EL1);
>
> Thanks,
> Gavin
>
* Re: [PATCH v6 27/43] arm64: rme: support RSI_HOST_CALL
2025-02-02 6:41 ` Gavin Shan
@ 2025-02-07 17:05 ` Steven Price
0 siblings, 0 replies; 90+ messages in thread
From: Steven Price @ 2025-02-07 17:05 UTC (permalink / raw)
To: Gavin Shan, kvm, kvmarm
Cc: Joey Gouly, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Shanker Donthineni, Alper Gun, Aneesh Kumar K . V
On 02/02/2025 06:41, Gavin Shan wrote:
> On 12/13/24 1:55 AM, Steven Price wrote:
>> From: Joey Gouly <joey.gouly@arm.com>
>>
>> Forward RSI_HOST_CALLS to KVM's HVC handler.
>>
>> Signed-off-by: Joey Gouly <joey.gouly@arm.com>
>> Signed-off-by: Steven Price <steven.price@arm.com>
>> ---
>> Changes since v4:
>> * Setting GPRS is now done by kvm_rec_enter() rather than
>> rec_exit_host_call() (see previous patch - arm64: RME: Handle realm
>> enter/exit). This fixes a bug where the registers set by user space
>> were being ignored.
>> ---
>> arch/arm64/kvm/rme-exit.c | 22 ++++++++++++++++++++++
>> 1 file changed, 22 insertions(+)
>>
>> diff --git a/arch/arm64/kvm/rme-exit.c b/arch/arm64/kvm/rme-exit.c
>> index 8f0f9ab57f28..b2a367474d74 100644
>> --- a/arch/arm64/kvm/rme-exit.c
>> +++ b/arch/arm64/kvm/rme-exit.c
>> @@ -103,6 +103,26 @@ static int rec_exit_ripas_change(struct kvm_vcpu
>> *vcpu)
>> return 0;
>> }
>> +static int rec_exit_host_call(struct kvm_vcpu *vcpu)
>> +{
>> + int ret, i;
>> + struct realm_rec *rec = &vcpu->arch.rec;
>> +
>> + vcpu->stat.hvc_exit_stat++;
>> +
>> + for (i = 0; i < REC_RUN_GPRS; i++)
>> + vcpu_set_reg(vcpu, i, rec->run->exit.gprs[i]);
>> +
>> + ret = kvm_smccc_call_handler(vcpu);
>> +
>> + if (ret < 0) {
>> + vcpu_set_reg(vcpu, 0, ~0UL);
>> + ret = 1;
>> + }
>> +
>> + return ret;
>> +}
>> +
>
> It seems that the return value from kvm_smccc_call() won't be negative.
Well, the comment above kvm_psci_call() explains that the return value
can be negative, which would be passed through, so there's definitely a
convention that it could be negative. However...
> Besides, the host call requests are currently handled by
> kvm_psci_call(), which isn't what we want.
Indeed, we shouldn't be getting PSCI calls this way as the RMM needs to
be involved for proper handling of PSCI.
> So I think a new helper is needed and called in
> kvm_smccc_call_handler(). The new helper simply pushes the error
> (NOT_SUPPORTED) to x0. Otherwise, an unexpected return value will be
> seen by the guest.
>
> handle_rec_exit
>   rec_exit_host_call
>     kvm_smccc_call_handler
I'm not sure I follow here. Are you saying that we should have separate
handling of HOST_CALLs from SMCCC? That's certainly a possibility, but
the expectation is that HOST_CALL is effectively equivalent to a simple
SMC/HVC call in a normal guest. To be honest, a "Realm Host Interface"
is something that we're currently lacking a spec for.
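For reference, a minimal sketch of the helper Gavin describes (the name
is hypothetical; note that SMCCC_RET_NOT_SUPPORTED is -1, which matches
the ~0UL written in the code above):

	/* Fail an unhandled realm host call back to the guest. */
	static int kvm_realm_reject_host_call(struct kvm_vcpu *vcpu)
	{
		vcpu_set_reg(vcpu, 0, SMCCC_RET_NOT_SUPPORTED);
		return 1;	/* resume the guest */
	}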
Thanks,
Steve
>> static void update_arch_timer_irq_lines(struct kvm_vcpu *vcpu)
>> {
>> struct realm_rec *rec = &vcpu->arch.rec;
>> @@ -164,6 +184,8 @@ int handle_rec_exit(struct kvm_vcpu *vcpu, int
>> rec_run_ret)
>> return rec_exit_psci(vcpu);
>> case RMI_EXIT_RIPAS_CHANGE:
>> return rec_exit_ripas_change(vcpu);
>> + case RMI_EXIT_HOST_CALL:
>> + return rec_exit_host_call(vcpu);
>> }
>> kvm_pr_unimpl("Unsupported exit reason: %u\n",
>
> Thanks,
> Gavin
>
* Re: [PATCH v6 28/43] arm64: rme: Allow checking SVE on VM instance
2025-02-02 6:00 ` Gavin Shan
@ 2025-02-07 17:05 ` Steven Price
0 siblings, 0 replies; 90+ messages in thread
From: Steven Price @ 2025-02-07 17:05 UTC (permalink / raw)
To: Gavin Shan, kvm, kvmarm
Cc: Suzuki K Poulose, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Zenghui Yu, linux-arm-kernel,
linux-kernel, Joey Gouly, Alexandru Elisei, Christoffer Dall,
Fuad Tabba, linux-coco, Ganapatrao Kulkarni, Shanker Donthineni,
Alper Gun, Aneesh Kumar K . V
On 02/02/2025 06:00, Gavin Shan wrote:
> On 12/13/24 1:55 AM, Steven Price wrote:
>> From: Suzuki K Poulose <suzuki.poulose@arm.com>
>>
>> Given we have different types of VMs supported, check the
>> support for SVE for the given instance of the VM to accurately
>> report the status.
>>
>> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
>> Signed-off-by: Steven Price <steven.price@arm.com>
>> ---
>> arch/arm64/include/asm/kvm_rme.h | 2 ++
>> arch/arm64/kvm/arm.c | 5 ++++-
>> arch/arm64/kvm/rme.c | 5 +++++
>> 3 files changed, 11 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/arm64/include/asm/kvm_rme.h b/arch/arm64/include/
>> asm/kvm_rme.h
>> index 90a4537ad38d..0d89ab1645c1 100644
>> --- a/arch/arm64/include/asm/kvm_rme.h
>> +++ b/arch/arm64/include/asm/kvm_rme.h
>> @@ -85,6 +85,8 @@ void kvm_init_rme(void);
>> u32 kvm_realm_ipa_limit(void);
>> u32 kvm_realm_vgic_nr_lr(void);
>> +bool kvm_rme_supports_sve(void);
>> +
>> int kvm_realm_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap);
>> int kvm_init_realm_vm(struct kvm *kvm);
>> void kvm_destroy_realm(struct kvm *kvm);
>> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
>> index 134acb4ee26f..6f7f96ab781d 100644
>> --- a/arch/arm64/kvm/arm.c
>> +++ b/arch/arm64/kvm/arm.c
>> @@ -456,7 +456,10 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm,
>> long ext)
>> r = get_kvm_ipa_limit();
>> break;
>> case KVM_CAP_ARM_SVE:
>> - r = system_supports_sve();
>> + if (kvm_is_realm(kvm))
>> + r = kvm_rme_supports_sve();
>> + else
>> + r = system_supports_sve();
>> break;
>
> kvm_vm_ioctl_check_extension() can be called by
> ioctl(KVM_CHECK_EXTENSION) on the
> file descriptor of '/dev/kvm'. kvm is NULL and kvm_is_realm() returns
> false in
> this case.
>
> kvm_dev_ioctl
>   kvm_vm_ioctl_check_extension_generic   // kvm is NULL
>     kvm_vm_ioctl_check_extension
See my reply in patch 25
>> case KVM_CAP_ARM_PTRAUTH_ADDRESS:
>> case KVM_CAP_ARM_PTRAUTH_GENERIC:
>> diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
>> index 5831d379760a..27a479feb907 100644
>> --- a/arch/arm64/kvm/rme.c
>> +++ b/arch/arm64/kvm/rme.c
>> @@ -20,6 +20,11 @@ static bool rme_supports(unsigned long feature)
>> return !!u64_get_bits(rmm_feat_reg0, feature);
>> }
>> +bool kvm_rme_supports_sve(void)
>> +{
>> + return rme_supports(RMI_FEATURE_REGISTER_0_SVE_EN);
>> +}
>> +
>
> If rme_supports() becomes a public helper, it can be used directly
> and kvm_rme_supports_sve() can be dropped. RMI_FEATURE_REGISTER_0_SVE_EN
> is obvious enough to indicate the corresponding feature.
I agree this seems reasonable. Sadly the use of u64_get_bits() assumes
that the 'feature' parameter is constant at compile time. I could rework
it to not require a constant, but considering this is the only function
exposing rme_supports() without any further checks, I feel it's better
to leave the code as it is.
Obviously if we get more feature bits like this in the future then it
would be worth revisiting.
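Should that change, a runtime-mask variant would be straightforward - a
sketch (u64_get_bits() needs a compile-time constant mask, so extract
the field by hand; 'mask' must be non-zero):

	static bool rme_supports_runtime(u64 mask)
	{
		return !!((rmm_feat_reg0 & mask) >> __ffs64(mask));
	}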
Thanks,
Steve
>> static int rmi_check_version(void)
>> {
>> struct arm_smccc_res res;
>
> Thanks,
> Gavin
>
* Re: [PATCH v6 29/43] arm64: RME: Always use 4k pages for realms
2025-02-02 6:52 ` Gavin Shan
@ 2025-02-07 17:05 ` Steven Price
0 siblings, 0 replies; 90+ messages in thread
From: Steven Price @ 2025-02-07 17:05 UTC (permalink / raw)
To: Gavin Shan, kvm, kvmarm
Cc: Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
Oliver Upton, Suzuki K Poulose, Zenghui Yu, linux-arm-kernel,
linux-kernel, Joey Gouly, Alexandru Elisei, Christoffer Dall,
Fuad Tabba, linux-coco, Ganapatrao Kulkarni, Shanker Donthineni,
Alper Gun, Aneesh Kumar K . V
On 02/02/2025 06:52, Gavin Shan wrote:
> On 12/13/24 1:55 AM, Steven Price wrote:
>> Always split up huge pages to avoid problems managing huge pages. There
>> are two issues currently:
>>
>> 1. The uABI for the VMM allows populating memory on 4k boundaries even
>> if the underlying allocator (e.g. hugetlbfs) is using a larger page
>> size. Using a memfd for private allocations will push this issue onto
>> the VMM as it will need to respect the granularity of the allocator.
>>
>> 2. The guest is able to request arbitrary ranges to be remapped as
>> shared. Again with a memfd approach it will be up to the VMM to deal
>> with the complexity and either overmap (need the huge mapping and add
>> an additional 'overlapping' shared mapping) or reject the request as
>> invalid due to the use of a huge page allocator.
>>
>> For now just break everything down to 4k pages in the RMM controlled
>> stage 2.
>>
>> Signed-off-by: Steven Price <steven.price@arm.com>
>> ---
>> arch/arm64/kvm/mmu.c | 4 ++++
>> 1 file changed, 4 insertions(+)
>>
>> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>> index e88714903ce5..9ede143ccef1 100644
>> --- a/arch/arm64/kvm/mmu.c
>> +++ b/arch/arm64/kvm/mmu.c
>> @@ -1603,6 +1603,10 @@ static int user_mem_abort(struct kvm_vcpu
>> *vcpu, phys_addr_t fault_ipa,
>> if (logging_active) {
>> force_pte = true;
>> vma_shift = PAGE_SHIFT;
>> + } else if (kvm_is_realm(kvm)) {
>> + // Force PTE level mappings for realms
>> + force_pte = true;
>> + vma_shift = PAGE_SHIFT;
>> } else {
>> vma_shift = get_vma_page_shift(vma, hva);
>> }
>
> Since a memory abort is specific to a vCPU rather than a VM,
> vcpu_is_rec() instead of kvm_is_realm() is more accurate for the check.
> Besides, it looks like a duplicate of the check added by "PATCH[20/43]
> arm64: RME: Runtime faulting of memory", which is as below.
>
> /* FIXME: We shouldn't need to disable this for realms */
> if (vma_pagesize == PAGE_SIZE && !(force_pte || device ||
>                                    kvm_is_realm(kvm))) {
>                                    ^^^^^^^^^^^^^^^^^
>                                    Can be dropped now.
Indeed, thanks for that - one less FIXME ;)
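With the force-PTE path above handling realms, that check can then lose
both the FIXME and the extra clause - a sketch of the resulting hunk:

	-	/* FIXME: We shouldn't need to disable this for realms */
	-	if (vma_pagesize == PAGE_SIZE && !(force_pte || device ||
	-					   kvm_is_realm(kvm))) {
	+	if (vma_pagesize == PAGE_SIZE && !(force_pte || device)) {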
Thanks,
Steve
> Thanks,
> Gavin
>
>
* Re: [PATCH v6 30/43] arm64: rme: Prevent Device mappings for Realms
2025-02-02 7:12 ` Gavin Shan
@ 2025-02-07 17:08 ` Steven Price
0 siblings, 0 replies; 90+ messages in thread
From: Steven Price @ 2025-02-07 17:08 UTC (permalink / raw)
To: Gavin Shan, kvm, kvmarm
Cc: Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
Oliver Upton, Suzuki K Poulose, Zenghui Yu, linux-arm-kernel,
linux-kernel, Joey Gouly, Alexandru Elisei, Christoffer Dall,
Fuad Tabba, linux-coco, Ganapatrao Kulkarni, Shanker Donthineni,
Alper Gun, Aneesh Kumar K . V
On 02/02/2025 07:12, Gavin Shan wrote:
> On 12/13/24 1:55 AM, Steven Price wrote:
>> Physical device assignment is not yet supported by the RMM, so it
>> doesn't make much sense to allow device mappings within the realm.
>> Prevent them when the guest is a realm.
>>
>> Signed-off-by: Steven Price <steven.price@arm.com>
>> ---
>> Changes from v5:
>> * Also prevent accesses in user_mem_abort()
>> ---
>> arch/arm64/kvm/mmu.c | 12 ++++++++++++
>> 1 file changed, 12 insertions(+)
>>
>> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>> index 9ede143ccef1..cef7c3dcbf99 100644
>> --- a/arch/arm64/kvm/mmu.c
>> +++ b/arch/arm64/kvm/mmu.c
>> @@ -1149,6 +1149,10 @@ int kvm_phys_addr_ioremap(struct kvm *kvm,
>> phys_addr_t guest_ipa,
>> if (is_protected_kvm_enabled())
>> return -EPERM;
>> + /* We don't support mapping special pages into a Realm */
>> + if (kvm_is_realm(kvm))
>> + return -EINVAL;
>> +
>
> return -EPERM;
>
>> size += offset_in_page(guest_ipa);
>> guest_ipa &= PAGE_MASK;
>> @@ -1725,6 +1729,14 @@ static int user_mem_abort(struct kvm_vcpu
>> *vcpu, phys_addr_t fault_ipa,
>> if (exec_fault && device)
>> return -ENOEXEC;
>> + /*
>> + * Don't allow device accesses to protected memory as we don't (yet)
>> + * support protected devices.
>> + */
>> + if (device && kvm_is_realm(kvm) &&
>> + kvm_gpa_from_fault(kvm, fault_ipa) == fault_ipa)
>> + return -EINVAL;
>> +
>
> s/kvm_is_realm/vcpu_is_rec
>
> I don't understand the check very well. What I understood is that
> mem_abort() is called only when kvm_gpa_from_fault(kvm, fault_ipa) !=
> fault_ipa, meaning only page faults in the shared address space are
> handled by mem_abort(). So I guess we perhaps need something like
> below.
private_memslot_fault() is only used when the memslot is a private
guest_memfd(). So there's also the situation of a 'normal' memslot which
can still end up in user_mem_abort().
As things currently stand we can't really deal with this (the bottom
part of the IPA is protected, and therefore must come from
guest_memfd()). But in the future we are expecting to be able to support
protected devices.
So I think the correct solution for now is to drop the "device" check
and update the comment to reflect the fact that private_memslot_fault()
should handle all protected addresses until we get support for assigning
devices.
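Something along these lines (a sketch; the comment wording is still to
be settled, and it uses vcpu_is_rec() as you suggest):

	/*
	 * All protected addresses must be faulted in via
	 * private_memslot_fault() until device assignment is supported,
	 * so reject any access to a protected IPA here.
	 */
	if (vcpu_is_rec(vcpu) &&
	    kvm_gpa_from_fault(kvm, fault_ipa) == fault_ipa)
		return -EINVAL;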
Thanks,
Steve
> if (vcpu_is_rec(vcpu) && device)
> return -EPERM;
>
> kvm_handle_guest_abort
>   kvm_slot_can_be_private
>     private_memslot_fault  // page fault in the private space is handled here
>   io_mem_abort             // MMIO emulation is handled here
>   user_mem_abort           // page fault in the shared space is handled here
>
>> /*
>> * Potentially reduce shadow S2 permissions to match the guest's
>> own
>> * S2. For exec faults, we'd only reach this point if the guest
>
> Thanks,
> Gavin
>
* Re: [PATCH v6 32/43] arm64: rme: Enable PMU support with a realm guest
2024-12-12 16:54 ` Joey Gouly
@ 2025-02-10 11:58 ` Steven Price
0 siblings, 0 replies; 90+ messages in thread
From: Steven Price @ 2025-02-10 11:58 UTC (permalink / raw)
To: Joey Gouly
Cc: kvm, kvmarm, Catalin Marinas, Marc Zyngier, Will Deacon,
James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
linux-arm-kernel, linux-kernel, Alexandru Elisei,
Christoffer Dall, Fuad Tabba, linux-coco, Ganapatrao Kulkarni,
Gavin Shan, Shanker Donthineni, Alper Gun, Aneesh Kumar K . V
On 12/12/2024 16:54, Joey Gouly wrote:
> On Thu, Dec 12, 2024 at 03:55:57PM +0000, Steven Price wrote:
>> Use the PMU registers from the RmiRecExit structure to identify when an
>> overflow interrupt is due and inject it into the guest. Also hook up the
>> configuration option for enabling the PMU within the guest.
>>
>> When entering a realm guest with a PMU interrupt pending, it is
>> necessary to disable the physical interrupt. Otherwise when the RMM
>> restores the PMU state the physical interrupt will trigger causing an
>> immediate exit back to the host. The guest is expected to acknowledge
>> the interrupt causing a host exit (to update the GIC state) which gives
>> the opportunity to re-enable the physical interrupt before the next PMU
>> event.
>>
>> Number of PMU counters is configured by the VMM by writing to PMCR.N.
>>
>> Signed-off-by: Steven Price <steven.price@arm.com>
>> ---
>> Changes since v2:
>> * Add a macro kvm_pmu_get_irq_level() to avoid compile issues when PMU
>> support is disabled.
>> ---
>> arch/arm64/kvm/arm.c | 11 +++++++++++
>> arch/arm64/kvm/guest.c | 7 +++++++
>> arch/arm64/kvm/pmu-emul.c | 3 +++
>> arch/arm64/kvm/rme.c | 8 ++++++++
>> arch/arm64/kvm/sys_regs.c | 2 +-
>> include/kvm/arm_pmu.h | 4 ++++
>> 6 files changed, 34 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
>> index 6f7f96ab781d..ae3596a25272 100644
>> --- a/arch/arm64/kvm/arm.c
>> +++ b/arch/arm64/kvm/arm.c
>> @@ -15,6 +15,7 @@
>> #include <linux/vmalloc.h>
>> #include <linux/fs.h>
>> #include <linux/mman.h>
>> +#include <linux/perf/arm_pmu.h>
>> #include <linux/sched.h>
>> #include <linux/kvm.h>
>> #include <linux/kvm_irqfd.h>
>> @@ -1209,6 +1210,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
>> run->exit_reason = KVM_EXIT_UNKNOWN;
>> run->flags = 0;
>> while (ret > 0) {
>> + bool pmu_stopped = false;
>> +
>> /*
>> * Check conditions before entering the guest
>> */
>> @@ -1240,6 +1243,11 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
>>
>> kvm_pmu_flush_hwstate(vcpu);
>>
>> + if (vcpu_is_rec(vcpu) && kvm_pmu_get_irq_level(vcpu)) {
>> + pmu_stopped = true;
>> + arm_pmu_set_phys_irq(false);
>> + }
>> +
>> local_irq_disable();
>>
>> kvm_vgic_flush_hwstate(vcpu);
>> @@ -1342,6 +1350,9 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
>>
>> preempt_enable();
>>
>> + if (pmu_stopped)
>> + arm_pmu_set_phys_irq(true);
>> +
>> /*
>> * The ARMv8 architecture doesn't give the hypervisor
>> * a mechanism to prevent a guest from dropping to AArch32 EL0
>> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
>> index 920e0fd73698..b4e3839f6076 100644
>> --- a/arch/arm64/kvm/guest.c
>> +++ b/arch/arm64/kvm/guest.c
>> @@ -804,6 +804,8 @@ int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
>> return kvm_arm_sys_reg_get_reg(vcpu, reg);
>> }
>>
>> +#define KVM_REG_ARM_PMCR_EL0 ARM64_SYS_REG(3, 3, 9, 12, 0)
>
> There's already SYS_PMCR_EL0 defined, so it seems this is not needed?
ARM64_SYS_REG() is slightly different from the sys_reg() macro used by
SYS_PMCR_EL0. The ARM64_SYS_REG() macro encodes extra bits about the
register size (KVM_REG_SIZE_U64) and that this is an arm64 system
register (KVM_REG_ARM64 | KVM_REG_ARM64_SYSREG).
There is a KVM_ARM64_SYS_REG() macro used by the selftests which would
allow using the SYS_PMCR_EL0 to avoid repeating the encoding, but I
can't find an equivalent outside the selftests.
To be honest I'm not sure quite what the best option here is. I was
tempted to put them in uapi/asm/kvm.h, but other than the timer
registers (which have quirks due to an accident in the past) we don't
define any other register names as uABI - and there's no particular
reason to start for this one.
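For reference, the selftests' wrapper is roughly this (a sketch using
the sys_reg field extractors from <asm/sysreg.h>; the macro name here is
made up):

	/* Build a KVM_{GET,SET}_ONE_REG id from a sys_reg() encoding. */
	#define KVM_SYS_REG_ID(r)					\
		ARM64_SYS_REG(sys_reg_Op0(r), sys_reg_Op1(r),		\
			      sys_reg_CRn(r), sys_reg_CRm(r),		\
			      sys_reg_Op2(r))

which would allow KVM_SYS_REG_ID(SYS_PMCR_EL0) in place of the
open-coded ARM64_SYS_REG(3, 3, 9, 12, 0).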
Thanks,
Steve
>> +
>> /*
>> * The RMI ABI only enables setting some GPRs and PC. The selection of GPRs
>> * that are available depends on the Realm state and the reason for the last
>> @@ -818,6 +820,11 @@ static bool validate_realm_set_reg(struct kvm_vcpu *vcpu,
>> u64 off = core_reg_offset_from_id(reg->id);
>>
>> return kvm_realm_validate_core_reg(off);
>> + } else {
>> + switch (reg->id) {
>> + case KVM_REG_ARM_PMCR_EL0:
>> + return true;
>> + }
>> }
>>
>> return false;
>> diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
>> index 456102bc0b55..5628b573ca41 100644
>> --- a/arch/arm64/kvm/pmu-emul.c
>> +++ b/arch/arm64/kvm/pmu-emul.c
>> @@ -397,6 +397,9 @@ static bool kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
>> {
>> u64 reg = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
>>
>> + if (vcpu_is_rec(vcpu))
>> + return vcpu->arch.rec.run->exit.pmu_ovf_status;
>> +
>> reg &= __vcpu_sys_reg(vcpu, PMINTENSET_EL1);
>>
>> /*
>> diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
>> index 27a479feb907..e562e77c1f94 100644
>> --- a/arch/arm64/kvm/rme.c
>> +++ b/arch/arm64/kvm/rme.c
>> @@ -303,6 +303,11 @@ static int realm_create_rd(struct kvm *kvm)
>> params->rtt_base = kvm->arch.mmu.pgd_phys;
>> params->vmid = realm->vmid;
>>
>> + if (kvm->arch.arm_pmu) {
>> + params->pmu_num_ctrs = kvm->arch.pmcr_n;
>> + params->flags |= RMI_REALM_PARAM_FLAG_PMU;
>> + }
>> +
>> params_phys = virt_to_phys(params);
>>
>> if (rmi_realm_create(rd_phys, params_phys)) {
>> @@ -1370,6 +1375,9 @@ int kvm_create_rec(struct kvm_vcpu *vcpu)
>> if (!vcpu_has_feature(vcpu, KVM_ARM_VCPU_PSCI_0_2))
>> return -EINVAL;
>>
>> + if (vcpu->kvm->arch.arm_pmu && !kvm_vcpu_has_pmu(vcpu))
>> + return -EINVAL;
>> +
>> BUILD_BUG_ON(sizeof(*params) > PAGE_SIZE);
>> BUILD_BUG_ON(sizeof(*rec->run) > PAGE_SIZE);
>>
>> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
>> index 83c6b4a07ef5..a4713609e230 100644
>> --- a/arch/arm64/kvm/sys_regs.c
>> +++ b/arch/arm64/kvm/sys_regs.c
>> @@ -1324,7 +1324,7 @@ static int set_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r,
>> * implements. Ignore this error to maintain compatibility
>> * with the existing KVM behavior.
>> */
>> - if (!kvm_vm_has_ran_once(kvm) &&
>> + if (!kvm_vm_has_ran_once(kvm) && !kvm_realm_is_created(kvm) &&
>> new_n <= kvm_arm_pmu_get_max_counters(kvm))
>> kvm->arch.pmcr_n = new_n;
>>
>> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
>> index e61dd7dd2286..30625a6fd143 100644
>> --- a/include/kvm/arm_pmu.h
>> +++ b/include/kvm/arm_pmu.h
>> @@ -77,6 +77,8 @@ void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu);
>> void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
>> void kvm_vcpu_pmu_resync_el0(void);
>>
>> +#define kvm_pmu_get_irq_level(vcpu) ((vcpu)->arch.pmu.irq_level)
>> +
>> #define kvm_vcpu_has_pmu(vcpu) \
>> (vcpu_has_feature(vcpu, KVM_ARM_VCPU_PMU_V3))
>>
>> @@ -164,6 +166,8 @@ static inline u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1)
>> return 0;
>> }
>>
>> +#define kvm_pmu_get_irq_level(vcpu) (false)
>> +
>> #define kvm_vcpu_has_pmu(vcpu) ({ false; })
>> static inline void kvm_pmu_update_vcpu_events(struct kvm_vcpu *vcpu) {}
>> static inline void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu) {}
>
> Thanks,
> Joey
end of thread, other threads: [~2025-02-10 12:17 UTC | newest]
Thread overview: 90+ messages
2024-12-12 15:55 [PATCH v6 00/43] arm64: Support for Arm CCA in KVM Steven Price
2024-12-12 15:55 ` [PATCH v6 01/43] KVM: Prepare for handling only shared mappings in mmu_notifier events Steven Price
2024-12-12 15:55 ` [PATCH v6 02/43] kvm: arm64: Include kvm_emulate.h in kvm/arm_psci.h Steven Price
2024-12-12 15:55 ` [PATCH v6 03/43] arm64: RME: Handle Granule Protection Faults (GPFs) Steven Price
2024-12-16 16:45 ` Suzuki K Poulose
2024-12-12 15:55 ` [PATCH v6 04/43] arm64: RME: Add SMC definitions for calling the RMM Steven Price
2024-12-16 17:51 ` Suzuki K Poulose
2024-12-12 15:55 ` [PATCH v6 05/43] arm64: RME: Add wrappers for RMI calls Steven Price
2024-12-12 15:55 ` [PATCH v6 06/43] arm64: RME: Check for RME support at KVM init Steven Price
2024-12-19 5:44 ` Aneesh Kumar K.V
2025-01-28 5:47 ` Gavin Shan
2025-01-29 3:57 ` Gavin Shan
2025-01-29 15:24 ` Steven Price
2024-12-12 15:55 ` [PATCH v6 07/43] arm64: RME: Define the user ABI Steven Price
2025-01-28 23:51 ` Gavin Shan
2025-01-29 16:31 ` Steven Price
2024-12-12 15:55 ` [PATCH v6 08/43] arm64: RME: ioctls to create and configure realms Steven Price
2025-01-29 0:43 ` Gavin Shan
2025-01-30 11:57 ` Steven Price
2024-12-12 15:55 ` [PATCH v6 09/43] kvm: arm64: Expose debug HW register numbers for Realm Steven Price
2024-12-12 15:55 ` [PATCH v6 10/43] arm64: kvm: Allow passing machine type in KVM creation Steven Price
2025-01-29 4:07 ` Gavin Shan
2025-01-30 14:14 ` Steven Price
2024-12-12 15:55 ` [PATCH v6 11/43] arm64: RME: RTT tear down Steven Price
2025-01-29 23:06 ` Gavin Shan
2024-12-12 15:55 ` [PATCH v6 12/43] arm64: RME: Allocate/free RECs to match vCPUs Steven Price
2024-12-19 5:37 ` Aneesh Kumar K.V
2025-01-29 4:50 ` Gavin Shan
2025-01-30 16:40 ` Steven Price
2025-02-03 11:18 ` Suzuki K Poulose
2025-02-03 16:34 ` Steven Price
2024-12-12 15:55 ` [PATCH v6 13/43] KVM: arm64: vgic: Provide helper for number of list registers Steven Price
2024-12-12 15:55 ` [PATCH v6 14/43] arm64: RME: Support for the VGIC in realms Steven Price
2024-12-12 15:55 ` [PATCH v6 15/43] KVM: arm64: Support timers in realm RECs Steven Price
2024-12-12 15:55 ` [PATCH v6 16/43] arm64: RME: Allow VMM to set RIPAS Steven Price
2025-01-29 23:25 ` Gavin Shan
2025-01-30 16:56 ` Steven Price
2024-12-12 15:55 ` [PATCH v6 17/43] arm64: RME: Handle realm enter/exit Steven Price
2025-01-29 23:41 ` Gavin Shan
2025-02-03 16:34 ` Steven Price
2024-12-12 15:55 ` [PATCH v6 18/43] KVM: arm64: Handle realm MMIO emulation Steven Price
2024-12-12 15:55 ` [PATCH v6 19/43] arm64: RME: Allow populating initial contents Steven Price
2025-01-30 4:38 ` Gavin Shan
2025-02-03 16:52 ` Steven Price
2024-12-12 15:55 ` [PATCH v6 20/43] arm64: RME: Runtime faulting of memory Steven Price
2025-01-30 5:22 ` Gavin Shan
2025-02-06 4:33 ` Aneesh Kumar K.V
2025-02-07 17:04 ` Steven Price
2024-12-12 15:55 ` [PATCH v6 21/43] KVM: arm64: Handle realm VCPU load Steven Price
2024-12-12 15:55 ` [PATCH v6 22/43] KVM: arm64: Validate register access for a Realm VM Steven Price
2025-02-02 1:22 ` Gavin Shan
2025-02-07 17:04 ` Steven Price
2024-12-12 15:55 ` [PATCH v6 23/43] KVM: arm64: Handle Realm PSCI requests Steven Price
2025-02-02 2:06 ` Gavin Shan
2024-12-12 15:55 ` [PATCH v6 24/43] KVM: arm64: WARN on injected undef exceptions Steven Price
2025-02-02 2:11 ` Gavin Shan
2024-12-12 15:55 ` [PATCH v6 25/43] arm64: Don't expose stolen time for realm guests Steven Price
2025-02-02 2:15 ` Gavin Shan
2025-02-07 17:05 ` Steven Price
2024-12-12 15:55 ` [PATCH v6 26/43] arm64: rme: allow userspace to inject aborts Steven Price
2024-12-12 15:55 ` [PATCH v6 27/43] arm64: rme: support RSI_HOST_CALL Steven Price
2025-02-02 6:41 ` Gavin Shan
2025-02-07 17:05 ` Steven Price
2024-12-12 15:55 ` [PATCH v6 28/43] arm64: rme: Allow checking SVE on VM instance Steven Price
2025-02-02 6:00 ` Gavin Shan
2025-02-07 17:05 ` Steven Price
2024-12-12 15:55 ` [PATCH v6 29/43] arm64: RME: Always use 4k pages for realms Steven Price
2025-02-02 6:52 ` Gavin Shan
2025-02-07 17:05 ` Steven Price
2024-12-12 15:55 ` [PATCH v6 30/43] arm64: rme: Prevent Device mappings for Realms Steven Price
2025-02-02 7:12 ` Gavin Shan
2025-02-07 17:08 ` Steven Price
2024-12-12 15:55 ` [PATCH v6 31/43] arm_pmu: Provide a mechanism for disabling the physical IRQ Steven Price
2024-12-12 15:55 ` [PATCH v6 32/43] arm64: rme: Enable PMU support with a realm guest Steven Price
2024-12-12 16:54 ` Joey Gouly
2025-02-10 11:58 ` Steven Price
2024-12-12 15:55 ` [PATCH v6 33/43] kvm: rme: Hide KVM_CAP_READONLY_MEM for realm guests Steven Price
2024-12-12 15:55 ` [PATCH v6 34/43] arm64: RME: Propagate number of breakpoints and watchpoints to userspace Steven Price
2025-02-02 23:15 ` Gavin Shan
2025-02-02 23:17 ` Gavin Shan
2024-12-12 15:56 ` [PATCH v6 35/43] arm64: RME: Set breakpoint parameters through SET_ONE_REG Steven Price
2024-12-12 15:56 ` [PATCH v6 36/43] arm64: RME: Initialize PMCR.N with number counter supported by RMM Steven Price
2024-12-12 15:56 ` [PATCH v6 37/43] arm64: RME: Propagate max SVE vector length from RMM Steven Price
2024-12-12 15:56 ` [PATCH v6 38/43] arm64: RME: Configure max SVE vector length for a Realm Steven Price
2025-02-02 23:26 ` Gavin Shan
2024-12-12 15:56 ` [PATCH v6 39/43] arm64: RME: Provide register list for unfinalized RME RECs Steven Price
2024-12-12 15:56 ` [PATCH v6 40/43] arm64: RME: Provide accurate register list Steven Price
2024-12-12 15:56 ` [PATCH v6 41/43] arm64: kvm: Expose support for private memory Steven Price
2024-12-12 15:56 ` [PATCH v6 42/43] KVM: arm64: Expose KVM_ARM_VCPU_REC to user space Steven Price
2024-12-12 15:56 ` [PATCH v6 43/43] KVM: arm64: Allow activating realms Steven Price