linux-arm-kernel.lists.infradead.org archive mirror
From: Dave Martin <Dave.Martin@arm.com>
To: kvmarm@lists.cs.columbia.edu
Cc: "Peter Maydell" <peter.maydell@linaro.org>,
	"Okamoto Takayuki" <tokamoto@jp.fujitsu.com>,
	"Christoffer Dall" <cdall@kernel.org>,
	"Ard Biesheuvel" <ard.biesheuvel@linaro.org>,
	"Marc Zyngier" <marc.zyngier@arm.com>,
	"Catalin Marinas" <catalin.marinas@arm.com>,
	"Will Deacon" <will.deacon@arm.com>,
	"Zhang Lei" <zhang.lei@jp.fujitsu.com>,
	"Julien Grall" <julien.grall@arm.com>,
	"Alex Bennée" <alex.bennee@linaro.org>,
	linux-arm-kernel@lists.infradead.org
Subject: [PATCH v7 23/27] KVM: arm64/sve: Add pseudo-register for the guest's vector lengths
Date: Fri, 29 Mar 2019 13:00:48 +0000	[thread overview]
Message-ID: <1553864452-15080-24-git-send-email-Dave.Martin@arm.com> (raw)
In-Reply-To: <1553864452-15080-1-git-send-email-Dave.Martin@arm.com>

This patch adds a new pseudo-register KVM_REG_ARM64_SVE_VLS to
allow userspace to set and query the set of vector lengths visible
to the guest.

In the future, multiple register slices per SVE register may be
visible through the ioctl interface.  Once the set of slices has
been determined, the vector length set can no longer be allowed to
change, to avoid userspace seeing inconsistent sets of registers.
For this reason, this patch adds support for explicit finalization
of the SVE configuration via the KVM_ARM_VCPU_FINALIZE ioctl.
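
For illustration, a rough userspace sketch of the intended sequence
might look like the following (vcpu_fd is a hypothetical vcpu file
descriptor assumed to have been created with the KVM_ARM_VCPU_SVE
feature, which a later patch enables; the new symbols come from the
updated <linux/kvm.h>, and error handling is minimal):

  #include <linux/kvm.h>
  #include <sys/ioctl.h>

  static int configure_sve_vls(int vcpu_fd)
  {
          __u64 vls[8] = { 0 };   /* 512-bit bitmap, matching KVM_REG_SIZE_U512 */
          struct kvm_one_reg reg = {
                  .id   = KVM_REG_ARM64_SVE_VLS,
                  .addr = (__u64)(unsigned long)vls,
          };
          int feature = KVM_ARM_VCPU_SVE;

          /* Query the maximum set of vector lengths KVM can offer the guest: */
          if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg))
                  return -1;

          /*
           * Bit (vq - 1) of the bitmap selects vector length 16 * vq bytes.
           * Bits may be cleared here to restrict the guest's set before
           * writing it back:
           */
          if (ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg))
                  return -1;

          /*
           * Finalize: after this the vector length set can no longer change,
           * and the architectural SVE registers become accessible:
           */
          return ioctl(vcpu_fd, KVM_ARM_VCPU_FINALIZE, &feature);
  }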

Finalization is the proper place to allocate the SVE register state
storage in vcpu->arch.sve_state, so this patch adds that as
appropriate.  The data is freed via kvm_arch_vcpu_uninit(), which
was previously a no-op on arm64.

To simplify the logic for determining what vector lengths can be
supported, some code is added to KVM init to work this out, in the
kvm_arm_init_arch_resources() hook.

The KVM_REG_ARM64_SVE_VLS pseudo-register is not exposed yet.
Subsequent patches will allow SVE to be turned on for guest vcpus,
making it visible.
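
As a purely illustrative sketch of how a vcpu would then opt in once
those patches land (vm_fd and vcpu_fd are hypothetical descriptors;
error handling omitted):

  struct kvm_vcpu_init init;

  /* Let KVM pick the preferred target; this also clears init.features[]: */
  ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &init);
  init.features[0] |= 1U << KVM_ARM_VCPU_SVE;   /* request SVE for this vcpu */
  ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);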

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Tested-by: zhang.lei <zhang.lei@jp.fujitsu.com>

---

Changes since v5:

 * [Julien Thierry] Delete overzealous BUILD_BUG_ON() checks.
   It also turns out that these could cause kernel build failures in
   some configurations, even though the checked condition is compile-
   time constant.

   Because of the way the affected functions are called, the checks
   are superfluous, so the simplest option is simply to get rid of
   them.

 * [Julien Thierry] Free vcpu->arch.sve_state (if any) in
   kvm_arch_vcpu_uninit() (which is currently a no-op).

   This was accidentally lost during a previous rebase.

 * Add kvm_arm_init_arch_resources() hook, and use it to probe SVE
   configuration for KVM, to avoid duplicating the logic elsewhere.
   We only need to do this once.

 * Move sve_state buffer allocation to kvm_arm_vcpu_finalize().

   As well as making the code more straightforward, this avoids the
   need to allocate memory in kvm_reset_vcpu(), the meat of which is
   non-preemptible since commit 358b28f09f0a ("arm/arm64: KVM: Allow a
   VCPU to fully reset itself").

   The refactoring means that if this has not been done by the time
   we hit KVM_RUN, then this allocation will happen on the
   kvm_arm_first_run_init() path, where preemption remains enabled.

 * Add a couple of comments in {get,set}_sve_reg() to make the handling
   of the KVM_REG_ARM64_SVE_VLS special case a little clearer.

 * Move the mis-split rework that avoids put_user() being the correct
   size only by accident in KVM_GET_REG_LIST to "KVM: arm64: Enumerate
   SVE register indices for KVM_GET_REG_LIST".

 * Fix wrong WARN_ON() check sense when checking whether the
   implementation may need more SVE register slices than KVM can
   support.

 * Fix erroneous setting of vcpu->arch.sve_max_vl based on stale loop
   control variable vq.

 * Move definition of KVM_ARM_VCPU_SVE from "KVM: arm64/sve: Allow
   userspace to enable SVE for vcpus".

 * Migrate to explicit finalization of the SVE configuration, using
   KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE).
---
 arch/arm64/include/asm/kvm_host.h |  15 +++--
 arch/arm64/include/uapi/asm/kvm.h |   5 ++
 arch/arm64/kvm/guest.c            | 114 +++++++++++++++++++++++++++++++++++++-
 arch/arm64/kvm/reset.c            |  89 +++++++++++++++++++++++++++++
 4 files changed, 215 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 98658f7..5475cc4 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -23,7 +23,6 @@
 #define __ARM64_KVM_HOST_H__
 
 #include <linux/bitmap.h>
-#include <linux/errno.h>
 #include <linux/types.h>
 #include <linux/jump_label.h>
 #include <linux/kvm_types.h>
@@ -50,6 +49,7 @@
 
 #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
 
+/* Will be incremented when KVM_ARM_VCPU_SVE is fully implemented: */
 #define KVM_VCPU_MAX_FEATURES 4
 
 #define KVM_REQ_SLEEP \
@@ -59,10 +59,12 @@
 
 DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use);
 
-static inline int kvm_arm_init_arch_resources(void) { return 0; }
+extern unsigned int kvm_sve_max_vl;
+int kvm_arm_init_arch_resources(void);
 
 int __attribute_const__ kvm_target_cpu(void);
 int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
+void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu);
 int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext);
 void __extended_idmap_trampoline(phys_addr_t boot_pgd, phys_addr_t idmap_start);
 
@@ -353,6 +355,7 @@ struct kvm_vcpu_arch {
 #define KVM_ARM64_HOST_SVE_IN_USE	(1 << 3) /* backup for host TIF_SVE */
 #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
 #define KVM_ARM64_GUEST_HAS_SVE		(1 << 5) /* SVE exposed to guest */
+#define KVM_ARM64_VCPU_SVE_FINALIZED	(1 << 6) /* SVE config completed */
 
 #define vcpu_has_sve(vcpu) (system_supports_sve() && \
 			    ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
@@ -525,7 +528,6 @@ static inline bool kvm_arch_requires_vhe(void)
 
 static inline void kvm_arch_hardware_unsetup(void) {}
 static inline void kvm_arch_sync_events(struct kvm *kvm) {}
-static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
 static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
 
@@ -626,7 +628,10 @@ void kvm_arch_free_vm(struct kvm *kvm);
 
 int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type);
 
-#define kvm_arm_vcpu_finalize(vcpu, what) (-EINVAL)
-#define kvm_arm_vcpu_is_finalized(vcpu) true
+int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int what);
+bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
+
+#define kvm_arm_vcpu_sve_finalized(vcpu) \
+	((vcpu)->arch.flags & KVM_ARM64_VCPU_SVE_FINALIZED)
 
 #endif /* __ARM64_KVM_HOST_H__ */
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index ced760c..6963b7e 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -102,6 +102,7 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
 #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
 #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
+#define KVM_ARM_VCPU_SVE		4 /* enable SVE for this CPU */
 
 struct kvm_vcpu_init {
 	__u32 target;
@@ -243,6 +244,10 @@ struct kvm_vcpu_events {
 					 ((n) << 5) | (i))
 #define KVM_REG_ARM64_SVE_FFR(i)	KVM_REG_ARM64_SVE_PREG(16, i)
 
+/* Vector lengths pseudo-register: */
+#define KVM_REG_ARM64_SVE_VLS		(KVM_REG_ARM64 | KVM_REG_ARM64_SVE | \
+					 KVM_REG_SIZE_U512 | 0xffff)
+
 /* Device Control API: ARM VGIC */
 #define KVM_DEV_ARM_VGIC_GRP_ADDR	0
 #define KVM_DEV_ARM_VGIC_GRP_DIST_REGS	1
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 2aa80a5..086ab05 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -206,6 +206,73 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 	return err;
 }
 
+#define vq_word(vq) (((vq) - SVE_VQ_MIN) / 64)
+#define vq_mask(vq) ((u64)1 << ((vq) - SVE_VQ_MIN) % 64)
+
+static bool vq_present(
+	const u64 (*const vqs)[DIV_ROUND_UP(SVE_VQ_MAX - SVE_VQ_MIN + 1, 64)],
+	unsigned int vq)
+{
+	return (*vqs)[vq_word(vq)] & vq_mask(vq);
+}
+
+static int get_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+{
+	unsigned int max_vq, vq;
+	u64 vqs[DIV_ROUND_UP(SVE_VQ_MAX - SVE_VQ_MIN + 1, 64)];
+
+	if (WARN_ON(!sve_vl_valid(vcpu->arch.sve_max_vl)))
+		return -EINVAL;
+
+	memset(vqs, 0, sizeof(vqs));
+
+	max_vq = sve_vq_from_vl(vcpu->arch.sve_max_vl);
+	for (vq = SVE_VQ_MIN; vq <= max_vq; ++vq)
+		if (sve_vq_available(vq))
+			vqs[vq_word(vq)] |= vq_mask(vq);
+
+	if (copy_to_user((void __user *)reg->addr, vqs, sizeof(vqs)))
+		return -EFAULT;
+
+	return 0;
+}
+
+static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+{
+	unsigned int max_vq, vq;
+	u64 vqs[DIV_ROUND_UP(SVE_VQ_MAX - SVE_VQ_MIN + 1, 64)];
+
+	if (kvm_arm_vcpu_sve_finalized(vcpu))
+		return -EPERM; /* too late! */
+
+	if (WARN_ON(vcpu->arch.sve_state))
+		return -EINVAL;
+
+	if (copy_from_user(vqs, (const void __user *)reg->addr, sizeof(vqs)))
+		return -EFAULT;
+
+	max_vq = 0;
+	for (vq = SVE_VQ_MIN; vq <= SVE_VQ_MAX; ++vq)
+		if (vq_present(&vqs, vq))
+			max_vq = vq;
+
+	if (max_vq > sve_vq_from_vl(kvm_sve_max_vl))
+		return -EINVAL;
+
+	for (vq = SVE_VQ_MIN; vq <= max_vq; ++vq)
+		if (vq_present(&vqs, vq) != sve_vq_available(vq))
+			return -EINVAL;
+
+	/* Can't run with no vector lengths at all: */
+	if (max_vq < SVE_VQ_MIN)
+		return -EINVAL;
+
+	/* vcpu->arch.sve_state will be alloc'd by kvm_vcpu_finalize_sve() */
+	vcpu->arch.sve_max_vl = sve_vl_from_vq(max_vq);
+
+	return 0;
+}
+
 #define SVE_REG_SLICE_SHIFT	0
 #define SVE_REG_SLICE_BITS	5
 #define SVE_REG_ID_SHIFT	(SVE_REG_SLICE_SHIFT + SVE_REG_SLICE_BITS)
@@ -296,7 +363,19 @@ static int get_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 	struct sve_state_reg_region region;
 	char __user *uptr = (char __user *)reg->addr;
 
-	if (!vcpu_has_sve(vcpu) || sve_reg_to_region(&region, vcpu, reg))
+	if (!vcpu_has_sve(vcpu))
+		return -ENOENT;
+
+	/* Handle the KVM_REG_ARM64_SVE_VLS pseudo-reg as a special case: */
+	if (reg->id == KVM_REG_ARM64_SVE_VLS)
+		return get_sve_vls(vcpu, reg);
+
+	/* Otherwise, reg is an architectural SVE register... */
+
+	if (!kvm_arm_vcpu_sve_finalized(vcpu))
+		return -EPERM;
+
+	if (sve_reg_to_region(&region, vcpu, reg))
 		return -ENOENT;
 
 	if (copy_to_user(uptr, vcpu->arch.sve_state + region.koffset,
@@ -312,7 +391,19 @@ static int set_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 	struct sve_state_reg_region region;
 	const char __user *uptr = (const char __user *)reg->addr;
 
-	if (!vcpu_has_sve(vcpu) || sve_reg_to_region(&region, vcpu, reg))
+	if (!vcpu_has_sve(vcpu))
+		return -ENOENT;
+
+	/* Handle the KVM_REG_ARM64_SVE_VLS pseudo-reg as a special case: */
+	if (reg->id == KVM_REG_ARM64_SVE_VLS)
+		return set_sve_vls(vcpu, reg);
+
+	/* Otherwise, reg is an architectural SVE register... */
+
+	if (!kvm_arm_vcpu_sve_finalized(vcpu))
+		return -EPERM;
+
+	if (sve_reg_to_region(&region, vcpu, reg))
 		return -ENOENT;
 
 	if (copy_from_user(vcpu->arch.sve_state + region.koffset, uptr,
@@ -426,7 +517,11 @@ static unsigned long num_sve_regs(const struct kvm_vcpu *vcpu)
 	if (!vcpu_has_sve(vcpu))
 		return 0;
 
-	return slices * (SVE_NUM_PREGS + SVE_NUM_ZREGS + 1 /* FFR */);
+	/* Policed by KVM_GET_REG_LIST: */
+	WARN_ON(!kvm_arm_vcpu_sve_finalized(vcpu));
+
+	return slices * (SVE_NUM_PREGS + SVE_NUM_ZREGS + 1 /* FFR */)
+		+ 1; /* KVM_REG_ARM64_SVE_VLS */
 }
 
 static int copy_sve_reg_indices(const struct kvm_vcpu *vcpu,
@@ -441,6 +536,19 @@ static int copy_sve_reg_indices(const struct kvm_vcpu *vcpu,
 	if (!vcpu_has_sve(vcpu))
 		return 0;
 
+	/* Policed by KVM_GET_REG_LIST: */
+	WARN_ON(!kvm_arm_vcpu_sve_finalized(vcpu));
+
+	/*
+	 * Enumerate this first, so that userspace can save/restore in
+	 * the order reported by KVM_GET_REG_LIST:
+	 */
+	reg = KVM_REG_ARM64_SVE_VLS;
+	if (put_user(reg, uindices++))
+		return -EFAULT;
+
+	++num_regs;
+
 	for (i = 0; i < slices; i++) {
 		for (n = 0; n < SVE_NUM_ZREGS; n++) {
 			reg = KVM_REG_ARM64_SVE_ZREG(n, i);
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index f16a5f8..e7f9c06 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -23,11 +23,14 @@
 #include <linux/kvm_host.h>
 #include <linux/kvm.h>
 #include <linux/hw_breakpoint.h>
+#include <linux/slab.h>
+#include <linux/types.h>
 
 #include <kvm/arm_arch_timer.h>
 
 #include <asm/cpufeature.h>
 #include <asm/cputype.h>
+#include <asm/fpsimd.h>
 #include <asm/ptrace.h>
 #include <asm/kvm_arm.h>
 #include <asm/kvm_asm.h>
@@ -99,6 +102,92 @@ int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	return r;
 }
 
+unsigned int kvm_sve_max_vl;
+
+int kvm_arm_init_arch_resources(void)
+{
+	if (system_supports_sve()) {
+		kvm_sve_max_vl = sve_max_virtualisable_vl;
+
+		/*
+		 * The get_sve_reg()/set_sve_reg() ioctl interface will need
+		 * to be extended with multiple register slice support in
+		 * order to support vector lengths greater than
+		 * SVE_VL_ARCH_MAX:
+		 */
+		if (WARN_ON(kvm_sve_max_vl > SVE_VL_ARCH_MAX))
+			kvm_sve_max_vl = SVE_VL_ARCH_MAX;
+
+		/*
+		 * Don't even try to make use of vector lengths that
+		 * aren't available on all CPUs, for now:
+		 */
+		if (kvm_sve_max_vl < sve_max_vl)
+			pr_warn("KVM: SVE vector length for guests limited to %u bytes\n",
+				kvm_sve_max_vl);
+	}
+
+	return 0;
+}
+
+/*
+ * Finalize vcpu's maximum SVE vector length, allocating
+ * vcpu->arch.sve_state as necessary.
+ */
+static int kvm_vcpu_finalize_sve(struct kvm_vcpu *vcpu)
+{
+	void *buf;
+	unsigned int vl;
+
+	vl = vcpu->arch.sve_max_vl;
+
+	/*
+	 * Responsibility for these properties is shared between
+	 * kvm_arm_init_arch_resources(), kvm_vcpu_enable_sve() and
+	 * set_sve_vls().  Double-check here just to be sure:
+	 */
+	if (WARN_ON(!sve_vl_valid(vl) || vl > sve_max_virtualisable_vl ||
+		    vl > SVE_VL_ARCH_MAX))
+		return -EIO;
+
+	buf = kzalloc(SVE_SIG_REGS_SIZE(sve_vq_from_vl(vl)), GFP_KERNEL);
+	if (!buf)
+		return -ENOMEM;
+
+	vcpu->arch.sve_state = buf;
+	vcpu->arch.flags |= KVM_ARM64_VCPU_SVE_FINALIZED;
+	return 0;
+}
+
+int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int what)
+{
+	switch (what) {
+	case KVM_ARM_VCPU_SVE:
+		if (!vcpu_has_sve(vcpu))
+			return -EINVAL;
+
+		if (kvm_arm_vcpu_sve_finalized(vcpu))
+			return -EPERM;
+
+		return kvm_vcpu_finalize_sve(vcpu);
+	}
+
+	return -EINVAL;
+}
+
+bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu)
+{
+	if (vcpu_has_sve(vcpu) && !kvm_arm_vcpu_sve_finalized(vcpu))
+		return false;
+
+	return true;
+}
+
+void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu)
+{
+	kfree(vcpu->arch.sve_state);
+}
+
 /**
  * kvm_reset_vcpu - sets core registers and sys_regs to reset value
  * @vcpu: The VCPU pointer
-- 
2.1.4



Thread overview: 109+ messages
2019-03-29 13:00 [PATCH v7 00/27] KVM: arm64: SVE guest support Dave Martin
2019-03-29 13:00 ` [PATCH v7 01/27] KVM: Documentation: Document arm64 core registers in detail Dave Martin
2019-04-24  9:25   ` Alex Bennée
2019-03-29 13:00 ` [PATCH v7 02/27] arm64: fpsimd: Always set TIF_FOREIGN_FPSTATE on task state flush Dave Martin
2019-03-29 13:00 ` [PATCH v7 03/27] KVM: arm64: Delete orphaned declaration for __fpsimd_enabled() Dave Martin
2019-03-29 13:00 ` [PATCH v7 04/27] KVM: arm64: Refactor kvm_arm_num_regs() for easier maintenance Dave Martin
2019-03-29 13:00 ` [PATCH v7 05/27] KVM: arm64: Add missing #includes to kvm_host.h Dave Martin
2019-03-29 13:00 ` [PATCH v7 06/27] arm64/sve: Clarify role of the VQ map maintenance functions Dave Martin
2019-04-04 21:21   ` Andrew Jones
2019-03-29 13:00 ` [PATCH v7 07/27] arm64/sve: Check SVE virtualisability Dave Martin
2019-04-04 21:21   ` Andrew Jones
2019-04-05  9:35     ` Dave Martin
2019-03-29 13:00 ` [PATCH v7 08/27] arm64/sve: Enable SVE state tracking for non-task contexts Dave Martin
2019-03-29 13:00 ` [PATCH v7 09/27] KVM: arm64: Add a vcpu flag to control SVE visibility for the guest Dave Martin
2019-04-03 19:14   ` Andrew Jones
2019-04-04  3:17     ` Marc Zyngier
2019-04-04  7:53       ` Dave Martin
2019-04-04 21:15   ` Andrew Jones
2019-03-29 13:00 ` [PATCH v7 10/27] KVM: arm64: Propagate vcpu into read_id_reg() Dave Martin
2019-04-04 21:15   ` Andrew Jones
2019-03-29 13:00 ` [PATCH v7 11/27] KVM: arm64: Support runtime sysreg visibility filtering Dave Martin
2019-04-03 19:17   ` Andrew Jones
2019-04-24  9:39   ` Alex Bennée
2019-04-24 13:47     ` Dave Martin
2019-03-29 13:00 ` [PATCH v7 12/27] KVM: arm64/sve: System register context switch and access support Dave Martin
2019-04-03 19:39   ` Andrew Jones
2019-04-04  8:06     ` Dave Martin
2019-04-04  8:32       ` Andrew Jones
2019-04-04  8:47         ` Dave Martin
2019-04-04  8:59   ` Andrew Jones
2019-04-24 15:21   ` Alex Bennée
2019-04-25 13:28     ` Dave Martin
2019-03-29 13:00 ` [PATCH v7 13/27] KVM: arm64/sve: Context switch the SVE registers Dave Martin
2019-04-03 20:01   ` Andrew Jones
2019-04-04  8:10     ` Dave Martin
2019-04-04  8:35       ` Andrew Jones
2019-04-04  8:36         ` Dave Martin
2019-04-24 14:51           ` Alex Bennée
2019-04-25 13:35             ` Dave Martin
2019-03-29 13:00 ` [PATCH v7 14/27] KVM: Allow 2048-bit register access via ioctl interface Dave Martin
2019-04-04 21:11   ` Andrew Jones
2019-03-29 13:00 ` [PATCH v7 15/27] KVM: arm64: Add missing #include of <linux/string.h> in guest.c Dave Martin
2019-03-29 13:00 ` [PATCH v7 16/27] KVM: arm64: Factor out core register ID enumeration Dave Martin
2019-04-02  2:41   ` Marc Zyngier
2019-04-02  8:59     ` Dave Martin
2019-04-02  9:32       ` Marc Zyngier
2019-04-02  9:54         ` Dave P Martin
2019-03-29 13:00 ` [PATCH v7 17/27] KVM: arm64: Reject ioctl access to FPSIMD V-regs on SVE vcpus Dave Martin
2019-04-03 20:15   ` Andrew Jones
2019-04-24 13:45   ` Alex Bennée
2019-03-29 13:00 ` [PATCH v7 18/27] KVM: arm64/sve: Add SVE support to register access ioctl interface Dave Martin
2019-04-04 13:57   ` Andrew Jones
2019-04-04 14:50     ` Dave Martin
2019-04-04 16:25       ` Andrew Jones
2019-04-04 16:56         ` Dave Martin
2019-03-29 13:00 ` [PATCH v7 19/27] KVM: arm64: Enumerate SVE register indices for KVM_GET_REG_LIST Dave Martin
2019-04-04 14:08   ` Andrew Jones
2019-04-05  9:35     ` Dave Martin
2019-04-05  9:45       ` Andrew Jones
2019-04-05 11:11         ` Dave Martin
2019-03-29 13:00 ` [PATCH v7 20/27] arm64/sve: In-kernel vector length availability query interface Dave Martin
2019-04-04 14:20   ` Andrew Jones
2019-04-05  9:35     ` Dave Martin
2019-04-05  9:54       ` Andrew Jones
2019-04-05 11:13         ` Dave Martin
2019-03-29 13:00 ` [PATCH v7 21/27] KVM: arm/arm64: Add hook for arch-specific KVM initialisation Dave Martin
2019-04-04 14:25   ` Andrew Jones
2019-04-04 14:53     ` Dave Martin
2019-04-04 16:33       ` Andrew Jones
2019-04-05  9:36         ` Dave Martin
2019-04-05 10:40           ` Andrew Jones
2019-04-05 11:14             ` Dave Martin
2019-03-29 13:00 ` [PATCH v7 22/27] KVM: arm/arm64: Add KVM_ARM_VCPU_FINALIZE ioctl Dave Martin
2019-04-04 15:07   ` Andrew Jones
2019-04-04 16:47     ` Dave Martin
2019-03-29 13:00 ` Dave Martin [this message]
2019-04-04 20:18   ` [PATCH v7 23/27] KVM: arm64/sve: Add pseudo-register for the guest's vector lengths Andrew Jones
2019-04-05  9:36     ` Dave Martin
2019-04-05 10:14       ` Andrew Jones
2019-04-05 12:54         ` Dave Martin
2019-04-05 15:33           ` Andrew Jones
2019-04-10 12:42             ` Dave Martin
2019-04-04 20:31   ` Andrew Jones
2019-04-05  9:36     ` Dave Martin
2019-04-05 10:22       ` Andrew Jones
2019-04-05 14:06         ` Dave Martin
2019-04-05 15:41           ` Andrew Jones
2019-04-10 12:35             ` Dave Martin
2019-03-29 13:00 ` [PATCH v7 24/27] KVM: arm64/sve: Allow userspace to enable SVE for vcpus Dave Martin
2019-04-04 20:36   ` Andrew Jones
2019-03-29 13:00 ` [PATCH v7 25/27] KVM: arm64: Add a capability to advertise SVE support Dave Martin
2019-04-04 20:39   ` Andrew Jones
2019-03-29 13:00 ` [PATCH v7 26/27] KVM: Document errors for KVM_GET_ONE_REG and KVM_SET_ONE_REG Dave Martin
2019-04-03 20:27   ` Andrew Jones
2019-04-04  8:35     ` Dave Martin
2019-04-04  9:34       ` Andrew Jones
2019-04-04  9:38         ` Dave P Martin
2019-04-04  9:45           ` Andrew Jones
2019-03-29 13:00 ` [PATCH v7 27/27] KVM: arm64/sve: Document KVM API extensions for SVE Dave Martin
2019-04-04 21:09   ` Andrew Jones
2019-04-05  9:36     ` Dave Martin
2019-04-05 10:39       ` Andrew Jones
2019-04-05 13:00         ` Dave Martin
2019-04-05 15:38           ` Andrew Jones
2019-04-10 12:34             ` Dave Martin
2019-03-29 14:56 ` [PATCH v7 00/27] KVM: arm64: SVE guest support Marc Zyngier
2019-03-29 15:06   ` Dave Martin
2019-04-05 16:41 ` Dave Martin
2019-04-25 10:33 ` Alex Bennée
