From: Dave Martin <Dave.Martin@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: linux-arch@vger.kernel.org, Okamoto Takayuki <tokamoto@jp.fujitsu.com>,
    libc-alpha@sourceware.org, Ard Biesheuvel <ard.biesheuvel@linaro.org>,
    Szabolcs Nagy <szabolcs.nagy@arm.com>, Catalin Marinas <catalin.marinas@arm.com>,
    Will Deacon <will.deacon@arm.com>, kvmarm@lists.cs.columbia.edu
Subject: [PATCH v5 24/30] arm64/sve: KVM: Prevent guests from using SVE
Date: Tue, 31 Oct 2017 15:51:16 +0000
Message-ID: <1509465082-30427-25-git-send-email-Dave.Martin@arm.com>
In-Reply-To: <1509465082-30427-1-git-send-email-Dave.Martin@arm.com>

Until KVM has full SVE support, guests must not be allowed to
execute SVE instructions.  This patch enables the necessary traps,
and also ensures that the traps are disabled again on exit from the
guest so that the host can still use SVE if it wants to.

On guest exit, high bits of the SVE Zn registers may have been
clobbered as a side-effect of the execution of FPSIMD instructions in
the guest.  The existing KVM host FPSIMD restore code is not
sufficient to restore these bits, so this patch explicitly marks the
CPU as not containing cached vector state for any task, thus forcing
a reload on the next return to userspace.  This is an interim
measure, in advance of adding full SVE awareness to KVM.

This marking of cached vector state in the CPU as invalid is done
using __this_cpu_write(fpsimd_last_state, NULL) in fpsimd.c.  Due to
the repeated use of this rather obscure operation, it makes sense to
factor it out as a separate helper with a clearer name.  This patch
factors it out as fpsimd_flush_cpu_state(), and ports all callers to
use it.

As a side effect of this refactoring, a this_cpu_write() in
fpsimd_cpu_pm_notifier() is changed to __this_cpu_write().  This
should be fine, since cpu_pm_enter() is supposed to be called only
with interrupts disabled.
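In outline, the entry/exit path after this patch behaves as sketched
below.  This is a condensed illustration assembled from the hunks that
follow (all identifiers are taken from the patch; one of the VHE /
non-VHE variants is shown for each step), not literal kernel code:

        /* kvm_arch_vcpu_ioctl_run(), with preemption disabled: */
        kvm_fpsimd_flush_cpu_state();   /* forget any cached SVE register state */

        /* __activate_traps_nvhe(): make guest FPSIMD/SVE use trap to EL2 */
        write_sysreg(CPTR_EL2_DEFAULT | CPTR_EL2_TTA | CPTR_EL2_TFP | CPTR_EL2_TZ,
                     cptr_el2);

        /* ... run the guest; any SVE instruction it issues now traps ... */

        /* __deactivate_traps_vhe(): hand FPSIMD and SVE back to the host */
        write_sysreg(CPACR_EL1_DEFAULT, cpacr_el1);     /* FPEN | ZEN_EL1EN */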
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm/include/asm/kvm_host.h   |  3 +++
 arch/arm64/include/asm/fpsimd.h   |  1 +
 arch/arm64/include/asm/kvm_arm.h  |  4 +++-
 arch/arm64/include/asm/kvm_host.h | 11 +++++++++++
 arch/arm64/kernel/fpsimd.c        | 31 +++++++++++++++++++++++++++++--
 arch/arm64/kvm/hyp/switch.c       |  6 +++---
 virt/kvm/arm/arm.c                |  3 +++
 7 files changed, 53 insertions(+), 6 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 4a879f6..242151e 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -293,4 +293,7 @@ int kvm_arm_vcpu_arch_get_attr(struct kvm_vcpu *vcpu,
 int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
                                struct kvm_device_attr *attr);
 
+/* All host FP/SIMD state is restored on guest exit, so nothing to save: */
+static inline void kvm_fpsimd_flush_cpu_state(void) {}
+
 #endif /* __ARM_KVM_HOST_H__ */
diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
index b868412..74f3439 100644
--- a/arch/arm64/include/asm/fpsimd.h
+++ b/arch/arm64/include/asm/fpsimd.h
@@ -74,6 +74,7 @@ extern void fpsimd_restore_current_state(void);
 extern void fpsimd_update_current_state(struct fpsimd_state *state);
 
 extern void fpsimd_flush_task_state(struct task_struct *target);
+extern void sve_flush_cpu_state(void);
 
 /* Maximum VL that SVE VL-agnostic software can transparently support */
 #define SVE_VL_ARCH_MAX 0x100
diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index dbf0537..7f069ff 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -186,7 +186,8 @@
 #define CPTR_EL2_TTA    (1 << 20)
 #define CPTR_EL2_TFP    (1 << CPTR_EL2_TFP_SHIFT)
 #define CPTR_EL2_TZ     (1 << 8)
-#define CPTR_EL2_DEFAULT        0x000033ff
+#define CPTR_EL2_RES1   0x000032ff /* known RES1 bits in CPTR_EL2 */
+#define CPTR_EL2_DEFAULT        CPTR_EL2_RES1
 
 /* Hyp Debug Configuration Register bits */
 #define MDCR_EL2_TPMS           (1 << 14)
@@ -237,5 +238,6 @@
 
 #define CPACR_EL1_FPEN          (3 << 20)
 #define CPACR_EL1_TTA           (1 << 28)
+#define CPACR_EL1_DEFAULT       (CPACR_EL1_FPEN | CPACR_EL1_ZEN_EL1EN)
 
 #endif /* __ARM64_KVM_ARM_H__ */
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index e923b58..674912d 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -25,6 +25,7 @@
 #include <linux/types.h>
 #include <linux/kvm_types.h>
 #include <asm/cpufeature.h>
+#include <asm/fpsimd.h>
 #include <asm/kvm.h>
 #include <asm/kvm_asm.h>
 #include <asm/kvm_mmio.h>
@@ -384,4 +385,14 @@ static inline void __cpu_init_stage2(void)
                  "PARange is %d bits, unsupported configuration!", parange);
 }
 
+/*
+ * All host FP/SIMD state is restored on guest exit, so nothing needs
+ * doing here except in the SVE case:
+ */
+static inline void kvm_fpsimd_flush_cpu_state(void)
+{
+       if (system_supports_sve())
+               sve_flush_cpu_state();
+}
+
 #endif /* __ARM64_KVM_HOST_H__ */
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 4da64fc..42fc731 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -1049,6 +1049,33 @@ void fpsimd_flush_task_state(struct task_struct *t)
        t->thread.fpsimd_state.cpu = NR_CPUS;
 }
 
+static inline void fpsimd_flush_cpu_state(void)
+{
+       __this_cpu_write(fpsimd_last_state, NULL);
+}
+
+/*
+ * Invalidate any task SVE state currently held in this CPU's regs.
+ *
+ * This is used to prevent the kernel from trying to reuse SVE register data
+ * that is destroyed by KVM guest enter/exit.  This function should go away
+ * when KVM SVE support is implemented.  Don't use it for anything else.
+ */
+#ifdef CONFIG_ARM64_SVE
+void sve_flush_cpu_state(void)
+{
+       struct fpsimd_state *const fpstate = __this_cpu_read(fpsimd_last_state);
+       struct task_struct *tsk;
+
+       if (!fpstate)
+               return;
+
+       tsk = container_of(fpstate, struct task_struct, thread.fpsimd_state);
+       if (test_tsk_thread_flag(tsk, TIF_SVE))
+               fpsimd_flush_cpu_state();
+}
+#endif /* CONFIG_ARM64_SVE */
+
 #ifdef CONFIG_KERNEL_MODE_NEON
 
 DEFINE_PER_CPU(bool, kernel_neon_busy);
@@ -1089,7 +1116,7 @@ void kernel_neon_begin(void)
        }
 
        /* Invalidate any task state remaining in the fpsimd regs: */
-       __this_cpu_write(fpsimd_last_state, NULL);
+       fpsimd_flush_cpu_state();
 
        preempt_disable();
 
@@ -1210,7 +1237,7 @@ static int fpsimd_cpu_pm_notifier(struct notifier_block *self,
        case CPU_PM_ENTER:
                if (current->mm)
                        task_fpsimd_save();
-               this_cpu_write(fpsimd_last_state, NULL);
+               fpsimd_flush_cpu_state();
                break;
        case CPU_PM_EXIT:
                if (current->mm)
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 35a90b8..951f3eb 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -48,7 +48,7 @@ static void __hyp_text __activate_traps_vhe(void)
 
        val = read_sysreg(cpacr_el1);
        val |= CPACR_EL1_TTA;
-       val &= ~CPACR_EL1_FPEN;
+       val &= ~(CPACR_EL1_FPEN | CPACR_EL1_ZEN);
        write_sysreg(val, cpacr_el1);
 
        write_sysreg(__kvm_hyp_vector, vbar_el1);
@@ -59,7 +59,7 @@ static void __hyp_text __activate_traps_nvhe(void)
        u64 val;
 
        val = CPTR_EL2_DEFAULT;
-       val |= CPTR_EL2_TTA | CPTR_EL2_TFP;
+       val |= CPTR_EL2_TTA | CPTR_EL2_TFP | CPTR_EL2_TZ;
        write_sysreg(val, cptr_el2);
 }
 
@@ -117,7 +117,7 @@ static void __hyp_text __deactivate_traps_vhe(void)
 
        write_sysreg(mdcr_el2, mdcr_el2);
        write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
-       write_sysreg(CPACR_EL1_FPEN, cpacr_el1);
+       write_sysreg(CPACR_EL1_DEFAULT, cpacr_el1);
        write_sysreg(vectors, vbar_el1);
 }
 
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index b9f68e4..4d3cf9c 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -652,6 +652,9 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
                 */
                preempt_disable();
 
+               /* Flush FP/SIMD state that can't survive guest entry/exit */
+               kvm_fpsimd_flush_cpu_state();
+
                kvm_pmu_flush_hwstate(vcpu);
 
                kvm_timer_flush_hwstate(vcpu);
-- 
2.1.4
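One detail of the kvm_arm.h hunk that is easy to miss is why
CPTR_EL2_DEFAULT changes value.  The arithmetic below is derived from
the definitions in that hunk (CPTR_EL2_TFP_SHIFT is defined elsewhere
in the file as 10); it is an explanatory note, not code from the
series:

        /*
         * Before SVE, CPTR_EL2 bit 8 was RES1, so it was folded into the
         * old default value:
         *
         *     old CPTR_EL2_DEFAULT = 0x000033ff         (bits 0-9, 12, 13)
         *
         * With SVE, bit 8 is the TZ trap control, so it must not be set
         * while the host runs and is dropped from the known-RES1 mask:
         *
         *     CPTR_EL2_TZ   = 1 << 8 = 0x00000100
         *     CPTR_EL2_RES1 = 0x000033ff & ~CPTR_EL2_TZ = 0x000032ff
         *
         * It is ORed back in only on guest entry in __activate_traps_nvhe():
         *
         *     CPTR_EL2_DEFAULT | CPTR_EL2_TTA | CPTR_EL2_TFP | CPTR_EL2_TZ
         *         = 0x000032ff | (1 << 20) | (1 << 10) | (1 << 8)
         *         = 0x001037ff
         */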