From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: from mail.linuxfoundation.org ([140.211.169.12]:39138 "EHLO
	mail.linuxfoundation.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S935033AbdAFPIw (ORCPT );
	Fri, 6 Jan 2017 10:08:52 -0500
Subject: Patch "arm64: KVM: pmu: Reset PMSELR_EL0.SEL to a sane value
 before entering the guest" has been added to the 4.9-stable tree
To: marc.zyngier@arm.com, gregkh@linuxfoundation.org, will.deacon@arm.com
Cc: 
From: 
Date: Fri, 06 Jan 2017 16:09:10 +0100
Message-ID: <14837153506062@kroah.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=ANSI_X3.4-1968
Content-Transfer-Encoding: 8bit
Sender: stable-owner@vger.kernel.org
List-ID: 

This is a note to let you know that I've just added the patch titled

    arm64: KVM: pmu: Reset PMSELR_EL0.SEL to a sane value before entering the guest

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     arm64-kvm-pmu-reset-pmselr_el0.sel-to-a-sane-value-before-entering-the-guest.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From 21cbe3cc8a48ff17059912e019fbde28ed54745a Mon Sep 17 00:00:00 2001
From: Marc Zyngier <marc.zyngier@arm.com>
Date: Tue, 6 Dec 2016 14:34:22 +0000
Subject: arm64: KVM: pmu: Reset PMSELR_EL0.SEL to a sane value before entering the guest

From: Marc Zyngier <marc.zyngier@arm.com>

commit 21cbe3cc8a48ff17059912e019fbde28ed54745a upstream.

The ARMv8 architecture allows the cycle counter to be configured
by setting PMSELR_EL0.SEL==0x1f and then accessing PMXEVTYPER_EL0,
hence accessing PMCCFILTR_EL0. But it disallows the use of
PMSELR_EL0.SEL==0x1f to access the cycle counter itself through
PMXEVCNTR_EL0.

Linux itself doesn't violate this rule, but we may end up with
PMSELR_EL0.SEL being set to 0x1f when we enter a guest. If that
guest accesses PMXEVCNTR_EL0, the access may UNDEF at EL1,
despite the guest not having done anything wrong.

In order to avoid this unfortunate course of events (haha!), let's
sanitize PMSELR_EL0 on guest entry. This ensures that the guest
won't explode unexpectedly.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 arch/arm64/kvm/hyp/switch.c |    8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -85,7 +85,13 @@ static void __hyp_text __activate_traps(
 	write_sysreg(val, hcr_el2);
 	/* Trap on AArch32 cp15 c15 accesses (EL1 or EL0) */
 	write_sysreg(1 << 15, hstr_el2);
-	/* Make sure we trap PMU access from EL0 to EL2 */
+	/*
+	 * Make sure we trap PMU access from EL0 to EL2. Also sanitize
+	 * PMSELR_EL0 to make sure it never contains the cycle
+	 * counter, which could make a PMXEVCNTR_EL0 access UNDEF at
+	 * EL1 instead of being trapped to EL2.
+	 */
+	write_sysreg(0, pmselr_el0);
 	write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0);
 	write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
 	__activate_traps_arch()();


Patches currently in stable-queue which might be from marc.zyngier@arm.com are

queue-4.9/arm64-kvm-pmu-reset-pmselr_el0.sel-to-a-sane-value-before-entering-the-guest.patch
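
[Editor's note] For readers unfamiliar with the PMU indirection registers, here is a
minimal sketch (not part of the patch) of the asymmetry the commit message describes.
It assumes the arm64 kernel's read_sysreg()/write_sysreg() accessors from
<asm/sysreg.h>; the function name and the locally defined constant are illustrative,
not taken from the patch.

#include <asm/sysreg.h>

/* SEL == 0x1f selects the cycle counter "slot" (illustrative define) */
#define PMU_CYCLE_COUNTER_SEL	0x1f

static void pmselr_asymmetry_sketch(void)
{
	/*
	 * Architecturally valid: with PMSELR_EL0.SEL == 0x1f, an access
	 * to PMXEVTYPER_EL0 is redirected to PMCCFILTR_EL0, i.e. it
	 * configures the cycle counter's event filter.
	 */
	write_sysreg(PMU_CYCLE_COUNTER_SEL, pmselr_el0);
	write_sysreg(0, pmxevtyper_el0);

	/*
	 * Architecturally UNDEFINED: PMXEVCNTR_EL0 has no cycle counter
	 * alias, so reading it while SEL == 0x1f may UNDEF. A guest that
	 * inherits SEL == 0x1f from the host can hit this at EL1 through
	 * no fault of its own, which is why the patch writes 0 to
	 * pmselr_el0 in __activate_traps() on guest entry.
	 */
	/* (void)read_sysreg(pmxevcntr_el0);	<-- the access that may UNDEF */
}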