Subject: [PATCH 0/3] arm/arm64: KVM: SCTLR_EL2/HSCTLR setup fixes
From: Marc Zyngier
Date: 2017-06-06 18:08 UTC
To: linux-arm-kernel

A couple of issues have recently cropped up regarding the way we set up
SCTLR_EL2:

(1) We accidentally zero some RES1 bits. This doesn't have any impact
    on current revisions of the architecture, but may have unexpected
    effects on future revisions.

(2) We set SCTLR_EL2.A (trap on unaligned accesses), but don't provide
    any code to handle such a trap (we panic). So far, this has never
    been an issue, but GCC 7 has started emitting such accesses, and
    the EL2 code explodes. The best course of action is to let these
    accesses take place, as we don't have any particular restrictions
    there.

This short series addresses both issues, and applies (2) to 32bit ARM
as well (though we haven't had any report of that exploding yet).

Marc Zyngier (3):
  arm64: KVM: Preserve RES1 bits in SCTLR_EL2
  arm64: KVM: Allow unaligned accesses at EL2
  arm: KVM: Allow unaligned accesses at HYP

 arch/arm/kvm/init.S             |  5 ++---
 arch/arm64/include/asm/sysreg.h |  4 ++++
 arch/arm64/kvm/hyp-init.S       | 11 +++++++----
 3 files changed, 13 insertions(+), 7 deletions(-)

-- 
2.11.0
Subject: [PATCH 1/3] arm64: KVM: Preserve RES1 bits in SCTLR_EL2
From: Marc Zyngier
Date: 2017-06-06 18:08 UTC
To: linux-arm-kernel

__do_hyp_init has the rather bad habit of ignoring RES1 bits and
writing them back as zero. On a v8.0-v8.2 CPU, this doesn't do anything
bad, but may end up being pretty nasty on future revisions of the
architecture. Let's preserve those bits so that we don't have to fix
this later on.

Cc: stable@vger.kernel.org
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/sysreg.h |  4 ++++
 arch/arm64/kvm/hyp-init.S       | 10 ++++++----
 2 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 15c142ce991c..b4d13d9267ff 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -286,6 +286,10 @@
 #define SCTLR_ELx_A	(1 << 1)
 #define SCTLR_ELx_M	1
 
+#define SCTLR_EL2_RES1	((1 << 4)  | (1 << 5)  | (1 << 11) | (1 << 16) | \
+			 (1 << 18) | (1 << 22) | (1 << 23) | (1 << 28) | \
+			 (1 << 29))
+
 #define SCTLR_ELx_FLAGS	(SCTLR_ELx_M | SCTLR_ELx_A | SCTLR_ELx_C | \
 			 SCTLR_ELx_SA | SCTLR_ELx_I)
 
diff --git a/arch/arm64/kvm/hyp-init.S b/arch/arm64/kvm/hyp-init.S
index 839425c24b1c..4072d408a4b4 100644
--- a/arch/arm64/kvm/hyp-init.S
+++ b/arch/arm64/kvm/hyp-init.S
@@ -106,10 +106,12 @@ __do_hyp_init:
 	tlbi	alle2
 	dsb	sy
 
-	mrs	x4, sctlr_el2
-	and	x4, x4, #SCTLR_ELx_EE	// preserve endianness of EL2
-	ldr	x5, =SCTLR_ELx_FLAGS
-	orr	x4, x4, x5
+	/*
+	 * Preserve all the RES1 bits while setting the default flags,
+	 * as well as the EE bit on BE.
+	 */
+	ldr	x4, =(SCTLR_EL2_RES1 | SCTLR_ELx_FLAGS)
+CPU_BE(	orr	x4, x4, #SCTLR_ELx_EE)
 	msr	sctlr_el2, x4
 	isb
-- 
2.11.0
Subject: [PATCH 2/3] arm64: KVM: Allow unaligned accesses at EL2
From: Marc Zyngier
Date: 2017-06-06 18:08 UTC
To: linux-arm-kernel

We currently have the SCTLR_EL2.A bit set, trapping unaligned accesses
at EL2, but we're not really prepared to deal with it. So far, this
has gone unnoticed, until GCC 7 started emitting those (in particular
64bit writes on a 32bit boundary).

Since the rest of the kernel is pretty happy about that, let's follow
its example and set SCTLR_EL2.A to zero. Modern CPUs don't really
care.

Cc: stable@vger.kernel.org
Reported-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/kvm/hyp-init.S | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/hyp-init.S b/arch/arm64/kvm/hyp-init.S
index 4072d408a4b4..3f9615582377 100644
--- a/arch/arm64/kvm/hyp-init.S
+++ b/arch/arm64/kvm/hyp-init.S
@@ -108,9 +108,10 @@ __do_hyp_init:
 
 	/*
 	 * Preserve all the RES1 bits while setting the default flags,
-	 * as well as the EE bit on BE.
+	 * as well as the EE bit on BE. Drop the A flag since the compiler
+	 * is allowed to generate unaligned accesses.
 	 */
-	ldr	x4, =(SCTLR_EL2_RES1 | SCTLR_ELx_FLAGS)
+	ldr	x4, =(SCTLR_EL2_RES1 | (SCTLR_ELx_FLAGS & ~SCTLR_ELx_A))
 CPU_BE(	orr	x4, x4, #SCTLR_ELx_EE)
 	msr	sctlr_el2, x4
 	isb
-- 
2.11.0
Subject: [PATCH 2/3] arm64: KVM: Allow unaligned accesses at EL2
From: Christoffer Dall
Date: 2017-06-06 20:09 UTC
To: linux-arm-kernel

On Tue, Jun 06, 2017 at 07:08:34PM +0100, Marc Zyngier wrote:
> We currently have the SCTLR_EL2.A bit set, trapping unaligned accesses
> at EL2, but we're not really prepared to deal with it. So far, this
> has gone unnoticed, until GCC 7 started emitting those (in particular
> 64bit writes on a 32bit boundary).
> 
> Since the rest of the kernel is pretty happy about that, let's follow
> its example and set SCTLR_EL2.A to zero. Modern CPUs don't really
> care.

Why do we set the A flag via SCTLR_ELx_FLAGS in the first place, only to
drop that flag later on for both EL1 and EL2?

> 
> Cc: stable@vger.kernel.org
> Reported-by: Alexander Graf <agraf@suse.de>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  arch/arm64/kvm/hyp-init.S | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/kvm/hyp-init.S b/arch/arm64/kvm/hyp-init.S
> index 4072d408a4b4..3f9615582377 100644
> --- a/arch/arm64/kvm/hyp-init.S
> +++ b/arch/arm64/kvm/hyp-init.S
> @@ -108,9 +108,10 @@ __do_hyp_init:
> 
>  	/*
>  	 * Preserve all the RES1 bits while setting the default flags,
> -	 * as well as the EE bit on BE.
> +	 * as well as the EE bit on BE. Drop the A flag since the compiler
> +	 * is allowed to generate unaligned accesses.
>  	 */
> -	ldr	x4, =(SCTLR_EL2_RES1 | SCTLR_ELx_FLAGS)
> +	ldr	x4, =(SCTLR_EL2_RES1 | (SCTLR_ELx_FLAGS & ~SCTLR_ELx_A))
> CPU_BE(	orr	x4, x4, #SCTLR_ELx_EE)
>  	msr	sctlr_el2, x4
>  	isb
> -- 
> 2.11.0
> 

Otherwise this looks fine to me.

Thanks,
-Christoffer
Subject: [PATCH 2/3] arm64: KVM: Allow unaligned accesses at EL2
From: Marc Zyngier
Date: 2017-06-07 9:16 UTC
To: linux-arm-kernel

On 06/06/17 21:09, Christoffer Dall wrote:
> On Tue, Jun 06, 2017 at 07:08:34PM +0100, Marc Zyngier wrote:
>> We currently have the SCTLR_EL2.A bit set, trapping unaligned accesses
>> at EL2, but we're not really prepared to deal with it. So far, this
>> has gone unnoticed, until GCC 7 started emitting those (in particular
>> 64bit writes on a 32bit boundary).
>>
>> Since the rest of the kernel is pretty happy about that, let's follow
>> its example and set SCTLR_EL2.A to zero. Modern CPUs don't really
>> care.
>
> Why do we set the A flag via SCTLR_ELx_FLAGS in the first place, only to
> drop that flag later on for both EL1 and EL2?

That flag is always cleared at EL1, never set. Actually, only EL2 uses
that macro to *set* flags. An alternative would be to do away with the
macro and use the individual flags, like the 32bit side does.

What do you think?

	M.
-- 
Jazz is not dead. It just smells funny...
Subject: [PATCH 2/3] arm64: KVM: Allow unaligned accesses at EL2
From: Christoffer Dall
Date: 2017-06-07 9:56 UTC
To: linux-arm-kernel

On Wed, Jun 07, 2017 at 10:16:29AM +0100, Marc Zyngier wrote:
> On 06/06/17 21:09, Christoffer Dall wrote:
> > On Tue, Jun 06, 2017 at 07:08:34PM +0100, Marc Zyngier wrote:
> >> We currently have the SCTLR_EL2.A bit set, trapping unaligned accesses
> >> at EL2, but we're not really prepared to deal with it. So far, this
> >> has gone unnoticed, until GCC 7 started emitting those (in particular
> >> 64bit writes on a 32bit boundary).
> >>
> >> Since the rest of the kernel is pretty happy about that, let's follow
> >> its example and set SCTLR_EL2.A to zero. Modern CPUs don't really
> >> care.
> >
> > Why do we set the A flag via SCTLR_ELx_FLAGS in the first place, only to
> > drop that flag later on for both EL1 and EL2?
> 
> That flag is always cleared at EL1, never set. Actually, only EL2 uses
> that macro to *set* flags. An alternative would be to do away with the
> macro and use the individual flags, like the 32bit side does.
> 
> What do you think?
> 

I don't understand why the A bit is part of SCTLR_ELx_FLAGS then? Is it
used as a mask, is that why?

In terms of these patches, I think we should apply these, because they
solve the problem and do the same thing.

Thanks,
-Christoffer
Subject: [PATCH 2/3] arm64: KVM: Allow unaligned accesses at EL2
From: Marc Zyngier
Date: 2017-06-07 10:11 UTC
To: linux-arm-kernel

On 07/06/17 10:56, Christoffer Dall wrote:
> On Wed, Jun 07, 2017 at 10:16:29AM +0100, Marc Zyngier wrote:
>> On 06/06/17 21:09, Christoffer Dall wrote:
>>> On Tue, Jun 06, 2017 at 07:08:34PM +0100, Marc Zyngier wrote:
>>>> We currently have the SCTLR_EL2.A bit set, trapping unaligned accesses
>>>> at EL2, but we're not really prepared to deal with it. So far, this
>>>> has gone unnoticed, until GCC 7 started emitting those (in particular
>>>> 64bit writes on a 32bit boundary).
>>>>
>>>> Since the rest of the kernel is pretty happy about that, let's follow
>>>> its example and set SCTLR_EL2.A to zero. Modern CPUs don't really
>>>> care.
>>>
>>> Why do we set the A flag via SCTLR_ELx_FLAGS in the first place, only to
>>> drop that flag later on for both EL1 and EL2?
>>
>> That flag is always cleared at EL1, never set. Actually, only EL2 uses
>> that macro to *set* flags. An alternative would be to do away with the
>> macro and use the individual flags, like the 32bit side does.
>>
>> What do you think?
>>
> I don't understand why the A bit is part of SCTLR_ELx_FLAGS then? Is it
> used as a mask, is that why?

Yes. See arch/arm64/kernel/cpu-reset.S for example, where the macro is
used to clear all these flags in one go. SCTLR_EL1.A was never set in
the first place (it was cleared at boot time in __cpu_setup).

We could remove the A bit from SCTLR_ELx_FLAGS altogether, and it
wouldn't have any observable effect (or so I believe). This would be
another way to fix this problem.

> In terms of these patches, I think we should apply these, because they
> solve the problem and do the same thing.

OK.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...
Subject: [PATCH 3/3] arm: KVM: Allow unaligned accesses at HYP
From: Marc Zyngier
Date: 2017-06-06 18:08 UTC
To: linux-arm-kernel

We currently have the HSCTLR.A bit set, trapping unaligned accesses at
HYP, but we're not really prepared to deal with it.

Since the rest of the kernel is pretty happy about that, let's follow
its example and set HSCTLR.A to zero. Modern CPUs don't really care.

Cc: stable@vger.kernel.org
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm/kvm/init.S | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/arch/arm/kvm/init.S b/arch/arm/kvm/init.S
index 570ed4a9c261..5386528665b5 100644
--- a/arch/arm/kvm/init.S
+++ b/arch/arm/kvm/init.S
@@ -104,7 +104,6 @@ __do_hyp_init:
 	@ - Write permission implies XN: disabled
 	@ - Instruction cache: enabled
 	@ - Data/Unified cache: enabled
-	@ - Memory alignment checks: enabled
 	@ - MMU: enabled (this code must be run from an identity mapping)
 	mrc	p15, 4, r0, c1, c0, 0	@ HSCR
 	ldr	r2, =HSCTLR_MASK
@@ -112,8 +111,8 @@ __do_hyp_init:
 	mrc	p15, 0, r1, c1, c0, 0	@ SCTLR
 	ldr	r2, =(HSCTLR_EE | HSCTLR_FI | HSCTLR_I | HSCTLR_C)
 	and	r1, r1, r2
- ARM(	ldr	r2, =(HSCTLR_M | HSCTLR_A)	)
- THUMB(	ldr	r2, =(HSCTLR_M | HSCTLR_A | HSCTLR_TE)	)
+ ARM(	ldr	r2, =(HSCTLR_M)	)
+ THUMB(	ldr	r2, =(HSCTLR_M | HSCTLR_TE)	)
 	orr	r1, r1, r2
 	orr	r0, r0, r1
 	mcr	p15, 4, r0, c1, c0, 0	@ HSCR
-- 
2.11.0
Subject: [PATCH 3/3] arm: KVM: Allow unaligned accesses at HYP
From: Christoffer Dall
Date: 2017-06-06 20:09 UTC
To: linux-arm-kernel

On Tue, Jun 06, 2017 at 07:08:35PM +0100, Marc Zyngier wrote:
> We currently have the HSCTLR.A bit set, trapping unaligned accesses
> at HYP, but we're not really prepared to deal with it.
> 
> Since the rest of the kernel is pretty happy about that, let's follow
> its example and set HSCTLR.A to zero. Modern CPUs don't really care.
> 
> Cc: stable@vger.kernel.org
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Acked-by: Christoffer Dall <cdall@linaro.org>

> ---
>  arch/arm/kvm/init.S | 5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm/kvm/init.S b/arch/arm/kvm/init.S
> index 570ed4a9c261..5386528665b5 100644
> --- a/arch/arm/kvm/init.S
> +++ b/arch/arm/kvm/init.S
> @@ -104,7 +104,6 @@ __do_hyp_init:
>  	@ - Write permission implies XN: disabled
>  	@ - Instruction cache: enabled
>  	@ - Data/Unified cache: enabled
> -	@ - Memory alignment checks: enabled
>  	@ - MMU: enabled (this code must be run from an identity mapping)
>  	mrc	p15, 4, r0, c1, c0, 0	@ HSCR
>  	ldr	r2, =HSCTLR_MASK
> @@ -112,8 +111,8 @@ __do_hyp_init:
>  	mrc	p15, 0, r1, c1, c0, 0	@ SCTLR
>  	ldr	r2, =(HSCTLR_EE | HSCTLR_FI | HSCTLR_I | HSCTLR_C)
>  	and	r1, r1, r2
> - ARM(	ldr	r2, =(HSCTLR_M | HSCTLR_A)	)
> - THUMB(	ldr	r2, =(HSCTLR_M | HSCTLR_A | HSCTLR_TE)	)
> + ARM(	ldr	r2, =(HSCTLR_M)	)
> + THUMB(	ldr	r2, =(HSCTLR_M | HSCTLR_TE)	)
>  	orr	r1, r1, r2
>  	orr	r0, r0, r1
>  	mcr	p15, 4, r0, c1, c0, 0	@ HSCR
> -- 
> 2.11.0
> 