From mboxrd@z Thu Jan 1 00:00:00 1970
From: Christoffer Dall
Subject: Re: [PATCH 3/3] arm: KVM: Allow unaligned accesses at HYP
Date: Tue, 6 Jun 2017 22:09:45 +0200
Message-ID: <20170606200945.GR9464@cbox>
References: <20170606180835.14421-1-marc.zyngier@arm.com>
 <20170606180835.14421-4-marc.zyngier@arm.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: Christoffer Dall , Catalin Marinas , Mark Rutland , Alexander Graf ,
 kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org,
 kvm@vger.kernel.org
To: Marc Zyngier
Return-path: 
Received: from mail-wm0-f43.google.com ([74.125.82.43]:38586 "EHLO
 mail-wm0-f43.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with
 ESMTP id S1751404AbdFFUJ7 (ORCPT ); Tue, 6 Jun 2017 16:09:59 -0400
Received: by mail-wm0-f43.google.com with SMTP id n195so108142727wmg.1
 for ; Tue, 06 Jun 2017 13:09:58 -0700 (PDT)
Content-Disposition: inline
In-Reply-To: <20170606180835.14421-4-marc.zyngier@arm.com>
Sender: kvm-owner@vger.kernel.org
List-ID: 

On Tue, Jun 06, 2017 at 07:08:35PM +0100, Marc Zyngier wrote:
> We currently have the HSCTLR.A bit set, trapping unaligned accesses
> at HYP, but we're not really prepared to deal with it.
> 
> Since the rest of the kernel is pretty happy about that, let's follow
> its example and set HSCTLR.A to zero. Modern CPUs don't really care.
> 
> Cc: stable@vger.kernel.org
> Signed-off-by: Marc Zyngier

Acked-by: Christoffer Dall

> ---
>  arch/arm/kvm/init.S | 5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm/kvm/init.S b/arch/arm/kvm/init.S
> index 570ed4a9c261..5386528665b5 100644
> --- a/arch/arm/kvm/init.S
> +++ b/arch/arm/kvm/init.S
> @@ -104,7 +104,6 @@ __do_hyp_init:
>  	@ - Write permission implies XN: disabled
>  	@ - Instruction cache: enabled
>  	@ - Data/Unified cache: enabled
> -	@ - Memory alignment checks: enabled
>  	@ - MMU: enabled (this code must be run from an identity mapping)
>  	mrc	p15, 4, r0, c1, c0, 0	@ HSCR
>  	ldr	r2, =HSCTLR_MASK
> @@ -112,8 +111,8 @@ __do_hyp_init:
>  	mrc	p15, 0, r1, c1, c0, 0	@ SCTLR
>  	ldr	r2, =(HSCTLR_EE | HSCTLR_FI | HSCTLR_I | HSCTLR_C)
>  	and	r1, r1, r2
> - ARM(	ldr	r2, =(HSCTLR_M | HSCTLR_A)			)
> - THUMB(	ldr	r2, =(HSCTLR_M | HSCTLR_A | HSCTLR_TE)	)
> + ARM(	ldr	r2, =(HSCTLR_M)				)
> + THUMB(	ldr	r2, =(HSCTLR_M | HSCTLR_TE)		)
>  	orr	r1, r1, r2
>  	orr	r0, r0, r1
>  	mcr	p15, 4, r0, c1, c0, 0	@ HSCR
> -- 
> 2.11.0
> 