From mboxrd@z Thu Jan 1 00:00:00 1970
From: marc.zyngier@arm.com (Marc Zyngier)
Date: Thu, 29 Sep 2016 16:36:25 +0100
Subject: [PATCH] arm64: KVM: Take S1 walks into account when determining S2 write faults
In-Reply-To: <1475149021-13288-1-git-send-email-will.deacon@arm.com>
References: <1475149021-13288-1-git-send-email-will.deacon@arm.com>
Message-ID: <20160929163625.69980b5d@arm.com>
To: linux-arm-kernel@lists.infradead.org
List-Id: linux-arm-kernel.lists.infradead.org

On Thu, 29 Sep 2016 12:37:01 +0100
Will Deacon wrote:

> The WnR bit in the HSR/ESR_EL2 indicates whether a data abort was
> generated by a read or a write instruction. For stage 2 data aborts
> generated by a stage 1 translation table walk (i.e. the actual page
> table access faults at EL2), the WnR bit therefore reports whether the
> instruction generating the walk was a load or a store, *not* whether the
> page table walker was reading or writing the entry.
>
> For page tables marked as read-only at stage 2 (e.g. due to KSM merging
> them with the tables from another guest), this could result in livelock,
> where a page table walk generated by a load instruction attempts to
> set the access flag in the stage 1 descriptor, but fails to trigger
> CoW in the host since only a read fault is reported.
>
> This patch modifies the arm64 kvm_vcpu_dabt_iswrite function to
> take into account stage 2 faults in stage 1 walks. Since DBM cannot be
> disabled at EL2 for CPUs that implement it, we assume that these faults
> are always caused by writes, avoiding the livelock situation at the
> expense of occasional, spurious CoWs.
>
> We could, in theory, do a bit better by checking the guest TCR
> configuration and inspecting the page table to see why the PTE faulted.
> However, I doubt this is measurable in practice, and the threat of
> livelock is real.
>
> Cc: Marc Zyngier
> Cc: Christoffer Dall
> Cc: Julien Grall
> Signed-off-by: Will Deacon
> ---
>  arch/arm64/include/asm/kvm_emulate.h | 11 ++++++-----
>  1 file changed, 6 insertions(+), 5 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
> index 4cdeae3b17c6..948a9a8a9297 100644
> --- a/arch/arm64/include/asm/kvm_emulate.h
> +++ b/arch/arm64/include/asm/kvm_emulate.h
> @@ -167,11 +167,6 @@ static inline bool kvm_vcpu_dabt_isvalid(const struct kvm_vcpu *vcpu)
>  	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_ISV);
>  }
>  
> -static inline bool kvm_vcpu_dabt_iswrite(const struct kvm_vcpu *vcpu)
> -{
> -	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_WNR);
> -}
> -
>  static inline bool kvm_vcpu_dabt_issext(const struct kvm_vcpu *vcpu)
>  {
>  	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_SSE);
> @@ -192,6 +187,12 @@ static inline bool kvm_vcpu_dabt_iss1tw(const struct kvm_vcpu *vcpu)
>  	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_S1PTW);
>  }
>  
> +static inline bool kvm_vcpu_dabt_iswrite(const struct kvm_vcpu *vcpu)
> +{
> +	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_WNR) ||
> +		kvm_vcpu_dabt_iss1tw(vcpu); /* AF/DBM update */
> +}
> +
>  static inline bool kvm_vcpu_dabt_is_cm(const struct kvm_vcpu *vcpu)
> {
>  	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_CM);

Reviewed-by: Marc Zyngier

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny.
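
[Editorial note appended to this archived message: the following is a
minimal, stand-alone C sketch of the decode the patch above ends up
with. It is not kernel code; it only assumes the ARMv8 ESR_ELx ISS bit
positions for data aborts (WnR at bit 6, S1PTW at bit 7), matching the
kernel's ESR_ELx_WNR and ESR_ELx_S1PTW definitions, and the hypothetical
helper name dabt_is_write() is used purely for illustration.]

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative masks; the kernel defines the real ones in asm/esr.h. */
#define ESR_ELx_WNR	(1UL << 6)	/* Write-not-Read */
#define ESR_ELx_S1PTW	(1UL << 7)	/* S2 fault taken on a S1 table walk */

/*
 * Mirror of the reworked kvm_vcpu_dabt_iswrite() check: any stage 2 fault
 * taken on a stage 1 walk is treated as a write, since the walker may need
 * to update the AF/DBM bits in the stage 1 descriptor.
 */
static bool dabt_is_write(uint64_t esr)
{
	return (esr & ESR_ELx_WNR) || (esr & ESR_ELx_S1PTW);
}

int main(void)
{
	/* S1 walk fault from a load instruction: now reported as a write. */
	printf("%d\n", (int)dabt_is_write(ESR_ELx_S1PTW));	/* prints 1 */
	/* Ordinary read fault with no walk involvement: still a read. */
	printf("%d\n", (int)dabt_is_write(0));			/* prints 0 */
	return 0;
}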