From mboxrd@z Thu Jan 1 00:00:00 1970
From: Alexander Graf
Subject: Re: [PATCH 3/6] KVM: PPC: booke: Check for MSR[WE] in prepare_to_enter
Date: Mon, 14 Nov 2011 14:13:51 +0100
Message-ID: <4EC1140F.3000307@suse.de>
References: <20111109002325.GC6132@schlenkerla.am.freescale.net>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: kvm@vger.kernel.org, kvm-ppc@vger.kernel.org
To: Scott Wood
Return-path:
Received: from cantor2.suse.de ([195.135.220.15]:59934 "EHLO mx2.suse.de"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1754549Ab1KNNMh
	(ORCPT ); Mon, 14 Nov 2011 08:12:37 -0500
In-Reply-To: <20111109002325.GC6132@schlenkerla.am.freescale.net>
Sender: kvm-owner@vger.kernel.org
List-ID:

On 11/09/2011 01:23 AM, Scott Wood wrote:
> This prevents us from inappropriately blocking in a KVM_SET_REGS
> ioctl -- the MSR[WE] will take effect when the guest is next entered.
>
> It also causes SRR1[WE] to be set when we enter the guest's interrupt
> handler, which is what e500 hardware is documented to do.
>
> Signed-off-by: Scott Wood
> ---
>  arch/powerpc/kvm/booke.c |   28 ++++++++++++++++++----------
>  1 files changed, 18 insertions(+), 10 deletions(-)
>
> diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
> index feaefc4..557f028 100644
> --- a/arch/powerpc/kvm/booke.c
> +++ b/arch/powerpc/kvm/booke.c
> @@ -124,12 +124,6 @@ void kvmppc_set_msr(struct kvm_vcpu *vcpu, u32 new_msr)
>  	vcpu->arch.shared->msr = new_msr;
>
>  	kvmppc_mmu_msr_notify(vcpu, old_msr);
> -
> -	if (vcpu->arch.shared->msr & MSR_WE) {
> -		kvm_vcpu_block(vcpu);
> -		kvmppc_set_exit_type(vcpu, EMULATED_MTMSRWE_EXITS);
> -	};
> -
>  	kvmppc_vcpu_sync_spe(vcpu);
>  }
>
> @@ -288,15 +282,12 @@ static int kvmppc_booke_irqprio_deliver(struct kvm_vcpu *vcpu,
>  	return allowed;
>  }
>
> -/* Check pending exceptions and deliver one, if possible. */
> -void kvmppc_core_prepare_to_enter(struct kvm_vcpu *vcpu)
> +static void kvmppc_core_check_exceptions(struct kvm_vcpu *vcpu)
>  {
>  	unsigned long *pending = &vcpu->arch.pending_exceptions;
>  	unsigned long old_pending = vcpu->arch.pending_exceptions;
>  	unsigned int priority;
>
> -	WARN_ON_ONCE(!irqs_disabled());
> -
>  	priority = __ffs(*pending);
>  	while (priority <= BOOKE_IRQPRIO_MAX) {
>  		if (kvmppc_booke_irqprio_deliver(vcpu, priority))
> @@ -314,6 +305,23 @@ void kvmppc_core_prepare_to_enter(struct kvm_vcpu *vcpu)
>  	vcpu->arch.shared->int_pending = 0;
>  }
>
> +/* Check pending exceptions and deliver one, if possible. */
> +void kvmppc_core_prepare_to_enter(struct kvm_vcpu *vcpu)
> +{
> +	WARN_ON_ONCE(!irqs_disabled());
> +
> +	kvmppc_core_check_exceptions(vcpu);
> +
> +	if (vcpu->arch.shared->msr & MSR_WE) {
> +		local_irq_enable();
> +		kvm_vcpu_block(vcpu);
> +		local_irq_disable();

Hrm. This specific irq enable/disable part isn't pretty, but I can't think
of a cleaner way either -- unless you move it out of the prepare function,
since I don't see a way it could race with an interrupt.

> +
> +		kvmppc_set_exit_type(vcpu, EMULATED_MTMSRWE_EXITS);
> +		kvmppc_core_check_exceptions(vcpu);

Shouldn't

    if (msr & MSR_WE) {
        ...
    }

    kvmppc_core_check_exceptions(vcpu);

work just as well?