From: Avi Kivity <avi@redhat.com>
To: Jan Kiszka <jan.kiszka@siemens.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>
Subject: Re: [RFC] KVM: Fix simultaneous NMIs
Date: Thu, 15 Sep 2011 20:02:32 +0300
Message-ID: <4E722FA8.2030006@redhat.com>
In-Reply-To: <4E722140.4070702@siemens.com>

On 09/15/2011 07:01 PM, Jan Kiszka wrote:
> On 2011-09-15 16:45, Avi Kivity wrote:
> >  If simultaneous NMIs happen, we're supposed to queue the second
> >  and next (collapsing them), but currently we sometimes collapse
> >  the second into the first.
>
> Can you describe the race in a bit more detail here ("sometimes" sounds
> like "I don't know when" :) )?

In this case it was "I'm in a hurry".

> >
> >   void kvm_inject_nmi(struct kvm_vcpu *vcpu)
> >   {
> >  +	atomic_inc(&vcpu->arch.nmi_pending);
> >   	kvm_make_request(KVM_REQ_EVENT, vcpu);
> >  -	vcpu->arch.nmi_pending = 1;
>
> Does the reordering matter?

I think so.  Suppose the vcpu enters just after kvm_make_request(): it 
sees KVM_REQ_EVENT and clears it, but doesn't see nmi_pending because it 
wasn't set yet.  Then comes a kick; the vcpu is reentered with 
nmi_pending set but KVM_REQ_EVENT clear, sails through the event check, 
and enters the guest.  The NMI is delayed until the next KVM_REQ_EVENT.

> Do we need barriers?

Yes.
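
To make the ordering concrete, here is a minimal sketch of the producer 
side.  The smp_mb__after_atomic_inc() barrier is just illustrative; the 
RFC doesn't spell out which barrier the final patch would use:

	/*
	 * Sketch only, not the posted patch: guarantee that a vcpu which
	 * observes KVM_REQ_EVENT also observes the incremented nmi_pending.
	 */
	void kvm_inject_nmi(struct kvm_vcpu *vcpu)
	{
		atomic_inc(&vcpu->arch.nmi_pending);
		smp_mb__after_atomic_inc();	/* inc visible before the request bit */
		kvm_make_request(KVM_REQ_EVENT, vcpu);
	}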

>
> >  @@ -5570,9 +5570,9 @@ static void inject_pending_event(struct kvm_vcpu *vcpu)
> >   	}
> >
> >   	/* try to inject new event if pending */
> >  -	if (vcpu->arch.nmi_pending) {
> >  +	if (atomic_read(&vcpu->arch.nmi_pending)) {
> >   		if (kvm_x86_ops->nmi_allowed(vcpu)) {
> >  -			vcpu->arch.nmi_pending = false;
> >  +			atomic_dec(&vcpu->arch.nmi_pending);
>
> Here we lost NMIs in the past by overwriting nmi_pending while another
> one was already queued, right?

One place, yes.  The other is kvm_inject_nmi() - if the first NMI hasn't 
been picked up by the vcpu by the time the second NMI arrives, we lose 
the second one.
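
Concretely, with the old plain store two back-to-back injections leave 
no trace of the second NMI, while the counter records both (illustrative 
fragment, not code from the patch):

	vcpu->arch.nmi_pending = 1;		/* NMI #1 */
	vcpu->arch.nmi_pending = 1;		/* NMI #2 - indistinguishable, lost */

	atomic_inc(&vcpu->arch.nmi_pending);	/* NMI #1 -> 1 */
	atomic_inc(&vcpu->arch.nmi_pending);	/* NMI #2 -> 2, still queued */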

> >   	if (kvm_check_request(KVM_REQ_EVENT, vcpu) || req_int_win) {
> >   		inject_pending_event(vcpu);
> >
> >   		/* enable NMI/IRQ window open exits if needed */
> >  -		if (nmi_pending)
> >  +		if (atomic_read(&vcpu->arch.nmi_pending)
> >  +		&&  nmi_in_progress(vcpu))
>
> Is nmi_pending && !nmi_in_progress possible at all?

Yes, due to NMI-blocked-by-STI.  A really touchy area.

> Is it rather a BUG
> condition?

No.

> If not, what will happen next?

The NMI window will open and we'll inject the NMI.  But I think we have 
a bug here - we should only kvm_collapse_pending_nmis() if an NMI 
handler was indeed running, yet we do it unconditionally.
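
Roughly, the guarded version would look like this; nmi_in_progress() and 
kvm_collapse_pending_nmis() are the helpers from the diff, but the call 
site name is made up for illustration:

	static void kvm_nmi_iret_completed(struct kvm_vcpu *vcpu)
	{
		/* Only fold queued NMIs if an NMI handler was in fact running. */
		if (nmi_in_progress(vcpu))
			kvm_collapse_pending_nmis(vcpu);
	}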

> >
> >  +static inline void kvm_collapse_pending_nmis(struct kvm_vcpu *vcpu)
> >  +{
> >  +	/* Collapse all NMIs queued while an NMI handler was running to one */
> >  +	if (atomic_read(&vcpu->arch.nmi_pending))
> >  +		atomic_set(&vcpu->arch.nmi_pending, 1);
>
> Is it OK that NMIs injected after the collapse will increment this
> to > 1 again? Or is that impossible?
>

It's possible and okay.  We're now completing execution of IRET.  Doing 
atomic_set() after atomic_inc() means the NMI happened before IRET 
completed, and vice versa.  Since these events are asynchronous, we're 
free to choose one or the other (a self-IPI-NMI just before the IRET 
must be swallowed, and a self-IPI-NMI just after the IRET would only be 
executed after the next time around the handler).
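
Spelled out as a timeline (labels mine, just to make the two orderings 
explicit):

	vcpu, completing IRET			sender, kvm_inject_nmi()
	---------------------			------------------------
	atomic_set(&nmi_pending, 1);		atomic_inc(&nmi_pending);

If the inc is ordered before the set, the new NMI counts as having 
arrived while the handler was still running and is folded into the 
single queued NMI; if it lands after, nmi_pending goes above 1 and the 
extra NMI is delivered after the next pass through the handler.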


-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.


Thread overview: 14+ messages
2011-09-15 14:45 [RFC] KVM: Fix simultaneous NMIs Avi Kivity
2011-09-15 16:01 ` Jan Kiszka
2011-09-15 17:02   ` Avi Kivity [this message]
2011-09-15 17:25     ` Jan Kiszka
2011-09-15 17:48       ` Avi Kivity
2011-09-19 13:54         ` Marcelo Tosatti
2011-09-19 14:30           ` Avi Kivity
2011-09-19 14:54             ` Marcelo Tosatti
2011-09-19 15:09               ` Avi Kivity
2011-09-19 15:12                 ` Avi Kivity
2011-09-19 15:22                 ` Marcelo Tosatti
2011-09-19 15:37                   ` Avi Kivity
2011-09-19 15:57                     ` Marcelo Tosatti
2011-09-20  8:40                       ` Avi Kivity
