public inbox for kvm@vger.kernel.org
From: Alex Williamson <alex.williamson@redhat.com>
To: Li Zefan <lizefan@huawei.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>,
	Gleb Natapov <gleb@redhat.com>,
	kvm@vger.kernel.org, Davide Libenzi <davidel@xmailserver.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Cgroups <cgroups@vger.kernel.org>,
	Gregory Haskins <ghaskins@novell.com>,
	"Kirill A. Shutemov" <kirill@shutemov.name>
Subject: Re: [RFC][PATCH] kvm: fix a race when closing irq eventfd
Date: Sun, 17 Feb 2013 21:25:33 -0700
Message-ID: <1361161533.2801.24.camel@bling.home>
In-Reply-To: <5121A971.7000604@huawei.com>

On Mon, 2013-02-18 at 12:09 +0800, Li Zefan wrote:
> On 2013/2/18 12:02, Alex Williamson wrote:
> > On Mon, 2013-02-18 at 11:13 +0800, Li Zefan wrote:
> >> While trying to fix a race when closing cgroup eventfd, I took a look
> >> at how kvm deals with this problem, and I found it doesn't.
> >>
> >> I may be wrong, as I don't know the kvm code, so correct me if I'm wrong.
> >>
> >> 	/*
> >> 	 * Race-free decouple logic (ordering is critical)
> >> 	 */
> >> 	static void
> >> 	irqfd_shutdown(struct work_struct *work)
> >>
> >> I don't think it's race-free!
> >>
> >> 	static int
> >> 	irqfd_wakeup(wait_queue_t *wait, unsigned mode, int sync, void *key)
> >> 	{
> >> 	...
> >> 			 * We cannot race against the irqfd going away since the
> >> 			 * other side is required to acquire wqh->lock, which we hold
> >> 			 */
> >> 			if (irqfd_is_active(irqfd))
> >> 				irqfd_deactivate(irqfd);
> >> 	}
> >>
> >> In kvm_irqfd_deassign() and kvm_irqfd_release() where irqfds are freed,
> >> wqh->lock is not acquired!
> >>
> >> So here is the race:
> >>
> >> CPU0                                    CPU1
> >> -----------------------------------     ---------------------------------
> >> kvm_irqfd_release()
> >>   spin_lock(kvm->irqfds.lock);
> >>   ...
> >>   irqfd_deactivate(irqfd);
> >>     list_del_init(&irqfd->list);
> >>   spin_unlock(kvm->irqfds.lock);
> >>   ...
> >> 					close(eventfd)
> >> 					  irqfd_wakeup();
> > 
> > irqfd_wakeup is assumed to be called with wqh->lock held
> > 
> 
> I'm aware of this.
> 
> As I said, kvm_irqfd_deassign() and kvm_irqfd_release() are not acquiring
> wqh->lock.

They do when they call eventfd_ctx_remove_wait_queue.  The irqfd
remains active until that point, and the list_del_init prevents multiple
paths from calling irqfd_deactivate.

> >>     irqfd_shutdown();
> > 
> > eventfd_ctx_remove_wait_queue has to acquire wqh->lock to complete or
> > else irqfd_shutdown never makes it to the kfree.  So in your scenario
> > this cpu0 spins here until cpu1 completes.
> > 
> >>       remove_waitqueue(irqfd->wait);
> >>       kfree(irqfd);
> >> 					    spin_lock(kvm->irqfds.lock);
> >> 					      if (!list_empty(&irqfd->list))
> > 
> > We don't take this branch because we already did list_del_init above,
> > which makes irqfd->list empty.
> > 
> 
> It doesn't matter if the list is empty or not.

Note that this is not kvm->irqfds.items; we're testing whether the
individual irqfd has been detached from the list.

> The point is, irqfd has been kfreed, so the if statement is simply not safe!

It cannot be kfreed.  As noted above, the cpu0 path blocks trying to
acquire wqh->lock, which is already owned by cpu1.  The call to
eventfd_ctx_remove_wait_queue atomically removes the wait queue entry
once wqh->lock is acquired, so after that point it is safe to kfree the
irqfd.
Thanks,

Alex

> >> 						irqfd_deactivate(irqfd);
> >> 						  list_del_init(&irqfd->list);
> >> 					    spin_unlock(kvm->irqfds.lock);
> >>
> >> Look, we're accessing irqfd though it has already been freed!
> > 
> > Unless the irqfd_wakeup path isn't acquiring wqh->lock, it looks
> > race-free to me.  Thanks,
> > 
> > Alex
> > 
> > 
> 


Thread overview: 5+ messages
2013-02-18  3:13 [RFC][PATCH] kvm: fix a race when closing irq eventfd Li Zefan
2013-02-18  4:02   ` Alex Williamson
2013-02-18  4:09     ` Li Zefan
2013-02-18  4:25       ` Alex Williamson [this message]
2013-02-18  4:31         ` Li Zefan
