From: Avi Kivity <avi@redhat.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: anthony@codemonkey.ws, Jan Kiszka <jan.kiszka@siemens.com>,
	qemu-devel@nongnu.org, kvm@vger.kernel.org, laurent@vivier.eu
Subject: Re: [Qemu-devel] [RFC PATCH] fix select(2) race between main_loop_wait and qemu_aio_wait
Date: Mon, 05 Mar 2012 19:35:57 +0200
Message-ID: <4F54F97D.4080504@redhat.com>
In-Reply-To: <4F54E64C.4050506@redhat.com>

On 03/05/2012 06:14 PM, Paolo Bonzini wrote:
> On 05/03/2012 16:14, Avi Kivity wrote:
> > > Hmm, I don't think so.  It would need to protect execution of the
> > > iohandlers too, and pretty much everything can happen there including a
> > > nested loop.  Of course recursive mutexes exist, but it sounds like too
> > > big an axe.
> > The I/O handlers would still use the qemu mutex, no?  we'd just protect
> > the select() (taking the mutex from before releasing the global lock,
> > and reacquiring it afterwards).
>
> Yes, that could work, but it is _really_ ugly. 

Yes, it is...
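
To make the ugliness concrete, the protocol would look roughly like
this (a sketch only; select_mutex is an invented name, and
initialization and error handling are omitted):

    static QemuMutex select_mutex;    /* serializes the competing select()s */

    static void main_loop_wait_sketch(int nfds, fd_set *rfds, fd_set *wfds,
                                      fd_set *xfds, struct timeval *tv)
    {
        /* take the select mutex while still holding the global lock */
        qemu_mutex_lock(&select_mutex);
        qemu_mutex_unlock_iothread();

        select(nfds, rfds, wfds, xfds, tv);

        /* reacquire the global lock first, then let others at select() */
        qemu_mutex_lock_iothread();
        qemu_mutex_unlock(&select_mutex);

        /* iohandlers still run under the global lock, as today;
         * qemu_aio_wait() would take select_mutex around its select() too */
    }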

>  I still prefer this
> patch or fixing NBD.  At least both contain the hack in a single place.



> > > I could add a generation count updated by qemu_aio_wait(), and rerun the
> > > select() only if the generation count changes during its execution.
> > >
> > > Or we can call it an NBD bug.  I'm not against that, but it seemed to me
> > > that the problem is more general.
> > What about making sure all callers of qemu_aio_wait() run from
> > coroutines (or threads)?  Then they just ask the main thread to wake
> > them up, instead of dispatching completions themselves.
>
> That would open another Pandora's box.  The point of having a separate
> main loop is that only AIO can happen during qemu_aio_wait() or
> qemu_aio_flush().  In particular you don't want the monitor to process
> input while you're running another monitor command.
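
(For concreteness, the coroutine variant would be something like the
sketch below; WaitState, wait_cb, and do_flush_sketch are invented
names, and this is not a real patch:)

    typedef struct WaitState {
        Coroutine *co;
        bool done;
    } WaitState;

    static void wait_cb(void *opaque, int ret)
    {
        WaitState *ws = opaque;

        ws->done = true;
        qemu_coroutine_enter(ws->co, NULL);    /* main loop wakes the waiter */
    }

    static void coroutine_fn do_flush_sketch(BlockDriverState *bs)
    {
        WaitState ws = { .co = qemu_coroutine_self(), .done = false };

        bdrv_aio_flush(bs, wait_cb, &ws);
        while (!ws.done) {
            /* yield back to the main loop instead of spinning in
             * qemu_aio_wait(); the completion callback re-enters us */
            qemu_coroutine_yield();
        }
    }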

Hmm, yes, we're abusing the type of completion here as a kind of weird
locking.  It's conceptually broken, since an aio completion could trigger
anything.  Usually it involves only block format driver and device code,
but in theory it can affect the state of whoever is running qemu_aio_wait().
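
For reference, the generation-count idea quoted above might look
roughly like this (aio_generation is an invented name, and this is a
sketch, not a patch):

    static unsigned aio_generation;   /* qemu_aio_wait() would bump this
                                       * each time it dispatches completions */

    /* inside main_loop_wait(): */
    unsigned gen;
    int ret;

    do {
        gen = aio_generation;
        /* the fd sets are rebuilt on every pass, as main_loop_wait()
         * already does, since select() modifies them in place */
        ret = select(nfds, &rfds, &wfds, &xfds, &tv);
    } while (ret > 0 && gen != aio_generation);  /* rerun if aio ran meanwhile */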

-- 
error compiling committee.c: too many arguments to function


Thread overview: 12+ messages
2012-03-05  8:34 [Qemu-devel] [RFC PATCH] fix select(2) race between main_loop_wait and qemu_aio_wait Paolo Bonzini
2012-03-05  9:07 ` Jan Kiszka
2012-03-05  9:25   ` Paolo Bonzini
2012-03-05 14:24   ` Avi Kivity
2012-03-05 14:30     ` Paolo Bonzini
2012-03-05 15:14       ` Avi Kivity
2012-03-05 16:14         ` Paolo Bonzini
2012-03-05 17:35           ` Avi Kivity [this message]
2012-03-06  9:01             ` Paolo Bonzini
2012-03-05 14:30     ` Jan Kiszka
2012-03-05 17:39       ` Avi Kivity
2012-03-05 17:55         ` Jan Kiszka
