qemu-devel.nongnu.org archive mirror
From: Kevin Wolf <kwolf@redhat.com>
To: Alexey Kardashevskiy <aik@ozlabs.ru>
Cc: libvir-list@redhat.com, qemu-devel@nongnu.org,
	Max Reitz <mreitz@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>
Subject: Re: [Qemu-devel] [RFC PATCH] qcow2: Fix race in cache invalidation
Date: Thu, 25 Sep 2014 10:57:18 +0200
Message-ID: <20140925085718.GE4667@noname.redhat.com>
In-Reply-To: <5423D523.5070009@ozlabs.ru>

On 25.09.2014 at 10:41, Alexey Kardashevskiy wrote:
> On 09/24/2014 07:48 PM, Kevin Wolf wrote:
> > On 23.09.2014 at 10:47, Alexey Kardashevskiy wrote:
> >> On 09/19/2014 06:47 PM, Kevin Wolf wrote:
> >>> On 16.09.2014 at 14:59, Paolo Bonzini wrote:
> >>>> On 16/09/2014 14:52, Kevin Wolf wrote:
> >>>>> Yes, that's true. We can't fix this problem in qcow2, though, because
> >>>>> it's a more general one.  I think we must make sure that
> >>>>> bdrv_invalidate_cache() doesn't yield.
> >>>>>
> >>>>> Either by forbidding to run bdrv_invalidate_cache() in a coroutine and
> >>>>> moving the problem to the caller (where and why is it even called from a
> >>>>> coroutine?), or possibly by creating a new coroutine for the driver
> >>>>> callback and running that in a nested event loop that only handles
> >>>>> bdrv_invalidate_cache() callbacks, so that the NBD server doesn't get a
> >>>>> chance to process new requests in this thread.
> >>>>
> >>>> Incoming migration runs in a coroutine (the coroutine entry point is
> >>>> process_incoming_migration_co).  But everything after qemu_fclose() can
> >>>> probably be moved into a separate bottom half, so that it gets out of
> >>>> coroutine context.
> >>>
> >>> Alexey, you should probably rather try this (and add a bdrv_drain_all()
> >>> in bdrv_invalidate_cache) than messing around with qcow2 locks. This
> >>> isn't a problem that can be completely fixed in qcow2.
> >>
> >>
> >> Ok. Tried :) Not very successful though. The patch is below.
> >>
> >> Is that the correct bottom half? When I did it, I started getting crashes
> >> in various spots on accesses to s->l1_cache, which is NULL after qcow2_close.
> >> Normally the code would check s->l1_size and then use it, but they are out of sync.
> > 
> > No, that's not the place we were talking about.
> > 
> > What Paolo meant is that in process_incoming_migration_co(), you can
> > split out the final part that calls bdrv_invalidate_cache_all() into a
> > BH (you then need to move everything up to the end of the function into
> > the BH). His suggestion was to move everything below the qemu_fclose().
> 
> Ufff. I took it very literally. Ok. Let it be
> process_incoming_migration_co(). But there is something I am missing about
> BHs. Here is a patch:
> 
> 
> diff --git a/migration.c b/migration.c
> index 6db04a6..101043e 100644
> --- a/migration.c
> +++ b/migration.c
> @@ -88,6 +88,9 @@ void qemu_start_incoming_migration(const char *uri, Error **errp)
>      }
>  }
> 
> +static QEMUBH *migration_complete_bh;
> +static void process_incoming_migration_complete(void *opaque);
> +
>  static void process_incoming_migration_co(void *opaque)
>  {
>      QEMUFile *f = opaque;
> @@ -117,6 +120,16 @@ static void process_incoming_migration_co(void *opaque)
>      } else {
>          runstate_set(RUN_STATE_PAUSED);
>      }
> +
> +    migration_complete_bh = aio_bh_new(qemu_get_aio_context(),
> +                                       process_incoming_migration_complete,
> +                                       NULL);
> +}
> +
> +static void process_incoming_migration_complete(void *opaque)
> +{
> +    qemu_bh_delete(migration_complete_bh);
> +    migration_complete_bh = NULL;
>  }
> 
>  void process_incoming_migration(QEMUFile *f)
> 
> 
> 
> Then I run it under gdb and set breakpoint in
> process_incoming_migration_complete - and it never hits. Why is that? Thanks.

You need to call qemu_bh_schedule(). aio_bh_new() only creates the BH;
it never runs until it is actually scheduled.

Kevin
