qemu-devel.nongnu.org archive mirror
From: Alexey Kardashevskiy <aik@ozlabs.ru>
To: Kevin Wolf <kwolf@redhat.com>
Cc: "libvir-list @ redhat . com" <libvir-list@redhat.com>,
	qemu-devel@nongnu.org, Max Reitz <mreitz@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	"Dr . David Alan Gilbert" <dgilbert@redhat.com>
Subject: Re: [Qemu-devel] [RFC PATCH] qcow2: Fix race in cache invalidation
Date: Thu, 25 Sep 2014 19:55:18 +1000
Message-ID: <5423E686.20109@ozlabs.ru>
In-Reply-To: <20140925085718.GE4667@noname.redhat.com>

On 09/25/2014 06:57 PM, Kevin Wolf wrote:
> On 25.09.2014 at 10:41, Alexey Kardashevskiy wrote:
>> On 09/24/2014 07:48 PM, Kevin Wolf wrote:
>>> On 23.09.2014 at 10:47, Alexey Kardashevskiy wrote:
>>>> On 09/19/2014 06:47 PM, Kevin Wolf wrote:
>>>>> On 16.09.2014 at 14:59, Paolo Bonzini wrote:
>>>>>> On 16/09/2014 14:52, Kevin Wolf wrote:
>>>>>>> Yes, that's true. We can't fix this problem in qcow2, though, because
>>>>>>> it's a more general one.  I think we must make sure that
>>>>>>> bdrv_invalidate_cache() doesn't yield.
>>>>>>>
>>>>>>> Either by forbidding to run bdrv_invalidate_cache() in a coroutine and
>>>>>>> moving the problem to the caller (where and why is it even called from a
>>>>>>> coroutine?), or possibly by creating a new coroutine for the driver
>>>>>>> callback and running that in a nested event loop that only handles
>>>>>>> bdrv_invalidate_cache() callbacks, so that the NBD server doesn't get a
>>>>>>> chance to process new requests in this thread.
>>>>>>
>>>>>> Incoming migration runs in a coroutine (the coroutine entry point is
>>>>>> process_incoming_migration_co).  But everything after qemu_fclose() can
>>>>>> probably be moved into a separate bottom half, so that it gets out of
>>>>>> coroutine context.
>>>>>
>>>>> Alexey, you should probably rather try this (and add a bdrv_drain_all()
>>>>> in bdrv_invalidate_cache) than messing around with qcow2 locks. This
>>>>> isn't a problem that can be completely fixed in qcow2.
>>>>
>>>>
>>>> Ok. Tried :) Not very successful though. The patch is below.
>>>>
>>>> Is that the correct bottom half? When I did it, I started getting crashes
>>>> in various spots on accesses to s->l1_cache, which is NULL after qcow2_close.
>>>> Normally the code would check s->l1_size and then use it, but they are out of sync.
>>>
>>> No, that's not the place we were talking about.
>>>
>>> What Paolo meant is that in process_incoming_migration_co(), you can
>>> split out the final part that calls bdrv_invalidate_cache_all() into a
>>> BH (you need to move everything until the end of the function into the
>>> BH then). His suggestion was to move everything below the qemu_fclose().
>>
>> Ufff. I took it very literally. Ok. Let it be
>> process_incoming_migration_co(). But there is something I am missing about
>> BHs. Here is a patch:
>>
> You need to call qemu_bh_schedule().

Right. Cool. So is the patch below what was suggested? I am double-checking as
it does not solve the original issue - the bottom half is called first and then
nbd_trip() crashes in qcow2_co_flush_to_os().
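
(For context, the generic bottom-half pattern being suggested looks roughly
like the sketch below. This is only a minimal illustration: the example_*
names are made up, and qemu/main-loop.h is assumed to be the header that
declares qemu_bh_new(); the real change is in the patch that follows.)

#include "qemu/main-loop.h"

static QEMUBH *example_bh;

/* Runs later from the main loop, outside of coroutine context, so code
 * that must not be entered from a coroutine is safe to call here. */
static void example_bh_cb(void *opaque)
{
    qemu_bh_delete(example_bh);
    example_bh = NULL;
}

static void coroutine_fn example_co(void *opaque)
{
    /* ... coroutine work that may yield ... */

    /* Defer the rest to a bottom half; creating the BH alone does
     * nothing, it has to be scheduled explicitly. */
    example_bh = qemu_bh_new(example_bh_cb, NULL);
    qemu_bh_schedule(example_bh);
}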



diff --git a/block.c b/block.c
index d06dd51..1e6dfd1 100644
--- a/block.c
+++ b/block.c
@@ -5037,20 +5037,22 @@ void bdrv_invalidate_cache(BlockDriverState *bs, Error **errp)
     if (local_err) {
         error_propagate(errp, local_err);
         return;
     }

     ret = refresh_total_sectors(bs, bs->total_sectors);
     if (ret < 0) {
         error_setg_errno(errp, -ret, "Could not refresh total sector count");
         return;
     }
+
+    bdrv_drain_all();
 }

 void bdrv_invalidate_cache_all(Error **errp)
 {
     BlockDriverState *bs;
     Error *local_err = NULL;

     QTAILQ_FOREACH(bs, &bdrv_states, device_list) {
         AioContext *aio_context = bdrv_get_aio_context(bs);

diff --git a/migration.c b/migration.c
index 6db04a6..dc026a9 100644
--- a/migration.c
+++ b/migration.c
@@ -81,49 +81,64 @@ void qemu_start_incoming_migration(const char *uri, Error **errp)
     else if (strstart(uri, "unix:", &p))
         unix_start_incoming_migration(p, errp);
     else if (strstart(uri, "fd:", &p))
         fd_start_incoming_migration(p, errp);
 #endif
     else {
         error_setg(errp, "unknown migration protocol: %s", uri);
     }
 }

+static QEMUBH *migration_complete_bh;
+static void process_incoming_migration_complete(void *opaque);
+
 static void process_incoming_migration_co(void *opaque)
 {
     QEMUFile *f = opaque;
-    Error *local_err = NULL;
     int ret;

     ret = qemu_loadvm_state(f);
     qemu_fclose(f);
     free_xbzrle_decoded_buf();
     if (ret < 0) {
         error_report("load of migration failed: %s", strerror(-ret));
         exit(EXIT_FAILURE);
     }
     qemu_announce_self();

     bdrv_clear_incoming_migration_all();
+
+    migration_complete_bh = aio_bh_new(qemu_get_aio_context(),
+                                       process_incoming_migration_complete,
+                                       NULL);
+    qemu_bh_schedule(migration_complete_bh);
+}
+
+static void process_incoming_migration_complete(void *opaque)
+{
+    Error *local_err = NULL;
+
     /* Make sure all file formats flush their mutable metadata */
     bdrv_invalidate_cache_all(&local_err);
     if (local_err) {
         qerror_report_err(local_err);
         error_free(local_err);
         exit(EXIT_FAILURE);
     }

     if (autostart) {
         vm_start();
     } else {
         runstate_set(RUN_STATE_PAUSED);
     }
+    qemu_bh_delete(migration_complete_bh);
+    migration_complete_bh = NULL;
 }




-- 
Alexey


Thread overview: 31+ messages
2014-09-15 10:50 [Qemu-devel] migration: qemu-coroutine-lock.c:141: qemu_co_mutex_unlock: Assertion `mutex->locked == 1' failed Alexey Kardashevskiy
2014-09-16 12:02 ` Alexey Kardashevskiy
2014-09-16 12:10   ` Paolo Bonzini
2014-09-16 12:34     ` Kevin Wolf
2014-09-16 12:35       ` Paolo Bonzini
2014-09-16 12:52         ` Kevin Wolf
2014-09-16 12:59           ` Paolo Bonzini
2014-09-19  8:47             ` Kevin Wolf
2014-09-23  8:47               ` [Qemu-devel] [RFC PATCH] qcow2: Fix race in cache invalidation Alexey Kardashevskiy
2014-09-24  7:30                 ` Alexey Kardashevskiy
2014-09-24  9:48                 ` Kevin Wolf
2014-09-25  8:41                   ` Alexey Kardashevskiy
2014-09-25  8:57                     ` Kevin Wolf
2014-09-25  9:55                       ` Alexey Kardashevskiy [this message]
2014-09-25 10:20                         ` Kevin Wolf
2014-09-25 12:29                           ` Alexey Kardashevskiy
2014-09-25 12:39                             ` Kevin Wolf
2014-09-25 14:05                               ` Alexey Kardashevskiy
2014-09-28 11:14                                 ` Alexey Kardashevskiy
2014-09-17  6:46       ` [Qemu-devel] migration: qemu-coroutine-lock.c:141: qemu_co_mutex_unlock: Assertion `mutex->locked == 1' failed Alexey Kardashevskiy
2014-09-16 14:52     ` Alexey Kardashevskiy
2014-09-17  9:06     ` Stefan Hajnoczi
2014-09-17  9:25       ` Paolo Bonzini
2014-09-17 13:44         ` Alexey Kardashevskiy
2014-09-17 15:07           ` Stefan Hajnoczi
2014-09-18  3:26             ` Alexey Kardashevskiy
2014-09-18  9:56               ` Paolo Bonzini
2014-09-19  8:23                 ` Alexey Kardashevskiy
2014-09-17 15:04         ` Stefan Hajnoczi
2014-09-17 15:17           ` Eric Blake
2014-09-17 15:53           ` Paolo Bonzini
