From: Paolo Bonzini <pbonzini@redhat.com>
To: Kevin Wolf <kwolf@redhat.com>
Cc: Anthony Liguori <aliguori@us.ibm.com>,
Anand Avati <aavati@redhat.com>,
Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>,
Vijay Bellur <vbellur@redhat.com>,
Amar Tumballi <amarts@redhat.com>,
qemu-devel@nongnu.org, Blue Swirl <blauwirbel@gmail.com>,
bharata@linux.vnet.ibm.com
Subject: Re: [Qemu-devel] [PATCH v6 2/2] block: Support GlusterFS as a QEMU block backend
Date: Thu, 06 Sep 2012 12:18:41 +0200
Message-ID: <50487881.9030207@redhat.com>
In-Reply-To: <504875F6.9090302@redhat.com>
On 06/09/2012 12:07, Kevin Wolf wrote:
>> The AIOCB is already invalid at the time the callback is entered, so we
>> could release it before the call. However, not all implementations of
>> AIO are ready for that and I'm not really in the mood for large-scale
>> refactoring...
>
> By the way, what I'd really want to see in the end is to get rid of
> qemu_aio_flush() and replace it by .bdrv_drain() callbacks in each
> BlockDriver. The way we're doing it today is a layering violation.

That's quite difficult. Completion of an I/O operation can trigger
another I/O operation on another block device, and so on until we go
back to the first device (think of a hypothetical RAID-5 device).

> Doesn't change anything about this problem, though. So the options that
> we have are:
>
> 1. Delay the callback using a BH. Doing this in each driver is ugly.
> But is there actually more than one possible callback in today's
> coroutine world? I only see bdrv_co_io_em_complete(), which could
> reenter the coroutine from a BH.

Easy and safe, but it feels a bit like a timebomb. Also, I'm not
entirely sure _why_ the bottom half works. :)

> 2. Delay the callback by just calling it later when the cleanup has
> been completed and .io_flush() can return 0. You say that it's hard
> to implement for some drivers, except if the AIOCB are leaked until
> the end of functions like qcow2_create().

... which is what we do in posix-aio-compat.c; nobody has screamed so far.
It's not really hard, it just has to be assessed for each driver separately.
We can just do it in gluster and refactor it later.

> 3. Add a delay only later in functions like bdrv_drain_all() that assume
> that the request has completed. Are there more of this type? AIOCBs
> are leaked until a bdrv_drain_all() call. Does it work with draining
> specific BDSes instead of everything?
>
> Unless I forgot some important point, it almost looks like option 1 is
> the easiest and safest.

I agree, but I would feel better if I understood why it works.
(2) can be done easily in each driver (no ugliness) and refactored later.

Paolo