qemu-devel.nongnu.org archive mirror
From: Kevin Wolf <kwolf@redhat.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: qemu-devel@nongnu.org, stefanha@linux.vnet.ibm.com
Subject: Re: [Qemu-devel] [PATCH 2/4] block: unify flush implementations
Date: Fri, 14 Oct 2011 13:54:42 +0200	[thread overview]
Message-ID: <4E982302.1030100@redhat.com> (raw)
In-Reply-To: <4E981D6D.1070407@redhat.com>

On 14.10.2011 13:30, Paolo Bonzini wrote:
> On 10/14/2011 01:08 PM, Kevin Wolf wrote:
>> On 14.10.2011 10:41, Paolo Bonzini wrote:
>>> Add coroutine support for flush and apply the same emulation that
>>> we already do for read/write.  bdrv_aio_flush is simplified to always
>>> go through a coroutine.
>>>
>>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>>
>> To make the implementation more consistent with read/write operations,
>> wouldn't it make sense to provide a bdrv_co_flush() globally instead of
>> using the synchronous version as the preferred public interface?
> 
> I thought about it, but then it turned out that I would have
> 
>          bdrv_flush
>          -> create coroutine or just fast-path to bdrv_flush_co_entry
>             -> bdrv_flush_co_entry
>                -> driver
> 
> and
> 
>          bdrv_co_flush
>          -> bdrv_flush_co_entry
>             -> driver
> 
> In other words, the code would be exactly the same, save for an "if 
> (qemu_in_coroutine())".  The reason is that, unlike read/write, neither 
> flush nor discard takes a qiov.
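
Just to make sure we're talking about the same thing: your single-entry
variant, stubbed down to a self-contained sketch (fake types, real
coroutine creation elided, and fake_drv_flush is a made-up driver hook
for illustration), would be something like:

```c
#include <assert.h>
#include <stdbool.h>

/* Stubbed-down stand-in for the real BlockDriverState. */
typedef struct BlockDriverState {
    int (*drv_flush)(struct BlockDriverState *bs); /* stand-in for the driver hook */
    int flush_count;                               /* instrumentation for the sketch */
} BlockDriverState;

/* Stub: the real helper reports whether we already run in coroutine context. */
static bool qemu_in_coroutine(void)
{
    return false;
}

/* Coroutine entry point: performs the actual flush. */
static int bdrv_flush_co_entry(BlockDriverState *bs)
{
    return bs->drv_flush(bs);
}

/* Single public entry point: fast path when already in a coroutine,
 * otherwise the real code would create a coroutine and wait for it. */
static int bdrv_flush(BlockDriverState *bs)
{
    if (qemu_in_coroutine()) {
        return bdrv_flush_co_entry(bs);  /* fast path */
    }
    /* real code: qemu_coroutine_create() + qemu_coroutine_enter() + wait;
     * collapsed to a direct call in this sketch */
    return bdrv_flush_co_entry(bs);
}

/* Made-up driver callback used only to exercise the sketch. */
static int fake_drv_flush(BlockDriverState *bs)
{
    bs->flush_count++;
    return 0;
}
```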

What I was thinking of looks a bit different:

    bdrv_flush
    -> create coroutine or just fast-path to bdrv_flush_co_entry
       -> bdrv_flush_co_entry
          -> bdrv_co_flush

and

    bdrv_co_flush
    -> driver

And the reason for this is that bdrv_co_flush would be a function that
does little more than forward the call to the driver (just like most
bdrv_* functions do), with no emulation going on at all.
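In sketch form (same caveats as above: stubbed types, coroutine creation
elided, the driver hook is invented for illustration), the split I have
in mind is:

```c
#include <assert.h>

/* Stubbed-down stand-in for the real BlockDriverState. */
typedef struct BlockDriverState {
    int (*drv_flush)(struct BlockDriverState *bs); /* stand-in for the driver hook */
    int flush_count;                               /* instrumentation for the sketch */
} BlockDriverState;

/* Public coroutine interface: little more than forwarding to the
 * driver, no emulation logic at all. */
static int bdrv_co_flush(BlockDriverState *bs)
{
    return bs->drv_flush(bs);
}

/* Emulation-only glue: coroutine entry points take void*, so the ugly
 * untyped unpacking stays confined here. */
static int bdrv_flush_co_entry(void *opaque)
{
    BlockDriverState *bs = opaque;
    return bdrv_co_flush(bs);
}

/* Synchronous wrapper: the real code would run bdrv_flush_co_entry in a
 * coroutine and wait; the sketch just calls through. */
static int bdrv_flush(BlockDriverState *bs)
{
    return bdrv_flush_co_entry(bs);
}

/* Made-up driver callback used only to exercise the sketch. */
static int fake_drv_flush(BlockDriverState *bs)
{
    bs->flush_count++;
    return 0;
}
```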

> In general, I think that with Stefan's cleanup, specialized coroutine 
> versions have a much smaller benefit.  The code-reading benefit of 
> naming routines bdrv_co_* is already lost, for example, since bdrv_read 
> can yield when called from coroutine context.

Instead of taking a void* and working on an RwCo structure that is
really meant for emulation, bdrv_co_flush would take a BlockDriverState,
which improves readability.

The more complicated and ugly code would remain separate and be used
only for emulation. I think that would make it easier to understand the
common path without being distracted by emulation code.

> Let me show how this might go.  Right now you have
> 
>          bdrv_read/write
>          -> bdrv_rw_co
>             -> create qiov
>             -> create coroutine or just fast-path to bdrv_rw_co_entry
> 	      -> bdrv_rw_co_entry
>                   -> bdrv_co_do_readv/writev
>                      -> driver
> 
>          bdrv_co_readv/writev
>          -> bdrv_co_do_readv/writev
>             -> driver
> 
> But starting from here, you might just as well reorganize it like this:
> 
>          bdrv_read/write
>          -> bdrv_rw_co
>             -> create qiov
>             -> bdrv_readv/writev
> 
>          bdrv_readv/writev
>          -> create coroutine or just fast-path to bdrv_rw_co_entry
>             -> bdrv_rw_co_entry
>                -> bdrv_co_do_readv/writev
>                   -> driver
> 
> and just drop bdrv_co_readv, since it would just be hard-coding the 
> fast-path of bdrv_readv.
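
For reference, that reorganization, as a self-contained sketch (types
stubbed down, real coroutine creation elided, and the 8-byte "sector"
size is invented purely for the test):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Stubbed stand-ins for the real QEMU types. */
typedef struct {
    void *base;
    size_t len;
} QEMUIOVector;                 /* single-element fake */

typedef struct BlockDriverState {
    unsigned char data[64];     /* fake backing store, 8 bytes per "sector" */
} BlockDriverState;

static bool qemu_in_coroutine(void)
{
    return false;
}

/* What was bdrv_co_do_readv: the call into the driver. */
static int bdrv_co_do_readv(BlockDriverState *bs, int sector_num,
                            int nb_sectors, QEMUIOVector *qiov)
{
    (void)nb_sectors;
    memcpy(qiov->base, bs->data + sector_num * 8, qiov->len);
    return 0;
}

/* Unified vector entry point: fast path in coroutine context, otherwise
 * the real code would spawn a coroutine running bdrv_rw_co_entry. */
static int bdrv_readv(BlockDriverState *bs, int sector_num,
                      int nb_sectors, QEMUIOVector *qiov)
{
    if (qemu_in_coroutine()) {
        return bdrv_co_do_readv(bs, sector_num, nb_sectors, qiov); /* fast path */
    }
    /* real code: create + enter coroutine, then wait; sketch calls through */
    return bdrv_co_do_readv(bs, sector_num, nb_sectors, qiov);
}

/* Synchronous buffer wrapper: build the qiov, then delegate. */
static int bdrv_read(BlockDriverState *bs, int sector_num,
                     unsigned char *buf, int nb_sectors)
{
    QEMUIOVector qiov = { buf, (size_t)nb_sectors * 8 };
    return bdrv_readv(bs, sector_num, nb_sectors, &qiov);
}
```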

I guess it's all a matter of taste. Stefan, what do you think?

> Since some amount of synchronous I/O would likely always be there, for 
> example in qemu-img, I think this unification would make more sense than 
> providing two separate entrypoints for bdrv_co_flush and bdrv_flush.

Actually, I'm not so sure about qemu-img. I think we have thought of
scenarios where converting it to a coroutine-based version with a main
loop would be helpful (I can't remember the details, though).

Kevin


Thread overview: 17+ messages
2011-10-14  8:41 [Qemu-devel] [PATCH 0/4] coroutinization of flush and discard (split out of NBD series) Paolo Bonzini
2011-10-14  8:41 ` [Qemu-devel] [PATCH 1/4] block: rename bdrv_co_rw_bh Paolo Bonzini
2011-10-14  8:41 ` [Qemu-devel] [PATCH 2/4] block: unify flush implementations Paolo Bonzini
2011-10-14 11:08   ` Kevin Wolf
2011-10-14 11:30     ` Paolo Bonzini
2011-10-14 11:54       ` Kevin Wolf [this message]
2011-10-14 12:42         ` Paolo Bonzini
2011-10-14 14:02           ` Kevin Wolf
2011-10-14 14:04             ` Paolo Bonzini
2011-10-14 13:20         ` Stefan Hajnoczi
2011-10-14 13:47           ` Paolo Bonzini
2011-10-14  8:41 ` [Qemu-devel] [PATCH 3/4] block: drop redundant bdrv_flush implementation Paolo Bonzini
2011-10-14  8:41 ` [Qemu-devel] [PATCH 4/4] block: add bdrv_co_discard and bdrv_aio_discard support Paolo Bonzini
2011-10-14 14:23   ` Kevin Wolf
2011-10-14 14:24     ` Paolo Bonzini
2011-10-14 14:32       ` Kevin Wolf
2011-10-14 14:35         ` Paolo Bonzini
