From: Anthony Liguori <anthony@codemonkey.ws>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
	qemu-devel <qemu-devel@nongnu.org>,
	Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Subject: Re: [Qemu-devel] converting the block layer from coroutines to threads
Date: Fri, 24 Feb 2012 15:01:13 -0600
Message-ID: <4F47FA99.8050608@codemonkey.ws>
In-Reply-To: <4F47F685.5010000@redhat.com>

On 02/24/2012 02:43 PM, Paolo Bonzini wrote:
> On 02/24/2012 08:22 PM, Anthony Liguori wrote:
>> Virtio really wants each virtqueue to be processed in a separate
>> thread.  On a multicore system, there's considerable improvement doing
>> this.  I think that's where we ought to start.
>
> Well, that's where we ought to *get*.  Stefan's work is awesome but with
> the current feature set it would be hard to justify it upstream.
>
> To get it upstream we need to generalize it and make it work well with
> the block layer.  And vice versa make the block layer work well with
> threads, which is what I care about here.
>
>> We really just need the block layer to be re-entrant; we don't
>> actually need qcow2 or anything else that uses coroutines to use full
>> threads.
>
> Once you can issue I/O from two threads at the same time (such as
> streaming in the iothread and guest I/O in the virtqueue thread),
> everything already needs to be thread-safe.  It is a pretty short step
> from there to thread pools for everything.

If you start with a thread-safe API for submitting block requests, that could be 
implemented as:

bapi_aiocb *bapi_submit_readv(bapi_driver *d, struct iovec *iov, int iovcnt,
                              off_t offset)
{
    /* Package up the request and hand it off; the driver's own state is
     * never touched on this path, so it is safe from any thread. */
    bapi_request *req = make_bapi_request(BAPI_READ, iov, iovcnt, offset);

    return bapi_queue_add_req(d, req);
}

This would schedule the I/O thread to actually carry out the operation.  You 
could then start incrementally refactoring specific drivers to be re-entrant 
(like linux-aio).  But anything that already needs to use a thread pool to do 
its I/O probably wouldn't benefit from threading virtio.
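
For concreteness, bapi_queue_add_req() could be little more than a locked list 
append plus a wakeup of the I/O thread.  A rough sketch (all of the bapi_* 
names here are made up for illustration; bapi_driver_wake() is assumed to kick 
the I/O thread, e.g. via an eventfd):

#include <pthread.h>
#include <stddef.h>

typedef struct bapi_aiocb { int ret; } bapi_aiocb;   /* placeholder handle */
typedef struct bapi_driver bapi_driver;
void bapi_driver_wake(bapi_driver *d);               /* hypothetical */

typedef struct bapi_request bapi_request;
struct bapi_request {
    bapi_request *next;
    bapi_aiocb common;            /* handle returned to the caller */
    /* type, iov, iovcnt, offset, completion callback, ... */
};

static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
static bapi_request *queue_head, *queue_tail;

bapi_aiocb *bapi_queue_add_req(bapi_driver *d, bapi_request *req)
{
    /* Only the queue itself is shared state here, so submission is
     * safe from the iothread and virtqueue threads alike. */
    pthread_mutex_lock(&queue_lock);
    req->next = NULL;
    if (queue_tail) {
        queue_tail->next = req;
    } else {
        queue_head = req;
    }
    queue_tail = req;
    pthread_mutex_unlock(&queue_lock);

    bapi_driver_wake(d);          /* kick the I/O thread to dequeue and run it */
    return &req->common;
}

The I/O thread would then drain the queue under the same lock and complete the 
requests; the point is that the submission path has no per-driver state to 
protect.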

More importantly, the above would give you good performance from the start, 
instead of refactoring a bunch of code and hoping to eventually get there.

>
>> Or at least, as far as I know, we don't have any performance data to
>> suggest that we do.
>
> No, it's not about speed, though of course it only works if there is no
> performance dip.  It is just an enabling step.
>
> That said, my weekend officially begins now. :)

Enjoy!!

Regards,

Anthony Liguori

>
> Paolo
>
