From: Paolo Bonzini <pbonzini@redhat.com>
To: "Michael S. Tsirkin" <mst@redhat.com>, Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>,
linux-kernel@vger.kernel.org,
virtualization@lists.linux-foundation.org
Subject: Re: [PATCH] virtio_blk: merge S/G list entries by default
Date: Mon, 08 Sep 2014 18:21:03 +0200
Message-ID: <540DD76F.3030106@redhat.com>
In-Reply-To: <20140907114153.GB26569@redhat.com>
On 07/09/2014 13:41, Michael S. Tsirkin wrote:
> On Sat, Sep 06, 2014 at 04:09:54PM -0700, Christoph Hellwig wrote:
>> Most virtio setups have a fairly limited number of ring entries available.
>
> Seems a bit vague: QEMU at least has pretty large queues.
> Which hypervisor do you have in mind?
> This could be a gain everywhere if you manage to make descriptors
> completely linear, so they fit in a single s/g.
> ATM __virtblk_add_req always adds an s/g for the header:
> is there a chance linux can pre-allocate a bit of memory
> in front of the buffer to stick the header in?
Nope, the buffer usually comes directly from the page cache and will be
page aligned.
Paolo
>> Enable S/G entry merging by default to fit into fewer of them. This restores
>> the behavior at time of the virtio-blk blk-mq conversion, which was changed
>> by commit "block: add queue flag for disabling SG merging" which made the
>> behavior optional, but didn't update the existing drivers to keep their
>> previous behavior.
>>
>> Signed-off-by: Christoph Hellwig <hch@lst.de>
>
> OK so this is an optimization patch right?
> What kind of performance gain is observed with it?
>
>> ---
>> drivers/block/virtio_blk.c | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
>> index 0a58140..311b857 100644
>> --- a/drivers/block/virtio_blk.c
>> +++ b/drivers/block/virtio_blk.c
>> @@ -634,7 +634,7 @@ static int virtblk_probe(struct virtio_device *vdev)
>> vblk->tag_set.ops = &virtio_mq_ops;
>> vblk->tag_set.queue_depth = virtblk_queue_depth;
>> vblk->tag_set.numa_node = NUMA_NO_NODE;
>> - vblk->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
>> + vblk->tag_set.flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_SG_MERGE;
>> vblk->tag_set.cmd_size =
>> sizeof(struct virtblk_req) +
>> sizeof(struct scatterlist) * sg_elems;
>> --
>> 1.9.1
Thread overview: 11+ messages
2014-09-06 23:09 [PATCH] virtio_blk: merge S/G list entries by default Christoph Hellwig
2014-09-07 10:18 ` Paolo Bonzini
2014-09-07 10:32 ` Ming Lei
2014-09-07 11:41 ` Michael S. Tsirkin
2014-09-07 18:47 ` Christoph Hellwig
2014-09-08 8:18 ` Michael S. Tsirkin
2014-09-08 20:15 ` Christoph Hellwig
2014-09-10 16:43 ` Michael S. Tsirkin
2014-09-08 16:21 ` Paolo Bonzini [this message]
2014-09-10 15:18 ` Paolo Bonzini
2014-09-10 15:21 ` Ming Lei