From: Jens Axboe <axboe@kernel.dk>
To: Christoph Hellwig <hch@infradead.org>,
Rusty Russell <rusty@rustcorp.com.au>
Cc: Asias He <asias@redhat.com>, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 2/2] virtio_blk: blk-mq support
Date: Tue, 29 Oct 2013 15:34:01 -0600 [thread overview]
Message-ID: <527029C9.2030102@kernel.dk> (raw)
In-Reply-To: <20131028085206.GB31270@infradead.org>
On 10/28/2013 02:52 AM, Christoph Hellwig wrote:
> On Mon, Oct 28, 2013 at 01:17:54PM +1030, Rusty Russell wrote:
>> Let's pretend I'm stupid.
>>
>> We don't actually have multiple queues through to the host, but we're
>> pretending to, because it makes the block layer go faster?
>>
>> Do I want to know *why* it's faster? Or should I look the other way?
>
> You shouldn't. To how multiple queues benefit here I'd like to defer to
> Jens, given the single workqueue I don't really know where to look here.
The 4 was chosen to "have some number of multiple queues" and to be able
to exercise that part, no real performance testing was done by me after
the implementation to verify whether it was faster at 1, 2, 4, or
others. But it was useful for that! For merging, we can easily just make
it 1 since that's the most logical transformation. I can set some time
aside to play with multiple queues and see if we gain anything, but that
can be done post merge.
> The real benefit that unfortunately wasn't obvious from the description
> is that even with just a single queue the blk-multiqueue infrastructure
> will be a lot faster, because it is designed in a much more streaminline
> fashion and avoids lots of lock roundtrips both during submission itself
> and for submission vs complettion. Back when I tried to get virtio-blk
> to perform well on high-end flash (the work that Asias took over later)
> the queue_lock contention was the major issue in virtio-blk and this
> patch gets rid of that even with a single queue.
>
> A good example are the patches from Nick to move scsi drivers over to
> the infrastructure that only support a single queue. Even that gave
> over a 10 fold improvement over the old code.
>
> Unfortunately I do not have access to this kind of hardware at the
> moment, but I'd love to see if Asias or anyone at Red Hat could redo
> those old numbers.
I've got a variety of fast devices, so should be able to run that.
Asias, let me know what your position is, it'd be great to have
independent results.
--
Jens Axboe
next prev parent reply other threads:[~2013-10-29 21:34 UTC|newest]
Thread overview: 12+ messages / expand[flat|nested] mbox.gz Atom feed top
2013-10-25 13:04 [PATCH 0/2] block multiqueue support for virtio-blk Christoph Hellwig
2013-10-25 13:05 ` [PATCH 1/2] blk-mq: add blk_mq_stop_hw_queues Christoph Hellwig
2013-10-25 13:44 ` Jens Axboe
2013-10-25 13:05 ` [PATCH 2/2] virtio_blk: blk-mq support Christoph Hellwig
2013-10-25 13:51 ` Jens Axboe
2013-10-25 16:06 ` Asias He
2013-10-28 2:47 ` Rusty Russell
2013-10-28 8:52 ` Christoph Hellwig
2013-10-29 21:34 ` Jens Axboe [this message]
2013-10-30 8:51 ` Asias He
2013-11-01 16:45 ` Christoph Hellwig
2013-11-01 17:00 ` Jens Axboe
Reply instructions:
You may reply publicly to this message via plain-text email
using any one of the following methods:
* Save the following mbox file, import it into your mail client,
and reply-to-all from there: mbox
Avoid top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
* Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
git send-email \
--in-reply-to=527029C9.2030102@kernel.dk \
--to=axboe@kernel.dk \
--cc=asias@redhat.com \
--cc=hch@infradead.org \
--cc=linux-kernel@vger.kernel.org \
--cc=rusty@rustcorp.com.au \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
* If your mail client supports setting the In-Reply-To header
via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).