From: Mike Snitzer <snitzer@redhat.com>
To: Bart Van Assche <Bart.VanAssche@wdc.com>
Cc: "axboe@kernel.dk" <axboe@kernel.dk>,
"ming.lei@redhat.com" <ming.lei@redhat.com>,
"dm-devel@redhat.com" <dm-devel@redhat.com>,
"hch@infradead.org" <hch@infradead.org>,
"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
"axboe@fb.com" <axboe@fb.com>,
"martin.petersen@oracle.com" <martin.petersen@oracle.com>
Subject: Re: [PATCH V3 0/5] dm-rq: improve sequential I/O performance
Date: Thu, 11 Jan 2018 17:58:30 -0500 [thread overview]
Message-ID: <20180111225830.GE31944@redhat.com> (raw)
In-Reply-To: <1515710256.2752.72.camel@sandisk.com>
On Thu, Jan 11 2018 at 5:37pm -0500,
Bart Van Assche <Bart.VanAssche@wdc.com> wrote:
> On Thu, 2018-01-11 at 17:07 -0500, Mike Snitzer wrote:
> > Bart, if for some reason we regress for some workload you're able to
> > more readily test we can deal with it. But I'm too encouraged by Ming's
> > performance improvements to hold these changes back any further.
>
> Sorry Mike but I think Ming's measurement results are not sufficient to
> motivate upstreaming of these patches. What I remember from previous versions
> of this patch series is that Ming measured the performance impact of this
> patch series only for the Emulex FC driver (lpfc). That driver limits queue
> depth to 3. Instead of modifying the dm code, that driver needs to be fixed
> such that it allows larger queue depths.
Sorry Bart but even in my test (using null_blk with queue depth of 8, 12
hw queues on 12 cpu system, with 12 fio jobs) it is yielding performance
improvement. Going from 1350K iops to 1420K iops. And 5275MB/s to
5532MB/s. For sequential fio workload.
This test has been a staple of mine for a very long time. The
improvement is very real (and hard to come by). The iops improvement is
a bellweather for improved efficiency in the fast path.
So all your concern about Ming's testing is fine. But you'd really help
me out a lot if you could give these changes a test.
> Additionally, some time ago I had explained in detail why I think that patch
> 2/5 in this series is wrong and why it will introduce fairness regressions
> in multi-LUN workloads.
You may think you explained it.. but IIRC it was very hand-wavvy. If
you can provide a pointer to what you think was a compelling argument
then please do.
But I'm left very much unconvinced with your argument given what I see
in patch 2. If blk_get_request() fails it follows that BLK_STS_RESOURCE
should be returned. Why is the 100ms delay meaningful in that case for
SCSI?
> I think we need performance measurements for this
> patch series for at least the following two configurations before this patch
> series can be considered for upstream inclusion:
> * The performance impact of this patch series for SCSI devices with a
> realistic queue depth (e.g. 64 or 128).
> * The performance impact for this patch series for a SCSI host with which
> multiple LUNs are associated and for which the target system often replies
> with the SCSI "BUSY" status.
OK, so please test it.
The changes are pretty easy to review. This notion that these changes
are problematic rings very hollow given your lack of actual numbers (or
some other concerning observation rooted in testing fact) to back up
your position.
Mike
next prev parent reply other threads:[~2018-01-11 22:58 UTC|newest]
Thread overview: 38+ messages / expand[flat|nested] mbox.gz Atom feed top
2018-01-11 6:01 [PATCH V3 0/5] dm-rq: improve sequential I/O performance Ming Lei
2018-01-11 6:01 ` [PATCH V3 1/5] dm-mpath: don't call blk_mq_delay_run_hw_queue() in case of BLK_STS_RESOURCE Ming Lei
2018-01-11 6:01 ` [PATCH V3 2/5] dm-mpath: return DM_MAPIO_REQUEUE in case of rq allocation failure Ming Lei
2018-01-12 19:04 ` Bart Van Assche
2018-01-13 1:29 ` Ming Lei
2018-01-11 6:01 ` [PATCH V3 3/5] blk-mq: move actual issue into one helper Ming Lei
2018-01-11 22:09 ` Mike Snitzer
2018-01-11 6:01 ` [PATCH V3 4/5] blk-mq: return dispatch result to caller in blk_mq_try_issue_directly Ming Lei
2018-01-11 22:10 ` Mike Snitzer
2018-01-11 6:01 ` [PATCH V3 5/5] blk-mq: issue request directly for blk_insert_cloned_request Ming Lei
2018-01-11 22:42 ` Mike Snitzer
2018-01-11 22:07 ` [PATCH V3 0/5] dm-rq: improve sequential I/O performance Mike Snitzer
2018-01-11 22:37 ` Bart Van Assche
2018-01-11 22:58 ` Mike Snitzer [this message]
2018-01-11 23:27 ` Bart Van Assche
2018-01-12 1:43 ` Mike Snitzer
2018-01-12 1:42 ` Ming Lei
2018-01-12 1:57 ` Mike Snitzer
2018-01-12 3:33 ` Ming Lei
2018-01-12 17:18 ` Mike Snitzer
2018-01-12 17:26 ` Bart Van Assche
2018-01-12 17:40 ` Mike Snitzer
2018-01-12 17:46 ` Bart Van Assche
2018-01-12 18:06 ` Mike Snitzer
2018-01-12 18:54 ` Bart Van Assche
2018-01-12 19:29 ` Mike Snitzer
2018-01-12 19:53 ` Elliott, Robert (Persistent Memory)
2018-01-13 0:52 ` Mike Snitzer
2018-01-13 1:00 ` Bart Van Assche
2018-01-13 1:37 ` Mike Snitzer
2018-01-13 15:14 ` Mike Snitzer
2018-01-12 22:31 ` Mike Snitzer
2018-01-13 15:04 ` Ming Lei
2018-01-13 15:10 ` Mike Snitzer
2018-01-12 23:17 ` Mike Snitzer
2018-01-12 23:42 ` Bart Van Assche
2018-01-13 0:45 ` Mike Snitzer
2018-01-13 14:34 ` Ming Lei
Reply instructions:
You may reply publicly to this message via plain-text email
using any one of the following methods:
* Save the following mbox file, import it into your mail client,
and reply-to-all from there: mbox
Avoid top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
* Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
git send-email \
--in-reply-to=20180111225830.GE31944@redhat.com \
--to=snitzer@redhat.com \
--cc=Bart.VanAssche@wdc.com \
--cc=axboe@fb.com \
--cc=axboe@kernel.dk \
--cc=dm-devel@redhat.com \
--cc=hch@infradead.org \
--cc=linux-block@vger.kernel.org \
--cc=martin.petersen@oracle.com \
--cc=ming.lei@redhat.com \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
* If your mail client supports setting the In-Reply-To header
via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).