* [RFC PATCH 0/2] blk-mq I/O scheduling fixes
@ 2019-09-19 9:45 Hannes Reinecke
2019-09-19 9:45 ` [PATCH 1/2] blk-mq: fixup request re-insert in blk_mq_try_issue_list_directly() Hannes Reinecke
` (4 more replies)
0 siblings, 5 replies; 17+ messages in thread
From: Hannes Reinecke @ 2019-09-19 9:45 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-scsi, Martin K. Petersen, James Bottomley,
Christoph Hellwig, linux-block, Hans Holmberg, Damien Le Moal,
Hannes Reinecke
Hi all,
Damien pointed out that there are some areas in the blk-mq I/O
scheduling algorithm which have a distinct legacy feel to them,
and prevent multiqueue I/O schedulers from working properly.
These two patches should clear up this situation, but as it's
not quite clear what the original intention of the code was,
I'm posting them as an RFC.
So as usual, comments and reviews are welcome.
Hannes Reinecke (2):
blk-mq: fixup request re-insert in blk_mq_try_issue_list_directly()
blk-mq: always call into the scheduler in blk_mq_make_request()
block/blk-mq.c | 9 ++-------
1 file changed, 2 insertions(+), 7 deletions(-)
--
2.16.4
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH 1/2] blk-mq: fixup request re-insert in blk_mq_try_issue_list_directly()
2019-09-19 9:45 [RFC PATCH 0/2] blk-mq I/O scheduling fixes Hannes Reinecke
@ 2019-09-19 9:45 ` Hannes Reinecke
2019-09-19 14:19 ` Ming Lei
2019-09-19 14:52 ` Guoqing Jiang
2019-09-19 9:45 ` [PATCH 2/2] blk-mq: always call into the scheduler in blk_mq_make_request() Hannes Reinecke
` (3 subsequent siblings)
4 siblings, 2 replies; 17+ messages in thread
From: Hannes Reinecke @ 2019-09-19 9:45 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-scsi, Martin K. Petersen, James Bottomley,
Christoph Hellwig, linux-block, Hans Holmberg, Damien Le Moal,
Hannes Reinecke
From: Hannes Reinecke <hare@suse.com>
When blk_mq_request_issue_directly() returns BLK_STS_RESOURCE we
need to requeue the I/O, but adding it to the global request list
will mess up the passed-in request list. So re-add the request
to the original list and leave it to the caller to handle situations
where the list wasn't completely emptied.
Signed-off-by: Hannes Reinecke <hare@suse.com>
---
block/blk-mq.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index b038ec680e84..44ff3c1442a4 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1899,8 +1899,7 @@ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
if (ret != BLK_STS_OK) {
if (ret == BLK_STS_RESOURCE ||
ret == BLK_STS_DEV_RESOURCE) {
- blk_mq_request_bypass_insert(rq,
- list_empty(list));
+ list_add(list, &rq->queuelist);
break;
}
blk_mq_end_request(rq, ret);
--
2.16.4
^ permalink raw reply related [flat|nested] 17+ messages in thread
* [PATCH 2/2] blk-mq: always call into the scheduler in blk_mq_make_request()
2019-09-19 9:45 [RFC PATCH 0/2] blk-mq I/O scheduling fixes Hannes Reinecke
2019-09-19 9:45 ` [PATCH 1/2] blk-mq: fixup request re-insert in blk_mq_try_issue_list_directly() Hannes Reinecke
@ 2019-09-19 9:45 ` Hannes Reinecke
2019-09-19 10:21 ` Damien Le Moal
2019-09-19 9:56 ` [RFC PATCH 0/2] blk-mq I/O scheduling fixes Liu, Sunny
` (2 subsequent siblings)
4 siblings, 1 reply; 17+ messages in thread
From: Hannes Reinecke @ 2019-09-19 9:45 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-scsi, Martin K. Petersen, James Bottomley,
Christoph Hellwig, linux-block, Hans Holmberg, Damien Le Moal,
Hannes Reinecke
From: Hannes Reinecke <hare@suse.com>
A scheduler might be attached even for devices exposing more than
one hardware queue, so the check for the number of hardware queues
is pointless and should be removed.
Signed-off-by: Hannes Reinecke <hare@suse.com>
---
block/blk-mq.c | 6 +-----
1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 44ff3c1442a4..faab542e4836 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1931,7 +1931,6 @@ static void blk_add_rq_to_plug(struct blk_plug *plug, struct request *rq)
static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
{
- const int is_sync = op_is_sync(bio->bi_opf);
const int is_flush_fua = op_is_flush(bio->bi_opf);
struct blk_mq_alloc_data data = { .flags = 0};
struct request *rq;
@@ -1977,7 +1976,7 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
/* bypass scheduler for flush rq */
blk_insert_flush(rq);
blk_mq_run_hw_queue(data.hctx, true);
- } else if (plug && (q->nr_hw_queues == 1 || q->mq_ops->commit_rqs)) {
+ } else if (plug && q->mq_ops->commit_rqs) {
/*
* Use plugging if we have a ->commit_rqs() hook as well, as
* we know the driver uses bd->last in a smart fashion.
@@ -2020,9 +2019,6 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
blk_mq_try_issue_directly(data.hctx, same_queue_rq,
&cookie);
}
- } else if ((q->nr_hw_queues > 1 && is_sync) || (!q->elevator &&
- !data.hctx->dispatch_busy)) {
- blk_mq_try_issue_directly(data.hctx, rq, &cookie);
} else {
blk_mq_sched_insert_request(rq, false, true, true);
}
--
2.16.4
^ permalink raw reply related [flat|nested] 17+ messages in thread
* RE: [RFC PATCH 0/2] blk-mq I/O scheduling fixes
2019-09-19 9:45 [RFC PATCH 0/2] blk-mq I/O scheduling fixes Hannes Reinecke
2019-09-19 9:45 ` [PATCH 1/2] blk-mq: fixup request re-insert in blk_mq_try_issue_list_directly() Hannes Reinecke
2019-09-19 9:45 ` [PATCH 2/2] blk-mq: always call into the scheduler in blk_mq_make_request() Hannes Reinecke
@ 2019-09-19 9:56 ` Liu, Sunny
2019-09-19 10:03 ` Damien Le Moal
2019-09-19 12:57 ` Hans Holmberg
2019-09-19 17:48 ` Jens Axboe
4 siblings, 1 reply; 17+ messages in thread
From: Liu, Sunny @ 2019-09-19 9:56 UTC (permalink / raw)
To: Hannes Reinecke, Jens Axboe
Cc: linux-scsi@vger.kernel.org, Martin K. Petersen, James Bottomley,
Christoph Hellwig, linux-block@vger.kernel.org, Hans Holmberg,
Damien Le Moal
Hello Sir,
I have a question about the I/O scheduler in kernel 5.2.9.
In the new kernel, which I/O scheduler should be used for a legacy rotating drive, such as a SATA HDD?
During fio testing with libaio I created multiple threads and found that sequential writes of 512k and larger had bad performance results, even with the BFQ scheduler enabled.
There are no single-queue schedulers anymore, only none, mq-deadline, kyber and BFQ.
mq-deadline and kyber are meant for fast block devices. Only BFQ shows better performance, but it cannot keep up the good behaviour during 512k or larger 100% sequential writes.
Could you give me some advice on which parameters I should change for multi-threaded sequential writes of bigger files?
Thanks all of you.
Best Regards,
Sunny Liu (刘萍)
LenovoNetApp
北京市海淀区西北旺东路10号院2号楼L3-E1-01
L3-E1-01,Building No.2, Lenovo HQ West No.10 XiBeiWang East Rd.,
Haidian District, Beijing 100094, PRC
Tel: +86 15910622368
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [RFC PATCH 0/2] blk-mq I/O scheduling fixes
2019-09-19 9:56 ` [RFC PATCH 0/2] blk-mq I/O scheduling fixes Liu, Sunny
@ 2019-09-19 10:03 ` Damien Le Moal
[not found] ` <BJXPR01MB0296594F3E478B5BFD4DA2ABF4890@BJXPR01MB0296.CHNPR01.prod.partner.outlook.cn>
0 siblings, 1 reply; 17+ messages in thread
From: Damien Le Moal @ 2019-09-19 10:03 UTC (permalink / raw)
To: Liu, Sunny, Hannes Reinecke, Jens Axboe
Cc: linux-scsi@vger.kernel.org, Martin K. Petersen, James Bottomley,
Christoph Hellwig, linux-block@vger.kernel.org, Hans Holmberg
On 2019/09/19 11:57, Liu, Sunny wrote:
> Hello Sir,
>
> I have a question about the I/O scheduler in kernel 5.2.9.
>
> In the new kernel, which I/O scheduler should be used for a legacy rotating
> drive, such as a SATA HDD? During fio testing with libaio I created multiple
> threads and found that sequential writes of 512k and larger had bad
> performance results, even with the BFQ scheduler enabled.
>
> There are no single-queue schedulers anymore, only none, mq-deadline, kyber
> and BFQ. mq-deadline and kyber are meant for fast block devices. Only BFQ
> shows better performance, but it cannot keep up the good behaviour during
> 512k or larger 100% sequential writes.
>
> Could you give me some advice on which parameters I should change for
> multi-threaded sequential writes of bigger files?
The default block IO scheduler for a single queue device (e.g. HDDs in most
cases, but beware of the HBA being used and how it exposes the disk) is
mq-deadline. For a multiqueue device (e.g. NVMe SSDs), the default elevator is none.
For your SATA SSD, which is a single queue device, the default elevator will be
mq-deadline. This elevator should give you very good performance. "none" will
probably also give you the same results though.
Performance on SSD highly depends on the SSD condition (the amount and pattern
of writes preceding the test). You may want to trim the entire device before
writing it to check the maximum performance you can get out of it.
>
> Thanks all of you.
>
--
Damien Le Moal
Western Digital Research
^ permalink raw reply [flat|nested] 17+ messages in thread
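For reference, the numjobs comparison Damien suggests above can be reproduced with a small fio job file along these lines. This is a hypothetical sketch: the device path, block size and runtime are placeholders to adjust per setup, and the write is destructive:

```ini
; seq-write.fio -- hypothetical job file for the comparison discussed above
[global]
ioengine=libaio
direct=1
rw=write
bs=512k
iodepth=128
runtime=60
time_based=1
filename=/dev/sdX        ; placeholder device -- destructive write!

[seqwrite]
numjobs=2                ; re-run with numjobs=1 for the baseline
```

The active elevator for the device can be checked and changed via /sys/block/sdX/queue/scheduler before each run.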
* Re: [PATCH 2/2] blk-mq: always call into the scheduler in blk_mq_make_request()
2019-09-19 9:45 ` [PATCH 2/2] blk-mq: always call into the scheduler in blk_mq_make_request() Hannes Reinecke
@ 2019-09-19 10:21 ` Damien Le Moal
2019-09-19 14:23 ` Ming Lei
0 siblings, 1 reply; 17+ messages in thread
From: Damien Le Moal @ 2019-09-19 10:21 UTC (permalink / raw)
To: Hannes Reinecke, Jens Axboe
Cc: linux-scsi@vger.kernel.org, Martin K. Petersen, James Bottomley,
Christoph Hellwig, linux-block@vger.kernel.org, Hans Holmberg,
Hannes Reinecke
On 2019/09/19 11:45, Hannes Reinecke wrote:
> From: Hannes Reinecke <hare@suse.com>
>
> A scheduler might be attached even for devices exposing more than
> one hardware queue, so the check for the number of hardware queues
> is pointless and should be removed.
>
> Signed-off-by: Hannes Reinecke <hare@suse.com>
> ---
> block/blk-mq.c | 6 +-----
> 1 file changed, 1 insertion(+), 5 deletions(-)
>
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 44ff3c1442a4..faab542e4836 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -1931,7 +1931,6 @@ static void blk_add_rq_to_plug(struct blk_plug *plug, struct request *rq)
>
> static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
> {
> - const int is_sync = op_is_sync(bio->bi_opf);
> const int is_flush_fua = op_is_flush(bio->bi_opf);
> struct blk_mq_alloc_data data = { .flags = 0};
> struct request *rq;
> @@ -1977,7 +1976,7 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
> /* bypass scheduler for flush rq */
> blk_insert_flush(rq);
> blk_mq_run_hw_queue(data.hctx, true);
> - } else if (plug && (q->nr_hw_queues == 1 || q->mq_ops->commit_rqs)) {
> + } else if (plug && q->mq_ops->commit_rqs) {
> /*
> * Use plugging if we have a ->commit_rqs() hook as well, as
> * we know the driver uses bd->last in a smart fashion.
> @@ -2020,9 +2019,6 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
> blk_mq_try_issue_directly(data.hctx, same_queue_rq,
> &cookie);
> }
> - } else if ((q->nr_hw_queues > 1 && is_sync) || (!q->elevator &&
> - !data.hctx->dispatch_busy)) {
> - blk_mq_try_issue_directly(data.hctx, rq, &cookie);
It may be worth mentioning that blk_mq_sched_insert_request() will do a direct
insert of the request using __blk_mq_insert_request(). But that insert is
slightly different from what blk_mq_try_issue_directly() does with
__blk_mq_issue_directly() as the request in that case is passed along to the
device using queue->mq_ops->queue_rq() while __blk_mq_insert_request() will put
the request in ctx->rq_lists[type].
This removes the optimized case !q->elevator && !data.hctx->dispatch_busy, but I
am not sure of the actual performance impact yet. We may want to patch
blk_mq_sched_insert_request() to handle that case.
> } else {
> blk_mq_sched_insert_request(rq, false, true, true);
> }
>
--
Damien Le Moal
Western Digital Research
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [RFC PATCH 0/2] blk-mq I/O scheduling fixes
[not found] ` <BJXPR01MB0296594F3E478B5BFD4DA2ABF4890@BJXPR01MB0296.CHNPR01.prod.partner.outlook.cn>
@ 2019-09-19 12:44 ` Damien Le Moal
2019-09-19 12:54 ` Liu, Sunny
0 siblings, 1 reply; 17+ messages in thread
From: Damien Le Moal @ 2019-09-19 12:44 UTC (permalink / raw)
To: Liu, Sunny, Hannes Reinecke, Jens Axboe
Cc: linux-scsi@vger.kernel.org, Martin K. Petersen, James Bottomley,
Christoph Hellwig, linux-block@vger.kernel.org, Hans Holmberg
On 2019/09/19 12:59, Liu, Sunny wrote:
> Thanks very much for your quick advice.
>
> The problem drive is a SATA HDD, 7200rpm, in RAID 5.
Sorry, I read "SDD" where you had written "HDD" :)
Is this a hardware RAID? Or is this using dm/md raid?
> When using fio with libaio, iodepth=128 and numjobs=2, the performance is as
> bad as shown below in red. But there is no problem with numjobs=1. In our
> solution, *multiple threads* have to be used.
Your data does not have the numjobs=1 case for kernel 5.2.9. You should run that
for comparison with the numjobs=2 case on the same kernel.
> From the test results, BFQ low-latency had good performance, but it still has
> a problem with 1m sequential writes.
>
> The data comes from CentOS 7.6 (kernel 3.10.0-975) and kernel 5.2.9 with BFQ
> and bcache enabled. No bcache is configured.
>
> Is there any parameter that can solve the 1m-and-larger sequential write
> problem with multiple threads?
Not sure what the problem is here. You could look at a blktrace of each case to
see if there is any major difference in the command patterns sent to the disks
of your array, in particular command size.
--
Damien Le Moal
Western Digital Research
^ permalink raw reply [flat|nested] 17+ messages in thread
* RE: [RFC PATCH 0/2] blk-mq I/O scheduling fixes
2019-09-19 12:44 ` Damien Le Moal
@ 2019-09-19 12:54 ` Liu, Sunny
0 siblings, 0 replies; 17+ messages in thread
From: Liu, Sunny @ 2019-09-19 12:54 UTC (permalink / raw)
To: Damien Le Moal, Hannes Reinecke, Jens Axboe
Cc: linux-scsi@vger.kernel.org, Martin K. Petersen, James Bottomley,
Christoph Hellwig, linux-block@vger.kernel.org, Hans Holmberg
Sir,
The HDD array is hardware RAID 5 with a 530-8i RAID card.
I tried a 1m sequential write with numjobs=1; the data is similar to kernel 3.10.0, with either the mq-deadline or the BFQ elevator.
If you need detailed test data with numjobs=1, I can provide it, or any other info you need, such as two processes with one thread each.
Thank you.
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [RFC PATCH 0/2] blk-mq I/O scheduling fixes
2019-09-19 9:45 [RFC PATCH 0/2] blk-mq I/O scheduling fixes Hannes Reinecke
` (2 preceding siblings ...)
2019-09-19 9:56 ` [RFC PATCH 0/2] blk-mq I/O scheduling fixes Liu, Sunny
@ 2019-09-19 12:57 ` Hans Holmberg
2019-09-19 17:48 ` Jens Axboe
4 siblings, 0 replies; 17+ messages in thread
From: Hans Holmberg @ 2019-09-19 12:57 UTC (permalink / raw)
To: Hannes Reinecke, Jens Axboe
Cc: linux-scsi@vger.kernel.org, Martin K. Petersen, James Bottomley,
Christoph Hellwig, linux-block@vger.kernel.org, Damien Le Moal
On 2019-09-19 11:45, Hannes Reinecke wrote:
> Hi all,
>
> Damien pointed out that there are some areas in the blk-mq I/O
> scheduling algorithm which have a distinct legacy feel to it,
> and prohibit multiqueue I/O schedulers from working properly.
> These two patches should clear up this situation, but as it's
> not quite clear what the original intention of the code was
> I'll be posting them as an RFC.
>
> So as usual, comments and reviews are welcome.
>
> Hannes Reinecke (2):
> blk-mq: fixup request re-insert in blk_mq_try_issue_list_directly()
> blk-mq: always call into the scheduler in blk_mq_make_request()
>
> block/blk-mq.c | 9 ++-------
> 1 file changed, 2 insertions(+), 7 deletions(-)
>
I tested this patch set in qemu and confirmed that write locking for ZBD
now works again.
The bypass of the scheduler (in the case q->nr_hw_queues > 1 && is_sync) is
the culprit, and with this removed, we're good again for zoned block
devices.
Cheers,
Hans
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH 1/2] blk-mq: fixup request re-insert in blk_mq_try_issue_list_directly()
2019-09-19 9:45 ` [PATCH 1/2] blk-mq: fixup request re-insert in blk_mq_try_issue_list_directly() Hannes Reinecke
@ 2019-09-19 14:19 ` Ming Lei
2019-09-20 6:42 ` Hannes Reinecke
2019-09-19 14:52 ` Guoqing Jiang
1 sibling, 1 reply; 17+ messages in thread
From: Ming Lei @ 2019-09-19 14:19 UTC (permalink / raw)
To: Hannes Reinecke
Cc: Jens Axboe, linux-scsi, Martin K. Petersen, James Bottomley,
Christoph Hellwig, linux-block, Hans Holmberg, Damien Le Moal,
Hannes Reinecke
On Thu, Sep 19, 2019 at 11:45:46AM +0200, Hannes Reinecke wrote:
> From: Hannes Reinecke <hare@suse.com>
>
> When blk_mq_request_issue_directly() returns BLK_STS_RESOURCE we
> need to requeue the I/O, but adding it to the global request list
> will mess up the passed-in request list. So re-add the request
We always add the request to hctx->dispatch_list after .queue_rq() returns
BLK_STS_RESOURCE or BLK_STS_DEV_RESOURCE, so what is being messed up?
> to the original list and leave it to the caller to handle situations
> where the list wasn't completely emptied.
>
> Signed-off-by: Hannes Reinecke <hare@suse.com>
> ---
> block/blk-mq.c | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index b038ec680e84..44ff3c1442a4 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -1899,8 +1899,7 @@ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
> if (ret != BLK_STS_OK) {
> if (ret == BLK_STS_RESOURCE ||
> ret == BLK_STS_DEV_RESOURCE) {
> - blk_mq_request_bypass_insert(rq,
> - list_empty(list));
> + list_add(list, &rq->queuelist);
This way, this request (with DONTPREP set) may be merged with another rq
or bio, and potential data corruption may be caused; please see commit:
c616cbee97ae blk-mq: punt failed direct issue to dispatch list
Thanks,
Ming
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH 2/2] blk-mq: always call into the scheduler in blk_mq_make_request()
2019-09-19 10:21 ` Damien Le Moal
@ 2019-09-19 14:23 ` Ming Lei
2019-09-19 15:48 ` Kashyap Desai
0 siblings, 1 reply; 17+ messages in thread
From: Ming Lei @ 2019-09-19 14:23 UTC (permalink / raw)
To: Damien Le Moal
Cc: Hannes Reinecke, Jens Axboe, linux-scsi@vger.kernel.org,
Martin K. Petersen, James Bottomley, Christoph Hellwig,
linux-block@vger.kernel.org, Hans Holmberg, Hannes Reinecke,
Kashyap Desai
On Thu, Sep 19, 2019 at 10:21:54AM +0000, Damien Le Moal wrote:
> On 2019/09/19 11:45, Hannes Reinecke wrote:
> > From: Hannes Reinecke <hare@suse.com>
> >
> > A scheduler might be attached even for devices exposing more than
> > one hardware queue, so the check for the number of hardware queues
> > is pointless and should be removed.
> >
> > Signed-off-by: Hannes Reinecke <hare@suse.com>
> > ---
> > block/blk-mq.c | 6 +-----
> > 1 file changed, 1 insertion(+), 5 deletions(-)
> >
> > diff --git a/block/blk-mq.c b/block/blk-mq.c
> > index 44ff3c1442a4..faab542e4836 100644
> > --- a/block/blk-mq.c
> > +++ b/block/blk-mq.c
> > @@ -1931,7 +1931,6 @@ static void blk_add_rq_to_plug(struct blk_plug *plug, struct request *rq)
> >
> > static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
> > {
> > - const int is_sync = op_is_sync(bio->bi_opf);
> > const int is_flush_fua = op_is_flush(bio->bi_opf);
> > struct blk_mq_alloc_data data = { .flags = 0};
> > struct request *rq;
> > @@ -1977,7 +1976,7 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
> > /* bypass scheduler for flush rq */
> > blk_insert_flush(rq);
> > blk_mq_run_hw_queue(data.hctx, true);
> > - } else if (plug && (q->nr_hw_queues == 1 || q->mq_ops->commit_rqs)) {
> > + } else if (plug && q->mq_ops->commit_rqs) {
> > /*
> > * Use plugging if we have a ->commit_rqs() hook as well, as
> > * we know the driver uses bd->last in a smart fashion.
> > @@ -2020,9 +2019,6 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
> > blk_mq_try_issue_directly(data.hctx, same_queue_rq,
> > &cookie);
> > }
> > - } else if ((q->nr_hw_queues > 1 && is_sync) || (!q->elevator &&
> > - !data.hctx->dispatch_busy)) {
> > - blk_mq_try_issue_directly(data.hctx, rq, &cookie);
>
> It may be worth mentioning that blk_mq_sched_insert_request() will do a direct
> insert of the request using __blk_mq_insert_request(). But that insert is
> slightly different from what blk_mq_try_issue_directly() does with
> __blk_mq_issue_directly() as the request in that case is passed along to the
> device using queue->mq_ops->queue_rq() while __blk_mq_insert_request() will put
> the request in ctx->rq_lists[type].
>
> This removes the optimized case !q->elevator && !data.hctx->dispatch_busy, but I
> am not sure of the actual performance impact yet. We may want to patch
> blk_mq_sched_insert_request() to handle that case.
The optimization did improve IOPS of single queue SCSI SSD a lot, see
commit 6ce3dd6eec114930cf2035a8bcb1e80477ed79a8
Author: Ming Lei <ming.lei@redhat.com>
Date: Tue Jul 10 09:03:31 2018 +0800
blk-mq: issue directly if hw queue isn't busy in case of 'none'
In case of 'none' io scheduler, when hw queue isn't busy, it isn't
necessary to enqueue request to sw queue and dequeue it from
sw queue because request may be submitted to hw queue asap without
extra cost, meantime there shouldn't be much request in sw queue,
and we don't need to worry about effect on IO merge.
There are still some single hw queue SCSI HBAs(HPSA, megaraid_sas, ...)
which may connect high performance devices, so 'none' is often required
for obtaining good performance.
This patch improves IOPS and decreases CPU utilization on megaraid_sas,
per Kashyap's test.
Thanks,
Ming
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH 1/2] blk-mq: fixup request re-insert in blk_mq_try_issue_list_directly()
2019-09-19 9:45 ` [PATCH 1/2] blk-mq: fixup request re-insert in blk_mq_try_issue_list_directly() Hannes Reinecke
2019-09-19 14:19 ` Ming Lei
@ 2019-09-19 14:52 ` Guoqing Jiang
1 sibling, 0 replies; 17+ messages in thread
From: Guoqing Jiang @ 2019-09-19 14:52 UTC (permalink / raw)
To: Hannes Reinecke, Jens Axboe
Cc: linux-scsi, Martin K. Petersen, James Bottomley,
Christoph Hellwig, linux-block, Hans Holmberg, Damien Le Moal,
Hannes Reinecke
On 9/19/19 11:45 AM, Hannes Reinecke wrote:
> From: Hannes Reinecke <hare@suse.com>
>
> When blk_mq_request_issue_directly() returns BLK_STS_RESOURCE we
> need to requeue the I/O, but adding it to the global request list
> will mess up the passed-in request list. So re-add the request
> to the original list and leave it to the caller to handle situations
> where the list wasn't completely emptied.
>
> Signed-off-by: Hannes Reinecke <hare@suse.com>
> ---
> block/blk-mq.c | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index b038ec680e84..44ff3c1442a4 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -1899,8 +1899,7 @@ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
> if (ret != BLK_STS_OK) {
> if (ret == BLK_STS_RESOURCE ||
> ret == BLK_STS_DEV_RESOURCE) {
> - blk_mq_request_bypass_insert(rq,
> - list_empty(list));
> + list_add(list, &rq->queuelist);
Just curious, maybe the above should be "list_add(&rq->queuelist, list)".
Thanks,
Guoqing
^ permalink raw reply [flat|nested] 17+ messages in thread
* RE: [PATCH 2/2] blk-mq: always call into the scheduler in blk_mq_make_request()
2019-09-19 14:23 ` Ming Lei
@ 2019-09-19 15:48 ` Kashyap Desai
2019-09-19 16:13 ` Damien Le Moal
0 siblings, 1 reply; 17+ messages in thread
From: Kashyap Desai @ 2019-09-19 15:48 UTC (permalink / raw)
To: Ming Lei, Damien Le Moal
Cc: Hannes Reinecke, Jens Axboe, linux-scsi, Martin K. Petersen,
James Bottomley, Christoph Hellwig, linux-block, Hans Holmberg,
Hannes Reinecke
> > > - } else if (plug && (q->nr_hw_queues == 1 || q->mq_ops->commit_rqs)) {
> > > + } else if (plug && q->mq_ops->commit_rqs) {
> > > /*
> > > * Use plugging if we have a ->commit_rqs() hook as well, as
> > > * we know the driver uses bd->last in a smart fashion.
> > > @@ -2020,9 +2019,6 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
> > > blk_mq_try_issue_directly(data.hctx, same_queue_rq,
> > > &cookie);
> > > }
> > > - } else if ((q->nr_hw_queues > 1 && is_sync) || (!q->elevator &&
> > > - !data.hctx->dispatch_busy)) {
> > > - blk_mq_try_issue_directly(data.hctx, rq, &cookie);
Hannes -
The earlier check, prior to commit 6ce3dd6eec114930cf2035a8bcb1e80477ed79a8,
was only (q->nr_hw_queues > 1 && is_sync).
I am not sure whether the nr_hw_queues check is required at this place,
but the other part of the check (!q->elevator && !data.hctx->dispatch_busy)
to qualify for direct dispatch is required for higher performance.
Recent MegaRaid and MPT HBA Aero series controllers are capable of doing
~3.0M IOPS, and for such high performance using a single hardware queue,
commit 6ce3dd6eec114930cf2035a8bcb1e80477ed79a8 is very important.
Kashyap
> >
> > It may be worth mentioning that blk_mq_sched_insert_request() will do
> > a direct insert of the request using __blk_mq_insert_request(). But
> > that insert is slightly different from what
> > blk_mq_try_issue_directly() does with
> > __blk_mq_issue_directly() as the request in that case is passed along
> > to the device using queue->mq_ops->queue_rq() while
> > __blk_mq_insert_request() will put the request in ctx->rq_lists[type].
> >
> > This removes the optimized case !q->elevator &&
> > !data.hctx->dispatch_busy, but I am not sure of the actual performance
> > impact yet. We may want to patch
> > blk_mq_sched_insert_request() to handle that case.
>
> The optimization did improve IOPS of single queue SCSI SSD a lot, see
>
> commit 6ce3dd6eec114930cf2035a8bcb1e80477ed79a8
> Author: Ming Lei <ming.lei@redhat.com>
> Date: Tue Jul 10 09:03:31 2018 +0800
>
> blk-mq: issue directly if hw queue isn't busy in case of 'none'
>
> In case of 'none' io scheduler, when hw queue isn't busy, it isn't
> necessary to enqueue request to sw queue and dequeue it from
> sw queue because request may be submitted to hw queue asap without
> extra cost, meantime there shouldn't be much request in sw queue,
> and we don't need to worry about effect on IO merge.
>
> There are still some single hw queue SCSI HBAs (HPSA, megaraid_sas, ...)
> which may connect high performance devices, so 'none' is often required
> for obtaining good performance.
>
> This patch improves IOPS and decreases CPU utilization on megaraid_sas,
> per Kashyap's test.
>
>
> Thanks,
> Ming
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH 2/2] blk-mq: always call into the scheduler in blk_mq_make_request()
2019-09-19 15:48 ` Kashyap Desai
@ 2019-09-19 16:13 ` Damien Le Moal
0 siblings, 0 replies; 17+ messages in thread
From: Damien Le Moal @ 2019-09-19 16:13 UTC (permalink / raw)
To: Kashyap Desai, Ming Lei
Cc: Hannes Reinecke, Jens Axboe, linux-scsi@vger.kernel.org,
Martin K. Petersen, James Bottomley, Christoph Hellwig,
linux-block@vger.kernel.org, Hans Holmberg, Hannes Reinecke
On 2019/09/19 17:48, Kashyap Desai wrote:
>>>> - } else if (plug && (q->nr_hw_queues == 1 || q->mq_ops-
>>> commit_rqs)) {
>>>> + } else if (plug && q->mq_ops->commit_rqs) {
>>>> /*
>>>> * Use plugging if we have a ->commit_rqs() hook as well,
> as
>>>> * we know the driver uses bd->last in a smart fashion.
>>>> @@ -2020,9 +2019,6 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
>>>> 			blk_mq_try_issue_directly(data.hctx, same_queue_rq,
>>>> 					&cookie);
>>>> 		}
>>>> -	} else if ((q->nr_hw_queues > 1 && is_sync) || (!q->elevator &&
>>>> -			!data.hctx->dispatch_busy)) {
>>>> -		blk_mq_try_issue_directly(data.hctx, rq, &cookie);
> Hannes -
>
> The check prior to commit 6ce3dd6eec114930cf2035a8bcb1e80477ed79a8
> was only (q->nr_hw_queues > 1 && is_sync).
> I am not sure whether the nr_hw_queues check is required at this place,
> but the other part of the check (!q->elevator && !data.hctx->dispatch_busy)
> to qualify for direct dispatch is required for higher performance.
>
> The recent MegaRaid and MPT HBA Aero series controllers are capable of
> ~3.0M IOPS, and for such high performance from a single hardware queue,
> commit 6ce3dd6eec114930cf2035a8bcb1e80477ed79a8 is very important.
Kashyap, Ming,
Thanks for the information. We will restore this case.
>
> Kashyap
>
>
>>>
>>> It may be worth mentioning that blk_mq_sched_insert_request() will do
>>> a direct insert of the request using __blk_mq_insert_request(). But
>>> that insert is slightly different from what
>>> blk_mq_try_issue_directly() does with
>>> __blk_mq_issue_directly() as the request in that case is passed along
>>> to the device using queue->mq_ops->queue_rq() while
>>> __blk_mq_insert_request() will put the request in ctx->rq_lists[type].
>>>
>>> This removes the optimized case !q->elevator &&
>>> !data.hctx->dispatch_busy, but I am not sure of the actual performance
>>> impact yet. We may want to patch
>>> blk_mq_sched_insert_request() to handle that case.
>>
>> The optimization did improve IOPS of single queue SCSI SSD a lot, see
>>
>> commit 6ce3dd6eec114930cf2035a8bcb1e80477ed79a8
>> Author: Ming Lei <ming.lei@redhat.com>
>> Date: Tue Jul 10 09:03:31 2018 +0800
>>
>> blk-mq: issue directly if hw queue isn't busy in case of 'none'
>>
>> In case of 'none' io scheduler, when hw queue isn't busy, it isn't
>> necessary to enqueue request to sw queue and dequeue it from
>> sw queue because request may be submitted to hw queue asap without
>> extra cost, meantime there shouldn't be much request in sw queue,
>> and we don't need to worry about effect on IO merge.
>>
>> There are still some single hw queue SCSI HBAs (HPSA, megaraid_sas, ...)
>> which may connect high performance devices, so 'none' is often required
>> for obtaining good performance.
>>
>> This patch improves IOPS and decreases CPU utilization on megaraid_sas,
>> per Kashyap's test.
>>
>>
>> Thanks,
>> Ming
>
--
Damien Le Moal
Western Digital Research
* Re: [RFC PATCH 0/2] blk-mq I/O scheduling fixes
2019-09-19 9:45 [RFC PATCH 0/2] blk-mq I/O scheduling fixes Hannes Reinecke
` (3 preceding siblings ...)
2019-09-19 12:57 ` Hans Holmberg
@ 2019-09-19 17:48 ` Jens Axboe
2019-09-19 21:11 ` Damien Le Moal
4 siblings, 1 reply; 17+ messages in thread
From: Jens Axboe @ 2019-09-19 17:48 UTC (permalink / raw)
To: Hannes Reinecke
Cc: linux-scsi, Martin K. Petersen, James Bottomley,
Christoph Hellwig, linux-block, Hans Holmberg, Damien Le Moal
On 9/19/19 3:45 AM, Hannes Reinecke wrote:
> Hi all,
>
> Damien pointed out that there are some areas in the blk-mq I/O
> scheduling algorithm which have a distinct legacy feel to it,
> and prohibit multiqueue I/O schedulers from working properly.
> These two patches should clear up this situation, but as it's
> not quite clear what the original intention of the code was
> I'll be posting them as an RFC.
>
> So as usual, comments and reviews are welcome.
>
> Hannes Reinecke (2):
> blk-mq: fixup request re-insert in blk_mq_try_issue_list_directly()
> blk-mq: always call into the scheduler in blk_mq_make_request()
>
> block/blk-mq.c | 9 ++-------
> 1 file changed, 2 insertions(+), 7 deletions(-)
Not quite sure what to do with this... Did you test them at all?
One is obviously broken and would crash the kernel, the other
is/was a performance optimization done not that long ago.
Just going to ignore this series for now.
--
Jens Axboe
* Re: [RFC PATCH 0/2] blk-mq I/O scheduling fixes
2019-09-19 17:48 ` Jens Axboe
@ 2019-09-19 21:11 ` Damien Le Moal
0 siblings, 0 replies; 17+ messages in thread
From: Damien Le Moal @ 2019-09-19 21:11 UTC (permalink / raw)
To: Jens Axboe, Hannes Reinecke
Cc: linux-scsi@vger.kernel.org, Martin K. Petersen, James Bottomley,
Christoph Hellwig, linux-block@vger.kernel.org, Hans Holmberg
On 2019/09/19 19:48, Jens Axboe wrote:
> On 9/19/19 3:45 AM, Hannes Reinecke wrote:
>> Hi all,
>>
>> Damien pointed out that there are some areas in the blk-mq I/O
>> scheduling algorithm which have a distinct legacy feel to it,
>> and prohibit multiqueue I/O schedulers from working properly.
>> These two patches should clear up this situation, but as it's
>> not quite clear what the original intention of the code was
>> I'll be posting them as an RFC.
>>
>> So as usual, comments and reviews are welcome.
>>
>> Hannes Reinecke (2):
>> blk-mq: fixup request re-insert in blk_mq_try_issue_list_directly()
>> blk-mq: always call into the scheduler in blk_mq_make_request()
>>
>> block/blk-mq.c | 9 ++-------
>> 1 file changed, 2 insertions(+), 7 deletions(-)
>
> Not quite sure what to do with this... Did you test them at all?
Yes, Hans tested, but only on one device type, and the bug in patch 1 went
undetected by that test case. Patch 2 does solve our specific problem, which is
that sync writes were bypassing the elevator (mq-deadline), causing unaligned
write errors with a multi-queue zoned device.
> One is obviously broken and would crash the kernel, the other
> is/was a performance optimization done not that long ago.
>
> Just going to ignore this series for now.
Yes, please do. This was hacked together quickly with Hannes yesterday, and
Hannes sent it as an RFC. We have now received plenty of comments (thanks to
all who provided feedback!) and will work on a proper patch series backed by
more testing.
Best regards.
--
Damien Le Moal
Western Digital Research
* Re: [PATCH 1/2] blk-mq: fixup request re-insert in blk_mq_try_issue_list_directly()
2019-09-19 14:19 ` Ming Lei
@ 2019-09-20 6:42 ` Hannes Reinecke
0 siblings, 0 replies; 17+ messages in thread
From: Hannes Reinecke @ 2019-09-20 6:42 UTC (permalink / raw)
To: Ming Lei
Cc: Jens Axboe, linux-scsi, Martin K. Petersen, James Bottomley,
Christoph Hellwig, linux-block, Hans Holmberg, Damien Le Moal,
Hannes Reinecke
On 9/19/19 4:19 PM, Ming Lei wrote:
> On Thu, Sep 19, 2019 at 11:45:46AM +0200, Hannes Reinecke wrote:
>> From: Hannes Reinecke <hare@suse.com>
>>
>> When blk_mq_request_issue_directly() returns BLK_STS_RESOURCE we
>> need to requeue the I/O, but adding it to the global request list
>> will mess up with the passed-in request list. So re-add the request
>
> We always add the request to hctx->dispatch_list after .queue_rq() returns
> BLK_STS_RESOURCE or BLK_STS_DEV_RESOURCE, so what is getting messed up?
>
>> to the original list and leave it to the caller to handle situations
>> where the list wasn't completely emptied.
>>
>> Signed-off-by: Hannes Reinecke <hare@suse.com>
>> ---
>> block/blk-mq.c | 3 +--
>> 1 file changed, 1 insertion(+), 2 deletions(-)
>>
>> diff --git a/block/blk-mq.c b/block/blk-mq.c
>> index b038ec680e84..44ff3c1442a4 100644
>> --- a/block/blk-mq.c
>> +++ b/block/blk-mq.c
>> @@ -1899,8 +1899,7 @@ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
>> 		if (ret != BLK_STS_OK) {
>> 			if (ret == BLK_STS_RESOURCE ||
>> 			    ret == BLK_STS_DEV_RESOURCE) {
>> -				blk_mq_request_bypass_insert(rq,
>> -							list_empty(list));
>> +				list_add(list, &rq->queuelist);
>
> This way the request (with DONTPREP set) may be merged with another rq
> or bio, and potential data corruption may be caused; please see commit:
>
> c616cbee97ae blk-mq: punt failed direct issue to dispatch list
>
Ok.
What triggered this patch is this code:
insert:
	if (bypass_insert)
		return BLK_STS_RESOURCE;
	blk_mq_request_bypass_insert(rq, run_queue);
	return BLK_STS_OK;
}

static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
		struct request *rq, blk_qc_t *cookie)
{
	blk_status_t ret;
	int srcu_idx;

	might_sleep_if(hctx->flags & BLK_MQ_F_BLOCKING);

	hctx_lock(hctx, &srcu_idx);
	ret = __blk_mq_try_issue_directly(hctx, rq, cookie, false, true);
	if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE)
		blk_mq_request_bypass_insert(rq, true);
I.e., blk_mq_request_bypass_insert() will always be called once we hit the
'insert' label, the only difference being the second parameter of that
function.
I'd rather have the sequence consolidated, preferably by calling
blk_mq_request_bypass_insert() in one place only, and not scattering the
calls all over the code.
Cheers,
Hannes
--
Dr. Hannes Reinecke Teamlead Storage & Networking
hare@suse.de +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 247165 (AG München), GF: Felix Imendörffer
Thread overview: 17+ messages
2019-09-19 9:45 [RFC PATCH 0/2] blk-mq I/O scheduling fixes Hannes Reinecke
2019-09-19 9:45 ` [PATCH 1/2] blk-mq: fixup request re-insert in blk_mq_try_issue_list_directly() Hannes Reinecke
2019-09-19 14:19 ` Ming Lei
2019-09-20 6:42 ` Hannes Reinecke
2019-09-19 14:52 ` Guoqing Jiang
2019-09-19 9:45 ` [PATCH 2/2] blk-mq: always call into the scheduler in blk_mq_make_request() Hannes Reinecke
2019-09-19 10:21 ` Damien Le Moal
2019-09-19 14:23 ` Ming Lei
2019-09-19 15:48 ` Kashyap Desai
2019-09-19 16:13 ` Damien Le Moal
2019-09-19 9:56 ` [RFC PATCH 0/2] blk-mq I/O scheduling fixes Liu, Sunny
2019-09-19 10:03 ` Damien Le Moal
[not found] ` <BJXPR01MB0296594F3E478B5BFD4DA2ABF4890@BJXPR01MB0296.CHNPR01.prod.partner.outlook.cn>
2019-09-19 12:44 ` Damien Le Moal
2019-09-19 12:54 ` Liu, Sunny
2019-09-19 12:57 ` Hans Holmberg
2019-09-19 17:48 ` Jens Axboe
2019-09-19 21:11 ` Damien Le Moal