linux-block.vger.kernel.org archive mirror
* [PATCH 0/5] dm-mpath: improve I/O schedule
@ 2017-09-15 16:33 Ming Lei
  0 siblings, 0 replies; 2+ messages in thread
From: Ming Lei @ 2017-09-15 16:33 UTC (permalink / raw)
  To: dm-devel, Mike Snitzer, Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Laurence Oberman,
	Ming Lei

Hi,

We depend on the I/O scheduler at the dm-mpath layer, and the underlying
I/O scheduler is basically bypassed.

An I/O scheduler relies on the queue busy condition to
trigger I/O merging; unfortunately, inside dm-mpath
the underlying queue's busy feedback is not accurate
enough: we just allocate one request and dispatch
it to the underlying queue, no matter whether that queue
is busy or not. As a result, I/O merging is hard to trigger.

This patchset sets the underlying queue's nr_requests to
that queue's queue depth, so that the queue busy state is
detected by checking whether a request can be allocated.

From test results on mq-deadline, sequential I/O performance
is improved a lot; see the test results in patch 5's commit log.

Any comments are welcome!

Thanks,
Ming

Ming Lei (5):
  block: don't call blk_mq_delay_run_hw_queue() in case of
    BLK_STS_RESOURCE
  dm-mpath: return DM_MAPIO_REQUEUE in case of rq allocation failure
  dm-mpath: remove annoying message of 'blk_get_request() returned -11'
  block: export blk_update_nr_requests
  dm-mpath: improve I/O schedule

 block/blk-core.c        |  4 +++-
 block/blk-sysfs.c       |  5 +----
 block/blk.h             |  2 --
 drivers/md/dm-mpath.c   | 30 +++++++++++++++++++++++++++---
 drivers/md/dm-rq.c      |  1 -
 drivers/nvme/host/fc.c  |  3 ---
 drivers/scsi/scsi_lib.c |  4 ----
 include/linux/blkdev.h  |  1 +
 8 files changed, 32 insertions(+), 18 deletions(-)

-- 
2.9.5


* [PATCH 0/5] dm-mpath: improve I/O schedule
@ 2017-09-15 16:44 Ming Lei
  0 siblings, 0 replies; 2+ messages in thread
From: Ming Lei @ 2017-09-15 16:44 UTC (permalink / raw)
  To: dm-devel, Mike Snitzer, Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Laurence Oberman,
	Ming Lei

Hi,

We depend on the I/O scheduler at the dm-mpath layer, and the underlying
I/O scheduler is basically bypassed.

An I/O scheduler relies on the queue busy condition to
trigger I/O merging; unfortunately, inside dm-mpath
the underlying queue's busy feedback is not accurate
enough: we just allocate one request and dispatch
it to the underlying queue, no matter whether that queue
is busy or not. As a result, I/O merging is hard to trigger.

This patchset sets the underlying queue's nr_requests to
that queue's queue depth, so that the queue busy state is
detected by checking whether a request can be allocated.

From test results on mq-deadline, sequential I/O performance
is improved a lot; see the test results in patch 5's commit log.

Any comments are welcome!

Thanks,
Ming

Ming Lei (5):
  block: don't call blk_mq_delay_run_hw_queue() in case of
    BLK_STS_RESOURCE
  dm-mpath: return DM_MAPIO_REQUEUE in case of rq allocation failure
  dm-mpath: remove annoying message of 'blk_get_request() returned -11'
  block: export blk_update_nr_requests
  dm-mpath: improve I/O schedule

 block/blk-core.c        |  4 +++-
 block/blk-sysfs.c       |  5 +----
 block/blk.h             |  2 --
 drivers/md/dm-mpath.c   | 30 +++++++++++++++++++++++++++---
 drivers/md/dm-rq.c      |  1 -
 drivers/nvme/host/fc.c  |  3 ---
 drivers/scsi/scsi_lib.c |  4 ----
 include/linux/blkdev.h  |  1 +
 8 files changed, 32 insertions(+), 18 deletions(-)

-- 
2.9.5


end of thread, other threads:[~2017-09-15 16:45 UTC | newest]

Thread overview: 2+ messages
2017-09-15 16:33 [PATCH 0/5] dm-mpath: improve I/O schedule Ming Lei
2017-09-15 16:44 Ming Lei
