From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe <axboe@kernel.dk>
Cc: linux-block@vger.kernel.org, Ming Lei <ming.lei@redhat.com>,
John Garry <john.garry@huawei.com>,
Bart Van Assche <bvanassche@acm.org>,
Hannes Reinecke <hare@suse.com>, Christoph Hellwig <hch@lst.de>,
Thomas Gleixner <tglx@linutronix.de>
Subject: [PATCH V7 0/9] blk-mq: improvement CPU hotplug
Date: Sat, 18 Apr 2020 11:09:16 +0800 [thread overview]
Message-ID: <20200418030925.31996-1-ming.lei@redhat.com> (raw)
Hi,
Thomas mentioned:
"
That was the constraint of managed interrupts from the very beginning:
The driver/subsystem has to quiesce the interrupt line and the associated
queue _before_ it gets shutdown in CPU unplug and not fiddle with it
until it's restarted by the core when the CPU is plugged in again.
"
However, neither drivers nor blk-mq do that before a hctx becomes inactive (i.e.
before all CPUs mapped to the hctx are offline). Even worse, blk-mq still tries
to run the hw queue after the hctx is dead; see blk_mq_hctx_notify_dead().
This patchset addresses the issue in two stages:
1) add a new cpuhp state, CPUHP_AP_BLK_MQ_ONLINE
- mark the hctx as inactive, and drain all in-flight requests
if the hctx is going to be dead.
2) re-submit IO in the CPUHP_BLK_MQ_DEAD handler after the hctx becomes dead
- steal the bios from each remaining request and resubmit them via
generic_make_request(), so this IO is mapped to other live hctxs for dispatch
Thanks to John Garry for running lots of tests on arm64 with this patchset
and for working together on investigating all kinds of issues.
Please comment & review, thanks!
https://github.com/ming1/linux/commits/v5.7-rc-blk-mq-improve-cpu-hotplug
V7:
- fix updating .nr_active in get_driver_tag
- add hctx->cpumask check in cpuhp handler
- only drain requests whose tag is >= 0
- pass more aggressive CPU hotplug & IO tests
V6:
- simplify getting driver tag, so that we can drain in-flight
requests correctly without using synchronize_rcu()
- handle re-submission of flush & passthrough request correctly
V5:
- rename BLK_MQ_S_INTERNAL_STOPPED as BLK_MQ_S_INACTIVE
- re-factor code for re-submit requests in cpu dead hotplug handler
- address requeue corner case
V4:
- resubmit IOs in dispatch list in case that this hctx is dead
V3:
- re-organize patch 2 & 3 a bit for addressing Hannes's comment
- fix patch 4 for avoiding potential deadlock, as found by Hannes
V2:
- patch 4 & patch 5 in V1 have been merged into the block tree, so remove
them
- address comments from John Garry and Minwoo
Ming Lei (9):
blk-mq: mark blk_mq_get_driver_tag as static
blk-mq: assign rq->tag in blk_mq_get_driver_tag
blk-mq: prepare for draining IO when hctx's all CPUs are offline
blk-mq: support rq filter callback when iterating rqs
blk-mq: stop to handle IO and drain IO before hctx becomes inactive
block: add blk_end_flush_machinery
blk-mq: re-submit IO in case that hctx is inactive
blk-mq: handle requests dispatched from IO scheduler in case of
inactive hctx
block: deactivate hctx when the hctx is actually inactive
block/blk-flush.c | 143 +++++++++++---
block/blk-mq-debugfs.c | 2 +
block/blk-mq-tag.c | 39 ++--
block/blk-mq-tag.h | 4 +
block/blk-mq.c | 384 ++++++++++++++++++++++++++++++-------
block/blk-mq.h | 25 ++-
block/blk.h | 9 +-
drivers/block/loop.c | 2 +-
drivers/md/dm-rq.c | 2 +-
include/linux/blk-mq.h | 6 +
include/linux/cpuhotplug.h | 1 +
11 files changed, 495 insertions(+), 122 deletions(-)
Cc: John Garry <john.garry@huawei.com>
Cc: Bart Van Assche <bvanassche@acm.org>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
--
2.25.2
Thread overview: 22+ messages
2020-04-18 3:09 Ming Lei [this message]
2020-04-18 3:09 ` [PATCH V7 1/9] blk-mq: mark blk_mq_get_driver_tag as static Ming Lei
2020-04-23 7:14 ` Christoph Hellwig
2020-04-18 3:09 ` [PATCH V7 2/9] blk-mq: assign rq->tag in blk_mq_get_driver_tag Ming Lei
2020-04-23 7:30 ` Christoph Hellwig
2020-04-18 3:09 ` [PATCH V7 3/9] blk-mq: prepare for draining IO when hctx's all CPUs are offline Ming Lei
2020-04-23 7:31 ` Christoph Hellwig
2020-04-18 3:09 ` [PATCH V7 4/9] blk-mq: support rq filter callback when iterating rqs Ming Lei
2020-04-20 10:34 ` John Garry
2020-04-23 7:31 ` Christoph Hellwig
2020-04-23 7:32 ` Christoph Hellwig
2020-04-18 3:09 ` [PATCH V7 5/9] blk-mq: stop to handle IO and drain IO before hctx becomes inactive Ming Lei
2020-04-23 7:38 ` Christoph Hellwig
2020-04-18 3:09 ` [PATCH V7 6/9] block: add blk_end_flush_machinery Ming Lei
2020-04-23 7:40 ` Christoph Hellwig
2020-04-18 3:09 ` [PATCH V7 7/9] blk-mq: re-submit IO in case that hctx is inactive Ming Lei
2020-04-23 7:50 ` Christoph Hellwig
2020-04-23 8:46 ` Ming Lei
2020-04-18 3:09 ` [PATCH V7 8/9] blk-mq: handle requests dispatched from IO scheduler in case of inactive hctx Ming Lei
2020-04-23 7:51 ` Christoph Hellwig
2020-04-18 3:09 ` [PATCH V7 9/9] block: deactivate hctx when the hctx is actually inactive Ming Lei
2020-04-20 10:29 ` [PATCH V7 0/9] blk-mq: improvement CPU hotplug John Garry