public inbox for linux-block@vger.kernel.org
From: Keith Busch <kbusch@kernel.org>
To: Dongli Zhang <dongli.zhang@oracle.com>
Cc: Ming Lei <ming.lei@redhat.com>,
	Keith Busch <keith.busch@gmail.com>, Jens Axboe <axboe@kernel.dk>,
	"Busch, Keith" <keith.busch@intel.com>,
	Bart Van Assche <bvanassche@acm.org>,
	"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	Jianchao Wang <jianchao.w.wang@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: Re: [PATCH] blk-mq: Wait for hctx requests on CPU unplug
Date: Mon, 8 Apr 2019 09:36:27 -0600	[thread overview]
Message-ID: <20190408153627.GF32498@localhost.localdomain> (raw)
In-Reply-To: <4a57f581-954d-b314-fd1d-6dd30640e0f5@oracle.com>

On Sun, Apr 07, 2019 at 06:55:20AM -0700, Dongli Zhang wrote:
> [PATCH 1/1] blk-mq: do not splice ctx->rq_lists[type] to hctx->dispatch if ctx
> is not mapped to hctx
> 
> When a cpu goes offline, blk_mq_hctx_notify_dead() is called once for
> each hctx, not only for the hctx the offline cpu maps to.
> 
> blk_mq_hctx_notify_dead() splices the dead cpu's ctx->rq_lists[type]
> to hctx->dispatch, but it never checks whether that ctx is actually
> mapped to the hctx being notified.
> 
> For example, on a 4-cpu VM (with nvme), when cpu 2 (of cpus 0-3) is
> offlined, blk_mq_hctx_notify_dead() is called once for each io queue
> hctx:
> 
> 1st: blk_mq_ctx->cpu = 2 for blk_mq_hw_ctx->queue_num = 3
> 2nd: blk_mq_ctx->cpu = 2 for blk_mq_hw_ctx->queue_num = 2
> 3rd: blk_mq_ctx->cpu = 2 for blk_mq_hw_ctx->queue_num = 1
> 4th: blk_mq_ctx->cpu = 2 for blk_mq_hw_ctx->queue_num = 0
> 
> Although blk_mq_ctx->cpu = 2 is mapped only to blk_mq_hw_ctx->queue_num = 2
> in this case, its ctx->rq_lists[type] is nevertheless moved to
> blk_mq_hw_ctx->queue_num = 3 during the 1st call of
> blk_mq_hctx_notify_dead().
> 
> This patch makes blk_mq_hctx_notify_dead() return early when the ctx is
> not mapped to the hctx, leaving the splice to the call on the hctx that
> the ctx actually maps to.

Ha, I think you're right. 

It would be a bit more work, but it might be best if we could avoid
calling the notifier for each hctx that doesn't apply to the CPU. We
might get that by registering a single callback for the request_queue
and looping over only the affected hctx's.

But this patch looks good to me too.

Reviewed-by: Keith Busch <keith.busch@intel.com>
 
> Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
> ---
>  block/blk-mq.c | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 3ff3d7b..b8ef489 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -2212,6 +2212,10 @@ static int blk_mq_hctx_notify_dead(unsigned int cpu, struct hlist_node *node)
>  	enum hctx_type type;
> 
>  	hctx = hlist_entry_safe(node, struct blk_mq_hw_ctx, cpuhp_dead);
> +
> +	if (!cpumask_test_cpu(cpu, hctx->cpumask))
> +		return 0;
> +
>  	ctx = __blk_mq_get_ctx(hctx->queue, cpu);
>  	type = hctx->type;
> 
> -- 
> 2.7.4


Thread overview: 13+ messages
2019-04-05 21:59 [PATCH] blk-mq: Wait for hctx requests on CPU unplug Keith Busch
2019-04-05 22:23 ` Jens Axboe
2019-04-05 22:37   ` Keith Busch
2019-04-05 23:04     ` Jens Axboe
2019-04-05 23:36       ` Keith Busch
2019-04-06  9:44         ` Dongli Zhang
2019-04-06 21:27         ` Ming Lei
2019-04-07 13:55           ` Dongli Zhang
2019-04-08  9:49             ` Ming Lei
2019-04-08 15:36             ` Keith Busch [this message]
2019-04-08 15:21           ` Keith Busch
2019-04-07  7:51         ` Christoph Hellwig
2019-04-08 15:23           ` Keith Busch
