* [PATCH V3 0/2] blk-mq/nvme: cancel request synchronously
@ 2019-04-08  9:40 Ming Lei
  2019-04-08  9:40 ` [PATCH V3 1/2] blk-mq: introduce blk_mq_complete_request_sync() Ming Lei
  2019-04-08  9:40 ` [PATCH V3 2/2] nvme: cancel request synchronously Ming Lei
  0 siblings, 2 replies; 5+ messages in thread
From: Ming Lei @ 2019-04-08  9:40 UTC


Hi,

This patchset introduces blk_mq_complete_request_sync() for canceling
requests synchronously in error-handler context, which fixes a race
between completing a request remotely and destroying the controller/queues.
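
For background, blk_mq_complete_request() may bounce the completion to
the submitting CPU via IPI, which is what makes it asynchronous. A
simplified sketch of that block-layer path (the cache-sharing and
cpu-online checks are elided here):

static void __blk_mq_complete_request(struct request *rq)
{
	int cpu = get_cpu();

	if (cpu != rq->mq_ctx->cpu) {
		/* ->complete(rq) runs later, on the submitting CPU */
		rq->csd.func = __blk_mq_complete_request_remote;
		rq->csd.info = rq;
		smp_call_function_single_async(rq->mq_ctx->cpu, &rq->csd);
	} else {
		/* same CPU: completes synchronously right here */
		rq->q->mq_ops->complete(rq);
	}
	put_cpu();
}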


V3:
	- avoid extra cost to blk_mq_complete_request

V2:
	- export via EXPORT_SYMBOL_GPL
	- minor commit log change


Ming Lei (2):
  blk-mq: introduce blk_mq_complete_request_sync()
  nvme: cancel request synchronously

 block/blk-mq.c           | 11 +++++++++++
 drivers/nvme/host/core.c |  2 +-
 include/linux/blk-mq.h   |  1 +
 3 files changed, 13 insertions(+), 1 deletion(-)

Cc: Keith Busch <kbusch at kernel.org>
Cc: Sagi Grimberg <sagi at grimberg.me>
Cc: Bart Van Assche <bvanassche at acm.org>
Cc: James Smart <james.smart at broadcom.com>
Cc: Christoph Hellwig <hch at lst.de>
Cc: linux-nvme at lists.infradead.org

-- 
2.9.5


* [PATCH V3 1/2] blk-mq: introduce blk_mq_complete_request_sync()
  2019-04-08  9:40 [PATCH V3 0/2] blk-mq/nvme: cancel request synchronously Ming Lei
@ 2019-04-08  9:40 ` Ming Lei
  2019-04-08 16:15   ` Keith Busch
  2019-04-08  9:40 ` [PATCH V3 2/2] nvme: cancel request synchronously Ming Lei
  1 sibling, 1 reply; 5+ messages in thread
From: Ming Lei @ 2019-04-08  9:40 UTC


NVMe's error handler follows these typical steps to tear down the
hardware when recovering the controller:

1) stop blk_mq hw queues
2) stop the real hw queues
3) cancel in-flight requests via
	blk_mq_tagset_busy_iter(tags, cancel_request, ...)
cancel_request():
	mark the request as aborted
	blk_mq_complete_request(req);
4) destroy real hw queues

However, there may be a race between #3 and #4: blk_mq_complete_request()
may run q->mq_ops->complete(rq) remotely and asynchronously, so
->complete(rq) may run after #4 has destroyed the real hw queues.

This patch introduces blk_mq_complete_request_sync() to fix the
above race.
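
For illustration, a minimal sketch of how a driver could wire the new
helper into the teardown sequence above (the foo_* names are
hypothetical, invented to mirror steps 1)-4)):

static bool foo_cancel_request(struct request *req, void *data,
		bool reserved)
{
	/*
	 * Step 3: complete in the caller's context.  No remote IPI is
	 * involved, so no ->complete(rq) can sneak in after step 4.
	 */
	blk_mq_complete_request_sync(req);
	return true;
}

static void foo_teardown(struct foo_ctrl *ctrl)
{
	blk_mq_quiesce_queue(ctrl->q);			/* step 1 */
	foo_stop_hw_queues(ctrl);			/* step 2 */
	blk_mq_tagset_busy_iter(ctrl->tagset,		/* step 3 */
				foo_cancel_request, ctrl);
	foo_destroy_hw_queues(ctrl);			/* step 4: now safe */
}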

Cc: Keith Busch <kbusch at kernel.org>
Cc: Sagi Grimberg <sagi at grimberg.me>
Cc: Bart Van Assche <bvanassche at acm.org>
Cc: James Smart <james.smart at broadcom.com>
Cc: Christoph Hellwig <hch at lst.de>
Cc: linux-nvme at lists.infradead.org
Signed-off-by: Ming Lei <ming.lei at redhat.com>
---
 block/blk-mq.c         | 11 +++++++++++
 include/linux/blk-mq.h |  1 +
 2 files changed, 12 insertions(+)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index a9354835cf51..d8d89f3514ac 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -654,6 +654,17 @@ bool blk_mq_complete_request(struct request *rq)
 }
 EXPORT_SYMBOL(blk_mq_complete_request);
 
+bool blk_mq_complete_request_sync(struct request *rq)
+{
+	if (unlikely(blk_should_fake_timeout(rq->q)))
+		return false;
+
+	WRITE_ONCE(rq->state, MQ_RQ_COMPLETE);
+	rq->q->mq_ops->complete(rq);
+	return true;
+}
+EXPORT_SYMBOL_GPL(blk_mq_complete_request_sync);
+
 int blk_mq_request_started(struct request *rq)
 {
 	return blk_mq_rq_state(rq) != MQ_RQ_IDLE;
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index cb2aa7ecafff..1412c983e7b8 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -302,6 +302,7 @@ void blk_mq_requeue_request(struct request *rq, bool kick_requeue_list);
 void blk_mq_kick_requeue_list(struct request_queue *q);
 void blk_mq_delay_kick_requeue_list(struct request_queue *q, unsigned long msecs);
 bool blk_mq_complete_request(struct request *rq);
+bool blk_mq_complete_request_sync(struct request *rq);
 bool blk_mq_bio_list_merge(struct request_queue *q, struct list_head *list,
 			   struct bio *bio);
 bool blk_mq_queue_stopped(struct request_queue *q);
-- 
2.9.5


* [PATCH V3 2/2] nvme: cancel request synchronously
  2019-04-08  9:40 [PATCH V3 0/2] blk-mq/nvme: cancel request synchronously Ming Lei
  2019-04-08  9:40 ` [PATCH V3 1/2] blk-mq: introduce blk_mq_complete_request_sync() Ming Lei
@ 2019-04-08  9:40 ` Ming Lei
  1 sibling, 0 replies; 5+ messages in thread
From: Ming Lei @ 2019-04-08  9:40 UTC


nvme_cancel_request() is used in the error handler. Canceling the
request synchronously is always safe there, and it avoids a possible
race in which the request is completed after the real hw queue has
been destroyed.

One such issue was reported by our customer on NVMe RDMA, in which a
freed ib queue pair may be used in nvme_rdma_complete_rq().
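
A rough timeline of that use-after-free (the IPI path is the one
sketched in the cover letter):

CPU A (error handler)                    CPU B (IPI target)
blk_mq_tagset_busy_iter()
  nvme_cancel_request(req)
    blk_mq_complete_request(req)
      smp_call_function_single_async() ->  completion now pending
nvme_rdma_destroy_queue_ib()
  frees the ib queue pair
                                         nvme_rdma_complete_rq(req)
                                           uses the freed ib queue pair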

Cc: Keith Busch <kbusch at kernel.org>
Cc: Sagi Grimberg <sagi at grimberg.me>
Cc: Bart Van Assche <bvanassche at acm.org>
Cc: James Smart <james.smart at broadcom.com>
Cc: Christoph Hellwig <hch at lst.de>
Cc: linux-nvme at lists.infradead.org
Reviewed-by: Christoph Hellwig <hch at lst.de>
Signed-off-by: Ming Lei <ming.lei at redhat.com>
---
 drivers/nvme/host/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 470601980794..2c43e12b70af 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -288,7 +288,7 @@ bool nvme_cancel_request(struct request *req, void *data, bool reserved)
 				"Cancelling I/O %d", req->tag);
 
 	nvme_req(req)->status = NVME_SC_ABORT_REQ;
-	blk_mq_complete_request(req);
+	blk_mq_complete_request_sync(req);
 	return true;
 }
 EXPORT_SYMBOL_GPL(nvme_cancel_request);
-- 
2.9.5


* [PATCH V3 1/2] blk-mq: introduce blk_mq_complete_request_sync()
  2019-04-08  9:40 ` [PATCH V3 1/2] blk-mq: introduce blk_mq_complete_request_sync() Ming Lei
@ 2019-04-08 16:15   ` Keith Busch
  2019-04-08 16:16     ` Christoph Hellwig
  0 siblings, 1 reply; 5+ messages in thread
From: Keith Busch @ 2019-04-08 16:15 UTC


On Mon, Apr 08, 2019 at 05:40:46PM +0800, Ming Lei wrote:
> NVMe's error handler follows these typical steps to tear down the
> hardware when recovering the controller:
> 
> 1) stop blk_mq hw queues
> 2) stop the real hw queues
> 3) cancel in-flight requests via
> 	blk_mq_tagset_busy_iter(tags, cancel_request, ...)
> cancel_request():
> 	mark the request as aborted
> 	blk_mq_complete_request(req);
> 4) destroy real hw queues
> 
> However, there may be a race between #3 and #4: blk_mq_complete_request()
> may run q->mq_ops->complete(rq) remotely and asynchronously, so
> ->complete(rq) may run after #4 has destroyed the real hw queues.
> 
> This patch introduces blk_mq_complete_request_sync() to fix the
> above race.
> 
> Cc: Keith Busch <kbusch at kernel.org>
> Cc: Sagi Grimberg <sagi at grimberg.me>
> Cc: Bart Van Assche <bvanassche at acm.org>
> Cc: James Smart <james.smart at broadcom.com>
> Cc: Christoph Hellwig <hch at lst.de>
> Cc: linux-nvme at lists.infradead.org
> Signed-off-by: Ming Lei <ming.lei at redhat.com>
> ---
>  block/blk-mq.c         | 11 +++++++++++
>  include/linux/blk-mq.h |  1 +
>  2 files changed, 12 insertions(+)
> 
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index a9354835cf51..d8d89f3514ac 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -654,6 +654,17 @@ bool blk_mq_complete_request(struct request *rq)
>  }
>  EXPORT_SYMBOL(blk_mq_complete_request);
>  
> +bool blk_mq_complete_request_sync(struct request *rq)
> +{
> +	if (unlikely(blk_should_fake_timeout(rq->q)))
> +		return false;
> +
> +	WRITE_ONCE(rq->state, MQ_RQ_COMPLETE);
> +	rq->q->mq_ops->complete(rq);
> +	return true;
> +}
> +EXPORT_SYMBOL_GPL(blk_mq_complete_request_sync);

Could we possibly drop the fake timeout check in this path? We're using
this in error handling, which is past the point of pretending that
completions didn't happen.

Otherwise this all looks good to me.


* [PATCH V3 1/2] blk-mq: introduce blk_mq_complete_request_sync()
  2019-04-08 16:15   ` Keith Busch
@ 2019-04-08 16:16     ` Christoph Hellwig
  0 siblings, 0 replies; 5+ messages in thread
From: Christoph Hellwig @ 2019-04-08 16:16 UTC


On Mon, Apr 08, 2019 at 10:15:05AM -0600, Keith Busch wrote:
> > +bool blk_mq_complete_request_sync(struct request *rq)
> > +{
> > +	if (unlikely(blk_should_fake_timeout(rq->q)))
> > +		return false;
> > +
> > +	WRITE_ONCE(rq->state, MQ_RQ_COMPLETE);
> > +	rq->q->mq_ops->complete(rq);
> > +	return true;
> > +}
> > +EXPORT_SYMBOL_GPL(blk_mq_complete_request_sync);
> 
> Could we possibly drop the fake timeout in this path? We're using this
> in error handling that is past pretending completing requests didn't
> happen.

.. and at that point we can also drop the return value.
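
With both suggestions applied, the helper would reduce to something
like this (a sketch of the proposed simplification, not necessarily
what gets merged):

void blk_mq_complete_request_sync(struct request *rq)
{
	WRITE_ONCE(rq->state, MQ_RQ_COMPLETE);
	rq->q->mq_ops->complete(rq);
}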

