public inbox for linux-block@vger.kernel.org
 help / color / mirror / Atom feed
* [PATCH v2 0/8] correct quiescing in several block drivers
@ 2017-07-05  6:53 Sagi Grimberg
  2017-07-05  6:53 ` [PATCH v2 1/8] nvme-rdma: quiesce/unquiesce admin_q instead of start/stop its hw queues Sagi Grimberg
                   ` (7 more replies)
  0 siblings, 8 replies; 17+ messages in thread
From: Sagi Grimberg @ 2017-07-05  6:53 UTC (permalink / raw)
  To: Jens Axboe, linux-block
  Cc: linux-nvme, Christoph Hellwig, Keith Busch, Ming Lei

Before we iterate on a tagset's tags we must guarantee that the
hw queues are quiesced and that no .queue_rq invocation is still in flight.
That's what blk_mq_quiesce_queue is for, so use it where appropriate.

Note that before we clean up the request queue, we must unquiesce it in
order to guarantee that we do not block on requests queued while it was
quiesced.

Changes from v1:
- Dropped the xen_blockfront patch, as it's not clear it was helpful at all.
- Removed the requeue kick in the nvme drivers by making nvme unconditionally
  kick the requeue list when requeuing a request instead of doing so when
  unquiescing the queues. Note that this was a two-step change to
  keep bisection possible.
- Fixed possible hangs by unquiescing before cleanup of the request queue
  in the mtip32xx and nvme drivers.
- Added a check that the request has started in the mtip32xx iteration
  callouts.
- Collected review tags.

Sagi Grimberg (8):
  nvme-rdma: quiesce/unquiesce admin_q instead of start/stop its hw
    queues
  nvme-fc: quiesce/unquiesce admin_q instead of start/stop its hw queues
  nvme-loop: quiesce/unquiesce admin_q instead of start/stop its hw
    queues
  nvme-pci: quiesce/unquiesce admin_q instead of start/stop its hw
    queues
  nvme: kick requeue list when requeueing a request instead of when
    starting the queues
  nbd: quiesce request queues to make sure no submissions are inflight
  mtip32xx: quiesce request queues to make sure no submissions are
    inflight
  virtio_blk: quiesce/unquiesce live IO when entering PM states

 drivers/block/mtip32xx/mtip32xx.c | 19 +++++++++++++++----
 drivers/block/nbd.c               |  4 ++--
 drivers/block/virtio_blk.c        |  4 ++--
 drivers/nvme/host/core.c          | 19 ++-----------------
 drivers/nvme/host/fc.c            |  5 +++--
 drivers/nvme/host/pci.c           |  9 +++++----
 drivers/nvme/host/rdma.c          |  7 ++++---
 drivers/nvme/target/loop.c        |  3 ++-
 8 files changed, 35 insertions(+), 35 deletions(-)

-- 
2.7.4

^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH v2 1/8] nvme-rdma: quiesce/unquiesce admin_q instead of start/stop its hw queues
  2017-07-05  6:53 [PATCH v2 0/8] correct quiescing in several block drivers Sagi Grimberg
@ 2017-07-05  6:53 ` Sagi Grimberg
  2017-07-05  7:12   ` Ming Lei
  2017-07-05  6:53 ` [PATCH v2 2/8] nvme-fc: " Sagi Grimberg
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Sagi Grimberg @ 2017-07-05  6:53 UTC (permalink / raw)
  To: Jens Axboe, linux-block
  Cc: linux-nvme, Christoph Hellwig, Keith Busch, Ming Lei

Unlike blk_mq_stop_hw_queues and blk_mq_start_stopped_hw_queues,
quiescing/unquiescing respects the submission path RCU grace period.
Also make sure to kick the requeue list when appropriate.

Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/nvme/host/rdma.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index cfb22531fc16..c8bf2606ba64 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -778,7 +778,7 @@ static void nvme_rdma_error_recovery_work(struct work_struct *work)
 
 	if (ctrl->ctrl.queue_count > 1)
 		nvme_stop_queues(&ctrl->ctrl);
-	blk_mq_stop_hw_queues(ctrl->ctrl.admin_q);
+	blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
 
 	/* We must take care of fastfail/requeue all our inflight requests */
 	if (ctrl->ctrl.queue_count > 1)
@@ -791,7 +791,8 @@ static void nvme_rdma_error_recovery_work(struct work_struct *work)
 	 * queues are not a live anymore, so restart the queues to fail fast
 	 * new IO
 	 */
-	blk_mq_start_stopped_hw_queues(ctrl->ctrl.admin_q, true);
+	blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
+	blk_mq_kick_requeue_list(ctrl->ctrl.admin_q);
 	nvme_start_queues(&ctrl->ctrl);
 
 	nvme_rdma_reconnect_or_remove(ctrl);
@@ -1636,9 +1637,10 @@ static void nvme_rdma_shutdown_ctrl(struct nvme_rdma_ctrl *ctrl)
 	if (test_bit(NVME_RDMA_Q_LIVE, &ctrl->queues[0].flags))
 		nvme_shutdown_ctrl(&ctrl->ctrl);
 
-	blk_mq_stop_hw_queues(ctrl->ctrl.admin_q);
+	blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
 	blk_mq_tagset_busy_iter(&ctrl->admin_tag_set,
 				nvme_cancel_request, &ctrl->ctrl);
+	blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
 	nvme_rdma_destroy_admin_queue(ctrl);
 }
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v2 2/8] nvme-fc: quiesce/unquiesce admin_q instead of start/stop its hw queues
  2017-07-05  6:53 [PATCH v2 0/8] correct quiescing in several block drivers Sagi Grimberg
  2017-07-05  6:53 ` [PATCH v2 1/8] nvme-rdma: quiesce/unquiesce admin_q instead of start/stop its hw queues Sagi Grimberg
@ 2017-07-05  6:53 ` Sagi Grimberg
  2017-07-05  7:16   ` Ming Lei
  2017-07-05  6:53 ` [PATCH v2 3/8] nvme-loop: " Sagi Grimberg
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Sagi Grimberg @ 2017-07-05  6:53 UTC (permalink / raw)
  To: Jens Axboe, linux-block
  Cc: linux-nvme, Christoph Hellwig, Keith Busch, Ming Lei

Unlike blk_mq_stop_hw_queues and blk_mq_start_stopped_hw_queues,
quiescing/unquiescing respects the submission path RCU grace period.

Also, make sure to unquiesce before cleaning up the admin queue.

Reviewed-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/nvme/host/fc.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index 50cc3b2b0e11..3e35fc622680 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -1703,6 +1703,7 @@ nvme_fc_ctrl_free(struct kref *ref)
 	list_del(&ctrl->ctrl_list);
 	spin_unlock_irqrestore(&ctrl->rport->lock, flags);
 
+	blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
 	blk_cleanup_queue(ctrl->ctrl.admin_q);
 	blk_mq_free_tag_set(&ctrl->admin_tag_set);
 
@@ -2320,8 +2321,10 @@ nvme_fc_create_association(struct nvme_fc_ctrl *ctrl)
 	if (ret)
 		goto out_delete_hw_queue;
 
-	if (ctrl->ctrl.state != NVME_CTRL_NEW)
-		blk_mq_start_stopped_hw_queues(ctrl->ctrl.admin_q, true);
+	if (ctrl->ctrl.state != NVME_CTRL_NEW) {
+		blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
+		blk_mq_kick_requeue_list(ctrl->ctrl.admin_q);
+	}
 
 	ret = nvmf_connect_admin_queue(&ctrl->ctrl);
 	if (ret)
@@ -2475,7 +2478,7 @@ nvme_fc_delete_association(struct nvme_fc_ctrl *ctrl)
 	 * use blk_mq_tagset_busy_itr() and the transport routine to
 	 * terminate the exchanges.
 	 */
-	blk_mq_stop_hw_queues(ctrl->ctrl.admin_q);
+	blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
 	blk_mq_tagset_busy_iter(&ctrl->admin_tag_set,
 				nvme_fc_terminate_exchange, &ctrl->ctrl);
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v2 3/8] nvme-loop: quiesce/unquiesce admin_q instead of start/stop its hw queues
  2017-07-05  6:53 [PATCH v2 0/8] correct quiescing in several block drivers Sagi Grimberg
  2017-07-05  6:53 ` [PATCH v2 1/8] nvme-rdma: quiesce/unquiesce admin_q instead of start/stop its hw queues Sagi Grimberg
  2017-07-05  6:53 ` [PATCH v2 2/8] nvme-fc: " Sagi Grimberg
@ 2017-07-05  6:53 ` Sagi Grimberg
  2017-07-05  7:14   ` Ming Lei
  2017-07-05  6:53 ` [PATCH v2 4/8] nvme-pci: " Sagi Grimberg
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Sagi Grimberg @ 2017-07-05  6:53 UTC (permalink / raw)
  To: Jens Axboe, linux-block
  Cc: linux-nvme, Christoph Hellwig, Keith Busch, Ming Lei

Unlike blk_mq_stop_hw_queues and blk_mq_start_stopped_hw_queues,
quiescing/unquiescing respects the submission path RCU grace period.

Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/nvme/target/loop.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
index 3d51341e62ee..6a0b70685e77 100644
--- a/drivers/nvme/target/loop.c
+++ b/drivers/nvme/target/loop.c
@@ -434,9 +434,10 @@ static void nvme_loop_shutdown_ctrl(struct nvme_loop_ctrl *ctrl)
 	if (ctrl->ctrl.state == NVME_CTRL_LIVE)
 		nvme_shutdown_ctrl(&ctrl->ctrl);
 
-	blk_mq_stop_hw_queues(ctrl->ctrl.admin_q);
+	blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
 	blk_mq_tagset_busy_iter(&ctrl->admin_tag_set,
 				nvme_cancel_request, &ctrl->ctrl);
+	blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
 	nvme_loop_destroy_admin_queue(ctrl);
 }
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v2 4/8] nvme-pci: quiesce/unquiesce admin_q instead of start/stop its hw queues
  2017-07-05  6:53 [PATCH v2 0/8] correct quiescing in several block drivers Sagi Grimberg
                   ` (2 preceding siblings ...)
  2017-07-05  6:53 ` [PATCH v2 3/8] nvme-loop: " Sagi Grimberg
@ 2017-07-05  6:53 ` Sagi Grimberg
  2017-07-05  7:20   ` Ming Lei
  2017-07-05  6:53 ` [PATCH v2 5/8] nvme: kick requeue list when requeueing a request instead of when starting the queues Sagi Grimberg
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Sagi Grimberg @ 2017-07-05  6:53 UTC (permalink / raw)
  To: Jens Axboe, linux-block
  Cc: linux-nvme, Christoph Hellwig, Keith Busch, Ming Lei

Unlike blk_mq_stop_hw_queues and blk_mq_start_stopped_hw_queues,
quiescing/unquiescing respects the submission path RCU grace period.

Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/nvme/host/pci.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index eb729ff70e7d..df7c8a355075 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1125,7 +1125,7 @@ static int nvme_suspend_queue(struct nvme_queue *nvmeq)
 	spin_unlock_irq(&nvmeq->q_lock);
 
 	if (!nvmeq->qid && nvmeq->dev->ctrl.admin_q)
-		blk_mq_stop_hw_queues(nvmeq->dev->ctrl.admin_q);
+		blk_mq_quiesce_queue(nvmeq->dev->ctrl.admin_q);
 
 	pci_free_irq(to_pci_dev(nvmeq->dev->dev), vector, nvmeq);
 
@@ -1315,7 +1315,7 @@ static void nvme_dev_remove_admin(struct nvme_dev *dev)
 		 * user requests may be waiting on a stopped queue. Start the
 		 * queue to flush these to completion.
 		 */
-		blk_mq_start_stopped_hw_queues(dev->ctrl.admin_q, true);
+		blk_mq_unquiesce_queue(dev->ctrl.admin_q);
 		blk_cleanup_queue(dev->ctrl.admin_q);
 		blk_mq_free_tag_set(&dev->admin_tagset);
 	}
@@ -1351,8 +1351,10 @@ static int nvme_alloc_admin_tags(struct nvme_dev *dev)
 			dev->ctrl.admin_q = NULL;
 			return -ENODEV;
 		}
-	} else
-		blk_mq_start_stopped_hw_queues(dev->ctrl.admin_q, true);
+	} else {
+		blk_mq_unquiesce_queue(dev->ctrl.admin_q);
+		blk_mq_kick_requeue_list(dev->ctrl.admin_q);
+	}
 
 	return 0;
 }
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v2 5/8] nvme: kick requeue list when requeueing a request instead of when starting the queues
  2017-07-05  6:53 [PATCH v2 0/8] correct quiescing in several block drivers Sagi Grimberg
                   ` (3 preceding siblings ...)
  2017-07-05  6:53 ` [PATCH v2 4/8] nvme-pci: " Sagi Grimberg
@ 2017-07-05  6:53 ` Sagi Grimberg
  2017-07-05  7:22   ` Ming Lei
  2017-07-05  6:53 ` [PATCH v2 6/8] nbd: quiesce request queues to make sure no submissions are inflight Sagi Grimberg
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Sagi Grimberg @ 2017-07-05  6:53 UTC (permalink / raw)
  To: Jens Axboe, linux-block
  Cc: linux-nvme, Christoph Hellwig, Keith Busch, Ming Lei

When we requeue a request, we can always insert the request
back into the scheduler instead of doing it when restarting
the queues and kicking the requeue work, so get rid of
the requeue kick in nvme (core and drivers).

Also, there is no longer any need to start hw queues in
nvme_kill_queues: we don't stop the hw queues anymore, so there
is nothing to start.

Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/nvme/host/core.c | 19 ++-----------------
 drivers/nvme/host/fc.c   |  4 +---
 drivers/nvme/host/pci.c  |  1 -
 drivers/nvme/host/rdma.c |  1 -
 4 files changed, 3 insertions(+), 22 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index d70df1d0072d..48cafaa6fbc5 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -131,7 +131,7 @@ void nvme_complete_rq(struct request *req)
 {
 	if (unlikely(nvme_req(req)->status && nvme_req_needs_retry(req))) {
 		nvme_req(req)->retries++;
-		blk_mq_requeue_request(req, !blk_mq_queue_stopped(req->q));
+		blk_mq_requeue_request(req, true);
 		return;
 	}
 
@@ -2694,9 +2694,6 @@ void nvme_kill_queues(struct nvme_ctrl *ctrl)
 	/* Forcibly unquiesce queues to avoid blocking dispatch */
 	blk_mq_unquiesce_queue(ctrl->admin_q);
 
-	/* Forcibly start all queues to avoid having stuck requests */
-	blk_mq_start_hw_queues(ctrl->admin_q);
-
 	list_for_each_entry(ns, &ctrl->namespaces, list) {
 		/*
 		 * Revalidating a dead namespace sets capacity to 0. This will
@@ -2709,16 +2706,6 @@ void nvme_kill_queues(struct nvme_ctrl *ctrl)
 
 		/* Forcibly unquiesce queues to avoid blocking dispatch */
 		blk_mq_unquiesce_queue(ns->queue);
-
-		/*
-		 * Forcibly start all queues to avoid having stuck requests.
-		 * Note that we must ensure the queues are not stopped
-		 * when the final removal happens.
-		 */
-		blk_mq_start_hw_queues(ns->queue);
-
-		/* draining requests in requeue list */
-		blk_mq_kick_requeue_list(ns->queue);
 	}
 	mutex_unlock(&ctrl->namespaces_mutex);
 }
@@ -2787,10 +2774,8 @@ void nvme_start_queues(struct nvme_ctrl *ctrl)
 	struct nvme_ns *ns;
 
 	mutex_lock(&ctrl->namespaces_mutex);
-	list_for_each_entry(ns, &ctrl->namespaces, list) {
+	list_for_each_entry(ns, &ctrl->namespaces, list)
 		blk_mq_unquiesce_queue(ns->queue);
-		blk_mq_kick_requeue_list(ns->queue);
-	}
 	mutex_unlock(&ctrl->namespaces_mutex);
 }
 EXPORT_SYMBOL_GPL(nvme_start_queues);
diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index 3e35fc622680..8d55e7827932 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -2321,10 +2321,8 @@ nvme_fc_create_association(struct nvme_fc_ctrl *ctrl)
 	if (ret)
 		goto out_delete_hw_queue;
 
-	if (ctrl->ctrl.state != NVME_CTRL_NEW) {
+	if (ctrl->ctrl.state != NVME_CTRL_NEW)
 		blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
-		blk_mq_kick_requeue_list(ctrl->ctrl.admin_q);
-	}
 
 	ret = nvmf_connect_admin_queue(&ctrl->ctrl);
 	if (ret)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index df7c8a355075..8d8e2eddb3da 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1353,7 +1353,6 @@ static int nvme_alloc_admin_tags(struct nvme_dev *dev)
 		}
 	} else {
 		blk_mq_unquiesce_queue(dev->ctrl.admin_q);
-		blk_mq_kick_requeue_list(dev->ctrl.admin_q);
 	}
 
 	return 0;
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index c8bf2606ba64..f2cf9638b4d2 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -792,7 +792,6 @@ static void nvme_rdma_error_recovery_work(struct work_struct *work)
 	 * new IO
 	 */
 	blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
-	blk_mq_kick_requeue_list(ctrl->ctrl.admin_q);
 	nvme_start_queues(&ctrl->ctrl);
 
 	nvme_rdma_reconnect_or_remove(ctrl);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v2 6/8] nbd: quiesce request queues to make sure no submissions are inflight
  2017-07-05  6:53 [PATCH v2 0/8] correct quiescing in several block drivers Sagi Grimberg
                   ` (4 preceding siblings ...)
  2017-07-05  6:53 ` [PATCH v2 5/8] nvme: kick requeue list when requeueing a request instead of when starting the queues Sagi Grimberg
@ 2017-07-05  6:53 ` Sagi Grimberg
  2017-07-05  6:53 ` [PATCH v2 7/8] mtip32xx: " Sagi Grimberg
  2017-07-05  6:53 ` [PATCH v2 8/8] virtio_blk: quiesce/unquiesce live IO when entering PM states Sagi Grimberg
  7 siblings, 0 replies; 17+ messages in thread
From: Sagi Grimberg @ 2017-07-05  6:53 UTC (permalink / raw)
  To: Jens Axboe, linux-block
  Cc: linux-nvme, Christoph Hellwig, Keith Busch, Ming Lei, Josef Bacik

Unlike blk_mq_stop_hw_queues, blk_mq_quiesce_queue respects the
submission path RCU grace period. Quiesce the queue before iterating
on live tags.

Reviewed-by: Ming Lei <ming.lei@redhat.com>
Cc: Josef Bacik <jbacik@fb.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/block/nbd.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 977ec960dd2f..dea7d85134ee 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -661,9 +661,9 @@ static void nbd_clear_req(struct request *req, void *data, bool reserved)
 
 static void nbd_clear_que(struct nbd_device *nbd)
 {
-	blk_mq_stop_hw_queues(nbd->disk->queue);
+	blk_mq_quiesce_queue(nbd->disk->queue);
 	blk_mq_tagset_busy_iter(&nbd->tag_set, nbd_clear_req, NULL);
-	blk_mq_start_hw_queues(nbd->disk->queue);
+	blk_mq_unquiesce_queue(nbd->disk->queue);
 	dev_dbg(disk_to_dev(nbd->disk), "queue cleared\n");
 }
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v2 7/8] mtip32xx: quiesce request queues to make sure no submissions are inflight
  2017-07-05  6:53 [PATCH v2 0/8] correct quiescing in several block drivers Sagi Grimberg
                   ` (5 preceding siblings ...)
  2017-07-05  6:53 ` [PATCH v2 6/8] nbd: quiesce request queues to make sure no submissions are inflight Sagi Grimberg
@ 2017-07-05  6:53 ` Sagi Grimberg
  2017-07-05  7:28   ` Ming Lei
  2017-07-05  6:53 ` [PATCH v2 8/8] virtio_blk: quiesce/unquiesce live IO when entering PM states Sagi Grimberg
  7 siblings, 1 reply; 17+ messages in thread
From: Sagi Grimberg @ 2017-07-05  6:53 UTC (permalink / raw)
  To: Jens Axboe, linux-block
  Cc: linux-nvme, Christoph Hellwig, Keith Busch, Ming Lei

Unlike blk_mq_stop_hw_queues, blk_mq_quiesce_queue respects the
submission path RCU grace period. Quiesce the queue before iterating
on live tags, or before performing device I/O quiescing.

While we're at it, verify that the request has started in the
mtip_abort_cmd and mtip_queue_cmd tag iteration callouts.

Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/block/mtip32xx/mtip32xx.c | 19 +++++++++++++++----
 1 file changed, 15 insertions(+), 4 deletions(-)

diff --git a/drivers/block/mtip32xx/mtip32xx.c b/drivers/block/mtip32xx/mtip32xx.c
index 61b046f256ca..87717a1a5c89 100644
--- a/drivers/block/mtip32xx/mtip32xx.c
+++ b/drivers/block/mtip32xx/mtip32xx.c
@@ -950,7 +950,7 @@ static int mtip_quiesce_io(struct mtip_port *port, unsigned long timeout)
 	unsigned long to;
 	bool active = true;
 
-	blk_mq_stop_hw_queues(port->dd->queue);
+	blk_mq_quiesce_queue(port->dd->queue);
 
 	to = jiffies + msecs_to_jiffies(timeout);
 	do {
@@ -970,10 +970,10 @@ static int mtip_quiesce_io(struct mtip_port *port, unsigned long timeout)
 			break;
 	} while (time_before(jiffies, to));
 
-	blk_mq_start_stopped_hw_queues(port->dd->queue, true);
+	blk_mq_unquiesce_queue(port->dd->queue);
 	return active ? -EBUSY : 0;
 err_fault:
-	blk_mq_start_stopped_hw_queues(port->dd->queue, true);
+	blk_mq_unquiesce_queue(port->dd->queue);
 	return -EFAULT;
 }
 
@@ -2737,6 +2737,9 @@ static void mtip_abort_cmd(struct request *req, void *data,
 	struct mtip_cmd *cmd = blk_mq_rq_to_pdu(req);
 	struct driver_data *dd = data;
 
+	if (!blk_mq_request_started(req))
+		return;
+
 	dbg_printk(MTIP_DRV_NAME " Aborting request, tag = %d\n", req->tag);
 
 	clear_bit(req->tag, dd->port->cmds_to_issue);
@@ -2749,6 +2752,9 @@ static void mtip_queue_cmd(struct request *req, void *data,
 {
 	struct driver_data *dd = data;
 
+	if (!blk_mq_request_started(req))
+		return;
+
 	set_bit(req->tag, dd->port->cmds_to_issue);
 	blk_abort_request(req);
 }
@@ -2814,6 +2820,8 @@ static int mtip_service_thread(void *data)
 				dev_warn(&dd->pdev->dev,
 					"Completion workers still active!");
 
+			blk_mq_quiesce_queue(dd->queue);
+
 			spin_lock(dd->queue->queue_lock);
 			blk_mq_tagset_busy_iter(&dd->tags,
 							mtip_queue_cmd, dd);
@@ -2826,6 +2834,8 @@ static int mtip_service_thread(void *data)
 							mtip_abort_cmd, dd);
 
 			clear_bit(MTIP_PF_TO_ACTIVE_BIT, &dd->port->flags);
+
+			blk_mq_unquiesce_queue(dd->queue);
 		}
 
 		if (test_bit(MTIP_PF_ISSUE_CMDS_BIT, &port->flags)) {
@@ -3995,8 +4005,9 @@ static int mtip_block_remove(struct driver_data *dd)
 						dd->disk->disk_name);
 
 	blk_freeze_queue_start(dd->queue);
-	blk_mq_stop_hw_queues(dd->queue);
+	blk_mq_quiesce_queue(dd->queue);
 	blk_mq_tagset_busy_iter(&dd->tags, mtip_no_dev_cleanup, dd);
+	blk_mq_unquiesce_queue(dd->queue);
 
 	/*
 	 * Delete our gendisk structure. This also removes the device
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v2 8/8] virtio_blk: quiesce/unquiesce live IO when entering PM states
  2017-07-05  6:53 [PATCH v2 0/8] correct quiescing in several block drivers Sagi Grimberg
                   ` (6 preceding siblings ...)
  2017-07-05  6:53 ` [PATCH v2 7/8] mtip32xx: " Sagi Grimberg
@ 2017-07-05  6:53 ` Sagi Grimberg
  7 siblings, 0 replies; 17+ messages in thread
From: Sagi Grimberg @ 2017-07-05  6:53 UTC (permalink / raw)
  To: Jens Axboe, linux-block
  Cc: linux-nvme, Christoph Hellwig, Keith Busch, Ming Lei, virtio-dev,
	Jason Wang

Without it, it is not guaranteed that no .queue_rq invocation is in flight.

Reviewed-by: Ming Lei <ming.lei@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Cc: virtio-dev@lists.oasis-open.org
Cc: Jason Wang <jasowang@redhat.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/block/virtio_blk.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 0297ad7c1452..4e02aa5fdac0 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -840,7 +840,7 @@ static int virtblk_freeze(struct virtio_device *vdev)
 	/* Make sure no work handler is accessing the device. */
 	flush_work(&vblk->config_work);
 
-	blk_mq_stop_hw_queues(vblk->disk->queue);
+	blk_mq_quiesce_queue(vblk->disk->queue);
 
 	vdev->config->del_vqs(vdev);
 	return 0;
@@ -857,7 +857,7 @@ static int virtblk_restore(struct virtio_device *vdev)
 
 	virtio_device_ready(vdev);
 
-	blk_mq_start_stopped_hw_queues(vblk->disk->queue, true);
+	blk_mq_unquiesce_queue(vblk->disk->queue);
 	return 0;
 }
 #endif
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 1/8] nvme-rdma: quiesce/unquiesce admin_q instead of start/stop its hw queues
  2017-07-05  6:53 ` [PATCH v2 1/8] nvme-rdma: quiesce/unquiesce admin_q instead of start/stop its hw queues Sagi Grimberg
@ 2017-07-05  7:12   ` Ming Lei
  2017-07-05  7:15     ` Sagi Grimberg
  0 siblings, 1 reply; 17+ messages in thread
From: Ming Lei @ 2017-07-05  7:12 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: Jens Axboe, linux-block, linux-nvme, Christoph Hellwig,
	Keith Busch

On Wed, Jul 05, 2017 at 09:53:04AM +0300, Sagi Grimberg wrote:
> unlike blk_mq_stop_hw_queues and blk_mq_start_stopped_hw_queues
> quiescing/unquiescing respects the submission path rcu grace.
> Also make sure to kick the requeue list when appropriate.
> 
> Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
> ---
>  drivers/nvme/host/rdma.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
> index cfb22531fc16..c8bf2606ba64 100644
> --- a/drivers/nvme/host/rdma.c
> +++ b/drivers/nvme/host/rdma.c
> @@ -778,7 +778,7 @@ static void nvme_rdma_error_recovery_work(struct work_struct *work)
>  
>  	if (ctrl->ctrl.queue_count > 1)
>  		nvme_stop_queues(&ctrl->ctrl);
> -	blk_mq_stop_hw_queues(ctrl->ctrl.admin_q);
> +	blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
>  
>  	/* We must take care of fastfail/requeue all our inflight requests */
>  	if (ctrl->ctrl.queue_count > 1)
> @@ -791,7 +791,8 @@ static void nvme_rdma_error_recovery_work(struct work_struct *work)
>  	 * queues are not a live anymore, so restart the queues to fail fast
>  	 * new IO
>  	 */
> -	blk_mq_start_stopped_hw_queues(ctrl->ctrl.admin_q, true);
> +	blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
> +	blk_mq_kick_requeue_list(ctrl->ctrl.admin_q);

As I commented yesterday, blk_mq_kick_requeue_list() isn't needed.

>  	nvme_start_queues(&ctrl->ctrl);
>  
>  	nvme_rdma_reconnect_or_remove(ctrl);
> @@ -1636,9 +1637,10 @@ static void nvme_rdma_shutdown_ctrl(struct nvme_rdma_ctrl *ctrl)
>  	if (test_bit(NVME_RDMA_Q_LIVE, &ctrl->queues[0].flags))
>  		nvme_shutdown_ctrl(&ctrl->ctrl);
>  
> -	blk_mq_stop_hw_queues(ctrl->ctrl.admin_q);
> +	blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
>  	blk_mq_tagset_busy_iter(&ctrl->admin_tag_set,
>  				nvme_cancel_request, &ctrl->ctrl);
> +	blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
>  	nvme_rdma_destroy_admin_queue(ctrl);
>  }
>  
> -- 
> 2.7.4
> 

-- 
Ming

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 3/8] nvme-loop: quiesce/unquiesce admin_q instead of start/stop its hw queues
  2017-07-05  6:53 ` [PATCH v2 3/8] nvme-loop: " Sagi Grimberg
@ 2017-07-05  7:14   ` Ming Lei
  0 siblings, 0 replies; 17+ messages in thread
From: Ming Lei @ 2017-07-05  7:14 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: Jens Axboe, linux-block, linux-nvme, Christoph Hellwig,
	Keith Busch

On Wed, Jul 05, 2017 at 09:53:06AM +0300, Sagi Grimberg wrote:
> unlike blk_mq_stop_hw_queues and blk_mq_start_stopped_hw_queues
> quiescing/unquiescing respects the submission path rcu grace.
> 
> Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
> ---
>  drivers/nvme/target/loop.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
> index 3d51341e62ee..6a0b70685e77 100644
> --- a/drivers/nvme/target/loop.c
> +++ b/drivers/nvme/target/loop.c
> @@ -434,9 +434,10 @@ static void nvme_loop_shutdown_ctrl(struct nvme_loop_ctrl *ctrl)
>  	if (ctrl->ctrl.state == NVME_CTRL_LIVE)
>  		nvme_shutdown_ctrl(&ctrl->ctrl);
>  
> -	blk_mq_stop_hw_queues(ctrl->ctrl.admin_q);
> +	blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
>  	blk_mq_tagset_busy_iter(&ctrl->admin_tag_set,
>  				nvme_cancel_request, &ctrl->ctrl);
> +	blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
>  	nvme_loop_destroy_admin_queue(ctrl);
>  }
>  
> -- 
> 2.7.4
> 

Reviewed-by: Ming Lei <ming.lei@redhat.com>

-- 
Ming

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 1/8] nvme-rdma: quiesce/unquiesce admin_q instead of start/stop its hw queues
  2017-07-05  7:12   ` Ming Lei
@ 2017-07-05  7:15     ` Sagi Grimberg
  2017-07-05  7:34       ` Ming Lei
  0 siblings, 1 reply; 17+ messages in thread
From: Sagi Grimberg @ 2017-07-05  7:15 UTC (permalink / raw)
  To: Ming Lei
  Cc: Jens Axboe, linux-block, linux-nvme, Christoph Hellwig,
	Keith Busch


> As I commented yesterday, blk_mq_kick_requeue_list() isn't needed.

As I commented in the cover letter, I am removing it in two steps to keep
bisection possible. I'm only allowed to remove it in patch 5/8.

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 2/8] nvme-fc: quiesce/unquiesce admin_q instead of start/stop its hw queues
  2017-07-05  6:53 ` [PATCH v2 2/8] nvme-fc: " Sagi Grimberg
@ 2017-07-05  7:16   ` Ming Lei
  0 siblings, 0 replies; 17+ messages in thread
From: Ming Lei @ 2017-07-05  7:16 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: Jens Axboe, linux-block, Keith Busch, Christoph Hellwig,
	linux-nvme

On Wed, Jul 05, 2017 at 09:53:05AM +0300, Sagi Grimberg wrote:
> unlike blk_mq_stop_hw_queues and blk_mq_start_stopped_hw_queues
> quiescing/unquiescing respects the submission path rcu grace.
> 
> Also, make sure to unquiesce before cleanup the admin queue.
> 
> Reviewed-By: James Smart <james.smart@broadcom.com>
> Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
> ---
>  drivers/nvme/host/fc.c | 9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
> index 50cc3b2b0e11..3e35fc622680 100644
> --- a/drivers/nvme/host/fc.c
> +++ b/drivers/nvme/host/fc.c
> @@ -1703,6 +1703,7 @@ nvme_fc_ctrl_free(struct kref *ref)
>  	list_del(&ctrl->ctrl_list);
>  	spin_unlock_irqrestore(&ctrl->rport->lock, flags);
>  
> +	blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
>  	blk_cleanup_queue(ctrl->ctrl.admin_q);
>  	blk_mq_free_tag_set(&ctrl->admin_tag_set);
>  
> @@ -2320,8 +2321,10 @@ nvme_fc_create_association(struct nvme_fc_ctrl *ctrl)
>  	if (ret)
>  		goto out_delete_hw_queue;
>  
> -	if (ctrl->ctrl.state != NVME_CTRL_NEW)
> -		blk_mq_start_stopped_hw_queues(ctrl->ctrl.admin_q, true);
> +	if (ctrl->ctrl.state != NVME_CTRL_NEW) {
> +		blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
> +		blk_mq_kick_requeue_list(ctrl->ctrl.admin_q);

Same comment with patch 1

-- 
Ming

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 4/8] nvme-pci: quiesce/unquiesce admin_q instead of start/stop its hw queues
  2017-07-05  6:53 ` [PATCH v2 4/8] nvme-pci: " Sagi Grimberg
@ 2017-07-05  7:20   ` Ming Lei
  0 siblings, 0 replies; 17+ messages in thread
From: Ming Lei @ 2017-07-05  7:20 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: Jens Axboe, linux-block, linux-nvme, Christoph Hellwig,
	Keith Busch

On Wed, Jul 05, 2017 at 09:53:07AM +0300, Sagi Grimberg wrote:
> unlike blk_mq_stop_hw_queues and blk_mq_start_stopped_hw_queues
> quiescing/unquiescing respects the submission path rcu grace.
> 
> Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
> ---
>  drivers/nvme/host/pci.c | 10 ++++++----
>  1 file changed, 6 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> index eb729ff70e7d..df7c8a355075 100644
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -1125,7 +1125,7 @@ static int nvme_suspend_queue(struct nvme_queue *nvmeq)
>  	spin_unlock_irq(&nvmeq->q_lock);
>  
>  	if (!nvmeq->qid && nvmeq->dev->ctrl.admin_q)
> -		blk_mq_stop_hw_queues(nvmeq->dev->ctrl.admin_q);
> +		blk_mq_quiesce_queue(nvmeq->dev->ctrl.admin_q);
>  
>  	pci_free_irq(to_pci_dev(nvmeq->dev->dev), vector, nvmeq);
>  
> @@ -1315,7 +1315,7 @@ static void nvme_dev_remove_admin(struct nvme_dev *dev)
>  		 * user requests may be waiting on a stopped queue. Start the
>  		 * queue to flush these to completion.
>  		 */
> -		blk_mq_start_stopped_hw_queues(dev->ctrl.admin_q, true);
> +		blk_mq_unquiesce_queue(dev->ctrl.admin_q);
>  		blk_cleanup_queue(dev->ctrl.admin_q);
>  		blk_mq_free_tag_set(&dev->admin_tagset);
>  	}
> @@ -1351,8 +1351,10 @@ static int nvme_alloc_admin_tags(struct nvme_dev *dev)
>  			dev->ctrl.admin_q = NULL;
>  			return -ENODEV;
>  		}
> -	} else
> -		blk_mq_start_stopped_hw_queues(dev->ctrl.admin_q, true);
> +	} else {
> +		blk_mq_unquiesce_queue(dev->ctrl.admin_q);
> +		blk_mq_kick_requeue_list(dev->ctrl.admin_q);
> +	}

Hmm, the kick is still not needed. Actually, kicking the requeue list
is only needed where we restart a stopped queue. Quiesce and unquiesce
don't touch the STOPPED state, so there is no need to kick the requeue
list there.

-- 
Ming

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 5/8] nvme: kick requeue list when requeueing a request instead of when starting the queues
  2017-07-05  6:53 ` [PATCH v2 5/8] nvme: kick requeue list when requeueing a request instead of when starting the queues Sagi Grimberg
@ 2017-07-05  7:22   ` Ming Lei
  0 siblings, 0 replies; 17+ messages in thread
From: Ming Lei @ 2017-07-05  7:22 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: Jens Axboe, linux-block, linux-nvme, Christoph Hellwig,
	Keith Busch

On Wed, Jul 05, 2017 at 09:53:08AM +0300, Sagi Grimberg wrote:
> When we requeue a request, we can always insert the request
> back to the scheduler instead of doing it when restarting
> the queues and kicking the requeue work, so get rid of
> the requeue kick in nvme (core and drivers).
> 
> Also, there is now no need to start hw queues in nvme_kill_queues:
> we don't stop the hw queues anymore, so there is no need to
> start them.
> 
> Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
> ---
>  drivers/nvme/host/core.c | 19 ++-----------------
>  drivers/nvme/host/fc.c   |  4 +---
>  drivers/nvme/host/pci.c  |  1 -
>  drivers/nvme/host/rdma.c |  1 -
>  4 files changed, 3 insertions(+), 22 deletions(-)
> 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index d70df1d0072d..48cafaa6fbc5 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -131,7 +131,7 @@ void nvme_complete_rq(struct request *req)
>  {
>  	if (unlikely(nvme_req(req)->status && nvme_req_needs_retry(req))) {
>  		nvme_req(req)->retries++;
> -		blk_mq_requeue_request(req, !blk_mq_queue_stopped(req->q));
> +		blk_mq_requeue_request(req, true);
>  		return;
>  	}

This change is correct.

-- 
Ming

* Re: [PATCH v2 7/8] mtip32xx: quiesce request queues to make sure no submissions are inflight
  2017-07-05  6:53 ` [PATCH v2 7/8] mtip32xx: " Sagi Grimberg
@ 2017-07-05  7:28   ` Ming Lei
  0 siblings, 0 replies; 17+ messages in thread
From: Ming Lei @ 2017-07-05  7:28 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: Jens Axboe, linux-block, linux-nvme, Christoph Hellwig,
	Keith Busch

On Wed, Jul 05, 2017 at 09:53:10AM +0300, Sagi Grimberg wrote:
> Unlike blk_mq_stop_hw_queues, blk_mq_quiesce_queue respects the
> submission path RCU grace period. Quiesce the queue before iterating
> on live tags or performing device I/O quiescing.
> 
> While we're at it, verify that the request has started in the
> mtip_abort_cmd and mtip_queue_cmd tag iteration callbacks.
> 
> Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
> ---
>  drivers/block/mtip32xx/mtip32xx.c | 19 +++++++++++++++----
>  1 file changed, 15 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/block/mtip32xx/mtip32xx.c b/drivers/block/mtip32xx/mtip32xx.c
> index 61b046f256ca..87717a1a5c89 100644
> --- a/drivers/block/mtip32xx/mtip32xx.c
> +++ b/drivers/block/mtip32xx/mtip32xx.c
> @@ -950,7 +950,7 @@ static int mtip_quiesce_io(struct mtip_port *port, unsigned long timeout)
>  	unsigned long to;
>  	bool active = true;
>  
> -	blk_mq_stop_hw_queues(port->dd->queue);
> +	blk_mq_quiesce_queue(port->dd->queue);
>  
>  	to = jiffies + msecs_to_jiffies(timeout);
>  	do {
> @@ -970,10 +970,10 @@ static int mtip_quiesce_io(struct mtip_port *port, unsigned long timeout)
>  			break;
>  	} while (time_before(jiffies, to));
>  
> -	blk_mq_start_stopped_hw_queues(port->dd->queue, true);
> +	blk_mq_unquiesce_queue(port->dd->queue);
>  	return active ? -EBUSY : 0;
>  err_fault:
> -	blk_mq_start_stopped_hw_queues(port->dd->queue, true);
> +	blk_mq_unquiesce_queue(port->dd->queue);
>  	return -EFAULT;
>  }
>  
> @@ -2737,6 +2737,9 @@ static void mtip_abort_cmd(struct request *req, void *data,
>  	struct mtip_cmd *cmd = blk_mq_rq_to_pdu(req);
>  	struct driver_data *dd = data;
>  
> +	if (!blk_mq_request_started(req))
> +		return;
> +
>  	dbg_printk(MTIP_DRV_NAME " Aborting request, tag = %d\n", req->tag);
>  
>  	clear_bit(req->tag, dd->port->cmds_to_issue);
> @@ -2749,6 +2752,9 @@ static void mtip_queue_cmd(struct request *req, void *data,
>  {
>  	struct driver_data *dd = data;
>  
> +	if (!blk_mq_request_started(req))
> +		return;
> +
>  	set_bit(req->tag, dd->port->cmds_to_issue);
>  	blk_abort_request(req);
>  }
> @@ -2814,6 +2820,8 @@ static int mtip_service_thread(void *data)
>  				dev_warn(&dd->pdev->dev,
>  					"Completion workers still active!");
>  
> +			blk_mq_quiesce_queue(dd->queue);
> +
>  			spin_lock(dd->queue->queue_lock);
>  			blk_mq_tagset_busy_iter(&dd->tags,
>  							mtip_queue_cmd, dd);
> @@ -2826,6 +2834,8 @@ static int mtip_service_thread(void *data)
>  							mtip_abort_cmd, dd);
>  
>  			clear_bit(MTIP_PF_TO_ACTIVE_BIT, &dd->port->flags);
> +
> +			blk_mq_unquiesce_queue(dd->queue);
>  		}
>  
>  		if (test_bit(MTIP_PF_ISSUE_CMDS_BIT, &port->flags)) {
> @@ -3995,8 +4005,9 @@ static int mtip_block_remove(struct driver_data *dd)
>  						dd->disk->disk_name);
>  
>  	blk_freeze_queue_start(dd->queue);
> -	blk_mq_stop_hw_queues(dd->queue);
> +	blk_mq_quiesce_queue(dd->queue);
>  	blk_mq_tagset_busy_iter(&dd->tags, mtip_no_dev_cleanup, dd);
> +	blk_mq_unquiesce_queue(dd->queue);
>  
>  	/*
>  	 * Delete our gendisk structure. This also removes the device
> -- 
> 2.7.4

Looks fine:

	Reviewed-by: Ming Lei <ming.lei@redhat.com>

-- 
Ming

* Re: [PATCH v2 1/8] nvme-rdma: quiesce/unquiesce admin_q instead of start/stop its hw queues
  2017-07-05  7:15     ` Sagi Grimberg
@ 2017-07-05  7:34       ` Ming Lei
  0 siblings, 0 replies; 17+ messages in thread
From: Ming Lei @ 2017-07-05  7:34 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: Jens Axboe, linux-block, linux-nvme, Christoph Hellwig,
	Keith Busch

On Wed, Jul 05, 2017 at 10:15:28AM +0300, Sagi Grimberg wrote:
> 
> > As I commented yesterday, blk_mq_kick_requeue_list() isn't needed.
> 
> As I commented in the cover letter, I am removing it in two steps to
> keep bisection possible. I'm only allowed to remove it in patch 5/8.

The kick of the requeue list shouldn't be added in the first step.

The thing is simple: we only need to kick the requeue list where the
queue is restarted. Quiesce and unquiesce do not touch the STOPPED
state, so it may confuse people if you add the kick after unquiesce.

-- 
Ming

end of thread, other threads:[~2017-07-05  7:34 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-07-05  6:53 [PATCH v2 0/8] correct quiescing in several block drivers Sagi Grimberg
2017-07-05  6:53 ` [PATCH v2 1/8] nvme-rdma: quiesce/unquiesce admin_q instead of start/stop its hw queues Sagi Grimberg
2017-07-05  7:12   ` Ming Lei
2017-07-05  7:15     ` Sagi Grimberg
2017-07-05  7:34       ` Ming Lei
2017-07-05  6:53 ` [PATCH v2 2/8] nvme-fc: " Sagi Grimberg
2017-07-05  7:16   ` Ming Lei
2017-07-05  6:53 ` [PATCH v2 3/8] nvme-loop: " Sagi Grimberg
2017-07-05  7:14   ` Ming Lei
2017-07-05  6:53 ` [PATCH v2 4/8] nvme-pci: " Sagi Grimberg
2017-07-05  7:20   ` Ming Lei
2017-07-05  6:53 ` [PATCH v2 5/8] nvme: kick requeue list when requeueing a request instead of when starting the queues Sagi Grimberg
2017-07-05  7:22   ` Ming Lei
2017-07-05  6:53 ` [PATCH v2 6/8] nbd: quiesce request queues to make sure no submissions are inflight Sagi Grimberg
2017-07-05  6:53 ` [PATCH v2 7/8] mtip32xx: " Sagi Grimberg
2017-07-05  7:28   ` Ming Lei
2017-07-05  6:53 ` [PATCH v2 8/8] virtio_blk: quiesce/unquiesce live IO when entering PM states Sagi Grimberg

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox