linux-block.vger.kernel.org archive mirror
* [PATCH v11 0/7] block, scsi, md: Improve suspend and resume
@ 2017-10-30 22:41 Bart Van Assche
  2017-10-30 22:41 ` [PATCH v11 1/7] block: Make q_usage_counter also track legacy requests Bart Van Assche
                   ` (7 more replies)
  0 siblings, 8 replies; 12+ messages in thread
From: Bart Van Assche @ 2017-10-30 22:41 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, linux-scsi, Christoph Hellwig, Martin K . Petersen,
	Oleksandr Natalenko, Ming Lei, Martin Steigerwald,
	Bart Van Assche

Hello Jens,

It is known that the system sometimes hangs instead of coming up properly
during the resume following a hibernate, especially when using an md RAID1
array created on top of SCSI devices. This patch series fixes that
problem. These patches have been tested on top of the block layer for-next
branch. Please consider these changes for kernel v4.15.

Thanks,

Bart.

Changes between v10 and v11:
- Left out the three md patches because a deadlock was reported when using XFS
  on top of md RAID 1. This deadlock occurred because the md kernel thread got
  frozen before the kernel thread running xfsaild().
- Left out the blk_queue_enter() / blk_queue_exit() changes from
  block/blk-timeout.c because a recent patch removed these calls from
  blk_timeout_work().
- Retested the whole series.

Changes between v9 and v10:
- Made sure that scsi_device_resume() correctly handles SCSI devices that
  are in an error state. Unlike with other SCSI transport protocols, the
  ATA controller is reset during an ATA device resume. This should fix the
  blk_queue_enter WARNING reported by the kernel test robot.

Changes between v8 and v9:
- Modified the third patch such that the MD_RECOVERY_FROZEN flag is restored
  properly after a resume.
- Modified the ninth patch such that a kernel warning is reported if an
  attempt is made to call scsi_device_quiesce() from multiple contexts
  concurrently.
- Added Reviewed-by / Tested-by tags as appropriate.
  
Changes between v7 and v8:
- Fixed a (theoretical?) race identified by Ming Lei.
- Added a tenth patch that checks whether the proper type of flags has been
  passed to a range of block layer functions.

Changes between v6 and v7:
- Added support for the PREEMPT_ONLY flag in blk-mq-debugfs.c.
- Fixed kerneldoc header of blk_queue_enter().
- Added a rcu_read_lock_sched() / rcu_read_unlock_sched() pair in
  blk_set_preempt_only().
- Removed a synchronize_rcu() call from scsi_device_quiesce().
- Modified the description of patch 9/9 in this series.
- Removed scsi_run_queue() call from scsi_device_resume().

Changes between v5 and v6:
- Split an md patch into two patches to make it easier to review the changes.
- For the md patch that suspends I/O while the system is frozen, switched back
  to the freezer mechanism because this makes the code shorter and easier to
  review.
- Changed blk_set/clear_preempt_only() from EXPORT_SYMBOL() into
  EXPORT_SYMBOL_GPL().
- Made blk_set_preempt_only() behave as a test-and-set operation.
- Introduced blk_get_request_flags() and BLK_MQ_REQ_PREEMPT as requested by
  Christoph and reduced the number of arguments of blk_queue_enter() back from
  three to two.
- In scsi_device_quiesce(), moved the blk_mq_freeze_queue() call out of a
  critical section. Made the explanation of why the synchronize_rcu() call
  is necessary more detailed.

Changes between v4 and v5:
- Split blk_set_preempt_only() into two functions as requested by Christoph.
- Made blk_get_request() trigger WARN_ONCE() if an attempt is made to
  allocate a request while the system is frozen. This is a big help in
  identifying drivers that submit I/O while the system is frozen.
- Since kernel thread freezing is on its way out, reworked the approach for
  preventing the md driver from submitting I/O while the system is frozen
  such that the approach no longer depends on the kernel thread freeze
  mechanism.
- Fixed the (theoretical) deadlock in scsi_device_quiesce() that was identified
  by Ming.

Changes between v3 and v4:
- Made sure that this patch series works not only for scsi-mq but also for
  the legacy SCSI stack.
- Removed an smp_rmb()/smp_wmb() pair from the hot path and added a comment
  that explains why that is safe.
- Reordered the patches in this patch series.

Changes between v2 and v3:
- Made md kernel threads freezable.
- Changed the approach for quiescing SCSI devices again.
- Addressed Ming's review comments.

Changes compared to v1 of this patch series:
- Changed the approach and rewrote the patch series.

Bart Van Assche (6):
  block: Introduce blk_get_request_flags()
  block: Introduce BLK_MQ_REQ_PREEMPT
  ide, scsi: Tell the block layer at request allocation time about
    preempt requests
  block: Add the QUEUE_FLAG_PREEMPT_ONLY request queue flag
  block, scsi: Make SCSI quiesce and resume work reliably
  block, nvme: Introduce blk_mq_req_flags_t

Ming Lei (1):
  block: Make q_usage_counter also track legacy requests

 block/blk-core.c           | 133 ++++++++++++++++++++++++++++++++++++++-------
 block/blk-mq-debugfs.c     |   1 +
 block/blk-mq.c             |  20 +++----
 block/blk-mq.h             |   2 +-
 drivers/ide/ide-pm.c       |   4 +-
 drivers/nvme/host/core.c   |   5 +-
 drivers/nvme/host/nvme.h   |   5 +-
 drivers/scsi/scsi_lib.c    |  48 +++++++++++-----
 fs/block_dev.c             |   4 +-
 include/linux/blk-mq.h     |  16 ++++--
 include/linux/blk_types.h  |   2 +
 include/linux/blkdev.h     |  11 +++-
 include/scsi/scsi_device.h |   1 +
 13 files changed, 190 insertions(+), 62 deletions(-)

-- 
2.14.2

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH v11 1/7] block: Make q_usage_counter also track legacy requests
  2017-10-30 22:41 [PATCH v11 0/7] block, scsi, md: Improve suspend and resume Bart Van Assche
@ 2017-10-30 22:41 ` Bart Van Assche
  2017-10-30 22:42 ` [PATCH v11 2/7] block: Introduce blk_get_request_flags() Bart Van Assche
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 12+ messages in thread
From: Bart Van Assche @ 2017-10-30 22:41 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, linux-scsi, Christoph Hellwig, Martin K . Petersen,
	Oleksandr Natalenko, Ming Lei, Martin Steigerwald,
	Bart Van Assche

From: Ming Lei <ming.lei@redhat.com>

This patch makes it possible to pause request allocation for
the legacy block layer by calling blk_mq_freeze_queue() and
blk_mq_unfreeze_queue().
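
The effect of this change can be sketched with a toy userspace model
(illustrative names only, not the kernel API, which uses a percpu refcount
and a waitqueue): once blk_mq_freeze_queue() has raised the freeze depth,
attempts to enter the queue fail until blk_mq_unfreeze_queue() lowers it.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of how a non-zero freeze depth pauses request allocation. */
struct toy_queue {
	int freeze_depth;	/* raised by freeze, lowered by unfreeze */
	int usage_counter;	/* in-flight requests holding a reference */
};

/* Mirrors the effect of blk_queue_enter(): entry fails while frozen. */
static bool toy_queue_enter(struct toy_queue *q)
{
	if (q->freeze_depth > 0)
		return false;
	q->usage_counter++;
	return true;
}

static void toy_queue_exit(struct toy_queue *q)
{
	q->usage_counter--;
}

static void toy_freeze_queue(struct toy_queue *q)
{
	q->freeze_depth++;
}

static void toy_unfreeze_queue(struct toy_queue *q)
{
	q->freeze_depth--;
}
```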

Signed-off-by: Ming Lei <ming.lei@redhat.com>
[ bvanassche: Combined two patches into one, edited a comment and made sure
  REQ_NOWAIT is handled properly in blk_old_get_request() ]
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Tested-by: Martin Steigerwald <martin@lichtvoll.de>
Cc: Ming Lei <ming.lei@redhat.com>
---
 block/blk-core.c | 12 ++++++++++++
 block/blk-mq.c   | 10 ++--------
 2 files changed, 14 insertions(+), 8 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index bb4fce694a60..ec4eafb5af9f 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -611,6 +611,9 @@ void blk_set_queue_dying(struct request_queue *q)
 		}
 		spin_unlock_irq(q->queue_lock);
 	}
+
+	/* Make blk_queue_enter() reexamine the DYING flag. */
+	wake_up_all(&q->mq_freeze_wq);
 }
 EXPORT_SYMBOL_GPL(blk_set_queue_dying);
 
@@ -1397,16 +1400,22 @@ static struct request *blk_old_get_request(struct request_queue *q,
 					   unsigned int op, gfp_t gfp_mask)
 {
 	struct request *rq;
+	int ret = 0;
 
 	WARN_ON_ONCE(q->mq_ops);
 
 	/* create ioc upfront */
 	create_io_context(gfp_mask, q->node);
 
+	ret = blk_queue_enter(q, !(gfp_mask & __GFP_DIRECT_RECLAIM) ||
+			      (op & REQ_NOWAIT));
+	if (ret)
+		return ERR_PTR(ret);
 	spin_lock_irq(q->queue_lock);
 	rq = get_request(q, op, NULL, gfp_mask);
 	if (IS_ERR(rq)) {
 		spin_unlock_irq(q->queue_lock);
+		blk_queue_exit(q);
 		return rq;
 	}
 
@@ -1578,6 +1587,7 @@ void __blk_put_request(struct request_queue *q, struct request *req)
 		blk_free_request(rl, req);
 		freed_request(rl, sync, rq_flags);
 		blk_put_rl(rl);
+		blk_queue_exit(q);
 	}
 }
 EXPORT_SYMBOL_GPL(__blk_put_request);
@@ -1859,8 +1869,10 @@ static blk_qc_t blk_queue_bio(struct request_queue *q, struct bio *bio)
 	 * Grab a free request. This is might sleep but can not fail.
 	 * Returns with the queue unlocked.
 	 */
+	blk_queue_enter_live(q);
 	req = get_request(q, bio->bi_opf, bio, GFP_NOIO);
 	if (IS_ERR(req)) {
+		blk_queue_exit(q);
 		__wbt_done(q->rq_wb, wb_acct);
 		if (PTR_ERR(req) == -ENOMEM)
 			bio->bi_status = BLK_STS_RESOURCE;
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 097ca3ece716..59b7de6b616b 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -125,7 +125,8 @@ void blk_freeze_queue_start(struct request_queue *q)
 	freeze_depth = atomic_inc_return(&q->mq_freeze_depth);
 	if (freeze_depth == 1) {
 		percpu_ref_kill(&q->q_usage_counter);
-		blk_mq_run_hw_queues(q, false);
+		if (q->mq_ops)
+			blk_mq_run_hw_queues(q, false);
 	}
 }
 EXPORT_SYMBOL_GPL(blk_freeze_queue_start);
@@ -255,13 +256,6 @@ void blk_mq_wake_waiters(struct request_queue *q)
 	queue_for_each_hw_ctx(q, hctx, i)
 		if (blk_mq_hw_queue_mapped(hctx))
 			blk_mq_tag_wakeup_all(hctx->tags, true);
-
-	/*
-	 * If we are called because the queue has now been marked as
-	 * dying, we need to ensure that processes currently waiting on
-	 * the queue are notified as well.
-	 */
-	wake_up_all(&q->mq_freeze_wq);
 }
 
 bool blk_mq_can_queue(struct blk_mq_hw_ctx *hctx)
-- 
2.14.2


* [PATCH v11 2/7] block: Introduce blk_get_request_flags()
  2017-10-30 22:41 [PATCH v11 0/7] block, scsi, md: Improve suspend and resume Bart Van Assche
  2017-10-30 22:41 ` [PATCH v11 1/7] block: Make q_usage_counter also track legacy requests Bart Van Assche
@ 2017-10-30 22:42 ` Bart Van Assche
  2017-10-30 22:42 ` [PATCH v11 3/7] block: Introduce BLK_MQ_REQ_PREEMPT Bart Van Assche
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 12+ messages in thread
From: Bart Van Assche @ 2017-10-30 22:42 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, linux-scsi, Christoph Hellwig, Martin K . Petersen,
	Oleksandr Natalenko, Ming Lei, Martin Steigerwald,
	Bart Van Assche, Johannes Thumshirn

This patch introduces blk_get_request_flags(), a variant of
blk_get_request() that accepts BLK_MQ_REQ_* flags instead of a GFP mask.
A side effect of this patch is that the GFP mask that is passed to
several allocation functions in the legacy block layer is changed
from GFP_KERNEL into __GFP_DIRECT_RECLAIM.
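
The patch derives the GFP mask from the BLK_MQ_REQ_* flags instead of
taking it from the caller. That mapping can be sketched as follows (the
TOY_* constants are illustrative placeholders, not the kernel's values):

```c
#include <assert.h>

/* Illustrative placeholder values; the kernel's differ. */
#define TOY_BLK_MQ_REQ_NOWAIT	(1u << 0)
#define TOY_GFP_ATOMIC		0x1u
#define TOY_GFP_DIRECT_RECLAIM	0x2u

/* Sketch of how __get_request() and blk_old_get_request() pick a GFP
 * mask after this patch: callers that must not sleep get an atomic
 * allocation, everyone else may enter direct reclaim. */
static unsigned int toy_flags_to_gfp(unsigned int flags)
{
	return (flags & TOY_BLK_MQ_REQ_NOWAIT) ? TOY_GFP_ATOMIC
					       : TOY_GFP_DIRECT_RECLAIM;
}
```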

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Tested-by: Martin Steigerwald <martin@lichtvoll.de>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 block/blk-core.c       | 50 +++++++++++++++++++++++++++++++++++---------------
 include/linux/blkdev.h |  3 +++
 2 files changed, 38 insertions(+), 15 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index ec4eafb5af9f..396104c05b38 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1159,7 +1159,7 @@ int blk_update_nr_requests(struct request_queue *q, unsigned int nr)
  * @rl: request list to allocate from
  * @op: operation and flags
  * @bio: bio to allocate request for (can be %NULL)
- * @gfp_mask: allocation mask
+ * @flags: BLK_MQ_REQ_* flags
  *
  * Get a free request from @q.  This function may fail under memory
  * pressure or if @q is dead.
@@ -1169,7 +1169,7 @@ int blk_update_nr_requests(struct request_queue *q, unsigned int nr)
  * Returns request pointer on success, with @q->queue_lock *not held*.
  */
 static struct request *__get_request(struct request_list *rl, unsigned int op,
-		struct bio *bio, gfp_t gfp_mask)
+				     struct bio *bio, unsigned int flags)
 {
 	struct request_queue *q = rl->q;
 	struct request *rq;
@@ -1178,6 +1178,8 @@ static struct request *__get_request(struct request_list *rl, unsigned int op,
 	struct io_cq *icq = NULL;
 	const bool is_sync = op_is_sync(op);
 	int may_queue;
+	gfp_t gfp_mask = flags & BLK_MQ_REQ_NOWAIT ? GFP_ATOMIC :
+			 __GFP_DIRECT_RECLAIM;
 	req_flags_t rq_flags = RQF_ALLOCED;
 
 	lockdep_assert_held(q->queue_lock);
@@ -1338,7 +1340,7 @@ static struct request *__get_request(struct request_list *rl, unsigned int op,
  * @q: request_queue to allocate request from
  * @op: operation and flags
  * @bio: bio to allocate request for (can be %NULL)
- * @gfp_mask: allocation mask
+ * @flags: BLK_MQ_REQ_* flags.
  *
  * Get a free request from @q.  If %__GFP_DIRECT_RECLAIM is set in @gfp_mask,
  * this function keeps retrying under memory pressure and fails iff @q is dead.
@@ -1348,7 +1350,7 @@ static struct request *__get_request(struct request_list *rl, unsigned int op,
  * Returns request pointer on success, with @q->queue_lock *not held*.
  */
 static struct request *get_request(struct request_queue *q, unsigned int op,
-		struct bio *bio, gfp_t gfp_mask)
+				   struct bio *bio, unsigned int flags)
 {
 	const bool is_sync = op_is_sync(op);
 	DEFINE_WAIT(wait);
@@ -1360,7 +1362,7 @@ static struct request *get_request(struct request_queue *q, unsigned int op,
 
 	rl = blk_get_rl(q, bio);	/* transferred to @rq on success */
 retry:
-	rq = __get_request(rl, op, bio, gfp_mask);
+	rq = __get_request(rl, op, bio, flags);
 	if (!IS_ERR(rq))
 		return rq;
 
@@ -1369,7 +1371,7 @@ static struct request *get_request(struct request_queue *q, unsigned int op,
 		return ERR_PTR(-EAGAIN);
 	}
 
-	if (!gfpflags_allow_blocking(gfp_mask) || unlikely(blk_queue_dying(q))) {
+	if ((flags & BLK_MQ_REQ_NOWAIT) || unlikely(blk_queue_dying(q))) {
 		blk_put_rl(rl);
 		return rq;
 	}
@@ -1396,10 +1398,13 @@ static struct request *get_request(struct request_queue *q, unsigned int op,
 	goto retry;
 }
 
+/* flags: BLK_MQ_REQ_PREEMPT and/or BLK_MQ_REQ_NOWAIT. */
 static struct request *blk_old_get_request(struct request_queue *q,
-					   unsigned int op, gfp_t gfp_mask)
+					   unsigned int op, unsigned int flags)
 {
 	struct request *rq;
+	gfp_t gfp_mask = flags & BLK_MQ_REQ_NOWAIT ? GFP_ATOMIC :
+			 __GFP_DIRECT_RECLAIM;
 	int ret = 0;
 
 	WARN_ON_ONCE(q->mq_ops);
@@ -1412,7 +1417,7 @@ static struct request *blk_old_get_request(struct request_queue *q,
 	if (ret)
 		return ERR_PTR(ret);
 	spin_lock_irq(q->queue_lock);
-	rq = get_request(q, op, NULL, gfp_mask);
+	rq = get_request(q, op, NULL, flags);
 	if (IS_ERR(rq)) {
 		spin_unlock_irq(q->queue_lock);
 		blk_queue_exit(q);
@@ -1426,25 +1431,40 @@ static struct request *blk_old_get_request(struct request_queue *q,
 	return rq;
 }
 
-struct request *blk_get_request(struct request_queue *q, unsigned int op,
-				gfp_t gfp_mask)
+/**
+ * blk_get_request_flags - allocate a request
+ * @q: request queue to allocate a request for
+ * @op: operation (REQ_OP_*) and REQ_* flags, e.g. REQ_SYNC.
+ * @flags: BLK_MQ_REQ_* flags, e.g. BLK_MQ_REQ_NOWAIT.
+ */
+struct request *blk_get_request_flags(struct request_queue *q, unsigned int op,
+				      unsigned int flags)
 {
 	struct request *req;
 
+	WARN_ON_ONCE(op & REQ_NOWAIT);
+	WARN_ON_ONCE(flags & ~BLK_MQ_REQ_NOWAIT);
+
 	if (q->mq_ops) {
-		req = blk_mq_alloc_request(q, op,
-			(gfp_mask & __GFP_DIRECT_RECLAIM) ?
-				0 : BLK_MQ_REQ_NOWAIT);
+		req = blk_mq_alloc_request(q, op, flags);
 		if (!IS_ERR(req) && q->mq_ops->initialize_rq_fn)
 			q->mq_ops->initialize_rq_fn(req);
 	} else {
-		req = blk_old_get_request(q, op, gfp_mask);
+		req = blk_old_get_request(q, op, flags);
 		if (!IS_ERR(req) && q->initialize_rq_fn)
 			q->initialize_rq_fn(req);
 	}
 
 	return req;
 }
+EXPORT_SYMBOL(blk_get_request_flags);
+
+struct request *blk_get_request(struct request_queue *q, unsigned int op,
+				gfp_t gfp_mask)
+{
+	return blk_get_request_flags(q, op, gfp_mask & __GFP_DIRECT_RECLAIM ?
+				     0 : BLK_MQ_REQ_NOWAIT);
+}
 EXPORT_SYMBOL(blk_get_request);
 
 /**
@@ -1870,7 +1890,7 @@ static blk_qc_t blk_queue_bio(struct request_queue *q, struct bio *bio)
 	 * Returns with the queue unlocked.
 	 */
 	blk_queue_enter_live(q);
-	req = get_request(q, bio->bi_opf, bio, GFP_NOIO);
+	req = get_request(q, bio->bi_opf, bio, 0);
 	if (IS_ERR(req)) {
 		blk_queue_exit(q);
 		__wbt_done(q->rq_wb, wb_acct);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 72637028f3c9..05203175eb9c 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -924,6 +924,9 @@ extern void blk_rq_init(struct request_queue *q, struct request *rq);
 extern void blk_init_request_from_bio(struct request *req, struct bio *bio);
 extern void blk_put_request(struct request *);
 extern void __blk_put_request(struct request_queue *, struct request *);
+extern struct request *blk_get_request_flags(struct request_queue *,
+					     unsigned int op,
+					     unsigned int flags);
 extern struct request *blk_get_request(struct request_queue *, unsigned int op,
 				       gfp_t gfp_mask);
 extern void blk_requeue_request(struct request_queue *, struct request *);
-- 
2.14.2


* [PATCH v11 3/7] block: Introduce BLK_MQ_REQ_PREEMPT
  2017-10-30 22:41 [PATCH v11 0/7] block, scsi, md: Improve suspend and resume Bart Van Assche
  2017-10-30 22:41 ` [PATCH v11 1/7] block: Make q_usage_counter also track legacy requests Bart Van Assche
  2017-10-30 22:42 ` [PATCH v11 2/7] block: Introduce blk_get_request_flags() Bart Van Assche
@ 2017-10-30 22:42 ` Bart Van Assche
  2017-10-30 22:42 ` [PATCH v11 4/7] ide, scsi: Tell the block layer at request allocation time about preempt requests Bart Van Assche
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 12+ messages in thread
From: Bart Van Assche @ 2017-10-30 22:42 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, linux-scsi, Christoph Hellwig, Martin K . Petersen,
	Oleksandr Natalenko, Ming Lei, Martin Steigerwald,
	Bart Van Assche, Johannes Thumshirn

Set RQF_PREEMPT if BLK_MQ_REQ_PREEMPT is passed to
blk_get_request_flags().

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Tested-by: Martin Steigerwald <martin@lichtvoll.de>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 block/blk-core.c       | 4 +++-
 block/blk-mq.c         | 2 ++
 include/linux/blk-mq.h | 1 +
 3 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 396104c05b38..48f18f5e2e16 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1262,6 +1262,8 @@ static struct request *__get_request(struct request_list *rl, unsigned int op,
 	blk_rq_set_rl(rq, rl);
 	rq->cmd_flags = op;
 	rq->rq_flags = rq_flags;
+	if (flags & BLK_MQ_REQ_PREEMPT)
+		rq->rq_flags |= RQF_PREEMPT;
 
 	/* init elvpriv */
 	if (rq_flags & RQF_ELVPRIV) {
@@ -1443,7 +1445,7 @@ struct request *blk_get_request_flags(struct request_queue *q, unsigned int op,
 	struct request *req;
 
 	WARN_ON_ONCE(op & REQ_NOWAIT);
-	WARN_ON_ONCE(flags & ~BLK_MQ_REQ_NOWAIT);
+	WARN_ON_ONCE(flags & ~(BLK_MQ_REQ_NOWAIT | BLK_MQ_REQ_PREEMPT));
 
 	if (q->mq_ops) {
 		req = blk_mq_alloc_request(q, op, flags);
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 59b7de6b616b..6a025b17caac 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -290,6 +290,8 @@ static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data,
 	rq->q = data->q;
 	rq->mq_ctx = data->ctx;
 	rq->cmd_flags = op;
+	if (data->flags & BLK_MQ_REQ_PREEMPT)
+		rq->rq_flags |= RQF_PREEMPT;
 	if (blk_queue_io_stat(data->q))
 		rq->rq_flags |= RQF_IO_STAT;
 	/* do not touch atomic flags, it needs atomic ops against the timer */
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index e5e6becd57d3..22c7f36745fc 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -213,6 +213,7 @@ enum {
 	BLK_MQ_REQ_NOWAIT	= (1 << 0), /* return when out of requests */
 	BLK_MQ_REQ_RESERVED	= (1 << 1), /* allocate from reserved pool */
 	BLK_MQ_REQ_INTERNAL	= (1 << 2), /* allocate internal/sched tag */
+	BLK_MQ_REQ_PREEMPT	= (1 << 3), /* set RQF_PREEMPT */
 };
 
 struct request *blk_mq_alloc_request(struct request_queue *q, unsigned int op,
-- 
2.14.2


* [PATCH v11 4/7] ide, scsi: Tell the block layer at request allocation time about preempt requests
  2017-10-30 22:41 [PATCH v11 0/7] block, scsi, md: Improve suspend and resume Bart Van Assche
                   ` (2 preceding siblings ...)
  2017-10-30 22:42 ` [PATCH v11 3/7] block: Introduce BLK_MQ_REQ_PREEMPT Bart Van Assche
@ 2017-10-30 22:42 ` Bart Van Assche
  2017-10-30 22:42 ` [PATCH v11 5/7] block: Add the QUEUE_FLAG_PREEMPT_ONLY request queue flag Bart Van Assche
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 12+ messages in thread
From: Bart Van Assche @ 2017-10-30 22:42 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, linux-scsi, Christoph Hellwig, Martin K . Petersen,
	Oleksandr Natalenko, Ming Lei, Martin Steigerwald,
	Bart Van Assche, Johannes Thumshirn

Convert blk_get_request(q, op, __GFP_RECLAIM) into
blk_get_request_flags(q, op, BLK_MQ_REQ_PREEMPT). This patch does not
change any functionality.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Tested-by: Martin Steigerwald <martin@lichtvoll.de>
Acked-by: David S. Miller <davem@davemloft.net> [ for IDE ]
Acked-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 drivers/ide/ide-pm.c    | 4 ++--
 drivers/scsi/scsi_lib.c | 6 +++---
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/ide/ide-pm.c b/drivers/ide/ide-pm.c
index 544f02d673ca..f56d742908df 100644
--- a/drivers/ide/ide-pm.c
+++ b/drivers/ide/ide-pm.c
@@ -89,9 +89,9 @@ int generic_ide_resume(struct device *dev)
 	}
 
 	memset(&rqpm, 0, sizeof(rqpm));
-	rq = blk_get_request(drive->queue, REQ_OP_DRV_IN, __GFP_RECLAIM);
+	rq = blk_get_request_flags(drive->queue, REQ_OP_DRV_IN,
+				   BLK_MQ_REQ_PREEMPT);
 	ide_req(rq)->type = ATA_PRIV_PM_RESUME;
-	rq->rq_flags |= RQF_PREEMPT;
 	rq->special = &rqpm;
 	rqpm.pm_step = IDE_PM_START_RESUME;
 	rqpm.pm_state = PM_EVENT_ON;
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 6f10afaca25b..7c119696402c 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -252,9 +252,9 @@ int scsi_execute(struct scsi_device *sdev, const unsigned char *cmd,
 	struct scsi_request *rq;
 	int ret = DRIVER_ERROR << 24;
 
-	req = blk_get_request(sdev->request_queue,
+	req = blk_get_request_flags(sdev->request_queue,
 			data_direction == DMA_TO_DEVICE ?
-			REQ_OP_SCSI_OUT : REQ_OP_SCSI_IN, __GFP_RECLAIM);
+			REQ_OP_SCSI_OUT : REQ_OP_SCSI_IN, BLK_MQ_REQ_PREEMPT);
 	if (IS_ERR(req))
 		return ret;
 	rq = scsi_req(req);
@@ -268,7 +268,7 @@ int scsi_execute(struct scsi_device *sdev, const unsigned char *cmd,
 	rq->retries = retries;
 	req->timeout = timeout;
 	req->cmd_flags |= flags;
-	req->rq_flags |= rq_flags | RQF_QUIET | RQF_PREEMPT;
+	req->rq_flags |= rq_flags | RQF_QUIET;
 
 	/*
 	 * head injection *required* here otherwise quiesce won't work
-- 
2.14.2


* [PATCH v11 5/7] block: Add the QUEUE_FLAG_PREEMPT_ONLY request queue flag
  2017-10-30 22:41 [PATCH v11 0/7] block, scsi, md: Improve suspend and resume Bart Van Assche
                   ` (3 preceding siblings ...)
  2017-10-30 22:42 ` [PATCH v11 4/7] ide, scsi: Tell the block layer at request allocation time about preempt requests Bart Van Assche
@ 2017-10-30 22:42 ` Bart Van Assche
  2017-10-30 22:42 ` [PATCH v11 6/7] block, scsi: Make SCSI quiesce and resume work reliably Bart Van Assche
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 12+ messages in thread
From: Bart Van Assche @ 2017-10-30 22:42 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, linux-scsi, Christoph Hellwig, Martin K . Petersen,
	Oleksandr Natalenko, Ming Lei, Martin Steigerwald,
	Bart Van Assche, Johannes Thumshirn

This flag will be used in the next patch to let the block layer
core know whether or not a SCSI request queue has been quiesced.
A quiesced SCSI queue only processes RQF_PREEMPT requests.
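
As the hunk below shows, the set side behaves as a test-and-set. A
minimal model of those semantics (queue_lock omitted, names and the bit
position illustrative):

```c
#include <assert.h>
#include <stdbool.h>

#define TOY_QUEUE_FLAG_PREEMPT_ONLY	(1ul << 0)	/* illustrative bit */

/* Model of blk_set_preempt_only(): returns the previous flag value, so
 * concurrent callers can tell who actually set it. The kernel version
 * does this under q->queue_lock; no locking here. */
static bool toy_set_preempt_only(unsigned long *queue_flags)
{
	bool was_set = *queue_flags & TOY_QUEUE_FLAG_PREEMPT_ONLY;

	*queue_flags |= TOY_QUEUE_FLAG_PREEMPT_ONLY;
	return was_set;
}

static void toy_clear_preempt_only(unsigned long *queue_flags)
{
	*queue_flags &= ~TOY_QUEUE_FLAG_PREEMPT_ONLY;
}
```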

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Tested-by: Martin Steigerwald <martin@lichtvoll.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 block/blk-core.c       | 30 ++++++++++++++++++++++++++++++
 block/blk-mq-debugfs.c |  1 +
 include/linux/blkdev.h |  6 ++++++
 3 files changed, 37 insertions(+)

diff --git a/block/blk-core.c b/block/blk-core.c
index 48f18f5e2e16..16ddd52e6408 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -347,6 +347,36 @@ void blk_sync_queue(struct request_queue *q)
 }
 EXPORT_SYMBOL(blk_sync_queue);
 
+/**
+ * blk_set_preempt_only - set QUEUE_FLAG_PREEMPT_ONLY
+ * @q: request queue pointer
+ *
+ * Returns the previous value of the PREEMPT_ONLY flag - 0 if the flag was not
+ * set and 1 if the flag was already set.
+ */
+int blk_set_preempt_only(struct request_queue *q)
+{
+	unsigned long flags;
+	int res;
+
+	spin_lock_irqsave(q->queue_lock, flags);
+	res = queue_flag_test_and_set(QUEUE_FLAG_PREEMPT_ONLY, q);
+	spin_unlock_irqrestore(q->queue_lock, flags);
+
+	return res;
+}
+EXPORT_SYMBOL_GPL(blk_set_preempt_only);
+
+void blk_clear_preempt_only(struct request_queue *q)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(q->queue_lock, flags);
+	queue_flag_clear(QUEUE_FLAG_PREEMPT_ONLY, q);
+	spin_unlock_irqrestore(q->queue_lock, flags);
+}
+EXPORT_SYMBOL_GPL(blk_clear_preempt_only);
+
 /**
  * __blk_run_queue_uncond - run a queue whether or not it has been stopped
  * @q:	The queue to run
diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 7f4a1ba532af..75f31535f280 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -74,6 +74,7 @@ static const char *const blk_queue_flag_name[] = {
 	QUEUE_FLAG_NAME(REGISTERED),
 	QUEUE_FLAG_NAME(SCSI_PASSTHROUGH),
 	QUEUE_FLAG_NAME(QUIESCED),
+	QUEUE_FLAG_NAME(PREEMPT_ONLY),
 };
 #undef QUEUE_FLAG_NAME
 
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 05203175eb9c..864ad2e4a58c 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -630,6 +630,7 @@ struct request_queue {
 #define QUEUE_FLAG_REGISTERED  26	/* queue has been registered to a disk */
 #define QUEUE_FLAG_SCSI_PASSTHROUGH 27	/* queue supports SCSI commands */
 #define QUEUE_FLAG_QUIESCED    28	/* queue has been quiesced */
+#define QUEUE_FLAG_PREEMPT_ONLY	29	/* only process REQ_PREEMPT requests */
 
 #define QUEUE_FLAG_DEFAULT	((1 << QUEUE_FLAG_IO_STAT) |		\
 				 (1 << QUEUE_FLAG_SAME_COMP)	|	\
@@ -730,6 +731,11 @@ static inline void queue_flag_clear(unsigned int flag, struct request_queue *q)
 	((rq)->cmd_flags & (REQ_FAILFAST_DEV|REQ_FAILFAST_TRANSPORT| \
 			     REQ_FAILFAST_DRIVER))
 #define blk_queue_quiesced(q)	test_bit(QUEUE_FLAG_QUIESCED, &(q)->queue_flags)
+#define blk_queue_preempt_only(q)				\
+	test_bit(QUEUE_FLAG_PREEMPT_ONLY, &(q)->queue_flags)
+
+extern int blk_set_preempt_only(struct request_queue *q);
+extern void blk_clear_preempt_only(struct request_queue *q);
 
 static inline bool blk_account_rq(struct request *rq)
 {
-- 
2.14.2


* [PATCH v11 6/7] block, scsi: Make SCSI quiesce and resume work reliably
  2017-10-30 22:41 [PATCH v11 0/7] block, scsi, md: Improve suspend and resume Bart Van Assche
                   ` (4 preceding siblings ...)
  2017-10-30 22:42 ` [PATCH v11 5/7] block: Add the QUEUE_FLAG_PREEMPT_ONLY request queue flag Bart Van Assche
@ 2017-10-30 22:42 ` Bart Van Assche
  2017-10-30 22:42 ` [PATCH v11 7/7] block, nvme: Introduce blk_mq_req_flags_t Bart Van Assche
  2017-11-09  6:16 ` [PATCH v11 0/7] block, scsi, md: Improve suspend and resume Oleksandr Natalenko
  7 siblings, 0 replies; 12+ messages in thread
From: Bart Van Assche @ 2017-10-30 22:42 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, linux-scsi, Christoph Hellwig, Martin K . Petersen,
	Oleksandr Natalenko, Ming Lei, Martin Steigerwald,
	Bart Van Assche, Johannes Thumshirn

The contexts from which a SCSI device can be quiesced or resumed are:
* Writing into /sys/class/scsi_device/*/device/state.
* SCSI parallel (SPI) domain validation.
* The SCSI device power management methods. See also scsi_bus_pm_ops.

It is essential during suspend and resume that neither the filesystem
state nor the filesystem metadata in RAM changes. This is why SCSI
devices are quiesced while the hibernation image is being written or
restored. The SCSI core quiesces devices through scsi_device_quiesce()
and scsi_device_resume(). In the SDEV_QUIESCE state execution of
non-preempt requests is deferred. This is realized by returning
BLKPREP_DEFER from inside scsi_prep_state_check() for quiesced SCSI
devices. Avoid a full queue preventing power management requests from
being submitted by deferring allocation of non-preempt requests for
devices in the quiesced state. This patch has been tested by running
the following commands and by verifying that after each resume the
fio job was still running:

for ((i=0; i<10; i++)); do
  (
    cd /sys/block/md0/md &&
    while true; do
      [ "$(<sync_action)" = "idle" ] && echo check > sync_action
      sleep 1
    done
  ) &
  pids=($!)
  for d in /sys/class/block/sd*[a-z]; do
    bdev=${d#/sys/class/block/}
    hcil=$(readlink "$d/device")
    hcil=${hcil#../../../}
    echo 4 > "$d/queue/nr_requests"
    echo 1 > "/sys/class/scsi_device/$hcil/device/queue_depth"
    fio --name="$bdev" --filename="/dev/$bdev" --buffered=0 --bs=512 \
      --rw=randread --ioengine=libaio --numjobs=4 --iodepth=16       \
      --iodepth_batch=1 --thread --loops=$((2**31)) &
    pids+=($!)
  done
  sleep 1
  echo "$(date) Hibernating ..." >>hibernate-test-log.txt
  systemctl hibernate
  sleep 10
  kill "${pids[@]}"
  echo idle > /sys/block/md0/md/sync_action
  wait
  echo "$(date) Done." >>hibernate-test-log.txt
done
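
The admission check this patch adds to blk_queue_enter() can be modeled
as follows (simplified: the percpu refcount, freezing and the sleeping
path are omitted; names are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

struct toy_queue {
	bool preempt_only;	/* QUEUE_FLAG_PREEMPT_ONLY */
	bool dying;		/* queue is being torn down */
};

/* Sketch of the decision blk_queue_enter() makes after this patch:
 * while a queue is preempt-only (quiesced), only callers that pass
 * BLK_MQ_REQ_PREEMPT may enter; everyone else is held back. */
static bool toy_may_enter(const struct toy_queue *q, bool preempt)
{
	if (q->dying)
		return false;
	return preempt || !q->preempt_only;
}
```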

Reported-by: Oleksandr Natalenko <oleksandr@natalenko.name>
References: "I/O hangs after resuming from suspend-to-ram" (https://marc.info/?l=linux-block&m=150340235201348).
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Tested-by: Martin Steigerwald <martin@lichtvoll.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Cc: Martin K. Petersen <martin.petersen@oracle.com>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
---
 block/blk-core.c           | 43 ++++++++++++++++++++++++++++++++++++-------
 block/blk-mq.c             |  4 ++--
 drivers/scsi/scsi_lib.c    | 42 ++++++++++++++++++++++++++++++------------
 fs/block_dev.c             |  4 ++--
 include/linux/blkdev.h     |  2 +-
 include/scsi/scsi_device.h |  1 +
 6 files changed, 72 insertions(+), 24 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 16ddd52e6408..d4dc10bb01e3 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -373,6 +373,7 @@ void blk_clear_preempt_only(struct request_queue *q)
 
 	spin_lock_irqsave(q->queue_lock, flags);
 	queue_flag_clear(QUEUE_FLAG_PREEMPT_ONLY, q);
+	wake_up_all(&q->mq_freeze_wq);
 	spin_unlock_irqrestore(q->queue_lock, flags);
 }
 EXPORT_SYMBOL_GPL(blk_clear_preempt_only);
@@ -794,15 +795,41 @@ struct request_queue *blk_alloc_queue(gfp_t gfp_mask)
 }
 EXPORT_SYMBOL(blk_alloc_queue);
 
-int blk_queue_enter(struct request_queue *q, bool nowait)
+/**
+ * blk_queue_enter() - try to increase q->q_usage_counter
+ * @q: request queue pointer
+ * @flags: BLK_MQ_REQ_NOWAIT and/or BLK_MQ_REQ_PREEMPT
+ */
+int blk_queue_enter(struct request_queue *q, unsigned int flags)
 {
+	const bool preempt = flags & BLK_MQ_REQ_PREEMPT;
+
 	while (true) {
+		bool success = false;
 		int ret;
 
-		if (percpu_ref_tryget_live(&q->q_usage_counter))
+		rcu_read_lock_sched();
+		if (percpu_ref_tryget_live(&q->q_usage_counter)) {
+			/*
+			 * The code that sets the PREEMPT_ONLY flag is
+			 * responsible for ensuring that that flag is globally
+			 * visible before the queue is unfrozen.
+			 */
+			if (preempt || !blk_queue_preempt_only(q)) {
+				success = true;
+			} else {
+				percpu_ref_put(&q->q_usage_counter);
+				WARN_ONCE(true,
+					  "%s: Attempt to allocate non-preempt request in preempt-only mode.\n",
+					  kobject_name(q->kobj.parent));
+			}
+		}
+		rcu_read_unlock_sched();
+
+		if (success)
 			return 0;
 
-		if (nowait)
+		if (flags & BLK_MQ_REQ_NOWAIT)
 			return -EBUSY;
 
 		/*
@@ -815,7 +842,8 @@ int blk_queue_enter(struct request_queue *q, bool nowait)
 		smp_rmb();
 
 		ret = wait_event_interruptible(q->mq_freeze_wq,
-				!atomic_read(&q->mq_freeze_depth) ||
+				(atomic_read(&q->mq_freeze_depth) == 0 &&
+				 (preempt || !blk_queue_preempt_only(q))) ||
 				blk_queue_dying(q));
 		if (blk_queue_dying(q))
 			return -ENODEV;
@@ -1444,8 +1472,7 @@ static struct request *blk_old_get_request(struct request_queue *q,
 	/* create ioc upfront */
 	create_io_context(gfp_mask, q->node);
 
-	ret = blk_queue_enter(q, !(gfp_mask & __GFP_DIRECT_RECLAIM) ||
-			      (op & REQ_NOWAIT));
+	ret = blk_queue_enter(q, flags);
 	if (ret)
 		return ERR_PTR(ret);
 	spin_lock_irq(q->queue_lock);
@@ -2266,8 +2293,10 @@ blk_qc_t generic_make_request(struct bio *bio)
 	current->bio_list = bio_list_on_stack;
 	do {
 		struct request_queue *q = bio->bi_disk->queue;
+		unsigned int flags = bio->bi_opf & REQ_NOWAIT ?
+			BLK_MQ_REQ_NOWAIT : 0;
 
-		if (likely(blk_queue_enter(q, bio->bi_opf & REQ_NOWAIT) == 0)) {
+		if (likely(blk_queue_enter(q, flags) == 0)) {
 			struct bio_list lower, same;
 
 			/* Create a fresh bio_list for all subordinate requests */
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 6a025b17caac..c6bff60e6b8b 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -386,7 +386,7 @@ struct request *blk_mq_alloc_request(struct request_queue *q, unsigned int op,
 	struct request *rq;
 	int ret;
 
-	ret = blk_queue_enter(q, flags & BLK_MQ_REQ_NOWAIT);
+	ret = blk_queue_enter(q, flags);
 	if (ret)
 		return ERR_PTR(ret);
 
@@ -425,7 +425,7 @@ struct request *blk_mq_alloc_request_hctx(struct request_queue *q,
 	if (hctx_idx >= q->nr_hw_queues)
 		return ERR_PTR(-EIO);
 
-	ret = blk_queue_enter(q, true);
+	ret = blk_queue_enter(q, flags);
 	if (ret)
 		return ERR_PTR(ret);
 
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 7c119696402c..d85b7941b988 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -2955,21 +2955,37 @@ static void scsi_wait_for_queuecommand(struct scsi_device *sdev)
 int
 scsi_device_quiesce(struct scsi_device *sdev)
 {
+	struct request_queue *q = sdev->request_queue;
 	int err;
 
+	/*
+	 * It is allowed to call scsi_device_quiesce() multiple times from
+	 * the same context but concurrent scsi_device_quiesce() calls are
+	 * not allowed.
+	 */
+	WARN_ON_ONCE(sdev->quiesced_by && sdev->quiesced_by != current);
+
+	blk_set_preempt_only(q);
+
+	blk_mq_freeze_queue(q);
+	/*
+	 * Ensure that the effect of blk_set_preempt_only() will be visible
+	 * for percpu_ref_tryget() callers that occur after the queue
+	 * unfreeze even if the queue was already frozen before this function
+	 * was called. See also https://lwn.net/Articles/573497/.
+	 */
+	synchronize_rcu();
+	blk_mq_unfreeze_queue(q);
+
 	mutex_lock(&sdev->state_mutex);
 	err = scsi_device_set_state(sdev, SDEV_QUIESCE);
+	if (err == 0)
+		sdev->quiesced_by = current;
+	else
+		blk_clear_preempt_only(q);
 	mutex_unlock(&sdev->state_mutex);
 
-	if (err)
-		return err;
-
-	scsi_run_queue(sdev->request_queue);
-	while (atomic_read(&sdev->device_busy)) {
-		msleep_interruptible(200);
-		scsi_run_queue(sdev->request_queue);
-	}
-	return 0;
+	return err;
 }
 EXPORT_SYMBOL(scsi_device_quiesce);
 
@@ -2989,9 +3005,11 @@ void scsi_device_resume(struct scsi_device *sdev)
 	 * device deleted during suspend)
 	 */
 	mutex_lock(&sdev->state_mutex);
-	if (sdev->sdev_state == SDEV_QUIESCE &&
-	    scsi_device_set_state(sdev, SDEV_RUNNING) == 0)
-		scsi_run_queue(sdev->request_queue);
+	WARN_ON_ONCE(!sdev->quiesced_by);
+	sdev->quiesced_by = NULL;
+	blk_clear_preempt_only(sdev->request_queue);
+	if (sdev->sdev_state == SDEV_QUIESCE)
+		scsi_device_set_state(sdev, SDEV_RUNNING);
 	mutex_unlock(&sdev->state_mutex);
 }
 EXPORT_SYMBOL(scsi_device_resume);
diff --git a/fs/block_dev.c b/fs/block_dev.c
index 07ddccd17801..c5363186618b 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -662,7 +662,7 @@ int bdev_read_page(struct block_device *bdev, sector_t sector,
 	if (!ops->rw_page || bdev_get_integrity(bdev))
 		return result;
 
-	result = blk_queue_enter(bdev->bd_queue, false);
+	result = blk_queue_enter(bdev->bd_queue, 0);
 	if (result)
 		return result;
 	result = ops->rw_page(bdev, sector + get_start_sect(bdev), page, false);
@@ -698,7 +698,7 @@ int bdev_write_page(struct block_device *bdev, sector_t sector,
 
 	if (!ops->rw_page || bdev_get_integrity(bdev))
 		return -EOPNOTSUPP;
-	result = blk_queue_enter(bdev->bd_queue, false);
+	result = blk_queue_enter(bdev->bd_queue, 0);
 	if (result)
 		return result;
 
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 864ad2e4a58c..4f91c6462752 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -956,7 +956,7 @@ extern int scsi_cmd_ioctl(struct request_queue *, struct gendisk *, fmode_t,
 extern int sg_scsi_ioctl(struct request_queue *, struct gendisk *, fmode_t,
 			 struct scsi_ioctl_command __user *);
 
-extern int blk_queue_enter(struct request_queue *q, bool nowait);
+extern int blk_queue_enter(struct request_queue *q, unsigned int flags);
 extern void blk_queue_exit(struct request_queue *q);
 extern void blk_start_queue(struct request_queue *q);
 extern void blk_start_queue_async(struct request_queue *q);
diff --git a/include/scsi/scsi_device.h b/include/scsi/scsi_device.h
index 82e93ee94708..6f0f1e242e23 100644
--- a/include/scsi/scsi_device.h
+++ b/include/scsi/scsi_device.h
@@ -219,6 +219,7 @@ struct scsi_device {
 	unsigned char		access_state;
 	struct mutex		state_mutex;
 	enum scsi_device_state sdev_state;
+	struct task_struct	*quiesced_by;
 	unsigned long		sdev_data[0];
 } __attribute__((aligned(sizeof(unsigned long))));
 
-- 
2.14.2

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH v11 7/7] block, nvme: Introduce blk_mq_req_flags_t
  2017-10-30 22:41 [PATCH v11 0/7] block, scsi, md: Improve suspend and resume Bart Van Assche
                   ` (5 preceding siblings ...)
  2017-10-30 22:42 ` [PATCH v11 6/7] block, scsi: Make SCSI quiesce and resume work reliably Bart Van Assche
@ 2017-10-30 22:42 ` Bart Van Assche
  2017-11-09  6:16 ` [PATCH v11 0/7] block, scsi, md: Improve suspend and resume Oleksandr Natalenko
  7 siblings, 0 replies; 12+ messages in thread
From: Bart Van Assche @ 2017-10-30 22:42 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, linux-scsi, Christoph Hellwig, Martin K . Petersen,
	Oleksandr Natalenko, Ming Lei, Martin Steigerwald,
	Bart Van Assche, linux-nvme, Johannes Thumshirn

Several block layer and NVMe core functions accept a combination of
BLK_MQ_REQ_* flags through their 'flags' argument, but there is no
compile-time verification that the right type of block layer flags is
passed. Make it possible for sparse to verify this. This patch does not
change any functionality.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Cc: linux-nvme@lists.infradead.org
Cc: Christoph Hellwig <hch@lst.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Cc: Ming Lei <ming.lei@redhat.com>
---
 block/blk-core.c          | 12 ++++++------
 block/blk-mq.c            |  4 ++--
 block/blk-mq.h            |  2 +-
 drivers/nvme/host/core.c  |  5 +++--
 drivers/nvme/host/nvme.h  |  5 +++--
 include/linux/blk-mq.h    | 17 +++++++++++------
 include/linux/blk_types.h |  2 ++
 include/linux/blkdev.h    |  4 ++--
 8 files changed, 30 insertions(+), 21 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index d4dc10bb01e3..0a0fdaa474e3 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -800,7 +800,7 @@ EXPORT_SYMBOL(blk_alloc_queue);
  * @q: request queue pointer
  * @flags: BLK_MQ_REQ_NOWAIT and/or BLK_MQ_REQ_PREEMPT
  */
-int blk_queue_enter(struct request_queue *q, unsigned int flags)
+int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags)
 {
 	const bool preempt = flags & BLK_MQ_REQ_PREEMPT;
 
@@ -1227,7 +1227,7 @@ int blk_update_nr_requests(struct request_queue *q, unsigned int nr)
  * Returns request pointer on success, with @q->queue_lock *not held*.
  */
 static struct request *__get_request(struct request_list *rl, unsigned int op,
-				     struct bio *bio, unsigned int flags)
+				     struct bio *bio, blk_mq_req_flags_t flags)
 {
 	struct request_queue *q = rl->q;
 	struct request *rq;
@@ -1410,7 +1410,7 @@ static struct request *__get_request(struct request_list *rl, unsigned int op,
  * Returns request pointer on success, with @q->queue_lock *not held*.
  */
 static struct request *get_request(struct request_queue *q, unsigned int op,
-				   struct bio *bio, unsigned int flags)
+				   struct bio *bio, blk_mq_req_flags_t flags)
 {
 	const bool is_sync = op_is_sync(op);
 	DEFINE_WAIT(wait);
@@ -1460,7 +1460,7 @@ static struct request *get_request(struct request_queue *q, unsigned int op,
 
 /* flags: BLK_MQ_REQ_PREEMPT and/or BLK_MQ_REQ_NOWAIT. */
 static struct request *blk_old_get_request(struct request_queue *q,
-					   unsigned int op, unsigned int flags)
+				unsigned int op, blk_mq_req_flags_t flags)
 {
 	struct request *rq;
 	gfp_t gfp_mask = flags & BLK_MQ_REQ_NOWAIT ? GFP_ATOMIC :
@@ -1497,7 +1497,7 @@ static struct request *blk_old_get_request(struct request_queue *q,
  * @flags: BLK_MQ_REQ_* flags, e.g. BLK_MQ_REQ_NOWAIT.
  */
 struct request *blk_get_request_flags(struct request_queue *q, unsigned int op,
-				      unsigned int flags)
+				      blk_mq_req_flags_t flags)
 {
 	struct request *req;
 
@@ -2293,7 +2293,7 @@ blk_qc_t generic_make_request(struct bio *bio)
 	current->bio_list = bio_list_on_stack;
 	do {
 		struct request_queue *q = bio->bi_disk->queue;
-		unsigned int flags = bio->bi_opf & REQ_NOWAIT ?
+		blk_mq_req_flags_t flags = bio->bi_opf & REQ_NOWAIT ?
 			BLK_MQ_REQ_NOWAIT : 0;
 
 		if (likely(blk_queue_enter(q, flags) == 0)) {
diff --git a/block/blk-mq.c b/block/blk-mq.c
index c6bff60e6b8b..c037b1ad64a7 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -380,7 +380,7 @@ static struct request *blk_mq_get_request(struct request_queue *q,
 }
 
 struct request *blk_mq_alloc_request(struct request_queue *q, unsigned int op,
-		unsigned int flags)
+		blk_mq_req_flags_t flags)
 {
 	struct blk_mq_alloc_data alloc_data = { .flags = flags };
 	struct request *rq;
@@ -406,7 +406,7 @@ struct request *blk_mq_alloc_request(struct request_queue *q, unsigned int op,
 EXPORT_SYMBOL(blk_mq_alloc_request);
 
 struct request *blk_mq_alloc_request_hctx(struct request_queue *q,
-		unsigned int op, unsigned int flags, unsigned int hctx_idx)
+	unsigned int op, blk_mq_req_flags_t flags, unsigned int hctx_idx)
 {
 	struct blk_mq_alloc_data alloc_data = { .flags = flags };
 	struct request *rq;
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 522b420dedc0..5dcfe4fa5e0d 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -110,7 +110,7 @@ static inline void blk_mq_put_ctx(struct blk_mq_ctx *ctx)
 struct blk_mq_alloc_data {
 	/* input parameter */
 	struct request_queue *q;
-	unsigned int flags;
+	blk_mq_req_flags_t flags;
 	unsigned int shallow_depth;
 
 	/* input & output parameter */
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index bb2aad078637..01947cd82b5a 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -292,7 +292,7 @@ static struct nvme_ns *nvme_get_ns_from_disk(struct gendisk *disk)
 }
 
 struct request *nvme_alloc_request(struct request_queue *q,
-		struct nvme_command *cmd, unsigned int flags, int qid)
+		struct nvme_command *cmd, blk_mq_req_flags_t flags, int qid)
 {
 	unsigned op = nvme_is_write(cmd) ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN;
 	struct request *req;
@@ -560,7 +560,8 @@ EXPORT_SYMBOL_GPL(nvme_setup_cmd);
  */
 int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
 		union nvme_result *result, void *buffer, unsigned bufflen,
-		unsigned timeout, int qid, int at_head, int flags)
+		unsigned timeout, int qid, int at_head,
+		blk_mq_req_flags_t flags)
 {
 	struct request *req;
 	int ret;
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index d3f3c4447515..61b25e8c222c 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -314,14 +314,15 @@ void nvme_start_freeze(struct nvme_ctrl *ctrl);
 
 #define NVME_QID_ANY -1
 struct request *nvme_alloc_request(struct request_queue *q,
-		struct nvme_command *cmd, unsigned int flags, int qid);
+		struct nvme_command *cmd, blk_mq_req_flags_t flags, int qid);
 blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req,
 		struct nvme_command *cmd);
 int nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
 		void *buf, unsigned bufflen);
 int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
 		union nvme_result *result, void *buffer, unsigned bufflen,
-		unsigned timeout, int qid, int at_head, int flags);
+		unsigned timeout, int qid, int at_head,
+		blk_mq_req_flags_t flags);
 int nvme_set_queue_count(struct nvme_ctrl *ctrl, int *count);
 void nvme_start_keep_alive(struct nvme_ctrl *ctrl);
 void nvme_stop_keep_alive(struct nvme_ctrl *ctrl);
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 22c7f36745fc..38ecf1340266 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -210,16 +210,21 @@ void blk_mq_free_request(struct request *rq);
 bool blk_mq_can_queue(struct blk_mq_hw_ctx *);
 
 enum {
-	BLK_MQ_REQ_NOWAIT	= (1 << 0), /* return when out of requests */
-	BLK_MQ_REQ_RESERVED	= (1 << 1), /* allocate from reserved pool */
-	BLK_MQ_REQ_INTERNAL	= (1 << 2), /* allocate internal/sched tag */
-	BLK_MQ_REQ_PREEMPT	= (1 << 3), /* set RQF_PREEMPT */
+	/* return when out of requests */
+	BLK_MQ_REQ_NOWAIT	= (__force blk_mq_req_flags_t)(1 << 0),
+	/* allocate from reserved pool */
+	BLK_MQ_REQ_RESERVED	= (__force blk_mq_req_flags_t)(1 << 1),
+	/* allocate internal/sched tag */
+	BLK_MQ_REQ_INTERNAL	= (__force blk_mq_req_flags_t)(1 << 2),
+	/* set RQF_PREEMPT */
+	BLK_MQ_REQ_PREEMPT	= (__force blk_mq_req_flags_t)(1 << 3),
 };
 
 struct request *blk_mq_alloc_request(struct request_queue *q, unsigned int op,
-		unsigned int flags);
+		blk_mq_req_flags_t flags);
 struct request *blk_mq_alloc_request_hctx(struct request_queue *q,
-		unsigned int op, unsigned int flags, unsigned int hctx_idx);
+		unsigned int op, blk_mq_req_flags_t flags,
+		unsigned int hctx_idx);
 struct request *blk_mq_tag_to_rq(struct blk_mq_tags *tags, unsigned int tag);
 
 enum {
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 3385c89f402e..cbd908478140 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -162,6 +162,8 @@ struct bio {
  */
 #define BIO_RESET_BITS	BVEC_POOL_OFFSET
 
+typedef __u32 __bitwise blk_mq_req_flags_t;
+
 /*
  * Operations and flags common to the bio and request structures.
  * We use 8 bits for encoding the operation, and the remaining 24 for flags.
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 4f91c6462752..9a7d7e775cd0 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -932,7 +932,7 @@ extern void blk_put_request(struct request *);
 extern void __blk_put_request(struct request_queue *, struct request *);
 extern struct request *blk_get_request_flags(struct request_queue *,
 					     unsigned int op,
-					     unsigned int flags);
+					     blk_mq_req_flags_t flags);
 extern struct request *blk_get_request(struct request_queue *, unsigned int op,
 				       gfp_t gfp_mask);
 extern void blk_requeue_request(struct request_queue *, struct request *);
@@ -956,7 +956,7 @@ extern int scsi_cmd_ioctl(struct request_queue *, struct gendisk *, fmode_t,
 extern int sg_scsi_ioctl(struct request_queue *, struct gendisk *, fmode_t,
 			 struct scsi_ioctl_command __user *);
 
-extern int blk_queue_enter(struct request_queue *q, unsigned int flags);
+extern int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags);
 extern void blk_queue_exit(struct request_queue *q);
 extern void blk_start_queue(struct request_queue *q);
 extern void blk_start_queue_async(struct request_queue *q);
-- 
2.14.2


* Re: [PATCH v11 0/7] block, scsi, md: Improve suspend and resume
  2017-10-30 22:41 [PATCH v11 0/7] block, scsi, md: Improve suspend and resume Bart Van Assche
                   ` (6 preceding siblings ...)
  2017-10-30 22:42 ` [PATCH v11 7/7] block, nvme: Introduce blk_mq_req_flags_t Bart Van Assche
@ 2017-11-09  6:16 ` Oleksandr Natalenko
  2017-11-09 16:54   ` Bart Van Assche
  7 siblings, 1 reply; 12+ messages in thread
From: Oleksandr Natalenko @ 2017-11-09  6:16 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, linux-block, linux-scsi, Christoph Hellwig,
	Martin K . Petersen, Ming Lei, Martin Steigerwald

Bart,

is this something known to you, or it is just my fault applying this series to
v4.13? Except having this warning, suspend/resume works for me:

===
[   27.383846] sd 0:0:0:0: [sda] Starting disk
[   27.383976] sd 1:0:0:0: [sdb] Starting disk
[   27.451218] sdb: Attempt to allocate non-preempt request in preempt-only mode.
[   27.459640] ------------[ cut here ]------------
[   27.464521] WARNING: CPU: 0 PID: 172 at block/blk-core.c:823 blk_queue_enter+0x222/0x280
[   27.470867] Modules linked in: nls_iso8859_1 nls_cp437 vfat fat kvm_intel iTCO_wdt bochs_drm iTCO_vendor_support ppdev kvm ttm irqbypass evdev input_leds drm_kms_helper joydev psmouse led_class lpc_ich pcspkr i2c_i801 mousedev parport_pc mac_hid qemu_fw_cfg drm parport syscopyarea sysfillrect sysimgblt button fb_sys_fops intel_agp intel_gtt sch_fq_codel ip_tables x_tables xfs dm_thin_pool dm_persistent_data dm_bio_prison dm_bufio libcrc32c crc32c_generic dm_crypt algif_skcipher af_alg dm_mod dax raid10 md_mod sr_mod sd_mod cdrom hid_generic usbhid hid crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel uhci_hcd pcbc serio_raw atkbd libps2 ahci aesni_intel xhci_pci ehci_pci aes_x86_64 crypto_simd glue_helper xhci_hcd ehci_hcd libahci cryptd libata usbcore usb_common i8042 serio virtio_pci
[   27.501799]  virtio_net virtio_scsi scsi_mod virtio_ring virtio
[   27.503639] CPU: 0 PID: 172 Comm: md0_raid10 Not tainted 4.13.0-pf13 #1
[   27.505492] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
[   27.507693] task: ffff88001f6aa340 task.stack: ffffc900005e8000
[   27.509516] RIP: 0010:blk_queue_enter+0x222/0x280
[   27.511623] RSP: 0018:ffffc900005ebb70 EFLAGS: 00010282
[   27.512978] RAX: 0000000000000042 RBX: 0000000000000000 RCX: 0000000000000000
[   27.514389] RDX: 0000000000000000 RSI: ffff88001f80dbd8 RDI: ffff88001f80dbd8
[   27.516339] RBP: ffffc900005ebbd0 R08: 000000000000028e R09: 0000000000000000
[   27.519083] R10: ffffc900005ebc50 R11: 00000000ffffffff R12: 0000000100000000
[   27.521298] R13: ffff88001deaa100 R14: 0000000000000000 R15: ffff88001deaa100
[   27.523577] FS:  0000000000000000(0000) GS:ffff88001f800000(0000) knlGS:0000000000000000
[   27.525889] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   27.527928] CR2: 00005568d4d27858 CR3: 000000001983b000 CR4: 00000000001406f0
[   27.529721] Call Trace:
[   27.530622]  ? wait_woken+0x80/0x80
[   27.531739]  generic_make_request+0xf1/0x320
[   27.532806]  submit_bio+0x73/0x150
[   27.533775]  ? submit_bio+0x73/0x150
[   27.534773]  md_super_write.part.58+0xbd/0xe0 [md_mod]
[   27.536078]  md_update_sb.part.59+0x534/0x840 [md_mod]
[   27.537468]  ? percpu_ref_switch_to_percpu+0x36/0x40
[   27.538862]  md_check_recovery+0x452/0x510 [md_mod]
[   27.540273]  raid10d+0x62/0x1420 [raid10]
[   27.541757]  ? schedule+0x3d/0xb0
[   27.542744]  ? schedule+0x3d/0xb0
[   27.544013]  ? schedule_timeout+0x208/0x390
[   27.546399]  md_thread+0x120/0x160 [md_mod]
[   27.548810]  ? md_thread+0x120/0x160 [md_mod]
[   27.550394]  ? wait_woken+0x80/0x80
[   27.551840]  kthread+0x124/0x140
[   27.551846]  ? state_show+0x2f0/0x2f0 [md_mod]
[   27.551848]  ? kthread_create_on_node+0x70/0x70
[   27.551852]  ? SyS_exit_group+0x14/0x20
[   27.551857]  ret_from_fork+0x25/0x30
[   27.551859] Code: 00 00 e9 6d fe ff ff 31 c0 e9 66 fe ff ff 49 8b 87 e0 01 00 00 48 c7 c7 78 73 95 81 c6 05 6c 11 80 00 01 48 8b 30 e8 0f 0f de ff <0f> ff e9 97 fe ff ff 49 8b b7 a8 01 00 00 89 c2 83 e6 20 0f 85
[   27.551882] ---[ end trace ba6164315560503f ]---
[   27.701328] ata4: SATA link down (SStatus 0 SControl 300)
[   27.710425] ata5: SATA link down (SStatus 0 SControl 300)
[   27.714620] ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
[   27.722375] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
[   27.733520] ata6: SATA link down (SStatus 0 SControl 300)
[   27.738315] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
[   27.743962] ata3.00: configured for UDMA/100
[   27.747153] ata2.00: configured for UDMA/100
[   27.750833] ata1.00: configured for UDMA/100
[   27.781627] usb 2-1: reset high-speed USB device number 2 using xhci_hcd
[   27.963142] PM: resume of devices complete after 627.949 msecs
[   27.971546] OOM killer enabled.
[   27.978424] Restarting tasks ... done.
===

Thanks.

On Monday, October 30, 2017 at 23:41:58 CET Bart Van Assche wrote:
> It is known that during the resume following a hibernate, especially when
> using an md RAID1 array created on top of SCSI devices, sometimes the system
> hangs instead of coming up properly. This patch series fixes that
> problem. These patches have been tested on top of the block layer for-next
> branch. Please consider these changes for kernel v4.15.

> Changes between v10 and v11:
> - Left out the three md patches because a deadlock was reported when using
> XFS on top of md RAID 1. This deadlock occurred because the md kernel
> thread got frozen before the kernel thread running xfsaild().
> - Left out the blk_queue_enter() / blk_queue_exit() changes from
>   block/blk-timeout.c because a recent patch removed these calls from
>   blk_timeout_work().
> - Retested the whole series.


* Re: [PATCH v11 0/7] block, scsi, md: Improve suspend and resume
  2017-11-09  6:16 ` [PATCH v11 0/7] block, scsi, md: Improve suspend and resume Oleksandr Natalenko
@ 2017-11-09 16:54   ` Bart Van Assche
  2017-11-09 16:55     ` Jens Axboe
  0 siblings, 1 reply; 12+ messages in thread
From: Bart Van Assche @ 2017-11-09 16:54 UTC (permalink / raw)
  To: oleksandr@natalenko.name
  Cc: linux-scsi@vger.kernel.org, hch@lst.de,
	linux-block@vger.kernel.org, martin@lichtvoll.de,
	martin.petersen@oracle.com, axboe@kernel.dk, ming.lei@redhat.com

On Thu, 2017-11-09 at 07:16 +0100, Oleksandr Natalenko wrote:
> is this something known to you, or it is just my fault applying this series to
> v4.13? Except having this warning, suspend/resume works for me:
>
> [   27.383846] sd 0:0:0:0: [sda] Starting disk
> [   27.383976] sd 1:0:0:0: [sdb] Starting disk
> [   27.451218] sdb: Attempt to allocate non-preempt request in preempt-only
> mode.
> [   27.459640] ------------[ cut here ]------------
> [   27.464521] WARNING: CPU: 0 PID: 172 at block/blk-core.c:823 blk_queue_enter+0x222/0x280

Hello Oleksandr,

Thanks for the testing. The warning that you reported is expected. Maybe it
should be left out to avoid that users get confused.

Jens, this patch series still applies cleanly on top of your for-4.15/block
branch. Are you fine with this patch series or do you perhaps want me to
repost it with Oleksandr's Tested-by tag added to each patch?

Thanks,

Bart.


* Re: [PATCH v11 0/7] block, scsi, md: Improve suspend and resume
  2017-11-09 16:54   ` Bart Van Assche
@ 2017-11-09 16:55     ` Jens Axboe
  2017-11-09 17:10       ` Oleksandr Natalenko
  0 siblings, 1 reply; 12+ messages in thread
From: Jens Axboe @ 2017-11-09 16:55 UTC (permalink / raw)
  To: Bart Van Assche, oleksandr@natalenko.name
  Cc: linux-scsi@vger.kernel.org, hch@lst.de,
	linux-block@vger.kernel.org, martin@lichtvoll.de,
	martin.petersen@oracle.com, ming.lei@redhat.com

On 11/09/2017 09:54 AM, Bart Van Assche wrote:
> On Thu, 2017-11-09 at 07:16 +0100, Oleksandr Natalenko wrote:
>> is this something known to you, or it is just my fault applying this series to 
>> v4.13? Except having this warning, suspend/resume works for me:
>>
>> [   27.383846] sd 0:0:0:0: [sda] Starting disk
>> [   27.383976] sd 1:0:0:0: [sdb] Starting disk
>> [   27.451218] sdb: Attempt to allocate non-preempt request in preempt-only 
>> mode.
>> [   27.459640] ------------[ cut here ]------------
>> [   27.464521] WARNING: CPU: 0 PID: 172 at block/blk-core.c:823 blk_queue_enter+0x222/0x280
> 
> Hello Oleksandr,
> 
> Thanks for the testing. The warning that you reported is expected. Maybe it
> should be left out to avoid that users get confused.

If the warning is expected, then it should be removed. It'll just confuse
users and cause useless bug reports.

> Jens, this patch series still applies cleanly on top of your for-4.15/block
> branch. Are you fine with this patch series or do you perhaps want me to
> repost it with Oleksandr's Tested-by tag added to each patch?

Since you need to kill the warning anyway, let's get it respun.

-- 
Jens Axboe


* Re: [PATCH v11 0/7] block, scsi, md: Improve suspend and resume
  2017-11-09 16:55     ` Jens Axboe
@ 2017-11-09 17:10       ` Oleksandr Natalenko
  0 siblings, 0 replies; 12+ messages in thread
From: Oleksandr Natalenko @ 2017-11-09 17:10 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Bart Van Assche, linux-scsi@vger.kernel.org, hch@lst.de,
	linux-block@vger.kernel.org, martin@lichtvoll.de,
	martin.petersen@oracle.com, ming.lei@redhat.com

Then,

Reported-by: Oleksandr Natalenko <oleksandr@natalenko.name>
Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>

On Thursday, November 9, 2017 at 17:55:58 CET Jens Axboe wrote:
> On 11/09/2017 09:54 AM, Bart Van Assche wrote:
> > On Thu, 2017-11-09 at 07:16 +0100, Oleksandr Natalenko wrote:
> >> is this something known to you, or it is just my fault applying this
> >> series to v4.13? Except having this warning, suspend/resume works for
> >> me:
> >>
> >> [   27.383846] sd 0:0:0:0: [sda] Starting disk
> >> [   27.383976] sd 1:0:0:0: [sdb] Starting disk
> >> [   27.451218] sdb: Attempt to allocate non-preempt request in
> >> preempt-only
> >> mode.
> >> [   27.459640] ------------[ cut here ]------------
> >> [   27.464521] WARNING: CPU: 0 PID: 172 at block/blk-core.c:823
> >> blk_queue_enter+0x222/0x280
> >
> > Hello Oleksandr,
> >
> > Thanks for the testing. The warning that you reported is expected. Maybe
> > it
> > should be left out to avoid that users get confused.
>
> If the warning is expected, then it should be removed. It'll just confuse
> users and cause useless bug reports.
>
> > Jens, this patch series still applies cleanly on top of your
> > for-4.15/block
> > branch. Are you fine with this patch series or do you perhaps want me to
> > repost it with Oleksandr's Tested-by tag added to each patch?
>
> Since you need to kill the warning anyway, let's get it respun.


end of thread, other threads:[~2017-11-09 17:10 UTC | newest]

Thread overview: 12+ messages
-- links below jump to the message on this page --
2017-10-30 22:41 [PATCH v11 0/7] block, scsi, md: Improve suspend and resume Bart Van Assche
2017-10-30 22:41 ` [PATCH v11 1/7] block: Make q_usage_counter also track legacy requests Bart Van Assche
2017-10-30 22:42 ` [PATCH v11 2/7] block: Introduce blk_get_request_flags() Bart Van Assche
2017-10-30 22:42 ` [PATCH v11 3/7] block: Introduce BLK_MQ_REQ_PREEMPT Bart Van Assche
2017-10-30 22:42 ` [PATCH v11 4/7] ide, scsi: Tell the block layer at request allocation time about preempt requests Bart Van Assche
2017-10-30 22:42 ` [PATCH v11 5/7] block: Add the QUEUE_FLAG_PREEMPT_ONLY request queue flag Bart Van Assche
2017-10-30 22:42 ` [PATCH v11 6/7] block, scsi: Make SCSI quiesce and resume work reliably Bart Van Assche
2017-10-30 22:42 ` [PATCH v11 7/7] block, nvme: Introduce blk_mq_req_flags_t Bart Van Assche
2017-11-09  6:16 ` [PATCH v11 0/7] block, scsi, md: Improve suspend and resume Oleksandr Natalenko
2017-11-09 16:54   ` Bart Van Assche
2017-11-09 16:55     ` Jens Axboe
2017-11-09 17:10       ` Oleksandr Natalenko
