* [PATCH v3 00/12] More patches for kernel v4.13
@ 2017-06-08 17:33 Bart Van Assche
2017-06-08 17:33 ` [PATCH v3 01/12] blk-mq: Reduce blk_mq_hw_ctx size Bart Van Assche
` (12 more replies)
0 siblings, 13 replies; 16+ messages in thread
From: Bart Van Assche @ 2017-06-08 17:33 UTC (permalink / raw)
To: Jens Axboe; +Cc: linux-block, Christoph Hellwig, Bart Van Assche
Hello Jens,
This patch series contains one patch that reduces the size of struct
blk_mq_hw_ctx, a few patches that simplify some of the block layer code,
and several patches that improve the block layer documentation. Please
consider these patches for kernel v4.13.
The basis for these patches is a merge of your for-next and for-linus
branches. Please note that the first patch in this series applies fine on top
of the merge of these two branches but not on top of your for-next branch.
Thanks,
Bart.
Changes between v2 and v3:
* Added patch "blk-mq: Reduce blk_mq_hw_ctx size".
* Removed patch "block: Rename blk_mq_rq_{to,from}_pdu()".
* Addressed Christoph's review comments about patch "block: Introduce
request_queue.initialize_rq_fn()".
* Rebased (and retested) this patch series on top of a merge of Jens'
for-next and for-linus branches.
Changes between v1 and v2:
* Addressed Christoph's comment about moving the .initialize_rq_fn() call
from blk_rq_init() / blk_mq_rq_ctx_init() into blk_get_request().
* Left out patch "scsi: Make scsi_ioctl_reset() pass the request queue pointer
to blk_rq_init()" since it's no longer needed.
* Restored the scsi_req_init() call in ide_prep_sense().
* Combined the two patches that reduce the blk_mq_hw_ctx size into a single
patch.
* Modified patch "blk-mq: Initialize a request before assigning a tag" such
that .tag and .internal_tag are no longer initialized twice.
* Removed WARN_ON_ONCE(q->mq_ops) from blk_queue_bypass_end() because this
function is used by both blk-sq and blk-mq.
* Added several new patches, e.g. "block: Rename blk_mq_rq_{to,from}_pdu()".
Bart Van Assche (12):
blk-mq: Reduce blk_mq_hw_ctx size
block: Make request operation type argument declarations consistent
block: Introduce request_queue.initialize_rq_fn()
block: Make most scsi_req_init() calls implicit
block: Change argument type of scsi_req_init()
blk-mq: Initialize a request before assigning a tag
block: Add a comment above queue_lockdep_assert_held()
block: Check locking assumptions at runtime
block: Document what queue type each function is intended for
blk-mq: Document locking assumptions
block: Constify disk_type
blk-mq: Warn when attempting to run a hardware queue that is not
mapped
block/blk-core.c | 130 ++++++++++++++++++++++++++++---------
block/blk-flush.c | 8 ++-
block/blk-merge.c | 3 +
block/blk-mq-sched.c | 2 +
block/blk-mq.c | 60 +++++++++++------
block/blk-tag.c | 15 ++---
block/blk-timeout.c | 4 +-
block/blk.h | 2 +
block/bsg.c | 1 -
block/genhd.c | 4 +-
block/scsi_ioctl.c | 13 ++--
drivers/block/pktcdvd.c | 1 -
drivers/cdrom/cdrom.c | 1 -
drivers/ide/ide-atapi.c | 3 +-
drivers/ide/ide-cd.c | 1 -
drivers/ide/ide-cd_ioctl.c | 1 -
drivers/ide/ide-devsets.c | 1 -
drivers/ide/ide-disk.c | 1 -
drivers/ide/ide-ioctls.c | 2 -
drivers/ide/ide-park.c | 2 -
drivers/ide/ide-pm.c | 2 -
drivers/ide/ide-probe.c | 6 +-
drivers/ide/ide-tape.c | 1 -
drivers/ide/ide-taskfile.c | 1 -
drivers/scsi/osd/osd_initiator.c | 2 -
drivers/scsi/osst.c | 1 -
drivers/scsi/scsi_error.c | 1 -
drivers/scsi/scsi_lib.c | 12 +++-
drivers/scsi/scsi_transport_sas.c | 6 ++
drivers/scsi/sg.c | 2 -
drivers/scsi/st.c | 1 -
drivers/target/target_core_pscsi.c | 2 -
fs/nfsd/blocklayout.c | 1 -
include/linux/blk-mq.h | 13 ++--
include/linux/blkdev.h | 14 +++-
include/scsi/scsi_request.h | 2 +-
36 files changed, 213 insertions(+), 109 deletions(-)
--
2.12.2
* [PATCH v3 01/12] blk-mq: Reduce blk_mq_hw_ctx size
2017-06-08 17:33 [PATCH v3 00/12] More patches for kernel v4.13 Bart Van Assche
@ 2017-06-08 17:33 ` Bart Van Assche
2017-06-19 2:26 ` Ming Lei
2017-06-08 17:33 ` [PATCH v3 02/12] block: Make request operation type argument declarations consistent Bart Van Assche
` (11 subsequent siblings)
12 siblings, 1 reply; 16+ messages in thread
From: Bart Van Assche @ 2017-06-08 17:33 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-block, Christoph Hellwig, Bart Van Assche, Hannes Reinecke,
Omar Sandoval, Ming Lei
Since the srcu structure is rather large (184 bytes on an x86-64
system with kernel debugging disabled), only allocate it if needed.
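The sizing trick this patch relies on — a zero-length array as the struct's last member, with room for it added to the allocation only when the queue is blocking — can be sketched outside the kernel like this (the struct, field names, and the 184-byte size are illustrative stand-ins, not the real blk-mq definitions):

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-in for struct srcu_struct; only the size matters here. */
struct fake_srcu { char state[184]; };

struct hw_ctx {
	unsigned long flags;
	int queue_num;
	/* Must be the last member: space for it exists only when the
	 * F_BLOCKING flag was set at allocation time. */
	struct fake_srcu srcu[];
};

#define F_BLOCKING (1UL << 0)

/* Mirror of blk_mq_hw_ctx_size(): append the srcu area only if needed.
 * (The kernel patch additionally BUILD_BUG_ONs that the trailing
 * member starts exactly at sizeof(struct blk_mq_hw_ctx).) */
static size_t hw_ctx_size(unsigned long flags)
{
	size_t sz = sizeof(struct hw_ctx);

	if (flags & F_BLOCKING)
		sz += sizeof(struct fake_srcu);
	return sz;
}

static struct hw_ctx *hw_ctx_alloc(unsigned long flags)
{
	struct hw_ctx *h = calloc(1, hw_ctx_size(flags));

	if (h)
		h->flags = flags;
	return h;
}
```

Non-blocking queues thus save sizeof(struct srcu_struct) per hardware context, at the cost of every srcu access going through the (always valid when BLK_MQ_F_BLOCKING is set) trailing array.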
Reported-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Omar Sandoval <osandov@fb.com>
Cc: Ming Lei <ming.lei@redhat.com>
---
block/blk-mq.c | 30 ++++++++++++++++++++++--------
include/linux/blk-mq.h | 5 +++--
2 files changed, 25 insertions(+), 10 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 4e8b1bc87274..ef64a3ea4e83 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -172,7 +172,7 @@ void blk_mq_quiesce_queue(struct request_queue *q)
queue_for_each_hw_ctx(q, hctx, i) {
if (hctx->flags & BLK_MQ_F_BLOCKING)
- synchronize_srcu(&hctx->queue_rq_srcu);
+ synchronize_srcu(hctx->queue_rq_srcu);
else
rcu = true;
}
@@ -1056,9 +1056,9 @@ static void __blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx)
} else {
might_sleep();
- srcu_idx = srcu_read_lock(&hctx->queue_rq_srcu);
+ srcu_idx = srcu_read_lock(hctx->queue_rq_srcu);
blk_mq_sched_dispatch_requests(hctx);
- srcu_read_unlock(&hctx->queue_rq_srcu, srcu_idx);
+ srcu_read_unlock(hctx->queue_rq_srcu, srcu_idx);
}
}
@@ -1460,9 +1460,9 @@ static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
might_sleep();
- srcu_idx = srcu_read_lock(&hctx->queue_rq_srcu);
+ srcu_idx = srcu_read_lock(hctx->queue_rq_srcu);
__blk_mq_try_issue_directly(hctx, rq, cookie, true);
- srcu_read_unlock(&hctx->queue_rq_srcu, srcu_idx);
+ srcu_read_unlock(hctx->queue_rq_srcu, srcu_idx);
}
}
@@ -1806,7 +1806,7 @@ static void blk_mq_exit_hctx(struct request_queue *q,
set->ops->exit_hctx(hctx, hctx_idx);
if (hctx->flags & BLK_MQ_F_BLOCKING)
- cleanup_srcu_struct(&hctx->queue_rq_srcu);
+ cleanup_srcu_struct(hctx->queue_rq_srcu);
blk_mq_remove_cpuhp(hctx);
blk_free_flush_queue(hctx->fq);
@@ -1879,7 +1879,7 @@ static int blk_mq_init_hctx(struct request_queue *q,
goto free_fq;
if (hctx->flags & BLK_MQ_F_BLOCKING)
- init_srcu_struct(&hctx->queue_rq_srcu);
+ init_srcu_struct(hctx->queue_rq_srcu);
blk_mq_debugfs_register_hctx(q, hctx);
@@ -2154,6 +2154,20 @@ struct request_queue *blk_mq_init_queue(struct blk_mq_tag_set *set)
}
EXPORT_SYMBOL(blk_mq_init_queue);
+static int blk_mq_hw_ctx_size(struct blk_mq_tag_set *tag_set)
+{
+ int hw_ctx_size = sizeof(struct blk_mq_hw_ctx);
+
+ BUILD_BUG_ON(ALIGN(offsetof(struct blk_mq_hw_ctx, queue_rq_srcu),
+ __alignof__(struct blk_mq_hw_ctx)) !=
+ sizeof(struct blk_mq_hw_ctx));
+
+ if (tag_set->flags & BLK_MQ_F_BLOCKING)
+ hw_ctx_size += sizeof(struct srcu_struct);
+
+ return hw_ctx_size;
+}
+
static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set,
struct request_queue *q)
{
@@ -2168,7 +2182,7 @@ static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set,
continue;
node = blk_mq_hw_queue_to_node(q->mq_map, i);
- hctxs[i] = kzalloc_node(sizeof(struct blk_mq_hw_ctx),
+ hctxs[i] = kzalloc_node(blk_mq_hw_ctx_size(set),
GFP_KERNEL, node);
if (!hctxs[i])
break;
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index fcd641032f8d..c534ec64e214 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -39,8 +39,6 @@ struct blk_mq_hw_ctx {
struct blk_mq_tags *tags;
struct blk_mq_tags *sched_tags;
- struct srcu_struct queue_rq_srcu;
-
unsigned long queued;
unsigned long run;
#define BLK_MQ_MAX_DISPATCH_ORDER 7
@@ -62,6 +60,9 @@ struct blk_mq_hw_ctx {
struct dentry *debugfs_dir;
struct dentry *sched_debugfs_dir;
#endif
+
+ /* Must be the last member - see also blk_mq_hw_ctx_size(). */
+ struct srcu_struct queue_rq_srcu[0];
};
struct blk_mq_tag_set {
--
2.12.2
* [PATCH v3 02/12] block: Make request operation type argument declarations consistent
2017-06-08 17:33 [PATCH v3 00/12] More patches for kernel v4.13 Bart Van Assche
2017-06-08 17:33 ` [PATCH v3 01/12] blk-mq: Reduce blk_mq_hw_ctx size Bart Van Assche
@ 2017-06-08 17:33 ` Bart Van Assche
2017-06-08 17:33 ` [PATCH v3 03/12] block: Introduce request_queue.initialize_rq_fn() Bart Van Assche
` (10 subsequent siblings)
12 siblings, 0 replies; 16+ messages in thread
From: Bart Van Assche @ 2017-06-08 17:33 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-block, Christoph Hellwig, Bart Van Assche, Hannes Reinecke,
Omar Sandoval, Ming Lei
Instead of declaring the second argument of blk_*_get_request()
as int and passing it to functions that expect an unsigned int,
declare that second argument as unsigned int. Also, for
consistency, rename that second argument from 'rw' to 'op'.
This patch does not change any functionality.
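Why the signedness mismatch is worth fixing even though behavior is unchanged for valid inputs: with the old int declaration, a negative value converts silently to a very large unsigned int at the call boundary. A minimal illustration (the function names here are made up for the demo, not kernel APIs):

```c
#include <assert.h>
#include <limits.h>

/* The callee — like the functions blk_get_request() forwards to —
 * expects an unsigned operation code. */
static unsigned int consume_op(unsigned int op)
{
	return op;
}

/* Old-style declaration: a signed 'rw' undergoes an implicit
 * int -> unsigned int conversion at the inner call. */
static unsigned int old_api(int rw)
{
	return consume_op(rw);
}

/* New-style declaration: signedness is consistent end to end,
 * so -Wsign-conversion stays quiet and no wrapping can occur. */
static unsigned int new_api(unsigned int op)
{
	return consume_op(op);
}
```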
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Omar Sandoval <osandov@fb.com>
Cc: Ming Lei <ming.lei@redhat.com>
---
block/blk-core.c | 13 +++++++------
block/blk-mq.c | 10 +++++-----
include/linux/blk-mq.h | 6 +++---
include/linux/blkdev.h | 3 ++-
4 files changed, 17 insertions(+), 15 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index a7421b772d0e..3bc431a77309 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1283,8 +1283,8 @@ static struct request *get_request(struct request_queue *q, unsigned int op,
goto retry;
}
-static struct request *blk_old_get_request(struct request_queue *q, int rw,
- gfp_t gfp_mask)
+static struct request *blk_old_get_request(struct request_queue *q,
+ unsigned int op, gfp_t gfp_mask)
{
struct request *rq;
@@ -1292,7 +1292,7 @@ static struct request *blk_old_get_request(struct request_queue *q, int rw,
create_io_context(gfp_mask, q->node);
spin_lock_irq(q->queue_lock);
- rq = get_request(q, rw, NULL, gfp_mask);
+ rq = get_request(q, op, NULL, gfp_mask);
if (IS_ERR(rq)) {
spin_unlock_irq(q->queue_lock);
return rq;
@@ -1305,14 +1305,15 @@ static struct request *blk_old_get_request(struct request_queue *q, int rw,
return rq;
}
-struct request *blk_get_request(struct request_queue *q, int rw, gfp_t gfp_mask)
+struct request *blk_get_request(struct request_queue *q, unsigned int op,
+ gfp_t gfp_mask)
{
if (q->mq_ops)
- return blk_mq_alloc_request(q, rw,
+ return blk_mq_alloc_request(q, op,
(gfp_mask & __GFP_DIRECT_RECLAIM) ?
0 : BLK_MQ_REQ_NOWAIT);
else
- return blk_old_get_request(q, rw, gfp_mask);
+ return blk_old_get_request(q, op, gfp_mask);
}
EXPORT_SYMBOL(blk_get_request);
diff --git a/block/blk-mq.c b/block/blk-mq.c
index ef64a3ea4e83..8cd5261ca1ab 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -278,7 +278,7 @@ struct request *__blk_mq_alloc_request(struct blk_mq_alloc_data *data,
}
EXPORT_SYMBOL_GPL(__blk_mq_alloc_request);
-struct request *blk_mq_alloc_request(struct request_queue *q, int rw,
+struct request *blk_mq_alloc_request(struct request_queue *q, unsigned int op,
unsigned int flags)
{
struct blk_mq_alloc_data alloc_data = { .flags = flags };
@@ -289,7 +289,7 @@ struct request *blk_mq_alloc_request(struct request_queue *q, int rw,
if (ret)
return ERR_PTR(ret);
- rq = blk_mq_sched_get_request(q, NULL, rw, &alloc_data);
+ rq = blk_mq_sched_get_request(q, NULL, op, &alloc_data);
blk_mq_put_ctx(alloc_data.ctx);
blk_queue_exit(q);
@@ -304,8 +304,8 @@ struct request *blk_mq_alloc_request(struct request_queue *q, int rw,
}
EXPORT_SYMBOL(blk_mq_alloc_request);
-struct request *blk_mq_alloc_request_hctx(struct request_queue *q, int rw,
- unsigned int flags, unsigned int hctx_idx)
+struct request *blk_mq_alloc_request_hctx(struct request_queue *q,
+ unsigned int op, unsigned int flags, unsigned int hctx_idx)
{
struct blk_mq_alloc_data alloc_data = { .flags = flags };
struct request *rq;
@@ -340,7 +340,7 @@ struct request *blk_mq_alloc_request_hctx(struct request_queue *q, int rw,
cpu = cpumask_first(alloc_data.hctx->cpumask);
alloc_data.ctx = __blk_mq_get_ctx(q, cpu);
- rq = blk_mq_sched_get_request(q, NULL, rw, &alloc_data);
+ rq = blk_mq_sched_get_request(q, NULL, op, &alloc_data);
blk_queue_exit(q);
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index c534ec64e214..a4759fd34e7e 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -205,10 +205,10 @@ enum {
BLK_MQ_REQ_INTERNAL = (1 << 2), /* allocate internal/sched tag */
};
-struct request *blk_mq_alloc_request(struct request_queue *q, int rw,
+struct request *blk_mq_alloc_request(struct request_queue *q, unsigned int op,
unsigned int flags);
-struct request *blk_mq_alloc_request_hctx(struct request_queue *q, int op,
- unsigned int flags, unsigned int hctx_idx);
+struct request *blk_mq_alloc_request_hctx(struct request_queue *q,
+ unsigned int op, unsigned int flags, unsigned int hctx_idx);
struct request *blk_mq_tag_to_rq(struct blk_mq_tags *tags, unsigned int tag);
enum {
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 019f18c65098..6c4235018b49 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -932,7 +932,8 @@ extern void blk_rq_init(struct request_queue *q, struct request *rq);
extern void blk_init_request_from_bio(struct request *req, struct bio *bio);
extern void blk_put_request(struct request *);
extern void __blk_put_request(struct request_queue *, struct request *);
-extern struct request *blk_get_request(struct request_queue *, int, gfp_t);
+extern struct request *blk_get_request(struct request_queue *, unsigned int op,
+ gfp_t gfp_mask);
extern void blk_requeue_request(struct request_queue *, struct request *);
extern int blk_lld_busy(struct request_queue *q);
extern int blk_rq_prep_clone(struct request *rq, struct request *rq_src,
--
2.12.2
* [PATCH v3 03/12] block: Introduce request_queue.initialize_rq_fn()
2017-06-08 17:33 [PATCH v3 00/12] More patches for kernel v4.13 Bart Van Assche
2017-06-08 17:33 ` [PATCH v3 01/12] blk-mq: Reduce blk_mq_hw_ctx size Bart Van Assche
2017-06-08 17:33 ` [PATCH v3 02/12] block: Make request operation type argument declarations consistent Bart Van Assche
@ 2017-06-08 17:33 ` Bart Van Assche
2017-06-08 17:33 ` [PATCH v3 04/12] block: Make most scsi_req_init() calls implicit Bart Van Assche
` (9 subsequent siblings)
12 siblings, 0 replies; 16+ messages in thread
From: Bart Van Assche @ 2017-06-08 17:33 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-block, Christoph Hellwig, Bart Van Assche, Hannes Reinecke,
Omar Sandoval
Several block drivers need to initialize the driver-private request
data after having called blk_get_request() and before .prep_rq_fn()
is called, e.g. when submitting a REQ_OP_SCSI_* request. Avoid
repeating that initialization code after every blk_get_request()
call by adding new callback functions to struct request_queue and
to struct blk_mq_ops.
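The mechanism — a per-queue hook run from the central allocation path, so callers no longer repeat driver-private setup — can be sketched in plain userspace C (the structs and fields below are illustrative stand-ins, not the kernel definitions):

```c
#include <assert.h>
#include <stdlib.h>

struct request {
	int driver_data_initialized;
};

struct request_queue {
	/* Called from inside get_request(), after allocation succeeds
	 * and before the request is returned to the caller. */
	void (*initialize_rq_fn)(struct request *rq);
};

/* Central allocator: runs the queue's hook so every caller gets the
 * driver-private initialization without an explicit call of its own. */
static struct request *get_request(struct request_queue *q)
{
	struct request *rq = calloc(1, sizeof(*rq));

	if (rq && q->initialize_rq_fn)
		q->initialize_rq_fn(rq);
	return rq;
}

/* Plays the role of scsi_initialize_rq() in the real patch. */
static void demo_initialize_rq(struct request *rq)
{
	rq->driver_data_initialized = 1;
}
```

Queues that set no hook keep the old behavior; queues that do set one get the setup applied uniformly, which is what lets later patches in the series drop the explicit scsi_req_init() calls.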
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Omar Sandoval <osandov@fb.com>
---
block/blk-core.c | 17 +++++++++++++++--
include/linux/blk-mq.h | 2 ++
include/linux/blkdev.h | 4 ++++
3 files changed, 21 insertions(+), 2 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index 3bc431a77309..86fc08898fac 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1308,12 +1308,25 @@ static struct request *blk_old_get_request(struct request_queue *q,
struct request *blk_get_request(struct request_queue *q, unsigned int op,
gfp_t gfp_mask)
{
+ struct request *req;
+
if (q->mq_ops)
- return blk_mq_alloc_request(q, op,
+ req = blk_mq_alloc_request(q, op,
(gfp_mask & __GFP_DIRECT_RECLAIM) ?
0 : BLK_MQ_REQ_NOWAIT);
else
- return blk_old_get_request(q, op, gfp_mask);
+ req = blk_old_get_request(q, op, gfp_mask);
+
+ if (IS_ERR(req))
+ goto out;
+
+ if (q->mq_ops && q->mq_ops->initialize_rq_fn)
+ q->mq_ops->initialize_rq_fn(req);
+ else if (!q->mq_ops && q->initialize_rq_fn)
+ q->initialize_rq_fn(req);
+
+out:
+ return req;
}
EXPORT_SYMBOL(blk_get_request);
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index a4759fd34e7e..61125d200492 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -143,6 +143,8 @@ struct blk_mq_ops {
init_request_fn *init_request;
exit_request_fn *exit_request;
reinit_request_fn *reinit_request;
+ /* Called from inside blk_get_request() */
+ void (*initialize_rq_fn)(struct request *rq);
map_queues_fn *map_queues;
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 6c4235018b49..cbc0028290e4 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -410,8 +410,12 @@ struct request_queue {
rq_timed_out_fn *rq_timed_out_fn;
dma_drain_needed_fn *dma_drain_needed;
lld_busy_fn *lld_busy_fn;
+ /* Called just after a request is allocated */
init_rq_fn *init_rq_fn;
+ /* Called just before a request is freed */
exit_rq_fn *exit_rq_fn;
+ /* Called from inside blk_get_request() */
+ void (*initialize_rq_fn)(struct request *rq);
const struct blk_mq_ops *mq_ops;
--
2.12.2
* [PATCH v3 04/12] block: Make most scsi_req_init() calls implicit
2017-06-08 17:33 [PATCH v3 00/12] More patches for kernel v4.13 Bart Van Assche
` (2 preceding siblings ...)
2017-06-08 17:33 ` [PATCH v3 03/12] block: Introduce request_queue.initialize_rq_fn() Bart Van Assche
@ 2017-06-08 17:33 ` Bart Van Assche
2017-06-08 17:33 ` [PATCH v3 05/12] block: Change argument type of scsi_req_init() Bart Van Assche
` (8 subsequent siblings)
12 siblings, 0 replies; 16+ messages in thread
From: Bart Van Assche @ 2017-06-08 17:33 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-block, Christoph Hellwig, Bart Van Assche, Hannes Reinecke,
Omar Sandoval, Nicholas Bellinger
Instead of explicitly calling scsi_req_init() after blk_get_request(),
call that function from inside blk_get_request(). Add an
.initialize_rq_fn() callback function to the block drivers that need
it. Merge the IDE .init_rq_fn() function into .initialize_rq_fn()
because it is too small to keep as a separate function. Keep the
scsi_req_init() call in ide_prep_sense() because it follows a
blk_rq_init() call.
References: commit 82ed4db499b8 ("block: split scsi_request out of struct request")
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Omar Sandoval <osandov@fb.com>
Cc: Nicholas Bellinger <nab@linux-iscsi.org>
---
block/bsg.c | 1 -
block/scsi_ioctl.c | 3 ---
drivers/block/pktcdvd.c | 1 -
drivers/cdrom/cdrom.c | 1 -
drivers/ide/ide-atapi.c | 1 -
drivers/ide/ide-cd.c | 1 -
drivers/ide/ide-cd_ioctl.c | 1 -
drivers/ide/ide-devsets.c | 1 -
drivers/ide/ide-disk.c | 1 -
drivers/ide/ide-ioctls.c | 2 --
drivers/ide/ide-park.c | 2 --
drivers/ide/ide-pm.c | 2 --
drivers/ide/ide-probe.c | 6 +++---
drivers/ide/ide-tape.c | 1 -
drivers/ide/ide-taskfile.c | 1 -
drivers/scsi/osd/osd_initiator.c | 2 --
drivers/scsi/osst.c | 1 -
drivers/scsi/scsi_error.c | 1 -
drivers/scsi/scsi_lib.c | 10 +++++++++-
drivers/scsi/scsi_transport_sas.c | 6 ++++++
drivers/scsi/sg.c | 2 --
drivers/scsi/st.c | 1 -
drivers/target/target_core_pscsi.c | 2 --
fs/nfsd/blocklayout.c | 1 -
24 files changed, 18 insertions(+), 33 deletions(-)
diff --git a/block/bsg.c b/block/bsg.c
index 40db8ff4c618..84ec1b19d516 100644
--- a/block/bsg.c
+++ b/block/bsg.c
@@ -236,7 +236,6 @@ bsg_map_hdr(struct bsg_device *bd, struct sg_io_v4 *hdr, fmode_t has_write_perm)
rq = blk_get_request(q, op, GFP_KERNEL);
if (IS_ERR(rq))
return rq;
- scsi_req_init(rq);
ret = blk_fill_sgv4_hdr_rq(q, rq, hdr, bd, has_write_perm);
if (ret)
diff --git a/block/scsi_ioctl.c b/block/scsi_ioctl.c
index 4a294a5f7fab..f96c51f5df40 100644
--- a/block/scsi_ioctl.c
+++ b/block/scsi_ioctl.c
@@ -326,7 +326,6 @@ static int sg_io(struct request_queue *q, struct gendisk *bd_disk,
if (IS_ERR(rq))
return PTR_ERR(rq);
req = scsi_req(rq);
- scsi_req_init(rq);
if (hdr->cmd_len > BLK_MAX_CDB) {
req->cmd = kzalloc(hdr->cmd_len, GFP_KERNEL);
@@ -456,7 +455,6 @@ int sg_scsi_ioctl(struct request_queue *q, struct gendisk *disk, fmode_t mode,
goto error_free_buffer;
}
req = scsi_req(rq);
- scsi_req_init(rq);
cmdlen = COMMAND_SIZE(opcode);
@@ -542,7 +540,6 @@ static int __blk_send_generic(struct request_queue *q, struct gendisk *bd_disk,
rq = blk_get_request(q, REQ_OP_SCSI_OUT, __GFP_RECLAIM);
if (IS_ERR(rq))
return PTR_ERR(rq);
- scsi_req_init(rq);
rq->timeout = BLK_DEFAULT_SG_TIMEOUT;
scsi_req(rq)->cmd[0] = cmd;
scsi_req(rq)->cmd[4] = data;
diff --git a/drivers/block/pktcdvd.c b/drivers/block/pktcdvd.c
index 42e3c880a8a5..2ea332c9438a 100644
--- a/drivers/block/pktcdvd.c
+++ b/drivers/block/pktcdvd.c
@@ -707,7 +707,6 @@ static int pkt_generic_packet(struct pktcdvd_device *pd, struct packet_command *
REQ_OP_SCSI_OUT : REQ_OP_SCSI_IN, __GFP_RECLAIM);
if (IS_ERR(rq))
return PTR_ERR(rq);
- scsi_req_init(rq);
if (cgc->buflen) {
ret = blk_rq_map_kern(q, rq, cgc->buffer, cgc->buflen,
diff --git a/drivers/cdrom/cdrom.c b/drivers/cdrom/cdrom.c
index ff19cfc587f0..e36d160c458f 100644
--- a/drivers/cdrom/cdrom.c
+++ b/drivers/cdrom/cdrom.c
@@ -2201,7 +2201,6 @@ static int cdrom_read_cdda_bpc(struct cdrom_device_info *cdi, __u8 __user *ubuf,
break;
}
req = scsi_req(rq);
- scsi_req_init(rq);
ret = blk_rq_map_user(q, rq, NULL, ubuf, len, GFP_KERNEL);
if (ret) {
diff --git a/drivers/ide/ide-atapi.c b/drivers/ide/ide-atapi.c
index 5901937284e7..98e78b520417 100644
--- a/drivers/ide/ide-atapi.c
+++ b/drivers/ide/ide-atapi.c
@@ -93,7 +93,6 @@ int ide_queue_pc_tail(ide_drive_t *drive, struct gendisk *disk,
int error;
rq = blk_get_request(drive->queue, REQ_OP_DRV_IN, __GFP_RECLAIM);
- scsi_req_init(rq);
ide_req(rq)->type = ATA_PRIV_MISC;
rq->special = (char *)pc;
diff --git a/drivers/ide/ide-cd.c b/drivers/ide/ide-cd.c
index 07e5ff3a64c3..a14ccb34c923 100644
--- a/drivers/ide/ide-cd.c
+++ b/drivers/ide/ide-cd.c
@@ -438,7 +438,6 @@ int ide_cd_queue_pc(ide_drive_t *drive, const unsigned char *cmd,
rq = blk_get_request(drive->queue,
write ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN, __GFP_RECLAIM);
- scsi_req_init(rq);
memcpy(scsi_req(rq)->cmd, cmd, BLK_MAX_CDB);
ide_req(rq)->type = ATA_PRIV_PC;
rq->rq_flags |= rq_flags;
diff --git a/drivers/ide/ide-cd_ioctl.c b/drivers/ide/ide-cd_ioctl.c
index 55cd736c39c6..9d26c9737e21 100644
--- a/drivers/ide/ide-cd_ioctl.c
+++ b/drivers/ide/ide-cd_ioctl.c
@@ -304,7 +304,6 @@ int ide_cdrom_reset(struct cdrom_device_info *cdi)
int ret;
rq = blk_get_request(drive->queue, REQ_OP_DRV_IN, __GFP_RECLAIM);
- scsi_req_init(rq);
ide_req(rq)->type = ATA_PRIV_MISC;
rq->rq_flags = RQF_QUIET;
blk_execute_rq(drive->queue, cd->disk, rq, 0);
diff --git a/drivers/ide/ide-devsets.c b/drivers/ide/ide-devsets.c
index 9b69c32ee560..ef7c8c43a380 100644
--- a/drivers/ide/ide-devsets.c
+++ b/drivers/ide/ide-devsets.c
@@ -166,7 +166,6 @@ int ide_devset_execute(ide_drive_t *drive, const struct ide_devset *setting,
return setting->set(drive, arg);
rq = blk_get_request(q, REQ_OP_DRV_IN, __GFP_RECLAIM);
- scsi_req_init(rq);
ide_req(rq)->type = ATA_PRIV_MISC;
scsi_req(rq)->cmd_len = 5;
scsi_req(rq)->cmd[0] = REQ_DEVSET_EXEC;
diff --git a/drivers/ide/ide-disk.c b/drivers/ide/ide-disk.c
index 7c06237f3479..241983da5fc4 100644
--- a/drivers/ide/ide-disk.c
+++ b/drivers/ide/ide-disk.c
@@ -478,7 +478,6 @@ static int set_multcount(ide_drive_t *drive, int arg)
return -EBUSY;
rq = blk_get_request(drive->queue, REQ_OP_DRV_IN, __GFP_RECLAIM);
- scsi_req_init(rq);
ide_req(rq)->type = ATA_PRIV_TASKFILE;
drive->mult_req = arg;
diff --git a/drivers/ide/ide-ioctls.c b/drivers/ide/ide-ioctls.c
index 8c0d17297a7a..3661abb16a5f 100644
--- a/drivers/ide/ide-ioctls.c
+++ b/drivers/ide/ide-ioctls.c
@@ -126,7 +126,6 @@ static int ide_cmd_ioctl(ide_drive_t *drive, unsigned long arg)
struct request *rq;
rq = blk_get_request(drive->queue, REQ_OP_DRV_IN, __GFP_RECLAIM);
- scsi_req_init(rq);
ide_req(rq)->type = ATA_PRIV_TASKFILE;
blk_execute_rq(drive->queue, NULL, rq, 0);
err = scsi_req(rq)->result ? -EIO : 0;
@@ -224,7 +223,6 @@ static int generic_drive_reset(ide_drive_t *drive)
int ret = 0;
rq = blk_get_request(drive->queue, REQ_OP_DRV_IN, __GFP_RECLAIM);
- scsi_req_init(rq);
ide_req(rq)->type = ATA_PRIV_MISC;
scsi_req(rq)->cmd_len = 1;
scsi_req(rq)->cmd[0] = REQ_DRIVE_RESET;
diff --git a/drivers/ide/ide-park.c b/drivers/ide/ide-park.c
index 94e3107f59b9..1f264d5d3f3f 100644
--- a/drivers/ide/ide-park.c
+++ b/drivers/ide/ide-park.c
@@ -32,7 +32,6 @@ static void issue_park_cmd(ide_drive_t *drive, unsigned long timeout)
spin_unlock_irq(&hwif->lock);
rq = blk_get_request(q, REQ_OP_DRV_IN, __GFP_RECLAIM);
- scsi_req_init(rq);
scsi_req(rq)->cmd[0] = REQ_PARK_HEADS;
scsi_req(rq)->cmd_len = 1;
ide_req(rq)->type = ATA_PRIV_MISC;
@@ -48,7 +47,6 @@ static void issue_park_cmd(ide_drive_t *drive, unsigned long timeout)
* timeout has expired, so power management will be reenabled.
*/
rq = blk_get_request(q, REQ_OP_DRV_IN, GFP_NOWAIT);
- scsi_req_init(rq);
if (IS_ERR(rq))
goto out;
diff --git a/drivers/ide/ide-pm.c b/drivers/ide/ide-pm.c
index 0977fc1f40ce..cfe3c2d7db7f 100644
--- a/drivers/ide/ide-pm.c
+++ b/drivers/ide/ide-pm.c
@@ -19,7 +19,6 @@ int generic_ide_suspend(struct device *dev, pm_message_t mesg)
memset(&rqpm, 0, sizeof(rqpm));
rq = blk_get_request(drive->queue, REQ_OP_DRV_IN, __GFP_RECLAIM);
- scsi_req_init(rq);
ide_req(rq)->type = ATA_PRIV_PM_SUSPEND;
rq->special = &rqpm;
rqpm.pm_step = IDE_PM_START_SUSPEND;
@@ -91,7 +90,6 @@ int generic_ide_resume(struct device *dev)
memset(&rqpm, 0, sizeof(rqpm));
rq = blk_get_request(drive->queue, REQ_OP_DRV_IN, __GFP_RECLAIM);
- scsi_req_init(rq);
ide_req(rq)->type = ATA_PRIV_PM_RESUME;
rq->rq_flags |= RQF_PREEMPT;
rq->special = &rqpm;
diff --git a/drivers/ide/ide-probe.c b/drivers/ide/ide-probe.c
index b3f85250dea9..c60e5ffc9231 100644
--- a/drivers/ide/ide-probe.c
+++ b/drivers/ide/ide-probe.c
@@ -741,12 +741,12 @@ static void ide_port_tune_devices(ide_hwif_t *hwif)
}
}
-static int ide_init_rq(struct request_queue *q, struct request *rq, gfp_t gfp)
+static void ide_initialize_rq(struct request *rq)
{
struct ide_request *req = blk_mq_rq_to_pdu(rq);
+ scsi_req_init(rq);
req->sreq.sense = req->sense;
- return 0;
}
/*
@@ -771,7 +771,7 @@ static int ide_init_queue(ide_drive_t *drive)
return 1;
q->request_fn = do_ide_request;
- q->init_rq_fn = ide_init_rq;
+ q->initialize_rq_fn = ide_initialize_rq;
q->cmd_size = sizeof(struct ide_request);
queue_flag_set_unlocked(QUEUE_FLAG_SCSI_PASSTHROUGH, q);
if (blk_init_allocated_queue(q) < 0) {
diff --git a/drivers/ide/ide-tape.c b/drivers/ide/ide-tape.c
index a0651f948b76..370fd39dce94 100644
--- a/drivers/ide/ide-tape.c
+++ b/drivers/ide/ide-tape.c
@@ -855,7 +855,6 @@ static int idetape_queue_rw_tail(ide_drive_t *drive, int cmd, int size)
BUG_ON(size < 0 || size % tape->blk_size);
rq = blk_get_request(drive->queue, REQ_OP_DRV_IN, __GFP_RECLAIM);
- scsi_req_init(rq);
ide_req(rq)->type = ATA_PRIV_MISC;
scsi_req(rq)->cmd[13] = cmd;
rq->rq_disk = tape->disk;
diff --git a/drivers/ide/ide-taskfile.c b/drivers/ide/ide-taskfile.c
index d71199d23c9e..d915a8eba557 100644
--- a/drivers/ide/ide-taskfile.c
+++ b/drivers/ide/ide-taskfile.c
@@ -433,7 +433,6 @@ int ide_raw_taskfile(ide_drive_t *drive, struct ide_cmd *cmd, u8 *buf,
rq = blk_get_request(drive->queue,
(cmd->tf_flags & IDE_TFLAG_WRITE) ?
REQ_OP_DRV_OUT : REQ_OP_DRV_IN, __GFP_RECLAIM);
- scsi_req_init(rq);
ide_req(rq)->type = ATA_PRIV_TASKFILE;
/*
diff --git a/drivers/scsi/osd/osd_initiator.c b/drivers/scsi/osd/osd_initiator.c
index 8a1b94816419..d974e7f1d2f1 100644
--- a/drivers/scsi/osd/osd_initiator.c
+++ b/drivers/scsi/osd/osd_initiator.c
@@ -1572,7 +1572,6 @@ static struct request *_make_request(struct request_queue *q, bool has_write,
flags);
if (IS_ERR(req))
return req;
- scsi_req_init(req);
for_each_bio(bio) {
struct bio *bounce_bio = bio;
@@ -1617,7 +1616,6 @@ static int _init_blk_request(struct osd_request *or,
ret = PTR_ERR(req);
goto out;
}
- scsi_req_init(req);
or->in.req = or->request->next_rq = req;
}
} else if (has_in)
diff --git a/drivers/scsi/osst.c b/drivers/scsi/osst.c
index 67cbed92f07d..22080148c6a8 100644
--- a/drivers/scsi/osst.c
+++ b/drivers/scsi/osst.c
@@ -373,7 +373,6 @@ static int osst_execute(struct osst_request *SRpnt, const unsigned char *cmd,
return DRIVER_ERROR << 24;
rq = scsi_req(req);
- scsi_req_init(req);
req->rq_flags |= RQF_QUIET;
SRpnt->bio = NULL;
diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c
index ecc07dab893d..b1ff26ac68c1 100644
--- a/drivers/scsi/scsi_error.c
+++ b/drivers/scsi/scsi_error.c
@@ -1903,7 +1903,6 @@ static void scsi_eh_lock_door(struct scsi_device *sdev)
if (IS_ERR(req))
return;
rq = scsi_req(req);
- scsi_req_init(req);
rq->cmd[0] = ALLOW_MEDIUM_REMOVAL;
rq->cmd[1] = 0;
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 884aaa84c2dd..378cf44a97fc 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -250,7 +250,6 @@ int scsi_execute(struct scsi_device *sdev, const unsigned char *cmd,
if (IS_ERR(req))
return ret;
rq = scsi_req(req);
- scsi_req_init(req);
if (bufflen && blk_rq_map_kern(sdev->request_queue, req,
buffer, bufflen, __GFP_RECLAIM))
@@ -1134,6 +1133,13 @@ int scsi_init_io(struct scsi_cmnd *cmd)
}
EXPORT_SYMBOL(scsi_init_io);
+/* Called from inside blk_get_request() */
+static void scsi_initialize_rq(struct request *rq)
+{
+ scsi_req_init(rq);
+}
+
+/* Called after a request has been started. */
void scsi_init_command(struct scsi_device *dev, struct scsi_cmnd *cmd)
{
void *buf = cmd->sense_buffer;
@@ -2141,6 +2147,7 @@ struct request_queue *scsi_alloc_queue(struct scsi_device *sdev)
q->request_fn = scsi_request_fn;
q->init_rq_fn = scsi_init_rq;
q->exit_rq_fn = scsi_exit_rq;
+ q->initialize_rq_fn = scsi_initialize_rq;
if (blk_init_allocated_queue(q) < 0) {
blk_cleanup_queue(q);
@@ -2165,6 +2172,7 @@ static const struct blk_mq_ops scsi_mq_ops = {
#endif
.init_request = scsi_init_request,
.exit_request = scsi_exit_request,
+ .initialize_rq_fn = scsi_initialize_rq,
.map_queues = scsi_map_queues,
};
diff --git a/drivers/scsi/scsi_transport_sas.c b/drivers/scsi/scsi_transport_sas.c
index d16414bfe2ef..f5449da6fcad 100644
--- a/drivers/scsi/scsi_transport_sas.c
+++ b/drivers/scsi/scsi_transport_sas.c
@@ -213,6 +213,11 @@ static void sas_host_release(struct device *dev)
blk_cleanup_queue(q);
}
+static void sas_initialize_rq(struct request *rq)
+{
+ scsi_req_init(rq);
+}
+
static int sas_bsg_initialize(struct Scsi_Host *shost, struct sas_rphy *rphy)
{
struct request_queue *q;
@@ -230,6 +235,7 @@ static int sas_bsg_initialize(struct Scsi_Host *shost, struct sas_rphy *rphy)
q = blk_alloc_queue(GFP_KERNEL);
if (!q)
return -ENOMEM;
+ q->initialize_rq_fn = sas_initialize_rq;
q->cmd_size = sizeof(struct scsi_request);
if (rphy) {
diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c
index 82c33a6edbea..c3215cec0c82 100644
--- a/drivers/scsi/sg.c
+++ b/drivers/scsi/sg.c
@@ -1732,8 +1732,6 @@ sg_start_req(Sg_request *srp, unsigned char *cmd)
}
req = scsi_req(rq);
- scsi_req_init(rq);
-
if (hp->cmd_len > BLK_MAX_CDB)
req->cmd = long_cmdp;
memcpy(req->cmd, cmd, hp->cmd_len);
diff --git a/drivers/scsi/st.c b/drivers/scsi/st.c
index 1ea34d6f5437..dc4d2b9e15a0 100644
--- a/drivers/scsi/st.c
+++ b/drivers/scsi/st.c
@@ -549,7 +549,6 @@ static int st_scsi_execute(struct st_request *SRpnt, const unsigned char *cmd,
if (IS_ERR(req))
return DRIVER_ERROR << 24;
rq = scsi_req(req);
- scsi_req_init(req);
req->rq_flags |= RQF_QUIET;
mdata->null_mapped = 1;
diff --git a/drivers/target/target_core_pscsi.c b/drivers/target/target_core_pscsi.c
index 3e4abb13f8ea..d4572639949f 100644
--- a/drivers/target/target_core_pscsi.c
+++ b/drivers/target/target_core_pscsi.c
@@ -992,8 +992,6 @@ pscsi_execute_cmd(struct se_cmd *cmd)
goto fail;
}
- scsi_req_init(req);
-
if (sgl) {
ret = pscsi_map_sg(cmd, sgl, sgl_nents, req);
if (ret)
diff --git a/fs/nfsd/blocklayout.c b/fs/nfsd/blocklayout.c
index 47ed19c53f2e..c862c2489df0 100644
--- a/fs/nfsd/blocklayout.c
+++ b/fs/nfsd/blocklayout.c
@@ -232,7 +232,6 @@ static int nfsd4_scsi_identify_device(struct block_device *bdev,
goto out_free_buf;
}
req = scsi_req(rq);
- scsi_req_init(rq);
error = blk_rq_map_kern(q, rq, buf, bufflen, GFP_KERNEL);
if (error)
--
2.12.2
^ permalink raw reply related [flat|nested] 16+ messages in thread
* [PATCH v3 05/12] block: Change argument type of scsi_req_init()
2017-06-08 17:33 [PATCH v3 00/12] More patches for kernel v4.13 Bart Van Assche
` (3 preceding siblings ...)
2017-06-08 17:33 ` [PATCH v3 04/12] block: Make most scsi_req_init() calls implicit Bart Van Assche
@ 2017-06-08 17:33 ` Bart Van Assche
2017-06-08 17:33 ` [PATCH v3 06/12] blk-mq: Initialize a request before assigning a tag Bart Van Assche
` (7 subsequent siblings)
12 siblings, 0 replies; 16+ messages in thread
From: Bart Van Assche @ 2017-06-08 17:33 UTC (permalink / raw)
To: Jens Axboe; +Cc: linux-block, Christoph Hellwig, Bart Van Assche
Since scsi_req_init() only touches members of struct scsi_request, change
its argument type to struct scsi_request *.
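The container-to-member narrowing above can be sketched in plain userspace C. The struct and function names below are hypothetical stand-ins for struct request, struct scsi_request and scsi_req_init(), not the kernel types: the point is only that the helper now receives the member it actually touches instead of the containing request.

```c
#include <assert.h>
#include <string.h>

/* Stand-in for struct scsi_request: only these fields are initialized. */
struct scsi_request_like {
	unsigned char __cmd[16];
	unsigned char *cmd;
	unsigned short cmd_len;
	int sense_len;
};

/* Stand-in for struct request with an embedded scsi_request PDU. */
struct request_like {
	int tag;
	struct scsi_request_like sreq;
};

/* After the change the helper takes the member directly; callers pass
 * &rq->sreq (or the PDU) rather than the whole request. */
static void scsi_req_init_like(struct scsi_request_like *req)
{
	memset(req->__cmd, 0, sizeof(req->__cmd));
	req->cmd = req->__cmd;
	req->cmd_len = sizeof(req->__cmd);
	req->sense_len = 0;
}
```

The narrower signature also makes it impossible to pass a request whose PDU is not a scsi_request, which the old `struct request *` signature silently allowed.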
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
---
block/scsi_ioctl.c | 10 +++++++---
drivers/ide/ide-atapi.c | 2 +-
drivers/ide/ide-probe.c | 2 +-
drivers/scsi/scsi_lib.c | 4 +++-
drivers/scsi/scsi_transport_sas.c | 2 +-
include/scsi/scsi_request.h | 2 +-
6 files changed, 14 insertions(+), 8 deletions(-)
diff --git a/block/scsi_ioctl.c b/block/scsi_ioctl.c
index f96c51f5df40..7440de44dd85 100644
--- a/block/scsi_ioctl.c
+++ b/block/scsi_ioctl.c
@@ -741,10 +741,14 @@ int scsi_cmd_blk_ioctl(struct block_device *bd, fmode_t mode,
}
EXPORT_SYMBOL(scsi_cmd_blk_ioctl);
-void scsi_req_init(struct request *rq)
+/**
+ * scsi_req_init - initialize certain fields of a scsi_request structure
+ * @req: Pointer to a scsi_request structure.
+ * Initializes .__cmd[], .cmd, .cmd_len and .sense_len but no other members
+ * of struct scsi_request.
+ */
+void scsi_req_init(struct scsi_request *req)
{
- struct scsi_request *req = scsi_req(rq);
-
memset(req->__cmd, 0, sizeof(req->__cmd));
req->cmd = req->__cmd;
req->cmd_len = BLK_MAX_CDB;
diff --git a/drivers/ide/ide-atapi.c b/drivers/ide/ide-atapi.c
index 98e78b520417..5ffecef8b910 100644
--- a/drivers/ide/ide-atapi.c
+++ b/drivers/ide/ide-atapi.c
@@ -199,7 +199,7 @@ void ide_prep_sense(ide_drive_t *drive, struct request *rq)
memset(sense, 0, sizeof(*sense));
blk_rq_init(rq->q, sense_rq);
- scsi_req_init(sense_rq);
+ scsi_req_init(req);
err = blk_rq_map_kern(drive->queue, sense_rq, sense, sense_len,
GFP_NOIO);
diff --git a/drivers/ide/ide-probe.c b/drivers/ide/ide-probe.c
index c60e5ffc9231..01b2adfd8226 100644
--- a/drivers/ide/ide-probe.c
+++ b/drivers/ide/ide-probe.c
@@ -745,7 +745,7 @@ static void ide_initialize_rq(struct request *rq)
{
struct ide_request *req = blk_mq_rq_to_pdu(rq);
- scsi_req_init(rq);
+ scsi_req_init(&req->sreq);
req->sreq.sense = req->sense;
}
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 378cf44a97fc..c5a43a7c960d 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -1136,7 +1136,9 @@ EXPORT_SYMBOL(scsi_init_io);
/* Called from inside blk_get_request() */
static void scsi_initialize_rq(struct request *rq)
{
- scsi_req_init(rq);
+ struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(rq);
+
+ scsi_req_init(&cmd->req);
}
/* Called after a request has been started. */
diff --git a/drivers/scsi/scsi_transport_sas.c b/drivers/scsi/scsi_transport_sas.c
index f5449da6fcad..35598905d785 100644
--- a/drivers/scsi/scsi_transport_sas.c
+++ b/drivers/scsi/scsi_transport_sas.c
@@ -215,7 +215,7 @@ static void sas_host_release(struct device *dev)
static void sas_initialize_rq(struct request *rq)
{
- scsi_req_init(rq);
+ scsi_req_init(scsi_req(rq));
}
static int sas_bsg_initialize(struct Scsi_Host *shost, struct sas_rphy *rphy)
diff --git a/include/scsi/scsi_request.h b/include/scsi/scsi_request.h
index f0c76f9dc285..e0afa445ee4e 100644
--- a/include/scsi/scsi_request.h
+++ b/include/scsi/scsi_request.h
@@ -27,6 +27,6 @@ static inline void scsi_req_free_cmd(struct scsi_request *req)
kfree(req->cmd);
}
-void scsi_req_init(struct request *);
+void scsi_req_init(struct scsi_request *req);
#endif /* _SCSI_SCSI_REQUEST_H */
--
2.12.2
* [PATCH v3 06/12] blk-mq: Initialize a request before assigning a tag
2017-06-08 17:33 [PATCH v3 00/12] More patches for kernel v4.13 Bart Van Assche
` (4 preceding siblings ...)
2017-06-08 17:33 ` [PATCH v3 05/12] block: Change argument type of scsi_req_init() Bart Van Assche
@ 2017-06-08 17:33 ` Bart Van Assche
2017-06-08 17:33 ` [PATCH v3 07/12] block: Add a comment above queue_lockdep_assert_held() Bart Van Assche
` (6 subsequent siblings)
12 siblings, 0 replies; 16+ messages in thread
From: Bart Van Assche @ 2017-06-08 17:33 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-block, Christoph Hellwig, Bart Van Assche, Hannes Reinecke,
Omar Sandoval, Ming Lei
Initialization of blk-mq requests is currently spread out: blk_mq_rq_ctx_init()
is called after a value has already been assigned to .rq_flags, and .rq_flags
is reset in __blk_mq_finish_request() rather than at initialization time.
Call blk_mq_rq_ctx_init() before modifying any struct request members.
Initialize .rq_flags in blk_mq_rq_ctx_init() instead of relying on
__blk_mq_finish_request(). Moving the initialization of .rq_flags is safe
because all changes and tests of .rq_flags occur between blk_get_request()
and finishing a request.
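The ordering change can be illustrated with a minimal userspace sketch (the struct, flag and function names below are illustrative, not the kernel's): once init runs first and owns the full reset of the flags word, later call sites may only OR bits in, exactly as the hunk below changes `rq->rq_flags = RQF_MQ_INFLIGHT` into `|=`.

```c
#include <assert.h>

#define RQF_IO_STAT     (1u << 0)
#define RQF_MQ_INFLIGHT (1u << 1)

struct rq_like {
	unsigned int rq_flags;
	int tag;
};

/* After the patch: ctx init runs first and owns the reset of rq_flags. */
static void rq_ctx_init_like(struct rq_like *rq, int io_stat)
{
	rq->rq_flags = 0;
	if (io_stat)
		rq->rq_flags |= RQF_IO_STAT;
	rq->tag = -1;
}

/* Tag assignment runs afterwards and must use |=, not =, so that flags
 * set during init (here RQF_IO_STAT) are not clobbered. */
static void rq_assign_tag_like(struct rq_like *rq, int tag, int hctx_busy)
{
	if (hctx_busy)
		rq->rq_flags |= RQF_MQ_INFLIGHT;
	rq->tag = tag;
}
```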
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Omar Sandoval <osandov@fb.com>
Cc: Ming Lei <ming.lei@redhat.com>
---
block/blk-mq.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 8cd5261ca1ab..5d9cca62c2f0 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -212,6 +212,7 @@ void blk_mq_rq_ctx_init(struct request_queue *q, struct blk_mq_ctx *ctx,
rq->q = q;
rq->mq_ctx = ctx;
rq->cmd_flags = op;
+ rq->rq_flags = 0;
if (blk_queue_io_stat(q))
rq->rq_flags |= RQF_IO_STAT;
/* do not touch atomic flags, it needs atomic ops against the timer */
@@ -231,7 +232,7 @@ void blk_mq_rq_ctx_init(struct request_queue *q, struct blk_mq_ctx *ctx,
rq->nr_integrity_segments = 0;
#endif
rq->special = NULL;
- /* tag was already set */
+ /* tag will be set by caller */
rq->extra_len = 0;
INIT_LIST_HEAD(&rq->timeout_list);
@@ -257,12 +258,14 @@ struct request *__blk_mq_alloc_request(struct blk_mq_alloc_data *data,
rq = tags->static_rqs[tag];
+ blk_mq_rq_ctx_init(data->q, data->ctx, rq, op);
+
if (data->flags & BLK_MQ_REQ_INTERNAL) {
rq->tag = -1;
rq->internal_tag = tag;
} else {
if (blk_mq_tag_busy(data->hctx)) {
- rq->rq_flags = RQF_MQ_INFLIGHT;
+ rq->rq_flags |= RQF_MQ_INFLIGHT;
atomic_inc(&data->hctx->nr_active);
}
rq->tag = tag;
@@ -270,7 +273,6 @@ struct request *__blk_mq_alloc_request(struct blk_mq_alloc_data *data,
data->hctx->tags->rqs[rq->tag] = rq;
}
- blk_mq_rq_ctx_init(data->q, data->ctx, rq, op);
return rq;
}
@@ -361,7 +363,6 @@ void __blk_mq_finish_request(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
atomic_dec(&hctx->nr_active);
wbt_done(q->rq_wb, &rq->issue_stat);
- rq->rq_flags = 0;
clear_bit(REQ_ATOM_STARTED, &rq->atomic_flags);
clear_bit(REQ_ATOM_POLL_SLEPT, &rq->atomic_flags);
--
2.12.2
* [PATCH v3 07/12] block: Add a comment above queue_lockdep_assert_held()
2017-06-08 17:33 [PATCH v3 00/12] More patches for kernel v4.13 Bart Van Assche
` (5 preceding siblings ...)
2017-06-08 17:33 ` [PATCH v3 06/12] blk-mq: Initialize a request before assigning a tag Bart Van Assche
@ 2017-06-08 17:33 ` Bart Van Assche
2017-06-08 17:33 ` [PATCH v3 08/12] block: Check locking assumptions at runtime Bart Van Assche
` (5 subsequent siblings)
12 siblings, 0 replies; 16+ messages in thread
From: Bart Van Assche @ 2017-06-08 17:33 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-block, Christoph Hellwig, Bart Van Assche, Hannes Reinecke,
Omar Sandoval, Ming Lei
Add a comment above the queue_lockdep_assert_held() macro that
explains the purpose of the q->queue_lock test.
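The rule the new comment documents, namely that a NULL lock pointer means the queue is still private to its creator so the assertion is skipped, can be sketched in userspace C. The types and names below are hypothetical, not the kernel's; an int pointer stands in for the spinlock.

```c
#include <assert.h>
#include <stddef.h>

struct queue_like {
	int *queue_lock;	/* NULL until the queue has been published */
	unsigned long flags;
};

/* Mirrors queue_lockdep_assert_held(): only meaningful once a lock
 * pointer has been installed; before that, no other thread can see
 * the queue, so there is nothing to assert. */
static void queue_assert_held_like(const struct queue_like *q)
{
	if (q->queue_lock)
		assert(*q->queue_lock);	/* caller must have taken it */
}

static void queue_flag_set_like(struct queue_like *q, unsigned long flag)
{
	queue_assert_held_like(q);
	q->flags |= flag;
}
```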
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Omar Sandoval <osandov@fb.com>
Cc: Ming Lei <ming.lei@redhat.com>
---
include/linux/blkdev.h | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index cbc0028290e4..1e73b4df13a9 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -634,6 +634,13 @@ struct request_queue {
(1 << QUEUE_FLAG_SAME_COMP) | \
(1 << QUEUE_FLAG_POLL))
+/*
+ * @q->queue_lock is set while a queue is being initialized. Since we know
+ * that no other threads access the queue object before @q->queue_lock has
+ * been set, it is safe to manipulate queue flags without holding the
+ * queue_lock if @q->queue_lock == NULL. See also blk_alloc_queue_node() and
+ * blk_init_allocated_queue().
+ */
static inline void queue_lockdep_assert_held(struct request_queue *q)
{
if (q->queue_lock)
--
2.12.2
* [PATCH v3 08/12] block: Check locking assumptions at runtime
2017-06-08 17:33 [PATCH v3 00/12] More patches for kernel v4.13 Bart Van Assche
` (6 preceding siblings ...)
2017-06-08 17:33 ` [PATCH v3 07/12] block: Add a comment above queue_lockdep_assert_held() Bart Van Assche
@ 2017-06-08 17:33 ` Bart Van Assche
2017-06-08 17:33 ` [PATCH v3 09/12] block: Document what queue type each function is intended for Bart Van Assche
` (4 subsequent siblings)
12 siblings, 0 replies; 16+ messages in thread
From: Bart Van Assche @ 2017-06-08 17:33 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-block, Christoph Hellwig, Bart Van Assche, Hannes Reinecke,
Omar Sandoval, Ming Lei
Instead of documenting the locking assumptions of most block layer
functions as comments, use lockdep_assert_held() to verify these
assumptions at runtime.
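The idea of turning a "queue lock must be held" comment into a check that fires on misuse looks roughly like this userspace analogue. A tracked flag stands in for lockdep's held-lock bookkeeping; all names are illustrative.

```c
#include <assert.h>

struct queue_like {
	int locked;	/* lockdep stand-in: is the queue lock held? */
	int stopped;
};

static void q_lock(struct queue_like *q)
{
	q->locked = 1;
}

static void q_unlock(struct queue_like *q)
{
	q->locked = 0;
}

/* Was documented as "Queue lock must be held." -- now the requirement
 * is enforced every time the function runs, not just written down. */
static void stop_queue_like(struct queue_like *q)
{
	assert(q->locked);	/* cf. lockdep_assert_held(q->queue_lock) */
	q->stopped = 1;
}
```

Unlike a comment, the assertion keeps working when callers are refactored: a new call path that forgets the lock fails loudly in testing instead of racing silently.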
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Omar Sandoval <osandov@fb.com>
Cc: Ming Lei <ming.lei@redhat.com>
---
block/blk-core.c | 71 +++++++++++++++++++++++++++++++++++------------------
block/blk-flush.c | 8 +++---
block/blk-merge.c | 3 +++
block/blk-tag.c | 15 +++++------
block/blk-timeout.c | 4 ++-
5 files changed, 64 insertions(+), 37 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index 86fc08898fac..6bbdce8b8b6f 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -177,10 +177,12 @@ static void blk_delay_work(struct work_struct *work)
* Description:
* Sometimes queueing needs to be postponed for a little while, to allow
* resources to come back. This function will make sure that queueing is
- * restarted around the specified time. Queue lock must be held.
+ * restarted around the specified time.
*/
void blk_delay_queue(struct request_queue *q, unsigned long msecs)
{
+ lockdep_assert_held(q->queue_lock);
+
if (likely(!blk_queue_dead(q)))
queue_delayed_work(kblockd_workqueue, &q->delay_work,
msecs_to_jiffies(msecs));
@@ -198,6 +200,8 @@ EXPORT_SYMBOL(blk_delay_queue);
**/
void blk_start_queue_async(struct request_queue *q)
{
+ lockdep_assert_held(q->queue_lock);
+
queue_flag_clear(QUEUE_FLAG_STOPPED, q);
blk_run_queue_async(q);
}
@@ -210,10 +214,11 @@ EXPORT_SYMBOL(blk_start_queue_async);
* Description:
* blk_start_queue() will clear the stop flag on the queue, and call
* the request_fn for the queue if it was in a stopped state when
- * entered. Also see blk_stop_queue(). Queue lock must be held.
+ * entered. Also see blk_stop_queue().
**/
void blk_start_queue(struct request_queue *q)
{
+ lockdep_assert_held(q->queue_lock);
WARN_ON(!irqs_disabled());
queue_flag_clear(QUEUE_FLAG_STOPPED, q);
@@ -233,10 +238,12 @@ EXPORT_SYMBOL(blk_start_queue);
* or if it simply chooses not to queue more I/O at one point, it can
* call this function to prevent the request_fn from being called until
* the driver has signalled it's ready to go again. This happens by calling
- * blk_start_queue() to restart queue operations. Queue lock must be held.
+ * blk_start_queue() to restart queue operations.
**/
void blk_stop_queue(struct request_queue *q)
{
+ lockdep_assert_held(q->queue_lock);
+
cancel_delayed_work(&q->delay_work);
queue_flag_set(QUEUE_FLAG_STOPPED, q);
}
@@ -289,6 +296,8 @@ EXPORT_SYMBOL(blk_sync_queue);
*/
inline void __blk_run_queue_uncond(struct request_queue *q)
{
+ lockdep_assert_held(q->queue_lock);
+
if (unlikely(blk_queue_dead(q)))
return;
@@ -310,11 +319,12 @@ EXPORT_SYMBOL_GPL(__blk_run_queue_uncond);
* @q: The queue to run
*
* Description:
- * See @blk_run_queue. This variant must be called with the queue lock
- * held and interrupts disabled.
+ * See @blk_run_queue.
*/
void __blk_run_queue(struct request_queue *q)
{
+ lockdep_assert_held(q->queue_lock);
+
if (unlikely(blk_queue_stopped(q)))
return;
@@ -328,10 +338,17 @@ EXPORT_SYMBOL(__blk_run_queue);
*
* Description:
* Tells kblockd to perform the equivalent of @blk_run_queue on behalf
- * of us. The caller must hold the queue lock.
+ * of us.
+ *
+ * Note:
+ * Since it is not allowed to run q->delay_work after blk_cleanup_queue()
+ * has canceled q->delay_work, callers must hold the queue lock to avoid
+ * race conditions between blk_cleanup_queue() and blk_run_queue_async().
*/
void blk_run_queue_async(struct request_queue *q)
{
+ lockdep_assert_held(q->queue_lock);
+
if (likely(!blk_queue_stopped(q) && !blk_queue_dead(q)))
mod_delayed_work(kblockd_workqueue, &q->delay_work, 0);
}
@@ -1077,6 +1094,8 @@ static struct request *__get_request(struct request_list *rl, unsigned int op,
int may_queue;
req_flags_t rq_flags = RQF_ALLOCED;
+ lockdep_assert_held(q->queue_lock);
+
if (unlikely(blk_queue_dying(q)))
return ERR_PTR(-ENODEV);
@@ -1250,6 +1269,8 @@ static struct request *get_request(struct request_queue *q, unsigned int op,
struct request_list *rl;
struct request *rq;
+ lockdep_assert_held(q->queue_lock);
+
rl = blk_get_rl(q, bio); /* transferred to @rq on success */
retry:
rq = __get_request(rl, op, bio, gfp_mask);
@@ -1342,6 +1363,8 @@ EXPORT_SYMBOL(blk_get_request);
*/
void blk_requeue_request(struct request_queue *q, struct request *rq)
{
+ lockdep_assert_held(q->queue_lock);
+
blk_delete_timer(rq);
blk_clear_rq_complete(rq);
trace_block_rq_requeue(q, rq);
@@ -1416,9 +1439,6 @@ static void blk_pm_put_request(struct request *rq)
static inline void blk_pm_put_request(struct request *rq) {}
#endif
-/*
- * queue lock must be held
- */
void __blk_put_request(struct request_queue *q, struct request *req)
{
req_flags_t rq_flags = req->rq_flags;
@@ -1431,6 +1451,8 @@ void __blk_put_request(struct request_queue *q, struct request *req)
return;
}
+ lockdep_assert_held(q->queue_lock);
+
blk_pm_put_request(req);
elv_completed_request(q, req);
@@ -2252,9 +2274,6 @@ EXPORT_SYMBOL_GPL(blk_insert_cloned_request);
*
* Return:
* The number of bytes to fail.
- *
- * Context:
- * queue_lock must be held.
*/
unsigned int blk_rq_err_bytes(const struct request *rq)
{
@@ -2394,15 +2413,14 @@ void blk_account_io_start(struct request *rq, bool new_io)
* Return:
* Pointer to the request at the top of @q if available. Null
* otherwise.
- *
- * Context:
- * queue_lock must be held.
*/
struct request *blk_peek_request(struct request_queue *q)
{
struct request *rq;
int ret;
+ lockdep_assert_held(q->queue_lock);
+
while ((rq = __elv_next_request(q)) != NULL) {
rq = blk_pm_peek_request(q, rq);
@@ -2519,12 +2537,11 @@ void blk_dequeue_request(struct request *rq)
*
* Block internal functions which don't want to start timer should
* call blk_dequeue_request().
- *
- * Context:
- * queue_lock must be held.
*/
void blk_start_request(struct request *req)
{
+ lockdep_assert_held(req->q->queue_lock);
+
blk_dequeue_request(req);
if (test_bit(QUEUE_FLAG_STATS, &req->q->queue_flags)) {
@@ -2549,14 +2566,13 @@ EXPORT_SYMBOL(blk_start_request);
* Return:
* Pointer to the request at the top of @q if available. Null
* otherwise.
- *
- * Context:
- * queue_lock must be held.
*/
struct request *blk_fetch_request(struct request_queue *q)
{
struct request *rq;
+ lockdep_assert_held(q->queue_lock);
+
rq = blk_peek_request(q);
if (rq)
blk_start_request(rq);
@@ -2732,13 +2748,12 @@ void blk_unprep_request(struct request *req)
}
EXPORT_SYMBOL_GPL(blk_unprep_request);
-/*
- * queue lock must be held
- */
void blk_finish_request(struct request *req, int error)
{
struct request_queue *q = req->q;
+ lockdep_assert_held(req->q->queue_lock);
+
if (req->rq_flags & RQF_STATS)
blk_stat_add(req);
@@ -2820,6 +2835,8 @@ static bool blk_end_bidi_request(struct request *rq, int error,
static bool __blk_end_bidi_request(struct request *rq, int error,
unsigned int nr_bytes, unsigned int bidi_bytes)
{
+ lockdep_assert_held(rq->q->queue_lock);
+
if (blk_update_bidi_request(rq, error, nr_bytes, bidi_bytes))
return true;
@@ -2884,6 +2901,8 @@ EXPORT_SYMBOL(blk_end_request_all);
**/
bool __blk_end_request(struct request *rq, int error, unsigned int nr_bytes)
{
+ lockdep_assert_held(rq->q->queue_lock);
+
return __blk_end_bidi_request(rq, error, nr_bytes, 0);
}
EXPORT_SYMBOL(__blk_end_request);
@@ -2901,6 +2920,8 @@ void __blk_end_request_all(struct request *rq, int error)
bool pending;
unsigned int bidi_bytes = 0;
+ lockdep_assert_held(rq->q->queue_lock);
+
if (unlikely(blk_bidi_rq(rq)))
bidi_bytes = blk_rq_bytes(rq->next_rq);
@@ -3165,6 +3186,8 @@ static void queue_unplugged(struct request_queue *q, unsigned int depth,
bool from_schedule)
__releases(q->queue_lock)
{
+ lockdep_assert_held(q->queue_lock);
+
trace_block_unplug(q, depth, !from_schedule);
if (from_schedule)
diff --git a/block/blk-flush.c b/block/blk-flush.c
index c4e0880b54bb..610c35bd9eeb 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -346,6 +346,8 @@ static void flush_data_end_io(struct request *rq, int error)
struct request_queue *q = rq->q;
struct blk_flush_queue *fq = blk_get_flush_queue(q, NULL);
+ lockdep_assert_held(q->queue_lock);
+
/*
* Updating q->in_flight[] here for making this tag usable
* early. Because in blk_queue_start_tag(),
@@ -411,9 +413,6 @@ static void mq_flush_data_end_io(struct request *rq, int error)
* or __blk_mq_run_hw_queue() to dispatch request.
* @rq is being submitted. Analyze what needs to be done and put it on the
* right queue.
- *
- * CONTEXT:
- * spin_lock_irq(q->queue_lock) in !mq case
*/
void blk_insert_flush(struct request *rq)
{
@@ -422,6 +421,9 @@ void blk_insert_flush(struct request *rq)
unsigned int policy = blk_flush_policy(fflags, rq);
struct blk_flush_queue *fq = blk_get_flush_queue(q, rq->mq_ctx);
+ if (!q->mq_ops)
+ lockdep_assert_held(q->queue_lock);
+
/*
* @policy now records what operations need to be done. Adjust
* REQ_PREFLUSH and FUA for the driver.
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 3990ae406341..573db663d53f 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -671,6 +671,9 @@ static void blk_account_io_merge(struct request *req)
static struct request *attempt_merge(struct request_queue *q,
struct request *req, struct request *next)
{
+ if (!q->mq_ops)
+ lockdep_assert_held(q->queue_lock);
+
if (!rq_mergeable(req) || !rq_mergeable(next))
return NULL;
diff --git a/block/blk-tag.c b/block/blk-tag.c
index 07cc329fa4b0..2290f65b9d73 100644
--- a/block/blk-tag.c
+++ b/block/blk-tag.c
@@ -258,15 +258,14 @@ EXPORT_SYMBOL(blk_queue_resize_tags);
* all transfers have been done for a request. It's important to call
* this function before end_that_request_last(), as that will put the
* request back on the free list thus corrupting the internal tag list.
- *
- * Notes:
- * queue lock must be held.
**/
void blk_queue_end_tag(struct request_queue *q, struct request *rq)
{
struct blk_queue_tag *bqt = q->queue_tags;
unsigned tag = rq->tag; /* negative tags invalid */
+ lockdep_assert_held(q->queue_lock);
+
BUG_ON(tag >= bqt->real_max_depth);
list_del_init(&rq->queuelist);
@@ -307,9 +306,6 @@ EXPORT_SYMBOL(blk_queue_end_tag);
* calling this function. The request will also be removed from
* the request queue, so it's the drivers responsibility to readd
* it if it should need to be restarted for some reason.
- *
- * Notes:
- * queue lock must be held.
**/
int blk_queue_start_tag(struct request_queue *q, struct request *rq)
{
@@ -317,6 +313,8 @@ int blk_queue_start_tag(struct request_queue *q, struct request *rq)
unsigned max_depth;
int tag;
+ lockdep_assert_held(q->queue_lock);
+
if (unlikely((rq->rq_flags & RQF_QUEUED))) {
printk(KERN_ERR
"%s: request %p for device [%s] already tagged %d",
@@ -389,14 +387,13 @@ EXPORT_SYMBOL(blk_queue_start_tag);
* Hardware conditions may dictate a need to stop all pending requests.
* In this case, we will safely clear the block side of the tag queue and
* readd all requests to the request queue in the right order.
- *
- * Notes:
- * queue lock must be held.
**/
void blk_queue_invalidate_tags(struct request_queue *q)
{
struct list_head *tmp, *n;
+ lockdep_assert_held(q->queue_lock);
+
list_for_each_safe(tmp, n, &q->tag_busy_list)
blk_requeue_request(q, list_entry_rq(tmp));
}
diff --git a/block/blk-timeout.c b/block/blk-timeout.c
index cbff183f3d9f..17ec83bb0900 100644
--- a/block/blk-timeout.c
+++ b/block/blk-timeout.c
@@ -189,13 +189,15 @@ unsigned long blk_rq_timeout(unsigned long timeout)
* Notes:
* Each request has its own timer, and as it is added to the queue, we
* set up the timer. When the request completes, we cancel the timer.
- * Queue lock must be held for the non-mq case, mq case doesn't care.
*/
void blk_add_timer(struct request *req)
{
struct request_queue *q = req->q;
unsigned long expiry;
+ if (!q->mq_ops)
+ lockdep_assert_held(q->queue_lock);
+
/* blk-mq has its own handler, so we don't need ->rq_timed_out_fn */
if (!q->mq_ops && !q->rq_timed_out_fn)
return;
--
2.12.2
* [PATCH v3 09/12] block: Document what queue type each function is intended for
2017-06-08 17:33 [PATCH v3 00/12] More patches for kernel v4.13 Bart Van Assche
` (7 preceding siblings ...)
2017-06-08 17:33 ` [PATCH v3 08/12] block: Check locking assumptions at runtime Bart Van Assche
@ 2017-06-08 17:33 ` Bart Van Assche
2017-06-08 17:33 ` [PATCH v3 10/12] blk-mq: Document locking assumptions Bart Van Assche
` (3 subsequent siblings)
12 siblings, 0 replies; 16+ messages in thread
From: Bart Van Assche @ 2017-06-08 17:33 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-block, Christoph Hellwig, Bart Van Assche, Hannes Reinecke,
Omar Sandoval, Ming Lei
Some functions in block/blk-core.c must only be used on blk-sq queues
while others are safe to use against any queue type. Document which
functions are intended for blk-sq queues and issue a warning if the
blk-sq API is misused. This not only helps block driver authors but
will also make it easier to remove the blk-sq code once that code is
declared obsolete.
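WARN_ON_ONCE() reports a condition the first time it is ever true at a given call site and stays silent afterwards, so a misused helper does not flood the log. A minimal standard-C sketch of that pattern, with a counter standing in for the kernel log and all other names made up:

```c
#include <assert.h>
#include <stddef.h>

static int warn_count;	/* stands in for the kernel log */

/* Report a condition at most once per call site; return whether it hit. */
static int warn_on_once_like(int cond, int *warned)
{
	if (cond && !*warned) {
		*warned = 1;
		warn_count++;
	}
	return cond;
}

struct queue_like {
	void *mq_ops;	/* non-NULL marks a blk-mq queue */
	int stopped;
};

/* A blk-sq-only helper: flags misuse on a blk-mq queue, but only once. */
static void sq_only_stop_queue(struct queue_like *q)
{
	static int warned;	/* per-call-site state, as in WARN_ON_ONCE() */

	warn_on_once_like(q->mq_ops != NULL, &warned);
	q->stopped = 1;
}
```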
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Omar Sandoval <osandov@fb.com>
Cc: Ming Lei <ming.lei@redhat.com>
---
block/blk-core.c | 33 +++++++++++++++++++++++++++++++++
block/blk.h | 2 ++
2 files changed, 35 insertions(+)
diff --git a/block/blk-core.c b/block/blk-core.c
index 6bbdce8b8b6f..ab4cb509c170 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -182,6 +182,7 @@ static void blk_delay_work(struct work_struct *work)
void blk_delay_queue(struct request_queue *q, unsigned long msecs)
{
lockdep_assert_held(q->queue_lock);
+ WARN_ON_ONCE(q->mq_ops);
if (likely(!blk_queue_dead(q)))
queue_delayed_work(kblockd_workqueue, &q->delay_work,
@@ -201,6 +202,7 @@ EXPORT_SYMBOL(blk_delay_queue);
void blk_start_queue_async(struct request_queue *q)
{
lockdep_assert_held(q->queue_lock);
+ WARN_ON_ONCE(q->mq_ops);
queue_flag_clear(QUEUE_FLAG_STOPPED, q);
blk_run_queue_async(q);
@@ -220,6 +222,7 @@ void blk_start_queue(struct request_queue *q)
{
lockdep_assert_held(q->queue_lock);
WARN_ON(!irqs_disabled());
+ WARN_ON_ONCE(q->mq_ops);
queue_flag_clear(QUEUE_FLAG_STOPPED, q);
__blk_run_queue(q);
@@ -243,6 +246,7 @@ EXPORT_SYMBOL(blk_start_queue);
void blk_stop_queue(struct request_queue *q)
{
lockdep_assert_held(q->queue_lock);
+ WARN_ON_ONCE(q->mq_ops);
cancel_delayed_work(&q->delay_work);
queue_flag_set(QUEUE_FLAG_STOPPED, q);
@@ -297,6 +301,7 @@ EXPORT_SYMBOL(blk_sync_queue);
inline void __blk_run_queue_uncond(struct request_queue *q)
{
lockdep_assert_held(q->queue_lock);
+ WARN_ON_ONCE(q->mq_ops);
if (unlikely(blk_queue_dead(q)))
return;
@@ -324,6 +329,7 @@ EXPORT_SYMBOL_GPL(__blk_run_queue_uncond);
void __blk_run_queue(struct request_queue *q)
{
lockdep_assert_held(q->queue_lock);
+ WARN_ON_ONCE(q->mq_ops);
if (unlikely(blk_queue_stopped(q)))
return;
@@ -348,6 +354,7 @@ EXPORT_SYMBOL(__blk_run_queue);
void blk_run_queue_async(struct request_queue *q)
{
lockdep_assert_held(q->queue_lock);
+ WARN_ON_ONCE(q->mq_ops);
if (likely(!blk_queue_stopped(q) && !blk_queue_dead(q)))
mod_delayed_work(kblockd_workqueue, &q->delay_work, 0);
@@ -366,6 +373,8 @@ void blk_run_queue(struct request_queue *q)
{
unsigned long flags;
+ WARN_ON_ONCE(q->mq_ops);
+
spin_lock_irqsave(q->queue_lock, flags);
__blk_run_queue(q);
spin_unlock_irqrestore(q->queue_lock, flags);
@@ -394,6 +403,7 @@ static void __blk_drain_queue(struct request_queue *q, bool drain_all)
int i;
lockdep_assert_held(q->queue_lock);
+ WARN_ON_ONCE(q->mq_ops);
while (true) {
bool drain = false;
@@ -472,6 +482,8 @@ static void __blk_drain_queue(struct request_queue *q, bool drain_all)
*/
void blk_queue_bypass_start(struct request_queue *q)
{
+ WARN_ON_ONCE(q->mq_ops);
+
spin_lock_irq(q->queue_lock);
q->bypass_depth++;
queue_flag_set(QUEUE_FLAG_BYPASS, q);
@@ -498,6 +510,9 @@ EXPORT_SYMBOL_GPL(blk_queue_bypass_start);
* @q: queue of interest
*
* Leave bypass mode and restore the normal queueing behavior.
+ *
+ * Note: although blk_queue_bypass_start() is only called for blk-sq queues,
+ * this function is called for both blk-sq and blk-mq queues.
*/
void blk_queue_bypass_end(struct request_queue *q)
{
@@ -895,6 +910,8 @@ static blk_qc_t blk_queue_bio(struct request_queue *q, struct bio *bio);
int blk_init_allocated_queue(struct request_queue *q)
{
+ WARN_ON_ONCE(q->mq_ops);
+
q->fq = blk_alloc_flush_queue(q, NUMA_NO_NODE, q->cmd_size);
if (!q->fq)
return -ENOMEM;
@@ -1032,6 +1049,8 @@ int blk_update_nr_requests(struct request_queue *q, unsigned int nr)
struct request_list *rl;
int on_thresh, off_thresh;
+ WARN_ON_ONCE(q->mq_ops);
+
spin_lock_irq(q->queue_lock);
q->nr_requests = nr;
blk_queue_congestion_threshold(q);
@@ -1270,6 +1289,7 @@ static struct request *get_request(struct request_queue *q, unsigned int op,
struct request *rq;
lockdep_assert_held(q->queue_lock);
+ WARN_ON_ONCE(q->mq_ops);
rl = blk_get_rl(q, bio); /* transferred to @rq on success */
retry:
@@ -1309,6 +1329,8 @@ static struct request *blk_old_get_request(struct request_queue *q,
{
struct request *rq;
+ WARN_ON_ONCE(q->mq_ops);
+
/* create ioc upfront */
create_io_context(gfp_mask, q->node);
@@ -1364,6 +1386,7 @@ EXPORT_SYMBOL(blk_get_request);
void blk_requeue_request(struct request_queue *q, struct request *rq)
{
lockdep_assert_held(q->queue_lock);
+ WARN_ON_ONCE(q->mq_ops);
blk_delete_timer(rq);
blk_clear_rq_complete(rq);
@@ -2420,6 +2443,7 @@ struct request *blk_peek_request(struct request_queue *q)
int ret;
lockdep_assert_held(q->queue_lock);
+ WARN_ON_ONCE(q->mq_ops);
while ((rq = __elv_next_request(q)) != NULL) {
@@ -2541,6 +2565,7 @@ void blk_dequeue_request(struct request *rq)
void blk_start_request(struct request *req)
{
lockdep_assert_held(req->q->queue_lock);
+ WARN_ON_ONCE(req->q->mq_ops);
blk_dequeue_request(req);
@@ -2572,6 +2597,7 @@ struct request *blk_fetch_request(struct request_queue *q)
struct request *rq;
lockdep_assert_held(q->queue_lock);
+ WARN_ON_ONCE(q->mq_ops);
rq = blk_peek_request(q);
if (rq)
@@ -2753,6 +2779,7 @@ void blk_finish_request(struct request *req, int error)
struct request_queue *q = req->q;
lockdep_assert_held(req->q->queue_lock);
+ WARN_ON_ONCE(q->mq_ops);
if (req->rq_flags & RQF_STATS)
blk_stat_add(req);
@@ -2807,6 +2834,8 @@ static bool blk_end_bidi_request(struct request *rq, int error,
struct request_queue *q = rq->q;
unsigned long flags;
+ WARN_ON_ONCE(q->mq_ops);
+
if (blk_update_bidi_request(rq, error, nr_bytes, bidi_bytes))
return true;
@@ -2836,6 +2865,7 @@ static bool __blk_end_bidi_request(struct request *rq, int error,
unsigned int nr_bytes, unsigned int bidi_bytes)
{
lockdep_assert_held(rq->q->queue_lock);
+ WARN_ON_ONCE(rq->q->mq_ops);
if (blk_update_bidi_request(rq, error, nr_bytes, bidi_bytes))
return true;
@@ -2861,6 +2891,7 @@ static bool __blk_end_bidi_request(struct request *rq, int error,
**/
bool blk_end_request(struct request *rq, int error, unsigned int nr_bytes)
{
+ WARN_ON_ONCE(rq->q->mq_ops);
return blk_end_bidi_request(rq, error, nr_bytes, 0);
}
EXPORT_SYMBOL(blk_end_request);
@@ -2902,6 +2933,7 @@ EXPORT_SYMBOL(blk_end_request_all);
bool __blk_end_request(struct request *rq, int error, unsigned int nr_bytes)
{
lockdep_assert_held(rq->q->queue_lock);
+ WARN_ON_ONCE(rq->q->mq_ops);
return __blk_end_bidi_request(rq, error, nr_bytes, 0);
}
@@ -2921,6 +2953,7 @@ void __blk_end_request_all(struct request *rq, int error)
unsigned int bidi_bytes = 0;
lockdep_assert_held(rq->q->queue_lock);
+ WARN_ON_ONCE(rq->q->mq_ops);
if (unlikely(blk_bidi_rq(rq)))
bidi_bytes = blk_rq_bytes(rq->next_rq);
diff --git a/block/blk.h b/block/blk.h
index 83c8e1100525..798691a5e5e9 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -143,6 +143,8 @@ static inline struct request *__elv_next_request(struct request_queue *q)
struct request *rq;
struct blk_flush_queue *fq = blk_get_flush_queue(q, NULL);
+ WARN_ON_ONCE(q->mq_ops);
+
while (1) {
if (!list_empty(&q->queue_head)) {
rq = list_entry_rq(q->queue_head.next);
--
2.12.2
* [PATCH v3 10/12] blk-mq: Document locking assumptions
2017-06-08 17:33 [PATCH v3 00/12] More patches for kernel v4.13 Bart Van Assche
` (8 preceding siblings ...)
2017-06-08 17:33 ` [PATCH v3 09/12] block: Document what queue type each function is intended for Bart Van Assche
@ 2017-06-08 17:33 ` Bart Van Assche
2017-06-08 17:33 ` [PATCH v3 11/12] block: Constify disk_type Bart Van Assche
` (2 subsequent siblings)
12 siblings, 0 replies; 16+ messages in thread
From: Bart Van Assche @ 2017-06-08 17:33 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-block, Christoph Hellwig, Bart Van Assche, Hannes Reinecke,
Omar Sandoval, Ming Lei
Document the locking assumptions in functions that modify
blk_mq_ctx.rq_list to make it easier for humans to verify
this code.
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Omar Sandoval <osandov@fb.com>
Cc: Ming Lei <ming.lei@redhat.com>
---
block/blk-mq-sched.c | 2 ++
block/blk-mq.c | 4 ++++
2 files changed, 6 insertions(+)
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index c4e2afb9d12d..88aa460b2e8a 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -232,6 +232,8 @@ static bool blk_mq_attempt_merge(struct request_queue *q,
struct request *rq;
int checked = 8;
+ lockdep_assert_held(&ctx->lock);
+
list_for_each_entry_reverse(rq, &ctx->rq_list, queuelist) {
bool merged = false;
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 5d9cca62c2f0..0f8c011eff97 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1274,6 +1274,8 @@ static inline void __blk_mq_insert_req_list(struct blk_mq_hw_ctx *hctx,
{
struct blk_mq_ctx *ctx = rq->mq_ctx;
+ lockdep_assert_held(&ctx->lock);
+
trace_block_rq_insert(hctx->queue, rq);
if (at_head)
@@ -1287,6 +1289,8 @@ void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
{
struct blk_mq_ctx *ctx = rq->mq_ctx;
+ lockdep_assert_held(&ctx->lock);
+
__blk_mq_insert_req_list(hctx, rq, at_head);
blk_mq_hctx_mark_pending(hctx, ctx);
}
--
2.12.2
* [PATCH v3 11/12] block: Constify disk_type
2017-06-08 17:33 [PATCH v3 00/12] More patches for kernel v4.13 Bart Van Assche
` (9 preceding siblings ...)
2017-06-08 17:33 ` [PATCH v3 10/12] blk-mq: Document locking assumptions Bart Van Assche
@ 2017-06-08 17:33 ` Bart Van Assche
2017-06-08 17:33 ` [PATCH v3 12/12] blk-mq: Warn when attempting to run a hardware queue that is not mapped Bart Van Assche
2017-06-19 2:55 ` [PATCH v3 00/12] More patches for kernel v4.13 Jens Axboe
12 siblings, 0 replies; 16+ messages in thread
From: Bart Van Assche @ 2017-06-08 17:33 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-block, Christoph Hellwig, Bart Van Assche, Hannes Reinecke,
Omar Sandoval, Ming Lei
The variable 'disk_type' is never modified, so constify it.
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Omar Sandoval <osandov@fb.com>
Cc: Ming Lei <ming.lei@redhat.com>
---
block/genhd.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/block/genhd.c b/block/genhd.c
index d252d29fe837..7f520fa25d16 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -36,7 +36,7 @@ struct kobject *block_depr;
static DEFINE_SPINLOCK(ext_devt_lock);
static DEFINE_IDR(ext_devt_idr);
-static struct device_type disk_type;
+static const struct device_type disk_type;
static void disk_check_events(struct disk_events *ev,
unsigned int *clearing_ptr);
@@ -1183,7 +1183,7 @@ static char *block_devnode(struct device *dev, umode_t *mode,
return NULL;
}
-static struct device_type disk_type = {
+static const struct device_type disk_type = {
.name = "disk",
.groups = disk_attr_groups,
.release = disk_release,
--
2.12.2
* [PATCH v3 12/12] blk-mq: Warn when attempting to run a hardware queue that is not mapped
2017-06-08 17:33 [PATCH v3 00/12] More patches for kernel v4.13 Bart Van Assche
` (10 preceding siblings ...)
2017-06-08 17:33 ` [PATCH v3 11/12] block: Constify disk_type Bart Van Assche
@ 2017-06-08 17:33 ` Bart Van Assche
2017-06-19 2:55 ` [PATCH v3 00/12] More patches for kernel v4.13 Jens Axboe
12 siblings, 0 replies; 16+ messages in thread
From: Bart Van Assche @ 2017-06-08 17:33 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-block, Christoph Hellwig, Bart Van Assche, Hannes Reinecke,
Omar Sandoval, Ming Lei
A queue must be frozen while the mapped state of a hardware queue
is changed. Additionally, any change of the mapped state is
followed by a call to blk_mq_map_swqueue() (see also
blk_mq_init_allocated_queue() and blk_mq_update_nr_hw_queues()).
Since blk_mq_map_swqueue() does not map any unmapped hardware
queue onto any software queue, no attempt will be made to run
an unmapped hardware queue. Hence issue a warning upon attempts
to run an unmapped hardware queue.
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Omar Sandoval <osandov@fb.com>
Cc: Ming Lei <ming.lei@redhat.com>
---
block/blk-mq.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 0f8c011eff97..689026d3c4bd 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1091,8 +1091,9 @@ static int blk_mq_hctx_next_cpu(struct blk_mq_hw_ctx *hctx)
static void __blk_mq_delay_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async,
unsigned long msecs)
{
- if (unlikely(blk_mq_hctx_stopped(hctx) ||
- !blk_mq_hw_queue_mapped(hctx)))
+ WARN_ON_ONCE(!blk_mq_hw_queue_mapped(hctx));
+
+ if (unlikely(blk_mq_hctx_stopped(hctx)))
return;
if (!async && !(hctx->flags & BLK_MQ_F_BLOCKING)) {
@@ -1252,7 +1253,7 @@ static void blk_mq_run_work_fn(struct work_struct *work)
void blk_mq_delay_queue(struct blk_mq_hw_ctx *hctx, unsigned long msecs)
{
- if (unlikely(!blk_mq_hw_queue_mapped(hctx)))
+ if (WARN_ON_ONCE(!blk_mq_hw_queue_mapped(hctx)))
return;
/*
--
2.12.2
* Re: [PATCH v3 01/12] blk-mq: Reduce blk_mq_hw_ctx size
2017-06-08 17:33 ` [PATCH v3 01/12] blk-mq: Reduce blk_mq_hw_ctx size Bart Van Assche
@ 2017-06-19 2:26 ` Ming Lei
0 siblings, 0 replies; 16+ messages in thread
From: Ming Lei @ 2017-06-19 2:26 UTC (permalink / raw)
To: Bart Van Assche
Cc: Jens Axboe, linux-block, Christoph Hellwig, Hannes Reinecke,
Omar Sandoval
On Thu, Jun 08, 2017 at 10:33:44AM -0700, Bart Van Assche wrote:
> Since the srcu structure is rather large (184 bytes on an x86-64
> system with kernel debugging disabled), only allocate it if needed.
>
> Reported-by: Ming Lei <ming.lei@redhat.com>
> Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Cc: Hannes Reinecke <hare@suse.com>
> Cc: Omar Sandoval <osandov@fb.com>
> Cc: Ming Lei <ming.lei@redhat.com>
> ---
> block/blk-mq.c | 30 ++++++++++++++++++++++--------
> include/linux/blk-mq.h | 5 +++--
> 2 files changed, 25 insertions(+), 10 deletions(-)
>
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 4e8b1bc87274..ef64a3ea4e83 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -172,7 +172,7 @@ void blk_mq_quiesce_queue(struct request_queue *q)
>
> queue_for_each_hw_ctx(q, hctx, i) {
> if (hctx->flags & BLK_MQ_F_BLOCKING)
> - synchronize_srcu(&hctx->queue_rq_srcu);
> + synchronize_srcu(hctx->queue_rq_srcu);
> else
> rcu = true;
> }
> @@ -1056,9 +1056,9 @@ static void __blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx)
> } else {
> might_sleep();
>
> - srcu_idx = srcu_read_lock(&hctx->queue_rq_srcu);
> + srcu_idx = srcu_read_lock(hctx->queue_rq_srcu);
> blk_mq_sched_dispatch_requests(hctx);
> - srcu_read_unlock(&hctx->queue_rq_srcu, srcu_idx);
> + srcu_read_unlock(hctx->queue_rq_srcu, srcu_idx);
> }
> }
>
> @@ -1460,9 +1460,9 @@ static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
>
> might_sleep();
>
> - srcu_idx = srcu_read_lock(&hctx->queue_rq_srcu);
> + srcu_idx = srcu_read_lock(hctx->queue_rq_srcu);
> __blk_mq_try_issue_directly(hctx, rq, cookie, true);
> - srcu_read_unlock(&hctx->queue_rq_srcu, srcu_idx);
> + srcu_read_unlock(hctx->queue_rq_srcu, srcu_idx);
> }
> }
>
> @@ -1806,7 +1806,7 @@ static void blk_mq_exit_hctx(struct request_queue *q,
> set->ops->exit_hctx(hctx, hctx_idx);
>
> if (hctx->flags & BLK_MQ_F_BLOCKING)
> - cleanup_srcu_struct(&hctx->queue_rq_srcu);
> + cleanup_srcu_struct(hctx->queue_rq_srcu);
>
> blk_mq_remove_cpuhp(hctx);
> blk_free_flush_queue(hctx->fq);
> @@ -1879,7 +1879,7 @@ static int blk_mq_init_hctx(struct request_queue *q,
> goto free_fq;
>
> if (hctx->flags & BLK_MQ_F_BLOCKING)
> - init_srcu_struct(&hctx->queue_rq_srcu);
> + init_srcu_struct(hctx->queue_rq_srcu);
>
> blk_mq_debugfs_register_hctx(q, hctx);
>
> @@ -2154,6 +2154,20 @@ struct request_queue *blk_mq_init_queue(struct blk_mq_tag_set *set)
> }
> EXPORT_SYMBOL(blk_mq_init_queue);
>
> +static int blk_mq_hw_ctx_size(struct blk_mq_tag_set *tag_set)
> +{
> + int hw_ctx_size = sizeof(struct blk_mq_hw_ctx);
> +
> + BUILD_BUG_ON(ALIGN(offsetof(struct blk_mq_hw_ctx, queue_rq_srcu),
> + __alignof__(struct blk_mq_hw_ctx)) !=
> + sizeof(struct blk_mq_hw_ctx));
> +
> + if (tag_set->flags & BLK_MQ_F_BLOCKING)
> + hw_ctx_size += sizeof(struct srcu_struct);
> +
> + return hw_ctx_size;
> +}
> +
> static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set,
> struct request_queue *q)
> {
> @@ -2168,7 +2182,7 @@ static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set,
> continue;
>
> node = blk_mq_hw_queue_to_node(q->mq_map, i);
> - hctxs[i] = kzalloc_node(sizeof(struct blk_mq_hw_ctx),
> + hctxs[i] = kzalloc_node(blk_mq_hw_ctx_size(set),
> GFP_KERNEL, node);
> if (!hctxs[i])
> break;
> diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
> index fcd641032f8d..c534ec64e214 100644
> --- a/include/linux/blk-mq.h
> +++ b/include/linux/blk-mq.h
> @@ -39,8 +39,6 @@ struct blk_mq_hw_ctx {
> struct blk_mq_tags *tags;
> struct blk_mq_tags *sched_tags;
>
> - struct srcu_struct queue_rq_srcu;
> -
> unsigned long queued;
> unsigned long run;
> #define BLK_MQ_MAX_DISPATCH_ORDER 7
> @@ -62,6 +60,9 @@ struct blk_mq_hw_ctx {
> struct dentry *debugfs_dir;
> struct dentry *sched_debugfs_dir;
> #endif
> +
> + /* Must be the last member - see also blk_mq_hw_ctx_size(). */
> + struct srcu_struct queue_rq_srcu[0];
> };
>
> struct blk_mq_tag_set {
> --
> 2.12.2
>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Thanks,
Ming
* Re: [PATCH v3 00/12] More patches for kernel v4.13
2017-06-08 17:33 [PATCH v3 00/12] More patches for kernel v4.13 Bart Van Assche
` (11 preceding siblings ...)
2017-06-08 17:33 ` [PATCH v3 12/12] blk-mq: Warn when attempting to run a hardware queue that is not mapped Bart Van Assche
@ 2017-06-19 2:55 ` Jens Axboe
2017-06-19 18:35 ` Bart Van Assche
12 siblings, 1 reply; 16+ messages in thread
From: Jens Axboe @ 2017-06-19 2:55 UTC (permalink / raw)
To: Bart Van Assche; +Cc: linux-block, Christoph Hellwig
On 06/08/2017 11:33 AM, Bart Van Assche wrote:
> Hello Jens,
>
> This patch series contains one patch that reduces the size of struct
> blk_mq_hw_ctx, a few patches that simplify some of the block layer code and
> also patches that improve the block layer documentation. Please consider these
> patches for kernel v4.13.
>
> The basis for these patches is a merge of your for-next and for-linus
> branches. Please note that the first patch in this series applies fine on top
> of the merge of these two branches but not on top of your for-next branch.
I seem to be missing some of the patches in this series, and I'm assuming
a high likelihood of rejects due to recent churn. Would you mind resending
this series?
--
Jens Axboe
* Re: [PATCH v3 00/12] More patches for kernel v4.13
2017-06-19 2:55 ` [PATCH v3 00/12] More patches for kernel v4.13 Jens Axboe
@ 2017-06-19 18:35 ` Bart Van Assche
0 siblings, 0 replies; 16+ messages in thread
From: Bart Van Assche @ 2017-06-19 18:35 UTC (permalink / raw)
To: axboe@kernel.dk; +Cc: hch@lst.de, linux-block@vger.kernel.org
On Sun, 2017-06-18 at 20:55 -0600, Jens Axboe wrote:
> On 06/08/2017 11:33 AM, Bart Van Assche wrote:
> > This patch series contains one patch that reduces the size of struct
> > blk_mq_hw_ctx, a few patches that simplify some of the block layer code and
> > also patches that improve the block layer documentation. Please consider these
> > patches for kernel v4.13.
> >
> > The basis for these patches is a merge of your for-next and for-linus
> > branches. Please note that the first patch in this series applies fine on top
> > of the merge of these two branches but not on top of your for-next branch.
>
> I seem to be missing some of the patches in this series, and I'm assuming
> high likely hood of rejects due to recent churn. Would you mind resending
> this series?
Hello Jens,
I will repost this series as soon as I have finished retesting a rebase of
this series on top of your current for-next branch.
Bart.
Thread overview: 16+ messages
2017-06-08 17:33 [PATCH v3 00/12] More patches for kernel v4.13 Bart Van Assche
2017-06-08 17:33 ` [PATCH v3 01/12] blk-mq: Reduce blk_mq_hw_ctx size Bart Van Assche
2017-06-19 2:26 ` Ming Lei
2017-06-08 17:33 ` [PATCH v3 02/12] block: Make request operation type argument declarations consistent Bart Van Assche
2017-06-08 17:33 ` [PATCH v3 03/12] block: Introduce request_queue.initialize_rq_fn() Bart Van Assche
2017-06-08 17:33 ` [PATCH v3 04/12] block: Make most scsi_req_init() calls implicit Bart Van Assche
2017-06-08 17:33 ` [PATCH v3 05/12] block: Change argument type of scsi_req_init() Bart Van Assche
2017-06-08 17:33 ` [PATCH v3 06/12] blk-mq: Initialize a request before assigning a tag Bart Van Assche
2017-06-08 17:33 ` [PATCH v3 07/12] block: Add a comment above queue_lockdep_assert_held() Bart Van Assche
2017-06-08 17:33 ` [PATCH v3 08/12] block: Check locking assumptions at runtime Bart Van Assche
2017-06-08 17:33 ` [PATCH v3 09/12] block: Document what queue type each function is intended for Bart Van Assche
2017-06-08 17:33 ` [PATCH v3 10/12] blk-mq: Document locking assumptions Bart Van Assche
2017-06-08 17:33 ` [PATCH v3 11/12] block: Constify disk_type Bart Van Assche
2017-06-08 17:33 ` [PATCH v3 12/12] blk-mq: Warn when attempting to run a hardware queue that is not mapped Bart Van Assche
2017-06-19 2:55 ` [PATCH v3 00/12] More patches for kernel v4.13 Jens Axboe
2017-06-19 18:35 ` Bart Van Assche