* [PATCH v2 00/15] dm: improve request-based DM and multipath
From: Mike Snitzer @ 2016-02-07 15:53 UTC (permalink / raw)
To: dm-devel; +Cc: axboe, jmoyer, Mike Snitzer, Sagi Grimberg
These changes have been staged in linux-dm.git's 'for-next', see:
https://git.kernel.org/cgit/linux/kernel/git/device-mapper/linux-dm.git/log/?h=for-next
The first patch is a stable@ candidate to go into 4.5-rc4+; the
remainder are targeting the 4.6 merge window.
Any review would be appreciated.
Mike Snitzer (15):
dm: fix excessive dm-mq context switching
dm: remove unused dm_get_rq_mapinfo()
dm: cleanup dm_any_congested()
dm: set DM_TARGET_WILDCARD feature on "error" target
dm: optimize dm_mq_queue_rq()
dm: optimize dm_request_fn()
dm: add 'blk_mq_nr_hw_queues' and 'blk_mq_queue_depth' module params
dm: allocate blk_mq_tag_set rather than embed in mapped_device
dm: rename target's per_bio_data_size to per_io_data_size
dm: allow immutable request-based targets to use blk-mq pdu
dm mpath: use blk-mq pdu for per-request 'struct dm_mpath_io'
dm mpath: cleanup 'struct dm_mpath_io' management code
dm mpath: use blk_mq_alloc_request() and blk_mq_free_request() directly
dm mpath: reduce granularity of locking in __multipath_map
dm mpath: remove unnecessary casts in front of ti->private
block/blk-core.c | 2 +-
drivers/md/dm-cache-target.c | 2 +-
drivers/md/dm-crypt.c | 2 +-
drivers/md/dm-delay.c | 2 +-
drivers/md/dm-flakey.c | 2 +-
drivers/md/dm-ioctl.c | 5 +-
drivers/md/dm-log-writes.c | 2 +-
drivers/md/dm-mpath.c | 101 ++++++++++++------
drivers/md/dm-raid1.c | 2 +-
drivers/md/dm-snap.c | 2 +-
drivers/md/dm-table.c | 30 +++++-
drivers/md/dm-target.c | 3 +-
drivers/md/dm-thin.c | 2 +-
drivers/md/dm-verity-fec.c | 2 +-
drivers/md/dm-verity-target.c | 12 +--
drivers/md/dm.c | 239 ++++++++++++++++++++++++------------------
drivers/md/dm.h | 4 +-
include/linux/device-mapper.h | 13 ++-
18 files changed, 263 insertions(+), 164 deletions(-)
--
2.5.4 (Apple Git-61)
* [dm-4.5 PATCH v2 01/15] dm: fix excessive dm-mq context switching
From: Mike Snitzer @ 2016-02-07 15:53 UTC (permalink / raw)
To: dm-devel; +Cc: axboe, jmoyer, Mike Snitzer, Sagi Grimberg
Request-based DM's blk-mq support (dm-mq) was reported to be 50% slower
than if an underlying null_blk device were used directly. One of the
reasons for this drop in performance is that blk_insert_clone_request()
was calling blk_mq_insert_request() with @async=true. This forced the
use of kblockd_schedule_delayed_work_on() to run the blk-mq hw queues
which ushered in ping-ponging between process context (fio in this case)
and kblockd's kworker to submit the cloned request. The ftrace
function_graph tracer showed:
kworker-2013 => fio-12190
fio-12190 => kworker-2013
...
kworker-2013 => fio-12190
fio-12190 => kworker-2013
...
Fixing blk_insert_clone_request()'s blk_mq_insert_request() call to
_not_ use kblockd to submit the cloned requests isn't enough to
eliminate the observed context switches.
In addition to this dm-mq specific blk-core fix, there are 2 DM core
fixes to dm-mq that (when paired with the blk-core fix) completely
eliminate the observed context switching:
1) don't blk_mq_run_hw_queues in blk-mq request completion
This is motivated by the desire to reduce dm-mq overhead: punting to
kblockd just increases context switches.
In my testing against a really fast null_blk device there was no benefit
to running blk_mq_run_hw_queues() on completion (and no other blk-mq
driver does this). So hopefully this change doesn't induce the need for
yet another revert like commit 621739b00e16ca2d !
2) use blk_mq_complete_request() in dm_complete_request()
blk_complete_request() doesn't offer the traditional q->mq_ops vs
.request_fn branching pattern that other historic block interfaces
do (e.g. blk_get_request). Using blk_mq_complete_request() for
blk-mq requests is important for performance. It should be noted
that, like blk_complete_request(), blk_mq_complete_request() doesn't
natively handle partial completions -- but the request-based
DM-multipath target does provide the required partial completion
support by dm.c:end_clone_bio() triggering requeueing of the request
via dm-mpath.c:multipath_end_io()'s return of DM_ENDIO_REQUEUE.
dm-mq fix #2 is _much_ more important than #1 for eliminating the
context switches.
Before: cpu : usr=15.10%, sys=59.39%, ctx=7905181, majf=0, minf=475
After: cpu : usr=20.60%, sys=79.35%, ctx=2008, majf=0, minf=472
With these changes multithreaded async read IOPs improved from ~950K
to ~1350K for this dm-mq stacked on null_blk test-case. The raw read
IOPs of the underlying null_blk device for the same workload is ~1950K.
Fixes: 7fb4898e0 ("block: add blk-mq support to blk_insert_cloned_request()")
Fixes: bfebd1cdb ("dm: add full blk-mq support to request-based DM")
Reported-by: Sagi Grimberg <sagig@dev.mellanox.co.il>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org # 4.1+
---
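A note below the cut (not part of the commit message): the argument
being flipped in blk_insert_cloned_request() is @async.  For reference,
the prototype in this tree should be (quoting from memory, please verify
against include/linux/blk-mq.h):

    void blk_mq_insert_request(struct request *rq, bool at_head,
                               bool run_queue, bool async);

Passing @async=false runs the hw queue from the caller's context instead
of punting to kblockd, which is what removes the fio <-> kworker
ping-pong shown in the trace above.
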
block/blk-core.c | 2 +-
drivers/md/dm.c | 13 ++++++-------
2 files changed, 7 insertions(+), 8 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index ab51685..c60e233 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -2198,7 +2198,7 @@ int blk_insert_cloned_request(struct request_queue *q, struct request *rq)
if (q->mq_ops) {
if (blk_queue_io_stat(q))
blk_account_io_start(rq, true);
- blk_mq_insert_request(rq, false, true, true);
+ blk_mq_insert_request(rq, false, true, false);
return 0;
}
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 5df4048..846e1bb 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1109,12 +1109,8 @@ static void rq_completed(struct mapped_device *md, int rw, bool run_queue)
* back into ->request_fn() could deadlock attempting to grab the
* queue lock again.
*/
- if (run_queue) {
- if (md->queue->mq_ops)
- blk_mq_run_hw_queues(md->queue, true);
- else
- blk_run_queue_async(md->queue);
- }
+ if (!md->queue->mq_ops && run_queue)
+ blk_run_queue_async(md->queue);
/*
* dm_put() must be at the end of this function. See the comment above
@@ -1334,7 +1330,10 @@ static void dm_complete_request(struct request *rq, int error)
struct dm_rq_target_io *tio = tio_from_request(rq);
tio->error = error;
- blk_complete_request(rq);
+ if (!rq->q->mq_ops)
+ blk_complete_request(rq);
+ else
+ blk_mq_complete_request(rq, rq->errors);
}
/*
--
2.5.4 (Apple Git-61)
* [dm-4.6 PATCH v2 02/15] dm: remove unused dm_get_rq_mapinfo()
From: Mike Snitzer @ 2016-02-07 15:53 UTC (permalink / raw)
To: dm-devel; +Cc: Mike Snitzer, Sagi Grimberg
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
---
drivers/md/dm.c | 8 --------
1 file changed, 8 deletions(-)
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 846e1bb..873512d 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -106,14 +106,6 @@ struct dm_rq_clone_bio_info {
struct bio clone;
};
-union map_info *dm_get_rq_mapinfo(struct request *rq)
-{
- if (rq && rq->end_io_data)
- return &((struct dm_rq_target_io *)rq->end_io_data)->info;
- return NULL;
-}
-EXPORT_SYMBOL_GPL(dm_get_rq_mapinfo);
-
#define MINOR_ALLOCED ((void *)-1)
/*
--
2.5.4 (Apple Git-61)
* [dm-4.6 PATCH v2 03/15] dm: cleanup dm_any_congested()
From: Mike Snitzer @ 2016-02-07 15:53 UTC (permalink / raw)
To: dm-devel; +Cc: Mike Snitzer, Sagi Grimberg
The request-based DM support for checking queue congestion doesn't
require access to the live DM table.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
---
drivers/md/dm.c | 17 ++++++++---------
1 file changed, 8 insertions(+), 9 deletions(-)
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 873512d..c92e356 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -2135,19 +2135,18 @@ static int dm_any_congested(void *congested_data, int bdi_bits)
struct dm_table *map;
if (!test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags)) {
- map = dm_get_live_table_fast(md);
- if (map) {
+ if (dm_request_based(md)) {
/*
- * Request-based dm cares about only own queue for
- * the query about congestion status of request_queue
+ * With request-based DM we only need to check the
+ * top-level queue for congestion.
*/
- if (dm_request_based(md))
- r = md->queue->backing_dev_info.wb.state &
- bdi_bits;
- else
+ r = md->queue->backing_dev_info.wb.state & bdi_bits;
+ } else {
+ map = dm_get_live_table_fast(md);
+ if (map)
r = dm_table_any_congested(map, bdi_bits);
+ dm_put_live_table_fast(md);
}
- dm_put_live_table_fast(md);
}
return r;
--
2.5.4 (Apple Git-61)
* [dm-4.6 PATCH v2 04/15] dm: set DM_TARGET_WILDCARD feature on "error" target
From: Mike Snitzer @ 2016-02-07 15:53 UTC (permalink / raw)
To: dm-devel; +Cc: Mike Snitzer, Sagi Grimberg
The DM_TARGET_WILDCARD feature indicates that the "error" target may
replace any target; even immutable targets. This feature will be useful
to preserve the ability to replace the "multipath" target even once it
is formally converted over to having the DM_TARGET_IMMUTABLE feature.
Also, implicit in the DM_TARGET_WILDCARD feature flag being set is that
.map, .map_rq, .clone_and_map_rq and .release_clone_rq are all defined
in the target_type.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
---
drivers/md/dm-ioctl.c | 3 ++-
drivers/md/dm-table.c | 14 ++++++++++++++
drivers/md/dm-target.c | 3 ++-
drivers/md/dm.h | 1 +
include/linux/device-mapper.h | 7 +++++++
5 files changed, 26 insertions(+), 2 deletions(-)
diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
index 80a4395..4763c4a 100644
--- a/drivers/md/dm-ioctl.c
+++ b/drivers/md/dm-ioctl.c
@@ -1291,7 +1291,8 @@ static int table_load(struct dm_ioctl *param, size_t param_size)
immutable_target_type = dm_get_immutable_target_type(md);
if (immutable_target_type &&
- (immutable_target_type != dm_table_get_immutable_target_type(t))) {
+ (immutable_target_type != dm_table_get_immutable_target_type(t)) &&
+ !dm_table_get_wildcard_target(t)) {
DMWARN("can't replace immutable target type %s",
immutable_target_type->name);
r = -EINVAL;
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 061152a..a49e62b 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -920,6 +920,20 @@ struct target_type *dm_table_get_immutable_target_type(struct dm_table *t)
return t->immutable_target_type;
}
+struct dm_target *dm_table_get_wildcard_target(struct dm_table *t)
+{
+ struct dm_target *uninitialized_var(ti);
+ unsigned i = 0;
+
+ while (i < dm_table_get_num_targets(t)) {
+ ti = dm_table_get_target(t, i++);
+ if (dm_target_is_wildcard(ti->type))
+ return ti;
+ }
+
+ return NULL;
+}
+
bool dm_table_request_based(struct dm_table *t)
{
return __table_type_request_based(dm_table_get_type(t));
diff --git a/drivers/md/dm-target.c b/drivers/md/dm-target.c
index 925ec1b..a317dd8 100644
--- a/drivers/md/dm-target.c
+++ b/drivers/md/dm-target.c
@@ -150,7 +150,8 @@ static void io_err_release_clone_rq(struct request *clone)
static struct target_type error_target = {
.name = "error",
- .version = {1, 3, 0},
+ .version = {1, 4, 0},
+ .features = DM_TARGET_WILDCARD,
.ctr = io_err_ctr,
.dtr = io_err_dtr,
.map = io_err_map,
diff --git a/drivers/md/dm.h b/drivers/md/dm.h
index 7edcf97..53df258 100644
--- a/drivers/md/dm.h
+++ b/drivers/md/dm.h
@@ -73,6 +73,7 @@ int dm_table_resume_targets(struct dm_table *t);
int dm_table_any_congested(struct dm_table *t, int bdi_bits);
unsigned dm_table_get_type(struct dm_table *t);
struct target_type *dm_table_get_immutable_target_type(struct dm_table *t);
+struct dm_target *dm_table_get_wildcard_target(struct dm_table *t);
bool dm_table_request_based(struct dm_table *t);
bool dm_table_mq_request_based(struct dm_table *t);
void dm_table_free_md_mempools(struct dm_table *t);
diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
index ec1c61c..87d50ec 100644
--- a/include/linux/device-mapper.h
+++ b/include/linux/device-mapper.h
@@ -190,6 +190,13 @@ struct target_type {
#define dm_target_is_immutable(type) ((type)->features & DM_TARGET_IMMUTABLE)
/*
+ * Indicates that a target may replace any target; even immutable targets.
+ * .map, .map_rq, .clone_and_map_rq and .release_clone_rq are all defined.
+ */
+#define DM_TARGET_WILDCARD 0x00000008
+#define dm_target_is_wildcard(type) ((type)->features & DM_TARGET_WILDCARD)
+
+/*
* Some targets need to be sent the same WRITE bio severals times so
* that they can send copies of it to different devices. This function
* examines any supplied bio and returns the number of copies of it the
--
2.5.4 (Apple Git-61)
* [dm-4.6 PATCH v2 05/15] dm: optimize dm_mq_queue_rq()
From: Mike Snitzer @ 2016-02-07 15:53 UTC (permalink / raw)
To: dm-devel; +Cc: Mike Snitzer, Sagi Grimberg
DM multipath is the only dm-mq target. But that aside, request-based DM
only supports tables with a single target that is immutable. Leverage
this fact in dm_mq_queue_rq() by using the 'immutable_target' stored in
the mapped_device when the table was made active. This saves the need
to even take the read-side of the SRCU via dm_{get,put}_live_table.
If the active DM table does not have an immutable target (e.g. the
"error" target was swapped in) then fall back to the slow path where the
target is looked up from the live table.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
---
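A note below the cut: this patch also drops the BUG_ON() from
dm_get_md_type(), since dm_mq_queue_rq() now calls it from the I/O path
without holding md->type_lock.  My reading of why that is safe (an
assumption, the changelog doesn't spell it out):

    unsigned dm_get_md_type(struct mapped_device *md)
    {
            /*
             * Previously asserted mutex_is_locked(&md->type_lock).
             * The md type cannot change once the initial table has
             * been loaded, so a lockless read from the fast path is
             * fine.
             */
            return md->type;
    }
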
drivers/md/dm-mpath.c | 3 ++-
drivers/md/dm-table.c | 10 ++++++++++
drivers/md/dm.c | 40 ++++++++++++++++++----------------------
drivers/md/dm.h | 1 +
4 files changed, 31 insertions(+), 23 deletions(-)
diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
index cfa29f5..3ddaa11 100644
--- a/drivers/md/dm-mpath.c
+++ b/drivers/md/dm-mpath.c
@@ -1684,7 +1684,8 @@ out:
*---------------------------------------------------------------*/
static struct target_type multipath_target = {
.name = "multipath",
- .version = {1, 10, 0},
+ .version = {1, 11, 0},
+ .features = DM_TARGET_SINGLETON | DM_TARGET_IMMUTABLE,
.module = THIS_MODULE,
.ctr = multipath_ctr,
.dtr = multipath_dtr,
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index a49e62b..89180fd 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -920,6 +920,16 @@ struct target_type *dm_table_get_immutable_target_type(struct dm_table *t)
return t->immutable_target_type;
}
+struct dm_target *dm_table_get_immutable_target(struct dm_table *t)
+{
+ /* Immutable target is implicitly a singleton */
+ if (t->num_targets > 1 ||
+ !dm_target_is_immutable(t->targets[0].type))
+ return NULL;
+
+ return t->targets;
+}
+
struct dm_target *dm_table_get_wildcard_target(struct dm_table *t)
{
struct dm_target *uninitialized_var(ti);
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index c92e356..312cc77 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -154,6 +154,7 @@ struct mapped_device {
/* Protect queue and type against concurrent access. */
struct mutex type_lock;
+ struct dm_target *immutable_target;
struct target_type *immutable_target_type;
struct gendisk *disk;
@@ -2490,8 +2491,15 @@ static struct dm_table *__bind(struct mapped_device *md, struct dm_table *t,
* This must be done before setting the queue restrictions,
* because request-based dm may be run just after the setting.
*/
- if (dm_table_request_based(t))
+ if (dm_table_request_based(t)) {
stop_queue(q);
+ /*
+ * Leverage the fact that request-based DM targets are
+ * immutable singletons and establish md->immutable_target
+ * - used to optimize both dm_request_fn and dm_mq_queue_rq
+ */
+ md->immutable_target = dm_table_get_immutable_target(t);
+ }
__bind_mempools(md, t);
@@ -2562,7 +2570,6 @@ void dm_set_md_type(struct mapped_device *md, unsigned type)
unsigned dm_get_md_type(struct mapped_device *md)
{
- BUG_ON(!mutex_is_locked(&md->type_lock));
return md->type;
}
@@ -2639,28 +2646,15 @@ static int dm_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
struct request *rq = bd->rq;
struct dm_rq_target_io *tio = blk_mq_rq_to_pdu(rq);
struct mapped_device *md = tio->md;
- int srcu_idx;
- struct dm_table *map = dm_get_live_table(md, &srcu_idx);
- struct dm_target *ti;
- sector_t pos;
+ struct dm_target *ti = md->immutable_target;
- /* always use block 0 to find the target for flushes for now */
- pos = 0;
- if (!(rq->cmd_flags & REQ_FLUSH))
- pos = blk_rq_pos(rq);
+ if (unlikely(!ti)) {
+ int srcu_idx;
+ struct dm_table *map = dm_get_live_table(md, &srcu_idx);
- ti = dm_table_find_target(map, pos);
- if (!dm_target_is_valid(ti)) {
+ ti = dm_table_find_target(map, 0);
dm_put_live_table(md, srcu_idx);
- DMERR_LIMIT("request attempted access beyond the end of device");
- /*
- * Must perform setup, that rq_completed() requires,
- * before returning BLK_MQ_RQ_QUEUE_ERROR
- */
- dm_start_request(md, rq);
- return BLK_MQ_RQ_QUEUE_ERROR;
}
- dm_put_live_table(md, srcu_idx);
if (ti->type->busy && ti->type->busy(ti))
return BLK_MQ_RQ_QUEUE_BUSY;
@@ -2676,8 +2670,10 @@ static int dm_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
*/
tio->ti = ti;
- /* Clone the request if underlying devices aren't blk-mq */
- if (dm_table_get_type(map) == DM_TYPE_REQUEST_BASED) {
+ /*
+ * Both the table and md type cannot change after initial table load
+ */
+ if (dm_get_md_type(md) == DM_TYPE_REQUEST_BASED) {
/* clone request is allocated at the end of the pdu */
tio->clone = (void *)blk_mq_rq_to_pdu(rq) + sizeof(struct dm_rq_target_io);
(void) clone_rq(rq, md, tio, GFP_ATOMIC);
diff --git a/drivers/md/dm.h b/drivers/md/dm.h
index 53df258..4305a51 100644
--- a/drivers/md/dm.h
+++ b/drivers/md/dm.h
@@ -73,6 +73,7 @@ int dm_table_resume_targets(struct dm_table *t);
int dm_table_any_congested(struct dm_table *t, int bdi_bits);
unsigned dm_table_get_type(struct dm_table *t);
struct target_type *dm_table_get_immutable_target_type(struct dm_table *t);
+struct dm_target *dm_table_get_immutable_target(struct dm_table *t);
struct dm_target *dm_table_get_wildcard_target(struct dm_table *t);
bool dm_table_request_based(struct dm_table *t);
bool dm_table_mq_request_based(struct dm_table *t);
--
2.5.4 (Apple Git-61)
* [dm-4.6 PATCH v2 06/15] dm: optimize dm_request_fn()
From: Mike Snitzer @ 2016-02-07 15:53 UTC (permalink / raw)
To: dm-devel; +Cc: Mike Snitzer, Sagi Grimberg
DM multipath is the only request-based DM target, and request-based DM
only supports tables with a single target that is immutable. Leverage
this fact in dm_request_fn().
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
---
drivers/md/dm.c | 47 +++++++++++++++++------------------------------
1 file changed, 17 insertions(+), 30 deletions(-)
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 312cc77..3dfcb5a 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -2069,12 +2069,18 @@ static bool dm_request_peeked_before_merge_deadline(struct mapped_device *md)
static void dm_request_fn(struct request_queue *q)
{
struct mapped_device *md = q->queuedata;
- int srcu_idx;
- struct dm_table *map = dm_get_live_table(md, &srcu_idx);
- struct dm_target *ti;
+ struct dm_target *ti = md->immutable_target;
struct request *rq;
struct dm_rq_target_io *tio;
- sector_t pos;
+ sector_t pos = 0;
+
+ if (unlikely(!ti)) {
+ int srcu_idx;
+ struct dm_table *map = dm_get_live_table(md, &srcu_idx);
+
+ ti = dm_table_find_target(map, pos);
+ dm_put_live_table(md, srcu_idx);
+ }
/*
* For suspend, check blk_queue_stopped() and increment
@@ -2085,33 +2091,21 @@ static void dm_request_fn(struct request_queue *q)
while (!blk_queue_stopped(q)) {
rq = blk_peek_request(q);
if (!rq)
- goto out;
+ return;
/* always use block 0 to find the target for flushes for now */
pos = 0;
if (!(rq->cmd_flags & REQ_FLUSH))
pos = blk_rq_pos(rq);
- ti = dm_table_find_target(map, pos);
- if (!dm_target_is_valid(ti)) {
- /*
- * Must perform setup, that rq_completed() requires,
- * before calling dm_kill_unmapped_request
- */
- DMERR_LIMIT("request attempted access beyond the end of device");
- dm_start_request(md, rq);
- dm_kill_unmapped_request(rq, -EIO);
- continue;
+ if ((dm_request_peeked_before_merge_deadline(md) &&
+ md_in_flight(md) && rq->bio && rq->bio->bi_vcnt == 1 &&
+ md->last_rq_pos == pos && md->last_rq_rw == rq_data_dir(rq)) ||
+ (ti->type->busy && ti->type->busy(ti))) {
+ blk_delay_queue(q, HZ / 100);
+ return;
}
- if (dm_request_peeked_before_merge_deadline(md) &&
- md_in_flight(md) && rq->bio && rq->bio->bi_vcnt == 1 &&
- md->last_rq_pos == pos && md->last_rq_rw == rq_data_dir(rq))
- goto delay_and_out;
-
- if (ti->type->busy && ti->type->busy(ti))
- goto delay_and_out;
-
dm_start_request(md, rq);
tio = tio_from_request(rq);
@@ -2120,13 +2114,6 @@ static void dm_request_fn(struct request_queue *q)
queue_kthread_work(&md->kworker, &tio->work);
BUG_ON(!irqs_disabled());
}
-
- goto out;
-
-delay_and_out:
- blk_delay_queue(q, HZ / 100);
-out:
- dm_put_live_table(md, srcu_idx);
}
static int dm_any_congested(void *congested_data, int bdi_bits)
--
2.5.4 (Apple Git-61)
* [dm-4.6 PATCH v2 07/15] dm: add 'blk_mq_nr_hw_queues' and 'blk_mq_queue_depth' module params
From: Mike Snitzer @ 2016-02-07 15:53 UTC (permalink / raw)
To: dm-devel; +Cc: Mike Snitzer, Sagi Grimberg
Allow the user to change these values via module params or sysfs.
'blk_mq_nr_hw_queues' defaults to 1 (max 32).
'blk_mq_queue_depth' defaults to 2048 (up from 64, which proved far too
small under moderate-sized workloads -- the dm-multipath device would
continuously block waiting for tags (requests) to become available).
The maximum is BLK_MQ_MAX_DEPTH (currently 10240).
Keep in mind the total number of pre-allocated requests per rq-based DM
blk-mq device is 'blk_mq_nr_hw_queues' * 'blk_mq_queue_depth' (currently
2048).
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
---
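A note below the cut: the __dm_get_module_param() helper used by the new
accessors is not shown in this diff.  A rough sketch of the
clamp-to-bounds pattern dm.c uses for module params (illustrative only,
not necessarily the exact code in the tree):

    static unsigned __dm_get_module_param(unsigned *module_param,
                                          unsigned def, unsigned max)
    {
            unsigned param = ACCESS_ONCE(*module_param);
            unsigned modified_param = 0;

            if (!param)
                    modified_param = def;
            else if (param > max)
                    modified_param = max;

            if (modified_param) {
                    (void)cmpxchg(module_param, param, modified_param);
                    param = modified_param;
            }

            return param;
    }

So a zero or out-of-range value written to the module parameter is
replaced with the default or the maximum, respectively, at the time the
value is read.
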
drivers/md/dm.c | 27 +++++++++++++++++++++++++--
1 file changed, 25 insertions(+), 2 deletions(-)
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 3dfcb5a..ec505e5 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -233,6 +233,12 @@ static bool use_blk_mq = true;
static bool use_blk_mq = false;
#endif
+#define DM_MQ_NR_HW_QUEUES 1
+#define DM_MQ_QUEUE_DEPTH 2048
+
+static unsigned blk_mq_nr_hw_queues = DM_MQ_NR_HW_QUEUES;
+static unsigned blk_mq_queue_depth = DM_MQ_QUEUE_DEPTH;
+
bool dm_use_blk_mq(struct mapped_device *md)
{
return md->use_blk_mq;
@@ -303,6 +309,17 @@ unsigned dm_get_reserved_rq_based_ios(void)
}
EXPORT_SYMBOL_GPL(dm_get_reserved_rq_based_ios);
+unsigned dm_get_blk_mq_nr_hw_queues(void)
+{
+ return __dm_get_module_param(&blk_mq_nr_hw_queues, 1, 32);
+}
+
+unsigned dm_get_blk_mq_queue_depth(void)
+{
+ return __dm_get_module_param(&blk_mq_queue_depth,
+ DM_MQ_QUEUE_DEPTH, BLK_MQ_MAX_DEPTH);
+}
+
static int __init local_init(void)
{
int r = -ENOMEM;
@@ -2693,10 +2710,10 @@ static int dm_init_request_based_blk_mq_queue(struct mapped_device *md)
memset(&md->tag_set, 0, sizeof(md->tag_set));
md->tag_set.ops = &dm_mq_ops;
- md->tag_set.queue_depth = BLKDEV_MAX_RQ;
+ md->tag_set.queue_depth = dm_get_blk_mq_queue_depth();
md->tag_set.numa_node = NUMA_NO_NODE;
md->tag_set.flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_SG_MERGE;
- md->tag_set.nr_hw_queues = 1;
+ md->tag_set.nr_hw_queues = dm_get_blk_mq_nr_hw_queues();
if (md_type == DM_TYPE_REQUEST_BASED) {
/* make the memory for non-blk-mq clone part of the pdu */
md->tag_set.cmd_size = sizeof(struct dm_rq_target_io) + sizeof(struct request);
@@ -3672,6 +3689,12 @@ MODULE_PARM_DESC(reserved_rq_based_ios, "Reserved IOs in request-based mempools"
module_param(use_blk_mq, bool, S_IRUGO | S_IWUSR);
MODULE_PARM_DESC(use_blk_mq, "Use block multiqueue for request-based DM devices");
+module_param(blk_mq_nr_hw_queues, uint, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(blk_mq_nr_hw_queues, "Number of hardware queues for blk-mq request-based DM devices");
+
+module_param(blk_mq_queue_depth, uint, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(blk_mq_queue_depth, "Queue depth for blk-mq request-based DM devices");
+
MODULE_DESCRIPTION(DM_NAME " driver");
MODULE_AUTHOR("Joe Thornber <dm-devel@redhat.com>");
MODULE_LICENSE("GPL");
--
2.5.4 (Apple Git-61)
* [dm-4.6 PATCH v2 08/15] dm: allocate blk_mq_tag_set rather than embed in mapped_device
From: Mike Snitzer @ 2016-02-07 15:53 UTC (permalink / raw)
To: dm-devel; +Cc: Mike Snitzer, Sagi Grimberg
The blk_mq_tag_set is only needed for dm-mq support. There is no point
wasting space in 'struct mapped_device' for non-dm-mq devices.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
---
drivers/md/dm.c | 42 ++++++++++++++++++++++++------------------
1 file changed, 24 insertions(+), 18 deletions(-)
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index ec505e5..1fab790 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -223,7 +223,7 @@ struct mapped_device {
ktime_t last_rq_start_time;
/* for blk-mq request-based DM support */
- struct blk_mq_tag_set tag_set;
+ struct blk_mq_tag_set *tag_set;
bool use_blk_mq;
};
@@ -2386,8 +2386,10 @@ static void free_dev(struct mapped_device *md)
unlock_fs(md);
cleanup_mapped_device(md);
- if (md->use_blk_mq)
- blk_mq_free_tag_set(&md->tag_set);
+ if (md->tag_set) {
+ blk_mq_free_tag_set(md->tag_set);
+ kfree(md->tag_set);
+ }
free_table_devices(&md->table_devices);
dm_stats_cleanup(&md->stats);
@@ -2708,24 +2710,25 @@ static int dm_init_request_based_blk_mq_queue(struct mapped_device *md)
struct request_queue *q;
int err;
- memset(&md->tag_set, 0, sizeof(md->tag_set));
- md->tag_set.ops = &dm_mq_ops;
- md->tag_set.queue_depth = dm_get_blk_mq_queue_depth();
- md->tag_set.numa_node = NUMA_NO_NODE;
- md->tag_set.flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_SG_MERGE;
- md->tag_set.nr_hw_queues = dm_get_blk_mq_nr_hw_queues();
+ md->tag_set = kzalloc(sizeof(struct blk_mq_tag_set), GFP_KERNEL);
+ md->tag_set->ops = &dm_mq_ops;
+ md->tag_set->queue_depth = dm_get_blk_mq_queue_depth();
+ md->tag_set->numa_node = NUMA_NO_NODE;
+ md->tag_set->flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_SG_MERGE;
+ md->tag_set->nr_hw_queues = dm_get_blk_mq_nr_hw_queues();
+ md->tag_set->driver_data = md;
+
+ md->tag_set->cmd_size = sizeof(struct dm_rq_target_io);
if (md_type == DM_TYPE_REQUEST_BASED) {
- /* make the memory for non-blk-mq clone part of the pdu */
- md->tag_set.cmd_size = sizeof(struct dm_rq_target_io) + sizeof(struct request);
- } else
- md->tag_set.cmd_size = sizeof(struct dm_rq_target_io);
- md->tag_set.driver_data = md;
+ /* put the memory for non-blk-mq clone at the end of the pdu */
+ md->tag_set->cmd_size += sizeof(struct request);
+ }
- err = blk_mq_alloc_tag_set(&md->tag_set);
+ err = blk_mq_alloc_tag_set(md->tag_set);
if (err)
- return err;
+ goto out_kfree_tag_set;
- q = blk_mq_init_allocated_queue(&md->tag_set, md->queue);
+ q = blk_mq_init_allocated_queue(md->tag_set, md->queue);
if (IS_ERR(q)) {
err = PTR_ERR(q);
goto out_tag_set;
@@ -2742,7 +2745,10 @@ static int dm_init_request_based_blk_mq_queue(struct mapped_device *md)
return 0;
out_tag_set:
- blk_mq_free_tag_set(&md->tag_set);
+ blk_mq_free_tag_set(md->tag_set);
+out_kfree_tag_set:
+ kfree(md->tag_set);
+
return err;
}
--
2.5.4 (Apple Git-61)
* [dm-4.6 PATCH v2 09/15] dm: rename target's per_bio_data_size to per_io_data_size
From: Mike Snitzer @ 2016-02-07 15:53 UTC (permalink / raw)
To: dm-devel; +Cc: Mike Snitzer, Sagi Grimberg
Request-based DM will also make use of per_bio_data_size.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
---
drivers/md/dm-cache-target.c | 2 +-
drivers/md/dm-crypt.c | 2 +-
drivers/md/dm-delay.c | 2 +-
drivers/md/dm-flakey.c | 2 +-
drivers/md/dm-log-writes.c | 2 +-
drivers/md/dm-raid1.c | 2 +-
drivers/md/dm-snap.c | 2 +-
drivers/md/dm-table.c | 6 +++---
drivers/md/dm-thin.c | 2 +-
drivers/md/dm-verity-fec.c | 2 +-
drivers/md/dm-verity-target.c | 12 ++++++------
drivers/md/dm.c | 8 ++++----
include/linux/device-mapper.h | 6 +++---
13 files changed, 25 insertions(+), 25 deletions(-)
diff --git a/drivers/md/dm-cache-target.c b/drivers/md/dm-cache-target.c
index 5780acc..2238d6f 100644
--- a/drivers/md/dm-cache-target.c
+++ b/drivers/md/dm-cache-target.c
@@ -2771,7 +2771,7 @@ static int cache_create(struct cache_args *ca, struct cache **result)
ti->split_discard_bios = false;
cache->features = ca->features;
- ti->per_bio_data_size = get_per_bio_data_size(cache);
+ ti->per_io_data_size = get_per_bio_data_size(cache);
cache->callbacks.congested_fn = cache_is_congested;
dm_table_add_target_callbacks(ti->table, &cache->callbacks);
diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 3147c8d..5c934b6 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -1788,7 +1788,7 @@ static int crypt_ctr(struct dm_target *ti, unsigned int argc, char **argv)
goto bad;
}
- cc->per_bio_data_size = ti->per_bio_data_size =
+ cc->per_bio_data_size = ti->per_io_data_size =
ALIGN(sizeof(struct dm_crypt_io) + cc->dmreq_start +
sizeof(struct dm_crypt_request) + iv_size_padding + cc->iv_size,
ARCH_KMALLOC_MINALIGN);
diff --git a/drivers/md/dm-delay.c b/drivers/md/dm-delay.c
index b4c356a..cc70871 100644
--- a/drivers/md/dm-delay.c
+++ b/drivers/md/dm-delay.c
@@ -204,7 +204,7 @@ out:
ti->num_flush_bios = 1;
ti->num_discard_bios = 1;
- ti->per_bio_data_size = sizeof(struct dm_delay_info);
+ ti->per_io_data_size = sizeof(struct dm_delay_info);
ti->private = dc;
return 0;
diff --git a/drivers/md/dm-flakey.c b/drivers/md/dm-flakey.c
index 09e2afc..b7341de 100644
--- a/drivers/md/dm-flakey.c
+++ b/drivers/md/dm-flakey.c
@@ -220,7 +220,7 @@ static int flakey_ctr(struct dm_target *ti, unsigned int argc, char **argv)
ti->num_flush_bios = 1;
ti->num_discard_bios = 1;
- ti->per_bio_data_size = sizeof(struct per_bio_data);
+ ti->per_io_data_size = sizeof(struct per_bio_data);
ti->private = fc;
return 0;
diff --git a/drivers/md/dm-log-writes.c b/drivers/md/dm-log-writes.c
index 624589d..608302e 100644
--- a/drivers/md/dm-log-writes.c
+++ b/drivers/md/dm-log-writes.c
@@ -475,7 +475,7 @@ static int log_writes_ctr(struct dm_target *ti, unsigned int argc, char **argv)
ti->flush_supported = true;
ti->num_discard_bios = 1;
ti->discards_supported = true;
- ti->per_bio_data_size = sizeof(struct per_bio_data);
+ ti->per_io_data_size = sizeof(struct per_bio_data);
ti->private = lc;
return 0;
diff --git a/drivers/md/dm-raid1.c b/drivers/md/dm-raid1.c
index f2a363a..b3ccf1e 100644
--- a/drivers/md/dm-raid1.c
+++ b/drivers/md/dm-raid1.c
@@ -1121,7 +1121,7 @@ static int mirror_ctr(struct dm_target *ti, unsigned int argc, char **argv)
ti->num_flush_bios = 1;
ti->num_discard_bios = 1;
- ti->per_bio_data_size = sizeof(struct dm_raid1_bio_record);
+ ti->per_io_data_size = sizeof(struct dm_raid1_bio_record);
ti->discard_zeroes_data_unsupported = true;
ms->kmirrord_wq = alloc_workqueue("kmirrord", WQ_MEM_RECLAIM, 0);
diff --git a/drivers/md/dm-snap.c b/drivers/md/dm-snap.c
index 376638608..62479ac 100644
--- a/drivers/md/dm-snap.c
+++ b/drivers/md/dm-snap.c
@@ -1201,7 +1201,7 @@ static int snapshot_ctr(struct dm_target *ti, unsigned int argc, char **argv)
ti->private = s;
ti->num_flush_bios = num_flush_bios;
- ti->per_bio_data_size = sizeof(struct dm_snap_tracked_chunk);
+ ti->per_io_data_size = sizeof(struct dm_snap_tracked_chunk);
/* Add snapshot to the list of snapshots for this origin */
/* Exceptions aren't triggered till snapshot_resume() is called */
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 89180fd..7210e53 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -957,7 +957,7 @@ bool dm_table_mq_request_based(struct dm_table *t)
static int dm_table_alloc_md_mempools(struct dm_table *t, struct mapped_device *md)
{
unsigned type = dm_table_get_type(t);
- unsigned per_bio_data_size = 0;
+ unsigned per_io_data_size = 0;
struct dm_target *tgt;
unsigned i;
@@ -969,10 +969,10 @@ static int dm_table_alloc_md_mempools(struct dm_table *t, struct mapped_device *
if (type == DM_TYPE_BIO_BASED)
for (i = 0; i < t->num_targets; i++) {
tgt = t->targets + i;
- per_bio_data_size = max(per_bio_data_size, tgt->per_bio_data_size);
+ per_io_data_size = max(per_io_data_size, tgt->per_io_data_size);
}
- t->mempools = dm_alloc_md_mempools(md, type, t->integrity_supported, per_bio_data_size);
+ t->mempools = dm_alloc_md_mempools(md, type, t->integrity_supported, per_io_data_size);
if (!t->mempools)
return -ENOMEM;
diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
index 72d91f4..4fbbe1f 100644
--- a/drivers/md/dm-thin.c
+++ b/drivers/md/dm-thin.c
@@ -4037,7 +4037,7 @@ static int thin_ctr(struct dm_target *ti, unsigned argc, char **argv)
ti->num_flush_bios = 1;
ti->flush_supported = true;
- ti->per_bio_data_size = sizeof(struct dm_thin_endio_hook);
+ ti->per_io_data_size = sizeof(struct dm_thin_endio_hook);
/* In case the pool supports discards, pass them on. */
ti->discard_zeroes_data_unsupported = true;
diff --git a/drivers/md/dm-verity-fec.c b/drivers/md/dm-verity-fec.c
index 1cc10c4..459a9f8 100644
--- a/drivers/md/dm-verity-fec.c
+++ b/drivers/md/dm-verity-fec.c
@@ -812,7 +812,7 @@ int verity_fec_ctr(struct dm_verity *v)
}
/* Reserve space for our per-bio data */
- ti->per_bio_data_size += sizeof(struct dm_verity_fec_io);
+ ti->per_io_data_size += sizeof(struct dm_verity_fec_io);
return 0;
}
diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c
index 5c5d30c..0aba34a 100644
--- a/drivers/md/dm-verity-target.c
+++ b/drivers/md/dm-verity-target.c
@@ -354,7 +354,7 @@ int verity_for_bv_block(struct dm_verity *v, struct dm_verity_io *io,
size_t len))
{
unsigned todo = 1 << v->data_dev_block_bits;
- struct bio *bio = dm_bio_from_per_bio_data(io, v->ti->per_bio_data_size);
+ struct bio *bio = dm_bio_from_per_bio_data(io, v->ti->per_io_data_size);
do {
int r;
@@ -460,7 +460,7 @@ static int verity_verify_io(struct dm_verity_io *io)
static void verity_finish_io(struct dm_verity_io *io, int error)
{
struct dm_verity *v = io->v;
- struct bio *bio = dm_bio_from_per_bio_data(io, v->ti->per_bio_data_size);
+ struct bio *bio = dm_bio_from_per_bio_data(io, v->ti->per_io_data_size);
bio->bi_end_io = io->orig_bi_end_io;
bio->bi_error = error;
@@ -574,7 +574,7 @@ static int verity_map(struct dm_target *ti, struct bio *bio)
if (bio_data_dir(bio) == WRITE)
return -EIO;
- io = dm_per_bio_data(bio, ti->per_bio_data_size);
+ io = dm_per_bio_data(bio, ti->per_io_data_size);
io->v = v;
io->orig_bi_end_io = bio->bi_end_io;
io->block = bio->bi_iter.bi_sector >> (v->data_dev_block_bits - SECTOR_SHIFT);
@@ -1036,15 +1036,15 @@ static int verity_ctr(struct dm_target *ti, unsigned argc, char **argv)
goto bad;
}
- ti->per_bio_data_size = sizeof(struct dm_verity_io) +
+ ti->per_io_data_size = sizeof(struct dm_verity_io) +
v->shash_descsize + v->digest_size * 2;
r = verity_fec_ctr(v);
if (r)
goto bad;
- ti->per_bio_data_size = roundup(ti->per_bio_data_size,
- __alignof__(struct dm_verity_io));
+ ti->per_io_data_size = roundup(ti->per_io_data_size,
+ __alignof__(struct dm_verity_io));
return 0;
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 1fab790..8bc798c 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -3476,7 +3476,7 @@ int dm_noflush_suspending(struct dm_target *ti)
EXPORT_SYMBOL_GPL(dm_noflush_suspending);
struct dm_md_mempools *dm_alloc_md_mempools(struct mapped_device *md, unsigned type,
- unsigned integrity, unsigned per_bio_data_size)
+ unsigned integrity, unsigned per_io_data_size)
{
struct dm_md_mempools *pools = kzalloc(sizeof(*pools), GFP_KERNEL);
struct kmem_cache *cachep = NULL;
@@ -3492,7 +3492,7 @@ struct dm_md_mempools *dm_alloc_md_mempools(struct mapped_device *md, unsigned t
case DM_TYPE_BIO_BASED:
cachep = _io_cache;
pool_size = dm_get_reserved_bio_based_ios();
- front_pad = roundup(per_bio_data_size, __alignof__(struct dm_target_io)) + offsetof(struct dm_target_io, clone);
+ front_pad = roundup(per_io_data_size, __alignof__(struct dm_target_io)) + offsetof(struct dm_target_io, clone);
break;
case DM_TYPE_REQUEST_BASED:
cachep = _rq_tio_cache;
@@ -3505,8 +3505,8 @@ struct dm_md_mempools *dm_alloc_md_mempools(struct mapped_device *md, unsigned t
if (!pool_size)
pool_size = dm_get_reserved_rq_based_ios();
front_pad = offsetof(struct dm_rq_clone_bio_info, clone);
- /* per_bio_data_size is not used. See __bind_mempools(). */
- WARN_ON(per_bio_data_size != 0);
+ /* per_io_data_size is not used. */
+ WARN_ON(per_io_data_size != 0);
break;
default:
BUG();
diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
index 87d50ec..82ae3b5 100644
--- a/include/linux/device-mapper.h
+++ b/include/linux/device-mapper.h
@@ -238,10 +238,10 @@ struct dm_target {
unsigned num_write_same_bios;
/*
- * The minimum number of extra bytes allocated in each bio for the
- * target to use. dm_per_bio_data returns the data location.
+ * The minimum number of extra bytes allocated in each io for the
+ * target to use.
*/
- unsigned per_bio_data_size;
+ unsigned per_io_data_size;
/*
* If defined, this function is called to find out how many
--
2.5.4 (Apple Git-61)
* [dm-4.6 PATCH v2 10/15] dm: allow immutable request-based targets to use blk-mq pdu
From: Mike Snitzer @ 2016-02-07 15:53 UTC (permalink / raw)
To: dm-devel; +Cc: Mike Snitzer, Sagi Grimberg
This will allow DM multipath to use a portion of the blk-mq pdu space
for target data (e.g. struct dm_mpath_io).
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
---
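A note below the cut: the resulting blk-mq pdu layout, as I read the
cmd_size arithmetic in this patch (illustrative diagram only):

    blk_mq_rq_to_pdu(rq)
    +--------------------------------+
    | struct dm_rq_target_io (tio)   |
    +--------------------------------+
    | target per-io data             |  tio->info.ptr = tio + 1
    | (per_io_data_size bytes, 0 if  |
    |  the target requests none)     |
    +--------------------------------+
    | struct request (clone)         |  only when stacked on
    +--------------------------------+  .request_fn paths
                                        (DM_TYPE_REQUEST_BASED)
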
drivers/md/dm-ioctl.c | 2 +-
drivers/md/dm.c | 45 +++++++++++++++++++++++++++++++++++++--------
drivers/md/dm.h | 2 +-
3 files changed, 39 insertions(+), 10 deletions(-)
diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
index 4763c4a..2adf81d 100644
--- a/drivers/md/dm-ioctl.c
+++ b/drivers/md/dm-ioctl.c
@@ -1304,7 +1304,7 @@ static int table_load(struct dm_ioctl *param, size_t param_size)
dm_set_md_type(md, dm_table_get_type(t));
/* setup md->queue to reflect md's type (may block) */
- r = dm_setup_md_queue(md);
+ r = dm_setup_md_queue(md, t);
if (r) {
DMWARN("unable to set up device queue for new table.");
goto err_unlock_md_type;
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 8bc798c..6b7e80e 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -225,6 +225,7 @@ struct mapped_device {
/* for blk-mq request-based DM support */
struct blk_mq_tag_set *tag_set;
bool use_blk_mq;
+ bool init_tio_pdu;
};
#ifdef CONFIG_DM_MQ_DEFAULT
@@ -243,6 +244,7 @@ bool dm_use_blk_mq(struct mapped_device *md)
{
return md->use_blk_mq;
}
+EXPORT_SYMBOL_GPL(dm_use_blk_mq);
/*
* For mempools pre-allocation at the table loading time.
@@ -1850,6 +1852,8 @@ static struct request *clone_rq(struct request *rq, struct mapped_device *md,
struct dm_rq_target_io *tio, gfp_t gfp_mask)
{
/*
+ * Create clone for use with .request_fn request_queue
+ *
* Do not allocate a clone if tio->clone was already set
* (see: dm_mq_queue_rq).
*/
@@ -1884,7 +1888,13 @@ static void init_tio(struct dm_rq_target_io *tio, struct request *rq,
tio->clone = NULL;
tio->orig = rq;
tio->error = 0;
- memset(&tio->info, 0, sizeof(tio->info));
+ /*
+ * Avoid initializing info for blk-mq; it passes
+ * target-specific data through info.ptr
+ * (see: dm_mq_init_request)
+ */
+ if (md->init_tio_pdu)
+ memset(&tio->info, 0, sizeof(tio->info));
if (md->kworker_task)
init_kthread_work(&tio->work, map_tio_request);
}
@@ -2303,6 +2313,7 @@ static struct mapped_device *alloc_dev(int minor)
goto bad_io_barrier;
md->use_blk_mq = use_blk_mq;
+ md->init_tio_pdu = true;
md->type = DM_TYPE_NONE;
mutex_init(&md->suspend_lock);
mutex_init(&md->type_lock);
@@ -2643,6 +2654,16 @@ static int dm_mq_init_request(void *data, struct request *rq,
*/
tio->md = md;
+ /*
+ * FIXME: If/when there is another blk-mq request-based DM target
+ * other than multipath: make conditional on ti->per_bio_data_size
+ * but it is a serious pain to get target here.
+ */
+ {
+ /* target-specific per-io data is immediately after the tio */
+ tio->info.ptr = tio + 1;
+ }
+
return 0;
}
@@ -2680,8 +2701,11 @@ static int dm_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
* Both the table and md type cannot change after initial table load
*/
if (dm_get_md_type(md) == DM_TYPE_REQUEST_BASED) {
- /* clone request is allocated at the end of the pdu */
- tio->clone = (void *)blk_mq_rq_to_pdu(rq) + sizeof(struct dm_rq_target_io);
+ /*
+ * Clone the request if underlying devices aren't blk-mq
+ * - clone request is allocated at the end of the pdu
+ */
+ tio->clone = blk_mq_rq_to_pdu(rq) + sizeof(*tio) + ti->per_io_data_size;
(void) clone_rq(rq, md, tio, GFP_ATOMIC);
queue_kthread_work(&md->kworker, &tio->work);
} else {
@@ -2704,7 +2728,8 @@ static struct blk_mq_ops dm_mq_ops = {
.init_request = dm_mq_init_request,
};
-static int dm_init_request_based_blk_mq_queue(struct mapped_device *md)
+static int dm_init_request_based_blk_mq_queue(struct mapped_device *md,
+ struct dm_target *immutable_tgt)
{
unsigned md_type = dm_get_md_type(md);
struct request_queue *q;
@@ -2719,6 +2744,11 @@ static int dm_init_request_based_blk_mq_queue(struct mapped_device *md)
md->tag_set->driver_data = md;
md->tag_set->cmd_size = sizeof(struct dm_rq_target_io);
+ if (immutable_tgt && immutable_tgt->per_io_data_size) {
+ /* any target-specific per-io data is immediately after the tio */
+ md->tag_set->cmd_size += immutable_tgt->per_io_data_size;
+ md->init_tio_pdu = false;
+ }
if (md_type == DM_TYPE_REQUEST_BASED) {
/* put the memory for non-blk-mq clone at the end of the pdu */
md->tag_set->cmd_size += sizeof(struct request);
@@ -2763,7 +2793,7 @@ static unsigned filter_md_type(unsigned type, struct mapped_device *md)
/*
* Setup the DM device's queue based on md's type
*/
-int dm_setup_md_queue(struct mapped_device *md)
+int dm_setup_md_queue(struct mapped_device *md, struct dm_table *t)
{
int r;
unsigned md_type = filter_md_type(dm_get_md_type(md), md);
@@ -2777,7 +2807,7 @@ int dm_setup_md_queue(struct mapped_device *md)
}
break;
case DM_TYPE_MQ_REQUEST_BASED:
- r = dm_init_request_based_blk_mq_queue(md);
+ r = dm_init_request_based_blk_mq_queue(md, dm_table_get_immutable_target(t));
if (r) {
DMWARN("Cannot initialize queue for request-based blk-mq mapped device");
return r;
@@ -3505,8 +3535,7 @@ struct dm_md_mempools *dm_alloc_md_mempools(struct mapped_device *md, unsigned t
if (!pool_size)
pool_size = dm_get_reserved_rq_based_ios();
front_pad = offsetof(struct dm_rq_clone_bio_info, clone);
- /* per_io_data_size is not used. */
- WARN_ON(per_io_data_size != 0);
+ /* per_io_data_size is used for blk-mq pdu at queue allocation */
break;
default:
BUG();
diff --git a/drivers/md/dm.h b/drivers/md/dm.h
index 4305a51..13a758e 100644
--- a/drivers/md/dm.h
+++ b/drivers/md/dm.h
@@ -86,7 +86,7 @@ void dm_set_md_type(struct mapped_device *md, unsigned type);
unsigned dm_get_md_type(struct mapped_device *md);
struct target_type *dm_get_immutable_target_type(struct mapped_device *md);
-int dm_setup_md_queue(struct mapped_device *md);
+int dm_setup_md_queue(struct mapped_device *md, struct dm_table *t);
/*
* To check the return value from dm_table_find_target().
--
2.5.4 (Apple Git-61)
* [dm-4.6 PATCH v2 11/15] dm mpath: use blk-mq pdu for per-request 'struct dm_mpath_io'
From: Mike Snitzer @ 2016-02-07 15:53 UTC (permalink / raw)
To: dm-devel; +Cc: Mike Snitzer, Sagi Grimberg
Allow the multipath target to avoid making small allocations for each
'struct dm_mpath_io' that is needed for each request.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
---
drivers/md/dm-mpath.c | 40 +++++++++++++++++++++++++++++-----------
1 file changed, 29 insertions(+), 11 deletions(-)
diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
index 3ddaa11..021ea39 100644
--- a/drivers/md/dm-mpath.c
+++ b/drivers/md/dm-mpath.c
@@ -181,10 +181,9 @@ static void free_priority_group(struct priority_group *pg,
kfree(pg);
}
-static struct multipath *alloc_multipath(struct dm_target *ti)
+static struct multipath *alloc_multipath(struct dm_target *ti, bool use_blk_mq)
{
struct multipath *m;
- unsigned min_ios = dm_get_reserved_rq_based_ios();
m = kzalloc(sizeof(*m), GFP_KERNEL);
if (m) {
@@ -195,11 +194,18 @@ static struct multipath *alloc_multipath(struct dm_target *ti)
INIT_WORK(&m->trigger_event, trigger_event);
init_waitqueue_head(&m->pg_init_wait);
mutex_init(&m->work_mutex);
- m->mpio_pool = mempool_create_slab_pool(min_ios, _mpio_cache);
- if (!m->mpio_pool) {
- kfree(m);
- return NULL;
+
+ m->mpio_pool = NULL;
+ if (!use_blk_mq) {
+ unsigned min_ios = dm_get_reserved_rq_based_ios();
+
+ m->mpio_pool = mempool_create_slab_pool(min_ios, _mpio_cache);
+ if (!m->mpio_pool) {
+ kfree(m);
+ return NULL;
+ }
}
+
m->ti = ti;
ti->private = m;
}
@@ -226,6 +232,13 @@ static int set_mapinfo(struct multipath *m, union map_info *info)
{
struct dm_mpath_io *mpio;
+ if (!m->mpio_pool) {
+ /* Use blk-mq pdu memory requested via per_io_data_size */
+ mpio = info->ptr;
+ memset(mpio, 0, sizeof(*mpio));
+ return mpio;
+ }
+
mpio = mempool_alloc(m->mpio_pool, GFP_ATOMIC);
if (!mpio)
return -ENOMEM;
@@ -238,10 +251,13 @@ static int set_mapinfo(struct multipath *m, union map_info *info)
static void clear_mapinfo(struct multipath *m, union map_info *info)
{
- struct dm_mpath_io *mpio = info->ptr;
+ /* Only needed for non blk-mq */
+ if (m->mpio_pool) {
+ struct dm_mpath_io *mpio = info->ptr;
- info->ptr = NULL;
- mempool_free(mpio, m->mpio_pool);
+ info->ptr = NULL;
+ mempool_free(mpio, m->mpio_pool);
+ }
}
/*-----------------------------------------------
@@ -428,7 +444,6 @@ static int __multipath_map(struct dm_target *ti, struct request *clone,
rq_data_dir(rq), GFP_ATOMIC);
if (IS_ERR(*__clone)) {
/* ENOMEM, requeue */
- clear_mapinfo(m, map_context);
return r;
}
(*__clone)->bio = (*__clone)->biotail = NULL;
@@ -820,11 +835,12 @@ static int multipath_ctr(struct dm_target *ti, unsigned int argc,
struct dm_arg_set as;
unsigned pg_count = 0;
unsigned next_pg_num;
+ bool use_blk_mq = dm_use_blk_mq(dm_table_get_md(ti->table));
as.argc = argc;
as.argv = argv;
- m = alloc_multipath(ti);
+ m = alloc_multipath(ti, use_blk_mq);
if (!m) {
ti->error = "can't allocate multipath";
return -EINVAL;
@@ -880,6 +896,8 @@ static int multipath_ctr(struct dm_target *ti, unsigned int argc,
ti->num_flush_bios = 1;
ti->num_discard_bios = 1;
ti->num_write_same_bios = 1;
+ if (use_blk_mq)
+ ti->per_io_data_size = sizeof(struct dm_mpath_io);
return 0;
--
2.5.4 (Apple Git-61)
* [dm-4.6 PATCH v2 12/15] dm mpath: cleanup 'struct dm_mpath_io' management code
From: Mike Snitzer @ 2016-02-07 15:53 UTC (permalink / raw)
To: dm-devel; +Cc: Mike Snitzer, Sagi Grimberg
Refactor and rename existing interfaces to be more specific and
self-documenting.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
---
drivers/md/dm-mpath.c | 27 ++++++++++++++++-----------
1 file changed, 16 insertions(+), 11 deletions(-)
diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
index 021ea39..612ec57 100644
--- a/drivers/md/dm-mpath.c
+++ b/drivers/md/dm-mpath.c
@@ -228,30 +228,35 @@ static void free_multipath(struct multipath *m)
kfree(m);
}
-static int set_mapinfo(struct multipath *m, union map_info *info)
+static struct dm_mpath_io *get_mpio(union map_info *info)
+{
+ return info->ptr;
+}
+
+static struct dm_mpath_io *set_mpio(struct multipath *m, union map_info *info)
{
struct dm_mpath_io *mpio;
if (!m->mpio_pool) {
/* Use blk-mq pdu memory requested via per_io_data_size */
- mpio = info->ptr;
+ mpio = get_mpio(info);
memset(mpio, 0, sizeof(*mpio));
return mpio;
}
mpio = mempool_alloc(m->mpio_pool, GFP_ATOMIC);
if (!mpio)
- return -ENOMEM;
+ return NULL;
memset(mpio, 0, sizeof(*mpio));
info->ptr = mpio;
- return 0;
+ return mpio;
}
-static void clear_mapinfo(struct multipath *m, union map_info *info)
+static void clear_request_fn_mpio(struct multipath *m, union map_info *info)
{
- /* Only needed for non blk-mq */
+ /* Only needed for non blk-mq (.request_fn) multipath */
if (m->mpio_pool) {
struct dm_mpath_io *mpio = info->ptr;
@@ -421,11 +426,11 @@ static int __multipath_map(struct dm_target *ti, struct request *clone,
goto out_unlock;
}
- if (set_mapinfo(m, map_context) < 0)
+ mpio = set_mpio(m, map_context);
+ if (!mpio)
/* ENOMEM, requeue */
goto out_unlock;
- mpio = map_context->ptr;
mpio->pgpath = pgpath;
mpio->nr_bytes = nr_bytes;
@@ -1309,21 +1314,21 @@ static int multipath_end_io(struct dm_target *ti, struct request *clone,
int error, union map_info *map_context)
{
struct multipath *m = ti->private;
- struct dm_mpath_io *mpio = map_context->ptr;
+ struct dm_mpath_io *mpio = get_mpio(map_context);
struct pgpath *pgpath;
struct path_selector *ps;
int r;
BUG_ON(!mpio);
- r = do_end_io(m, clone, error, mpio);
+ r = do_end_io(m, clone, error, mpio);
pgpath = mpio->pgpath;
if (pgpath) {
ps = &pgpath->pg->ps;
if (ps->type->end_io)
ps->type->end_io(ps, &pgpath->path, mpio->nr_bytes);
}
- clear_mapinfo(m, map_context);
+ clear_request_fn_mpio(m, map_context);
return r;
}
--
2.5.4 (Apple Git-61)
* [dm-4.6 PATCH v2 13/15] dm mpath: use blk_mq_alloc_request() and blk_mq_free_request() directly
From: Mike Snitzer @ 2016-02-07 15:53 UTC (permalink / raw)
To: dm-devel; +Cc: Mike Snitzer, Sagi Grimberg
There isn't any need to support both old .request_fn and blk-mq paths
in the blk-mq specific portion of __multipath_map(). Call
blk_mq_alloc_request() directly rather than use blk_get_request().
Similarly, call blk_mq_free_request(), rather than blk_put_request(), in
multipath_release_clone().
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
---
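A note below the cut: blk_mq_alloc_request() in this tree takes a flags
argument rather than a gfp_t (quoting the prototype from memory, please
verify against include/linux/blk-mq.h):

    struct request *blk_mq_alloc_request(struct request_queue *q, int rw,
                                         unsigned int flags);

BLK_MQ_REQ_NOWAIT preserves the old GFP_ATOMIC behaviour: if no tag is
available the allocation fails immediately and __multipath_map() returns
DM_MAPIO_REQUEUE rather than sleeping in the map path.
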
drivers/md/dm-mpath.c | 19 ++++++++++++++-----
1 file changed, 14 insertions(+), 5 deletions(-)
diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
index 612ec57..7986446 100644
--- a/drivers/md/dm-mpath.c
+++ b/drivers/md/dm-mpath.c
@@ -23,6 +23,7 @@
#include <linux/delay.h>
#include <scsi/scsi_dh.h>
#include <linux/atomic.h>
+#include <linux/blk-mq.h>
#define DM_MSG_PREFIX "multipath"
#define DM_PG_INIT_DELAY_MSECS 2000
@@ -439,14 +440,22 @@ static int __multipath_map(struct dm_target *ti, struct request *clone,
spin_unlock_irq(&m->lock);
if (clone) {
- /* Old request-based interface: allocated clone is passed in */
+ /*
+ * Old request-based interface: allocated clone is passed in.
+ * Used by both: .request_fn stacked on .request_fn path(s) and
+ * blk-mq stacked on .request_fn path(s).
+ */
clone->q = bdev_get_queue(bdev);
clone->rq_disk = bdev->bd_disk;
clone->cmd_flags |= REQ_FAILFAST_TRANSPORT;
} else {
- /* blk-mq request-based interface */
- *__clone = blk_get_request(bdev_get_queue(bdev),
- rq_data_dir(rq), GFP_ATOMIC);
+ /*
+ * blk-mq request-based interface; used by both:
+ * .request_fn stacked on blk-mq path(s) and
+ * blk-mq stacked on blk-mq path(s).
+ */
+ *__clone = blk_mq_alloc_request(bdev_get_queue(bdev),
+ rq_data_dir(rq), BLK_MQ_REQ_NOWAIT);
if (IS_ERR(*__clone)) {
/* ENOMEM, requeue */
return r;
@@ -483,7 +492,7 @@ static int multipath_clone_and_map(struct dm_target *ti, struct request *rq,
static void multipath_release_clone(struct request *clone)
{
- blk_put_request(clone);
+ blk_mq_free_request(clone);
}
/*
--
2.5.4 (Apple Git-61)
* [dm-4.6 PATCH v2 14/15] dm mpath: reduce granularity of locking in __multipath_map
From: Mike Snitzer @ 2016-02-07 15:53 UTC (permalink / raw)
To: dm-devel; +Cc: Mike Snitzer, Sagi Grimberg
No need to hold m->lock after path has been selected (and 'struct
multipath' state updated).
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
---
drivers/md/dm-mpath.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
index 7986446..199d3d3 100644
--- a/drivers/md/dm-mpath.c
+++ b/drivers/md/dm-mpath.c
@@ -427,18 +427,18 @@ static int __multipath_map(struct dm_target *ti, struct request *clone,
goto out_unlock;
}
+ spin_unlock_irq(&m->lock);
+
mpio = set_mpio(m, map_context);
if (!mpio)
/* ENOMEM, requeue */
- goto out_unlock;
+ return r;
mpio->pgpath = pgpath;
mpio->nr_bytes = nr_bytes;
bdev = pgpath->path.dev->bdev;
- spin_unlock_irq(&m->lock);
-
if (clone) {
/*
* Old request-based interface: allocated clone is passed in.
--
2.5.4 (Apple Git-61)
* [dm-4.6 PATCH v2 15/15] dm mpath: remove unnecessary casts in front of ti->private
From: Mike Snitzer @ 2016-02-07 15:53 UTC (permalink / raw)
To: dm-devel; +Cc: Mike Snitzer, Sagi Grimberg
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
---
drivers/md/dm-mpath.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
index 199d3d3..177a016 100644
--- a/drivers/md/dm-mpath.c
+++ b/drivers/md/dm-mpath.c
@@ -402,7 +402,7 @@ static int __multipath_map(struct dm_target *ti, struct request *clone,
union map_info *map_context,
struct request *rq, struct request **__clone)
{
- struct multipath *m = (struct multipath *) ti->private;
+ struct multipath *m = ti->private;
int r = DM_MAPIO_REQUEUE;
size_t nr_bytes = clone ? blk_rq_bytes(clone) : blk_rq_bytes(rq);
struct pgpath *pgpath;
@@ -1350,7 +1350,7 @@ static int multipath_end_io(struct dm_target *ti, struct request *clone,
*/
static void multipath_presuspend(struct dm_target *ti)
{
- struct multipath *m = (struct multipath *) ti->private;
+ struct multipath *m = ti->private;
queue_if_no_path(m, 0, 1);
}
@@ -1369,7 +1369,7 @@ static void multipath_postsuspend(struct dm_target *ti)
*/
static void multipath_resume(struct dm_target *ti)
{
- struct multipath *m = (struct multipath *) ti->private;
+ struct multipath *m = ti->private;
unsigned long flags;
spin_lock_irqsave(&m->lock, flags);
@@ -1398,7 +1398,7 @@ static void multipath_status(struct dm_target *ti, status_type_t type,
{
int sz = 0;
unsigned long flags;
- struct multipath *m = (struct multipath *) ti->private;
+ struct multipath *m = ti->private;
struct priority_group *pg;
struct pgpath *p;
unsigned pg_num;
@@ -1506,7 +1506,7 @@ static int multipath_message(struct dm_target *ti, unsigned argc, char **argv)
{
int r = -EINVAL;
struct dm_dev *dev;
- struct multipath *m = (struct multipath *) ti->private;
+ struct multipath *m = ti->private;
action_fn action;
mutex_lock(&m->work_mutex);
--
2.5.4 (Apple Git-61)