* [PATCH v2 00/12] dm: enable discard support for more targets
From: Mike Snitzer @ 2010-07-24 16:09 UTC
To: dm-devel
v2 rebases all patches to Alasdair's latest editing tree and
linux-2.6-block's 'for-2.6.36'.
This patchset enables discard support for most of the DM targets on
which discards are intended to be supported.
This patchset is also available here:
http://people.redhat.com/msnitzer/patches/dm-discard-advanced/latest/
The stripe target's discard support was the most tedious and
challenging to implement. It may see further edits before it lands
upstream.
The mirror target still needs discard support. Either I or someone
else (nudge: Mikulas and/or Jon? :) will need to implement that.
The snapshot and crypt targets will not have discard support.
Snapshots must preserve any data that is deleted, so the value of
discard there is negligible. Discard support for the origin target may
be considered in the future (it could be especially useful if origin
and COW are different devices and the origin is a thinly provisioned
LUN).
Crypt devices are concerned with security and, until proven otherwise,
it is believed that discards would leak too much pattern information to
the crypt device's underlying storage (especially when the underlying
storage implements discards that zero data).
Mike Snitzer (12):
dm: rename map_info flush_request to target_request_nr
dm: introduce num_discard_requests in dm_target structure
dm: remove the DM_TARGET_SUPPORTS_DISCARDS feature flag
dm: use common __issue_target_request for flush and discard support
dm: factor max_io_len for code reuse
dm: split discard requests on target boundaries
dm zero: silently drop discards too
dm error: return error for discards too
dm delay: enable discard support
block: update request stacking methods to support discards
dm mpath: enable discard support
dm stripe: enable efficient discard support
block/blk-core.c | 5 +
drivers/md/dm-delay.c | 1 +
drivers/md/dm-linear.c | 2 +-
drivers/md/dm-mpath.c | 1 +
drivers/md/dm-snap.c | 2 +-
drivers/md/dm-stripe.c | 180 ++++++++++++++++++++++++++++++++++++++---
drivers/md/dm-table.c | 2 +-
drivers/md/dm-target.c | 3 +
drivers/md/dm-zero.c | 3 +
drivers/md/dm.c | 89 +++++++++++++-------
include/linux/device-mapper.h | 11 ++-
11 files changed, 253 insertions(+), 46 deletions(-)
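For anyone who wants to exercise the new support once this series is
applied, here is a minimal userspace sketch (the device path is
hypothetical; any DM device whose table advertises discards will do)
that issues a discard through the BLKDISCARD ioctl:

  /* Illustrative only: discard the first 1 MiB of a DM device. */
  #include <fcntl.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <linux/fs.h>

  int main(void)
  {
          uint64_t range[2] = { 0, 1024 * 1024 };  /* byte offset, byte length */
          int fd = open("/dev/mapper/example", O_WRONLY);  /* hypothetical name */

          if (fd < 0 || ioctl(fd, BLKDISCARD, range) < 0) {
                  perror("BLKDISCARD");
                  return 1;
          }
          return 0;
  }

With CONFIG_DM_DEBUG enabled this is also an easy way to trigger the
DMDEBUG output added by the stripe patch at the end of the series.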
* [PATCH v2 01/12] dm: rename map_info flush_request to target_request_nr
From: Mike Snitzer @ 2010-07-24 16:09 UTC
To: dm-devel
'target_request_nr' is a more generic name that reflects the fact that
it will be used for both flush and discard support.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
---
drivers/md/dm-snap.c | 2 +-
drivers/md/dm-stripe.c | 6 ++++--
drivers/md/dm.c | 18 +++++++++---------
include/linux/device-mapper.h | 4 ++--
4 files changed, 16 insertions(+), 14 deletions(-)
Index: linux-2.6-block/drivers/md/dm-snap.c
===================================================================
--- linux-2.6-block.orig/drivers/md/dm-snap.c
+++ linux-2.6-block/drivers/md/dm-snap.c
@@ -1692,7 +1692,7 @@ static int snapshot_merge_map(struct dm_
chunk_t chunk;
if (unlikely(bio_empty_barrier(bio))) {
- if (!map_context->flush_request)
+ if (!map_context->target_request_nr)
bio->bi_bdev = s->origin->bdev;
else
bio->bi_bdev = s->cow->bdev;
Index: linux-2.6-block/drivers/md/dm-stripe.c
===================================================================
--- linux-2.6-block.orig/drivers/md/dm-stripe.c
+++ linux-2.6-block/drivers/md/dm-stripe.c
@@ -213,10 +213,12 @@ static int stripe_map(struct dm_target *
struct stripe_c *sc = (struct stripe_c *) ti->private;
sector_t offset, chunk;
uint32_t stripe;
+ unsigned target_request_nr;
if (unlikely(bio_empty_barrier(bio))) {
- BUG_ON(map_context->flush_request >= sc->stripes);
- bio->bi_bdev = sc->stripe[map_context->flush_request].dev->bdev;
+ target_request_nr = map_context->target_request_nr;
+ BUG_ON(target_request_nr >= sc->stripes);
+ bio->bi_bdev = sc->stripe[target_request_nr].dev->bdev;
return DM_MAPIO_REMAPPED;
}
Index: linux-2.6-block/drivers/md/dm.c
===================================================================
--- linux-2.6-block.orig/drivers/md/dm.c
+++ linux-2.6-block/drivers/md/dm.c
@@ -1182,12 +1182,12 @@ static struct dm_target_io *alloc_tio(st
}
static void __flush_target(struct clone_info *ci, struct dm_target *ti,
- unsigned flush_nr)
+ unsigned request_nr)
{
struct dm_target_io *tio = alloc_tio(ci, ti);
struct bio *clone;
- tio->info.flush_request = flush_nr;
+ tio->info.target_request_nr = request_nr;
clone = bio_alloc_bioset(GFP_NOIO, 0, ci->md->bs);
__bio_clone(clone, ci->bio);
@@ -1198,13 +1198,13 @@ static void __flush_target(struct clone_
static int __clone_and_map_empty_barrier(struct clone_info *ci)
{
- unsigned target_nr = 0, flush_nr;
+ unsigned target_nr = 0, request_nr;
struct dm_target *ti;
while ((ti = dm_table_get_target(ci->map, target_nr++)))
- for (flush_nr = 0; flush_nr < ti->num_flush_requests;
- flush_nr++)
- __flush_target(ci, ti, flush_nr);
+ for (request_nr = 0; request_nr < ti->num_flush_requests;
+ request_nr++)
+ __flush_target(ci, ti, request_nr);
ci->sector_count = 0;
@@ -2435,11 +2435,11 @@ static void dm_queue_flush(struct mapped
queue_work(md->wq, &md->work);
}
-static void dm_rq_set_flush_nr(struct request *clone, unsigned flush_nr)
+static void dm_rq_set_target_request_nr(struct request *clone, unsigned request_nr)
{
struct dm_rq_target_io *tio = clone->end_io_data;
- tio->info.flush_request = flush_nr;
+ tio->info.target_request_nr = request_nr;
}
/* Issue barrier requests to targets and wait for their completion. */
@@ -2457,7 +2457,7 @@ static int dm_rq_barrier(struct mapped_d
ti = dm_table_get_target(map, i);
for (j = 0; j < ti->num_flush_requests; j++) {
clone = clone_rq(md->flush_request, md, GFP_NOIO);
- dm_rq_set_flush_nr(clone, j);
+ dm_rq_set_target_request_nr(clone, j);
atomic_inc(&md->pending[rq_data_dir(clone)]);
map_request(ti, clone, md);
}
Index: linux-2.6-block/include/linux/device-mapper.h
===================================================================
--- linux-2.6-block.orig/include/linux/device-mapper.h
+++ linux-2.6-block/include/linux/device-mapper.h
@@ -22,7 +22,7 @@ typedef enum { STATUSTYPE_INFO, STATUSTY
union map_info {
void *ptr;
unsigned long long ll;
- unsigned flush_request;
+ unsigned target_request_nr;
};
/*
@@ -175,7 +175,7 @@ struct dm_target {
* A number of zero-length barrier requests that will be submitted
* to the target for the purpose of flushing cache.
*
- * The request number will be placed in union map_info->flush_request.
+ * The request number will be placed in union map_info->target_request_nr.
* It is a responsibility of the target driver to remap these requests
* to the real underlying devices.
*/
* [PATCH v2 02/12] dm: introduce num_discard_requests in dm_target structure
From: Mike Snitzer @ 2010-07-24 16:09 UTC
To: dm-devel
A target with num_discard_requests > 0 supports discard requests.
The configured number of discard requests will be submitted to the
target.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
---
drivers/md/dm-linear.c | 1 +
include/linux/device-mapper.h | 6 ++++++
2 files changed, 7 insertions(+), 0 deletions(-)
diff --git a/drivers/md/dm-linear.c b/drivers/md/dm-linear.c
index 7071f17..8e925fa 100644
--- a/drivers/md/dm-linear.c
+++ b/drivers/md/dm-linear.c
@@ -53,6 +53,7 @@ static int linear_ctr(struct dm_target *ti, unsigned int argc, char **argv)
}
ti->num_flush_requests = 1;
+ ti->num_discard_requests = 1;
ti->private = lc;
return 0;
diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
index 87beab7..0fe597d 100644
--- a/include/linux/device-mapper.h
+++ b/include/linux/device-mapper.h
@@ -181,6 +181,12 @@ struct dm_target {
*/
unsigned num_flush_requests;
+ /*
+ * The number of discard requests that will be submitted to the
+ * target. map_info->target_request_nr is used just like num_flush_requests.
+ */
+ unsigned num_discard_requests;
+
/* target specific data */
void *private;
--
1.6.6.1
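As a rough sketch of how a bio-based target with more than one
underlying device might consume this (example_c and example_map are
hypothetical names, not part of this series): the constructor would set
num_discard_requests to the number of devices, and the map function
would route each resulting discard clone by target_request_nr, much
like the existing per-device flush handling. The sketch deliberately
omits trimming the clone's range to the portion that lives on the
chosen device, which a real target must do (dm-stripe does exactly that
later in the series).

  #include <linux/device-mapper.h>
  #include <linux/bio.h>

  struct example_c {
          unsigned nr_devs;
          struct dm_dev *dev[2];          /* two devices, for brevity */
  };

  /* In the (omitted) constructor: ti->num_discard_requests = ec->nr_devs; */

  static int example_map(struct dm_target *ti, struct bio *bio,
                         union map_info *map_context)
  {
          struct example_c *ec = ti->private;

          if (unlikely(bio->bi_rw & REQ_DISCARD)) {
                  /*
                   * DM core submits num_discard_requests clones of the
                   * discard; target_request_nr identifies which clone this
                   * is, so route it to the matching underlying device.
                   */
                  unsigned request_nr = map_context->target_request_nr;

                  BUG_ON(request_nr >= ec->nr_devs);
                  bio->bi_bdev = ec->dev[request_nr]->bdev;
                  return DM_MAPIO_REMAPPED;
          }

          /* normal I/O would be remapped here */
          bio->bi_bdev = ec->dev[0]->bdev;
          return DM_MAPIO_REMAPPED;
  }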
* [PATCH v2 03/12] dm: remove the DM_TARGET_SUPPORTS_DISCARDS feature flag
From: Mike Snitzer @ 2010-07-24 16:09 UTC
To: dm-devel
Eliminate the DM_TARGET_SUPPORTS_DISCARDS feature flag now that
dm_target's 'num_discard_requests' provides the mechanism to enable
discards on a per-target basis.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
---
drivers/md/dm-linear.c | 1 -
drivers/md/dm-table.c | 2 +-
drivers/md/dm.c | 2 +-
include/linux/device-mapper.h | 1 -
4 files changed, 2 insertions(+), 4 deletions(-)
Index: linux-2.6-block/drivers/md/dm-linear.c
===================================================================
--- linux-2.6-block.orig/drivers/md/dm-linear.c
+++ linux-2.6-block/drivers/md/dm-linear.c
@@ -153,7 +153,6 @@ static struct target_type linear_target
.ioctl = linear_ioctl,
.merge = linear_merge,
.iterate_devices = linear_iterate_devices,
- .features = DM_TARGET_SUPPORTS_DISCARDS,
};
int __init dm_linear_init(void)
Index: linux-2.6-block/drivers/md/dm-table.c
===================================================================
--- linux-2.6-block.orig/drivers/md/dm-table.c
+++ linux-2.6-block/drivers/md/dm-table.c
@@ -773,7 +773,7 @@ int dm_table_add_target(struct dm_table
t->highs[t->num_targets++] = tgt->begin + tgt->len - 1;
- if (!(tgt->type->features & DM_TARGET_SUPPORTS_DISCARDS))
+ if (!tgt->num_discard_requests)
t->discards_supported = 0;
return 0;
Index: linux-2.6-block/drivers/md/dm.c
===================================================================
--- linux-2.6-block.orig/drivers/md/dm.c
+++ linux-2.6-block/drivers/md/dm.c
@@ -1242,7 +1242,7 @@ static int __clone_and_map_discard(struc
* check was performed.
*/
- if (!(ti->type->features & DM_TARGET_SUPPORTS_DISCARDS))
+ if (!ti->num_discard_requests)
return -EOPNOTSUPP;
max = max_io_len(ci->md, ci->sector, ti);
Index: linux-2.6-block/include/linux/device-mapper.h
===================================================================
--- linux-2.6-block.orig/include/linux/device-mapper.h
+++ linux-2.6-block/include/linux/device-mapper.h
@@ -130,7 +130,6 @@ void dm_put_device(struct dm_target *ti,
/*
* Target features
*/
-#define DM_TARGET_SUPPORTS_DISCARDS 0x00000001
struct target_type {
uint64_t features;
* [PATCH v2 04/12] dm: use common __issue_target_request for flush and discard support
From: Mike Snitzer @ 2010-07-24 16:09 UTC
To: dm-devel
Rename __flush_target to __issue_target_request now that it is used to
issue both flush and discard requests.
Introduce __issue_target_requests as a convenience wrapper that calls
__issue_target_request 'num_flush_requests' or 'num_discard_requests'
times per target.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
---
drivers/md/dm.c | 30 ++++++++++++++++++++++--------
1 file changed, 22 insertions(+), 8 deletions(-)
Index: linux-2.6-block/drivers/md/dm.c
===================================================================
--- linux-2.6-block.orig/drivers/md/dm.c
+++ linux-2.6-block/drivers/md/dm.c
@@ -1181,30 +1181,42 @@ static struct dm_target_io *alloc_tio(st
return tio;
}
-static void __flush_target(struct clone_info *ci, struct dm_target *ti,
- unsigned request_nr)
+static void __issue_target_request(struct clone_info *ci, struct dm_target *ti,
+ unsigned request_nr)
{
struct dm_target_io *tio = alloc_tio(ci, ti);
struct bio *clone;
tio->info.target_request_nr = request_nr;
- clone = bio_alloc_bioset(GFP_NOIO, 0, ci->md->bs);
+ /*
+ * Discard requests require the bio's inline iovecs be initialized.
+ * ci->bio->bi_max_vecs is BIO_INLINE_VECS anyway, for both flush
+ * and discard, so no need for concern about wasted bvec allocations.
+ */
+ clone = bio_alloc_bioset(GFP_NOIO, ci->bio->bi_max_vecs, ci->md->bs);
__bio_clone(clone, ci->bio);
clone->bi_destructor = dm_bio_destructor;
__map_bio(ti, clone, tio);
}
+static void __issue_target_requests(struct clone_info *ci, struct dm_target *ti,
+ unsigned num_requests)
+{
+ unsigned request_nr;
+
+ for (request_nr = 0; request_nr < num_requests; request_nr++)
+ __issue_target_request(ci, ti, request_nr);
+}
+
static int __clone_and_map_empty_barrier(struct clone_info *ci)
{
- unsigned target_nr = 0, request_nr;
+ unsigned target_nr = 0;
struct dm_target *ti;
while ((ti = dm_table_get_target(ci->map, target_nr++)))
- for (request_nr = 0; request_nr < ti->num_flush_requests;
- request_nr++)
- __flush_target(ci, ti, request_nr);
+ __issue_target_requests(ci, ti, ti->num_flush_requests);
ci->sector_count = 0;
@@ -1253,7 +1265,9 @@ static int __clone_and_map_discard(struc
*/
return -EOPNOTSUPP;
- __clone_and_map_simple(ci, ti);
+ __issue_target_requests(ci, ti, ti->num_discard_requests);
+
+ ci->sector_count = 0;
return 0;
}
* [PATCH v2 05/12] dm: factor max_io_len for code reuse
From: Mike Snitzer @ 2010-07-24 16:09 UTC
To: dm-devel
Split max_io_len_target_boundary out of max_io_len so that the discard
support can make use of it without duplicating max_io_len code.
Avoiding max_io_len's split_io logic enables DM's discard support to
submit the entire discard request to a target. But discards must still
be split on target boundaries.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
---
drivers/md/dm.c | 23 ++++++++++++++++-------
1 file changed, 16 insertions(+), 7 deletions(-)
Index: linux-2.6-block/drivers/md/dm.c
===================================================================
--- linux-2.6-block.orig/drivers/md/dm.c
+++ linux-2.6-block/drivers/md/dm.c
@@ -1029,11 +1029,20 @@ static void end_clone_request(struct req
dm_complete_request(clone, error);
}
-static sector_t max_io_len(struct mapped_device *md,
- sector_t sector, struct dm_target *ti)
+static sector_t max_io_len_target_boundary(sector_t sector, struct dm_target *ti,
+ sector_t *offset_p)
{
sector_t offset = sector - ti->begin;
- sector_t len = ti->len - offset;
+ if (offset_p)
+ *offset_p = offset;
+
+ return ti->len - offset;
+}
+
+static sector_t max_io_len(sector_t sector, struct dm_target *ti)
+{
+ sector_t offset;
+ sector_t len = max_io_len_target_boundary(sector, ti, &offset);
/*
* Does the target need to split even further ?
@@ -1257,7 +1266,7 @@ static int __clone_and_map_discard(struc
if (!ti->num_discard_requests)
return -EOPNOTSUPP;
- max = max_io_len(ci->md, ci->sector, ti);
+ max = max_io_len(ci->sector, ti);
if (ci->sector_count > max)
/*
@@ -1289,7 +1298,7 @@ static int __clone_and_map(struct clone_
if (!dm_target_is_valid(ti))
return -EIO;
- max = max_io_len(ci->md, ci->sector, ti);
+ max = max_io_len(ci->sector, ti);
if (ci->sector_count <= max) {
/*
@@ -1340,7 +1349,7 @@ static int __clone_and_map(struct clone_
if (!dm_target_is_valid(ti))
return -EIO;
- max = max_io_len(ci->md, ci->sector, ti);
+ max = max_io_len(ci->sector, ti);
}
len = min(remaining, max);
@@ -1427,7 +1436,7 @@ static int dm_merge_bvec(struct request_
/*
* Find maximum amount of I/O that won't need splitting
*/
- max_sectors = min(max_io_len(md, bvm->bi_sector, ti),
+ max_sectors = min(max_io_len(bvm->bi_sector, ti),
(sector_t) BIO_MAX_SECTORS);
max_size = (max_sectors << SECTOR_SHIFT) - bvm->bi_size;
if (max_size < 0)
* [PATCH v2 06/12] dm: split discard requests on target boundaries
From: Mike Snitzer @ 2010-07-24 16:09 UTC
To: dm-devel
Update __clone_and_map_discard to loop across all targets in a DM
device's table when it processes a discard bio. If a discard crosses a
target boundary it must be split accordingly.
Update __issue_target_requests and __issue_target_request to allow a
cloned discard bio to have a custom start sector and size.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
---
drivers/md/dm.c | 51 ++++++++++++++++++++++++++++-----------------------
1 file changed, 28 insertions(+), 23 deletions(-)
Index: linux-2.6-block/drivers/md/dm.c
===================================================================
--- linux-2.6-block.orig/drivers/md/dm.c
+++ linux-2.6-block/drivers/md/dm.c
@@ -1191,7 +1191,7 @@ static struct dm_target_io *alloc_tio(st
}
static void __issue_target_request(struct clone_info *ci, struct dm_target *ti,
- unsigned request_nr)
+ unsigned request_nr, sector_t len)
{
struct dm_target_io *tio = alloc_tio(ci, ti);
struct bio *clone;
@@ -1206,17 +1206,21 @@ static void __issue_target_request(struc
clone = bio_alloc_bioset(GFP_NOIO, ci->bio->bi_max_vecs, ci->md->bs);
__bio_clone(clone, ci->bio);
clone->bi_destructor = dm_bio_destructor;
+ if (len) {
+ clone->bi_sector = ci->sector;
+ clone->bi_size = to_bytes(len);
+ }
__map_bio(ti, clone, tio);
}
static void __issue_target_requests(struct clone_info *ci, struct dm_target *ti,
- unsigned num_requests)
+ unsigned num_requests, sector_t len)
{
unsigned request_nr;
for (request_nr = 0; request_nr < num_requests; request_nr++)
- __issue_target_request(ci, ti, request_nr);
+ __issue_target_request(ci, ti, request_nr, len);
}
static int __clone_and_map_empty_barrier(struct clone_info *ci)
@@ -1225,7 +1229,7 @@ static int __clone_and_map_empty_barrier
struct dm_target *ti;
while ((ti = dm_table_get_target(ci->map, target_nr++)))
- __issue_target_requests(ci, ti, ti->num_flush_requests);
+ __issue_target_requests(ci, ti, ti->num_flush_requests, 0);
ci->sector_count = 0;
@@ -1251,30 +1255,31 @@ static void __clone_and_map_simple(struc
static int __clone_and_map_discard(struct clone_info *ci)
{
struct dm_target *ti;
- sector_t max;
-
- ti = dm_table_find_target(ci->map, ci->sector);
- if (!dm_target_is_valid(ti))
- return -EIO;
-
- /*
- * Even though the device advertised discard support,
- * reconfiguration might have changed that since the
- * check was performed.
- */
-
- if (!ti->num_discard_requests)
- return -EOPNOTSUPP;
+ sector_t max, len, remaining = ci->sector_count;
+ unsigned offset = 0;
- max = max_io_len(ci->sector, ti);
+ do {
+ ti = dm_table_find_target(ci->map, ci->sector);
+ if (!dm_target_is_valid(ti))
+ return -EIO;
- if (ci->sector_count > max)
/*
- * FIXME: Handle a discard that spans two or more targets.
+ * Even though the device advertised discard support,
+ * reconfiguration might have changed that since the
+ * check was performed.
*/
- return -EOPNOTSUPP;
+ if (!ti->num_discard_requests)
+ return -EOPNOTSUPP;
+
+ max = max_io_len_target_boundary(ci->sector, ti, NULL);
+ len = min(remaining, max);
- __issue_target_requests(ci, ti, ti->num_discard_requests);
+ __issue_target_requests(ci, ti, ti->num_discard_requests, len);
+
+ ci->sector += len;
+ ci->sector_count -= len;
+ offset += to_bytes(len);
+ } while (remaining -= len);
ci->sector_count = 0;
* [PATCH v2 07/12] dm zero: silently drop discards too
From: Mike Snitzer @ 2010-07-24 16:09 UTC
To: dm-devel
Have the zero target silently drop a discard rather than fail the
request with -EOPNOTSUPP.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
---
drivers/md/dm-zero.c | 3 +++
1 files changed, 3 insertions(+), 0 deletions(-)
diff --git a/drivers/md/dm-zero.c b/drivers/md/dm-zero.c
index bbc9703..8bd76e0 100644
--- a/drivers/md/dm-zero.c
+++ b/drivers/md/dm-zero.c
@@ -22,6 +22,9 @@ static int zero_ctr(struct dm_target *ti, unsigned int argc, char **argv)
return -EINVAL;
}
+ /* silently drop discards (this avoids -EOPNOTSUPP) */
+ ti->num_discard_requests = 1;
+
return 0;
}
--
1.6.6.1
* [PATCH v2 08/12] dm error: return error for discards too
From: Mike Snitzer @ 2010-07-24 16:09 UTC
To: dm-devel
Have the error target respond to a discard request with a hard -EIO
rather than fail the request with -EOPNOTSUPP.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
---
drivers/md/dm-target.c | 3 +++
1 files changed, 3 insertions(+), 0 deletions(-)
diff --git a/drivers/md/dm-target.c b/drivers/md/dm-target.c
index 11dea11..98c31b0 100644
--- a/drivers/md/dm-target.c
+++ b/drivers/md/dm-target.c
@@ -113,6 +113,9 @@ void dm_unregister_target(struct target_type *tt)
*/
static int io_err_ctr(struct dm_target *tt, unsigned int argc, char **args)
{
+ /* return error for discards (rather than -EOPNOTSUPP) */
+ tt->num_discard_requests = 1;
+
return 0;
}
--
1.6.6.1
* [PATCH v2 09/12] dm delay: enable discard support
From: Mike Snitzer @ 2010-07-24 16:09 UTC
To: dm-devel
Enable discard support for the delay target.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
---
drivers/md/dm-delay.c | 1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/drivers/md/dm-delay.c b/drivers/md/dm-delay.c
index 8520528..cdb7757 100644
--- a/drivers/md/dm-delay.c
+++ b/drivers/md/dm-delay.c
@@ -198,6 +198,7 @@ out:
atomic_set(&dc->may_delay, 1);
ti->num_flush_requests = 1;
+ ti->num_discard_requests = 1;
ti->private = dc;
return 0;
--
1.6.6.1
* [PATCH v2 10/12] block: update request stacking methods to support discards
From: Mike Snitzer @ 2010-07-24 16:09 UTC
To: dm-devel; +Cc: Jens Axboe, Christoph Hellwig, Kiyoshi Ueda
Propagate REQ_DISCARD in cmd_flags when cloning a discard request.
Skip blk_rq_check_limits's existing checks for discard requests because
discard limits will have already been checked in blkdev_issue_discard.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: Jens Axboe <jaxboe@fusionio.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
---
block/blk-core.c | 5 +++++
1 file changed, 5 insertions(+)
Index: linux-2.6-block/block/blk-core.c
===================================================================
--- linux-2.6-block.orig/block/blk-core.c
+++ linux-2.6-block/block/blk-core.c
@@ -1644,6 +1644,9 @@ EXPORT_SYMBOL(submit_bio);
*/
int blk_rq_check_limits(struct request_queue *q, struct request *rq)
{
+ if (rq->cmd_flags & REQ_DISCARD)
+ return 0;
+
if (blk_rq_sectors(rq) > queue_max_sectors(q) ||
blk_rq_bytes(rq) > queue_max_hw_sectors(q) << 9) {
printk(KERN_ERR "%s: over max size limit.\n", __func__);
@@ -2492,6 +2495,8 @@ static void __blk_rq_prep_clone(struct r
{
dst->cpu = src->cpu;
dst->cmd_flags = (rq_data_dir(src) | REQ_NOMERGE);
+ if (src->cmd_flags & REQ_DISCARD)
+ dst->cmd_flags |= REQ_DISCARD;
dst->cmd_type = src->cmd_type;
dst->__sector = blk_rq_pos(src);
dst->__data_len = blk_rq_bytes(src);
* [PATCH v2 11/12] dm mpath: enable discard support
From: Mike Snitzer @ 2010-07-24 16:09 UTC
To: dm-devel; +Cc: Kiyoshi Ueda
Enable discard support in the DM multipath target.
This discard support depends on a few discard-specific fixes to the
block layer's request stacking driver methods.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
---
drivers/md/dm-mpath.c | 1 +
1 file changed, 1 insertion(+)
Index: linux-2.6-block/drivers/md/dm-mpath.c
===================================================================
--- linux-2.6-block.orig/drivers/md/dm-mpath.c
+++ linux-2.6-block/drivers/md/dm-mpath.c
@@ -893,6 +893,7 @@ static int multipath_ctr(struct dm_targe
}
ti->num_flush_requests = 1;
+ ti->num_discard_requests = 1;
return 0;
* [PATCH v2 12/12] dm stripe: enable efficient discard support
From: Mike Snitzer @ 2010-07-24 16:09 UTC
To: dm-devel
The DM core will submit a discard bio to the stripe target for each
stripe in a striped DM device. The stripe target will determine the
stripe-specific portion of each supplied bio and remap it into an
individual extent (at most 'num_discard_requests' extents in total).
If a given stripe-specific discard bio doesn't touch its stripe, the
bio will be dropped.
Various useful DMDEBUG messages will be printed if CONFIG_DM_DEBUG is
enabled and a discard is issued to a striped DM device.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
---
drivers/md/dm-stripe.c | 174 ++++++++++++++++++++++++++++++++++++++++++++++---
1 file changed, 166 insertions(+), 8 deletions(-)
Index: linux-2.6-block/drivers/md/dm-stripe.c
===================================================================
--- linux-2.6-block.orig/drivers/md/dm-stripe.c
+++ linux-2.6-block/drivers/md/dm-stripe.c
@@ -167,11 +167,10 @@ static int stripe_ctr(struct dm_target *
sc->stripe_width = width;
ti->split_io = chunk_size;
ti->num_flush_requests = stripes;
+ ti->num_discard_requests = stripes;
sc->chunk_mask = ((sector_t) chunk_size) - 1;
- for (sc->chunk_shift = 0; chunk_size; sc->chunk_shift++)
- chunk_size >>= 1;
- sc->chunk_shift--;
+ sc->chunk_shift = ffs(chunk_size) - 1;
/*
* Get the stripe destinations.
@@ -207,12 +206,167 @@ static void stripe_dtr(struct dm_target
kfree(sc);
}
+static void map_bio_to_stripe(struct bio *bio, struct stripe_c *sc,
+ sector_t chunk, sector_t offset)
+{
+ uint32_t stripe = sector_div(chunk, sc->stripes);
+
+ bio->bi_bdev = sc->stripe[stripe].dev->bdev;
+ bio->bi_sector = sc->stripe[stripe].physical_start +
+ (chunk << sc->chunk_shift) + (offset & sc->chunk_mask);
+}
+
+/*
+ * Set the bio's bi_size based on only the space allocated to 'stripe'.
+ * - first_chunk and last_chunk belong to 'stripe'.
+ * - first_offset and last_offset are only relevant if non-zero.
+ */
+static void set_stripe_bio_size(struct bio *bio, uint32_t stripe,
+ struct stripe_c *sc,
+ sector_t first_chunk, sector_t last_chunk,
+ sector_t first_offset, sector_t last_offset)
+{
+ sector_t temp, stripe_chunks, unused_sectors = 0;
+
+ /*
+ * Determine the number of chunks used from the specified 'stripe'.
+ * stripe_chunks * chunk_size is the upper bound on the 'stripe'
+ * specific bio->bi_size
+ * - requires absolute first_chunk and last_chunk
+ */
+ stripe_chunks = last_chunk - first_chunk + 1;
+ temp = sector_div(stripe_chunks, sc->stripes);
+ stripe_chunks += temp;
+ DMDEBUG("%s: stripe=%u stripe_chunks=%lu",
+ __func__, stripe, stripe_chunks);
+
+ /* Set bi_size based on only the space allocated to 'stripe' */
+ bio->bi_size = to_bytes(stripe_chunks * (sc->chunk_mask + 1));
+ /* must reduce bi_size if first and/or last chunk was partially used */
+ if (first_offset) {
+ unused_sectors += (first_offset & sc->chunk_mask);
+ DMDEBUG("%s: adjusting for first_stripe=%u, unused_sectors=%lu",
+ __func__, stripe, unused_sectors);
+ }
+ if (last_offset) {
+ temp = last_offset & sc->chunk_mask;
+ if (temp)
+ unused_sectors += ((sc->chunk_mask + 1) - temp);
+ DMDEBUG("%s: adjusting for last_stripe=%u, unused_sectors=%lu",
+ __func__, stripe, unused_sectors);
+ }
+ if (unused_sectors)
+ bio->bi_size -= to_bytes(unused_sectors);
+}
+
+/*
+ * Determine the chunk closest to 'chunk' that belongs to 'stripe':
+ * - return first chunk belonging to stripe if 'first_offset' was provided.
+ * - also adjust 'first_offset' accordingly.
+ * - returned chunk may exceed bio or target boundary; caller must check
+ * the return and react accordingly (e.g. drop the bio).
+ * - otherwise return last chunk belonging to stripe
+ * Also return the 'chunk_stripe' associated with the original 'chunk'.
+ */
+static sector_t get_stripe_chunk(struct stripe_c *sc, uint32_t stripe,
+ sector_t chunk, sector_t *first_offset,
+ uint32_t *chunk_stripe)
+{
+ sector_t ret_chunk = chunk;
+ uint32_t stripe_chunk_offset;
+
+ *chunk_stripe = sector_div(chunk, sc->stripes);
+ /* Get absolute offset (in chunks) from 'chunk' to desired 'stripe' */
+ stripe_chunk_offset = abs((long)stripe - *chunk_stripe);
+
+ if (first_offset) {
+ /* first chunk */
+ if (stripe < *chunk_stripe)
+ stripe_chunk_offset = sc->stripes - stripe_chunk_offset;
+ if (stripe_chunk_offset) {
+ ret_chunk += stripe_chunk_offset;
+ *first_offset = ret_chunk << sc->chunk_shift;
+ DMDEBUG("%s: stripe=%u shifted first_offset=%lu",
+ __func__, stripe, *first_offset);
+ }
+ } else {
+ /* last chunk */
+ if (*chunk_stripe < stripe)
+ stripe_chunk_offset = sc->stripes - stripe_chunk_offset;
+ ret_chunk -= stripe_chunk_offset;
+ }
+
+ DMDEBUG("%s: stripe=%u stripe_chunk_offset=%u shifted %s_chunk=%lu",
+ __func__, stripe, stripe_chunk_offset,
+ (first_offset ? "first" : "last"), ret_chunk);
+
+ return ret_chunk;
+}
+
+/*
+ * Confine mapping a bio to an extent of the specified stripe.
+ * If bio doesn't touch stripe drop the bio and return immediately.
+ */
+static int map_stripe_extent(uint32_t stripe, struct bio *bio,
+ struct dm_target *ti, struct stripe_c *sc)
+{
+ sector_t first_offset, last_offset, first_chunk, last_chunk;
+ uint32_t first_stripe, last_stripe;
+
+ DMDEBUG("%s: discard stripe=%u bi_sector=%lu bi_size=%u, bio_sectors=%u",
+ __func__, stripe, bio->bi_sector, bio->bi_size, bio_sectors(bio));
+
+ first_offset = bio->bi_sector - ti->begin;
+ first_chunk = first_offset >> sc->chunk_shift;
+ last_offset = first_offset + to_sector(bio->bi_size);
+ /* Get the last chunk associated with this bio (-1 required) */
+ last_chunk = (last_offset - 1) >> sc->chunk_shift;
+
+ DMDEBUG("%s: first_offset=%lu last_offset=%lu, "
+ "first_chunk=%lu last_chunk=%lu", __func__,
+ first_offset, last_offset, first_chunk, last_chunk);
+
+ /* Determine first_chunk (and first_offset) belonging to 'stripe' */
+ first_chunk = get_stripe_chunk(sc, stripe, first_chunk,
+ &first_offset, &first_stripe);
+
+ if (first_chunk > last_chunk) {
+ /* Drop bio because it doesn't touch desired 'stripe' */
+ bio_endio(bio, 0);
+ DMDEBUG("%s: dropping bio because it doesn't touch stripe=%u\n",
+ __func__, stripe);
+ return DM_MAPIO_SUBMITTED;
+ }
+
+ /* Determine last_chunk belonging to 'stripe' */
+ last_chunk = get_stripe_chunk(sc, stripe, last_chunk,
+ NULL, &last_stripe);
+ BUG_ON(last_chunk < first_chunk);
+
+ DMDEBUG("%s: BEFORE bi_sector=%lu, bi_size=%u, bio_sectors=%u",
+ __func__, bio->bi_sector, bio->bi_size, bio_sectors(bio));
+
+ map_bio_to_stripe(bio, sc, first_chunk, first_offset);
+
+ /* Only account for offsets that impact the 'stripe' bio->bi_size */
+ if (stripe != first_stripe)
+ first_offset = 0;
+ if (stripe != last_stripe)
+ last_offset = 0;
+ set_stripe_bio_size(bio, stripe, sc, first_chunk, last_chunk,
+ first_offset, last_offset);
+
+ DMDEBUG("%s: AFTER bi_sector=%lu, bi_size=%u, bio_sectors=%u\n",
+ __func__, bio->bi_sector, bio->bi_size, bio_sectors(bio));
+
+ return DM_MAPIO_REMAPPED;
+}
+
static int stripe_map(struct dm_target *ti, struct bio *bio,
union map_info *map_context)
{
struct stripe_c *sc = (struct stripe_c *) ti->private;
sector_t offset, chunk;
- uint32_t stripe;
unsigned target_request_nr;
if (unlikely(bio_empty_barrier(bio))) {
@@ -222,13 +376,17 @@ static int stripe_map(struct dm_target *
return DM_MAPIO_REMAPPED;
}
+ if (unlikely(bio->bi_rw & REQ_DISCARD)) {
+ target_request_nr = map_context->target_request_nr;
+ BUG_ON(target_request_nr >= sc->stripes);
+ return map_stripe_extent(target_request_nr, bio, ti, sc);
+ }
+
offset = bio->bi_sector - ti->begin;
chunk = offset >> sc->chunk_shift;
- stripe = sector_div(chunk, sc->stripes);
- bio->bi_bdev = sc->stripe[stripe].dev->bdev;
- bio->bi_sector = sc->stripe[stripe].physical_start +
- (chunk << sc->chunk_shift) + (offset & sc->chunk_mask);
+ map_bio_to_stripe(bio, sc, chunk, offset);
+
return DM_MAPIO_REMAPPED;
}
* [PATCH v3 11/12] dm mpath: enable discard support
From: Mike Snitzer @ 2010-07-26 20:41 UTC
To: dm-devel; +Cc: Kiyoshi Ueda
Enable discard support in the DM multipath target.
This discard support depends on a few discard-specific fixes to the
block layer's request stacking driver methods.
Discard requests are optional, so don't allow a failed discard to
trigger path failures. If there is a real problem with a given path,
the barriers associated with the discard (issued either before or after
the discard) will cause path failure. That said, unconditionally
passing discard failures up the stack is not ideal. This must be fixed
once DM has more information about the nature of the underlying storage
failure.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
---
drivers/md/dm-mpath.c | 10 ++++++++++
1 file changed, 10 insertions(+)
Index: linux-2.6-block/drivers/md/dm-mpath.c
===================================================================
--- linux-2.6-block.orig/drivers/md/dm-mpath.c
+++ linux-2.6-block/drivers/md/dm-mpath.c
@@ -893,6 +893,7 @@ static int multipath_ctr(struct dm_targe
}
ti->num_flush_requests = 1;
+ ti->num_discard_requests = 1;
return 0;
@@ -1272,6 +1273,15 @@ static int do_end_io(struct multipath *m
if (error == -EOPNOTSUPP)
return error;
+ if (clone->cmd_flags & REQ_DISCARD)
+ /*
+ * Pass all discard request failures up.
+ * FIXME: only fail_path if the discard failed due to a
+ * transport problem. This requires precise understanding
+ * of the underlying failure (e.g. the SCSI sense).
+ */
+ return error;
+
if (mpio->pgpath)
fail_path(mpio->pgpath);
* [PATCH v3 05/12] dm: factor max_io_len for code reuse
From: Mike Snitzer @ 2010-07-26 21:36 UTC
To: dm-devel
Split max_io_len_target_boundary out of max_io_len so that the discard
support can make use of it without duplicating max_io_len code.
Avoiding max_io_len's split_io logic enables DM's discard support to
submit the entire discard request to a target. But discards must still
be split on target boundaries.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
---
drivers/md/dm.c | 22 ++++++++++++++--------
include/linux/device-mapper.h | 2 ++
2 files changed, 16 insertions(+), 8 deletions(-)
Index: linux-2.6-block/drivers/md/dm.c
===================================================================
--- linux-2.6-block.orig/drivers/md/dm.c
+++ linux-2.6-block/drivers/md/dm.c
@@ -1029,17 +1029,23 @@ static void end_clone_request(struct req
dm_complete_request(clone, error);
}
-static sector_t max_io_len(struct mapped_device *md,
- sector_t sector, struct dm_target *ti)
+static sector_t max_io_len_target_boundary(sector_t sector, struct dm_target *ti)
{
- sector_t offset = sector - ti->begin;
- sector_t len = ti->len - offset;
+ sector_t target_offset = dm_target_offset(ti, sector);
+
+ return ti->len - target_offset;
+}
+
+static sector_t max_io_len(sector_t sector, struct dm_target *ti)
+{
+ sector_t len = max_io_len_target_boundary(sector, ti);
/*
* Does the target need to split even further ?
*/
if (ti->split_io) {
sector_t boundary;
+ sector_t offset = dm_target_offset(ti, sector);
boundary = ((offset + ti->split_io) & ~(ti->split_io - 1))
- offset;
if (len > boundary)
@@ -1257,7 +1263,7 @@ static int __clone_and_map_discard(struc
if (!ti->num_discard_requests)
return -EOPNOTSUPP;
- max = max_io_len(ci->md, ci->sector, ti);
+ max = max_io_len(ci->sector, ti);
if (ci->sector_count > max)
/*
@@ -1289,7 +1295,7 @@ static int __clone_and_map(struct clone_
if (!dm_target_is_valid(ti))
return -EIO;
- max = max_io_len(ci->md, ci->sector, ti);
+ max = max_io_len(ci->sector, ti);
if (ci->sector_count <= max) {
/*
@@ -1340,7 +1346,7 @@ static int __clone_and_map(struct clone_
if (!dm_target_is_valid(ti))
return -EIO;
- max = max_io_len(ci->md, ci->sector, ti);
+ max = max_io_len(ci->sector, ti);
}
len = min(remaining, max);
@@ -1427,7 +1433,7 @@ static int dm_merge_bvec(struct request_
/*
* Find maximum amount of I/O that won't need splitting
*/
- max_sectors = min(max_io_len(md, bvm->bi_sector, ti),
+ max_sectors = min(max_io_len(bvm->bi_sector, ti),
(sector_t) BIO_MAX_SECTORS);
max_size = (max_sectors << SECTOR_SHIFT) - bvm->bi_size;
if (max_size < 0)
Index: linux-2.6-block/include/linux/device-mapper.h
===================================================================
--- linux-2.6-block.orig/include/linux/device-mapper.h
+++ linux-2.6-block/include/linux/device-mapper.h
@@ -398,6 +398,8 @@ void *dm_vcalloc(unsigned long nmemb, un
#define dm_array_too_big(fixed, obj, num) \
((num) > (UINT_MAX - (fixed)) / (obj))
+#define dm_target_offset(ti, sector) ((sector) - (ti)->begin)
+
static inline sector_t to_sector(unsigned long n)
{
return (n >> SECTOR_SHIFT);
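To make the split_io arithmetic above concrete (illustrative numbers
only): with ti->split_io = 8 sectors and dm_target_offset() returning
1027, boundary = ((1027 + 8) & ~7) - 1027 = 1032 - 1027 = 5, so
max_io_len() allows at most 5 sectors before the next chunk boundary,
i.e. min(ti->len - 1027, 5). max_io_len_target_boundary() skips that
clamp, which is what lets the discard path hand a whole multi-chunk
range to a single target.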
* [PATCH v3 06/12] dm: split discard requests on target boundaries
From: Mike Snitzer @ 2010-07-26 21:41 UTC
To: dm-devel
Update __clone_and_map_discard to loop across all targets in a DM
device's table when it processes a discard bio. If a discard crosses a
target boundary it must be split accordingly.
Update __issue_target_requests and __issue_target_request to allow a
cloned discard bio to have a custom start sector and size.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
---
drivers/md/dm.c | 51 ++++++++++++++++++++++++++++-----------------------
1 file changed, 28 insertions(+), 23 deletions(-)
Index: linux-2.6-block/drivers/md/dm.c
===================================================================
--- linux-2.6-block.orig/drivers/md/dm.c
+++ linux-2.6-block/drivers/md/dm.c
@@ -1188,7 +1188,7 @@ static struct dm_target_io *alloc_tio(st
}
static void __issue_target_request(struct clone_info *ci, struct dm_target *ti,
- unsigned request_nr)
+ unsigned request_nr, sector_t len)
{
struct dm_target_io *tio = alloc_tio(ci, ti);
struct bio *clone;
@@ -1203,17 +1203,21 @@ static void __issue_target_request(struc
clone = bio_alloc_bioset(GFP_NOIO, ci->bio->bi_max_vecs, ci->md->bs);
__bio_clone(clone, ci->bio);
clone->bi_destructor = dm_bio_destructor;
+ if (len) {
+ clone->bi_sector = ci->sector;
+ clone->bi_size = to_bytes(len);
+ }
__map_bio(ti, clone, tio);
}
static void __issue_target_requests(struct clone_info *ci, struct dm_target *ti,
- unsigned num_requests)
+ unsigned num_requests, sector_t len)
{
unsigned request_nr;
for (request_nr = 0; request_nr < num_requests; request_nr++)
- __issue_target_request(ci, ti, request_nr);
+ __issue_target_request(ci, ti, request_nr, len);
}
static int __clone_and_map_empty_barrier(struct clone_info *ci)
@@ -1222,7 +1226,7 @@ static int __clone_and_map_empty_barrier
struct dm_target *ti;
while ((ti = dm_table_get_target(ci->map, target_nr++)))
- __issue_target_requests(ci, ti, ti->num_flush_requests);
+ __issue_target_requests(ci, ti, ti->num_flush_requests, 0);
ci->sector_count = 0;
@@ -1248,30 +1252,31 @@ static void __clone_and_map_simple(struc
static int __clone_and_map_discard(struct clone_info *ci)
{
struct dm_target *ti;
- sector_t max;
-
- ti = dm_table_find_target(ci->map, ci->sector);
- if (!dm_target_is_valid(ti))
- return -EIO;
-
- /*
- * Even though the device advertised discard support,
- * reconfiguration might have changed that since the
- * check was performed.
- */
-
- if (!ti->num_discard_requests)
- return -EOPNOTSUPP;
+ sector_t max, len, remaining = ci->sector_count;
+ unsigned offset = 0;
- max = max_io_len(ci->sector, ti);
+ do {
+ ti = dm_table_find_target(ci->map, ci->sector);
+ if (!dm_target_is_valid(ti))
+ return -EIO;
- if (ci->sector_count > max)
/*
- * FIXME: Handle a discard that spans two or more targets.
+ * Even though the device advertised discard support,
+ * reconfiguration might have changed that since the
+ * check was performed.
*/
- return -EOPNOTSUPP;
+ if (!ti->num_discard_requests)
+ return -EOPNOTSUPP;
+
+ max = max_io_len_target_boundary(ci->sector, ti);
+ len = min(remaining, max);
- __issue_target_requests(ci, ti, ti->num_discard_requests);
+ __issue_target_requests(ci, ti, ti->num_discard_requests, len);
+
+ ci->sector += len;
+ ci->sector_count -= len;
+ offset += to_bytes(len);
+ } while (remaining -= len);
ci->sector_count = 0;
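To illustrate the loop above with a hypothetical two-target table
(device names and sizes chosen only for the example):

  0    2048 linear /dev/sda 0
  2048 2048 linear /dev/sdb 0

a 1024-sector discard starting at sector 1536 first resolves to the
target beginning at sector 0; max_io_len_target_boundary() clamps it to
512 sectors and a clone of that length is issued, then ci->sector
advances to 2048 and the remaining 512 sectors are issued as a second
clone to the target beginning at sector 2048.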
* Re: [PATCH v2 10/12] block: update request stacking methods to support discards
From: Christoph Hellwig @ 2010-07-27 14:54 UTC
To: device-mapper development; +Cc: Jens Axboe, Christoph Hellwig, Kiyoshi Ueda
On Sat, Jul 24, 2010 at 12:09:26PM -0400, Mike Snitzer wrote:
> Propagate REQ_DISCARD in cmd_flags when cloning a discard request.
> Skip blk_rq_check_limits's existing checks for discard requests because
> discard limits will have already been checked in blkdev_issue_discard.
Looks good,
Reviewed-by: Christoph Hellwig <hch@lst.de>
* [PATCH v3 12/12] dm stripe: enable efficient discard support
From: Mike Snitzer @ 2010-07-27 20:32 UTC
To: dm-devel
The DM core will submit a discard bio to the stripe target for each
stripe in a striped DM device. The stripe target will determine the
stripe-specific portion of each supplied bio and remap it into an
individual extent (at most 'num_discard_requests' extents in total).
If a given stripe-specific discard bio doesn't touch its stripe, the
bio will be dropped.
Various useful DMDEBUG messages will be printed if CONFIG_DM_DEBUG is
enabled and a discard is issued to a striped DM device.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
---
drivers/md/dm-stripe.c | 171 +++++++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 165 insertions(+), 6 deletions(-)
v3: pass stripe to map_bio_to_stripe, eliminating one unnecessary
sector_div from the discard path
Index: linux-2.6-block/drivers/md/dm-stripe.c
===================================================================
--- linux-2.6-block.orig/drivers/md/dm-stripe.c
+++ linux-2.6-block/drivers/md/dm-stripe.c
@@ -167,11 +167,10 @@ static int stripe_ctr(struct dm_target *
sc->stripe_width = width;
ti->split_io = chunk_size;
ti->num_flush_requests = stripes;
+ ti->num_discard_requests = stripes;
sc->chunk_mask = ((sector_t) chunk_size) - 1;
- for (sc->chunk_shift = 0; chunk_size; sc->chunk_shift++)
- chunk_size >>= 1;
- sc->chunk_shift--;
+ sc->chunk_shift = ffs(chunk_size) - 1;
/*
* Get the stripe destinations.
@@ -207,6 +206,161 @@ static void stripe_dtr(struct dm_target
kfree(sc);
}
+static void map_bio_to_stripe(struct bio *bio, uint32_t stripe,
+ struct stripe_c *sc, sector_t chunk,
+ sector_t offset)
+{
+ bio->bi_bdev = sc->stripe[stripe].dev->bdev;
+ bio->bi_sector = sc->stripe[stripe].physical_start +
+ (chunk << sc->chunk_shift) + (offset & sc->chunk_mask);
+}
+
+/*
+ * Set the bio's bi_size based on only the space allocated to 'stripe'.
+ * - first_chunk and last_chunk belong to 'stripe'.
+ * - first_offset and last_offset are only relevant if non-zero.
+ */
+static void set_stripe_bio_size(struct bio *bio, uint32_t stripe,
+ struct stripe_c *sc,
+ sector_t first_chunk, sector_t last_chunk,
+ sector_t first_offset, sector_t last_offset)
+{
+ sector_t temp, stripe_chunks, unused_sectors = 0;
+
+ /*
+ * Determine the number of chunks used from the specified 'stripe'.
+ * stripe_chunks * chunk_size is the upper bound on the 'stripe'
+ * specific bio->bi_size
+ * - requires absolute first_chunk and last_chunk
+ */
+ stripe_chunks = last_chunk - first_chunk + 1;
+ temp = sector_div(stripe_chunks, sc->stripes);
+ stripe_chunks += temp;
+ DMDEBUG("%s: stripe=%u stripe_chunks=%lu",
+ __func__, stripe, stripe_chunks);
+
+ /* Set bi_size based on only the space allocated to 'stripe' */
+ bio->bi_size = to_bytes(stripe_chunks * (sc->chunk_mask + 1));
+ /* must reduce bi_size if first and/or last chunk was partially used */
+ if (first_offset) {
+ unused_sectors += (first_offset & sc->chunk_mask);
+ DMDEBUG("%s: adjusting for first_stripe=%u, unused_sectors=%lu",
+ __func__, stripe, unused_sectors);
+ }
+ if (last_offset) {
+ temp = last_offset & sc->chunk_mask;
+ if (temp)
+ unused_sectors += ((sc->chunk_mask + 1) - temp);
+ DMDEBUG("%s: adjusting for last_stripe=%u, unused_sectors=%lu",
+ __func__, stripe, unused_sectors);
+ }
+ if (unused_sectors)
+ bio->bi_size -= to_bytes(unused_sectors);
+}
+
+/*
+ * Determine the chunk closest to 'chunk' that belongs to 'stripe':
+ * - return first chunk belonging to stripe if 'first_offset' was provided.
+ * - also adjust 'first_offset' accordingly.
+ * - returned chunk may exceed bio or target boundary; caller must check
+ * the return and react accordingly (e.g. drop the bio).
+ * - otherwise return last chunk belonging to stripe
+ * Also return the 'chunk_stripe' associated with the original 'chunk'.
+ */
+static sector_t get_stripe_chunk(struct stripe_c *sc, uint32_t stripe,
+ sector_t chunk, sector_t *first_offset,
+ uint32_t *chunk_stripe)
+{
+ sector_t ret_chunk = chunk;
+ uint32_t stripe_chunk_offset;
+
+ *chunk_stripe = sector_div(chunk, sc->stripes);
+ /* Get absolute offset (in chunks) from 'chunk' to desired 'stripe' */
+ stripe_chunk_offset = abs((long)stripe - *chunk_stripe);
+
+ if (first_offset) {
+ /* first chunk */
+ if (stripe < *chunk_stripe)
+ stripe_chunk_offset = sc->stripes - stripe_chunk_offset;
+ if (stripe_chunk_offset) {
+ ret_chunk += stripe_chunk_offset;
+ *first_offset = ret_chunk << sc->chunk_shift;
+ DMDEBUG("%s: stripe=%u shifted first_offset=%lu",
+ __func__, stripe, *first_offset);
+ }
+ } else {
+ /* last chunk */
+ if (*chunk_stripe < stripe)
+ stripe_chunk_offset = sc->stripes - stripe_chunk_offset;
+ ret_chunk -= stripe_chunk_offset;
+ }
+
+ DMDEBUG("%s: stripe=%u stripe_chunk_offset=%u shifted %s_chunk=%lu",
+ __func__, stripe, stripe_chunk_offset,
+ (first_offset ? "first" : "last"), ret_chunk);
+
+ return ret_chunk;
+}
+
+/*
+ * Confine mapping a bio to an extent of the specified stripe.
+ * If bio doesn't touch stripe drop the bio and return immediately.
+ */
+static int map_stripe_extent(uint32_t stripe, struct bio *bio,
+ struct dm_target *ti, struct stripe_c *sc)
+{
+ sector_t first_offset, last_offset, first_chunk, last_chunk;
+ uint32_t first_stripe, last_stripe;
+
+ DMDEBUG("%s: discard stripe=%u bi_sector=%lu bi_size=%u, bio_sectors=%u",
+ __func__, stripe, bio->bi_sector, bio->bi_size, bio_sectors(bio));
+
+ first_offset = bio->bi_sector - ti->begin;
+ first_chunk = first_offset >> sc->chunk_shift;
+ last_offset = first_offset + to_sector(bio->bi_size);
+ /* Get the last chunk associated with this bio (-1 required) */
+ last_chunk = (last_offset - 1) >> sc->chunk_shift;
+
+ DMDEBUG("%s: first_offset=%lu last_offset=%lu, "
+ "first_chunk=%lu last_chunk=%lu", __func__,
+ first_offset, last_offset, first_chunk, last_chunk);
+
+ /* Determine first_chunk (and first_offset) belonging to 'stripe' */
+ first_chunk = get_stripe_chunk(sc, stripe, first_chunk,
+ &first_offset, &first_stripe);
+
+ if (first_chunk > last_chunk) {
+ /* Drop bio because it doesn't touch desired 'stripe' */
+ bio_endio(bio, 0);
+ DMDEBUG("%s: dropping bio because it doesn't touch stripe=%u\n",
+ __func__, stripe);
+ return DM_MAPIO_SUBMITTED;
+ }
+
+ /* Determine last_chunk belonging to 'stripe' */
+ last_chunk = get_stripe_chunk(sc, stripe, last_chunk,
+ NULL, &last_stripe);
+ BUG_ON(last_chunk < first_chunk);
+
+ DMDEBUG("%s: BEFORE bi_sector=%lu, bi_size=%u, bio_sectors=%u",
+ __func__, bio->bi_sector, bio->bi_size, bio_sectors(bio));
+
+ map_bio_to_stripe(bio, stripe, sc, first_chunk, first_offset);
+
+ /* Only account for offsets that impact the 'stripe' bio->bi_size */
+ if (stripe != first_stripe)
+ first_offset = 0;
+ if (stripe != last_stripe)
+ last_offset = 0;
+ set_stripe_bio_size(bio, stripe, sc, first_chunk, last_chunk,
+ first_offset, last_offset);
+
+ DMDEBUG("%s: AFTER bi_sector=%lu, bi_size=%u, bio_sectors=%u\n",
+ __func__, bio->bi_sector, bio->bi_size, bio_sectors(bio));
+
+ return DM_MAPIO_REMAPPED;
+}
+
static int stripe_map(struct dm_target *ti, struct bio *bio,
union map_info *map_context)
{
@@ -222,13 +376,18 @@ static int stripe_map(struct dm_target *
return DM_MAPIO_REMAPPED;
}
+ if (unlikely(bio->bi_rw & REQ_DISCARD)) {
+ target_request_nr = map_context->target_request_nr;
+ BUG_ON(target_request_nr >= sc->stripes);
+ return map_stripe_extent(target_request_nr, bio, ti, sc);
+ }
+
offset = bio->bi_sector - ti->begin;
chunk = offset >> sc->chunk_shift;
stripe = sector_div(chunk, sc->stripes);
- bio->bi_bdev = sc->stripe[stripe].dev->bdev;
- bio->bi_sector = sc->stripe[stripe].physical_start +
- (chunk << sc->chunk_shift) + (offset & sc->chunk_mask);
+ map_bio_to_stripe(bio, stripe, sc, chunk, offset);
+
return DM_MAPIO_REMAPPED;
}
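A small worked example of the extent computation above (illustrative
numbers only): with 2 stripes and an 8-sector chunk size, a 24-sector
discard starting 4 sectors into the target spans logical chunks 0
through 3. The clone handed to stripe 0 keeps chunks 0 and 2 (two
chunks on that stripe's device) minus the 4 unused leading sectors, so
its bi_size becomes 12 sectors; the clone for stripe 1 keeps chunks 1
and 3 minus the 4 unused trailing sectors, also 12 sectors. Together
the two per-stripe extents cover exactly the original 24 sectors.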