* [PATCH 0/4] Plugging changes for blk/md/umem
From: NeilBrown @ 2012-07-26  2:58 UTC
  To: Jens Axboe; +Cc: linux-raid, Shaohua Li

Hi Jens,
 the following series makes a number of changes to plugging, moving
 common code from md and umem into block, and modifying it to allow
 md to get more value out of it.
 They've been sitting in -next for a while.
 Are you OK with me forwarding them to Linus, or would you rather they
 went through your tree?

Thanks,
NeilBrown

---

NeilBrown (3):
      blk: pass from_schedule to non-request unplug functions.
      blk: centralize non-request unplug handling.
      md: remove plug_cnt feature of plugging.

Shaohua Li (1):
      block: stack unplug


 block/blk-core.c       |   44 ++++++++++++++++++++++++++++--------
 drivers/block/umem.c   |   37 ++++++------------------------
 drivers/md/md.c        |   59 ++++--------------------------------------------
 drivers/md/md.h        |   11 ++++++---
 drivers/md/raid1.c     |    3 +-
 drivers/md/raid10.c    |    3 +-
 drivers/md/raid5.c     |    5 ++--
 include/linux/blkdev.h |    8 +++++--
 8 files changed, 63 insertions(+), 107 deletions(-)

-- 
Signature



* [PATCH 1/4] md: remove plug_cnt feature of plugging.
From: NeilBrown @ 2012-07-26  2:58 UTC
  To: Jens Axboe; +Cc: linux-raid, Shaohua Li, NeilBrown

This seemed like a good idea at the time, but on further thought I
cannot see it making a difference other than very occasionally, and
testing designed to exercise the case it is most likely to help showed
no performance difference from removing it.

So remove the counting of active plugs and allow 'pending writes' to
be activated at any time, not just when no plugs are active.

This is only relevant when there is a write-intent bitmap; updating
the bitmap will likely introduce enough delay that the single-threading
of bitmap updates alone will collect large numbers of updates together.

Removing this will make it easier to centralise the unplug code, and
will clear the way for other unplug enhancements which do have a
measurable effect.

Signed-off-by: NeilBrown <neilb@suse.de>
---
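For context, a sketch (not part of this diff) of the write-side pattern
this interacts with, loosely based on the raid1 code of this era;
variable names are from that context and exact call sites vary by
personality:

	/* Queue the write for the md thread to submit later. */
	spin_lock_irqsave(&conf->device_lock, flags);
	bio_list_add(&conf->pending_bio_list, mbio);
	conf->pending_count++;
	spin_unlock_irqrestore(&conf->device_lock, flags);
	/*
	 * If a plug is active, plugger_unplug() will wake the md
	 * thread when the plug is finished; otherwise wake it now.
	 * With plug_cnt gone, the thread may also flush pending
	 * writes while plugs are still active.
	 */
	if (!mddev_check_plugged(mddev))
		md_wakeup_thread(mddev->thread);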

 drivers/md/md.c     |    5 +----
 drivers/md/md.h     |    3 ---
 drivers/md/raid1.c  |    3 +--
 drivers/md/raid10.c |    3 +--
 drivers/md/raid5.c  |    5 ++---
 5 files changed, 5 insertions(+), 14 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index d5ab449..3438117 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -514,8 +514,7 @@ struct md_plug_cb {
 static void plugger_unplug(struct blk_plug_cb *cb)
 {
 	struct md_plug_cb *mdcb = container_of(cb, struct md_plug_cb, cb);
-	if (atomic_dec_and_test(&mdcb->mddev->plug_cnt))
-		md_wakeup_thread(mdcb->mddev->thread);
+	md_wakeup_thread(mdcb->mddev->thread);
 	kfree(mdcb);
 }
 
@@ -548,7 +547,6 @@ int mddev_check_plugged(struct mddev *mddev)
 
 	mdcb->mddev = mddev;
 	mdcb->cb.callback = plugger_unplug;
-	atomic_inc(&mddev->plug_cnt);
 	list_add(&mdcb->cb.list, &plug->cb_list);
 	return 1;
 }
@@ -602,7 +600,6 @@ void mddev_init(struct mddev *mddev)
 	atomic_set(&mddev->active, 1);
 	atomic_set(&mddev->openers, 0);
 	atomic_set(&mddev->active_io, 0);
-	atomic_set(&mddev->plug_cnt, 0);
 	spin_lock_init(&mddev->write_lock);
 	atomic_set(&mddev->flush_pending, 0);
 	init_waitqueue_head(&mddev->sb_wait);
diff --git a/drivers/md/md.h b/drivers/md/md.h
index 7b4a3c3..91786c4 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -266,9 +266,6 @@ struct mddev {
 	int				new_chunk_sectors;
 	int				reshape_backwards;
 
-	atomic_t			plug_cnt;	/* If device is expecting
-							 * more bios soon.
-							 */
 	struct md_thread		*thread;	/* management thread */
 	struct md_thread		*sync_thread;	/* doing resync or reconstruct */
 	sector_t			curr_resync;	/* last block scheduled */
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index cacd008..36a8fc0 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -2173,8 +2173,7 @@ static void raid1d(struct mddev *mddev)
 	blk_start_plug(&plug);
 	for (;;) {
 
-		if (atomic_read(&mddev->plug_cnt) == 0)
-			flush_pending_writes(conf);
+		flush_pending_writes(conf);
 
 		spin_lock_irqsave(&conf->device_lock, flags);
 		if (list_empty(head)) {
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 8da6282..5d33603 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -2660,8 +2660,7 @@ static void raid10d(struct mddev *mddev)
 	blk_start_plug(&plug);
 	for (;;) {
 
-		if (atomic_read(&mddev->plug_cnt) == 0)
-			flush_pending_writes(conf);
+		flush_pending_writes(conf);
 
 		spin_lock_irqsave(&conf->device_lock, flags);
 		if (list_empty(head)) {
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index c2192a2..3caf08a 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -4552,7 +4552,7 @@ static void raid5d(struct mddev *mddev)
 	while (1) {
 		struct bio *bio;
 
-		if (atomic_read(&mddev->plug_cnt) == 0 &&
+		if (
 		    !list_empty(&conf->bitmap_list)) {
 			/* Now is a good time to flush some bitmap updates */
 			conf->seq_flush++;
@@ -4562,8 +4562,7 @@ static void raid5d(struct mddev *mddev)
 			conf->seq_write = conf->seq_flush;
 			activate_bit_delay(conf);
 		}
-		if (atomic_read(&mddev->plug_cnt) == 0)
-			raid5_activate_delayed(conf);
+		raid5_activate_delayed(conf);
 
 		while ((bio = remove_bio_from_retry(conf))) {
 			int ok;




* [PATCH 2/4] blk: centralize non-request unplug handling.
From: NeilBrown @ 2012-07-26  2:58 UTC
  To: Jens Axboe; +Cc: linux-raid, Shaohua Li, NeilBrown

Both md and umem have similar code for getting notified of a
blk_finish_plug event.
Centralize this code in block/ and let each driver provide only the
part that is specific to it.

Signed-off-by: NeilBrown <neilb@suse.de>
---
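A sketch of the intended usage for a driver that needs private per-plug
state (struct and function names below are illustrative, not from any
in-tree driver): pass a size of at least sizeof(struct blk_plug_cb) and
embed the cb first in your own structure; blk_check_plugged() zeroes
the allocation:

struct my_plug_cb {
	struct blk_plug_cb cb;	/* must be first, for container_of() */
	int my_pending;		/* illustrative driver-private state */
};

static void my_unplug(struct blk_plug_cb *cb)
{
	struct my_plug_cb *mycb = container_of(cb, struct my_plug_cb, cb);

	/* ... dispatch work that was held back, using cb->data ... */
	kfree(mycb);		/* the callback owns and frees the cb */
}

static int my_check_plugged(struct my_device *dev)
{
	return !!blk_check_plugged(my_unplug, dev,
				   sizeof(struct my_plug_cb));
}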

 block/blk-core.c       |   25 +++++++++++++++++++++
 drivers/block/umem.c   |   35 +++++-------------------------
 drivers/md/md.c        |   56 ++++--------------------------------------------
 drivers/md/md.h        |    8 ++++++-
 include/linux/blkdev.h |    8 +++++--
 5 files changed, 49 insertions(+), 83 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 93eb3e4..7296d3d 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -2914,6 +2914,31 @@ static void flush_plug_callbacks(struct blk_plug *plug)
 	}
 }
 
+struct blk_plug_cb *blk_check_plugged(blk_plug_cb_fn unplug, void *data,
+				      int size)
+{
+	struct blk_plug *plug = current->plug;
+	struct blk_plug_cb *cb;
+
+	if (!plug)
+		return NULL;
+
+	list_for_each_entry(cb, &plug->cb_list, list)
+		if (cb->callback == unplug && cb->data == data)
+			return cb;
+
+	/* Not currently on the callback list */
+	BUG_ON(size < sizeof(*cb));
+	cb = kzalloc(size, GFP_ATOMIC);
+	if (cb) {
+		cb->data = data;
+		cb->callback = unplug;
+		list_add(&cb->list, &plug->cb_list);
+	}
+	return cb;
+}
+EXPORT_SYMBOL(blk_check_plugged);
+
 void blk_flush_plug_list(struct blk_plug *plug, bool from_schedule)
 {
 	struct request_queue *q;
diff --git a/drivers/block/umem.c b/drivers/block/umem.c
index 9a72277..6ef3489 100644
--- a/drivers/block/umem.c
+++ b/drivers/block/umem.c
@@ -513,42 +513,19 @@ static void process_page(unsigned long data)
 	}
 }
 
-struct mm_plug_cb {
-	struct blk_plug_cb cb;
-	struct cardinfo *card;
-};
-
 static void mm_unplug(struct blk_plug_cb *cb)
 {
-	struct mm_plug_cb *mmcb = container_of(cb, struct mm_plug_cb, cb);
+	struct cardinfo *card = cb->data;
 
-	spin_lock_irq(&mmcb->card->lock);
-	activate(mmcb->card);
-	spin_unlock_irq(&mmcb->card->lock);
-	kfree(mmcb);
+	spin_lock_irq(&card->lock);
+	activate(card);
+	spin_unlock_irq(&card->lock);
+	kfree(cb);
 }
 
 static int mm_check_plugged(struct cardinfo *card)
 {
-	struct blk_plug *plug = current->plug;
-	struct mm_plug_cb *mmcb;
-
-	if (!plug)
-		return 0;
-
-	list_for_each_entry(mmcb, &plug->cb_list, cb.list) {
-		if (mmcb->cb.callback == mm_unplug && mmcb->card == card)
-			return 1;
-	}
-	/* Not currently on the callback list */
-	mmcb = kmalloc(sizeof(*mmcb), GFP_ATOMIC);
-	if (!mmcb)
-		return 0;
-
-	mmcb->card = card;
-	mmcb->cb.callback = mm_unplug;
-	list_add(&mmcb->cb.list, &plug->cb_list);
-	return 1;
+	return !!blk_check_plugged(mm_unplug, card, sizeof(struct blk_plug_cb));
 }
 
 static void mm_make_request(struct request_queue *q, struct bio *bio)
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 3438117..b493fa4 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -498,59 +498,13 @@ void md_flush_request(struct mddev *mddev, struct bio *bio)
 }
 EXPORT_SYMBOL(md_flush_request);
 
-/* Support for plugging.
- * This mirrors the plugging support in request_queue, but does not
- * require having a whole queue or request structures.
- * We allocate an md_plug_cb for each md device and each thread it gets
- * plugged on.  This links tot the private plug_handle structure in the
- * personality data where we keep a count of the number of outstanding
- * plugs so other code can see if a plug is active.
- */
-struct md_plug_cb {
-	struct blk_plug_cb cb;
-	struct mddev *mddev;
-};
-
-static void plugger_unplug(struct blk_plug_cb *cb)
+void md_unplug(struct blk_plug_cb *cb)
 {
-	struct md_plug_cb *mdcb = container_of(cb, struct md_plug_cb, cb);
-	md_wakeup_thread(mdcb->mddev->thread);
-	kfree(mdcb);
-}
-
-/* Check that an unplug wakeup will come shortly.
- * If not, wakeup the md thread immediately
- */
-int mddev_check_plugged(struct mddev *mddev)
-{
-	struct blk_plug *plug = current->plug;
-	struct md_plug_cb *mdcb;
-
-	if (!plug)
-		return 0;
-
-	list_for_each_entry(mdcb, &plug->cb_list, cb.list) {
-		if (mdcb->cb.callback == plugger_unplug &&
-		    mdcb->mddev == mddev) {
-			/* Already on the list, move to top */
-			if (mdcb != list_first_entry(&plug->cb_list,
-						    struct md_plug_cb,
-						    cb.list))
-				list_move(&mdcb->cb.list, &plug->cb_list);
-			return 1;
-		}
-	}
-	/* Not currently on the callback list */
-	mdcb = kmalloc(sizeof(*mdcb), GFP_ATOMIC);
-	if (!mdcb)
-		return 0;
-
-	mdcb->mddev = mddev;
-	mdcb->cb.callback = plugger_unplug;
-	list_add(&mdcb->cb.list, &plug->cb_list);
-	return 1;
+	struct mddev *mddev = cb->data;
+	md_wakeup_thread(mddev->thread);
+	kfree(cb);
 }
-EXPORT_SYMBOL_GPL(mddev_check_plugged);
+EXPORT_SYMBOL(md_unplug);
 
 static inline struct mddev *mddev_get(struct mddev *mddev)
 {
diff --git a/drivers/md/md.h b/drivers/md/md.h
index 91786c4..8f998e0 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -627,6 +627,12 @@ extern struct bio *bio_clone_mddev(struct bio *bio, gfp_t gfp_mask,
 				   struct mddev *mddev);
 extern struct bio *bio_alloc_mddev(gfp_t gfp_mask, int nr_iovecs,
 				   struct mddev *mddev);
-extern int mddev_check_plugged(struct mddev *mddev);
 extern void md_trim_bio(struct bio *bio, int offset, int size);
+
+extern void md_unplug(struct blk_plug_cb *cb);
+static inline int mddev_check_plugged(struct mddev *mddev)
+{
+	return !!blk_check_plugged(md_unplug, mddev,
+				   sizeof(struct blk_plug_cb));
+}
 #endif /* _MD_MD_H */
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 07954b0..68ba19d 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -911,11 +911,15 @@ struct blk_plug {
 };
 #define BLK_MAX_REQUEST_COUNT 16
 
+struct blk_plug_cb;
+typedef void (*blk_plug_cb_fn)(struct blk_plug_cb *);
 struct blk_plug_cb {
 	struct list_head list;
-	void (*callback)(struct blk_plug_cb *);
+	blk_plug_cb_fn callback;
+	void *data;
 };
-
+extern struct blk_plug_cb *blk_check_plugged(blk_plug_cb_fn unplug,
+					     void *data, int size);
 extern void blk_start_plug(struct blk_plug *);
 extern void blk_finish_plug(struct blk_plug *);
 extern void blk_flush_plug_list(struct blk_plug *, bool);




* [PATCH 3/4] block: stack unplug
From: NeilBrown @ 2012-07-26  2:58 UTC
  To: Jens Axboe; +Cc: linux-raid, Shaohua Li, NeilBrown

From: Shaohua Li <shli@kernel.org>

MD raid1 prepares to dispatch requests in its unplug callback.  If
make_request in a lower-level queue also uses an unplug callback to
dispatch requests, that callback is added to the plug's list while the
list is already being flushed, so it would never be called.
Re-checking the callback list until it stays empty handles this case.

Signed-off-by: Shaohua Li <shli@fusionio.com>
Signed-off-by: NeilBrown <neilb@suse.de>
---
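A sketch of the nesting this handles (helper name hypothetical): when a
stacked driver's callback submits bios, the lower device's make_request
may call blk_check_plugged() and add a new entry to
current->plug->cb_list, which has just been spliced empty; the outer
while loop in the diff below picks up such late additions:

static void stacked_unplug(struct blk_plug_cb *cb)
{
	/*
	 * Submitting bios here re-enters the block layer; a lower
	 * device's make_request may register its own callback on the
	 * same plug that is currently being flushed.
	 */
	submit_queued_bios(cb->data);	/* hypothetical helper */
	kfree(cb);
}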

 block/blk-core.c |   15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 7296d3d..bf38a5b 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -2900,17 +2900,16 @@ static void flush_plug_callbacks(struct blk_plug *plug)
 {
 	LIST_HEAD(callbacks);
 
-	if (list_empty(&plug->cb_list))
-		return;
-
-	list_splice_init(&plug->cb_list, &callbacks);
+	while (!list_empty(&plug->cb_list)) {
+		list_splice_init(&plug->cb_list, &callbacks);
 
-	while (!list_empty(&callbacks)) {
-		struct blk_plug_cb *cb = list_first_entry(&callbacks,
+		while (!list_empty(&callbacks)) {
+			struct blk_plug_cb *cb = list_first_entry(&callbacks,
 							  struct blk_plug_cb,
 							  list);
-		list_del(&cb->list);
-		cb->callback(cb);
+			list_del(&cb->list);
+			cb->callback(cb);
+		}
 	}
 }
 




* [PATCH 4/4] blk: pass from_schedule to non-request unplug functions.
From: NeilBrown @ 2012-07-26  2:58 UTC
  To: Jens Axboe; +Cc: linux-raid, Shaohua Li, NeilBrown

This will allow md/raid to know why the unplug was called, and to
act accordingly: if !from_schedule it is safe to perform tasks which
could themselves schedule.

Signed-off-by: NeilBrown <neilb@suse.de>
---
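A sketch of how a callback might use the new argument (the dispatch
helper is hypothetical):

static void my_unplug(struct blk_plug_cb *cb, bool from_schedule)
{
	struct mddev *mddev = cb->data;

	if (from_schedule) {
		/* Unplugged on entry to schedule(): must not block,
		 * so punt the work to the md thread. */
		md_wakeup_thread(mddev->thread);
	} else {
		/* Explicit blk_finish_plug() in process context:
		 * safe to do work that may itself schedule. */
		dispatch_pending_writes(mddev);	/* hypothetical */
	}
	kfree(cb);
}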

 block/blk-core.c       |    6 +++---
 drivers/block/umem.c   |    2 +-
 drivers/md/md.c        |    2 +-
 drivers/md/md.h        |    2 +-
 include/linux/blkdev.h |    2 +-
 5 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index bf38a5b..c3b17c3 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -2896,7 +2896,7 @@ static void queue_unplugged(struct request_queue *q, unsigned int depth,
 
 }
 
-static void flush_plug_callbacks(struct blk_plug *plug)
+static void flush_plug_callbacks(struct blk_plug *plug, bool from_schedule)
 {
 	LIST_HEAD(callbacks);
 
@@ -2908,7 +2908,7 @@ static void flush_plug_callbacks(struct blk_plug *plug)
 							  struct blk_plug_cb,
 							  list);
 			list_del(&cb->list);
-			cb->callback(cb);
+			cb->callback(cb, from_schedule);
 		}
 	}
 }
@@ -2948,7 +2948,7 @@ void blk_flush_plug_list(struct blk_plug *plug, bool from_schedule)
 
 	BUG_ON(plug->magic != PLUG_MAGIC);
 
-	flush_plug_callbacks(plug);
+	flush_plug_callbacks(plug, from_schedule);
 	if (list_empty(&plug->list))
 		return;
 
diff --git a/drivers/block/umem.c b/drivers/block/umem.c
index 6ef3489..eb0d821 100644
--- a/drivers/block/umem.c
+++ b/drivers/block/umem.c
@@ -513,7 +513,7 @@ static void process_page(unsigned long data)
 	}
 }
 
-static void mm_unplug(struct blk_plug_cb *cb)
+static void mm_unplug(struct blk_plug_cb *cb, bool from_schedule)
 {
 	struct cardinfo *card = cb->data;
 
diff --git a/drivers/md/md.c b/drivers/md/md.c
index b493fa4..db02d2e 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -498,7 +498,7 @@ void md_flush_request(struct mddev *mddev, struct bio *bio)
 }
 EXPORT_SYMBOL(md_flush_request);
 
-void md_unplug(struct blk_plug_cb *cb)
+void md_unplug(struct blk_plug_cb *cb, bool from_schedule)
 {
 	struct mddev *mddev = cb->data;
 	md_wakeup_thread(mddev->thread);
diff --git a/drivers/md/md.h b/drivers/md/md.h
index 8f998e0..f385b03 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -629,7 +629,7 @@ extern struct bio *bio_alloc_mddev(gfp_t gfp_mask, int nr_iovecs,
 				   struct mddev *mddev);
 extern void md_trim_bio(struct bio *bio, int offset, int size);
 
-extern void md_unplug(struct blk_plug_cb *cb);
+extern void md_unplug(struct blk_plug_cb *cb, bool from_schedule);
 static inline int mddev_check_plugged(struct mddev *mddev)
 {
 	return !!blk_check_plugged(md_unplug, mddev,
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 68ba19d..2698866 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -912,7 +912,7 @@ struct blk_plug {
 #define BLK_MAX_REQUEST_COUNT 16
 
 struct blk_plug_cb;
-typedef void (*blk_plug_cb_fn)(struct blk_plug_cb *);
+typedef void (*blk_plug_cb_fn)(struct blk_plug_cb *, bool);
 struct blk_plug_cb {
 	struct list_head list;
 	blk_plug_cb_fn callback;




* Re: [PATCH 0/4] Plugging changes for blk/md/umem
From: Jens Axboe @ 2012-07-31  7:08 UTC
  To: NeilBrown; +Cc: linux-raid, Shaohua Li

On 07/26/2012 04:58 AM, NeilBrown wrote:
> Hi Jens,
>  the following series makes a number of changes to plugging, moving
>  common code from md and umem into block, and modifying it to allow
>  md to get more value out of it.
>  They've been sitting in -next for a while.
>  Are you OK with me forwarding them to Linus, or would you rather they
>  went through your tree?

Thanks Neil, looks good. I will apply them to my tree and do some basic
testing, too, then push it off for this round (where I haven't pushed
out yet).

-- 
Jens Axboe



* Re: [PATCH 0/4] Plugging changes for blk/md/umem
From: NeilBrown @ 2012-07-31  7:25 UTC
  To: Jens Axboe; +Cc: linux-raid, Shaohua Li

On Tue, 31 Jul 2012 09:08:55 +0200 Jens Axboe <axboe@kernel.dk> wrote:

> On 07/26/2012 04:58 AM, NeilBrown wrote:
> > Hi Jens,
> >  the following series makes a number of changes to plugging, moving
> >  common code from md and umem into block, and modifying it to allow
> >  md to get more value out of it.
> >  They've been sitting in -next for a while.
> >  Are you OK with me forwarding them to Linus, or would you rather they
> >  went through your tree?
> 
> Thanks Neil, looks good. I will apply them to my tree and do some basic
> testing, too, then push it off for this round (where I haven't pushed
> out yet).
> 

Thanks Jens,
 However I've just today discovered that the very first patch causes a real
 regression in throughput for RAID5 and while I'm sure I can fix it with a
 subsequent patch (once I figure out exactly what is happening) it might make
 sense to hold them all back for 3.7.

 However if you would like to take them now and fix up later (it is only a
 performance regression so it won't really affect bisection) I can work with
 that.

Thanks,
NeilBrown



* Re: [PATCH 0/4] Plugging changes for blk/md/umem
From: Jens Axboe @ 2012-07-31  7:33 UTC
  To: NeilBrown; +Cc: linux-raid, Shaohua Li

On 07/31/2012 09:25 AM, NeilBrown wrote:
> On Tue, 31 Jul 2012 09:08:55 +0200 Jens Axboe <axboe@kernel.dk> wrote:
> 
>> On 07/26/2012 04:58 AM, NeilBrown wrote:
>>> Hi Jens,
>>>  the following series makes a number of changes to plugging, moving
>>>  common code from md and umem into block, and modifying it to allow
>>>  md to get more value out of it.
>>>  They've been sitting in -next for a while.
>>>  Are you OK with me forwarding them to Linus, or would you rather they
>>>  went through your tree?
>>
>> Thanks Neil, looks good. I will apply them to my tree and do some basic
>> testing, too, then push it off for this round (where I haven't pushed
>> out yet).
>>
> 
> Thanks Jens,
>  However I've just today discovered that the very first patch causes a real
>  regression in throughput for RAID5 and while I'm sure I can fix it with a
> >  subsequent patch (once I figure out exactly what is happening) it might make
>  sense to hold them all back for 3.7.
> 
>  However if you would like to take them now and fix up later (it is only a
>  performance regression so it won't really affect bisection) I can work with
>  that.

If you're confident we can fix it in this cycle (sounds like it, if you
have a grasp on the situation), then I don't think that should stop us.

-- 
Jens Axboe



* Re: [PATCH 0/4] Plugging changes for blk/md/umem
From: NeilBrown @ 2012-07-31  8:44 UTC
  To: Jens Axboe; +Cc: linux-raid, Shaohua Li

On Tue, 31 Jul 2012 09:33:35 +0200 Jens Axboe <axboe@kernel.dk> wrote:

> On 07/31/2012 09:25 AM, NeilBrown wrote:
> > On Tue, 31 Jul 2012 09:08:55 +0200 Jens Axboe <axboe@kernel.dk> wrote:
> > 
> >> On 07/26/2012 04:58 AM, NeilBrown wrote:
> >>> Hi Jens,
> >>>  the following series makes a number of changes to plugging, moving
> >>>  common code from md and umem into block, and modifying it to allow
> >>>  md to get more value out of it.
> >>>  They've been sitting in -next for a while.
> >>>  Are you OK with me forwarding them to Linus, or would you rather they
> >>>  went through your tree?
> >>
> >> Thanks Neil, looks good. I will apply them to my tree and do some basic
> >> testing, too, then push it off for this round (where I haven't pushed
> >> out yet).
> >>
> > 
> > Thanks Jens,
> >  However I've just today discovered that the very first patch causes a real
> >  regression in throughput for RAID5 and while I'm sure I can fix it with a
> >  subsequent patch (once I figure out exactly what is happening) it might make
> >  sense to hold them all back for 3.7.
> > 
> >  However if you would like to take them now and fix up later (it is only a
> >  performance regression so it won't really affect bisection) I can work with
> >  that.
> 
> If you're confident we can fix it in this cycle (sounds like it, if you
> have a grasp on the situation), then I don't think that should stop us.
> 

Sounds like a plan.
I have a few md patches queued which depend on those plugging patches, so if
you could cc me when you send your pull request (which Linus is keen for "by
Wednesday at the latest" I gather), I'll sort my tree out and send my pull
request.

Thanks,
NeilBrown



* Re: [PATCH 0/4] Plugging changes for blk/md/umem
From: Jens Axboe @ 2012-07-31  8:54 UTC
  To: NeilBrown; +Cc: linux-raid, Shaohua Li

On 07/31/2012 10:44 AM, NeilBrown wrote:
> On Tue, 31 Jul 2012 09:33:35 +0200 Jens Axboe <axboe@kernel.dk> wrote:
> 
>> On 07/31/2012 09:25 AM, NeilBrown wrote:
>>> On Tue, 31 Jul 2012 09:08:55 +0200 Jens Axboe <axboe@kernel.dk> wrote:
>>>
>>>> On 07/26/2012 04:58 AM, NeilBrown wrote:
>>>>> Hi Jens,
>>>>>  the following series makes a number of changes to plugging, moving
>>>>>  common code from md and umem into block, and modifying it to allow
>>>>>  md to get more value out of it.
>>>>>  They've been sitting in -next for a while.
>>>>>  Are you OK with me forwarding them to Linus, or would you rather they
>>>>>  went through your tree?
>>>>
>>>> Thanks Neil, looks good. I will apply them to my tree and do some basic
>>>> testing, too, then push it off for this round (where I haven't pushed
>>>> out yet).
>>>>
>>>
>>> Thanks Jens,
>>>  However I've just today discovered that the very first patch causes a real
>>>  regression in throughput for RAID5 and while I'm sure I can fix it with a
>>>  subsequent patch (once I figure out exactly what is happening) it might make
>>>  sense to hold them all back for 3.7.
>>>
>>>  However if you would like to take them now and fix up later (it is only a
>>>  performance regression so it won't really affect bisection) I can work with
>>>  that.
>>
>> If you're confident we can fix it in this cycle (sounds like it, if you
>> have a grasp on the situation), then I don't think that should stop us.
>>
> 
> Sounds like a plan.
> I have a few md patches queued which depend on those plugging patches, so if
> you could cc me when you send your pull request (which Linus is keen for "by
> Wednesday at the latest" I gather), I'll sort my tree out and send my pull
> request.

They'll go out today, or tomorrow at the latest. I'll CC you. This merge
window has been unfortunately timed with my vacation (Linus, how dare
you!), so I have a bit of a time crunch. On the plus side, it's smaller
than last round.

-- 
Jens Axboe

