public inbox for linux-kernel@vger.kernel.org
* [PATCH 0/5] blk-mq: bitmap tag: fix and cleanup
@ 2014-05-10 17:01 Ming Lei
  2014-05-10 17:01 ` [PATCH 1/5] blk-mq: bitmap tag: use clear_bit_unlock in bt_clear_tag() Ming Lei
                   ` (5 more replies)
  0 siblings, 6 replies; 8+ messages in thread
From: Ming Lei @ 2014-05-10 17:01 UTC (permalink / raw)
  To: Jens Axboe, linux-kernel

Hi Jens and guys,

I just took a look at the new bitmap-based tag allocation patches
in the -next tree and played with them for a while.

Some of them are fixes and some are cleanups; please review.

Thanks,
--
Ming Lei




* [PATCH 1/5] blk-mq: bitmap tag: use clear_bit_unlock in bt_clear_tag()
  2014-05-10 17:01 [PATCH 0/5] blk-mq: bitmap tag: fix and cleanup Ming Lei
@ 2014-05-10 17:01 ` Ming Lei
  2014-05-10 17:01 ` [PATCH 2/5] blk-mq: bitmap tag: remove barrier " Ming Lei
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 8+ messages in thread
From: Ming Lei @ 2014-05-10 17:01 UTC (permalink / raw)
  To: Jens Axboe, linux-kernel; +Cc: Ming Lei

The unlock memory barrier needs to order access to the request in the
free path against clearing the tag bit; otherwise the request free
path may see an allocated request, or an initialized request in the
allocation path might be modified by the ongoing free path.

Signed-off-by: Ming Lei <tom.leiming@gmail.com>
---
 block/blk-mq-tag.c |    6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 8dc933b..b8b968d 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -235,7 +235,11 @@ static void bt_clear_tag(struct blk_mq_bitmap_tags *bt, unsigned int tag)
 	const int index = TAG_TO_INDEX(tag);
 	struct bt_wait_state *bs;
 
-	clear_bit(TAG_TO_BIT(tag), &bt->map[index].word);
+	/*
 +	 * The unlock memory barrier needs to order access to the request
 +	 * in the free path against clearing the tag bit.
+	 */
+	clear_bit_unlock(TAG_TO_BIT(tag), &bt->map[index].word);
 
 	bs = bt_wake_ptr(bt);
 	if (bs && atomic_dec_and_test(&bs->wait_cnt)) {
-- 
1.7.9.5



* [PATCH 2/5] blk-mq: bitmap tag: remove barrier in bt_clear_tag()
  2014-05-10 17:01 [PATCH 0/5] blk-mq: bitmap tag: fix and cleanup Ming Lei
  2014-05-10 17:01 ` [PATCH 1/5] blk-mq: bitmap tag: use clear_bit_unlock in bt_clear_tag() Ming Lei
@ 2014-05-10 17:01 ` Ming Lei
  2014-05-10 17:01 ` [PATCH 3/5] blk-mq: bitmap tag: select random tag between 0 and (depth - 1) Ming Lei
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 8+ messages in thread
From: Ming Lei @ 2014-05-10 17:01 UTC (permalink / raw)
  To: Jens Axboe, linux-kernel; +Cc: Ming Lei

The barrier isn't necessary because both atomic_dec_and_test()
and wake_up() imply a memory barrier.

Signed-off-by: Ming Lei <tom.leiming@gmail.com>
---
 block/blk-mq-tag.c |    1 -
 1 file changed, 1 deletion(-)

diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index b8b968d..777aad7 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -243,7 +243,6 @@ static void bt_clear_tag(struct blk_mq_bitmap_tags *bt, unsigned int tag)
 
 	bs = bt_wake_ptr(bt);
 	if (bs && atomic_dec_and_test(&bs->wait_cnt)) {
-		smp_mb__after_clear_bit();
 		atomic_set(&bs->wait_cnt, BT_WAIT_BATCH);
 		bt_index_inc(&bt->wake_index);
 		wake_up(&bs->wait);
-- 
1.7.9.5



* [PATCH 3/5] blk-mq: bitmap tag: select random tag between 0 and (depth - 1)
  2014-05-10 17:01 [PATCH 0/5] blk-mq: bitmap tag: fix and cleanup Ming Lei
  2014-05-10 17:01 ` [PATCH 1/5] blk-mq: bitmap tag: use clear_bit_unlock in bt_clear_tag() Ming Lei
  2014-05-10 17:01 ` [PATCH 2/5] blk-mq: bitmap tag: remove barrier " Ming Lei
@ 2014-05-10 17:01 ` Ming Lei
  2014-05-10 17:01 ` [PATCH 4/5] blk-mq: bitmap tag: cleanup blk_mq_init_tags Ming Lei
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 8+ messages in thread
From: Ming Lei @ 2014-05-10 17:01 UTC (permalink / raw)
  To: Jens Axboe, linux-kernel; +Cc: Ming Lei

The tag should be selected at random between 0 and (depth - 1),
each with probability 1/depth, instead of between 0 and (depth - 2)
with probability 1/(depth - 1).

Signed-off-by: Ming Lei <tom.leiming@gmail.com>
---
 block/blk-mq-tag.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 777aad7..77d2b18 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -427,7 +427,7 @@ void blk_mq_tag_init_last_tag(struct blk_mq_tags *tags, unsigned int *tag)
 {
 	unsigned int depth = tags->nr_tags - tags->nr_reserved_tags;
 
-	*tag = prandom_u32() % (depth - 1);
+	*tag = prandom_u32() % depth;
 }
 
 ssize_t blk_mq_tag_sysfs_show(struct blk_mq_tags *tags, char *page)
-- 
1.7.9.5



* [PATCH 4/5] blk-mq: bitmap tag: cleanup blk_mq_init_tags
  2014-05-10 17:01 [PATCH 0/5] blk-mq: bitmap tag: fix and cleanup Ming Lei
                   ` (2 preceding siblings ...)
  2014-05-10 17:01 ` [PATCH 3/5] blk-mq: bitmap tag: select random tag between 0 and (depth - 1) Ming Lei
@ 2014-05-10 17:01 ` Ming Lei
  2014-05-10 17:01 ` [PATCH 5/5] blk-mq: bitmap tag: fix wait batch for allocation Ming Lei
  2014-05-10 19:39 ` [PATCH 0/5] blk-mq: bitmap tag: fix and cleanup Jens Axboe
  5 siblings, 0 replies; 8+ messages in thread
From: Ming Lei @ 2014-05-10 17:01 UTC (permalink / raw)
  To: Jens Axboe, linux-kernel; +Cc: Ming Lei

Neither nr_cache nor nr_tags is needed for the bitmap tag allocator anymore.

Signed-off-by: Ming Lei <tom.leiming@gmail.com>
---
 block/blk-mq-tag.c |    4 ----
 1 file changed, 4 deletions(-)

diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 77d2b18..6532aea 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -394,7 +394,6 @@ enomem:
 struct blk_mq_tags *blk_mq_init_tags(unsigned int total_tags,
 				     unsigned int reserved_tags, int node)
 {
-	unsigned int nr_tags, nr_cache;
 	struct blk_mq_tags *tags;
 
 	if (total_tags > BLK_MQ_TAG_MAX) {
@@ -406,9 +405,6 @@ struct blk_mq_tags *blk_mq_init_tags(unsigned int total_tags,
 	if (!tags)
 		return NULL;
 
-	nr_tags = total_tags - reserved_tags;
-	nr_cache = nr_tags / num_online_cpus();
-
 	tags->nr_tags = total_tags;
 	tags->nr_reserved_tags = reserved_tags;
 
-- 
1.7.9.5



* [PATCH 5/5] blk-mq: bitmap tag: fix wait batch for allocation
  2014-05-10 17:01 [PATCH 0/5] blk-mq: bitmap tag: fix and cleanup Ming Lei
                   ` (3 preceding siblings ...)
  2014-05-10 17:01 ` [PATCH 4/5] blk-mq: bitmap tag: cleanup blk_mq_init_tags Ming Lei
@ 2014-05-10 17:01 ` Ming Lei
  2014-05-10 19:39 ` [PATCH 0/5] blk-mq: bitmap tag: fix and cleanup Jens Axboe
  5 siblings, 0 replies; 8+ messages in thread
From: Ming Lei @ 2014-05-10 17:01 UTC (permalink / raw)
  To: Jens Axboe, linux-kernel; +Cc: Ming Lei

One interesting point of bitmap tag allocation is that a blocked
allocation may wait for at least BT_WAIT_BATCH tags to be freed.
Obviously, this can hang the allocation if the depth is smaller
than BT_WAIT_BATCH.

This patch simply sets the wait count to 1 if the depth is smaller
than BT_WAIT_BATCH to avoid the problem.

A better idea might be to set it to some fraction of the depth
(1/8 or similar), but that would need more testing for
verification.

Signed-off-by: Ming Lei <tom.leiming@gmail.com>
---
 block/blk-mq-tag.c |   14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 6532aea..8e3a22d 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -135,6 +135,16 @@ static struct bt_wait_state *bt_wait_ptr(struct blk_mq_bitmap_tags *bt,
 	return bs;
 }
 
+static void bs_reset_wait_cnt(struct blk_mq_bitmap_tags *bt,
+		struct bt_wait_state *bs)
+{
+	int cnt;
+
+	cnt = bt->depth < BT_WAIT_BATCH ? 1 : BT_WAIT_BATCH;
+
+	atomic_set(&bs->wait_cnt, cnt);
+}
+
 static int bt_get(struct blk_mq_bitmap_tags *bt, struct blk_mq_hw_ctx *hctx,
 		  unsigned int *last_tag, gfp_t gfp)
 {
@@ -160,7 +170,7 @@ static int bt_get(struct blk_mq_bitmap_tags *bt, struct blk_mq_hw_ctx *hctx,
 			break;
 
 		if (was_empty)
-			atomic_set(&bs->wait_cnt, BT_WAIT_BATCH);
+			bs_reset_wait_cnt(bt, bs);
 
 		io_schedule();
 	} while (1);
@@ -243,7 +253,7 @@ static void bt_clear_tag(struct blk_mq_bitmap_tags *bt, unsigned int tag)
 
 	bs = bt_wake_ptr(bt);
 	if (bs && atomic_dec_and_test(&bs->wait_cnt)) {
-		atomic_set(&bs->wait_cnt, BT_WAIT_BATCH);
+		bs_reset_wait_cnt(bt, bs);
 		bt_index_inc(&bt->wake_index);
 		wake_up(&bs->wait);
 	}
-- 
1.7.9.5



* Re: [PATCH 0/5] blk-mq: bitmap tag: fix and cleanup
  2014-05-10 17:01 [PATCH 0/5] blk-mq: bitmap tag: fix and cleanup Ming Lei
                   ` (4 preceding siblings ...)
  2014-05-10 17:01 ` [PATCH 5/5] blk-mq: bitmap tag: fix wait batch for allocation Ming Lei
@ 2014-05-10 19:39 ` Jens Axboe
  2014-05-11  1:13   ` Ming Lei
  5 siblings, 1 reply; 8+ messages in thread
From: Jens Axboe @ 2014-05-10 19:39 UTC (permalink / raw)
  To: Ming Lei, linux-kernel

On 05/10/2014 11:01 AM, Ming Lei wrote:
> Hi Jens and guys,
> 
> I just took a look at the new bitmap-based tag allocation patches
> in the -next tree and played with them for a while.
> 
> Some of them are fixes and some are cleanups; please review.

Thanks, applied 1-4. For #5 I think you are looking at an older
version; I already fixed that one yesterday.

-- 
Jens Axboe



* Re: [PATCH 0/5] blk-mq: bitmap tag: fix and cleanup
  2014-05-10 19:39 ` [PATCH 0/5] blk-mq: bitmap tag: fix and cleanup Jens Axboe
@ 2014-05-11  1:13   ` Ming Lei
  0 siblings, 0 replies; 8+ messages in thread
From: Ming Lei @ 2014-05-11  1:13 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Linux Kernel Mailing List

On Sun, May 11, 2014 at 3:39 AM, Jens Axboe <axboe@kernel.dk> wrote:
> On 05/10/2014 11:01 AM, Ming Lei wrote:
>> Hi Jens and guys,
>>
>> I just took a look at the new bitmap-based tag allocation patches
>> in the -next tree and played with them for a while.
>>
>> Some of them are fixes and some are cleanups; please review.
>
> Thanks, applied 1-4. For #5 I think you are looking at an older
> version; I already fixed that one yesterday.

Thanks, that would be because I pulled from -next instead of your
tree directly.


Thanks,
-- 
Ming Lei


end of thread, other threads:[~2014-05-11  1:13 UTC | newest]

Thread overview: 8+ messages
2014-05-10 17:01 [PATCH 0/5] blk-mq: bitmap tag: fix and cleanup Ming Lei
2014-05-10 17:01 ` [PATCH 1/5] blk-mq: bitmap tag: use clear_bit_unlock in bt_clear_tag() Ming Lei
2014-05-10 17:01 ` [PATCH 2/5] blk-mq: bitmap tag: remove barrier " Ming Lei
2014-05-10 17:01 ` [PATCH 3/5] blk-mq: bitmap tag: select random tag between 0 and (depth - 1) Ming Lei
2014-05-10 17:01 ` [PATCH 4/5] blk-mq: bitmap tag: cleanup blk_mq_init_tags Ming Lei
2014-05-10 17:01 ` [PATCH 5/5] blk-mq: bitmap tag: fix wait batch for allocation Ming Lei
2014-05-10 19:39 ` [PATCH 0/5] blk-mq: bitmap tag: fix and cleanup Jens Axboe
2014-05-11  1:13   ` Ming Lei
