public inbox for linux-kernel@vger.kernel.org
* [PATCH 0/3] Fix problems about handling bio to plug when bio merged failed.
@ 2012-08-10 11:44 Jianpeng Ma
  2012-09-18  7:19 ` Jianpeng Ma
  0 siblings, 1 reply; 2+ messages in thread
From: Jianpeng Ma @ 2012-08-10 11:44 UTC (permalink / raw)
  To: axboe; +Cc: Shaohua Li, linux-kernel


There are some problems with handling a bio that fails to merge into the plug list.
Patch1 avoids an unnecessary plug should_sort test; it is a cleanup, not a bug fix.
Patch2 fixes a bug when handling multiple devices: some devices were missed when tracing the plug operation.

Because of patch2, it is no longer necessary to sort when flushing the plug. Although
patch2 is O(n^2), which is worse than list_sort's O(n*log(n)), the plug list is
unlikely to be long, so I think patch3 is acceptable.


Jianpeng Ma (3):
  block: avoid unnecessary plug should_sort test.
  block: Fix not tracing all device plug-operation.
  block: Remove unnecessary requests sort.

 block/blk-core.c |   35 ++++++++++++++++++-----------------
 1 file changed, 18 insertions(+), 17 deletions(-)

-- 
1.7.9.5


* Re: [PATCH 0/3] Fix problems about handling bio to plug when bio merged failed.
  2012-08-10 11:44 [PATCH 0/3] Fix problems about handling bio to plug when bio merged failed Jianpeng Ma
@ 2012-09-18  7:19 ` Jianpeng Ma
  0 siblings, 0 replies; 2+ messages in thread
From: Jianpeng Ma @ 2012-09-18  7:19 UTC (permalink / raw)
  To: axboe; +Cc: shli, linux-kernel


On 2012-08-10 19:44 Jianpeng Ma <majianpeng@gmail.com> Wrote:
>There are some problems with handling a bio that fails to merge into the plug list.
>Patch1 avoids an unnecessary plug should_sort test; it is a cleanup, not a bug fix.
>Patch2 fixes a bug when handling multiple devices: some devices were missed when tracing the plug operation.
>
>Because of patch2, it is no longer necessary to sort when flushing the plug. Although
>patch2 is O(n^2), which is worse than list_sort's O(n*log(n)), the plug list is
>unlikely to be long, so I think patch3 is acceptable.
>
>
>Jianpeng Ma (3):
>  block: avoid unnecessary plug should_sort test.
>  block: Fix not tracing all device plug-operation.
>  block: Remove unnecessary requests sort.
>
> block/blk-core.c |   35 ++++++++++++++++++-----------------
> 1 file changed, 18 insertions(+), 17 deletions(-)
>
>-- 
>1.7.9.5
Hi axboe:
	Sorry to ask again, but I found a problem in the code this patchset touches, so I am asking about the status of the patchset once more.
If you have discarded it, I will send a patch against the old code; otherwise I will wait for the patchset to be applied and continue from there.

The problem concerns blk_plug.
My workload is a 16-disk raid5, written through our filesystem in direct I/O mode.
Using blktrace I captured these messages:

  8,16   0     3570     1.083923979  2519  I   W 144323176 + 24 [md127_raid5]
  8,16   0        0     1.083926214     0  m   N cfq2519 insert_request
  8,16   0     3571     1.083926586  2519  I   W 144323072 + 104 [md127_raid5]
  8,16   0        0     1.083926952     0  m   N cfq2519 insert_request
  8,16   0     3572     1.083927180  2519  U   N [md127_raid5] 2
  8,16   0        0     1.083927870     0  m   N cfq2519 Not idling. st->count:1
  8,16   0        0     1.083928320     0  m   N cfq2519 dispatch_insert
  8,16   0        0     1.083928951     0  m   N cfq2519 dispatched a request
  8,16   0        0     1.083929443     0  m   N cfq2519 activate rq, drv=1
  8,16   0     3573     1.083929530  2519  D   W 144323176 + 24 [md127_raid5]
  8,16   0        0     1.083933883     0  m   N cfq2519 Not idling. st->count:1
  8,16   0        0     1.083934189     0  m   N cfq2519 dispatch_insert
  8,16   0        0     1.083934654     0  m   N cfq2519 dispatched a request
  8,16   0        0     1.083935014     0  m   N cfq2519 activate rq, drv=2
  8,16   0     3574     1.083935101  2519  D   W 144323072 + 104 [md127_raid5]
  8,16   0     3575     1.084196179     0  C   W 144323176 + 24 [0]
  8,16   0        0     1.084197979     0  m   N cfq2519 complete rqnoidle 0
  8,16   0     3576     1.084769073     0  C   W 144323072 + 104 [0]
  ......
  8,16   1     3596     1.091394357  2519  I   W 144322544 + 16 [md127_raid5]
  8,16   1        0     1.091396181     0  m   N cfq2519 insert_request
  8,16   1     3597     1.091396571  2519  I   W 144322520 + 24 [md127_raid5]
  8,16   1        0     1.091396934     0  m   N cfq2519 insert_request
  8,16   1     3598     1.091397165  2519  I   W 144322488 + 32 [md127_raid5]
  8,16   1        0     1.091397477     0  m   N cfq2519 insert_request
  8,16   1     3599     1.091397708  2519  I   W 144322432 + 56 [md127_raid5]
  8,16   1        0     1.091398023     0  m   N cfq2519 insert_request
  8,16   1     3600     1.091398284  2519  U   N [md127_raid5] 4
  8,16   1        0     1.091398986     0  m   N cfq2519 Not idling. st->count:1
  8,16   1        0     1.091399511     0  m   N cfq2519 dispatch_insert
  8,16   1        0     1.091400217     0  m   N cfq2519 dispatched a request
  8,16   1        0     1.091400688     0  m   N cfq2519 activate rq, drv=1
  8,16   1     3601     1.091400766  2519  D   W 144322544 + 16 [md127_raid5]
  8,16   1        0     1.091406151     0  m   N cfq2519 Not idling. st->count:1
  8,16   1        0     1.091406460     0  m   N cfq2519 dispatch_insert
  8,16   1        0     1.091406931     0  m   N cfq2519 dispatched a request
  8,16   1        0     1.091407291     0  m   N cfq2519 activate rq, drv=2
  8,16   1     3602     1.091407378  2519  D   W 144322520 + 24 [md127_raid5]
  8,16   1        0     1.091414006     0  m   N cfq2519 Not idling. st->count:1
  8,16   1        0     1.091414297     0  m   N cfq2519 dispatch_insert
  8,16   1        0     1.091414702     0  m   N cfq2519 dispatched a request
  8,16   1        0     1.091415047     0  m   N cfq2519 activate rq, drv=3
  8,16   1     3603     1.091415125  2519  D   W 144322488 + 32 [md127_raid5]
  8,16   1        0     1.091416469     0  m   N cfq2519 Not idling. st->count:1
  8,16   1        0     1.091416754     0  m   N cfq2519 dispatch_insert
  8,16   1        0     1.091417186     0  m   N cfq2519 dispatched a request
  8,16   1        0     1.091417535     0  m   N cfq2519 activate rq, drv=4
  8,16   1     3604     1.091417628  2519  D   W 144322432 + 56 [md127_raid5]
  8,16   1     3605     1.091857225  4393  C   W 144322544 + 16 [0]
  8,16   1        0     1.091858753     0  m   N cfq2519 complete rqnoidle 0
  8,16   1     3606     1.092068456  4393  C   W 144322520 + 24 [0]
  8,16   1        0     1.092069851     0  m   N cfq2519 complete rqnoidle 0
  8,16   1     3607     1.092350440  4393  C   W 144322488 + 32 [0]
  8,16   1        0     1.092351688     0  m   N cfq2519 complete rqnoidle 0
  8,16   1     3608     1.093629323     0  C   W 144322432 + 56 [0]
  8,16   1        0     1.093631151     0  m   N cfq2519 complete rqnoidle 0
  8,16   1        0     1.093631574     0  m   N cfq2519 will busy wait
  8,16   1        0     1.093631829     0  m   N cfq schedule dispatch

Because elv_attempt_insert_merge() only attempts a back merge, the four requests above cannot merge even in theory.
I traced for ten minutes and counted this situation: it accounts for about 25% of cases.
So I think something should be done about it.
Digging in, I found that the elevator only provides back merging via its hash table.
So I think we can sort in blk_flush_plug_list().
At present the requests in a blk_plug are sorted, but only by request_queue, not by the request's start sector.
So I modified the code:

diff --git a/block/blk-core.c b/block/blk-core.c
index 1f61b74..c382abb 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -2750,7 +2750,8 @@ static int plug_rq_cmp(void *priv, struct list_head *a, struct list_head *b)
        struct request *rqa = container_of(a, struct request, queuelist);
        struct request *rqb = container_of(b, struct request, queuelist);
 
-       return !(rqa->q <= rqb->q);
+       return !(rqa->q < rqb->q ||
+               (rqa->q == rqb->q && blk_rq_pos(rqa) < blk_rq_pos(rqb)));
 }
 
 /*
@@ -2822,10 +2823,10 @@ void blk_flush_plug_list(struct blk_plug *plug, bool from_schedule)
 
        list_splice_init(&plug->list, &list);
 
-       if (plug->should_sort) {
+//     if (plug->should_sort) {
                list_sort(NULL, &list, plug_rq_cmp);
                plug->should_sort = 0;
-       }
+//     }
 
        q = NULL;
        depth = 0;

With this change, I tested and no longer saw the situation above.
So I think I can send you a patch, but because of the earlier patchset I wanted to ask your advice on how to proceed with both.

BTW, why does the elevator only support back merging via the hash?

Jianpeng

