public inbox for linux-kernel@vger.kernel.org
* [2.6.23 PATCH 07/18] dm io: fix panic on large request
@ 2007-07-11 20:58 Alasdair G Kergon
  2007-07-17 13:16 ` Patrick McHardy
  0 siblings, 1 reply; 8+ messages in thread
From: Alasdair G Kergon @ 2007-07-11 20:58 UTC (permalink / raw)
  To: Andrew Morton; +Cc: dm-devel, linux-kernel, Jun'ichi Nomura

From: "Jun'ichi Nomura" <j-nomura@ce.jp.nec.com>

bio_alloc_bioset() will return NULL if 'num_vecs' is too large.
Use bio_get_nr_vecs() to get an estimate of the maximum number.

Signed-off-by: "Jun'ichi Nomura" <j-nomura@ce.jp.nec.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>

---
 drivers/md/dm-io.c |    5 ++++-
 1 files changed, 4 insertions(+), 1 deletion(-)

Index: linux/drivers/md/dm-io.c
===================================================================
--- linux.orig/drivers/md/dm-io.c	2007-07-11 21:37:32.000000000 +0100
+++ linux/drivers/md/dm-io.c	2007-07-11 21:37:43.000000000 +0100
@@ -293,7 +293,10 @@ static void do_region(int rw, unsigned i
 		 * bvec for bio_get/set_region() and decrement bi_max_vecs
 		 * to hide it from bio_add_page().
 		 */
-		num_bvecs = (remaining / (PAGE_SIZE >> SECTOR_SHIFT)) + 2;
+		num_bvecs = dm_sector_div_up(remaining,
+					     (PAGE_SIZE >> SECTOR_SHIFT));
+		num_bvecs = 1 + min_t(int, bio_get_nr_vecs(where->bdev),
+				      num_bvecs);
 		bio = bio_alloc_bioset(GFP_NOIO, num_bvecs, io->client->bios);
 		bio->bi_sector = where->sector + (where->count - remaining);
 		bio->bi_bdev = where->bdev;

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [2.6.23 PATCH 07/18] dm io: fix panic on large request
  2007-07-11 20:58 [2.6.23 PATCH 07/18] dm io: fix panic on large request Alasdair G Kergon
@ 2007-07-17 13:16 ` Patrick McHardy
  2007-07-17 16:39   ` Jun'ichi Nomura
  2007-07-18 15:23   ` Chuck Ebbert
  0 siblings, 2 replies; 8+ messages in thread
From: Patrick McHardy @ 2007-07-17 13:16 UTC (permalink / raw)
  To: Alasdair G Kergon
  Cc: Andrew Morton, dm-devel, linux-kernel, Jun'ichi Nomura

Alasdair G Kergon wrote:
> From: "Jun'ichi Nomura" <j-nomura@ce.jp.nec.com>
> 
> bio_alloc_bioset() will return NULL if 'num_vecs' is too large.
> Use bio_get_nr_vecs() to get an estimate of the maximum number.
> 
> Signed-off-by: "Jun'ichi Nomura" <j-nomura@ce.jp.nec.com>
> Signed-off-by: Alasdair G Kergon <agk@redhat.com>
> 
> ---
>  drivers/md/dm-io.c |    5 ++++-
>  1 files changed, 4 insertions(+), 1 deletion(-)


This patch reproducibly oopses my box:

[  126.754204] BUG: unable to handle kernel NULL pointer dereference at
virtual address 00000000
[  126.754326]  printing eip:
[  126.754369] c0141a67
[  126.754420] *pde = 00000000
[  126.754465] Oops: 0000 [#1]
[  126.754507] PREEMPT
[  126.754585] Modules linked in: [...]


[  126.758372] CPU:    0
[  126.758373] EIP:    0060:[<c0141a67>]    Not tainted VLI
[  126.758374] EFLAGS: 00010282   (2.6.22 #1)
[  126.758511] EIP is at mempool_free+0xe/0xc0
[  126.758558] eax: d39e65d0   ebx: 00000000   ecx: df2b9898   edx: 00000000
[  126.758605] esi: 00000000   edi: d39e65d0   ebp: d487d6d0   esp: df79fec0
[  126.758652] ds: 007b   es: 007b   fs: 0000  gs: 0000  ss: 0068
[  126.758699] Process kcryptd/0 (pid: 3218, ti=df79f000 task=df2b9640
task.ti=df79f000)
[  126.758747] Stack: 00000000 00000000 d3835f80 00000000 e08b0923
e08a5f69 00000200 e0ad1080
[  126.759093]        dfb5ab40 d3835f80 e08b08c0 00000000 e08a5fb7
c01804d8 00000000 00000200
[  126.759439]        c520bc00 00000c00 d0b77438 d5754b00 df79ff5c
e08a515e d0b77444 d5754b00
[  126.759858] Call Trace:
[  126.759965]  [<e08b0923>] clone_endio+0x63/0xc0 [dm_mod]
[  126.760066]  [<e08a5f69>] crypt_convert+0x131/0x17f [dm_crypt]
[  126.760168]  [<e08b08c0>] clone_endio+0x0/0xc0 [dm_mod]
[  126.760264]  [<e08a5fb7>] kcryptd_do_work+0x0/0x30f [dm_crypt]
[  126.760349]  [<c01804d8>] bio_endio+0x33/0x5d
[  126.760462]  [<e08a515e>] dec_pending+0x28/0x39 [dm_crypt]
[  126.760558]  [<e08a61e6>] kcryptd_do_work+0x22f/0x30f [dm_crypt]
[  126.760669]  [<c0112182>] update_stats_wait_end+0x7f/0xb2
[  126.760801]  [<e08a5fb7>] kcryptd_do_work+0x0/0x30f [dm_crypt]
[  126.760888]  [<c012700e>] run_workqueue+0x84/0x179
[  126.760990]  [<c0127292>] worker_thread+0x0/0xf0
[  126.761074]  [<c012732f>] worker_thread+0x9d/0xf0
[  126.761160]  [<c012a360>] autoremove_wake_function+0x0/0x37
[  126.761256]  [<c0127292>] worker_thread+0x0/0xf0
[  126.761334]  [<c012a15c>] kthread+0x52/0x58
[  126.761411]  [<c012a10a>] kthread+0x0/0x58
[  126.761496]  [<c0104983>] kernel_thread_helper+0x7/0x14
[  126.761598]  =======================
[  126.761717] Code: 1c 00 89 f6 eb a9 b8 88 13 00 00 e8 b4 56 1c 00 8d
74 26 00 eb d5 31 db e9 11 ff ff ff 57 56 53 83 ec 04 89 c7 89 d6 85 c0
74 55 <8b> 02 39 42 04 7d 46 9c 58 90 8d b4 26 00 00 00 00 89 c3 fa 90
[  126.763964] EIP: [<c0141a67>] mempool_free+0xe/0xc0 SS:ESP 0068:df79fec0


* Re: [2.6.23 PATCH 07/18] dm io: fix panic on large request
  2007-07-17 13:16 ` Patrick McHardy
@ 2007-07-17 16:39   ` Jun'ichi Nomura
  2007-07-17 17:50     ` Patrick McHardy
  2007-07-18 15:23   ` Chuck Ebbert
  1 sibling, 1 reply; 8+ messages in thread
From: Jun'ichi Nomura @ 2007-07-17 16:39 UTC (permalink / raw)
  To: Patrick McHardy; +Cc: Alasdair G Kergon, Andrew Morton, dm-devel, linux-kernel

Hi,

Patrick McHardy wrote:
> Alasdair G Kergon wrote:
>> From: "Jun'ichi Nomura" <j-nomura@ce.jp.nec.com>
>>
>> bio_alloc_bioset() will return NULL if 'num_vecs' is too large.
>> Use bio_get_nr_vecs() to get an estimate of the maximum number.
>>
>> Signed-off-by: "Jun'ichi Nomura" <j-nomura@ce.jp.nec.com>
>> Signed-off-by: Alasdair G Kergon <agk@redhat.com>
>>
>> ---
>>  drivers/md/dm-io.c |    5 ++++-
>>  1 files changed, 4 insertions(+), 1 deletion(-)
> 
> 
> This patch reproducibly oopses my box:

Thanks for the report.
But I'm not sure how the patch is related to the oops.

The stack trace shows the oops occurred in dm-crypt,
which doesn't use the part of the code modified by the patch
(dm-io).

Are you using other dm modules such as dm-multipath, dm-mirror
or dm-snapshot?
If so, can you take the output of 'dmsetup table' and 'dmsetup ls'?

Do you have a reliable way to reproduce the oops which I can try?

> [oops trace snipped]

Thanks,
-- 
Jun'ichi Nomura, NEC Corporation of America


* Re: [2.6.23 PATCH 07/18] dm io: fix panic on large request
  2007-07-17 16:39   ` Jun'ichi Nomura
@ 2007-07-17 17:50     ` Patrick McHardy
  2007-07-17 22:20       ` Jun'ichi Nomura
  0 siblings, 1 reply; 8+ messages in thread
From: Patrick McHardy @ 2007-07-17 17:50 UTC (permalink / raw)
  To: Jun'ichi Nomura
  Cc: Alasdair G Kergon, Andrew Morton, dm-devel, linux-kernel

Jun'ichi Nomura wrote:
>>>From: "Jun'ichi Nomura" <j-nomura@ce.jp.nec.com>
>>>
>>>bio_alloc_bioset() will return NULL if 'num_vecs' is too large.
>>>Use bio_get_nr_vecs() to get an estimate of the maximum number.
>>>
>>>Signed-off-by: "Jun'ichi Nomura" <j-nomura@ce.jp.nec.com>
>>>Signed-off-by: Alasdair G Kergon <agk@redhat.com>
>>>
>>
>>This patch reproducibly oopses my box:
> 
> 
> Thanks for the report.
> But I'm not sure how the patch is related to the oops.
> 
> The stack trace shows the oops occurred in dm-crypt,
> which doesn't use the part of the code modified by the patch
> (dm-io).


I tried reverting the individual patches until it stopped oopsing,
so it may have been luck. I'll see if I can break it again by
reverting the revert.

> Are you using other dm modules such as dm-multipath, dm-mirror
> or dm-snapshot?
> If so, can you take the output of 'dmsetup table' and 'dmsetup ls'?


No other modules.

> Do you have a reliable way to reproduce the oops which I can try?


"/etc/init.d/cryptdisk start" (debian) on a luks partition triggered
it for me.



* Re: [2.6.23 PATCH 07/18] dm io: fix panic on large request
  2007-07-17 17:50     ` Patrick McHardy
@ 2007-07-17 22:20       ` Jun'ichi Nomura
  2007-07-18 10:08         ` Patrick McHardy
  0 siblings, 1 reply; 8+ messages in thread
From: Jun'ichi Nomura @ 2007-07-17 22:20 UTC (permalink / raw)
  To: Patrick McHardy; +Cc: Alasdair G Kergon, Andrew Morton, dm-devel, linux-kernel

Patrick McHardy wrote:
> Jun'ichi Nomura wrote:
>> Are you using other dm modules such as dm-multipath, dm-mirror
>> or dm-snapshot?
>> If so, can you take the output of 'dmsetup table' and 'dmsetup ls'?
> 
> No other modules.
> 
>> Do you have a reliable way to reproduce the oops which I can try?
> 
> "/etc/init.d/cryptdisk start" (debian) on a luks partition triggered
> it for me.

With today's git HEAD (commit 49c13b51a15f1ba9f6d47e26e4a3886c4f3931e2),
I tried the following but could not reproduce the oops here.
  # cryptsetup luksFormat /dev/sdb1
  # cryptsetup luksOpen /dev/sdb1 c
  # mkfs.ext3 /dev/mapper/c
  <mount it and do some I/O>

Thanks,
-- 
Jun'ichi Nomura, NEC Corporation of America



* Re: [2.6.23 PATCH 07/18] dm io: fix panic on large request
  2007-07-17 22:20       ` Jun'ichi Nomura
@ 2007-07-18 10:08         ` Patrick McHardy
  0 siblings, 0 replies; 8+ messages in thread
From: Patrick McHardy @ 2007-07-18 10:08 UTC (permalink / raw)
  To: Jun'ichi Nomura
  Cc: Alasdair G Kergon, Andrew Morton, dm-devel, linux-kernel

Jun'ichi Nomura wrote:
> Patrick McHardy wrote:
> 
>>Jun'ichi Nomura wrote:
>>
>>>Are you using other dm modules such as dm-multipath, dm-mirror
>>>or dm-snapshot?
>>>If so, can you take the output of 'dmsetup table' and 'dmsetup ls'?
>>
>>No other modules.
>>
>>
>>>Do you have a reliable way to reproduce the oops which I can try?
>>
>>"/etc/init.d/cryptdisk start" (debian) on a luks partition triggered
>>it for me.
> 
> 
> With today's git HEAD (commit 49c13b51a15f1ba9f6d47e26e4a3886c4f3931e2),
> I tried the following but could not reproduce the oops here.
>   # cryptsetup luksFormat /dev/sdb1
>   # cryptsetup luksOpen /dev/sdb1 c
>   # mkfs.ext3 /dev/mapper/c
>   <mount it and do some I/O>


I put the patch in again and it doesn't oops anymore, so sorry for
the false alarm. I did get the oops I pasted several times before
that, though; I'll keep an eye on it and try to gather more
information in case it happens again.



* Re: [2.6.23 PATCH 07/18] dm io: fix panic on large request
  2007-07-17 13:16 ` Patrick McHardy
  2007-07-17 16:39   ` Jun'ichi Nomura
@ 2007-07-18 15:23   ` Chuck Ebbert
  2007-07-20 15:07     ` Milan Broz
  1 sibling, 1 reply; 8+ messages in thread
From: Chuck Ebbert @ 2007-07-18 15:23 UTC (permalink / raw)
  To: Patrick McHardy
  Cc: Alasdair G Kergon, Andrew Morton, dm-devel, linux-kernel,
	Jun'ichi Nomura

On 07/17/2007 09:16 AM, Patrick McHardy wrote:
> Alasdair G Kergon wrote:
>> From: "Jun'ichi Nomura" <j-nomura@ce.jp.nec.com>
>>
>> bio_alloc_bioset() will return NULL if 'num_vecs' is too large.
>> Use bio_get_nr_vecs() to get an estimate of the maximum number.
>>
>> Signed-off-by: "Jun'ichi Nomura" <j-nomura@ce.jp.nec.com>
>> Signed-off-by: Alasdair G Kergon <agk@redhat.com>
>>
>> ---
>>  drivers/md/dm-io.c |    5 ++++-
>>  1 files changed, 4 insertions(+), 1 deletion(-)
> 
> 
> This patch reproducibly oopses my box:
> 
> [  126.754204] BUG: unable to handle kernel NULL pointer dereference at
> virtual address 00000000
> [...]
> [  126.758511] EIP is at mempool_free+0xe/0xc0
> [...]

mempool_free() was called with a NULL pool. That can't be good.


* Re: [2.6.23 PATCH 07/18] dm io: fix panic on large request
  2007-07-18 15:23   ` Chuck Ebbert
@ 2007-07-20 15:07     ` Milan Broz
  0 siblings, 0 replies; 8+ messages in thread
From: Milan Broz @ 2007-07-20 15:07 UTC (permalink / raw)
  To: Alasdair G Kergon
  Cc: Chuck Ebbert, Patrick McHardy, Andrew Morton, dm-devel,
	linux-kernel, Jun'ichi Nomura

Chuck Ebbert wrote:

>> [  126.754204] BUG: unable to handle kernel NULL pointer dereference at
>> virtual address 00000000
>>     
...

> mempool_free() was called with a NULL pool. That can't be good.
Yes, it is really not good :)

Bug: http://bugzilla.kernel.org/show_bug.cgi?id=7388
The attached patch fixes this problem. The fix is needed for the
stable tree too; this is not a regression, just a very old bug...

Milan
--
mbroz@redhat.com

--
From: Milan Broz <mbroz@redhat.com>

Flush the workqueue before releasing the bioset and mempools
in dm-crypt.
There can be requests that have finished but have not yet been
released.

Call chain causing oops:
  run workqueue
    dec_pending
      bio_endio(...);
      	<remove device request - remove mempool>
      mempool_free(io, cc->io_pool);

This usually happens when cryptsetup creates a temporary
LUKS mapping at the beginning of crypt device activation.

When dm-core calls the destructor crypt_dtr, no new requests
are possible.

Signed-off-by: Milan Broz <mbroz@redhat.com>

---
 drivers/md/dm-crypt.c |    2 ++
 1 file changed, 2 insertions(+)

Index: linux-2.6.22/drivers/md/dm-crypt.c
===================================================================
--- linux-2.6.22.orig/drivers/md/dm-crypt.c	2007-07-17 21:56:36.000000000 +0200
+++ linux-2.6.22/drivers/md/dm-crypt.c	2007-07-19 11:55:13.000000000 +0200
@@ -920,6 +920,8 @@ static void crypt_dtr(struct dm_target *
 {
 	struct crypt_config *cc = (struct crypt_config *) ti->private;
 
+	flush_workqueue(_kcryptd_workqueue);
+
 	bioset_free(cc->bs);
 	mempool_destroy(cc->page_pool);
 	mempool_destroy(cc->io_pool);





end of thread, other threads:[~2007-07-20 15:08 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2007-07-11 20:58 [2.6.23 PATCH 07/18] dm io: fix panic on large request Alasdair G Kergon
2007-07-17 13:16 ` Patrick McHardy
2007-07-17 16:39   ` Jun'ichi Nomura
2007-07-17 17:50     ` Patrick McHardy
2007-07-17 22:20       ` Jun'ichi Nomura
2007-07-18 10:08         ` Patrick McHardy
2007-07-18 15:23   ` Chuck Ebbert
2007-07-20 15:07     ` Milan Broz

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox