* SW RAID5 + high memory support freezes 2.6.3 kernel
@ 2004-02-23 2:41 Pavol Luptak
2004-02-23 4:27 ` Neil Brown
0 siblings, 1 reply; 11+ messages in thread
From: Pavol Luptak @ 2004-02-23 2:41 UTC (permalink / raw)
To: linux-kernel; +Cc: linux-raid
[-- Attachment #1: Type: text/plain, Size: 5130 bytes --]
Hello,
the issue described at http://www.spinics.net/lists/lvm/msg10322.html may still be present
in the current 2.6.3 kernel. I am able to reproduce the conditions that halt the
2.6.3 kernel (using mkfs.ext3 on a RAID device):
Feb 23 02:52:12 psilocybus kernel: Unable to handle kernel NULL pointer dereference at virtual address 00000008
Feb 23 02:52:12 psilocybus kernel: printing eip:
Feb 23 02:52:12 psilocybus kernel: f9885205
Feb 23 02:52:12 psilocybus kernel: *pde = 00000000
Feb 23 02:52:12 psilocybus kernel: Oops: 0000 [#1]
Feb 23 02:52:12 psilocybus kernel: CPU: 0
Feb 23 02:52:12 psilocybus kernel: EIP: 0060:[<f9885205>] Tainted: PF
Feb 23 02:52:12 psilocybus kernel: EFLAGS: 00010246
Feb 23 02:52:12 psilocybus kernel: EIP is at make_request+0x5/0x210 [raid5]
Feb 23 02:52:12 psilocybus kernel: eax: f10e2f2b ebx: f6631800 ecx: f7fe9040 edx: 00001000
Feb 23 02:52:12 psilocybus kernel: esi: 00000008 edi: c644dbc0 ebp: 025c0e08 esp: c1b8bcd0
Feb 23 02:52:12 psilocybus kernel: ds: 007b es: 007b ss: 0068
Feb 23 02:52:12 psilocybus kernel: Process pdflush (pid: 6, threadinfo=c1b8a000 task=c1b8d900)
Feb 23 02:52:12 psilocybus kernel: Stack: c021aea6 f6631800 c644dbc0 e79694f0 f9857b01 c19f94a8 00000001 ffb28000
Feb 23 02:52:12 psilocybus kernel: db022000 00001000 025c0e10 0000000c 00000001 00000001 00000000 025c0e10
Feb 23 02:52:12 psilocybus kernel: f66e4000 00000000 c644dd40 c644dbc0 f9857c03 c644dbc0 c644dbc0 c644dd40
Feb 23 02:52:12 psilocybus kernel: Call Trace:
Feb 23 02:52:12 psilocybus kernel: [<c021aea6>] generic_make_request+0x106/0x180
Feb 23 02:52:12 psilocybus kernel: [<f9857b01>] loop_transfer_bio+0xc1/0x120 [loop]
Feb 23 02:52:12 psilocybus kernel: [<f9857c03>] loop_make_request+0xa3/0x150 [loop]
Feb 23 02:52:12 psilocybus kernel: [<c021aea6>] generic_make_request+0x106/0x180
Feb 23 02:52:12 psilocybus kernel: [<c015bb2b>] bio_alloc+0xcb/0x1a0
Feb 23 02:52:12 psilocybus kernel: [<c021af5d>] submit_bio+0x3d/0x70
Feb 23 02:52:12 psilocybus kernel: [<c0159a10>] __block_write_full_page+0x1d0/0x3d0
Feb 23 02:52:12 psilocybus kernel: [<c018fd13>] ext3_journal_dirty_data+0x23/0x60
Feb 23 02:52:12 psilocybus kernel: [<c015b244>] block_write_full_page+0xe4/0xf0
Feb 23 02:52:12 psilocybus kernel: [<c015df90>] blkdev_get_block+0x0/0x50
Feb 23 02:52:12 psilocybus kernel: [<c015e0ef>] blkdev_writepage+0x1f/0x30
Feb 23 02:52:12 psilocybus kernel: [<c015df90>] blkdev_get_block+0x0/0x50
Feb 23 02:52:12 psilocybus kernel: [<c017840e>] mpage_writepages+0x20e/0x300
Feb 23 02:52:12 psilocybus kernel: [<c017840e>] mpage_writepages+0x20e/0x300
Feb 23 02:52:12 psilocybus kernel: [<c015e0d0>] blkdev_writepage+0x0/0x30
Feb 23 02:52:12 psilocybus kernel: [<c015f31f>] generic_writepages+0x1f/0x23
Feb 23 02:52:12 psilocybus kernel: [<c013fdee>] do_writepages+0x1e/0x40
Feb 23 02:52:12 psilocybus kernel: [<c0176b54>] __sync_single_inode+0xd4/0x230
Feb 23 02:52:12 psilocybus kernel: [<c0176efe>] sync_sb_inodes+0x19e/0x260
Feb 23 02:52:12 psilocybus kernel: [<c017700d>] writeback_inodes+0x4d/0xa0
Feb 23 02:52:12 psilocybus kernel: [<c013fb2b>] background_writeout+0x7b/0xc0
Feb 23 02:52:12 psilocybus kernel: [<c0140212>] __pdflush+0xd2/0x1d0
Feb 23 02:52:12 psilocybus kernel: [<c0140310>] pdflush+0x0/0x20
Feb 23 02:52:12 psilocybus kernel: [<c014031f>] pdflush+0xf/0x20
Feb 23 02:52:12 psilocybus kernel: [<c013fab0>] background_writeout+0x0/0xc0
Feb 23 02:52:12 psilocybus kernel: [<c0109284>] kernel_thread_helper+0x0/0xc
Feb 23 02:52:12 psilocybus kernel: [<c0109289>] kernel_thread_helper+0x5/0xc
Feb 23 02:52:12 psilocybus kernel:
Feb 23 02:52:12 psilocybus kernel: Code: a7 c0 09 b6 d3 db 66 06 71 67 67 d7 32 47 2a 92 23 22 ee b1
Feb 23 02:52:12 psilocybus kernel: int3: 0000 [#2]
Feb 23 02:52:12 psilocybus kernel: CPU: 0
Feb 23 02:52:12 psilocybus kernel: EIP: 0060:[<f9885147>] Tainted: PF
Feb 23 02:52:12 psilocybus kernel: EFLAGS: 00000217
Feb 23 02:52:12 psilocybus kernel: EIP is at raid5_unplug_device+0x7/0xc0 [raid5]
Feb 23 02:52:12 psilocybus kernel: eax: f6631765 ebx: 00000212 ecx: c1bd0a10 edx: c1bd0a00
Feb 23 02:52:12 psilocybus kernel: esi: f66318f4 edi: f66318f0 ebp: c1ba2000 esp: c1ba3f6c
Feb 23 02:52:12 psilocybus kernel: ds: 007b es: 007b ss: 0068
Feb 23 02:52:12 psilocybus kernel: Process kblock0 f25f3ee0 f5e326c0 00000001 00000000
Feb 23 02:52:12 psilocybus kernel: f7dc15c0 00d8cfe8 ffffe000 f25f2000 f9892378 f7dc15c0 00d8cfe8 00000001
Feb 23 02:52:12 psilocybus kernel: f7dc1664 00004000 00d8cf00 00000008 0000011d 0df83a00 f98981bc c011b25e
Feb 23 02:52:12 psilocybus kernel: Call Trace:
Feb 23 02:52:12 psilocybus kernel: [<f9892378>] md_do_sync+0x228/0x780 [md]
Feb 23 02:52:12 psilocybus kernel: [<c011b25e>] recalc_task_prio+0x8e/0x1b0
Feb 23 02:52:12 psilocybus kernel: [<f98912a6>] md_thread+0xc6/0x1a0 [md]
So, RAID5 works for me only without high memory support.
Pavol
--
_____________________________________________________________________________
[wilder@hq.sk] [http://trip.sk/wilder/] [talker: ttt.sk 5678] [ICQ: 133403556]
* Re: SW RAID5 + high memory support freezes 2.6.3 kernel
2004-02-23 2:41 SW RAID5 + high memory support freezes 2.6.3 kernel Pavol Luptak
@ 2004-02-23 4:27 ` Neil Brown
2004-02-23 5:30 ` Andrew Morton
2004-02-23 16:57 ` [PATCH] md: fix device size calculation with non-persistent superblock Paul Clements
0 siblings, 2 replies; 11+ messages in thread
From: Neil Brown @ 2004-02-23 4:27 UTC (permalink / raw)
To: Pavol Luptak; +Cc: linux-kernel, linux-raid
On Monday February 23, P.Luptak@sh.cvut.cz wrote:
> Hello,
> the issue described at http://www.spinics.net/lists/lvm/msg10322.html may still be present
> in the current 2.6.3 kernel. I am able to reproduce the conditions that halt the
> 2.6.3 kernel (using mkfs.ext3 on a RAID device):
To be fair, your subject should say that
SW RAID5 + high memory + loop device freezes 2.6.3 kernel
^^^^^^^^^^^^^^
Would you be able to try the same test without using "loop" in the
middle and see what happens?
The trace you sent:
> Feb 23 02:52:12 psilocybus kernel: Unable to handle kernel NULL pointer dereference at virtual address 00000008
> Feb 23 02:52:12 psilocybus kernel: printing eip:
> Feb 23 02:52:12 psilocybus kernel: f9885205
> Feb 23 02:52:12 psilocybus kernel: *pde = 00000000
> Feb 23 02:52:12 psilocybus kernel: Oops: 0000 [#1]
> Feb 23 02:52:12 psilocybus kernel: CPU: 0
> Feb 23 02:52:12 psilocybus kernel: EIP: 0060:[<f9885205>] Tainted: PF
> Feb 23 02:52:12 psilocybus kernel: EFLAGS: 00010246
> Feb 23 02:52:12 psilocybus kernel: EIP is at make_request+0x5/0x210 [raid5]
> Feb 23 02:52:12 psilocybus kernel: eax: f10e2f2b ebx: f6631800 ecx: f7fe9040 edx: 00001000
> Feb 23 02:52:12 psilocybus kernel: esi: 00000008 edi: c644dbc0 ebp: 025c0e08 esp: c1b8bcd0
seems to suggest that %esi is being dereferenced at make_request+0x5,
but when I disassemble my raid5.o, it doesn't.
I tried disassembling the code:
> Feb 23 02:52:12 psilocybus kernel:
> Feb 23 02:52:12 psilocybus kernel: Code: a7 c0 09 b6 d3 db 66 06 71 67 67 d7 32 47 2a 92 23 22 ee b1
but that just produced nonsense.
Could you
gdb raid5.o
disassemble make_request
and send me the output please.
NeilBrown
* Re: SW RAID5 + high memory support freezes 2.6.3 kernel
2004-02-23 4:27 ` Neil Brown
@ 2004-02-23 5:30 ` Andrew Morton
2004-02-23 13:35 ` Pavol Luptak
2004-02-23 16:57 ` [PATCH] md: fix device size calculation with non-persistent superblock Paul Clements
1 sibling, 1 reply; 11+ messages in thread
From: Andrew Morton @ 2004-02-23 5:30 UTC (permalink / raw)
To: Neil Brown; +Cc: P.Luptak, linux-kernel, linux-raid
Neil Brown <neilb@cse.unsw.edu.au> wrote:
>
> > Hello,
> > the issue described at http://www.spinics.net/lists/lvm/msg10322.html may still be present
> > in the current 2.6.3 kernel. I am able to reproduce the conditions that halt the
> > 2.6.3 kernel (using mkfs.ext3 on a RAID device):
>
> To be fair, your subject should say that
> SW RAID5 + high memory + loop device freezes 2.6.3 kernel
> ^^^^^^^^^^^^^^
hm, yes. And the loop code which was involved here was removed from the
kernel last week, so it's a bit academic.
Retest on current 2.6.3-bk or 2.6.3-mm3 please.
* Re: SW RAID5 + high memory support freezes 2.6.3 kernel
2004-02-23 5:30 ` Andrew Morton
@ 2004-02-23 13:35 ` Pavol Luptak
2004-02-23 14:05 ` syrius.ml
0 siblings, 1 reply; 11+ messages in thread
From: Pavol Luptak @ 2004-02-23 13:35 UTC (permalink / raw)
To: Andrew Morton; +Cc: Neil Brown, linux-kernel, linux-raid
[-- Attachment #1: Type: text/plain, Size: 1029 bytes --]
On Sun, Feb 22, 2004 at 09:30:11PM -0800, Andrew Morton wrote:
> Neil Brown <neilb@cse.unsw.edu.au> wrote:
> >
> > > Hello,
> > > the issue described at http://www.spinics.net/lists/lvm/msg10322.html may still be present
> > > in the current 2.6.3 kernel. I am able to reproduce the conditions that halt the
> > > 2.6.3 kernel (using mkfs.ext3 on a RAID device):
> >
> > To be fair, your subject should say that
> > SW RAID5 + high memory + loop device freezes 2.6.3 kernel
> > ^^^^^^^^^^^^^^
>
> hm, yes. And the loop code which was involved here was removed from the
> kernel last week, so it's a bit academic.
>
> Retest on current 2.6.3-bk or 2.6.3-mm3 please.
I tried 2.6.3-bk4 and it seems to work without problems :)
I'm going to copy about 300 GB of data to this cryptoloop/RAID5 setup and report
any bugs that turn up....
Pavol
--
_____________________________________________________________________________
[wilder@hq.sk] [http://trip.sk/wilder/] [talker: ttt.sk 5678] [ICQ: 133403556]
* Re: SW RAID5 + high memory support freezes 2.6.3 kernel
2004-02-23 13:35 ` Pavol Luptak
@ 2004-02-23 14:05 ` syrius.ml
0 siblings, 0 replies; 11+ messages in thread
From: syrius.ml @ 2004-02-23 14:05 UTC (permalink / raw)
To: Pavol Luptak; +Cc: Andrew Morton, Neil Brown, linux-kernel, linux-raid
Pavol Luptak <P.Luptak@sh.cvut.cz> writes:
[...]
>> Retest on current 2.6.3-bk or 2.6.3-mm3 please.
>
> I tried 2.6.3-bk4 and it seems to work without problems :)
> I'm going to copy about 300 GB of data to this cryptoloop/RAID5 setup and report
> any bugs that turn up....
Same here with 2.6.3-mm3 + some dm patches I need.
--
S.
* [PATCH] md: fix device size calculation with non-persistent superblock
2004-02-23 4:27 ` Neil Brown
2004-02-23 5:30 ` Andrew Morton
@ 2004-02-23 16:57 ` Paul Clements
2004-02-24 1:13 ` Neil Brown
2004-02-25 21:39 ` [PATCH] raid1: abort resync if there are no spare drives Paul Clements
1 sibling, 2 replies; 11+ messages in thread
From: Paul Clements @ 2004-02-23 16:57 UTC (permalink / raw)
To: Neil Brown; +Cc: linux-raid
[-- Attachment #1: Type: text/plain, Size: 294 bytes --]
Neil,
Currently, the device size calculation is not correct when hot-adding
devices to arrays with non-persistent superblocks. Device size is always
calculated as if there were a physical superblock on every device. The
attached simple change to hot_add_disk() fixes the problem.
Thanks,
Paul
[-- Attachment #2: md_nonpersistent_sb_fix.diff --]
[-- Type: text/x-patch, Size: 495 bytes --]
--- 2_6_3_rc2/drivers/md/md.c.PRISTINE	Mon Feb 23 11:01:57 2004
+++ 2_6_3_rc2/drivers/md/md.c	Mon Feb 23 11:29:10 2004
@@ -2365,7 +2365,12 @@ static int hot_add_disk(mddev_t * mddev,
 		return -EINVAL;
 	}
 
-	rdev->sb_offset = calc_dev_sboffset(rdev->bdev);
+	if (mddev->persistent)
+		rdev->sb_offset = calc_dev_sboffset(rdev->bdev);
+	else
+		rdev->sb_offset = rdev->bdev->bd_inode->i_size
+			>> BLOCK_SIZE_BITS;
+
 	size = calc_dev_size(rdev, mddev->chunk_size);
 	rdev->size = size;
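[Editorial note: the point of the change is that a hot-added member of a non-persistent array should no longer lose space to a superblock reservation it does not have. The standalone sketch below only illustrates the arithmetic; the 64K end-of-device reservation and the rounding are assumptions about what calc_dev_sboffset() does for v0.90 superblocks, not a copy of the kernel code.]

#include <stdio.h>

/*
 * Hypothetical illustration only.  Sizes are in 1K blocks, as in md.
 * The 64K reservation mirrors what calc_dev_sboffset() is believed to
 * leave for a v0.90 superblock at the end of the device (assumption).
 */
#define MD_RESERVED_BLOCKS 64UL		/* 64K expressed in 1K blocks */

/* usable size when a persistent superblock lives at the end of the device */
static unsigned long persistent_usable_blocks(unsigned long dev_blocks)
{
	/* round down to a 64K boundary, then keep one 64K chunk for the sb */
	return (dev_blocks & ~(MD_RESERVED_BLOCKS - 1)) - MD_RESERVED_BLOCKS;
}

int main(void)
{
	unsigned long dev_blocks = 4200017;	/* an oddly sized ~4GB partition */

	printf("persistent superblock:    %lu blocks usable\n",
	       persistent_usable_blocks(dev_blocks));
	/* with the patch, a non-persistent member keeps the whole device */
	printf("non-persistent (patched): %lu blocks usable\n", dev_blocks);
	return 0;
}

Before the patch the first calculation was applied unconditionally, so a hot-added member of a superblock-less array was sized as if it carried a superblock.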
* Re: [PATCH] md: fix device size calculation with non-persistent superblock
2004-02-23 16:57 ` [PATCH] md: fix device size calculation with non-persistent superblock Paul Clements
@ 2004-02-24 1:13 ` Neil Brown
2004-02-24 15:27 ` Paul Clements
2004-02-25 21:39 ` [PATCH] raid1: abort resync if there are no spare drives Paul Clements
1 sibling, 1 reply; 11+ messages in thread
From: Neil Brown @ 2004-02-24 1:13 UTC (permalink / raw)
To: Paul Clements; +Cc: linux-raid
On Monday February 23, Paul.Clements@SteelEye.com wrote:
> Neil,
>
> Currently, the device size calculation is not correct when hot-adding
> devices to arrays with non-persistent superblocks. Device size is always
> calculated as if there were a physical superblock on every device. The
> attached simple change to hot_add_disk() fixes the problem.
Hmm.. I had always assumed that non-persistent superblocks only worked
for linear and raid0. I'm not sure I would trust any other
configuration.
Are you seriously using raid1 with non-persistent superblocks? How do
you ensure reliable re-assembly after a device failure followed by
shutdown?
NeilBrown
>
> Thanks,
> Paul
> --- 2_6_3_rc2/drivers/md/md.c.PRISTINE	Mon Feb 23 11:01:57 2004
> +++ 2_6_3_rc2/drivers/md/md.c	Mon Feb 23 11:29:10 2004
> @@ -2365,7 +2365,12 @@ static int hot_add_disk(mddev_t * mddev,
>  		return -EINVAL;
>  	}
>  
> -	rdev->sb_offset = calc_dev_sboffset(rdev->bdev);
> +	if (mddev->persistent)
> +		rdev->sb_offset = calc_dev_sboffset(rdev->bdev);
> +	else
> +		rdev->sb_offset = rdev->bdev->bd_inode->i_size
> +			>> BLOCK_SIZE_BITS;
> +
>  	size = calc_dev_size(rdev, mddev->chunk_size);
>  	rdev->size = size;
>
* Re: [PATCH] md: fix device size calculation with non-persistent superblock
2004-02-24 1:13 ` Neil Brown
@ 2004-02-24 15:27 ` Paul Clements
0 siblings, 0 replies; 11+ messages in thread
From: Paul Clements @ 2004-02-24 15:27 UTC (permalink / raw)
To: Neil Brown; +Cc: linux-raid
Neil Brown wrote:
>
> On Monday February 23, Paul.Clements@SteelEye.com wrote:
> > Neil,
> >
> > Currently, the device size calculation is not correct when hot-adding
> > devices to arrays with non-persistent superblocks. Device size is always
> > calculated as if there were a physical superblock on every device. The
> > attached simple change to hot_add_disk() fixes the problem.
>
> Hmm.. I had always assumed that non-persistent superblocks only worked
> for linear and raid0. I'm not sure I would trust any other
> configuration.
>
> Are you seriously using raid1 with non-persistent superblocks?
Yes, and this works fine in 2.4, as well. We opted for non-persistent
superblocks in order to support creation of raid1 mirrors over
partitions that already had filesystems or other data present (and yes,
we calculate the size of the new md device to make sure the existing
data will fit).
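A minimal userspace sketch of that kind of size check (purely hypothetical, not code from the framework described here): read both block devices' sizes with the standard BLKGETSIZE64 ioctl and refuse to build the mirror if the new member is smaller than the partition holding the existing data.

#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>		/* BLKGETSIZE64 */

/* Return a block device's size in bytes, or 0 on error. */
static uint64_t blockdev_bytes(const char *path)
{
	uint64_t bytes = 0;
	int fd = open(path, O_RDONLY);

	if (fd < 0)
		return 0;
	if (ioctl(fd, BLKGETSIZE64, &bytes) < 0)
		bytes = 0;
	close(fd);
	return bytes;
}

int main(int argc, char **argv)
{
	if (argc != 3) {
		fprintf(stderr, "usage: %s <data-partition> <new-mirror-member>\n", argv[0]);
		return 1;
	}

	uint64_t data   = blockdev_bytes(argv[1]);
	uint64_t member = blockdev_bytes(argv[2]);

	/* With non-persistent superblocks the md device is only as large as
	 * its smallest member, so the new member must not be smaller than
	 * the partition whose data is being preserved. */
	if (data == 0 || member == 0 || member < data) {
		fprintf(stderr, "refusing: member %llu bytes < data %llu bytes\n",
			(unsigned long long)member, (unsigned long long)data);
		return 1;
	}
	printf("ok: mirror can hold the existing %llu bytes\n",
	       (unsigned long long)data);
	return 0;
}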
> How do you ensure reliable re-assembly after a device failure followed by
> shutdown?
There is a user-level clustering framework that sets up, monitors, and
takes down the mirror. Disk failures and other events are handled by
this framework.
--
Paul
* [PATCH] raid1: abort resync if there are no spare drives
2004-02-23 16:57 ` [PATCH] md: fix device size calculation with non-persistent superblock Paul Clements
2004-02-24 1:13 ` Neil Brown
@ 2004-02-25 21:39 ` Paul Clements
2004-03-03 0:21 ` Neil Brown
1 sibling, 1 reply; 11+ messages in thread
From: Paul Clements @ 2004-02-25 21:39 UTC (permalink / raw)
To: Neil Brown; +Cc: linux-raid
[-- Attachment #1: Type: text/plain, Size: 622 bytes --]
The attached patch makes sure that resync/recovery get aborted (and
status recorded correctly) when there are no spare drives left (due to
the failure of a spare during a resync). Previously, if a spare failed
during a resync, the resync would continue until completed (and would
appear to be successful).
Also, there was an erroneous usage of the master_bio->bi_bdev field (which
is not always properly set); it has been changed to the simpler comparison
against r1_bio->read_disk (as is done in raid1_end_request). I think Neil
has already addressed this issue in another patch that has been pushed to
Andrew...
Thanks,
Paul
[-- Attachment #2: raid1_abort_sync_no_targets.diff --]
[-- Type: text/x-patch, Size: 1240 bytes --]
--- raid1.c.PRISTINE	Tue Feb 24 16:10:26 2004
+++ raid1.c	Wed Feb 25 16:22:34 2004
@@ -818,6 +818,8 @@ static void sync_request_write(mddev_t *
 		put_buf(r1_bio);
 		return;
 	}
+	/* assume failure until we find a drive to write this to */
+	clear_bit(R1BIO_Uptodate, &r1_bio->state);
 
 	spin_lock_irq(&conf->device_lock);
 	for (i = 0; i < disks ; i++) {
@@ -825,7 +827,7 @@ static void sync_request_write(mddev_t *
 		if (!conf->mirrors[i].rdev ||
 		    conf->mirrors[i].rdev->faulty)
			continue;
-		if (conf->mirrors[i].rdev->bdev == bio->bi_bdev)
+		if (i == r1_bio->read_disk)
 			/*
 			 * we read from here, no need to write
 			 */
@@ -838,6 +840,8 @@ static void sync_request_write(mddev_t *
 			continue;
 		atomic_inc(&conf->mirrors[i].rdev->nr_pending);
 		r1_bio->write_bios[i] = bio;
+		/* we found a drive to write to */
+		set_bit(R1BIO_Uptodate, &r1_bio->state);
 	}
 	spin_unlock_irq(&conf->device_lock);
 
@@ -859,7 +863,8 @@ static void sync_request_write(mddev_t *
 	}
 
 	if (atomic_dec_and_test(&r1_bio->remaining)) {
-		md_done_sync(mddev, r1_bio->master_bio->bi_size >> 9, 1);
+		md_done_sync(mddev, r1_bio->master_bio->bi_size >> 9,
+			test_bit(R1BIO_Uptodate, &r1_bio->state));
 		put_buf(r1_bio);
 	}
 }
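[Editorial note: the design choice in the patch is the usual "presume failure" pattern - R1BIO_Uptodate starts cleared and is only set once at least one live write target has been found, and that bit is what finally gets passed to md_done_sync(). A small self-contained sketch of the same pattern (illustration only, not raid1 code):]

#include <stdbool.h>
#include <stdio.h>

/* Generic illustration of the "assume failure until a target is found"
 * pattern the patch introduces; this is not drivers/md/raid1.c. */
struct target { bool usable; };

static bool write_to_targets(struct target *t, int n)
{
	bool any_written = false;	/* plays the role of a cleared R1BIO_Uptodate */
	int i;

	for (i = 0; i < n; i++) {
		if (!t[i].usable)
			continue;	/* failed or missing drive: skip it */
		/* ... issue the write to target i here ... */
		any_written = true;	/* plays the role of set_bit(R1BIO_Uptodate, ...) */
	}
	return any_written;		/* what would be reported to md_done_sync() */
}

int main(void)
{
	struct target all_failed[2] = { { false }, { false } };
	struct target one_alive[2]  = { { false }, { true  } };

	printf("all spares failed -> ok=%d\n", write_to_targets(all_failed, 2));
	printf("one target alive  -> ok=%d\n", write_to_targets(one_alive, 2));
	return 0;
}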
* Re: [PATCH] raid1: abort resync if there are no spare drives
2004-02-25 21:39 ` [PATCH] raid1: abort resync if there are no spare drives Paul Clements
@ 2004-03-03 0:21 ` Neil Brown
2004-03-03 2:47 ` Paul Clements
0 siblings, 1 reply; 11+ messages in thread
From: Neil Brown @ 2004-03-03 0:21 UTC (permalink / raw)
To: Paul Clements; +Cc: linux-raid
On Wednesday February 25, Paul.Clements@SteelEye.com wrote:
> The attached patch makes sure that resync/recovery get aborted (and
> status recorded correctly) when there are no spare drives left (due to
> the failure of a spare during a resync). Previously, if a spare failed
> during a resync, the resync would continue until completed (and would
> appear to be successful).
Hi,
could you clarify exactly what the problem is?
As far as I can see (without looking very deeply), the only problem is
that the resync process continues when there is no point in doing so,
thus wasting time.
At the end of the process, any devices that failed should still be
marked failed, and so the array will be in sync, though possibly degraded
- but there is no risk to data.
Am I missing something?
(I'm not saying I don't like the patch - it looks like a good idea - I
just want to be sure I understand it).
NeilBrown
* Re: [PATCH] raid1: abort resync if there are no spare drives
2004-03-03 0:21 ` Neil Brown
@ 2004-03-03 2:47 ` Paul Clements
0 siblings, 0 replies; 11+ messages in thread
From: Paul Clements @ 2004-03-03 2:47 UTC (permalink / raw)
To: Neil Brown; +Cc: linux-raid
Neil Brown wrote:
> As far as I can see (without looking very deeply) the only problem is
> that the resync process continues on when there is no point doing so,
> thus wasting time.
Yes, that's right. The pointless continuance of the resync is the
problem. The bad device does get marked as "failed", so there is no
problem with that.
--
Paul