* RAID5 problem
From: Alfons Andorfer @ 2005-12-04 14:21 UTC (permalink / raw)
To: linux-raid; +Cc: neilb
Hi,
I have a RAID5 array consisting of 4 disks:
/dev/hda3
/dev/hdc3
/dev/hde3
/dev/hdg3
and the Linux machine this system was running on crashed yesterday
due to a faulty kernel driver (i.e. the machine just halted).
So I reset it, but it didn't come back up.
I booted the machine from a Knoppix CD and found that the array had
been running in degraded mode for about two months (/dev/hda3 dropped
out back then).
When I do a
mdadm --assemble /dev/md0 --force /dev/hd[ceg]3
I get
mdadm: forcing event count in /dev/hdc3(1) from 515 upto 516
mdadm: /dev/md0 has been started with 3 drives (out of 4).
I can mount the array with
mount /dev/md0 /mount/
and the data seems to be OK.
But after a
umount /dev/md0
and a
fsck -n /dev/md0
it stops with an error
"pass 1: checking Inodes, Blocks, and sizes
read error - Block 131460 (Attempt to read block from filesystem
resulted in short read) during Inode-Scan Ignore error?"
and if I do the fsck with
e2fsck -y /dev/md0
I get tons of read errors of the type "(Attempt to read block from
filesystem resulted in short read)", and afterwards the event counter of
/dev/hdc3 is just one _behind_ the event counters of /dev/hde3 and
/dev/hdg3, which is really strange to me!?
Then I tried
mdadm -S /dev/md0
mdadm --create /dev/md0 -c32 -l5 -n4 missing /dev/hdc3 /dev/hde3 /dev/hdg3
which resulted in
mdadm: /dev/hdc3 appears to be part of a raid array:
level=5 devices=4 ctime=Fri May 30 14:25:47 2003
mdadm: /dev/hde3 appears to be part of a raid array:
level=5 devices=4 ctime=Fri May 30 14:25:47 2003
mdadm: /dev/hdg3 appears to contain an ext2fs file system
size=493736704K mtime=Tue Jan 3 04:48:21 2006
mdadm: /dev/hdg3 appears to be part of a raid array:
level=5 devices=4 ctime=Fri May 30 14:25:47 2003
Continue creating array? no
mdadm: create aborted.
I aborted the above because it looked strange to me that /dev/hdg3
appears twice and /dev/hda3 doesn't appear at all!?
So this is where I got stuck; any help is appreciated!
Here are the outputs of
cat /mount/etc/raidtab
and
mdadm --examine /dev/hd[aceg]3
----------------------------------------------------------------------
cat /mount/etc/raidtab:
-----------------------
raiddev /dev/md0
        raid-level              5
        nr-raid-disks           4
        nr-spare-disks          0
        persistent-superblock   1
        parity-algorithm        left-symmetric
        chunk-size              32
        device                  /dev/hda3
        raid-disk               0
        device                  /dev/hdc3
        raid-disk               1
        device                  /dev/hde3
        raid-disk               2
        device                  /dev/hdg3
        raid-disk               3
----------------------------------------------------------------------
mdadm --examine /dev/hd[aceg]3:
-------------------------------
/dev/hda3:
Magic : a92b4efc
Version : 00.90.00
UUID : 02d9c6f2:53c8584d:8815ae94:e4af8e1c
Creation Time : Fri May 30 14:25:47 2003
Raid Level : raid5
Raid Devices : 4
Total Devices : 3
Preferred Minor : 0
Update Time : Sat Dec 3 18:56:59 2005
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 1
Spare Devices : 0
Checksum : f620ca21 - correct
Events : 0.390
Layout : left-symmetric
Chunk Size : 32K
      Number   Major   Minor   RaidDevice   State
this       0       3       3            0   active sync
   0       0       3       3            0   active sync
   1       1       0       0            1   faulty removed
   2       2      33       3            2   active sync
   3       3      34       3            3   active sync
/dev/hdc3:
Magic : a92b4efc
Version : 00.90.00
UUID : 02d9c6f2:53c8584d:8815ae94:e4af8e1c
Creation Time : Fri May 30 14:25:47 2003
Raid Level : raid5
Raid Devices : 4
Total Devices : 3
Preferred Minor : 0
Update Time : Sun Dec 4 15:03:42 2005
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Checksum : f621e626 - correct
Events : 0.524
Layout : left-symmetric
Chunk Size : 32K
      Number   Major   Minor   RaidDevice   State
this       1      22       3            1   active sync
   0       0       0       0            0   removed
   1       1      22       3            1   active sync
   2       2      33       3            2   active sync
   3       3      34       3            3   active sync
/dev/hde3:
Magic : a92b4efc
Version : 00.90.00
UUID : 02d9c6f2:53c8584d:8815ae94:e4af8e1c
Creation Time : Fri May 30 14:25:47 2003
Raid Level : raid5
Raid Devices : 4
Total Devices : 3
Preferred Minor : 0
Update Time : Sun Dec 4 15:03:42 2005
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Checksum : f621e633 - correct
Events : 0.524
Layout : left-symmetric
Chunk Size : 32K
      Number   Major   Minor   RaidDevice   State
this       2      33       3            2   active sync
   0       0       0       0            0   removed
   1       1      22       3            1   active sync
   2       2      33       3            2   active sync
   3       3      34       3            3   active sync
/dev/hdg3:
Magic : a92b4efc
Version : 00.90.00
UUID : 02d9c6f2:53c8584d:8815ae94:e4af8e1c
Creation Time : Fri May 30 14:25:47 2003
Raid Level : raid5
Raid Devices : 4
Total Devices : 3
Preferred Minor : 0
Update Time : Sun Dec 4 15:03:42 2005
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Checksum : f621e636 - correct
Events : 0.524
Layout : left-symmetric
Chunk Size : 32K
      Number   Major   Minor   RaidDevice   State
this       3      34       3            3   active sync
   0       0       0       0            0   removed
   1       1      22       3            1   active sync
   2       2      33       3            2   active sync
   3       3      34       3            3   active sync
* Re: RAID5 problem
From: Andrew Burgess @ 2005-12-04 21:28 UTC (permalink / raw)
To: linux-raid; +Cc: neilb
>I get tons of read errors of the type "(Attempt to read block from
>filesystem resulted in short read)"
No errors in /var/log/messages?
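Something like this should turn them up (adjust the path - from Knoppix
the current log is /var/log/messages, the old system's log would be
under the mounted array):

grep -E 'hd[aceg]|I/O error' /var/log/messages | tail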
>mdadm -S /dev/md0
>mdadm --create /dev/md0 -c32 -l5 -n4 missing /dev/hdc3 /dev/hde3 /dev/hdg3
>mdadm: /dev/hdc3 appears to be part of a raid array:
> level=5 devices=4 ctime=Fri May 30 14:25:47 2003
>mdadm: /dev/hde3 appears to be part of a raid array:
> level=5 devices=4 ctime=Fri May 30 14:25:47 2003
>mdadm: /dev/hdg3 appears to contain an ext2fs file system
> size=493736704K mtime=Tue Jan 3 04:48:21 2006
>mdadm: /dev/hdg3 appears to be part of a raid array:
> level=5 devices=4 ctime=Fri May 30 14:25:47 2003
>Continue creating array? no
>mdadm: create aborted.
>I aborted the above because it looked strange to me that /dev/hdg3
>appears twice and /dev/hda3 doesn't appear at all!?
You didn't specify hda3 on the command line. If you say 'create' and
give devices, then mdadm doesn't search for additional devices or look
in mdadm.conf.
hdg3 appears twice because mdadm has two different things to say about
it. It looks like an ext2 file system because that's where the ext2
identifying data for the raid device just happened to be.
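If I remember right, mdadm just reads the ext2 superblock at offset
1024 of the device; as a sketch, you can double-check what it is seeing
with something like:

dd if=/dev/hdg3 bs=1 skip=1080 count=2 2>/dev/null | od -tx1

(the ext2 magic lives 56 bytes into that superblock and is stored
little-endian, so a stray "53 ef" there is what triggers the message).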
>So this is where I got stuck; any help is appreciated!
HTH
PS to Neil. I thought I might submit a patch to you that added a little
more info to the above lines: the slot number and the raid device.
So it would read:
mdadm: /dev/hdc3 appears to be part of a raid array:
/dev/md0 slot[1] level=5 devices=4 ctime=Fri May 30 14:25:47 2003
I find this to be information that I have to search for using 'mdadm -E' so it
would be handy to see it all at once when having to force a broken array to
assemble.
Also it would be handy to see the update time rather than the creation time
(IMHO) so I can see how far apart the devices are (or maybe the event count
would be better for this) and whether or not the device was marked 'clean'.
What do you think?
Thanks!
* Re: RAID5 problem
From: Neil Brown @ 2005-12-04 21:47 UTC (permalink / raw)
To: Alfons Andorfer; +Cc: linux-raid, n
On Sunday December 4, a_a@gmx.de wrote:
> Hi,
>
> I have a RAID5 array consisting of 4 disks:
>
> /dev/hda3
> /dev/hdc3
> /dev/hde3
> /dev/hdg3
>
> and the Linux machine this system was running on crashed yesterday
> due to a faulty kernel driver (i.e. the machine just halted).
> So I reset it, but it didn't come back up.
> I booted the machine from a Knoppix CD and found that the array had
> been running in degraded mode for about two months (/dev/hda3 dropped
> out back then).
You want to be running "mdadm --monitor". You really really do!
Anyone out there who is listening: if you have any md/raid arrays
(other than linear/raid0) and are not running "mdadm --monitor",
please do so. Now.
Also run "mdadm --monitor --oneshot --scan" (or similar) from a
nightly cron job, so it will nag you about degraded arrays.
Please!
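A minimal sketch of what I mean (adjust the mail address to taste):

# at boot: monitor every array in mdadm.conf, daemonise, mail alerts
mdadm --monitor --scan --daemonise --mail root@localhost
# plus a nightly nag, e.g. as a line in /etc/crontab:
0 3 * * * root mdadm --monitor --oneshot --scan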
But why do you think that hda3 dropped out of the array 2 months ago?
The update time reported by mdadm --examine is
Update Time : Sat Dec 3 18:56:59 2005
The superblock from hda3 seems to suggest that it was hdc3 that was
the problem.... odd.
>
> "pass 1: checking Inodes, Blocks, and sizes
> read error - Block 131460 (Attempt to read block from filesystem
> resulted in short read) during Inode-Scan Ignore error?"
This strongly suggests there is a problem with one of the drives - it
is returning read errors. Are there any informative kernel logs?
If it is hdc that is reporting errors, try to re-assemble the array
from hda3, hde3, hdg3.
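i.e. something along these lines (you will probably need --force again):

mdadm --assemble /dev/md0 --force /dev/hda3 /dev/hde3 /dev/hdg3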
NeilBrown
* Re: RAID5 problem
From: Neil Brown @ 2005-12-04 21:49 UTC (permalink / raw)
To: Andrew Burgess; +Cc: linux-raid
On Sunday December 4, aab@cichlid.com wrote:
>
> PS to Neil. I thought I might submit a patch to you that added a little
> more info to the above lines: the slot number and the raid device.
>
> So it would read:
>
> mdadm: /dev/hdc3 appears to be part of a raid array:
> /dev/md0 slot[1] level=5 devices=4 ctime=Fri May 30 14:25:47 2003
>
> I find this to be information that I have to search for using 'mdadm -E' so it
> would be handy to see it all at once when having to force a broken array to
> assemble.
>
> Also it would be handy to see the update time rather than the creation time
> (IMHO) so I can see how far apart the devices are (or maybe the event count
> would be better for this) and whether or not the device was marked 'clean'.
>
> What do you think?
Sounds reasonable - just send that patch.
NeilBrown
* Re: RAID5 problem
From: Ross Vandegrift @ 2005-12-05 1:44 UTC (permalink / raw)
To: Neil Brown; +Cc: Alfons Andorfer, linux-raid, n
On Mon, Dec 05, 2005 at 08:47:50AM +1100, Neil Brown wrote:
> You want to be running "mdadm --monitor". You really really do!
> Anyone out there who is listening: if you have any md/raid arrays
> (other than linear/raid0) and are not running "mdadm --monitor",
> please do so. Now.
> Also run "mdadm --monitor --oneshot --scan" (or similar) from a
> nightly cron job, so it will nag you about degraded arrays.
So very, very true - I was bitten by that bit of stupidity a few weeks
ago. I also have a script in my /etc/bashrc that looks for any
degraded arrays in /proc/mdstat. If it finds them it prints obnoxious
messages, beeps, and dumps out mdstat for instant examination.
Neil - did you get a chance to look at the syslog and text
messaging patches I posted?
--
Ross Vandegrift
ross@lug.udel.edu
"The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell."
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37
* Re: RAID5 problem
From: Neil Brown @ 2005-12-05 2:44 UTC (permalink / raw)
To: Ross Vandegrift; +Cc: Alfons Andorfer, linux-raid
On Sunday December 4, ross@jose.lug.udel.edu wrote:
>
> Neil - did you get a chance to look at the syslog and text
> messaging patches I posted?
>
Didn't I reply to those?... No, I guess I didn't. Thanks for the
reminder.
The text-messaging I don't like. That is what the --program option is
for. If you don't want the very-basic email message, then write
yourself a little script to do exactly what you want.
The 'syslog' stuff looks fine: -y and --syslog.... though I wonder if
--syslog should be the default...
I wish I could remember why I didn't do this at the start! I have a
strong feeling that it was more than just laziness, but I cannot
remember what more :-(
Oh well... expect to see your syslog support in 2.2.
Thanks,
NeilBrown
* Re: RAID5 problem
From: Alfons Andorfer @ 2005-12-05 10:59 UTC (permalink / raw)
To: Neil Brown; +Cc: linux-raid, n
Neil Brown wrote:
> On Sunday December 4, a_a@gmx.de wrote:
>
>>Hi,
>>
>>I have a RAID5 array consisting of 4 disks:
>>
>>/dev/hda3
>>/dev/hdc3
>>/dev/hde3
>>/dev/hdg3
>>
>>and the Linux machine this system was running on crashed yesterday
>>due to a faulty kernel driver (i.e. the machine just halted).
>>So I reset it, but it didn't come back up.
>>I booted the machine from a Knoppix CD and found that the array had
>>been running in degraded mode for about two months (/dev/hda3 dropped
>>out back then).
Here is a short snippet of the syslog:
--------------------------------------
Oct 22 15:30:07 omega kernel: hda: dma_intr: status=0x51 { DriveReady SeekComplete Error }
Oct 22 15:30:07 omega kernel: hda: dma_intr: error=0x40 { UncorrectableError }, LBAsect=454088, sector=4264
Oct 22 15:30:07 omega kernel: end_request: I/O error, dev 03:03 (hda), sector 4264
Oct 22 15:30:07 omega kernel: raid5: Disk failure on hda3, disabling device. Operation continuing on 3 devices
Oct 22 15:30:07 omega kernel: md: updating md0 RAID superblock on device
Oct 22 15:30:07 omega kernel: md: hda3 (skipping faulty)
Oct 22 15:30:07 omega kernel: md: hdc3 [events: 00000137]
Oct 22 15:30:07 omega kernel: (write) hdc3's sb offset: 119834496
Oct 22 15:30:07 omega kernel: md: recovery thread got woken up ...
Oct 22 15:30:07 omega kernel: md: hde3 [events: 00000137]
Oct 22 15:30:07 omega kernel: (write) hde3's sb offset: 119834496
Oct 22 15:30:07 omega kernel: md: hdg3 [events: 00000137]
Oct 22 15:30:07 omega kernel: (write) hdg3's sb offset: 119834496
Oct 22 15:30:07 omega kernel: md0: no spare disk to reconstruct array! -- continuing in degraded mode
Oct 22 15:30:07 omega kernel: md: recovery thread finished ...
> You want to be running "mdadm --monitor". You really really do!
> Anyone out there who is listening: if you have any md/raid arrays
> (other than linear/raid0) and are not running "mdadm --monitor",
> please do so. Now.
> Also run "mdadm --monitor --oneshot --scan" (or similar) from a
> nightly cron job, so it will nag you about degraded arrays.
> Please!
Yes, you are absolutely right! It was my first thought when I saw the
broken array: "There _must_ be a program that monitors the array
automatically for me and gives an alert if something goes wrong!"
Setting that up will be the first thing I do once the array is running again!
> But why do you think that hda3 dropped out of the array 2 months ago?
> The update time reported by mdadm --examine is
> Update Time : Sat Dec 3 18:56:59 2005
This comes from an attempt to assemble the array from hda3, hde3 and
hdg3. The first "mdadm --examine" printed an update time for hda3
sometime in October...
> The superblock from hda3 seems to suggest that it was hdc3 that was
> the problem.... odd.
>
>
>
>>"pass 1: checking Inodes, Blocks, and sizes
>>read error - Block 131460 (Attempt to read block from filesystem
>>resulted in short read) during Inode-Scan Ignore error?"
>
>
>
> This strongly suggests there is a problem with one of the drives - it
> is returning read errors. Are there any informative kernel logs?
> If it is hdc that is reporting errors, try to re-assemble the array
> from hda3, hde3, hdg3.
That is what I tried first, but it didn't succeed. So I tried it with
hd[ceg]3 and could even mount the array, and the data seems to be OK at
first glance. What I could certainly do is plug in an external USB
hard drive and copy as much data as possible onto it, but the
problem is that the array consists of 4x120GB, i.e. about 360GB of
data. So I hope I can reconstruct it without copying...
But the really strange thing to me is that I can mount the array and
the data seems to be OK, yet "fsck" produces so many errors...
The other question is why /dev/hdg3 appears _two_times_ and
/dev/hda3 _doesn't_at_all_ when I type
mdadm --create /dev/md0 -c32 -l5 -n4 missing /dev/hdc3 /dev/hde3 /dev/hdg3
mdadm: /dev/hdc3 appears to be part of a raid array:
level=5 devices=4 ctime=Fri May 30 14:25:47 2003
mdadm: /dev/hde3 appears to be part of a raid array:
level=5 devices=4 ctime=Fri May 30 14:25:47 2003
mdadm: /dev/hdg3 appears to contain an ext2fs file system
size=493736704K mtime=Tue Jan 3 04:48:21 2006
mdadm: /dev/hdg3 appears to be part of a raid array:
level=5 devices=4 ctime=Fri May 30 14:25:47 2003
Continue creating array? no
mdadm: create aborted.
Thanks in advance
Alfons
* Re: RAID5 problem
From: Ross Vandegrift @ 2005-12-06 2:26 UTC (permalink / raw)
To: Neil Brown; +Cc: linux-raid
On Mon, Dec 05, 2005 at 01:44:17PM +1100, Neil Brown wrote:
> The text-messaging I don't like. That is what the --program option is
> for. If you don't want the very-basic email message, then write
> yourself a little script to do exactly what you want.
Heh - had I known about that feature, it would've saved me the time it
took to write one in C!
> The 'syslog' stuff looks fine: -y and --syslog.... though I wonder if
> --syslog should be the default...
Maybe - I can't really imagine not enabling it. But I didn't want to
wrap a policy change into a feature.
> I wish I could remember why I didn't do this at the start! I have a
> strong feeling that it was more than just laziness, but I cannot
> remember what more :-(
Hmm - I'll give this some thought. The only bad thing I can think of
is the non-events that can potentially be logged when things don't
check for undefined configuration.
For example - is my other change to test for UnDef spare drives
incorrect in some way? I'm not too sure since I'm not really familiar
with the code.
Ross
* Re: RAID5 problem
From: Ross Vandegrift @ 2005-12-08 1:49 UTC (permalink / raw)
To: James Neale; +Cc: linux-raid
On Thu, Dec 08, 2005 at 11:07:36AM +1100, James Neale wrote:
> Hi Ross
> I'm a bit of an mdadm newb and have been wrangling with --monitor rather
> unsuccessfully.
> Currently I'm manually checking /proc/mdstat until I've sorted out
> something better.
> I'm running a single 1TB raid5 on 6 disks (one is spare) which has been
> smooth so far.
> Any pointers or examples for that reliable noisy mailing beeping script
> of yours in /etc/bashrc?
The blob is below; just stick it in your bashrc. My idea was that
every time I spawn a shell (which is a lot!), mdstat gets checked.
What issues are you having with --monitor? It should be pretty
automatic if you let it scan. I just run something like this:
/sbin/mdadm -F -s -f -y
This starts mdadm in --monitor mode, scans for devices, daemonizes, and
records its results in syslog.
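In long options (--syslog being from my patch) that is:

/sbin/mdadm --monitor --scan --daemonise --syslog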
Here's what runs in my bashrc:
# Scream and cry a lot if the RAID looks weird!
# (an underscore in mdstat's "[UU_U]"-style status means a missing device)
if /bin/grep _ /proc/mdstat; then
    for ((scream = 0; scream < 5; scream++)); do
        echo -e "\aPOSSIBLE PROBLEM WITH RAID"   # \a rings the terminal bell
        sleep 0.1
    done
    cat /proc/mdstat
fi
--
Ross Vandegrift
ross@lug.udel.edu
"The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell."
--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37
* Re: RAID5 problem
From: David Greaves @ 2005-12-08 9:20 UTC (permalink / raw)
To: Ross Vandegrift; +Cc: James Neale, linux-raid
Ross Vandegrift wrote:
>On Thu, Dec 08, 2005 at 11:07:36AM +1100, James Neale wrote:
>
>
>>Hi Ross
>>I'm a bit of an mdadm newb and have been wrangling with --monitor rather
>>unsuccessfully.
>>Currently I'm manually checking /proc/mdstat until I've sorted out
>>something better.
>>I'm running a single 1TB raid5 on 6 disks (one is spare) which has been
>>smooth so far.
>>Any pointers or examples for that reliable noisy mailing beeping script
>>of yours in /etc/bashrc?
>>
>>
>
>The blob is below; just stick it in your bashrc. My idea was that
>every time I spawn a shell (which is a lot!), mdstat gets checked.
>
>What issues are you having with --monitor? It should be pretty
>automatic if you let it scan. I just run something like this:
>
>/sbin/mdadm -F -s -f -y
>
>This starts mdadm in --monitor mode, scans for devices, daemonizes, and
>records its results in syslog.
>
>Here's what runs in my bashrc:
>
># Scream and cry a lot if the RAID looks weird!
># (an underscore in mdstat's "[UU_U]"-style status means a missing device)
>if /bin/grep _ /proc/mdstat; then
>    for ((scream = 0; scream < 5; scream++)); do
>        echo -e "\aPOSSIBLE PROBLEM WITH RAID"   # \a rings the terminal bell
>        sleep 0.1
>    done
>    cat /proc/mdstat
>fi
>
>
>
Have a look at monit
And festival
Quite a cool combination....
David