* nonzero mismatch_cnt with no earlier error
From: Eyal Lebedinsky @ 2007-02-24 0:23 UTC (permalink / raw)
To: linux-raid list
I run a 'check' weekly, and yesterday it came up with a non-zero
mismatch count (184). There were no earlier RAID errors logged
and the count was zero after the run a week ago.
Now, the interesting part is that there was one I/O error logged
during the check *last week*, yet the RAID did not see it and
the count was zero at the end. No errors were logged during the
week since, or during the check last night.
fsck (ext3 with logging) found no errors but I may have bad data
somewhere.
Should the raid have noticed the error, checked the offending
stripe and taken appropriate action? The messages from that error
are below.
Naturally, I do not know whether the mismatch is related to last
week's failure; it could have any number of other causes (bad memory?
kernel bug?).
system details:
2.6.20 vanilla
/dev/sd[ab]: on motherboard
IDE interface: Intel Corp. 82801EB (ICH5) Serial ATA 150 Storage Controller (rev 02)
/dev/sd[cdef]: Promise SATA-II-150-TX4
Unknown mass storage controller: Promise Technology, Inc.: Unknown device 3d18 (rev 02)
All 6 disks are WD 320GB SATA of similar models
Tail of dmesg, showing all messages since last week's 'check':
*** last week check start:
[927080.617744] md: data-check of RAID array md0
[927080.630783] md: minimum _guaranteed_ speed: 24000 KB/sec/disk.
[927080.648734] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for data-check.
[927080.678103] md: using 128k window, over a total of 312568576 blocks.
*** last week error:
[937567.332751] ata3.00: exception Emask 0x10 SAct 0x0 SErr 0x4190002 action 0x2
[937567.354094] ata3.00: cmd b0/d5:01:09:4f:c2/00:00:00:00:00/00 tag 0 cdb 0x0 data 512 in
[937567.354096] res 51/04:83:45:00:00/00:00:00:00:00/a0 Emask 0x10 (ATA bus error)
[937568.120783] ata3: soft resetting port
[937568.282450] ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
[937568.306693] ata3.00: configured for UDMA/100
[937568.319733] ata3: EH complete
[937568.361223] SCSI device sdc: 625142448 512-byte hdwr sectors (320073 MB)
[937568.397207] sdc: Write Protect is off
[937568.408620] sdc: Mode Sense: 00 3a 00 00
[937568.453522] SCSI device sdc: write cache: enabled, read cache: enabled, doesn't support DPO or FUA
*** last week check end:
[941696.843935] md: md0: data-check done.
[941697.246454] RAID5 conf printout:
[941697.256366] --- rd:6 wd:6
[941697.264718] disk 0, o:1, dev:sda1
[941697.275146] disk 1, o:1, dev:sdb1
[941697.285575] disk 2, o:1, dev:sdc1
[941697.296003] disk 3, o:1, dev:sdd1
[941697.306432] disk 4, o:1, dev:sde1
[941697.316862] disk 5, o:1, dev:sdf1
*** this week check start:
[1530647.746383] md: data-check of RAID array md0
[1530647.759677] md: minimum _guaranteed_ speed: 24000 KB/sec/disk.
[1530647.778041] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for data-check.
[1530647.807663] md: using 128k window, over a total of 312568576 blocks.
*** this week check end:
[1545248.680745] md: md0: data-check done.
[1545249.266727] RAID5 conf printout:
[1545249.276930] --- rd:6 wd:6
[1545249.285542] disk 0, o:1, dev:sda1
[1545249.296228] disk 1, o:1, dev:sdb1
[1545249.306923] disk 2, o:1, dev:sdc1
[1545249.317613] disk 3, o:1, dev:sdd1
[1545249.328292] disk 4, o:1, dev:sde1
[1545249.338981] disk 5, o:1, dev:sdf1
--
Eyal Lebedinsky (eyal@eyal.emu.id.au) <http://samba.org/eyal/>
attach .zip as .dat
* Re: nonzero mismatch_cnt with no earlier error
From: Justin Piszcz @ 2007-02-24 0:30 UTC (permalink / raw)
To: Eyal Lebedinsky; +Cc: linux-raid list
> Should the raid have noticed the error, checked the offending
> stripe and taken appropriate action? The messages from that error
> are below.
I don't think so; that is why we need to run 'check' every once in a
while and inspect the mismatch_cnt file for each md RAID device.
Run 'repair', then re-run 'check' to verify the count goes back to 0.
Justin.
On Sat, 24 Feb 2007, Eyal Lebedinsky wrote:
> I run a 'check' weekly, and yesterday it came up with a non-zero
> mismatch count (184). There were no earlier RAID errors logged
> and the count was zero after the run a week ago.
> [...]
* Re: nonzero mismatch_cnt with no earlier error
From: Eyal Lebedinsky @ 2007-02-24 0:59 UTC (permalink / raw)
To: Justin Piszcz; +Cc: linux-raid list
But is this not a good opportunity to repair the bad stripe at very
low cost (no complete resync required)?
At the time of the error we actually know which disk failed and can
rewrite it, something we do not know at resync time, so I assume we
always write to the parity disk.
Justin Piszcz wrote:
>> Should the raid have noticed the error, checked the offending
>> stripe and taken appropriate action? The messages from that error
>> are below.
>
> I don't think so, that is why we need to run check every once and a
> while and check the mismatch_cnt file for each md raid device.
>
> Run repair then re-run check to verify the count goes back to 0.
>
> Justin.
>
> On Sat, 24 Feb 2007, Eyal Lebedinsky wrote:
>
>> [...]
--
Eyal Lebedinsky (eyal@eyal.emu.id.au) <http://samba.org/eyal/>
attach .zip as .dat
* Re: nonzero mismatch_cnt with no earlier error
From: Eyal Lebedinsky @ 2007-02-24 6:58 UTC (permalink / raw)
To: linux-raid list
I did a resync since, which ended up with the same mismatch_cnt of 184.
I noticed that the count *was* reset to zero when the resync started,
but it ended up at 184 (same as after the check).
I thought that the resync just calculates fresh parity and does not
bother checking whether it is different. So what does this final count mean?
This leads me to ask: why bother doing a check if I will always run
a resync after an error - better to run a resync in the first place?
--
Eyal Lebedinsky (eyal@eyal.emu.id.au) <http://samba.org/eyal/>
attach .zip as .dat
* Re: nonzero mismatch_cnt with no earlier error
From: Justin Piszcz @ 2007-02-24 9:14 UTC (permalink / raw)
To: Eyal Lebedinsky; +Cc: linux-raid list
Perhaps. The way it works, I believe, is as follows:
1. echo check > sync_action
2. If mismatch_cnt > 0 then run:
3. echo repair > sync_action
4. Re-run #1
5. Check to make sure it is back to 0.
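The steps above can be sketched as a small shell function. This is only a sketch: the sysfs directory is taken as a parameter here (it would normally be /sys/block/md0/md), and on a real array you would poll sync_action until it reads "idle" before moving to the next step.

```shell
# md_scrub: run a 'check', and if mismatches were counted, run 'repair'.
# md_dir is normally /sys/block/md0/md; it is a parameter here so the
# control flow can be followed (and exercised) outside a real system.
md_scrub() {
    md_dir="$1"

    # 1. Kick off a read-only consistency check.
    echo check > "$md_dir/sync_action"
    # (on a real array: wait here until sync_action reads "idle" again)

    # 2-3. If the check counted mismatches, ask md to rewrite parity.
    if [ "$(cat "$md_dir/mismatch_cnt")" -gt 0 ]; then
        echo repair > "$md_dir/sync_action"
    fi

    # 4-5. Re-running 'check' afterwards should bring the count back to 0.
}
```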
Justin.
On Sat, 24 Feb 2007, Eyal Lebedinsky wrote:
> I did a resync since, which ended up with the same mismatch_cnt of 184.
> I noticed that the count *was* reset to zero when the resync started,
> but ended up with 184 (same as after the check).
> [...]
* Re: nonzero mismatch_cnt with no earlier error
From: Justin Piszcz @ 2007-02-24 9:37 UTC (permalink / raw)
To: Eyal Lebedinsky; +Cc: linux-raid list
Of course you could just run 'repair', but then you would never know
that mismatch_cnt was > 0.
Justin.
On Sat, 24 Feb 2007, Justin Piszcz wrote:
> Perhaps,
>
> The way it works (I believe is as follows)
>
> 1. echo check > sync_action
> 2. If mismatch_cnt > 0 then run:
> 3. echo repair > sync_action
> 4. Re-run #1
> 5. Check to make sure it is back to 0.
>
> Justin.
>
> [...]
* Re: nonzero mismatch_cnt with no earlier error
From: Jason Rainforest @ 2007-02-24 9:48 UTC (permalink / raw)
To: Justin Piszcz; +Cc: Eyal Lebedinsky, linux-raid list
I tried doing a check and found a mismatch_cnt of 8 (7*250Gb SW RAID5,
multiple controllers, on Linux 2.6.19.2, SMP x86-64 on an Athlon64 X2
4200+).
I then ordered a resync. The mismatch_cnt returned to 0 at the start of
the resync but, around the same point at which it went up to 8 during
the check, it went up to 8 during the resync. After the resync it is
still 8. I haven't ordered a check since the resync completed.
On Sat, 2007-02-24 at 04:37 -0500, Justin Piszcz wrote:
> Of course you could just run repair but then you would never know that
> mismatch_cnt was > 0.
>
> Justin.
>
> [...]
* Re: nonzero mismatch_cnt with no earlier error
From: Justin Piszcz @ 2007-02-24 9:50 UTC (permalink / raw)
To: Jason Rainforest; +Cc: Eyal Lebedinsky, linux-raid list
A resync? You're supposed to run a 'repair', are you not?
Justin.
On Sat, 24 Feb 2007, Jason Rainforest wrote:
> I tried doing a check, found a mismatch_cnt of 8 (7*250Gb SW RAID5,
> multiple controllers on Linux 2.6.19.2, SMP x86-64 on Athlon64 X2 4200
> +).
>
> I then ordered a resync. The mismatch_cnt returned to 0 at the start of
> the resync, but around the same time that it went up to 8 with the
> check, it went up to 8 in the resync. After the resync, it still is 8. I
> haven't ordered a check since the resync completed.
> [...]
* Re: nonzero mismatch_cnt with no earlier error
From: Jason Rainforest @ 2007-02-24 9:59 UTC (permalink / raw)
To: Justin Piszcz; +Cc: Eyal Lebedinsky, linux-raid list
Yes, I meant repair, sorry. I checked my bash history and I did indeed
order a repair (echo repair >/sys/block/md0/md/sync_action). I think I
called it a resync because that's what /proc/mdstat told me it was
doing.
On Sat, 2007-02-24 at 04:50 -0500, Justin Piszcz wrote:
> A resync? You're supposed to run a 'repair' are you not?
>
> Justin.
> [...]
* Re: nonzero mismatch_cnt with no earlier error
From: Justin Piszcz @ 2007-02-24 10:01 UTC (permalink / raw)
To: Jason Rainforest; +Cc: Eyal Lebedinsky, linux-raid list
Ahh, perhaps Neil can fix that? ;)
'cat /sys/block/md0/md/sync_action' will tell you what it is really doing.
On Sat, 24 Feb 2007, Jason Rainforest wrote:
> Yes, I meant repair, sorry. I checked my bash history and I did indeed
> order a repair (echo repair >/sys/block/md0/md/sync_action). I think I
> called it a resync because that's what /proc/mdstat told me it was
> doing.
> [...]
* Re: nonzero mismatch_cnt with no earlier error
From: Michael Tokarev @ 2007-02-24 11:09 UTC (permalink / raw)
To: Jason Rainforest; +Cc: Justin Piszcz, Eyal Lebedinsky, linux-raid list
Jason Rainforest wrote:
> I tried doing a check, found a mismatch_cnt of 8 (7*250Gb SW RAID5,
> multiple controllers on Linux 2.6.19.2, SMP x86-64 on Athlon64 X2 4200+).
>
> I then ordered a resync. The mismatch_cnt returned to 0 at the start of
As pointed out later it was repair, not resync.
> the resync, but around the same time that it went up to 8 with the
> check, it went up to 8 in the resync. After the resync, it still is 8. I
> haven't ordered a check since the resync completed.
As far as I understand, repair does the same as check, but will ALSO
try to fix the problems found. So the number in mismatch_cnt after
a repair indicates the number of mismatches found _and fixed_.
/mjt
* Re: nonzero mismatch_cnt with no earlier error
From: Justin Piszcz @ 2007-02-24 11:12 UTC (permalink / raw)
To: Michael Tokarev; +Cc: Jason Rainforest, Eyal Lebedinsky, linux-raid list
On Sat, 24 Feb 2007, Michael Tokarev wrote:
> [...]
> As far as I understand, repair will do the same as check does, but ALSO
> will try to fix the problems found. So the number in mismatch_cnt after
> a repair will indicate the amount of mismatches found _and fixed_
>
> /mjt
>
That is what I thought too (I will have to wait until I get another
mismatch to verify), but FYI:
Yesterday I had 512 mismatches for my swap partition (RAID1) after I ran
the check.
I ran repair.
I catted mismatch_cnt again: still 512.
I re-ran the check: back to 0.
Justin.
* Re: nonzero mismatch_cnt with no earlier error
From: Frank van Maarseveen @ 2007-02-25 18:33 UTC (permalink / raw)
To: Eyal Lebedinsky; +Cc: linux-raid list
On Sat, Feb 24, 2007 at 11:23:55AM +1100, Eyal Lebedinsky wrote:
[...]
>
> fsck (ext3 with logging) found no errors but I may have bad data
> somewhere.
I've written a program for fast MD5/SHA256 summing which may be useful
for tracking this kind of silent corruption. See
http://www.frankvm.com/fsindex
--
Frank
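Frank's tool aside, the same idea can be sketched with the standard library: snapshot per-file SHA-256 digests, then diff a later snapshot to spot files whose contents changed silently. (The function names below are made up for illustration; his fsindex program is presumably faster and more complete.)

```python
# Sketch: build a {relative_path: sha256} manifest of a tree, and
# compare two manifests to find silently-changed files.
import hashlib
import os

def manifest(root):
    sums = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            sums[os.path.relpath(path, root)] = h.hexdigest()
    return sums

def changed(old, new):
    # files present in both snapshots whose contents differ
    return sorted(p for p in old.keys() & new.keys() if old[p] != new[p])
```

Take a manifest in a known-good state; after a scrub reports mismatches, rerun it and changed() narrows the damage down to files. It cannot, of course, tell you which copy on the mirror was the correct one.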
* Re: nonzero mismatch_cnt with no earlier error
2007-02-25 18:33 ` Frank van Maarseveen
@ 2007-02-25 19:58 ` Christian Pernegger
2007-02-25 21:07 ` Justin Piszcz
0 siblings, 1 reply; 20+ messages in thread
From: Christian Pernegger @ 2007-02-25 19:58 UTC (permalink / raw)
To: linux-raid
Sorry to hijack the thread a little but I just noticed that the
mismatch_cnt for my mirror is at 256.
I'd always thought the monthly check done by the mdadm Debian package
does repair as well - apparently it doesn't.
So I guess I should run repair but I'm wondering ...
- is it safe / bugfree considering my oldish software? (mdadm 2.5.2 +
linux 2.6.17.4)
- is there any way to check which files (if any) have been corrupted?
- I have grub installed by hand on both mirror components, but that
shouldn't show up as mismatch, should it?
The box in question is in production so I'd rather not update mdadm
and/or kernel if possible.
Chris
* Re: nonzero mismatch_cnt with no earlier error
2007-02-24 11:12 ` Justin Piszcz
@ 2007-02-25 20:02 ` Bill Davidsen
0 siblings, 0 replies; 20+ messages in thread
From: Bill Davidsen @ 2007-02-25 20:02 UTC (permalink / raw)
To: Justin Piszcz
Cc: Michael Tokarev, Jason Rainforest, Eyal Lebedinsky,
linux-raid list
Justin Piszcz wrote:
>
>
> On Sat, 24 Feb 2007, Michael Tokarev wrote:
>
>> Jason Rainforest wrote:
>>> I tried doing a check, found a mismatch_cnt of 8 (7*250Gb SW RAID5,
>>> multiple controllers on Linux 2.6.19.2, SMP x86-64 on Athlon64 X2 4200
>>> +).
>>>
>>> I then ordered a resync. The mismatch_cnt returned to 0 at the start of
>>
>> As pointed out later it was repair, not resync.
>>
>>> the resync, but around the same time that it went up to 8 with the
>>> check, it went up to 8 in the resync. After the resync, it still is
>>> 8. I
>>> haven't ordered a check since the resync completed.
>>
>> As far as I understand, repair will do the same as check does, but ALSO
>> will try to fix the problems found. So the number in mismatch_cnt after
>> a repair will indicate the number of mismatches found _and fixed_.
>>
>> /mjt
>>
>
> That is what I thought too (I will have to wait until I get another
> mismatch to verify), but FYI--
>
> Yesterday I had 512 mismatches for my swap partition (RAID1) after I
> ran the check.
>
> I ran repair.
>
> I catted the mismatch_cnt again, still 512.
>
> I re-ran the check, back to 0.
AFAIK the "repair" action will give you a count of the repairs it does,
and will fail a drive if a read does not succeed after the sector is
rewritten. That's the way I read it, and the way it seems to work.
--
bill davidsen <davidsen@tmr.com>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979
* Re: nonzero mismatch_cnt with no earlier error
2007-02-25 19:58 ` Christian Pernegger
@ 2007-02-25 21:07 ` Justin Piszcz
0 siblings, 0 replies; 20+ messages in thread
From: Justin Piszcz @ 2007-02-25 21:07 UTC (permalink / raw)
To: Christian Pernegger; +Cc: linux-raid
On Sun, 25 Feb 2007, Christian Pernegger wrote:
> Sorry to hijack the thread a little but I just noticed that the
> mismatch_cnt for my mirror is at 256.
>
> I'd always thought the monthly check done by the mdadm Debian package
> does repair as well - apparently it doesn't.
>
> So I guess I should run repair but I'm wondering ...
> - is it safe / bugfree considering my oldish software? (mdadm 2.5.2 +
> linux 2.6.17.4)
> - is there any way to check which files (if any) have been corrupted?
> - I have grub installed by hand on both mirror components, but that
> shouldn't show up as mismatch, should it?
>
> The box in question is in production so I'd rather not update mdadm
> and/or kernel if possible.
>
> Chris
>
That is a very good question. Also, I hope you are not running XFS with
2.6.17.4 (it has a corruption bug).
Besides that, I wonder if it would be possible (with bitmaps, perhaps?)
to have the kernel record the location and report it via the ring
buffer/dmesg, something like:
kernel: md1: mismatch_cnt: 512, file corrupted: /etc/resolv.conf
I would take a performance hit for something like that :)
Justin.
* Re: nonzero mismatch_cnt with no earlier error
2007-02-24 0:59 ` Eyal Lebedinsky
@ 2007-02-26 4:36 ` Neil Brown
2007-02-26 5:46 ` Jeff Breidenbach
2007-02-26 8:18 ` Eyal Lebedinsky
0 siblings, 2 replies; 20+ messages in thread
From: Neil Brown @ 2007-02-26 4:36 UTC (permalink / raw)
To: Eyal Lebedinsky; +Cc: Justin Piszcz, linux-raid list
On Saturday February 24, eyal@eyal.emu.id.au wrote:
> But is this not a good opportunity to repair the bad stripe for a very
> low cost (no complete resync required)?
In this case, 'md' knew nothing about an error. The SCSI layer
detected something and thought it had fixed it itself. Nothing for md
to do.
>
> At time of error we actually know which disk failed and can re-write
> it, something we do not know at resync time, so I assume we always
> write to the parity disk.
md only knows of a 'problem' if the lower level driver reports one.
If it reports a problem for a write request, md will fail the device.
If it reports a problem for a read request, md will try to over-write
correct data on the failed block.
But if the driver doesn't report the failure, there is nothing md can
do.
When performing a check/repair, md looks for inconsistencies and fixes
them 'arbitrarily'. For raid5/6, it just 'corrects' the parity. For
raid1/10, it chooses one block and over-writes the other(s) with it.
Mapping these corrections back to blocks in files in the filesystem is
extremely non-trivial.
NeilBrown
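The two repair strategies Neil describes can be sketched as follows. This is a toy model of the semantics, not md's code: for raid1, one copy (here the first) simply wins; for raid5, the parity is recomputed from the data blocks, trusting the data even if a data block was the one that went bad.

```python
# Toy model of md "repair" semantics for raid1 vs raid5 (not md's code).
from functools import reduce

def repair_raid1(legs):
    """Pick one copy (the first) and overwrite the others with it."""
    winner = legs[0]
    return [winner for _ in legs]

def repair_raid5_stripe(data_blocks, parity):
    """'Correct' the parity: recompute it as the XOR of the data blocks.
    If a data block was actually the bad one, the wrong data is kept."""
    new_parity = reduce(lambda a, b: a ^ b, data_blocks)
    return data_blocks, new_parity

# raid1: mismatched legs are made identical; which copy was "right"
# is unknowable at this layer.
print(repair_raid1([b"x", b"y", b"x"]))      # every leg becomes b"x"

# raid5: 5 ^ 9 ^ 12 == 0, so a stale parity of 7 is overwritten with 0.
print(repair_raid5_stripe([5, 9, 12], 7))
```

Either way the array becomes self-consistent, which is all mismatch_cnt measures; neither strategy can say which block held the correct data.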
* Re: nonzero mismatch_cnt with no earlier error
2007-02-26 4:36 ` Neil Brown
@ 2007-02-26 5:46 ` Jeff Breidenbach
2007-02-26 8:18 ` Eyal Lebedinsky
1 sibling, 0 replies; 20+ messages in thread
From: Jeff Breidenbach @ 2007-02-26 5:46 UTC (permalink / raw)
To: Neil Brown; +Cc: Eyal Lebedinsky, Justin Piszcz, linux-raid list
Ok, so hearing all the excitement I ran a check on a multi-disk
RAID-1. One of the RAID-1 disks failed out, maybe by coincidence
but presumably due to the check. (I also have another disk in
the array deliberately removed as a backup mechanism.) And
of course there is a big mismatch count.
Questions: will repair do the right thing for multidisk RAID-1, e.g.
vote or something? Do I need a special version of mdadm to
do this safely? What am I forgetting to ask?
Jeff
# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdf1[0] sdb1[4] sdd1[6](F) sdc1[2] sde1[1]
488383936 blocks [6/4] [UUU_U_]
# cat /sys/block/md1/md/mismatch_cnt
128
# cat /proc/version
Linux version 2.6.17-2-amd64 (Debian 2.6.17-7) (waldi@debian.org) (gcc
version 4.1.2 20060814 (prerelease) (Debian 4.1.1-11)) #1 SMP Thu Aug
24 16:13:57 UTC 2006
# dpkg -l | grep mdadm
ii mdadm 1.9.0-4sarge1 Manage MD devices aka Linux Software Raid
* Re: nonzero mismatch_cnt with no earlier error
2007-02-26 4:36 ` Neil Brown
2007-02-26 5:46 ` Jeff Breidenbach
@ 2007-02-26 8:18 ` Eyal Lebedinsky
2007-03-05 4:00 ` Tejun Heo
1 sibling, 1 reply; 20+ messages in thread
From: Eyal Lebedinsky @ 2007-02-26 8:18 UTC (permalink / raw)
To: Neil Brown; +Cc: linux-raid list, list linux-ide
I CC'ed linux-ide to see if they think the reported error was really innocent:
Question: does this error report suggest that a disk could be corrupted?
This SATA disk is part of an md raid and no error was reported by md.
[937567.332751] ata3.00: exception Emask 0x10 SAct 0x0 SErr 0x4190002 action 0x2
[937567.354094] ata3.00: cmd b0/d5:01:09:4f:c2/00:00:00:00:00/00 tag 0 cdb 0x0 data 512 in
[937567.354096] res 51/04:83:45:00:00/00:00:00:00:00/a0 Emask 0x10 (ATA bus error)
[937568.120783] ata3: soft resetting port
[937568.282450] ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
[937568.306693] ata3.00: configured for UDMA/100
[937568.319733] ata3: EH complete
[937568.361223] SCSI device sdc: 625142448 512-byte hdwr sectors (320073 MB)
[937568.397207] sdc: Write Protect is off
[937568.408620] sdc: Mode Sense: 00 3a 00 00
[937568.453522] SCSI device sdc: write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Neil Brown wrote:
> On Saturday February 24, eyal@eyal.emu.id.au wrote:
>
>>But is this not a good opportunity to repair the bad stripe for a very
>>low cost (no complete resync required)?
>
>
> In this case, 'md' knew nothing about an error. The SCSI layer
> detected something and thought it had fixed it itself. Nothing for md
> to do.
I expected this. So either the scsi layer incorrectly held back the error
report, or the mismatch_cnt is due to something unrelated to the disk
i/o failure.
>>At time of error we actually know which disk failed and can re-write
>>it, something we do not know at resync time, so I assume we always
>>write to the parity disk.
Again, as I expected, resync cannot correct a problem, effectively
"blaming" the parity block. To know which block to correct one needs
a higher level parity code (can raid6 correct single bit/disk read
errors?).
> md only knows of a 'problem' if the lower level driver reports one.
> If it reports a problem for a write request, md will fail the device.
> If it reports a problem for a read request, md will try to over-write
> correct data on the failed block.
> But if the driver doesn't report the failure, there is nothing md can
> do.
>
> When performing a check/repair, md looks for inconsistencies and fixes
> them 'arbitrarily'. For raid5/6, it just 'corrects' the parity. For
> raid1/10, it chooses one block and over-writes the other(s) with it.
>
> Mapping these corrections back to blocks in files in the filesystem is
> extremely non-trivial.
>
> NeilBrown
--
Eyal Lebedinsky (eyal@eyal.emu.id.au) <http://samba.org/eyal/>
attach .zip as .dat
* Re: nonzero mismatch_cnt with no earlier error
2007-02-26 8:18 ` Eyal Lebedinsky
@ 2007-03-05 4:00 ` Tejun Heo
0 siblings, 0 replies; 20+ messages in thread
From: Tejun Heo @ 2007-03-05 4:00 UTC (permalink / raw)
To: Eyal Lebedinsky; +Cc: Neil Brown, linux-raid list, list linux-ide
Eyal Lebedinsky wrote:
> I CC'ed linux-ide to see if they think the reported error was really innocent:
>
> Question: does this error report suggest that a disk could be corrupted?
>
> This SATA disk is part of an md raid and no error was reported by md.
>
> [937567.332751] ata3.00: exception Emask 0x10 SAct 0x0 SErr 0x4190002 action 0x2
> [937567.354094] ata3.00: cmd b0/d5:01:09:4f:c2/00:00:00:00:00/00 tag 0 cdb 0x0 data 512 in
> [937567.354096] res 51/04:83:45:00:00/00:00:00:00:00/a0 Emask 0x10 (ATA bus error)
Command 0xb0 is SMART. The device failed some subcommand of SMART, so,
no, it isn't related to data integrity. But your link is reporting a
recovered data transmission error, a PHY-ready status change, and some
other conditions, which made libata EH mark the failure as an ATA bus
error.
Care to post full dmesg?
--
tejun
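For reference, Tejun's reading of the log line can be reproduced by hand. The sketch below assumes libata's taskfile field order in the "cmd" line (command/feature:nsect:lbal:lbam:lbah) and standard ATA SMART values; it is an illustrative decoder, not output from any real tool.

```python
# Hand-decode "cmd b0/d5:01:09:4f:c2/..." assuming libata's field order
# command/feature:nsect:lbal:lbam:lbah. Opcode/feature names are from
# the ATA SMART command set; illustrative only.

ATA_COMMANDS = {0xB0: "SMART"}
SMART_FEATURES = {
    0xD0: "SMART READ DATA",
    0xD5: "SMART READ LOG",
    0xD8: "SMART ENABLE OPERATIONS",
}

def decode(cmd_field):
    command_part, rest = cmd_field.split("/", 1)
    feature, nsect, lbal, lbam, lbah = rest.split("/")[0].split(":")
    command = int(command_part, 16)
    out = {"command": ATA_COMMANDS.get(command, hex(command))}
    if command == 0xB0:
        out["subcommand"] = SMART_FEATURES.get(int(feature, 16), feature)
        out["log_address"] = int(lbal, 16)
        # 0x4F/0xC2 in lbam/lbah are the fixed SMART signature values
        out["smart_signature_ok"] = (lbam, lbah) == ("4f", "c2")
    return out

print(decode("b0/d5:01:09:4f:c2"))
```

Decoded this way, the failed command is a SMART READ LOG of log address 9 (likely smartd polling the drive), consistent with Tejun's point that the failure had nothing to do with the array's user data.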
Thread overview: 20+ messages
2007-02-24 0:23 nonzero mismatch_cnt with no earlier error Eyal Lebedinsky
2007-02-24 0:30 ` Justin Piszcz
2007-02-24 0:59 ` Eyal Lebedinsky
2007-02-26 4:36 ` Neil Brown
2007-02-26 5:46 ` Jeff Breidenbach
2007-02-26 8:18 ` Eyal Lebedinsky
2007-03-05 4:00 ` Tejun Heo
2007-02-24 6:58 ` Eyal Lebedinsky
2007-02-24 9:14 ` Justin Piszcz
2007-02-24 9:37 ` Justin Piszcz
2007-02-24 9:48 ` Jason Rainforest
2007-02-24 9:50 ` Justin Piszcz
2007-02-24 9:59 ` Jason Rainforest
2007-02-24 10:01 ` Justin Piszcz
2007-02-24 11:09 ` Michael Tokarev
2007-02-24 11:12 ` Justin Piszcz
2007-02-25 20:02 ` Bill Davidsen
2007-02-25 18:33 ` Frank van Maarseveen
2007-02-25 19:58 ` Christian Pernegger
2007-02-25 21:07 ` Justin Piszcz