* Raid 5 to raid 6 reshape failure after reboot
From: Guy Martin @ 2009-10-18 16:10 UTC (permalink / raw)
To: linux-raid
Hi,
I'm currently doing tests with the latest devel-3.1 branch to migrate
my fileserver's raid 5 array to a raid 6 array.
I thus created a raid5 test array with 3 drives and tried to grow it to
a raid 6 array with 4 drives.
This is how the array looked before the reshape:
md0 : active raid5 sdf1[3](S) sde1[2] sdd1[1] sdb1[0]
976767872 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
Using kernel 2.6.31.4, the reshape to raid 6 started as expected but
would take about 2 weeks to complete at about 400Kb/s with 4*500G
drives. I used the following command:
mdadm --grow /dev/md0 --backup-file backup -l 6 -n 4
Time is not so critical for me so I let it run for 2 days. It was then
at about 13% and I decided to perform a reboot to see if it would
recover. Unfortunately, I now have the following output when trying to
assemble the array:
mdadm --assemble /dev/md0 /dev/sd[bdef]1 --backup-file backup
mdadm: Failed to restore critical section for reshape, sorry.
There were no useful data on this test array but it would be nice if
this doesn't happen on my fileserver :)
Any idea how to resolve this issue?
Thanks,
Guy
* Re: Raid 5 to raid 6 reshape failure after reboot
From: NeilBrown @ 2009-10-18 20:14 UTC (permalink / raw)
To: Guy Martin; +Cc: linux-raid
On Mon, October 19, 2009 3:10 am, Guy Martin wrote:
>
> Hi,
>
> I'm currently doing tests with the latest devel-3.1 branch to migrate
> my fileserver's raid 5 array to a raid 6 array.
>
> I thus created a raid5 test array with 3 drives and tried to grow it to
> a raid 6 array with 4 drives.
> This is how the array looked before the reshape:
> md0 : active raid5 sdf1[3](S) sde1[2] sdd1[1] sdb1[0]
> 976767872 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
>
> Using kernel 2.6.31.4, the reshape to raid 6 started as expected but
> would take about 2 weeks to complete at about 400Kb/s with 4*500G
> drives. I used the following command :
> mdadm --grow /dev/md0 --backup-file backup -l 6 -n 4
>
> Time is not so critical for me so I let it run for 2 days. It was then
> at about 13% and I decided to perform a reboot to see if it would
> recover. Unfortunately, I now have the following output when trying to
> assemble the array :
> mdadm --assemble /dev/md0 /dev/sd[bdef]1 --backup-file backup
> mdadm: Failed to restore critical section for reshape, sorry.
>
> There were no useful data on this test array but it would be nice if
> this doesn't happen on my fileserver :)
>
Thanks for doing this testing!
> Any idea how to resolve this issue ?
Try pulling the very latest from my git repo. I was doing some testing
like this just last week and found a number of issues which I think
I have fixed.
NeilBrown
* Re: Raid 5 to raid 6 reshape failure after reboot
From: Guy Martin @ 2009-10-19 13:53 UTC (permalink / raw)
To: NeilBrown; +Cc: linux-raid
Hi Neil,
Same result with the latest commit after a reboot.
Maybe the problem comes from the fact that I first try to assemble
the array without providing a backup file?
For reference, I did the following:
mdadm --create /dev/md0 -l 5 -n 3 /dev/sd[bdf]1
mdadm --add /dev/md0 /dev/sde1
[wait for the raid 5 to rebuild]
mdadm --grow /dev/md0 --backup-file backup -l 6 -n 4
[wait a few hours]
reboot
mdadm --assemble /dev/md0 /dev/sd[bdef]1
mdadm --assemble /dev/md0 /dev/sd[bdef]1 --backup-file backup
Any other output you'd need?
Guy
On Mon, 19 Oct 2009 07:14:06 +1100
"NeilBrown" <neilb@suse.de> wrote:
>
> Thanks for doing this testing!
>
> > Any idea how to resolve this issue ?
>
> Try pulling the very latest from my git repo. I was doing some
> testing like this just last week and found a number of issues which I
> think I have fixed.
>
> NeilBrown
>
* Re: Raid 5 to raid 6 reshape failure after reboot
From: NeilBrown @ 2009-10-19 20:05 UTC (permalink / raw)
To: Guy Martin; +Cc: linux-raid
On Tue, October 20, 2009 12:53 am, Guy Martin wrote:
>
> Hi Neil,
>
> Same result with the latest commit after a reboot.
>
> Maybe the problem comes from the fact that I first try to assemble
> the array without providing a backup file ?
That should not cause a problem - mdadm will simply fail in that
case and leave you to do it 'right'.
I assume it is the same backup file from before - not a file in /tmp
or something like that?
>
> For reference I did the following :
> mdadm --create /dev/md0 -l 5 -n 3 /dev/sd[bdf]1
> mdadm --add /dev/md0 /dev/sde1
> [wait for the raid 5 to rebuild]
> mdadm --grow /dev/md0 --backup-file backup -l 6 -n 4
> [wait a few hours]
> reboot
> mdadm --assemble /dev/md0 /dev/sd[bdef]1
> mdadm --assemble /dev/md0 /dev/sd[bdef]1 --backup-file backup
>
> Any other output you'd need ?
In general adding '-v' to '--assemble' is a good idea, but I doubt
it would give anything particularly useful in this case.
I'll try to reproduce your result and let you know how I go.
Thanks,
NeilBrown
>
> Guy
>
> On Mon, 19 Oct 2009 07:14:06 +1100
> "NeilBrown" <neilb@suse.de> wrote:
>
>>
>> Thanks for doing this testing!
>>
>> > Any idea how to resolve this issue ?
>>
>> Try pulling the very latest from my git repo. I was doing some
>> testing like this just last week and found a number of issues which I
>> think I have fixed.
>>
>> NeilBrown
>>
>
* Re: Raid 5 to raid 6 reshape failure after reboot
From: NeilBrown @ 2009-10-20 5:54 UTC (permalink / raw)
To: Guy Martin; +Cc: linux-raid
On Tue, October 20, 2009 7:05 am, NeilBrown wrote:
> On Tue, October 20, 2009 12:53 am, Guy Martin wrote:
>>
>> Hi Neil,
>>
>> Same result with the latest commit after a reboot.
>>
>> Maybe the problem comes from the fact that I first try to assemble
>> the array without providing a backup file ?
>
> That should not cause a problem - mdadm will simply fail in that
> case and leave you to do it 'right'.
>
> I assume it is the same backup file from before - not a file in /tmp
> or something like that?
>
>>
>> For reference I did the following :
>> mdadm --create /dev/md0 -l 5 -n 3 /dev/sd[bdf]1
>> mdadm --add /dev/md0 /dev/sde1
>> [wait for the raid 5 to rebuild]
>> mdadm --grow /dev/md0 --backup-file backup -l 6 -n 4
>> [wait a few hours]
>> reboot
>> mdadm --assemble /dev/md0 /dev/sd[bdef]1
>> mdadm --assemble /dev/md0 /dev/sd[bdef]1 --backup-file backup
>>
>> Any other output you'd need ?
>
> In general adding '-v' to '--assemble' is a good idea, but I doubt
> it would give anything particularly useful in this case.
> I'll try to reproduce your result and let you know how I go.
I tried to reproduce this and failed - it works perfectly for me.
I have added some more tracing messages to --assemble which are enabled
by --verbose.
Could you please pull the latest devel-3.1 branch from my git tree and
try the same assemble command but with --verbose at the end and report the
result.
Thanks.
NeilBrown
* Re: Raid 5 to raid 6 reshape failure after reboot
From: Guy Martin @ 2009-10-20 8:37 UTC (permalink / raw)
To: NeilBrown; +Cc: linux-raid
Hi Neil,
Here is the output I've got:
bleh mdadm # ./mdadm --assemble /dev/md0 /dev/sd[bdef]1 --backup-file backup -v
mdadm: looking for devices for /dev/md0
mdadm: /dev/sdb1 is identified as a member of /dev/md0, slot 0.
mdadm: /dev/sdd1 is identified as a member of /dev/md0, slot 1.
mdadm: /dev/sde1 is identified as a member of /dev/md0, slot 3.
mdadm: /dev/sdf1 is identified as a member of /dev/md0, slot 2.
mdadm: /dev/md0 has an active reshape - checking if critical section needs to be restored
mdadm: too-old timestamp on backup-metadata on backup
mdadm: Failed to find backup of critical section
mdadm: Failed to restore critical section for reshape, sorry.
The backup file is of course the one I've been using for the grow
command.
The values I've got:
- info->array.utime : 1256026602
- bsb.mtime : 1256020033
My timezone is Europe/Brussels if that matters.
Removing this check makes the reshape continue and the array start
correctly.
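To make those numbers concrete, here is a tiny standalone program (just an
illustration, not mdadm code) showing how far the backup metadata lags behind
the array's last update time - roughly 6569 seconds, or about 1 hour 49 minutes:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
            /* Values reported by the verbose assemble run above. */
            uint64_t array_utime  = 1256026602;   /* info->array.utime */
            uint64_t backup_mtime = 1256020033;   /* bsb.mtime         */

            uint64_t gap = array_utime - backup_mtime;
            printf("backup metadata is %llu seconds (~%llu minutes) older than the array\n",
                   (unsigned long long)gap, (unsigned long long)(gap / 60));
            return 0;
    }

Presumably the check rejects the backup once that gap grows past some threshold.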
Let me know if you want me to do some more tests.
HTH,
Guy
On Tue, 20 Oct 2009 16:54:43 +1100
"NeilBrown" <neilb@suse.de> wrote:
> I tried to reproduce this and failed - it works perfectly for me.
>
> I have added some more tracing messages to --assemble which are
> enabled by --verbose.
> Could you please pull the latest devel-3.1 branch from my git tree and
> try the same assemble command but with --verbose at the end and
> report the result.
>
> Thanks.
> NeilBrown
>
* Re: Raid 5 to raid 6 reshape failure after reboot
From: Neil Brown @ 2009-10-21 23:44 UTC (permalink / raw)
To: Guy Martin; +Cc: linux-raid
On Tuesday October 20, gmsoft@tuxicoman.be wrote:
>
> Hi Neil,
>
> Here is the output I've got :
>
> bleh mdadm # ./mdadm --assemble /dev/md0 /dev/sd[bdef]1 --backup-file backup -v
> mdadm: looking for devices for /dev/md0
> mdadm: /dev/sdb1 is identified as a member of /dev/md0, slot 0.
> mdadm: /dev/sdd1 is identified as a member of /dev/md0, slot 1.
> mdadm: /dev/sde1 is identified as a member of /dev/md0, slot 3.
> mdadm: /dev/sdf1 is identified as a member of /dev/md0, slot 2.
> mdadm: /dev/md0 has an active reshape - checking if critical section needs to be restored
> mdadm: too-old timestamp on backup-metadata on backup
> mdadm: Failed to find backup of critical section
Ahhh... I wondered a bit about that as I was adding the fprintf there,
but it was along the lines of "this cannot happen", not "this is where
the bug might be" :-)
I see now what is happening. I need to update the mtime every time I
write the backup metadata (of course!). I never tripped on this
because I never let a reshape run for more than a few minutes.
I have checked in a patch which updates the mtime properly, so it
should now work for you.
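In outline, the change amounts to refreshing bsb.mtime every time the backup
metadata is written out, so the timestamp check keeps passing while a long
reshape is running. A simplified sketch (stand-in types, not the actual patch):

    #include <stdint.h>
    #include <time.h>
    #include <unistd.h>

    struct backup_sb {              /* simplified stand-in for mdadm's bsb */
            char     magic[16];
            uint64_t mtime;         /* little-endian on disk in the real format */
            /* ... ranges, checksum, etc. ... */
    };

    int write_backup_sb(int fd, struct backup_sb *bsb, off_t offset)
    {
            bsb->mtime = (uint64_t)time(NULL);   /* refresh on every write */
            /* the real code would also recompute the checksum here */
            if (pwrite(fd, bsb, sizeof(*bsb), offset) != (ssize_t)sizeof(*bsb))
                    return -1;
            return 0;
    }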
Thanks for helping make mdadm even better!
NeilBrown
> mdadm: Failed to restore critical section for reshape, sorry.
>
> The backup file is of course the one I've been using for the grow
> command.
>
> The values I've got :
> - info->array.utime : 1256026602
> - bsb.mtime : 1256020033
>
> My timezone is Europe/Brussels if that matters.
>
> Removing this check makes the reshape continue and the array start
> correctly.
>
> Let me know if you want me to do some more tests.
>
> HTH,
> Guy
>
>
> On Tue, 20 Oct 2009 16:54:43 +1100
> "NeilBrown" <neilb@suse.de> wrote:
>
> > I tried to reproduce this and failed - it works perfectly for me.
> >
> > I have added some more tracing messages to --assemble which are
> > enabled by --verbose.
> > Could you please pull the latest devel-3.1 branch from my git tree and
> > try the same assemble command but with --verbose at the end and
> > report the result.
> >
> > Thanks.
> > NeilBrown
> >
* Re: Raid 5 to raid 6 reshape failure after reboot
From: Guy Martin @ 2009-10-22 9:29 UTC (permalink / raw)
To: Neil Brown; +Cc: linux-raid
Hi Neil,
Thanks, this new mdadm does fix the assemble issue.
However, I performed an additional test and it didn't go so well.
I failed one drive during the reshape and tried to remove and add it
back.
I wasn't able to remove the drive because the mdadm process running in
the background was keeping the partition open. I then decided to stop
the array and restart it but without luck.
I've performed this test with today's devel-3.1 branch.
Is this supposed to work, or should no drive fail during the reshape?
Here are the commands that I've been issuing:
[array currently reshaping]
mdadm --fail /dev/md0 /dev/sdb1
mdadm -r /dev/md0 /dev/sdb1 -> device busy
mdadm -S /dev/md0 -> array stopped
mdadm --assemble /dev/md0 /dev/sd[bdef]1 --backup-file backup -v
mdadm: looking for devices for /dev/md0
mdadm: /dev/sdb1 is identified as a member of /dev/md0, slot 0.
mdadm: /dev/sdd1 is identified as a member of /dev/md0, slot 1.
mdadm: /dev/sde1 is identified as a member of /dev/md0, slot 3.
mdadm: /dev/sdf1 is identified as a member of /dev/md0, slot 2.
mdadm: /dev/md0 has an active reshape - checking if critical section needs to be restored
mdadm: backup-metadata found on backup but is not needed
mdadm: Failed to find backup of critical section
mdadm: Failed to restore critical section for reshape, sorry.
Guy
> Ahhh... I wondered a bit about that as I was adding the fprintf there,
> but it was along the lines of "this cannot happen", not "this is where
> the bug might be" :-)
>
> I see now what is happening. I need to update the mtime every time I
> write the backup metadata (of course!). I never tripped on this
> because I never let a reshape run for more than a few minutes.
>
> I have checked in a patch which updates the mtime properly, so it
> should now work for you.
>
> Thanks for helping make mdadm even better!
>
> NeilBrown
>
>
* Re: Raid 5 to raid 6 reshape failure after reboot
From: Guy Martin @ 2009-10-22 14:20 UTC (permalink / raw)
To: Neil Brown; +Cc: linux-raid
Neil,
While redoing the reboot test, I've also noticed this:
When I first issue the --grow command, I see the following in dmesg:
[192752.106467] md: reshape of RAID array md0
[192752.106473] md: minimum _guaranteed_ speed: 200000 KB/sec/disk.
[192752.106479] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for reshape.
The minimum guaranteed speed should be 1000KB/sec according to the
entry in /proc/sys/dev/raid/speed_limit_min.
Also, the performance is not really good. I have about 400K/sec according to /proc/mdstat.
Now, if I stop the array and assemble it again, things are better. The output in dmesg displays the correct value:
[193138.646204] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
[193138.646210] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for reshape.
And performance is much better: I now get ~1500K/s, which shrinks the time of the reshape from ~2 weeks to 'only' a few days.
Any thoughts?
HTH,
Guy
On Thu, 22 Oct 2009 10:44:53 +1100
Neil Brown <neilb@suse.de> wrote:
> Thanks for helping make mdadm even better!
>
> NeilBrown
>
* Re: Raid 5 to raid 6 reshape failure after reboot
From: Neil Brown @ 2009-10-29 3:32 UTC (permalink / raw)
To: Guy Martin; +Cc: linux-raid
On Thursday October 22, gmsoft@tuxicoman.be wrote:
>
> Neil,
>
> While redoing the reboot test, I've also noticed this :
> When I first issue the --grow command, I see the following in dmesg :
> [192752.106467] md: reshape of RAID array md0
> [192752.106473] md: minimum _guaranteed_ speed: 200000 KB/sec/disk.
> [192752.106479] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for reshape.
>
> The minimum guaranteed speed should be 1000KB/sec according to the
> entry in /proc/sys/dev/raid/speed_limit_min.
This is expected.
Each array can have a local setting in /sys/block/mdX/md/sync_speed_min
which overrides the global setting.
When a reshape does not change the size of the array, we need to
constantly create a backup of the few stripes 'currently' being
reshaped, so that in the event of an unclean shutdown (crash/power
failure) we can restart the reshape without data loss.
The process of reading data to make the backup looks like non-sync IO
to md, so it would normally slow down the resync process.
That is not a good idea, so mdadm deliberately sets
..../md/sync_speed_min very high to keep the resync moving.
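For anyone curious, the per-array override can be poked from user space along
these lines (a minimal illustration, not mdadm's own helper - the function name
is made up):

    #include <stdio.h>

    int set_sync_speed_min(const char *mdname, long kib_per_sec)
    {
            char path[256];
            snprintf(path, sizeof(path), "/sys/block/%s/md/sync_speed_min", mdname);

            FILE *f = fopen(path, "w");
            if (!f)
                    return -1;
            fprintf(f, "%ld\n", kib_per_sec);
            return fclose(f);
    }

    int main(void)
    {
            /* 200000 matches the "minimum _guaranteed_ speed" seen in the dmesg above */
            return set_sync_speed_min("md0", 200000) == 0 ? 0 : 1;
    }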
>
> Also, the performance is not really good. I have about 400K/sec according to /proc/mdstat.
>
> Now, if I stop the array and assemble it again, things are better. The output in dmesg displays the correct value :
> [193138.646204] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
> [193138.646210] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for reshape.
This is different because when assembling the array, mdadm doesn't set
the sync_speed_min until after the reshape has started. I might try
to get mdadm to set it before starting the reshape to avoid confusion.
>
> And performance is much better: I now get ~1500K/s, which shrinks the time of the reshape from ~2 weeks to 'only' a few days.
That is surprising. The speed of 1500K/sec seems more reasonable, but
the fact that it changed after you restarted does surprise me.
(goes off to experiment and explore the code).
Ahhh... bug.
in Grow_reshape, we have code:
    if (ndata == odata) {
            /* Make 'blocks' bigger for better throughput, but
             * not so big that we reject it below.
             */
            if (blocks * 32 < sra->component_size)
                    blocks *= 16;
    } else
This is meant to do the backup in larger chunks in the case where the
array isn't changing size (where the array does change size, we only
do the backup for a fraction of a second so it doesn't matter).
However, sra->component_size is not yet initialised at that point, so it is zero
and 'blocks' does not get changed.
(->component_size gets set a little later in "sra = sysfs_read(fd,.....)")
So the reshape is being done with a very small buffer, and you get
bad performance.
The matching code in Grow_continue doesn't check for component_size
and so doesn't suffer the same problem.
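The effect of the uninitialised value is easy to see in isolation; here is a
small standalone demo (the numbers are only illustrative, not this array's exact
geometry):

    #include <stdio.h>

    int main(void)
    {
            unsigned long long component_size = 976767872ULL; /* per-device size in sectors, illustrative */
            unsigned long long blocks = 256;                  /* initial backup unit in sectors, illustrative */

            /* Buggy ordering: component_size read as 0, so the test is never
             * true and the backup unit stays tiny. */
            unsigned long long uninitialised = 0;
            unsigned long long buggy = blocks;
            if (buggy * 32 < uninitialised)
                    buggy *= 16;

            /* Fixed ordering: component_size is known before the test runs. */
            unsigned long long fixed = blocks;
            if (fixed * 32 < component_size)
                    fixed *= 16;

            printf("backup unit, buggy ordering: %llu sectors\n", buggy);
            printf("backup unit, fixed ordering: %llu sectors\n", fixed);
            return 0;
    }

Presumably the fix is simply to make sure component_size has been read from
sysfs before this test runs.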
A bit of experimentation shows that you can increase the throughput
quite a bit more by changing the multiply factor to e.g. 64 and
increasing the stripe_cache_size (in /sys/.../md/)
I wonder how to pick an 'optimal' size....
Maybe I could get the backup process to occasionally look at
stripe_cache_size and adjust the backup size based on that.
Then the admin could try increasing the cache size to improve
throughput, but be careful not to exhaust memory.
I'll have to think about it a bit.
Thanks for your feedback.
NeilBrown
* Re: Raid 5 to raid 6 reshape failure after reboot
From: Neil Brown @ 2009-10-29 4:55 UTC (permalink / raw)
To: Guy Martin; +Cc: linux-raid
On Thursday October 22, gmsoft@tuxicoman.be wrote:
>
> Hi Neil,
>
> Thanks, this new mdadm does fix the assemble issue.
>
> However, I performed an additional test and it didn't go so well.
> I failed one drive during the reshape and tried to remove and add it
> back.
> I wasn't able to remove the drive because the mdadm process running in
> the background was keeping the partition open. I then decided to stop
> the array and restart it but without luck.
> I've performed this test with today's devel-3.1 branch.
>
> Is this supposed to work, or should no drive fail during the reshape?
Thanks for reporting this.
I hadn't tested, or even thought through, that scenario.
I have tested that a degraded array can be reshaped, but not that a
reshaping array can get degraded.
md will certainly not allow you to add the device back - that will
have to wait for the reshape to finish.... I guess it could be managed
but it would be rather complex.... maybe.
However, it should handle the failure properly, but it doesn't.
In particular, the reshape process is aborted and restarted where it
was up to, but in the process of doing that it 'escapes' from the
controlling mdadm process that was managing the backup. So the
reshape gets way ahead of the backup and as you discovered, the backup
file is no longer useful for restarting the reshaped array.
You can fix this by changing the
        mddev->resync_max = MaxSector;
near the end of md_do_sync to
        if (!test_bit(MD_RECOVERY_INTR, &mddev->recovery))
                mddev->resync_max = MaxSector;
But doing that with the current mdadm isn't a good solution as it
could be backing up the wrong data (as mdadm will trust the device
that has been marked as faulty).
It looks like I have some fixing to do....
Thanks!
NeilBrown
>
> Here are the commands that I've been issuing :
> [array currently reshaping]
> mdadm --fail /dev/md0 /dev/sdb1
> mdadm -r /dev/md0 /dev/sdb1 -> device busy
> mdadm -S /dev/md0 -> array stopped
> mdadm --assemble /dev/md0 /dev/sd[bdef]1 --backup-file backup -v
>
> mdadm: looking for devices for /dev/md0
> mdadm: /dev/sdb1 is identified as a member of /dev/md0, slot 0.
> mdadm: /dev/sdd1 is identified as a member of /dev/md0, slot 1.
> mdadm: /dev/sde1 is identified as a member of /dev/md0, slot 3.
> mdadm: /dev/sdf1 is identified as a member of /dev/md0, slot 2.
> mdadm:/dev/md0 has an active reshape - checking if critical section needs to be restored
> mdadm: backup-metadata found on backup but is not needed
> mdadm: Failed to find backup of critical section
> mdadm: Failed to restore critical section for reshape, sorry.
>
>
> Guy
>
>
> > Ahhh... I wondered a bit about that as I was adding the fprintf there,
> > but it was along the lines of "this cannot happen", not "this is where
> > the bug might be" :-)
> >
> > I see now what is happening. I need to update the mtime every time I
> > write the backup metadata (of course!). I never tripped on this
> > because I never let a reshape run for more than a few minutes.
> >
>> I have checked in a patch which updates the mtime properly, so it
>> should now work for you.
> >
> > Thanks for helping make mdadm even better!
> >
> > NeilBrown
> >
> >