* confused raid1
@ 2005-08-15 15:41 Jon Lewis
2005-08-15 15:55 ` Mario 'BitKoenig' Holbe
2005-08-15 16:02 ` Tyler
0 siblings, 2 replies; 9+ messages in thread
From: Jon Lewis @ 2005-08-15 15:41 UTC (permalink / raw)
To: linux-raid
I've inherited responsibility for a server with a root raid1 that
degrades every time the system is rebooted. It's a 2.4.x kernel. I've
got both raidtools and mdadm available.
The raid1 device is supposed to be /dev/hde1 & /dev/hdg1, with
/dev/hdc1 as a spare. I believe it was created with raidtools and the
following portion of /etc/raidtab:
raiddev /dev/md1
        raid-level              1
        nr-raid-disks           2
        chunk-size              64k
        persistent-superblock   1
        nr-spare-disks          1
        device                  /dev/hde1
        raid-disk               0
        device                  /dev/hdg1
        raid-disk               1
        device                  /dev/hdc1
        spare-disk              0
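For comparison, an mdadm invocation roughly equivalent to that raidtab
might look like the sketch below (my assumption; --create writes new
superblocks, so it must never be run against an array whose data you
want to keep -- the command is only echoed here):

```shell
# Sketch: an mdadm command roughly equivalent to the raidtab above.
# WARNING: --create writes fresh superblocks; never run it on an
# array holding data you care about.  Echoed rather than executed.
cmd="mdadm --create /dev/md1 --level=1 --raid-devices=2 \
--spare-devices=1 /dev/hde1 /dev/hdg1 /dev/hdc1"
echo "$cmd"
```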
The output of mdadm -E concerns me though.
# mdadm -E /dev/hdc1
/dev/hdc1:
Magic : a92b4efc
Version : 00.90.00
UUID : 8b65fa52:21176cc9:cbb74149:c418b5a4
Creation Time : Tue Jan 13 13:21:41 2004
Raid Level : raid1
Device Size : 30716160 (29.29 GiB 31.45 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 1
Update Time : Thu Aug 11 08:38:59 2005
State : dirty, no-errors
Active Devices : 2
Working Devices : 2
Failed Devices : -1
Spare Devices : 0
Checksum : 6a4dddb8 - correct
Events : 0.195
Number Major Minor RaidDevice State
this 1 22 1 1 active sync /dev/hdc1
0 0 33 1 0 active sync /dev/hde1
1 1 22 1 1 active sync /dev/hdc1
# mdadm -E /dev/hde1
/dev/hde1:
Magic : a92b4efc
Version : 00.90.00
UUID : 8b65fa52:21176cc9:cbb74149:c418b5a4
Creation Time : Tue Jan 13 13:21:41 2004
Raid Level : raid1
Device Size : 30716160 (29.29 GiB 31.45 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 1
Update Time : Mon Aug 15 11:16:43 2005
State : dirty, no-errors
Active Devices : 2
Working Devices : 2
Failed Devices : -1
Spare Devices : 0
Checksum : 6a5348c9 - correct
Events : 0.199
Number Major Minor RaidDevice State
this 0 33 1 0 active sync /dev/hde1
0 0 33 1 0 active sync /dev/hde1
1 1 34 1 1 active sync /dev/hdg1
# mdadm -E /dev/hdg1
/dev/hdg1:
Magic : a92b4efc
Version : 00.90.00
UUID : 8b65fa52:21176cc9:cbb74149:c418b5a4
Creation Time : Tue Jan 13 13:21:41 2004
Raid Level : raid1
Device Size : 30716160 (29.29 GiB 31.45 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 1
Update Time : Mon Aug 15 11:16:43 2005
State : dirty, no-errors
Active Devices : 2
Working Devices : 2
Failed Devices : -1
Spare Devices : 0
Checksum : 6a5348cc - correct
Events : 0.199
Number Major Minor RaidDevice State
this 1 34 1 1 active sync /dev/hdg1
0 0 33 1 0 active sync /dev/hde1
1 1 34 1 1 active sync /dev/hdg1
Shouldn't total devices be at least 2? How can failed devices be -1?
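One plausible explanation for the -1 (speculation on my part, not
something the v0.90 superblock format guarantees): the failed-disks
counter was decremented past zero and wrapped, and the wrapped 32-bit
value prints as -1 when read back as signed. A quick sketch of that
reinterpretation:

```shell
# A 32-bit counter decremented below zero wraps to 0xFFFFFFFF
# (4294967295); a signed 32-bit reader reports that value as -1.
raw=4294967295
signed=$(( raw >= 2147483648 ? raw - 4294967296 : raw ))
echo "$signed"   # -1
```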
When the system reboots, md1 becomes just /dev/hdc1. I've used mdadm to
add hde1, fail and then remove hdc1, and add hdg1. How can I repair the
array such that it will survive the next reboot and keep hde1 and hdg1 as
the working devices?
md1 : active raid1 hdg1[1] hde1[0]
30716160 blocks [2/2] [UU]
----------------------------------------------------------------------
Jon Lewis | I route
Senior Network Engineer | therefore you are
Atlantic Net |
_________ http://www.lewis.org/~jlewis/pgp for PGP public key_________
* Re: confused raid1
2005-08-15 15:41 confused raid1 Jon Lewis
@ 2005-08-15 15:55 ` Mario 'BitKoenig' Holbe
2005-08-15 16:01 ` Jon Lewis
2005-08-15 16:02 ` Tyler
1 sibling, 1 reply; 9+ messages in thread
From: Mario 'BitKoenig' Holbe @ 2005-08-15 15:55 UTC (permalink / raw)
To: linux-raid
Jon Lewis <jlewis@lewis.org> wrote:
> I've inherited responsibility for a server with a root raid1 that
> degrades every time the system is rebooted. It's a 2.4.x kernel. I've
...
> The raid1 device is supposed to be /dev/hde1 & /dev/hdg1 with /dev/hdc1 as
...
> When the system reboots, md1 becomes just /dev/hdc1. I've used mdadm to
Well, reading the kernel boot messages could help.
Perhaps the hdc1 partition is type fd (raid autodetect) and the
driver for hd[eg] is not yet loaded when the RAID autodetection runs.
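One way to check which partitions carry type fd is to filter the Id
column of fdisk -l output, along these lines (the sample table here
is hypothetical, not this server's actual layout):

```shell
# Hypothetical `fdisk -l` lines; on a real box, pipe actual fdisk
# output instead.  Field 5 is the partition Id; fd means Linux raid
# autodetect.
sample='/dev/hdc1  1  3736  30716248+  fd  Linux raid autodetect
/dev/hde1  1  3736  30716248+  fd  Linux raid autodetect
/dev/hdg1  1  3736  30716248+  83  Linux'
printf '%s\n' "$sample" | awk '$5 == "fd" { print $1 }'
```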
regards
Mario
--
We are the Bore. Resistance is futile. You will be bored.
* Re: confused raid1
2005-08-15 15:55 ` Mario 'BitKoenig' Holbe
@ 2005-08-15 16:01 ` Jon Lewis
0 siblings, 0 replies; 9+ messages in thread
From: Jon Lewis @ 2005-08-15 16:01 UTC (permalink / raw)
To: linux-raid
On Mon, 15 Aug 2005, Mario 'BitKoenig' Holbe wrote:
> Well, reading the kernel boot messages could help.
> Perhaps, the hdc1 partition is type fd (raid autodetect) and the driver
> for hd[eg] is not in place when the RAID Autodetection is running.
I should have included that. All 3 of them are type fd.
----------------------------------------------------------------------
Jon Lewis | I route
Senior Network Engineer | therefore you are
Atlantic Net |
_________ http://www.lewis.org/~jlewis/pgp for PGP public key_________
* Re: confused raid1
2005-08-15 15:41 confused raid1 Jon Lewis
2005-08-15 15:55 ` Mario 'BitKoenig' Holbe
@ 2005-08-15 16:02 ` Tyler
2005-08-15 16:12 ` Jon Lewis
1 sibling, 1 reply; 9+ messages in thread
From: Tyler @ 2005-08-15 16:02 UTC (permalink / raw)
To: Jon Lewis; +Cc: linux-raid
A few questions:
a) what kernel version are you using?
b) what mdadm version are you using?
c) what messages concerning the raid are in the log when it's failing
one of the drives and making hdc1 an active drive?
d) what linux distribution (and version) are you using?
Tyler.
Jon Lewis wrote:
> I've inherited responsibility for a server with a root raid1 that
> degrades every time the system is rebooted. It's a 2.4.x kernel.
> I've got both raidtools and mdadm available.
>
> The raid1 device is supposed to be /dev/hde1 & /dev/hdg1, with
> /dev/hdc1 as a spare. I believe it was created with raidtools and
> the following portion of /etc/raidtab:
>
> raiddev /dev/md1
> raid-level 1
> nr-raid-disks 2
> chunk-size 64k
> persistent-superblock 1
> nr-spare-disks 1
> device /dev/hde1
> raid-disk 0
> device /dev/hdg1
> raid-disk 1
> device /dev/hdc1
> spare-disk 0
>
> The output of mdadm -E concerns me though.
>
> # mdadm -E /dev/hdc1
> /dev/hdc1:
> Magic : a92b4efc
> Version : 00.90.00
> UUID : 8b65fa52:21176cc9:cbb74149:c418b5a4
> Creation Time : Tue Jan 13 13:21:41 2004
> Raid Level : raid1
> Device Size : 30716160 (29.29 GiB 31.45 GB)
> Raid Devices : 2
> Total Devices : 1
> Preferred Minor : 1
>
> Update Time : Thu Aug 11 08:38:59 2005
> State : dirty, no-errors
> Active Devices : 2
> Working Devices : 2
> Failed Devices : -1
> Spare Devices : 0
> Checksum : 6a4dddb8 - correct
> Events : 0.195
>
> Number Major Minor RaidDevice State
> this 1 22 1 1 active sync /dev/hdc1
> 0 0 33 1 0 active sync /dev/hde1
> 1 1 22 1 1 active sync /dev/hdc1
>
> # mdadm -E /dev/hde1
> /dev/hde1:
> Magic : a92b4efc
> Version : 00.90.00
> UUID : 8b65fa52:21176cc9:cbb74149:c418b5a4
> Creation Time : Tue Jan 13 13:21:41 2004
> Raid Level : raid1
> Device Size : 30716160 (29.29 GiB 31.45 GB)
> Raid Devices : 2
> Total Devices : 1
> Preferred Minor : 1
>
> Update Time : Mon Aug 15 11:16:43 2005
> State : dirty, no-errors
> Active Devices : 2
> Working Devices : 2
> Failed Devices : -1
> Spare Devices : 0
> Checksum : 6a5348c9 - correct
> Events : 0.199
>
>
> Number Major Minor RaidDevice State
> this 0 33 1 0 active sync /dev/hde1
> 0 0 33 1 0 active sync /dev/hde1
> 1 1 34 1 1 active sync /dev/hdg1
>
> # mdadm -E /dev/hdg1
> /dev/hdg1:
> Magic : a92b4efc
> Version : 00.90.00
> UUID : 8b65fa52:21176cc9:cbb74149:c418b5a4
> Creation Time : Tue Jan 13 13:21:41 2004
> Raid Level : raid1
> Device Size : 30716160 (29.29 GiB 31.45 GB)
> Raid Devices : 2
> Total Devices : 1
> Preferred Minor : 1
>
> Update Time : Mon Aug 15 11:16:43 2005
> State : dirty, no-errors
> Active Devices : 2
> Working Devices : 2
> Failed Devices : -1
> Spare Devices : 0
> Checksum : 6a5348cc - correct
> Events : 0.199
>
>
> Number Major Minor RaidDevice State
> this 1 34 1 1 active sync /dev/hdg1
> 0 0 33 1 0 active sync /dev/hde1
> 1 1 34 1 1 active sync /dev/hdg1
>
> Shouldn't total devices be at least 2? How can failed devices be -1?
>
> When the system reboots, md1 becomes just /dev/hdc1. I've used mdadm
> to add hde1, fail and then remove hdc1, and add hdg1. How can I
> repair the array such that it will survive the next reboot and keep
> hde1 and hdg1 as the working devices?
>
> md1 : active raid1 hdg1[1] hde1[0]
> 30716160 blocks [2/2] [UU]
>
* Re: confused raid1
2005-08-15 16:02 ` Tyler
@ 2005-08-15 16:12 ` Jon Lewis
2005-08-15 16:41 ` Tyler
2005-08-15 22:57 ` Neil Brown
0 siblings, 2 replies; 9+ messages in thread
From: Jon Lewis @ 2005-08-15 16:12 UTC (permalink / raw)
To: linux-raid
On Mon, 15 Aug 2005, Tyler wrote:
> A few questions:
>
> a) what kernel version are you using?
> b) what mdadm version are you using?
> c) what messages concerning the raid are in the log when it's failing
> one of the drives and making hdc1 an active drive?
> d) what linux distribution (and version) are you using?
The server is RH 8.0. The kernel is a 3rd-party "meant for RH 8" one,
2.4.20-28_36.rh8.0.atsmp. We've had a lot of issues with drivers for
the QLA2100 FC host adapter and XFS, so I'm somewhat hesitant to try
different kernels. The last one we tried was a 2.4.31 snapshot from
SGI's CVS (supposed to have the latest/greatest XFS driver, and I'd
added the latest QLA2100 driver module from QLogic to it). In that
kernel, NFS export of XFS was broken: clients could mount, but not
actually read files.
It's quite probable that, before the following reboot, md1 was hdc1
and hde1.
Aug 9 02:02:39 kernel: md: created md1
Aug 9 02:02:39 kernel: md: bind<hdc1,1>
Aug 9 02:02:39 kernel: md: bind<hde1,2>
Aug 9 02:02:39 kernel: md: bind<hdg1,3>
Aug 9 02:02:39 kernel: md: running: <hdg1><hde1><hdc1>
Aug 9 02:02:39 kernel: md: hdg1's event counter: 000000b0
Aug 9 02:02:39 kernel: md: hde1's event counter: 000000b4
Aug 9 02:02:39 kernel: md: hdc1's event counter: 000000b4
Aug 9 02:02:39 kernel: md: superblock update time inconsistency -- using the most recent one
Aug 9 02:02:39 kernel: md: freshest: hde1
Aug 9 02:02:39 kernel: md: kicking non-fresh hdg1 from array!
Aug 9 02:02:39 kernel: md: unbind<hdg1,2>
Aug 9 02:02:39 kernel: md: export_rdev(hdg1)
Aug 9 02:02:39 kernel: md: RAID level 1 does not need chunksize! Continuing anyway.
Aug 9 02:02:39 kernel: kmod: failed to exec /sbin/modprobe -s -k md-personality-3, errno = 2
Aug 9 02:02:39 kernel: md: personality 3 is not loaded!
Aug 9 02:02:39 kernel: md :do_md_run() returned -22
Aug 9 02:02:39 kernel: md: md1 stopped.
Aug 9 02:02:39 kernel: md: unbind<hde1,1>
Aug 9 02:02:39 kernel: md: export_rdev(hde1)
Aug 9 02:02:39 kernel: md: unbind<hdc1,0>
Aug 9 02:02:39 kernel: md: export_rdev(hdc1)
Aug 9 02:02:39 kernel: md: ... autorun DONE.
mdadm - v1.4.0 - 29 Oct 2003 (not exactly the latest)
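The kick decision in that log can be sketched as a comparison of the
event counters (a simplification of what the 2.4 md autorun code
actually does):

```shell
# Event counters from the boot log above (hex, as md printed them).
events='hdg1 b0
hde1 b4
hdc1 b4'
# The highest event count marks the freshest superblock; any member
# whose counter lags behind it is kicked as non-fresh at autorun.
max=$(printf '%s\n' "$events" | awk '{ print $2 }' | sort | tail -n1)
printf '%s\n' "$events" | awk -v m="$max" '$2 < m { print $1, "is non-fresh" }'
```

which reports hdg1 as non-fresh, matching the "kicking non-fresh
hdg1" line in the log.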
----------------------------------------------------------------------
Jon Lewis | I route
Senior Network Engineer | therefore you are
Atlantic Net |
_________ http://www.lewis.org/~jlewis/pgp for PGP public key_________
* Re: confused raid1
2005-08-15 16:12 ` Jon Lewis
@ 2005-08-15 16:41 ` Tyler
2005-08-15 17:29 ` Jon Lewis
2005-08-15 22:57 ` Neil Brown
1 sibling, 1 reply; 9+ messages in thread
From: Tyler @ 2005-08-15 16:41 UTC (permalink / raw)
To: Jon Lewis; +Cc: linux-raid
Try this suggestion (regarding modules.conf).
https://www.redhat.com/archives/fedora-list/2003-December/msg05205.html
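For anyone reading the archive, the gist of that post is a
modules.conf alias tying the personality name from the boot log to
the raid1 module, along these lines (syntax inferred from the log's
modprobe failure; I have not verified it against that exact kernel):

```
# /etc/modules.conf
alias md-personality-3 raid1
```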
Tyler.
Jon Lewis wrote:
> On Mon, 15 Aug 2005, Tyler wrote:
>
>> A few questions:
>>
>> a) what kernel version are you using?
>> b) what mdadm version are you using?
>> c) what messages conscerning the raid are in the log when its failing
>> one of the drives and making hdc1 an active drive?
>> d) what linux distribution (and version) are you using?
>
>
> The server is RH 8.0. The kernel is a 3rd party "meant for RH 8" one,
> 2.4.20-28_36.rh8.0.atsmp. We've had a lot of issues with drivers for
> the QLA2100 FC host adapter and XFS, so I'm somewhat hesitant to try
> different kernels. Last one we tried was a 2.4.31 snapshot from SGI's
> cvs (supposed to have the latest/greatest XFS driver, and I'd added
> the latest QLA2100 driver module from qlogic to it). In that kernel,
> NFS export of XFS was broken. Clients could mount, but not actually
> read files.
>
> It's quite probable, that before the following reboot, md1 was hdc1
> and hde1.
>
> Aug 9 02:02:39 kernel: md: created md1
> Aug 9 02:02:39 kernel: md: bind<hdc1,1>
> Aug 9 02:02:39 kernel: md: bind<hde1,2>
> Aug 9 02:02:39 kernel: md: bind<hdg1,3>
> Aug 9 02:02:39 kernel: md: running: <hdg1><hde1><hdc1>
> Aug 9 02:02:39 kernel: md: hdg1's event counter: 000000b0
> Aug 9 02:02:39 kernel: md: hde1's event counter: 000000b4
> Aug 9 02:02:39 kernel: md: hdc1's event counter: 000000b4
> Aug 9 02:02:39 kernel: md: superblock update time inconsistency --
> using the most recent one
> Aug 9 02:02:39 kernel: md: freshest: hde1
> Aug 9 02:02:39 kernel: md: kicking non-fresh hdg1 from array!
> Aug 9 02:02:39 kernel: md: unbind<hdg1,2>
> Aug 9 02:02:39 kernel: md: export_rdev(hdg1)
> Aug 9 02:02:39 kernel: md: RAID level 1 does not need chunksize!
> Continuing anyway.
> Aug 9 02:02:39 kernel: kmod: failed to exec /sbin/modprobe -s -k
> md-personality-3, errno = 2
> Aug 9 02:02:39 kernel: md: personality 3 is not loaded!
> Aug 9 02:02:39 kernel: md :do_md_run() returned -22
> Aug 9 02:02:39 kernel: md: md1 stopped.
> Aug 9 02:02:39 kernel: md: unbind<hde1,1>
> Aug 9 02:02:39 kernel: md: export_rdev(hde1)
> Aug 9 02:02:39 kernel: md: unbind<hdc1,0>
> Aug 9 02:02:39 kernel: md: export_rdev(hdc1)
> Aug 9 02:02:39 kernel: md: ... autorun DONE.
>
> mdadm - v1.4.0 - 29 Oct 2003 (not exactly the latest)
>
* Re: confused raid1
2005-08-15 16:41 ` Tyler
@ 2005-08-15 17:29 ` Jon Lewis
2005-08-15 18:56 ` Tyler
0 siblings, 1 reply; 9+ messages in thread
From: Jon Lewis @ 2005-08-15 17:29 UTC (permalink / raw)
To: linux-raid
On Mon, 15 Aug 2005, Tyler wrote:
> Try this suggestion (regarding modules.conf).
>
> https://www.redhat.com/archives/fedora-list/2003-December/msg05205.html
I don't see why that modules.conf addition would be necessary / make a
difference. I have other servers with root-raid1 that haven't needed
that, and mkinitrd is smart enough (reads /etc/raidtab) to know that raid1
is needed and loads the raid1 module in the initrd linuxrc script.
----------------------------------------------------------------------
Jon Lewis | I route
Senior Network Engineer | therefore you are
Atlantic Net |
_________ http://www.lewis.org/~jlewis/pgp for PGP public key_________
* Re: confused raid1
2005-08-15 17:29 ` Jon Lewis
@ 2005-08-15 18:56 ` Tyler
0 siblings, 0 replies; 9+ messages in thread
From: Tyler @ 2005-08-15 18:56 UTC (permalink / raw)
To: Jon Lewis; +Cc: linux-raid
Well, I guess you won't know if you don't try.
Do your other servers report the same "error" in their logs on
bootup, regarding the module?
Tyler.
Jon Lewis wrote:
> On Mon, 15 Aug 2005, Tyler wrote:
>
>> Try this suggestion (regarding modules.conf).
>>
>> https://www.redhat.com/archives/fedora-list/2003-December/msg05205.html
>
>
> I don't see why that modules.conf addition would be necessary / make a
> difference. I have other servers with root-raid1 that haven't needed
> that, and mkinitrd is smart enough (reads /etc/raidtab) to know that
> raid1 is needed and loads the raid1 module in the initrd linuxrc script.
>
* Re: confused raid1
2005-08-15 16:12 ` Jon Lewis
2005-08-15 16:41 ` Tyler
@ 2005-08-15 22:57 ` Neil Brown
1 sibling, 0 replies; 9+ messages in thread
From: Neil Brown @ 2005-08-15 22:57 UTC (permalink / raw)
To: Jon Lewis; +Cc: linux-raid
On Monday August 15, jlewis@lewis.org wrote:
>
> It's quite probable, that before the following reboot, md1 was hdc1
> and hde1.
>
> Aug 9 02:02:39 kernel: md: created md1
> Aug 9 02:02:39 kernel: md: bind<hdc1,1>
> Aug 9 02:02:39 kernel: md: bind<hde1,2>
> Aug 9 02:02:39 kernel: md: bind<hdg1,3>
> Aug 9 02:02:39 kernel: md: running: <hdg1><hde1><hdc1>
> Aug 9 02:02:39 kernel: md: hdg1's event counter: 000000b0
> Aug 9 02:02:39 kernel: md: hde1's event counter: 000000b4
> Aug 9 02:02:39 kernel: md: hdc1's event counter: 000000b4
> Aug 9 02:02:39 kernel: md: superblock update time inconsistency -- using the most recent one
> Aug 9 02:02:39 kernel: md: freshest: hde1
> Aug 9 02:02:39 kernel: md: kicking non-fresh hdg1 from array!
> Aug 9 02:02:39 kernel: md: unbind<hdg1,2>
> Aug 9 02:02:39 kernel: md: export_rdev(hdg1)
> Aug 9 02:02:39 kernel: md: RAID level 1 does not need chunksize! Continuing anyway.
> Aug 9 02:02:39 kernel: kmod: failed to exec /sbin/modprobe -s -k md-personality-3, errno = 2
> Aug 9 02:02:39 kernel: md: personality 3 is not loaded!
> Aug 9 02:02:39 kernel: md :do_md_run() returned -22
> Aug 9 02:02:39 kernel: md: md1 stopped.
> Aug 9 02:02:39 kernel: md: unbind<hde1,1>
> Aug 9 02:02:39 kernel: md: export_rdev(hde1)
> Aug 9 02:02:39 kernel: md: unbind<hdc1,0>
> Aug 9 02:02:39 kernel: md: export_rdev(hdc1)
> Aug 9 02:02:39 kernel: md: ... autorun DONE.
So md-personality-3 doesn't get loaded, and the array doesn't get
started at all. In other words, the 'fd' partition type is not having
any useful effect.
So how does the array get started?
Are there other messages about md later in the kernel logs that talk
about md1?
My guess is that 'raidstart' is being used to start the array
somewhere along the line. 'raidstart' does not start raid arrays
reliably. Don't use it. Remove it from your system. It is unsafe.
If you cannot get the raid1 module to be loaded properly, make sure
that 'mdadm' is being used to assemble the array. It has a much
better chance of getting it right.
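For example, with the UUID from the -E output earlier in the thread,
an /etc/mdadm.conf like the sketch below would let
`mdadm --assemble --scan` find the members by UUID instead of relying
on autodetect order (device list assumed from the raidtab):

```
DEVICE /dev/hdc1 /dev/hde1 /dev/hdg1
ARRAY /dev/md1 UUID=8b65fa52:21176cc9:cbb74149:c418b5a4
```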
NeilBrown