linux-raid.vger.kernel.org archive mirror
* Re: RAID5 causing lockups
@ 2003-06-26  3:54 Corey McGuire
  2003-06-26 11:46 ` Mike Black
  2003-06-26 13:32 ` Matthew Mitchell
  0 siblings, 2 replies; 15+ messages in thread
From: Corey McGuire @ 2003-06-26  3:54 UTC (permalink / raw)
  To: linux-raid

Well, two of my drives did have an older BIOS, but the upgrade changed nothing.

I noticed that even with the array unmounted, as long as I don't raidstop the device, the system still crashes.

I tried stripping my BIOS settings down as much as possible, and I am looking to do the same with the kernel, 2.4.21.  I'll try the magic SysRq key, but I can't find my null modem cable to save my life, so I'll have to borrow one from work.
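
Rough plan for when I get the cable, assuming CONFIG_MAGIC_SYSRQ is compiled in and the serial port is ttyS0 (untested, so take it with a grain of salt):

  # enable the magic SysRq key
  echo 1 > /proc/sys/kernel/sysrq

  # add to the kernel's append line in lilo.conf (or however your
  # bootloader passes options), then rerun lilo, so any oops goes out
  # the serial port as well as the VGA console:
  #   append="console=ttyS0,9600n8 console=tty0"

  # on the machine at the other end of the null modem cable:
  stty -F /dev/ttyS0 9600
  cat /dev/ttyS0 > lockup.log

Then Alt+SysRq+t should dump the task list and Alt+SysRq+p the registers even when the box is otherwise wedged.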

My server marches on, but without /dev/md2... I'll try just letting it sit, /dev/md2 intact, overnight, but for now I need it up, even if it is only in fits and starts.

Thanks everyone, keep the ideas rolling in.

<sigh>

>Hey folks,
>
>I just upgraded my system from a ~200GB mirror to a ~1TB RAID5, but the
>transition has not gone smoothly.
>
>I really don't know how to debug this issue, though I have tried.  I gave
>up this morning before work, but I was going to try the magic SysRq key next
>(something I don't really know how to use, but anything for a clue)
>followed by upgrading to 2.4.21.
>
>The lockup is typical of a system with a failing drive; the system is
>responsive to input, but nothing happens.  Keyboard works fine, but
>programs become idle (not really crashing.)  I tried keeping "top" up,
>hoping I would see something obvious, like raid5syncd doing something
>strange, but if it does, top doesn't update after the problem.
>
>The lockups happen even if the system is doing nothing (other than
>raid5syncd, which is awfully busy since my RAID won't stay up)
>
>If I unmount the RAID5 and RAIDSTOP it, my system will work fine, but I'm
>out 1TB of disk.  Right now, I have it running the bare essentials (all
>services on, but my /home directory has only public_html and mail stuff for
>each user.)
>
>Anything I can do to get more information out of this problem?  I don't
>really know where to look.
>
>
>System Info
>=======================================================================
>
>My kernel is 2.4.20, my RAID tools are raidtools-0.90, no patches on
>anything, home-built distro (Linux From Scratch).  Had been running on a
>mirror for nearly a year.
>
>Each drive on my system is connected to a Promise UltraATA 100 controller.
>I have 6 drives and 3 controllers.  Each drive is a 200GB WD drive, set to
>"Single/Master" on its channel.
>
>No device has a slave.
>
>Drives are hda hdc hde hdg hdi hdk
>
>------- Each drive is configured exactly like the device below -------
>
>Disk /dev/hda: 255 heads, 63 sectors, 24321 cylinders
>Units = cylinders of 16065 * 512 bytes
>
>   Device Boot    Start       End    Blocks   Id  System
>/dev/hda1             1       319   2562336   fd  Linux raid autodetect
>/dev/hda2           320       352    265072+  82  Linux swap
>/dev/hda3           353     24321 192530992+  fd  Linux raid autodetect
>
>------------------------- Here is my raidtab -------------------------
>
>raiddev /dev/md0
>   raid-level            1
>   chunk-size           32
>   nr-raid-disks         2
>   nr-spare-disks        0
>   persistent-superblock 1
>   device        /dev/hda1
>   raid-disk             0
>   device        /dev/hdc1
>   raid-disk             1
>
>raiddev /dev/md1
>   raid-level            1
>   chunk-size           32
>   nr-raid-disks         2
>   nr-spare-disks        0
>   persistent-superblock 1
>   device        /dev/hde1
>   raid-disk             0
>   device        /dev/hdg1
>   raid-disk             1
>
>raiddev /dev/md2
>   raid-level            5
>   chunk-size           32
>   nr-raid-disks         6
>   nr-spare-disks        0
>   persistent-superblock 1
>   device        /dev/hda3
>   raid-disk             0
>   device        /dev/hdc3
>   raid-disk             1
>   device        /dev/hde3
>   raid-disk             2
>   device        /dev/hdg3
>   raid-disk             3
>   device        /dev/hdi3
>   raid-disk             4
>   device        /dev/hdk3
>   raid-disk             5
>
>raiddev /dev/md3
>   raid-level            1
>   chunk-size           32
>   nr-raid-disks         2
>   nr-spare-disks        0
>   persistent-superblock 1
>   device        /dev/hdi1
>   raid-disk             0
>   device        /dev/hdk1
>   raid-disk             1
>
>-------------------------- Here is my fstab --------------------------
>
># Begin /etc/fstab
>
># filesystem   mount-point     fs-type    options           dump fsck-order
>
>/dev/md0       /               reiserfs   defaults             1 1
>/dev/md1       /mnt/backup     reiserfs   noauto,defaults      1 3
>/dev/md2       /home           reiserfs   defaults             1 2
>/dev/hda2      swap            swap       pri=42               0 0
>/dev/hdc2      swap            swap       pri=42               0 0
>/dev/hde2      swap            swap       pri=42               0 0
>/dev/hdg2      swap            swap       pri=42               0 0
>/dev/hdi2      swap            swap       pri=42               0 0
>/dev/hdk2      swap            swap       pri=42               0 0
>proc           /proc           proc       defaults             0 0
>
># End /etc/fstab
>
>=======================================================================
>
>Let me know if I missed anything (probably lots.)
>
>Thanks for your time.
>
>
>/\/\/\/\/\/\ Nothing is foolproof to a talented fool. /\/\/\/\/\/\
>
>coreyfro@coreyfro.com
>http://www.coreyfro.com/
>http://stats.distributed.net/rc5-64/psummary.php3?id=196879
>ICQ : 3168059
>
>-----BEGIN GEEK CODE BLOCK-----
>GCS d--(+) s: a-- C++++$ UBL++>++++ P+ L+ E W+++$ N+ o? K? w++++$>+++++$
>O---- !M--- V- PS+++ PE++(--) Y+ PGP- t--- 5(+) !X- R(+) !tv b-(+)
>Dl++(++++) D++ G+ e>+++ h++(---) r++>+$ y++*>$ H++++ n---(----) p? !au w+
>v- 3+>++ j- G'''' B--- u+++*** f* Quake++++>+++++$
>------END GEEK CODE BLOCK------
>
>Home of Geek Code - http://www.geekcode.com/
>The Geek Code Decoder Page - http://www.ebb.org/ungeek//
>


/\/\/\/\/\/\ Nothing is foolproof to a talented fool. /\/\/\/\/\/\

coreyfro@coreyfro.com
http://www.coreyfro.com/
http://stats.distributed.net/rc5-64/psearch.php3?st=coreyfro
ICQ : 3168059

-----BEGIN GEEK CODE BLOCK-----
GCS !d--(+) s: a- C++++$ UL++>++++ P+ L++>++++ E- W+++$ N++ o? K? w++++$>+++++$ O---- !M--- V- PS+++ PE++(--) Y+ PGP- t--- 5(+) !X- R(+) !tv b-(+) Dl++(++++) D++ G++(-) e>+++ h++(---) r++>+$ y++**>$ H++++ n---(----) p? !au w+ v- 3+>++ j- G'''' B--- u+++*** f* Quake++++>+++++$
------END GEEK CODE BLOCK------

Home of Geek Code - http://www.geekcode.com/
The Geek Code Decoder Page - http://www.ebb.org/ungeek//


* re: RAID5 causing lockups
@ 2003-06-26 17:34 Corey McGuire
  2003-06-27  5:02 ` Corey McGuire
  0 siblings, 1 reply; 15+ messages in thread
From: Corey McGuire @ 2003-06-26 17:34 UTC (permalink / raw)
  To: linux-raid


Much progress has been made, but success is still out of reach.

First of all, 2.4.21 has been very helpful.  Feedback regarding drive
problems is much more verbose.  I don't know whom to blame, the RAID people,
the ATA people, or the Promise driver people, but immediately I found that
one of my controllers was hosing up the works.  I moved the devices from
that controller to my VIA onboard controller and gained about 5 MB/second
on the rebuild speed.  I don't know if this is because 2.4.21 is faster,
VIA is faster, I was saturating my PCI bus (since the VIA controller is on
the Southbridge), or because I was previously getting these errors with no
feedback.

Alas, the problem persists, but I have found out why (90% certain).

Now when there is a crash, the system spits out why and panics.  It looks
to be HDA (or HDA is getting the blame) and, thanks to a seemingly
pointless script I wrote to watch the rebuild, I found that the system dies
at around 12.5% on the RAID5 rebuild every time.

Bad disk?  Maybe, probably, but I'll keep banging my head against it for a
while.
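
Before I start yanking hardware, I figure I can poke at hda directly; the resync counter is in 1K blocks, so 12.5% is roughly 24 million blocks into hda3.  Something like this (with md2 stopped first) should read just the suspect stretch, or the whole partition the slow way:

  raidstop /dev/md2

  # read ~2GB starting a little before where the resync dies
  dd if=/dev/hda3 bs=1k skip=23500000 count=2000000 of=/dev/null

  # or a read-only scan of the whole partition
  badblocks -sv /dev/hda3

  # and, if smartmontools happens to be installed, ask the drive itself
  smartctl -a /dev/hda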

Score,
2.4.21 + progress script    1
2.4.20 + crossing fingers   0

I am currently running a kernel with DMA turned off by default.  This
sounded like a good idea last night, around 4 in the morning, but now it
sounds like an exercise in futility.  The idea came to me shortly after I
was visited by the bovine fairy.  She told me that everything can be fixed
with "moon pies."  I know this apparition was real and not a hallucination
because, until last night, I had never heard of "moon pies."  After a quick
search of Google, sure enough, moon pies; they look tasty, maybe she's
right.
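
In hindsight I probably didn't need to rebuild the kernel at all.  If I remember my hdparm correctly, DMA can be toggled per drive at runtime, something like:

  # turn DMA off for just the suspect drive
  hdparm -d0 /dev/hda

  # check what it is actually using
  hdparm -d -i /dev/hda

  # and back on again afterwards
  hdparm -d1 /dev/hda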

Score
Bovine fairies              1
Sleep deprivation           0

At any rate, by my calculations, without DMA it will take another 12 hours
to get to the 12.5% fail point.  I should be back from work by then.
Longevity through sloth.

To answer some questions,

My power situation is good.  I have had a lot more juice getting sucked
through this power supply before.  It used to feed dual P3s with 30mm
Peltiers and three 10,000 RPM Cheetahs.  (Peltiers are not worth it; I had
to underclock my system and drop the voltage before it would run any
cooler.)  I think these WDs draw 20 watts peak, 14 otherwise.  My power
supply is ~400 watts.  It shouldn't be a problem, seeing as I can run my
mirrors just fine for days but die within minutes of turning my stripe on.

Building smaller RAIDs: yeah, I will give that a whirl, just to make sure
hda is the problem.  I don't think I need to yank hda; I'll just remove it
from my raidtab and mkraid again.
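
If I have the raidtab syntax right, that should just mean renumbering the survivors in the md2 stanza, something like:

raiddev /dev/md2
   raid-level            5
   chunk-size           32
   nr-raid-disks         5
   nr-spare-disks        0
   persistent-superblock 1
   device        /dev/hdc3
   raid-disk             0
   device        /dev/hde3
   raid-disk             1
   device        /dev/hdg3
   raid-disk             2
   device        /dev/hdi3
   raid-disk             3
   device        /dev/hdk3
   raid-disk             4

Then mkraid /dev/md2 (which will probably insist on --really-force, since the old superblocks are still there) and mkreiserfs again, since the data on md2 is toast either way.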

One point I'd like to make: why is a drive failure killing my RAID5?  Kinda
defeats the purpose.

Here is the aforementioned script plus its results so you can see what I
see.

4tlods.sh (for the love of dog, sync!  I said I was sleep deprived.)

while ((1)) ; do  top -n 1 | head -n 20 ; echo ; cat /proc/mdstat ; done
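
And for anyone who wants the less sleep-deprived version, something along these lines (untested; the log name is made up) should leave a trail on /, which lives on md0 rather than md2, and sync it so the last few lines survive the panic:

#!/bin/sh
# watch the rebuild and keep the evidence somewhere off md2
LOG=/var/log/mdwatch.log
while true ; do
    date >> $LOG
    cat /proc/mdstat >> $LOG
    top -b -n 1 | head -n 20 >> $LOG
    echo >> $LOG
    sync
    sleep 10
done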

2.4.21

12:12am  up 19 min,  5 users,  load average: 0.87, 1.06, 0.82
49 processes: 48 sleeping, 1 running, 0 zombie, 0 stopped
CPU states:  1.0% user, 52.5% system,  0.0% nice, 46.3% idle
Mem:   516592K av,   95204K used,  421388K free,       0K shrd,   52588K buff
Swap: 1590384K av,       0K used, 1590384K free                   17196K cached

  PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
    1 root       9   0   504  504   440 S     0.0  0.0   0:06 init
    2 root       9   0     0    0     0 SW    0.0  0.0   0:00 keventd
    3 root      19  19     0    0     0 SWN   0.0  0.0   0:00 ksoftirqd_CPU0
    4 root       9   0     0    0     0 SW    0.0  0.0   0:00 kswapd
    5 root       9   0     0    0     0 SW    0.0  0.0   0:00 bdflush
    6 root       9   0     0    0     0 SW    0.0  0.0   0:00 kupdated
    7 root      -1 -20     0    0     0 SW<   0.0  0.0   0:00 mdrecoveryd
    8 root       7 -20     0    0     0 SW<   0.0  0.0   6:32 raid5d
    9 root      19  19     0    0     0 DWN   0.0  0.0   1:08 raid5syncd
   10 root      -1 -20     0    0     0 SW<   0.0  0.0   0:00 raid1d
   11 root      -1 -20     0    0     0 SW<   0.0  0.0   0:00 raid1d
   12 root      -1 -20     0    0     0 SW<   0.0  0.0   0:00 raid1d
   13 root       9   0     0    0     0 SW    0.0  0.0   0:00 kreiserfsd

Personalities : [raid1] [raid5]
read_ahead 1024 sectors
md0 : active raid1 hdc1[1] hda1[0]
      2562240 blocks [2/2] [UU]

md1 : active raid1 hdg1[1] hde1[0]
      2562240 blocks [2/2] [UU]

md3 : active raid1 hdk1[1] hdi1[0]
      2562240 blocks [2/2] [UU]

md2 : active raid5 hdk3[5] hdi3[4] hdg3[3] hde3[2] hdc3[1] hda3[0]
      962654400 blocks level 5, 32k chunk, algorithm 0 [6/6] [UUUUUU]
      [==>..................]  resync = 12.5% (24153592/192530880) finish=134.7min speed=20822K/sec
unused devices: <none>


2.4.21

2:38am  up 19 min,  1 user,  load average: 0.63, 1.13, 0.89
42 processes: 41 sleeping, 1 running, 0 zombie, 0 stopped
CPU states:  0.9% user, 52.1% system,  0.0% nice, 46.8% idle
Mem:   516592K av,   89824K used,  426768K free,       0K shrd,   57908K buff
Swap:       0K av,       0K used,       0K free                   10644K cached

  PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
    1 root       8   0   504  504   440 S     0.0  0.0   0:06 init
    2 root       9   0     0    0     0 SW    0.0  0.0   0:00 keventd
    3 root      19  19     0    0     0 SWN   0.0  0.0   0:00 ksoftirqd_CPU0
    4 root       9   0     0    0     0 SW    0.0  0.0   0:00 kswapd
    5 root       9   0     0    0     0 SW    0.0  0.0   0:00 bdflush
    6 root       9   0     0    0     0 SW    0.0  0.0   0:00 kupdated
    7 root      -1 -20     0    0     0 SW<   0.0  0.0   0:00 mdrecoveryd
    8 root      15 -20     0    0     0 SW<   0.0  0.0   6:29 raid5d
    9 root      19  19     0    0     0 DWN   0.0  0.0   1:09 raid5syncd
   14 root      -1 -20     0    0     0 SW<   0.0  0.0   0:00 raid1d
   15 root      -1 -20     0    0     0 SW<   0.0  0.0   0:00 raid1syncd
   16 root       9   0     0    0     0 SW    0.0  0.0   0:00 kreiserfsd
   74 root       9   0   616  616   512 S     0.0  0.1   0:00 syslogd

Personalities : [raid1] [raid5]
read_ahead 1024 sectors
md0 : active raid1 hdc1[1] hda1[0]
      2562240 blocks [2/2] [UU]
        resync=DELAYED
md2 : active raid5 hdk3[5] hdi3[4] hdg3[3] hde3[2] hdc3[1] hda3[0]
      962654400 blocks level 5, 32k chunk, algorithm 0 [6/6] [UUUUUU]
      [==>..................]  resync = 12.5% (24153596/192530880) finish=139.2min speed=20147K/sec
unused devices: <none>


2.4.20

3:22am  up 21 min,  1 user,  load average: 1.04, 1.31, 1.02
47 processes: 46 sleeping, 1 running, 0 zombie, 0 stopped
CPU states:  0.9% user, 54.7% system,  0.0% nice, 44.2% idle
Mem:   516604K av,  125824K used,  390780K free,       0K shrd,   91628K buff
Swap: 1590384K av,       0K used, 1590384K free                   10796K cached

  PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
    1 root       9   0   504  504   440 S     0.0  0.0   0:10 init
    2 root       9   0     0    0     0 SW    0.0  0.0   0:00 keventd
    3 root       9   0     0    0     0 SW    0.0  0.0   0:00 kapmd
    4 root      18  19     0    0     0 SWN   0.0  0.0   0:00 ksoftirqd_CPU0
    5 root       9   0     0    0     0 SW    0.0  0.0   0:00 kswapd
    6 root       9   0     0    0     0 SW    0.0  0.0   0:00 bdflush
    7 root       9   0     0    0     0 SW    0.0  0.0   0:00 kupdated
    8 root      -1 -20     0    0     0 SW<   0.0  0.0   0:00 mdrecoveryd
    9 root       4 -20     0    0     0 SW<   0.0  0.0   7:16 raid5d
   10 root      19  19     0    0     0 DWN   0.0  0.0   1:07 raid5syncd
   11 root      -1 -20     0    0     0 SW<   0.0  0.0   0:00 raid1d
   12 root      -1 -20     0    0     0 SW<   0.0  0.0   0:00 raid1syncd
   13 root      -1 -20     0    0     0 SW<   0.0  0.0   0:00 raid1d

Personalities : [raid1] [raid5] [multipath]
read_ahead 1024 sectors
md0 : active raid1 hdc1[1] hda1[0]
      2562240 blocks [2/2] [UU]
        resync=DELAYED
md1 : active raid1 hdg1[1] hde1[0]
      2562240 blocks [2/2] [UU]
        resync=DELAYED
md3 : active raid1 hdk1[1] hdi1[0]
      2562240 blocks [2/2] [UU]
        resync=DELAYED
md2 : active raid5 hdk3[5] hdi3[4] hdg3[3] hde3[2] hdc3[1] hda3[0]
      962654400 blocks level 5, 32k chunk, algorithm 0 [6/6] [UUUUUU]
      [==>..................]  resync = 12.5% (24155416/192530880) finish=181.1min speed=15487K/sec
unused devices: <none>


Thanks for your help, everyone.  I'll keep trying.


/\/\/\/\/\/\ Nothing is foolproof to a talented fool. /\/\/\/\/\/\

coreyfro@coreyfro.com
http://www.coreyfro.com/
http://stats.distributed.net/rc5-64/psummary.php3?id=196879
ICQ : 3168059

-----BEGIN GEEK CODE BLOCK-----
GCS d--(+) s: a-- C++++$ UBL++>++++ P+ L+ E W+++$ N+ o? K? w++++$>+++++$
O---- !M--- V- PS+++ PE++(--) Y+ PGP- t--- 5(+) !X- R(+) !tv b-(+)
Dl++(++++) D++ G+ e>+++ h++(---) r++>+$ y++*>$ H++++ n---(----) p? !au w+
v- 3+>++ j- G'''' B--- u+++*** f* Quake++++>+++++$
------END GEEK CODE BLOCK------

Home of Geek Code - http://www.geekcode.com/
The Geek Code Decoder Page - http://www.ebb.org/ungeek//


* RAID5 causing lockups
@ 2003-06-25 19:16 Corey McGuire
  2003-06-25 19:28 ` Mike Dresser
  2003-06-25 20:36 ` Matt Simonsen
  0 siblings, 2 replies; 15+ messages in thread
From: Corey McGuire @ 2003-06-25 19:16 UTC (permalink / raw)
  To: alewman, bort, corvus, kratz.franz, blatt.guy, linux-raid,
	mario.scalise, harbeck.seth, phil

Hey folks,

I just upgraded my system from a ~200GB mirror to a ~1TB RAID5, but the
transition has not gone smoothly.

I really don't know how to debug this issue, though I have tried.  I gave
up this morning before work, but I was going to try the magic SysRq key next
(something I don't really know how to use, but anything for a clue)
followed by upgrading to 2.4.21.

The lockup is typical of a system with a failing drive; the system is
responsive to input, but nothing happens.  Keyboard works fine, but
programs become idle (not really crashing.)  I tried keeping "top" up,
hoping I would see something obvious, like raid5syncd doing something
strange, but if it does, top doesn't update after the problem.

The lockups happen even if the system is doing nothing (other than
raid5syncd, which is awfully busy since my RAID won't stay up)

If I unmount the RAID5 and RAIDSTOP it, my system will work fine, but I'm
out 1TB of disk.  Right now, I have it running the bare essentials (all
services on, but my /home directory has only public_html and mail stuff for
each user.)

Anything I can do to get more information out of this problem?  I don't
really know where to look.
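
The only idea I've had so far is to ship the kernel messages off the box before it wedges.  If I'm reading the sysklogd docs right, a line like this in /etc/syslog.conf would do it ('loghost' being whatever other machine I point it at, which needs syslogd running with -r to accept remote messages):

  # /etc/syslog.conf on the RAID box
  kern.*          @loghost

  # then restart syslogd, e.g.
  kill -HUP `cat /var/run/syslogd.pid`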


System Info
=======================================================================

My kernel is 2.4.20, my RAID tools are raidtools-0.90, no patches on
anything, home-built distro (Linux From Scratch).  Had been running on a
mirror for nearly a year.

Each drive on my system is connected to a Promise UltraATA 100 controller.
I have 6 drives and 3 controllers.  Each drive is a 200GB WD drive, set to
"Single/Master" on its channel.

No device has a slave.

Drives are hda hdc hde hdg hdi hdk

------- Each drive is configured exactly like the device below -------

Disk /dev/hda: 255 heads, 63 sectors, 24321 cylinders
Units = cylinders of 16065 * 512 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/hda1             1       319   2562336   fd  Linux raid autodetect
/dev/hda2           320       352    265072+  82  Linux swap
/dev/hda3           353     24321 192530992+  fd  Linux raid autodetect

------------------------- Here is my raidtab -------------------------

raiddev /dev/md0
   raid-level            1
   chunk-size           32
   nr-raid-disks         2
   nr-spare-disks        0
   persistent-superblock 1
   device        /dev/hda1
   raid-disk             0
   device        /dev/hdc1
   raid-disk             1

raiddev /dev/md1
   raid-level            1
   chunk-size           32
   nr-raid-disks         2
   nr-spare-disks        0
   persistent-superblock 1
   device        /dev/hde1
   raid-disk             0
   device        /dev/hdg1
   raid-disk             1

raiddev /dev/md2
   raid-level            5
   chunk-size           32
   nr-raid-disks         6
   nr-spare-disks        0
   persistent-superblock 1
   device        /dev/hda3
   raid-disk             0
   device        /dev/hdc3
   raid-disk             1
   device        /dev/hde3
   raid-disk             2
   device        /dev/hdg3
   raid-disk             3
   device        /dev/hdi3
   raid-disk             4
   device        /dev/hdk3
   raid-disk             5

raiddev /dev/md3
   raid-level            1
   chunk-size           32
   nr-raid-disks         2
   nr-spare-disks        0
   persistent-superblock 1
   device        /dev/hdi1
   raid-disk             0
   device        /dev/hdk1
   raid-disk             1

-------------------------- Here is my fstab --------------------------

# Begin /etc/fstab

# filesystem   mount-point     fs-type    options           dump fsck-order

/dev/md0       /               reiserfs   defaults             1 1
/dev/md1       /mnt/backup     reiserfs   noauto,defaults      1 3
/dev/md2       /home           reiserfs   defaults             1 2
/dev/hda2      swap            swap       pri=42               0 0
/dev/hdc2      swap            swap       pri=42               0 0
/dev/hde2      swap            swap       pri=42               0 0
/dev/hdg2      swap            swap       pri=42               0 0
/dev/hdi2      swap            swap       pri=42               0 0
/dev/hdk2      swap            swap       pri=42               0 0
proc           /proc           proc       defaults             0 0

# End /etc/fstab

=======================================================================

Let me know if I missed anything (probably lots.)

Thanks for your time.


/\/\/\/\/\/\ Nothing is foolproof to a talented fool. /\/\/\/\/\/\

coreyfro@coreyfro.com
http://www.coreyfro.com/
http://stats.distributed.net/rc5-64/psummary.php3?id=196879
ICQ : 3168059

-----BEGIN GEEK CODE BLOCK-----
GCS d--(+) s: a-- C++++$ UBL++>++++ P+ L+ E W+++$ N+ o? K? w++++$>+++++$
O---- !M--- V- PS+++ PE++(--) Y+ PGP- t--- 5(+) !X- R(+) !tv b-(+)
Dl++(++++) D++ G+ e>+++ h++(---) r++>+$ y++*>$ H++++ n---(----) p? !au w+
v- 3+>++ j- G'''' B--- u+++*** f* Quake++++>+++++$
------END GEEK CODE BLOCK------

Home of Geek Code - http://www.geekcode.com/
The Geek Code Decoder Page - http://www.ebb.org/ungeek//



end of thread

Thread overview: 15+ messages
2003-06-26  3:54 RAID5 causing lockups Corey McGuire
2003-06-26 11:46 ` Mike Black
2003-06-26 13:32 ` Matthew Mitchell
  -- strict thread matches above, loose matches on Subject: below --
2003-06-26 17:34 Corey McGuire
2003-06-27  5:02 ` Corey McGuire
2003-06-27  5:32   ` Mike Dresser
2003-06-27  5:47     ` Corey McGuire
2003-06-25 19:16 Corey McGuire
2003-06-25 19:28 ` Mike Dresser
2003-06-25 19:41   ` Corey McGuire
2003-06-25 19:56     ` Mike Dresser
2003-06-25 20:51       ` Corey McGuire
2003-06-25 20:36 ` Matt Simonsen
2003-06-25 20:56   ` Corey McGuire
     [not found]     ` <1056575536.24919.101.camel@mattswrk>
2003-06-25 21:16       ` Corey McGuire
