linux-raid.vger.kernel.org archive mirror
* RAID 5 performance problems
@ 2003-04-03 15:45 Jonathan Vardy
  2003-04-03 18:05 ` Peter L. Ashford
  2003-04-03 21:06 ` Felipe Alfaro Solana
  0 siblings, 2 replies; 26+ messages in thread
From: Jonathan Vardy @ 2003-04-03 15:45 UTC (permalink / raw)
  To: linux-raid, linux-kernel

Hi,

I'm having trouble getting the right performance out of my software
RAID 5 system. I've installed Red Hat 9.0 with kernel 2.4.20, compiled
by myself to match my hardware (I had the same problem with the default
kernel). When I test the RAID device's speed using 'hdparm -Tt /dev/hdx'
I get this:

/dev/md0:
Timing buffer-cache reads:   128 MB in  1.14 seconds =112.28 MB/sec
Timing buffered disk reads:  64 MB in  2.39 seconds = 26.78 MB/sec

Using Bonnie++ I get similar speeds (complete info further below):
Write 13668 K/sec, CPU load = 35%
Read  29752 K/sec, CPU load = 45%
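
(For reference, a rough sketch of how numbers like these can be
reproduced; the mount point and the sizes are only placeholders:)

    # raw sequential read straight off the array, and off one member for comparison
    time dd if=/dev/md0 of=/dev/null bs=1024k count=512
    time dd if=/dev/hde of=/dev/null bs=1024k count=512

    # Bonnie++ run along these lines (size in MB, test directory on the array)
    bonnie++ -d /mnt/raid -s 1024 -u root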

This is really low for a RAID system and I can't figure out what is
causing it. I use rounded cables, but I also tried the original Promise
cables and there was no difference in performance. I've set the hdparm
parameters so that the hard drives are optimised ('hdparm -a8 -A1 -c1
-d1 -m16 -u1 /dev/hdc'), which results in these settings:

 multcount    = 16 (on)
 IO_support   =  1 (32-bit)
 unmaskirq    =  1 (on)
 using_dma    =  1 (on)
 keepsettings =  0 (off)
 readonly     =  0 (off)
 readahead    =  8 (on)
 geometry     = 14593/255/63, sectors = 234441648, start = 0
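
(The same settings can be applied to all five members in one go; a
minimal sketch, with the drive names from my setup:)

    # tune every RAID member identically: readahead, 32-bit I/O, DMA, multcount, unmaskirq
    for d in /dev/hdc /dev/hde /dev/hdg /dev/hdi /dev/hdk; do
        hdparm -a8 -A1 -c1 -d1 -m16 -u1 $d
    done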

But the RAID still performs slowly. Does anybody have an idea how I
could improve the performance? I've seen RAID systems like my own
performing much better (speeds of around 80MB/sec). I've added as many
specs as I could find below so you can see what my configuration is.

cat /proc/mdstat gives: 

Personalities : [raid0] [raid5] 
read_ahead 1024 sectors
md0 : active raid5 hdk1[4] hdi1[3] hdg1[2] hde1[1] hdc1[0]
      468872704 blocks level 5, 128k chunk, algorithm 0 [5/5] [_UUUU]
unused devices: <none>

The hardware in the machine is as follows:

Mainboard   : Asus P2B-D dual PII slot 1 with latest BIOS update
Processors  : 2x Intel PII 350MHz
Memory      : 512MB SDRAM 100MHz ECC
Controller  : Promise FastTrak100 TX4 with latest firmware update
Boot HD     : Maxtor 20GB 5400rpm
RAID HDs    : 5x WDC WD1200BB 120GB 7200rpm ATA100

Four of the RAID HDs are connected to the Promise controller and the
fifth is master on the second onboard IDE channel (UDMA2/ATA33).

Here are the speeds found using 'hdparm -Tt /dev/hdx'

hda (boot on onboard ATA33)
    Timing buffer-cache reads:   128 MB in  1.14 seconds =112.28 MB/sec
    Timing buffered disk reads:  64 MB in  4.33 seconds = 14.78 MB/sec

hdc (raid on onboard ATA33)
    Timing buffer-cache reads:   128 MB in  1.14 seconds =112.28 MB/sec
    Timing buffered disk reads:  64 MB in  4.56 seconds = 14.04 MB/sec

hde (raid on Promise)
    Timing buffer-cache reads:   128 MB in  1.14 seconds =112.28 MB/sec
    Timing buffered disk reads:  64 MB in  2.42 seconds = 26.45 MB/sec

hdg (raid on Promise)
    Timing buffer-cache reads:   128 MB in  1.14 seconds =112.28 MB/sec
    Timing buffered disk reads:  64 MB in  2.43 seconds = 26.34 MB/sec

hdi (raid on Promise)
    Timing buffer-cache reads:   128 MB in  1.14 seconds =112.28 MB/sec
    Timing buffered disk reads:  64 MB in  2.41 seconds = 26.56 MB/sec

hdk (raid on Promise)
    Timing buffer-cache reads:   128 MB in  1.14 seconds =112.28 MB/sec
    Timing buffered disk reads:  64 MB in  2.42 seconds = 26.45 MB/sec

As you can see, /dev/hdc is on the onboard IDE channel. To be certain
that this was not the bottleneck, I removed it from the RAID so that it
runs in degraded mode. This did not change the performance much. The
rebuild speed is also very slow, at around 6MB/sec.
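
(On the rebuild speed: a sketch of how the md resync throttle can be
checked and raised, assuming the usual /proc/sys/dev/raid sysctls are
present on 2.4; values are in KB/sec:)

    cat /proc/sys/dev/raid/speed_limit_min    # guaranteed minimum resync rate
    cat /proc/sys/dev/raid/speed_limit_max    # ceiling the resync may use
    echo 10000 > /proc/sys/dev/raid/speed_limit_min   # e.g. ask for at least ~10MB/sec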

The drives originally came from a friend's file server, where they were
also used in a RAID configuration. I've compared my Bonnie++ results to
his:

My raid:

Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
store.datzegik.c 1G  3712  99 13668  37  6432  28  3948  96 29752  45 342.8   6
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16   318  91 +++++ +++ 13157 100   351  98 +++++ +++  1332  92
store.datzegik.com,1G,3712,99,13668,37,6432,28,3948,96,29752,45,342.8,6,16,318,91,+++++,+++,13157,100,351,98,+++++,+++,1332,92

His raid:

---raid0 5*120gb sw raid:

Version 1.02c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
storagenew.a2000 1G 19931  99 99258  73 61688  31 23784  98 178478  41 616.9   2
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  1966  96 +++++ +++ +++++ +++  2043  99 +++++ +++  5518  99
storagenew.a2000.nu,1G,19931,99,99258,73,61688,31,23784,98,178478,41,616.9,2,16,1966,96,+++++,+++,+++++,+++,2043,99,+++++,+++,5518,99


His machine has dual Pentium Xeon 2000MHz processors, but that shouldn't
be the reason the results are so different: my processors aren't at 100%
while testing. Even his older system, which had 80GB drives and dual
500MHz processors, is much faster:

---raid5 6*80gb sw raid:
Version 1.02c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
storage.a2000.nu 2G  7935  98 34886  35 15362  25  8073  97 71953  53 180.1   2
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16   711 100 +++++ +++ +++++ +++   710  99 +++++ +++  2856 100
storage.a2000.nu,2G,7935,98,34886,35,15362,25,8073,97,71953,53,180.1,2,16,711,100,+++++,+++,+++++,+++,710,99,+++++,+++,2856,100

Yours sincerely, Jonathan Vardy

^ permalink raw reply	[flat|nested] 26+ messages in thread
* Re: RAID 5 performance problems
@ 2003-04-03 19:13 Neil Schemenauer
  0 siblings, 0 replies; 26+ messages in thread
From: Neil Schemenauer @ 2003-04-03 19:13 UTC (permalink / raw)
  To: linux-kernel, linux-raid

Ross Vandegrift <ross@willow.seitz.com> wrote:
> Absolutely correct - you should *never* run IDE RAID on a channel that
> has both a master and slave.  When one disk on an IDE channel has an
> error, the whole channel is reset - this makes both disks inaccessible,
> and RAID5 now has two failed disks => your data is gone!  *ALWAYS* use
> separate IDE channels.

I think it's okay to use both channels if you use RAID0+1 (also
known as RAID10); just be sure to mirror across channels.  As a
bonus, RAID0+1 is significantly faster than RAID5.
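
A minimal sketch of that layout with mdadm (device names purely
illustrative), mirroring each pair across two channels and striping
over the mirrors:

    # say ide2 carries hde+hdf and ide3 carries hdg+hdh (master/slave pairs);
    # mirror across the channels so a channel reset only hits one half of each mirror
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hde1 /dev/hdg1
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/hdf1 /dev/hdh1
    # then stripe over the two mirrors
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/md1 /dev/md2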

  Neil

^ permalink raw reply	[flat|nested] 26+ messages in thread
* Re: RAID 5 performance problems
@ 2003-04-03 19:49 Andy Arvai
  2003-04-03 20:25 ` Mike Dresser
  2003-04-03 21:10 ` Jonathan Vardy
  0 siblings, 2 replies; 26+ messages in thread
From: Andy Arvai @ 2003-04-03 19:49 UTC (permalink / raw)
  To: linux-raid; +Cc: linux-kernel


Shouldn't /proc/mdstat have [UUUUU] instead of [_UUUU]? Perhaps
this is running in degraded mode. Also, you have 'algorithm 0',
whereas my raid5 has 'algorithm 2', which is the left-symmetric
parity algorithm.
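
(If it really is degraded, the missing member can be hot-added back and
left to resync; the parity algorithm is chosen when the array is
created, e.g. in /etc/raidtab. A rough sketch, assuming raidtools:)

    raidhotadd /dev/md0 /dev/hdc1     # re-add the missing member, then watch /proc/mdstat

    # /etc/raidtab fragment -- the algorithm is fixed at creation time
    raiddev /dev/md0
        raid-level           5
        nr-raid-disks        5
        chunk-size           128
        parity-algorithm     left-symmetric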

Andy 

> cat /proc/mdstat gives: 
> 
> Personalities : [raid0] [raid5] 
> read_ahead 1024 sectors
> md0 : active raid5 hdk1[4] hdi1[3] hdg1[2] hde1[1] hdc1[0]
> 	468872704 blocks level 5, 128k chunk, algorithm 0 [5/5] [_UUUU]
> unused devices: <none>




^ permalink raw reply	[flat|nested] 26+ messages in thread
* RE: RAID 5 performance problems
@ 2003-04-04 11:44 Jonathan Vardy
  2003-04-04 14:39 ` Ezra Nugroho
  0 siblings, 1 reply; 26+ messages in thread
From: Jonathan Vardy @ 2003-04-04 11:44 UTC (permalink / raw)
  To: Peter L. Ashford, Jonathan Vardy
  Cc: Stephan van Hienen, linux-raid, linux-kernel

> That's the one.  Your 120GB drives are being seen as UDMA-33.  Whatever
> is causing this is slowing you down.  Fix this, and the performance
> should improve.
> 
> > but after the boot I set hdparm manually for each drive with the
> > following settings:
> >
> > hdparm -a8 -A1 -c1 -d1 -m16 -u1 /dev/hdc.
> 
> According to your single-drive benchmarks, it didn't do the job.  You'll
> have to find the CAUSE of the UDMA-33 identification, and fix it.  An
> example (not necessarily your problem) is that if a 40-conductor cable
> is used, you CAN'T set the drive to UDMA-66/100/133.  There may also be
> some settings in the drive or controller, or some jumpers, that are
> keeping the drive from switching to the fast modes.
> 
> Once you get the drives being identified at a fast UDMA, you then need
> to again verify the array performance.  It should have climbed
> significantly.
> 
> Good luck.
> 				Peter Ashford
> 
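
(For reference, the mode each drive actually negotiated can be checked,
and in principle forced, along these lines; just a sketch, and forcing a
transfer mode with -X is not something to do casually:)

    hdparm -i /dev/hde | grep -i udma   # the active UDMA mode is marked with a '*'
    hdparm -X69 /dev/hde                # try to force UDMA mode 5 (64 + 5), use with care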

I've rebooted with the original Red Hat kernel (2.4.20) and it now
recognizes the drives correctly (UDMA100), but the performance is
still poor.

Here's the latest info:

Bootlog:

ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx
PIIX4: IDE controller at PCI slot 00:04.1
PIIX4: chipset revision 1
PIIX4: not 100% native mode: will probe irqs later
    ide0: BM-DMA at 0xd800-0xd807, BIOS settings: hda:DMA, hdb:pio
    ide1: BM-DMA at 0xd808-0xd80f, BIOS settings: hdc:pio, hdd:pio
PDC20270: IDE controller at PCI slot 02:01.0
PDC20270: chipset revision 2
PDC20270: not 100% native mode: will probe irqs later
    ide2: BM-DMA at 0x9040-0x9047, BIOS settings: hde:pio, hdf:pio
    ide3: BM-DMA at 0x9048-0x904f, BIOS settings: hdg:pio, hdh:pio
    ide4: BM-DMA at 0x90c0-0x90c7, BIOS settings: hdi:pio, hdj:pio
    ide5: BM-DMA at 0x90c8-0x90cf, BIOS settings: hdk:pio, hdl:pio
hda: Maxtor 2B020H1, ATA DISK drive
blk: queue c0453420, I/O limit 4095Mb (mask 0xffffffff)
hdc: WDC WD1200BB-00CAA1, ATA DISK drive
blk: queue c04538a0, I/O limit 4095Mb (mask 0xffffffff)
hde: WDC WD1200BB-60CJA1, ATA DISK drive
blk: queue c0453d20, I/O limit 4095Mb (mask 0xffffffff)
hdg: WDC WD1200BB-60CJA1, ATA DISK drive
blk: queue c04541a0, I/O limit 4095Mb (mask 0xffffffff)
hdi: WDC WD1200BB-60CJA1, ATA DISK drive
blk: queue c0454620, I/O limit 4095Mb (mask 0xffffffff)
hdk: WDC WD1200BB-60CJA1, ATA DISK drive
blk: queue c0454aa0, I/O limit 4095Mb (mask 0xffffffff)
ide0 at 0x1f0-0x1f7,0x3f6 on irq 14
ide1 at 0x170-0x177,0x376 on irq 15
ide2 at 0x9000-0x9007,0x9012 on irq 19
ide3 at 0x9020-0x9027,0x9032 on irq 19
ide4 at 0x9080-0x9087,0x9092 on irq 19
ide5 at 0x90a0-0x90a7,0x90b2 on irq 19
hda: host protected area => 1
hda: 39062500 sectors (20000 MB) w/2048KiB Cache, CHS=2431/255/63, UDMA(33)
hdc: host protected area => 1
hdc: 234441648 sectors (120034 MB) w/2048KiB Cache, CHS=232581/16/63, UDMA(33)
hde: host protected area => 1
hde: 234441648 sectors (120034 MB) w/2048KiB Cache, CHS=232581/16/63, UDMA(100)
hdg: host protected area => 1
hdg: 234441648 sectors (120034 MB) w/2048KiB Cache, CHS=232581/16/63, UDMA(100)
hdi: host protected area => 1
hdi: 234441648 sectors (120034 MB) w/2048KiB Cache, CHS=232581/16/63, UDMA(100)
hdk: host protected area => 1
hdk: 234441648 sectors (120034 MB) w/2048KiB Cache, CHS=232581/16/63, UDMA(100)

The Bonnie++ results are as follows:

Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
store.datzegik.c 1G  2717  99 10837  30  4912  10  2747  95 17228  14 114.5   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16   367  98 +++++ +++ 13163  99   368  98 +++++ +++  1386  93
store.datzegik.com,1G,2717,99,10837,30,4912,10,2747,95,17228,14,114.5,1,16,367,98,+++++,+++,13163,99,368,98,+++++,+++,1386,93

The results per drive with hdparm:

/dev/hda:
    Timing buffer-cache reads:   128 MB in  1.14 seconds =112.28 MB/sec
    Timing buffered disk reads:  64 MB in  4.32 seconds = 14.81 MB/sec

/dev/hdc:
    Timing buffer-cache reads:   128 MB in  1.14 seconds =112.28 MB/sec
    Timing buffered disk reads:  64 MB in  4.55 seconds = 14.07 MB/sec

/dev/hde:
    Timing buffer-cache reads:   128 MB in  1.14 seconds =112.28 MB/sec
    Timing buffered disk reads:  64 MB in  2.50 seconds = 25.60 MB/sec

/dev/hdg:
    Timing buffer-cache reads:   128 MB in  1.12 seconds =114.29 MB/sec
    Timing buffered disk reads:  64 MB in  2.30 seconds = 27.83 MB/sec

/dev/hdi:
    Timing buffer-cache reads:   128 MB in  1.14 seconds =112.28 MB/sec
    Timing buffered disk reads:  64 MB in  2.27 seconds = 28.19 MB/sec

/dev/hdk:
    Timing buffer-cache reads:   128 MB in  1.14 seconds =112.28 MB/sec
    Timing buffered disk reads:  64 MB in  2.54 seconds = 25.20 MB/sec

The raid device:

/dev/md0:
    Timing buffer-cache reads:   128 MB in  1.14 seconds =112.28 MB/sec
    Timing buffered disk reads:  64 MB in  2.19 seconds = 29.22 MB/sec

^ permalink raw reply	[flat|nested] 26+ messages in thread
* RE: RAID 5 performance problems
@ 2003-04-04 15:05 Jonathan Vardy
  0 siblings, 0 replies; 26+ messages in thread
From: Jonathan Vardy @ 2003-04-04 15:05 UTC (permalink / raw)
  To: Ezra Nugroho
  Cc: Peter L. Ashford, Jonathan Vardy, Stephan van Hienen, linux-raid,
	linux-kernel

> From: Ezra Nugroho [mailto:ezran@goshen.edu] 

> Your hdc is still running at udma(33). This is also part of the raid,
> right? This will slow the whole thing down since in raid 5 write is
> done to all disks simultaneously. Before the system finishes writing
> to the slow drive, the write is not done yet.

This should not cripple the performance down to what I'm getting. My
friend had 6x 80GB 5400rpm drives, two of them on UDMA33 and four on
UDMA66, and he managed 80MB/sec (dual 500MHz).
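
Back-of-the-envelope, assuming full-stripe writes are gated by the
slowest member and that a 5-disk RAID 5 carries 4 data blocks per
stripe:

    worst case: 4 data disks x ~14 MB/sec (hdc's rate on UDMA33)  = ~56 MB/sec
    best case : 4 data disks x ~26 MB/sec (the Promise drives)    = ~105 MB/sec

Even the pessimistic figure is far above the 11-14 MB/sec block writes
I'm actually measuring, so the slow channel alone shouldn't explain it.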

Jonathan

^ permalink raw reply	[flat|nested] 26+ messages in thread
* RE: RAID 5 performance problems
@ 2003-04-04 16:01 Jonathan Vardy
  2003-04-05  0:10 ` Jonathan Vardy
  0 siblings, 1 reply; 26+ messages in thread
From: Jonathan Vardy @ 2003-04-04 16:01 UTC (permalink / raw)
  To: Jonathan Vardy, Peter L. Ashford, Jonathan Vardy
  Cc: Stephan van Hienen, linux-raid, linux-kernel

> I've rebooted with the original Red Hat kernel (2.4.20) and it
> recognizes the drives correctly now (UDMA100) but the performance is
> still poor

On top of this, I've replaced the rounded cables with the Promise cables
to make sure they are not an issue. I've also put in two 500MHz
processors, so the machine is comparable with my friend's original setup
of 2x 500MHz and 6x 80GB (2x UDMA33, 4x UDMA66), which managed around
80MB/sec for reads.

The dual 500Mhz setup with 2.4.20 does:

/dev/md0:
    Timing buffer-cache reads:   128 MB in  0.96 seconds =133.33 MB/sec
    Timing buffered disk reads:  64 MB in  3.05 seconds = 20.98 MB/sec

Latest Bonnie++ results:

Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
store.datzegik.c 1G  3800  99 11114  22  4910   7  3632  92 17681  10 105.9   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16   484  98 +++++ +++ 15714  86   518  98 +++++ +++  1985  92
store.datzegik.com,1G,3800,99,11114,22,4910,7,3632,92,17681,10,105.9,0,16,484,98,+++++,+++,15714,86,518,98,+++++,+++,1985,92

Bootmessages:

ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx
PIIX4: IDE controller at PCI slot 00:04.1
PIIX4: chipset revision 1
PIIX4: not 100% native mode: will probe irqs later
    ide0: BM-DMA at 0xd800-0xd807, BIOS settings: hda:DMA, hdb:pio
    ide1: BM-DMA at 0xd808-0xd80f, BIOS settings: hdc:pio, hdd:pio
PDC20270: IDE controller at PCI slot 02:01.0
PDC20270: chipset revision 2
PDC20270: not 100% native mode: will probe irqs later
    ide2: BM-DMA at 0x9040-0x9047, BIOS settings: hde:pio, hdf:pio
    ide3: BM-DMA at 0x9048-0x904f, BIOS settings: hdg:pio, hdh:pio
    ide4: BM-DMA at 0x90c0-0x90c7, BIOS settings: hdi:pio, hdj:pio
    ide5: BM-DMA at 0x90c8-0x90cf, BIOS settings: hdk:pio, hdl:pio
hda: Maxtor 2B020H1, ATA DISK drive
blk: queue c0453420, I/O limit 4095Mb (mask 0xffffffff)
hdc: WDC WD1200BB-00CAA1, ATA DISK drive
blk: queue c04538a0, I/O limit 4095Mb (mask 0xffffffff)
hde: WDC WD1200BB-60CJA1, ATA DISK drive
blk: queue c0453d20, I/O limit 4095Mb (mask 0xffffffff)
hdg: WDC WD1200BB-60CJA1, ATA DISK drive
blk: queue c04541a0, I/O limit 4095Mb (mask 0xffffffff)
hdi: WDC WD1200BB-60CJA1, ATA DISK drive
blk: queue c0454620, I/O limit 4095Mb (mask 0xffffffff)
hdk: WDC WD1200BB-60CJA1, ATA DISK drive
blk: queue c0454aa0, I/O limit 4095Mb (mask 0xffffffff)
ide0 at 0x1f0-0x1f7,0x3f6 on irq 14
ide1 at 0x170-0x177,0x376 on irq 15
ide2 at 0x9000-0x9007,0x9012 on irq 19
ide3 at 0x9020-0x9027,0x9032 on irq 19
ide4 at 0x9080-0x9087,0x9092 on irq 19
ide5 at 0x90a0-0x90a7,0x90b2 on irq 19
hda: host protected area => 1
hda: 39062500 sectors (20000 MB) w/2048KiB Cache, CHS=2431/255/63, UDMA(33)
hdc: host protected area => 1
hdc: 234441648 sectors (120034 MB) w/2048KiB Cache, CHS=232581/16/63, UDMA(33)
hde: host protected area => 1
hde: 234441648 sectors (120034 MB) w/2048KiB Cache, CHS=232581/16/63, UDMA(100)
hdg: host protected area => 1
hdg: 234441648 sectors (120034 MB) w/2048KiB Cache, CHS=232581/16/63, UDMA(100)
hdi: host protected area => 1
hdi: 234441648 sectors (120034 MB) w/2048KiB Cache, CHS=232581/16/63, UDMA(100)
hdk: host protected area => 1
hdk: 234441648 sectors (120034 MB) w/2048KiB Cache, CHS=232581/16/63, UDMA(100)

^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2003-04-05  0:10 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2003-04-03 15:45 RAID 5 performance problems Jonathan Vardy
2003-04-03 18:05 ` Peter L. Ashford
2003-04-03 18:47   ` Ross Vandegrift
2003-04-03 19:22     ` Stephan van Hienen
2003-04-03 19:20   ` Stephan van Hienen
2003-04-03 19:28     ` Alan Cox
2003-04-03 21:02     ` Ezra Nugroho
2003-04-03 21:25       ` Stephan van Hienen
2003-04-03 21:38     ` Peter L. Ashford
2003-04-03 22:09       ` Jonathan Vardy
2003-04-03 22:16         ` Jonathan Vardy
2003-04-03 22:28         ` Peter L. Ashford
2003-04-03 21:42   ` Jonathan Vardy
2003-04-03 22:13     ` Peter L. Ashford
2003-04-03 21:06 ` Felipe Alfaro Solana
2003-04-03 21:14   ` Jonathan Vardy
  -- strict thread matches above, loose matches on Subject: below --
2003-04-03 19:13 Neil Schemenauer
2003-04-03 19:49 Andy Arvai
2003-04-03 20:25 ` Mike Dresser
2003-04-03 21:10 ` Jonathan Vardy
2003-04-03 21:31   ` Ezra Nugroho
2003-04-04 11:44 Jonathan Vardy
2003-04-04 14:39 ` Ezra Nugroho
2003-04-04 15:05 Jonathan Vardy
2003-04-04 16:01 Jonathan Vardy
2003-04-05  0:10 ` Jonathan Vardy
