* Typical RAID5 transfer speeds
@ 2009-12-19 0:37 Matt Tehonica
2009-12-19 1:05 ` Bernd Schubert
` (3 more replies)
0 siblings, 4 replies; 20+ messages in thread
From: Matt Tehonica @ 2009-12-19 0:37 UTC (permalink / raw)
To: linux-raid
I have a 4 disk RAID5 using a 2048K chunk size and an XFS
filesystem. Typical file size is about 2GB-5GB. I usually get around
50MB/sec transfer speed when writing files to the array. Is this
typical or is it below normal? A friend has a 20 disk RAID6 using the
same filesystem and chunk size and gets around 150MB/sec. Any input on
this?
Thanks,
Matt
* Re: Typical RAID5 transfer speeds
2009-12-19 0:37 Typical RAID5 transfer speeds Matt Tehonica
@ 2009-12-19 1:05 ` Bernd Schubert
2009-12-19 8:30 ` Thomas Fjellstrom
2009-12-19 11:43 ` John Robinson
2009-12-19 21:35 ` Roger Heflin
` (2 subsequent siblings)
3 siblings, 2 replies; 20+ messages in thread
From: Bernd Schubert @ 2009-12-19 1:05 UTC (permalink / raw)
To: Matt Tehonica; +Cc: linux-raid
On Saturday 19 December 2009, Matt Tehonica wrote:
> I have a 4 disk RAID5 using a 2048K chunk size and using XFS
4 disks is a bad idea. You should have 2^n data disks, but you have 2^1 + 1 =
3 data disks. As parity information is calculated in powers of two and
blocks are written in powers of two, you probably have read operations
when you only want to write.
> filesystem. Typical file size is about 2GB-5GB. I usually get around
> 50MB/sec transfer speed when writting files to the array. Is this
> typcial or is it below normal? A friend has a 20 disk RAID6 using the
> same filesystem and chunk size and gets around 150MB/sec. Any input on
> this??
For your friend's array I would remove two disks to get 16 + 2 drives (2^4 data
disks). Performance would probably be limited by CPU speed then. 150MB/s for
18 drives is also bad; that is only about the throughput of two drives in RAID0.
Cheers,
Bernd
* Re: Typical RAID5 transfer speeds
2009-12-19 1:05 ` Bernd Schubert
@ 2009-12-19 8:30 ` Thomas Fjellstrom
2009-12-19 9:38 ` Michael Evans
2009-12-19 11:43 ` John Robinson
1 sibling, 1 reply; 20+ messages in thread
From: Thomas Fjellstrom @ 2009-12-19 8:30 UTC (permalink / raw)
To: Bernd Schubert; +Cc: Matt Tehonica, linux-raid
On Fri December 18 2009, Bernd Schubert wrote:
> On Saturday 19 December 2009, Matt Tehonica wrote:
> > I have a 4 disk RAID5 using a 2048K chunk size and using XFS
>
> 4 disks is a bad idea. You should have 2^n data disks, but you have 2^1 +
> 1 = 3 data disks. As parity information are calculated in the power of
> two and blocks are written in the power of two, you probably have read
> operations, when you only want to write.
>
> > filesystem. Typical file size is about 2GB-5GB. I usually get around
> > 50MB/sec transfer speed when writting files to the array. Is this
> > typcial or is it below normal? A friend has a 20 disk RAID6 using the
> > same filesystem and chunk size and gets around 150MB/sec. Any input on
> > this??
>
> I would remove two disks, to get 16 + 2 drives (2^4). Performance
> probably would be limited by CPU speed then. 150MB/s for 18 drives is
> also bad, this is only the performance of two single raid0 drives.
I'd have to agree. My 5 disk raid5 array gets me 200-400MB/s, depending on
the kernel. I'm using a 512K chunk size, formatted with XFS, with 32 AGs,
and xfs_info reporting: sunit=128 swidth=512 blks (which should be
right...), and mounted with:
noatime,nodiratime,logbufs=8,allocsize=512m,largeio,swalloc
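For what it's worth, those sunit/swidth numbers check out against the geometry.
A minimal Python sketch of the arithmetic (assuming 4 KiB filesystem blocks and
4 data disks, i.e. a 5-disk RAID5 with a 512K chunk):

# Check the reported XFS sunit/swidth against the md geometry.
CHUNK_KIB = 512
DATA_DISKS = 5 - 1                   # RAID5: one disk's worth of parity
FS_BLOCK_KIB = 4
sunit = CHUNK_KIB // FS_BLOCK_KIB    # stripe unit, in filesystem blocks
swidth = sunit * DATA_DISKS          # full stripe width, in filesystem blocks
print(f"mkfs.xfs -d su={CHUNK_KIB}k,sw={DATA_DISKS} ...")
print(f"expected xfs_info: sunit={sunit} swidth={swidth} blks")   # -> 128 / 512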
Oh, not quite 200MB/s: iozone is showing 112MB/s write and 300MB/s read.
I'm pretty sure that has something to do with the writeback stuff though,
and it ought to be improved in 2.6.32+ (I have yet to find a good time to
upgrade my server). I know I have seen the SAS card and an initial array
handle more throughput than that when I was first testing things months
ago. It was more like 200-350 write, and 400-550 read.
But yeah, 50MB/s is pretty bad for a raid array. The individual disks in my
array are each capable of more than that. (Yes, I know raid5 will not
give a linear improvement when adding more drives, but it ought to be a heck
of a lot better than a decrease in performance.)
>
> Cheers,
> Bernd
--
Thomas Fjellstrom
tfjellstrom@shaw.ca
* Re: Typical RAID5 transfer speeds
2009-12-19 8:30 ` Thomas Fjellstrom
@ 2009-12-19 9:38 ` Michael Evans
0 siblings, 0 replies; 20+ messages in thread
From: Michael Evans @ 2009-12-19 9:38 UTC (permalink / raw)
To: tfjellstrom; +Cc: Bernd Schubert, Matt Tehonica, linux-raid
On Sat, Dec 19, 2009 at 12:30 AM, Thomas Fjellstrom <tfjellstrom@shaw.ca> wrote:
> On Fri December 18 2009, Bernd Schubert wrote:
>> On Saturday 19 December 2009, Matt Tehonica wrote:
>> > I have a 4 disk RAID5 using a 2048K chunk size and using XFS
>>
>> 4 disks is a bad idea. You should have 2^n data disks, but you have 2^1 +
>> 1 = 3 data disks. As parity information are calculated in the power of
>> two and blocks are written in the power of two, you probably have read
>> operations, when you only want to write.
>>
>> > filesystem. Typical file size is about 2GB-5GB. I usually get around
>> > 50MB/sec transfer speed when writting files to the array. Is this
>> > typcial or is it below normal? A friend has a 20 disk RAID6 using the
>> > same filesystem and chunk size and gets around 150MB/sec. Any input on
>> > this??
>>
>> I would remove two disks, to get 16 + 2 drives (2^4). Performance
>> probably would be limited by CPU speed then. 150MB/s for 18 drives is
>> also bad, this is only the performance of two single raid0 drives.
>
> I'd have to agree. My 5 disk raid5 array gets me 200-400MB/s, depending on
> the kernel. I'm using a 512K chunk size, formatted with XFS, with 32 AGs,
> and xfs_info reporting: sunit=128 swidth=512 blks (which should be
> right...), and mounted with:
> noatime,nodiratime,logbufs=8,allocsize=512m,largeio,swalloc
>
> oh, not quite 200MB/s, iozone is showing 112MB/s write, and 300MB/s read.
> I'm pretty sure that has something to do with the writeback stuff though,
> and aught to be improved in 2.6.32+ (I have yet to find a good time to
> upgrade my server). I know I have seen the SAS card, and an initial array
> handle more throughput than that when I was first testing stuff months and
> months ago. It was more like 200-350 write, and 400-550 read.
>
> But yeah, 50MB/s is pretty bad for a raid array. The individual disks in my
> array are all capable of more than that each. (Yes, I know raid5 will not
> give a linear improvement when adding more drives, but it aught to be a heck
> of a lot better than a decrease in performance)
>
>>
>> Cheers,
>> Bernd
> --
> Thomas Fjellstrom
> tfjellstrom@shaw.ca
It is possible that in your setup memory speed could be the
bottleneck; have you considered that maybe the issue is the size of
your processor's cache compared to the size of the stripe? I know
server chips from Intel and AMD typically have larger caches, and that
at the consumer end the Intel chips typically also have more cache.
It could be that your chosen stripe size is simply larger than the
cache, so you actually notice the cost of dropping down to memory
speed when your processor is starved for new data (or if it's really
thrashed with other tasks). I'm not sure how much of an issue this is
for me, since the ideal cache size for my setup is only a little
larger than what I actually have, and DDR2 might still be fast enough
in burst mode to read ahead of the end of the request. I do know that
in my case performance is more than sufficient and the main concern is
simply not losing all of that data.
* Re: Typical RAID5 transfer speeds
2009-12-19 1:05 ` Bernd Schubert
2009-12-19 8:30 ` Thomas Fjellstrom
@ 2009-12-19 11:43 ` John Robinson
2009-12-19 19:18 ` Leslie Rhorer
1 sibling, 1 reply; 20+ messages in thread
From: John Robinson @ 2009-12-19 11:43 UTC (permalink / raw)
To: Bernd Schubert; +Cc: Linux RAID
On 19/12/2009 01:05, Bernd Schubert wrote:
> On Saturday 19 December 2009, Matt Tehonica wrote:
>> I have a 4 disk RAID5 using a 2048K chunk size and using XFS
>
> 4 disks is a bad idea. You should have 2^n data disks, but you have 2^1 + 1 =
> 3 data disks. As parity information are calculated in the power of two and
> blocks are written in the power of two
Sorry, but where did you get that from? p = d1 xor d2 xor d3 has nothing
to do with powers of two, and I'm sure blocks are written whenever they
need to be, not in powers of two.
> you probably have read operations,
> when you only want to write.
That will depend on how much data you're trying to write. With 3 data
discs and a 2M chunk size, writes in multiples of 6M won't need reads.
Writing a 25M file would therefore write 4 stripes and need to read to
do the last 1M. With 4 data discs, it'd be 8M multiples, and you'd write
3 stripes and need a read to do the last 1M. No difference.
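A minimal Python sketch of that arithmetic, purely as illustration (chunk size
assumed to be the 2M above):

# How many full stripes a write covers, and what partial tail is left
# over that forces a read-modify-write.
def split_into_stripes(file_mib, data_disks, chunk_mib=2):
    stripe_mib = data_disks * chunk_mib
    return divmod(file_mib, stripe_mib)   # (full stripes, partial tail in MiB)

for data_disks in (3, 4):
    full, tail = split_into_stripes(25, data_disks)
    print(f"{data_disks} data discs: {full} full stripes, {tail} MiB partial tail")
# 3 data discs: 4 full stripes, 1 MiB partial tail
# 4 data discs: 3 full stripes, 1 MiB partial tail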
Cheers,
John.
* RE: Typical RAID5 transfer speeds
2009-12-19 11:43 ` John Robinson
@ 2009-12-19 19:18 ` Leslie Rhorer
2009-12-21 13:06 ` Goswin von Brederlow
0 siblings, 1 reply; 20+ messages in thread
From: Leslie Rhorer @ 2009-12-19 19:18 UTC (permalink / raw)
To: linux-raid
> On 19/12/2009 01:05, Bernd Schubert wrote:
> > On Saturday 19 December 2009, Matt Tehonica wrote:
> >> I have a 4 disk RAID5 using a 2048K chunk size and using XFS
> >
> > 4 disks is a bad idea. You should have 2^n data disks, but you have 2^1 + 1 =
> > 3 data disks. As parity information are calculated in the power of two and
> > blocks are written in the power of two
>
> Sorry, but where did you get that from? p = d1 xor d2 xor d3 has nothing
> to do with powers of two, and I'm sure blocks are written whenever they
> need to be, not in powers of two.
Yeah, I was scratching my head over that one, too. It sounded bogus
to me, but I didn't want to open my mouth, so to speak, when I was unsure of
myself. Being far from expert in the matter, I can't be certain, but I
surely can think of no reason why writes would occur in powers of two, or
even be more efficient because of it.
> > you probably have read operations,
> > when you only want to write.
>
> That will depend on how much data you're trying to write. With 3 data
> discs and a 2M chunk size, writes in multiples of 6M won't need reads.
> Writing a 25M file would therefore write 4 stripes and need to read to
> do the last 1M. With 4 data discs, it'd be 8M multiples, and you'd write
> 3 stripes and need a read to do the last 1M. No difference.
	I hadn't really considered this before, and I am curious. Of course
there is no reason for md to read a stripe marked as in use if the data to
be written will fill the entire stripe. However, does it only apply this
logic when the data completely fills a stripe? The most efficient use of
disk space will of course be had if the system reads the partially used
target stripe whenever the write buffer contains even one chunk less than a
full stripe, but the best write speed would mean only reading a partially
used stripe when the write buffer contains less than half a stripe's worth
of data. Does anyone know which is the case?
* Re: Typical RAID5 transfer speeds
2009-12-19 0:37 Typical RAID5 transfer speeds Matt Tehonica
2009-12-19 1:05 ` Bernd Schubert
@ 2009-12-19 21:35 ` Roger Heflin
2009-12-20 4:21 ` Michael Evans
2009-12-20 10:04 ` Erwan MAS
2009-12-20 15:25 ` Andre Tomt
3 siblings, 1 reply; 20+ messages in thread
From: Roger Heflin @ 2009-12-19 21:35 UTC (permalink / raw)
To: Matt Tehonica; +Cc: linux-raid
Matt Tehonica wrote:
> I have a 4 disk RAID5 using a 2048K chunk size and using XFS
> filesystem. Typical file size is about 2GB-5GB. I usually get around
> 50MB/sec transfer speed when writting files to the array. Is this
> typcial or is it below normal? A friend has a 20 disk RAID6 using the
> same filesystem and chunk size and gets around 150MB/sec. Any input on
> this??
>
> Thanks,
> Matt
Speed depends on how the disks are connected to the system, and how
many disks there are per connection, and what kind of disks they are.
If your friend had the 20 disk raid6 on one 4-port SATA PCI-32bit/33MHz
card with port multipliers, his total throughput would be <110MB/second for
reads or writes. If your friend had the 20 disks on 10+ port PCIe-x16
cards, his total possible speed would be much, much higher; reads would be
expected to be 18x (raw disk rate) if the machine could handle it.
Also, newer disks are faster than older disks:
1.5TB disks read/write at 125-130+ MB/second on a fast port.
1.5TB disks read/write at 75-80 MB/second on a PCI-32bit/33MHz port.
500GB disks read/write at 75-80 MB/second on a PCI-32bit/33MHz port.
250GB disks read/write at 50-55 MB/second on a fast port.
And those PCI-32bit/33MHz figures are with only a single disk; put more
than one on there and the I/O rates drop. So 2 disks on a PCI-32bit/33MHz
(old PCI) port will get <50MB/second each no matter how fast the disks are;
put 3 on there and each disk is down to ~33MB/second, and with 4 it is
25MB/second or less.
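As a rough illustration of that bus-sharing arithmetic (the ~100 MB/s of usable
old-PCI bandwidth is an assumption, not a measurement; the theoretical peak is
~133 MB/s):

# Per-disk share of an old PCI 32-bit/33MHz bus once several disks contend for it.
PCI_USABLE_MBS = 100
for disks in (2, 3, 4):
    print(f"{disks} disks sharing the bus: <{PCI_USABLE_MBS / disks:.0f} MB/s each")
# 2 disks: <50 MB/s each, 3 disks: <33 MB/s, 4 disks: <25 MB/s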
* Re: Typical RAID5 transfer speeds
2009-12-19 21:35 ` Roger Heflin
@ 2009-12-20 4:21 ` Michael Evans
2009-12-20 9:55 ` Thomas Fjellstrom
2009-12-20 18:28 ` Roger Heflin
0 siblings, 2 replies; 20+ messages in thread
From: Michael Evans @ 2009-12-20 4:21 UTC (permalink / raw)
To: Roger Heflin; +Cc: Matt Tehonica, linux-raid
On Sat, Dec 19, 2009 at 1:35 PM, Roger Heflin <rogerheflin@gmail.com> wrote:
> Matt Tehonica wrote:
>>
>> I have a 4 disk RAID5 using a 2048K chunk size and using XFS filesystem.
>> Typical file size is about 2GB-5GB. I usually get around 50MB/sec transfer
>> speed when writting files to the array. Is this typcial or is it below
>> normal? A friend has a 20 disk RAID6 using the same filesystem and chunk
>> size and gets around 150MB/sec. Any input on this??
>>
>> Thanks,
>> Matt
>
> Speed depends on how the disks are connected to the system, and how many
> disks there are per connection, and what kind of disks they are.
>
> If your friend had a 20 disk raid6 on one 4port sata pci-32bit/33mhz card
> with port multipliers his total throughput would be <110mb/second reads or
> writes, if your friend had 20 disks on 10+ port pcie-x16 cards his total
> possible speed would be much much higher, reads would be expected to be
> 18x(rawdiskrate) if the machine could handle it.
>
> Also newer disks are faster than older disks.
>
> 1.5tb disks read/write at 125-130+ MB/second on a fast port.
> 1.5tb disks read/write at 75-80 MB/second on a PCI-32bit/33mhz port.
> 500gb disks read/write at 75-80 MB/second on a PCI-32bit/33mhz port.
> 250gb disks read/write at 50-55 MB/second on a fast port.
>
> And those PCI-32bit/33mhz ports are with only a single disk, put more than
> one on there, and the io rates drop...so 2 disk on pci-32bit/33mhz (old PCI)
> port will have <50MB/second each no matter how fast the disk is, put 3 on
> there and each disk is down to 33mhz, 4 25MB/second or less.
Speaking of 16-port cards in x16 slots, why is it so difficult to
find an 8- or 16-port PCIe x4/x8/x16 adapter? A good x1 PCIe to 2x
SATA card costs like 25 to 50 USD. Given the reduction in duplicated
components, it should not be hard to make a card with 8 ports for 100
USD or less, right? I don't even want any intelligence; just a plain
disk to PCI-E lane connection would be fine.
* Re: Typical RAID5 transfer speeds
2009-12-20 4:21 ` Michael Evans
@ 2009-12-20 9:55 ` Thomas Fjellstrom
2009-12-20 14:53 ` Andre Tomt
2009-12-20 18:28 ` Roger Heflin
1 sibling, 1 reply; 20+ messages in thread
From: Thomas Fjellstrom @ 2009-12-20 9:55 UTC (permalink / raw)
To: Michael Evans; +Cc: Roger Heflin, Matt Tehonica, linux-raid
On Sat December 19 2009, Michael Evans wrote:
> On Sat, Dec 19, 2009 at 1:35 PM, Roger Heflin <rogerheflin@gmail.com>
wrote:
> > Matt Tehonica wrote:
> >> I have a 4 disk RAID5 using a 2048K chunk size and using XFS
> >> filesystem. Typical file size is about 2GB-5GB. I usually get around
> >> 50MB/sec transfer speed when writting files to the array. Is this
> >> typcial or is it below normal? A friend has a 20 disk RAID6 using the
> >> same filesystem and chunk size and gets around 150MB/sec. Any input on
> >> this??
> >>
> >> Thanks,
> >> Matt
> >
> > Speed depends on how the disks are connected to the system, and how
> > many disks there are per connection, and what kind of disks they are.
> >
> > If your friend had a 20 disk raid6 on one 4port sata pci-32bit/33mhz
> > card with port multipliers his total throughput would be <110mb/second
> > reads or writes, if your friend had 20 disks on 10+ port pcie-x16 cards
> > his total possible speed would be much much higher, reads would be
> > expected to be 18x(rawdiskrate) if the machine could handle it.
> >
> > Also newer disks are faster than older disks.
> >
> > 1.5tb disks read/write at 125-130+ MB/second on a fast port.
> > 1.5tb disks read/write at 75-80 MB/second on a PCI-32bit/33mhz port.
> > 500gb disks read/write at 75-80 MB/second on a PCI-32bit/33mhz port.
> > 250gb disks read/write at 50-55 MB/second on a fast port.
> >
> > And those PCI-32bit/33mhz ports are with only a single disk, put more
> > than one on there, and the io rates drop...so 2 disk on pci-32bit/33mhz
> > (old PCI) port will have <50MB/second each no matter how fast the disk
> > is, put 3 on there and each disk is down to 33mhz, 4 25MB/second or
> > less.
>
> Speaking of 16x 16 port cards, why is it that it's so difficult to
> find an 8 or 16 port 4 or 8/16x pcie adapter? A good 1xpci-e to 2x
> SATA costs like 25 to 50 USD. Given the reduction in duplicate
> components, it should not be hard to make a card with 8 ports for 100
> USD or less right? I don't even want any intelligence, just normal
> disk to PCI-E lane connectin would be fine.
While the driver support isn't "perfect"*, I have an AOC-SASLP-MV8, a 2-port
SAS 4x PCI-e card (with SAS->SATA converters, it's an 8-port SATA card).
When I was running some tests on individual drives and watching iostat, I
saw over 500MB/s combined throughput, and that was with only 5 drives.
Theoretically it should be capable of 1GB/s given it's an x4 card.
Theoretically.
At the very least, the card should be more than capable of providing enough
bandwidth for all 8 ports to be filled with regular mechanical hard drives
(sans port expanders).
* by "perfect" I mean the current kernel drivers don't work at all once you
try and build a md-raid array on it. The new version of the drivers appeared
in linux-scsi earlier this month, and should only need minor adjustments,
but a second series haven't appeared yet, not sure when they'll make it into
the kernel at this point. I was really hoping for them to make it into
2.6.32, but heck, at this rate they might not make it for 2.6.33.
Even with all the troubles, this card has saved me hundreds in not having to
buy a hw raid card :D, or a more expensive multi port sata/sas card. So I'm
happy.
--
Thomas Fjellstrom
tfjellstrom@shaw.ca
* Re: Typical RAID5 transfer speeds
2009-12-19 0:37 Typical RAID5 transfer speeds Matt Tehonica
2009-12-19 1:05 ` Bernd Schubert
2009-12-19 21:35 ` Roger Heflin
@ 2009-12-20 10:04 ` Erwan MAS
2009-12-20 10:31 ` Keld Jørn Simonsen
2009-12-20 15:25 ` Andre Tomt
3 siblings, 1 reply; 20+ messages in thread
From: Erwan MAS @ 2009-12-20 10:04 UTC (permalink / raw)
To: Matt Tehonica; +Cc: linux-raid
On Fri, Dec 18, 2009 at 07:37:20PM -0500, Matt Tehonica wrote:
> I have a 4 disk RAID5 using a 2048K chunk size and using XFS
> filesystem. Typical file size is about 2GB-5GB. I usually get around
> 50MB/sec transfer speed when writting files to the array. Is this
> typcial or is it below normal? A friend has a 20 disk RAID6 using the
> same filesystem and chunk size and gets around 150MB/sec. Any input on
> this??
You must be aware that:
- each disk has physical limits that depend on its RPM
- writing is slow on raid5 & raid6.
With a raid5 device, when writing a new block you must:
- read the original block
- read the parity block
- compute the new parity
- write the new block
- write the new parity
With a raid6 device, when writing a new block you must:
- read the original block
- read the parity 1 block
- read the parity 2 block
- compute the new parity 1
- compute the new parity 2
- write the new block
- write the new parity 1
- write the new parity 2
With a cache you can sometimes get better performance; it depends on how the
application uses the device.
It's common to say that:
on raid 5 you have a x4 penalty on writes
on raid 6 you have a x6 penalty on writes
on raid 1/10 you have a x2 penalty on writes
on raid 0 you have no penalty on writes
It's common to say that:
one 15k rpm drive can do 180 random IOs per second
one 10k rpm drive can do 140 random IOs per second
one 7200 rpm drive can do 80 random IOs per second
If you have a performance problem you need more drives, because more drives give you more IOPS.
If you have bad write performance with raid5, try raid1.
For performance, it's better to have many small disks than one big one.
But for electricity consumption it's different:
2.5" drives use less electricity than 3.5" drives
lower-RPM drives use less electricity
There is no magic formula!
In your case:
you use 4 drives in raid5, so when you write data you only get the performance of one drive:
number_of_disk_in_the_array / 4 (because you are on raid5)
For your friend:
he uses 20 drives in raid6, so when he writes data he gets the performance of about 3.3 drives:
number_of_disk_in_the_array / 6 (because he is on raid6)
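A small Python sketch of how those rules of thumb combine (the IOPS and penalty
figures are the ones quoted above, not measurements):

# Random-write IOPS = drives * per-drive IOPS / write penalty of the RAID level.
WRITE_PENALTY = {"raid0": 1, "raid1/10": 2, "raid5": 4, "raid6": 6}
IOPS_7200RPM = 80

def random_write_iops(drives, level, per_drive=IOPS_7200RPM):
    return drives * per_drive / WRITE_PENALTY[level]

print(random_write_iops(4, "raid5"))    # 80.0   -> about one drive's worth
print(random_write_iops(20, "raid6"))   # ~266.7 -> about 3.3 drives' worth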
--
____________________________________________________________
/ Erwan MAS /\
| mailto:erwan@mas.nom.fr |_/
___|________________________________________________________ |
\___________________________________________________________\__/
* Re: Typical RAID5 transfer speeds
2009-12-20 10:04 ` Erwan MAS
@ 2009-12-20 10:31 ` Keld Jørn Simonsen
0 siblings, 0 replies; 20+ messages in thread
From: Keld Jørn Simonsen @ 2009-12-20 10:31 UTC (permalink / raw)
To: Erwan MAS; +Cc: Matt Tehonica, linux-raid
On Sun, Dec 20, 2009 at 11:04:50AM +0100, Erwan MAS wrote:
> On Fri, Dec 18, 2009 at 07:37:20PM -0500, Matt Tehonica wrote:
> > I have a 4 disk RAID5 using a 2048K chunk size and using XFS
> > filesystem. Typical file size is about 2GB-5GB. I usually get around
> > 50MB/sec transfer speed when writting files to the array. Is this
> > typcial or is it below normal? A friend has a 20 disk RAID6 using the
> > same filesystem and chunk size and gets around 150MB/sec. Any input on
> > this??
>
> You must be aware :
> - each disk has physical limitations that depends from rpm
> - that writing is slow on raid5 & raid6 .
>
> With a raid5 device ,when writing a new block , you must :
> - read the original block
> - read the parity block
> - compute the new parity
> - write the new block
> - write the new parity
>
> With a raid6 device ,when writing a new block , you must :
> - read the original block
> - read the parity 1 block
> - read the parity 2 block
> - compute the new parity 1
> - compute the new parity 2
> - write the new block
> - write the new parity 1
> - write the new parity 2
>
> With cache , you can have better performance somes times , that depends of usage of device
> by the application .
There is also a write mode that detects that you are overwriting many
blocks at once, and thus does not read the parity blocks or the original
blocks. This is good for writing big files. The kernel detects this mode
for you, e.g. when writing large sequential files.
> Its'common to said that :
> on raid 5 you have x4 penalty in writing
> on raid 6 you have x6 penalty in writing
> on raid 1/10 you have x2 penalty in writing
> on raid 0 you have no penalty in writing
In the sequential mode the penalty is then only the one parity drive written
for RAID5, and two drives written for RAID6. RAID5/6 can then be much
faster than RAID1 (and RAID10) for writing.
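A small sketch of the difference, using the array sizes from this thread:

# In the full-stripe (sequential) case the only overhead is the parity
# chunk(s), so streaming-write efficiency is (disks - parity) / disks
# rather than the 4x/6x random-write penalty.
def streaming_efficiency(disks, parity_disks):
    return (disks - parity_disks) / disks

print(f"RAID5, 4 disks:  {streaming_efficiency(4, 1):.0%} of aggregate raw bandwidth")
print(f"RAID6, 20 disks: {streaming_efficiency(20, 2):.0%} of aggregate raw bandwidth")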
> Its'common to said that :
> one 15k rpm drive can do 180 random IO per second
> one 10k rpm drive can do 140 random IO per second
> one 7200 rpm drive can do 80 random IO per second
The elevator algorithm used can do much to improve these rates.
> If you have perfomance problem you must have more drive, because more drive give you more iops .
Or, if IOPS is your concern, then try SSD.
> If you have bad perfomance with raid5 during write you must try raid1 .
>
> It's better to have many small disks that a big one , for perfomance .
>
> But for Electricity consumption , it's different :
> 2,5 drive use less electricity that 3,5 drive
> slower in RPM drive use lesser electricity
>
> There no magic formula !
>
> In your case :
> you used 4 drives in raid5 , so when you write data , you have only the performance of one drive .
> number_of_disk_in_the_array / 4 ( because you are in raid5 )
>
> For your friend :
> He used a 20 drives in raid6 , so when he write data , he have performance of 3.6 drives
> number_of_disk_in_the_array / 6 ( because you are in raid6 )
Best regards
keld
* Re: Typical RAID5 transfer speeds
2009-12-20 9:55 ` Thomas Fjellstrom
@ 2009-12-20 14:53 ` Andre Tomt
2009-12-20 16:03 ` Thomas Fjellstrom
0 siblings, 1 reply; 20+ messages in thread
From: Andre Tomt @ 2009-12-20 14:53 UTC (permalink / raw)
To: tfjellstrom; +Cc: Michael Evans, Roger Heflin, Matt Tehonica, linux-raid
On 20.12.2009 10:55, Thomas Fjellstrom wrote:
> While the driver support isn't "perfect"*, I have an AOC-SASLP-MV8, a 2 port
> SAS 4x PCI-e card. (with SAS->SATA converters, its an 8 port SATA card)
>
> When I was running some tests on individual drives, and watching iostat, I
> saw over 500MB/s combined throughput, and that was with only 5 drives.
> Theoretically it should be capable of 1GB/s given its a x4 card.
> Theoretically.
>
> At the very least, the card should be more than capable of providing enough
> bandwidth for all 8 ports to be filled with regular mechanical hard drives
> (sans port expanders).
Having tried it connected directly to the north bridge, it seems to top
out at ~650-700MB/s (also using the Windows driver), which is not too bad
for a PCIe 1.0 x4 card, but unfortunately it will only manage to saturate
about ~5 modern mechanical drives (currently 120-130MB/s each for
SATA 7200rpm), if high sequential performance is all you're looking for.
Currently I solved that by having 4 drives from the array on that card,
and populating the last four ports with drives that are seldom used and,
when used, get mostly random I/O but still need a port ;-)
Still good value for money, indeed.
* Re: Typical RAID5 transfer speeds
2009-12-19 0:37 Typical RAID5 transfer speeds Matt Tehonica
` (2 preceding siblings ...)
2009-12-20 10:04 ` Erwan MAS
@ 2009-12-20 15:25 ` Andre Tomt
3 siblings, 0 replies; 20+ messages in thread
From: Andre Tomt @ 2009-12-20 15:25 UTC (permalink / raw)
To: Matt Tehonica; +Cc: linux-raid
On 19.12.2009 01:37, Matt Tehonica wrote:
> I have a 4 disk RAID5 using a 2048K chunk size and using XFS filesystem.
> Typical file size is about 2GB-5GB. I usually get around 50MB/sec
> transfer speed when writting files to the array. Is this typcial or is
> it below normal? A friend has a 20 disk RAID6 using the same filesystem
> and chunk size and gets around 150MB/sec. Any input on this??
Software RAID performance should not be that slow, unless the drives are
connected to a controller on a 32bit/33MHz PCI slot, of course. There are a
few things to keep in mind though.
Controllers and the bus topology of the motherboard matter a great deal for
I/O performance, but even on recent (up to around 3 years back, I think)
desktop motherboards you should be able to go very fast when using the
right slots and busses. PCI-Express was the game changer here, but you
should try to get most SATA ports on a slot connected to the north
bridge if you want to go REALLY fast.
Filesystem alignment and stripe size awareness help quite a bit, and I
guess even more on a machine that is already bus starved (if that's your
problem), as they help reduce "invisible" I/O: operations spanning
multiple stripes when they could have spanned one, for example, and a
reduction in read-modify-write cycles in general.
A bigger stripe_cache on the array might help, especially if things
aren't aligned/aware, if you got the memory (check /sys/block/<md
dev>/md/stripe_cache_active and _size.)
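A minimal sketch of checking those knobs (the array name /dev/md0 is an
assumption; writing the value needs root):

# Inspect (and optionally raise) the raid5/6 stripe cache via sysfs.
from pathlib import Path

md = Path("/sys/block/md0/md")
size = int((md / "stripe_cache_size").read_text())
active = int((md / "stripe_cache_active").read_text())
print(f"stripe_cache_size={size} stripe_cache_active={active}")

# Each cache entry costs roughly one 4 KiB page per member device, so 8192
# entries on an 8-disk array is on the order of 8192 * 4 KiB * 8 = 256 MiB.
# (md / "stripe_cache_size").write_text("8192")   # uncomment to enlarge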
On an old Intel Core 2 Duo, with an MD RAID6 set using 128k chunks over 8
1.5TB 7200rpm SATA drives, I'm seeing about 600MB/s writes and
700-750MB/s reads with sequential I/O, which is very near the maximum
for the resulting stripe size with those drives. Changing to RAID5 would
probably net me another ~100MB/s as the stripe would span one more drive.
* Re: Typical RAID5 transfer speeds
2009-12-20 14:53 ` Andre Tomt
@ 2009-12-20 16:03 ` Thomas Fjellstrom
0 siblings, 0 replies; 20+ messages in thread
From: Thomas Fjellstrom @ 2009-12-20 16:03 UTC (permalink / raw)
To: Andre Tomt; +Cc: Michael Evans, Roger Heflin, Matt Tehonica, linux-raid
On Sun December 20 2009, Andre Tomt wrote:
> On 20.12.2009 10:55, Thomas Fjellstrom wrote:
> > While the driver support isn't "perfect"*, I have an AOC-SASLP-MV8, a 2
> > port SAS 4x PCI-e card. (with SAS->SATA converters, its an 8 port SATA
> > card)
> >
> > When I was running some tests on individual drives, and watching
> > iostat, I saw over 500MB/s combined throughput, and that was with only
> > 5 drives. Theoretically it should be capable of 1GB/s given its a x4
> > card. Theoretically.
> >
> > At the very least, the card should be more than capable of providing
> > enough bandwidth for all 8 ports to be filled with regular mechanical
> > hard drives (sans port expanders).
>
> Having tried it connected directly to the north bridge, it seems to top
> out at ~650-700MB/s (also using Windows driver). Which is not too bad
> for a pcie 1.0 x4 card, but unfortunatly will only manage to saturate
> about ~5 modern mechanical drives (currently at 120-130MB/s each for
> SATA 7200rpm), if high sequential performance is all you're looking for.
>
> Currently I solved that by having 4 drives from the array on that card,
> and populating the last four ports with with drives that are seldomly
> used and when used get mostly random i/o but still needs a port ;-)
>
> Still good value for money, indeed.
I tried it on Windows, and the tests were abysmal; worse than anything Linux
gave. Maybe it was Windows dynamic disks RAID sucking, but it was
horrible. I ran a disk benchmark on it, and the whole suite of tests
should have been able to finish in 12 hours, but after about 24 it
hadn't even finished the first read test.
--
Thomas Fjellstrom
tfjellstrom@shaw.ca
* Re: Typical RAID5 transfer speeds
2009-12-20 4:21 ` Michael Evans
2009-12-20 9:55 ` Thomas Fjellstrom
@ 2009-12-20 18:28 ` Roger Heflin
2009-12-21 1:18 ` Michael Evans
1 sibling, 1 reply; 20+ messages in thread
From: Roger Heflin @ 2009-12-20 18:28 UTC (permalink / raw)
To: Michael Evans; +Cc: Matt Tehonica, linux-raid
Michael Evans wrote:
> On Sat, Dec 19, 2009 at 1:35 PM, Roger Heflin <rogerheflin@gmail.com> wrote:
>> Matt Tehonica wrote:
>>> I have a 4 disk RAID5 using a 2048K chunk size and using XFS filesystem.
>>> Typical file size is about 2GB-5GB. I usually get around 50MB/sec transfer
>>> speed when writting files to the array. Is this typcial or is it below
>>> normal? A friend has a 20 disk RAID6 using the same filesystem and chunk
>>> size and gets around 150MB/sec. Any input on this??
>>>
>>> Thanks,
>>> Matt
>> Speed depends on how the disks are connected to the system, and how many
>> disks there are per connection, and what kind of disks they are.
>>
>> If your friend had a 20 disk raid6 on one 4port sata pci-32bit/33mhz card
>> with port multipliers his total throughput would be <110mb/second reads or
>> writes, if your friend had 20 disks on 10+ port pcie-x16 cards his total
>> possible speed would be much much higher, reads would be expected to be
>> 18x(rawdiskrate) if the machine could handle it.
>>
>> Also newer disks are faster than older disks.
>>
>> 1.5tb disks read/write at 125-130+ MB/second on a fast port.
>> 1.5tb disks read/write at 75-80 MB/second on a PCI-32bit/33mhz port.
>> 500gb disks read/write at 75-80 MB/second on a PCI-32bit/33mhz port.
>> 250gb disks read/write at 50-55 MB/second on a fast port.
>>
>> And those PCI-32bit/33mhz ports are with only a single disk, put more than
>> one on there, and the io rates drop...so 2 disk on pci-32bit/33mhz (old PCI)
>> port will have <50MB/second each no matter how fast the disk is, put 3 on
>> there and each disk is down to 33mhz, 4 25MB/second or less.
>
> Speaking of 16x 16 port cards, why is it that it's so difficult to
> find an 8 or 16 port 4 or 8/16x pcie adapter? A good 1xpci-e to 2x
> SATA costs like 25 to 50 USD. Given the reduction in duplicate
> components, it should not be hard to make a card with 8 ports for 100
> USD or less right? I don't even want any intelligence, just normal
> disk to PCI-E lane connectin would be fine.
>
I am pretty sure it is lack of need.
I believe someone mentioned Supermicro has an 8-port PCIe-x4 card that
is in the $100 range, but the driver for it is kind of new and has
some issues at this time.
* Re: Typical RAID5 transfer speeds
2009-12-20 18:28 ` Roger Heflin
@ 2009-12-21 1:18 ` Michael Evans
2009-12-21 1:50 ` Richard Scobie
0 siblings, 1 reply; 20+ messages in thread
From: Michael Evans @ 2009-12-21 1:18 UTC (permalink / raw)
To: Roger Heflin; +Cc: Matt Tehonica, linux-raid
On Sun, Dec 20, 2009 at 10:28 AM, Roger Heflin <rogerheflin@gmail.com> wrote:
> Michael Evans wrote:
>>
>> On Sat, Dec 19, 2009 at 1:35 PM, Roger Heflin <rogerheflin@gmail.com>
>> wrote:
>>>
>>> Matt Tehonica wrote:
>>>>
>>>> I have a 4 disk RAID5 using a 2048K chunk size and using XFS filesystem.
>>>> Typical file size is about 2GB-5GB. I usually get around 50MB/sec
>>>> transfer
>>>> speed when writting files to the array. Is this typcial or is it below
>>>> normal? A friend has a 20 disk RAID6 using the same filesystem and
>>>> chunk
>>>> size and gets around 150MB/sec. Any input on this??
>>>>
>>>> Thanks,
>>>> Matt
>>>
>>> Speed depends on how the disks are connected to the system, and how many
>>> disks there are per connection, and what kind of disks they are.
>>>
>>> If your friend had a 20 disk raid6 on one 4port sata pci-32bit/33mhz card
>>> with port multipliers his total throughput would be <110mb/second reads
>>> or
>>> writes, if your friend had 20 disks on 10+ port pcie-x16 cards his total
>>> possible speed would be much much higher, reads would be expected to be
>>> 18x(rawdiskrate) if the machine could handle it.
>>>
>>> Also newer disks are faster than older disks.
>>>
>>> 1.5tb disks read/write at 125-130+ MB/second on a fast port.
>>> 1.5tb disks read/write at 75-80 MB/second on a PCI-32bit/33mhz port.
>>> 500gb disks read/write at 75-80 MB/second on a PCI-32bit/33mhz port.
>>> 250gb disks read/write at 50-55 MB/second on a fast port.
>>>
>>> And those PCI-32bit/33mhz ports are with only a single disk, put more
>>> than
>>> one on there, and the io rates drop...so 2 disk on pci-32bit/33mhz (old
>>> PCI)
>>> port will have <50MB/second each no matter how fast the disk is, put 3 on
>>> there and each disk is down to 33mhz, 4 25MB/second or less.
>>
>> Speaking of 16x 16 port cards, why is it that it's so difficult to
>> find an 8 or 16 port 4 or 8/16x pcie adapter? A good 1xpci-e to 2x
>> SATA costs like 25 to 50 USD. Given the reduction in duplicate
>> components, it should not be hard to make a card with 8 ports for 100
>> USD or less right? I don't even want any intelligence, just normal
>> disk to PCI-E lane connectin would be fine.
>>
>
> I am pretty sure it is lack of need.
>
> I believe someone mentioned supermicro has a 8 port pcie-x4 card, that is in
> the $100 range, but the driver for it is kind of new and has some issues at
> this time.
>
Wow, I had no idea these even existed (but I now know that I have to
use -really- specific search terms to find them).
The searches,
8-port pci-e OR pciexpress OR pci-express
16-port pci-e OR pciexpress OR pci-express
yield the desired results on Google's product search, though there
seems to be only one manufacturer, and only one seller currently. I
guess most people building >6 drive arrays have the cash to waste on
limited boxed solutions or higher-end hardware controllers that
abstract the details (and often flexibility) from the system.
I'll have to remember the search for the next time I buy
upgrade/replacement hardware.
* Re: Typical RAID5 transfer speeds
2009-12-21 1:18 ` Michael Evans
@ 2009-12-21 1:50 ` Richard Scobie
2009-12-21 11:30 ` Asdo
0 siblings, 1 reply; 20+ messages in thread
From: Richard Scobie @ 2009-12-21 1:50 UTC (permalink / raw)
To: Michael Evans; +Cc: Roger Heflin, Matt Tehonica, linux-raid
Michael Evans wrote:
> seems to be only one manufacturer, and only one seller currently. I
> guess most people building >6 drive arrays have the cash to waste on
> limited boxed solutions or higher-end hardware controllers that
> abstract the details (and often flexibility) from the system.
Or use dumb LSI SAS controllers (sas3442e-r with IT firmware loaded,
$200) and port expander based chassis that allow considerable
flexibility regarding md RAID and number of SAS/SATA drives attached.
The only current caveat, which is being worked on, is that use of
smartmontools causes drives to disconnect.
Regards,
Richard
* Re: Typical RAID5 transfer speeds
2009-12-21 1:50 ` Richard Scobie
@ 2009-12-21 11:30 ` Asdo
2009-12-21 18:28 ` Richard Scobie
0 siblings, 1 reply; 20+ messages in thread
From: Asdo @ 2009-12-21 11:30 UTC (permalink / raw)
To: Richard Scobie; +Cc: Michael Evans, Roger Heflin, Matt Tehonica, linux-raid
Richard Scobie wrote:
> Or use dumb LSI SAS controllers (sas3442e-r with IT firmware loaded,
> $200) and port expander based chassis that allow considerable
> flexibility regarding md RAID and number of SAS/SATA drives attached.
The problem then becomes choosing the port expander chassis...
I know nothing about this topic :-(
Does the brand/model make a difference in performance or reliability?
Do you have any recommendation? Even just one "known good"...
(These chassis are also quite expensive from what I can see.)
Up to 24 and maybe even 48 drives you can find monolithic solutions (one
chassis for mainboard and disks), but then not many brands make 16-24
port controllers. LSI does not seem to make them, am I correct?
Also, with monolithic storage you only have to check that the controller
declares compatibility with the drives (sometimes there are indeed
issues), while for port-expander based storage I don't know if I should
check the compatibility list for drives against controllers, for
drives against the port expander, or for the controller against the port
expander...?
* Re: Typical RAID5 transfer speeds
2009-12-19 19:18 ` Leslie Rhorer
@ 2009-12-21 13:06 ` Goswin von Brederlow
0 siblings, 0 replies; 20+ messages in thread
From: Goswin von Brederlow @ 2009-12-21 13:06 UTC (permalink / raw)
To: Leslie Rhorer; +Cc: linux-raid
"Leslie Rhorer" <lrhorer@satx.rr.com> writes:
>> On 19/12/2009 01:05, Bernd Schubert wrote:
>> > On Saturday 19 December 2009, Matt Tehonica wrote:
>> >> I have a 4 disk RAID5 using a 2048K chunk size and using XFS
>> >
>> > 4 disks is a bad idea. You should have 2^n data disks, but you have 2^1 + 1 =
>> > 3 data disks. As parity information are calculated in the power of two and
>> > blocks are written in the power of two
>>
>> Sorry, but where did you get that from? p = d1 xor d2 xor d3 has nothing
>> to do with powers of two, and I'm sure blocks are written whenever they
>> need to be, not in powers of two.
>
> Yeah, I was scratching my head over that one, too. It sounded bogus
> to me, but I didn't want to open my mouth, so to speak, when I was unsure of
> myself. Being far from expert in the matter, I can't be certain, but I
> surely can think of no reason why writes would occur in powers of two, or
> even be more efficient because of it.
But d1/2/3 and p are 2^n bytes large, so a stripe is 3*2^n bytes, while
the filesystem usually aligns its data to 2^m boundaries.
So sequential writes of 1/2/4/8/16/32/64 MB are more likely than
3/6/12/24/48 MB. Often a (large) write will have a partial stripe at
the start and end of the request.
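A quick sketch of that, assuming the 2M chunk from this thread:

# Power-of-two write sizes line up with a 4-data-disk stripe (8 MiB) but
# almost never with a 3-data-disk stripe (6 MiB).
CHUNK_MIB = 2
for data_disks in (3, 4):
    stripe_mib = data_disks * CHUNK_MIB
    tails = {w: w % stripe_mib for w in (4, 8, 16, 32, 64)}
    print(f"{data_disks} data disks (stripe {stripe_mib} MiB): partial tails {tails}")
# 3 data disks: {4: 4, 8: 2, 16: 4, 32: 2, 64: 4}   -> almost always a partial stripe
# 4 data disks: {4: 4, 8: 0, 16: 0, 32: 0, 64: 0}   -> full stripes from 8 MiB up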
>> > you probably have read operations,
>> > when you only want to write.
>>
>> That will depend on how much data you're trying to write. With 3 data
>> discs and a 2M chunk size, writes in multiples of 6M won't need reads.
That assumes the filesystem puts a 6M sequential write onto 6M of
sequential blocks, i.e. is not fragmented. That never lasts long.
>> Writing a 25M file would therefore write 4 stripes and need to read to
>> do the last 1M. With 4 data discs, it'd be 8M multiples, and you'd write
>> 3 stripes and need a read to do the last 1M. No difference.
>
> I hadn't really considered this before, and I am curious. Of course
> there is no reason for md to read a stripe marked as being in use if the
> data to be written will fill an entire stripe. However, does it only apply
> this logic if the data will completely fill a stripe? The most efficient
> use of disk space of course will be accomplished if the system reads the
> potential partially used target stripe whenever the write buffer contains
> even 1 chunk less than a full stripe, but the most efficient write speeds
> will only check on writing to a partially used stripe if the write buffer
> contains less than half a stripe worth of data. Does anyone know which is
> the case?
I only know that in the raid6 case it always reads all data blocks and
recomputes the parity, while raid5 IIRC can update the parity by XORing
the old and new data blocks without having to read all data blocks of a
stripe. But I have no idea where the cutoff would be for this.
MfG
Goswin
* Re: Typical RAID5 transfer speeds
2009-12-21 11:30 ` Asdo
@ 2009-12-21 18:28 ` Richard Scobie
0 siblings, 0 replies; 20+ messages in thread
From: Richard Scobie @ 2009-12-21 18:28 UTC (permalink / raw)
To: Asdo; +Cc: Michael Evans, Roger Heflin, Matt Tehonica, linux-raid
Asdo wrote:
> Richard Scobie wrote:
>
>> Or use dumb LSI SAS controllers (sas3442e-r with IT firmware loaded,
>> $200) and port expander based chassis that allow considerable
>> flexibility regarding md RAID and number of SAS/SATA drives attached.
>
>
> The problem then becomes choosing the port expander chassis...
> I know nothing about this topic :-(
> Does the brand/model make difference in performance or reliability?
> Do you have any recommendation? Even just one "known good"...
> (These chassis are also quite expensive for what I can see.)
The only experience I have is with products from AIC.
I have used 3 of these with WD 750GB RE2 and 1TB RE3 SATA drives:
http://www.aicipc.com/ProductDetail.aspx?ref=RSC-3EC2-2
The port expander is Vitesse based and has a daisy chain output allowing
connection of another expander.
So far, in over 2 years of use I have not had any trouble with these
chassis and recently added this chassis (which is essentially the same,
but without motherboard space):
http://www.aicipc.com/ProductDetail.aspx?ref=XJ1100%20series%20-%203U%2016-bay
to the external SAS output on one system's controller.
The server chassis cost around US$3300, which I think is quite
reasonable for a solid, hot-swap (disks and fans) enclosure with a
redundant PSU.
Another brand which I have not used but know of others that do, is the
Dell MD1000.
> Up to 24 and maybe even 48 drives you can find monolithic solutions (one
> chassis for mainboard and disks), but then not many brands make 16-24
> ports controllers. LSI does not seem to make them, am I correct?
No, that is the point. These SAS controllers do not have discrete drive
outputs. The LSISAS3442E HBA I am using can address 122 drives and it is
the port expanders in the chassis that break out to the drives and allow
for daisy-chaining to more expanders. I'm sure there is a maximum
recommended number of expanders in a chain, but given the HBA has 2 x 4-port
outputs, a chassis on each, chained to another chassis each, covers
a lot of drives.
> Also in a monolithic storage you only have to check that the controller
> declares compatibility with the drives (sometimes there are indeed
> issues) while for port-replicator based storage I don't know if I should
> check the Compatibility List for drives against controllers, or for
> drives against port replicator, or for controller against port
> replicator...?
The AIC site has fairly comprehensive compatibility info for both
controllers and drives. Obviously both need to be considered for
compatibility with the expander.
Regards,
Richard
Thread overview: 20+ messages (newest: 2009-12-21 18:28 UTC)
2009-12-19 0:37 Typical RAID5 transfer speeds Matt Tehonica
2009-12-19 1:05 ` Bernd Schubert
2009-12-19 8:30 ` Thomas Fjellstrom
2009-12-19 9:38 ` Michael Evans
2009-12-19 11:43 ` John Robinson
2009-12-19 19:18 ` Leslie Rhorer
2009-12-21 13:06 ` Goswin von Brederlow
2009-12-19 21:35 ` Roger Heflin
2009-12-20 4:21 ` Michael Evans
2009-12-20 9:55 ` Thomas Fjellstrom
2009-12-20 14:53 ` Andre Tomt
2009-12-20 16:03 ` Thomas Fjellstrom
2009-12-20 18:28 ` Roger Heflin
2009-12-21 1:18 ` Michael Evans
2009-12-21 1:50 ` Richard Scobie
2009-12-21 11:30 ` Asdo
2009-12-21 18:28 ` Richard Scobie
2009-12-20 10:04 ` Erwan MAS
2009-12-20 10:31 ` Keld Jørn Simonsen
2009-12-20 15:25 ` Andre Tomt