* Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
@ 2008-07-15 14:06 Ben Martin
2008-07-15 14:42 ` Justin Piszcz
` (6 more replies)
0 siblings, 7 replies; 42+ messages in thread
From: Ben Martin @ 2008-07-15 14:06 UTC (permalink / raw)
To: linux-raid
Hi,
Apologies if posting this here is inappropriate but a recent article
of mine compares the Linux Kernel RAID code to an $800 hardware RAID
card and might be of interest to list members:
http://www.linux.com/feature/140734
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-15 14:06 Benchmarks: Linux Kernel RAID vs a Hardware RAID setup Ben Martin
@ 2008-07-15 14:42 ` Justin Piszcz
2008-07-16 4:14 ` Ben Martin
2008-07-15 15:46 ` Michal Soltys
` (5 subsequent siblings)
6 siblings, 1 reply; 42+ messages in thread
From: Justin Piszcz @ 2008-07-15 14:42 UTC (permalink / raw)
To: Ben Martin; +Cc: linux-raid, xfs
What were the commands used when creating the XFS filesystem
(sunit,swidth)?
Justin.
On Wed, 16 Jul 2008, Ben Martin wrote:
> Hi,
> Apologies if posting this here is inappropriate but a recent article
> of mine compares the Linux Kernel RAID code to an $800 hardware RAID
> card and might be of interest to list members:
>
> http://www.linux.com/feature/140734
>
>
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-15 14:06 Benchmarks: Linux Kernel RAID vs a Hardware RAID setup Ben Martin
2008-07-15 14:42 ` Justin Piszcz
@ 2008-07-15 15:46 ` Michal Soltys
2008-07-15 20:34 ` Richard Scobie
2008-07-16 3:36 ` Ben Martin
2008-07-15 16:39 ` Keld Jørn Simonsen
` (4 subsequent siblings)
6 siblings, 2 replies; 42+ messages in thread
From: Michal Soltys @ 2008-07-15 15:46 UTC (permalink / raw)
To: Ben Martin; +Cc: linux-raid
Ben Martin wrote:
> Hi,
> Apologies if posting this here is inappropriate but a recent article
> of mine compares the Linux Kernel RAID code to an $800 hardware RAID
> card and might be of interest to list members:
>
> http://www.linux.com/feature/140734
>
Very interesting. Btw - did you use stripe-width on the ext3 filesystems
as well, or only stride?
Also, looking back at other benchmarks, stripe_cache_size can have a pretty
tremendous effect on md raid performance. There are other settings that
could matter as well (read-ahead, queue depths). It would be interesting
to see e.g. the raid5 256k chunk comparison done with those altered
(_especially_ md's stripe_cache_size with some high value like 16384 or
32768), and with stripe-width used in the ext3 case (if it wasn't used already).
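For concreteness, a minimal sketch of the kind of tuning meant above,
assuming the array is /dev/md0 and its members are sdb..sdg - the names
and values are illustrative only:
# stripe_cache_size is per-array, counted in 4 KiB pages per device
echo 16384 > /sys/block/md0/md/stripe_cache_size
# read-ahead for the md device, in 512-byte sectors
blockdev --setra 65536 /dev/md0
# per-disk NCQ queue depth for each member disk
for d in sdb sdc sdd sde sdf sdg; do
    echo 31 > /sys/block/$d/device/queue_depth
done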
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-15 14:06 Benchmarks: Linux Kernel RAID vs a Hardware RAID setup Ben Martin
2008-07-15 14:42 ` Justin Piszcz
2008-07-15 15:46 ` Michal Soltys
@ 2008-07-15 16:39 ` Keld Jørn Simonsen
2008-07-15 16:50 ` thomas62186218
` (3 more replies)
2008-07-15 19:41 ` Keld Jørn Simonsen
` (3 subsequent siblings)
6 siblings, 4 replies; 42+ messages in thread
From: Keld Jørn Simonsen @ 2008-07-15 16:39 UTC (permalink / raw)
To: Ben Martin; +Cc: linux-raid
On Wed, Jul 16, 2008 at 12:06:09AM +1000, Ben Martin wrote:
> Hi,
> Apologies if posting this here is inappropriate but a recent article
> of mine compares the Linux Kernel RAID code to an $800 hardware RAID
> card and might be of interest to list members:
>
> http://www.linux.com/feature/140734
It is (at least to me) very interesting to see comparisons on HW vs SW
raid.
I think, though, that you introduce some bias by not testing the best
configurations for Linux SW raid. For raid10, for example, the f2 and o2
layouts are faster than n2, and you don't know whether Adaptec has some
kind of improved layout.
It would be interesting if you could enhance your article with benchmarks
on raid10, f2 and o2 layouts. I think they would outperform HW raid, at
least on input. And I would like to see how they perform on output and
rewrite, with ext3 and xfs. We do have some tests, but many of them are
without a file system layer.
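A minimal sketch of creating arrays for such a layout comparison with
mdadm - the member disks are hypothetical, and the array would be stopped
and the member superblocks zeroed between runs:
mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=4 /dev/sd[b-e]
mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=4 /dev/sd[b-e]
mdadm --create /dev/md0 --level=10 --layout=o2 --raid-devices=4 /dev/sd[b-e]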
I would also welcome tests with mobo HW RAID - many mobos today come
with some HW RAID functionality, and many people face the choice
between a HW and a SW configuration.
best regards
keld
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-15 16:39 ` Keld Jørn Simonsen
@ 2008-07-15 16:50 ` thomas62186218
2008-07-15 17:39 ` Keld Jørn Simonsen
2008-07-15 17:06 ` Jon Nelson
` (2 subsequent siblings)
3 siblings, 1 reply; 42+ messages in thread
From: thomas62186218 @ 2008-07-15 16:50 UTC (permalink / raw)
To: keld, monkeyiq; +Cc: linux-raid
Hi Ben,
Nice reporting on the benchmarks. It would be helpful, though, to run
these tests without a file system involved, using a block-level
benchmark utility like fio or similar, to really measure the RAID
performance in isolation. While you did use a file system in both your
hardware and software RAID tests, before directly implicating software
RAID it makes sense to isolate it as much as possible in the testing
by eliminating the file system from the benchmarks.
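A sketch of such a block-level run with fio - the device name and
parameters are hypothetical, and raw writes to the array are destructive:
fio --name=raid-randrw --filename=/dev/md0 --direct=1 \
    --rw=randrw --bs=4k --ioengine=libaio --iodepth=32 \
    --runtime=300 --time_based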
Thanks again for the report!
-Thomas
-----Original Message-----
From: Keld Jørn Simonsen <keld@dkuug.dk>
To: Ben Martin <monkeyiq@users.sourceforge.net>
Cc: linux-raid@vger.kernel.org
Sent: Tue, 15 Jul 2008 9:39 am
Subject: Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
On Wed, Jul 16, 2008 at 12:06:09AM +1000, Ben Martin wrote:
> Hi,
> Apologies if posting this here is inappropriate but a recent article
> of mine compares the Linux Kernel RAID code to an $800 hardware RAID
> card and might be of interest to list members:
>
> http://www.linux.com/feature/140734
It is (at least to me) very interesting to see comparisons on HW vs SW
raid.
I think, though, that you introduce some bias by not testing the best
configurations for Linux SW raid. For raid10, for example, the f2 and o2
layouts are faster than n2, and you don't know whether Adaptec has some
kind of improved layout.
It would be interesting if you could enhance your article with benchmarks
on raid10, f2 and o2 layouts. I think they would outperform HW raid, at
least on input. And I would like to see how they perform on output and
rewrite, with ext3 and xfs. We do have some tests, but many of them are
without a file system layer.
I would also welcome tests with mobo HW RAID - many mobos today come
with some HW RAID functionality, and many people face the choice
between a HW and a SW configuration.
best regards
keld
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-15 16:39 ` Keld Jørn Simonsen
2008-07-15 16:50 ` thomas62186218
@ 2008-07-15 17:06 ` Jon Nelson
2008-07-16 3:44 ` Ben Martin
2008-07-15 18:40 ` Brad Campbell
2008-07-16 3:58 ` Ben Martin
3 siblings, 1 reply; 42+ messages in thread
From: Jon Nelson @ 2008-07-15 17:06 UTC (permalink / raw)
To: Keld Jørn Simonsen; +Cc: Ben Martin, linux-raid
I would add to that request that you consider using a smaller chunk size.
My own benchmarks showed that for some raid levels and layouts, a
smaller chunk size gave the best overall performance.
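A sketch of such a chunk-size sweep - the devices are hypothetical, and
the array is rebuilt from scratch for each pass:
for chunk in 16 32 64 128 256 512 1024; do   # chunk sizes in KiB
    mdadm --create /dev/md0 --level=10 --chunk=$chunk \
        --raid-devices=4 /dev/sd[b-e]
    # run the benchmark of choice against /dev/md0 here
    mdadm --stop /dev/md0
    mdadm --zero-superblock /dev/sd[b-e]
done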
On Tue, Jul 15, 2008 at 11:39 AM, Keld Jørn Simonsen <keld@dkuug.dk> wrote:
...
> It would be interesting if you could enhance your article with benchmarks
> on raid10, f2 and o2 layouts. I think they would outperform HW raid, at
> least on input. And I would like to see how they perform on output and
> rewrite, with ext3 and xfs. We do have some tests, but many of them are
> without a file system layer.
--
Jon
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-15 16:50 ` thomas62186218
@ 2008-07-15 17:39 ` Keld Jørn Simonsen
2008-07-16 0:01 ` Richard Scobie
0 siblings, 1 reply; 42+ messages in thread
From: Keld Jørn Simonsen @ 2008-07-15 17:39 UTC (permalink / raw)
To: thomas62186218; +Cc: monkeyiq, linux-raid
On Tue, Jul 15, 2008 at 12:50:57PM -0400, thomas62186218@aol.com wrote:
> Hi Ben,
>
> Nice reporting on the benchmarks. It would be helpful though to run
> these tests without a file system involved, using a block level
> benchmark utility like fio or similar to really measure the RAID
> performance in isolation. While you did use a file system in both your
> hardware and software RAID tests, before directly implicating software
> RAID, it makes sense to isolate it as much as possible in the testing
> by eliminating the file system for benchmarks.
I think actually the use of the file systems is one of the strengths of
this report. Many benchmarks are done only on the raw raid systems, and
that gives some artificial benchmarks that are only of theoretical
interest, as the user really needs to have a FS to employ the raids.
And the file system layer can compensate a lot for some characteristics
of the raid types.
I would actually welcome more tests with specific user profiles, like
many small reads and writes for database use, and concurrent random
reading and writing to simulate the load on a server. What bonnie++ is
reporting is only sequential IO. This is important on workstations, but
actually not on servers.
I have a new category - namely sequential reads on a system already
running a workload of mostly random reading, but also some writing.
This is important on some of my servers, like an ftp server.
How fast can a new user get a file from an already loaded server?
I don't know how to measure it in a reproducible way, but I do have some
experimental figures.
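A rough sketch of one reproducible way to approximate it with fio -
background random readers plus a throttled writer, then a fresh
sequential read; every path, size and rate here is illustrative:
fio --directory=/mnt/tmpraid --size=2g --runtime=300 --time_based \
    --name=bg-randread --rw=randread --numjobs=4 \
    --name=bg-write --rw=randwrite --rate=5m \
    --name=newuser-seqread --rw=read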
Best regards
Keld
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-15 16:39 ` Keld Jørn Simonsen
2008-07-15 16:50 ` thomas62186218
2008-07-15 17:06 ` Jon Nelson
@ 2008-07-15 18:40 ` Brad Campbell
2008-07-15 20:12 ` Keld Jørn Simonsen
[not found] ` <487CF499.6080105@harddata.com>
2008-07-16 3:58 ` Ben Martin
3 siblings, 2 replies; 42+ messages in thread
From: Brad Campbell @ 2008-07-15 18:40 UTC (permalink / raw)
To: Keld Jørn Simonsen; +Cc: Ben Martin, linux-raid
Keld Jørn Simonsen wrote:
> I would also welcome test with mobo HW RAID - Many mobo's today come
> with some HW raid functionality, and many people would be in the
> situation of whether to choose a HW or SW configuration.
Do you have some pointers for information on motherboards that come with hardware raid? I'd be
interested to have a look at them. I've certainly seen plenty of "fakeraid" systems, but I've not
yet come across hardware raid on an MB.
Brad
--
Dolphins are so intelligent that within a few weeks they can
train Americans to stand at the edge of the pool and throw them
fish.
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-15 14:06 Benchmarks: Linux Kernel RAID vs a Hardware RAID setup Ben Martin
` (2 preceding siblings ...)
2008-07-15 16:39 ` Keld Jørn Simonsen
@ 2008-07-15 19:41 ` Keld Jørn Simonsen
2008-07-16 3:25 ` Ben Martin
2008-07-15 20:40 ` Peter Grandi
` (2 subsequent siblings)
6 siblings, 1 reply; 42+ messages in thread
From: Keld Jørn Simonsen @ 2008-07-15 19:41 UTC (permalink / raw)
To: Ben Martin; +Cc: linux-raid
On Wed, Jul 16, 2008 at 12:06:09AM +1000, Ben Martin wrote:
> Hi,
> Apologies if posting this here is inappropriate but a recent article
> of mine compares the Linux Kernel RAID code to an $800 hardware RAID
> card and might be of interest to list members:
>
> http://www.linux.com/feature/140734
Thanks for this! I have written about it on the wiki.
I see that the Adaptec has a mode 1E which looks a lot like Linux
raid10,o2. It could be fun to compare these, and also raid10,f2.
Also, if you wish, it would be nice to see what can actually be achieved
without the expensive Adaptec card, that is, what can be done with the
controller on the mobo, with 4-disk arrays.
And also, if the mobo controller has raid types, what could be achieved
with those...
best regards
keld
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-15 18:40 ` Brad Campbell
@ 2008-07-15 20:12 ` Keld Jørn Simonsen
2008-07-21 16:55 ` Bill Davidsen
[not found] ` <487CF499.6080105@harddata.com>
1 sibling, 1 reply; 42+ messages in thread
From: Keld Jørn Simonsen @ 2008-07-15 20:12 UTC (permalink / raw)
To: Brad Campbell; +Cc: Ben Martin, linux-raid
On Tue, Jul 15, 2008 at 10:40:26PM +0400, Brad Campbell wrote:
> Keld Jørn Simonsen wrote:
>
> >I would also welcome test with mobo HW RAID - Many mobo's today come
> >with some HW raid functionality, and many people would be in the
> >situation of whether to choose a HW or SW configuration.
>
> Do you have some pointers for information on motherboards that come with
> hardware raid? I'd be interested to have a look at them. I've certainly
> seen plenty of "fakeraid" systems, but I've not yet come across hardware
> raid on an MB.
It is possible that I am thinking of what you call fakeraid.
Anyway, this is what many users buy and consider employing.
And my understanding is that you can set up the mobo controller in the
BIOS, and then have it running with Linux, without further software.
I would then like to see what the differences are, and I hope to
document that Linux raid is better... Anyway I am first and foremost
just curious, and if HW/fake raid is faster or better, then I would
gladly recommend it.
best regards
keld
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
[not found] ` <487CF499.6080105@harddata.com>
@ 2008-07-15 20:15 ` Keld Jørn Simonsen
0 siblings, 0 replies; 42+ messages in thread
From: Keld Jørn Simonsen @ 2008-07-15 20:15 UTC (permalink / raw)
To: Maurice Hilarius; +Cc: Brad Campbell, Ben Martin, linux-raid
On Tue, Jul 15, 2008 at 01:03:53PM -0600, Maurice Hilarius wrote:
> Brad Campbell wrote:
> >Do you have some pointers for information on motherboards that come
> >with hardware raid? I'd be interested to have a look at them. I've
> >certainly seen plenty of "fakeraid" systems, but I've not yet come
> >across hardware raid on an MB.
> >
> >Brad
> Quite a few server boards now offer the LSI RAID SAS chips.
> Most only do RAID 0,1,10 (not 5 or 6)
It is my impression that many users want to employ raid for safety, and
possibly speed. A popular choice is raid1, and there I would advocate
using raid10 instead.
It looks like mobo raid1 is easier to use, though.
Best regards
keld
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-15 15:46 ` Michal Soltys
@ 2008-07-15 20:34 ` Richard Scobie
2008-07-16 2:34 ` Mr. James W. Laferriere
2008-07-16 3:36 ` Ben Martin
1 sibling, 1 reply; 42+ messages in thread
From: Richard Scobie @ 2008-07-15 20:34 UTC (permalink / raw)
To: Michal Soltys; +Cc: Ben Martin, linux-raid
Michal Soltys wrote:
> Also looking back at other benchmarks, stripe_cache_size can have pretty
> tremendous effect on md raid performance. There're other settings that
> could matter as well (read ahead, queue depths). It would be interesting
> to see e.g. raid5 256k chunk comparison, done with those altered (
> _especially_ md's stripe_cache_size with some high value like 16384 or
> 32768), and with stripe-width used in ext3 case (if it wasn't used
> already).
Agreed. On a 16 SATA drive, SAS-attached md RAID6, large streaming write
performance goes from around 110MB/s with the default stripe_cache_size
to 600MB/s using 16384.
There are major read performance gains too, from setting readahead on
the md device to 65536.
Regards,
Richard
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-15 14:06 Benchmarks: Linux Kernel RAID vs a Hardware RAID setup Ben Martin
` (3 preceding siblings ...)
2008-07-15 19:41 ` Keld Jørn Simonsen
@ 2008-07-15 20:40 ` Peter Grandi
2008-07-16 3:38 ` Eric Sandeen
2008-07-16 18:54 ` Alan D. Brunelle
6 siblings, 0 replies; 42+ messages in thread
From: Peter Grandi @ 2008-07-15 20:40 UTC (permalink / raw)
To: linux-raid
> [ ... ] compares the Linux Kernel RAID code to an $800
> hardware RAID card and might be of interest to list members:
> http://www.linux.com/feature/140734
Perhaps they could, if they were accompanied by vital details
like the exact experimental conditions used, because those who
talk of Bonnie++ and Iozone etc. as if they were mostly useful
benchmarks tend, in my experience, to forget about the several
pitfalls one can get into. I have seen several speed tests in
this mailing list from the usual suspects, and most range from
the grossly meaningless to the rather misleading.
The comparison above is somewhat pointless to read without
knowing which elevator, readahead, plugging/unplugging, kernel
version (even if some of these are somewhat implicit in Fedora
9, e.g. CFQ) and so on; also whether caching was enabled or not
between runs (even if the use, if properly done, of 100G
datasets with 2G memory might reduce the influence of that,
except of course for the CPU cost).
The "using equal-sized partitions" bit is also worrying: it is
not clear whether the sw RAID is built on the partitions or
vice versa. This has a huge influence on alignment, and I am
not sure that "chunk and stride aligned to the RAID where
possible" can be relied upon without seeing exactly how the
filesystems were created.
Also, the graphs seem mislabeled, as all results are reported
as coming from Bonnie++, including the metadata ones which are
more likely to be coming from Iozone. Also for comparison they
should have been drawn to the same Y scale.
There are so many nonlinearities in the Linux IO subsystem where
one can get wildly different results with small changes in
obscure parameters (or large changes in parameters that should
matter little, like the block device readahead) that overall
tests like the above are not that useful without a list of the
exact conditions.
Overall however most of the results are within the boundaries of
very rough plausibility, but the devil is in the details.
The overall conclusions that alignment matters a great deal with
parity RAID and that a system with a recent CPU and a PCIe bus
can outperform in most cases a hardware RAID card are not that
novel...
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-15 17:39 ` Keld Jørn Simonsen
@ 2008-07-16 0:01 ` Richard Scobie
2008-07-16 0:20 ` Jon Nelson
2008-07-16 4:23 ` Keld Jørn Simonsen
0 siblings, 2 replies; 42+ messages in thread
From: Richard Scobie @ 2008-07-16 0:01 UTC (permalink / raw)
To: Keld Jørn Simonsen; +Cc: thomas62186218, monkeyiq, linux-raid
Keld Jørn Simonsen wrote:
> I would actually welcome more tests with specific user profiles, like
> many small reads and writes for database use, and concurrent random
> reading and writing to simulate the load on a server. What bonnie++ is
> reporting is only sequential IO. This is important on workstations, but
> actually not on servers.
The following is from the Bonnie++ man page:
"There are two sections to the program's operations. The first is to
test the IO throughput in a fashion that is designed to simulate some
types of database applications. The second is to test creation, reading
and deleting many small files in a fashion similar to the usage patterns
of programs such as Squid or INN."
So I guess the author thinks it's valid for more than sequential I/O.
In any case, while we may be in a minority, Justin, I and a few others
are interested in sequential I/O, as we build servers required to read
and write multiple streams of uncompressed SD and HD video in realtime.
Regards,
Richard
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-16 0:01 ` Richard Scobie
@ 2008-07-16 0:20 ` Jon Nelson
2008-07-16 4:06 ` Ben Martin
2008-07-16 15:42 ` Ben Martin
2008-07-16 4:23 ` Keld Jørn Simonsen
1 sibling, 2 replies; 42+ messages in thread
From: Jon Nelson @ 2008-07-16 0:20 UTC (permalink / raw)
To: Richard Scobie
Cc: Keld Jørn Simonsen, thomas62186218, monkeyiq, linux-raid
On Tue, Jul 15, 2008 at 7:01 PM, Richard Scobie <richard@sauce.co.nz> wrote:
> Keld Jørn Simonsen wrote:
>
>> I would actually welcome more tests with specific user profiles, like
>> many small reads and writes for database use, and concurrent random
>> reading and writing to simulate the load on a server. What bonnie++ is
>> reporting is only sequential IO. This is important on workstations, but
>> actually not on servers.
>
> The following is from the Bonnie++ man page:
>
> "There are two sections to the program's operations. The first is to
> test the IO throughput in a fashion that is designed to simulate some
> types of database applications. The second is to test creation, reading and
> deleting many small files in a fashion similar to the usage patterns of
> programs such as Squid or INN."
>
> So I guess the author thinks it's valid for more than sequential I/O.
>
> In any case, while we may be in a minority, Justin, I and a few others are
> interested in sequential I/O, as we build servers required to read and write
> multiple streams of uncompressed SD and HD video in realtime.
In that case, the 'fstest' program (google for it; it's associated
with the samba folks IIRC) might be just the ticket.
You specify the number of children (real children, not threads), the
number of files each child will open, populate, verify, and then
delete, the size of the files, and a few other options. It very nicely
simulates a set of totally greedy I/O intensive processes.
--
Jon
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-15 20:34 ` Richard Scobie
@ 2008-07-16 2:34 ` Mr. James W. Laferriere
2008-07-16 2:48 ` Richard Scobie
2008-07-16 5:50 ` Michal Soltys
0 siblings, 2 replies; 42+ messages in thread
From: Mr. James W. Laferriere @ 2008-07-16 2:34 UTC (permalink / raw)
To: Richard Scobie; +Cc: linux-raid maillist
Hello Richard ,
On Wed, 16 Jul 2008, Richard Scobie wrote:
> Michal Soltys wrote:
>
>> Also looking back at other benchmarks, stripe_cache_size can have pretty
>> tremendous effect on md raid performance. There're other settings that
>> could matter as well (read ahead, queue depths). It would be interesting to
>> see e.g. raid5 256k chunk comparison, done with those altered (
>> _especially_ md's stripe_cache_size with some high value like 16384 or
>> 32768), and with stripe-width used in ext3 case (if it wasn't used
>> already).
>
> Agreed. On a 16 SATA drive SAS attached md RAID6, large streaming write
> performance goes from around 110MB/s with the default stripe_cache_size, to
> 600MB/s using 16384.
>
> There are major read performance too, setting readahead on the md device to
> 65536.
At what level (2.6.26) does one set readahead?
I've searched 'mount', 'mdadm', 'Documentation/md.txt'. Where
else should I look? Obviously I am missing something.
> Regards,
> Richard
TIa , JimL
--
+------------------------------------------------------------------+
| James W. Laferriere | System Techniques | Give me VMS |
| Network&System Engineer | 2133 McCullam Ave | Give me Linux |
| babydr@baby-dragons.com | Fairbanks, AK. 99701 | only on AXP |
+------------------------------------------------------------------+
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-16 2:34 ` Mr. James W. Laferriere
@ 2008-07-16 2:48 ` Richard Scobie
2008-07-16 2:52 ` Mr. James W. Laferriere
2008-07-16 5:50 ` Michal Soltys
1 sibling, 1 reply; 42+ messages in thread
From: Richard Scobie @ 2008-07-16 2:48 UTC (permalink / raw)
To: Mr. James W. Laferriere; +Cc: linux-raid maillist
Mr. James W. Laferriere wrote:
> At what level (2.6.26) does one set readahead ?
> I've searched 'mount' , 'mdadm' , 'Documentation/md.txt' . Where
> else should I look ? Obviously I am missing something .
Hi Jim,
/sbin/blockdev --setra 65536 /dev/md5 gets it done here.
Regards,
Richard
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-16 2:48 ` Richard Scobie
@ 2008-07-16 2:52 ` Mr. James W. Laferriere
2008-07-16 4:17 ` Keld Jørn Simonsen
0 siblings, 1 reply; 42+ messages in thread
From: Mr. James W. Laferriere @ 2008-07-16 2:52 UTC (permalink / raw)
To: Richard Scobie; +Cc: linux-raid maillist
Hello Richard ,
On Wed, 16 Jul 2008, Richard Scobie wrote:
> Mr. James W. Laferriere wrote:
>> At what level (2.6.26) does one set readahead ?
>> I've searched 'mount' , 'mdadm' , 'Documentation/md.txt' . Where
>> else should I look ? Obviously I am missing something .
>
> Hi Jim,
>
> /sbin/blockdev --setra 65536 /dev/md5 gets it done here.
>
> Regards,
> Richard
This does not bode well for my memory - it's already in my
optimization shell script for startup. I thought it looked very
familiar, but the usual places didn't bring it up.
Thank you , JimL
--
+------------------------------------------------------------------+
| James W. Laferriere | System Techniques | Give me VMS |
| Network&System Engineer | 2133 McCullam Ave | Give me Linux |
| babydr@baby-dragons.com | Fairbanks, AK. 99701 | only on AXP |
+------------------------------------------------------------------+
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-15 19:41 ` Keld Jørn Simonsen
@ 2008-07-16 3:25 ` Ben Martin
0 siblings, 0 replies; 42+ messages in thread
From: Ben Martin @ 2008-07-16 3:25 UTC (permalink / raw)
To: Keld Jørn Simonsen; +Cc: linux-raid, Ben Martin
On Tue, 2008-07-15 at 21:41 +0200, Keld Jørn Simonsen wrote:
> On Wed, Jul 16, 2008 at 12:06:09AM +1000, Ben Martin wrote:
> > Hi,
> > Apologies if posting this here is inappropriate but a recent article
> > of mine compares the Linux Kernel RAID code to an $800 hardware RAID
> > card and might be of interest to list members:
> >
> > http://www.linux.com/feature/140734
>
> Thanks for this! I have written about it on the wiki.
Excellent :)
>
> I see that the Adaptec has a mode 1E which looks a lot like Linux
> raid10,o2. It could be fun to compare these, and also raid10,f2.
I'd also like to compare the 5EE type modes, as RAID-6 does bring a fair
degradation of performance in exchange for its two-failure protection.
>
> Also, if you wish, it would be nice to see what can actually be achieved
> without the expensive Adaptec card, that is what can be done with the
> controller on the mobo, with arrays with 4 disks.
I was actually a bit worried that the card was geared toward the middle
tier, as there are more performance-oriented cards around (with
commensurate prices).
>
> And also if the mobo controller has raid types, what could be achieved
> with this....
This would be very interesting. Taking a common mid-level P45 board
and benchmarking the mobo RAID vs the kernel RAID.
>
> best regards
> keld
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-15 15:46 ` Michal Soltys
2008-07-15 20:34 ` Richard Scobie
@ 2008-07-16 3:36 ` Ben Martin
2008-07-16 3:55 ` Richard Scobie
1 sibling, 1 reply; 42+ messages in thread
From: Ben Martin @ 2008-07-16 3:36 UTC (permalink / raw)
To: Michal Soltys; +Cc: linux-raid, Ben Martin
On Tue, 2008-07-15 at 17:46 +0200, Michal Soltys wrote:
> Ben Martin wrote:
> > Hi,
> > Apologies if posting this here is inappropriate but a recent article
> > of mine compares the Linux Kernel RAID code to an $800 hardware RAID
> > card and might be of interest to list members:
> >
> > http://www.linux.com/feature/140734
> >
>
> Very interesting. Btw - did you use stripe-width on ext3s as well, or
> only stride ?
Only stride, unfortunately. After digging through this ML's archives,
stride was mentioned a lot but stripe-width was not. I still have some
reserved partitions across the RAID for future benchmarks and
verification, so I'll whip up a comparison of using stripe-width vs not
using it at some stage (ah, free time).
Perhaps this page could use some love to have stripe-width appended:
http://linux-raid.osdl.org/index.php/RAID_setup#Options_for_mke2fs
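For reference, a sketch of passing both options to mke2fs - the numbers
assume a 256 KiB chunk, 4 KiB blocks and 4 data disks (RAID-6 on 6
drives), which matches the article's setup only by assumption:
# stride = chunk / block size = 256 KiB / 4 KiB = 64
# stripe-width = stride * data disks = 64 * 4 = 256
mke2fs -j -b 4096 -E stride=64,stripe-width=256 /dev/md0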
>
> Also looking back at other benchmarks, stripe_cache_size can have pretty
> tremendous effect on md raid performance. There're other settings that
> could matter as well (read ahead, queue depths). It would be interesting
> to see e.g. raid5 256k chunk comparison, done with those altered (
> _especially_ md's stripe_cache_size with some high value like 16384 or
> 32768), and with stripe-width used in ext3 case (if it wasn't used already).
I have been meaning to ask for a while: why is the stripe_cache_size
set so low by default if it has such an effect on performance?
Obviously being conservative by default is less likely to get the kernel
devs into trouble, but perhaps the system could notice that it has 8GB
of RAM and two RAID-$whatever setups in /etc/mdadm.conf, and use some
simple dynamic algorithm to figure out that, for this amount of system
RAM and two RAIDs of type $whatever, it should chomp up say 256MB of
RAM for each RAID and turn readahead on for each mdadm device.
I'm not sure if each individual Linux distro cares enough about this to
hack up their boot-time scripts to do it.
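A rough sketch of what such a boot-time heuristic could look like - the
1/32-of-RAM rule and the cap are made up purely for illustration:
#!/bin/bash
# Give each md array a stripe cache proportional to RAM, capped at 32768.
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
arrays=$(ls -d /sys/block/md*/md 2>/dev/null | wc -l)
[ "$arrays" -gt 0 ] || exit 0
# stripe_cache_size is counted in 4 KiB pages per member device.
pages=$(( mem_kb / 4 / 32 / arrays ))
[ "$pages" -gt 32768 ] && pages=32768
for d in /sys/block/md*/md; do
    # only raid4/5/6 arrays expose stripe_cache_size
    [ -e "$d/stripe_cache_size" ] && echo "$pages" > "$d/stripe_cache_size"
    blockdev --setra 65536 "/dev/$(basename "$(dirname "$d")")"
done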
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-15 14:06 Benchmarks: Linux Kernel RAID vs a Hardware RAID setup Ben Martin
` (4 preceding siblings ...)
2008-07-15 20:40 ` Peter Grandi
@ 2008-07-16 3:38 ` Eric Sandeen
2008-07-16 18:54 ` Alan D. Brunelle
6 siblings, 0 replies; 42+ messages in thread
From: Eric Sandeen @ 2008-07-16 3:38 UTC (permalink / raw)
To: Ben Martin; +Cc: linux-raid
Ben Martin wrote:
> Hi,
> Apologies if posting this here is inappropriate but a recent article
> of mine compares the Linux Kernel RAID code to an $800 hardware RAID
> card and might be of interest to list members:
>
> http://www.linux.com/feature/140734
>
Ben, if you get fired up, you might run ext4 over the same suite of
tests... :) The ext4 allocator is a bit more raid-geometry-aware than ext3.
-Eric
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-15 17:06 ` Jon Nelson
@ 2008-07-16 3:44 ` Ben Martin
0 siblings, 0 replies; 42+ messages in thread
From: Ben Martin @ 2008-07-16 3:44 UTC (permalink / raw)
To: Jon Nelson; +Cc: Keld Jørn Simonsen, linux-raid, Ben Martin
This sounds perfect for an article that compares mobo fakeraid/BIOS-raid
for RAID-10 (if a board with it can be decided on) and kernel RAID-10. A
line graph of the bonnie xfer stats from say 16KB chunks through 1MB.
If/when I get this happening I'll post a link again to the list.
On Tue, 2008-07-15 at 12:06 -0500, Jon Nelson wrote:
> I would add to that request that you consider using a smaller chunk size.
> My own benchmarks showed that for some raid levels and layouts, a
> smaller chunk size gave the best overall performance.
>
>
> On Tue, Jul 15, 2008 at 11:39 AM, Keld Jørn Simonsen <keld@dkuug.dk> wrote:
> ...
> > It would be interesting if you could enhance your article with benchmarks
> > on raid10, f2 and o2 layouts. I think they would outperform HW raid, at
> > least on input. And I would like to see how they perform on output and
> > rewrite, with ext3 and xfs. We do have some tests, but many of them are
> > without a file system layer.
>
>
>
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-16 3:36 ` Ben Martin
@ 2008-07-16 3:55 ` Richard Scobie
2008-07-16 7:00 ` Dan Williams
2008-07-16 17:08 ` thomas62186218
0 siblings, 2 replies; 42+ messages in thread
From: Richard Scobie @ 2008-07-16 3:55 UTC (permalink / raw)
To: Ben Martin; +Cc: linux-raid
Ben Martin wrote:
> I have been meaning to ask for a while, why is the stripe_cache_size
> set so low by default if it has such an effect on performance?
I can't comment on the above, but looking at this:
http://marc.info/?l=linux-raid&m=120668318422034&w=2
with the 2.6.26 release, bumping up the stripe cache should no longer be
necessary.
Regards,
Richard
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-15 16:39 ` Keld Jørn Simonsen
` (2 preceding siblings ...)
2008-07-15 18:40 ` Brad Campbell
@ 2008-07-16 3:58 ` Ben Martin
2008-07-16 4:47 ` Keld Jørn Simonsen
3 siblings, 1 reply; 42+ messages in thread
From: Ben Martin @ 2008-07-16 3:58 UTC (permalink / raw)
To: Keld Jørn Simonsen; +Cc: linux-raid, Ben Martin
On Tue, 2008-07-15 at 18:39 +0200, Keld Jørn Simonsen wrote:
> On Wed, Jul 16, 2008 at 12:06:09AM +1000, Ben Martin wrote:
> > Hi,
> > Apologies if posting this here is inappropriate but a recent article
> > of mine compares the Linux Kernel RAID code to an $800 hardware RAID
> > card and might be of interest to list members:
> >
> > http://www.linux.com/feature/140734
>
> It is (at least to me) very interesting to see comparisons on HW vs SW
> raid.
>
> I think tho, that you give it some bias not testing the best
> configurations for Linux SW raid. For example raid10, the f2 and o2
> layouts are faster than n2, and you don't know if Adaptec has some kind
> of improved layout.
This was one of those compromises in the article. If I was going to
fully treat RAID-10 then I'd have no time to do any sort of justice to
parity RAID configurations. I'll stay away from the parity vs r10 debate
that has arisen... I tried to make the article interesting to readers
wanting to use either configuration.
One interesting question about the r10 layouts: if f2/o2 perform better
than n2, why is the default for creation still n2? Just like mkfs.xfs
determining the raid chunk and stripe size for you (assuming you don't
use LVM to circumvent this, gah!), shouldn't mdadm select the "fastest"
--layout as the default for RAID-10?
>
> It would be interesting if you could enhance your article with benchmarks
> on raid10, f2 and o2 layouts. I think they would outperform HW raid, at
> least on input. And I would like to see how they perform on output and
> rewrite, with ext3 and xfs. We do have some tests, but many of them are
> without a file system layer.
This will probably be the target of a new article at some point which
just focuses on R10.
>
> I would also welcome test with mobo HW RAID - Many mobo's today come
> with some HW raid functionality, and many people would be in the
> situation of whether to choose a HW or SW configuration.
>
> best regards
> keld
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-16 0:20 ` Jon Nelson
@ 2008-07-16 4:06 ` Ben Martin
2008-07-16 15:42 ` Ben Martin
1 sibling, 0 replies; 42+ messages in thread
From: Ben Martin @ 2008-07-16 4:06 UTC (permalink / raw)
To: Jon Nelson
Cc: Richard Scobie, Keld Jørn Simonsen, thomas62186218,
linux-raid, Ben Martin
On Tue, 2008-07-15 at 19:20 -0500, Jon Nelson wrote:
> On Tue, Jul 15, 2008 at 7:01 PM, Richard Scobie <richard@sauce.co.nz> wrote:
> > Keld Jørn Simonsen wrote:
> >
> >> I would actually welcome more tests with specific user profiles, like
> >> many small reads and writes for database use, and concurrent random
> >> reading and writing to simulate the load on a server. What bonnie++ is
> >> reporting is only sequential IO. This is important on workstations, but
> >> actually not on servers.
> >
> > The following is from the Bonnie++ man page:
> >
> > "There are two sections to the program's operations. The first is to
> > test the IO throughput in a fashion that is designed to simulate some
> > types of database applications. The second is to test creation, reading and
> > deleting many small files in a fashion similar to the usage patterns of
> > programs such as Squid or INN."
> >
> > So I guess the author thinks it's valid for more than sequential I/O.
> >
> > In any case, while we may be in a minority, Justin, I and a few others are
> > interested in sequential I/O, as we build servers required to read
> and write
> > multiple streams of uncompressed SD and HD video in realtime.
>
> In that case, the 'fstest' program (google for it, it's associated
> with the samba folks IIRC), might be just the ticket.
> You specify the number of children (real children, not threads), the
> number of files each child will open, populate, verify, and then
> delete, the size of the files, and a few other options. It very nicely
> simulates a set of totally greedy I/O intensive processes.
>
I was thinking of benchmarks at a higher level too.
fstest and fio seem like good candidates for building a test
configuration that can easily be run on another system without much
setup required.
A few other ideas I had were using postal to simulate a mail server
environment and doing "something" with PostgreSQL. The latter is really
a fairly large can of worms, because you would need to lock down the
various RAID caches, readahead (which might actually adversely affect
performance if the kernel second-guesses PG badly), the PG configuration
(e.g. its caching settings) and of course where the base tables and
indexes land on disk.
If anyone has pointers for testing PG performance on RAID I'd love to
hear about them either on / off list.
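Not a full answer, but pgbench (shipped with PostgreSQL) is a common
starting point; the scale factor and client counts below are
illustrative only:
createdb bench
pgbench -i -s 100 bench          # initialize the test tables
pgbench -c 16 -t 10000 bench     # 16 clients, 10000 transactions each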
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-15 14:42 ` Justin Piszcz
@ 2008-07-16 4:14 ` Ben Martin
0 siblings, 0 replies; 42+ messages in thread
From: Ben Martin @ 2008-07-16 4:14 UTC (permalink / raw)
To: Justin Piszcz; +Cc: linux-raid, xfs, Ben Martin
On Tue, 2008-07-15 at 10:42 -0400, Justin Piszcz wrote:
> What were the commands used when creating the XFS filesystem
> (sunit,swidth)?
>
> Justin.
I noticed that on the hardware card I had to use nobarrier to get decent
file metadata performance (create, del etc). I have also noticed
recently, on a 32GB Mtron SSD running XFS, that with barriers enabled
metadata performance was an order of magnitude slower.
The below is a snippet from the hardware RAID testing config. The XFS
creation and mounting args are identical for software RAID.
#!/bin/bash
RAIDLEVEL=6
CHUNK_SZ_KB=256
PARITY_DRIVE_COUNT=2
NON_PARITY_DRIVE_COUNT=4
DEVICE=/dev/disk/by-id/scsi-SAdaptec_1024ktmpraid6
...
run_bonnie() {
    fsdev=$1
    mountopts=$2
    N=256
    mount -o "$mountopts" $DEVICE /mnt/tmpraid
    chown ben /mnt/tmpraid
    sync
    sleep 1
    sudo -u ben /usr/sbin/bonnie++ -q -m $fsdev -n $N -d /mnt/tmpraid \
        >> /T/adaptec-raid${RAIDLEVEL}-${CHUNK_SZ_KB}kb-chunks-bonnie.csv
    umount /mnt/tmpraid
}
...
fsdev=hardxfsdefaultnb
mkfs.xfs -f -l lazy-count=1 \
    $DEVICE
run_bonnie $fsdev "nobarrier"
fsdev=hardxfsalign
mkfs.xfs -f -s size=4096 \
    -d sunit=$(($CHUNK_SZ_KB*2)),swidth=$(($CHUNK_SZ_KB*2*$NON_PARITY_DRIVE_COUNT)) \
    -l lazy-count=1 \
    $DEVICE
run_bonnie $fsdev "nobarrier"
fsdev=hardxfsdlalign
mkfs.xfs -f -s size=4096 \
    -d sunit=$(($CHUNK_SZ_KB*2)),swidth=$(($CHUNK_SZ_KB*2*$NON_PARITY_DRIVE_COUNT)) \
    -l lazy-count=1,sunit=$((CHUNK_SZ_KB*2)),size=128m \
    $DEVICE
run_bonnie $fsdev "nobarrier"
# run iozone on it.
fsdev=hardxfsalign
mkfs.xfs -f -s size=4096 \
    -d sunit=$(($CHUNK_SZ_KB*2)),swidth=$(($CHUNK_SZ_KB*2*$NON_PARITY_DRIVE_COUNT)) \
    -l lazy-count=1 \
    $DEVICE
mount -o "nobarrier" $DEVICE /mnt/tmpraid
chown ben /mnt/tmpraid
sync
sleep 1
sudo -u ben iozone -a -g 4G -f /mnt/tmpraid/iozone_file \
    > /T/adaptec-raid${RAIDLEVEL}-${CHUNK_SZ_KB}kb-chunks-iozone.txt
umount /mnt/tmpraid
>
> On Wed, 16 Jul 2008, Ben Martin wrote:
>
> > Hi,
> > Apologies if posting this here is inappropriate but a recent article
> > of mine compares the Linux Kernel RAID code to an $800 hardware RAID
> > card and might be of interest to list members:
> >
> > http://www.linux.com/feature/140734
> >
> >
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-16 2:52 ` Mr. James W. Laferriere
@ 2008-07-16 4:17 ` Keld Jørn Simonsen
0 siblings, 0 replies; 42+ messages in thread
From: Keld Jørn Simonsen @ 2008-07-16 4:17 UTC (permalink / raw)
To: Mr. James W. Laferriere; +Cc: Richard Scobie, linux-raid maillist
On Tue, Jul 15, 2008 at 06:52:59PM -0800, Mr. James W. Laferriere wrote:
> Hello Richard ,
>
> On Wed, 16 Jul 2008, Richard Scobie wrote:
> >Mr. James W. Laferriere wrote:
> >> At what level (2.6.26) does one set readahead ?
> >> I've searched 'mount' , 'mdadm' , 'Documentation/md.txt' . Where
> >>else should I look ? Obviously I am missing something .
> >
> >Hi Jim,
> >
> >/sbin/blockdev --setra 65536 /dev/md5 gets it done here.
> >
> >Regards,
> >Richard
>
> This does not bode well for my memory , It's already in my
> optimizations shell script for startup . I thought it looked very familiar
> , But the usual places didn't bring it up .
There is a note on the wiki about performance.
Best regards
keld
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-16 0:01 ` Richard Scobie
2008-07-16 0:20 ` Jon Nelson
@ 2008-07-16 4:23 ` Keld Jørn Simonsen
2008-07-16 5:18 ` Richard Scobie
1 sibling, 1 reply; 42+ messages in thread
From: Keld Jørn Simonsen @ 2008-07-16 4:23 UTC (permalink / raw)
To: Richard Scobie; +Cc: thomas62186218, monkeyiq, linux-raid
On Wed, Jul 16, 2008 at 12:01:08PM +1200, Richard Scobie wrote:
> Keld Jørn Simonsen wrote:
>
> >I would actually welcome more tests with specific user profiles, like
> >many small reads and writes for database use, and concurrent random
> >reading and writing to simulate the load on a server. What bonnie++ is
> >reporting is only sequential IO. This is important on workstations, but
> >actually not on servers.
>
> The following is from the Bonnie++ man page:
>
> "There are two sections to the program's operations. The first is to
> test the IO throughput in a fashion that is designed to simulate some
> types of database applications. The second is to test creation, reading
> and deleting many small files in a fashion similar to the usage patterns
> of programs such as Squid or INN."
>
> So I guess the author thinks it's valid for more than sequential I/O.
Yes, there are tests in bonnie++ that address some of it, such as
rewrite and deletes etc.
My understanding, though, is that the IO output and input figures are
for sequential IO. There are no figures on random IO.
> In any case, while we may be in a minority, Justin, I and a few others
> are interested in sequential I/O, as we build servers required to read
> and write multiple streams of uncompressed SD and HD video in realtime.
I think this is random IO, that is, a number of users reading a number
of files sequentially at the same time. I have similar uses, running an
ftp site and also a video site.
best regards
keld
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-16 3:58 ` Ben Martin
@ 2008-07-16 4:47 ` Keld Jørn Simonsen
2008-07-21 16:58 ` Bill Davidsen
0 siblings, 1 reply; 42+ messages in thread
From: Keld Jørn Simonsen @ 2008-07-16 4:47 UTC (permalink / raw)
To: Ben Martin; +Cc: linux-raid
On Wed, Jul 16, 2008 at 01:58:05PM +1000, Ben Martin wrote:
> On Tue, 2008-07-15 at 18:39 +0200, Keld Jørn Simonsen wrote:
> > On Wed, Jul 16, 2008 at 12:06:09AM +1000, Ben Martin wrote:
>
> One interesting question about the r10 layouts, if f2/o2 perform better
> than n2 why is the default for creation still n2? Just like mkfs.xfs
> determining the raid chunk and stripe size for you (assuming you don't
> use LVM to circumvent this, gah!), shouldn't mdadm select the "fastest"
> --layout as the default for RAID-10?
Perhaps. I think Neil first implemented "near", and more or less at the same
time "far" - but to him the main thing was "near". This was 2.6.9. Then
in 2.6.18 he implemented "offset". In the early days the
characteristics of each layout were not well known. It is only in the
last year or so that I have seen benchmarks with n2, f2 and o2.
> >
> > It would be interesting if you could enhance your article with benchmarks
> > on raid10, f2 and o2 layouts. I think they would outperform HW raid, at
> > least on input. And I would like to see how they perform on output and
> > rewrite, with ext3 and xfs. We do have some tests, but many of them are
> > without a file system layer.
>
> This will probably be the target of a new article at some point which
> just focuses on R10.
Sounds good!
best regards
keld
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-16 4:23 ` Keld Jørn Simonsen
@ 2008-07-16 5:18 ` Richard Scobie
2008-07-16 8:17 ` Keld Jørn Simonsen
0 siblings, 1 reply; 42+ messages in thread
From: Richard Scobie @ 2008-07-16 5:18 UTC (permalink / raw)
To: Keld Jørn Simonsen; +Cc: Linux RAID Mailing List
Keld Jørn Simonsen wrote:
> My understanding tho, is that the IO output and input figures are on
> sequential IO. There are no figures on random IO,
If you look at some results Justin posted:
http://home.comcast.net/~jpiszcz/20080707/veliciraptors_with_x4.html
The column "Random Create" shows performance for randomly creating,
reading and deleting 16384 files sized between 16 bytes and 1MB,
distributed randomly through 64 directories (this is the -n parameter
and is shown in the "Num Files" column).
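That workload corresponds to an invocation along these lines (a sketch;
the target directory is hypothetical):
# -n <files, in multiples of 1024>:<max size>:<min size>:<directories>
bonnie++ -n 16:1048576:16:64 -d /mnt/tmpraid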
Regards,
Richard
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-16 2:34 ` Mr. James W. Laferriere
2008-07-16 2:48 ` Richard Scobie
@ 2008-07-16 5:50 ` Michal Soltys
1 sibling, 0 replies; 42+ messages in thread
From: Michal Soltys @ 2008-07-16 5:50 UTC (permalink / raw)
To: Mr. James W. Laferriere; +Cc: Richard Scobie, linux-raid maillist
Mr. James W. Laferriere wrote:
> Hello Richard ,
>
> At what level (2.6.26) does one set readahead ?
> I've searched 'mount' , 'mdadm' , 'Documentation/md.txt' . Where
> else should I look ? Obviously I am missing something .
>
From what I know, only the layer directly under the filesystem is
meaningful in any way - check out this post:
http://marc.info/?l=linux-raid&m=120659089505963&w=2
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-16 3:55 ` Richard Scobie
@ 2008-07-16 7:00 ` Dan Williams
2008-07-16 17:08 ` thomas62186218
1 sibling, 0 replies; 42+ messages in thread
From: Dan Williams @ 2008-07-16 7:00 UTC (permalink / raw)
To: Richard Scobie; +Cc: Ben Martin, linux-raid
On Tue, Jul 15, 2008 at 8:55 PM, Richard Scobie <richard@sauce.co.nz> wrote:
> Ben Martin wrote:
>
>> I have been meaning to ask for a while, why is the stripe_cache_size
>> set so low by default if it has such an effect on performance?
>
> I can't comment on the above, but looking at this:
>
> http://marc.info/?l=linux-raid&m=120668318422034&w=2
>
> with the 2.6.26 release, bumping up the stripe cache should no longer be
> necessary.
>
To be clear, bumping up the cache size can still increase performance,
but the ratio of performance/stripe_cache_size should now be larger,
i.e. more sequential write performance at smaller cache sizes.
> Regards,
>
> Richard
--
Dan
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-16 5:18 ` Richard Scobie
@ 2008-07-16 8:17 ` Keld Jørn Simonsen
0 siblings, 0 replies; 42+ messages in thread
From: Keld Jørn Simonsen @ 2008-07-16 8:17 UTC (permalink / raw)
To: Richard Scobie; +Cc: Linux RAID Mailing List
On Wed, Jul 16, 2008 at 05:18:46PM +1200, Richard Scobie wrote:
> Keld Jørn Simonsen wrote:
>
> >My understanding tho, is that the IO output and input figures are on
> >sequential IO. There are no figures on random IO,
>
> If you look at some results Justin posted:
>
> http://home.comcast.net/~jpiszcz/20080707/veliciraptors_with_x4.html
>
> The column "Random Create" shows performance for randomly creating,
> reading and deleting 16384 files sized between 16bytes and 1MB
> distributed randomly through 64 directories (this is the -n parameter
> and is shown in the "Num Files" column.)
Yes, but these are not random IO in the sense that I meant it, which was
unclear. These are specific system calls, deletes, and seeks, with some
IO on the inodes and maybe releasing blocks to the free block list.
I meant to say: there are no tests in bonnie++ that measure random
read and write throughput.
best regards
keld
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-16 0:20 ` Jon Nelson
2008-07-16 4:06 ` Ben Martin
@ 2008-07-16 15:42 ` Ben Martin
1 sibling, 0 replies; 42+ messages in thread
From: Ben Martin @ 2008-07-16 15:42 UTC (permalink / raw)
To: Jon Nelson
Cc: Richard Scobie, Keld Jørn Simonsen, thomas62186218,
linux-raid, Ben Martin
On Tue, 2008-07-15 at 19:20 -0500, Jon Nelson wrote:
> On Tue, Jul 15, 2008 at 7:01 PM, Richard Scobie <richard@sauce.co.nz> wrote:
> > Keld Jørn Simonsen wrote:
> >
> >> I would actually welcome more tests with specific user profiles, like
> >> many small reads and writes for database use, and concurrent random
> >> reading and writing to simulate the load on a server. What bonnie++ is
> >> reporting is only sequential IO. This is important on workstations, but
> >> actually not on servers.
> >
> > The following is from the Bonnie++ man page:
> >
> > "There are two sections to the program's operations. The first is to
> > test the IO throughput in a fashion that is designed to simulate some
> > types of database applications. The second is to test creation, reading and
> > deleting many small files in a fashion similar to the usage patterns of
> > programs such as Squid or INN."
> >
> > So I guess the author thinks it's valid for more than sequential I/O.
> >
> > In any case, while we may be in a minority, Justin, I and a few others are
> > interested in sequential I/O, as we build servers required to read and write
> > multiple streams of uncompressed SD and HD video in realtime.
>
> In that case, the 'fstest' program (google for it, it's associated
> with the samba folks IIRC), might be just the ticket.
Since I was digging around, the link for future readers:
http://www.samba.org/ftp/unpacked/junkcode/fstest.c
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-16 3:55 ` Richard Scobie
2008-07-16 7:00 ` Dan Williams
@ 2008-07-16 17:08 ` thomas62186218
2008-07-16 19:34 ` Richard Scobie
1 sibling, 1 reply; 42+ messages in thread
From: thomas62186218 @ 2008-07-16 17:08 UTC (permalink / raw)
To: richard, monkeyiq; +Cc: linux-raid
Richard,
Are you sure that the patch you reference below is actually in 2.6.26?
I checked the CHANGELOG and couldn't find anything about "get priority
stripe" in there (maybe it's called something else now)?
Thanks
-Thomas
-----Original Message-----
From: Richard Scobie <richard@sauce.co.nz>
To: Ben Martin <monkeyiq@users.sourceforge.net>
Cc: linux-raid@vger.kernel.org
Sent: Tue, 15 Jul 2008 8:55 pm
Subject: Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
Ben Martin wrote:
> I have been meaning to ask for a while, why is the stripe_cache_size
> set so low by default if it has such an effect on performance?
I can't comment on the above, but looking at this:
http://marc.info/?l=linux-raid&m=120668318422034&w=2
with the 2.6.26 release, bumping up the stripe cache should no longer
be
necessary.
Regards,
Richard
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-15 14:06 Benchmarks: Linux Kernel RAID vs a Hardware RAID setup Ben Martin
` (5 preceding siblings ...)
2008-07-16 3:38 ` Eric Sandeen
@ 2008-07-16 18:54 ` Alan D. Brunelle
2008-07-17 8:26 ` Ben Martin
6 siblings, 1 reply; 42+ messages in thread
From: Alan D. Brunelle @ 2008-07-16 18:54 UTC (permalink / raw)
To: Ben Martin; +Cc: linux-raid
Ben Martin wrote:
> Hi,
> Apologies if posting this here is inappropriate but a recent article
> of mine compares the Linux Kernel RAID code to an $800 hardware RAID
> card and might be of interest to list members:
>
> http://www.linux.com/feature/140734
>
Hi Ben -
Did you do any performance testing of running /multiple/ disks through
the Adaptec card w/out HW RAID enabled - for example, comparing the 4
ports on the mobo against 4 drives exported through the Adaptec card?
I wonder if there are any potential bottleneck issues with the single
adapter card handling 6 independent drives (when testing SW RAID) versus
one exported "drive" (when testing HW RAID).
Regards,
Alan
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-16 17:08 ` thomas62186218
@ 2008-07-16 19:34 ` Richard Scobie
0 siblings, 0 replies; 42+ messages in thread
From: Richard Scobie @ 2008-07-16 19:34 UTC (permalink / raw)
To: thomas62186218; +Cc: monkeyiq, linux-raid
thomas62186218@aol.com wrote:
> Richard,
>
> Are you sure that the patch you reference below is actually in 2.6.26? I
> checked the CHANGELOG and couldn't find anything about get priority
> stripe in there (maybe its called something else now)?
No, I haven't checked, but I was just going by the comment at the top of
the post I referenced.
Dan may be able to comment.
Regards,
Richard
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-16 18:54 ` Alan D. Brunelle
@ 2008-07-17 8:26 ` Ben Martin
0 siblings, 0 replies; 42+ messages in thread
From: Ben Martin @ 2008-07-17 8:26 UTC (permalink / raw)
To: Alan D. Brunelle; +Cc: linux-raid, Ben Martin
On Wed, 2008-07-16 at 14:54 -0400, Alan D. Brunelle wrote:
> Ben Martin wrote:
> > Hi,
> > Apologies if posting this here is inappropriate but a recent article
> > of mine compares the Linux Kernel RAID code to an $800 hardware RAID
> > card and might be of interest to list members:
> >
> > http://www.linux.com/feature/140734
> >
>
> Hi Ben -
>
> Did you do any performance testing of running /multiple/ disks through
> the Adaptec card w/out HW RAID enabled - for example, comparing the 4
> ports on the mobo against 4 drives exported through the Adaptec card?
Good question, however I must admit that I didn't test this.
Being an 8 lane PCIe card, I thought there would not be a huge
issue with the interface to the card and drives being a bottleneck.
http://www.adaptec.com/en-US/products/Controllers/Hardware/sas/value/SAS-31205/
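Back of the envelope: eight PCIe 1.x lanes give 8 x 250 MB/s = 2 GB/s
in each direction, while six drives at an assumed ~80 MB/s sustained
each need only about 480 MB/s, so the link itself should have plenty
of headroom.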
Though it would be interesting to compare, once I move a few old drives
off the mobo SATA controller and get 4 ports free.
>
> I wonder if there are any potential bottleneck issues with the single
> adapter card handling 6 independent drives (when testing SW RAID) versus
> one exported "drive" (when testing HW RAID).
>
> Regards,
> Alan
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-15 20:12 ` Keld Jørn Simonsen
@ 2008-07-21 16:55 ` Bill Davidsen
2008-07-23 7:45 ` Keld Jørn Simonsen
0 siblings, 1 reply; 42+ messages in thread
From: Bill Davidsen @ 2008-07-21 16:55 UTC (permalink / raw)
To: Keld Jørn Simonsen; +Cc: Brad Campbell, Ben Martin, linux-raid
Keld Jørn Simonsen wrote:
> On Tue, Jul 15, 2008 at 10:40:26PM +0400, Brad Campbell wrote:
>
>> Keld Jørn Simonsen wrote:
>>
>>
>>> I would also welcome test with mobo HW RAID - Many mobo's today come
>>> with some HW raid functionality, and many people would be in the
>>> situation of whether to choose a HW or SW configuration.
>>>
>> Do you have some pointers for information on motherboards that come with
>> hardware raid? I'd be interested to have a look at them. I've certainly
>> seen plenty of "fakeraid" systems, but I've not yet come across hardware
>> raid on an MB.
>>
>
> It is possible that I am thinking of what you call fakeraid.
>
I like "firmware raid" better, but the bottom line here is that the raid
is still done in the system CPU. These often have little or no hardware
support, such as cache so that multiple drives can be written without
passing the data through the system bus more than once (and chancing
change while that happens).
> Anyway this is what many users buy and look at employing.
> And my understanding is that you can set up the mobo controller in the
> bios, and then have it running with Linux, without further software.
>
Yes, although the reliability of firmware raid under device failure
conditions is dependent on the vendor, firmware level, etc.
> I would then like to see what the differences are, and I hope to
> document that Linux raid is better... Anyway I am first and foremost
> just curious, and if HW/fake raid is faster or better, then I would
> gladly recommend it.
>
Don't confuse HW and firmware raid; they don't have the same failure
points and capabilities.
--
Bill Davidsen <davidsen@tmr.com>
"Woe unto the statesman who makes war without a reason that will still
be valid when the war is over..." Otto von Bismarck
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-16 4:47 ` Keld Jørn Simonsen
@ 2008-07-21 16:58 ` Bill Davidsen
0 siblings, 0 replies; 42+ messages in thread
From: Bill Davidsen @ 2008-07-21 16:58 UTC (permalink / raw)
To: Keld Jørn Simonsen; +Cc: Ben Martin, linux-raid
Keld Jørn Simonsen wrote:
> On Wed, Jul 16, 2008 at 01:58:05PM +1000, Ben Martin wrote:
>
>> On Tue, 2008-07-15 at 18:39 +0200, Keld Jørn Simonsen wrote:
>>
>>> On Wed, Jul 16, 2008 at 12:06:09AM +1000, Ben Martin wrote:
>>>
>> One interesting question about the r10 layouts: if f2/o2 perform better
>> than n2, why is the default for creation still n2? Just as mkfs.xfs
>> determines the raid chunk and stripe size for you (assuming you don't
>> use LVM to circumvent this, gah!), shouldn't mdadm select the "fastest"
>> --layout as the default for RAID-10?
>>
>
> Perhaps. I think Neil first implemented "near", and more or less at the same
> time "far" - but to him the main thing was "near". This was 2.6.9. Then
> in 2.6.18 he implemented "offset". In the early days the
> characteristics of each layout were not well known. It is only in the
> last year or so that I have seen benchmarks with n2, f2 and o2.
I admit that I have done "raw device performance" benchmarks with n2 and
f2, and neglected o2 completely, so I rely on others' work. However, I
am most often bitten by raid5 issues, so I have been paying most
attention to them.
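For anyone wanting to fill that gap, the layout is chosen at creation
time, so the three variants are easy to build side by side; a sketch,
assuming four spare partitions:

  mdadm --create /dev/md0 --level=10 --layout=f2 \
        --raid-devices=4 /dev/sd[bcde]1

(substitute n2 or o2 for f2 to get the other layouts).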
--
Bill Davidsen <davidsen@tmr.com>
"Woe unto the statesman who makes war without a reason that will still
be valid when the war is over..." Otto von Bismarck
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-21 16:55 ` Bill Davidsen
@ 2008-07-23 7:45 ` Keld Jørn Simonsen
2008-07-23 10:29 ` Brad Campbell
0 siblings, 1 reply; 42+ messages in thread
From: Keld Jørn Simonsen @ 2008-07-23 7:45 UTC (permalink / raw)
To: Bill Davidsen; +Cc: Brad Campbell, Ben Martin, linux-raid
On Mon, Jul 21, 2008 at 12:55:23PM -0400, Bill Davidsen wrote:
> Keld Jørn Simonsen wrote:
> >On Tue, Jul 15, 2008 at 10:40:26PM +0400, Brad Campbell wrote:
> >
> >>Keld Jørn Simonsen wrote:
> >>
> >>
> >>>I would also welcome test with mobo HW RAID - Many mobo's today come
> >>>with some HW raid functionality, and many people would be in the
> >>>situation of whether to choose a HW or SW configuration.
> >>>
> >>Do you have some pointers for information on motherboards that come with
> >>hardware raid? I'd be interested to have a look at them. I've certainly
> >>seen plenty of "fakeraid" systems, but I've not yet come across hardware
> >>raid on an MB.
> >>
> >
> >It is possible that I am thinking of what you call fakeraid.
> >
>
> I like "firmware raid" better, but the bottom line here is that the raid
> is still done in the system CPU. These often have little or no hardware
> support, such as cache so that multiple drives can be written without
> passing the data through the system bus more than once (and chancing
> change while that happens).
OK, if the definition of these controllers is that they must have
support from the OS, then I think most mobos today actually come with
hardware raid, in the sense that no support is needed in Linux for
them: everything is set up via the BIOS, and from the Linux kernel the
array looks like an ordinary disk.
Best regards
keld
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: Benchmarks: Linux Kernel RAID vs a Hardware RAID setup
2008-07-23 7:45 ` Keld Jørn Simonsen
@ 2008-07-23 10:29 ` Brad Campbell
0 siblings, 0 replies; 42+ messages in thread
From: Brad Campbell @ 2008-07-23 10:29 UTC (permalink / raw)
To: Keld Jørn Simonsen; +Cc: Bill Davidsen, Ben Martin, linux-raid
Keld Jørn Simonsen wrote:
>>> It is possible that I am thinking of what you call fakeraid.
>>>
>> I like "firmware raid" better, but the bottom line here is that the raid
>> is still done in the system CPU. These often have little or no hardware
>> support, such as cache so that multiple drives can be written without
>> passing the data through the system bus more than once (and chancing
>> change while that happens).
>
> OK, if the definition on these controllers is that they must have support
> from the OS, then I think most of the mobos today come with hardware
> raid, that is there needs to be no support in Linux for this. Everything
> is set up via the bios, and from the linux kernel it looks like an
> ordinary disk.
The problem here is that it looks like an ordinary disk to the BIOS
only. Grub can use this to load the kernel and initrd from, but when
the kernel boots it sees the controller as the simple controller it is,
with some disks connected to it. It requires something like dmraid to
make the "array" visible as an actual array.
None of these controllers perform any of the RAID offload themselves;
it is all handled by the driver. Windows loads using the BIOS and then
hands over to the driver, which handles the raid, just the same as
Linux does with dmraid.
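For the curious, surfacing such an array under Linux looks roughly like
this (a sketch; set names and support depend on the vendor's metadata
format):

  # show the raid sets discovered from the on-disk metadata
  dmraid -s
  # activate them as device-mapper devices under /dev/mapper
  dmraid -ay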
I believe some of the ITE chipsets can handle raid-1 and possibly raid-0 internally, and the kernel
has the ability to drive those as a raid controller, but they are somewhat rarer than the other
fakeraid implementations out there.
Brad
--
Dolphins are so intelligent that within a few weeks they can
train Americans to stand at the edge of the pool and throw them
fish.
^ permalink raw reply [flat|nested] 42+ messages in thread