* Without tweaking (was: Re: mkfs options for a 16x hw raid5 and xfs ...)
From: Mr. James W. Laferriere @ 2007-09-26 17:44 UTC
To: Justin Piszcz; +Cc: linux-raid maillist
Hello Justin & all,
> ----------Justin Piszcz Wrote: ----------
> Date: Wed, 26 Sep 2007 12:24:20 -0400 (EDT)
> From: Justin Piszcz <jpiszcz@lucidpixels.com>
> Subject: Re: mkfs options for a 16x hw raid5 and xfs (mostly large files)
>
> I have a question: when I use multiple writer threads (2 or 3) I see 550-600
> MiB/s write speed (vmstat), but when using only 1 thread, ~420-430 MiB/s...
> Also, without tweaking, SW RAID is very slow (180-200 MiB/s) using the same
> disks.
> Justin.
Speaking of 'without tweaking': might you have, or know of, a reasonably
accurate list of points at which to begin tweaking, and the possible (even
guessed-at) outcomes of making those changes?

We (maybe even I) could put together a patch adding the tuning options to
the Documentation directory (and/or other files if necessary). The in-kernel
method would allow those with 'doxygen' (amongst other installed tools) to
acquire a modicum of information. The info could be earmarked, e.g.
fs-tunable or disk-tunable, for ease of identification of the intended
subject matter.

Though without a list of the presently known tunables, I am probably going to
find the challenge a bit confusing as well as time-consuming. At present I
believe I (just might) be able, with everyone's help, to put together a list
of the linux-raid tunables. Note: 'with everyone's help'.

Just thoughts.
... much good info snipped...
Tia, JimL
--
+-----------------------------------------------------------------+
| James W. Laferriere | System Techniques | Give me VMS |
| Network Engineer | 663 Beaumont Blvd | Give me Linux |
| babydr@baby-dragons.com | Pacifica, CA. 94044 | only on AXP |
+-----------------------------------------------------------------+
* Re: Without tweaking (was: Re: mkfs options for a 16x hw raid5 and xfs ...)
From: Justin Piszcz @ 2007-09-26 18:12 UTC
To: Mr. James W. Laferriere; +Cc: linux-raid maillist
On Wed, 26 Sep 2007, Mr. James W. Laferriere wrote:
> Hello Justin & all,
>
>> ----------Justin Piszcz Wrote: ----------
>> Date: Wed, 26 Sep 2007 12:24:20 -0400 (EDT)
>> From: Justin Piszcz <jpiszcz@lucidpixels.com>
>> Subject: Re: mkfs options for a 16x hw raid5 and xfs (mostly large files)
>>
>> I have a question: when I use multiple writer threads (2 or 3) I see
>> 550-600 MiB/s write speed (vmstat), but when using only 1 thread, ~420-430
>> MiB/s... Also, without tweaking, SW RAID is very slow (180-200 MiB/s) using
>> the same disks.
>> Justin.
> Speaking of 'without tweaking': might you have, or know of, a reasonably
> accurate list of points at which to begin tweaking, and the possible (even
> guessed-at) outcomes of making those changes?
>
> We (maybe even I) could put together a patch adding the tuning options to
> the Documentation directory (and/or other files if necessary). The
> in-kernel method would allow those with 'doxygen' (amongst other installed
> tools) to acquire a modicum of information. The info could be earmarked,
> e.g. fs-tunable or disk-tunable, for ease of identification of the
> intended subject matter.
>
> Though without a list of the presently known tunables, I am probably going
> to find the challenge a bit confusing as well as time-consuming. At
> present I believe I (just might) be able, with everyone's help, to put
> together a list of the linux-raid tunables. Note: 'with everyone's help'.
>
> Just thoughts.
>
Well, here is a start:

I am sure these will be highly argued over, but after weeks of benchmarking
these "work for me" with a 10-disk Raptor software RAID5 set. They may not
be good for all workloads. I also have a 6-disk 400GB SATA RAID5, and there
I find a 256k chunk size offers the best performance.
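For reference, a create command along these lines (device names here are
placeholders, not my actual disks) gives the 1024k-chunk, 10-disk geometry
shown below:

# mdadm --create /dev/md3 --level=5 --chunk=1024 --raid-devices=10 \
    /dev/sd[b-k]1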
Here is what I optimize:
The stripe (chunk) size of the volume is 1 megabyte, as I am mostly dealing
with large files here, and I use the default left-symmetric layout for the
RAID5. I utilize XFS on top of the MD device; it has been mentioned that you
may incur a 'hit' if using LVM.
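For the mkfs side, a sketch of an aligned invocation (su must match the
chunk size, and sw the number of data disks, which is 9 for a 10-disk
RAID5; values assume the geometry below):

# mkfs.xfs -d su=1024k,sw=9 /dev/md3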
# mdadm -D /dev/md3
/dev/md3:
Version : 00.90.03
Creation Time : Wed Aug 22 10:38:53 2007
Raid Level : raid5
Array Size : 1318680576 (1257.59 GiB 1350.33 GB)
Used Dev Size : 146520064 (139.73 GiB 150.04 GB)
Raid Devices : 10
Total Devices : 10
Preferred Minor : 3
Persistence : Superblock is persistent
Update Time : Wed Sep 26 14:02:18 2007
State : clean
Active Devices : 10
Working Devices : 10
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 1024K
UUID : e37a12d1:1b0b989a:083fb634:68e9eb49
Events : 0.4178
Without any optimizations I get very poor performance: again, 160-220 MiB/s
for both read and write. With the optimizations (yes, just sequential
performance) I see ~430 MiB/s reads with ~500-630 MiB/s writes using XFS.

I use the following mount options, as I have found them to offer the best
overall performance; I have tried various logbufs settings (2, 4, 8) and
different log buffer sizes, and found these to be the best:
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/md3 /r1 xfs noatime,nodiratime,logbufs=8,logbsize=262144 0 1
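To confirm the options actually took effect after mounting, e.g.:

# grep /r1 /proc/mounts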
Now for the specific optimizations:
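The loops below assume a DISKS variable naming the member disks; a
hypothetical example for a ten-disk set:

DISKS="sdb sdc sdd sde sdf sdg sdh sdi sdj sdk"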
echo "Setting max_sectors_kb to 128 KiB"
for i in $DISKS
do
echo "Setting /dev/$i to 128 KiB..."
echo 128 > /sys/block/"$i"/queue/max_sectors_kb
done
echo "Setting nr_requests to 512 KiB"
for i in $DISKS
do
echo "Setting /dev/$i to 512K KiB"
echo 512 > /sys/block/"$i"/queue/nr_requests
done
echo "Setting read-ahead to 64 MiB for /dev/md3"
blockdev --setra 65536 /dev/md3
echo "Setting stripe_cache_size to 16 MiB for /dev/md3"
echo 16384 > /sys/block/md3/md/stripe_cache_size
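Worth noting the memory cost (my arithmetic, assuming 4 KiB pages and one
page per member disk per cache entry):

# 16384 entries * 10 disks * 4096 bytes/page = 640 MiB of RAM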
# Set minimum and maximum raid rebuild speed to 30MB/s.
echo "Setting minimum and maximum resync speed to 30 MiB/s..."
echo 30000 > /sys/block/md3/md/sync_speed_min
echo 30000 > /sys/block/md3/md/sync_speed_max
^ The above step is needed because of a bug in the md RAID code: if you use
chunk sizes larger than 128k or so with a big stripe_cache_size, it does not
handle them well, and RAID verifies etc. run at a paltry 1 MiB/s or less.
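To confirm the caps take effect, you can kick off a check through the
standard md sysfs interface and watch the rate, e.g.:

echo check > /sys/block/md3/md/sync_action
watch -n 1 cat /proc/mdstat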
As for Raptors, they are known for poor speed when NCQ is enabled; I see
20-30 MiB/s better performance with NCQ off.
echo "Disabling NCQ on all disks..."
for i in $DISKS
do
echo "Disabling NCQ on $i"
echo 1 > /sys/block/"$i"/device/queue_depth
done
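To verify the setting, or to re-enable NCQ later (sdb is just an example
disk, and 31 is only a typical SATA NCQ maximum; check what your drives
reported at boot):

cat /sys/block/sdb/device/queue_depth
echo 31 > /sys/block/sdb/device/queue_depth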
Justin.
* Re: Without tweaking
From: Richard Scobie @ 2007-09-26 19:52 UTC
To: linux-raid maillist
Justin Piszcz wrote:
> As for Raptors, they are known for poor speed when NCQ is enabled; I see
> 20-30 MiB/s better performance with NCQ off.
Hi Justin,
Have you tested this with multiple readers/writers?
Regards,
Richard
* Re: Without tweaking
From: Justin Piszcz @ 2007-09-26 20:46 UTC
To: Richard Scobie; +Cc: linux-raid maillist
On Thu, 27 Sep 2007, Richard Scobie wrote:
> Justin Piszcz wrote:
>
>> As for Raptors, they are known for poor speed when NCQ is enabled; I see
>> 20-30 MiB/s better performance with NCQ off.
>
> Hi Justin,
>
> Have you tested this with multiple readers/writers?
>
> Regards,
>
> Richard
If you have a good, repeatable benchmark you want me to run with NCQ on/off,
let me know. No, I only used bonnie++/iozone/tiobench/dd, without any
parallelism in those utilities.
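Something like this would be easy enough to script, a sketch with
hypothetical test files under the /r1 mount:

for n in 1 2 3; do
    dd if=/dev/zero of=/r1/ddtest$n bs=1M count=4096 &
done
wait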
Justin.
* Re: Without tweaking
From: Richard Scobie @ 2007-09-26 20:51 UTC
To: linux-raid maillist
Justin Piszcz wrote:
> If you have a good, repeatable benchmark you want me to run with NCQ
> on/off, let me know. No, I only used bonnie++/iozone/tiobench/dd, without
> any parallelism in those utilities.
Perhaps iozone with 5 threads, NCQ on and off?
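Something along these lines, perhaps (a sketch; the record and file sizes
are only suggestions):

iozone -l 5 -u 5 -r 1024k -s 2g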
Regards,
Richard
* Re: Without tweaking
From: Justin Piszcz @ 2007-09-26 21:24 UTC
To: Richard Scobie; +Cc: linux-raid maillist
On Thu, 27 Sep 2007, Richard Scobie wrote:
> Justin Piszcz wrote:
>
>> If you have a good, repeatable benchmark you want me to run with NCQ
>> on/off, let me know. No, I only used bonnie++/iozone/tiobench/dd, without
>> any parallelism in those utilities.
>
> Perhaps iozone with 5 threads, NCQ on and off?
>
> Regards,
>
> Richard
With multiple threads, not too much difference...
NCQ OFF: iozone -l 5
Children see throughput for 5 initial writers = 894857.31 KB/sec
Parent sees throughput for 5 initial writers = 5484.80 KB/sec
Min throughput per process = 0.00 KB/sec
Max throughput per process = 894857.31 KB/sec
Avg throughput per process = 178971.46 KB/sec
Min xfer = 0.00 KB
Children see throughput for 5 rewriters = 1289930.50 KB/sec
Parent sees throughput for 5 rewriters = 12722.45 KB/sec
Min throughput per process = 0.00 KB/sec
Max throughput per process = 1289930.50 KB/sec
Avg throughput per process = 257986.10 KB/sec
Min xfer = 0.00 KB
Children see throughput for 5 readers = 1992459.00 KB/sec
Parent sees throughput for 5 readers = 361601.94 KB/sec
Min throughput per process = 0.00 KB/sec
Max throughput per process = 1992459.00 KB/sec
Avg throughput per process = 398491.80 KB/sec
Min xfer = 0.00 KB
Children see throughput for 5 re-readers = 2169601.25 KB/sec
Parent sees throughput for 5 re-readers = 545904.86 KB/sec
Min throughput per process = 0.00 KB/sec
Max throughput per process = 2169601.25 KB/sec
Avg throughput per process = 433920.25 KB/sec
Min xfer = 0.00 KB
Children see throughput for 5 reverse readers = 1662389.12 KB/sec
Parent sees throughput for 5 reverse readers = 530530.32 KB/sec
Min throughput per process = 0.00 KB/sec
Max throughput per process = 1662389.12 KB/sec
Avg throughput per process = 332477.83 KB/sec
Min xfer = 0.00 KB
Children see throughput for 5 stride readers = 1689860.00 KB/sec
Parent sees throughput for 5 stride readers = 559560.28 KB/sec
Min throughput per process = 0.00 KB/sec
Max throughput per process = 1689860.00 KB/sec
Avg throughput per process = 337972.00 KB/sec
Min xfer = 0.00 KB
Children see throughput for 5 random readers = 1640796.38 KB/sec
Parent sees throughput for 5 random readers = 384384.88 KB/sec
Min throughput per process = 0.00 KB/sec
Max throughput per process = 1640796.38 KB/sec
Avg throughput per process = 328159.28 KB/sec
Min xfer = 0.00 KB
Children see throughput for 5 mixed workload = 1723771.00 KB/sec
Parent sees throughput for 5 mixed workload = 2954.09 KB/sec
Min throughput per process = 0.00 KB/sec
Max throughput per process = 1723771.00 KB/sec
Avg throughput per process = 344754.20 KB/sec
Min xfer = 0.00 KB
Children see throughput for 5 random writers = 1312798.75 KB/sec
Parent sees throughput for 5 random writers = 3750.95 KB/sec
Min throughput per process = 0.00 KB/sec
Max throughput per process = 1312798.75 KB/sec
Avg throughput per process = 262559.75 KB/sec
Min xfer = 0.00 KB
Children see throughput for 5 pwrite writers = 915847.19 KB/sec
Parent sees throughput for 5 pwrite writers = 2395.21 KB/sec
Min throughput per process = 0.00 KB/sec
Max throughput per process = 915847.19 KB/sec
Avg throughput per process = 183169.44 KB/sec
Min xfer = 0.00 KB
Children see throughput for 5 pread readers = 1620980.12 KB/sec
Parent sees throughput for 5 pread readers = 272911.00 KB/sec
Min throughput per process = 0.00 KB/sec
Max throughput per process = 1620980.12 KB/sec
Avg throughput per process = 324196.03 KB/sec
Min xfer = 0.00 KB
NCQ ON: iozone -l 5
Children see throughput for 5 initial writers = 867738.31 KB/sec
Parent sees throughput for 5 initial writers = 4722.90 KB/sec
Min throughput per process = 0.00 KB/sec
Max throughput per process = 867738.31 KB/sec
Avg throughput per process = 173547.66 KB/sec
Min xfer = 0.00 KB
Children see throughput for 5 rewriters = 1326585.25 KB/sec
Parent sees throughput for 5 rewriters = 11928.29 KB/sec
Min throughput per process = 0.00 KB/sec
Max throughput per process = 1326585.25 KB/sec
Avg throughput per process = 265317.05 KB/sec
Min xfer = 0.00 KB
Children see throughput for 5 readers = 1895721.12 KB/sec
Parent sees throughput for 5 readers = 334665.53 KB/sec
Min throughput per process = 0.00 KB/sec
Max throughput per process = 1895721.12 KB/sec
Avg throughput per process = 379144.22 KB/sec
Min xfer = 0.00 KB
Children see throughput for 5 re-readers = 2091421.75 KB/sec
Parent sees throughput for 5 re-readers = 310473.32 KB/sec
Min throughput per process = 0.00 KB/sec
Max throughput per process = 2091421.75 KB/sec
Avg throughput per process = 418284.35 KB/sec
Min xfer = 0.00 KB
Children see throughput for 5 reverse readers = 1630828.12 KB/sec
Parent sees throughput for 5 reverse readers = 260181.03 KB/sec
Min throughput per process = 0.00 KB/sec
Max throughput per process = 1630828.12 KB/sec
Avg throughput per process = 326165.62 KB/sec
Min xfer = 0.00 KB
Children see throughput for 5 stride readers = 1647088.75 KB/sec
Parent sees throughput for 5 stride readers = 311644.78 KB/sec
Min throughput per process = 0.00 KB/sec
Max throughput per process = 1647088.75 KB/sec
Avg throughput per process = 329417.75 KB/sec
Min xfer = 0.00 KB
Children see throughput for 5 random readers = 1736314.50 KB/sec
Parent sees throughput for 5 random readers = 547017.30 KB/sec
Min throughput per process = 0.00 KB/sec
Max throughput per process = 1736314.50 KB/sec
Avg throughput per process = 347262.90 KB/sec
Min xfer = 0.00 KB
Children see throughput for 5 mixed workload = 1599251.25 KB/sec
Parent sees throughput for 5 mixed workload = 11172.20 KB/sec
Min throughput per process = 0.00 KB/sec
Max throughput per process = 1599251.25 KB/sec
Avg throughput per process = 319850.25 KB/sec
Min xfer = 0.00 KB
Children see throughput for 5 random writers = 1333173.62 KB/sec
Parent sees throughput for 5 random writers = 3302.71 KB/sec
Min throughput per process = 0.00 KB/sec
Max throughput per process = 1333173.62 KB/sec
Avg throughput per process = 266634.72 KB/sec
Min xfer = 0.00 KB
Children see throughput for 5 pwrite writers = 1117430.12 KB/sec
Parent sees throughput for 5 pwrite writers = 10313.44 KB/sec
Min throughput per process = 0.00 KB/sec
Max throughput per process = 1117430.12 KB/sec
Avg throughput per process = 223486.02 KB/sec
Min xfer = 0.00 KB
Children see throughput for 5 pread readers = 1610042.38 KB/sec
Parent sees throughput for 5 pread readers = 269047.35 KB/sec
Min throughput per process = 0.00 KB/sec
Max throughput per process = 1610042.38 KB/sec
Avg throughput per process = 322008.47 KB/sec
Min xfer = 0.00 KB
* Re: Without tweaking
From: Richard Scobie @ 2007-09-26 21:44 UTC
To: linux-raid maillist
Justin Piszcz wrote:
> With multiple threads, not too much difference..
Thanks for that - as you say, not a great deal there; slight improvements in
some of the random tests.
Regards,
Richard