* raid 10 su, sw settings
@ 2007-12-31 0:00 Brad Langhorst
2007-12-31 17:04 ` Justin Piszcz
0 siblings, 1 reply; 9+ messages in thread
From: Brad Langhorst @ 2007-12-31 0:00 UTC (permalink / raw)
To: xfs
I have this system
- 3ware 9650 controller
- 4 disk raid 10
- 64k stripe size
- this is a vmware host, so lots of r/w on a few big files.
I'm not entirely satisfied with its performance.
Typical blocks/sec from iostat during large file movements is about
100M/s read and 80M/s write.
When I set this up, I did not fully understand all the details... so I
want to check a few things.
- is the partition aligned correctly? I fear not...
/dev/sda1 * 1 24 192748+ 83 Linux
/dev/sda2 25 19449 156031312+ 83 Linux
Is this where I'm losing performance?
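[A quick sanity check of the alignment question. This is a sketch only, assuming the classic 255-head/63-sector fake geometry that fdisk uses by default, so /dev/sda2 starting at cylinder 25 means a start sector of 24 * 255 * 63.]

```shell
# Hypothetical alignment check for /dev/sda2, assuming the default
# 255-head/63-sector geometry (16065 sectors per cylinder).
START_SECTOR=$(( 24 * 255 * 63 ))     # sda2 begins at cylinder 25
STRIPE_SECTORS=$(( 64 * 1024 / 512 )) # 64 KiB stripe in 512-byte sectors
if [ $(( START_SECTOR % STRIPE_SECTORS )) -eq 0 ]; then
    echo "aligned"
else
    echo "misaligned: start sector $START_SECTOR is off by $(( START_SECTOR % STRIPE_SECTORS )) sectors"
fi
# prints: misaligned: start sector 385560 is off by 24 sectors
```

[With a non-zero remainder, stripe-sized I/O can straddle stripe boundaries, which is the kind of loss the question is asking about.]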
- What should the sunit and swidth settings be during mount?
I guess with raid 10 the width is 2 so...
sunit = 128 (64k/512) and swidth = 256 (2*64k/512)
Or maybe I should use width 1 ?
Remounting (mount -o remount) with these options does not lead
to a noticeable change in performance. Must I recreate the fs or
unmount and remount?
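[For reference, the arithmetic above can be sketched as follows, assuming 2 data-bearing stripe members for a 4-disk RAID10 (i.e. width 2); the remount line at the end is illustrative only, not something tested here.]

```shell
# sunit/swidth in 512-byte sectors, for a 64 KiB stripe unit
# across 2 striped mirror pairs (4-disk RAID10).
STRIPE_KB=64
DATA_MEMBERS=2          # 4 disks = 2 mirrored pairs, striped
SUNIT=$(( STRIPE_KB * 1024 / 512 ))
SWIDTH=$(( SUNIT * DATA_MEMBERS ))
echo "sunit=$SUNIT swidth=$SWIDTH"   # prints: sunit=128 swidth=256
# Illustrative only -- the mount invocation under discussion:
# mount -o remount,sunit=$SUNIT,swidth=$SWIDTH /
```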
Here's the output of xfs_info in case it's relevant.
xfs_info /
meta-data=/dev/sda2      isize=256    agcount=16, agsize=2437989 blks
         =               sectsz=512   attr=0
data     =               bsize=4096   blocks=39007824, imaxpct=25
         =               sunit=0      swidth=0 blks, unwritten=1
naming   =version 2      bsize=4096
log      =internal       bsize=4096   blocks=19046, version=1
         =               sectsz=512   sunit=0 blks
realtime =none           extsz=65536  blocks=0, rtextents=0
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: raid 10 su, sw settings
2007-12-31 0:00 raid 10 su, sw settings Brad Langhorst
@ 2007-12-31 17:04 ` Justin Piszcz
2007-12-31 18:43 ` Brad Langhorst
0 siblings, 1 reply; 9+ messages in thread
From: Justin Piszcz @ 2007-12-31 17:04 UTC (permalink / raw)
To: Brad Langhorst; +Cc: xfs
On Sun, 30 Dec 2007, Brad Langhorst wrote:
> I have this system
> - 3ware 9650 controller
> - 4 disk raid 10
> - 64k stripe size
> - this is a vmware host, so lots of r/w on a few big files.
>
> I'm not entirely satisfied with its performance.
>
> Typical blocks/sec from iostat during large file movements is about
> 100M/s read and 80M/s write.
>
> When I set this up, I did not fully understand all the details... so I
> want to check a few things.
>
> - is the partition aligned correctly? I fear not...
> /dev/sda1 * 1 24 192748+ 83 Linux
> /dev/sda2 25 19449 156031312+ 83 Linux
>
> Is this where I'm losing performance?
>
> - What should the sunit and swidth settings be during mount?
> I guess with raid 10 the width is 2 so...
> sunit = 128 (64k/512) and swidth = 256 (2*64k/512)
>
> Or maybe I should use width 1 ?
> Remounting (mount -o remount) with these options does not lead
> to a noticeable change in performance. Must I recreate the fs or
> unmount and remount?
>
> Here's the output of xfs_info in case it's relevant.
>
> xfs_info /
> meta-data=/dev/sda2      isize=256    agcount=16, agsize=2437989 blks
>          =               sectsz=512   attr=0
> data     =               bsize=4096   blocks=39007824, imaxpct=25
>          =               sunit=0      swidth=0 blks, unwritten=1
> naming   =version 2      bsize=4096
> log      =internal       bsize=4096   blocks=19046, version=1
>          =               sectsz=512   sunit=0 blks
> realtime =none           extsz=65536  blocks=0, rtextents=0
#1 What type of performance do you expect with a 4-disk raid10?
#2 You should be able to umount/mount with the new sizes, although I have
not tested it myself because I typically use sw raid here (sunit etc. are
optimized for sw raid).
Justin.
* Re: raid 10 su, sw settings
2007-12-31 17:04 ` Justin Piszcz
@ 2007-12-31 18:43 ` Brad Langhorst
2007-12-31 19:07 ` Justin Piszcz
0 siblings, 1 reply; 9+ messages in thread
From: Brad Langhorst @ 2007-12-31 18:43 UTC (permalink / raw)
To: Justin Piszcz; +Cc: xfs
On Mon, 2007-12-31 at 12:04 -0500, Justin Piszcz wrote:
> >
> > Typical blocks/sec from iostat during large file movements is about
> > 100M/s read and 80M/s write.
> >
>
> #1 What type of performance do you expect with a 4-disk raid10?
Are you saying that I should not expect more?
I expect about 70% better performance, since I think a single disk
should be able to do 100M/s. Maybe this is unreasonable?
> #2 You should be able to umount/mount with the new sizes, although I have
> not tested it myself because I typically use sw raid here (sunit etc. are
> optimized for sw raid).
I am able to do the remount, but it seems to have had no impact.
I don't know why, but I see 3 possibilities:
- Perhaps the su/sw settings don't matter very much.
- Maybe it didn't take effect (rebooting this system is not a preferred
option).
- Maybe it doesn't matter if the partition layout is not optimized.
Brad
* Re: raid 10 su, sw settings
2007-12-31 18:43 ` Brad Langhorst
@ 2007-12-31 19:07 ` Justin Piszcz
2007-12-31 20:17 ` Iustin Pop
0 siblings, 1 reply; 9+ messages in thread
From: Justin Piszcz @ 2007-12-31 19:07 UTC (permalink / raw)
To: Brad Langhorst; +Cc: xfs
On Mon, 31 Dec 2007, Brad Langhorst wrote:
>
> On Mon, 2007-12-31 at 12:04 -0500, Justin Piszcz wrote:
>
>>>
>>> Typical blocks/sec from iostat during large file movements is about
>>> 100M/s read and 80M/s write.
>>>
>>
>> #1 What type of performance do you expect with a 4-disk raid10?
>
> Are you saying that I should not expect more?
> I expect about 70% better performance, since I think a single disk
> should be able to do 100M/s. Maybe this is unreasonable?
A single disk may do 90MiB/s burst, but not sustained for read or write, at
least not with cheap SATA disks, and when you get toward the middle part of
the disk the speed will drop off significantly. 100MiB/s read and 80MiB/s
write for RAID10 sounds about right to me. Maybe someone else on the list
with a similar configuration can chime in with their benchmarks.
>
>
>> #2 You should be able to umount/mount with the new sizes, although I have
>> not tested it myself because I typically use sw raid here (sunit etc. are
>> optimized for sw raid).
> I am able to do the remount, but it seems to have had no impact.
> I don't know why, but I see 3 possibilities:
> - Perhaps the su/sw settings don't matter very much.
> - Maybe it didn't take effect (rebooting this system is not a preferred
> option).
> - Maybe it doesn't matter if the partition layout is not optimized.
>
> Brad
>
>
* Re: raid 10 su, sw settings
2007-12-31 19:07 ` Justin Piszcz
@ 2007-12-31 20:17 ` Iustin Pop
2007-12-31 20:55 ` Brad Langhorst
0 siblings, 1 reply; 9+ messages in thread
From: Iustin Pop @ 2007-12-31 20:17 UTC (permalink / raw)
To: Justin Piszcz; +Cc: Brad Langhorst, xfs
On Mon, Dec 31, 2007 at 02:07:27PM -0500, Justin Piszcz wrote:
>
>
> On Mon, 31 Dec 2007, Brad Langhorst wrote:
>
>>
>> On Mon, 2007-12-31 at 12:04 -0500, Justin Piszcz wrote:
>>
>>>>
>>>> Typical blocks/sec from iostat during large file movements is about
>>>> 100M/s read and 80M/s write.
>>>>
>>>
>>> #1 What type of performance do you expect with a 4-disk raid10?
>>
>> Are you saying that I should not expect more?
>> I expect about 70% better performance, since I think a single disk
>> should be able to do 100M/s. Maybe this is unreasonable?
> A single disk may do 90MiB/s burst, but not sustained for read or write, at
> least not with cheap SATA disks, and when you get toward the middle part of
> the disk the speed will drop off significantly. 100MiB/s read and 80MiB/s
> write for RAID10 sounds about right to me. Maybe someone else on the list
> with a similar configuration can chime in with their benchmarks.
I agree about the disk speed - 100MiB/s sustained from SATA drives is a
little too much to expect, and certainly *only* in purely single-reader or
single-writer sequential workloads.
I have the same config - a 4-drive hw raid10 on a 9650. A recent zcav log
shows read speeds starting at around 140MiB/s and decreasing toward 75MiB/s.
Since this is zcav from the bonnie++ package, it doesn't take into account
any filesystem or partitioning overhead.
regards,
iustin
* Re: raid 10 su, sw settings
2007-12-31 20:17 ` Iustin Pop
@ 2007-12-31 20:55 ` Brad Langhorst
2007-12-31 21:42 ` Iustin Pop
0 siblings, 1 reply; 9+ messages in thread
From: Brad Langhorst @ 2007-12-31 20:55 UTC (permalink / raw)
To: Iustin Pop; +Cc: xfs
On Mon, 2007-12-31 at 21:17 +0100, Iustin Pop wrote:
> On Mon, Dec 31, 2007 at 02:07:27PM -0500, Justin Piszcz wrote:
> > On Mon, 31 Dec 2007, Brad Langhorst wrote:
> >> On Mon, 2007-12-31 at 12:04 -0500, Justin Piszcz wrote:
> >>
> >>>>
> >>>> Typical blocks/sec from iostat during large file movements is about
> >>>> 100M/s read and 80M/s write.
> >>>>
> >>>
> >>> #1 What type of performance do you expect with a 4-disk raid10?
> >>
> >> Are you saying that I should not expect more?
> >> I expect about 70% better performance, since I think a single disk
> >> should be able to do 100M/s. Maybe this is unreasonable?
> > A single disk may do 90MiB/s burst, but not sustained for read or write, at
> > least not with cheap SATA disks, and when you get toward the middle part of
> > the disk the speed will drop off significantly. 100MiB/s read and 80MiB/s
> > write for RAID10 sounds about right to me. Maybe someone else on the list
> > with a similar configuration can chime in with their benchmarks.
>
> I agree about the disk speed - 100MiB/s sustained from SATA drives is a
> little too much to expect, and certainly *only* in purely single-reader or
> single-writer sequential workloads.
>
> I have the same config - a 4-drive hw raid10 on a 9650. A recent zcav log
> shows read speeds starting at around 140MiB/s and decreasing toward 75MiB/s.
> Since this is zcav from the bonnie++ package, it doesn't take into account
> any filesystem or partitioning overhead.
I guess I should re-adjust my expectations.
Any opinions on the partition layout? Did you go to special effort to
lay out your partitions on the stripe boundaries? (Actually, I don't really
understand this fully yet.)
* Re: raid 10 su, sw settings
2007-12-31 20:55 ` Brad Langhorst
@ 2007-12-31 21:42 ` Iustin Pop
2007-12-31 22:54 ` Brad Langhorst
0 siblings, 1 reply; 9+ messages in thread
From: Iustin Pop @ 2007-12-31 21:42 UTC (permalink / raw)
To: Brad Langhorst; +Cc: xfs
On Mon, Dec 31, 2007 at 03:55:01PM -0500, Brad Langhorst wrote:
> Any opinions on the partition layout? Did you go to special effort to
> lay out your partitions on the stripe boundaries? (Actually, I don't really
> understand this fully yet.)
So instead of the usual fake geometry of 255 heads and 63 sectors per track,
which is not a multiple of anything, I set up a 16-head/16-sector geometry
that gives a nice power-of-two multiplier, so all partitions *should* be
aligned at a nice multiple of any size you choose; fdisk -l on the drive
reports units of 128k.
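[The 128k unit follows directly from that geometry - a sketch of the arithmetic, assuming 512-byte sectors:]

```shell
# With 16 heads and 16 sectors/track, one cylinder is a clean
# power-of-two size, so cylinder-aligned partitions are 128 KiB-aligned.
HEADS=16
SECTORS_PER_TRACK=16
SECTOR_BYTES=512
CYL_BYTES=$(( HEADS * SECTORS_PER_TRACK * SECTOR_BYTES ))
echo "cylinder = $(( CYL_BYTES / 1024 )) KiB"   # prints: cylinder = 128 KiB
```

[128 KiB is an exact multiple of the 64 KiB stripe, which is why cylinder-boundary partitions end up stripe-aligned here.]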
I have to say that the performance of the filesystem (XFS) on that
raid10 is satisfactory and about what I expected. Certainly ~50MiB/s
write while doing ~50MiB/s reads (for a combined, not purely sequential
throughput of ~100MiB/s) is enough for my needs.
regards,
iustin
* Re: raid 10 su, sw settings
2007-12-31 21:42 ` Iustin Pop
@ 2007-12-31 22:54 ` Brad Langhorst
2008-01-01 0:15 ` Iustin Pop
0 siblings, 1 reply; 9+ messages in thread
From: Brad Langhorst @ 2007-12-31 22:54 UTC (permalink / raw)
To: Iustin Pop, xfs
On Mon, 2007-12-31 at 22:42 +0100, Iustin Pop wrote:
> On Mon, Dec 31, 2007 at 03:55:01PM -0500, Brad Langhorst wrote:
> > Any opinions on the partition layout? Did you go to special effort to
> > lay out your partitions on the stripe boundaries? (Actually, I don't really
> > understand this fully yet.)
>
> > So instead of the usual fake geometry of 255 heads and 63 sectors per track,
> > which is not a multiple of anything, I set up a 16-head/16-sector geometry
> > that gives a nice power-of-two multiplier, so all partitions *should* be
> > aligned at a nice multiple of any size you choose; fdisk -l on the drive
> > reports units of 128k.
Sorry to be thick... I don't understand this.
Do you configure the raid controller with these settings? Or is this an
fdisk option?
thanks!
brad
* Re: raid 10 su, sw settings
2007-12-31 22:54 ` Brad Langhorst
@ 2008-01-01 0:15 ` Iustin Pop
0 siblings, 0 replies; 9+ messages in thread
From: Iustin Pop @ 2008-01-01 0:15 UTC (permalink / raw)
To: Brad Langhorst; +Cc: xfs
On Mon, Dec 31, 2007 at 05:54:04PM -0500, Brad Langhorst wrote:
>
> On Mon, 2007-12-31 at 22:42 +0100, Iustin Pop wrote:
> > On Mon, Dec 31, 2007 at 03:55:01PM -0500, Brad Langhorst wrote:
> > > Any opinions on the partition layout? Did you go to special effort to
> > > lay out your partitions on the stripe boundaries? (Actually, I don't really
> > > understand this fully yet.)
> >
> > > So instead of the usual fake geometry of 255 heads and 63 sectors per track,
> > > which is not a multiple of anything, I set up a 16-head/16-sector geometry
> > > that gives a nice power-of-two multiplier, so all partitions *should* be
> > > aligned at a nice multiple of any size you choose; fdisk -l on the drive
> > > reports units of 128k.
> Sorry to be thick... I don't understand this.
> Do you configure the raid controller with these settings? Or is this an
> fdisk option?
Note: I'm not even sure it matters, or if I did it right :)
It's an fdisk option. For example, "fdisk -H 16 -S 16 /dev/sdX".
Also, happy new year!
iustin
Thread overview: 9+ messages
2007-12-31 0:00 raid 10 su, sw settings Brad Langhorst
2007-12-31 17:04 ` Justin Piszcz
2007-12-31 18:43 ` Brad Langhorst
2007-12-31 19:07 ` Justin Piszcz
2007-12-31 20:17 ` Iustin Pop
2007-12-31 20:55 ` Brad Langhorst
2007-12-31 21:42 ` Iustin Pop
2007-12-31 22:54 ` Brad Langhorst
2008-01-01 0:15 ` Iustin Pop