* Differences in su/sw values for hw vs. sw RAID 5?
From: bridavis @ 2006-08-21 1:55 UTC (permalink / raw)
To: xfs
I'm getting conflicting reports about how I should choose my sunit/swidth values for hardware RAID 5.
Setup: hardware RAID 5, three disks at 300 GB each, 64k stripe size.
Originally, following the man page and the mailing list archives, I came up with sw=2,su=64k.
However, a reply to an earlier question I sent to the list indicated that the hardware RAID should be treated as a single disk, so I came up with sw=1,su=128k.
Which one is correct for my setup?
Thanks!
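For reference, the usual rule of thumb (an assumption stated here for illustration, not a quote from the replies) is that su matches the controller's per-disk chunk size and sw counts only the data-bearing disks, i.e. N-1 for RAID 5. A minimal sketch of that arithmetic for the setup above:

```shell
#!/bin/sh
# Geometry sketch for a 3-disk RAID-5 with a 64k chunk (the setup from the post).
# In RAID 5, one disk's worth of each stripe holds parity, so N-1 disks carry data.
CHUNK_KB=64                 # controller stripe/chunk size per disk
DISKS=3                     # total disks in the array
DATA_DISKS=$((DISKS - 1))   # data-bearing columns per stripe

SU_KB=$CHUNK_KB             # su = chunk size
SW=$DATA_DISKS              # sw = number of data disks
STRIPE_KB=$((SU_KB * SW))   # full data stripe

echo "su=${SU_KB}k sw=${SW} (full data stripe: ${STRIPE_KB}k)"
# The corresponding mkfs invocation would be (hypothetical device name):
#   mkfs.xfs -d su=${SU_KB}k,sw=${SW} /dev/sdX
```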
* Re: Differences in su/sw values for hw vs. sw RAID 5?
From: Shailendra Tripathi @ 2006-08-21 6:15 UTC (permalink / raw)
To: bridavis, xfs
For a RAID-5 device, the parity has to be calculated before any write. If
any column of the stripe is not part of the write, it has to be read from
disk first so the parity can be recomputed and re-written. When writes are
sized so that all columns of a stripe are written together, the parity can
be calculated directly and written without incurring any extra read I/O,
and that's why declaring the geometry in that form is desirable. Someone
correct me if I am wrong.
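The extra read I/O described above can be sketched numerically; the counts below are an illustrative assumption for a 3-disk array (2 data columns + 1 parity), not output from any tool:

```shell
#!/bin/sh
# I/O cost sketch for a 3-disk RAID-5 (2 data columns + 1 parity).

# Small (read-modify-write) update touching one 64k chunk:
RMW_READS=2    # read old data chunk + old parity
RMW_WRITES=2   # write new data chunk + new parity
echo "partial-stripe write: $RMW_READS reads, $RMW_WRITES writes"

# Full-stripe write of 128k (both data columns at once):
FULL_READS=0   # parity computed entirely from data already in memory
FULL_WRITES=3  # two data chunks + parity
echo "full-stripe write:    $FULL_READS reads, $FULL_WRITES writes"
```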
# mdadm --create /dev/md15 --level=5 --raid-devices=3 -c 64 /dev/sd[hvi]1
mdadm: array /dev/md15 started.
When sw=1,su=128k is forced:
# cat /proc/mdstat | more
...
md15 : active raid5 sdv1[2] sdi1[1] sdh1[0]
78139904 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
# mkfs.xfs -f -d sw=1,su=128k /dev/md15
mkfs.xfs: Specified data stripe unit 256 is not the same as the volume
stripe unit 128
meta-data=/dev/md15 isize=256 agcount=16, agsize=1220928 blks
= sectsz=512
data = bsize=4096 blocks=19534848, imaxpct=25
= sunit=32 swidth=32 blks, unwritten=1
naming =version 2 bsize=4096
log =internal log bsize=4096 blocks=9568, version=1
= sectsz=512 sunit=0 blks
realtime =none extsz=131072 blocks=0, rtextents=0
By default, though, mkfs.xfs detects the former geometry:
# mkfs.xfs -f /dev/md15
meta-data=/dev/md15 isize=256 agcount=16, agsize=1220944 blks
= sectsz=512
data = bsize=4096 blocks=19534976, imaxpct=25
= sunit=16 swidth=32 blks, unwritten=1
naming =version 2 bsize=4096
Please note that the default created here is sunit=16, swidth=32 blks (i.e., su=64k, sw=2).
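The different numbers above are the same geometry in different units: the mkfs.xfs error message reports stripe units in 512-byte sectors, while the summary table reports them in 4096-byte filesystem blocks. A quick sanity check (sizes taken from the output above):

```shell
#!/bin/sh
# Unit conversion behind the mkfs.xfs messages above.
SU_BYTES=$((64 * 1024))   # su=64k (the md chunk size)
SECT=512                  # sector size (sectsz=512 in the output)
BSIZE=4096                # filesystem block size (bsize=4096 in the output)

SUNIT_SECTORS=$((SU_BYTES / SECT))   # units used in the error message
SUNIT_BLOCKS=$((SU_BYTES / BSIZE))   # units used in the "blks" summary line
echo "su=64k -> sunit=${SUNIT_SECTORS} sectors = ${SUNIT_BLOCKS} blks"
# su=128k would give 256 sectors, matching "stripe unit 256" in the error;
# the 64k md chunk gives 128 sectors, matching "volume stripe unit 128".
```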
bridavis@comcast.net wrote:
> I'm getting conflicting reports about how I should choose my sunit/swidth values for hardware RAID 5.
>
> Setup: hardware RAID 5, 3 disks at 300 GBs each, 64k stripe size.
>
> Originally, following the man page and the mailing list archives, I came up with sw=2,su=64k.
>
> However, a reply to an earlier question I sent to the list indicated that the hardware RAID should be treated as a single disk, so I came up with sw=1,su=128k.
>
> Which one is correct for my setup?
>
> Thanks!
>
* Re: Differences in su/sw values for hw vs. sw RAID 5?
From: Brian Davis @ 2006-08-21 12:27 UTC (permalink / raw)
To: Shailendra Tripathi; +Cc: xfs
Maybe I'm missing something, but I'm not sure how the information below
maps to setting the values on hardware RAID.
A nice feature of XFS is that it's intelligent enough to figure out the
proper values for SW RAID.
Thanks!
Shailendra Tripathi wrote:
> For a RAID-5 device, the parity has to be calculated before any write.
> If any column of the stripe is not part of the write, it has to be read
> from disk first so the parity can be recomputed and re-written. When
> writes are sized so that all columns of a stripe are written together,
> the parity can be calculated directly and written without incurring any
> extra read I/O, and that's why declaring the geometry in that form is
> desirable. Someone correct me if I am wrong.
>
> # mdadm --create /dev/md15 --level=5 --raid-devices=3 -c 64 /dev/sd[hvi]1
> mdadm: array /dev/md15 started.
>
> When sw=1,su=128k is forced:
> # cat /proc/mdstat | more
> ...
> md15 : active raid5 sdv1[2] sdi1[1] sdh1[0]
> 78139904 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
> # mkfs.xfs -f -d sw=1,su=128k /dev/md15
> mkfs.xfs: Specified data stripe unit 256 is not the same as the volume
> stripe unit 128
> meta-data=/dev/md15 isize=256 agcount=16, agsize=1220928 blks
> = sectsz=512
> data = bsize=4096 blocks=19534848, imaxpct=25
> = sunit=32 swidth=32 blks, unwritten=1
> naming =version 2 bsize=4096
> log =internal log bsize=4096 blocks=9568, version=1
> = sectsz=512 sunit=0 blks
> realtime =none extsz=131072 blocks=0, rtextents=0
>
> By default, though, mkfs.xfs detects the former geometry:
>
> # mkfs.xfs -f /dev/md15
> meta-data=/dev/md15 isize=256 agcount=16, agsize=1220944 blks
> = sectsz=512
> data = bsize=4096 blocks=19534976, imaxpct=25
> = sunit=16 swidth=32 blks, unwritten=1
> naming =version 2 bsize=4096
>
> Please note that the default created here is sunit=16, swidth=32 blks (i.e., su=64k, sw=2).
> bridavis@comcast.net wrote:
>> I'm getting conflicting reports about how I should choose my
>> sunit/swidth values for hardware RAID 5.
>>
>> Setup: hardware RAID 5, 3 disks at 300 GBs each, 64k stripe size.
>>
>> Originally, following the man page and the mailing list archives, I
>> came up with sw=2,su=64k.
>> However, a reply to an earlier question I sent to the list indicated
>> that the hardware RAID should be treated as a single disk, so I came
>> up with sw=1,su=128k.
>>
>> Which one is correct for my setup?
>>
>> Thanks!
>>