public inbox for linux-xfs@vger.kernel.org
* XFS over LVM over md RAID
@ 2010-09-09 22:58 Richard Scobie
  2010-09-10  0:25 ` Michael Monnerie
  2010-09-10  1:30 ` Dave Chinner
  0 siblings, 2 replies; 10+ messages in thread
From: Richard Scobie @ 2010-09-09 22:58 UTC (permalink / raw)
  To: xfs

Using the latest stable versions of LVM2 and xfsprogs and the 2.6.35.4
kernel, I am setting up LVM on a 16-drive, 256k-chunk md RAID6, which
until now has been used with XFS directly on the RAID.

mkfs.xfs directly on the RAID gives:

meta-data=/dev/md8               isize=256    agcount=32, agsize=106814656 blks
         =                       sectsz=4096  attr=2
data     =                       bsize=4096   blocks=3418068864, imaxpct=5
         =                       sunit=64     swidth=896 blks
naming   =version 2              bsize=4096   ascii-ci=0

which gives the correct sunit and swidth values for the array.

Creating an LV which uses the entire array and running mkfs.xfs on it gives:

meta-data=/dev/vg_local/Storage  isize=256    agcount=13, agsize=268435455 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=3418067968, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0

Limited testing using dd and bonnie++ shows no difference in write
performance whether I use sunit=64/swidth=896 or sunit=0/swidth=0 on the LV.

My gut reaction is that I should be using 64/896, but maybe mkfs.xfs
knows better?
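For reference, the geometry can be passed to mkfs.xfs explicitly with the su/sw options rather than relying on autodetection. The sketch below only derives and prints the values from the array layout described above (16-drive RAID6, 256k chunk, so 14 data disks); it does not run mkfs, and the LV path is the one from the output above:

```shell
# Derive mkfs.xfs su/sw for the array described above.
# Assumptions: 16-drive RAID6 with a 256k chunk => 14 data disks.
chunk_kib=256
ndisks=16
parity=2
data_disks=$((ndisks - parity))

# su = md chunk size, sw = number of data disks
echo "mkfs.xfs -d su=${chunk_kib}k,sw=${data_disks} /dev/vg_local/Storage"

# Equivalent sunit/swidth in 4k filesystem blocks, as mkfs reports them:
echo "sunit=$((chunk_kib / 4)) swidth=$(( (chunk_kib / 4) * data_disks )) blks"
```

This reproduces the sunit=64/swidth=896 figures that mkfs.xfs computes when run directly on /dev/md8.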

Regards,

Richard

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

* Re: XFS over LVM over md RAID
@ 2010-09-10 23:08 Richard Scobie
  0 siblings, 0 replies; 10+ messages in thread
From: Richard Scobie @ 2010-09-10 23:08 UTC (permalink / raw)
  To: xfs

Stan Hoeppner wrote:

> What is the reasoning behind adding so many terabytes under a single filesystem?

Heavily scripted project environments, where initial storage estimates
are exceeded and more capacity needs to be added without the
complications of managing separate filesystems part way through.

It is unlikely that more than 2 arrays would be involved and I used the 
example to try and understand how XFS adapts to changing topologies.
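As a sketch of the grow path implied here (the second array's device name and the mountpoint are assumptions, not from the thread), adding a second md array to the same filesystem would go through vgextend, lvextend and an online xfs_growfs. The script below only prints the command sequence rather than running it:

```shell
# Hypothetical grow sequence when a second array (/dev/md9) is added;
# device names and mountpoint are illustrative assumptions.
for cmd in \
  "pvcreate /dev/md9" \
  "vgextend vg_local /dev/md9" \
  "lvextend -l +100%FREE /dev/vg_local/Storage" \
  "xfs_growfs /mnt/storage"
do
  echo "$cmd"
done
```

Note that after such a grow, the filesystem's original sunit/swidth no longer match the combined topology, which is the adaptation question raised above.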

Regards,

Richard




Thread overview: 10+ messages
2010-09-09 22:58 XFS over LVM over md RAID Richard Scobie
2010-09-10  0:25 ` Michael Monnerie
2010-09-10  0:52   ` Richard Scobie
2010-09-10  1:14   ` Richard Scobie
2010-09-10  1:30 ` Dave Chinner
2010-09-10  2:29   ` Richard Scobie
2010-09-10 14:24     ` Eric Sandeen
2010-09-10 21:42       ` Richard Scobie
2010-09-10 22:19         ` Stan Hoeppner
  -- strict thread matches above, loose matches on Subject: below --
2010-09-10 23:08 Richard Scobie
