public inbox for linux-xfs@vger.kernel.org
 help / color / mirror / Atom feed
* Optimal mkfs settings for md RAID0 over 2x3ware RAIDS
@ 2008-01-14 13:23 andrewl733
  2008-01-14 22:55 ` David Chinner
  0 siblings, 1 reply; 3+ messages in thread
From: andrewl733 @ 2008-01-14 13:23 UTC (permalink / raw)
  To: xfs

Hello XFS list,

I am trying to figure out the optimal mkfs settings for a large array (i.e., 18 TB) consisting of 2 or 4 PHYSICAL 3ware RAID-5 arrays striped together with Linux software RAID-0. As far as I can tell, this question about combining physical and software RAID has not been asked or answered on the list. 

As I understand it, for a SINGLE 12-drive 3ware PHYSICAL hardware RAID-5 created with a 3ware-defined "stripe size" of 64K, the optimal mkfs setting should be: 

mkfs.xfs -d su=64k,sw=11 /dev/sdX


The question is, what is optimal if I stripe together TWO of these physical hardware RAID-5 arrays as a SOFTWARE RAID-0. Casual testing shows that striping two PHYSICAL RAIDs together in this way can yield a performance gain of approximately 60 percent over a single 12-drive array. But in order to optimize the RAID-0 device, would the correct mkfs be: 

mkfs.xfs -d su=64k,sw=22 /dev/mdX

There are now 24 drives minus two for parity. Is the logic correct here? 

Regards, 
Andrew

________________________________________________________________________
More new features than ever.  Check out the new AOL Mail ! - http://webmail.aol.com


[[HTML alternate version deleted]]

^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: Optimal mkfs settings for md RAID0 over 2x3ware RAIDS
  2008-01-14 13:23 Optimal mkfs settings for md RAID0 over 2x3ware RAIDS andrewl733
@ 2008-01-14 22:55 ` David Chinner
  2008-01-15  3:02   ` ***** SUSPECTED SPAM ***** " andrewl733
  0 siblings, 1 reply; 3+ messages in thread
From: David Chinner @ 2008-01-14 22:55 UTC (permalink / raw)
  To: andrewl733; +Cc: xfs

On Mon, Jan 14, 2008 at 08:23:44AM -0500, andrewl733@aol.com wrote:
> Hello XFS list,
> 
> I am trying to figure out the optimal mkfs settings for a large
> array (i.e., 18 TB) consisting of 2 or 4 PHYSICAL 3ware RAID-5
> arrays striped together with Linux software RAID-0. As far as I
> can tell, this question about combining physical and software RAID
> has not been asked or answered on the list. 
> As I understand it, for a SINGLE 12-drive 3ware PHYSICAL Hardware
> RAID-5 created with a 3-ware-defined "stripe size" of 64K, the
> optimal mkfs setting should be: 
> 
> mkfs.xfs -d su=64k,sw=11 /dev/sdX
> 
> The question is, what is optimal if I stripe together TWO of these
> Physical Hardware RAID-5 arrays as a SOFTWARE RAID-0. Casual
> testing shows striping together two PHYSICAL RAIDs as such can
> yield a gain in performance of approximately 60 percent versus
> 12-drives. But in order to optimize the RAID-0 device, would the
> correct mkfs be: 
> 
> mkfs.xfs -d su=64k,sw=22 /dev/mdX
> 
> There are now 24 drives minus two for parity. Is the logic correct here? 

Depends on your workload and file mix. For lots of small files,
the above will work fine. For maximum bandwidth, it will suck.

For maximum bandwidth you want XFS to align to the start of a RAID5
lun and do full RAID5 stripe width allocations so that large
allocations do not partially overlap RAID5 luns.

i.e. with what you suggested, an allocation of 22x64k (full
filesystem stripe width) will only be aligned to the underlying
hardware in 2 of the 22 possible places it could be allocated with
64k alignment. In the other 20 cases, you'll get one full RAID5
write to one lun, and two sets of partial RMW cycles to the other
lun because they are not full RAID5 stripe writes.  That will be
slow.

With su=11*64k,sw=2, a 22x64k allocation will always be aligned to
the underlying geometry (until you start to run out of space) and
hence both luns will do a full RAID5 stripe write and it will be
fast.
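
[Editor's illustration, not from the thread.] Dave's 2-in-22 figure can be checked with a short sketch; it assumes 64k placement granularity and 11 data disks (one full-stripe write = 11 x 64k) per RAID5 lun:

```python
# Illustrative sketch: for each 64k-aligned placement of an allocation,
# check whether it starts on a RAID5 lun full-stripe boundary (11 * 64k).
KIB = 1024
CHUNK = 64 * KIB            # 3ware "stripe size" per disk
LUN_STRIPE = 11 * CHUNK     # one full-stripe write on a 12-disk RAID5 (704k)

def aligned_placements(su, n=22):
    """Of n consecutive placements at multiples of su, count how many
    start on a lun stripe boundary."""
    return sum((i * su) % LUN_STRIPE == 0 for i in range(n))

print(aligned_placements(64 * KIB))        # su=64k,sw=22  -> 2 of 22
print(aligned_placements(11 * 64 * KIB))   # su=704k,sw=2  -> 22 of 22
```

With su=64k only placements 0 and 11 land on a stripe boundary, matching the "2 of the 22 possible places" above; with su=11*64k every placement does.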

Cheers,

Dave.
-- 
Dave Chinner
Principal Engineer
SGI Australian Software Group

^ permalink raw reply	[flat|nested] 3+ messages in thread

* ***** SUSPECTED SPAM *****  Re: Optimal mkfs settings for md RAID0 over 2x3ware RAIDS
  2008-01-14 22:55 ` David Chinner
@ 2008-01-15  3:02   ` andrewl733
  0 siblings, 0 replies; 3+ messages in thread
From: andrewl733 @ 2008-01-15  3:02 UTC (permalink / raw)
  To: dgc; +Cc: xfs

Thanks for your speedy reply. Please see a follow up question below. 
> On Mon, Jan 14, 2008 at 08:23:44AM -0500, andrewl733@aol.com wrote:
> > Hello XFS list,
> > 
> > I am trying to figure out the optimal mkfs settings for a large
> > array (i.e., 18 TB) consisting of 2 or 4 PHYSICAL 3ware RAID-5
> > arrays striped together with Linux software RAID-0. As far as I
> > can tell, this question about combining physical and software RAID
> > has not been asked or answered on the list. 
> > As I understand it, for a SINGLE 12-drive 3ware PHYSICAL hardware
> > RAID-5 created with a 3ware-defined "stripe size" of 64K, the
> > optimal mkfs setting should be: 
> > 
> > mkfs.xfs -d su=64k,sw=11 /dev/sdX
> > 
> > The question is, what is optimal if I stripe together TWO of these
> > Physical Hardware RAID-5 arrays as a SOFTWARE RAID-0. Casual
> > testing shows striping together two PHYSICAL RAIDs as such can
> > yield a gain in performance of approximately 60 percent versus
> > 12-drives. But in order to optimize the RAID-0 device, would the
> > correct mkfs be: 
> > 
> > mkfs.xfs -d su=64k,sw=22 /dev/mdX
> > 
> > There are now 24 drives minus two for parity. Is the logic correct here? 
> 
> Depends on your workload and file mix. For lots of small files,
> the above will work fine. For maximum bandwidth, it will suck.
> 
> For maximum bandwidth you want XFS to align to the start of a RAID5
> lun and do full RAID5 stripe width allocations so that large
> allocations do not partially overlap RAID5 luns.
> 
> i.e. with what you suggested, an allocation of 22x64k (full
> filesystem stripe width) will only be aligned to the underlying
> hardware in 2 of the 22 possible places it could be allocated with
> 64k alignment. In the other 20 cases, you'll get one full RAID5
> write to one lun, and two sets of partial RMW cycles to the other
> lun because they are not full RAID5 stripe writes.  That will be
> slow.
> 
> With su=11*64k,sw=2, a 22x64k allocation will always be aligned to
> the underlying geometry (until you start to run out of space) and
> hence both luns will do a full RAID5 stripe write and it will be
> fast.


In fact, I am testing a 64-drive SAS array today -- 4 x 16-drive RAID-5
arrays striped together with Linux RAID-0. By your instructions, I
should do mkfs as follows: 

mkfs.xfs -d su=960k,sw=4 /dev/mdX    (where su = 15*64k)
However, I get back the following message: 

mkfs.xfs: Specified data stripe unit 1920 is not the same as the volume stripe unit 512
mkfs.xfs: Specified data stripe width 7680 is not the same as the volume stripe width 2048

The filesystem gets created. What's wrong here? In this case I have
chosen to use a Linux md RAID-0 "chunk size" of 256k. I get a similar
message (with different numbers, of course) if I use a "chunk size" of
64k. 



Is there an optimal ratio of 3ware "stripe size" to Linux md "chunk size" that also must come into play here? 
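
[Editor's illustration, not from the thread.] The warning's numbers are sector counts (512-byte sectors, as mkfs.xfs reports them), and the "volume" values are the geometry md itself advertises. A small sketch decoding them back to KiB:

```python
# Illustrative decoding of the mkfs.xfs warning (assumes 512-byte sectors).
SECTOR = 512

def to_kib(sectors):
    return sectors * SECTOR // 1024

# Numbers from the warning above:
print(to_kib(1920), to_kib(512))    # specified su 960k vs md's 256k chunk
print(to_kib(7680), to_kib(2048))   # specified sw 3840k vs md's 1024k (4 x 256k)
```

On this reading the specified values are internally consistent (sw is 4 x su in both rows); mkfs.xfs is only reporting that the hand-specified geometry differs from the md chunk geometry it probed, and, as noted above, it still creates the filesystem with the values given.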



Thanks again in advance. 



Andrew

[[HTML alternate version deleted]]

^ permalink raw reply	[flat|nested] 3+ messages in thread

end of thread, other threads:[~2008-01-15  3:02 UTC | newest]

Thread overview: 3+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2008-01-14 13:23 Optimal mkfs settings for md RAID0 over 2x3ware RAIDS andrewl733
2008-01-14 22:55 ` David Chinner
2008-01-15  3:02   ` ***** SUSPECTED SPAM ***** " andrewl733

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox