From: Eric Sandeen <sandeen@sandeen.net>
To: Mike Jensen <Mike.Jensen@dothill.com>
Cc: "xfs@oss.sgi.com" <xfs@oss.sgi.com>
Subject: Re: XFS - configuration for multi-thread high-speed streams
Date: Wed, 16 Oct 2013 10:05:00 -0500 [thread overview]
Message-ID: <525EAB1C.6020504@sandeen.net> (raw)
In-Reply-To: <C08036CF9E2BAC498EFDA29CB9BDF25B34B9D1E06D@dc-ex1.power.com>
On 10/15/13 3:45 PM, Mike Jensen wrote:
> Hi Group - first-time post; it looks like I should just use the email address I have seen in other posts
>
> If the setup/info/questions below are not suitable for this list, is there another XFS list available for them?
>
User questions are appropriate here, but since this is a Red Hat distro kernel, not upstream, it's probably better to engage Red Hat for information on optimal tuning of something like this.
-Eric
>
> Thanks
>
> mike
>
>
>
> Configuration –
>
>
>
> - RHEL 6.4 on an HP G8 server with 16 GB memory (could add a lot more)
>
> - 12Gb SAS connections (2) to storage
>
> - Storage
>
> - 96 LFF HDDs (4 TB 7K NL-SAS), presently organized as 8 x 10+2 RAID6 raidsets
>
> - 8 volumes mapped to host ports (MPIO engaged)
>
> - mdadm used to stripe the (/dev/dm*) LUNs into a single md device (approx. 300+ TB of file system space)
>
> - mkfs.xfs used to lay down the file system
>
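FWIW, the md + mkfs steps above might be sketched as below. The device names and the 1024 KiB md chunk are assumptions, not values taken from this setup, and mkfs.xfs will usually pick up md geometry on its own; the point is only that the XFS stripe unit (su) should match the md chunk and the stripe width (sw) the number of LUNs, so allocations align to the stripe:

```shell
# Sketch only: device names and chunk size are assumed, not taken from
# the configuration above.  Ideally the md chunk is also a multiple of
# one RAID6 full stripe on the back-end array.
CHUNK_KB=1024   # assumed md chunk size, in KiB
LUNS=8          # LUNs striped together by md

# Build the command lines; echoed here rather than run, since they
# require root and the real devices.
MDADM_CMD="mdadm --create /dev/md0 --level=0 --raid-devices=${LUNS} --chunk=${CHUNK_KB}K /dev/dm-[0-7]"
MKFS_CMD="mkfs.xfs -d su=${CHUNK_KB}k,sw=${LUNS} /dev/md0"
echo "$MDADM_CMD"
echo "$MKFS_CMD"
```

After mkfs, `xfs_info` on the mounted file system will report the sunit/swidth it actually chose, which is worth checking against the array geometry.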
>
> - Workload
>
> - Application creates a file of specified length in a directory under the mount point; IOs are then issued to the file(s), and each file receives one stream of IO
>
> - 1 x high-speed stream, 1000 MB/s sequential write (IOs arrive as 512 KB; a 4 MB frame would be 8 x 512 KB IOs)
>
> - 2 x medium-speed streams, 200 MB/s sequential write (same IO pattern)
>
> - 30-50 low-speed streams, 10 MB/s sequential write (same IO pattern)
>
>
>
> Objectives/Questions –
>
>
>
> - Would like to optimize xfs/mount parameters to make maximal use of storage assets
>
> - Thinking of using a 15K HDD RAID1 set for the log - would this get all/most of the metadata or just a subset? Right now we are seeing metadata writes arriving with data writes and want to peel them off the 7K HDDs
>
> - Would using sub-directories for each file ensure that each subdir/file would be in its own allocation group? That would keep each stream's data in a separate AG - which could help the storage destage writes from cache to disk more efficiently
>
>
>
>
>
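On the log question: a separate log device holds only the XFS journal writes; metadata blocks are still written back to the data device later, so it peels off log traffic rather than all metadata IO. A sketch of how that would be set up, with /dev/sdx1 standing in for an assumed partition on the 15K RAID1 set:

```shell
# Sketch only: /dev/sdx1 is an assumed partition on the 15K RAID1 set.
# The external device holds just the journal; metadata writeback still
# goes to the data device (/dev/md0).
mkfs.xfs -l logdev=/dev/sdx1 /dev/md0
mount -o logdev=/dev/sdx1,inode64 /dev/md0 /mnt/streams
```

On the AG question: with the inode64 allocator, new directories tend to be placed in different allocation groups, and files allocate near their parent directory, so per-stream subdirectories are one way to keep the streams' extents apart. Worth verifying with xfs_bmap on the resulting files.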
> _______________________________________________
> xfs mailing list
> xfs@oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs
>
Thread overview: 2 messages
2013-10-15 20:45 XFS - configuration for multi-thread high-speed streams Mike Jensen
2013-10-16 15:05 ` Eric Sandeen [this message]