From: Ric Wheeler <ricwheeler@gmail.com>
To: Marcin Sura <mailing-lists@sura.pl>, xfs@oss.sgi.com
Subject: Re: xfs + 100TB+ storage + lots of small files + NFS
Date: Sun, 10 Jul 2016 12:24:22 +0300 [thread overview]
Message-ID: <0f365b8a-16e2-7814-06d2-d452e80c7c07@gmail.com> (raw)
In-Reply-To: <CACNifpXnSMcm6EfSjqLFPXXfqv7XWd4Yu_=UhqTcdn6-o+49Yw@mail.gmail.com>
On 07/09/2016 02:14 PM, Marcin Sura wrote:
> Hi,
>
> A friend of mine asked me to evaluate XFS for their purposes. I currently
> don't have physical access to their system, but here is the information I
> have so far:
>
> SAN:
> - physical storage is from FSC array, thin provisioned raid 6 volume,
> - volumes are 100TB+ in size
> - there are SSD disks in the array, which potentially can be used for journal
> - storage is connected to the host via 10GbE iSCSI
>
> Host:
> - They are using CentOS 6.5, with stock kernel 2.6.32-*
> - System uses all default values, no optimization has been done
> - OS installed on SSD
> - Don't know exact details of CPU, but I assume some recent multicore CPU
> - Don't know amount of RAM installed, I assume 32GB+
>
> NFS:
> - they are exporting the filesystem via NFS to 10-20 clients (services), some VMs,
> some bare metal
> - clients are connected via 1GbE or 10GbE links
>
> Workload:
> - they are storing tens or hundreds of millions of small files
> - files are not in a single directory
> - files are under 1K, usually 200 - 500 bytes
> - I assume that some NFS clients constantly write files
> - some NFS clients initiate massive reads of millions of random files
> - those reads are on demand, but during peak hours there can be many such
> requests
>
> So far they were using Ext4; after some basic tests of XFS they observed a 40%
> improvement in application counters. But I'm afraid those tests were done
> in an environment not even close to production (a much smaller filesystem,
> far fewer files).
>
> I want to ask you what the best mkfs.xfs settings would be for such a setup.
>
> I assume that they should use the inode64 mount option for such a large
> filesystem with that many files, but I'm a bit worried about compatibility
> with the NFS server (the default shipped with CentOS 6.5). I think inode32 is
> totally out of scope here.
>
> Any other hints for setting this stuff up?
> Probably some recent OS/kernel would also help a lot, right?
>
> Also, do you know of any benchmark that can be used to simulate such a
> workload? I've googled a lot, but the list of multi-threaded, small-file
> oriented benchmarks is quite short. To be honest, the only one I've found
> that comes close to what I need is
> https://github.com/bengland2/smallfile. Any other alternatives?
>
> BR
> Marcin
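On the mkfs.xfs question: a minimal sketch of what format and mount might look like for a big thin-provisioned RAID6 volume. The device path, stripe geometry, and export path below are placeholder assumptions, not recommendations - mkfs.xfs computes good defaults from the device, so measure before deviating:

```shell
# Placeholder device name. Stripe unit/width (su/sw) should match the real
# RAID6 geometry; 64k chunk with 8 data disks is an assumed example.
mkfs.xfs -f -d su=64k,sw=8 /dev/mapper/bigvol

# inode64 lets inodes be allocated anywhere on the volume, which matters at
# 100TB+ with this many files. It became the default on later kernels but
# must be requested explicitly on a 2.6.32-era CentOS 6 kernel. The Linux
# NFS server copes with 64-bit inode numbers; only very old 32-bit NFS
# clients tend to have trouble with them.
mount -o inode64 /dev/mapper/bigvol /export/data
```
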
I think that is a good test to explore - Ben wrote it for exactly this kind of
workload.
For a single system (i.e., the performance of a single NFS client or local file
system), you could also test using fs_mark.
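The core of the create side of this workload is also easy to sketch yourself for a quick sanity check. Below is a minimal, illustrative Python version (the function names and parameters are mine, not from smallfile or fs_mark): several threads each write many few-hundred-byte files spread across subdirectories:

```python
import os
import threading
import time

def write_small_files(top, thread_id, n_files, size=400, files_per_dir=1000):
    """Create n_files files of `size` bytes, spread across subdirectories."""
    payload = b"x" * size
    for i in range(n_files):
        # Each thread writes into its own tree, batched into subdirectories
        # so no single directory grows unboundedly (as in the real workload).
        d = os.path.join(top, "t%d" % thread_id, "d%d" % (i // files_per_dir))
        os.makedirs(d, exist_ok=True)
        with open(os.path.join(d, "f%d" % i), "wb") as f:
            f.write(payload)

def run(top, n_threads=4, files_per_thread=2000):
    """Run the writers concurrently; return (total files, elapsed seconds)."""
    threads = [
        threading.Thread(target=write_small_files,
                         args=(top, t, files_per_thread))
        for t in range(n_threads)
    ]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.time() - start
    return n_threads * files_per_thread, elapsed

if __name__ == "__main__":
    total, elapsed = run("/mnt/test/smallfiles")
    print("%d files in %.1fs (%.0f files/s)" % (total, elapsed, total / elapsed))
```

Varying the thread count and files_per_dir is where filesystems tend to differ most on this kind of load, so those are the knobs worth sweeping.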
Regards,
Ric
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
Thread overview: 3+ messages in thread
2016-07-09 11:14 xfs + 100TB+ storage + lots of small files + NFS Marcin Sura
2016-07-10 9:24 ` Ric Wheeler [this message]
2016-07-10 23:48 ` Dave Chinner