From: "R. Jason Adams" <rjasonadams@gmail.com>
To: Dave Chinner <david@fromorbit.com>
Cc: linux-xfs@vger.kernel.org
Subject: Re: Suggested XFS setup/options for 10TB file system w/ 18-20M files.
Date: Tue, 3 Oct 2017 14:10:57 -0400	[thread overview]
Message-ID: <2F69CA38-6029-4D9B-BA3C-FD793E613693@gmail.com> (raw)
In-Reply-To: <20171002231420.GI3666@dastard>

> 
> With 4k directory block size and your write heavy workload, you
> could get away with just 10 directories. However, it'd probably be
> better to use a single level 100-directory wide hash to bring it
> down to less than 200k files per directory….


Moved over to a single level with 100 directories.
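
For reference, here's the scheme I'm using to spread files across the 100 directories -- a minimal sketch, with cksum standing in for whatever hash the application actually uses, and /data/00 through /data/99 as hypothetical paths:

    # create the single-level, 100-wide directory hash
    for i in $(seq 0 99); do mkdir -p "/data/$(printf '%02d' "$i")"; done

    # pick the bucket for a given file name
    name="some-object-key"
    h=$(printf '%s' "$name" | cksum | cut -d' ' -f1)
    dest="/data/$(printf '%02d' $(( h % 100 )))/$name"

At 18-20M files that works out to roughly 180-200k files per directory, in line with the target above.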


> Small files should be a single extent, so there's heaps of room for
> a 200 byte xattr in the inode. Using 512 byte inodes will halve
> memory demand for caching inode buffers….

Moved to 512-byte inodes.
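
In case it's useful to anyone following along, the mkfs invocation is just the defaults plus the inode size (device name is a placeholder):

    # 512-byte inodes, everything else left at mkfs.xfs defaults
    mkfs.xfs -i size=512 /dev/sdX

    # sanity-check after mounting: the isize field should read 512
    xfs_info /mnt/point | head -1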

> In general, use the defaults and don't add anything extra unless you
> know it solves a specific problem you've witnessed in testing…

Moved to the defaults.

> 
> Most likely going to be metadata writeback of inode buffers
> requiring RMW based on experience with gluster and ceph having
> exactly the same problems.  Use blktrace to identify what the reads
> are, and see if those same blocks are written later on. An IO marked
> "M" is a metadata IO. Post the blktrace output of the bits you
> find relevant.

Reformatted the drive and it's refilling. With the suggested changes (100 directories, 512-byte inodes, defaults elsewhere) it already seems better. We're currently at 6% full, and reads are quite a bit lower than they were before at similar fullness. One thing I'm noticing in Grafana: read requests/s keep increasing (up to ~8/s) for around 15 minutes, then drop to ~1/s for 10-15 minutes, then build back up over the next 15 minutes, and so on.
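
Next time the reads ramp up I'll capture a trace along these lines -- standard blktrace/blkparse usage, with the device and output names as placeholders; in blkparse's default output the RWBS column (field 7) carries an 'M' for metadata IO:

    # capture ~60s of IO on the device while the reads are climbing
    blktrace -d /dev/sdX -w 60 -o trace

    # decode, keeping only metadata requests
    blkparse -i trace | awk '$7 ~ /M/'

That should make it easy to see whether the same blocks show up again later as writes.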

> FWIW, how much RAM do you have in the system, and what does 'echo
> 200 > /proc/sys/fs/xfs/xfssyncd_centisecs' do to the behaviour?

System has 24G of RAM. I'm guessing a move to 96 or 192G would help a lot; in the end the system will have 36 of these 10TB drives.
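
For reference, the knob in question and its default (3000 centisecs, i.e. metadata writeback every 30 seconds):

    # current setting; 3000 (30s) is the default
    cat /proc/sys/fs/xfs/xfssyncd_centisecs

    # push dirty metadata out every 2s instead, as suggested
    echo 200 > /proc/sys/fs/xfs/xfssyncd_centisecs

I'll try that and watch what it does to the read pattern.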

I want to thank you and Eric for the time you’ve taken to help. Feels good to make some progress on this issue.

-R. Jason Adams



Thread overview: 8+ messages
2017-10-02 13:14 Suggested XFS setup/options for 10TB file system w/ 18-20M files R. Jason Adams
2017-10-02 13:36 ` Eric Sandeen
2017-10-02 13:49   ` R. Jason Adams
2017-10-02 14:10     ` R. Jason Adams
2017-10-02 14:12     ` Eric Sandeen
2017-10-02 23:14 ` Dave Chinner
2017-10-03 18:10   ` R. Jason Adams [this message]
2017-10-03 20:32     ` Dave Chinner
