public inbox for linux-xfs@vger.kernel.org
From: "Darrick J. Wong" <darrick.wong@oracle.com>
To: Nick Fisk <friskyfisk10@googlemail.com>
Cc: xfs@oss.sgi.com
Subject: Re: High Fragmentation with XFS and NFS Sync
Date: Sat, 2 Jul 2016 13:12:49 -0700	[thread overview]
Message-ID: <20160702201249.GH4917@birch.djwong.org> (raw)
In-Reply-To: <CAC5UwBi8Skjx90_XC5Z5B8P+CadawBZ3iUabKtm-2ZvrkgocZQ@mail.gmail.com>

On Sat, Jul 02, 2016 at 09:52:40AM +0100, Nick Fisk wrote:
> Hi, hope someone can help me here.
> 
> I'm exporting some XFS filesystems to ESX via NFS with the sync option
> enabled. I'm seeing really heavy fragmentation when multiple VMs are copied
> onto the share at the same time. I'm also seeing kmem_alloc failures, which
> is probably the biggest problem as it effectively takes everything down.

(Probably a result of loading the millions of bmbt extents into memory?)
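
A quick way to see the damage, using placeholder device and paths, would be
something like:

  # list the extent map (and extent count) of a single file
  xfs_bmap -v /mnt/export/some-vm-disk.vmdk

  # read-only, filesystem-wide report of actual vs. ideal extent counts
  xfs_db -r -c frag /dev/rbd0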

> The underlying storage is a Ceph RBD, and the server the FS is running on
> is running kernel 4.5.7. Mount options are currently the defaults. Running
> xfs_db shows millions of extents where the ideal is listed as a couple of
> thousand, and there are only a couple of hundred files on the FS. The
> extent sizes roughly match the IO size the VMs were written to XFS with,
> so it looks like each parallel IO thread is being allocated next to the
> others rather than at spaced-out regions of the disk.
> 
> From what I understand, this is because each NFS write opens and closes the
> file, which throws off any chance that XFS will be able to use its
> allocation features to stop parallel write streams from interleaving with
> each other.
> 
> Is there anything I can tune to give each write to each file a little bit
> of space, so that readahead at least has a chance of hitting a few MB of
> sequential data when reading back?

/me wonders if setting an extent size hint on the rootdir before copying
the files over would help here...
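
For example (untested, with a placeholder mount point), the hint can be set
with xfs_io on the export directory, and files created in it afterwards
inherit it, so allocations get rounded up to the hint size:

  # set a 16MB extent size hint on the directory; new files inherit it
  xfs_io -c "extsize 16m" /mnt/export

  # check the current hint
  xfs_io -c extsize /mnt/export

Note that the hint only affects allocations made after it's set; files that
are already fragmented would still need defragmenting.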

--D

> 
> I have read that inode32 allocates more randomly compared to inode64, so
> I'm not sure whether it's worth trying, as there will likely be fewer than
> 1000 files per FS.
> 
> Or am I best just to run fsr after everything has been copied on?
> 
> Thanks for any advice
> Nick
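
On the question of running fsr afterwards: xfs_fsr can reorganize the files
in place once the copies have finished, e.g. (placeholder mount point):

  # defragment files under the mount point, verbose output
  xfs_fsr -v /mnt/export

though it only reallocates data that's already there and won't stop fresh
writes from fragmenting again.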

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

Thread overview: 10+ messages
2016-07-02  8:52 High Fragmentation with XFS and NFS Sync Nick Fisk
2016-07-02 20:12 ` Darrick J. Wong [this message]
2016-07-02 21:00   ` Nick Fisk
2016-07-02 21:30     ` Nick Fisk
2016-07-02 22:02       ` Darrick J. Wong
2016-07-02 22:10         ` Nick Fisk
2016-07-03 12:34 ` Stefan Ring
2016-07-03 12:40   ` Nick Fisk
  -- strict thread matches above, loose matches on Subject: below --
2016-07-02 19:49 Richard Scobie
2016-07-02 20:41 ` Nick Fisk
