public inbox for linux-xfs@vger.kernel.org
* Re: High Fragmentation with XFS and NFS Sync
@ 2016-07-02 19:49 Richard Scobie
  2016-07-02 20:41 ` Nick Fisk
From: Richard Scobie @ 2016-07-02 19:49 UTC (permalink / raw)
  To: xfs

  Nick Fisk wrote:

"So it looks like each parallel IO thread is being
allocated next to each other rather than at spaced out regions of the disk."

It's possible that the "filestreams" XFS mount option may help you out. See:

 
http://www.xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/ch06s16.html
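
A minimal sketch of trying it (the device and mount point names below are placeholders, not from this thread):

```shell
# The filestreams allocator associates each parent directory with its
# own allocation group, so concurrent write streams into different
# directories are kept apart on disk instead of interleaving.
# /dev/rbd0 and /mnt/nfs_export are hypothetical names.
umount /mnt/nfs_export
mount -o filestreams /dev/rbd0 /mnt/nfs_export
```

Note that the association is per parent directory, so it only helps if each VM is copied into its own directory rather than all into one.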

Regards,

Richard

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

* High Fragmentation with XFS and NFS Sync
@ 2016-07-02  8:52 Nick Fisk
  2016-07-02 20:12 ` Darrick J. Wong
  2016-07-03 12:34 ` Stefan Ring
From: Nick Fisk @ 2016-07-02  8:52 UTC (permalink / raw)
  To: xfs



Hi, hope someone can help me here.

I'm exporting some XFS filesystems to ESX via NFS with the sync option
enabled. I'm seeing really heavy fragmentation when multiple VMs are copied
onto the share at the same time. I'm also seeing kmem_alloc failures, which
is probably the biggest problem, as this effectively takes everything down.

The underlying storage is a Ceph RBD, and the server the filesystem runs on
is on kernel 4.5.7. Mount options are currently the defaults. Running
xfs_db, I'm seeing millions of extents where the ideal is listed as a
couple of thousand, and there are only a couple of hundred files on the
filesystem. The extent sizes roughly match the IO size the VMs were written
to XFS with, so it looks like each parallel IO thread is being allocated
next to the others rather than at spaced-out regions of the disk.
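
For reference, the kind of measurement described above can be reproduced as follows (device path and file name are placeholders):

```shell
# Overall fragmentation factor: compares the actual extent count
# against the ideal for the whole filesystem.
xfs_db -r -c frag /dev/rbd0

# Per-file extent map for a single VM image, showing how small and
# interleaved the individual extents are.
xfs_bmap -v /mnt/nfs_export/vm1/disk-flat.vmdk
```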

From what I understand, this is because each NFS write opens and closes the
file, which defeats any chance that XFS's allocation heuristics can stop
parallel write streams from interleaving with each other.

Is there anything I can tune to give each write to each file a little bit
of space, so that readahead at least has a chance of hitting a few MB of
sequential data when reading?
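
Two knobs that may be worth trying, sketched below with hypothetical sizes and paths (neither is confirmed as the fix in this thread):

```shell
# 1) An inheritable per-directory extent size hint: new files created
#    under this directory get their allocations rounded up to 16 MB.
xfs_io -c 'extsize 16m' /mnt/nfs_export

# 2) Larger speculative preallocation beyond EOF for buffered writes.
mount -o remount,allocsize=16m /mnt/nfs_export
```

One caveat: speculative preallocation may be trimmed when the file is closed, so the per-write open/close pattern described above could defeat allocsize, whereas the extsize hint persists across closes.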

I have read that inode32 allocates more randomly than inode64, but I'm not
sure if it's worth trying, as there will likely be fewer than 1000 files
per filesystem.

Or am I best just to run fsr after everything has been copied on?
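
If so, a minimal sketch (the mount point is a placeholder):

```shell
# Reorganize files into fewer, larger extents; -v prints per-file
# before/after extent counts. Needs contiguous free space to work with.
xfs_fsr -v /mnt/nfs_export
```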

Thanks for any advice
Nick




Thread overview: 10+ messages
2016-07-02 19:49 High Fragmentation with XFS and NFS Sync Richard Scobie
2016-07-02 20:41 ` Nick Fisk
2016-07-02  8:52 Nick Fisk
2016-07-02 20:12 ` Darrick J. Wong
2016-07-02 21:00   ` Nick Fisk
2016-07-02 21:30     ` Nick Fisk
2016-07-02 22:02       ` Darrick J. Wong
2016-07-02 22:10         ` Nick Fisk
2016-07-03 12:34 ` Stefan Ring
2016-07-03 12:40   ` Nick Fisk
