public inbox for linux-xfs@vger.kernel.org
From: Nikolay Borisov <kernel@kyup.com>
To: Brian Foster <bfoster@redhat.com>
Cc: xfs@oss.sgi.com
Subject: Re: Failing XFS memory allocation
Date: Wed, 23 Mar 2016 17:03:18 +0200	[thread overview]
Message-ID: <56F2B036.4090306@kyup.com> (raw)
In-Reply-To: <20160323131059.GC43073@bfoster.bfoster>



On 03/23/2016 03:10 PM, Brian Foster wrote:
> On Wed, Mar 23, 2016 at 02:56:25PM +0200, Nikolay Borisov wrote:
>>
>>
>> On 03/23/2016 02:43 PM, Brian Foster wrote:
>>> On Wed, Mar 23, 2016 at 12:15:42PM +0200, Nikolay Borisov wrote:
> ...
>>> It looks like it's working to add a new extent to the in-core extent
>>> list. If this is the stack associated with the warning message (combined
>>> with the large alloc size), I wonder if there's a fragmentation issue on
>>> the file leading to an excessive number of extents.
>>
>> Yes this is the stack trace associated.
>>
>>>
>>> What does 'xfs_bmap -v /storage/loop/file1' show?
>>
>> It spews a lot of stuff but here is a summary, more detailed info can be
>> provided if you need it:
>>
>> xfs_bmap -v /storage/loop/file1 | wc -l
>> 900908
>> xfs_bmap -v /storage/loop/file1 | grep -c hole
>> 94568
>>
>> Also, what would constitute an "excessive number of extents"?
>>
> 
> I'm not sure where one would draw the line tbh, it's just a matter of
> having too many extents to the point that it causes problems in terms of
> performance (i.e., reading/modifying the extent list) or such as the
> allocation problem you're running into. As it is, XFS maintains the full
> extent list for an active inode in memory, so that's 800k+ extents that
> it's looking for memory for.

I saw in the code comments that this problem has already been identified
and that a possible solution would be to add another level of
indirection. Also, can you confirm that my understanding of the
indirection array is correct: each entry in the indirection array
(xfs_ext_irec) is responsible for 256 extents, since er_extbuf is one
page (4 KiB) and a packed extent record is 16 bytes, which gives
4096 / 16 = 256 extents per entry.
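For what it's worth, the arithmetic I'm relying on above can be sketched
as follows (assuming a 4 KiB page for er_extbuf and the 16-byte packed
on-disk extent record, xfs_bmbt_rec_t):

```shell
# Extents covered by one indirection-array entry, assuming er_extbuf is
# one 4 KiB page and each packed extent record is 16 bytes.
PAGE_SIZE=4096
EXTENT_RECORD_SIZE=16
echo $((PAGE_SIZE / EXTENT_RECORD_SIZE))   # 256
```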

> 
> It looks like that is your problem here. 800k or so extents over 878G
> looks to be about 1MB per extent. Are you using extent size hints? One
> option that might prevent this is to use a larger extent size hint
> value. Another might be to preallocate the entire file up front with
> fallocate. You'd probably have to experiment with what option or value
> works best for your workload.

By preallocating with fallocate, do you mean using fallocate with
FALLOC_FL_ZERO_RANGE rather than FALLOC_FL_PUNCH_HOLE, right? Because as
it stands the file does have holes, which presumably are being filled,
and filling a hole requires allocating a new extent, which is what
triggered the issue. Am I right in this reasoning?
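Assuming "preallocate the entire file up front" means a plain mode-0
fallocate (allocation without zeroing or hole punching), a minimal
sketch on a scratch file would be (the /tmp path here is just for
illustration):

```shell
# Sketch: preallocate space up front with plain fallocate (mode 0), so
# that later writes land in already-allocated extents instead of
# filling holes one allocation at a time. Scratch path is hypothetical.
fallocate -l $((16 * 1024 * 1024)) /tmp/prealloc-demo
stat -c %s /tmp/prealloc-demo   # 16777216
rm /tmp/prealloc-demo
```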

Currently I'm not using an extent size hint, but I will look into that.
Also, if the extent size hint is, say, 4 MB, wouldn't that cause a
fairly serious loss of space when the writes are smaller than 4 MB?
Would XFS try to perform some sort of extent coalescing, or something
else? I'm not an FS developer, but my understanding is that with a 4 MB
extent size hint, every new write, even a 256 KB one, would allocate a
new 4 MB extent, no?
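If I end up experimenting with hints, I assume it would look something
like the following (untested on my end; the paths are the ones from this
thread, and the xfs_io invocation requires a mounted XFS filesystem, so
treat this as a command sketch):

```shell
# Hypothetical sketch: set a 4 MB extent size hint on the existing file,
# so allocations for it get rounded up to 4 MB granularity.
xfs_io -c "extsize 4m" /storage/loop/file1

# Or set the hint on the parent directory so newly created files
# inherit it.
xfs_io -c "extsize 4m" /storage/loop/
```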

And a final question: when I print the contents of the inode with
xfs_db I get core.nextents = 972564, whereas invoking xfs_bmap | wc -l
on the file gives a different, varying number each time. Why the
discrepancy?
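Just to put a number on that core.nextents figure, a back-of-the-envelope
calculation (again assuming the 16-byte packed extent record and 4 KiB
indirection-array buffers):

```shell
# Rough in-core footprint of the full extent list at 972564 extents.
NEXTENTS=972564
echo $((NEXTENTS * 16))            # 15561024 bytes, ~15 MiB of records
echo $(((NEXTENTS * 16) / 4096))   # 3799 full 4 KiB er_extbuf pages
```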

Thanks a lot for taking the time to reply.




_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

Thread overview: 13+ messages
2016-03-23 10:15 Failing XFS memory allocation Nikolay Borisov
2016-03-23 12:43 ` Brian Foster
2016-03-23 12:56   ` Nikolay Borisov
2016-03-23 13:10     ` Brian Foster
2016-03-23 15:03       ` Nikolay Borisov [this message]
2016-03-23 16:58         ` Brian Foster
2016-03-23 23:00       ` Dave Chinner
2016-03-24  9:20         ` Nikolay Borisov
2016-03-24 21:58           ` Dave Chinner
2016-03-24  9:31         ` Christoph Hellwig
2016-03-24 22:00           ` Dave Chinner
2016-03-24  9:33 ` Christoph Hellwig
2016-03-24  9:42   ` Nikolay Borisov
