From: Brian Foster <bfoster@redhat.com>
To: "Arkadiusz Miśkiewicz" <arekm@maven.pl>
Cc: xfs@oss.sgi.com
Subject: Re: [FAQ] XFS speculative preallocation
Date: Fri, 21 Mar 2014 14:02:41 -0400
Message-ID: <20140321180241.GB3087@laptop.bfoster>
In-Reply-To: <201403211809.03683.arekm@maven.pl>
On Fri, Mar 21, 2014 at 06:09:03PM +0100, Arkadiusz Miśkiewicz wrote:
> On Friday 21 of March 2014, Brian Foster wrote:
> > Hi all,
> >
> > Eric had suggested we add an FAQ entry for speculative preallocation
> > since it seems to be a common question, so I offered to write something
> > up. I started with a single entry but split it into a couple Q's when it
> > turned into TL;DR fodder. ;)
> >
> > The text is embedded below for review. Thoughts on the questions or
> > content is appreciated. Also, once folks are Ok with this... how does
> > one gain edit access to the wiki?
>
> More questions or topics that can be converted to questions from me:
>
> 1) Before speculative preallocation, the kernel did things differently. AFAIK
> it wasn't the same as allocsize=64k, was it? Is there a way to get the old
> behaviour, or something similar to it?
>
>
Going by the commit log that introduced speculative preallocation, it
appears the old behavior was effectively allocsize=64k. For reference:

055388a3 xfs: dynamic speculative EOF preallocation
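So the closest approximation to the old behaviour should be mounting with a
fixed allocsize, which disables the dynamic sizing. Something like this
(untested sketch; device and mount point are placeholders):

```
# /etc/fstab sketch -- approximate the pre-055388a3 behaviour with a
# fixed 64k EOF preallocation; /dev/sdb1 and /data are placeholders
/dev/sdb1  /data  xfs  defaults,allocsize=64k  0  2
```

Note that a fixed allocsize trades the dynamic behavior for predictability;
large sequential writers may fragment more than they would with the default.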
> 2) Is there a way to see which file got some preallocation and how big that
> preallocation is? Scenario - something ate free space due to preallocation and
> from an admin point of view it would be useful to know which app did that and
> how many MB were due to preallocation (vs. real, written data).
>
>
The common scenario is when du/stat reports a larger block usage than
file size, so the question of how much extra space is allocated is just
the difference between the two. I suppose we could include a simple
example of that in the first Q.
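For example, something along these lines (a sketch; on a non-XFS filesystem
or an idle file the "extra" figure will usually just be zero):

```shell
# Compare a file's apparent size with its allocated space; post-EOF
# preallocation shows up as allocated space beyond the file size.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=4096 count=4 status=none

size=$(stat -c %s "$f")               # apparent size, bytes
alloc=$(( $(stat -c %b "$f") * 512 )) # allocated space, bytes (%b is in 512B units)
echo "size=$size alloc=$alloc extra=$(( alloc - size ))"

rm -f "$f"
```

du reports the allocated figure and ls -l the apparent size, so "du shows
much more than ls -l for a file that is still being written" is the same
observation.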
That difference isn't an accurate measure for sparse files, however.
xfs_bmap prints the extent information for a file, so it should be possible
to determine how much post-EOF space exists by looking at the extent that
covers EOF. That said, this strikes me as more "user guide" material than FAQ.
> > Linux 3.8 (and later) includes a scanner to perform background trimming
> > of files with lingering post-EOF preallocations. The scanner bypasses
> > files that have been recently
>
> How long is "recently"? Is "modified" equal to "file data modified" or "file
> data or metadata modified"?
>
I originally had something like "files that have not been modified since
last flushed to disk," which is the heuristic as I understand it. That
seemed too verbose and technical for an FAQ. I could replace "recently
modified" with "... bypasses files that are dirty ..." if that is more
useful?
> > modified to not interfere with ongoing
> > writes.
>
> In case of some app that constantly writes to files (apache web server
> writing to its logs, for example) that background trimming will never do
> anything for those files, right?
>
Most likely true. Though by the same logic, those files will eventually
use the preallocated space.
> > A 5 minute scan interval is used by default and can be adjusted
> > via the following file (value in seconds):
> >
> > /proc/sys/fs/xfs/speculative_prealloc_lifetime
> >
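(As an aside, lowering that interval is just an ordinary sysctl write. A
sketch, value in seconds; the sysctl name mirrors the proc path:)

```
# shorten the background trim scan interval from 5 minutes to 60 seconds
echo 60 > /proc/sys/fs/xfs/speculative_prealloc_lifetime

# or persistently, via an /etc/sysctl.d/ fragment:
fs.xfs.speculative_prealloc_lifetime = 60
```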
> > Although speculative preallocation can lead to reports of excess space
> > usage, the preallocated space is not permanent unless explicitly made so
> > via fallocate or a similar interface. Preallocated space can also be
> > encoded permanently in situations where file size is extended beyond a
> > range of post-EOF blocks (i.e., via truncate). Otherwise, preallocated
> > blocks are reclaimed on file close, inode reclaim, unmount or in the
> > background once file write activity subsides.
>
> So there is no mechanism that would shrink preallocations when free
> space is (almost or entirely) gone on a fs? Case: apache causes xfs to
> preallocate several GB for its /var/..../{access,error}_log (common problem
> here) and then that fs runs out of free space, causing problems for every
> app that writes to /var.
>
I noted in the second answer that the preallocation is throttled as we
near allocation limits such as no free space or quota. I think that
should cover most cases. I still have some code lying around somewhere
that forces a scan and retry in EDQUOT scenarios though. I should dust
that off...
Thanks for the reviews!
Brian
> Thanks!
>
> --
> Arkadiusz Miśkiewicz, arekm / maven.pl
>
> _______________________________________________
> xfs mailing list
> xfs@oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs