From: Dave Chinner <david@fromorbit.com>
To: Eric Sandeen <sandeen@sandeen.net>
Cc: xfs@oss.sgi.com, Brian Candler <B.Candler@pobox.com>
Subject: Re: df bigger than ls?
Date: Thu, 8 Mar 2012 13:10:54 +1100 [thread overview]
Message-ID: <20120308021054.GM3592@dastard> (raw)
In-Reply-To: <4F57A32A.5010704@sandeen.net>
On Wed, Mar 07, 2012 at 12:04:26PM -0600, Eric Sandeen wrote:
> On 3/7/12 11:16 AM, Brian Candler wrote:
> > On Wed, Mar 07, 2012 at 03:54:39PM +0000, Brian Candler wrote:
> >> core.size = 1085407232
> >> core.nblocks = 262370
> >
> > core.nblocks is correct here: space used = 262370 * 4 = 1049480 KB
> >
> > (If I add up all the non-hole extents I get 2098944 blocks = 1049472 KB
> > so there are two extra blocks of something)
> >
> > This begs the question of where stat() is getting its info from?
stat(2) also reports delayed allocation reservations that are kept
only in memory.
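The underlying mechanics are easy to see on any Linux filesystem: `ls -l` reads `st_size` (the apparent length) while `du` reads `st_blocks` (512-byte units actually accounted to the file), and the two can legitimately disagree in either direction. A minimal sketch using a sparse file (no XFS required; `stat -c` is GNU coreutils):

```shell
# Create a 1 MiB sparse file: the apparent size is 1 MiB, but no
# data blocks are allocated until something is actually written.
tmp=$(mktemp)
truncate -s 1M "$tmp"
apparent=$(stat -c %s "$tmp")    # st_size: what ls -l reports
allocated=$(stat -c %b "$tmp")   # st_blocks: 512B units, what du sums
# On most filesystems "allocated" is 0 here, far below apparent/512.
echo "apparent=${apparent} bytes, allocated=${allocated} blocks"
rm -f "$tmp"
```

Delayed allocation and speculative preallocation are the same mismatch in the other direction: `st_blocks` runs ahead of (or behind) what the on-disk extent map would suggest.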
....
> so:
>
> # dd if=/dev/zero of=bigfile bs=1M count=1100 &>/dev/null
> # ls -lh bigfile
> -rw-r--r--. 1 root root 1.1G Mar 7 11:47 bigfile
> # du -h bigfile
> 1.1G bigfile
>
> but:
>
> # rm -f bigfile
> # for I in `seq 1 1100`; do dd if=/dev/zero of=bigfile conv=notrunc bs=1M seek=$I count=1 &>/dev/null; done
> # ls -lh bigfile
> -rw-r--r--. 1 root root 1.1G Mar 7 11:49 bigfile
> # du -h bigfile
> 2.0G bigfile
This is tripping the NFS server write pattern heuristic. i.e. it is
detecting repeated open/write at EOF/close patterns and so is not
truncating away the speculative EOF reservation on close(). This
is what prevents fragmentation of files being written concurrently
with this pattern.
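The decision the heuristic makes can be modelled in a few lines. This is an illustrative sketch only (the function name and the single-counter state are assumptions, not the real kernel logic in the XFS release path): trim the speculative EOF allocation on close() unless the file has already been through at least one prior open/append-at-EOF/close cycle, which marks it as an NFS-server-like writer.

```shell
# Hedged model of the close-time decision described above.
# $1 = number of prior open/append-at-EOF/close cycles seen on this inode.
should_trim_eof_prealloc() {
    prior_cycles=$1
    if [ "$prior_cycles" -gt 0 ]; then
        # Heuristic tripped: keep the preallocation so concurrent
        # slow appenders don't interleave and fragment each other.
        echo "keep"
    else
        # One-shot writer: give the speculative space back now.
        echo "trim"
    fi
}

should_trim_eof_prealloc 0   # single dd run -> trim
should_trim_eof_prealloc 5   # repeated append loop -> keep
```

This is why Eric's single `dd` ends up at 1.1G while the 1100-iteration append loop stays at 2.0G until the inode is evicted.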
> This should get freed when the inode is dropped from the cache;
> hence your cache drop bringing it back to size.
Right. It assumes that once you've triggered that heuristic, the
preallocation needs to last for as long as the inode is in the
working set. The inode cache tracks the current working set, so the
preallocation release is tied to cache eviction.
> But there does seem to be an issue here; if I make a 4G filesystem
> and repeat the above test 3 times, the 3rd run gets ENOSPC, and
> the last file written comes up short, while the first one retains
> all its extra preallocated space:
>
> # du -hc bigfile*
> 2.0G  bigfile1
> 1.1G  bigfile2
> 907M  bigfile3
>
> Dave, is this working as intended?
Yes. Your problem is that you have a very small filesystem, which is
not the case that we optimise XFS for. :/
> I know the speculative
> preallocation amount for new files is supposed to go down as the
> fs fills, but is there no way to discard prealloc space to avoid
> ENOSPC on other files?
We don't track what files have current active preallocations, we
only reduce the preallocation size as the filesystem nears ENOSPC.
This generally works just fine in situations where the filesystem
size is significantly greater than the maximum extent size, i.e. the
common case.
The problem you are tripping over here is that the maximum extent
size is greater than the filesystem size, so the preallocation size
is also greater than the filesystem size and hence can contribute
significantly to premature ENOSPC. I see two possible ways to
minimise this problem:
1. reduce the maximum speculative preallocation size based
on filesystem size at mount time.
2. track inodes with active speculative preallocation and
have an enospc based trigger that can find them and truncate
away excess idle speculative preallocation.
The first is relatively easy to do, but will only reduce the
incidence of your problem - we still need to allow significant
preallocation sizes (e.g. 64MB) to avoid the fragmentation problems.
The second is needed to reclaim the space we've already preallocated
but is not being used. That's more complex to do - probably a radix
tree bit and a periodic background scan to reduce the time window
the preallocation sits around from cache lifetime to "idle for some
time" along with an on-demand, synchronous ENOSPC scan. This will
need some more thought as to how to do it effectively, but isn't
impossible to do....
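For what option 1 might look like, here is a hedged sketch (the
function name, the 1/32 ratio, and the 8 GiB cap are all illustrative
assumptions, not XFS's actual algorithm): derive a per-filesystem
ceiling on speculative preallocation from the filesystem size at
mount time, with a floor around 64MB as noted above so the
anti-fragmentation behaviour survives.

```shell
# Sketch of proposal 1: scale the max speculative preallocation
# with filesystem size. All constants here are hypothetical.
prealloc_cap_mb() {
    fs_mb=$1
    cap=$(( fs_mb / 32 ))             # hypothetical ratio: 1/32 of the fs
    [ "$cap" -gt 8192 ] && cap=8192   # hypothetical absolute cap: 8 GiB
    [ "$cap" -lt 64 ] && cap=64       # keep >= ~64MB to limit fragmentation
    echo "$cap"
}

prealloc_cap_mb 4096      # Eric's 4G filesystem -> 128 (MB)
prealloc_cap_mb 1048576   # a 1T filesystem -> hits the 8192 MB cap
```

On a 4G filesystem this would bound each file's idle preallocation
to well under the filesystem size, which is exactly the premature
ENOSPC case above, while leaving large filesystems unconstrained.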
Cheers,
Dave.
>
> -Eric
>
> > root@storage1:~# du -h /disk*/scratch2/work/PRSRA1/PRSRA1.1.0.bff
> > 2.0G /disk10/scratch2/work/PRSRA1/PRSRA1.1.0.bff
> > 2.0G /disk11/scratch2/work/PRSRA1/PRSRA1.1.0.bff
> > 2.0G /disk12/scratch2/work/PRSRA1/PRSRA1.1.0.bff
> > 1.1G /disk1/scratch2/work/PRSRA1/PRSRA1.1.0.bff
> > 1.1G /disk2/scratch2/work/PRSRA1/PRSRA1.1.0.bff
> > 2.0G /disk3/scratch2/work/PRSRA1/PRSRA1.1.0.bff
> > 2.0G /disk4/scratch2/work/PRSRA1/PRSRA1.1.0.bff
> > 2.0G /disk5/scratch2/work/PRSRA1/PRSRA1.1.0.bff
> > 2.0G /disk6/scratch2/work/PRSRA1/PRSRA1.1.0.bff
> > 2.0G /disk7/scratch2/work/PRSRA1/PRSRA1.1.0.bff
> > 2.0G /disk8/scratch2/work/PRSRA1/PRSRA1.1.0.bff
> > 2.0G /disk9/scratch2/work/PRSRA1/PRSRA1.1.0.bff
> > root@storage1:~# echo 3 >/proc/sys/vm/drop_caches
> > root@storage1:~# du -h /disk*/scratch2/work/PRSRA1/PRSRA1.1.0.bff
> > 1.1G /disk10/scratch2/work/PRSRA1/PRSRA1.1.0.bff
> > 1.1G /disk11/scratch2/work/PRSRA1/PRSRA1.1.0.bff
> > 1.1G /disk12/scratch2/work/PRSRA1/PRSRA1.1.0.bff
> > 1.1G /disk1/scratch2/work/PRSRA1/PRSRA1.1.0.bff
> > 1.1G /disk2/scratch2/work/PRSRA1/PRSRA1.1.0.bff
> > 1.1G /disk3/scratch2/work/PRSRA1/PRSRA1.1.0.bff
> > 1.1G /disk4/scratch2/work/PRSRA1/PRSRA1.1.0.bff
> > 1.1G /disk5/scratch2/work/PRSRA1/PRSRA1.1.0.bff
> > 1.1G /disk6/scratch2/work/PRSRA1/PRSRA1.1.0.bff
> > 1.1G /disk7/scratch2/work/PRSRA1/PRSRA1.1.0.bff
> > 1.1G /disk8/scratch2/work/PRSRA1/PRSRA1.1.0.bff
> > 1.1G /disk9/scratch2/work/PRSRA1/PRSRA1.1.0.bff
> > root@storage1:~#
> >
> > Very odd, but not really a major problem other than the confusion it causes.
> >
> > Regards,
> >
> > Brian.
> >
> > _______________________________________________
> > xfs mailing list
> > xfs@oss.sgi.com
> > http://oss.sgi.com/mailman/listinfo/xfs
> >
>
>
--
Dave Chinner
david@fromorbit.com