From: OmegaPhil <OmegaPhil@startmail.com>
To: Hugo Mills <hugo@carfax.org.uk>, linux-btrfs@vger.kernel.org
Subject: Re: Unable to allocate for space usage in particular btrfs volume
Date: Wed, 04 Nov 2015 21:53:09 +0000
Message-ID: <563A7E45.1050806@startmail.com>
In-Reply-To: <20151104213039.GC27446@carfax.org.uk>
On 04/11/15 21:30, Hugo Mills wrote:
> On Wed, Nov 04, 2015 at 09:10:42PM +0000, OmegaPhil wrote:
>> Back in September I noticed that 'sudo du -chs /mnt/storage-1' reported
>> 887GB used whereas 'df -h' reported 920GB for this particular volume - I
>> went on #btrfs for suggestions, and balancing + defragging made no
>> difference. It had no subvolumes/snapshots etc; I basically used it like
>> a checksummed ext4.
>>
>> Since the volume had been converted from ext4, I recreated it from
>> scratch (so it was made with kernel v4.1.3 or v4.1.6 on this Debian
>> Testing machine), and the problem went away.
>>
>> After a couple of months, df reports 907GB used whereas du says 884GB -
>> I currently have 8 large (1-5.5TB) btrfs volumes in use; storage-1 is
>> the only SSD volume and the only one with this problem.
>>
>> No balancing or defragging this time - it didn't make a difference
>> before, and this is a relatively new volume.
>>
>> Are there any sysadmin-level ways I can account for the ~23GB of lost
>> space?
>
> There's an issue where replacing blocks in the middle of an
> existing extent won't split the extent, and thus the "old" blocks
> aren't freed up, because they're held by the original extent (even
> though not actually referenced by any existing file). This might be
> what you're seeing.
>
> I'm not sure how to confirm this theory, or what to do about it if
> it's true. (Defrag the file? Copy it elsewhere? Other?)
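
For what it's worth, I guess this could be demonstrated on a scratch
btrfs mount with something like the below (paths and sizes are just
made-up examples) - write a file, rewrite part of the middle in place,
and watch df; if the superseded extents really do stay pinned, usage
should grow by roughly the rewrite size even though du on the file is
unchanged:

  # write a 1GiB test file and note usage
  dd if=/dev/urandom of=/mnt/scratch/testfile bs=1M count=1024
  sync; df -h /mnt/scratch

  # rewrite 100MiB in the middle without truncating the file
  dd if=/dev/urandom of=/mnt/scratch/testfile bs=1M count=100 seek=300 conv=notrunc
  sync; df -h /mnt/scratch   # up by ~100MiB if the old blocks aren't freed

  # extent layout before/after, for the curious
  filefrag -v /mnt/scratch/testfile
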
>
> Two other cases for df > du are orphaned files, although 23 GiB of
> orphans is large; and missing out the dot-files in the directory that
> du is run from (if doing, say, "du *" rather than "du ."). I've been
> bitten by both of those in the past.
>
> Hugo.
The volume doesn't change hugely over time, so the discrepancy really
ought not to have built up so quickly - a quick rundown of the storage
usage:
36% general (small files, some smallish videos)
24% music
23% pr0n
17% VMs
But in terms of 'large files changing', it could perhaps be the VM disks
- I'll move them out, balance, and then move them back in again;
hopefully that'd be a meaningful test.
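
Roughly what I have in mind (exact paths made up, and I'm not sure the
balance step is even needed versus just rewriting the files):

  df -h /mnt/storage-1                     # before
  mv /mnt/storage-1/vms /mnt/storage-2/    # off the volume (cross-fs mv copies then deletes)
  sync; df -h /mnt/storage-1               # does used space drop back in line with du?
  btrfs balance start -d /mnt/storage-1    # rewrite the data chunks while the VMs are away
  df -h /mnt/storage-1
  mv /mnt/storage-2/vms /mnt/storage-1/    # and back again
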
du-wise, it was run directly on the root directory - any idea how I
could audit orphaned files?
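
If 'orphans' here means deleted-but-still-open files - space that df
counts but du can't see - then I'm guessing something like this would
surface them, though I'm not sure that's the right interpretation:

  # open files on this filesystem with link count 0, i.e. unlinked but
  # still held open by some process
  lsof -a +L1 /mnt/storage-1
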
Thanks
Thread overview (8 messages):
2015-11-04 21:10 Unable to allocate for space usage in particular btrfs volume OmegaPhil
2015-11-04 21:30 ` Hugo Mills
2015-11-04 21:53 ` OmegaPhil [this message]
2015-11-05 4:18 ` Duncan
2015-11-05 10:44 ` OmegaPhil
2015-11-05 11:49 ` Hugo Mills
2015-11-06 20:15 ` Calvin Walton
2015-11-06 20:35 ` Austin S Hemmelgarn