public inbox for linux-btrfs@vger.kernel.org
From: Thomas Kuther <tom@kuther.net>
To: linux-btrfs@vger.kernel.org
Subject: Re: Issues with "no space left on device" maybe related to 3.13
Date: Tue, 14 Jan 2014 08:23:41 +0000 (UTC)	[thread overview]
Message-ID: <loom.20140114T083934-800@post.gmane.org> (raw)
In-Reply-To: <pan$13b5$4830a55$8e383081$f7bb4fd7@cox.net>

Duncan <1i5t5.duncan <at> cox.net> writes:

[...]
> Normally you'd do a data balance to consolidate data in the data chunks 
> and return the now-freed chunks to the unallocated space pool, but 
> you're going to have problems doing that ATM, for two reasons.  The one 
> that's likely easiest to work around: balance works by allocating new 
> chunks and copying data/metadata over from the old ones, rewriting, 
> defragging and consolidating as it goes; but with all your space 
> already allocated, there's no room left to allocate that new chunk...
> 
> The usual solution is to temporarily btrfs device add another device 
> with a few gigs available, do the rebalance with it providing the 
> necessary new-chunk space, then btrfs device delete to move the chunks 
> from the temporary add back onto the main device so it can be safely 
> removed.  Ordinarily even a loopback on tmpfs could provide those few 
> gigs, but of course you can't reboot while chunks sit on that 
> tmpfs-backed loopback or you'll lose the data, and the issue below will 
> likely trigger a live-lock that pretty much forces a reboot, so tmpfs 
> probably isn't such a good idea here after all.  A few-gig thumbdrive, 
> though, should work and keeps the data safe across a reboot, so that's 
> probably what I'd recommend ATM.
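
For anyone following along, the add/balance/delete dance described above
amounts to something like this. A sketch only: the device name and mount
point are placeholders, and RUN defaults to echo so nothing is executed
as-is.

```shell
# Sketch of the temporary-device balance dance. DEV/MNT are hypothetical
# placeholders; RUN defaults to "echo", so commands are printed, not run.
balance_with_temp_device() {
    dev=$1    # e.g. a spare thumbdrive partition with a few GiB free
    mnt=$2    # the full btrfs mount point
    ${RUN:-echo} btrfs device add "$dev" "$mnt"     # lend unallocated space
    ${RUN:-echo} btrfs balance start "$mnt"         # rewrite/consolidate chunks
    ${RUN:-echo} btrfs device delete "$dev" "$mnt"  # migrate chunks back, detach
}

# Dry run:
balance_with_temp_device /dev/sdz1 /mnt
```

Clear RUN (RUN= ) only after double-checking the device names on your
own box.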


Thanks again for your input, Duncan.

What I did now was:
a) took another read on the matter first
b) verified that the qemu-img'ed backup of the VM works properly
c) deleted the original VM image and refreshed my system backups

At that point data was still fully allocated.

d) dropped that single system chunk as you suggested previously.
Interestingly, this gave me:

Label: none  uuid: 52bc94ba-b21a-400f-a80d-e75c4cd8a936
        Total devices 1 FS bytes used 43.92GiB
        devid    1 size 119.24GiB used 119.03GiB path /dev/sda2

...some free data chunks.

e) Ran successive balances with -dusage=5, 10, 15, 20 and 25.
After the -dusage=25 pass it looks like:

Label: none  uuid: 52bc94ba-b21a-400f-a80d-e75c4cd8a936
        Total devices 1 FS bytes used 43.92GiB
        devid    1 size 119.24GiB used 95.02GiB path /dev/sda2
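
For the record, the iteration in e) boils down to something like this
(mount point is a placeholder; RUN defaults to echo, so it's a dry run):

```shell
# Sketch of step e): successive balances with a rising usage filter, so
# each pass only rewrites data chunks that are at most pct% full.
# RUN defaults to "echo"; clear it to actually run the balances.
incremental_balance() {
    mnt=$1
    for pct in 5 10 15 20 25; do
        ${RUN:-echo} btrfs balance start -dusage="$pct" "$mnt"
    done
}

# Dry run:
incremental_balance /mnt
```

Starting with a low -dusage value keeps each pass cheap: nearly-empty
chunks free up space first, which gives the later, larger passes room
to work with.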


So your suggestion in d) saved me from having to recreate the FS or add 
a drive.

Now I need to make sure I get notified before data allocation fills up 
like that again.
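
A crude sketch of such a check, assuming it is fed the devid line(s)
from `btrfs filesystem show` output; the 90% threshold and the cron
usage in the comment are just illustrations:

```shell
# Hypothetical monitoring sketch: pipe the "devid ..." line(s) of
# `btrfs filesystem show` into this and pass a percent threshold.
# Prints a warning and returns non-zero once allocation crosses it.
check_allocation() {
    awk -v t="$1" '
        /devid/ {
            size = $4 + 0; used = $6 + 0   # "+ 0" strips the GiB suffix
            pct = used / size * 100
            if (pct >= t) { printf "WARN: %.0f%% allocated\n", pct; bad = 1 }
        }
        END { exit bad }'
}

# e.g. from cron:
# btrfs filesystem show /dev/sda2 | check_allocation 90 || <send a mail>
```

Note this watches chunk *allocation* (the "used" column of btrfs fi
show), not file-level usage, since it was full allocation that caused
the trouble here.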

Regards,
Tom



Thread overview: 3+ messages
     [not found] <20140113002532.3975c806@ws>
2014-01-13 10:29 ` Issues with "no space left on device" maybe related to 3.13 Thomas Kuther
2014-01-14  5:52   ` Duncan
2014-01-14  8:23     ` Thomas Kuther [this message]
