From: Duncan <1i5t5.duncan@cox.net>
To: linux-btrfs@vger.kernel.org
Subject: Re: Volume appears full but TB's of space available
Date: Sat, 8 Apr 2017 05:12:08 +0000 (UTC)
Message-ID: <pan$2e792$2cb39d1$9f52bd61$285d9133@cox.net>
In-Reply-To: d531fd79-c08a-0ba1-ae51-5503bc56b121@gmail.com

Austin S. Hemmelgarn posted on Fri, 07 Apr 2017 07:41:22 -0400 as
excerpted:

> 2. Results from 'btrfs scrub'.  This is somewhat tricky because scrub is
> either asynchronous or blocks for a _long_ time.  The simplest option
> I've found is to fire off an asynchronous scrub to run during down-time,
> and then schedule recurring checks with 'btrfs scrub status'.  On the
> plus side, 'btrfs scrub status' already returns non-zero if the scrub
> found errors.
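
In cron terms that pattern might look something like the below (the
mountpoint and schedule are hypothetical, and it leans on the
non-zero exit from "btrfs scrub status" mentioned above):

  # start an async scrub during the Sunday-night down-time window
  0 2 * * 0  btrfs scrub start /mnt/data
  # poll each morning; a non-zero exit means the scrub hit errors
  0 8 * * *  btrfs scrub status /mnt/data >/dev/null || logger -p user.err "btrfs: scrub errors on /mnt/data"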

This is (one place) where my "keep it small enough to be manageable
in practice" approach comes in.

I always run my scrubs with -B (don't background) because I've
scripted them, and they normally come back within a minute. =:^)
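
The script amounts to little more than this sketch (mountpoint list
hypothetical; with -B the command blocks, so its exit status is that
of the completed scrub):

  #!/bin/sh
  # scrub each writable btrfs in turn, waiting for each to finish
  for m in / /home /var/log; do
      btrfs scrub start -B "$m" || echo "scrub trouble on $m" >&2
  done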

But that's because I'm running multiple pair-device btrfs raid1
filesystems on a pair of partitioned SSDs, each independent btrfs
built on one partition from each ssd, with all partitions under 50
GiB.  So scrubs take less than a minute to run (on the under-1-GiB
/var/log it returns effectively immediately, as soon as I hit enter
on the command), which isn't entirely surprising given the sizes of
the ssd-based btrfs filesystems I'm running.
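
For the curious, each such filesystem gets created along these lines
(device names hypothetical):

  # one sub-50-GiB partition from each ssd, data and metadata raid1
  mkfs.btrfs -m raid1 -d raid1 /dev/sda5 /dev/sdb5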

When scrubs (and balances, and checks) come back in a minute or so, it 
makes maintenance /so/ much less of a hassle. =:^)

And the generally single-purpose nature and relatively small size of
each filesystem means I can, for instance, keep / (with all the
system libs, bins, manpages, and the installed-package database,
among other things) mounted read-only by default, and keep the
updates partition (this is gentoo, so that's the gentoo and overlay
trees, the sources and binpkg cache, the ccache cache, etc.) and the
(large, non-ssd, non-btrfs) media partitions unmounted by default.
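
In fstab terms that policy is just mount options (labels, mountpoints
and the media fs type are illustrative, not my actual layout):

  # root stays read-only unless remounted rw for an update
  LABEL=rootfs    /           btrfs  ro,noatime      0 0
  # updates and media filesystems mounted only on demand
  LABEL=packages  /var/pkg    btrfs  noauto,noatime  0 0
  LABEL=media     /mnt/media  ext4   noauto,noatime  0 0

A quick mount -o remount,rw / before updates, and remount,ro after,
is the only extra ceremony.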

Which in turn means that when something /does/ go wrong, as long as
it wasn't a physical device failure, there's much less data at risk,
because most of it was probably either unmounted or mounted
read-only.

Which in turn means I don't have to worry about scrub/check or other 
repair on those filesystems at all, only the ones that were actually 
mounted writable.  And as mentioned, those scrub and check fast enough 
that I can literally wait at the terminal for command completion. =:^)

Of course my setup's what most would call partitioned to the extreme, but 
it does have its advantages, and it works well for me, which after all is 
the important thing for /my/ setup.

But the more generic point remains: if you set up multi-TB
filesystems that take days or weeks for a maintenance command to
complete, running those maintenance commands isn't going to be
something done as often as one arguably should, and rebuilding after
a filesystem or device failure is going to take far longer than one
would like, as well.  We've seen the reports here.  If that's what
you're doing, strongly consider breaking your filesystems down to
something rather more manageable, say a couple of TiB each.  Broken
along natural usage lines, that can save a lot on the caffeine and
headache pills when something does go wrong.

Unless, of course, like one poster here, you're handling
double-digit-TB super-collider data files.  Those tend to be a bit
difficult to store on sub-double-digit-TB filesystems.  =:^)  But
that's the other extreme from what I've done here, and he actually
has a good /reason/ for /his/ double-digit- or even triple-digit-TB
filesystems.  There's not much to be done about his use-case, and
indeed, AFAIK he decided btrfs simply isn't stable and mature enough
for that use-case yet, though I believe he's using it for some other,
more minor and less gargantuan use-cases.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman

