From: "Austin S. Hemmelgarn" <ahferroin7@gmail.com>
To: linux-btrfs@vger.kernel.org
Subject: Re: Volume appears full but TB's of space available
Date: Mon, 10 Apr 2017 07:31:44 -0400
Message-ID: <97a26215-464f-290b-c242-ffeff567da3b@gmail.com>
In-Reply-To: <pan$2e792$2cb39d1$9f52bd61$285d9133@cox.net>
On 2017-04-08 01:12, Duncan wrote:
> Austin S. Hemmelgarn posted on Fri, 07 Apr 2017 07:41:22 -0400 as
> excerpted:
>
>> 2. Results from 'btrfs scrub'. This is somewhat tricky because scrub is
>> either asynchronous or blocks for a _long_ time. The simplest option
>> I've found is to fire off an asynchronous scrub to run during down-time,
>> and then schedule recurring checks with 'btrfs scrub status'. On the
>> plus side, 'btrfs scrub status' already returns non-zero if the scrub
>> found errors.
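For concreteness, that workflow looks roughly like this (the mountpoint
and the alerting method are just placeholders):

    # Fired from cron or a systemd timer during down-time; without -B,
    # 'btrfs scrub start' returns immediately and the scrub runs in the
    # background.
    btrfs scrub start /mnt/data

    # A separate recurring job does the checking; since 'btrfs scrub
    # status' exits non-zero if the scrub found errors, it can drive an
    # alert directly.
    if ! btrfs scrub status /mnt/data; then
        echo "scrub on /mnt/data found errors" | mail -s 'btrfs scrub' root
    fi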
>
> This is (one place) where my "keep it small enough to be in-practice-
> manageable" comes in.
>
> I always run my scrubs with -B (don't background, always, because I've
> scripted it), and they normally come back within a minute. =:^)
>
> But that's because I'm running multiple btrfs pair-device raid1 on a pair
> of partitioned SSDs, with each independent btrfs built on a partition
> from each ssd, with all partitions under 50 GiB. So scrubs take less
> than a minute to run (on the under-1-GiB /var/log, it returns
> effectively immediately, as soon as I hit enter on the command), which
> isn't entirely surprising given the sizes of the ssd-based btrfs
> filesystems I'm running.
>
> When scrubs (and balances, and checks) come back in a minute or so, it
> makes maintenance /so/ much less of a hassle. =:^)
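Presumably something like this hypothetical loop; with filesystems that
small, blocking on -B at the terminal is painless:

    # Scrub each (small) filesystem in the foreground; -B waits for the
    # scrub to finish, so a non-zero exit here means either a failure to
    # start or a scrub that hit errors.
    for mnt in / /home /var/log; do
        btrfs scrub start -B "$mnt" || echo "scrub FAILED on $mnt" >&2
    done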
>
> And the generally single-purpose and relatively small size of each
> filesystem means I can, for instance, keep / (with all the system libs,
> bins, manpages, and the installed-package database, among other things)
> mounted read-only by default, and keep the updates partition (gentoo so
> that's the gentoo and overlay trees, the sources and binpkg cache, ccache
> cache, etc) and (large non-ssd/non-btrfs) media partitions unmounted by
> default.
>
> Which in turn means when something /does/ go wrong, as long as it wasn't
> a physical device, there's much less data at risk, because most of it was
> probably either unmounted, or mounted read-only.
>
> Which in turn means I don't have to worry about scrub/check or other
> repair on those filesystems at all, only the ones that were actually
> mounted writable. And as mentioned, those scrub and check fast enough
> that I can literally wait at the terminal for command completion. =:^)
>
> Of course my setup's what most would call partitioned to the extreme, but
> it does have its advantages, and it works well for me, which after all is
> the important thing for /my/ setup.
Eh, maybe to most people who never dealt with disks with capacities on
the order of triple-digit _megabytes_. TBH, most of my systems look
pretty similar, although I split at places that most people think are
odd until I explain the reasoning (like /var/cache, or the RRD storage
for collectd). With the exception of the backing storage for the
storage micro-cluster on my home network and the VM storage, all my
filesystems are 32GB or less (and usually some multiple of 8G),
although I'm not lucky enough to have a system fast enough to run
maintenance that quickly. Part of that may be that I don't heavily
over-provision space in most of the filesystems; instead, I leave a
reasonable amount of slack-space at the LVM level, so if a filesystem
gets wedged, I just temporarily resize the LV it's on so I can fix it.
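Roughly like this (the LV and mountpoint names are made up):

    # Grow the LV out of the VG's slack space, then grow the filesystem
    # into the new room so the repair has space to work.
    lvextend -L +8G /dev/vg0/var-cache
    btrfs filesystem resize max /var/cache

    # ...run the repair/balance/whatever...

    # Afterwards, shrink the filesystem first, then the LV under it.
    btrfs filesystem resize -8G /var/cache
    lvreduce -L -8G /dev/vg0/var-cache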
>
> But the more generic point remains: if you set up multi-TB filesystems
> that take days or weeks for a maintenance command to complete, running
> those maintenance commands isn't going to be something done as often as
> one arguably should, and rebuilding from a filesystem or device failure
> is going to take far longer than one would like, as well. We've seen the
> reports here. If that's what you're doing, strongly consider breaking
> your filesystems down to something rather more manageable, say a couple
> TiB each. Broken along natural usage lines, it can save a lot on the
> caffeine and headache pills when something does go wrong.
>
> Unless of course like one poster here, you're handling double-digit-TB
> super-collider data files. Those tend to be a bit difficult to store on
> sub-double-digit-TB filesystems. =:^) But that's the other extreme from
> what I've done here, and he actually has a good /reason/ for /his/
> double-digit- or even triple-digit-TB filesystems. There's not much to
> be done about his use-case, and indeed, AFAIK he decided btrfs simply
> isn't stable and mature enough for that use-case yet, tho I believe he's
> using it for some other, more minor and less gargantuan use-cases.
Even aside from that, there are cases where you essentially need large
filesystems. One good example is NAS usage: it's a lot simpler to
provision one filesystem and then share out subsets of it than it is to
provision a separate filesystem for each share. Clustering is another
good example (the micro-cluster I mentioned above being a case in
point): by using just one filesystem for each back-end system, I save a
very large amount of resources without compromising performance,
although the 200GB back-end filesystems there are nowhere near the
multi-TB filesystems that are usually the issue.
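To make the NAS case concrete, a sketch of the layout I mean
(hypothetical paths, with NFS standing in as just one possible sharing
mechanism): one filesystem, with a subvolume per share.

    # One filesystem mounted at /srv/nas, one subvolume per share.
    btrfs subvolume create /srv/nas/media
    btrfs subvolume create /srv/nas/backups

    # /etc/exports then publishes each subvolume as its own share:
    #   /srv/nas/media    192.168.0.0/24(ro)
    #   /srv/nas/backups  192.168.0.0/24(rw)

Each subvolume can be snapshotted (and quota-limited, if qgroups are
enabled) independently, while they all draw from the same pool of free
space.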