From: Hugo Mills <hugo@carfax.org.uk>
To: Duncan <1i5t5.duncan@cox.net>
Cc: linux-btrfs@vger.kernel.org
Subject: Re: corrupted RAID1: unsuccessful recovery / help needed
Date: Fri, 30 Oct 2015 11:25:41 +0000
Message-ID: <20151030112541.GA21103@carfax.org.uk>
In-Reply-To: <pan$2c57b$16ebecd2$7cec1c09$cc5ded39@cox.net>
On Fri, Oct 30, 2015 at 10:58:47AM +0000, Duncan wrote:
> Lukas Pirl posted on Fri, 30 Oct 2015 10:43:41 +1300 as excerpted:
>
> > If there is one subvolume that contains all other (read only) snapshots
> > and there is insufficient storage to copy them all separately:
> > Is there an elegant way to preserve those when moving the data across
> > disks?
If they're read-only snapshots already, then yes:
   sent=
   for sub in *; do
      # each already-sent subvol becomes a clone source (-c) for the next send;
      # $sent is deliberately left unquoted so the accumulated flags word-split
      btrfs send $sent "$sub" | btrfs receive /where/ever
      sent="$sent -c$sub"
   done
That will preserve the shared extents between the subvols on the
receiving FS.
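For example, with three read-only snapshots snap1, snap2 and snap3 in
the current directory (names purely illustrative), the loop runs:

   btrfs send snap1                  | btrfs receive /where/ever
   btrfs send -csnap1 snap2          | btrfs receive /where/ever
   btrfs send -csnap1 -csnap2 snap3  | btrfs receive /where/ever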
If they're not read-only, then snapshotting each one again as RO
before sending would be the approach, but if your FS is itself RO,
that's not going to be possible, and you need to look at Duncan's
email.
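(For completeness, if the source FS were still writable, a minimal
sketch of that snapshot-as-RO step might look like this; the .ro suffix
is purely illustrative:

   for sub in *; do
      # -r creates the snapshot read-only, as btrfs send requires
      btrfs subvolume snapshot -r "$sub" "$sub.ro"
   done

and then run the send/receive loop above over the *.ro snapshots.)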
Hugo.
> AFAIK, no elegant way without a writable mount.
>
> Tho I'm not sure, btrfs send to a btrfs elsewhere using receive may
> work, since you did specify read-only snapshots, which is what send
> normally works with in order to avoid changes to the snapshot while
> it's sending it. My own use-case doesn't involve either snapshots or
> send/receive, however, so I'm not sure whether send can work with a
> read-only filesystem. I think its normal method of operation is to
> create those read-only snapshots itself, which of course would require
> a writable filesystem, so I'm guessing it won't work unless you can
> convince it to use the read-only mounts as-is.
>
> The less elegant way would involve manual deduplication. Copy one
> snapshot, then another, and dedup what hasn't changed between the two,
> then add a third and dedup again. ... Depending on the level of dedup
> (file vs block level) and the rate of change in your filesystem, this
> should ultimately take about the same amount of space as a full backup
> plus a series of incrementals.
>
>
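(A rough sketch of that copy-and-dedup approach, using duperemove as
one possible block-level dedup tool; the mount points and snapshot
names are purely illustrative:

   for snap in snap1 snap2 snap3; do
      cp -a /mnt/old/"$snap" /mnt/new/"$snap"   # plain copy of one snapshot
      duperemove -dr /mnt/new                   # dedup what's unchanged so far
   done

Note that each copy needs its full space up front, since the dedup only
runs afterwards, which matches Duncan's point further down.)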
> Meanwhile, this does reinforce the point that snapshots don't replace
> full backups, that being the reason I don't use them here, since if the
> filesystem goes bad, it'll very likely take all the snapshots with it.
>
> Snapshots do tend to be pretty convenient, arguably /too/ convenient and
> near-zero-cost to make, as people then tend to just do scheduled
> snapshots, without thinking about their overhead and maintenance costs on
> the filesystem, until they already have problems. I'm not sure if you
> are a regular list reader and have thus seen my normal spiel on btrfs
> snapshot scaling and recommended limits to avoid problems or not, so if
> not, here's a slightly condensed version...
>
> Btrfs has scaling issues that appear when trying to manage too many
> snapshots. These tend to show up first in tools like balance and check,
> where the time to process a filesystem goes up dramatically as the
> number of snapshots increases: runtimes are already dramatically
> affected at 10k snapshots, and somewhere near the 100k range the
> filesystem can become entirely impractical to manage at all.
>
> As a result, I recommend keeping per-subvol snapshots to 250-ish, which
> will allow snapshotting four subvolumes while still keeping total
> filesystem snapshots to 1000, or eight subvolumes at a filesystem total
> of 2000 snapshots, levels where the scaling issues should remain well
> within control. And 250-ish snapshots per subvolume is actually very
> reasonable even with half-hour scheduled snapshotting, provided a
> reasonable scheduled snapshot thinning program is also implemented,
> cutting, say, to hourly after six hours, six-hourly after a day,
> 12-hourly after 2 days, daily after a week, and weekly after four weeks
> to a quarter (13 weeks). Out beyond a quarter or two, certainly within
> a year, longer-term backups to other media should be done, and snapshots
> beyond that can be removed entirely, freeing up the space the old
> snapshots kept locked down and helping to keep the btrfs healthy and
> functioning well within its practical scalability limits.
>
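(A rough, illustrative tally, not from the original mail: that schedule
keeps about 12 half-hourly + 18 hourly + 4 six-hourly + 10 twelve-hourly
+ 21 daily + 9 weekly snapshots, i.e. roughly 75 per subvolume,
comfortably inside the 250-ish cap.)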
> Because a balance that takes a month to complete, due to dealing with a
> few hundred k snapshots, is in practice (for most people) not worthwhile
> to do at all. And also in practice, a year or even six months out, are
> you really going to care about the precise half-hour snapshot, or is the
> next daily or weekly snapshot going to be just as good, and a whole lot
> easier to find among a couple hundred snapshots than among hundreds of
> thousands?
>
> If you have far too many snapshots, perhaps this sort of thinning
> strategy will also allow you to copy and dedup only key snapshots, say
> weekly plus daily for the last week, doing the backup manually and
> modifying the thinning strategy accordingly if necessary to get it to
> fit. Tho using the copy-and-dedup strategy above will still require at
> least double the full space of a single copy, plus the space necessary
> for each deduped snapshot copy you keep, since the dedup occurs after
> the copy.
>
--
Hugo Mills              | Beware geeks bearing GIFs
hugo@... carfax.org.uk  |
http://carfax.org.uk/   |
PGP: E2AB1DE4           |