From: Austin S Hemmelgarn <ahferroin7@gmail.com>
To: Duncan <1i5t5.duncan@cox.net>, linux-btrfs@vger.kernel.org
Subject: Re: corrupted RAID1: unsuccessful recovery / help needed
Date: Fri, 30 Oct 2015 11:03:25 -0400 [thread overview]
Message-ID: <563386BD.8060707@gmail.com> (raw)
In-Reply-To: <pan$2c57b$16ebecd2$7cec1c09$cc5ded39@cox.net>
On 2015-10-30 06:58, Duncan wrote:
> Lukas Pirl posted on Fri, 30 Oct 2015 10:43:41 +1300 as excerpted:
>
>> If there is one subvolume that contains all other (read only) snapshots
>> and there is insufficient storage to copy them all separately:
>> Is there an elegant way to preserve those when moving the data across
>> disks?
>
> AFAIK, no elegant way without a writable mount.
>
> Tho I'm not sure, btrfs send, to a btrfs elsewhere using receive, may
> work, since you did specify read-only snapshots, which is what send
> normally works with in order to avoid changes to the snapshot while
> it's sending it. My own use-case doesn't involve either snapshots or
> send/receive, however, so I'm not sure if send can work with a read-only
> filesystem or not, but I think its normal method of operation is to
> create those read-only snapshots itself, which of course would require a
> writable filesystem, so I'm guessing it won't work unless you can
> convince it to use the read-only mounts as-is.
Unless something has significantly changed since I last looked, send
only works on existing snapshots and doesn't create any directly itself,
and as such should work fine to send snapshots from a read-only
filesystem. In theory you could send all the snapshots at once,
although that would probably take a long time, so you'll probably end
up using a loop like the shell-script fragment Hugo suggested in his
response. Done that way, the result should have an (almost) identical
level of sharing.
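For illustration, the sort of loop I mean would look roughly like this
(the mount points and snapshot layout are assumptions on my part, not
from your setup, and Hugo's fragment may well differ in the details):

    SRC=/mnt/old/snapshots    # old filesystem, mounted read-only
    DST=/mnt/new/snapshots    # destination filesystem, writable

    prev=""
    for snap in "$SRC"/*; do
        if [ -z "$prev" ]; then
            # First snapshot: send it in full.
            btrfs send "$snap" | btrfs receive "$DST"
        else
            # Later snapshots: send incrementally against the previous
            # one, which is what preserves most of the sharing.
            btrfs send -p "$prev" "$snap" | btrfs receive "$DST"
        fi
        prev="$snap"
    done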
>
> The less elegant way would involve manual deduplication. Copy one
> snapshot, then another, and dedup what hasn't changed between the two,
> then add a third and dedup again. ... Depending on the level of dedup
> (file vs block level) and the level of change in your filesystem, this
> should ultimately take about the same level of space as a full backup
> plus a series of incrementals.
If you're using duperemove (which is the only maintained dedupe tool I
know of for BTRFS), this will likely take a long time for any
reasonable amount of data, and will probably end up using more space on
the destination drive than the data does on the source (duperemove does
block-based deduplication, but it works on relatively large blocks by
default).
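To make that approach concrete (the paths and duperemove options below
are just an example of mine, not anything from the thread, so check the
duperemove man page before running something like this):

    # Copy the snapshots onto the new filesystem one at a time...
    cp -a /mnt/old/snapshots/2015-09-01 /mnt/new/
    cp -a /mnt/old/snapshots/2015-10-01 /mnt/new/

    # ...then deduplicate the blocks they still have in common.
    # -d submits the actual dedupe ioctls, -r recurses into the
    # directories, --hashfile keeps the checksums on disk instead
    # of in RAM.
    duperemove -dr --hashfile=/var/tmp/dedupe.hash \
        /mnt/new/2015-09-01 /mnt/new/2015-10-01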
>
> Meanwhile, this does reinforce the point that snapshots don't replace
> full backups, that being the reason I don't use them here, since if the
> filesystem goes bad, it'll very likely take all the snapshots with it.
FWIW, while I don't use snapshots directly as backups myself, they are
useful when doing a backup to get a guaranteed-stable view of the
filesystem being backed up (this is also one of the traditional use
cases for LVM snapshots, although those come with their own set of
issues). For local backups (I also do cloud-storage-based remote
backups, but local is what matters in this case because it's where I
actually use send/receive and snapshots), I use two different methods
depending on how much storage I have:
1. If I'm relatively limited on local storage (like on my laptop, where
the secondary internal disk is only 64G), I use a temporary snapshot to
generate a SquashFS image of the system, which I then store on the
secondary drive (sketched after the list).
2. If I have a lot of spare space (like on my desktop, which has 4x
1TB HDDs and 2x 128G SSDs), I make a snapshot of the filesystem, then
use send/receive to transfer it to a backup filesystem on a separate
disk. I then keep the original snapshot around on the source filesystem
so I can do incremental send/receive to speed up future backups (also
sketched below).
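The first method amounts to something like this (the paths are from my
own setup, so treat them as placeholders):

    # Temporary read-only snapshot so the image is internally consistent.
    btrfs subvolume snapshot -r / /backup-snap

    # Pack the snapshot into a SquashFS image on the secondary disk.
    mksquashfs /backup-snap /mnt/backup/rootfs-$(date +%Y%m%d).sqfs

    # The snapshot has served its purpose, so drop it again.
    btrfs subvolume delete /backup-snap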
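And the second is roughly the following, with the snapshot from the
previous run kept around as the parent for incremental sends (again,
the paths and dates are just placeholders):

    # New read-only snapshot of the filesystem being backed up.
    btrfs subvolume snapshot -r / /snapshots/root-2015-10-30

    # Incremental send against the snapshot kept from the last run,
    # received onto the backup filesystem on a separate disk.
    btrfs send -p /snapshots/root-2015-10-23 /snapshots/root-2015-10-30 \
        | btrfs receive /mnt/backup/

    # Keep the new snapshot as the parent for the next run; the old
    # one can go once the transfer has completed successfully.
    btrfs subvolume delete /snapshots/root-2015-10-23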
In both cases, I can directly boot my most recent backups if need be,
and in the second case, I can actually use it to trivially regenerate
the backed-up filesystems (by simply doing a send/receive in the
opposite direction).
Beyond providing a stable system image for backups, the only valid use
case for snapshots, in my opinion, is to provide the equivalent of MS
Windows' 'Restore Point' feature (which I'm pretty sure RHEL and SLES
currently do if they are installed on BTRFS), and possibly 'File
History' for people who for some reason can't use a real VCS or just
need to keep the last few revisions of a file (which is itself done by
tools like 'snapper').