* Incremental backup for a raid1
@ 2014-03-13 19:12 Michael Schuerig
2014-03-13 19:28 ` Hugo Mills
2014-03-14 6:42 ` Duncan
0 siblings, 2 replies; 19+ messages in thread
From: Michael Schuerig @ 2014-03-13 19:12 UTC (permalink / raw)
To: linux-btrfs
My backup use case is different from what has been recently
discussed in another thread. I'm trying to guard against hardware
failure and other causes of destruction.
I have a btrfs raid1 filesystem spread over two disks. I want to backup
this filesystem regularly and efficiently to an external disk (same
model as the ones in the raid) in such a way that
* when one disk in the raid fails, I can substitute the backup and
rebalancing from the surviving disk to the substitute only applies the
missing changes.
* when the entire raid fails, I can re-build a new one from the backup.
The filesystem is mounted at its root and has several nested subvolumes
and snapshots (in a .snapshots subdir on each subvol).
Is it possible to do what I'm looking for?
Michael
--
Michael Schuerig
mailto:michael@schuerig.de
http://www.schuerig.de/michael/
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: Incremental backup for a raid1
2014-03-13 19:12 Incremental backup for a raid1 Michael Schuerig
@ 2014-03-13 19:28 ` Hugo Mills
2014-03-13 19:48 ` Andrew Skretvedt
2014-03-14 6:42 ` Duncan
1 sibling, 1 reply; 19+ messages in thread
From: Hugo Mills @ 2014-03-13 19:28 UTC (permalink / raw)
To: Michael Schuerig; +Cc: linux-btrfs
On Thu, Mar 13, 2014 at 08:12:44PM +0100, Michael Schuerig wrote:
>
> My backup use case is different from what has been recently
> discussed in another thread. I'm trying to guard against hardware
> failure and other causes of destruction.
>
> I have a btrfs raid1 filesystem spread over two disks. I want to backup
> this filesystem regularly and efficiently to an external disk (same
> model as the ones in the raid) in such a way that
>
> * when one disk in the raid fails, I can substitute the backup and
> rebalancing from the surviving disk to the substitute only applies the
> missing changes.
>
> * when the entire raid fails, I can re-build a new one from the backup.
>
> The filesystem is mounted at its root and has several nested subvolumes
> and snapshots (in a .snapshots subdir on each subvol).
>
> Is it possible to do what I'm looking for?
For point 2, yes. (Add a new disk, then balance with -dconvert=raid1
-mconvert=raid1 to convert from single back to raid1.)
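[For concreteness, the point-2 recovery can be sketched as commands. This is only a sketch: the device names and mount point below are hypothetical, and it assumes the backup disk carries a single-profile btrfs filesystem.]

```shell
# Rebuilding a new raid1 from the backup disk after total raid failure.
# /dev/sdX = backup disk, /dev/sdY = new blank disk, /mnt = mount point;
# all names are examples, not real devices.
mount /dev/sdX /mnt
btrfs device add /dev/sdY /mnt
# Convert data and metadata from the single profile back to raid1:
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt
```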
For point 1, not really. It's a different filesystem, so it'll have
a different UUID. You *might* be able to get away with rsync of one of
the block devices in the array to the backup block device, but you'd
have to unmount the FS (or halt all writes to it) for the period of
the rsync to ensure a consistent image, and the rsync would have to
read all the data in the device being synced to work out what to send.
Probably not what you want.
Hugo.
--
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
PGP key: 65E74AC0 from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
--- Do not meddle in the affairs of system administrators, for ---
they are subtle, and quick to anger.
* Re: Incremental backup for a raid1
2014-03-13 19:28 ` Hugo Mills
@ 2014-03-13 19:48 ` Andrew Skretvedt
2014-03-13 21:09 ` Brendan Hide
2014-03-13 21:14 ` Michael Schuerig
0 siblings, 2 replies; 19+ messages in thread
From: Andrew Skretvedt @ 2014-03-13 19:48 UTC (permalink / raw)
To: linux-btrfs
On 2014-Mar-13 14:28, Hugo Mills wrote:
> On Thu, Mar 13, 2014 at 08:12:44PM +0100, Michael Schuerig wrote:
>>
>> My backup use case is different from what has been recently
>> discussed in another thread. I'm trying to guard against hardware
>> failure and other causes of destruction.
>>
>> I have a btrfs raid1 filesystem spread over two disks. I want to backup
>> this filesystem regularly and efficiently to an external disk (same
>> model as the ones in the raid) in such a way that
>>
>> * when one disk in the raid fails, I can substitute the backup and
>> rebalancing from the surviving disk to the substitute only applies the
>> missing changes.
>>
>> * when the entire raid fails, I can re-build a new one from the backup.
>>
>> The filesystem is mounted at its root and has several nested subvolumes
>> and snapshots (in a .snapshots subdir on each subvol).
>>
>> Is it possible to do what I'm looking for?
>
> For point 2, yes. (Add a new disk, then balance with -dconvert=raid1
> -mconvert=raid1 to convert from single back to raid1.)
>
> For point 1, not really. It's a different filesystem, so it'll have
> a different UUID. You *might* be able to get away with rsync of one of
> the block devices in the array to the backup block device, but you'd
> have to unmount the FS (or halt all writes to it) for the period of
> the rsync to ensure a consistent image, and the rsync would have to
> read all the data in the device being synced to work out what to send.
> Probably not what you want.
>
> Hugo.
>
I'm new; btrfs noob; completely unqualified to write intelligently on
this topic, nevertheless:
I understand your setup to be btrfs RAID1 with /dev/A /dev/B, and a
backup device someplace /dev/C
Could you, at the time you wanted to backup the filesystem:
1) in the filesystem, break RAID1: /dev/A /dev/B <-- remove /dev/B
2) reestablish RAID1 to the backup device: /dev/A /dev/C <-- added
3) balance to effect the backup (i.e. rebuilding the RAID1 onto /dev/C)
4) break/reconnect the original devices: remove /dev/C; re-add /dev/B to
the fs
I think this could be done online. Any one device [ABC] surviving is
sufficient to rebuild a RAID1 of the filesystem, or be accessed alone in
degraded fashion for disaster recovery purposes.
I think that would address point 1. Is my thinking horrible on this?
(again, noob to btrfs)
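[The four proposed steps, spelled out as btrfs commands for reference. A sketch only: device names are hypothetical, and note that btrfs will in fact refuse step 1 as written, since a raid1 filesystem cannot drop below two devices without a profile conversion — which is part of why the replies below reject the scheme.]

```shell
# Andrew's proposed backup cycle, assuming the fs is mounted at /mnt.
btrfs device remove /dev/B /mnt   # 1) break the raid1 (refused on a
                                  #    2-device raid1 without converting)
btrfs device add /dev/C /mnt      # 2) attach the backup disk
btrfs balance start /mnt          # 3) re-mirror everything onto /dev/C
btrfs device remove /dev/C /mnt   # 4) detach the backup disk ...
btrfs device add /dev/B /mnt      #    ... and re-add the original
```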
* Re: Incremental backup for a raid1
2014-03-13 19:48 ` Andrew Skretvedt
@ 2014-03-13 21:09 ` Brendan Hide
2014-03-13 21:14 ` Michael Schuerig
1 sibling, 0 replies; 19+ messages in thread
From: Brendan Hide @ 2014-03-13 21:09 UTC (permalink / raw)
To: Andrew Skretvedt, linux-btrfs
On 2014/03/13 09:48 PM, Andrew Skretvedt wrote:
> On 2014-Mar-13 14:28, Hugo Mills wrote:
>> On Thu, Mar 13, 2014 at 08:12:44PM +0100, Michael Schuerig wrote:
>>> I have a btrfs raid1 filesystem spread over two disks. I want to backup
>>> this filesystem regularly and efficiently to an external disk (same
>>> model as the ones in the raid) in such a way that
>>>
>>> * when one disk in the raid fails, I can substitute the backup and
>>> rebalancing from the surviving disk to the substitute only applies the
>>> missing changes.
>> For point 1, not really. It's a different filesystem
>> [snip]
>> Hugo.
> I'm new
We all start somewhere. ;)
> Could you, at the time you wanted to backup the filesystem:
> 1) in the filesystem, break RAID1: /dev/A /dev/B <-- remove /dev/B
> 2) reestablish RAID1 to the backup device: /dev/A /dev/C <-- added
It's this step that won't work "as is" and, from an outsider's
perspective, it is not obvious why:
As Hugo mentioned, "It's a different filesystem". The two disks don't
have any "co-ordinating" record of data and don't have any record
indicating that the other disk even exists. The files they store might
even be identical - but there's a lot of missing information that would
be necessary to tell them they can work together. All this will do is
reformat /dev/C and then it will be rewritten again by the balance
operation in step 3) below.
> 3) balance to effect the backup (i.e. rebuilding the RAID1 onto /dev/C)
> 4) break/reconnect the original devices: remove /dev/C; re-add /dev/B
> to the fs
Again, as with 2), /dev/A is now synchronised with (for all intents and
purposes) a new disk. If you want to re-add /dev/B, you're going to lose
any data on /dev/B (view this in the sense that, if you wiped the disk,
the end-result would be the same) and then you would be re-balancing new
data onto it from scratch.
Before removing /dev/B:
Disk A: abdeg__cf__
Disk B: abc_df_ge__ <- note that data is *not* necessarily stored in the
exact same position on both disks
Disk C: gbfc_d__a_e
All data is available on all disks. Disk C has no record indicating that
disks A and B exist.
Disk A and B have a record indicating that the other disk is part of the
same FS. These two disks have no record indicating disk C exists.
1. Remove /dev/B:
Disk A: abdeg__cf__
Disk C: gbfc_d__a_e
2. Add /dev/C to /dev/A as RAID1:
Disk A: abdeg__cf__
Disk C: _########## <- system reformats /dev/C and treats the old data
as garbage
3. Balance /dev/{A,C}:
Disk A: abdeg__cf__
Disk C: abcdefg____
Both disks now have a full record of where the data is supposed to be
and have a record indicating that the other disk is part of the FS.
Notice that, though Disk C has the exact same files as it did before
step 1, the on-disk filesystem looks very different.
4. Follow steps 1, 2, and 3 above - but with different disks - similar
end-result.
--
__________
Brendan Hide
http://swiftspirit.co.za/
http://www.webafrica.co.za/?AFF1E97
* Re: Incremental backup for a raid1
2014-03-13 19:48 ` Andrew Skretvedt
2014-03-13 21:09 ` Brendan Hide
@ 2014-03-13 21:14 ` Michael Schuerig
2014-03-13 22:04 ` Chris Murphy
1 sibling, 1 reply; 19+ messages in thread
From: Michael Schuerig @ 2014-03-13 21:14 UTC (permalink / raw)
To: linux-btrfs
On Thursday 13 March 2014 14:48:55 Andrew Skretvedt wrote:
> On 2014-Mar-13 14:28, Hugo Mills wrote:
> > On Thu, Mar 13, 2014 at 08:12:44PM +0100, Michael Schuerig wrote:
> >> My backup use case is different from what has been recently
> >> discussed in another thread. I'm trying to guard against hardware
> >> failure and other causes of destruction.
> >>
> >> I have a btrfs raid1 filesystem spread over two disks. I want to
> >> backup this filesystem regularly and efficiently to an external
> >> disk (same model as the ones in the raid) in such a way that
> >>
> >> * when one disk in the raid fails, I can substitute the backup and
> >> rebalancing from the surviving disk to the substitute only applies
> >> the missing changes.
> >>
> >> * when the entire raid fails, I can re-build a new one from the
> >> backup.
> >>
> >> The filesystem is mounted at its root and has several nested
> >> subvolumes and snapshots (in a .snapshots subdir on each subvol).
[...]
> I'm new; btrfs noob; completely unqualified to write intelligently on
> this topic, nevertheless:
> I understand your setup to be btrfs RAID1 with /dev/A /dev/B, and a
> backup device someplace /dev/C
>
> Could you, at the time you wanted to backup the filesystem:
> 1) in the filesystem, break RAID1: /dev/A /dev/B <-- remove /dev/B
> 2) reestablish RAID1 to the backup device: /dev/A /dev/C <-- added
> 3) balance to effect the backup (i.e. rebuilding the RAID1 onto
> /dev/C) 4) break/reconnect the original devices: remove /dev/C;
> re-add /dev/B to the fs
I've thought of this but don't dare try it without approval from the
experts. At any rate, to be practical, this approach hinges on the
ability to rebuild the raid1 incrementally. That is, the rebuild would
have to start from what is already present on disk B (or C, when it is
re-added). Starting from an effectively blank disk each time would be
prohibitive.
Even if this would work, I'd much prefer keeping the original raid1
intact and to only temporarily add another mirror: "lazy mirroring", to
give the thing a name.
Michael
--
Michael Schuerig
mailto:michael@schuerig.de
http://www.schuerig.de/michael/
* Re: Incremental backup for a raid1
2014-03-13 21:14 ` Michael Schuerig
@ 2014-03-13 22:04 ` Chris Murphy
2014-03-13 23:03 ` Michael Schuerig
0 siblings, 1 reply; 19+ messages in thread
From: Chris Murphy @ 2014-03-13 22:04 UTC (permalink / raw)
To: Btrfs BTRFS
On Mar 13, 2014, at 3:14 PM, Michael Schuerig <michael.lists@schuerig.de> wrote:
> On Thursday 13 March 2014 14:48:55 Andrew Skretvedt wrote:
>> On 2014-Mar-13 14:28, Hugo Mills wrote:
>>> On Thu, Mar 13, 2014 at 08:12:44PM +0100, Michael Schuerig wrote:
>>>> My backup use case is different from what has been recently
>>>> discussed in another thread. I'm trying to guard against hardware
>>>> failure and other causes of destruction.
>>>>
>>>> I have a btrfs raid1 filesystem spread over two disks. I want to
>>>> backup this filesystem regularly and efficiently to an external
>>>> disk (same model as the ones in the raid) in such a way that
>>>>
>>>> * when one disk in the raid fails, I can substitute the backup and
>>>> rebalancing from the surviving disk to the substitute only applies
>>>> the missing changes.
>>>>
>>>> * when the entire raid fails, I can re-build a new one from the
>>>> backup.
>>>>
>>>> The filesystem is mounted at its root and has several nested
>>>> subvolumes and snapshots (in a .snapshots subdir on each subvol).
> [...]
>
>> I'm new; btrfs noob; completely unqualified to write intelligently on
>> this topic, nevertheless:
>> I understand your setup to be btrfs RAID1 with /dev/A /dev/B, and a
>> backup device someplace /dev/C
>>
>> Could you, at the time you wanted to backup the filesystem:
>> 1) in the filesystem, break RAID1: /dev/A /dev/B <-- remove /dev/B
>> 2) reestablish RAID1 to the backup device: /dev/A /dev/C <-- added
>> 3) balance to effect the backup (i.e. rebuilding the RAID1 onto
>> /dev/C) 4) break/reconnect the original devices: remove /dev/C;
>> re-add /dev/B to the fs
>
> I've thought of this but don't dare try it without approval from the
> experts. At any rate, for being practical, this approach hinges on an
> ability to rebuild the raid1 incrementally. That is, the rebuild would
> have to start from what already is present on disk B (or C, when it is
> re-added). Starting from an effectively blank disk each time would be
> prohibitive.
>
> Even if this would work, I'd much prefer keeping the original raid1
> intact and to only temporarily add another mirror: "lazy mirroring", to
> give the thing a name.
At best this seems fragile, but I don't think it works, and it's an edge case from the start. This is what send/receive is for.
In the btrfs replace scenario, the missing device is removed from the volume. It's like a divorce. Missing device 2 is replaced by a different physical device, also called device 2. If you then removed 2b and re-added the formerly replaced device 2a, what happens? I don't know; I'm pretty sure the volume knows this is not device 2b as it should be, and won't accept the formerly replaced device 2a. But it's an edge case to do this, because you've said "device replace". So, lexicon-wise, I wouldn't even want this to work; we'd need a different command even if not different logic.
In the btrfs device add case, you now have a three-disk raid1, which is a whole different beast. Since this isn't n-way raid1, each disk is not stand-alone. You're only assured that data survives a one-disk failure, meaning you must have two drives. You've just increased your risk by doing this, not reduced it. It further proposes running an (ostensibly) production workflow with an always-degraded volume, mounted with -o degraded, on an ongoing basis. So it's three strikes: it's not n-way; you have no uptime if you lose one of the two disks onsite, since you'd have to go get the offsite/on-shelf disk to keep working; and that offsite disk isn't stand-alone anyway, so why even have it offsite? This is a fail.
So the btrfs replace scenario might work but it seems like a bad idea. And overall it's a use case for which send/receive was designed anyway so why not just use that?
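[The send/receive workflow referred to here can be sketched as below. All paths are hypothetical examples; /backup stands for the mounted external btrfs disk.]

```shell
# Initial full backup: take a read-only snapshot, send the whole stream.
btrfs subvolume snapshot -r /data /data/.snapshots/day1
btrfs send /data/.snapshots/day1 | btrfs receive /backup

# Later backups are incremental: -p sends only the delta against the
# previous snapshot, which must exist on both sides.
btrfs subvolume snapshot -r /data /data/.snapshots/day2
btrfs send -p /data/.snapshots/day1 /data/.snapshots/day2 | btrfs receive /backup
```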
Chris Murphy
* Re: Incremental backup for a raid1
2014-03-13 22:04 ` Chris Murphy
@ 2014-03-13 23:03 ` Michael Schuerig
2014-03-14 0:29 ` George Mitchell
0 siblings, 1 reply; 19+ messages in thread
From: Michael Schuerig @ 2014-03-13 23:03 UTC (permalink / raw)
To: linux-btrfs
On Thursday 13 March 2014 16:04:33 Chris Murphy wrote:
> On Mar 13, 2014, at 3:14 PM, Michael Schuerig
<michael.lists@schuerig.de> wrote:
> > On Thursday 13 March 2014 14:48:55 Andrew Skretvedt wrote:
> >> On 2014-Mar-13 14:28, Hugo Mills wrote:
> >>> On Thu, Mar 13, 2014 at 08:12:44PM +0100, Michael Schuerig wrote:
> >>>> My backup use case is different from what has been recently
> >>>> discussed in another thread. I'm trying to guard against hardware
> >>>> failure and other causes of destruction.
> >>>>
> >>>> I have a btrfs raid1 filesystem spread over two disks. I want to
> >>>> backup this filesystem regularly and efficiently to an external
> >>>> disk (same model as the ones in the raid) in such a way that
> >>>>
> >>>> * when one disk in the raid fails, I can substitute the backup
> >>>> and
> >>>> rebalancing from the surviving disk to the substitute only
> >>>> applies
> >>>> the missing changes.
> >>>>
> >>>> * when the entire raid fails, I can re-build a new one from the
> >>>> backup.
> >>>>
> >>>> The filesystem is mounted at its root and has several nested
> >>>> subvolumes and snapshots (in a .snapshots subdir on each subvol).
> >
> > [...]
> >
> >> I'm new; btrfs noob; completely unqualified to write intelligently
> >> on
> >> this topic, nevertheless:
> >> I understand your setup to be btrfs RAID1 with /dev/A /dev/B, and a
> >> backup device someplace /dev/C
> >>
> >> Could you, at the time you wanted to backup the filesystem:
> >> 1) in the filesystem, break RAID1: /dev/A /dev/B <-- remove /dev/B
> >> 2) reestablish RAID1 to the backup device: /dev/A /dev/C <-- added
> >> 3) balance to effect the backup (i.e. rebuilding the RAID1 onto
> >> /dev/C) 4) break/reconnect the original devices: remove /dev/C;
> >> re-add /dev/B to the fs
> >
> > I've thought of this but don't dare try it without approval from the
> > experts. At any rate, for being practical, this approach hinges on
> > an
> > ability to rebuild the raid1 incrementally. That is, the rebuild
> > would have to start from what already is present on disk B (or C,
> > when it is re-added). Starting from an effectively blank disk each
> > time would be prohibitive.
> >
> > Even if this would work, I'd much prefer keeping the original raid1
> > intact and to only temporarily add another mirror: "lazy mirroring",
> > to give the thing a name.
[...]
>> In the btrfs device add case, you now have a three disk raid1 which is
> a whole different beast. Since this isn't n-way raid1, each disk is
> not stand alone. You're only assured data survives a one disk failure
> meaning you must have two drives.
Yes, I understand that. Unless someone convinces me that it's a bad
idea, I keep wishing for a feature that allows intermittently adding a
third disk to a two-disk raid1 and updating that disk so that it could
replace one of the others.
> So the btrfs replace scenario might work but it seems like a bad idea.
> And overall it's a use case for which send/receive was designed
> anyway so why not just use that?
Because it's not "just". Doing it right doesn't seem trivial. For one
thing, there are multiple subvolumes; not at the top level, but nested
inside a root subvolume. Each of them already has snapshots of its own.
If there already is a send/receive script that can handle such a setup,
I'll happily have a look at it.
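[No canonical script for this layout seems to exist; a rough sketch of a per-subvolume loop follows. Every name and path in it is a hypothetical example, and it assumes each .snapshots directory is itself a subvolume, so that it is not captured inside the snapshots it holds.]

```shell
#!/bin/sh
# Sketch: incremental send/receive for several nested subvolumes.
set -e
SUBVOLS="home srv data"      # nested subvolumes under /mnt/root (examples)
DEST=/backup                 # mounted external btrfs filesystem (example)
STAMP=$(date +%F)
for sv in $SUBVOLS; do
    src="/mnt/root/$sv"
    # Most recent existing snapshot, if any, to use as the parent:
    prev=$(ls -1d "$src/.snapshots/"* 2>/dev/null | tail -n 1)
    snap="$src/.snapshots/$STAMP"
    btrfs subvolume snapshot -r "$src" "$snap"
    mkdir -p "$DEST/$sv"
    if [ -n "$prev" ]; then
        # Incremental: the parent snapshot must already exist at $DEST/$sv.
        btrfs send -p "$prev" "$snap" | btrfs receive "$DEST/$sv"
    else
        btrfs send "$snap" | btrfs receive "$DEST/$sv"
    fi
done
```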
Michael
--
Michael Schuerig
mailto:michael@schuerig.de
http://www.schuerig.de/michael/
* Re: Incremental backup for a raid1
2014-03-13 23:03 ` Michael Schuerig
@ 2014-03-14 0:29 ` George Mitchell
2014-03-14 1:14 ` Lists
2014-03-15 11:35 ` Michael Schuerig
0 siblings, 2 replies; 19+ messages in thread
From: George Mitchell @ 2014-03-14 0:29 UTC (permalink / raw)
To: linux-btrfs
On 03/13/2014 04:03 PM, Michael Schuerig wrote:
> On Thursday 13 March 2014 16:04:33 Chris Murphy wrote:
>> On Mar 13, 2014, at 3:14 PM, Michael Schuerig
> <michael.lists@schuerig.de> wrote:
>>> On Thursday 13 March 2014 14:48:55 Andrew Skretvedt wrote:
>>>> On 2014-Mar-13 14:28, Hugo Mills wrote:
>>>>> On Thu, Mar 13, 2014 at 08:12:44PM +0100, Michael Schuerig wrote:
>>>>>> My backup use case is different from what has been recently
>>>>>> discussed in another thread. I'm trying to guard against hardware
>>>>>> failure and other causes of destruction.
>>>>>>
>>>>>> I have a btrfs raid1 filesystem spread over two disks. I want to
>>>>>> backup this filesystem regularly and efficiently to an external
>>>>>> disk (same model as the ones in the raid) in such a way that
>>>>>>
>>>>>> * when one disk in the raid fails, I can substitute the backup
>>>>>> and
>>>>>> rebalancing from the surviving disk to the substitute only
>>>>>> applies
>>>>>> the missing changes.
>>>>>>
>>>>>> * when the entire raid fails, I can re-build a new one from the
>>>>>> backup.
>>>>>>
>>>>>> The filesystem is mounted at its root and has several nested
>>>>>> subvolumes and snapshots (in a .snapshots subdir on each subvol).
>>> [...]
>>>
>>>> I'm new; btrfs noob; completely unqualified to write intelligently
>>>> on
>>>> this topic, nevertheless:
>>>> I understand your setup to be btrfs RAID1 with /dev/A /dev/B, and a
>>>> backup device someplace /dev/C
>>>>
>>>> Could you, at the time you wanted to backup the filesystem:
>>>> 1) in the filesystem, break RAID1: /dev/A /dev/B <-- remove /dev/B
>>>> 2) reestablish RAID1 to the backup device: /dev/A /dev/C <-- added
>>>> 3) balance to effect the backup (i.e. rebuilding the RAID1 onto
>>>> /dev/C) 4) break/reconnect the original devices: remove /dev/C;
>>>> re-add /dev/B to the fs
>>> I've thought of this but don't dare try it without approval from the
>>> experts. At any rate, for being practical, this approach hinges on
>>> an
>>> ability to rebuild the raid1 incrementally. That is, the rebuild
>>> would have to start from what already is present on disk B (or C,
>>> when it is re-added). Starting from an effectively blank disk each
>>> time would be prohibitive.
>>>
>>> Even if this would work, I'd much prefer keeping the original raid1
>>> intact and to only temporarily add another mirror: "lazy mirroring",
>>> to give the thing a name.
> [...]
>> In the btrfs device add case, you now have a three disk raid1 which is
>> a whole different beast. Since this isn't n-way raid1, each disk is
>> not stand alone. You're only assured data survives a one disk failure
>> meaning you must have two drives.
> Yes, I understand that. Unless someone convinces me that it's a bad
> idea, I keep wishing for a feature that allows to intermittently add a
> third disk to a two disk raid1 and update that disk so that it could
> replace one of the others.
>
>> So the btrfs replace scenario might work but it seems like a bad idea.
>> And overall it's a use case for which send/receive was designed
>> anyway so why not just use that?
> Because it's not "just". Doing it right doesn't seem trivial. For one
> thing, there are multiple subvolumes; not at the top-level but nested
> inside a root subvolume. Each of them already has snapshots of its own.
> If there already is a send/receive script that can handle such a setup
> I'll happily have a look at it.
>
> Michael
>
I think the closest thing there will ever be to this is n-way
mirroring. I currently use rsync to a separate drive to maintain a
backup copy, but it is not integrated into the array like n-way would
be, and is definitely not a perfect solution. A 3-drive 3-way would
require the 3rd drive to be in the array the whole time, or it would
run into the same problem of requiring a complete rebuild rather than
an incremental one when reintroduced, UNLESS such a feature were
specifically included in the design. Even then, in a 3-way
configuration, you would end up simplex on at least some data until the
partial rebuild was completed. Personally, I will be DELIGHTED when
n-way appears, simply because basic 3-way gets us out of the dreaded
simplex trap.
* Re: Incremental backup for a raid1
2014-03-14 0:29 ` George Mitchell
@ 2014-03-14 1:14 ` Lists
2014-03-14 3:37 ` Chris Murphy
2014-03-15 11:35 ` Michael Schuerig
1 sibling, 1 reply; 19+ messages in thread
From: Lists @ 2014-03-14 1:14 UTC (permalink / raw)
To: linux-btrfs
See comments at the bottom:
On 03/13/2014 05:29 PM, George Mitchell wrote:
> On 03/13/2014 04:03 PM, Michael Schuerig wrote:
>> On Thursday 13 March 2014 16:04:33 Chris Murphy wrote:
>>> On Mar 13, 2014, at 3:14 PM, Michael Schuerig
>> <michael.lists@schuerig.de> wrote:
>>>> On Thursday 13 March 2014 14:48:55 Andrew Skretvedt wrote:
>>>>> On 2014-Mar-13 14:28, Hugo Mills wrote:
>>>>>> On Thu, Mar 13, 2014 at 08:12:44PM +0100, Michael Schuerig wrote:
>>>>>>> My backup use case is different from what has been recently
>>>>>>> discussed in another thread. I'm trying to guard against hardware
>>>>>>> failure and other causes of destruction.
>>>>>>>
>>>>>>> I have a btrfs raid1 filesystem spread over two disks. I want to
>>>>>>> backup this filesystem regularly and efficiently to an external
>>>>>>> disk (same model as the ones in the raid) in such a way that
>>>>>>>
>>>>>>> * when one disk in the raid fails, I can substitute the backup
>>>>>>> and
>>>>>>> rebalancing from the surviving disk to the substitute only
>>>>>>> applies
>>>>>>> the missing changes.
>>>>>>>
>>>>>>> * when the entire raid fails, I can re-build a new one from the
>>>>>>> backup.
>>>>>>>
>>>>>>> The filesystem is mounted at its root and has several nested
>>>>>>> subvolumes and snapshots (in a .snapshots subdir on each subvol).
>>>> [...]
>>>>
>>>>> I'm new; btrfs noob; completely unqualified to write intelligently
>>>>> on
>>>>> this topic, nevertheless:
>>>>> I understand your setup to be btrfs RAID1 with /dev/A /dev/B, and a
>>>>> backup device someplace /dev/C
>>>>>
>>>>> Could you, at the time you wanted to backup the filesystem:
>>>>> 1) in the filesystem, break RAID1: /dev/A /dev/B <-- remove /dev/B
>>>>> 2) reestablish RAID1 to the backup device: /dev/A /dev/C <-- added
>>>>> 3) balance to effect the backup (i.e. rebuilding the RAID1 onto
>>>>> /dev/C) 4) break/reconnect the original devices: remove /dev/C;
>>>>> re-add /dev/B to the fs
>>>> I've thought of this but don't dare try it without approval from the
>>>> experts. At any rate, for being practical, this approach hinges on
>>>> an
>>>> ability to rebuild the raid1 incrementally. That is, the rebuild
>>>> would have to start from what already is present on disk B (or C,
>>>> when it is re-added). Starting from an effectively blank disk each
>>>> time would be prohibitive.
>>>>
>>>> Even if this would work, I'd much prefer keeping the original raid1
>>>> intact and to only temporarily add another mirror: "lazy mirroring",
>>>> to give the thing a name.
>> [...]
>>> In the btrfs device add case, you now have a three disk raid1 which is
>>> a whole different beast. Since this isn't n-way raid1, each disk is
>>> not stand alone. You're only assured data survives a one disk failure
>>> meaning you must have two drives.
>> Yes, I understand that. Unless someone convinces me that it's a bad
>> idea, I keep wishing for a feature that allows to intermittently add a
>> third disk to a two disk raid1 and update that disk so that it could
>> replace one of the others.
>>
>>> So the btrfs replace scenario might work but it seems like a bad idea.
>>> And overall it's a use case for which send/receive was designed
>>> anyway so why not just use that?
>> Because it's not "just". Doing it right doesn't seem trivial. For one
>> thing, there are multiple subvolumes; not at the top-level but nested
>> inside a root subvolume. Each of them already has snapshots of its own.
>> If there already is a send/receive script that can handle such a setup
>> I'll happily have a look at it.
>>
>> Michael
>>
> I think the closest thing there will ever be to this is n-way
> mirroring. I currently use rsync to a separate drive to maintain a
> backup copy, but it is not integrated into the array like n-way would
> be, and is definitely not a perfect solution. But a 3 drive 3-way
> would require the 3rd drive to be in the array the whole time or it
> would run into the same problem requiring a complete rebuild rather
> than an incremental when reintroduced, UNLESS such a feature was
> specifically included in the design, and even then, in a 3-way
> configuration, you would end up simplex on at least some data until
> the partial rebuild was completed. Personally, I will be DELIGHTED
> when n-way appears simply because basic 3-way gets us out of the
> dreaded simplex trap.
I'm coming from ZFS land, am a BTRFS newbie, and I don't understand this
discussion, at all. I'm assuming that BTRFS send/receive works similar
to ZFS's similarly named feature. We use snapshots and ZFS send/receive
to a remote server to do our backups. To do an rsync of our production
file store takes days because there are so many files, while
snapshotting and using ZFS send/receive takes tens of minutes at local
(Gbit) speeds, and a few hours at WAN speeds, nearly all of that time
being transfer time.
So I just don't get the "backup" problem. Place btrfs' equivalent of a
pool on the external drive, and use send/receive of the filesystem or
snapshot(s). Does BTRFS work so differently in this regard? If so, I'd
like to know what's different.
My primary interest in BTRFS vs ZFS is two-fold:
1) ZFS has a couple of limitations that I find disappointing, that don't
appear to be present in BTRFS.
A) Inability to upgrade a non-redundant ZFS pool/vdev to raidz or
increase the raidz (redundancy) level after creation. (Yes, you can plan
around this, but I see no good reason to HAVE to)
B) Inability to remove a vdev once added to a pool.
2) Licensing: ZFS on Linux is truly great so far in all my testing,
can't throw enough compliments their way, but I would really like to
rely on a "first class citizen" as far as the Linux kernel is concerned.
-Ben
* Re: Incremental backup for a raid1
2014-03-14 1:14 ` Lists
@ 2014-03-14 3:37 ` Chris Murphy
0 siblings, 0 replies; 19+ messages in thread
From: Chris Murphy @ 2014-03-14 3:37 UTC (permalink / raw)
To: Btrfs BTRFS
On Mar 13, 2014, at 7:14 PM, Lists <lists@benjamindsmith.com> wrote:
>
> I'm assuming that BTRFS send/receive works similar to ZFS's similarly named feature.
Similar, yes, but not all options are the same between them; e.g. zfs send -R replicates all descendant file systems. I don't think zfs requires volumes, filesystems, or snapshots to be read-only, whereas btrfs send only works on read-only snapshot subvolumes. There has been some suggestion of recursive snapshot creation and recursive send for btrfs.
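[To illustrate the read-only requirement (paths are hypothetical examples):]

```shell
# btrfs send refuses a writable subvolume; take a read-only snapshot first.
btrfs subvolume snapshot -r /mnt/data /mnt/data-ro   # -r = read-only
btrfs send /mnt/data-ro > /tmp/data.stream           # this now succeeds
```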
> So just I don't get the "backup" problem. Place btrfs' equivalent of a pool on the external drive, and use send/receive of the filesystem or snapshot(s). Does BTRFS work so differently in this regard? If so, I'd like to know what's different.
The topmost thing in zfs is the pool, which on btrfs is the volume. Neither zfs send nor btrfs send works at this level to send everything within a pool/volume. zfs has the file system and btrfs has the subvolume, which can be snapshotted. Either (or both) can be used with send.
zfs also has the volume, which is a block device that can be snapshotted; there isn't yet a btrfs equivalent.
Btrfs and zfs both have clones, but the distinction is stronger with zfs: for example, zfs snapshots can't be deleted unless their clones are deleted. Btrfs send has a -c clone-src option that I don't really understand, and there is also --reflink, which is a clone at the file level.
Anyway there are a lot of similarities but also quite a few differences. Basic functionality seems pretty much the same.
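[Editorial note: a hedged side-by-side sketch of the two models described above; the pool, subvolume, host, and path names are all made up for illustration.]

```shell
# ZFS: one recursive send replicates every descendent filesystem/snapshot.
zfs snapshot -r tank@backup1
zfs send -R tank@backup1 | ssh backuphost zfs receive -F backuppool/tank

# Btrfs: send works only on read-only snapshots, one subvolume at a time.
btrfs subvolume snapshot -r /mnt/home /mnt/.snapshots/home-backup1
btrfs send /mnt/.snapshots/home-backup1 | btrfs receive /backup
```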
>
> My primary interest in BTRFS vs ZFS is two-fold:
>
> 1) ZFS has a couple of limitations that I find disappointing, that don't appear to be present in BTRFS.
> A) Inability to upgrade a non-redundant ZFS pool/vdev to raidz or increase the raidz (redundancy) level after creation. (Yes, you can plan around this, but I see no good reason to HAVE to)
> B) Inability to remove a vdev once added to a pool.
>
> 2) Licensing: ZFS on Linux is truly great so far in all my testing, can't throw enough compliments their way, but I would really like to rely on a "first class citizen" as far as the Linux kernel is concerned.
3. On btrfs you can delete a parent subvolume and the children remain. On zfs, you can't destroy a zfs filesystem/volume unless its snapshots are deleted, and you can't delete snapshots unless their clones are deleted.
Chris Murphy
* Re: Incremental backup for a raid1
2014-03-13 19:12 Incremental backup for a raid1 Michael Schuerig
2014-03-13 19:28 ` Hugo Mills
@ 2014-03-14 6:42 ` Duncan
2014-03-14 8:56 ` Michael Schuerig
1 sibling, 1 reply; 19+ messages in thread
From: Duncan @ 2014-03-14 6:42 UTC (permalink / raw)
To: linux-btrfs
Michael Schuerig posted on Thu, 13 Mar 2014 20:12:44 +0100 as excerpted:
> My backup use case is different from what has been recently
> discussed in another thread. I'm trying to guard against hardware
> failure and other causes of destruction.
>
> I have a btrfs raid1 filesystem spread over two disks. I want to backup
> this filesystem regularly and efficiently to an external disk (same
> model as the ones in the raid) in such a way that
>
> * when one disk in the raid fails, I can substitute the backup and
> rebalancing from the surviving disk to the substitute only applies the
> missing changes.
>
> * when the entire raid fails, I can re-build a new one from the backup.
>
> The filesystem is mounted at its root and has several nested subvolumes
> and snapshots (in a .snapshots subdir on each subvol).
>
> Is it possible to do what I'm looking for?
AFAICS, as mentioned down the other subthread, the closest thing to this
would be N-way mirroring, a coming feature on the roadmap for
introduction after raid5/6 mode[1] gets completed. The current raid1
mode is 2-way-mirroring only, regardless of the number of devices.
N-way-mirroring is actually my most hotly anticipated feature for a
different reason[2], but for you it would work like this:
1) Set up the 3-way (or 4-way if preferred) mirroring and balance to
ensure copies of all data on all devices.
2) Optionally scrub to ensure the integrity of all copies.
3) Disconnect the backup device(s). (Don't btrfs device delete, this
would remove the copy. Just disconnect.)
4) Store the backups.
5) Periodically get them out and reconnect.
6) Rebalance to update. (Since the devices remain members of the mirror,
simply outdated, the balance should only update, not rewrite the entire
thing.)
7) Optionally scrub to verify.
8) Repeat steps 3-7 as necessary.
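[Editorial note: the cycle above can be sketched in commands. N-way mirroring did not exist when this was written; the sketch assumes a 3-copy profile (later kernels provide this as raid1c3), and whether the balance in step 6 really applies only the missing changes is exactly the open assumption in this proposal.]

```shell
# 1) Add the backup disk and convert to 3-way mirroring (raid1c3 is the
#    3-copy profile of later kernels; hypothetical at the time of writing):
btrfs device add /dev/sdc /mnt
btrfs balance start -dconvert=raid1c3 -mconvert=raid1c3 /mnt

# 2) Verify the integrity of all copies:
btrfs scrub start -B /mnt

# 3-4) Unmount, disconnect the backup disk, store it.

# 5-6) Reconnect, remount, and rebalance so the stale member catches up:
btrfs balance start /mnt

# 7) Verify again:
btrfs scrub start -B /mnt
```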
If you went 4-way so two backups and alternated the one you plugged in,
it'd also protect against mishap that might take out all devices during
steps 5-7 when the backup is connected as well, since you'd still have
that other backup available.
Unfortunately, completing raid5/6 support is still an ongoing project,
and as a result, fully functional and /reasonably/ tested N-way-mirroring
remains the same 6-months-minimum away that it has been for over a year
now. But I sure am anticipating that day!
---
[1] Currently, the raid5/6 support is incomplete, the parity is
calculated and writes are done, but some restore scenarios aren't yet
properly supported and raid5/6-mode scrub isn't complete either, so the
current code is considered testing-only, not for deployment where the
raid5/6 feature would actually be relied on. That has remained the
raid5/6 status for several kernels now, as the focus has been on bugfixing
other areas, including snapshot-aware defrag (currently deactivated due to
horrible scaling issues; current defrag COWs the operational mount only,
duplicating previously shared blocks) and send/receive.
[2] In addition to protection against the loss of N-1 devices, I really
love btrfs' data integrity features and the ability to recover from
another copy if one is found to be corrupted, which is why I'm running
raid1 mode here. But currently there are only the two copies, and if both
get corrupted... My sweet spot would be three copies, allowing corruption
of two and recovery from the third, which is why I personally am so hotly
anticipating N-way-mirroring, but unfortunately, it's looking a bit like
the proverbial carrot on the stick in front of the donkey, these days.
--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman
* Re: Incremental backup for a raid1
2014-03-14 6:42 ` Duncan
@ 2014-03-14 8:56 ` Michael Schuerig
2014-03-14 11:24 ` Duncan
0 siblings, 1 reply; 19+ messages in thread
From: Michael Schuerig @ 2014-03-14 8:56 UTC (permalink / raw)
To: linux-btrfs
On Friday 14 March 2014 06:42:27 Duncan wrote:
> N-way-mirroring is actually my most hotly anticipated feature for a
> different reason[2], but for you it would work like this:
>
> 1) Set up the 3-way (or 4-way if preferred) mirroring and balance to
> ensure copies of all data on all devices.
>
> 2) Optionally scrub to ensure the integrity of all copies.
>
> 3) Disconnect the backup device(s). (Don't btrfs device delete, this
> would remove the copy. Just disconnect.)
>
> 4) Store the backups.
>
> 5) Periodically get them out and reconnect.
>
> 6) Rebalance to update. (Since the devices remain members of the
> mirror, simply outdated, the balance should only update, not rewrite
> the entire thing.)
>
> 7) Optionally scrub to verify.
>
> 8) Repeat steps 3-7 as necessary.
Judging from your description, N-way mirroring is (going to be) exactly
what I was hoping for.
Michael
--
Michael Schuerig
mailto:michael@schuerig.de
http://www.schuerig.de/michael/
* Re: Incremental backup for a raid1
2014-03-14 8:56 ` Michael Schuerig
@ 2014-03-14 11:24 ` Duncan
2014-03-14 13:46 ` George Mitchell
0 siblings, 1 reply; 19+ messages in thread
From: Duncan @ 2014-03-14 11:24 UTC (permalink / raw)
To: linux-btrfs
Michael Schuerig posted on Fri, 14 Mar 2014 09:56:20 +0100 as excerpted:
[Duncan posted...]
>> 3) Disconnect the backup device(s). (Don't btrfs device delete, this
>> would remove the copy. Just disconnect.)
Hmm... Looking back at what I wrote...
Presumably either have the filesystem unmounted for the disconnect (and
ideally, the system off, tho with modern drives in theory that's not an
issue, but still good if it can be done), or at least remounted read-only.
I had guessed that was implicit, but making it explicit is probably best
all around, just in case. At least I can rest better with it, having
made that explicit.
--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman
* Re: Incremental backup for a raid1
2014-03-14 11:24 ` Duncan
@ 2014-03-14 13:46 ` George Mitchell
2014-03-14 14:36 ` Duncan
2014-03-14 14:44 ` Austin S Hemmelgarn
0 siblings, 2 replies; 19+ messages in thread
From: George Mitchell @ 2014-03-14 13:46 UTC (permalink / raw)
To: Duncan, linux-btrfs
Actually, an interesting concept would be to have the initial two-drive
RAID 1 mirrored by 2 additional drives in a 4-way configuration on a
second machine at a remote location on a private high-speed network with
both machines up 24/7. In that case, if such a configuration would
work, either machine could be obliterated and the data would survive
fully intact in full duplex mode. It would just need to be remounted
from the backup system and away it goes. Just thinking of interesting
possibilities with n-way mirroring. Oh how I would love to have n-way
mirroring to play with!
On 03/14/2014 04:24 AM, Duncan wrote:
> Michael Schuerig posted on Fri, 14 Mar 2014 09:56:20 +0100 as excerpted:
>
> [Duncan posted...]
>
>>> 3) Disconnect the backup device(s). (Don't btrfs device delete, this
>>> would remove the copy. Just disconnect.)
> Hmm... Looking back at what I wrote...
>
> Presumably either have the filesystem unmounted for the disconnect (and
> ideally, the system off, tho with modern drives in theory that's not an
> issue, but still good if it can be done), or at least remounted read-only.
>
> I had guessed that was implicit, but making it explicit is probably best
> all around, just in case. At least I can rest better with it, having
> made that explicit.
>
* Re: Incremental backup for a raid1
2014-03-14 13:46 ` George Mitchell
@ 2014-03-14 14:36 ` Duncan
2014-03-14 14:44 ` Austin S Hemmelgarn
1 sibling, 0 replies; 19+ messages in thread
From: Duncan @ 2014-03-14 14:36 UTC (permalink / raw)
To: linux-btrfs
George Mitchell posted on Fri, 14 Mar 2014 06:46:19 -0700 as excerpted:
> Actually, an interesting concept would be to have the initial two drive
> RAID 1 mirrored by 2 additional drives in 4-way configuration on a
> second machine at a remote location on a private high speed network with
> both machines up 24/7. In that case, if such a configuration would
> work, either machine could be obliterated and the data would survive
> fully intact in full duplex mode. It would just need to be remounted
> from the backup system and away it goes. Just thinking of interesting
> possibilities with n-way mirroring. Oh how I would love to have n-way
> mirroring to play with!
In terms of raid1, mdraid already supports such a concept with its "write
mostly" component device designation. A component device designated
"write mostly" is never read from unless it becomes the only device
available, so it's perfect for such an "over-the-net real-time-online-
backup" solution.
The other half of the solution is the various block-device-over-network
drivers such as BLK_DEV_NBD (see Documentation/blockdev/nbd.txt) for the
client side, the server side of which is in userspace. That lets you
have what appears to be a local block device routed over the network to
that remote location.
Of course mdraid is lacking btrfs' data integrity features, etc, with its
raid1 implementation entirely lacking any data integrity or real-time
cross-checking at all, but unlike btrfs' N-way-mirroring it gets points
for actually being available right now, so as they say, YMMV.
Of course the other notable issue with your idea is that while it DOES
address the real-time remote redundancy issue, that doesn't (by itself)
deal with fat-fingering or similar issues where real-time actually means
the same problem's duplicated to the backup as well.
But btrfs snapshots address the fat-fingering issue and can be done on
the partially-remote filesystem solution as well, and local or remote-
local solutions (like periodic btrfs send to a separate local filesystem
at both ends) can deal with the filesystem damage possibilities.
--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman
* Re: Incremental backup for a raid1
2014-03-14 13:46 ` George Mitchell
2014-03-14 14:36 ` Duncan
@ 2014-03-14 14:44 ` Austin S Hemmelgarn
1 sibling, 0 replies; 19+ messages in thread
From: Austin S Hemmelgarn @ 2014-03-14 14:44 UTC (permalink / raw)
To: george, Duncan, linux-btrfs
On 2014-03-14 09:46, George Mitchell wrote:
> Actually, an interesting concept would be to have the initial two drive
> RAID 1 mirrored by 2 additional drives in 4-way configuration on a
> second machine at a remote location on a private high speed network with
> both machines up 24/7. In that case, if such a configuration would
> work, either machine could be obliterated and the data would survive
> fully intact in full duplex mode. It would just need to be remounted
> from the backup system and away it goes. Just thinking of interesting
> possibilities with n-way mirroring. Oh how I would love to have n-way
> mirroring to play with!
That can already be done, albeit slightly differently by stacking btrfs
RAID 1 on top of a pair of DRBD devices. Of course, this doesn't
provide quite the same degree of safety as your suggestion, but it does
work (and DRBD makes the remote copy write-mostly for the local system
automatically).
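[Editorial note: a rough sketch of that stacking, with made-up resource names, assuming two DRBD resources are already defined in /etc/drbd.d/ in the usual way, each pairing one local disk with its remote mirror.]

```shell
# Bring up both DRBD resources and take the primary role locally
# (r0 mirrors local disk 1, r1 mirrors local disk 2):
drbdadm up r0 && drbdadm up r1
drbdadm primary r0 && drbdadm primary r1

# Then put btrfs raid1 across the two replicated block devices:
mkfs.btrfs -d raid1 -m raid1 /dev/drbd0 /dev/drbd1
mount /dev/drbd0 /mnt
```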
* Re: Incremental backup for a raid1
2014-03-14 0:29 ` George Mitchell
2014-03-14 1:14 ` Lists
@ 2014-03-15 11:35 ` Michael Schuerig
2014-03-15 11:53 ` Hugo Mills
2014-03-15 16:01 ` George Mitchell
1 sibling, 2 replies; 19+ messages in thread
From: Michael Schuerig @ 2014-03-15 11:35 UTC (permalink / raw)
To: linux-btrfs
On Thursday 13 March 2014 17:29:11 George Mitchell wrote:
> I currently use rsync to a separate drive to maintain a
> backup copy, but it is not integrated into the array like n-way would
> be, and is definitely not a perfect solution.
Could you explain how you're using rsync? I was just about to copy a
btrfs filesystem to another disk. That filesystem has several subvolumes
and about 100 snapshots overall. Owing to COW, this amounts to about
1.2TB. However, I reckon that rsync doesn't know anything about COW and
accordingly would blow up my data immensely on the destination disk.
How do I copy a btrfs filesystem preserving its complete contents? How
do I update such a copy?
Yes, I want to keep the subvolume layout of the original and I want to
copy all snapshots. I don't think send/receive is the answer, but it's
likely I don't understand it well enough. I'm concerned that a
send/receive-based approach is not robust against mishaps.
Consider: I want to incrementally back up a filesystem to two external
disks. For this I'd have to keep, for each subvolume, a snapshot
corresponding to its state on the backup disk. If I make any mistake in
managing these snapshots, I can't update the external backup anymore.
Also, I don't understand whether send/receive would allow me to
copy/update a subvolume *including* its snapshots.
Things have become a little more complicated than I had hoped for, but
I've only been using btrfs for a couple of weeks.
Michael
--
Michael Schuerig
mailto:michael@schuerig.de
http://www.schuerig.de/michael/
* Re: Incremental backup for a raid1
2014-03-15 11:35 ` Michael Schuerig
@ 2014-03-15 11:53 ` Hugo Mills
2014-03-15 16:01 ` George Mitchell
1 sibling, 0 replies; 19+ messages in thread
From: Hugo Mills @ 2014-03-15 11:53 UTC (permalink / raw)
To: Michael Schuerig; +Cc: linux-btrfs
On Sat, Mar 15, 2014 at 12:35:30PM +0100, Michael Schuerig wrote:
> On Thursday 13 March 2014 17:29:11 George Mitchell wrote:
> > I currently use rsync to a separate drive to maintain a
> > backup copy, but it is not integrated into the array like n-way would
> > be, and is definitely not a perfect solution.
>
> Could you explain how you're using rsync? I was just about to copy a
> btrfs filesystem to another disk. That filesystem has several subvolumes
> and about 100 snapshots overall. Owing to COW, this amounts to about
> 1.2TB. However, I reckon that rsync doesn't know anything about COW and
> accordingly would blow up my data immensely on the destination disk.
>
> How do I copy a btrfs filesystem preserving its complete contents? How
> do I update such a copy?
>
> Yes, I want to keep the subvolume layout of the original and I want to
> copy all snapshots. I don't think send/receive is the answer, but it's
> likely I don't understand it well enough. I'm concerned that a
> send/receive-based approach is not robust against mishaps.
send/receive is the answer, but it's going to be a bit more
complicated to manage *all* of the snapshots. (Questions -- do you
actually need them all backed up? Can you instead do incremental
backups of the "main" subvol and keep each of those independently on
the backup machine instead?)
> Consider: I want to incrementally back up a filesystem to two external
> disks. For this I'd have to keep, for each subvolume, a snapshot
> corresponding to its state on the backup disk. If I make any mistake in
> managing these snapshots, I can't update the external backup anymore.
Correct (I got bitten by this last week with my fledgling backup
process). You need a place that stores the "current state" subvolumes
that's not going to be touched by anything else, and you can't clean
up any given base until you're certain that there's a good new one
available on both sides. One thing that helps here is that send
requires the snapshot being sent to be marked read-only, so it's not
possible to change it at all -- but you can delete them.
> Also, I don't understand whether send/receive would allow me to
> copy/update a subvolume *including* its snapshots.
Snapshots aren't owned by subvolumes. Once you've made a snapshot,
that snapshot is a fully equal partner of the subvol that it was a
snapshot of -- there is no hierarchy of ownership. This means that you
will have to send each snapshot independently.
What send allows you to do is to specify that one or more
subvolumes on the send side can be assumed to exist on the receive
side (via -p and -c). If you do that, the stream can then use them as
clone sources (i.e. should make shared CoW copies from them, rather
than sending all the data).
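[Editorial note: as a concrete illustration of the -p mechanism, a minimal incremental cycle might look like the sketch below. Paths are made up; the point is that the base snapshot must survive on both sides between runs.]

```shell
# Initial full send: a read-only snapshot is the unit of transfer.
btrfs subvolume snapshot -r /mnt/data /mnt/data/.base/data-1
btrfs send /mnt/data/.base/data-1 | btrfs receive /backup

# Later: snapshot again and send only the delta against the known base.
btrfs subvolume snapshot -r /mnt/data /mnt/data/.base/data-2
btrfs send -p /mnt/data/.base/data-1 /mnt/data/.base/data-2 \
    | btrfs receive /backup

# Only once data-2 exists on both sides is it safe to delete data-1.
btrfs subvolume delete /mnt/data/.base/data-1
```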
Hugo.
--
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
PGP key: 65E74AC0 from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
--- ... one ping(1) to rule them all, and in the ---
darkness bind(2) them.
* Re: Incremental backup for a raid1
2014-03-15 11:35 ` Michael Schuerig
2014-03-15 11:53 ` Hugo Mills
@ 2014-03-15 16:01 ` George Mitchell
1 sibling, 0 replies; 19+ messages in thread
From: George Mitchell @ 2014-03-15 16:01 UTC (permalink / raw)
To: linux-btrfs
Michael, I am currently using rsync INSTEAD of btrfs backup tools. I
really don't see any way that it could be compatible with the backup
features of btrfs. As I noted in my post, it is definitely not a
perfect solution, but it is doing the job for me. What I REALLY want in
this regard is n-way mirroring to get me out of the simplex trap
completely. At that point, I can have more confidence in btrfs snapshot
capability.
On 03/15/2014 04:35 AM, Michael Schuerig wrote:
> On Thursday 13 March 2014 17:29:11 George Mitchell wrote:
>> I currently use rsync to a separate drive to maintain a
>> backup copy, but it is not integrated into the array like n-way would
>> be, and is definitely not a perfect solution.
> Could you explain how you're using rsync? I was just about to copy a
> btrfs filesystem to another disk. That filesystem has several subvolumes
> and about 100 snapshots overall. Owing to COW, this amounts to about
> 1.2TB. However, I reckon that rsync doesn't know anything about COW and
> accordingly would blow up my data immensely on the destination disk.
>
> How do I copy a btrfs filesystem preserving its complete contents? How
> do I update such a copy?
>
> Yes, I want to keep the subvolume layout of the original and I want to
> copy all snapshots. I don't think send/receive is the answer, but it's
> likely I don't understand it well enough. I'm concerned that a
> send/receive-based approach is not robust against mishaps.
>
> Consider: I want to incrementally back up a filesystem to two external
> disks. For this I'd have to keep, for each subvolume, a snapshot
> corresponding to its state on the backup disk. If I make any mistake in
> managing these snapshots, I can't update the external backup anymore.
>
> Also, I don't understand whether send/receive would allow me to
> copy/update a subvolume *including* its snapshots.
>
> Things have become a little more complicated than I had hoped for, but
> I've only been using btrfs for a couple of weeks.
>
> Michael
>
end of thread, other threads:[~2014-03-15 16:00 UTC | newest]
Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-03-13 19:12 Incremental backup for a raid1 Michael Schuerig
2014-03-13 19:28 ` Hugo Mills
2014-03-13 19:48 ` Andrew Skretvedt
2014-03-13 21:09 ` Brendan Hide
2014-03-13 21:14 ` Michael Schuerig
2014-03-13 22:04 ` Chris Murphy
2014-03-13 23:03 ` Michael Schuerig
2014-03-14 0:29 ` George Mitchell
2014-03-14 1:14 ` Lists
2014-03-14 3:37 ` Chris Murphy
2014-03-15 11:35 ` Michael Schuerig
2014-03-15 11:53 ` Hugo Mills
2014-03-15 16:01 ` George Mitchell
2014-03-14 6:42 ` Duncan
2014-03-14 8:56 ` Michael Schuerig
2014-03-14 11:24 ` Duncan
2014-03-14 13:46 ` George Mitchell
2014-03-14 14:36 ` Duncan
2014-03-14 14:44 ` Austin S Hemmelgarn