* how to clone a btrfs filesystem
@ 2015-04-17 23:08 Christoph Anton Mitterer
From: Christoph Anton Mitterer @ 2015-04-17 23:08 UTC
To: linux-btrfs
Hey.
I've seen that this has been asked some times before, and there are
stackoverflow/etc. questions on that, but none with a really good
answer.
How can I best copy one btrfs filesystem (with snapshots and subvolumes)
into another, especially with keeping the CoW/reflink status of all
files?
And ideally incrementally upgrade it later (again with all
snapshots/subvols, and again not losing the shared blocks between these
files).
send/receive apparently works on just one subvolume at a time,... and
documentation is quite sparse :-/
Cheers,
Chris.

* Re: how to clone a btrfs filesystem
From: Russell Coker @ 2015-04-18 4:24 UTC
To: Christoph Anton Mitterer; +Cc: linux-btrfs

On Fri, 17 Apr 2015 11:08:44 PM Christoph Anton Mitterer wrote:
> How can I best copy one btrfs filesystem (with snapshots and subvolumes)
> into another, especially with keeping the CoW/reflink status of all
> files?

dd works. ;)

> And ideally incrementally upgrade it later (again with all
> snapshots/subvols, and again not losing the shared blocks between these
> files).

There are patches to rsync that make it work on block devices. Of course
that will copy space occupied by deleted files too.

--
My Main Blog         http://etbe.coker.com.au/
My Documents Blog    http://doc.coker.com.au/

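For the raw block-device route, a minimal sketch (device names are
placeholders; the target must be at least as large as the source, and the
copy keeps the same filesystem UUID, so avoid having both devices attached
at the same time):

  # whole-device clone of the unmounted source onto the target
  $ dd if=/dev/sdX of=/dev/sdY bs=64M conv=fsync
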
* Re: how to clone a btrfs filesystem
From: Christoph Anton Mitterer @ 2015-04-18 5:17 UTC
To: Russell Coker; +Cc: linux-btrfs

On Sat, 2015-04-18 at 04:24 +0000, Russell Coker wrote:
> dd works. ;)
>
> There are patches to rsync that make it work on block devices. Of course
> that will copy space occupied by deleted files too.

I think both are not quite the solutions I was looking for.

Guess for dd this is obvious, but for rsync I'd also lose all btrfs
features like checksum verification,... and even if these patches you
mention make it work on block devices, I'd guess it would at least need
to read everything, it would no longer be a merge into another filesystem
(perhaps I shouldn't have written "clone")... and the target block device
would need to be at least the size of the origin.

Can't one do something like the following:

1) The source fs has several snapshots and subvols.
   The target fs is empty (the first time).

For the first time populating the target fs:

2) Make ro snapshots of all non-ro snapshots and subvols on the source fs.

3) Send/receive the first of the ro snapshots to the target fs, with no
   parent and no clone-src.

4) Send/receive all further ro snapshots to the target fs, with no
   parents, but each time specifying one further clone-src (i.e. all that
   have already been sent/received) so that they're used for reflinks and
   so on.

5) At the end somehow make rw subvols from the snapshots/subvols that had
   previously been rw (how?).

In the future, when an incremental backup should be made:

2) as above

3) Send/receive each of the ro snapshots to the target fs, using the
   exactly matching ro snapshot on the other side as parent. (Would I
   need to give anything as clone-src??)

Does that sound as if it would work like that? Especially, would it
preserve all the reflink statuses and everything else (sparse files,
etc.)?

Some additional questions:
a) Can btrfs send change anything(!) on the source fs?
b) Can one abort (Ctrl-C) a send and/or receive... and make it continue
   at the same place where it was stopped?

Thanks,
Chris

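A minimal sketch of the first-time transfer described in steps 3 and 4
above, assuming /mnt/src and /mnt/dst are the mounted source and target
filesystems and the read-only snapshots live under /mnt/src/snapshots
(all names are placeholders):

  # step 3: the first snapshot goes over with no parent and no clone source
  $ btrfs send /mnt/src/snapshots/snap-a | btrfs receive /mnt/dst/

  # step 4: each further snapshot lists the already-transferred ones as
  # clone sources so shared extents stay shared on the target
  $ btrfs send -c /mnt/src/snapshots/snap-a \
      /mnt/src/snapshots/snap-b | btrfs receive /mnt/dst/
  $ btrfs send -c /mnt/src/snapshots/snap-a -c /mnt/src/snapshots/snap-b \
      /mnt/src/snapshots/snap-c | btrfs receive /mnt/dst/
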
* Re: how to clone a btrfs filesystem
From: Russell Coker @ 2015-04-18 15:02 UTC
To: Christoph Anton Mitterer; +Cc: linux-btrfs

On Sat, 18 Apr 2015, Christoph Anton Mitterer <calestyo@scientia.net> wrote:
> On Sat, 2015-04-18 at 04:24 +0000, Russell Coker wrote:
> > dd works. ;)
> >
> > There are patches to rsync that make it work on block devices. Of
> > course that will copy space occupied by deleted files too.
>
> I think both are not quite the solutions I was looking for.

I know, but I don't think what you want is possible at this time.

> Guess for dd this is obvious, but for rsync I'd also lose all btrfs
> features like checksum verification,... and even if these patches you
> mention make it work on block devices, I'd guess it would at least need
> to read everything, it would no longer be a merge into another
> filesystem (perhaps I shouldn't have written "clone")... and the target
> block device would need to be at least the size of the origin.

An rsync on block devices wouldn't lose BTRFS checksums, you could run a
scrub on the target at any time to verify them. For a dd or anything based
on that the target needs to be at least as big as the source. But typical
use of BTRFS for backup devices tends to result in keeping as many
snapshots as possible without running out of space, which means that no
matter how you were to copy it the target would need to be as big.

> Can't one do something like the following:
>
> 1) The source fs has several snapshots and subvols.
>    The target fs is empty (the first time).
>
> For the first time populating the target fs:
>
> 2) Make ro snapshots of all non-ro snapshots and subvols on the source fs.
>
> 3) Send/receive the first of the ro snapshots to the target fs, with no
>    parent and no clone-src.
>
> 4) Send/receive all further ro snapshots to the target fs, with no
>    parents, but each time specifying one further clone-src (i.e. all that
>    have already been sent/received) so that they're used for reflinks and
>    so on.
>
> 5) At the end somehow make rw subvols from the snapshots/subvols that had
>    previously been rw (how?).

Sure, for 5 I believe you can make a rw snapshot of a ro subvol.

> Does that sound as if it would work like that? Especially, would it
> preserve all the reflink statuses and everything else (sparse files,
> etc.)?

Yes, but it would take a bit of scripting work.

> Some additional questions:
> a) Can btrfs send change anything(!) on the source fs?
> b) Can one abort (Ctrl-C) a send and/or receive... and make it continue
>    at the same place where it was stopped?

A yes, B I don't know.

Also I'm not personally inclined to trust send/recv at this time. I don't
think it's had a lot of testing.

--
My Main Blog         http://etbe.coker.com.au/
My Documents Blog    http://doc.coker.com.au/

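For step 5, a quick sketch (paths are placeholders): taking a writable
snapshot of a received read-only snapshot gives back an rw subvolume, e.g.

  $ btrfs subvolume snapshot /mnt/dst/foo-2015-04 /mnt/dst/foo
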
* Re: how to clone a btrfs filesystem
From: Duncan @ 2015-04-19 3:56 UTC
To: linux-btrfs

Russell Coker posted on Sun, 19 Apr 2015 01:02:42 +1000 as excerpted:

>> Some additional questions:
>> a) Can btrfs send change anything(!) on the source fs?
>> b) Can one abort (Ctrl-C) a send and/or receive... and make it continue
>>    at the same place where it was stopped?
>
> A yes, B I don't know.

More directly on A, btrfs send creates a read-only snapshot and sends it,
so the filesystem isn't changing out from under it as it sends. So that's
what it changes on the source filesystem. AFAIK, nothing else.

For B, my use-case doesn't include send/receive, so I don't know,
directly, either. But due to the way it works, assuming an aborted
send/receive doesn't get automatically cleaned up (I simply don't know
that, and I'm not bothering to look it up, but it should be documented if
it does), it should at minimum be possible to include the aborted version
as a parent or reference on each end, such that if any data was sent in
the aborted send/receive, it shouldn't have to be sent again; only a
metadata reference to it will need to be sent.

> Also I'm not personally inclined to trust send/recv at this time. I
> don't think it's had a lot of testing.

Based on posts from people using it here, as well as watching the patches,
etc, going by, I'd say that given a send/receive that has completed
without error, it should be reliably golden. There continue to be various
corner-case bugs where it doesn't always work, but in that case it should
reliably error out on one side or the other.

A very simple example of the type of corner-case that still causes
problems (tho this one should work; it's the much more complex ones that
sometimes don't) is where the original filesystem had /dir/suba/subb/, but
that nesting was reversed to /dir/subb/suba/. That's the general sort of
thing that can still cause problems, altho a simple example shouldn't, but
obviously, it can only be a problem on later incrementals that reference a
parent with a tree that has "gone inside out", so to speak, since the
parent send. But again, if the process works without error on both sides,
then the result should be golden, barring of course a serious regression
bug.

--
Duncan - List replies preferred.  No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman

* Re: how to clone a btrfs filesystem
From: Christoph Anton Mitterer @ 2015-04-19 20:00 UTC
To: linux-btrfs

On Sun, 2015-04-19 at 01:02 +1000, Russell Coker wrote:
> An rsync on block devices wouldn't lose BTRFS checksums, you could run a
> scrub on the target at any time to verify them. For a dd or anything
> based on that the target needs to be at least as big as the source. But
> typical use of BTRFS for backup devices tends to result in keeping as
> many snapshots as possible without running out of space, which means
> that no matter how you were to copy it the target would need to be as
> big.

Hmm, all not really satisfying.

> > Can't one do something like the following:
> >
> > 1) The source fs has several snapshots and subvols.
> >    The target fs is empty (the first time).
> >
> > For the first time populating the target fs:
> >
> > 2) Make ro snapshots of all non-ro snapshots and subvols on the
> >    source fs.
> >
> > 3) Send/receive the first of the ro snapshots to the target fs, with
> >    no parent and no clone-src.
> >
> > 4) Send/receive all further ro snapshots to the target fs, with no
> >    parents, but each time specifying one further clone-src (i.e. all
> >    that have already been sent/received) so that they're used for
> >    reflinks and so on.
> >
> > 5) At the end somehow make rw subvols from the snapshots/subvols that
> >    had previously been rw (how?).
>
> Sure, for 5 I believe you can make a rw snapshot of a ro subvol.

So, is clone-src really what I think it is? Or better, what exactly are
the parent and clone-srcs and how do they work?

AFAIU the clone-src is a ro snapshot on the SOURCE(!) fs that has
previously been sent to the target fs (thus it's there as well). So for
the snapshot that is sent "now" it looks for any blocks that are also in
the SOURCE fs version of clone-src, and for those it only sends
reflinks... and on the target fs these are again used. So I'd guess each
of these blocks has some kind of ID that is sent?

But that already sounds quite similar to what I'd expect parent to be:

When I send a ro snapshot the first time, e.g.

$ btrfs subvolume create /src-btrfs/foo
# do stuff in foo
# first-time backup of foo:
$ btrfs subvolume snapshot -r /src-btrfs/foo /src-btrfs/snapshots/foo-2015-01
$ btrfs send /src-btrfs/snapshots/foo-2015-01 | btrfs receive /trgt-btrfs/
# do more stuff in foo
# second backup of foo:
$ btrfs subvolume snapshot -r /src-btrfs/foo /src-btrfs/snapshots/foo-2015-04
$ btrfs send -p /src-btrfs/snapshots/foo-2015-01 \
      /src-btrfs/snapshots/foo-2015-04 | btrfs receive /trgt-btrfs/
# then it would know that foo-2015-01 is already on /trgt-btrfs/ and
# only send the differences between those two?
# I guess the UUID and gen of foo-2015-01 would be the same on both
# sides? And foo-2015-04 would then have the UUID of foo-2015-01 as
# parent on the target side?

Now if I make another subvolume on the source, which shares *some* refs
with foo, e.g.

$ btrfs subvolume create /src-btrfs/bar
# do stuff in bar, or e.g. let bar be another ro or rw snapshot of
# foo that has been modified
# Then I'd use clone sources to transfer this data efficiently?
$ btrfs send -c /src-btrfs/snapshots/foo-* /src-btrfs/bar | btrfs receive /trgt-btrfs/

My actual situation is this:
I have a btrfs fs with one non-snapshot subvolume (i.e. the root
subvolume) and a number of snapshots, e.g.
2015-01
2015-02
2015-03
All being ro snapshots of the root subvol.
And I want to get all these snapshots copied over to the target fs (i.e.
I want the snapshot history preserved).

ATM, I do a send/receive of the first snapshot I've made, like:
$ btrfs send /src-btrfs/snapshots/2015-01/ | btrfs receive /trgt-btrfs/
Afterwards I'd expect 2015-01 to be "copied" over.

How would I continue with -02 and -03? Using -01 as parent? Or as
clone-src?

(Did I mention before that the overall documentation seems to be really
in a... "suboptimal" state? :-( )

> > Does that sound as if it would work like that? Especially, would it
> > preserve all the reflink statuses and everything else (sparse files,
> > etc.)?
>
> Yes, but it would take a bit of scripting work.

What do you mean? To automate it for an arbitrary number of subvols?

> > Some additional questions:
> > a) Can btrfs send change anything(!) on the source fs?
> > b) Can one abort (Ctrl-C) a send and/or receive... and make it continue
> >    at the same place where it was stopped?
>
> A yes

Mhh, then it's really weird that it's allowed to work on a ro-mounted
fs?! Doesn't look that trustworthy :-(

> , B I don't know.

Yeah, as posted somewhere above, I found out myself in the meantime that
one cannot even temporarily stop such a process without it failing.

> Also I'm not personally inclined to trust send/recv at this time. I
> don't think it's had a lot of testing.

Moreover, it seems to have rather bad performance (though I haven't done
any thorough testing/measuring).

Right now I send/receive a first subvolume from one HDD to another, the
second being a fresh btrfs, both filesystems mounted with -o compress.
Both filesystems are on top of dm-crypt, each on a SATA 8 TiB HDD
connected via USB (the source via USB 3, the target via USB 2).
The system is rather powerful: 8 cores, 16 GiB RAM.

Most of the time only one of the two disks shows activity, either the one
reading or the one writing.
The CPU isn't really under full load either (not even a single core),...
it's rather at 5-10% utilisation.
The resulting transfer speed is about 21.5 MB/s (not MiB).
Seems to be pretty low...

Also, the system freezes up all the time for some seconds... so there is
probably some IO scheduling issue.

Further, when I run e.g. btrfs filesystem show, it actually prints the
info for both, but afterwards it seems to hang forever.

Thanks,
Chris.

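For the situation above, a sketch of how the remaining snapshots could be
transferred incrementally, assuming 2015-01 has already been received on
the target (paths follow the names listed above and are placeholders):

  # use the previous snapshot, present on both sides, as the parent
  $ btrfs send -p /src-btrfs/snapshots/2015-01 /src-btrfs/snapshots/2015-02 \
        | btrfs receive /trgt-btrfs/
  $ btrfs send -p /src-btrfs/snapshots/2015-02 /src-btrfs/snapshots/2015-03 \
        | btrfs receive /trgt-btrfs/
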
* Re: how to clone a btrfs filesystem
From: Duncan @ 2015-04-20 5:23 UTC
To: linux-btrfs

Christoph Anton Mitterer posted on Sun, 19 Apr 2015 22:00:24 +0200 as
excerpted:

> Most of the time only one of the two disks shows activity, either the
> one reading or the one writing.
> The CPU isn't really under full load either (not even a single core),...
> it's rather at 5-10% utilisation.
> The resulting transfer speed is about 21.5 MB/s (not MiB).
> Seems to be pretty low...

In general, btrfs hasn't really been optimized yet. One of the more
obvious cases of this is the btrfs raid1 mode read case, where the
read-scheduling algorithm is simple even/odd based on the PID, which means
a single read thread will bottleneck on a single device, even if the other
one is totally idle. That's the biggest "forget optimization, 50%
utilization is the best-case as we're not yet optimizing" example out
there, AFAIK.

Which, given the common developer wisdom about premature optimization, can
be explained. But accepting that explanation, one is still stymied by the
fact that all the previous warnings about btrfs being in heavy
development, keep good backups and be prepared to use them, etc, are being
stripped, because btrfs is supposedly ready for normal use now. But if
it's ready for normal use, why isn't it optimized for normal use, then?
And if it's not ready for normal use, why are the warnings actually
telling people that being prematurely stripped?

IMO, therefore, a major btrfs development milestone will be when they
decide development has stabilized enough that it's time to actually
optimize these things for production use, and that doing so is no longer
"premature optimization", because the filesystem is mature and its methods
stable enough that optimization is no longer premature.

Once /that/ happens, arguably then, and /only/ then, can /real/ discussion
start on whether/when btrfs is /truly/ mature and stable. Until that
optimization, then, the clear answer is that btrfs is not yet stable,
because developers are demonstrating, by their failure to optimize, that
they don't consider the filesystem mature and stable enough for it yet,
such that any major effort at optimization (as opposed to simply taking
the opportunity when it presents itself as the most sensible option) is
premature.

--
Duncan - List replies preferred.  No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman

* Re: how to clone a btrfs filesystem
From: Christoph Anton Mitterer @ 2015-04-20 16:32 UTC
To: linux-btrfs

On Mon, 2015-04-20 at 05:23 +0000, Duncan wrote:
> Which, given the common developer wisdom about premature optimization,
> can be explained. But accepting that explanation, one is still stymied
> by the fact that all the previous warnings about btrfs being in heavy
> development, keep good backups and be prepared to use them, etc, are
> being stripped, because btrfs is supposedly ready for normal use now.
> But if it's ready for normal use, why isn't it optimized for normal use,
> then? And if it's not ready for normal use, why are the warnings
> actually telling people that being prematurely stripped?

Well, I've mentioned it several times already: the bad state of the
documentation. That is, what can be expected (e.g. in terms of "Can I
interrupt a fsck or will this be bad?" and so on), which functionalities
are considered stable and which not, practical use cases and more
in-depth (but still abstract, i.e. not at the developer/code level)
guidance, when one should use nodatacow and which implications that has.
All these and much more missing pieces of documentation are currently one
of the biggest obstacles in the way of using btrfs.

And this kind of documentation needs to be "definitive", i.e. written by
the (main) developers and not "just" list regulars.

Cheers,
Chris.

* Re: how to clone a btrfs filesystem
From: Martin Steigerwald @ 2015-04-18 8:10 UTC
To: Christoph Anton Mitterer; +Cc: linux-btrfs

On Saturday, 18 April 2015, 01:08:44, Christoph Anton Mitterer wrote:
> Hey.

Hi Christoph,

> I've seen that this has been asked some times before, and there are
> stackoverflow/etc. questions on that, but none with a really good
> answer.
>
> How can I best copy one btrfs filesystem (with snapshots and subvolumes)
> into another, especially with keeping the CoW/reflink status of all
> files?
> And ideally incrementally upgrade it later (again with all
> snapshots/subvols, and again not losing the shared blocks between these
> files).
>
> send/receive apparently works on just one subvolume at a time,... and
> documentation is quite sparse :-/

To make it short and simple: I am not aware of any out-of-the-box
solution for this use case. And I think that is just why you didn't find
any.

I want to buy a new backup harddisk sometime in the future, and ideally
transfer the contents of the current one with all subvolumes and
snapshots. But except for some old backups that I have only there and
whose sources I no longer have, I think I will just start from scratch
and let it collect its own snapshots.

That said, I think it can be scripted. But I am not aware of anyone
having done this. I may be missing something, so maybe someone on the
list has a recommendation.

Ciao,
--
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7

* Re: how to clone a btrfs filesystem
From: Paul Harvey @ 2015-05-07 5:14 UTC
To: Martin Steigerwald; +Cc: Christoph Anton Mitterer, linux-btrfs, russell

Sorry I'm late to this conversation...

On 18 April 2015 at 18:10, Martin Steigerwald <martin@lichtvoll.de> wrote:
> That said, I think it can be scripted. But I am not aware of anyone
> having done this. I may be missing something, so maybe someone on the
> list has a recommendation.

I thought I'd mention that I have developed a script [1] which is part of
snazzer [2] and which at least transports all snapshots of all subvolumes
on all mounted btrfs filesystems (or a subset thereof), using send -p
where possible, between two local filesystems or to a remote host via
ssh. That is, of course, assuming your snapshots are named and located
according to the idiomatic convention expected by snazzer-receive [1],
which is unlikely, I admit. Everybody uses btrfs slightly differently.

Regarding comments in this thread that btrfs send/receive hasn't been
well tested and optimized: I've already found one inconsistency myself
[3], so I thought I'd just contribute that I've also got a "snapshot
measurement" script [4] which will give you a reliable, reproducible
sha512sum and PGP signature report. This report contains a shell
"one-liner" (!) consisting of only standard GNU core utils, which you can
run at any point to reproduce the measurement or verify that the PGP
signature still matches the filesystem under the path.

snazzer-measure [4] should give consistent sha512sums and PGP signature
verification regardless of whether the snapshot path happens to live on a
btrfs filesystem or not, so you can maintain a chain of integrity all the
way up to and beyond the point you move your snapshots onto non-btrfs
filesystems, if you so wish.

[1] https://github.com/csirac2/snazzer/blob/master/doc/snazzer-receive.md
[2] https://github.com/csirac2/snazzer
[3] https://bugzilla.kernel.org/show_bug.cgi?id=95201
[4] https://github.com/csirac2/snazzer/blob/master/doc/snazzer-measure.md

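The exact report format is snazzer's own, but a rough sketch of the
underlying idea, hashing file contents in a stable path order so the
result is reproducible on or off btrfs (the path is a placeholder, and
metadata/xattrs are not covered by this simplification):

  $ cd /path/to/snapshot && find . -type f -print0 | sort -z \
        | xargs -0 sha512sum | sha512sum
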
* Re: how to clone a btrfs filesystem
From: Martin Steigerwald @ 2015-05-07 18:57 UTC
To: Paul Harvey; +Cc: Christoph Anton Mitterer, linux-btrfs, russell

On Thursday, 7 May 2015, 15:14:11, Paul Harvey wrote:
> Sorry I'm late to this conversation...
>
> On 18 April 2015 at 18:10, Martin Steigerwald <martin@lichtvoll.de> wrote:
> > That said, I think it can be scripted. But I am not aware of anyone
> > having done this. I may be missing something, so maybe someone on the
> > list has a recommendation.
>
> I thought I'd mention that I have developed a script [1] which is part
> of snazzer [2] and which at least transports all snapshots of all
> subvolumes on all mounted btrfs filesystems (or a subset thereof), using
> send -p where possible, between two local filesystems or to a remote
> host via ssh. That is, of course, assuming your snapshots are named and
> located according to the idiomatic convention expected by
> snazzer-receive [1], which is unlikely, I admit. Everybody uses btrfs
> slightly differently.

Thank you for the hint to your tool, it sounds interesting. I am not sure
whether it would work with my setup of having snapshots in the root
subvol, but then setting another subvol as default to hide the snapshots
behind a mount of the root subvol, like this:

LABEL=home  /home            btrfs  noatime,space_cache,compress=lzo             0 0
LABEL=home  /mnt/home-snaps  btrfs  noatime,space_cache,compress=lzo,subvolid=5  0 0

I know that space_cache is not needed after the first mount with that
option (that's why I think it does not make much sense to have this as a
mount option in the first place, but rather as a property of the
filesystem).

Thanks,
--
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7

* Re: how to clone a btrfs filesystem
From: Paul Harvey @ 2015-05-08 3:22 UTC
To: Martin Steigerwald; +Cc: linux-btrfs

That's useful to know, thanks! Perhaps I'll add a configuration option
for the user to specify where their snapshots should live relative to the
subvolume; it seems like an easy addition to have. I've been meaning to
deal with this anyway, somewhat related to
https://github.com/csirac2/snazzer/issues/2

On 8 May 2015 at 04:57, Martin Steigerwald <martin@lichtvoll.de> wrote:
> On Thursday, 7 May 2015, 15:14:11, Paul Harvey wrote:
>> Sorry I'm late to this conversation...
>>
>> On 18 April 2015 at 18:10, Martin Steigerwald <martin@lichtvoll.de> wrote:
>> > That said, I think it can be scripted. But I am not aware of anyone
>> > having done this. I may be missing something, so maybe someone on the
>> > list has a recommendation.
>>
>> I thought I'd mention that I have developed a script [1] which is part
>> of snazzer [2] and which at least transports all snapshots of all
>> subvolumes on all mounted btrfs filesystems (or a subset thereof),
>> using send -p where possible, between two local filesystems or to a
>> remote host via ssh. That is, of course, assuming your snapshots are
>> named and located according to the idiomatic convention expected by
>> snazzer-receive [1], which is unlikely, I admit. Everybody uses btrfs
>> slightly differently.
>
> Thank you for the hint to your tool, it sounds interesting. I am not sure
> whether it would work with my setup of having snapshots in the root
> subvol, but then setting another subvol as default to hide the snapshots
> behind a mount of the root subvol, like this:
>
> LABEL=home  /home            btrfs  noatime,space_cache,compress=lzo             0 0
> LABEL=home  /mnt/home-snaps  btrfs  noatime,space_cache,compress=lzo,subvolid=5  0 0
>
> I know that space_cache is not needed after the first mount with that
> option (that's why I think it does not make much sense to have this as a
> mount option in the first place, but rather as a property of the
> filesystem).
>
> Thanks,
> --
> Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
> GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7

* Re: how to clone a btrfs filesystem
From: Kai Krakow @ 2015-04-18 16:09 UTC
To: linux-btrfs

Christoph Anton Mitterer <calestyo@scientia.net> wrote:

> Hey.
>
> I've seen that this has been asked some times before, and there are
> stackoverflow/etc. questions on that, but none with a really good
> answer.
>
> How can I best copy one btrfs filesystem (with snapshots and subvolumes)
> into another, especially with keeping the CoW/reflink status of all
> files?
> And ideally incrementally upgrade it later (again with all
> snapshots/subvols, and again not losing the shared blocks between these
> files).
>
> send/receive apparently works on just one subvolume at a time,... and
> documentation is quite sparse :-/

You could simply "btrfs device add" the new device, then "btrfs device
del" the old device...

It won't create a 1:1 clone, if that is your intention. But it will
migrate all your data over to the new device (even a bigger/smaller one),
keeping shared extents (CoW/reflink status), generation numbers, the
filesystem UUID, the label, each subvolume, etc... And it can be done
while the system is running.

It looks like this way fulfills all your requirements. The same way you
can later incrementally update it again.

--
Replies to list only preferred.

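A sketch of that migration path, with placeholder device names and mount
point (note it moves the data rather than copying it; once the removal
finishes, the old device no longer carries the filesystem):

  # add the new device, then remove the old one; "device delete" relocates
  # all chunks onto the remaining device(s) before releasing the old disk
  $ btrfs device add /dev/new /mnt
  $ btrfs device delete /dev/old /mnt
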
* Re: how to clone a btrfs filesystem
From: Kai Krakow @ 2015-04-18 16:23 UTC
To: linux-btrfs

Kai Krakow <hurikhan77@gmail.com> wrote:

> Christoph Anton Mitterer <calestyo@scientia.net> wrote:
>
>> Hey.
>>
>> I've seen that this has been asked some times before, and there are
>> stackoverflow/etc. questions on that, but none with a really good
>> answer.
>>
>> How can I best copy one btrfs filesystem (with snapshots and
>> subvolumes) into another, especially with keeping the CoW/reflink
>> status of all files?
>> And ideally incrementally upgrade it later (again with all
>> snapshots/subvols, and again not losing the shared blocks between these
>> files).
>>
>> send/receive apparently works on just one subvolume at a time,... and
>> documentation is quite sparse :-/
>
> You could simply "btrfs device add" the new device, then "btrfs device
> del" the old device...
>
> It won't create a 1:1 clone, if that is your intention. But it will
> migrate all your data over to the new device (even a bigger/smaller
> one), keeping shared extents (CoW/reflink status), generation numbers,
> the filesystem UUID, the label, each subvolume, etc... And it can be
> done while the system is running.
>
> It looks like this way fulfills all your requirements. The same way you
> can later incrementally update it again.

BTW: I've done that when migrating my 3-device btrfs pool to bcache. I
simply removed one device, reformatted it with bcache, then added it
back. I repeated those steps for each device. It took a while (like 24
hours) but it worked. In the end I just did a balance to rebalance all 3
devices. That last step is not needed if you are migrating only one
device.

As a safety measure I did a backup first, then ran "btrfs check" to
ensure filesystem integrity, and booted into the rescue shell to have
only a minimal set of processes running. Due to a bug in the Intel
graphics stack the system froze sometimes, and I didn't want that to
happen during data migration.

--
Replies to list only preferred.

* Re: how to clone a btrfs filesystem
From: Chris Murphy @ 2015-04-18 16:23 UTC
To: Kai Krakow; +Cc: Btrfs BTRFS

On Sat, Apr 18, 2015 at 10:09 AM, Kai Krakow <hurikhan77@gmail.com> wrote:
> You could simply "btrfs device add" the new device, then "btrfs device
> del" the old device...

That wipes the btrfs signature (maybe the entire superblock, I'm not
sure) from the deleted device. It needs to be a seed device first to
prevent that, which makes it ro.

--
Chris Murphy

* Re: how to clone a btrfs filesystem
From: Kai Krakow @ 2015-04-18 16:36 UTC
To: linux-btrfs

Chris Murphy <lists@colorremedies.com> wrote:

> On Sat, Apr 18, 2015 at 10:09 AM, Kai Krakow <hurikhan77@gmail.com> wrote:
>
>> You could simply "btrfs device add" the new device, then "btrfs device
>> del" the old device...
>
> That wipes the btrfs signature (maybe the entire superblock, I'm not
> sure) from the deleted device. It needs to be a seed device first to
> prevent that, which makes it ro.

Yeah, I figured I forgot about the "copy" requirement Christoph
mentioned... My suggestion only works for cloning if you want to actually
migrate from the old to the new device, and no longer use the old one.

I wonder if one could split mirrors in btrfs... Read:

btrfs device add the new device, set the raid policy for data, metadata,
and system to raid-1, balance, and then unmount and detach one of the
devices.

I'm not sure how to get out of the degraded state then. Is it possible to
simply revert from raid-1 to the single raid policy again and remove the
missing device from the pool? Data-wise, each device should contain
everything needed for running the filesystem, so there should be no
problem there.

I guess there's one caveat: the signature of both devices will then
indicate that they belong to the same pool, making it impossible to ever
attach them to the same system again without causing trouble for your
data. If one could change that to make both devices distinct filesystems,
it could be used to implement a "btrfs filesystem clone" call.

--
Replies to list only preferred.

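A sketch of the mirror-building half of that idea, with placeholder
device names and mount point (how to cleanly split the mirror afterwards
is the open question discussed below):

  # add the second device and mirror data and metadata across both
  $ btrfs device add /dev/new /mnt
  $ btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt
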
* Re: how to clone a btrfs filesystem
From: Chris Murphy @ 2015-04-19 20:33 UTC
To: Btrfs BTRFS

On Sat, Apr 18, 2015 at 10:36 AM, Kai Krakow <hurikhan77@gmail.com> wrote:
> I wonder if one could split mirrors in btrfs... Read:
>
> btrfs device add the new device, set the raid policy for data, metadata,
> and system to raid-1, balance, and then unmount and detach one of the
> devices.
>
> I'm not sure how to get out of the degraded state then. Is it possible
> to simply revert from raid-1 to the single raid policy again and remove
> the missing device from the pool?

Not currently. It requires a balance -dconvert=single -mconvert=single/DUP
--force, and it's not efficient: it rewrites the whole volume. It cannot
be paused, canceled or otherwise interrupted without losing the ability
to ever write to the volume again.

> I guess there's one caveat: the signature of both devices will then
> indicate that they belong to the same pool, making it impossible to ever
> attach them to the same system again without causing trouble for your
> data. If one could change that to make both devices distinct
> filesystems, it could be used to implement a "btrfs filesystem clone"
> call.

Like ext4 and XFS metadata checksums, the fs UUID is found throughout the
fs metadata. So "breaking" a raid1 back to single ought to include (and
depend on) a feature to change UUIDs in metadata. That code already
exists somewhere, because the seed-device sequence (create a btrfs seed
device, add a new device, delete the seed device) causes the source UUID
to be replaced with the destination UUID in the course of migrating the
data. So it can be done; there just isn't a general-purpose method of
rewriting only metadata.

--
Chris Murphy

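A sketch of the convert-back step described there, with placeholder
device names (the -f is needed because metadata redundancy is being
reduced; as noted above, interrupting this is risky):

  # mount the remaining device degraded, convert profiles back to
  # single/DUP, then drop the record of the detached device
  $ mount -o degraded /dev/remaining /mnt
  $ btrfs balance start -f -dconvert=single -mconvert=dup /mnt
  $ btrfs device delete missing /mnt
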
* Re: how to clone a btrfs filesystem
From: Chris Murphy @ 2015-04-18 16:20 UTC
To: Christoph Anton Mitterer; +Cc: linux-btrfs

On Fri, Apr 17, 2015 at 5:08 PM, Christoph Anton Mitterer
<calestyo@scientia.net> wrote:
> Hey.
>
> I've seen that this has been asked some times before, and there are
> stackoverflow/etc. questions on that, but none with a really good
> answer.
>
> How can I best copy one btrfs filesystem (with snapshots and subvolumes)
> into another, especially with keeping the CoW/reflink status of all
> files?
> And ideally incrementally upgrade it later (again with all
> snapshots/subvols, and again not losing the shared blocks between these
> files).

Make the source a seed device, add the new device, delete the seed. Once
that completes, unmount, unset btrfs seed, and now the two devices are
separate fs volumes, each with a unique UUID. There may still be bugs
with seed devices; it's been maybe 6 months since I last checked.

--
Chris Murphy

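A sketch of that sequence with placeholder device names; treat it as
illustrative only, given the caveat above about possible seed-device
bugs:

  # flag the source as a seed (it becomes a read-only member)
  $ btrfstune -S 1 /dev/src
  $ mount /dev/src /mnt
  $ btrfs device add /dev/dst /mnt
  $ mount -o remount,rw /mnt
  # deleting the seed migrates all data onto the new device
  $ btrfs device delete /dev/src /mnt
  $ umount /mnt
  # afterwards, clear the seed flag so the original is writable on its own
  $ btrfstune -f -S 0 /dev/src
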
* Re: how to clone a btrfs filesystem
From: Christoph Anton Mitterer @ 2015-04-18 17:23 UTC
To: Chris Murphy; +Cc: linux-btrfs

On Sat, 2015-04-18 at 10:20 -0600, Chris Murphy wrote:
> Make the source a seed device, add the new device, delete the seed. Once
> that completes, unmount, unset btrfs seed, and now the two devices are
> separate fs volumes, each with a unique UUID. There may still be bugs
> with seed devices; it's been maybe 6 months since I last checked.

Hmm, but that also modifies the source fs, at least the seed-device part,
right? Given that all this apparently still contains bugs, I'd rather
have kept the source fs read-only.

Also, AFAIU this would only work once then.... and I wouldn't be able to
do incremental backups, would I?

Cheers,
Chris.