From: Kai Krakow <hurikhan77@gmail.com>
To: linux-btrfs@vger.kernel.org
Subject: Re: how to clone a btrfs filesystem
Date: Sat, 18 Apr 2015 18:23:07 +0200
Message-ID: <c0da0c-43d.ln1@hurikhan77.spdns.de>
In-Reply-To: <u6ca0c-12c.ln1@hurikhan77.spdns.de>

Kai Krakow <hurikhan77@gmail.com> wrote:

> Christoph Anton Mitterer <calestyo@scientia.net> wrote:
> 
>> Hey.
>> 
>> I've seen that this has been asked some times before, and there are
>> stackoverflow/etc. questions on that, but none with a really good
>> answer.
>> 
>> How can I best copy one btrfs filesystem (with snapshots and
>> subvolumes) into another, especially while keeping the CoW/reflink
>> status of all files?
>> And ideally, incrementally update it later (again with all
>> snapshots/subvols, and again without losing the shared blocks between
>> these files).
>> 
>> send/receive apparently works for just one subvolume at a time,... and
>> the documentation is quite sparse :-/
> 
> You could simply "btrfs device add" the new device, then "btrfs device
> delete" the old device...
> 
> It won't create a 1:1 clone, if that is your intention. But it will
> migrate all your data over to the new device (even a bigger/smaller one),
> keeping shared extents (CoW/reflink status), generation numbers, the
> filesystem UUID, the label, each subvolume, etc... And it can be done
> while the system is running.
> 
> It looks like this approach fulfills all your requirements. In the same
> way, you can later incrementally update it again.
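
For reference, the basic sequence would look roughly like this (a minimal
sketch; /dev/sdb as the new device, /dev/sda as the old one, and /mnt as
the mount point are just example names, adjust them to your setup):

  # btrfs device add /dev/sdb /mnt
  # btrfs device delete /dev/sda /mnt

The delete step triggers the actual data migration, so it can take a long
time: btrfs has to move every extent off the old device first.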

BTW: I've done that when migrating my 3-device btrfs pool to bcache. I
simply removed one device from the pool, reformatted it as a bcache
backing device, then added it back. I repeated those steps for each
device. It took a while (around 24 hours) but it worked. In the end I ran
a balance to spread the data evenly across all 3 devices again. That last
step is not needed if you are migrating only one device.
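
From memory, the per-device cycle looked roughly like this (a sketch only;
the device names are examples, and the exact bcache setup depends on your
version of bcache-tools):

  # btrfs device delete /dev/sdb /mnt    (moves all data off the device)
  # make-bcache -B /dev/sdb              (formats it as a bcache backing device)
  # btrfs device add /dev/bcache0 /mnt   (adds it back through the bcache layer)

Repeat that for the remaining devices, then finally:

  # btrfs balance start /mnt             (rebalances data across all devices)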

As a safety measure I made a backup first, then ran "btrfs check" to
verify filesystem integrity, and booted into the rescue shell so that
only a minimal set of processes was running. Due to a bug in the Intel
graphics stack the system would sometimes freeze, and I didn't want that
to happen during the data migration.
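
If you want to do the same, the check itself is as simple as this (the
device name is an example; btrfs check must run on an unmounted
filesystem, and it is read-only by default):

  # umount /mnt
  # btrfs check /dev/sda
  # mount /dev/sda /mnt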

-- 
Replies to list only preferred.

