From: Evert Vorster <evorster@gmail.com>
To: Stephane Chazelas <stephane.chazelas@gmail.com>
Cc: cwillu <cwillu@cwillu.com>, linux-btrfs@vger.kernel.org
Subject: Re: cloning single-device btrfs file system onto multi-device one
Date: Tue, 5 Apr 2011 22:45:42 -0700 [thread overview]
Message-ID: <BANLkTimQaPNm22kN8g2ha+XrOP3uyBUE-Q@mail.gmail.com> (raw)
In-Reply-To: <chaz20110328131748.GA18131@seebyte.com>
Hi there.
From my limited understanding, btrfs will write metadata in raid1 by
default. So, this could be where your 2TB has gone.
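To put rough numbers on that, here is a back-of-envelope sketch using the figures from your "btrfs fi df" output quoted below (assuming RAID1 stores every metadata block twice and RAID0 data once):

```shell
#!/bin/sh
# Raw-usage check against the quoted "btrfs fi df" figures.
data_gb=3492                       # Data, RAID0: 3.41 TB ~= 3492 GB, one copy
meta_gb=36                         # Metadata, RAID1: 35.25 GB, rounded up
raw_gb=$((data_gb + 2 * meta_gb))  # RAID1 doubles the metadata on disk
echo "${raw_gb} GB"
```

That lands close to the 3.5T "Used" figure that df reports.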
I am assuming you used raid0 for the three new disks?
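If you're not sure which profiles ended up on the new array, "btrfs fi df" says. A quick way to pull the metadata profile out of that output (shown here against the text quoted below; on a live system you would feed it `btrfs fi df /mnt` instead):

```shell
#!/bin/sh
# Extract the allocation profile of the Metadata chunks from btrfs fi df output.
fi_df='Data, RAID0: total=3.41TB, used=3.41TB
System, RAID1: total=16.00MB, used=232.00KB
Metadata, RAID1: total=35.25GB, used=20.55GB'
meta_profile=$(printf '%s\n' "$fi_df" | sed -n 's/^Metadata, \([A-Z0-9]*\):.*/\1/p')
echo "$meta_profile"
```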
Also, hard-stopping a btrfs is a no-no...
Kind regards,
-Evert-
On Mon, Mar 28, 2011 at 6:17 AM, Stephane Chazelas
<stephane.chazelas@gmail.com> wrote:
> 2011-03-22 18:06:29 -0600, cwillu:
> >> > I can mount it back, but not if I reload the btrfs module, in which case I get:
>> >
>> > [ 1961.328280] Btrfs loaded
>> > [ 1961.328695] device fsid df4e5454eb7b1c23-7a68fc421060b18b devid=
1 transid 118 /dev/loop0
>> > [ 1961.329007] btrfs: failed to read the system array on loop0
>> > [ 1961.340084] btrfs: open_ctree failed
>>
> >> Did you rescan all the loop devices (btrfs dev scan /dev/loop*) after
> >> reloading the module, before trying to mount again?
>
> Thanks. That probably was the issue, that and using too big
> files on too small volumes I'd guess.
>
> I've tried it in real life and it seemed to work to some extent.
> So here is how I transferred a 6TB btrfs on one 6TB raid5 device
> (on host src) over the network onto a btrfs on 3 3TB hard drives
> (on host dst):
>
> on src:
>
> lvm snapshot -L100G -n snap /dev/VG/vol
> nbd-server 12345 /dev/VG/snap
>
> (if you're not lucky enough to have used lvm there, you can use
> nbd-server's copy-on-write feature).
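A sketch of that copy-on-write mode, not run here; the -c flag and its position follow the classic nbd-server command-line syntax, so double-check against your nbd-server(1) manpage. With copy-on-write enabled, client writes go to a per-connection diff file and the exported volume itself is never modified:

```shell
#!/bin/sh
# Hypothetical invocation (built as a string here, not executed):
# export the original volume read-mostly, diverting writes to a diff file.
cmd="nbd-server 12345 /dev/VG/vol -c"
echo "$cmd"
```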
>
> on dst:
>
> nbd-client src 12345 /dev/nbd0
> mount /dev/nbd0 /mnt
> btrfs device add /dev/sdb /dev/sdc /dev/sdd /mnt
>   # in reality it was /dev/sda4 (a little under 3TB), /dev/sdb,
>   # /dev/sdc
> btrfs device delete /dev/nbd0 /mnt
>
> That was relatively fast (about 18 hours) but failed with an
> error. Apparently, it managed to fill up the 3 3TB drives (as
> shown by btrfs fi show). Usage for /dev/nbd0 was at 16MB though
> (?!)
>
> I then did a "btrfs fi balance /mnt". I could see usage on the
> drives go down quickly. However, that was writing data onto
> /dev/nbd0 so was threatening to fill up my LVM snapshot. I then
> cancelled that by doing a hard reset on "dst" (couldn't find
> any other way). And then:
>
> Upon reboot, I mounted /dev/sdb instead of /dev/nbd0 in case
> that made a difference and then ran the
>
> btrfs device delete /dev/nbd0 /mnt
>
> again, which this time went through.
>
> I then did a btrfs fi balance again and let it run through. However here is
> what I get:
>
> $ df -h /mnt
> Filesystem            Size  Used Avail Use% Mounted on
> /dev/sdb              8.2T  3.5T  3.2T  53% /mnt
>
> Only 3.2T left. How would I reclaim the missing space?
>
> $ sudo btrfs fi show
> Label: none  uuid: ...
>         Total devices 3 FS bytes used 3.43TB
>         devid    4 size 2.73TB used 1.17TB path /dev/sdc
>         devid    3 size 2.73TB used 1.17TB path /dev/sdb
>         devid    2 size 2.70TB used 1.14TB path /dev/sda4
> $ sudo btrfs fi df /mnt
> Data, RAID0: total=3.41TB, used=3.41TB
> System, RAID1: total=16.00MB, used=232.00KB
> Metadata, RAID1: total=35.25GB, used=20.55GB
>
> So that kind of worked but that is of little use to me as 2TB
> kind of disappeared under my feet in the process.
>
> Any idea, anyone?
>
> Thanks
> Stephane
> --
> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
>
-- 
-Evert-
Thread overview: 15+ messages
2011-03-21 16:24 cloning single-device btrfs file system onto multi-device one Stephane Chazelas
2011-03-22 9:22 ` Stephane Chazelas
2011-03-23 0:06 ` cwillu
2011-03-28 13:17 ` Stephane Chazelas
2011-04-06 5:45 ` Evert Vorster [this message]
2011-04-06 6:30 ` Helmut Hullen
2011-04-06 6:57 ` Helmut Hullen
2011-04-06 8:25 ` Arne Jansen
2011-04-06 12:05 ` Stephane Chazelas
2011-04-06 13:43 ` Arne Jansen
2011-04-06 11:57 ` Stephane Chazelas
2011-03-23 5:13 ` Fajar A. Nugraha
2011-03-28 13:24 ` Stephane Chazelas
2011-03-30 11:58 ` Stephane Chazelas
2011-04-17 15:12 ` Hubert Kario