From: Hannes Schweizer <schweizer.hannes@gmail.com>
To: linux-btrfs@vger.kernel.org
Subject: Incremental send/receive broken after snapshot restore
Date: Thu, 28 Jun 2018 22:09:04 +0200	[thread overview]
Message-ID: <CAOfGOYyT2F-6jtKgpgm6iX5aK-kP1Ryuz_rcTFs=s05_FeAPoQ@mail.gmail.com> (raw)

Hi,

Here's my environment:
Linux diablo 4.17.0-gentoo #5 SMP Mon Jun 25 00:26:55 CEST 2018 x86_64
Intel(R) Core(TM) i5 CPU 760 @ 2.80GHz GenuineIntel GNU/Linux
btrfs-progs v4.17

Label: 'online'  uuid: e4dc6617-b7ed-4dfb-84a6-26e3952c8390
        Total devices 2 FS bytes used 3.16TiB
        devid    1 size 1.82TiB used 1.58TiB path /dev/mapper/online0
        devid    2 size 1.82TiB used 1.58TiB path /dev/mapper/online1
Data, RAID0: total=3.16TiB, used=3.15TiB
System, RAID0: total=16.00MiB, used=240.00KiB
Metadata, RAID0: total=7.00GiB, used=4.91GiB
GlobalReserve, single: total=512.00MiB, used=0.00B

Label: 'offline'  uuid: 5b449116-93e5-473e-aaf5-bf3097b14f29
        Total devices 2 FS bytes used 3.52TiB
        devid    1 size 5.46TiB used 3.53TiB path /dev/mapper/offline0
        devid    2 size 5.46TiB used 3.53TiB path /dev/mapper/offline1
Data, RAID1: total=3.52TiB, used=3.52TiB
System, RAID1: total=8.00MiB, used=512.00KiB
Metadata, RAID1: total=6.00GiB, used=5.11GiB
GlobalReserve, single: total=512.00MiB, used=0.00B

Label: 'external'  uuid: 8bf13621-01f0-4f09-95c7-2c157d3087d0
        Total devices 1 FS bytes used 3.65TiB
        devid    1 size 5.46TiB used 3.66TiB path /dev/mapper/luks-3c196e96-d46c-4a9c-9583-b79c707678fc
Data, single: total=3.64TiB, used=3.64TiB
System, DUP: total=32.00MiB, used=448.00KiB
Metadata, DUP: total=11.00GiB, used=9.72GiB
GlobalReserve, single: total=512.00MiB, used=0.00B


The following automatic backup scheme is in place:
hourly:
btrfs sub snap -r online/root online/root.<date>

daily:
btrfs sub snap -r online/root online/root.<new_offline_reference>
btrfs send -c online/root.<old_offline_reference> online/root.<new_offline_reference> | btrfs receive offline
btrfs sub del -c online/root.<old_offline_reference>

monthly:
btrfs sub snap -r online/root online/root.<new_external_reference>
btrfs send -c online/root.<old_external_reference> online/root.<new_external_reference> | btrfs receive external
btrfs sub del -c online/root.<old_external_reference>
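
For concreteness, the daily rotation above can be sketched as a small shell script. This is a dry-run sketch (each command is echoed rather than executed), and the snapshot names and date format are illustrative, not my actual scripts:

```shell
#!/bin/sh
# Dry-run sketch of the daily flow: print each command instead of
# running it. Snapshot names and dates are assumptions for illustration.
set -eu

SRC=online/root
DST=offline
OLD="$SRC.2018-06-27"   # previous reference snapshot (assumed name)
NEW="$SRC.2018-06-28"   # new reference snapshot (assumed name)

run() { echo "+ $*"; }  # swap the echo for real execution when ready

run btrfs subvolume snapshot -r "$SRC" "$NEW"
run "btrfs send -c $OLD $NEW | btrfs receive $DST"
run btrfs subvolume delete -c "$OLD"
```

The monthly flow is identical with DST=external and its own reference pair.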

Now here are the commands leading up to my problem.
After the online filesystem suddenly went read-only, and btrfs check
showed massive problems, I decided to start the online array from scratch:
1: mkfs.btrfs -f -d raid0 -m raid0 -L "online" /dev/mapper/online0 /dev/mapper/online1

As you can see from the backup commands above, the snapshots of
offline and external are not related, so in order to at least keep the
extensive backlog of the external snapshot set (including all
reflinks), I decided to restore the latest snapshot from external.
2: btrfs send external/root.<external_reference> | btrfs receive online

I wanted to ensure I could restart the incremental backup flow from
online to external, so I did this:
3: mv online/root.<external_reference> online/root
4: btrfs sub snap -r online/root online/root.<external_reference>
5: btrfs property set online/root ro false
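
After step 5, one can verify that the subvolume really became writable. A minimal helper, assuming `btrfs property get <path> ro` prints output of the form "ro=false" (the helper itself is mine, not part of btrfs-progs):

```shell
#!/bin/sh
# Check that a subvolume is writable after `btrfs property set ... ro false`.
# `btrfs property get <path> ro` prints "ro=true" or "ro=false".
check_writable() {
    # $1: subvolume path; succeeds if the subvolume is read-write
    [ "$(btrfs property get "$1" ro)" = "ro=false" ]
}

# usage (needs a real btrfs mount):
#   check_writable online/root && echo "online/root is writable"
```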

Now, I naively expected that simply restarting my automatic backups for
external would work.
However after running
6: btrfs sub snap -r online/root online/root.<new_external_reference>
7: btrfs send -c online/root.<old_external_reference> online/root.<new_external_reference> | btrfs receive external
I see the following error:
ERROR: unlink root/.ssh/agent-diablo-_dev_pts_3 failed. No such file or directory

That is unfortunate, but it was the second problem that actually
prompted me to post this message.
As planned, I had to start the offline array from scratch as well,
because I no longer had any reference snapshot for incremental backups
on other devices:
8: mkfs.btrfs -f -d raid1 -m raid1 -L "offline" /dev/mapper/offline0 /dev/mapper/offline1

However, restarting the automatic daily backup flow bails out with a
similar error, even though no potentially problematic previous
incremental snapshots should be involved here:
ERROR: unlink o925031-987-0/2139527549 failed. No such file or directory

I'm a bit lost now. The only thing I can imagine that might be
confusing btrfs is the residual "Received UUID" of
online/root.<external_reference> left behind after command 2.
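
For reference, that residual field can be inspected like this. The parsing helper is my own sketch; the "Received UUID" field name is what `btrfs subvolume show` prints in btrfs-progs v4.17:

```shell
#!/bin/sh
# Extract the "Received UUID" field from `btrfs subvolume show` output
# read on stdin, and print the field's value.
parse_received_uuid() {
    awk -F':[ \t]*' '/Received UUID/ { print $2 }'
}

# usage (needs a real btrfs mount):
#   btrfs subvolume show online/root | parse_received_uuid
```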
What's the recommended way to restore snapshots with send/receive
without breaking subsequent incremental backups (including reflinks of
existing backups)?

Any hints appreciated...

Thread overview: 9+ messages
2018-06-28 20:09 Hannes Schweizer [this message]
2018-06-29 17:44 ` Incremental send/receive broken after snapshot restore Andrei Borzenkov
     [not found]   ` <CAOfGOYyFcQ5gN7z=4zEaGH0VMVUuFE5qiGwgF+c14FU228Y3iQ@mail.gmail.com>
2018-06-30  6:24     ` Andrei Borzenkov
2018-06-30 17:49       ` Hannes Schweizer
2018-06-30 18:49         ` Andrei Borzenkov
2018-06-30 20:02           ` Andrei Borzenkov
2018-06-30 23:03             ` Hannes Schweizer
2018-06-30 23:16               ` Marc MERLIN
2018-07-01  4:54                 ` Andrei Borzenkov
