From: L A Walsh <xfs@tlinx.org>
To: Dave Chinner <david@fromorbit.com>
Cc: linux-xfs <linux-xfs@vger.kernel.org>
Subject: Re: cause of xfsdump msg: root ino 192 differs from mount dir ino 256
Date: Mon, 01 Nov 2021 21:45:18 -0700
Message-ID: <6180C25E.7030901@tlinx.org>
In-Reply-To: <20211101211244.GC449541@dread.disaster.area>

The restore finished; the beginning of the log is:
xfsrestore: using file dump (drive_simple) strategy
xfsrestore: version 3.1.8 (dump format 3.0)
xfsrestore: searching media for dump
xfsrestore: examining media file 0
xfsrestore: dump description: 
xfsrestore: hostname: Ishtar
xfsrestore: mount point: /home
xfsrestore: volume: /dev/Space/Home2
xfsrestore: session time: Mon Nov  1 07:37:47 2021
xfsrestore: level: 0
xfsrestore: session label: "home"
xfsrestore: media label: ""
xfsrestore: file system id: 5f41265a-3114-fb3c-2020-082214061852
xfsrestore: session id: 586026b8-5947-4b95-a213-1532ba25f503
xfsrestore: media id: 5fb4cd58-5cc9-4678-9829-a6539588a170
xfsrestore: searching media for directory dump
xfsrestore: reading directories
xfsrestore: status at 18:21:14: 1289405/1338497 directories reconstructed, 96.3% complete, 13840475 directory entries processed, 60 seconds elapsed
xfsrestore: 1338497 directories and 14357961 entries processed
xfsrestore: directory post-processing
xfsrestore: restoring non-directory files
xfsrestore: NOTE: ino 259 salvaging file, placing in orphanage/256.0/root+usr+var_copies/20210316/usr/lib/mono/gac/System.Reactive.Runtime.Remoting/2.2.0.0__31bf3856ad364e35/System.Reactive.Runtime.Remoting.dll
...
There are a bunch of lines like that; 'wc' on the file shows:

> wc /tmp/xfsrestore.log 
  5320822  50100130 821050625 /tmp/xfsrestore.log

Then the end of the file looks like:

xfsrestore: NOTE: ino 8485912415 salvaging file, placing in orphanage/256.0/tools/samba/samba-4.14.2/third_party/resolv_wrapper/wscript
xfsrestore: WARNING: unable to rmdir /nhome/./orphanage: Directory not empty
xfsrestore: restore complete: 7643 seconds elapsed
xfsrestore: Restore Summary:
xfsrestore:   stream 0 /backups/ishtar/home/home-211101-0-0737.dump OK (success)
xfsrestore: Restore Status: SUCCESS

The lines between the beginning and end appear to be a list of
files, in increasing inode order, as each was placed into the
orphanage, all under 256.0 (presumably the mount dir ino 256
that the dump NOTE below complains about).
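
If everything really did land under the orphanage, moving it back
into place should be roughly the following (an untested sketch;
'256.0' is taken from the NOTE lines above, /nhome is the restore
target from the rmdir WARNING below, and the glob skips dotfiles):

> mv /nhome/orphanage/256.0/* /nhome/
> rmdir /nhome/orphanage/256.0 /nhome/orphanage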

The restored file system appears to be slightly larger, but
that's likely because I cleared off some garbage from the
current home.

Ah, the xfsdump just finished:

>/root/bin/dump1fs#160(Xfsdump)> xfsdump -b 268435456 -l 0 -L home -e - /home
xfsdump: using file dump (drive_simple) strategy
xfsdump: version 3.1.8 (dump format 3.0)
xfsdump: level 0 dump of Ishtar:/home
xfsdump: dump date: Mon Nov  1 18:15:07 2021
xfsdump: session id: 8f996280-21df-42c5-b0a0-3f1584ae1f54
xfsdump: session label: "home"
xfsdump: NOTE: root ino 192 differs from mount dir ino 256, bind mount?
xfsdump: ino map phase 1: constructing initial dump list
xfsdump: ino map phase 2: skipping (no pruning necessary)
xfsdump: ino map phase 3: skipping (only one dump stream)
xfsdump: ino map construction complete
xfsdump: estimated dump size: 1587242183552 bytes
xfsdump: creating dump session media file 0 (media 0, file 0)
xfsdump: dumping ino map
xfsdump: dumping directories
xfsdump: dumping non-directory files
xfsdump: ending media file
xfsdump: media file size 1577602668640 bytes
xfsdump: dump size (non-dir files) : 1574177966864 bytes
xfsdump: dump complete: 12536 seconds elapsed
xfsdump: Dump Status: SUCCESS
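
For what it's worth, the mismatch that NOTE complains about can be
checked directly; a sketch using the device and mount point from
the dump description above (xfs_db read-only, GNU stat):

> xfs_db -r -c 'sb 0' -c 'p rootino' /dev/Space/Home2
> stat -c %i /home

The first should print the superblock's root inode (192 here), and
the second the mount directory's inode (256).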


Except for the 5.3 million lines between the start and end, the
complete xfsrestore output is above.

I can't imagine why you'd want the 5.3 million lines of
file listings, but if you do, I'll need to upload it somewhere.
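
If only a count or a sample of those NOTE lines is useful,
something like this would avoid the upload (a pair of one-liners,
with the pattern taken from the NOTE lines above):

> grep -c 'salvaging file' /tmp/xfsrestore.log
> grep 'salvaging file' /tmp/xfsrestore.log | head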
