From: "Libor Klepáč" <libor.klepac@bcom.cz>
To: Eric Sandeen <sandeen@sandeen.net>
Cc: Eric Sandeen <sandeen@redhat.com>, linux-xfs <linux-xfs@vger.kernel.org>
Subject: Re: [PATCH] xfs_repair: junk leaf attribute if count == 0
Date: Thu, 23 Feb 2017 10:05:08 +0100 [thread overview]
Message-ID: <3114301.pdi8LQHD6i@libor-nb> (raw)
In-Reply-To: <30c5d3eb-5f62-5488-aff7-6b1544f1c29a@sandeen.net>
Hello,
so the repair did do something; the log says "attr block 0", is that OK?
Inode 335629253 is now in lost+found; it is an empty directory belonging to user cust1.
Looking at
u.sfdir2.hdr.parent.i4 = 319041478
in xfs_db before the repair, the inode appeared to be in a directory owned by cust2,
who is completely unrelated to cust1. Is that possible?
From our point of view it should not be possible, because of filesystem permissions (each user's home directory has mode 0750).
The second inode, 1992635, which xfs_repair also corrected, is the lost+found directory.
Libor
Command output follows (kernel 4.9.2, xfsprogs 4.10.0-rc1):
-----------------------------------------------------------------------------
---- check
# xfs_repair -n /dev/mapper/vgDisk2-lvData
Phase 1 - find and verify superblock...
Phase 2 - using internal log
- zero log...
- scan filesystem freespace and inode maps...
agi unlinked bucket 5 is 84933 in ag 20 (inode=335629253)
- found root inode chunk
Phase 3 - for each AG...
- scan (but don't clear) agi unlinked lists...
- process known inodes and perform inode discovery...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- agno = 4
- agno = 5
- agno = 6
- agno = 7
- agno = 8
- agno = 9
- agno = 10
- agno = 11
- agno = 12
- agno = 13
- agno = 14
- agno = 15
- agno = 16
- agno = 17
- agno = 18
- agno = 19
- agno = 20
bad attribute count 0 in attr block 0, inode 335629253
problem with attribute contents in inode 335629253
would clear attr fork
bad nblocks 1 for inode 335629253, would reset to 0
bad anextents 1 for inode 335629253, would reset to 0
- agno = 21
- agno = 22
- agno = 23
- agno = 24
- process newly discovered inodes...
Phase 4 - check for duplicate blocks...
- setting up duplicate extent list...
- check for inodes claiming duplicate blocks...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- agno = 4
- agno = 5
- agno = 6
- agno = 7
- agno = 8
- agno = 9
- agno = 10
- agno = 11
- agno = 12
- agno = 13
- agno = 14
- agno = 15
- agno = 16
- agno = 17
- agno = 18
- agno = 19
- agno = 20
- agno = 21
- agno = 22
- agno = 23
- agno = 24
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
- traversing filesystem ...
- traversal finished ...
- moving disconnected inodes to lost+found ...
disconnected dir inode 335629253, would move to lost+found
Phase 7 - verify link counts...
would have reset inode 335629253 nlinks from 0 to 2
No modify flag set, skipping filesystem flush and exiting.
-----------------------------------------------------------------------------
---- repair
# xfs_repair /dev/mapper/vgDisk2-lvData
Phase 1 - find and verify superblock...
Phase 2 - using internal log
- zero log...
- scan filesystem freespace and inode maps...
agi unlinked bucket 5 is 84933 in ag 20 (inode=335629253)
- found root inode chunk
Phase 3 - for each AG...
- scan and clear agi unlinked lists...
- process known inodes and perform inode discovery...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- agno = 4
- agno = 5
- agno = 6
- agno = 7
- agno = 8
- agno = 9
- agno = 10
- agno = 11
- agno = 12
- agno = 13
- agno = 14
- agno = 15
- agno = 16
- agno = 17
- agno = 18
- agno = 19
- agno = 20
bad attribute count 0 in attr block 0, inode 335629253
problem with attribute contents in inode 335629253
clearing inode 335629253 attributes
correcting nblocks for inode 335629253, was 1 - counted 0
- agno = 21
- agno = 22
- agno = 23
- agno = 24
- process newly discovered inodes...
Phase 4 - check for duplicate blocks...
- setting up duplicate extent list...
- check for inodes claiming duplicate blocks...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- agno = 4
- agno = 5
- agno = 6
- agno = 7
- agno = 8
- agno = 9
- agno = 10
- agno = 11
- agno = 12
- agno = 13
- agno = 14
- agno = 15
- agno = 16
- agno = 17
- agno = 18
- agno = 19
- agno = 20
bad attribute format 1 in inode 335629253, resetting value
- agno = 21
- agno = 22
- agno = 23
- agno = 24
Phase 5 - rebuild AG headers and trees...
- reset superblock...
Phase 6 - check inode connectivity...
- resetting contents of realtime bitmap and summary inodes
- traversing filesystem ...
- traversal finished ...
- moving disconnected inodes to lost+found ...
disconnected dir inode 335629253, moving to lost+found
Phase 7 - verify and correct link counts...
resetting inode 1992635 nlinks from 2 to 3
resetting inode 335629253 nlinks from 0 to 2
Note - quota info will be regenerated on next quota mount.
Done
-----------------------------------------------------------------------------
---- xfs_db before repair
# xfs_db -r /dev/vgDisk2/lvData
xfs_db> inode 335629253
xfs_db> print
core.magic = 0x494e
core.mode = 040775
core.version = 2
core.format = 1 (local)
core.nlinkv2 = 0
core.onlink = 0
core.projid_lo = 0
core.projid_hi = 0
core.uid = 10106
core.gid = 10106
core.flushiter = 2
core.atime.sec = Wed Feb 22 11:04:21 2017
core.atime.nsec = 464104444
core.mtime.sec = Wed Feb 22 11:46:41 2017
core.mtime.nsec = 548670485
core.ctime.sec = Wed Feb 22 11:46:41 2017
core.ctime.nsec = 548670485
core.size = 6
core.nblocks = 1
core.extsize = 0
core.nextents = 0
core.naextents = 1
core.forkoff = 15
core.aformat = 2 (extents)
core.dmevmask = 0
core.dmstate = 0
core.newrtbm = 0
core.prealloc = 0
core.realtime = 0
core.immutable = 0
core.append = 0
core.sync = 0
core.noatime = 0
core.nodump = 0
core.rtinherit = 0
core.projinherit = 0
core.nosymlinks = 0
core.extsz = 0
core.extszinherit = 0
core.nodefrag = 0
core.filestream = 0
core.gen = 1322976790
next_unlinked = null
u.sfdir2.hdr.count = 0
u.sfdir2.hdr.i8count = 0
u.sfdir2.hdr.parent.i4 = 319041478
a.bmx[0] = [startoff,startblock,blockcount,extentflag]
0:[0,20976867,1,0]
-----------------------------------------------------------------------------
---- xfs_db after repair
# xfs_db -r /dev/vgDisk2/lvData
xfs_db> inode 335629253
xfs_db> print
core.magic = 0x494e
core.mode = 040775
core.version = 2
core.format = 1 (local)
core.nlinkv2 = 2
core.onlink = 0
core.projid_lo = 0
core.projid_hi = 0
core.uid = 10106
core.gid = 10106
core.flushiter = 2
core.atime.sec = Wed Feb 22 11:04:21 2017
core.atime.nsec = 464104444
core.mtime.sec = Wed Feb 22 11:46:41 2017
core.mtime.nsec = 548670485
core.ctime.sec = Wed Feb 22 11:46:41 2017
core.ctime.nsec = 548670485
core.size = 6
core.nblocks = 0
core.extsize = 0
core.nextents = 0
core.naextents = 0
core.forkoff = 0
core.aformat = 2 (extents)
core.dmevmask = 0
core.dmstate = 0
core.newrtbm = 0
core.prealloc = 0
core.realtime = 0
core.immutable = 0
core.append = 0
core.sync = 0
core.noatime = 0
core.nodump = 0
core.rtinherit = 0
core.projinherit = 0
core.nosymlinks = 0
core.extsz = 0
core.extszinherit = 0
core.nodefrag = 0
core.filestream = 0
core.gen = 1322976790
next_unlinked = null
u.sfdir2.hdr.count = 0
u.sfdir2.hdr.i8count = 0
u.sfdir2.hdr.parent.i4 = 1992635
xfs_db> quit
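As a side note on the "agi unlinked bucket 5 is 84933 in ag 20 (inode=335629253)" message above: an absolute XFS inode number is the AG number combined with the AG-relative inode number (agino). A minimal sketch of that arithmetic, assuming the agino occupies 24 bits on this filesystem (which matches the numbers in this log; the real width is sb_agblklog + sb_inopblog and varies between filesystems):

```python
# Sketch: compose/decompose XFS absolute inode numbers.
# ASSUMPTION: the AG-relative inode number (agino) occupies 24 bits here,
# i.e. sb_agblklog + sb_inopblog == 24 on this filesystem; the real width
# comes from the superblock and differs between filesystems.
AGINO_BITS = 24

def absolute_ino(agno, agino, agino_bits=AGINO_BITS):
    """Combine an AG number and an AG-relative inode number."""
    return (agno << agino_bits) | agino

def split_ino(ino, agino_bits=AGINO_BITS):
    """Split an absolute inode number into (agno, agino)."""
    return ino >> agino_bits, ino & ((1 << agino_bits) - 1)

# The numbers from the repair log line up:
print(absolute_ino(20, 84933))  # 335629253, the unlinked inode in ag 20
print(split_ino(335629253))     # (20, 84933)
print(split_ino(319041478))     # the bogus pre-repair parent
print(split_ino(1992635))       # the post-repair parent (lost+found)
```

Incidentally, this shows the bogus pre-repair parent 319041478 sits in ag 19, while the post-repair parent 1992635 (lost+found) sits in ag 0.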