* XFS_WANT_CORRUPTED_GOTO at line 4528 of file fs/xfs/xfs_bmap.c
@ 2006-10-21 1:34 Carl-Johan Kjellander
2006-10-21 17:29 ` Shailendra Tripathi
0 siblings, 1 reply; 4+ messages in thread
From: Carl-Johan Kjellander @ 2006-10-21 1:34 UTC (permalink / raw)
To: xfs
I've been hit by a bug installing FC6test4. It will hold us back from upgrading,
since we use XFS for our home directories at work.
Here is the link to the bug report at redhat's bugzilla:
https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=211086
Here is dmesg when I try to read the file /var/lib/rpm/Packages:
XFS internal error XFS_WANT_CORRUPTED_GOTO at line 4528 of file
fs/xfs/xfs_bmap.c. Caller 0xeeb4e6ba
[<c040571a>] dump_trace+0x69/0x1af
[<c0405878>] show_trace_log_lvl+0x18/0x2c
[<c0405e18>] show_trace+0xf/0x11
[<c0405e47>] dump_stack+0x15/0x17
[<eeb3128a>] xfs_bmap_read_extents+0x448/0x462 [xfs]
[<eeb4e6ba>] xfs_iread_extents+0xa0/0xbb [xfs]
[<eeb2e692>] xfs_bmapi+0x23a/0x1f83 [xfs]
[<eeb50e1d>] xfs_iomap+0x2e1/0x78d [xfs]
[<eeb6c52e>] __xfs_get_blocks+0x72/0x237 [xfs]
[<eeb6c748>] xfs_get_blocks+0x28/0x2d [xfs]
[<c0484fa9>] do_mpage_readpage+0x282/0x5e2
[<c048581c>] mpage_readpages+0xac/0x114
[<c044d03f>] __do_page_cache_readahead+0x124/0x1c8
[<c044d12f>] blockable_page_cache_readahead+0x4c/0x9f
[<c044d2da>] page_cache_readahead+0xbf/0x196
[<c044793b>] do_generic_mapping_read+0x13d/0x49b
[<c0448573>] __generic_file_aio_read+0x18c/0x1d1
[<eeb73b3c>] xfs_read+0x294/0x2fc [xfs]
[<eeb707b7>] xfs_file_aio_read+0x70/0x78 [xfs]
[<c0465c3a>] do_sync_read+0xc1/0xfb
[<c04665bc>] vfs_read+0xa6/0x157
[<c0466a2b>] sys_read+0x41/0x67
[<c0404ea7>] syscall_call+0x7/0xb
DWARF2 unwinder stuck at syscall_call+0x7/0xb
I have the actual partition compressed with bzip2.
http://razor.csbnet.se/varfucked.bz2
Unpack it and dd to a 4GB logical volume and try to read from
/var/lib/rpm/Packages
/cjk
--
* RE: XFS_WANT_CORRUPTED_GOTO at line 4528 of file fs/xfs/xfs_bmap.c
2006-10-21 1:34 XFS_WANT_CORRUPTED_GOTO at line 4528 of file fs/xfs/xfs_bmap.c Carl-Johan Kjellander
@ 2006-10-21 17:29 ` Shailendra Tripathi
2006-10-21 21:15 ` Eric Sandeen
0 siblings, 1 reply; 4+ messages in thread
From: Shailendra Tripathi @ 2006-10-21 17:29 UTC (permalink / raw)
To: Carl-Johan Kjellander, xfs
Hi Carl,
The best way to see what is going on is to look at what the log thinks it has. You should have got a message indicating the inode number and other details of the extent, as printed by this code in the source:
if (!rt && !gotp->br_startblock && (*lastxp != NULLEXTNUM)) {
        cmn_err(CE_PANIC, "Access to block zero: fs: <%s> inode: %lld "
                "start_block : %llx start_off : %llx blkcnt : %llx "
                "extent-state : %x \n",
                (ip->i_mount)->m_fsname, (long long)ip->i_ino,
                gotp->br_startblock, gotp->br_startoff,
                gotp->br_blockcount, gotp->br_state);
}
Can you attach a hex dump of the complete log, with the inode and extent information? I want to see what went into the log for the operations on this inode. It could be that the extent map was corrupted just before shutdown, or that something in recovery screwed up.
-shailendra
* Re: XFS_WANT_CORRUPTED_GOTO at line 4528 of file fs/xfs/xfs_bmap.c
2006-10-21 17:29 ` Shailendra Tripathi
@ 2006-10-21 21:15 ` Eric Sandeen
2006-10-21 22:30 ` Shailendra Tripathi
0 siblings, 1 reply; 4+ messages in thread
From: Eric Sandeen @ 2006-10-21 21:15 UTC (permalink / raw)
To: Shailendra Tripathi; +Cc: Carl-Johan Kjellander, xfs
Shailendra Tripathi wrote:
> Hi Carl,
> The best way to see what is going on is to see what log thinks it has. You must have got a message indicating the inode number and other details of the extent as indicated by the source.
>
> if (!rt && !gotp->br_startblock && (*lastxp != NULLEXTNUM)) {
>         cmn_err(CE_PANIC, "Access to block zero: fs: <%s> inode: %lld "
>                 "start_block : %llx start_off : %llx blkcnt : %llx "
>                 "extent-state : %x \n",
>                 (ip->i_mount)->m_fsname, (long long)ip->i_ino,
>                 gotp->br_startblock, gotp->br_startoff,
>                 gotp->br_blockcount, gotp->br_state);
> }
>
> Can you attach the hex dump of complete log with inode and extent information ? I want to see what has gone into the log for the operations on this inode. It could be that the extent map was corrupted just before shutdown or something in recovery screwed up.
> -shailendra
(russell, looks like your take-message-rewriter is going a bit nuts up there? or
something)
Anyway, the code in question that tripped here is:
XFS_WANT_CORRUPTED_GOTO(
XFS_BMAP_SANITY_CHECK(mp, block, level),
error0);
which is checking magic & other things.
The filesystem is corrupted; I'd guess this may be from 4kstacks + xfs + lvm + raid.
the image he provided has a clean log...
XFS mounting filesystem loop0
Ending clean XFS mount for filesystem: loop0
this file is one of the ones in bad shape as he mentioned:
[root@link-07 tmp]# ls -i mnt/lib/rpm/Packages
1048708 mnt/lib/rpm/Packages
in my case if I try to read it I get:
attempt to access beyond end of device
loop0: rw=0, want=1546188226568, limit=4194304
I/O error in filesystem ("loop0") meta-data dev loop0 block 0x16800000000
("xfs_trans_read_buf") error 5 buf count 4096
attempt to access beyond end of device
the inode looks like:
xfs_db> p
core.magic = 0x494e
core.mode = 0100644
core.version = 1
core.format = 3 (btree)
core.nlinkv1 = 1
core.uid = 37
core.gid = 37
core.flushiter = 121
core.atime.sec = Sun Oct 15 17:17:38 2006
core.atime.nsec = 403299952
core.mtime.sec = Sun Oct 15 17:17:38 2006
core.mtime.nsec = 403299952
core.ctime.sec = Sun Oct 15 17:17:45 2006
core.ctime.nsec = 983773702
core.size = 21340160
core.nblocks = 5212
core.extsize = 0
core.nextents = 154
core.naextents = 1
core.forkoff = 15
core.aformat = 2 (extents)
core.dmevmask = 0
core.dmstate = 0
core.newrtbm = 0
core.prealloc = 0
core.realtime = 0
core.immutable = 0
core.append = 0
core.sync = 0
core.noatime = 0
core.nodump = 0
core.rtinherit = 0
core.projinherit = 0
core.nosymlinks = 0
core.extsz = 0
core.extszinherit = 0
core.nodefrag = 0
core.gen = 0
next_unlinked = null
u.bmbt.level = 1
u.bmbt.numrecs = 1
u.bmbt.keys[1] = [startoff] 1:[0]
u.bmbt.ptrs[1] = 1:1233986491173044224
a.bmx[0] = [startoff,startblock,blockcount,extentflag] 0:[0,72071,1,0]
repair output follows:
[root@link-07 tmp]# xfs_repair -n image
Phase 1 - find and verify superblock...
Phase 2 - using internal log
- scan filesystem freespace and inode maps...
would zero unused portion of primary superblock (AG #0)
would zero unused portion of secondary superblock (AG #1)
would zero unused portion of secondary superblock (AG #2)
would zero unused portion of secondary superblock (AG #3)
would zero unused portion of secondary superblock (AG #4)
would zero unused portion of secondary superblock (AG #5)
would zero unused portion of secondary superblock (AG #6)
would zero unused portion of secondary superblock (AG #7)
- found root inode chunk
Phase 3 - for each AG...
- scan (but don't clear) agi unlinked lists...
- process known inodes and perform inode discovery...
- agno = 0
- agno = 1
bad bmap btree ptr 0x1120002d00000000 in ino 1048708
bad data fork in inode 1048708
would have cleared inode 1048708
bad bmap btree ptr 0x8000000300000000 in ino 1048709
bad data fork in inode 1048709
would have cleared inode 1048709
bad bmap btree ptr 0x57e0001700000000 in ino 1048711
bad data fork in inode 1048711
would have cleared inode 1048711
bad bmap btree ptr 0x9b00000200000000 in ino 1048716
bad data fork in inode 1048716
would have cleared inode 1048716
bad bmap btree ptr 0x9140000100000000 in ino 1048717
bad data fork in inode 1048717
would have cleared inode 1048717
bad bmap btree ptr 0xbf00000100000000 in ino 1048718
bad data fork in inode 1048718
would have cleared inode 1048718
bad bmap btree ptr 0xa840000100000000 in ino 1048719
bad data fork in inode 1048719
would have cleared inode 1048719
bad bmap btree ptr 0x6b00000100000000 in ino 1048722
bad data fork in inode 1048722
would have cleared inode 1048722
bad bmap btree ptr 0x98c0000100000000 in ino 1048723
bad data fork in inode 1048723
would have cleared inode 1048723
- agno = 2
- agno = 3
- agno = 4
- agno = 5
- agno = 6
- agno = 7
- process newly discovered inodes...
Phase 4 - check for duplicate blocks...
- setting up duplicate extent list...
- check for inodes claiming duplicate blocks...
- agno = 0
- agno = 1
entry "Packages" at block 0 offset 144 in directory inode 1048704 references
free inode 1048708
would clear inode number in entry at offset 144...
entry "Providename" at block 0 offset 200 in directory inode 1048704 references
free inode 1048709
would clear inode number in entry at offset 200...
entry "Basenames" at block 0 offset 312 in directory inode 1048704 references
free inode 1048711
would clear inode number in entry at offset 312...
entry "Requirename" at block 0 offset 520 in directory inode 1048704 references
free inode 1048716
would clear inode number in entry at offset 520...
entry "Dirnames" at block 0 offset 568 in directory inode 1048704 references
free inode 1048717
would clear inode number in entry at offset 568...
entry "Requireversion" at block 0 offset 624 in directory inode 1048704
references free inode 1048718
would clear inode number in entry at offset 624...
entry "Provideversion" at block 0 offset 688 in directory inode 1048704
references free inode 1048719
would clear inode number in entry at offset 688...
entry "Sha1header" at block 0 offset 856 in directory inode 1048704 references
free inode 1048722
would clear inode number in entry at offset 856...
entry "Filemd5s" at block 0 offset 904 in directory inode 1048704 references
free inode 1048723
would clear inode number in entry at offset 904...
bad bmap btree ptr 0x1120002d00000000 in ino 1048708
bad data fork in inode 1048708
would have cleared inode 1048708
bad bmap btree ptr 0x8000000300000000 in ino 1048709
bad data fork in inode 1048709
would have cleared inode 1048709
bad bmap btree ptr 0x57e0001700000000 in ino 1048711
bad data fork in inode 1048711
would have cleared inode 1048711
bad bmap btree ptr 0x9b00000200000000 in ino 1048716
bad data fork in inode 1048716
would have cleared inode 1048716
bad bmap btree ptr 0x9140000100000000 in ino 1048717
bad data fork in inode 1048717
would have cleared inode 1048717
bad bmap btree ptr 0xbf00000100000000 in ino 1048718
bad data fork in inode 1048718
would have cleared inode 1048718
bad bmap btree ptr 0xa840000100000000 in ino 1048719
bad data fork in inode 1048719
would have cleared inode 1048719
bad bmap btree ptr 0x6b00000100000000 in ino 1048722
bad data fork in inode 1048722
would have cleared inode 1048722
bad bmap btree ptr 0x98c0000100000000 in ino 1048723
bad data fork in inode 1048723
would have cleared inode 1048723
- agno = 2
- agno = 3
- agno = 4
- agno = 5
- agno = 6
- agno = 7
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
- traversing filesystem starting at / ...
entry "Packages" in directory inode 1048704 points to free inode 1048708, would
junk entry
entry "Providename" in directory inode 1048704 points to free inode 1048709,
would junk entry
entry "Basenames" in directory inode 1048704 points to free inode 1048711, would
junk entry
entry "Requirename" in directory inode 1048704 points to free inode 1048716,
would junk entry
entry "Dirnames" in directory inode 1048704 points to free inode 1048717, would
junk entry
entry "Requireversion" in directory inode 1048704 points to free inode 1048718,
would junk entry
entry "Provideversion" in directory inode 1048704 points to free inode 1048719,
would junk entry
entry "Sha1header" in directory inode 1048704 points to free inode 1048722,
would junk entry
entry "Filemd5s" in directory inode 1048704 points to free inode 1048723, would
junk entry
- traversal finished ...
- traversing all unattached subtrees ...
- traversals finished ...
- moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.
* Re: XFS_WANT_CORRUPTED_GOTO at line 4528 of file fs/xfs/xfs_bmap.c
2006-10-21 21:15 ` Eric Sandeen
@ 2006-10-21 22:30 ` Shailendra Tripathi
0 siblings, 0 replies; 4+ messages in thread
From: Shailendra Tripathi @ 2006-10-21 22:30 UTC (permalink / raw)
To: Eric Sandeen; +Cc: Carl-Johan Kjellander, xfs
Eric Sandeen wrote:
> Anyway, the code in question that tripped here is:
>
> XFS_WANT_CORRUPTED_GOTO(
> XFS_BMAP_SANITY_CHECK(mp, block, level),
> error0);
>
> which is checking magic & other things.
>
OK. Thanks.
> The filesystem is corrupted; I'd guess this may be from 4kstacks + xfs
> + lvm + raid.
>
Yes, the corruption is persistent. I'd like to consider another direction
as well, besides the stack corruption issue, and I have a reason for that:
all of the corrupted inodes are in the same region. The corruption starts
at 1048704, which is a directory; the other corrupted inodes are:
1048708
1048709
1048711
1048716
1048717
1048718
1048719
1048722
1048723
(gdb) p/x 1048704 % 64
$1 = 0x0
(gdb) p/x 1048708 % 64
$2 = 0x4
(gdb) p 1048723 % 64
$7 = 19
(gdb) p 1048723 - 1048704
$8 = 19
XFS flushes inodes in an 8K buffer at a time. If I assume 256-byte inodes
(I couldn't get enough data to verify that this is the case), then all of
these inodes are part of one flush buffer (32 inodes are flushed per
buffer). So it could be that an I/O to that buffer was lost somewhere,
which is why a whole bunch of inodes in the same region are corrupted.