* xfs_repair segfaulting in phase 3
@ 2013-08-12 11:38 Jan Yves Brueckner
2013-08-12 14:02 ` Eric Sandeen
` (2 more replies)
0 siblings, 3 replies; 6+ messages in thread
From: Jan Yves Brueckner @ 2013-08-12 11:38 UTC (permalink / raw)
To: xfs
[-- Attachment #1: Type: text/html, Size: 9984 bytes --]
[-- Attachment #2: Type: text/plain, Size: 121 bytes --]
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
* Re: xfs_repair segfaulting in phase 3
2013-08-12 11:38 xfs_repair segfaulting in phase 3 Jan Yves Brueckner
@ 2013-08-12 14:02 ` Eric Sandeen
2013-08-26 20:41 ` Eric Sandeen
2013-09-04 13:39 ` Eric Sandeen
2 siblings, 0 replies; 6+ messages in thread
From: Eric Sandeen @ 2013-08-12 14:02 UTC (permalink / raw)
To: Jan Yves Brueckner; +Cc: xfs
On 8/12/13 6:38 AM, Jan Yves Brueckner wrote:
> Hi there,
>
> as in previous posts, we've got a problem in repair/dir2.c: an
> xfs_repair -L -m 60000 segfaults reproducibly at the very same
> point of recovery.
>
> I did the initial repair with the Debian 2.9.8 build (some patches
> applied), then upgraded to the latest stable 3.1.11, where the
> problem persists.
>
> 3.1.11, when compiled without optimization and run under gdb,
> segfaulted in libpthread instead, so I repeated the run with an
> -O0 build of 2.9.8 to get the debugging information:
...
> I'll get the metadump on request.
If you'd like to send it my way off-list, I'll take a look.
Thanks,
-Eric
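[For readers of the archive: xfs_metadump captures only the filesystem metadata (obfuscating filenames by default), so the image is compact enough to share, and xfs_mdrestore replays it into a sparse file that xfs_repair can be run against instead of real hardware. A rough sketch of that workflow, with /dev/sdX1 standing in for the real device:]

```shell
# Capture metadata only (-g shows progress); "-" writes to stdout so
# the image can be streamed through gzip. /dev/sdX1 is a placeholder.
xfs_metadump -g /dev/sdX1 - | gzip > fs.metadump.gz

# On the receiving side: restore into a sparse image file and run
# repair against that copy, leaving the original device untouched.
gunzip -c fs.metadump.gz | xfs_mdrestore - fs.img
xfs_repair -f fs.img
```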
* Re: xfs_repair segfaulting in phase 3
2013-08-12 11:38 xfs_repair segfaulting in phase 3 Jan Yves Brueckner
2013-08-12 14:02 ` Eric Sandeen
@ 2013-08-26 20:41 ` Eric Sandeen
2013-09-04 13:39 ` Eric Sandeen
2 siblings, 0 replies; 6+ messages in thread
From: Eric Sandeen @ 2013-08-26 20:41 UTC (permalink / raw)
To: Jan Yves Brueckner; +Cc: xfs
On 8/12/13 6:38 AM, Jan Yves Brueckner wrote:
> Hi there,
>
> as in previous posts, we've got a problem in repair/dir2.c: an
> xfs_repair -L -m 60000 segfaults reproducibly at the very same
> point of recovery.
>
> I did the initial repair with the Debian 2.9.8 build (some patches
> applied), then upgraded to the latest stable 3.1.11, where the
> problem persists.
>
> 3.1.11, when compiled without optimization and run under gdb,
> segfaulted in libpthread instead, so I repeated the run with an
> -O0 build of 2.9.8 to get the debugging information:
...
Ok finally looking at the metadump you provided, thanks.
But out of curiosity, what happened to this filesystem? Seems
like a real mess.
-Eric
* Re: xfs_repair segfaulting in phase 3
2013-08-12 11:38 xfs_repair segfaulting in phase 3 Jan Yves Brueckner
2013-08-12 14:02 ` Eric Sandeen
2013-08-26 20:41 ` Eric Sandeen
@ 2013-09-04 13:39 ` Eric Sandeen
2013-10-25 11:23 ` Aw: " Jan Yves Brueckner
2 siblings, 1 reply; 6+ messages in thread
From: Eric Sandeen @ 2013-09-04 13:39 UTC (permalink / raw)
To: Jan Yves Brueckner; +Cc: xfs
On 8/12/13 6:38 AM, Jan Yves Brueckner wrote:
> Hi there,
>
> as in previous posts, we've got a problem in repair/dir2.c: an
> xfs_repair -L -m 60000 segfaults reproducibly at the very same
> point of recovery.
>
> I did the initial repair with the Debian 2.9.8 build (some patches
> applied), then upgraded to the latest stable 3.1.11, where the
> problem persists.
>
> 3.1.11, when compiled without optimization and run under gdb,
> segfaulted in libpthread instead, so I repeated the run with an
> -O0 build of 2.9.8 to get the debugging information:
>
>
Jan - 3 bugfixes into this, and I can get repair to complete w/o
a segv. However, the fs is still not fully repaired, nor is it
fully repaired after a 2nd pass, etc. :(
So you may have contributed a bit to xfs_repair stability by
uncovering this, but I'm not sure I will be able to contribute to
the recovery of your (apparently _severely_ damaged) filesystem. :(
-Eric
> corrupt block 35 in directory inode 39869938
> will junk block
> corrupt block 51 in directory inode 39869938
> will junk block
>
> Program received signal SIGSEGV, Segmentation fault.
> [Switching to Thread 0x7fcd982ae730 (LWP 19563)]
> 0x0000000000419428 in verify_dir2_path (mp=0x7ffff8381580,
>     cursor=0x7ffff8380f10, p_level=0) at dir2.c:619
> 619         node = cursor->level[this_level].bp->data;
>
> (gdb) info locals
> node = (xfs_da_intnode_t *) 0x7ffff8380e94
> newnode = (xfs_da_intnode_t *) 0x52202867f8380de0
> dabno = 0
> bp = (xfs_dabuf_t *) 0x80000200000001
> bad = -474527744
> entry = 0
> this_level = 1
> bmp = (bmap_ext_t *) 0x1
> nex = 134250496
> lbmp = {startoff = 8459390528, startblock = 72058695280238674,
>     blockcount = 140737357811264, flag = 4309438}
> __PRETTY_FUNCTION__ = "verify_dir2_path"
>
> (gdb) bt
> #0  0x0000000000419428 in verify_dir2_path (mp=0x7ffff8381580,
>     cursor=0x7ffff8380f10, p_level=0) at dir2.c:619
> #1  0x000000000041c441 in process_leaf_level_dir2 (mp=0x7ffff8381580,
>     da_cursor=0x7ffff8380f10, repair=0x7ffff8381134) at dir2.c:1899
> #2  0x000000000041c61e in process_node_dir2 (mp=0x7ffff8381580,
>     ino=39869938, dip=0x7fc9e2b38000, blkmap=0x7fca249ffd40,
>     repair=0x7ffff8381134) at dir2.c:1979
> #3  0x000000000041c8cf in process_leaf_node_dir2 (mp=0x7ffff8381580,
>     ino=39869938, dip=0x7fc9e2b38000, ino_discovery=1,
>     dirname=0x4911f6 "", parent=0x7ffff8381398, blkmap=0x7fca249ffd40,
>     dot=0x7ffff838113c, dotdot=0x7ffff8381138, repair=0x7ffff8381134,
>     isnode=1) at dir2.c:2059
> #4  0x000000000041cb33 in process_dir2 (mp=0x7ffff8381580,
>     ino=39869938, dip=0x7fc9e2b38000, ino_discovery=1,
>     dino_dirty=0x7ffff8381390, dirname=0x4911f6 "",
>     parent=0x7ffff8381398, blkmap=0x7fca249ffd40) at dir2.c:2113
> #5  0x00000000004127ac in process_dinode_int (mp=0x7ffff8381580,
>     dino=0x7fc9e2b38000, agno=0, ino=39869938, was_free=0,
>     dirty=0x7ffff8381390, cleared=0x7ffff838138c, used=0x7ffff8381394,
>     verify_mode=0, uncertain=0, ino_discovery=1, check_dups=0,
>     extra_attr_check=1, isa_dir=0x7ffff8381388,
>     parent=0x7ffff8381398) at dinode.c:2783
> #6  0x0000000000412d94 in process_dinode (mp=0x7ffff8381580,
>     dino=0x7fc9e2b38000, agno=0, ino=39869938, was_free=0,
>     dirty=0x7ffff8381390, cleared=0x7ffff838138c, used=0x7ffff8381394,
>     ino_discovery=1, check_dups=0, extra_attr_check=1,
>     isa_dir=0x7ffff8381388, parent=0x7ffff8381398) at dinode.c:3017
> #7  0x000000000040b607 in process_inode_chunk (mp=0x7ffff8381580,
>     agno=0, num_inos=64, first_irec=0x751c810, ino_discovery=1,
>     check_dups=0, extra_attr_check=1, bogus=0x7ffff8381430)
>     at dino_chunks.c:778
> #8  0x000000000040bf46 in process_aginodes (mp=0x7ffff8381580,
>     pf_args=0x75e6810, agno=0, ino_discovery=1, check_dups=0,
>     extra_attr_check=1) at dino_chunks.c:1025
> #9  0x0000000000421db3 in process_ag_func (wq=0x1fe3790, agno=0,
>     arg=0x75e6810) at phase3.c:162
> #10 0x0000000000421f84 in process_ags (mp=0x7ffff8381580)
>     at phase3.c:201
> #11 0x00000000004220aa in phase3 (mp=0x7ffff8381580) at phase3.c:240
> #12 0x000000000043bec4 in main (argc=5, argv=0x7ffff83818c8)
>     at xfs_repair.c:697
>
> I'll get the metadump on request.
>
> Thanks for helping,
>
> Jan
>
* Aw: Re: xfs_repair segfaulting in phase 3
2013-09-04 13:39 ` Eric Sandeen
@ 2013-10-25 11:23 ` Jan Yves Brueckner
2013-10-25 15:12 ` Eric Sandeen
0 siblings, 1 reply; 6+ messages in thread
From: Jan Yves Brueckner @ 2013-10-25 11:23 UTC (permalink / raw)
To: Eric Sandeen; +Cc: xfs
[-- Attachment #1: Type: text/html, Size: 7406 bytes --]
[-- Attachment #2: Type: text/plain, Size: 121 bytes --]
* Re: Aw: Re: xfs_repair segfaulting in phase 3
2013-10-25 11:23 ` Aw: " Jan Yves Brueckner
@ 2013-10-25 15:12 ` Eric Sandeen
0 siblings, 0 replies; 6+ messages in thread
From: Eric Sandeen @ 2013-10-25 15:12 UTC (permalink / raw)
To: Jan Yves Brueckner; +Cc: xfs
On 10/25/13 6:23 AM, Jan Yves Brueckner wrote:
> Thanks a lot for taking care,
>
> I just tested with 3.2 alpha1 and had these results:
>
> corrupt block 21 in directory inode 39869938
> will junk block
> xfs_dir3_data_read_verify: XFS_CORRUPTION_ERROR
> corrupt block 34 in directory inode 39869938
> will junk block
> xfs_dir3_data_read_verify: XFS_CORRUPTION_ERROR
> corrupt block 35 in directory inode 39869938
> will junk block
> xfs_dir3_data_read_verify: XFS_CORRUPTION_ERROR
> corrupt block 51 in directory inode 39869938
> will junk block
> xfs_da3_node_read_verify: XFS_CORRUPTION_ERROR
> Segmentation fault
>
>
> Should I go on with git latest?
Hm, trying to remember which "3 patches" I referred to. ;)
This one:
commit 44dae5e6804408b4123a916a2738b73e21d8c61e
Author: Eric Sandeen <sandeen@sandeen.net>
Date: Thu Sep 12 20:56:36 2013 +0000
xfs_repair: test for bad level in dir2 node
is indeed committed post 3.2.0-alpha1.
Let me try to remember what the other two were, and where
they're at.
-Eric