From: Eric Sandeen <sandeen@sandeen.net>
To: Richard Hartmann <richih.mailinglist@gmail.com>
Cc: linux-xfs@oss.sgi.com, Nicolas Stransky <nico@stransky.cx>
Subject: Re: xfs_repair on a 1.5 TiB image has been hanging for about an hour, now
Date: Fri, 12 Feb 2010 16:19:56 -0600
Message-ID: <4B75D40C.4000903@sandeen.net>
In-Reply-To: <2d460de71002121201q224d3bc8xe48089eccdf6f6a@mail.gmail.com>

Richard Hartmann wrote:
> On Fri, Feb 12, 2010 at 18:50, Eric Sandeen <sandeen@sandeen.net> wrote:
> 
>> hard to say without knowing for sure what version you're using, and
>> what exactly "this" is that you're seeing :)
> 
> 3.0.4 - I stated that in another subthread so it might have gotten lost.
> 
> 
>> Providing an xfs_metadump of the corrupted fs that hangs repair
>> is also about the best thing you could do for investigation,
>> if you've already determined that the latest release doesn't help.
> 
> http://dediserver.eu/misc/mailstore_metadata_obscured__after_xfs_repair_hang.bz2
> http://dediserver.eu/misc/mailstore_metadata_obscured.bz2
> 
> These logs will stay up for at least a week or three.
> 

OK, it seems it's hung in here:

(gdb) bt
#0  0x0000003df2e0ce74 in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x0000003df2e08874 in _L_lock_106 () from /lib64/libpthread.so.0
#2  0x0000003df2e082e0 in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00000000004310d9 in libxfs_getbuf (device=<value optimized out>, blkno=<value optimized out>, len=<value optimized out>)
    at rdwr.c:394
#4  0x000000000043110d in libxfs_readbuf (dev=140518781147480, blkno=128, len=-220135752, flags=-1) at rdwr.c:483
#5  0x0000000000413d94 in da_read_buf (mp=0x7fff54dbcb70, nex=1, bmp=<value optimized out>) at dir2.c:110
#6  0x0000000000415b30 in process_block_dir2 (mp=0x7fff54dbcb70, ino=128, dip=0x7fcd14080e00, ino_discovery=1, 
    dino_dirty=<value optimized out>, dirname=0x464619 "", parent=0x7fff54dbca10, blkmap=0x1c19dd0, dot=0x7fff54dbc6fc, 
    dotdot=0x7fff54dbc6f8, repair=0x7fff54dbc6f4) at dir2.c:1697
#7  0x00000000004161ac in process_dir2 (mp=0x7fff54dbcb70, ino=128, dip=0x7fcd14080e00, ino_discovery=1, dino_dirty=0x7fff54dbca20, 
    dirname=0x464619 "", parent=0x7fff54dbca10, blkmap=0x1c19dd0) at dir2.c:2084
#8  0x000000000040e422 in process_dinode_int (mp=0x7fff54dbcb70, dino=0x7fcd14080e00, agno=0, ino=128, was_free=0, dirty=0x7fff54dbca20, 
    used=0x7fff54dbca24, verify_mode=0, uncertain=0, ino_discovery=1, check_dups=0, extra_attr_check=1, isa_dir=0x7fff54dbca1c, 
    parent=0x7fff54dbca10) at dinode.c:2661
#9  0x000000000040e79e in process_dinode (mp=0x7fcd1408d958, dino=0x80, agno=4074831544, ino=4294967295, was_free=28730568, 
    dirty=0x464619, used=0x7fff54dbca24, ino_discovery=1, check_dups=0, extra_attr_check=1, isa_dir=0x7fff54dbca1c, 
    parent=0x7fff54dbca10) at dinode.c:2772
#10 0x0000000000408483 in process_inode_chunk (mp=0x7fff54dbcb70, agno=0, num_inos=<value optimized out>, first_irec=0x1b77930, 
    ino_discovery=1, check_dups=0, extra_attr_check=1, bogus=0x7fff54dbcaa4) at dino_chunks.c:777
#11 0x0000000000408b22 in process_aginodes (mp=0x7fff54dbcb70, pf_args=0x361bae0, agno=0, ino_discovery=1, check_dups=0, 
    extra_attr_check=1) at dino_chunks.c:1024
#12 0x000000000041a4ef in process_ag_func (wq=0x1d65a00, agno=0, arg=0x361bae0) at phase3.c:154
#13 0x000000000041ab55 in phase3 (mp=0x7fff54dbcb70) at phase3.c:193
#14 0x000000000042d5a1 in main (argc=<value optimized out>, argv=<value optimized out>) at xfs_repair.c:712

And you're right, it's not progressing.
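
Frames #0-#2 show the process parked on a pthread mutex inside the
libxfs_getbuf() buffer-cache path, so this looks like a lock that never
comes free rather than just slow I/O.  Purely to illustrate what that
state is (this is a made-up, standalone sketch, not xfsprogs code; the
lock names and thread roles are invented), a minimal lock-ordering
deadlock that leaves both threads stuck in pthread_mutex_lock() forever
looks like this:

/* Hypothetical illustration only -- not xfsprogs code.  Two threads
 * take two mutexes in opposite order; both end up blocked in
 * pthread_mutex_lock(), which is the state frame #0 above shows.
 * Build: cc -pthread deadlock.c -o deadlock   (it hangs by design) */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t cache_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t node_lock  = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)            /* think: processing thread */
{
        (void)arg;
        pthread_mutex_lock(&cache_lock);  /* holds the cache ...      */
        usleep(1000);
        pthread_mutex_lock(&node_lock);   /* ... waits on the node    */
        pthread_mutex_unlock(&node_lock);
        pthread_mutex_unlock(&cache_lock);
        return NULL;
}

static void *prefetcher(void *arg)        /* think: prefetch thread   */
{
        (void)arg;
        pthread_mutex_lock(&node_lock);   /* holds the node ...       */
        usleep(1000);
        pthread_mutex_lock(&cache_lock);  /* ... waits on the cache   */
        pthread_mutex_unlock(&cache_lock);
        pthread_mutex_unlock(&node_lock);
        return NULL;
}

int main(void)
{
        pthread_t a, b;

        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, prefetcher, NULL);
        pthread_join(a, NULL);            /* never returns: deadlock  */
        pthread_join(b, NULL);
        printf("done\n");                 /* never reached            */
        return 0;
}

That's only meant to show the shape of the stall; whether repair's real
problem is ordering between the prefetch threads and the processing
path, or a buffer lock taken twice on the same path, is exactly what
would need digging into.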

The filesystem is a real mess, and it's also making repair pretty unhappy :)

1st run hangs
2nd run completes with -P
next run resets more link counts
run after that segfaults

:(

And just a warning: post-repair, about 22% of the files are in lost+found.

It'd take a bit of dedicated time to sort out the issues in repair here;
we need to do it, but somebody's going to have to find the time ...

-Eric
