From: Eric Sandeen <sandeen@sandeen.net>
To: Ole Tange <tange@binf.ku.dk>
Cc: xfs@oss.sgi.com
Subject: Re: xfs_repair segfaults
Date: Fri, 01 Mar 2013 16:14:23 -0600
Message-ID: <5131283F.8030704@sandeen.net>
In-Reply-To: <CANU9nTnvJS50vdQv2K0gKHZPvzzH5EY1qpizJNsqUobrr2juDA@mail.gmail.com>
On 2/28/13 9:22 AM, Ole Tange wrote:
> I forced a RAID online. I have done that before and xfs_repair
> normally removes the last hour of data or so, but saves everything
> else.
>
> Today that did not work:
>
> /usr/local/src/xfsprogs-3.1.10/repair# ./xfs_repair -n /dev/md5p1
> Phase 1 - find and verify superblock...
> Phase 2 - using internal log
> - scan filesystem freespace and inode maps...
> flfirst 232 in agf 91 too large (max = 128)
> Segmentation fault (core dumped)
FWIW, the fs in question seems to need a log replay, so
xfs_repair -n would find it in a worse state...
I had forgotten that xfs_repair -n won't complain about
a dirty log. Seems like it should.
But, the log is corrupt enough that it won't replay:
XFS (loop0): Mounting Filesystem
XFS (loop0): Starting recovery (logdev: internal)
ffff88036e7cd800: 58 41 47 46 00 00 00 01 00 00 00 5b 0f ff ff 00 XAGF.......[....
XFS (loop0): Internal error xfs_alloc_read_agf at line 2146 of file fs/xfs/xfs_alloc.c. Caller 0xffffffffa033d009
so really this'll require xfs_repair -L
xfs_repair -L doesn't segfault though, FWIW.
I'll try to look into the -n segfault in any case.
-Eric
> Core put in: http://dna.ku.dk/~tange/tmp/xfs_repair.core.bz2
>
> I tried using the git-version, too, but could not get that to compile.
>
> # uname -a
> Linux franklin 3.2.0-0.bpo.4-amd64 #1 SMP Debian 3.2.35-2~bpo60+1
> x86_64 GNU/Linux
>
> # ./xfs_repair -V
> xfs_repair version 3.1.10
>
> # cat /proc/cpuinfo |grep MH | wc
> 64 256 1280
>
> # cat /proc/partitions |grep md5
> 9 5 125024550912 md5
> 259 0 107521114112 md5p1
> 259 1 17503434752 md5p2
>
> # cat /proc/mdstat
> Personalities : [raid0] [raid6] [raid5] [raid4]
> md5 : active raid0 md1[0] md4[3] md3[2] md2[1]
> 125024550912 blocks super 1.2 512k chunks
>
> md1 : active raid6 sdd[1] sdi[9] sdq[13] sdau[7] sdt[10] sdg[5] sdf[4] sde[2]
> 31256138752 blocks super 1.2 level 6, 128k chunk, algorithm 2
> [10/8] [_UU_UUUUUU]
> bitmap: 2/2 pages [8KB], 1048576KB chunk
>
> md4 : active raid6 sdo[13] sdu[9] sdad[8] sdh[7] sdc[6] sds[11]
> sdap[3] sdao[2] sdk[1]
> 31256138752 blocks super 1.2 level 6, 128k chunk, algorithm 2
> [10/8] [_UUUU_UUUU]
> [>....................] recovery = 2.1% (84781876/3907017344)
> finish=2196.4min speed=29003K/sec
> bitmap: 2/2 pages [8KB], 1048576KB chunk
>
> md2 : active raid6 sdac[0] sdal[9] sdak[8] sdaj[7] sdai[6] sdah[5]
> sdag[4] sdaf[3] sdae[2] sdr[10]
> 31256138752 blocks super 1.2 level 6, 128k chunk, algorithm 2
> [10/10] [UUUUUUUUUU]
> bitmap: 0/2 pages [0KB], 1048576KB chunk
>
> md3 : active raid6 sdaq[0] sdab[9] sdaa[8] sdb[7] sdy[6] sdx[5] sdw[4]
> sdv[3] sdz[10] sdj[1]
> 31256138752 blocks super 1.2 level 6, 128k chunk, algorithm 2
> [10/10] [UUUUUUUUUU]
> bitmap: 0/2 pages [0KB], 1048576KB chunk
>
> unused devices: <none>
>
> # smartctl -a /dev/sdau|grep Model
> Device Model: Hitachi HDS724040ALE640
>
> # hdparm -W /dev/sdau
> /dev/sdau:
> write-caching = 0 (off)
>
> # dmesg
> [ 3745.914280] xfs_repair[25300]: segfault at 7f5d9282b000 ip
> 000000000042d068 sp 00007f5da3183dd0 error 4 in
> xfs_repair[400000+7f000]
>
>
> /Ole
>
> _______________________________________________
> xfs mailing list
> xfs@oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs
>