Message-ID: <512FA67D.2090708@sandeen.net>
Date: Thu, 28 Feb 2013 12:48:29 -0600
From: Eric Sandeen
Subject: Re: xfs_repair segfaults
List-Id: XFS Filesystem from SGI
To: Ole Tange
Cc: xfs@oss.sgi.com

On 2/28/13 9:22 AM, Ole Tange wrote:
> I forced a RAID online. I have done that before, and xfs_repair
> normally removes the last hour or so of data but saves everything
> else.
>
> Today that did not work:
>
> /usr/local/src/xfsprogs-3.1.10/repair# ./xfs_repair -n /dev/md5p1
> Phase 1 - find and verify superblock...
> Phase 2 - using internal log
>         - scan filesystem freespace and inode maps...
> flfirst 232 in agf 91 too large (max = 128)
> Segmentation fault (core dumped)
>
> Core put in: http://dna.ku.dk/~tange/tmp/xfs_repair.core.bz2

We'd need a binary with debug symbols to go along with it. An
xfs_metadump might let us recreate the problem here, too.

> I tried using the git version, too, but could not get that to compile.

How did it fail? Can you report that in a different thread?
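Roughly, something like the following should produce both artifacts. This is a sketch, not a recipe: it assumes the xfsprogs-3.1.10 source tree and device name from your report, and the build flags are a guess at the usual autoconf knobs, which may differ between releases.

```shell
# Rebuild xfs_repair with debug symbols so the core is usable
# (assumes the standard ./configure && make flow in the source tree).
cd /usr/local/src/xfsprogs-3.1.10
make clean
CFLAGS="-g -O0" ./configure
make

# Print a backtrace from the existing core against the rebuilt binary.
gdb repair/xfs_repair /path/to/xfs_repair.core -batch -ex bt

# Dump only the filesystem metadata (no file contents) for reproduction;
# xfs_metadump obfuscates filenames by default.
xfs_metadump /dev/md5p1 md5p1.metadump

# Whoever debugs it can restore the dump to a sparse image and run
# xfs_repair -n against that.
xfs_mdrestore md5p1.metadump md5p1.img
```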
Thanks,
-Eric

> # uname -a
> Linux franklin 3.2.0-0.bpo.4-amd64 #1 SMP Debian 3.2.35-2~bpo60+1
> x86_64 GNU/Linux
>
> # ./xfs_repair -V
> xfs_repair version 3.1.10
>
> # cat /proc/cpuinfo | grep MH | wc
>      64     256    1280
>
> # cat /proc/partitions | grep md5
>    9        5 125024550912 md5
>  259        0 107521114112 md5p1
>  259        1  17503434752 md5p2
>
> # cat /proc/mdstat
> Personalities : [raid0] [raid6] [raid5] [raid4]
> md5 : active raid0 md1[0] md4[3] md3[2] md2[1]
>       125024550912 blocks super 1.2 512k chunks
>
> md1 : active raid6 sdd[1] sdi[9] sdq[13] sdau[7] sdt[10] sdg[5] sdf[4] sde[2]
>       31256138752 blocks super 1.2 level 6, 128k chunk, algorithm 2
>       [10/8] [_UU_UUUUUU]
>       bitmap: 2/2 pages [8KB], 1048576KB chunk
>
> md4 : active raid6 sdo[13] sdu[9] sdad[8] sdh[7] sdc[6] sds[11]
> sdap[3] sdao[2] sdk[1]
>       31256138752 blocks super 1.2 level 6, 128k chunk, algorithm 2
>       [10/8] [_UUUU_UUUU]
>       [>....................] recovery = 2.1% (84781876/3907017344)
>       finish=2196.4min speed=29003K/sec
>       bitmap: 2/2 pages [8KB], 1048576KB chunk
>
> md2 : active raid6 sdac[0] sdal[9] sdak[8] sdaj[7] sdai[6] sdah[5]
> sdag[4] sdaf[3] sdae[2] sdr[10]
>       31256138752 blocks super 1.2 level 6, 128k chunk, algorithm 2
>       [10/10] [UUUUUUUUUU]
>       bitmap: 0/2 pages [0KB], 1048576KB chunk
>
> md3 : active raid6 sdaq[0] sdab[9] sdaa[8] sdb[7] sdy[6] sdx[5] sdw[4]
> sdv[3] sdz[10] sdj[1]
>       31256138752 blocks super 1.2 level 6, 128k chunk, algorithm 2
>       [10/10] [UUUUUUUUUU]
>       bitmap: 0/2 pages [0KB], 1048576KB chunk
>
> unused devices:
>
> # smartctl -a /dev/sdau | grep Model
> Device Model: Hitachi HDS724040ALE640
>
> # hdparm -W /dev/sdau
> /dev/sdau:
>  write-caching = 0 (off)
>
> # dmesg
> [ 3745.914280] xfs_repair[25300]: segfault at 7f5d9282b000 ip
> 000000000042d068 sp 00007f5da3183dd0 error 4 in
> xfs_repair[400000+7f000]
>
> /Ole
>
> _______________________________________________
> xfs mailing list
> xfs@oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs