* XFS Repair hangs at inode repair in phase3
@ 2014-06-24 13:01 Dragon
2014-06-24 14:15 ` Eric Sandeen
0 siblings, 1 reply; 16+ messages in thread
From: Dragon @ 2014-06-24 13:01 UTC (permalink / raw)
To: xfs
Hello,
I have a fresh install of Debian Wheezy with a software RAID (md2) for my files. md2 is nearly full at 8 TB, and I wanted to migrate the data to a newer system. While copying files I got a message that I have to run xfs_repair. I first ran xfs_check, but it eats up all of my 11 GB of memory and the system stalls. Then I ran xfs_repair -n /dev/md2, but it hangs each time at:
problem with directory contents in inode 2147997719 - would have cleared inode 2147997719. If I interrupt this and restart, it hangs at the same position. The system seems to do nothing for hours.
Some info:
xfsprogs version 3.1.7+b1
Kernel: 3.2.0-4-amd64
1 CPU with 2 cores
Disks are Seagate ST3000DM001
MD2 = software RAID6, clean, 4/5 disks
LVM = no
cat /proc/meminfo
MemTotal: 12057912 kB
MemFree: 9016128 kB
Buffers: 9760 kB
Cached: 56260 kB
SwapCached: 0 kB
Active: 2908368 kB
Inactive: 42616 kB
Active(anon): 2887884 kB
Inactive(anon): 300 kB
Active(file): 20484 kB
Inactive(file): 42316 kB
Unevictable: 4280 kB
Mlocked: 4280 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 28 kB
Writeback: 0 kB
AnonPages: 2889240 kB
Mapped: 10656 kB
Shmem: 508 kB
Slab: 21156 kB
SReclaimable: 6240 kB
SUnreclaim: 14916 kB
KernelStack: 760 kB
PageTables: 7904 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 6028956 kB
Committed_AS: 2751008 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 311940 kB
VmallocChunk: 34359422892 kB
HardwareCorrupted: 0 kB
AnonHugePages: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 50816 kB
DirectMap2M: 2813952 kB
DirectMap1G: 9437184 kB
-------
cat /proc/partitions
major minor #blocks name
8 16 2930266584 sdb
8 17 96256 sdb1
8 18 9765888 sdb2
8 19 1952768 sdb3
8 20 2918450176 sdb4
8 0 2930266584 sda
8 1 2930265088 sda1
8 32 2930266584 sdc
8 33 96256 sdc1
8 34 9765888 sdc2
8 35 1952768 sdc3
8 36 2918450176 sdc4
8 48 2930266584 sdd
8 49 96256 sdd1
8 50 9765888 sdd2
8 51 1952768 sdd3
8 52 2918450176 sdd4
8 64 2930266584 sde
8 65 96256 sde1
8 66 9765888 sde2
8 67 1952768 sde3
8 68 2918450176 sde4
8 80 2930266584 sdf
8 81 2930265088 sdf1
9 0 9757568 md0
9 2 8754955776 md2
Any help?
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
* Re: XFS Repair hangs at inode repair in phase3
From: Eric Sandeen @ 2014-06-24 14:15 UTC (permalink / raw)
To: Dragon, xfs
On 6/24/14, 8:01 AM, Dragon wrote:
>
> Hello,
> I have a fresh install of Debian Wheezy with a software RAID (md2) for my
> files. md2 is nearly full at 8 TB, and I wanted to migrate the data to
> a newer system. While copying files I got a message that I have to run
> xfs_repair. I first ran xfs_check, but it eats up all of my 11 GB of memory
> and the system stalls.
Yep, xfs_check doesn't scale and is on the way to deprecation.
> Then I ran xfs_repair -n /dev/md2, but it hangs
> each time at: problem with directory contents in inode 2147997719 -
> would have cleared inode 2147997719. If I interrupt this and restart, it
> hangs at the same position. The system seems to do nothing for hours.
You might try the -P option to repair.
-P Disable prefetching of inode and directory blocks. Use this option if you find xfs_repair gets stuck and stops proceeding.
If that works, it likely indicates a bug, but it might get you going.
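Concretely, the dry run Eric suggests could look like this. This is a sketch, not from the thread: the device name comes from the original report, -n keeps the run read-only, and the log file name is just a suggestion.

```shell
# Read-only check (-n) with prefetch disabled (-P), capturing the
# output so it can be posted to the list. Substitute your own device.
DEV=/dev/md2
if [ -b "$DEV" ]; then
    xfs_repair -n -P "$DEV" 2>&1 | tee xfs_repair-dryrun.log
else
    echo "block device $DEV not present; nothing checked" >&2
fi
```

The guard on `-b` just avoids running repair against a nonexistent node if the command is pasted on the wrong host.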
>
> Some info:
> xfsprogs version 3.1.7+b1
And that's a 3-year old xfsprogs, so an upgrade might help, too.
-Eric
* Re: XFS Repair hangs at inode repair in phase3
From: Dragon @ 2014-06-24 14:24 UTC (permalink / raw)
To: xfs
[-- Attachment #1: Type: text/html, Size: 1749 bytes --]
[-- Attachment #2: Type: text/plain, Size: 121 bytes --]
* Re: XFS Repair hangs at inode repair in phase3
From: Eric Sandeen @ 2014-06-24 15:35 UTC (permalink / raw)
To: Dragon, xfs
You can always try:
# git clone git://oss.sgi.com/xfs/cmds/xfsprogs.git
# cd xfsprogs
# git checkout v3.2.0
# make
# repair/xfs_repair -n /dev/md2
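Before the final step it is worth confirming that the freshly built binary, not the packaged 3.1.7, is the one being run; a small sketch (the relative path matches the commands above, run from the top of the source tree):

```shell
# Print the version of the locally built repair binary so it can be
# checked against the intended v3.2.0 tag.
if [ -x repair/xfs_repair ]; then
    repair/xfs_repair -V
else
    echo "repair/xfs_repair not built yet" >&2
fi
```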
-Eric
On 6/24/14, 9:24 AM, Dragon wrote:
> Hello Eric,
> OK, I will never use xfs_check again ;). Yes, I saw the -P option, but I
> thought that if I disabled that prefetching, those problems would never get
> fixed. The version is from Debian Wheezy stable, so why is it so old when
> it is that important? I do not understand this from the Debian maintainers.
> So what can I do to use the latest version under Debian Wheezy? Normally I
> use the package system. Is there any risk of losing all my data? Where did
> the problem come from? If the version is so old, was md2 created with it,
> or does the package only contain the tools?
> Thanks
>
> On 6/24/14, 8:01 AM, Dragon wrote:
>
> Hello, I have a fresh install of Debian Wheezy with a software RAID (md2) for my files. md2 is nearly full at 8 TB, and I wanted to migrate the data to a newer system. While copying files I got a message that I have to run xfs_repair. I first ran xfs_check, but it eats up all of my 11 GB of memory and the system stalls.
>
> Yep, xfs_check doesn't scale and is on the way to deprecation.
>
> Then I ran xfs_repair -n /dev/md2, but it hangs each time at: problem with directory contents in inode 2147997719 - would have cleared inode 2147997719. If I interrupt this and restart, it hangs at the same position. The system seems to do nothing for hours.
>
> You might try the -P option to repair. -P Disable prefetching of inode and directory blocks. Use this option if you find xfs_repair gets stuck and stops proceeding. If that works, it likely indicates a bug, but it might get you going.
>
> Some info: xfsprogs version 3.1.7+b1
>
> And that's a 3-year old xfsprogs, so an upgrade might help, too. -Eric
>
>
* Re: XFS Repair hangs at inode repair in phase3
@ 2014-06-24 15:43 Dragon
From: Dragon @ 2014-06-24 15:43 UTC (permalink / raw)
To: xfs
[-- Attachment #1: Type: text/html, Size: 2190 bytes --]
[-- Attachment #2: Type: text/plain, Size: 121 bytes --]
* Re: XFS Repair hangs at inode repair in phase3
From: Stan Hoeppner @ 2014-06-24 18:54 UTC (permalink / raw)
To: Dragon, xfs
On 6/24/2014 10:43 AM, Dragon wrote:
> Hi,
> OK, I will try. So what could be the cause of this? Maybe the loss of one disk
> out of the RAID6? I don't think so, or could it be? Maybe a controller failure?
> Before this I used ext4 and never had problems with this hardware...
If it's a hardware problem there should be errors in dmesg.
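As a rough screen, something like the following picks out the usual disk and controller error strings; the pattern list is illustrative, not exhaustive, and is not taken from the thread:

```shell
# Grep the kernel ring buffer for typical storage error messages.
# Patterns cover block I/O errors, libata errors, and md events.
dmesg 2>/dev/null | grep -iE 'i/o error|ata[0-9]+.*(error|fail)|md/raid|medium error' \
    || echo "no matching error lines found"
```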
Cheers,
Stan
* Re: XFS Repair hangs at inode repair in phase3
From: Dragon @ 2014-06-24 19:07 UTC (permalink / raw)
To: xfs
[-- Attachment #1: Type: text/html, Size: 660 bytes --]
[-- Attachment #2: Type: text/plain, Size: 121 bytes --]
* Re: XFS Repair hangs at inode repair in phase3
From: Dragon @ 2014-06-24 20:02 UTC (permalink / raw)
To: xfs
[-- Attachment #1: Type: text/html, Size: 8639 bytes --]
[-- Attachment #2: Type: text/plain, Size: 121 bytes --]
* Re: XFS Repair hangs at inode repair in phase3
From: Stan Hoeppner @ 2014-06-24 21:29 UTC (permalink / raw)
To: Dragon, xfs
On 6/24/2014 3:02 PM, Dragon wrote:
> Hello,
> I am currently trying to back up the data. Here is what my log shows:
> [start]
> XFS (md2): Internal error xfs_da_do_buf(2) at line 2097 of file
> /build/linux-5U_ZPM/linux-3.2.57/fs/xfs/xfs_da_btree.c. Caller 0xffffffffa03e9940
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.454987]
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.476710] Pid: 3352, comm: mc Not
> tainted 3.2.0-4-amd64 #1 Debian 3.2.57-3+deb7u2
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.476718] Call Trace:
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.476763] [<ffffffffa03c2731>] ?
> xfs_corruption_error+0x54/0x6f [xfs]
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.476818] [<ffffffffa03e9940>] ?
> xfs_da_read_buf+0x1a/0x1f [xfs]
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.476868] [<ffffffffa03e982f>] ?
> xfs_da_do_buf+0x47e/0x53c [xfs]
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.476916] [<ffffffffa03e9940>] ?
> xfs_da_read_buf+0x1a/0x1f [xfs]
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.476932] [<ffffffff810cb42d>] ?
> zone_statistics+0x41/0x74
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.476980] [<ffffffffa03e9940>] ?
> xfs_da_read_buf+0x1a/0x1f [xfs]
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.477029] [<ffffffffa03e9940>] ?
> xfs_da_read_buf+0x1a/0x1f [xfs]
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.477078] [<ffffffffa03ee7b2>] ?
> xfs_dir2_leaf_lookup_int+0x54/0x24b [xfs]
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.477125] [<ffffffffa03dfa0c>] ?
> xfs_bmap_last_extent.constprop.22+0x57/0x66 [xfs]
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.477174] [<ffffffffa03eeda9>] ?
> xfs_dir2_leaf_lookup+0x4b/0xdc [xfs]
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.477223] [<ffffffffa03eb4fa>] ?
> xfs_dir2_isleaf+0x18/0x45 [xfs]
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.477272] [<ffffffffa03eb949>] ?
> xfs_dir_lookup+0xf7/0x137 [xfs]
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.477316] [<ffffffffa03d0386>] ?
> xfs_lookup+0x76/0xd3 [xfs]
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.477331] [<ffffffff810ed405>] ?
> kmem_cache_alloc+0x86/0xea
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.477370] [<ffffffffa03c8e5a>] ?
> xfs_vn_lookup+0x3f/0x7e [xfs]
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.477386] [<ffffffff81102d15>] ?
> d_alloc_and_lookup+0x3a/0x60
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.477401] [<ffffffff811037b9>] ?
> walk_component+0x219/0x406
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.477415] [<ffffffff811102ee>] ?
> mntget+0x17/0x1c
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.477429] [<ffffffff8110464d>] ?
> path_lookupat+0x7c/0x2bd
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.477443] [<ffffffff81036628>] ?
> should_resched+0x5/0x23
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.477457] [<ffffffff8134e90c>] ?
> _cond_resched+0x7/0x1c
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.477472] [<ffffffff811048aa>] ?
> do_path_lookup+0x1c/0x87
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.477487] [<ffffffff81106333>] ?
> user_path_at_empty+0x47/0x7b
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.477501] [<ffffffff810fe1b9>] ?
> cp_new_stat+0xe6/0xfa
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.477515] [<ffffffff810380dd>] ?
> set_next_entity+0x32/0x55
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.477531] [<ffffffff8100d750>] ?
> __switch_to+0x1e5/0x258
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.477543] [<ffffffff810fe386>] ?
> vfs_fstatat+0x32/0x60
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.477557] [<ffffffff810fe4e7>] ?
> sys_newlstat+0x12/0x2b
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.477571] [<ffffffff81354c12>] ?
> system_call_fastpath+0x16/0x1b
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.477583] XFS (md2): Corruption
> detected. Unmount and run xfs_repair
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.488542] ffff8801e3545000: 44 c1 52
> 1c 70 87 80 ee c9 b8 26 ad fc cb 65 4d D.R.p.....&...eM
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.492077] XFS (md2): Internal error
> xfs_da_do_buf(2) at line 2097 of file
> /build/linux-5U_ZPM/linux-3.2.57/fs/xfs/xfs_da_btree.c. Caller 0xffffffffa03e9940
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.492080]
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.503378] Pid: 3352, comm: mc Not
> tainted 3.2.0-4-amd64 #1 Debian 3.2.57-3+deb7u2
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.503381] Call Trace:
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.503391] [<ffffffffa03c2731>] ?
> xfs_corruption_error+0x54/0x6f [xfs]
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.503406] [<ffffffffa03e9940>] ?
> xfs_da_read_buf+0x1a/0x1f [xfs]
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.503420] [<ffffffffa03e982f>] ?
> xfs_da_do_buf+0x47e/0x53c [xfs]
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.503434] [<ffffffffa03e9940>] ?
> xfs_da_read_buf+0x1a/0x1f [xfs]
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.503438] [<ffffffff810cb42d>] ?
> zone_statistics+0x41/0x74
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.503452] [<ffffffffa03e9940>] ?
> xfs_da_read_buf+0x1a/0x1f [xfs]
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.503469] [<ffffffffa03e9940>] ?
> xfs_da_read_buf+0x1a/0x1f [xfs]
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.503488] [<ffffffffa03ee7b2>] ?
> xfs_dir2_leaf_lookup_int+0x54/0x24b [xfs]
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.503506] [<ffffffffa03dfa0c>] ?
> xfs_bmap_last_extent.constprop.22+0x57/0x66 [xfs]
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.503524] [<ffffffffa03eeda9>] ?
> xfs_dir2_leaf_lookup+0x4b/0xdc [xfs]
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.503542] [<ffffffffa03eb4fa>] ?
> xfs_dir2_isleaf+0x18/0x45 [xfs]
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.503561] [<ffffffffa03eb949>] ?
> xfs_dir_lookup+0xf7/0x137 [xfs]
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.503578] [<ffffffffa03d0386>] ?
> xfs_lookup+0x76/0xd3 [xfs]
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.503585] [<ffffffff810ed405>] ?
> kmem_cache_alloc+0x86/0xea
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.503600] [<ffffffffa03c8e5a>] ?
> xfs_vn_lookup+0x3f/0x7e [xfs]
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.503607] [<ffffffff81102d15>] ?
> d_alloc_and_lookup+0x3a/0x60
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.503615] [<ffffffff811037b9>] ?
> walk_component+0x219/0x406
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.503622] [<ffffffff811102ee>] ?
> mntget+0x17/0x1c
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.503630] [<ffffffff8110464d>] ?
> path_lookupat+0x7c/0x2bd
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.503637] [<ffffffff81036628>] ?
> should_resched+0x5/0x23
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.503644] [<ffffffff8134e90c>] ?
> _cond_resched+0x7/0x1c
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.503652] [<ffffffff811048aa>] ?
> do_path_lookup+0x1c/0x87
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.503660] [<ffffffff81106333>] ?
> user_path_at_empty+0x47/0x7b
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.503667] [<ffffffff810fe1b9>] ?
> cp_new_stat+0xe6/0xfa
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.503674] [<ffffffff810380dd>] ?
> set_next_entity+0x32/0x55
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.503682] [<ffffffff8100d750>] ?
> __switch_to+0x1e5/0x258
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.503689] [<ffffffff810fe386>] ?
> vfs_fstatat+0x32/0x60
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.503696] [<ffffffff810fe4e7>] ?
> sys_newlstat+0x12/0x2b
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.503704] [<ffffffff81354c12>] ?
> system_call_fastpath+0x16/0x1b
> Jun 24 22:00:50 clusternode01 kernel: [ 5461.503709] XFS (md2): Corruption
> detected. Unmount and run xfs_repair
> [END]
> Any ideas?
The error message above tells you what to do. Follow Eric's suggestion:
install the latest xfsprogs, unmount, and run xfs_repair.
Cheers,
Stan
* Re: XFS Repair hangs at inode repair in phase3
From: Dragon @ 2014-06-24 22:15 UTC (permalink / raw)
To: xfs
[-- Attachment #1: Type: text/html, Size: 549 bytes --]
[-- Attachment #2: Type: text/plain, Size: 121 bytes --]
* Re: XFS Repair hangs at inode repair in phase3
From: Stan Hoeppner @ 2014-06-24 23:08 UTC (permalink / raw)
To: Dragon, xfs
On 6/24/2014 5:15 PM, Dragon wrote:
> Hi,
> Yes, I read this, but I am new to this and don't know what that command will
> do, and I don't want to lose my data. I read on the net that some people
> couldn't start or boot from their disk after running xfs_repair. I looked for
> an upgrade to the latest xfsprogs version, but it seems difficult for amd64
> systems. Do you have a changelog for the latest version? Could my installed
> version be OK, or is it better not to use it?
> Thanks Stan and Eric
# xfs_repair -n -P /dev/device
Post the output in your next reply, assuming it doesn't hang again.
Cheers,
Stan
* Re: XFS Repair hangs at inode repair in phase3
From: Dragon @ 2014-06-24 23:41 UTC (permalink / raw)
To: xfs
[-- Attachment #1: Type: text/html, Size: 917 bytes --]
[-- Attachment #2: Type: text/plain, Size: 121 bytes --]
* Re: XFS Repair hangs at inode repair in phase3
From: Dragon @ 2014-06-26 19:17 UTC (permalink / raw)
To: xfs
[-- Attachment #1: Type: text/html, Size: 1235 bytes --]
[-- Attachment #2: Type: text/plain, Size: 121 bytes --]
* Re: XFS Repair hangs at inode repair in phase3
From: Stan Hoeppner @ 2014-06-26 23:32 UTC (permalink / raw)
To: Dragon, xfs
On 6/26/2014 2:17 PM, Dragon wrote:
> Hello,
> I upgraded to Debian Jessie with xfsprogs 3.2. If I run xfs_repair /dev/md2, the system hangs too. If I use -P, it ends with:
> corrupt block 0 in directory inode 2147702899: junking block
> Segmentation fault
>
> If I use -n -P, it ends with:
>
> No modify flag set, skipping phase 5
> Inode allocation btrees are too corrupted, skipping phases 6 and 7
> No modify flag set, skipping filesystem flush and exiting.
Did you lose the md/RAID6 array and reassemble it prior to seeing the
problems with the XFS filesystem? You may have reassembled it in the
wrong order, in which case the sector offsets will be wrong, and XFS
will not see data where it should be. This is reported as corruption.
You never posted your dmesg output so you may or may not be experiencing
hardware problems. That needs to be eliminated as a possible cause.
Please post relevant lines from dmesg.
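One way to check the reassembly theory is to compare the md superblocks of the member partitions directly; a sketch (the device names follow the partition list earlier in the thread, and the exact field names printed depend on the metadata version):

```shell
# Compare per-member RAID metadata: the Events counters should match
# across members, and each device's role/slot should be consistent
# with the intended array layout.
for d in /dev/sd[b-e]4; do
    [ -b "$d" ] || continue                    # skip absent devices
    echo "== $d =="
    mdadm --examine "$d" 2>/dev/null | grep -E 'UUID|Events|Role|Slot'
done
```

Mismatched Events counts, or two members claiming the same slot, would point at a bad reassembly rather than an XFS problem.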
> Files are still not accessible - I think I lost some TB ;( - not a good experience for my first use of XFS...
An XFS filesystem doesn't simply become corrupt like this during normal
operation. Something happened that corrupted the on disk structures.
XFS is the messenger here, not the cause.
Cheers,
Stan
* Re: XFS Repair hangs at inode repair in phase3
From: Dave Chinner @ 2014-06-27 0:02 UTC (permalink / raw)
To: Dragon; +Cc: xfs
[ A couple of email-to-lists tips. Please:
- don't post in HTML format; use plain text
- don't top-post; comment inline
- fix the In-Reply-To fields on your email replies
so that mail programs can thread the conversation properly
]
On Thu, Jun 26, 2014 at 09:17:27PM +0200, Dragon wrote:
> I upgraded to Debian Jessie with xfsprogs 3.2. If I run xfs_repair
> /dev/md2, the system hangs too. If I use -P, it ends with:
> corrupt block 0 in directory inode 2147702899: junking block
> Segmentation fault
Can you run this under gdb and get a stack trace from where it
crashed? You might need to grab the source and build it yourself to get a
meaningful stack trace...
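A minimal recipe for this, assuming the self-built xfsprogs tree from earlier in the thread; the path and device are illustrative, and note this reruns the crashing invocation (-P without -n), which modifies the filesystem:

```shell
# Run the unstripped, locally built xfs_repair under gdb and print a
# full backtrace automatically when it segfaults.
if command -v gdb >/dev/null 2>&1 && [ -b /dev/md2 ]; then
    gdb --batch -ex run -ex 'bt full' --args repair/xfs_repair -P /dev/md2
else
    echo "gdb or /dev/md2 not available on this host" >&2
fi
```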
> No modify flag set, skipping phase 5
> Inode allocation btrees are too corrupted, skipping phases 6 and 7
> No modify flag set, skipping filesystem flush and exiting.
What actually went wrong with your storage? The only time I've seen
that warning is when a raid array had been reconstructed incorrectly
after a series of disk failures. Did your RAID have failures or
reconstruction problems before XFS started reporting errors?
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: XFS Repair hangs at inode repair in phase3
@ 2014-06-27 7:42 Dragon
0 siblings, 0 replies; 16+ messages in thread
From: Dragon @ 2014-06-27 7:42 UTC (permalink / raw)
To: xfs
Hello,
thanks for the advice about the list - I changed to text format. The situation
before the XFS failure: I had a RAID6 with 5 disks. I copied all files from
another server to this one, because the space on the other was full and it had
to be rebuilt. After that I needed one more disk, which I didn't have, so I
took one out of the RAID6. Restart, check, and all looked good. That was
approximately 4 weeks ago, so I don't know when and why the XFS failure
occurred, but when I wanted to copy all the files back to the new server, I
got this message from XFS to repair the file structure. I searched a lot and
asked in the XFS IRC channel, where no one could or would help, so I finally
wrote to this list. Since a lot of time has passed and I need this server, I
yesterday rebooted from a live CD and backed up most of it. I think I lost
approximately 500 GB, which matters less than waiting another 4 weeks with
perhaps no solution. So yesterday I deleted the RAID, and since last night I
have been running badblocks on the disks to check them.
So at present no more help is needed, but I thank both of you for your help
and fast replies. I will give XFS another chance and rebuild the server with
it.
Best regards and a sunny weekend