* xfs_repair use all memory
@ 2015-06-12 10:14 Lampa
2015-06-12 10:27 ` Emmanuel Florac
0 siblings, 1 reply; 7+ messages in thread
From: Lampa @ 2015-06-12 10:14 UTC (permalink / raw)
To: xfs
Hello,
I have an 8 TB filesystem but am unable to check or repair it.
df -h
/dev/mapper/vg0-data  8,0T  7,5T  525G  94% /home

df -i
/dev/mapper/vg0-data  1717983616  106594156  1611389460  7% /home
I get a lot of "XFS (dm-0): xfs_log_force: error 5 returned" messages in
the log. When I try to repair (or check) the filesystem, xfs_repair is
always killed by the OOM killer; it uses all memory (8 GB).
xfs_repair -V
xfs_repair version 3.1.7
xfs_repair -n -m 2048 -vv /dev/mapper/vg0-data
Phase 1 - find and verify superblock...
- max_mem = 2097152, icount = 109324864, imem = 427050, dblock
= 2147479552, dmem = 1048574
- block cache size set to 71440 entries
Phase 2 - using internal log
- scan filesystem freespace and inode maps...
- found root inode chunk
libxfs_bcache: 0x147c4c0
Max supported entries = 71440
Max utilized entries = 11839
Active entries = 11839
Hash table size = 8930
Hits = 0
Misses = 11839
Hit ratio = 0,00
MRU 0 entries = 11839 (100%)
MRU 1 entries = 0 ( 0%)
MRU 2 entries = 0 ( 0%)
MRU 3 entries = 0 ( 0%)
MRU 4 entries = 0 ( 0%)
MRU 5 entries = 0 ( 0%)
MRU 6 entries = 0 ( 0%)
MRU 7 entries = 0 ( 0%)
MRU 8 entries = 0 ( 0%)
MRU 9 entries = 0 ( 0%)
MRU 10 entries = 0 ( 0%)
MRU 11 entries = 0 ( 0%)
MRU 12 entries = 0 ( 0%)
MRU 13 entries = 0 ( 0%)
MRU 14 entries = 0 ( 0%)
MRU 15 entries = 0 ( 0%)
Hash buckets with 0 entries 3161 ( 0%)
Hash buckets with 1 entries 2587 ( 21%)
Hash buckets with 2 entries 1615 ( 27%)
Hash buckets with 3 entries 823 ( 20%)
Hash buckets with 4 entries 416 ( 14%)
Hash buckets with 5 entries 203 ( 8%)
Hash buckets with 6 entries 62 ( 3%)
Hash buckets with 7 entries 35 ( 2%)
Hash buckets with 8 entries 15 ( 1%)
Hash buckets with 9 entries 6 ( 0%)
Hash buckets with 10 entries 2 ( 0%)
Hash buckets with 11 entries 2 ( 0%)
Hash buckets with 12 entries 1 ( 0%)
Hash buckets with 14 entries 1 ( 0%)
Hash buckets with 15 entries 1 ( 0%)
Phase 3 - for each AG...
- scan (but don't clear) agi unlinked lists...
- process known inodes and perform inode discovery...
- agno = 0
doubling cache size to 142880
doubling cache size to 285760
data fork in ino 700711615 claims free block 43794495
data fork in ino 700711615 claims free block 43794496
- agno = 1
- agno = 2
- agno = 3
- agno = 4
- agno = 5
- agno = 6
- agno = 7
- process newly discovered inodes...
libxfs_bcache: 0x147c4c0
Max supported entries = 285760
Max utilized entries = 285760
Active entries = 285705
Hash table size = 8930
Hits = 26843840
Misses = 34163400
Hit ratio = 44,00
MRU 0 entries = 3933 ( 1%)
MRU 1 entries = 0 ( 0%)
MRU 2 entries = 2311 ( 0%)
MRU 3 entries = 14456 ( 5%)
MRU 4 entries = 345 ( 0%)
MRU 5 entries = 14955 ( 5%)
MRU 6 entries = 216435 ( 75%)
MRU 7 entries = 3442 ( 1%)
MRU 8 entries = 0 ( 0%)
MRU 9 entries = 0 ( 0%)
MRU 10 entries = 0 ( 0%)
MRU 11 entries = 5852 ( 2%)
MRU 12 entries = 232 ( 0%)
MRU 13 entries = 23545 ( 8%)
MRU 14 entries = 199 ( 0%)
MRU 15 entries = 0 ( 0%)
Hash buckets with 9 entries 2 ( 0%)
Hash buckets with 10 entries 5 ( 0%)
Hash buckets with 11 entries 4 ( 0%)
Hash buckets with 12 entries 13 ( 0%)
Hash buckets with 13 entries 8 ( 0%)
Hash buckets with 14 entries 22 ( 0%)
Hash buckets with 15 entries 31 ( 0%)
Hash buckets with 16 entries 75 ( 0%)
Hash buckets with 17 entries 86 ( 0%)
Hash buckets with 18 entries 112 ( 0%)
Hash buckets with 19 entries 128 ( 0%)
Hash buckets with 20 entries 200 ( 1%)
Hash buckets with 21 entries 218 ( 1%)
Hash buckets with 22 entries 274 ( 2%)
Hash buckets with 23 entries 292 ( 2%)
Hash buckets with 24 entries 342 ( 2%)
Hash buckets with >24 entries 7118 ( 86%)
Phase 4 - check for duplicate blocks...
- setting up duplicate extent list...
- check for inodes claiming duplicate blocks...
- agno = 0
doubling cache size to 571520
- agno = 1
- agno = 2
- agno = 3
- agno = 4
- agno = 5
- agno = 6
- agno = 7
libxfs_bcache: 0x147c4c0
Max supported entries = 571520
Max utilized entries = 571520
Active entries = 571506
Hash table size = 8930
Hits = 51552448
Misses = 71034716
Hit ratio = 42,05
MRU 0 entries = 16024 ( 2%)
MRU 1 entries = 0 ( 0%)
MRU 2 entries = 9154 ( 1%)
MRU 3 entries = 83965 ( 14%)
MRU 4 entries = 517 ( 0%)
MRU 5 entries = 155 ( 0%)
MRU 6 entries = 435429 ( 76%)
MRU 7 entries = 0 ( 0%)
MRU 8 entries = 0 ( 0%)
MRU 9 entries = 0 ( 0%)
MRU 10 entries = 0 ( 0%)
MRU 11 entries = 8709 ( 1%)
MRU 12 entries = 521 ( 0%)
MRU 13 entries = 16818 ( 2%)
MRU 14 entries = 214 ( 0%)
MRU 15 entries = 0 ( 0%)
Hash buckets with >24 entries 8930 (100%)
No modify flag set, skipping phase 5
libxfs_bcache: 0x147c4c0
Max supported entries = 571520
Max utilized entries = 571520
Active entries = 571506
Hash table size = 8930
Hits = 51552448
Misses = 71034716
Hit ratio = 42,05
MRU 0 entries = 16024 ( 2%)
MRU 1 entries = 0 ( 0%)
MRU 2 entries = 9154 ( 1%)
MRU 3 entries = 83965 ( 14%)
MRU 4 entries = 517 ( 0%)
MRU 5 entries = 155 ( 0%)
MRU 6 entries = 435429 ( 76%)
MRU 7 entries = 0 ( 0%)
MRU 8 entries = 0 ( 0%)
MRU 9 entries = 0 ( 0%)
MRU 10 entries = 0 ( 0%)
MRU 11 entries = 8709 ( 1%)
MRU 12 entries = 521 ( 0%)
MRU 13 entries = 16818 ( 2%)
MRU 14 entries = 214 ( 0%)
MRU 15 entries = 0 ( 0%)
Hash buckets with >24 entries 8930 (100%)
Phase 6 - check inode connectivity...
- traversing filesystem ...
- agno = 0
doubling cache size to 1143040
Killed (SIGKILL)
--
Lampa
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
* Re: xfs_repair use all memory
2015-06-12 10:14 xfs_repair use all memory Lampa
@ 2015-06-12 10:27 ` Emmanuel Florac
2015-06-12 10:36 ` Lampa
0 siblings, 1 reply; 7+ messages in thread
From: Emmanuel Florac @ 2015-06-12 10:27 UTC (permalink / raw)
To: Lampa; +Cc: xfs
On Fri, 12 Jun 2015 12:14:49 +0200,
Lampa <lampacz@gmail.com> wrote:
> I have an 8 TB filesystem but am unable to check or repair it.
>
> df -h
> /dev/mapper/vg0-data 8,0T 7,5T
> 525G 94% /home
I have successfully repaired bigger filesystems with 2 GB of RAM, so
that's weird. Try a fresher xfs_repair (3.2.x); newer versions are
easier on RAM.
Just in case it may help you, here's one I compiled myself for amd64:
http://update.intellique.com/pub/xfs_repair-3.2.2.gz
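In case the link above is useful, a typical way to fetch and use such a standalone binary looks roughly like this (the downloaded filename is an assumption, and running the dry-run mode first is prudent):

```shell
# Fetch and unpack the standalone binary (filenames are illustrative).
wget http://update.intellique.com/pub/xfs_repair-3.2.2.gz
gunzip xfs_repair-3.2.2.gz
chmod +x xfs_repair-3.2.2

# Dry run first: -n reports problems without modifying the filesystem.
./xfs_repair-3.2.2 -n /dev/mapper/vg0-data
```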
However if you have XFS error, please first get sure you haven't an
underlying (hardware?) problem...
--
------------------------------------------------------------------------
Emmanuel Florac | Direction technique
| Intellique
| <eflorac@intellique.com>
| +33 1 78 94 84 02
------------------------------------------------------------------------
* Re: xfs_repair use all memory
2015-06-12 10:27 ` Emmanuel Florac
@ 2015-06-12 10:36 ` Lampa
2015-06-12 13:43 ` Emmanuel Florac
0 siblings, 1 reply; 7+ messages in thread
From: Lampa @ 2015-06-12 10:36 UTC (permalink / raw)
To: Emmanuel Florac; +Cc: xfs
Hello,
thank you for the fast reply. I ran memtest and it's OK. It's a virtual
host running under ESXi 5.5.
If I try to delete a directory on the drive, I get a lot of errors:
[ 150.327050] Pid: 5892, comm: rm Not tainted 3.2.0-4-amd64 #1 Debian
3.2.68-1+deb7u1
[ 150.327053] Call Trace:
[ 150.327066] [<ffffffffa02fb567>] ? xfs_da_do_buf+0x1aa/0x53c [xfs]
[ 150.327078] [<ffffffffa02fbb67>] ?
xfs_da_read_buf.constprop.12+0x24/0x29 [xfs]
[ 150.327091] [<ffffffffa02fc795>] ? xfs_da_path_shift+0xb6/0x171 [xfs]
[ 150.327104] [<ffffffffa03025f1>] ? xfs_dir2_leafn_toosmall+0x1ae/0x1db [xfs]
[ 150.327117] [<ffffffffa02fceb1>] ? xfs_da_join+0x9a/0x18f [xfs]
[ 150.327129] [<ffffffffa02fada3>] ? xfs_da_fixhashpath+0x5c/0xe3 [xfs]
[ 150.327143] [<ffffffffa0303485>] ?
xfs_dir2_node_removename+0x445/0x470 [xfs]
[ 150.327155] [<ffffffffa02fd850>] ? xfs_dir_removename+0x112/0x120 [xfs]
[ 150.327168] [<ffffffffa02e2cfe>] ? xfs_remove+0x1f9/0x321 [xfs]
[ 150.327180] [<ffffffffa02db16d>] ? xfs_vn_unlink+0x37/0x6e [xfs]
[ 150.327185] [<ffffffff81105719>] ? vfs_unlink+0x68/0xbb
[ 150.327189] [<ffffffff81106613>] ? do_unlinkat+0xd0/0x156
[ 150.327194] [<ffffffff81356172>] ? system_call_fastpath+0x16/0x1b
[ 150.327198] XFS (dm-0): Internal error xfs_trans_cancel at line
1925 of file /build/linux-4wkEzn/linux-3.2.68/fs/xfs/xfs_trans.c.
Caller 0xffffffffa02e2e11
[ 150.327200]
[ 150.327203] Pid: 5892, comm: rm Not tainted 3.2.0-4-amd64 #1 Debian
3.2.68-1+deb7u1
[ 150.327205] Call Trace:
[ 150.327220] [<ffffffffa031265e>] ? xfs_trans_cancel+0x55/0xd7 [xfs]
[ 150.327232] [<ffffffffa02e2e11>] ? xfs_remove+0x30c/0x321 [xfs]
[ 150.327241] [<ffffffffa02db16d>] ? xfs_vn_unlink+0x37/0x6e [xfs]
[ 150.327245] [<ffffffff81105719>] ? vfs_unlink+0x68/0xbb
[ 150.327248] [<ffffffff81106613>] ? do_unlinkat+0xd0/0x156
[ 150.327250] [<ffffffff81356172>] ? system_call_fastpath+0x16/0x1b
[ 150.327254] XFS (dm-0): xfs_do_force_shutdown(0x8) called from line
1926 of file /build/linux-4wkEzn/linux-3.2.68/fs/xfs/xfs_trans.c.
Return address = 0xffffffffa0312677
[ 150.327262] XFS (dm-0): Corruption of in-memory data detected.
Shutting down filesystem
[ 150.327264] XFS (dm-0): Please umount the filesystem and rectify
the problem(s)
[ 159.304287] XFS (dm-0): xfs_log_force: error 5 returned.
I will try your xfsprogs version.
2015-06-12 12:27 GMT+02:00 Emmanuel Florac <eflorac@intellique.com>:
> On Fri, 12 Jun 2015 12:14:49 +0200,
> Lampa <lampacz@gmail.com> wrote:
>
>> I have an 8 TB filesystem but am unable to check or repair it.
>>
>> df -h
>> /dev/mapper/vg0-data 8,0T 7,5T
>> 525G 94% /home
>
> I have successfully repaired bigger FS with 2 GB of RAM, that's weird.
> Try using a fresher xfs_repair (3.2.x), newer versions are easier on
> RAM.
>
> Just in case it may help you, here's one I compiled myself for amd64:
>
> http://update.intellique.com/pub/xfs_repair-3.2.2.gz
>
> However, if you are getting XFS errors, please first make sure you
> don't have an underlying (hardware?) problem...
>
> --
> ------------------------------------------------------------------------
> Emmanuel Florac | Direction technique
> | Intellique
> | <eflorac@intellique.com>
> | +33 1 78 94 84 02
> ------------------------------------------------------------------------
--
Lampa
* Re: xfs_repair use all memory
2015-06-12 10:36 ` Lampa
@ 2015-06-12 13:43 ` Emmanuel Florac
2015-06-17 11:33 ` Lampa
0 siblings, 1 reply; 7+ messages in thread
From: Emmanuel Florac @ 2015-06-12 13:43 UTC (permalink / raw)
To: Lampa; +Cc: xfs
On Fri, 12 Jun 2015 12:36:58 +0200,
Lampa <lampacz@gmail.com> wrote:
> Return address = 0xffffffffa0312677
> [ 150.327262] XFS (dm-0): Corruption of in-memory data detected.
> Shutting down filesystem
> [ 150.327264] XFS (dm-0): Please umount the filesystem and rectify
> the problem(s)
> [ 159.304287] XFS (dm-0): xfs_log_force: error 5 returned.
>
>
> i will try your xfsprogs version
>
Yes, and try to mount/unmount the filesystem first to clear the log if
possible, before running xfs_repair.
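As a rough sketch, replaying the log this way (using the device path from earlier in the thread; the mount point is illustrative) would be:

```shell
# Mounting lets the kernel replay the XFS log; unmount cleanly afterwards.
mount /dev/mapper/vg0-data /mnt
umount /mnt

# Only if the log cannot be replayed: -L zeroes the log before repair,
# discarding whatever metadata updates it still held. Last resort only.
xfs_repair -L /dev/mapper/vg0-data
```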
--
------------------------------------------------------------------------
Emmanuel Florac | Direction technique
| Intellique
| <eflorac@intellique.com>
| +33 1 78 94 84 02
------------------------------------------------------------------------
* Re: xfs_repair use all memory
2015-06-12 13:43 ` Emmanuel Florac
@ 2015-06-17 11:33 ` Lampa
2015-06-17 11:37 ` Carsten Aulbert
2015-06-17 11:42 ` Emmanuel Florac
0 siblings, 2 replies; 7+ messages in thread
From: Lampa @ 2015-06-17 11:33 UTC (permalink / raw)
To: Emmanuel Florac; +Cc: xfs
Hello,
Unable to repair: running without the -n parameter requires even more
memory; xfs_repair is always killed by the kernel.
2015-06-12 15:43 GMT+02:00 Emmanuel Florac <eflorac@intellique.com>:
> On Fri, 12 Jun 2015 12:36:58 +0200,
> Lampa <lampacz@gmail.com> wrote:
>
>> Return address = 0xffffffffa0312677
>> [ 150.327262] XFS (dm-0): Corruption of in-memory data detected.
>> Shutting down filesystem
>> [ 150.327264] XFS (dm-0): Please umount the filesystem and rectify
>> the problem(s)
>> [ 159.304287] XFS (dm-0): xfs_log_force: error 5 returned.
>>
>>
>> i will try your xfsprogs version
>>
>
> Yes, and try to mount/unmount the filesystem first to clear the log if
> possible, before running xfs_repair.
>
>
> --
> ------------------------------------------------------------------------
> Emmanuel Florac | Direction technique
> | Intellique
> | <eflorac@intellique.com>
> | +33 1 78 94 84 02
> ------------------------------------------------------------------------
--
Lampa
* Re: xfs_repair use all memory
2015-06-17 11:33 ` Lampa
@ 2015-06-17 11:37 ` Carsten Aulbert
2015-06-17 11:42 ` Emmanuel Florac
1 sibling, 0 replies; 7+ messages in thread
From: Carsten Aulbert @ 2015-06-17 11:37 UTC (permalink / raw)
To: Lampa, Emmanuel Florac; +Cc: xfs
Hi
On 06/17/2015 01:33 PM, Lampa wrote:
> Unable to repair: running without the -n parameter requires even more
> memory; xfs_repair is always killed by the kernel.
>
What we did (not really beautiful, but it worked for a large
filesystem): hook up an SSD to the system (internal or USB) and make it
swap space for the box.
It will not help with speed, but it may give xfs_repair enough "memory"
to finish.
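As a sketch (run as root; /dev/sdX1 is a placeholder for the SSD partition, so double-check the device name before formatting it as swap):

```shell
# Initialize the SSD partition as swap and enable it for the repair run.
mkswap /dev/sdX1
swapon /dev/sdX1

# Confirm the extra swap is visible before starting xfs_repair.
swapon -s
```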
Cheers
Carsten
--
Dr. Carsten Aulbert, Atlas cluster administration
Max Planck Institute for Gravitational Physics (Albert Einstein Institute)
Callinstraße 38, 30167 Hannover, Germany
Tel: +49 511 762 17185, Fax: +49 511 762 17193
* Re: xfs_repair use all memory
2015-06-17 11:33 ` Lampa
2015-06-17 11:37 ` Carsten Aulbert
@ 2015-06-17 11:42 ` Emmanuel Florac
1 sibling, 0 replies; 7+ messages in thread
From: Emmanuel Florac @ 2015-06-17 11:42 UTC (permalink / raw)
To: Lampa; +Cc: xfs
On Wed, 17 Jun 2015 13:33:48 +0200,
Lampa <lampacz@gmail.com> wrote:
> Unable to repair: running without the -n parameter requires even more
> memory; xfs_repair is always killed by the kernel.
>
If you have some storage room available on another filesystem, create
some swap space there. I've done it once on a machine with 1 GB of RAM
(adding 8 GB of swap). The repair takes much, much longer, but it works.
To create 64 GB of swap space in a file, do something like this (the
chmod keeps swapon from complaining about insecure permissions):
dd if=/dev/zero bs=1M count=65536 of=/some/file.swp
chmod 0600 /some/file.swp
mkswap /some/file.swp
swapon /some/file.swp
However, preferably don't use a single spinning drive; it's much too
slow. Flash devices are preferable; even a good-quality USB thumb
drive will do.
--
------------------------------------------------------------------------
Emmanuel Florac | Direction technique
| Intellique
| <eflorac@intellique.com>
| +33 1 78 94 84 02
------------------------------------------------------------------------
end of thread, other threads: [~2015-06-17 11:42 UTC | newest]
Thread overview: 7+ messages
-- links below jump to the message on this page --
2015-06-12 10:14 xfs_repair use all memory Lampa
2015-06-12 10:27 ` Emmanuel Florac
2015-06-12 10:36 ` Lampa
2015-06-12 13:43 ` Emmanuel Florac
2015-06-17 11:33 ` Lampa
2015-06-17 11:37 ` Carsten Aulbert
2015-06-17 11:42 ` Emmanuel Florac