* xfs_repair segfault + debug info
@ 2015-05-29 14:03 Mike Grant
2015-05-29 22:27 ` Dave Chinner
0 siblings, 1 reply; 4+ messages in thread
From: Mike Grant @ 2015-05-29 14:03 UTC (permalink / raw)
To: xfs
We recently had a 180TB XFS filesystem go down after following some
ill-considered advice from a Dell tech (re-onlining a maybe-failed disk,
which one might think was OK...). It's not irreplaceable data, but
xfs_repair segfaults when trying to fix it up, and I thought that might
be of interest here to help fix the segfault. We're not expecting to
recover the data, though it would be nice.
Partial logs and backtraces of xfs_repair runs (both with the latest
CentOS 7 xfsprogs package and with xfs_repair built from git master),
along with copies of the core dumps and a metadump, are at:
https://rsg.pml.ac.uk/shared_files/mggr/xfs_segfault
Maximum memory use was only about 1GB by the time of the crash, and
there was 120GB+ of swap available, so I don't think that was an issue.
The command was "xfs_repair -v /dev/md0 -t 60 -P".
Run time is about 2 hours to a crash and we'll probably want to wipe and
rebuild the server next week sometime. Happy to run tests or gather
more info in the meantime though!
Please let me know if there's anything else that would be useful.
Cheers,
Mike.
Please visit our new website at www.pml.ac.uk and follow us on Twitter @PlymouthMarine
Winner of the Environment & Conservation category, the Charity Awards 2014.
Plymouth Marine Laboratory (PML) is a company limited by guarantee registered in England & Wales, company number 4178503. Registered Charity No. 1091222. Registered Office: Prospect Place, The Hoe, Plymouth PL1 3DH, UK.
This message is private and confidential. If you have received this message in error, please notify the sender and remove it from your system. You are reminded that e-mail communications are not secure and may contain viruses; PML accepts no liability for any loss or damage which may be caused by viruses.
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
* Re: xfs_repair segfault + debug info
2015-05-29 14:03 xfs_repair segfault + debug info Mike Grant
@ 2015-05-29 22:27 ` Dave Chinner
2015-06-01 7:32 ` Mike Grant
0 siblings, 1 reply; 4+ messages in thread
From: Dave Chinner @ 2015-05-29 22:27 UTC (permalink / raw)
To: Mike Grant; +Cc: xfs
On Fri, May 29, 2015 at 03:03:57PM +0100, Mike Grant wrote:
> We recently had a 180TB XFS filesystem go down after following some
> ill-considered advice from a Dell tech (re-onlining a maybe-failed disk,
> which one might think was ok..). It's not irreplaceable data, but
> xfs_repair segfaults when trying to fix up and I thought that might be
> of interest here to help fix the segfault. We're not expecting to
> recover the data, though it would be nice.
>
> Partial logs & backtraces of xfs_repair runs using the latest Centos-7
> xfsprogs package and also run with the xfs_repair built from the git
> master, copies of core dumps and a metadump are at:
> https://rsg.pml.ac.uk/shared_files/mggr/xfs_segfault
Given it is choking on directory corruption repair, I'd strongly
recommend trying the current git version (3.2.3-rc1) here:
git://git.kernel.org/pub/scm/fs/xfs/xfsprogs-dev.git
> Maximum memory use was only about 1GB by the time of the crash, and
> there was 120GB+ of swap available, so I don't think that was an issue.
> The command was "xfs_repair -v /dev/md0 -t 60 -P".
>
> Run time is about 2 hours to a crash and we'll probably want to wipe and
Probably because you turned off prefetch, which makes it *slow*. :P
I'd build the new xfsprogs, restore the metadump to a file on a
different machine, and then run the new xfs_repair binary on the
restored metadump image. That will tell you pretty quickly if the
problem is solved. If it is solved, then you can run the new
xfs_repair on the real server.
Just remember, though, that even once the FS has been repaired,
you'll still have to search for data corruption manually and deal
with that...
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: xfs_repair segfault + debug info
2015-05-29 22:27 ` Dave Chinner
@ 2015-06-01 7:32 ` Mike Grant
2015-06-12 18:49 ` [PATCH] " Mike Grant
0 siblings, 1 reply; 4+ messages in thread
From: Mike Grant @ 2015-06-01 7:32 UTC (permalink / raw)
To: Dave Chinner; +Cc: xfs
On 29/05/15 23:27, Dave Chinner wrote:
> Given it is choking on directory corruption repair, I'd strongly
> recommend trying the current git version (3.2.3-rc1) here:
Thanks for the reply. I did actually grab the git version (as of May
28) before bothering you all and got something that looked like the same
crash. The log is here:
https://rsg.pml.ac.uk/shared_files/mggr/xfs_segfault/xfs_repair_fail-git.log
Since I messed up the backtrace on that log, here it is in full (binary
and core also available from the parent directory of the link above):
#0 libxfs_putbuf (bp=0x100010000) at rdwr.c:656
#1 0x000000000041e7ce in longform_dir2_entry_check (hashtab=<optimized
out>, ino_offset=37, irec=0x7f37ddaafe20, need_dot=0x7fff1955bad0,
num_illegal=0x7fff1955bad8,
ip=0x11696610, ino=20136101, mp=0x7fff1955c170) at phase6.c:2297
#2 process_dir_inode (mp=0x7fff1955c170, agno=agno@entry=0,
irec=irec@entry=0x7f37ddaafe20, ino_offset=ino_offset@entry=37) at
phase6.c:2801
#3 0x00000000004205f6 in traverse_function (wq=0x7fff1955bdc0, agno=0,
arg=0x0) at phase6.c:3085
#4 0x00000000004255fa in prefetch_ag_range (work=0x7fff1955bdc0,
start_ag=<optimized out>, end_ag=204, dirs_only=true, func=0x420560
<traverse_function>) at prefetch.c:906
#5 0x000000000042575b in do_inode_prefetch (mp=0x7fff1955c170,
stride=0, func=0x420560 <traverse_function>, check_cache=<optimized
out>, dirs_only=true) at prefetch.c:969
#6 0x0000000000421365 in traverse_ags (mp=0x7fff1955c170) at phase6.c:3115
#7 phase6 (mp=mp@entry=0x7fff1955c170) at phase6.c:3203
#8 0x00000000004036c6 in main (argc=<optimized out>, argv=<optimized
out>) at xfs_repair.c:808
> Probably because you turned off prefetch, which makes it *slow*. :P
Ah, fair enough - my colleague was trying to reduce RAM usage before we
dug deeper into it and I just picked up his command line when
replicating his results. I'll ditch the -P in future.
> Just remember, though, that even once the FS has been repaired,
> you'll still have to search for data corruption manually and deal
> with that...
Yeah, we're conscious that the data are probably damaged :( My main aim
here is to provide some debug data for an xfs_repair crasher.
Cheers,
Mike.
* [PATCH] Re: xfs_repair segfault + debug info
2015-06-01 7:32 ` Mike Grant
@ 2015-06-12 18:49 ` Mike Grant
0 siblings, 0 replies; 4+ messages in thread
From: Mike Grant @ 2015-06-12 18:49 UTC (permalink / raw)
To: Dave Chinner; +Cc: xfs
[-- Attachment #1: Type: text/plain, Size: 4091 bytes --]
I had a bit of time, so dug into this more and found a possible bug.
longform_dir2_entry_check in phase6.c has a calloc'ed array of xfs_buf
pointers (bplist). On line 2331, it reallocs this array if there turns
out to be more blocks than expected. However, realloc doesn't zero the
new memory like calloc. In unusual circumstances*, things may then blow
up later due to random data populating the new part of the array.
This is easily fixed by zeroing the new part of the array. A diff
against master is attached, using memset to do this (though you might
have a nicer way to do it). With the patch applied, xfs_repair ran
through to completion on my corrupted filesystem.
I also looked through the rest of xfs_repair for similar pairings of
calloc + realloc, but didn't spot any, so I think this is an isolated
case. I've not checked the rest of xfsprogs.
Cheers,
Mike.
* (a bit speculative) Since dir_read_buf zeros the element it's looking
at, I think this can only happen if the realloc adds several members and
one of the first is corrupt. In my case, the realloc went from 35 to 37
members, meaning db must have been 36 without ever being 35. A read
error then caused it to goto out_fix. The crash then occurred in
libxfs_putbuf while looping through the bplist structure checking it for
NULL pointers (and presumably tripping over the non-zeroed data at
position 35?).
I see it loops on freetab->naents rather than num_bps though, so I'm not
certain it's also including the realloced part of the list here, unless
freetab->naents is updated somewhere. This might be another bug, or it
might be that I just don't understand the code.
On 01/06/15 08:32, Mike Grant wrote:
> On 29/05/15 23:27, Dave Chinner wrote:
>> Given it is choking on directory corruption repair, I'd strongly
>> recommend trying the current git version (3.2.3-rc1) here:
>
> Thanks for the reply. I did actually grab the git version (as of May
> 28) before bothering you all and got something that looked like the same
> crash. The log is here:
>
> https://rsg.pml.ac.uk/shared_files/mggr/xfs_segfault/xfs_repair_fail-git.log
>
> Since I messed up the backtrace on that log, here it is in full (binary
> and core also available from the parent directory of the link above):
> #0 libxfs_putbuf (bp=0x100010000) at rdwr.c:656
> #1 0x000000000041e7ce in longform_dir2_entry_check (hashtab=<optimized
> out>, ino_offset=37, irec=0x7f37ddaafe20, need_dot=0x7fff1955bad0,
> num_illegal=0x7fff1955bad8,
> ip=0x11696610, ino=20136101, mp=0x7fff1955c170) at phase6.c:2297
> #2 process_dir_inode (mp=0x7fff1955c170, agno=agno@entry=0,
> irec=irec@entry=0x7f37ddaafe20, ino_offset=ino_offset@entry=37) at
> phase6.c:2801
> #3 0x00000000004205f6 in traverse_function (wq=0x7fff1955bdc0, agno=0,
> arg=0x0) at phase6.c:3085
> #4 0x00000000004255fa in prefetch_ag_range (work=0x7fff1955bdc0,
> start_ag=<optimized out>, end_ag=204, dirs_only=true, func=0x420560
> <traverse_function>) at prefetch.c:906
> #5 0x000000000042575b in do_inode_prefetch (mp=0x7fff1955c170,
> stride=0, func=0x420560 <traverse_function>, check_cache=<optimized
> out>, dirs_only=true) at prefetch.c:969
> #6 0x0000000000421365 in traverse_ags (mp=0x7fff1955c170) at phase6.c:3115
> #7 phase6 (mp=mp@entry=0x7fff1955c170) at phase6.c:3203
> #8 0x00000000004036c6 in main (argc=<optimized out>, argv=<optimized
> out>) at xfs_repair.c:808
[-- Attachment #2: realloc.diff --]
[-- Type: text/x-patch, Size: 750 bytes --]
diff --git a/repair/phase6.c b/repair/phase6.c
index 105bce4..ed44e1b 100644
--- a/repair/phase6.c
+++ b/repair/phase6.c
@@ -2326,12 +2326,15 @@ longform_dir2_entry_check(xfs_mount_t *mp,
db = xfs_dir2_da_to_db(mp, da_bno);
if (db >= num_bps) {
/* more data blocks than expected */
+ int num_bps_prev = num_bps;
num_bps = db + 1;
bplist = realloc(bplist, num_bps * sizeof(struct xfs_buf*));
if (!bplist)
do_error(_("realloc failed in %s (%zu bytes)\n"),
__func__,
num_bps * sizeof(struct xfs_buf*));
+ /* clear new memory as previous bplist was calloc'ed */
+		memset(&bplist[num_bps_prev], 0, (num_bps - num_bps_prev) * sizeof(struct xfs_buf *));
}
if (isblock)
[-- Attachment #3: Type: text/plain, Size: 121 bytes --]