From: Dave Chinner <david@fromorbit.com>
To: Richard Neuboeck <hawk@tbi.univie.ac.at>
Cc: xfs@oss.sgi.com
Subject: Re: File System Corruption - Internal error xfs_dir3_data_reada_verify
Date: Wed, 13 Aug 2014 20:42:55 +1000
Message-ID: <20140813104255.GO26465@dastard>
In-Reply-To: <53EB3302.1090000@tbi.univie.ac.at>
On Wed, Aug 13, 2014 at 11:42:26AM +0200, Richard Neuboeck wrote:
> Hi,
>
> for some time now our storage machine using XFS stops the file
> system due to some reason I don't seem to have found so far. In this
> process the file system gets corrupted and the attached trace log is
> shown.
What's the workload the VM runs?
> After xfs_repair is run it's running again for an always
> changing amount of time.
What errors does xfs_repair correct? Can you post the output of a
repair run that corrects the issue?
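If you don't still have it, something like the following will capture
it next time (vdb taken from your logs - adjust the device name and
output filenames to suit; this is just a sketch):

  # the filesystem must be unmounted first
  umount /dev/vdb

  # dry run: report what would be fixed without modifying anything
  xfs_repair -n /dev/vdb 2>&1 | tee xfs_repair-n.out

  # the actual repair, with all output captured
  xfs_repair /dev/vdb 2>&1 | tee xfs_repair.out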
> In general it fails within a few hours or
> days. There are no relevant log messages before the entries shown
> below and no immediate actions that lead to this condition. So far
> my experiments (Ubuntu upgrade from 10.04 to 14.04, different kernel
> versions, changes to the hypervisor) didn't show any lasting effects
> (positive or negative). If any one could shed some light on what XFS
> is trying to tell me it would be highly appreciated.
The directory read verifier is failing because the block it read does
not contain directory data, i.e. the directory has somehow been
corrupted. The block contains file data, but that's about all
I can tell you right now.
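If you want to look at the offending block yourself, xfs_db can dump
it read-only. Roughly (the address is the one from the error output
quoted further down; the type name may need to be dir2 rather than
dir3 depending on the filesystem format, and the address may need
converting to decimal if xfs_db doesn't like the 0x prefix):

  xfs_db -r /dev/vdb
  xfs_db> daddr 0x160003e488
  xfs_db> type dir3
  xfs_db> print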
> I've found the mention of 'xfs_dir3_data_reada_verify' in the
> mailing list but didn't find a solution that was applicable.
It's just checking the block read from disk.
However, that's not the only error that is occurring:
> [ 5247.327164] XFS (vdb): metadata I/O error: block 0x160003e488 ("xfs_trans_read_buf_map") error 117 numblks 8
> [ 5252.482540] XFS: Internal error XFS_WANT_CORRUPTED_GOTO at line 1602 of file /build/buildd/linux-3.13.0/fs/xfs/xfs_alloc.c. Caller 0xffffffffa0088485
There are corrupted free space btrees. In this case, the by-bno tree
has been found to be inconsistent. So there's something corrupting
more than just the directory.
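If you're curious, xfs_db can walk the freespace btrees for an AG
directly, e.g. for AG 0 (repeat for whichever AG the errors point at;
again only a sketch, run read-only on the unmounted device):

  xfs_db -r /dev/vdb
  xfs_db> agf 0
  xfs_db> addr bnoroot
  xfs_db> print

But xfs_repair -n will find the inconsistencies for you much faster
than poking around by hand.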
So, more information is needed. Let's start with:
http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F
and the output of xfs_repair. A metadump image of the filesystem
taken before you run repair would also be helpful. And finally, we
need the configuration of the block devices the VM is using (i.e.
virtio, cache=?, etc). Describing the physical storage backing the VM
might also be helpful - it could be host-based corruption rather than
guest-based corruption that is occurring...
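For the metadump and the device config, something along these lines
is what I'm after (device and guest names are only examples, and the
virsh command assumes a libvirt-managed guest):

  # in the guest, with the fs unmounted, before running xfs_repair:
  # xfs_metadump copies metadata only (no file data) and obfuscates
  # filenames by default, so it's safe to post
  xfs_metadump /dev/vdb vdb.metadump
  gzip vdb.metadump

  # on the host: how the guest's disks are configured
  virsh dumpxml <guest-name> | grep -A 8 '<disk'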
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs