From: Dave Chinner <david@fromorbit.com>
To: Gerard Beekmans <GBeekmans@tsag.net>
Cc: Eric Sandeen <sandeen@sandeen.net>, "xfs@oss.sgi.com" <xfs@oss.sgi.com>
Subject: Re: Unable to mount and repair filesystems
Date: Fri, 30 Jan 2015 09:57:50 +1100 [thread overview]
Message-ID: <20150129225750.GC6282@dastard> (raw)
In-Reply-To: <D90435AEFF34654AA1122988C66C8678023F027956@exchange.tsag.local>
On Thu, Jan 29, 2015 at 09:27:32PM +0000, Gerard Beekmans wrote:
> > -----Original Message-----
> > Are you certain that the volume /
> > storage behind dm-9 is in decent shape? (i.e. is it really even
> > an xfs filesystem?)
.....
> The outage occurred at the SAN level making the NFS storage
> unavailable which in turn turned off all the VMs running on it
> (turned off in the virtual sense).
Define "SAN" outage. All this tells me is that the backing store
went bad in some way and needed recovery, not what the actual
problem in the SAN was. If it was a potential data loss event, then
that's the prime candidate for the storage returning zeros where
there should be data.
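(A quick way to see whether the storage handed back zeros is to look
at the primary superblock directly. A minimal sketch, demonstrated on
a scratch file standing in for the real block device, e.g. /dev/dm-9;
substitute your actual device path. A healthy XFS filesystem starts
with the magic number "XFSB" at offset 0:

```shell
# Scratch file standing in for the block device (assumption: you
# would point this at the real device, e.g. /dev/dm-9).
img=$(mktemp)
printf 'XFSB' > "$img"          # first 4 bytes of a healthy superblock

# Read the first 4 bytes; if the storage returned zeros, the
# magic number will be missing.
magic=$(dd if="$img" bs=1 count=4 2>/dev/null)
if [ "$magic" = "XFSB" ]; then
    echo "superblock magic present"
else
    echo "magic missing - likely zeroed by the storage layer"
fi
rm -f "$img"
```

If the first sector comes back as all zeros, that confirms the loss
happened below the filesystem.)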
The second candidate is the NFS server. What was the NFS server?
Did the NFS server get rebooted? Did the NFS clients (i.e. the
physical machines running the hypervisor, not the guests) get
rebooted too? If you reboot the server, the NFS clients are
supposed to retransmit any unstable data they have to the server. If
the clients are rebooted or the NFS mount forcibly unmounted while
the server is down, then that unstable data is lost forever.
Really, fully zeroed blocks in critical XFS metadata blocks is
almost always an indication of data loss somewhere in the lower
layers of the storage stack. As a precaution, though, if one vmdk
is bad, I'd consider all the others as suspect, even if the
filesystem checkers haven't thrown errors. Random block data loss
can really only be reliably recovered from backups, as user data is
notoriously difficult to validate as correct.
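(To vet the other filesystems without risking further damage, run
xfs_repair in no-modify mode on each unmounted XFS device. A sketch of
enumerating the candidates from fstab - the sample fstab contents and
device names below are illustrative, not from your system; point
$fstab at /etc/fstab for real use:

```shell
# Sample fstab standing in for /etc/fstab (assumption: device names
# here are made up for illustration).
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/dm-8  /data1  xfs   defaults  0 0
/dev/dm-9  /data2  xfs   defaults  0 0
/dev/sda1  /boot   ext4  defaults  0 0
EOF

# Pick out the XFS entries, skipping comment lines.
xfs_devs=$(awk '$1 !~ /^#/ && $3 == "xfs" { print $1 }' "$fstab")
echo "$xfs_devs"

# For each device (unmounted!), a read-only check would then be:
#   xfs_repair -n "$dev"    # -n = no modify; non-zero exit if
#                           # corruption is found
rm -f "$fstab"
```

xfs_repair -n reports what it *would* fix without writing anything,
so it's safe to run across every suspect volume before deciding what
to restore from backup.)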
.....
> It is possible that it is the vmware VMDK file that belongs to
> this VM that is the issue but it does not appear to be corrupt
> from a vmdk standpoint. Just the data inside of it.
Also, you are using VMDK image files, which implies you are running
ESX as your hypervisor, yes? If so, that limits our ability to help
you track down the source of the corruption...
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs