From: Brian Foster <bfoster@redhat.com>
To: Tapani Tarvainen <tapani.j.tarvainen@jyu.fi>
Cc: xfs@oss.sgi.com
Subject: Re: "This is a bug."
Date: Thu, 10 Sep 2015 11:05:25 -0400
Message-ID: <20150910150525.GD27863@bfoster.bfoster>
In-Reply-To: <20150910145154.GC27863@bfoster.bfoster>
On Thu, Sep 10, 2015 at 10:51:54AM -0400, Brian Foster wrote:
> On Thu, Sep 10, 2015 at 04:05:30PM +0300, Tapani Tarvainen wrote:
> > On 10 Sep 09:01, Brian Foster (bfoster@redhat.com) wrote:
> >
> > > > It is 2.5GB so not really nice to mail...
> >
> > > Can you compress it?
> >
> > Ah. Of course, should've done it in the first place.
> > Still 250MB though:
> >
> > https://huom.it.jyu.fi/tmp/data1.metadump.gz
> >
>
> First off, I see ~60MB of corruption output before I even get to the
> reported repair failure, so this appears to be extremely severe
> corruption and I wouldn't be surprised if it's ultimately beyond repair
> (not that it matters for you, since you're restoring from backups).
>
> The failure itself is an assert failure against an error return value
> that appears to have a fallback path, so I'm not really sure why it's
> there. I tried just removing it to see what happens. It ran to
> completion, but there was a ton of output, write verifier errors, etc.,
> so I'm not totally sure how coherent the result is yet. I'll run another
> repair pass and do some directory traversals and whatnot and see if it
> explodes...
>
FWIW, the follow-up repair did come up clean, so it appears (so far) to
have put the fs back together from a metadata standpoint. That said,
more than 570k files end up in lost+found, and who knows whether the
files themselves would have contained the expected data once all of the
bmaps are fixed up and whatnot.
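For reference, the restore-and-repair cycle described above amounts to
roughly the following. This is a sketch only: the file and mount point
names are illustrative, and the assert removal was a local xfsprogs
source hack, not a repair option.

```shell
# Illustrative workflow -- names and paths are examples, not the actual session.
gunzip data1.metadump.gz                    # decompress the posted metadump
xfs_mdrestore data1.metadump data1.img      # restore metadata into a sparse image
xfs_repair -f data1.img                     # first pass (tripped the assert)
xfs_repair -f data1.img                     # follow-up pass came up clean
mount -o loop,ro data1.img /mnt             # loop-mount the repaired image
xfs_info /mnt                               # check the resulting geometry
```

Note that a metadump contains metadata only (file contents come back as
holes), so this is useful for debugging repair, not for recovering data.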
Brian
> I suspect what's more interesting at this point is what happened to
> cause this level of corruption? What kind of event led to this? Was it
> a pure filesystem crash or some kind of hardware/raid failure?
>
> Also, do you happen to know the geometry (xfs_info) of the original fs?
> Repair was showing agno's up in the 20k's and now that I've mounted the
> repaired image, xfs_info shows the following:
>
> meta-data=/dev/loop0             isize=256    agcount=24576, agsize=65536 blks
>          =                       sectsz=4096  attr=2, projid32bit=0
>          =                       crc=0        finobt=0 spinodes=0
> data     =                       bsize=4096   blocks=1610612736, imaxpct=25
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
> log      =internal               bsize=4096   blocks=2560, version=2
>          =                       sectsz=4096  sunit=1 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
>
> So that's a 6TB fs with over 24000 allocation groups of size 256MB, as
> opposed to the mkfs default of 6 allocation groups of 1TB each. Is that
> intentional?
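For what it's worth, the arithmetic on those xfs_info numbers checks out
(a quick sanity check using only the values reported above):

```shell
# Sanity-check the reported geometry: agcount * agsize should equal the
# data section's total block count, and blocks * bsize gives the fs size.
agcount=24576
agsize=65536        # blocks per AG
bsize=4096          # bytes per block
blocks=1610612736   # total data blocks

echo $(( agcount * agsize ))              # 1610612736 -> matches "blocks="
echo $(( agsize * bsize / 1024 / 1024 ))  # 256 -> MB per AG
echo $(( blocks * bsize / 1024 ** 4 ))    # 6 -> TB total
```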
>
> Brian
>
> > --
> > Tapani Tarvainen
> >
> > _______________________________________________
> > xfs mailing list
> > xfs@oss.sgi.com
> > http://oss.sgi.com/mailman/listinfo/xfs
>
Thread overview: 21+ messages
2015-09-10 9:18 "This is a bug." Tapani Tarvainen
2015-09-10 10:31 ` Tapani Tarvainen
2015-09-10 11:53 ` Emmanuel Florac
2015-09-10 12:05 ` Tapani Tarvainen
2015-09-10 11:48 ` Emmanuel Florac
2015-09-10 11:55 ` Tapani Tarvainen
2015-09-10 12:30 ` Tapani Tarvainen
2015-09-10 12:36 ` Brian Foster
2015-09-10 12:54 ` Tapani Tarvainen
2015-09-10 13:01 ` Brian Foster
2015-09-10 13:05 ` Tapani Tarvainen
2015-09-10 14:51 ` Brian Foster
2015-09-10 15:05 ` Brian Foster [this message]
2015-09-10 17:52 ` Tapani Tarvainen
2015-09-10 18:01 ` Tapani Tarvainen
2015-09-10 17:31 ` Tapani Tarvainen
2015-09-10 17:55 ` Brian Foster
2015-09-10 18:03 ` Tapani Tarvainen
2015-09-10 18:33 ` Brian Foster
2015-09-11 6:19 ` Tapani Tarvainen
2015-09-11 0:12 ` Eric Sandeen