From: Pallai Roland <dap@mail.index.hu>
To: David Chinner <dgc@sgi.com>
Cc: Linux-Raid <linux-raid@vger.kernel.org>, xfs@oss.sgi.com
Subject: Re: raid5: I lost a XFS file system due to a minor IDE cable problem
Date: Mon, 28 May 2007 17:30:52 +0200
Message-ID: <200705281730.53343.dap@mail.index.hu>
In-Reply-To: <200705281453.55618.dap@mail.index.hu>
On Monday 28 May 2007 14:53:55 Pallai Roland wrote:
> On Friday 25 May 2007 02:05:47 David Chinner wrote:
> > "-o ro,norecovery" will allow you to mount the filesystem and get any
> > uncorrupted data off it.
> >
> > You still may get shutdowns if you trip across corrupted metadata in
> > the filesystem, though.
>
> This filesystem is completely dead.
> [...]
I tried to write an md patch to stop writes when a raid5 array has 2+ failed
drives, but I found it's already done, oops. :) handle_stripe5() quietly
ignores writes in this case; I tried it and it works.
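For reference, this behaviour can be observed from userspace with a throwaway
loopback-backed array. This is only a sketch: the device names (/dev/md9,
/dev/loop5..7), backing-file paths, and sizes are made-up examples, it needs
root and mdadm, and it should only be run on a scratch machine:

```shell
#!/bin/sh
# Hedged sketch: watch md's behaviour when a raid5 array loses two
# members. All device names and paths here are hypothetical examples.
demo_double_failure() {
    # Build three 32 MB backing files and attach them to loop devices.
    for i in 0 1 2; do
        dd if=/dev/zero of=/tmp/r5-$i.img bs=1M count=32 2>/dev/null
        losetup "/dev/loop$((i + 5))" "/tmp/r5-$i.img"
    done
    mdadm --create /dev/md9 --run --level=5 --raid-devices=3 \
        /dev/loop5 /dev/loop6 /dev/loop7
    mdadm /dev/md9 --fail /dev/loop5   # first failure: degraded, still usable
    mdadm /dev/md9 --fail /dev/loop6   # second failure: array is dead
    # With two members gone the write should error out rather than be
    # half-applied to the surviving disk:
    dd if=/dev/zero of=/dev/md9 bs=4k count=1
    # Clean up the scratch array and loop devices.
    mdadm --stop /dev/md9
    for i in 0 1 2; do
        losetup -d "/dev/loop$((i + 5))"
        rm -f /tmp/r5-$i.img
    done
}

if [ "$(id -u)" -eq 0 ] && command -v mdadm >/dev/null 2>&1; then
    demo_double_failure
    demo_status=ran
else
    echo "skipping demo: needs root and mdadm"
    demo_status=skipped
fi
```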
So how did I lose my file system? My first guess about partially successful
writes wasn't right: there were no real writes to the disks after the second
disk was kicked, so from this point of view the scenario is the same as a
simple power loss. Am I thinking right?
There's another layer on this box between md and XFS: loop-aes. I've used it
for years and it's been rock stable, but now it's my first suspect, because I
found a bug in it today:
I assembled my array from n-1 disks, then failed a second disk as a test, and
found that /dev/loop1 still returns *random* data where /dev/md1 returns
nothing; it's definitely a loop-aes bug:
/dev/loop1: [0700]:180907 (/dev/md1) encryption=AES128 multi-key-v3
hq:~# dd if=/dev/md1 bs=1k count=128 skip=128 >/dev/null
dd: reading `/dev/md1': Input/output error
0+0 records in
0+0 records out
hq:~# dd if=/dev/loop1 bs=1k count=128 skip=128 | md5sum
128+0 records in
128+0 records out
131072 bytes (131 kB) copied, 0.027775 seconds, 4.7 MB/s
e2548a924a0e835bb45fb50058acba98 - (!!!)
hq:~# dd if=/dev/loop1 bs=1k count=128 skip=128 | md5sum
128+0 records in
128+0 records out
131072 bytes (131 kB) copied, 0.030311 seconds, 4.3 MB/s
c6a23412fb75eb5a7eb1d6a7813eb86b - (!!!)
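The check above can be scripted. This is a minimal sketch (the device path
and the 128k test range are just examples taken from the transcript): it
reads the same range twice and compares checksums, since any device that
returns different data on back-to-back reads of identical blocks is handing
out garbage, as /dev/loop1 did here:

```shell
#!/bin/sh
# Minimal sketch: read the same 128k range twice from a device (or file)
# and compare md5sums. Identical reads must yield identical checksums;
# a mismatch means the layer below is returning garbage. The path and
# offsets are examples only.
check_stable_reads() {
    dev="$1"
    sum1=$(dd if="$dev" bs=1k count=128 skip=128 2>/dev/null | md5sum | cut -d' ' -f1)
    sum2=$(dd if="$dev" bs=1k count=128 skip=128 2>/dev/null | md5sum | cut -d' ' -f1)
    if [ "$sum1" = "$sum2" ]; then
        echo "$dev: reads are stable ($sum1)"
    else
        echo "$dev: reads are UNSTABLE ($sum1 vs $sum2)"
    fi
}

# e.g., as root: check_stable_reads /dev/loop1
```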
It's not an explanation for my screwed-up file system, but for me it's reason
enough to drop loop-aes. Eh.
--
d