From: David Greaves <david@dgreaves.com>
To: Chris Eddington <chrise@synplicity.com>
Cc: linux-raid@vger.kernel.org
Subject: Re: Raid5 assemble after dual sata port failure
Date: Sat, 10 Nov 2007 09:16:41 +0000
Message-ID: <473576F9.6040602@dgreaves.com>
In-Reply-To: <4734FB4A.4070401@synplicity.com>
Ok - it looks like the raid array is up. There will have been an event-count
mismatch, which is why you needed --force. That may well have caused some
(hopefully minor) corruption.
FWIW, xfs_check is almost never worth running :) (it runs out of memory easily);
xfs_repair -n is much better.
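For context on the event-count mismatch: each member's superblock carries an
Events counter, visible via `mdadm --examine /dev/sd[a-d]1`. A minimal sketch of
the comparison, using hypothetical stand-in values rather than live superblocks:

```shell
# Per-member Events counters as mdadm --examine would report them
# (the values below are hypothetical, for illustration only).
events="sda1 0.4880384
sdb1 0.4880384
sdc1 0.4879912
sdd1 0.4880384"

# The member whose counter lags the maximum is the stale one that
# mdadm refuses to assemble without --force.
max=$(printf '%s\n' "$events" | awk '{print $2}' | sort | tail -n 1)
stale=$(printf '%s\n' "$events" | awk -v m="$max" '$2 != m {print $1}')
echo "stale member(s): $stale"
```

When --force is used, mdadm rewinds the stale member's view to match the rest,
which is exactly where the minor corruption can come from.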
What does the end of dmesg say after trying to mount the fs?
Also try:
xfs_repair -n -L
(NB: xfs_repair may reject -n combined with -L, since -n forbids all writes
while -L zeroes the log; if so, run plain xfs_repair -n.)
I think you then have 2 options:
* xfs_repair -L
This may well lose data that was being written as the drives crashed.
* contact the xfs mailing list
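To keep the order of operations straight, here is the sequence as a checklist
(device name taken from the thread; the commands are listed, not executed here,
since step 3 is destructive):

```shell
# Recovery order of operations for the damaged XFS on /dev/md0:
steps="mount /dev/md0 /mnt    # 1: try to replay the log (safest path)
xfs_repair -n /dev/md0  # 2: dry run; reports damage, writes nothing
xfs_repair -L /dev/md0  # 3: LAST RESORT; zeroes the log, loses in-flight data"
printf '%s\n' "$steps"
```

Step 3 discards any transactions that were in flight when the drives dropped,
so only run it once you've accepted that loss (or imaged the array first).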
David
Chris Eddington wrote:
> Hi David,
>
> I ran xfs_check and get this:
> ERROR: The filesystem has valuable metadata changes in a log which needs to
> be replayed. Mount the filesystem to replay the log, and unmount it before
> re-running xfs_check. If you are unable to mount the filesystem, then use
> the xfs_repair -L option to destroy the log and attempt a repair.
> Note that destroying the log may cause corruption -- please attempt a mount
> of the filesystem before doing this.
>
> After mounting (which fails) and re-running xfs_check it gives the same
> message.
>
> The array info details are below, and it seems to be running correctly(??). I
> interpret the message above as actually a good sign: xfs_check can see the
> filesystem, but the log and perhaps the most recently written data are
> corrupted or will be lost. But I'd like to hear some advice/guidance before
> doing anything permanent with xfs_repair. I'd also like to confirm somehow
> that the array is in the right order, etc. Appreciate your feedback.
>
> Thks,
> Chris
>
>
>
> --------------------
> cat /etc/mdadm/mdadm.conf
> DEVICE /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
> ARRAY /dev/md0 level=raid5 num-devices=4
> UUID=bc74c21c:9655c1c6:ba6cc37a:df870496
> MAILADDR root
>
> cat /proc/mdstat
> Personalities : [raid6] [raid5] [raid4]
> md0 : active raid5 sda1[0] sdd1[2] sdb1[1]
> 1465151808 blocks level 5, 64k chunk, algorithm 2 [4/3] [UUU_]
> unused devices: <none>
>
> mdadm -D /dev/md0
> /dev/md0:
> Version : 00.90.03
> Creation Time : Sun Nov 5 14:25:01 2006
> Raid Level : raid5
> Array Size : 1465151808 (1397.28 GiB 1500.32 GB)
> Device Size : 488383936 (465.76 GiB 500.11 GB)
> Raid Devices : 4
> Total Devices : 3
> Preferred Minor : 0
> Persistence : Superblock is persistent
>
> Update Time : Fri Nov 9 16:26:31 2007
> State : clean, degraded
> Active Devices : 3
> Working Devices : 3
> Failed Devices : 0
> Spare Devices : 0
>
> Layout : left-symmetric
> Chunk Size : 64K
>
> UUID : bc74c21c:9655c1c6:ba6cc37a:df870496
> Events : 0.4880384
>
> Number Major Minor RaidDevice State
> 0 8 1 0 active sync /dev/sda1
> 1 8 17 1 active sync /dev/sdb1
> 2 8 49 2 active sync /dev/sdd1
> 3 0 0 3 removed
>
>
>
> Chris Eddington wrote:
>> Thanks David.
>>
>> I've had cable/port failures in the past, and after re-adding the
>> drive the order changed. I'm not sure why; I noticed it some time
>> ago but don't remember the exact order.
>>
>> On my first attempt to assemble, only two drives came up in the
>> array. Then I tried assembling with --force, and that brought up 3 of
>> the drives. At that point I thought I was good, so I tried mount
>> /dev/md0 and it failed. Would that have written to the disk? I'm
>> using XFS.
>>
>> After that, I tried assembling with different drive orders on the
>> command line, i.e. mdadm -Av --force /dev/md0 /dev/sda1, ... thinking
>> that the order might not be right.
>>
>> At the moment I can't access the machine, but I'll try fsck -n and
>> send you the other info later this evening.
>>
>> Many thanks,
>> Chris
>>
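On the ordering worry: with persistent superblocks, mdadm -A takes each
member's slot from its own superblock (the RaidDevice field in --examine
output), so the device order on the command line doesn't matter. A sketch of
recovering the slot map, with hypothetical device/slot pairs standing in for
real --examine output:

```shell
# Each member's superblock records its own slot; shown here as
# hypothetical "device slot" pairs taken from mdadm --examine output.
slots="sda1 0
sdb1 1
sdd1 2"

# Sorting by slot reconstructs the array order independently of the
# order the devices were listed on the mdadm -A command line.
order=$(printf '%s\n' "$slots" | sort -k2 -n | awk '{print $1}' | paste -sd, -)
echo "array order: $order"
```

So trying different command-line orders won't change the assembled layout; the
superblocks are authoritative.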
Thread overview: 13+ messages
2007-11-07 20:28 Raid5 assemble after dual sata port failure Chris Eddington
2007-11-08 10:33 ` David Greaves
2007-11-09 21:23 ` Chris Eddington
2007-11-10 0:28 ` Chris Eddington
2007-11-10 9:16 ` David Greaves [this message]
2007-11-10 18:46 ` Chris Eddington
2007-11-11 17:09 ` David Greaves
2007-11-11 17:41 ` Chris Eddington
2007-11-11 22:49 ` David Greaves
2007-11-12 1:01 ` Bill Davidsen
2007-11-17 6:31 ` Chris Eddington
2007-11-18 12:25 ` David Greaves
-- strict thread matches above, loose matches on Subject: below --
2007-11-07 20:23 chrise