From: Berkey B Walker <berk@panix.com>
To: Kyler Laird <kyler-keyword-linuxraid00.a7e7f0@lairds.com>
Cc: linux-raid@vger.kernel.org
Subject: Re: recovering from a controller failure
Date: Sat, 29 May 2010 15:46:31 -0400 [thread overview]
Message-ID: <4C016F17.4050709@panix.com> (raw)
In-Reply-To: <20100529190751.GM2167@flews.lairds.us>
To me, things do not look good for a quick fix. It kinda looks like you
killed it. Any info about the details of how things died, and exactly
what you did after things started going south? What are you using for a
controller? It sounds like it is ready for the dump. Any messages from
the controller itself?
b-
Kyler Laird wrote:
> Recently a drive failed on one of our file servers. The machine has
> three RAID6 arrays (15 1TB drives each, plus spares). I let the spare
> rebuild and then started the process of replacing the drive.
>
> Unfortunately I'd misplaced the list of drive IDs so I generated a new
> list in order to identify the failed drive. I used "smartctl" and made
> a quick script to scan all 48 drives and generate pretty output. That
> was a mistake. After running it a couple of times, one of the
> controllers failed and several disks in the first array were marked
> failed.
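
For anyone curious what such a scan looks like: here is a minimal sketch of
a drive-scan script of the kind described above. The original script is not
shown, so the helper name, device naming, and output format are assumptions;
note that hammering 48 drives with back-to-back SMART queries is exactly the
kind of load that can expose a marginal controller.

```shell
#!/bin/bash
# Hypothetical reconstruction of a SMART scan script (the original is not
# shown). SMARTCTL is overridable so the parsing can be exercised without
# real disks.
SMARTCTL=${SMARTCTL:-smartctl}

scan() {
    # Print "device serial health" for each block device given.
    for dev in "$@"; do
        serial=$("$SMARTCTL" -i "$dev" |
            awk -F: '/Serial Number/ {gsub(/ /, "", $2); print $2}')
        health=$("$SMARTCTL" -H "$dev" |
            awk -F: '/overall-health/ {gsub(/ /, "", $2); print $2}')
        printf '%-10s %-20s %s\n' "$dev" "$serial" "$health"
    done
}

# On the file server this would be invoked as something like:
#   scan /dev/sd[a-z] /dev/sd[a-z][a-z]
```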
>
> I worked on the machine for a while. (It has an NFS root.) I got some
> information from it before it rebooted (via watchdog). I've dumped all
> of the information here.
> http://lairds.us/temp/ucmeng_md/
>
> In mdstat_0 you can see the status of the arrays right after the
> controller failure. mdstat_1 shows the status after reboot.
>
> sys_block shows a listing of the block devices so you can see that the
> problem drives are on controller 1.
>
> The examine_sd?1 files show the "mdadm -E" output from each drive in
> md0. Note that the Events count is different for the drives on the
> problem controller.
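
A quick way to tabulate those Events counters is to grep them out of the
`mdadm --examine` output. The helper below is illustrative (the device paths
in the comment are assumed from the examine_sd?1 file names); members whose
count lags the rest are the ones the kernel ejected when the controller died.

```shell
# Extract the Events counter from `mdadm --examine` output. Handles both
# the 0.90 ("Events : 0.104") and 1.x ("Events : 2287") superblock styles.
events_of() {
    awk -F: '/Events/ {gsub(/ /, "", $2); print $2}'
}

# On the affected machine, something like:
#   for d in /dev/sd?1; do
#       printf '%s %s\n' "$d" "$(mdadm --examine "$d" | events_of)"
#   done
```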
>
> I'd like to know if this is something I can recover. I do have
> backups, but it's a huge pain to restore this much data.
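
Neither mail spells out a procedure, but for readers in the same spot: when
array members disagree only on the Events count, the usual first attempt is
a forced assemble, since mdadm will bump slightly-stale event counters
rather than refuse the members. This is a sketch only; the device names are
assumptions, and it should be tried only after the flaky controller has been
replaced.

```shell
# Stop the array, then reassemble with --force so members whose Events
# count lags slightly are accepted instead of rejected.
force_assemble() {
    array=$1
    shift
    mdadm --stop "$array"
    mdadm --assemble --force "$array" "$@"
}

# On the affected machine, something like:
#   force_assemble /dev/md0 /dev/sd[a-o]1
```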
>
> Thank you.
>
> --kyler