* emergency recovery
From: jeff sacksteder @ 2008-04-10 6:29 UTC (permalink / raw)
To: linux-raid
I have a root raid-5 set that has had 2 disks fail simultaneously. This
is catastrophically bad news. I'm not sure what happened, but the
disks spin up and appear to function independently without problem.
The superblocks are all out of sync if I look at them with mdadm
--examine.
It is essential that I not make this worse than it is. Is the
information in the wiki current regarding recovery of failed arrays?
Are there any other resources I should be looking at?
* Re: emergency recovery
From: Mario 'BitKoenig' Holbe @ 2008-04-10 6:38 UTC (permalink / raw)
To: linux-raid
jeff sacksteder <jsacksteder@gmail.com> wrote:
> It is essential that I not make this worse than it is. Is the
Well, with this in mind, the first thing you should do is take raw images
of all the component drives before even thinking about any kind of
recovery, just to make sure you have more than one try if something
goes wrong.
regards
Mario
--
The social dynamics of the net are a direct consequence of the fact that
nobody has yet developed a Remote Strangulation Protocol. -- Larry Wall
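Mario's advice above can be sketched with plain dd; the device names and
backup directory here are assumptions, so adjust them for the actual array:

```shell
#!/bin/sh
# Take a raw image of each component drive before any recovery attempt.
# /dev/sd[a-e] and /mnt/backup are hypothetical; substitute your members.
mkdir -p /mnt/backup
for d in sda sdb sdc sdd sde; do
    # conv=noerror,sync continues past read errors, padding bad blocks
    # with zeros so offsets in the image stay aligned with the disk.
    dd if="/dev/$d" of="/mnt/backup/$d.img" bs=64k conv=noerror,sync
done
```

Restoring is the same command with if= and of= swapped, writing an image
back onto a spare drive.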
* Re: emergency recovery
From: john @ 2008-04-10 13:08 UTC (permalink / raw)
To: Mario 'BitKoenig' Holbe; +Cc: linux-raid
Mario 'BitKoenig' Holbe wrote:
> jeff sacksteder <jsacksteder@gmail.com> wrote:
>> It is essential that I not make this worse than it is.
>
> Well, this in mind the first thing you should do is taking raw images
> from all the component drives before even thinking about any kind of
> recovery. Just to make sure you have more than one try when something
> goes wrong.
>
> regards
> Mario
I have the same problem. Regarding raw images, I have one drive that
clicks periodically, and another that is fine (but stale by 1 event).
What's the best way to make a raw image of a temperamental drive? I
have an exact duplicate that I could image to.
thanks.
john.
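For a drive with a periodic click, GNU ddrescue is usually a better fit
than dd, since it copies the healthy regions first and only then retries
the bad ones. A minimal sketch, assuming /dev/sdb is the clicking drive
and /dev/sdf the identical spare (both names hypothetical):

```shell
# First pass: copy everything easily readable, skipping bad areas (-n).
# -f is required because the output is a device, not a file.
ddrescue -f -n /dev/sdb /dev/sdf rescue.map
# Second pass: go back and retry the remaining bad sectors a few times.
ddrescue -f -r3 /dev/sdb /dev/sdf rescue.map
```

The map file records progress, so the copy can be interrupted and
resumed, which matters when a marginal drive needs time to cool down.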
* Re: emergency recovery
From: john @ 2008-04-10 16:09 UTC (permalink / raw)
To: Mario 'BitKoenig' Holbe; +Cc: linux-raid
Mario 'BitKoenig' Holbe wrote:
> jeff sacksteder <jsacksteder@gmail.com> wrote:
>> It is essential that I not make this worse than it is.
>
> Well, this in mind the first thing you should do is taking raw images
> from all the component drives before even thinking about any kind of
> recovery.
Thanks. I'll try using a fresh drive carrying a ddrescue'd image of the
corrupt drive, and I may do the same with the stale (EVMS
terminology?) drive.
Do I need to dd-image all 6 drives somewhere to be safe?
I want to force-assemble 4 good + 1 stale + 1 corrupt into a degraded raid5
array of 5 good + 1 corrupt. I could avoid dd-imaging the 4 good
drives if I could mark those 4 drives read-only. Is that possible, or
does the forced assembly need to write new md metadata to all
the drives and therefore modify them?
Tell me to start a new thread if this doesn't pertain to the OP's
question.
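On the read-only question: the kernel can be told to reject writes to a
block device with blockdev, though note that a forced assembly normally
updates the superblocks, so md may refuse or fail to start an array whose
members it cannot write; imaging first remains the safer path. A sketch,
with hypothetical device names for the four good members:

```shell
# Mark the known-good members read-only at the kernel block layer.
for d in sdb1 sdc1 sdd1 sde1; do
    blockdev --setro "/dev/$d"
done
blockdev --getro /dev/sdb1    # prints 1 when the read-only flag is set
# Undo later with: blockdev --setrw /dev/sdX1
```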
* Re: emergency recovery
From: jeff sacksteder @ 2008-04-10 23:13 UTC (permalink / raw)
To: john; +Cc: Mario 'BitKoenig' Holbe, linux-raid
I have images of all 5 drives in my set. The set looks like this right now.
/dev/sda1 - active sync
/dev/sdb1 - faulty
/dev/sdc1 - faulty removed
/dev/sdd1 - active sync
/dev/sde1 - active sync
I am led to believe that the way to proceed would be to
force-assemble the array with mdadm. Should I start with 4 drives, or
try to do all 5 at once? Which should I leave out?
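Before choosing which drive to leave out, it can help to compare the event
counters and update times that mdadm --examine reports; the member that
dropped out first will be furthest behind, and is the natural one to omit.
A sketch over the five members listed above:

```shell
# Compare how far each member lags the others before force-assembling.
for d in /dev/sd[abcde]1; do
    echo "== $d =="
    # Events and Update Time show which member fell out of sync first.
    mdadm --examine "$d" | grep -E 'Events|Update Time|State'
done
```

A force-assemble with the four freshest members might then look like
`mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdd1 /dev/sde1 /dev/sdX1`,
where sdX1 stands for whichever of sdb1/sdc1 shows the higher event count.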
* Re: emergency recovery
From: jeff sacksteder @ 2008-04-10 23:45 UTC (permalink / raw)
To: john; +Cc: Mario 'BitKoenig' Holbe, linux-raid
I should also note that I'm doing this from a liveCD, and though I have
modprobe'd the appropriate modules, I still don't see the node for md0
in /dev.
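If udev on the liveCD has not created the node, it can be made by hand;
md devices use block major 9, with the minor equal to the array number.
A sketch:

```shell
# Create /dev/md0 manually if it is missing (block device, major 9, minor 0).
[ -e /dev/md0 ] || mknod /dev/md0 b 9 0
# Or let mdadm create the node itself during assembly:
#   mdadm --assemble --auto=yes /dev/md0 /dev/sd[a-e]1
```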
* Re: emergency recovery
From: Bill Davidsen @ 2008-04-11 14:35 UTC (permalink / raw)
To: jeff sacksteder; +Cc: john, Mario 'BitKoenig' Holbe, linux-raid
jeff sacksteder wrote:
> I should also note that I'm doing this from a liveCD and though I have
> modprobe-ed the appropriate modules, I still don't see the node in
> /dev for md0.
You can try doing an assemble with --force after backing up whatever you
feel you should (or can). Assuming you get an array at that point, you
can do an information-only file system check (fsck -n) to see what the
state of the f/s might be. You can also run the 'check' action to see
how unhappy md thinks things are.
By that time you will have information to use or report back to the list
for more ideas.
--
Bill Davidsen <davidsen@tmr.com>
"Woe unto the statesman who makes war without a reason that will still
be valid when the war is over..." Otto von Bismarck
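Bill's sequence might look like this in practice; a sketch assuming the
five members /dev/sd[a-e]1 listed earlier in the thread:

```shell
# 1. Force assembly past the stale event counts (back up first!).
mdadm --assemble --force /dev/md0 /dev/sd[a-e]1
# 2. Information-only filesystem check: -n reports problems, repairs nothing.
fsck -n /dev/md0
# 3. Ask md to verify parity across the whole array.
echo check > /sys/block/md0/md/sync_action
cat /proc/mdstat                       # watch the check's progress
cat /sys/block/md0/md/mismatch_cnt     # nonzero = parity mismatches found
```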