From: Maarten <maarten@ultratux.net>
To: linux-raid@vger.kernel.org
Subject: Re: Raid6 array crashed-- 4-disk failure...(?)
Date: Tue, 16 Sep 2008 20:12:47 +0200
Message-ID: <48CFF71F.3070803@ultratux.net>
In-Reply-To: <48CFF1E9.30006@ultratux.net>

Maarten wrote:
> Andre Noll wrote:
>> On 19:14, Maarten wrote:

> 
> I'm saving the 15 GB of data I did not have any backup of elsewhere now, 
> and will subsequently start a forced resync and then hot-add the 7th drive.

Hm... I'm having some trouble here. In short, I have to stop the array 
before it will resync, and even when I do that it refuses. Peculiar. 
Maybe my mdadm or its manpage is out of date; I'll google.
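
First sanity check, for completeness, is verifying which mdadm version is 
actually installed:

apoc ~ # mdadm --version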


apoc ~ # mdadm --update=resync /dev/md5
mdadm: --update does not set the mode, and so cannot be the first option.
apoc ~ # man mdadm
apoc ~ # mdadm --assemble --update=resync /dev/md5
mdadm: device /dev/md5 already active - cannot assemble it

apoc ~ # mdadm -S /dev/md5
mdadm: stopped /dev/md5
apoc ~ # mdadm --assemble --update=resync /dev/md5
mdadm: failed to RUN_ARRAY /dev/md5: Input/output error
apoc ~ # cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [multipath]

md5 : inactive sdl1[0] sdi1[5] sdh1[4] sdf1[3] sdk1[2] sdj1[1]
       2925435648 blocks

apoc ~ # mdadm --assemble --update=resync /dev/md5
mdadm: device /dev/md5 already active - cannot assemble it
apoc ~ # mdadm -S /dev/md5
mdadm: stopped /dev/md5
apoc ~ # cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [multipath]

unused devices: <none>
apoc ~ # mdadm --assemble --update=resync /dev/md5
mdadm: /dev/md5 assembled from 6 drives - not enough to start the array while not clean - consider --force.
apoc ~ # cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [multipath]

md5 : inactive sdl1[0](S) sdi1[5](S) sdh1[4](S) sdf1[3](S) sdk1[2](S) sdj1[1](S)
       2925435648 blocks

unused devices: <none>
apoc ~ # mdadm --assemble --update=resync --force /dev/md5
mdadm: /dev/md5 has been started with 6 drives (out of 7).
apoc ~ # cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [multipath]

md5 : active raid6 sdl1[0] sdi1[5] sdh1[4] sdf1[3] sdk1[2] sdj1[1]
       2437863040 blocks level 6, 64k chunk, algorithm 2 [7/6] [UUUUUU_]
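
For what it's worth, the reason --force was needed is presumably that the 
superblocks disagree about the array state after the crash. Comparing the 
per-disk event counters shows which members fell behind (the grep pattern 
here is just a guess at what's useful to see):

apoc ~ # mdadm --examine /dev/sd[fhijkl]1 | egrep 'Event|/dev/sd'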


So, I'm back at square one with that.
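
Next step, once I trust the array again, is hot-adding the 7th drive as 
planned. Assuming the replacement shows up as /dev/sdm (hypothetical name; 
substitute whatever the kernel assigns), that should be something like:

apoc ~ # mdadm /dev/md5 --add /dev/sdm1
apoc ~ # cat /proc/mdstat

after which recovery onto the new disk should start by itself.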

Regards,
Maarten
