From: Dominik Sennfelder <sennfelder@gmx.de>
To: linux-raid@vger.kernel.org
Subject: Raid Failed - What to do?
Date: Fri, 21 May 2004 00:07:09 +0200
Message-ID: <40AD2C0D.10709@gmx.de>

Hello

I have a RAID 5 made of four 160 GB disks.
One of the disks failed, but I know it is actually OK; this has happened a few times before, and a restart always solved the problem.
This time, however, I tried to raidhotremove the drive and removed the wrong one, and I only recognized the mistake after I had raidhotadded it again.
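For reference, what I did was roughly the following (I am writing the device names from memory, so the partition I actually grabbed may have been a different one):

  # hdg1 was the drive that had been marked as failed
  raidhotremove /dev/md0 /dev/hdk1    # <- here I removed the wrong drive by mistake
  raidhotadd /dev/md0 /dev/hdk1       # re-added it as soon as I noticed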
Now the RAID tries to sync again.
Syslog gets flooded with the following:

May 20 23:52:25 utgard kernel: md: syncing RAID array md0
May 20 23:52:25 utgard kernel: md: minimum _guaranteed_ reconstruction speed: 1000 KB/sec/disc.
May 20 23:52:25 utgard kernel: md: using maximum available idle IO bandwith (but not more than 200000 KB/sec) for reconstruction.
May 20 23:52:25 utgard kernel: md: using 128k window, over a total of 156288256 blocks.
May 20 23:52:25 utgard kernel: md: md0: sync done.

cat /proc/mdstat gives me different output each time I run it:

utgard:~# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid5]
md1 : active raid1 hdd2[1] hdc2[0]
      79055744 blocks [2/2] [UU]
     
md0 : active raid5 hdk1[4] hdi1[3] hdg1[5](F) hde1[0]
      468864768 blocks level 5, 32k chunk, algorithm 2 [4/2] [U__U]
      [>....................]  recovery =  0.0% (128/156288256) finish=13024.0min speed=128K/sec
unused devices: <none>
utgard:~# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid5]
md1 : active raid1 hdd2[1] hdc2[0]
      79055744 blocks [2/2] [UU]
     
md0 : active raid5 hdk1[4] hdi1[3] hdg1[5](F) hde1[0]
      468864768 blocks level 5, 32k chunk, algorithm 2 [4/2] [U__U]
      [>....................]  recovery =  0.0% (128/156288256) finish=13024.0min speed=128K/sec
unused devices: <none>
utgard:~# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid5]
md1 : active raid1 hdd2[1] hdc2[0]
      79055744 blocks [2/2] [UU]
     
md0 : active raid5 hdk1[4] hdi1[3] hdg1[5](F) hde1[0]
      468864768 blocks level 5, 32k chunk, algorithm 2 [4/2] [U__U]
     
unused devices: <none>
utgard:~# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid5]
md1 : active raid1 hdd2[1] hdc2[0]
      79055744 blocks [2/2] [UU]
     
md0 : active raid5 hdk1[4] hdi1[3] hdg1[5](F) hde1[0]
      468864768 blocks level 5, 32k chunk, algorithm 2 [4/2] [U__U]
     
unused devices: <none>

I know that all 4 drives should be OK. How can I recover the RAID without losing data?
Would a restart help?

Thanks buliwyf
