linux-raid.vger.kernel.org archive mirror
From: "Timothy D. Lenz" <tlenz@vorgon.com>
To: linux-raid@vger.kernel.org
Subject: Raid failing, which command to remove the bad drive?
Date: Fri, 26 Aug 2011 13:13:01 -0700
Message-ID: <4E57FE4D.5080503@vorgon.com>

I have 4 drives set up as 2 pairs. The first pair has 3 partitions on
it, and it seems one of those drives is failing (I'm also going to have
to figure out which physical drive it is so I don't pull the wrong one
out of the case).

It's been a while since I had to replace a drive in the array, and my
notes are a bit confusing. I'm not sure which set of commands I need to
use to remove the drive:


	sudo mdadm --manage /dev/md0 --fail /dev/sdb
	sudo mdadm --manage /dev/md0 --remove /dev/sdb
	sudo mdadm --manage /dev/md1 --fail /dev/sdb
	sudo mdadm --manage /dev/md1 --remove /dev/sdb
	sudo mdadm --manage /dev/md2 --fail /dev/sdb
	sudo mdadm --manage /dev/md2 --remove /dev/sdb

or

	sudo mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
	sudo mdadm /dev/md1 --fail /dev/sdb2 --remove /dev/sdb2
	sudo mdadm /dev/md2 --fail /dev/sdb3 --remove /dev/sdb3

I'm not sure whether I should fail the individual partitions or the whole drive for each array.
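
For what it's worth, this is how I was planning to check what the
arrays actually list as their component devices; I'm assuming those
names are what the --fail/--remove arguments have to match (again just
a sketch, not something I've run for this failure yet):

	# Show each array's member devices (e.g. /dev/sdb1 vs /dev/sdb)
	sudo mdadm --detail /dev/md0
	sudo mdadm --detail /dev/md1
	sudo mdadm --detail /dev/md2

	# /proc/mdstat also lists the components per array
	cat /proc/mdstat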

-------------------------------------
The mails I got are:
-------------------------------------
A Fail event had been detected on md device /dev/md0.

It could be related to component device /dev/sdb1.

Faithfully yours, etc.

P.S. The /proc/mdstat file currently contains the following:

Personalities : [raid1] [raid6] [raid5] [raid4] [multipath]
md1 : active raid1 sdb2[2](F) sda2[0]
       4891712 blocks [2/1] [U_]

md2 : active raid1 sdb3[1] sda3[0]
       459073344 blocks [2/2] [UU]

md3 : active raid1 sdd1[1] sdc1[0]
       488383936 blocks [2/2] [UU]

md0 : active raid1 sdb1[2](F) sda1[0]
       24418688 blocks [2/1] [U_]

unused devices: <none>
-------------------------------------
A Fail event had been detected on md device /dev/md1.

It could be related to component device /dev/sdb2.

Faithfully yours, etc.

P.S. The /proc/mdstat file currently contains the following:

Personalities : [raid1] [raid6] [raid5] [raid4] [multipath]
md1 : active raid1 sdb2[2](F) sda2[0]
       4891712 blocks [2/1] [U_]

md2 : active raid1 sdb3[1] sda3[0]
       459073344 blocks [2/2] [UU]

md3 : active raid1 sdd1[1] sdc1[0]
       488383936 blocks [2/2] [UU]

md0 : active raid1 sdb1[2](F) sda1[0]
       24418688 blocks [2/1] [U_]

unused devices: <none>
-------------------------------------
A Fail event had been detected on md device /dev/md2.

It could be related to component device /dev/sdb3.

Faithfully yours, etc.

P.S. The /proc/mdstat file currently contains the following:

Personalities : [raid1] [raid6] [raid5] [raid4] [multipath]
md1 : active raid1 sdb2[2](F) sda2[0]
       4891712 blocks [2/1] [U_]

md2 : active raid1 sdb3[2](F) sda3[0]
       459073344 blocks [2/1] [U_]

md3 : active raid1 sdd1[1] sdc1[0]
       488383936 blocks [2/2] [UU]

md0 : active raid1 sdb1[2](F) sda1[0]
       24418688 blocks [2/1] [U_]

unused devices: <none>
-------------------------------------
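
Whichever form turns out to be right, my plan (sketched below; I'm
assuming the grep is enough to catch any leftover sdb member) is to
double-check that nothing on /dev/sdb is still listed in any array
before I physically pull the drive:

	# Should print nothing once every sdb partition has been removed
	grep sdb /proc/mdstat

	# Each array should then report a degraded state with one member removed
	sudo mdadm --detail /dev/md0 | grep -Ei 'state|removed'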

Thread overview: 18+ messages
2011-08-26 20:13 Timothy D. Lenz [this message]
2011-08-26 21:25 ` Raid failing, which command to remove the bad drive? Mathias Burén
2011-08-26 22:26   ` Timothy D. Lenz
2011-08-26 22:45     ` Mathias Burén
2011-08-26 23:14       ` Timothy D. Lenz
2011-08-26 22:45 ` NeilBrown
2011-09-01 17:51   ` Timothy D. Lenz
2011-09-02  5:24     ` Simon Matthews
2011-09-02 15:42       ` Timothy D. Lenz
2011-09-03 11:35         ` Simon Matthews
2011-09-03 12:17           ` Robin Hill
2011-09-03 17:03             ` Simon Matthews
2011-09-03 17:04               ` Simon Matthews
2011-09-09 22:01                 ` Bill Davidsen
2011-09-12 20:56                   ` Timothy D. Lenz
2011-09-03 18:45             ` Timothy D. Lenz
2011-09-05  8:57             ` CoolCold
2011-09-09 21:54     ` Bill Davidsen
