linux-raid.vger.kernel.org archive mirror
* Problem with Raid1 when all drives failed
@ 2013-06-20  6:22 Baldysiak, Pawel
  2013-06-20  9:11 ` Stan Hoeppner
  2013-06-24  7:08 ` NeilBrown
  0 siblings, 2 replies; 3+ messages in thread
From: Baldysiak, Pawel @ 2013-06-20  6:22 UTC (permalink / raw)
  To: neilb@suse.de; +Cc: linux-raid@vger.kernel.org

Hi Neil,

We have observed strange behavior of a RAID1 volume when all of its drives have failed.
Here is our test case:

Steps to reproduce:
1. Create a 2-drive RAID1 volume (tested with both native and IMSM metadata)
2. Wait for the initial resync to finish
3. Hot-unplug both drives of the RAID1 volume (rough commands are sketched below)
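
Roughly, the commands we use for these steps look like this (a sketch only; device names and the unplug method are examples, since in our tests we also physically pull the disks):

    # 1. Create a 2-drive RAID1 (native metadata shown; for IMSM the array
    #    is created inside an IMSM container instead).
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

    # 2. Wait for the initial resync to finish.
    mdadm --wait /dev/md0

    # 3. "Hot-unplug" both drives; deleting the SCSI devices via sysfs is
    #    a software stand-in for pulling the disks.
    echo 1 > /sys/block/sda/device/delete
    echo 1 > /sys/block/sdb/device/delete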

Actual behavior:
The RAID1 volume is still present in the OS as a degraded one-drive array.

Expected behavior:
Should the RAID volume disappear from the OS?

I see that when a drive is removed from the OS, udev runs "mdadm -If <>" for the missing member, which tries to write "faulty" to the state of that array member.
I also see that the md driver prevents this operation for the last drive in a RAID1 array, so when both drives fail, nothing really happens to the drive that fails second.
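
For reference, here is roughly what I believe that sysfs write boils down to (illustrative only; the md device and member names are examples from our setup, and the exact behavior may differ between kernel versions):

    # Mark a member of /dev/md0 as faulty via sysfs, which is roughly what
    # "mdadm -If" ends up doing for a removed disk.
    echo faulty > /sys/block/md0/md/dev-sda/state   # first failure: accepted
    echo faulty > /sys/block/md0/md/dev-sdb/state   # last remaining drive:
    # the raid1 error handler refuses to fail the last drive, so this write
    # appears to come back with EBUSY and the member stays active.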

This can be very dangerous: if the user has a file system mounted on this array, it can lead to unstable system behavior or even a system crash. Moreover, the user does not get proper information about the state of the array.

How should this work according to the design? Should mdadm stop the volume when all of its members disappear?
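
For now the only thing that seems to help is tearing the array down by hand once the last member is gone. A sketch of what we do (mount point and device name are examples; the unmount itself may misbehave if the file system has already errored out):

    # Manual cleanup after both members have been unplugged.
    umount /mnt/test        # example mount point; may fail if the fs is wedged
    mdadm --stop /dev/md0   # stops the now drive-less array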

Pawel Baldysiak
