linux-raid.vger.kernel.org archive mirror
* raid1 missing disk
@ 2017-10-12 14:58 James J Dempsey
  2017-10-12 15:19 ` Adam Goryachev
  2017-10-12 16:48 ` Reindl Harald
  0 siblings, 2 replies; 5+ messages in thread
From: James J Dempsey @ 2017-10-12 14:58 UTC (permalink / raw)
  To: linux-raid; +Cc: James J Dempsey

I have a simple problem with a 2-disk raid-1 configuration that I can’t find the solution for on the internet so I hope someone can help.

One disk failed on my 2-disk raid-1 and I foolishly (physically) removed the disk before software removing it from the array.

The current state is that the array is running degraded with 1 disk.  My goal is to add a new disk and return it to a non-degraded 2-disk array.

I still have the failed disk, but I don’t really want to physically re-install it because the last time I tested that, the array started and showed the pre-failed data, not the current data.  My theory of this is that the computer switched its idea of which disk was /dev/sda and which was /dev/sdb as a result of the original removal.  So I’d like to just continue from this state without the confusion of adding that old disk physically back.

Can I just use mdadm -a to add the new disk into this existing array?  How do I get rid of the missing ‘ghost’ drive?

Thanks for any help you can provide.  I’ll include the output of mdstat and mdadm -D and mdadm.conf below.

—Jim—

$ cat /proc/mdstat
Personalities : [raid1] 
md0 : active raid1 sda1[0]
      1953366016 blocks super 1.2 [2/1] [U_]
      bitmap: 8/15 pages [32KB], 65536KB chunk

unused devices: <none>
$ 
$ sudo mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sat Aug 15 12:52:04 2015
     Raid Level : raid1
     Array Size : 1953366016 (1862.88 GiB 2000.25 GB)
  Used Dev Size : 1953366016 (1862.88 GiB 2000.25 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Thu Oct 12 10:36:32 2017
          State : clean, degraded 
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : rockridge:0  (local to host rockridge)
           UUID : d22065e7:6796a446:b75d7602:71434594
         Events : 560037

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       2       0        0        2      removed
$ 
$ cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md/0  metadata=1.2 UUID=d22065e7:6796a446:b75d7602:71434594 name=rockridge:0

# This configuration was auto-generated on Sat, 15 Aug 2015 13:02:22 -0400 by mkconf
$ 

* Re: raid1 missing disk
  2017-10-12 14:58 raid1 missing disk James J Dempsey
@ 2017-10-12 15:19 ` Adam Goryachev
  2017-10-12 16:48 ` Reindl Harald
  1 sibling, 0 replies; 5+ messages in thread
From: Adam Goryachev @ 2017-10-12 15:19 UTC (permalink / raw)
  To: James J Dempsey, linux-raid



On 13/10/17 01:58, James J Dempsey wrote:
> I have a simple problem with a 2-disk raid-1 configuration that I can’t find the solution for on the internet so I hope someone can help.
>
> One disk failed on my 2-disk raid-1 and I foolishly (physically) removed the disk before software removing it from the array.
>
> The current state is that the array is running degraded with 1 disk.  My goal is to add a new disk and return it to a non-degraded 2-disk array.
>
> I still have the failed disk, but I don’t really want to physically re-install it because the last time I tested that, the array started and showed the pre-failed data, not the current data.  My theory of this is that the computer switched it’s idea of which disk was /dev/sda and which was /dev/sdb as a result of the original removal.  So I’d like to just continue from this state without the confusion of adding that old disk physically back.
>
> Can I just use mdadm -a to add the new disk into this existing array?  How do I get rid of the missing ‘ghost’ drive?
>
> Thanks for any help you can provide.  I’ll include the output of mdstat and mdadm -D and mdadm.conf below.
>
> —Jim—
>
> $ cat /proc/mdstat
> Personalities : [raid1]
> md0 : active raid1 sda1[0]
>        1953366016 blocks super 1.2 [2/1] [U_]
>        bitmap: 8/15 pages [32KB], 65536KB chunk
>
> unused devices: <none>
> $
Perfect...

mdadm --manage /dev/md0 --add /dev/sdb1
Wait for it to finish syncing and you are all done (if only every 
broken raid issue were this easy ;)
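
For what it's worth, a rough sketch of the whole sequence, assuming the 
replacement disk shows up as /dev/sdb (check lsblk first) and should get 
the same partition layout as the surviving /dev/sda:

# copy the partition table from the surviving disk to the new one
# (recent sfdisk handles MBR and GPT; for GPT you may also want to
#  randomize the copied GUIDs afterwards with sgdisk -G /dev/sdb)
$ sudo sfdisk -d /dev/sda | sudo sfdisk /dev/sdb
# add the new partition into the degraded array
$ sudo mdadm --manage /dev/md0 --add /dev/sdb1
# watch the resync until mdstat reports [2/2] [UU]
$ watch cat /proc/mdstat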

Oh, make sure you have read and understood and fixed the timeout issue 
if using desktop drives.
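
In case it is useful, a rough sketch of the usual checks for that 
timeout mismatch (desktop drives without working SCT ERC versus the 
kernel's default 30-second command timeout); device name assumed:

# see whether the drive supports error recovery control at all
$ sudo smartctl -l scterc /dev/sdb
# if supported, cap recovery at 7 seconds (values are in tenths of a second)
$ sudo smartctl -l scterc,70,70 /dev/sdb
# if not supported, raise the kernel's per-device timeout instead
# (not persistent across reboots, so usually scripted via udev or rc.local)
$ echo 180 | sudo tee /sys/block/sdb/device/timeout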

Regards,
Adam

* Re: raid1 missing disk
  2017-10-12 14:58 raid1 missing disk James J Dempsey
  2017-10-12 15:19 ` Adam Goryachev
@ 2017-10-12 16:48 ` Reindl Harald
  2017-10-12 18:40   ` James J Dempsey
  1 sibling, 1 reply; 5+ messages in thread
From: Reindl Harald @ 2017-10-12 16:48 UTC (permalink / raw)
  To: James J Dempsey, linux-raid


On 12.10.2017 at 16:58, James J Dempsey wrote:
> I have a simple problem with a 2-disk raid-1 configuration that I can’t find the solution for on the internet so I hope someone can help.
> 
> One disk failed on my 2-disk raid-1 and I foolishly (physically) removed the disk before software removing it from the array.
> 
> The current state is that the array is running degraded with 1 disk.  My goal is to add a new disk and return it to a non-degraded 2-disk array

what do you mean by "I foolishly (physically) removed the disk before 
software removing it from the array"?

if a disk explodes it also disappears unannounced

"My theory of this is that the computer switched it?s idea of which disk 
was /dev/sda and which was /dev/sdb as a result of the original removal" 
does not matter at all since you need the botaloader anyways on both 
(grub-install) and on a RAID1 it must not matter at all which is sda and 
wich is sdb, both can die at any point in time and both need to be 
bootable at any point in time
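
roughly speaking, a sketch of what that means in practice (device names 
assumed, adjust to your layout):

# put GRUB on both RAID1 members so either disk can boot on its own
grub-install /dev/sda
grub-install /dev/sdb
# regenerate the grub config afterwards (Debian/Ubuntu helper)
update-grub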

* Re: raid1 missing disk
  2017-10-12 16:48 ` Reindl Harald
@ 2017-10-12 18:40   ` James J Dempsey
  2017-10-12 21:02     ` John Stoffel
  0 siblings, 1 reply; 5+ messages in thread
From: James J Dempsey @ 2017-10-12 18:40 UTC (permalink / raw)
  To: linux-raid; +Cc: James J Dempsey


> On Oct 12, 2017, at 12:48 PM, Reindl Harald <h.reindl@thelounge.net> wrote:
>> 
> what do you mean by "I foolishly (physically) removed the disk before software removing it from the array”?

I mean that I wasn’t thinking straight: the first thing I did was pull the failed disk and install the new one, without first using mdadm to remove the failed disk from the array.

> if a disk explodes it also disappears unannounced

Yes that’s true, and this is basically the same situation.

> "My theory of this is that the computer switched it?s idea of which disk was /dev/sda and which was /dev/sdb as a result of the original removal" does not matter at all since you need the botaloader anyways on both (grub-install) and on a RAID1 it must not matter at all which is sda and wich is sdb, both can die at any point in time and both need to be bootable at any point in time

Both are bootable.  What I was talking about here was that when I tried to resolve the problem by putting the failed disk back, the raid array started up and showed me the filesystem data from before the raid failed the bad drive.  I.e. it was not showing me the “good”, current data, presumably because it thought the “failed” drive was the other one.  It was showing me the data from the drive it had previously failed.

Thanks for your advice and thanks to Adam Goryachev for suggesting I just add the new disk to the array.  I will try that.

—Jim—


* Re: raid1 missing disk
  2017-10-12 18:40   ` James J Dempsey
@ 2017-10-12 21:02     ` John Stoffel
  0 siblings, 0 replies; 5+ messages in thread
From: John Stoffel @ 2017-10-12 21:02 UTC (permalink / raw)
  To: James J Dempsey; +Cc: linux-raid

>>>>> "James" == James J Dempsey <jjd@jjd.com> writes:

>> On Oct 12, 2017, at 12:48 PM, Reindl Harald <h.reindl@thelounge.net> wrote:
>>> 
>> what do you mean by "I foolishly (physically) removed the disk before software removing it from the array”?

James> I mean that I wasn’t thinking straight and the first thing I did was remove the disk and install the new disk without using mdadm to manage the array and remove the disk first.

>> if a disk explodes it also disappears unannounced

James> Yes that’s true, and this is basically the same situation.

>> "My theory of this is that the computer switched it?s idea of which disk was /dev/sda and which was /dev/sdb as a result of the original removal" does not matter at all since you need the botaloader anyways on both (grub-install) and on a RAID1 it must not matter at all which is sda and wich is sdb, both can die at any point in time and both need to be bootable at any point in time

James> Both are bootable.  What I was talking about here was when I
James> tried to resolve the problem by putting the failed disk back,
James> the raid array started up and showed me the filesystem data
James> before the raid failed the bad drive.  I.e. it was not showing
James> me the “good” “current” data presumably because it thought the
James> “failed” drive was the other one.  It was showing me the data
James> from the drive it had previously failed.

James> Thanks for your advice and thanks to Adam Goryachev for
James> suggesting I just add the new disk to the array.  I will try
James> that.

Don't forget to run grub-install on the new disk to make it bootable as
well!
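
Once the resync finishes, something like this (a sketch, using the
names from earlier in the thread) should confirm the array is whole
again:

$ cat /proc/mdstat         # expect [2/2] [UU] for md0
$ sudo mdadm -D /dev/md0   # expect "State : clean" with both devices active sync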
