linux-raid.vger.kernel.org archive mirror
* please help - raid 1 degraded
@ 2015-02-11 18:04 sunruh
  2015-02-11 22:12 ` Adam Goryachev
  0 siblings, 1 reply; 8+ messages in thread
From: sunruh @ 2015-02-11 18:04 UTC (permalink / raw)
  To: linux-raid

centos 6.6
2x 240gig ssd in raid1
this is a live running production machine and the raid1 is for /u of
users home dirs.

1 ssd went totally offline and i replaced it after noticing the firmware
levels are not the same.  the new ssd has the same level firmware.

/dev/sdb is the good ssd
/dev/sdc is the new blank ssd

when working it was /u1 from /dev/md127p1 and /u2 from /dev/md127p2
p1 is 80gig and p2 is 160gig for the full 240gig size of the ssd

> ls -al /dev/md*
brw-rw---- 1 root disk   9, 127 Feb 11 11:09 /dev/md127
brw-rw---- 1 root disk 259,   0 Feb 10 20:23 /dev/md127p1
brw-rw---- 1 root disk 259,   1 Feb 10 20:23 /dev/md127p2

/dev/md:
total 8
drwxr-xr-x  2 root root  140 Feb 10 20:24 .
drwxr-xr-x 20 root root 3980 Feb 10 20:24 ..
lrwxrwxrwx  1 root root    8 Feb 11 11:09 240ssd_0 -> ../md127
lrwxrwxrwx  1 root root   10 Feb 10 20:23 240ssd_0p1 -> ../md127p1
lrwxrwxrwx  1 root root   10 Feb 10 20:23 240ssd_0p2 -> ../md127p2
-rw-r--r--  1 root root    5 Feb 10 20:24 autorebuild.pid
-rw-------  1 root root   63 Feb 10 20:23 md-device-map

> ps -eaf | grep mdadm
root      2188     1  0 Feb10 ?        00:00:00 mdadm --monitor --scan -f --pid-file=/var/run/mdadm/mdadm.pid

how do i rebuild /dev/sdc into the mirror of /dev/sdb?

and thanks much for the help!
steve


* Re: please help - raid 1 degraded
  2015-02-11 18:04 please help - raid 1 degraded sunruh
@ 2015-02-11 22:12 ` Adam Goryachev
  2015-02-12  0:09   ` sunruh
  0 siblings, 1 reply; 8+ messages in thread
From: Adam Goryachev @ 2015-02-11 22:12 UTC (permalink / raw)
  To: sunruh, linux-raid

On 12/02/15 05:04, sunruh@prismnet.com wrote:
> centos 6.6
> 2x 240gig ssd in raid1
> this is a live running production machine and the raid1 is for /u of
> users home dirs.
>
> 1 ssd went totally offline and i replaced it after noticing the firmware
> levels are not the same.  the new ssd has the same level firmware.
>
> /dev/sdb is the good ssd
> /dev/sdc is the new blank ssd
>
> when working it was /u1 from /dev/md127p1 and /u2 from /dev/md127p2
> p1 is 80gig and p2 is 160gig for the full 240gig size of the ssd
>
>> ls -al /dev/md*
> brw-rw---- 1 root disk   9, 127 Feb 11 11:09 /dev/md127
> brw-rw---- 1 root disk 259,   0 Feb 10 20:23 /dev/md127p1
> brw-rw---- 1 root disk 259,   1 Feb 10 20:23 /dev/md127p2
>
> /dev/md:
> total 8
> drwxr-xr-x  2 root root  140 Feb 10 20:24 .
> drwxr-xr-x 20 root root 3980 Feb 10 20:24 ..
> lrwxrwxrwx  1 root root    8 Feb 11 11:09 240ssd_0 -> ../md127
> lrwxrwxrwx  1 root root   10 Feb 10 20:23 240ssd_0p1 -> ../md127p1
> lrwxrwxrwx  1 root root   10 Feb 10 20:23 240ssd_0p2 -> ../md127p2
> -rw-r--r--  1 root root    5 Feb 10 20:24 autorebuild.pid
> -rw-------  1 root root   63 Feb 10 20:23 md-device-map
>
>> ps -eaf | grep mdadm
> root      2188     1  0 Feb10 ?        00:00:00 mdadm --monitor --scan -f --pid-file=/var/run/mdadm/mdadm.pid
>
> how do i rebuild /dev/sdc into the mirror of /dev/sdb?
>

Please send the output of fdisk -lu /dev/sd[bc] and cat /proc/mdstat 
(preferably both when it was working and current).

In general, when replacing a failed RAID1 disk, and assuming you 
configured it the way I think you did:
1) fdisk -lu /dev/sdb
Find out the exact partition sizes
2) fdisk /dev/sdc
Create the new partitions exactly the same as /dev/sdb
3) mdadm --manage /dev/md127 --add /dev/sdb1
Add the partition to the array
4) cat /proc/mdstat
Watch the rebuild progress, once it is complete, relax.
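
For example, here is a rough sketch of the partitioned case. The device names 
sdGOOD (surviving disk) and sdNEW (replacement) are placeholders only, so 
substitute your real ones; sfdisk's dump/restore is just a shortcut for steps 
1 and 2 on an MBR disk:

  sfdisk -d /dev/sdGOOD | sfdisk /dev/sdNEW    # copy the partition table
  mdadm --manage /dev/md127 --add /dev/sdNEW1  # add the new disk's partition
  cat /proc/mdstat                             # confirm the rebuild started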

PS, steps 1 and 2 may not be needed if you are using the full block 
device instead of a partition. Also, change the command in step 3 to 
"mdadm --manage /dev/md127 --add /dev/sdb"

PPS, if this is a bootable disk, you will probably also need to do 
something with your boot manager to get that installed onto the new disk 
as well.
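
For example (only relevant if you actually boot from these disks), on a 
CentOS 6 box with legacy GRUB that would usually mean running something like 
"grub-install /dev/sdNEW" (placeholder name again) once the rebuild has 
finished.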

Hope this helps, otherwise, please provide more information.


Regards,
Adam

-- 
Adam Goryachev Website Managers www.websitemanagers.com.au


* Re: please help - raid 1 degraded
  2015-02-11 22:12 ` Adam Goryachev
@ 2015-02-12  0:09   ` sunruh
  2015-02-12  0:36     ` Adam Goryachev
  2015-02-12 10:11     ` Roaming
  0 siblings, 2 replies; 8+ messages in thread
From: sunruh @ 2015-02-12  0:09 UTC (permalink / raw)
  To: Adam Goryachev; +Cc: sunruh, linux-raid

On Thu, Feb 12, 2015 at 09:12:50AM +1100, Adam Goryachev wrote:
> Please send the output of fdisk -lu /dev/sd[bc] and cat /proc/mdstat 
> (preferably both when it was working and current).
> 
> In general, when replacing a failed RAID1 disk, and assuming you 
> configured it the way I think you did:
> 1) fdisk -lu /dev/sdb
> Find out the exact partition sizes
> 2) fdisk /dev/sdc
> Create the new partitions exactly the same as /dev/sdb
> 3) mdadm --manage /dev/md127 --add /dev/sdb1
> Add the partition to the array
> 4) cat /proc/mdstat
> Watch the rebuild progress, once it is complete, relax.
> 
> PS, steps 1 and 2 may not be needed if you are using the full block 
> device instead of a partition. Also, change the command in step 3 to 
> "mdadm --manage /dev/md127 --add /dev/sdb"
> 
> PPS, if this is a bootable disk, you will probably also need to do 
> something with your boot manager to get that installed onto the new disk 
> as well.
> 
> Hope this helps, otherwise, please provide more information.

Adam (and anybody else that can help),
after the issue i do not have any 'before' output, only current. and no, they
are not bootable.

[root@shell ~]# fdisk -lu /dev/sd[bc]

Disk /dev/sdb: 240.1 GB, 240057409536 bytes
255 heads, 63 sectors/track, 29185 cylinders, total 468862128 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0001a740


Disk /dev/sdc: 240.1 GB, 240057409536 bytes
255 heads, 63 sectors/track, 29185 cylinders, total 468862128 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

[root@shell ~]# cat /proc/mdstat
Personalities : [raid1] 
md127 : active raid1 sdb[2]
      234299840 blocks super 1.2 [2/1] [U_]
      
unused devices: <none>

[root@shell ~]# fdisk -lu /dev/sdb

Disk /dev/sdb: 240.1 GB, 240057409536 bytes
255 heads, 63 sectors/track, 29185 cylinders, total 468862128 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0001a740

i dont seem to be seeing the partition sizes or im stupid.
couldnt i just dd if=/dev/sdb of=/dev/sdc bs=1G count=240 and then do the
mdadm?


* Re: please help - raid 1 degraded
  2015-02-12  0:09   ` sunruh
@ 2015-02-12  0:36     ` Adam Goryachev
  2015-02-12  1:02       ` sunruh
  2015-02-12 10:11     ` Roaming
  1 sibling, 1 reply; 8+ messages in thread
From: Adam Goryachev @ 2015-02-12  0:36 UTC (permalink / raw)
  To: sunruh; +Cc: linux-raid

On 12/02/15 11:09, sunruh@prismnet.com wrote:
> Adam (and anybody else that can help),
> after the issue i do not have any 'before' output, only current. and no, they
> are not bootable.
>
> [root@shell ~]# fdisk -lu /dev/sd[bc]
>
> Disk /dev/sdb: 240.1 GB, 240057409536 bytes
> 255 heads, 63 sectors/track, 29185 cylinders, total 468862128 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disk identifier: 0x0001a740
>
>
> Disk /dev/sdc: 240.1 GB, 240057409536 bytes
> 255 heads, 63 sectors/track, 29185 cylinders, total 468862128 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disk identifier: 0x00000000
>
> [root@shell ~]# cat /proc/mdstat
> Personalities : [raid1]
> md127 : active raid1 sdb[2]
>        234299840 blocks super 1.2 [2/1] [U_]
>        
> unused devices: <none>

> i dont seem to be seeing the partition sizes or im stupid.
> couldnt i just dd if=/dev/sdb of=/dev/sdc bs=1G count=240 and then do the
> mdadm?
OK, so you aren't using partitioned disks, so it is as simple as what I 
said above (with one minor correction):

"mdadm --manage /dev/md127 --add /dev/sdc"


/dev/sdc is the new blank ssd, so that is the one to add, the above 
command with /dev/sdb wouldn't have done anything at all .... So just 
run that command, and then do "watch cat /proc/mdstat" until the good 
stuff is completed.
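
While it rebuilds, /proc/mdstat should show a recovery line, roughly like 
this (the numbers below are only illustrative):

md127 : active raid1 sdc[3] sdb[2]
      234299840 blocks super 1.2 [2/1] [U_]
      [====>...............]  recovery = 20.0% (46859968/234299840) finish=15.6min speed=200182K/sec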

Regards,
Adam

-- 
Adam Goryachev Website Managers www.websitemanagers.com.au


* Re: please help - raid 1 degraded
  2015-02-12  0:36     ` Adam Goryachev
@ 2015-02-12  1:02       ` sunruh
  2015-02-12  1:10         ` Adam Goryachev
  0 siblings, 1 reply; 8+ messages in thread
From: sunruh @ 2015-02-12  1:02 UTC (permalink / raw)
  To: Adam Goryachev; +Cc: sunruh, linux-raid

On Thu, Feb 12, 2015 at 11:36:23AM +1100, Adam Goryachev wrote:
> OK, so you aren't using partitioned disks, so it is as simple as what I 
> said above (with one minor correction):
> 
> "mdadm --manage /dev/md127 --add /dev/sdc"
> 
> 
> /dev/sdc is the new blank ssd, so that is the one to add, the above 
> command with /dev/sdb wouldn't have done anything at all .... So just 
> run that command, and then do "watch cat /proc/mdstat" until the good 
> stuff is completed.

awesome sauce!
it is recovering and at a fast pace too.  says it will be done in 16mins.

ok, so now the really important questions:
once done, what files/stats do i need to save off for the next time it 
craters?


* Re: please help - raid 1 degraded
  2015-02-12  1:02       ` sunruh
@ 2015-02-12  1:10         ` Adam Goryachev
  2015-02-12  3:12           ` Eyal Lebedinsky
  0 siblings, 1 reply; 8+ messages in thread
From: Adam Goryachev @ 2015-02-12  1:10 UTC (permalink / raw)
  To: sunruh; +Cc: linux-raid

On 12/02/15 12:02, sunruh@prismnet.com wrote:
> ok, so now the really important questions: once done, what files/stats 
> do i need to save off for the next time it craters?

I think the usual information requested is the following:
fdisk -lu /dev/sd?
mdadm --manage --query /dev/sd?
mdadm --manage --detail /dev/md*
mdadm --manage --examine /dev/sd?
cat /proc/mdstat
ls -l /dev/disk/by-id/
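
If you want to grab all of that in one go, here is a quick sketch of a script 
(untested, adjust the device globs and exact mdadm switches to taste):

  #!/bin/sh
  # dump the current RAID layout to a dated file for later reference
  out=/root/raid-snapshot-$(date +%Y%m%d).txt
  {
    cat /proc/mdstat
    fdisk -lu /dev/sd?
    mdadm --detail /dev/md*
    mdadm --examine /dev/sd?
    ls -l /dev/disk/by-id/
  } > "$out" 2>&1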

If you can keep a copy of all those things, then you will be much 
further ahead than many people. Of course, RAID1 is just so much 
easier/simpler than RAID5/RAID6, so usually you won't need any of that. 
RAID1 is a simple mirror, so if you have two disks, one with data, one 
without, then you just need to decide which disk has the data, and start 
with that.
It is even possible to start two MD arrays, one from each disk, and then 
compare the contents to decide which one you want to keep.
Or, you can simply mount the device directly (skipping any MD data at 
the beginning if needed).
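
As a sketch of that last option (assuming a 1.2 superblock like yours, where 
the data starts at the "Data Offset" reported by mdadm --examine):

  mdadm --examine /dev/sdb | grep 'Data Offset'       # e.g. "Data Offset : 262144 sectors"
  losetup -r -f --show -o $((262144 * 512)) /dev/sdb  # read-only loop device at that offset
  mount -o ro /dev/loopN /mnt                         # use the loop device printed above

If the array itself is partitioned (as yours is, with md127p1/p2) you would 
still need kpartx or similar on top of the loop device before you can mount 
the filesystems.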

Like I said, RAID1 is by far the simplest type of RAID if you want 
redundancy and can fit your dataset onto a single device.

Glad you had a successful recovery :)

Regards,
Adam

-- 
Adam Goryachev Website Managers www.websitemanagers.com.au


* Re: please help - raid 1 degraded
  2015-02-12  1:10         ` Adam Goryachev
@ 2015-02-12  3:12           ` Eyal Lebedinsky
  0 siblings, 0 replies; 8+ messages in thread
From: Eyal Lebedinsky @ 2015-02-12  3:12 UTC (permalink / raw)
  Cc: linux-raid



On 12/02/15 12:10, Adam Goryachev wrote:
> On 12/02/15 12:02, sunruh@prismnet.com wrote:
>> ok, so now the really important questions: once done, what files/stats do i need to save off for the next time it craters?
>
> I think the usual information requested is the following:
> fdisk -lu /dev/sd?
> mdadm --manage --query /dev/sd?
> mdadm --manage --detail /dev/md*
> mdadm --manage --examine /dev/sd?

Maybe
	mdadm --misc --query   /dev/md*
	mdadm --misc --detail  /dev/md*
	mdadm --misc --examine /dev/sd*

> cat /proc/mdstat
> ls -l /dev/disk/by-id/

-- 
Eyal Lebedinsky (eyal@eyal.emu.id.au)


* Re: please help - raid 1 degraded
  2015-02-12  0:09   ` sunruh
  2015-02-12  0:36     ` Adam Goryachev
@ 2015-02-12 10:11     ` Roaming
  1 sibling, 0 replies; 8+ messages in thread
From: Roaming @ 2015-02-12 10:11 UTC (permalink / raw)
  To: sunruh; +Cc: linux-raid


On 12/02/2015 00:09, sunruh@prismnet.com wrote:
> i dont seem to be seeing the partition sizes or im stupid.
> couldnt i just dd if=/dev/sdb of=/dev/sdc bs=1G count=240 and then do the
> mdadm?
NOT a good idea. I don't know what it would do in your case, where you 
are using the entire disk, but if you're using a partition table you 
would suddenly end up with a bunch of duplicate GUIDs. Bearing in mind 
the "GU" stands for "globally unique", your management tools are likely 
to get confused ... not a good idea especially wrt raid.
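
(If anyone does end up with a cloned disk, running "blkid /dev/sdb /dev/sdc" 
(example device names) is a quick way to see whether filesystem or RAID 
member UUIDs now collide.)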

Cheers,
Wol

