From: sunruh@prismnet.com
To: Adam Goryachev <mailinglists@websitemanagers.com.au>
Cc: sunruh@prismnet.com, linux-raid@vger.kernel.org
Subject: Re: please help - raid 1 degraded
Date: Wed, 11 Feb 2015 18:09:40 -0600
Message-ID: <20150212000940.GA49579@eris.prismnet.com>
In-Reply-To: <54DBD3E2.80701@websitemanagers.com.au>

On Thu, Feb 12, 2015 at 09:12:50AM +1100, Adam Goryachev wrote:
> On 12/02/15 05:04, sunruh@prismnet.com wrote:
> > CentOS 6.6
> > 2x 240 GB SSD in RAID1
> > This is a live production machine and the RAID1 holds the /u
> > filesystems for users' home dirs.
> >
> > One SSD went totally offline and I replaced it, after noticing that
> > the firmware levels were not the same; the new SSD has the same
> > firmware level as the good one.
> >
> > /dev/sdb is the good SSD
> > /dev/sdc is the new blank SSD
> >
> > When working, /u1 was mounted from /dev/md127p1 and /u2 from /dev/md127p2;
> > p1 is 80 GB and p2 is 160 GB, making up the full 240 GB of the SSD.
> >
> >> ls -al /dev/md*
> > brw-rw---- 1 root disk   9, 127 Feb 11 11:09 /dev/md127
> > brw-rw---- 1 root disk 259,   0 Feb 10 20:23 /dev/md127p1
> > brw-rw---- 1 root disk 259,   1 Feb 10 20:23 /dev/md127p2
> >
> > /dev/md:
> > total 8
> > drwxr-xr-x  2 root root  140 Feb 10 20:24 .
> > drwxr-xr-x 20 root root 3980 Feb 10 20:24 ..
> > lrwxrwxrwx  1 root root    8 Feb 11 11:09 240ssd_0 -> ../md127
> > lrwxrwxrwx  1 root root   10 Feb 10 20:23 240ssd_0p1 -> ../md127p1
> > lrwxrwxrwx  1 root root   10 Feb 10 20:23 240ssd_0p2 -> ../md127p2
> > -rw-r--r--  1 root root    5 Feb 10 20:24 autorebuild.pid
> > -rw-------  1 root root   63 Feb 10 20:23 md-device-map
> >
> >> ps -eaf | grep mdadm
> > root      2188     1  0 Feb10 ?        00:00:00 mdadm --monitor --scan -f --pid-file=/var/run/mdadm/mdadm.pid
> >
> > How do I rebuild /dev/sdc as the mirror of /dev/sdb?
> >
> 
> Please send the output of fdisk -lu /dev/sd[bc] and cat /proc/mdstat 
> (preferably both when it was working and current).
> 
> In general, when replacing a failed RAID1 disk, and assuming you 
> configured it the way I think you did:
> 1) fdisk -lu /dev/sdb
> Find out the exact partition sizes
> 2) fdisk /dev/sdc
> Create the new partitions exactly the same as /dev/sdb
> 3) mdadm --manage /dev/md127 --add /dev/sdc1
> Add the partition to the array
> 4) cat /proc/mdstat
> Watch the rebuild progress; once it is complete, relax.
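> 
> For example, a minimal sketch of steps 1-3 using sfdisk to clone the
> partition table non-interactively (assuming /dev/sdc is the new blank
> disk and the array member is the first partition; adjust to your layout):
> 
>   sfdisk -d /dev/sdb | sfdisk /dev/sdc       # copy sdb's partition table to sdc
>   mdadm --manage /dev/md127 --add /dev/sdc1  # add the new partition to the array
>   watch cat /proc/mdstat                     # monitor the rebuild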
> 
> PS, steps 1 and 2 may not be needed if you are using the full block
> device instead of a partition. In that case, change the command in step
> 3 to "mdadm --manage /dev/md127 --add /dev/sdc"
> 
> PPS, if this is a bootable disk, you will probably also need to do 
> something with your boot manager to get that installed onto the new disk 
> as well.
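> 
> For example, on a legacy-GRUB system such as CentOS 6, that would be
> something along the lines of (assuming /dev/sdc is the replacement and
> /boot is available):
> 
>   grub-install /dev/sdc   # reinstall the boot loader onto the new disk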
> 
> Hope this helps, otherwise, please provide more information.
> 
> 
> Regards,
> Adam
> 
> -- 
> Adam Goryachev Website Managers www.websitemanagers.com.au

Adam (and anybody else that can help),
Here is the output from after the issue; I do not have any from before.
And no, they are not bootable.

[root@shell ~]# fdisk -lu /dev/sd[bc]

Disk /dev/sdb: 240.1 GB, 240057409536 bytes
255 heads, 63 sectors/track, 29185 cylinders, total 468862128 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0001a740


Disk /dev/sdc: 240.1 GB, 240057409536 bytes
255 heads, 63 sectors/track, 29185 cylinders, total 468862128 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

[root@shell ~]# cat /proc/mdstat
Personalities : [raid1] 
md127 : active raid1 sdb[2]
      234299840 blocks super 1.2 [2/1] [U_]
      
unused devices: <none>

[root@shell ~]# fdisk -lu /dev/sdb

Disk /dev/sdb: 240.1 GB, 240057409536 bytes
255 heads, 63 sectors/track, 29185 cylinders, total 468862128 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0001a740

I don't seem to be seeing the partition sizes, or I'm stupid.
Couldn't I just dd if=/dev/sdb of=/dev/sdc bs=1G count=240 and then do
the mdadm?
