From: Pavlik Kirilov <pavllik@yahoo.ca>
To: "linux-lvm@redhat.com" <linux-lvm@redhat.com>
Subject: [linux-lvm] Raid 10 - recovery after a disk failure
Date: Sun, 31 Jan 2016 23:00:39 +0000 (UTC)
Message-ID: <218341039.2921649.1454281239033.JavaMail.yahoo@mail.yahoo.com>
In-Reply-To: 218341039.2921649.1454281239033.JavaMail.yahoo.ref@mail.yahoo.com

Hi,

  I am encountering strange behaviour when trying to recover a RAID 10 LV created with the following command:


lvcreate --type raid10 -L3G  -i 2 -I 256 -n lv_r10 vg_data /dev/vdb1:1-500 /dev/vdc1:1-500 /dev/vdd1:1-500 /dev/vde1:1-500

As can be seen, I have 4 PVs and give the first 500 PEs of each of them to the RAID 10 logical volume. The resulting PE layout looks like this:

lvs  -o seg_pe_ranges,lv_name,stripes -a

PE Ranges                                                                               LV                #Str
lv_r10_rimage_0:0-767 lv_r10_rimage_1:0-767 lv_r10_rimage_2:0-767 lv_r10_rimage_3:0-767 lv_r10               4
/dev/vdb1:2-385                                                                         [lv_r10_rimage_0]    1
/dev/vdc1:2-385                                                                         [lv_r10_rimage_1]    1
/dev/vdd1:2-385                                                                         [lv_r10_rimage_2]    1
/dev/vde1:2-385                                                                         [lv_r10_rimage_3]    1
/dev/vdb1:1-1                                                                           [lv_r10_rmeta_0]     1
/dev/vdc1:1-1                                                                           [lv_r10_rmeta_1]     1
/dev/vdd1:1-1                                                                           [lv_r10_rmeta_2]     1
/dev/vde1:1-1                                                                           [lv_r10_rmeta_3]     1
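
The same allocation can also be cross-checked from the PV side with pvdisplay's -m (--maps) option, which lists each PV's allocated extent ranges and the LVs they map to, e.g.:

pvdisplay -m /dev/vdb1 /dev/vdc1 /dev/vdd1 /dev/vde1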


 So far everything is OK: the number of PEs used is automatically reduced to 385 per PV so that the LV size comes out to 3 GiB.
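
As a quick sanity check on the extent math (assuming the default 4 MiB extent size, which matches the ranges above):

  3 GiB / 4 MiB per extent          = 768 logical extents of data
  768 extents / 2 stripes           = 384 data extents per raid image
  384 data extents + 1 rmeta extent = 385 PEs allocated on each PV

which agrees with the /dev/vdX1:2-385 ranges plus the single rmeta extent at PE 1.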

 The problem comes when I shut down the system, replace one disk (vdc), boot again and try to recover the array. Here are the commands I execute:
pvs
Couldn't find device with uuid 2hU2pD-xNDa-yi1J-OkkP-NjGq-hIxo-Q5AgQC.
PV             VG         Fmt  Attr PSize  PFree
/dev/vdb1      vg_data    lvm2 a--   8.00g 6.49g
/dev/vdd1      vg_data    lvm2 a--   8.00g 6.49g
/dev/vde1      vg_data    lvm2 a--   8.00g 6.49g
unknown device vg_data    lvm2 a-m   8.00g 6.49g

pvcreate /dev/vdc1
Physical volume "/dev/vdc1" successfully created

vgextend vg_data /dev/vdc1
Couldn't find device with uuid 2hU2pD-xNDa-yi1J-OkkP-NjGq-hIxo-Q5AgQC
Volume group "vg_data" successfully extended

lvs  -o seg_pe_ranges,lv_name,stripes -a
Couldn't find device with uuid 2hU2pD-xNDa-yi1J-OkkP-NjGq-hIxo-Q5AgQC.
PE Ranges                                                                               LV                #Str
lv_r10_rimage_0:0-767 lv_r10_rimage_1:0-767 lv_r10_rimage_2:0-767 lv_r10_rimage_3:0-767 lv_r10               4
/dev/vdb1:2-385                                                                         [lv_r10_rimage_0]    1
unknown device:2-385                                                                    [lv_r10_rimage_1]    1
/dev/vdd1:2-385                                                                         [lv_r10_rimage_2]    1
/dev/vde1:2-385                                                                         [lv_r10_rimage_3]    1
/dev/vdb1:1-1                                                                           [lv_r10_rmeta_0]     1
unknown device:1-1                                                                      [lv_r10_rmeta_1]     1
/dev/vdd1:1-1                                                                           [lv_r10_rmeta_2]     1
/dev/vde1:1-1                                                                           [lv_r10_rmeta_3]     1

lvchange -ay --partial /dev/vg_data/lv_r10
PARTIAL MODE. Incomplete logical volumes will be processed.
Couldn't find device with uuid 2hU2pD-xNDa-yi1J-OkkP-NjGq-hIxo-Q5AgQC

lvconvert --repair vg_data/lv_r10 /dev/vdc1:1-385
Attempt to replace failed RAID images (requires full device resync)? [y/n]: y
Insufficient free space: 770 extents needed, but only 345 available
Failed to allocate replacement images for vg_data/lv_r10

lvconvert --repair vg_data/lv_r10
Attempt to replace failed RAID images (requires full device resync)? [y/n]: y
Faulty devices in vg_data/lv_r10 successfully replaced.

lvs  -o seg_pe_ranges,lv_name,stripes -a
Couldn't find device with uuid 2hU2pD-xNDa-yi1J-OkkP-NjGq-hIxo-Q5AgQC.

PE Ranges                                                                               LV                #Str
lv_r10_rimage_0:0-767 lv_r10_rimage_1:0-767 lv_r10_rimage_2:0-767 lv_r10_rimage_3:0-767 lv_r10               4
/dev/vdb1:2-385                                                                         [lv_r10_rimage_0]    1
/dev/vdc1:1-768                                                                         [lv_r10_rimage_1]    1
/dev/vdd1:2-385                                                                         [lv_r10_rimage_2]    1
/dev/vde1:2-385                                                                         [lv_r10_rimage_3]    1
/dev/vdb1:1-1                                                                           [lv_r10_rmeta_0]     1
/dev/vdc1:0-0                                                                           [lv_r10_rmeta_1]     1
/dev/vdd1:1-1                                                                           [lv_r10_rmeta_2]     1
/dev/vde1:1-1                                                                           [lv_r10_rmeta_3]     1

  The array was recovered, but the result is definitely not what I expected: /dev/vdc1 now has 768 PEs in use instead of 385 as on the other PVs. In this case I happened to have enough extra free space on /dev/vdc1, but what if I had not? Please suggest what should be done.
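
Would an approach along the lines of the documented procedure for replacing a missing PV have been better here, i.e. recreating /dev/vdc1 with the old UUID and restoring the VG metadata so that the original 385-extent layout is reused? Roughly (the archive file name below is only a placeholder; the correct one would be picked from 'vgcfgrestore --list vg_data'):

pvcreate --uuid 2hU2pD-xNDa-yi1J-OkkP-NjGq-hIxo-Q5AgQC \
         --restorefile /etc/lvm/archive/vg_data_XXXXX.vg /dev/vdc1
vgcfgrestore vg_data
lvchange --resync vg_data/lv_r10   # force a full resynchronisation of the raid copies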

Linux ubuntu1 3.13.0-32-generic, x86_64
LVM version:     2.02.98(2) (2012-10-15)
Library version: 1.02.77 (2012-10-15)
Driver version:  4.27.0

Pavlik Petrov
