From: Libor Klepáč <libor.klepac@bcom.cz>
Date: Tue, 10 Mar 2015 10:34:26 +0100
Subject: Re: [linux-lvm] Removing disk from raid LVM
To: linux-lvm@redhat.com

Hi,

thanks for the link.

I think this procedure was used last week; maybe I read it on this very page.

Shut down the computer, replace the disk, boot the computer, create a PV with the old UUID, then run vgcfgrestore.
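
Roughly, the commands I have in mind are (just a sketch; the UUID, archive file, device and VG names are the example values from this thread, not necessarily our real ones):

pvcreate --uuid "FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk" \
         --restorefile /etc/lvm/archive/VG_00050.vg /dev/sdh1   # recreate the PV with the old UUID of the replaced disk
vgcfgrestore -f /etc/lvm/archive/VG_00050.vg vgPecDisk2         # then restore the VG metadata from the same archive file

That matches steps 2 and 3 from your mail, as far as I understand them.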

 

This "echo 1 > /sys/block/sde/device/delete" is what I am testing now in a virtual machine; it is like the disk failing completely, and I think LVM raid should be able to handle this situation ;)
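
The test sequence I run in the VM looks roughly like this (only a sketch; vgPecDisk2 is the VG from this thread, the LV name lvBackup is a placeholder, and I assume the VM has a spare disk such as /dev/sdf available for the repair):

echo 1 > /sys/block/sde/device/delete      # simulate a complete disk failure at the SCSI layer
pvs                                        # the removed PV should now show up as missing
lvs -a -o +devices vgPecDisk2              # check the raid5 LV and its rimage/rmeta sub-LVs
pvcreate /dev/sdf                          # prepare the replacement disk as a PV
vgextend vgPecDisk2 /dev/sdf               # add it to the VG
lvconvert --repair vgPecDisk2/lvBackup     # ask LVM to replace the failed leg and rebuild

After that I would watch the Cpy%Sync column in the lvs output while it rebuilds, if I understand the repair flow correctly.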

 

With regards,
Libor

On Tuesday, 10 March 2015 10:23:08, emmanuel segura wrote:

> echo 1 > /sys/block/sde/device/delete  # this is wrong from my point of
> view, you need to first try to remove the disk from lvm
>
> vgreduce -ff vgPecDisk2 /dev/sde  # doesn't allow me to remove
>
> vgreduce --removemissing vgPecDisk2  # doesn't allow me to remove
>
> vgreduce --removemissing --force vgPecDisk2  # works, alerts me about
> rimage and rmeta LVs
>
> 1: remove the failed device physically (not from lvm) and insert the new
> device
> 2: pvcreate --uuid "FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk"
> --restorefile /etc/lvm/archive/VG_00050.vg /dev/sdh1  # to create the PV
> with the OLD UUID of the removed disk
> 3: now you can restore the vg metadata
>
> https://www.centos.org/docs/5/html/Cluster_Logical_Volume_Manager/mdatarecover.html
>
> 2015-03-09 12:21 GMT+01:00 Libor Klepáč <libor.klepac@bcom.cz>:

> > Hello,
> >
> > we have 4x3TB disks in LVM for backups, and I set up per-customer / per-"task"
> > LVs of type raid5 on it.
> >
> > Last week, smartd started to alert us that one of the disks would soon go
> > away.
> >
> > So we shut down the computer, replaced the disk, and then I used vgcfgrestore
> > on the new disk to restore the metadata.
> >

> > The result was that some LVs came up with a damaged filesystem, and some didn't
> > come up at all, with messages like the ones below (one of the rimage/rmeta pairs
> > was "wrong"; when I looked with the KVPM util, its type was "virtual"):
> >
> > ----
> > [123995.826650] mdX: bitmap initialized from disk: read 4 pages, set 1 of
> > 98312 bits
> > [124071.037501] device-mapper: raid: Failed to read superblock of device
> > at position 2
> > [124071.055473] device-mapper: raid: New device injected into existing
> > array without 'rebuild' parameter specified
> > [124071.055969] device-mapper: table: 253:83: raid: Unable to assemble
> > array: Invalid superblocks
> > [124071.056432] device-mapper: ioctl: error adding target to table
> > ----
> >

> > After that, I tried several combinations of
> >
> > lvconvert --repair
> >
> > and
> >
> > lvchange -ay --resync
> >
> > without success. So I saved some data, then created new empty LVs and
> > started the backups from scratch.
> >

> > Today, smartd alerted on another disk.
> >
> > So how can I safely remove a disk from the VG?
> >
> > I tried to simulate it in a VM:
> >

> > echo 1 > /sys/block/sde/device/delete
> >
> > vgreduce -ff vgPecDisk2 /dev/sde  # doesn't allow me to remove
> >
> > vgreduce --removemissing vgPecDisk2  # doesn't allow me to remove
> >
> > vgreduce --removemissing --force vgPecDisk2  # works, alerts me about rimage
> > and rmeta LVs
> >
> > vgchange -ay vgPecDisk2  # works, but the LV isn't active; only the rimage and rmeta
> > LVs show up.
> >
> > So how do I safely remove the soon-to-be-bad drive and insert a new drive into
> > the array?
> >
> > The server has no more physical space for a new drive, so we cannot use pvmove.
> >

> > The server is Debian wheezy, but the kernel is 2.6.14.
> >
> > LVM is version 2.02.95-8, but I have another copy I use for raid
> > operations, which is version 2.02.104.
> >
> > With regards,
> >
> > Libor
