linux-lvm.redhat.com archive mirror
From: "Libor Klepáč" <libor.klepac@bcom.cz>
To: linux-lvm@redhat.com
Subject: Re: [linux-lvm] Removing disk from raid LVM
Date: Tue, 10 Mar 2015 10:34:26 +0100
Message-ID: <12305881.n8GeDYuzBu@libor-nb>
In-Reply-To: <CAE7pJ3B1oSS1WvfKH7e9zux4kbTVL9WvA2-uWsgpsxF17-dK4Q@mail.gmail.com>


Hi,
thanks for the link.
I think this procedure is the one we used last week; maybe I even read it on this very page.

Shut down the computer, replace the disk, boot the computer, create a PV with the old UUID,
then run vgcfgrestore.
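
In shell terms that restore step should look roughly like this (just a sketch; the UUID,
archive file and device names are taken from the example quoted below and would have to be
adjusted to the real VG):

pvcreate --uuid "FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk" \
         --restorefile /etc/lvm/archive/VG_00050.vg /dev/sdh1   # recreate the PV with the old UUID
vgcfgrestore vgPecDisk2                     # restore the VG metadata from the latest backup
vgchange -ay vgPecDisk2                     # reactivate the LVs
lvs -a -o name,attr,devices vgPecDisk2      # check that the rimage/rmeta sub-LVs look sane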

This "echo 1 > /sys/block/sde/device/delete" is what i test now in virtual 
machine, it's like if disk failed completly, i think LVM raid should be able to handle 
this situation ;)
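
The test itself is roughly the following (a sketch only, assuming the same vgPecDisk2
layout as in the commands quoted below):

echo 1 > /sys/block/sde/device/delete      # make the kernel drop the disk, as if it had died
pvs                                        # the removed PV should now show up as missing
lvs -a -o name,attr,devices vgPecDisk2     # do the raid5 LVs stay active on the remaining legs?
dmesg | tail                               # watch for device-mapper raid errors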

With regards,
Libor

On Tuesday, 10 March 2015 at 10:23:08, emmanuel segura wrote:
> echo 1 > /sys/block/sde/device/delete
> # this is wrong from my point of view; you first need to try to remove the disk from lvm
> 
> vgreduce -ff vgPecDisk2 /dev/sde  # doesn't allow me to remove
> 
> vgreduce --removemissing vgPecDisk2  # doesn't allow me to remove
> 
> vgreduce --removemissing --force vgPecDisk2  # works, alerts me about rimage and rmeta LVs
> 
> 
> 1: remove the failed device physically (not from lvm) and insert the new device
> 2: pvcreate --uuid "FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk"
>    --restorefile /etc/lvm/archive/VG_00050.vg /dev/sdh1
>    # to create the pv with the OLD UUID of the removed disk
> 3: now you can restore the vg metadata
> 
> 
> https://www.centos.org/docs/5/html/Cluster_Logical_Volume_Manager/mdatarecover.html
> 2015-03-09 12:21 GMT+01:00 Libor Klepáč <libor.klepac@bcom.cz>:
> > Hello,
> > 
> > we have 4x3TB disks in LVM for backups, and I set up one LV of type raid5
> > per customer / per "task" on it.
> > 
> > Last week, smartd started alerting us that one of the disks would soon fail.
> > 
> > So we shut down the computer, replaced the disk, and then I used vgcfgrestore
> > on the new disk to restore the metadata.
> > 
> > The result was that some LVs came up with a damaged filesystem, and some didn't
> > come up at all, with messages like the ones below (one of the rimage and rmeta
> > sub-LVs was "wrong"; when I looked with the KVPM utility, its type was "virtual"):
> > 
> > ----
> > 
> > [123995.826650] mdX: bitmap initialized from disk: read 4 pages, set 1 of
> > 98312 bits
> > 
> > [124071.037501] device-mapper: raid: Failed to read superblock of device
> > at
> > position 2
> > 
> > [124071.055473] device-mapper: raid: New device injected into existing
> > array without 'rebuild' parameter specified
> > 
> > [124071.055969] device-mapper: table: 253:83: raid: Unable to assemble
> > array: Invalid superblocks
> > 
> > [124071.056432] device-mapper: ioctl: error adding target to table
> > 
> > ----
> > 
> > After that, I tried several combinations of
> > 
> > lvconvert --repair
> > 
> > and
> > 
> > lvchange -ay --resync
> > 
> > 
> > 
> > Without success. So I saved some data and then created new empty LVs and
> > started backups from scratch.
> > 
> > 
> > 
> > Today, smartd alerted us about another disk.
> > 
> > So how can I safely remove a disk from the VG?
> > 
> > I tried to simulate it in a VM:
> > 
> > 
> > 
> > echo 1 > /sys/block/sde/device/delete
> > 
> > vgreduce -ff vgPecDisk2 /dev/sde #doesn't allow me to remove
> > 
> > vgreduce --removemissing vgPecDisk2 #doesn't allow me to remove
> > 
> > vgreduce --removemissing --force vgPecDisk2 #works, alerts me about rimage
> > and rmeta LVs
> > 
> > 
> > 
> > vgchange -ay vgPecDisk2 #works, but the LV isn't active; only the rimage and rmeta
> > LVs show up.
> > 
> > 
> > 
> > So how can I safely remove the soon-to-be-bad drive and insert a new drive into
> > the array?
> > 
> > The server has no more physical space for a new drive, so we cannot use pvmove.
> > 
> > 
> > 
> > The server is Debian wheezy, but the kernel is 2.6.14.
> > 


Thread overview: 14+ messages
2015-03-09 11:21 [linux-lvm] Removing disk from raid LVM Libor Klepáč
2015-03-10  9:23 ` emmanuel segura
2015-03-10  9:34   ` Libor Klepáč [this message]
2015-03-10 14:05 ` John Stoffel
2015-03-11 13:05   ` Libor Klepáč
2015-03-11 15:57     ` John Stoffel
2015-03-11 18:02       ` Libor Klepáč
2015-03-12 14:53         ` John Stoffel
2015-03-12 15:21           ` Libor Klepáč
2015-03-12 17:20             ` John Stoffel
2015-03-12 21:32               ` Libor Klepáč
2015-03-13 16:18                 ` John Stoffel
2015-03-12 15:32           ` Libor Klepáč
2015-03-11 23:12 ` Premchand Gupta
