linux-lvm.redhat.com archive mirror
* [linux-lvm] Removing disk from raid LVM
@ 2015-03-09 11:21 Libor Klepáč
  2015-03-10  9:23 ` emmanuel segura
                   ` (2 more replies)
  0 siblings, 3 replies; 14+ messages in thread
From: Libor Klepáč @ 2015-03-09 11:21 UTC (permalink / raw)
  To: linux-lvm


Hello,
we have 4x3TB disks in LVM for backups, and I set up per-customer / per-"task"
LVs of type raid5 on it.
Last week, smartd started warning us that one of the disks would soon fail.

So we shut down the computer, replaced the disk and then used vgcfgrestore on
the new disk to restore the metadata.
The result was that some LVs came up with a damaged filesystem, and some didn't
come up at all, with messages like the ones below (one pair of rimage and rmeta
LVs was "wrong"; in the KVPM util their type was "virtual"):
----
[123995.826650] mdX: bitmap initialized from disk: read 4 pages, set 1 of 98312 bits
[124071.037501] device-mapper: raid: Failed to read superblock of device at position 2
[124071.055473] device-mapper: raid: New device injected into existing array without 'rebuild' parameter specified
[124071.055969] device-mapper: table: 253:83: raid: Unable to assemble array: Invalid superblocks
[124071.056432] device-mapper: ioctl: error adding target to table
----
After that, I tried several combinations of
lvconvert --repair
and
lvchange -ay --resync
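
In full, the attempts looked roughly like this (vgPecDisk2/lvExample is just a
stand-in for each broken LV):

lvconvert --repair vgPecDisk2/lvExample     # let LVM replace the failed raid leg from free space
lvchange -ay --resync vgPecDisk2/lvExample  # activate and force a full resync of the raid LV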

Without success. So I saved what data I could, then created new empty LVs and
started backups from scratch.

Today, smartd alerted on another disk.
So how can I safely remove a disk from the VG?
I tried to simulate it in a VM:

echo 1 > /sys/block/sde/device/delete
vgreduce -ff vgPecDisk2 /dev/sde #doesn't allow me to remove
vgreduce --removemissing vgPecDisk2 #doesn't allow me to remove
vgreduce --removemissing --force vgPecDisk2 #works, alerts me about rimage and rmeta LVs

vgchange -ay vgPecDisk2 #works, but the LV isn't active; only the rimage and rmeta LVs show up
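
(To bring the simulated disk back in the VM, a SCSI host rescan rediscovers it;
the host number below is just whatever the VM assigns:)

echo "- - -" > /sys/class/scsi_host/host0/scan  # rescan host0 for returned devices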

So how do I safely remove the soon-to-be-bad drive and insert a new drive into the array?
The server has no more physical space for a new drive, so we cannot use pvmove.

The server is Debian wheezy, but the kernel is 3.14.
LVM is version 2.02.95-8, but I have another copy I use for raid operations,
which is version 2.02.104.

With regards
Libor


* Re: [linux-lvm] Removing disk from raid LVM
  2015-03-09 11:21 [linux-lvm] Removing disk from raid LVM Libor Klepáč
@ 2015-03-10  9:23 ` emmanuel segura
  2015-03-10  9:34   ` Libor Klepáč
  2015-03-10 14:05 ` John Stoffel
  2015-03-11 23:12 ` Premchand Gupta
  2 siblings, 1 reply; 14+ messages in thread
From: emmanuel segura @ 2015-03-10  9:23 UTC (permalink / raw)
  To: LVM general discussion and development

echo 1 > /sys/block/sde/device/delete # this is wrong from my point of
view; you first need to try to remove the disk from LVM

vgreduce -ff vgPecDisk2 /dev/sde #doesn't allow me to remove

vgreduce --removemissing vgPecDisk2 #doesn't allow me to remove

vgreduce --removemissing --force vgPecDisk2 #works, alerts me about
rimage and rmeta LVs


1: remove the failed device physically (not from LVM) and insert the new device
2: pvcreate --uuid "FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk"
--restorefile /etc/lvm/archive/VG_00050.vg /dev/sdh1 #to create the PV
with the OLD UUID of the removed disk
3: now you can restore the VG metadata
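
A sketch of the whole sequence, reusing the example UUID, archive file and device
from above (substitute your own VG name and paths):

pvcreate --uuid "FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk" \
         --restorefile /etc/lvm/archive/VG_00050.vg /dev/sdh1  # recreate the PV with the old UUID
vgcfgrestore -f /etc/lvm/archive/VG_00050.vg VG                # restore the VG metadata
vgchange -ay VG                                                # reactivate; the raid LVs should resync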


https://www.centos.org/docs/5/html/Cluster_Logical_Volume_Manager/mdatarecover.html




-- 
this is my life and I live it as long as God wills


* Re: [linux-lvm] Removing disk from raid LVM
  2015-03-10  9:23 ` emmanuel segura
@ 2015-03-10  9:34   ` Libor Klepáč
  0 siblings, 0 replies; 14+ messages in thread
From: Libor Klepáč @ 2015-03-10  9:34 UTC (permalink / raw)
  To: linux-lvm


Hi,
thanks for the link.
I think this procedure is what we used last week; maybe I even read it on this very page.

Shut down the computer, replace the disk, boot the computer, create the PV with
the old UUID, then do vgcfgrestore.

This "echo 1 > /sys/block/sde/device/delete" is what i test now in virtual 
machine, it's like if disk failed completly, i think LVM raid should be able to handle 
this situation ;)

With regards,
Libor


* Re: [linux-lvm] Removing disk from raid LVM
  2015-03-09 11:21 [linux-lvm] Removing disk from raid LVM Libor Klepáč
  2015-03-10  9:23 ` emmanuel segura
@ 2015-03-10 14:05 ` John Stoffel
  2015-03-11 13:05   ` Libor Klepáč
  2015-03-11 23:12 ` Premchand Gupta
  2 siblings, 1 reply; 14+ messages in thread
From: John Stoffel @ 2015-03-10 14:05 UTC (permalink / raw)
  To: LVM general discussion and development


Libor> we have 4x3TB disks in LVM for backups and I set up per-
Libor> customer/per-"task" LVs of type raid5 on it.

Can you post the configuration details please, since they do matter.
It would seem to me that it would be better to use 'md' to create the
underlying RAID5 device, and then use LVM on top of that /dev/md0 to
create the customer LV(s) as needed.

Libor> Last week, smartd started warning us that one of the disks
Libor> would soon fail.

Libor> So we shut down the computer, replaced the disk and then used
Libor> vgcfgrestore on the new disk to restore the metadata.

You should have shut down the system, added a new disk, and then
rebooted the system.  At that point you would add the new disk into
the RAID5 and then fail the dying disk.  It would be transparent to
the LVM setup and much safer.
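
With md underneath, that disk swap is just a couple of commands (the md device
and disk names here are only examples):

mdadm /dev/md0 --add /dev/sdh     # add the new disk as a spare
mdadm /dev/md0 --fail /dev/sdd    # fail the dying disk; md rebuilds onto the spare
mdadm /dev/md0 --remove /dev/sdd  # detach it from the array once it is failed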

I'd also strongly advise you to set up RAID6 and have a hot spare as
well, so that you don't have this type of issue in the future.

Libor> The result was that some LVs came up with a damaged filesystem,
Libor> and some didn't come up at all, with messages like (one pair of
Libor> rimage and rmeta was "wrong"; in the KVPM util its type was "virtual")

This sounds very much like you just lost a bunch of data, which RAID5
shouldn't do.  So please post the details of your setup, starting at
the disk level and moving up the stack to the filesystem(s) you have
mounted for backups.  We don't need the customer names, etc, just the
details of the system.

Also, which version of lvm, md, linux kernel, etc are you using?  The
more details the better.


* Re: [linux-lvm] Removing disk from raid LVM
  2015-03-10 14:05 ` John Stoffel
@ 2015-03-11 13:05   ` Libor Klepáč
  2015-03-11 15:57     ` John Stoffel
  0 siblings, 1 reply; 14+ messages in thread
From: Libor Klepáč @ 2015-03-11 13:05 UTC (permalink / raw)
  To: linux-lvm


Hello John,

On Tue, 10 March 2015, 10:05:38, John Stoffel wrote:
> Libor> we have 4x3TB disks in LVM for backups and I set up per-
> Libor> customer/per-"task" LVs of type raid5 on it.
> 
> Can you post the configuration details please, since they do matter.
> It would seem to me that it would be better to use 'md' to create the
> underlying RAID5 device, and then use LVM on top of that /dev/md0 to
> create the customer LV(s) as needed.

I had always used mdraid before (in fact, the OS is on other disks on mdraid). But I
really loved the idea/flexibility of raid in LVM and wanted to try it.

> 
> Libor> Last week, smartd started warning us that one of the disks
> Libor> would soon fail.
> 
> Libor> So we shut down the computer, replaced the disk and then used
> Libor> vgcfgrestore on the new disk to restore the metadata.
> 
> You should have shut down the system, added a new disk, and then
> rebooted the system.  At that point you would add the new disk into
> the RAID5 and then fail the dying disk.  It would be transparent to
> the LVM setup and much safer.
> 

I see, but there is no physical space for an extra disk. Maybe an external disk would
do the trick, but it would take hours to migrate the data, and the server is in a remote
housing facility.

> I'd also strongly advise you to set up RAID6 and have a hot spare as
> well, so that you don't have this type of issue in the future.
> 
> Libor> The result was that some LVs came up with a damaged filesystem,
> Libor> and some didn't come up at all, with messages like (one pair of
> Libor> rimage and rmeta was "wrong"; in the KVPM util its type was "virtual")
> 
> This sounds very much like you just lost a bunch of data, which RAID5
> shouldn't do.  So please post the details of your setup, starting at
> the disk level and moving up the stack to the filesystem(s) you have
> mounted for backups.  We don't need the customer names, etc, just the
> details of the system.
> 

The system is a Dell T20.

The backup disks are connected via
00:1f.2 SATA controller: Intel Corporation Lynx Point 6-port SATA Controller 1 [AHCI mode] (rev 04)
The system disks are connected via
04:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9230 PCIe SATA 6Gb/s Controller (rev 10)

The first four are 3TB SATA disks, 3.5'', 7200RPM:
[0:0:0:0]    disk    ATA      TOSHIBA MG03ACA3 n/a   /dev/sda 
[1:0:0:0]    disk    ATA      ST3000NM0033-9ZM n/a   /dev/sdb 
[2:0:0:0]    disk    ATA      ST3000NM0033-9ZM n/a   /dev/sdg 
[3:0:0:0]    disk    ATA      TOSHIBA MG03ACA3 n/a   /dev/sdd

The remaining two are 500GB 2.5'' disks for the system:
[6:0:0:0]    disk    ATA      ST9500620NS      n/a   /dev/sde 
[8:0:0:0]    disk    ATA      ST9500620NS      n/a   /dev/sdf 

The system is on mdraid (raid1) + LVM.

On top of the LVs, we use ext4 for the OS and XFS for the backup/customer disks.

> Also, which version of lvm, md, linux kernel, etc are you using?  The
> more details the better.

It's Debian Wheezy, with kernel 3.14(.14).
The system LVM is
  LVM version:     2.02.95(2) (2012-03-06)
  Library version: 1.02.74 (2012-03-06)
  Driver version:  4.27.0

I also use another copy of LVM for raid operations (creating LVs, extending LVs,
showing resync progress)...
  LVM version:     2.02.104(2) (2013-11-13)
  Library version: 1.02.83 (2013-11-13)
  Driver version:  4.27.0


Could the problem be that the VG/LVs were first created using the old system utilities?
I think I could upgrade the whole system to Debian Jessie as a last-resort operation.
That would bring the kernel to version 3.16 and LVM to 2.02.111.

Thanks for your reply

With regards,
Libor






* Re: [linux-lvm] Removing disk from raid LVM
  2015-03-11 13:05   ` Libor Klepáč
@ 2015-03-11 15:57     ` John Stoffel
  2015-03-11 18:02       ` Libor Klepáč
  0 siblings, 1 reply; 14+ messages in thread
From: John Stoffel @ 2015-03-11 15:57 UTC (permalink / raw)
  To: LVM general discussion and development


Libor,

Can you please post the output of the following commands, so that we
can understand your setup and see what's really going on here.  More
info is better than less!

  cat /proc/partitions
  pvs -v
  pvdisplay
  vgs -v 
  vgdisplay
  lvs -v
  lvdisplay

and if you have PVs which are NOT on top of raw partitions, then
include cat /proc/mdstat as well, or whatever device tool you have.  

Basically, we're trying to understand how you configured your setup
from the physical disks, to the volumes on them.  I don't care much
about the filesystems, they're going to be inside individual LVs I
assume.  

John


* Re: [linux-lvm] Removing disk from raid LVM
  2015-03-11 15:57     ` John Stoffel
@ 2015-03-11 18:02       ` Libor Klepáč
  2015-03-12 14:53         ` John Stoffel
  0 siblings, 1 reply; 14+ messages in thread
From: Libor Klepáč @ 2015-03-11 18:02 UTC (permalink / raw)
  To: linux-lvm


Hello John,
here it comes.
None of my PVs are on top of raw partitions, I think. The system is on mdraid, and the backup PVs are directly on whole disks, without partitions.

I think that these LVs:
lvAmandaDaily01old
lvBackupPc
lvBackupRsync
are old damaged LVs that I left around for experimenting on.

These LVs are, I believe, broken leftover parts of the old raid:
lvAmandaDailyAuS01_rimage_2_extracted
lvAmandaDailyAuS01_rmeta_2_extracted

LV lvAmandaDailyBlS01 is also from before the crash, but I didn't try to repair it (I think).

Libor


---------------
cat /proc/mdstat (mdraid used only for OS)
Personalities : [raid1] [raid10] [raid6] [raid5] [raid4] 
md1 : active raid1 sde3[0] sdf3[1]
      487504704 blocks super 1.2 [2/2] [UU]
      bitmap: 1/4 pages [4KB], 65536KB chunk

md0 : active raid1 sde2[0] sdf2[1]
      249664 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
-----------------

cat /proc/partitions       
major minor  #blocks  name

   8       80  488386584 sdf
   8       81     498688 sdf1
   8       82     249856 sdf2
   8       83  487635968 sdf3
   8       48 2930266584 sdd
   8       64  488386584 sde
   8       65     498688 sde1
   8       66     249856 sde2
   8       67  487635968 sde3
   8        0 2930266584 sda
   8       16 2930266584 sdb
   9        0     249664 md0
   9        1  487504704 md1
 253        0   67108864 dm-0
 253        1    3903488 dm-1
   8       96 2930266584 sdg
 253      121       4096 dm-121
 253      122   34955264 dm-122
 253      123       4096 dm-123
 253      124   34955264 dm-124
 253      125       4096 dm-125
 253      126   34955264 dm-126
 253      127       4096 dm-127
 253      128   34955264 dm-128
 253      129  104865792 dm-129
 253       11       4096 dm-11
 253       12  209715200 dm-12
 253       13       4096 dm-13
 253       14  209715200 dm-14
 253       15       4096 dm-15
 253       16  209715200 dm-16
 253       17       4096 dm-17
 253       18  209715200 dm-18
 253       19  629145600 dm-19
 253       38       4096 dm-38
 253       39  122335232 dm-39
 253       40       4096 dm-40
 253       41  122335232 dm-41
 253       42       4096 dm-42
 253       43  122335232 dm-43
 253       44       4096 dm-44
 253       45  122335232 dm-45
 253       46  367005696 dm-46
 253       47       4096 dm-47
 253       48   16777216 dm-48
 253       49       4096 dm-49
 253       50   16777216 dm-50
 253       51   16777216 dm-51
 253       52       4096 dm-52
 253       53    4194304 dm-53
 253       54       4096 dm-54
 253       55    4194304 dm-55
 253       56    4194304 dm-56
 253       57       4096 dm-57
 253       58   11186176 dm-58
 253       59       4096 dm-59
 253       60   11186176 dm-60
 253       61       4096 dm-61
 253       62   11186176 dm-62
 253       63       4096 dm-63
 253       64   11186176 dm-64
 253       65   33558528 dm-65
 253        2       4096 dm-2
 253        3  125829120 dm-3
 253        4       4096 dm-4
 253        5  125829120 dm-5
 253        6       4096 dm-6
 253        7  125829120 dm-7
 253        8       4096 dm-8
 253        9  125829120 dm-9
 253       10  377487360 dm-10
 253       20       4096 dm-20
 253       21   12582912 dm-21
 253       22       4096 dm-22
 253       23   12582912 dm-23
 253       24       4096 dm-24
 253       25   12582912 dm-25
 253       26       4096 dm-26
 253       27   12582912 dm-27
 253       28   37748736 dm-28
 253       66       4096 dm-66
 253       67  122335232 dm-67
 253       68       4096 dm-68
 253       69  122335232 dm-69
 253       70       4096 dm-70
 253       71  122335232 dm-71
 253       72       4096 dm-72
 253       73  122335232 dm-73
 253       74  367005696 dm-74
 253       31  416489472 dm-31
 253       32       4096 dm-32
 253       75   34955264 dm-75
 253       78       4096 dm-78
 253       79   34955264 dm-79
 253       80       4096 dm-80
 253       81   34955264 dm-81
 253       82  104865792 dm-82
 253       92       4096 dm-92
 253       93   17477632 dm-93
 253       94       4096 dm-94
 253       95   17477632 dm-95
 253       96       4096 dm-96
 253       97   17477632 dm-97
 253       98       4096 dm-98
 253       99   17477632 dm-99
 253      100   52432896 dm-100
 253       76       4096 dm-76
 253       77   50331648 dm-77
 253       83       4096 dm-83
 253       84   50331648 dm-84
 253       85       4096 dm-85
 253       86   50331648 dm-86
 253       87       4096 dm-87
 253       88   50331648 dm-88
 253       89  150994944 dm-89
 253       90       4096 dm-90
 253       91   44740608 dm-91
 253      101       4096 dm-101
 253      102   44740608 dm-102
 253      103       4096 dm-103
 253      104   44740608 dm-104
 253      105       4096 dm-105
 253      106   44740608 dm-106
 253      107  134221824 dm-107

-------------------------------
pvs -v 
    Scanning for physical volume names
  PV         VG         Fmt  Attr PSize   PFree DevSize PV UUID                               
  /dev/md1   vgPecDisk1 lvm2 a--  464.92g    0  464.92g MLqS2b-iuvt-7ES8-rPHo-SPwm-Liiz-TUtHLI
  /dev/sda   vgPecDisk2 lvm2 a--    2.73t 1.20t   2.73t 0vECyp-EndR-oD66-va0g-0ORd-cS7E-7rMylw
  /dev/sdb   vgPecDisk2 lvm2 a--    2.73t 1.20t   2.73t 5ZhwR7-AClb-oEsi-s2Zi-xouM-en0Z-ZQ0fwr
  /dev/sdd   vgPecDisk2 lvm2 a--    2.73t 2.03t   2.73t RI3dhw-Ns0t-BLyN-BQd5-vDx0-ucHb-X8ntkO
  /dev/sdg   vgPecDisk2 lvm2 a--    2.73t 1.23t   2.73t yaohhB-dkF6-rQRk-dBsL-JHS7-8KOo-eYSqOj

-------------------------------

pvdisplay
  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               vgPecDisk1
  PV Size               464.92 GiB / not usable 1.81 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              119019
  Free PE               0
  Allocated PE          119019
  PV UUID               MLqS2b-iuvt-7ES8-rPHo-SPwm-Liiz-TUtHLI
   
  --- Physical volume ---
  PV Name               /dev/sdd
  VG Name               vgPecDisk2
  PV Size               2.73 TiB / not usable 2.00 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              715396
  Free PE               531917
  Allocated PE          183479
  PV UUID               RI3dhw-Ns0t-BLyN-BQd5-vDx0-ucHb-X8ntkO
   
  --- Physical volume ---
  PV Name               /dev/sda
  VG Name               vgPecDisk2
  PV Size               2.73 TiB / not usable 1022.00 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              714884
  Free PE               315671
  Allocated PE          399213
  PV UUID               0vECyp-EndR-oD66-va0g-0ORd-cS7E-7rMylw
   
  --- Physical volume ---
  PV Name               /dev/sdb
  VG Name               vgPecDisk2
  PV Size               2.73 TiB / not usable 1022.00 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              714884
  Free PE               315671
  Allocated PE          399213
  PV UUID               5ZhwR7-AClb-oEsi-s2Zi-xouM-en0Z-ZQ0fwr
   
  --- Physical volume ---
  PV Name               /dev/sdg
  VG Name               vgPecDisk2
  PV Size               2.73 TiB / not usable 2.00 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              715396
  Free PE               321305
  Allocated PE          394091
  PV UUID               yaohhB-dkF6-rQRk-dBsL-JHS7-8KOo-eYSqOj

-----------------------------

vgs -v 
 VG         Attr   Ext   #PV #LV #SN VSize   VFree VG UUID                               
  vgPecDisk1 wz--n- 4.00m   1   3   0 464.92g    0  Dtbxaa-KySR-R1VY-Wliy-Lqba-HQyt-7PYmnv
  vgPecDisk2 wz--n- 4.00m   4  20   0  10.91t 5.66t 0Ok7sE-Eo1O-pbuT-LX3D-dluI-25dw-cr9DY8

--------------------------------

vgdisplay
  --- Volume group ---
  VG Name               vgPecDisk1
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  9
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               464.92 GiB
  PE Size               4.00 MiB
  Total PE              119019
  Alloc PE / Size       119019 / 464.92 GiB
  Free  PE / Size       0 / 0   
  VG UUID               Dtbxaa-KySR-R1VY-Wliy-Lqba-HQyt-7PYmnv
   
  --- Volume group ---
  VG Name               vgPecDisk2
  System ID             
  Format                lvm2
  Metadata Areas        8
  Metadata Sequence No  476
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                20
  Open LV               13
  Max PV                0
  Cur PV                4
  Act PV                4
  VG Size               10.91 TiB
  PE Size               4.00 MiB
  Total PE              2860560
  Alloc PE / Size       1375996 / 5.25 TiB
  Free  PE / Size       1484564 / 5.66 TiB
  VG UUID               0Ok7sE-Eo1O-pbuT-LX3D-dluI-25dw-cr9DY8

------------------------------

lvs -v
    Finding all logical volumes
  LV                                           VG         #Seg Attr     LSize   Maj Min KMaj KMin Pool Origin Data%  Meta%  Move Copy%  Log Convert LV UUID                               
  lvSwap                                       vgPecDisk1    1 -wi-ao--   3.72g  -1  -1 253  1                                                      Jo9ie0-jKfo-Ks6Q-TsgK-skvM-qJio-Ar5WWe
  lvSystem                                     vgPecDisk1    1 -wi-ao--  64.00g  -1  -1 253  0                                                      ZEdPxL-Wn5s-QapH-BzdZ-4Os7-eV0g-SVwNoD
  lvTmp                                        vgPecDisk1    1 -wi-ao-- 397.20g  -1  -1 253  31                                                     JjgNKC-ctgq-VDz3-BJbn-HZHd-W3s2-XWxUT9
  lvAmandaDaily01                              vgPecDisk2    1 rwi-aor- 100.01g  -1  -1 253  82                                                     lrBae6-Yj5V-OZUT-Z4Qz-umsu-6SGe-35SJfK
  lvAmandaDaily01old                           vgPecDisk2    1 rwi---r-   1.09t  -1  -1 -1   -1                                                     nofmj3-ntya-cbDi-ZjZH-zBKV-K1PA-Sw0Pvq
  lvAmandaDailyAuS01                    vgPecDisk2    1 rwi-aor- 360.00g  -1  -1 253  10                                                     fW0QrZ-sa2J-21nM-0qDv-nTUx-Eomx-3KTocB
  lvAmandaDailyAuS01_rimage_2_extracted vgPecDisk2    1 vwi---v- 120.00g  -1  -1 -1   -1                                                     Ii0Hyk-A2d3-PUC3-CMZL-CqDY-qFLs-yuDKwq
  lvAmandaDailyAuS01_rmeta_2_extracted  vgPecDisk2    1 vwi---v-   4.00m  -1  -1 -1   -1                                                     WNq913-IM82-Cnh0-dmPb-BzWE-KJNP-H84dmS
  lvAmandaDailyBlS01                    vgPecDisk2    1 rwi---r- 320.00g  -1  -1 -1   -1                                                     fJTCsr-MF1S-jAXo-7SHc-Beyf-ICMV-LJQpnt
  lvAmandaDailyElme01                          vgPecDisk2    1 rwi-aor- 144.00g  -1  -1 253  89                                                     1Q0Sre-CnV1-wqPZ-9bf0-qnW6-6nqt-NOlxyp
  lvAmandaDailyEl01                          vgPecDisk2    1 rwi-aor- 350.00g  -1  -1 253  74                                                     Sni0fy-Bf1V-AKXS-Qfd1-qmFC-MUwY-xgCw22
  lvAmandaHoldingDisk                          vgPecDisk2    1 rwi-aor-  36.00g  -1  -1 253  28                                                     e5pr0g-cH2I-dMHd-lwsi-JRR0-0D0P-67eXLY
  lvBackupElme2                                vgPecDisk2    1 rwi-aor- 350.00g  -1  -1 253  46                                                     Ee9RAX-ycZ8-PNzl-MUvg-VjPl-8vfW-BjfaQ9
  lvBackupPc                                   vgPecDisk2    1 rwi---r- 640.01g  -1  -1 -1   -1                                                     KaX4sX-CJsU-L5Ac-85OA-74HT-JX3L-nFxFTZ
  lvBackupPc2                                  vgPecDisk2    1 rwi-aor- 600.00g  -1  -1 253  19                                                     2o9JWs-2hZT-4uMO-WJTd-ByMH-ugd9-3iGfke
  lvBackupRsync                                vgPecDisk2    1 rwi---r- 256.01g  -1  -1 -1   -1                                                     cQOavD-85Pj-yu6X-yTpS-qxxT-XBWV-WIISKQ
  lvBackupRsync2                               vgPecDisk2    1 rwi-aor- 100.01g  -1  -1 253  129                                                    S4frRu-dVgG-Pomd-5niY-bLzd-S2wq-KxMPhM
  lvBackupRsyncCCCrossserver                   vgPecDisk2    1 rwi-aor-  50.00g  -1  -1 253  100                                                    ytiis9-T1Pq-FAjT-MGhn-2nKd-zHFk-ROzeUf
  lvBackupVokapo                               vgPecDisk2    1 rwi-aor- 128.00g  -1  -1 253  107                                                    pq67wa-NjPs-PwEx-rs1G-cZxf-s5xI-wkB9Ag
  lvLXCElMysqlSlave                          vgPecDisk2    1 rwi-aor-  32.00g  -1  -1 253  65                                                     2fh6ch-2y5s-N3Ua-1Q1u-XSfx-JViq-x6dwut
  lvLXCIcinga                                  vgPecDisk2    1 rwi---r-  32.00g  -1  -1 -1   -1                                                     2kYSPl-HONv-zuf0-dhQn-1xI3-YVuU-brbumU
  lvLXCJabber                                  vgPecDisk2    1 rwi-aom-   4.00g  -1  -1 253  56                                  100.00             AAWI1f-fYFO-2ewM-YvfP-AdC4-bXd8-k2NiZZ
  lvLXCWebxMysqlSlave                          vgPecDisk2    1 rwi-aom-  16.00g  -1  -1 253  51                                  100.00             m2dzFv-axwm-2Ne6-kJkN-a3zo-E8Ai-qViTae

-----------------------------
lvdisplay
  --- Logical volume ---
  LV Path                /dev/vgPecDisk1/lvSwap
  LV Name                lvSwap
  VG Name                vgPecDisk1
  LV UUID                Jo9ie0-jKfo-Ks6Q-TsgK-skvM-qJio-Ar5WWe
  LV Write Access        read/write
  LV Creation host, time pec, 2014-02-20 12:22:52 +0100
  LV Status              available
  # open                 2
  LV Size                3.72 GiB
  Current LE             953
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1
   
  --- Logical volume ---
  LV Path                /dev/vgPecDisk1/lvSystem
  LV Name                lvSystem
  VG Name                vgPecDisk1
  LV UUID                ZEdPxL-Wn5s-QapH-BzdZ-4Os7-eV0g-SVwNoD
  LV Write Access        read/write
  LV Creation host, time pec, 2014-02-20 12:23:03 +0100
  LV Status              available
  # open                 1
  LV Size                64.00 GiB
  Current LE             16384
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
   
  --- Logical volume ---
  LV Path                /dev/vgPecDisk1/lvTmp
  LV Name                lvTmp
  VG Name                vgPecDisk1
  LV UUID                JjgNKC-ctgq-VDz3-BJbn-HZHd-W3s2-XWxUT9
  LV Write Access        read/write
  LV Creation host, time pec, 2014-06-10 06:47:09 +0200
  LV Status              available
  # open                 1
  LV Size                397.20 GiB
  Current LE             101682
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:31
   
  --- Logical volume ---
  LV Path                /dev/vgPecDisk2/lvLXCWebxMysqlSlave
  LV Name                lvLXCWebxMysqlSlave
  VG Name                vgPecDisk2
  LV UUID                m2dzFv-axwm-2Ne6-kJkN-a3zo-E8Ai-qViTae
  LV Write Access        read/write
  LV Creation host, time pec, 2014-02-21 18:15:22 +0100
  LV Status              available
  # open                 1
  LV Size                16.00 GiB
  Current LE             4096
  Mirrored volumes       2
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:51
   
  --- Logical volume ---
  LV Path                /dev/vgPecDisk2/lvAmandaDaily01old
  LV Name                lvAmandaDaily01old
  VG Name                vgPecDisk2
  LV UUID                nofmj3-ntya-cbDi-ZjZH-zBKV-K1PA-Sw0Pvq
  LV Write Access        read/write
  LV Creation host, time pec, 2014-02-24 21:03:49 +0100
  LV Status              NOT available
  LV Size                1.09 TiB
  Current LE             286722
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
   
  --- Logical volume ---
  LV Path                /dev/vgPecDisk2/lvAmandaDailyBlS01
  LV Name                lvAmandaDailyBlS01
  VG Name                vgPecDisk2
  LV UUID                fJTCsr-MF1S-jAXo-7SHc-Beyf-ICMV-LJQpnt
  LV Write Access        read/write
  LV Creation host, time pec, 2014-03-18 08:50:38 +0100
  LV Status              NOT available
  LV Size                320.00 GiB
  Current LE             81921
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
   
  --- Logical volume ---
  LV Path                /dev/vgPecDisk2/lvLXCJabber
  LV Name                lvLXCJabber
  VG Name                vgPecDisk2
  LV UUID                AAWI1f-fYFO-2ewM-YvfP-AdC4-bXd8-k2NiZZ
  LV Write Access        read/write
  LV Creation host, time pec, 2014-03-20 15:19:54 +0100
  LV Status              available
  # open                 1
  LV Size                4.00 GiB
  Current LE             1024
  Mirrored volumes       2
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:56
   
  --- Logical volume ---
  LV Path                /dev/vgPecDisk2/lvBackupPc
  LV Name                lvBackupPc
  VG Name                vgPecDisk2
  LV UUID                KaX4sX-CJsU-L5Ac-85OA-74HT-JX3L-nFxFTZ
  LV Write Access        read/write
  LV Creation host, time pec, 2014-07-01 13:22:50 +0200
  LV Status              NOT available
  LV Size                640.01 GiB
  Current LE             163842
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
   
  --- Logical volume ---
  LV Path                /dev/vgPecDisk2/lvLXCIcinga
  LV Name                lvLXCIcinga
  VG Name                vgPecDisk2
  LV UUID                2kYSPl-HONv-zuf0-dhQn-1xI3-YVuU-brbumU
  LV Write Access        read/write
  LV Creation host, time pec, 2014-08-13 19:04:28 +0200
  LV Status              NOT available
  LV Size                32.00 GiB
  Current LE             8193
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
   
  --- Logical volume ---
  LV Path                /dev/vgPecDisk2/lvBackupRsync
  LV Name                lvBackupRsync
  VG Name                vgPecDisk2
  LV UUID                cQOavD-85Pj-yu6X-yTpS-qxxT-XBWV-WIISKQ
  LV Write Access        read/write
  LV Creation host, time pec, 2014-09-17 14:49:57 +0200
  LV Status              NOT available
  LV Size                256.01 GiB
  Current LE             65538
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
   
  --- Logical volume ---
  LV Path                /dev/vgPecDisk2/lvAmandaDaily01
  LV Name                lvAmandaDaily01
  VG Name                vgPecDisk2
  LV UUID                lrBae6-Yj5V-OZUT-Z4Qz-umsu-6SGe-35SJfK
  LV Write Access        read/write
  LV Creation host, time pec, 2015-03-04 08:26:46 +0100
  LV Status              available
  # open                 1
  LV Size                100.01 GiB
  Current LE             25602
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1024
  Block device           253:82
   
  --- Logical volume ---
  LV Path                /dev/vgPecDisk2/lvBackupRsync2
  LV Name                lvBackupRsync2
  VG Name                vgPecDisk2
  LV UUID                S4frRu-dVgG-Pomd-5niY-bLzd-S2wq-KxMPhM
  LV Write Access        read/write
  LV Creation host, time pec, 2015-03-04 19:17:17 +0100
  LV Status              available
  # open                 1
  LV Size                100.01 GiB
  Current LE             25602
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1024
  Block device           253:129
   
  --- Logical volume ---
  LV Path                /dev/vgPecDisk2/lvBackupPc2
  LV Name                lvBackupPc2
  VG Name                vgPecDisk2
  LV UUID                2o9JWs-2hZT-4uMO-WJTd-ByMH-ugd9-3iGfke
  LV Write Access        read/write
  LV Creation host, time pec, 2015-03-04 23:13:51 +0100
  LV Status              available
  # open                 1
  LV Size                600.00 GiB
  Current LE             153600
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1024
  Block device           253:19
   
  --- Logical volume ---
  LV Path                /dev/vgPecDisk2/lvBackupElme2
  LV Name                lvBackupElme2
  VG Name                vgPecDisk2
  LV UUID                Ee9RAX-ycZ8-PNzl-MUvg-VjPl-8vfW-BjfaQ9
  LV Write Access        read/write
  LV Creation host, time pec, 2015-03-04 23:21:44 +0100
  LV Status              available
  # open                 1
  LV Size                350.00 GiB
  Current LE             89601
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1024
  Block device           253:46
   
  --- Logical volume ---
  LV Path                /dev/vgPecDisk2/lvLXCElMysqlSlave
  LV Name                lvLXCElMysqlSlave
  VG Name                vgPecDisk2
  LV UUID                2fh6ch-2y5s-N3Ua-1Q1u-XSfx-JViq-x6dwut
  LV Write Access        read/write
  LV Creation host, time pec, 2015-03-05 16:36:42 +0100
  LV Status              available
  # open                 1
  LV Size                32.00 GiB
  Current LE             8193
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1024
  Block device           253:65
   
  --- Logical volume ---
  LV Path                /dev/vgPecDisk2/lvAmandaDailyAuS01_rimage_2_extracted
  LV Name                lvAmandaDailyAuS01_rimage_2_extracted
  VG Name                vgPecDisk2
  LV UUID                Ii0Hyk-A2d3-PUC3-CMZL-CqDY-qFLs-yuDKwq
  LV Write Access        read/write
  LV Creation host, time pec, 2014-02-25 09:55:03 +0100
  LV Status              NOT available
  LV Size                120.00 GiB
  Current LE             30721
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
   
  --- Logical volume ---
  LV Path                /dev/vgPecDisk2/lvAmandaDailyAuS01_rmeta_2_extracted
  LV Name                lvAmandaDailyAuS01_rmeta_2_extracted
  VG Name                vgPecDisk2
  LV UUID                WNq913-IM82-Cnh0-dmPb-BzWE-KJNP-H84dmS
  LV Write Access        read/write
  LV Creation host, time pec, 2014-02-25 09:55:03 +0100
  LV Status              NOT available
  LV Size                4.00 MiB
  Current LE             1
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
   
  --- Logical volume ---
  LV Path                /dev/vgPecDisk2/lvAmandaDailyAuS01
  LV Name                lvAmandaDailyAuS01
  VG Name                vgPecDisk2
  LV UUID                fW0QrZ-sa2J-21nM-0qDv-nTUx-Eomx-3KTocB
  LV Write Access        read/write
  LV Creation host, time pec, 2015-03-05 17:49:47 +0100
  LV Status              available
  # open                 1
  LV Size                360.00 GiB
  Current LE             92160
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1024
  Block device           253:10
   
  --- Logical volume ---
  LV Path                /dev/vgPecDisk2/lvAmandaHoldingDisk
  LV Name                lvAmandaHoldingDisk
  VG Name                vgPecDisk2
  LV UUID                e5pr0g-cH2I-dMHd-lwsi-JRR0-0D0P-67eXLY
  LV Write Access        read/write
  LV Creation host, time pec, 2015-03-05 18:48:36 +0100
  LV Status              available
  # open                 1
  LV Size                36.00 GiB
  Current LE             9216
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1024
  Block device           253:28
   
  --- Logical volume ---
  LV Path                /dev/vgPecDisk2/lvAmandaDailyEl01
  LV Name                lvAmandaDailyEl01
  VG Name                vgPecDisk2
  LV UUID                Sni0fy-Bf1V-AKXS-Qfd1-qmFC-MUwY-xgCw22
  LV Write Access        read/write
  LV Creation host, time pec, 2015-03-05 19:00:26 +0100
  LV Status              available
  # open                 1
  LV Size                350.00 GiB
  Current LE             89601
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1024
  Block device           253:74
   
  --- Logical volume ---
  LV Path                /dev/vgPecDisk2/lvBackupRsyncCCCrossserver
  LV Name                lvBackupRsyncCCCrossserver
  VG Name                vgPecDisk2
  LV UUID                ytiis9-T1Pq-FAjT-MGhn-2nKd-zHFk-ROzeUf
  LV Write Access        read/write
  LV Creation host, time pec, 2015-03-05 22:39:09 +0100
  LV Status              available
  # open                 1
  LV Size                50.00 GiB
  Current LE             12801
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1024
  Block device           253:100
   
  --- Logical volume ---
  LV Path                /dev/vgPecDisk2/lvAmandaDailyElme01
  LV Name                lvAmandaDailyElme01
  VG Name                vgPecDisk2
  LV UUID                1Q0Sre-CnV1-wqPZ-9bf0-qnW6-6nqt-NOlxyp
  LV Write Access        read/write
  LV Creation host, time pec, 2015-03-05 22:49:05 +0100
  LV Status              available
  # open                 1
  LV Size                144.00 GiB
  Current LE             36864
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1024
  Block device           253:89
   
  --- Logical volume ---
  LV Path                /dev/vgPecDisk2/lvBackupVokapo
  LV Name                lvBackupVokapo
  VG Name                vgPecDisk2
  LV UUID                pq67wa-NjPs-PwEx-rs1G-cZxf-s5xI-wkB9Ag
  LV Write Access        read/write
  LV Creation host, time pec, 2015-03-05 22:54:23 +0100
  LV Status              available
  # open                 1
  LV Size                128.00 GiB
  Current LE             32769
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1024
  Block device           253:107

-----------------------







* Re: [linux-lvm] Removing disk from raid LVM
  2015-03-09 11:21 [linux-lvm] Removing disk from raid LVM Libor Klepáč
  2015-03-10  9:23 ` emmanuel segura
  2015-03-10 14:05 ` John Stoffel
@ 2015-03-11 23:12 ` Premchand Gupta
  2 siblings, 0 replies; 14+ messages in thread
From: Premchand Gupta @ 2015-03-11 23:12 UTC (permalink / raw)
  To: LVM general discussion and development


Hi,

If the software raid is configured underneath the LVM, then follow the steps below
(a rough sketch follows the list):
step 1) fail and remove the device from the raid 5 with the mdadm command
step 2) remove the disk from the LV, the VG and then the PV
step 3) delete it from the system with the echo command
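
A rough sketch of those steps (all device and VG names are examples; step 2 only
applies if the failing disk is itself an LVM PV, and assumes no LV extents remain
on it):

mdadm /dev/md0 --fail /dev/sdd         # step 1: fail the device in the raid 5...
mdadm /dev/md0 --remove /dev/sdd       #         ...and remove it from the array
vgreduce vgExample /dev/sdd            # step 2: drop the disk from the VG...
pvremove /dev/sdd                      #         ...and wipe its PV label
echo 1 > /sys/block/sdd/device/delete  # step 3: detach it from the kernel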







Thanks & Regards
Premchand S. Gupta
09820314487


* Re: [linux-lvm] Removing disk from raid LVM
  2015-03-11 18:02       ` Libor Klepáč
@ 2015-03-12 14:53         ` John Stoffel
  2015-03-12 15:21           ` Libor Klepáč
  2015-03-12 15:32           ` Libor Klepáč
  0 siblings, 2 replies; 14+ messages in thread
From: John Stoffel @ 2015-03-12 14:53 UTC (permalink / raw)
  To: LVM general discussion and development


Libor> here it comes.

Great, this is a big help, and it shows me that you are NOT using
RAID5 for your backup volumes.  The first clue is that you have 4 x
3TB disks and you only have a VG with 10.91t (terabytes) of usable
space, with a name of 'vgPecDisk2'.

And then none of the LVs in this VG are of type RAID5, so I don't
think you actually created them properly.  So when you lost one of the
disks in your VG, you immediately lost any LVs which had extents on
that missing disk.  Even though you did a vgcfgrestore, that did NOT
restore the data.

You really need to redo this entirely.  What you WANT to do is this:

0. copy all the remaining good backups elsewhere.  You want to empty
   all of the disks in the existing vgPecDisk2 VG.

1. set up an MD RAID5 using the four big disks.

   mdadm --create /dev/md/vgPecDisk2 -l 5 -n 4 /dev/sda /dev/sdb /dev/sdd /dev/sdg

2. Create the PV on there

   pvcreate /dev/md/vgPecDisk2

3. Create a new VG on top of the RAID5 array.

   vgcreate vgPecDisk2 /dev/md/vgPecDisk2

4. NOW you create your LVs on top of this (a sketch follows)

   lvcreate ....
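
For instance (the LV name and size below are only placeholders):

   lvcreate -n lvCustomer1 -L 200G vgPecDisk2
   mkfs.xfs /dev/vgPecDisk2/lvCustomer1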


The problem you have is that none of your LVs was ever created with
RAID5.  If you want to do a test, try this (with four PVs, raid5 gets
three data stripes plus one parity device, hence --stripes 3):

  lvcreate -n test-raid5 --type raid5 --size 5g --stripes 3 vgPecDisk2

and if it works (which it probably will on your system, assuming your
LVM tools have support for RAID5 in the first place), you can then
look at the output of 'lvdisplay vgPecDisk2/test-raid5' to see how
many devices and stripes (segments) that LV has.
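
A quick way to check the same thing for every LV at once (a sketch; -a also
shows the hidden rimage/rmeta sub-LVs):

  lvs -a -o name,segtype,stripes,devices vgPecDisk2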

None of the ones you show have this.  For example, your lvBackupVokapo
only shows 1 segment.  Without multiple segments and RAID, you can't
survive any sort of failure in your setup.

This is why I personally only ever put LVs on top of RAID devices if I
have important data.

Does this help you understand what went wrong here?

John


Libor> I think i have all PV not on top of raw partitions. System is on mdraid and backup PVs are
Libor> directly on disks, without partitions.

Libor> I think that LVs:

Libor> lvAmandaDaily01old

Libor> lvBackupPc

Libor> lvBackupRsync

Libor> are old damaged LVs, i left for experimenting on.

Libor> These LVs are some broken parts of old raid?

Libor> lvAmandaDailyAuS01_rimage_2_extracted

Libor> lvAmandaDailyAuS01_rmeta_2_extracted

Libor> LV lvAmandaDailyBlS01 is also from before crash, but i didn't try to repair it (i think)

Libor> Libor

Libor> ---------------

Libor> cat /proc/mdstat (mdraid used only for OS)

Libor> Personalities : [raid1] [raid10] [raid6] [raid5] [raid4]

Libor> md1 : active raid1 sde3[0] sdf3[1]

Libor> 487504704 blocks super 1.2 [2/2] [UU]

Libor> bitmap: 1/4 pages [4KB], 65536KB chunk

Libor> md0 : active raid1 sde2[0] sdf2[1]

Libor> 249664 blocks super 1.2 [2/2] [UU]

Libor> bitmap: 0/1 pages [0KB], 65536KB chunk

Libor> -----------------

Libor> cat /proc/partitions

Libor> major minor #blocks name

Libor> 8 80 488386584 sdf

Libor> 8 81 498688 sdf1

Libor> 8 82 249856 sdf2

Libor> 8 83 487635968 sdf3

Libor> 8 48 2930266584 sdd

Libor> 8 64 488386584 sde

Libor> 8 65 498688 sde1

Libor> 8 66 249856 sde2

Libor> 8 67 487635968 sde3

Libor> 8 0 2930266584 sda

Libor> 8 16 2930266584 sdb

Libor> 9 0 249664 md0

Libor> 9 1 487504704 md1

Libor> 253 0 67108864 dm-0

Libor> 253 1 3903488 dm-1

Libor> 8 96 2930266584 sdg

Libor> 253 121 4096 dm-121

Libor> 253 122 34955264 dm-122

Libor> 253 123 4096 dm-123

Libor> 253 124 34955264 dm-124

Libor> 253 125 4096 dm-125

Libor> 253 126 34955264 dm-126

Libor> 253 127 4096 dm-127

Libor> 253 128 34955264 dm-128

Libor> 253 129 104865792 dm-129

Libor> 253 11 4096 dm-11

Libor> 253 12 209715200 dm-12

Libor> 253 13 4096 dm-13

Libor> 253 14 209715200 dm-14

Libor> 253 15 4096 dm-15

Libor> 253 16 209715200 dm-16

Libor> 253 17 4096 dm-17

Libor> 253 18 209715200 dm-18

Libor> 253 19 629145600 dm-19

Libor> 253 38 4096 dm-38

Libor> 253 39 122335232 dm-39

Libor> 253 40 4096 dm-40

Libor> 253 41 122335232 dm-41

Libor> 253 42 4096 dm-42

Libor> 253 43 122335232 dm-43

Libor> 253 44 4096 dm-44

Libor> 253 45 122335232 dm-45

Libor> 253 46 367005696 dm-46

Libor> 253 47 4096 dm-47

Libor> 253 48 16777216 dm-48

Libor> 253 49 4096 dm-49

Libor> 253 50 16777216 dm-50

Libor> 253 51 16777216 dm-51

Libor> 253 52 4096 dm-52

Libor> 253 53 4194304 dm-53

Libor> 253 54 4096 dm-54

Libor> 253 55 4194304 dm-55

Libor> 253 56 4194304 dm-56

Libor> 253 57 4096 dm-57

Libor> 253 58 11186176 dm-58

Libor> 253 59 4096 dm-59

Libor> 253 60 11186176 dm-60

Libor> 253 61 4096 dm-61

Libor> 253 62 11186176 dm-62

Libor> 253 63 4096 dm-63

Libor> 253 64 11186176 dm-64

Libor> 253 65 33558528 dm-65

Libor> 253 2 4096 dm-2

Libor> 253 3 125829120 dm-3

Libor> 253 4 4096 dm-4

Libor> 253 5 125829120 dm-5

Libor> 253 6 4096 dm-6

Libor> 253 7 125829120 dm-7

Libor> 253 8 4096 dm-8

Libor> 253 9 125829120 dm-9

Libor> 253 10 377487360 dm-10

Libor> 253 20 4096 dm-20

Libor> 253 21 12582912 dm-21

Libor> 253 22 4096 dm-22

Libor> 253 23 12582912 dm-23

Libor> 253 24 4096 dm-24

Libor> 253 25 12582912 dm-25

Libor> 253 26 4096 dm-26

Libor> 253 27 12582912 dm-27

Libor> 253 28 37748736 dm-28

Libor> 253 66 4096 dm-66

Libor> 253 67 122335232 dm-67

Libor> 253 68 4096 dm-68

Libor> 253 69 122335232 dm-69

Libor> 253 70 4096 dm-70

Libor> 253 71 122335232 dm-71

Libor> 253 72 4096 dm-72

Libor> 253 73 122335232 dm-73

Libor> 253 74 367005696 dm-74

Libor> 253 31 416489472 dm-31

Libor> 253 32 4096 dm-32

Libor> 253 75 34955264 dm-75

Libor> 253 78 4096 dm-78

Libor> 253 79 34955264 dm-79

Libor> 253 80 4096 dm-80

Libor> 253 81 34955264 dm-81

Libor> 253 82 104865792 dm-82

Libor> 253 92 4096 dm-92

Libor> 253 93 17477632 dm-93

Libor> 253 94 4096 dm-94

Libor> 253 95 17477632 dm-95

Libor> 253 96 4096 dm-96

Libor> 253 97 17477632 dm-97

Libor> 253 98 4096 dm-98

Libor> 253 99 17477632 dm-99

Libor> 253 100 52432896 dm-100

Libor> 253 76 4096 dm-76

Libor> 253 77 50331648 dm-77

Libor> 253 83 4096 dm-83

Libor> 253 84 50331648 dm-84

Libor> 253 85 4096 dm-85

Libor> 253 86 50331648 dm-86

Libor> 253 87 4096 dm-87

Libor> 253 88 50331648 dm-88

Libor> 253 89 150994944 dm-89

Libor> 253 90 4096 dm-90

Libor> 253 91 44740608 dm-91

Libor> 253 101 4096 dm-101

Libor> 253 102 44740608 dm-102

Libor> 253 103 4096 dm-103

Libor> 253 104 44740608 dm-104

Libor> 253 105 4096 dm-105

Libor> 253 106 44740608 dm-106

Libor> 253 107 134221824 dm-107

Libor> -------------------------------

Libor> pvs -v

Libor> Scanning for physical volume names

Libor> PV VG Fmt Attr PSize PFree DevSize PV UUID

Libor> /dev/md1 vgPecDisk1 lvm2 a-- 464.92g 0 464.92g MLqS2b-iuvt-7ES8-rPHo-SPwm-Liiz-TUtHLI

Libor> /dev/sda vgPecDisk2 lvm2 a-- 2.73t 1.20t 2.73t 0vECyp-EndR-oD66-va0g-0ORd-cS7E-7rMylw

Libor> /dev/sdb vgPecDisk2 lvm2 a-- 2.73t 1.20t 2.73t 5ZhwR7-AClb-oEsi-s2Zi-xouM-en0Z-ZQ0fwr

Libor> /dev/sdd vgPecDisk2 lvm2 a-- 2.73t 2.03t 2.73t RI3dhw-Ns0t-BLyN-BQd5-vDx0-ucHb-X8ntkO

Libor> /dev/sdg vgPecDisk2 lvm2 a-- 2.73t 1.23t 2.73t yaohhB-dkF6-rQRk-dBsL-JHS7-8KOo-eYSqOj

Libor> -------------------------------

Libor> pvdisplay

Libor> --- Physical volume ---

Libor> PV Name /dev/md1

Libor> VG Name vgPecDisk1

Libor> PV Size 464.92 GiB / not usable 1.81 MiB

Libor> Allocatable yes (but full)

Libor> PE Size 4.00 MiB

Libor> Total PE 119019

Libor> Free PE 0

Libor> Allocated PE 119019

Libor> PV UUID MLqS2b-iuvt-7ES8-rPHo-SPwm-Liiz-TUtHLI

Libor> --- Physical volume ---

Libor> PV Name /dev/sdd

Libor> VG Name vgPecDisk2

Libor> PV Size 2.73 TiB / not usable 2.00 MiB

Libor> Allocatable yes

Libor> PE Size 4.00 MiB

Libor> Total PE 715396

Libor> Free PE 531917

Libor> Allocated PE 183479

Libor> PV UUID RI3dhw-Ns0t-BLyN-BQd5-vDx0-ucHb-X8ntkO

Libor> --- Physical volume ---

Libor> PV Name /dev/sda

Libor> VG Name vgPecDisk2

Libor> PV Size 2.73 TiB / not usable 1022.00 MiB

Libor> Allocatable yes

Libor> PE Size 4.00 MiB

Libor> Total PE 714884

Libor> Free PE 315671

Libor> Allocated PE 399213

Libor> PV UUID 0vECyp-EndR-oD66-va0g-0ORd-cS7E-7rMylw

Libor> --- Physical volume ---

Libor> PV Name /dev/sdb

Libor> VG Name vgPecDisk2

Libor> PV Size 2.73 TiB / not usable 1022.00 MiB

Libor> Allocatable yes

Libor> PE Size 4.00 MiB

Libor> Total PE 714884

Libor> Free PE 315671

Libor> Allocated PE 399213

Libor> PV UUID 5ZhwR7-AClb-oEsi-s2Zi-xouM-en0Z-ZQ0fwr

Libor> --- Physical volume ---

Libor> PV Name /dev/sdg

Libor> VG Name vgPecDisk2

Libor> PV Size 2.73 TiB / not usable 2.00 MiB

Libor> Allocatable yes

Libor> PE Size 4.00 MiB

Libor> Total PE 715396

Libor> Free PE 321305

Libor> Allocated PE 394091

Libor> PV UUID yaohhB-dkF6-rQRk-dBsL-JHS7-8KOo-eYSqOj

Libor> -----------------------------

Libor> vgs -v
Libor> VG Attr Ext #PV #LV #SN VSize VFree VG UUID
Libor> vgPecDisk1 wz--n- 4.00m 1 3 0 464.92g 0 Dtbxaa-KySR-R1VY-Wliy-Lqba-HQyt-7PYmnv
Libor> vgPecDisk2 wz--n- 4.00m 4 20 0 10.91t 5.66t 0Ok7sE-Eo1O-pbuT-LX3D-dluI-25dw-cr9DY8
Libor> --------------------------------

Libor> vgdisplay

Libor> --- Volume group ---

Libor> VG Name vgPecDisk1

Libor> System ID

Libor> Format lvm2

Libor> Metadata Areas 1

Libor> Metadata Sequence No 9

Libor> VG Access read/write

Libor> VG Status resizable

Libor> MAX LV 0

Libor> Cur LV 3

Libor> Open LV 3

Libor> Max PV 0

Libor> Cur PV 1

Libor> Act PV 1

Libor> VG Size 464.92 GiB

Libor> PE Size 4.00 MiB

Libor> Total PE 119019

Libor> Alloc PE / Size 119019 / 464.92 GiB

Libor> Free PE / Size 0 / 0

Libor> VG UUID Dtbxaa-KySR-R1VY-Wliy-Lqba-HQyt-7PYmnv

Libor> --- Volume group ---

Libor> VG Name vgPecDisk2

Libor> System ID

Libor> Format lvm2

Libor> Metadata Areas 8

Libor> Metadata Sequence No 476

Libor> VG Access read/write

Libor> VG Status resizable

Libor> MAX LV 0

Libor> Cur LV 20

Libor> Open LV 13

Libor> Max PV 0

Libor> Cur PV 4

Libor> Act PV 4

Libor> VG Size 10.91 TiB

Libor> PE Size 4.00 MiB

Libor> Total PE 2860560

Libor> Alloc PE / Size 1375996 / 5.25 TiB

Libor> Free PE / Size 1484564 / 5.66 TiB

Libor> VG UUID 0Ok7sE-Eo1O-pbuT-LX3D-dluI-25dw-cr9DY8

Libor> ------------------------------

Libor> lvs -v
Libor> Finding all logical volumes
Libor> LV VG #Seg Attr LSize Maj Min KMaj KMin Pool Origin Data% Meta% Move Copy% Log Convert LV UUID
Libor> lvSwap vgPecDisk1 1 -wi-ao-- 3.72g -1 -1 253 1 Jo9ie0-jKfo-Ks6Q-TsgK-skvM-qJio-Ar5WWe
Libor> lvSystem vgPecDisk1 1 -wi-ao-- 64.00g -1 -1 253 0 ZEdPxL-Wn5s-QapH-BzdZ-4Os7-eV0g-SVwNoD
Libor> lvTmp vgPecDisk1 1 -wi-ao-- 397.20g -1 -1 253 31 JjgNKC-ctgq-VDz3-BJbn-HZHd-W3s2-XWxUT9
Libor> lvAmandaDaily01 vgPecDisk2 1 rwi-aor- 100.01g -1 -1 253 82 lrBae6-Yj5V-OZUT-Z4Qz-umsu-6SGe-35SJfK
Libor> lvAmandaDaily01old vgPecDisk2 1 rwi---r- 1.09t -1 -1 -1 -1 nofmj3-ntya-cbDi-ZjZH-zBKV-K1PA-Sw0Pvq
Libor> lvAmandaDailyAuS01 vgPecDisk2 1 rwi-aor- 360.00g -1 -1 253 10 fW0QrZ-sa2J-21nM-0qDv-nTUx-Eomx-3KTocB
Libor> lvAmandaDailyAuS01_rimage_2_extracted vgPecDisk2 1 vwi---v- 120.00g -1 -1 -1 -1 Ii0Hyk-A2d3-PUC3-CMZL-CqDY-qFLs-yuDKwq
Libor> lvAmandaDailyAuS01_rmeta_2_extracted vgPecDisk2 1 vwi---v- 4.00m -1 -1 -1 -1 WNq913-IM82-Cnh0-dmPb-BzWE-KJNP-H84dmS
Libor> lvAmandaDailyBlS01 vgPecDisk2 1 rwi---r- 320.00g -1 -1 -1 -1 fJTCsr-MF1S-jAXo-7SHc-Beyf-ICMV-LJQpnt
Libor> lvAmandaDailyElme01 vgPecDisk2 1 rwi-aor- 144.00g -1 -1 253 89 1Q0Sre-CnV1-wqPZ-9bf0-qnW6-6nqt-NOlxyp
Libor> lvAmandaDailyEl01 vgPecDisk2 1 rwi-aor- 350.00g -1 -1 253 74 Sni0fy-Bf1V-AKXS-Qfd1-qmFC-MUwY-xgCw22
Libor> lvAmandaHoldingDisk vgPecDisk2 1 rwi-aor- 36.00g -1 -1 253 28 e5pr0g-cH2I-dMHd-lwsi-JRR0-0D0P-67eXLY
Libor> lvBackupElme2 vgPecDisk2 1 rwi-aor- 350.00g -1 -1 253 46 Ee9RAX-ycZ8-PNzl-MUvg-VjPl-8vfW-BjfaQ9
Libor> lvBackupPc vgPecDisk2 1 rwi---r- 640.01g -1 -1 -1 -1 KaX4sX-CJsU-L5Ac-85OA-74HT-JX3L-nFxFTZ
Libor> lvBackupPc2 vgPecDisk2 1 rwi-aor- 600.00g -1 -1 253 19 2o9JWs-2hZT-4uMO-WJTd-ByMH-ugd9-3iGfke
Libor> lvBackupRsync vgPecDisk2 1 rwi---r- 256.01g -1 -1 -1 -1 cQOavD-85Pj-yu6X-yTpS-qxxT-XBWV-WIISKQ
Libor> lvBackupRsync2 vgPecDisk2 1 rwi-aor- 100.01g -1 -1 253 129 S4frRu-dVgG-Pomd-5niY-bLzd-S2wq-KxMPhM
Libor> lvBackupRsyncCCCrossserver vgPecDisk2 1 rwi-aor- 50.00g -1 -1 253 100 ytiis9-T1Pq-FAjT-MGhn-2nKd-zHFk-ROzeUf
Libor> lvBackupVokapo vgPecDisk2 1 rwi-aor- 128.00g -1 -1 253 107 pq67wa-NjPs-PwEx-rs1G-cZxf-s5xI-wkB9Ag
Libor> lvLXCElMysqlSlave vgPecDisk2 1 rwi-aor- 32.00g -1 -1 253 65 2fh6ch-2y5s-N3Ua-1Q1u-XSfx-JViq-x6dwut
Libor> lvLXCIcinga vgPecDisk2 1 rwi---r- 32.00g -1 -1 -1 -1 2kYSPl-HONv-zuf0-dhQn-1xI3-YVuU-brbumU
Libor> lvLXCJabber vgPecDisk2 1 rwi-aom- 4.00g -1 -1 253 56 100.00 AAWI1f-fYFO-2ewM-YvfP-AdC4-bXd8-k2NiZZ
Libor> lvLXCWebxMysqlSlave vgPecDisk2 1 rwi-aom- 16.00g -1 -1 253 51 100.00 m2dzFv-axwm-2Ne6-kJkN-a3zo-E8Ai-qViTae
Libor> -----------------------------

Libor> lvdisplay

Libor> --- Logical volume ---

Libor> LV Path /dev/vgPecDisk1/lvSwap

Libor> LV Name lvSwap

Libor> VG Name vgPecDisk1

Libor> LV UUID Jo9ie0-jKfo-Ks6Q-TsgK-skvM-qJio-Ar5WWe

Libor> LV Write Access read/write

Libor> LV Creation host, time pec, 2014-02-20 12:22:52 +0100

Libor> LV Status available

Libor> # open 2

Libor> LV Size 3.72 GiB

Libor> Current LE 953

Libor> Segments 1

Libor> Allocation inherit

Libor> Read ahead sectors auto

Libor> - currently set to 256

Libor> Block device 253:1

Libor> --- Logical volume ---

Libor> LV Path /dev/vgPecDisk1/lvSystem

Libor> LV Name lvSystem

Libor> VG Name vgPecDisk1

Libor> LV UUID ZEdPxL-Wn5s-QapH-BzdZ-4Os7-eV0g-SVwNoD

Libor> LV Write Access read/write

Libor> LV Creation host, time pec, 2014-02-20 12:23:03 +0100

Libor> LV Status available

Libor> # open 1

Libor> LV Size 64.00 GiB

Libor> Current LE 16384

Libor> Segments 1

Libor> Allocation inherit

Libor> Read ahead sectors auto

Libor> - currently set to 256

Libor> Block device 253:0

Libor> --- Logical volume ---

Libor> LV Path /dev/vgPecDisk1/lvTmp

Libor> LV Name lvTmp

Libor> VG Name vgPecDisk1

Libor> LV UUID JjgNKC-ctgq-VDz3-BJbn-HZHd-W3s2-XWxUT9

Libor> LV Write Access read/write

Libor> LV Creation host, time pec, 2014-06-10 06:47:09 +0200

Libor> LV Status available

Libor> # open 1

Libor> LV Size 397.20 GiB

Libor> Current LE 101682

Libor> Segments 1

Libor> Allocation inherit

Libor> Read ahead sectors auto

Libor> - currently set to 256

Libor> Block device 253:31

Libor> --- Logical volume ---

Libor> LV Path /dev/vgPecDisk2/lvLXCWebxMysqlSlave

Libor> LV Name lvLXCWebxMysqlSlave

Libor> VG Name vgPecDisk2

Libor> LV UUID m2dzFv-axwm-2Ne6-kJkN-a3zo-E8Ai-qViTae

Libor> LV Write Access read/write

Libor> LV Creation host, time pec, 2014-02-21 18:15:22 +0100

Libor> LV Status available

Libor> # open 1

Libor> LV Size 16.00 GiB

Libor> Current LE 4096

Libor> Mirrored volumes 2

Libor> Segments 1

Libor> Allocation inherit

Libor> Read ahead sectors auto

Libor> - currently set to 256

Libor> Block device 253:51

Libor> --- Logical volume ---

Libor> LV Path /dev/vgPecDisk2/lvAmandaDaily01old

Libor> LV Name lvAmandaDaily01old

Libor> VG Name vgPecDisk2

Libor> LV UUID nofmj3-ntya-cbDi-ZjZH-zBKV-K1PA-Sw0Pvq

Libor> LV Write Access read/write

Libor> LV Creation host, time pec, 2014-02-24 21:03:49 +0100

Libor> LV Status NOT available

Libor> LV Size 1.09 TiB

Libor> Current LE 286722

Libor> Segments 1

Libor> Allocation inherit

Libor> Read ahead sectors auto

Libor> --- Logical volume ---

Libor> LV Path /dev/vgPecDisk2/lvAmandaDailyBlS01

Libor> LV Name lvAmandaDailyBlS01

Libor> VG Name vgPecDisk2

Libor> LV UUID fJTCsr-MF1S-jAXo-7SHc-Beyf-ICMV-LJQpnt

Libor> LV Write Access read/write

Libor> LV Creation host, time pec, 2014-03-18 08:50:38 +0100

Libor> LV Status NOT available

Libor> LV Size 320.00 GiB

Libor> Current LE 81921

Libor> Segments 1

Libor> Allocation inherit

Libor> Read ahead sectors auto

Libor> --- Logical volume ---

Libor> LV Path /dev/vgPecDisk2/lvLXCJabber

Libor> LV Name lvLXCJabber

Libor> VG Name vgPecDisk2

Libor> LV UUID AAWI1f-fYFO-2ewM-YvfP-AdC4-bXd8-k2NiZZ

Libor> LV Write Access read/write

Libor> LV Creation host, time pec, 2014-03-20 15:19:54 +0100

Libor> LV Status available

Libor> # open 1

Libor> LV Size 4.00 GiB

Libor> Current LE 1024

Libor> Mirrored volumes 2

Libor> Segments 1

Libor> Allocation inherit

Libor> Read ahead sectors auto

Libor> - currently set to 256

Libor> Block device 253:56

Libor> --- Logical volume ---

Libor> LV Path /dev/vgPecDisk2/lvBackupPc

Libor> LV Name lvBackupPc

Libor> VG Name vgPecDisk2

Libor> LV UUID KaX4sX-CJsU-L5Ac-85OA-74HT-JX3L-nFxFTZ

Libor> LV Write Access read/write

Libor> LV Creation host, time pec, 2014-07-01 13:22:50 +0200

Libor> LV Status NOT available

Libor> LV Size 640.01 GiB

Libor> Current LE 163842

Libor> Segments 1

Libor> Allocation inherit

Libor> Read ahead sectors auto

Libor> --- Logical volume ---

Libor> LV Path /dev/vgPecDisk2/lvLXCIcinga

Libor> LV Name lvLXCIcinga

Libor> VG Name vgPecDisk2

Libor> LV UUID 2kYSPl-HONv-zuf0-dhQn-1xI3-YVuU-brbumU

Libor> LV Write Access read/write

Libor> LV Creation host, time pec, 2014-08-13 19:04:28 +0200

Libor> LV Status NOT available

Libor> LV Size 32.00 GiB

Libor> Current LE 8193

Libor> Segments 1

Libor> Allocation inherit

Libor> Read ahead sectors auto

Libor> --- Logical volume ---

Libor> LV Path /dev/vgPecDisk2/lvBackupRsync

Libor> LV Name lvBackupRsync

Libor> VG Name vgPecDisk2

Libor> LV UUID cQOavD-85Pj-yu6X-yTpS-qxxT-XBWV-WIISKQ

Libor> LV Write Access read/write

Libor> LV Creation host, time pec, 2014-09-17 14:49:57 +0200

Libor> LV Status NOT available

Libor> LV Size 256.01 GiB

Libor> Current LE 65538

Libor> Segments 1

Libor> Allocation inherit

Libor> Read ahead sectors auto

Libor> --- Logical volume ---

Libor> LV Path /dev/vgPecDisk2/lvAmandaDaily01

Libor> LV Name lvAmandaDaily01

Libor> VG Name vgPecDisk2

Libor> LV UUID lrBae6-Yj5V-OZUT-Z4Qz-umsu-6SGe-35SJfK

Libor> LV Write Access read/write

Libor> LV Creation host, time pec, 2015-03-04 08:26:46 +0100

Libor> LV Status available

Libor> # open 1

Libor> LV Size 100.01 GiB

Libor> Current LE 25602

Libor> Segments 1

Libor> Allocation inherit

Libor> Read ahead sectors auto

Libor> - currently set to 1024

Libor> Block device 253:82

Libor> --- Logical volume ---

Libor> LV Path /dev/vgPecDisk2/lvBackupRsync2

Libor> LV Name lvBackupRsync2

Libor> VG Name vgPecDisk2

Libor> LV UUID S4frRu-dVgG-Pomd-5niY-bLzd-S2wq-KxMPhM

Libor> LV Write Access read/write

Libor> LV Creation host, time pec, 2015-03-04 19:17:17 +0100

Libor> LV Status available

Libor> # open 1

Libor> LV Size 100.01 GiB

Libor> Current LE 25602

Libor> Segments 1

Libor> Allocation inherit

Libor> Read ahead sectors auto

Libor> - currently set to 1024

Libor> Block device 253:129

Libor> --- Logical volume ---

Libor> LV Path /dev/vgPecDisk2/lvBackupPc2

Libor> LV Name lvBackupPc2

Libor> VG Name vgPecDisk2

Libor> LV UUID 2o9JWs-2hZT-4uMO-WJTd-ByMH-ugd9-3iGfke

Libor> LV Write Access read/write

Libor> LV Creation host, time pec, 2015-03-04 23:13:51 +0100

Libor> LV Status available

Libor> # open 1

Libor> LV Size 600.00 GiB

Libor> Current LE 153600

Libor> Segments 1

Libor> Allocation inherit

Libor> Read ahead sectors auto

Libor> - currently set to 1024

Libor> Block device 253:19

Libor> --- Logical volume ---

Libor> LV Path /dev/vgPecDisk2/lvBackupElme2

Libor> LV Name lvBackupElme2

Libor> VG Name vgPecDisk2

Libor> LV UUID Ee9RAX-ycZ8-PNzl-MUvg-VjPl-8vfW-BjfaQ9

Libor> LV Write Access read/write

Libor> LV Creation host, time pec, 2015-03-04 23:21:44 +0100

Libor> LV Status available

Libor> # open 1

Libor> LV Size 350.00 GiB

Libor> Current LE 89601

Libor> Segments 1

Libor> Allocation inherit

Libor> Read ahead sectors auto

Libor> - currently set to 1024

Libor> Block device 253:46

Libor> --- Logical volume ---

Libor> LV Path /dev/vgPecDisk2/lvLXCElMysqlSlave

Libor> LV Name lvLXCElMysqlSlave

Libor> VG Name vgPecDisk2

Libor> LV UUID 2fh6ch-2y5s-N3Ua-1Q1u-XSfx-JViq-x6dwut

Libor> LV Write Access read/write

Libor> LV Creation host, time pec, 2015-03-05 16:36:42 +0100

Libor> LV Status available

Libor> # open 1

Libor> LV Size 32.00 GiB

Libor> Current LE 8193

Libor> Segments 1

Libor> Allocation inherit

Libor> Read ahead sectors auto

Libor> - currently set to 1024

Libor> Block device 253:65

Libor> --- Logical volume ---

Libor> LV Path /dev/vgPecDisk2/lvAmandaDailyAuS01_rimage_2_extracted

Libor> LV Name lvAmandaDailyAuS01_rimage_2_extracted

Libor> VG Name vgPecDisk2

Libor> LV UUID Ii0Hyk-A2d3-PUC3-CMZL-CqDY-qFLs-yuDKwq

Libor> LV Write Access read/write

Libor> LV Creation host, time pec, 2014-02-25 09:55:03 +0100

Libor> LV Status NOT available

Libor> LV Size 120.00 GiB

Libor> Current LE 30721

Libor> Segments 1

Libor> Allocation inherit

Libor> Read ahead sectors auto

Libor> --- Logical volume ---

Libor> LV Path /dev/vgPecDisk2/lvAmandaDailyAuS01_rmeta_2_extracted

Libor> LV Name lvAmandaDailyAuS01_rmeta_2_extracted

Libor> VG Name vgPecDisk2

Libor> LV UUID WNq913-IM82-Cnh0-dmPb-BzWE-KJNP-H84dmS

Libor> LV Write Access read/write

Libor> LV Creation host, time pec, 2014-02-25 09:55:03 +0100

Libor> LV Status NOT available

Libor> LV Size 4.00 MiB

Libor> Current LE 1

Libor> Segments 1

Libor> Allocation inherit

Libor> Read ahead sectors auto

Libor> --- Logical volume ---

Libor> LV Path /dev/vgPecDisk2/lvAmandaDailyAuS01

Libor> LV Name lvAmandaDailyAuS01

Libor> VG Name vgPecDisk2

Libor> LV UUID fW0QrZ-sa2J-21nM-0qDv-nTUx-Eomx-3KTocB

Libor> LV Write Access read/write

Libor> LV Creation host, time pec, 2015-03-05 17:49:47 +0100

Libor> LV Status available

Libor> # open 1

Libor> LV Size 360.00 GiB

Libor> Current LE 92160

Libor> Segments 1

Libor> Allocation inherit

Libor> Read ahead sectors auto

Libor> - currently set to 1024

Libor> Block device 253:10

Libor> --- Logical volume ---

Libor> LV Path /dev/vgPecDisk2/lvAmandaHoldingDisk

Libor> LV Name lvAmandaHoldingDisk

Libor> VG Name vgPecDisk2

Libor> LV UUID e5pr0g-cH2I-dMHd-lwsi-JRR0-0D0P-67eXLY

Libor> LV Write Access read/write

Libor> LV Creation host, time pec, 2015-03-05 18:48:36 +0100

Libor> LV Status available

Libor> # open 1

Libor> LV Size 36.00 GiB

Libor> Current LE 9216

Libor> Segments 1

Libor> Allocation inherit

Libor> Read ahead sectors auto

Libor> - currently set to 1024

Libor> Block device 253:28

Libor> --- Logical volume ---

Libor> LV Path /dev/vgPecDisk2/lvAmandaDailyEl01

Libor> LV Name lvAmandaDailyEl01

Libor> VG Name vgPecDisk2

Libor> LV UUID Sni0fy-Bf1V-AKXS-Qfd1-qmFC-MUwY-xgCw22

Libor> LV Write Access read/write

Libor> LV Creation host, time pec, 2015-03-05 19:00:26 +0100

Libor> LV Status available

Libor> # open 1

Libor> LV Size 350.00 GiB

Libor> Current LE 89601

Libor> Segments 1

Libor> Allocation inherit

Libor> Read ahead sectors auto

Libor> - currently set to 1024

Libor> Block device 253:74

Libor> --- Logical volume ---

Libor> LV Path /dev/vgPecDisk2/lvBackupRsyncCCCrossserver

Libor> LV Name lvBackupRsyncCCCrossserver

Libor> VG Name vgPecDisk2

Libor> LV UUID ytiis9-T1Pq-FAjT-MGhn-2nKd-zHFk-ROzeUf

Libor> LV Write Access read/write

Libor> LV Creation host, time pec, 2015-03-05 22:39:09 +0100

Libor> LV Status available

Libor> # open 1

Libor> LV Size 50.00 GiB

Libor> Current LE 12801

Libor> Segments 1

Libor> Allocation inherit

Libor> Read ahead sectors auto

Libor> - currently set to 1024

Libor> Block device 253:100

Libor> --- Logical volume ---

Libor> LV Path /dev/vgPecDisk2/lvAmandaDailyElme01

Libor> LV Name lvAmandaDailyElme01

Libor> VG Name vgPecDisk2

Libor> LV UUID 1Q0Sre-CnV1-wqPZ-9bf0-qnW6-6nqt-NOlxyp

Libor> LV Write Access read/write

Libor> LV Creation host, time pec, 2015-03-05 22:49:05 +0100

Libor> LV Status available

Libor> # open 1

Libor> LV Size 144.00 GiB

Libor> Current LE 36864

Libor> Segments 1

Libor> Allocation inherit

Libor> Read ahead sectors auto

Libor> - currently set to 1024

Libor> Block device 253:89

Libor> --- Logical volume ---

Libor> LV Path /dev/vgPecDisk2/lvBackupVokapo

Libor> LV Name lvBackupVokapo

Libor> VG Name vgPecDisk2

Libor> LV UUID pq67wa-NjPs-PwEx-rs1G-cZxf-s5xI-wkB9Ag

Libor> LV Write Access read/write

Libor> LV Creation host, time pec, 2015-03-05 22:54:23 +0100

Libor> LV Status available

Libor> # open 1

Libor> LV Size 128.00 GiB

Libor> Current LE 32769

Libor> Segments 1

Libor> Allocation inherit

Libor> Read ahead sectors auto

Libor> - currently set to 1024

Libor> Block device 253:107

Libor> -----------------------

Libor> On Wed, 11 March 2015 11:57:43, John Stoffel wrote:

>> Libor,
>>
>> Can you please post the output of the following commands, so that we
>> can understand your setup and see what's really going on here. More
>> info is better than less!
>>
>> cat /proc/partitions
>> pvs -v
>> pvdisplay
>> vgs -v
>> vgdisplay
>> lvs -v
>> lvdisplay
>>
>> and if you have PVs which are NOT on top of raw partitions, then
>> include cat /proc/mdstat as well, or whatever device tool you have.
>>
>> Basically, we're trying to understand how you configured your setup
>> from the physical disks, to the volumes on them. I don't care much
>> about the filesystems, they're going to be inside individual LVs I
>> assume.
>>
>> John


Libor> _______________________________________________
Libor> linux-lvm mailing list
Libor> linux-lvm@redhat.com
Libor> https://www.redhat.com/mailman/listinfo/linux-lvm
Libor> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [linux-lvm] Removing disk from raid LVM
  2015-03-12 14:53         ` John Stoffel
@ 2015-03-12 15:21           ` Libor Klepáč
  2015-03-12 17:20             ` John Stoffel
  2015-03-12 15:32           ` Libor Klepáč
  1 sibling, 1 reply; 14+ messages in thread
From: Libor Klepáč @ 2015-03-12 15:21 UTC (permalink / raw)
  To: linux-lvm

[-- Attachment #1: Type: text/plain, Size: 4412 bytes --]

Hello,
but when I use

# lvs -a | grep Vokapo
the output is

lvBackupVokapo                               vgPecDisk2 rwi-aor- 128.00g                                           
  [lvBackupVokapo_rimage_0]                    vgPecDisk2 iwi-aor-  42.67g                                           
  [lvBackupVokapo_rimage_1]                    vgPecDisk2 iwi-aor-  42.67g                                           
  [lvBackupVokapo_rimage_2]                    vgPecDisk2 iwi-aor-  42.67g                                           
  [lvBackupVokapo_rimage_3]                    vgPecDisk2 iwi-aor-  42.67g                                           
  [lvBackupVokapo_rmeta_0]                     vgPecDisk2 ewi-aor-   4.00m                                           
  [lvBackupVokapo_rmeta_1]                     vgPecDisk2 ewi-aor-   4.00m                                           
  [lvBackupVokapo_rmeta_2]                     vgPecDisk2 ewi-aor-   4.00m                                           
  [lvBackupVokapo_rmeta_3]                     vgPecDisk2 ewi-aor-   4.00m

What are these parts, then?

It was created using
# lvcreate --type raid5 -i 3 -L 128G -n lvBackupVokapo vgPecDisk2
(with tools 2.02.104)
I was not sure about the number of stripes.
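For reference, the segment type and stripe count can be asked for
explicitly (a sketch; the -o field names are as in lvm2 2.02.x and
should be treated as an assumption):

# lvs -a -o lv_name,segtype,stripes,devices vgPecDisk2

That should report raid5 and the stripe count per LV, and show which
PVs each rimage/rmeta sub-LV sits on.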


Libor


On Thu, 12 March 2015 10:53:56 John Stoffel wrote:
> Libor> here it comes.
> 
> Great, this is a big help, and it shows me that you are NOT using
> RAID5 for your backup volumes.  The first clue is that you have 4 x
> 3tb disks and you only have a VG with 10.91t (terabytes) of useable
> space, with a name of 'vgPecDisk2'.
> 
> And then none of the LVs in this VG are of type RAID5, so I don't
> think you actually created them properly.  So when you lost one of the
> disks in your VG, you immediately lost any LVs which had extents on
> that missing disk.  Even though you did a vgcfgrestore, that did NOT
> restore the data.
> 
> You really need to redo this entirely.  What you WANT to do is this:
> 
> 0. copy all the remaining good backups elsewhere.  You want to empty
>    all of the disks in the existing vgPecDisk2 VG.
> 
> 1. setup an MD RAID5 using the four big disks.
> 
>    mdadm --create -l 5 -n 4 --name vgPecDisk2 /dev/sda /dev/sdb /dev/sdd
> /dev/sdg
> 
> 2. Create the PV on there
> 
>    pvcreate /dev/md/vgPecDisk2
> 
> 3. Create a new VG ontop of the RAID5 array.
> 
>    vgcreate vgPecDisk2 /dev/md/vgPecDisk2
> 
> 4. NOW you create your LVs on top of this
> 
>    lvcreate ....
> 
> 
> The problem you have is that none of your LVs was ever created with
> RAID5.  If you want to do a test, try this:
> 
>   lvcreate -n test-raid5 --type raid5 --size 5g --stripes 4 vgPecDisk2
> 
> and if it works (which it probably will on your system, assuming your
> LVM tools have support for RAID5 in the first place), you can then
> look at the output of the 'lvdisplay test-raid5' command to see how
> many devices and stripes (segments) that LV has.
> 
> None of the ones you show have this.  For example, your lvBackupVokapo
> only shows 1 segment.  Without multiple segments, and RAID, you can't
> survive any sort of failure in your setup.
> 
> This is why I personally only ever put LVs ontop of RAID devices if I
> have important data.
> 
> Does this help you understand what went wrong here?
> 
> John
> 
> 
> Libor> I think i have all PV not on top of raw partitions. System is on
> mdraid and backup PVs are Libor> directly on disks, without partitions.
> 
> Libor> I think that LVs:
> 
> Libor> lvAmandaDaily01old
> 
> Libor> lvBackupPc
> 
> Libor> lvBackupRsync
> 
> Libor> are old damaged LVs, i left for experimenting on.
> 
> Libor> These LVs are some broken parts of old raid?
> 
> Libor> lvAmandaDailyAuS01_rimage_2_extracted
> 
> Libor> lvAmandaDailyAuS01_rmeta_2_extracted
> 
> Libor> LV lvAmandaDailyBlS01 is also from before crash, but i didn't try to
> repair it (i think)
> 
> Libor> Libor
> 
> Libor> ---------------
> 
> Libor> cat /proc/mdstat (mdraid used only for OS)
> 
> Libor> Personalities : [raid1] [raid10] [raid6] [raid5] [raid4]
> 
> Libor> md1 : active raid1 sde3[0] sdf3[1]
> 
> Libor> 487504704 blocks super 1.2 [2/2] [UU]
> 
> Libor> bitmap: 1/4 pages [4KB], 65536KB chunk
> 
> Libor> md0 : active raid1 sde2[0] sdf2[1]
> 
> Libor> 249664 blocks super 1.2 [2/2] [UU]
> 

[-- Attachment #2: Type: text/html, Size: 244725 bytes --]

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [linux-lvm] Removing disk from raid LVM
  2015-03-12 14:53         ` John Stoffel
  2015-03-12 15:21           ` Libor Klepáč
@ 2015-03-12 15:32           ` Libor Klepáč
  1 sibling, 0 replies; 14+ messages in thread
From: Libor Klepáč @ 2015-03-12 15:32 UTC (permalink / raw)
  To: linux-lvm

[-- Attachment #1: Type: text/plain, Size: 3715 bytes --]

Hello again John,

I have " number of stripes == 3" from this page:

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/ra
id_volumes.html

It says
"The following command creates a RAID5 array (3 stripes + 1 implicit parity 
drive)"
# lvcreate --type raid5 -i 3 -L 1G -n my_lv my_vg
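For what it's worth, the arithmetic is consistent with the lvs output
in my previous mail: with -i 3 the 128G LV is split across three data
images plus one parity image, and 128G / 3 = 42.67G, which is exactly
the rimage size lvs reports.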


Libor

On Thu, 12 March 2015 10:53:56 John Stoffel wrote:
> The problem you have is that none of your LVs was ever created with
> RAID5.  If you want to do a test, try this:
> 
>   lvcreate -n test-raid5 --type raid5 --size 5g --stripes 4 vgPecDisk2
> 
> and if it works (which it probably will on your system, assuming your
> LVM tools have support for RAID5 in the first place), you can then
> look at the output of the 'lvdisplay test-raid5' command to see how
> many devices and stripes (segments) that LV has.
> 
> None of the ones you show have this.  For example, your lvBackupVokapo
> only shows 1 segment.  Without multiple segments, and RAID, you can't
> survive any sort of failure in your setup.
> 
> This is why I personally only ever put LVs ontop of RAID devices if I
> have important data.
> 
> Does this help you understand what went wrong here?
> 
> John

[-- Attachment #2: Type: text/html, Size: 241823 bytes --]

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [linux-lvm] Removing disk from raid LVM
  2015-03-12 15:21           ` Libor Klepáč
@ 2015-03-12 17:20             ` John Stoffel
  2015-03-12 21:32               ` Libor Klepáč
  0 siblings, 1 reply; 14+ messages in thread
From: John Stoffel @ 2015-03-12 17:20 UTC (permalink / raw)
  To: LVM general discussion and development


Interesting, so maybe it is working, but from looking at the info
you've provided, it's hard to know what happened.  I think it might be
time to do some testing with some loopback devices, so you can set up
four 100M disks, put them into a VG, and then create some LVs on top
with the RAID5 setup.  Then you can see what happens when you remove a
disk, either with 'vgreduce' or by stopping the VG and then removing
a single PV, then re-starting the VG.  
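Roughly like this, as an untested sketch (the loop device numbers and
the vgTest/lvTest names are made up for illustration):

    for i in 0 1 2 3; do
        truncate -s 100M /tmp/pv$i.img      # sparse 100M backing file
        losetup /dev/loop$i /tmp/pv$i.img   # attach it as a loop device
    done
    pvcreate /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3
    vgcreate vgTest /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3
    lvcreate --type raid5 -i 3 -L 100M -n lvTest vgTest

Then detach one of the loop devices and see how the vgreduce variants
behave before trying anything on the real VG.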

Thinking back on it, I suspect the problem was your vgcfgrestore.  You
really really really didn't want to do that, because you lied to the
system.  Instead of four data disks, with good info, you now had three
good disks, and one blank disk.  But you told LVM that the fourth disk
was just fine, so it started to use it.    So I bet that when you read
from an LV, it tried to spread the load out and read from all four
disks, so you'd get good, good, nothing, good data, which just totally
screwed things up.

Sometimes you were OK, I bet, because the parity data was on the bad
disk, but other times it wasn't, so those LVs got corrupted because 1/3
of their data was now garbage.  You never let LVM rebuild the data by
refreshing the new disk.

Instead you probably should have done a vgreduce and then vgextend
onto the replacement disk, which probably (maybe, not sure) would have
forced a rebuild.
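In other words, something along these lines (a sketch only; /dev/sdX
stands for the replacement disk, and whether lvconvert --repair
behaves on tools as old as 2.02.95 is exactly what the loopback test
should confirm):

    vgreduce --removemissing --force vgPecDisk2
    pvcreate /dev/sdX                    # the new disk
    vgextend vgPecDisk2 /dev/sdX
    lvconvert --repair vgPecDisk2/lvAmandaDailyAuS01

with the lvconvert repeated for each degraded raid LV.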


But I'm going to say that I think you were making a big mistake
design-wise here.  You should have just set up an MD RAID5 on those four
disks, turn that one MD device into a PV, put that into a VG, then
created your LVs on top of there.  When you noticed problems, you
would simply fail the device, shut down, replace it, then boot up and
once the system was up, you could add the new disk back into the RAID5
MD device and the system would happily rebuild in the background.  
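The day-to-day failure handling then looks roughly like this (a
sketch; the array path follows the --name suggestion from my earlier
mail and the disk name is illustrative):

    mdadm /dev/md/vgPecDisk2 --fail /dev/sdd
    mdadm /dev/md/vgPecDisk2 --remove /dev/sdd
    # power down, swap the drive, boot, then:
    mdadm /dev/md/vgPecDisk2 --add /dev/sdd
    cat /proc/mdstat    # watch the rebuild progress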

Does this make sense?  You already use MD for the boot disks, so why
not for the data as well?  I know that LVM RAID5 isn't as mature or
supported as it is under MD.  

John


Libor> but when I use
Libor> # lvs -a | grep Vokapo
Libor> the output is
Libor> lvBackupVokapo vgPecDisk2 rwi-aor- 128.00g
Libor> [lvBackupVokapo_rimage_0] vgPecDisk2 iwi-aor- 42.67g
Libor> [lvBackupVokapo_rimage_1] vgPecDisk2 iwi-aor- 42.67g
Libor> [lvBackupVokapo_rimage_2] vgPecDisk2 iwi-aor- 42.67g
Libor> [lvBackupVokapo_rimage_3] vgPecDisk2 iwi-aor- 42.67g
Libor> [lvBackupVokapo_rmeta_0] vgPecDisk2 ewi-aor- 4.00m
Libor> [lvBackupVokapo_rmeta_1] vgPecDisk2 ewi-aor- 4.00m
Libor> [lvBackupVokapo_rmeta_2] vgPecDisk2 ewi-aor- 4.00m
Libor> [lvBackupVokapo_rmeta_3] vgPecDisk2 ewi-aor- 4.00m
Libor> What are these parts, then?
Libor> It was created using
Libor> # lvcreate --type raid5 -i 3 -L 128G -n lvBackupVokapo vgPecDisk2
Libor> (with tools 2.02.104)
Libor> I was not sure about the number of stripes.
Libor> Libor


Libor> _______________________________________________
Libor> linux-lvm mailing list
Libor> linux-lvm@redhat.com
Libor> https://www.redhat.com/mailman/listinfo/linux-lvm
Libor> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [linux-lvm] Removing disk from raid LVM
  2015-03-12 17:20             ` John Stoffel
@ 2015-03-12 21:32               ` Libor Klepáč
  2015-03-13 16:18                 ` John Stoffel
  0 siblings, 1 reply; 14+ messages in thread
From: Libor Klepáč @ 2015-03-12 21:32 UTC (permalink / raw)
  To: linux-lvm

[-- Attachment #1: Type: text/plain, Size: 6162 bytes --]

Hello John,

just a quick question; I'll respond to the rest later.
I tried to read data from one of old LVs.
To be precise, I tried to read rimage_* directly.

#dd if=vgPecDisk2-lvBackupPc_rimage_0 of=/mnt/tmp/0 bs=10M count=1
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied, 0.802423 s, 13.1 MB/s

# dd if=vgPecDisk2-lvBackupPc_rimage_1 of=/mnt/tmp/1 bs=10M count=1
dd: reading `vgPecDisk2-lvBackupPc_rimage_1': Input/output error
0+0 records in
0+0 records out
0 bytes (0 B) copied, 0.00582503 s, 0.0 kB/s

#dd if=vgPecDisk2-lvBackupPc_rimage_2 of=/mnt/tmp/2 bs=10M count=1
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied, 0.110792 s, 94.6 MB/s

#dd if=vgPecDisk2-lvBackupPc_rimage_3 of=/mnt/tmp/3 bs=10M count=1
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied, 0.336518 s, 31.2 MB/s

As you can see, three parts are OK (and the output files do contain *some*
data); one rimage is missing (well, there is a symlink to the dm-33 dev node,
but it returns an IO error).
Is there a way to kick this rimage out and to use those three remaining rimages?
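
(One possible answer -- a sketch only, not verified on this setup: the
dm-raid target accepts "- -" in place of a missing metadata/data pair, so
the array can be hand-assembled degraded instead of being rejected for the
stale superblock.  Everything below is an assumption except the minor
numbers, which come from the activation log that follows; the length must
match the original LV's size in sectors (see "dmsetup table"), and the
128-sector chunk and raid5_ls layout are only LVM's defaults:

# dmsetup create lvBackupPc_degraded --table "0 1342193664 raid raid5_ls \
    1 128 4 253:29 253:30 - - 253:35 253:36 253:37 253:108"

If it assembles, check it read-only first, e.g. with fsck -n.)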
LV was started 
#lvchange -ay --partial -v vgPecDisk2/lvBackupPc
  Configuration setting "activation/thin_check_executable" unknown.
  PARTIAL MODE. Incomplete logical volumes will be processed.
    Using logical volume(s) on command line
    Activating logical volume "lvBackupPc" exclusively.
    activation/volume_list configuration setting not defined: Checking only host 
tags for vgPecDisk2/lvBackupPc
    Loading vgPecDisk2-lvBackupPc_rmeta_0 table (253:29)
    Suppressed vgPecDisk2-lvBackupPc_rmeta_0 (253:29) identical table reload.
    Loading vgPecDisk2-lvBackupPc_rimage_0 table (253:30)
    Suppressed vgPecDisk2-lvBackupPc_rimage_0 (253:30) identical table 
reload.
    Loading vgPecDisk2-lvBackupPc_rmeta_1 table (253:33)
    Suppressed vgPecDisk2-lvBackupPc_rmeta_1 (253:33) identical table reload.
    Loading vgPecDisk2-lvBackupPc_rimage_1 table (253:34)
    Suppressed vgPecDisk2-lvBackupPc_rimage_1 (253:34) identical table 
reload.
    Loading vgPecDisk2-lvBackupPc_rmeta_2 table (253:35)
    Suppressed vgPecDisk2-lvBackupPc_rmeta_2 (253:35) identical table reload.
    Loading vgPecDisk2-lvBackupPc_rimage_2 table (253:36)
    Suppressed vgPecDisk2-lvBackupPc_rimage_2 (253:36) identical table 
reload.
    Loading vgPecDisk2-lvBackupPc_rmeta_3 table (253:37)
    Suppressed vgPecDisk2-lvBackupPc_rmeta_3 (253:37) identical table reload.
    Loading vgPecDisk2-lvBackupPc_rimage_3 table (253:108)
    Suppressed vgPecDisk2-lvBackupPc_rimage_3 (253:108) identical table 
reload.
    Loading vgPecDisk2-lvBackupPc table (253:109)
  device-mapper: reload ioctl on  failed: Invalid argument


#dmesg says

[747203.140882] device-mapper: raid: Failed to read superblock of device at 
position 1
[747203.149219] device-mapper: raid: New device injected into existing array 
without 'rebuild' parameter specified
[747203.149906] device-mapper: table: 253:109: raid: Unable to assemble 
array: Invalid superblocks
[747203.150576] device-mapper: ioctl: error adding target to table
[747227.051339] device-mapper: raid: Failed to read superblock of device at 
position 1
[747227.062519] device-mapper: raid: New device injected into existing array 
without 'rebuild' parameter specified
[747227.063612] device-mapper: table: 253:109: raid: Unable to assemble 
array: Invalid superblocks
[747227.064667] device-mapper: ioctl: error adding target to table
[747308.206650] quiet_error: 62 callbacks suppressed
[747308.206652] Buffer I/O error on device dm-34, logical block 0
[747308.207383] Buffer I/O error on device dm-34, logical block 1
[747308.208069] Buffer I/O error on device dm-34, logical block 2
[747308.208736] Buffer I/O error on device dm-34, logical block 3
[747308.209383] Buffer I/O error on device dm-34, logical block 4
[747308.210020] Buffer I/O error on device dm-34, logical block 5
[747308.210647] Buffer I/O error on device dm-34, logical block 6
[747308.211262] Buffer I/O error on device dm-34, logical block 7
[747308.211868] Buffer I/O error on device dm-34, logical block 8
[747308.212464] Buffer I/O error on device dm-34, logical block 9
[747560.283263] quiet_error: 55 callbacks suppressed
[747560.283267] Buffer I/O error on device dm-34, logical block 0
[747560.284214] Buffer I/O error on device dm-34, logical block 1
[747560.285059] Buffer I/O error on device dm-34, logical block 2
[747560.285633] Buffer I/O error on device dm-34, logical block 3
[747560.286170] Buffer I/O error on device dm-34, logical block 4
[747560.286687] Buffer I/O error on device dm-34, logical block 5
[747560.287151] Buffer I/O error on device dm-34, logical block 6


Libor

On Thu, 12 March 2015 13:20:07, John Stoffel wrote:
> Interesting, so maybe it is working, but from looking at the info
> you've provided, it's hard to know what happened.  I think it might be
> time to do some testing with some loopback devices so you can setup
> four 100m disks, then put them into a VG and then do some LVs on top
> with the RAID5 setup.  Then you can see what happens when you remove a
> disk, either with 'vgreduce' or by stopping the VG and then removing
> a single PV, then re-starting the VG.
> 
> Thinking back on it, I suspect the problem was your vgcfgrestore.  You
> really really really didn't want to do that, because you lied to the
> system.  Instead of four data disks, with good info, you now had three
> good disks, and one blank disk.  But you told LVM that the fourth disk
> was just fine, so it started to use it.    So I bet that when you read
> from an LV, it tried to spread the load out and read from all four
> disks, so you'd get Good, good, nothing, good data, which just totally
> screwed things up.
> 
> Sometimes you were ok I bet because the parity data was on the bad
> disk, but other times it wasn't, so those LVs got corrupted because 1/3
> of their data was now garbage.  You never let LVM rebuild the data by
> refreshing the new disk.
> 
> Instead you probably should have done a vgreduce and then vgextend
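
(A minimal rehearsal of the loopback test John suggests above -- a sketch
with illustrative names and sizes, not commands verified on this host:

# for i in 0 1 2 3; do truncate -s 100M /tmp/pv$i.img; losetup /dev/loop$i /tmp/pv$i.img; done
# pvcreate /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3
# vgcreate vgTest /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3
# lvcreate --type raid5 -i 3 -L 240M -n lvTest vgTest
# vgchange -an vgTest
# losetup -d /dev/loop2            # "pull" one PV while the VG is stopped
# vgchange -ay --partial vgTest    # then practice the recovery commands safely

Mistakes here cost nothing, unlike on the real vgPecDisk2.)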

[-- Attachment #2: Type: text/html, Size: 479133 bytes --]

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [linux-lvm] Removing disk from raid LVM
  2015-03-12 21:32               ` Libor Klepáč
@ 2015-03-13 16:18                 ` John Stoffel
  0 siblings, 0 replies; 14+ messages in thread
From: John Stoffel @ 2015-03-13 16:18 UTC (permalink / raw)
  To: LVM general discussion and development


Hi Libor,

I think you're in big trouble here, but you might be able to fix this
by removing the new disk you added using 'vgreduce', or simply by
shutting down the system, pulling the disk and rebooting.

Then I would try to do:

  lvchange --resync <vg>/<lv>

In your case, I think you could try:

  lvchange --resync vgPecDisk2/lvBackupPc_rimage_1

and see how that works.  I'm looking at the steps documented here:

  http://wiki.gentoo.org/wiki/LVM#Replacing_a_failed_physical_volume

which seems to be much more of what you want.  
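
(Sketched concretely for this VG -- the replacement disk name is an
assumption and none of this has been run here, so verify against the
wiki first:

  pvcreate /dev/sdNEW                        # blank replacement disk
  vgextend vgPecDisk2 /dev/sdNEW             # space for the rebuild
  lvconvert --repair vgPecDisk2/lvBackupPc   # re-allocate the lost raid leg
  vgreduce --removemissing vgPecDisk2        # then drop the dead PV
)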


But!!!!  I still find this process of using LVM RAID5 LVs to be error
prone, not very resilient to failures, and a pain to manage.  It would
be much much much better to build a RAID5 MD RAID array, then put your
VG and LVs on top of that.

You would have a more resilient system, esp. since MD RAID is *actively*
supported by Neil Brown the developer, and it works very very well.
It has more and better features than the RAID5 implementation in LVM.
Honestly, none of this would be happening if you just ran it the way I
(and many others) suggest.  

I still don't understand your reasoning here.  But please let me know
if the 'lvchange --resync' works or not.

John


>>>>> "Libor" == Libor Klepáč <libor.klepac@bcom.cz> writes:


Libor> On Thu, 12 March 2015 13:20:07, John Stoffel wrote:
>> Interesting, so maybe it is working, but from looking at the info
>> you've provided, it's hard to know what happened.  I think it might be
>> time to do some testing with some loopback devices so you can setup
>> four 100m disks, then put them into a VG and then do some LVs on top
>> with the RAID5 setup.  Then you can see what happens when you remove a
>> disk, either with 'vgreduce' or by stopping the VG and then removing
>> a single PV, then re-starting the VG.
>>
>> Thinking back on it, I suspect the problem was your vgcfgrestore.  You
>> really really really didn't want to do that, because you lied to the
>> system.  Instead of four data disks, with good info, you now had three
>> good disks, and one blank disk.  But you told LVM that the fourth disk
>> was just fine, so it started to use it.  So I bet that when you read
>> from an LV, it tried to spread the load out and read from all four
>> disks, so you'd get Good, good, nothing, good data, which just totally
>> screwed things up.
>>
>> Sometimes you were ok I bet because the parity data was on the bad
>> disk, but other times it wasn't, so those LVs got corrupted because 1/3
>> of their data was now garbage.  You never let LVM rebuild the data by
>> refreshing the new disk.
>>
>> Instead you probably should have done a vgreduce and then vgextend
>> onto the replacement disk, which probably (maybe, not sure) would have
>> forced a rebuild.
>>
>> But I'm going to say that I think you were making a big mistake design
>> wise here.  You should have just setup an MD RAID5 on those four
>> disks, turn that one MD device into a PV, put that into a VG, then
>> created your LVs on top of there.  When you noticed problems, you
>> would simply fail the device, shutdown, replace it, then boot up and
>> once the system was up, you could add the new disk back into the RAID5
>> MD device and the system would happily rebuild in the background.
>>
>> Does this make sense?  You already use MD for the boot disks, so why
>> not for the data as well?  I know that LVM RAID5 isn't as mature or
>> supported as it is under MD.
>>
>> John

Libor> but when i use
Libor> # lvs -a | grep Vokapo
Libor> output is
Libor> lvBackupVokapo            vgPecDisk2 rwi-aor- 128.00g
Libor> [lvBackupVokapo_rimage_0] vgPecDisk2 iwi-aor-  42.67g
Libor> [lvBackupVokapo_rimage_1] vgPecDisk2 iwi-aor-  42.67g
Libor> [lvBackupVokapo_rimage_2] vgPecDisk2 iwi-aor-  42.67g
Libor> [lvBackupVokapo_rimage_3] vgPecDisk2 iwi-aor-  42.67g
Libor> [lvBackupVokapo_rmeta_0]  vgPecDisk2 ewi-aor-   4.00m
Libor> [lvBackupVokapo_rmeta_1]  vgPecDisk2 ewi-aor-   4.00m
Libor> [lvBackupVokapo_rmeta_2]  vgPecDisk2 ewi-aor-   4.00m
Libor> [lvBackupVokapo_rmeta_3]  vgPecDisk2 ewi-aor-   4.00m
Libor> what are these parts then?
Libor> it was created using
Libor> # lvcreate --type raid5 -i 3 -L 128G -n lvBackupVokapo vgPecDisk2
Libor> (with tools 2.02.104)
Libor> I was not sure about number of stripes
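
(For what it's worth, those sizes are exactly what a 3-stripe raid5 should
produce: 128 GiB of data over 3 data images is 128/3 ~ 42.67 GiB per
rimage, raid5 adds one more image's worth of parity rotated across all
four legs, and each leg carries a 4 MiB rmeta header.  So the "-i 3"
lvcreate above did build a genuine 4-device raid5 LV; the rimage/rmeta
entries are its internal sub-LVs.)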

Libor> Libor

Libor> On Thu, 12 March 2015 10:53:56, John Stoffel wrote:
Libor> here it comes.

>> >> Great, this is a big help, and it shows me that you are NOT using
>> >> RAID5 for your backup volumes.  The first clue is that you have 4 x
>> >> 3tb disks and you only have a VG with 10.91t (terabytes) of useable
>> >> space, with a name of 'vgPecDisk2'.
>> >>
>> >> And then none of the LVs in this VG are of type RAID5, so I don't
>> >> think you actually created them properly.  So when you lost one of the
>> >> disks in your VG, you immediately lost any LVs which had extents on
>> >> that missing disk.  Even though you did a vgcfgrestore, that did NOT
>> >> restore the data.
>> >>
>> >> You really need to redo this entirely.  What you WANT to do is this:
>> >>
>> >> 0. copy all the remaining good backups elsewhere.  You want to empty
>> >> all of the disks in the existing vgPecDisk2 VG.
>> >>
>> >> 1. setup an MD RAID5 using the four big disks.
>> >>
>> >> mdadm --create /dev/md/vgPecDisk2 -l 5 -n 4 --name vgPecDisk2 \
>> >>   /dev/sda /dev/sdb /dev/sdd /dev/sdg
>> >>
>> >> 2. Create the PV on there
>> >>
>> >> pvcreate /dev/md/vgPecDisk2
>> >>
>> >> 3. Create a new VG ontop of the RAID5 array.
>> >>
>> >> vgcreate vgPecDisk2 /dev/md/vgPecDisk2
>> >>
>> >> 4. NOW you create your LVs on top of this
>> >>
>> >> lvcreate ....
>> >>
>> >> The problem you have is that none of your LVs was ever created with
>> >> RAID5.  If you want to do a test, try this:
>> >>
>> >> lvcreate -n test-raid5 --type raid5 --size 5g --stripes 4 vgPecDisk2
>> >>
>> >> and if it works (which it probably will on your system, assuming your
>> >> LVM tools have support for RAID5 in the first place), you can then
>> >> look at the output of the 'lvdisplay test-raid5' command to see how
>> >> many devices and stripes (segments) that LV has.
>> >>
>> >> None of the ones you show have this.  For example, your lvBackupVokapo
>> >> only shows 1 segment.  Without multiple segments, and RAID, you can't
>> >> survive any sort of failure in your setup.
>> >>
>> >> This is why I personally only ever put LVs ontop of RAID devices if I
>> >> have important data.
>> >>
>> >> Does this help you understand what went wrong here?
>> >>
>> >> John

Libor> I think I have all PVs not on top of raw partitions. System is on
Libor> mdraid and backup PVs are directly on disks, without partitions.
Libor>
Libor> I think that LVs:
Libor> lvAmandaDaily01old
Libor> lvBackupPc
Libor> lvBackupRsync
Libor> are old damaged LVs I left for experimenting on.
Libor>
Libor> These LVs are some broken parts of the old raid?
Libor> lvAmandaDailyAuS01_rimage_2_extracted
Libor> lvAmandaDailyAuS01_rmeta_2_extracted
Libor>
Libor> LV lvAmandaDailyBlS01 is also from before the crash, but I didn't try to
Libor> repair it (I think).
Libor>
Libor> Libor

Libor> ---------------
Libor> cat /proc/mdstat (mdraid used only for OS)
Libor> Personalities : [raid1] [raid10] [raid6] [raid5] [raid4]
Libor> md1 : active raid1 sde3[0] sdf3[1]
Libor>       487504704 blocks super 1.2 [2/2] [UU]
Libor>       bitmap: 1/4 pages [4KB], 65536KB chunk
Libor> md0 : active raid1 sde2[0] sdf2[1]
Libor>       249664 blocks super 1.2 [2/2] [UU]
Libor>       bitmap: 0/1 pages [0KB], 65536KB chunk
Libor> -----------------

Libor> cat /proc/partitions
Libor> major minor #blocks name
Libor> 8 80 488386584 sdf
Libor> 8 81 498688 sdf1
Libor> 8 82 249856 sdf2
Libor> 8 83 487635968 sdf3
Libor> 8 48 2930266584 sdd
Libor> 8 64 488386584 sde
Libor> 8 65 498688 sde1
Libor> 8 66 249856 sde2
Libor> 8 67 487635968 sde3
Libor> 8 0 2930266584 sda
Libor> 8 16 2930266584 sdb
Libor> 9 0 249664 md0
Libor> 9 1 487504704 md1
Libor> 253 0 67108864 dm-0
Libor> 253 1 3903488 dm-1
Libor> 8 96 2930266584 sdg
Libor> 253 121 4096 dm-121
Libor> 253 122 34955264 dm-122
Libor> 253 123 4096 dm-123
Libor> 253 124 34955264 dm-124
Libor> 253 125 4096 dm-125
Libor> 253 126 34955264 dm-126
Libor> 253 127 4096 dm-127
Libor> 253 128 34955264 dm-128
Libor> 253 129 104865792 dm-129
Libor> 253 11 4096 dm-11
Libor> 253 12 209715200 dm-12
Libor> 253 13 4096 dm-13
Libor> 253 14 209715200 dm-14
Libor> 253 15 4096 dm-15
Libor> 253 16 209715200 dm-16
Libor> 253 17 4096 dm-17
Libor> 253 18 209715200 dm-18
Libor> 253 19 629145600 dm-19
Libor> 253 38 4096 dm-38
Libor> 253 39 122335232 dm-39
Libor> 253 40 4096 dm-40
Libor> 253 41 122335232 dm-41
Libor> 253 42 4096 dm-42
Libor> 253 43 122335232 dm-43
Libor> 253 44 4096 dm-44
Libor> 253 45 122335232 dm-45
Libor> 253 46 367005696 dm-46
Libor> 253 47 4096 dm-47
Libor> 253 48 16777216 dm-48
Libor> 253 49 4096 dm-49
Libor> 253 50 16777216 dm-50
Libor> 253 51 16777216 dm-51
Libor> 253 52 4096 dm-52
Libor> 253 53 4194304 dm-53
Libor> 253 54 4096 dm-54
Libor> 253 55 4194304 dm-55
Libor> 253 56 4194304 dm-56
Libor> 253 57 4096 dm-57
Libor> 253 58 11186176 dm-58
Libor> 253 59 4096 dm-59
Libor> 253 60 11186176 dm-60
Libor> 253 61 4096 dm-61
Libor> 253 62 11186176 dm-62
Libor> 253 63 4096 dm-63
Libor> 253 64 11186176 dm-64
Libor> 253 65 33558528 dm-65
Libor> 253 2 4096 dm-2
Libor> 253 3 125829120 dm-3
Libor> 253 4 4096 dm-4
Libor> 253 5 125829120 dm-5
Libor> 253 6 4096 dm-6
Libor> 253 7 125829120 dm-7
Libor> 253 8 4096 dm-8
Libor> 253 9 125829120 dm-9
Libor> 253 10 377487360 dm-10
Libor> 253 20 4096 dm-20
Libor> 253 21 12582912 dm-21
Libor> 253 22 4096 dm-22
Libor> 253 23 12582912 dm-23
Libor> 253 24 4096 dm-24
Libor> 253 25 12582912 dm-25
Libor> 253 26 4096 dm-26
Libor> 253 27 12582912 dm-27
Libor> 253 28 37748736 dm-28
Libor> 253 66 4096 dm-66
Libor> 253 67 122335232 dm-67
Libor> 253 68 4096 dm-68
Libor> 253 69 122335232 dm-69
Libor> 253 70 4096 dm-70
Libor> 253 71 122335232 dm-71
Libor> 253 72 4096 dm-72
Libor> 253 73 122335232 dm-73
Libor> 253 74 367005696 dm-74
Libor> 253 31 416489472 dm-31
Libor> 253 32 4096 dm-32
Libor> 253 75 34955264 dm-75
Libor> 253 78 4096 dm-78
Libor> 253 79 34955264 dm-79
Libor> 253 80 4096 dm-80
Libor> 253 81 34955264 dm-81
Libor> 253 82 104865792 dm-82
Libor> 253 92 4096 dm-92
Libor> 253 93 17477632 dm-93
Libor> 253 94 4096 dm-94
Libor> 253 95 17477632 dm-95
Libor> 253 96 4096 dm-96
Libor> 253 97 17477632 dm-97
Libor> 253 98 4096 dm-98
Libor> 253 99 17477632 dm-99
Libor> 253 100 52432896 dm-100
Libor> 253 76 4096 dm-76
Libor> 253 77 50331648 dm-77
Libor> 253 83 4096 dm-83
Libor> 253 84 50331648 dm-84
Libor> 253 85 4096 dm-85
Libor> 253 86 50331648 dm-86
Libor> 253 87 4096 dm-87
Libor> 253 88 50331648 dm-88
Libor> 253 89 150994944 dm-89
Libor> 253 90 4096 dm-90
Libor> 253 91 44740608 dm-91
Libor> 253 101 4096 dm-101
Libor> 253 102 44740608 dm-102
Libor> 253 103 4096 dm-103
Libor> 253 104 44740608 dm-104
Libor> 253 105 4096 dm-105
Libor> 253 106 44740608 dm-106
Libor> 253 107 134221824 dm-107
Libor> -------------------------------

Libor> pvs -v
Libor>     Scanning for physical volume names
Libor>   PV       VG         Fmt  Attr PSize   PFree DevSize PV UUID
Libor>   /dev/md1 vgPecDisk1 lvm2 a--  464.92g     0 464.92g MLqS2b-iuvt-7ES8-rPHo-SPwm-Liiz-TUtHLI
Libor>   /dev/sda vgPecDisk2 lvm2 a--    2.73t 1.20t   2.73t 0vECyp-EndR-oD66-va0g-0ORd-cS7E-7rMylw
Libor>   /dev/sdb vgPecDisk2 lvm2 a--    2.73t 1.20t   2.73t 5ZhwR7-AClb-oEsi-s2Zi-xouM-en0Z-ZQ0fwr
Libor>   /dev/sdd vgPecDisk2 lvm2 a--    2.73t 2.03t   2.73t RI3dhw-Ns0t-BLyN-BQd5-vDx0-ucHb-X8ntkO
Libor>   /dev/sdg vgPecDisk2 lvm2 a--    2.73t 1.23t   2.73t yaohhB-dkF6-rQRk-dBsL-JHS7-8KOo-eYSqOj
Libor> -------------------------------

Libor> pvdisplay
Libor> --- Physical volume ---
Libor> PV Name /dev/md1
Libor> VG Name vgPecDisk1
Libor> PV Size 464.92 GiB / not usable 1.81 MiB
Libor> Allocatable yes (but full)
Libor> PE Size 4.00 MiB
Libor> Total PE 119019
Libor> Free PE 0
Libor> Allocated PE 119019
Libor> PV UUID MLqS2b-iuvt-7ES8-rPHo-SPwm-Liiz-TUtHLI

Libor> --- Physical volume ---
Libor> PV Name /dev/sdd
Libor> VG Name vgPecDisk2
Libor> PV Size 2.73 TiB / not usable 2.00 MiB
Libor> Allocatable yes
Libor> PE Size 4.00 MiB
Libor> Total PE 715396
Libor> Free PE 531917
Libor> Allocated PE 183479
Libor> PV UUID RI3dhw-Ns0t-BLyN-BQd5-vDx0-ucHb-X8ntkO

Libor> --- Physical volume ---
Libor> PV Name /dev/sda
Libor> VG Name vgPecDisk2
Libor> PV Size 2.73 TiB / not usable 1022.00 MiB
Libor> Allocatable yes
Libor> PE Size 4.00 MiB
Libor> Total PE 714884
Libor> Free PE 315671
Libor> Allocated PE 399213
Libor> PV UUID 0vECyp-EndR-oD66-va0g-0ORd-cS7E-7rMylw

Libor> --- Physical volume ---
Libor> PV Name /dev/sdb
Libor> VG Name vgPecDisk2
Libor> PV Size 2.73 TiB / not usable 1022.00 MiB
Libor> Allocatable yes
Libor> PE Size 4.00 MiB
Libor> Total PE 714884
Libor> Free PE 315671
Libor> Allocated PE 399213
Libor> PV UUID 5ZhwR7-AClb-oEsi-s2Zi-xouM-en0Z-ZQ0fwr

Libor> --- Physical volume ---
Libor> PV Name /dev/sdg
Libor> VG Name vgPecDisk2
Libor> PV Size 2.73 TiB / not usable 2.00 MiB
Libor> Allocatable yes
Libor> PE Size 4.00 MiB
Libor> Total PE 715396
Libor> Free PE 321305
Libor> Allocated PE 394091
Libor> PV UUID yaohhB-dkF6-rQRk-dBsL-JHS7-8KOo-eYSqOj
Libor> -----------------------------

Libor> vgs -v
Libor>   VG         Attr   Ext   #PV #LV #SN VSize   VFree VG UUID
Libor>   vgPecDisk1 wz--n- 4.00m   1   3   0 464.92g     0 Dtbxaa-KySR-R1VY-Wliy-Lqba-HQyt-7PYmnv
Libor>   vgPecDisk2 wz--n- 4.00m   4  20   0  10.91t 5.66t 0Ok7sE-Eo1O-pbuT-LX3D-dluI-25dw-cr9DY8
Libor> --------------------------------

Libor> vgdisplay
Libor> --- Volume group ---
Libor> VG Name vgPecDisk1
Libor> System ID
Libor> Format lvm2
Libor> Metadata Areas 1
Libor> Metadata Sequence No 9
Libor> VG Access read/write
Libor> VG Status resizable
Libor> MAX LV 0
Libor> Cur LV 3
Libor> Open LV 3
Libor> Max PV 0
Libor> Cur PV 1
Libor> Act PV 1
Libor> VG Size 464.92 GiB
Libor> PE Size 4.00 MiB
Libor> Total PE 119019
Libor> Alloc PE / Size 119019 / 464.92 GiB
Libor> Free PE / Size 0 / 0
Libor> VG UUID Dtbxaa-KySR-R1VY-Wliy-Lqba-HQyt-7PYmnv

Libor> --- Volume group ---
Libor> VG Name vgPecDisk2
Libor> System ID
Libor> Format lvm2
Libor> Metadata Areas 8
Libor> Metadata Sequence No 476
Libor> VG Access read/write
Libor> VG Status resizable
Libor> MAX LV 0
Libor> Cur LV 20
Libor> Open LV 13
Libor> Max PV 0
Libor> Cur PV 4
Libor> Act PV 4
Libor> VG Size 10.91 TiB
Libor> PE Size 4.00 MiB
Libor> Total PE 2860560
Libor> Alloc PE / Size 1375996 / 5.25 TiB
Libor> Free PE / Size 1484564 / 5.66 TiB
Libor> VG UUID 0Ok7sE-Eo1O-pbuT-LX3D-dluI-25dw-cr9DY8
Libor> ------------------------------

Libor> lvs -v
Libor>     Finding all logical volumes
Libor> LV VG #Seg Attr LSize Maj Min KMaj KMin Pool Origin Data% Meta% Move Copy% Log Convert LV UUID
Libor> lvSwap vgPecDisk1 1 -wi-ao-- 3.72g -1 -1 253 1 Jo9ie0-jKfo-Ks6Q-TsgK-skvM-qJio-Ar5WWe
Libor> lvSystem vgPecDisk1 1 -wi-ao-- 64.00g -1 -1 253 0 ZEdPxL-Wn5s-QapH-BzdZ-4Os7-eV0g-SVwNoD
Libor> lvTmp vgPecDisk1 1 -wi-ao-- 397.20g -1 -1 253 31 JjgNKC-ctgq-VDz3-BJbn-HZHd-W3s2-XWxUT9
Libor> lvAmandaDaily01 vgPecDisk2 1 rwi-aor- 100.01g -1 -1 253 82 lrBae6-Yj5V-OZUT-Z4Qz-umsu-6SGe-35SJfK
Libor> lvAmandaDaily01old vgPecDisk2 1 rwi---r- 1.09t -1 -1 -1 -1 nofmj3-ntya-cbDi-ZjZH-zBKV-K1PA-Sw0Pvq
Libor> lvAmandaDailyAuS01 vgPecDisk2 1 rwi-aor- 360.00g -1 -1 253 10 fW0QrZ-sa2J-21nM-0qDv-nTUx-Eomx-3KTocB
Libor> lvAmandaDailyAuS01_rimage_2_extracted vgPecDisk2 1 vwi---v- 120.00g -1 -1 -1 -1 Ii0Hyk-A2d3-PUC3-CMZL-CqDY-qFLs-yuDKwq
Libor> lvAmandaDailyAuS01_rmeta_2_extracted vgPecDisk2 1 vwi---v- 4.00m -1 -1 -1 -1 WNq913-IM82-Cnh0-dmPb-BzWE-KJNP-H84dmS
Libor> lvAmandaDailyBlS01 vgPecDisk2 1 rwi---r- 320.00g -1 -1 -1 -1 fJTCsr-MF1S-jAXo-7SHc-Beyf-ICMV-LJQpnt
Libor> lvAmandaDailyElme01 vgPecDisk2 1 rwi-aor- 144.00g -1 -1 253 89 1Q0Sre-CnV1-wqPZ-9bf0-qnW6-6nqt-NOlxyp
Libor> lvAmandaDailyEl01 vgPecDisk2 1 rwi-aor- 350.00g -1 -1 253 74 Sni0fy-Bf1V-AKXS-Qfd1-qmFC-MUwY-xgCw22
Libor> lvAmandaHoldingDisk vgPecDisk2 1 rwi-aor- 36.00g -1 -1 253 28 e5pr0g-cH2I-dMHd-lwsi-JRR0-0D0P-67eXLY
Libor> lvBackupElme2 vgPecDisk2 1 rwi-aor- 350.00g -1 -1 253 46 Ee9RAX-ycZ8-PNzl-MUvg-VjPl-8vfW-BjfaQ9
Libor> lvBackupPc vgPecDisk2 1 rwi---r- 640.01g -1 -1 -1 -1 KaX4sX-CJsU-L5Ac-85OA-74HT-JX3L-nFxFTZ
Libor> lvBackupPc2 vgPecDisk2 1 rwi-aor- 600.00g -1 -1 253 19 2o9JWs-2hZT-4uMO-WJTd-ByMH-ugd9-3iGfke
Libor> lvBackupRsync vgPecDisk2 1 rwi---r- 256.01g -1 -1 -1 -1 cQOavD-85Pj-yu6X-yTpS-qxxT-XBWV-WIISKQ
Libor> lvBackupRsync2 vgPecDisk2 1 rwi-aor- 100.01g -1 -1 253 129 S4frRu-dVgG-Pomd-5niY-bLzd-S2wq-KxMPhM
Libor> lvBackupRsyncCCCrossserver vgPecDisk2 1 rwi-aor- 50.00g -1 -1 253 100 ytiis9-T1Pq-FAjT-MGhn-2nKd-zHFk-ROzeUf
Libor> lvBackupVokapo vgPecDisk2 1 rwi-aor- 128.00g -1 -1 253 107 pq67wa-NjPs-PwEx-rs1G-cZxf-s5xI-wkB9Ag
Libor> lvLXCElMysqlSlave vgPecDisk2 1 rwi-aor- 32.00g -1 -1 253 65 2fh6ch-2y5s-N3Ua-1Q1u-XSfx-JViq-x6dwut
Libor> lvLXCIcinga vgPecDisk2 1 rwi---r- 32.00g -1 -1 -1 -1 2kYSPl-HONv-zuf0-dhQn-1xI3-YVuU-brbumU
Libor> lvLXCJabber vgPecDisk2 1 rwi-aom- 4.00g -1 -1 253 56 100.00 AAWI1f-fYFO-2ewM-YvfP-AdC4-bXd8-k2NiZZ
Libor> lvLXCWebxMysqlSlave vgPecDisk2 1 rwi-aom- 16.00g -1 -1 253 51 100.00 m2dzFv-axwm-2Ne6-kJkN-a3zo-E8Ai-qViTae
Libor> -----------------------------

Libor> lvdisplay
Libor> --- Logical volume ---
Libor> LV Path /dev/vgPecDisk1/lvSwap
Libor> LV Name lvSwap
Libor> VG Name vgPecDisk1
Libor> LV UUID Jo9ie0-jKfo-Ks6Q-TsgK-skvM-qJio-Ar5WWe
Libor> LV Write Access read/write
Libor> LV Creation host, time pec, 2014-02-20 12:22:52 +0100
Libor> LV Status available
Libor> # open 2
Libor> LV Size 3.72 GiB
Libor> Current LE 953
Libor> Segments 1
Libor> Allocation inherit
Libor> Read ahead sectors auto
Libor> - currently set to 256
Libor> Block device 253:1

Libor> --- Logical volume ---
Libor> LV Path /dev/vgPecDisk1/lvSystem
Libor> LV Name lvSystem
Libor> VG Name vgPecDisk1
Libor> LV UUID ZEdPxL-Wn5s-QapH-BzdZ-4Os7-eV0g-SVwNoD
Libor> LV Write Access read/write
Libor> LV Creation host, time pec, 2014-02-20 12:23:03 +0100
Libor> LV Status available
Libor> # open 1
Libor> LV Size 64.00 GiB
Libor> Current LE 16384
Libor> Segments 1
Libor> Allocation inherit
Libor> Read ahead sectors auto
Libor> - currently set to 256
Libor> Block device 253:0

Libor> --- Logical volume ---
Libor> LV Path /dev/vgPecDisk1/lvTmp
Libor> LV Name lvTmp
Libor> VG Name vgPecDisk1
Libor> LV UUID JjgNKC-ctgq-VDz3-BJbn-HZHd-W3s2-XWxUT9
Libor> LV Write Access read/write
Libor> LV Creation host, time pec, 2014-06-10 06:47:09 +0200
Libor> LV Status available
Libor> # open 1
Libor> LV Size 397.20 GiB
Libor> Current LE 101682
Libor> Segments 1
Libor> Allocation inherit
Libor> Read ahead sectors auto
Libor> - currently set to 256
Libor> Block device 253:31

Libor> --- Logical volume ---
Libor> LV Path /dev/vgPecDisk2/lvLXCWebxMysqlSlave
Libor> LV Name lvLXCWebxMysqlSlave
Libor> VG Name vgPecDisk2
Libor> LV UUID m2dzFv-axwm-2Ne6-kJkN-a3zo-E8Ai-qViTae
Libor> LV Write Access read/write
Libor> LV Creation host, time pec, 2014-02-21 18:15:22 +0100
Libor> LV Status available
Libor> # open 1
Libor> LV Size 16.00 GiB
Libor> Current LE 4096
Libor> Mirrored volumes 2
Libor> Segments 1
Libor> Allocation inherit
Libor> Read ahead sectors auto
Libor> - currently set to 256
Libor> Block device 253:51

Libor> --- Logical volume ---
Libor> LV Path /dev/vgPecDisk2/lvAmandaDaily01old
Libor> LV Name lvAmandaDaily01old
Libor> VG Name vgPecDisk2
Libor> LV UUID nofmj3-ntya-cbDi-ZjZH-zBKV-K1PA-Sw0Pvq
Libor> LV Write Access read/write
Libor> LV Creation host, time pec, 2014-02-24 21:03:49 +0100
Libor> LV Status NOT available
Libor> LV Size 1.09 TiB
Libor> Current LE 286722
Libor> Segments 1
Libor> Allocation inherit
Libor> Read ahead sectors auto

Libor> --- Logical volume ---
Libor> LV Path /dev/vgPecDisk2/lvAmandaDailyBlS01
Libor> LV Name lvAmandaDailyBlS01
Libor> VG Name vgPecDisk2
Libor> LV UUID fJTCsr-MF1S-jAXo-7SHc-Beyf-ICMV-LJQpnt
Libor> LV Write Access read/write
Libor> LV Creation host, time pec, 2014-03-18 08:50:38 +0100
Libor> LV Status NOT available
Libor> LV Size 320.00 GiB
Libor> Current LE 81921
Libor> Segments 1
Libor> Allocation inherit
Libor> Read ahead sectors auto

Libor> --- Logical volume ---
Libor> LV Path /dev/vgPecDisk2/lvLXCJabber
Libor> LV Name lvLXCJabber
Libor> VG Name vgPecDisk2
Libor> LV UUID AAWI1f-fYFO-2ewM-YvfP-AdC4-bXd8-k2NiZZ
Libor> LV Write Access read/write
Libor> LV Creation host, time pec, 2014-03-20 15:19:54 +0100
Libor> LV Status available
Libor> # open 1
Libor> LV Size 4.00 GiB
Libor> Current LE 1024
Libor> Mirrored volumes 2
Libor> Segments 1
Libor> Allocation inherit
Libor> Read ahead sectors auto
Libor> - currently set to 256
Libor> Block device 253:56

Libor> --- Logical volume ---
Libor> LV Path /dev/vgPecDisk2/lvBackupPc
Libor> LV Name lvBackupPc
Libor> VG Name vgPecDisk2
Libor> LV UUID KaX4sX-CJsU-L5Ac-85OA-74HT-JX3L-nFxFTZ
Libor> LV Write Access read/write
Libor> LV Creation host, time pec, 2014-07-01 13:22:50 +0200
Libor> LV Status NOT available
Libor> LV Size 640.01 GiB
Libor> Current LE 163842
Libor> Segments 1
Libor> Allocation inherit
Libor> Read ahead sectors auto

Libor> --- Logical volume ---
Libor> LV Path /dev/vgPecDisk2/lvLXCIcinga
Libor> LV Name lvLXCIcinga
Libor> VG Name vgPecDisk2
Libor> LV UUID 2kYSPl-HONv-zuf0-dhQn-1xI3-YVuU-brbumU
Libor> LV Write Access read/write
Libor> LV Creation host, time pec, 2014-08-13 19:04:28 +0200
Libor> LV Status NOT available
Libor> LV Size 32.00 GiB
Libor> Current LE 8193
Libor> Segments 1
Libor> Allocation inherit
Libor> Read ahead sectors auto

Libor> --- Logical volume ---
Libor> LV Path /dev/vgPecDisk2/lvBackupRsync
Libor> LV Name lvBackupRsync
Libor> VG Name vgPecDisk2
Libor> LV UUID cQOavD-85Pj-yu6X-yTpS-qxxT-XBWV-WIISKQ
Libor> LV Write Access read/write
Libor> LV Creation host, time pec, 2014-09-17 14:49:57 +0200
Libor> LV Status NOT available
Libor> LV Size 256.01 GiB
Libor> Current LE 65538
Libor> Segments 1
Libor> Allocation inherit
Libor> Read ahead sectors auto

Libor> --- Logical volume ---
Libor> LV Path /dev/vgPecDisk2/lvAmandaDaily01
Libor> LV Name lvAmandaDaily01
Libor> VG Name vgPecDisk2
Libor> LV UUID lrBae6-Yj5V-OZUT-Z4Qz-umsu-6SGe-35SJfK
Libor> LV Write Access read/write
Libor> LV Creation host, time pec, 2015-03-04 08:26:46 +0100
Libor> LV Status available
Libor> # open 1
Libor> LV Size 100.01 GiB
Libor> Current LE 25602
Libor> Segments 1
Libor> Allocation inherit
Libor> Read ahead sectors auto
Libor> - currently set to 1024
Libor> Block device 253:82

Libor> --- Logical volume ---
Libor> LV Path /dev/vgPecDisk2/lvBackupRsync2
Libor> LV Name lvBackupRsync2
Libor> VG Name vgPecDisk2
Libor> LV UUID S4frRu-dVgG-Pomd-5niY-bLzd-S2wq-KxMPhM
Libor> LV Write Access read/write
Libor> LV Creation host, time pec, 2015-03-04 19:17:17 +0100
Libor> LV Status available
Libor> # open 1
Libor> LV Size 100.01 GiB
Libor> Current LE 25602
Libor> Segments 1
Libor> Allocation inherit
Libor> Read ahead sectors auto
Libor> - currently set to 1024
Libor> Block device 253:129

Libor> --- Logical volume ---
Libor> LV Path /dev/vgPecDisk2/lvBackupPc2
Libor> LV Name lvBackupPc2
Libor> VG Name vgPecDisk2
Libor> LV UUID 2o9JWs-2hZT-4uMO-WJTd-ByMH-ugd9-3iGfke
Libor> LV Write Access read/write
Libor> LV Creation host, time pec, 2015-03-04 23:13:51 +0100
Libor> LV Status available
Libor> # open 1
Libor> LV Size 600.00 GiB
Libor> Current LE 153600
Libor> Segments 1
Libor> Allocation inherit
Libor> Read ahead sectors auto
Libor> - currently set to 1024
Libor> Block device 253:19

Libor> --- Logical volume ---
Libor> LV Path /dev/vgPecDisk2/lvBackupElme2
Libor> LV Name lvBackupElme2
Libor> VG Name vgPecDisk2
Libor> LV UUID Ee9RAX-ycZ8-PNzl-MUvg-VjPl-8vfW-BjfaQ9
Libor> LV Write Access read/write
Libor> LV Creation host, time pec, 2015-03-04 23:21:44 +0100
Libor> LV Status available
Libor> # open 1
Libor> LV Size 350.00 GiB
Libor> Current LE 89601
Libor> Segments 1
Libor> Allocation inherit
Libor> Read ahead sectors auto
Libor> - currently set to 1024
Libor> Block device 253:46

Libor> --- Logical volume ---
Libor> LV Path /dev/vgPecDisk2/lvLXCElMysqlSlave
Libor> LV Name lvLXCElMysqlSlave
Libor> VG Name vgPecDisk2
Libor> LV UUID 2fh6ch-2y5s-N3Ua-1Q1u-XSfx-JViq-x6dwut
Libor> LV Write Access read/write
Libor> LV Creation host, time pec, 2015-03-05 16:36:42 +0100
Libor> LV Status available
Libor> # open 1
Libor> LV Size 32.00 GiB
Libor> Current LE 8193
Libor> Segments 1
Libor> Allocation inherit
Libor> Read ahead sectors auto
Libor> - currently set to 1024
Libor> Block device 253:65


Libor> _______________________________________________
Libor> linux-lvm mailing list
Libor> linux-lvm@redhat.com
Libor> https://www.redhat.com/mailman/listinfo/linux-lvm
Libor> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2015-03-13 16:18 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed
-- links below jump to the message on this page --)
2015-03-09 11:21 [linux-lvm] Removing disk from raid LVM Libor Klepáč
2015-03-10  9:23 ` emmanuel segura
2015-03-10  9:34   ` Libor Klepáč
2015-03-10 14:05 ` John Stoffel
2015-03-11 13:05   ` Libor Klepáč
2015-03-11 15:57     ` John Stoffel
2015-03-11 18:02       ` Libor Klepáč
2015-03-12 14:53         ` John Stoffel
2015-03-12 15:21           ` Libor Klepáč
2015-03-12 17:20             ` John Stoffel
2015-03-12 21:32               ` Libor Klepáč
2015-03-13 16:18                 ` John Stoffel
2015-03-12 15:32           ` Libor Klepáč
2015-03-11 23:12 ` Premchand Gupta

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).