linux-lvm.redhat.com archive mirror
* [linux-lvm] pvs complains of missing PVs that are not missing
@ 2013-10-09 21:29 Joe Harvell
  2013-10-09 21:49 ` Joe Harvell
       [not found] ` <1381356552.67129.YahooMailNeo@web181504.mail.ne1.yahoo.com>
  0 siblings, 2 replies; 12+ messages in thread
From: Joe Harvell @ 2013-10-09 21:29 UTC (permalink / raw)
  To: linux-lvm

As part of moving disks from one system to another, LVM has somehow decided
that PVs are missing in the new system even though they are not.  Before I
explain the relevant chain of events around moving the disks, here is the
problem as it manifests on the new system:

joey@akita ~ $ sudo pvs -v
     Scanning for physical volume names
     There are 9 physical volumes missing.
     There are 9 physical volumes missing.
     There are 9 physical volumes missing.
   PV                   VG       Fmt  Attr PSize   PFree DevSize PV UUID
   /dev/md/saluki:r10_0 bayeux   lvm2 a-m   40,00g      0 40,00g C4V6YY-1bA1-Clg9-f45q-ffma-BH07-K7ex1w
   /dev/md/saluki:r10_1 bayeux   lvm2 --m   74,97g      0 75,00g d3LmqC-1GnU-LPjb-uYqH-Z2jG-lQrp-31JOXo
   /dev/md/saluki:r10_2 bayeux   lvm2 a-m  224,78g 154,78g 224,81g JQsXS2-XfhA-zucx-Xetf-HHI0-aGFp-Ibz2We
   /dev/md/saluki:r5_0  bayeux   lvm2 a-m  360,00g   2,00g 360,00g UcdCOb-RqEK-0ofL-ql90-viLC-ssri-eopnTg
   /dev/md/saluki:r5_1  bayeux   lvm2 a-m  279,09g 145,09g 279,11g FLFUQW-PZHK-uY19-Zblf-s3RU-QPZK-0Pc3vC
   /dev/sda11           bayeux   lvm2 a--   93,31g  93,31g 93,32g KSVeZ5-DUI8-XCK4-NfiF-WB2r-6RGe-GymTuF
   /dev/sda5            bayeux   lvm2 a--  150,00g      0 150,00g FhGyS2-yKGw-pfxE-EyY4-yGi3-3Hoa-JCRCk1
   /dev/sda7            bayeux   lvm2 a--  107,16g  59,59g 107,16g lghpEG-bnje-tBY3-1jGJ-suAN-S8g5-ti5Df0
   /dev/sda9            bayeux   lvm2 a--   29,38g  22,34g 29,38g 8wXKU8-2phP-4NKE-hVMo-8VY2-5Z7D-SVRwmU
   /dev/sdb3            seward   lvm2 a--  234,28g 104,28g 234,28g WnNkO0-8709-p5lN-bTGF-KdAJ-X29B-1cM5bv
   /dev/sdc11           bayeux   lvm2 a--   93,31g  86,28g 93,32g MoWrvQ-oI3A-OWBT-cwkp-eswH-BkNp-fuhXLI
   /dev/sdc5            bayeux   lvm2 a--  150,00g 150,00g 150,00g eeVLsy-DIb3-1w1G-VtIa-S6Bv-w9Li-pVQhLD
   /dev/sdc7            bayeux   lvm2 a--  107,16g 107,16g 107,16g K8ibVQ-AABO-islF-imv0-a0wv-ho4w-mxAUBO
   /dev/sdc9            bayeux   lvm2 a--   29,38g  29,38g 29,38g csjMOF-pIO8-o2dP-Vm5l-QRhP-6g5G-UdOSqH
   /dev/sdd1            shanghai lvm2 ---    2,73t      0 2,73t IVJKal-Oode-Yn0T-oS9z-tadX-X1cs-1J2ut1
   /dev/sde1            bayeux   lvm2 a-m  372,59g 172,59g 372,61g e21dTH-FxZw-P4Ug-f0S1-jIe9-hdYc-MCyH1D
   /dev/sdf11           bayeux   lvm2 a-m   93,31g  76,88g 93,32g jPQ6YS-LPTg-N7jA-65CA-F0tP-VSSE-GZ6vX5
   /dev/sdf6            bayeux   lvm2 a-m  150,00g      0 150,00g cxcme3-i1US-6MZI-Nx2U-fkSg-gQfz-X0Y468
   /dev/sdf7            bayeux   lvm2 a--  107,16g 107,16g 107,16g pTZN6n-sLvt-whyf-rWkJ-bZoP-IkiE-lEe92P
   /dev/sdf9            bayeux   lvm2 a-m   29,38g 128,00m 29,38g Qtqo8m-UjUh-Qe4R-VkkN-oJmK-pHG3-lPJVSm

joey@akita ~ $ sudo pvscan
   PV /dev/sdd1              VG shanghai   lvm2 [2,73 TiB / 0 free]
   PV /dev/sdb3              VG seward     lvm2 [234,28 GiB / 104,28 GiB free]
   PV /dev/md/saluki:r10_0   VG bayeux     lvm2 [40,00 GiB / 0 free]
   PV /dev/md/saluki:r5_0    VG bayeux     lvm2 [360,00 GiB / 2,00 GiB free]
   PV /dev/sdf6              VG bayeux     lvm2 [150,00 GiB / 0 free]
   PV /dev/sdf7              VG bayeux     lvm2 [107,16 GiB / 107,16 GiB free]
   PV /dev/md/saluki:r10_1   VG bayeux     lvm2 [74,97 GiB / 0 free]
   PV /dev/sdf9              VG bayeux     lvm2 [29,38 GiB / 128,00 MiB free]
   PV /dev/sdc5              VG bayeux     lvm2 [150,00 GiB / 150,00 GiB free]
   PV /dev/sdc7              VG bayeux     lvm2 [107,16 GiB / 107,16 GiB free]
   PV /dev/sdc9              VG bayeux     lvm2 [29,38 GiB / 29,38 GiB free]
   PV /dev/sda5              VG bayeux     lvm2 [150,00 GiB / 0 free]
   PV /dev/sda7              VG bayeux     lvm2 [107,16 GiB / 59,59 GiB free]
   PV /dev/sda9              VG bayeux     lvm2 [29,38 GiB / 22,34 GiB free]
   PV /dev/sda11             VG bayeux     lvm2 [93,31 GiB / 93,31 GiB free]
   PV /dev/sdc11             VG bayeux     lvm2 [93,31 GiB / 86,28 GiB free]
   PV /dev/sdf11             VG bayeux     lvm2 [93,31 GiB / 76,88 GiB free]
   PV /dev/sde1              VG bayeux     lvm2 [372,59 GiB / 172,59 GiB free]
   PV /dev/md/saluki:r5_1    VG bayeux     lvm2 [279,09 GiB / 145,09 GiB free]
   PV /dev/md/saluki:r10_2   VG bayeux     lvm2 [224,78 GiB / 154,78 GiB free]
   Total: 20 [5,39 TiB] / in use: 20 [5,39 TiB] / in no VG: 0 [0 ]

The problem is that every PV shown with the "missing" attribute is actually
present in the new system.

The way I would like to prove they are in the system is to read the PV label
directly from the devices and show that they carry the expected UUIDs, but I
have not found a way to do that.  So instead I'll take a simple example from
the 9 "missing" PVs above.  pvs reports that the PV with UUID
e21dTH-FxZw-P4Ug-f0S1-jIe9-hdYc-MCyH1D lives at /dev/sde1, yet flags it as
missing.  I know for a fact that it is in the system.  Here's how I know.
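
A minimal sketch of checking the label by hand, assuming the default on-disk
layout (the LVM2 label normally sits in one of the first four 512-byte sectors
of the PV and stores the PV UUID as a 32-character string without dashes):

     # dump the label sectors and list the printable strings in them;
     # expect to see LABELONE, "LVM2 001" and the undashed PV UUID
     sudo dd if=/dev/sde1 bs=512 count=4 2>/dev/null | strings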

I know that the Seagate disk (the only Seagate in the system) has 1 
partition that is a PV in VG bayeux, and that it contains exactly one LV 
named backup.  Here's what the system shows now about /dev/sde1:

joey@akita /tmp $ sudo parted /dev/sde print
Password:
Model: ATA ST3400620AS (scsi)
Disk /dev/sde: 400GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End    Size   File system  Name    Flags
  1      17,4kB  400GB  400GB               bayeux  lvm (Logical Volume Manager)


So /dev/sde1 is the place where /dev/bayeux/backup should reside.

Here is what the system shows now about that LV and the (supposedly 
missing) /dev/sde1 PV:

joey@akita /tmp $ sudo pvdisplay --maps /dev/sde1
   --- Physical volume ---
   PV Name               /dev/sde1
   VG Name               bayeux
   PV Size               372,61 GiB / not usable 18,05 MiB
   Allocatable           yes
   PE Size               32,00 MiB
   Total PE              11923
   Free PE               5523
   Allocated PE          6400
   PV UUID               e21dTH-FxZw-P4Ug-f0S1-jIe9-hdYc-MCyH1D

   --- Physical Segments ---
   Physical extent 0 to 6399:
     Logical volume      /dev/bayeux/backup
     Logical extents     0 to 6399
   Physical extent 6400 to 11922:
     FREE

joey@akita /tmp $ sudo lvdisplay --maps /dev/bayeux/backup
   --- Logical volume ---
   LV Path                /dev/bayeux/backup
   LV Name                backup
   VG Name                bayeux
   LV UUID                QhTdFi-cGuL-380h-2hIB-wXNx-D57w-pdlPpB
   LV Write Access        read/write
   LV Creation host, time saluki, 2013-05-11 17:14:45 -0500
   LV Status              available
   # open                 0
   LV Size                200,00 GiB
   Current LE             6400
   Segments               1
   Allocation             inherit
   Read ahead sectors     auto
   - currently set to     256
   Block device           254:32

   --- Segments ---
   Logical extent 0 to 6399:
     Type                linear
     Physical volume     /dev/sde1
     Physical extents    0 to 6399


And to top it off, I can mount the filesystem that lives on /dev/sde1 
and I see the backup data (I activated bayeux with --partial):

joey@akita /tmp $ sudo mkdir -m 000 /tmp/bb
joey@akita /tmp $ sudo mount -o ro /dev/bayeux/backup /tmp/bb
joey@akita /tmp $ sudo find /tmp/bb -maxdepth 3 -ls
      2    4 drwxr-xr-x   4 root     root         4096 mai 12 00:31 /tmp/bb
7585793    4 drwxr-xr-x   4 root     root         4096 mai 12 00:33 /tmp/bb/to
7585794    4 drwx------   2 root     root         4096 nov. 27 2012 /tmp/bb/to/lost+found
7585795    4 drwxr-xr-x   3 marta    marta        4096 mai 12 01:11 /tmp/bb/to/pikawa
262145    8 -rw-r--r--   1 marta    marta        6148 déc. 10 2012 /tmp/bb/to/pikawa/.DS_Store
7585796    4 drwx--S---   3 marta    marta        4096 mai 12 01:11 /tmp/bb/to/pikawa/pikawa.sparsebundle
     11   16 drwx------   2 root     root        16384 mai 11 23:35 /tmp/bb/lost+found
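
(The --partial activation mentioned above is not shown in the transcript; it
would have been of roughly this form, a sketch using the VG name from this
report:

     sudo vgchange -ay --partial bayeux
)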


Ok, I think that shows the PV is in fact not missing from the system!
Now here's the explanation of the chain of events that I think
contributed to getting into this state:
1.  Initially, I had 5 disks in host saluki: three Western Digital 1 TB
disks, one Western Digital 3 TB disk, and one Seagate 400 GB disk.
2.  I removed all but two of the 1 TB WD disks from saluki and rebooted
it.  I was able to boot thanks to a combination of Linux RAID (not LVM
RAID) and the fact that only non-essential file systems lived on the
disks I removed.
3.  Then I re-added all the disks to saluki except the Seagate.  I
re-added the partitions of the 1 TB WD I had removed to the corresponding
RAID volumes, and all the RAID volumes re-synced fine.  I re-added all
the file systems to /etc/fstab except the one on the Seagate and
continued in that configuration for a while.
4.  Finally, I moved all 5 disks into host akita.  akita is a new
machine, and I installed the entire OS in a new VG (seward) consisting of
one disk.

I think the 9 problem PVs are related to the disks I removed and later
re-added.  I should point out that between steps 3 and 4 I tried to do an
lvextend in VG bayeux, but it told me it would not allow this since the VG
had partial (missing) PVs.  Of course that makes total sense, and I'm glad
it stopped me.  So that means I could not have changed any LVM configuration
for bayeux while the VG was incomplete, and I don't see any reason why LVM
is complaining about missing PVs.

FYI, here are the details of the RAID configuration:

joey@akita /tmp $ cat /proc/mdstat
Personalities : [raid1] [raid10] [raid6] [raid5] [raid4]
md121 : active raid10 sda6[0] sdf5[2] sdc6[1]
       235732992 blocks super 1.2 512K chunks 2 near-copies [3/3] [UUU]
       bitmap: 0/2 pages [0KB], 65536KB chunk

md122 : active raid1 sdc1[4] sda1[5] sdf1[1]
       102436 blocks super 1.0 [3/3] [UUU]
       bitmap: 0/7 pages [0KB], 8KB chunk

md123 : active raid10 sdc2[0] sda2[3] sdf2[1]
       8388864 blocks super 1.0 64K chunks 2 near-copies [3/3] [UUU]
       bitmap: 0/9 pages [0KB], 512KB chunk

md124 : active raid10 sdc3[4] sda3[5] sdf3[1]
       41943232 blocks super 1.0 64K chunks 2 near-copies [3/3] [UUU]
       bitmap: 0/161 pages [0KB], 128KB chunk

md125 : active raid5 sdc4[4] sda4[5] sdf4[3]
       377487616 blocks super 1.0 level 5, 128k chunk, algorithm 2 [3/3] [UUU]
       bitmap: 0/181 pages [0KB], 512KB chunk

md126 : active raid10 sda8[4] sdf8[2] sdc8[3]
       78643008 blocks super 1.0 64K chunks 2 near-copies [3/3] [UUU]
       bitmap: 0/1 pages [0KB], 65536KB chunk

md127 : active raid5 sda10[0] sdf10[3] sdc10[1]
       292663040 blocks super 1.0 level 5, 128k chunk, algorithm 2 [3/3] [UUU]
       bitmap: 0/140 pages [0KB], 512KB chunk

unused devices: <none>
joey@akita /tmp $ ls -l /dev/md/*
lrwxrwxrwx 1 root root 8  8 oct.  23:51 /dev/md/saluki:boot -> ../md122
lrwxrwxrwx 1 root root 8  8 oct.  23:51 /dev/md/saluki:r10_0 -> ../md124
lrwxrwxrwx 1 root root 8  8 oct.  23:51 /dev/md/saluki:r10_1 -> ../md126
lrwxrwxrwx 1 root root 8  8 oct.  23:51 /dev/md/saluki:r10_2 -> ../md121
lrwxrwxrwx 1 root root 8  8 oct.  23:51 /dev/md/saluki:r5_0 -> ../md125
lrwxrwxrwx 1 root root 8  8 oct.  23:51 /dev/md/saluki:r5_1 -> ../md127
lrwxrwxrwx 1 root root 8  8 oct.  23:51 /dev/md/saluki:swap -> ../md123


Note that all of the RAID volumes are on partitions of disks sda, sdc 
and sdf (the three 1 TB WDs):

joey@akita /dev/disk/by-id $ ls -l ata* | egrep 'sd[acf]$'
lrwxrwxrwx 1 root root  9  8 oct.  23:51 ata-WDC_WD1001FALS-00E8B0_WD-WMATV6936241 -> ../../sdc
lrwxrwxrwx 1 root root  9  9 oct.  15:24 ata-WDC_WD1001FALS-00J7B0_WD-WMATV0666975 -> ../../sdf
lrwxrwxrwx 1 root root  9  8 oct.  23:51 ata-WDC_WD1001FALS-00J7B0_WD-WMATV6998349 -> ../../sda

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [linux-lvm] pvs complains of missing PVs that are not missing
  2013-10-09 21:29 Joe Harvell
@ 2013-10-09 21:49 ` Joe Harvell
       [not found] ` <1381356552.67129.YahooMailNeo@web181504.mail.ne1.yahoo.com>
  1 sibling, 0 replies; 12+ messages in thread
From: Joe Harvell @ 2013-10-09 21:49 UTC (permalink / raw)
  To: linux-lvm

> As part of moving disks from one system to another, somehow LVM thinks 
> PVS are missing in the new system but they are not.  Before I explain 
> the relevant chain of events related to moving the disks, here is the 
> problem as it manifests in the new system:
>
> [snip]
>

I forgot to mention the LVM version info:

joey@akita /dev/disk/by-id $ sudo lvm version
   LVM version:     2.02.103(2) (2013-10-04)
   Library version: 1.02.82 (2013-10-04)
   Driver version:  4.25.0

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [linux-lvm] pvs complains of missing PVs that are not missing
       [not found] ` <1381356552.67129.YahooMailNeo@web181504.mail.ne1.yahoo.com>
@ 2013-10-09 23:04   ` Joe Harvell
       [not found]   ` <5255D815.1090704@tekcomms.com>
  1 sibling, 0 replies; 12+ messages in thread
From: Joe Harvell @ 2013-10-09 23:04 UTC (permalink / raw)
  To: linux-lvm; +Cc: matthew patton

On 09/10/2013 17:09, matthew patton wrote:
> clear LVM's cache (or just disable it in lvm.conf), rescan pv, vg, lv. make sure udev is creating the necessary hooks.
>
I'm not sure how to disable the cache.  Here are my relevant settings in 
/etc/lvm/lvm.conf:

     # If set, the cache of block device nodes with all associated symlinks
     # will be constructed out of the existing udev database content.
     # This avoids using and opening any inapplicable non-block devices or
     # subdirectories found in the device directory. This setting is applied
     # to udev-managed device directory only, other directories will be
     # scanned fully. LVM2 needs to be compiled with udev support for this
     # setting to take effect. N.B. Any device node or symlink not managed
     # by udev in udev directory will be ignored with this setting on.
     obtain_device_list_from_udev = 1


     # The results of the filtering are cached on disk to avoid
     # rescanning dud devices (which can take a very long time).
     # By default this cache is stored in the /etc/lvm/cache directory
     # in a file called '.cache'.
     # It is safe to delete the contents: the tools regenerate it.
     # (The old setting 'cache' is still respected if neither of
     # these new ones is present.)
     # N.B. If obtain_device_list_from_udev is set to 1 the list of
     # devices is instead obtained from udev and any existing .cache
     # file is removed.
     cache_dir = "/etc/lvm/cache"
     cache_file_prefix = ""

     # You can turn off writing this cache file by setting this to 0.
     write_cache_state = 1


And as expected based on my setting for obtain_device_list_from_udev,
there is no /etc/lvm/cache directory present:

joey@akita ~ $ sudo nano -w /etc/lvm/lvm.conf
Mot de passe :
joey@akita ~ $ ls -al /etc/lvm/
total 68
drwxr-xr-x  5 root root  4096  7 oct.  14:12 .
drwxr-xr-x 74 root root  4096  9 oct.  11:21 ..
drwx------  2 root root  4096  8 oct.  22:21 archive
drwx------  2 root root  4096  8 oct.  22:21 backup
-rw-r--r--  1 root root 45697  9 oct.  17:27 lvm.conf
drwxr-xr-x  2 root root  4096  8 oct.  19:24 profile

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [linux-lvm] pvs complains of missing PVs that are not missing
       [not found]     ` <1381361455.80237.YahooMailNeo@web181502.mail.ne1.yahoo.com>
@ 2013-10-10 15:04       ` Joe Harvell
  2013-10-10 18:38         ` Peter Rajnoha
  0 siblings, 1 reply; 12+ messages in thread
From: Joe Harvell @ 2013-10-10 15:04 UTC (permalink / raw)
  To: linux-lvm; +Cc: matthew patton

On 09/10/2013 18:30, matthew patton wrote:
>> So I should set 'obtain_device_list_from_udev' to 0, then pvscan, vgscan
>> and lvscan?
>
> worth a shot. have you confirmed that udev has all the basic disk devices created?
>
I tried that to no avail.  Yes, all the block devices were present in 
/dev, both for the raw partitions and the RAID ones.

Does anyone know the algorithm LVM uses to determine whether PVs are 
present?  Also, I'd really like an LVM tool that reads the PV label off 
of a PV and displays it...I want to see what UUID label is actually on 
each PV.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [linux-lvm] pvs complains of missing PVs that are not missing
  2013-10-10 15:04       ` Joe Harvell
@ 2013-10-10 18:38         ` Peter Rajnoha
  2013-10-10 18:48           ` Peter Rajnoha
  2013-10-10 19:48           ` Joe Harvell
  0 siblings, 2 replies; 12+ messages in thread
From: Peter Rajnoha @ 2013-10-10 18:38 UTC (permalink / raw)
  To: LVM general discussion and development; +Cc: matthew patton

On 10/10/2013 05:04 PM, Joe Harvell wrote:
> On 09/10/2013 18:30, matthew patton wrote:
>>> So I should set 'obtain_device_list_from_udev' to 0, then pvscan, vgscan
>>> and lvscan?
>>
>> worth a shot. have you confirmed that udev has all the basic disk
>> devices created?
>>
> I tried that to no avail.  Yes, all the block devices were present in
> /dev, both for the raw partitions and the RAID ones.
> 
> Does anyone know the algorithm LVM uses to determine whether PVs are
> present?  Also, I'd really like an LVM tool that reads the PV label off
> of a PV and displays it...I want to see what UUID label is actually on
> each PV.
> 

Two important questions here - which distro is this?

There are two notions of cache in LVM: the device cache and the metadata
cache. The first is controlled by the write_cache_state setting (which is
obsoleted by obtaining the device list from udev). The second is controlled
by the use_lvmetad setting. lvmetad (and the metadata cache) was added to
LVM only recently, while the device cache has been there for a long time...

As for the other important question:
Is lvmetad used or not? (Check the global/use_lvmetad lvm.conf setting.)
If lvmetad is used, it gathers incoming PVs based on events, which means
that once a PV becomes available in the system, lvmetad is notified
automatically. The PV is then scanned for LVM metadata and lvmetad stores
that information, which is reused by each LVM command instead of scanning
/dev for PVs again and again. Note that lvmetad requires udev to operate!
If lvmetad is used, does a pvscan --cache call help?

If lvmetad is not used, then every time an LVM command is executed, each
block device in /dev is scanned for PV labels. Here the
obtain_device_list_from_udev lvm.conf setting only changes how we get the
list of block devices: if it is disabled, LVM scans all of the /dev content
directly and selects the block devices itself; if it is enabled, we get the
list of block devices from the udev database (which is a bit quicker, since
we don't need to iterate over everything in /dev and decide whether each
item is a block device).
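
A quick way to check which of these paths applies on a given system, sketched
here against a stock lvm.conf (pvscan --cache is only meaningful when lvmetad
is enabled):

     # see which cache-related settings are in effect
     grep -E 'use_lvmetad|obtain_device_list_from_udev|write_cache_state' /etc/lvm/lvm.conf
     # if use_lvmetad = 1, ask lvmetad to rescan devices and rebuild its metadata cache
     sudo pvscan --cache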

Peter

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [linux-lvm] pvs complains of missing PVs that are not missing
  2013-10-10 18:38         ` Peter Rajnoha
@ 2013-10-10 18:48           ` Peter Rajnoha
  2013-10-10 19:48           ` Joe Harvell
  1 sibling, 0 replies; 12+ messages in thread
From: Peter Rajnoha @ 2013-10-10 18:48 UTC (permalink / raw)
  To: LVM general discussion and development; +Cc: matthew patton

On 10/10/2013 08:38 PM, Peter Rajnoha wrote:
> There are two notions of cache in LVM. One is device cache, the other one
> is metadata cache. The first one is controlled by write_cache_state setting
> (which is obsoleted by obtaining the device list from udev).
> 
...
> If lvmetad is not used, whenever the LVM command is executed, each
> block device in /dev is scanned for PV labels, every time! Here,
> the obtain_device_list_from_udev lvm.conf setting makes a difference
> in a way how we get the list of block devices - if this setting is
> disabled, LVM directly scans all the /dev content and it selects block devices
> itself.

...if LVM selects the block devices itself, it can cache this selection by
using 'write_cache_state=1': LVM writes the list of block devices to a cache
file that is read the next time, so it does not need to select them again on
the next command execution. Of course, this cache file is of no use when
'obtain_device_list_from_udev=1', since we then get the list of block devices
from udev every time... that's the reason 'obtain_device_list_from_udev=1'
obsoletes 'write_cache_state=1'. This is the *device cache*: it comes either
from udev or from the file written by LVM itself.
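
For reference, when the file-based device cache is in use it lives under the
configured cache_dir with the '.cache' name (per the lvm.conf comments quoted
earlier in this thread), and it is safe to delete; a sketch:

     # the on-disk device cache, only written when obtain_device_list_from_udev=0
     # and write_cache_state=1
     ls -l /etc/lvm/cache/.cache
     # safe to remove; LVM regenerates it on the next command
     sudo rm -f /etc/lvm/cache/.cache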

Peter

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [linux-lvm] pvs complains of missing PVs that are not missing
  2013-10-10 18:38         ` Peter Rajnoha
  2013-10-10 18:48           ` Peter Rajnoha
@ 2013-10-10 19:48           ` Joe Harvell
  1 sibling, 0 replies; 12+ messages in thread
From: Joe Harvell @ 2013-10-10 19:48 UTC (permalink / raw)
  To: linux-lvm

On 10/10/2013 13:38, Peter Rajnoha wrote:
> On 10/10/2013 05:04 PM, Joe Harvell wrote:
>> On 09/10/2013 18:30, matthew patton wrote:
>>>> So I should set 'obtain_device_list_from_udev' to 0, then pvscan, vgscan
>>>> and lvscan?
>>> worth a shot. have you confirmed that udev has all the basic disk
>>> devices created?
>>>
>> I tried that to no avail.  Yes, all the block devices were present in
>> /dev, both for the raw partitions and the RAID ones.
>>
>> Does anyone know the algorithm LVM uses to determine whether PVs are
>> present?  Also, I'd really like an LVM tool that reads the PV label off
>> of a PV and displays it...I want to see what UUID label is actually on
>> each PV.
>>
> Two important questions here - which distro is this?
>
> There are two notions of cache in LVM. One is device cache, the other one
> is metadata cache. The first one is controlled by write_cache_state setting
> (which is obsoleted by obtaining the device list from udev). The latter one
> is controlled by use_lvmetad setting. The lvmetad (and metadata cache) has
> been added to LVM just recently, while the device cache is there for a long
> time...
>
> As for the other important question:
> Is lvmetad used or not? (check global/use_lvmetad lvm.conf setting).
> If lvmetad is used, it gathers incoming PVs based on events which means
> once the PV is available in the system, lvmetad gets notified automatically.
> Then the PV is scanned for LVM metadata and lvmetad stores that information.
> This information is then reused for each LVM command call instead of scanning
> the /dev again and again for PVs. The lvmetad requires udev for its operation!
> If lvmetad is used, does pvscan --cache call help?
>
> If lvmetad is not used, whenever the LVM command is executed, each
> block device in /dev is scanned for PV labels, every time! Here,
> the obtain_device_list_from_udev lvm.conf setting makes a difference
> in a way how we get the list of block devices - if this setting is
> disabled, LVM directly scans all the /dev content and it selects block devices
> itself. If it's enabled, we get the list of block devices from udev database
> (which is a bit quicker as we don't need to iterate over all the content of
> /dev and decide which item is a block device or not, saving a bit of time
> this way).
>
> Peter
>
Thanks, Peter.  I run Gentoo, and I have a custom busybox-based initrd
with a static /dev.  The initrd loads the necessary kernel modules,
assembles the RAID arrays and then runs 'lvm vgscan', followed by
'lvm vgchange -ay seward', before mounting the root file system and doing a
switch_root exec'ing systemd.  So there are two LVM installations: the
initrd's and the system's.  The lvm.conf for the initrd and for the running
system are both included below with comments removed.  In both cases,
use_lvmetad = 0 and obtain_device_list_from_udev = 1.

Is there some LVM tool that reads and displays the PV label from a
specified block device?  Or one that reads and displays the LVM metadata
from a specified block device?
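
One low-tech way to look at this, sketched under the assumption of the
default on-disk layout: pvck checks and locates the PV label and metadata
areas, and because LVM2 metadata is stored as plain text shortly after the
label, dumping the start of the device and filtering printable strings
usually exposes it, UUIDs included.

     # report where the label and text metadata area are found
     sudo pvck -v /dev/sde1
     # dump the first MiB and eyeball the plain-text VG metadata
     sudo dd if=/dev/sde1 bs=1M count=1 2>/dev/null | strings | less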

Also, I realize I failed to mention something important.  When I first
booted with all 5 disks in akita (the new system), the static /dev in my
initrd did not have block device entries for all the partitions of all the
disks in the system.  Additionally, after I added the new entries with
mknod, my devices { filter } setting in the initrd was filtering them out.
I have since corrected both of these.  Before I fixed this, running vgscan
would produce complaints about several missing PVs, referring to them by
UUID.  After I fixed it, I no longer see those complaints, so it must now be
finding the PVs with those UUIDs... but it still shows the PVs with the
"missing" flag and forces me to use --partial when activating the VG :(

On my running system, I have already done the following procedure and 
saw no changes:

1. Set devices { filter = [] }
2. Set devices { obtain_device_list_from_udev = 0 }
3. Executed pvscan
4. Executed vgscan
5. Executed lvscan
6. Observed a new file created /etc/lvm/cache/.cache
7. Observed that PVs still have the missing flag and that I still have to say --partial to activate bayeux
8. Reverted config from steps 1 and 2
9. Deleted /etc/lvm/cache and its contents

Also for reference, here is what my initrd does exactly:

#!/bin/busybox sh

export PATH=/bin:/sbin
mount -n -t proc none /proc
mount -n -t sysfs none /sys
modprobe ahci
modprobe sd_mod
modprobe dm-mod
modprobe md-mod
modprobe raid1
modprobe raid10
modprobe raid5
modprobe ext4
modprobe xhci-hcd
modprobe ehci-hcd
modprobe uhci-hcd
modprobe ohci-hcd
modprobe usbhid
modprobe hid-generic
modprobe nvidia
mdadm --assemble --scan
lvm vgscan
lvm vgchange -ay seward
mount -t ext4 /dev/seward/root /mnt
umount /proc
umount /sys
exec switch_root /mnt /usr/lib/systemd/systemd --log-level=debug --log-target=journal


// Initrd LVM version (statically linked)
   LVM version:     2.02.103(2) (2013-10-04)
   Library version: 1.02.82 (2013-10-04)
   Driver version:  4.25.0
// Initrd LVM config

config {
     checks = 1
     abort_on_errors = 0
     profile_dir = "/etc/lvm/profile"
}

devices {
     dir = "/dev"
     scan = [ "/dev" ]
     obtain_device_list_from_udev = 1
     preferred_names = [ ]
     filter = [ "a|^/dev/md/.*|", "a|^/dev/sd[a-f][1-9]$|", 
"a|^/dev/sd[a-f]1[0-5]$|", "r|.*|" ]
     cache_dir = "/etc/lvm/cache"
     cache_file_prefix = ""
     write_cache_state = 1
     sysfs_scan = 1
     multipath_component_detection = 1
     md_component_detection = 1
     md_chunk_alignment = 1
     data_alignment_detection = 1
     data_alignment = 0
     data_alignment_offset_detection = 1
     ignore_suspended_devices = 0
     disable_after_error_count = 0
     require_restorefile_with_uuid = 1
     pv_min_size = 2048
     issue_discards = 0
}

allocation {
     maximise_cling = 1
     thin_pool_metadata_require_separate_pvs = 0
}

log {
     verbose = 0
     silent = 0
     syslog = 1
     overwrite = 0
     level = 0
     indent = 1
     command_names = 0
     prefix = "  "
     debug_classes = [ "memory", "devices", "activation", "allocation",
                       "lvmetad", "metadata", "cache", "locking" ]
}

backup {
     backup = 1
     backup_dir = "/etc/lvm/backup"
     archive = 1
     archive_dir = "/etc/lvm/archive"
     retain_min = 10
     retain_days = 30
}

shell {
     history_size = 100
}


global {
     umask = 077
     test = 0
     units = "h"
     si_unit_consistency = 1
     activation = 1
     fallback_to_lvm1 = 0
     proc = "/proc"
     locking_type = 1
     wait_for_locks = 1
     fallback_to_clustered_locking = 1
     fallback_to_local_locking = 1
     locking_dir = "/run/lock/lvm"
     prioritise_write_locks = 1
     abort_on_internal_errors = 0
     detect_internal_vg_cache_corruption = 0
     metadata_read_only = 0
     mirror_segtype_default = "raid1"
     raid10_segtype_default = "raid10"
     use_lvmetad = 0
}

activation {
     checks = 0
     udev_sync = 1
     udev_rules = 1
     verify_udev_operations = 0
     retry_deactivation = 1
     missing_stripe_filler = "error"
     use_linear_target = 1
     reserved_stack = 64
     reserved_memory = 8192
     process_priority = -18
     raid_region_size = 512
     readahead = "auto"
     raid_fault_policy = "warn"
     mirror_log_fault_policy = "allocate"
     mirror_image_fault_policy = "remove"
     snapshot_autoextend_threshold = 100
     snapshot_autoextend_percent = 20
     thin_pool_autoextend_threshold = 100
     thin_pool_autoextend_percent = 20
     use_mlockall = 0
     monitoring = 1
     polling_interval = 15
}

metadata {
}

dmeventd {
}

dmeventd {
     mirror_library = "libdevmapper-event-lvm2mirror.so"
     snapshot_library = "libdevmapper-event-lvm2snapshot.so"
     thin_library = "libdevmapper-event-lvm2thin.so"
}


// Running system LVM version:
   LVM version:     2.02.103(2) (2013-10-04)
   Library version: 1.02.82 (2013-10-04)
   Driver version:  4.25.0

// Running system LVM config:
config {
     checks = 1
     abort_on_errors = 0
     profile_dir = "/etc/lvm/profile"
}

devices {
     dir = "/dev"
     scan = [ "/dev" ]
     obtain_device_list_from_udev = 1
     preferred_names = [ ]
     filter = [ "a|^/dev/md/.*|", "a|^/dev/sd[a-g][1-9]$|", 
"a|^/dev/sd[a-g]1[0-5]$|", "r|.*|" ]
     cache_dir = "/etc/lvm/cache"
     cache_file_prefix = ""
     write_cache_state = 1
     sysfs_scan = 1
     multipath_component_detection = 1
     md_component_detection = 1
     md_chunk_alignment = 1
     data_alignment_detection = 1
     data_alignment = 0
     data_alignment_offset_detection = 1
     ignore_suspended_devices = 0
     disable_after_error_count = 0
     require_restorefile_with_uuid = 1
     pv_min_size = 2048
     issue_discards = 0
}

allocation {
     maximise_cling = 1
     mirror_logs_require_separate_pvs = 0
     thin_pool_metadata_require_separate_pvs = 0
}

log {
     verbose = 0
     silent = 0
     syslog = 1
     overwrite = 0
     level = 0
     indent = 1
     command_names = 0
     prefix = "  "
     debug_classes = [ "memory", "devices", "activation", "allocation",
                       "lvmetad", "metadata", "cache", "locking" ]
}

backup {
     backup = 1
     backup_dir = "/etc/lvm/backup"
     archive = 1
     archive_dir = "/etc/lvm/archive"
     retain_min = 10
     retain_days = 30
}

shell {
     history_size = 100
}


global {
     umask = 077
     test = 0
     units = "h"
     si_unit_consistency = 1
     activation = 1
     fallback_to_lvm1 = 0
     proc = "/proc"
     locking_type = 1
     wait_for_locks = 1
     fallback_to_clustered_locking = 1
     fallback_to_local_locking = 1
     locking_dir = "/run/lock/lvm"
     prioritise_write_locks = 1
     abort_on_internal_errors = 0
     detect_internal_vg_cache_corruption = 0
     metadata_read_only = 0
     mirror_segtype_default = "raid1"
     raid10_segtype_default = "raid10"
     use_lvmetad = 0
}

activation {
     checks = 0
     udev_sync = 1
     udev_rules = 1
     verify_udev_operations = 0
     retry_deactivation = 1
     missing_stripe_filler = "error"
     use_linear_target = 1
     reserved_stack = 64
     reserved_memory = 8192
     process_priority = -18
     raid_region_size = 512
     readahead = "auto"
     raid_fault_policy = "warn"
     mirror_log_fault_policy = "allocate"
     mirror_image_fault_policy = "remove"
     snapshot_autoextend_threshold = 100
     snapshot_autoextend_percent = 20
     thin_pool_autoextend_threshold = 100
     thin_pool_autoextend_percent = 20
     use_mlockall = 0
     monitoring = 1
     polling_interval = 15
}

metadata {
}

dmeventd {
     mirror_library = "libdevmapper-event-lvm2mirror.so"
     snapshot_library = "libdevmapper-event-lvm2snapshot.so"
     thin_library = "libdevmapper-event-lvm2thin.so"
}

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [linux-lvm] pvs complains of missing PVs that are not missing
@ 2013-10-23  4:26 Shi Jin
  2013-10-23 16:43 ` Joe Harvell
  0 siblings, 1 reply; 12+ messages in thread
From: Shi Jin @ 2013-10-23  4:26 UTC (permalink / raw)
  To: linux-lvm; +Cc: joe.harvell


Hi Joe,

I am having similar issues: I have an LVM2 raid1 mirror between a local disk
and an iSCSI disk, and once I remove the iSCSI disk and add it back, the
mirror stays broken.  I traced the cause to LVM2 still complaining about a
missing PV even though the same PV has been added back.

I wonder if you have resolved your issue, and if so what you did.
Thanks a lot.
Shi


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [linux-lvm] pvs complains of missing PVs that are not missing
  2013-10-23  4:26 [linux-lvm] pvs complains of missing PVs that are not missing Shi Jin
@ 2013-10-23 16:43 ` Joe Harvell
  2013-10-24  9:35   ` Zdenek Kabelac
  0 siblings, 1 reply; 12+ messages in thread
From: Joe Harvell @ 2013-10-23 16:43 UTC (permalink / raw)
  To: linux-lvm

> Hi Joe,
>
> I am having similar issues in that I have a LVM2 raid1 mirror between 
> a local disk and a iSCSI disk and once I remove the iSCSI disk and add 
> it back, the mirror stays broken. I traced the cause of that to be 
> LVM2 still complains about a PV missing even though the same PV has 
> been added back.
>
> I wonder if you have resolved your issue and what did you do.
> Thanks a lot.
> Shi
>
>
Shi,

This is not yet resolved.  I'm still waiting for Peter Rajnoha to follow
up after I provided the information he requested.  He said he might be
able to look at it more closely later this week.  In the meantime, I work
around the issue by activating the VG with --partial.  I can access my
data, but I can't make any LVM config changes.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [linux-lvm] pvs complains of missing PVs that are not missing
  2013-10-23 16:43 ` Joe Harvell
@ 2013-10-24  9:35   ` Zdenek Kabelac
  2013-10-24 15:30     ` Joe Harvell
  0 siblings, 1 reply; 12+ messages in thread
From: Zdenek Kabelac @ 2013-10-24  9:35 UTC (permalink / raw)
  To: LVM general discussion and development

On 23.10.2013 18:43, Joe Harvell wrote:
>> Hi Joe,
>>
>> I am having similar issues in that I have a LVM2 raid1 mirror between a
>> local disk and a iSCSI disk and once I remove the iSCSI disk and add it
>> back, the mirror stays broken. I traced the cause of that to be LVM2 still
>> complains about a PV missing even though the same PV has been added back.
>>
>> I wonder if you have resolved your issue and what did you do.
>> Thanks a lot.
>> Shi
>>
>>
> Shi,
>
> This is not yet resolved.  I'm still waiting for Peter Rajnoha to follow up
> after I provided him the information he requested.  He said he might be able
> to look at it more closely later this week. In the mean time, I work around
> this issue by activating the VG with --partial.  I can access my data, but I
> can't make any LVM config changes.
>
>

If the missing PV has been reattached to your system, you need to run:

vgextend --restoremissing <VG name> /dev/path_to_PV

Once a PV is detected as missing, the metadata on the PVs that are still
present is updated and that PV is marked as MISSING.
When such a PV reappears, you need to run the command above to
re-synchronize the metadata.
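
Applied to the VG from this thread, that would look something like the
following (a sketch; the device name is taken from the original report, and
the command would be repeated for each PV that carries the missing flag):

     sudo vgextend --restoremissing bayeux /dev/sde1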

Zdenek

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [linux-lvm] pvs complains of missing PVs that are not missing
  2013-10-24  9:35   ` Zdenek Kabelac
@ 2013-10-24 15:30     ` Joe Harvell
  2013-10-24 16:26       ` Alasdair G Kergon
  0 siblings, 1 reply; 12+ messages in thread
From: Joe Harvell @ 2013-10-24 15:30 UTC (permalink / raw)
  To: linux-lvm

On 24/10/2013 04:35, Zdenek Kabelac wrote:
> On 23.10.2013 18:43, Joe Harvell wrote:
>>> Hi Joe,
>>>
>>> I am having similar issues in that I have a LVM2 raid1 mirror between a
>>> local disk and a iSCSI disk and once I remove the iSCSI disk and add it
>>> back, the mirror stays broken. I traced the cause of that to be LVM2 
>>> still
>>> complains about a PV missing even though the same PV has been added 
>>> back.
>>>
>>> I wonder if you have resolved your issue and what did you do.
>>> Thanks a lot.
>>> Shi
>>>
>>>
>> Shi,
>>
>> This is not yet resolved.  I'm still waiting for Peter Rajnoha to 
>> follow up
>> after I provided him the information he requested.  He said he might 
>> be able
>> to look at it more closely later this week. In the mean time, I work 
>> around
>> this issue by activating the VG with --partial.  I can access my 
>> data, but I
>> can't make any LVM config changes.
>>
>>
>
> If the missing PV is reattached to your system - you need to
>
> vgextend --restoremissing   /dev/path_to_PV
>
>
> Since once the PV is detected as missing - metadata are update on 
> present PVs,
> and the missing PV is marked as MISSING.
> When such PV reappear - you need to run the command to re-synchronize 
> metadata.
>
> Zdenek
>
>
My faith in LVM is restored.  This fixed my problem.  I really need to 
read through all those man pages more closely.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [linux-lvm] pvs complains of missing PVs that are not missing
  2013-10-24 15:30     ` Joe Harvell
@ 2013-10-24 16:26       ` Alasdair G Kergon
  0 siblings, 0 replies; 12+ messages in thread
From: Alasdair G Kergon @ 2013-10-24 16:26 UTC (permalink / raw)
  To: LVM general discussion and development

The system has no idea what happened to the PV while it was missing,
so it requires you to take this action to confirm that you've checked
what happened and decided it is OK to continue to use the volume after
it reappears.

Alasdair

^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2013-10-24 16:26 UTC | newest]

Thread overview: 12+ messages
2013-10-23  4:26 [linux-lvm] pvs complains of missing PVs that are not missing Shi Jin
2013-10-23 16:43 ` Joe Harvell
2013-10-24  9:35   ` Zdenek Kabelac
2013-10-24 15:30     ` Joe Harvell
2013-10-24 16:26       ` Alasdair G Kergon
  -- strict thread matches above, loose matches on Subject: below --
2013-10-09 21:29 Joe Harvell
2013-10-09 21:49 ` Joe Harvell
     [not found] ` <1381356552.67129.YahooMailNeo@web181504.mail.ne1.yahoo.com>
2013-10-09 23:04   ` Joe Harvell
     [not found]   ` <5255D815.1090704@tekcomms.com>
     [not found]     ` <1381361455.80237.YahooMailNeo@web181502.mail.ne1.yahoo.com>
2013-10-10 15:04       ` Joe Harvell
2013-10-10 18:38         ` Peter Rajnoha
2013-10-10 18:48           ` Peter Rajnoha
2013-10-10 19:48           ` Joe Harvell
