linux-lvm.redhat.com archive mirror
* [linux-lvm] Missing Logical Volumes
@ 2014-12-19 10:32 G Crowe
  2014-12-19 16:46 ` Jack Waterworth
  0 siblings, 1 reply; 5+ messages in thread
From: G Crowe @ 2014-12-19 10:32 UTC (permalink / raw)
  To: linux-lvm

After rebooting, some of my logical volumes did not have device files.

/dev/array1/LVpics
and
/dev/mapper/array1-LVpics
did not exist, but the output of "lvdisplay" said the volume was 
available (see below).

vgscan did not resolve the problem.

I was able to regain access to the LV by renaming it, then renaming it 
back...
[root@host1 ~]# lvrename /dev/array1/LVpics /dev/array1/LVpicsnew
   Renamed "LVpics" to "LVpicsnew" in volume group "array1"
[root@host1 ~]# lvrename /dev/array1/LVpicsnew /dev/array1/LVpics
   Renamed "LVpicsnew" to "LVpics" in volume group "array1"

There are 29 LVs in the VG; 25 of them came up OK and 4 had this 
problem. Note that there is only a single PV (a RAID6 array) in the 
VG, and there are two VGs on the machine.

Is this expected behaviour, or is it something I should be worried about?



   --- Logical volume ---
   LV Path                /dev/array1/LVpics
   LV Name                LVpics
   VG Name                array1
   LV UUID                WH7g9u-Ls7J-fIpQ-Hk2p-mUuH-QRKf-9uxcM2
   LV Write Access        read/write
   LV Creation host, time example.com, 2013-11-26 07:29:51 +1100
   LV Status              available
   # open                 0
   LV Size                350.00 GiB
   Current LE             89600
   Segments               2
   Allocation             inherit
   Read ahead sectors     auto
   - currently set to     256
   Block device           253:9


I am running Fedora 19 with kernel 3.11.9-200.fc19.x86_64


Thanks

GC


* Re: [linux-lvm] Missing Logical Volumes
  2014-12-19 10:32 [linux-lvm] Missing Logical Volumes G Crowe
@ 2014-12-19 16:46 ` Jack Waterworth
  2014-12-19 23:07   ` G Crowe
  0 siblings, 1 reply; 5+ messages in thread
From: Jack Waterworth @ 2014-12-19 16:46 UTC (permalink / raw)
  To: linux-lvm

It sounds like the VG was not activated. You can activate it with the 
following command:

     # vgchange -ay array1
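If the LVs already report as active, the stale device nodes themselves can
usually be rebuilt with "vgscan --mknodes" or "dmsetup mknodes". A minimal
sketch of checking the activation state from lvs output (the helper name and
the sample lv_attr lines are illustrative, not taken from this report):

```shell
# Print the LVs whose lv_attr marks them active; in LVM2's lvs output
# the fifth lv_attr character is the activation state ('a' = active).
# Input: "lv_name lv_attr" pairs, as produced by:
#   lvs --noheadings -o lv_name,lv_attr array1
active_lvs() {
    awk 'substr($2, 5, 1) == "a" { print $1 }'
}

# Illustrative sample input (not the reporter's real output):
printf '%s\n' 'LVpics -wi-a-----' 'LVbroken -wi-------' | active_lvs
# -> LVpics
```

Any LV that lvs reports active but that has no /dev/mapper node is a
candidate for the mknodes commands above.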

   Jack Waterworth, Red Hat Certified Architect
   Senior Storage Technical Support Engineer
   Red Hat Global Support Services ( 1.888.467.3342 )

On 12/19/2014 05:32 AM, G Crowe wrote:
> After rebooting, some of my logical volumes did not have device files.
>
> /dev/array1/LVpics
> and
> /dev/mapper/array1-LVpics
> did not exist but the output of "lvdisplay" said that the volume was 
> available (see below).
>
> vgscan did not resolve the problem.
>
> I was able to regain access to the LV by renaming it, then renaming it 
> back...
> [root@host1 ~]# lvrename /dev/array1/LVpics /dev/array1/LVpicsnew
>   Renamed "LVpics" to "LVpicsnew" in volume group "array1"
> [root@host1 ~]# lvrename /dev/array1/LVpicsnew /dev/array1/LVpics
>   Renamed "LVpicsnew" to "LVpics" in volume group "array1"
>
> There are 29 LVs in the VG and 25 of them came up OK and 4 had this 
> problem. Note that there is only one single PV (a RAID6 array) in the 
> VG, and there are two VGs on the machine.
>
> Is this expected behaviour, or is it something I should be worried about?


* Re: [linux-lvm] Missing Logical Volumes
  2014-12-19 16:46 ` Jack Waterworth
@ 2014-12-19 23:07   ` G Crowe
  2014-12-24 13:51     ` mghofran
  0 siblings, 1 reply; 5+ messages in thread
From: G Crowe @ 2014-12-19 23:07 UTC (permalink / raw)
  To: LVM general discussion and development


No, this didn't work.

[root@host1 ~]# vgchange -ay array1
   29 logical volume(s) in volume group "array1" now active

And the missing /dev/mapper files were not created. (I left one LV 
unfixed so I can try any suggested solutions.)

All of the other LVs in the same VG are completely usable, so it doesn't 
seem to be a problem with the VG as a whole.


Thanks

GC


On 20/12/2014 3:46 AM, Jack Waterworth wrote:
> It sounds like the VG was not activated. You can activate it with the 
> following command:
>
>     # vgchange -ay array1
>
>   Jack Waterworth, Red Hat Certified Architect
>   Senior Storage Technical Support Engineer
>   Red Hat Global Support Services ( 1.888.467.3342 )


* Re: [linux-lvm] Missing Logical Volumes
  2014-12-19 23:07   ` G Crowe
@ 2014-12-24 13:51     ` mghofran
  2015-01-02  9:16       ` Graham Crowe
  0 siblings, 1 reply; 5+ messages in thread
From: mghofran @ 2014-12-24 13:51 UTC (permalink / raw)
  To: linux-lvm

If the VG had not been activated to begin with, you would not have been able to see any LVs at all, so "vgchange -ay" was not the way to go.

If you still have the issue, please attach the output of:

# vgdisplay -v /dev/array1
# grep -i filter /etc/lvm/lvm.conf
# vgscan
# ls -al /dev/mapper
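Comparing what device-mapper reports against what is actually present in
/dev/mapper also narrows down whether the mappings or just the nodes are
missing. A rough sketch, with the helper and the device names made up for
illustration (on a live system the two lists would come from "dmsetup ls"
and "ls /dev/mapper"; names in the first list but not the second are
candidates for "dmsetup mknodes"):

```shell
# $1: whitespace-separated names device-mapper reports (dmsetup ls)
# $2: whitespace-separated entries present in /dev/mapper
# Prints every name in $1 that has no matching node in $2.
missing_nodes() {
    for name in $1; do
        case " $2 " in
            *" $name "*) ;;          # node exists, nothing to do
            *) echo "$name" ;;       # mapping exists but node is missing
        esac
    done
}

# Illustrative sample (hypothetical LV names):
missing_nodes "array1-LVpics array1-LVhome" "array1-LVhome"
# -> array1-LVpics
```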


-----Original Message-----
From: linux-lvm-bounces@redhat.com [mailto:linux-lvm-bounces@redhat.com] On Behalf Of G Crowe
Sent: Friday, December 19, 2014 6:08 PM
To: LVM general discussion and development
Subject: Re: [linux-lvm] Missing Logical Volumes


No, this didn't work.

[root@host1 ~]# vgchange -ay array1
   29 logical volume(s) in volume group "array1" now active

And the missing /dev/mapper files were not created (I left one LV un-fixed to try any suggested solutions)

All of the other LVs in the same VG are completely usable, so it doesn't seem to be a problem with the VG as a whole.


Thanks

GC



_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

This message is intended for the use of the person(s) to whom it may be addressed. It may contain information that is privileged, confidential, or otherwise protected from disclosure under applicable law. If you are not the intended recipient, any dissemination, distribution, copying, or use of this information is prohibited. If you have received this message in error, please permanently delete it and immediately notify the sender. Thank you.


* Re: [linux-lvm] Missing Logical Volumes
  2014-12-24 13:51     ` mghofran
@ 2015-01-02  9:16       ` Graham Crowe
  0 siblings, 0 replies; 5+ messages in thread
From: Graham Crowe @ 2015-01-02  9:16 UTC (permalink / raw)
  To: LVM general discussion and development

Thanks for the reply; however, I had to reboot the server, which cleared 
the problem. If it reoccurs, I will save the output as per your suggestion. 
(I'd like to find a solution that doesn't involve rebooting, as this server 
hosts many Xen virtual machines that are a pain to restart.)
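If it does reoccur, one thing to try before rebooting is deactivating and
reactivating just the affected LV, which should recreate its device nodes.
A sketch only: the DRY_RUN guard and helper name are illustrative, and
"lvchange -an" will fail if the LV is mounted or otherwise open, so it
must be unmounted first.

```shell
# Reactivate a single LV in place instead of rebooting.
# Set DRY_RUN=1 to print the commands rather than run them, so the
# sequence can be reviewed before touching a live VG.
reactivate_lv() {
    lv=$1                                # e.g. array1/LVpics
    run() { if [ -n "$DRY_RUN" ]; then echo "$@"; else "$@"; fi; }
    run lvchange -an "$lv"               # deactivate: removes the dm device
    run lvchange -ay "$lv"               # reactivate: recreates the nodes
    # /dev/mapper names join VG and LV with '-'
    run ls -l "/dev/mapper/$(echo "$lv" | tr / -)"
}

DRY_RUN=1 reactivate_lv array1/LVpics
```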


Thanks for your help.


On 25/12/2014 12:51 AM, mghofran@bidmc.harvard.edu wrote:
> If your VG was not activated to begin with, you could not see any LV at all so "vgchange -a y" was not the way to go.
>
> If you still have the issue, please attach the results of:
>
> # vgdisplay -v /dev/array1
> # grep -i filter /etc/lvm/lvm.conf
> # vgscan
> # ls -al /dev/mapper

