From: Jonathan E Brassow <jbrassow@redhat.com>
To: LVM general discussion and development <linux-lvm@redhat.com>
Subject: Re: [linux-lvm] 5 out of 6 Volumes Vanished!
Date: Wed, 1 Nov 2006 16:42:43 -0600
Message-ID: <c9c75dc7924988c0f8af3da18c9064d3@redhat.com>
In-Reply-To: <6.2.3.4.2.20061101143040.07042530@postoffice.no-ip.com>
What's the output of 'cat /proc/partitions; pvs; vgs; lvs; cat
/etc/lvm/backup/Vol01'?
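
Spelled out, that's (same commands, one per line; 'Vol01' is just my
guess at the name -- use whatever actually appears under
/etc/lvm/backup/ on your machine):

  cat /proc/partitions
  pvs
  vgs
  lvs
  cat /etc/lvm/backup/Vol01
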
brassow
On Nov 1, 2006, at 4:31 PM, Mache Creeger wrote:
> I understand about the md issue, but that only addresses Vol05 and
> does not address the other volumes that are gone. Any ideas about
> Vol01 to Vol04?
>
> -- Mache
>
> At 09:40 AM 11/1/2006, Jonathan E Brassow wrote:
>> I'm not clear on how your LVM volume groups are mapped to the
>> underlying devices; and sadly, I'm not that familiar with md or its
>> terminology. What does "inactive" mean? Your first command suggests
>> that /dev/md0 is active, but the second says it is inactive... In
>> any case, if the md devices are not available and your LVM volume
>> groups are composed of MD devices, that would explain why you are not
>> seeing your volume groups.
>>
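>> (With the caveat, again, that I'm no md expert: the usual recipe
>> I've seen for an array that /proc/mdstat reports as "inactive" is to
>> stop it and reassemble it, roughly:
>>
>> # mdadm --examine /dev/hd[befghi]1
>> # mdadm --stop /dev/md0
>> # mdadm --assemble --run /dev/md0 /dev/hd[befgh]1
>>
>> --examine lets you compare the members' event counts and UUIDs
>> before touching anything, and --run starts the array even though it
>> is degraded. Treat that as a sketch to check against the mdadm man
>> page, not a prescription.)
>>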
>> You could look at your various LVM backup files (located in
>> /etc/lvm/backup/<vg name>), see what devices they are using and check
>> whether the system sees those devices...
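>>
>> Something like this would do it (a rough sketch -- the backup files
>> are plain text, and each PV section carries a "device" hint line):
>>
>> # grep 'device = ' /etc/lvm/backup/*
>> # cat /proc/partitions
>>
>> Any device named by the first command but absent from the second
>> would explain a missing volume group.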
>>
>> brassow
>>
>> On Oct 31, 2006, at 4:56 PM, Mache Creeger wrote:
>>
>>> Most of my volumes have vanished, except for Vol0. I had 6 volumes
>>> set up with LVM. Vol5 had 600 GB of data running over RAID5 using
>>> XFS.
>>>
>>> Can anyone help?
>>>
>>> Here are some diagnostics.
>>>
>>> -- Mache Creeger
>>>
>>> # mdadm -A /dev/md0
>>> mdadm: device /dev/md0 already active - cannot assemble it
>>>
>>> # cat /proc/mdstat
>>> Personalities : [raid6] [raid5] [raid4]
>>> md0 : inactive hdi1[5](S) hdb1[0] hdh1[4] hdg1[3] hdf1[2] hde1[1]
>>> 1172151808 blocks
>>>
>>> unused devices: <none>
>>>
>>> # more /proc/mdstat
>>> Personalities : [raid6] [raid5] [raid4]
>>> md0 : inactive hdb1[0] hdh1[4] hdg1[3] hdf1[2] hde1[1]
>>> 976791040 blocks
>>>
>>> # mdadm --detail /dev/md0
>>> /dev/md0:
>>> Version : 00.90.03
>>> Creation Time : Sat Apr 8 10:01:48 2006
>>> Raid Level : raid5
>>> Device Size : 195358208 (186.31 GiB 200.05 GB)
>>> Raid Devices : 6
>>> Total Devices : 5
>>> Preferred Minor : 0
>>> Persistence : Superblock is persistent
>>>
>>> Update Time : Sat Oct 21 22:30:40 2006
>>> State : active, degraded
>>> Active Devices : 5
>>> Working Devices : 5
>>> Failed Devices : 0
>>> Spare Devices : 0
>>>
>>> Layout : left-symmetric
>>> Chunk Size : 256K
>>>
>>> UUID : 0e3284f1:bf1053ea:e580013b:368be46b
>>> Events : 0.3090999
>>>
>>> Number Major Minor RaidDevice State
>>> 0 3 65 0 active sync /dev/hdb1
>>> 1 33 1 1 active sync /dev/hde1
>>> 2 33 65 2 active sync /dev/hdf1
>>> 3 34 1 3 active sync /dev/hdg1
>>> 4 34 65 4 active sync /dev/hdh1
>>> 0 0 0 0 removed
>>>
>>> # more /etc/fstab
>>> /dev/VolGroup00/LogVol00  /             ext3    defaults        1 1
>>> LABEL=/boot               /boot         ext3    defaults        1 2
>>> devpts                    /dev/pts      devpts  gid=5,mode=620  0 0
>>> tmpfs                     /dev/shm      tmpfs   defaults        0 0
>>> /dev/VolGroup04/LogVol04  /opt          ext3    defaults        1 2
>>> /dev/VolGroup05/LogVol05  /opt/bigdisk  xfs     defaults        1 2
>>> proc                      /proc         proc    defaults        0 0
>>> sysfs                     /sys          sysfs   defaults        0 0
>>> /dev/VolGroup01/LogVol01  /usr          ext3    defaults        1 2
>>> /dev/VolGroup02/LogVol02  /var          ext3    defaults        1 2
>>> /dev/VolGroup03/LogVol03  swap          swap    defaults        0 0
>>>
>>> # xfs_repair /dev/VolGroup05/LogVol05
>>> /dev/VolGroup05/LogVol05: No such file or directory
>>>
>>> fatal error -- couldn't initialize XFS library
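>>>
>>> (I assume the logical volume has to exist and be active before
>>> xfs_repair can open it -- something like
>>>
>>> # vgscan
>>> # vgchange -ay VolGroup05
>>>
>>> -- but there is no VolGroup05 for vgchange to activate, as far as I
>>> can tell.)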
>>>
>>>
> _______________________________________________
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
Thread overview: 4+ messages
2006-10-31 22:56 [linux-lvm] 5 out of 6 Volumes Vanished! Mache Creeger
2006-11-01 17:40 ` Jonathan E Brassow
2006-11-01 22:31   ` Mache Creeger
2006-11-01 22:42     ` Jonathan E Brassow [this message]