From: Daniel Troeder <daniel@admin-box.com>
To: linux-lvm@redhat.com
Subject: [linux-lvm] Existing VG (created here) takes precedence over VG
Date: Sat, 22 Apr 2006 10:06:41 +0200 [thread overview]
Message-ID: <4449E411.7050505@admin-box.com> (raw)
[-- Attachment #1.1: Type: text/plain, Size: 5975 bytes --]
Hello :)
When shutting down my PC yesterday I saw LVM complaining about something,
but it scrolled by too fast to read... When booting today, all my LVs were
inaccessible. The problem seems to be an error in the metadata (?); this
is the first error message:
--------------------------------------------------------------------------
Apr 22 08:17:34 [lvm] WARNING: Duplicate VG name vg0: Existing
K0qKAk-Ph5i-BcAX-y4yp-SPF3-TZgj-DufR3L (created here) takes precedence
over K0qKAk-Ph5i-BcAX-y4yp-SPF3-TZgj-DufR3L
--------------------------------------------------------------------------
I googled for somebody with a similar problem, but found nothing...
After successfully scanning for my LVs (they are all there :) I tried
changing the PVs' UUIDs, which worked, but it never removed the old
reference to the VG's UUID (K0qKAk...). I even renamed the VG (vg0 ->
vg1), exported it, and imported it again... now all PVs and the VG have
different UUIDs than before, but I still can't get my device nodes, so I
cannot access my data.
The original UUID of the VG was "K0qKAk-Ph5i-BcAX-y4yp-SPF3-TZgj-DufR3L",
and "vg0" was its original name. Now it is "vg1" with UUID
"JtUveN-SL2N-cTrH-X2GI-NVux-3kYH-NJwRt5". Sorry... maybe I was a bit
desperate to get at my data when fooling around with the LVM tools...
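For reference, the sequence of commands I ran was roughly the following
(from memory, so the exact invocations may have differed slightly):

```shell
# Give each PV a freshly generated UUID
pvchange --uuid /dev/sda7
pvchange --uuid /dev/sda8
pvchange --uuid /dev/sda11

# Give the VG itself a new UUID
vgchange --uuid -v vg0

# Rename the VG, then export and (try to) re-import it
vgrename vg0 vg1
vgexport vg1
vgimport vg1
```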
--------------------------------------------------------------------------
akira ~ # lvm version
Logging initialised at Sat Apr 22 09:54:11 2006
Set umask to 0077
LVM version: 2.02.04 (2006-04-19)
Library version: 1.02.03 (2006-02-08)
Driver version: 4.5.0
Wiping internal VG cache
akira ~ # pvscan -u
Logging initialised at Sat Apr 22 09:46:30 2006
Set umask to 0077
Wiping cache of LVM-capable devices
Wiping internal VG cache
Walking through all physical volumes
WARNING: Duplicate VG name vg1: Existing
JtUveN-SL2N-cTrH-X2GI-NVux-3kYH-NJwRt5 takes precedence over exported
K0qKAk-Ph5i-BcAX-y4yp-SPF3-TZgj-DufR3L
WARNING: Duplicate VG name vg1: Existing
JtUveN-SL2N-cTrH-X2GI-NVux-3kYH-NJwRt5 (created here) takes precedence
over K0qKAk-Ph5i-BcAX-y4yp-SPF3-TZgj-DufR3L
PV /dev/sda7 with UUID ZKdlL6-18Z7-YX8T-bH2J-IS3u-azbG-bUMLel is in
exported VG vg1 [4.66 GB / 4.66 GB free]
PV /dev/sda8 with UUID ezIhm5-iaCJ-YhCl-Mtdg-2m66-z35A-9OdFJW is in
exported VG vg1 [4.66 GB / 4.66 GB free]
PV /dev/sda11 with UUID n8POw9-c8fg-E39z-whvX-9lgA-P2ls-8n1jMa is in
exported VG vg1 [56.76 GB / 20.76 GB free]
Total: 3 [66.09 GB] / in use: 3 [66.09 GB] / in no VG: 0 [0 ]
Wiping internal VG cache
akira ~ # vgscan -v
Logging initialised at Sat Apr 22 09:47:30 2006
Set umask to 0077
Wiping cache of LVM-capable devices
Wiping internal VG cache
Reading all physical volumes. This may take a while...
Finding all volume groups
WARNING: Duplicate VG name vg1: Existing
JtUveN-SL2N-cTrH-X2GI-NVux-3kYH-NJwRt5 takes precedence over exported
K0qKAk-Ph5i-BcAX-y4yp-SPF3-TZgj-DufR3L
WARNING: Duplicate VG name vg1: Existing
JtUveN-SL2N-cTrH-X2GI-NVux-3kYH-NJwRt5 (created here) takes precedence
over K0qKAk-Ph5i-BcAX-y4yp-SPF3-TZgj-DufR3L
Finding volume group "vg1"
Found exported volume group "vg1" using metadata type lvm2
Wiping internal VG cache
akira ~ # lvscan -a -v
Logging initialised at Sat Apr 22 09:22:16 2006
Set umask to 0077
Finding all logical volumes
WARNING: Duplicate VG name vg1: Existing
JtUveN-SL2N-cTrH-X2GI-NVux-3kYH-NJwRt5 takes precedence over exported
K0qKAk-Ph5i-BcAX-y4yp-SPF3-TZgj-DufR3L
WARNING: Duplicate VG name vg1: Existing
JtUveN-SL2N-cTrH-X2GI-NVux-3kYH-NJwRt5 (created here) takes precedence
over K0qKAk-Ph5i-BcAX-y4yp-SPF3-TZgj-DufR3L
inactive '/dev/vg1/portagehome' [10.00 GB] inherit
inactive '/dev/vg1/dists' [10.00 GB] inherit
inactive '/dev/vg1/rootext' [2.00 GB] inherit
inactive '/dev/vg1/rootvar' [1.00 GB] inherit
inactive '/dev/vg1/oslices' [4.00 GB] inherit
inactive '/dev/vg1/root2ext' [1.00 GB] inherit
inactive '/dev/vg1/verdicd' [3.00 GB] inherit
inactive '/dev/vg1/games' [5.00 GB] inherit
Wiping internal VG cache
--------------------------------------------------------------------------
I attached /etc/lvm/backup/vg1 to this mail.
I'm running stable Gentoo 2006.0 (but with glibc 2.4 and gcc 4.1.0) on an
Athlon XP, with a 3ware 6000 ATA RAID controller running a RAID-0 across
2 HDDs. Since it is hardware RAID, Linux knows nothing about it and sees
it as a single SCSI disk.
--------------------------------------------------------------------------
akira ~ # fdisk -l
Disk /dev/sda: 400.0 GB, 400097148928 bytes
255 heads, 63 sectors/track, 48642 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 1 125 1004031 82 Linux swap / Solaris
/dev/sda2 126 612 3911827+ c W95 FAT32 (LBA)
/dev/sda3 * 613 619 56227+ 83 Linux
/dev/sda4 620 48642 385744747+ f W95 Ext'd (LBA)
/dev/sda5 620 1471 6843658+ 83 Linux
/dev/sda6 1472 1958 3911796 b W95 FAT32
/dev/sda7 1959 2567 4891761 8e Linux LVM
/dev/sda8 2568 3176 4891761 8e Linux LVM
/dev/sda9 3177 3542 2939863+ 83 Linux
/dev/sda10 3543 41232 302744893+ 83 Linux
/dev/sda11 41233 48642 59520793+ 8e Linux LVM
--------------------------------------------------------------------------
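Since pvscan/vgscan still report the VG as "exported", I would expect the
following to bring everything back, but the duplicate-VG warning seems to
get in the way:

```shell
# Re-import the exported VG and activate its LVs
vgimport vg1
vgchange -ay vg1

# If the on-disk metadata itself is damaged, restoring from the
# attached backup file might be an alternative (untested here):
vgcfgrestore -f /etc/lvm/backup/vg1 vg1
```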
I'd appreciate your help!
Bye,
Daniel
--
use PGP key @ http://www.pgp.net/wwwkeys.html
gpg --recv-keys --keyserver hkp://subkeys.pgp.net 0xBB9D4887
[-- Attachment #1.2: vg1 --]
[-- Type: text/plain, Size: 3657 bytes --]
# Generated by LVM2: Sat Apr 22 09:17:29 2006
contents = "Text Format Volume Group"
version = 1
description = "Created *after* executing 'vgchange --uuid -v vg1'"
creation_host = "akira" # Linux akira 2.6.16-gentoo-r2-akira #1 Sun Apr 16 01:44:34 CEST 2006 i686
creation_time = 1145690249 # Sat Apr 22 09:17:29 2006
vg1 {
id = "JtUveN-SL2N-cTrH-X2GI-NVux-3kYH-NJwRt5"
seqno = 37
status = ["RESIZEABLE", "READ", "WRITE"]
extent_size = 8192 # 4 Megabytes
max_lv = 0
max_pv = 0
physical_volumes {
pv0 {
id = "ZKdlL6-18Z7-YX8T-bH2J-IS3u-azbG-bUMLel"
device = "/dev/sda7" # Hint only
status = ["ALLOCATABLE"]
pe_start = 384
pe_count = 1194 # 4.66406 Gigabytes
}
pv1 {
id = "ezIhm5-iaCJ-YhCl-Mtdg-2m66-z35A-9OdFJW"
device = "/dev/sda8" # Hint only
status = ["ALLOCATABLE"]
pe_start = 384
pe_count = 1194 # 4.66406 Gigabytes
}
pv2 {
id = "n8POw9-c8fg-E39z-whvX-9lgA-P2ls-8n1jMa"
device = "/dev/sda11" # Hint only
status = ["ALLOCATABLE"]
pe_start = 384
pe_count = 14531 # 56.7617 Gigabytes
}
}
logical_volumes {
portagehome {
id = "RRQB3v-5EJK-HbXN-DAOR-zoP3-8h8I-a9tmik"
status = ["READ", "WRITE", "VISIBLE"]
segment_count = 2
segment1 {
start_extent = 0
extent_count = 2048 # 8 Gigabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv2", 0
]
}
segment2 {
start_extent = 2048
extent_count = 512 # 2 Gigabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv2", 9984
]
}
}
dists {
id = "Gnb9R4-YExN-7lM1-FG1V-n9Do-QQ8d-kmEXQb"
status = ["READ", "WRITE", "VISIBLE"]
segment_count = 1
segment1 {
start_extent = 0
extent_count = 2560 # 10 Gigabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv2", 2048
]
}
}
rootext {
id = "MwUAhu-Xieg-Iytq-zh3e-EXFE-rkTt-OUJiGE"
status = ["READ", "WRITE", "VISIBLE"]
segment_count = 1
segment1 {
start_extent = 0
extent_count = 512 # 2 Gigabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv2", 6656
]
}
}
rootvar {
id = "Qdlvix-ZpL4-fC06-eidi-py0J-lUwQ-c2hH4T"
status = ["READ", "WRITE", "VISIBLE"]
segment_count = 1
segment1 {
start_extent = 0
extent_count = 256 # 1024 Megabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv2", 7168
]
}
}
oslices {
id = "QfPWAF-jJ0C-5Ik9-EdoJ-qBAD-sEta-dmH9iF"
status = ["READ", "WRITE", "VISIBLE"]
segment_count = 1
segment1 {
start_extent = 0
extent_count = 1024 # 4 Gigabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv2", 7424
]
}
}
root2ext {
id = "7eAhn2-poFV-rLW1-uIIE-pUf5-5srx-6nVVR3"
status = ["READ", "WRITE", "VISIBLE"]
segment_count = 1
segment1 {
start_extent = 0
extent_count = 256 # 1024 Megabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv2", 10496
]
}
}
verdicd {
id = "Up1Hm4-MNrq-34CS-X27E-QYC1-Duea-X8FVrE"
status = ["READ", "WRITE", "VISIBLE"]
segment_count = 1
segment1 {
start_extent = 0
extent_count = 768 # 3 Gigabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv2", 10752
]
}
}
games {
id = "9bpwz1-j4Bq-hymm-7GOF-xlTI-N7Ra-2xFeWg"
status = ["READ", "WRITE", "VISIBLE"]
segment_count = 1
segment1 {
start_extent = 0
extent_count = 1280 # 5 Gigabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv2", 11520
]
}
}
}
}
[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 191 bytes --]
Thread overview: 3+ messages
2006-04-22 8:06 Daniel Troeder [this message]
2006-04-22 17:21 ` [linux-lvm] Existing VG (created here) takes precedence over VG Daniel Troeder
2006-04-22 17:30 ` Simone Gotti