* [linux-lvm] Does pv failure effect whole vg?
@ 2007-06-19 6:39 Richard van den Berg
2007-06-20 17:57 ` Richard van den Berg
0 siblings, 1 reply; 8+ messages in thread
From: Richard van den Berg @ 2007-06-19 6:39 UTC (permalink / raw)
To: linux-lvm
In the case of a pv failure, will the vg that uses that pv become
unavailable? Or just the lv that uses the pv? How about at boot time? If
a pv is unavailable at boot, will the complete vg be unavailable? Or
just the lv using the pv?
I'm trying to decide if I should plan a reboot to split my current vg.
Currently my vg00 uses pv md2 (raid1 mirror) and sdb3. If sdb fails, I
don't want to lose the lvs using the mirror. Is a vg split required to
achieve this?
Sincerely,
Richard van den Berg
* Re: [linux-lvm] Does pv failure effect whole vg?
2007-06-19 6:39 [linux-lvm] Does pv failure effect whole vg? Richard van den Berg
@ 2007-06-20 17:57 ` Richard van den Berg
2007-06-20 21:36 ` Stuart D. Gathman
0 siblings, 1 reply; 8+ messages in thread
From: Richard van den Berg @ 2007-06-20 17:57 UTC (permalink / raw)
To: LVM general discussion and development
Richard van den Berg wrote:
> In the case of a pv failure, will the vg that uses that pv become
> unavailable? Or just the lv that uses the pv? How about at boot time?
> If a pv is unavailable at boot, will the complete vg be unavailable?
> Or just the lv using the pv?
To answer my own question: when a pv is not available at boot time, the
vg using that pv does not come up. So splitting vgs makes sense when you
want to minimize the impact of one disk failure.
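For reference, this can be seen from a rescue shell with the standard LVM2 tools (vg/device names below are from my setup; the --partial flag activates what it can when a PV is missing, so use it with care):

```shell
# See which PVs and VGs the tools can find right now
pvscan
vgscan

# With a PV missing, a plain activation fails:
vgchange -ay vg00

# Allow partial activation so lvs whose extents are all on
# present PVs can still come up:
vgchange -ay --partial vg00
```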
Sincerely,
Richard van den Berg
* Re: [linux-lvm] Does pv failure effect whole vg?
2007-06-20 17:57 ` Richard van den Berg
@ 2007-06-20 21:36 ` Stuart D. Gathman
2007-06-21 7:43 ` Richard van den Berg
2007-06-22 3:14 ` f-lvm
0 siblings, 2 replies; 8+ messages in thread
From: Stuart D. Gathman @ 2007-06-20 21:36 UTC (permalink / raw)
To: LVM general discussion and development
On Wed, 20 Jun 2007, Richard van den Berg wrote:
> To answer my own question: when a pv is not available at boot time, the
> vg using that pv does not come up. So splitting vgs makes sense when you
> want to minimize the impact of one disk failure.
Hmmm. On AIX LVM, vgs still boot when physical volumes fail, provided
there is a "quorum". The metadata is redundantly stored on all PVs,
so a "quorum" means that more than half of the metadata copies
are available and at the same version.
I think Linux LVM stores only metadata for that PV on a PV, but there
is a backup in /etc/lvm.
If the system truly won't boot with a failed disk, that kind of adds another
reason why current LVM mirroring support is useless.
If you build your VG on raid (md devices or hardware), you solve
the problem for now.
--
Stuart D. Gathman <stuart@bmsi.com>
Business Management Systems Inc. Phone: 703 591-0911 Fax: 703 591-6154
"Confutatis maledictis, flammis acribus addictis" - background song for
a Microsoft sponsored "Where do you want to go from here?" commercial.
* Re: [linux-lvm] Does pv failure effect whole vg?
2007-06-20 21:36 ` Stuart D. Gathman
@ 2007-06-21 7:43 ` Richard van den Berg
2007-06-21 14:45 ` Stuart D. Gathman
2007-06-22 3:14 ` f-lvm
1 sibling, 1 reply; 8+ messages in thread
From: Richard van den Berg @ 2007-06-21 7:43 UTC (permalink / raw)
To: LVM general discussion and development
Stuart D. Gathman wrote:
> If the system truly won't boot with a failed disk, that kind of adds another
> reason why current LVM mirroring support is useless.
>
> If you build your VG on raid (md devices or hardware), you solve
> the problem for now.
>
That is what I am doing. The test I did was:
lv00 and lv01 are part of vg00 which consists of /dev/md2 and /dev/sdb3
When booting from cdrom (systemrescuecd) md2 failed to come up, so vg00
did not come up. I then manually brought up md2 and split the vg, and
added sda3 in the mix:
lv00 is part of vg00 which consists of /dev/md2
lv01 is part of vg01 which consists of /dev/sdb3 and /dev/sda3
At the next reboot from cdrom vg01 came up fine, vg00 did not.
At least after the split if sda or sdb fails, md2 should still come up
and so should vg00, but I will lose vg01. Keeping all lvs in vg00
would have given me an unbootable system after 1 disk failure.
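For anyone wanting to do a similar split, the steps were roughly as follows (a sketch using the names from my layout above; the pvmove is only needed if lv01 has extents on the PV that stays behind):

```shell
# Make sure lv01's extents live only on the PV that will move out
pvmove -n lv01 /dev/md2 /dev/sdb3   # only if needed

# vgsplit requires the lvs on the moving PVs to be inactive
lvchange -an vg00/lv01

# Move sdb3 (and the lvs on it) out of vg00 into a new vg01
vgsplit vg00 vg01 /dev/sdb3

# Add sda3 to the new vg and reactivate
pvcreate /dev/sda3
vgextend vg01 /dev/sda3
lvchange -ay vg01/lv01
```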
Sincerely,
Richard van den Berg
PS: I still think it sucks that I could not split the vg online.
* Re: [linux-lvm] Does pv failure effect whole vg?
2007-06-21 7:43 ` Richard van den Berg
@ 2007-06-21 14:45 ` Stuart D. Gathman
2007-06-21 16:31 ` Richard van den Berg
0 siblings, 1 reply; 8+ messages in thread
From: Stuart D. Gathman @ 2007-06-21 14:45 UTC (permalink / raw)
To: LVM general discussion and development
On Thu, 21 Jun 2007, Richard van den Berg wrote:
> When booting from cdrom (systemrescuecd) md2 failed to come up, so vg00
> did not come up. I then manually brought up md2 and split the vg, and
> added sda3 in the mix:
When md devices haven't come up for me, it has been one of the following:
1) I forgot to make the partition type fd - RAID autodetect
2) The raid1 (or other) driver was not in the initrd. mkinitrd
does not always add it when it needs to (it wants to see
/etc/mdadm.conf or /etc/raidtab).
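Quick checks for those two cases (device names are examples; the module name depends on the raid level in use):

```shell
# 1) Partition type should be fd (Linux raid autodetect)
fdisk -l /dev/sda

# 2) The raid driver must be available before / is mounted
lsmod | grep raid1          # after a successful boot

# Regenerating mdadm.conf before rebuilding the initrd helps
# mkinitrd pick the module up:
mdadm --detail --scan >> /etc/mdadm.conf
mkinitrd /boot/initrd-$(uname -r).img $(uname -r)
```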
--
Stuart D. Gathman <stuart@bmsi.com>
Business Management Systems Inc. Phone: 703 591-0911 Fax: 703 591-6154
"Confutatis maledictis, flammis acribus addictis" - background song for
a Microsoft sponsored "Where do you want to go from here?" commercial.
* Re: [linux-lvm] Does pv failure effect whole vg?
2007-06-21 14:45 ` Stuart D. Gathman
@ 2007-06-21 16:31 ` Richard van den Berg
0 siblings, 0 replies; 8+ messages in thread
From: Richard van den Berg @ 2007-06-21 16:31 UTC (permalink / raw)
To: LVM general discussion and development
Stuart D. Gathman wrote:
> When md devices haven't come up, it has been one of the following:
>
I think in this case it has to do with lvm being started before mdadm on
the systemrescuecd. I'll start a thread on their support site.
Sincerely,
Richard van den Berg
* [linux-lvm] Does pv failure effect whole vg?
2007-06-20 21:36 ` Stuart D. Gathman
2007-06-21 7:43 ` Richard van den Berg
@ 2007-06-22 3:14 ` f-lvm
2007-06-24 15:43 ` Nix
1 sibling, 1 reply; 8+ messages in thread
From: f-lvm @ 2007-06-22 3:14 UTC (permalink / raw)
To: linux-lvm
Date: Wed, 20 Jun 2007 17:36:23 -0400 (EDT)
From: "Stuart D. Gathman" <stuart@bmsi.com>
On Wed, 20 Jun 2007, Richard van den Berg wrote:
> To answer my own question: when a pv is not available at boot time, the
> vg using that pv does not come up. So splitting vgs makes sense when you
> want to minimize the impact of one disk failure.
Hmmm. On AIX LVM, vgs still boot when physical volumes fail, provided
there is a "quorum". The metadata is redundantly stored on all PVs,
so a "quorum" means that more than half of the metadata copies
are available and at the same version.
I think Linux LVM stores only metadata for that PV on a PV, but there
is a backup in /etc/lvm.
If the system truly won't boot with a failed disk, that kind of adds another
reason why current LVM mirroring support is useless.
I can also confirm that Linux LVM won't activate a VG that's missing
PVs. Under Ubuntu Breezy, I had a VG divided into one tiny LV for
the usual OS dirs (/bin /etc /home /var etc etc) and one large LV
(spanning both the boot disk and a second disk) for a single large
filesystem. While I was doing some hardware reconfiguration (details
irrelevant), I tried booting with the second disk disconnected, and
LVM couldn't activate any of the VG, including the LV that held the OS
itself, even though that LV definitely didn't cross into the second
disk. (It was created months before the second disk was added, was
never extended, and thus consisted of a single PV.)
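Which PVs an LV actually has extents on can be checked with the LVM2 reporting tools (vg/lv names below are examples):

```shell
# Show which devices each lv has extents on
lvs -o lv_name,vg_name,devices

# Per-segment mapping for one lv
lvdisplay -m /dev/vg00/lv00
```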
Once I discovered that, and since I was reconfiguring how its disks
worked anyway, I started over and created two VGs. The first held a
single LV, which held the OS, and the second VG held another LV, which
spans both disks and holds the large filesystem. -That- will boot
with the second disk disconnected: the first VG is activated and the
OS boots, and the second VG won't activate (unless, I assume, I force
it with --partial) because it's missing one of its PVs, but since the
filesystem that the OS is on isn't in that VG, at least the machine
boots.
(Yes, it'd probably be possible to make my own initrd that did
everything the normal one does but supplies --partial in the right
place, but that'd be a total pain to keep up-to-date across every
automatic kernel update, etc. Given that I knew I'd be blowing away
the entire disk structure and starting over, it was far easier to just
create two VGs; presumably there'd have been some sneaky way [not using
the normal LVM API] to have changed the on-disk data structures in-place
if I'd been desperate, since in theory nothing about the size or placement
of the LVs would necessarily have changed.)
* Re: [linux-lvm] Does pv failure effect whole vg?
2007-06-22 3:14 ` f-lvm
@ 2007-06-24 15:43 ` Nix
0 siblings, 0 replies; 8+ messages in thread
From: Nix @ 2007-06-24 15:43 UTC (permalink / raw)
To: LVM general discussion and development
On 22 Jun 2007, f-lvm@media.mit.edu said:
> (Yes, it'd probably be possible to make my own initrd that did
> everything the normal one does but supplies --partial in the right
> place, but that'd be a total pain to keep up-to-date across every
> automatic kernel update, etc.
The solution there is to use an initramfs instead, which can be built
automatically by the kernel build system and linked into the kernel.
That leaves essentially no keeping-up-to-date woes: the kernel plus
everything needed to mount / becomes a self-contained unit again.
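Assuming a 2.6-era kernel, the built-in initramfs is driven by a config option pointing at a directory or a gen_init_cpio spec file, e.g.:

```
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE="/path/to/initramfs-spec"
```

where the spec file lists entries in gen_init_cpio format (dir/nod/file lines), so the archive is rebuilt automatically on every kernel build.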
--
`... in the sense that dragons logically follow evolution so they would
be able to wield metal.' --- Kenneth Eng's colourless green ideas sleep
furiously