linux-lvm.redhat.com archive mirror
* [linux-lvm] LVM High Availability
@ 2000-09-27 19:57 Michael J Kellen
  2000-09-28  4:59 ` Michael Lausch
  0 siblings, 1 reply; 9+ messages in thread
From: Michael J Kellen @ 2000-09-27 19:57 UTC (permalink / raw)
  To: linux-lvm

From: Mats Wichmann <mats@laplaza.org>

> Does anybody have any experience to contribute about what other
> LVM systems do in these situations?  They don't have the dynamic
> scan/assignment stuff in Solaris or AIX, right?  (Showing my
> ignorance here).

The logical volume manager in Tru64 (aka Digital Unix, aka OSF/1)
from the Q (compaQ) handles high availability by packaging the
volumes in disk groups.  A given disk group may only be imported
into one of the machines on the shared bus.  Since this is basically
a relicensed Veritas (I think), I expect similar strategies in other
environments.


Michael Kellen

^ permalink raw reply	[flat|nested] 9+ messages in thread

* [linux-lvm] LVM High Availability
  2000-09-27 19:57 [linux-lvm] LVM High Availability Michael J Kellen
@ 2000-09-28  4:59 ` Michael Lausch
  0 siblings, 0 replies; 9+ messages in thread
From: Michael Lausch @ 2000-09-28  4:59 UTC (permalink / raw)
  To: Michael J Kellen; +Cc: linux-lvm

Michael J Kellen writes:
 > From: Mats Wichmann <mats@laplaza.org>
 > 
 > > Does anybody have any experience to contribute about what other
 > > LVM systems do in these situations?  They don't have the dynamic
 > > scan/assignment stuff in Solaris or AIX, right?  (Showing my
 > > ignorance here).
 > 
 > The logical volume manager in Tru64 (aka Digital Unix, aka OSF/1)
 > from the Q (compaQ) handles high availability by packaging the
 > volumes in disk groups.  A given disk group may only be imported
 > into one of the machines on the shared bus.  Since this is basically
 > a relicensed Veritas (I think), I expect similar strategies in other
 > environments.

Veritas fails miserably if you are using FCAL, since SCSI reserve does not
work on our FCAL. It messed up a 20GB mail spool partition. It's no problem
with Veritas to import a disk group on 2 machines if you are _not_ using SCSI.

^ permalink raw reply	[flat|nested] 9+ messages in thread

* [linux-lvm] LVM high availability
@ 2013-06-13 16:16 lihuiba
  2013-06-14 15:22 ` Digimer
  0 siblings, 1 reply; 9+ messages in thread
From: lihuiba @ 2013-06-13 16:16 UTC (permalink / raw)
  To: linux-lvm

[-- Attachment #1: Type: text/plain, Size: 380 bytes --]

Please consider the following case:
there are multiple storage servers each configured as LVM on top of a RAID,
and the servers are connected by DRBD to synchronize the RAIDs.


The question is
1) can I safely activate a read-only volume on all of the servers?
2) can I safely activate a read-write volume on a randomly chosen server?
3) how about thin pool and volumes?


Thank you!

[-- Attachment #2: Type: text/html, Size: 563 bytes --]

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [linux-lvm] LVM high availability
  2013-06-13 16:16 [linux-lvm] LVM high availability lihuiba
@ 2013-06-14 15:22 ` Digimer
  2013-06-21  6:44   ` Marian Csontos
  0 siblings, 1 reply; 9+ messages in thread
From: Digimer @ 2013-06-14 15:22 UTC (permalink / raw)
  To: LVM general discussion and development; +Cc: lihuiba

On 06/13/2013 12:16 PM, lihuiba wrote:
> Please consider the following case:
> there are multiple storage servers each configured as LVM on top of a RAID,
> and the servers are connected by DRBD to synchronize the RAIDs.
>
> The question is
> 1) can I safely activate a read-only volume on all of the servers?

Depends on your file system. GFS2 and OCFS2 are fine; most others are not.
In either case, you can't mount a Secondary DRBD resource no matter what.
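
To illustrate the DRBD side of that (resource and volume names below are made
up for this example, not from the thread):

    drbdadm role r0             # e.g. "Secondary/Primary"; Secondary is unusable
    drbdadm primary r0          # promote this node (refused while the peer is
                                #   Primary, unless allow-two-primaries is set)
    vgchange -ay vg_data        # activate the VG sitting on top of /dev/drbd0
    mount -o ro /dev/vg_data/lv0 /mnt   # even a read-only mount needs Primary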

> 2) can I safely activate a read-write volume on a randomly chosen server?

You need a cluster-aware file system like GFS2 and, in the context of 
LVM, you need clustered VGs and LVs, which requires clvmd to be running.
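
Roughly, assuming clvmd is already running on every node (the VG/LV names are
made up):

    vgcreate --clustered y vg_shared /dev/drbd0    # create the VG as clustered
    lvcreate -L 100G -n lv_gfs2 vg_shared
    mkfs.gfs2 -p lock_dlm -t mycluster:gfs0 -j 2 /dev/vg_shared/lv_gfs2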

> 3) how about thin pool and volumes?

I've not played with these.

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without 
access to education?

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [linux-lvm] LVM high availability
  2013-06-14 15:22 ` Digimer
@ 2013-06-21  6:44   ` Marian Csontos
  2013-06-21  7:32     ` matthew patton
  2013-06-23 14:32     ` lihuiba
  0 siblings, 2 replies; 9+ messages in thread
From: Marian Csontos @ 2013-06-21  6:44 UTC (permalink / raw)
  To: LVM general discussion and development; +Cc: Digimer, lihuiba

On 06/14/2013 05:22 PM, Digimer wrote:
> On 06/13/2013 12:16 PM, lihuiba wrote:
>> Please consider the following case:
>> there are multiple storage servers each configured as LVM on top of a
>> RAID,
>> and the servers are connected by DRBD to synchronize the RAIDs.
>>
>> The question is
>> 1) can I safely activate a read-only volume on all of the servers?
>
> Depends on your file system. GFS2 and OCFS2 are fine; most others are not.
> In either case, you can't mount a Secondary DRBD resource no matter what.

Adjustment: As long as the volume (actually the whole volume group!) is 
read-only everywhere, a cluster FS is not necessary; it behaves just as if 
it were not shared at all.

However! In the case of an LVM mirror, should a leg fail, you will need R/W 
access to the VG to recover. It seems that is not your case here, though.
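
For illustration, the kind of R/W recovery meant here, with made-up names and
assuming the failed disk has been physically replaced:

    pvcreate /dev/sdX                  # prepare the replacement disk
    vgextend vg0 /dev/sdX              # add it to the VG -- needs writable metadata
    lvconvert --repair vg0/lv_mirror   # rebuild the failed mirror leg
    vgreduce --removemissing vg0       # drop the dead PV from the VG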

But once you need to make any changes to the VG, you will either need 
cluster-locking or, *at your own risk*, act as the cluster lock yourself 
(a rough command sketch follows this list):

1. ensure only one node modifies the VG during the following steps,
2. rescan the VG on that node before any operation,
3. do the dirty work, and
4. rescan the VG on the other nodes.

(Not 100% sure about above. I hope someone will correct me if I am wrong.)
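
A rough command sketch of those steps, using a made-up VG "vg0" (same caveats
as above):

    # on the single node allowed to change the VG:
    vgscan                          # step 2: re-read the metadata from disk
    lvextend -L +10G vg0/lv_data    # step 3: the actual change (any VG/LV change)

    # afterwards, on every other node:
    vgscan                          # step 4: pick up the new metadata
    lvchange --refresh vg0/lv_data  # reload the mapping if the LV is active there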

DISCLAIMER: Without cluster-locking, LVM cannot ensure an LV is not active 
R/W elsewhere. Still, there are always users trying to use LVM in a cluster 
setup without cluster-locking, developing new and creative ways to destroy 
their data. If you still want to do it, please state so clearly in any 
bug reports.

>
>> 2) can I safely activate a read-write volume on a randomly chosen server?
>
> You need a cluster-aware files system like GFS2 and, in the context of
> LVM, you need clustered VGs and LVs which requires clvmd to be running.

You can safely activate the LV *exclusively* on any single node, regardless 
of the FS. This means it cannot be active anywhere else at all.
Mounting the FS read-only elsewhere is likely to serve you corrupted data, 
as for example any caches may become invalid.

If you want it active R/W while it is also active elsewhere, even in 
read-only mode, you will need a cluster-aware FS like the GFS2 mentioned above.

Again, remember that any operation like growing an LV, mirror recovery, or 
pvmove needs R/W access to the VG and thus requires cluster-locking.
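
For example, exclusive activation with clvmd running (names made up):

    lvchange -aey vg_shared/lv_data   # exclusive activation; refused if the LV
                                      #   is already active on another node
    lvchange -an  vg_shared/lv_data   # deactivate before another node takes over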

>
>> 3) how about thin pool and volumes?
>

A thin pool may be active on a single node only. In a cluster, only exclusive 
activation is allowed, and the pool cannot be activated elsewhere even 
"read-only".

It should work fine without cluster locking while read-only everywhere, 
except that you cannot enforce read-only on the pool. (There is a bug open 
for that.)

Attempting to write to a pool that is active elsewhere may render the pool's 
view corrupted on the other nodes. So before writing to the pool, you should 
at least deactivate the pool and all thin volumes on all other nodes, and 
reactivate them afterwards (a rough sketch follows).
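
Spelled out as commands (pool and volume names are made up; this is not a
supported shared setup, just the deactivate/write/reactivate sequence):

    # on every other node first:
    lvchange -an vg0/thin1            # deactivate all thin volumes ...
    lvchange -an vg0/tpool            # ... then the pool itself

    # on the single node that will write:
    lvchange -ay vg0/tpool vg0/thin1
    # ... do the writes, then let the other nodes rescan and activate again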

-- Marian

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [linux-lvm] LVM high availability
  2013-06-21  6:44   ` Marian Csontos
@ 2013-06-21  7:32     ` matthew patton
  2013-06-23 14:32     ` lihuiba
  1 sibling, 0 replies; 9+ messages in thread
From: matthew patton @ 2013-06-21  7:32 UTC (permalink / raw)
  To: LVM general discussion and development

If you turn off caching in lvm.conf, you don't have to explicitly call vgscan 
or lvscan, since the scan happens automatically. However, any operation that 
changes LV or VG metadata must come from a single point of entry. That's what 
XenServer does: it elects an "LVM master", and only that host is allowed to 
issue VG/LV commands. I believe it's worthwhile to get the full cluster stack 
running on your storage nodes, but you can probably get away with something 
far lighter like CARP/VRRP.
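
If I read that right, the caching in question is LVM's persistent device
cache; something like this in /etc/lvm/lvm.conf (guessing at the intent):

    devices {
        write_cache_state = 0     # do not persist the device cache file
    }
    # then remove the stale cache once:  rm /etc/lvm/cache/.cache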

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [linux-lvm] LVM high availability
  2013-06-21  6:44   ` Marian Csontos
  2013-06-21  7:32     ` matthew patton
@ 2013-06-23 14:32     ` lihuiba
  2013-06-24  8:21       ` Zdenek Kabelac
  1 sibling, 1 reply; 9+ messages in thread
From: lihuiba @ 2013-06-23 14:32 UTC (permalink / raw)
  To: LVM general discussion and development; +Cc: Digimer, Marian Csontos

[-- Attachment #1: Type: text/plain, Size: 904 bytes --]

>>> 3) how about thin pool and volumes?
>>
>
>Thin-pool may be active on single-node only. In cluster only exclusive 
>activation is allowed and can not be activated elsewhere even "read-only".
>
>It should work fine without cluster locking while read-only everywhere, 
>except you can not enforce read-only on the pool. (There is a bug for that.)
>
>Attempting to write to pool active elsewhere, may render the pool's view 
>corrupted on other nodes. So when writing to pool, you should at least 
>deactivate the pool and all thin volumes on all other nodes, and 
>reactivate afterwards.

I can't figure out why thin volumes cannot be active on multiple nodes, even 
read-only. It is intuitive that activating thin volumes in read-only mode 
will change neither their own metadata nor the pool's. If I "force" the thin 
volumes to be active by, for example, changing the source code, what errors 
will occur?



[-- Attachment #2: Type: text/html, Size: 1625 bytes --]

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [linux-lvm] LVM high availability
  2013-06-23 14:32     ` lihuiba
@ 2013-06-24  8:21       ` Zdenek Kabelac
  2013-06-27  3:19         ` Huiba Li
  0 siblings, 1 reply; 9+ messages in thread
From: Zdenek Kabelac @ 2013-06-24  8:21 UTC (permalink / raw)
  To: LVM general discussion and development; +Cc: Digimer, lihuiba, Marian Csontos

Dne 23.6.2013 16:32, lihuiba napsal(a):
>
>>>> 3) how about thin pool and volumes?
>>>
>>
>>Thin-pool may be active on single-node only. In cluster only exclusive
>>activation is allowed and can not be activated elsewhere even "read-only".
>>
>>It should work fine without cluster locking while read-only everywhere,
>>except you can not enforce read-only on the pool. (There is a bug for that.)
>>
>>Attempting to write to pool active elsewhere, may render the pool's view
>>corrupted on other nodes. So when writing to pool, you should at least
>>deactivate the pool and all thin volumes on all other nodes, and
>>reactivate afterwards.
>
> I can't figure out why thin volumes cannot be active on multiple nodes, even read-only.
>
> It is intuitive that activating thin volumes in read-only mode will change neither
> their own metadata nor the pool's.
>
> If I "force" the thin volumes to be active by, for example, changing the source code,
> what errors will occur?


The lvm2 code currently doesn't support activation of all thin-pool-related 
LVs in read-only mode. There is some work being done in this area, but it is 
still not finished.

Zdenek

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [linux-lvm] LVM high availability
  2013-06-24  8:21       ` Zdenek Kabelac
@ 2013-06-27  3:19         ` Huiba Li
  0 siblings, 0 replies; 9+ messages in thread
From: Huiba Li @ 2013-06-27  3:19 UTC (permalink / raw)
  To: LVM general discussion and development, Zdenek Kabelac
  Cc: Digimer, Marian Csontos

[-- Attachment #1: Type: text/plain, Size: 1630 bytes --]

Is there a release schedule? I'm looking forward to this feature.

Zdenek Kabelac <zkabelac@redhat.com>写到:
>Dne 23.6.2013 16:32, lihuiba napsal(a):
>>
>>>>> 3) how about thin pool and volumes?
>>>>
>>>
>>> Thin-pool may be active on single-node only. In cluster only exclusive
>>> activation is allowed and can not be activated elsewhere even "read-only".
>>>
>>> It should work fine without cluster locking while read-only everywhere,
>>> except you can not enforce read-only on the pool. (There is a bug for that.)
>>>
>>> Attempting to write to pool active elsewhere, may render the pool's view
>>> corrupted on other nodes. So when writing to pool, you should at least
>>> deactivate the pool and all thin volumes on all other nodes, and
>>> reactivate afterwards.
>>
>> I can't figure out why thin volumes cannot be active on multiple nodes, even read-only.
>>
>> It is intuitive that activating thin volumes in read-only mode will change neither
>> their own metadata nor the pool's.
>>
>> If I "force" the thin volumes to be active by, for example, changing the source code,
>> what errors will occur?
>
> The lvm2 code currently doesn't support activation of all thin-pool-related LVs
> in read-only mode. There is some work being done in this area, but it is still
> not finished.
>
> Zdenek
>
> _______________________________________________
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

-- 
Sorry, I can't go into detail for now. This mail was sent from an Android mobile device with K-9 Mail installed.

[-- Attachment #2: Type: text/html, Size: 2601 bytes --]

^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread, other threads:[~2013-06-27  3:19 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-06-13 16:16 [linux-lvm] LVM high availability lihuiba
2013-06-14 15:22 ` Digimer
2013-06-21  6:44   ` Marian Csontos
2013-06-21  7:32     ` matthew patton
2013-06-23 14:32     ` lihuiba
2013-06-24  8:21       ` Zdenek Kabelac
2013-06-27  3:19         ` Huiba Li
  -- strict thread matches above, loose matches on Subject: below --
2000-09-27 19:57 [linux-lvm] LVM High Availability Michael J Kellen
2000-09-28  4:59 ` Michael Lausch
