linux-lvm.redhat.com archive mirror
* [linux-lvm] Is cLVM necessary when accessing different logical volumes on a shared iSCSI target?
@ 2013-07-02 17:27 Christian Schröder
  2013-07-03  2:12 ` matthew patton
  0 siblings, 1 reply; 7+ messages in thread
From: Christian Schröder @ 2013-07-02 17:27 UTC (permalink / raw)
  To: linux-lvm

Hi list!
I have an iSCSI server (using LIO) which provides one LUN. Two client
machines each connect this same LUN. From the client's perspective, let's
say it shows up as device /dev/sda. I have then in one client created a
partition /dev/sda1 with type 8e (LVM). The partition immediately shows up
in the other client.
After that, I have created a physical LVM volume (pvcreate /dev/sda1) and a
volume group (vgcreate shared /dev/sda1). Note that I have not enabled
cluster mode for this volume group. Again, both the physical volume and the
volume group are immediately visible to the other client.
When I create in one client a logical volume in this volume group, it is
visible to the other client, but it is unavailable. This works in both
directions, i.e. I can create a logical volume in one client or the other,
and it is always visible, but unavailable to the other client.
So far, everything seems to work fine. Now my question is: Is this setup
sufficient or do I need cLVM? I have tried to install cLVM, but had some
problems, because I do not have (and probably do not need?) a cluster
manager.
Could anybody explain when cLVM is needed in general and if it is necessary
for me?

Kind regards,
Christian

------------------------------------------------------------
Deriva GmbH Financial IT and Consulting
Christian Schröder
Geschäftsführer
Hans-Böckler-Straße 2 | D-37079 Göttingen
Tel: +49 (0)551 489 500-42
Fax: +49 (0)551 489 500-91
http://www.deriva.de

Amtsgericht Göttingen | HRB 3240
Geschäftsführer: Dirk Baule, Christian Schröder


* Re: [linux-lvm] Is cLVM necessary when accessing different logical volumes on a shared iSCSI target?
  2013-07-02 17:27 [linux-lvm] Is cLVM necessary when accessing different logical volumes on a shared iSCSI target? Christian Schröder
@ 2013-07-03  2:12 ` matthew patton
  2013-07-03 13:07   ` Christian Schröder
  0 siblings, 1 reply; 7+ messages in thread
From: matthew patton @ 2013-07-03  2:12 UTC (permalink / raw)
  To: LVM general discussion and development

> directions, i.e. I can create a logical volume in one client or the other,
> and it is always visible, but unavailable to the other client.
> So far, everything seems to work fine. Now my question is: Is this setup
> sufficient or do I need cLVM?

You don't need cLVM as long as you never screw up. It's been a while since I played with just such a setup, but:

1) edit lvm.conf and disable all metadata caching
2) edit lvm.conf and set the locking style to '4' and set wait_for_locks=0

"locking_type - What type of locking to use. 1 is the default, which uses flocks on files in locking_dir (see below) to avoid conflicting LVM2 commands running concurrently on a single machine. 0 disables locking and risks corrupting your metadata. If set to 2, the tools will load the external locking_library (see below). If the tools were configured --with-cluster=internal (the default) then 3 means to use built-in cluster-wide locking. Type 4 enforces read-only metadata and forbids any operations that might want to modify Volume Group metadata. All changes to logical volumes and their states are communicated using locks.
wait_for_locks - When set to 1, the default, the tools wait if a lock request cannot be satisfied immediately. When set to 0, the operation is aborted instead."
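As a sketch, the two changes above might look like this in lvm.conf (section placement follows the stock file layout; treat it as illustrative, not a tested configuration):

```
# /etc/lvm/lvm.conf (fragment, illustrative)
devices {
    write_cache_state = 0   # don't persist the device cache to disk
}
global {
    locking_type = 4        # read-only metadata: forbid VG metadata changes
    wait_for_locks = 0      # abort instead of waiting on an unavailable lock
}
```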

You can approximate cLVM by using a lightweight quorum daemon like CARP and issuing your LVM commands by SSHing to the host that holds the shared virtual IP (just an alias on one of your 2 or N boxes). You could even, half in jest, make it a shell function/alias named 'clvm' which uses an alternate lvm.conf that has locking_type=1.

The main complication is with (de)activating an LV and assigning it to the other host. I forget what happens if you do an 'lvchange -ay' on a common LV when the other host has already done so. In any event you'll want to make sure all LVs are NOT automatically activated (the default) on bootup.

The best option is to implement your own locking module (i.e. repurpose the negotiation/handshake/quorum code from e.g. CARP); your locking module just checks "am I master? T or F" and punts if not. With more than a handful of hosts I would spend the energy getting cLVM working, though it took me way longer than it should have to get things to stop hanging for all eternity (kernel messages about stalled threads and whatnot) and to behave correctly even if I willy-nilly power-reset the boxes.
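The "am I master?" gate described above could be sketched as follows (purely illustrative: the flag file stands in for real CARP/quorum state, and its path and the env variable name are invented):

```python
import os

# Hypothetical path written by an external quorum daemon (e.g. on CARP
# state transitions); it exists only on the node holding the master role.
MASTER_FLAG = os.environ.get("LVM_MASTER_FLAG", "/run/lvm-master")

def is_master() -> bool:
    """Return True only on the node currently holding the master role."""
    return os.path.exists(MASTER_FLAG)

def guarded(cmd):
    """Run a metadata-changing operation, or punt if we are not master."""
    if not is_master():
        raise PermissionError("not master; refusing LVM metadata change")
    return cmd()
```

A wrapper like this would sit between the operator and the actual lvm invocation; the punt path is the whole point, since a non-master node must never touch shared VG metadata.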


* Re: [linux-lvm] Is cLVM necessary when accessing different logical volumes on a shared iSCSI target?
  2013-07-03  2:12 ` matthew patton
@ 2013-07-03 13:07   ` Christian Schröder
  2013-07-03 13:58     ` matthew patton
  0 siblings, 1 reply; 7+ messages in thread
From: Christian Schröder @ 2013-07-03 13:07 UTC (permalink / raw)
  To: linux-lvm

Hi Matthew,
thanks a lot for your helpful comments. Unfortunately, it seems that I did not describe in enough
detail what I want to achieve ...

 >> and it is always visible, but unavailable to the other client.
 >> So far, everything seems to work fine. Now my question is: Is this setup
 >> sufficient or do I need cLVM?
 > You don't need cLVM just as long as you never screw up. It's been a while since I played with just such a setup but,
 >
 > 1) edit lvm.conf and disable all metadata caching

I have searched the lvm documentation, but could not find anything about metadata caching.
I did find some notes about the Metadata Daemon (lvmetad), but it does not seem to be used by default.
Could you give me a hint which setting should be used to disable caching?

 > 2) edit lvm.conf and set the locking style to '4' and set wait_for_locks=0

The clients have a local volume group besides the shared one. Setting the locking style to 4 prevents
modification of this local volume group, too, so it does not seem to be an option.

 > The main complication is with (de)activating LV and assigning it to the other host. I forget what happens if you do an 'lvchange -ay ' on a common LV and the other host has already done so. In any event you'll want to make sure all LVs are NOT automatically activated (default) on bootup.

Each LV is statically assigned to one single host. I do not plan to detach a LV from one host and assign
it to the other. I simply want to give the same (large) volume group to both hosts so each of them can
create its own (private) LVs in it.

 > The best option is to implement your own locking module (ie. repurpose the negotiation/handshake/quorum code from eg. CARP) and your locking module just checks to see if "am I master? T or F" and punts if not. More than a handful of hosts and I would spend the energy getting cLVM working. Though it took me way longer than it should have to get things to stop hanging for all eternity (kernel messages about stalled threads and what not) and for it to behave correctly even if I willy-nilly power-reset the boxes.

As I tried to describe above, the hosts do not behave as master or slave. That's why I would not actually
talk about a cluster. Instead, the iSCSI target should behave like some kind of network storage (like,
let's say, NetApp) that provides a huge disk (i.e. volume group) in which the clients can create their LVs
without any further dependencies between each other.
Is this possible with or without cLVM?

Regards,
Christian

-- 
Deriva GmbH                         Tel.: +49 551 489500-42
Financial IT and Consulting         Fax:  +49 551 489500-91
Hans-Böckler-Straße 2                  http://www.deriva.de
D-37079 Göttingen

Amtsgericht Göttingen | HRB 3240
Geschäftsführer: Dirk Baule, Christian Schröder
Deriva CA Certificate: http://www.deriva.de/deriva-ca.cer


* Re: [linux-lvm] Is cLVM necessary when accessing different logical volumes on a shared iSCSI target?
  2013-07-03 13:07   ` Christian Schröder
@ 2013-07-03 13:58     ` matthew patton
  2013-07-06 12:47       ` Christian Schröder
  0 siblings, 1 reply; 7+ messages in thread
From: matthew patton @ 2013-07-03 13:58 UTC (permalink / raw)
  To: LVM general discussion and development

>> 1) edit lvm.conf and disable all metadata caching
>
> I have searched the lvm documentation, but could not find anything about
> metadata caching.
write_cache_state = 0
use_lvmetad = 0


I also move the cache_dir out of /etc to an ephemeral location. /var/lock is tmpfs on my machines.
cache_dir = "/var/lock/subsys"


>> 2) edit lvm.conf and set the locking style to '4' and set wait_for_locks=0
>
> The clients have a local volume group besides the shared one. Setting the
> locking style to 4 prevents modification of this local volume group, too,
> so it does not seem to be an option.


You need 2 different lvm.conf's. Use the filter directive so that one can only see the local devices, and the other only the shared devices.

filter = [ "a|local_devices|", ... "r|shared_devices|" ]

vs
filter = [ "a|shared_devices|", ... "r|local_devices|" ]

Run pvscan and pvs and make sure the exclusions are happening correctly.
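Concretely, the two filters might look like the following (device names are illustrative assumptions, except that in this thread the shared LUN does show up as /dev/sda; the local disks are guessed here as /dev/vd*):

```
# lvm.conf for the local VG only (assumes local disks appear as /dev/vd*):
filter = [ "a|^/dev/vd.*|", "r|.*|" ]

# lvm.conf for the shared VG only (shared iSCSI LUN seen as /dev/sda):
filter = [ "a|^/dev/sda.*|", "r|.*|" ]
```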

> Each LV is statically assigned to one single host.


That's an important distinction! You can get by with a single lvm.conf and a well-crafted volume_list. I haven't played with tags, but they might be rather useful as well.


volume_list = [ "vglocal", "vg_shared/lv_mine", "@mytag1" ]
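As a hedged sketch of the tag approach (the LV name, tag, and local VG name are made up; the "shared" VG name comes from earlier in the thread): each host tags the LVs it owns once, and volume_list then restricts activation to them:

```
# One-time, on host A:
#   lvchange --addtag @hostA shared/lv_a
#
# lvm.conf on host A:
volume_list = [ "vglocal", "@hostA" ]
# 'vgchange -ay shared' will then activate only the LVs tagged @hostA.
```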


> As I tried to describe above, the hosts do not behave as master or slave.


Never mind then. I was implementing a full-on Active/Active storage head whose nodes could take over each other's volumes at will.

> Is this possible with or without cLVM?

without.


* Re: [linux-lvm] Is cLVM necessary when accessing different logical volumes on a shared iSCSI target?
  2013-07-03 13:58     ` matthew patton
@ 2013-07-06 12:47       ` Christian Schröder
  2013-07-06 17:40         ` matthew patton
  0 siblings, 1 reply; 7+ messages in thread
From: Christian Schröder @ 2013-07-06 12:47 UTC (permalink / raw)
  To: linux-lvm

Hi Matthew,
thanks again for your comments. The idea to use tags is very 
interesting. I will have a deeper look at it. However, I still do not 
understand whether filtering (either hard-coded or using tags) is *necessary* 
to make things work, or just *advisable* to prevent using a 
volume simultaneously from both clients.
When I tried to set up the scenario as described, I encountered another 
problem which might be related to the sharing stuff: When I virtually 
disconnect the physical volume (by disconnecting from the iSCSI server) 
and reconnect again, I get I/O errors when I try to access a logical 
volume. I have tried pvscan, vgscan and lvscan, but nothing helps. 
Interestingly, the volume group and all logical volumes are found by 
vgscan / lvscan, but I cannot access them. Is this the expected behavior? 
Is there any chance (besides rebooting the client machine) to recover 
from such a situation?

Regards,
Christian



* Re: [linux-lvm] Is cLVM necessary when accessing different logical volumes on a shared iSCSI target?
  2013-07-06 12:47       ` Christian Schröder
@ 2013-07-06 17:40         ` matthew patton
  2013-07-08 10:56           ` Christian Schröder
  0 siblings, 1 reply; 7+ messages in thread
From: matthew patton @ 2013-07-06 17:40 UTC (permalink / raw)
  To: LVM general discussion and development

> (either hard-coded or using tags) is *necessary* to make things working or if it
> is just *adviseable* to prevent using a volume simultaneously from both clients?

You could probably make a case for saying it's advisable, but why do you want to play roulette with your data? Force it to do the right thing and you won't run the risk of it doing something unintended. Why the trepidation about doing it correctly?

> logical values are found by vgscan / lvscan, but I cannot access it. Is this the�
> expected behavior? Is there any chance (besides rebooting the client machine) to 
> recover from such a situation?

How did you disconnect from iSCSI? Did you gracefully log out of the client session? However you did it, it seems the kernel believes the device was just yanked offline and after repeated I/O failures marked it bad. Re-introducing the device doesn't magically clear the error state. Then again maybe it's as simple as you not bringing it fully online with a 'vgchange -ay <vg>' and an 'lvchange -ay <lv>'.


* Re: [linux-lvm] Is cLVM necessary when accessing different logical volumes on a shared iSCSI target?
  2013-07-06 17:40         ` matthew patton
@ 2013-07-08 10:56           ` Christian Schröder
  0 siblings, 0 replies; 7+ messages in thread
From: Christian Schröder @ 2013-07-08 10:56 UTC (permalink / raw)
  To: linux-lvm

On 06.07.2013 19:40, matthew patton wrote:
>> (either hard-coded or using tags) is *necessary* to make things working or if it
>> is just *adviseable* to prevent using a volume simultaneously from both clients?
>
> You could probably make a case for saying it's adviseable but why do you want to play roulette with your data? Force it to do the right thing and you won't have to run the risk of it doing something unintended. Why the trepidation to doing it correctly?

I am just trying to understand how things work. But you have indeed 
convinced me to use tags.

>> logical values are found by vgscan / lvscan, but I cannot access it. Is this the
>> expected behavior? Is there any chance (besides rebooting the client machine) to
>> recover from such a situation?
>
> How did you disconnect from iSCSI? Did you gracefully log out of the client session? However you did it, it seems the kernel believes the device was just yanked offline and after repeated I/O failures marked it bad. Re-introducing the device doesn't magically clear the error state. Then again maybe it's as simple as you not bringing it fully online with a 'vgchange -ay <vg>' and an 'lvchange -ay <lv>'.

After some further investigation I think I have found the reason for the 
observed behavior: When I log out of the client session, the block 
device (/dev/sda in my case) still seems to be in use, probably by the 
device mapper itself. So when I log in again, the iSCSI device gets 
another device node (/dev/sdb), which obviously doesn't help the 
logical volume. When I deactivate and reactivate the LV, it gets 
connected to the new block device and works again.
It also works if I deactivate the LV before logging out of the iSCSI 
session. Then the block device is released and reused when I reconnect. 
In any case, it seems to be necessary to deactivate and reactivate the LV.
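As a command sketch of that sequence (the VG/LV name is illustrative; the iscsiadm invocations are abbreviated and would need the usual target/portal arguments):

```
lvchange -an shared/lv_a    # deactivate so device mapper releases /dev/sda
iscsiadm -m node -u         # log out of the iSCSI session
iscsiadm -m node -l         # log back in; the LUN reappears
lvchange -ay shared/lv_a    # reactivate against the current device node
```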

Regards,
Christian


