linux-lvm.redhat.com archive mirror
From: "James Miller" <jimm@simutronics.com>
To: linux-lvm@redhat.com
Subject: [linux-lvm] Solved RE: CLVMD and Locking type 3 initialisation failed
Date: Tue, 5 Jun 2007 14:55:07 -0500	[thread overview]
Message-ID: <078001c7a7ab$6ae46e30$5dd810d1@e3demo> (raw)
In-Reply-To: 



> -----Original Message-----
> From: James Miller 
> Sent: Tuesday, June 05, 2007 1:37 PM
> To: linux-lvm@redhat.com
> Subject: CLVMD and Locking type 3 initialisation failed
> 
> Hello everyone,
> 
> I'm setting up a cluster and was hoping to get some insight 
> into trouble I'm having with running clvmd. My cluster seems 
> to be in a good state (2 nodes).
> I'm using Debian Etch, a 2.6.18 kernel, and 
> redhat-cluster-modules for 2.6.18.
> 
> ---------------------------
> cman_tool status:
> Protocol version: 5.0.1
> Config version: 2
> Cluster name: alpha1
> Cluster ID: 6387
> Cluster Member: Yes
> Membership state: Cluster-Member
> Nodes: 2
> Expected_votes: 2
> Total_votes: 2
> Quorum: 2
> Active subsystems: 3
> Node name: hari
> Node ID: 1
> Node addresses: 209.16.216.121
> 
> -------------------------------
> cman_tool nodes:
> Node  Votes Exp Sts  Name
>    1    1    2   M   hari
>    2    1    2   M   seldon
> 
> ------------------------------
> cman_tool services:
> Service          Name                              GID LID State     Code
> Fence Domain:    "default"                           1   2 run       -
> [1 2]
> 
> DLM Lock Space:  "clvmd"                             2   3 run       -
> [1 2]
> 
> ---------------------------------
> When I run vgchange -aly I get the following message:
> vgchange -aly
>   Unknown locking type requested.
>   Locking type 3 initialisation failed.
> 
> ---------------------------
> /etc/lvm/lvm.conf:
> #
> devices {
> 
> 	dir = "/dev"
> 	scan = [ "/dev" ]
> 	filter = [ "r|/dev/cdrom|" ]
> 	write_cache_state = 1
> 	sysfs_scan = 1
> 	md_component_detection = 1
> }
> 
> log {
>     verbose = 0
>     syslog = 1
>     overwrite = 0
>     level = 0
>     indent = 1
>     command_names = 0
>     prefix = "  "
> }
> 
> backup {
>     backup = 1
>     backup_dir = "/etc/lvm/backup"
>     archive = 1
>     archive_dir = "/etc/lvm/archive"
>     retain_min = 10
>     retain_days = 30
> }
> 
> shell {
>     history_size = 100
> }
> 
> global {
>     umask = 077
>     test = 0
>     activation = 1
>     proc = "/proc"
>     locking_dir = "/var/lock/lvm"
>     locking_library = "liblvm2clusterlock.so"
> # ***** I VERIFIED the locking_dir exists, the library_dir is 
> correct and the locking_library is in the right place
>     locking_type = 3
>     library_dir = "/lib/lvm2"
> }
> 
> activation {
>     missing_stripe_filler = "/dev/ioerror"
>     reserved_stack = 256
>     reserved_memory = 8192
>     process_priority = -18
>     mirror_region_size = 512
>     mirror_log_fault_policy = "allocate"
>     mirror_device_fault_policy = "remove"
> }
> 
> -------------------------------------
> Important output of lsmod:
> lsmod
> Module                  Size  Used by
> dm_snapshot            20664  0
> dm_mirror              25216  0
> dm_mod                 62800  2 dm_snapshot,dm_mirror
> lock_dlm               44644  0
> dlm                   123040  4 lock_dlm
> cman                  132800  15 lock_dlm,dlm
> lock_harness           10160  1 lock_dlm
> ----------------------------------
> 


For Debian's lvm2 you have to set locking_type = 2.  It's documented in the
lvm2 source, in debian/clvm.README.Debian.
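
Roughly, my global section now looks like this (same paths and library as in
the config above, just with locking_type changed from 3 to 2):

global {
    # other settings unchanged from the config quoted above
    locking_type = 2
    locking_library = "liblvm2clusterlock.so"
    library_dir = "/lib/lvm2"
    locking_dir = "/var/lock/lvm"
}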

However, now that I have ccsd, cman, and clvmd happily running (I knew clvmd
was working once I could run LVM commands), I'm still not seeing the volume
group of the other cluster member on either node.  And the status of my
cluster seems fine.

Is there anything I'm missing or doing wrong?  I'm not running a SAN or
iSCSI; the PVs on each server are partitions I've created from the hardware
RAID.

The only anomaly I'm seeing is in /var/log/messages.  I am getting a kernel
cluster warning for one of my nodes:
CMAN: WARNING no listener for port 11 on node seldon
I don't know what that message even means, since I thought the traffic went
over port 6809.

Anyway, here's some info:

                  Server1 (hari)     Server2 (seldon)
PVs:              /dev/sda5          /dev/sda5
                  /dev/sda6          /dev/sda6
                  /dev/sda7          /dev/sda7
Volume group:     hari-vg01          seldon-vg01
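
For reference, the usual commands to check this on each node (standard LVM
tools; the VG name here is the local one, hari-vg01 on server1):

  pvscan                  # scans disks and lists the PVs this node can see
  vgscan                  # scans for and lists the VGs this node can see
  vgdisplay -v hari-vg01  # details of the local VG on server1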


I guess I don't really understand how clvmd works.



--Jim

Thread overview: 2+ messages
2007-06-05 19:55 James Miller [this message]
2007-06-05 23:47 ` [linux-lvm] Solved RE: CLVMD and Locking type 3 initialisation failed David Robinson
