* [linux-lvm] Error Attempting to Create LV on Clustered VG
@ 2013-03-19 16:58 Marc Smith
From: Marc Smith @ 2013-03-19 16:58 UTC
To: linux-lvm
[-- Attachment #1: Type: text/plain, Size: 3838 bytes --]
Hi,
I'm attempting to create a logical volume on a clustered volume group
in a two-node Pacemaker + Corosync cluster (active/active). I have
successfully created the VG, and it appears available on both hosts:
[root@bill ~]# vgs
VG #PV #LV #SN Attr VSize VFree
r0 1 0 0 wz--nc 203.24g 203.24g
[root@ben ~]# vgs
VG #PV #LV #SN Attr VSize VFree
r0 1 0 0 wz--nc 203.24g 203.24g
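(The 'c' in the attribute string shows the clustered flag is set. If it helps, I can also confirm that explicitly and send the output of something like the command below; I'm assuming the vg_clustered report field is available in this LVM2 build.)
[root@bill ~]# vgs -o vg_name,vg_attr,vg_clustered r0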
When I attempt to create an LV on this VG from one of the hosts, I get
the following error message:
[root@ben ~]# lvcreate -L 150G -n testvmfs1 r0
Error locking on node 40e6640a: Invalid argument
Error locking on node 31e6640a: Invalid argument
Failed to activate new LV.
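For reference, the "Invalid argument" in those locking errors is just errno 22 (EINVAL) coming back from clvmd/DLM, e.g. (assuming python is installed):
[root@ben ~]# python -c 'import os; print(os.strerror(22))'
Invalid argument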
I turned on the debug option for clvmd, and below is what appears to be
the relevant part of the logs (full logs for both hosts are attached):
--snip--
Mar 19 16:37:02 ben user.debug lvm[2845]: Sending message to all cluster nodes
Mar 19 16:37:02 ben user.debug lvm[2845]: 837182474 got message from nodeid 837182474 for 0. len 84
Mar 19 16:37:02 ben user.debug lvm[2845]: lock_resource 'bZ6A3DuNVyQTbWKDtWgP9vQIzopLKfBNSzatwqVGb32qIU1v00DBvD3nZmiaNYbh', flags=1, mode=1
Mar 19 16:37:02 ben user.debug lvm[2845]: dlm_ls_lock returned 22
Mar 19 16:37:02 ben user.debug lvm[2845]: hold_lock. lock at 1 failed: Invalid argument
Mar 19 16:37:02 ben user.debug lvm[2845]: Command return is 22, critical_section is 0
Mar 19 16:37:02 ben user.debug lvm[2845]: Reply from node 31e6640a: 17 bytes
Mar 19 16:37:02 ben user.debug lvm[2845]: Got 1 replies, expecting: 2
Mar 19 16:37:02 ben user.debug lvm[2845]: LVM thread waiting for work
Mar 19 16:37:02 ben user.debug lvm[2845]: 837182474 got message from nodeid 1088840714 for 837182474. len 35
Mar 19 16:37:02 ben user.debug lvm[2845]: Reply from node 40e6640a: 17 bytes
Mar 19 16:37:02 ben user.debug lvm[2845]: Got 2 replies, expecting: 2
Mar 19 16:37:02 ben user.debug lvm[2845]: Got post command condition...
--snip--
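So dlm_ls_lock() is rejecting the LV lock request with EINVAL on both nodes, while the V_r0/P_r0 VG locks earlier in the logs succeed. If it would help to see what the clvmd lockspace actually holds at that point, I can dump it on either node with something like the commands below (my assumption is that dlm_tool lockdebug reads the same information the kernel exposes under /sys/kernel/debug/dlm/ when debugfs is mounted):
[root@ben ~]# dlm_tool lockdebug clvmd
[root@ben ~]# cat /sys/kernel/debug/dlm/clvmd_locks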
I've Googled around a bit but haven't found any other posts quite
like this one. Here are the software versions I'm using on these
hosts:
Linux kernel: 3.7.8
LVM2: 2.02.97
DLM User Tools (dlm_controld): 4.0.1
Corosync: 2.3.0
Pacemaker: 1.1.8
I built LVM2 with the following configure script options:
--with-lvm1=none --disable-selinux --prefix=/usr --with-clvmd=corosync
--with-cluster=internal --enable-ocf --enable-cmirrord
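If the lvm.conf locking configuration is relevant here, I can send what's set on each node as well, e.g. (assuming the config lives in the default /etc/lvm/lvm.conf):
[root@ben ~]# grep -E '^[[:space:]]*(locking_type|fallback_to_local_locking)' /etc/lvm/lvm.conf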
Here is the dlm_tool information from each host (not sure if it's helpful,
but just in case):
[root@bill ~]# dlm_tool -n ls
dlm lockspaces
name clvmd
id 0x4104eefa
flags 0x00000000
change member 2 joined 1 remove 0 failed 0 seq 3,3
members 837182474 1088840714
all nodes
nodeid 837182474 member 1 failed 0 start 1 seq_add 3 seq_rem 2 check none
nodeid 1088840714 member 1 failed 0 start 1 seq_add 1 seq_rem 0 check none
[root@bill ~]# dlm_tool status
cluster nodeid 1088840714 quorate 1 ring seq 316 316
daemon now 12430 fence_pid 0
node 837182474 M add 460 rem 178 fail 0 fence 0 at 0 0
node 1088840714 M add 119 rem 0 fail 0 fence 0 at 0 0
[root@ben ~]# dlm_tool -n ls
dlm lockspaces
name clvmd
id 0x4104eefa
flags 0x00000000
change member 2 joined 1 remove 0 failed 0 seq 1,1
members 837182474 1088840714
all nodes
nodeid 837182474 member 1 failed 0 start 1 seq_add 1 seq_rem 0 check none
nodeid 1088840714 member 1 failed 0 start 1 seq_add 1 seq_rem 0 check none
[root@ben ~]# dlm_tool status
cluster nodeid 837182474 quorate 1 ring seq 316 316
daemon now 12174 fence_pid 0
node 837182474 M add 115 rem 0 fail 0 fence 0 at 0 0
node 1088840714 M add 115 rem 0 fail 0 fence 0 at 0 0
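I can also send the Corosync membership/quorum view from each node if that would be useful, e.g. (assuming the standard corosync 2.x CLI tools):
[root@bill ~]# corosync-quorumtool -s
[root@bill ~]# corosync-cmapctl | grep members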
I'd be grateful if anyone had a few minutes to look at my issue --
hopefully it's something simple I'm missing. =)
Please let me know if you need any additional information.
Thanks,
Marc
[-- Attachment #2: ben_debug --]
[-- Type: application/octet-stream, Size: 64000 bytes --]
Mar 19 13:34:09 ben daemon.debug clvmd[2845]: CLVMD started
Mar 19 13:34:09 ben user.debug kernel: [ 116.726390] dlm: clvmd: joining the lockspace group...
Mar 19 13:34:09 ben user.debug kernel: [ 116.730069] dlm: clvmd: dlm_recover 1
Mar 19 13:34:09 ben user.debug kernel: [ 116.730082] dlm: clvmd: group event done 0 0
Mar 19 13:34:09 ben user.debug kernel: [ 116.730116] dlm: clvmd: add member 1088840714
Mar 19 13:34:09 ben user.debug kernel: [ 116.730117] dlm: clvmd: add member 837182474
Mar 19 13:34:09 ben user.debug kernel: [ 116.730943] dlm: clvmd: dlm_recover_members 2 nodes
Mar 19 13:34:09 ben user.debug kernel: [ 116.730948] dlm: clvmd: join complete
Mar 19 13:34:10 ben user.debug kernel: [ 117.130123] dlm: clvmd: generation 6 slots 2 1:837182474 2:1088840714
Mar 19 13:34:10 ben user.debug kernel: [ 117.130126] dlm: clvmd: dlm_recover_directory
Mar 19 13:34:10 ben user.debug kernel: [ 117.330327] dlm: clvmd: dlm_recover_directory 0 in 0 new
Mar 19 13:34:10 ben user.debug kernel: [ 117.730100] dlm: clvmd: dlm_recover_directory 0 out 1 messages
Mar 19 13:34:10 ben daemon.debug clvmd[2845]: Created DLM lockspace for CLVMD.
Mar 19 13:34:10 ben daemon.debug clvmd[2845]: DLM initialisation complete
Mar 19 13:34:10 ben daemon.debug clvmd[2845]: Our local node id is 837182474
Mar 19 13:34:10 ben daemon.debug clvmd[2845]: Connected to Corosync
Mar 19 13:34:10 ben daemon.debug clvmd[2845]: Cluster ready, doing some more initialisation
Mar 19 13:34:10 ben daemon.debug clvmd[2845]: starting LVM thread
Mar 19 13:34:10 ben daemon.debug clvmd[2845]: LVM thread function started
Mar 19 13:34:10 ben user.debug lvm[2845]: clvmd ready for work
Mar 19 13:34:10 ben user.debug lvm[2845]: Using timeout of 60 seconds
Mar 19 13:34:10 ben user.debug lvm[2845]: Sub thread ready for work.
Mar 19 13:34:10 ben user.debug lvm[2845]: LVM thread waiting for work
Mar 19 13:34:10 ben user.debug lvm[2845]: confchg callback. 1 joined, 0 left, 2 members
Mar 19 13:34:11 ben user.debug kernel: [ 118.951590] dlm: clvmd: dlm_recover 1 generation 6 done: 1821 ms
Mar 19 13:34:12 ben user.debug kernel: [ 119.053187] qla2xxx 0000:03:00.0: irq 309 for MSI/MSI-X
Mar 19 13:34:13 ben user.debug kernel: [ 120.185166] qla2xxx 0000:03:00.1: irq 310 for MSI/MSI-X
Mar 19 13:34:18 ben user.debug iscsi-scstd: max_data_seg_len 1048576, max_queued_cmds 2048
Mar 19 13:34:55 ben user.debug lvm[2845]: Got new connection on fd 5
Mar 19 13:34:55 ben user.debug lvm[2845]: Read on local socket 5, len = 24
Mar 19 13:34:55 ben user.debug lvm[2845]: creating pipe, [12, 13]
Mar 19 13:34:55 ben user.debug lvm[2845]: Creating pre&post thread
Mar 19 13:34:55 ben user.debug lvm[2845]: Created pre&post thread, state = 0
Mar 19 13:34:55 ben user.debug lvm[2845]: in sub thread: client = 0x19523f0
Mar 19 13:34:55 ben user.debug lvm[2845]: doing PRE command LOCK_VG 'V_r0' at 4 (client=0x19523f0)
Mar 19 13:34:55 ben user.debug lvm[2845]: lock_resource 'V_r0', flags=0, mode=4
Mar 19 13:34:55 ben user.debug lvm[2845]: lock_resource returning 0, lock_id=1
Mar 19 13:34:55 ben user.debug lvm[2845]: Writing status 0 down pipe 13
Mar 19 13:34:55 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 13:34:55 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x19523f0
Mar 19 13:34:55 ben user.debug lvm[2845]: Waiting to do post command - state = 0
Mar 19 13:34:55 ben user.debug lvm[2845]: distribute command: XID = 0, flags=0x1 (LOCAL)
Mar 19 13:34:55 ben user.debug lvm[2845]: add_to_lvmqueue: cmd=0x1952750. client=0x19523f0, msg=0x1952500, len=24, csid=(nil), xid=0
Mar 19 13:34:55 ben user.debug lvm[2845]: process_work_item: local
Mar 19 13:34:55 ben user.debug lvm[2845]: process_local_command: LOCK_VG (0x33) msg=0x1952790, msglen =24, client=0x19523f0
Mar 19 13:34:55 ben user.debug lvm[2845]: do_lock_vg: resource 'V_r0', cmd = 0x4 LCK_VG (WRITE|VG), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Mar 19 13:34:55 ben user.debug lvm[2845]: Invalidating cached metadata for VG r0
Mar 19 13:34:55 ben user.debug lvm[2845]: Reply from node 31e6640a: 0 bytes
Mar 19 13:34:55 ben user.debug lvm[2845]: Got 1 replies, expecting: 1
Mar 19 13:34:55 ben user.debug lvm[2845]: LVM thread waiting for work
Mar 19 13:34:55 ben user.debug lvm[2845]: Got post command condition...
Mar 19 13:34:55 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 13:34:55 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x19523f0
Mar 19 13:34:55 ben user.debug lvm[2845]: Waiting for next pre command
Mar 19 13:34:55 ben user.debug lvm[2845]: Send local reply
Mar 19 13:34:55 ben user.debug lvm[2845]: Read on local socket 5, len = 24
Mar 19 13:34:55 ben user.debug lvm[2845]: check_all_clvmds_running
Mar 19 13:34:55 ben user.debug lvm[2845]: down_callback. node 1088840714, state = 3
Mar 19 13:34:55 ben user.debug lvm[2845]: down_callback. node 837182474, state = 3
Mar 19 13:34:55 ben user.debug lvm[2845]: Got pre command condition...
Mar 19 13:34:55 ben user.debug lvm[2845]: Writing status 0 down pipe 13
Mar 19 13:34:55 ben user.debug lvm[2845]: Waiting to do post command - state = 0
Mar 19 13:34:55 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 13:34:55 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x19523f0
Mar 19 13:34:55 ben user.debug lvm[2845]: distribute command: XID = 1, flags=0x0 ()
Mar 19 13:34:55 ben user.debug lvm[2845]: num_nodes = 2
Mar 19 13:34:55 ben user.debug lvm[2845]: add_to_lvmqueue: cmd=0x19529b0. client=0x19523f0, msg=0x1952500, len=24, csid=(nil), xid=1
Mar 19 13:34:55 ben user.debug lvm[2845]: Sending message to all cluster nodes
Mar 19 13:34:55 ben user.debug lvm[2845]: process_work_item: local
Mar 19 13:34:55 ben user.debug lvm[2845]: process_local_command: LOCK_VG (0x33) msg=0x1952750, msglen =24, client=0x19523f0
Mar 19 13:34:55 ben user.debug lvm[2845]: do_lock_vg: resource 'P_r0', cmd = 0x104 LCK_VG (WRITE|VG|CACHE), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Mar 19 13:34:55 ben user.debug lvm[2845]: Invalidating cached metadata for VG r0
Mar 19 13:34:55 ben user.debug lvm[2845]: Reply from node 31e6640a: 0 bytes
Mar 19 13:34:55 ben user.debug lvm[2845]: Got 1 replies, expecting: 2
Mar 19 13:34:55 ben user.debug lvm[2845]: LVM thread waiting for work
Mar 19 13:34:55 ben user.debug lvm[2845]: 837182474 got message from nodeid 837182474 for 0. len 24
Mar 19 13:34:55 ben user.debug lvm[2845]: 837182474 got message from nodeid 1088840714 for 837182474. len 18
Mar 19 13:34:55 ben user.debug lvm[2845]: Reply from node 40e6640a: 0 bytes
Mar 19 13:34:55 ben user.debug lvm[2845]: Got 2 replies, expecting: 2
Mar 19 13:34:55 ben user.debug lvm[2845]: Got post command condition...
Mar 19 13:34:55 ben user.debug lvm[2845]: Waiting for next pre command
Mar 19 13:34:55 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 13:34:55 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x19523f0
Mar 19 13:34:55 ben user.debug lvm[2845]: Send local reply
Mar 19 13:34:55 ben user.debug lvm[2845]: Read on local socket 5, len = 24
Mar 19 13:34:55 ben user.debug lvm[2845]: check_all_clvmds_running
Mar 19 13:34:55 ben user.debug lvm[2845]: down_callback. node 1088840714, state = 3
Mar 19 13:34:55 ben user.debug lvm[2845]: down_callback. node 837182474, state = 3
Mar 19 13:34:55 ben user.debug lvm[2845]: Got pre command condition...
Mar 19 13:34:55 ben user.debug lvm[2845]: Writing status 0 down pipe 13
Mar 19 13:34:55 ben user.debug lvm[2845]: Waiting to do post command - state = 0
Mar 19 13:34:55 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 13:34:55 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x19523f0
Mar 19 13:34:55 ben user.debug lvm[2845]: distribute command: XID = 2, flags=0x0 ()
Mar 19 13:34:55 ben user.debug lvm[2845]: num_nodes = 2
Mar 19 13:34:55 ben user.debug lvm[2845]: add_to_lvmqueue: cmd=0x19529b0. client=0x19523f0, msg=0x1952500, len=24, csid=(nil), xid=2
Mar 19 13:34:55 ben user.debug lvm[2845]: Sending message to all cluster nodes
Mar 19 13:34:55 ben user.debug lvm[2845]: process_work_item: local
Mar 19 13:34:55 ben user.debug lvm[2845]: process_local_command: LOCK_VG (0x33) msg=0x1952750, msglen =24, client=0x19523f0
Mar 19 13:34:55 ben user.debug lvm[2845]: do_lock_vg: resource 'P_r0', cmd = 0x124 LCK_VG (WRITE|VG|HOLD|CACHE), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Mar 19 13:34:55 ben user.debug lvm[2845]: vg_commit notification for VG r0
Mar 19 13:34:55 ben user.debug lvm[2845]: Reply from node 31e6640a: 0 bytes
Mar 19 13:34:55 ben user.debug lvm[2845]: Got 1 replies, expecting: 2
Mar 19 13:34:55 ben user.debug lvm[2845]: LVM thread waiting for work
Mar 19 13:34:55 ben user.debug lvm[2845]: 837182474 got message from nodeid 837182474 for 0. len 24
Mar 19 13:34:55 ben user.debug lvm[2845]: 837182474 got message from nodeid 1088840714 for 837182474. len 18
Mar 19 13:34:55 ben user.debug lvm[2845]: Reply from node 40e6640a: 0 bytes
Mar 19 13:34:55 ben user.debug lvm[2845]: Got 2 replies, expecting: 2
Mar 19 13:34:55 ben user.debug lvm[2845]: Got post command condition...
Mar 19 13:34:55 ben user.debug lvm[2845]: Waiting for next pre command
Mar 19 13:34:55 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 13:34:55 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x19523f0
Mar 19 13:34:55 ben user.debug lvm[2845]: Send local reply
Mar 19 13:34:55 ben user.debug lvm[2845]: Read on local socket 5, len = 22
Mar 19 13:34:55 ben user.debug lvm[2845]: check_all_clvmds_running
Mar 19 13:34:55 ben user.debug lvm[2845]: down_callback. node 1088840714, state = 3
Mar 19 13:34:55 ben user.debug lvm[2845]: down_callback. node 837182474, state = 3
Mar 19 13:34:55 ben user.debug lvm[2845]: Got pre command condition...
Mar 19 13:34:55 ben user.debug lvm[2845]: Writing status 0 down pipe 13
Mar 19 13:34:55 ben user.debug lvm[2845]: Waiting to do post command - state = 0
Mar 19 13:34:55 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 13:34:55 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x19523f0
Mar 19 13:34:55 ben user.debug lvm[2845]: distribute command: XID = 3, flags=0x0 ()
Mar 19 13:34:55 ben user.debug lvm[2845]: num_nodes = 2
Mar 19 13:34:55 ben user.debug lvm[2845]: add_to_lvmqueue: cmd=0x19529b0. client=0x19523f0, msg=0x1952500, len=22, csid=(nil), xid=3
Mar 19 13:34:55 ben user.debug lvm[2845]: Sending message to all cluster nodes
Mar 19 13:34:55 ben user.debug lvm[2845]: process_work_item: local
Mar 19 13:34:55 ben user.debug lvm[2845]: process_local_command: VG_BACKUP (0x2b) msg=0x1952750, msglen =22, client=0x19523f0
Mar 19 13:34:55 ben user.debug lvm[2845]: Reply from node 31e6640a: 0 bytes
Mar 19 13:34:55 ben user.debug lvm[2845]: Got 1 replies, expecting: 2
Mar 19 13:34:55 ben user.debug lvm[2845]: LVM thread waiting for work
Mar 19 13:34:55 ben user.debug lvm[2845]: 837182474 got message from nodeid 837182474 for 0. len 22
Mar 19 13:34:56 ben user.debug lvm[2845]: 837182474 got message from nodeid 1088840714 for 837182474. len 18
Mar 19 13:34:56 ben user.debug lvm[2845]: Reply from node 40e6640a: 0 bytes
Mar 19 13:34:56 ben user.debug lvm[2845]: Got 2 replies, expecting: 2
Mar 19 13:34:56 ben user.debug lvm[2845]: Got post command condition...
Mar 19 13:34:56 ben user.debug lvm[2845]: Waiting for next pre command
Mar 19 13:34:56 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 13:34:56 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x19523f0
Mar 19 13:34:56 ben user.debug lvm[2845]: Send local reply
Mar 19 13:34:56 ben user.debug lvm[2845]: Read on local socket 5, len = 84
Mar 19 13:34:56 ben user.debug lvm[2845]: check_all_clvmds_running
Mar 19 13:34:56 ben user.debug lvm[2845]: down_callback. node 1088840714, state = 3
Mar 19 13:34:56 ben user.debug lvm[2845]: down_callback. node 837182474, state = 3
Mar 19 13:34:56 ben user.debug lvm[2845]: Got pre command condition...
Mar 19 13:34:56 ben user.debug lvm[2845]: Writing status 0 down pipe 13
Mar 19 13:34:56 ben user.debug lvm[2845]: Waiting to do post command - state = 0
Mar 19 13:34:56 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 13:34:56 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x19523f0
Mar 19 13:34:56 ben user.debug lvm[2845]: distribute command: XID = 4, flags=0x0 ()
Mar 19 13:34:56 ben user.debug lvm[2845]: num_nodes = 2
Mar 19 13:34:56 ben user.debug lvm[2845]: add_to_lvmqueue: cmd=0x1952a10. client=0x19523f0, msg=0x1952750, len=84, csid=(nil), xid=4
Mar 19 13:34:56 ben user.debug lvm[2845]: Sending message to all cluster nodes
Mar 19 13:34:56 ben user.debug lvm[2845]: process_work_item: local
Mar 19 13:34:56 ben user.debug lvm[2845]: process_local_command: LOCK_LV (0x32) msg=0x19527b0, msglen =84, client=0x19523f0
Mar 19 13:34:56 ben user.debug lvm[2845]: do_lock_lv: resource 'bZ6A3DuNVyQTbWKDtWgP9vQIzopLKfBNYJAvWMwODeOKqbAarOjbvzhzHE5f3Pus', cmd = 0x99 LCK_LV_ACTIVATE (READ|LV|NONBLOCK|CLUSTER_VG), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Mar 19 13:34:56 ben user.debug lvm[2845]: 837182474 got message from nodeid 837182474 for 0. len 84
Mar 19 13:34:56 ben user.debug lvm[2845]: 837182474 got message from nodeid 1088840714 for 837182474. len 35
Mar 19 13:34:56 ben user.debug lvm[2845]: Reply from node 40e6640a: 17 bytes
Mar 19 13:34:56 ben user.debug lvm[2845]: Got 1 replies, expecting: 2
Mar 19 13:34:56 ben user.debug lvm[2845]: lock_resource 'bZ6A3DuNVyQTbWKDtWgP9vQIzopLKfBNYJAvWMwODeOKqbAarOjbvzhzHE5f3Pus', flags=1, mode=1
Mar 19 13:34:56 ben user.debug lvm[2845]: dlm_ls_lock returned 22
Mar 19 13:34:56 ben user.debug lvm[2845]: hold_lock. lock at 1 failed: Invalid argument
Mar 19 13:34:56 ben user.debug lvm[2845]: Command return is 22, critical_section is 0
Mar 19 13:34:56 ben user.debug lvm[2845]: Reply from node 31e6640a: 17 bytes
Mar 19 13:34:56 ben user.debug lvm[2845]: Got 2 replies, expecting: 2
Mar 19 13:34:56 ben user.debug lvm[2845]: LVM thread waiting for work
Mar 19 13:34:56 ben user.debug lvm[2845]: Got post command condition...
Mar 19 13:34:56 ben user.debug lvm[2845]: Waiting for next pre command
Mar 19 13:34:56 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 13:34:56 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x19523f0
Mar 19 13:34:56 ben user.debug lvm[2845]: Send local reply
Mar 19 13:34:56 ben user.debug lvm[2845]: Read on local socket 5, len = 84
Mar 19 13:34:56 ben user.debug lvm[2845]: check_all_clvmds_running
Mar 19 13:34:56 ben user.debug lvm[2845]: down_callback. node 1088840714, state = 3
Mar 19 13:34:56 ben user.debug lvm[2845]: down_callback. node 837182474, state = 3
Mar 19 13:34:56 ben user.debug lvm[2845]: Got pre command condition...
Mar 19 13:34:56 ben user.debug lvm[2845]: Writing status 0 down pipe 13
Mar 19 13:34:56 ben user.debug lvm[2845]: Waiting to do post command - state = 0
Mar 19 13:34:56 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 13:34:56 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x19523f0
Mar 19 13:34:56 ben user.debug lvm[2845]: distribute command: XID = 5, flags=0x0 ()
Mar 19 13:34:56 ben user.debug lvm[2845]: num_nodes = 2
Mar 19 13:34:56 ben user.debug lvm[2845]: add_to_lvmqueue: cmd=0x19707f0. client=0x19523f0, msg=0x1952750, len=84, csid=(nil), xid=5
Mar 19 13:34:56 ben user.debug lvm[2845]: Sending message to all cluster nodes
Mar 19 13:34:56 ben user.debug lvm[2845]: process_work_item: local
Mar 19 13:34:56 ben user.debug lvm[2845]: process_local_command: LOCK_LV (0x32) msg=0x19527b0, msglen =84, client=0x19523f0
Mar 19 13:34:56 ben user.debug lvm[2845]: do_lock_lv: resource 'bZ6A3DuNVyQTbWKDtWgP9vQIzopLKfBNYJAvWMwODeOKqbAarOjbvzhzHE5f3Pus', cmd = 0x98 LCK_LV_DEACTIVATE (NULL|LV|NONBLOCK|CLUSTER_VG), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Mar 19 13:34:56 ben user.debug lvm[2845]: do_deactivate_lock, lock not already held
Mar 19 13:34:56 ben user.debug lvm[2845]: Command return is 0, critical_section is 0
Mar 19 13:34:56 ben user.debug lvm[2845]: Reply from node 31e6640a: 0 bytes
Mar 19 13:34:56 ben user.debug lvm[2845]: Got 1 replies, expecting: 2
Mar 19 13:34:56 ben user.debug lvm[2845]: LVM thread waiting for work
Mar 19 13:34:56 ben user.debug lvm[2845]: 837182474 got message from nodeid 837182474 for 0. len 84
Mar 19 13:34:56 ben user.debug lvm[2845]: 837182474 got message from nodeid 1088840714 for 837182474. len 18
Mar 19 13:34:56 ben user.debug lvm[2845]: Reply from node 40e6640a: 0 bytes
Mar 19 13:34:56 ben user.debug lvm[2845]: Got 2 replies, expecting: 2
Mar 19 13:34:56 ben user.debug lvm[2845]: Got post command condition...
Mar 19 13:34:56 ben user.debug lvm[2845]: Waiting for next pre command
Mar 19 13:34:56 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 13:34:56 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x19523f0
Mar 19 13:34:56 ben user.debug lvm[2845]: Send local reply
Mar 19 13:34:56 ben user.debug lvm[2845]: Read on local socket 5, len = 24
Mar 19 13:34:56 ben user.debug lvm[2845]: check_all_clvmds_running
Mar 19 13:34:56 ben user.debug lvm[2845]: down_callback. node 1088840714, state = 3
Mar 19 13:34:56 ben user.debug lvm[2845]: down_callback. node 837182474, state = 3
Mar 19 13:34:56 ben user.debug lvm[2845]: Got pre command condition...
Mar 19 13:34:56 ben user.debug lvm[2845]: Writing status 0 down pipe 13
Mar 19 13:34:56 ben user.debug lvm[2845]: Waiting to do post command - state = 0
Mar 19 13:34:56 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 13:34:56 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x19523f0
Mar 19 13:34:56 ben user.debug lvm[2845]: distribute command: XID = 6, flags=0x0 ()
Mar 19 13:34:56 ben user.debug lvm[2845]: num_nodes = 2
Mar 19 13:34:56 ben user.debug lvm[2845]: add_to_lvmqueue: cmd=0x19707f0. client=0x19523f0, msg=0x196bcf0, len=24, csid=(nil), xid=6
Mar 19 13:34:56 ben user.debug lvm[2845]: Sending message to all cluster nodes
Mar 19 13:34:56 ben user.debug lvm[2845]: process_work_item: local
Mar 19 13:34:56 ben user.debug lvm[2845]: process_local_command: LOCK_VG (0x33) msg=0x196b6a0, msglen =24, client=0x19523f0
Mar 19 13:34:56 ben user.debug lvm[2845]: do_lock_vg: resource 'P_r0', cmd = 0x104 LCK_VG (WRITE|VG|CACHE), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Mar 19 13:34:56 ben user.debug lvm[2845]: Invalidating cached metadata for VG r0
Mar 19 13:34:56 ben user.debug lvm[2845]: Reply from node 31e6640a: 0 bytes
Mar 19 13:34:56 ben user.debug lvm[2845]: Got 1 replies, expecting: 2
Mar 19 13:34:56 ben user.debug lvm[2845]: LVM thread waiting for work
Mar 19 13:34:56 ben user.debug lvm[2845]: 837182474 got message from nodeid 837182474 for 0. len 24
Mar 19 13:34:56 ben user.debug lvm[2845]: 837182474 got message from nodeid 1088840714 for 837182474. len 18
Mar 19 13:34:56 ben user.debug lvm[2845]: Reply from node 40e6640a: 0 bytes
Mar 19 13:34:56 ben user.debug lvm[2845]: Got 2 replies, expecting: 2
Mar 19 13:34:56 ben user.debug lvm[2845]: Got post command condition...
Mar 19 13:34:56 ben user.debug lvm[2845]: Waiting for next pre command
Mar 19 13:34:56 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 13:34:56 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x19523f0
Mar 19 13:34:56 ben user.debug lvm[2845]: Send local reply
Mar 19 13:34:56 ben user.debug lvm[2845]: Read on local socket 5, len = 24
Mar 19 13:34:56 ben user.debug lvm[2845]: check_all_clvmds_running
Mar 19 13:34:56 ben user.debug lvm[2845]: down_callback. node 1088840714, state = 3
Mar 19 13:34:56 ben user.debug lvm[2845]: down_callback. node 837182474, state = 3
Mar 19 13:34:56 ben user.debug lvm[2845]: Got pre command condition...
Mar 19 13:34:56 ben user.debug lvm[2845]: Writing status 0 down pipe 13
Mar 19 13:34:56 ben user.debug lvm[2845]: Waiting to do post command - state = 0
Mar 19 13:34:56 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 13:34:56 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x19523f0
Mar 19 13:34:56 ben user.debug lvm[2845]: distribute command: XID = 7, flags=0x0 ()
Mar 19 13:34:56 ben user.debug lvm[2845]: num_nodes = 2
Mar 19 13:34:56 ben user.debug lvm[2845]: add_to_lvmqueue: cmd=0x1952a10. client=0x19523f0, msg=0x196bcf0, len=24, csid=(nil), xid=7
Mar 19 13:34:56 ben user.debug lvm[2845]: Sending message to all cluster nodes
Mar 19 13:34:56 ben user.debug lvm[2845]: process_work_item: local
Mar 19 13:34:56 ben user.debug lvm[2845]: process_local_command: LOCK_VG (0x33) msg=0x196bb80, msglen =24, client=0x19523f0
Mar 19 13:34:56 ben user.debug lvm[2845]: do_lock_vg: resource 'P_r0', cmd = 0x124 LCK_VG (WRITE|VG|HOLD|CACHE), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Mar 19 13:34:56 ben user.debug lvm[2845]: vg_commit notification for VG r0
Mar 19 13:34:56 ben user.debug lvm[2845]: Reply from node 31e6640a: 0 bytes
Mar 19 13:34:56 ben user.debug lvm[2845]: Got 1 replies, expecting: 2
Mar 19 13:34:56 ben user.debug lvm[2845]: LVM thread waiting for work
Mar 19 13:34:56 ben user.debug lvm[2845]: 837182474 got message from nodeid 837182474 for 0. len 24
Mar 19 13:34:56 ben user.debug lvm[2845]: 837182474 got message from nodeid 1088840714 for 837182474. len 18
Mar 19 13:34:56 ben user.debug lvm[2845]: Reply from node 40e6640a: 0 bytes
Mar 19 13:34:56 ben user.debug lvm[2845]: Got 2 replies, expecting: 2
Mar 19 13:34:56 ben user.debug lvm[2845]: Got post command condition...
Mar 19 13:34:56 ben user.debug lvm[2845]: Waiting for next pre command
Mar 19 13:34:56 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 13:34:56 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x19523f0
Mar 19 13:34:56 ben user.debug lvm[2845]: Send local reply
Mar 19 13:34:56 ben user.debug lvm[2845]: Read on local socket 5, len = 22
Mar 19 13:34:56 ben user.debug lvm[2845]: check_all_clvmds_running
Mar 19 13:34:56 ben user.debug lvm[2845]: down_callback. node 1088840714, state = 3
Mar 19 13:34:56 ben user.debug lvm[2845]: down_callback. node 837182474, state = 3
Mar 19 13:34:56 ben user.debug lvm[2845]: Got pre command condition...
Mar 19 13:34:56 ben user.debug lvm[2845]: Writing status 0 down pipe 13
Mar 19 13:34:56 ben user.debug lvm[2845]: Waiting to do post command - state = 0
Mar 19 13:34:56 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 13:34:56 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x19523f0
Mar 19 13:34:56 ben user.debug lvm[2845]: distribute command: XID = 8, flags=0x0 ()
Mar 19 13:34:56 ben user.debug lvm[2845]: num_nodes = 2
Mar 19 13:34:56 ben user.debug lvm[2845]: add_to_lvmqueue: cmd=0x1952a10. client=0x19523f0, msg=0x196bcf0, len=22, csid=(nil), xid=8
Mar 19 13:34:56 ben user.debug lvm[2845]: process_work_item: local
Mar 19 13:34:56 ben user.debug lvm[2845]: process_local_command: VG_BACKUP (0x2b) msg=0x196acd0, msglen =22, client=0x19523f0
Mar 19 13:34:56 ben user.debug lvm[2845]: Reply from node 31e6640a: 0 bytes
Mar 19 13:34:56 ben user.debug lvm[2845]: Got 1 replies, expecting: 2
Mar 19 13:34:56 ben user.debug lvm[2845]: LVM thread waiting for work
Mar 19 13:34:56 ben user.debug lvm[2845]: Sending message to all cluster nodes
Mar 19 13:34:56 ben user.debug lvm[2845]: 837182474 got message from nodeid 837182474 for 0. len 22
Mar 19 13:34:56 ben user.debug lvm[2845]: 837182474 got message from nodeid 1088840714 for 837182474. len 18
Mar 19 13:34:56 ben user.debug lvm[2845]: Reply from node 40e6640a: 0 bytes
Mar 19 13:34:56 ben user.debug lvm[2845]: Got 2 replies, expecting: 2
Mar 19 13:34:56 ben user.debug lvm[2845]: Got post command condition...
Mar 19 13:34:56 ben user.debug lvm[2845]: Waiting for next pre command
Mar 19 13:34:56 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 13:34:56 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x19523f0
Mar 19 13:34:56 ben user.debug lvm[2845]: Send local reply
Mar 19 13:34:56 ben user.debug lvm[2845]: Read on local socket 5, len = 31
Mar 19 13:34:56 ben user.debug lvm[2845]: check_all_clvmds_running
Mar 19 13:34:56 ben user.debug lvm[2845]: down_callback. node 1088840714, state = 3
Mar 19 13:34:56 ben user.debug lvm[2845]: down_callback. node 837182474, state = 3
Mar 19 13:34:56 ben user.debug lvm[2845]: Got pre command condition...
Mar 19 13:34:56 ben user.debug lvm[2845]: Writing status 0 down pipe 13
Mar 19 13:34:56 ben user.debug lvm[2845]: Waiting to do post command - state = 0
Mar 19 13:34:56 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 13:34:56 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x19523f0
Mar 19 13:34:56 ben user.debug lvm[2845]: distribute command: XID = 9, flags=0x0 ()
Mar 19 13:34:56 ben user.debug lvm[2845]: num_nodes = 2
Mar 19 13:34:56 ben user.debug lvm[2845]: add_to_lvmqueue: cmd=0x1952a10. client=0x19523f0, msg=0x1952750, len=31, csid=(nil), xid=9
Mar 19 13:34:56 ben user.debug lvm[2845]: process_work_item: local
Mar 19 13:34:56 ben user.debug lvm[2845]: process_local_command: SYNC_NAMES (0x2d) msg=0x1952780, msglen =31, client=0x19523f0
Mar 19 13:34:56 ben user.debug lvm[2845]: Syncing device names
Mar 19 13:34:56 ben user.debug lvm[2845]: Reply from node 31e6640a: 0 bytes
Mar 19 13:34:56 ben user.debug lvm[2845]: Got 1 replies, expecting: 2
Mar 19 13:34:56 ben user.debug lvm[2845]: LVM thread waiting for work
Mar 19 13:34:56 ben user.debug lvm[2845]: Sending message to all cluster nodes
Mar 19 13:34:56 ben user.debug lvm[2845]: 837182474 got message from nodeid 837182474 for 0. len 31
Mar 19 13:34:56 ben user.debug lvm[2845]: 837182474 got message from nodeid 1088840714 for 837182474. len 18
Mar 19 13:34:56 ben user.debug lvm[2845]: Reply from node 40e6640a: 0 bytes
Mar 19 13:34:56 ben user.debug lvm[2845]: Got 2 replies, expecting: 2
Mar 19 13:34:56 ben user.debug lvm[2845]: Got post command condition...
Mar 19 13:34:56 ben user.debug lvm[2845]: Waiting for next pre command
Mar 19 13:34:56 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 13:34:56 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x19523f0
Mar 19 13:34:56 ben user.debug lvm[2845]: Send local reply
Mar 19 13:34:56 ben user.debug lvm[2845]: Read on local socket 5, len = 24
Mar 19 13:34:56 ben user.debug lvm[2845]: Got pre command condition...
Mar 19 13:34:56 ben user.debug lvm[2845]: doing PRE command LOCK_VG 'V_r0' at 6 (client=0x19523f0)
Mar 19 13:34:56 ben user.debug lvm[2845]: unlock_resource: V_r0 lockid: 1
Mar 19 13:34:56 ben user.debug lvm[2845]: Writing status 0 down pipe 13
Mar 19 13:34:56 ben user.debug lvm[2845]: Waiting to do post command - state = 0
Mar 19 13:34:56 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 13:34:56 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x19523f0
Mar 19 13:34:56 ben user.debug lvm[2845]: distribute command: XID = 10, flags=0x1 (LOCAL)
Mar 19 13:34:56 ben user.debug lvm[2845]: add_to_lvmqueue: cmd=0x1952750. client=0x19523f0, msg=0x1952840, len=24, csid=(nil), xid=10
Mar 19 13:34:56 ben user.debug lvm[2845]: process_work_item: local
Mar 19 13:34:56 ben user.debug lvm[2845]: process_local_command: LOCK_VG (0x33) msg=0x196b6a0, msglen =24, client=0x19523f0
Mar 19 13:34:56 ben user.debug lvm[2845]: do_lock_vg: resource 'V_r0', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Mar 19 13:34:56 ben user.debug lvm[2845]: Invalidating cached metadata for VG r0
Mar 19 13:34:56 ben user.debug lvm[2845]: Reply from node 31e6640a: 0 bytes
Mar 19 13:34:56 ben user.debug lvm[2845]: Got 1 replies, expecting: 1
Mar 19 13:34:56 ben user.debug lvm[2845]: LVM thread waiting for work
Mar 19 13:34:56 ben user.debug lvm[2845]: Got post command condition...
Mar 19 13:34:56 ben user.debug lvm[2845]: Waiting for next pre command
Mar 19 13:34:56 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 13:34:56 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x19523f0
Mar 19 13:34:56 ben user.debug lvm[2845]: Send local reply
Mar 19 13:34:56 ben user.debug lvm[2845]: Read on local socket 5, len = 0
Mar 19 13:34:56 ben user.debug lvm[2845]: EOF on local socket: inprogress=0
Mar 19 13:34:56 ben user.debug lvm[2845]: Waiting for child thread
Mar 19 13:34:56 ben user.debug lvm[2845]: Got pre command condition...
Mar 19 13:34:56 ben user.debug lvm[2845]: Subthread finished
Mar 19 13:34:56 ben user.debug lvm[2845]: Joined child thread
Mar 19 13:34:56 ben user.debug lvm[2845]: ret == 0, errno = 0. removing client
Mar 19 13:34:56 ben user.debug lvm[2845]: add_to_lvmqueue: cmd=0x1952750. client=0x19523f0, msg=(nil), len=0, csid=(nil), xid=10
Mar 19 13:34:56 ben user.debug lvm[2845]: process_work_item: free fd -1
Mar 19 13:34:56 ben user.debug lvm[2845]: LVM thread waiting for work
Mar 19 16:37:02 ben user.debug lvm[2845]: Got new connection on fd 5
Mar 19 16:37:02 ben user.debug lvm[2845]: Read on local socket 5, len = 24
Mar 19 16:37:02 ben user.debug lvm[2845]: creating pipe, [12, 13]
Mar 19 16:37:02 ben user.debug lvm[2845]: Creating pre&post thread
Mar 19 16:37:02 ben user.debug lvm[2845]: Created pre&post thread, state = 0
Mar 19 16:37:02 ben user.debug lvm[2845]: in sub thread: client = 0x1952520
Mar 19 16:37:02 ben user.debug lvm[2845]: doing PRE command LOCK_VG 'V_r0' at 4 (client=0x1952520)
Mar 19 16:37:02 ben user.debug lvm[2845]: lock_resource 'V_r0', flags=0, mode=4
Mar 19 16:37:02 ben user.debug lvm[2845]: lock_resource returning 0, lock_id=1
Mar 19 16:37:02 ben user.debug lvm[2845]: Writing status 0 down pipe 13
Mar 19 16:37:02 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 16:37:02 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x1952520
Mar 19 16:37:02 ben user.debug lvm[2845]: Waiting to do post command - state = 0
Mar 19 16:37:02 ben user.debug lvm[2845]: distribute command: XID = 11, flags=0x1 (LOCAL)
Mar 19 16:37:02 ben user.debug lvm[2845]: add_to_lvmqueue: cmd=0x1962b10. client=0x1952520, msg=0x196bb80, len=24, csid=(nil), xid=11
Mar 19 16:37:02 ben user.debug lvm[2845]: process_work_item: local
Mar 19 16:37:02 ben user.debug lvm[2845]: process_local_command: LOCK_VG (0x33) msg=0x196bcf0, msglen =24, client=0x1952520
Mar 19 16:37:02 ben user.debug lvm[2845]: do_lock_vg: resource 'V_r0', cmd = 0x4 LCK_VG (WRITE|VG), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Mar 19 16:37:02 ben user.debug lvm[2845]: Invalidating cached metadata for VG r0
Mar 19 16:37:02 ben user.debug lvm[2845]: Reply from node 31e6640a: 0 bytes
Mar 19 16:37:02 ben user.debug lvm[2845]: Got 1 replies, expecting: 1
Mar 19 16:37:02 ben user.debug lvm[2845]: LVM thread waiting for work
Mar 19 16:37:02 ben user.debug lvm[2845]: Got post command condition...
Mar 19 16:37:02 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 16:37:02 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x1952520
Mar 19 16:37:02 ben user.debug lvm[2845]: Waiting for next pre command
Mar 19 16:37:02 ben user.debug lvm[2845]: Send local reply
Mar 19 16:37:02 ben user.debug lvm[2845]: Read on local socket 5, len = 24
Mar 19 16:37:02 ben user.debug lvm[2845]: check_all_clvmds_running
Mar 19 16:37:02 ben user.debug lvm[2845]: down_callback. node 1088840714, state = 3
Mar 19 16:37:02 ben user.debug lvm[2845]: down_callback. node 837182474, state = 3
Mar 19 16:37:02 ben user.debug lvm[2845]: Got pre command condition...
Mar 19 16:37:02 ben user.debug lvm[2845]: Writing status 0 down pipe 13
Mar 19 16:37:02 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 16:37:02 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x1952520
Mar 19 16:37:02 ben user.debug lvm[2845]: Waiting to do post command - state = 0
Mar 19 16:37:02 ben user.debug lvm[2845]: distribute command: XID = 12, flags=0x0 ()
Mar 19 16:37:02 ben user.debug lvm[2845]: num_nodes = 2
Mar 19 16:37:02 ben user.debug lvm[2845]: add_to_lvmqueue: cmd=0x1952a10. client=0x1952520, msg=0x196bb80, len=24, csid=(nil), xid=12
Mar 19 16:37:02 ben user.debug lvm[2845]: Sending message to all cluster nodes
Mar 19 16:37:02 ben user.debug lvm[2845]: process_work_item: local
Mar 19 16:37:02 ben user.debug lvm[2845]: process_local_command: LOCK_VG (0x33) msg=0x196b6a0, msglen =24, client=0x1952520
Mar 19 16:37:02 ben user.debug lvm[2845]: do_lock_vg: resource 'P_r0', cmd = 0x104 LCK_VG (WRITE|VG|CACHE), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Mar 19 16:37:02 ben user.debug lvm[2845]: Invalidating cached metadata for VG r0
Mar 19 16:37:02 ben user.debug lvm[2845]: Reply from node 31e6640a: 0 bytes
Mar 19 16:37:02 ben user.debug lvm[2845]: Got 1 replies, expecting: 2
Mar 19 16:37:02 ben user.debug lvm[2845]: LVM thread waiting for work
Mar 19 16:37:02 ben user.debug lvm[2845]: 837182474 got message from nodeid 837182474 for 0. len 24
Mar 19 16:37:02 ben user.debug lvm[2845]: 837182474 got message from nodeid 1088840714 for 837182474. len 18
Mar 19 16:37:02 ben user.debug lvm[2845]: Reply from node 40e6640a: 0 bytes
Mar 19 16:37:02 ben user.debug lvm[2845]: Got 2 replies, expecting: 2
Mar 19 16:37:02 ben user.debug lvm[2845]: Got post command condition...
Mar 19 16:37:02 ben user.debug lvm[2845]: Waiting for next pre command
Mar 19 16:37:02 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 16:37:02 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x1952520
Mar 19 16:37:02 ben user.debug lvm[2845]: Send local reply
Mar 19 16:37:02 ben user.debug lvm[2845]: Read on local socket 5, len = 24
Mar 19 16:37:02 ben user.debug lvm[2845]: check_all_clvmds_running
Mar 19 16:37:02 ben user.debug lvm[2845]: down_callback. node 1088840714, state = 3
Mar 19 16:37:02 ben user.debug lvm[2845]: down_callback. node 837182474, state = 3
Mar 19 16:37:02 ben user.debug lvm[2845]: Got pre command condition...
Mar 19 16:37:02 ben user.debug lvm[2845]: Writing status 0 down pipe 13
Mar 19 16:37:02 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 16:37:02 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x1952520
Mar 19 16:37:02 ben user.debug lvm[2845]: Waiting to do post command - state = 0
Mar 19 16:37:02 ben user.debug lvm[2845]: distribute command: XID = 13, flags=0x0 ()
Mar 19 16:37:02 ben user.debug lvm[2845]: num_nodes = 2
Mar 19 16:37:02 ben user.debug lvm[2845]: add_to_lvmqueue: cmd=0x1952a10. client=0x1952520, msg=0x196bb80, len=24, csid=(nil), xid=13
Mar 19 16:37:02 ben user.debug lvm[2845]: Sending message to all cluster nodes
Mar 19 16:37:02 ben user.debug lvm[2845]: process_work_item: local
Mar 19 16:37:02 ben user.debug lvm[2845]: process_local_command: LOCK_VG (0x33) msg=0x1952840, msglen =24, client=0x1952520
Mar 19 16:37:02 ben user.debug lvm[2845]: do_lock_vg: resource 'P_r0', cmd = 0x124 LCK_VG (WRITE|VG|HOLD|CACHE), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Mar 19 16:37:02 ben user.debug lvm[2845]: vg_commit notification for VG r0
Mar 19 16:37:02 ben user.debug lvm[2845]: Reply from node 31e6640a: 0 bytes
Mar 19 16:37:02 ben user.debug lvm[2845]: Got 1 replies, expecting: 2
Mar 19 16:37:02 ben user.debug lvm[2845]: LVM thread waiting for work
Mar 19 16:37:02 ben user.debug lvm[2845]: 837182474 got message from nodeid 837182474 for 0. len 24
Mar 19 16:37:02 ben user.debug lvm[2845]: 837182474 got message from nodeid 1088840714 for 837182474. len 18
Mar 19 16:37:02 ben user.debug lvm[2845]: Reply from node 40e6640a: 0 bytes
Mar 19 16:37:02 ben user.debug lvm[2845]: Got 2 replies, expecting: 2
Mar 19 16:37:02 ben user.debug lvm[2845]: Got post command condition...
Mar 19 16:37:02 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 16:37:02 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x1952520
Mar 19 16:37:02 ben user.debug lvm[2845]: Waiting for next pre command
Mar 19 16:37:02 ben user.debug lvm[2845]: Send local reply
Mar 19 16:37:02 ben user.debug lvm[2845]: Read on local socket 5, len = 22
Mar 19 16:37:02 ben user.debug lvm[2845]: check_all_clvmds_running
Mar 19 16:37:02 ben user.debug lvm[2845]: down_callback. node 1088840714, state = 3
Mar 19 16:37:02 ben user.debug lvm[2845]: down_callback. node 837182474, state = 3
Mar 19 16:37:02 ben user.debug lvm[2845]: Got pre command condition...
Mar 19 16:37:02 ben user.debug lvm[2845]: Writing status 0 down pipe 13
Mar 19 16:37:02 ben user.debug lvm[2845]: Waiting to do post command - state = 0
Mar 19 16:37:02 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 16:37:02 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x1952520
Mar 19 16:37:02 ben user.debug lvm[2845]: distribute command: XID = 14, flags=0x0 ()
Mar 19 16:37:02 ben user.debug lvm[2845]: num_nodes = 2
Mar 19 16:37:02 ben user.debug lvm[2845]: add_to_lvmqueue: cmd=0x1952a10. client=0x1952520, msg=0x196bb80, len=22, csid=(nil), xid=14
Mar 19 16:37:02 ben user.debug lvm[2845]: process_work_item: local
Mar 19 16:37:02 ben user.debug lvm[2845]: process_local_command: VG_BACKUP (0x2b) msg=0x196bcf0, msglen =22, client=0x1952520
Mar 19 16:37:02 ben user.debug lvm[2845]: Reply from node 31e6640a: 0 bytes
Mar 19 16:37:02 ben user.debug lvm[2845]: Got 1 replies, expecting: 2
Mar 19 16:37:02 ben user.debug lvm[2845]: LVM thread waiting for work
Mar 19 16:37:02 ben user.debug lvm[2845]: Sending message to all cluster nodes
Mar 19 16:37:02 ben user.debug lvm[2845]: 837182474 got message from nodeid 837182474 for 0. len 22
Mar 19 16:37:02 ben user.debug lvm[2845]: 837182474 got message from nodeid 1088840714 for 837182474. len 18
Mar 19 16:37:02 ben user.debug lvm[2845]: Reply from node 40e6640a: 0 bytes
Mar 19 16:37:02 ben user.debug lvm[2845]: Got 2 replies, expecting: 2
Mar 19 16:37:02 ben user.debug lvm[2845]: Got post command condition...
Mar 19 16:37:02 ben user.debug lvm[2845]: Waiting for next pre command
Mar 19 16:37:02 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 16:37:02 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x1952520
Mar 19 16:37:02 ben user.debug lvm[2845]: Send local reply
Mar 19 16:37:02 ben user.debug lvm[2845]: Read on local socket 5, len = 84
Mar 19 16:37:02 ben user.debug lvm[2845]: check_all_clvmds_running
Mar 19 16:37:02 ben user.debug lvm[2845]: down_callback. node 1088840714, state = 3
Mar 19 16:37:02 ben user.debug lvm[2845]: down_callback. node 837182474, state = 3
Mar 19 16:37:02 ben user.debug lvm[2845]: Got pre command condition...
Mar 19 16:37:02 ben user.debug lvm[2845]: Writing status 0 down pipe 13
Mar 19 16:37:02 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 16:37:02 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x1952520
Mar 19 16:37:02 ben user.debug lvm[2845]: Waiting to do post command - state = 0
Mar 19 16:37:02 ben user.debug lvm[2845]: distribute command: XID = 15, flags=0x0 ()
Mar 19 16:37:02 ben user.debug lvm[2845]: num_nodes = 2
Mar 19 16:37:02 ben user.debug lvm[2845]: add_to_lvmqueue: cmd=0x1952a10. client=0x1952520, msg=0x1962b10, len=84, csid=(nil), xid=15
Mar 19 16:37:02 ben user.debug lvm[2845]: process_work_item: local
Mar 19 16:37:02 ben user.debug lvm[2845]: process_local_command: LOCK_LV (0x32) msg=0x1962b70, msglen =84, client=0x1952520
Mar 19 16:37:02 ben user.debug lvm[2845]: do_lock_lv: resource 'bZ6A3DuNVyQTbWKDtWgP9vQIzopLKfBNSzatwqVGb32qIU1v00DBvD3nZmiaNYbh', cmd = 0x99 LCK_LV_ACTIVATE (READ|LV|NONBLOCK|CLUSTER_VG), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Mar 19 16:37:02 ben user.debug lvm[2845]: Sending message to all cluster nodes
Mar 19 16:37:02 ben user.debug lvm[2845]: 837182474 got message from nodeid 837182474 for 0. len 84
Mar 19 16:37:02 ben user.debug lvm[2845]: lock_resource 'bZ6A3DuNVyQTbWKDtWgP9vQIzopLKfBNSzatwqVGb32qIU1v00DBvD3nZmiaNYbh', flags=1, mode=1
Mar 19 16:37:02 ben user.debug lvm[2845]: dlm_ls_lock returned 22
Mar 19 16:37:02 ben user.debug lvm[2845]: hold_lock. lock at 1 failed: Invalid argument
Mar 19 16:37:02 ben user.debug lvm[2845]: Command return is 22, critical_section is 0
Mar 19 16:37:02 ben user.debug lvm[2845]: Reply from node 31e6640a: 17 bytes
Mar 19 16:37:02 ben user.debug lvm[2845]: Got 1 replies, expecting: 2
Mar 19 16:37:02 ben user.debug lvm[2845]: LVM thread waiting for work
Mar 19 16:37:02 ben user.debug lvm[2845]: 837182474 got message from nodeid 1088840714 for 837182474. len 35
Mar 19 16:37:02 ben user.debug lvm[2845]: Reply from node 40e6640a: 17 bytes
Mar 19 16:37:02 ben user.debug lvm[2845]: Got 2 replies, expecting: 2
Mar 19 16:37:02 ben user.debug lvm[2845]: Got post command condition...
Mar 19 16:37:02 ben user.debug lvm[2845]: Waiting for next pre command
Mar 19 16:37:02 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 16:37:02 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x1952520
Mar 19 16:37:02 ben user.debug lvm[2845]: Send local reply
Mar 19 16:37:02 ben user.debug lvm[2845]: Read on local socket 5, len = 84
Mar 19 16:37:02 ben user.debug lvm[2845]: check_all_clvmds_running
Mar 19 16:37:02 ben user.debug lvm[2845]: down_callback. node 1088840714, state = 3
Mar 19 16:37:02 ben user.debug lvm[2845]: down_callback. node 837182474, state = 3
Mar 19 16:37:02 ben user.debug lvm[2845]: Got pre command condition...
Mar 19 16:37:02 ben user.debug lvm[2845]: Writing status 0 down pipe 13
Mar 19 16:37:02 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 16:37:02 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x1952520
Mar 19 16:37:02 ben user.debug lvm[2845]: Waiting to do post command - state = 0
Mar 19 16:37:02 ben user.debug lvm[2845]: distribute command: XID = 16, flags=0x0 ()
Mar 19 16:37:02 ben user.debug lvm[2845]: num_nodes = 2
Mar 19 16:37:02 ben user.debug lvm[2845]: add_to_lvmqueue: cmd=0x1952a10. client=0x1952520, msg=0x1962b10, len=84, csid=(nil), xid=16
Mar 19 16:37:02 ben user.debug lvm[2845]: process_work_item: local
Mar 19 16:37:02 ben user.debug lvm[2845]: process_local_command: LOCK_LV (0x32) msg=0x1947120, msglen =84, client=0x1952520
Mar 19 16:37:02 ben user.debug lvm[2845]: do_lock_lv: resource 'bZ6A3DuNVyQTbWKDtWgP9vQIzopLKfBNSzatwqVGb32qIU1v00DBvD3nZmiaNYbh', cmd = 0x98 LCK_LV_DEACTIVATE (NULL|LV|NONBLOCK|CLUSTER_VG), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Mar 19 16:37:02 ben user.debug lvm[2845]: do_deactivate_lock, lock not already held
Mar 19 16:37:02 ben user.debug lvm[2845]: Command return is 0, critical_section is 0
Mar 19 16:37:02 ben user.debug lvm[2845]: Reply from node 31e6640a: 0 bytes
Mar 19 16:37:02 ben user.debug lvm[2845]: Got 1 replies, expecting: 2
Mar 19 16:37:02 ben user.debug lvm[2845]: LVM thread waiting for work
Mar 19 16:37:02 ben user.debug lvm[2845]: Sending message to all cluster nodes
Mar 19 16:37:02 ben user.debug lvm[2845]: 837182474 got message from nodeid 837182474 for 0. len 84
Mar 19 16:37:02 ben user.debug lvm[2845]: 837182474 got message from nodeid 1088840714 for 837182474. len 18
Mar 19 16:37:02 ben user.debug lvm[2845]: Reply from node 40e6640a: 0 bytes
Mar 19 16:37:02 ben user.debug lvm[2845]: Got 2 replies, expecting: 2
Mar 19 16:37:02 ben user.debug lvm[2845]: Got post command condition...
Mar 19 16:37:02 ben user.debug lvm[2845]: Waiting for next pre command
Mar 19 16:37:02 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 16:37:02 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x1952520
Mar 19 16:37:02 ben user.debug lvm[2845]: Send local reply
Mar 19 16:37:02 ben user.debug lvm[2845]: Read on local socket 5, len = 24
Mar 19 16:37:02 ben user.debug lvm[2845]: check_all_clvmds_running
Mar 19 16:37:02 ben user.debug lvm[2845]: down_callback. node 1088840714, state = 3
Mar 19 16:37:02 ben user.debug lvm[2845]: down_callback. node 837182474, state = 3
Mar 19 16:37:02 ben user.debug lvm[2845]: Got pre command condition...
Mar 19 16:37:02 ben user.debug lvm[2845]: Writing status 0 down pipe 13
Mar 19 16:37:02 ben user.debug lvm[2845]: Waiting to do post command - state = 0
Mar 19 16:37:02 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 16:37:02 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x1952520
Mar 19 16:37:02 ben user.debug lvm[2845]: distribute command: XID = 17, flags=0x0 ()
Mar 19 16:37:02 ben user.debug lvm[2845]: num_nodes = 2
Mar 19 16:37:02 ben user.debug lvm[2845]: add_to_lvmqueue: cmd=0x1952a10. client=0x1952520, msg=0x1952840, len=24, csid=(nil), xid=17
Mar 19 16:37:02 ben user.debug lvm[2845]: Sending message to all cluster nodes
Mar 19 16:37:02 ben user.debug lvm[2845]: process_work_item: local
Mar 19 16:37:02 ben user.debug lvm[2845]: process_local_command: LOCK_VG (0x33) msg=0x196bcf0, msglen =24, client=0x1952520
Mar 19 16:37:02 ben user.debug lvm[2845]: do_lock_vg: resource 'P_r0', cmd = 0x104 LCK_VG (WRITE|VG|CACHE), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Mar 19 16:37:02 ben user.debug lvm[2845]: Invalidating cached metadata for VG r0
Mar 19 16:37:02 ben user.debug lvm[2845]: Reply from node 31e6640a: 0 bytes
Mar 19 16:37:02 ben user.debug lvm[2845]: Got 1 replies, expecting: 2
Mar 19 16:37:02 ben user.debug lvm[2845]: LVM thread waiting for work
Mar 19 16:37:02 ben user.debug lvm[2845]: 837182474 got message from nodeid 837182474 for 0. len 24
Mar 19 16:37:02 ben user.debug lvm[2845]: 837182474 got message from nodeid 1088840714 for 837182474. len 18
Mar 19 16:37:02 ben user.debug lvm[2845]: Reply from node 40e6640a: 0 bytes
Mar 19 16:37:02 ben user.debug lvm[2845]: Got 2 replies, expecting: 2
Mar 19 16:37:02 ben user.debug lvm[2845]: Got post command condition...
Mar 19 16:37:02 ben user.debug lvm[2845]: Waiting for next pre command
Mar 19 16:37:02 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 16:37:02 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x1952520
Mar 19 16:37:02 ben user.debug lvm[2845]: Send local reply
Mar 19 16:37:02 ben user.debug lvm[2845]: Read on local socket 5, len = 24
Mar 19 16:37:02 ben user.debug lvm[2845]: check_all_clvmds_running
Mar 19 16:37:02 ben user.debug lvm[2845]: down_callback. node 1088840714, state = 3
Mar 19 16:37:02 ben user.debug lvm[2845]: down_callback. node 837182474, state = 3
Mar 19 16:37:02 ben user.debug lvm[2845]: Got pre command condition...
Mar 19 16:37:02 ben user.debug lvm[2845]: Writing status 0 down pipe 13
Mar 19 16:37:02 ben user.debug lvm[2845]: Waiting to do post command - state = 0
Mar 19 16:37:02 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 16:37:02 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x1952520
Mar 19 16:37:02 ben user.debug lvm[2845]: distribute command: XID = 18, flags=0x0 ()
Mar 19 16:37:02 ben user.debug lvm[2845]: num_nodes = 2
Mar 19 16:37:02 ben user.debug lvm[2845]: add_to_lvmqueue: cmd=0x1952a10. client=0x1952520, msg=0x1952840, len=24, csid=(nil), xid=18
Mar 19 16:37:02 ben user.debug lvm[2845]: Sending message to all cluster nodes
Mar 19 16:37:02 ben user.debug lvm[2845]: process_work_item: local
Mar 19 16:37:02 ben user.debug lvm[2845]: process_local_command: LOCK_VG (0x33) msg=0x196bb80, msglen =24, client=0x1952520
Mar 19 16:37:02 ben user.debug lvm[2845]: do_lock_vg: resource 'P_r0', cmd = 0x124 LCK_VG (WRITE|VG|HOLD|CACHE), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Mar 19 16:37:02 ben user.debug lvm[2845]: vg_commit notification for VG r0
Mar 19 16:37:02 ben user.debug lvm[2845]: Reply from node 31e6640a: 0 bytes
Mar 19 16:37:02 ben user.debug lvm[2845]: Got 1 replies, expecting: 2
Mar 19 16:37:02 ben user.debug lvm[2845]: LVM thread waiting for work
Mar 19 16:37:02 ben user.debug lvm[2845]: 837182474 got message from nodeid 837182474 for 0. len 24
Mar 19 16:37:02 ben user.debug lvm[2845]: 837182474 got message from nodeid 1088840714 for 837182474. len 18
Mar 19 16:37:02 ben user.debug lvm[2845]: Reply from node 40e6640a: 0 bytes
Mar 19 16:37:02 ben user.debug lvm[2845]: Got 2 replies, expecting: 2
Mar 19 16:37:02 ben user.debug lvm[2845]: Got post command condition...
Mar 19 16:37:02 ben user.debug lvm[2845]: Waiting for next pre command
Mar 19 16:37:02 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 16:37:02 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x1952520
Mar 19 16:37:02 ben user.debug lvm[2845]: Send local reply
Mar 19 16:37:02 ben user.debug lvm[2845]: Read on local socket 5, len = 22
Mar 19 16:37:02 ben user.debug lvm[2845]: check_all_clvmds_running
Mar 19 16:37:02 ben user.debug lvm[2845]: down_callback. node 1088840714, state = 3
Mar 19 16:37:02 ben user.debug lvm[2845]: down_callback. node 837182474, state = 3
Mar 19 16:37:02 ben user.debug lvm[2845]: Got pre command condition...
Mar 19 16:37:02 ben user.debug lvm[2845]: Writing status 0 down pipe 13
Mar 19 16:37:02 ben user.debug lvm[2845]: Waiting to do post command - state = 0
Mar 19 16:37:02 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 16:37:02 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x1952520
Mar 19 16:37:02 ben user.debug lvm[2845]: distribute command: XID = 19, flags=0x0 ()
Mar 19 16:37:02 ben user.debug lvm[2845]: num_nodes = 2
Mar 19 16:37:02 ben user.debug lvm[2845]: add_to_lvmqueue: cmd=0x7fa8980113d0. client=0x1952520, msg=0x1952840, len=22, csid=(nil), xid=19
Mar 19 16:37:02 ben user.debug lvm[2845]: Sending message to all cluster nodes
Mar 19 16:37:02 ben user.debug lvm[2845]: process_work_item: local
Mar 19 16:37:02 ben user.debug lvm[2845]: process_local_command: VG_BACKUP (0x2b) msg=0x7fa8980008e0, msglen =22, client=0x1952520
Mar 19 16:37:02 ben user.debug lvm[2845]: Reply from node 31e6640a: 0 bytes
Mar 19 16:37:02 ben user.debug lvm[2845]: Got 1 replies, expecting: 2
Mar 19 16:37:02 ben user.debug lvm[2845]: LVM thread waiting for work
Mar 19 16:37:02 ben user.debug lvm[2845]: 837182474 got message from nodeid 837182474 for 0. len 22
Mar 19 16:37:02 ben user.debug lvm[2845]: 837182474 got message from nodeid 1088840714 for 837182474. len 18
Mar 19 16:37:02 ben user.debug lvm[2845]: Reply from node 40e6640a: 0 bytes
Mar 19 16:37:02 ben user.debug lvm[2845]: Got 2 replies, expecting: 2
Mar 19 16:37:02 ben user.debug lvm[2845]: Got post command condition...
Mar 19 16:37:02 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 16:37:02 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x1952520
Mar 19 16:37:02 ben user.debug lvm[2845]: Waiting for next pre command
Mar 19 16:37:02 ben user.debug lvm[2845]: Send local reply
Mar 19 16:37:02 ben user.debug lvm[2845]: Read on local socket 5, len = 31
Mar 19 16:37:02 ben user.debug lvm[2845]: check_all_clvmds_running
Mar 19 16:37:02 ben user.debug lvm[2845]: down_callback. node 1088840714, state = 3
Mar 19 16:37:02 ben user.debug lvm[2845]: down_callback. node 837182474, state = 3
Mar 19 16:37:02 ben user.debug lvm[2845]: Got pre command condition...
Mar 19 16:37:02 ben user.debug lvm[2845]: Writing status 0 down pipe 13
Mar 19 16:37:02 ben user.debug lvm[2845]: Waiting to do post command - state = 0
Mar 19 16:37:02 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 16:37:02 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x1952520
Mar 19 16:37:02 ben user.debug lvm[2845]: distribute command: XID = 20, flags=0x0 ()
Mar 19 16:37:02 ben user.debug lvm[2845]: num_nodes = 2
Mar 19 16:37:02 ben user.debug lvm[2845]: add_to_lvmqueue: cmd=0x7fa898011400. client=0x1952520, msg=0x7fa898011170, len=31, csid=(nil), xid=20
Mar 19 16:37:02 ben user.debug lvm[2845]: Sending message to all cluster nodes
Mar 19 16:37:02 ben user.debug lvm[2845]: process_work_item: local
Mar 19 16:37:02 ben user.debug lvm[2845]: process_local_command: SYNC_NAMES (0x2d) msg=0x7fa8980111a0, msglen =31, client=0x1952520
Mar 19 16:37:02 ben user.debug lvm[2845]: Syncing device names
Mar 19 16:37:02 ben user.debug lvm[2845]: Reply from node 31e6640a: 0 bytes
Mar 19 16:37:02 ben user.debug lvm[2845]: Got 1 replies, expecting: 2
Mar 19 16:37:02 ben user.debug lvm[2845]: LVM thread waiting for work
Mar 19 16:37:02 ben user.debug lvm[2845]: 837182474 got message from nodeid 837182474 for 0. len 31
Mar 19 16:37:02 ben user.debug lvm[2845]: 837182474 got message from nodeid 1088840714 for 837182474. len 18
Mar 19 16:37:02 ben user.debug lvm[2845]: Reply from node 40e6640a: 0 bytes
Mar 19 16:37:02 ben user.debug lvm[2845]: Got 2 replies, expecting: 2
Mar 19 16:37:02 ben user.debug lvm[2845]: Got post command condition...
Mar 19 16:37:02 ben user.debug lvm[2845]: Waiting for next pre command
Mar 19 16:37:02 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 16:37:02 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x1952520
Mar 19 16:37:02 ben user.debug lvm[2845]: Send local reply
Mar 19 16:37:02 ben user.debug lvm[2845]: Read on local socket 5, len = 24
Mar 19 16:37:02 ben user.debug lvm[2845]: Got pre command condition...
Mar 19 16:37:02 ben user.debug lvm[2845]: doing PRE command LOCK_VG 'V_r0' at 6 (client=0x1952520)
Mar 19 16:37:02 ben user.debug lvm[2845]: unlock_resource: V_r0 lockid: 1
Mar 19 16:37:02 ben user.debug lvm[2845]: Writing status 0 down pipe 13
Mar 19 16:37:02 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 16:37:02 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x1952520
Mar 19 16:37:02 ben user.debug lvm[2845]: Waiting to do post command - state = 0
Mar 19 16:37:02 ben user.debug lvm[2845]: distribute command: XID = 21, flags=0x1 (LOCAL)
Mar 19 16:37:02 ben user.debug lvm[2845]: add_to_lvmqueue: cmd=0x7fa898011170. client=0x1952520, msg=0x7fa8980008e0, len=24, csid=(nil), xid=21
Mar 19 16:37:02 ben user.debug lvm[2845]: process_work_item: local
Mar 19 16:37:02 ben user.debug lvm[2845]: process_local_command: LOCK_VG (0x33) msg=0x7fa8980111b0, msglen =24, client=0x1952520
Mar 19 16:37:02 ben user.debug lvm[2845]: do_lock_vg: resource 'V_r0', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Mar 19 16:37:02 ben user.debug lvm[2845]: Invalidating cached metadata for VG r0
Mar 19 16:37:02 ben user.debug lvm[2845]: Reply from node 31e6640a: 0 bytes
Mar 19 16:37:02 ben user.debug lvm[2845]: Got 1 replies, expecting: 1
Mar 19 16:37:02 ben user.debug lvm[2845]: LVM thread waiting for work
Mar 19 16:37:02 ben user.debug lvm[2845]: Got post command condition...
Mar 19 16:37:02 ben user.debug lvm[2845]: Waiting for next pre command
Mar 19 16:37:02 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 16:37:02 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x1952520
Mar 19 16:37:02 ben user.debug lvm[2845]: Send local reply
Mar 19 16:37:02 ben user.debug lvm[2845]: Read on local socket 5, len = 0
Mar 19 16:37:02 ben user.debug lvm[2845]: EOF on local socket: inprogress=0
Mar 19 16:37:02 ben user.debug lvm[2845]: Waiting for child thread
Mar 19 16:37:02 ben user.debug lvm[2845]: Got pre command condition...
Mar 19 16:37:02 ben user.debug lvm[2845]: Subthread finished
Mar 19 16:37:02 ben user.debug lvm[2845]: Joined child thread
Mar 19 16:37:02 ben user.debug lvm[2845]: ret == 0, errno = 0. removing client
Mar 19 16:37:02 ben user.debug lvm[2845]: add_to_lvmqueue: cmd=0x7fa898011170. client=0x1952520, msg=(nil), len=0, csid=(nil), xid=21
Mar 19 16:37:02 ben user.debug lvm[2845]: process_work_item: free fd -1
Mar 19 16:37:02 ben user.debug lvm[2845]: LVM thread waiting for work
Mar 19 16:39:52 ben user.debug lvm[2845]: 837182474 got message from nodeid 1088840714 for 0. len 31
Mar 19 16:39:52 ben user.debug lvm[2845]: add_to_lvmqueue: cmd=0x7fa898011170. client=0x6940a0, msg=0x7fff3ab71bdc, len=31, csid=0x7fff3ab71a8c, xid=0
Mar 19 16:39:52 ben user.debug lvm[2845]: process_work_item: remote
Mar 19 16:39:52 ben user.debug lvm[2845]: process_remote_command SYNC_NAMES (0x2d) for clientid 0x5000000 XID 1 on node 40e6640a
Mar 19 16:39:52 ben user.debug lvm[2845]: Syncing device names
Mar 19 16:39:52 ben user.debug lvm[2845]: LVM thread waiting for work
Mar 19 16:39:52 ben user.debug lvm[2845]: 837182474 got message from nodeid 837182474 for 1088840714. len 18
Mar 19 16:39:55 ben user.debug lvm[2845]: Got new connection on fd 5
Mar 19 16:39:55 ben user.debug lvm[2845]: Read on local socket 5, len = 24
Mar 19 16:39:55 ben user.debug lvm[2845]: creating pipe, [12, 13]
Mar 19 16:39:55 ben user.debug lvm[2845]: Creating pre&post thread
Mar 19 16:39:55 ben user.debug lvm[2845]: Created pre&post thread, state = 0
Mar 19 16:39:55 ben user.debug lvm[2845]: in sub thread: client = 0x7fa898011170
Mar 19 16:39:55 ben user.debug lvm[2845]: doing PRE command LOCK_VG 'V_r0' at 1 (client=0x7fa898011170)
Mar 19 16:39:55 ben user.debug lvm[2845]: lock_resource 'V_r0', flags=0, mode=3
Mar 19 16:39:55 ben user.debug lvm[2845]: lock_resource returning 0, lock_id=1
Mar 19 16:39:55 ben user.debug lvm[2845]: Writing status 0 down pipe 13
Mar 19 16:39:55 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 16:39:55 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x7fa898011170
Mar 19 16:39:55 ben user.debug lvm[2845]: Waiting to do post command - state = 0
Mar 19 16:39:55 ben user.debug lvm[2845]: distribute command: XID = 22, flags=0x1 (LOCAL)
Mar 19 16:39:55 ben user.debug lvm[2845]: add_to_lvmqueue: cmd=0x7fa898011390. client=0x7fa898011170, msg=0x7fa8980008e0, len=24, csid=(nil), xid=22
Mar 19 16:39:55 ben user.debug lvm[2845]: process_work_item: local
Mar 19 16:39:55 ben user.debug lvm[2845]: process_local_command: LOCK_VG (0x33) msg=0x7fa8980113d0, msglen =24, client=0x7fa898011170
Mar 19 16:39:55 ben user.debug lvm[2845]: do_lock_vg: resource 'V_r0', cmd = 0x1 LCK_VG (READ|VG), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Mar 19 16:39:55 ben user.debug lvm[2845]: Invalidating cached metadata for VG r0
Mar 19 16:39:55 ben user.debug lvm[2845]: Reply from node 31e6640a: 0 bytes
Mar 19 16:39:55 ben user.debug lvm[2845]: Got 1 replies, expecting: 1
Mar 19 16:39:55 ben user.debug lvm[2845]: LVM thread waiting for work
Mar 19 16:39:55 ben user.debug lvm[2845]: Got post command condition...
Mar 19 16:39:55 ben user.debug lvm[2845]: Waiting for next pre command
Mar 19 16:39:55 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 16:39:55 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x7fa898011170
Mar 19 16:39:55 ben user.debug lvm[2845]: Send local reply
Mar 19 16:39:55 ben user.debug lvm[2845]: Read on local socket 5, len = 31
Mar 19 16:39:55 ben user.debug lvm[2845]: check_all_clvmds_running
Mar 19 16:39:55 ben user.debug lvm[2845]: down_callback. node 1088840714, state = 3
Mar 19 16:39:55 ben user.debug lvm[2845]: down_callback. node 837182474, state = 3
Mar 19 16:39:55 ben user.debug lvm[2845]: Got pre command condition...
Mar 19 16:39:55 ben user.debug lvm[2845]: Writing status 0 down pipe 13
Mar 19 16:39:55 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 16:39:55 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x7fa898011170
Mar 19 16:39:55 ben user.debug lvm[2845]: Waiting to do post command - state = 0
Mar 19 16:39:55 ben user.debug lvm[2845]: distribute command: XID = 23, flags=0x0 ()
Mar 19 16:39:55 ben user.debug lvm[2845]: num_nodes = 2
Mar 19 16:39:55 ben user.debug lvm[2845]: add_to_lvmqueue: cmd=0x7fa898011620. client=0x7fa898011170, msg=0x7fa898011390, len=31, csid=(nil), xid=23
Mar 19 16:39:55 ben user.debug lvm[2845]: Sending message to all cluster nodes
Mar 19 16:39:55 ben user.debug lvm[2845]: process_work_item: local
Mar 19 16:39:55 ben user.debug lvm[2845]: process_local_command: SYNC_NAMES (0x2d) msg=0x7fa8980113c0, msglen =31, client=0x7fa898011170
Mar 19 16:39:55 ben user.debug lvm[2845]: Syncing device names
Mar 19 16:39:55 ben user.debug lvm[2845]: Reply from node 31e6640a: 0 bytes
Mar 19 16:39:55 ben user.debug lvm[2845]: Got 1 replies, expecting: 2
Mar 19 16:39:55 ben user.debug lvm[2845]: LVM thread waiting for work
Mar 19 16:39:55 ben user.debug lvm[2845]: 837182474 got message from nodeid 837182474 for 0. len 31
Mar 19 16:39:55 ben user.debug lvm[2845]: 837182474 got message from nodeid 1088840714 for 837182474. len 18
Mar 19 16:39:55 ben user.debug lvm[2845]: Reply from node 40e6640a: 0 bytes
Mar 19 16:39:55 ben user.debug lvm[2845]: Got 2 replies, expecting: 2
Mar 19 16:39:55 ben user.debug lvm[2845]: Got post command condition...
Mar 19 16:39:55 ben user.debug lvm[2845]: Waiting for next pre command
Mar 19 16:39:55 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 16:39:55 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x7fa898011170
Mar 19 16:39:55 ben user.debug lvm[2845]: Send local reply
Mar 19 16:39:55 ben user.debug lvm[2845]: Read on local socket 5, len = 24
Mar 19 16:39:55 ben user.debug lvm[2845]: Got pre command condition...
Mar 19 16:39:55 ben user.debug lvm[2845]: doing PRE command LOCK_VG 'V_r0' at 6 (client=0x7fa898011170)
Mar 19 16:39:55 ben user.debug lvm[2845]: unlock_resource: V_r0 lockid: 1
Mar 19 16:39:55 ben user.debug lvm[2845]: Writing status 0 down pipe 13
Mar 19 16:39:55 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 16:39:55 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x7fa898011170
Mar 19 16:39:55 ben user.debug lvm[2845]: Waiting to do post command - state = 0
Mar 19 16:39:55 ben user.debug lvm[2845]: distribute command: XID = 24, flags=0x1 (LOCAL)
Mar 19 16:39:55 ben user.debug lvm[2845]: add_to_lvmqueue: cmd=0x7fa898011390. client=0x7fa898011170, msg=0x7fa8980008e0, len=24, csid=(nil), xid=24
Mar 19 16:39:55 ben user.debug lvm[2845]: process_work_item: local
Mar 19 16:39:55 ben user.debug lvm[2845]: process_local_command: LOCK_VG (0x33) msg=0x7fa8980113d0, msglen =24, client=0x7fa898011170
Mar 19 16:39:55 ben user.debug lvm[2845]: do_lock_vg: resource 'V_r0', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Mar 19 16:39:55 ben user.debug lvm[2845]: Invalidating cached metadata for VG r0
Mar 19 16:39:55 ben user.debug lvm[2845]: Reply from node 31e6640a: 0 bytes
Mar 19 16:39:55 ben user.debug lvm[2845]: Got 1 replies, expecting: 1
Mar 19 16:39:55 ben user.debug lvm[2845]: LVM thread waiting for work
Mar 19 16:39:55 ben user.debug lvm[2845]: Got post command condition...
Mar 19 16:39:55 ben user.debug lvm[2845]: read on PIPE 12: 4 bytes: status: 0
Mar 19 16:39:55 ben user.debug lvm[2845]: background routine status was 0, sock_client=0x7fa898011170
Mar 19 16:39:55 ben user.debug lvm[2845]: Waiting for next pre command
Mar 19 16:39:55 ben user.debug lvm[2845]: Send local reply
Mar 19 16:39:55 ben user.debug lvm[2845]: Read on local socket 5, len = 0
Mar 19 16:39:55 ben user.debug lvm[2845]: EOF on local socket: inprogress=0
Mar 19 16:39:55 ben user.debug lvm[2845]: Waiting for child thread
Mar 19 16:39:55 ben user.debug lvm[2845]: Got pre command condition...
Mar 19 16:39:55 ben user.debug lvm[2845]: Subthread finished
Mar 19 16:39:55 ben user.debug lvm[2845]: Joined child thread
Mar 19 16:39:55 ben user.debug lvm[2845]: ret == 0, errno = 0. removing client
Mar 19 16:39:55 ben user.debug lvm[2845]: add_to_lvmqueue: cmd=0x7fa898011280. client=0x7fa898011170, msg=(nil), len=0, csid=(nil), xid=24
Mar 19 16:39:55 ben user.debug lvm[2845]: process_work_item: free fd -1
Mar 19 16:39:55 ben user.debug lvm[2845]: LVM thread waiting for work
[-- Attachment #3: bill_debug --]
[-- Type: application/octet-stream, Size: 28463 bytes --]
Mar 19 13:28:28 bill daemon.debug clvmd[5069]: CLVMD started
Mar 19 13:28:28 bill user.debug kernel: [ 120.315423] dlm: clvmd: joining the lockspace group...
Mar 19 13:28:28 bill user.debug kernel: [ 120.318426] dlm: clvmd: dlm_recover 1
Mar 19 13:28:28 bill user.debug kernel: [ 120.318433] dlm: clvmd: group event done 0 0
Mar 19 13:28:28 bill user.debug kernel: [ 120.318495] dlm: clvmd: add member 1088840714
Mar 19 13:28:28 bill user.debug kernel: [ 120.318497] dlm: clvmd: add member 837182474
Mar 19 13:28:28 bill user.debug kernel: [ 120.319338] dlm: clvmd: dlm_recover_members 2 nodes
Mar 19 13:28:28 bill user.debug kernel: [ 120.319342] dlm: clvmd: join complete
Mar 19 13:28:28 bill user.debug kernel: [ 120.541014] dlm: clvmd: generation 4 slots 2 1:837182474 2:1088840714
Mar 19 13:28:28 bill user.debug kernel: [ 120.541018] dlm: clvmd: dlm_recover_directory
Mar 19 13:28:28 bill user.debug kernel: [ 120.582024] dlm: clvmd: dlm_recover_directory 0 in 0 new
Mar 19 13:28:28 bill user.debug kernel: [ 120.663905] dlm: clvmd: dlm_recover_directory 0 out 1 messages
Mar 19 13:28:29 bill user.debug kernel: [ 120.966946] dlm: clvmd: dlm_recover 1 generation 4 done: 425 ms
Mar 19 13:28:29 bill daemon.debug clvmd[5069]: Created DLM lockspace for CLVMD.
Mar 19 13:28:29 bill daemon.debug clvmd[5069]: DLM initialisation complete
Mar 19 13:28:29 bill daemon.debug clvmd[5069]: Our local node id is 1088840714
Mar 19 13:28:29 bill daemon.debug clvmd[5069]: Connected to Corosync
Mar 19 13:28:29 bill daemon.debug clvmd[5069]: Cluster ready, doing some more initialisation
Mar 19 13:28:29 bill daemon.debug clvmd[5069]: starting LVM thread
Mar 19 13:28:29 bill daemon.debug clvmd[5069]: LVM thread function started
Mar 19 13:28:29 bill user.debug lvm[5069]: clvmd ready for work
Mar 19 13:28:29 bill user.debug lvm[5069]: Sub thread ready for work.
Mar 19 13:28:29 bill user.debug lvm[5069]: Using timeout of 60 seconds
Mar 19 13:28:29 bill user.debug lvm[5069]: LVM thread waiting for work
Mar 19 13:28:29 bill user.debug lvm[5069]: confchg callback. 1 joined, 0 left, 2 members
Mar 19 13:28:30 bill user.debug kernel: [ 122.629171] qla2xxx 0000:0e:00.0: irq 117 for MSI/MSI-X
Mar 19 13:28:30 bill user.debug kernel: [ 122.629184] qla2xxx 0000:0e:00.0: irq 118 for MSI/MSI-X
Mar 19 13:28:31 bill user.debug kernel: [ 123.805164] qla2xxx 0000:0e:00.1: irq 119 for MSI/MSI-X
Mar 19 13:28:31 bill user.debug kernel: [ 123.805174] qla2xxx 0000:0e:00.1: irq 120 for MSI/MSI-X
Mar 19 13:28:36 bill user.debug iscsi-scstd: max_data_seg_len 1048576, max_queued_cmds 2048
Mar 19 13:29:26 bill user.debug kernel: [ 178.402539] dlm: clvmd: dlm_recover 3
Mar 19 13:29:26 bill user.debug kernel: [ 178.402628] dlm: clvmd: remove member 837182474
Mar 19 13:29:26 bill user.debug kernel: [ 178.402632] dlm: clvmd: dlm_recover_members 1 nodes
Mar 19 13:29:26 bill user.debug kernel: [ 178.402636] dlm: clvmd: generation 5 slots 1 2:1088840714
Mar 19 13:29:26 bill user.debug kernel: [ 178.402638] dlm: clvmd: dlm_recover_directory
Mar 19 13:29:26 bill user.debug kernel: [ 178.402639] dlm: clvmd: dlm_recover_directory 0 in 0 new
Mar 19 13:29:26 bill user.debug kernel: [ 178.402641] dlm: clvmd: dlm_recover_directory 0 out 0 messages
Mar 19 13:29:26 bill user.debug kernel: [ 178.402644] dlm: clvmd: dlm_recover_masters
Mar 19 13:29:26 bill user.debug kernel: [ 178.402645] dlm: clvmd: dlm_recover_masters 0 of 0
Mar 19 13:29:26 bill user.debug kernel: [ 178.402647] dlm: clvmd: dlm_recover_locks 0 out
Mar 19 13:29:26 bill user.debug kernel: [ 178.402648] dlm: clvmd: dlm_recover_locks 0 in
Mar 19 13:29:26 bill user.debug kernel: [ 178.402671] dlm: clvmd: dlm_recover 3 generation 5 done: 0 ms
Mar 19 13:29:26 bill user.debug lvm[5069]: confchg callback. 0 joined, 1 left, 1 members
Mar 19 13:34:09 bill user.debug kernel: [ 461.552920] dlm: clvmd: dlm_recover 5
Mar 19 13:34:09 bill user.debug kernel: [ 461.553014] dlm: clvmd: add member 837182474
Mar 19 13:34:09 bill user.debug kernel: [ 461.553948] dlm: clvmd: dlm_recover_members 2 nodes
Mar 19 13:34:10 bill user.debug kernel: [ 462.153397] dlm: clvmd: generation 6 slots 2 1:837182474 2:1088840714
Mar 19 13:34:10 bill user.debug kernel: [ 462.153400] dlm: clvmd: dlm_recover_directory
Mar 19 13:34:10 bill user.debug kernel: [ 462.153684] dlm: clvmd: dlm_recover_directory 0 in 0 new
Mar 19 13:34:10 bill user.debug lvm[5069]: confchg callback. 1 joined, 0 left, 2 members
Mar 19 13:34:11 bill user.debug kernel: [ 462.953140] dlm: clvmd: dlm_recover_directory 0 out 1 messages
Mar 19 13:34:12 bill user.debug kernel: [ 463.974428] dlm: clvmd: dlm_recover 5 generation 6 done: 1821 ms
Mar 19 13:34:55 bill user.debug lvm[5069]: 1088840714 got message from nodeid 837182474 for 0. len 24
Mar 19 13:34:55 bill user.debug lvm[5069]: add_to_lvmqueue: cmd=0x242d3f0. client=0x6940a0, msg=0x7fff41373c7c, len=24, csid=0x7fff41373b2c, xid=0
Mar 19 13:34:55 bill user.debug lvm[5069]: process_work_item: remote
Mar 19 13:34:55 bill user.debug lvm[5069]: process_remote_command LOCK_VG (0x33) for clientid 0x5000000 XID 1 on node 31e6640a
Mar 19 13:34:55 bill user.debug lvm[5069]: do_lock_vg: resource 'P_r0', cmd = 0x104 LCK_VG (WRITE|VG|CACHE), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Mar 19 13:34:55 bill user.debug lvm[5069]: Invalidating cached metadata for VG r0
Mar 19 13:34:55 bill user.debug lvm[5069]: LVM thread waiting for work
Mar 19 13:34:55 bill user.debug lvm[5069]: 1088840714 got message from nodeid 1088840714 for 837182474. len 18
Mar 19 13:34:55 bill user.debug lvm[5069]: 1088840714 got message from nodeid 837182474 for 0. len 24
Mar 19 13:34:55 bill user.debug lvm[5069]: add_to_lvmqueue: cmd=0x242d3f0. client=0x6940a0, msg=0x7fff41373c7c, len=24, csid=0x7fff41373b2c, xid=0
Mar 19 13:34:55 bill user.debug lvm[5069]: process_work_item: remote
Mar 19 13:34:55 bill user.debug lvm[5069]: process_remote_command LOCK_VG (0x33) for clientid 0x5000000 XID 2 on node 31e6640a
Mar 19 13:34:55 bill user.debug lvm[5069]: do_lock_vg: resource 'P_r0', cmd = 0x124 LCK_VG (WRITE|VG|HOLD|CACHE), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Mar 19 13:34:55 bill user.debug lvm[5069]: vg_commit notification for VG r0
Mar 19 13:34:55 bill user.debug lvm[5069]: LVM thread waiting for work
Mar 19 13:34:55 bill user.debug lvm[5069]: 1088840714 got message from nodeid 1088840714 for 837182474. len 18
Mar 19 13:34:55 bill user.debug lvm[5069]: 1088840714 got message from nodeid 837182474 for 0. len 22
Mar 19 13:34:55 bill user.debug lvm[5069]: add_to_lvmqueue: cmd=0x242d3f0. client=0x6940a0, msg=0x7fff41373c7c, len=22, csid=0x7fff41373b2c, xid=0
Mar 19 13:34:55 bill user.debug lvm[5069]: process_work_item: remote
Mar 19 13:34:55 bill user.debug lvm[5069]: process_remote_command VG_BACKUP (0x2b) for clientid 0x5000000 XID 3 on node 31e6640a
Mar 19 13:34:55 bill user.debug lvm[5069]: Triggering backup of VG metadata for r0.
Mar 19 13:34:55 bill user.debug lvm[5069]: LVM thread waiting for work
Mar 19 13:34:55 bill user.debug lvm[5069]: 1088840714 got message from nodeid 1088840714 for 837182474. len 18
Mar 19 13:34:55 bill user.debug lvm[5069]: 1088840714 got message from nodeid 837182474 for 0. len 84
Mar 19 13:34:55 bill user.debug lvm[5069]: add_to_lvmqueue: cmd=0x2446790. client=0x6940a0, msg=0x7fff41373c7c, len=84, csid=0x7fff41373b2c, xid=0
Mar 19 13:34:55 bill user.debug lvm[5069]: process_work_item: remote
Mar 19 13:34:55 bill user.debug lvm[5069]: process_remote_command LOCK_LV (0x32) for clientid 0x5000000 XID 4 on node 31e6640a
Mar 19 13:34:55 bill user.debug lvm[5069]: do_lock_lv: resource 'bZ6A3DuNVyQTbWKDtWgP9vQIzopLKfBNYJAvWMwODeOKqbAarOjbvzhzHE5f3Pus', cmd = 0x99 LCK_LV_ACTIVATE (READ|LV|NONBLOCK|CLUSTER_VG), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Mar 19 13:34:55 bill user.debug lvm[5069]: lock_resource 'bZ6A3DuNVyQTbWKDtWgP9vQIzopLKfBNYJAvWMwODeOKqbAarOjbvzhzHE5f3Pus', flags=1, mode=1
Mar 19 13:34:55 bill user.debug lvm[5069]: dlm_ls_lock returned 22
Mar 19 13:34:55 bill user.debug lvm[5069]: hold_lock. lock at 1 failed: Invalid argument
Mar 19 13:34:55 bill user.debug lvm[5069]: Command return is 22, critical_section is 0
Mar 19 13:34:55 bill user.debug lvm[5069]: LVM thread waiting for work
Mar 19 13:34:55 bill user.debug lvm[5069]: 1088840714 got message from nodeid 1088840714 for 837182474. len 35
Mar 19 13:34:55 bill user.debug lvm[5069]: 1088840714 got message from nodeid 837182474 for 0. len 84
Mar 19 13:34:55 bill user.debug lvm[5069]: add_to_lvmqueue: cmd=0x2447380. client=0x6940a0, msg=0x7fff41373c7c, len=84, csid=0x7fff41373b2c, xid=0
Mar 19 13:34:55 bill user.debug lvm[5069]: process_work_item: remote
Mar 19 13:34:55 bill user.debug lvm[5069]: process_remote_command LOCK_LV (0x32) for clientid 0x5000000 XID 5 on node 31e6640a
Mar 19 13:34:55 bill user.debug lvm[5069]: do_lock_lv: resource 'bZ6A3DuNVyQTbWKDtWgP9vQIzopLKfBNYJAvWMwODeOKqbAarOjbvzhzHE5f3Pus', cmd = 0x98 LCK_LV_DEACTIVATE (NULL|LV|NONBLOCK|CLUSTER_VG), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Mar 19 13:34:55 bill user.debug lvm[5069]: do_deactivate_lock, lock not already held
Mar 19 13:34:55 bill user.debug lvm[5069]: Command return is 0, critical_section is 0
Mar 19 13:34:55 bill user.debug lvm[5069]: LVM thread waiting for work
Mar 19 13:34:55 bill user.debug lvm[5069]: 1088840714 got message from nodeid 1088840714 for 837182474. len 18
Mar 19 13:34:55 bill user.debug lvm[5069]: 1088840714 got message from nodeid 837182474 for 0. len 24
Mar 19 13:34:55 bill user.debug lvm[5069]: add_to_lvmqueue: cmd=0x2447380. client=0x6940a0, msg=0x7fff41373c7c, len=24, csid=0x7fff41373b2c, xid=0
Mar 19 13:34:55 bill user.debug lvm[5069]: process_work_item: remote
Mar 19 13:34:55 bill user.debug lvm[5069]: process_remote_command LOCK_VG (0x33) for clientid 0x5000000 XID 6 on node 31e6640a
Mar 19 13:34:55 bill user.debug lvm[5069]: do_lock_vg: resource 'P_r0', cmd = 0x104 LCK_VG (WRITE|VG|CACHE), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Mar 19 13:34:55 bill user.debug lvm[5069]: Invalidating cached metadata for VG r0
Mar 19 13:34:55 bill user.debug lvm[5069]: LVM thread waiting for work
Mar 19 13:34:55 bill user.debug lvm[5069]: 1088840714 got message from nodeid 1088840714 for 837182474. len 18
Mar 19 13:34:55 bill user.debug lvm[5069]: 1088840714 got message from nodeid 837182474 for 0. len 24
Mar 19 13:34:55 bill user.debug lvm[5069]: add_to_lvmqueue: cmd=0x2446790. client=0x6940a0, msg=0x7fff41373c7c, len=24, csid=0x7fff41373b2c, xid=0
Mar 19 13:34:55 bill user.debug lvm[5069]: process_work_item: remote
Mar 19 13:34:55 bill user.debug lvm[5069]: process_remote_command LOCK_VG (0x33) for clientid 0x5000000 XID 7 on node 31e6640a
Mar 19 13:34:55 bill user.debug lvm[5069]: do_lock_vg: resource 'P_r0', cmd = 0x124 LCK_VG (WRITE|VG|HOLD|CACHE), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Mar 19 13:34:55 bill user.debug lvm[5069]: vg_commit notification for VG r0
Mar 19 13:34:55 bill user.debug lvm[5069]: LVM thread waiting for work
Mar 19 13:34:55 bill user.debug lvm[5069]: 1088840714 got message from nodeid 1088840714 for 837182474. len 18
Mar 19 13:34:55 bill user.debug lvm[5069]: 1088840714 got message from nodeid 837182474 for 0. len 22
Mar 19 13:34:55 bill user.debug lvm[5069]: add_to_lvmqueue: cmd=0x2446790. client=0x6940a0, msg=0x7fff41373c7c, len=22, csid=0x7fff41373b2c, xid=0
Mar 19 13:34:55 bill user.debug lvm[5069]: process_work_item: remote
Mar 19 13:34:55 bill user.debug lvm[5069]: process_remote_command VG_BACKUP (0x2b) for clientid 0x5000000 XID 8 on node 31e6640a
Mar 19 13:34:55 bill user.debug lvm[5069]: Triggering backup of VG metadata for r0.
Mar 19 13:34:55 bill user.debug lvm[5069]: LVM thread waiting for work
Mar 19 13:34:55 bill user.debug lvm[5069]: 1088840714 got message from nodeid 1088840714 for 837182474. len 18
Mar 19 13:34:55 bill user.debug lvm[5069]: 1088840714 got message from nodeid 837182474 for 0. len 31
Mar 19 13:34:55 bill user.debug lvm[5069]: add_to_lvmqueue: cmd=0x2446790. client=0x6940a0, msg=0x7fff41373c7c, len=31, csid=0x7fff41373b2c, xid=0
Mar 19 13:34:55 bill user.debug lvm[5069]: process_work_item: remote
Mar 19 13:34:55 bill user.debug lvm[5069]: process_remote_command SYNC_NAMES (0x2d) for clientid 0x5000000 XID 9 on node 31e6640a
Mar 19 13:34:55 bill user.debug lvm[5069]: Syncing device names
Mar 19 13:34:55 bill user.debug lvm[5069]: LVM thread waiting for work
Mar 19 13:34:55 bill user.debug lvm[5069]: 1088840714 got message from nodeid 1088840714 for 837182474. len 18
Mar 19 16:37:02 bill user.debug lvm[5069]: 1088840714 got message from nodeid 837182474 for 0. len 24
Mar 19 16:37:02 bill user.debug lvm[5069]: add_to_lvmqueue: cmd=0x2446790. client=0x6940a0, msg=0x7fff41373c7c, len=24, csid=0x7fff41373b2c, xid=0
Mar 19 16:37:02 bill user.debug lvm[5069]: process_work_item: remote
Mar 19 16:37:02 bill user.debug lvm[5069]: process_remote_command LOCK_VG (0x33) for clientid 0x5000000 XID 12 on node 31e6640a
Mar 19 16:37:02 bill user.debug lvm[5069]: do_lock_vg: resource 'P_r0', cmd = 0x104 LCK_VG (WRITE|VG|CACHE), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Mar 19 16:37:02 bill user.debug lvm[5069]: Invalidating cached metadata for VG r0
Mar 19 16:37:02 bill user.debug lvm[5069]: LVM thread waiting for work
Mar 19 16:37:02 bill user.debug lvm[5069]: 1088840714 got message from nodeid 1088840714 for 837182474. len 18
Mar 19 16:37:02 bill user.debug lvm[5069]: 1088840714 got message from nodeid 837182474 for 0. len 24
Mar 19 16:37:02 bill user.debug lvm[5069]: add_to_lvmqueue: cmd=0x2446790. client=0x6940a0, msg=0x7fff41373c7c, len=24, csid=0x7fff41373b2c, xid=0
Mar 19 16:37:02 bill user.debug lvm[5069]: process_work_item: remote
Mar 19 16:37:02 bill user.debug lvm[5069]: process_remote_command LOCK_VG (0x33) for clientid 0x5000000 XID 13 on node 31e6640a
Mar 19 16:37:02 bill user.debug lvm[5069]: do_lock_vg: resource 'P_r0', cmd = 0x124 LCK_VG (WRITE|VG|HOLD|CACHE), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Mar 19 16:37:02 bill user.debug lvm[5069]: vg_commit notification for VG r0
Mar 19 16:37:02 bill user.debug lvm[5069]: LVM thread waiting for work
Mar 19 16:37:02 bill user.debug lvm[5069]: 1088840714 got message from nodeid 1088840714 for 837182474. len 18
Mar 19 16:37:02 bill user.debug lvm[5069]: 1088840714 got message from nodeid 837182474 for 0. len 22
Mar 19 16:37:02 bill user.debug lvm[5069]: add_to_lvmqueue: cmd=0x2446790. client=0x6940a0, msg=0x7fff41373c7c, len=22, csid=0x7fff41373b2c, xid=0
Mar 19 16:37:02 bill user.debug lvm[5069]: process_work_item: remote
Mar 19 16:37:02 bill user.debug lvm[5069]: process_remote_command VG_BACKUP (0x2b) for clientid 0x5000000 XID 14 on node 31e6640a
Mar 19 16:37:02 bill user.debug lvm[5069]: Triggering backup of VG metadata for r0.
Mar 19 16:37:02 bill user.debug lvm[5069]: LVM thread waiting for work
Mar 19 16:37:02 bill user.debug lvm[5069]: 1088840714 got message from nodeid 1088840714 for 837182474. len 18
Mar 19 16:37:02 bill user.debug lvm[5069]: 1088840714 got message from nodeid 837182474 for 0. len 84
Mar 19 16:37:02 bill user.debug lvm[5069]: add_to_lvmqueue: cmd=0x2446790. client=0x6940a0, msg=0x7fff41373c7c, len=84, csid=0x7fff41373b2c, xid=0
Mar 19 16:37:02 bill user.debug lvm[5069]: process_work_item: remote
Mar 19 16:37:02 bill user.debug lvm[5069]: process_remote_command LOCK_LV (0x32) for clientid 0x5000000 XID 15 on node 31e6640a
Mar 19 16:37:02 bill user.debug lvm[5069]: do_lock_lv: resource 'bZ6A3DuNVyQTbWKDtWgP9vQIzopLKfBNSzatwqVGb32qIU1v00DBvD3nZmiaNYbh', cmd = 0x99 LCK_LV_ACTIVATE (READ|LV|NONBLOCK|CLUSTER_VG), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Mar 19 16:37:02 bill user.debug lvm[5069]: lock_resource 'bZ6A3DuNVyQTbWKDtWgP9vQIzopLKfBNSzatwqVGb32qIU1v00DBvD3nZmiaNYbh', flags=1, mode=1
Mar 19 16:37:02 bill user.debug lvm[5069]: dlm_ls_lock returned 22
Mar 19 16:37:02 bill user.debug lvm[5069]: hold_lock. lock at 1 failed: Invalid argument
Mar 19 16:37:02 bill user.debug lvm[5069]: Command return is 22, critical_section is 0
Mar 19 16:37:02 bill user.debug lvm[5069]: LVM thread waiting for work
Mar 19 16:37:02 bill user.debug lvm[5069]: 1088840714 got message from nodeid 1088840714 for 837182474. len 35
Mar 19 16:37:02 bill user.debug lvm[5069]: 1088840714 got message from nodeid 837182474 for 0. len 84
Mar 19 16:37:02 bill user.debug lvm[5069]: add_to_lvmqueue: cmd=0x2446790. client=0x6940a0, msg=0x7fff41373c7c, len=84, csid=0x7fff41373b2c, xid=0
Mar 19 16:37:02 bill user.debug lvm[5069]: process_work_item: remote
Mar 19 16:37:02 bill user.debug lvm[5069]: process_remote_command LOCK_LV (0x32) for clientid 0x5000000 XID 16 on node 31e6640a
Mar 19 16:37:02 bill user.debug lvm[5069]: do_lock_lv: resource 'bZ6A3DuNVyQTbWKDtWgP9vQIzopLKfBNSzatwqVGb32qIU1v00DBvD3nZmiaNYbh', cmd = 0x98 LCK_LV_DEACTIVATE (NULL|LV|NONBLOCK|CLUSTER_VG), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Mar 19 16:37:02 bill user.debug lvm[5069]: do_deactivate_lock, lock not already held
Mar 19 16:37:02 bill user.debug lvm[5069]: Command return is 0, critical_section is 0
Mar 19 16:37:02 bill user.debug lvm[5069]: LVM thread waiting for work
Mar 19 16:37:02 bill user.debug lvm[5069]: 1088840714 got message from nodeid 1088840714 for 837182474. len 18
Mar 19 16:37:02 bill user.debug lvm[5069]: 1088840714 got message from nodeid 837182474 for 0. len 24
Mar 19 16:37:02 bill user.debug lvm[5069]: add_to_lvmqueue: cmd=0x2446790. client=0x6940a0, msg=0x7fff41373c7c, len=24, csid=0x7fff41373b2c, xid=0
Mar 19 16:37:02 bill user.debug lvm[5069]: process_work_item: remote
Mar 19 16:37:02 bill user.debug lvm[5069]: process_remote_command LOCK_VG (0x33) for clientid 0x5000000 XID 17 on node 31e6640a
Mar 19 16:37:02 bill user.debug lvm[5069]: do_lock_vg: resource 'P_r0', cmd = 0x104 LCK_VG (WRITE|VG|CACHE), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Mar 19 16:37:02 bill user.debug lvm[5069]: Invalidating cached metadata for VG r0
Mar 19 16:37:02 bill user.debug lvm[5069]: LVM thread waiting for work
Mar 19 16:37:02 bill user.debug lvm[5069]: 1088840714 got message from nodeid 1088840714 for 837182474. len 18
Mar 19 16:37:02 bill user.debug lvm[5069]: 1088840714 got message from nodeid 837182474 for 0. len 24
Mar 19 16:37:02 bill user.debug lvm[5069]: add_to_lvmqueue: cmd=0x2446790. client=0x6940a0, msg=0x7fff41373c7c, len=24, csid=0x7fff41373b2c, xid=0
Mar 19 16:37:02 bill user.debug lvm[5069]: process_work_item: remote
Mar 19 16:37:02 bill user.debug lvm[5069]: process_remote_command LOCK_VG (0x33) for clientid 0x5000000 XID 18 on node 31e6640a
Mar 19 16:37:02 bill user.debug lvm[5069]: do_lock_vg: resource 'P_r0', cmd = 0x124 LCK_VG (WRITE|VG|HOLD|CACHE), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Mar 19 16:37:02 bill user.debug lvm[5069]: vg_commit notification for VG r0
Mar 19 16:37:02 bill user.debug lvm[5069]: LVM thread waiting for work
Mar 19 16:37:02 bill user.debug lvm[5069]: 1088840714 got message from nodeid 1088840714 for 837182474. len 18
Mar 19 16:37:02 bill user.debug lvm[5069]: 1088840714 got message from nodeid 837182474 for 0. len 22
Mar 19 16:37:02 bill user.debug lvm[5069]: add_to_lvmqueue: cmd=0x2446790. client=0x6940a0, msg=0x7fff41373c7c, len=22, csid=0x7fff41373b2c, xid=0
Mar 19 16:37:02 bill user.debug lvm[5069]: process_work_item: remote
Mar 19 16:37:02 bill user.debug lvm[5069]: process_remote_command VG_BACKUP (0x2b) for clientid 0x5000000 XID 19 on node 31e6640a
Mar 19 16:37:02 bill user.debug lvm[5069]: Triggering backup of VG metadata for r0.
Mar 19 16:37:02 bill user.debug lvm[5069]: LVM thread waiting for work
Mar 19 16:37:02 bill user.debug lvm[5069]: 1088840714 got message from nodeid 1088840714 for 837182474. len 18
Mar 19 16:37:02 bill user.debug lvm[5069]: 1088840714 got message from nodeid 837182474 for 0. len 31
Mar 19 16:37:02 bill user.debug lvm[5069]: add_to_lvmqueue: cmd=0x2446790. client=0x6940a0, msg=0x7fff41373c7c, len=31, csid=0x7fff41373b2c, xid=0
Mar 19 16:37:02 bill user.debug lvm[5069]: process_work_item: remote
Mar 19 16:37:02 bill user.debug lvm[5069]: process_remote_command SYNC_NAMES (0x2d) for clientid 0x5000000 XID 20 on node 31e6640a
Mar 19 16:37:02 bill user.debug lvm[5069]: Syncing device names
Mar 19 16:37:02 bill user.debug lvm[5069]: LVM thread waiting for work
Mar 19 16:37:02 bill user.debug lvm[5069]: 1088840714 got message from nodeid 1088840714 for 837182474. len 18
Mar 19 16:39:52 bill user.debug lvm[5069]: Got new connection on fd 5
Mar 19 16:39:52 bill user.debug lvm[5069]: Read on local socket 5, len = 24
Mar 19 16:39:52 bill user.debug lvm[5069]: creating pipe, [12, 13]
Mar 19 16:39:52 bill user.debug lvm[5069]: Creating pre&post thread
Mar 19 16:39:52 bill user.debug lvm[5069]: Created pre&post thread, state = 0
Mar 19 16:39:52 bill user.debug lvm[5069]: in sub thread: client = 0x2445560
Mar 19 16:39:52 bill user.debug lvm[5069]: doing PRE command LOCK_VG 'V_r0' at 1 (client=0x2445560)
Mar 19 16:39:52 bill user.debug lvm[5069]: lock_resource 'V_r0', flags=0, mode=3
Mar 19 16:39:52 bill user.debug lvm[5069]: lock_resource returning 0, lock_id=1
Mar 19 16:39:52 bill user.debug lvm[5069]: Writing status 0 down pipe 13
Mar 19 16:39:52 bill user.debug lvm[5069]: read on PIPE 12: 4 bytes: status: 0
Mar 19 16:39:52 bill user.debug lvm[5069]: background routine status was 0, sock_client=0x2445560
Mar 19 16:39:52 bill user.debug lvm[5069]: Waiting to do post command - state = 0
Mar 19 16:39:52 bill user.debug lvm[5069]: distribute command: XID = 0, flags=0x1 (LOCAL)
Mar 19 16:39:52 bill user.debug lvm[5069]: add_to_lvmqueue: cmd=0x2446790. client=0x2445560, msg=0x24464e0, len=24, csid=(nil), xid=0
Mar 19 16:39:52 bill user.debug lvm[5069]: process_work_item: local
Mar 19 16:39:52 bill user.debug lvm[5069]: process_local_command: LOCK_VG (0x33) msg=0x2445a70, msglen =24, client=0x2445560
Mar 19 16:39:52 bill user.debug lvm[5069]: do_lock_vg: resource 'V_r0', cmd = 0x1 LCK_VG (READ|VG), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Mar 19 16:39:52 bill user.debug lvm[5069]: Invalidating cached metadata for VG r0
Mar 19 16:39:52 bill user.debug lvm[5069]: Reply from node 40e6640a: 0 bytes
Mar 19 16:39:52 bill user.debug lvm[5069]: Got 1 replies, expecting: 1
Mar 19 16:39:52 bill user.debug lvm[5069]: LVM thread waiting for work
Mar 19 16:39:52 bill user.debug lvm[5069]: Got post command condition...
Mar 19 16:39:52 bill user.debug lvm[5069]: read on PIPE 12: 4 bytes: status: 0
Mar 19 16:39:52 bill user.debug lvm[5069]: background routine status was 0, sock_client=0x2445560
Mar 19 16:39:52 bill user.debug lvm[5069]: Waiting for next pre command
Mar 19 16:39:52 bill user.debug lvm[5069]: Send local reply
Mar 19 16:39:52 bill user.debug lvm[5069]: Read on local socket 5, len = 31
Mar 19 16:39:52 bill user.debug lvm[5069]: check_all_clvmds_running
Mar 19 16:39:52 bill user.debug lvm[5069]: down_callback. node 1088840714, state = 3
Mar 19 16:39:52 bill user.debug lvm[5069]: down_callback. node 837182474, state = 3
Mar 19 16:39:52 bill user.debug lvm[5069]: Got pre command condition...
Mar 19 16:39:52 bill user.debug lvm[5069]: Writing status 0 down pipe 13
Mar 19 16:39:52 bill user.debug lvm[5069]: read on PIPE 12: 4 bytes: status: 0
Mar 19 16:39:52 bill user.debug lvm[5069]: background routine status was 0, sock_client=0x2445560
Mar 19 16:39:52 bill user.debug lvm[5069]: Waiting to do post command - state = 0
Mar 19 16:39:52 bill user.debug lvm[5069]: distribute command: XID = 1, flags=0x0 ()
Mar 19 16:39:52 bill user.debug lvm[5069]: num_nodes = 2
Mar 19 16:39:52 bill user.debug lvm[5069]: add_to_lvmqueue: cmd=0x244b420. client=0x2445560, msg=0x2446790, len=31, csid=(nil), xid=1
Mar 19 16:39:52 bill user.debug lvm[5069]: Sending message to all cluster nodes
Mar 19 16:39:52 bill user.debug lvm[5069]: process_work_item: local
Mar 19 16:39:52 bill user.debug lvm[5069]: process_local_command: SYNC_NAMES (0x2d) msg=0x2446e00, msglen =31, client=0x2445560
Mar 19 16:39:52 bill user.debug lvm[5069]: Syncing device names
Mar 19 16:39:52 bill user.debug lvm[5069]: Reply from node 40e6640a: 0 bytes
Mar 19 16:39:52 bill user.debug lvm[5069]: Got 1 replies, expecting: 2
Mar 19 16:39:52 bill user.debug lvm[5069]: LVM thread waiting for work
Mar 19 16:39:52 bill user.debug lvm[5069]: 1088840714 got message from nodeid 1088840714 for 0. len 31
Mar 19 16:39:52 bill user.debug lvm[5069]: 1088840714 got message from nodeid 837182474 for 1088840714. len 18
Mar 19 16:39:52 bill user.debug lvm[5069]: Reply from node 31e6640a: 0 bytes
Mar 19 16:39:52 bill user.debug lvm[5069]: Got 2 replies, expecting: 2
Mar 19 16:39:52 bill user.debug lvm[5069]: Got post command condition...
Mar 19 16:39:52 bill user.debug lvm[5069]: Waiting for next pre command
Mar 19 16:39:52 bill user.debug lvm[5069]: read on PIPE 12: 4 bytes: status: 0
Mar 19 16:39:52 bill user.debug lvm[5069]: background routine status was 0, sock_client=0x2445560
Mar 19 16:39:52 bill user.debug lvm[5069]: Send local reply
Mar 19 16:39:52 bill user.debug lvm[5069]: Read on local socket 5, len = 24
Mar 19 16:39:52 bill user.debug lvm[5069]: Got pre command condition...
Mar 19 16:39:52 bill user.debug lvm[5069]: doing PRE command LOCK_VG 'V_r0' at 6 (client=0x2445560)
Mar 19 16:39:52 bill user.debug lvm[5069]: unlock_resource: V_r0 lockid: 1
Mar 19 16:39:52 bill user.debug lvm[5069]: Writing status 0 down pipe 13
Mar 19 16:39:52 bill user.debug lvm[5069]: read on PIPE 12: 4 bytes: status: 0
Mar 19 16:39:52 bill user.debug lvm[5069]: background routine status was 0, sock_client=0x2445560
Mar 19 16:39:52 bill user.debug lvm[5069]: Waiting to do post command - state = 0
Mar 19 16:39:52 bill user.debug lvm[5069]: distribute command: XID = 2, flags=0x1 (LOCAL)
Mar 19 16:39:52 bill user.debug lvm[5069]: add_to_lvmqueue: cmd=0x2446790. client=0x2445560, msg=0x2445a70, len=24, csid=(nil), xid=2
Mar 19 16:39:52 bill user.debug lvm[5069]: process_work_item: local
Mar 19 16:39:52 bill user.debug lvm[5069]: process_local_command: LOCK_VG (0x33) msg=0x24464e0, msglen =24, client=0x2445560
Mar 19 16:39:52 bill user.debug lvm[5069]: do_lock_vg: resource 'V_r0', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Mar 19 16:39:52 bill user.debug lvm[5069]: Invalidating cached metadata for VG r0
Mar 19 16:39:52 bill user.debug lvm[5069]: Reply from node 40e6640a: 0 bytes
Mar 19 16:39:52 bill user.debug lvm[5069]: Got 1 replies, expecting: 1
Mar 19 16:39:52 bill user.debug lvm[5069]: LVM thread waiting for work
Mar 19 16:39:52 bill user.debug lvm[5069]: Got post command condition...
Mar 19 16:39:52 bill user.debug lvm[5069]: read on PIPE 12: 4 bytes: status: 0
Mar 19 16:39:52 bill user.debug lvm[5069]: background routine status was 0, sock_client=0x2445560
Mar 19 16:39:52 bill user.debug lvm[5069]: Waiting for next pre command
Mar 19 16:39:52 bill user.debug lvm[5069]: Send local reply
Mar 19 16:39:52 bill user.debug lvm[5069]: Read on local socket 5, len = 0
Mar 19 16:39:52 bill user.debug lvm[5069]: EOF on local socket: inprogress=0
Mar 19 16:39:52 bill user.debug lvm[5069]: Waiting for child thread
Mar 19 16:39:52 bill user.debug lvm[5069]: Got pre command condition...
Mar 19 16:39:52 bill user.debug lvm[5069]: Subthread finished
Mar 19 16:39:52 bill user.debug lvm[5069]: Joined child thread
Mar 19 16:39:52 bill user.debug lvm[5069]: ret == 0, errno = 0. removing client
Mar 19 16:39:52 bill user.debug lvm[5069]: add_to_lvmqueue: cmd=0x24472e0. client=0x2445560, msg=(nil), len=0, csid=(nil), xid=2
Mar 19 16:39:52 bill user.debug lvm[5069]: process_work_item: free fd -1
Mar 19 16:39:52 bill user.debug lvm[5069]: LVM thread waiting for work
Mar 19 16:39:55 bill user.debug lvm[5069]: 1088840714 got message from nodeid 837182474 for 0. len 31
Mar 19 16:39:55 bill user.debug lvm[5069]: add_to_lvmqueue: cmd=0x24472e0. client=0x6940a0, msg=0x7fff41373c7c, len=31, csid=0x7fff41373b2c, xid=0
Mar 19 16:39:55 bill user.debug lvm[5069]: process_work_item: remote
Mar 19 16:39:55 bill user.debug lvm[5069]: process_remote_command SYNC_NAMES (0x2d) for clientid 0x5000000 XID 23 on node 31e6640a
Mar 19 16:39:55 bill user.debug lvm[5069]: Syncing device names
Mar 19 16:39:55 bill user.debug lvm[5069]: LVM thread waiting for work
Mar 19 16:39:55 bill user.debug lvm[5069]: 1088840714 got message from nodeid 1088840714 for 837182474. len 18
* Re: [linux-lvm] Error Attempting to Create LV on Clustered VG
2013-03-19 16:58 [linux-lvm] Error Attempting to Create LV on Clustered VG Marc Smith
@ 2013-03-20 9:38 ` Jacek Konieczny
2013-03-20 14:55 ` David Teigland
0 siblings, 1 reply; 4+ messages in thread
From: Jacek Konieczny @ 2013-03-20 9:38 UTC (permalink / raw)
To: linux-lvm
On Tue, 19 Mar 2013 12:58:53 -0400
Marc Smith <marc.smith@mcc.edu> wrote:
> Hi,
>
> I'm attempting to create a logical volume on a clustered volume group
> in a (2) node Pacemaker + Corosync cluster (active/active). I have
> successfully created the VG and it appears available on both hosts:
>
> [root@bill ~]# vgs
> VG #PV #LV #SN Attr VSize VFree
> r0 1 0 0 wz--nc 203.24g 203.24g
>
> [root@ben ~]# vgs
> VG #PV #LV #SN Attr VSize VFree
> r0 1 0 0 wz--nc 203.24g 203.24g
>
> When I attempt to create a LV on this VG from one of the hosts, I get
> the following error message:
> [root@ben ~]# lvcreate -L 150G -n testvmfs1 r0
> Error locking on node 40e6640a: Invalid argument
> Error locking on node 31e6640a: Invalid argument
> Failed to activate new LV.
[...]
> I've Google'd around a bit but haven't found any other posts quite
> like this one. Here are the software versions I'm using on these
> hosts:
> Linux kernel: 3.7.8
> LVM2: 2.02.97
> DLM User Tools (dlm_controld): 4.0.1
> Corosync: 2.3.0
> Pacemaker: 1.1.8
Are you using a 64-bit environment? If so, this might be the bug in the
3.7 kernel DLM driver:
http://www.redhat.com/archives/cluster-devel/2013-February/msg00060.html
And here is the patch:
http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=d4b0bcf32b946590afd29e202d6a399b84fe6c67
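A quick way to confirm whether that combination applies -- just a sketch
using standard commands, nothing specific to this bug -- is to check the
architecture and kernel series on each node:

  # 64-bit userspace/kernel is the affected combination
  uname -m    # x86_64 here would match the report
  # anything in the 3.7.x series predates the fix
  uname -r

If both match, a patched kernel (or a backport of the commit above) should
clear the "dlm_ls_lock returned 22" (EINVAL) seen in the clvmd debug logs.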
Greets,
Jacek
* Re: [linux-lvm] Error Attempting to Create LV on Clustered VG
2013-03-20 9:38 ` Jacek Konieczny
@ 2013-03-20 14:55 ` David Teigland
2013-03-20 18:48 ` Marc Smith
0 siblings, 1 reply; 4+ messages in thread
From: David Teigland @ 2013-03-20 14:55 UTC (permalink / raw)
To: Marc Smith; +Cc: linux-lvm
> > I've Google'd around a bit but haven't found any other posts quite
> > like this one. Here are the software versions I'm using on these
> > hosts:
> > Linux kernel: 3.7.8
> > LVM2: 2.02.97
> > DLM User Tools (dlm_controld): 4.0.1
> > Corosync: 2.3.0
> > Pacemaker: 1.1.8
>
> Are you using a 64-bit environment? If so, this might be the bug in the
> 3.7 kernel DLM driver:
>
> http://www.redhat.com/archives/cluster-devel/2013-February/msg00060.html
> http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=d4b0bcf32b946590afd29e202d6a399b84fe6c67
The fix is in the 3.8 kernel.
> > When I attempt to create a LV on this VG from one of the hosts, I get
> > the following error message:
> > [root@ben ~]# lvcreate -L 150G -n testvmfs1 r0
> > Error locking on node 40e6640a: Invalid argument
> > Error locking on node 31e6640a: Invalid argument
> > Failed to activate new LV.
I'd also advise setting up proper node definitions in corosync.conf;
it avoids the invalid nodeid issue and makes everything cleaner.
Here's what I use:
totem {
        version: 2
        secauth: off
        cluster_name: foo
}

nodelist {
        node {
                ring0_addr: 1.1.1.1
                nodeid: 1
        }
        node {
                ring0_addr: 2.2.2.2
                nodeid: 2
        }
}

quorum {
        provider: corosync_votequorum
        two_node: 1
}

logging {
        to_syslog: yes
}
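After restarting corosync (and clvmd) on both nodes with a nodelist like
that, the small static nodeids should show up in the membership. As a rough
illustration -- exact output will differ -- something like:

  corosync-quorumtool -l    # membership listed with nodeid 1 and 2
  dlm_tool status           # dlm should report the same small nodeids

replaces the large IP-derived ids (837182474, 1088840714) seen in the logs
above.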
* Re: [linux-lvm] Error Attempting to Create LV on Clustered VG
2013-03-20 14:55 ` David Teigland
@ 2013-03-20 18:48 ` Marc Smith
0 siblings, 0 replies; 4+ messages in thread
From: Marc Smith @ 2013-03-20 18:48 UTC (permalink / raw)
To: David Teigland; +Cc: linux-lvm
Jacek: Thank you, that solved the problem.
David: Thanks for the tip, I will use it.
--Marc
On Wed, Mar 20, 2013 at 10:55 AM, David Teigland <teigland@redhat.com> wrote:
>> > I've Google'd around a bit but haven't found any other posts quite
>> > like this one. Here are the software versions I'm using on these
>> > hosts:
>> > Linux kernel: 3.7.8
>> > LVM2: 2.02.97
>> > DLM User Tools (dlm_controld): 4.0.1
>> > Corosync: 2.3.0
>> > Pacemaker: 1.1.8
>>
>> Are you using a 64-bit environment? If so, this might be the bug in the
>> 3.7 kernel DLM driver:
>>
>> http://www.redhat.com/archives/cluster-devel/2013-February/msg00060.html
>> http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=d4b0bcf32b946590afd29e202d6a399b84fe6c67
>
> The fix is in the 3.8 kernel.
>
>> > When I attempt to create a LV on this VG from one of the hosts, I get
>> > the following error message:
>> > [root@ben ~]# lvcreate -L 150G -n testvmfs1 r0
>> > Error locking on node 40e6640a: Invalid argument
>> > Error locking on node 31e6640a: Invalid argument
>> > Failed to activate new LV.
>
> I'd also advise setting up proper node definitions in corosync.conf;
> it avoids the invalid nodeid issue and makes everything cleaner.
> Here's what I use:
>
> totem {
>         version: 2
>         secauth: off
>         cluster_name: foo
> }
>
> nodelist {
>         node {
>                 ring0_addr: 1.1.1.1
>                 nodeid: 1
>         }
>         node {
>                 ring0_addr: 2.2.2.2
>                 nodeid: 2
>         }
> }
>
> quorum {
>         provider: corosync_votequorum
>         two_node: 1
> }
>
> logging {
>         to_syslog: yes
> }
>