[Cluster-devel] "->ls_in_recovery" not released

From: Menyhart Zoltan @ 2010-11-22 16:31 UTC
To: cluster-devel.redhat.com

Hi,

We have a two-node OCFS2 file system controlled by Pacemaker.
We run some robustness tests, e.g. cutting off access to the "other"
node. The "local" machine then gets blocked:

  PID: 15617  TASK: ffff880c77572d90  CPU: 38  COMMAND: "dlm_recoverd"
  #0 [ffff880c7cb07c30] schedule at ffffffff81452830
  #1 [ffff880c7cb07cf8] dlm_wait_function at ffffffffa03aaffb
  #2 [ffff880c7cb07d68] dlm_rcom_status at ffffffffa03aa3d9
                        ping_members
  #3 [ffff880c7cb07db8] dlm_recover_members at ffffffffa03a58a3
                        ls_recover
                        do_ls_recovery
  #4 [ffff880c7cb07e48] dlm_recoverd at ffffffffa03abc89
  #5 [ffff880c7cb07ee8] kthread at ffffffff810820f6
  #6 [ffff880c7cb07f48] kernel_thread at ffffffff8100d1aa

If the monitor device is closed, or someone writes a "stop" to the
control device, then "ls_recover()" takes the "fail:" branch without
releasing "->ls_in_recovery".
As a result, OCFS2 operations remain blocked, e.g.:

PID: 3385   TASK: ffff880876e69520  CPU: 1   COMMAND: "bash"
  #0 [ffff88087cb91980] schedule at ffffffff81452830
  #1 [ffff88087cb91a48] rwsem_down_failed_common at ffffffff81454c95
  #2 [ffff88087cb91a98] rwsem_down_read_failed at ffffffff81454e26
  #3 [ffff88087cb91ad8] call_rwsem_down_read_failed at ffffffff81248004
  #4 [ffff88087cb91b40] dlm_lock at ffffffffa03a17b2
  #5 [ffff88087cb91c00] user_dlm_lock at ffffffffa020d18e
  #6 [ffff88087cb91c30] ocfs2_dlm_lock at ffffffffa00683c2
  #7 [ffff88087cb91c40] __ocfs2_cluster_lock at ffffffffa04f951c
  #8 [ffff88087cb91d60] ocfs2_inode_lock_full_nested at ffffffffa04fd800
  #9 [ffff88087cb91df0] ocfs2_inode_revalidate at ffffffffa0507566
#10 [ffff88087cb91e20] ocfs2_getattr at ffffffffa050270b
#11 [ffff88087cb91e60] vfs_getattr at ffffffff8115cac1
#12 [ffff88087cb91ea0] vfs_fstatat at ffffffff8115cb50
#13 [ffff88087cb91ee0] vfs_stat at ffffffff8115cc9b
#14 [ffff88087cb91ef0] sys_newstat at ffffffff8115ccc4
#15 [ffff88087cb91f80] system_call_fastpath at ffffffff8100c172
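
To make the failure mode concrete, here is a minimal user-space
analogy (my own toy code, not DLM code: a POSIX rwlock stands in for
the "ls_in_recovery" rwsem, and all names are made up). The
"recovery" thread write-locks it and returns on its error path
without unlocking, so every later reader blocks for good:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_rwlock_t in_recovery = PTHREAD_RWLOCK_INITIALIZER;

static void *recoverd(void *arg)
{
	pthread_rwlock_wrlock(&in_recovery);	/* cf. stopping the lockspace */
	fprintf(stderr, "recovery failed, taking the error path\n");
	return NULL;				/* cf. "fail:" -- no unlock */
}

static void *lock_request(void *arg)
{
	pthread_rwlock_rdlock(&in_recovery);	/* cf. dlm_lock() taking it for read */
	puts("lock granted");			/* never reached */
	pthread_rwlock_unlock(&in_recovery);
	return NULL;
}

int main(void)
{
	pthread_t rec, req;

	pthread_create(&rec, NULL, recoverd, NULL);
	pthread_join(rec, NULL);
	pthread_create(&req, NULL, lock_request, NULL);
	sleep(1);
	puts("the lock request is still blocked, like dlm_lock() above");
	return 0;
}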

"ls_recover()" includes several other cases when it simply goes
to the "fail:" branch without setting free "->ls_in_recovery" and
without cleaning up the inconsistent data left behind.
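
As far as I can see, the rwsem is taken for write when the lockspace
is stopped and is only released on the success path, via
"enable_locking()"; the "fail:" branch never reaches it. Just to
illustrate the idea, an untested sketch (not a proper patch):

 fail:
	...
	/* hypothetical: also release the rwsem so that blocked
	 * dlm_lock() callers can proceed; a real fix presumably
	 * must undo the partial recovery state as well */
	up_write(&ls->ls_in_recovery);
	return error;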

I think some error handling code is missing from "ls_recover()".
Has the DLM been modified in this respect since RHEL 6.0?

Thanks,

Zoltan Menyhart




Thread overview: 10+ messages
2010-11-22 16:31 [Cluster-devel] "->ls_in_recovery" not released Menyhart Zoltan
2010-11-22 17:34 ` David Teigland
2010-11-23 14:58   ` Menyhart Zoltan
2010-11-23 17:15     ` David Teigland
2010-11-24 16:13       ` Menyhart Zoltan
2010-11-24 20:29         ` David Teigland
2010-11-30 16:57       ` [Cluster-devel] Patch: making DLM more robust Menyhart Zoltan
2010-11-30 17:30         ` David Teigland
2010-12-01  9:23           ` Menyhart Zoltan
2010-12-01 17:27             ` David Teigland
