cluster-devel.redhat.com archive mirror
* [Cluster-devel] cluster/dlm-kernel/src lockqueue.c
@ 2007-11-07 15:22 teigland
From: teigland @ 2007-11-07 15:22 UTC
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	cluster
Branch: 	RHEL4
Changes by:	teigland at sourceware.org	2007-11-07 15:22:31

Modified files:
	dlm-kernel/src : lockqueue.c 

Log message:
	bz 349001
	
	For the entire life of the dlm, there's been an annoying issue that we've
	worked around and not "fixed" directly.  It's the source of all these
	messages:
	
	process_lockqueue_reply id 2c0224 state 0
	
	The problem is that a lock master sends an async "granted" message for a
	convert request *before* actually sending the reply for the original
	convert.  The work-around is that the requesting node just takes the
	granted message as an implicit reply to the conversion and ignores the
	convert reply when it arrives later (the message above is printed when
	it gets the out-of-order reply for its convert).  Apart from the annoying
	messages, it's never been a problem.
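	
	As an illustration only, here is a minimal userspace sketch of that
	work-around on the requesting node (this is not the dlm-kernel code;
	the struct, message types and handle_message() are invented for the
	example):
	
	#include <stdio.h>
	
	enum msg_type { MSG_GRANTED, MSG_CONVERT_REPLY };
	
	struct lkb {
		unsigned int id;
		int lockqueue_state;	/* nonzero while a reply is awaited */
	};
	
	/* a granted message completes the convert and clears the lockqueue
	   state, so the reply that arrives later finds state 0 and is only
	   logged and dropped */
	static void handle_message(struct lkb *lkb, enum msg_type type)
	{
		switch (type) {
		case MSG_GRANTED:
			lkb->lockqueue_state = 0;	/* implicit convert reply */
			printf("convert %x completed by grant\n", lkb->id);
			break;
		case MSG_CONVERT_REPLY:
			if (!lkb->lockqueue_state) {
				printf("process_lockqueue_reply id %x state 0\n",
				       lkb->id);
				break;	/* late reply, ignore it */
			}
			lkb->lockqueue_state = 0;
			printf("convert %x completed by reply\n", lkb->id);
			break;
		}
	}
	
	int main(void)
	{
		struct lkb lkb = { .id = 0x2c0224, .lockqueue_state = 1 };
	
		/* the master sends the grant first, then the convert reply */
		handle_message(&lkb, MSG_GRANTED);
		handle_message(&lkb, MSG_CONVERT_REPLY);
		return 0;
	}
	
	The real fix below is on the master's side, so that the late reply is
	not sent at all.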
	
	Now we've found a case where it's a real problem:
	
	1. nodeA: send convert PR->CW to nodeB
	   nodeB: send granted message to nodeA
	   nodeB: send convert reply to nodeA
	2. nodeA: receive granted message for conversion,
	   complete request, sending ast to gfs
	3. nodeA: send convert CW->EX to nodeB
	4. nodeA: receive reply for convert in step 1, which we ordinarily
	   ignore, but since another convert has been sent, we mistake this
	   message for the reply to the convert in step 3, and complete
	   the convert request which is *not* really completed yet
	5. nodeA: send unlock to nodeB
	   nodeB: complains about an unlock during a conversion
	
	The fix is to have nodeB not send a convert reply if it has already sent a
	granted message.  (We already do this for cases where the conversion is
	granted when first processing it, but we don't in cases where the grant
	is done after processing the convert.)

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/dlm-kernel/src/lockqueue.c.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.37.2.9&r2=1.37.2.10

--- cluster/dlm-kernel/src/Attic/lockqueue.c	2006/01/24 14:38:19	1.37.2.9
+++ cluster/dlm-kernel/src/Attic/lockqueue.c	2007/11/07 15:22:31	1.37.2.10
@@ -590,6 +590,14 @@
 	req->rr_lvbseq = lkb->lkb_lvbseq;
 	add_request_lvb(lkb, req);
 
+	/* prevent a convert reply that hasn't been sent yet, the grant message
+	   will serve as an implicit convert reply */
+	if (lkb->lkb_request) {
+		log_debug(lkb->lkb_resource->res_ls, "skip convert reply %x "
+			  "gr %d\n", lkb->lkb_id, lkb->lkb_grmode);
+		lkb->lkb_request = NULL;
+	}
+
 	midcomms_send_buffer(&req->rr_header, e);
 }
 




* [Cluster-devel] cluster/dlm-kernel/src lockqueue.c
@ 2007-11-07 15:57 teigland
From: teigland @ 2007-11-07 15:57 UTC
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	cluster
Branch: 	RHEL46
Changes by:	teigland at sourceware.org	2007-11-07 15:57:09

Modified files:
	dlm-kernel/src : lockqueue.c 

Log message:
	bz 349001
	
	For the entire life of the dlm, there's been an annoying issue that we've
	worked around and not "fixed" directly.  It's the source of all these
	messages:
	
	process_lockqueue_reply id 2c0224 state 0
	
	The problem is that a lock master sends an async "granted" message for a
	convert request *before* actually sending the reply for the original
	convert.  The work-around is that the requesting node just takes the
	granted message as an implicit reply to the conversion and ignores the
	convert reply when it arrives later (the message above is printed when
	it gets the out-of-order reply for its convert).  Apart from the annoying
	messages, it's never been a problem.
	
	Now we've found a case where it's a real problem:
	
	1. nodeA: send convert PR->CW to nodeB
	   nodeB: send granted message to nodeA
	   nodeB: send convert reply to nodeA
	2. nodeA: receive granted message for conversion,
	   complete request, sending ast to gfs
	3. nodeA: send convert CW->EX to nodeB
	4. nodeA: receive reply for convert in step 1, which we ordinarily
	   ignore, but since another convert has been sent, we mistake this
	   message for the reply to the convert in step 3, and complete
	   the convert request which is *not* really completed yet
	5. nodeA: send unlock to nodeB
	   nodeB: complains about an unlock during a conversion
	
	The fix is to have nodeB not send a convert reply if it has already sent a
	granted message.  (We already do this for cases where the conversion is
	granted when first processing it, but we don't in cases where the grant
	is done after processing the convert.)

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/dlm-kernel/src/lockqueue.c.diff?cvsroot=cluster&only_with_tag=RHEL46&r1=1.37.2.9&r2=1.37.2.9.6.1

--- cluster/dlm-kernel/src/Attic/lockqueue.c	2006/01/24 14:38:19	1.37.2.9
+++ cluster/dlm-kernel/src/Attic/lockqueue.c	2007/11/07 15:57:08	1.37.2.9.6.1
@@ -590,6 +590,14 @@
 	req->rr_lvbseq = lkb->lkb_lvbseq;
 	add_request_lvb(lkb, req);
 
+	/* prevent a convert reply that hasn't been sent yet, the grant message
+	   will serve as an implicit convert reply */
+	if (lkb->lkb_request) {
+		log_debug(lkb->lkb_resource->res_ls, "skip convert reply %x "
+			  "gr %d\n", lkb->lkb_id, lkb->lkb_grmode);
+		lkb->lkb_request = NULL;
+	}
+
 	midcomms_send_buffer(&req->rr_header, e);
 }
 




* [Cluster-devel] cluster/dlm-kernel/src lockqueue.c
@ 2008-01-04 16:12 teigland
From: teigland @ 2008-01-04 16:12 UTC
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	cluster
Branch: 	RHEL4
Changes by:	teigland at sourceware.org	2008-01-04 16:12:05

Modified files:
	dlm-kernel/src : lockqueue.c 

Log message:
	A message can show up out of place, but there's no need to panic
	the machine; just ignore it.  bz 427531

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/dlm-kernel/src/lockqueue.c.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.37.2.10&r2=1.37.2.11

--- cluster/dlm-kernel/src/Attic/lockqueue.c	2007/11/07 15:22:31	1.37.2.10
+++ cluster/dlm-kernel/src/Attic/lockqueue.c	2008/01/04 16:12:05	1.37.2.11
@@ -243,8 +243,13 @@
 			 */
 
 			lkb = find_lock_by_id(ls, hd->rh_lkid);
-			DLM_ASSERT(lkb,);
-			if (lkb->lkb_lockqueue_state == GDLM_LQSTATE_WAIT_RSB) {
+			if (!lkb) {
+				log_error(ls, "purge %x from %d no lkb",
+					  hd->rh_lkid, entry->rqe_nodeid);
+				list_del(&entry->rqe_list);
+				kfree(entry);
+				count++;
+			} else if (lkb->lkb_lockqueue_state == GDLM_LQSTATE_WAIT_RSB) {
 				list_del(&entry->rqe_list);
 				kfree(entry);
 				count++;




* [Cluster-devel] cluster/dlm-kernel/src lockqueue.c
@ 2008-01-14 15:57 teigland
From: teigland @ 2008-01-14 15:57 UTC
  To: cluster-devel.redhat.com

CVSROOT:	/cvs/cluster
Module name:	cluster
Branch: 	RHEL4
Changes by:	teigland at sourceware.org	2008-01-14 15:57:46

Modified files:
	dlm-kernel/src : lockqueue.c 

Log message:
	bz 351321
	
	add_to_requestqueue() can add a new message to the requestqueue
	just after process_requestqueue() checks it and determines it's
	empty.  This means dlm_recvd will spin forever in wait_requestqueue()
	waiting for the message to be removed.
	
	The same problem was found and fixed in the RHEL5 code (and then
	subsequently changed again).  This patch is the RHEL4 equivalent of the
	original RHEL5 fix.
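	
	As an illustration of the pattern only, a minimal userspace sketch
	(this is not the dlm-kernel code; the names, types and locking below
	are invented for the example):
	
	#include <pthread.h>
	#include <errno.h>
	#include <stdio.h>
	
	struct queue {
		pthread_mutex_t lock;
		int running;	/* set once recovery has finished */
		int pending;	/* stands in for the list of queued requests */
	};
	
	/* re-check 'running' while holding the same lock the consumer takes,
	   so a request cannot be queued after the queue was seen to be empty;
	   if recovery finished in the meantime, tell the caller to retry */
	static int add_to_queue(struct queue *q)
	{
		int rv = 0;
	
		pthread_mutex_lock(&q->lock);
		if (!q->running)
			q->pending++;
		else
			rv = -EAGAIN;
		pthread_mutex_unlock(&q->lock);
		return rv;
	}
	
	int main(void)
	{
		struct queue q = { .lock = PTHREAD_MUTEX_INITIALIZER };
		int rv;
	
	 retry:
		if (!q.running) {
			rv = add_to_queue(&q);
			if (rv == -EAGAIN)
				goto retry;
		}
		printf("queued requests: %d\n", q.pending);
		return 0;
	}
	
	The key point, as in the RHEL5 fix, is that LSFL_LS_RUN (the 'running'
	flag above) is tested again under ls_requestqueue_lock, so nothing can
	be added to the queue once process_requestqueue() has decided it is
	empty.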

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/dlm-kernel/src/lockqueue.c.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.37.2.11&r2=1.37.2.12

--- cluster/dlm-kernel/src/Attic/lockqueue.c	2008/01/04 16:12:05	1.37.2.11
+++ cluster/dlm-kernel/src/Attic/lockqueue.c	2008/01/14 15:57:46	1.37.2.12
@@ -112,22 +112,23 @@
  * request queue and processed when recovery is complete.
  */
 
-void add_to_requestqueue(struct dlm_ls *ls, int nodeid, struct dlm_header *hd)
+int add_to_requestqueue(struct dlm_ls *ls, int nodeid, struct dlm_header *hd)
 {
 	struct rq_entry *entry;
 	int length = hd->rh_length;
+	int rv;
 
 	if (test_bit(LSFL_REQUEST_WARN, &ls->ls_flags))
 		log_error(ls, "request during recovery from %u", nodeid);
 
 	if (in_nodes_gone(ls, nodeid))
-		return;
+		return 0;
 
 	entry = kmalloc(sizeof(struct rq_entry) + length, GFP_KERNEL);
 	if (!entry) {
 		// TODO something better
 		printk("dlm: add_to_requestqueue: out of memory\n");
-		return;
+		return 0;
 	}
 
 	log_debug(ls, "add_to_requestq cmd %d fr %d", hd->rh_cmd, nodeid);
@@ -135,8 +136,22 @@
 	memcpy(entry->rqe_request, hd, length);
 
 	down(&ls->ls_requestqueue_lock);
-	list_add_tail(&entry->rqe_list, &ls->ls_requestqueue);
+
+	/* We need to check LS_RUN after taking the mutex to
+	   avoid a race where dlm_recoverd enables locking and runs
+	   process_requestqueue between our earlier LS_RUN check
+	   and this addition to the requestqueue. (From RHEL5 code). */
+
+	if (!test_bit(LSFL_LS_RUN, &ls->ls_flags)) {
+		list_add_tail(&entry->rqe_list, &ls->ls_requestqueue);
+		rv = 0;
+	} else {
+		log_debug(ls, "add_to_requestq skip fr %d", nodeid);
+		kfree(entry);
+		rv = -EAGAIN;
+	}
 	up(&ls->ls_requestqueue_lock);
+	return rv;
 }
 
 int process_requestqueue(struct dlm_ls *ls)
@@ -819,6 +834,7 @@
 	struct dlm_request *freq = (struct dlm_request *) req;
 	struct dlm_reply *rp = (struct dlm_reply *) req;
 	struct dlm_reply reply;
+	int error;
 
 	lspace = find_lockspace_by_global_id(req->rh_lockspace);
 
@@ -840,8 +856,11 @@
 	 */
  retry:
 	if (!test_bit(LSFL_LS_RUN, &lspace->ls_flags)) {
-		if (!recovery)
-			add_to_requestqueue(lspace, nodeid, req);
+		if (!recovery) {
+			error = add_to_requestqueue(lspace, nodeid, req);
+			if (error == -EAGAIN)
+				goto retry;
+		}
 		status = -EINTR;
 		goto out;
 	}



