cluster-devel.redhat.com archive mirror
* [Cluster-devel] [GFS2] Some patches to clean up lock_dlm etc.
@ 2008-06-25  8:47 swhiteho
  2008-06-25  8:47 ` [Cluster-devel] [PATCH] [GFS2] Remove remote lock dropping code swhiteho
  2008-06-25 10:11 ` [Cluster-devel] [GFS2] Some patches to clean up lock_dlm etc Fabio M. Di Nitto
  0 siblings, 2 replies; 7+ messages in thread
From: swhiteho @ 2008-06-25  8:47 UTC (permalink / raw)
  To: cluster-devel.redhat.com

Hi,

Here are some patches which I've collected up in preparation for
removal of the lock_dlm module (the plan being to merge it into
the main GFS2 module). I've not yet made a decision about whether
they will go into this merge window or the next, and that might
well depend upon the timing of the final patch in the series which
actually does the final removal of the module.

The main point of these patches is to clean up the code in the
meantime and remove anything which will not be required after
the merge,

Steve.





* [Cluster-devel] [PATCH] [GFS2] Remove remote lock dropping code
  2008-06-25  8:47 [Cluster-devel] [GFS2] Some patches to clean up lock_dlm etc swhiteho
@ 2008-06-25  8:47 ` swhiteho
  2008-06-25  8:47   ` [Cluster-devel] [PATCH] [GFS2] Remove obsolete conversion deadlock avoidance code swhiteho
  2008-06-25 10:11 ` [Cluster-devel] [GFS2] Some patches to clean up lock_dlm etc Fabio M. Di Nitto
  1 sibling, 1 reply; 7+ messages in thread
From: swhiteho @ 2008-06-25  8:47 UTC (permalink / raw)
  To: cluster-devel.redhat.com

From: Steven Whitehouse <swhiteho@redhat.com>

There are several reasons why this is undesirable:

 1. It never happens during normal operation anyway.
 2. If it does happen, it causes very poor performance.
 3. It is unlikely to solve the original problem it was intended to
    address (memory shortage on the remote DLM node).
 4. It uses a number of arbitrary constants which are unlikely to be
    correct for any particular situation and for which the tuning
    seems to be a black art.
 5. In an N node cluster, only 1/N of the dropped locks will, on
    average, actually contribute to solving the problem.
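
(To put point 5 in rough perspective, with made-up numbers: in a
16 node cluster where each node caches a similar number of locks,
a node honouring a drop request can only shed its own cached
locks, so at best about 1/16 of the cluster-wide total goes away,
while that node later pays the full cost of re-acquiring whatever
it dropped.)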

So all in all we are better off without it. This also makes merging
the lock_dlm module into GFS2 a bit easier.

Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>

diff --git a/fs/gfs2/gfs2.h b/fs/gfs2/gfs2.h
index 3bb11c0..ef606e3 100644
--- a/fs/gfs2/gfs2.h
+++ b/fs/gfs2/gfs2.h
@@ -16,11 +16,6 @@ enum {
 };
 
 enum {
-	NO_WAIT = 0,
-	WAIT = 1,
-};
-
-enum {
 	NO_FORCE = 0,
 	FORCE = 1,
 };
diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
index be7ed50..8d5450f 100644
--- a/fs/gfs2/glock.c
+++ b/fs/gfs2/glock.c
@@ -1316,11 +1316,6 @@ void gfs2_glock_cb(void *cb_data, unsigned int type, void *data)
 			wake_up_process(sdp->sd_recoverd_process);
 		return;
 
-	case LM_CB_DROPLOCKS:
-		gfs2_gl_hash_clear(sdp, NO_WAIT);
-		gfs2_quota_scan(sdp);
-		return;
-
 	default:
 		gfs2_assert_warn(sdp, 0);
 		return;
@@ -1508,11 +1503,10 @@ static void clear_glock(struct gfs2_glock *gl)
  * @sdp: the filesystem
  * @wait: wait until it's all gone
  *
- * Called when unmounting the filesystem, or when inter-node lock manager
- * requests DROPLOCKS because it is running out of capacity.
+ * Called when unmounting the filesystem.
  */
 
-void gfs2_gl_hash_clear(struct gfs2_sbd *sdp, int wait)
+void gfs2_gl_hash_clear(struct gfs2_sbd *sdp)
 {
 	unsigned long t;
 	unsigned int x;
@@ -1527,7 +1521,7 @@ void gfs2_gl_hash_clear(struct gfs2_sbd *sdp, int wait)
 				cont = 1;
 		}
 
-		if (!wait || !cont)
+		if (!cont)
 			break;
 
 		if (time_after_eq(jiffies,
diff --git a/fs/gfs2/glock.h b/fs/gfs2/glock.h
index 7389f8e..971d92a 100644
--- a/fs/gfs2/glock.h
+++ b/fs/gfs2/glock.h
@@ -132,7 +132,7 @@ void gfs2_lvb_unhold(struct gfs2_glock *gl);
 void gfs2_glock_cb(void *cb_data, unsigned int type, void *data);
 void gfs2_glock_schedule_for_reclaim(struct gfs2_glock *gl);
 void gfs2_reclaim_glock(struct gfs2_sbd *sdp);
-void gfs2_gl_hash_clear(struct gfs2_sbd *sdp, int wait);
+void gfs2_gl_hash_clear(struct gfs2_sbd *sdp);
 
 int __init gfs2_glock_init(void);
 void gfs2_glock_exit(void);
diff --git a/fs/gfs2/locking/dlm/lock_dlm.h b/fs/gfs2/locking/dlm/lock_dlm.h
index ad944c6..845a27f 100644
--- a/fs/gfs2/locking/dlm/lock_dlm.h
+++ b/fs/gfs2/locking/dlm/lock_dlm.h
@@ -79,9 +79,6 @@ struct gdlm_ls {
 	wait_queue_head_t	wait_control;
 	struct task_struct	*thread;
 	wait_queue_head_t	thread_wait;
-	unsigned long		drop_time;
-	int			drop_locks_count;
-	int			drop_locks_period;
 };
 
 enum {
diff --git a/fs/gfs2/locking/dlm/mount.c b/fs/gfs2/locking/dlm/mount.c
index 0628520..fa31c54 100644
--- a/fs/gfs2/locking/dlm/mount.c
+++ b/fs/gfs2/locking/dlm/mount.c
@@ -22,8 +22,6 @@ static struct gdlm_ls *init_gdlm(lm_callback_t cb, struct gfs2_sbd *sdp,
 	if (!ls)
 		return NULL;
 
-	ls->drop_locks_count = GDLM_DROP_COUNT;
-	ls->drop_locks_period = GDLM_DROP_PERIOD;
 	ls->fscb = cb;
 	ls->sdp = sdp;
 	ls->fsflags = flags;
@@ -33,7 +31,6 @@ static struct gdlm_ls *init_gdlm(lm_callback_t cb, struct gfs2_sbd *sdp,
 	INIT_LIST_HEAD(&ls->all_locks);
 	init_waitqueue_head(&ls->thread_wait);
 	init_waitqueue_head(&ls->wait_control);
-	ls->drop_time = jiffies;
 	ls->jid = -1;
 
 	strncpy(buf, table_name, 256);
diff --git a/fs/gfs2/locking/dlm/sysfs.c b/fs/gfs2/locking/dlm/sysfs.c
index a4ff271..4ec571c 100644
--- a/fs/gfs2/locking/dlm/sysfs.c
+++ b/fs/gfs2/locking/dlm/sysfs.c
@@ -114,17 +114,6 @@ static ssize_t recover_status_show(struct gdlm_ls *ls, char *buf)
 	return sprintf(buf, "%d\n", ls->recover_jid_status);
 }
 
-static ssize_t drop_count_show(struct gdlm_ls *ls, char *buf)
-{
-	return sprintf(buf, "%d\n", ls->drop_locks_count);
-}
-
-static ssize_t drop_count_store(struct gdlm_ls *ls, const char *buf, size_t len)
-{
-	ls->drop_locks_count = simple_strtol(buf, NULL, 0);
-	return len;
-}
-
 struct gdlm_attr {
 	struct attribute attr;
 	ssize_t (*show)(struct gdlm_ls *, char *);
@@ -144,7 +133,6 @@ GDLM_ATTR(first_done,     0444, first_done_show,     NULL);
 GDLM_ATTR(recover,        0644, recover_show,        recover_store);
 GDLM_ATTR(recover_done,   0444, recover_done_show,   NULL);
 GDLM_ATTR(recover_status, 0444, recover_status_show, NULL);
-GDLM_ATTR(drop_count,     0644, drop_count_show,     drop_count_store);
 
 static struct attribute *gdlm_attrs[] = {
 	&gdlm_attr_proto_name.attr,
@@ -157,7 +145,6 @@ static struct attribute *gdlm_attrs[] = {
 	&gdlm_attr_recover.attr,
 	&gdlm_attr_recover_done.attr,
 	&gdlm_attr_recover_status.attr,
-	&gdlm_attr_drop_count.attr,
 	NULL,
 };
 
diff --git a/fs/gfs2/locking/dlm/thread.c b/fs/gfs2/locking/dlm/thread.c
index f30350a..38823ef 100644
--- a/fs/gfs2/locking/dlm/thread.c
+++ b/fs/gfs2/locking/dlm/thread.c
@@ -20,19 +20,6 @@ static inline int no_work(struct gdlm_ls *ls)
 	return ret;
 }
 
-static inline int check_drop(struct gdlm_ls *ls)
-{
-	if (!ls->drop_locks_count)
-		return 0;
-
-	if (time_after(jiffies, ls->drop_time + ls->drop_locks_period * HZ)) {
-		ls->drop_time = jiffies;
-		if (ls->all_locks_count >= ls->drop_locks_count)
-			return 1;
-	}
-	return 0;
-}
-
 static int gdlm_thread(void *data)
 {
 	struct gdlm_ls *ls = (struct gdlm_ls *) data;
@@ -52,12 +39,6 @@ static int gdlm_thread(void *data)
 			gdlm_do_lock(lp);
 			spin_lock(&ls->async_lock);
 		}
-		/* Does this ever happen these days? I hope not anyway */
-		if (check_drop(ls)) {
-			spin_unlock(&ls->async_lock);
-			ls->fscb(ls->sdp, LM_CB_DROPLOCKS, NULL);
-			spin_lock(&ls->async_lock);
-		}
 		spin_unlock(&ls->async_lock);
 	}
 
diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
index 9bd97c5..6ba69dd 100644
--- a/fs/gfs2/ops_fstype.c
+++ b/fs/gfs2/ops_fstype.c
@@ -874,7 +874,7 @@ fail_sb:
 fail_locking:
 	init_locking(sdp, &mount_gh, UNDO);
 fail_lm:
-	gfs2_gl_hash_clear(sdp, WAIT);
+	gfs2_gl_hash_clear(sdp);
 	gfs2_lm_unmount(sdp);
 	while (invalidate_inodes(sb))
 		yield();
diff --git a/fs/gfs2/ops_super.c b/fs/gfs2/ops_super.c
index 0b7cc92..6825515 100644
--- a/fs/gfs2/ops_super.c
+++ b/fs/gfs2/ops_super.c
@@ -126,7 +126,7 @@ static void gfs2_put_super(struct super_block *sb)
 	gfs2_clear_rgrpd(sdp);
 	gfs2_jindex_free(sdp);
 	/*  Take apart glock structures and buffer lists  */
-	gfs2_gl_hash_clear(sdp, WAIT);
+	gfs2_gl_hash_clear(sdp);
 	/*  Unmount the locking protocol  */
 	gfs2_lm_unmount(sdp);
 
diff --git a/fs/gfs2/sys.c b/fs/gfs2/sys.c
index 9ab9fc8..6f7e2e5 100644
--- a/fs/gfs2/sys.c
+++ b/fs/gfs2/sys.c
@@ -110,18 +110,6 @@ static ssize_t statfs_sync_store(struct gfs2_sbd *sdp, const char *buf,
 	return len;
 }
 
-static ssize_t shrink_store(struct gfs2_sbd *sdp, const char *buf, size_t len)
-{
-	if (!capable(CAP_SYS_ADMIN))
-		return -EACCES;
-
-	if (simple_strtol(buf, NULL, 0) != 1)
-		return -EINVAL;
-
-	gfs2_gl_hash_clear(sdp, NO_WAIT);
-	return len;
-}
-
 static ssize_t quota_sync_store(struct gfs2_sbd *sdp, const char *buf,
 				size_t len)
 {
@@ -175,7 +163,6 @@ static struct gfs2_attr gfs2_attr_##name = __ATTR(name, mode, show, store)
 GFS2_ATTR(id,                  0444, id_show,       NULL);
 GFS2_ATTR(fsname,              0444, fsname_show,   NULL);
 GFS2_ATTR(freeze,              0644, freeze_show,   freeze_store);
-GFS2_ATTR(shrink,              0200, NULL,          shrink_store);
 GFS2_ATTR(withdraw,            0644, withdraw_show, withdraw_store);
 GFS2_ATTR(statfs_sync,         0200, NULL,          statfs_sync_store);
 GFS2_ATTR(quota_sync,          0200, NULL,          quota_sync_store);
@@ -186,7 +173,6 @@ static struct attribute *gfs2_attrs[] = {
 	&gfs2_attr_id.attr,
 	&gfs2_attr_fsname.attr,
 	&gfs2_attr_freeze.attr,
-	&gfs2_attr_shrink.attr,
 	&gfs2_attr_withdraw.attr,
 	&gfs2_attr_statfs_sync.attr,
 	&gfs2_attr_quota_sync.attr,
diff --git a/include/linux/lm_interface.h b/include/linux/lm_interface.h
index f274997..d0a7112 100644
--- a/include/linux/lm_interface.h
+++ b/include/linux/lm_interface.h
@@ -138,9 +138,6 @@ typedef void (*lm_callback_t) (void *ptr, unsigned int type, void *data);
  * LM_CB_NEED_RECOVERY
  * The given journal needs to be recovered.
  *
- * LM_CB_DROPLOCKS
- * Reduce the number of cached locks.
- *
  * LM_CB_ASYNC
  * The given lock has been granted.
  */
@@ -149,7 +146,6 @@ typedef void (*lm_callback_t) (void *ptr, unsigned int type, void *data);
 #define LM_CB_NEED_D		258
 #define LM_CB_NEED_S		259
 #define LM_CB_NEED_RECOVERY	260
-#define LM_CB_DROPLOCKS		261
 #define LM_CB_ASYNC		262
 
 /*
-- 
1.5.1.2




* [Cluster-devel] [PATCH] [GFS2] Remove obsolete conversion deadlock avoidance code
  2008-06-25  8:47 ` [Cluster-devel] [PATCH] [GFS2] Remove remote lock dropping code swhiteho
@ 2008-06-25  8:47   ` swhiteho
  2008-06-25  8:47     ` [Cluster-devel] [PATCH] [GFS2] Clean up the plock operations swhiteho
  0 siblings, 1 reply; 7+ messages in thread
From: swhiteho @ 2008-06-25  8:47 UTC (permalink / raw)
  To: cluster-devel.redhat.com

From: Steven Whitehouse <swhiteho@redhat.com>

This is only used by GFS1, so it can be removed.

Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>

diff --git a/fs/gfs2/locking/dlm/lock.c b/fs/gfs2/locking/dlm/lock.c
index 871ffc9..894df45 100644
--- a/fs/gfs2/locking/dlm/lock.c
+++ b/fs/gfs2/locking/dlm/lock.c
@@ -80,7 +80,6 @@ static void process_complete(struct gdlm_lock *lp)
 {
 	struct gdlm_ls *ls = lp->ls;
 	struct lm_async_cb acb;
-	s16 prev_mode = lp->cur;
 
 	memset(&acb, 0, sizeof(acb));
 
@@ -160,15 +159,7 @@ static void process_complete(struct gdlm_lock *lp)
 			 lp->lksb.sb_status, lp->lockname.ln_type,
 			 (unsigned long long)lp->lockname.ln_number,
 			 lp->flags);
-		if (lp->lksb.sb_status == -EDEADLOCK &&
-		    lp->ls->fsflags & LM_MFLAG_CONV_NODROP) {
-			lp->req = lp->cur;
-			acb.lc_ret |= LM_OUT_CONV_DEADLK;
-			if (lp->cur == DLM_LOCK_IV)
-				lp->lksb.sb_lkid = 0;
-			goto out;
-		} else
-			return;
+		return;
 	}
 
 	/*
@@ -268,10 +259,6 @@ out:
 	acb.lc_name = lp->lockname;
 	acb.lc_ret |= gdlm_make_lmstate(lp->cur);
 
-	if (!test_and_clear_bit(LFL_NOCACHE, &lp->flags) &&
-	    (lp->cur > DLM_LOCK_NL) && (prev_mode > DLM_LOCK_NL))
-		acb.lc_ret |= LM_OUT_CACHEABLE;
-
 	ls->fscb(ls->sdp, LM_CB_ASYNC, &acb);
 }
 
@@ -376,14 +363,6 @@ static inline unsigned int make_flags(struct gdlm_lock *lp,
 
 	if (lp->lksb.sb_lkid != 0) {
 		lkf |= DLM_LKF_CONVERT;
-
-		/* Conversion deadlock avoidance by DLM */
-
-		if (!(lp->ls->fsflags & LM_MFLAG_CONV_NODROP) &&
-		    !test_bit(LFL_FORCE_PROMOTE, &lp->flags) &&
-		    !(lkf & DLM_LKF_NOQUEUE) &&
-		    cur > DLM_LOCK_NL && req > DLM_LOCK_NL && cur != req)
-			lkf |= DLM_LKF_CONVDEADLK;
 	}
 
 	if (lp->lvb)
diff --git a/include/linux/lm_interface.h b/include/linux/lm_interface.h
index d0a7112..2ed8fa1 100644
--- a/include/linux/lm_interface.h
+++ b/include/linux/lm_interface.h
@@ -122,11 +122,9 @@ typedef void (*lm_callback_t) (void *ptr, unsigned int type, void *data);
  */
 
 #define LM_OUT_ST_MASK		0x00000003
-#define LM_OUT_CACHEABLE	0x00000004
 #define LM_OUT_CANCELED		0x00000008
 #define LM_OUT_ASYNC		0x00000080
 #define LM_OUT_ERROR		0x00000100
-#define LM_OUT_CONV_DEADLK	0x00000200
 
 /*
  * lm_callback_t types
-- 
1.5.1.2




* [Cluster-devel] [PATCH] [GFS2] Clean up the plock operations
  2008-06-25  8:47   ` [Cluster-devel] [PATCH] [GFS2] Remove obsolete conversion deadlock avoidance code swhiteho
@ 2008-06-25  8:47     ` swhiteho
  2008-06-25  8:47       ` [Cluster-devel] [PATCH] [GFS2] Remove all_list from lock_dlm swhiteho
  0 siblings, 1 reply; 7+ messages in thread
From: swhiteho @ 2008-06-25  8:47 UTC (permalink / raw)
  To: cluster-devel.redhat.com

From: Steven Whitehouse <swhiteho@redhat.com>

Historically we've done the cluster posix locking via the
lock modules; recent changes moved the posix locking into
the DLM itself. We also have two sets of file_operations,
one for cluster locks (both fcntl and flock) and one for
local locking. Note that local locking is an option even
for a multi-node filesystem (provided the application is
ok with that), so using lock_dlm does not mean that we
will always use the cluster-locking set of
file_operations.

Anyhow, the net result of that was that we had bits of
GFS2 which were compiled in regardless of whether or not
the lock_dlm module was being built, even when they were
not needed.

This patch alters the configuration so that those parts
of the code are not compiled in when not needed. Also,
the GFS2_FS_LOCKING_DLM config option becomes a bool. I
know that's not quite right until we finally get rid of
what remains of that module, but for now it means that
things will build correctly; the final removal of the
module is in the next patch. Likewise, I hope my addition
of header files partway down ops_file.c will be indulged
until that is fixed, also in the next patch.

The net result of this patch is the removal of about 50
lines of wrapper functions, and a reduction in the size
of GFS2 in lock_nolock-only configurations.
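
To make the resulting structure easier to see, here is a small
standalone sketch of the pattern (the toy names, such as
struct ops and localflocks(), are made up for illustration and
are not GFS2 code): the cluster-locking file_operations become
a pointer which is simply NULL when DLM locking is configured
out, and an inline predicate decides which set an inode gets,
so the callers need no #ifdefs of their own.

#include <stdio.h>

/* Toy stand-ins, illustration only -- not the kernel types. */
struct ops { const char *name; };

#define CONFIG_LOCKING_DLM 1        /* set to 0 to mimic a nolock-only build */

static const struct ops nolock_ops = { "nolock" };

#if CONFIG_LOCKING_DLM
static const struct ops cluster_ops_impl = { "cluster" };
static const struct ops *const cluster_ops = &cluster_ops_impl;
/* The mount argument decides whether flocks stay node-local. */
static int localflocks(int ar_localflocks) { return ar_localflocks; }
#else
#define cluster_ops ((const struct ops *)NULL)  /* never dereferenced, see main() */
static int localflocks(int ar_localflocks) { (void)ar_localflocks; return 1; }
#endif

int main(void)
{
	int ar_localflocks = 0;     /* 0: cluster locking requested at mount */

	/* Mirrors the gfs2_set_iop() hunk below: when localflocks() is
	 * forced to 1 (no DLM support built in), the NULL cluster_ops is
	 * never selected. */
	const struct ops *fop = localflocks(ar_localflocks) ? &nolock_ops
							     : cluster_ops;
	printf("selected file_operations: %s\n", fop->name);
	return 0;
}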

Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>

diff --git a/fs/gfs2/Kconfig b/fs/gfs2/Kconfig
index ab2f57e..5519120 100644
--- a/fs/gfs2/Kconfig
+++ b/fs/gfs2/Kconfig
@@ -1,6 +1,10 @@
 config GFS2_FS
 	tristate "GFS2 file system support"
 	depends on EXPERIMENTAL && (64BIT || (LSF && LBD))
+	select DLM if GFS2_FS_LOCKING_DLM
+	select CONFIGFS_FS if GFS2_FS_LOCKING_DLM
+	select SYSFS if GFS2_FS_LOCKING_DLM
+	select IP_SCTP if DLM_SCTP
 	select FS_POSIX_ACL
 	select CRC32
 	help
@@ -21,11 +25,8 @@ config GFS2_FS
 	  The "nolock" lock module is now built in to GFS2 by default.
 
 config GFS2_FS_LOCKING_DLM
-	tristate "GFS2 DLM locking module"
-	depends on GFS2_FS && SYSFS && NET && INET && (IPV6 || IPV6=n)
-	select IP_SCTP if DLM_SCTP
-	select CONFIGFS_FS
-	select DLM
+	bool "GFS2 DLM locking"
+	depends on GFS2_FS && NET && INET && (IPV6 || IPV6=n)
 	help
 	  Multiple node locking module for GFS2
 
diff --git a/fs/gfs2/inode.c b/fs/gfs2/inode.c
index 09453d0..1622f81 100644
--- a/fs/gfs2/inode.c
+++ b/fs/gfs2/inode.c
@@ -137,16 +137,16 @@ void gfs2_set_iop(struct inode *inode)
 
 	if (S_ISREG(mode)) {
 		inode->i_op = &gfs2_file_iops;
-		if (sdp->sd_args.ar_localflocks)
-			inode->i_fop = &gfs2_file_fops_nolock;
+		if (gfs2_localflocks(sdp))
+			inode->i_fop = gfs2_file_fops_nolock;
 		else
-			inode->i_fop = &gfs2_file_fops;
+			inode->i_fop = gfs2_file_fops;
 	} else if (S_ISDIR(mode)) {
 		inode->i_op = &gfs2_dir_iops;
-		if (sdp->sd_args.ar_localflocks)
-			inode->i_fop = &gfs2_dir_fops_nolock;
+		if (gfs2_localflocks(sdp))
+			inode->i_fop = gfs2_dir_fops_nolock;
 		else
-			inode->i_fop = &gfs2_dir_fops;
+			inode->i_fop = gfs2_dir_fops;
 	} else if (S_ISLNK(mode)) {
 		inode->i_op = &gfs2_symlink_iops;
 	} else {
diff --git a/fs/gfs2/locking/dlm/mount.c b/fs/gfs2/locking/dlm/mount.c
index fa31c54..0f06a11 100644
--- a/fs/gfs2/locking/dlm/mount.c
+++ b/fs/gfs2/locking/dlm/mount.c
@@ -229,27 +229,6 @@ static void gdlm_withdraw(void *lockspace)
 	gdlm_kobject_release(ls);
 }
 
-static int gdlm_plock(void *lockspace, struct lm_lockname *name,
-	       struct file *file, int cmd, struct file_lock *fl)
-{
-	struct gdlm_ls *ls = lockspace;
-	return dlm_posix_lock(ls->dlm_lockspace, name->ln_number, file, cmd, fl);
-}
-
-static int gdlm_punlock(void *lockspace, struct lm_lockname *name,
-		 struct file *file, struct file_lock *fl)
-{
-	struct gdlm_ls *ls = lockspace;
-	return dlm_posix_unlock(ls->dlm_lockspace, name->ln_number, file, fl);
-}
-
-static int gdlm_plock_get(void *lockspace, struct lm_lockname *name,
-		   struct file *file, struct file_lock *fl)
-{
-	struct gdlm_ls *ls = lockspace;
-	return dlm_posix_get(ls->dlm_lockspace, name->ln_number, file, fl);
-}
-
 const struct lm_lockops gdlm_ops = {
 	.lm_proto_name = "lock_dlm",
 	.lm_mount = gdlm_mount,
@@ -260,9 +239,6 @@ const struct lm_lockops gdlm_ops = {
 	.lm_put_lock = gdlm_put_lock,
 	.lm_lock = gdlm_lock,
 	.lm_unlock = gdlm_unlock,
-	.lm_plock = gdlm_plock,
-	.lm_punlock = gdlm_punlock,
-	.lm_plock_get = gdlm_plock_get,
 	.lm_cancel = gdlm_cancel,
 	.lm_hold_lvb = gdlm_hold_lvb,
 	.lm_unhold_lvb = gdlm_unhold_lvb,
diff --git a/fs/gfs2/ops_file.c b/fs/gfs2/ops_file.c
index 0ff512a..4f68c73 100644
--- a/fs/gfs2/ops_file.c
+++ b/fs/gfs2/ops_file.c
@@ -570,57 +570,28 @@ static int gfs2_fsync(struct file *file, struct dentry *dentry, int datasync)
 	return ret;
 }
 
+#ifdef CONFIG_GFS2_FS_LOCKING_DLM
+
+#include <linux/dlm.h>
+#include <linux/dlm_plock.h>
+#include "locking/dlm/lock_dlm.h"
+
 /**
  * gfs2_setlease - acquire/release a file lease
  * @file: the file pointer
  * @arg: lease type
  * @fl: file lock
  *
+ * We don't currently have a way to enforce a lease across the whole
+ * cluster; until we do, disable leases (by just returning -EINVAL),
+ * unless the administrator has requested purely local locking.
+ *
  * Returns: errno
  */
 
 static int gfs2_setlease(struct file *file, long arg, struct file_lock **fl)
 {
-	struct gfs2_sbd *sdp = GFS2_SB(file->f_mapping->host);
-
-	/*
-	 * We don't currently have a way to enforce a lease across the whole
-	 * cluster; until we do, disable leases (by just returning -EINVAL),
-	 * unless the administrator has requested purely local locking.
-	 */
-	if (!sdp->sd_args.ar_localflocks)
-		return -EINVAL;
-	return generic_setlease(file, arg, fl);
-}
-
-static int gfs2_lm_plock_get(struct gfs2_sbd *sdp, struct lm_lockname *name,
-		      struct file *file, struct file_lock *fl)
-{
-	int error = -EIO;
-	if (likely(!test_bit(SDF_SHUTDOWN, &sdp->sd_flags)))
-		error = sdp->sd_lockstruct.ls_ops->lm_plock_get(
-				sdp->sd_lockstruct.ls_lockspace, name, file, fl);
-	return error;
-}
-
-static int gfs2_lm_plock(struct gfs2_sbd *sdp, struct lm_lockname *name,
-		  struct file *file, int cmd, struct file_lock *fl)
-{
-	int error = -EIO;
-	if (likely(!test_bit(SDF_SHUTDOWN, &sdp->sd_flags)))
-		error = sdp->sd_lockstruct.ls_ops->lm_plock(
-				sdp->sd_lockstruct.ls_lockspace, name, file, cmd, fl);
-	return error;
-}
-
-static int gfs2_lm_punlock(struct gfs2_sbd *sdp, struct lm_lockname *name,
-		    struct file *file, struct file_lock *fl)
-{
-	int error = -EIO;
-	if (likely(!test_bit(SDF_SHUTDOWN, &sdp->sd_flags)))
-		error = sdp->sd_lockstruct.ls_ops->lm_punlock(
-				sdp->sd_lockstruct.ls_lockspace, name, file, fl);
-	return error;
+	return -EINVAL;
 }
 
 /**
@@ -636,9 +607,8 @@ static int gfs2_lock(struct file *file, int cmd, struct file_lock *fl)
 {
 	struct gfs2_inode *ip = GFS2_I(file->f_mapping->host);
 	struct gfs2_sbd *sdp = GFS2_SB(file->f_mapping->host);
-	struct lm_lockname name =
-		{ .ln_number = ip->i_no_addr,
-		  .ln_type = LM_TYPE_PLOCK };
+	struct gdlm_ls *gls = sdp->sd_lockstruct.ls_lockspace;
+	dlm_lockspace_t *ls = gls->dlm_lockspace;
 
 	if (!(fl->fl_flags & FL_POSIX))
 		return -ENOLCK;
@@ -650,12 +620,14 @@ static int gfs2_lock(struct file *file, int cmd, struct file_lock *fl)
 		cmd = F_SETLK;
 		fl->fl_type = F_UNLCK;
 	}
+	if (unlikely(test_bit(SDF_SHUTDOWN, &sdp->sd_flags)))
+		return -EIO;
 	if (IS_GETLK(cmd))
-		return gfs2_lm_plock_get(sdp, &name, file, fl);
+		return dlm_posix_get(ls, ip->i_no_addr, file, fl);
 	else if (fl->fl_type == F_UNLCK)
-		return gfs2_lm_punlock(sdp, &name, file, fl);
+		return dlm_posix_unlock(ls, ip->i_no_addr, file, fl);
 	else
-		return gfs2_lm_plock(sdp, &name, file, cmd, fl);
+		return dlm_posix_lock(ls, ip->i_no_addr, file, cmd, fl);
 }
 
 static int do_flock(struct file *file, int cmd, struct file_lock *fl)
@@ -742,7 +714,7 @@ static int gfs2_flock(struct file *file, int cmd, struct file_lock *fl)
 	}
 }
 
-const struct file_operations gfs2_file_fops = {
+const struct file_operations *gfs2_file_fops = &(const struct file_operations){
 	.llseek		= gfs2_llseek,
 	.read		= do_sync_read,
 	.aio_read	= generic_file_aio_read,
@@ -760,7 +732,7 @@ const struct file_operations gfs2_file_fops = {
 	.setlease	= gfs2_setlease,
 };
 
-const struct file_operations gfs2_dir_fops = {
+const struct file_operations *gfs2_dir_fops = &(const struct file_operations){
 	.readdir	= gfs2_readdir,
 	.unlocked_ioctl	= gfs2_ioctl,
 	.open		= gfs2_open,
@@ -770,7 +742,9 @@ const struct file_operations gfs2_dir_fops = {
 	.flock		= gfs2_flock,
 };
 
-const struct file_operations gfs2_file_fops_nolock = {
+#endif /* CONFIG_GFS2_FS_LOCKING_DLM */
+
+const struct file_operations *gfs2_file_fops_nolock = &(const struct file_operations){
 	.llseek		= gfs2_llseek,
 	.read		= do_sync_read,
 	.aio_read	= generic_file_aio_read,
@@ -783,10 +757,10 @@ const struct file_operations gfs2_file_fops_nolock = {
 	.fsync		= gfs2_fsync,
 	.splice_read	= generic_file_splice_read,
 	.splice_write	= generic_file_splice_write,
-	.setlease	= gfs2_setlease,
+	.setlease	= generic_setlease,
 };
 
-const struct file_operations gfs2_dir_fops_nolock = {
+const struct file_operations *gfs2_dir_fops_nolock = &(const struct file_operations){
 	.readdir	= gfs2_readdir,
 	.unlocked_ioctl	= gfs2_ioctl,
 	.open		= gfs2_open,
diff --git a/fs/gfs2/ops_inode.h b/fs/gfs2/ops_inode.h
index 14b4b79..e92ed63 100644
--- a/fs/gfs2/ops_inode.h
+++ b/fs/gfs2/ops_inode.h
@@ -11,15 +11,30 @@
 #define __OPS_INODE_DOT_H__
 
 #include <linux/fs.h>
+#include "incore.h"
 
 extern const struct inode_operations gfs2_file_iops;
 extern const struct inode_operations gfs2_dir_iops;
 extern const struct inode_operations gfs2_symlink_iops;
-extern const struct file_operations gfs2_file_fops;
-extern const struct file_operations gfs2_dir_fops;
-extern const struct file_operations gfs2_file_fops_nolock;
-extern const struct file_operations gfs2_dir_fops_nolock;
+extern const struct file_operations *gfs2_file_fops_nolock;
+extern const struct file_operations *gfs2_dir_fops_nolock;
 
 extern void gfs2_set_inode_flags(struct inode *inode);
 
+#ifdef CONFIG_GFS2_FS_LOCKING_DLM
+extern const struct file_operations *gfs2_file_fops;
+extern const struct file_operations *gfs2_dir_fops;
+static inline int gfs2_localflocks(const struct gfs2_sbd *sdp)
+{
+	return sdp->sd_args.ar_localflocks;
+}
+#else /* Single node only */
+#define gfs2_file_fops NULL
+#define gfs2_dir_fops NULL
+static inline int gfs2_localflocks(const struct gfs2_sbd *sdp)
+{
+	return 1;
+}
+#endif /* CONFIG_GFS2_FS_LOCKING_DLM */
+
 #endif /* __OPS_INODE_DOT_H__ */
diff --git a/include/linux/lm_interface.h b/include/linux/lm_interface.h
index 2ed8fa1..e5347c0 100644
--- a/include/linux/lm_interface.h
+++ b/include/linux/lm_interface.h
@@ -208,19 +208,6 @@ struct lm_lockops {
 	void (*lm_unhold_lvb) (void *lock, char *lvb);
 
 	/*
-	 * Posix Lock oriented operations
-	 */
-
-	int (*lm_plock_get) (void *lockspace, struct lm_lockname *name,
-			     struct file *file, struct file_lock *fl);
-
-	int (*lm_plock) (void *lockspace, struct lm_lockname *name,
-			 struct file *file, int cmd, struct file_lock *fl);
-
-	int (*lm_punlock) (void *lockspace, struct lm_lockname *name,
-			   struct file *file, struct file_lock *fl);
-
-	/*
 	 * Client oriented operations
 	 */
 
-- 
1.5.1.2




* [Cluster-devel] [PATCH] [GFS2] Remove all_list from lock_dlm
  2008-06-25  8:47     ` [Cluster-devel] [PATCH] [GFS2] Clean up the plock operations swhiteho
@ 2008-06-25  8:47       ` swhiteho
  0 siblings, 0 replies; 7+ messages in thread
From: swhiteho @ 2008-06-25  8:47 UTC (permalink / raw)
  To: cluster-devel.redhat.com

From: Steven Whitehouse <swhiteho@redhat.com>

I discovered that we had a list onto which every lock_dlm
lock was being put. Its only function was to check whether
we had any locks left after umount. Since there was already
a counter for that purpose, I removed the list. The saving
is sizeof(struct list_head) per glock - well worth having.
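
For a rough sense of that saving, here is a trivial standalone
check using a stand-in structure with the same layout as the
kernel's struct list_head (two pointers); on a typical 64-bit
build it prints 16:

#include <stdio.h>

/* Stand-in with the same layout as struct list_head: two pointers. */
struct list_head_like {
	void *next, *prev;
};

int main(void)
{
	/* 16 bytes on a typical 64-bit build, 8 on 32-bit. */
	printf("saving per lock: %zu bytes\n", sizeof(struct list_head_like));
	return 0;
}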

Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>

diff --git a/fs/gfs2/locking/dlm/lock.c b/fs/gfs2/locking/dlm/lock.c
index 894df45..2482c90 100644
--- a/fs/gfs2/locking/dlm/lock.c
+++ b/fs/gfs2/locking/dlm/lock.c
@@ -58,9 +58,6 @@ static void gdlm_delete_lp(struct gdlm_lock *lp)
 	spin_lock(&ls->async_lock);
 	if (!list_empty(&lp->delay_list))
 		list_del_init(&lp->delay_list);
-	gdlm_assert(!list_empty(&lp->all_list), "%x,%llx", lp->lockname.ln_type,
-		    (unsigned long long)lp->lockname.ln_number);
-	list_del_init(&lp->all_list);
 	ls->all_locks_count--;
 	spin_unlock(&ls->async_lock);
 
@@ -397,7 +394,6 @@ static int gdlm_create_lp(struct gdlm_ls *ls, struct lm_lockname *name,
 	INIT_LIST_HEAD(&lp->delay_list);
 
 	spin_lock(&ls->async_lock);
-	list_add(&lp->all_list, &ls->all_locks);
 	ls->all_locks_count++;
 	spin_unlock(&ls->async_lock);
 
@@ -710,22 +706,3 @@ void gdlm_submit_delayed(struct gdlm_ls *ls)
 	wake_up(&ls->thread_wait);
 }
 
-int gdlm_release_all_locks(struct gdlm_ls *ls)
-{
-	struct gdlm_lock *lp, *safe;
-	int count = 0;
-
-	spin_lock(&ls->async_lock);
-	list_for_each_entry_safe(lp, safe, &ls->all_locks, all_list) {
-		list_del_init(&lp->all_list);
-
-		if (lp->lvb && lp->lvb != junk_lvb)
-			kfree(lp->lvb);
-		kfree(lp);
-		count++;
-	}
-	spin_unlock(&ls->async_lock);
-
-	return count;
-}
-
diff --git a/fs/gfs2/locking/dlm/lock_dlm.h b/fs/gfs2/locking/dlm/lock_dlm.h
index 845a27f..21cf466 100644
--- a/fs/gfs2/locking/dlm/lock_dlm.h
+++ b/fs/gfs2/locking/dlm/lock_dlm.h
@@ -74,7 +74,6 @@ struct gdlm_ls {
 	spinlock_t		async_lock;
 	struct list_head	delayed;
 	struct list_head	submit;
-	struct list_head	all_locks;
 	u32		all_locks_count;
 	wait_queue_head_t	wait_control;
 	struct task_struct	*thread;
@@ -112,7 +111,6 @@ struct gdlm_lock {
 	unsigned long		flags;		/* lock_dlm flags LFL_ */
 
 	struct list_head	delay_list;	/* delayed */
-	struct list_head	all_list;	/* all locks for the fs */
 	struct gdlm_lock	*hold_null;	/* NL lock for hold_lvb */
 };
 
diff --git a/fs/gfs2/locking/dlm/mount.c b/fs/gfs2/locking/dlm/mount.c
index 0f06a11..c03b8a9 100644
--- a/fs/gfs2/locking/dlm/mount.c
+++ b/fs/gfs2/locking/dlm/mount.c
@@ -28,7 +28,6 @@ static struct gdlm_ls *init_gdlm(lm_callback_t cb, struct gfs2_sbd *sdp,
 	spin_lock_init(&ls->async_lock);
 	INIT_LIST_HEAD(&ls->delayed);
 	INIT_LIST_HEAD(&ls->submit);
-	INIT_LIST_HEAD(&ls->all_locks);
 	init_waitqueue_head(&ls->thread_wait);
 	init_waitqueue_head(&ls->wait_control);
 	ls->jid = -1;
@@ -173,7 +172,6 @@ out:
 static void gdlm_unmount(void *lockspace)
 {
 	struct gdlm_ls *ls = lockspace;
-	int rv;
 
 	log_debug("unmount flags %lx", ls->flags);
 
@@ -187,9 +185,7 @@ static void gdlm_unmount(void *lockspace)
 	gdlm_kobject_release(ls);
 	dlm_release_lockspace(ls->dlm_lockspace, 2);
 	gdlm_release_threads(ls);
-	rv = gdlm_release_all_locks(ls);
-	if (rv)
-		log_info("gdlm_unmount: %d stray locks freed", rv);
+	BUG_ON(ls->all_locks_count);
 out:
 	kfree(ls);
 }
-- 
1.5.1.2




* [Cluster-devel] [GFS2] Some patches to clean up lock_dlm etc.
  2008-06-25  8:47 [Cluster-devel] [GFS2] Some patches to clean up lock_dlm etc swhiteho
  2008-06-25  8:47 ` [Cluster-devel] [PATCH] [GFS2] Remove remote lock dropping code swhiteho
@ 2008-06-25 10:11 ` Fabio M. Di Nitto
  2008-06-25 10:30   ` Steven Whitehouse
  1 sibling, 1 reply; 7+ messages in thread
From: Fabio M. Di Nitto @ 2008-06-25 10:11 UTC (permalink / raw)
  To: cluster-devel.redhat.com

On Wed, 25 Jun 2008, swhiteho at redhat.com wrote:

> Hi,
>
> Here are some patches which I've collected up in preparation for
> removal of the lock_dlm module (the plan being to merge it into
> the main GFS2 module). I've not yet made a decision about whether
> they will go into this merge window or the next, and that might
> well depend upon the timing of the final patch in the series which
> actually does the final removal of the module.

In order to have gfs1 available for F10, we need those patches in for
2.6.27, which is what I assume will be the final kernel for F10;
otherwise all the effort becomes moot till F11.

Fabio

--
I'm going to make him an offer he can't refuse.




* [Cluster-devel] [GFS2] Some patches to clean up lock_dlm etc.
  2008-06-25 10:11 ` [Cluster-devel] [GFS2] Some patches to clean up lock_dlm etc Fabio M. Di Nitto
@ 2008-06-25 10:30   ` Steven Whitehouse
  0 siblings, 0 replies; 7+ messages in thread
From: Steven Whitehouse @ 2008-06-25 10:30 UTC (permalink / raw)
  To: cluster-devel.redhat.com

Hi,

On Wed, 2008-06-25 at 12:11 +0200, Fabio M. Di Nitto wrote:
> On Wed, 25 Jun 2008, swhiteho at redhat.com wrote:
> 
> > Hi,
> >
> > Here are some patches which I've collected up in preparation for
> > removal of the lock_dlm module (the plan being to merge it into
> > the main GFS2 module). I've not yet made a decision about whether
> > they will go into this merge window or the next, and that might
> > well depend upon the timing of the final patch in the series which
> > actually does the final removal of the module.
> 
> In order to have gfs1 available for F10, we need those patches in for
> 2.6.27, which is what I assume will be the final kernel for F10;
> otherwise all the effort becomes moot till F11.
> 
> Fabio
> 
> --
> I'm going to make him an offer he can't refuse.

It's not far off now. If it's possible to do it for 2.6.27, then it will
be done. Since it's likely that the merge window won't open for a few
days yet, the chances are that there will be time, but I can make no
promises at this stage since we have a lot of checking to do to ensure
that there are no noticeable changes to the userland interfaces,

Steve.





