* [RFC PATCH 00/12] locks: saner method for managing file locks
@ 2014-09-10 14:28 Jeff Layton
2014-09-10 14:28 ` [RFC PATCH 01/12] locks: add a new struct file_locking_context pointer to struct inode Jeff Layton
` (11 more replies)
0 siblings, 12 replies; 21+ messages in thread
From: Jeff Layton @ 2014-09-10 14:28 UTC (permalink / raw)
To: linux-fsdevel; +Cc: linux-kernel, Al Viro, Christoph Hellwig, bfields
We currently manage all file_locks via a singly-linked list. This is
problematic for a number of reasons:
- we have to protect all file locks with the same spinlock (or
equivalent). Currently that uses the i_lock, but Christoph has voiced
objections due to the potential for contention with other i_lock
users. He'd like to see us move to using a different lock.
- we have to walk through irrelevant file locks in order to get to the
ones we're interested in. For instance, POSIX locks are at the end
of the list, so we have to skip over all of the flock locks and
leases before we can work with them.
- the singly-linked list is primitive and difficult to work with. We
have to keep track of the "before" pointer and it's easy to get that
wrong.
Cleaning all of this up is complicated by the fact that no one really
wants to grow struct inode in order to do so. We have a single pointer
in the inode now, and I don't think we want to use any more than that.
One possibility that Trond raised was to move this to an hlist, but
that doesn't do anything about the desire for a new spinlock.
This patchset takes the approach of replacing the i_flock list with a
new struct file_lock_context that is allocated when we intend to add a
new file lock to an inode. The file_lock_context is only freed when we
destroy the inode.
Within that, we have separate (and standard!) lists for each lock type,
and a dedicated spinlock for managing those lists. In principle we could
even consider adding separate locks for each, but I didn't bother with
that for now.
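The per-type lists in the proposed file_lock_context are ordinary doubly-linked lists, which is what makes deletion simple. A minimal user-space sketch of the idea (the `toy_*` structs and helpers here are illustrative stand-ins, not the kernel's list.h implementation):

```c
#include <assert.h>

/* Toy analogue of the kernel's struct list_head: circular and doubly
 * linked, so unlinking a node needs only the node itself -- no
 * "before" pointer to carry around, unlike the old i_flock chain. */
struct toy_list_head {
	struct toy_list_head *next, *prev;
};

static void toy_list_init(struct toy_list_head *h)
{
	h->next = h->prev = h;
}

static void toy_list_add(struct toy_list_head *n, struct toy_list_head *head)
{
	n->next = head->next;
	n->prev = head;
	head->next->prev = n;
	head->next = n;
}

static void toy_list_del(struct toy_list_head *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
	toy_list_init(n);		/* like list_del_init() */
}

static int toy_list_count(const struct toy_list_head *head)
{
	int n = 0;
	const struct toy_list_head *p;

	for (p = head->next; p != head; p = p->next)
		n++;
	return n;
}

/* Rough shape of the proposed context: one list per lock type, so a
 * walker never has to skip past locks of the other types. */
struct toy_lock_context {
	struct toy_list_head flc_flock;
	struct toy_list_head flc_posix;
	/* a dedicated spinlock would live here in the kernel */
};
```

With separate heads, the flock code only ever walks flc_flock and the POSIX code only flc_posix, which is the point of the split.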
The code is still pretty "raw" and isn't bisectable, so this is just an
RFC for the basic approach. It's probably v3.19 material at best.
Anyone have thoughts or comments on the basic approach?
Jeff Layton (12):
locks: add a new struct file_locking_context pointer to struct inode
locks: add new struct list_head to struct file_lock
locks: have locks_release_file use flock_lock_file to release generic
flock locks
locks: move flock locks to file_lock_context
locks: convert posix locks to file_lock_context
locks: convert lease handling to file_lock_context
ceph: convert to looking for locks in struct file_lock_context
nfs: convert lock handling to use file_lock_context
cifs: convert it to use file_lock_context
lockd: convert it to use file_lock_context
nfsd: convert to file_lock_context
locks: remove i_flock field from struct inode
fs/ceph/locks.c | 45 +++--
fs/ceph/mds_client.c | 4 -
fs/cifs/file.c | 34 ++--
fs/inode.c | 3 +-
fs/lockd/svcsubs.c | 26 ++-
fs/locks.c | 504 ++++++++++++++++++++++++++-------------------------
fs/nfs/delegation.c | 37 ++--
fs/nfs/nfs4state.c | 24 ++-
fs/nfs/pagelist.c | 3 +-
fs/nfs/write.c | 39 +++-
fs/nfsd/nfs4state.c | 18 +-
fs/read_write.c | 2 +-
include/linux/fs.h | 25 ++-
13 files changed, 425 insertions(+), 339 deletions(-)
--
1.9.3
^ permalink raw reply [flat|nested] 21+ messages in thread
* [RFC PATCH 01/12] locks: add a new struct file_locking_context pointer to struct inode
From: Jeff Layton @ 2014-09-10 14:28 UTC (permalink / raw)
To: linux-fsdevel; +Cc: linux-kernel, Al Viro, Christoph Hellwig, bfields
The current scheme of using the i_flock list is really difficult to
manage. Start conversion to a new scheme to eventually replace it.
We start by adding a new i_flctx pointer to struct inode. For now, it
lives in parallel with the i_flock list, but will eventually replace it. The idea is
to allocate a structure to sit in that pointer and act as a locus for
all things file locking.
We'll manage it with RCU and the i_lock, so that things like the
break_lease() check can use RCU to test for its presence.
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
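The allocation in locks_get_lock_context() below is an optimistic double-checked pattern: test the pointer locklessly, allocate outside the lock, install under the i_lock only if it's still unset, and free the allocation if another task won the race. A user-space sketch of that pattern (a pthread mutex stands in for the i_lock, and the `toy_*` names are illustrative, not the kernel's):

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

struct toy_ctx { int dummy; };

struct toy_inode {
	pthread_mutex_t i_lock;		/* stands in for inode->i_lock */
	struct toy_ctx *i_flctx;
};

/* Same shape as the patch's locks_get_lock_context(): an unlocked
 * fast-path check, allocation outside the lock, installation under
 * the lock only if no one beat us to it, and the loser of a race
 * frees its now-unneeded allocation. */
static struct toy_ctx *toy_get_ctx(struct toy_inode *inode)
{
	struct toy_ctx *new;

	if (inode->i_flctx)
		return inode->i_flctx;

	new = calloc(1, sizeof(*new));
	if (!new)
		return NULL;

	pthread_mutex_lock(&inode->i_lock);
	if (!inode->i_flctx) {
		inode->i_flctx = new;	/* we won: install it */
		new = NULL;		/* consumed; nothing to free */
	}
	pthread_mutex_unlock(&inode->i_lock);
	free(new);			/* no-op unless we lost the race */
	return inode->i_flctx;
}
```

In the kernel version the swap() plus conditional kmem_cache_free() play the same role, and RCU readers can test i_flctx without taking any lock at all.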
---
fs/inode.c | 3 ++-
fs/locks.c | 38 ++++++++++++++++++++++++++++++++++++++
include/linux/fs.h | 13 +++++++++++++
3 files changed, 53 insertions(+), 1 deletion(-)
diff --git a/fs/inode.c b/fs/inode.c
index 26753ba7b6d6..e87b47ab69a2 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -192,7 +192,7 @@ int inode_init_always(struct super_block *sb, struct inode *inode)
#ifdef CONFIG_FSNOTIFY
inode->i_fsnotify_mask = 0;
#endif
-
+ inode->i_flctx = NULL;
this_cpu_inc(nr_inodes);
return 0;
@@ -235,6 +235,7 @@ void __destroy_inode(struct inode *inode)
BUG_ON(inode_has_buffers(inode));
security_inode_free(inode);
fsnotify_inode_delete(inode);
+ locks_free_lock_context(inode->i_flctx);
if (!inode->i_nlink) {
WARN_ON(atomic_long_read(&inode->i_sb->s_remove_count) == 0);
atomic_long_dec(&inode->i_sb->s_remove_count);
diff --git a/fs/locks.c b/fs/locks.c
index 735b8d3fa78c..6aa36bf21cc9 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -202,8 +202,43 @@ static DEFINE_HASHTABLE(blocked_hash, BLOCKED_HASH_BITS);
*/
static DEFINE_SPINLOCK(blocked_lock_lock);
+static struct kmem_cache *flctx_cache __read_mostly;
static struct kmem_cache *filelock_cache __read_mostly;
+static struct file_lock_context *
+locks_get_lock_context(struct inode *inode)
+{
+ struct file_lock_context *new = NULL;
+
+ if (inode->i_flctx)
+ goto out;
+
+ new = kmem_cache_alloc(flctx_cache, GFP_KERNEL);
+ if (!new)
+ goto out;
+
+ spin_lock(&inode->i_lock);
+ if (likely(!inode->i_flctx)) {
+ spin_lock_init(&new->flc_lock);
+ INIT_LIST_HEAD(&new->flc_flock);
+ swap(inode->i_flctx, new);
+ }
+ spin_unlock(&inode->i_lock);
+ if (new)
+ kmem_cache_free(flctx_cache, new);
+out:
+ return inode->i_flctx;
+}
+
+void
+locks_free_lock_context(struct file_lock_context *ctx)
+{
+ if (ctx) {
+ WARN_ON_ONCE(!list_empty(&ctx->flc_flock));
+ kmem_cache_free(flctx_cache, ctx);
+ }
+}
+
static void locks_init_lock_heads(struct file_lock *fl)
{
INIT_HLIST_NODE(&fl->fl_link);
@@ -2621,6 +2656,9 @@ static int __init filelock_init(void)
{
int i;
+ flctx_cache = kmem_cache_create("file_lock_ctx",
+ sizeof(struct file_lock_context), 0, SLAB_PANIC, NULL);
+
filelock_cache = kmem_cache_create("file_lock_cache",
sizeof(struct file_lock), 0, SLAB_PANIC, NULL);
diff --git a/include/linux/fs.h b/include/linux/fs.h
index bb9484ae1eef..090212e26d12 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -594,6 +594,7 @@ struct inode {
#endif
const struct file_operations *i_fop; /* former ->i_op->default_file_ops */
struct file_lock *i_flock;
+ struct file_lock_context *i_flctx;
struct address_space i_data;
#ifdef CONFIG_QUOTA
struct dquot *i_dquot[MAXQUOTAS];
@@ -933,6 +934,11 @@ struct file_lock {
} fl_u;
};
+struct file_lock_context {
+ spinlock_t flc_lock;
+ struct list_head flc_flock;
+};
+
/* The following constant reflects the upper bound of the file/locking space */
#ifndef OFFSET_MAX
#define INT_LIMIT(x) (~((x)1 << (sizeof(x)*8 - 1)))
@@ -959,6 +965,7 @@ extern int fcntl_setlease(unsigned int fd, struct file *filp, long arg);
extern int fcntl_getlease(struct file *filp);
/* fs/locks.c */
+void locks_free_lock_context(struct file_lock_context *ctx);
void locks_free_lock(struct file_lock *fl);
extern void locks_init_lock(struct file_lock *);
extern struct file_lock * locks_alloc_lock(void);
@@ -1016,6 +1023,12 @@ static inline int fcntl_getlease(struct file *filp)
return 0;
}
+static inline void
+locks_free_lock_context(struct file_lock_context *ctx)
+{
+ return;
+}
+
static inline void locks_init_lock(struct file_lock *fl)
{
return;
--
1.9.3
* [RFC PATCH 02/12] locks: add new struct list_head to struct file_lock
From: Jeff Layton @ 2014-09-10 14:28 UTC (permalink / raw)
To: linux-fsdevel; +Cc: linux-kernel, Al Viro, Christoph Hellwig, bfields
...that we can use to queue file_locks onto the per-ctx list_heads
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
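List membership here works through an embedded node: fl_list sits inside struct file_lock, and the kernel's list_entry()/container_of() recovers the containing lock from the node, which is what list_for_each_entry() does on each iteration. A user-space sketch of that embedding (the `toy_*` names are illustrative, not the kernel's definitions):

```c
#include <assert.h>
#include <stddef.h>

struct toy_list_head {
	struct toy_list_head *next, *prev;
};

/* Minimal container_of()/list_entry(), as in the kernel: step back
 * from the embedded member to the start of the containing struct. */
#define toy_container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))
#define toy_list_entry(ptr, type, member) \
	toy_container_of(ptr, type, member)

/* Toy file_lock: the embedded fl_list node is what actually sits on
 * a per-context list; list_entry() gets back to the lock itself. */
struct toy_file_lock {
	int fl_type;
	struct toy_list_head fl_list;
};

static void toy_list_init(struct toy_list_head *h)
{
	h->next = h->prev = h;
}

static void toy_list_add(struct toy_list_head *n, struct toy_list_head *head)
{
	n->next = head->next;
	n->prev = head;
	head->next->prev = n;
	head->next = n;
}
```

Because the node is embedded rather than separately allocated, putting a lock on a dispose list (as locks_dispose_list() does) costs no memory allocation.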
---
fs/locks.c | 6 +++---
include/linux/fs.h | 1 +
2 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/fs/locks.c b/fs/locks.c
index 6aa36bf21cc9..af55bd35be5f 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -292,8 +292,8 @@ locks_dispose_list(struct list_head *dispose)
struct file_lock *fl;
while (!list_empty(dispose)) {
- fl = list_first_entry(dispose, struct file_lock, fl_block);
- list_del_init(&fl->fl_block);
+ fl = list_first_entry(dispose, struct file_lock, fl_list);
+ list_del_init(&fl->fl_list);
locks_free_lock(fl);
}
}
@@ -726,7 +726,7 @@ static void locks_delete_lock(struct file_lock **thisfl_p,
locks_unlink_lock(thisfl_p);
if (dispose)
- list_add(&fl->fl_block, dispose);
+ list_add(&fl->fl_list, dispose);
else
locks_free_lock(fl);
}
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 090212e26d12..303dfcd3b663 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -904,6 +904,7 @@ int locks_in_grace(struct net *);
*/
struct file_lock {
struct file_lock *fl_next; /* singly linked list for this inode */
+ struct list_head fl_list; /* link into file_lock_context */
struct hlist_node fl_link; /* node in global lists */
struct list_head fl_block; /* circular list of blocked processes */
fl_owner_t fl_owner;
--
1.9.3
* [RFC PATCH 03/12] locks: have locks_release_file use flock_lock_file to release generic flock locks
From: Jeff Layton @ 2014-09-10 14:28 UTC (permalink / raw)
To: linux-fsdevel; +Cc: linux-kernel, Al Viro, Christoph Hellwig, bfields
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
---
fs/locks.c | 49 +++++++++++++++++++++++++++++++------------------
1 file changed, 31 insertions(+), 18 deletions(-)
diff --git a/fs/locks.c b/fs/locks.c
index af55bd35be5f..5e8b865814a2 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -2393,6 +2393,30 @@ void locks_remove_posix(struct file *filp, fl_owner_t owner)
EXPORT_SYMBOL(locks_remove_posix);
+static void
+locks_remove_flock(struct file *filp)
+{
+ struct file_lock fl = {
+ .fl_owner = filp,
+ .fl_pid = current->tgid,
+ .fl_file = filp,
+ .fl_flags = FL_FLOCK,
+ .fl_type = F_UNLCK,
+ .fl_end = OFFSET_MAX,
+ };
+
+ if (!file_inode(filp)->i_flock)
+ return;
+
+ if (filp->f_op->flock)
+ filp->f_op->flock(filp, F_SETLKW, &fl);
+ else
+ flock_lock_file(filp, &fl);
+
+ if (fl.fl_ops && fl.fl_ops->fl_release_private)
+ fl.fl_ops->fl_release_private(&fl);
+}
+
/*
* This function is called on the last close of an open file.
*/
@@ -2403,24 +2427,14 @@ void locks_remove_file(struct file *filp)
struct file_lock **before;
LIST_HEAD(dispose);
- if (!inode->i_flock)
- return;
-
+ /* remove any OFD locks */
locks_remove_posix(filp, filp);
- if (filp->f_op->flock) {
- struct file_lock fl = {
- .fl_owner = filp,
- .fl_pid = current->tgid,
- .fl_file = filp,
- .fl_flags = FL_FLOCK,
- .fl_type = F_UNLCK,
- .fl_end = OFFSET_MAX,
- };
- filp->f_op->flock(filp, F_SETLKW, &fl);
- if (fl.fl_ops && fl.fl_ops->fl_release_private)
- fl.fl_ops->fl_release_private(&fl);
- }
+ /* remove flock locks */
+ locks_remove_flock(filp);
+
+ if (!inode->i_flock)
+ return;
spin_lock(&inode->i_lock);
before = &inode->i_flock;
@@ -2440,8 +2454,7 @@ void locks_remove_file(struct file *filp)
* some info about it and then just remove it from
* the list.
*/
- WARN(!IS_FLOCK(fl),
- "leftover lock: dev=%u:%u ino=%lu type=%hhd flags=0x%x start=%lld end=%lld\n",
+ WARN(1, "leftover lock: dev=%u:%u ino=%lu type=%hhd flags=0x%x start=%lld end=%lld\n",
MAJOR(inode->i_sb->s_dev),
MINOR(inode->i_sb->s_dev), inode->i_ino,
fl->fl_type, fl->fl_flags,
--
1.9.3
* [RFC PATCH 04/12] locks: move flock locks to file_lock_context
From: Jeff Layton @ 2014-09-10 14:28 UTC (permalink / raw)
To: linux-fsdevel; +Cc: linux-kernel, Al Viro, Christoph Hellwig, bfields
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
---
fs/locks.c | 63 +++++++++++++++++++++++++++++++++++++++-----------------------
1 file changed, 40 insertions(+), 23 deletions(-)
diff --git a/fs/locks.c b/fs/locks.c
index 5e8b865814a2..7f8c2c68a769 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -242,6 +242,7 @@ locks_free_lock_context(struct file_lock_context *ctx)
static void locks_init_lock_heads(struct file_lock *fl)
{
INIT_HLIST_NODE(&fl->fl_link);
+ INIT_LIST_HEAD(&fl->fl_list);
INIT_LIST_HEAD(&fl->fl_block);
init_waitqueue_head(&fl->fl_wait);
}
@@ -278,6 +279,7 @@ EXPORT_SYMBOL_GPL(locks_release_private);
void locks_free_lock(struct file_lock *fl)
{
BUG_ON(waitqueue_active(&fl->fl_wait));
+ BUG_ON(!list_empty(&fl->fl_list));
BUG_ON(!list_empty(&fl->fl_block));
BUG_ON(!hlist_unhashed(&fl->fl_link));
@@ -686,6 +688,14 @@ static void locks_insert_lock(struct file_lock **pos, struct file_lock *fl)
locks_insert_global_locks(fl);
}
+static void
+locks_insert_lock_ctx(struct file_lock *fl, struct list_head *after)
+{
+ fl->fl_nspid = get_pid(task_tgid(current));
+ list_add(&fl->fl_list, after);
+ locks_insert_global_locks(fl);
+}
+
/**
* locks_delete_lock - Delete a lock and then free it.
* @thisfl_p: pointer that points to the fl_next field of the previous
@@ -731,6 +741,18 @@ static void locks_delete_lock(struct file_lock **thisfl_p,
locks_free_lock(fl);
}
+static void
+locks_delete_lock_ctx(struct file_lock *fl, struct list_head *dispose)
+{
+ locks_delete_global_locks(fl);
+ if (fl->fl_nspid) {
+ put_pid(fl->fl_nspid);
+ fl->fl_nspid = NULL;
+ }
+ locks_wake_up_blocks(fl);
+ list_move(&fl->fl_list, dispose);
+}
+
/* Determine if lock sys_fl blocks lock caller_fl. Common functionality
* checks for shared/exclusive status of overlapping locks.
*/
@@ -880,34 +902,34 @@ static int posix_locks_deadlock(struct file_lock *caller_fl,
static int flock_lock_file(struct file *filp, struct file_lock *request)
{
struct file_lock *new_fl = NULL;
- struct file_lock **before;
- struct inode * inode = file_inode(filp);
+ struct file_lock *fl;
+ struct file_lock_context *ctx;
+ struct inode *inode = file_inode(filp);
int error = 0;
- int found = 0;
+ bool found = false;
LIST_HEAD(dispose);
+ ctx = locks_get_lock_context(inode);
+ if (!ctx)
+ return -ENOMEM;
+
if (!(request->fl_flags & FL_ACCESS) && (request->fl_type != F_UNLCK)) {
new_fl = locks_alloc_lock();
if (!new_fl)
return -ENOMEM;
}
- spin_lock(&inode->i_lock);
+ spin_lock(&ctx->flc_lock);
if (request->fl_flags & FL_ACCESS)
goto find_conflict;
- for_each_lock(inode, before) {
- struct file_lock *fl = *before;
- if (IS_POSIX(fl))
- break;
- if (IS_LEASE(fl))
- continue;
+ list_for_each_entry(fl, &ctx->flc_flock, fl_list) {
if (filp != fl->fl_file)
continue;
if (request->fl_type == fl->fl_type)
goto out;
- found = 1;
- locks_delete_lock(before, &dispose);
+ found = true;
+ locks_delete_lock_ctx(fl, &dispose);
break;
}
@@ -922,18 +944,13 @@ static int flock_lock_file(struct file *filp, struct file_lock *request)
* give it the opportunity to lock the file.
*/
if (found) {
- spin_unlock(&inode->i_lock);
+ spin_unlock(&ctx->flc_lock);
cond_resched();
- spin_lock(&inode->i_lock);
+ spin_lock(&ctx->flc_lock);
}
find_conflict:
- for_each_lock(inode, before) {
- struct file_lock *fl = *before;
- if (IS_POSIX(fl))
- break;
- if (IS_LEASE(fl))
- continue;
+ list_for_each_entry(fl, &ctx->flc_flock, fl_list) {
if (!flock_locks_conflict(request, fl))
continue;
error = -EAGAIN;
@@ -946,12 +963,12 @@ find_conflict:
if (request->fl_flags & FL_ACCESS)
goto out;
locks_copy_lock(new_fl, request);
- locks_insert_lock(before, new_fl);
+ locks_insert_lock_ctx(new_fl, &ctx->flc_flock);
new_fl = NULL;
error = 0;
out:
- spin_unlock(&inode->i_lock);
+ spin_unlock(&ctx->flc_lock);
if (new_fl)
locks_free_lock(new_fl);
locks_dispose_list(&dispose);
@@ -2405,7 +2422,7 @@ locks_remove_flock(struct file *filp)
.fl_end = OFFSET_MAX,
};
- if (!file_inode(filp)->i_flock)
+ if (!file_inode(filp)->i_flctx)
return;
if (filp->f_op->flock)
--
1.9.3
* [RFC PATCH 05/12] locks: convert posix locks to file_lock_context
From: Jeff Layton @ 2014-09-10 14:28 UTC (permalink / raw)
To: linux-fsdevel; +Cc: linux-kernel, Al Viro, Christoph Hellwig, bfields
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
---
fs/locks.c | 119 ++++++++++++++++++++++++++++-------------------------
fs/read_write.c | 2 +-
include/linux/fs.h | 3 +-
3 files changed, 65 insertions(+), 59 deletions(-)
diff --git a/fs/locks.c b/fs/locks.c
index 7f8c2c68a769..750813e708c8 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -157,9 +157,6 @@ static int target_leasetype(struct file_lock *fl)
int leases_enable = 1;
int lease_break_time = 45;
-#define for_each_lock(inode, lockp) \
- for (lockp = &inode->i_flock; *lockp != NULL; lockp = &(*lockp)->fl_next)
-
/*
* The global file_lock_list is only used for displaying /proc/locks, so we
* keep a list on each CPU, with each list protected by its own spinlock via
@@ -221,6 +218,7 @@ locks_get_lock_context(struct inode *inode)
if (likely(!inode->i_flctx)) {
spin_lock_init(&new->flc_lock);
INIT_LIST_HEAD(&new->flc_flock);
+ INIT_LIST_HEAD(&new->flc_posix);
swap(inode->i_flctx, new);
}
spin_unlock(&inode->i_lock);
@@ -235,6 +233,7 @@ locks_free_lock_context(struct file_lock_context *ctx)
{
if (ctx) {
WARN_ON_ONCE(!list_empty(&ctx->flc_flock));
+ WARN_ON_ONCE(!list_empty(&ctx->flc_posix));
kmem_cache_free(flctx_cache, ctx);
}
}
@@ -803,22 +802,27 @@ void
posix_test_lock(struct file *filp, struct file_lock *fl)
{
struct file_lock *cfl;
+ struct file_lock_context *ctx;
struct inode *inode = file_inode(filp);
- spin_lock(&inode->i_lock);
- for (cfl = file_inode(filp)->i_flock; cfl; cfl = cfl->fl_next) {
- if (!IS_POSIX(cfl))
- continue;
- if (posix_locks_conflict(fl, cfl))
- break;
- }
- if (cfl) {
- locks_copy_conflock(fl, cfl);
- if (cfl->fl_nspid)
- fl->fl_pid = pid_vnr(cfl->fl_nspid);
- } else
+ ctx = inode->i_flctx;
+ if (!ctx) {
fl->fl_type = F_UNLCK;
- spin_unlock(&inode->i_lock);
+ return;
+ }
+
+ spin_lock(&ctx->flc_lock);
+ list_for_each_entry(cfl, &ctx->flc_posix, fl_list) {
+ if (posix_locks_conflict(fl, cfl)) {
+ locks_copy_conflock(fl, cfl);
+ if (cfl->fl_nspid)
+ fl->fl_pid = pid_vnr(cfl->fl_nspid);
+ goto out;
+ }
+ }
+ fl->fl_type = F_UNLCK;
+out:
+ spin_unlock(&ctx->flc_lock);
return;
}
EXPORT_SYMBOL(posix_test_lock);
@@ -977,16 +981,20 @@ out:
static int __posix_lock_file(struct inode *inode, struct file_lock *request, struct file_lock *conflock)
{
- struct file_lock *fl;
+ struct file_lock *fl, *tmp;
struct file_lock *new_fl = NULL;
struct file_lock *new_fl2 = NULL;
struct file_lock *left = NULL;
struct file_lock *right = NULL;
- struct file_lock **before;
+ struct file_lock_context *ctx;
int error;
bool added = false;
LIST_HEAD(dispose);
+ ctx = locks_get_lock_context(inode);
+ if (!ctx)
+ return -ENOMEM;
+
/*
* We may need two file_lock structures for this operation,
* so we get them in advance to avoid races.
@@ -1000,15 +1008,14 @@ static int __posix_lock_file(struct inode *inode, struct file_lock *request, str
new_fl2 = locks_alloc_lock();
}
- spin_lock(&inode->i_lock);
+ spin_lock(&ctx->flc_lock);
/*
* New lock request. Walk all POSIX locks and look for conflicts. If
* there are any, either return error or put the request on the
* blocker's list of waiters and the global blocked_hash.
*/
if (request->fl_type != F_UNLCK) {
- for_each_lock(inode, before) {
- fl = *before;
+ list_for_each_entry(fl, &ctx->flc_posix, fl_list) {
if (!IS_POSIX(fl))
continue;
if (!posix_locks_conflict(request, fl))
@@ -1038,29 +1045,25 @@ static int __posix_lock_file(struct inode *inode, struct file_lock *request, str
if (request->fl_flags & FL_ACCESS)
goto out;
- /*
- * Find the first old lock with the same owner as the new lock.
- */
-
- before = &inode->i_flock;
-
- /* First skip locks owned by other processes. */
- while ((fl = *before) && (!IS_POSIX(fl) ||
- !posix_same_owner(request, fl))) {
- before = &fl->fl_next;
+ /* Find the first old lock with the same owner as the new lock */
+ list_for_each_entry(fl, &ctx->flc_posix, fl_list) {
+ if (posix_same_owner(request, fl))
+ break;
}
/* Process locks with this owner. */
- while ((fl = *before) && posix_same_owner(request, fl)) {
- /* Detect adjacent or overlapping regions (if same lock type)
- */
+ list_for_each_entry_safe_from(fl, tmp, &ctx->flc_posix, fl_list) {
+ if (!posix_same_owner(request, fl))
+ break;
+
+ /* Detect adjacent or overlapping regions (if same lock type) */
if (request->fl_type == fl->fl_type) {
/* In all comparisons of start vs end, use
* "start - 1" rather than "end + 1". If end
* is OFFSET_MAX, end + 1 will become negative.
*/
if (fl->fl_end < request->fl_start - 1)
- goto next_lock;
+ continue;
/* If the next lock in the list has entirely bigger
* addresses than the new one, insert the lock here.
*/
@@ -1081,18 +1084,17 @@ static int __posix_lock_file(struct inode *inode, struct file_lock *request, str
else
request->fl_end = fl->fl_end;
if (added) {
- locks_delete_lock(before, &dispose);
+ locks_delete_lock_ctx(fl, &dispose);
continue;
}
request = fl;
added = true;
- }
- else {
+ } else {
/* Processing for different lock types is a bit
* more complex.
*/
if (fl->fl_end < request->fl_start)
- goto next_lock;
+ continue;
if (fl->fl_start > request->fl_end)
break;
if (request->fl_type == F_UNLCK)
@@ -1111,7 +1113,7 @@ static int __posix_lock_file(struct inode *inode, struct file_lock *request, str
* one (This may happen several times).
*/
if (added) {
- locks_delete_lock(before, &dispose);
+ locks_delete_lock_ctx(fl, &dispose);
continue;
}
/*
@@ -1127,15 +1129,11 @@ static int __posix_lock_file(struct inode *inode, struct file_lock *request, str
locks_copy_lock(new_fl, request);
request = new_fl;
new_fl = NULL;
- locks_delete_lock(before, &dispose);
- locks_insert_lock(before, request);
+ locks_insert_lock_ctx(request, &fl->fl_list);
+ locks_delete_lock_ctx(fl, &dispose);
added = true;
}
}
- /* Go on to next lock.
- */
- next_lock:
- before = &fl->fl_next;
}
/*
@@ -1160,7 +1158,7 @@ static int __posix_lock_file(struct inode *inode, struct file_lock *request, str
goto out;
}
locks_copy_lock(new_fl, request);
- locks_insert_lock(before, new_fl);
+ locks_insert_lock_ctx(new_fl, &fl->fl_list);
new_fl = NULL;
}
if (right) {
@@ -1171,7 +1169,7 @@ static int __posix_lock_file(struct inode *inode, struct file_lock *request, str
left = new_fl2;
new_fl2 = NULL;
locks_copy_lock(left, right);
- locks_insert_lock(before, left);
+ locks_insert_lock_ctx(left, &fl->fl_list);
}
right->fl_start = request->fl_end + 1;
locks_wake_up_blocks(right);
@@ -1181,7 +1179,7 @@ static int __posix_lock_file(struct inode *inode, struct file_lock *request, str
locks_wake_up_blocks(left);
}
out:
- spin_unlock(&inode->i_lock);
+ spin_unlock(&ctx->flc_lock);
/*
* Free any unused locks.
*/
@@ -1251,22 +1249,29 @@ EXPORT_SYMBOL(posix_lock_file_wait);
*/
int locks_mandatory_locked(struct file *file)
{
+ int ret;
struct inode *inode = file_inode(file);
+ struct file_lock_context *ctx;
struct file_lock *fl;
+ ctx = inode->i_flctx;
+ if (!ctx)
+ return 0;
+
/*
* Search the lock list for this inode for any POSIX locks.
*/
- spin_lock(&inode->i_lock);
- for (fl = inode->i_flock; fl != NULL; fl = fl->fl_next) {
- if (!IS_POSIX(fl))
- continue;
+ spin_lock(&ctx->flc_lock);
+ ret = 0;
+ list_for_each_entry(fl, &ctx->flc_posix, fl_list) {
if (fl->fl_owner != current->files &&
- fl->fl_owner != file)
+ fl->fl_owner != file) {
+ ret = -EAGAIN;
break;
+ }
}
- spin_unlock(&inode->i_lock);
- return fl ? -EAGAIN : 0;
+ spin_unlock(&ctx->flc_lock);
+ return ret;
}
/**
@@ -2389,7 +2394,7 @@ void locks_remove_posix(struct file *filp, fl_owner_t owner)
* posix_lock_file(). Another process could be setting a lock on this
* file at the same time, but we wouldn't remove that lock anyway.
*/
- if (!file_inode(filp)->i_flock)
+ if (!file_inode(filp)->i_flctx)
return;
lock.fl_type = F_UNLCK;
diff --git a/fs/read_write.c b/fs/read_write.c
index 009d8542a889..0c77edf9fa25 100644
--- a/fs/read_write.c
+++ b/fs/read_write.c
@@ -358,7 +358,7 @@ int rw_verify_area(int read_write, struct file *file, const loff_t *ppos, size_t
return retval;
}
- if (unlikely(inode->i_flock && mandatory_lock(inode))) {
+ if (unlikely(inode->i_flctx && mandatory_lock(inode))) {
retval = locks_mandatory_area(
read_write == READ ? FLOCK_VERIFY_READ : FLOCK_VERIFY_WRITE,
inode, file, pos, count);
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 303dfcd3b663..9cdbda752b13 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -938,6 +938,7 @@ struct file_lock {
struct file_lock_context {
spinlock_t flc_lock;
struct list_head flc_flock;
+ struct list_head flc_posix;
};
/* The following constant reflects the upper bound of the file/locking space */
@@ -1920,7 +1921,7 @@ static inline int locks_verify_truncate(struct inode *inode,
struct file *filp,
loff_t size)
{
- if (inode->i_flock && mandatory_lock(inode))
+ if (inode->i_flctx && mandatory_lock(inode))
return locks_mandatory_area(
FLOCK_VERIFY_WRITE, inode, filp,
size < inode->i_size ? size : inode->i_size,
--
1.9.3
* [RFC PATCH 06/12] locks: convert lease handling to file_lock_context
From: Jeff Layton @ 2014-09-10 14:28 UTC (permalink / raw)
To: linux-fsdevel; +Cc: linux-kernel, Al Viro, Christoph Hellwig, bfields
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
---
fs/locks.c | 275 +++++++++++++++++++++--------------------------------
include/linux/fs.h | 5 +-
2 files changed, 113 insertions(+), 167 deletions(-)
diff --git a/fs/locks.c b/fs/locks.c
index 750813e708c8..866669348a06 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -219,6 +219,7 @@ locks_get_lock_context(struct inode *inode)
spin_lock_init(&new->flc_lock);
INIT_LIST_HEAD(&new->flc_flock);
INIT_LIST_HEAD(&new->flc_posix);
+ INIT_LIST_HEAD(&new->flc_lease);
swap(inode->i_flctx, new);
}
spin_unlock(&inode->i_lock);
@@ -234,6 +235,7 @@ locks_free_lock_context(struct file_lock_context *ctx)
if (ctx) {
WARN_ON_ONCE(!list_empty(&ctx->flc_flock));
WARN_ON_ONCE(!list_empty(&ctx->flc_posix));
+ WARN_ON_ONCE(!list_empty(&ctx->flc_lease));
kmem_cache_free(flctx_cache, ctx);
}
}
@@ -671,22 +673,6 @@ static void locks_wake_up_blocks(struct file_lock *blocker)
spin_unlock(&blocked_lock_lock);
}
-/* Insert file lock fl into an inode's lock list at the position indicated
- * by pos. At the same time add the lock to the global file lock list.
- *
- * Must be called with the i_lock held!
- */
-static void locks_insert_lock(struct file_lock **pos, struct file_lock *fl)
-{
- fl->fl_nspid = get_pid(task_tgid(current));
-
- /* insert into file's list */
- fl->fl_next = *pos;
- *pos = fl;
-
- locks_insert_global_locks(fl);
-}
-
static void
locks_insert_lock_ctx(struct file_lock *fl, struct list_head *after)
{
@@ -695,63 +681,28 @@ locks_insert_lock_ctx(struct file_lock *fl, struct list_head *after)
locks_insert_global_locks(fl);
}
-/**
- * locks_delete_lock - Delete a lock and then free it.
- * @thisfl_p: pointer that points to the fl_next field of the previous
- * inode->i_flock list entry
- *
- * Unlink a lock from all lists and free the namespace reference, but don't
- * free it yet. Wake up processes that are blocked waiting for this lock and
- * notify the FS that the lock has been cleared.
- *
- * Must be called with the i_lock held!
- */
-static void locks_unlink_lock(struct file_lock **thisfl_p)
+static void
+locks_unlink_lock_ctx(struct file_lock *fl)
{
- struct file_lock *fl = *thisfl_p;
-
locks_delete_global_locks(fl);
-
- *thisfl_p = fl->fl_next;
- fl->fl_next = NULL;
-
+ list_del_init(&fl->fl_list);
if (fl->fl_nspid) {
put_pid(fl->fl_nspid);
fl->fl_nspid = NULL;
}
-
locks_wake_up_blocks(fl);
}
-/*
- * Unlink a lock from all lists and free it.
- *
- * Must be called with i_lock held!
- */
-static void locks_delete_lock(struct file_lock **thisfl_p,
- struct list_head *dispose)
+static void
+locks_delete_lock_ctx(struct file_lock *fl, struct list_head *dispose)
{
- struct file_lock *fl = *thisfl_p;
-
- locks_unlink_lock(thisfl_p);
+ locks_unlink_lock_ctx(fl);
if (dispose)
list_add(&fl->fl_list, dispose);
else
locks_free_lock(fl);
}
-static void
-locks_delete_lock_ctx(struct file_lock *fl, struct list_head *dispose)
-{
- locks_delete_global_locks(fl);
- if (fl->fl_nspid) {
- put_pid(fl->fl_nspid);
- fl->fl_nspid = NULL;
- }
- locks_wake_up_blocks(fl);
- list_move(&fl->fl_list, dispose);
-}
-
/* Determine if lock sys_fl blocks lock caller_fl. Common functionality
* checks for shared/exclusive status of overlapping locks.
*/
@@ -1370,7 +1321,7 @@ int lease_modify(struct file_lock **before, int arg, struct list_head *dispose)
printk(KERN_ERR "locks_delete_lock: fasync == %p\n", fl->fl_fasync);
fl->fl_fasync = NULL;
}
- locks_delete_lock(before, dispose);
+ locks_delete_lock_ctx(fl, dispose);
}
return 0;
}
@@ -1386,20 +1337,17 @@ static bool past_time(unsigned long then)
static void time_out_leases(struct inode *inode, struct list_head *dispose)
{
- struct file_lock **before;
- struct file_lock *fl;
+ struct file_lock_context *ctx = inode->i_flctx;
+ struct file_lock *fl, *tmp;
- lockdep_assert_held(&inode->i_lock);
+ lockdep_assert_held(&ctx->flc_lock);
- before = &inode->i_flock;
- while ((fl = *before) && IS_LEASE(fl) && lease_breaking(fl)) {
+ list_for_each_entry_safe(fl, tmp, &ctx->flc_lease, fl_list) {
trace_time_out_leases(inode, fl);
if (past_time(fl->fl_downgrade_time))
- lease_modify(before, F_RDLCK, dispose);
+ lease_modify(&fl, F_RDLCK, dispose);
if (past_time(fl->fl_break_time))
- lease_modify(before, F_UNLCK, dispose);
- if (fl == *before) /* lease_modify may have freed fl */
- before = &fl->fl_next;
+ lease_modify(&fl, F_UNLCK, dispose);
}
}
@@ -1413,11 +1361,12 @@ static bool leases_conflict(struct file_lock *lease, struct file_lock *breaker)
static bool
any_leases_conflict(struct inode *inode, struct file_lock *breaker)
{
+ struct file_lock_context *ctx = inode->i_flctx;
struct file_lock *fl;
- lockdep_assert_held(&inode->i_lock);
+ lockdep_assert_held(&ctx->flc_lock);
- for (fl = inode->i_flock ; fl && IS_LEASE(fl); fl = fl->fl_next) {
+ list_for_each_entry(fl, &ctx->flc_lease, fl_list) {
if (leases_conflict(fl, breaker))
return true;
}
@@ -1441,7 +1390,8 @@ int __break_lease(struct inode *inode, unsigned int mode, unsigned int type)
{
int error = 0;
struct file_lock *new_fl;
- struct file_lock *fl, **before;
+ struct file_lock_context *ctx = inode->i_flctx;
+ struct file_lock *fl;
unsigned long break_time;
int want_write = (mode & O_ACCMODE) != O_RDONLY;
LIST_HEAD(dispose);
@@ -1451,7 +1401,13 @@ int __break_lease(struct inode *inode, unsigned int mode, unsigned int type)
return PTR_ERR(new_fl);
new_fl->fl_flags = type;
- spin_lock(&inode->i_lock);
+ /* typically we will check that ctx is non-NULL before calling */
+ if (!ctx) {
+ WARN_ON_ONCE(1);
+ return error;
+ }
+
+ spin_lock(&ctx->flc_lock);
time_out_leases(inode, &dispose);
@@ -1465,9 +1421,7 @@ int __break_lease(struct inode *inode, unsigned int mode, unsigned int type)
break_time++; /* so that 0 means no break time */
}
- for (before = &inode->i_flock;
- ((fl = *before) != NULL) && IS_LEASE(fl);
- before = &fl->fl_next) {
+ list_for_each_entry(fl, &ctx->flc_lease, fl_list) {
if (!leases_conflict(fl, new_fl))
continue;
if (want_write) {
@@ -1476,17 +1430,16 @@ int __break_lease(struct inode *inode, unsigned int mode, unsigned int type)
fl->fl_flags |= FL_UNLOCK_PENDING;
fl->fl_break_time = break_time;
} else {
- if (lease_breaking(inode->i_flock))
+ if (lease_breaking(fl))
continue;
fl->fl_flags |= FL_DOWNGRADE_PENDING;
fl->fl_downgrade_time = break_time;
}
if (fl->fl_lmops->lm_break(fl))
- locks_delete_lock(before, &dispose);
+ locks_delete_lock_ctx(fl, &dispose);
}
- fl = inode->i_flock;
- if (!fl || !IS_LEASE(fl))
+ if (list_empty(&ctx->flc_lease))
goto out;
if (mode & O_NONBLOCK) {
@@ -1496,18 +1449,19 @@ int __break_lease(struct inode *inode, unsigned int mode, unsigned int type)
}
restart:
- break_time = inode->i_flock->fl_break_time;
+ fl = list_first_entry(&ctx->flc_lease, struct file_lock, fl_list);
+ break_time = fl->fl_break_time;
if (break_time != 0)
break_time -= jiffies;
if (break_time == 0)
break_time++;
- locks_insert_block(inode->i_flock, new_fl);
+ locks_insert_block(fl, new_fl);
trace_break_lease_block(inode, new_fl);
- spin_unlock(&inode->i_lock);
+ spin_unlock(&ctx->flc_lock);
locks_dispose_list(&dispose);
error = wait_event_interruptible_timeout(new_fl->fl_wait,
!new_fl->fl_next, break_time);
- spin_lock(&inode->i_lock);
+ spin_lock(&ctx->flc_lock);
trace_break_lease_unblock(inode, new_fl);
locks_delete_block(new_fl);
if (error >= 0) {
@@ -1519,12 +1473,10 @@ restart:
time_out_leases(inode, &dispose);
if (any_leases_conflict(inode, new_fl))
goto restart;
-
error = 0;
}
-
out:
- spin_unlock(&inode->i_lock);
+ spin_unlock(&ctx->flc_lock);
locks_dispose_list(&dispose);
locks_free_lock(new_fl);
return error;
@@ -1544,14 +1496,17 @@ EXPORT_SYMBOL(__break_lease);
void lease_get_mtime(struct inode *inode, struct timespec *time)
{
bool has_lease = false;
- struct file_lock *flock;
+ struct file_lock_context *ctx = inode->i_flctx;
+ struct file_lock *fl;
- if (inode->i_flock) {
- spin_lock(&inode->i_lock);
- flock = inode->i_flock;
- if (flock && IS_LEASE(flock) && (flock->fl_type == F_WRLCK))
- has_lease = true;
- spin_unlock(&inode->i_lock);
+ if (ctx) {
+ spin_lock(&ctx->flc_lock);
+ if (!list_empty(&ctx->flc_lease)) {
+ fl = list_first_entry(&ctx->flc_lease, struct file_lock, fl_list);
+ if (fl->fl_type == F_WRLCK)
+ has_lease = true;
+ }
+ spin_unlock(&ctx->flc_lock);
}
if (has_lease)
@@ -1589,20 +1544,22 @@ int fcntl_getlease(struct file *filp)
{
struct file_lock *fl;
struct inode *inode = file_inode(filp);
+ struct file_lock_context *ctx = inode->i_flctx;
int type = F_UNLCK;
LIST_HEAD(dispose);
- spin_lock(&inode->i_lock);
- time_out_leases(file_inode(filp), &dispose);
- for (fl = file_inode(filp)->i_flock; fl && IS_LEASE(fl);
- fl = fl->fl_next) {
- if (fl->fl_file == filp) {
+ if (ctx) {
+ spin_lock(&ctx->flc_lock);
+ time_out_leases(file_inode(filp), &dispose);
+ list_for_each_entry(fl, &ctx->flc_lease, fl_list) {
+ if (fl->fl_file != filp)
+ continue;
type = target_leasetype(fl);
break;
}
+ spin_unlock(&ctx->flc_lock);
+ locks_dispose_list(&dispose);
}
- spin_unlock(&inode->i_lock);
- locks_dispose_list(&dispose);
return type;
}
@@ -1635,9 +1592,10 @@ check_conflicting_open(const struct dentry *dentry, const long arg)
static int
generic_add_lease(struct file *filp, long arg, struct file_lock **flp, void **priv)
{
- struct file_lock *fl, **before, **my_before = NULL, *lease;
+ struct file_lock *fl, *my_fl = NULL, *lease;
struct dentry *dentry = filp->f_path.dentry;
struct inode *inode = dentry->d_inode;
+ struct file_lock_context *ctx;
bool is_deleg = (*flp)->fl_flags & FL_DELEG;
int error;
LIST_HEAD(dispose);
@@ -1645,6 +1603,10 @@ generic_add_lease(struct file *filp, long arg, struct file_lock **flp, void **pr
lease = *flp;
trace_generic_add_lease(inode, lease);
+ ctx = locks_get_lock_context(inode);
+ if (!ctx)
+ return -ENOMEM;
+
/*
* In the delegation case we need mutual exclusion with
* a number of operations that take the i_mutex. We trylock
@@ -1663,7 +1625,7 @@ generic_add_lease(struct file *filp, long arg, struct file_lock **flp, void **pr
return -EINVAL;
}
- spin_lock(&inode->i_lock);
+ spin_lock(&ctx->flc_lock);
time_out_leases(inode, &dispose);
error = check_conflicting_open(dentry, arg);
if (error)
@@ -1678,13 +1640,12 @@ generic_add_lease(struct file *filp, long arg, struct file_lock **flp, void **pr
* except for this filp.
*/
error = -EAGAIN;
- for (before = &inode->i_flock;
- ((fl = *before) != NULL) && IS_LEASE(fl);
- before = &fl->fl_next) {
+ list_for_each_entry(fl, &ctx->flc_lease, fl_list) {
if (fl->fl_file == filp) {
- my_before = before;
+ my_fl = fl;
continue;
}
+
/*
* No exclusive leases if someone else has a lease on
* this file:
@@ -1699,9 +1660,8 @@ generic_add_lease(struct file *filp, long arg, struct file_lock **flp, void **pr
goto out;
}
- if (my_before != NULL) {
- lease = *my_before;
- error = lease->fl_lmops->lm_change(my_before, arg, &dispose);
+ if (my_fl != NULL) {
+ error = lease->fl_lmops->lm_change(&my_fl, arg, &dispose);
if (error)
goto out;
goto out_setup;
@@ -1711,7 +1671,7 @@ generic_add_lease(struct file *filp, long arg, struct file_lock **flp, void **pr
if (!leases_enable)
goto out;
- locks_insert_lock(before, lease);
+ locks_insert_lock_ctx(lease, &ctx->flc_lease);
/*
* The check in break_lease() is lockless. It's possible for another
* open to race in after we did the earlier check for a conflicting
@@ -1723,45 +1683,49 @@ generic_add_lease(struct file *filp, long arg, struct file_lock **flp, void **pr
*/
smp_mb();
error = check_conflicting_open(dentry, arg);
- if (error)
- goto out_unlink;
+ if (error) {
+ locks_unlink_lock_ctx(lease);
+ goto out;
+ }
out_setup:
if (lease->fl_lmops->lm_setup)
lease->fl_lmops->lm_setup(lease, priv);
out:
- spin_unlock(&inode->i_lock);
+ spin_unlock(&ctx->flc_lock);
locks_dispose_list(&dispose);
if (is_deleg)
mutex_unlock(&inode->i_mutex);
- if (!error && !my_before)
+ if (!error && !my_fl)
*flp = NULL;
return error;
-out_unlink:
- locks_unlink_lock(before);
- goto out;
}
static int generic_delete_lease(struct file *filp)
{
int error = -EAGAIN;
- struct file_lock *fl, **before;
+ struct file_lock *fl, *victim = NULL;
struct dentry *dentry = filp->f_path.dentry;
struct inode *inode = dentry->d_inode;
+ struct file_lock_context *ctx = inode->i_flctx;
LIST_HEAD(dispose);
- spin_lock(&inode->i_lock);
- time_out_leases(inode, &dispose);
- for (before = &inode->i_flock;
- ((fl = *before) != NULL) && IS_LEASE(fl);
- before = &fl->fl_next) {
- if (fl->fl_file == filp)
+ if (!ctx) {
+ trace_generic_delete_lease(inode, NULL);
+ return error;
+ }
+
+ spin_lock(&ctx->flc_lock);
+ list_for_each_entry(fl, &ctx->flc_lease, fl_list) {
+ if (fl->fl_file == filp) {
+ victim = fl;
break;
+ }
}
trace_generic_delete_lease(inode, fl);
- if (fl)
- error = fl->fl_lmops->lm_change(before, F_UNLCK, &dispose);
- spin_unlock(&inode->i_lock);
+ if (victim)
+ error = fl->fl_lmops->lm_change(&victim, F_UNLCK, &dispose);
+ spin_unlock(&ctx->flc_lock);
locks_dispose_list(&dispose);
return error;
}
@@ -2439,56 +2403,37 @@ locks_remove_flock(struct file *filp)
fl.fl_ops->fl_release_private(&fl);
}
+static void
+locks_remove_lease(struct file *filp)
+{
+ struct inode *inode = file_inode(filp);
+ struct file_lock_context *ctx = inode->i_flctx;
+ struct file_lock *fl, *tmp;
+ LIST_HEAD(dispose);
+
+ if (!ctx)
+ return;
+
+ spin_lock(&ctx->flc_lock);
+ list_for_each_entry_safe(fl, tmp, &ctx->flc_lease, fl_list)
+ lease_modify(&fl, F_UNLCK, &dispose);
+ spin_unlock(&ctx->flc_lock);
+ locks_dispose_list(&dispose);
+}
+
/*
* This function is called on the last close of an open file.
*/
void locks_remove_file(struct file *filp)
{
- struct inode * inode = file_inode(filp);
- struct file_lock *fl;
- struct file_lock **before;
- LIST_HEAD(dispose);
-
/* remove any OFD locks */
locks_remove_posix(filp, filp);
/* remove flock locks */
locks_remove_flock(filp);
- if (!inode->i_flock)
- return;
-
- spin_lock(&inode->i_lock);
- before = &inode->i_flock;
-
- while ((fl = *before) != NULL) {
- if (fl->fl_file == filp) {
- if (IS_LEASE(fl)) {
- lease_modify(before, F_UNLCK, &dispose);
- continue;
- }
-
- /*
- * There's a leftover lock on the list of a type that
- * we didn't expect to see. Most likely a classic
- * POSIX lock that ended up not getting released
- * properly, or that raced onto the list somehow. Log
- * some info about it and then just remove it from
- * the list.
- */
- WARN(1, "leftover lock: dev=%u:%u ino=%lu type=%hhd flags=0x%x start=%lld end=%lld\n",
- MAJOR(inode->i_sb->s_dev),
- MINOR(inode->i_sb->s_dev), inode->i_ino,
- fl->fl_type, fl->fl_flags,
- fl->fl_start, fl->fl_end);
-
- locks_delete_lock(before, &dispose);
- continue;
- }
- before = &fl->fl_next;
- }
- spin_unlock(&inode->i_lock);
- locks_dispose_list(&dispose);
+ /* remove any leases */
+ locks_remove_lease(filp);
}
/**
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 9cdbda752b13..673aad74cd79 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -939,6 +939,7 @@ struct file_lock_context {
spinlock_t flc_lock;
struct list_head flc_flock;
struct list_head flc_posix;
+ struct list_head flc_lease;
};
/* The following constant reflects the upper bound of the file/locking space */
@@ -1939,7 +1940,7 @@ static inline int break_lease(struct inode *inode, unsigned int mode)
* end up racing with tasks trying to set a new lease on this file.
*/
smp_mb();
- if (inode->i_flock)
+ if (inode->i_flctx && !list_empty(&inode->i_flctx->flc_lease))
return __break_lease(inode, mode, FL_LEASE);
return 0;
}
@@ -1952,7 +1953,7 @@ static inline int break_deleg(struct inode *inode, unsigned int mode)
* end up racing with tasks trying to set a new lease on this file.
*/
smp_mb();
- if (inode->i_flock)
+ if (inode->i_flctx && !list_empty(&inode->i_flctx->flc_lease))
return __break_lease(inode, mode, FL_DELEG);
return 0;
}
--
1.9.3
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [RFC PATCH 07/12] ceph: convert to looking for locks in struct file_lock_context
From: Jeff Layton @ 2014-09-10 14:28 UTC (permalink / raw)
To: linux-fsdevel; +Cc: linux-kernel, Al Viro, Christoph Hellwig, bfields
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
---
fs/ceph/locks.c | 45 ++++++++++++++++++++++++++++-----------------
fs/ceph/mds_client.c | 4 ----
2 files changed, 28 insertions(+), 21 deletions(-)
diff --git a/fs/ceph/locks.c b/fs/ceph/locks.c
index fbc39c47bacd..9246ce3c6161 100644
--- a/fs/ceph/locks.c
+++ b/fs/ceph/locks.c
@@ -203,16 +203,21 @@ int ceph_flock(struct file *file, int cmd, struct file_lock *fl)
void ceph_count_locks(struct inode *inode, int *fcntl_count, int *flock_count)
{
struct file_lock *lock;
+ struct file_lock_context *ctx;
*fcntl_count = 0;
*flock_count = 0;
- for (lock = inode->i_flock; lock != NULL; lock = lock->fl_next) {
- if (lock->fl_flags & FL_POSIX)
+ ctx = inode->i_flctx;
+ if (ctx) {
+ spin_lock(&ctx->flc_lock);
+ list_for_each_entry(lock, &ctx->flc_posix, fl_list)
++(*fcntl_count);
- else if (lock->fl_flags & FL_FLOCK)
+ list_for_each_entry(lock, &ctx->flc_flock, fl_list)
++(*flock_count);
+ spin_unlock(&ctx->flc_lock);
}
+
dout("counted %d flock locks and %d fcntl locks",
*flock_count, *fcntl_count);
}
@@ -227,6 +232,7 @@ int ceph_encode_locks_to_buffer(struct inode *inode,
int num_fcntl_locks, int num_flock_locks)
{
struct file_lock *lock;
+ struct file_lock_context *ctx;
int err = 0;
int seen_fcntl = 0;
int seen_flock = 0;
@@ -235,33 +241,38 @@ int ceph_encode_locks_to_buffer(struct inode *inode,
dout("encoding %d flock and %d fcntl locks", num_flock_locks,
num_fcntl_locks);
- for (lock = inode->i_flock; lock != NULL; lock = lock->fl_next) {
- if (lock->fl_flags & FL_POSIX) {
- ++seen_fcntl;
- if (seen_fcntl > num_fcntl_locks) {
+ ctx = inode->i_flctx;
+ if (ctx) {
+ spin_lock(&ctx->flc_lock);
+ list_for_each_entry(lock, &ctx->flc_flock, fl_list) {
+ ++seen_flock;
+ if (seen_flock > num_flock_locks) {
err = -ENOSPC;
- goto fail;
+ break;
}
err = lock_to_ceph_filelock(lock, &flocks[l]);
if (err)
- goto fail;
+ break;
++l;
}
- }
- for (lock = inode->i_flock; lock != NULL; lock = lock->fl_next) {
- if (lock->fl_flags & FL_FLOCK) {
- ++seen_flock;
- if (seen_flock > num_flock_locks) {
+
+ if (err)
+ goto fail;
+
+ list_for_each_entry(lock, &ctx->flc_posix, fl_list) {
+ ++seen_fcntl;
+ if (seen_fcntl > num_fcntl_locks) {
err = -ENOSPC;
- goto fail;
+ break;
}
err = lock_to_ceph_filelock(lock, &flocks[l]);
if (err)
- goto fail;
+ break;
++l;
}
- }
fail:
+ spin_unlock(&ctx->flc_lock);
+ }
return err;
}
diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
index bad07c09f91e..6778da54f7c8 100644
--- a/fs/ceph/mds_client.c
+++ b/fs/ceph/mds_client.c
@@ -2612,20 +2612,16 @@ static int encode_caps_cb(struct inode *inode, struct ceph_cap *cap,
struct ceph_filelock *flocks;
encode_again:
- spin_lock(&inode->i_lock);
ceph_count_locks(inode, &num_fcntl_locks, &num_flock_locks);
- spin_unlock(&inode->i_lock);
flocks = kmalloc((num_fcntl_locks+num_flock_locks) *
sizeof(struct ceph_filelock), GFP_NOFS);
if (!flocks) {
err = -ENOMEM;
goto out_free;
}
- spin_lock(&inode->i_lock);
err = ceph_encode_locks_to_buffer(inode, flocks,
num_fcntl_locks,
num_flock_locks);
- spin_unlock(&inode->i_lock);
if (err) {
kfree(flocks);
if (err == -ENOSPC)
--
1.9.3
* [RFC PATCH 08/12] nfs: convert lock handling to use file_lock_context
From: Jeff Layton @ 2014-09-10 14:28 UTC (permalink / raw)
To: linux-fsdevel; +Cc: linux-kernel, Al Viro, Christoph Hellwig, bfields
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
---
fs/nfs/delegation.c | 37 +++++++++++++++++++++----------------
fs/nfs/nfs4state.c | 24 +++++++++++++++---------
fs/nfs/pagelist.c | 3 ++-
fs/nfs/write.c | 39 +++++++++++++++++++++++++++++++++------
4 files changed, 71 insertions(+), 32 deletions(-)
diff --git a/fs/nfs/delegation.c b/fs/nfs/delegation.c
index 5853f53db732..22c6eed9bb5b 100644
--- a/fs/nfs/delegation.c
+++ b/fs/nfs/delegation.c
@@ -85,25 +85,30 @@ static int nfs_delegation_claim_locks(struct nfs_open_context *ctx, struct nfs4_
{
struct inode *inode = state->inode;
struct file_lock *fl;
+ struct file_lock_context *flctx = inode->i_flctx;
+ struct list_head *list;
int status = 0;
- if (inode->i_flock == NULL)
- goto out;
-
- /* Protect inode->i_flock using the i_lock */
- spin_lock(&inode->i_lock);
- for (fl = inode->i_flock; fl != NULL; fl = fl->fl_next) {
- if (!(fl->fl_flags & (FL_POSIX|FL_FLOCK)))
- continue;
- if (nfs_file_open_context(fl->fl_file) != ctx)
- continue;
- spin_unlock(&inode->i_lock);
- status = nfs4_lock_delegation_recall(fl, state, stateid);
- if (status < 0)
- goto out;
- spin_lock(&inode->i_lock);
+ flctx = inode->i_flctx;
+ if (flctx) {
+ list = &flctx->flc_posix;
+ spin_lock(&flctx->flc_lock);
+restart:
+ list_for_each_entry(fl, list, fl_list) {
+ if (nfs_file_open_context(fl->fl_file) != ctx)
+ continue;
+ spin_unlock(&flctx->flc_lock);
+ status = nfs4_lock_delegation_recall(fl, state, stateid);
+ if (status < 0)
+ goto out;
+ spin_lock(&flctx->flc_lock);
+ }
+ if (list == &flctx->flc_posix) {
+ list = &flctx->flc_flock;
+ goto restart;
+ }
+ spin_unlock(&flctx->flc_lock);
}
- spin_unlock(&inode->i_lock);
out:
return status;
}
diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
index a043f618cd5a..2899a0f26293 100644
--- a/fs/nfs/nfs4state.c
+++ b/fs/nfs/nfs4state.c
@@ -1377,21 +1377,23 @@ static int nfs4_reclaim_locks(struct nfs4_state *state, const struct nfs4_state_
struct inode *inode = state->inode;
struct nfs_inode *nfsi = NFS_I(inode);
struct file_lock *fl;
+ struct file_lock_context *flctx = inode->i_flctx;
+ struct list_head *list;
int status = 0;
- if (inode->i_flock == NULL)
+ if (!flctx)
return 0;
+ list = &flctx->flc_posix;
+
/* Guard against delegation returns and new lock/unlock calls */
down_write(&nfsi->rwsem);
- /* Protect inode->i_flock using the BKL */
- spin_lock(&inode->i_lock);
- for (fl = inode->i_flock; fl != NULL; fl = fl->fl_next) {
- if (!(fl->fl_flags & (FL_POSIX|FL_FLOCK)))
- continue;
+ spin_lock(&flctx->flc_lock);
+restart:
+ list_for_each_entry(fl, list, fl_list) {
if (nfs_file_open_context(fl->fl_file)->state != state)
continue;
- spin_unlock(&inode->i_lock);
+ spin_unlock(&flctx->flc_lock);
status = ops->recover_lock(state, fl);
switch (status) {
case 0:
@@ -1418,9 +1420,13 @@ static int nfs4_reclaim_locks(struct nfs4_state *state, const struct nfs4_state_
/* kill_proc(fl->fl_pid, SIGLOST, 1); */
status = 0;
}
- spin_lock(&inode->i_lock);
+ spin_lock(&flctx->flc_lock);
}
- spin_unlock(&inode->i_lock);
+ if (list == &flctx->flc_posix) {
+ list = &flctx->flc_flock;
+ goto restart;
+ }
+ spin_unlock(&flctx->flc_lock);
out:
up_write(&nfsi->rwsem);
return status;
diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c
index ba491926df5f..4df8d8755026 100644
--- a/fs/nfs/pagelist.c
+++ b/fs/nfs/pagelist.c
@@ -782,7 +782,8 @@ static bool nfs_can_coalesce_requests(struct nfs_page *prev,
if (prev) {
if (!nfs_match_open_context(req->wb_context, prev->wb_context))
return false;
- if (req->wb_context->dentry->d_inode->i_flock != NULL &&
+ if (req->wb_context->dentry->d_inode->i_flctx != NULL &&
+ !list_empty(&req->wb_context->dentry->d_inode->i_flctx->flc_posix) &&
!nfs_match_lock_context(req->wb_lock_context,
prev->wb_lock_context))
return false;
diff --git a/fs/nfs/write.c b/fs/nfs/write.c
index e3b5cf28bdc5..02b8777f8f2f 100644
--- a/fs/nfs/write.c
+++ b/fs/nfs/write.c
@@ -1128,7 +1128,8 @@ int nfs_flush_incompatible(struct file *file, struct page *page)
do_flush = req->wb_page != page || req->wb_context != ctx;
/* for now, flush if more than 1 request in page_group */
do_flush |= req->wb_this_page != req;
- if (l_ctx && ctx->dentry->d_inode->i_flock != NULL) {
+ if (l_ctx && ctx->dentry->d_inode->i_flctx &&
+ !list_empty(&ctx->dentry->d_inode->i_flctx->flc_posix)) {
do_flush |= l_ctx->lockowner.l_owner != current->files
|| l_ctx->lockowner.l_pid != current->tgid;
}
@@ -1189,6 +1190,12 @@ out:
return PageUptodate(page) != 0;
}
+static bool
+is_whole_file_wrlock(struct file_lock *fl)
+{
+ return fl->fl_start == 0 && fl->fl_end == OFFSET_MAX && fl->fl_type == F_WRLCK;
+}
+
/* If we know the page is up to date, and we're not using byte range locks (or
* if we have the whole file locked for writing), it may be more efficient to
* extend the write to cover the entire page in order to avoid fragmentation
@@ -1199,17 +1206,37 @@ out:
*/
static int nfs_can_extend_write(struct file *file, struct page *page, struct inode *inode)
{
+ int ret;
+ struct file_lock_context *flctx = inode->i_flctx;
+ struct file_lock *fl;
+
if (file->f_flags & O_DSYNC)
return 0;
if (!nfs_write_pageuptodate(page, inode))
return 0;
if (NFS_PROTO(inode)->have_delegation(inode, FMODE_WRITE))
return 1;
- if (inode->i_flock == NULL || (inode->i_flock->fl_start == 0 &&
- inode->i_flock->fl_end == OFFSET_MAX &&
- inode->i_flock->fl_type != F_RDLCK))
- return 1;
- return 0;
+ /* no lock context == no locks */
+ if (!flctx)
+ return 0;
+
+ /* if lists are empty then there are no locks */
+ if (list_empty(&flctx->flc_posix) && list_empty(&flctx->flc_flock))
+ return 0;
+
+ ret = 0;
+ /* Check to see if there are whole file write locks */
+ spin_lock(&flctx->flc_lock);
+ fl = list_first_entry(&flctx->flc_posix, struct file_lock, fl_list);
+ if (is_whole_file_wrlock(fl)) {
+ ret = 1;
+ } else {
+ fl = list_first_entry(&flctx->flc_flock, struct file_lock, fl_list);
+ if (is_whole_file_wrlock(fl))
+ ret = 1;
+ }
+ spin_unlock(&flctx->flc_lock);
+ return ret;
}
/*
--
1.9.3
* [RFC PATCH 09/12] cifs: convert it to use file_lock_context
From: Jeff Layton @ 2014-09-10 14:28 UTC (permalink / raw)
To: linux-fsdevel; +Cc: linux-kernel, Al Viro, Christoph Hellwig, bfields
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
---
fs/cifs/file.c | 34 ++++++++++++++--------------------
1 file changed, 14 insertions(+), 20 deletions(-)
diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index d5fec92e0360..dadfbc9fb5a5 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -1101,11 +1101,6 @@ cifs_push_mandatory_locks(struct cifsFileInfo *cfile)
return rc;
}
-/* copied from fs/locks.c with a name change */
-#define cifs_for_each_lock(inode, lockp) \
- for (lockp = &inode->i_flock; *lockp != NULL; \
- lockp = &(*lockp)->fl_next)
-
struct lock_to_push {
struct list_head llist;
__u64 offset;
@@ -1120,8 +1115,9 @@ cifs_push_posix_locks(struct cifsFileInfo *cfile)
{
struct inode *inode = cfile->dentry->d_inode;
struct cifs_tcon *tcon = tlink_tcon(cfile->tlink);
- struct file_lock *flock, **before;
- unsigned int count = 0, i = 0;
+ struct file_lock *flock;
+ struct file_lock_context *flctx = inode->i_flctx;
+ unsigned int count = 0, i;
int rc = 0, xid, type;
struct list_head locks_to_send, *el;
struct lock_to_push *lck, *tmp;
@@ -1129,12 +1125,14 @@ cifs_push_posix_locks(struct cifsFileInfo *cfile)
xid = get_xid();
- spin_lock(&inode->i_lock);
- cifs_for_each_lock(inode, before) {
- if ((*before)->fl_flags & FL_POSIX)
- count++;
+ if (!flctx)
+ goto out;
+
+ spin_lock(&flctx->flc_lock);
+ list_for_each(el, &flctx->flc_posix) {
+ count++;
}
- spin_unlock(&inode->i_lock);
+ spin_unlock(&flctx->flc_lock);
INIT_LIST_HEAD(&locks_to_send);
@@ -1143,7 +1141,7 @@ cifs_push_posix_locks(struct cifsFileInfo *cfile)
* added to the list while we are holding cinode->lock_sem that
* protects locking operations of this inode.
*/
- for (; i < count; i++) {
+ for (i = 0; i < count; i++) {
lck = kmalloc(sizeof(struct lock_to_push), GFP_KERNEL);
if (!lck) {
rc = -ENOMEM;
@@ -1153,11 +1151,8 @@ cifs_push_posix_locks(struct cifsFileInfo *cfile)
}
el = locks_to_send.next;
- spin_lock(&inode->i_lock);
- cifs_for_each_lock(inode, before) {
- flock = *before;
- if ((flock->fl_flags & FL_POSIX) == 0)
- continue;
+ spin_lock(&flctx->flc_lock);
+ list_for_each_entry(flock, &flctx->flc_posix, fl_list) {
if (el == &locks_to_send) {
/*
* The list ended. We don't have enough allocated
@@ -1177,9 +1172,8 @@ cifs_push_posix_locks(struct cifsFileInfo *cfile)
lck->length = length;
lck->type = type;
lck->offset = flock->fl_start;
- el = el->next;
}
- spin_unlock(&inode->i_lock);
+ spin_unlock(&flctx->flc_lock);
list_for_each_entry_safe(lck, tmp, &locks_to_send, llist) {
int stored_rc;
--
1.9.3
* [RFC PATCH 10/12] lockd: convert it to use file_lock_context
From: Jeff Layton @ 2014-09-10 14:28 UTC (permalink / raw)
To: linux-fsdevel; +Cc: linux-kernel, Al Viro, Christoph Hellwig, bfields
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
---
fs/lockd/svcsubs.c | 26 ++++++++++++++++----------
1 file changed, 16 insertions(+), 10 deletions(-)
diff --git a/fs/lockd/svcsubs.c b/fs/lockd/svcsubs.c
index b6f3b84b6e99..93448f5357f0 100644
--- a/fs/lockd/svcsubs.c
+++ b/fs/lockd/svcsubs.c
@@ -164,12 +164,15 @@ nlm_traverse_locks(struct nlm_host *host, struct nlm_file *file,
{
struct inode *inode = nlmsvc_file_inode(file);
struct file_lock *fl;
+ struct file_lock_context *flctx = inode->i_flctx;
struct nlm_host *lockhost;
+ if (!flctx)
+ return 0;
again:
file->f_locks = 0;
- spin_lock(&inode->i_lock);
- for (fl = inode->i_flock; fl; fl = fl->fl_next) {
+ spin_lock(&flctx->flc_lock);
+ list_for_each_entry(fl, &flctx->flc_posix, fl_list) {
if (fl->fl_lmops != &nlmsvc_lock_operations)
continue;
@@ -180,7 +183,7 @@ again:
if (match(lockhost, host)) {
struct file_lock lock = *fl;
- spin_unlock(&inode->i_lock);
+ spin_unlock(&flctx->flc_lock);
lock.fl_type = F_UNLCK;
lock.fl_start = 0;
lock.fl_end = OFFSET_MAX;
@@ -192,7 +195,7 @@ again:
goto again;
}
}
- spin_unlock(&inode->i_lock);
+ spin_unlock(&flctx->flc_lock);
return 0;
}
@@ -223,18 +226,21 @@ nlm_file_inuse(struct nlm_file *file)
{
struct inode *inode = nlmsvc_file_inode(file);
struct file_lock *fl;
+ struct file_lock_context *flctx = inode->i_flctx;
if (file->f_count || !list_empty(&file->f_blocks) || file->f_shares)
return 1;
- spin_lock(&inode->i_lock);
- for (fl = inode->i_flock; fl; fl = fl->fl_next) {
- if (fl->fl_lmops == &nlmsvc_lock_operations) {
- spin_unlock(&inode->i_lock);
- return 1;
+ if (flctx) {
+ spin_lock(&flctx->flc_lock);
+ list_for_each_entry(fl, &flctx->flc_posix, fl_list) {
+ if (fl->fl_lmops == &nlmsvc_lock_operations) {
+ spin_unlock(&flctx->flc_lock);
+ return 1;
+ }
}
+ spin_unlock(&flctx->flc_lock);
}
- spin_unlock(&inode->i_lock);
file->f_locks = 0;
return 0;
}
--
1.9.3
* [RFC PATCH 11/12] nfsd: convert to file_lock_context
From: Jeff Layton @ 2014-09-10 14:28 UTC (permalink / raw)
To: linux-fsdevel; +Cc: linux-kernel, Al Viro, Christoph Hellwig, bfields
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
---
fs/nfsd/nfs4state.c | 18 +++++++++++-------
1 file changed, 11 insertions(+), 7 deletions(-)
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index c4c8d0621dde..4fc1a2f98d1e 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -5490,10 +5490,11 @@ out_nfserr:
static bool
check_for_locks(struct nfs4_file *fp, struct nfs4_lockowner *lowner)
{
- struct file_lock **flpp;
+ struct file_lock *fl;
int status = false;
struct file *filp = find_any_file(fp);
struct inode *inode;
+ struct file_lock_context *flctx;
if (!filp) {
/* Any valid lock stateid should have some sort of access */
@@ -5502,15 +5503,18 @@ check_for_locks(struct nfs4_file *fp, struct nfs4_lockowner *lowner)
}
inode = file_inode(filp);
+ flctx = inode->i_flctx;
- spin_lock(&inode->i_lock);
- for (flpp = &inode->i_flock; *flpp != NULL; flpp = &(*flpp)->fl_next) {
- if ((*flpp)->fl_owner == (fl_owner_t)lowner) {
- status = true;
- break;
+ if (flctx) {
+ spin_lock(&flctx->flc_lock);
+ list_for_each_entry(fl, &flctx->flc_posix, fl_list) {
+ if (fl->fl_owner == (fl_owner_t)lowner) {
+ status = true;
+ break;
+ }
}
+ spin_unlock(&flctx->flc_lock);
}
- spin_unlock(&inode->i_lock);
fput(filp);
return status;
}
--
1.9.3
* [RFC PATCH 12/12] locks: remove i_flock field from struct inode
From: Jeff Layton @ 2014-09-10 14:28 UTC (permalink / raw)
To: linux-fsdevel; +Cc: linux-kernel, Al Viro, Christoph Hellwig, bfields
Nothing uses it anymore. Also add a forward declaration for struct
file_lock to silence some compiler warnings that the removal triggers.
Signed-off-by: Jeff Layton <jlayton@primarydata.com>
---
include/linux/fs.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 673aad74cd79..811d844b8ed8 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -593,7 +593,6 @@ struct inode {
atomic_t i_readcount; /* struct files open RO */
#endif
const struct file_operations *i_fop; /* former ->i_op->default_file_ops */
- struct file_lock *i_flock;
struct file_lock_context *i_flctx;
struct address_space i_data;
#ifdef CONFIG_QUOTA
@@ -855,6 +854,8 @@ static inline struct file *get_file(struct file *f)
/* legacy typedef, should eventually be removed */
typedef void *fl_owner_t;
+struct file_lock;
+
struct file_lock_operations {
void (*fl_copy_lock)(struct file_lock *, struct file_lock *);
void (*fl_release_private)(struct file_lock *);
--
1.9.3
* Re: [RFC PATCH 03/12] locks: have locks_release_file use flock_lock_file to release generic flock locks
From: J. Bruce Fields @ 2014-09-10 17:38 UTC (permalink / raw)
To: Jeff Layton; +Cc: linux-fsdevel, linux-kernel, Al Viro, Christoph Hellwig
On Wed, Sep 10, 2014 at 10:28:41AM -0400, Jeff Layton wrote:
> Signed-off-by: Jeff Layton <jlayton@primarydata.com>
ACK to going ahead and applying this now if you want.
Took me a moment to understand why the i_flock test got moved--OK, so
the locks_remove_*'s are doing that themselves instead, got it.
--b.
> ---
> fs/locks.c | 49 +++++++++++++++++++++++++++++++------------------
> 1 file changed, 31 insertions(+), 18 deletions(-)
>
> diff --git a/fs/locks.c b/fs/locks.c
> index af55bd35be5f..5e8b865814a2 100644
> --- a/fs/locks.c
> +++ b/fs/locks.c
> @@ -2393,6 +2393,30 @@ void locks_remove_posix(struct file *filp, fl_owner_t owner)
>
> EXPORT_SYMBOL(locks_remove_posix);
>
> +static void
> +locks_remove_flock(struct file *filp)
> +{
> + struct file_lock fl = {
> + .fl_owner = filp,
> + .fl_pid = current->tgid,
> + .fl_file = filp,
> + .fl_flags = FL_FLOCK,
> + .fl_type = F_UNLCK,
> + .fl_end = OFFSET_MAX,
> + };
> +
> + if (!file_inode(filp)->i_flock)
> + return;
> +
> + if (filp->f_op->flock)
> + filp->f_op->flock(filp, F_SETLKW, &fl);
> + else
> + flock_lock_file(filp, &fl);
> +
> + if (fl.fl_ops && fl.fl_ops->fl_release_private)
> + fl.fl_ops->fl_release_private(&fl);
> +}
> +
> /*
> * This function is called on the last close of an open file.
> */
> @@ -2403,24 +2427,14 @@ void locks_remove_file(struct file *filp)
> struct file_lock **before;
> LIST_HEAD(dispose);
>
> - if (!inode->i_flock)
> - return;
> -
> + /* remove any OFD locks */
> locks_remove_posix(filp, filp);
>
> - if (filp->f_op->flock) {
> - struct file_lock fl = {
> - .fl_owner = filp,
> - .fl_pid = current->tgid,
> - .fl_file = filp,
> - .fl_flags = FL_FLOCK,
> - .fl_type = F_UNLCK,
> - .fl_end = OFFSET_MAX,
> - };
> - filp->f_op->flock(filp, F_SETLKW, &fl);
> - if (fl.fl_ops && fl.fl_ops->fl_release_private)
> - fl.fl_ops->fl_release_private(&fl);
> - }
> + /* remove flock locks */
> + locks_remove_flock(filp);
> +
> + if (!inode->i_flock)
> + return;
>
> spin_lock(&inode->i_lock);
> before = &inode->i_flock;
> @@ -2440,8 +2454,7 @@ void locks_remove_file(struct file *filp)
> * some info about it and then just remove it from
> * the list.
> */
> - WARN(!IS_FLOCK(fl),
> - "leftover lock: dev=%u:%u ino=%lu type=%hhd flags=0x%x start=%lld end=%lld\n",
> + WARN(1, "leftover lock: dev=%u:%u ino=%lu type=%hhd flags=0x%x start=%lld end=%lld\n",
> MAJOR(inode->i_sb->s_dev),
> MINOR(inode->i_sb->s_dev), inode->i_ino,
> fl->fl_type, fl->fl_flags,
> --
> 1.9.3
>
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [RFC PATCH 03/12] locks: have locks_release_file use flock_lock_file to release generic flock locks
2014-09-10 17:38 ` J. Bruce Fields
@ 2014-09-10 17:49 ` Jeff Layton
0 siblings, 0 replies; 21+ messages in thread
From: Jeff Layton @ 2014-09-10 17:49 UTC (permalink / raw)
To: J. Bruce Fields; +Cc: linux-fsdevel, linux-kernel, Al Viro, Christoph Hellwig
On Wed, 10 Sep 2014 13:38:43 -0400
"J. Bruce Fields" <bfields@fieldses.org> wrote:
> On Wed, Sep 10, 2014 at 10:28:41AM -0400, Jeff Layton wrote:
> > Signed-off-by: Jeff Layton <jlayton@primarydata.com>
>
> ACK to going ahead and applying this now if you want.
>
Thanks. Right now, I'm planning to hold off until v3.19 in case the
design changes.
> Took me a moment to understand why the i_flock test got moved--OK, so
> the locks_remove_*'s are doing that themselves instead, got it.
>
> --b.
>
Yes. The tricky part here is that I'm trying to migrate each lock
"flavor" to file_lock_context one at a time. So we have some interim
stages where we'll need to check i_flock for some types and i_flctx for
others. I may be able to re-consolidate those checks once the i_flock
list is gone, however. This patchset doesn't do that yet.
> > ---
> > fs/locks.c | 49 +++++++++++++++++++++++++++++++------------------
> > 1 file changed, 31 insertions(+), 18 deletions(-)
> >
> > diff --git a/fs/locks.c b/fs/locks.c
> > index af55bd35be5f..5e8b865814a2 100644
> > --- a/fs/locks.c
> > +++ b/fs/locks.c
> > @@ -2393,6 +2393,30 @@ void locks_remove_posix(struct file *filp, fl_owner_t owner)
> >
> > EXPORT_SYMBOL(locks_remove_posix);
> >
> > +static void
> > +locks_remove_flock(struct file *filp)
> > +{
> > + struct file_lock fl = {
> > + .fl_owner = filp,
> > + .fl_pid = current->tgid,
> > + .fl_file = filp,
> > + .fl_flags = FL_FLOCK,
> > + .fl_type = F_UNLCK,
> > + .fl_end = OFFSET_MAX,
> > + };
> > +
> > + if (!file_inode(filp)->i_flock)
> > + return;
> > +
> > + if (filp->f_op->flock)
> > + filp->f_op->flock(filp, F_SETLKW, &fl);
> > + else
> > + flock_lock_file(filp, &fl);
> > +
> > + if (fl.fl_ops && fl.fl_ops->fl_release_private)
> > + fl.fl_ops->fl_release_private(&fl);
> > +}
> > +
> > /*
> > * This function is called on the last close of an open file.
> > */
> > @@ -2403,24 +2427,14 @@ void locks_remove_file(struct file *filp)
> > struct file_lock **before;
> > LIST_HEAD(dispose);
> >
> > - if (!inode->i_flock)
> > - return;
> > -
> > + /* remove any OFD locks */
> > locks_remove_posix(filp, filp);
> >
> > - if (filp->f_op->flock) {
> > - struct file_lock fl = {
> > - .fl_owner = filp,
> > - .fl_pid = current->tgid,
> > - .fl_file = filp,
> > - .fl_flags = FL_FLOCK,
> > - .fl_type = F_UNLCK,
> > - .fl_end = OFFSET_MAX,
> > - };
> > - filp->f_op->flock(filp, F_SETLKW, &fl);
> > - if (fl.fl_ops && fl.fl_ops->fl_release_private)
> > - fl.fl_ops->fl_release_private(&fl);
> > - }
> > + /* remove flock locks */
> > + locks_remove_flock(filp);
> > +
> > + if (!inode->i_flock)
> > + return;
> >
> > spin_lock(&inode->i_lock);
> > before = &inode->i_flock;
> > @@ -2440,8 +2454,7 @@ void locks_remove_file(struct file *filp)
> > * some info about it and then just remove it from
> > * the list.
> > */
> > - WARN(!IS_FLOCK(fl),
> > - "leftover lock: dev=%u:%u ino=%lu type=%hhd flags=0x%x start=%lld end=%lld\n",
> > + WARN(1, "leftover lock: dev=%u:%u ino=%lu type=%hhd flags=0x%x start=%lld end=%lld\n",
> > MAJOR(inode->i_sb->s_dev),
> > MINOR(inode->i_sb->s_dev), inode->i_ino,
> > fl->fl_type, fl->fl_flags,
> > --
> > 1.9.3
> >
--
Jeff Layton <jlayton@primarydata.com>
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [RFC PATCH 01/12] locks: add a new struct file_locking_context pointer to struct inode
2014-09-10 14:28 ` [RFC PATCH 01/12] locks: add a new struct file_locking_context pointer to struct inode Jeff Layton
@ 2014-09-10 18:38 ` J. Bruce Fields
2014-09-10 18:51 ` Jeff Layton
0 siblings, 1 reply; 21+ messages in thread
From: J. Bruce Fields @ 2014-09-10 18:38 UTC (permalink / raw)
To: Jeff Layton; +Cc: linux-fsdevel, linux-kernel, Al Viro, Christoph Hellwig
On Wed, Sep 10, 2014 at 10:28:39AM -0400, Jeff Layton wrote:
> The current scheme of using the i_flock list is really difficult to
> manage. Start conversion to a new scheme to eventually replace it.
>
> We start by adding a new i_flctx to struct inode. For now, it lives in
> parallel with i_flock list, but will eventually replace it. The idea is
> to allocate a structure to sit in that pointer and act as a locus for
> all things file locking.
>
> We'll manage it with RCU and the i_lock, so that things like the
> break_lease() check can use RCU to check for its presence.
And the plan is to allocate it once and then never destroy it until the
inode's destroyed?
OK, seems like a sensible compromise assuming that typically only a
small number of files are ever locked.
--b.
>
> Signed-off-by: Jeff Layton <jlayton@primarydata.com>
> ---
> fs/inode.c | 3 ++-
> fs/locks.c | 38 ++++++++++++++++++++++++++++++++++++++
> include/linux/fs.h | 13 +++++++++++++
> 3 files changed, 53 insertions(+), 1 deletion(-)
>
> diff --git a/fs/inode.c b/fs/inode.c
> index 26753ba7b6d6..e87b47ab69a2 100644
> --- a/fs/inode.c
> +++ b/fs/inode.c
> @@ -192,7 +192,7 @@ int inode_init_always(struct super_block *sb, struct inode *inode)
> #ifdef CONFIG_FSNOTIFY
> inode->i_fsnotify_mask = 0;
> #endif
> -
> + inode->i_flctx = NULL;
> this_cpu_inc(nr_inodes);
>
> return 0;
> @@ -235,6 +235,7 @@ void __destroy_inode(struct inode *inode)
> BUG_ON(inode_has_buffers(inode));
> security_inode_free(inode);
> fsnotify_inode_delete(inode);
> + locks_free_lock_context(inode->i_flctx);
> if (!inode->i_nlink) {
> WARN_ON(atomic_long_read(&inode->i_sb->s_remove_count) == 0);
> atomic_long_dec(&inode->i_sb->s_remove_count);
> diff --git a/fs/locks.c b/fs/locks.c
> index 735b8d3fa78c..6aa36bf21cc9 100644
> --- a/fs/locks.c
> +++ b/fs/locks.c
> @@ -202,8 +202,43 @@ static DEFINE_HASHTABLE(blocked_hash, BLOCKED_HASH_BITS);
> */
> static DEFINE_SPINLOCK(blocked_lock_lock);
>
> +static struct kmem_cache *flctx_cache __read_mostly;
> static struct kmem_cache *filelock_cache __read_mostly;
>
> +static struct file_lock_context *
> +locks_get_lock_context(struct inode *inode)
> +{
> + struct file_lock_context *new = NULL;
> +
> + if (inode->i_flctx)
> + goto out;
> +
> + new = kmem_cache_alloc(flctx_cache, GFP_KERNEL);
> + if (!new)
> + goto out;
> +
> + spin_lock(&inode->i_lock);
> + if (likely(!inode->i_flctx)) {
> + spin_lock_init(&new->flc_lock);
> + INIT_LIST_HEAD(&new->flc_flock);
> + swap(inode->i_flctx, new);
> + }
> + spin_unlock(&inode->i_lock);
> + if (new)
> + kmem_cache_free(flctx_cache, new);
> +out:
> + return inode->i_flctx;
> +}
> +
> +void
> +locks_free_lock_context(struct file_lock_context *ctx)
> +{
> + if (ctx) {
> + WARN_ON_ONCE(!list_empty(&ctx->flc_flock));
> + kmem_cache_free(flctx_cache, ctx);
> + }
> +}
> +
> static void locks_init_lock_heads(struct file_lock *fl)
> {
> INIT_HLIST_NODE(&fl->fl_link);
> @@ -2621,6 +2656,9 @@ static int __init filelock_init(void)
> {
> int i;
>
> + flctx_cache = kmem_cache_create("file_lock_ctx",
> + sizeof(struct file_lock_context), 0, SLAB_PANIC, NULL);
> +
> filelock_cache = kmem_cache_create("file_lock_cache",
> sizeof(struct file_lock), 0, SLAB_PANIC, NULL);
>
> diff --git a/include/linux/fs.h b/include/linux/fs.h
> index bb9484ae1eef..090212e26d12 100644
> --- a/include/linux/fs.h
> +++ b/include/linux/fs.h
> @@ -594,6 +594,7 @@ struct inode {
> #endif
> const struct file_operations *i_fop; /* former ->i_op->default_file_ops */
> struct file_lock *i_flock;
> + struct file_lock_context *i_flctx;
> struct address_space i_data;
> #ifdef CONFIG_QUOTA
> struct dquot *i_dquot[MAXQUOTAS];
> @@ -933,6 +934,11 @@ struct file_lock {
> } fl_u;
> };
>
> +struct file_lock_context {
> + spinlock_t flc_lock;
> + struct list_head flc_flock;
> +};
> +
> /* The following constant reflects the upper bound of the file/locking space */
> #ifndef OFFSET_MAX
> #define INT_LIMIT(x) (~((x)1 << (sizeof(x)*8 - 1)))
> @@ -959,6 +965,7 @@ extern int fcntl_setlease(unsigned int fd, struct file *filp, long arg);
> extern int fcntl_getlease(struct file *filp);
>
> /* fs/locks.c */
> +void locks_free_lock_context(struct file_lock_context *ctx);
> void locks_free_lock(struct file_lock *fl);
> extern void locks_init_lock(struct file_lock *);
> extern struct file_lock * locks_alloc_lock(void);
> @@ -1016,6 +1023,12 @@ static inline int fcntl_getlease(struct file *filp)
> return 0;
> }
>
> +static inline void
> +locks_free_lock_context(struct file_lock_context *ctx)
> +{
> + return;
> +}
> +
> static inline void locks_init_lock(struct file_lock *fl)
> {
> return;
> --
> 1.9.3
>
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [RFC PATCH 01/12] locks: add a new struct file_locking_context pointer to struct inode
2014-09-10 18:38 ` J. Bruce Fields
@ 2014-09-10 18:51 ` Jeff Layton
0 siblings, 0 replies; 21+ messages in thread
From: Jeff Layton @ 2014-09-10 18:51 UTC (permalink / raw)
To: J. Bruce Fields; +Cc: linux-fsdevel, linux-kernel, Al Viro, Christoph Hellwig
On Wed, 10 Sep 2014 14:38:15 -0400
"J. Bruce Fields" <bfields@fieldses.org> wrote:
> On Wed, Sep 10, 2014 at 10:28:39AM -0400, Jeff Layton wrote:
> > The current scheme of using the i_flock list is really difficult to
> > manage. Start conversion to a new scheme to eventually replace it.
> >
> > We start by adding a new i_flctx to struct inode. For now, it lives in
> > parallel with i_flock list, but will eventually replace it. The idea is
> > to allocate a structure to sit in that pointer and act as a locus for
> > all things file locking.
> >
> > We'll manage it with RCU and the i_lock, so that things like the
> > break_lease() check can use RCU to check for its presence.
>
> And the plan is to allocate it once and then never destroy it until the
> inode's destroyed?
>
> OK, seems like a sensible compromise assuming that typically only a
> small number of files are ever locked.
>
> --b.
>
Oops! Forgot to update the patch description there...sorry!
My original idea was to use RCU to manage it, but it's simpler to just
allocate it once and destroy it with the inode.
With that, we don't need to use RCU. It is certainly possible to do
this with RCU though if we think that gives us any benefit. I just
didn't see any immediate need once I started diving into coding this up.
> >
> > Signed-off-by: Jeff Layton <jlayton@primarydata.com>
> > ---
> > fs/inode.c | 3 ++-
> > fs/locks.c | 38 ++++++++++++++++++++++++++++++++++++++
> > include/linux/fs.h | 13 +++++++++++++
> > 3 files changed, 53 insertions(+), 1 deletion(-)
> >
> > diff --git a/fs/inode.c b/fs/inode.c
> > index 26753ba7b6d6..e87b47ab69a2 100644
> > --- a/fs/inode.c
> > +++ b/fs/inode.c
> > @@ -192,7 +192,7 @@ int inode_init_always(struct super_block *sb,
> > struct inode *inode) #ifdef CONFIG_FSNOTIFY
> > inode->i_fsnotify_mask = 0;
> > #endif
> > -
> > + inode->i_flctx = NULL;
> > this_cpu_inc(nr_inodes);
> >
> > return 0;
> > @@ -235,6 +235,7 @@ void __destroy_inode(struct inode *inode)
> > BUG_ON(inode_has_buffers(inode));
> > security_inode_free(inode);
> > fsnotify_inode_delete(inode);
> > + locks_free_lock_context(inode->i_flctx);
> > if (!inode->i_nlink) {
> > WARN_ON(atomic_long_read(&inode->i_sb->s_remove_count)
> > == 0); atomic_long_dec(&inode->i_sb->s_remove_count);
> > diff --git a/fs/locks.c b/fs/locks.c
> > index 735b8d3fa78c..6aa36bf21cc9 100644
> > --- a/fs/locks.c
> > +++ b/fs/locks.c
> > @@ -202,8 +202,43 @@ static DEFINE_HASHTABLE(blocked_hash,
> > BLOCKED_HASH_BITS); */
> > static DEFINE_SPINLOCK(blocked_lock_lock);
> >
> > +static struct kmem_cache *flctx_cache __read_mostly;
> > static struct kmem_cache *filelock_cache __read_mostly;
> >
> > +static struct file_lock_context *
> > +locks_get_lock_context(struct inode *inode)
> > +{
> > + struct file_lock_context *new = NULL;
> > +
> > + if (inode->i_flctx)
> > + goto out;
> > +
> > + new = kmem_cache_alloc(flctx_cache, GFP_KERNEL);
> > + if (!new)
> > + goto out;
> > +
> > + spin_lock(&inode->i_lock);
> > + if (likely(!inode->i_flctx)) {
> > + spin_lock_init(&new->flc_lock);
> > + INIT_LIST_HEAD(&new->flc_flock);
> > + swap(inode->i_flctx, new);
> > + }
> > + spin_unlock(&inode->i_lock);
> > + if (new)
> > + kmem_cache_free(flctx_cache, new);
> > +out:
> > + return inode->i_flctx;
> > +}
> > +
> > +void
> > +locks_free_lock_context(struct file_lock_context *ctx)
> > +{
> > + if (ctx) {
> > + WARN_ON_ONCE(!list_empty(&ctx->flc_flock));
> > + kmem_cache_free(flctx_cache, ctx);
> > + }
> > +}
> > +
> > static void locks_init_lock_heads(struct file_lock *fl)
> > {
> > INIT_HLIST_NODE(&fl->fl_link);
> > @@ -2621,6 +2656,9 @@ static int __init filelock_init(void)
> > {
> > int i;
> >
> > + flctx_cache = kmem_cache_create("file_lock_ctx",
> > + sizeof(struct file_lock_context), 0,
> > SLAB_PANIC, NULL); +
> > filelock_cache = kmem_cache_create("file_lock_cache",
> > sizeof(struct file_lock), 0, SLAB_PANIC,
> > NULL);
> > diff --git a/include/linux/fs.h b/include/linux/fs.h
> > index bb9484ae1eef..090212e26d12 100644
> > --- a/include/linux/fs.h
> > +++ b/include/linux/fs.h
> > @@ -594,6 +594,7 @@ struct inode {
> > #endif
> > const struct file_operations *i_fop; /*
> > former ->i_op->default_file_ops */ struct file_lock *i_flock;
> > + struct file_lock_context *i_flctx;
> > struct address_space i_data;
> > #ifdef CONFIG_QUOTA
> > struct dquot *i_dquot[MAXQUOTAS];
> > @@ -933,6 +934,11 @@ struct file_lock {
> > } fl_u;
> > };
> >
> > +struct file_lock_context {
> > + spinlock_t flc_lock;
> > + struct list_head flc_flock;
> > +};
> > +
> > /* The following constant reflects the upper bound of the
> > file/locking space */ #ifndef OFFSET_MAX
> > #define INT_LIMIT(x) (~((x)1 << (sizeof(x)*8 - 1)))
> > @@ -959,6 +965,7 @@ extern int fcntl_setlease(unsigned int fd,
> > struct file *filp, long arg); extern int fcntl_getlease(struct file
> > *filp);
> > /* fs/locks.c */
> > +void locks_free_lock_context(struct file_lock_context *ctx);
> > void locks_free_lock(struct file_lock *fl);
> > extern void locks_init_lock(struct file_lock *);
> > extern struct file_lock * locks_alloc_lock(void);
> > @@ -1016,6 +1023,12 @@ static inline int fcntl_getlease(struct file
> > *filp) return 0;
> > }
> >
> > +static inline void
> > +locks_free_lock_context(struct file_lock_context *ctx)
> > +{
> > + return;
> > +}
> > +
> > static inline void locks_init_lock(struct file_lock *fl)
> > {
> > return;
> > --
> > 1.9.3
> >
--
Jeff Layton <jlayton@primarydata.com>
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [RFC PATCH 08/12] nfs: convert lock handling to use file_lock_context
2014-09-10 14:28 ` [RFC PATCH 08/12] nfs: convert lock handling to use file_lock_context Jeff Layton
@ 2014-09-10 19:17 ` J. Bruce Fields
2014-09-10 19:20 ` Al Viro
2014-09-10 19:28 ` Jeff Layton
0 siblings, 2 replies; 21+ messages in thread
From: J. Bruce Fields @ 2014-09-10 19:17 UTC (permalink / raw)
To: Jeff Layton; +Cc: linux-fsdevel, linux-kernel, Al Viro, Christoph Hellwig
On Wed, Sep 10, 2014 at 10:28:46AM -0400, Jeff Layton wrote:
> Signed-off-by: Jeff Layton <jlayton@primarydata.com>
> ---
> fs/nfs/delegation.c | 37 +++++++++++++++++++++----------------
> fs/nfs/nfs4state.c | 24 +++++++++++++++---------
> fs/nfs/pagelist.c | 3 ++-
> fs/nfs/write.c | 39 +++++++++++++++++++++++++++++++++------
> 4 files changed, 71 insertions(+), 32 deletions(-)
>
> diff --git a/fs/nfs/delegation.c b/fs/nfs/delegation.c
> index 5853f53db732..22c6eed9bb5b 100644
> --- a/fs/nfs/delegation.c
> +++ b/fs/nfs/delegation.c
> @@ -85,25 +85,30 @@ static int nfs_delegation_claim_locks(struct nfs_open_context *ctx, struct nfs4_
> {
> struct inode *inode = state->inode;
> struct file_lock *fl;
> + struct file_lock_context *flctx = inode->i_flctx;
> + struct list_head *list;
> int status = 0;
>
> - if (inode->i_flock == NULL)
> - goto out;
> -
> - /* Protect inode->i_flock using the i_lock */
> - spin_lock(&inode->i_lock);
> - for (fl = inode->i_flock; fl != NULL; fl = fl->fl_next) {
> - if (!(fl->fl_flags & (FL_POSIX|FL_FLOCK)))
> - continue;
> - if (nfs_file_open_context(fl->fl_file) != ctx)
> - continue;
> - spin_unlock(&inode->i_lock);
> - status = nfs4_lock_delegation_recall(fl, state, stateid);
> - if (status < 0)
> - goto out;
> - spin_lock(&inode->i_lock);
> + flctx = inode->i_flctx;
> + if (flctx) {
> + list = &flctx->flc_posix;
> + spin_lock(&flctx->flc_lock);
> +restart:
> + list_for_each_entry(fl, list, fl_list) {
> + if (nfs_file_open_context(fl->fl_file) != ctx)
> + continue;
> + spin_unlock(&flctx->flc_lock);
> + status = nfs4_lock_delegation_recall(fl, state, stateid);
> + if (status < 0)
> + goto out;
> + spin_lock(&flctx->flc_lock);
> + }
> + if (list == &flctx->flc_posix) {
> + list = &flctx->flc_flock;
> + goto restart;
> + }
> + spin_unlock(&flctx->flc_lock);
> }
> - spin_unlock(&inode->i_lock);
> out:
> return status;
> }
> diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
> index a043f618cd5a..2899a0f26293 100644
> --- a/fs/nfs/nfs4state.c
> +++ b/fs/nfs/nfs4state.c
> @@ -1377,21 +1377,23 @@ static int nfs4_reclaim_locks(struct nfs4_state *state, const struct nfs4_state_
> struct inode *inode = state->inode;
> struct nfs_inode *nfsi = NFS_I(inode);
> struct file_lock *fl;
> + struct file_lock_context *flctx = inode->i_flctx;
> + struct list_head *list;
> int status = 0;
>
> - if (inode->i_flock == NULL)
> + if (!flctx)
> return 0;
>
> + list = &flctx->flc_posix;
> +
> /* Guard against delegation returns and new lock/unlock calls */
> down_write(&nfsi->rwsem);
> - /* Protect inode->i_flock using the BKL */
> - spin_lock(&inode->i_lock);
> - for (fl = inode->i_flock; fl != NULL; fl = fl->fl_next) {
> - if (!(fl->fl_flags & (FL_POSIX|FL_FLOCK)))
> - continue;
> + spin_lock(&flctx->flc_lock);
> +restart:
> + list_for_each_entry(fl, list, fl_list) {
> if (nfs_file_open_context(fl->fl_file)->state != state)
> continue;
> - spin_unlock(&inode->i_lock);
> + spin_unlock(&flctx->flc_lock);
> status = ops->recover_lock(state, fl);
> switch (status) {
> case 0:
> @@ -1418,9 +1420,13 @@ static int nfs4_reclaim_locks(struct nfs4_state *state, const struct nfs4_state_
> /* kill_proc(fl->fl_pid, SIGLOST, 1); */
> status = 0;
> }
> - spin_lock(&inode->i_lock);
> + spin_lock(&flctx->flc_lock);
> }
> - spin_unlock(&inode->i_lock);
> + if (list == &flctx->flc_posix) {
> + list = &flctx->flc_flock;
> + goto restart;
> + }
> + spin_unlock(&flctx->flc_lock);
> out:
> up_write(&nfsi->rwsem);
> return status;
> diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c
> index ba491926df5f..4df8d8755026 100644
> --- a/fs/nfs/pagelist.c
> +++ b/fs/nfs/pagelist.c
> @@ -782,7 +782,8 @@ static bool nfs_can_coalesce_requests(struct nfs_page *prev,
> if (prev) {
> if (!nfs_match_open_context(req->wb_context, prev->wb_context))
> return false;
> - if (req->wb_context->dentry->d_inode->i_flock != NULL &&
> + if (req->wb_context->dentry->d_inode->i_flctx != NULL &&
> + !list_empty(&req->wb_context->dentry->d_inode->i_flctx->flc_posix) &&
> !nfs_match_lock_context(req->wb_lock_context,
> prev->wb_lock_context))
> return false;
> diff --git a/fs/nfs/write.c b/fs/nfs/write.c
> index e3b5cf28bdc5..02b8777f8f2f 100644
> --- a/fs/nfs/write.c
> +++ b/fs/nfs/write.c
> @@ -1128,7 +1128,8 @@ int nfs_flush_incompatible(struct file *file, struct page *page)
> do_flush = req->wb_page != page || req->wb_context != ctx;
> /* for now, flush if more than 1 request in page_group */
> do_flush |= req->wb_this_page != req;
> - if (l_ctx && ctx->dentry->d_inode->i_flock != NULL) {
> + if (l_ctx && ctx->dentry->d_inode->i_flctx &&
> + !list_empty(&ctx->dentry->d_inode->i_flctx->flc_posix)) {
> do_flush |= l_ctx->lockowner.l_owner != current->files
> || l_ctx->lockowner.l_pid != current->tgid;
> }
> @@ -1189,6 +1190,12 @@ out:
> return PageUptodate(page) != 0;
> }
>
> +static bool
> +is_whole_file_wrlock(struct file_lock *fl)
> +{
> + return fl->fl_start == 0 && fl->fl_end == OFFSET_MAX && fl->fl_type == F_WRLCK;
> +}
> +
> /* If we know the page is up to date, and we're not using byte range locks (or
> * if we have the whole file locked for writing), it may be more efficient to
> * extend the write to cover the entire page in order to avoid fragmentation
> @@ -1199,17 +1206,37 @@ out:
> */
> static int nfs_can_extend_write(struct file *file, struct page *page, struct inode *inode)
> {
> + int ret;
> + struct file_lock_context *flctx = inode->i_flctx;
> + struct file_lock *fl;
> +
> if (file->f_flags & O_DSYNC)
> return 0;
> if (!nfs_write_pageuptodate(page, inode))
> return 0;
> if (NFS_PROTO(inode)->have_delegation(inode, FMODE_WRITE))
> return 1;
> - if (inode->i_flock == NULL || (inode->i_flock->fl_start == 0 &&
> - inode->i_flock->fl_end == OFFSET_MAX &&
> - inode->i_flock->fl_type != F_RDLCK))
There's a bug in the existing code, isn't there? Without holding the
i_lock, couldn't inode->i_flock turn NULL partway through this
conditional and cause NULL dereferences? (Or, more bizarrely, the
checks of those various fields could end up being for different locks.)
> - return 1;
> - return 0;
> + /* no lock context == no locks */
> + if (!flctx)
> + return 0;
> +
> + /* if lists are empty then there are no locks */
> + if (list_empty(&flctx->flc_posix) && list_empty(&flctx->flc_flock))
> + return 0;
> +
> + ret = 0;
> + /* Check to see if there are whole file write locks */
> + spin_lock(&flctx->flc_lock);
> + fl = list_first_entry(&flctx->flc_posix, struct file_lock, fl_list);
> + if (is_whole_file_wrlock(fl)) {
> + ret = 1;
> + } else {
> + fl = list_first_entry(&flctx->flc_flock, struct file_lock, fl_list);
> + if (is_whole_file_wrlock(fl))
> + ret = 1;
> + }
> + spin_unlock(&flctx->flc_lock);
> + return ret;
Kind of pity we're turning 5 lines of code into 20 in the name of
simplification. Could be slightly pithier:
	ret = is_whole_file_wrlock(fl);
	if (!ret) {
		fl = ...
		ret = is_whole_file_wrlock(fl);
	}
But, whatever, looks OK to me.
--b.
> }
>
> /*
> --
> 1.9.3
>
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [RFC PATCH 08/12] nfs: convert lock handling to use file_lock_context
2014-09-10 19:17 ` J. Bruce Fields
@ 2014-09-10 19:20 ` Al Viro
2014-09-10 19:28 ` Jeff Layton
1 sibling, 0 replies; 21+ messages in thread
From: Al Viro @ 2014-09-10 19:20 UTC (permalink / raw)
To: J. Bruce Fields
Cc: Jeff Layton, linux-fsdevel, linux-kernel, Christoph Hellwig
On Wed, Sep 10, 2014 at 03:17:34PM -0400, J. Bruce Fields wrote:
> On Wed, Sep 10, 2014 at 10:28:46AM -0400, Jeff Layton wrote:
> > Signed-off-by: Jeff Layton <jlayton@primarydata.com>
> > ---
> > fs/nfs/delegation.c | 37 +++++++++++++++++++++----------------
> > fs/nfs/nfs4state.c | 24 +++++++++++++++---------
> > fs/nfs/pagelist.c | 3 ++-
> > fs/nfs/write.c | 39 +++++++++++++++++++++++++++++++++------
> > 4 files changed, 71 insertions(+), 32 deletions(-)
> >
> > diff --git a/fs/nfs/delegation.c b/fs/nfs/delegation.c
> > index 5853f53db732..22c6eed9bb5b 100644
> > --- a/fs/nfs/delegation.c
> > +++ b/fs/nfs/delegation.c
> > @@ -85,25 +85,30 @@ static int nfs_delegation_claim_locks(struct nfs_open_context *ctx, struct nfs4_
> > {
> > struct inode *inode = state->inode;
> > struct file_lock *fl;
> > + struct file_lock_context *flctx = inode->i_flctx;
Ummm... Something ought to be done for inflicting identifiers like that on
those who'll read that code. But That Would Probably Be Illegal(tm), more's
the pity...
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [RFC PATCH 08/12] nfs: convert lock handling to use file_lock_context
2014-09-10 19:17 ` J. Bruce Fields
2014-09-10 19:20 ` Al Viro
@ 2014-09-10 19:28 ` Jeff Layton
2014-09-10 19:34 ` J. Bruce Fields
1 sibling, 1 reply; 21+ messages in thread
From: Jeff Layton @ 2014-09-10 19:28 UTC (permalink / raw)
To: J. Bruce Fields
Cc: linux-fsdevel, linux-kernel, Al Viro, Christoph Hellwig,
trond.myklebust, smayhew
On Wed, 10 Sep 2014 15:17:34 -0400
"J. Bruce Fields" <bfields@fieldses.org> wrote:
> On Wed, Sep 10, 2014 at 10:28:46AM -0400, Jeff Layton wrote:
> > Signed-off-by: Jeff Layton <jlayton@primarydata.com>
> > ---
> > fs/nfs/delegation.c | 37 +++++++++++++++++++++----------------
> > fs/nfs/nfs4state.c | 24 +++++++++++++++---------
> > fs/nfs/pagelist.c | 3 ++-
> > fs/nfs/write.c | 39 +++++++++++++++++++++++++++++++++------
> > 4 files changed, 71 insertions(+), 32 deletions(-)
> >
> > diff --git a/fs/nfs/delegation.c b/fs/nfs/delegation.c
> > index 5853f53db732..22c6eed9bb5b 100644
> > --- a/fs/nfs/delegation.c
> > +++ b/fs/nfs/delegation.c
> > @@ -85,25 +85,30 @@ static int nfs_delegation_claim_locks(struct nfs_open_context *ctx, struct nfs4_
> > {
> > struct inode *inode = state->inode;
> > struct file_lock *fl;
> > + struct file_lock_context *flctx = inode->i_flctx;
> > + struct list_head *list;
> > int status = 0;
> >
> > - if (inode->i_flock == NULL)
> > - goto out;
> > -
> > - /* Protect inode->i_flock using the i_lock */
> > - spin_lock(&inode->i_lock);
> > - for (fl = inode->i_flock; fl != NULL; fl = fl->fl_next) {
> > - if (!(fl->fl_flags & (FL_POSIX|FL_FLOCK)))
> > - continue;
> > - if (nfs_file_open_context(fl->fl_file) != ctx)
> > - continue;
> > - spin_unlock(&inode->i_lock);
> > - status = nfs4_lock_delegation_recall(fl, state, stateid);
> > - if (status < 0)
> > - goto out;
> > - spin_lock(&inode->i_lock);
> > + flctx = inode->i_flctx;
> > + if (flctx) {
> > + list = &flctx->flc_posix;
> > + spin_lock(&flctx->flc_lock);
> > +restart:
> > + list_for_each_entry(fl, list, fl_list) {
> > + if (nfs_file_open_context(fl->fl_file) != ctx)
> > + continue;
> > + spin_unlock(&flctx->flc_lock);
> > + status = nfs4_lock_delegation_recall(fl, state, stateid);
> > + if (status < 0)
> > + goto out;
> > + spin_lock(&flctx->flc_lock);
> > + }
> > + if (list == &flctx->flc_posix) {
> > + list = &flctx->flc_flock;
> > + goto restart;
> > + }
> > + spin_unlock(&flctx->flc_lock);
> > }
> > - spin_unlock(&inode->i_lock);
> > out:
> > return status;
> > }
> > diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
> > index a043f618cd5a..2899a0f26293 100644
> > --- a/fs/nfs/nfs4state.c
> > +++ b/fs/nfs/nfs4state.c
> > @@ -1377,21 +1377,23 @@ static int nfs4_reclaim_locks(struct nfs4_state *state, const struct nfs4_state_
> > struct inode *inode = state->inode;
> > struct nfs_inode *nfsi = NFS_I(inode);
> > struct file_lock *fl;
> > + struct file_lock_context *flctx = inode->i_flctx;
> > + struct list_head *list;
> > int status = 0;
> >
> > - if (inode->i_flock == NULL)
> > + if (!flctx)
> > return 0;
> >
> > + list = &flctx->flc_posix;
> > +
> > /* Guard against delegation returns and new lock/unlock calls */
> > down_write(&nfsi->rwsem);
> > - /* Protect inode->i_flock using the BKL */
> > - spin_lock(&inode->i_lock);
> > - for (fl = inode->i_flock; fl != NULL; fl = fl->fl_next) {
> > - if (!(fl->fl_flags & (FL_POSIX|FL_FLOCK)))
> > - continue;
> > + spin_lock(&flctx->flc_lock);
> > +restart:
> > + list_for_each_entry(fl, list, fl_list) {
> > if (nfs_file_open_context(fl->fl_file)->state != state)
> > continue;
> > - spin_unlock(&inode->i_lock);
> > + spin_unlock(&flctx->flc_lock);
> > status = ops->recover_lock(state, fl);
> > switch (status) {
> > case 0:
> > @@ -1418,9 +1420,13 @@ static int nfs4_reclaim_locks(struct nfs4_state *state, const struct nfs4_state_
> > /* kill_proc(fl->fl_pid, SIGLOST, 1); */
> > status = 0;
> > }
> > - spin_lock(&inode->i_lock);
> > + spin_lock(&flctx->flc_lock);
> > }
> > - spin_unlock(&inode->i_lock);
> > + if (list == &flctx->flc_posix) {
> > + list = &flctx->flc_flock;
> > + goto restart;
> > + }
> > + spin_unlock(&flctx->flc_lock);
> > out:
> > up_write(&nfsi->rwsem);
> > return status;
> > diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c
> > index ba491926df5f..4df8d8755026 100644
> > --- a/fs/nfs/pagelist.c
> > +++ b/fs/nfs/pagelist.c
> > @@ -782,7 +782,8 @@ static bool nfs_can_coalesce_requests(struct nfs_page *prev,
> > if (prev) {
> > if (!nfs_match_open_context(req->wb_context, prev->wb_context))
> > return false;
> > - if (req->wb_context->dentry->d_inode->i_flock != NULL &&
> > + if (req->wb_context->dentry->d_inode->i_flctx != NULL &&
> > + !list_empty(&req->wb_context->dentry->d_inode->i_flctx->flc_posix) &&
> > !nfs_match_lock_context(req->wb_lock_context,
> > prev->wb_lock_context))
> > return false;
> > diff --git a/fs/nfs/write.c b/fs/nfs/write.c
> > index e3b5cf28bdc5..02b8777f8f2f 100644
> > --- a/fs/nfs/write.c
> > +++ b/fs/nfs/write.c
> > @@ -1128,7 +1128,8 @@ int nfs_flush_incompatible(struct file *file, struct page *page)
> > do_flush = req->wb_page != page || req->wb_context != ctx;
> > /* for now, flush if more than 1 request in page_group */
> > do_flush |= req->wb_this_page != req;
> > - if (l_ctx && ctx->dentry->d_inode->i_flock != NULL) {
> > + if (l_ctx && ctx->dentry->d_inode->i_flctx &&
> > + !list_empty(&ctx->dentry->d_inode->i_flctx->flc_posix)) {
> > do_flush |= l_ctx->lockowner.l_owner != current->files
> > || l_ctx->lockowner.l_pid != current->tgid;
> > }
> > @@ -1189,6 +1190,12 @@ out:
> > return PageUptodate(page) != 0;
> > }
> >
> > +static bool
> > +is_whole_file_wrlock(struct file_lock *fl)
> > +{
> > + return fl->fl_start == 0 && fl->fl_end == OFFSET_MAX && fl->fl_type == F_WRLCK;
> > +}
> > +
> > /* If we know the page is up to date, and we're not using byte range locks (or
> > * if we have the whole file locked for writing), it may be more efficient to
> > * extend the write to cover the entire page in order to avoid fragmentation
> > @@ -1199,17 +1206,37 @@ out:
> > */
> > static int nfs_can_extend_write(struct file *file, struct page *page, struct inode *inode)
> > {
> > + int ret;
> > + struct file_lock_context *flctx = inode->i_flctx;
> > + struct file_lock *fl;
> > +
> > if (file->f_flags & O_DSYNC)
> > return 0;
> > if (!nfs_write_pageuptodate(page, inode))
> > return 0;
> > if (NFS_PROTO(inode)->have_delegation(inode, FMODE_WRITE))
> > return 1;
> > - if (inode->i_flock == NULL || (inode->i_flock->fl_start == 0 &&
> > - inode->i_flock->fl_end == OFFSET_MAX &&
> > - inode->i_flock->fl_type != F_RDLCK))
>
> There's a bug in the existing code, isn't there? Without holding the
> i_lock, couldn't inode->i_flock turn NULL partway through this
> conditional and cause NULL dereferences? (Or, more bizarrely, the
> checks of those various fields could end up being for different locks.)
>
(cc'ing Trond and Scott...)

Yeah, I think you're correct. We really ought to hold the i_lock there
once we see that i_flock isn't NULL.

It's stuff like this that makes me wonder if we ought to convert all of
this to using RCU. Being able to hold the rcu_read_lock instead of the
i_lock (or the flc_lock once the conversion is done) would be rather
nice.
> > - return 1;
> > - return 0;
> > + /* no lock context == no locks */
> > + if (!flctx)
> > + return 0;
> > +
> > + /* if lists are empty then there are no locks */
> > + if (list_empty(&flctx->flc_posix) && list_empty(&flctx->flc_flock))
> > + return 0;
> > +
> > + ret = 0;
> > + /* Check to see if there are whole file write locks */
> > + spin_lock(&flctx->flc_lock);
> > + fl = list_first_entry(&flctx->flc_posix, struct file_lock, fl_list);
> > + if (is_whole_file_wrlock(fl)) {
> > + ret = 1;
> > + } else {
> > + fl = list_first_entry(&flctx->flc_flock, struct file_lock, fl_list);
> > + if (is_whole_file_wrlock(fl))
> > + ret = 1;
> > + }
> > + spin_unlock(&flctx->flc_lock);
> > + return ret;
>
> Kind of pity we're turning 5 lines of code into 20 in the name of
> simplification. Could be slightly pithier:
>
> ret = is_whole_file_wrlock(fl);
> if (!ret) {
> fl = ...
> ret = is_whole_file_wrlock(fl);
> }
>
> But, whatever, looks OK to me.
>
> --b.
>
Yes, that's the downside of moving to multiple list_heads. Still, I
think it's worth doing that even if we end up with the code a bit more
verbose.

It may be best to consider moving some of this into helpers that live
in locks.c. I really don't like having filesystems poke around in the
intimate details of the file locking code as a general rule...
> > }
> >
> > /*
> > --
> > 1.9.3
> >
--
Jeff Layton <jlayton@primarydata.com>
* Re: [RFC PATCH 08/12] nfs: convert lock handling to use file_lock_context
2014-09-10 19:28 ` Jeff Layton
@ 2014-09-10 19:34 ` J. Bruce Fields
0 siblings, 0 replies; 21+ messages in thread
From: J. Bruce Fields @ 2014-09-10 19:34 UTC (permalink / raw)
To: Jeff Layton
Cc: linux-fsdevel, linux-kernel, Al Viro, Christoph Hellwig,
trond.myklebust, smayhew
On Wed, Sep 10, 2014 at 03:28:10PM -0400, Jeff Layton wrote:
> Yes, that's the downside of moving to multiple list_heads. Still, I
> think it's worth doing that even if we end up with the code a bit more
> verbose.
>
> It may be best to consider moving some of this into helpers that live
> in locks.c. I really don't like having filesystems poke around in the
> intimate details of the file locking code as a general rule...
I was also wondering whether helpers like for_each_posix_lock() or
first_posix_lock() would be worth it.
--b.
Thread overview: 21+ messages
2014-09-10 14:28 [RFC PATCH 00/12] locks: saner method for managing file locks Jeff Layton
2014-09-10 14:28 ` [RFC PATCH 01/12] locks: add a new struct file_locking_context pointer to struct inode Jeff Layton
2014-09-10 18:38 ` J. Bruce Fields
2014-09-10 18:51 ` Jeff Layton
2014-09-10 14:28 ` [RFC PATCH 02/12] locks: add new struct list_head to struct file_lock Jeff Layton
2014-09-10 14:28 ` [RFC PATCH 03/12] locks: have locks_release_file use flock_lock_file to release generic flock locks Jeff Layton
2014-09-10 17:38 ` J. Bruce Fields
2014-09-10 17:49 ` Jeff Layton
2014-09-10 14:28 ` [RFC PATCH 04/12] locks: move flock locks to file_lock_context Jeff Layton
2014-09-10 14:28 ` [RFC PATCH 05/12] locks: convert posix " Jeff Layton
2014-09-10 14:28 ` [RFC PATCH 06/12] locks: convert lease handling " Jeff Layton
2014-09-10 14:28 ` [RFC PATCH 07/12] ceph: convert to looking for locks in struct file_lock_context Jeff Layton
2014-09-10 14:28 ` [RFC PATCH 08/12] nfs: convert lock handling to use file_lock_context Jeff Layton
2014-09-10 19:17 ` J. Bruce Fields
2014-09-10 19:20 ` Al Viro
2014-09-10 19:28 ` Jeff Layton
2014-09-10 19:34 ` J. Bruce Fields
2014-09-10 14:28 ` [RFC PATCH 09/12] cifs: convert it " Jeff Layton
2014-09-10 14:28 ` [RFC PATCH 10/12] lockd: " Jeff Layton
2014-09-10 14:28 ` [RFC PATCH 11/12] nfsd: convert to file_lock_context Jeff Layton
2014-09-10 14:28 ` [RFC PATCH 12/12] locks: remove i_flock field from struct inode Jeff Layton