* [RFC v6.5-rc2 1/3] fs: lockd: nlm_blocked list race fixes
@ 2023-07-20 12:58 Alexander Aring
2023-07-20 12:58 ` [RFC v6.5-rc2 2/3] fs: lockd: fix race in async lock request handling Alexander Aring
` (3 more replies)
0 siblings, 4 replies; 12+ messages in thread
From: Alexander Aring @ 2023-07-20 12:58 UTC (permalink / raw)
To: chuck.lever
Cc: jlayton, neilb, kolga, Dai.Ngo, tom, trond.myklebust, anna,
linux-nfs, teigland, cluster-devel, aahringo, agruenba
This patch fixes races when lockd accesses the global nlm_blocked list.
Accessing the list used to be mostly safe because everything ran in the
lockd kernel thread context, but there are cases like
nlmsvc_grant_deferred() that manipulate the nlm_blocked list and can be
called from any context.
Cc: stable@vger.kernel.org
Signed-off-by: Alexander Aring <aahringo@redhat.com>
---
fs/lockd/svclock.c | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/fs/lockd/svclock.c b/fs/lockd/svclock.c
index c43ccdf28ed9..28abec5c451d 100644
--- a/fs/lockd/svclock.c
+++ b/fs/lockd/svclock.c
@@ -131,12 +131,14 @@ static void nlmsvc_insert_block(struct nlm_block *block, unsigned long when)
static inline void
nlmsvc_remove_block(struct nlm_block *block)
{
+ spin_lock(&nlm_blocked_lock);
if (!list_empty(&block->b_list)) {
- spin_lock(&nlm_blocked_lock);
list_del_init(&block->b_list);
spin_unlock(&nlm_blocked_lock);
nlmsvc_release_block(block);
+ return;
}
+ spin_unlock(&nlm_blocked_lock);
}
/*
@@ -152,6 +154,7 @@ nlmsvc_lookup_block(struct nlm_file *file, struct nlm_lock *lock)
file, lock->fl.fl_pid,
(long long)lock->fl.fl_start,
(long long)lock->fl.fl_end, lock->fl.fl_type);
+ spin_lock(&nlm_blocked_lock);
list_for_each_entry(block, &nlm_blocked, b_list) {
fl = &block->b_call->a_args.lock.fl;
dprintk("lockd: check f=%p pd=%d %Ld-%Ld ty=%d cookie=%s\n",
@@ -161,9 +164,11 @@ nlmsvc_lookup_block(struct nlm_file *file, struct nlm_lock *lock)
nlmdbg_cookie2a(&block->b_call->a_args.cookie));
if (block->b_file == file && nlm_compare_locks(fl, &lock->fl)) {
kref_get(&block->b_count);
+ spin_unlock(&nlm_blocked_lock);
return block;
}
}
+ spin_unlock(&nlm_blocked_lock);
return NULL;
}
@@ -185,16 +190,19 @@ nlmsvc_find_block(struct nlm_cookie *cookie)
{
struct nlm_block *block;
+ spin_lock(&nlm_blocked_lock);
list_for_each_entry(block, &nlm_blocked, b_list) {
if (nlm_cookie_match(&block->b_call->a_args.cookie,cookie))
goto found;
}
+ spin_unlock(&nlm_blocked_lock);
return NULL;
found:
dprintk("nlmsvc_find_block(%s): block=%p\n", nlmdbg_cookie2a(cookie), block);
kref_get(&block->b_count);
+ spin_unlock(&nlm_blocked_lock);
return block;
}
@@ -317,6 +325,7 @@ void nlmsvc_traverse_blocks(struct nlm_host *host,
restart:
mutex_lock(&file->f_mutex);
+ spin_lock(&nlm_blocked_lock);
list_for_each_entry_safe(block, next, &file->f_blocks, b_flist) {
if (!match(block->b_host, host))
continue;
@@ -325,11 +334,13 @@ void nlmsvc_traverse_blocks(struct nlm_host *host,
if (list_empty(&block->b_list))
continue;
kref_get(&block->b_count);
+ spin_unlock(&nlm_blocked_lock);
mutex_unlock(&file->f_mutex);
nlmsvc_unlink_block(block);
nlmsvc_release_block(block);
goto restart;
}
+ spin_unlock(&nlm_blocked_lock);
mutex_unlock(&file->f_mutex);
}
--
2.31.1
* [RFC v6.5-rc2 2/3] fs: lockd: fix race in async lock request handling
2023-07-20 12:58 [RFC v6.5-rc2 1/3] fs: lockd: nlm_blocked list race fixes Alexander Aring
@ 2023-07-20 12:58 ` Alexander Aring
2023-07-21 13:09 ` Alexander Aring
2023-07-21 15:45 ` Jeff Layton
2023-07-20 12:58 ` [RFC v6.5-rc2 3/3] fs: lockd: introduce safe async lock op Alexander Aring
` (2 subsequent siblings)
3 siblings, 2 replies; 12+ messages in thread
From: Alexander Aring @ 2023-07-20 12:58 UTC (permalink / raw)
To: chuck.lever
Cc: jlayton, neilb, kolga, Dai.Ngo, tom, trond.myklebust, anna,
linux-nfs, teigland, cluster-devel, aahringo, agruenba
This patch fixes a race in async lock request handling between adding
the relevant struct nlm_block to the nlm_blocked list after the request
was sent by vfs_lock_file() and nlmsvc_grant_deferred() looking up that
nlm_block in the nlm_blocked list. The async request can complete
before the nlm_block has been added to the list. This would end in
-ENOENT and a kernel log message of "lockd: grant for unknown block".
To solve this issue we add the nlm_block before the vfs_lock_file()
call, to be sure it is already on the list when a possible
nlmsvc_grant_deferred() is called. If vfs_lock_file() returns a result
for which the block should not stay on the nlm_blocked list, the
nlm_block is removed from the list again.
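A rough picture of the problematic interleaving (illustrative only; the
filesystem completes the request asynchronously and calls the lm_grant()
callback, which lockd implements as nlmsvc_grant_deferred()):

    lockd: nlmsvc_lock()                   fs: async completion
    --------------------                   --------------------
    vfs_lock_file()
      -> FILE_LOCK_DEFERRED
                                           lm_grant()
                                             -> nlmsvc_grant_deferred()
                                                no match in nlm_blocked
                                                "lockd: grant for unknown block"
    nlmsvc_insert_block(block, NLM_NEVER)  /* too late */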
Signed-off-by: Alexander Aring <aahringo@redhat.com>
---
fs/lockd/svclock.c | 80 +++++++++++++++++++++++++++----------
include/linux/lockd/lockd.h | 1 +
2 files changed, 60 insertions(+), 21 deletions(-)
diff --git a/fs/lockd/svclock.c b/fs/lockd/svclock.c
index 28abec5c451d..62ef27a69a9e 100644
--- a/fs/lockd/svclock.c
+++ b/fs/lockd/svclock.c
@@ -297,6 +297,8 @@ static void nlmsvc_free_block(struct kref *kref)
dprintk("lockd: freeing block %p...\n", block);
+ WARN_ON_ONCE(block->b_flags & B_PENDING_CALLBACK);
+
/* Remove block from file's list of blocks */
list_del_init(&block->b_flist);
mutex_unlock(&file->f_mutex);
@@ -543,6 +545,12 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file,
goto out;
}
+ if (block->b_flags & B_PENDING_CALLBACK)
+ goto pending_request;
+
+ /* Append to list of blocked */
+ nlmsvc_insert_block(block, NLM_NEVER);
+
if (!wait)
lock->fl.fl_flags &= ~FL_SLEEP;
mode = lock_to_openmode(&lock->fl);
@@ -552,9 +560,13 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file,
dprintk("lockd: vfs_lock_file returned %d\n", error);
switch (error) {
case 0:
+ nlmsvc_remove_block(block);
ret = nlm_granted;
goto out;
case -EAGAIN:
+ if (!wait)
+ nlmsvc_remove_block(block);
+pending_request:
/*
* If this is a blocking request for an
* already pending lock request then we need
@@ -565,6 +577,8 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file,
ret = async_block ? nlm_lck_blocked : nlm_lck_denied;
goto out;
case FILE_LOCK_DEFERRED:
+ block->b_flags |= B_PENDING_CALLBACK;
+
if (wait)
break;
/* Filesystem lock operation is in progress
@@ -572,17 +586,16 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file,
ret = nlmsvc_defer_lock_rqst(rqstp, block);
goto out;
case -EDEADLK:
+ nlmsvc_remove_block(block);
ret = nlm_deadlock;
goto out;
default: /* includes ENOLCK */
+ nlmsvc_remove_block(block);
ret = nlm_lck_denied_nolocks;
goto out;
}
ret = nlm_lck_blocked;
-
- /* Append to list of blocked */
- nlmsvc_insert_block(block, NLM_NEVER);
out:
mutex_unlock(&file->f_mutex);
nlmsvc_release_block(block);
@@ -739,34 +752,59 @@ nlmsvc_update_deferred_block(struct nlm_block *block, int result)
block->b_flags |= B_TIMED_OUT;
}
+static int __nlmsvc_grant_deferred(struct nlm_block *block,
+ struct file_lock *fl,
+ int result)
+{
+ int rc = 0;
+
+ dprintk("lockd: nlmsvc_notify_blocked block %p flags %d\n",
+ block, block->b_flags);
+ if (block->b_flags & B_QUEUED) {
+ if (block->b_flags & B_TIMED_OUT) {
+ rc = -ENOLCK;
+ goto out;
+ }
+ nlmsvc_update_deferred_block(block, result);
+ } else if (result == 0)
+ block->b_granted = 1;
+
+ nlmsvc_insert_block_locked(block, 0);
+ svc_wake_up(block->b_daemon);
+out:
+ return rc;
+}
+
static int nlmsvc_grant_deferred(struct file_lock *fl, int result)
{
- struct nlm_block *block;
- int rc = -ENOENT;
+ struct nlm_block *block = NULL;
+ int rc;
spin_lock(&nlm_blocked_lock);
list_for_each_entry(block, &nlm_blocked, b_list) {
if (nlm_compare_locks(&block->b_call->a_args.lock.fl, fl)) {
- dprintk("lockd: nlmsvc_notify_blocked block %p flags %d\n",
- block, block->b_flags);
- if (block->b_flags & B_QUEUED) {
- if (block->b_flags & B_TIMED_OUT) {
- rc = -ENOLCK;
- break;
- }
- nlmsvc_update_deferred_block(block, result);
- } else if (result == 0)
- block->b_granted = 1;
-
- nlmsvc_insert_block_locked(block, 0);
- svc_wake_up(block->b_daemon);
- rc = 0;
+ kref_get(&block->b_count);
break;
}
}
spin_unlock(&nlm_blocked_lock);
- if (rc == -ENOENT)
- printk(KERN_WARNING "lockd: grant for unknown block\n");
+
+ if (!block) {
+ pr_warn("lockd: grant for unknown pending block\n");
+ return -ENOENT;
+ }
+
+ /* don't interfere with nlmsvc_lock() */
+ mutex_lock(&block->b_file->f_mutex);
+ block->b_flags &= ~B_PENDING_CALLBACK;
+
+ spin_lock(&nlm_blocked_lock);
+ WARN_ON_ONCE(list_empty(&block->b_list));
+ rc = __nlmsvc_grant_deferred(block, fl, result);
+ spin_unlock(&nlm_blocked_lock);
+ mutex_unlock(&block->b_file->f_mutex);
+
+ nlmsvc_release_block(block);
return rc;
}
diff --git a/include/linux/lockd/lockd.h b/include/linux/lockd/lockd.h
index f42594a9efe0..a977be8bcc2c 100644
--- a/include/linux/lockd/lockd.h
+++ b/include/linux/lockd/lockd.h
@@ -189,6 +189,7 @@ struct nlm_block {
#define B_QUEUED 1 /* lock queued */
#define B_GOT_CALLBACK 2 /* got lock or conflicting lock */
#define B_TIMED_OUT 4 /* filesystem too slow to respond */
+#define B_PENDING_CALLBACK 8 /* pending callback for lock request */
};
/*
--
2.31.1
* [RFC v6.5-rc2 3/3] fs: lockd: introduce safe async lock op
2023-07-20 12:58 [RFC v6.5-rc2 1/3] fs: lockd: nlm_blocked list race fixes Alexander Aring
2023-07-20 12:58 ` [RFC v6.5-rc2 2/3] fs: lockd: fix race in async lock request handling Alexander Aring
@ 2023-07-20 12:58 ` Alexander Aring
2023-07-21 17:46 ` Jeff Layton
2023-07-21 15:14 ` [RFC v6.5-rc2 1/3] fs: lockd: nlm_blocked list race fixes Jeff Layton
2023-07-21 16:59 ` Chuck Lever
3 siblings, 1 reply; 12+ messages in thread
From: Alexander Aring @ 2023-07-20 12:58 UTC (permalink / raw)
To: chuck.lever
Cc: jlayton, neilb, kolga, Dai.Ngo, tom, trond.myklebust, anna,
linux-nfs, teigland, cluster-devel, aahringo, agruenba
This patch mostly reverts commit 40595cdc93ed ("nfs: block notification
on fs with its own ->lock") and introduces an EXPORT_OP_SAFE_ASYNC_LOCK
export flag to signal that the "own ->lock" implementation supports
async lock requests. The main user is DLM, which is used by the GFS2
and OCFS2 filesystems. Those provide their own ->lock() implementation
and return FILE_LOCK_DEFERRED. Since commit 40595cdc93ed ("nfs: block
notification on fs with its own ->lock") the DLM implementation was
never updated. This patch prepares for DLM to set the
EXPORT_OP_SAFE_ASYNC_LOCK export flag and for updating the DLM plock
implementation accordingly.
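For illustration only (not part of this patch; the actual DLM/GFS2/OCFS2
changes are follow-up work, and the names below are placeholders), a
filesystem whose ->lock() can safely complete deferred requests through
lm_grant() would opt in roughly like this:

	static const struct export_operations example_export_ops = {
		.fh_to_dentry	= example_fh_to_dentry,
		.fh_to_parent	= example_fh_to_parent,
		.flags		= EXPORT_OP_SAFE_ASYNC_LOCK,
	};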
Signed-off-by: Alexander Aring <aahringo@redhat.com>
---
fs/lockd/svclock.c | 5 ++---
fs/nfsd/nfs4state.c | 11 ++++++++---
include/linux/exportfs.h | 1 +
3 files changed, 11 insertions(+), 6 deletions(-)
diff --git a/fs/lockd/svclock.c b/fs/lockd/svclock.c
index 62ef27a69a9e..54a67bd33843 100644
--- a/fs/lockd/svclock.c
+++ b/fs/lockd/svclock.c
@@ -483,9 +483,7 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file,
struct nlm_host *host, struct nlm_lock *lock, int wait,
struct nlm_cookie *cookie, int reclaim)
{
-#if IS_ENABLED(CONFIG_SUNRPC_DEBUG)
struct inode *inode = nlmsvc_file_inode(file);
-#endif
struct nlm_block *block = NULL;
int error;
int mode;
@@ -499,7 +497,8 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file,
(long long)lock->fl.fl_end,
wait);
- if (nlmsvc_file_file(file)->f_op->lock) {
+ if (!(inode->i_sb->s_export_op->flags & EXPORT_OP_SAFE_ASYNC_LOCK) &&
+ nlmsvc_file_file(file)->f_op->lock) {
async_block = wait;
wait = 0;
}
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 6e61fa3acaf1..efcea229d640 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -7432,6 +7432,7 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
struct nfsd4_blocked_lock *nbl = NULL;
struct file_lock *file_lock = NULL;
struct file_lock *conflock = NULL;
+ struct super_block *sb;
__be32 status = 0;
int lkflg;
int err;
@@ -7453,6 +7454,7 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
dprintk("NFSD: nfsd4_lock: permission denied!\n");
return status;
}
+ sb = cstate->current_fh.fh_dentry->d_sb;
if (lock->lk_is_new) {
if (nfsd4_has_session(cstate))
@@ -7504,7 +7506,8 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
fp = lock_stp->st_stid.sc_file;
switch (lock->lk_type) {
case NFS4_READW_LT:
- if (nfsd4_has_session(cstate))
+ if (sb->s_export_op->flags & EXPORT_OP_SAFE_ASYNC_LOCK &&
+ nfsd4_has_session(cstate))
fl_flags |= FL_SLEEP;
fallthrough;
case NFS4_READ_LT:
@@ -7516,7 +7519,8 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
fl_type = F_RDLCK;
break;
case NFS4_WRITEW_LT:
- if (nfsd4_has_session(cstate))
+ if (sb->s_export_op->flags & EXPORT_OP_SAFE_ASYNC_LOCK &&
+ nfsd4_has_session(cstate))
fl_flags |= FL_SLEEP;
fallthrough;
case NFS4_WRITE_LT:
@@ -7544,7 +7548,8 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
* for file locks), so don't attempt blocking lock notifications
* on those filesystems:
*/
- if (nf->nf_file->f_op->lock)
+ if (!(sb->s_export_op->flags & EXPORT_OP_SAFE_ASYNC_LOCK) &&
+ nf->nf_file->f_op->lock)
fl_flags &= ~FL_SLEEP;
nbl = find_or_allocate_block(lock_sop, &fp->fi_fhandle, nn);
diff --git a/include/linux/exportfs.h b/include/linux/exportfs.h
index 11fbd0ee1370..da742abbaf3e 100644
--- a/include/linux/exportfs.h
+++ b/include/linux/exportfs.h
@@ -224,6 +224,7 @@ struct export_operations {
atomic attribute updates
*/
#define EXPORT_OP_FLUSH_ON_CLOSE (0x20) /* fs flushes file data on close */
+#define EXPORT_OP_SAFE_ASYNC_LOCK (0x40) /* fs can do async lock request */
unsigned long flags;
};
--
2.31.1
* Re: [RFC v6.5-rc2 2/3] fs: lockd: fix race in async lock request handling
2023-07-20 12:58 ` [RFC v6.5-rc2 2/3] fs: lockd: fix race in async lock request handling Alexander Aring
@ 2023-07-21 13:09 ` Alexander Aring
2023-07-21 16:43 ` Jeff Layton
2023-07-21 15:45 ` Jeff Layton
1 sibling, 1 reply; 12+ messages in thread
From: Alexander Aring @ 2023-07-21 13:09 UTC (permalink / raw)
To: chuck.lever
Cc: jlayton, neilb, kolga, Dai.Ngo, tom, trond.myklebust, anna,
linux-nfs, teigland, cluster-devel, agruenba
Hi,
On Thu, Jul 20, 2023 at 8:58 AM Alexander Aring <aahringo@redhat.com> wrote:
>
> This patch fixes a race in async lock request handling between adding
> the relevant struct nlm_block to the nlm_blocked list after the request
> was sent by vfs_lock_file() and nlmsvc_grant_deferred() looking up that
> nlm_block in the nlm_blocked list. The async request can complete
> before the nlm_block has been added to the list. This would end in
> -ENOENT and a kernel log message of "lockd: grant for unknown block".
>
> To solve this issue we add the nlm_block before the vfs_lock_file()
> call, to be sure it is already on the list when a possible
> nlmsvc_grant_deferred() is called. If vfs_lock_file() returns a result
> for which the block should not stay on the nlm_blocked list, the
> nlm_block is removed from the list again.
>
> Signed-off-by: Alexander Aring <aahringo@redhat.com>
> ---
> fs/lockd/svclock.c | 80 +++++++++++++++++++++++++++----------
> include/linux/lockd/lockd.h | 1 +
> 2 files changed, 60 insertions(+), 21 deletions(-)
>
> diff --git a/fs/lockd/svclock.c b/fs/lockd/svclock.c
> index 28abec5c451d..62ef27a69a9e 100644
> --- a/fs/lockd/svclock.c
> +++ b/fs/lockd/svclock.c
> @@ -297,6 +297,8 @@ static void nlmsvc_free_block(struct kref *kref)
>
> dprintk("lockd: freeing block %p...\n", block);
>
> + WARN_ON_ONCE(block->b_flags & B_PENDING_CALLBACK);
> +
> /* Remove block from file's list of blocks */
> list_del_init(&block->b_flist);
> mutex_unlock(&file->f_mutex);
> @@ -543,6 +545,12 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file,
> goto out;
> }
>
> + if (block->b_flags & B_PENDING_CALLBACK)
> + goto pending_request;
> +
> + /* Append to list of blocked */
> + nlmsvc_insert_block(block, NLM_NEVER);
> +
> if (!wait)
> lock->fl.fl_flags &= ~FL_SLEEP;
> mode = lock_to_openmode(&lock->fl);
> @@ -552,9 +560,13 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file,
> dprintk("lockd: vfs_lock_file returned %d\n", error);
> switch (error) {
> case 0:
> + nlmsvc_remove_block(block);
Reacting here with nlmsvc_remove_block() assumes that the block had not
already been added to the nlm_blocked list before the
nlmsvc_insert_block() call above. I am not sure that is always the case
here.
Does somebody see a problem with that?
> ret = nlm_granted;
> goto out;
> case -EAGAIN:
> + if (!wait)
> + nlmsvc_remove_block(block);
> +pending_request:
> /*
> * If this is a blocking request for an
> * already pending lock request then we need
> @@ -565,6 +577,8 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file,
> ret = async_block ? nlm_lck_blocked : nlm_lck_denied;
> goto out;
> case FILE_LOCK_DEFERRED:
> + block->b_flags |= B_PENDING_CALLBACK;
> +
> if (wait)
> break;
> /* Filesystem lock operation is in progress
> @@ -572,17 +586,16 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file,
> ret = nlmsvc_defer_lock_rqst(rqstp, block);
> goto out;
> case -EDEADLK:
> + nlmsvc_remove_block(block);
> ret = nlm_deadlock;
> goto out;
> default: /* includes ENOLCK */
> + nlmsvc_remove_block(block);
> ret = nlm_lck_denied_nolocks;
> goto out;
> }
>
> ret = nlm_lck_blocked;
> -
> - /* Append to list of blocked */
> - nlmsvc_insert_block(block, NLM_NEVER);
> out:
> mutex_unlock(&file->f_mutex);
> nlmsvc_release_block(block);
> @@ -739,34 +752,59 @@ nlmsvc_update_deferred_block(struct nlm_block *block, int result)
> block->b_flags |= B_TIMED_OUT;
> }
- Alex
* Re: [RFC v6.5-rc2 1/3] fs: lockd: nlm_blocked list race fixes
2023-07-20 12:58 [RFC v6.5-rc2 1/3] fs: lockd: nlm_blocked list race fixes Alexander Aring
2023-07-20 12:58 ` [RFC v6.5-rc2 2/3] fs: lockd: fix race in async lock request handling Alexander Aring
2023-07-20 12:58 ` [RFC v6.5-rc2 3/3] fs: lockd: introduce safe async lock op Alexander Aring
@ 2023-07-21 15:14 ` Jeff Layton
2023-07-21 16:59 ` Chuck Lever
3 siblings, 0 replies; 12+ messages in thread
From: Jeff Layton @ 2023-07-21 15:14 UTC (permalink / raw)
To: Alexander Aring, chuck.lever
Cc: neilb, kolga, Dai.Ngo, tom, trond.myklebust, anna, linux-nfs,
teigland, cluster-devel, agruenba
On Thu, 2023-07-20 at 08:58 -0400, Alexander Aring wrote:
> This patch fixes races when lockd accesses the global nlm_blocked list.
> Accessing the list used to be mostly safe because everything ran in the
> lockd kernel thread context, but there are cases like
> nlmsvc_grant_deferred() that manipulate the nlm_blocked list and can be
> called from any context.
>
> Cc: stable@vger.kernel.org
> Signed-off-by: Alexander Aring <aahringo@redhat.com>
> ---
> fs/lockd/svclock.c | 13 ++++++++++++-
> 1 file changed, 12 insertions(+), 1 deletion(-)
>
> diff --git a/fs/lockd/svclock.c b/fs/lockd/svclock.c
> index c43ccdf28ed9..28abec5c451d 100644
> --- a/fs/lockd/svclock.c
> +++ b/fs/lockd/svclock.c
> @@ -131,12 +131,14 @@ static void nlmsvc_insert_block(struct nlm_block *block, unsigned long when)
> static inline void
> nlmsvc_remove_block(struct nlm_block *block)
> {
> + spin_lock(&nlm_blocked_lock);
> if (!list_empty(&block->b_list)) {
> - spin_lock(&nlm_blocked_lock);
> list_del_init(&block->b_list);
> spin_unlock(&nlm_blocked_lock);
> nlmsvc_release_block(block);
> + return;
> }
> + spin_unlock(&nlm_blocked_lock);
> }
>
> /*
> @@ -152,6 +154,7 @@ nlmsvc_lookup_block(struct nlm_file *file, struct nlm_lock *lock)
> file, lock->fl.fl_pid,
> (long long)lock->fl.fl_start,
> (long long)lock->fl.fl_end, lock->fl.fl_type);
> + spin_lock(&nlm_blocked_lock);
> list_for_each_entry(block, &nlm_blocked, b_list) {
> fl = &block->b_call->a_args.lock.fl;
> dprintk("lockd: check f=%p pd=%d %Ld-%Ld ty=%d cookie=%s\n",
> @@ -161,9 +164,11 @@ nlmsvc_lookup_block(struct nlm_file *file, struct nlm_lock *lock)
> nlmdbg_cookie2a(&block->b_call->a_args.cookie));
> if (block->b_file == file && nlm_compare_locks(fl, &lock->fl)) {
> kref_get(&block->b_count);
> + spin_unlock(&nlm_blocked_lock);
> return block;
> }
> }
> + spin_unlock(&nlm_blocked_lock);
>
> return NULL;
> }
> @@ -185,16 +190,19 @@ nlmsvc_find_block(struct nlm_cookie *cookie)
> {
> struct nlm_block *block;
>
> + spin_lock(&nlm_blocked_lock);
> list_for_each_entry(block, &nlm_blocked, b_list) {
> if (nlm_cookie_match(&block->b_call->a_args.cookie,cookie))
> goto found;
> }
> + spin_unlock(&nlm_blocked_lock);
>
> return NULL;
>
> found:
> dprintk("nlmsvc_find_block(%s): block=%p\n", nlmdbg_cookie2a(cookie), block);
> kref_get(&block->b_count);
> + spin_unlock(&nlm_blocked_lock);
> return block;
> }
>
> @@ -317,6 +325,7 @@ void nlmsvc_traverse_blocks(struct nlm_host *host,
>
> restart:
> mutex_lock(&file->f_mutex);
> + spin_lock(&nlm_blocked_lock);
> list_for_each_entry_safe(block, next, &file->f_blocks, b_flist) {
> if (!match(block->b_host, host))
> continue;
> @@ -325,11 +334,13 @@ void nlmsvc_traverse_blocks(struct nlm_host *host,
> if (list_empty(&block->b_list))
> continue;
> kref_get(&block->b_count);
> + spin_unlock(&nlm_blocked_lock);
> mutex_unlock(&file->f_mutex);
> nlmsvc_unlink_block(block);
> nlmsvc_release_block(block);
> goto restart;
> }
> + spin_unlock(&nlm_blocked_lock);
> mutex_unlock(&file->f_mutex);
> }
>
The patch itself looks correct. Walking these lists without holding the
lock is quite suspicious. Not sure about the stable designation here
though, unless you have a way to easily reproduce this.
Reviewed-by: Jeff Layton <jlayton@kernel.org>
* Re: [RFC v6.5-rc2 2/3] fs: lockd: fix race in async lock request handling
2023-07-20 12:58 ` [RFC v6.5-rc2 2/3] fs: lockd: fix race in async lock request handling Alexander Aring
2023-07-21 13:09 ` Alexander Aring
@ 2023-07-21 15:45 ` Jeff Layton
2023-08-10 20:37 ` Alexander Aring
1 sibling, 1 reply; 12+ messages in thread
From: Jeff Layton @ 2023-07-21 15:45 UTC (permalink / raw)
To: Alexander Aring, chuck.lever
Cc: neilb, kolga, Dai.Ngo, tom, trond.myklebust, anna, linux-nfs,
teigland, cluster-devel, agruenba
On Thu, 2023-07-20 at 08:58 -0400, Alexander Aring wrote:
> This patch fixes a race in async lock request handling between adding
> the relevant struct nlm_block to the nlm_blocked list after the request
> was sent by vfs_lock_file() and nlmsvc_grant_deferred() looking up that
> nlm_block in the nlm_blocked list. The async request can complete
> before the nlm_block has been added to the list. This would end in
> -ENOENT and a kernel log message of "lockd: grant for unknown block".
>
> To solve this issue we add the nlm_block before the vfs_lock_file()
> call, to be sure it is already on the list when a possible
> nlmsvc_grant_deferred() is called. If vfs_lock_file() returns a result
> for which the block should not stay on the nlm_blocked list, the
> nlm_block is removed from the list again.
>
> Signed-off-by: Alexander Aring <aahringo@redhat.com>
> ---
> fs/lockd/svclock.c | 80 +++++++++++++++++++++++++++----------
> include/linux/lockd/lockd.h | 1 +
> 2 files changed, 60 insertions(+), 21 deletions(-)
>
> diff --git a/fs/lockd/svclock.c b/fs/lockd/svclock.c
> index 28abec5c451d..62ef27a69a9e 100644
> --- a/fs/lockd/svclock.c
> +++ b/fs/lockd/svclock.c
> @@ -297,6 +297,8 @@ static void nlmsvc_free_block(struct kref *kref)
>
> dprintk("lockd: freeing block %p...\n", block);
>
> + WARN_ON_ONCE(block->b_flags & B_PENDING_CALLBACK);
> +
> /* Remove block from file's list of blocks */
> list_del_init(&block->b_flist);
> mutex_unlock(&file->f_mutex);
> @@ -543,6 +545,12 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file,
> goto out;
> }
>
> + if (block->b_flags & B_PENDING_CALLBACK)
> + goto pending_request;
> +
> + /* Append to list of blocked */
> + nlmsvc_insert_block(block, NLM_NEVER);
> +
> if (!wait)
> lock->fl.fl_flags &= ~FL_SLEEP;
> mode = lock_to_openmode(&lock->fl);
> @@ -552,9 +560,13 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file,
> dprintk("lockd: vfs_lock_file returned %d\n", error);
> switch (error) {
> case 0:
> + nlmsvc_remove_block(block);
> ret = nlm_granted;
> goto out;
> case -EAGAIN:
> + if (!wait)
> + nlmsvc_remove_block(block);
> +pending_request:
> /*
> * If this is a blocking request for an
> * already pending lock request then we need
> @@ -565,6 +577,8 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file,
> ret = async_block ? nlm_lck_blocked : nlm_lck_denied;
> goto out;
> case FILE_LOCK_DEFERRED:
> + block->b_flags |= B_PENDING_CALLBACK;
> +
> if (wait)
> break;
> /* Filesystem lock operation is in progress
> @@ -572,17 +586,16 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file,
> ret = nlmsvc_defer_lock_rqst(rqstp, block);
When the above function is called, it's going to end up reinserting the
block into the list. I think you probably also need to remove the call
to nlmsvc_insert_block from nlmsvc_defer_lock_rqst since it could have
been granted before that occurs.
> goto out;
> case -EDEADLK:
> + nlmsvc_remove_block(block);
> ret = nlm_deadlock;
> goto out;
> default: /* includes ENOLCK */
> + nlmsvc_remove_block(block);
> ret = nlm_lck_denied_nolocks;
> goto out;
> }
>
> ret = nlm_lck_blocked;
> -
> - /* Append to list of blocked */
> - nlmsvc_insert_block(block, NLM_NEVER);
> out:
> mutex_unlock(&file->f_mutex);
> nlmsvc_release_block(block);
> @@ -739,34 +752,59 @@ nlmsvc_update_deferred_block(struct nlm_block *block, int result)
> block->b_flags |= B_TIMED_OUT;
> }
>
> +static int __nlmsvc_grant_deferred(struct nlm_block *block,
> + struct file_lock *fl,
> + int result)
> +{
> + int rc = 0;
> +
> + dprintk("lockd: nlmsvc_notify_blocked block %p flags %d\n",
> + block, block->b_flags);
> + if (block->b_flags & B_QUEUED) {
> + if (block->b_flags & B_TIMED_OUT) {
> + rc = -ENOLCK;
> + goto out;
> + }
> + nlmsvc_update_deferred_block(block, result);
> + } else if (result == 0)
> + block->b_granted = 1;
> +
> + nlmsvc_insert_block_locked(block, 0);
> + svc_wake_up(block->b_daemon);
> +out:
> + return rc;
> +}
> +
> static int nlmsvc_grant_deferred(struct file_lock *fl, int result)
> {
> - struct nlm_block *block;
> - int rc = -ENOENT;
> + struct nlm_block *block = NULL;
> + int rc;
>
> spin_lock(&nlm_blocked_lock);
> list_for_each_entry(block, &nlm_blocked, b_list) {
> if (nlm_compare_locks(&block->b_call->a_args.lock.fl, fl)) {
> - dprintk("lockd: nlmsvc_notify_blocked block %p flags %d\n",
> - block, block->b_flags);
> - if (block->b_flags & B_QUEUED) {
> - if (block->b_flags & B_TIMED_OUT) {
> - rc = -ENOLCK;
> - break;
> - }
> - nlmsvc_update_deferred_block(block, result);
> - } else if (result == 0)
> - block->b_granted = 1;
> -
> - nlmsvc_insert_block_locked(block, 0);
> - svc_wake_up(block->b_daemon);
> - rc = 0;
> + kref_get(&block->b_count);
> break;
> }
> }
> spin_unlock(&nlm_blocked_lock);
> - if (rc == -ENOENT)
> - printk(KERN_WARNING "lockd: grant for unknown block\n");
> +
> + if (!block) {
> + pr_warn("lockd: grant for unknown pending block\n");
> + return -ENOENT;
> + }
> +
> + /* don't interfere with nlmsvc_lock() */
> + mutex_lock(&block->b_file->f_mutex);
This is called from lm_grant, and Documentation/filesystems/locking.rst
says that lm_grant is not allowed to block. The only caller though is
dlm_plock_callback, and I don't see anything that would prevent
blocking.
Do we need to fix the documentation there?
> + block->b_flags &= ~B_PENDING_CALLBACK;
> +
You're adding this new flag when the lock is deferred and then clearing
it when the lock is granted. What about when the lock request is
cancelled (e.g. by signal)? It seems like you also need to clear it then
too, correct?
> + spin_lock(&nlm_blocked_lock);
> + WARN_ON_ONCE(list_empty(&block->b_list));
> + rc = __nlmsvc_grant_deferred(block, fl, result);
> + spin_unlock(&nlm_blocked_lock);
> + mutex_unlock(&block->b_file->f_mutex);
> +
> + nlmsvc_release_block(block);
> return rc;
> }
>
> diff --git a/include/linux/lockd/lockd.h b/include/linux/lockd/lockd.h
> index f42594a9efe0..a977be8bcc2c 100644
> --- a/include/linux/lockd/lockd.h
> +++ b/include/linux/lockd/lockd.h
> @@ -189,6 +189,7 @@ struct nlm_block {
> #define B_QUEUED 1 /* lock queued */
> #define B_GOT_CALLBACK 2 /* got lock or conflicting lock */
> #define B_TIMED_OUT 4 /* filesystem too slow to respond */
> +#define B_PENDING_CALLBACK 8 /* pending callback for lock request */
> };
>
> /*
--
Jeff Layton <jlayton@kernel.org>
* Re: [RFC v6.5-rc2 2/3] fs: lockd: fix race in async lock request handling
2023-07-21 13:09 ` Alexander Aring
@ 2023-07-21 16:43 ` Jeff Layton
2023-08-10 21:00 ` Alexander Aring
0 siblings, 1 reply; 12+ messages in thread
From: Jeff Layton @ 2023-07-21 16:43 UTC (permalink / raw)
To: Alexander Aring, chuck.lever
Cc: neilb, kolga, Dai.Ngo, tom, trond.myklebust, anna, linux-nfs,
teigland, cluster-devel, agruenba
On Fri, 2023-07-21 at 09:09 -0400, Alexander Aring wrote:
> Hi,
>
> On Thu, Jul 20, 2023 at 8:58 AM Alexander Aring <aahringo@redhat.com> wrote:
> >
> > This patch fixes a race in async lock request handling between adding
> > the relevant struct nlm_block to the nlm_blocked list after the request
> > was sent by vfs_lock_file() and nlmsvc_grant_deferred() looking up that
> > nlm_block in the nlm_blocked list. The async request can complete
> > before the nlm_block has been added to the list. This would end in
> > -ENOENT and a kernel log message of "lockd: grant for unknown block".
> >
> > To solve this issue we add the nlm_block before the vfs_lock_file()
> > call, to be sure it is already on the list when a possible
> > nlmsvc_grant_deferred() is called. If vfs_lock_file() returns a result
> > for which the block should not stay on the nlm_blocked list, the
> > nlm_block is removed from the list again.
> >
> > Signed-off-by: Alexander Aring <aahringo@redhat.com>
> > ---
> > fs/lockd/svclock.c | 80 +++++++++++++++++++++++++++----------
> > include/linux/lockd/lockd.h | 1 +
> > 2 files changed, 60 insertions(+), 21 deletions(-)
> >
> > diff --git a/fs/lockd/svclock.c b/fs/lockd/svclock.c
> > index 28abec5c451d..62ef27a69a9e 100644
> > --- a/fs/lockd/svclock.c
> > +++ b/fs/lockd/svclock.c
> > @@ -297,6 +297,8 @@ static void nlmsvc_free_block(struct kref *kref)
> >
> > dprintk("lockd: freeing block %p...\n", block);
> >
> > + WARN_ON_ONCE(block->b_flags & B_PENDING_CALLBACK);
> > +
> > /* Remove block from file's list of blocks */
> > list_del_init(&block->b_flist);
> > mutex_unlock(&file->f_mutex);
> > @@ -543,6 +545,12 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file,
> > goto out;
> > }
> >
> > + if (block->b_flags & B_PENDING_CALLBACK)
> > + goto pending_request;
> > +
> > + /* Append to list of blocked */
> > + nlmsvc_insert_block(block, NLM_NEVER);
> > +
> > if (!wait)
> > lock->fl.fl_flags &= ~FL_SLEEP;
> > mode = lock_to_openmode(&lock->fl);
> > @@ -552,9 +560,13 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file,
> > dprintk("lockd: vfs_lock_file returned %d\n", error);
> > switch (error) {
> > case 0:
> > + nlmsvc_remove_block(block);
>
> Reacting here with nlmsvc_remove_block() assumes that the block had not
> already been added to the nlm_blocked list before the
> nlmsvc_insert_block() call above. I am not sure that is always the case
> here.
>
> Does somebody see a problem with that?
>
The scenario is: we have a block on the list already and now another
lock request comes in for the same thing: the client decided to just re-
poll for the lock. That's a plausible scenario. I think the Linux NLM
client will poll for locks periodically.
In this case though, the lock request was granted by the filesystem, so
this is likely racing with (and winning vs.) a lm_grant callback. Given
that the client decided to repoll for it, we're probably safe to just
dequeue the block and respond here, and not worry about sending a grant
callback.
Ditto for the other cases where the block is removed.
> > ret = nlm_granted;
> > goto out;
> > case -EAGAIN:
> > + if (!wait)
> > + nlmsvc_remove_block(block);
I was thinking that it would be best to not insert a block at all in the
!wait case, but it looks like DLM just returns DEFERRED and almost
always does a callback, even when it's not a blocking lock request?
Anyway, I think we probably do have to handle this like you are.
> > +pending_request:
> > /*
> > * If this is a blocking request for an
> > * already pending lock request then we need
> > @@ -565,6 +577,8 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file,
> > ret = async_block ? nlm_lck_blocked : nlm_lck_denied;
> > goto out;
> > case FILE_LOCK_DEFERRED:
> > + block->b_flags |= B_PENDING_CALLBACK;
> > +
> > if (wait)
> > break;
> > /* Filesystem lock operation is in progress
> > @@ -572,17 +586,16 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file,
> > ret = nlmsvc_defer_lock_rqst(rqstp, block);
> > goto out;
> > case -EDEADLK:
> > + nlmsvc_remove_block(block);
> > ret = nlm_deadlock;
> > goto out;
> > default: /* includes ENOLCK */
> > + nlmsvc_remove_block(block);
> > ret = nlm_lck_denied_nolocks;
> > goto out;
> > }
> >
> > ret = nlm_lck_blocked;
> > -
> > - /* Append to list of blocked */
> > - nlmsvc_insert_block(block, NLM_NEVER);
> > out:
> > mutex_unlock(&file->f_mutex);
> > nlmsvc_release_block(block);
> > @@ -739,34 +752,59 @@ nlmsvc_update_deferred_block(struct nlm_block *block, int result)
> > block->b_flags |= B_TIMED_OUT;
> > }
>
> - Alex
>
--
Jeff Layton <jlayton@kernel.org>
* Re: [RFC v6.5-rc2 1/3] fs: lockd: nlm_blocked list race fixes
2023-07-20 12:58 [RFC v6.5-rc2 1/3] fs: lockd: nlm_blocked list race fixes Alexander Aring
` (2 preceding siblings ...)
2023-07-21 15:14 ` [RFC v6.5-rc2 1/3] fs: lockd: nlm_blocked list race fixes Jeff Layton
@ 2023-07-21 16:59 ` Chuck Lever
3 siblings, 0 replies; 12+ messages in thread
From: Chuck Lever @ 2023-07-21 16:59 UTC (permalink / raw)
To: Alexander Aring
Cc: chuck.lever, jlayton, neilb, kolga, Dai.Ngo, tom, trond.myklebust,
anna, linux-nfs, teigland, cluster-devel, agruenba
On Thu, Jul 20, 2023 at 08:58:04AM -0400, Alexander Aring wrote:
> This patch fixes races when lockd accesses the global nlm_blocked list.
> Accessing the list used to be mostly safe because everything ran in the
> lockd kernel thread context, but there are cases like
> nlmsvc_grant_deferred() that manipulate the nlm_blocked list and can be
> called from any context.
>
> Cc: stable@vger.kernel.org
> Signed-off-by: Alexander Aring <aahringo@redhat.com>
I agree with Jeff, this one looks fine to apply to nfsd-next. I've done
that so it can get test exposure while we consider 2/3 and 3/3.
I've dropped the "Cc: stable" tag -- since there is no specific bug
report this fix addresses, I will defer the decision about backporting
at least until we have some test experience.
> ---
> fs/lockd/svclock.c | 13 ++++++++++++-
> 1 file changed, 12 insertions(+), 1 deletion(-)
>
> diff --git a/fs/lockd/svclock.c b/fs/lockd/svclock.c
> index c43ccdf28ed9..28abec5c451d 100644
> --- a/fs/lockd/svclock.c
> +++ b/fs/lockd/svclock.c
> @@ -131,12 +131,14 @@ static void nlmsvc_insert_block(struct nlm_block *block, unsigned long when)
> static inline void
> nlmsvc_remove_block(struct nlm_block *block)
> {
> + spin_lock(&nlm_blocked_lock);
> if (!list_empty(&block->b_list)) {
> - spin_lock(&nlm_blocked_lock);
> list_del_init(&block->b_list);
> spin_unlock(&nlm_blocked_lock);
> nlmsvc_release_block(block);
> + return;
> }
> + spin_unlock(&nlm_blocked_lock);
> }
>
> /*
> @@ -152,6 +154,7 @@ nlmsvc_lookup_block(struct nlm_file *file, struct nlm_lock *lock)
> file, lock->fl.fl_pid,
> (long long)lock->fl.fl_start,
> (long long)lock->fl.fl_end, lock->fl.fl_type);
> + spin_lock(&nlm_blocked_lock);
> list_for_each_entry(block, &nlm_blocked, b_list) {
> fl = &block->b_call->a_args.lock.fl;
> dprintk("lockd: check f=%p pd=%d %Ld-%Ld ty=%d cookie=%s\n",
> @@ -161,9 +164,11 @@ nlmsvc_lookup_block(struct nlm_file *file, struct nlm_lock *lock)
> nlmdbg_cookie2a(&block->b_call->a_args.cookie));
> if (block->b_file == file && nlm_compare_locks(fl, &lock->fl)) {
> kref_get(&block->b_count);
> + spin_unlock(&nlm_blocked_lock);
> return block;
> }
> }
> + spin_unlock(&nlm_blocked_lock);
>
> return NULL;
> }
> @@ -185,16 +190,19 @@ nlmsvc_find_block(struct nlm_cookie *cookie)
> {
> struct nlm_block *block;
>
> + spin_lock(&nlm_blocked_lock);
> list_for_each_entry(block, &nlm_blocked, b_list) {
> if (nlm_cookie_match(&block->b_call->a_args.cookie,cookie))
> goto found;
> }
> + spin_unlock(&nlm_blocked_lock);
>
> return NULL;
>
> found:
> dprintk("nlmsvc_find_block(%s): block=%p\n", nlmdbg_cookie2a(cookie), block);
> kref_get(&block->b_count);
> + spin_unlock(&nlm_blocked_lock);
> return block;
> }
>
> @@ -317,6 +325,7 @@ void nlmsvc_traverse_blocks(struct nlm_host *host,
>
> restart:
> mutex_lock(&file->f_mutex);
> + spin_lock(&nlm_blocked_lock);
> list_for_each_entry_safe(block, next, &file->f_blocks, b_flist) {
> if (!match(block->b_host, host))
> continue;
> @@ -325,11 +334,13 @@ void nlmsvc_traverse_blocks(struct nlm_host *host,
> if (list_empty(&block->b_list))
> continue;
> kref_get(&block->b_count);
> + spin_unlock(&nlm_blocked_lock);
> mutex_unlock(&file->f_mutex);
> nlmsvc_unlink_block(block);
> nlmsvc_release_block(block);
> goto restart;
> }
> + spin_unlock(&nlm_blocked_lock);
> mutex_unlock(&file->f_mutex);
> }
>
> --
> 2.31.1
>
* Re: [RFC v6.5-rc2 3/3] fs: lockd: introduce safe async lock op
2023-07-20 12:58 ` [RFC v6.5-rc2 3/3] fs: lockd: introduce safe async lock op Alexander Aring
@ 2023-07-21 17:46 ` Jeff Layton
2023-08-10 20:24 ` Alexander Aring
0 siblings, 1 reply; 12+ messages in thread
From: Jeff Layton @ 2023-07-21 17:46 UTC (permalink / raw)
To: Alexander Aring, chuck.lever
Cc: neilb, kolga, Dai.Ngo, tom, trond.myklebust, anna, linux-nfs,
teigland, cluster-devel, agruenba
On Thu, 2023-07-20 at 08:58 -0400, Alexander Aring wrote:
> This patch mostly reverts commit 40595cdc93ed ("nfs: block notification
> on fs with its own ->lock") and introduces an EXPORT_OP_SAFE_ASYNC_LOCK
> export flag to signal that the "own ->lock" implementation supports
> async lock requests. The main user is DLM, which is used by the GFS2
> and OCFS2 filesystems. Those provide their own ->lock() implementation
> and return FILE_LOCK_DEFERRED. Since commit 40595cdc93ed ("nfs: block
> notification on fs with its own ->lock") the DLM implementation was
> never updated. This patch prepares for DLM to set the
> EXPORT_OP_SAFE_ASYNC_LOCK export flag and for updating the DLM plock
> implementation accordingly.
>
> Signed-off-by: Alexander Aring <aahringo@redhat.com>
> ---
> fs/lockd/svclock.c | 5 ++---
> fs/nfsd/nfs4state.c | 11 ++++++++---
> include/linux/exportfs.h | 1 +
> 3 files changed, 11 insertions(+), 6 deletions(-)
>
> diff --git a/fs/lockd/svclock.c b/fs/lockd/svclock.c
> index 62ef27a69a9e..54a67bd33843 100644
> --- a/fs/lockd/svclock.c
> +++ b/fs/lockd/svclock.c
> @@ -483,9 +483,7 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file,
> struct nlm_host *host, struct nlm_lock *lock, int wait,
> struct nlm_cookie *cookie, int reclaim)
> {
> -#if IS_ENABLED(CONFIG_SUNRPC_DEBUG)
> struct inode *inode = nlmsvc_file_inode(file);
> -#endif
> struct nlm_block *block = NULL;
> int error;
> int mode;
> @@ -499,7 +497,8 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file,
> (long long)lock->fl.fl_end,
> wait);
>
> - if (nlmsvc_file_file(file)->f_op->lock) {
> + if (!(inode->i_sb->s_export_op->flags & EXPORT_OP_SAFE_ASYNC_LOCK) &&
> + nlmsvc_file_file(file)->f_op->lock) {
> async_block = wait;
> wait = 0;
> }
> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> index 6e61fa3acaf1..efcea229d640 100644
> --- a/fs/nfsd/nfs4state.c
> +++ b/fs/nfsd/nfs4state.c
> @@ -7432,6 +7432,7 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
> struct nfsd4_blocked_lock *nbl = NULL;
> struct file_lock *file_lock = NULL;
> struct file_lock *conflock = NULL;
> + struct super_block *sb;
> __be32 status = 0;
> int lkflg;
> int err;
> @@ -7453,6 +7454,7 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
> dprintk("NFSD: nfsd4_lock: permission denied!\n");
> return status;
> }
> + sb = cstate->current_fh.fh_dentry->d_sb;
>
> if (lock->lk_is_new) {
> if (nfsd4_has_session(cstate))
> @@ -7504,7 +7506,8 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
> fp = lock_stp->st_stid.sc_file;
> switch (lock->lk_type) {
> case NFS4_READW_LT:
> - if (nfsd4_has_session(cstate))
> + if (sb->s_export_op->flags & EXPORT_OP_SAFE_ASYNC_LOCK &&
This will break existing filesystems that don't set the new flag. Maybe
you also need to test for the filesystem's ->lock operation here too?
This might be more nicely expressed in a helper function.
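Something like the following, perhaps (untested sketch, the name is
arbitrary, and it assumes the struct file is available at the call
sites):

	static inline bool export_op_safe_async_lock(struct super_block *sb,
						     struct file *filp)
	{
		/* no fs-specific ->lock: generic posix locking blocks safely */
		if (!filp->f_op->lock)
			return true;
		return sb->s_export_op->flags & EXPORT_OP_SAFE_ASYNC_LOCK;
	}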
> + nfsd4_has_session(cstate))
> fl_flags |= FL_SLEEP;
> fallthrough;
> case NFS4_READ_LT:
> @@ -7516,7 +7519,8 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
> fl_type = F_RDLCK;
> break;
> case NFS4_WRITEW_LT:
> - if (nfsd4_has_session(cstate))
> + if (sb->s_export_op->flags & EXPORT_OP_SAFE_ASYNC_LOCK &&
> + nfsd4_has_session(cstate))
> fl_flags |= FL_SLEEP;
> fallthrough;
> case NFS4_WRITE_LT:
> @@ -7544,7 +7548,8 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
> * for file locks), so don't attempt blocking lock notifications
> * on those filesystems:
> */
> - if (nf->nf_file->f_op->lock)
> + if (!(sb->s_export_op->flags & EXPORT_OP_SAFE_ASYNC_LOCK) &&
> + nf->nf_file->f_op->lock)
> fl_flags &= ~FL_SLEEP;
>
> nbl = find_or_allocate_block(lock_sop, &fp->fi_fhandle, nn);
> diff --git a/include/linux/exportfs.h b/include/linux/exportfs.h
> index 11fbd0ee1370..da742abbaf3e 100644
> --- a/include/linux/exportfs.h
> +++ b/include/linux/exportfs.h
> @@ -224,6 +224,7 @@ struct export_operations {
> atomic attribute updates
> */
> #define EXPORT_OP_FLUSH_ON_CLOSE (0x20) /* fs flushes file data on close */
> +#define EXPORT_OP_SAFE_ASYNC_LOCK (0x40) /* fs can do async lock request */
> unsigned long flags;
> };
>
--
Jeff Layton <jlayton@kernel.org>
* Re: [RFC v6.5-rc2 3/3] fs: lockd: introduce safe async lock op
2023-07-21 17:46 ` Jeff Layton
@ 2023-08-10 20:24 ` Alexander Aring
0 siblings, 0 replies; 12+ messages in thread
From: Alexander Aring @ 2023-08-10 20:24 UTC (permalink / raw)
To: Jeff Layton
Cc: chuck.lever, neilb, kolga, Dai.Ngo, tom, trond.myklebust, anna,
linux-nfs, teigland, cluster-devel, agruenba
Hi,
On Fri, Jul 21, 2023 at 1:46 PM Jeff Layton <jlayton@kernel.org> wrote:
>
> On Thu, 2023-07-20 at 08:58 -0400, Alexander Aring wrote:
> > This patch mostly reverts commit 40595cdc93ed ("nfs: block notification
> > on fs with its own ->lock") and introduces an EXPORT_OP_SAFE_ASYNC_LOCK
> > export flag to signal that the "own ->lock" implementation supports
> > async lock requests. The main user is DLM, which is used by the GFS2
> > and OCFS2 filesystems. Those provide their own ->lock() implementation
> > and return FILE_LOCK_DEFERRED. Since commit 40595cdc93ed ("nfs: block
> > notification on fs with its own ->lock") the DLM implementation was
> > never updated. This patch prepares for DLM to set the
> > EXPORT_OP_SAFE_ASYNC_LOCK export flag and for updating the DLM plock
> > implementation accordingly.
> >
> > Signed-off-by: Alexander Aring <aahringo@redhat.com>
> > ---
> > fs/lockd/svclock.c | 5 ++---
> > fs/nfsd/nfs4state.c | 11 ++++++++---
> > include/linux/exportfs.h | 1 +
> > 3 files changed, 11 insertions(+), 6 deletions(-)
> >
> > diff --git a/fs/lockd/svclock.c b/fs/lockd/svclock.c
> > index 62ef27a69a9e..54a67bd33843 100644
> > --- a/fs/lockd/svclock.c
> > +++ b/fs/lockd/svclock.c
> > @@ -483,9 +483,7 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file,
> > struct nlm_host *host, struct nlm_lock *lock, int wait,
> > struct nlm_cookie *cookie, int reclaim)
> > {
> > -#if IS_ENABLED(CONFIG_SUNRPC_DEBUG)
> > struct inode *inode = nlmsvc_file_inode(file);
> > -#endif
> > struct nlm_block *block = NULL;
> > int error;
> > int mode;
> > @@ -499,7 +497,8 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file,
> > (long long)lock->fl.fl_end,
> > wait);
> >
> > - if (nlmsvc_file_file(file)->f_op->lock) {
> > + if (!(inode->i_sb->s_export_op->flags & EXPORT_OP_SAFE_ASYNC_LOCK) &&
> > + nlmsvc_file_file(file)->f_op->lock) {
> > async_block = wait;
> > wait = 0;
> > }
> > diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> > index 6e61fa3acaf1..efcea229d640 100644
> > --- a/fs/nfsd/nfs4state.c
> > +++ b/fs/nfsd/nfs4state.c
> > @@ -7432,6 +7432,7 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
> > struct nfsd4_blocked_lock *nbl = NULL;
> > struct file_lock *file_lock = NULL;
> > struct file_lock *conflock = NULL;
> > + struct super_block *sb;
> > __be32 status = 0;
> > int lkflg;
> > int err;
> > @@ -7453,6 +7454,7 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
> > dprintk("NFSD: nfsd4_lock: permission denied!\n");
> > return status;
> > }
> > + sb = cstate->current_fh.fh_dentry->d_sb;
> >
> > if (lock->lk_is_new) {
> > if (nfsd4_has_session(cstate))
> > @@ -7504,7 +7506,8 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
> > fp = lock_stp->st_stid.sc_file;
> > switch (lock->lk_type) {
> > case NFS4_READW_LT:
> > - if (nfsd4_has_session(cstate))
> > + if (sb->s_export_op->flags & EXPORT_OP_SAFE_ASYNC_LOCK &&
>
> This will break existing filesystems that don't set the new flag. Maybe
> you also need to test for the filesystem's ->lock operation here too?
>
yes.
> This might be more nicely expressed in a helper function.
ok.
- Alex
* Re: [RFC v6.5-rc2 2/3] fs: lockd: fix race in async lock request handling
2023-07-21 15:45 ` Jeff Layton
@ 2023-08-10 20:37 ` Alexander Aring
0 siblings, 0 replies; 12+ messages in thread
From: Alexander Aring @ 2023-08-10 20:37 UTC (permalink / raw)
To: Jeff Layton
Cc: chuck.lever, neilb, kolga, Dai.Ngo, tom, trond.myklebust, anna,
linux-nfs, teigland, cluster-devel, agruenba
Hi,
On Fri, Jul 21, 2023 at 11:45 AM Jeff Layton <jlayton@kernel.org> wrote:
>
> On Thu, 2023-07-20 at 08:58 -0400, Alexander Aring wrote:
> > This patch fixes a race in async lock request handling between adding
> > the relevant struct nlm_block to the nlm_blocked list after the request
> > was sent by vfs_lock_file() and nlmsvc_grant_deferred() looking up that
> > nlm_block in the nlm_blocked list. The async request can complete
> > before the nlm_block has been added to the list. This would end in
> > -ENOENT and a kernel log message of "lockd: grant for unknown block".
> >
> > To solve this issue we add the nlm_block before the vfs_lock_file()
> > call, to be sure it is already on the list when a possible
> > nlmsvc_grant_deferred() is called. If vfs_lock_file() returns a result
> > for which the block should not stay on the nlm_blocked list, the
> > nlm_block is removed from the list again.
> >
> > Signed-off-by: Alexander Aring <aahringo@redhat.com>
> > ---
> > fs/lockd/svclock.c | 80 +++++++++++++++++++++++++++----------
> > include/linux/lockd/lockd.h | 1 +
> > 2 files changed, 60 insertions(+), 21 deletions(-)
> >
> > diff --git a/fs/lockd/svclock.c b/fs/lockd/svclock.c
> > index 28abec5c451d..62ef27a69a9e 100644
> > --- a/fs/lockd/svclock.c
> > +++ b/fs/lockd/svclock.c
> > @@ -297,6 +297,8 @@ static void nlmsvc_free_block(struct kref *kref)
> >
> > dprintk("lockd: freeing block %p...\n", block);
> >
> > + WARN_ON_ONCE(block->b_flags & B_PENDING_CALLBACK);
> > +
> > /* Remove block from file's list of blocks */
> > list_del_init(&block->b_flist);
> > mutex_unlock(&file->f_mutex);
> > @@ -543,6 +545,12 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file,
> > goto out;
> > }
> >
> > + if (block->b_flags & B_PENDING_CALLBACK)
> > + goto pending_request;
> > +
> > + /* Append to list of blocked */
> > + nlmsvc_insert_block(block, NLM_NEVER);
> > +
> > if (!wait)
> > lock->fl.fl_flags &= ~FL_SLEEP;
> > mode = lock_to_openmode(&lock->fl);
> > @@ -552,9 +560,13 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file,
> > dprintk("lockd: vfs_lock_file returned %d\n", error);
> > switch (error) {
> > case 0:
> > + nlmsvc_remove_block(block);
> > ret = nlm_granted;
> > goto out;
> > case -EAGAIN:
> > + if (!wait)
> > + nlmsvc_remove_block(block);
> > +pending_request:
> > /*
> > * If this is a blocking request for an
> > * already pending lock request then we need
> > @@ -565,6 +577,8 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file,
> > ret = async_block ? nlm_lck_blocked : nlm_lck_denied;
> > goto out;
> > case FILE_LOCK_DEFERRED:
> > + block->b_flags |= B_PENDING_CALLBACK;
> > +
> > if (wait)
> > break;
> > /* Filesystem lock operation is in progress
> > @@ -572,17 +586,16 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file,
> > ret = nlmsvc_defer_lock_rqst(rqstp, block);
>
> When the above function is called, it's going to end up reinserting the
> block into the list. I think you probably also need to remove the call
> to nlmsvc_insert_block from nlmsvc_defer_lock_rqst since it could have
> been granted before that occurs.
>
It cannot be granted during this time because f_mutex is held. We
insert the block up front so that the lookup works even when lm_grant()
comes back really fast. lm_grant() will then look up the block and take
f_mutex itself, so it only runs when nobody else is in this critical
area (on a per nlm_file basis).
There is a difference between inserting with NLM_NEVER and NLM_TIMEOUT:
when nlmsvc_defer_lock_rqst() runs, it will just update the timeout
value. I am not sure about the consequences of doing the
nlmsvc_insert_block() with NLM_NEVER instead of NLM_TIMEOUT. But as I
said, it should not be possible to grant the block while f_mutex is
held.
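Roughly, the serialization this relies on (simplified sketch):

	nlmsvc_lock()                       lm_grant() -> nlmsvc_grant_deferred()
	  mutex_lock(&file->f_mutex)
	  nlmsvc_insert_block(block, ...)
	  vfs_lock_file()                     (callback may already fire here)
	  ...                                 mutex_lock(&block->b_file->f_mutex)
	  mutex_unlock(&file->f_mutex)          -> waits until nlmsvc_lock() is done
	                                      block is found on nlm_blocked, granted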
> > goto out;
> > case -EDEADLK:
> > + nlmsvc_remove_block(block);
> > ret = nlm_deadlock;
> > goto out;
> > default: /* includes ENOLCK */
> > + nlmsvc_remove_block(block);
> > ret = nlm_lck_denied_nolocks;
> > goto out;
> > }
> >
> > ret = nlm_lck_blocked;
> > -
> > - /* Append to list of blocked */
> > - nlmsvc_insert_block(block, NLM_NEVER);
> > out:
> > mutex_unlock(&file->f_mutex);
> > nlmsvc_release_block(block);
> > @@ -739,34 +752,59 @@ nlmsvc_update_deferred_block(struct nlm_block *block, int result)
> > block->b_flags |= B_TIMED_OUT;
> > }
> >
> > +static int __nlmsvc_grant_deferred(struct nlm_block *block,
> > + struct file_lock *fl,
> > + int result)
> > +{
> > + int rc = 0;
> > +
> > + dprintk("lockd: nlmsvc_notify_blocked block %p flags %d\n",
> > + block, block->b_flags);
> > + if (block->b_flags & B_QUEUED) {
> > + if (block->b_flags & B_TIMED_OUT) {
> > + rc = -ENOLCK;
> > + goto out;
> > + }
> > + nlmsvc_update_deferred_block(block, result);
> > + } else if (result == 0)
> > + block->b_granted = 1;
> > +
> > + nlmsvc_insert_block_locked(block, 0);
> > + svc_wake_up(block->b_daemon);
> > +out:
> > + return rc;
> > +}
> > +
> > static int nlmsvc_grant_deferred(struct file_lock *fl, int result)
> > {
> > - struct nlm_block *block;
> > - int rc = -ENOENT;
> > + struct nlm_block *block = NULL;
> > + int rc;
> >
> > spin_lock(&nlm_blocked_lock);
> > list_for_each_entry(block, &nlm_blocked, b_list) {
> > if (nlm_compare_locks(&block->b_call->a_args.lock.fl, fl)) {
> > - dprintk("lockd: nlmsvc_notify_blocked block %p flags %d\n",
> > - block, block->b_flags);
> > - if (block->b_flags & B_QUEUED) {
> > - if (block->b_flags & B_TIMED_OUT) {
> > - rc = -ENOLCK;
> > - break;
> > - }
> > - nlmsvc_update_deferred_block(block, result);
> > - } else if (result == 0)
> > - block->b_granted = 1;
> > -
> > - nlmsvc_insert_block_locked(block, 0);
> > - svc_wake_up(block->b_daemon);
> > - rc = 0;
> > + kref_get(&block->b_count);
> > break;
> > }
> > }
> > spin_unlock(&nlm_blocked_lock);
> > - if (rc == -ENOENT)
> > - printk(KERN_WARNING "lockd: grant for unknown block\n");
> > +
> > + if (!block) {
> > + pr_warn("lockd: grant for unknown pending block\n");
> > + return -ENOENT;
> > + }
> > +
> > + /* don't interfere with nlmsvc_lock() */
> > + mutex_lock(&block->b_file->f_mutex);
>
>
> This is called from lm_grant, and Documentation/filesystems/locking.rst
> says that lm_grant is not allowed to block. The only caller though is
> dlm_plock_callback, and I don't see anything that would prevent
> blocking.
>
> Do we need to fix the documentation there?
>
You are right, and I think it should not call any API that can sleep.
However, DLM is the only upstream user and I have no better idea for
how to handle the current situation.
We should update the documentation, but stay open to making lm_grant()
non-sleeping again in the future?
>
> > + block->b_flags &= ~B_PENDING_CALLBACK;
> > +
>
> You're adding this new flag when the lock is deferred and then clearing
> it when the lock is granted. What about when the lock request is
> cancelled (e.g. by signal)? It seems like you also need to clear it then
> too, correct?
>
Correct. I will add code to clear it when the block is removed from
nlm_blocked in nlmsvc_remove_block().
> > + spin_lock(&nlm_blocked_lock);
> > + WARN_ON_ONCE(list_empty(&block->b_list));
> > + rc = __nlmsvc_grant_deferred(block, fl, result);
> > + spin_unlock(&nlm_blocked_lock);
> > + mutex_unlock(&block->b_file->f_mutex);
> > +
> > + nlmsvc_release_block(block);
> > return rc;
> > }
> >
> > diff --git a/include/linux/lockd/lockd.h b/include/linux/lockd/lockd.h
> > index f42594a9efe0..a977be8bcc2c 100644
> > --- a/include/linux/lockd/lockd.h
> > +++ b/include/linux/lockd/lockd.h
> > @@ -189,6 +189,7 @@ struct nlm_block {
> > #define B_QUEUED 1 /* lock queued */
> > #define B_GOT_CALLBACK 2 /* got lock or conflicting lock */
> > #define B_TIMED_OUT 4 /* filesystem too slow to respond */
> > +#define B_PENDING_CALLBACK 8 /* pending callback for lock request */
> > };
> >
> > /*
>
> --
> Jeff Layton <jlayton@kernel.org>
>
* Re: [RFC v6.5-rc2 2/3] fs: lockd: fix race in async lock request handling
2023-07-21 16:43 ` Jeff Layton
@ 2023-08-10 21:00 ` Alexander Aring
0 siblings, 0 replies; 12+ messages in thread
From: Alexander Aring @ 2023-08-10 21:00 UTC (permalink / raw)
To: Jeff Layton
Cc: chuck.lever, neilb, kolga, Dai.Ngo, tom, trond.myklebust, anna,
linux-nfs, teigland, cluster-devel, agruenba
Hi,
On Fri, Jul 21, 2023 at 12:43 PM Jeff Layton <jlayton@kernel.org> wrote:
>
> On Fri, 2023-07-21 at 09:09 -0400, Alexander Aring wrote:
> > Hi,
> >
> > On Thu, Jul 20, 2023 at 8:58 AM Alexander Aring <aahringo@redhat.com> wrote:
> > >
> > > This patch fixes a race in async lock request handling between adding
> > > the relevant struct nlm_block to the nlm_blocked list after the request
> > > was sent by vfs_lock_file() and nlmsvc_grant_deferred() looking up that
> > > nlm_block in the nlm_blocked list. The async request can complete
> > > before the nlm_block has been added to the list. This would end in
> > > -ENOENT and a kernel log message of "lockd: grant for unknown block".
> > >
> > > To solve this issue we add the nlm_block before the vfs_lock_file()
> > > call, to be sure it is already on the list when a possible
> > > nlmsvc_grant_deferred() is called. If vfs_lock_file() returns a result
> > > for which the block should not stay on the nlm_blocked list, the
> > > nlm_block is removed from the list again.
> > >
> > > Signed-off-by: Alexander Aring <aahringo@redhat.com>
> > > ---
> > > fs/lockd/svclock.c | 80 +++++++++++++++++++++++++++----------
> > > include/linux/lockd/lockd.h | 1 +
> > > 2 files changed, 60 insertions(+), 21 deletions(-)
> > >
> > > diff --git a/fs/lockd/svclock.c b/fs/lockd/svclock.c
> > > index 28abec5c451d..62ef27a69a9e 100644
> > > --- a/fs/lockd/svclock.c
> > > +++ b/fs/lockd/svclock.c
> > > @@ -297,6 +297,8 @@ static void nlmsvc_free_block(struct kref *kref)
> > >
> > > dprintk("lockd: freeing block %p...\n", block);
> > >
> > > + WARN_ON_ONCE(block->b_flags & B_PENDING_CALLBACK);
> > > +
> > > /* Remove block from file's list of blocks */
> > > list_del_init(&block->b_flist);
> > > mutex_unlock(&file->f_mutex);
> > > @@ -543,6 +545,12 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file,
> > > goto out;
> > > }
> > >
> > > + if (block->b_flags & B_PENDING_CALLBACK)
> > > + goto pending_request;
> > > +
> > > + /* Append to list of blocked */
> > > + nlmsvc_insert_block(block, NLM_NEVER);
> > > +
> > > if (!wait)
> > > lock->fl.fl_flags &= ~FL_SLEEP;
> > > mode = lock_to_openmode(&lock->fl);
> > > @@ -552,9 +560,13 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file,
> > > dprintk("lockd: vfs_lock_file returned %d\n", error);
> > > switch (error) {
> > > case 0:
> > > + nlmsvc_remove_block(block);
> >
> > Reacting here with nlmsvc_remove_block() assumes that the block had not
> > already been added to the nlm_blocked list before the
> > nlmsvc_insert_block() call above. I am not sure that is always the case
> > here.
> >
> > Does somebody see a problem with that?
> >
>
> The scenario is: we have a block on the list already and now another
> lock request comes in for the same thing: the client decided to just re-
> poll for the lock. That's a plausible scenario. I think the Linux NLM
> client will poll for locks periodically.
>
> In this case though, the lock request was granted by the filesystem, so
> this is likely racing with (and winning vs.) a lm_grant callback. Given
> that the client decided to repoll for it, we're probably safe to just
> dequeue the block and respond here, and not worry about sending a grant
> callback.
>
> Ditto for the other cases where the block is removed.
>
ok.
> > > ret = nlm_granted;
> > > goto out;
> > > case -EAGAIN:
> > > + if (!wait)
> > > + nlmsvc_remove_block(block);
>
> I was thinking that it would be best to not insert a block at all in the
> !wait case, but it looks like DLM just returns DEFERRED and almost
> always does a callback, even when it's not a blocking lock request?
>
> Anyway, I think we probably do have to handle this like you are.
>
I would prefer to handle !wait requests synchronously. We don't even
really want to defer them in DLM; it causes problems with cancellation,
because a cancel only does something (at least in DLM) when there is a
waiter still waiting for the lock request to be granted, which is only
the case for wait lock requests.
A !wait request is only a trylock; the answer should come back more or
less immediately, and it makes no sense to me to handle it
asynchronously, because we would have the same problems with
cancellation/unlock, which are not offered as asynchronous operations
either. We are somehow doing this async optimization for !wait lock
requests too, but operations like unlock are also issued by lockd and
are not handled asynchronously, so we probably don't care about this
optimization there; it looks different for wait lock requests.
We should update the documentation and do async lock requests only for
wait requests. Is this okay?
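In the filesystem's ->lock() implementation that would look roughly like
this (pseudo-code sketch, not the actual fs/dlm/plock.c code;
do_request_and_wait_for_result() is a made-up placeholder):

	if (!(fl->fl_flags & FL_SLEEP)) {
		/* trylock: answer synchronously, never FILE_LOCK_DEFERRED,
		 * so lm_grant() is only ever used for blocking requests
		 */
		return do_request_and_wait_for_result(fl);
	}
	return FILE_LOCK_DEFERRED;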
- Alex
Thread overview: 12+ messages
2023-07-20 12:58 [RFC v6.5-rc2 1/3] fs: lockd: nlm_blocked list race fixes Alexander Aring
2023-07-20 12:58 ` [RFC v6.5-rc2 2/3] fs: lockd: fix race in async lock request handling Alexander Aring
2023-07-21 13:09 ` Alexander Aring
2023-07-21 16:43 ` Jeff Layton
2023-08-10 21:00 ` Alexander Aring
2023-07-21 15:45 ` Jeff Layton
2023-08-10 20:37 ` Alexander Aring
2023-07-20 12:58 ` [RFC v6.5-rc2 3/3] fs: lockd: introduce safe async lock op Alexander Aring
2023-07-21 17:46 ` Jeff Layton
2023-08-10 20:24 ` Alexander Aring
2023-07-21 15:14 ` [RFC v6.5-rc2 1/3] fs: lockd: nlm_blocked list race fixes Jeff Layton
2023-07-21 16:59 ` Chuck Lever