* [PATCH RFC v9 0/2] nfsd: Initial implementation of NFSv4 Courteous Server
@ 2022-01-10 18:40 Dai Ngo
  2022-01-10 18:40 ` [PATCH RFC v9 1/2] fs/lock: add new callback, lm_expire_lock, to lock_manager_operations Dai Ngo
  2022-01-10 18:40 ` [PATCH RFC v9 2/2] nfsd: Initial implementation of NFSv4 Courteous Server Dai Ngo
  0 siblings, 2 replies; 14+ messages in thread
From: Dai Ngo @ 2022-01-10 18:40 UTC (permalink / raw)
To: bfields, chuck.lever; +Cc: jlayton, viro, linux-nfs, linux-fsdevel

Hi Bruce, Chuck,

This series of patches implements the NFSv4 Courteous Server.

A server which does not immediately expunge the state on lease expiration
is known as a Courteous Server. A Courteous Server continues to recognize
previously generated state tokens as valid until a conflict arises between
the expired state and the requests from another client, or the server
reboots.

The v2 patch includes the following:
. add a new callback, lm_expire_lock, to lock_manager_operations to allow
  the lock manager to take appropriate action on a conflicting lock.
. handle conflicts of NFSv4 locks with NFSv3/NLM and local locks.
. expire a courtesy client after 24 hours if the client has not reconnected.
. do not allow an expired client to become a courtesy client if there are
  waiters for the client's locks.
. modify client_info_show to show the courtesy client state and the seconds
  since the last renew.
. fix a problem with the NFSv4.1 server where it keeps returning
  SEQ4_STATUS_CB_PATH_DOWN in the successful SEQUENCE reply after the
  courtesy client re-connects, causing the client to keep sending BCTS
  requests to the server.

The v3 patch includes the following:
. modify posix_test_lock to check and resolve conflicting locks, to handle
  NLM TEST and NFSv4 LOCKT requests.
. separate out the fix for the back channel stuck in SEQ4_STATUS_CB_PATH_DOWN.

The v4 patch includes:
. rework nfsd_check_courtesy to avoid a deadlock between fl_lock and
  client_lock by asking the laundromat thread to destroy the courtesy
  client.
. handle NFSv4 share reservation conflicts with courtesy clients. This
  includes conflicts between access mode and deny mode and vice versa.
. drop the patch for the back channel stuck in SEQ4_STATUS_CB_PATH_DOWN.

The v5 patch includes:
. fix recursive locking of file_rwsem from posix_lock_file.
. retest with LOCKDEP enabled.

The v6 patch includes:
. merge with 5.15-rc7.
. fix a bug in nfs4_check_deny_bmap that did not check for a matching
  nfs4_file before checking for an access/deny conflict. This bug caused
  pynfs OPEN18 to fail because the server took too long to release the
  states of many non-conflicting clients.
. enhance the share reservation conflict handler to handle the case where
  a large number of conflicting courtesy clients need to be expired. The
  first 100 clients are expired synchronously and the rest are expired in
  the background by the laundromat, and NFS4ERR_DELAY is returned to the
  NFS client. This is needed to prevent the NFS client from timing out
  waiting for the reply.

The v7 patch includes:
. fix a race condition in posix_test_lock and posix_lock_inode after
  dropping the spinlock.
. enhance nfsd4_fl_expire_lock to work with the new lm_expire_lock callback.
. always resolve share reservation conflicts asynchronously.
. fix a bug in nfs4_laundromat where the spinlock was not held when
  scanning cl_ownerstr_hashtbl.
. fix a bug in nfs4_laundromat where idr_get_next was called with an
  incorrect 'id'.
. merge nfs4_destroy_courtesy_client into nfsd4_fl_expire_lock.

The v8 patch includes:
. fix a warning in nfsd4_fl_expire_lock reported by the test robot.

The v9 patch includes:
. simplify the lm_expire_lock API by (1) removing the 'testonly' flag and
  (2) specifying the return value as true/false to indicate whether the
  conflict was successfully resolved.
. rework nfsd4_fl_expire_lock to mark the client with
  NFSD4_DESTROY_COURTESY_CLIENT and then tell the laundromat to expire the
  client in the background.
. add a spinlock to nfs4_client to synchronize access to the
  NFSD4_COURTESY_CLIENT and NFSD4_DESTROY_COURTESY_CLIENT flags, to handle
  race conditions when resolving lock and share reservation conflicts.
. a courtesy client that was marked NFSD4_DESTROY_COURTESY_CLIENT is now
  considered 'dead' and waits for the laundromat to expire it. Such a
  client is no longer allowed to use its states if it re-connects before
  the laundromat finishes expiring it. For a v4.1 client, the detection is
  done in the processing of the SEQUENCE op, which returns
  NFS4ERR_BAD_SESSION to force the client to establish a new clientid and
  session. For a v4.0 client, the detection is done in the processing of
  the RENEW and state-related ops, which return NFS4ERR_EXPIRED to force
  the client to establish a new clientid (see the sketch after this
  message).

^ permalink raw reply	[flat|nested] 14+ messages in thread
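As context for the last item above, the per-client flag handling in this
series follows one pattern: test and update the courtesy flags under the
new cl_cs_lock spinlock. Below is a minimal sketch only; the helper name
is made up for illustration, while the flags, the lock, and the logic are
taken from patch 2/2:

	/*
	 * Illustrative sketch, not part of the series: returns true if the
	 * client was marked NFSD4_DESTROY_COURTESY_CLIENT and must be treated
	 * as dead; otherwise clears the courtesy flag because the client has
	 * shown signs of life again.
	 */
	static bool client_is_dead_courtesy(struct nfs4_client *clp)
	{
		bool dead;

		spin_lock(&clp->cl_cs_lock);
		dead = test_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags);
		if (!dead)
			clear_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags);
		spin_unlock(&clp->cl_cs_lock);
		return dead;
	}

This mirrors the checks patch 2/2 adds to find_in_sessionid_hashtbl (v4.1)
and find_client_in_id_table (v4.0).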
* [PATCH RFC v9 1/2] fs/lock: add new callback, lm_expire_lock, to lock_manager_operations
  2022-01-10 18:40 [PATCH RFC v9 0/2] nfsd: Initial implementation of NFSv4 Courteous Server Dai Ngo
@ 2022-01-10 18:40 ` Dai Ngo
  2022-01-10 18:40 ` [PATCH RFC v9 2/2] nfsd: Initial implementation of NFSv4 Courteous Server Dai Ngo
  1 sibling, 0 replies; 14+ messages in thread
From: Dai Ngo @ 2022-01-10 18:40 UTC (permalink / raw)
To: bfields, chuck.lever; +Cc: jlayton, viro, linux-nfs, linux-fsdevel

Add a new callback, lm_expire_lock, to lock_manager_operations to allow
the lock manager to take appropriate action to resolve the lock conflict
if possible.

The callback takes one argument, the file_lock of the blocker, and
returns true if the conflict was resolved, otherwise false. Note that
the lock manager has to be able to resolve the conflict while the
spinlock flc_lock is held.

A lock manager, such as the NFSv4 courteous server, uses this callback
to resolve the conflict by destroying the lock owner, or the NFSv4
courtesy client (a client that has expired but is allowed to maintain
its states) that owns the lock.

Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
---
 fs/locks.c         | 14 ++++++++++----
 include/linux/fs.h |  1 +
 2 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index 3d6fb4ae847b..5844fd29560d 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -963,10 +963,13 @@ posix_test_lock(struct file *filp, struct file_lock *fl)

        spin_lock(&ctx->flc_lock);
        list_for_each_entry(cfl, &ctx->flc_posix, fl_list) {
-               if (posix_locks_conflict(fl, cfl)) {
-                       locks_copy_conflock(fl, cfl);
-                       goto out;
-               }
+               if (!posix_locks_conflict(fl, cfl))
+                       continue;
+               if (cfl->fl_lmops && cfl->fl_lmops->lm_expire_lock &&
+                       cfl->fl_lmops->lm_expire_lock(cfl))
+                       continue;
+               locks_copy_conflock(fl, cfl);
+               goto out;
        }
        fl->fl_type = F_UNLCK;
 out:
@@ -1169,6 +1172,9 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
        list_for_each_entry(fl, &ctx->flc_posix, fl_list) {
                if (!posix_locks_conflict(request, fl))
                        continue;
+               if (fl->fl_lmops && fl->fl_lmops->lm_expire_lock &&
+                       fl->fl_lmops->lm_expire_lock(fl))
+                       continue;
                if (conflock)
                        locks_copy_conflock(conflock, fl);
                error = -EAGAIN;
diff --git a/include/linux/fs.h b/include/linux/fs.h
index e7a633353fd2..0f70e0b39834 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -1071,6 +1071,7 @@ struct lock_manager_operations {
        int (*lm_change)(struct file_lock *, int, struct list_head *);
        void (*lm_setup)(struct file_lock *, void **);
        bool (*lm_breaker_owns_lease)(struct file_lock *);
+       bool (*lm_expire_lock)(struct file_lock *cfl);
 };

 struct lock_manager {
--
2.9.5

^ permalink raw reply related	[flat|nested] 14+ messages in thread
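To illustrate how a lock manager is expected to use this hook, here is a
condensed sketch modeled on nfsd4_fl_expire_lock from patch 2/2; the
function name here is illustrative, and the scheduling of the laundromat
work is omitted. Because posix_test_lock() and posix_lock_inode() invoke
the callback with flc_lock held, it must not sleep; it only marks the
owning client and defers the real cleanup to background work:

	/*
	 * Sketch based on patch 2/2: resolve the conflict only if the blocking
	 * lock is owned by a courtesy (expired but retained) client. Runs
	 * under flc_lock, so it never sleeps; it just marks the client for
	 * destruction and lets the laundromat free the state later.
	 */
	static bool example_lm_expire_lock(struct file_lock *fl)
	{
		struct nfs4_lockowner *lo = (struct nfs4_lockowner *)fl->fl_owner;
		struct nfs4_client *clp = lo->lo_owner.so_client;

		spin_lock(&clp->cl_cs_lock);
		if (!test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) {
			spin_unlock(&clp->cl_cs_lock);
			return false;	/* a live client owns the lock */
		}
		set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags);
		spin_unlock(&clp->cl_cs_lock);
		return true;	/* the laundromat will expire the client */
	}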
* [PATCH RFC v9 2/2] nfsd: Initial implementation of NFSv4 Courteous Server
  2022-01-10 18:40 [PATCH RFC v9 0/2] nfsd: Initial implementation of NFSv4 Courteous Server Dai Ngo
  2022-01-10 18:40 ` [PATCH RFC v9 1/2] fs/lock: add new callback, lm_expire_lock, to lock_manager_operations Dai Ngo
@ 2022-01-10 18:40 ` Dai Ngo
  1 sibling, 0 replies; 14+ messages in thread
From: Dai Ngo @ 2022-01-10 18:40 UTC (permalink / raw)
To: bfields, chuck.lever; +Cc: jlayton, viro, linux-nfs, linux-fsdevel

Currently an NFSv4 client must maintain its lease by using at least one
of the state tokens or, if nothing else, by issuing a RENEW (4.0) or a
singleton SEQUENCE (4.1) at least once during each lease period. If the
client fails to renew the lease, for any reason, the Linux server
expunges the state tokens immediately upon detecting the "failure to
renew the lease" condition and begins returning NFS4ERR_EXPIRED if the
client should reconnect and attempt to use the (now) expired state.

The default lease period for the Linux server is 90 seconds. The typical
client cuts that in half and will issue a lease-renewing operation every
45 seconds. The 90-second lease period is very short considering the
potential for moderately long-term network partitions. A network
partition refers to any loss of network connectivity between the NFS
client and the NFS server, regardless of its root cause. This includes
NIC failures, NIC driver bugs, network misconfigurations and
administrative errors, routers and switches crashing and/or having
software updates applied, even down to cables being physically pulled.
In most cases these network failures are transient, although the
duration is unknown.

A server which does not immediately expunge the state on lease
expiration is known as a Courteous Server. A Courteous Server continues
to recognize previously generated state tokens as valid until a conflict
arises between the expired state and the requests from another client,
or the server reboots.

The initial implementation of the Courteous Server does the following:

. when the laundromat thread detects an expired client, if that client
  still has established states on the Linux server and there are no
  waiters for the client's locks, mark the client as a COURTESY_CLIENT
  and skip destroying the client and all its states; otherwise destroy
  the client as usual.

. detect a conflict of an OPEN request with a COURTESY_CLIENT, destroy
  the expired client and all its states, skip the delegation recall,
  then allow the conflicting request to succeed.

. detect conflicts of LOCK/LOCKT, NLM LOCK and TEST, and local lock
  requests with a COURTESY_CLIENT, destroy the expired client and all
  its states, then allow the conflicting request to succeed.
Signed-off-by: Dai Ngo <dai.ngo@oracle.com> --- fs/nfsd/nfs4state.c | 323 ++++++++++++++++++++++++++++++++++++++++++++++++++-- fs/nfsd/state.h | 8 ++ 2 files changed, 323 insertions(+), 8 deletions(-) diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c index 3f4027a5de88..e7fa4da44835 100644 --- a/fs/nfsd/nfs4state.c +++ b/fs/nfsd/nfs4state.c @@ -125,6 +125,11 @@ static void free_session(struct nfsd4_session *); static const struct nfsd4_callback_ops nfsd4_cb_recall_ops; static const struct nfsd4_callback_ops nfsd4_cb_notify_lock_ops; +static struct workqueue_struct *laundry_wq; +static void laundromat_main(struct work_struct *); + +static const int courtesy_client_expiry = (24 * 60 * 60); /* in secs */ + static bool is_session_dead(struct nfsd4_session *ses) { return ses->se_flags & NFS4_SESSION_DEAD; @@ -155,8 +160,10 @@ static __be32 get_client_locked(struct nfs4_client *clp) return nfs_ok; } -/* must be called under the client_lock */ +/* must be called under the client_lock static inline void +*/ +void renew_client_locked(struct nfs4_client *clp) { struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id); @@ -172,7 +179,9 @@ renew_client_locked(struct nfs4_client *clp) list_move_tail(&clp->cl_lru, &nn->client_lru); clp->cl_time = ktime_get_boottime_seconds(); + clear_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags); } +EXPORT_SYMBOL_GPL(renew_client_locked); static void put_client_renew_locked(struct nfs4_client *clp) { @@ -1912,10 +1921,22 @@ find_in_sessionid_hashtbl(struct nfs4_sessionid *sessionid, struct net *net, { struct nfsd4_session *session; __be32 status = nfserr_badsession; + struct nfs4_client *clp; session = __find_in_sessionid_hashtbl(sessionid, net); if (!session) goto out; + clp = session->se_client; + if (clp) { + spin_lock(&clp->cl_cs_lock); + if (test_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags)) { + spin_unlock(&clp->cl_cs_lock); + session = NULL; + goto out; + } + clear_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags); + spin_unlock(&clp->cl_cs_lock); + } status = nfsd4_get_session_locked(session); if (status) session = NULL; @@ -1992,6 +2013,7 @@ static struct nfs4_client *alloc_client(struct xdr_netobj name) INIT_LIST_HEAD(&clp->async_copies); spin_lock_init(&clp->async_lock); spin_lock_init(&clp->cl_lock); + spin_lock_init(&clp->cl_cs_lock); rpc_init_wait_queue(&clp->cl_cb_waitq, "Backchannel slot table"); return clp; err_no_hashtbl: @@ -2389,6 +2411,10 @@ static int client_info_show(struct seq_file *m, void *v) seq_puts(m, "status: confirmed\n"); else seq_puts(m, "status: unconfirmed\n"); + seq_printf(m, "courtesy client: %s\n", + test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) ? 
"yes" : "no"); + seq_printf(m, "seconds from last renew: %lld\n", + ktime_get_boottime_seconds() - clp->cl_time); seq_printf(m, "name: "); seq_quote_mem(m, clp->cl_name.data, clp->cl_name.len); seq_printf(m, "\nminor version: %d\n", clp->cl_minorversion); @@ -2809,8 +2835,17 @@ find_clp_in_name_tree(struct xdr_netobj *name, struct rb_root *root) node = node->rb_left; else if (cmp < 0) node = node->rb_right; - else - return clp; + else { + spin_lock(&clp->cl_cs_lock); + if (!test_bit(NFSD4_DESTROY_COURTESY_CLIENT, + &clp->cl_flags)) { + clear_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags); + spin_unlock(&clp->cl_cs_lock); + return clp; + } + spin_unlock(&clp->cl_cs_lock); + return NULL; + } } return NULL; } @@ -2856,6 +2891,14 @@ find_client_in_id_table(struct list_head *tbl, clientid_t *clid, bool sessions) if (same_clid(&clp->cl_clientid, clid)) { if ((bool)clp->cl_minorversion != sessions) return NULL; + spin_lock(&clp->cl_cs_lock); + if (test_bit(NFSD4_DESTROY_COURTESY_CLIENT, + &clp->cl_flags)) { + spin_unlock(&clp->cl_cs_lock); + continue; + } + clear_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags); + spin_unlock(&clp->cl_cs_lock); renew_client_locked(clp); return clp; } @@ -4662,6 +4705,36 @@ static void nfsd_break_one_deleg(struct nfs4_delegation *dp) nfsd4_run_cb(&dp->dl_recall); } +/* + * This function is called when a file is opened and there is a + * delegation conflict with another client. If the other client + * is a courtesy client then kick start the laundromat to destroy + * it. + */ +static bool +nfsd_check_courtesy_client(struct nfs4_delegation *dp) +{ + struct svc_rqst *rqst; + struct nfs4_client *clp = dp->dl_recall.cb_clp; + struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id); + + if (!i_am_nfsd()) + goto out; + rqst = kthread_data(current); + if (rqst->rq_prog != NFS_PROGRAM || rqst->rq_vers < 4) + return false; +out: + spin_lock(&clp->cl_cs_lock); + if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) { + set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags); + spin_unlock(&clp->cl_cs_lock); + mod_delayed_work(laundry_wq, &nn->laundromat_work, 0); + return true; + } + spin_unlock(&clp->cl_cs_lock); + return false; +} + /* Called from break_lease() with i_lock held. 
*/ static bool nfsd_break_deleg_cb(struct file_lock *fl) @@ -4670,6 +4743,8 @@ nfsd_break_deleg_cb(struct file_lock *fl) struct nfs4_delegation *dp = (struct nfs4_delegation *)fl->fl_owner; struct nfs4_file *fp = dp->dl_stid.sc_file; + if (nfsd_check_courtesy_client(dp)) + return false; trace_nfsd_cb_recall(&dp->dl_stid); /* @@ -4912,7 +4987,128 @@ nfsd4_truncate(struct svc_rqst *rqstp, struct svc_fh *fh, return nfsd_setattr(rqstp, fh, &iattr, 0, (time64_t)0); } -static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp, +static bool +__nfs4_check_access_deny_bmap(struct nfs4_ol_stateid *stp, u32 access, + bool share_access) +{ + if (share_access) { + if (!stp->st_deny_bmap) + return false; + + if ((stp->st_deny_bmap & (1 << NFS4_SHARE_DENY_BOTH)) || + (access & NFS4_SHARE_ACCESS_READ && + stp->st_deny_bmap & (1 << NFS4_SHARE_DENY_READ)) || + (access & NFS4_SHARE_ACCESS_WRITE && + stp->st_deny_bmap & (1 << NFS4_SHARE_DENY_WRITE))) { + return true; + } + return false; + } + if ((access & NFS4_SHARE_DENY_BOTH) || + (access & NFS4_SHARE_DENY_READ && + stp->st_access_bmap & (1 << NFS4_SHARE_ACCESS_READ)) || + (access & NFS4_SHARE_DENY_WRITE && + stp->st_access_bmap & (1 << NFS4_SHARE_ACCESS_WRITE))) { + return true; + } + return false; +} + +/* + * Check all files belong to the specified client to determine if there is + * any conflict with the specified access_mode/deny_mode of the file 'fp. + * + * If share_access is true then 'access' is the access mode. Check if + * this access mode conflicts with current deny mode of the file. + * + * If share_access is false then 'access' the deny mode. Check if + * this deny mode conflicts with current access mode of the file. + */ +static bool +nfs4_check_access_deny_bmap(struct nfs4_client *clp, struct nfs4_file *fp, + struct nfs4_ol_stateid *st, u32 access, bool share_access) +{ + int i; + struct nfs4_openowner *oo; + struct nfs4_stateowner *so, *tmp; + struct nfs4_ol_stateid *stp, *stmp; + + spin_lock(&clp->cl_lock); + for (i = 0; i < OWNER_HASH_SIZE; i++) { + list_for_each_entry_safe(so, tmp, &clp->cl_ownerstr_hashtbl[i], + so_strhash) { + if (!so->so_is_open_owner) + continue; + oo = openowner(so); + list_for_each_entry_safe(stp, stmp, + &oo->oo_owner.so_stateids, st_perstateowner) { + if (stp == st || stp->st_stid.sc_file != fp) + continue; + if (__nfs4_check_access_deny_bmap(stp, access, + share_access)) { + spin_unlock(&clp->cl_lock); + return true; + } + } + } + } + spin_unlock(&clp->cl_lock); + return false; +} + +/* + * This function is called to check whether nfserr_share_denied should + * be returning to client. + * + * access: is op_share_access if share_access is true. + * Check if access mode, op_share_access, would conflict with + * the current deny mode of the file 'fp'. + * access: is op_share_deny if share_access is true. + * Check if the deny mode, op_share_deny, would conflict with + * current access of the file 'fp'. + * stp: skip checking this entry. + * + * Function returns: + * true - access/deny mode conflict with courtesy client(s). + * Caller to return nfserr_jukebox while client(s) being expired. + * false - access/deny mode conflict with non-courtesy client. + * Caller to return nfserr_share_denied to client. 
+ */ +static bool +nfs4_conflict_courtesy_clients(struct svc_rqst *rqstp, struct nfs4_file *fp, + struct nfs4_ol_stateid *stp, u32 access, bool share_access) +{ + struct nfs4_client *cl; + bool conflict = false; + int async_cnt = 0; + struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id); + + spin_lock(&nn->client_lock); + list_for_each_entry(cl, &nn->client_lru, cl_lru) { + if (!nfs4_check_access_deny_bmap(cl, fp, stp, access, share_access)) + continue; + spin_lock(&cl->cl_cs_lock); + if (test_bit(NFSD4_COURTESY_CLIENT, &cl->cl_flags)) { + set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &cl->cl_flags); + async_cnt++; + spin_unlock(&cl->cl_cs_lock); + continue; + } + /* conflict with non-courtesy client */ + spin_unlock(&cl->cl_cs_lock); + conflict = false; + break; + } + spin_unlock(&nn->client_lock); + if (async_cnt) { + mod_delayed_work(laundry_wq, &nn->laundromat_work, 0); + conflict = true; + } + return conflict; +} + +static __be32 +nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp, struct svc_fh *cur_fh, struct nfs4_ol_stateid *stp, struct nfsd4_open *open) { @@ -4931,6 +5127,11 @@ static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp, status = nfs4_file_check_deny(fp, open->op_share_deny); if (status != nfs_ok) { spin_unlock(&fp->fi_lock); + if (status != nfserr_share_denied) + goto out; + if (nfs4_conflict_courtesy_clients(rqstp, fp, + stp, open->op_share_deny, false)) + status = nfserr_jukebox; goto out; } @@ -4938,6 +5139,11 @@ static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp, status = nfs4_file_get_access(fp, open->op_share_access); if (status != nfs_ok) { spin_unlock(&fp->fi_lock); + if (status != nfserr_share_denied) + goto out; + if (nfs4_conflict_courtesy_clients(rqstp, fp, + stp, open->op_share_access, true)) + status = nfserr_jukebox; goto out; } @@ -5572,6 +5778,47 @@ static void nfsd4_ssc_expire_umount(struct nfsd_net *nn) } #endif +static +bool nfs4_anylock_conflict(struct nfs4_client *clp) +{ + int i; + struct nfs4_stateowner *so, *tmp; + struct nfs4_lockowner *lo; + struct nfs4_ol_stateid *stp; + struct nfs4_file *nf; + struct inode *ino; + struct file_lock_context *ctx; + struct file_lock *fl; + + for (i = 0; i < OWNER_HASH_SIZE; i++) { + /* scan each lock owner */ + list_for_each_entry_safe(so, tmp, &clp->cl_ownerstr_hashtbl[i], + so_strhash) { + if (so->so_is_open_owner) + continue; + + /* scan lock states of this lock owner */ + lo = lockowner(so); + list_for_each_entry(stp, &lo->lo_owner.so_stateids, + st_perstateowner) { + nf = stp->st_stid.sc_file; + ino = nf->fi_inode; + ctx = ino->i_flctx; + if (!ctx) + continue; + /* check each lock belongs to this lock state */ + list_for_each_entry(fl, &ctx->flc_posix, fl_list) { + if (fl->fl_owner != lo) + continue; + if (!list_empty(&fl->fl_blocked_requests)) + return true; + } + } + } + } + return false; +} + static time64_t nfs4_laundromat(struct nfsd_net *nn) { @@ -5587,7 +5834,9 @@ nfs4_laundromat(struct nfsd_net *nn) }; struct nfs4_cpntf_state *cps; copy_stateid_t *cps_t; + struct nfs4_stid *stid; int i; + int id; if (clients_still_reclaiming(nn)) { lt.new_timeo = 0; @@ -5608,8 +5857,41 @@ nfs4_laundromat(struct nfsd_net *nn) spin_lock(&nn->client_lock); list_for_each_safe(pos, next, &nn->client_lru) { clp = list_entry(pos, struct nfs4_client, cl_lru); - if (!state_expired(<, clp->cl_time)) + spin_lock(&clp->cl_cs_lock); + if (test_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags)) + goto exp_client; + if (test_bit(NFSD4_COURTESY_CLIENT, 
&clp->cl_flags)) { + if (ktime_get_boottime_seconds() >= clp->courtesy_client_expiry) + goto exp_client; + /* + * after umount, v4.0 client is still around + * waiting to be expired. Check again and if + * it has no state then expire it. + */ + if (clp->cl_minorversion) { + spin_unlock(&clp->cl_cs_lock); + continue; + } + } + if (!state_expired(<, clp->cl_time)) { + spin_unlock(&clp->cl_cs_lock); break; + } + id = 0; + spin_lock(&clp->cl_lock); + stid = idr_get_next(&clp->cl_stateids, &id); + if (stid && !nfs4_anylock_conflict(clp)) { + /* client still has states */ + spin_unlock(&clp->cl_lock); + clp->courtesy_client_expiry = + ktime_get_boottime_seconds() + courtesy_client_expiry; + set_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags); + spin_unlock(&clp->cl_cs_lock); + continue; + } + spin_unlock(&clp->cl_lock); +exp_client: + spin_unlock(&clp->cl_cs_lock); if (mark_client_expired_locked(clp)) continue; list_add(&clp->cl_lru, &reaplist); @@ -5689,9 +5971,6 @@ nfs4_laundromat(struct nfsd_net *nn) return max_t(time64_t, lt.new_timeo, NFSD_LAUNDROMAT_MINTIMEOUT); } -static struct workqueue_struct *laundry_wq; -static void laundromat_main(struct work_struct *); - static void laundromat_main(struct work_struct *laundry) { @@ -6496,6 +6775,33 @@ nfs4_transform_lock_offset(struct file_lock *lock) lock->fl_end = OFFSET_MAX; } +/* + * Return true if lock can be resolved by expiring + * courtesy client else return false. + */ +static bool +nfsd4_fl_expire_lock(struct file_lock *fl) +{ + struct nfs4_lockowner *lo; + struct nfs4_client *clp; + struct nfsd_net *nn; + + if (!fl) + return false; + lo = (struct nfs4_lockowner *)fl->fl_owner; + clp = lo->lo_owner.so_client; + spin_lock(&clp->cl_cs_lock); + if (!test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) { + spin_unlock(&clp->cl_cs_lock); + return false; + } + nn = net_generic(clp->net, nfsd_net_id); + set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags); + spin_unlock(&clp->cl_cs_lock); + mod_delayed_work(laundry_wq, &nn->laundromat_work, 0); + return true; +} + static fl_owner_t nfsd4_fl_get_owner(fl_owner_t owner) { @@ -6543,6 +6849,7 @@ static const struct lock_manager_operations nfsd_posix_mng_ops = { .lm_notify = nfsd4_lm_notify, .lm_get_owner = nfsd4_fl_get_owner, .lm_put_owner = nfsd4_fl_put_owner, + .lm_expire_lock = nfsd4_fl_expire_lock, }; static inline void diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h index e73bdbb1634a..7f52a79e0743 100644 --- a/fs/nfsd/state.h +++ b/fs/nfsd/state.h @@ -345,6 +345,8 @@ struct nfs4_client { #define NFSD4_CLIENT_UPCALL_LOCK (5) /* upcall serialization */ #define NFSD4_CLIENT_CB_FLAG_MASK (1 << NFSD4_CLIENT_CB_UPDATE | \ 1 << NFSD4_CLIENT_CB_KILL) +#define NFSD4_COURTESY_CLIENT (6) /* be nice to expired client */ +#define NFSD4_DESTROY_COURTESY_CLIENT (7) unsigned long cl_flags; const struct cred *cl_cb_cred; struct rpc_clnt *cl_cb_client; @@ -385,6 +387,12 @@ struct nfs4_client { struct list_head async_copies; /* list of async copies */ spinlock_t async_lock; /* lock for async copies */ atomic_t cl_cb_inflight; /* Outstanding callbacks */ + int courtesy_client_expiry; + /* + * used to synchronize access to NFSD4_COURTESY_CLIENT + * and NFSD4_DESTROY_COURTESY_CLIENT for race conditions. + */ + spinlock_t cl_cs_lock; }; /* struct nfs4_client_reset -- 2.9.5 ^ permalink raw reply related [flat|nested] 14+ messages in thread
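One detail worth spelling out from the hunks above: the share-reservation
conflict that __nfs4_check_access_deny_bmap looks for is symmetric. A new
OPEN conflicts with existing state when its access mode intersects an
existing deny mode, or when its deny mode intersects an existing access
mode. A stand-alone sketch of that rule follows; the constants and helper
are simplified stand-ins, not the NFS4_SHARE_* bit values used in the
patch:

	/* Simplified stand-ins for the NFS4 share access and deny bits. */
	#define XSHARE_READ	0x1
	#define XSHARE_WRITE	0x2

	/*
	 * A new open conflicts with an existing open when the requested access
	 * hits an existing deny, or the requested deny hits an existing access.
	 * Example: an existing open with access=READ, deny=WRITE conflicts with
	 * a new open requesting access=WRITE.
	 */
	static bool share_conflict(unsigned int new_access, unsigned int new_deny,
				   unsigned int cur_access, unsigned int cur_deny)
	{
		return (new_access & cur_deny) || (new_deny & cur_access);
	}

When such a conflict is found and the holder is only a courtesy client,
the series returns nfserr_jukebox and expires that client in the
background instead of returning nfserr_share_denied.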
* [PATCH RFC v9 0/2] nfsd: Initial implementation of NFSv4 Courteous Server
@ 2022-01-10 18:50 Dai Ngo
  2022-01-10 18:50 ` [PATCH RFC v9 2/2] " Dai Ngo
  0 siblings, 1 reply; 14+ messages in thread
From: Dai Ngo @ 2022-01-10 18:50 UTC (permalink / raw)
To: bfields, chuck.lever; +Cc: jlayton, viro, linux-nfs, linux-fsdevel

Hi Bruce, Chuck,

This series of patches implements the NFSv4 Courteous Server.

A server which does not immediately expunge the state on lease expiration
is known as a Courteous Server. A Courteous Server continues to recognize
previously generated state tokens as valid until a conflict arises between
the expired state and the requests from another client, or the server
reboots.

The v2 patch includes the following:
. add a new callback, lm_expire_lock, to lock_manager_operations to allow
  the lock manager to take appropriate action on a conflicting lock.
. handle conflicts of NFSv4 locks with NFSv3/NLM and local locks.
. expire a courtesy client after 24 hours if the client has not reconnected.
. do not allow an expired client to become a courtesy client if there are
  waiters for the client's locks.
. modify client_info_show to show the courtesy client state and the seconds
  since the last renew.
. fix a problem with the NFSv4.1 server where it keeps returning
  SEQ4_STATUS_CB_PATH_DOWN in the successful SEQUENCE reply after the
  courtesy client re-connects, causing the client to keep sending BCTS
  requests to the server.

The v3 patch includes the following:
. modify posix_test_lock to check and resolve conflicting locks, to handle
  NLM TEST and NFSv4 LOCKT requests.
. separate out the fix for the back channel stuck in SEQ4_STATUS_CB_PATH_DOWN.

The v4 patch includes:
. rework nfsd_check_courtesy to avoid a deadlock between fl_lock and
  client_lock by asking the laundromat thread to destroy the courtesy
  client.
. handle NFSv4 share reservation conflicts with courtesy clients. This
  includes conflicts between access mode and deny mode and vice versa.
. drop the patch for the back channel stuck in SEQ4_STATUS_CB_PATH_DOWN.

The v5 patch includes:
. fix recursive locking of file_rwsem from posix_lock_file.
. retest with LOCKDEP enabled.

The v6 patch includes:
. merge with 5.15-rc7.
. fix a bug in nfs4_check_deny_bmap that did not check for a matching
  nfs4_file before checking for an access/deny conflict. This bug caused
  pynfs OPEN18 to fail because the server took too long to release the
  states of many non-conflicting clients.
. enhance the share reservation conflict handler to handle the case where
  a large number of conflicting courtesy clients need to be expired. The
  first 100 clients are expired synchronously and the rest are expired in
  the background by the laundromat, and NFS4ERR_DELAY is returned to the
  NFS client. This is needed to prevent the NFS client from timing out
  waiting for the reply.

The v7 patch includes:
. fix a race condition in posix_test_lock and posix_lock_inode after
  dropping the spinlock.
. enhance nfsd4_fl_expire_lock to work with the new lm_expire_lock callback.
. always resolve share reservation conflicts asynchronously.
. fix a bug in nfs4_laundromat where the spinlock was not held when
  scanning cl_ownerstr_hashtbl.
. fix a bug in nfs4_laundromat where idr_get_next was called with an
  incorrect 'id'.
. merge nfs4_destroy_courtesy_client into nfsd4_fl_expire_lock.

The v8 patch includes:
. fix a warning in nfsd4_fl_expire_lock reported by the test robot.

The v9 patch includes:
. simplify the lm_expire_lock API by (1) removing the 'testonly' flag and
  (2) specifying the return value as true/false to indicate whether the
  conflict was successfully resolved.
. rework nfsd4_fl_expire_lock to mark the client with
  NFSD4_DESTROY_COURTESY_CLIENT and then tell the laundromat to expire the
  client in the background.
. add a spinlock to nfs4_client to synchronize access to the
  NFSD4_COURTESY_CLIENT and NFSD4_DESTROY_COURTESY_CLIENT flags, to handle
  race conditions when resolving lock and share reservation conflicts.
. a courtesy client that was marked NFSD4_DESTROY_COURTESY_CLIENT is now
  considered 'dead' and waits for the laundromat to expire it. Such a
  client is no longer allowed to use its states if it re-connects before
  the laundromat finishes expiring it. For a v4.1 client, the detection is
  done in the processing of the SEQUENCE op, which returns
  NFS4ERR_BAD_SESSION to force the client to establish a new clientid and
  session. For a v4.0 client, the detection is done in the processing of
  the RENEW and state-related ops, which return NFS4ERR_EXPIRED to force
  the client to establish a new clientid.

^ permalink raw reply	[flat|nested] 14+ messages in thread
* [PATCH RFC v9 2/2] nfsd: Initial implementation of NFSv4 Courteous Server
  2022-01-10 18:50 [PATCH RFC v9 0/2] " Dai Ngo
@ 2022-01-10 18:50 ` Dai Ngo
  2022-01-10 23:17   ` Chuck Lever III
  ` (2 more replies)
  0 siblings, 3 replies; 14+ messages in thread
From: Dai Ngo @ 2022-01-10 18:50 UTC (permalink / raw)
To: bfields, chuck.lever; +Cc: jlayton, viro, linux-nfs, linux-fsdevel

Currently an NFSv4 client must maintain its lease by using at least one
of the state tokens or, if nothing else, by issuing a RENEW (4.0) or a
singleton SEQUENCE (4.1) at least once during each lease period. If the
client fails to renew the lease, for any reason, the Linux server
expunges the state tokens immediately upon detecting the "failure to
renew the lease" condition and begins returning NFS4ERR_EXPIRED if the
client should reconnect and attempt to use the (now) expired state.

The default lease period for the Linux server is 90 seconds. The typical
client cuts that in half and will issue a lease-renewing operation every
45 seconds. The 90-second lease period is very short considering the
potential for moderately long-term network partitions. A network
partition refers to any loss of network connectivity between the NFS
client and the NFS server, regardless of its root cause. This includes
NIC failures, NIC driver bugs, network misconfigurations and
administrative errors, routers and switches crashing and/or having
software updates applied, even down to cables being physically pulled.
In most cases these network failures are transient, although the
duration is unknown.

A server which does not immediately expunge the state on lease
expiration is known as a Courteous Server. A Courteous Server continues
to recognize previously generated state tokens as valid until a conflict
arises between the expired state and the requests from another client,
or the server reboots.

The initial implementation of the Courteous Server does the following:

. when the laundromat thread detects an expired client, if that client
  still has established states on the Linux server and there are no
  waiters for the client's locks, mark the client as a COURTESY_CLIENT
  and skip destroying the client and all its states; otherwise destroy
  the client as usual.

. detect a conflict of an OPEN request with a COURTESY_CLIENT, destroy
  the expired client and all its states, skip the delegation recall,
  then allow the conflicting request to succeed.

. detect conflicts of LOCK/LOCKT, NLM LOCK and TEST, and local lock
  requests with a COURTESY_CLIENT, destroy the expired client and all
  its states, then allow the conflicting request to succeed.
Signed-off-by: Dai Ngo <dai.ngo@oracle.com> --- fs/nfsd/nfs4state.c | 323 ++++++++++++++++++++++++++++++++++++++++++++++++++-- fs/nfsd/state.h | 8 ++ 2 files changed, 323 insertions(+), 8 deletions(-) diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c index 3f4027a5de88..e7fa4da44835 100644 --- a/fs/nfsd/nfs4state.c +++ b/fs/nfsd/nfs4state.c @@ -125,6 +125,11 @@ static void free_session(struct nfsd4_session *); static const struct nfsd4_callback_ops nfsd4_cb_recall_ops; static const struct nfsd4_callback_ops nfsd4_cb_notify_lock_ops; +static struct workqueue_struct *laundry_wq; +static void laundromat_main(struct work_struct *); + +static const int courtesy_client_expiry = (24 * 60 * 60); /* in secs */ + static bool is_session_dead(struct nfsd4_session *ses) { return ses->se_flags & NFS4_SESSION_DEAD; @@ -155,8 +160,10 @@ static __be32 get_client_locked(struct nfs4_client *clp) return nfs_ok; } -/* must be called under the client_lock */ +/* must be called under the client_lock static inline void +*/ +void renew_client_locked(struct nfs4_client *clp) { struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id); @@ -172,7 +179,9 @@ renew_client_locked(struct nfs4_client *clp) list_move_tail(&clp->cl_lru, &nn->client_lru); clp->cl_time = ktime_get_boottime_seconds(); + clear_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags); } +EXPORT_SYMBOL_GPL(renew_client_locked); static void put_client_renew_locked(struct nfs4_client *clp) { @@ -1912,10 +1921,22 @@ find_in_sessionid_hashtbl(struct nfs4_sessionid *sessionid, struct net *net, { struct nfsd4_session *session; __be32 status = nfserr_badsession; + struct nfs4_client *clp; session = __find_in_sessionid_hashtbl(sessionid, net); if (!session) goto out; + clp = session->se_client; + if (clp) { + spin_lock(&clp->cl_cs_lock); + if (test_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags)) { + spin_unlock(&clp->cl_cs_lock); + session = NULL; + goto out; + } + clear_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags); + spin_unlock(&clp->cl_cs_lock); + } status = nfsd4_get_session_locked(session); if (status) session = NULL; @@ -1992,6 +2013,7 @@ static struct nfs4_client *alloc_client(struct xdr_netobj name) INIT_LIST_HEAD(&clp->async_copies); spin_lock_init(&clp->async_lock); spin_lock_init(&clp->cl_lock); + spin_lock_init(&clp->cl_cs_lock); rpc_init_wait_queue(&clp->cl_cb_waitq, "Backchannel slot table"); return clp; err_no_hashtbl: @@ -2389,6 +2411,10 @@ static int client_info_show(struct seq_file *m, void *v) seq_puts(m, "status: confirmed\n"); else seq_puts(m, "status: unconfirmed\n"); + seq_printf(m, "courtesy client: %s\n", + test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) ? 
"yes" : "no"); + seq_printf(m, "seconds from last renew: %lld\n", + ktime_get_boottime_seconds() - clp->cl_time); seq_printf(m, "name: "); seq_quote_mem(m, clp->cl_name.data, clp->cl_name.len); seq_printf(m, "\nminor version: %d\n", clp->cl_minorversion); @@ -2809,8 +2835,17 @@ find_clp_in_name_tree(struct xdr_netobj *name, struct rb_root *root) node = node->rb_left; else if (cmp < 0) node = node->rb_right; - else - return clp; + else { + spin_lock(&clp->cl_cs_lock); + if (!test_bit(NFSD4_DESTROY_COURTESY_CLIENT, + &clp->cl_flags)) { + clear_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags); + spin_unlock(&clp->cl_cs_lock); + return clp; + } + spin_unlock(&clp->cl_cs_lock); + return NULL; + } } return NULL; } @@ -2856,6 +2891,14 @@ find_client_in_id_table(struct list_head *tbl, clientid_t *clid, bool sessions) if (same_clid(&clp->cl_clientid, clid)) { if ((bool)clp->cl_minorversion != sessions) return NULL; + spin_lock(&clp->cl_cs_lock); + if (test_bit(NFSD4_DESTROY_COURTESY_CLIENT, + &clp->cl_flags)) { + spin_unlock(&clp->cl_cs_lock); + continue; + } + clear_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags); + spin_unlock(&clp->cl_cs_lock); renew_client_locked(clp); return clp; } @@ -4662,6 +4705,36 @@ static void nfsd_break_one_deleg(struct nfs4_delegation *dp) nfsd4_run_cb(&dp->dl_recall); } +/* + * This function is called when a file is opened and there is a + * delegation conflict with another client. If the other client + * is a courtesy client then kick start the laundromat to destroy + * it. + */ +static bool +nfsd_check_courtesy_client(struct nfs4_delegation *dp) +{ + struct svc_rqst *rqst; + struct nfs4_client *clp = dp->dl_recall.cb_clp; + struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id); + + if (!i_am_nfsd()) + goto out; + rqst = kthread_data(current); + if (rqst->rq_prog != NFS_PROGRAM || rqst->rq_vers < 4) + return false; +out: + spin_lock(&clp->cl_cs_lock); + if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) { + set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags); + spin_unlock(&clp->cl_cs_lock); + mod_delayed_work(laundry_wq, &nn->laundromat_work, 0); + return true; + } + spin_unlock(&clp->cl_cs_lock); + return false; +} + /* Called from break_lease() with i_lock held. 
*/ static bool nfsd_break_deleg_cb(struct file_lock *fl) @@ -4670,6 +4743,8 @@ nfsd_break_deleg_cb(struct file_lock *fl) struct nfs4_delegation *dp = (struct nfs4_delegation *)fl->fl_owner; struct nfs4_file *fp = dp->dl_stid.sc_file; + if (nfsd_check_courtesy_client(dp)) + return false; trace_nfsd_cb_recall(&dp->dl_stid); /* @@ -4912,7 +4987,128 @@ nfsd4_truncate(struct svc_rqst *rqstp, struct svc_fh *fh, return nfsd_setattr(rqstp, fh, &iattr, 0, (time64_t)0); } -static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp, +static bool +__nfs4_check_access_deny_bmap(struct nfs4_ol_stateid *stp, u32 access, + bool share_access) +{ + if (share_access) { + if (!stp->st_deny_bmap) + return false; + + if ((stp->st_deny_bmap & (1 << NFS4_SHARE_DENY_BOTH)) || + (access & NFS4_SHARE_ACCESS_READ && + stp->st_deny_bmap & (1 << NFS4_SHARE_DENY_READ)) || + (access & NFS4_SHARE_ACCESS_WRITE && + stp->st_deny_bmap & (1 << NFS4_SHARE_DENY_WRITE))) { + return true; + } + return false; + } + if ((access & NFS4_SHARE_DENY_BOTH) || + (access & NFS4_SHARE_DENY_READ && + stp->st_access_bmap & (1 << NFS4_SHARE_ACCESS_READ)) || + (access & NFS4_SHARE_DENY_WRITE && + stp->st_access_bmap & (1 << NFS4_SHARE_ACCESS_WRITE))) { + return true; + } + return false; +} + +/* + * Check all files belong to the specified client to determine if there is + * any conflict with the specified access_mode/deny_mode of the file 'fp. + * + * If share_access is true then 'access' is the access mode. Check if + * this access mode conflicts with current deny mode of the file. + * + * If share_access is false then 'access' the deny mode. Check if + * this deny mode conflicts with current access mode of the file. + */ +static bool +nfs4_check_access_deny_bmap(struct nfs4_client *clp, struct nfs4_file *fp, + struct nfs4_ol_stateid *st, u32 access, bool share_access) +{ + int i; + struct nfs4_openowner *oo; + struct nfs4_stateowner *so, *tmp; + struct nfs4_ol_stateid *stp, *stmp; + + spin_lock(&clp->cl_lock); + for (i = 0; i < OWNER_HASH_SIZE; i++) { + list_for_each_entry_safe(so, tmp, &clp->cl_ownerstr_hashtbl[i], + so_strhash) { + if (!so->so_is_open_owner) + continue; + oo = openowner(so); + list_for_each_entry_safe(stp, stmp, + &oo->oo_owner.so_stateids, st_perstateowner) { + if (stp == st || stp->st_stid.sc_file != fp) + continue; + if (__nfs4_check_access_deny_bmap(stp, access, + share_access)) { + spin_unlock(&clp->cl_lock); + return true; + } + } + } + } + spin_unlock(&clp->cl_lock); + return false; +} + +/* + * This function is called to check whether nfserr_share_denied should + * be returning to client. + * + * access: is op_share_access if share_access is true. + * Check if access mode, op_share_access, would conflict with + * the current deny mode of the file 'fp'. + * access: is op_share_deny if share_access is true. + * Check if the deny mode, op_share_deny, would conflict with + * current access of the file 'fp'. + * stp: skip checking this entry. + * + * Function returns: + * true - access/deny mode conflict with courtesy client(s). + * Caller to return nfserr_jukebox while client(s) being expired. + * false - access/deny mode conflict with non-courtesy client. + * Caller to return nfserr_share_denied to client. 
+ */ +static bool +nfs4_conflict_courtesy_clients(struct svc_rqst *rqstp, struct nfs4_file *fp, + struct nfs4_ol_stateid *stp, u32 access, bool share_access) +{ + struct nfs4_client *cl; + bool conflict = false; + int async_cnt = 0; + struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id); + + spin_lock(&nn->client_lock); + list_for_each_entry(cl, &nn->client_lru, cl_lru) { + if (!nfs4_check_access_deny_bmap(cl, fp, stp, access, share_access)) + continue; + spin_lock(&cl->cl_cs_lock); + if (test_bit(NFSD4_COURTESY_CLIENT, &cl->cl_flags)) { + set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &cl->cl_flags); + async_cnt++; + spin_unlock(&cl->cl_cs_lock); + continue; + } + /* conflict with non-courtesy client */ + spin_unlock(&cl->cl_cs_lock); + conflict = false; + break; + } + spin_unlock(&nn->client_lock); + if (async_cnt) { + mod_delayed_work(laundry_wq, &nn->laundromat_work, 0); + conflict = true; + } + return conflict; +} + +static __be32 +nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp, struct svc_fh *cur_fh, struct nfs4_ol_stateid *stp, struct nfsd4_open *open) { @@ -4931,6 +5127,11 @@ static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp, status = nfs4_file_check_deny(fp, open->op_share_deny); if (status != nfs_ok) { spin_unlock(&fp->fi_lock); + if (status != nfserr_share_denied) + goto out; + if (nfs4_conflict_courtesy_clients(rqstp, fp, + stp, open->op_share_deny, false)) + status = nfserr_jukebox; goto out; } @@ -4938,6 +5139,11 @@ static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp, status = nfs4_file_get_access(fp, open->op_share_access); if (status != nfs_ok) { spin_unlock(&fp->fi_lock); + if (status != nfserr_share_denied) + goto out; + if (nfs4_conflict_courtesy_clients(rqstp, fp, + stp, open->op_share_access, true)) + status = nfserr_jukebox; goto out; } @@ -5572,6 +5778,47 @@ static void nfsd4_ssc_expire_umount(struct nfsd_net *nn) } #endif +static +bool nfs4_anylock_conflict(struct nfs4_client *clp) +{ + int i; + struct nfs4_stateowner *so, *tmp; + struct nfs4_lockowner *lo; + struct nfs4_ol_stateid *stp; + struct nfs4_file *nf; + struct inode *ino; + struct file_lock_context *ctx; + struct file_lock *fl; + + for (i = 0; i < OWNER_HASH_SIZE; i++) { + /* scan each lock owner */ + list_for_each_entry_safe(so, tmp, &clp->cl_ownerstr_hashtbl[i], + so_strhash) { + if (so->so_is_open_owner) + continue; + + /* scan lock states of this lock owner */ + lo = lockowner(so); + list_for_each_entry(stp, &lo->lo_owner.so_stateids, + st_perstateowner) { + nf = stp->st_stid.sc_file; + ino = nf->fi_inode; + ctx = ino->i_flctx; + if (!ctx) + continue; + /* check each lock belongs to this lock state */ + list_for_each_entry(fl, &ctx->flc_posix, fl_list) { + if (fl->fl_owner != lo) + continue; + if (!list_empty(&fl->fl_blocked_requests)) + return true; + } + } + } + } + return false; +} + static time64_t nfs4_laundromat(struct nfsd_net *nn) { @@ -5587,7 +5834,9 @@ nfs4_laundromat(struct nfsd_net *nn) }; struct nfs4_cpntf_state *cps; copy_stateid_t *cps_t; + struct nfs4_stid *stid; int i; + int id; if (clients_still_reclaiming(nn)) { lt.new_timeo = 0; @@ -5608,8 +5857,41 @@ nfs4_laundromat(struct nfsd_net *nn) spin_lock(&nn->client_lock); list_for_each_safe(pos, next, &nn->client_lru) { clp = list_entry(pos, struct nfs4_client, cl_lru); - if (!state_expired(<, clp->cl_time)) + spin_lock(&clp->cl_cs_lock); + if (test_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags)) + goto exp_client; + if (test_bit(NFSD4_COURTESY_CLIENT, 
&clp->cl_flags)) { + if (ktime_get_boottime_seconds() >= clp->courtesy_client_expiry) + goto exp_client; + /* + * after umount, v4.0 client is still around + * waiting to be expired. Check again and if + * it has no state then expire it. + */ + if (clp->cl_minorversion) { + spin_unlock(&clp->cl_cs_lock); + continue; + } + } + if (!state_expired(<, clp->cl_time)) { + spin_unlock(&clp->cl_cs_lock); break; + } + id = 0; + spin_lock(&clp->cl_lock); + stid = idr_get_next(&clp->cl_stateids, &id); + if (stid && !nfs4_anylock_conflict(clp)) { + /* client still has states */ + spin_unlock(&clp->cl_lock); + clp->courtesy_client_expiry = + ktime_get_boottime_seconds() + courtesy_client_expiry; + set_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags); + spin_unlock(&clp->cl_cs_lock); + continue; + } + spin_unlock(&clp->cl_lock); +exp_client: + spin_unlock(&clp->cl_cs_lock); if (mark_client_expired_locked(clp)) continue; list_add(&clp->cl_lru, &reaplist); @@ -5689,9 +5971,6 @@ nfs4_laundromat(struct nfsd_net *nn) return max_t(time64_t, lt.new_timeo, NFSD_LAUNDROMAT_MINTIMEOUT); } -static struct workqueue_struct *laundry_wq; -static void laundromat_main(struct work_struct *); - static void laundromat_main(struct work_struct *laundry) { @@ -6496,6 +6775,33 @@ nfs4_transform_lock_offset(struct file_lock *lock) lock->fl_end = OFFSET_MAX; } +/* + * Return true if lock can be resolved by expiring + * courtesy client else return false. + */ +static bool +nfsd4_fl_expire_lock(struct file_lock *fl) +{ + struct nfs4_lockowner *lo; + struct nfs4_client *clp; + struct nfsd_net *nn; + + if (!fl) + return false; + lo = (struct nfs4_lockowner *)fl->fl_owner; + clp = lo->lo_owner.so_client; + spin_lock(&clp->cl_cs_lock); + if (!test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) { + spin_unlock(&clp->cl_cs_lock); + return false; + } + nn = net_generic(clp->net, nfsd_net_id); + set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags); + spin_unlock(&clp->cl_cs_lock); + mod_delayed_work(laundry_wq, &nn->laundromat_work, 0); + return true; +} + static fl_owner_t nfsd4_fl_get_owner(fl_owner_t owner) { @@ -6543,6 +6849,7 @@ static const struct lock_manager_operations nfsd_posix_mng_ops = { .lm_notify = nfsd4_lm_notify, .lm_get_owner = nfsd4_fl_get_owner, .lm_put_owner = nfsd4_fl_put_owner, + .lm_expire_lock = nfsd4_fl_expire_lock, }; static inline void diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h index e73bdbb1634a..7f52a79e0743 100644 --- a/fs/nfsd/state.h +++ b/fs/nfsd/state.h @@ -345,6 +345,8 @@ struct nfs4_client { #define NFSD4_CLIENT_UPCALL_LOCK (5) /* upcall serialization */ #define NFSD4_CLIENT_CB_FLAG_MASK (1 << NFSD4_CLIENT_CB_UPDATE | \ 1 << NFSD4_CLIENT_CB_KILL) +#define NFSD4_COURTESY_CLIENT (6) /* be nice to expired client */ +#define NFSD4_DESTROY_COURTESY_CLIENT (7) unsigned long cl_flags; const struct cred *cl_cb_cred; struct rpc_clnt *cl_cb_client; @@ -385,6 +387,12 @@ struct nfs4_client { struct list_head async_copies; /* list of async copies */ spinlock_t async_lock; /* lock for async copies */ atomic_t cl_cb_inflight; /* Outstanding callbacks */ + int courtesy_client_expiry; + /* + * used to synchronize access to NFSD4_COURTESY_CLIENT + * and NFSD4_DESTROY_COURTESY_CLIENT for race conditions. + */ + spinlock_t cl_cs_lock; }; /* struct nfs4_client_reset -- 2.9.5 ^ permalink raw reply related [flat|nested] 14+ messages in thread
* Re: [PATCH RFC v9 2/2] nfsd: Initial implementation of NFSv4 Courteous Server 2022-01-10 18:50 ` [PATCH RFC v9 2/2] " Dai Ngo @ 2022-01-10 23:17 ` Chuck Lever III 2022-01-11 1:03 ` dai.ngo 2022-01-12 19:40 ` J. Bruce Fields 2022-01-12 19:52 ` J. Bruce Fields 2 siblings, 1 reply; 14+ messages in thread From: Chuck Lever III @ 2022-01-10 23:17 UTC (permalink / raw) To: Dai Ngo Cc: Bruce Fields, Jeff Layton, Al Viro, Linux NFS Mailing List, linux-fsdevel@vger.kernel.org Hi Dai- Still getting the feel of the new approach, but I have made some comments inline... > On Jan 10, 2022, at 1:50 PM, Dai Ngo <dai.ngo@oracle.com> wrote: > > Currently an NFSv4 client must maintain its lease by using the at least > one of the state tokens or if nothing else, by issuing a RENEW (4.0), or > a singleton SEQUENCE (4.1) at least once during each lease period. If the > client fails to renew the lease, for any reason, the Linux server expunges > the state tokens immediately upon detection of the "failure to renew the > lease" condition and begins returning NFS4ERR_EXPIRED if the client should > reconnect and attempt to use the (now) expired state. > > The default lease period for the Linux server is 90 seconds. The typical > client cuts that in half and will issue a lease renewing operation every > 45 seconds. The 90 second lease period is very short considering the > potential for moderately long term network partitions. A network partition > refers to any loss of network connectivity between the NFS client and the > NFS server, regardless of its root cause. This includes NIC failures, NIC > driver bugs, network misconfigurations & administrative errors, routers & > switches crashing and/or having software updates applied, even down to > cables being physically pulled. In most cases, these network failures are > transient, although the duration is unknown. > > A server which does not immediately expunge the state on lease expiration > is known as a Courteous Server. A Courteous Server continues to recognize > previously generated state tokens as valid until conflict arises between > the expired state and the requests from another client, or the server > reboots. > > The initial implementation of the Courteous Server will do the following: > > . when the laundromat thread detects an expired client and if that client > still has established states on the Linux server and there is no waiters > for the client's locks then mark the client as a COURTESY_CLIENT and skip > destroying the client and all its states, otherwise destroy the client as > usual. > > . detects conflict of OPEN request with COURTESY_CLIENT, destroys the > expired client and all its states, skips the delegation recall then allows > the conflicting request to succeed. > > . detects conflict of LOCK/LOCKT, NLM LOCK and TEST, and local locks > requests with COURTESY_CLIENT, destroys the expired client and all its > states then allows the conflicting request to succeed. > > . detects conflict of LOCK/LOCKT, NLM LOCK and TEST, and local locks > requests with COURTESY_CLIENT, destroys the expired client and all its > states then allows the conflicting request to succeed. 
> > Signed-off-by: Dai Ngo <dai.ngo@oracle.com> > --- > fs/nfsd/nfs4state.c | 323 ++++++++++++++++++++++++++++++++++++++++++++++++++-- > fs/nfsd/state.h | 8 ++ > 2 files changed, 323 insertions(+), 8 deletions(-) > > diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c > index 3f4027a5de88..e7fa4da44835 100644 > --- a/fs/nfsd/nfs4state.c > +++ b/fs/nfsd/nfs4state.c > @@ -125,6 +125,11 @@ static void free_session(struct nfsd4_session *); > static const struct nfsd4_callback_ops nfsd4_cb_recall_ops; > static const struct nfsd4_callback_ops nfsd4_cb_notify_lock_ops; > > +static struct workqueue_struct *laundry_wq; > +static void laundromat_main(struct work_struct *); > + > +static const int courtesy_client_expiry = (24 * 60 * 60); /* in secs */ > + > static bool is_session_dead(struct nfsd4_session *ses) > { > return ses->se_flags & NFS4_SESSION_DEAD; > @@ -155,8 +160,10 @@ static __be32 get_client_locked(struct nfs4_client *clp) > return nfs_ok; > } > > -/* must be called under the client_lock */ > +/* must be called under the client_lock > static inline void > +*/ > +void > renew_client_locked(struct nfs4_client *clp) > { > struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id); > @@ -172,7 +179,9 @@ renew_client_locked(struct nfs4_client *clp) > > list_move_tail(&clp->cl_lru, &nn->client_lru); > clp->cl_time = ktime_get_boottime_seconds(); > + clear_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags); > } > +EXPORT_SYMBOL_GPL(renew_client_locked); I don't see renew_client_locked() being called from outside fs/nfsd/nfs4state.c, and the patch doesn't add a global declaration. Please leave this function as "static inline void". > static void put_client_renew_locked(struct nfs4_client *clp) > { > @@ -1912,10 +1921,22 @@ find_in_sessionid_hashtbl(struct nfs4_sessionid *sessionid, struct net *net, > { > struct nfsd4_session *session; > __be32 status = nfserr_badsession; > + struct nfs4_client *clp; > > session = __find_in_sessionid_hashtbl(sessionid, net); > if (!session) > goto out; > + clp = session->se_client; > + if (clp) { > + spin_lock(&clp->cl_cs_lock); > + if (test_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags)) { > + spin_unlock(&clp->cl_cs_lock); > + session = NULL; > + goto out; > + } > + clear_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags); > + spin_unlock(&clp->cl_cs_lock); > + } > status = nfsd4_get_session_locked(session); > if (status) > session = NULL; > @@ -1992,6 +2013,7 @@ static struct nfs4_client *alloc_client(struct xdr_netobj name) > INIT_LIST_HEAD(&clp->async_copies); > spin_lock_init(&clp->async_lock); > spin_lock_init(&clp->cl_lock); > + spin_lock_init(&clp->cl_cs_lock); > rpc_init_wait_queue(&clp->cl_cb_waitq, "Backchannel slot table"); > return clp; > err_no_hashtbl: > @@ -2389,6 +2411,10 @@ static int client_info_show(struct seq_file *m, void *v) > seq_puts(m, "status: confirmed\n"); > else > seq_puts(m, "status: unconfirmed\n"); > + seq_printf(m, "courtesy client: %s\n", > + test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) ? 
"yes" : "no"); > + seq_printf(m, "seconds from last renew: %lld\n", > + ktime_get_boottime_seconds() - clp->cl_time); > seq_printf(m, "name: "); > seq_quote_mem(m, clp->cl_name.data, clp->cl_name.len); > seq_printf(m, "\nminor version: %d\n", clp->cl_minorversion); > @@ -2809,8 +2835,17 @@ find_clp_in_name_tree(struct xdr_netobj *name, struct rb_root *root) > node = node->rb_left; > else if (cmp < 0) > node = node->rb_right; > - else > - return clp; > + else { > + spin_lock(&clp->cl_cs_lock); > + if (!test_bit(NFSD4_DESTROY_COURTESY_CLIENT, > + &clp->cl_flags)) { > + clear_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags); > + spin_unlock(&clp->cl_cs_lock); > + return clp; > + } > + spin_unlock(&clp->cl_cs_lock); > + return NULL; > + } > } > return NULL; > } > @@ -2856,6 +2891,14 @@ find_client_in_id_table(struct list_head *tbl, clientid_t *clid, bool sessions) > if (same_clid(&clp->cl_clientid, clid)) { > if ((bool)clp->cl_minorversion != sessions) > return NULL; > + spin_lock(&clp->cl_cs_lock); > + if (test_bit(NFSD4_DESTROY_COURTESY_CLIENT, > + &clp->cl_flags)) { > + spin_unlock(&clp->cl_cs_lock); > + continue; > + } > + clear_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags); > + spin_unlock(&clp->cl_cs_lock); I'm wondering about the transition from COURTESY to active. Does that need to be synchronous with the client tracking database? > renew_client_locked(clp); > return clp; > } > @@ -4662,6 +4705,36 @@ static void nfsd_break_one_deleg(struct nfs4_delegation *dp) > nfsd4_run_cb(&dp->dl_recall); > } > > +/* > + * This function is called when a file is opened and there is a > + * delegation conflict with another client. If the other client > + * is a courtesy client then kick start the laundromat to destroy > + * it. > + */ > +static bool > +nfsd_check_courtesy_client(struct nfs4_delegation *dp) > +{ > + struct svc_rqst *rqst; > + struct nfs4_client *clp = dp->dl_recall.cb_clp; > + struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id); > + > + if (!i_am_nfsd()) > + goto out; > + rqst = kthread_data(current); > + if (rqst->rq_prog != NFS_PROGRAM || rqst->rq_vers < 4) > + return false; > +out: > + spin_lock(&clp->cl_cs_lock); > + if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) { > + set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags); > + spin_unlock(&clp->cl_cs_lock); > + mod_delayed_work(laundry_wq, &nn->laundromat_work, 0); I'm not sure what is the purpose of the mod_delayed_work() here and below. What's the harm in leaving a DESTROYED nfs4_client around until the laundromat runs again? Won't it run every "grace period" seconds anyway? I didn't think we were depending on the laundromat to resolve edge case races, so if a call to a scheduler function isn't totally necessary in this code, I prefer that it be left out. > + return true; > + } > + spin_unlock(&clp->cl_cs_lock); > + return false; > +} > + > /* Called from break_lease() with i_lock held. 
*/ > static bool > nfsd_break_deleg_cb(struct file_lock *fl) > @@ -4670,6 +4743,8 @@ nfsd_break_deleg_cb(struct file_lock *fl) > struct nfs4_delegation *dp = (struct nfs4_delegation *)fl->fl_owner; > struct nfs4_file *fp = dp->dl_stid.sc_file; > > + if (nfsd_check_courtesy_client(dp)) > + return false; > trace_nfsd_cb_recall(&dp->dl_stid); > > /* > @@ -4912,7 +4987,128 @@ nfsd4_truncate(struct svc_rqst *rqstp, struct svc_fh *fh, > return nfsd_setattr(rqstp, fh, &iattr, 0, (time64_t)0); > } > > -static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp, > +static bool > +__nfs4_check_access_deny_bmap(struct nfs4_ol_stateid *stp, u32 access, > + bool share_access) > +{ > + if (share_access) { > + if (!stp->st_deny_bmap) > + return false; > + > + if ((stp->st_deny_bmap & (1 << NFS4_SHARE_DENY_BOTH)) || > + (access & NFS4_SHARE_ACCESS_READ && > + stp->st_deny_bmap & (1 << NFS4_SHARE_DENY_READ)) || > + (access & NFS4_SHARE_ACCESS_WRITE && > + stp->st_deny_bmap & (1 << NFS4_SHARE_DENY_WRITE))) { > + return true; > + } > + return false; > + } > + if ((access & NFS4_SHARE_DENY_BOTH) || > + (access & NFS4_SHARE_DENY_READ && > + stp->st_access_bmap & (1 << NFS4_SHARE_ACCESS_READ)) || > + (access & NFS4_SHARE_DENY_WRITE && > + stp->st_access_bmap & (1 << NFS4_SHARE_ACCESS_WRITE))) { > + return true; > + } > + return false; > +} > + > +/* > + * Check all files belong to the specified client to determine if there is > + * any conflict with the specified access_mode/deny_mode of the file 'fp. > + * > + * If share_access is true then 'access' is the access mode. Check if > + * this access mode conflicts with current deny mode of the file. > + * > + * If share_access is false then 'access' the deny mode. Check if > + * this deny mode conflicts with current access mode of the file. > + */ > +static bool > +nfs4_check_access_deny_bmap(struct nfs4_client *clp, struct nfs4_file *fp, > + struct nfs4_ol_stateid *st, u32 access, bool share_access) > +{ > + int i; > + struct nfs4_openowner *oo; > + struct nfs4_stateowner *so, *tmp; > + struct nfs4_ol_stateid *stp, *stmp; > + > + spin_lock(&clp->cl_lock); > + for (i = 0; i < OWNER_HASH_SIZE; i++) { > + list_for_each_entry_safe(so, tmp, &clp->cl_ownerstr_hashtbl[i], > + so_strhash) { > + if (!so->so_is_open_owner) > + continue; > + oo = openowner(so); > + list_for_each_entry_safe(stp, stmp, > + &oo->oo_owner.so_stateids, st_perstateowner) { > + if (stp == st || stp->st_stid.sc_file != fp) > + continue; > + if (__nfs4_check_access_deny_bmap(stp, access, > + share_access)) { > + spin_unlock(&clp->cl_lock); > + return true; > + } > + } > + } > + } > + spin_unlock(&clp->cl_lock); > + return false; > +} > + > +/* > + * This function is called to check whether nfserr_share_denied should > + * be returning to client. > + * > + * access: is op_share_access if share_access is true. > + * Check if access mode, op_share_access, would conflict with > + * the current deny mode of the file 'fp'. > + * access: is op_share_deny if share_access is true. > + * Check if the deny mode, op_share_deny, would conflict with > + * current access of the file 'fp'. > + * stp: skip checking this entry. > + * > + * Function returns: > + * true - access/deny mode conflict with courtesy client(s). > + * Caller to return nfserr_jukebox while client(s) being expired. > + * false - access/deny mode conflict with non-courtesy client. > + * Caller to return nfserr_share_denied to client. 
> + */ > +static bool > +nfs4_conflict_courtesy_clients(struct svc_rqst *rqstp, struct nfs4_file *fp, > + struct nfs4_ol_stateid *stp, u32 access, bool share_access) > +{ > + struct nfs4_client *cl; > + bool conflict = false; > + int async_cnt = 0; > + struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id); > + > + spin_lock(&nn->client_lock); > + list_for_each_entry(cl, &nn->client_lru, cl_lru) { > + if (!nfs4_check_access_deny_bmap(cl, fp, stp, access, share_access)) > + continue; > + spin_lock(&cl->cl_cs_lock); > + if (test_bit(NFSD4_COURTESY_CLIENT, &cl->cl_flags)) { > + set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &cl->cl_flags); > + async_cnt++; You can get rid of async_cnt. Just set conflict = true after unlocking cl_cs_lock. And again, maybe that mod_delayed_work() call site isn't necessary. > + spin_unlock(&cl->cl_cs_lock); > + continue; > + } > + /* conflict with non-courtesy client */ > + spin_unlock(&cl->cl_cs_lock); > + conflict = false; > + break; > + } > + spin_unlock(&nn->client_lock); > + if (async_cnt) { > + mod_delayed_work(laundry_wq, &nn->laundromat_work, 0); > + conflict = true; > + } > + return conflict; > +} > + > +static __be32 > +nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp, > struct svc_fh *cur_fh, struct nfs4_ol_stateid *stp, > struct nfsd4_open *open) > { > @@ -4931,6 +5127,11 @@ static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp, > status = nfs4_file_check_deny(fp, open->op_share_deny); > if (status != nfs_ok) { > spin_unlock(&fp->fi_lock); > + if (status != nfserr_share_denied) > + goto out; > + if (nfs4_conflict_courtesy_clients(rqstp, fp, > + stp, open->op_share_deny, false)) > + status = nfserr_jukebox; > goto out; > } > > @@ -4938,6 +5139,11 @@ static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp, > status = nfs4_file_get_access(fp, open->op_share_access); > if (status != nfs_ok) { > spin_unlock(&fp->fi_lock); > + if (status != nfserr_share_denied) > + goto out; > + if (nfs4_conflict_courtesy_clients(rqstp, fp, > + stp, open->op_share_access, true)) > + status = nfserr_jukebox; > goto out; > } > > @@ -5572,6 +5778,47 @@ static void nfsd4_ssc_expire_umount(struct nfsd_net *nn) > } > #endif > > +static > +bool nfs4_anylock_conflict(struct nfs4_client *clp) This function assumes the caller holds cl_lock. That bears mentioning here in a comment. Convention suggests adding "_locked" to the function name too, just like renew_client_locked() above. 
Also, nit: kernel style is either: static bool nfs4_anylock_conflict( or static bool nfs4_anylock_conflict( > +{ > + int i; > + struct nfs4_stateowner *so, *tmp; > + struct nfs4_lockowner *lo; > + struct nfs4_ol_stateid *stp; > + struct nfs4_file *nf; > + struct inode *ino; > + struct file_lock_context *ctx; > + struct file_lock *fl; > + > + for (i = 0; i < OWNER_HASH_SIZE; i++) { > + /* scan each lock owner */ > + list_for_each_entry_safe(so, tmp, &clp->cl_ownerstr_hashtbl[i], > + so_strhash) { > + if (so->so_is_open_owner) > + continue; > + > + /* scan lock states of this lock owner */ > + lo = lockowner(so); > + list_for_each_entry(stp, &lo->lo_owner.so_stateids, > + st_perstateowner) { > + nf = stp->st_stid.sc_file; > + ino = nf->fi_inode; > + ctx = ino->i_flctx; > + if (!ctx) > + continue; > + /* check each lock belongs to this lock state */ > + list_for_each_entry(fl, &ctx->flc_posix, fl_list) { > + if (fl->fl_owner != lo) > + continue; > + if (!list_empty(&fl->fl_blocked_requests)) > + return true; > + } > + } > + } > + } > + return false; > +} > + > static time64_t > nfs4_laundromat(struct nfsd_net *nn) > { > @@ -5587,7 +5834,9 @@ nfs4_laundromat(struct nfsd_net *nn) > }; > struct nfs4_cpntf_state *cps; > copy_stateid_t *cps_t; > + struct nfs4_stid *stid; > int i; > + int id; > > if (clients_still_reclaiming(nn)) { > lt.new_timeo = 0; > @@ -5608,8 +5857,41 @@ nfs4_laundromat(struct nfsd_net *nn) > spin_lock(&nn->client_lock); > list_for_each_safe(pos, next, &nn->client_lru) { > clp = list_entry(pos, struct nfs4_client, cl_lru); > - if (!state_expired(<, clp->cl_time)) > + spin_lock(&clp->cl_cs_lock); > + if (test_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags)) > + goto exp_client; > + if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) { > + if (ktime_get_boottime_seconds() >= clp->courtesy_client_expiry) > + goto exp_client; > + /* > + * after umount, v4.0 client is still around > + * waiting to be expired. Check again and if > + * it has no state then expire it. > + */ > + if (clp->cl_minorversion) { > + spin_unlock(&clp->cl_cs_lock); > + continue; > + } > + } > + if (!state_expired(<, clp->cl_time)) { Now that clients go from active -> COURTEOUS -> DESTROY, why is this check still necessary? If it truly is, a brief explanation/comment would help. > + spin_unlock(&clp->cl_cs_lock); > break; > + } > + id = 0; > + spin_lock(&clp->cl_lock); > + stid = idr_get_next(&clp->cl_stateids, &id); > + if (stid && !nfs4_anylock_conflict(clp)) { > + /* client still has states */ > + spin_unlock(&clp->cl_lock); > + clp->courtesy_client_expiry = > + ktime_get_boottime_seconds() + courtesy_client_expiry; > + set_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags); > + spin_unlock(&clp->cl_cs_lock); > + continue; > + } > + spin_unlock(&clp->cl_lock); > +exp_client: > + spin_unlock(&clp->cl_cs_lock); > if (mark_client_expired_locked(clp)) > continue; > list_add(&clp->cl_lru, &reaplist); > @@ -5689,9 +5971,6 @@ nfs4_laundromat(struct nfsd_net *nn) > return max_t(time64_t, lt.new_timeo, NFSD_LAUNDROMAT_MINTIMEOUT); > } > > -static struct workqueue_struct *laundry_wq; > -static void laundromat_main(struct work_struct *); > - If the new mod_delayed_work() call sites aren't necessary, then these static definitions can be left here. > static void > laundromat_main(struct work_struct *laundry) > { > @@ -6496,6 +6775,33 @@ nfs4_transform_lock_offset(struct file_lock *lock) > lock->fl_end = OFFSET_MAX; > } > > +/* > + * Return true if lock can be resolved by expiring > + * courtesy client else return false. 
> + */ Since this function is invoked from outside of nfs4state.c, please turn the above comment into a kerneldoc comment, eg: /** * nfsd4_fl_expire_lock - check if lock conflict can be resolved * @fl: pointer to file_lock with a potential conflict * * Return values: * %true: No conflict exists * %false: Lock conflict can't be resolved */ > +static bool > +nfsd4_fl_expire_lock(struct file_lock *fl) > +{ > + struct nfs4_lockowner *lo; > + struct nfs4_client *clp; > + struct nfsd_net *nn; > + > + if (!fl) > + return false; > + lo = (struct nfs4_lockowner *)fl->fl_owner; > + clp = lo->lo_owner.so_client; > + spin_lock(&clp->cl_cs_lock); > + if (!test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) { > + spin_unlock(&clp->cl_cs_lock); > + return false; > + } > + nn = net_generic(clp->net, nfsd_net_id); Why is "nn =" inside the cl_cs_lock critical section here? I don't think that lock protects clp->net. Also, if the mod_delayed_work() call isn't needed here, then @nn can be removed too. > + set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags); > + spin_unlock(&clp->cl_cs_lock); > + mod_delayed_work(laundry_wq, &nn->laundromat_work, 0); > + return true; > +} > + > static fl_owner_t > nfsd4_fl_get_owner(fl_owner_t owner) > { > @@ -6543,6 +6849,7 @@ static const struct lock_manager_operations nfsd_posix_mng_ops = { > .lm_notify = nfsd4_lm_notify, > .lm_get_owner = nfsd4_fl_get_owner, > .lm_put_owner = nfsd4_fl_put_owner, > + .lm_expire_lock = nfsd4_fl_expire_lock, This applies to 1/2... You might choose a less NFSD-specific name for the new lm_ method, such as lm_lock_conflict. I'm guessing only NFSD is going to deal with a conflict by /expiring/ something ... > }; > > static inline void > diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h > index e73bdbb1634a..7f52a79e0743 100644 > --- a/fs/nfsd/state.h > +++ b/fs/nfsd/state.h > @@ -345,6 +345,8 @@ struct nfs4_client { > #define NFSD4_CLIENT_UPCALL_LOCK (5) /* upcall serialization */ > #define NFSD4_CLIENT_CB_FLAG_MASK (1 << NFSD4_CLIENT_CB_UPDATE | \ > 1 << NFSD4_CLIENT_CB_KILL) > +#define NFSD4_COURTESY_CLIENT (6) /* be nice to expired client */ > +#define NFSD4_DESTROY_COURTESY_CLIENT (7) > unsigned long cl_flags; > const struct cred *cl_cb_cred; > struct rpc_clnt *cl_cb_client; > @@ -385,6 +387,12 @@ struct nfs4_client { > struct list_head async_copies; /* list of async copies */ > spinlock_t async_lock; /* lock for async copies */ > atomic_t cl_cb_inflight; /* Outstanding callbacks */ > + int courtesy_client_expiry; > + /* > + * used to synchronize access to NFSD4_COURTESY_CLIENT > + * and NFSD4_DESTROY_COURTESY_CLIENT for race conditions. > + */ > + spinlock_t cl_cs_lock; > }; > > /* struct nfs4_client_reset > -- > 2.9.5 > -- Chuck Lever ^ permalink raw reply [flat|nested] 14+ messages in thread
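Folding the review comments above together, nfsd4_fl_expire_lock() might end up looking roughly like the sketch below: a kerneldoc header, no @nn, and no mod_delayed_work() kick, leaving the marked client for the laundromat's regular pass. This is only an illustration of the suggested cleanup, not the actual v10 code, and the return-value wording is adjusted to describe what the quoted function actually does.

/**
 * nfsd4_fl_expire_lock - check if a lock conflict can be resolved
 * @fl: pointer to file_lock with a potential conflict
 *
 * Return values:
 *   %true: the conflicting lock is held by a courtesy client, which has
 *          now been marked for destruction by the laundromat
 *   %false: the conflict cannot be resolved by expiring a courtesy client
 */
static bool
nfsd4_fl_expire_lock(struct file_lock *fl)
{
        struct nfs4_lockowner *lo;
        struct nfs4_client *clp;

        if (!fl)
                return false;
        lo = (struct nfs4_lockowner *)fl->fl_owner;
        clp = lo->lo_owner.so_client;

        spin_lock(&clp->cl_cs_lock);
        if (!test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) {
                spin_unlock(&clp->cl_cs_lock);
                return false;
        }
        /* let the laundromat expire this client on its next run */
        set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags);
        spin_unlock(&clp->cl_cs_lock);
        return true;
}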
* Re: [PATCH RFC v9 2/2] nfsd: Initial implementation of NFSv4 Courteous Server 2022-01-10 23:17 ` Chuck Lever III @ 2022-01-11 1:03 ` dai.ngo 2022-01-11 15:49 ` Chuck Lever III 0 siblings, 1 reply; 14+ messages in thread From: dai.ngo @ 2022-01-11 1:03 UTC (permalink / raw) To: Chuck Lever III Cc: Bruce Fields, Jeff Layton, Al Viro, Linux NFS Mailing List, linux-fsdevel@vger.kernel.org Thank you Chuck for your review, please see reply below: On 1/10/22 3:17 PM, Chuck Lever III wrote: > Hi Dai- > > Still getting the feel of the new approach, but I have > made some comments inline... > > >> On Jan 10, 2022, at 1:50 PM, Dai Ngo <dai.ngo@oracle.com> wrote: >> >> Currently an NFSv4 client must maintain its lease by using the at least >> one of the state tokens or if nothing else, by issuing a RENEW (4.0), or >> a singleton SEQUENCE (4.1) at least once during each lease period. If the >> client fails to renew the lease, for any reason, the Linux server expunges >> the state tokens immediately upon detection of the "failure to renew the >> lease" condition and begins returning NFS4ERR_EXPIRED if the client should >> reconnect and attempt to use the (now) expired state. >> >> The default lease period for the Linux server is 90 seconds. The typical >> client cuts that in half and will issue a lease renewing operation every >> 45 seconds. The 90 second lease period is very short considering the >> potential for moderately long term network partitions. A network partition >> refers to any loss of network connectivity between the NFS client and the >> NFS server, regardless of its root cause. This includes NIC failures, NIC >> driver bugs, network misconfigurations & administrative errors, routers & >> switches crashing and/or having software updates applied, even down to >> cables being physically pulled. In most cases, these network failures are >> transient, although the duration is unknown. >> >> A server which does not immediately expunge the state on lease expiration >> is known as a Courteous Server. A Courteous Server continues to recognize >> previously generated state tokens as valid until conflict arises between >> the expired state and the requests from another client, or the server >> reboots. >> >> The initial implementation of the Courteous Server will do the following: >> >> . when the laundromat thread detects an expired client and if that client >> still has established states on the Linux server and there is no waiters >> for the client's locks then mark the client as a COURTESY_CLIENT and skip >> destroying the client and all its states, otherwise destroy the client as >> usual. >> >> . detects conflict of OPEN request with COURTESY_CLIENT, destroys the >> expired client and all its states, skips the delegation recall then allows >> the conflicting request to succeed. >> >> . detects conflict of LOCK/LOCKT, NLM LOCK and TEST, and local locks >> requests with COURTESY_CLIENT, destroys the expired client and all its >> states then allows the conflicting request to succeed. >> >> . detects conflict of LOCK/LOCKT, NLM LOCK and TEST, and local locks >> requests with COURTESY_CLIENT, destroys the expired client and all its >> states then allows the conflicting request to succeed. 
>> >> Signed-off-by: Dai Ngo <dai.ngo@oracle.com> >> --- >> fs/nfsd/nfs4state.c | 323 ++++++++++++++++++++++++++++++++++++++++++++++++++-- >> fs/nfsd/state.h | 8 ++ >> 2 files changed, 323 insertions(+), 8 deletions(-) >> >> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c >> index 3f4027a5de88..e7fa4da44835 100644 >> --- a/fs/nfsd/nfs4state.c >> +++ b/fs/nfsd/nfs4state.c >> @@ -125,6 +125,11 @@ static void free_session(struct nfsd4_session *); >> static const struct nfsd4_callback_ops nfsd4_cb_recall_ops; >> static const struct nfsd4_callback_ops nfsd4_cb_notify_lock_ops; >> >> +static struct workqueue_struct *laundry_wq; >> +static void laundromat_main(struct work_struct *); >> + >> +static const int courtesy_client_expiry = (24 * 60 * 60); /* in secs */ >> + >> static bool is_session_dead(struct nfsd4_session *ses) >> { >> return ses->se_flags & NFS4_SESSION_DEAD; >> @@ -155,8 +160,10 @@ static __be32 get_client_locked(struct nfs4_client *clp) >> return nfs_ok; >> } >> >> -/* must be called under the client_lock */ >> +/* must be called under the client_lock >> static inline void >> +*/ >> +void >> renew_client_locked(struct nfs4_client *clp) >> { >> struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id); >> @@ -172,7 +179,9 @@ renew_client_locked(struct nfs4_client *clp) >> >> list_move_tail(&clp->cl_lru, &nn->client_lru); >> clp->cl_time = ktime_get_boottime_seconds(); >> + clear_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags); >> } >> +EXPORT_SYMBOL_GPL(renew_client_locked); > I don't see renew_client_locked() being called from outside > fs/nfsd/nfs4state.c, and the patch doesn't add a global > declaration. > > Please leave this function as "static inline void". Fix in v10. I did it for debugging and forgot to remove it. Test robot also reported the problem. > > >> static void put_client_renew_locked(struct nfs4_client *clp) >> { >> @@ -1912,10 +1921,22 @@ find_in_sessionid_hashtbl(struct nfs4_sessionid *sessionid, struct net *net, >> { >> struct nfsd4_session *session; >> __be32 status = nfserr_badsession; >> + struct nfs4_client *clp; >> >> session = __find_in_sessionid_hashtbl(sessionid, net); >> if (!session) >> goto out; >> + clp = session->se_client; >> + if (clp) { >> + spin_lock(&clp->cl_cs_lock); >> + if (test_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags)) { >> + spin_unlock(&clp->cl_cs_lock); >> + session = NULL; >> + goto out; >> + } >> + clear_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags); >> + spin_unlock(&clp->cl_cs_lock); >> + } >> status = nfsd4_get_session_locked(session); >> if (status) >> session = NULL; >> @@ -1992,6 +2013,7 @@ static struct nfs4_client *alloc_client(struct xdr_netobj name) >> INIT_LIST_HEAD(&clp->async_copies); >> spin_lock_init(&clp->async_lock); >> spin_lock_init(&clp->cl_lock); >> + spin_lock_init(&clp->cl_cs_lock); >> rpc_init_wait_queue(&clp->cl_cb_waitq, "Backchannel slot table"); >> return clp; >> err_no_hashtbl: >> @@ -2389,6 +2411,10 @@ static int client_info_show(struct seq_file *m, void *v) >> seq_puts(m, "status: confirmed\n"); >> else >> seq_puts(m, "status: unconfirmed\n"); >> + seq_printf(m, "courtesy client: %s\n", >> + test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags) ? 
"yes" : "no"); >> + seq_printf(m, "seconds from last renew: %lld\n", >> + ktime_get_boottime_seconds() - clp->cl_time); >> seq_printf(m, "name: "); >> seq_quote_mem(m, clp->cl_name.data, clp->cl_name.len); >> seq_printf(m, "\nminor version: %d\n", clp->cl_minorversion); >> @@ -2809,8 +2835,17 @@ find_clp_in_name_tree(struct xdr_netobj *name, struct rb_root *root) >> node = node->rb_left; >> else if (cmp < 0) >> node = node->rb_right; >> - else >> - return clp; >> + else { >> + spin_lock(&clp->cl_cs_lock); >> + if (!test_bit(NFSD4_DESTROY_COURTESY_CLIENT, >> + &clp->cl_flags)) { >> + clear_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags); >> + spin_unlock(&clp->cl_cs_lock); >> + return clp; >> + } >> + spin_unlock(&clp->cl_cs_lock); >> + return NULL; >> + } >> } >> return NULL; >> } >> @@ -2856,6 +2891,14 @@ find_client_in_id_table(struct list_head *tbl, clientid_t *clid, bool sessions) >> if (same_clid(&clp->cl_clientid, clid)) { >> if ((bool)clp->cl_minorversion != sessions) >> return NULL; >> + spin_lock(&clp->cl_cs_lock); >> + if (test_bit(NFSD4_DESTROY_COURTESY_CLIENT, >> + &clp->cl_flags)) { >> + spin_unlock(&clp->cl_cs_lock); >> + continue; >> + } >> + clear_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags); >> + spin_unlock(&clp->cl_cs_lock); > I'm wondering about the transition from COURTESY to active. > Does that need to be synchronous with the client tracking > database? Currently when the client transits from active to COURTESY, we do not remove the client record from the tracking database so on the reverse we do not need to add it back. I think this is something you and Bruce have been discussing on whether when we should remove and add the client record from the database when the client transits from active to COURTESY and vice versa. With this patch we now expire the courtesy clients asynchronously in the background so the overhead/delay from removing the record from the database does not have any impact on resolving conflicts. >> renew_client_locked(clp); >> return clp; >> } >> @@ -4662,6 +4705,36 @@ static void nfsd_break_one_deleg(struct nfs4_delegation *dp) >> nfsd4_run_cb(&dp->dl_recall); >> } >> >> +/* >> + * This function is called when a file is opened and there is a >> + * delegation conflict with another client. If the other client >> + * is a courtesy client then kick start the laundromat to destroy >> + * it. >> + */ >> +static bool >> +nfsd_check_courtesy_client(struct nfs4_delegation *dp) >> +{ >> + struct svc_rqst *rqst; >> + struct nfs4_client *clp = dp->dl_recall.cb_clp; >> + struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id); >> + >> + if (!i_am_nfsd()) >> + goto out; >> + rqst = kthread_data(current); >> + if (rqst->rq_prog != NFS_PROGRAM || rqst->rq_vers < 4) >> + return false; >> +out: >> + spin_lock(&clp->cl_cs_lock); >> + if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) { >> + set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags); >> + spin_unlock(&clp->cl_cs_lock); >> + mod_delayed_work(laundry_wq, &nn->laundromat_work, 0); > I'm not sure what is the purpose of the mod_delayed_work() > here and below. What's the harm in leaving a DESTROYED > nfs4_client around until the laundromat runs again? Won't > it run every "grace period" seconds anyway? I think this is a good idea. With the new approach of destroying courtesy clients asynchronously in the background, I also don't see a need to kick start the laundromat to run immediately. I will make this change in v10 and make sure it works as expected. 
> > I didn't think we were depending on the laundromat to > resolve edge case races, so if a call to a scheduler > function isn't totally necessary in this code, I prefer > that it be left out. > > >> + return true; >> + } >> + spin_unlock(&clp->cl_cs_lock); >> + return false; >> +} >> + >> /* Called from break_lease() with i_lock held. */ >> static bool >> nfsd_break_deleg_cb(struct file_lock *fl) >> @@ -4670,6 +4743,8 @@ nfsd_break_deleg_cb(struct file_lock *fl) >> struct nfs4_delegation *dp = (struct nfs4_delegation *)fl->fl_owner; >> struct nfs4_file *fp = dp->dl_stid.sc_file; >> >> + if (nfsd_check_courtesy_client(dp)) >> + return false; >> trace_nfsd_cb_recall(&dp->dl_stid); >> >> /* >> @@ -4912,7 +4987,128 @@ nfsd4_truncate(struct svc_rqst *rqstp, struct svc_fh *fh, >> return nfsd_setattr(rqstp, fh, &iattr, 0, (time64_t)0); >> } >> >> -static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp, >> +static bool >> +__nfs4_check_access_deny_bmap(struct nfs4_ol_stateid *stp, u32 access, >> + bool share_access) >> +{ >> + if (share_access) { >> + if (!stp->st_deny_bmap) >> + return false; >> + >> + if ((stp->st_deny_bmap & (1 << NFS4_SHARE_DENY_BOTH)) || >> + (access & NFS4_SHARE_ACCESS_READ && >> + stp->st_deny_bmap & (1 << NFS4_SHARE_DENY_READ)) || >> + (access & NFS4_SHARE_ACCESS_WRITE && >> + stp->st_deny_bmap & (1 << NFS4_SHARE_DENY_WRITE))) { >> + return true; >> + } >> + return false; >> + } >> + if ((access & NFS4_SHARE_DENY_BOTH) || >> + (access & NFS4_SHARE_DENY_READ && >> + stp->st_access_bmap & (1 << NFS4_SHARE_ACCESS_READ)) || >> + (access & NFS4_SHARE_DENY_WRITE && >> + stp->st_access_bmap & (1 << NFS4_SHARE_ACCESS_WRITE))) { >> + return true; >> + } >> + return false; >> +} >> + >> +/* >> + * Check all files belong to the specified client to determine if there is >> + * any conflict with the specified access_mode/deny_mode of the file 'fp. >> + * >> + * If share_access is true then 'access' is the access mode. Check if >> + * this access mode conflicts with current deny mode of the file. >> + * >> + * If share_access is false then 'access' the deny mode. Check if >> + * this deny mode conflicts with current access mode of the file. >> + */ >> +static bool >> +nfs4_check_access_deny_bmap(struct nfs4_client *clp, struct nfs4_file *fp, >> + struct nfs4_ol_stateid *st, u32 access, bool share_access) >> +{ >> + int i; >> + struct nfs4_openowner *oo; >> + struct nfs4_stateowner *so, *tmp; >> + struct nfs4_ol_stateid *stp, *stmp; >> + >> + spin_lock(&clp->cl_lock); >> + for (i = 0; i < OWNER_HASH_SIZE; i++) { >> + list_for_each_entry_safe(so, tmp, &clp->cl_ownerstr_hashtbl[i], >> + so_strhash) { >> + if (!so->so_is_open_owner) >> + continue; >> + oo = openowner(so); >> + list_for_each_entry_safe(stp, stmp, >> + &oo->oo_owner.so_stateids, st_perstateowner) { >> + if (stp == st || stp->st_stid.sc_file != fp) >> + continue; >> + if (__nfs4_check_access_deny_bmap(stp, access, >> + share_access)) { >> + spin_unlock(&clp->cl_lock); >> + return true; >> + } >> + } >> + } >> + } >> + spin_unlock(&clp->cl_lock); >> + return false; >> +} >> + >> +/* >> + * This function is called to check whether nfserr_share_denied should >> + * be returning to client. >> + * >> + * access: is op_share_access if share_access is true. >> + * Check if access mode, op_share_access, would conflict with >> + * the current deny mode of the file 'fp'. >> + * access: is op_share_deny if share_access is true. 
>> + * Check if the deny mode, op_share_deny, would conflict with >> + * current access of the file 'fp'. >> + * stp: skip checking this entry. >> + * >> + * Function returns: >> + * true - access/deny mode conflict with courtesy client(s). >> + * Caller to return nfserr_jukebox while client(s) being expired. >> + * false - access/deny mode conflict with non-courtesy client. >> + * Caller to return nfserr_share_denied to client. >> + */ >> +static bool >> +nfs4_conflict_courtesy_clients(struct svc_rqst *rqstp, struct nfs4_file *fp, >> + struct nfs4_ol_stateid *stp, u32 access, bool share_access) >> +{ >> + struct nfs4_client *cl; >> + bool conflict = false; >> + int async_cnt = 0; >> + struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id); >> + >> + spin_lock(&nn->client_lock); >> + list_for_each_entry(cl, &nn->client_lru, cl_lru) { >> + if (!nfs4_check_access_deny_bmap(cl, fp, stp, access, share_access)) >> + continue; >> + spin_lock(&cl->cl_cs_lock); >> + if (test_bit(NFSD4_COURTESY_CLIENT, &cl->cl_flags)) { >> + set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &cl->cl_flags); >> + async_cnt++; > You can get rid of async_cnt. Just set conflict = true > after unlocking cl_cs_lock. And again, maybe that > mod_delayed_work() call site isn't necessary. fix in v10. > > >> + spin_unlock(&cl->cl_cs_lock); >> + continue; >> + } >> + /* conflict with non-courtesy client */ >> + spin_unlock(&cl->cl_cs_lock); >> + conflict = false; >> + break; >> + } >> + spin_unlock(&nn->client_lock); >> + if (async_cnt) { >> + mod_delayed_work(laundry_wq, &nn->laundromat_work, 0); >> + conflict = true; >> + } >> + return conflict; >> +} >> + >> +static __be32 >> +nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp, >> struct svc_fh *cur_fh, struct nfs4_ol_stateid *stp, >> struct nfsd4_open *open) >> { >> @@ -4931,6 +5127,11 @@ static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp, >> status = nfs4_file_check_deny(fp, open->op_share_deny); >> if (status != nfs_ok) { >> spin_unlock(&fp->fi_lock); >> + if (status != nfserr_share_denied) >> + goto out; >> + if (nfs4_conflict_courtesy_clients(rqstp, fp, >> + stp, open->op_share_deny, false)) >> + status = nfserr_jukebox; >> goto out; >> } >> >> @@ -4938,6 +5139,11 @@ static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp, >> status = nfs4_file_get_access(fp, open->op_share_access); >> if (status != nfs_ok) { >> spin_unlock(&fp->fi_lock); >> + if (status != nfserr_share_denied) >> + goto out; >> + if (nfs4_conflict_courtesy_clients(rqstp, fp, >> + stp, open->op_share_access, true)) >> + status = nfserr_jukebox; >> goto out; >> } >> >> @@ -5572,6 +5778,47 @@ static void nfsd4_ssc_expire_umount(struct nfsd_net *nn) >> } >> #endif >> >> +static >> +bool nfs4_anylock_conflict(struct nfs4_client *clp) > This function assumes the caller holds cl_lock. That bears > mentioning here in a comment. Convention suggests adding > "_locked" to the function name too, just like > renew_client_locked() above. fix in v10. > > Also, nit: kernel style is either: > > static bool > nfs4_anylock_conflict( > > or > > static bool nfs4_anylock_conflict( fix in v10. 
> > >> +{ >> + int i; >> + struct nfs4_stateowner *so, *tmp; >> + struct nfs4_lockowner *lo; >> + struct nfs4_ol_stateid *stp; >> + struct nfs4_file *nf; >> + struct inode *ino; >> + struct file_lock_context *ctx; >> + struct file_lock *fl; >> + >> + for (i = 0; i < OWNER_HASH_SIZE; i++) { >> + /* scan each lock owner */ >> + list_for_each_entry_safe(so, tmp, &clp->cl_ownerstr_hashtbl[i], >> + so_strhash) { >> + if (so->so_is_open_owner) >> + continue; >> + >> + /* scan lock states of this lock owner */ >> + lo = lockowner(so); >> + list_for_each_entry(stp, &lo->lo_owner.so_stateids, >> + st_perstateowner) { >> + nf = stp->st_stid.sc_file; >> + ino = nf->fi_inode; >> + ctx = ino->i_flctx; >> + if (!ctx) >> + continue; >> + /* check each lock belongs to this lock state */ >> + list_for_each_entry(fl, &ctx->flc_posix, fl_list) { >> + if (fl->fl_owner != lo) >> + continue; >> + if (!list_empty(&fl->fl_blocked_requests)) >> + return true; >> + } >> + } >> + } >> + } >> + return false; >> +} >> + >> static time64_t >> nfs4_laundromat(struct nfsd_net *nn) >> { >> @@ -5587,7 +5834,9 @@ nfs4_laundromat(struct nfsd_net *nn) >> }; >> struct nfs4_cpntf_state *cps; >> copy_stateid_t *cps_t; >> + struct nfs4_stid *stid; >> int i; >> + int id; >> >> if (clients_still_reclaiming(nn)) { >> lt.new_timeo = 0; >> @@ -5608,8 +5857,41 @@ nfs4_laundromat(struct nfsd_net *nn) >> spin_lock(&nn->client_lock); >> list_for_each_safe(pos, next, &nn->client_lru) { >> clp = list_entry(pos, struct nfs4_client, cl_lru); >> - if (!state_expired(<, clp->cl_time)) >> + spin_lock(&clp->cl_cs_lock); >> + if (test_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags)) >> + goto exp_client; >> + if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) { >> + if (ktime_get_boottime_seconds() >= clp->courtesy_client_expiry) >> + goto exp_client; >> + /* >> + * after umount, v4.0 client is still around >> + * waiting to be expired. Check again and if >> + * it has no state then expire it. >> + */ >> + if (clp->cl_minorversion) { >> + spin_unlock(&clp->cl_cs_lock); >> + continue; >> + } >> + } >> + if (!state_expired(<, clp->cl_time)) { > Now that clients go from active -> COURTEOUS -> DESTROY, > why is this check still necessary? If it truly is, a brief > explanation/comment would help. We still need this check to (1) transits client from active to COURTESY state and (2) to stop the loop on client_lru since the oldest entry is at the beginning of the list. > >> + spin_unlock(&clp->cl_cs_lock); >> break; >> + } >> + id = 0; >> + spin_lock(&clp->cl_lock); >> + stid = idr_get_next(&clp->cl_stateids, &id); >> + if (stid && !nfs4_anylock_conflict(clp)) { >> + /* client still has states */ >> + spin_unlock(&clp->cl_lock); >> + clp->courtesy_client_expiry = >> + ktime_get_boottime_seconds() + courtesy_client_expiry; >> + set_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags); >> + spin_unlock(&clp->cl_cs_lock); >> + continue; >> + } >> + spin_unlock(&clp->cl_lock); >> +exp_client: >> + spin_unlock(&clp->cl_cs_lock); >> if (mark_client_expired_locked(clp)) >> continue; >> list_add(&clp->cl_lru, &reaplist); >> @@ -5689,9 +5971,6 @@ nfs4_laundromat(struct nfsd_net *nn) >> return max_t(time64_t, lt.new_timeo, NFSD_LAUNDROMAT_MINTIMEOUT); >> } >> >> -static struct workqueue_struct *laundry_wq; >> -static void laundromat_main(struct work_struct *); >> - > If the new mod_delayed_work() call sites aren't necessary, > then these static definitions can be left here. fix in v10. 
> > >> static void >> laundromat_main(struct work_struct *laundry) >> { >> @@ -6496,6 +6775,33 @@ nfs4_transform_lock_offset(struct file_lock *lock) >> lock->fl_end = OFFSET_MAX; >> } >> >> +/* >> + * Return true if lock can be resolved by expiring >> + * courtesy client else return false. >> + */ > Since this function is invoked from outside of nfs4state.c, > please turn the above comment into a kerneldoc comment, eg: > > /** > * nfsd4_fl_expire_lock - check if lock conflict can be resolved > * @fl: pointer to file_lock with a potential conflict > * > * Return values: > * %true: No conflict exists > * %false: Lock conflict can't be resolved fix in v10. > */ > > >> +static bool >> +nfsd4_fl_expire_lock(struct file_lock *fl) >> +{ >> + struct nfs4_lockowner *lo; >> + struct nfs4_client *clp; >> + struct nfsd_net *nn; >> + >> + if (!fl) >> + return false; >> + lo = (struct nfs4_lockowner *)fl->fl_owner; >> + clp = lo->lo_owner.so_client; >> + spin_lock(&clp->cl_cs_lock); >> + if (!test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) { >> + spin_unlock(&clp->cl_cs_lock); >> + return false; >> + } >> + nn = net_generic(clp->net, nfsd_net_id); > Why is "nn =" inside the cl_cs_lock critical section here? > I don't think that lock protects clp->net. Also, if the > mod_delayed_work() call isn't needed here, then @nn can > be removed too. will remove nn, no longer need mod_delayed_work. fix in v10. > > >> + set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags); >> + spin_unlock(&clp->cl_cs_lock); >> + mod_delayed_work(laundry_wq, &nn->laundromat_work, 0); >> + return true; >> +} >> + >> static fl_owner_t >> nfsd4_fl_get_owner(fl_owner_t owner) >> { >> @@ -6543,6 +6849,7 @@ static const struct lock_manager_operations nfsd_posix_mng_ops = { >> .lm_notify = nfsd4_lm_notify, >> .lm_get_owner = nfsd4_fl_get_owner, >> .lm_put_owner = nfsd4_fl_put_owner, >> + .lm_expire_lock = nfsd4_fl_expire_lock, > This applies to 1/2... You might choose a less NFSD-specific > name for the new lm_ method, such as lm_lock_conflict. I'm > guessing only NFSD is going to deal with a conflict by > /expiring/ something ... will change from lm_expire_lock to lm_lock_conflict, fix in v10. -Dai > > >> }; >> >> static inline void >> diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h >> index e73bdbb1634a..7f52a79e0743 100644 >> --- a/fs/nfsd/state.h >> +++ b/fs/nfsd/state.h >> @@ -345,6 +345,8 @@ struct nfs4_client { >> #define NFSD4_CLIENT_UPCALL_LOCK (5) /* upcall serialization */ >> #define NFSD4_CLIENT_CB_FLAG_MASK (1 << NFSD4_CLIENT_CB_UPDATE | \ >> 1 << NFSD4_CLIENT_CB_KILL) >> +#define NFSD4_COURTESY_CLIENT (6) /* be nice to expired client */ >> +#define NFSD4_DESTROY_COURTESY_CLIENT (7) >> unsigned long cl_flags; >> const struct cred *cl_cb_cred; >> struct rpc_clnt *cl_cb_client; >> @@ -385,6 +387,12 @@ struct nfs4_client { >> struct list_head async_copies; /* list of async copies */ >> spinlock_t async_lock; /* lock for async copies */ >> atomic_t cl_cb_inflight; /* Outstanding callbacks */ >> + int courtesy_client_expiry; >> + /* >> + * used to synchronize access to NFSD4_COURTESY_CLIENT >> + * and NFSD4_DESTROY_COURTESY_CLIENT for race conditions. >> + */ >> + spinlock_t cl_cs_lock; >> }; >> >> /* struct nfs4_client_reset >> -- >> 2.9.5 >> > -- > Chuck Lever > > > ^ permalink raw reply [flat|nested] 14+ messages in thread
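For reference, dropping async_cnt and the laundromat kick as agreed above would reduce the share-reservation conflict scan to something like the following sketch; the surrounding logic is unchanged from the quoted patch.

static bool
nfs4_conflict_courtesy_clients(struct svc_rqst *rqstp, struct nfs4_file *fp,
                struct nfs4_ol_stateid *stp, u32 access, bool share_access)
{
        struct nfs4_client *cl;
        bool conflict = false;
        struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id);

        spin_lock(&nn->client_lock);
        list_for_each_entry(cl, &nn->client_lru, cl_lru) {
                if (!nfs4_check_access_deny_bmap(cl, fp, stp, access,
                                                 share_access))
                        continue;
                spin_lock(&cl->cl_cs_lock);
                if (!test_bit(NFSD4_COURTESY_CLIENT, &cl->cl_flags)) {
                        /* conflict with an active (non-courtesy) client */
                        spin_unlock(&cl->cl_cs_lock);
                        conflict = false;
                        break;
                }
                set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &cl->cl_flags);
                spin_unlock(&cl->cl_cs_lock);
                conflict = true;        /* laundromat will expire it */
        }
        spin_unlock(&nn->client_lock);
        return conflict;
}

Note one consequence of this structure: a conflict with an active client always wins and the caller returns nfserr_share_denied, even if courtesy clients were already marked for destruction earlier in the scan (marking them is harmless, since they were going to be expired anyway).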
* Re: [PATCH RFC v9 2/2] nfsd: Initial implementation of NFSv4 Courteous Server 2022-01-11 1:03 ` dai.ngo @ 2022-01-11 15:49 ` Chuck Lever III 2022-01-12 18:53 ` Bruce Fields 0 siblings, 1 reply; 14+ messages in thread From: Chuck Lever III @ 2022-01-11 15:49 UTC (permalink / raw) To: Dai Ngo Cc: Bruce Fields, Jeff Layton, Al Viro, Linux NFS Mailing List, linux-fsdevel@vger.kernel.org > On Jan 10, 2022, at 8:03 PM, Dai Ngo <dai.ngo@oracle.com> wrote: > > Thank you Chuck for your review, please see reply below: > > On 1/10/22 3:17 PM, Chuck Lever III wrote: >> Hi Dai- >> >> Still getting the feel of the new approach, but I have >> made some comments inline... >> >> >>> On Jan 10, 2022, at 1:50 PM, Dai Ngo <dai.ngo@oracle.com> wrote: >>> >>> seq_printf(m, "\nminor version: %d\n", clp->cl_minorversion); >>> @@ -2809,8 +2835,17 @@ find_clp_in_name_tree(struct xdr_netobj *name, struct rb_root *root) >>> node = node->rb_left; >>> else if (cmp < 0) >>> node = node->rb_right; >>> - else >>> - return clp; >>> + else { >>> + spin_lock(&clp->cl_cs_lock); >>> + if (!test_bit(NFSD4_DESTROY_COURTESY_CLIENT, >>> + &clp->cl_flags)) { >>> + clear_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags); >>> + spin_unlock(&clp->cl_cs_lock); >>> + return clp; >>> + } >>> + spin_unlock(&clp->cl_cs_lock); >>> + return NULL; >>> + } >>> } >>> return NULL; >>> } >>> @@ -2856,6 +2891,14 @@ find_client_in_id_table(struct list_head *tbl, clientid_t *clid, bool sessions) >>> if (same_clid(&clp->cl_clientid, clid)) { >>> if ((bool)clp->cl_minorversion != sessions) >>> return NULL; >>> + spin_lock(&clp->cl_cs_lock); >>> + if (test_bit(NFSD4_DESTROY_COURTESY_CLIENT, >>> + &clp->cl_flags)) { >>> + spin_unlock(&clp->cl_cs_lock); >>> + continue; >>> + } >>> + clear_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags); >>> + spin_unlock(&clp->cl_cs_lock); >> I'm wondering about the transition from COURTESY to active. >> Does that need to be synchronous with the client tracking >> database? > > Currently when the client transits from active to COURTESY, > we do not remove the client record from the tracking database > so on the reverse we do not need to add it back. > > I think this is something you and Bruce have been discussing > on whether when we should remove and add the client record from > the database when the client transits from active to COURTESY > and vice versa. With this patch we now expire the courtesy clients > asynchronously in the background so the overhead/delay from > removing the record from the database does not have any impact > on resolving conflicts. As I recall, our idea was to record the client as expired when it transitions from active to COURTEOUS so that if the server happens to reboot, it doesn't allow a courteous client to reclaim locks the server may have already given to another active client. So I think the server needs to do an nfsdtrack upcall when transitioning from active -> COURTEOUS to prevent that edge case. That would happen only in the laundromat, right? So when a COURTEOUS client comes back to the server, the server will need to persistently record the transition from COURTEOUS to active. -- Chuck Lever ^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH RFC v9 2/2] nfsd: Initial implementation of NFSv4 Courteous Server 2022-01-11 15:49 ` Chuck Lever III @ 2022-01-12 18:53 ` Bruce Fields 2022-01-12 18:56 ` dai.ngo 0 siblings, 1 reply; 14+ messages in thread From: Bruce Fields @ 2022-01-12 18:53 UTC (permalink / raw) To: Chuck Lever III Cc: Dai Ngo, Jeff Layton, Al Viro, Linux NFS Mailing List, linux-fsdevel@vger.kernel.org On Tue, Jan 11, 2022 at 03:49:19PM +0000, Chuck Lever III wrote: > > On Jan 10, 2022, at 8:03 PM, Dai Ngo <dai.ngo@oracle.com> wrote: > > I think this is something you and Bruce have been discussing > > on whether when we should remove and add the client record from > > the database when the client transits from active to COURTESY > > and vice versa. With this patch we now expire the courtesy clients > > asynchronously in the background so the overhead/delay from > > removing the record from the database does not have any impact > > on resolving conflicts. > > As I recall, our idea was to record the client as expired when > it transitions from active to COURTEOUS so that if the server > happens to reboot, it doesn't allow a courteous client to > reclaim locks the server may have already given to another > active client. > > So I think the server needs to do an nfsdtrack upcall when > transitioning from active -> COURTEOUS to prevent that edge > case. That would happen only in the laundromat, right? > > So when a COURTEOUS client comes back to the server, the server > will need to persistently record the transition from COURTEOUS > to active. Yep. The bad case would be: - client A is marked DESTROY_COURTESY, client B is given A's lock. - server goes down before laundromat thread removes the DESTROY_COURTESY client. - client A's network comes back up. - server comes back up and starts grace period. At this point, both A and B believe they have the lock. Also both still have nfsdcltrack records, so the server can't tell which is in the right. We can't start granting A's locks to B until we've recorded in stable storage that A has expired. What we'd like to do: - When a client transitions from active to courteous, it needs to do nfsdcltrack upcall to expire it. - We mark client as COURTESY only after that upcall has returned. - When the client comes back, we do an nfsdcltrack upcall to mark it as active again. We don't remove the COURTESY mark until that's returned. --b. ^ permalink raw reply [flat|nested] 14+ messages in thread
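As a concrete reading of the ordering Bruce lays out, the two transitions might be sequenced as in the sketch below. The existing nfsd4_client_record_remove()/nfsd4_client_record_create() helpers are used purely for illustration of the cltrack upcall; both can sleep, so neither can be issued while holding nn->client_lock, and where exactly these calls can live relative to the laundromat's locking is left open here.

        /* active -> COURTESY: persist the expiry before the flag is visible */
        nfsd4_client_record_remove(clp);        /* cltrack upcall, may sleep */
        spin_lock(&clp->cl_cs_lock);
        set_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags);
        spin_unlock(&clp->cl_cs_lock);

        /* COURTESY -> active: persist the revival before the flag is cleared */
        nfsd4_client_record_create(clp);        /* cltrack upcall, may sleep */
        spin_lock(&clp->cl_cs_lock);
        clear_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags);
        spin_unlock(&clp->cl_cs_lock);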
* Re: [PATCH RFC v9 2/2] nfsd: Initial implementation of NFSv4 Courteous Server 2022-01-12 18:53 ` Bruce Fields @ 2022-01-12 18:56 ` dai.ngo 0 siblings, 0 replies; 14+ messages in thread From: dai.ngo @ 2022-01-12 18:56 UTC (permalink / raw) To: Bruce Fields, Chuck Lever III Cc: Jeff Layton, Al Viro, Linux NFS Mailing List, linux-fsdevel@vger.kernel.org On 1/12/22 10:53 AM, Bruce Fields wrote: > On Tue, Jan 11, 2022 at 03:49:19PM +0000, Chuck Lever III wrote: >>> On Jan 10, 2022, at 8:03 PM, Dai Ngo <dai.ngo@oracle.com> wrote: >>> I think this is something you and Bruce have been discussing >>> on whether when we should remove and add the client record from >>> the database when the client transits from active to COURTESY >>> and vice versa. With this patch we now expire the courtesy clients >>> asynchronously in the background so the overhead/delay from >>> removing the record from the database does not have any impact >>> on resolving conflicts. >> As I recall, our idea was to record the client as expired when >> it transitions from active to COURTEOUS so that if the server >> happens to reboot, it doesn't allow a courteous client to >> reclaim locks the server may have already given to another >> active client. >> >> So I think the server needs to do an nfsdtrack upcall when >> transitioning from active -> COURTEOUS to prevent that edge >> case. That would happen only in the laundromat, right? >> >> So when a COURTEOUS client comes back to the server, the server >> will need to persistently record the transition from COURTEOUS >> to active. > Yep. The bad case would be: > > - client A is marked DESTROY_COURTESY, client B is given A's > lock. > - server goes down before laundromat thread removes the > DESTROY_COURTESY client. > - client A's network comes back up. > - server comes back up and starts grace period. > > At this point, both A and B believe they have the lock. Also both still > have nfsdcltrack records, so the server can't tell which is in the > right. > > We can't start granting A's locks to B until we've recorded in stable > storage that A has expired. > > What we'd like to do: > > - When a client transitions from active to courteous, it needs > to do nfsdcltrack upcall to expire it. > - We mark client as COURTESY only after that upcall has > returned. > - When the client comes back, we do an nfsdcltrack upcall to > mark it as active again. We don't remove the COURTESY mark > until that's returned. Got it Bruce and Chuck, I will add this in v10. Thanks, -Dai ^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH RFC v9 2/2] nfsd: Initial implementation of NFSv4 Courteous Server 2022-01-10 18:50 ` [PATCH RFC v9 2/2] " Dai Ngo 2022-01-10 23:17 ` Chuck Lever III @ 2022-01-12 19:40 ` J. Bruce Fields 2022-01-13 8:51 ` dai.ngo 2022-01-12 19:52 ` J. Bruce Fields 2 siblings, 1 reply; 14+ messages in thread From: J. Bruce Fields @ 2022-01-12 19:40 UTC (permalink / raw) To: Dai Ngo; +Cc: chuck.lever, jlayton, viro, linux-nfs, linux-fsdevel On Mon, Jan 10, 2022 at 10:50:53AM -0800, Dai Ngo wrote: > static time64_t > nfs4_laundromat(struct nfsd_net *nn) > { > @@ -5587,7 +5834,9 @@ nfs4_laundromat(struct nfsd_net *nn) > }; > struct nfs4_cpntf_state *cps; > copy_stateid_t *cps_t; > + struct nfs4_stid *stid; > int i; > + int id; > > if (clients_still_reclaiming(nn)) { > lt.new_timeo = 0; > @@ -5608,8 +5857,41 @@ nfs4_laundromat(struct nfsd_net *nn) > spin_lock(&nn->client_lock); > list_for_each_safe(pos, next, &nn->client_lru) { > clp = list_entry(pos, struct nfs4_client, cl_lru); > - if (!state_expired(<, clp->cl_time)) > + spin_lock(&clp->cl_cs_lock); > + if (test_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags)) > + goto exp_client; > + if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) { > + if (ktime_get_boottime_seconds() >= clp->courtesy_client_expiry) > + goto exp_client; > + /* > + * after umount, v4.0 client is still around > + * waiting to be expired. Check again and if > + * it has no state then expire it. > + */ > + if (clp->cl_minorversion) { > + spin_unlock(&clp->cl_cs_lock); > + continue; > + } I'm not following that comment or that logic. > + } > + if (!state_expired(<, clp->cl_time)) { > + spin_unlock(&clp->cl_cs_lock); > break; > + } > + id = 0; > + spin_lock(&clp->cl_lock); > + stid = idr_get_next(&clp->cl_stateids, &id); > + if (stid && !nfs4_anylock_conflict(clp)) { > + /* client still has states */ I'm a little confused by that comment. I think what you just checked is that the client has some state, *and* nobody is waiting for one of its locks. For me, that comment just conufses things. > + spin_unlock(&clp->cl_lock); Is nn->client_lock enough to guarantee that the condition you just checked still holds? (Honest question, I'm not sure.) > + clp->courtesy_client_expiry = > + ktime_get_boottime_seconds() + courtesy_client_expiry; > + set_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags); > + spin_unlock(&clp->cl_cs_lock); > + continue; > + } > + spin_unlock(&clp->cl_lock); > +exp_client: > + spin_unlock(&clp->cl_cs_lock); > if (mark_client_expired_locked(clp)) > continue; > list_add(&clp->cl_lru, &reaplist); In general this loop is more complicated than the rest of the logic in nfs4_laundromat(). I'd be looking for ways to simplify it and/or move some of it into a helper function. --b. > @@ -5689,9 +5971,6 @@ nfs4_laundromat(struct nfsd_net *nn) > return max_t(time64_t, lt.new_timeo, NFSD_LAUNDROMAT_MINTIMEOUT); > } ^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH RFC v9 2/2] nfsd: Initial implementation of NFSv4 Courteous Server 2022-01-12 19:40 ` J. Bruce Fields @ 2022-01-13 8:51 ` dai.ngo 2022-01-13 15:42 ` J. Bruce Fields 0 siblings, 1 reply; 14+ messages in thread From: dai.ngo @ 2022-01-13 8:51 UTC (permalink / raw) To: J. Bruce Fields; +Cc: chuck.lever, jlayton, viro, linux-nfs, linux-fsdevel On 1/12/22 11:40 AM, J. Bruce Fields wrote: > On Mon, Jan 10, 2022 at 10:50:53AM -0800, Dai Ngo wrote: >> static time64_t >> nfs4_laundromat(struct nfsd_net *nn) >> { >> @@ -5587,7 +5834,9 @@ nfs4_laundromat(struct nfsd_net *nn) >> }; >> struct nfs4_cpntf_state *cps; >> copy_stateid_t *cps_t; >> + struct nfs4_stid *stid; >> int i; >> + int id; >> >> if (clients_still_reclaiming(nn)) { >> lt.new_timeo = 0; >> @@ -5608,8 +5857,41 @@ nfs4_laundromat(struct nfsd_net *nn) >> spin_lock(&nn->client_lock); >> list_for_each_safe(pos, next, &nn->client_lru) { >> clp = list_entry(pos, struct nfs4_client, cl_lru); >> - if (!state_expired(<, clp->cl_time)) >> + spin_lock(&clp->cl_cs_lock); >> + if (test_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags)) >> + goto exp_client; >> + if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) { >> + if (ktime_get_boottime_seconds() >= clp->courtesy_client_expiry) >> + goto exp_client; >> + /* >> + * after umount, v4.0 client is still around >> + * waiting to be expired. Check again and if >> + * it has no state then expire it. >> + */ >> + if (clp->cl_minorversion) { >> + spin_unlock(&clp->cl_cs_lock); >> + continue; >> + } > I'm not following that comment or that logic. When unmounting an export v4.0 client closes all its state. These state are kept around on nn->close_lru to handle CLOSE replay. They remain on the queue even after the client state (clp->cl_time) expired and became courtesy client. Eventually these state are freed by the laundromat when the state expire. This is why we check v4.0 courtesy client again and if there is no state associated with it then we expire the client. >> + } >> + if (!state_expired(<, clp->cl_time)) { >> + spin_unlock(&clp->cl_cs_lock); >> break; >> + } >> + id = 0; >> + spin_lock(&clp->cl_lock); >> + stid = idr_get_next(&clp->cl_stateids, &id); >> + if (stid && !nfs4_anylock_conflict(clp)) { >> + /* client still has states */ > I'm a little confused by that comment. I think what you just checked is > that the client has some state, *and* nobody is waiting for one of its > locks. For me, that comment just conufses things. will remove. > >> + spin_unlock(&clp->cl_lock); > Is nn->client_lock enough to guarantee that the condition you just > checked still holds? (Honest question, I'm not sure.) nfs4_anylock_conflict_locked scans cl_ownerstr_hashtbl which is protected by the cl_lock. > >> + clp->courtesy_client_expiry = >> + ktime_get_boottime_seconds() + courtesy_client_expiry; >> + set_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags); >> + spin_unlock(&clp->cl_cs_lock); >> + continue; >> + } >> + spin_unlock(&clp->cl_lock); >> +exp_client: >> + spin_unlock(&clp->cl_cs_lock); >> if (mark_client_expired_locked(clp)) >> continue; >> list_add(&clp->cl_lru, &reaplist); > In general this loop is more complicated than the rest of the logic in > nfs4_laundromat(). I'd be looking for ways to simplify it and/or move some > of it into a helper function. I will move it to a function. -Dai ^ permalink raw reply [flat|nested] 14+ messages in thread
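One possible shape for that helper, condensed from the loop body quoted above. The helper name, the enum, and the return convention are invented here for illustration; the checks and locking mirror the current patch, and the caller (holding nn->client_lock) would switch on the result: EXPIRE goes on the reaplist, KEEP continues the scan, STOP breaks out because the rest of client_lru is too young.

enum nfsd4_courtesy_action {
        NFSD4_CLIENT_EXPIRE,    /* put client on the reaplist */
        NFSD4_CLIENT_KEEP,      /* leave it alone, keep scanning */
        NFSD4_CLIENT_STOP,      /* rest of the lru is too young, stop */
};

static enum nfsd4_courtesy_action
nfs4_laundromat_check_client(struct nfs4_client *clp, struct laundry_time *lt)
{
        struct nfs4_stid *stid;
        int id = 0;

        spin_lock(&clp->cl_cs_lock);
        if (test_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags))
                goto expire;
        if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags)) {
                if (ktime_get_boottime_seconds() >= clp->courtesy_client_expiry)
                        goto expire;
                /* v4.0 courtesy clients may still hold CLOSE-replay state;
                 * fall through and re-check whether any state remains. */
                if (clp->cl_minorversion)
                        goto keep;
        }
        if (!state_expired(lt, clp->cl_time)) {
                spin_unlock(&clp->cl_cs_lock);
                return NFSD4_CLIENT_STOP;
        }
        spin_lock(&clp->cl_lock);
        stid = idr_get_next(&clp->cl_stateids, &id);
        if (stid && !nfs4_anylock_conflict(clp)) {
                spin_unlock(&clp->cl_lock);
                clp->courtesy_client_expiry =
                        ktime_get_boottime_seconds() + courtesy_client_expiry;
                set_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags);
                goto keep;
        }
        spin_unlock(&clp->cl_lock);
expire:
        spin_unlock(&clp->cl_cs_lock);
        return NFSD4_CLIENT_EXPIRE;
keep:
        spin_unlock(&clp->cl_cs_lock);
        return NFSD4_CLIENT_KEEP;
}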
* Re: [PATCH RFC v9 2/2] nfsd: Initial implementation of NFSv4 Courteous Server 2022-01-13 8:51 ` dai.ngo @ 2022-01-13 15:42 ` J. Bruce Fields 2022-01-13 19:51 ` dai.ngo 0 siblings, 1 reply; 14+ messages in thread From: J. Bruce Fields @ 2022-01-13 15:42 UTC (permalink / raw) To: dai.ngo; +Cc: chuck.lever, jlayton, viro, linux-nfs, linux-fsdevel On Thu, Jan 13, 2022 at 12:51:57AM -0800, dai.ngo@oracle.com wrote: > > On 1/12/22 11:40 AM, J. Bruce Fields wrote: > >On Mon, Jan 10, 2022 at 10:50:53AM -0800, Dai Ngo wrote: > >>+ } > >>+ if (!state_expired(<, clp->cl_time)) { > >>+ spin_unlock(&clp->cl_cs_lock); > >> break; > >>+ } > >>+ id = 0; > >>+ spin_lock(&clp->cl_lock); > >>+ stid = idr_get_next(&clp->cl_stateids, &id); > >>+ if (stid && !nfs4_anylock_conflict(clp)) { > >>+ /* client still has states */ > >I'm a little confused by that comment. I think what you just checked is > >that the client has some state, *and* nobody is waiting for one of its > >locks. For me, that comment just conufses things. > > will remove. > > > > >>+ spin_unlock(&clp->cl_lock); > >Is nn->client_lock enough to guarantee that the condition you just > >checked still holds? (Honest question, I'm not sure.) > > nfs4_anylock_conflict_locked scans cl_ownerstr_hashtbl which is protected > by the cl_lock. That doesn't answer the question. Which, I confess, was muddled (I should have said "clp->cl_cs_lock", not "nn->client_lock".) Let me try it a different way. You just checked that the client has some state, and that nobody is waiting for one of its locks. After you drop the cl_lock, how do you know that both of those things are still true? --b. ^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH RFC v9 2/2] nfsd: Initial implementation of NFSv4 Courteous Server 2022-01-13 15:42 ` J. Bruce Fields @ 2022-01-13 19:51 ` dai.ngo 0 siblings, 0 replies; 14+ messages in thread From: dai.ngo @ 2022-01-13 19:51 UTC (permalink / raw) To: J. Bruce Fields; +Cc: chuck.lever, jlayton, viro, linux-nfs, linux-fsdevel On 1/13/22 7:42 AM, J. Bruce Fields wrote: > On Thu, Jan 13, 2022 at 12:51:57AM -0800, dai.ngo@oracle.com wrote: >> On 1/12/22 11:40 AM, J. Bruce Fields wrote: >>> On Mon, Jan 10, 2022 at 10:50:53AM -0800, Dai Ngo wrote: >>>> + } >>>> + if (!state_expired(<, clp->cl_time)) { >>>> + spin_unlock(&clp->cl_cs_lock); >>>> break; >>>> + } >>>> + id = 0; >>>> + spin_lock(&clp->cl_lock); >>>> + stid = idr_get_next(&clp->cl_stateids, &id); >>>> + if (stid && !nfs4_anylock_conflict(clp)) { >>>> + /* client still has states */ >>> I'm a little confused by that comment. I think what you just checked is >>> that the client has some state, *and* nobody is waiting for one of its >>> locks. For me, that comment just conufses things. >> will remove. >> >>>> + spin_unlock(&clp->cl_lock); >>> Is nn->client_lock enough to guarantee that the condition you just >>> checked still holds? (Honest question, I'm not sure.) >> nfs4_anylock_conflict_locked scans cl_ownerstr_hashtbl which is protected >> by the cl_lock. > That doesn't answer the question. Which, I confess, was muddled (I > should have said "clp->cl_cs_lock", not "nn->client_lock".) > > Let me try it a different way. You just checked that the client has > some state, and that nobody is waiting for one of its locks. > > After you drop the cl_lock, how do you know that both of those things > are still true? After we drop the lock, if the client now has no state then it just remains in memory until the courtesy client timeout expires then we get rid of it. For the race condition of lock conflict, we use the client->cl_cs_lock to synchronize the laundromat and and lm_lock_conflict/nfsd4_fl_lock_conflict. If the locking thread acquires the cl_cs_lock before the laundromat does then the thread will be blocked and laundromat detects there is blocker and expires the client. If the laundromat acquires the cl_cs_lock first then NFSD4_COURTESY_CLIENT is set and nfsd4_fl_lock_conflict detects this flag and sets the client to NFSD4_DESTROY_COURTESY_CLIENT. -Dai ^ permalink raw reply [flat|nested] 14+ messages in thread
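Condensed from the code quoted earlier, the handshake Dai describes comes down to two cl_cs_lock critical sections; this is illustration only (the stid check and courtesy expiry bookkeeping are omitted), with no new logic.

        /*
         * Laundromat side: a client may become a courtesy client only if,
         * while cl_cs_lock and cl_lock are both held, no other lock request
         * is already blocked on one of its locks.
         */
        spin_lock(&clp->cl_cs_lock);
        spin_lock(&clp->cl_lock);
        if (!nfs4_anylock_conflict(clp))
                set_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags);
        spin_unlock(&clp->cl_lock);
        spin_unlock(&clp->cl_cs_lock);

        /*
         * Conflict side (the lm_expire_lock / lm_lock_conflict callback):
         * if the client is already COURTESY, the conflict is resolved by
         * marking it for destruction; otherwise the request blocks in the
         * VFS and the next laundromat pass sees the blocked waiter via
         * nfs4_anylock_conflict() and expires the client instead of making
         * it a courtesy client.
         */
        spin_lock(&clp->cl_cs_lock);
        if (test_bit(NFSD4_COURTESY_CLIENT, &clp->cl_flags))
                set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &clp->cl_flags);
        spin_unlock(&clp->cl_cs_lock);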
* Re: [PATCH RFC v9 2/2] nfsd: Initial implementation of NFSv4 Courteous Server 2022-01-10 18:50 ` [PATCH RFC v9 2/2] " Dai Ngo 2022-01-10 23:17 ` Chuck Lever III 2022-01-12 19:40 ` J. Bruce Fields @ 2022-01-12 19:52 ` J. Bruce Fields 2 siblings, 0 replies; 14+ messages in thread From: J. Bruce Fields @ 2022-01-12 19:52 UTC (permalink / raw) To: Dai Ngo; +Cc: chuck.lever, jlayton, viro, linux-nfs, linux-fsdevel On Mon, Jan 10, 2022 at 10:50:53AM -0800, Dai Ngo wrote: > @@ -4912,7 +4987,128 @@ nfsd4_truncate(struct svc_rqst *rqstp, struct svc_fh *fh, > return nfsd_setattr(rqstp, fh, &iattr, 0, (time64_t)0); > } > > -static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp, > +static bool > +__nfs4_check_access_deny_bmap(struct nfs4_ol_stateid *stp, u32 access, > + bool share_access) > +{ > + if (share_access) { > + if (!stp->st_deny_bmap) > + return false; > + > + if ((stp->st_deny_bmap & (1 << NFS4_SHARE_DENY_BOTH)) || > + (access & NFS4_SHARE_ACCESS_READ && > + stp->st_deny_bmap & (1 << NFS4_SHARE_DENY_READ)) || > + (access & NFS4_SHARE_ACCESS_WRITE && > + stp->st_deny_bmap & (1 << NFS4_SHARE_DENY_WRITE))) { > + return true; > + } > + return false; > + } > + if ((access & NFS4_SHARE_DENY_BOTH) || > + (access & NFS4_SHARE_DENY_READ && > + stp->st_access_bmap & (1 << NFS4_SHARE_ACCESS_READ)) || > + (access & NFS4_SHARE_DENY_WRITE && > + stp->st_access_bmap & (1 << NFS4_SHARE_ACCESS_WRITE))) { > + return true; > + } > + return false; > +} > + > +/* > + * Check all files belong to the specified client to determine if there is > + * any conflict with the specified access_mode/deny_mode of the file 'fp. > + * > + * If share_access is true then 'access' is the access mode. Check if > + * this access mode conflicts with current deny mode of the file. > + * > + * If share_access is false then 'access' the deny mode. Check if > + * this deny mode conflicts with current access mode of the file. > + */ > +static bool > +nfs4_check_access_deny_bmap(struct nfs4_client *clp, struct nfs4_file *fp, > + struct nfs4_ol_stateid *st, u32 access, bool share_access) > +{ > + int i; > + struct nfs4_openowner *oo; > + struct nfs4_stateowner *so, *tmp; > + struct nfs4_ol_stateid *stp, *stmp; > + > + spin_lock(&clp->cl_lock); > + for (i = 0; i < OWNER_HASH_SIZE; i++) { > + list_for_each_entry_safe(so, tmp, &clp->cl_ownerstr_hashtbl[i], > + so_strhash) { > + if (!so->so_is_open_owner) > + continue; > + oo = openowner(so); > + list_for_each_entry_safe(stp, stmp, > + &oo->oo_owner.so_stateids, st_perstateowner) { > + if (stp == st || stp->st_stid.sc_file != fp) > + continue; > + if (__nfs4_check_access_deny_bmap(stp, access, > + share_access)) { > + spin_unlock(&clp->cl_lock); > + return true; > + } > + } > + } > + } > + spin_unlock(&clp->cl_lock); > + return false; > +} > + > +/* > + * This function is called to check whether nfserr_share_denied should > + * be returning to client. > + * > + * access: is op_share_access if share_access is true. > + * Check if access mode, op_share_access, would conflict with > + * the current deny mode of the file 'fp'. > + * access: is op_share_deny if share_access is true. > + * Check if the deny mode, op_share_deny, would conflict with > + * current access of the file 'fp'. > + * stp: skip checking this entry. > + * > + * Function returns: > + * true - access/deny mode conflict with courtesy client(s). > + * Caller to return nfserr_jukebox while client(s) being expired. > + * false - access/deny mode conflict with non-courtesy client. 
> + * Caller to return nfserr_share_denied to client. > + */ > +static bool > +nfs4_conflict_courtesy_clients(struct svc_rqst *rqstp, struct nfs4_file *fp, > + struct nfs4_ol_stateid *stp, u32 access, bool share_access) > +{ > + struct nfs4_client *cl; > + bool conflict = false; > + int async_cnt = 0; > + struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id); > + > + spin_lock(&nn->client_lock); > + list_for_each_entry(cl, &nn->client_lru, cl_lru) { This means we're manually searching through all the state of every client each time we find a share conflict. Well, maybe I'm OK with that. Share conflicts are not the normal case. (I'm not sure anyone actually uses them.) So I guess I don't care if that case is slow. It's kind of a lot of code, though, I wish there were a way to simplify. --b. > + if (!nfs4_check_access_deny_bmap(cl, fp, stp, access, share_access)) > + continue; > + spin_lock(&cl->cl_cs_lock); > + if (test_bit(NFSD4_COURTESY_CLIENT, &cl->cl_flags)) { > + set_bit(NFSD4_DESTROY_COURTESY_CLIENT, &cl->cl_flags); > + async_cnt++; > + spin_unlock(&cl->cl_cs_lock); > + continue; > + } > + /* conflict with non-courtesy client */ > + spin_unlock(&cl->cl_cs_lock); > + conflict = false; > + break; > + } > + spin_unlock(&nn->client_lock); > + if (async_cnt) { > + mod_delayed_work(laundry_wq, &nn->laundromat_work, 0); > + conflict = true; > + } > + return conflict; > +} > + > +static __be32 > +nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp, > struct svc_fh *cur_fh, struct nfs4_ol_stateid *stp, > struct nfsd4_open *open) > { > @@ -4931,6 +5127,11 @@ static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp, > status = nfs4_file_check_deny(fp, open->op_share_deny); > if (status != nfs_ok) { > spin_unlock(&fp->fi_lock); > + if (status != nfserr_share_denied) > + goto out; > + if (nfs4_conflict_courtesy_clients(rqstp, fp, > + stp, open->op_share_deny, false)) > + status = nfserr_jukebox; > goto out; > } > > @@ -4938,6 +5139,11 @@ static __be32 nfs4_get_vfs_file(struct svc_rqst *rqstp, struct nfs4_file *fp, > status = nfs4_file_get_access(fp, open->op_share_access); > if (status != nfs_ok) { > spin_unlock(&fp->fi_lock); > + if (status != nfserr_share_denied) > + goto out; > + if (nfs4_conflict_courtesy_clients(rqstp, fp, > + stp, open->op_share_access, true)) > + status = nfserr_jukebox; > goto out; > } > ^ permalink raw reply [flat|nested] 14+ messages in thread