* [PATCH 0/3 RFC] improve some nfsd_mutex locking
@ 2025-06-20 23:33 NeilBrown
2025-06-20 23:33 ` [PATCH 1/3] nfsd: provide proper locking for all write_ function NeilBrown
` (3 more replies)
0 siblings, 4 replies; 12+ messages in thread
From: NeilBrown @ 2025-06-20 23:33 UTC (permalink / raw)
To: Chuck Lever, Jeff Layton
Cc: linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey, Li Lingfeng
The first patch hopefully fixes a bug with locking as reported by Li
Lingfeng: some write_foo functions aren't locked properly.
The other two improve the locking code, particularly so that we don't
need a global mutex to change per-netns data.
I've revised the locking to use guard(mutex) for (almost) all places
that the per-netns mutex is used.  I think this is an improvement but
would like to know what others think.
I haven't changed _get/_put to _pin/_unpin as Chuck wondered about. I'm
not against that (though get/put are widely understood) but nor am I
particularly for it yet. Again, opinions are welcome.
NeilBrown
[PATCH 1/3] nfsd: provide proper locking for all write_ function
[PATCH 2/3] nfsd: use kref and new mutex for global config management
[PATCH 3/3] nfsd: split nfsd_mutex into one mutex per net-namespace.
^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH 1/3] nfsd: provide proper locking for all write_ function
  2025-06-20 23:33 [PATCH 0/3 RFC] improve some nfsd_mutex locking NeilBrown
@ 2025-06-20 23:33 ` NeilBrown
  2025-06-21  8:50   ` Li Lingfeng
  2025-06-20 23:33 ` [PATCH 2/3] nfsd: use kref and new mutex for global config management NeilBrown
  ` (2 subsequent siblings)
  3 siblings, 1 reply; 12+ messages in thread
From: NeilBrown @ 2025-06-20 23:33 UTC (permalink / raw)
  To: Chuck Lever, Jeff Layton
  Cc: linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey, Li Lingfeng

write_foo functions are called to handle IO to files in /proc/fs/nfsd/.
They can be called at any time and so generally need locking to ensure
they don't happen at an awkward time.

Many already take nfsd_mutex and check if nfsd_serv has been set.  This
ensures they only run when the server is fully configured.

write_filehandle() does *not* need locking.  It interacts with the
export table, which is set up when the netns is set up, so it is always
valid, and it has its own locking.  write_filehandle() is needed before
the nfs server is started, so checking nfsd_serv would be wrong.

The remaining files which do not have any locking are
write_v4_end_grace(), write_unlock_ip(), and write_unlock_fs().
None of these make sense when the nfs server is not running, and there
is evidence that write_v4_end_grace() can race with ->client_tracking_op
setup/shutdown and cause problems.

This patch adds locking to these three and ensures the "unlock"
functions abort if ->nfsd_serv is not set.  It uses
	guard(mutex)(&nfsd_mutex);
so there is no need to ensure we unlock on every path.

Reported-by: Li Lingfeng <lilingfeng3@huawei.com>
Signed-off-by: NeilBrown <neil@brown.name>
---
 fs/nfsd/nfsctl.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
index 3f3e9f6c4250..0e7e89dc730b 100644
--- a/fs/nfsd/nfsctl.c
+++ b/fs/nfsd/nfsctl.c
@@ -221,6 +221,12 @@ static ssize_t write_unlock_ip(struct file *file, char *buf, size_t size)
 	size_t salen = sizeof(address);
 	char *fo_path;
 	struct net *net = netns(file);
+	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
+
+	guard(mutex)(&nfsd_mutex);
+	if (!nn->nfsd_serv)
+		/* There cannot be any files to unlock */
+		return -EINVAL;
 
 	/* sanity check */
 	if (size == 0)
@@ -259,6 +265,12 @@ static ssize_t write_unlock_fs(struct file *file, char *buf, size_t size)
 	struct path path;
 	char *fo_path;
 	int error;
+	struct nfsd_net *nn = net_generic(netns(file), nfsd_net_id);
+
+	guard(mutex)(&nfsd_mutex);
+	if (!nn->nfsd_serv)
+		/* There cannot be any files to unlock */
+		return -EINVAL;
 
 	/* sanity check */
 	if (size == 0)
@@ -1053,6 +1065,7 @@ static ssize_t write_recoverydir(struct file *file, char *buf, size_t size)
 }
 #endif
 
+
 /*
  * write_v4_end_grace - release grace period for nfsd's v4.x lock manager
  *
@@ -1077,6 +1090,7 @@ static ssize_t write_v4_end_grace(struct file *file, char *buf, size_t size)
 {
 	struct nfsd_net *nn = net_generic(netns(file), nfsd_net_id);
 
+	guard(mutex)(&nfsd_mutex);
 	if (size > 0) {
 		switch(buf[0]) {
 		case 'Y':
-- 
2.49.0

^ permalink raw reply related	[flat|nested] 12+ messages in thread
* Re: [PATCH 1/3] nfsd: provide proper locking for all write_ function
  2025-06-20 23:33 ` [PATCH 1/3] nfsd: provide proper locking for all write_ function NeilBrown
@ 2025-06-21  8:50   ` Li Lingfeng
  0 siblings, 0 replies; 12+ messages in thread
From: Li Lingfeng @ 2025-06-21 8:50 UTC (permalink / raw)
  To: NeilBrown, Chuck Lever, Jeff Layton
  Cc: linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey, yangerkun

Thank you for the patch, it did fix the issue.

On 2025/6/21 7:33, NeilBrown wrote:
> write_foo functions are called to handle IO to files in /proc/fs/nfsd/.
> They can be called at any time and so generally need locking to ensure
> they don't happen at an awkward time.
>
> Many already take nfsd_mutex and check if nfsd_serv has been set. This
> ensures they only run when the server is fully configured.
>
> write_filehandle() does *not* need locking. It interacts with the
> export table which is set up when the netns is set up, so it is always
> valid and it has its own locking. write_filehandle() is needed before
> the nfs server is started so checking nfsd_serv would be wrong.
>
> The remaining files which do not have any locking are
> write_v4_end_grace(), write_unlock_ip(), and write_unlock_fs().
> None of these make sense when the nfs server is not running and there is
> evidence that write_v4_end_grace() can race with ->client_tracking_op
> setup/shutdown and cause problems.
>
> This patch adds locking to these three and ensures the "unlock"
> functions abort if ->nfsd_serv is not set. It uses
> 	guard(mutex)(&nfsd_mutex);
> so there is no need to ensure we unlock on every patch.
>
> Reported-by: Li Lingfeng <lilingfeng3@huawei.com>
> Signed-off-by: NeilBrown <neil@brown.name>
> ---
>  fs/nfsd/nfsctl.c | 14 ++++++++++++++
>  1 file changed, 14 insertions(+)
>
> diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
> index 3f3e9f6c4250..0e7e89dc730b 100644
> --- a/fs/nfsd/nfsctl.c
> +++ b/fs/nfsd/nfsctl.c
> @@ -221,6 +221,12 @@ static ssize_t write_unlock_ip(struct file *file, char *buf, size_t size)
>  	size_t salen = sizeof(address);
>  	char *fo_path;
>  	struct net *net = netns(file);
> +	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
> +
> +	guard(mutex)(&nfsd_mutex);
> +	if (!nn->nfsd_serv)
> +		/* There cannot be any files to unlock */
> +		return -EINVAL;
>  
>  	/* sanity check */
>  	if (size == 0)
> @@ -259,6 +265,12 @@ static ssize_t write_unlock_fs(struct file *file, char *buf, size_t size)
>  	struct path path;
>  	char *fo_path;
>  	int error;
> +	struct nfsd_net *nn = net_generic(netns(file), nfsd_net_id);
> +
> +	guard(mutex)(&nfsd_mutex);
> +	if (!nn->nfsd_serv)
> +		/* There cannot be any files to unlock */
> +		return -EINVAL;
>  
>  	/* sanity check */
>  	if (size == 0)
> @@ -1053,6 +1065,7 @@ static ssize_t write_recoverydir(struct file *file, char *buf, size_t size)
>  }
>  #endif
>  
> +
>  /*
>   * write_v4_end_grace - release grace period for nfsd's v4.x lock manager
>   *
> @@ -1077,6 +1090,7 @@ static ssize_t write_v4_end_grace(struct file *file, char *buf, size_t size)
>  {
>  	struct nfsd_net *nn = net_generic(netns(file), nfsd_net_id);
>  
> +	guard(mutex)(&nfsd_mutex);
>  	if (size > 0) {
>  		switch(buf[0]) {
>  		case 'Y':

Tested-by: Li Lingfeng <lilingfeng3@huawei.com>

^ permalink raw reply	[flat|nested] 12+ messages in thread
* [PATCH 2/3] nfsd: use kref and new mutex for global config management
  2025-06-20 23:33 [PATCH 0/3 RFC] improve some nfsd_mutex locking NeilBrown
  2025-06-20 23:33 ` [PATCH 1/3] nfsd: provide proper locking for all write_ function NeilBrown
@ 2025-06-20 23:33 ` NeilBrown
  2025-06-20 23:33 ` [PATCH 3/3] nfsd: split nfsd_mutex into one mutex per net-namespace NeilBrown
  2025-06-21 15:21 ` [PATCH 0/3 RFC] improve some nfsd_mutex locking Chuck Lever
  3 siblings, 0 replies; 12+ messages in thread
From: NeilBrown @ 2025-06-20 23:33 UTC (permalink / raw)
  To: Chuck Lever, Jeff Layton
  Cc: linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey, Li Lingfeng

nfsd_mutex is used for two quite different things:
 1/ it prevents races when starting/stopping global resources: the
    filecache and the v4 state table
 2/ it prevents races for per-netns config, typically ensuring config
    changes are synchronised w.r.t. server startup/shutdown.

This patch splits out the first use.  A subsequent patch improves the
second.

"nfsd_users" is changed to a kref which can be taken to delay the
shutdown of global services.  nfsd_startup_get(), if it succeeds,
ensures the global services will remain until nfsd_startup_put().

The new mutex, nfsd_startup_mutex, is only taken for startup and
shutdown.  It is not needed to protect the kref.

The locking needed by nfsd_file_cache_purge() is now provided
internally by that function so callers don't need to be concerned.
This replaces NFSD_FILE_CACHE_UP, which was effectively just a flag
saying that nfsd_users is non-zero.
Signed-off-by: NeilBrown <neil@brown.name>
---
 fs/nfsd/export.c    |  6 ------
 fs/nfsd/filecache.c | 31 +++++++++++--------------------
 fs/nfsd/nfsd.h      |  3 +++
 fs/nfsd/nfssvc.c    | 41 +++++++++++++++++++++++++++--------------
 4 files changed, 41 insertions(+), 40 deletions(-)

diff --git a/fs/nfsd/export.c b/fs/nfsd/export.c
index cadfc2bae60e..1ea3d72ef5c9 100644
--- a/fs/nfsd/export.c
+++ b/fs/nfsd/export.c
@@ -243,13 +243,7 @@ static struct cache_head *expkey_alloc(void)
 
 static void expkey_flush(void)
 {
-	/*
-	 * Take the nfsd_mutex here to ensure that the file cache is not
-	 * destroyed while we're in the middle of flushing.
-	 */
-	mutex_lock(&nfsd_mutex);
 	nfsd_file_cache_purge(current->nsproxy->net_ns);
-	mutex_unlock(&nfsd_mutex);
 }
 
 static const struct cache_detail svc_expkey_cache_template = {
diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c
index e108b6c705b4..0a9116b7530c 100644
--- a/fs/nfsd/filecache.c
+++ b/fs/nfsd/filecache.c
@@ -50,8 +50,6 @@
 
 #define NFSD_LAUNDRETTE_DELAY		(2 * HZ)
 
-#define NFSD_FILE_CACHE_UP		(0)
-
 /* We only care about NFSD_MAY_READ/WRITE for this cache */
 #define NFSD_FILE_MAY_MASK	(NFSD_MAY_READ|NFSD_MAY_WRITE|NFSD_MAY_LOCALIO)
 
@@ -70,7 +68,6 @@ struct nfsd_fcache_disposal {
 static struct kmem_cache		*nfsd_file_slab;
 static struct kmem_cache		*nfsd_file_mark_slab;
 static struct list_lru			nfsd_file_lru;
-static unsigned long			nfsd_file_flags;
 static struct fsnotify_group		*nfsd_file_fsnotify_group;
 static struct delayed_work		nfsd_filecache_laundrette;
 static struct rhltable			nfsd_file_rhltable
@@ -112,9 +109,12 @@ static const struct rhashtable_params nfsd_file_rhash_params = {
 static void
 nfsd_file_schedule_laundrette(void)
 {
-	if (test_bit(NFSD_FILE_CACHE_UP, &nfsd_file_flags))
-		queue_delayed_work(system_unbound_wq, &nfsd_filecache_laundrette,
+	if (nfsd_startup_get()) {
+		queue_delayed_work(system_unbound_wq,
+				   &nfsd_filecache_laundrette,
 				   NFSD_LAUNDRETTE_DELAY);
+		nfsd_startup_put();
+	}
 }
 
 static void
@@ -795,10 +795,6 @@ nfsd_file_cache_init(void)
 {
 	int ret;
 
-	lockdep_assert_held(&nfsd_mutex);
-	if (test_and_set_bit(NFSD_FILE_CACHE_UP, &nfsd_file_flags) == 1)
-		return 0;
-
 	ret = rhltable_init(&nfsd_file_rhltable, &nfsd_file_rhash_params);
 	if (ret)
 		goto out;
@@ -853,8 +849,6 @@ nfsd_file_cache_init(void)
 
 	INIT_DELAYED_WORK(&nfsd_filecache_laundrette, nfsd_file_gc_worker);
 out:
-	if (ret)
-		clear_bit(NFSD_FILE_CACHE_UP, &nfsd_file_flags);
 	return ret;
 out_notifier:
 	lease_unregister_notifier(&nfsd_file_lease_notifier);
@@ -958,9 +952,10 @@ nfsd_file_cache_start_net(struct net *net)
 void
 nfsd_file_cache_purge(struct net *net)
 {
-	lockdep_assert_held(&nfsd_mutex);
-	if (test_bit(NFSD_FILE_CACHE_UP, &nfsd_file_flags) == 1)
+	if (nfsd_startup_get()) {
 		__nfsd_file_cache_purge(net);
+		nfsd_startup_put();
+	}
 }
 
 void
@@ -975,10 +970,6 @@ nfsd_file_cache_shutdown(void)
 {
 	int i;
 
-	lockdep_assert_held(&nfsd_mutex);
-	if (test_and_clear_bit(NFSD_FILE_CACHE_UP, &nfsd_file_flags) == 0)
-		return;
-
 	lease_unregister_notifier(&nfsd_file_lease_notifier);
 	shrinker_free(nfsd_file_shrinker);
 	/*
@@ -1347,8 +1338,7 @@ int nfsd_file_cache_stats_show(struct seq_file *m, void *v)
 	unsigned long lru = 0, total_age = 0;
 
 	/* Serialize with server shutdown */
-	mutex_lock(&nfsd_mutex);
-	if (test_bit(NFSD_FILE_CACHE_UP, &nfsd_file_flags) == 1) {
+	if (nfsd_startup_get()) {
 		struct bucket_table *tbl;
 		struct rhashtable *ht;
 
@@ -1360,8 +1350,9 @@ int nfsd_file_cache_stats_show(struct seq_file *m, void *v)
 		tbl = rht_dereference_rcu(ht->tbl, ht);
 		buckets = tbl->size;
 		rcu_read_unlock();
+
+		nfsd_startup_put();
 	}
-	mutex_unlock(&nfsd_mutex);
 
 	for_each_possible_cpu(i) {
 		hits += per_cpu(nfsd_file_cache_hits, i);
diff --git a/fs/nfsd/nfsd.h b/fs/nfsd/nfsd.h
index 1bfd0b4e9af7..8ad9fcc23789 100644
--- a/fs/nfsd/nfsd.h
+++ b/fs/nfsd/nfsd.h
@@ -80,6 +80,9 @@ extern const struct svc_version nfsd_version2, nfsd_version3, nfsd_version4;
 extern struct mutex nfsd_mutex;
 extern atomic_t nfsd_th_cnt;		/* number of available threads */
 
+bool nfsd_startup_get(void);
+void nfsd_startup_put(void);
+
 extern const struct seq_operations nfs_exports_op;
 
 /*
diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
index 82b0111ac469..b2080e5a71e6 100644
--- a/fs/nfsd/nfssvc.c
+++ b/fs/nfsd/nfssvc.c
@@ -270,38 +270,51 @@ static int nfsd_init_socks(struct net *net, const struct cred *cred)
 	return 0;
 }
 
-static int nfsd_users = 0;
+static struct kref nfsd_users = KREF_INIT(0);
+static DEFINE_MUTEX(nfsd_startup_mutex);
 
 static int nfsd_startup_generic(void)
 {
-	int ret;
+	int ret = 0;
 
-	if (nfsd_users++)
+	if (kref_get_unless_zero(&nfsd_users))
 		return 0;
+	mutex_lock(&nfsd_startup_mutex);
+	if (kref_get_unless_zero(&nfsd_users))
+		goto out_unlock;
 
 	ret = nfsd_file_cache_init();
 	if (ret)
-		goto dec_users;
+		goto out_unlock;
 
 	ret = nfs4_state_start();
 	if (ret)
 		goto out_file_cache;
-	return 0;
+	kref_init(&nfsd_users);
+out_unlock:
+	mutex_unlock(&nfsd_startup_mutex);
+	return ret;
 
 out_file_cache:
 	nfsd_file_cache_shutdown();
-dec_users:
-	nfsd_users--;
-	return ret;
+	goto out_unlock;
 }
 
-static void nfsd_shutdown_generic(void)
+static void nfsd_shutdown_cb(struct kref *kref)
 {
-	if (--nfsd_users)
-		return;
-
 	nfs4_state_shutdown();
 	nfsd_file_cache_shutdown();
+	mutex_unlock(&nfsd_startup_mutex);
+}
+
+bool nfsd_startup_get(void)
+{
+	return kref_get_unless_zero(&nfsd_users);
+}
+
+void nfsd_startup_put(void)
+{
+	kref_put_mutex(&nfsd_users, nfsd_shutdown_cb, &nfsd_startup_mutex);
 }
 
 static bool nfsd_needs_lockd(struct nfsd_net *nn)
@@ -416,7 +429,7 @@ static int nfsd_startup_net(struct net *net, const struct cred *cred)
 		nn->lockd_up = false;
 	}
 out_socks:
-	nfsd_shutdown_generic();
+	nfsd_startup_put();
 	return ret;
 }
 
@@ -443,7 +456,7 @@ static void nfsd_shutdown_net(struct net *net)
 	percpu_ref_exit(&nn->nfsd_net_ref);
 
 	nn->nfsd_net_up = false;
-	nfsd_shutdown_generic();
+	nfsd_startup_put();
 }
 
 static DEFINE_SPINLOCK(nfsd_notifier_lock);
-- 
2.49.0

^ permalink raw reply related	[flat|nested] 12+ messages in thread
* [PATCH 3/3] nfsd: split nfsd_mutex into one mutex per net-namespace. 2025-06-20 23:33 [PATCH 0/3 RFC] improve some nfsd_mutex locking NeilBrown 2025-06-20 23:33 ` [PATCH 1/3] nfsd: provide proper locking for all write_ function NeilBrown 2025-06-20 23:33 ` [PATCH 2/3] nfsd: use kref and new mutex for global config management NeilBrown @ 2025-06-20 23:33 ` NeilBrown 2025-06-21 13:02 ` kernel test robot 2025-06-21 15:21 ` [PATCH 0/3 RFC] improve some nfsd_mutex locking Chuck Lever 3 siblings, 1 reply; 12+ messages in thread From: NeilBrown @ 2025-06-20 23:33 UTC (permalink / raw) To: Chuck Lever, Jeff Layton Cc: linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey, Li Lingfeng The remaining uses for nfsd_mutex are all to protect per-netns resources. This patch replaces the global mutex with one per netns. The "svc_info" struct now contains that mutex rather than a pointer to the mutex. The use of guard(mutex)(...) is extended to all users of this lock. Signed-off-by: NeilBrown <neil@brown.name> --- .../admin-guide/nfs/nfsd-admin-interfaces.rst | 4 +- fs/nfsd/netns.h | 1 + fs/nfsd/nfsctl.c | 158 +++++++----------- fs/nfsd/nfsd.h | 1 - fs/nfsd/nfssvc.c | 40 ++--- include/linux/sunrpc/svc.h | 2 +- net/sunrpc/svc_xprt.c | 4 +- 7 files changed, 73 insertions(+), 137 deletions(-) diff --git a/Documentation/admin-guide/nfs/nfsd-admin-interfaces.rst b/Documentation/admin-guide/nfs/nfsd-admin-interfaces.rst index c05926f79054..0d9c3392e1ed 100644 --- a/Documentation/admin-guide/nfs/nfsd-admin-interfaces.rst +++ b/Documentation/admin-guide/nfs/nfsd-admin-interfaces.rst @@ -37,4 +37,6 @@ Implementation notes Note that the rpc server requires the caller to serialize addition and removal of listening sockets, and startup and shutdown of the server. -For nfsd this is done using nfsd_mutex. +For nfsd this is done using nfsd_info.mutex in struct nfsd_net, which is +called config_mutex. 
+ diff --git a/fs/nfsd/netns.h b/fs/nfsd/netns.h index 3e2d0fde80a7..d05e3f405d2e 100644 --- a/fs/nfsd/netns.h +++ b/fs/nfsd/netns.h @@ -134,6 +134,7 @@ struct nfsd_net { struct svc_info nfsd_info; #define nfsd_serv nfsd_info.serv +#define config_mutex nfsd_info.mutex struct percpu_ref nfsd_net_ref; struct completion nfsd_net_confirm_done; diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c index 0e7e89dc730b..66c87f63a258 100644 --- a/fs/nfsd/nfsctl.c +++ b/fs/nfsd/nfsctl.c @@ -223,7 +223,7 @@ static ssize_t write_unlock_ip(struct file *file, char *buf, size_t size) struct net *net = netns(file); struct nfsd_net *nn = net_generic(net, nfsd_net_id); - guard(mutex)(&nfsd_mutex); + guard(mutex)(&nn->config_mutex); if (!nn->nfsd_serv) /* There cannot be any files to unlock */ return -EINVAL; @@ -267,7 +267,7 @@ static ssize_t write_unlock_fs(struct file *file, char *buf, size_t size) int error; struct nfsd_net *nn = net_generic(netns(file), nfsd_net_id); - guard(mutex)(&nfsd_mutex); + guard(mutex)(&nn->config_mutex); if (!nn->nfsd_serv) /* There cannot be any files to unlock */ return -EINVAL; @@ -413,6 +413,7 @@ static ssize_t write_threads(struct file *file, char *buf, size_t size) char *mesg = buf; int rv; struct net *net = netns(file); + struct nfsd_net *nn = net_generic(net, nfsd_net_id); if (size > 0) { int newthreads; @@ -422,9 +423,8 @@ static ssize_t write_threads(struct file *file, char *buf, size_t size) if (newthreads < 0) return -EINVAL; trace_nfsd_ctl_threads(net, newthreads); - mutex_lock(&nfsd_mutex); + guard(mutex)(&nn->config_mutex); rv = nfsd_svc(1, &newthreads, net, file->f_cred, NULL); - mutex_unlock(&nfsd_mutex); if (rv < 0) return rv; } else @@ -467,8 +467,10 @@ static ssize_t write_pool_threads(struct file *file, char *buf, size_t size) int npools; int *nthreads; struct net *net = netns(file); + struct nfsd_net *nn = net_generic(net, nfsd_net_id); + + guard(mutex)(&nn->config_mutex); - mutex_lock(&nfsd_mutex); npools = nfsd_nrpools(net); if 
(npools == 0) { /* @@ -476,7 +478,6 @@ static ssize_t write_pool_threads(struct file *file, char *buf, size_t size) * writing to the threads file but NOT the pool_threads * file, sorry. Report zero threads. */ - mutex_unlock(&nfsd_mutex); strcpy(buf, "0\n"); return strlen(buf); } @@ -526,7 +527,6 @@ static ssize_t write_pool_threads(struct file *file, char *buf, size_t size) rv = mesg - buf; out_free: kfree(nthreads); - mutex_unlock(&nfsd_mutex); return rv; } @@ -689,12 +689,10 @@ static ssize_t __write_versions(struct file *file, char *buf, size_t size) */ static ssize_t write_versions(struct file *file, char *buf, size_t size) { - ssize_t rv; + struct nfsd_net *nn = net_generic(netns(file), nfsd_net_id); - mutex_lock(&nfsd_mutex); - rv = __write_versions(file, buf, size); - mutex_unlock(&nfsd_mutex); - return rv; + guard(mutex)(&nn->config_mutex); + return __write_versions(file, buf, size); } /* @@ -848,15 +846,12 @@ static ssize_t __write_ports(struct file *file, char *buf, size_t size, */ static ssize_t write_ports(struct file *file, char *buf, size_t size) { - ssize_t rv; + struct nfsd_net *nn = net_generic(netns(file), nfsd_net_id); - mutex_lock(&nfsd_mutex); - rv = __write_ports(file, buf, size, netns(file)); - mutex_unlock(&nfsd_mutex); - return rv; + guard(mutex)(&nn->config_mutex); + return __write_ports(file, buf, size, netns(file)); } - int nfsd_max_blksize; /* @@ -898,13 +893,11 @@ static ssize_t write_maxblksize(struct file *file, char *buf, size_t size) bsize = max_t(int, bsize, 1024); bsize = min_t(int, bsize, NFSSVC_MAXBLKSIZE); bsize &= ~(1024-1); - mutex_lock(&nfsd_mutex); - if (nn->nfsd_serv) { - mutex_unlock(&nfsd_mutex); + + guard(mutex)(&nn->config_mutex); + if (nn->nfsd_serv) return -EBUSY; - } nfsd_max_blksize = bsize; - mutex_unlock(&nfsd_mutex); } return scnprintf(buf, SIMPLE_TRANSACTION_LIMIT, "%d\n", @@ -951,12 +944,8 @@ static ssize_t __nfsd4_write_time(struct file *file, char *buf, size_t size, static ssize_t nfsd4_write_time(struct 
file *file, char *buf, size_t size, time64_t *time, struct nfsd_net *nn) { - ssize_t rv; - - mutex_lock(&nfsd_mutex); - rv = __nfsd4_write_time(file, buf, size, time, nn); - mutex_unlock(&nfsd_mutex); - return rv; + guard(mutex)(&nn->config_mutex); + return __nfsd4_write_time(file, buf, size, time, nn); } /* @@ -1055,13 +1044,10 @@ static ssize_t __write_recoverydir(struct file *file, char *buf, size_t size, */ static ssize_t write_recoverydir(struct file *file, char *buf, size_t size) { - ssize_t rv; struct nfsd_net *nn = net_generic(netns(file), nfsd_net_id); - mutex_lock(&nfsd_mutex); - rv = __write_recoverydir(file, buf, size, nn); - mutex_unlock(&nfsd_mutex); - return rv; + guard(mutex)(&nn->config_mutex); + return __write_recoverydir(file, buf, size, nn); } #endif @@ -1090,7 +1076,7 @@ static ssize_t write_v4_end_grace(struct file *file, char *buf, size_t size) { struct nfsd_net *nn = net_generic(netns(file), nfsd_net_id); - guard(mutex)(&nfsd_mutex); + guard(mutex)(&nn->config_mutex); if (size > 0) { switch(buf[0]) { case 'Y': @@ -1525,17 +1511,13 @@ int nfsd_nl_rpc_status_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb) { int i, ret, rqstp_index = 0; - struct nfsd_net *nn; - - mutex_lock(&nfsd_mutex); + struct nfsd_net *nn = net_generic(sock_net(skb->sk), nfsd_net_id); - nn = net_generic(sock_net(skb->sk), nfsd_net_id); - if (!nn->nfsd_serv) { - ret = -ENODEV; - goto out_unlock; - } + guard(mutex)(&nn->config_mutex); + if (!nn->nfsd_serv) + return -ENODEV; - rcu_read_lock(); + guard(rcu)(); for (i = 0; i < nn->nfsd_serv->sv_nrpools; i++) { struct svc_rqst *rqstp; @@ -1601,17 +1583,13 @@ int nfsd_nl_rpc_status_get_dumpit(struct sk_buff *skb, ret = nfsd_genl_rpc_status_compose_msg(skb, cb, &genl_rqstp); if (ret) - goto out; + return ret; } } cb->args[0] = i; cb->args[1] = rqstp_index; ret = skb->len; -out: - rcu_read_unlock(); -out_unlock: - mutex_unlock(&nfsd_mutex); return ret; } @@ -1640,14 +1618,12 @@ int nfsd_nl_threads_set_doit(struct 
sk_buff *skb, struct genl_info *info) count++; } - mutex_lock(&nfsd_mutex); + guard(mutex)(&nn->config_mutex); nrpools = max(count, nfsd_nrpools(net)); nthreads = kcalloc(nrpools, sizeof(int), GFP_KERNEL); - if (!nthreads) { - ret = -ENOMEM; - goto out_unlock; - } + if (!nthreads) + return -ENOMEM; i = 0; nlmsg_for_each_attr(attr, info->nlhdr, GENL_HDRLEN, rem) { @@ -1663,7 +1639,7 @@ int nfsd_nl_threads_set_doit(struct sk_buff *skb, struct genl_info *info) info->attrs[NFSD_A_SERVER_SCOPE]) { ret = -EBUSY; if (nn->nfsd_serv && nn->nfsd_serv->sv_nrthreads) - goto out_unlock; + goto out_free; ret = -EINVAL; attr = info->attrs[NFSD_A_SERVER_GRACETIME]; @@ -1671,7 +1647,7 @@ int nfsd_nl_threads_set_doit(struct sk_buff *skb, struct genl_info *info) u32 gracetime = nla_get_u32(attr); if (gracetime < 10 || gracetime > 3600) - goto out_unlock; + goto out_free; nn->nfsd4_grace = gracetime; } @@ -1681,7 +1657,7 @@ int nfsd_nl_threads_set_doit(struct sk_buff *skb, struct genl_info *info) u32 leasetime = nla_get_u32(attr); if (leasetime < 10 || leasetime > 3600) - goto out_unlock; + goto out_free; nn->nfsd4_lease = leasetime; } @@ -1694,8 +1670,7 @@ int nfsd_nl_threads_set_doit(struct sk_buff *skb, struct genl_info *info) ret = nfsd_svc(nrpools, nthreads, net, get_current_cred(), scope); if (ret > 0) ret = 0; -out_unlock: - mutex_unlock(&nfsd_mutex); +out_free: kfree(nthreads); return ret; } @@ -1724,7 +1699,7 @@ int nfsd_nl_threads_get_doit(struct sk_buff *skb, struct genl_info *info) goto err_free_msg; } - mutex_lock(&nfsd_mutex); + guard(mutex)(&nn->config_mutex); err = nla_put_u32(skb, NFSD_A_SERVER_GRACETIME, nn->nfsd4_grace) || @@ -1733,7 +1708,7 @@ int nfsd_nl_threads_get_doit(struct sk_buff *skb, struct genl_info *info) nla_put_string(skb, NFSD_A_SERVER_SCOPE, nn->nfsd_name); if (err) - goto err_unlock; + goto err_free_msg; if (nn->nfsd_serv) { int i; @@ -1744,22 +1719,18 @@ int nfsd_nl_threads_get_doit(struct sk_buff *skb, struct genl_info *info) err = 
nla_put_u32(skb, NFSD_A_SERVER_THREADS, sp->sp_nrthreads); if (err) - goto err_unlock; + goto err_free_msg; } } else { err = nla_put_u32(skb, NFSD_A_SERVER_THREADS, 0); if (err) - goto err_unlock; + goto err_free_msg; } - mutex_unlock(&nfsd_mutex); - genlmsg_end(skb, hdr); return genlmsg_reply(skb, info); -err_unlock: - mutex_unlock(&nfsd_mutex); err_free_msg: nlmsg_free(skb); @@ -1782,13 +1753,10 @@ int nfsd_nl_version_set_doit(struct sk_buff *skb, struct genl_info *info) if (GENL_REQ_ATTR_CHECK(info, NFSD_A_SERVER_PROTO_VERSION)) return -EINVAL; - mutex_lock(&nfsd_mutex); - nn = net_generic(genl_info_net(info), nfsd_net_id); - if (nn->nfsd_serv) { - mutex_unlock(&nfsd_mutex); + guard(mutex)(&nn->config_mutex); + if (nn->nfsd_serv) return -EBUSY; - } /* clear current supported versions. */ nfsd_vers(nn, 2, NFSD_CLEAR); @@ -1831,8 +1799,6 @@ int nfsd_nl_version_set_doit(struct sk_buff *skb, struct genl_info *info) } } - mutex_unlock(&nfsd_mutex); - return 0; } @@ -1859,8 +1825,8 @@ int nfsd_nl_version_get_doit(struct sk_buff *skb, struct genl_info *info) goto err_free_msg; } - mutex_lock(&nfsd_mutex); nn = net_generic(genl_info_net(info), nfsd_net_id); + guard(mutex)(&nn->config_mutex); for (i = 2; i <= 4; i++) { int j; @@ -1882,13 +1848,13 @@ int nfsd_nl_version_get_doit(struct sk_buff *skb, struct genl_info *info) NFSD_A_SERVER_PROTO_VERSION); if (!attr) { err = -EINVAL; - goto err_nfsd_unlock; + goto err_free_msg; } if (nla_put_u32(skb, NFSD_A_VERSION_MAJOR, i) || nla_put_u32(skb, NFSD_A_VERSION_MINOR, j)) { err = -EINVAL; - goto err_nfsd_unlock; + goto err_free_msg; } /* Set the enabled flag if the version is enabled */ @@ -1896,20 +1862,17 @@ int nfsd_nl_version_get_doit(struct sk_buff *skb, struct genl_info *info) (i < 4 || nfsd_minorversion(nn, j, NFSD_TEST)) && nla_put_flag(skb, NFSD_A_VERSION_ENABLED)) { err = -EINVAL; - goto err_nfsd_unlock; + goto err_free_msg; } nla_nest_end(skb, attr); } } - mutex_unlock(&nfsd_mutex); genlmsg_end(skb, hdr); return 
genlmsg_reply(skb, info); -err_nfsd_unlock: - mutex_unlock(&nfsd_mutex); err_free_msg: nlmsg_free(skb); @@ -1934,15 +1897,13 @@ int nfsd_nl_listener_set_doit(struct sk_buff *skb, struct genl_info *info) bool delete = false; int err, rem; - mutex_lock(&nfsd_mutex); + nn = net_generic(net, nfsd_net_id); + guard(mutex)(&nn->config_mutex); err = nfsd_create_serv(net); - if (err) { - mutex_unlock(&nfsd_mutex); + if (err) return err; - } - nn = net_generic(net, nfsd_net_id); serv = nn->nfsd_serv; spin_lock_bh(&serv->sv_lock); @@ -2003,10 +1964,8 @@ int nfsd_nl_listener_set_doit(struct sk_buff *skb, struct genl_info *info) spin_unlock_bh(&serv->sv_lock); /* Do not remove listeners while there are active threads. */ - if (serv->sv_nrthreads) { - err = -EBUSY; - goto out_unlock_mtx; - } + if (serv->sv_nrthreads) + return -EBUSY; /* * Since we can't delete an arbitrary llist entry, destroy the @@ -2057,9 +2016,6 @@ int nfsd_nl_listener_set_doit(struct sk_buff *skb, struct genl_info *info) if (!serv->sv_nrthreads && list_empty(&nn->nfsd_serv->sv_permsocks)) nfsd_destroy_serv(net); -out_unlock_mtx: - mutex_unlock(&nfsd_mutex); - return err; } @@ -2088,12 +2044,12 @@ int nfsd_nl_listener_get_doit(struct sk_buff *skb, struct genl_info *info) goto err_free_msg; } - mutex_lock(&nfsd_mutex); nn = net_generic(genl_info_net(info), nfsd_net_id); + guard(mutex)(&nn->config_mutex); /* no nfs server? 
Just send empty socket list */ if (!nn->nfsd_serv) - goto out_unlock_mtx; + goto out; serv = nn->nfsd_serv; spin_lock_bh(&serv->sv_lock); @@ -2103,7 +2059,7 @@ int nfsd_nl_listener_get_doit(struct sk_buff *skb, struct genl_info *info) attr = nla_nest_start(skb, NFSD_A_SERVER_SOCK_ADDR); if (!attr) { err = -EINVAL; - goto err_serv_unlock; + goto err; } if (nla_put_string(skb, NFSD_A_SOCK_TRANSPORT_NAME, @@ -2112,21 +2068,19 @@ int nfsd_nl_listener_get_doit(struct sk_buff *skb, struct genl_info *info) sizeof(struct sockaddr_storage), &xprt->xpt_local)) { err = -EINVAL; - goto err_serv_unlock; + goto err; } nla_nest_end(skb, attr); } spin_unlock_bh(&serv->sv_lock); -out_unlock_mtx: - mutex_unlock(&nfsd_mutex); +out: genlmsg_end(skb, hdr); return genlmsg_reply(skb, info); -err_serv_unlock: +err: spin_unlock_bh(&serv->sv_lock); - mutex_unlock(&nfsd_mutex); err_free_msg: nlmsg_free(skb); @@ -2228,7 +2182,7 @@ static __net_init int nfsd_net_init(struct net *net) nn->nfsd_versions[i] = nfsd_support_version(i); for (i = 0; i < sizeof(nn->nfsd4_minorversions); i++) nn->nfsd4_minorversions[i] = nfsd_support_version(4); - nn->nfsd_info.mutex = &nfsd_mutex; + mutex_init(&nn->config_mutex); nn->nfsd_serv = NULL; nfsd4_init_leases_net(nn); get_random_bytes(&nn->siphash_key, sizeof(nn->siphash_key)); diff --git a/fs/nfsd/nfsd.h b/fs/nfsd/nfsd.h index 8ad9fcc23789..3cbca4d34f48 100644 --- a/fs/nfsd/nfsd.h +++ b/fs/nfsd/nfsd.h @@ -77,7 +77,6 @@ struct nfsd_genl_rqstp { extern struct svc_program nfsd_programs[]; extern const struct svc_version nfsd_version2, nfsd_version3, nfsd_version4; -extern struct mutex nfsd_mutex; extern atomic_t nfsd_th_cnt; /* number of available threads */ bool nfsd_startup_get(void); diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c index b2080e5a71e6..f7fdb05203f2 100644 --- a/fs/nfsd/nfssvc.c +++ b/fs/nfsd/nfssvc.c @@ -56,20 +56,6 @@ static __be32 nfsd_init_request(struct svc_rqst *, const struct svc_program *, struct svc_process_info *); -/* - * 
nfsd_mutex protects nn->nfsd_serv -- both the pointer itself and some members - * of the svc_serv struct such as ->sv_temp_socks and ->sv_permsocks. - * - * Finally, the nfsd_mutex also protects some of the global variables that are - * accessed when nfsd starts and that are settable via the write_* routines in - * nfsctl.c. In particular: - * - * user_recovery_dirname - * user_lease_time - * nfsd_versions - */ -DEFINE_MUTEX(nfsd_mutex); - #if IS_ENABLED(CONFIG_NFS_LOCALIO) static const struct svc_version *localio_versions[] = { [1] = &localio_version1, @@ -239,14 +225,12 @@ static void nfsd_net_free(struct percpu_ref *ref) int nfsd_nrthreads(struct net *net) { - int rv = 0; struct nfsd_net *nn = net_generic(net, nfsd_net_id); - mutex_lock(&nfsd_mutex); - if (nn->nfsd_serv) - rv = nn->nfsd_serv->sv_nrthreads; - mutex_unlock(&nfsd_mutex); - return rv; + guard(mutex)(&nn->config_mutex); + if (!nn->nfsd_serv) + return 0; + return nn->nfsd_serv->sv_nrthreads; } static int nfsd_init_socks(struct net *net, const struct cred *cred) @@ -522,7 +506,6 @@ static struct notifier_block nfsd_inet6addr_notifier = { }; #endif -/* Only used under nfsd_mutex, so this atomic may be overkill: */ static atomic_t nfsd_notifier_refcount = ATOMIC_INIT(0); /** @@ -534,7 +517,7 @@ void nfsd_destroy_serv(struct net *net) struct nfsd_net *nn = net_generic(net, nfsd_net_id); struct svc_serv *serv = nn->nfsd_serv; - lockdep_assert_held(&nfsd_mutex); + lockdep_assert_held(&nn->config_mutex); spin_lock(&nfsd_notifier_lock); nn->nfsd_serv = NULL; @@ -606,17 +589,14 @@ void nfsd_shutdown_threads(struct net *net) struct nfsd_net *nn = net_generic(net, nfsd_net_id); struct svc_serv *serv; - mutex_lock(&nfsd_mutex); + guard(mutex)(&nn->config_mutex); serv = nn->nfsd_serv; - if (serv == NULL) { - mutex_unlock(&nfsd_mutex); + if (serv == NULL) return; - } /* Kill outstanding nfsd threads */ svc_set_num_threads(serv, NULL, 0); nfsd_destroy_serv(net); - mutex_unlock(&nfsd_mutex); } struct svc_rqst 
*nfsd_current_rqst(void) @@ -632,7 +612,7 @@ int nfsd_create_serv(struct net *net) struct nfsd_net *nn = net_generic(net, nfsd_net_id); struct svc_serv *serv; - WARN_ON(!mutex_is_locked(&nfsd_mutex)); + WARN_ON(!mutex_is_locked(&nn->config_mutex)); if (nn->nfsd_serv) return 0; @@ -714,7 +694,7 @@ int nfsd_set_nrthreads(int n, int *nthreads, struct net *net) int err = 0; struct nfsd_net *nn = net_generic(net, nfsd_net_id); - lockdep_assert_held(&nfsd_mutex); + lockdep_assert_held(&nn->config_mutex); if (nn->nfsd_serv == NULL || n <= 0) return 0; @@ -787,7 +767,7 @@ nfsd_svc(int n, int *nthreads, struct net *net, const struct cred *cred, const c struct nfsd_net *nn = net_generic(net, nfsd_net_id); struct svc_serv *serv; - lockdep_assert_held(&nfsd_mutex); + lockdep_assert_held(&nn->config_mutex); dprintk("nfsd: creating service\n"); diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h index 48666b83fe68..a12fe99156ec 100644 --- a/include/linux/sunrpc/svc.h +++ b/include/linux/sunrpc/svc.h @@ -98,7 +98,7 @@ struct svc_serv { /* This is used by pool_stats to find and lock an svc */ struct svc_info { struct svc_serv *serv; - struct mutex *mutex; + struct mutex mutex; }; void svc_destroy(struct svc_serv **svcp); diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c index 8b1837228799..b8352b7d6860 100644 --- a/net/sunrpc/svc_xprt.c +++ b/net/sunrpc/svc_xprt.c @@ -1399,7 +1399,7 @@ static void *svc_pool_stats_start(struct seq_file *m, loff_t *pos) dprintk("svc_pool_stats_start, *pidx=%u\n", pidx); - mutex_lock(si->mutex); + mutex_lock(&si->mutex); if (!pidx) return SEQ_START_TOKEN; @@ -1436,7 +1436,7 @@ static void svc_pool_stats_stop(struct seq_file *m, void *p) { struct svc_info *si = m->private; - mutex_unlock(si->mutex); + mutex_unlock(&si->mutex); } static int svc_pool_stats_show(struct seq_file *m, void *p) -- 2.49.0 ^ permalink raw reply related [flat|nested] 12+ messages in thread
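The guard(mutex) calls in the patch above come from the kernel's cleanup.h machinery, which builds scope-exit unlocking on the compiler's cleanup attribute. For readers unfamiliar with the idiom, here is a minimal user-space sketch of the same pattern — the names and the pthreads port are illustrative, not the kernel's implementation:

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t cfg_mutex = PTHREAD_MUTEX_INITIALIZER;
static int lock_depth;          /* instrumentation for the example */

/* Called automatically when a guarded variable leaves scope. */
static void guard_release(pthread_mutex_t **m)
{
        lock_depth--;
        pthread_mutex_unlock(*m);
}

/* Lock m now; unlock on every exit from the enclosing scope. */
#define GUARD_MUTEX(m) \
        pthread_mutex_t *guard_var \
                __attribute__((cleanup(guard_release), unused)) = \
                (pthread_mutex_lock(m), lock_depth++, (m))

static int set_value(int *slot, int v)
{
        GUARD_MUTEX(&cfg_mutex);
        if (v < 0)
                return -1;      /* early return: unlock still runs */
        *slot = v;
        return 0;               /* normal return: unlock runs here too */
}
```

This is why the series can drop the explicit mutex_unlock() calls on each exit path: every return from the function releases the lock.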
* Re: [PATCH 3/3] nfsd: split nfsd_mutex into one mutex per net-namespace. 2025-06-20 23:33 ` [PATCH 3/3] nfsd: split nfsd_mutex into one mutex per net-namespace NeilBrown @ 2025-06-21 13:02 ` kernel test robot 0 siblings, 0 replies; 12+ messages in thread From: kernel test robot @ 2025-06-21 13:02 UTC (permalink / raw) To: NeilBrown, Chuck Lever, Jeff Layton Cc: llvm, oe-kbuild-all, linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey, Li Lingfeng Hi NeilBrown, kernel test robot noticed the following build errors: [auto build test ERROR on brauner-vfs/vfs.all] [also build test ERROR on trondmy-nfs/linux-next linus/master v6.16-rc2 next-20250620] [If your patch is applied to the wrong git tree, kindly drop us a note. And when submitting patch, we suggest to use '--base' as documented in https://git-scm.com/docs/git-format-patch#_base_tree_information] url: https://github.com/intel-lab-lkp/linux/commits/NeilBrown/nfsd-provide-proper-locking-for-all-write_-function/20250621-073955 base: https://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs.git vfs.all patch link: https://lore.kernel.org/r/20250620233802.1453016-4-neil%40brown.name patch subject: [PATCH 3/3] nfsd: split nfsd_mutex into one mutex per net-namespace. config: i386-buildonly-randconfig-003-20250621 (https://download.01.org/0day-ci/archive/20250621/202506212005.LpUsPRa3-lkp@intel.com/config) compiler: clang version 20.1.2 (https://github.com/llvm/llvm-project 58df0ef89dd64126512e4ee27b4ac3fd8ddf6247) reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250621/202506212005.LpUsPRa3-lkp@intel.com/reproduce) If you fix the issue in a separate patch/commit (i.e. 
not just a new version of the same patch/commit), kindly add following tags | Reported-by: kernel test robot <lkp@intel.com> | Closes: https://lore.kernel.org/oe-kbuild-all/202506212005.LpUsPRa3-lkp@intel.com/ All errors (new ones prefixed by >>): >> fs/nfsd/nfsctl.c:1699:3: error: cannot jump from this goto statement to its label 1699 | goto err_free_msg; | ^ fs/nfsd/nfsctl.c:1702:2: note: jump bypasses initialization of variable with __attribute__((cleanup)) 1702 | guard(mutex)(&nn->config_mutex); | ^ include/linux/cleanup.h:338:15: note: expanded from macro 'guard' 338 | CLASS(_name, __UNIQUE_ID(guard)) | ^ include/linux/compiler.h:166:29: note: expanded from macro '__UNIQUE_ID' 166 | #define __UNIQUE_ID(prefix) __PASTE(__PASTE(__UNIQUE_ID_, prefix), __COUNTER__) | ^ include/linux/compiler_types.h:84:22: note: expanded from macro '__PASTE' 84 | #define __PASTE(a,b) ___PASTE(a,b) | ^ include/linux/compiler_types.h:83:23: note: expanded from macro '___PASTE' 83 | #define ___PASTE(a,b) a##b | ^ <scratch space>:45:1: note: expanded from here 45 | __UNIQUE_ID_guard1097 | ^ fs/nfsd/nfsctl.c:1825:3: error: cannot jump from this goto statement to its label 1825 | goto err_free_msg; | ^ fs/nfsd/nfsctl.c:1829:2: note: jump bypasses initialization of variable with __attribute__((cleanup)) 1829 | guard(mutex)(&nn->config_mutex); | ^ include/linux/cleanup.h:338:15: note: expanded from macro 'guard' 338 | CLASS(_name, __UNIQUE_ID(guard)) | ^ include/linux/compiler.h:166:29: note: expanded from macro '__UNIQUE_ID' 166 | #define __UNIQUE_ID(prefix) __PASTE(__PASTE(__UNIQUE_ID_, prefix), __COUNTER__) | ^ include/linux/compiler_types.h:84:22: note: expanded from macro '__PASTE' 84 | #define __PASTE(a,b) ___PASTE(a,b) | ^ include/linux/compiler_types.h:83:23: note: expanded from macro '___PASTE' 83 | #define ___PASTE(a,b) a##b | ^ <scratch space>:68:1: note: expanded from here 68 | __UNIQUE_ID_guard1099 | ^ fs/nfsd/nfsctl.c:2044:3: error: cannot jump from this goto statement to 
its label 2044 | goto err_free_msg; | ^ fs/nfsd/nfsctl.c:2048:2: note: jump bypasses initialization of variable with __attribute__((cleanup)) 2048 | guard(mutex)(&nn->config_mutex); | ^ include/linux/cleanup.h:338:15: note: expanded from macro 'guard' 338 | CLASS(_name, __UNIQUE_ID(guard)) | ^ include/linux/compiler.h:166:29: note: expanded from macro '__UNIQUE_ID' 166 | #define __UNIQUE_ID(prefix) __PASTE(__PASTE(__UNIQUE_ID_, prefix), __COUNTER__) | ^ include/linux/compiler_types.h:84:22: note: expanded from macro '__PASTE' 84 | #define __PASTE(a,b) ___PASTE(a,b) | ^ include/linux/compiler_types.h:83:23: note: expanded from macro '___PASTE' 83 | #define ___PASTE(a,b) a##b | ^ <scratch space>:118:1: note: expanded from here 118 | __UNIQUE_ID_guard1101 | ^ 3 errors generated. vim +1699 fs/nfsd/nfsctl.c 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1677 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1678 /** 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1679 * nfsd_nl_threads_get_doit - get the number of running threads 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1680 * @skb: reply buffer 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1681 * @info: netlink metadata and command arguments 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1682 * 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1683 * Return 0 on success or a negative errno. 
924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1684 */ 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1685 int nfsd_nl_threads_get_doit(struct sk_buff *skb, struct genl_info *info) 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1686 { 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1687 struct net *net = genl_info_net(info); 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1688 struct nfsd_net *nn = net_generic(net, nfsd_net_id); 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1689 void *hdr; 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1690 int err; 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1691 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1692 skb = genlmsg_new(GENLMSG_DEFAULT_SIZE, GFP_KERNEL); 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1693 if (!skb) 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1694 return -ENOMEM; 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1695 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1696 hdr = genlmsg_iput(skb, info); 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1697 if (!hdr) { 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1698 err = -EMSGSIZE; 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 @1699 goto err_free_msg; 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1700 } 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1701 40677c06f2c6a4 NeilBrown 2025-06-21 1702 guard(mutex)(&nn->config_mutex); 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1703 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1704 err = nla_put_u32(skb, NFSD_A_SERVER_GRACETIME, 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1705 nn->nfsd4_grace) || 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1706 nla_put_u32(skb, NFSD_A_SERVER_LEASETIME, 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1707 nn->nfsd4_lease) || 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1708 nla_put_string(skb, NFSD_A_SERVER_SCOPE, 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1709 nn->nfsd_name); 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1710 if (err) 40677c06f2c6a4 NeilBrown 2025-06-21 1711 goto err_free_msg; 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1712 924f4fb003ba11 
Lorenzo Bianconi 2024-04-23 1713 if (nn->nfsd_serv) { 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1714 int i; 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1715 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1716 for (i = 0; i < nfsd_nrpools(net); ++i) { 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1717 struct svc_pool *sp = &nn->nfsd_serv->sv_pools[i]; 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1718 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1719 err = nla_put_u32(skb, NFSD_A_SERVER_THREADS, 60749cbe3d8ae5 NeilBrown 2024-07-15 1720 sp->sp_nrthreads); 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1721 if (err) 40677c06f2c6a4 NeilBrown 2025-06-21 1722 goto err_free_msg; 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1723 } 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1724 } else { 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1725 err = nla_put_u32(skb, NFSD_A_SERVER_THREADS, 0); 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1726 if (err) 40677c06f2c6a4 NeilBrown 2025-06-21 1727 goto err_free_msg; 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1728 } 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1729 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1730 genlmsg_end(skb, hdr); 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1731 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1732 return genlmsg_reply(skb, info); 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1733 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1734 err_free_msg: 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1735 nlmsg_free(skb); 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1736 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1737 return err; 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1738 } 924f4fb003ba11 Lorenzo Bianconi 2024-04-23 1739 -- 0-DAY CI Kernel Test Service https://github.com/intel/lkp-tests/wiki ^ permalink raw reply [flat|nested] 12+ messages in thread
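The build failure above is not specific to nfsd: a variable declared with `__attribute__((cleanup(...)))` — which is what guard(mutex) expands to — may not be jumped over by a goto, because at the label the compiler cannot tell whether the destructor should run. The usual fix is to restructure so that no goto crosses the guard's declaration (or to narrow the critical section with scoped_guard()). A user-space sketch of the broken and fixed shapes, with illustrative names:

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

static pthread_mutex_t config_mutex = PTHREAD_MUTEX_INITIALIZER;
static int balance;             /* 0 when every lock was released */

static void scope_release(pthread_mutex_t **m)
{
        balance--;
        pthread_mutex_unlock(*m);
}

#define SCOPE_GUARD(m) \
        pthread_mutex_t *scope_guard \
                __attribute__((cleanup(scope_release), unused)) = \
                (pthread_mutex_lock(m), balance++, (m))

/* Broken shape (what clang rejects):
 *
 *      if (!buf)
 *              goto err_free_msg;      // jumps over the guard below
 *      SCOPE_GUARD(&config_mutex);
 *      ...
 *  err_free_msg: ...
 *
 * Fixed shape: finish every fallible step that wants a goto before
 * the guard is declared, so no jump crosses its initialization.
 */
static int do_op(int fail_setup)
{
        void *buf = fail_setup ? NULL : malloc(16);

        if (!buf)
                return -12;     /* -ENOMEM; plain return, no goto */

        SCOPE_GUARD(&config_mutex);
        free(buf);              /* locked work would go here */
        return 0;               /* cleanup unlocks on return */
}
```

In the nfsctl.c cases the robot flagged, the `goto err_free_msg` for a failed genlmsg_iput() sits before the `guard(mutex)` line, so the error paths after the guard must either return directly or the allocation failure handling must be completed before the guard is taken.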
* Re: [PATCH 0/3 RFC] improve some nfsd_mutex locking 2025-06-20 23:33 [PATCH 0/3 RFC] improve some nfsd_mutex locking NeilBrown ` (2 preceding siblings ...) 2025-06-20 23:33 ` [PATCH 3/3] nfsd: split nfsd_mutex into one mutex per net-namespace NeilBrown @ 2025-06-21 15:21 ` Chuck Lever 2025-06-21 17:48 ` Jeff Layton 2025-06-23 2:47 ` NeilBrown 3 siblings, 2 replies; 12+ messages in thread From: Chuck Lever @ 2025-06-21 15:21 UTC (permalink / raw) To: NeilBrown, Jeff Layton Cc: linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey, Li Lingfeng On 6/20/25 7:33 PM, NeilBrown wrote: > The first patch hopefully fixes a bug with locking as reported by Li > Lingfeng: some write_foo functions aren't locked properly. > > The other two improve the locking code, particulary so that we don't > need a global mutex to change per-netns data. > > I've revised the locking to use guard(mutex) for (almost) all places > that the per-netfs mutex is used. I think this is an improvement but > would like to know what others think. > > I haven't changed _get/_put to _pin/_unpin as Chuck wondered about. I'm > not against that (though get/put are widely understood) but nor am I > particularly for it yet. Again, opinions are welcome. I think of get and put as operations you do on an object. Saying nfsd_startup_get(); seems a little strange to me. As I said before, it seems like you are protecting a critical section, not a particular object. -- Chuck Lever ^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH 0/3 RFC] improve some nfsd_mutex locking 2025-06-21 15:21 ` [PATCH 0/3 RFC] improve some nfsd_mutex locking Chuck Lever @ 2025-06-21 17:48 ` Jeff Layton 2025-06-23 2:47 ` NeilBrown 1 sibling, 0 replies; 12+ messages in thread From: Jeff Layton @ 2025-06-21 17:48 UTC (permalink / raw) To: Chuck Lever, NeilBrown Cc: linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey, Li Lingfeng On Sat, 2025-06-21 at 11:21 -0400, Chuck Lever wrote: > On 6/20/25 7:33 PM, NeilBrown wrote: > > The first patch hopefully fixes a bug with locking as reported by Li > > Lingfeng: some write_foo functions aren't locked properly. > > > > The other two improve the locking code, particulary so that we don't > > need a global mutex to change per-netns data. > > > > I've revised the locking to use guard(mutex) for (almost) all places > > that the per-netfs mutex is used. I think this is an improvement but > > would like to know what others think. > > > > I haven't changed _get/_put to _pin/_unpin as Chuck wondered about. I'm > > not against that (though get/put are widely understood) but nor am I > > particularly for it yet. Again, opinions are welcome. > > I think of get and put as operations you do on an object. Saying > > nfsd_startup_get(); > > seems a little strange to me. As I said before, it seems like you > are protecting a critical section, not a particular object. > I think of it as taking a reference to the service being up and running. Maybe nfsd_service_get/put() ? -- Jeff Layton <jlayton@kernel.org> ^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH 0/3 RFC] improve some nfsd_mutex locking 2025-06-21 15:21 ` [PATCH 0/3 RFC] improve some nfsd_mutex locking Chuck Lever 2025-06-21 17:48 ` Jeff Layton @ 2025-06-23 2:47 ` NeilBrown 1 sibling, 0 replies; 12+ messages in thread From: NeilBrown @ 2025-06-23 2:47 UTC (permalink / raw) To: Chuck Lever Cc: Jeff Layton, linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey, Li Lingfeng On Sun, 22 Jun 2025, Chuck Lever wrote: > On 6/20/25 7:33 PM, NeilBrown wrote: > > The first patch hopefully fixes a bug with locking as reported by Li > > Lingfeng: some write_foo functions aren't locked properly. > > > > The other two improve the locking code, particulary so that we don't > > need a global mutex to change per-netns data. > > > > I've revised the locking to use guard(mutex) for (almost) all places > > that the per-netfs mutex is used. I think this is an improvement but > > would like to know what others think. > > > > I haven't changed _get/_put to _pin/_unpin as Chuck wondered about. I'm > > not against that (though get/put are widely understood) but nor am I > > particularly for it yet. Again, opinions are welcome. > > I think of get and put as operations you do on an object. Saying > > nfsd_startup_get(); > > seems a little strange to me. As I said before, it seems like you > are protecting a critical section, not a particular object. I agree it looks like a critical section. That suggests lock/unlock naming. A "get" is somewhat like a read_trylock. But a put isn't much like an unlock. But maybe I can side-step the whole issue. I think it is reasonable to move the nfsd_shutdown_generic() code into nfsd_exit(), which is called at module_put time. There is no real need for any of this before then. With that done, the ref on the module that we already maintain correctly will ensure no code can race with the cleanup in nfsd_shutdown_generic(). Then we don't need nfsd_users as a counter - just a flag to say whether startup has completed yet. This series does that.
It also delays the conversion to guard(mutex) until a final patch so it can be reviewed separately - or rejected separately. Thanks, NeilBrown ^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH 0/3 RFC] improve some nfsd_mutex locking @ 2025-06-18 21:31 NeilBrown 2025-06-18 21:31 ` [PATCH 3/3] nfsd: split nfsd_mutex into one mutex per net-namespace NeilBrown 0 siblings, 1 reply; 12+ messages in thread From: NeilBrown @ 2025-06-18 21:31 UTC (permalink / raw) To: Chuck Lever, Jeff Layton Cc: linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey, Li Lingfeng The first patch hopefully fixes a bug with locking: some write_foo functions aren't locked properly. The other two improve the locking code, particularly so that we don't need a global mutex to change per-netns data. NeilBrown [PATCH 1/3] nfsd: provide proper locking for all write_ function [PATCH 2/3] nfsd: use kref and new mutex for global config management [PATCH 3/3] nfsd: split nfsd_mutex into one mutex per net-namespace. ^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH 3/3] nfsd: split nfsd_mutex into one mutex per net-namespace. 2025-06-18 21:31 NeilBrown @ 2025-06-18 21:31 ` NeilBrown 2025-06-19 12:33 ` kernel test robot 2025-06-20 13:13 ` Jeff Layton 0 siblings, 2 replies; 12+ messages in thread From: NeilBrown @ 2025-06-18 21:31 UTC (permalink / raw) To: Chuck Lever, Jeff Layton Cc: linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey, Li Lingfeng The remaining uses for nfsd_mutex are all to protect per-netns resources. This patch replaces the global mutex with one per netns. The "svc_info" struct now contains that mutex rather than a pointer to the mutex. Macros are provided to make it easy to take the mutex given a file or net. Signed-off-by: NeilBrown <neil@brown.name> --- .../admin-guide/nfs/nfsd-admin-interfaces.rst | 2 +- fs/nfsd/nfsctl.c | 113 +++++++++--------- fs/nfsd/nfsd.h | 1 - fs/nfsd/nfssvc.c | 33 ++--- include/linux/sunrpc/svc.h | 2 +- net/sunrpc/svc_xprt.c | 4 +- 6 files changed, 72 insertions(+), 83 deletions(-) diff --git a/Documentation/admin-guide/nfs/nfsd-admin-interfaces.rst b/Documentation/admin-guide/nfs/nfsd-admin-interfaces.rst index c05926f79054..9548e4ab35b6 100644 --- a/Documentation/admin-guide/nfs/nfsd-admin-interfaces.rst +++ b/Documentation/admin-guide/nfs/nfsd-admin-interfaces.rst @@ -37,4 +37,4 @@ Implementation notes Note that the rpc server requires the caller to serialize addition and removal of listening sockets, and startup and shutdown of the server. -For nfsd this is done using nfsd_mutex. +For nfsd this is done using nfsd_info.mutex in struct nfsd_net. diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c index 3710a1992d17..70eddf2640f0 100644 --- a/fs/nfsd/nfsctl.c +++ b/fs/nfsd/nfsctl.c @@ -95,6 +95,13 @@ static ssize_t (*const write_op[])(struct file *, char *, size_t) = { #endif }; +#define with_nfsd_net_locked(__net) \ + for (struct nfsd_net *__nn = net_generic(__net, nfsd_net_id); \ + __nn ? 
({mutex_lock(&__nn->nfsd_info.mutex); 1; }) : 0; \ + ({mutex_unlock(&__nn->nfsd_info.mutex); __nn = NULL;})) +#define with_nfsd_file_locked(__file) \ + with_nfsd_net_locked(netns(__file)) + static ssize_t nfsctl_transaction_write(struct file *file, const char __user *buf, size_t size, loff_t *pos) { ino_t ino = file_inode(file)->i_ino; @@ -249,9 +256,8 @@ static ssize_t write_unlock_ip(struct file *file, char *buf, size_t size) { ssize_t rv; - mutex_lock(&nfsd_mutex); - rv = __write_unlock_ip(file, buf, size); - mutex_unlock(&nfsd_mutex); + with_nfsd_file_locked(file) + rv = __write_unlock_ip(file, buf, size); return rv; } @@ -315,9 +321,8 @@ static ssize_t write_unlock_fs(struct file *file, char *buf, size_t size) { ssize_t rv; - mutex_lock(&nfsd_mutex); - rv = __write_unlock_fs(file, buf, size); - mutex_unlock(&nfsd_mutex); + with_nfsd_file_locked(file) + rv = __write_unlock_fs(file, buf, size); return rv; } @@ -440,9 +445,8 @@ static ssize_t write_threads(struct file *file, char *buf, size_t size) if (newthreads < 0) return -EINVAL; trace_nfsd_ctl_threads(net, newthreads); - mutex_lock(&nfsd_mutex); - rv = nfsd_svc(1, &newthreads, net, file->f_cred, NULL); - mutex_unlock(&nfsd_mutex); + with_nfsd_net_locked(net) + rv = nfsd_svc(1, &newthreads, net, file->f_cred, NULL); if (rv < 0) return rv; } else @@ -473,7 +477,7 @@ static ssize_t write_threads(struct file *file, char *buf, size_t size) * return code is the size in bytes of the string * On error: return code is zero or a negative errno value */ -static ssize_t write_pool_threads(struct file *file, char *buf, size_t size) +static ssize_t __write_pool_threads(struct file *file, char *buf, size_t size) { /* if size > 0, look for an array of number of threads per node * and apply them then write out number of threads per node as reply @@ -486,7 +490,6 @@ static ssize_t write_pool_threads(struct file *file, char *buf, size_t size) int *nthreads; struct net *net = netns(file); - mutex_lock(&nfsd_mutex); npools = 
nfsd_nrpools(net); if (npools == 0) { /* @@ -494,7 +497,6 @@ static ssize_t write_pool_threads(struct file *file, char *buf, size_t size) * writing to the threads file but NOT the pool_threads * file, sorry. Report zero threads. */ - mutex_unlock(&nfsd_mutex); strcpy(buf, "0\n"); return strlen(buf); } @@ -544,10 +546,18 @@ static ssize_t write_pool_threads(struct file *file, char *buf, size_t size) rv = mesg - buf; out_free: kfree(nthreads); - mutex_unlock(&nfsd_mutex); return rv; } +static ssize_t write_pool_threads(struct file *file, char *buf, size_t size) +{ + ssize_t ret; + + with_nfsd_file_locked(file) + ret = __write_pool_threads(file, buf, size); + return ret; +} + static ssize_t nfsd_print_version_support(struct nfsd_net *nn, char *buf, int remaining, const char *sep, unsigned vers, int minor) @@ -709,9 +719,9 @@ static ssize_t write_versions(struct file *file, char *buf, size_t size) { ssize_t rv; - mutex_lock(&nfsd_mutex); - rv = __write_versions(file, buf, size); - mutex_unlock(&nfsd_mutex); + with_nfsd_file_locked(file) + rv = __write_versions(file, buf, size); + return rv; } @@ -868,9 +878,8 @@ static ssize_t write_ports(struct file *file, char *buf, size_t size) { ssize_t rv; - mutex_lock(&nfsd_mutex); - rv = __write_ports(file, buf, size, netns(file)); - mutex_unlock(&nfsd_mutex); + with_nfsd_file_locked(file) + rv = __write_ports(file, buf, size, netns(file)); return rv; } @@ -916,13 +925,13 @@ static ssize_t write_maxblksize(struct file *file, char *buf, size_t size) bsize = max_t(int, bsize, 1024); bsize = min_t(int, bsize, NFSSVC_MAXBLKSIZE); bsize &= ~(1024-1); - mutex_lock(&nfsd_mutex); + mutex_lock(&nn->nfsd_info.mutex); if (nn->nfsd_serv) { - mutex_unlock(&nfsd_mutex); + mutex_unlock(&nn->nfsd_info.mutex); return -EBUSY; } nfsd_max_blksize = bsize; - mutex_unlock(&nfsd_mutex); + mutex_unlock(&nn->nfsd_info.mutex); } return scnprintf(buf, SIMPLE_TRANSACTION_LIMIT, "%d\n", @@ -971,9 +980,8 @@ static ssize_t nfsd4_write_time(struct file *file, 
char *buf, size_t size, { ssize_t rv; - mutex_lock(&nfsd_mutex); - rv = __nfsd4_write_time(file, buf, size, time, nn); - mutex_unlock(&nfsd_mutex); + with_nfsd_file_locked(file) + rv = __nfsd4_write_time(file, buf, size, time, nn); return rv; } @@ -1076,9 +1084,8 @@ static ssize_t write_recoverydir(struct file *file, char *buf, size_t size) ssize_t rv; struct nfsd_net *nn = net_generic(netns(file), nfsd_net_id); - mutex_lock(&nfsd_mutex); - rv = __write_recoverydir(file, buf, size, nn); - mutex_unlock(&nfsd_mutex); + with_nfsd_file_locked(file) + rv = __write_recoverydir(file, buf, size, nn); return rv; } #endif @@ -1130,9 +1137,8 @@ static ssize_t write_v4_end_grace(struct file *file, char *buf, size_t size) { ssize_t rv; - mutex_lock(&nfsd_mutex); - rv = __write_v4_end_grace(file, buf, size); - mutex_unlock(&nfsd_mutex); + with_nfsd_file_locked(file) + rv = __write_v4_end_grace(file, buf, size); return rv; } #endif @@ -1552,9 +1558,8 @@ int nfsd_nl_rpc_status_get_dumpit(struct sk_buff *skb, int i, ret, rqstp_index = 0; struct nfsd_net *nn; - mutex_lock(&nfsd_mutex); - nn = net_generic(sock_net(skb->sk), nfsd_net_id); + mutex_lock(&nn->nfsd_info.mutex); if (!nn->nfsd_serv) { ret = -ENODEV; goto out_unlock; @@ -1636,7 +1641,7 @@ int nfsd_nl_rpc_status_get_dumpit(struct sk_buff *skb, out: rcu_read_unlock(); out_unlock: - mutex_unlock(&nfsd_mutex); + mutex_unlock(&nn->nfsd_info.mutex); return ret; } @@ -1665,7 +1670,7 @@ int nfsd_nl_threads_set_doit(struct sk_buff *skb, struct genl_info *info) count++; } - mutex_lock(&nfsd_mutex); + mutex_lock(&nn->nfsd_info.mutex); nrpools = max(count, nfsd_nrpools(net)); nthreads = kcalloc(nrpools, sizeof(int), GFP_KERNEL); @@ -1720,7 +1725,7 @@ int nfsd_nl_threads_set_doit(struct sk_buff *skb, struct genl_info *info) if (ret > 0) ret = 0; out_unlock: - mutex_unlock(&nfsd_mutex); + mutex_unlock(&nn->nfsd_info.mutex); kfree(nthreads); return ret; } @@ -1749,7 +1754,7 @@ int nfsd_nl_threads_get_doit(struct sk_buff *skb, struct 
genl_info *info) goto err_free_msg; } - mutex_lock(&nfsd_mutex); + mutex_lock(&nn->nfsd_info.mutex); err = nla_put_u32(skb, NFSD_A_SERVER_GRACETIME, nn->nfsd4_grace) || @@ -1777,14 +1782,14 @@ int nfsd_nl_threads_get_doit(struct sk_buff *skb, struct genl_info *info) goto err_unlock; } - mutex_unlock(&nfsd_mutex); + mutex_unlock(&nn->nfsd_info.mutex); genlmsg_end(skb, hdr); return genlmsg_reply(skb, info); err_unlock: - mutex_unlock(&nfsd_mutex); + mutex_unlock(&nn->nfsd_info.mutex); err_free_msg: nlmsg_free(skb); @@ -1807,11 +1812,10 @@ int nfsd_nl_version_set_doit(struct sk_buff *skb, struct genl_info *info) if (GENL_REQ_ATTR_CHECK(info, NFSD_A_SERVER_PROTO_VERSION)) return -EINVAL; - mutex_lock(&nfsd_mutex); - nn = net_generic(genl_info_net(info), nfsd_net_id); + mutex_lock(&nn->nfsd_info.mutex); if (nn->nfsd_serv) { - mutex_unlock(&nfsd_mutex); + mutex_unlock(&nn->nfsd_info.mutex); return -EBUSY; } @@ -1856,7 +1860,7 @@ int nfsd_nl_version_set_doit(struct sk_buff *skb, struct genl_info *info) } } - mutex_unlock(&nfsd_mutex); + mutex_unlock(&nn->nfsd_info.mutex); return 0; } @@ -1884,7 +1888,7 @@ int nfsd_nl_version_get_doit(struct sk_buff *skb, struct genl_info *info) goto err_free_msg; } - mutex_lock(&nfsd_mutex); + mutex_lock(&nn->nfsd_info.mutex); nn = net_generic(genl_info_net(info), nfsd_net_id); for (i = 2; i <= 4; i++) { @@ -1928,13 +1932,13 @@ int nfsd_nl_version_get_doit(struct sk_buff *skb, struct genl_info *info) } } - mutex_unlock(&nfsd_mutex); + mutex_unlock(&nn->nfsd_info.mutex); genlmsg_end(skb, hdr); return genlmsg_reply(skb, info); err_nfsd_unlock: - mutex_unlock(&nfsd_mutex); + mutex_unlock(&nn->nfsd_info.mutex); err_free_msg: nlmsg_free(skb); @@ -1959,15 +1963,16 @@ int nfsd_nl_listener_set_doit(struct sk_buff *skb, struct genl_info *info) bool delete = false; int err, rem; - mutex_lock(&nfsd_mutex); + nn = net_generic(net, nfsd_net_id); + + mutex_lock(&nn->nfsd_info.mutex); err = nfsd_create_serv(net); if (err) { - mutex_unlock(&nfsd_mutex); 
+ mutex_unlock(&nn->nfsd_info.mutex); return err; } - nn = net_generic(net, nfsd_net_id); serv = nn->nfsd_serv; spin_lock_bh(&serv->sv_lock); @@ -2083,7 +2088,7 @@ int nfsd_nl_listener_set_doit(struct sk_buff *skb, struct genl_info *info) nfsd_destroy_serv(net); out_unlock_mtx: - mutex_unlock(&nfsd_mutex); + mutex_unlock(&nn->nfsd_info.mutex); return err; } @@ -2113,8 +2118,8 @@ int nfsd_nl_listener_get_doit(struct sk_buff *skb, struct genl_info *info) goto err_free_msg; } - mutex_lock(&nfsd_mutex); nn = net_generic(genl_info_net(info), nfsd_net_id); + mutex_lock(&nn->nfsd_info.mutex); /* no nfs server? Just send empty socket list */ if (!nn->nfsd_serv) @@ -2144,14 +2149,14 @@ int nfsd_nl_listener_get_doit(struct sk_buff *skb, struct genl_info *info) } spin_unlock_bh(&serv->sv_lock); out_unlock_mtx: - mutex_unlock(&nfsd_mutex); + mutex_unlock(&nn->nfsd_info.mutex); genlmsg_end(skb, hdr); return genlmsg_reply(skb, info); err_serv_unlock: spin_unlock_bh(&serv->sv_lock); - mutex_unlock(&nfsd_mutex); + mutex_unlock(&nn->nfsd_info.mutex); err_free_msg: nlmsg_free(skb); @@ -2253,7 +2258,7 @@ static __net_init int nfsd_net_init(struct net *net) nn->nfsd_versions[i] = nfsd_support_version(i); for (i = 0; i < sizeof(nn->nfsd4_minorversions); i++) nn->nfsd4_minorversions[i] = nfsd_support_version(4); - nn->nfsd_info.mutex = &nfsd_mutex; + mutex_init(&nn->nfsd_info.mutex); nn->nfsd_serv = NULL; nfsd4_init_leases_net(nn); get_random_bytes(&nn->siphash_key, sizeof(nn->siphash_key)); diff --git a/fs/nfsd/nfsd.h b/fs/nfsd/nfsd.h index 8ad9fcc23789..3cbca4d34f48 100644 --- a/fs/nfsd/nfsd.h +++ b/fs/nfsd/nfsd.h @@ -77,7 +77,6 @@ struct nfsd_genl_rqstp { extern struct svc_program nfsd_programs[]; extern const struct svc_version nfsd_version2, nfsd_version3, nfsd_version4; -extern struct mutex nfsd_mutex; extern atomic_t nfsd_th_cnt; /* number of available threads */ bool nfsd_startup_get(void); diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c index b2080e5a71e6..9f70b1fbc55e 100644 
--- a/fs/nfsd/nfssvc.c
+++ b/fs/nfsd/nfssvc.c
@@ -56,20 +56,6 @@ static __be32 nfsd_init_request(struct svc_rqst *,
 				const struct svc_program *,
 				struct svc_process_info *);
 
-/*
- * nfsd_mutex protects nn->nfsd_serv -- both the pointer itself and some members
- * of the svc_serv struct such as ->sv_temp_socks and ->sv_permsocks.
- *
- * Finally, the nfsd_mutex also protects some of the global variables that are
- * accessed when nfsd starts and that are settable via the write_* routines in
- * nfsctl.c. In particular:
- *
- *	user_recovery_dirname
- *	user_lease_time
- *	nfsd_versions
- */
-DEFINE_MUTEX(nfsd_mutex);
-
 #if IS_ENABLED(CONFIG_NFS_LOCALIO)
 static const struct svc_version *localio_versions[] = {
 	[1] = &localio_version1,
@@ -242,10 +228,10 @@ int nfsd_nrthreads(struct net *net)
 	int rv = 0;
 	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
 
-	mutex_lock(&nfsd_mutex);
+	mutex_lock(&nn->nfsd_info.mutex);
 	if (nn->nfsd_serv)
 		rv = nn->nfsd_serv->sv_nrthreads;
-	mutex_unlock(&nfsd_mutex);
+	mutex_unlock(&nn->nfsd_info.mutex);
 	return rv;
 }
 
@@ -522,7 +508,6 @@ static struct notifier_block nfsd_inet6addr_notifier = {
 };
 #endif
 
-/* Only used under nfsd_mutex, so this atomic may be overkill: */
 static atomic_t nfsd_notifier_refcount = ATOMIC_INIT(0);
 
 /**
@@ -534,7 +519,7 @@ void nfsd_destroy_serv(struct net *net)
 	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
 	struct svc_serv *serv = nn->nfsd_serv;
 
-	lockdep_assert_held(&nfsd_mutex);
+	lockdep_assert_held(&nn->nfsd_info.mutex);
 
 	spin_lock(&nfsd_notifier_lock);
 	nn->nfsd_serv = NULL;
@@ -606,17 +591,17 @@ void nfsd_shutdown_threads(struct net *net)
 	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
 	struct svc_serv *serv;
 
-	mutex_lock(&nfsd_mutex);
+	mutex_lock(&nn->nfsd_info.mutex);
 	serv = nn->nfsd_serv;
 	if (serv == NULL) {
-		mutex_unlock(&nfsd_mutex);
+		mutex_unlock(&nn->nfsd_info.mutex);
 		return;
 	}
 
 	/* Kill outstanding nfsd threads */
 	svc_set_num_threads(serv, NULL, 0);
 	nfsd_destroy_serv(net);
-	mutex_unlock(&nfsd_mutex);
+	mutex_unlock(&nn->nfsd_info.mutex);
 }
 
 struct svc_rqst *nfsd_current_rqst(void)
@@ -632,7 +617,7 @@ int nfsd_create_serv(struct net *net)
 	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
 	struct svc_serv *serv;
 
-	WARN_ON(!mutex_is_locked(&nfsd_mutex));
+	WARN_ON(!mutex_is_locked(&nn->nfsd_info.mutex));
 	if (nn->nfsd_serv)
 		return 0;
 
@@ -714,7 +699,7 @@ int nfsd_set_nrthreads(int n, int *nthreads, struct net *net)
 	int err = 0;
 	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
 
-	lockdep_assert_held(&nfsd_mutex);
+	lockdep_assert_held(&nn->nfsd_info.mutex);
 
 	if (nn->nfsd_serv == NULL || n <= 0)
 		return 0;
@@ -787,7 +772,7 @@ nfsd_svc(int n, int *nthreads, struct net *net, const struct cred *cred, const c
 	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
 	struct svc_serv *serv;
 
-	lockdep_assert_held(&nfsd_mutex);
+	lockdep_assert_held(&nn->nfsd_info.mutex);
 
 	dprintk("nfsd: creating service\n");
 
diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
index 48666b83fe68..a12fe99156ec 100644
--- a/include/linux/sunrpc/svc.h
+++ b/include/linux/sunrpc/svc.h
@@ -98,7 +98,7 @@ struct svc_serv {
 /* This is used by pool_stats to find and lock an svc */
 struct svc_info {
 	struct svc_serv		*serv;
-	struct mutex		*mutex;
+	struct mutex		mutex;
 };
 
 void svc_destroy(struct svc_serv **svcp);
diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
index 8b1837228799..b8352b7d6860 100644
--- a/net/sunrpc/svc_xprt.c
+++ b/net/sunrpc/svc_xprt.c
@@ -1399,7 +1399,7 @@ static void *svc_pool_stats_start(struct seq_file *m, loff_t *pos)
 
 	dprintk("svc_pool_stats_start, *pidx=%u\n", pidx);
 
-	mutex_lock(si->mutex);
+	mutex_lock(&si->mutex);
 
 	if (!pidx)
 		return SEQ_START_TOKEN;
@@ -1436,7 +1436,7 @@ static void svc_pool_stats_stop(struct seq_file *m, void *p)
 {
 	struct svc_info *si = m->private;
 
-	mutex_unlock(si->mutex);
+	mutex_unlock(&si->mutex);
 }
 
 static int svc_pool_stats_show(struct seq_file *m, void *p)
-- 
2.49.0

^ permalink raw reply related	[flat|nested] 12+ messages in thread
* Re: [PATCH 3/3] nfsd: split nfsd_mutex into one mutex per net-namespace. 2025-06-18 21:31 ` [PATCH 3/3] nfsd: split nfsd_mutex into one mutex per net-namespace NeilBrown @ 2025-06-19 12:33 ` kernel test robot 2025-06-20 13:13 ` Jeff Layton 1 sibling, 0 replies; 12+ messages in thread From: kernel test robot @ 2025-06-19 12:33 UTC (permalink / raw) To: NeilBrown, Chuck Lever, Jeff Layton Cc: llvm, oe-kbuild-all, linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey, Li Lingfeng Hi NeilBrown, kernel test robot noticed the following build warnings: [auto build test WARNING on brauner-vfs/vfs.all] [also build test WARNING on trondmy-nfs/linux-next linus/master v6.16-rc2 next-20250619] [If your patch is applied to the wrong git tree, kindly drop us a note. And when submitting patch, we suggest to use '--base' as documented in https://git-scm.com/docs/git-format-patch#_base_tree_information] url: https://github.com/intel-lab-lkp/linux/commits/NeilBrown/nfsd-provide-proper-locking-for-all-write_-function/20250619-053514 base: https://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs.git vfs.all patch link: https://lore.kernel.org/r/20250618213347.425503-4-neil%40brown.name patch subject: [PATCH 3/3] nfsd: split nfsd_mutex into one mutex per net-namespace. config: x86_64-kexec (https://download.01.org/0day-ci/archive/20250619/202506192052.L9tj28RJ-lkp@intel.com/config) compiler: clang version 20.1.2 (https://github.com/llvm/llvm-project 58df0ef89dd64126512e4ee27b4ac3fd8ddf6247) reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250619/202506192052.L9tj28RJ-lkp@intel.com/reproduce) If you fix the issue in a separate patch/commit (i.e. 
not just a new version of the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202506192052.L9tj28RJ-lkp@intel.com/

All warnings (new ones prefixed by >>):

>> fs/nfsd/nfsctl.c:1891:14: warning: variable 'nn' is uninitialized when used here [-Wuninitialized]
    1891 |         mutex_lock(&nn->nfsd_info.mutex);
         |                     ^~
   fs/nfsd/nfsctl.c:1877:21: note: initialize the variable 'nn' to silence this warning
    1877 |         struct nfsd_net *nn;
         |                          ^
         |                           = NULL
   1 warning generated.

vim +/nn +1891 fs/nfsd/nfsctl.c

  1867	
  1868	/**
  1869	 * nfsd_nl_version_get_doit - get the enabled status for all supported nfs versions
  1870	 * @skb: reply buffer
  1871	 * @info: netlink metadata and command arguments
  1872	 *
  1873	 * Return 0 on success or a negative errno.
  1874	 */
  1875	int nfsd_nl_version_get_doit(struct sk_buff *skb, struct genl_info *info)
  1876	{
  1877		struct nfsd_net *nn;
  1878		int i, err;
  1879		void *hdr;
  1880	
  1881		skb = genlmsg_new(GENLMSG_DEFAULT_SIZE, GFP_KERNEL);
  1882		if (!skb)
  1883			return -ENOMEM;
  1884	
  1885		hdr = genlmsg_iput(skb, info);
  1886		if (!hdr) {
  1887			err = -EMSGSIZE;
  1888			goto err_free_msg;
  1889		}
  1890	
> 1891		mutex_lock(&nn->nfsd_info.mutex);
  1892		nn = net_generic(genl_info_net(info), nfsd_net_id);
  1893	
  1894		for (i = 2; i <= 4; i++) {
  1895			int j;
  1896	
  1897			for (j = 0; j <= NFSD_SUPPORTED_MINOR_VERSION; j++) {
  1898				struct nlattr *attr;
  1899	
  1900				/* Don't record any versions the kernel doesn't have
  1901				 * compiled in
  1902				 */
  1903				if (!nfsd_support_version(i))
  1904					continue;
  1905	
  1906				/* NFSv{2,3} does not support minor numbers */
  1907				if (i < 4 && j)
  1908					continue;
  1909	
  1910				attr = nla_nest_start(skb,
  1911						      NFSD_A_SERVER_PROTO_VERSION);
  1912				if (!attr) {
  1913					err = -EINVAL;
  1914					goto err_nfsd_unlock;
  1915				}
  1916	
  1917				if (nla_put_u32(skb, NFSD_A_VERSION_MAJOR, i) ||
  1918				    nla_put_u32(skb, NFSD_A_VERSION_MINOR, j)) {
  1919					err = -EINVAL;
  1920					goto err_nfsd_unlock;
  1921				}
  1922	
  1923				/* Set the enabled flag if the version is enabled */
  1924				if (nfsd_vers(nn, i, NFSD_TEST) &&
  1925				    (i < 4 || nfsd_minorversion(nn, j, NFSD_TEST)) &&
  1926				    nla_put_flag(skb, NFSD_A_VERSION_ENABLED)) {
  1927					err = -EINVAL;
  1928					goto err_nfsd_unlock;
  1929				}
  1930	
  1931				nla_nest_end(skb, attr);
  1932			}
  1933		}
  1934	
  1935		mutex_unlock(&nn->nfsd_info.mutex);
  1936		genlmsg_end(skb, hdr);
  1937	
  1938		return genlmsg_reply(skb, info);
  1939	
  1940	err_nfsd_unlock:
  1941		mutex_unlock(&nn->nfsd_info.mutex);
  1942	err_free_msg:
  1943		nlmsg_free(skb);
  1944	
  1945		return err;
  1946	}
  1947	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

^ permalink raw reply	[flat|nested] 12+ messages in thread
* Re: [PATCH 3/3] nfsd: split nfsd_mutex into one mutex per net-namespace. 2025-06-18 21:31 ` [PATCH 3/3] nfsd: split nfsd_mutex into one mutex per net-namespace NeilBrown 2025-06-19 12:33 ` kernel test robot @ 2025-06-20 13:13 ` Jeff Layton 1 sibling, 0 replies; 12+ messages in thread From: Jeff Layton @ 2025-06-20 13:13 UTC (permalink / raw) To: NeilBrown, Chuck Lever Cc: linux-nfs, Olga Kornievskaia, Dai Ngo, Tom Talpey, Li Lingfeng On Thu, 2025-06-19 at 07:31 +1000, NeilBrown wrote: > The remaining uses for nfsd_mutex are all to protect per-netns > resources. > > This patch replaces the global mutex with one per netns. The "svc_info" > struct now contains that mutex rather than a pointer to the mutex. > > Macros are provided to make it easy to take the mutex given a file or net. > > Signed-off-by: NeilBrown <neil@brown.name> > --- > .../admin-guide/nfs/nfsd-admin-interfaces.rst | 2 +- > fs/nfsd/nfsctl.c | 113 +++++++++--------- > fs/nfsd/nfsd.h | 1 - > fs/nfsd/nfssvc.c | 33 ++--- > include/linux/sunrpc/svc.h | 2 +- > net/sunrpc/svc_xprt.c | 4 +- > 6 files changed, 72 insertions(+), 83 deletions(-) > > diff --git a/Documentation/admin-guide/nfs/nfsd-admin-interfaces.rst b/Documentation/admin-guide/nfs/nfsd-admin-interfaces.rst > index c05926f79054..9548e4ab35b6 100644 > --- a/Documentation/admin-guide/nfs/nfsd-admin-interfaces.rst > +++ b/Documentation/admin-guide/nfs/nfsd-admin-interfaces.rst > @@ -37,4 +37,4 @@ Implementation notes > > Note that the rpc server requires the caller to serialize addition and > removal of listening sockets, and startup and shutdown of the server. > -For nfsd this is done using nfsd_mutex. > +For nfsd this is done using nfsd_info.mutex in struct nfsd_net. 
> diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c > index 3710a1992d17..70eddf2640f0 100644 > --- a/fs/nfsd/nfsctl.c > +++ b/fs/nfsd/nfsctl.c > @@ -95,6 +95,13 @@ static ssize_t (*const write_op[])(struct file *, char *, size_t) = { > #endif > }; > > +#define with_nfsd_net_locked(__net) \ > + for (struct nfsd_net *__nn = net_generic(__net, nfsd_net_id); \ > + __nn ? ({mutex_lock(&__nn->nfsd_info.mutex); 1; }) : 0; \ > + ({mutex_unlock(&__nn->nfsd_info.mutex); __nn = NULL;})) > +#define with_nfsd_file_locked(__file) \ > + with_nfsd_net_locked(netns(__file)) > + This is certainly clever, but I think I'd rather have simple nfsd_net_mutex_lock/_unlock() functions than have to maintain this sort of macro. > static ssize_t nfsctl_transaction_write(struct file *file, const char __user *buf, size_t size, loff_t *pos) > { > ino_t ino = file_inode(file)->i_ino; > @@ -249,9 +256,8 @@ static ssize_t write_unlock_ip(struct file *file, char *buf, size_t size) > { > ssize_t rv; > > - mutex_lock(&nfsd_mutex); > - rv = __write_unlock_ip(file, buf, size); > - mutex_unlock(&nfsd_mutex); > + with_nfsd_file_locked(file) > + rv = __write_unlock_ip(file, buf, size); > return rv; > } > > @@ -315,9 +321,8 @@ static ssize_t write_unlock_fs(struct file *file, char *buf, size_t size) > { > ssize_t rv; > > - mutex_lock(&nfsd_mutex); > - rv = __write_unlock_fs(file, buf, size); > - mutex_unlock(&nfsd_mutex); > + with_nfsd_file_locked(file) > + rv = __write_unlock_fs(file, buf, size); > return rv; > } > > @@ -440,9 +445,8 @@ static ssize_t write_threads(struct file *file, char *buf, size_t size) > if (newthreads < 0) > return -EINVAL; > trace_nfsd_ctl_threads(net, newthreads); > - mutex_lock(&nfsd_mutex); > - rv = nfsd_svc(1, &newthreads, net, file->f_cred, NULL); > - mutex_unlock(&nfsd_mutex); > + with_nfsd_net_locked(net) > + rv = nfsd_svc(1, &newthreads, net, file->f_cred, NULL); > if (rv < 0) > return rv; > } else > @@ -473,7 +477,7 @@ static ssize_t write_threads(struct file *file, char 
*buf, size_t size) > * return code is the size in bytes of the string > * On error: return code is zero or a negative errno value > */ > -static ssize_t write_pool_threads(struct file *file, char *buf, size_t size) > +static ssize_t __write_pool_threads(struct file *file, char *buf, size_t size) > { > /* if size > 0, look for an array of number of threads per node > * and apply them then write out number of threads per node as reply > @@ -486,7 +490,6 @@ static ssize_t write_pool_threads(struct file *file, char *buf, size_t size) > int *nthreads; > struct net *net = netns(file); > > - mutex_lock(&nfsd_mutex); > npools = nfsd_nrpools(net); > if (npools == 0) { > /* > @@ -494,7 +497,6 @@ static ssize_t write_pool_threads(struct file *file, char *buf, size_t size) > * writing to the threads file but NOT the pool_threads > * file, sorry. Report zero threads. > */ > - mutex_unlock(&nfsd_mutex); > strcpy(buf, "0\n"); > return strlen(buf); > } > @@ -544,10 +546,18 @@ static ssize_t write_pool_threads(struct file *file, char *buf, size_t size) > rv = mesg - buf; > out_free: > kfree(nthreads); > - mutex_unlock(&nfsd_mutex); > return rv; > } > > +static ssize_t write_pool_threads(struct file *file, char *buf, size_t size) > +{ > + ssize_t ret; > + > + with_nfsd_file_locked(file) > + ret = __write_pool_threads(file, buf, size); > + return ret; > +} > + > static ssize_t > nfsd_print_version_support(struct nfsd_net *nn, char *buf, int remaining, > const char *sep, unsigned vers, int minor) > @@ -709,9 +719,9 @@ static ssize_t write_versions(struct file *file, char *buf, size_t size) > { > ssize_t rv; > > - mutex_lock(&nfsd_mutex); > - rv = __write_versions(file, buf, size); > - mutex_unlock(&nfsd_mutex); > + with_nfsd_file_locked(file) > + rv = __write_versions(file, buf, size); > + > return rv; > } > > @@ -868,9 +878,8 @@ static ssize_t write_ports(struct file *file, char *buf, size_t size) > { > ssize_t rv; > > - mutex_lock(&nfsd_mutex); > - rv = __write_ports(file, buf, 
size, netns(file)); > - mutex_unlock(&nfsd_mutex); > + with_nfsd_file_locked(file) > + rv = __write_ports(file, buf, size, netns(file)); > return rv; > } > > @@ -916,13 +925,13 @@ static ssize_t write_maxblksize(struct file *file, char *buf, size_t size) > bsize = max_t(int, bsize, 1024); > bsize = min_t(int, bsize, NFSSVC_MAXBLKSIZE); > bsize &= ~(1024-1); > - mutex_lock(&nfsd_mutex); > + mutex_lock(&nn->nfsd_info.mutex); > if (nn->nfsd_serv) { > - mutex_unlock(&nfsd_mutex); > + mutex_unlock(&nn->nfsd_info.mutex); > return -EBUSY; > } > nfsd_max_blksize = bsize; > - mutex_unlock(&nfsd_mutex); > + mutex_unlock(&nn->nfsd_info.mutex); > } > > return scnprintf(buf, SIMPLE_TRANSACTION_LIMIT, "%d\n", > @@ -971,9 +980,8 @@ static ssize_t nfsd4_write_time(struct file *file, char *buf, size_t size, > { > ssize_t rv; > > - mutex_lock(&nfsd_mutex); > - rv = __nfsd4_write_time(file, buf, size, time, nn); > - mutex_unlock(&nfsd_mutex); > + with_nfsd_file_locked(file) > + rv = __nfsd4_write_time(file, buf, size, time, nn); > return rv; > } > > @@ -1076,9 +1084,8 @@ static ssize_t write_recoverydir(struct file *file, char *buf, size_t size) > ssize_t rv; > struct nfsd_net *nn = net_generic(netns(file), nfsd_net_id); > > - mutex_lock(&nfsd_mutex); > - rv = __write_recoverydir(file, buf, size, nn); > - mutex_unlock(&nfsd_mutex); > + with_nfsd_file_locked(file) > + rv = __write_recoverydir(file, buf, size, nn); > return rv; > } > #endif > @@ -1130,9 +1137,8 @@ static ssize_t write_v4_end_grace(struct file *file, char *buf, size_t size) > { > ssize_t rv; > > - mutex_lock(&nfsd_mutex); > - rv = __write_v4_end_grace(file, buf, size); > - mutex_unlock(&nfsd_mutex); > + with_nfsd_file_locked(file) > + rv = __write_v4_end_grace(file, buf, size); > return rv; > } > #endif > @@ -1552,9 +1558,8 @@ int nfsd_nl_rpc_status_get_dumpit(struct sk_buff *skb, > int i, ret, rqstp_index = 0; > struct nfsd_net *nn; > > - mutex_lock(&nfsd_mutex); > - > nn = net_generic(sock_net(skb->sk), nfsd_net_id); 
> + mutex_lock(&nn->nfsd_info.mutex); > if (!nn->nfsd_serv) { > ret = -ENODEV; > goto out_unlock; > @@ -1636,7 +1641,7 @@ int nfsd_nl_rpc_status_get_dumpit(struct sk_buff *skb, > out: > rcu_read_unlock(); > out_unlock: > - mutex_unlock(&nfsd_mutex); > + mutex_unlock(&nn->nfsd_info.mutex); > > return ret; > } > @@ -1665,7 +1670,7 @@ int nfsd_nl_threads_set_doit(struct sk_buff *skb, struct genl_info *info) > count++; > } > > - mutex_lock(&nfsd_mutex); > + mutex_lock(&nn->nfsd_info.mutex); > > nrpools = max(count, nfsd_nrpools(net)); > nthreads = kcalloc(nrpools, sizeof(int), GFP_KERNEL); > @@ -1720,7 +1725,7 @@ int nfsd_nl_threads_set_doit(struct sk_buff *skb, struct genl_info *info) > if (ret > 0) > ret = 0; > out_unlock: > - mutex_unlock(&nfsd_mutex); > + mutex_unlock(&nn->nfsd_info.mutex); > kfree(nthreads); > return ret; > } > @@ -1749,7 +1754,7 @@ int nfsd_nl_threads_get_doit(struct sk_buff *skb, struct genl_info *info) > goto err_free_msg; > } > > - mutex_lock(&nfsd_mutex); > + mutex_lock(&nn->nfsd_info.mutex); > > err = nla_put_u32(skb, NFSD_A_SERVER_GRACETIME, > nn->nfsd4_grace) || > @@ -1777,14 +1782,14 @@ int nfsd_nl_threads_get_doit(struct sk_buff *skb, struct genl_info *info) > goto err_unlock; > } > > - mutex_unlock(&nfsd_mutex); > + mutex_unlock(&nn->nfsd_info.mutex); > > genlmsg_end(skb, hdr); > > return genlmsg_reply(skb, info); > > err_unlock: > - mutex_unlock(&nfsd_mutex); > + mutex_unlock(&nn->nfsd_info.mutex); > err_free_msg: > nlmsg_free(skb); > > @@ -1807,11 +1812,10 @@ int nfsd_nl_version_set_doit(struct sk_buff *skb, struct genl_info *info) > if (GENL_REQ_ATTR_CHECK(info, NFSD_A_SERVER_PROTO_VERSION)) > return -EINVAL; > > - mutex_lock(&nfsd_mutex); > - > nn = net_generic(genl_info_net(info), nfsd_net_id); > + mutex_lock(&nn->nfsd_info.mutex); > if (nn->nfsd_serv) { > - mutex_unlock(&nfsd_mutex); > + mutex_unlock(&nn->nfsd_info.mutex); > return -EBUSY; > } > > @@ -1856,7 +1860,7 @@ int nfsd_nl_version_set_doit(struct sk_buff *skb, struct 
genl_info *info) > } > } > > - mutex_unlock(&nfsd_mutex); > + mutex_unlock(&nn->nfsd_info.mutex); > > return 0; > } > @@ -1884,7 +1888,7 @@ int nfsd_nl_version_get_doit(struct sk_buff *skb, struct genl_info *info) > goto err_free_msg; > } > > - mutex_lock(&nfsd_mutex); > + mutex_lock(&nn->nfsd_info.mutex); > nn = net_generic(genl_info_net(info), nfsd_net_id); > > for (i = 2; i <= 4; i++) { > @@ -1928,13 +1932,13 @@ int nfsd_nl_version_get_doit(struct sk_buff *skb, struct genl_info *info) > } > } > > - mutex_unlock(&nfsd_mutex); > + mutex_unlock(&nn->nfsd_info.mutex); > genlmsg_end(skb, hdr); > > return genlmsg_reply(skb, info); > > err_nfsd_unlock: > - mutex_unlock(&nfsd_mutex); > + mutex_unlock(&nn->nfsd_info.mutex); > err_free_msg: > nlmsg_free(skb); > > @@ -1959,15 +1963,16 @@ int nfsd_nl_listener_set_doit(struct sk_buff *skb, struct genl_info *info) > bool delete = false; > int err, rem; > > - mutex_lock(&nfsd_mutex); > + nn = net_generic(net, nfsd_net_id); > + > + mutex_lock(&nn->nfsd_info.mutex); > > err = nfsd_create_serv(net); > if (err) { > - mutex_unlock(&nfsd_mutex); > + mutex_unlock(&nn->nfsd_info.mutex); > return err; > } > > - nn = net_generic(net, nfsd_net_id); > serv = nn->nfsd_serv; > > spin_lock_bh(&serv->sv_lock); > @@ -2083,7 +2088,7 @@ int nfsd_nl_listener_set_doit(struct sk_buff *skb, struct genl_info *info) > nfsd_destroy_serv(net); > > out_unlock_mtx: > - mutex_unlock(&nfsd_mutex); > + mutex_unlock(&nn->nfsd_info.mutex); > > return err; > } > @@ -2113,8 +2118,8 @@ int nfsd_nl_listener_get_doit(struct sk_buff *skb, struct genl_info *info) > goto err_free_msg; > } > > - mutex_lock(&nfsd_mutex); > nn = net_generic(genl_info_net(info), nfsd_net_id); > + mutex_lock(&nn->nfsd_info.mutex); > > /* no nfs server? 
Just send empty socket list */ > if (!nn->nfsd_serv) > @@ -2144,14 +2149,14 @@ int nfsd_nl_listener_get_doit(struct sk_buff *skb, struct genl_info *info) > } > spin_unlock_bh(&serv->sv_lock); > out_unlock_mtx: > - mutex_unlock(&nfsd_mutex); > + mutex_unlock(&nn->nfsd_info.mutex); > genlmsg_end(skb, hdr); > > return genlmsg_reply(skb, info); > > err_serv_unlock: > spin_unlock_bh(&serv->sv_lock); > - mutex_unlock(&nfsd_mutex); > + mutex_unlock(&nn->nfsd_info.mutex); > err_free_msg: > nlmsg_free(skb); > > @@ -2253,7 +2258,7 @@ static __net_init int nfsd_net_init(struct net *net) > nn->nfsd_versions[i] = nfsd_support_version(i); > for (i = 0; i < sizeof(nn->nfsd4_minorversions); i++) > nn->nfsd4_minorversions[i] = nfsd_support_version(4); > - nn->nfsd_info.mutex = &nfsd_mutex; > + mutex_init(&nn->nfsd_info.mutex); > nn->nfsd_serv = NULL; > nfsd4_init_leases_net(nn); > get_random_bytes(&nn->siphash_key, sizeof(nn->siphash_key)); > diff --git a/fs/nfsd/nfsd.h b/fs/nfsd/nfsd.h > index 8ad9fcc23789..3cbca4d34f48 100644 > --- a/fs/nfsd/nfsd.h > +++ b/fs/nfsd/nfsd.h > @@ -77,7 +77,6 @@ struct nfsd_genl_rqstp { > > extern struct svc_program nfsd_programs[]; > extern const struct svc_version nfsd_version2, nfsd_version3, nfsd_version4; > -extern struct mutex nfsd_mutex; > extern atomic_t nfsd_th_cnt; /* number of available threads */ > > bool nfsd_startup_get(void); > diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c > index b2080e5a71e6..9f70b1fbc55e 100644 > --- a/fs/nfsd/nfssvc.c > +++ b/fs/nfsd/nfssvc.c > @@ -56,20 +56,6 @@ static __be32 nfsd_init_request(struct svc_rqst *, > const struct svc_program *, > struct svc_process_info *); > > -/* > - * nfsd_mutex protects nn->nfsd_serv -- both the pointer itself and some members > - * of the svc_serv struct such as ->sv_temp_socks and ->sv_permsocks. 
> - * > - * Finally, the nfsd_mutex also protects some of the global variables that are > - * accessed when nfsd starts and that are settable via the write_* routines in > - * nfsctl.c. In particular: > - * > - * user_recovery_dirname > - * user_lease_time > - * nfsd_versions > - */ > -DEFINE_MUTEX(nfsd_mutex); > - > #if IS_ENABLED(CONFIG_NFS_LOCALIO) > static const struct svc_version *localio_versions[] = { > [1] = &localio_version1, > @@ -242,10 +228,10 @@ int nfsd_nrthreads(struct net *net) > int rv = 0; > struct nfsd_net *nn = net_generic(net, nfsd_net_id); > > - mutex_lock(&nfsd_mutex); > + mutex_lock(&nn->nfsd_info.mutex); > if (nn->nfsd_serv) > rv = nn->nfsd_serv->sv_nrthreads; > - mutex_unlock(&nfsd_mutex); > + mutex_unlock(&nn->nfsd_info.mutex); > return rv; > } > > @@ -522,7 +508,6 @@ static struct notifier_block nfsd_inet6addr_notifier = { > }; > #endif > > -/* Only used under nfsd_mutex, so this atomic may be overkill: */ > static atomic_t nfsd_notifier_refcount = ATOMIC_INIT(0); > > /** > @@ -534,7 +519,7 @@ void nfsd_destroy_serv(struct net *net) > struct nfsd_net *nn = net_generic(net, nfsd_net_id); > struct svc_serv *serv = nn->nfsd_serv; > > - lockdep_assert_held(&nfsd_mutex); > + lockdep_assert_held(&nn->nfsd_info.mutex); > > spin_lock(&nfsd_notifier_lock); > nn->nfsd_serv = NULL; > @@ -606,17 +591,17 @@ void nfsd_shutdown_threads(struct net *net) > struct nfsd_net *nn = net_generic(net, nfsd_net_id); > struct svc_serv *serv; > > - mutex_lock(&nfsd_mutex); > + mutex_lock(&nn->nfsd_info.mutex); > serv = nn->nfsd_serv; > if (serv == NULL) { > - mutex_unlock(&nfsd_mutex); > + mutex_unlock(&nn->nfsd_info.mutex); > return; > } > > /* Kill outstanding nfsd threads */ > svc_set_num_threads(serv, NULL, 0); > nfsd_destroy_serv(net); > - mutex_unlock(&nfsd_mutex); > + mutex_unlock(&nn->nfsd_info.mutex); > } > > struct svc_rqst *nfsd_current_rqst(void) > @@ -632,7 +617,7 @@ int nfsd_create_serv(struct net *net) > struct nfsd_net *nn = net_generic(net, 
nfsd_net_id); > struct svc_serv *serv; > > - WARN_ON(!mutex_is_locked(&nfsd_mutex)); > + WARN_ON(!mutex_is_locked(&nn->nfsd_info.mutex)); > if (nn->nfsd_serv) > return 0; > > @@ -714,7 +699,7 @@ int nfsd_set_nrthreads(int n, int *nthreads, struct net *net) > int err = 0; > struct nfsd_net *nn = net_generic(net, nfsd_net_id); > > - lockdep_assert_held(&nfsd_mutex); > + lockdep_assert_held(&nn->nfsd_info.mutex); > > if (nn->nfsd_serv == NULL || n <= 0) > return 0; > @@ -787,7 +772,7 @@ nfsd_svc(int n, int *nthreads, struct net *net, const struct cred *cred, const c > struct nfsd_net *nn = net_generic(net, nfsd_net_id); > struct svc_serv *serv; > > - lockdep_assert_held(&nfsd_mutex); > + lockdep_assert_held(&nn->nfsd_info.mutex); > > dprintk("nfsd: creating service\n"); > > diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h > index 48666b83fe68..a12fe99156ec 100644 > --- a/include/linux/sunrpc/svc.h > +++ b/include/linux/sunrpc/svc.h > @@ -98,7 +98,7 @@ struct svc_serv { > /* This is used by pool_stats to find and lock an svc */ > struct svc_info { > struct svc_serv *serv; > - struct mutex *mutex; > + struct mutex mutex; I know we haven't been good about it with this struct so far, but I like prefixes on names like this. Maybe we can call this "si_mutex" ? 
> }; > > void svc_destroy(struct svc_serv **svcp); > diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c > index 8b1837228799..b8352b7d6860 100644 > --- a/net/sunrpc/svc_xprt.c > +++ b/net/sunrpc/svc_xprt.c > @@ -1399,7 +1399,7 @@ static void *svc_pool_stats_start(struct seq_file *m, loff_t *pos) > > dprintk("svc_pool_stats_start, *pidx=%u\n", pidx); > > - mutex_lock(si->mutex); > + mutex_lock(&si->mutex); > > if (!pidx) > return SEQ_START_TOKEN; > @@ -1436,7 +1436,7 @@ static void svc_pool_stats_stop(struct seq_file *m, void *p) > { > struct svc_info *si = m->private; > > - mutex_unlock(si->mutex); > + mutex_unlock(&si->mutex); > } > > static int svc_pool_stats_show(struct seq_file *m, void *p) All that said, I definitely support making this mutex per-net. -- Jeff Layton <jlayton@kernel.org> ^ permalink raw reply [flat|nested] 12+ messages in thread
end of thread, other threads:[~2025-06-23  2:48 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed
-- links below jump to the message on this page --)
2025-06-20 23:33 [PATCH 0/3 RFC] improve some nfsd_mutex locking NeilBrown
2025-06-20 23:33 ` [PATCH 1/3] nfsd: provide proper locking for all write_ function NeilBrown
2025-06-21  8:50   ` Li Lingfeng
2025-06-20 23:33 ` [PATCH 2/3] nfsd: use kref and new mutex for global config management NeilBrown
2025-06-20 23:33 ` [PATCH 3/3] nfsd: split nfsd_mutex into one mutex per net-namespace NeilBrown
2025-06-21 13:02   ` kernel test robot
2025-06-21 15:21 ` [PATCH 0/3 RFC] improve some nfsd_mutex locking Chuck Lever
2025-06-21 17:48   ` Jeff Layton
2025-06-23  2:47     ` NeilBrown
-- strict thread matches above, loose matches on Subject: below --
2025-06-18 21:31 NeilBrown
2025-06-18 21:31 ` [PATCH 3/3] nfsd: split nfsd_mutex into one mutex per net-namespace NeilBrown
2025-06-19 12:33   ` kernel test robot
2025-06-20 13:13   ` Jeff Layton