From: Jeff Layton <jlayton@kernel.org>
To: Chuck Lever III <chuck.lever@oracle.com>
Cc: Neil Brown <neilb@suse.de>, Lorenzo Bianconi <lorenzo@kernel.org>,
	Linux NFS Mailing List <linux-nfs@vger.kernel.org>,
	Lorenzo Bianconi <lorenzo.bianconi@redhat.com>,
	 "netdev@vger.kernel.org" <netdev@vger.kernel.org>
Subject: Re: [PATCH v3] NFSD: convert write_threads, write_maxblksize and write_maxconn to netlink commands
Date: Mon, 02 Oct 2023 11:53:04 -0400
Message-ID: <f8c97c51544f55f07c2c470c558f7b078a5c7384.camel@kernel.org>
In-Reply-To: <11320C5D-9BB2-48D5-90A0-353F6D8EA78A@oracle.com>

On Mon, 2023-10-02 at 15:19 +0000, Chuck Lever III wrote:
> 
> > On Oct 2, 2023, at 9:25 AM, Jeff Layton <jlayton@kernel.org> wrote:
> > 
> > On Fri, 2023-09-29 at 09:44 -0400, Chuck Lever wrote:
> > > On Wed, Sep 27, 2023 at 09:05:10AM +1000, NeilBrown wrote:
> > > > > diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
> > > > > index b71744e355a8..07e7a09e28e3 100644
> > > > > --- a/fs/nfsd/nfsctl.c
> > > > > +++ b/fs/nfsd/nfsctl.c
> > > > > @@ -1694,6 +1694,147 @@ int nfsd_nl_rpc_status_get_done(struct netlink_callback *cb)
> > > > >  	return 0;
> > > > >  }
> > > > > 
> > > > > +/**
> > > > > + * nfsd_nl_threads_set_doit - set the number of running threads
> > > > > + * @skb: reply buffer
> > > > > + * @info: netlink metadata and command arguments
> > > > > + *
> > > > > + * Return 0 on success or a negative errno.
> > > > > + */
> > > > > +int nfsd_nl_threads_set_doit(struct sk_buff *skb, struct genl_info *info)
> > > > > +{
> > > > > +	u32 nthreads;
> > > > > +	int ret;
> > > > > +
> > > > > +	if (!info->attrs[NFSD_A_CONTROL_PLANE_THREADS])
> > > > > +		return -EINVAL;
> > > > > +
> > > > > +	nthreads = nla_get_u32(info->attrs[NFSD_A_CONTROL_PLANE_THREADS]);
> > > > > +
> > > > > +	ret = nfsd_svc(nthreads, genl_info_net(info), get_current_cred());
> > > > > +	return ret == nthreads ? 0 : ret;
> > > > > +}
> > > > > +
> > > > > +static int nfsd_nl_get_dump(struct sk_buff *skb, struct netlink_callback *cb,
> > > > > +			    int cmd, int attr, u32 val)
> > > > > +{
> > > > > +	void *hdr;
> > > > > +
> > > > > +	if (cb->args[0]) /* already consumed */
> > > > > +		return 0;
> > > > > +
> > > > > +	hdr = genlmsg_put(skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq,
> > > > > +			  &nfsd_nl_family, NLM_F_MULTI, cmd);
> > > > > +	if (!hdr)
> > > > > +		return -ENOBUFS;
> > > > > +
> > > > > +	if (nla_put_u32(skb, attr, val))
> > > > > +		return -ENOBUFS;
> > > > > +
> > > > > +	genlmsg_end(skb, hdr);
> > > > > +	cb->args[0] = 1;
> > > > > +
> > > > > +	return skb->len;
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * nfsd_nl_threads_get_dumpit - dump the number of running threads
> > > > > + * @skb: reply buffer
> > > > > + * @cb: netlink metadata and command arguments
> > > > > + *
> > > > > + * Returns the size of the reply or a negative errno.
> > > > > + */
> > > > > +int nfsd_nl_threads_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
> > > > > +{
> > > > > +	return nfsd_nl_get_dump(skb, cb, NFSD_CMD_THREADS_GET,
> > > > > +				 NFSD_A_CONTROL_PLANE_THREADS,
> > > > > +				 nfsd_nrthreads(sock_net(skb->sk)));
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * nfsd_nl_max_blksize_set_doit - set the nfs block size
> > > > > + * @skb: reply buffer
> > > > > + * @info: netlink metadata and command arguments
> > > > > + *
> > > > > + * Return 0 on success or a negative errno.
> > > > > + */
> > > > > +int nfsd_nl_max_blksize_set_doit(struct sk_buff *skb, struct genl_info *info)
> > > > > +{
> > > > > +	struct nfsd_net *nn = net_generic(genl_info_net(info), nfsd_net_id);
> > > > > +	struct nlattr *attr = info->attrs[NFSD_A_CONTROL_PLANE_MAX_BLKSIZE];
> > > > > +	int ret = 0;
> > > > > +
> > > > > +	if (!attr)
> > > > > +		return -EINVAL;
> > > > > +
> > > > > +	mutex_lock(&nfsd_mutex);
> > > > > +	if (nn->nfsd_serv) {
> > > > > +		ret = -EBUSY;
> > > > > +		goto out;
> > > > > +	}
> > > > 
> > > > This code is wrong... but then the original in write_maxblksize is wrong
> > > > too, so you can't be blamed.
> > > > nfsd_max_blksize applies to nfsd in ALL network namespaces.  So if we
> > > > need to check there are no active services in one namespace, we need to
> > > > check the same for *all* namespaces.
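
Just to make Neil's point concrete: if we keep the global, the setter
above would need a check along the lines of this untested sketch.
nfsd_any_serv_running() is a name I just made up, and I haven't
checked whether taking net_rwsem under nfsd_mutex is actually safe:

/*
 * Untested sketch: refuse to touch the global nfsd_max_blksize while
 * *any* net namespace has a running nfsd.  Caller holds nfsd_mutex;
 * for_each_net() wants net_rwsem held.
 */
static bool nfsd_any_serv_running(void)
{
	struct net *net;
	bool running = false;

	down_read(&net_rwsem);
	for_each_net(net) {
		struct nfsd_net *nn = net_generic(net, nfsd_net_id);

		if (nn->nfsd_serv) {
			running = true;
			break;
		}
	}
	up_read(&net_rwsem);

	return running;
}
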
> > > 
> > > Yes, the original code does look strange and is probably incorrect
> > > with regard to its handling of the mutex. Shall we explore and fix
> > > that issue in the nfsctl code first so that it can be backported to
> > > stable kernels?
> > > 
> > > 
> > > > I think we should make nfsd_max_blksize a per-namespace value.
> > > 
> > > That is a different conversation.
> > > 
> > > First, the current name of this tunable is incongruent with its
> > > actual function, which is to specify the maximum network buffer size
> > > that is allocated when the NFSD service pool is created. We should
> > > find a more descriptive and specific name for this element in the
> > > netlink protocol.
> > > 
> > > Second, it does seem like a candidate for becoming namespace-
> > > specific, but TBH I'm not familiar enough with its current user
> > > space consumers to know if that change would be welcome or fraught.
> > > 
> > > Since more discussion, research, and possibly a fix are needed, we
> > > might drop max_blksize from this round and look for one or two
> > > other tunables to convert for the first round.
> > > 
> > > 
> > 
> > I think we need to step back a bit further even, and consider what we
> > want this to look like for users. How do we expect users to interact
> > with these new interfaces in the future?
> > 
> > Most of these settings are things that are "set and forget" and things
> > that we'd want to set up before we ever start any nfsd threads. I think
> > as an initial goal here, we ought to aim to replace the guts of
> > rpc.nfsd(8). Make it (preferentially) use the netlink interfaces for
> > setting everything instead of writing to files under /proc/fs/nfsd.
> > 
> > That gives us a clear set of interfaces that need to be replaced as a
> > first step, and gives us a start on integrating this change into nfs-
> > utils.
> 
> Starting with rpc.nfsd as the initial consumer is a fine idea.
> Those are in nfs-utils/utils/nfsd/nfssvc.c.
> 
> Looks like threads, ports, and versions are the target APIs?
> 

Yeah, those are the most common ones.
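
For "threads", for example, the new THREADS_SET op would be replacing
something roughly like this (heavily simplified from what nfssvc.c
does today, and the helper name here is made up):

#include <stdio.h>

/* Roughly what rpc.nfsd does now: write the thread count to procfs. */
static int nfssvc_set_threads_procfs(int nthreads)
{
	FILE *f = fopen("/proc/fs/nfsd/threads", "w");

	if (!f)
		return -1;
	if (fprintf(f, "%d\n", nthreads) < 0) {
		fclose(f);
		return -1;
	}
	return fclose(f);
}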

Eventually, I think we'd want to add some of the other, more obscure
settings to rpc.nfsd as well (max_block_size, max_connections, etc.). We
might want to think about how to subsume the pool_threads handling into
that too. Those can be done in a later phase though, once the core
functionality has been converted.
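
To give a rough idea of the shape of the nfs-utils side, a libnl-based
setter might look something like the sketch below. NFSD_CMD_THREADS_SET
and NFSD_A_CONTROL_PLANE_THREADS would come from the new uAPI header,
"nfsd" is only my guess at the generic netlink family name, and none of
this is meant as the final API:

#include <netlink/netlink.h>
#include <netlink/genl/genl.h>
#include <netlink/genl/ctrl.h>

static int nfsd_nl_set_threads(int nthreads)
{
	struct nl_sock *sk;
	struct nl_msg *msg;
	int family, ret = -1;

	sk = nl_socket_alloc();
	if (!sk)
		return -1;
	if (genl_connect(sk) < 0)
		goto out_free_sock;

	family = genl_ctrl_resolve(sk, "nfsd");
	if (family < 0)
		goto out_free_sock;

	msg = nlmsg_alloc();
	if (!msg)
		goto out_free_sock;

	/* Build NFSD_CMD_THREADS_SET carrying a single u32 attribute. */
	if (!genlmsg_put(msg, NL_AUTO_PORT, NL_AUTO_SEQ, family, 0, 0,
			 NFSD_CMD_THREADS_SET, 1) ||
	    nla_put_u32(msg, NFSD_A_CONTROL_PLANE_THREADS, nthreads) < 0)
		goto out_free_msg;

	if (nl_send_auto(sk, msg) < 0)
		goto out_free_msg;

	/* The doit handler replies with a standard ack/errno. */
	ret = nl_wait_for_ack(sk);

out_free_msg:
	nlmsg_free(msg);
out_free_sock:
	nl_socket_free(sk);
	return ret;
}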


If we're going to go all-in on netlink, then a long-term goal ought to
be to deprecate /proc/fs/nfsd altogether. Unfortunately, we won't be
able to do that for a _long_ time (years), but I think this is a
reasonable start.
 
> > > > > +
> > > > > +	nfsd_max_blksize = nla_get_u32(attr);
> > > > > +	nfsd_max_blksize = max_t(int, nfsd_max_blksize, 1024);
> > > > > +	nfsd_max_blksize = min_t(int, nfsd_max_blksize, NFSSVC_MAXBLKSIZE);
> > > > > +	nfsd_max_blksize &= ~1023;
> > > > > +out:
> > > > > +	mutex_unlock(&nfsd_mutex);
> > > > > +
> > > > > +	return ret;
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * nfsd_nl_max_blksize_get_dumpit - dump the nfs block size
> > > > > + * @skb: reply buffer
> > > > > + * @cb: netlink metadata and command arguments
> > > > > + *
> > > > > + * Returns the size of the reply or a negative errno.
> > > > > + */
> > > > > +int nfsd_nl_max_blksize_get_dumpit(struct sk_buff *skb,
> > > > > +				   struct netlink_callback *cb)
> > > > > +{
> > > > > +	return nfsd_nl_get_dump(skb, cb, NFSD_CMD_MAX_BLKSIZE_GET,
> > > > > +				 NFSD_A_CONTROL_PLANE_MAX_BLKSIZE,
> > > > > +				 nfsd_max_blksize);
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * nfsd_nl_max_conn_set_doit - set the max number of connections
> > > > > + * @skb: reply buffer
> > > > > + * @info: netlink metadata and command arguments
> > > > > + *
> > > > > + * Return 0 on success or a negative errno.
> > > > > + */
> > > > > +int nfsd_nl_max_conn_set_doit(struct sk_buff *skb, struct genl_info *info)
> > > > > +{
> > > > > +	struct nfsd_net *nn = net_generic(genl_info_net(info), nfsd_net_id);
> > > > > +	struct nlattr *attr = info->attrs[NFSD_A_CONTROL_PLANE_MAX_CONN];
> > > > > +
> > > > > +	if (!attr)
> > > > > +		return -EINVAL;
> > > > > +
> > > > > +	nn->max_connections = nla_get_u32(attr);
> > > > > +
> > > > > +	return 0;
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * nfsd_nl_max_conn_get_dumpit - dump the max number of connections
> > > > > + * @skb: reply buffer
> > > > > + * @cb: netlink metadata and command arguments
> > > > > + *
> > > > > + * Returns the size of the reply or a negative errno.
> > > > > + */
> > > > > +int nfsd_nl_max_conn_get_dumpit(struct sk_buff *skb,
> > > > > +				struct netlink_callback *cb)
> > > > > +{
> > > > > +	struct nfsd_net *nn = net_generic(sock_net(cb->skb->sk), nfsd_net_id);
> > > > > +
> > > > > +	return nfsd_nl_get_dump(skb, cb, NFSD_CMD_MAX_CONN_GET,
> > > > > +				 NFSD_A_CONTROL_PLANE_MAX_CONN,
> > > > > +				 nn->max_connections);
> > > > > +}
> > > > > +
> > > > >  /**
> > > > >   * nfsd_net_init - Prepare the nfsd_net portion of a new net namespace
> > > > >   * @net: a freshly-created network namespace
> > 
> > -- 
> > Jeff Layton <jlayton@kernel.org>
> 
> --
> Chuck Lever
> 
> 

-- 
Jeff Layton <jlayton@kernel.org>

Thread overview: 10+ messages
2023-09-26 22:13 [PATCH v3] NFSD: convert write_threads, write_maxblksize and write_maxconn to netlink commands Lorenzo Bianconi
2023-09-26 23:05 ` NeilBrown
2023-09-26 23:09   ` Chuck Lever III
2023-09-29 13:44   ` Chuck Lever
2023-10-01 16:56     ` Lorenzo Bianconi
2023-10-02 13:25     ` Jeff Layton
2023-10-02 15:19       ` Chuck Lever III
2023-10-02 15:53         ` Jeff Layton [this message]
2023-10-04 17:09 ` Jakub Kicinski
2023-10-05  9:03   ` Lorenzo Bianconi
