Linux NFS development
From: Josef Bacik <josef@toxicpanda.com>
To: Jeff Layton <jlayton@kernel.org>
Cc: Chuck Lever <chuck.lever@oracle.com>,
	linux-nfs@vger.kernel.org, kernel-team@fb.com
Subject: Re: [PATCH 2/2] nfsd: expose /proc/net/sunrpc/nfsd in net namespaces
Date: Wed, 24 Jan 2024 18:18:11 -0500
Message-ID: <20240124231811.GA1287448@perftesting>
In-Reply-To: <e724a63a63f30f927f1780ad9018811bc45bf4e1.camel@kernel.org>

On Wed, Jan 24, 2024 at 05:57:06PM -0500, Jeff Layton wrote:
> On Wed, 2024-01-24 at 17:12 -0500, Josef Bacik wrote:
> > On Wed, Jan 24, 2024 at 03:32:06PM -0500, Chuck Lever wrote:
> > > On Wed, Jan 24, 2024 at 02:37:00PM -0500, Josef Bacik wrote:
> > > > We are running nfsd servers inside of containers with their own network
> > > > namespace, and we want to monitor these services using the stats found
> > > > in /proc.  However these are not exposed in the proc inside of the
> > > > container, so we have to bind mount the host /proc into our containers
> > > > to get at this information.
> > > > 
> > > > Separate out the stat counters init and the proc registration, and move
> > > > the proc registration into the pernet operations entry and exit points
> > > > so that these stats can be exposed inside of network namespaces.
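> > > > 
> > > > Roughly the shape that takes (a sketch only; the helper and show
> > > > function names here are made up for illustration, not the actual
> > > > patch):
> > > > 
> > > > static __net_init int nfsd_stats_net_init(struct net *net)
> > > > {
> > > > 	/* create the per-namespace /proc/net/sunrpc/nfsd entry,
> > > > 	 * backed by a single seq_file show function */
> > > > 	if (!proc_create_net_single("sunrpc/nfsd", 0, net->proc_net,
> > > > 				    nfsd_show, NULL))
> > > > 		return -ENOMEM;
> > > > 	return 0;
> > > > }
> > > > 
> > > > static __net_exit void nfsd_stats_net_exit(struct net *net)
> > > > {
> > > > 	remove_proc_entry("sunrpc/nfsd", net->proc_net);
> > > > }
> > > > 
> > > > static struct pernet_operations nfsd_stats_net_ops = {
> > > > 	.init = nfsd_stats_net_init,
> > > > 	.exit = nfsd_stats_net_exit,
> > > > };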
> > > 
> > > Maybe I missed something, but this looks like it exposes the global
> > > stat counters to all net namespaces...? Is that an information leak?
> > > As an administrator I might be surprised by that behavior.
> > > 
> > > Seems like this patch needs to make nfsdstats and nfsd_svcstats into
> > > per-namespace objects as well.
> > > 
> > > 
> > 
> > I've got the patches written for this, but I've got a question.  There's a 
> > 
> > svc_seq_show(seq, &nfsd_svcstats);
> > 
> > in nfsd/stats.c.  nfsd_svcstats appears to be an empty struct; I don't see
> > anything that updates it, so this is always going to print 0, right?
> > There's a svc_info in the nfsd_net, and that stats block appears to get
> > updated properly.  Should I print that out here instead?  I don't see
> > anywhere that we get the rpc stats out of nfsd; am I missing something?  I
> > don't want to rip out stuff that I don't quite understand.  Thanks,
> > 
> > 
> 
> nfsd_svcstats ends up being the sv_stats for the nfsd service. The RPC
> code has some counters in there for counting different sorts of net and
> rpc events (see svc_process_common, and some of the recv and accept
> handlers).  I think nfsstat(8) may fetch that info via the above
> seqfile, so it's definitely not unused (and it should be printing more
> than just a '0').
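> 
> For reference, those counters live in struct svc_stat
> (include/linux/sunrpc/stats.h); quoting from memory, so treat the
> field list as approximate:
> 
> struct svc_stat {
> 	struct svc_program *	program;
> 	/* transport-level counts: packets, UDP, TCP, TCP connections */
> 	unsigned int		netcnt,
> 				netudpcnt,
> 				nettcpcnt,
> 				nettcpconn;
> 	/* rpc-level counts: calls, plus the bad-call breakdown */
> 	unsigned int		rpccnt,
> 				rpcbadfmt,
> 				rpcbadauth,
> 				rpcbadclnt;
> };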

Ahhh, I missed this bit

struct svc_program              nfsd_program = {
#if defined(CONFIG_NFSD_V2_ACL) || defined(CONFIG_NFSD_V3_ACL)
        .pg_next                = &nfsd_acl_program,
#endif
        .pg_prog                = NFS_PROGRAM,          /* program number */
        .pg_nvers               = NFSD_NRVERS,          /* nr of entries in nfsd_version */
        .pg_vers                = nfsd_version,         /* version table */
        .pg_name                = "nfsd",               /* program name */
        .pg_class               = "nfsd",               /* authentication class */
        .pg_stats               = &nfsd_svcstats,       /* version table */
        .pg_authenticate        = &svc_set_client,      /* export authentication */
        .pg_init_request        = nfsd_init_request,
        .pg_rpcbind_set         = nfsd_rpcbind_set,
};

and so nfsd_svcstats definitely is getting used.
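
For context, svc_seq_show() in net/sunrpc/stats.c is what dumps those
counters into the seqfile; roughly (going from memory, so the details may
differ):

void svc_seq_show(struct seq_file *seq, const struct svc_stat *statp)
{
	/* the "net" line: packets, udp, tcp, tcp connections */
	seq_printf(seq, "net %u %u %u %u\n",
		   statp->netcnt, statp->netudpcnt,
		   statp->nettcpcnt, statp->nettcpconn);
	/* the "rpc" line: total calls and the bad-call breakdown */
	seq_printf(seq, "rpc %u %u %u %u %u\n",
		   statp->rpccnt,
		   statp->rpcbadfmt + statp->rpcbadauth + statp->rpcbadclnt,
		   statp->rpcbadfmt, statp->rpcbadauth, statp->rpcbadclnt);
	/* ... then per-version "proc" lines for each program */
}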

> 
> svc_info is a completely different thing: it's a container for the
> svc_serv...so I'm not sure I understand your question?

I was just confused, and still am a little bit.

The counters are easy: I put them into the nfsd_net struct, made everything
update the per-net counters, and report those from proc.
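
Something like this, where every bump goes through the per-net struct
(the field and enum names here are placeholders, not the final patch):

	struct nfsd_net *nn = net_generic(SVC_NET(rqstp), nfsd_net_id);

	/* increment the namespace-local counter, not the global one */
	percpu_counter_inc(&nn->counter[NFSD_STATS_RC_MISSES]);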

However, nfsd_svcstats is embedded in this svc_program thing, which appears to
need to be global?  Or do I need to make it per-net as well?  Or do I need to
do something completely different to track the rpc stats per network namespace?
Thanks,

Josef


Thread overview: 13+ messages
2024-01-24 19:36 [PATCH 0/2] Make nfs and nfsd stats visible in network ns Josef Bacik
2024-01-24 19:36 ` [PATCH 1/2] nfs: expose /proc/net/sunrpc/nfs in net namespaces Josef Bacik
2024-01-24 19:37 ` [PATCH 2/2] nfsd: expose /proc/net/sunrpc/nfsd " Josef Bacik
2024-01-24 20:32   ` Chuck Lever
2024-01-24 21:05     ` Josef Bacik
2024-01-24 22:12     ` Josef Bacik
2024-01-24 22:57       ` Jeff Layton
2024-01-24 23:18         ` Josef Bacik [this message]
2024-01-24 23:41           ` Jeff Layton
2024-01-24 23:47             ` Chuck Lever
2024-01-25  0:06               ` Jeff Layton
2024-01-25  1:54                 ` Chuck Lever III
2024-01-25 10:25                   ` Jeff Layton
