From: Lorenzo Bianconi <lorenzo@kernel.org>
To: Chuck Lever <chuck.lever@oracle.com>
Cc: linux-nfs@vger.kernel.org, lorenzo.bianconi@redhat.com,
jlayton@kernel.org, neilb@suse.de
Subject: Re: [PATCH v6 3/3] NFSD: add rpc_status entry in nfsd debug filesystem
Date: Thu, 10 Aug 2023 20:24:54 +0200 [thread overview]
Message-ID: <ZNUrdju18XO4hYMe@lore-rh-laptop> (raw)
In-Reply-To: <ZNT7wdG8SYfDRkDg@tissot.1015granger.net>
[...]
> > +#ifdef CONFIG_NFSD_V4
> > + if (rqstp->rq_vers == NFS4_VERSION &&
> > + rqstp->rq_proc == NFSPROC4_COMPOUND) {
> > + /* NFSv4 compound */
> > + struct nfsd4_compoundargs *args = rqstp->rq_argp;
> > + int j;
> > +
> > + opcnt = args->opcnt;
> > + for (j = 0; j < opcnt; j++) {
> > + struct nfsd4_op *op = &args->ops[j];
> > +
> > + rqstp_info.opnum[j] = op->opnum;
> > + }
> > + }
> > +#endif /* CONFIG_NFSD_V4 */
> > +
> > + /*
> > + * Acquire rq_status_counter before reporting the rqst
> > + * fields to the user.
> > + */
> > + if (smp_load_acquire(&rqstp->rq_status_counter) != status_counter)
> > + continue;
> > +
> > + seq_printf(m,
> > + "%04u %04ld NFSv%d %s %016lld",
> > + be32_to_cpu(rqstp_info.rq_xid),
>
> It's proper to display XIDs as 8-hexit hexadecimal values, as you
> did before. "0x%08x" is the correct format, as that matches the
> XID display format used by Wireshark and our tracepoints.
Oops, I misunderstood your previous comments. I will address them in v7 if
there are no other comments.
Regards,
Lorenzo
>
>
> > + rqstp_info.rq_flags,
>
> I didn't mean for you to change the flags format to decimal. I was
> trying to point out that the content of this field will need to be
> displayed symbolically if we care about an easy user experience.
>
> Let's stick with hex here. A clever user can read the bits directly
> from that. All others should have a tool that parses this field and
> prints the list of bits in it symbolically.
>
>
> > + rqstp_info.rq_vers,
> > + rqstp_info.pc_name,
> > + ktime_to_us(rqstp_info.rq_stime));
> > + seq_printf(m, " %s",
> > + __svc_print_addr(&rqstp_info.saddr, buf,
> > + sizeof(buf), false));
> > + seq_printf(m, " %s",
> > + __svc_print_addr(&rqstp_info.daddr, buf,
> > + sizeof(buf), false));
> > + if (opcnt) {
> > + int j;
> > +
> > + seq_puts(m, " ");
> > + for (j = 0; j < opcnt; j++)
> > + seq_printf(m, "%s%s",
> > + nfsd4_op_name(rqstp_info.opnum[j]),
> > + j == opcnt - 1 ? "" : ":");
> > + } else {
> > + seq_puts(m, " -");
> > + }
>
> This looks correct to me.
>
> I'm leaning towards moving this to a netlink API that can be
> extended over time to handle other stats and also act as an NFSD
> control plane, similar to other network subsystems.
>
> Any comments, complaints or rotten fruit from anyone?
>
>
> > + seq_puts(m, "\n");
> > + }
> > + }
> > +
> > + rcu_read_unlock();
> > +
> > + return 0;
> > +}
> > +
> > +/**
> > + * nfsd_rpc_status_open - open routine for nfsd_rpc_status handler
> > + * @inode: entry inode pointer.
> > + * @file: entry file pointer.
> > + *
> > + * nfsd_rpc_status_open is the open routine for nfsd_rpc_status procfs handler.
> > + * nfsd_rpc_status dumps pending RPC requests info queued into nfs server.
> > + */
> > +int nfsd_rpc_status_open(struct inode *inode, struct file *file)
> > +{
> > + int ret = nfsd_stats_open(inode);
> > +
> > + if (ret)
> > + return ret;
> > +
> > + return single_open(file, nfsd_rpc_status_show, inode->i_private);
> > +}
> > diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
> > index 7838b37bcfa8..b49c0470b4fe 100644
> > --- a/include/linux/sunrpc/svc.h
> > +++ b/include/linux/sunrpc/svc.h
> > @@ -251,6 +251,7 @@ struct svc_rqst {
> > * net namespace
> > */
> > void ** rq_lease_breaker; /* The v4 client breaking a lease */
> > + unsigned int rq_status_counter; /* RPC processing counter */
> > };
> >
> > /* bits for rq_flags */
> > diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
> > index af692bff44ab..83bee19df104 100644
> > --- a/net/sunrpc/svc.c
> > +++ b/net/sunrpc/svc.c
> > @@ -1656,7 +1656,7 @@ const char *svc_proc_name(const struct svc_rqst *rqstp)
> > return rqstp->rq_procinfo->pc_name;
> > return "unknown";
> > }
> > -
> > +EXPORT_SYMBOL_GPL(svc_proc_name);
> >
> > /**
> > * svc_encode_result_payload - mark a range of bytes as a result payload
> > --
> > 2.41.0
> >
>
> --
> Chuck Lever
Thread overview: 10+ messages
2023-08-10 8:39 [PATCH v6 0/3] add rpc_status handler in nfsd debug filesystem Lorenzo Bianconi
2023-08-10 8:39 ` [PATCH v6 1/3] SUNRPC: add verbose parameter to __svc_print_addr() Lorenzo Bianconi
2023-08-10 8:39 ` [PATCH v6 2/3] NFSD: introduce nfsd_stats_open utility routine Lorenzo Bianconi
2023-08-10 8:39 ` [PATCH v6 3/3] NFSD: add rpc_status entry in nfsd debug filesystem Lorenzo Bianconi
2023-08-10 12:58 ` Jeff Layton
2023-08-10 15:01 ` Chuck Lever
2023-08-10 18:24 ` Lorenzo Bianconi [this message]
2023-08-10 20:24 ` NeilBrown
2023-08-11 14:07 ` Chuck Lever III
2023-08-12 8:59 ` Lorenzo Bianconi