From: Bruce Fields <bfields@fieldses.org>
To: Chuck Lever <chuck.lever@oracle.com>
Cc: Linux NFS Mailing List <linux-nfs@vger.kernel.org>
Subject: Re: [PATCH 09/15] sunrpc: Report per-RPC execution stats
Date: Thu, 22 Mar 2018 17:01:35 -0400
Message-ID: <20180322210135.GD31914@fieldses.org>
In-Reply-To: <70E04F09-EAD2-46B1-B2E8-F501CB8298AF@oracle.com>

On Thu, Mar 22, 2018 at 04:32:36PM -0400, Chuck Lever wrote:
> 
> 
> > On Mar 14, 2018, at 9:11 PM, Chuck Lever <chuck.lever@oracle.com> wrote:
> > 
> > 
> > 
> >> On Mar 14, 2018, at 4:50 PM, J. Bruce Fields <bfields@fieldses.org> wrote:
> >> 
> >> On Tue, Mar 13, 2018 at 11:44:27AM -0400, Chuck Lever wrote:
> >>> Introduce a mechanism to report the server-side execution latency of
> >>> each RPC. The goal is to enable user space to filter the trace
> >>> record for latency outliers, build histograms, etc.
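
A trace event of this sort might be defined along the following lines
(a minimal sketch only; the field set and names such as rq_stime and
xpt_remotebuf are assumptions drawn from this series' discussion, not
a quote of the patch):

	TRACE_EVENT(svc_stats_latency,
		TP_PROTO(const struct svc_rqst *rqst),
		TP_ARGS(rqst),
		TP_STRUCT__entry(
			__field(u32, xid)
			__field(unsigned long, execute)
			__string(addr, rqst->rq_xprt->xpt_remotebuf)
		),
		TP_fast_assign(
			/* The XID identifies the RPC; "execute" is the
			 * elapsed time since the request was received. */
			__entry->xid = be32_to_cpu(rqst->rq_xid);
			__entry->execute = ktime_to_us(ktime_sub(ktime_get(),
								 rqst->rq_stime));
			__assign_str(addr, rqst->rq_xprt->xpt_remotebuf);
		),
		TP_printk("addr=%s xid=0x%08x execute-us=%lu",
			__get_str(addr), __entry->xid, __entry->execute)
	);

User space can then filter on the execute field, or bucket it into a
histogram.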
> >> 
> >> Neato.
> >> 
> >> To be useful I'd think you'd want more information about each RPC.
> >> (E.g. you'd like to know that the latency outliers all did reads).  I
> >> guess you could use the address and xid to cross-reference with
> >> information collected from somewhere else?
> > 
> > Yes. You can enable other trace points (like the nfsd_compound ones)
> > to see what class each operation is in.
> > 
> > And yes, I would like to have all the relevant information for each
> > RPC in a single trace record; I just haven't figured out a way to
> > extract it as nicely as I did on the client (patch forthcoming).
> > On the client side, there is a table set up for each RPC program that
> > contains an RPC procedure name to procedure number mapping. I was not
> > able to find a similar convenience on the server side.
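
The client-side convenience referred to above is the per-program
procedure table: simplified, each entry pairs a procedure number with
a printable name (a sketch of struct rpc_procinfo with the xdr encode
and decode members elided):

	struct rpc_procinfo {
		u32		p_proc;		/* RPC procedure number */
		/* ... xdr encode/decode methods elided ... */
		const char	*p_name;	/* printable name for tracing */
	};

	static const struct rpc_procinfo nfs3_procedures[] = {
		[NFS3PROC_GETATTR] = {
			.p_proc	= NFS3PROC_GETATTR,
			.p_name	= "GETATTR",
		},
		/* ... one entry per procedure ... */
	};

A client-side trace point can map a procedure number to p_name
directly; the server-side svc tables carry no equivalent name field,
which is the gap described above.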
> > 
> > 
> >> What's our commitment to backwards compatibility?  Trace points seem to
> >> be the wild west compared to the rest of the kernel interface, but if
> >> we want to encourage tooling on top of this then I guess we'll need to
> >> be strict.
> > 
> > That has been discussed elsewhere (LWN.net and more recently on
> > linux-rdma). The only compatibility issues are with trace points that
> > have user tools and infrastructure that depend on them, such as the
> > scheduler trace points used by latencyTOP. The NFS and sunrpc trace
> > points do not have this constraint, as they are currently processed
> > only by generic tools like trace-cmd. So we are free to innovate for
> > the time being.
> > 
> > 
> >> Looking at the tcp case, I think it's measuring from the time
> >> tcp_recvfrom receives the last fragment making up an RPC request until
> >> the last sendpage() of the reply returns.  Did you consider other spots?
> >> (E.g. why after the send instead of before?)
> > 
> > Yes, I've considered other spots. I don't consider the spots I'm
> > proposing here to be set in stone. I welcome help with placing the
> > socket-based timestamp capture points.
> > 
> > Some sendto implementations are more complex than others. For instance,
> > RPC/RDMA can post RDMA Writes containing data content first, then in the
> > final step post the RDMA Send carrying the RPC Reply header. The RDMA
> > Write step can be considered server-side processing, and thus is part
> > of the latency. Or, if we ever decide to move the RDMA Write step into
> > the XDR layer, it will definitely be counted as processing latency.
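
To illustrate the ordering described above, the RPC/RDMA reply path
does roughly the following (a sketch; the function names here are
illustrative stand-ins, not the actual svcrdma symbols):

	/* 1. Post RDMA Writes that place payload (e.g. READ data)
	 *    directly into the client's memory. */
	post_rdma_writes(rdma, rqstp);

	/* 2. Only then post the RDMA Send carrying the RPC Reply
	 *    header. Where the latency event fires relative to step 1
	 *    decides whether that step counts as processing latency. */
	post_rdma_send(rdma, rqstp);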
> > 
> > One thing I would like to keep in the latency measurement is how long
> > this rqstp has waited to acquire the send mutex. But otherwise, I'm
> > open to other ideas about how to measure this latency.
> 
> Hi Bruce-
> 
> How about measuring the same way for all transports:
> 
> - Capture a timestamp when xpo_recvfrom returns a positive value
> 
> - Fire the svc_stats_latency event just before invoking xpo_sendto
> 
> Would you be more comfortable with that arrangement?

I honestly don't know.  If I understand right: if we put that event just
before xpo_sendto, then in the case you describe above (the RDMA Write
step moving from sendto into the XDR layer), the latency number would
change for no really good reason.  So that's a case for your original
approach?

I don't know what's likely to contribute to sendto latency in the socket
case.

Really, I've never done latency tracing, and from what I've seen you may
have done more than any of us, so I trust your judgement here....

--b.
