From: "J. Bruce Fields" <bfields@fieldses.org>
To: Michael Tokarev <mjt@tls.msk.ru>
Cc: "Myklebust, Trond" <Trond.Myklebust@netapp.com>,
"linux-nfs@vger.kernel.org" <linux-nfs@vger.kernel.org>,
Linux-kernel <linux-kernel@vger.kernel.org>,
Eric Dumazet <eric.dumazet@gmail.com>
Subject: Re: 3.0+ NFS issues (bisected)
Date: Fri, 17 Aug 2012 13:18:54 -0400
Message-ID: <20120817171854.GA14015@fieldses.org>
In-Reply-To: <502E7B86.3060702@msgid.tls.msk.ru>
On Fri, Aug 17, 2012 at 09:12:38PM +0400, Michael Tokarev wrote:
> On 17.08.2012 20:00, J. Bruce Fields wrote:
> []
> > Uh, if I grepped my way through this right: it looks like it's the
> > "memory" column of the "TCP" row of /proc/net/protocols; might be
> > interesting to see how that's changing over time.
>
> This file does not look interesting. Memory usage does not jump,
> and there is no large increase over time either.
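
For what it's worth, "how that's changing over time" is easy to script. Below
is a minimal sketch in C (not something from this thread) that assumes the
usual /proc/net/protocols layout, where "memory" is the fourth
whitespace-separated field of the TCP row; adjust the parsing if the header
line on your kernel orders the columns differently.

/*
 * Sample the TCP "memory" column of /proc/net/protocols once per second.
 * Sketch only; the header line is skipped automatically because its
 * non-numeric fields fail the sscanf() match below.
 */
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
        char line[512];

        for (;;) {
                FILE *f = fopen("/proc/net/protocols", "r");

                if (!f) {
                        perror("/proc/net/protocols");
                        return 1;
                }
                while (fgets(line, sizeof(line), f)) {
                        char proto[64];
                        long size, sockets, memory;

                        if (sscanf(line, "%63s %ld %ld %ld",
                                   proto, &size, &sockets, &memory) == 4 &&
                            strcmp(proto, "TCP") == 0)
                                printf("%ld TCP memory=%ld sockets=%ld\n",
                                       (long)time(NULL), memory, sockets);
                }
                fclose(f);
                sleep(1);
        }
}

Compile it with cc and leave it running alongside the NFS load; a jump in the
memory column at the moment of a stall would suggest socket memory pressure,
while a flat value (as reported above) points elsewhere.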
>
> But there's something else which is interesting here.
>
> I noticed that in perf top, the top consumer of CPU is svc_recv()
> (I mentioned this at the start of this thread). So I looked at how
> this routine is called from nfsd. And here we go.
>
> fs/nfsd/nfssvc.c:
>
> /*
>  * This is the NFS server kernel thread
>  */
> static int
> nfsd(void *vrqstp)
> {
> ...
>         /*
>          * The main request loop
>          */
>         for (;;) {
>                 /*
>                  * Find a socket with data available and call its
>                  * recvfrom routine.
>                  */
>                 int i = 0;
>                 while ((err = svc_recv(rqstp, 60*60*HZ)) == -EAGAIN)
>                         ++i;
>                 printk(KERN_ERR "calling svc_recv: %d times (err=%d)\n", i, err);
>                 if (err == -EINTR)
>                         break;
> ...
>
> (I added the "i" counter and the printk). And here's the output:
>
> [19626.401136] calling svc_recv: 0 times (err=212)
> [19626.405059] calling svc_recv: 1478 times (err=212)
> [19626.409512] calling svc_recv: 1106 times (err=212)
> [19626.543020] calling svc_recv: 0 times (err=212)
> [19626.543059] calling svc_recv: 0 times (err=212)
> [19626.548074] calling svc_recv: 0 times (err=212)
> [19626.549515] calling svc_recv: 0 times (err=212)
> [19626.552320] calling svc_recv: 0 times (err=212)
> [19626.553503] calling svc_recv: 0 times (err=212)
> [19626.556007] calling svc_recv: 0 times (err=212)
> [19626.557152] calling svc_recv: 0 times (err=212)
> [19626.560109] calling svc_recv: 0 times (err=212)
> [19626.560943] calling svc_recv: 0 times (err=212)
> [19626.565315] calling svc_recv: 1067 times (err=212)
> [19626.569735] calling svc_recv: 2571 times (err=212)
> [19626.574150] calling svc_recv: 3842 times (err=212)
> [19626.581914] calling svc_recv: 2891 times (err=212)
> [19626.583072] calling svc_recv: 1247 times (err=212)
> [19626.616885] calling svc_recv: 0 times (err=212)
> [19626.616952] calling svc_recv: 0 times (err=212)
> [19626.622889] calling svc_recv: 0 times (err=212)
> [19626.624518] calling svc_recv: 0 times (err=212)
> [19626.627118] calling svc_recv: 0 times (err=212)
> [19626.629735] calling svc_recv: 0 times (err=212)
> [19626.631777] calling svc_recv: 0 times (err=212)
> [19626.633986] calling svc_recv: 0 times (err=212)
> [19626.636746] calling svc_recv: 0 times (err=212)
> [19626.637692] calling svc_recv: 0 times (err=212)
> [19626.640769] calling svc_recv: 0 times (err=212)
> [19626.657852] calling svc_recv: 0 times (err=212)
> [19626.661602] calling svc_recv: 0 times (err=212)
> [19626.670160] calling svc_recv: 0 times (err=212)
> [19626.671917] calling svc_recv: 0 times (err=212)
> [19626.684643] calling svc_recv: 0 times (err=212)
> [19626.684680] calling svc_recv: 0 times (err=212)
> [19626.812820] calling svc_recv: 0 times (err=212)
> [19626.814697] calling svc_recv: 0 times (err=212)
> [19626.817195] calling svc_recv: 0 times (err=212)
> [19626.820324] calling svc_recv: 0 times (err=212)
> [19626.822855] calling svc_recv: 0 times (err=212)
> [19626.824823] calling svc_recv: 0 times (err=212)
> [19626.828016] calling svc_recv: 0 times (err=212)
> [19626.829021] calling svc_recv: 0 times (err=212)
> [19626.831970] calling svc_recv: 0 times (err=212)
>
> > the stall begins:
> [19686.823135] calling svc_recv: 3670352 times (err=212)
> [19686.823524] calling svc_recv: 3659205 times (err=212)
>
> > transfer continues
> [19686.854734] calling svc_recv: 0 times (err=212)
> [19686.860023] calling svc_recv: 0 times (err=212)
> [19686.887124] calling svc_recv: 0 times (err=212)
> [19686.895532] calling svc_recv: 0 times (err=212)
> [19686.903667] calling svc_recv: 0 times (err=212)
> [19686.922780] calling svc_recv: 0 times (err=212)
>
> So we're calling svc_recv in a tight loop, eating
> all available CPU. (The above is with just 2 nfsd
> threads).
>
> Something is definitely wrong here. And it happens much more
> often after the mentioned commit (f03d78db65085).
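
To make the failure mode concrete: what the trace shows is, in userspace
terms, a receiver retrying a non-blocking read on EAGAIN without ever
sleeping. A rough illustration with plain file descriptors (nothing to do
with the sunrpc code itself) is below; the second helper shows what a
well-behaved loop does instead.

/*
 * Illustration only, not sunrpc code: retrying on EAGAIN without waiting
 * burns CPU exactly like the svc_recv figures above, while the intended
 * pattern sleeps in poll() until the descriptor is actually readable.
 */
#include <errno.h>
#include <poll.h>
#include <sys/types.h>
#include <unistd.h>

/* Broken: spins at 100% CPU while no data is available. */
ssize_t read_spinning(int fd, void *buf, size_t len)
{
        ssize_t n;

        while ((n = read(fd, buf, len)) < 0 && errno == EAGAIN)
                ;       /* retry immediately: the "3670352 times" case */
        return n;
}

/* Intended: block in poll() until there is something to read. */
ssize_t read_waiting(int fd, void *buf, size_t len)
{
        ssize_t n;

        while ((n = read(fd, buf, len)) < 0 && errno == EAGAIN) {
                struct pollfd p = { .fd = fd, .events = POLLIN };

                if (poll(&p, 1, -1) < 0)
                        return -1;
        }
        return n;
}

Counts in the millions, as in the two "stall" lines above, are what the
first variant produces: svc_recv seems to be returning -EAGAIN without
having waited for anything.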
Oh, neat. Hm. That commit doesn't really sound like the cause, then.
Is that busy-looping reproducible on kernels before that commit?
--b.
Thread overview: 25+ messages
2012-05-25 6:53 3.0+ NFS issues Michael Tokarev
2012-05-29 15:24 ` J. Bruce Fields
2012-05-30 7:11 ` Michael Tokarev
2012-05-30 13:25 ` J. Bruce Fields
2012-05-31 6:47 ` Michael Tokarev
2012-05-31 12:59 ` Myklebust, Trond
2012-05-31 13:24 ` Michael Tokarev
2012-05-31 13:46 ` Myklebust, Trond
2012-05-31 13:51 ` Michael Tokarev
2012-06-20 12:52 ` Christoph Bartoschek
2012-07-10 12:52 ` Michael Tokarev
2012-07-12 12:53 ` J. Bruce Fields
2012-08-17 1:56 ` 3.0+ NFS issues (bisected) Michael Tokarev
2012-08-17 14:56 ` J. Bruce Fields
2012-08-17 16:00 ` J. Bruce Fields
2012-08-17 17:12 ` Michael Tokarev
2012-08-17 17:18 ` J. Bruce Fields [this message]
2012-08-17 17:26 ` Michael Tokarev
2012-08-17 17:29 ` Michael Tokarev
2012-08-17 19:18 ` J. Bruce Fields
2012-08-17 20:08 ` J. Bruce Fields
2012-08-17 22:32 ` J. Bruce Fields
2012-08-18 6:49 ` Michael Tokarev
2012-08-18 11:13 ` J. Bruce Fields
2012-08-18 12:58 ` Michael Tokarev