linux-nfs.vger.kernel.org archive mirror
From: "J. Bruce Fields" <bfields@fieldses.org>
To: Michael Tokarev <mjt@tls.msk.ru>
Cc: "Myklebust, Trond" <Trond.Myklebust@netapp.com>,
	"linux-nfs@vger.kernel.org" <linux-nfs@vger.kernel.org>,
	Linux-kernel <linux-kernel@vger.kernel.org>,
	Eric Dumazet <eric.dumazet@gmail.com>
Subject: Re: 3.0+ NFS issues (bisected)
Date: Fri, 17 Aug 2012 15:18:00 -0400
Message-ID: <20120817191800.GA14620@fieldses.org>
In-Reply-To: <502E7F84.3060003@msgid.tls.msk.ru>

On Fri, Aug 17, 2012 at 09:29:40PM +0400, Michael Tokarev wrote:
> On 17.08.2012 21:26, Michael Tokarev wrote:
> > On 17.08.2012 21:18, J. Bruce Fields wrote:
> >> On Fri, Aug 17, 2012 at 09:12:38PM +0400, Michael Tokarev wrote:
> > []
> >>> So we're calling svc_recv in a tight loop, eating
> >>> all available CPU.  (The above is with just 2 nfsd
> >>> threads).
> >>>
> >>> Something is definitely wrong here.  And it happens much more
> >>> often after the mentioned commit (f03d78db65085).
> >>
> >> Oh, neat.  Hm.  That commit doesn't really sound like the cause, then.
> >> Is that busy-looping reproducible on kernels before that commit?
> > 
> > Note I bisected this issue to this commit.  I haven't seen it
> > happening before this commit, and reverting it from 3.0 or 3.2
> > kernel makes the problem go away.
> > 
> > I guess it is looping there:
> > 
> > 
> > net/sunrpc/svc_xprt.c:svc_recv()
> > ...
> >         len = 0;
> > ...
> >         if (test_bit(XPT_LISTENER, &xprt->xpt_flags)) {
> > ...
> >         } else if (xprt->xpt_ops->xpo_has_wspace(xprt)) {  <=== here -- has no wspace due to memory...
> > ...  len = <something>
> >         }
> > 
> >         /* No data, incomplete (TCP) read, or accept() */
> >         if (len == 0 || len == -EAGAIN)
> >                 goto out;
> > ...
> > out:
> >         rqstp->rq_res.len = 0;
> >         svc_xprt_release(rqstp);
> >         return -EAGAIN;
> > }
> > 
> > I'm trying to verify this theory...
> 
> Yes.  I inserted a printk there, and it triggers on every one of the
> millions of passes through this EAGAIN loop:
> 
> ....
> [21052.533053]  svc_recv: !has_wspace
> [21052.533070]  svc_recv: !has_wspace
> [21052.533087]  svc_recv: !has_wspace
> [21052.533105]  svc_recv: !has_wspace
> [21052.533122]  svc_recv: !has_wspace
> [21052.533139]  svc_recv: !has_wspace
> [21052.533156]  svc_recv: !has_wspace
> [21052.533174]  svc_recv: !has_wspace
> [21052.533191]  svc_recv: !has_wspace
> [21052.533208]  svc_recv: !has_wspace
> [21052.533226]  svc_recv: !has_wspace
> [21052.533244]  svc_recv: !has_wspace
> [21052.533265] calling svc_recv: 1228163 times (err=-4)
> [21052.533403] calling svc_recv: 1226616 times (err=-4)
> [21052.534520] nfsd: last server has exited, flushing export cache
> 
> (I stopped nfsd since it was flooding the log).
> 
> I can only guess that before that commit, we always had space,
> now we don't anymore, and are looping like crazy.

Thanks!  But, arrgh--that should be enough to go on at this point, and
yet I'm not seeing it.  If has_wspace is returning false then it's likely
also returning false for the call at the start of svc_xprt_enqueue() (see
svc_xprt_has_something_to_do), which means the xprt shouldn't be getting
requeued, and the next svc_recv call should find no socket ready (so
svc_xprt_dequeue() returns NULL) and go to sleep.
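
That check is, roughly (paraphrasing; exact details may vary by tree):

	static bool svc_xprt_has_something_to_do(struct svc_xprt *xprt)
	{
		/* Pending connections and closes always need a thread. */
		if (xprt->xpt_flags & ((1<<XPT_CONN)|(1<<XPT_CLOSE)))
			return true;

		/* Data or deferred requests are only worth waking a
		 * thread for if we could actually send a reply. */
		if (xprt->xpt_flags & ((1<<XPT_DATA)|(1<<XPT_DEFERRED)))
			return xprt->xpt_ops->xpo_has_wspace(xprt);

		return false;
	}

so with xpo_has_wspace() returning 0 the transport shouldn't be put back
on the pool's queue at all.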

But clearly it's not working that way....

--b.

Thread overview: 25+ messages
2012-05-25  6:53 3.0+ NFS issues Michael Tokarev
2012-05-29 15:24 ` J. Bruce Fields
2012-05-30  7:11   ` Michael Tokarev
2012-05-30 13:25     ` J. Bruce Fields
2012-05-31  6:47       ` Michael Tokarev
2012-05-31 12:59         ` Myklebust, Trond
2012-05-31 13:24           ` Michael Tokarev
2012-05-31 13:46             ` Myklebust, Trond
2012-05-31 13:51               ` Michael Tokarev
2012-06-20 12:52                 ` Christoph Bartoschek
2012-07-10 12:52                 ` Michael Tokarev
2012-07-12 12:53                   ` J. Bruce Fields
2012-08-17  1:56                     ` 3.0+ NFS issues (bisected) Michael Tokarev
2012-08-17 14:56                       ` J. Bruce Fields
2012-08-17 16:00                         ` J. Bruce Fields
2012-08-17 17:12                           ` Michael Tokarev
2012-08-17 17:18                             ` J. Bruce Fields
2012-08-17 17:26                               ` Michael Tokarev
2012-08-17 17:29                                 ` Michael Tokarev
2012-08-17 19:18                                   ` J. Bruce Fields [this message]
2012-08-17 20:08                                     ` J. Bruce Fields
2012-08-17 22:32                                       ` J. Bruce Fields
2012-08-18  6:49                                         ` Michael Tokarev
2012-08-18 11:13                                           ` J. Bruce Fields
2012-08-18 12:58                                             ` Michael Tokarev
