From: "bfields@fieldses.org" <bfields@fieldses.org>
To: Trond Myklebust <trondmy@hammerspace.com>
Cc: "linux-nfs@vger.kernel.org" <linux-nfs@vger.kernel.org>
Subject: Re: [PATCH] SUNRPC: Don't allow compiler optimisation of svc_xprt_release_slot()
Date: Fri, 4 Jan 2019 12:39:12 -0500
Message-ID: <20190104173912.GC11787@fieldses.org>
In-Reply-To: <a0493a5f022a666779d19fde5cdfbfdf9c7316c4.camel@hammerspace.com>
On Thu, Jan 03, 2019 at 11:40:21PM +0000, Trond Myklebust wrote:
> On Thu, 2019-01-03 at 17:45 -0500, J Bruce Fields wrote:
> > On Thu, Jan 03, 2019 at 09:17:12AM -0500, Trond Myklebust wrote:
> > > Use READ_ONCE() to tell the compiler to not optimise away the read
> > > of xprt->xpt_flags in svc_xprt_release_slot().
> >
> > What exactly is the possible race here? And why is a READ_ONCE()
> > sufficient, as opposed to some memory barriers?
> >
> > I may need to shut myself in a room with memory-barriers.txt, I'm
> > pretty hazy on these things.
> >
>
> It's not about fixing any races. It is about ensuring that the compiler
> does not optimise away the read if the function is ever called from
> inside a loop. Not an important fix, since I'm not aware of any cases
> where this has happened. However, strictly speaking, we should use
> READ_ONCE() here because that variable is volatile; it can be changed
> by a background action.
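(To make the compiler hazard concrete: a minimal sketch, assuming a
hypothetical polling loop; this is not actual sunrpc code.  Without
READ_ONCE(), a plain read of xpt_flags could be loaded once and the
stale value reused on every iteration:)

	/* hypothetical caller, for illustration only */
	while (!(READ_ONCE(xprt->xpt_flags) & (1UL << XPT_CLOSE)))
		cpu_relax();	/* a plain "xprt->xpt_flags" load here
				 * could be hoisted out of the loop by
				 * the compiler */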
I wonder if there's a race here independent of that change:
svc_xprt_enqueue() callers all do something like:

	1. change some condition
	2. call svc_xprt_enqueue() to check whether the xprt should
	   now be enqueued.

where the conditions are settings of the xpt_flags, or socket wspace, or
xpt_nr_rqsts.
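For instance, a socket callback follows roughly this shape (a simplified
sketch of the pattern, not the exact code):

	/* 1. change some condition, e.g. data has arrived: */
	set_bit(XPT_DATA, &xprt->xpt_flags);
	/* 2. check whether the xprt should now be enqueued: */
	svc_xprt_enqueue(xprt);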
In theory, if we miss some concurrent change we're OK, because whoever's
making that change will then also call svc_xprt_enqueue(). But that's
not enough; e.g.:
	task 1                                task 2
	------                                ------
	set XPT_DATA
	                                      atomic_dec(xpt_nr_rqsts)
	check XPT_DATA &&
	  check xpt_nr_rqsts
	                                      check XPT_DATA &&
	                                        check xpt_nr_rqsts
If the tasks only see their local changes, then neither sees both
conditions true, so the socket doesn't get enqueued. (And a request
that was ready to be processed will sit around until someone else
comes along and calls svc_xprt_enqueue() on that xprt.)
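If that window is real, the conventional fix for this store-buffering
pattern would be a full barrier between each task's store and its
subsequent loads. A sketch, not a tested patch (slot_limit stands in
for whatever bound the real code actually checks):

	/* task 1: raise a condition, then check both */
	set_bit(XPT_DATA, &xprt->xpt_flags);
	smp_mb__after_atomic();		/* pairs with task 2's barrier */
	if (test_bit(XPT_DATA, &xprt->xpt_flags) &&
	    atomic_read(&xprt->xpt_nr_rqsts) < slot_limit)
		/* enqueue the xprt */;

	/* task 2: release a slot, then check both */
	atomic_dec(&xprt->xpt_nr_rqsts);
	smp_mb__after_atomic();		/* pairs with task 1's barrier */
	if (test_bit(XPT_DATA, &xprt->xpt_flags) &&
	    atomic_read(&xprt->xpt_nr_rqsts) < slot_limit)
		/* enqueue the xprt */;

With a full barrier on both sides this is the classic store-buffering
litmus test: at least one task is guaranteed to observe the other's
store, so at least one of them sees both conditions true and enqueues
the xprt.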
The code's more complicated than that, and maybe there's some reason
that can't happen.
--b.