From: Jeff Layton <jlayton@kernel.org>
To: David Wysochanski <dwysocha@redhat.com>
Cc: David Howells <dhowells@redhat.com>,
Anna Schumaker <anna.schumaker@netapp.com>,
Trond Myklebust <trond.myklebust@hammerspace.com>,
linux-nfs@vger.kernel.org, linux-cachefs@redhat.com
Subject: Re: [PATCH] NFS: Fix nfs_netfs_issue_read() xarray locking for writeback interrupt
Date: Tue, 30 Jan 2024 09:59:51 -0500
Message-ID: <f135a65b439f85c6f74b977c3859521aaae56477.camel@kernel.org>
In-Reply-To: <CALF+zOnR1Hu-M=N7+ALcNicbVvEO=G5XN0roigxps15Wj0O8uA@mail.gmail.com>
On Tue, 2024-01-30 at 09:56 -0500, David Wysochanski wrote:
> On Mon, Jan 29, 2024 at 12:44 PM Jeff Layton <jlayton@kernel.org> wrote:
> >
> > On Mon, 2024-01-29 at 12:34 -0500, David Wysochanski wrote:
> > > On Mon, Jan 29, 2024 at 12:15 PM David Howells <dhowells@redhat.com> wrote:
> > > >
> > > > Dave Wysochanski <dwysocha@redhat.com> wrote:
> > > >
> > > > > - xas_lock(&xas);
> > > > > + xas_lock_irqsave(&xas, flags);
> > > > > xas_for_each(&xas, page, last) {
> > > >
> > > > You probably want to use RCU, not xas_lock(). The pages are locked and so
> > > > cannot be evicted from the xarray.
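> > > >
> > > > IOW, something like this (an untested sketch of the idea):
> > > >
> > > > 	rcu_read_lock();
> > > > 	xas_for_each(&xas, page, last) {
> > > > 		/* deal with the page here; nothing under rcu_read_lock() may sleep */
> > > > 	}
> > > > 	rcu_read_unlock();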
> > > >
> > >
> > > I tried RCU originally and ran into a problem because NFS can schedule
> > > (see comment on line 328 below)
> > >
> > > 326         xas_lock_irqsave(&xas, flags);
> > > 327         xas_for_each(&xas, page, last) {
> > > 328                 /* nfs_read_add_folio() may schedule() due to pNFS layout and other RPCs */
> > > 329                 xas_pause(&xas);
> > > 330                 xas_unlock_irqrestore(&xas, flags);
> > > 331                 err = nfs_read_add_folio(&pgio, ctx, page_folio(page));
> > > 332                 if (err < 0) {
> > > 333                         netfs->error = err;
> > > 334                         goto out;
> > > 335                 }
> > > 336                 xas_lock_irqsave(&xas, flags);
> > > 337         }
> > > 338         xas_unlock_irqrestore(&xas, flags);
> > >
> >
> > Looking at it more closely, I think you might want to just use
> > xa_for_each_start(). That does the traversal under the rcu_read_lock
> > internally, taking and dropping the lock for each entry, so you
> > should be able to block on every iteration.
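> >
> > Something like this (untested; I'm guessing the xarray here is the
> > mapping's i_pages and that you have pgoff_t start/last in scope --
> > the other names are from your snippet above):
> >
> > 	unsigned long index;
> >
> > 	/* no explicit locking: each step takes and drops the RCU read lock internally */
> > 	xa_for_each_start(&sreq->rreq->mapping->i_pages, index, page, start) {
> > 		if (index > last)
> > 			break;
> > 		err = nfs_read_add_folio(&pgio, ctx, page_folio(page));
> > 		if (err < 0) {
> > 			netfs->error = err;
> > 			goto out;
> > 		}
> > 	}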
> >
> Thanks Jeff. Yes, after looking at this further I agree this is a
> good approach, and it's much cleaner. I'll work on a v2 patch
> (actually with xa_for_each_range() as you suggested off list) and
> send it after a bit of testing -- so far, so good.
>
> FWIW, my original use of RCU took rcu_read_lock() around the whole
> loop, and I ran into problems because nfs_read_add_folio() can
> schedule() inside it.
>
Makes sense. In principle you could do this by dropping and
reacquiring the rcu_read_lock in the same places you drop and retake
the spinlock in the original patch, but using xa_for_each_range() is
much simpler.
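
The spinlock-style RCU variant would look something like this
(untested, reusing the names from your snippet above):

	rcu_read_lock();
	xas_for_each(&xas, page, last) {
		/* pause the walk and drop the lock around the call that can schedule() */
		xas_pause(&xas);
		rcu_read_unlock();
		err = nfs_read_add_folio(&pgio, ctx, page_folio(page));
		if (err < 0) {
			netfs->error = err;
			goto out;
		}
		rcu_read_lock();
	}
	rcu_read_unlock();

The xas_pause() is what makes it safe to drop the lock and pick the
walk back up on the next iteration.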
--
Jeff Layton <jlayton@kernel.org>