public inbox for linux-nfs@vger.kernel.org
From: Chuck Lever <chuck.lever@oracle.com>
To: Christoph Hellwig <hch@lst.de>
Cc: cel@kernel.org, linux-nfs@vger.kernel.org
Subject: Re: [PATCH v2 2/4] nfs/blocklayout: Use bulk page allocation APIs
Date: Sat, 22 Jun 2024 12:29:31 -0400	[thread overview]
Message-ID: <Znb765CNH/5WCVkp@tissot.1015granger.net> (raw)
In-Reply-To: <20240622050812.GB11110@lst.de>

On Sat, Jun 22, 2024 at 07:08:12AM +0200, Christoph Hellwig wrote:
> On Fri, Jun 21, 2024 at 12:22:30PM -0400, cel@kernel.org wrote:
> > From: Chuck Lever <chuck.lever@oracle.com>
> > 
> > nfs4_get_device_info() frequently requests more than a few pages
> > when provisioning a nfs4_deviceid_node object. Make this more
> > efficient by using alloc_pages_bulk_array(). This API is known to be
> > several times faster than an open-coded loop around alloc_page().
> > 
> > release_pages() is folio-enabled so it is also more efficient than
> > repeatedly invoking __free_pages().
> 
> This isn't really a pnfs fix, right?  Just a little optimization.

It doesn't say "fix" anywhere and doesn't include a Fixes: tag.
And subsequent patches in the series are also clearly not fixes.

I can make it more clear that this one is only an optimization.
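(For readers following along: the key property of alloc_pages_bulk_array() is that it only fills NULL slots in the caller's array and may populate fewer entries than requested, so callers loop until the array is full. Below is a userspace analogue of that contract using malloc() as a stand-in for page allocation; it is purely illustrative and not the kernel code.)

```c
#include <stdlib.h>

/*
 * Userspace sketch of the alloc_pages_bulk_array() contract:
 * fill only the NULL entries in 'slots' and return the total
 * number of populated entries.  A partial fill is allowed, so a
 * real caller retries until the return value equals 'want'.
 */
static size_t bulk_fill(void **slots, size_t want)
{
	size_t filled = 0;

	for (size_t i = 0; i < want; i++) {
		if (!slots[i]) {
			slots[i] = malloc(4096);	/* stand-in for one page */
			if (!slots[i])
				break;			/* stop short, like the kernel API */
		}
		filled++;
	}
	return filled;
}
```

A caller would then loop, e.g. `while (bulk_fill(pages, npages) < npages) { ... }`, mirroring how nfs4_get_device_info() retries a partial bulk allocation instead of calling alloc_page() once per page.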


> It does look fine to me:
> 
> Reviewed-by: Christoph Hellwig <hch@lst.de>

Thank you!


> But I'd really wish if we could do better than this with lazy
> decoding in ->alloc_deviceid_node, which (at least for blocklayout)
> knows roughly how much we need to decode after the first value
> parsed.

Agreed. And it's not the only place in the NFS and RPC code that
does this kind of temporary "just in case" overallocation.


> Or at least cache it if it is that frequent (which it
> really shouldn't be due to the device id cache, or am I missing
> something?)

It's not a frequent operation; it's done the first time pNFS
encounters a new block device. But the alloc_page() loop is slow:
it takes and releases an IRQ-disabling spinlock on each iteration
(IIRC), so pending IRQs get repeated opportunities to run and can
delay nfs4_get_device_info() considerably.

-- 
Chuck Lever


Thread overview: 15+ messages
2024-06-21 16:22 [PATCH v2 0/4] Fixes for pNFS SCSI layout PR key registration cel
2024-06-21 16:22 ` [PATCH v2 1/4] nfs/blocklayout: Fix premature PR key unregistration cel
2024-06-22  5:03   ` Christoph Hellwig
2024-06-22 17:26     ` Chuck Lever
2024-06-23  7:36       ` Christoph Hellwig
2024-06-24 15:08         ` Chuck Lever
2024-06-21 16:22 ` [PATCH v2 2/4] nfs/blocklayout: Use bulk page allocation APIs cel
2024-06-22  5:08   ` Christoph Hellwig
2024-06-22 16:29     ` Chuck Lever [this message]
2024-06-21 16:22 ` [PATCH v2 3/4] nfs/blocklayout: Report only when /no/ device is found cel
2024-06-21 16:22 ` [PATCH v2 4/4] nfs/blocklayout: SCSI layout trace points for reservation key reg/unreg cel
2024-06-21 17:21   ` Anna Schumaker
2024-06-21 17:46     ` Chuck Lever III
2024-06-22  5:09   ` Christoph Hellwig
2024-06-21 18:03 ` [PATCH v2 0/4] Fixes for pNFS SCSI layout PR key registration Benjamin Coddington
