From: Anna Schumaker <anna@kernel.org>
To: linux-nfs@vger.kernel.org, chuck.lever@oracle.com
Cc: anna@kernel.org
Subject: [PATCH v4 0/2] NFSD: Simplify READ_PLUS
Date: Tue, 13 Sep 2022 14:01:49 -0400
Message-ID: <20220913180151.1928363-1-anna@kernel.org>
From: Anna Schumaker <Anna.Schumaker@Netapp.com>
When we left off with READ_PLUS, Chuck had suggested reverting the
server to reply with a single NFS4_CONTENT_DATA segment essentially
mimicking how the READ operation behaves. Then, a future sparse read
function can be added and the server modified to support it without
needing to rip out the old READ_PLUS code at the same time.
This patch takes that first step. I was even able to re-use the
nfsd4_encode_readv() and nfsd4_encode_splice_read() functions to
remove some duplicate code.
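For readers less familiar with the v4.2 wire format, here is a rough userspace sketch (not the patch itself) of how a READ_PLUS reply carrying a single NFS4_CONTENT_DATA segment is laid out in XDR per RFC 7862. The helper names (enc32, enc64, encode_read_plus_data) are invented for illustration and do not appear in fs/nfsd/nfs4xdr.c:

```c
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/* XDR encodes everything as big-endian 32-bit words. */
static uint8_t *enc32(uint8_t *p, uint32_t v)
{
	v = htonl(v);
	memcpy(p, &v, 4);
	return p + 4;
}

static uint8_t *enc64(uint8_t *p, uint64_t v)
{
	p = enc32(p, v >> 32);
	return enc32(p, v & 0xffffffff);
}

/*
 * Sketch of a READ_PLUS4resok with exactly one DATA segment, which is
 * what the simplified server always returns: eof flag, a segment count
 * of 1, the content type, the segment offset, and the opaque data.
 */
size_t encode_read_plus_data(uint8_t *buf, int eof, uint64_t offset,
			     const uint8_t *data, uint32_t len)
{
	uint8_t *p = buf;

	p = enc32(p, eof ? 1 : 0);		/* rpr_eof */
	p = enc32(p, 1);			/* one segment only */
	p = enc32(p, 0 /* NFS4_CONTENT_DATA */);
	p = enc64(p, offset);			/* d_offset */
	p = enc32(p, len);			/* opaque length */
	memcpy(p, data, len);
	p += len + ((4 - (len & 3)) & 3);	/* XDR pads to 4 bytes */
	return p - buf;
}
```

Compared to READ, the only extra bytes on the wire are the segment count, content type, and offset, which is why the single-segment reply can reuse the existing readv/splice encoders for the data payload.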
Below is some performance data comparing the READ and READ_PLUS
operations with v4.2. I tested reading 2G files with various hole
lengths including 100% data, 100% hole, and a handful of mixed hole and
data files. For the mixed files, a notation like "1d" means
every-other-page is data, and the first page is data. "4h" would mean
alternating 4 pages data and 4 pages hole, beginning with hole.
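The notation above can be stated precisely: "Nd"/"Nh" means the file alternates blocks of N pages, starting with data ('d') or hole ('h'). A small illustrative helper (page_is_data is a made-up name, not part of the actual test scripts):

```c
#include <stdbool.h>
#include <stdlib.h>

/* Decode the "Nd"/"Nh" notation: is page 'page' of the file data? */
bool page_is_data(const char *tag, unsigned long page)
{
	char *end;
	unsigned long stride = strtoul(tag, &end, 10); /* pages per block */
	bool first_is_data = (*end == 'd');	       /* 'd': starts with data */

	/* Even-numbered blocks match whatever the file starts with. */
	return ((page / stride) % 2 == 0) == first_is_data;
}
```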
I also used the 'vmtouch' utility to make sure the file is either
evicted from the server's pagecache ("Uncached on server") or present in
the server's page cache ("Cached on server").
2048M-data
:... v6.0-rc4 (w/o Read Plus) ... Uncached on server ... 3.555 s, 712 MB/s, 0.74 s kern, 24% cpu
: :........................... Cached on server ..... 1.346 s, 1.6 GB/s, 0.69 s kern, 52% cpu
:... v6.0-rc4 (w/ Read Plus) .... Uncached on server ... 3.596 s, 690 MB/s, 0.72 s kern, 23% cpu
:........................... Cached on server ..... 1.394 s, 1.6 GB/s, 0.67 s kern, 48% cpu
2048M-hole
:... v6.0-rc4 (w/o Read Plus) ... Uncached on server ... 4.934 s, 762 MB/s, 1.86 s kern, 29% cpu
: :........................... Cached on server ..... 1.328 s, 1.6 GB/s, 0.72 s kern, 54% cpu
:... v6.0-rc4 (w/ Read Plus) .... Uncached on server ... 4.823 s, 739 MB/s, 1.88 s kern, 28% cpu
:........................... Cached on server ..... 1.399 s, 1.5 GB/s, 0.70 s kern, 50% cpu
2048M-mixed-1d
:... v6.0-rc4 (w/o Read Plus) ... Uncached on server ... 4.480 s, 598 MB/s, 0.76 s kern, 21% cpu
: :........................... Cached on server ..... 1.445 s, 1.5 GB/s, 0.71 s kern, 50% cpu
:... v6.0-rc4 (w/ Read Plus) .... Uncached on server ... 4.774 s, 559 MB/s, 0.75 s kern, 19% cpu
:........................... Cached on server ..... 1.514 s, 1.4 GB/s, 0.67 s kern, 44% cpu
2048M-mixed-1h
:... v6.0-rc4 (w/o Read Plus) ... Uncached on server ... 3.568 s, 633 MB/s, 0.78 s kern, 23% cpu
: :........................... Cached on server ..... 1.357 s, 1.6 GB/s, 0.71 s kern, 53% cpu
:... v6.0-rc4 (w/ Read Plus) .... Uncached on server ... 3.580 s, 641 MB/s, 0.74 s kern, 22% cpu
:........................... Cached on server ..... 1.396 s, 1.5 GB/s, 0.67 s kern, 48% cpu
2048M-mixed-2d
:... v6.0-rc4 (w/o Read Plus) ... Uncached on server ... 3.159 s, 708 MB/s, 0.78 s kern, 26% cpu
: :........................... Cached on server ..... 1.410 s, 1.5 GB/s, 0.70 s kern, 50% cpu
:... v6.0-rc4 (w/ Read Plus) .... Uncached on server ... 3.093 s, 712 MB/s, 0.74 s kern, 25% cpu
:........................... Cached on server ..... 1.474 s, 1.4 GB/s, 0.67 s kern, 46% cpu
2048M-mixed-2h
:... v6.0-rc4 (w/o Read Plus) ... Uncached on server ... 3.043 s, 722 MB/s, 0.78 s kern, 26% cpu
: :........................... Cached on server ..... 1.374 s, 1.6 GB/s, 0.72 s kern, 53% cpu
:... v6.0-rc4 (w/ Read Plus) .... Uncached on server ... 2.913 s, 756 MB/s, 0.74 s kern, 26% cpu
:........................... Cached on server ..... 1.349 s, 1.6 GB/s, 0.67 s kern, 50% cpu
2048M-mixed-4d
:... v6.0-rc4 (w/o Read Plus) ... Uncached on server ... 3.275 s, 680 MB/s, 0.75 s kern, 24% cpu
: :........................... Cached on server ..... 1.391 s, 1.5 GB/s, 0.71 s kern, 52% cpu
:... v6.0-rc4 (w/ Read Plus) .... Uncached on server ... 3.470 s, 626 MB/s, 0.72 s kern, 21% cpu
:........................... Cached on server ..... 1.456 s, 1.5 GB/s, 0.67 s kern, 46% cpu
2048M-mixed-4h
:... v6.0-rc4 (w/o Read Plus) ... Uncached on server ... 3.035 s, 743 MB/s, 0.74 s kern, 26% cpu
: :........................... Cached on server ..... 1.345 s, 1.6 GB/s, 0.71 s kern, 53% cpu
:... v6.0-rc4 (w/ Read Plus) .... Uncached on server ... 2.848 s, 779 MB/s, 0.73 s kern, 26% cpu
:........................... Cached on server ..... 1.421 s, 1.5 GB/s, 0.68 s kern, 48% cpu
2048M-mixed-8d
:... v6.0-rc4 (w/o Read Plus) ... Uncached on server ... 3.262 s, 687 MB/s, 0.74 s kern, 24% cpu
: :........................... Cached on server ..... 1.366 s, 1.6 GB/s, 0.69 s kern, 51% cpu
:... v6.0-rc4 (w/ Read Plus) .... Uncached on server ... 3.195 s, 709 MB/s, 0.72 s kern, 24% cpu
:........................... Cached on server ..... 1.414 s, 1.5 GB/s, 0.67 s kern, 48% cpu
2048M-mixed-8h
:... v6.0-rc4 (w/o Read Plus) ... Uncached on server ... 2.899 s, 789 MB/s, 0.73 s kern, 27% cpu
: :........................... Cached on server ..... 1.338 s, 1.6 GB/s, 0.69 s kern, 52% cpu
:... v6.0-rc4 (w/ Read Plus) .... Uncached on server ... 2.910 s, 772 MB/s, 0.72 s kern, 26% cpu
:........................... Cached on server ..... 1.438 s, 1.5 GB/s, 0.67 s kern, 47% cpu
2048M-mixed-16d
:... v6.0-rc4 (w/o Read Plus) ... Uncached on server ... 3.416 s, 661 MB/s, 0.73 s kern, 23% cpu
: :........................... Cached on server ..... 1.345 s, 1.6 GB/s, 0.70 s kern, 53% cpu
:... v6.0-rc4 (w/ Read Plus) .... Uncached on server ... 3.177 s, 713 MB/s, 0.70 s kern, 23% cpu
:........................... Cached on server ..... 1.447 s, 1.5 GB/s, 0.68 s kern, 47% cpu
2048M-mixed-16h
:... v6.0-rc4 (w/o Read Plus) ... Uncached on server ... 2.919 s, 780 MB/s, 0.73 s kern, 26% cpu
: :........................... Cached on server ..... 1.363 s, 1.6 GB/s, 0.70 s kern, 51% cpu
:... v6.0-rc4 (w/ Read Plus) .... Uncached on server ... 2.934 s, 773 MB/s, 0.70 s kern, 25% cpu
:........................... Cached on server ..... 1.435 s, 1.5 GB/s, 0.67 s kern, 47% cpu
- v4:
- Change READ and READ_PLUS to return nfserr_serverfault if the
  splice check fails.
Thanks,
Anna
Anna Schumaker (2):
NFSD: Return nfserr_serverfault if splice_ok but buf->pages have data
NFSD: Simplify READ_PLUS
fs/nfsd/nfs4xdr.c | 141 +++++++++++-----------------------------------
1 file changed, 33 insertions(+), 108 deletions(-)
--
2.37.3
Thread overview: 15+ messages
2022-09-13 18:01 Anna Schumaker [this message]
2022-09-13 18:01 ` [PATCH v4 1/2] NFSD: Return nfserr_serverfault if splice_ok but buf->pages have data Anna Schumaker
2022-09-13 20:12 ` Chuck Lever III
2022-09-13 20:42 ` Anna Schumaker
2022-09-15 19:59 ` Jeff Layton
2022-09-16 2:17 ` Chuck Lever III
2022-10-06 16:35 ` Jeff Layton
2022-09-13 18:01 ` [PATCH v4 2/2] NFSD: Simplify READ_PLUS Anna Schumaker
2022-10-06 16:35 ` Jeff Layton
2022-09-13 18:45 ` [PATCH v4 0/2] " Chuck Lever III
2022-09-13 20:45 ` Anna Schumaker
2022-10-05 15:10 ` Anna Schumaker
2022-10-05 15:53 ` Chuck Lever III
2022-10-31 17:55 ` Anna Schumaker
2022-10-31 18:00 ` Chuck Lever III