public inbox for linux-nfs@vger.kernel.org
From: "J. Bruce Fields" <bfields@redhat.com>
To: schumaker.anna@gmail.com
Cc: chuck.lever@oracle.com, linux-nfs@vger.kernel.org,
	Anna.Schumaker@netapp.com
Subject: Re: [PATCH v4 0/5] NFSD: Add support for the v4.2 READ_PLUS operation
Date: Wed, 26 Aug 2020 17:54:37 -0400	[thread overview]
Message-ID: <20200826215437.GD62682@pick.fieldses.org> (raw)
In-Reply-To: <20200817165310.354092-1-Anna.Schumaker@Netapp.com>

On Mon, Aug 17, 2020 at 12:53:05PM -0400, schumaker.anna@gmail.com wrote:
> From: Anna Schumaker <Anna.Schumaker@Netapp.com>
> 
> These patches add server support for the READ_PLUS operation, which
> breaks read requests into several "data" and "hole" segments when
> replying to the client.
> 
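[Editorial note: conceptually, the segmentation described above walks the requested range and emits alternating data and hole segments. A rough model in Python — illustrative only, not the nfsd code; the server discovers extents via the filesystem (e.g. SEEK_HOLE/SEEK_DATA), and all names here are made up:]

```python
def read_plus_segments(offset, length, data_extents):
    """Split a read range into (type, offset, length) segments.

    data_extents is a sorted list of (start, end) byte ranges that
    contain data; everything else in the range is a hole.  This only
    models the reply layout, not the kernel implementation.
    """
    segments = []
    pos, end = offset, offset + length
    for d_start, d_end in data_extents:
        if d_end <= pos or d_start >= end:
            continue                              # extent outside the range
        if d_start > pos:                         # hole before this extent
            segments.append(("HOLE", pos, d_start - pos))
            pos = d_start
        seg_end = min(d_end, end)
        segments.append(("DATA", pos, seg_end - pos))
        pos = seg_end
    if pos < end:                                 # trailing hole
        segments.append(("HOLE", pos, end - pos))
    return segments
```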
> - Changes since v3:
>   - Combine first two patches related to xdr_reserve_space_vec()
>   - Remove unnecessary call to svc_encode_read_payload()
> 
> Here are the results of some performance tests I ran on some lab
> machines.

What's the hardware setup? (Do you know the network and disk bandwidth?)

> I tested by reading various 2G files from a few different underlying
> filesystems and across several NFS versions. I used the `vmtouch` utility
> to make sure files were only cached when we wanted them to be. In addition
> to the 100% data and 100% hole cases, I also tested files that alternate
> between data and hole segments. These files have 4K, 8K, 16K, or 32K
> segment sizes and start with either a data or a hole segment, so the file
> mixed-4d has 4K segments beginning with data, while mixed-32h has 32K
> segments beginning with a hole. Times are in seconds; for each NFS
> version, the first number is the uncached read time and the second is
> the read time when the file is cached on the server.
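
[Editorial note: the cover letter doesn't show the exact commands used, but a file like mixed-4d could be built along these lines — a minimal sketch with a hypothetical filename, scaled down to 64K instead of the 2G used in the tests:]

```python
import os

SEG = 4096     # segment size (mixed-4d uses 4K segments)
NSEG = 16      # 64K total here; the actual test files were 2G

# Alternate data and hole segments, starting with data: writes land at
# even segment offsets, and seeking past unwritten ranges leaves holes
# on a sparse-capable filesystem.
with open("mixed-4d-sample", "wb") as f:
    for i in range(0, NSEG, 2):
        f.seek(i * SEG)
        f.write(os.urandom(SEG))
    f.truncate(NSEG * SEG)   # extend so the file ends with a hole

# Before an "uncached" run, the file would be evicted from the server's
# page cache, e.g. with `vmtouch -e mixed-4d-sample`.
```

On filesystems without sparse-file support the hole segments are stored as zeroes, but the contents visible over NFS are the same.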

The only numbers that look really strange are the btrfs uncached ones,
in the data-only case and in the mixed cases that start with a hole.
Do we have any idea what's up there?

--b.

> Read Plus Results (btrfs):
>   data
>    :... v4.1 ... Uncached ... 21.317 s, 101 MB/s, 0.63 s kern, 2% cpu
>    :    :....... Cached ..... 18.252 s, 118 MB/s, 0.67 s kern, 3% cpu
>    :... v4.2 ... Uncached ... 28.665 s,  75 MB/s, 0.65 s kern, 2% cpu
>         :....... Cached ..... 18.253 s, 118 MB/s, 0.66 s kern, 3% cpu
>   hole
>    :... v4.1 ... Uncached ... 18.256 s, 118 MB/s, 0.70 s kern,  3% cpu
>    :    :....... Cached ..... 18.254 s, 118 MB/s, 0.73 s kern,  4% cpu
>    :... v4.2 ... Uncached ...  0.851 s, 2.5 GB/s, 0.72 s kern, 84% cpu
>         :....... Cached .....  0.847 s, 2.5 GB/s, 0.73 s kern, 86% cpu
>   mixed-4d
>    :... v4.1 ... Uncached ... 56.857 s,  38 MB/s, 0.76 s kern, 1% cpu
>    :    :....... Cached ..... 18.252 s, 118 MB/s, 0.72 s kern, 3% cpu
>    :... v4.2 ... Uncached ... 54.455 s,  39 MB/s, 0.73 s kern, 1% cpu
>         :....... Cached .....  9.215 s, 233 MB/s, 0.68 s kern, 7% cpu
>   mixed-8d
>    :... v4.1 ... Uncached ... 36.641 s,  59 MB/s, 0.68 s kern, 1% cpu
>    :    :....... Cached ..... 18.252 s, 118 MB/s, 0.70 s kern, 3% cpu
>    :... v4.2 ... Uncached ... 33.205 s,  65 MB/s, 0.67 s kern, 2% cpu
>         :....... Cached .....  9.172 s, 234 MB/s, 0.65 s kern, 7% cpu
>   mixed-16d
>    :... v4.1 ... Uncached ... 28.653 s,  75 MB/s, 0.72 s kern, 2% cpu
>    :    :....... Cached ..... 18.252 s, 118 MB/s, 0.70 s kern, 3% cpu
>    :... v4.2 ... Uncached ... 25.748 s,  83 MB/s, 0.71 s kern, 2% cpu
>         :....... Cached .....  9.150 s, 235 MB/s, 0.64 s kern, 7% cpu
>   mixed-32d
>    :... v4.1 ... Uncached ... 28.886 s,  74 MB/s, 0.67 s kern, 2% cpu
>    :    :....... Cached ..... 18.252 s, 118 MB/s, 0.71 s kern, 3% cpu
>    :... v4.2 ... Uncached ... 24.724 s,  87 MB/s, 0.74 s kern, 2% cpu
>         :....... Cached .....  9.140 s, 235 MB/s, 0.63 s kern, 6% cpu
>   mixed-4h
>    :... v4.1 ... Uncached ...  52.181 s,  41 MB/s, 0.73 s kern, 1% cpu
>    :    :....... Cached .....  18.252 s, 118 MB/s, 0.66 s kern, 3% cpu
>    :... v4.2 ... Uncached ... 150.341 s,  14 MB/s, 0.72 s kern, 0% cpu
>         :....... Cached .....   9.216 s, 233 MB/s, 0.63 s kern, 6% cpu
>   mixed-8h
>    :... v4.1 ... Uncached ... 36.945 s,  58 MB/s, 0.68 s kern, 1% cpu
>    :    :....... Cached ..... 18.252 s, 118 MB/s, 0.65 s kern, 3% cpu
>    :... v4.2 ... Uncached ... 79.781 s,  27 MB/s, 0.68 s kern, 0% cpu
>         :....... Cached .....  9.172 s, 234 MB/s, 0.66 s kern, 7% cpu
>   mixed-16h
>    :... v4.1 ... Uncached ... 28.651 s,  75 MB/s, 0.73 s kern, 2% cpu
>    :    :....... Cached ..... 18.252 s, 118 MB/s, 0.66 s kern, 3% cpu
>    :... v4.2 ... Uncached ... 47.428 s,  45 MB/s, 0.71 s kern, 1% cpu
>         :....... Cached .....  9.150 s, 235 MB/s, 0.67 s kern, 7% cpu
>   mixed-32h
>    :... v4.1 ... Uncached ... 28.618 s,  75 MB/s, 0.69 s kern, 2% cpu
>    :    :....... Cached ..... 18.252 s, 118 MB/s, 0.70 s kern, 3% cpu
>    :... v4.2 ... Uncached ... 38.813 s,  55 MB/s, 0.67 s kern, 1% cpu
>         :....... Cached .....  9.140 s, 235 MB/s, 0.61 s kern, 6% cpu


Thread overview: 30+ messages
2020-08-17 16:53 [PATCH v4 0/5] NFSD: Add support for the v4.2 READ_PLUS operation schumaker.anna
2020-08-17 16:53 ` [PATCH v4 1/5] SUNRPC/NFSD: Implement xdr_reserve_space_vec() schumaker.anna
2020-08-17 16:53 ` [PATCH v4 2/5] NFSD: Add READ_PLUS data support schumaker.anna
2020-08-28 21:25   ` J. Bruce Fields
2020-08-28 21:56     ` J. Bruce Fields
2020-08-31 18:16       ` Anna Schumaker
2020-09-01 16:49         ` J. Bruce Fields
2020-09-01 17:40           ` Anna Schumaker
2020-09-01 19:18             ` J. Bruce Fields
2020-09-04 13:52               ` J. Bruce Fields
2020-09-04 13:56                 ` Chuck Lever
2020-09-04 14:03                   ` Bruce Fields
2020-09-04 14:07                     ` Chuck Lever
2020-09-04 14:29                       ` Bruce Fields
2020-09-04 14:36                         ` Chuck Lever
2020-09-04 14:49                           ` J. Bruce Fields
2020-09-04 14:58                             ` Chuck Lever
2020-09-04 15:24                               ` Bruce Fields
2020-09-04 16:17                                 ` Chuck Lever
2020-09-04 16:26                                   ` Bruce Fields
2020-09-04 16:30                                   ` Chuck Lever
2020-08-17 16:53 ` [PATCH v4 3/5] NFSD: Add READ_PLUS hole segment encoding schumaker.anna
2020-08-17 16:53 ` [PATCH v4 4/5] NFSD: Return both a hole and a data segment schumaker.anna
2020-08-28 22:18   ` J. Bruce Fields
2020-08-31 18:15     ` Anna Schumaker
2020-08-17 16:53 ` [PATCH v4 5/5] NFSD: Encode a full READ_PLUS reply schumaker.anna
2020-08-19 17:07 ` [PATCH v4 0/5] NFSD: Add support for the v4.2 READ_PLUS operation Chuck Lever
2020-08-26 21:54 ` J. Bruce Fields [this message]
2020-08-31 18:33   ` Anna Schumaker
2020-09-04 15:56     ` J. Bruce Fields
