From: Anna Schumaker <Anna.Schumaker@netapp.com>
To: "J. Bruce Fields" <bfields@fieldses.org>
Cc: "linux-nfs@vger.kernel.org" <linux-nfs@vger.kernel.org>,
Trond Myklebust <trond.myklebust@primarydata.com>,
Marc Eshel <eshel@us.ibm.com>,
xfs@oss.sgi.com, Christoph Hellwig <hch@infradead.org>,
linux-nfs-owner@vger.kernel.org
Subject: Re: [PATCH v3 3/3] NFSD: Add support for encoding multiple segments
Date: Thu, 26 Mar 2015 12:18:47 -0400 [thread overview]
Message-ID: <55143167.5030605@Netapp.com> (raw)
In-Reply-To: <20150326161154.GC30482@fieldses.org>
On 03/26/2015 12:11 PM, J. Bruce Fields wrote:
> On Thu, Mar 26, 2015 at 11:47:03AM -0400, Anna Schumaker wrote:
>> On 03/26/2015 11:38 AM, J. Bruce Fields wrote:
>>> On Thu, Mar 26, 2015 at 11:32:25AM -0400, Trond Myklebust wrote:
>>>> On Thu, Mar 26, 2015 at 11:21 AM, Anna Schumaker
>>>> <Anna.Schumaker@netapp.com> wrote:
>>>>> Here are my updated numbers! I tested with files 5G in size: one 100% data, one 100% hole, and one alternating between hole and data every 4K. I collected data for both v4.1 and v4.2 with and without the READ_PLUS patches:
>>>>>
>>>>> ##########################
>>>>> # #
>>>>> # Without READ_PLUS #
>>>>> # #
>>>>> ##########################
>>>>>
>>>>>
>>>>> NFS v4.1:
>>>>> Trial
>>>>> |---------|---------|---------|---------|---------|---------|---------|
>>>>> | | 1 | 2 | 3 | 4 | 5 | Average |
>>>>> |---------|---------|---------|---------|---------|---------|---------|
>>>>> | Data | 8.723s | 7.243s | 8.252s | 6.997s | 6.980s | 7.639s |
>>>>> | Hole | 5.271s | 5.224s | 5.060s | 4.897s | 5.321s | 5.155s |
>>>>> | Mixed | 8.050s | 10.057s | 7.919s | 8.060s | 9.557s | 8.729s |
>>>>> |---------|---------|---------|---------|---------|---------|---------|
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> NFS v4.2:
>>>>> Trial
>>>>> |---------|---------|---------|---------|---------|---------|---------|
>>>>> | | 1 | 2 | 3 | 4 | 5 | Average |
>>>>> |---------|---------|---------|---------|---------|---------|---------|
>>>>> | Data | 6.707s | 7.070s | 6.722s | 6.761s | 6.810s | 6.814s |
>>>>> | Hole | 5.152s | 5.149s | 5.213s | 5.206s | 5.312s | 5.206s |
>>>>> | Mixed | 7.979s | 7.985s | 8.177s | 7.772s | 8.280s | 8.039s |
>>>>> |---------|---------|---------|---------|---------|---------|---------|
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> #######################
>>>>> # #
>>>>> # With READ_PLUS #
>>>>> # #
>>>>> #######################
>>>>>
>>>>>
>>>>> NFS v4.1:
>>>>> Trial
>>>>> |---------|---------|---------|---------|---------|---------|---------|
>>>>> | | 1 | 2 | 3 | 4 | 5 | Average |
>>>>> |---------|---------|---------|---------|---------|---------|---------|
>>>>> | Data | 9.082s | 7.008s | 7.116s | 6.771s | 7.902s | 7.576s |
>>>>> | Hole | 5.333s | 5.358s | 5.380s | 5.161s | 5.282s | 5.303s |
>>>>> | Mixed | 8.189s | 8.308s | 9.540s | 7.937s | 8.420s | 8.479s |
>>>>> |---------|---------|---------|---------|---------|---------|---------|
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> NFS v4.2:
>>>>> Trial
>>>>> |---------|---------|---------|---------|---------|---------|---------|
>>>>> | | 1 | 2 | 3 | 4 | 5 | Average |
>>>>> |---------|---------|---------|---------|---------|---------|---------|
>>>>> | Data | 7.033s | 6.829s | 7.025s | 6.873s | 7.134s | 6.979s |
>>>>> | Hole | 1.794s | 1.800s | 1.905s | 1.811s | 1.725s | 1.807s |
>>>>> | Mixed | 7.590s | 8.777s | 9.423s | 10.366s | 8.024s | 8.836s |
>>>>> |---------|---------|---------|---------|---------|---------|---------|
>>>>>
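(For anyone wanting to reproduce a scaled-down version of the file layouts described above, a sketch like the following would do it. The 1 MiB size, pattern byte, and paths are illustrative placeholders, not the actual test harness used for the numbers.)

```python
import os
import tempfile

CHUNK = 4096             # hole/data alternation granularity from the test above
SIZE = 256 * CHUNK       # 1 MiB: a scaled-down stand-in for the 5G test files

def make_data_file(path):
    """100% data: every byte is written, so every block is allocated."""
    with open(path, "wb") as f:
        f.write(b"\xaa" * SIZE)

def make_hole_file(path):
    """100% hole: only the length is set; no blocks are allocated."""
    with open(path, "wb") as f:
        f.truncate(SIZE)

def make_mixed_file(path):
    """Alternating 4K of data and 4K of hole."""
    with open(path, "wb") as f:
        for off in range(0, SIZE, 2 * CHUNK):
            f.seek(off)                 # seeking ahead leaves a hole behind
            f.write(b"\xaa" * CHUNK)
        f.truncate(SIZE)                # extend the trailing hole to full size

if __name__ == "__main__":
    d = tempfile.mkdtemp()
    for name, make in (("data", make_data_file),
                       ("hole", make_hole_file),
                       ("mixed", make_mixed_file)):
        p = os.path.join(d, name)
        make(p)
        st = os.stat(p)
        print(name, st.st_size, st.st_blocks)
```

Comparing st_blocks across the three files confirms the sparseness (on a filesystem with sparse-file support, the hole file allocates no blocks and the mixed file roughly half).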
>>>>
>>>> So there is a clear win in the 100% hole case here, but otherwise the
>>>> statistical fluctuations are dominating the numbers. Can you get us a
>>>> little more stats and then perhaps run the results through nfsometer?
>>>
>>> Also, could you describe the setup (are these still kvm's), and how
>>> you're clearing the cache between runs?
>>
>> These are still KVMs, and my server is exporting an XFS filesystem. I clear caches by running "echo 3 > /proc/sys/vm/drop_caches" on the server before every read, and I remount the client after reading each set of three files once.
>
> What sort of device is the exported xfs filesystem on? (Can't there
> be a second level of caching on the guest, depending on how it's set
> up?)
My host is a MacBook Pro running Arch Linux, and I have all my virtio disks set to "cache mode = none". Let me know if you were asking about something different!
>
> Can we get results on bare metal? (The kvm test might be a good
> worst-case for read_plus, as I'd expect bandwidth to be relatively high
> compared to the cost of the extra memcpy's or seek calls. But it also
> seems more complicated.)
I do all of my testing on KVM these days! I'll see how difficult it is to set up rEFInd with a custom kernel to test between my laptop and my desktop (or I could run the test between my Raspberry Pis!).
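(As an aside, the hole/data segmentation that READ_PLUS reports can be approximated from userspace with the Linux SEEK_DATA/SEEK_HOLE lseek(2) modes. This is only a sketch of the idea, not code from the patches; the function name and demo file are made up for illustration.)

```python
import errno
import os
import tempfile

def segments(path):
    """Return (kind, start, end) spans covering the whole file, where kind is
    "data" or "hole", using SEEK_DATA/SEEK_HOLE (Linux-specific; filesystems
    without hole reporting return the entire file as one data segment)."""
    segs = []
    with open(path, "rb") as f:
        fd = f.fileno()
        size = os.fstat(fd).st_size
        off = 0
        while off < size:
            try:
                data = os.lseek(fd, off, os.SEEK_DATA)
            except OSError as e:
                if e.errno != errno.ENXIO:
                    raise
                segs.append(("hole", off, size))   # ENXIO: trailing hole to EOF
                break
            if data > off:
                segs.append(("hole", off, data))   # hole before the next data
            hole = os.lseek(fd, data, os.SEEK_HOLE)
            segs.append(("data", data, hole))
            off = hole
    return segs

if __name__ == "__main__":
    d = tempfile.mkdtemp()
    p = os.path.join(d, "probe")
    with open(p, "wb") as f:
        f.write(b"x" * 8192)
        f.truncate(65536)       # 8K of data followed by a 56K trailing hole
    print(segments(p))
```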
Anna
>
> --b.
>
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs