From: "J. Bruce Fields" <bfields@fieldses.org>
To: Jeff Layton <jlayton@kernel.org>
Cc: Dave Chinner <david@fromorbit.com>,
	linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	hch@lst.de, neilb@suse.de, amir73il@gmail.com, jack@suse.de,
	viro@zeniv.linux.org.uk
Subject: Re: [PATCH 00/19] fs: rework and optimize i_version handling in filesystems
Date: Thu, 14 Dec 2017 10:14:11 -0500
Message-ID: <20171214151411.GC9205@fieldses.org>
In-Reply-To: <1513260887.3504.12.camel@kernel.org>

On Thu, Dec 14, 2017 at 09:14:47AM -0500, Jeff Layton wrote:
> On Wed, 2017-12-13 at 19:02 -0500, Jeff Layton wrote:
> > On Thu, 2017-12-14 at 10:03 +1100, Dave Chinner wrote:
> > > On Wed, Dec 13, 2017 at 03:14:28PM -0500, Jeff Layton wrote:
> > > > On Wed, 2017-12-13 at 10:05 -0500, J. Bruce Fields wrote:
> > > > > This is great, thanks.
> > > > > 
> > > > > On Wed, Dec 13, 2017 at 09:19:58AM -0500, Jeff Layton wrote:
> > > > > > With this, we reduce inode metadata updates across all 3 filesystems
> > > > > > down to roughly the frequency of the timestamp granularity, particularly
> > > > > > when i_version is not being queried (the vastly common case).
> > > > > > 
> > > > > > The pessimal workload here is 1-byte writes, and this set helps that
> > > > > > case significantly. Of course, that's not what we'd consider a
> > > > > > real-world workload.
> > > > > > 
> > > > > > A tiobench-example.fio workload also shows some modest performance
> > > > > > gains, and I've gotten mails from the kernel test robot that show some
> > > > > > significant performance gains on some microbenchmarks (case-msync-mt in
> > > > > > the vm-scalability testsuite to be specific), with an earlier version of
> > > > > > this set.
> > > > > > 
> > > > > > With larger writes, the gains with this patchset mostly vaporize,
> > > > > > but it does not seem to cause performance to regress anywhere, AFAICT.
> > > > > > 
> > > > > > I'm happy to run other workloads if anyone can suggest them.
> > > > > > 
> > > > > > At this point, the patchset works and does what it's expected to do in
> > > > > > my own testing. It seems like it's at least a modest performance win
> > > > > > across all 3 major disk-based filesystems. It may also encourage other
> > > > > > filesystems to implement i_version, since it reduces the cost.
> > > > > 
> > > > > Do you have an idea what the remaining cost is?
> > > > > 
> > > > > Especially in the ext4 case, are you still able to measure any
> > > > > difference in performance between the cases where i_version is turned on
> > > > > and off, after these patches?
> > > > 
> > > > Attached is a fio jobfile + the output from 3 different runs using it
> > > > with ext4. This one is using 4k writes. There was no querying of
> > > > i_version during the runs.  I've done several runs with each and these
> > > > are pretty representative of the results:
> > > > 
> > > > old = 4.15-rc3, i_version enabled
> > > > ivers = 4.15-rc3 + these patches, i_version enabled
> > > > noivers = 4.15-rc3 + these patches, i_version disabled
> > > > 
> > > > To snip out the run status lines:
> > > > 
> > > > old:
> > > > WRITE: bw=85.6MiB/s (89.8MB/s), 9994KiB/s-11.1MiB/s (10.2MB/s-11.7MB/s), io=50.2GiB (53.8GB), run=600001-600001msec
> > > > 
> > > > ivers:
> > > > WRITE: bw=110MiB/s (115MB/s), 13.5MiB/s-14.2MiB/s (14.1MB/s-14.9MB/s), io=64.3GiB (69.0GB), run=600001-600001msec
> > > > 
> > > > noivers:
> > > > WRITE: bw=117MiB/s (123MB/s), 14.2MiB/s-15.2MiB/s (14.9MB/s-15.9MB/s), io=68.7GiB (73.8GB), run=600001-600001msec
> > > > 
> > > > So, I see some performance degradation with -o iversion compared to not
> > > > having it enabled (maybe due to the extra atomic fetches?), but this set
> > > > erases most of the difference.
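
An aside for anyone following along: the core trick in this series, as I
understand it, is to steal the low bit of the counter as a "somebody has
seen this value" flag, so writers can skip the increment (and the
resulting inode dirtying) whenever the current value has never been
queried.  A self-contained userspace sketch of that scheme, not the
actual patch code:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define QUERIED	1ULL	/* low bit: someone has seen the current value */

static _Atomic uint64_t i_version;

/* Writer side: bump the counter only if a reader saw the current value. */
static bool maybe_inc_iversion(void)
{
	uint64_t cur = atomic_load(&i_version);

	do {
		if (!(cur & QUERIED))
			return false;	/* nobody looked; skip the bump */
	} while (!atomic_compare_exchange_weak(&i_version, &cur,
					       (cur + 2) & ~QUERIED));
	return true;
}

/* Reader side: fetch the counter and mark the current value as seen. */
static uint64_t query_iversion(void)
{
	uint64_t cur = atomic_load(&i_version);

	while (!(cur & QUERIED) &&
	       !atomic_compare_exchange_weak(&i_version, &cur, cur | QUERIED))
		;
	return cur >> 1;	/* the counter proper lives in bits 63:1 */
}

int main(void)
{
	printf("bumped? %d\n", maybe_inc_iversion());	/* 0: never queried */
	printf("version %llu\n", (unsigned long long)query_iversion());
	printf("bumped? %d\n", maybe_inc_iversion());	/* 1: value was seen */
	return 0;
}

With that, a write with nobody watching costs a single atomic read in
the common case, which fits the ivers/noivers numbers above being so
close together.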
> > > 
> > > So what is the performance difference when something is actively
> > > querying the i_version counter as fast as it can (e.g. file being
> > > constantly stat()d via NFS whilst being modified)? How does the
> > > performance compare to the old code in that case?
> > > 
> > 
> > I haven't benchmarked that with the latest set, but I did with the set
> > that I posted around a year ago. Basically I just ran a similar test to
> > this, and had another shell open calling statx(..., STATX_VERSION, ...)
> > on the file in a tight loop.
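
For reference, such a tester needn't be anything fancier than the loop
below.  Note that STATX_VERSION and stx_version are the interface
proposed along with this series, not anything in mainline, so this
sketch won't build against stock headers:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
	struct statx stx;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}
	/* hammer the inode with i_version queries as fast as possible */
	for (;;) {
		if (statx(AT_FDCWD, argv[1], 0, STATX_VERSION, &stx) < 0) {
			perror("statx");
			return 1;
		}
		/* stx.stx_version would carry the i_version counter */
	}
}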
> > 
> > I did see some performance hit vs. the case where no one is viewing it,
> > but it was still significantly faster than the unpatched version that
> > was incrementing the counter every time.
> > 
> > That was on a different test rig, and the patchset has some small
> > differences now. I'll see if I can get some hard numbers with such a
> > testcase soon.
> 
> I reran the test on xfs, with another shell running test-statx to query
> the i_version value in a tight loop:
> 
>     WRITE: bw=110MiB/s (115MB/s), 13.6MiB/s-14.0MiB/s (14.2MB/s-14.7MB/s), io=64.5GiB (69.3GB), run=600001-600001msec
> 
> ...contrast that with the run where I was not doing any queries:
> 
>    WRITE: bw=129MiB/s (136MB/s), 15.8MiB/s-16.6MiB/s (16.6MB/s-17.4MB/s), io=75.9GiB (81.5GB), run=600001-600001msec
> 
> ...vs the unpatched kernel:
> 
>    WRITE: bw=86.7MiB/s (90.0MB/s), 9689KiB/s-11.7MiB/s (9921kB/s-12.2MB/s), io=50.8GiB (54.6GB), run=600001-600002msec
> 
> There is some clear performance impact when you are running frequent
> queries of the i_version.
> 
> My gut feeling is that you could probably make the new code perform
> worse than the old if you were to _really_ hammer the inode with queries
> for the i_version (probably from many threads in parallel) while doing a
> lot of small writes to it.
> 
> That'd be a pretty unusual workload though.

It may be pretty common for NFS itself: if I'm understanding the client
code right (mainly nfs4_write_need_cache_consistency()), our client will
request the change attribute in every WRITE that isn't a pNFS write, an
O_DIRECT write, or associated with a delegation.
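
Paraphrasing that test, with made-up struct and field names (a sketch of
the logic as I read it, not the literal function body):

#include <stdbool.h>

/* Hypothetical stand-in for the relevant per-write state; these names
 * are mine, not the kernel's. */
struct nfs_write_ctx {
	bool is_pnfs;		/* write is going via a pNFS layout */
	bool is_odirect;	/* O_DIRECT write */
	bool has_delegation;	/* we hold a delegation on the inode */
};

/* Does this WRITE need to ask for the change attribute? */
static bool write_needs_change_attr(const struct nfs_write_ctx *ctx)
{
	if (ctx->is_pnfs || ctx->is_odirect || ctx->has_delegation)
		return false;
	return true;
}

So a plain buffered write with no delegation queries the change
attribute on every single WRITE on the wire, which is exactly the
query-while-writing pattern Jeff describes above.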

The goal of this series isn't to improve NFS performance, it's to save
non-NFS users from paying a performance penalty for something that NFS
requires for correctness.  Probably this series doesn't make much
difference in the NFS write case, and that's fine.  Still, it might be
worth confirming that a workload with lots of small NFS writes is mostly
unaffected.
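
Something as dumb as the loop below, pointed at a file on an NFS mount,
would probably do for that; the file name, write size, and iteration
count are of course arbitrary:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	char buf[4096] = { 0 };	/* shrink for an even nastier 1-byte case */
	long i, count = 1000000;
	int fd;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <file> [count]\n", argv[0]);
		return 1;
	}
	if (argc > 2)
		count = atol(argv[2]);

	fd = open(argv[1], O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* lots of small writes to the same file over NFS */
	for (i = 0; i < count; i++) {
		if (pwrite(fd, buf, sizeof(buf), 0) != sizeof(buf)) {
			perror("pwrite");
			return 1;
		}
	}
	close(fd);
	return 0;
}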

--b.

Thread overview: 46+ messages
2017-12-13 14:19 [PATCH 00/19] fs: rework and optimize i_version handling in filesystems Jeff Layton
2017-12-13 14:19 ` [PATCH 01/19] fs: new API for handling inode->i_version Jeff Layton
2017-12-13 22:04   ` NeilBrown
2017-12-14  0:27     ` Jeff Layton
2017-12-16  4:17       ` NeilBrown
2017-12-17 13:01         ` Jeff Layton
2017-12-18 14:03         ` Jeff Layton
2017-12-13 14:20 ` [PATCH 02/19] fs: don't take the i_lock in inode_inc_iversion Jeff Layton
2017-12-13 21:52   ` Jeff Layton
2017-12-13 22:07     ` NeilBrown
2017-12-13 14:20 ` [PATCH 03/19] fat: convert to new i_version API Jeff Layton
2017-12-13 14:20 ` [PATCH 04/19] affs: " Jeff Layton
2017-12-13 14:20 ` [PATCH 05/19] afs: " Jeff Layton
2017-12-13 14:20 ` [PATCH 06/19] btrfs: " Jeff Layton
2017-12-13 14:20 ` [PATCH 07/19] exofs: switch " Jeff Layton
2017-12-13 14:20 ` [PATCH 08/19] ext2: convert " Jeff Layton
2017-12-18 12:47   ` Jan Kara
2017-12-13 14:20 ` [PATCH 09/19] ext4: " Jeff Layton
2017-12-14 21:52   ` Theodore Ts'o
2017-12-13 14:20 ` [PATCH 10/19] nfs: " Jeff Layton
2017-12-13 14:20 ` [PATCH 11/19] nfsd: " Jeff Layton
2017-12-13 14:20 ` [PATCH 12/19] ocfs2: " Jeff Layton
2017-12-18 12:49   ` Jan Kara
2017-12-13 14:20 ` [PATCH 13/19] ufs: use " Jeff Layton
2017-12-13 14:20 ` [PATCH 14/19] xfs: convert to " Jeff Layton
2017-12-13 22:48   ` Dave Chinner
2017-12-13 23:25     ` Dave Chinner
2017-12-14  0:10       ` Jeff Layton
2017-12-14  2:17         ` Dave Chinner
2017-12-14 11:16           ` Jeff Layton
2017-12-13 14:20 ` [PATCH 15/19] IMA: switch IMA over " Jeff Layton
2017-12-13 14:20 ` [PATCH 16/19] fs: only set S_VERSION when updating times if necessary Jeff Layton
2017-12-15 12:59   ` Jeff Layton
2017-12-13 14:20 ` [PATCH 17/19] xfs: avoid setting XFS_ILOG_CORE if i_version doesn't need incrementing Jeff Layton
2017-12-13 14:20 ` [PATCH 18/19] btrfs: only dirty the inode in btrfs_update_time if something was changed Jeff Layton
2017-12-15 13:03   ` Jeff Layton
2017-12-13 14:20 ` [PATCH 19/19] fs: handle inode->i_version more efficiently Jeff Layton
2017-12-13 15:05 ` [PATCH 00/19] fs: rework and optimize i_version handling in filesystems J. Bruce Fields
2017-12-13 20:14   ` Jeff Layton
2017-12-13 22:10     ` Jeff Layton
2017-12-13 23:03     ` Dave Chinner
2017-12-14  0:02       ` Jeff Layton
2017-12-14 14:14         ` Jeff Layton
2017-12-14 15:14           ` J. Bruce Fields [this message]
2017-12-15 15:15             ` Jeff Layton
2017-12-15 15:26               ` J. Bruce Fields
