linux-xfs.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
From: Dave Chinner <david@fromorbit.com>
To: Lucas Stach <dev@lynxeye.de>
Cc: linux-xfs@vger.kernel.org
Subject: Re: [PATCH 0/2] XFS buffer cache scalability improvements
Date: Wed, 19 Oct 2016 08:21:16 +1100	[thread overview]
Message-ID: <20161018212116.GC23194@dastard> (raw)
In-Reply-To: <1476821653-2595-1-git-send-email-dev@lynxeye.de>

On Tue, Oct 18, 2016 at 10:14:11PM +0200, Lucas Stach wrote:
> Hi all,
> 
> this series scratches my own small itch with XFS, namely scalability of the buffer
> cache in metadata intensive workloads. With a large number of cached buffers those
> workloads are CPU bound with a significant amount of time spent searching the cache.
> 
> The first commit replaces the rbtree used to index the cache with an rhashtable. The
> rbtree is a bottleneck in scalability, as the data structure itself is pretty CPU
> cache unfriendly. For larger numbers of cached buffers over 80% of the CPU time
> is spent waiting on cache misses resulting from the inherent pointer chasing.
> 
> rhashtables provide a fast lookup with the ability to have lookups proceed while the
> hashtable is being resized. This seems to match the read dominated workload of the
> buffer cache index structure pretty well.

Yup, it's a good idea - I have considered doing this change for
these reasons, but have never found the time.

> The second patch is logical follow up. The rhashtable cache index is protected by
> RCU and does not need any additional locking. By switching the buffer cache entries
> over to RCU freeing the buffer cache can be operated in a completely lock-free
> manner. This should help scalability in the long run.

Yup, that's another reason I'd considered rhashtables :P

However, this is where it gets hairy. The buffer lifecycle is
intricate, subtle, and has a history of nasty bugs that just never
seem to go away. This change will require a lot of verification
work to ensure things like the LRU manipulations haven't been
compromised by the removal of this lock...

> This series survives at least a xfstests auto group run (though with the scratch
> device being a ramdisk) with no regressions and didn't show any problems in my
> real world testing (using the patched FS with multiple large git trees) so far.

It's a performance modification - any performance/profile numbers
that show the improvement?

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

  parent reply	other threads:[~2016-10-18 21:21 UTC|newest]

Thread overview: 16+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2016-10-18 20:14 [PATCH 0/2] XFS buffer cache scalability improvements Lucas Stach
2016-10-18 20:14 ` [PATCH 1/2] xfs: use rhashtable to track buffer cache Lucas Stach
2016-10-18 22:18   ` Dave Chinner
2016-10-22 18:01     ` Lucas Stach
2016-10-24  2:15       ` Dave Chinner
2016-10-24 11:47         ` Lucas Stach
2016-10-19  1:15   ` Dave Chinner
2016-10-18 20:14 ` [PATCH 2/2] xfs: switch buffer cache entries to RCU freeing Lucas Stach
2016-10-18 22:43   ` Dave Chinner
2016-10-22 18:52     ` Lucas Stach
2016-10-24  2:37       ` Dave Chinner
2016-10-18 21:21 ` Dave Chinner [this message]
2016-10-22 17:51   ` [PATCH 0/2] XFS buffer cache scalability improvements Lucas Stach
2016-11-10 23:02   ` Dave Chinner
2016-12-02 21:54     ` Lucas Stach
2016-12-04 21:36       ` Dave Chinner

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20161018212116.GC23194@dastard \
    --to=david@fromorbit.com \
    --cc=dev@lynxeye.de \
    --cc=linux-xfs@vger.kernel.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).