public inbox for linux-xfs@vger.kernel.org
From: Dave Chinner <david@fromorbit.com>
To: Christoph Hellwig <hch@infradead.org>
Cc: linux-xfs@vger.kernel.org
Subject: Re: [PATCH v2 0/9] xfs: use large folios for buffers
Date: Tue, 19 Mar 2024 11:44:13 +1100	[thread overview]
Message-ID: <Zfjf3TBzCZSUIQc6@dread.disaster.area> (raw)
In-Reply-To: <ZfjbKh1Yifn7Ok8x@infradead.org>

On Mon, Mar 18, 2024 at 05:24:10PM -0700, Christoph Hellwig wrote:
> On Tue, Mar 19, 2024 at 09:45:51AM +1100, Dave Chinner wrote:
> > Apart from those small complexities that are resolved by the end of
> > the patchset, the conversion and enhancement are relatively
> > straightforward.  It passes fstests on both 512 and 4096 byte sector size
> > storage (512 byte sectors exercise the XBF_KMEM path which has
> > non-zero bp->b_offset values) and doesn't appear to cause any
> > problems with large 64kB directory buffers on 4kB page machines.
> 
> Just curious, do you have any benchmark numbers to see if this actually
> improves performance?

I have run some fsmark scalability tests on 64kB directory block
sizes to check that nothing fails and that the numbers are in the
expected ballpark, but I haven't done any specific back-to-back
performance regression testing.

The reason for that is two-fold:

1. Scalability on 64kB directory buffer workloads is limited by
buffer lock latency and journal size. i.e. even a 2GB journal is
too small for high concurrency, so we see significant amounts of
tail pushing, with directory modifications getting stuck waiting on
writeback of directory buffers driven by that tail pushing.

2. Relogging 64kB directory blocks is -expensive-. Compared to a 4kB
block size, large directory blocks are relogged much more
frequently, and the memcpy() in each relogging costs *much* more
than relogging a 4kB directory block. It also hits xlog_kvmalloc()
really hard, and that's now where we hit vmalloc scalability issues
on large dir block size workloads. A rough sketch of the copy cost
is below.
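
To illustrate: this is a standalone userspace sketch, not the kernel
xlog code - the file name, helper name and buffer names are all made
up - showing that each relog memcpy()s the entire dirty block, so a
64kB block copies 16x the data of a 4kB block per relog, before even
accounting for 64kB blocks also being relogged more often:

/* relog_cost.c: hypothetical microbenchmark, not XFS code. */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

static double time_relog_copies(size_t blksz, int relogs)
{
	char *buf = malloc(blksz);	/* stand-in for the dir buffer */
	char *lv = malloc(blksz);	/* stand-in for the log vector */
	struct timespec t0, t1;
	int i;

	memset(buf, 0xab, blksz);
	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < relogs; i++)
		memcpy(lv, buf, blksz);	/* one "relog" copies it all */
	clock_gettime(CLOCK_MONOTONIC, &t1);
	free(buf);
	free(lv);
	return (t1.tv_sec - t0.tv_sec) +
	       (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
	printf("4kB:  %.3fs\n", time_relog_copies(4096, 1 << 20));
	printf("64kB: %.3fs\n", time_relog_copies(65536, 1 << 20));
	return 0;
}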

The net result of these two effects is that there hasn't been any
significant change in performance one way or the other - what we
gain in buffer access efficiency, we give back in increased lock
contention and tail-pushing latency...

-Dave.
-- 
Dave Chinner
david@fromorbit.com

Thread overview: 47+ messages
2024-03-18 22:45 [PATCH v2 0/9] xfs: use large folios for buffers Dave Chinner
2024-03-18 22:45 ` [PATCH 1/9] xfs: unmapped buffer item size straddling mismatch Dave Chinner
2024-03-18 22:45 ` [PATCH 2/9] xfs: use folios in the buffer cache Dave Chinner
2024-03-19  6:38   ` Christoph Hellwig
2024-03-19  6:52     ` Dave Chinner
2024-03-19  6:53   ` Christoph Hellwig
2024-03-19 21:42     ` Dave Chinner
2024-03-19 17:15   ` Darrick J. Wong
2024-03-18 22:45 ` [PATCH 3/9] xfs: convert buffer cache to use high order folios Dave Chinner
2024-03-19  6:55   ` Christoph Hellwig
2024-03-19 17:29   ` Darrick J. Wong
2024-03-19 21:32     ` Christoph Hellwig
2024-03-19 21:38       ` Darrick J. Wong
2024-03-19 21:41         ` Christoph Hellwig
2024-03-19 22:23           ` Dave Chinner
2024-03-21  2:12           ` Darrick J. Wong
2024-03-21  2:40             ` Darrick J. Wong
2024-03-21 21:28               ` Christoph Hellwig
2024-03-21 21:39                 ` Darrick J. Wong
2024-03-21 22:02                   ` Christoph Hellwig
2024-03-19 21:55     ` Dave Chinner
2024-03-22  8:02   ` Pankaj Raghav (Samsung)
2024-03-22 22:04     ` Dave Chinner
2024-03-25 11:17       ` Pankaj Raghav (Samsung)
2024-03-18 22:45 ` [PATCH 4/9] xfs: kill XBF_UNMAPPED Dave Chinner
2024-03-19 17:30   ` Darrick J. Wong
2024-03-19 23:36     ` Dave Chinner
2024-03-18 22:45 ` [PATCH 5/9] xfs: buffer items don't straddle pages anymore Dave Chinner
2024-03-19  6:56   ` Christoph Hellwig
2024-03-19 17:31   ` Darrick J. Wong
2024-03-18 22:45 ` [PATCH 6/9] xfs: map buffers in xfs_buf_alloc_folios Dave Chinner
2024-03-19 17:34   ` Darrick J. Wong
2024-03-19 21:32     ` Christoph Hellwig
2024-03-19 21:39       ` Darrick J. Wong
2024-03-19 21:41         ` Christoph Hellwig
2024-03-18 22:45 ` [PATCH 7/9] xfs: walk b_addr for buffer I/O Dave Chinner
2024-03-19 17:42   ` Darrick J. Wong
2024-03-19 21:33     ` Christoph Hellwig
2024-03-18 22:45 ` [PATCH 8/9] xfs: use vmalloc for multi-folio buffers Dave Chinner
2024-03-19 17:48   ` Darrick J. Wong
2024-03-20  0:20     ` Dave Chinner
2024-03-18 22:46 ` [PATCH 9/9] xfs: rename bp->b_folio_count Dave Chinner
2024-03-19  7:37   ` Christoph Hellwig
2024-03-19 23:59     ` Dave Chinner
2024-03-19  0:24 ` [PATCH v2 0/9] xfs: use large folios for buffers Christoph Hellwig
2024-03-19  0:44   ` Dave Chinner [this message]
