public inbox for linux-xfs@vger.kernel.org
From: "Darrick J. Wong" <djwong@kernel.org>
To: Christoph Hellwig <hch@infradead.org>
Cc: zlang@redhat.com, fstests@vger.kernel.org,
	linux-xfs@vger.kernel.org, guan@eryu.me
Subject: Re: [PATCHSET 3/3] xfsprogs: scale shards on ssds
Date: Tue, 4 Jun 2024 17:56:36 -0700
Message-ID: <20240605005636.GI52987@frogsfrogsfrogs>
In-Reply-To: <Zl6hdo1ZXQwg2aM0@infradead.org>

On Mon, Jun 03, 2024 at 10:09:10PM -0700, Christoph Hellwig wrote:
> On Mon, Jun 03, 2024 at 01:12:05PM -0700, Darrick J. Wong wrote:
> > This patchset adds a different computation for AG count and log size
> > that is based entirely on a desired level of concurrency.  If we detect
> > storage that is non-rotational (or the sysadmin provides a CLI option),
> > then we will try to match the AG count to the CPU count to minimize AGF
> > contention and make the log large enough to minimize grant head
> > contention.
> 
> Do you have any performance numbers for this?
> 
> Because SSDs still have a limited number of write streams, and doing
> more parallel writes just increases the work that the 'blender' has
> to do for them.  The typical number of internal write streams for
> SSDs that aren't crazy expensive is at most 8.

Not much other than the AG[IF] buffers and log grant heads becoming less
hot.  That pushes the bottlenecks to the storage device, which indeed
supports about 8 internal write streams per device.  More if you can
raid0 them.
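
Roughly, the new heuristic looks like this.  (A minimal sketch of the
idea; every name below is invented for illustration and does not match
the actual xfsprogs source:)

/*
 * Sketch of the concurrency heuristic described above.  Helper and
 * parameter names here are made up; this is not the real mkfs code.
 */
#include <unistd.h>
#include <stdbool.h>
#include <stdint.h>

/* Aim for one AG per CPU on non-rotational storage. */
static uint64_t ag_count_for_concurrency(bool rotational, long opt_concurrency)
{
	long nr_cpus = sysconf(_SC_NPROCESSORS_ONLN);

	/* An explicit CLI option always wins over autodetection. */
	if (opt_concurrency > 0)
		return opt_concurrency;

	/* Rotational storage keeps the old size-based computation. */
	if (rotational)
		return 0;	/* 0 == fall back to the legacy heuristic */

	return nr_cpus > 0 ? nr_cpus : 1;
}

The log sizing works the same way: make the log large enough that the
expected number of concurrent transaction reservations doesn't pile up
on the grant heads.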

*Fortunately* for metadata workloads the logging code is decent about
deduplicating repeated updates, so unless you're doing something truly
nasty, like synchronous direct writes to a heavily modified directory
tree with parent pointers, it takes some effort to overload the SSD.
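
(To illustrate the deduplication point: repeated modifications to the
same logged object within a checkpoint cost only one log write when the
checkpoint commits.  A toy model, nothing like the real CIL code:)

#include <stdbool.h>
#include <stdio.h>

struct log_item {
	const char *name;
	bool dirty;		/* queued for the next checkpoint? */
	int writes;		/* log writes actually issued */
};

/* Repeated modifications just re-mark the item dirty... */
static void modify(struct log_item *li)
{
	li->dirty = true;
}

/* ...and the checkpoint writes each dirty item exactly once. */
static void checkpoint(struct log_item *li)
{
	if (li->dirty) {
		li->writes++;
		li->dirty = false;
	}
}

int main(void)
{
	struct log_item agf = { .name = "AGF 0" };

	for (int i = 0; i < 1000; i++)
		modify(&agf);	/* 1000 updates... */
	checkpoint(&agf);	/* ...one write */

	printf("%s: %d log write(s)\n", agf.name, agf.writes);
	return 0;
}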

(Or a crappy SSD, I guess.  Maybe I'll pull out the Samsung 860 QVO and
see how it does.)

--D

Thread overview: 6+ messages
2024-06-03 20:12 [PATCHSET 3/3] xfsprogs: scale shards on ssds Darrick J. Wong
2024-06-03 20:13 ` [PATCH 1/1] xfs: test scaling of the mkfs concurrency options Darrick J. Wong
2024-06-04  5:09 ` [PATCHSET 3/3] xfsprogs: scale shards on ssds Christoph Hellwig
2024-06-05  0:56   ` Darrick J. Wong [this message]
2024-06-07  5:06     ` Christoph Hellwig
2024-06-07 18:16       ` Darrick J. Wong
