From: "Darrick J. Wong" <darrick.wong@oracle.com>
To: Dave Chinner <david@fromorbit.com>
Cc: linux-xfs@vger.kernel.org
Subject: Re: [XFS SUMMIT] SSD optimised allocation policy
Date: Mon, 18 May 2020 23:32:04 -0700
Message-ID: <20200519063204.GI17627@magnolia>
In-Reply-To: <20200514103454.GL2040@dread.disaster.area>
On Thu, May 14, 2020 at 08:34:54PM +1000, Dave Chinner wrote:
>
> Topic: SSD Optimised allocation policies
>
> Scope:
> Performance
> Storage efficiency
>
> Proposal:
>
> Non-rotational storage is typically very fast. Our allocation
> policies are all, fundamentally, based on very slow storage which
> has extremely high latency between IO to different LBA regions. We
> burn CPU to optimise for minimal seeks to minimise the expensive
> physical movement of disk heads and platter rotation.
>
> We know when the underlying storage is solid state - there's a
> "non-rotational" field in the block device config that tells us the
> storage doesn't need physical seek optimisation. We should make use
> of that.
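A minimal sketch of what a mount-time probe could look like; m_nonrot is
a made-up xfs_mount field for illustration, but m_ddev_targp and the
block layer's blk_queue_nonrot() helper already exist:

/* purely illustrative: record whether the data device is solid state */
static void xfs_detect_nonrot(struct xfs_mount *mp)
{
	struct request_queue	*q;

	q = bdev_get_queue(mp->m_ddev_targp->bt_bdev);
	/* drivers set QUEUE_FLAG_NONROT for solid state devices */
	mp->m_nonrot = blk_queue_nonrot(q);
}

(Userspace tools like mkfs can read the same bit from
/sys/block/<dev>/queue/rotational.)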
>
> My proposal is that we look towards arranging the filesystem
> allocation policies into CPU-optimised silos. We start by making
> filesystems on SSDs with AG counts that are multiples of the CPU
> count in the system (e.g. 4x the number of CPUs) to drive
I guess you and I have been doing this for years with seemingly few ill
effects. ;)
That said, I did encounter a wackass system with 104 CPUs, a 1.4T RAID
array of spinning disks, 229 AGs sized ~6.5GB each, and a 50M log. The
~900 io writers were sinking the system, so clearly some people are still
getting it wrong even with traditional storage. :(
> parallelism at the allocation level, and then associate allocation
> groups with specific CPUs in the system. Hence each CPU has a set of
> allocation groups it selects between for the operations that are run
> on it. Hence allocation is typically local to a specific CPU.
> Optimisation proceeds from the basis of CPU locality optimisation,
> not storage locality optimisation.
I wonder how hard it would be to compile a locality map for storage and
CPUs from whatever numa and bus topology information the kernel already
knows about?
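To make the silo idea concrete, a strawman mapping from the running CPU
to a "home" AG could be as simple as the below; xfs_cpu_to_agno() is
invented for this sketch, and a real policy would rotate within each
CPU's run of AGs rather than always return the first one:

static xfs_agnumber_t
xfs_cpu_to_agno(
	struct xfs_mount	*mp)
{
	unsigned int		cpu = raw_smp_processor_id();
	unsigned int		stride;

	/* carve the AG space into contiguous per-CPU runs, e.g. 4 AGs/CPU */
	stride = max_t(unsigned int, 1,
			mp->m_sb.sb_agcount / num_online_cpus());
	return (cpu * stride) % mp->m_sb.sb_agcount;
}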
> What this allows is processes on different CPUs to never contend for
> allocation resources. Locality of objects just doesn't matter for
> solid state storage, so we gain nothing by trying to group inodes,
> directories, their metadata and data physically close together. We
> want writes that happen at the same time to be physically close
> together so we aggregate them into larger IOs, but we really
> don't care about optimising write locality for best read performance
> (i.e. must be contiguous for sequential access) for this storage.
>
> Further, we can look at faster allocation strategies - we don't need
> to find the "nearest" if we don't have a contiguous free extent to
> allocate into, we just want the one that costs the least CPU to
> find. This is because solid state storage is so fast that filesystem
> performance is CPU limited, not storage limited. Hence we need to
> think about allocation policies differently and start optimising
> them for minimum CPU expenditure rather than best layout.
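As a trivial illustration of "cheapest wins", candidate selection could
weigh nothing but the work spent finding each extent; struct
xfs_alloc_cand and the lookup counter are invented for this example:

struct xfs_alloc_cand {
	xfs_agblock_t		bno;
	xfs_extlen_t		len;
	unsigned int		lookups;	/* btree comparisons spent finding it */
};

/* prefer the candidate that was cheaper to find; break ties on length */
static inline bool
xfs_alloc_cand_cheaper(
	const struct xfs_alloc_cand	*a,
	const struct xfs_alloc_cand	*b)
{
	if (a->lookups != b->lookups)
		return a->lookups < b->lookups;
	return a->len > b->len;
}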
>
> Other things to discuss include:
> - how do we convert metadata structures to write-once style
> behaviour rather than overwrite in place?
(Hm?)
> - extremely large block sizes for metadata (e.g. 4MB) to
> align better with SSD erase block sizes
If we had metadata blocks that size, I'd advocate for studying how we
could restructure the btree to log updates in the slack space and only
checkpoint lower in the tree when necessary.
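A strawman on-disk shape for that, purely to show where the log would
live (none of these structures exist today and the field list is far
from complete):

struct xfs_bigblock_hdr {
	__be32			magic;
	__be32			nrecs;		/* records in the checkpointed area */
	__be32			log_head;	/* offset of oldest live log entry */
	__be32			log_tail;	/* offset past the newest log entry */
	/* ... plus the usual v5 crc/uuid/owner/lsn fields ... */
};

/*
 * Updates append small log entries into the tail slack of the big block;
 * only when the slack fills up does the record area get rewritten
 * (checkpointed) and the log reset.
 */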
> - what parts of the allocation algorithms don't we need
Brian reworked part of the allocator a couple of cycles ago to reduce
the long tail latency of chasing through one free space btree when the
other one would have given it a quick answer; how beneficial has that
been? Could it be more aggressive?
(Will have to ponder allocation issues in more depth when I'm more
awake..)
> - are we better off with huge numbers of small AGs rather
> than fewer large AGs?
There's probably some point of diminishing returns, but this seems
likely. Has anyone studied this recently?
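(For experiments the data points are cheap to generate: mkfs.xfs -d
agcount=N lets us format the same device with anything from a handful of
AGs to thousands and measure where the crossover sits.)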
--D
>
> --
> Dave Chinner
> david@fromorbit.com