From: wangyugui <wangyugui@e16-tech.com>
To: linux-btrfs@vger.kernel.org
Cc: kreijack@libero.it, wangyugui <wangyugui@e16-tech.com>
Subject: [PATCH 0/4] btrfs: basic tier support
Date: Thu, 29 Oct 2020 13:35:52 +0800
Message-ID: <20201029053556.10619-1-wangyugui@e16-tech.com>
Storage tiering is a large feature; this series provides only basic support.
1) Add tier score to device
We use a single score value to define the tier level of a device.
Different scores mean different tiers, and a bigger score means a faster device:
DAX device(dax=1)
SSD device(rotational=0)
HDD device(rotational=1)
TODO: detect the bus type (DIMM, NVMe, SCSI, SATA, Virtio, ...)
TODO: a user-assigned property (perhaps refactoring the coming 'read_preferred' property?) to
set the max score for some not-well-supported cases.
In most cases, only 1 or 2 tiers are used at the same time, so we group devices into the
top tier and the other tier(s).
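The scoring described above can be sketched as follows. This is a userspace illustration under the cover letter's description (DAX faster than SSD faster than HDD, top tier = devices with the maximum score); the struct, constants, and function names are hypothetical, not the actual patch's code:

```c
#include <stdbool.h>

/* Illustrative scores: a bigger score means a faster device. */
#define TIER_SCORE_HDD  1   /* rotational=1 */
#define TIER_SCORE_SSD  2   /* rotational=0 */
#define TIER_SCORE_DAX  3   /* dax=1 */

/* Hypothetical stand-in for the per-device state the series adds. */
struct tier_device {
	bool dax;
	bool rotational;
};

static int tier_score(const struct tier_device *dev)
{
	if (dev->dax)
		return TIER_SCORE_DAX;
	if (!dev->rotational)
		return TIER_SCORE_SSD;
	return TIER_SCORE_HDD;
}

/* Devices holding the filesystem's maximum score form the "top tier";
 * everything slower is grouped into the "other tier(s)". */
static bool is_top_tier(const struct tier_device *dev, int max_score)
{
	return tier_score(dev) == max_score;
}
```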
2) tiering data and metadata
This is based on the patch 'btrfs: add ssd_metadata mode' from Goffredo Baroncelli <kreijack@libero.it>.
We define a mount option to tier data and metadata onto slower and faster device(s).
When there is only one tier, tiering is automatically disabled.
mount option: tier[={off|auto|data_tier_X/metadata_tier_Y}]
The default is 'tier=auto'; bare 'tier' is the same as 'tier=auto', which resolves to 'tier=OF/TF'.
The policies for choosing the device(s):
Top-tier-Only(TO) : metadata only use top-tier device.
Top-tier-Firstly(TF) : metadata use top-tier device firstly.
Other-tier-First(OF) : data use other-tier device firstly.
Other-tier-Only(OO) : data only use other-tier device.
data_tier_X is the policy for data; OF and OO are supported.
metadata_tier_Y is the policy for metadata and system chunks; TF is supported.
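The mount-option grammar above can be sketched as a small parser. This is a userspace sketch of the 'X/Y' form with the restrictions the cover letter states (data supports OF/OO, metadata supports TF); the enum and function names are hypothetical:

```c
#include <string.h>

/* The four policies named above. */
enum tier_policy {
	TIER_TO,   /* Top-tier-Only    */
	TIER_TF,   /* Top-tier-Firstly */
	TIER_OF,   /* Other-tier-First */
	TIER_OO,   /* Other-tier-Only  */
	TIER_BAD,  /* unrecognized / unsupported */
};

static enum tier_policy parse_policy(const char *s)
{
	if (!strcmp(s, "TO")) return TIER_TO;
	if (!strcmp(s, "TF")) return TIER_TF;
	if (!strcmp(s, "OF")) return TIER_OF;
	if (!strcmp(s, "OO")) return TIER_OO;
	return TIER_BAD;
}

/* Parse "X/Y" from tier=X/Y, e.g. "OF/TF" -> data=OF, metadata=TF.
 * Returns 0 on success, -1 on a malformed or unsupported combination. */
static int parse_tier_opt(const char *opt, enum tier_policy *data,
			  enum tier_policy *meta)
{
	char buf[16];
	char *slash;

	if (strlen(opt) >= sizeof(buf))
		return -1;
	strcpy(buf, opt);
	slash = strchr(buf, '/');
	if (!slash)
		return -1;
	*slash = '\0';
	*data = parse_policy(buf);
	*meta = parse_policy(slash + 1);
	/* Data supports only OF/OO; metadata supports only TF. */
	if ((*data != TIER_OF && *data != TIER_OO) || *meta != TIER_TF)
		return -1;
	return 0;
}

/* Helper: parsed data policy, or TIER_BAD if the option is rejected. */
static enum tier_policy tier_opt_data(const char *opt)
{
	enum tier_policy d, m;

	if (parse_tier_opt(opt, &d, &m))
		return TIER_BAD;
	return d;
}
```

With this shape, the default 'tier=auto' would simply be mapped to "OF/TF" before parsing.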
3) tier-aware mirror path select
This feature helps read performance, so it is enabled even when tier=off.
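The selection above can be sketched as an argmax over the tier scores of the devices holding copies of a block: read from the fastest mirror. This is a userspace illustration, not the patch's actual selection logic:

```c
/* Given the tier scores of the devices holding each mirror copy,
 * pick the index of the fastest copy (highest score wins; ties keep
 * the first candidate). */
static int select_mirror(const int *scores, int num_copies)
{
	int best = 0;

	for (int i = 1; i < num_copies; i++)
		if (scores[i] > scores[best])
			best = i;
	return best;
}
```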
4) tier-aware free space calc
Detect some cases where the free space for data is 0 because of the data tier policy.
Full support is still TODO.
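The case above can be illustrated with a simple accounting sketch: under a data policy of Other-tier-Only (OO), unallocated top-tier bytes are unusable for data, so free space for data can be 0 even while the filesystem has free bytes overall. The function and parameter names here are hypothetical:

```c
/* Free bytes usable for data, split by tier group. Under an OO data
 * policy, data may only land on other-tier devices, so top-tier free
 * space must be excluded from the data free-space figure. */
static unsigned long long data_free_bytes(unsigned long long top_free,
					  unsigned long long other_free,
					  int data_policy_is_oo)
{
	if (data_policy_is_oo)
		return other_free;  /* top-tier bytes are unusable for data */
	return top_free + other_free;
}
```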
5) TODO: per-subvol tiering policy and then per-subvol profile (RAID)
A per-subvolume tiering policy, and then a per-subvolume data profile (RAID), are needed for
full tier support; the data policy would then also support TO and TF, in addition to OF and OO.
For now, as a workaround, we can keep them as two separate btrfs filesystems using disk partitions and
'btrfs filesystem resize'.
wangyugui (4):
btrfs: add tier score to device
btrfs: tiering data and metadata
btrfs: tier-aware mirror path select
btrfs: tier-aware free space calc
fs/btrfs/ctree.h | 17 +++++++
fs/btrfs/super.c | 90 ++++++++++++++++++++++++++++++++++
fs/btrfs/volumes.c | 119 +++++++++++++++++++++++++++++++++++++++++++--
fs/btrfs/volumes.h | 5 ++
4 files changed, 228 insertions(+), 3 deletions(-)
--
2.29.1