From: Marc MERLIN <marc@merlins.org>
To: Roman Mamedov <rm@romanrm.net>,
Zygo Blaxell <ce3g8jdj@umail.furryterror.org>
Cc: Andrea Gelmini <andrea.gelmini@gmail.com>,
Andrei Borzenkov <arvidjaar@gmail.com>,
Josef Bacik <josef@toxicpanda.com>,
Chris Murphy <lists@colorremedies.com>,
Qu Wenruo <quwenruo.btrfs@gmx.com>,
"linux-btrfs@vger.kernel.org" <linux-btrfs@vger.kernel.org>
Subject: Re: Suggestions for building new 44TB Raid5 array
Date: Mon, 13 Jun 2022 21:51:48 -0700 [thread overview]
Message-ID: <20220614045148.GU1664812@merlins.org> (raw)
In-Reply-To: <YqeZJ1j2ZYGpvY7v@hungrycats.org> <20220613232907.6d71be87@nvm> <Yqd9sIhOiuCSg99Z@hungrycats.org> <20220613230625.78631b8a@nvm>
Thanks to you both for your kind help. If I'm rebuilding everything,
I might as well future-proof it as well as possible.
On Mon, Jun 13, 2022 at 11:06:25PM +0500, Roman Mamedov wrote:
> What I mean is bcache in this way stays bcache-without-a-cache forever, which
> feels odd; it still goes through the bcache code, has the module loaded, keeps
> the device name, etc;
Fair point. I have done that, but I see what you're saying.
> Whereas in LVM caching is a completely optional side-feature, and many people
> would just run LVM in any case, not even thinking about enabling cache. LVM is
> basically "the next generation" of disk partitions, with way more features,
> but not much more overhead.
Fair enough. I have used LVM for many years, since the now-defunct lvm1,
and I ran into a fair number of issues, some with reliability, some with
performance. That was many, many years ago though, so I'll take your word
for it that it's a lot more lightweight and safe now.
Actually, I think I stopped using LVM around the time I started using
btrfs, because btrfs subvolumes were close enough to LVM LVs for my use.
But yes, I understand that different LVs are actually separate
filesystems, and that you can do extra things like caching.
I also have another array with so many files and snapshots that I split
it into separate LVs with dm-thin, so as not to stress the btrfs code
too much (which I'm told gets unhappy once you have hundreds of
snapshots).
On Mon, Jun 13, 2022 at 02:10:56PM -0400, Zygo Blaxell wrote:
> You can trivially convert from lvmcache to plain LV on the fly. It's a
> pretty essential capability for long-term maintenance, since you can't
> move or resize the LV while it's cached.
>
> If you have a LV and you want it to be cached with bcache, you can hack
> up the LVM configuration after the fact with https://github.com/g2p/blocks
Got it, thanks much.
On Mon, Jun 13, 2022 at 11:29:07PM +0500, Roman Mamedov wrote:
> It is a question of whether you want to cache encrypted, or plain-text data. I
> guess the former should be preferable, for a complete peace-of-mind against
> data forensics vs the cache device, but with a toll on performance, due to the
> need to re-decrypt even the cache hits each time.
Right, I know that tradeoff. Also, LUKS makes things a bit more complicated
if you want to grow the FS.
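(From memory, and with hypothetical device names, the grow sequence with
LUKS under LVM goes bottom to top; if LUKS sits above the LV instead,
the cryptsetup resize step moves to after lvextend:)

```shell
# Grow each layer of the stack in order, from the md array up.
mdadm --grow /dev/md0 --size=max        # or add a disk and reshape
cryptsetup resize cryptmd               # expand LUKS mapping to fill /dev/md0
pvresize /dev/mapper/cryptmd
lvextend -l +100%FREE myvg/data
btrfs filesystem resize max /mnt/data
```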
> In case of caching encrypted, it's:
>
> mdraid => PV => LV => LUKS
>                 |
>              (cache)
>
> Otherwise:
>
> mdraid => LUKS => PV => LV
>                         |
>                      (cache)
Right. I'll probably do that.
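(A rough sketch of assembling that second stack, again with made-up
device and volume names; the cache LV gets attached afterwards, per
Zygo's recipe below:)

```shell
# mdraid => LUKS => PV => LV: encrypt the whole array, put LVM inside.
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[bcde]
cryptsetup luksFormat /dev/md0
cryptsetup open /dev/md0 cryptmd
pvcreate /dev/mapper/cryptmd
vgcreate myvg /dev/mapper/cryptmd
lvcreate -n data -l 80%FREE myvg
mkfs.btrfs /dev/myvg/data
```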
On Mon, Jun 13, 2022 at 04:08:07PM -0400, Zygo Blaxell wrote:
> Add a cache LV to an existing LV with:
>
> lvcreate $vg -n $meta -L 1G $device
> lvcreate $vg -n $pool -l 90%PVS $device
> lvconvert -f --type cache-pool --poolmetadata $vg/$meta $vg/$pool
> lvconvert -f --type cache --cachepool $vg/$pool $vg/$data --cachemode writethrough
>
> Uncache with:
>
> lvconvert -f --uncache $vg/$data
>
> Note that 'lvconvert' will flush the entire cache back to the backing
> store during uncache at minimum IO priority, so it will take some time
> and can be prolonged indefinitely by a continuous IO workload on top.
> Also, the uncache operation will propagate any corruption in the SSD
> cache back to the HDD LV, even in writethrough mode.
Thanks much for the heads up.
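(Noting for my own reference: the cache occupancy and flush backlog can
be watched via the lvs cache reporting fields while the uncache runs,
using the same $vg/$data naming as above:)

```shell
# Show cache pool usage and how many blocks are still dirty.
lvs -a -o +cache_total_blocks,cache_used_blocks,cache_dirty_blocks $vg
# Watch the flush progress during 'lvconvert --uncache':
watch -n 10 "lvs -o lv_name,cache_dirty_blocks $vg/$data"
```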
Best,
Marc
--
"A mouse is a device used to point at the xterm you want to type in" - A.S.R.
Home page: http://marc.merlins.org/ | PGP 7F55D5F27AAF9D08
Thread overview: 24+ messages
2022-06-11 4:51 Suggestions for building new 44TB Raid5 array Marc MERLIN
2022-06-11 9:30 ` Roman Mamedov
[not found] ` <CAK-xaQYc1PufsvksqP77HMe4ZVTkWuRDn2C3P-iMTQzrbQPLGQ@mail.gmail.com>
2022-06-11 14:52 ` Marc MERLIN
2022-06-11 17:54 ` Roman Mamedov
2022-06-12 17:31 ` Marc MERLIN
2022-06-12 21:21 ` Roman Mamedov
2022-06-13 17:46 ` Marc MERLIN
2022-06-13 18:06 ` Roman Mamedov
2022-06-14 4:51 ` Marc MERLIN [this message]
2022-06-13 18:10 ` Zygo Blaxell
2022-06-13 18:13 ` Marc MERLIN
2022-06-13 18:29 ` Roman Mamedov
2022-06-13 20:08 ` Zygo Blaxell
2022-06-14 6:36 ` Torbjörn Jansson
2022-06-20 20:37 ` Andrea Gelmini
2022-06-21 5:26 ` Zygo Blaxell
2022-07-06 9:09 ` Andrea Gelmini
2022-06-11 23:44 ` Zygo Blaxell
2022-06-14 11:03 ` ronnie sahlberg
[not found] ` <5e1733e6-471e-e7cb-9588-3280e659bfc2@aqueos.com>
2022-06-20 15:01 ` Marc MERLIN
2022-06-20 15:52 ` Ghislain Adnet
2022-06-20 16:27 ` Marc MERLIN
2022-06-20 17:02 ` Andrei Borzenkov
2022-06-20 17:26 ` Marc MERLIN