From: Roman Mamedov <rm@romanrm.net>
To: Marc MERLIN <marc@merlins.org>
Cc: Andrei Borzenkov <arvidjaar@gmail.com>,
Zygo Blaxell <ce3g8jdj@umail.furryterror.org>,
Josef Bacik <josef@toxicpanda.com>,
"linux-btrfs@vger.kernel.org" <linux-btrfs@vger.kernel.org>,
Chris Murphy <lists@colorremedies.com>,
Qu Wenruo <quwenruo.btrfs@gmx.com>
Subject: Re: Suggestions for building new 44TB Raid5 array
Date: Sat, 11 Jun 2022 14:30:33 +0500
Message-ID: <20220611143033.56ffa6af@nvm>
In-Reply-To: <20220611045120.GN22722@merlins.org>
On Fri, 10 Jun 2022 21:51:20 -0700
Marc MERLIN <marc@merlins.org> wrote:
> Kernel will be 5.16. Filesystem will be 24TB and contain mostly bigger
> files (100MB to 10GB).
> 2) echo 0fb96f02-d8da-45ce-aba7-070a1a8420e3 > /sys/block/bcache64/bcache/attach
> gargamel:/dev# cat /sys/block/md7/bcache/cache_mode
> [writethrough] writeback writearound none
Maybe try LVM Cache this time?
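For reference, a rough sketch of what an LVM cache setup could look like in place of bcache. Volume group, LV names, sizes, and the SSD device path are all placeholders, not taken from your setup; adjust to taste:

```shell
# Create cache data and metadata LVs on the SSD (names/sizes are examples)
lvcreate -L 200G -n cache0 vg0 /dev/ssd
lvcreate -L 2G -n cache0meta vg0 /dev/ssd

# Combine them into a cache pool
lvconvert --type cache-pool --poolmetadata vg0/cache0meta vg0/cache0

# Attach the pool to the origin LV holding the array;
# writethrough is the safer default, like your bcache setting
lvconvert --type cache --cachepool vg0/cache0 --cachemode writethrough vg0/data
```

One upside over bcache is that the cache can later be detached cleanly with `lvconvert --uncache vg0/data`, leaving a plain LV behind.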
> 3) cryptsetup luksFormat --align-payload=2048 -s 256 -c aes-xts-plain64 /dev/bcache64
> 4) cryptsetup luksOpen /dev/bcache64 dshelf1
What's the threat scenario for LUKS on the array?
A major one for me would be not having to RMA a disk with all my data
still on the platters. But with RAID5, a single disk by itself would not
contain easily discernible or usable data. Or if you're protecting against
unauthorized access to the entire array, then never mind.
> 5) mkfs.btrfs -m dup -L dshelf1 /dev/mapper/dshelf1
Personally I have switched from Btrfs on MD to individual disks and MergerFS.
The rationale for no RAID is the simplicity and resilience of the individual
single-disk filesystems, and that anything important or not easily
re-obtainable is backed up anyway; so the protection from single-disk
failures is not as important, compared to the introduced complexity and the
chance of losing the entire huge FS (like you had).
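To illustrate the MergerFS approach: the individual single-disk filesystems get pooled into one mount via fstab. The mountpoints and options below are just an example of my kind of setup, not a recommendation for yours:

```shell
# /etc/fstab — pool /mnt/disk1..N (each its own Btrfs) into one tree.
# category.create=mfs places new files on the disk with most free space;
# minfreespace stops a nearly-full disk from receiving new files.
/mnt/disk*  /srv/pool  fuse.mergerfs  defaults,allow_other,category.create=mfs,minfreespace=50G  0 0
```

If one disk dies, only the files that happened to live on it are affected; the rest of the pool stays readable.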
--
With respect,
Roman
Thread overview: 24+ messages
2022-06-11 4:51 Suggestions for building new 44TB Raid5 array Marc MERLIN
2022-06-11 9:30 ` Roman Mamedov [this message]
[not found] ` <CAK-xaQYc1PufsvksqP77HMe4ZVTkWuRDn2C3P-iMTQzrbQPLGQ@mail.gmail.com>
2022-06-11 14:52 ` Marc MERLIN
2022-06-11 17:54 ` Roman Mamedov
2022-06-12 17:31 ` Marc MERLIN
2022-06-12 21:21 ` Roman Mamedov
2022-06-13 17:46 ` Marc MERLIN
2022-06-13 18:06 ` Roman Mamedov
2022-06-14 4:51 ` Marc MERLIN
2022-06-13 18:10 ` Zygo Blaxell
2022-06-13 18:13 ` Marc MERLIN
2022-06-13 18:29 ` Roman Mamedov
2022-06-13 20:08 ` Zygo Blaxell
2022-06-14 6:36 ` Torbjörn Jansson
2022-06-20 20:37 ` Andrea Gelmini
2022-06-21 5:26 ` Zygo Blaxell
2022-07-06 9:09 ` Andrea Gelmini
2022-06-11 23:44 ` Zygo Blaxell
2022-06-14 11:03 ` ronnie sahlberg
[not found] ` <5e1733e6-471e-e7cb-9588-3280e659bfc2@aqueos.com>
2022-06-20 15:01 ` Marc MERLIN
2022-06-20 15:52 ` Ghislain Adnet
2022-06-20 16:27 ` Marc MERLIN
2022-06-20 17:02 ` Andrei Borzenkov
2022-06-20 17:26 ` Marc MERLIN