From: Marc MERLIN <marc@merlins.org>
To: "Austin S. Hemmelgarn" <ahferroin7@gmail.com>
Cc: Qu Wenruo <quwenruo.btrfs@gmx.com>,
Su Yue <suy.fnst@cn.fujitsu.com>,
linux-btrfs@vger.kernel.org
Subject: Re: how to best segment a big block device in resizeable btrfs filesystems?
Date: Mon, 2 Jul 2018 12:40:15 -0700
Message-ID: <20180702194015.GC9859@merlins.org>
In-Reply-To: <8de54b29-c718-0230-09b2-f849e3ad01df@gmail.com>
On Mon, Jul 02, 2018 at 02:35:19PM -0400, Austin S. Hemmelgarn wrote:
> >I kind of liked the thin provisioning idea because it's hands off,
> >which is appealing. Any reason against it?
> No, not currently, except that it adds a whole lot more stuff between
> BTRFS and whatever layer is below it. That increase in what's being
> done adds some overhead (it's noticeable on 7200 RPM consumer SATA
> drives, but not on decent consumer SATA SSDs).
>
> There used to be issues running BTRFS on top of LVM thin targets which
> had zero mode turned off, but AFAIK, all of those problems were fixed
> long ago (before 4.0).
I see, thanks for the heads up.
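For my own notes, if I do go the thin route I'll create the pool with
zeroing explicitly enabled rather than relying on the default; roughly
something like this, where vg0, the LV names and the sizes are all
placeholders:

  # thin pool with chunk zeroing turned on (-Z y)
  lvcreate --type thin-pool -L 500G -Z y -n thinpool vg0
  # thinly provisioned volume carved out of that pool
  lvcreate -V 4T --thinpool vg0/thinpool -n backups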
> >Does LVM do built-in raid5 now? Is it as good/trustworthy as mdadm
> >raid5?
> Actually, it uses MD's RAID5 implementation as a back-end. Same for
> RAID6, and optionally for RAID0, RAID1, and RAID10.
Ok, that makes me feel a bit better :)
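If I ever did want LVM to handle the raid5 itself, it looks like the
syntax would be something along these lines (vg0, the LV name and the
size are made up):

  # raid5 LV with 3 data stripes (4 PVs total), using the MD code underneath
  lvcreate --type raid5 -i 3 -L 1T -n data vg0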
> >But yeah, if it's incompatible with thin provisioning, it's not that
> >useful.
> It's technically not incompatible, just a bit of a pain. Last time I
> tried to use it, you had to jump through hoops to repair a damaged RAID
> volume that was serving as an underlying volume in a thin pool, and it
> required keeping the thin pool offline for the entire duration of the
> rebuild.
Argh, not good :( / thanks for the heads up.
> If you do go with thin provisioning, I would encourage you to make
> certain to call fstrim on the BTRFS volumes on a semi-regular basis so
> that the thin pool doesn't get filled up with old unused blocks,
That's a very good point/reminder, thanks for that. I guess it's like
running on an SSD :)
> preferably when you are 100% certain that there are no ongoing writes on
> them (trimming blocks on BTRFS gets rid of old root trees, so it's a bit
> dangerous to do it while writes are happening).
Argh, that will be harder, but I'll try.
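I'll probably just put something like this in /etc/crontab for a quiet
window when the backup jobs are known to be done (the mount point is a
placeholder):

  # weekly trim of the btrfs backup pool, early Sunday morning
  30 5 * * 0  root  /sbin/fstrim -v /mnt/btrfs_pool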
Given what you said, it sounds like I'll still be best off with separate
layers to avoid the rebuild problem you mentioned.
So it'll be
swraid5 / dmcrypt / bcache / lvm dm thin / btrfs
Hopefully that will work well enough.
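In case anyone wants to sanity-check the details, the rough build order
I have in mind is below; device names and sizes are obviously
placeholders:

  # software raid5 across the spinning disks
  mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[bcde]1
  # dmcrypt on top of the array
  cryptsetup luksFormat /dev/md0
  cryptsetup open /dev/md0 md0crypt
  # bcache: encrypted array as backing device, SSD as cache
  make-bcache -B /dev/mapper/md0crypt
  make-bcache -C /dev/nvme0n1
  # (then attach the cache set via /sys/block/bcache0/bcache/attach)
  # LVM thin pool on the bcache device
  pvcreate /dev/bcache0
  vgcreate vg0 /dev/bcache0
  lvcreate --type thin-pool -L 10T -n thinpool vg0
  # one thin volume per filesystem, btrfs on top
  lvcreate -V 15T --thinpool vg0/thinpool -n backups
  mkfs.btrfs /dev/vg0/backups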
Thanks,
Marc
--
"A mouse is a device used to point at the xterm you want to type in" - A.S.R.
Microsoft is to operating systems ....
.... what McDonalds is to gourmet cooking
Home page: http://marc.merlins.org/