From: Roman Mamedov <rm@romanrm.net>
To: Nix <nix@esperi.org.uk>
Cc: John Stoffel <john@stoffel.org>,
Mateusz Korniak <mateusz-lists@ant.gliwice.pl>,
Ron Leach <ronleach@tesco.net>,
linux-raid@vger.kernel.org
Subject: Re: Recovery on new 2TB disk: finish=7248.4min (raid1)
Date: Tue, 2 May 2017 02:46:57 +0500
Message-ID: <20170502024657.6d33fd88@natsu>
In-Reply-To: <8737cojn54.fsf@esperi.org.uk>
On Mon, 01 May 2017 22:13:59 +0100
Nix <nix@esperi.org.uk> wrote:
> > (having to fiddle with "echo > /sys/..." and "cat /sys/..." is not the state
> > of something you'd call a finished product).
>
> You mean, like md? :)
You must be kidding. On the contrary, I was presenting md as an example of how
it should be done, with its all-encompassing and extremely capable 'mdadm'
tool -- contrast that with the complete lack of a similar tool for bcache.
> I'd be more worried about the complexity required to just figure out the
> space needed for half a dozen sets of lvmcache metadata and cache
> filesystems.
Metadata can be created implicitly and auto-managed in recent LVM versions:
http://fibrevillage.com/storage/460-how-to-create-lvm-cache-logical-volume
("Automatic pool metadata LV"). If not, the rule of thumb suggested everywhere
is 1/1000 of the cache volume size; I doubled that just in case, and it looks
like I didn't have to, as my metadata partitions are only about 9.5% full each.
As for half a dozen sets, I'd reconsider the need for those, as well as the
entire fast/slow HDD tracks separation: just SSD-cache everything and let it
figure out on its own not to cache the streaming writes of your video
transcodes, or even the bulk writes during your compile tests (while still
caching the filesystem metadata).
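A minimal sketch of the cache-everything setup, assuming a recent LVM and
entirely hypothetical VG/LV/device names (this is an admin recipe, not runnable
without real block devices and root):

```shell
# Hypothetical names throughout. Create an SSD-backed cache pool; recent
# LVM creates and manages the pool metadata LV automatically.
lvcreate --type cache-pool -L 64G -n fastpool vg0 /dev/ssd1

# Attach the pool to the existing bulk LV, caching everything on it.
lvconvert --type cache --cachepool vg0/fastpool vg0/bulk

# Verify: the cached LV should now list fastpool as its pool.
lvs -a -o name,size,pool_lv vg0
```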
However, a world of pain begins if you want to have multiple guest VMs, each
with its disk as a separate LV. One solution (which doesn't sound too clean,
but could perhaps work) is stacked LVM, i.e. a PV of a different volume group
made on top of a cached LV.
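The stacked-LVM idea, as a hedged sketch with hypothetical names (again needs
real devices and root; newer LVM may also need scanning of LVs as PVs enabled
in lvm.conf -- an assumption on my part, check your version):

```shell
# Hypothetical names. Use the cached LV itself as a PV for a second,
# independent volume group, then carve per-VM disks out of that VG.
pvcreate /dev/vg0/bulk
vgcreate vg_guests /dev/vg0/bulk

# Each guest gets its own LV inside the stacked VG.
lvcreate -L 40G -n vm1-disk vg_guests
lvcreate -L 40G -n vm2-disk vg_guests
```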
--
With respect,
Roman
2017-04-26 21:57 Recovery on new 2TB disk: finish=7248.4min (raid1) Ron Leach
2017-04-27 14:25 ` John Stoffel
2017-04-27 14:43 ` Reindl Harald
2017-04-28 7:05 ` Ron Leach
2017-04-27 14:54 ` Mateusz Korniak
2017-04-27 19:03 ` John Stoffel
2017-04-27 19:42 ` Reindl Harald
2017-04-28 7:30 ` Mateusz Korniak
2017-04-30 12:04 ` Nix
2017-04-30 13:21 ` Roman Mamedov
2017-04-30 16:10 ` Nix
2017-04-30 16:47 ` Roman Mamedov
2017-05-01 21:13 ` Nix
2017-05-01 21:44 ` Anthony Youngman
2017-05-01 21:46 ` Roman Mamedov [this message]
2017-05-01 21:53 ` Anthony Youngman
2017-05-01 22:03 ` Roman Mamedov
2017-05-02 6:10 ` Wols Lists
2017-05-02 10:02 ` Nix
2017-05-01 23:26 ` Nix
2017-04-30 17:16 ` Wols Lists
2017-05-01 20:12 ` Nix
2017-04-27 14:58 ` Mateusz Korniak
2017-04-27 19:01 ` Ron Leach
2017-04-28 7:06 ` Mateusz Korniak