From: "Austin S. Hemmelgarn" <ahferroin7@gmail.com>
To: Ferry Toth <ftoth@exalondelft.nl>, linux-btrfs@vger.kernel.org
Subject: Re: Hot data tracking / hybrid storage
Date: Tue, 31 May 2016 08:21:48 -0400
Message-ID: <9340ef1a-6746-1c3c-88f9-56fa2a0b7f0e@gmail.com>
In-Reply-To: <nifkcs$ovp$1@ger.gmane.org>
On 2016-05-29 16:45, Ferry Toth wrote:
> On Sun, 29 May 2016 12:33:06 -0600, Chris Murphy wrote:
>
>> On Sun, May 29, 2016 at 12:03 PM, Holger Hoffstätte
>> <holger@applied-asynchrony.com> wrote:
>>> On 05/29/16 19:53, Chris Murphy wrote:
>>>> But I'm skeptical of bcache using a hidden area historically for the
>>>> bootloader, to put its device metadata. I didn't realize that was the
>>>> case. Imagine if LVM were to stuff metadata into the MBR gap, or
>>>> mdadm. Egads.
>>>
>>> On the matter of bcache in general this seems noteworthy:
>>>
>>> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=4d1034eb7c2f5e32d48ddc4dfce0f1a723d28667
>>>
>>> bummer..
>>
>> Well it doesn't mean no one will take it, just that no one has taken it
>> yet. But the future of SSD caching may only be with LVM.
>>
>> --
>> Chris Murphy
>
> I think all the above posts underline exactly my point:
>
> Instead of using an SSD cache (be it bcache or dm-cache) it would be much
> better to have the btrfs allocator be aware of SSDs in the pool and
> prioritize allocations to the SSD to maximize performance.
>
> This would make it easy to add more SSDs or replace worn-out ones,
> without the mentioned headaches. After all, adding/replacing drives in a
> pool is one of btrfs's biggest advantages.
It would still need to be pretty configurable, and even then it would
still be a niche use case. It would also need automatic migration to be
practical beyond a certain point, since most people using regular
computers outside of corporate environments don't have the 'access
frequency decreases over time' pattern that the manual migration scheme
you suggested would be good for.
I think overall the most useful way of doing it would be something like
the L2ARC on ZFS, which is essentially swap space for the page-cache,
put on an SSD.
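For comparison, both existing approaches mentioned in this thread can be
set up today. A rough sketch, with hypothetical pool, VG, LV, and device
names (these commands reformat/convert storage, so treat them as
illustration only):

```shell
# ZFS L2ARC: attach an SSD partition as a cache device to an existing
# pool. Pool name "tank" and device /dev/sdb1 are made up.
zpool add tank cache /dev/sdb1

# LVM dm-cache: build a cache pool on SSD space already in VG "vg0" and
# attach it to the HDD-backed LV "data" (names made up; assumes /dev/sdc
# is the SSD PV in the VG).
lvcreate --type cache-pool -L 40G -n fastcache vg0 /dev/sdc
lvconvert --type cache --cachepool vg0/fastcache vg0/data
```

The LVM route has the advantage Chris mentioned above: the cache can be
detached (lvconvert --uncache) and the SSD swapped out without touching
the filesystem underneath.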
Thread overview: 26+ messages
2016-05-15 12:12 Hot data tracking / hybrid storage Ferry Toth
2016-05-15 21:11 ` Duncan
2016-05-15 23:05 ` Kai Krakow
2016-05-17 6:27 ` Ferry Toth
2016-05-17 11:32 ` Austin S. Hemmelgarn
2016-05-17 18:33 ` Kai Krakow
2016-05-18 22:44 ` Ferry Toth
2016-05-19 18:09 ` Kai Krakow
2016-05-19 18:51 ` Austin S. Hemmelgarn
2016-05-19 21:01 ` Kai Krakow
2016-05-20 11:46 ` Austin S. Hemmelgarn
2016-05-19 23:23 ` Henk Slager
2016-05-20 12:03 ` Austin S. Hemmelgarn
2016-05-20 17:02 ` Ferry Toth
2016-05-20 17:59 ` Austin S. Hemmelgarn
2016-05-20 21:31 ` Henk Slager
2016-05-29 6:23 ` Andrei Borzenkov
2016-05-29 17:53 ` Chris Murphy
2016-05-29 18:03 ` Holger Hoffstätte
2016-05-29 18:33 ` Chris Murphy
2016-05-29 20:45 ` Ferry Toth
2016-05-31 12:21 ` Austin S. Hemmelgarn [this message]
2016-06-01 10:45 ` Dmitry Katsubo
2016-05-20 22:26 ` Henk Slager
2016-05-23 11:32 ` Austin S. Hemmelgarn
2016-05-16 11:25 ` Austin S. Hemmelgarn