From: "Austin S. Hemmelgarn" <ahferroin7@gmail.com>
To: Pat Sailor <psalleetsile@gmail.com>,
	Btrfs BTRFS <linux-btrfs@vger.kernel.org>
Subject: Re: SSD caching an existing btrfs raid1
Date: Tue, 19 Sep 2017 11:47:24 -0400	[thread overview]
Message-ID: <2a208f7c-a140-3ac6-12c6-2e953f3f96ec@gmail.com> (raw)
In-Reply-To: <4ed24e33-09e1-c03a-912c-9d1b2bbdc835@gmail.com>

On 2017-09-19 11:30, Pat Sailor wrote:
> Hello,
> 
> I have a half-filled raid1 on top of six spinning devices. Now I have 
> come into a spare SSD I'd like to use for caching, if possible without 
> having to rebuild or, failing that, without having to renounce to btrfs 
> and flexible reshaping.
> 
> I've been reading about several of the options out there; I thought 
> EnhanceIO would be the simplest bet, but unfortunately I couldn't get 
> it to build with my recent kernel (its last commits are from years ago).
As a general rule, avoid things that are out of tree if you want any 
chance of support here.  In the case of a third party module like 
EnhanceIO, you'll likely get asked to retest with a kernel without that 
module loaded if you run into a bug (because the module itself is suspect).
> 
> Failing that, I read that lvmcache could be the way to go. However, I 
> can't think of a way of setting it up in which I retain the ability to 
> add/remove/replace drives as I can do now with pure btrfs; if I opted to 
> drop btrfs to go to ext4 I'd still have to take the filesystem offline 
> to shrink it. Not a frequent occurrence I hope, but by now I'm used to 
> being able to keep working while I reshape things in btrfs, and it's 
> better if I can avoid long downtimes.
> 
> Is what I want doable at all? Thanks in advance for any 
> suggestions or experiences.
dm-cache (or lvmcache, as the LVM2 developers want you to call it, 
despite it not being an LVM-specific thing) would work fine.  It won't 
prevent you from adding and removing devices; it just makes the process 
more complicated.  Without it, you just issue a replace command (or add 
then remove).  With it, you need to set up the new cache device, bind 
it to the target, add the new device, and then delete the old device 
and remove the old cache target.  dm-cache managed through LVM also has 
the advantage that you can convert the existing FS trivially, although 
you will have to take it off-line for the conversion.
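As a rough sketch (assuming the filesystem already sits on an LVM 
logical volume; the vg/data and /dev/sdg names below are only 
placeholders), the cache setup looks something like this:

  # Add the SSD to the existing volume group
  pvcreate /dev/sdg
  vgextend vg /dev/sdg

  # Create a cache pool on the SSD and attach it to the existing LV
  lvcreate --type cache-pool -L 100G -n cache0 vg /dev/sdg
  lvconvert --type cache --cachepool vg/cache0 vg/data

  # Before removing or replacing an underlying device, flush and
  # detach the cache first:
  lvconvert --uncache vg/data

That extra uncache/re-cache dance is what makes device removal more 
involved than with plain btrfs.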

A better option, if you can afford to remove a single device from that 
array temporarily, is to use bcache.  Bcache has one specific advantage 
in this case: multiple backing devices can share the same cache device. 
This means you don't have to carve out dedicated cache space on the SSD 
for each disk, or hold some back unused so that you can add new devices 
later.  The downside is that you can't convert each device in-place, 
but because you're using BTRFS, you can still convert the volume as a 
whole in-place.  The procedure for doing so looks like this (a rough 
command sketch follows the list):

1. Format the SSD as a bcache cache.
2. Use `btrfs device delete` to remove a single hard drive from the array.
3. Set up the drive you just removed as a bcache backing device bound to 
the cache you created in step 1.
4. Add the new bcache device to the array.
5. Repeat from step 2 until the whole array is converted.
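In terms of actual commands (device names and mount point are only 
placeholders, and the cache-set UUID is the one reported when the cache 
device is created), one pass through steps 2-4 looks roughly like:

  # Step 1 (one-time): format the SSD as a bcache cache device
  make-bcache -C /dev/sdg

  # Step 2: remove one spinning disk from the btrfs array
  btrfs device delete /dev/sdb /mnt/array

  # Step 3: set it up as a bcache backing device and attach it to the
  # cache (you may need to wipefs -a /dev/sdb first if old signatures
  # remain; the bcache0 number may differ on your system)
  make-bcache -B /dev/sdb
  echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach

  # Step 4: add the resulting bcache device back into the array
  btrfs device add /dev/bcache0 /mnt/array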

A similar procedure can actually be used to do almost any underlying 
storage conversion (for example, switching to whole disk encryption, or 
adding LVM underneath BTRFS) provided all your data can fit on one less 
disk than you have.
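For instance, the whole-disk-encryption case differs only in step 3; 
something like the following (again, device and mapping names are just 
placeholders) would take the place of the bcache setup:

  # Step 3 variant: put LUKS on the freed disk instead of bcache
  cryptsetup luksFormat /dev/sdb
  cryptsetup open /dev/sdb crypt_sdb

  # Step 4 variant: add the mapped device back into the array
  btrfs device add /dev/mapper/crypt_sdb /mnt/array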
