* good bcache use case?
@ 2013-10-14 22:49 Paul B. Henson
From: Paul B. Henson @ 2013-10-14 22:49 UTC (permalink / raw)
To: linux-bcache-u79uwXL29TY76Z2rM5mHXA
I'm building a server to do some kvm virtualization. Right now, I've got four
2TB Western Digital RE4 hard disks and two 256G Samsung 840 Pro SSDs.
My initial intent was to build a RAID10 from the 2 TB hard disks, and a
RAID1 from the SSD's, have two separate lvm volume groups, and manually
partition file systems between them based on the need for performance.
I was wondering whether this would make a good use case for bcache: instead,
use the 256G RAID1 as a cache in front of the 4TB RAID10. Ideally that would
enhance the performance of all I/O, rather than just whatever filesystems
were manually placed on the SSDs in my original plan.
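(The layering being asked about might be set up roughly like this; a sketch with hypothetical device names, not something from the thread:)

```shell
# Assumed layout (hypothetical device names):
#   /dev/md0 - RAID10 of the four 2TB RE4 disks (backing device)
#   /dev/md1 - RAID1 of the two 256G 840 Pros (cache device)

# Format the backing and cache devices for bcache
make-bcache -B /dev/md0
make-bcache -C /dev/md1

# Register both with the kernel (normally udev does this automatically)
echo /dev/md0 > /sys/fs/bcache/register
echo /dev/md1 > /sys/fs/bcache/register

# Attach the cache set to the backing device by its cache set UUID
# (the UUID is printed by make-bcache -C, placeholder left as-is here)
echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach

# Optionally switch from the default writethrough to writeback caching
echo writeback > /sys/block/bcache0/bcache/cache_mode

# /dev/bcache0 is then usable as an LVM physical volume
pvcreate /dev/bcache0
```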
From my research so far, it seems that bcache in the 3.10 kernel is
considered stable enough for production, and there shouldn't be any issues
using software RAID for the backing and cache devices? Even with writeback
enabled?
Any gotchas or pointers for this potential deployment?
Thanks much.
^ permalink raw reply [flat|nested] 5+ messages in thread
* Re: good bcache use case?
From: matthew patton @ 2013-10-15 15:00 UTC (permalink / raw)
To: Paul B. Henson, linux-bcache-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> From my research so far, it seems that bcache in the 3.10 kernel is
> considered stable enough for production, and there shouldn't be any issues
> using software RAID for the backing and cache devices? Even with writeback
> enabled?
Anything shy of the official linux 3.11.5 kernel release has several time-bomb bugs. And apparently the project's git repo is an unreliable place to get the bug-fixed code because it's not being kept up to date. Until Kent et al. remedy this procedural problem I would *only* use the 3.11.5 release, and personally would be a bit circumspect in calling it fully production ready. I don't believe there are any known significant bugs, but with the recent flurry of fixes I'd liken its solidity to pudding rather than cake.
* RE: good bcache use case?
From: Paul B. Henson @ 2013-10-15 19:12 UTC (permalink / raw)
To: 'matthew patton', linux-bcache-u79uwXL29TY76Z2rM5mHXA
> From: matthew patton [mailto:pattonme-/E1597aS9LQAvxtiuMwx3w@public.gmane.org]
> Sent: Tuesday, October 15, 2013 8:00 AM
>
> Anything shy of the official linux 3.11.5 kernel release has several
> time-bomb bugs.
Hmm, given 3.10 is an LTS kernel, presumably these bug fixes would be
backported? I suppose it would just be a matter of making sure, for a given
3.10.x, that they had been.
> personally would be a bit circumspect in calling it fully production
> ready. I don't believe there are any known significant bugs but with the
> recent flurry of fixes I'd liken its solidity to pudding rather than cake.
Well, that's not exactly a ringing endorsement :). I suppose I could always
stick with plan A and migrate to bcache later; it should be easy enough to
pvmove everything off of the SSD raid1, pvremove it, and then with a little
downtime convert the raid10 to a backing store and the SSD raid1 to a cache.
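(That migration might look roughly like the following, assuming the SSD RAID1 is /dev/md1 in a volume group vg0; names are hypothetical, and converting the existing RAID10 into a bcache backing device would still need either a reformat or an in-place conversion tool:)

```shell
# Evacuate all allocated extents from the SSD PV onto the remaining PVs
pvmove /dev/md1

# Drop the SSD array from the volume group and wipe its PV label
vgreduce vg0 /dev/md1
pvremove /dev/md1

# Reformat the freed SSD array as a bcache cache device
make-bcache -C /dev/md1

# Attaching it requires the RAID10 to be a bcache backing device
# first -- that conversion is the step needing the downtime.
```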
Thanks for the opinions.
* Re: good bcache use case?
From: Gabriel de Perthuis @ 2013-10-15 21:12 UTC (permalink / raw)
To: Paul B. Henson; +Cc: linux-bcache-u79uwXL29TY76Z2rM5mHXA
On 15/10/2013 21:12, Paul B. Henson wrote:
>> From: matthew patton [mailto:pattonme-/E1597aS9LQAvxtiuMwx3w@public.gmane.org]
>> Sent: Tuesday, October 15, 2013 8:00 AM
>>
>> Anything shy of the official linux 3.11.5 kernel release has
>> several time-bomb bugs.
>
> Hmm, given 3.10 is an LTS kernel presumably these bug fixes would be
> backported? I suppose it would just be a matter of making sure for
> a given 3.10.x that they had been.
3.10.x and 3.11.y stable kernels have been getting bcache backports
whenever necessary. 3.11.4 had a regrettable crasher in writeback mode,
but no time-bomb. The confusion may come from the fact that the patch
that introduced the crash was fixing a more serious bug. Stopping here
because the mailing list is starting to feel like Groundhog Day.
>> personally would be a bit circumspect in calling it fully
>> production ready. I don't believe there are any known significant
>> bugs but with the recent flurry of fixes I'd liken its solidity as
>> more pudding rather than cake.
>
> Well, that's not exactly a ringing endorsement :). I suppose I could
> always stick with plan A and migrate to bcache later; it should be
> easy enough to pvmove everything off of the SSD raid1, pvremove it,
> and then with a little downtime convert the raid10 to a backing
> store and the SSD raid1 to a cache.
>
> Thanks for the opinions.
Right now I have in-place conversion for LVs, but not for PVs (details
on the bcache wiki). The block layout would work for PV conversions,
but since LVM can do raid and a single cache can cache multiple devices,
I've assumed putting bcache on top is preferable. Maybe for people who
do a lot of snapshotting the other option is better. (Do as you prefer,
just thinking out loud)
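(One reading of the two stacking orders being weighed here, sketched with hypothetical names; this is an editorial illustration, not from the thread:)

```shell
# Option A: bcache below LVM. One cache set fronts the whole array,
# so every LV carved out above it benefits from the cache.
#   disks -> md raid -> /dev/bcache0 -> LVM PV -> LVs
pvcreate /dev/bcache0

# Option B: LVM below, bcache on top of individual LVs. LVM provides
# the raid; each LV is converted in place to a bcache backing device
# and cached separately, which keeps per-LV operations like
# snapshotting independent of the cache layering.
#   disks -> LVM raid LVs -> bcache per LV -> filesystems
make-bcache -B /dev/vg0/somelv
```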
* RE: good bcache use case?
From: Paul B. Henson @ 2013-10-16 19:29 UTC (permalink / raw)
To: 'Gabriel de Perthuis'; +Cc: linux-bcache-u79uwXL29TY76Z2rM5mHXA
> From: Gabriel de Perthuis [mailto:g2p.code-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org]
> Sent: Tuesday, October 15, 2013 2:12 PM
>
> 3.10.x and 3.11.y stable kernels have been getting bcache backports
[...]
> that introduced the crash was fixing a more serious bug. Stopping here
> because the mailing list is starting to feel like Groundhog day.
Sorry, I did do some specific searching on the mailing list and a quick
review of recent traffic; I didn't mean to make you guys rehash stuff.
3.10.16 was just released this week; does it contain backports for all
known serious issues, and would it be considered suitable for production
deployment?
> but since LVM can do raid and a single cache can cache multiple devices,
I've seen references to raid using lvm (as opposed to lvm over md raid), but
it's never really been clear to me whether that is considered an alternative
to md raid, or whether it's better to do raid within lvm rather than
layering lvm on top of md raid. At this point, I'm considerably more
familiar with md raid, so I've just been sticking with that.
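(For comparison, LVM-native raid is configured per-LV rather than per-array; a hypothetical sketch, with made-up device names and sizes:)

```shell
# PVs created directly on the raw disks, no md array underneath
pvcreate /dev/sda /dev/sdb /dev/sdc /dev/sdd
vgcreate vg0 /dev/sda /dev/sdb /dev/sdc /dev/sdd

# A raid10 LV striped (-i 2) and mirrored (-m 1) across the four PVs
lvcreate --type raid10 -i 2 -m 1 -L 100G -n vms vg0

# A differently-raided LV can coexist in the same VG
lvcreate --type raid1 -m 1 -L 20G -n scratch vg0
```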
Thanks much.