linux-btrfs.vger.kernel.org archive mirror
* btrfs-RAID(3 or 5/6/etc) like btrfs-RAID1?
@ 2014-02-13 16:13 Jim Salter
  2014-02-13 16:21 ` Hugo Mills
  2014-02-13 20:22 ` Goffredo Baroncelli
  0 siblings, 2 replies; 6+ messages in thread
From: Jim Salter @ 2014-02-13 16:13 UTC (permalink / raw)
  To: linux-btrfs

This might be a stupid question but...

Are there any plans to make parity RAID levels in btrfs similar to the 
current implementation of btrfs-raid1?

It took me a while to realize how different btrfs-raid1 is from 
traditional RAID1, and how powerful that difference is.  The ability to 
string together virtually any combination of "mutt" hard drives in 
arbitrary ways and yet maintain redundancy is POWERFUL, and is seriously 
going to be a killer feature driving btrfs adoption in small environments.
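To make that concrete, here's a toy model (purely hypothetical Python, not 
btrfs code) of the idea behind btrfs-raid1 allocation: each chunk is just 
mirrored onto the two devices that currently have the most free space, so 
mixed-size drives all get used instead of everything being capped at the 
smallest disk.

```python
def allocate_chunk(free_space, chunk_size):
    """Mirror one chunk onto the two devices with the most free space."""
    a, b = sorted(free_space, key=free_space.get, reverse=True)[:2]
    if free_space[a] < chunk_size or free_space[b] < chunk_size:
        raise RuntimeError("out of space")
    free_space[a] -= chunk_size
    free_space[b] -= chunk_size
    return a, b

# Mixed 1 TB / 2 TB / 3 TB "mutt" drives, counted in 1 GB chunks:
disks = {"sda": 1000, "sdb": 2000, "sdc": 3000}
chunks = 0
try:
    while True:
        allocate_chunk(disks, 1)
        chunks += 1
except RuntimeError:
    pass
print(chunks)  # mirrored capacity well above the 1000 a classic RAID1 would give
```

With those three disks the greedy pairing ends up using all 6 TB of raw 
space, for 3 TB of mirrored capacity instead of the 1 TB a traditional 
mirror across all three would allow.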

The one real drawback to btrfs-raid1 is that you're committed to 50% 
storage efficiency, since you're using pure redundancy rather than 
parity across the array.  I was thinking about that this morning, and 
suddenly it occurred to me that you ought to be able to create a striped 
parity array in much the same way as a btrfs-raid1 array.

Let's say you have five disks, and you arbitrarily want to define a 
stripe length of four data blocks plus one parity block per "stripe".  
Right now, what you're looking at effectively amounts to a RAID3 array, 
like FreeBSD used to use.  But what if we add two more disks? Or three 
more disks? Or ten more?  Is there any reason we can't keep our stripe 
length of four data blocks + one parity block, and just distribute the 
stripes more or less ad hoc, the same way btrfs-raid1 distributes 
redundant data blocks across an ad-hoc array of disks?
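Something like this sketch, maybe (again purely hypothetical, and the 
device-picking policy is just my guess at what an allocator might do): 
each 4+1 stripe lands on whichever five devices currently have the most 
free space, analogous to btrfs-raid1 picking two.

```python
from functools import reduce

STRIPE_DATA = 4  # data blocks per stripe: a chosen policy, not tied to disk count

def xor_parity(blocks):
    """Single-parity block: byte-wise XOR of the data blocks (RAID5-style)."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def place_stripe(free_space):
    """Put one 4+1 stripe on the 5 devices with the most free space."""
    devs = sorted(free_space, key=free_space.get, reverse=True)[:STRIPE_DATA + 1]
    for d in devs:
        free_space[d] -= 1
    return devs

# Seven mixed disks, stripe width stays 4+1 regardless:
disks = {f"sd{c}": n for c, n in zip("abcdefg", (5, 5, 4, 4, 3, 3, 2))}
stripe = [bytes([i] * 8) for i in range(STRIPE_DATA)]
p = xor_parity(stripe)
# Any single lost data block is recoverable by XOR-ing parity with the rest:
recovered = xor_parity([p] + stripe[1:])
assert recovered == stripe[0]
print(place_stripe(disks))  # the 5 emptiest of the 7 disks get this stripe
```

The point being that the stripe shape (4+1) and the disk count (7, or 17) 
are independent knobs.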

This could be a pretty powerful setup IMO - if you implemented something 
like this, you'd be able to arbitrarily define your storage efficiency 
(the ratio of data blocks to parity blocks) and your fault-tolerance 
level (how many drives you can afford to lose before data loss) WITHOUT 
tying either directly to the number of underlying disks, or necessarily 
needing to rebalance as you add more disks to the array.  This would be 
a heck of a lot more flexible than ZFS' approach of adding more 
immutable vdevs.
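The arithmetic is simple enough: both knobs fall straight out of the 
stripe shape alone.  (Sketch only; "tolerated failures = parity blocks" 
assumes Reed-Solomon-style parity and is a worst case, since any 
parity_blocks + 1 failed drives could share a stripe.)

```python
def stripe_properties(data_blocks, parity_blocks):
    """Efficiency and worst-case fault tolerance of a D+P stripe shape."""
    efficiency = data_blocks / (data_blocks + parity_blocks)
    tolerated_failures = parity_blocks  # worst case, per stripe
    return efficiency, tolerated_failures

print(stripe_properties(4, 1))   # (0.8, 1): 80% usable, survives any 1 failure
print(stripe_properties(10, 2))  # ~83% usable AND survives any 2 failures
```

Neither number mentions the total disk count, which is the whole appeal.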

Please feel free to tell me why I'm dumb for either 1. not realizing the 
obvious flaw in this idea or 2. not realizing it's already being worked 
on in exactly this fashion. =)

Thread overview: 6+ messages
2014-02-13 16:13 btrfs-RAID(3 or 5/6/etc) like btrfs-RAID1? Jim Salter
2014-02-13 16:21 ` Hugo Mills
2014-02-13 16:32   ` Jim Salter
2014-02-13 18:23     ` Hugo Mills
2014-02-13 20:22 ` Goffredo Baroncelli
2014-02-13 20:52   ` Hugo Mills