* Btrfs RAID space utilization and bitrot reconstruction
@ 2012-07-01 11:50 Waxhead
  2012-07-01 12:27 ` Hugo Mills
  2012-07-02 18:00 ` Martin Steigerwald
From: Waxhead @ 2012-07-01 11:50 UTC (permalink / raw)
  To: linux-btrfs

As far as I understand, btrfs stores all data in large chunks that are
striped, mirrored, or "raid5/6'ed" across all the disks added to the
filesystem/volume.
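
To make sure I have the model right, here is how I currently picture
the profiles, written down as a small Python table; the copy/parity
counts are my own assumptions, not taken from the btrfs code:

# My mental model of the block-group profiles (assumptions only):
# each chunk is kept in `copies` replicas and/or carries `parity`
# extra stripes per row.
PROFILES = {
    # name      (copies, parity stripes)
    "single": (1, 0),
    "raid0":  (1, 0),   # striped across devices, no redundancy
    "raid1":  (2, 0),   # two copies of every chunk
    "raid10": (2, 0),   # striped mirrors
    "raid5":  (1, 1),   # striped, plus one parity stripe per row
    "raid6":  (1, 2),   # striped, plus two parity stripes per row
}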

How does btrfs deal with different-sized disks? Let's say, for
example, that you have 10 disks of 100GB, 200GB, 300GB ... 1000GB and
you create a btrfs filesystem spanning all of them. How will the raid5
implementation distribute chunks in such a setup? I assume the
stripe+stripe+parity elements are separate chunks placed on separate
disks, but how does btrfs select the best disk to store a chunk on? In
short, will a slow disk slow down the entire "array", or only parts of
it, or will btrfs attempt to use the fastest disks first?
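
To make the allocation question concrete, here is a toy simulation of
the policy I imagine ("put the next chunk on the devices with the most
unallocated space"); the fixed stripe width of 3 and the 1GB chunk
size are purely my assumptions for illustration:

# Toy model (Python) of a "most free space first" allocator.  The
# stripe width and chunk size are made up; I am asking whether the
# real allocator behaves anything like this.
def allocate_chunks(free_gb, stripes_per_chunk=3, chunk_gb=1):
    """Place chunks until too few devices have room for a full stripe set."""
    placed = 0
    while True:
        # devices with enough room, sorted by free space, largest first
        candidates = sorted(
            (d for d in range(len(free_gb)) if free_gb[d] >= chunk_gb),
            key=lambda d: free_gb[d], reverse=True)
        if len(candidates) < stripes_per_chunk:
            break  # cannot build another full stripe set
        for d in candidates[:stripes_per_chunk]:
            free_gb[d] -= chunk_gb  # one stripe element per device
        placed += 1
    return placed

# ten disks: 100GB, 200GB, ... 1000GB
disks = [100 * (i + 1) for i in range(10)]
print(allocate_chunks(disks))   # how many raid5-style chunks fit?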

Also, since btrfs checksums both data and metadata, I am thinking that
at least the raid6 implementation can perhaps (try to) reconstruct
corrupt data (and try to rewrite it) before reading an alternate copy.
Can someone please fill me in on the details here?
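
The sequence I have in mind is roughly the one below; the checksum
function and the stripe layout are simplified assumptions on my part
(plain XOR parity, raid5-style), not the real btrfs code:

# Sketch of "verify checksum, rebuild from parity, rewrite" for a
# raid5-style stripe with XOR parity.  crc32 stands in for whatever
# checksum btrfs really uses; this is only an illustration.
import zlib

def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

def read_with_repair(data_blocks, parity, checksums):
    """Return the data blocks, rebuilding any block whose checksum fails."""
    repaired = list(data_blocks)
    for i, block in enumerate(data_blocks):
        if zlib.crc32(block) == checksums[i]:
            continue                      # block is fine
        # rebuild the bad block from the surviving blocks + parity
        survivors = [b for j, b in enumerate(data_blocks) if j != i]
        candidate = xor_blocks(survivors + [parity])
        if zlib.crc32(candidate) != checksums[i]:
            raise IOError("block %d is unrecoverable" % i)
        repaired[i] = candidate           # ...and rewrite it to disk
    return repaired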

Finally, how does btrfs deal with Advanced Format (4K sector) drives
when the entire drive (and not a partition) is used to build a btrfs
filesystem? Is proper alignment achieved?
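
For reference, the kind of check I would otherwise do by hand looks
like this; the sysfs path is the standard Linux one and the 64KiB
superblock offset is what I have read about btrfs, so please correct
me if either assumption is wrong:

# Hand-rolled alignment check (illustration only): when the whole
# disk is used the filesystem starts at byte 0 of the device, so
# alignment should only depend on btrfs' own internal offsets.
def is_aligned(device="sda", superblock_offset=64 * 1024):
    with open("/sys/block/%s/queue/physical_block_size" % device) as f:
        phys = int(f.read())              # e.g. 4096 on AF drives
    return superblock_offset % phys == 0

print(is_aligned())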

