linux-btrfs.vger.kernel.org archive mirror
* "-d single" for data blocks on multiple devices doesn't work as it should
@ 2014-06-24 10:42 Gerald Hopf
  2014-06-24 11:02 ` Roman Mamedov
  2014-06-24 11:45 ` Duncan
  0 siblings, 2 replies; 6+ messages in thread
From: Gerald Hopf @ 2014-06-24 10:42 UTC (permalink / raw)
  To: linux-btrfs

Dear btrfs-developers,

Thank you for making such a nice and innovative filesystem. I do have a 
small complaint, however :-)

I read the documentation and liked the idea of a multiple-device 
filesystem with mirrored metadata and data in "single" mode. 
This would be perfect for my backup purposes, where I don't want a 
parity disk - but I also don't want to lose the _entire_ backup in the 
worst-case scenario (having already lost the main data RAID5 array, and 
then one of my backup HDDs refusing to spin up or failing while restoring).

For testing purposes, I therefore created a 2x 3TB btrfs filesystem as 
described in the "Using BTRFS with Multiple Devices" Wiki:
# Use full capacity of multiple drives with different sizes (metadata 
mirrored, data not mirrored and not striped)
mkfs.btrfs -d single /dev/sdh1 /dev/sdi1

and proceeded to copy about 5.5TB of data onto it: about 800 
subdirectories, each containing a few small files (1-5KB), a medium-sized 
file (50-100MB) and a big file (3GB-15GB). The copy process was 
completely sequential (only one task copying from source to destination, 
no random writes, no simultaneous copies to the btrfs volume).

After copying, I unmounted the filesystem, switched off one of the 
two 3TB USB disks, mounted the remaining 3TB disk in degraded mode 
(-o degraded,ro) and proceeded to check whether any data was still 
readable.

Result:
- the directories and files were there and looked good (metadata raid1 
seems to work)
- some small files I tested were fine (roughly 50%?)
- even some of the medium-sized files (50-100MB) were fine (not sure 
about the percentage, might have been less than for the small files)
- not a single one (!) of the big files (3GB-15GB) survived

Conclusion:
The "-d single" allocator is useless (or broken?). It seems to scatter 
data blocks across all of the devices, thereby combining the 
disadvantage of a single disk (low write speed) with the disadvantage of 
raid0 (loss of all files when a device is missing), while offering no 
benefit in return.
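To illustrate what I think is happening, here is a toy model of the 
allocator (my assumption, not actual btrfs code: data is allocated in 
1GiB chunks, and each new chunk goes to the device with the most 
unallocated space, so on two equal disks the chunks alternate):

```python
# Toy model of "-d single" data chunk allocation on two equal devices.
# Assumptions (not taken from btrfs source): 1 GiB data chunks, and each
# new chunk is placed on the device with the most free space.
CHUNK = 1 << 30  # assumed 1 GiB data chunk size

def allocate(file_sizes, n_devices=2):
    """Return, per file, the set of devices its chunks landed on."""
    free = [10 * 1024 * CHUNK] * n_devices  # hypothetical equal capacities
    placements = []
    pos = CHUNK          # bytes used in the current chunk (start "full")
    current_dev = None
    for size in file_sizes:
        devs = set()
        remaining = size
        while remaining > 0:
            if pos >= CHUNK:  # current chunk full -> allocate a new one
                current_dev = max(range(n_devices), key=lambda d: free[d])
                free[current_dev] -= CHUNK
                pos = 0
            take = min(remaining, CHUNK - pos)
            pos += take
            remaining -= take
            devs.add(current_dev)
        placements.append(devs)
    return placements

# One 5 GiB file and a handful of 4 KiB files, as in my test above
sizes = [5 * CHUNK] + [4096] * 8
for size, devs in zip(sizes, allocate(sizes)):
    print(f"{size:>12} bytes -> devices {sorted(devs)}")
```

Under this model every file larger than one chunk necessarily spans both 
disks (so no big file can survive losing a device), while small files 
cluster inside a single chunk - which would match the ~50% survival rate 
I saw for them.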

In my opinion, to offer any benefit over raid0 for data, "-d 
single" should never spread a single file's blocks across multiple 
disks unless it starts to run out of contiguous space as a disk gets 
almost full. Is there any chance that "-d single" will be fixed at some 
point in the future?
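The policy I have in mind could be sketched like this (hypothetical 
pseudocode of my proposal, not anything btrfs does today):

```python
# Sketch of the proposed policy: place a whole file on one device
# whenever any device can hold it, and spill across devices only as a
# last resort when no single device has enough contiguous space.
def place_file(size, free):
    """Pick devices for one file; 'free' is free bytes per device."""
    # Prefer the first device that can hold the entire file.
    for dev, space in enumerate(free):
        if space >= size:
            free[dev] -= size
            return [dev]
    # Otherwise spill across devices, fullest-first, as a last resort.
    devs = []
    for dev in sorted(range(len(free)), key=lambda d: -free[d]):
        if size <= 0:
            break
        take = min(size, free[dev])
        if take > 0:
            free[dev] -= take
            devs.append(dev)
            size -= take
    return devs

free = [3 * 10**12, 3 * 10**12]  # two 3 TB devices, as in my test setup
print(place_file(10 * 10**9, free))  # a 10 GB file stays on one device
```

With this, losing one device would only cost the files on that device 
(plus the rare files written after the disks filled up), instead of 
every large file on the whole volume.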

Thanks for listening,
Gerald


Thread overview: 6+ messages
2014-06-24 10:42 "-d single" for data blocks on multiple devices doesn't work as it should Gerald Hopf
2014-06-24 11:02 ` Roman Mamedov
2014-06-24 21:48   ` Gerald Hopf
2014-06-24 11:45 ` Duncan
2014-06-24 21:51   ` Gerald Hopf
2014-06-25  2:20     ` Austin S Hemmelgarn
