linux-btrfs.vger.kernel.org archive mirror
From: Russell Coker <russell@coker.com.au>
To: linux-btrfs@vger.kernel.org
Subject: Re: Putting very big and small files in one subvolume?
Date: Mon, 18 Aug 2014 00:51:45 +1000	[thread overview]
Message-ID: <2834974.yZzVPeX4cV@xev> (raw)
In-Reply-To: <pan$df1ba$a20420a2$2ac33113$90ec0a1@cox.net>

On Sun, 17 Aug 2014 12:31:42 Duncan wrote:
> OTOH, I tend to be rather more of an independent partition booster than 
> many.  The biggest reason for that is the too many eggs in one basket 
> problem.  Fully separate filesystems on separate partitions separate 
> those data "eggs" into separate baskets, so if the metaphorical bottom 
> drops out of one of those filesystem baskets, only the data eggs in that 
> filesystem basket are lost, while the eggs in the separate filesystem 
> baskets are still safe and sound, not affected at all. =:^)
> 
> The thing that troubles me about replacing a bunch of independent 
> partitions and filesystems with a bunch of subvolumes on a single btrfs 
> filesystem is thus just that, you've nicely divided that big basket into 
> little subvolume compartments, but it's still one big basket, and if the 
> bottom falls out, you potentially lose EVERYTHING in that filesystem 
> basket!

I'll write the counter-point to this.

If you have separate partitions for /, /var/log, and /home, then losing any
one of them will leave you with a system that's mostly unusable.  So for
continuity of service there doesn't seem to be much benefit in having
multiple partitions.

When you have to restore a backup in adverse circumstances the restore time is 
important.  For example, if you have 10*4TB disks and need RAID-1 redundancy 
(which you need on any BTRFS filesystem of note, as I don't think RAID-5 and 
RAID-6 are trustworthy) then an advantage of five 4TB RAID-1 filesystems (each 
a mirrored pair of disks) over a single 20TB RAID-10 is that the time to 
restore any one filesystem will be a lot shorter.  But this isn't an issue for 
typical BTRFS users who are working with much smaller amounts of data, and at 
this time I have to recommend ZFS over BTRFS for most systems that manage 20TB 
of data.
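
To put rough numbers on that, here's a quick Python sketch (the disk sizes
are the ones above, and it assumes restore time scales with the amount of
data that has to be rewritten):

  # Assumed setup: 10 disks of 4TB each.
  DISK_TB = 4
  DISKS = 10

  # Layout A: five separate 2-disk RAID-1 filesystems.
  raid1_filesystems = DISKS // 2
  raid1_usable_tb = raid1_filesystems * DISK_TB   # 20TB usable in total
  raid1_restore_tb = DISK_TB                      # only the failed filesystem

  # Layout B: one 10-disk RAID-10 filesystem.
  raid10_usable_tb = DISKS * DISK_TB // 2         # 20TB usable
  raid10_restore_tb = raid10_usable_tb            # you restore the lot

  print(raid1_usable_tb, raid10_usable_tb)        # 20 20
  print(raid1_restore_tb, raid10_restore_tb)      # 4 20

Same usable capacity either way, but when one filesystem is lost you are
rewriting 4TB instead of 20TB, so the restore takes roughly a fifth of the
time.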

If you have a RAID-1 array of the biggest disks available (which is probably 
the largest amount of storage that >99% of BTRFS users will have) then you are 
looking at restoring maybe 4TB at 160MB/s, which is a little under 7 hours.  
For a home network a 7 hour delay in getting things going again after a major 
failure is quite OK.
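
For reference, the arithmetic behind that estimate (assuming a sustained
160MB/s and decimal units, 1TB == 1,000,000MB):

  data_mb = 4 * 1000 * 1000     # 4TB to restore
  rate_mb_s = 160               # assumed sustained restore rate
  hours = data_mb / rate_mb_s / 3600.0
  print(round(hours, 1))        # 6.9, a little under 7 hours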

Finally, failures of filesystems on different partitions won't be independent.  
If one filesystem on a disk becomes unusable due to drive firmware issues or 
other serious problems then the other filesystems on the same physical disk 
are likely to suffer the same fate.

-- 
My Main Blog         http://etbe.coker.com.au/
My Documents Blog    http://doc.coker.com.au/


Thread overview: 8+ messages
2014-08-17  8:56 Putting very big and small files in one subvolume? Shriramana Sharma
2014-08-17 12:31 ` Duncan
2014-08-17 14:51   ` Russell Coker [this message]
2014-08-18 18:16   ` Martin
2014-08-19  4:07     ` Duncan
2014-08-19  5:26     ` Duncan
2014-08-29 16:04 ` Shriramana Sharma
2014-08-29 16:24   ` Hugo Mills
