linux-btrfs.vger.kernel.org archive mirror
From: "Austin S. Hemmelgarn" <ahferroin7@gmail.com>
To: Chris Murphy <lists@colorremedies.com>,
	John Petrini <jpetrini@coredial.com>
Cc: Btrfs BTRFS <linux-btrfs@vger.kernel.org>
Subject: Re: Volume appears full but TB's of space available
Date: Fri, 7 Apr 2017 12:58:18 -0400	[thread overview]
Message-ID: <12332db1-c52a-f483-e2e7-e23e508e6066@gmail.com> (raw)
In-Reply-To: <CAJCQCtQqomcKRn-J8bJchVng=D4od346p18CKvjYp86zKSZysA@mail.gmail.com>

On 2017-04-07 12:28, Chris Murphy wrote:
> On Fri, Apr 7, 2017 at 7:50 AM, Austin S. Hemmelgarn
> <ahferroin7@gmail.com> wrote:
>
>> If you care about both performance and data safety, I would suggest using
>> BTRFS raid1 mode on top of LVM or MD RAID0 together with having good backups
>> and good monitoring.  Statistically speaking, catastrophic hardware failures
>> are rare, and you'll usually have more than enough warning that a device is
>> failing before it actually does, so provided you keep on top of monitoring
>> and replace disks that are showing signs of impending failure as soon as
>> possible, you will be no worse off in terms of data integrity than running
>> ext4 or XFS on top of a LVM or MD RAID10 volume.
>
>
> Depending on the workload, and what replication is being used by Ceph
> above this storage stack, it might make more sense to do
> something like three lvm/md raid5 arrays, and then Btrfs single data,
> raid1 metadata, across those three raid5s. That's giving up only three
> drives to parity rather than 1/2 the drives, and rebuild time is
> shorter than losing one drive in a raid0 array.
Ah, I had forgotten it was a Ceph back-end system.  In that case, I
would suggest essentially the same setup Chris did, although I would
personally be a bit more conservative and use RAID6 instead of RAID5
for the LVM/MD arrays.  As he said though, it really depends on what
higher-level replication you're doing.  In particular, if you're
running erasure coding instead of replication at the Ceph level, I
would probably still go with BTRFS raid1 on top of LVM/MD RAID0, just
to balance out the performance hit from the erasure coding.
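
For illustration, here is a rough sketch of the RAID6 variant of the
layout being discussed: three MD RAID6 arrays with a single BTRFS
filesystem on top using single data and raid1 metadata.  The device
names and array sizes are hypothetical placeholders, not taken from
the original poster's hardware; the mdadm and mkfs.btrfs invocations
are the standard ones, but verify the options against your
distribution's man pages before running anything like this.

```shell
# Hypothetical example: twelve disks (/dev/sdb..sdm) split into three
# 4-disk MD RAID6 arrays, each of which can survive two disk failures.
mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[b-e]
mdadm --create /dev/md1 --level=6 --raid-devices=4 /dev/sd[f-i]
mdadm --create /dev/md2 --level=6 --raid-devices=4 /dev/sd[j-m]

# One BTRFS filesystem spanning the three arrays: data is not
# replicated by BTRFS (-d single), but metadata is mirrored across
# arrays (-m raid1), so metadata survives the loss of a whole array.
mkfs.btrfs -d single -m raid1 /dev/md0 /dev/md1 /dev/md2
```

The trade-off, as noted above, is that this spends only the parity
disks of each array on redundancy instead of half the disks, at the
cost of RAID5/6 rebuild behavior on the MD layer.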


Thread overview: 20+ messages
2017-04-07  0:47 Volume appears full but TB's of space available John Petrini
2017-04-07  1:15 ` John Petrini
2017-04-07  1:21   ` Chris Murphy
2017-04-07  1:31     ` John Petrini
2017-04-07  2:42       ` Chris Murphy
2017-04-07  3:25         ` John Petrini
2017-04-07 11:41           ` Austin S. Hemmelgarn
2017-04-07 13:28             ` John Petrini
2017-04-07 13:50               ` Austin S. Hemmelgarn
2017-04-07 16:28                 ` Chris Murphy
2017-04-07 16:58                   ` Austin S. Hemmelgarn [this message]
2017-04-07 17:05                     ` John Petrini
2017-04-07 17:11                       ` Austin S. Hemmelgarn
2017-04-07 16:04             ` Chris Murphy
2017-04-07 16:51               ` Austin S. Hemmelgarn
2017-04-07 16:58                 ` John Petrini
2017-04-07 17:04                   ` Austin S. Hemmelgarn
2017-04-08  5:12             ` Duncan
2017-04-10 11:31               ` Austin S. Hemmelgarn
2017-04-07  1:17 ` Chris Murphy
