From: "Austin S. Hemmelgarn" <ahferroin7@gmail.com>
To: Martin <rc6encrypted@gmail.com>
Cc: Erkki Seppala <flux-btrfs@inside.org>,
	Btrfs BTRFS <linux-btrfs@vger.kernel.org>
Subject: Re: How to stress test raid6 on 122 disk array
Date: Mon, 15 Aug 2016 09:47:35 -0400
Message-ID: <da9e55fc-f5d9-0173-0a8b-33d33fcc8cf0@gmail.com>
In-Reply-To: <CAGQ70Ydg9U0QH57tuWfctnFYQRx9LW6VROnd-+VGfnxmjHTYEg@mail.gmail.com>

On 2016-08-15 09:39, Martin wrote:
>> That really is the case, there's currently no way to do this with BTRFS.
>> You have to keep in mind that the raid5/6 code only went into the mainline
>> kernel a few versions ago, and it's still pretty immature as far as kernel
>> code goes.  I don't know when (if ever) such a feature might get put in, but
>> it's definitely something to add to the list of things that would be nice to
>> have.
>>
>> For the moment, the only option to achieve something like this is to set
>> up a bunch of separate 8-device filesystems, but I would be willing to bet
>> that the way you have it configured right now is closer to what most people
>> would be doing in a regular deployment, and is therefore probably more
>> valuable for testing.
>>
>
> I see.
>
> Right now on our 500+ TB zfs filesystems we use raid6 with 6-disk
> vdevs, which is common in the zfs world, and for btrfs I would do the
> same when it's stable/possible.
>
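As an aside, the separate-filesystem workaround I mentioned above is just 
a few ordinary mkfs.btrfs runs; a rough sketch, with placeholder device 
names, would look like this for each 8-device group:

    mkfs.btrfs -d raid6 -m raid6 \
        /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi

(Some people prefer -m raid1 while the parity code matures, but for stress 
testing raid6 you probably want it on metadata as well.)
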
A while back there was talk of implementing a system where you could 
specify any arbitrary number of replicas, stripes, or parity (for 
example, with 16 devices you could tell it to keep two copies with 
double parity using full-width stripes).  In theory, what you want would 
be possible there (a parity level of 2 with a stripe width of 6 or 8, 
depending on how it's implemented), but I don't think that functionality 
is likely to exist any time soon.  Implementing such a system would 
pretty much require rewriting most of the allocation code (which would 
probably be a good idea for other reasons too), and that's not likely to 
happen given the amount of work that already went into the raid5/6 
support.
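
To put rough numbers on that hypothetical 16-device example (assuming the 
two copies would be laid out as independent full-width stripes of 8 
devices each):

    devices per copy  = 16 / 2          = 8
    data per stripe   = 8 - 2 (parity)  = 6
    usable capacity   = 6 / 16          = 37.5% of raw space

In other words, you'd get both mirroring and double parity, at the cost of 
keeping only about three eighths of the raw space.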


Thread overview: 18+ messages
2016-08-04 17:43 How to stress test raid6 on 122 disk array Martin
2016-08-04 19:05 ` Austin S. Hemmelgarn
2016-08-04 20:01   ` Chris Murphy
2016-08-04 20:51     ` Martin
2016-08-04 21:12       ` Chris Murphy
2016-08-04 22:19         ` Martin
2016-08-05 10:15           ` Erkki Seppala
2016-08-15 12:19             ` Martin
2016-08-15 12:38               ` Austin S. Hemmelgarn
2016-08-15 13:39                 ` Martin
2016-08-15 13:47                   ` Austin S. Hemmelgarn [this message]
2016-08-05 11:39         ` Austin S. Hemmelgarn
2016-08-15 12:19           ` Martin
2016-08-15 12:44             ` Austin S. Hemmelgarn
2016-08-15 13:38               ` Martin
2016-08-15 13:41                 ` Austin S. Hemmelgarn
2016-08-15 13:43                 ` Chris Murphy
2016-08-15 13:40             ` Chris Murphy

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=da9e55fc-f5d9-0173-0a8b-33d33fcc8cf0@gmail.com \
    --to=ahferroin7@gmail.com \
    --cc=flux-btrfs@inside.org \
    --cc=linux-btrfs@vger.kernel.org \
    --cc=rc6encrypted@gmail.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.