From: Christian Rohmann <crohmann@netcologne.de>
To: linux-btrfs@vger.kernel.org
Subject: Re: How to properly and efficiently balance RAID6 after more drives are added?
Date: Wed, 11 Nov 2015 15:17:19 +0100	[thread overview]
Message-ID: <56434DEF.9070807@netcologne.de> (raw)
In-Reply-To: <pan$11297$845af192$4bee50bd$587fb5f7@cox.net>

Sorry for the late reply to this list regarding this topic
...

On 09/04/2015 01:04 PM, Duncan wrote:
> And of course, only with 4.1 (nominally 3.19 but there were initial 
> problems) was raid6 mode fully code-complete and functional -- before 
> that, runtime worked, it calculated and wrote the parity stripes as it 
> should, but the code to recover from problems wasn't complete, so you 
> were effectively running a slow raid0 in terms of recovery ability, but 
> one that got "magically" updated to raid6 once the recovery code was 
> actually there and working.

Like others who have written to this ML, I run into crashes when trying
to balance my filesystem.
I have moved through the various kernel versions and btrfs-tools and am
currently running kernel 4.3 plus 4.3-rc1 of the tools, but still, after
about an hour of balancing (and actually moving chunks), the machine
crashes hard without leaving any usable stack trace or anything in the
kernel log that I could report here :(

Any ideas on how I could proceed to get some usable debug info for the
devs to look at?
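
One thing I am considering next (just a rough sketch; the IP addresses,
interface name, ports and log file name below are placeholders for my
setup, not anything from this thread) is shipping the console output to
a second box via netconsole, so that whatever oops there is survives the
crash:

  # on the crashing box: raise the console loglevel, then load netconsole
  echo 8 > /proc/sys/kernel/printk
  modprobe netconsole netconsole=6665@192.168.1.10/eth0,6666@192.168.1.20/

  # on a second machine: capture whatever the dying box manages to send
  nc -u -l -p 6666 | tee btrfs-balance-crash.log
  # (or "nc -u -l 6666", depending on the netcat flavour)

If even that log stays empty, I suppose the box dies before it can push
anything out, and a serial console or kdump would be the next thing to try.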


> So I'm guessing you have some 8-strip-stripe chunks at say 20% full or 
> some such.  There's 19.19 data TiB used of 22.85 TiB allocated, a spread 
> of over 3 TiB.  A full nominal-size data stripe allocation, given 12 
> devices in raid6, will be 10x1GiB data plus 2x1GiB parity, so there's 
> about 3.5 TiB / 10 GiB extra stripes worth of chunks, 350 stripes or so, 
> that should be freeable, roughly (the fact that you probably have 8-
> strip, 12-strip, and 4-strip stripes, on the same filesystem, will of 
> course change that a bit, as will the fact that four devices are much 
> smaller than the other eight).
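
(Just to check that I follow the arithmetic: 22.85 TiB allocated minus
19.19 TiB used leaves roughly 3.66 TiB of slack, and with a full-width
chunk set holding 10 GiB of data across 12 devices that works out to
something like 350-375 mostly-empty chunk sets a balance could reclaim.)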

The new devices have been in place for a while now (> 2 months) and are
barely used. Why is more data not being put onto the new disks?
Even without a balance, new data should spread evenly across all devices,
right? From the IOPS I can see that only the 8 disks which have always
been in the box are doing any heavy lifting, while the new disks are
mostly idle.

Is there anything I could do to narrow down where a certain file is
stored across the devices?
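
For what it is worth, this is roughly what I have been poking at so far
(the mount point, file path and logical address are placeholders for my
setup, and I am not sure btrfs-map-logical is really the intended tool
for this):

  # per-device view of how much each disk has allocated / used
  btrfs filesystem usage -T /srv/data
  btrfs device usage /srv/data

  # list the logical addresses of one file's extents ...
  filefrag -v /srv/data/some/large/file

  # ... and map one logical byte address back to a device and physical offset
  # (filefrag prints offsets in blocks, so multiply by the block size first)
  btrfs-map-logical -l 123456789012 /dev/sdb

A confirmation that this is the right approach, or a pointer to a better
one, would be much appreciated.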

Regards

Christian

Thread overview: 8+ messages
2015-09-02 10:29 How to properly and efficiently balance RAID6 after more drives are added? Christian Rohmann
2015-09-02 11:30 ` Hugo Mills
2015-09-02 13:09   ` Christian Rohmann
2015-09-03  2:22     ` Duncan
2015-09-04  8:28       ` Christian Rohmann
2015-09-04 11:04         ` Duncan
2015-11-11 14:17           ` Christian Rohmann [this message]
2015-11-12  4:31             ` Duncan
