linux-btrfs.vger.kernel.org archive mirror
From: "Stéphane Lesimple" <stephane_btrfs@lesimple.fr>
To: "Hugo Mills" <hugo@carfax.org.uk>
Cc: linux-btrfs@vger.kernel.org
Subject: Re: Rebalancing raid1 after adding a device
Date: Tue, 18 Jun 2019 19:15:27 +0000	[thread overview]
Message-ID: <97d71a1c6b16fec6a49004db84fac254@lesimple.fr> (raw)
In-Reply-To: <20190618184501.GJ21016@carfax.org.uk>

June 18, 2019 8:45 PM, "Hugo Mills" <hugo@carfax.org.uk> wrote:

> On Tue, Jun 18, 2019 at 08:26:32PM +0200, Stéphane Lesimple wrote:
>> [...]
>> Of course the solution is to run a balance, but as the filesystem is
>> now quite big, I'd like to avoid running a full rebalance. That
>> would be quite I/O intensive, would run for several days, and would
>> put unnecessary stress on the drives. It also seems excessive, as in
>> theory only a few TiB need to be moved: if I'm correct, only enough
>> block groups need one of their two chunks moved to the new device
>> for the sum of the available space on the 4 preexisting devices to
>> at least equal the available space on the new device, i.e. moving
>> ~7TiB instead of ~22TiB. I don't need a perfectly balanced FS, I
>> just want all the space to be allocatable.
>> 
>> I tried using the -ddevid option, but it only instructs btrfs to
>> work on the block groups allocated on said device; in practice it
>> tends to move data between the 4 preexisting devices and doesn't
>> fix my problem. A full balance with -dlimit=100 did no better.
> 
> -dlimit=100 will only move 100 GiB of data (i.e. 200 GiB of raw
> space, counting both RAID-1 copies), so it'll be a pretty limited
> change. You'll need to use a larger number than that if you want it
> to have a significant visible effect.

Yes, of course. I wasn't clear here: what I meant to do by starting
a full balance with -dlimit=100 was to test, within a reasonable amount
of time, whether the allocator would prefer to fill the new drive. After
those 100G (200G raw) of data had been moved, I observed that it wasn't
the case at all: not a single allocation happened on the new drive. I
know it would happen at some point, after terabytes of data had been
moved, but that's exactly what I'm trying to avoid.
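For reference, the probe described above can be sketched as follows; /mnt/pool
is a hypothetical mount point, and DRYRUN=echo makes the commands print instead
of run (drop it, and run as root, on a real filesystem):

```shell
# Probe: relocate at most 100 data block groups, then see where they landed.
# /mnt/pool is a hypothetical mount point; DRYRUN=echo only prints the
# commands, so this sketch is safe to run anywhere.
DRYRUN=echo
$DRYRUN btrfs balance start -dlimit=100 /mnt/pool
# Per-device allocation afterwards shows whether any chunk hit the new drive:
$DRYRUN btrfs filesystem usage /mnt/pool
```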

> The -ddevid=<old_10T> option would be my recommendation. It's got
> more chunks on it, so they're likely to have their copies spread
> across the other four devices. This should help with the
> balance.

Makes sense. That's probably what I'm going to do if I don't find
a better solution. It's a bit frustrating, because I know exactly
what I want btrfs to do, but I have no way to make it do that.
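A sketch of that plan, assuming /mnt/pool as the mount point and devid 2 for
the old 10T drive (both hypothetical; check yours first). As before, DRYRUN=echo
prints the commands instead of running them:

```shell
DRYRUN=echo
# Device IDs are listed per filesystem:
$DRYRUN btrfs filesystem show /mnt/pool
# Then balance only the block groups that have a chunk on that device;
# their RAID-1 copies should end up spread across the other devices:
$DRYRUN btrfs balance start -ddevid=2 /mnt/pool
```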

> Alternatively, just do a full balance and then cancel it when the
> amount of unallocated space is reasonably well spread across the
> devices (specifically, the new device's unallocated space is less than
> the sum of the unallocated space on the other devices).

I'll try with the old 10T and cancel the balance once I reach 0
unallocatable space, if that happens before all the data has been
moved around.
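Hugo's stop condition can be checked mechanically. A sketch, assuming the new
device is /dev/sde and the filesystem is mounted at /mnt/pool (both
hypothetical), which parses the Unallocated section of
`btrfs filesystem usage -b` — note the exact output layout may vary between
btrfs-progs versions:

```shell
# Reads `btrfs filesystem usage -b <mnt>` output on stdin; prints "cancel"
# once the new device's unallocated space no longer exceeds the sum of the
# other devices' unallocated space, "continue" otherwise.
check_unalloc() {
    awk -v newdev="$1" '
        /^Unallocated:/ { in_section = 1; next }
        in_section && $1 ~ /^\/dev\// {
            if ($1 == newdev) newbytes = $2
            else              others  += $2
        }
        END {
            if (newbytes <= others) print "cancel"
            else                    print "continue"
        }'
}

# On a real system one would poll, e.g.:
#   while sleep 60; do
#       [ "$(btrfs filesystem usage -b /mnt/pool | check_unalloc /dev/sde)" = cancel ] \
#           && btrfs balance cancel /mnt/pool && break
#   done
```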

>> Is there a way to ask the block group allocator to prefer writing to
>> a specific device during a balance? Something like -ddestdevid=N?
>> This would just be a hint to the allocator and the usual constraints
>> would always apply (and prevail over the hint when needed).
> 
> No, there isn't. Having control over the allocator (or bypassing
> it) would be pretty difficult to implement, I think.
> 
> It would be really great if there was an ioctl that allowed you to
> say things like "take the chunks of this block group and put them on
> devices 2, 4 and 5 in RAID-5", because you could do a load of
> optimisation with reshaping the FS in userspace with that. But I
> suspect it's a long way down the list of things to do.

Exactly, that would be awesome. I would probably even go as far as
writing some C code myself to call such an ioctl and do this
"intelligent" balance on my system!

--
Stéphane.
