From: "Austin S. Hemmelgarn" <ahferroin7@gmail.com>
To: linux-btrfs@vger.kernel.org
Subject: Re: Add device while rebalancing
Date: Mon, 25 Apr 2016 07:18:10 -0400
Message-ID: <571DFCF2.6050604@gmail.com>
In-Reply-To: <pan$c3f48$4829429e$7ab0117b$7d124f52@cox.net>
On 2016-04-23 01:38, Duncan wrote:
> Juan Alberto Cirez posted on Fri, 22 Apr 2016 14:36:44 -0600 as excerpted:
>
>> Good morning,
>> I am new to this list and to btrfs in general. I have a quick question:
>> Can I add a new device to the pool while the btrfs filesystem balance
>> command is running on the drive pool?
>
> Adding a device while balancing shouldn't be a problem. However,
> depending on your redundancy mode, you may wish to cancel the balance and
> start a new one after the device add, so the balance will take account of
> it as well and balance it into the mix.
I'm not 100% certain how balance handles this, except that nothing
should break. I believe it picks target devices each time it goes to
relocate a chunk, so any chunk processed after the device is added
should be considered for placement on the new device (and many of them
will probably end up there, because that device will almost certainly
be less full than any of the others). That said, you probably do want
to cancel the balance, add the device, and then re-run the balance so
that things end up more evenly distributed.
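
Roughly, that sequence would look like this, assuming the filesystem is
mounted at /mnt and the new disk is /dev/sdX (substitute your own
mountpoint and device):

    # cancel the running balance; this waits for the chunk currently
    # being relocated to finish before returning
    btrfs balance cancel /mnt

    # add the new device to the filesystem
    btrfs device add /dev/sdX /mnt

    # start a fresh, unfiltered balance across all devices
    # (newer btrfs-progs may ask for --full-balance to confirm this)
    btrfs balance start /mnt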
>
> Note that while device add doesn't do more than that on its own, device
> delete/remove effectively initiates its own balance, moving the chunks on
> the device being removed to the other devices. So you wouldn't want to
> be running a balance and then do a device remove at the same time.
IIRC, trying to delete a device while a balance is running will fail
and return an error, because only one balance can run at a time (and,
as you say, device deletion is effectively a restricted balance
internally).
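
So if in doubt, check for a running balance before attempting the
removal; something like this (again assuming /mnt and /dev/sdX):

    # reports whether a balance is running, paused, or not running
    btrfs balance status /mnt

    # once no balance is running, remove the device; this migrates
    # its chunks onto the remaining devices
    btrfs device delete /dev/sdX /mnt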
>
> Similarly with btrfs replace, although in that case, it's more directly
> moving data from the device being replaced (if it's still there, or using
> redundancy or parity to recover it if not) to the replacement device, a
> more limited and often faster operation. But you probably still don't
> want to do a balance at the same time as it places unnecessary stress on
> both the filesystem and the hardware, and even if the filesystem and
> devices handle the stress fine, the result is going to be that both
> operations take longer as they're both intensive operations that will
> interfere with each other to some extent.
Agreed, this is generally not a good idea because of the stress it puts
on the devices (and because it probably isn't well tested).
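
For reference, replace is its own operation with its own status
reporting; a sketch, assuming /dev/sdX is being replaced by /dev/sdY on
/mnt (hypothetical names):

    # copy data from the old device onto the new one, falling back to
    # redundancy/parity if the old device is missing or returns errors
    btrfs replace start /dev/sdX /dev/sdY /mnt

    # replace runs in the background; check progress with:
    btrfs replace status /mnt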
>
> Similarly with btrfs scrub. The operations are logically different
> enough that they shouldn't really interfere with each other logically,
> but they're both hardware intensive operations that will put unnecessary
> stress on the system if you're doing more than one at a time, and will
> result in both going slower than they normally would.
Depending on a number of factors, scrubbing while balancing can
actually finish faster than running one and then the other in sequence.
It really depends on how each decides to pick chunks, and how your
underlying devices handle read and write caching, but it can happen.
Most of the time, though, it should take about the same amount of time
as running them in sequence, or a little longer if you're on
traditional disks.
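
For completeness, scrub is also a background operation you can monitor
the same way (assuming /mnt):

    # start a scrub, verifying checksums on all data and metadata
    btrfs scrub start /mnt

    # show progress and any error counts found so far
    btrfs scrub status /mnt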
>
> And again with snapshotting operations. Making a snapshot is normally
> nearly instantaneous, but there's a scaling issue if you have too many
> per filesystem (try to keep it under 2000 snapshots per filesystem
> total, if possible, and definitely keep it under 10K or some operations
> will slow down substantially). Deleting snapshots is more work, so
> while you should ordinarily thin down snapshots automatically if you're
> making them quite frequently (say daily or more frequently), you may
> want to put the snapshot deletion, at least, on hold while you scrub or
> balance or device delete or replace.
I would actually recommend putting all snapshot operations on hold, as
well as most writes to the filesystem, while doing a balance or device
deletion. The more writes happen during those operations, the longer
they take, and the less likely it is that you'll end up with a good
on-disk layout of the data.
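
If snapshot cleanup really can't wait, one option is to pause the
balance around it; a sketch, with a hypothetical snapshot path:

    # pause the balance, delete the snapshot, then resume
    btrfs balance pause /mnt
    btrfs subvolume delete /mnt/snapshots/daily-2016-03-01
    btrfs balance resume /mnt

Note that the space reclamation from a deleted snapshot happens
asynchronously in the background anyway, so even this only defers part
of the work.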