linux-btrfs.vger.kernel.org archive mirror
From: Cloud Admin <admin@cloud.haefemeier.eu>
To: linux-btrfs@vger.kernel.org
Subject: Re: Best Practice: Add new device to RAID1 pool (Summary)
Date: Sun, 30 Jul 2017 01:04:28 +0200	[thread overview]
Message-ID: <1501369468.2795.1.camel@cloud.haefemeier.eu> (raw)
In-Reply-To: <1500914457.2781.10.camel@cloud.haefemeier.eu>

On Monday, 2017-07-24 at 18:40 +0200, Cloud Admin wrote:
> On Monday, 2017-07-24 at 10:25 -0400, Austin S. Hemmelgarn wrote:
> > On 2017-07-24 10:12, Cloud Admin wrote:
> > > On Monday, 2017-07-24 at 09:46 -0400, Austin S. Hemmelgarn wrote:
> > > > On 2017-07-24 07:27, Cloud Admin wrote:
> > > > > Hi,
> > > > > I have a multi-device pool (three discs) as RAID1. Now I want to
> > > > > add a new disc to increase the pool. I followed the description on
> > > > > https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices
> > > > > and used 'btrfs device add <device> <btrfs path>'. After that I
> > > > > called a balance to rebalance the RAID1, using 'btrfs balance
> > > > > start <btrfs path>'. Is that everything, or do I also need to
> > > > > call a resize (for example) or anything else? Or do I need to
> > > > > specify filter/profile parameters for balancing?
> > > > > I am a little bit confused because the balance command has been
> > > > > running for 12 hours and only 3 GB of data have been touched.
> > > > > This would mean the whole balance process (the new disc has 8 TB)
> > > > > would run for a long, long time... and it is using one CPU at
> > > > > 100%.
> > > > 
> > > > Based on what you're saying, it sounds like you've either run
> > > > into a bug, or have a huge number of snapshots on this filesystem.
> > > 
> > > It depends on what you define as huge. The call 'btrfs sub list
> > > <btrfs path>' returns a list of 255 subvolumes.
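
(As an aside, a quick way to get that count; /mnt/pool again stands in
for the real mount point:)

    # count the subvolumes on the filesystem ('sub' is short for 'subvolume')
    btrfs subvolume list /mnt/pool | wc -l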
> > 
> > OK, this isn't horrible, especially if most of them aren't snapshots
> > (it's cross-subvolume reflinks that are most of the issue when it
> > comes to snapshots, not the fact that they're subvolumes).
> > > I think this is not too huge. Most of these subvolumes were created
> > > by Docker itself. I cancelled the balance (this will take a while)
> > > and will try to delete some of these subvolumes/snapshots.
> > > What more can I do?
> > 
> > As Roman mentioned in his reply, it may also be qgroup related. If
> > you run:
> > btrfs quota disable <btrfs path>
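
(Sketched as commands, with /mnt/pool as a placeholder; disabling quota
and then restarting the balance:)

    # turn off qgroup accounting for the whole filesystem
    btrfs quota disable /mnt/pool
    # start the balance again
    btrfs balance start /mnt/pool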
> 
> It seems quota was one part of it. Thanks for the tip. I disabled it
> and started the balance anew.
> Now approximately one chunk is relocated every 5 minutes. But if I
> take the reported 10860 chunks and calculate, it will take ~37 days
> to finish... So it seems I have to invest more time into figuring out
> the subvolume/snapshot structure created by Docker.
> A first deeper look shows there is a subvolume with a snapshot, which
> itself has a snapshot, and so forth.
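
(The arithmetic behind that estimate: 10860 chunks x 5 min = 54300 min,
which is about 905 hours or ~37.7 days. The progress and the snapshot
chains can be inspected like this, again with /mnt/pool as placeholder:)

    # show how many chunks the running balance has processed and has left
    btrfs balance status /mnt/pool
    # list subvolumes together with their parent IDs to trace snapshot chains
    btrfs subvolume list -p /mnt/pool
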
Now the balance process has finished after 127 hours and the new disc is
in the pool... not as long as expected, but in my opinion still long
enough. Quota seems to have been one big driver in my case. What I could
see over time: at the beginning, many extents were relocated while
ignoring the new disc. It could probably be a good idea to rebalance
using a filter (like -dusage=30, for example) before adding the new
disc, to decrease the overall time; a sketch of that idea follows below.
This is only a theory, but I will try to keep it in mind for the next
time.
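
(In command form, the idea would look roughly like this; untested, and
/dev/sde and /mnt/pool are placeholders:)

    # 1. compact mostly-empty data chunks first (only chunks below 30%
    #    usage are rewritten), so the later full balance has less to move
    btrfs balance start -dusage=30 /mnt/pool
    # 2. add the new disc to the RAID1 pool
    btrfs device add /dev/sde /mnt/pool
    # 3. full balance to spread the existing chunks over all discs
    btrfs balance start /mnt/pool
    # afterwards, check the result
    btrfs filesystem show /mnt/pool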

Thanks all for your tips, ideas and time!
	Frank



Thread overview: 21+ messages
2017-07-24 11:27 Best Practice: Add new device to RAID1 pool Cloud Admin
2017-07-24 13:46 ` Austin S. Hemmelgarn
2017-07-24 14:08   ` Roman Mamedov
2017-07-24 16:42     ` Cloud Admin
2017-07-24 14:12   ` Cloud Admin
2017-07-24 14:25     ` Austin S. Hemmelgarn
2017-07-24 16:40       ` Cloud Admin
2017-07-29 23:04         ` Cloud Admin [this message]
2017-07-31 11:52           ` Best Practice: Add new device to RAID1 pool (Summary) Austin S. Hemmelgarn
2017-07-24 20:35 ` Best Practice: Add new device to RAID1 pool Chris Murphy
2017-07-24 20:42   ` Hugo Mills
2017-07-24 20:55     ` Chris Murphy
2017-07-24 21:00       ` Hugo Mills
2017-07-24 21:17       ` Adam Borowski
2017-07-24 23:18         ` Chris Murphy
2017-07-25 17:56     ` Cloud Admin
2017-07-24 21:12   ` waxhead
2017-07-24 21:20     ` Chris Murphy
2017-07-25  2:22       ` Marat Khalili
2017-07-25  8:13         ` Chris Murphy
2017-07-25 17:46     ` Cloud Admin
