linux-btrfs.vger.kernel.org archive mirror
From: Cloud Admin <admin@cloud.haefemeier.eu>
To: linux-btrfs@vger.kernel.org
Subject: Re: Best Practice: Add new device to RAID1 pool
Date: Mon, 24 Jul 2017 18:40:57 +0200
Message-ID: <1500914457.2781.10.camel@cloud.haefemeier.eu>
In-Reply-To: <4b768b97-2c0b-bf67-dec0-1c5a1f7181b2@gmail.com>

On Monday, 2017-07-24 at 10:25 -0400, Austin S. Hemmelgarn wrote:
> On 2017-07-24 10:12, Cloud Admin wrote:
> > On Monday, 2017-07-24 at 09:46 -0400, Austin S. Hemmelgarn wrote:
> > > On 2017-07-24 07:27, Cloud Admin wrote:
> > > > Hi,
> > > > I have a multi-device pool (three disks) as RAID1. Now I want to
> > > > add a new disk to grow the pool. I followed the description at
> > > > https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices
> > > > and used 'btrfs device add <device> <btrfs path>'. After that I
> > > > started a balance to redistribute the RAID1 data, using 'btrfs
> > > > balance start <btrfs path>'.
> > > > Is that all, or do I also need to call a resize (for example) or
> > > > anything else? Or do I need to specify filter/profile parameters
> > > > for balancing?
> > > > I am a little bit confused because the balance command has been
> > > > running for 12 hours and only 3 GB of data have been touched.
> > > > This would mean the whole balance (the new disk has 8 TB) will
> > > > run for a long, long time... and it is using one CPU at 100%.
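For reference, the sequence in question as a minimal sketch (/dev/sde
and /mnt are hypothetical placeholders for the real device and mount
point):

    # make the new disk a member of the mounted pool
    btrfs device add /dev/sde /mnt
    # spread the existing raid1 chunks across all devices;
    # no convert filter is needed since the profile stays raid1
    btrfs balance start /mnt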
> > > 
> > > Based on what you're saying, it sounds like you've either run into
> > > a bug, or have a huge number of snapshots on this filesystem.
> > 
> > It depends on what you define as huge. Calling 'btrfs sub list
> > <btrfs path>' returns a list of 255 subvolumes.
> 
> OK, this isn't horrible, especially if most of them aren't snapshots
> (it's cross-subvolume reflinks that are most of the issue when it
> comes to snapshots, not the fact that they're subvolumes).
> > I think this is not too huge. Most of these subvolumes were created
> > by Docker itself. I cancelled the balance (this will take a while)
> > and will try to delete some of these subvolumes/snapshots.
> > What more can I do?
> 
> As Roman mentioned in his reply, it may also be qgroup related. If
> you run:
> btrfs quota disable <btrfs path>
It seems quota was one part of it. Thanks for the tip. I disabled
quotas and started the balance again.
Now roughly one chunk is relocated every 5 minutes. But if I take the
reported 10860 chunks and do the math (10860 chunks x 5 min is about
54300 minutes), the balance would take ~37 days to finish... So it
seems I have to invest more time in figuring out the subvolume and
snapshot structure created by Docker.
A first deeper look shows there is a subvolume with a snapshot, which
itself has a snapshot, and so forth.
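To map that chain, listing only snapshots together with their UUIDs
and parent UUIDs should help (a sketch; /mnt again stands in for the
real mount point):

    # -s: only snapshots, -u: print each snapshot's own UUID,
    # -q: print the parent UUID it was snapshotted from
    btrfs subvolume list -s -u -q /mnt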
> 
> On the filesystem in question, that may help too, and if you are
> using quotas, turning them off with that command will get you a much
> bigger performance improvement than removing all the snapshots.
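A quick way to double-check that quotas are really off afterwards (a
sketch, with /mnt as a placeholder):

    # once quotas are disabled, this should fail with an error
    # instead of printing per-qgroup usage
    btrfs qgroup show /mnt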


Thread overview: 21+ messages
2017-07-24 11:27 Best Practice: Add new device to RAID1 pool Cloud Admin
2017-07-24 13:46 ` Austin S. Hemmelgarn
2017-07-24 14:08   ` Roman Mamedov
2017-07-24 16:42     ` Cloud Admin
2017-07-24 14:12   ` Cloud Admin
2017-07-24 14:25     ` Austin S. Hemmelgarn
2017-07-24 16:40       ` Cloud Admin [this message]
2017-07-29 23:04         ` Best Practice: Add new device to RAID1 pool (Summary) Cloud Admin
2017-07-31 11:52           ` Austin S. Hemmelgarn
2017-07-24 20:35 ` Best Practice: Add new device to RAID1 pool Chris Murphy
2017-07-24 20:42   ` Hugo Mills
2017-07-24 20:55     ` Chris Murphy
2017-07-24 21:00       ` Hugo Mills
2017-07-24 21:17       ` Adam Borowski
2017-07-24 23:18         ` Chris Murphy
2017-07-25 17:56     ` Cloud Admin
2017-07-24 21:12   ` waxhead
2017-07-24 21:20     ` Chris Murphy
2017-07-25  2:22       ` Marat Khalili
2017-07-25  8:13         ` Chris Murphy
2017-07-25 17:46     ` Cloud Admin
