linux-btrfs.vger.kernel.org archive mirror
From: Cloud Admin <admin@cloud.haefemeier.eu>
To: linux-btrfs@vger.kernel.org
Subject: Re: Best Practice: Add new device to RAID1 pool
Date: Mon, 24 Jul 2017 16:12:46 +0200	[thread overview]
Message-ID: <1500905566.2781.8.camel@cloud.haefemeier.eu> (raw)
In-Reply-To: <18d96f74-00a8-9bb9-dddc-f50940d0971e@gmail.com>

On Monday, 2017-07-24 at 09:46 -0400, Austin S. Hemmelgarn wrote:
> On 2017-07-24 07:27, Cloud Admin wrote:
> > Hi,
> > I have a multi-device pool (three discs) as RAID1. Now I want to
> > add a new disc to increase the pool. I followed the description on
> > https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices
> > and used 'btrfs device add <device> <btrfs path>'. After that I
> > called a balance to rebalance the RAID1 using
> > 'btrfs balance start <btrfs path>'.
> > Is that everything, or do I also need to call a resize (for
> > example) or anything else? Or do I need to specify filter/profile
> > parameters for balancing?
> > I am a little bit confused because the balance command has been
> > running for 12 hours and only 3GB of data have been touched. This
> > would mean the whole balance process (the new disc has 8TB) would
> > run for a long, long time... and it is using one CPU at 100%.
> 
> Based on what you're saying, it sounds like you've either run into a 
> bug, or have a huge number of snapshots on this filesystem.  

It depends on what you define as huge. 'btrfs sub list <btrfs path>'
returns a list of 255 subvolumes.
I do not think that is too huge. Most of these subvolumes were created
by Docker itself. I am cancelling the balance (this will take a while)
and will then try to delete some of these subvolumes/snapshots.
What more can I do?
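
For reference, this is roughly what I plan to run; the mount point
/mnt/pool and the subvolume path are only placeholders for my actual
setup:

  # stop the currently running balance (may take a while to return)
  btrfs balance cancel /mnt/pool

  # count the subvolumes/snapshots currently on the filesystem
  btrfs subvolume list /mnt/pool | wc -l

  # delete a Docker subvolume/snapshot that is no longer needed
  btrfs subvolume delete /mnt/pool/<subvolume>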

> What you described is exactly what you should be doing when expanding
> an array (add the device, then run a full balance).  The fact that
> it's taking this long isn't normal, unless you have very slow storage
> devices.
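
For completeness, the sequence I used to expand the pool looked roughly
like the following; /dev/sdX and /mnt/pool are placeholders for the
actual device and mount point:

  # add the new disc to the existing RAID1 pool
  btrfs device add /dev/sdX /mnt/pool

  # run a full balance so existing data is spread over all devices
  btrfs balance start /mnt/pool

  # check progress of the running balance from another shell
  btrfs balance status /mnt/pool

  # verify how data and metadata are allocated across the devices
  btrfs filesystem usage /mnt/pool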


Thread overview: 21+ messages
2017-07-24 11:27 Best Practice: Add new device to RAID1 pool Cloud Admin
2017-07-24 13:46 ` Austin S. Hemmelgarn
2017-07-24 14:08   ` Roman Mamedov
2017-07-24 16:42     ` Cloud Admin
2017-07-24 14:12   ` Cloud Admin [this message]
2017-07-24 14:25     ` Austin S. Hemmelgarn
2017-07-24 16:40       ` Cloud Admin
2017-07-29 23:04         ` Best Practice: Add new device to RAID1 pool (Summary) Cloud Admin
2017-07-31 11:52           ` Austin S. Hemmelgarn
2017-07-24 20:35 ` Best Practice: Add new device to RAID1 pool Chris Murphy
2017-07-24 20:42   ` Hugo Mills
2017-07-24 20:55     ` Chris Murphy
2017-07-24 21:00       ` Hugo Mills
2017-07-24 21:17       ` Adam Borowski
2017-07-24 23:18         ` Chris Murphy
2017-07-25 17:56     ` Cloud Admin
2017-07-24 21:12   ` waxhead
2017-07-24 21:20     ` Chris Murphy
2017-07-25  2:22       ` Marat Khalili
2017-07-25  8:13         ` Chris Murphy
2017-07-25 17:46     ` Cloud Admin
