From: Qu Wenruo <quwenruo.btrfs@gmx.com>
To: Tomasz Chmielewski <tch@virtall.com>, linux-btrfs@vger.kernel.org
Subject: Re: [REGRESSION] Super slow balance in v5.0-rc1
Date: Mon, 14 Jan 2019 20:34:59 +0800 [thread overview]
Message-ID: <2c1782e8-b68a-4992-84e8-adeef34ca688@gmx.com> (raw)
In-Reply-To: <7c6234aaeaf813400da5ff1e68f8c898@virtall.com>
On 2019/1/14 7:47 PM, Tomasz Chmielewski wrote:
>> When rebasing my qgroup + balance optimization patches, I found one very
>> obvious performance regression for balance.
>>
>> For normal 4G subvolume, 16 snapshots, balance workload, v4.20 kernel
>> only takes 3s to relocate a metadata block group, while for v5.0-rc1, I
>> don't really know how it will take as it hasn't finished yet.
>
> Are you sure it's v5.0-rc1 regression, not earlier?
At least for what I'm describing, yes. The bug was introduced in
v5.0-rc1 and has already been pinned down.

So I'm afraid you're hitting a different bug.
>
> I'm trying to do a metadata-only balance from RAID-5 to RAID-1, with
> 4.19.8.
>
> It was going relatively "normal", until it got stuck and showing no
> progress.
>
> I've canceled the balance, upgraded to 4.20, and started the balance
> again. For 11 straight days, it rewrote terabytes of data on the disks,
> with no progress at all.
>
> Also, 4.19.8 had a balance interrupted because of "out of space", even
> though we have terabytes free.
>
> Metadata RAID-5 usage stays at 4.12GiB for the past 11 days (and a few
> more days with 4.19.8).
Are you using qgroups? That is another common cause of slow balance.
Thanks,
Qu
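
For anyone hitting the same symptom, the checks discussed above can be
sketched as a dry run. The mount point /data is taken from the report;
the btrfs invocations are standard btrfs-progs commands, echoed here
rather than executed so the sketch is safe to run anywhere:

```shell
# Dry-run sketch of the diagnostic steps discussed in this thread.
# MNT is the mount point from the report; adjust for your system.
MNT=/data

# 1. Check whether qgroups are enabled -- qgroups are a known cause of
#    very slow balance on snapshot-heavy filesystems:
echo "btrfs qgroup show $MNT"

# 2. If qgroups are not needed, disabling quotas avoids that overhead:
echo "btrfs quota disable $MNT"

# 3. The metadata-only RAID5 -> RAID1 convert the reporter attempted;
#    the 'soft' modifier skips chunks already in the target profile:
echo "btrfs balance start -mconvert=raid1,soft $MNT"
```

Running the qgroup check first matters: if `btrfs qgroup show` reports
quotas enabled, balance time on a filesystem with many snapshots can be
dominated by qgroup accounting rather than the relocation itself.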
>
>
>
> # btrfs fi usage /data
> WARNING: RAID56 detected, not implemented
> WARNING: RAID56 detected, not implemented
> Overall:
> Device size: 14.47TiB
> Device allocated: 112.06GiB
> Device unallocated: 14.36TiB
> Device missing: 0.00B
> Used: 107.93GiB
> Free (estimated): 0.00B (min: 8.00EiB)
> Data ratio: 0.00
> Metadata ratio: 1.64
> Global reserve: 512.00MiB (used: 1.86MiB)
>
> Data,RAID5: Size:5.28TiB, Used:3.04TiB
> /dev/sda5 1.76TiB
> /dev/sdb5 1.76TiB
> /dev/sdc5 1.76TiB
> /dev/sdd5 1.76TiB
>
> Metadata,RAID1: Size:56.00GiB, Used:53.97GiB
> /dev/sda5 29.00GiB
> /dev/sdb5 27.00GiB
> /dev/sdc5 27.00GiB
> /dev/sdd5 29.00GiB
>
> Metadata,RAID5: Size:12.38GiB, Used:11.13GiB
> /dev/sda5 4.12GiB
> /dev/sdb5 4.12GiB
> /dev/sdc5 4.12GiB
> /dev/sdd5 4.12GiB
>
> System,RAID1: Size:32.00MiB, Used:416.00KiB
> /dev/sdb5 32.00MiB
> /dev/sdc5 32.00MiB
>
> Unallocated:
> /dev/sda5 1.83TiB
> /dev/sdb5 1.83TiB
> /dev/sdc5 1.83TiB
> /dev/sdd5 1.83TiB
>
>
> # btrfs balance status /data
> Balance on '/data' is running
> 13 out of about 64 chunks balanced (15 considered), 80% left
Thread overview: 6+ messages
2019-01-14 11:47 [REGRESSION] Super slow balance in v5.0-rc1 Tomasz Chmielewski
2019-01-14 12:34 ` Qu Wenruo [this message]
2019-01-14 13:06 ` Tomasz Chmielewski
-- strict thread matches above, loose matches on Subject: below --
2019-01-14 5:39 Qu Wenruo
2019-01-14 9:35 ` David Sterba
2019-01-14 10:00 ` Qu Wenruo