From: kjansen387 <kjansen387@gmail.com>
To: Zygo Blaxell <ce3g8jdj@umail.furryterror.org>
Cc: linux-btrfs@vger.kernel.org
Subject: Re: btrfs freezing on writes
Date: Sat, 11 Apr 2020 21:46:43 +0200 [thread overview]
Message-ID: <a4c5fcd2-c6cd-71e2-560e-9c7290e0c47d@gmail.com> (raw)
In-Reply-To: <20200409230724.GM2693@hungrycats.org>
I have tried to rebalance the metadata.
Starting point:
# btrfs fi usage /storage
Overall:
    Device size:              10.92TiB
    Device allocated:          7.45TiB
    Device unallocated:        3.47TiB
    Device missing:              0.00B
    Used:                      7.35TiB
    Free (estimated):          1.78TiB    (min: 1.78TiB)
    Data ratio:                   2.00
    Metadata ratio:               2.00
    Global reserve:          512.00MiB    (used: 0.00B)

Data,RAID1: Size:3.72TiB, Used:3.67TiB (98.74%)
    /dev/sdc        2.81TiB
    /dev/sdb        2.81TiB
    /dev/sda     1017.00GiB
    /dev/sdd      840.00GiB

Metadata,RAID1: Size:6.00GiB, Used:5.09GiB (84.86%)
    /dev/sdc        3.00GiB
    /dev/sdb        3.00GiB
    /dev/sda        1.00GiB
    /dev/sdd        5.00GiB

System,RAID1: Size:32.00MiB, Used:608.00KiB (1.86%)
    /dev/sdb       32.00MiB
    /dev/sdd       32.00MiB

Unallocated:
    /dev/sdc      845.02GiB
    /dev/sdb      845.99GiB
    /dev/sda      845.02GiB
    /dev/sdd     1017.99GiB
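(As a sanity check on how that output hangs together: with a data ratio of 2.00, the "Free (estimated)" line is roughly the slack inside the data block groups plus half the unallocated space. A back-of-the-envelope check in Python, using the rounded figures above:)

```python
# Numbers copied from the `btrfs fi usage` output above.
unallocated_gib = [845.02, 845.99, 845.02, 1017.99]  # sdc, sdb, sda, sdd
data_size_tib, data_used_tib = 3.72, 3.67

# With RAID1 (data ratio 2.00), every unallocated byte stores half a byte
# of user data, so:  free ~= data slack + unallocated / 2
free_est_tib = (data_size_tib - data_used_tib) + sum(unallocated_gib) / 1024 / 2
# ~= 1.78 TiB, matching the "Free (estimated)" line
```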
I did:
# btrfs fi resize 4:-2g /storage/
# btrfs balance start -mdevid=4 /storage
# btrfs fi resize 4:max /storage/
but the distribution of metadata ended up like before.
I also tried (to match the free space of the other disks):
# btrfs fi resize 4:-172g /storage/
# btrfs balance start -mdevid=4 /storage
# btrfs fi resize 4:max /storage/
Again, the metadata ended up distributed just as before.

Any other tips for rebalancing the metadata?
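(The -172g figure comes from matching devid 4's free space to the roomiest of the other disks, using the unallocated numbers from the usage output at the top of this mail. A quick sketch of that arithmetic, with device names as shown in that output:)

```python
# Unallocated GiB per device, from `btrfs fi usage` above; sdd is devid 4.
unallocated = {"sdc": 845.02, "sdb": 845.99, "sda": 845.02, "sdd": 1017.99}

# Shrink devid 4 until its free space matches the largest of the others:
others = [gib for dev, gib in unallocated.items() if dev != "sdd"]
shrink_gib = unallocated["sdd"] - max(others)  # ~= 172 GiB -> `resize 4:-172g`
```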
On 10-Apr-20 01:07, Zygo Blaxell wrote:
> On Thu, Apr 09, 2020 at 11:53:00PM +0200, kjansen387 wrote:
>> btrfs fi resize 1:-1g /export; # Assuming 4GB metadata
>> btrfs fi resize 2:-2g /export; # Assuming 5GB metadata
>
> Based on current data, yes; however, it's possible that the device remove
> you are already running might balance the metadata as a side-effect.
> Redo the math with the values you get after the device remove is done.
> You may not need to balance anything.
>
>> btrfs balance start -mdevid=1 /export; # Why only devid 1, and not 2 ?
>
> We want balance to relocate metadata block groups that are on both
> devids 1 and 2, i.e. the BG has a chunk on both drives at the same time.
> Balance filters only allow one devid to be specified, but in this case
> 'devid=1' or 'devid=2' is close enough. All we want to do here is filter
> out block groups where one mirror chunk is already on devid 3, 4, or 5,
> since that would just place the metadata somewhere else on the same disks.
>
>> btrfs fi resize 1:max /export;
>> btrfs fi resize 2:max /export;
>>
>> Thanks!
>>
>>
Thread overview: 13+ messages
2020-04-07 19:46 btrfs freezing on writes kjansen387
2020-04-07 20:11 ` Chris Murphy
2020-04-07 20:22 ` Chris Murphy
2020-04-07 20:39 ` kjansen387
2020-04-07 22:11 ` Chris Murphy
2020-04-07 22:30 ` kjansen387
2020-04-09 4:32 ` Zygo Blaxell
2020-04-09 21:53 ` kjansen387
2020-04-09 23:07 ` Zygo Blaxell
2020-04-11 19:46 ` kjansen387 [this message]
2020-04-11 19:59 ` Chris Murphy
2020-04-11 20:21 ` Zygo Blaxell
2020-04-15 11:14 ` kjansen387