public inbox for linux-btrfs@vger.kernel.org
From: Qu Wenruo <quwenruo.btrfs@gmx.com>
To: j4nn <j4nn.xda@gmail.com>
Cc: linux-btrfs@vger.kernel.org
Subject: Re: clear space cache v1 fails with Unable to find block group for 0
Date: Mon, 9 Dec 2024 08:06:39 +1030	[thread overview]
Message-ID: <561aab35-007d-48d5-bf61-8cfc159cba28@gmx.com> (raw)
In-Reply-To: <CADhu7iBFDmRvBwWxxa4KszyZpyTq5JetB+a13jxGj4YBjaYWKQ@mail.gmail.com>



On 2024/12/9 07:55, j4nn wrote:
> On Sun, 8 Dec 2024 at 21:26, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>> On 2024/12/9 02:32, j4nn wrote:
>>> gentoo ~ # time btrfs rescue clear-space-cache v1 /dev/mapper/wdrb-bdata
>>> Unable to find block group for 0
>>> Unable to find block group for 0
>>> Unable to find block group for 0
>>
>> This is a common indicator of -ENOSPC.
>>
>> But according to the fi df output, we should have quite a lot of
>> metadata space left.
>>
>> The only concern is the DUP metadata, which may cause the space
>> reservation code not to work in progs.
>>
>> Have you tried to convert the DUP metadata first?
>
> I am not sure how to do that.
> I see the "Multiple block group profiles detected" warning, and assumed
> it is about metadata being in both RAID1 and DUP.
> But I am not sure how that got created, or whether it has any benefit.
> And what should that DUP be converted into?

Not sure either. But my guess is that at some point you mounted the
filesystem with one disk missing, and did some writes.
Those writes happened to allocate a new chunk, and since only one
writable disk was visible at the time, the new chunk went DUP.


To remove it, you need a specific balance filter, e.g.:

  # btrfs balance start -mprofiles=dup,convert=raid1 /mnt/data
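After the balance finishes, a quick check like the following (using the same mount point as above) should confirm the conversion; no output from the grep means no Metadata,DUP chunks remain:

```shell
# Verify the DUP metadata chunks are gone after the convert balance;
# if this prints nothing, only the RAID1 metadata profile is left.
btrfs filesystem usage /mnt/data | grep 'Metadata,DUP'
```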

>
>> And `btrfs fi usage` output please.
>
> gentoo ~ # btrfs fi usage /mnt/data
> Overall:
>     Device size:                  16.00TiB
>     Device allocated:             14.51TiB
>     Device unallocated:            1.48TiB
>     Device missing:                  0.00B
>     Device slack:                    0.00B
>     Used:                         14.18TiB
>     Free (estimated):            923.95GiB      (min: 923.95GiB)
>     Free (statfs, df):           918.95GiB
>     Data ratio:                       2.00
>     Metadata ratio:                   2.00
>     Global reserve:              512.00MiB      (used: 0.00B)
>     Multiple profiles:                 yes      (metadata)
>
> Data,RAID1: Size:7.19TiB, Used:7.03TiB (97.77%)
>    /dev/mapper/wdrb-bdata          7.19TiB
>    /dev/mapper/wdrc-cdata          7.19TiB
>
> Metadata,RAID1: Size:63.00GiB, Used:58.56GiB (92.95%)
>    /dev/mapper/wdrb-bdata         63.00GiB
>    /dev/mapper/wdrc-cdata         63.00GiB
>
> Metadata,DUP: Size:5.00GiB, Used:1.18GiB (23.60%)
>    /dev/mapper/wdrb-bdata         10.00GiB
>
> System,RAID1: Size:32.00MiB, Used:1.08MiB (3.37%)
>    /dev/mapper/wdrb-bdata         32.00MiB
>    /dev/mapper/wdrc-cdata         32.00MiB
>
> Unallocated:
>    /dev/mapper/wdrb-bdata        755.00GiB
>    /dev/mapper/wdrc-cdata        765.00GiB

You have more than enough space to remove the DUP chunks.

>
> gentoo ~ # lvs
>   LV     VG     Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
>   bdata  wdrb   -wi-ao----    8.00t
>   cdata  wdrc   -wi-ao----    8.00t
> gentoo ~ # vgs
>   VG     #PV #LV #SN Attr   VSize    VFree
>   wdrb     1   1   0 wz--n-   <9.10t <1.10t
>   wdrc     1   3   0 wz--n-    9.09t     0
> gentoo ~ # pvs
>   PV         VG     Fmt  Attr PSize    PFree
>   /dev/sdb1  wdrc   lvm2 a--     9.09t     0
>   /dev/sdd1  wdrb   lvm2 a--    <9.10t <1.10t
>
>
>>> Tried some balance as found example posted, not really sure if that should help:
>>>
>>> gentoo ~ # btrfs balance start -dusage=10 /mnt/data
>>> Done, had to relocate 32 out of 7467 chunks
>>
>> The balance doesn't do much; the overall chunk layout is still the same.
>>>
>>> gentoo ~ # time btrfs rescue clear-space-cache v1 /dev/mapper/wdrb-bdata
>>> Unable to find block group for 0
>>> Unable to find block group for 0
>>> Unable to find block group for 0
>>> ERROR: failed to clear free space cache
>>> extent buffer leak: start 7995086045184 len 16384
>>
>> Migrating to the v2 cache doesn't require manually clearing the v1
>> cache first.
>>
>> Just mounting with "space_cache=v2" option will automatically purge the
>> v1 cache, just as explained in the man page:
>>
>>     If v2 is enabled, the v1 space cache will be cleared (at the first
>>     mount)
>>
>> If you want to dig deeper, the implementation is done in
>> btrfs_set_free_space_cache_v1_active() which calls
>> cleanup_free_space_cache_v1() if @active is false.
>
> OK, I just followed a howto for the switch.
> I did not know the mount option alone is enough.
> Is it safe to try that even though I get the errors with "btrfs rescue
> clear-space-cache v1"?

Since btrfs-progs and the kernel have different implementations of the
space reservation code, it's not that rare for btrfs-progs to hit some
false alerts.

If the balance removed the DUP profile, then you can try "btrfs rescue"
again, just to see if it works; I'd really appreciate the extra feedback
to help debug the progs bug.

Otherwise I believe it should be pretty safe to just use the
"space_cache=v2" mount option.
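Concretely, that would look like the following (device and mount point taken from your fi usage output; note the free space tree is built at mount time, so a fresh mount rather than a remount is the safe way to enable it):

```shell
# Switch to the v2 space cache (free space tree); the kernel clears
# the v1 cache automatically on this first mount.
umount /mnt/data
mount -o space_cache=v2 /dev/mapper/wdrb-bdata /mnt/data

# The free space tree persists afterwards; to keep the option explicit,
# an /etc/fstab line could look like:
#   /dev/mapper/wdrb-bdata  /mnt/data  btrfs  space_cache=v2  0 0
```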

Thanks,
Qu
>
> Thank you.



Thread overview: 13+ messages
2024-12-08 16:02 clear space cache v1 fails with Unable to find block group for 0 j4nn
2024-12-08 20:26 ` Qu Wenruo
2024-12-08 21:25   ` j4nn
2024-12-08 21:36     ` Qu Wenruo [this message]
2024-12-08 22:19       ` j4nn
2024-12-08 22:32         ` Qu Wenruo
2024-12-08 22:50           ` j4nn
2024-12-16 18:52             ` j4nn
2024-12-17  5:31               ` j4nn
2024-12-17  5:50                 ` Qu Wenruo
2024-12-17  6:10                   ` j4nn
2024-12-17  6:34                     ` j4nn
2024-12-17  6:55                       ` Qu Wenruo
