* Profile conversion - unexpected large allocation on target profile during conversion
@ 2024-05-30 12:36 Pedro Macedo
2024-05-30 17:47 ` Pedro Macedo
0 siblings, 1 reply; 2+ messages in thread
From: Pedro Macedo @ 2024-05-30 12:36 UTC (permalink / raw)
To: linux-btrfs
Hi folks,
I'm in the process of converting a few btrfs arrays from single to raid6
and noticed one behavior that seems unexpected: according to btrfs
filesystem usage, the reported total for the target profile is extremely
large compared to its used value during the conversion.
For example, on one filesystem this is the reported total/used while the
conversion is running - notice the ~2TiB allocation for ~377GiB of data:
Data,single: Size:40032.00GiB, Used:39982.40GiB (99.88%)
Data,RAID6: Size:2021.09GiB, Used:376.60GiB (18.63%)
Metadata,RAID1C4: Size:84.00GiB, Used:83.47GiB (99.37%)
System,RAID1C4: Size:0.03GiB, Used:0.00GiB (14.06%)
If I cancel the conversion and give it a few seconds, the vast majority
of the reported raid6 space is reclaimed (bg_reclaim_threshold is set to
a very high 75%, but I don't see the usual log messages about block
groups being reclaimed):
Data,single: Size:40023.00GiB, Used:39973.42GiB (99.88%)
Data,RAID6: Size:406.25GiB, Used:384.58GiB (94.67%)
Metadata,RAID1C4: Size:84.00GiB, Used:83.47GiB (99.37%)
System,RAID1C4: Size:0.03GiB, Used:0.00GiB (14.01%)
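For context, bg_reclaim_threshold here refers to the per-filesystem sysfs
knob under /sys/fs/btrfs/<UUID>/allocation/<type>/. A small helper to read
the data threshold might look like the sketch below; the path layout is my
understanding of the interface, and the extra sysfs-root argument exists
only so the sketch can be exercised without a mounted btrfs filesystem:

```shell
#!/bin/sh
# Read the data bg_reclaim_threshold for a filesystem identified by UUID.
# Block groups whose usage falls below this percentage become candidates
# for automatic reclaim. The second argument overrides the sysfs root
# (defaults to /sys) purely so the sketch is testable.
show_reclaim_threshold() {
    uuid="$1"
    sysfs="${2:-/sys}"
    cat "$sysfs/fs/btrfs/$uuid/allocation/data/bg_reclaim_threshold"
}
```

Invoked as e.g. `show_reclaim_threshold <UUID>`; metadata and system have
their own bg_reclaim_threshold files alongside the data one.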
Is this over-allocation during conversion expected or known? This is on
kernel 6.8.9; I only really noticed this because one of the filesystems
failed the conversion with ENOSPC even though there should be plenty of
space. For now I'm working around the ENOSPC issue on the smaller array
by using a loop with dconvert=raid6,limit=100 followed by a 30s sleep.
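For what it's worth, the workaround loop looks roughly like this (a
sketch only: the mount point is a placeholder, and the termination check
via btrfs filesystem df is my own addition):

```shell
#!/bin/sh
# Sketch of the batching workaround (mount point is a placeholder).
# Each pass converts at most 100 data block groups to raid6, then sleeps
# so that transiently over-allocated raid6 block groups can be reclaimed
# before the next batch.
convert_in_batches() {
    mnt="$1"
    # Keep going while any data chunks still use the 'single' profile.
    while btrfs filesystem df "$mnt" | grep -q '^Data, single'; do
        btrfs balance start -dconvert=raid6,limit=100 "$mnt" || return 1
        sleep 30
    done
}
```

Invoked as `convert_in_batches /mnt/array`; the limit=100 filter caps
each balance pass at 100 block groups.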
Thanks,
Pedro Macedo
^ permalink raw reply [flat|nested] 2+ messages in thread
* Re: Profile conversion - unexpected large allocation on target profile during conversion
2024-05-30 12:36 Profile conversion - unexpected large allocation on target profile during conversion Pedro Macedo
@ 2024-05-30 17:47 ` Pedro Macedo
0 siblings, 0 replies; 2+ messages in thread
From: Pedro Macedo @ 2024-05-30 17:47 UTC (permalink / raw)
To: linux-btrfs
On 2024-05-30 2:36 PM, Pedro Macedo wrote:
> Is this over-allocation during conversion expected or known? This is
> on kernel 6.8.9; I only really noticed this because one of the
> filesystems failed the conversion with ENOSPC even though there should
> be plenty of space. For now I'm working around the ENOSPC issue on the
> smaller array by using a loop with dconvert=raid6,limit=100 followed
> by a 30s sleep.
And to add to the oddness of this conversion: the workaround has now hit
ENOSPC itself, even though allocation should technically still be
possible, as more than 4 disks have free space (though only 3 have equal
amounts of free space, which I'm guessing is what triggers the ENOSPC):
Unallocated:
/dev/mapper/evg--1 0.00GiB
/dev/mapper/evg--2 0.00GiB
/dev/mapper/evg--3 0.00GiB
/dev/mapper/evg--4 0.00GiB
/dev/mapper/evg--5 0.43GiB
/dev/mapper/evg--6 51.95GiB
/dev/mapper/evg--7 127.95GiB
/dev/mapper/evg--8 127.95GiB
/dev/mapper/evg--9 127.95GiB
/dev/mapper/evg--10 95.92GiB
/dev/mapper/evg--11 100.95GiB
/dev/mapper/evg--12 105.95GiB
/dev/mapper/evg--13 33.92GiB
/dev/mapper/evg--14 0.00GiB
/dev/mapper/evg--15 0.00GiB
However, if I now run a balance with -dprofiles=single, I can clearly
see data being converted to raid6 with no errors, which is extra
confusing - perhaps a different code path is being used?
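If it helps anyone reproduce, the filter combination I'd expect for that
last step is something like the sketch below (mount point is a
placeholder; per my reading of the balance filters, profiles=single
restricts the balance to block groups still in the single profile while
convert=raid6 rewrites them):

```shell
#!/bin/sh
# Sketch: convert only the block groups whose current data profile is
# 'single', leaving already-converted raid6 chunks untouched.
# The mount point argument is a placeholder.
convert_remaining_single() {
    btrfs balance start -dconvert=raid6,profiles=single "$1"
}
```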
Thanks,
Pedro Macedo