Linux Btrfs filesystem development
* btrfs raid10 rebalance questions
       [not found] <CAOv4OrX7kxTMrpE+AdqWo+PCsAGpBkrJ9irr9Xj8ZcRrPTvRoA@mail.gmail.com>
@ 2023-05-22 13:02 ` Todor Ivanov
  2023-05-31 12:48   ` DanglingPointer
  0 siblings, 1 reply; 2+ messages in thread
From: Todor Ivanov @ 2023-05-22 13:02 UTC (permalink / raw)
  To: linux-btrfs

[-- Attachment #1: Type: text/plain, Size: 1195 bytes --]

     Hello,

     We have a Debian 10 system with 6x16TB drives in btrfs RAID10.
In the past we hit an out-of-space issue, which we resolved with a
data rebalance, but some questions were left unanswered:

https://unix.stackexchange.com/questions/743528/btrfs-snapshot-fails-with-no-space-left

We would be very happy if you could answer, or give guidelines on,
at least the following:

- How often should we run btrfs balance? Trying to apply some logic,
https://docs.nvidia.com/networking-ethernet-software/knowledge-base/Configuration-and-Usage/Storage/When-to-Rebalance-BTRFS-Partitions/
looks like a good example, but it does not cover RAID10. How do we
calculate the chunk size correctly, and should we adjust the Device
Size because of data duplication?
- Is it dangerous to rebalance metadata as well, and should we, given
that we use btrfs-progs v4.20.1, kernel 4.19.0-16-amd64 and btrfs
raid10? What is an optimal value for musage?
- What does it mean when "btrfs fi us" shows a lot of "Unallocated"
space and yet we still ran into the out-of-space issue (probably on
metadata, during a subvolume snapshot)? Why isn't metadata expanding
into that unallocated space automatically?
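
For reference, the balance "usage" filters asked about above take a
percentage threshold, and RAID10's duplication halves usable
capacity. A minimal sketch, assuming a placeholder mount point of
/mnt/pool and the 6x16TB layout described in this thread:

```shell
# Sketch only: /mnt/pool is a placeholder mount point.
# Compact data chunks that are at most 50% full; cheap enough to run
# periodically, or whenever unallocated space shrinks noticeably:
#   btrfs balance start -dusage=50 /mnt/pool
#
# Metadata chunks take the analogous -musage filter; small values
# (10-20) rewrite only nearly-empty chunks:
#   btrfs balance start -musage=15 /mnt/pool
#
# Inspect allocation per profile afterwards:
#   btrfs fi usage /mnt/pool

# RAID10 keeps two copies of every chunk, so usable capacity is
# roughly half the raw device total:
raw_tb=$((6 * 16))          # six 16 TB devices
usable_tb=$((raw_tb / 2))   # two copies per chunk under RAID10
echo "usable ~ ${usable_tb} TB"
```

The balance commands are left commented out because they rewrite
chunks on a live filesystem; the arithmetic at the end illustrates
why the reported Device Size should be read as roughly double the
usable space under RAID10.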


Kind regards,
Todor

[-- Attachment #2: machine_details.tar.gz --]
[-- Type: application/gzip, Size: 81267 bytes --]

^ permalink raw reply	[flat|nested] 2+ messages in thread

* Re: btrfs raid10 rebalance questions
  2023-05-22 13:02 ` btrfs raid10 rebalance questions Todor Ivanov
@ 2023-05-31 12:48   ` DanglingPointer
  0 siblings, 0 replies; 2+ messages in thread
From: DanglingPointer @ 2023-05-31 12:48 UTC (permalink / raw)
  To: Todor Ivanov, linux-btrfs

Hi Todor,

Have you tried looking at the new documentation? 
https://btrfs.readthedocs.io/en/latest/

Could someone respond to Todor?  Many have similar questions and 
experiences.  Thanks in advance!


On 22/5/23 23:02, Todor Ivanov wrote:
>       Hello,
>
>       We have a Debian 10 system with 6x16TB drives in btrfs RAID10.
> In the past we hit an out-of-space issue, which we resolved with a
> data rebalance, but some questions were left unanswered:
>
> https://unix.stackexchange.com/questions/743528/btrfs-snapshot-fails-with-no-space-left
>
> We would be very happy if you could answer, or give guidelines on,
> at least the following:
>
> - How often should we run btrfs balance? Trying to apply some logic,
> https://docs.nvidia.com/networking-ethernet-software/knowledge-base/Configuration-and-Usage/Storage/When-to-Rebalance-BTRFS-Partitions/
> looks like a good example, but it does not cover RAID10. How do we
> calculate the chunk size correctly, and should we adjust the Device
> Size because of data duplication?
> - Is it dangerous to rebalance metadata as well, and should we, given
> that we use btrfs-progs v4.20.1, kernel 4.19.0-16-amd64 and btrfs
> raid10? What is an optimal value for musage?
> - What does it mean when "btrfs fi us" shows a lot of "Unallocated"
> space and yet we still ran into the out-of-space issue (probably on
> metadata, during a subvolume snapshot)? Why isn't metadata expanding
> into that unallocated space automatically?
>
>
> Kind regards,
> Todor


end of thread, other threads:[~2023-05-31 12:50 UTC | newest]

Thread overview: 2+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <CAOv4OrX7kxTMrpE+AdqWo+PCsAGpBkrJ9irr9Xj8ZcRrPTvRoA@mail.gmail.com>
2023-05-22 13:02 ` btrfs raid10 rebalance questions Todor Ivanov
2023-05-31 12:48   ` DanglingPointer
