linux-btrfs.vger.kernel.org archive mirror
* Healthy amount of free space?
@ 2018-07-16 20:58 Wolf
  2018-07-17  7:20 ` Nikolay Borisov
  2018-07-17 11:46 ` Austin S. Hemmelgarn
  0 siblings, 2 replies; 19+ messages in thread
From: Wolf @ 2018-07-16 20:58 UTC (permalink / raw)
  To: linux-btrfs


Greetings,
I would like to ask: what is a healthy amount of free space to keep on
each device for btrfs to be happy?

This is what my disk array currently looks like:

    [root@dennas ~]# btrfs fi usage /raid
    Overall:
        Device size:                  29.11TiB
        Device allocated:             21.26TiB
        Device unallocated:            7.85TiB
        Device missing:                  0.00B
        Used:                         21.18TiB
        Free (estimated):              3.96TiB      (min: 3.96TiB)
        Data ratio:                       2.00
        Metadata ratio:                   2.00
        Global reserve:              512.00MiB      (used: 0.00B)

    Data,RAID1: Size:10.61TiB, Used:10.58TiB
       /dev/mapper/data1       1.75TiB
       /dev/mapper/data2       1.75TiB
       /dev/mapper/data3     856.00GiB
       /dev/mapper/data4     856.00GiB
       /dev/mapper/data5       1.75TiB
       /dev/mapper/data6       1.75TiB
       /dev/mapper/data7       6.29TiB
       /dev/mapper/data8       6.29TiB

    Metadata,RAID1: Size:15.00GiB, Used:13.00GiB
       /dev/mapper/data1       2.00GiB
       /dev/mapper/data2       3.00GiB
       /dev/mapper/data3       1.00GiB
       /dev/mapper/data4       1.00GiB
       /dev/mapper/data5       3.00GiB
       /dev/mapper/data6       1.00GiB
       /dev/mapper/data7       9.00GiB
       /dev/mapper/data8      10.00GiB

    System,RAID1: Size:64.00MiB, Used:1.50MiB
       /dev/mapper/data2      32.00MiB
       /dev/mapper/data6      32.00MiB
       /dev/mapper/data7      32.00MiB
       /dev/mapper/data8      32.00MiB

    Unallocated:
       /dev/mapper/data1    1004.52GiB
       /dev/mapper/data2    1004.49GiB
       /dev/mapper/data3    1006.01GiB
       /dev/mapper/data4    1006.01GiB
       /dev/mapper/data5    1004.52GiB
       /dev/mapper/data6    1004.49GiB
       /dev/mapper/data7    1005.00GiB
       /dev/mapper/data8    1005.00GiB
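
(If I read these numbers right, the 3.96TiB "Free (estimated)" is roughly
the ~0.03TiB still free inside the existing data chunks plus half of the
7.85TiB unallocated, halved because the RAID1 data ratio is 2.00.)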

Btrfs does quite a good job of using space evenly across all devices. Now,
how low can I let that go? In other words, at how much remaining
free/unallocated space should I consider adding a new disk?
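
To make the question concrete: something like the sketch below could warn
once any device drops below some unallocated threshold, by parsing the
per-device "Unallocated:" lines from btrfs device usage -b. The 32GiB
placeholder in it is exactly the number I am unsure about:

    #!/bin/sh
    # Warn when any device in the filesystem has less unallocated space
    # than a chosen limit. 32GiB is a placeholder, not a recommendation.
    MOUNTPOINT=/raid
    THRESHOLD=$((32 * 1024 * 1024 * 1024))   # bytes

    btrfs device usage -b "$MOUNTPOINT" | awk -v limit="$THRESHOLD" '
        /, ID:/        { dev = $1; sub(/,$/, "", dev) }
        /Unallocated:/ {
            if ($2 + 0 < limit)
                printf "WARNING: %s has only %.2f GiB unallocated\n",
                       dev, $2 / (1024 * 1024 * 1024)
        }'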

Thanks for advice :)

W.

-- 
There are only two hard things in Computer Science:
cache invalidation, naming things and off-by-one errors.




Thread overview: 19+ messages
2018-07-16 20:58 Healthy amount of free space? Wolf
2018-07-17  7:20 ` Nikolay Borisov
2018-07-17  8:02   ` Martin Steigerwald
2018-07-17  8:16     ` Nikolay Borisov
2018-07-17 17:54       ` Martin Steigerwald
2018-07-18 12:35         ` Austin S. Hemmelgarn
2018-07-18 13:07           ` Chris Murphy
2018-07-18 13:30             ` Austin S. Hemmelgarn
2018-07-18 17:04               ` Chris Murphy
2018-07-18 17:06                 ` Austin S. Hemmelgarn
2018-07-18 17:14                   ` Chris Murphy
2018-07-18 17:40                     ` Chris Murphy
2018-07-18 18:01                       ` Austin S. Hemmelgarn
2018-07-18 21:32                         ` Chris Murphy
2018-07-18 21:47                           ` Chris Murphy
2018-07-19 11:21                           ` Austin S. Hemmelgarn
2018-07-20  5:01               ` Andrei Borzenkov
2018-07-20 11:36                 ` Austin S. Hemmelgarn
2018-07-17 11:46 ` Austin S. Hemmelgarn
