* Increased disk usage after deduplication and system running out of memory
@ 2016-11-24 14:00 Niccolò Belli
2016-11-24 23:11 ` Zygo Blaxell
From: Niccolò Belli @ 2016-11-24 14:00 UTC (permalink / raw)
To: linux-btrfs
Hi,
I use snapper, so I have plenty of snapshots in my btrfs partition and most
of my data is already deduplicated because of that.
Since I ran an offline defragmentation once, a long time ago (I didn't know
back then that defragmentation unshares extents), I wanted to run an offline
deduplication to free a couple of GBs.
This is the script I use to stop snapper, set snapshots to rw, balance,
deduplicate, etc: https://paste.pound-python.org/show/vPUGVNjPQbDvr4HbtMgs/
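A rough sketch of that kind of script (assuming snapper's systemd timer
units, the usual /.snapshots/<n>/snapshot layout, and hypothetical paths;
the actual script is in the paste above) could look like this:

    #!/bin/bash
    set -e
    # stop snapper so no new snapshots appear while the existing ones are writable
    systemctl stop snapper-timeline.timer snapper-cleanup.timer

    # make the read-only snapshots writable so dedup can act on them
    for snap in /.snapshots/*/snapshot; do
        btrfs property set -ts "$snap" ro false
    done

    # compact partially filled block groups
    btrfs balance start -dusage=50 -musage=50 /

    # offline dedup across the live filesystem and the snapshots
    duperemove -dr /home /.snapshots

    # restore the read-only flags and restart snapper
    for snap in /.snapshots/*/snapshot; do
        btrfs property set -ts "$snap" ro true
    done
    systemctl start snapper-timeline.timer snapper-cleanup.timer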
$ cat after_balance
Overall:
    Device size:                 152.36GiB
    Device allocated:            136.00GiB
    Device unallocated:           16.35GiB
    Device missing:                  0.00B
    Used:                        133.97GiB
    Free (estimated):             17.17GiB      (min: 17.17GiB)
    Data ratio:                       1.00
    Metadata ratio:                   1.00
    Global reserve:              239.94MiB      (used: 0.00B)

Data,single: Size:133.00GiB, Used:132.18GiB
   /dev/mapper/cryptroot        133.00GiB

Metadata,single: Size:3.00GiB, Used:1.79GiB
   /dev/mapper/cryptroot          3.00GiB

System,single: Size:3.00MiB, Used:16.00KiB
   /dev/mapper/cryptroot          3.00MiB

Unallocated:
   /dev/mapper/cryptroot         16.35GiB
$ cat after_duperemove_and_balance
Overall:
    Device size:                 152.36GiB
    Device allocated:            136.03GiB
    Device unallocated:           16.33GiB
    Device missing:                  0.00B
    Used:                        133.81GiB
    Free (estimated):             16.55GiB      (min: 16.55GiB)
    Data ratio:                       1.00
    Metadata ratio:                   1.00
    Global reserve:              512.00MiB      (used: 0.00B)

Data,single: Size:127.00GiB, Used:126.77GiB
   /dev/mapper/cryptroot        127.00GiB

Metadata,single: Size:9.00GiB, Used:7.03GiB
   /dev/mapper/cryptroot          9.00GiB

System,single: Size:32.00MiB, Used:16.00KiB
   /dev/mapper/cryptroot         32.00MiB

Unallocated:
   /dev/mapper/cryptroot         16.33GiB
As you can see, it freed 5.41 GiB of data, but it also added 5.24 GiB of
metadata. The estimated free space is now 16.55 GiB, whereas before the
deduplication it was 17.17 GiB.
This is when running duperemove from git with noblock, but almost nothing
changes if I omit it (it defaults to block). Why did my metadata increase by
a factor of 4? 99% of my data already had shared extents because of
snapshots, so why such a huge increase?
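For reference, the block vs. noblock behaviour mentioned above is selected
through duperemove's --dedupe-options switch; a hypothetical invocation of
the kind used here (paths assumed) would be:

    # dedupe in noblock mode; dropping --dedupe-options falls back to the default (block)
    duperemove -dr --dedupe-options=noblock /home /.snapshots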
Deduplication didn't reach 100%, because duperemove was killed by the OOM
killer at 99%:
https://paste.pound-python.org/show/yUcIOSzXcrfNPkF9rV2L/
As you can see from dmesg
(https://paste.pound-python.org/show/eZIkpxUU6QR9ij6Rn1Oq/), no single
process is using that much memory (my system has 8GB): the biggest one only
uses about 700MB of virtual memory.
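For what it's worth, duperemove also has a --hashfile option that keeps the
block hashes in an on-disk database instead of in RAM, which is the usual
way to bound its memory footprint on large data sets (the paths below are
hypothetical):

    # store hashes on disk rather than in memory
    duperemove -dr --hashfile=/var/tmp/dedup.hash /home /.snapshots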
Another strange thing you can see in the previous log is that it tries to
deduplicate /home/niko/nosnap/rootfs/@images/fedora25.qcow2, which is a
UNIQUE file. That image is stored in a separate subvolume because I don't
want it to be snapshotted, so I'm pretty sure there are no other copies of
it, yet duperemove still tries to deduplicate it.
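How much of a file is actually shared can be checked with btrfs filesystem
du (available in reasonably recent btrfs-progs), which reports exclusive
versus shared bytes per file; for the image above that would be:

    # 'Exclusive' close to 'Total' means the file's extents are not shared
    btrfs filesystem du -s /home/niko/nosnap/rootfs/@images/fedora25.qcow2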
Niccolò Belli
* Re: Increased disk usage after deduplication and system running out of memory
2016-11-24 14:00 Increased disk usage after deduplication and system running out of memory Niccolò Belli
@ 2016-11-24 23:11 ` Zygo Blaxell
From: Zygo Blaxell @ 2016-11-24 23:11 UTC (permalink / raw)
To: Niccolò Belli; +Cc: linux-btrfs
On Thu, Nov 24, 2016 at 03:00:26PM +0100, Niccolò Belli wrote:
> Hi,
> I use snapper, so I have plenty of snapshots in my btrfs partition and most
> of my data is already deduplicated because of that.
> Since I ran an offline defragmentation once, a long time ago (I didn't know
> back then that defragmentation unshares extents), I wanted to run an offline
> deduplication to free a couple of GBs.
>
> This is the script I use to stop snapper, set snapshots to rw, balance,
> deduplicate, etc: https://paste.pound-python.org/show/vPUGVNjPQbDvr4HbtMgs/
>
> $ cat after_balance
> Overall:
>     Device size:                 152.36GiB
>     Device allocated:            136.00GiB
>     Device unallocated:           16.35GiB
>     Device missing:                  0.00B
>     Used:                        133.97GiB
>     Free (estimated):             17.17GiB      (min: 17.17GiB)
>     Data ratio:                       1.00
>     Metadata ratio:                   1.00
>     Global reserve:              239.94MiB      (used: 0.00B)
> Data,single: Size:133.00GiB, Used:132.18GiB
>    /dev/mapper/cryptroot        133.00GiB
> Metadata,single: Size:3.00GiB, Used:1.79GiB
>    /dev/mapper/cryptroot          3.00GiB
> System,single: Size:3.00MiB, Used:16.00KiB
>    /dev/mapper/cryptroot          3.00MiB
> Unallocated:
>    /dev/mapper/cryptroot         16.35GiB
>
>
> $ cat after_duperemove_and_balance
> Overall:
>     Device size:                 152.36GiB
>     Device allocated:            136.03GiB
>     Device unallocated:           16.33GiB
>     Device missing:                  0.00B
>     Used:                        133.81GiB
>     Free (estimated):             16.55GiB      (min: 16.55GiB)
>     Data ratio:                       1.00
>     Metadata ratio:                   1.00
>     Global reserve:              512.00MiB      (used: 0.00B)
>
> Data,single: Size:127.00GiB, Used:126.77GiB
>    /dev/mapper/cryptroot        127.00GiB
>
> Metadata,single: Size:9.00GiB, Used:7.03GiB
>    /dev/mapper/cryptroot          9.00GiB
>
> System,single: Size:32.00MiB, Used:16.00KiB
>    /dev/mapper/cryptroot         32.00MiB
>
> Unallocated:
>    /dev/mapper/cryptroot         16.33GiB
>
>
> As you can see, it freed 5.41 GiB of data, but it also added 5.24 GiB of
> metadata. The estimated free space is now 16.55 GiB, whereas before the
> deduplication it was 17.17 GiB.
>
> This is when running duperemove from git with noblock, but almost nothing
> changes if I omit it (it defaults to block).
> Why did my metadata increase by a factor of 4? 99% of my data already had
> shared extents because of snapshots, so why such a huge increase?
Sharing by snapshot is different from sharing by dedup.
For snapshots, a new tree node is introduced which shares the entire
rest of the tree. So you get:
Root 123 -----\                   /--- Node 85 --- data 84
               >----- Node 87 ---<
Root 124 -----/                   \--- Node 43 --- data 42
This means there's 16K of metadata (actually probably more, but small
nonetheless) that is sharing the entire subvol.
For dedup, each shared data extent is shared individually, and metadata
is not shared at all:
Root 123 -----\                   /--- Node 85 --- data 84 (shared)
               \----- Node 87 ---<
                                  \--- Node 43 --- data 42 (shared)

                              /--- Node 129 --- data 84 (shared)
Root 124 ------- Node 131 ---<
                              \--- Node 126 --- data 42 (shared)
If you dedup over a set of snapshots, it eventually unshares the metadata.
The data is still shared, but _only_ the data, so it multiplies the
metadata size by the number of snapshots. It's even worse if you have
dup metadata since the cost of each new metadata page is doubled.
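A rough back-of-envelope check against the numbers above (the snapshot count
is assumed, since it isn't stated in the report): if N snapshots were sharing
the ~1.79GiB of metadata before dedup, fully unsharing it costs up to roughly
N times that afterwards, which for N around 4 lands near the observed 7.03GiB:

    # count snapshots and estimate worst-case unshared metadata in GiB
    N=$(btrfs subvolume list -s / | wc -l)
    echo "scale=2; $N * 1.79" | bc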
> Deduplication didn't reach 100%, because duperemove was killed by the OOM
> killer at 99%: https://paste.pound-python.org/show/yUcIOSzXcrfNPkF9rV2L/
>
> As you can see from dmesg
> (https://paste.pound-python.org/show/eZIkpxUU6QR9ij6Rn1Oq/), no single
> process is using that much memory (my system has 8GB): the biggest one only
> uses about 700MB of virtual memory.
>
> Another strange thing you can see in the previous log is that it tries to
> deduplicate /home/niko/nosnap/rootfs/@images/fedora25.qcow2, which is a
> UNIQUE file. That image is stored in a separate subvolume because I don't
> want it to be snapshotted, so I'm pretty sure there are no other copies of
> it, yet duperemove still tries to deduplicate it.
>
> Niccolò Belli