public inbox for linux-btrfs@vger.kernel.org
* 'btrfs filesystem defragment' makes files explode in size, especially fallocated ones
@ 2024-08-04  9:20 i.r.e.c.c.a.k.u.n+kernel.org
  2024-08-04 22:19 ` Qu Wenruo
  0 siblings, 1 reply; 19+ messages in thread
From: i.r.e.c.c.a.k.u.n+kernel.org @ 2024-08-04  9:20 UTC (permalink / raw)
  To: linux-btrfs

(Originally reported on Kernel.org Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=219033)

I found a very weird quirk in the 'btrfs filesystem defragment' command. And no, it's not about reflink removal; I'm aware of that.

It is kinda hard to replicate, but I found a somewhat reliable way. It reaches extremes with fallocated files specifically.

1. Create a file on a Btrfs filesystem using 'fallocate' and fill it. The easy way to do that is just to copy some files with 'rsync --preallocate'.

2. Check compsize info:

# compsize foo
Processed 71 files, 71 regular extents (71 refs), 0 inline.
Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL      100%      630M         630M         630M
none       100%      630M         630M         630M

All is fine so far: 1 extent per file, "Disk Usage" = "Referenced".

3. Run defragment:

# btrfs filesystem defragment -r foo

4. Check compsize again:

# compsize foo
Processed 71 files, 76 regular extents (76 refs), 0 inline.
Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL      100%      638M         638M         630M
none       100%      638M         638M         630M

Oops. Not only did the number of extents actually increase (meaning 'btrfs filesystem defragment' made fragmentation worse), but physical disk usage also grew for no apparent reason. And I didn't find any way to shrink it back.

---

The end result seems to be random, though. I managed to achieve some truly horrifying results.

# compsize foo
Processed 45 files, 45 regular extents (45 refs), 0 inline.
Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL      100%      360M         360M         360M
none       100%      360M         360M         360M

# btrfs filesystem defragment -r -t 1G foo

# compsize foo
Processed 45 files, 144 regular extents (144 refs), 0 inline.
Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL      100%      716M         716M         360M
none       100%      716M         716M         360M

Yikes! Triple the extents! Double increase in size!

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: 'btrfs filesystem defragment' makes files explode in size, especially fallocated ones
  2024-08-04  9:20 'btrfs filesystem defragment' makes files explode in size, especially fallocated ones i.r.e.c.c.a.k.u.n+kernel.org
@ 2024-08-04 22:19 ` Qu Wenruo
  2024-08-05 18:16   ` Hanabishi
  0 siblings, 1 reply; 19+ messages in thread
From: Qu Wenruo @ 2024-08-04 22:19 UTC (permalink / raw)
  To: i.r.e.c.c.a.k.u.n+kernel.org, linux-btrfs



On 2024/8/4 18:50, i.r.e.c.c.a.k.u.n+kernel.org@gmail.com wrote:
> (Originally reported on Kernel.org Bugzilla: 
> https://bugzilla.kernel.org/show_bug.cgi?id=219033)
> 
> There is a very weird quirk I found with 'btrfs filesystem defragment' 
> command. And no, it's not about reflinks removal, I'm aware of that.
> 
> It is kinda hard to replicate, but I found a somewhat reliable way. It 
> reaches extremes with fallocated files specifically.
> 
> 1. Create a file on a Btrfs filesystem using 'fallocate' and fill it. 
> The easy way to do that is just to copy some files with 'rsync 
> --preallocate'.
> 
> 2. Check compsize info:

Would you mind dumping the fiemap output (xfs_io -c "fiemap -v") before 
and after the defrag?

Thanks,
Qu
> 
> # compsize foo
> Processed 71 files, 71 regular extents (71 refs), 0 inline.
> Type       Perc     Disk Usage   Uncompressed Referenced
> TOTAL      100%      630M         630M         630M
> none       100%      630M         630M         630M
> 
> All is fine here for now. 1 extent per 1 file, "Disk Usage" = "Referenced".
> 
> 3. Run defragment:
> 
> # btrfs filesystem defragment -r foo
> 
> 4. Check compsize again:
> 
> # compsize foo
> Processed 71 files, 76 regular extents (76 refs), 0 inline.
> Type       Perc     Disk Usage   Uncompressed Referenced
> TOTAL      100%      638M         638M         630M
> none       100%      638M         638M         630M
> 
> Oops. Not only did the number of extents actually increase (meaning 
> 'btrfs filesystem defragment' made fragmentation worse), but physical 
> disk usage also grew for no apparent reason. And I didn't find any way 
> to shrink it back.
> 
> ---
> 
> The end result seems to be random though. But I managed to achieve some 
> truly horrifying results.
> 
> # compsize foo
> Processed 45 files, 45 regular extents (45 refs), 0 inline.
> Type       Perc     Disk Usage   Uncompressed Referenced
> TOTAL      100%      360M         360M         360M
> none       100%      360M         360M         360M
> 
> # btrfs filesystem defragment -r -t 1G foo
> 
> # compsize foo
> Processed 45 files, 144 regular extents (144 refs), 0 inline.
> Type       Perc     Disk Usage   Uncompressed Referenced
> TOTAL      100%      716M         716M         360M
> none       100%      716M         716M         360M
> 
> Yikes! Triple the extents! Double increase in size!
> 

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: 'btrfs filesystem defragment' makes files explode in size, especially fallocated ones
  2024-08-04 22:19 ` Qu Wenruo
@ 2024-08-05 18:16   ` Hanabishi
  2024-08-05 22:47     ` Qu Wenruo
  0 siblings, 1 reply; 19+ messages in thread
From: Hanabishi @ 2024-08-05 18:16 UTC (permalink / raw)
  To: Qu Wenruo, linux-btrfs

On 8/4/24 22:19, Qu Wenruo wrote:
> Mind to dump the filemap output (xfs_io -c "fiemap -v") before and after the defrag?
> 
> Thanks,
> Qu

Sure.

# compsize mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst
Processed 1 file, 1 regular extents (1 refs), 0 inline.
Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL      100%      224M         224M         224M
none       100%      224M         224M         224M

# xfs_io -c "fiemap -v" mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst
mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst:
  EXT: FILE-OFFSET      BLOCK-RANGE           TOTAL FLAGS
    0: [0..460303]:     545974648..546434951 460304   0x1

# btrfs filesystem defragment -t 1G mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst
mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst

# compsize mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst
Processed 1 file, 8 regular extents (8 refs), 0 inline.
Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL      100%      420M         420M         224M
none       100%      420M         420M         224M

# xfs_io -c "fiemap -v" mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst
mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst:
  EXT: FILE-OFFSET      BLOCK-RANGE           TOTAL FLAGS
    0: [0..511]:        15754800..15755311      512   0x0
    1: [512..2559]:     22070192..22072239     2048   0x0
    2: [2560..6655]:    22632216..22636311     4096   0x0
    3: [6656..14335]:   22072240..22079919     7680   0x0
    4: [14336..385023]: 546434952..546805639 370688   0x0
    5: [385024..400383]: 44592672..44608031    15360   0x0
    6: [400384..460303]: 546375032..546434951  59920   0x1


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: 'btrfs filesystem defragment' makes files explode in size, especially fallocated ones
  2024-08-05 18:16   ` Hanabishi
@ 2024-08-05 22:47     ` Qu Wenruo
  2024-08-06  7:19       ` Hanabishi
  0 siblings, 1 reply; 19+ messages in thread
From: Qu Wenruo @ 2024-08-05 22:47 UTC (permalink / raw)
  To: Hanabishi, linux-btrfs



On 2024/8/6 03:46, Hanabishi wrote:
> On 8/4/24 22:19, Qu Wenruo wrote:
>> Mind to dump the filemap output (xfs_io -c "fiemap -v") before and
>> after the defrag?
>>
>> Thanks,
>> Qu
>
> Sure.
>
> # compsize mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst
> Processed 1 file, 1 regular extents (1 refs), 0 inline.
> Type       Perc     Disk Usage   Uncompressed Referenced
> TOTAL      100%      224M         224M         224M
> none       100%      224M         224M         224M
>
> # xfs_io -c "fiemap -v" mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst
> mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst:
>   EXT: FILE-OFFSET      BLOCK-RANGE           TOTAL FLAGS
>     0: [0..460303]:     545974648..546434951 460304   0x1

Weird, there is no fallocated space involved at all.

>
> # btrfs filesystem defragment -t 1G

Oh, you're using a non-default threshold.
Unfortunately 1G makes no sense, as btrfs's largest extent size is only
128M.

(Although the above output shows a 224M extent, that's because
btrfs merges the fiemap results internally when possible.)

It's recommended to go with the default values anyway.
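As a quick sanity check (plain arithmetic based only on the 128M cap mentioned above, not on btrfs internals), a 224M file can never be a single on-disk extent:

```shell
# btrfs caps a single data extent at 128 MiB, so even a perfectly
# defragmented 224 MiB file needs at least ceil(224/128) = 2 extents.
max_extent=$((128 * 1024 * 1024))
file_size=$((224 * 1024 * 1024))
min_extents=$(( (file_size + max_extent - 1) / max_extent ))
echo "$min_extents"
```

fiemap may still report one 224M entry because physically adjacent extents are merged in its output.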

> mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst
> mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst
>
> # compsize mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst
> Processed 1 file, 8 regular extents (8 refs), 0 inline.
> Type       Perc     Disk Usage   Uncompressed Referenced
> TOTAL      100%      420M         420M         224M
> none       100%      420M         420M         224M
>
> # xfs_io -c "fiemap -v" mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst
> mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst:
>   EXT: FILE-OFFSET      BLOCK-RANGE           TOTAL FLAGS
>     0: [0..511]:        15754800..15755311      512   0x0
>     1: [512..2559]:     22070192..22072239     2048   0x0
>     2: [2560..6655]:    22632216..22636311     4096   0x0
>     3: [6656..14335]:   22072240..22079919     7680   0x0
>     4: [14336..385023]: 546434952..546805639 370688   0x0
>     5: [385024..400383]: 44592672..44608031    15360   0x0

All the above extents are new extents.

>     6: [400384..460303]: 546375032..546434951  59920   0x1
>

While this one is the old one.
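The numbers line up with the 420M compsize reported after the defrag. A sketch of the bookend accounting (sector counts taken from the fiemap output quoted above):

```shell
# Sector counts (512-byte units) from the fiemap output above.
old_extent=460304                                           # original preallocated extent
new_extents=$((512 + 2048 + 4096 + 7680 + 370688 + 15360))  # extents COW'd by defrag
# The old extent cannot be freed while extent 6 still references part
# of it, so total allocation = whole old extent + all new extents.
total_sectors=$(( old_extent + new_extents ))
echo "$(( total_sectors * 512 / 1024 / 1024 ))M"
```

This reproduces the 420M "Disk Usage" figure from compsize, even though only 224M is referenced.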

This looks related to a recent bug fix, e42b9d8b9ea2 ("btrfs: defrag: avoid
unnecessary defrag caused by incorrect extent size"), which was merged in
the v6.8 kernel.

Could you provide the kernel version?


Furthermore, there is another problem: according to your fiemap result,
the fs somehow produces fragmented new extents.

Is there any memory pressure, or is the fs itself fragmented?
Btrfs defrag only re-dirties the data, then writes it back.
This expects the data to be written as one contiguous extent, but both
memory pressure and a fragmented fs can break that assumption.

Thanks,
Qu

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: 'btrfs filesystem defragment' makes files explode in size, especially fallocated ones
  2024-08-05 22:47     ` Qu Wenruo
@ 2024-08-06  7:19       ` Hanabishi
  2024-08-06  9:55         ` Qu Wenruo
  0 siblings, 1 reply; 19+ messages in thread
From: Hanabishi @ 2024-08-06  7:19 UTC (permalink / raw)
  To: Qu Wenruo, linux-btrfs

On 8/5/24 22:47, Qu Wenruo wrote:

> It's recommended to go the default values anyway.

It's for testing purposes. As you can see in the original message, it happens regardless.
I simply noticed that increasing the threshold makes the problem worse.

> Mind to provide the kernel version?

Originally reported on 6.10-rc7. Current tests are with 6.11-rc1 and 6.11-rc2; still the same results.

> Is there any memory pressure or the fs itself is fragmented?

No. I tested it on multiple machines with lots of free RAM, also tested with like 99% empty disks.

Could you please try it yourself? It is fairly easy to follow the steps.
I use 'rsync --preallocate' to copy the files over (and maybe call 'sync' after to be sure).
Then run defragment on them and see if the problem reproduces.


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: 'btrfs filesystem defragment' makes files explode in size, especially fallocated ones
  2024-08-06  7:19       ` Hanabishi
@ 2024-08-06  9:55         ` Qu Wenruo
  2024-08-06 10:23           ` Hanabishi
  0 siblings, 1 reply; 19+ messages in thread
From: Qu Wenruo @ 2024-08-06  9:55 UTC (permalink / raw)
  To: Hanabishi, linux-btrfs



On 2024/8/6 16:49, Hanabishi wrote:
> On 8/5/24 22:47, Qu Wenruo wrote:
>
>> It's recommended to go the default values anyway.
>
> It's for testing purposes. As you can see in original message, it
> happens regardless.
> I simply noticed that increasing the threshold makes the problem worse.
>
>> Mind to provide the kernel version?
>
> Originally reported at 6.10-rc7. Current tests with 6.11-rc1 and
> 6.11-rc2. Still the same results.
>
>> Is there any memory pressure or the fs itself is fragmented?
>
> No. I tested it on multiple machines with lots of free RAM, also tested
> with like 99% empty disks.
>
> Could you please try it yourself? It is fairly easy to follow the steps.
> I use 'rsync --preallocate' to copy the files over (and maybe call
> 'sync' after to be sure).
> Then run defragment on them and see if the problem reproduces.
>

The problem is, I cannot reproduce the problem here.
Otherwise I would already be submitting a patch to fix it.

# xfs_io  -c "fiemap -v"
/mnt/btrfs/mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst
/mnt/btrfs/mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst:
  EXT: FILE-OFFSET      BLOCK-RANGE       TOTAL FLAGS
    0: [0..460303]:     583680..1043983  460304   0x1

# btrfs fi defrag /mnt/btrfs/mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst
# sync

# xfs_io  -c "fiemap -v"
/mnt/btrfs/mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst
/mnt/btrfs/mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst:
  EXT: FILE-OFFSET      BLOCK-RANGE       TOTAL FLAGS
    0: [0..460303]:     583680..1043983  460304   0x1

In fact, with your initial fiemap layout, btrfs won't even try to defrag
it, because the extent size is already larger than the default threshold.

I also tried "rsync --preallocate" as requested, with the same result:

# rsync --preallocate
/home/adam/mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst  /mnt/btrfs/

# xfs_io  -c "fiemap -v"
/mnt/btrfs/mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst
/mnt/btrfs/mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst:
  EXT: FILE-OFFSET      BLOCK-RANGE       TOTAL FLAGS
    0: [0..460303]:     1043984..1504287 460304   0x1

# btrfs fi defrag /mnt/btrfs/mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst
# sync
# xfs_io  -c "fiemap -v"
/mnt/btrfs/mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst
/mnt/btrfs/mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst:
  EXT: FILE-OFFSET      BLOCK-RANGE       TOTAL FLAGS
    0: [0..460303]:     1043984..1504287 460304   0x1

The same: btrfs detects that the extent is large enough and refuses to
defrag. (I also tried "-t 1G" for defrag; no difference.)

That's why I'm asking you for all this information, because:

- The kernel code should skip large enough extents,
   at least when using the default parameters.

- Even for preallocated cases, as long as the file occupies
   the whole length, it's no different.

- Even if btrfs chose to do defrag and you have no memory pressure,
   there should be new contiguous data extents,
   not the smaller ones you showed.

So either there is something like a cgroup involved (which can limit the
dirty page cache and trigger writebacks), or some other weird
behavior/bug.

Thanks,
Qu

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: 'btrfs filesystem defragment' makes files explode in size, especially fallocated ones
  2024-08-06  9:55         ` Qu Wenruo
@ 2024-08-06 10:23           ` Hanabishi
  2024-08-06 10:42             ` Qu Wenruo
  0 siblings, 1 reply; 19+ messages in thread
From: Hanabishi @ 2024-08-06 10:23 UTC (permalink / raw)
  To: Qu Wenruo, linux-btrfs

On 8/6/24 09:55, Qu Wenruo wrote:

> So either there is something like cgroup involved (which can limits the
> dirty page cache and trigger write backs), or some other weird
> behavior/bugs.

Yes, this line reveals something. I do have modified dirty page cache settings; I tend to keep them at low values.

Now, playing around with it: yes, it does seem to be the cause. When I tune 'vm.dirty_ratio' and 'vm.dirty_background_ratio' up to higher values, the problem becomes less prevalent.

Which means lowering them cranks up the problem to extremes. E.g. try

# sysctl -w vm.dirty_bytes=8192
# sysctl -w vm.dirty_background_ratio=0

With that setup, defrag completely obliterates files even with the default threshold value.
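A rough back-of-the-envelope sketch (assuming, as a simplification, that each forced writeback pins a separate extent of about the dirty_bytes size) shows why such a low setting is so destructive:

```shell
# vm.dirty_bytes=8192 means the kernel starts forcing writeback once
# ~8 KiB of data is dirty, so each flush can pin a separate tiny extent.
file_size=$((224 * 1024 * 1024))
flush_chunk=8192
worst_case_extents=$(( file_size / flush_chunk ))
echo "$worst_case_extents"
```

Tens of thousands of potential extents for a single 224 MiB file, instead of the two that would suffice under the 128M extent cap.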


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: 'btrfs filesystem defragment' makes files explode in size, especially fallocated ones
  2024-08-06 10:23           ` Hanabishi
@ 2024-08-06 10:42             ` Qu Wenruo
  2024-08-06 11:05               ` Hanabishi
  0 siblings, 1 reply; 19+ messages in thread
From: Qu Wenruo @ 2024-08-06 10:42 UTC (permalink / raw)
  To: Hanabishi, linux-btrfs



On 2024/8/6 19:53, Hanabishi wrote:
> On 8/6/24 09:55, Qu Wenruo wrote:
>
>> So either there is something like cgroup involved (which can limits the
>> dirty page cache and trigger write backs), or some other weird
>> behavior/bugs.
>
> Yes, this line reveals something. I do have modified dirty page cache
> settings; I tend to keep them at low values.
>
> Now, playing around with it: yes, it does seem to be the cause. When I
> tune 'vm.dirty_ratio' and 'vm.dirty_background_ratio' up to higher
> values, the problem becomes less prevalent.
>
> Which means lowering them cranks up the problem to extremes. E.g. try
>
> # sysctl -w vm.dirty_bytes=8192
> # sysctl -w vm.dirty_background_ratio=0
>
> With that setup defrag completely obliterates files even with default
> threshold value.

At least I no longer need to live under the fear of new defrag bugs.

This also explains why defrag (even with default values) would trigger
rewrites of extents: although fiemap shows only a single extent, it is
really a lot of small extents inside the larger preallocated range.

Thus btrfs believes it can merge all of them into a larger extent, but
the VM settings force btrfs to write them out early, causing extra data
COW and worse fragmentation.

Values that are too low mean the kernel triggers dirty writeback
aggressively. I believe for all extent-based file systems
(ext4/xfs/btrfs etc.) this causes a huge waste of metadata, due to the
huge number of small extents.

So yes, that setting is the cause. It reduces the memory used by the
page cache (it still counts as memory pressure), but the cost is more
fragmented extents, overall worse fs performance, and possibly more
wear on NAND-based storage.

Thanks,
Qu

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: 'btrfs filesystem defragment' makes files explode in size, especially fallocated ones
  2024-08-06 10:42             ` Qu Wenruo
@ 2024-08-06 11:05               ` Hanabishi
  2024-08-06 11:23                 ` Qu Wenruo
  0 siblings, 1 reply; 19+ messages in thread
From: Hanabishi @ 2024-08-06 11:05 UTC (permalink / raw)
  To: Qu Wenruo, linux-btrfs

On 8/6/24 10:42, Qu Wenruo wrote:

> Too low values means kernel will trigger dirty writeback aggressively, I
> believe for all extent based file systems (ext4/xfs/btrfs etc), it would
> cause a huge waste of metadata, due to the huge amount of small extents.
> 
> So yes, that setting is the cause, although it will reduce the memory
> used by page cache (it still counts as memory pressure), but the cost is
> more fragmented extents and overall worse fs performance and possibly
> more wear on NAND based storage.

Thanks for the explanation. I'm aware of the performance tradeoffs of a low dirty page cache; I prefer more reliability in case of system failure / power outage.
But that raises questions anyway.

1. Why are files OK initially, regardless of page cache size? They only blow up on an explicit run of the defragment command. And I didn't face anything similar with other filesystems either.

2. How do I get my space back without deleting the files? Even if I crank the page cache back up and then defragment "properly", it doesn't reclaim the actual space.

# btrfs filesystem defragment mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst

# compsize mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst
Processed 1 file, 3 regular extents (3 refs), 0 inline.
Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL      100%      449M         449M         224M
none       100%      449M         449M         224M

There are 3 extents; it's definitely not metadata overhead.

3. Regardless of settings, what if users end up in low-memory conditions for some reason? It's not an uncommon scenario.
You end up with Btrfs borking your disk space. In my opinion this looks like a bug and should not happen.


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: 'btrfs filesystem defragment' makes files explode in size, especially fallocated ones
  2024-08-06 11:05               ` Hanabishi
@ 2024-08-06 11:23                 ` Qu Wenruo
  2024-08-06 12:08                   ` Hanabishi
  2024-08-06 12:17                   ` Hanabishi
  0 siblings, 2 replies; 19+ messages in thread
From: Qu Wenruo @ 2024-08-06 11:23 UTC (permalink / raw)
  To: Hanabishi, linux-btrfs



On 2024/8/6 20:35, Hanabishi wrote:
> On 8/6/24 10:42, Qu Wenruo wrote:
>
>> Too low values means kernel will trigger dirty writeback aggressively, I
>> believe for all extent based file systems (ext4/xfs/btrfs etc), it would
>> cause a huge waste of metadata, due to the huge amount of small extents.
>>
>> So yes, that setting is the cause, although it will reduce the memory
>> used by page cache (it still counts as memory pressure), but the cost is
>> more fragmented extents and overall worse fs performance and possibly
>> more wear on NAND based storage.
>
> Thanks for explanation. I'm aware of low dirty page cache performance
> tradeoffs, I prefer more reliability in case of system failure / power
> outage.
> But that rises questions anyway.
>
> 1. Why are files ok initially regardless of page cache size? It only
> blows up with explicit run of the defragment command. And I didn't face
> anything similar with other filesystems either.

Because btrfs merges extents that are physically adjacent at fiemap time.

Especially with fallocate, the initial writes are guaranteed to land
in that preallocated range.
Although they may be split into many small extents, they are still
physically adjacent.

When defrag happens, it triggers data COW and screws everything up.
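The merge-at-fiemap-time behavior can be sketched with a toy model (hypothetical sector numbers, only for illustration): extents contiguous in both the logical and the physical address space collapse into one entry, which is why a fallocated range full of small writes can still report a single extent.

```shell
# Toy extent list: "logical physical length" in 512-byte sectors.
# The awk script merges runs that are contiguous in BOTH address
# spaces, mimicking how adjacent extents fold into one fiemap entry.
merged=$(printf '%s\n' \
    '0 1000 512' '512 1512 512' '1024 2024 512' '4096 9000 256' |
awk '
    NR == 1 { l = $1; p = $2; t = $3; next }
    $1 == l + t && $2 == p + t { t += $3; next }  # contiguous: extend run
    { print l, p, t; l = $1; p = $2; t = $3 }     # gap: emit merged run
    END { print l, p, t }
')
echo "$merged"
```

The three contiguous 512-sector extents report as one 1536-sector entry, while the fourth (with a gap) stays separate.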

>
> 2. How I get my space back without deleting the files? Even if I crank
> up the page cache amount and then defragment "properly", it doesn't
> reclaim the actual space back.
>
> # btrfs filesystem defragment mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst
>
> # compsize mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst
> Processed 1 file, 3 regular extents (3 refs), 0 inline.
> Type       Perc     Disk Usage   Uncompressed Referenced
> TOTAL      100%      449M         449M         224M
> none       100%      449M         449M         224M
>
> There are 3 extents, it's defenitely not a metadata overhead.

I'm not sure how high a value you set, but please at least do
everything with the default kernel config, not just crank the settings up.

And have you tried sync before compsize/fiemap?

If you still have problems reclaiming the space, please provide the
fiemap output (before defrag, and after defrag and sync)

>
> 3. Regardless of settings, what if users do end up in low memory
> conditions for some reason? It's not an uncommon scenario.
> You end up with Btrfs borking your disk space. In my opinion it looks
> like a bug and should not happen.
>

If we tried to lock the defrag range to ensure the data lands in a
larger extent, I'm 100% sure the MM guys wouldn't be happy; it would
block the most common way to reclaim memory.

That way we would only exhaust system memory at the worst possible
time.


IIRC it's already in the documentation, although not that clearly:

   The value is only advisory and the final size of the extents may
   differ, depending on the state of the free space and fragmentation or
   other internal logic.

To be honest, defrag is already not recommended for modern extent-based
file systems, so there is no longer a common, good example to follow.

And for COW file systems, along with btrfs' specific bookend behavior,
it brings a new level of complexity.

So overall, if you're not sure what the internal defrag logic is, and
don't have a clear problem you want to solve, do not defrag.

Thanks,
Qu

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: 'btrfs filesystem defragment' makes files explode in size, especially fallocated ones
  2024-08-06 11:23                 ` Qu Wenruo
@ 2024-08-06 12:08                   ` Hanabishi
  2024-08-06 22:10                     ` Qu Wenruo
  2024-08-06 12:17                   ` Hanabishi
  1 sibling, 1 reply; 19+ messages in thread
From: Hanabishi @ 2024-08-06 12:08 UTC (permalink / raw)
  To: Qu Wenruo, linux-btrfs

On 8/6/24 11:23, Qu Wenruo wrote:

> When defrag happens, it triggers data COW and screws everything up.

Yeah, making test files in NOCOW mode seems to prevent the issue.

> I'm not sure how high the value you set, but at least please do
> everything with default kernel config, not just crank the settings up.

Up to 50% on a 32G RAM machine. More than enough for a 224M file.

Kernel defaults are meaningless anyway, as they are relative to RAM size. Even Linus admitted that: https://lwn.net/Articles/572921/

> And have you tried sync before compsize/fiemap?

Of course. I do sync on every step.

> If we try to lock the defrag range, to ensure them to land in a larger
> extent, I'm 100% sure MM guys won't be happy, it's blocking the most
> common way to reclaim memory.

Hmm, but couldn't Btrfs simply preallocate that space? I've copied files much larger than the page cache and even the entire RAM, and they are totally fine, as you could guess.
Is moving extents under the hood that different from copying files around?

> IIRC it's already in the document, although not that clear:
> 
>    The value is only advisory and the final size of the extents may
>    differ, depending on the state of the free space and fragmentation or
>    other internal logic.
> 
> To be honest, defrag is not recommended for modern extent based file
> systems already, thus there is no longer a common and good example to
> follow.
> 
> And for COW file systems, along with btrfs' specific bookend behavior,
> it brings a new level of complexity.
> 
> So overall, if you're not sure what the defrag internal logic is, nor
> have a clear problem you want to solve, do not defrag.

Well, I went down this hole for a reason.
I worked with a piece of software which writes files sequentially, but in a very primitive POSIX-compliant way. For reference, a ~17G file it produced was split into more than 1 million(!) extents, basically shredding the entire file into 16K pieces and producing a serious access performance penalty even on SSD. In fact, I only noticed the problem because of the file's horrible disk performance.
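The figures above check out (a quick sketch, taking the ~17G file size and 16K pieces at face value):

```shell
# A 17 GiB file shredded into 16 KiB pieces yields over a million extents.
file_size=$((17 * 1024 * 1024 * 1024))
piece=$((16 * 1024))
extents=$(( file_size / piece ))
echo "$extents"
```

That is roughly 1.1 million extents, consistent with the "more than 1 million" observation.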

I even tried writing it in NOCOW mode, but that didn't help; the fragmentation level was the same. So it has nothing to do with CoW; it's Btrfs itself not really getting the software's intentions.
I'm not sure how it would behave on other filesystems, but to me it doesn't really look like a FS fault anyway.

So I ended up falling back to good old defragmentation, discovering the reported issue along the way, which became double trouble for me.


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: 'btrfs filesystem defragment' makes files explode in size, especially fallocated ones
  2024-08-06 11:23                 ` Qu Wenruo
  2024-08-06 12:08                   ` Hanabishi
@ 2024-08-06 12:17                   ` Hanabishi
  2024-08-06 13:22                     ` Hanabishi
  1 sibling, 1 reply; 19+ messages in thread
From: Hanabishi @ 2024-08-06 12:17 UTC (permalink / raw)
  To: Qu Wenruo, linux-btrfs

On 8/6/24 11:23, Qu Wenruo wrote:

> If you still have problems reclaiming the space, please provide the
> fiemap output (before defrag, and after defrag and sync)

Before defrag:

# compsize mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst
Processed 1 file, 883 regular extents (883 refs), 0 inline.
Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL      100%      449M         449M         224M
none       100%      449M         449M         224M

# xfs_io -c "fiemap -v" mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst
mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst:
  EXT: FILE-OFFSET      BLOCK-RANGE        TOTAL FLAGS
    0: [0..511]:        15754800..15755311   512   0x0
    1: [512..1023]:     15063136..15063647   512   0x0
    2: [1024..1535]:    15751920..15752431   512   0x0
    3: [1536..2047]:    15145832..15146343   512   0x0
    4: [2048..3071]:    22070192..22071215  1024   0x0
    5: [3072..3583]:    22632216..22632727   512   0x0
    6: [3584..4095]:    22071216..22071727   512   0x0
    7: [4096..4607]:    22632728..22633239   512   0x0
    8: [4608..5119]:    22071728..22072239   512   0x0
    9: [5120..5631]:    22633240..22633751   512   0x0
   10: [5632..6143]:    22072240..22072751   512   0x0
   11: [6144..6655]:    22633752..22634263   512   0x0
   12: [6656..7167]:    22072752..22073263   512   0x0
   13: [7168..7679]:    22634264..22634775   512   0x0
   14: [7680..8191]:    22073264..22073775   512   0x0
   15: [8192..8703]:    21106888..21107399   512   0x0
   16: [8704..9215]:    22863440..22863951   512   0x0
   17: [9216..9727]:    22634776..22635287   512   0x0
   18: [9728..10239]:   22073776..22074287   512   0x0
   19: [10240..10751]:  21107400..21107911   512   0x0
   20: [10752..11263]:  22863952..22864463   512   0x0
   21: [11264..11775]:  22635288..22635799   512   0x0
   22: [11776..12287]:  22074288..22074799   512   0x0
   23: [12288..12799]:  21107912..21108423   512   0x0
   24: [12800..13311]:  22864464..22864975   512   0x0
   25: [13312..13823]:  22635800..22636311   512   0x0
   26: [13824..14335]:  22074800..22075311   512   0x0
   27: [14336..14847]:  21108424..21108935   512   0x0
   28: [14848..15359]:  22864976..22865487   512   0x0
   29: [15360..15871]:  22106224..22106735   512   0x0
   30: [15872..16383]:  22636312..22636823   512   0x0
   31: [16384..16895]:  22075312..22075823   512   0x0
   32: [16896..17407]:  21108936..21109447   512   0x0
   33: [17408..17919]:  22865488..22865999   512   0x0
   34: [17920..18431]:  22106736..22107247   512   0x0
   35: [18432..18943]:  21587608..21588119   512   0x0
   36: [18944..19455]:  21821096..21821607   512   0x0
   37: [19456..19967]:  22636824..22637335   512   0x0
   38: [19968..20479]:  22612032..22612543   512   0x0
   39: [20480..20991]:  22075824..22076335   512   0x0
   40: [20992..21503]:  22886512..22887023   512   0x0
   41: [21504..22015]:  21109448..21109959   512   0x0
   42: [22016..22527]:  22866000..22866511   512   0x0
   43: [22528..23039]:  22107248..22107759   512   0x0
   44: [23040..23551]:  21588120..21588631   512   0x0
   45: [23552..24063]:  21821608..21822119   512   0x0
   46: [24064..24575]:  22637336..22637847   512   0x0
   47: [24576..25087]:  22612544..22613055   512   0x0
   48: [25088..25599]:  22076336..22076847   512   0x0
   49: [25600..26111]:  22887024..22887535   512   0x0
   50: [26112..26623]:  21109960..21110471   512   0x0
   51: [26624..27135]:  22866512..22867023   512   0x0
   52: [27136..27647]:  22107760..22108271   512   0x0
   53: [27648..28159]:  21588632..21589143   512   0x0
   54: [28160..28671]:  21560104..21560615   512   0x0
   55: [28672..29183]:  21822120..21822631   512   0x0
   56: [29184..29695]:  22637848..22638359   512   0x0
   57: [29696..30207]:  22613056..22613567   512   0x0
   58: [30208..30719]:  22076848..22077359   512   0x0
   59: [30720..31231]:  22887536..22888047   512   0x0
   60: [31232..31743]:  21110472..21110983   512   0x0
   61: [31744..32255]:  22867024..22867535   512   0x0
   62: [32256..32767]:  22108272..22108783   512   0x0
   63: [32768..33279]:  21589144..21589655   512   0x0
   64: [33280..33791]:  21067632..21068143   512   0x0
   65: [33792..34303]:  22320824..22321335   512   0x0
   66: [34304..34815]:  21560616..21561127   512   0x0
   67: [34816..35327]:  21822632..21823143   512   0x0
   68: [35328..35839]:  22902232..22902743   512   0x0
   69: [35840..36351]:  22638360..22638871   512   0x0
   70: [36352..36863]:  22613568..22614079   512   0x0
   71: [36864..37375]:  22077360..22077871   512   0x0
   72: [37376..37887]:  21165368..21165879   512   0x0
   73: [37888..38399]:  22888048..22888559   512   0x0
   74: [38400..38911]:  21110984..21111495   512   0x0
   75: [38912..39423]:  22867536..22868047   512   0x0
   76: [39424..39935]:  22108784..22109295   512   0x0
   77: [39936..40447]:  21589656..21590167   512   0x0
   78: [40448..40959]:  21063264..21063775   512   0x0
   79: [40960..41471]:  22389400..22389911   512   0x0
   80: [41472..41983]:  21068144..21068655   512   0x0
   81: [41984..42495]:  22321336..22321847   512   0x0
   82: [42496..43007]:  21324480..21324991   512   0x0
   83: [43008..43519]:  21561128..21561639   512   0x0
   84: [43520..44031]:  21823144..21823655   512   0x0
   85: [44032..44543]:  22902744..22903255   512   0x0
   86: [44544..45055]:  22638872..22639383   512   0x0
   87: [45056..45567]:  22614080..22614591   512   0x0
   88: [45568..46079]:  22216920..22217431   512   0x0
   89: [46080..46591]:  22077872..22078383   512   0x0
   90: [46592..47103]:  21165880..21166391   512   0x0
   91: [47104..47615]:  22888560..22889071   512   0x0
   92: [47616..48127]:  21111496..21112007   512   0x0
   93: [48128..48639]:  21664808..21665319   512   0x0
   94: [48640..49151]:  22868048..22868559   512   0x0
   95: [49152..49663]:  22109296..22109807   512   0x0
   96: [49664..50175]:  21590168..21590679   512   0x0
   97: [50176..50687]:  21063776..21064287   512   0x0
   98: [50688..51199]:  22389912..22390423   512   0x0
   99: [51200..51711]:  21068656..21069167   512   0x0
  100: [51712..52223]:  22321848..22322359   512   0x0
  101: [52224..52735]:  21324992..21325503   512   0x0
  102: [52736..53247]:  21561640..21562151   512   0x0
  103: [53248..53759]:  21823656..21824167   512   0x0
  104: [53760..54271]:  22903256..22903767   512   0x0
  105: [54272..54783]:  22639384..22639895   512   0x0
  106: [54784..55295]:  21076216..21076727   512   0x0
  107: [55296..55807]:  22614592..22615103   512   0x0
  108: [55808..56319]:  22217432..22217943   512   0x0
  109: [56320..56831]:  22078384..22078895   512   0x0
  110: [56832..57343]:  21166392..21166903   512   0x0
  111: [57344..57855]:  22889072..22889583   512   0x0
  112: [57856..58367]:  21112008..21112519   512   0x0
  113: [58368..58879]:  21665320..21665831   512   0x0
  114: [58880..59391]:  22868560..22869071   512   0x0
  115: [59392..59903]:  22109808..22110319   512   0x0
  116: [59904..60415]:  21590680..21591191   512   0x0
  117: [60416..60927]:  21064288..21064799   512   0x0
  118: [60928..61439]:  22341784..22342295   512   0x0
  119: [61440..61951]:  22390424..22390935   512   0x0
  120: [61952..62463]:  21069168..21069679   512   0x0
  121: [62464..62975]:  22322360..22322871   512   0x0
  122: [62976..63487]:  21325504..21326015   512   0x0
  123: [63488..63999]:  21562152..21562663   512   0x0
  124: [64000..64511]:  21824168..21824679   512   0x0
  125: [64512..65023]:  22903768..22904279   512   0x0
  126: [65024..65535]:  22639896..22640407   512   0x0
  127: [65536..66047]:  21076728..21077239   512   0x0
  128: [66048..66559]:  22615104..22615615   512   0x0
  129: [66560..67071]:  22217944..22218455   512   0x0
  130: [67072..67583]:  22078896..22079407   512   0x0
  131: [67584..68095]:  21166904..21167415   512   0x0
  132: [68096..68607]:  22889584..22890095   512   0x0
  133: [68608..69119]:  21112520..21113031   512   0x0
  134: [69120..69631]:  22609256..22609767   512   0x0
  135: [69632..70143]:  21665832..21666343   512   0x0
  136: [70144..70655]:  22869072..22869583   512   0x0
  137: [70656..71167]:  22110320..22110831   512   0x0
  138: [71168..71679]:  21617320..21617831   512   0x0
  139: [71680..72191]:  21591192..21591703   512   0x0
  140: [72192..72703]:  21282824..21283335   512   0x0
  141: [72704..73215]:  21064800..21065311   512   0x0
  142: [73216..73727]:  22342296..22342807   512   0x0
  143: [73728..74239]:  22390936..22391447   512   0x0
  144: [74240..74751]:  21069680..21070191   512   0x0
  145: [74752..75263]:  22322872..22323383   512   0x0
  146: [75264..75775]:  21326016..21326527   512   0x0
  147: [75776..76287]:  21562664..21563175   512   0x0
  148: [76288..76799]:  21824680..21825191   512   0x0
  149: [76800..77311]:  22904280..22904791   512   0x0
  150: [77312..77823]:  22640408..22640919   512   0x0
  151: [77824..78335]:  21077240..21077751   512   0x0
  152: [78336..78847]:  22615616..22616127   512   0x0
  153: [78848..79359]:  22218456..22218967   512   0x0
  154: [79360..79871]:  22079408..22079919   512   0x0
  155: [79872..80383]:  21167416..21167927   512   0x0
  156: [80384..80895]:  22686944..22687455   512   0x0
  157: [80896..81407]:  22890096..22890607   512   0x0
  158: [81408..81919]:  22310672..22311183   512   0x0
  159: [81920..82431]:  21113032..21113543   512   0x0
  160: [82432..82943]:  22609768..22610279   512   0x0
  161: [82944..83455]:  21666344..21666855   512   0x0
  162: [83456..83967]:  22869584..22870095   512   0x0
  163: [83968..84479]:  22110832..22111343   512   0x0
  164: [84480..84991]:  21617832..21618343   512   0x0
  165: [84992..85503]:  21591704..21592215   512   0x0
  166: [85504..86015]:  21283336..21283847   512   0x0
  167: [86016..86527]:  21065312..21065823   512   0x0
  168: [86528..87039]:  22342808..22343319   512   0x0
  169: [87040..87551]:  22391448..22391959   512   0x0
  170: [87552..88063]:  21070192..21070703   512   0x0
  171: [88064..88575]:  22323384..22323895   512   0x0
  172: [88576..89087]:  21326528..21327039   512   0x0
  173: [89088..89599]:  21563176..21563687   512   0x0
  174: [89600..90111]:  21825192..21825703   512   0x0
  175: [90112..90623]:  22904792..22905303   512   0x0
  176: [90624..91135]:  22585336..22585847   512   0x0
  177: [91136..91647]:  22640920..22641431   512   0x0
  178: [91648..92159]:  21077752..21078263   512   0x0
  179: [92160..92671]:  22616128..22616639   512   0x0
  180: [92672..93183]:  21293176..21293687   512   0x0
  181: [93184..93695]:  22218968..22219479   512   0x0
  182: [93696..94207]:  22079920..22080431   512   0x0
  183: [94208..94719]:  21167928..21168439   512   0x0
  184: [94720..95231]:  22687456..22687967   512   0x0
  185: [95232..95743]:  22890608..22891119   512   0x0
  186: [95744..96255]:  22311184..22311695   512   0x0
  187: [96256..96767]:  21113544..21114055   512   0x0
  188: [96768..97279]:  22610280..22610791   512   0x0
  189: [97280..97791]:  21666856..21667367   512   0x0
  190: [97792..98303]:  22870096..22870607   512   0x0
  191: [98304..98815]:  22111344..22111855   512   0x0
  192: [98816..99327]:  21618344..21618855   512   0x0
  193: [99328..99839]:  21592216..21592727   512   0x0
  194: [99840..100351]: 21283848..21284359   512   0x0
  195: [100352..100863]: 21065824..21066335   512   0x0
  196: [100864..101375]: 22343320..22343831   512   0x0
  197: [101376..101887]: 22391960..22392471   512   0x0
  198: [101888..102399]: 21070704..21071215   512   0x0
  199: [102400..102911]: 22323896..22324407   512   0x0
  200: [102912..103423]: 21327040..21327551   512   0x0
  201: [103424..103935]: 21563688..21564199   512   0x0
  202: [103936..104447]: 21825704..21826215   512   0x0
  203: [104448..104959]: 22905304..22905815   512   0x0
  204: [104960..105471]: 22585848..22586359   512   0x0
  205: [105472..105983]: 22641432..22641943   512   0x0
  206: [105984..106495]: 21078264..21078775   512   0x0
  207: [106496..107007]: 22616640..22617151   512   0x0
  208: [107008..107519]: 21234104..21234615   512   0x0
  209: [107520..108031]: 22427128..22427639   512   0x0
  210: [108032..108543]: 21293688..21294199   512   0x0
  211: [108544..109055]: 22219480..22219991   512   0x0
  212: [109056..109567]: 22080432..22080943   512   0x0
  213: [109568..110079]: 21168440..21168951   512   0x0
  214: [110080..110591]: 21409320..21409831   512   0x0
  215: [110592..111103]: 22687968..22688479   512   0x0
  216: [111104..111615]: 22891120..22891631   512   0x0
  217: [111616..112127]: 21684712..21685223   512   0x0
  218: [112128..112639]: 22311696..22312207   512   0x0
  219: [112640..113151]: 21719392..21719903   512   0x0
  220: [113152..113663]: 21114056..21114567   512   0x0
  221: [113664..114175]: 23113504..23114015   512   0x0
  222: [114176..114687]: 22610792..22611303   512   0x0
  223: [114688..115199]: 22807696..22808207   512   0x0
  224: [115200..115711]: 21667368..21667879   512   0x0
  225: [115712..116223]: 22870608..22871119   512   0x0
  226: [116224..116735]: 22937760..22938271   512   0x0
  227: [116736..117247]: 22111856..22112367   512   0x0
  228: [117248..118271]: 21618856..21619879  1024   0x0
  229: [118272..118783]: 21592728..21593239   512   0x0
  230: [118784..119295]: 23056312..23056823   512   0x0
  231: [119296..119807]: 21284360..21284871   512   0x0
  232: [119808..120319]: 22465704..22466215   512   0x0
  233: [120320..120831]: 21066336..21066847   512   0x0
  234: [120832..121343]: 22343832..22344343   512   0x0
  235: [121344..121855]: 22392472..22392983   512   0x0
  236: [121856..122367]: 21071216..21071727   512   0x0
  237: [122368..122879]: 22324408..22324919   512   0x0
  238: [122880..123391]: 21327552..21328063   512   0x0
  239: [123392..123903]: 21564200..21564711   512   0x0
  240: [123904..124415]: 21826216..21826727   512   0x0
  241: [124416..124927]: 22905816..22906327   512   0x0
  242: [124928..125439]: 22586360..22586871   512   0x0
  243: [125440..125951]: 21237560..21238071   512   0x0
  244: [125952..126463]: 22641944..22642455   512   0x0
  245: [126464..126975]: 21078776..21079287   512   0x0
  246: [126976..127487]: 22617152..22617663   512   0x0
  247: [127488..127999]: 21234616..21235127   512   0x0
  248: [128000..128511]: 21606672..21607183   512   0x0
  249: [128512..129023]: 21722976..21723487   512   0x0
  250: [129024..129535]: 22427640..22428151   512   0x0
  251: [129536..130047]: 21294200..21294711   512   0x0
  252: [130048..130559]: 22219992..22220503   512   0x0
  253: [130560..131071]: 22811288..22811799   512   0x0
  254: [131072..132095]: 23927696..23928719  1024   0x0
  255: [132096..132607]: 21084104..21084615   512   0x0
  256: [132608..133119]: 22080944..22081455   512   0x0
  257: [133120..133631]: 21412952..21413463   512   0x0
  258: [133632..134143]: 21567528..21568039   512   0x0
  259: [134144..134655]: 21059024..21059535   512   0x0
  260: [134656..135167]: 22872112..22872623   512   0x0
  261: [135168..135679]: 21168952..21169463   512   0x0
  262: [135680..136191]: 22315000..22315511   512   0x0
  263: [136192..136703]: 23116776..23117287   512   0x0
  264: [136704..137215]: 21409832..21410343   512   0x0
  265: [137216..137727]: 22688480..22688991   512   0x0
  266: [137728..138239]: 22349944..22350455   512   0x0
  267: [138240..138751]: 22891632..22892143   512   0x0
  268: [138752..139263]: 21996600..21997111   512   0x0
  269: [139264..139775]: 21603192..21603703   512   0x0
  270: [139776..140799]: 23928720..23929743  1024   0x0
  271: [140800..141311]: 21050072..21050583   512   0x0
  272: [141312..141823]: 22684136..22684647   512   0x0
  273: [141824..142335]: 21446816..21447327   512   0x0
  274: [142336..142847]: 21685224..21685735   512   0x0
  275: [142848..143359]: 22312208..22312719   512   0x0
  276: [143360..143871]: 21287480..21287991   512   0x0
  277: [143872..144383]: 22430560..22431071   512   0x0
  278: [144384..144895]: 21719904..21720415   512   0x0
  279: [144896..145407]: 21114568..21115079   512   0x0
  280: [145408..145919]: 23114016..23114527   512   0x0
  281: [145920..146431]: 22441448..22441959   512   0x0
  282: [146432..146943]: 22611304..22611815   512   0x0
  283: [146944..147455]: 22808208..22808719   512   0x0
  284: [147456..147967]: 21667880..21668391   512   0x0
  285: [147968..148479]: 22871120..22871631   512   0x0
  286: [148480..148991]: 21179632..21180143   512   0x0
  287: [148992..149503]: 22938272..22938783   512   0x0
  288: [149504..150015]: 22231104..22231615   512   0x0
  289: [150016..150527]: 21080456..21080967   512   0x0
  290: [150528..151039]: 22112368..22112879   512   0x0
  291: [151040..151551]: 21593240..21593751   512   0x0
  292: [151552..152063]: 21495184..21495695   512   0x0
  293: [152064..152575]: 22952056..22952567   512   0x0
  294: [152576..153087]: 23056824..23057335   512   0x0
  295: [153088..153599]: 21531352..21531863   512   0x0
  296: [153600..154111]: 21678384..21678895   512   0x0
  297: [154112..154623]: 21762912..21763423   512   0x0
  298: [154624..155135]: 23033240..23033751   512   0x0
  299: [155136..155647]: 22699112..22699623   512   0x0
  300: [155648..156159]: 21248448..21248959   512   0x0
  301: [156160..156671]: 21953552..21954063   512   0x0
  302: [156672..157183]: 21284872..21285383   512   0x0
  303: [157184..157695]: 22403776..22404287   512   0x0
  304: [157696..158207]: 21616512..21617023   512   0x0
  305: [158208..158719]: 22466216..22466727   512   0x0
  306: [158720..159231]: 21083216..21083727   512   0x0
  307: [159232..159743]: 21066848..21067359   512   0x0
  308: [159744..160255]: 22344344..22344855   512   0x0
  309: [160256..163327]: 23929744..23932815  3072   0x0
  310: [163328..163839]: 24648112..24648623   512   0x0
  311: [163840..164351]: 23892640..23893151   512   0x0
  312: [164352..164863]: 23932816..23933327   512   0x0
  313: [164864..165375]: 24648624..24649135   512   0x0
  314: [165376..165887]: 23893152..23893663   512   0x0
  315: [165888..166399]: 23933328..23933839   512   0x0
  316: [166400..166911]: 24649136..24649647   512   0x0
  317: [166912..167423]: 23893664..23894175   512   0x0
  318: [167424..167935]: 24657336..24657847   512   0x0
  319: [167936..168447]: 24638272..24638783   512   0x0
  320: [168448..168959]: 23933840..23934351   512   0x0
  321: [168960..169471]: 24649648..24650159   512   0x0
  322: [169472..169983]: 23894176..23894687   512   0x0
  323: [169984..170495]: 24657848..24658359   512   0x0
  324: [170496..171007]: 24638784..24639295   512   0x0
  325: [171008..171519]: 23934352..23934863   512   0x0
  326: [171520..172031]: 24650160..24650671   512   0x0
  327: [172032..172543]: 23894688..23895199   512   0x0
  328: [172544..173055]: 24658360..24658871   512   0x0
  329: [173056..173567]: 24639296..24639807   512   0x0
  330: [173568..174079]: 23934864..23935375   512   0x0
  331: [174080..174591]: 24650672..24651183   512   0x0
  332: [174592..175103]: 23895200..23895711   512   0x0
  333: [175104..175615]: 24658872..24659383   512   0x0
  334: [175616..176127]: 24639808..24640319   512   0x0
  335: [176128..176639]: 23935376..23935887   512   0x0
  336: [176640..177151]: 24651184..24651695   512   0x0
  337: [177152..177663]: 23895712..23896223   512   0x0
  338: [177664..178175]: 24659384..24659895   512   0x0
  339: [178176..178687]: 24640320..24640831   512   0x0
  340: [178688..179199]: 23935888..23936399   512   0x0
  341: [179200..179711]: 23950248..23950759   512   0x0
  342: [179712..180223]: 24651696..24652207   512   0x0
  343: [180224..180735]: 23896224..23896735   512   0x0
  344: [180736..181247]: 23916888..23917399   512   0x0
  345: [181248..181759]: 24659896..24660407   512   0x0
  346: [181760..182271]: 23902144..23902655   512   0x0
  347: [182272..182783]: 24640832..24641343   512   0x0
  348: [182784..183295]: 23936400..23936911   512   0x0
  349: [183296..183807]: 23950760..23951271   512   0x0
  350: [183808..184319]: 24652208..24652719   512   0x0
  351: [184320..184831]: 23896736..23897247   512   0x0
  352: [184832..185343]: 23917400..23917911   512   0x0
  353: [185344..185855]: 24660408..24660919   512   0x0
  354: [185856..186367]: 23902656..23903167   512   0x0
  355: [186368..186879]: 24641344..24641855   512   0x0
  356: [186880..187391]: 23936912..23937423   512   0x0
  357: [187392..187903]: 23951272..23951783   512   0x0
  358: [187904..188415]: 24652720..24653231   512   0x0
  359: [188416..188927]: 23897248..23897759   512   0x0
  360: [188928..189439]: 23917912..23918423   512   0x0
  361: [189440..189951]: 24660920..24661431   512   0x0
  362: [189952..190463]: 23903168..23903679   512   0x0
  363: [190464..190975]: 24641856..24642367   512   0x0
  364: [190976..191487]: 24038696..24039207   512   0x0
  365: [191488..191999]: 23937424..23937935   512   0x0
  366: [192000..192511]: 23886312..23886823   512   0x0
  367: [192512..193023]: 23951784..23952295   512   0x0
  368: [193024..193535]: 24624208..24624719   512   0x0
  369: [193536..194047]: 24653232..24653743   512   0x0
  370: [194048..194559]: 23897760..23898271   512   0x0
  371: [194560..195071]: 23918424..23918935   512   0x0
  372: [195072..195583]: 23914568..23915079   512   0x0
  373: [195584..196095]: 24661432..24661943   512   0x0
  374: [196096..196607]: 23903680..23904191   512   0x0
  375: [196608..197119]: 24642368..24642879   512   0x0
  376: [197120..197631]: 24039208..24039719   512   0x0
  377: [197632..198143]: 23937936..23938447   512   0x0
  378: [198144..198655]: 23886824..23887335   512   0x0
  379: [198656..199167]: 23952296..23952807   512   0x0
  380: [199168..199679]: 24624720..24625231   512   0x0
  381: [199680..200191]: 24653744..24654255   512   0x0
  382: [200192..200703]: 23898272..23898783   512   0x0
  383: [200704..201215]: 23918936..23919447   512   0x0
  384: [201216..201727]: 23915080..23915591   512   0x0
  385: [201728..202239]: 24661944..24662455   512   0x0
  386: [202240..202751]: 23904192..23904703   512   0x0
  387: [202752..203263]: 24642880..24643391   512   0x0
  388: [203264..203775]: 24039720..24040231   512   0x0
  389: [203776..204287]: 24892160..24892671   512   0x0
  390: [204288..204799]: 23938448..23938959   512   0x0
  391: [204800..205311]: 23635360..23635871   512   0x0
  392: [205312..205823]: 23887336..23887847   512   0x0
  393: [205824..206335]: 23952808..23953319   512   0x0
  394: [206336..206847]: 24625232..24625743   512   0x0
  395: [206848..207359]: 23922512..23923023   512   0x0
  396: [207360..207871]: 24654256..24654767   512   0x0
  397: [207872..208383]: 23898784..23899295   512   0x0
  398: [208384..208895]: 23919448..23919959   512   0x0
  399: [208896..209407]: 24666688..24667199   512   0x0
  400: [209408..209919]: 23915592..23916103   512   0x0
  401: [209920..210431]: 24662456..24662967   512   0x0
  402: [210432..210943]: 23904704..23905215   512   0x0
  403: [210944..211455]: 24643392..24643903   512   0x0
  404: [211456..211967]: 24040232..24040743   512   0x0
  405: [211968..212479]: 23477400..23477911   512   0x0
  406: [212480..212991]: 24892672..24893183   512   0x0
  407: [212992..213503]: 23938960..23939471   512   0x0
  408: [213504..214015]: 23635872..23636383   512   0x0
  409: [214016..214527]: 23887848..23888359   512   0x0
  410: [214528..215039]: 23953320..23953831   512   0x0
  411: [215040..215551]: 24625744..24626255   512   0x0
  412: [215552..216063]: 23923024..23923535   512   0x0
  413: [216064..216575]: 24654768..24655279   512   0x0
  414: [216576..217087]: 23899296..23899807   512   0x0
  415: [217088..217599]: 25475768..25476279   512   0x0
  416: [217600..218111]: 25567712..25568223   512   0x0
  417: [218112..218623]: 25409728..25410239   512   0x0
  418: [218624..219135]: 25309808..25310319   512   0x0
  419: [219136..219647]: 25476280..25476791   512   0x0
  420: [219648..220159]: 25568224..25568735   512   0x0
  421: [220160..220671]: 25410240..25410751   512   0x0
  422: [220672..221183]: 26502880..26503391   512   0x0
  423: [221184..221695]: 25310320..25310831   512   0x0
  424: [221696..222207]: 25476792..25477303   512   0x0
  425: [222208..222719]: 26124504..26125015   512   0x0
  426: [222720..223231]: 26872056..26872567   512   0x0
  427: [223232..223743]: 25568736..25569247   512   0x0
  428: [223744..224255]: 25410752..25411263   512   0x0
  429: [224256..224767]: 26399624..26400135   512   0x0
  430: [224768..225279]: 26104440..26104951   512   0x0
  431: [225280..225791]: 26503392..26503903   512   0x0
  432: [225792..226303]: 25310832..25311343   512   0x0
  433: [226304..226815]: 25477304..25477815   512   0x0
  434: [226816..227327]: 26188568..26189079   512   0x0
  435: [227328..227839]: 25717352..25717863   512   0x0
  436: [227840..228351]: 26986488..26986999   512   0x0
  437: [228352..228863]: 26125016..26125527   512   0x0
  438: [228864..229375]: 26872568..26873079   512   0x0
  439: [229376..229887]: 25689296..25689807   512   0x0
  440: [229888..230399]: 26684400..26684911   512   0x0
  441: [230400..230911]: 27235296..27235807   512   0x0
  442: [230912..231423]: 25569248..25569759   512   0x0
  443: [231424..231935]: 25411264..25411775   512   0x0
  444: [231936..232447]: 25681648..25682159   512   0x0
  445: [232448..232959]: 26400136..26400647   512   0x0
  446: [232960..233471]: 26104952..26105463   512   0x0
  447: [233472..233983]: 26376072..26376583   512   0x0
  448: [233984..234495]: 26501088..26501599   512   0x0
  449: [234496..235007]: 26428392..26428903   512   0x0
  450: [235008..235519]: 25638672..25639183   512   0x0
  451: [235520..236031]: 26841936..26842447   512   0x0
  452: [236032..236543]: 26844728..26845239   512   0x0
  453: [236544..237055]: 26215208..26215719   512   0x0
  454: [237056..237567]: 26413440..26413951   512   0x0
  455: [237568..238079]: 26874568..26875079   512   0x0
  456: [238080..238591]: 26503904..26504415   512   0x0
  457: [238592..239103]: 25623704..25624215   512   0x0
  458: [239104..239615]: 26992208..26992719   512   0x0
  459: [239616..240127]: 25993696..25994207   512   0x0
  460: [240128..240639]: 27005016..27005527   512   0x0
  461: [240640..241151]: 25311344..25311855   512   0x0
  462: [241152..241663]: 25477816..25478327   512   0x0
  463: [241664..242175]: 26331512..26332023   512   0x0
  464: [242176..242687]: 26189080..26189591   512   0x0
  465: [242688..243199]: 25407936..25408447   512   0x0
  466: [243200..251903]: 29021776..29030479  8704   0x0
  467: [251904..252415]: 28872544..28873055   512   0x0
  468: [252416..252927]: 29297240..29297751   512   0x0
  469: [252928..253439]: 29030480..29030991   512   0x0
  470: [253440..253951]: 28873056..28873567   512   0x0
  471: [253952..254463]: 29297752..29298263   512   0x0
  472: [254464..254975]: 29030992..29031503   512   0x0
  473: [254976..255487]: 28873568..28874079   512   0x0
  474: [255488..255999]: 29298264..29298775   512   0x0
  475: [256000..256511]: 29031504..29032015   512   0x0
  476: [256512..257023]: 28874080..28874591   512   0x0
  477: [257024..257535]: 28612088..28612599   512   0x0
  478: [257536..258047]: 29298776..29299287   512   0x0
  479: [258048..258559]: 29032016..29032527   512   0x0
  480: [258560..259071]: 28874592..28875103   512   0x0
  481: [259072..259583]: 28612600..28613111   512   0x0
  482: [259584..260095]: 29299288..29299799   512   0x0
  483: [260096..260607]: 29032528..29033039   512   0x0
  484: [260608..261119]: 28875104..28875615   512   0x0
  485: [261120..261631]: 28613112..28613623   512   0x0
  486: [261632..262143]: 29299800..29300311   512   0x0
  487: [262144..262655]: 29033040..29033551   512   0x0
  488: [262656..263167]: 28875616..28876127   512   0x0
  489: [263168..263679]: 28613624..28614135   512   0x0
  490: [263680..264191]: 29300312..29300823   512   0x0
  491: [264192..264703]: 29033552..29034063   512   0x0
  492: [264704..265215]: 28876128..28876639   512   0x0
  493: [265216..265727]: 28614136..28614647   512   0x0
  494: [265728..266239]: 27729832..27730343   512   0x0
  495: [266240..266751]: 28700920..28701431   512   0x0
  496: [266752..267263]: 29300824..29301335   512   0x0
  497: [267264..267775]: 27679200..27679711   512   0x0
  498: [267776..268287]: 27389096..27389607   512   0x0
  499: [268288..268799]: 29147000..29147511   512   0x0
  500: [268800..269311]: 29034064..29034575   512   0x0
  501: [269312..269823]: 28876640..28877151   512   0x0
  502: [269824..270335]: 29019776..29020287   512   0x0
  503: [270336..270847]: 28614648..28615159   512   0x0
  504: [270848..271359]: 29239832..29240343   512   0x0
  505: [271360..271871]: 27730344..27730855   512   0x0
  506: [271872..272383]: 28701432..28701943   512   0x0
  507: [272384..272895]: 29301336..29301847   512   0x0
  508: [272896..273407]: 27679712..27680223   512   0x0
  509: [273408..273919]: 27389608..27390119   512   0x0
  510: [273920..274431]: 29147512..29148023   512   0x0
  511: [274432..274943]: 29034576..29035087   512   0x0
  512: [274944..275455]: 27489120..27489631   512   0x0
  513: [275456..275967]: 28877152..28877663   512   0x0
  514: [275968..276479]: 29020288..29020799   512   0x0
  515: [276480..276991]: 28615160..28615671   512   0x0
  516: [276992..277503]: 29240344..29240855   512   0x0
  517: [277504..278015]: 27730856..27731367   512   0x0
  518: [278016..278527]: 28701944..28702455   512   0x0
  519: [278528..279039]: 29301848..29302359   512   0x0
  520: [279040..280063]: 30184504..30185527  1024   0x0
  521: [280064..280575]: 30379504..30380015   512   0x0
  522: [280576..281087]: 30447392..30447903   512   0x0
  523: [281088..281599]: 30185528..30186039   512   0x0
  524: [281600..282111]: 30380016..30380527   512   0x0
  525: [282112..282623]: 30447904..30448415   512   0x0
  526: [282624..283135]: 29611968..29612479   512   0x0
  527: [283136..283647]: 30186040..30186551   512   0x0
  528: [283648..284159]: 30380528..30381039   512   0x0
  529: [284160..284671]: 30448416..30448927   512   0x0
  530: [284672..285183]: 29612480..29612991   512   0x0
  531: [285184..285695]: 30186552..30187063   512   0x0
  532: [285696..286207]: 30381040..30381551   512   0x0
  533: [286208..286719]: 30448928..30449439   512   0x0
  534: [286720..287231]: 29612992..29613503   512   0x0
  535: [287232..287743]: 30187064..30187575   512   0x0
  536: [287744..288255]: 30381552..30382063   512   0x0
  537: [288256..288767]: 30449440..30449951   512   0x0
  538: [288768..289279]: 29613504..29614015   512   0x0
  539: [289280..289791]: 30187576..30188087   512   0x0
  540: [289792..290303]: 30382064..30382575   512   0x0
  541: [290304..290815]: 30449952..30450463   512   0x0
  542: [290816..291327]: 29614016..29614527   512   0x0
  543: [291328..291839]: 30188088..30188599   512   0x0
  544: [291840..292351]: 30382576..30383087   512   0x0
  545: [292352..292863]: 30450464..30450975   512   0x0
  546: [292864..293375]: 29614528..29615039   512   0x0
  547: [293376..293887]: 30315896..30316407   512   0x0
  548: [293888..294399]: 30188600..30189111   512   0x0
  549: [294400..294911]: 30383088..30383599   512   0x0
  550: [294912..295423]: 30450976..30451487   512   0x0
  551: [295424..295935]: 29615040..29615551   512   0x0
  552: [295936..296447]: 30316408..30316919   512   0x0
  553: [296448..296959]: 30189112..30189623   512   0x0
  554: [296960..297471]: 30383600..30384111   512   0x0
  555: [297472..297983]: 30451488..30451999   512   0x0
  556: [297984..300543]: 32891744..32894303  2560   0x0
  557: [300544..301055]: 33570880..33571391   512   0x0
  558: [301056..301567]: 32894304..32894815   512   0x0
  559: [301568..302079]: 33571392..33571903   512   0x0
  560: [302080..302591]: 32894816..32895327   512   0x0
  561: [302592..303103]: 33571904..33572415   512   0x0
  562: [303104..303615]: 32895328..32895839   512   0x0
  563: [303616..304127]: 33572416..33572927   512   0x0
  564: [304128..304639]: 32895840..32896351   512   0x0
  565: [304640..305151]: 33572928..33573439   512   0x0
  566: [305152..305663]: 32896352..32896863   512   0x0
  567: [305664..306175]: 33573440..33573951   512   0x0
  568: [306176..306687]: 32896864..32897375   512   0x0
  569: [306688..307199]: 33573952..33574463   512   0x0
  570: [307200..307711]: 32897376..32897887   512   0x0
  571: [307712..308223]: 31963960..31964471   512   0x0
  572: [308224..308735]: 33574464..33574975   512   0x0
  573: [308736..309247]: 41524800..41525311   512   0x0
  574: [309248..309759]: 41960416..41960927   512   0x0
  575: [309760..310271]: 41589776..41590287   512   0x0
  576: [310272..310783]: 40839608..40840119   512   0x0
  577: [310784..311295]: 41525312..41525823   512   0x0
  578: [311296..311807]: 41960928..41961439   512   0x0
  579: [311808..312319]: 41590288..41590799   512   0x0
  580: [312320..312831]: 40840120..40840631   512   0x0
  581: [312832..313343]: 41525824..41526335   512   0x0
  582: [313344..313855]: 41961440..41961951   512   0x0
  583: [313856..314367]: 41590800..41591311   512   0x0
  584: [314368..314879]: 40840632..40841143   512   0x0
  585: [314880..315391]: 41784272..41784783   512   0x0
  586: [315392..315903]: 41526336..41526847   512   0x0
  587: [315904..316415]: 41595760..41596271   512   0x0
  588: [316416..316927]: 41961952..41962463   512   0x0
  589: [316928..317439]: 41591312..41591823   512   0x0
  590: [317440..317951]: 40841144..40841655   512   0x0
  591: [317952..318463]: 41784784..41785295   512   0x0
  592: [318464..318975]: 41526848..41527359   512   0x0
  593: [318976..319487]: 41596272..41596783   512   0x0
  594: [319488..319999]: 41962464..41962975   512   0x0
  595: [320000..320511]: 41591824..41592335   512   0x0
  596: [320512..321023]: 40841656..40842167   512   0x0
  597: [321024..321535]: 41785296..41785807   512   0x0
  598: [321536..322047]: 41451544..41452055   512   0x0
  599: [322048..322559]: 41527360..41527871   512   0x0
  600: [322560..323071]: 41596784..41597295   512   0x0
  601: [323072..323583]: 41962976..41963487   512   0x0
  602: [323584..324095]: 41600424..41600935   512   0x0
  603: [324096..324607]: 41592336..41592847   512   0x0
  604: [324608..325119]: 40842168..40842679   512   0x0
  605: [325120..325631]: 41785808..41786319   512   0x0
  606: [325632..326143]: 41452056..41452567   512   0x0
  607: [326144..326655]: 41527872..41528383   512   0x0
  608: [326656..327167]: 41597296..41597807   512   0x0
  609: [327168..327679]: 41963488..41963999   512   0x0
  610: [327680..328191]: 41600936..41601447   512   0x0
  611: [328192..328703]: 41592848..41593359   512   0x0
  612: [328704..329215]: 40990472..40990983   512   0x0
  613: [329216..329727]: 41271880..41272391   512   0x0
  614: [329728..330239]: 41438976..41439487   512   0x0
  615: [330240..330751]: 40842680..40843191   512   0x0
  616: [330752..331263]: 41873688..41874199   512   0x0
  617: [331264..331775]: 41786320..41786831   512   0x0
  618: [331776..332287]: 40027960..40028471   512   0x0
  619: [332288..332799]: 41651928..41652439   512   0x0
  620: [332800..333311]: 41452568..41453079   512   0x0
  621: [333312..333823]: 41528384..41528895   512   0x0
  622: [333824..334335]: 41898280..41898791   512   0x0
  623: [334336..334847]: 40035688..40036199   512   0x0
  624: [334848..335359]: 41534936..41535447   512   0x0
  625: [335360..335871]: 41597808..41598319   512   0x0
  626: [335872..336383]: 41964000..41964511   512   0x0
  627: [336384..336895]: 41097552..41098063   512   0x0
  628: [336896..337407]: 40746288..40746799   512   0x0
  629: [337408..337919]: 41601448..41601959   512   0x0
  630: [337920..338431]: 41378544..41379055   512   0x0
  631: [338432..338943]: 41593360..41593871   512   0x0
  632: [338944..339455]: 40990984..40991495   512   0x0
  633: [339456..339967]: 41968536..41969047   512   0x0
  634: [339968..340479]: 41272392..41272903   512   0x0
  635: [340480..340991]: 41439488..41439999   512   0x0
  636: [340992..341503]: 40277504..40278015   512   0x0
  637: [341504..342015]: 40843192..40843703   512   0x0
  638: [342016..342527]: 41915264..41915775   512   0x0
  639: [342528..343039]: 41874200..41874711   512   0x0
  640: [343040..343551]: 40448704..40449215   512   0x0
  641: [343552..344063]: 40758976..40759487   512   0x0
  642: [344064..344575]: 41786832..41787343   512   0x0
  643: [344576..345087]: 40028472..40028983   512   0x0
  644: [345088..345599]: 41652440..41652951   512   0x0
  645: [345600..346111]: 41453080..41453591   512   0x0
  646: [346112..346623]: 41528896..41529407   512   0x0
  647: [346624..347135]: 40217400..40217911   512   0x0
  648: [347136..347647]: 41443864..41444375   512   0x0
  649: [347648..348159]: 41898792..41899303   512   0x0
  650: [348160..348671]: 40036200..40036711   512   0x0
  651: [348672..349183]: 41535448..41535959   512   0x0
  652: [349184..349695]: 41598320..41598831   512   0x0
  653: [349696..350207]: 41964512..41965023   512   0x0
  654: [350208..350719]: 41098064..41098575   512   0x0
  655: [350720..351231]: 40470248..40470759   512   0x0
  656: [351232..351743]: 40746800..40747311   512   0x0
  657: [351744..352255]: 40872232..40872743   512   0x0
  658: [352256..352767]: 40764248..40764759   512   0x0
  659: [352768..353279]: 41601960..41602471   512   0x0
  660: [353280..353791]: 41379056..41379567   512   0x0
  661: [353792..354303]: 41593872..41594383   512   0x0
  662: [354304..354815]: 40991496..40992007   512   0x0
  663: [354816..355327]: 41969048..41969559   512   0x0
  664: [355328..355839]: 41272904..41273415   512   0x0
  665: [355840..356351]: 41440000..41440511   512   0x0
  666: [356352..356863]: 40278016..40278527   512   0x0
  667: [356864..357375]: 40282848..40283359   512   0x0
  668: [357376..357887]: 40843704..40844215   512   0x0
  669: [357888..358399]: 41915776..41916287   512   0x0
  670: [358400..358911]: 40863568..40864079   512   0x0
  671: [358912..359423]: 41823336..41823847   512   0x0
  672: [359424..359935]: 41874712..41875223   512   0x0
  673: [359936..360447]: 40449216..40449727   512   0x0
  674: [360448..360959]: 40597816..40598327   512   0x0
  675: [360960..361471]: 40759488..40759999   512   0x0
  676: [361472..361983]: 41316432..41316943   512   0x0
  677: [361984..362495]: 40106288..40106799   512   0x0
  678: [362496..363007]: 41787344..41787855   512   0x0
  679: [363008..363519]: 40028984..40029495   512   0x0
  680: [363520..364031]: 41652952..41653463   512   0x0
  681: [364032..364543]: 41453592..41454103   512   0x0
  682: [364544..365055]: 41529408..41529919   512   0x0
  683: [365056..365567]: 40217912..40218423   512   0x0
  684: [365568..366079]: 41444376..41444887   512   0x0
  685: [366080..366591]: 40238248..40238759   512   0x0
  686: [366592..367103]: 41899304..41899815   512   0x0
  687: [367104..367615]: 40977312..40977823   512   0x0
  688: [367616..368127]: 40036712..40037223   512   0x0
  689: [368128..368639]: 41535960..41536471   512   0x0
  690: [368640..369151]: 40232424..40232935   512   0x0
  691: [369152..369663]: 41428608..41429119   512   0x0
  692: [369664..370175]: 41598832..41599343   512   0x0
  693: [370176..370687]: 41965024..41965535   512   0x0
  694: [370688..371199]: 41098576..41099087   512   0x0
  695: [371200..371711]: 40470760..40471271   512   0x0
  696: [371712..372223]: 40747312..40747823   512   0x0
  697: [372224..372735]: 40929888..40930399   512   0x0
  698: [372736..373247]: 40872744..40873255   512   0x0
  699: [373248..373759]: 40764760..40765271   512   0x0
  700: [373760..374271]: 41602472..41602983   512   0x0
  701: [374272..374783]: 41379568..41380079   512   0x0
  702: [374784..375295]: 41594384..41594895   512   0x0
  703: [375296..375807]: 40643960..40644471   512   0x0
  704: [375808..376319]: 40478632..40479143   512   0x0
  705: [376320..376831]: 40992008..40992519   512   0x0
  706: [376832..377343]: 40964216..40964727   512   0x0
  707: [377344..377855]: 41450224..41450735   512   0x0
  708: [377856..378367]: 41969560..41970071   512   0x0
  709: [378368..378879]: 41273416..41273927   512   0x0
  710: [378880..379391]: 40837280..40837791   512   0x0
  711: [379392..379903]: 41440512..41441023   512   0x0
  712: [379904..380415]: 40415224..40415735   512   0x0
  713: [380416..380927]: 40278528..40279039   512   0x0
  714: [380928..381439]: 40283360..40283871   512   0x0
  715: [381440..381951]: 40844216..40844727   512   0x0
  716: [381952..382463]: 41220200..41220711   512   0x0
  717: [382464..382975]: 41916288..41916799   512   0x0
  718: [382976..383487]: 40860256..40860767   512   0x0
  719: [383488..383999]: 39948008..39948519   512   0x0
  720: [384000..384511]: 40864080..40864591   512   0x0
  721: [384512..385023]: 41823848..41824359   512   0x0
  722: [385024..385535]: 41875224..41875735   512   0x0
  723: [385536..386047]: 40905368..40905879   512   0x0
  724: [386048..386559]: 41268600..41269111   512   0x0
  725: [386560..387071]: 41799352..41799863   512   0x0
  726: [387072..387583]: 40575928..40576439   512   0x0
  727: [387584..388095]: 41197520..41198031   512   0x0
  728: [388096..388607]: 40449728..40450239   512   0x0
  729: [388608..389119]: 41949000..41949511   512   0x0
  730: [389120..389631]: 40598328..40598839   512   0x0
  731: [389632..390143]: 40760000..40760511   512   0x0
  732: [390144..390655]: 41316944..41317455   512   0x0
  733: [390656..391167]: 41092320..41092831   512   0x0
  734: [391168..391679]: 40106800..40107311   512   0x0
  735: [391680..392191]: 41787856..41788367   512   0x0
  736: [392192..392703]: 40029496..40030007   512   0x0
  737: [392704..393215]: 41610488..41610999   512   0x0
  738: [393216..393727]: 41736560..41737071   512   0x0
  739: [393728..394239]: 41653464..41653975   512   0x0
  740: [394240..394751]: 41454104..41454615   512   0x0
  741: [394752..395263]: 40612216..40612727   512   0x0
  742: [395264..395775]: 41095496..41096007   512   0x0
  743: [395776..396287]: 41529920..41530431   512   0x0
  744: [396288..396799]: 41475544..41476055   512   0x0
  745: [396800..397311]: 40218424..40218935   512   0x0
  746: [397312..397823]: 41444888..41445399   512   0x0
  747: [397824..398335]: 40393408..40393919   512   0x0
  748: [398336..398847]: 40238760..40239271   512   0x0
  749: [398848..399359]: 41899816..41900327   512   0x0
  750: [399360..399871]: 40076552..40077063   512   0x0
  751: [399872..400383]: 40977824..40978335   512   0x0
  752: [400384..400895]: 40037224..40037735   512   0x0
  753: [400896..401407]: 41536472..41536983   512   0x0
  754: [401408..401919]: 40042312..40042823   512   0x0
  755: [401920..402431]: 40280848..40281359   512   0x0
  756: [402432..402943]: 40693776..40694287   512   0x0
  757: [402944..403455]: 41324256..41324767   512   0x0
  758: [403456..403967]: 40812752..40813263   512   0x0
  759: [403968..404479]: 41214544..41215055   512   0x0
  760: [404480..404991]: 39937024..39937535   512   0x0
  761: [404992..405503]: 40139880..40140391   512   0x0
  762: [405504..406015]: 41015120..41015631   512   0x0
  763: [406016..406527]: 40232936..40233447   512   0x0
  764: [406528..407039]: 41429120..41429631   512   0x0
  765: [407040..407551]: 41599344..41599855   512   0x0
  766: [407552..408063]: 41965536..41966047   512   0x0
  767: [408064..408575]: 40944608..40945119   512   0x0
  768: [408576..409087]: 40830200..40830711   512   0x0
  769: [409088..409599]: 41099088..41099599   512   0x0
  770: [409600..410111]: 40589432..40589943   512   0x0
  771: [410112..410623]: 41531976..41532487   512   0x0
  772: [410624..411135]: 40471272..40471783   512   0x0
  773: [411136..411647]: 40747824..40748335   512   0x0
  774: [411648..412159]: 40465256..40465767   512   0x0
  775: [412160..412671]: 40561720..40562231   512   0x0
  776: [412672..413183]: 41802664..41803175   512   0x0
  777: [413184..413695]: 40930400..40930911   512   0x0
  778: [413696..414207]: 40873256..40873767   512   0x0
  779: [414208..414719]: 39961272..39961783   512   0x0
  780: [414720..415743]: 44592672..44593695  1024   0x0
  781: [415744..416255]: 39981600..39982111   512   0x0
  782: [416256..416767]: 40765272..40765783   512   0x0
  783: [416768..417279]: 41588640..41589151   512   0x0
  784: [417280..417791]: 40724408..40724919   512   0x0
  785: [417792..418303]: 41602984..41603495   512   0x0
  786: [418304..418815]: 41380080..41380591   512   0x0
  787: [418816..419327]: 41594896..41595407   512   0x0
  788: [419328..419839]: 40242832..40243343   512   0x0
  789: [419840..420351]: 41184512..41185023   512   0x0
  790: [420352..420863]: 41985208..41985719   512   0x0
  791: [420864..421375]: 40644472..40644983   512   0x0
  792: [421376..421887]: 40479144..40479655   512   0x0
  793: [421888..422399]: 40992520..40993031   512   0x0
  794: [422400..422911]: 40407232..40407743   512   0x0
  795: [422912..423423]: 40964728..40965239   512   0x0
  796: [423424..423935]: 39942072..39942583   512   0x0
  797: [423936..424447]: 41466536..41467047   512   0x0
  798: [424448..424959]: 40225592..40226103   512   0x0
  799: [424960..425471]: 40551696..40552207   512   0x0
  800: [425472..425983]: 40128472..40128983   512   0x0
  801: [425984..427007]: 44593696..44594719  1024   0x0
  802: [427008..427519]: 41450736..41451247   512   0x0
  803: [427520..428543]: 44594720..44595743  1024   0x0
  804: [428544..429055]: 41970072..41970583   512   0x0
  805: [429056..429567]: 41273928..41274439   512   0x0
  806: [429568..430079]: 41361216..41361727   512   0x0
  807: [430080..430591]: 41689184..41689695   512   0x0
  808: [430592..431103]: 40837792..40838303   512   0x0
  809: [431104..431615]: 41441024..41441535   512   0x0
  810: [431616..432127]: 40519160..40519671   512   0x0
  811: [432128..432639]: 40415736..40416247   512   0x0
  812: [432640..433151]: 40279040..40279551   512   0x0
  813: [433152..433663]: 40283872..40284383   512   0x0
  814: [433664..434175]: 40844728..40845239   512   0x0
  815: [434176..434687]: 41164672..41165183   512   0x0
  816: [434688..435199]: 41220712..41221223   512   0x0
  817: [435200..435711]: 41916800..41917311   512   0x0
  818: [435712..436223]: 39957440..39957951   512   0x0
  819: [436224..437247]: 45496648..45497671  1024   0x0
  820: [437248..437759]: 40927080..40927591   512   0x0
  821: [437760..438783]: 44595744..44596767  1024   0x0
  822: [438784..440319]: 45497672..45499207  1536   0x0
  823: [440320..440831]: 41090536..41091047   512   0x0
  824: [440832..441343]: 40336616..40337127   512   0x0
  825: [441344..441855]: 40860768..40861279   512   0x0
  826: [441856..442879]: 44596768..44597791  1024   0x0
  827: [442880..443391]: 40289768..40290279   512   0x0
  828: [443392..444415]: 44275392..44276415  1024   0x0
  829: [444416..444927]: 41137328..41137839   512   0x0
  830: [444928..445951]: 44597792..44598815  1024   0x0
  831: [445952..446463]: 41918592..41919103   512   0x0
  832: [446464..447487]: 45499208..45500231  1024   0x0
  833: [447488..447999]: 40787120..40787631   512   0x0
  834: [448000..449023]: 44577400..44578423  1024   0x0
  835: [449024..449535]: 39948520..39949031   512   0x0
  836: [449536..450047]: 40864592..40865103   512   0x0
  837: [450048..450559]: 41824360..41824871   512   0x0
  838: [450560..451071]: 41875736..41876247   512   0x0
  839: [451072..451583]: 40905880..40906391   512   0x0
  840: [451584..452095]: 41269112..41269623   512   0x0
  841: [452096..452607]: 41435192..41435703   512   0x0
  842: [452608..453119]: 42001720..42002231   512   0x0
  843: [453120..453631]: 41799864..41800375   512   0x0
  844: [453632..454143]: 40637832..40638343   512   0x0
  845: [454144..454655]: 41929600..41930111   512   0x0
  846: [454656..455167]: 40576440..40576951   512   0x0
  847: [455168..455679]: 41198032..41198543   512   0x0
  848: [455680..456191]: 40431640..40432151   512   0x0
  849: [456192..456703]: 40450240..40450751   512   0x0
  850: [456704..457215]: 40539664..40540175   512   0x0
  851: [457216..457727]: 41821640..41822151   512   0x0
  852: [457728..458239]: 41889416..41889927   512   0x0
  853: [458240..458751]: 40254464..40254975   512   0x0
  854: [458752..459263]: 41616880..41617391   512   0x0
  855: [459264..459775]: 41641960..41642471   512   0x0
  856: [459776..460287]: 41949512..41950023   512   0x0
  857: [460288..460303]: 546895240..546895255    16   0x1

After defrag:

# compsize mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst
Processed 1 file, 3 regular extents (3 refs), 0 inline.
Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL      100%      449M         449M         224M
none       100%      449M         449M         224M

# xfs_io -c "fiemap -v" mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst
mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst:
  EXT: FILE-OFFSET      BLOCK-RANGE           TOTAL FLAGS
    0: [0..262143]:     406841344..407103487 262144   0x0
    1: [262144..460287]: 436002368..436200511 198144   0x0
    2: [460288..460303]: 546895240..546895255     16   0x1

In fact fiemap "TOTAL" adds up correctly to the actual file size here.
So maybe it is actually compsize lying with "Disk Usage" or something else weird happening.
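To double-check the "adds up correctly" observation, the TOTAL column of `xfs_io -c "fiemap -v"` can be summed programmatically. Below is a minimal sketch; the parsing assumes the exact output format shown above, with TOTAL in 512-byte sectors. Note that this sums *referenced* ranges only, which is exactly why fiemap cannot see the unreferenced parts of bookend extents that compsize counts in "Disk Usage":

```python
import re

def fiemap_total_bytes(fiemap_output: str) -> int:
    """Sum the TOTAL column (512-byte sectors) of `xfs_io -c 'fiemap -v'` output."""
    total_sectors = 0
    for line in fiemap_output.splitlines():
        # Extent lines look like: "  0: [0..262143]: 406841344..407103487 262144 0x0"
        m = re.match(r"\s*\d+:\s*\[\d+\.\.\d+\]:\s*\d+\.\.\d+\s+(\d+)\s+0x[0-9a-fA-F]+",
                     line)
        if m:
            total_sectors += int(m.group(1))
    return total_sectors * 512

# The three extents of the defragged file from this thread:
sample = """\
  EXT: FILE-OFFSET      BLOCK-RANGE           TOTAL FLAGS
    0: [0..262143]:     406841344..407103487 262144   0x0
    1: [262144..460287]: 436002368..436200511 198144   0x0
    2: [460288..460303]: 546895240..546895255     16   0x1
"""
print(fiemap_total_bytes(sample) // (1 << 20), "MiB")  # prints: 224 MiB
```

The result matches the 224M "Referenced" that compsize reports, not the 449M "Disk Usage".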


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: 'btrfs filesystem defragment' makes files explode in size, especially fallocated ones
  2024-08-06 12:17                   ` Hanabishi
@ 2024-08-06 13:22                     ` Hanabishi
  2024-08-06 22:18                       ` Qu Wenruo
  0 siblings, 1 reply; 19+ messages in thread
From: Hanabishi @ 2024-08-06 13:22 UTC (permalink / raw)
  To: Qu Wenruo, linux-btrfs

On 8/6/24 12:17, Hanabishi wrote:

> In fact fiemap "TOTAL" adds up correctly to the actual file size here.
> So maybe it is actually compsize lying with "Disk Usage" or something else weird happening.

I reproduced the results on a dedicated disk.
No, compsize is not lying. Confirmed by looking at total fs usage.

# compsize mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst
Processed 1 file, 3 regular extents (3 refs), 0 inline.
Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL      100%      449M         449M         224M
none       100%      449M         449M         224M

# btrfs filesystem usage /mnt
Overall:
     Device size:		 29.88GiB
     Device allocated:		  1.52GiB
     Device unallocated:		 28.36GiB
     Device missing:		    0.00B
     Device slack:		    0.00B
     Used:			450.82MiB
     Free (estimated):		 28.92GiB	(min: 14.74GiB)
     Free (statfs, df):		 28.92GiB
     Data ratio:			     1.00
     Metadata ratio:		     2.00
     Global reserve:		  5.50MiB	(used: 16.00KiB)
     Multiple profiles:		       no

Data,single: Size:1.00GiB, Used:449.51MiB (43.90%)
    /dev/sdc1	  1.00GiB

Metadata,DUP: Size:256.00MiB, Used:656.00KiB (0.25%)
    /dev/sdc1	512.00MiB

System,DUP: Size:8.00MiB, Used:16.00KiB (0.20%)
    /dev/sdc1	 16.00MiB

Unallocated:
    /dev/sdc1	 28.36GiB

Notice that the space overhead does *not* belong to metadata. It is the actual data space wasted. So the problem is real.
Which also means that fiemap is the one who lies here.

# xfs_io -c "fiemap -v" mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst
mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst:
  EXT: FILE-OFFSET      BLOCK-RANGE       TOTAL FLAGS
    0: [0..460287]:     7335440..7795727 460288   0x0
    1: [460288..460303]: 7335424..7335439     16   0x1



* Re: 'btrfs filesystem defragment' makes files explode in size, especially fallocated ones
  2024-08-06 12:08                   ` Hanabishi
@ 2024-08-06 22:10                     ` Qu Wenruo
  2024-08-06 22:42                       ` Hanabishi
  0 siblings, 1 reply; 19+ messages in thread
From: Qu Wenruo @ 2024-08-06 22:10 UTC (permalink / raw)
  To: Hanabishi, linux-btrfs



On 2024/8/6 21:38, Hanabishi wrote:
> On 8/6/24 11:23, Qu Wenruo wrote:
[...]
>
>> If we try to lock the defrag range, to ensure them to land in a larger
>> extent, I'm 100% sure MM guys won't be happy, it's blocking the most
>> common way to reclaim memory.
>
> Hmm, but couldn't Btrfs simply preallocate that space? I copied files
> much larger in size than the page cache and even entire RAM, they are
> totally fine as you could guess.

For preallocation, welcome to the rabbit hole.

TL;DR: preallocation on btrfs is never reliable; it doesn't even
guarantee that the next write will succeed.

The biggest reason here is snapshot.

Even if we preallocate the range, the end user is still fully allowed to
take any snapshot.

And a preallocated range shared by multiple subvolumes can never be
overwritten in place, causing the same problem.


As you have already experienced, if set to NOCOW, everything will be
(mostly) fine, just as all the other non-COW filesystems.

But you're using btrfs for its super fast snapshot, and that will force
data COW, causing all the complexity.

> Is moving extents under the hood that different from copying files around?

Ext4 uses move_extent to do defrag, but unfortunately we do not go that
route; as mentioned, even preallocation is not reliable.

>
>> IIRC it's already in the document, although not that clear:
>>
>>    The value is only advisory and the final size of the extents may
>>    differ, depending on the state of the free space and fragmentation or
>>    other internal logic.
>>
>> To be honest, defrag is not recommended for modern extent based file
>> systems already, thus there is no longer a common and good example to
>> follow.
>>
>> And for COW file systems, along with btrfs' specific bookend behavior,
>> it brings a new level of complexity.
>>
>> So overall, if you're not sure what the defrag internal logic is, nor
>> have a clear problem you want to solve, do not defrag.
>
> Well, I went into this hole for a reason.
> I worked with a piece of software which writes files sequentially, but
> in a very primitive POSIX-compliant way. For reference, a ~17G file it
> produced was split into more than 1 million(!) extents, basically
> shredding the entire file into 16K pieces and producing a no-joke access
> performance penalty even on SSD. In fact I only noticed the problem
> because of the horrible disk performance with the file.
>
> And I even tried to write it in NOCOW mode, but it didn't help; the
> fragmentation level was the same. So it has nothing to do with CoW, it's
> Btrfs itself not really getting the intentions of the software.
> I'm not sure how it would behave with other filesystems, but to me it
> doesn't really look like an FS fault anyway.

To me, any fs will follow the sync/fsync request from the user space
process, so if the tool wants fragmentation, it will get fragmentation.
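The "tool wants fragmentation" pattern can be sketched as follows. This only demonstrates the I/O pattern (an fsync after every 16 KiB chunk, the fragment size from the report above); it is filesystem-agnostic, so resulting extent counts on a real btrfs volume are not shown:

```python
import os
import tempfile

CHUNK = 16 * 1024  # the 16K fragment size observed in the report

def write_fragmented(path: str, data: bytes) -> None:
    """Anti-pattern: fsync after every small write, forcing immediate allocation."""
    with open(path, "wb") as f:
        for off in range(0, len(data), CHUNK):
            f.write(data[off:off + CHUNK])
            f.flush()
            os.fsync(f.fileno())  # each chunk must be placed on disk before the next

def write_batched(path: str, data: bytes) -> None:
    """Let the page cache batch the writes; one sync at the end gives durability."""
    with open(path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())

data = os.urandom(1 << 20)  # 1 MiB of test data
d = tempfile.mkdtemp()
frag, batched = os.path.join(d, "frag"), os.path.join(d, "batched")
write_fragmented(frag, data)
write_batched(batched, data)
with open(frag, "rb") as a, open(batched, "rb") as b:
    print("identical contents:", a.read() == b.read())
```

Both files end up byte-identical; the difference is only in how many separately-synced allocations the filesystem was forced to make along the way.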

>
> So I ended up falling back to the old good defragmentation. Discovering
> the reported issue along the way, becaming a double-trouble for me.
>

In that case, if you do not use snapshots for that subvolume, it's
recommended to go with NOCOW first, then preallocate space for the log
file. This way, the log file always uses a continuous range on disk.

And finally go with defrag (with a high enough writeback threshold) to
reduce the number of extents (fully internal, won't even be reported by
fiemap).
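A sketch of the recipe above using Python's stdlib. The path and size are made-up examples, and the btrfs-specific NOCOW step (`chattr +C` on the still-empty file or its parent directory) has to happen first and is not shown here:

```python
import os
import tempfile

def create_prealloc_log(path: str, size: int) -> int:
    """Open a log file and reserve its full size up front with fallocate."""
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
    # NOTE: per this thread, on btrfs this only yields a contiguous, stable
    # range if the file is NOCOW (chattr +C before any data) and the
    # subvolume is not snapshotted.
    os.posix_fallocate(fd, 0, size)
    return fd

d = tempfile.mkdtemp()
log_path = os.path.join(d, "app.log")        # hypothetical log file
fd = create_prealloc_log(log_path, 1 << 20)  # hypothetical 1 MiB reservation
os.write(fd, b"log line\n")                  # writes land inside the reserved range
os.close(fd)
print(os.path.getsize(log_path))  # prints: 1048576
```

`posix_fallocate` extends the file to `size` immediately, so later appends never need new allocations until the reservation is used up.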


BTW, I forgot to mention: regarding your previous reply about power loss
of a fs, aggressive writeback doesn't help btrfs at least.

Even if we aggressively write back the dirty data, our metadata is
protected purely by COW and a transactional system.

It means that even if you have written 10GiB of new data, as long as our
transaction is not committed, you will only get the old data back after
a power loss (unless the data is explicitly fsynced).
That's another point very different from old non-COW filesystems.

Instead, "commit=" with a lower value is more helpful for btrfs, though
that would cause more metadata writes.

Thanks,
Qu


* Re: 'btrfs filesystem defragment' makes files explode in size, especially fallocated ones
  2024-08-06 13:22                     ` Hanabishi
@ 2024-08-06 22:18                       ` Qu Wenruo
  2024-08-06 22:55                         ` Hanabishi
  0 siblings, 1 reply; 19+ messages in thread
From: Qu Wenruo @ 2024-08-06 22:18 UTC (permalink / raw)
  To: Hanabishi, linux-btrfs



On 2024/8/6 22:52, Hanabishi wrote:
> On 8/6/24 12:17, Hanabishi wrote:
> 
>> In fact fiemap "TOTAL" adds up correctly to the actual file size here.
>> So maybe it is actually compsize lying with "Disk Usage" or something 
>> else weird happening.
> 
> I reproduced the results on a dedicated disk.
> No, compsize is not lying. Confirmed by looking at total fs usage.
> 
> # compsize mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst
> Processed 1 file, 3 regular extents (3 refs), 0 inline.
> Type       Perc     Disk Usage   Uncompressed Referenced
> TOTAL      100%      449M         449M         224M
> none       100%      449M         449M         224M
> 
> # btrfs filesystem usage /mnt
> Overall:
>      Device size:         29.88GiB
>      Device allocated:          1.52GiB
>      Device unallocated:         28.36GiB
>      Device missing:            0.00B
>      Device slack:            0.00B
>      Used:            450.82MiB
>      Free (estimated):         28.92GiB    (min: 14.74GiB)
>      Free (statfs, df):         28.92GiB
>      Data ratio:                 1.00
>      Metadata ratio:             2.00
>      Global reserve:          5.50MiB    (used: 16.00KiB)
>      Multiple profiles:               no
> 
> Data,single: Size:1.00GiB, Used:449.51MiB (43.90%)
>     /dev/sdc1      1.00GiB
> 
> Metadata,DUP: Size:256.00MiB, Used:656.00KiB (0.25%)
>     /dev/sdc1    512.00MiB
> 
> System,DUP: Size:8.00MiB, Used:16.00KiB (0.20%)
>     /dev/sdc1     16.00MiB
> 
> Unallocated:
>     /dev/sdc1     28.36GiB
> 
> Notice that the space overhead does *not* belong to metadata. It is the 
> actual data space wasted. So the problem is real.
> Which also means that fiemap is the one who lies here.
> 
> # xfs_io -c "fiemap -v" mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst
> mingw-w64-gcc-13.1.0-1-x86_64.pkg.tar.zst:
>   EXT: FILE-OFFSET      BLOCK-RANGE       TOTAL FLAGS
>     0: [0..460287]:     7335440..7795727 460288   0x0
>     1: [460288..460303]: 7335424..7335439     16   0x1
> 
> 

I'm pretty sure it's the last extent causing the problem.
It's still referring to the old large preallocated extent, and btrfs can
only free the whole extent when no part of it is referenced any more.

But since it's the last extent, it doesn't get fully defragged, because
we believe it cannot be merged with other extents.

In that case, preallocation with COW is causing the problem.
If you sync the file without preallocation (but with COW), defrag should 
work fine.

Or if you sync the file with preallocation but without COW, defrag 
should also work fine (well, in that case you won't even need to defrag).


There is an attempt to enforce defragging for such preallocated extents, 
but not yet merged upstream due to interface change:

https://lore.kernel.org/linux-btrfs/cover.1710213625.git.wqu@suse.com/T/

Thanks,
Qu


* Re: 'btrfs filesystem defragment' makes files explode in size, especially fallocated ones
  2024-08-06 22:10                     ` Qu Wenruo
@ 2024-08-06 22:42                       ` Hanabishi
  2024-08-06 22:51                         ` Qu Wenruo
  0 siblings, 1 reply; 19+ messages in thread
From: Hanabishi @ 2024-08-06 22:42 UTC (permalink / raw)
  To: Qu Wenruo, linux-btrfs

On 8/6/24 22:10, Qu Wenruo wrote:

> But you're using btrfs for its super fast snapshot, and that will force
> data COW, causing all the complexity.

For me the data checksumming is more of a selling point. I.e. yes, using Btrfs in NOCOW mode kinda defeats the purpose.

> It means, even you have written 10GiB new data, as long as our
> transaction is not committed, you will only get all the old data after a
> power loss (unless it's explicitly fsynced).
> That's another point very different from old non-COW filesystems.
> 
> Instead "commit=" with a lower value is more helpful for btrfs, but that
> would cause more metadata writes though.

What about the "flushoncommit" mount option? Does it make the data view more resilient?



* Re: 'btrfs filesystem defragment' makes files explode in size, especially fallocated ones
  2024-08-06 22:42                       ` Hanabishi
@ 2024-08-06 22:51                         ` Qu Wenruo
  2024-08-06 23:04                           ` Hanabishi
  0 siblings, 1 reply; 19+ messages in thread
From: Qu Wenruo @ 2024-08-06 22:51 UTC (permalink / raw)
  To: Hanabishi, linux-btrfs



On 2024/8/7 08:12, Hanabishi wrote:
> On 8/6/24 22:10, Qu Wenruo wrote:
>
>> But you're using btrfs for its super fast snapshot, and that will force
>> data COW, causing all the complexity.
>
> For me the data checksumming is more of a selling point. I.e. yes, using
> Btrfs in NOCOW mode kinda defeats the purpose.

In that data csum (and COW) case, I guess one has to consider very
carefully whether preallocation is really wanted.

Otherwise it's super easy to cause unexpected on-disk space waste.
(COW is already going to cause space waste, but preallocation amplifies
it much faster.)

>
>> It means, even you have written 10GiB new data, as long as our
>> transaction is not committed, you will only get all the old data after a
>> power loss (unless it's explicitly fsynced).
>> That's another point very different from old non-COW filesystems.
>>
>> Instead "commit=" with a lower value is more helpful for btrfs, but that
>> would cause more metadata writes though.
>
> What about the "flushoncommit" mount option? Does it make the data view
> more resilient?
>

If combined with a lower commit= value, yes, it will make the data view
more consistent with transactions.

But as usual, it amplifies the metadata writes, which are already pretty
expensive for btrfs.

And it may worsen the extra space usage problem too, if using data COW
(as it forces dirty page writeback at every transaction commit, causing
smaller writes).
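For reference, the two options under discussion would be combined roughly like this in /etc/fstab (the device, mountpoint, and 5-second interval are hypothetical examples, not recommendations):

```
/dev/sdc1  /mnt  btrfs  commit=5,flushoncommit  0 0
```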

(I guess a UPS would be better for everyone except the budget?)

Thanks,
Qu


* Re: 'btrfs filesystem defragment' makes files explode in size, especially fallocated ones
  2024-08-06 22:18                       ` Qu Wenruo
@ 2024-08-06 22:55                         ` Hanabishi
  0 siblings, 0 replies; 19+ messages in thread
From: Hanabishi @ 2024-08-06 22:55 UTC (permalink / raw)
  To: Qu Wenruo, linux-btrfs

On 8/6/24 22:18, Qu Wenruo wrote:

> There is an attempt to enforce defragging for such preallocated extents, but not yet merged upstream due to interface change:
> 
> https://lore.kernel.org/linux-btrfs/cover.1710213625.git.wqu@suse.com/T/

So, can we take it that you are aware of the problem at this point? Because the main goal of my report is to make the devs aware.

For me personally, the "Don't use defragment. Ever." advice is enough. If I really need defragmentation, I can manually copy files without reflinking (rsync never reflinks anyway) or from another drive. Not a big deal really.



* Re: 'btrfs filesystem defragment' makes files explode in size, especially fallocated ones
  2024-08-06 22:51                         ` Qu Wenruo
@ 2024-08-06 23:04                           ` Hanabishi
  0 siblings, 0 replies; 19+ messages in thread
From: Hanabishi @ 2024-08-06 23:04 UTC (permalink / raw)
  To: Qu Wenruo, linux-btrfs

On 8/6/24 22:51, Qu Wenruo wrote:

> (I guess a UPS would be better for everyone except the budget?)

Of course, I have one. But you underestimate the level of my paranoia :)



end of thread, other threads:[~2024-08-06 23:04 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-08-04  9:20 'btrfs filesystem defragment' makes files explode in size, especially fallocated ones i.r.e.c.c.a.k.u.n+kernel.org
2024-08-04 22:19 ` Qu Wenruo
2024-08-05 18:16   ` Hanabishi
2024-08-05 22:47     ` Qu Wenruo
2024-08-06  7:19       ` Hanabishi
2024-08-06  9:55         ` Qu Wenruo
2024-08-06 10:23           ` Hanabishi
2024-08-06 10:42             ` Qu Wenruo
2024-08-06 11:05               ` Hanabishi
2024-08-06 11:23                 ` Qu Wenruo
2024-08-06 12:08                   ` Hanabishi
2024-08-06 22:10                     ` Qu Wenruo
2024-08-06 22:42                       ` Hanabishi
2024-08-06 22:51                         ` Qu Wenruo
2024-08-06 23:04                           ` Hanabishi
2024-08-06 12:17                   ` Hanabishi
2024-08-06 13:22                     ` Hanabishi
2024-08-06 22:18                       ` Qu Wenruo
2024-08-06 22:55                         ` Hanabishi
