Linux Btrfs filesystem development
From: Qu Wenruo <quwenruo.btrfs@gmx.com>
To: Jani Partanen <jiipee@sotapeli.fi>, Qu Wenruo <wqu@suse.com>,
	linux-btrfs@vger.kernel.org
Subject: Re: [PATCH 0/5] btrfs: scrub: improve the scrub performance
Date: Wed, 2 Aug 2023 10:20:29 +0800
Message-ID: <f00f20f4-de51-652b-67b2-35eaac1a7c89@gmx.com>
In-Reply-To: <a4ed97f4-5b9b-3958-8432-e45e7701ee1c@sotapeli.fi>



On 2023/8/2 10:15, Jani Partanen wrote:
> 
> On 02/08/2023 4.56, Qu Wenruo wrote:
>>
>> So the btrfs scrub report is correct, using the values from the
>> kernel.
>>
>> And considering the used space is around 600G, divided across 4 disks
>> (i.e. 3 data stripes + 1 parity stripe), it's not that weird, as we
>> would get around 200G per device (parity doesn't contribute to the
>> scrubbed bytes).
>>
>> Especially considering your metadata is RAID1C4, we should expect
>> slightly more than 200G per device.
>> It's the old report of less than 200G that doesn't seem correct.
>>
>> Would you mind providing the output of "btrfs fi usage <mnt>" to
>> verify my assumption?
>>
> btrfs fi usage /mnt/
> Overall:
>      Device size:                   1.16TiB
>      Device allocated:            844.25GiB
>      Device unallocated:          348.11GiB
>      Device missing:                  0.00B
>      Device slack:                    0.00B
>      Used:                        799.86GiB
>      Free (estimated):            289.58GiB      (min: 115.52GiB)
>      Free (statfs, df):           289.55GiB
>      Data ratio:                       1.33
>      Metadata ratio:                   4.00
>      Global reserve:              471.80MiB      (used: 0.00B)
>      Multiple profiles:                  no
> 
> Data,RAID5: Size:627.00GiB, Used:598.51GiB (95.46%)
>     /dev/sdb      209.00GiB
>     /dev/sdc      209.00GiB
>     /dev/sdd      209.00GiB
>     /dev/sde      209.00GiB

OK, my previous calculation is incorrect...

For each device there should be 209GiB used by RAID5 chunks, and only
3/4 of that contributes to the scrubbed data bytes.

Thus there seems to be some double accounting.
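
To illustrate, here is a back-of-the-envelope sketch (not btrfs code;
the figures are taken from the "btrfs fi usage" output above, and the
3/4 data fraction assumes parity is skipped as described):

#include <stdio.h>

int main(void)
{
	/* Per-device figures from the "btrfs fi usage" output above. */
	const double raid5_per_dev = 209.0;        /* GiB of RAID5 chunks */
	const double meta_per_dev  = 2.0;          /* GiB, RAID1C4 metadata (full copy per device) */
	const double sys_per_dev   = 64.0 / 1024;  /* GiB, RAID1C4 system chunk */

	/* 4 devices: each full RAID5 stripe is 3 data + 1 rotating
	 * parity, and parity does not count toward scrubbed data bytes. */
	const double data_fraction = 3.0 / 4.0;

	double data  = raid5_per_dev * data_fraction;
	double total = data + meta_per_dev + sys_per_dev;

	printf("expected data scrubbed per device: ~%.2f GiB\n", data);  /* ~156.75 */
	printf("expected total per device:         ~%.2f GiB\n", total); /* ~158.81 */
	return 0;
}

(The 1.33 data ratio in the usage output is the same 4/3 raw-to-logical
factor for 3+1 RAID5, and the 4.00 metadata ratio matches RAID1C4.)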

This situation definitely needs extra digging.

Thanks,
Qu

> 
> Metadata,RAID1C4: Size:2.00GiB, Used:472.56MiB (23.07%)
>     /dev/sdb        2.00GiB
>     /dev/sdc        2.00GiB
>     /dev/sdd        2.00GiB
>     /dev/sde        2.00GiB
> 
> System,RAID1C4: Size:64.00MiB, Used:64.00KiB (0.10%)
>     /dev/sdb       64.00MiB
>     /dev/sdc       64.00MiB
>     /dev/sdd       64.00MiB
>     /dev/sde       64.00MiB
> 
> Unallocated:
>     /dev/sdb       87.03GiB
>     /dev/sdc       87.03GiB
>     /dev/sdd       87.03GiB
> 
>     /dev/sde       87.03GiB
> 
> 
> There is one extra 2GB file now, so that's why it shows a little more usage now.
> 
> 
>> Sure, I'll CC you when refreshing the patchset, extra tests are always
>> appreciated.
>>
> Sounds good, thanks!
> 


Thread overview: 16+ messages
2023-07-28 11:14 [PATCH 0/5] btrfs: scrub: improve the scrub performance Qu Wenruo
2023-07-28 11:14 ` [PATCH 1/5] btrfs: scrub: avoid unnecessary extent tree search preparing stripes Qu Wenruo
2023-07-28 11:14 ` [PATCH 2/5] btrfs: scrub: avoid unnecessary csum " Qu Wenruo
2023-07-28 11:14 ` [PATCH 3/5] btrfs: scrub: fix grouping of read IO Qu Wenruo
2023-07-28 11:14 ` [PATCH 4/5] btrfs: scrub: don't go ordered workqueue for dev-replace Qu Wenruo
2023-07-28 11:14 ` [PATCH 5/5] btrfs: scrub: move write back of repaired sectors into scrub_stripe_read_repair_worker() Qu Wenruo
2023-07-28 12:38 ` [PATCH 0/5] btrfs: scrub: improve the scrub performance Martin Steigerwald
2023-07-28 16:50   ` David Sterba
2023-07-28 21:14     ` Martin Steigerwald
2023-08-01 20:14 ` Jani Partanen
2023-08-01 22:06   ` Qu Wenruo
2023-08-01 23:48     ` Jani Partanen
2023-08-02  1:56       ` Qu Wenruo
2023-08-02  2:15         ` Jani Partanen
2023-08-02  2:20           ` Qu Wenruo [this message]
2023-08-03  6:30             ` Qu Wenruo
