From: Martin Steigerwald <martin@lichtvoll.de>
To: linux-btrfs@vger.kernel.org, Tim Cuthbertson <ratcheer@gmail.com>,
	Qu Wenruo <quwenruo.btrfs@gmx.com>, Qu Wenruo <wqu@suse.com>
Subject: Re: Scrub of my nvme SSD has slowed by about 2/3
Date: Tue, 11 Jul 2023 13:26:24 +0200
Message-ID: <5977988.lOV4Wx5bFT@lichtvoll.de>
In-Reply-To: <151906c5-6b6f-bb57-ab68-f8bb2edad1a0@suse.com>

Qu Wenruo - 11.07.23, 13:05:42 CEST:
> On 2023/7/11 18:56, Martin Steigerwald wrote:
> > Qu Wenruo - 11.07.23, 11:57:52 CEST:
> >> On 2023/7/11 17:25, Martin Steigerwald wrote:
> >>> Qu Wenruo - 11.07.23, 10:59:55 CEST:
> >>>> On 2023/7/11 13:52, Martin Steigerwald wrote:
> >>>>> Martin Steigerwald - 11.07.23, 07:49:43 CEST:
> >>>>>> I see about 180000 reads in 10 seconds in atop. I have seen
> >>>>>> latency values from 55 to 85 µs, which is highly unusual for an
> >>>>>> NVMe SSD ("avio" in atop¹).
[…]
> >>>> Mind trying the following branch?
> >>>> 
> >>>> https://github.com/adam900710/linux/tree/scrub_multi_thread
> >>>> 
> >>>> Or you can grab the commit on top and backport to any kernel >=
> >>>> 6.4.
> >>> 
> >>> Cherry-picking the commit on top of v6.4.3 led to a merge
> >>> conflict.
[…]
> >> Well, I have only tested that patch on that development branch,
> >> thus I cannot vouch for the result of the backport.
> >> 
> >> But still, here is the backported patch.
> >> 
> >> I'd recommend testing the functionality of scrub on some less
> >> important machine first, then on your production laptop, though.
> > 
> > I took this calculated risk.
> > 
> > However, while with the patch applied more kworker threads are
> > doing work, using 500-600% of CPU time in system (8 cores with
> > hyper-threading, so 16 logical cores), the result is even less
> > performance. Latency values got even worse, going up to 0.2 ms. An
> > unrelated BTRFS filesystem in another logical volume even stalled
> > for almost a second on (mostly) write accesses.
> > 
> > Scrubbing runs at about 650 to 750 MiB/s for a volume with about
> > 1.2 TiB of data, mostly in larger files. On a second attempt it was
> > even only 620 MiB/s, which is less than before the patch, when it
> > reached about 1 GiB/s. I made sure that no desktop search indexing
> > was interfering.
> > 
> > Oh, I forgot to mention: BTRFS uses xxhash here. However, it was
> > easily scrubbing at 1.5 to 2.5 GiB/s with 6.3. The filesystem uses
> > zstd compression and the free space tree (free space cache v2).
> > 
> > So from what I can see here, your patch made it worse.
> 
> Thanks for confirming; this at least proves it's not the hashing
> thread limit causing the regression.
> 
> Which is pretty weird, as the read pattern is in fact better than the
> original behavior: all reads are 64K (even if there are some holes,
> we are fine reading the garbage, which should reduce the IOPS
> workload), and we submit a batch of 8 such reads in one go.
> 
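As a side note, for anyone who wants to poke at this I/O shape from
user space: below is a minimal sketch that mimics the described
pattern (64K reads, eight queued and submitted in one go) using
liburing. It only reproduces the submission pattern for
experimentation; the kernel scrub code builds bios internally, so
this is an illustration of the I/O shape, not the actual
implementation.

/* Illustration only: mimic the scrub read pattern described above,
 * i.e. 64K reads submitted in batches of 8, from user space.
 * Build: cc -O2 scrub_pattern.c -o scrub_pattern -luring */
#include <fcntl.h>
#include <liburing.h>
#include <stdio.h>
#include <unistd.h>

#define READ_SIZE (64 * 1024)
#define BATCH     8

int main(int argc, char **argv)
{
	struct io_uring ring;
	struct io_uring_cqe *cqe;
	static char bufs[BATCH][READ_SIZE];
	off_t off = 0;
	int fd, i;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <file-or-device>\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDONLY);
	if (fd < 0 || io_uring_queue_init(BATCH, &ring, 0) < 0) {
		perror("setup");
		return 1;
	}

	/* Queue eight consecutive 64K reads, then submit them in one go. */
	for (i = 0; i < BATCH; i++) {
		struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);

		io_uring_prep_read(sqe, fd, bufs[i], READ_SIZE, off);
		off += READ_SIZE;
	}
	io_uring_submit(&ring);

	/* Reap all eight completions; cqe->res is bytes read or -errno. */
	for (i = 0; i < BATCH; i++) {
		if (io_uring_wait_cqe(&ring, &cqe) == 0) {
			if (cqe->res < 0)
				fprintf(stderr, "read %d failed: %d\n", i, cqe->res);
			io_uring_cqe_seen(&ring, cqe);
		}
	}

	io_uring_queue_exit(&ring);
	close(fd);
	return 0;
}
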
> BTW, what's the CPU usage of v6.3 kernel? Is it higher or lower?
> And what about the latency?

CPU usage is between 600 and 700% on 6.3.9, latency between 50 and
70 µs. And scrubbing speed is above 2 GiB/s, peaking at 2.27 GiB/s.
Later it went down a bit to 1.7 GiB/s, maybe due to background
activity.

I'd say the CPU usage to result (= scrubbing speed) ratio is much,
much better with 6.3. However, the latencies during scrubbing are
pretty much the same; I have even seen up to 0.2 ms.

> Currently I'm out of ideas; for now you can revert that testing
> patch.
> 
> If you're interested in more testing, you can apply the following
> small diff, which changes the batch size of scrub.
> 
> You can try either doubling or halving the number to see which change
> helps more.
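
(For readers without the attached diff at hand: in the 6.4 scrub
rewrite the batch size is a compile-time constant in fs/btrfs/scrub.c.
Assuming it is the SCRUB_STRIPES_PER_SCTX define, which is worth
verifying against your own tree, the doubling experiment would look
roughly like this:

--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@
-#define SCRUB_STRIPES_PER_SCTX		8
+#define SCRUB_STRIPES_PER_SCTX		16

Halving it would use 4 instead.)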

No time for further testing at the moment. Maybe at a later time.

It might be good if you put together a test setup yourself. Any
computer with an NVMe SSD should do, I think, unless there is
something very special about my laptop, which I doubt. That would
greatly reduce the turn-around time.

I think for now I am back at 6.3. It works. :)

-- 
Martin



Thread overview: 30+ messages
2023-07-03 20:19 Scrub of my nvme SSD has slowed by about 2/3 Tim Cuthbertson
2023-07-03 23:49 ` Qu Wenruo
2023-07-05  2:44   ` Qu Wenruo
2023-07-11  5:36     ` Martin Steigerwald
2023-07-11  5:33 ` Martin Steigerwald
2023-07-11  5:49   ` Martin Steigerwald
2023-07-11  5:52     ` Martin Steigerwald
2023-07-11  8:59       ` Qu Wenruo
2023-07-11  9:25         ` Martin Steigerwald
2023-07-11  9:57           ` Qu Wenruo
2023-07-11 10:56             ` Martin Steigerwald
2023-07-11 11:05               ` Qu Wenruo
2023-07-11 11:26                 ` Martin Steigerwald [this message]
2023-07-11 11:33                   ` Qu Wenruo
2023-07-11 11:47                     ` Martin Steigerwald
2023-07-14  0:28                     ` Qu Wenruo
2023-07-14  6:01                       ` Qu Wenruo
2023-07-14  6:58                         ` Martin Steigerwald
2023-07-16  9:57                       ` Sebastian Döring
2023-07-16 10:55                         ` Qu Wenruo
2023-07-16 16:01                           ` Sebastian Döring
2023-07-17  5:23                             ` Qu Wenruo
2023-07-12 11:02 ` Linux regression tracking #adding (Thorsten Leemhuis)
2023-07-19  6:42   ` Martin Steigerwald
2023-07-19  6:55     ` Martin Steigerwald
2023-08-29 12:17   ` Linux regression tracking #update (Thorsten Leemhuis)
2023-09-08 11:54     ` Martin Steigerwald
2023-09-08 22:03       ` Qu Wenruo
2023-09-09  8:06         ` Martin Steigerwald
2023-10-13 13:07         ` Martin Steigerwald
