From: Jesper Dangaard Brouer <hawk@kernel.org>
To: Yosry Ahmed <yosryahmed@google.com>
Cc: tj@kernel.org, cgroups@vger.kernel.org, shakeel.butt@linux.dev,
hannes@cmpxchg.org, lizefan.x@bytedance.com, longman@redhat.com,
kernel-team@cloudflare.com, linux-mm@kvack.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH V7 1/2] cgroup/rstat: Avoid thundering herd problem by kswapd across NUMA nodes
Date: Wed, 17 Jul 2024 18:36:28 +0200 [thread overview]
Message-ID: <100caebf-c11c-45c9-b864-d8562e2a5ac5@kernel.org> (raw)
In-Reply-To: <CAJD7tkYV3iwk-ZJcr_==V4e24yH-1NaCYFUL7wDaQEi8ZXqfqQ@mail.gmail.com>
[-- Attachment #1: Type: text/plain, Size: 2859 bytes --]
On 17/07/2024 02.35, Yosry Ahmed wrote:
> [..]
>>
>>
>> This is a clean (meaning no cadvisor interference) example of kswapd
>> starting simultaneously on many NUMA nodes, which in 27 out of 98 cases
>> hit the race (which is handled in V6 and V7).
>>
>> The BPF "cnt" maps are getting cleared every second, so this
>> approximates per-second numbers. This patch reduces pressure on the lock,
>> but we are still seeing (kfunc:vmlinux:cgroup_rstat_flush_locked) full
>> flushes approx 37 per sec (every 27 ms). On the positive side, the
>> ongoing_flusher mitigation stopped 98 per sec of these.
>>
>> In this clean kswapd case the patch removes the lock contention issue
>> for kswapd. The 27 lock_contended cases all seem to be related to the
>> 27 handled_race cases.
>>
>> The remaining high flush rate should also be addressed, and we should
>> also work on approaches to limit it, like my earlier proposal[1].
>
> I honestly don't think a high number of flushes is a problem on its
> own as long as we are not spending too much time flushing, especially
> when we have magnitude-based thresholding so we know there is
> something to flush (although it may not be relevant to what we are
> doing).
>
We are "spending too much time flushing"; see below.
> If we keep observing a lot of lock contention, one thing that I
> thought about is to have a variant of spin_lock with a timeout. This
> limits the flushing latency, instead of limiting the number of flushes
> (which I believe is the wrong metric to optimize).
>
> It also seems to me that we are doing a flush every 27ms, and your
> proposed threshold was once per 50ms. It doesn't seem like a
> fundamental difference.
>
Looking at the production numbers for the time the lock is held during level-0 (root cgroup) flushes:
@locked_time_level[0]:
[4M, 8M) 623 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ |
[8M, 16M) 860 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[16M, 32M) 295 |@@@@@@@@@@@@@@@@@ |
[32M, 64M) 275 |@@@@@@@@@@@@@@@@ |
The time is in nanoseconds, so M corresponds to ms (milliseconds).
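For reference, here is a minimal bpftrace sketch of how such a per-level
histogram can be collected (this is not the exact production script; it
assumes cgroup_rstat_flush_locked is attachable via kfunc/kretfunc, as in
the counters above, and approximates the lock hold time by the duration of
the flush call itself):

  kfunc:vmlinux:cgroup_rstat_flush_locked
  {
        /* remember start time and cgroup level for this thread */
        @start[tid] = nsecs;
        @lvl[tid] = args->cgrp->level;
  }

  kretfunc:vmlinux:cgroup_rstat_flush_locked
  /@start[tid]/
  {
        /* histogram of flush duration (ns), keyed by cgroup level */
        $level = @lvl[tid];
        @locked_time_level[$level] = hist(nsecs - @start[tid]);
        delete(@start[tid]);
        delete(@lvl[tid]);
  }

  /* like the "cnt" maps above, print and clear once per second */
  interval:s:1
  {
        print(@locked_time_level);
        clear(@locked_time_level);
  }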
With 36 flushes per second (as shown earlier) this is a flush every
27.7ms. It is not unreasonable (given the data above) that each flush
also takes around 27ms, which means we spend close to a full CPU second,
every second, on flushing. That is spending too much time flushing.
This roughly 1 second of CPU usage for kswapd is also quite clear in the
attached Grafana graph, from when the server was rebooted into this V7
kernel.
I chose 50ms because at the time I saw flushes taking around 30ms, and I
view the flush time as the queue service time. When the inter-arrival time
is shorter than the service time, a queue will form. So choosing 50ms as
the arrival interval gave me some headroom. As I mentioned earlier, this
threshold should optimally be measured dynamically.
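To put the queueing argument in numbers (using the figures above): with a
service time of ~30ms and an inter-arrival time of ~27.7ms, utilization is
30/27.7 ~= 1.08 > 1, so the wait queue on the lock can only grow; with 50ms
between flushes the utilization drops to 30/50 = 0.6, which leaves headroom
for flushes that take longer than average.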
--Jesper
[-- Attachment #2: 16m1244-new-v7-krn-redacted.png --]
[-- Type: image/png, Size: 139870 bytes --]
Thread overview: 28+ messages
2024-07-11 13:28 [PATCH V7 1/2] cgroup/rstat: Avoid thundering herd problem by kswapd across NUMA nodes Jesper Dangaard Brouer
2024-07-11 13:29 ` [PATCH V7 2/2 RFC] cgroup/rstat: add tracepoint for ongoing flusher waits Jesper Dangaard Brouer
2024-07-16 8:42 ` [PATCH V7 1/2] cgroup/rstat: Avoid thundering herd problem by kswapd across NUMA nodes Jesper Dangaard Brouer
2024-07-17 0:35 ` Yosry Ahmed
2024-07-17 3:00 ` Waiman Long
2024-07-17 16:05 ` Yosry Ahmed
2024-07-17 16:36 ` Jesper Dangaard Brouer [this message]
2024-07-17 16:49 ` Yosry Ahmed
2024-07-18 8:12 ` Jesper Dangaard Brouer
2024-07-18 15:55 ` Yosry Ahmed
2024-07-19 0:40 ` Shakeel Butt
2024-07-19 3:11 ` Yosry Ahmed
2024-07-19 23:01 ` Shakeel Butt
2024-07-19 7:54 ` Jesper Dangaard Brouer
2024-07-19 22:47 ` Shakeel Butt
2024-07-20 4:52 ` Yosry Ahmed
[not found] ` <CAJD7tkaypFa3Nk0jh_ZYJX8YB0i7h9VY2YFXMg7GKzSS+f8H5g@mail.gmail.com>
2024-07-20 15:05 ` Jesper Dangaard Brouer
2024-07-22 20:02 ` Shakeel Butt
2024-07-22 20:12 ` Yosry Ahmed
2024-07-22 21:32 ` Shakeel Butt
2024-07-22 22:58 ` Shakeel Butt
2024-07-23 6:24 ` Yosry Ahmed
2024-07-17 0:30 ` Yosry Ahmed
2024-07-17 7:32 ` Jesper Dangaard Brouer
2024-07-17 16:31 ` Yosry Ahmed
2024-07-17 18:17 ` Jesper Dangaard Brouer
2024-07-17 18:43 ` Yosry Ahmed
2024-07-19 15:07 ` Jesper Dangaard Brouer