From: Dave Jones <davej@codemonkey.org.uk>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Mel Gorman <mgorman@techsingularity.net>,
Linux Kernel <linux-kernel@vger.kernel.org>,
mingo@kernel.org, Linus Torvalds <torvalds@linux-foundation.org>,
paul.gortmaker@windriver.com, valentin.schneider@arm.com
Subject: Re: weird loadavg on idle machine post 5.7
Date: Mon, 6 Jul 2020 17:20:57 -0400 [thread overview]
Message-ID: <20200706212057.GA18637@codemonkey.org.uk> (raw)
In-Reply-To: <20200706145952.GB597537@hirez.programming.kicks-ass.net>
On Mon, Jul 06, 2020 at 04:59:52PM +0200, Peter Zijlstra wrote:
> On Fri, Jul 03, 2020 at 04:51:53PM -0400, Dave Jones wrote:
> > On Fri, Jul 03, 2020 at 12:40:33PM +0200, Peter Zijlstra wrote:
> >
> > looked promising the first few hours, but as soon as it hit four hours
> > of uptime, loadavg spiked and is now pinned to at least 1.00
>
> OK, lots of cursing later, I now have the below...
>
> The TL;DR is that while schedule() doesn't change p->state once it
> starts, it does read it quite a bit, and ttwu() will actually change it
> to TASK_WAKING. So if ttwu() changes it to WAKING before schedule()
> reads it to do loadavg accounting, things go sideways.
>
> The below is extra complicated by the fact that I've had to scrounge up
> a bunch of load-store ordering without actually adding barriers. It adds
> yet another control dependency to ttwu(), so take that C standard :-)
Man this stuff is subtle. I could've read this a hundred times and never
come close to spotting it.
Basically me reading scheduler code:
http://www.quickmeme.com/img/96/9642ed212bbced00885592b39880ec55218e922245e0637cf94db2e41857d558.jpg
> I've booted it, and built a few kernels with it and checked loadavg
> drops to 0 after each build, so from that pov all is well, but since
> I'm not confident I can reproduce the issue, I can't tell this actually
> fixes anything, except maybe phantoms of my imagination.
Five hours in, looking good so far. I think you nailed it.
Dave
Thread overview: 20+ messages
2020-07-02 17:15 weird loadavg on idle machine post 5.7 Dave Jones
2020-07-02 19:46 ` Dave Jones
2020-07-02 21:15 ` Paul Gortmaker
2020-07-03 13:23 ` Paul Gortmaker
2020-07-02 21:36 ` Mel Gorman
2020-07-02 23:11 ` Michal Kubecek
2020-07-02 23:24 ` Dave Jones
2020-07-03 9:02 ` Peter Zijlstra
2020-07-03 10:40 ` Peter Zijlstra
2020-07-03 20:51 ` Dave Jones
2020-07-06 14:59 ` Peter Zijlstra
2020-07-06 21:20 ` Dave Jones [this message]
2020-07-07 7:48 ` Peter Zijlstra
2020-07-06 23:56 ` Valentin Schneider
2020-07-07 8:17 ` Peter Zijlstra
2020-07-07 10:20 ` Valentin Schneider
2020-07-07 10:29 ` Peter Zijlstra
2020-07-08 9:46 ` [tip: sched/urgent] sched: Fix loadavg accounting race tip-bot2 for Peter Zijlstra
2020-07-07 9:20 ` weird loadavg on idle machine post 5.7 Qais Yousef
2020-07-07 9:47 ` Peter Zijlstra