From: Phil Auld <pauld@redhat.com>
To: sashal@kernel.org
Cc: tglx@linutronix.de, hpa@zytor.com, peterz@infradead.org,
mingo@kernel.org, torvalds@linux-foundation.org,
linux-kernel@vger.kernel.org, bsegall@google.com,
stable@vger.kernel.org, anton@ozlabs.org
Subject: Re: [tip:sched/urgent] sched/fair: Limit sched_cfs_period_timer() loop to avoid hard lockup
Date: Tue, 16 Apr 2019 15:26:43 -0400 [thread overview]
Message-ID: <20190416192642.GF17860@pauld.bos.csb> (raw)
In-Reply-To: <tip-2e8e19226398db8265a8e675fcc0118b9e80c9e8@git.kernel.org>
Hi Sasha,
On Tue, Apr 16, 2019 at 08:32:09AM -0700 tip-bot for Phil Auld wrote:
> Commit-ID: 2e8e19226398db8265a8e675fcc0118b9e80c9e8
> Gitweb: https://git.kernel.org/tip/2e8e19226398db8265a8e675fcc0118b9e80c9e8
> Author: Phil Auld <pauld@redhat.com>
> AuthorDate: Tue, 19 Mar 2019 09:00:05 -0400
> Committer: Ingo Molnar <mingo@kernel.org>
> CommitDate: Tue, 16 Apr 2019 16:50:05 +0200
>
> sched/fair: Limit sched_cfs_period_timer() loop to avoid hard lockup
>
> With extremely short cfs_period_us setting on a parent task group with a large
> number of children the for loop in sched_cfs_period_timer() can run until the
> watchdog fires. There is no guarantee that the call to hrtimer_forward_now()
> will ever return 0. The large number of children can make
> do_sched_cfs_period_timer() take longer than the period.
>
> NMI watchdog: Watchdog detected hard LOCKUP on cpu 24
> RIP: 0010:tg_nop+0x0/0x10
> <IRQ>
> walk_tg_tree_from+0x29/0xb0
> unthrottle_cfs_rq+0xe0/0x1a0
> distribute_cfs_runtime+0xd3/0xf0
> sched_cfs_period_timer+0xcb/0x160
> ? sched_cfs_slack_timer+0xd0/0xd0
> __hrtimer_run_queues+0xfb/0x270
> hrtimer_interrupt+0x122/0x270
> smp_apic_timer_interrupt+0x6a/0x140
> apic_timer_interrupt+0xf/0x20
> </IRQ>
>
> To prevent this we add protection to the loop that detects when the loop has run
> too many times and scales the period and quota up, proportionally, so that the timer
> can complete before the next period expires. This preserves the relative runtime
> quota while preventing the hard lockup.
>
> A warning is issued reporting this state and the new values.
>
> Signed-off-by: Phil Auld <pauld@redhat.com>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> Cc: <stable@vger.kernel.org>
> Cc: Anton Blanchard <anton@ozlabs.org>
> Cc: Ben Segall <bsegall@google.com>
> Cc: Linus Torvalds <torvalds@linux-foundation.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Link: https://lkml.kernel.org/r/20190319130005.25492-1-pauld@redhat.com
> Signed-off-by: Ingo Molnar <mingo@kernel.org>
> ---
The above commit won't apply to the stable trees. Below is an updated version
that will apply to v5.0.7, v4.19.34, v4.14.111, v4.9.168, and v4.4.178 with
increasing offsets. I believe v3.18.138 will require more work, so that one is
not included.
There is only a minor change to context; none of the actual changes in the
patch are different.
Thanks,
Phil
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 310d0637fe4b..f0380229b6f2 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4859,12 +4859,15 @@ static enum hrtimer_restart sched_cfs_slack_timer(struct hrtimer *timer)
return HRTIMER_NORESTART;
}
+extern const u64 max_cfs_quota_period;
+
static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer)
{
struct cfs_bandwidth *cfs_b =
container_of(timer, struct cfs_bandwidth, period_timer);
int overrun;
int idle = 0;
+ int count = 0;
raw_spin_lock(&cfs_b->lock);
for (;;) {
@@ -4872,6 +4875,28 @@ static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer)
if (!overrun)
break;
+ if (++count > 3) {
+ u64 new, old = ktime_to_ns(cfs_b->period);
+
+ new = (old * 147) / 128; /* ~115% */
+ new = min(new, max_cfs_quota_period);
+
+ cfs_b->period = ns_to_ktime(new);
+
+ /* since max is 1s, this is limited to 1e9^2, which fits in u64 */
+ cfs_b->quota *= new;
+ cfs_b->quota = div64_u64(cfs_b->quota, old);
+
+ pr_warn_ratelimited(
+ "cfs_period_timer[cpu%d]: period too short, scaling up (new cfs_period_us %lld, cfs_quota_us = %lld)\n",
+ smp_processor_id(),
+ div_u64(new, NSEC_PER_USEC),
+ div_u64(cfs_b->quota, NSEC_PER_USEC));
+
+ /* reset count so we don't come right back in here */
+ count = 0;
+ }
+
idle = do_sched_cfs_period_timer(cfs_b, overrun);
}
if (idle)
--
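For anyone reviewing the arithmetic in the hunk above, the proportional scaling can be sanity-checked in isolation. This is a standalone sketch in plain Python (integers standing in for the kernel's u64 math), not kernel code; the constant and function names here are just for illustration:

```python
# Sketch of the period/quota scaling from sched_cfs_period_timer() above.
# Mirrors the integer math in the patch: period grows by 147/128 (~115%),
# clamped to the 1s maximum, and quota is scaled by the same ratio so the
# effective bandwidth (quota/period) is preserved.

MAX_CFS_QUOTA_PERIOD = 1_000_000_000  # 1s in ns, the kernel's max period

def scale_up(period_ns, quota_ns):
    new = (period_ns * 147) // 128        # ~115%, integer division as in C
    new = min(new, MAX_CFS_QUOTA_PERIOD)  # never exceed max_cfs_quota_period
    # quota * new first, then divide by the old period (div64_u64 in C);
    # multiplying before dividing avoids losing precision.
    quota = (quota_ns * new) // period_ns
    return new, quota

# 100us period with a 50us quota (a 50% cap):
period, quota = scale_up(100_000, 50_000)
```

The multiply-before-divide ordering matters: dividing first would truncate the ratio and drift the effective quota on repeated scalings, which is why the patch multiplies `cfs_b->quota` by `new` before calling `div64_u64()`.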
Thread overview:
2019-03-19 13:00 [PATCH v2] sched/fair: Limit sched_cfs_period_timer loop to avoid hard lockup Phil Auld
2019-03-21 18:01 ` Peter Zijlstra
2019-03-21 18:32 ` Phil Auld
2019-04-03 8:38 ` [tip:sched/core] " tip-bot for Phil Auld
2019-04-09 12:48 ` Phil Auld
2019-04-09 13:05 ` Peter Zijlstra
2019-04-09 13:15 ` Phil Auld
2019-04-16 13:33 ` Phil Auld
2019-04-16 16:18 ` Phil Auld
2019-04-16 15:32 ` [tip:sched/urgent] sched/fair: Limit sched_cfs_period_timer() " tip-bot for Phil Auld
2019-04-16 19:26 ` Phil Auld [this message]