From: bsegall@google.com
To: Huaixin Chang <changhuaixin@linux.alibaba.com>
Cc: linux-kernel@vger.kernel.org, peterz@infradead.org,
mingo@redhat.com, bsegall@google.com, chiluk+linux@indeed.com,
vincent.guittot@linaro.org, pauld@redhat.com
Subject: Re: [PATCH 2/2] sched/fair: Refill bandwidth before scaling
Date: Mon, 20 Apr 2020 10:54:53 -0700
Message-ID: <xm26wo6akpoy.fsf@google.com>
In-Reply-To: <20200420024421.22442-3-changhuaixin@linux.alibaba.com> (Huaixin Chang's message of "Mon, 20 Apr 2020 10:44:21 +0800")
Huaixin Chang <changhuaixin@linux.alibaba.com> writes:
> In order to prevent a possible hard lockup in the sched_cfs_period_timer()
> loop, a loop count was introduced to decide whether to scale quota and
> period. However, the scaling is done between forwarding the period timer
> and refilling the cfs bandwidth runtime, which means the period timer is
> forwarded with the old "period" while runtime is refilled with the scaled
> "quota".
>
> Move do_sched_cfs_period_timer() before the scaling to fix this.
Reviewed-by: Ben Segall <bsegall@google.com>
>
> Fixes: 2e8e19226398 ("sched/fair: Limit sched_cfs_period_timer() loop to avoid hard lockup")
> Signed-off-by: Huaixin Chang <changhuaixin@linux.alibaba.com>
> ---
> kernel/sched/fair.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 02f323b85b6d..9ace1c5c73a5 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5152,6 +5152,8 @@ static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer)
>  		if (!overrun)
>  			break;
> 
> +		idle = do_sched_cfs_period_timer(cfs_b, overrun, flags);
> +
>  		if (++count > 3) {
>  			u64 new, old = ktime_to_ns(cfs_b->period);
> 
> @@ -5181,8 +5183,6 @@ static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer)
>  			/* reset count so we don't come right back in here */
>  			count = 0;
>  		}
> -
> -		idle = do_sched_cfs_period_timer(cfs_b, overrun, flags);
>  	}
>  	if (idle)
>  		cfs_b->period_active = 0;
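For anyone skimming the archive, here is a small stand-alone sketch of why the
ordering matters. It is illustrative only: the struct, helper names and numbers
below are made up for the demo and are not the kernel code (the real logic is
sched_cfs_period_timer() and do_sched_cfs_period_timer() in kernel/sched/fair.c).

/* refill_order_demo.c - illustrative only, not kernel code. */
#include <stdio.h>
#include <stdint.h>

struct bw {
	uint64_t period_ns;	/* enforcement period */
	uint64_t quota_ns;	/* runtime allowed per period */
	uint64_t runtime_ns;	/* what the group may actually run this period */
};

/* Stand-in for do_sched_cfs_period_timer(): refill runtime from quota. */
static void refill(struct bw *b)
{
	b->runtime_ns = b->quota_ns;
}

/* Stand-in for the "++count > 3" escape hatch: scale quota and period up
 * (the real code also clamps the new period and warns; elided here). */
static void scale_up(struct bw *b)
{
	b->period_ns *= 2;
	b->quota_ns  *= 2;
}

int main(void)
{
	/* The hrtimer was just forwarded by the OLD 1 ms period. */
	uint64_t elapsed_period_ns = 1000000;
	struct bw pre_patch  = { .period_ns = 1000000, .quota_ns = 500000 };
	struct bw post_patch = pre_patch;

	/* Pre-patch ordering: scale first, then refill. */
	scale_up(&pre_patch);
	refill(&pre_patch);

	/* Post-patch ordering: refill first, then scale for future periods. */
	refill(&post_patch);
	scale_up(&post_patch);

	printf("pre-patch : %llu ns runtime for the %llu ns period just accounted\n",
	       (unsigned long long)pre_patch.runtime_ns,
	       (unsigned long long)elapsed_period_ns);
	printf("post-patch: %llu ns runtime for the %llu ns period just accounted\n",
	       (unsigned long long)post_patch.runtime_ns,
	       (unsigned long long)elapsed_period_ns);
	return 0;
}

With the old ordering, the period that was just forwarded at the old length is
credited with the already-scaled quota, briefly inflating the quota/period
ratio; refilling first keeps the two consistent, which is exactly what the
hunk movement above does.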
Thread overview: 17+ messages
2020-04-20 2:44 [PATCH 0/2] Two small fixes for bandwidth controller Huaixin Chang
2020-04-20 2:44 ` [PATCH 1/2] sched: Defend cfs and rt bandwidth quota against overflow Huaixin Chang
2020-04-20 17:50 ` bsegall
2020-04-22 3:36 ` changhuaixin
2020-04-22 18:44 ` bsegall
2020-04-23 13:37 ` [PATCH] " Huaixin Chang
2020-04-23 20:33 ` bsegall
2020-04-25 10:52 ` [PATCH v2] " Huaixin Chang
2020-04-27 18:29 ` bsegall
2020-05-11 13:03 ` Peter Zijlstra
2020-05-19 18:44 ` [tip: sched/core] " tip-bot2 for Huaixin Chang
2020-04-22 8:38 ` [PATCH 1/2] " kbuild test robot
2020-04-24 6:35 ` kbuild test robot
2020-04-20 2:44 ` [PATCH 2/2] sched/fair: Refill bandwidth before scaling Huaixin Chang
2020-04-20 17:54 ` bsegall [this message]
2020-04-21 15:09 ` Phil Auld
2020-05-01 18:22 ` [tip: sched/core] " tip-bot2 for Huaixin Chang