From: Peter Zijlstra <peterz@infradead.org>
To: Vladimir Davydov <vdavydov@parallels.com>
Cc: Ingo Molnar <mingo@kernel.org>, Paul Turner <pjt@google.com>,
linux-kernel@vger.kernel.org, devel@openvz.org
Subject: Re: [PATCH 1/2] sched: calculate_imbalance: Fix local->avg_load > sds->avg_load case
Date: Mon, 16 Sep 2013 07:52:02 +0200 [thread overview]
Message-ID: <20130916055202.GL21832@twins.programming.kicks-ass.net> (raw)
In-Reply-To: <8f596cc6bc0e5e655119dc892c9bfcad26e971f4.1379252740.git.vdavydov@parallels.com>
On Sun, Sep 15, 2013 at 05:49:13PM +0400, Vladimir Davydov wrote:
> In busiest->group_imb case we can come to calculate_imbalance() with
> local->avg_load >= busiest->avg_load >= sds->avg_load. This can result
> in imbalance overflow, because it is calculated as follows
>
> env->imbalance = min(
> max_pull * busiest->group_power,
> (sds->avg_load - local->avg_load) * local->group_power
> ) / SCHED_POWER_SCALE;
>
> As a result we can end up constantly bouncing tasks from one cpu to
> another if there are pinned tasks.
>
> Fix this by skipping the assignment and assuming imbalance=0 in case
> local->avg_load > sds->avg_load.
> --
> The bug can be caught by running 2*N cpuhogs pinned to two logical cpus
> belonging to different cores on an HT-enabled machine with N logical
> cpus: just look at se.nr_migrations growth.
>
> Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
> ---
> kernel/sched/fair.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 9b3fe1c..507a8a9 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -4896,7 +4896,8 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
> * max load less than avg load(as we skip the groups at or below
> * its cpu_power, while calculating max_load..)
> */
> - if (busiest->avg_load < sds->avg_load) {
> + if (busiest->avg_load <= sds->avg_load ||
> + local->avg_load >= sds->avg_load) {
> env->imbalance = 0;
> return fix_small_imbalance(env, sds);
> }
Why the = part? Surely 'busiest->avg_load < sds->avg_load ||
local->avg_load > sds->avg_load' avoids both underflows?
Thread overview: 7+ messages
2013-09-15 13:49 [PATCH 1/2] sched: calculate_imbalance: Fix local->avg_load > sds->avg_load case Vladimir Davydov
2013-09-15 13:49 ` [PATCH 2/2] sched: fix_small_imbalance: Fix local->avg_load > busiest->avg_load case Vladimir Davydov
2013-09-20 13:46 ` [tip:sched/core] sched/balancing: Fix 'local->avg_load > busiest->avg_load' case in fix_small_imbalance() tip-bot for Vladimir Davydov
2013-09-16 5:52 ` Peter Zijlstra [this message]
2013-09-16 8:06 ` [PATCH 1/2] sched: calculate_imbalance: Fix local->avg_load > sds->avg_load case Vladimir Davydov
2013-09-16 8:11 ` Peter Zijlstra
2013-09-20 13:46 ` [tip:sched/core] sched/balancing: Fix 'local->avg_load > sds->avg_load' case in calculate_imbalance() tip-bot for Vladimir Davydov