From: riel@redhat.com (Rik van Riel)
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v3 01/12] sched: fix imbalance flag reset
Date: Tue, 08 Jul 2014 23:05:20 -0400
Message-ID: <53BCB170.9020005@redhat.com>
In-Reply-To: <1404144343-18720-2-git-send-email-vincent.guittot@linaro.org>
On 06/30/2014 12:05 PM, Vincent Guittot wrote:
> The imbalance flag can stay set even when there is no imbalance.
>
> Assume we have 3 tasks running on a dual-core / dual-cluster
> system. Idle load balancing is triggered during the tick.
> Unfortunately, the tick is also used to queue background work, so we
> can reach a situation where a short work item has been queued on a
> CPU that already runs a task. Load balancing detects this imbalance
> (2 tasks on one CPU, one idle CPU) and tries to pull the waiting
> task onto the idle CPU. The waiting task is a worker thread pinned
> to its CPU, so an imbalance due to a pinned task is detected and the
> imbalance flag is set. From then on we can never clear the flag,
> because we have at most 1 task on each CPU, yet the stale flag keeps
> triggering useless active load balancing between the idle CPU and
> the busy CPU.
>
> We need to reset the imbalance flag as soon as we have reached a
> balanced state.
>
> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Acked-by: Rik van Riel <riel@redhat.com>
--
All rights reversed
Thread overview: 66+ messages
2014-06-30 16:05 [PATCH v3 00/12] sched: consolidation of cpu_power Vincent Guittot
2014-06-30 16:05 ` [PATCH v3 01/12] sched: fix imbalance flag reset Vincent Guittot
2014-07-08 3:13 ` Preeti U Murthy
2014-07-08 10:12 ` Vincent Guittot
2014-07-09 3:54 ` Preeti U Murthy
2014-07-09 8:27 ` Vincent Guittot
2014-07-09 10:43 ` Peter Zijlstra
2014-07-09 11:41 ` Preeti U Murthy
2014-07-09 14:44 ` Peter Zijlstra
2014-07-10 9:14 ` Vincent Guittot
2014-07-10 9:30 ` [PATCH v4 ] " Vincent Guittot
2014-07-10 10:57 ` Preeti U Murthy
2014-07-10 11:04 ` [PATCH v3 01/12] " Preeti U Murthy
2014-07-09 3:05 ` Rik van Riel [this message]
2014-07-09 3:36 ` Rik van Riel
2014-07-09 10:14 ` Peter Zijlstra
2014-07-09 10:30 ` Vincent Guittot
2014-06-30 16:05 ` [PATCH v3 02/12] sched: remove a wake_affine condition Vincent Guittot
2014-07-09 3:06 ` Rik van Riel
2014-06-30 16:05 ` [PATCH v3 03/12] sched: fix avg_load computation Vincent Guittot
2014-07-09 3:10 ` Rik van Riel
2014-06-30 16:05 ` [PATCH v3 04/12] sched: Allow all archs to set the power_orig Vincent Guittot
2014-07-09 3:11 ` Rik van Riel
2014-07-09 10:57 ` Peter Zijlstra
2014-07-10 13:42 ` Vincent Guittot
2014-06-30 16:05 ` [PATCH v3 05/12] ARM: topology: use new cpu_power interface Vincent Guittot
2014-07-09 3:11 ` Rik van Riel
2014-07-09 7:49 ` Amit Kucheria
2014-07-09 10:09 ` Vincent Guittot
2014-06-30 16:05 ` [PATCH v3 06/12] sched: add per rq cpu_power_orig Vincent Guittot
2014-07-09 3:11 ` Rik van Riel
2014-07-09 7:50 ` Amit Kucheria
2014-06-30 16:05 ` [PATCH v3 07/12] sched: test the cpu's capacity in wake affine Vincent Guittot
2014-07-09 3:12 ` Rik van Riel
2014-07-10 11:06 ` Peter Zijlstra
2014-07-10 13:58 ` Vincent Guittot
2014-06-30 16:05 ` [PATCH v3 08/12] sched: move cfs task on a CPU with higher capacity Vincent Guittot
2014-07-10 11:18 ` Peter Zijlstra
2014-07-10 14:03 ` Vincent Guittot
2014-07-11 14:51 ` Peter Zijlstra
2014-07-11 15:17 ` Vincent Guittot
2014-07-14 13:51 ` Peter Zijlstra
2014-07-15 9:21 ` Vincent Guittot
2014-07-10 11:24 ` Peter Zijlstra
2014-07-10 13:59 ` Vincent Guittot
2014-07-10 11:31 ` Peter Zijlstra
2014-06-30 16:05 ` [PATCH v3 09/12] Revert "sched: Put rq's sched_avg under CONFIG_FAIR_GROUP_SCHED" Vincent Guittot
2014-07-10 13:16 ` Peter Zijlstra
2014-07-11 7:51 ` Vincent Guittot
2014-07-11 15:13 ` Peter Zijlstra
2014-07-11 17:39 ` Vincent Guittot
2014-07-11 20:12 ` Peter Zijlstra
2014-07-14 12:55 ` Morten Rasmussen
2014-07-14 13:20 ` Peter Zijlstra
2014-07-14 14:04 ` Morten Rasmussen
2014-07-14 16:22 ` Peter Zijlstra
2014-07-15 9:20 ` Vincent Guittot
2014-07-14 17:54 ` Dietmar Eggemann
2014-07-18 1:27 ` Yuyang Du
2014-07-11 16:13 ` Morten Rasmussen
2014-07-15 9:27 ` Vincent Guittot
2014-07-15 9:32 ` Morten Rasmussen
2014-07-15 9:53 ` Vincent Guittot
2014-06-30 16:05 ` [PATCH v3 10/12] sched: get CPU's utilization statistic Vincent Guittot
2014-06-30 16:05 ` [PATCH v3 11/12] sched: replace capacity_factor by utilization Vincent Guittot
2014-06-30 16:05 ` [PATCH v3 12/12] sched: add SD_PREFER_SIBLING for SMT level Vincent Guittot