public inbox for linux-kernel@vger.kernel.org
* [PATCH v3 0/6] correct load_balance()
@ 2013-04-23  8:27 Joonsoo Kim
  2013-04-23  8:27 ` [PATCH v3 1/6] sched: change position of resched_cpu() in load_balance() Joonsoo Kim
                   ` (5 more replies)
  0 siblings, 6 replies; 13+ messages in thread
From: Joonsoo Kim @ 2013-04-23  8:27 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra
  Cc: linux-kernel, Srivatsa Vaddagiri, Davidlohr Bueso, Jason Low,
	Joonsoo Kim

Commit 88b8dac0 makes load_balance() consider other cpus in its group.
But some pieces are missing for this feature to work properly. This
patchset corrects them and makes load_balance() robust.

The other patches relate to LBF_ALL_PINNED. This is the fallback path
taken when no task can be moved because of cpu affinity. But currently,
if a task is skipped only because the imbalance is smaller than its
load, we still leave the LBF_ALL_PINNED flag set and a 'redo' is
triggered. This is not our intention, so correct it.

These are based on sched/core branch in tip tree.

Changelog
v2->v3: Changes from Peter's suggestion
 [2/6]: change comment
 [3/6]: fix coding style
 [6/6]: fix coding style, fix changelog

v1->v2: Changes from Peter's suggestion
 [4/6]: don't include code to evaluate the load value in can_migrate_task()
 [5/6]: rename load_balance_tmpmask to load_balance_mask
 [6/6]: don't use an extra cpumask; use env's cpus to prevent re-selection

Joonsoo Kim (6):
  sched: change position of resched_cpu() in load_balance()
  sched: explicitly cpu_idle_type checking in rebalance_domains()
  sched: don't consider other cpus in our group in case of NEWLY_IDLE
  sched: move up affinity check to mitigate useless redoing overhead
  sched: rename load_balance_tmpmask to load_balance_mask
  sched: prevent to re-select dst-cpu in load_balance()

 kernel/sched/core.c |    4 +--
 kernel/sched/fair.c |   69 +++++++++++++++++++++++++++++----------------------
 2 files changed, 41 insertions(+), 32 deletions(-)

-- 
1.7.9.5



Thread overview: 13+ messages
2013-04-23  8:27 [PATCH v3 0/6] correct load_balance() Joonsoo Kim
2013-04-23  8:27 ` [PATCH v3 1/6] sched: change position of resched_cpu() in load_balance() Joonsoo Kim
2013-04-24  9:32   ` [tip:sched/core] sched: Change " tip-bot for Joonsoo Kim
2013-04-23  8:27 ` [PATCH v3 2/6] sched: explicitly cpu_idle_type checking in rebalance_domains() Joonsoo Kim
2013-04-24  9:33   ` [tip:sched/core] sched: Explicitly " tip-bot for Joonsoo Kim
2013-04-23  8:27 ` [PATCH v3 3/6] sched: don't consider other cpus in our group in case of NEWLY_IDLE Joonsoo Kim
2013-04-24  9:35   ` [tip:sched/core] sched: Don't " tip-bot for Joonsoo Kim
2013-04-23  8:27 ` [PATCH v3 4/6] sched: move up affinity check to mitigate useless redoing overhead Joonsoo Kim
2013-04-24  9:36   ` [tip:sched/core] sched: Move " tip-bot for Joonsoo Kim
2013-04-23  8:27 ` [PATCH v3 5/6] sched: rename load_balance_tmpmask to load_balance_mask Joonsoo Kim
2013-04-24  9:37   ` [tip:sched/core] sched: Rename " tip-bot for Joonsoo Kim
2013-04-23  8:27 ` [PATCH v3 6/6] sched: prevent to re-select dst-cpu in load_balance() Joonsoo Kim
2013-04-24  9:38   ` [tip:sched/core] sched: Prevent " tip-bot for Joonsoo Kim
