From: pjt@google.com
To: linux-kernel@vger.kernel.org
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>,
Ingo Molnar <mingo@elte.hu>,
Srivatsa Vaddagiri <vatsa@in.ibm.com>,
Chris Friesen <cfriesen@nortel.com>,
Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>,
Pierre Bourdon <pbourdon@excellency.fr>,
Paul Turner <pjt@google.com>,
Bharata B Rao <bharata@linux.vnet.ibm.com>
Subject: [RFC tg_shares_up improvements - v1 06/12] sched: hierarchical order on shares update list
Date: Fri, 15 Oct 2010 21:43:55 -0700
Message-ID: <20101016045118.907495683@google.com>
In-Reply-To: <20101016044349.830426011@google.com>
[-- Attachment #1: sched-tg-order-ondemand-list.patch --]
[-- Type: text/plain, Size: 2357 bytes --]
Avoid duplicate shares update calls by ensuring children always appear before
parents in rq->leaf_cfs_rq_list.

This allows us to do a single in-order traversal for update_shares().

Since we always enqueue in bottom-up order, this reduces to 2 cases:

1) Our parent is already in the list, e.g.

       root
         \
          b
         /\
        c  d*   (root->b->c already enqueued)

   Since d's parent is enqueued, we push it to the head of the list,
   implicitly ahead of b.

2) Our parent does not appear in the list (or we have no parent).

   In this case we enqueue to the tail of the list; if our parent is
   subsequently enqueued (bottom-up) it will appear to our right by the
   same rule.
Signed-off-by: Paul Turner <pjt@google.com>
---
kernel/sched_fair.c | 26 ++++++++++++++++----------
1 file changed, 16 insertions(+), 10 deletions(-)
Index: kernel/sched_fair.c
===================================================================
--- kernel/sched_fair.c.orig
+++ kernel/sched_fair.c
@@ -146,8 +146,20 @@ static inline struct cfs_rq *cpu_cfs_rq(
 static inline void list_add_leaf_cfs_rq(struct cfs_rq *cfs_rq)
 {
 	if (!cfs_rq->on_list) {
-		list_add_rcu(&cfs_rq->leaf_cfs_rq_list,
+		/*
+		 * Ensure we either appear before our parent (if already
+		 * enqueued) or force our parent to appear after us when it is
+		 * enqueued. The fact that we always enqueue bottom-up
+		 * reduces this to two cases.
+		 */
+		if (cfs_rq->tg->parent &&
+		    cfs_rq->tg->parent->cfs_rq[cpu_of(rq_of(cfs_rq))]->on_list) {
+			list_add_rcu(&cfs_rq->leaf_cfs_rq_list,
+				&rq_of(cfs_rq)->leaf_cfs_rq_list);
+		} else {
+			list_add_tail_rcu(&cfs_rq->leaf_cfs_rq_list,
 				&rq_of(cfs_rq)->leaf_cfs_rq_list);
+		}
 
 		cfs_rq->on_list = 1;
 	}
@@ -2009,7 +2021,7 @@ out:
 /*
  * update tg->load_weight by folding this cpu's load_avg
  */
-static int tg_shares_up(struct task_group *tg, int cpu)
+static int update_shares_cpu(struct task_group *tg, int cpu)
 {
 	struct cfs_rq *cfs_rq;
 	unsigned long flags;
@@ -2049,14 +2061,8 @@ static void update_shares(int cpu)
 	struct cfs_rq *cfs_rq;
 	struct rq *rq = cpu_rq(cpu);
 
 	rcu_read_lock();
-	for_each_leaf_cfs_rq(rq, cfs_rq) {
-		struct task_group *tg = cfs_rq->tg;
-
-		do {
-			tg_shares_up(tg, cpu);
-			tg = tg->parent;
-		} while (tg);
-	}
+	for_each_leaf_cfs_rq(rq, cfs_rq)
+		update_shares_cpu(cfs_rq->tg, cpu);
 	rcu_read_unlock();
 }
--