From: Jason Low <jason.low2@hp.com>
To: bsegall@google.com
Cc: Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@kernel.org>,
	linux-kernel@vger.kernel.org, Waiman Long <Waiman.Long@hp.com>,
	Mel Gorman <mgorman@suse.de>,
	Mike Galbraith <umgwanakikbuti@gmail.com>,
	Rik van Riel <riel@redhat.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>,
	Chegu Vinod <chegu_vinod@hp.com>,
	Scott J Norton <scott.norton@hp.com>,
	pjt@google.com, jason.low2@hp.com
Subject: Re: [PATCH] sched: Reduce contention in update_cfs_rq_blocked_load
Date: Mon, 11 Aug 2014 10:31:47 -0700	[thread overview]
Message-ID: <1407778307.14059.12.camel@j-VirtualBox> (raw)
In-Reply-To: <xm264mxsugds.fsf@sword-of-the-dawn.mtv.corp.google.com>

On Mon, 2014-08-04 at 13:52 -0700, bsegall@google.com wrote:
> 
> That said, it might be better to remove force_update for this function,
> or make it just reduce the minimum to /64 or something. If the test is
> easy to run it would be good to see what it's like just removing the
> force_update param for this function to see if it's worth worrying
> about or if the zero case catches ~all the perf gain.

Hi Ben,

I removed the force update in __update_cfs_rq_tg_load_contrib and it
reduced the overhead even further. With this change, I saw up to a 20x
reduction in system overhead from update_cfs_rq_blocked_load when
running some of the AIM7 workloads.

-----
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index fea7d33..7a6e18b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2352,8 +2352,7 @@ static inline u64 __synchronize_entity_decay(struct sched_entity *se)
 }
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
-static inline void __update_cfs_rq_tg_load_contrib(struct cfs_rq *cfs_rq,
-						 int force_update)
+static inline void __update_cfs_rq_tg_load_contrib(struct cfs_rq *cfs_rq)
 {
 	struct task_group *tg = cfs_rq->tg;
 	long tg_contrib;
@@ -2361,7 +2360,7 @@ static inline void __update_cfs_rq_tg_load_contrib(struct cfs_rq *cfs_rq,
 	tg_contrib = cfs_rq->runnable_load_avg + cfs_rq->blocked_load_avg;
 	tg_contrib -= cfs_rq->tg_load_contrib;
 
-	if (force_update || abs(tg_contrib) > cfs_rq->tg_load_contrib / 8) {
+	if (abs(tg_contrib) > cfs_rq->tg_load_contrib / 8) {
 		atomic_long_add(tg_contrib, &tg->load_avg);
 		cfs_rq->tg_load_contrib += tg_contrib;
 	}
@@ -2436,8 +2435,7 @@ static inline void update_rq_runnable_avg(struct rq *rq, int runnable)
 	__update_tg_runnable_avg(&rq->avg, &rq->cfs);
 }
 #else /* CONFIG_FAIR_GROUP_SCHED */
-static inline void __update_cfs_rq_tg_load_contrib(struct cfs_rq *cfs_rq,
-						 int force_update) {}
+static inline void __update_cfs_rq_tg_load_contrib(struct cfs_rq *cfs_rq) {}
 static inline void __update_tg_runnable_avg(struct sched_avg *sa,
 						  struct cfs_rq *cfs_rq) {}
 static inline void __update_group_entity_contrib(struct sched_entity *se) {}
@@ -2537,7 +2535,7 @@ static void update_cfs_rq_blocked_load(struct cfs_rq *cfs_rq, int force_update)
 		cfs_rq->last_decay = now;
 	}
 
-	__update_cfs_rq_tg_load_contrib(cfs_rq, force_update);
+	__update_cfs_rq_tg_load_contrib(cfs_rq);
 }
 
 /* Add the load generated by se into cfs_rq's child load-average */


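For comparison, the other option Ben mentions above (keep force_update
but only shrink the minimum to tg_load_contrib/64) might look roughly
like the sketch below. This is only an illustrative variant, not the
posted patch; the "divisor" local is made up for the example:

-----
static inline void __update_cfs_rq_tg_load_contrib(struct cfs_rq *cfs_rq,
						    int force_update)
{
	struct task_group *tg = cfs_rq->tg;
	long tg_contrib;
	/*
	 * Instead of bypassing the threshold entirely, force_update only
	 * tightens it from /8 to /64, so forced callers still avoid the
	 * shared atomic for very small deltas.
	 */
	long divisor = force_update ? 64 : 8;

	tg_contrib = cfs_rq->runnable_load_avg + cfs_rq->blocked_load_avg;
	tg_contrib -= cfs_rq->tg_load_contrib;

	if (abs(tg_contrib) > cfs_rq->tg_load_contrib / divisor) {
		atomic_long_add(tg_contrib, &tg->load_avg);
		cfs_rq->tg_load_contrib += tg_contrib;
	}
}
-----

Whether /64 would be a meaningful middle ground would still need the
same AIM7 comparison as the numbers above.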


Thread overview: 16+ messages
2014-08-04 20:28 [PATCH] sched: Reduce contention in update_cfs_rq_blocked_load Jason Low
2014-08-04 19:15 ` Yuyang Du
2014-08-04 21:42   ` Yuyang Du
2014-08-05 15:42   ` Jason Low
2014-08-06 18:21   ` Jason Low
2014-08-07 18:02     ` Yuyang Du
2014-08-08  4:18       ` Jason Low
2014-08-07 22:30         ` Yuyang Du
2014-08-08  7:11           ` Peter Zijlstra
2014-08-07 23:15             ` Yuyang Du
2014-08-08  0:02               ` Yuyang Du
2014-08-04 20:52 ` bsegall
2014-08-04 21:27   ` Jason Low
2014-08-11 17:31   ` Jason Low [this message]
2014-08-04 21:04 ` Peter Zijlstra
2014-08-05 17:53 ` Waiman Long
