From: Jason Low <jason.low2@hp.com>
To: Peter Zijlstra <peterz@infradead.org>,
Ingo Molnar <mingo@kernel.org>, Jason Low <jason.low2@hp.com>
Cc: linux-kernel@vger.kernel.org, Ben Segall <bsegall@google.com>,
Waiman Long <Waiman.Long@hp.com>, Mel Gorman <mgorman@suse.de>,
Mike Galbraith <umgwanakikbuti@gmail.com>,
Rik van Riel <riel@redhat.com>,
Aswin Chandramouleeswaran <aswin@hp.com>,
Chegu Vinod <chegu_vinod@hp.com>,
Scott J Norton <scott.norton@hp.com>
Subject: [PATCH] sched: Reduce contention in update_cfs_rq_blocked_load
Date: Mon, 04 Aug 2014 13:28:38 -0700
Message-ID: <1407184118.11407.11.camel@j-VirtualBox>

Based on perf profiles, when running workloads on systems with 2 or more
sockets, the update_cfs_rq_blocked_load() function consistently shows up
as taking a noticeable percentage of run time. This is especially
apparent on an 8 socket machine. For example, when running the AIM7
custom workload, we see:
4.18% reaim [kernel.kallsyms] [k] update_cfs_rq_blocked_load
Much of the contention is in __update_cfs_rq_tg_load_contrib() when we
update the tg load contribution stats. However, it turns out that in many
cases they do not need to be updated at all, because "tg_contrib" is 0.
This patch adds a check in __update_cfs_rq_tg_load_contrib() that skips
updating the tg load contribution stats when there is nothing to update,
avoiding unnecessary cacheline contention. In the above case, with the
patch applied, perf reports that the total time spent in this function
dropped by more than a factor of 3:
1.18% reaim [kernel.kallsyms] [k] update_cfs_rq_blocked_load
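
As a standalone illustration (not part of the patch, and using made-up
names rather than actual kernel code), the pattern being optimized is
many CPUs periodically folding a local delta into one shared atomic
counter. Even when the delta is zero, the atomic add still pulls the
shared cacheline into exclusive state; returning early avoids that
cross-socket traffic:

	/* Sketch only: all names here are hypothetical stand-ins. */
	#include <stdatomic.h>

	/* Stands in for tg->load_avg: shared by every cfs_rq in the group. */
	static atomic_long shared_load_avg;

	/* Stands in for __update_cfs_rq_tg_load_contrib(). */
	static void fold_contrib(long *my_contrib, long my_current_load)
	{
		long delta = my_current_load - *my_contrib;

		/*
		 * The check this patch adds: a zero delta means no atomic
		 * RMW, so the shared cacheline is not bounced between
		 * sockets for an update that changes nothing.
		 */
		if (!delta)
			return;

		atomic_fetch_add(&shared_load_avg, delta);
		*my_contrib += delta;
	}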
Signed-off-by: Jason Low <jason.low2@hp.com>
---
kernel/sched/fair.c | 3 +++
1 files changed, 3 insertions(+), 0 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bfa3c86..8d4cc72 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2377,6 +2377,9 @@ static inline void __update_cfs_rq_tg_load_contrib(struct cfs_rq *cfs_rq,
 	tg_contrib = cfs_rq->runnable_load_avg + cfs_rq->blocked_load_avg;
 	tg_contrib -= cfs_rq->tg_load_contrib;
 
+	if (!tg_contrib)
+		return;
+
 	if (force_update || abs(tg_contrib) > cfs_rq->tg_load_contrib / 8) {
 		atomic_long_add(tg_contrib, &tg->load_avg);
 		cfs_rq->tg_load_contrib += tg_contrib;
--
1.7.1