From: Mike Galbraith <efault@gmx.de>
To: RT <linux-rt-users@vger.kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>,
Steven Rostedt <rostedt@goodmis.org>,
Peter Zijlstra <a.p.zijlstra@chello.nl>,
Ingo Molnar <mingo@elte.hu>,
Arjan van de Ven <arjan@linux.intel.com>
Subject: [patch] clockevents: Reinstate the per cpu tick skew
Date: Tue, 27 Dec 2011 10:20:05 +0100
Message-ID: <1324977605.5217.132.camel@marge.simson.net>
In-Reply-To: <1324968044.5217.103.camel@marge.simson.net>

Quoting removal commit af5ab277ded04bd9bc6b048c5a2f0e7d70ef0867:

    Historically, Linux has tried to make the regular timer tick on the
    various CPUs not happen at the same time, to avoid contention on
    xtime_lock.

    Nowadays, with the tickless kernel, this contention no longer happens
    since time keeping and updating are done differently. In addition,
    this skew is actually hurting power consumption in a measurable way
    on many-core systems.
Contention remains a problem if NO_HZ is either not configured in, or is
disabled at boot with nohz=off due to workload constraints. An RT kernel
running nohz=off was measured using > 1.4% CPU just ticking 64 CPUs, with
tick perturbation reaching ~80us. For loads where the measured (>100us)
NO_HZ latencies are intolerable, the skew is a must have.
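
To make the offset arithmetic concrete, here is a minimal userspace
sketch (illustration only, not part of the patch; HZ=250 and 64 CPUs are
assumed values) mirroring the computation in the second hunk below.
Ticks land ~31us apart, all within half a tick period:

#include <stdio.h>
#include <stdint.h>

#define NSEC_PER_SEC 1000000000ULL
#define HZ           250    /* assumed tick rate */
#define NR_CPUS      64

int main(void)
{
        uint64_t tick_period = NSEC_PER_SEC / HZ;      /* 4000000ns at HZ=250 */
        uint64_t step = (tick_period >> 1) / NR_CPUS;  /* stand-in for do_div() */
        int cpu;

        for (cpu = 0; cpu < NR_CPUS; cpu++)
                printf("cpu %2d: tick offset %llu ns\n", cpu,
                       (unsigned long long)(step * cpu));
        return 0;
}

The last CPU lands at 31250 * 63 = 1968750ns, so even CPU 63's tick stays
inside the first half period and never collides with the next tick.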
Signed-off-by: Mike Galbraith <efault@gmx.de>
Cc: Arjan van de Ven <arjan@linux.intel.com>
---
kernel/time/tick-sched.c | 9 +++++++++
1 file changed, 9 insertions(+)

--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -689,6 +689,7 @@ static inline void tick_check_nohz(int c
 
 static inline void tick_nohz_switch_to_nohz(void) { }
 static inline void tick_check_nohz(int cpu) { }
+#define tick_nohz_enabled 0
 
 #endif /* NO_HZ */
 
@@ -777,6 +778,14 @@ void tick_setup_sched_timer(void)
 	/* Get the next period (per cpu) */
 	hrtimer_set_expires(&ts->sched_timer, tick_init_jiffy_update());
 
+	/* Offset the tick when NO_HZ is configured out or boot disabled */
+	if (!tick_nohz_enabled) {
+		u64 offset = ktime_to_ns(tick_period) >> 1;
+		do_div(offset, num_possible_cpus());
+		offset *= smp_processor_id();
+		hrtimer_add_expires_ns(&ts->sched_timer, offset);
+	}
+
 	for (;;) {
 		hrtimer_forward(&ts->sched_timer, now, tick_period);
 		hrtimer_start_expires(&ts->sched_timer,
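
A note on the gating: tick_nohz_enabled is the existing (assumed 3.0-era)
runtime flag in tick-sched.c that nohz=off clears, and the new #define
makes the same test compile away to if (1) when NO_HZ isn't configured at
all. A userspace mock of the two build cases (names mirror the patch;
nothing here is kernel code):

/*
 * Build both ways to see both cases:
 *   cc -o skew_gate skew_gate.c                  NO_HZ not configured -> skew
 *   cc -DCONFIG_NO_HZ -o skew_gate skew_gate.c   runtime flag decides
 */
#include <stdio.h>

#ifdef CONFIG_NO_HZ
static int tick_nohz_enabled = 1;       /* stand-in for the runtime flag */
#else
#define tick_nohz_enabled 0             /* mirrors the patch's fallback */
#endif

int main(void)
{
        if (!tick_nohz_enabled)
                puts("periodic tick: applying per cpu skew");
        else
                puts("nohz active: skipping skew");
        return 0;
}

Either way, the skew is applied exactly when the kernel actually ticks
all CPUs periodically, and costs nothing when nohz is doing its job.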