* [PATCH 1/3] hrtimer: Fix clock_was_set so it is safe to call from irq context
2012-07-05 19:12 [PATCH 0/3] Fix for leapsecond caused hrtimer/futex issue John Stultz
@ 2012-07-05 19:12 ` John Stultz
2012-07-05 19:12 ` [PATCH 2/3] time: Fix leapsecond triggered hrtimer/futex load spike issue John Stultz
2012-07-05 19:12 ` [PATCH 3/3] hrtimer: Update hrtimer base offsets each hrtimer_interrupt John Stultz
From: John Stultz @ 2012-07-05 19:12 UTC (permalink / raw)
To: Linux Kernel; +Cc: John Stultz, Prarit Bhargava, stable, Thomas Gleixner
NOTE: This is a prerequisite patch that's required to
address the widely observed leap-second related futex/hrtimer
issues.
Currently clock_was_set() is unsafe to call from irq
context, as it calls on_each_cpu(). This causes problems when
we need to adjust the time from update_wall_time().

To fix this, if clock_was_set() is called while irqs are
disabled, we schedule a timer_list timer to fire immediately
after we're out of interrupt context, which then notifies the
hrtimer subsystem.
CC: Prarit Bhargava <prarit@redhat.com>
CC: stable@vger.kernel.org
CC: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Prarit Bhargava <prarit@redhat.com>
Reported-by: Jan Engelhardt <jengelh@inai.de>
Signed-off-by: John Stultz <johnstul@us.ibm.com>
---
kernel/hrtimer.c | 17 ++++++++++++++++-
1 file changed, 16 insertions(+), 1 deletion(-)
diff --git a/kernel/hrtimer.c b/kernel/hrtimer.c
index ae34bf5..d730678 100644
--- a/kernel/hrtimer.c
+++ b/kernel/hrtimer.c
@@ -746,7 +746,7 @@ static inline void retrigger_next_event(void *arg) { }
* resolution timer interrupts. On UP we just disable interrupts and
* call the high resolution interrupt code.
*/
-void clock_was_set(void)
+static void do_clock_was_set(unsigned long data)
{
#ifdef CONFIG_HIGH_RES_TIMERS
/* Retrigger the CPU local events everywhere */
@@ -755,6 +755,21 @@ void clock_was_set(void)
timerfd_clock_was_set();
}
+static DEFINE_TIMER(clock_was_set_timer, do_clock_was_set, 0, 0);
+
+void clock_was_set(void)
+{
+ /*
+ * We can't call on_each_cpu() from irq context,
+ * so if irqs are disabled, schedule the clock_was_set
+ * via a timer_list timer to run right after.
+ */
+ if (irqs_disabled())
+ mod_timer(&clock_was_set_timer, jiffies);
+ else
+ do_clock_was_set(0);
+}
+
/*
* During resume we might have to reprogram the high resolution timer
* interrupt (on the local CPU):
--
1.7.9.5
* [PATCH 2/3] time: Fix leapsecond triggered hrtimer/futex load spike issue
2012-07-05 19:12 [PATCH 0/3] Fix for leapsecond caused hrtimer/futex issue John Stultz
2012-07-05 19:12 ` [PATCH 1/3] hrtimer: Fix clock_was_set so it is safe to call from irq context John Stultz
@ 2012-07-05 19:12 ` John Stultz
2012-07-05 19:12 ` [PATCH 3/3] hrtimer: Update hrtimer base offsets each hrtimer_interrupt John Stultz
From: John Stultz @ 2012-07-05 19:12 UTC (permalink / raw)
To: Linux Kernel; +Cc: John Stultz, Prarit Bhargava, stable, Thomas Gleixner
As widely reported on the internet, some Linux systems are
experiencing futex related load spikes after the leapsecond
was inserted (usually connected to MySQL, Firefox, Thunderbird,
Java, etc).
An apparent workaround for this issue is running:
$ date -s "`date`"
Credit: http://www.sheeri.com/content/mysql-and-leap-second-high-cpu-and-fix
I believe this issue is due to the leapsecond being added
without calling clock_was_set() to notify the hrtimer
subsystem of the change.
The workaround functions by forcing a clock_was_set()
call from settimeofday().
This fix adds the required clock_was_set() calls to where
we adjust for leapseconds.
NOTE: This fix *depends* on the previous fix, which allows
clock_was_set to be called from atomic context. Do not try
to apply just this patch.
CC: Prarit Bhargava <prarit@redhat.com>
CC: stable@vger.kernel.org
CC: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Prarit Bhargava <prarit@redhat.com>
Reported-by: Jan Engelhardt <jengelh@inai.de>
Signed-off-by: John Stultz <johnstul@us.ibm.com>
---
kernel/time/timekeeping.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index 6f46a00..cc2991d 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -963,6 +963,8 @@ static cycle_t logarithmic_accumulation(cycle_t offset, int shift)
leap = second_overflow(timekeeper.xtime.tv_sec);
timekeeper.xtime.tv_sec += leap;
timekeeper.wall_to_monotonic.tv_sec -= leap;
+ if (leap)
+ clock_was_set();
}
/* Accumulate raw time */
@@ -1079,6 +1081,8 @@ static void update_wall_time(void)
leap = second_overflow(timekeeper.xtime.tv_sec);
timekeeper.xtime.tv_sec += leap;
timekeeper.wall_to_monotonic.tv_sec -= leap;
+ if (leap)
+ clock_was_set();
}
timekeeping_update(false);
--
1.7.9.5
* [PATCH 3/3] hrtimer: Update hrtimer base offsets each hrtimer_interrupt
2012-07-05 19:12 [PATCH 0/3] Fix for leapsecond caused hrtimer/futex issue John Stultz
2012-07-05 19:12 ` [PATCH 1/3] hrtimer: Fix clock_was_set so it is safe to call from irq context John Stultz
2012-07-05 19:12 ` [PATCH 2/3] time: Fix leapsecond triggered hrtimer/futex load spike issue John Stultz
@ 2012-07-05 19:12 ` John Stultz
From: John Stultz @ 2012-07-05 19:12 UTC (permalink / raw)
To: Linux Kernel; +Cc: John Stultz, Prarit Bhargava, stable, Thomas Gleixner
This patch introduces a new function which captures the
CLOCK_MONOTONIC time, along with the CLOCK_REALTIME and
CLOCK_BOOTTIME offsets, at the same moment. This new function
is then used in place of ktime_get() when hrtimer_interrupt()
is expiring timers.
This ensures that any changes to the realtime or boottime
offsets are noticed and stored into the per-cpu hrtimer base
structures, prior to doing any hrtimer expiration. This should
ensure that timers are not expired early if the offsets change
under us.
This is useful in the case where clock_was_set() is called from
atomic context and has to schedule the hrtimer base offset update
via a timer, as it provides extra robustness in the face of any
possible timer delay.
CC: Prarit Bhargava <prarit@redhat.com>
CC: stable@vger.kernel.org
CC: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Prarit Bhargava <prarit@redhat.com>
Signed-off-by: John Stultz <johnstul@us.ibm.com>
---
include/linux/hrtimer.h | 3 +++
kernel/hrtimer.c | 14 +++++++++++---
kernel/time/timekeeping.c | 34 ++++++++++++++++++++++++++++++++++
3 files changed, 48 insertions(+), 3 deletions(-)
diff --git a/include/linux/hrtimer.h b/include/linux/hrtimer.h
index fd0dc30..f6b2a74 100644
--- a/include/linux/hrtimer.h
+++ b/include/linux/hrtimer.h
@@ -320,6 +320,9 @@ extern ktime_t ktime_get(void);
extern ktime_t ktime_get_real(void);
extern ktime_t ktime_get_boottime(void);
extern ktime_t ktime_get_monotonic_offset(void);
+extern void ktime_get_and_real_and_sleep_offset(ktime_t *monotonic,
+ ktime_t *real_offset,
+ ktime_t *sleep_offset);
DECLARE_PER_CPU(struct tick_device, tick_cpu_device);
diff --git a/kernel/hrtimer.c b/kernel/hrtimer.c
index d730678..56600c4 100644
--- a/kernel/hrtimer.c
+++ b/kernel/hrtimer.c
@@ -1258,18 +1258,26 @@ static void __run_hrtimer(struct hrtimer *timer, ktime_t *now)
void hrtimer_interrupt(struct clock_event_device *dev)
{
struct hrtimer_cpu_base *cpu_base = &__get_cpu_var(hrtimer_bases);
- ktime_t expires_next, now, entry_time, delta;
+ ktime_t expires_next, now, entry_time, delta, real_offset, sleep_offset;
int i, retries = 0;
BUG_ON(!cpu_base->hres_active);
cpu_base->nr_events++;
dev->next_event.tv64 = KTIME_MAX;
- entry_time = now = ktime_get();
+
+ ktime_get_and_real_and_sleep_offset(&now, &real_offset, &sleep_offset);
+
+ entry_time = now;
retry:
expires_next.tv64 = KTIME_MAX;
raw_spin_lock(&cpu_base->lock);
+
+ /* Update base offsets, to avoid early wakeups */
+ cpu_base->clock_base[HRTIMER_BASE_REALTIME].offset = real_offset;
+ cpu_base->clock_base[HRTIMER_BASE_BOOTTIME].offset = sleep_offset;
+
/*
* We set expires_next to KTIME_MAX here with cpu_base->lock
* held to prevent that a timer is enqueued in our queue via
@@ -1346,7 +1354,7 @@ retry:
* interrupt routine. We give it 3 attempts to avoid
* overreacting on some spurious event.
*/
- now = ktime_get();
+ ktime_get_and_real_and_sleep_offset(&now, &real_offset, &sleep_offset);
cpu_base->nr_retries++;
if (++retries < 3)
goto retry;
diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index cc2991d..b3404cf 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -1251,6 +1251,40 @@ void get_xtime_and_monotonic_and_sleep_offset(struct timespec *xtim,
}
/**
+ * ktime_get_and_real_and_sleep_offset() - hrtimer helper, gets monotonic ktime,
+ * realtime offset, and sleep offset.
+ */
+void ktime_get_and_real_and_sleep_offset(ktime_t *monotonic,
+ ktime_t *real_offset,
+ ktime_t *sleep_offset)
+{
+ unsigned long seq;
+ struct timespec wtom, sleep;
+ u64 secs, nsecs;
+
+ do {
+ seq = read_seqbegin(&timekeeper.lock);
+
+ secs = timekeeper.xtime.tv_sec +
+ timekeeper.wall_to_monotonic.tv_sec;
+ nsecs = timekeeper.xtime.tv_nsec +
+ timekeeper.wall_to_monotonic.tv_nsec;
+ nsecs += timekeeping_get_ns();
+ /* If arch requires, add in gettimeoffset() */
+ nsecs += arch_gettimeoffset();
+
+ wtom = timekeeper.wall_to_monotonic;
+ sleep = timekeeper.total_sleep_time;
+ } while (read_seqretry(&timekeeper.lock, seq));
+
+ *monotonic = ktime_add_ns(ktime_set(secs, 0), nsecs);
+ set_normalized_timespec(&wtom, -wtom.tv_sec, -wtom.tv_nsec);
+ *real_offset = timespec_to_ktime(wtom);
+ *sleep_offset = timespec_to_ktime(sleep);
+}
+
+
+/**
* ktime_get_monotonic_offset() - get wall_to_monotonic in ktime_t format
*/
ktime_t ktime_get_monotonic_offset(void)
--
1.7.9.5