* [PATCH 00/16] Timekeeping cleanups and locking changes
@ 2011-11-15  4:03 John Stultz
  2011-11-15  4:03 ` [PATCH 01/16] time: Move total_sleep_time into the timekeeper structure John Stultz
                   ` (15 more replies)
  0 siblings, 16 replies; 18+ messages in thread
From: John Stultz @ 2011-11-15  4:03 UTC (permalink / raw)
  To: LKML; +Cc: John Stultz, Thomas Gleixner, Eric Dumazet, Richard Cochran

Hey Thomas,

THIS IS PATCHWAR!

After your 7-patch patchbomb from this morning reworking much of the
timekeeping locking, I decided such aggression will not stand.

So here are 16 patches, some of them your own patches just lobbed back,
targeting strategic locations of your inbox.

It's all pretty rough, and the whole set likely needs some refactoring, but
I wanted to let you see my take on your approach. Much of this is work
I've been intending to get to, but have just been too busy of late.

I also included a similar shadow-update trick to reduce the lock
hold times, but as noted in the patch, I'm still worried about a
likely race there, and we need to do some further vetting to ensure
it really can't cause trouble (as well as add warnings for anyone
later trying to understand the rationale there).
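
The rough shape of the trick, just for illustration (this is a hand-waved
sketch, not the code in the patch; the tk_state/tk_lock names are made up
here, and it assumes the seqlock lives outside the state that gets copied):

	struct tk_state {
		struct timespec	xtime;
		struct timespec	wall_to_monotonic;
		u32		mult;
		/* ... */
	};
	static struct tk_state	tk;
	static DEFINE_SEQLOCK(tk_lock);

	static void update_wall_time_sketch(void)
	{
		struct tk_state shadow = tk;	/* private working copy */
		unsigned long flags;

		/* do the heavy accumulation/adjustment work on 'shadow' */

		write_seqlock_irqsave(&tk_lock, flags);
		tk = shadow;		/* publish in one short step */
		write_sequnlock_irqrestore(&tk_lock, flags);
	}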

I also still need to refactor the update_vsyscall implementations to
utilize the xsec_nsec, so we can avoid truncation issues without 
mucking with the ntp_error code.

Anyway, let me know what you think.

-john

CC: Thomas Gleixner <tglx@linutronix.de>
CC: Eric Dumazet <eric.dumazet@gmail.com>
CC: Richard Cochran <richardcochran@gmail.com>


Patches are also available here:
(Fair warning, I may rebase the following branch)

 git://git.linaro.org/people/jstultz/linux.git dev/xtime-breakup


John Stultz (14):
  time: Move total_sleep_time into the timekeeper structure
  time: Move wall_to_monotonic into the timekeeper structure
  time: Move xtime into timekeeper structure
  time: Move raw_time into timekeeper structure
  time: Cleanup global variables and move them to the top
  time: Add timekeeper lock
  ntp: Cleanup timex.h
  ntp: Access tick_length variable via ntp_tick_length()
  ntp: Add ntp_lock to replace xtime_locking
  time: Remove most of xtime_lock usage in timekeeping.c
  time: Condense timekeeper.xtime into xtime_sec
  time: Rework timekeeping functions to take timekeeper ptr as argument
  time: Update timekeeper structure using a local shadow
  time: Rework update_vsyscall to pass timekeeper

Thomas Gleixner (2):
  time: Reorder so the hot data is together
  time: Move common updates to a function

 arch/ia64/kernel/time.c       |   28 ++--
 arch/powerpc/kernel/time.c    |   25 +-
 arch/s390/kernel/time.c       |   18 +-
 arch/x86/kernel/vsyscall_64.c |   21 +-
 include/linux/clocksource.h   |    9 -
 include/linux/timekeeper.h    |   75 +++++++
 include/linux/timex.h         |   17 +--
 kernel/time/ntp.c             |   83 ++++++--
 kernel/time/timekeeping.c     |  494 +++++++++++++++++++++--------------------
 9 files changed, 442 insertions(+), 328 deletions(-)
 create mode 100644 include/linux/timekeeper.h

-- 
1.7.3.2.146.gca209



* [PATCH 01/16] time: Move total_sleep_time into the timekeeper structure
  2011-11-15  4:03 [PATCH 00/16] Timekeeping cleanups and locking changes John Stultz
@ 2011-11-15  4:03 ` John Stultz
  2011-11-15  4:03 ` [PATCH 02/16] time: Move wall_to_monotonic " John Stultz
                   ` (14 subsequent siblings)
  15 siblings, 0 replies; 18+ messages in thread
From: John Stultz @ 2011-11-15  4:03 UTC (permalink / raw)
  To: LKML; +Cc: John Stultz, Thomas Gleixner, Eric Dumazet, Richard Cochran

Move total_sleep_time into the timekeeper structure in preparation
for locking cleanups.

CC: Thomas Gleixner <tglx@linutronix.de>
CC: Eric Dumazet <eric.dumazet@gmail.com>
CC: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
 kernel/time/timekeeping.c |   24 +++++++++++++++---------
 1 files changed, 15 insertions(+), 9 deletions(-)

diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index 2b021b0e..bd8e7fd 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -47,6 +47,10 @@ struct timekeeper {
 	int	ntp_error_shift;
 	/* NTP adjusted clock multiplier */
 	u32	mult;
+
+	/* time spent in suspend */
+	struct timespec total_sleep_time;
+
 };
 
 static struct timekeeper timekeeper;
@@ -159,7 +163,6 @@ __cacheline_aligned_in_smp DEFINE_SEQLOCK(xtime_lock);
  */
 static struct timespec xtime __attribute__ ((aligned (16)));
 static struct timespec wall_to_monotonic __attribute__ ((aligned (16)));
-static struct timespec total_sleep_time;
 
 /*
  * The raw monotonic time for the CLOCK_MONOTONIC_RAW posix clock.
@@ -587,8 +590,8 @@ void __init timekeeping_init(void)
 	}
 	set_normalized_timespec(&wall_to_monotonic,
 				-boot.tv_sec, -boot.tv_nsec);
-	total_sleep_time.tv_sec = 0;
-	total_sleep_time.tv_nsec = 0;
+	timekeeper.total_sleep_time.tv_sec = 0;
+	timekeeper.total_sleep_time.tv_nsec = 0;
 	write_sequnlock_irqrestore(&xtime_lock, flags);
 }
 
@@ -612,7 +615,8 @@ static void __timekeeping_inject_sleeptime(struct timespec *delta)
 
 	xtime = timespec_add(xtime, *delta);
 	wall_to_monotonic = timespec_sub(wall_to_monotonic, *delta);
-	total_sleep_time = timespec_add(total_sleep_time, *delta);
+	timekeeper.total_sleep_time = timespec_add(
+					timekeeper.total_sleep_time, *delta);
 }
 
 
@@ -984,8 +988,10 @@ static void update_wall_time(void)
 void getboottime(struct timespec *ts)
 {
 	struct timespec boottime = {
-		.tv_sec = wall_to_monotonic.tv_sec + total_sleep_time.tv_sec,
-		.tv_nsec = wall_to_monotonic.tv_nsec + total_sleep_time.tv_nsec
+		.tv_sec = wall_to_monotonic.tv_sec +
+				timekeeper.total_sleep_time.tv_sec,
+		.tv_nsec = wall_to_monotonic.tv_nsec +
+				timekeeper.total_sleep_time.tv_nsec
 	};
 
 	set_normalized_timespec(ts, -boottime.tv_sec, -boottime.tv_nsec);
@@ -1014,7 +1020,7 @@ void get_monotonic_boottime(struct timespec *ts)
 		seq = read_seqbegin(&xtime_lock);
 		*ts = xtime;
 		tomono = wall_to_monotonic;
-		sleep = total_sleep_time;
+		sleep = timekeeper.total_sleep_time;
 		nsecs = timekeeping_get_ns();
 
 	} while (read_seqretry(&xtime_lock, seq));
@@ -1047,7 +1053,7 @@ EXPORT_SYMBOL_GPL(ktime_get_boottime);
  */
 void monotonic_to_bootbased(struct timespec *ts)
 {
-	*ts = timespec_add(*ts, total_sleep_time);
+	*ts = timespec_add(*ts, timekeeper.total_sleep_time);
 }
 EXPORT_SYMBOL_GPL(monotonic_to_bootbased);
 
@@ -1122,7 +1128,7 @@ void get_xtime_and_monotonic_and_sleep_offset(struct timespec *xtim,
 		seq = read_seqbegin(&xtime_lock);
 		*xtim = xtime;
 		*wtom = wall_to_monotonic;
-		*sleep = total_sleep_time;
+		*sleep = timekeeper.total_sleep_time;
 	} while (read_seqretry(&xtime_lock, seq));
 }
 
-- 
1.7.3.2.146.gca209



* [PATCH 02/16] time: Move wall_to_monotonic into the timekeeper structure
  2011-11-15  4:03 [PATCH 00/16] Timekeeping cleanups and locking changes John Stultz
  2011-11-15  4:03 ` [PATCH 01/16] time: Move total_sleep_time into the timekeeper structure John Stultz
@ 2011-11-15  4:03 ` John Stultz
  2011-11-15  4:03 ` [PATCH 03/16] time: Move xtime into timekeeper structure John Stultz
                   ` (13 subsequent siblings)
  15 siblings, 0 replies; 18+ messages in thread
From: John Stultz @ 2011-11-15  4:03 UTC (permalink / raw)
  To: LKML; +Cc: John Stultz, Thomas Gleixner, Eric Dumazet, Richard Cochran

In preparation for locking cleanups, move wall_to_monotonic
into the timekeeper structure.

CC: Thomas Gleixner <tglx@linutronix.de>
CC: Eric Dumazet <eric.dumazet@gmail.com>
CC: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
 kernel/time/timekeeping.c |   69 ++++++++++++++++++++++++---------------------
 1 files changed, 37 insertions(+), 32 deletions(-)

diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index bd8e7fd..15740b5 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -48,6 +48,21 @@ struct timekeeper {
 	/* NTP adjusted clock multiplier */
 	u32	mult;
 
+	/*
+	 * wall_to_monotonic is what we need to add to xtime (or xtime corrected
+	 * for sub jiffie times) to get to monotonic time.  Monotonic is pegged
+	 * at zero at system boot time, so wall_to_monotonic will be negative,
+	 * however, we will ALWAYS keep the tv_nsec part positive so we can use
+	 * the usual normalization.
+	 *
+	 * wall_to_monotonic is moved after resume from suspend for the
+	 * monotonic time not to jump. We need to add total_sleep_time to
+	 * wall_to_monotonic to get the real boot based time offset.
+	 *
+	 * - wall_to_monotonic is no longer the boot time, getboottime must be
+	 * used instead.
+	 */
+	struct timespec wall_to_monotonic;
 	/* time spent in suspend */
 	struct timespec total_sleep_time;
 
@@ -148,21 +163,8 @@ __cacheline_aligned_in_smp DEFINE_SEQLOCK(xtime_lock);
 
 /*
  * The current time
- * wall_to_monotonic is what we need to add to xtime (or xtime corrected
- * for sub jiffie times) to get to monotonic time.  Monotonic is pegged
- * at zero at system boot time, so wall_to_monotonic will be negative,
- * however, we will ALWAYS keep the tv_nsec part positive so we can use
- * the usual normalization.
- *
- * wall_to_monotonic is moved after resume from suspend for the monotonic
- * time not to jump. We need to add total_sleep_time to wall_to_monotonic
- * to get the real boot based time offset.
- *
- * - wall_to_monotonic is no longer the boot time, getboottime must be
- * used instead.
  */
 static struct timespec xtime __attribute__ ((aligned (16)));
-static struct timespec wall_to_monotonic __attribute__ ((aligned (16)));
 
 /*
  * The raw monotonic time for the CLOCK_MONOTONIC_RAW posix clock.
@@ -176,8 +178,8 @@ int __read_mostly timekeeping_suspended;
 void timekeeping_leap_insert(int leapsecond)
 {
 	xtime.tv_sec += leapsecond;
-	wall_to_monotonic.tv_sec -= leapsecond;
-	update_vsyscall(&xtime, &wall_to_monotonic, timekeeper.clock,
+	timekeeper.wall_to_monotonic.tv_sec -= leapsecond;
+	update_vsyscall(&xtime, &timekeeper.wall_to_monotonic, timekeeper.clock,
 			timekeeper.mult);
 }
 
@@ -249,8 +251,8 @@ ktime_t ktime_get(void)
 
 	do {
 		seq = read_seqbegin(&xtime_lock);
-		secs = xtime.tv_sec + wall_to_monotonic.tv_sec;
-		nsecs = xtime.tv_nsec + wall_to_monotonic.tv_nsec;
+		secs = xtime.tv_sec + timekeeper.wall_to_monotonic.tv_sec;
+		nsecs = xtime.tv_nsec + timekeeper.wall_to_monotonic.tv_nsec;
 		nsecs += timekeeping_get_ns();
 
 	} while (read_seqretry(&xtime_lock, seq));
@@ -281,7 +283,7 @@ void ktime_get_ts(struct timespec *ts)
 	do {
 		seq = read_seqbegin(&xtime_lock);
 		*ts = xtime;
-		tomono = wall_to_monotonic;
+		tomono = timekeeper.wall_to_monotonic;
 		nsecs = timekeeping_get_ns();
 
 	} while (read_seqretry(&xtime_lock, seq));
@@ -370,14 +372,15 @@ int do_settimeofday(const struct timespec *tv)
 
 	ts_delta.tv_sec = tv->tv_sec - xtime.tv_sec;
 	ts_delta.tv_nsec = tv->tv_nsec - xtime.tv_nsec;
-	wall_to_monotonic = timespec_sub(wall_to_monotonic, ts_delta);
+	timekeeper.wall_to_monotonic =
+			timespec_sub(timekeeper.wall_to_monotonic, ts_delta);
 
 	xtime = *tv;
 
 	timekeeper.ntp_error = 0;
 	ntp_clear();
 
-	update_vsyscall(&xtime, &wall_to_monotonic, timekeeper.clock,
+	update_vsyscall(&xtime, &timekeeper.wall_to_monotonic, timekeeper.clock,
 				timekeeper.mult);
 
 	write_sequnlock_irqrestore(&xtime_lock, flags);
@@ -409,12 +412,13 @@ int timekeeping_inject_offset(struct timespec *ts)
 	timekeeping_forward_now();
 
 	xtime = timespec_add(xtime, *ts);
-	wall_to_monotonic = timespec_sub(wall_to_monotonic, *ts);
+	timekeeper.wall_to_monotonic =
+				timespec_sub(timekeeper.wall_to_monotonic, *ts);
 
 	timekeeper.ntp_error = 0;
 	ntp_clear();
 
-	update_vsyscall(&xtime, &wall_to_monotonic, timekeeper.clock,
+	update_vsyscall(&xtime, &timekeeper.wall_to_monotonic, timekeeper.clock,
 				timekeeper.mult);
 
 	write_sequnlock_irqrestore(&xtime_lock, flags);
@@ -588,7 +592,7 @@ void __init timekeeping_init(void)
 		boot.tv_sec = xtime.tv_sec;
 		boot.tv_nsec = xtime.tv_nsec;
 	}
-	set_normalized_timespec(&wall_to_monotonic,
+	set_normalized_timespec(&timekeeper.wall_to_monotonic,
 				-boot.tv_sec, -boot.tv_nsec);
 	timekeeper.total_sleep_time.tv_sec = 0;
 	timekeeper.total_sleep_time.tv_nsec = 0;
@@ -614,7 +618,8 @@ static void __timekeeping_inject_sleeptime(struct timespec *delta)
 	}
 
 	xtime = timespec_add(xtime, *delta);
-	wall_to_monotonic = timespec_sub(wall_to_monotonic, *delta);
+	timekeeper.wall_to_monotonic =
+			timespec_sub(timekeeper.wall_to_monotonic, *delta);
 	timekeeper.total_sleep_time = timespec_add(
 					timekeeper.total_sleep_time, *delta);
 }
@@ -647,7 +652,7 @@ void timekeeping_inject_sleeptime(struct timespec *delta)
 
 	timekeeper.ntp_error = 0;
 	ntp_clear();
-	update_vsyscall(&xtime, &wall_to_monotonic, timekeeper.clock,
+	update_vsyscall(&xtime, &timekeeper.wall_to_monotonic, timekeeper.clock,
 				timekeeper.mult);
 
 	write_sequnlock_irqrestore(&xtime_lock, flags);
@@ -970,7 +975,7 @@ static void update_wall_time(void)
 	}
 
 	/* check to see if there is a new clocksource to use */
-	update_vsyscall(&xtime, &wall_to_monotonic, timekeeper.clock,
+	update_vsyscall(&xtime, &timekeeper.wall_to_monotonic, timekeeper.clock,
 				timekeeper.mult);
 }
 
@@ -988,9 +993,9 @@ static void update_wall_time(void)
 void getboottime(struct timespec *ts)
 {
 	struct timespec boottime = {
-		.tv_sec = wall_to_monotonic.tv_sec +
+		.tv_sec = timekeeper.wall_to_monotonic.tv_sec +
 				timekeeper.total_sleep_time.tv_sec,
-		.tv_nsec = wall_to_monotonic.tv_nsec +
+		.tv_nsec = timekeeper.wall_to_monotonic.tv_nsec +
 				timekeeper.total_sleep_time.tv_nsec
 	};
 
@@ -1019,7 +1024,7 @@ void get_monotonic_boottime(struct timespec *ts)
 	do {
 		seq = read_seqbegin(&xtime_lock);
 		*ts = xtime;
-		tomono = wall_to_monotonic;
+		tomono = timekeeper.wall_to_monotonic;
 		sleep = timekeeper.total_sleep_time;
 		nsecs = timekeeping_get_ns();
 
@@ -1092,7 +1097,7 @@ struct timespec get_monotonic_coarse(void)
 		seq = read_seqbegin(&xtime_lock);
 
 		now = xtime;
-		mono = wall_to_monotonic;
+		mono = timekeeper.wall_to_monotonic;
 	} while (read_seqretry(&xtime_lock, seq));
 
 	set_normalized_timespec(&now, now.tv_sec + mono.tv_sec,
@@ -1127,7 +1132,7 @@ void get_xtime_and_monotonic_and_sleep_offset(struct timespec *xtim,
 	do {
 		seq = read_seqbegin(&xtime_lock);
 		*xtim = xtime;
-		*wtom = wall_to_monotonic;
+		*wtom = timekeeper.wall_to_monotonic;
 		*sleep = timekeeper.total_sleep_time;
 	} while (read_seqretry(&xtime_lock, seq));
 }
@@ -1142,7 +1147,7 @@ ktime_t ktime_get_monotonic_offset(void)
 
 	do {
 		seq = read_seqbegin(&xtime_lock);
-		wtom = wall_to_monotonic;
+		wtom = timekeeper.wall_to_monotonic;
 	} while (read_seqretry(&xtime_lock, seq));
 	return timespec_to_ktime(wtom);
 }
-- 
1.7.3.2.146.gca209



* [PATCH 03/16] time: Move xtime into timekeeper structure
  2011-11-15  4:03 [PATCH 00/16] Timekeeping cleanups and locking changes John Stultz
  2011-11-15  4:03 ` [PATCH 01/16] time: Move total_sleep_time into the timekeeper structure John Stultz
  2011-11-15  4:03 ` [PATCH 02/16] time: Move wall_to_monotonic " John Stultz
@ 2011-11-15  4:03 ` John Stultz
  2011-11-15  4:03 ` [PATCH 04/16] time: Move raw_time into timekeeper structure John Stultz
                   ` (12 subsequent siblings)
  15 siblings, 0 replies; 18+ messages in thread
From: John Stultz @ 2011-11-15  4:03 UTC (permalink / raw)
  To: LKML; +Cc: John Stultz, Thomas Gleixner, Eric Dumazet, Richard Cochran

In preparation for locking cleanups, move xtime into the
timekeeper structure.

CC: Thomas Gleixner <tglx@linutronix.de>
CC: Eric Dumazet <eric.dumazet@gmail.com>
CC: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
 kernel/time/timekeeping.c |   91 +++++++++++++++++++++++----------------------
 1 files changed, 47 insertions(+), 44 deletions(-)

diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index 15740b5..6c64931 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -48,6 +48,8 @@ struct timekeeper {
 	/* NTP adjusted clock multiplier */
 	u32	mult;
 
+	/* The current time */
+	struct timespec xtime;
 	/*
 	 * wall_to_monotonic is what we need to add to xtime (or xtime corrected
 	 * for sub jiffie times) to get to monotonic time.  Monotonic is pegged
@@ -161,10 +163,6 @@ static inline s64 timekeeping_get_ns_raw(void)
 __cacheline_aligned_in_smp DEFINE_SEQLOCK(xtime_lock);
 
 
-/*
- * The current time
- */
-static struct timespec xtime __attribute__ ((aligned (16)));
 
 /*
  * The raw monotonic time for the CLOCK_MONOTONIC_RAW posix clock.
@@ -177,10 +175,10 @@ int __read_mostly timekeeping_suspended;
 /* must hold xtime_lock */
 void timekeeping_leap_insert(int leapsecond)
 {
-	xtime.tv_sec += leapsecond;
+	timekeeper.xtime.tv_sec += leapsecond;
 	timekeeper.wall_to_monotonic.tv_sec -= leapsecond;
-	update_vsyscall(&xtime, &timekeeper.wall_to_monotonic, timekeeper.clock,
-			timekeeper.mult);
+	update_vsyscall(&timekeeper.xtime, &timekeeper.wall_to_monotonic,
+			 timekeeper.clock, timekeeper.mult);
 }
 
 /**
@@ -207,7 +205,7 @@ static void timekeeping_forward_now(void)
 	/* If arch requires, add in gettimeoffset() */
 	nsec += arch_gettimeoffset();
 
-	timespec_add_ns(&xtime, nsec);
+	timespec_add_ns(&timekeeper.xtime, nsec);
 
 	nsec = clocksource_cyc2ns(cycle_delta, clock->mult, clock->shift);
 	timespec_add_ns(&raw_time, nsec);
@@ -229,7 +227,7 @@ void getnstimeofday(struct timespec *ts)
 	do {
 		seq = read_seqbegin(&xtime_lock);
 
-		*ts = xtime;
+		*ts = timekeeper.xtime;
 		nsecs = timekeeping_get_ns();
 
 		/* If arch requires, add in gettimeoffset() */
@@ -251,8 +249,10 @@ ktime_t ktime_get(void)
 
 	do {
 		seq = read_seqbegin(&xtime_lock);
-		secs = xtime.tv_sec + timekeeper.wall_to_monotonic.tv_sec;
-		nsecs = xtime.tv_nsec + timekeeper.wall_to_monotonic.tv_nsec;
+		secs = timekeeper.xtime.tv_sec +
+				timekeeper.wall_to_monotonic.tv_sec;
+		nsecs = timekeeper.xtime.tv_nsec +
+				timekeeper.wall_to_monotonic.tv_nsec;
 		nsecs += timekeeping_get_ns();
 
 	} while (read_seqretry(&xtime_lock, seq));
@@ -282,7 +282,7 @@ void ktime_get_ts(struct timespec *ts)
 
 	do {
 		seq = read_seqbegin(&xtime_lock);
-		*ts = xtime;
+		*ts = timekeeper.xtime;
 		tomono = timekeeper.wall_to_monotonic;
 		nsecs = timekeeping_get_ns();
 
@@ -317,7 +317,7 @@ void getnstime_raw_and_real(struct timespec *ts_raw, struct timespec *ts_real)
 		seq = read_seqbegin(&xtime_lock);
 
 		*ts_raw = raw_time;
-		*ts_real = xtime;
+		*ts_real = timekeeper.xtime;
 
 		nsecs_raw = timekeeping_get_ns_raw();
 		nsecs_real = timekeeping_get_ns();
@@ -370,18 +370,18 @@ int do_settimeofday(const struct timespec *tv)
 
 	timekeeping_forward_now();
 
-	ts_delta.tv_sec = tv->tv_sec - xtime.tv_sec;
-	ts_delta.tv_nsec = tv->tv_nsec - xtime.tv_nsec;
+	ts_delta.tv_sec = tv->tv_sec - timekeeper.xtime.tv_sec;
+	ts_delta.tv_nsec = tv->tv_nsec - timekeeper.xtime.tv_nsec;
 	timekeeper.wall_to_monotonic =
 			timespec_sub(timekeeper.wall_to_monotonic, ts_delta);
 
-	xtime = *tv;
+	timekeeper.xtime = *tv;
 
 	timekeeper.ntp_error = 0;
 	ntp_clear();
 
-	update_vsyscall(&xtime, &timekeeper.wall_to_monotonic, timekeeper.clock,
-				timekeeper.mult);
+	update_vsyscall(&timekeeper.xtime, &timekeeper.wall_to_monotonic,
+			timekeeper.clock, timekeeper.mult);
 
 	write_sequnlock_irqrestore(&xtime_lock, flags);
 
@@ -411,15 +411,15 @@ int timekeeping_inject_offset(struct timespec *ts)
 
 	timekeeping_forward_now();
 
-	xtime = timespec_add(xtime, *ts);
+	timekeeper.xtime = timespec_add(timekeeper.xtime, *ts);
 	timekeeper.wall_to_monotonic =
 				timespec_sub(timekeeper.wall_to_monotonic, *ts);
 
 	timekeeper.ntp_error = 0;
 	ntp_clear();
 
-	update_vsyscall(&xtime, &timekeeper.wall_to_monotonic, timekeeper.clock,
-				timekeeper.mult);
+	update_vsyscall(&timekeeper.xtime, &timekeeper.wall_to_monotonic,
+			timekeeper.clock, timekeeper.mult);
 
 	write_sequnlock_irqrestore(&xtime_lock, flags);
 
@@ -584,13 +584,13 @@ void __init timekeeping_init(void)
 		clock->enable(clock);
 	timekeeper_setup_internals(clock);
 
-	xtime.tv_sec = now.tv_sec;
-	xtime.tv_nsec = now.tv_nsec;
+	timekeeper.xtime.tv_sec = now.tv_sec;
+	timekeeper.xtime.tv_nsec = now.tv_nsec;
 	raw_time.tv_sec = 0;
 	raw_time.tv_nsec = 0;
 	if (boot.tv_sec == 0 && boot.tv_nsec == 0) {
-		boot.tv_sec = xtime.tv_sec;
-		boot.tv_nsec = xtime.tv_nsec;
+		boot.tv_sec = timekeeper.xtime.tv_sec;
+		boot.tv_nsec = timekeeper.xtime.tv_nsec;
 	}
 	set_normalized_timespec(&timekeeper.wall_to_monotonic,
 				-boot.tv_sec, -boot.tv_nsec);
@@ -617,7 +617,7 @@ static void __timekeeping_inject_sleeptime(struct timespec *delta)
 		return;
 	}
 
-	xtime = timespec_add(xtime, *delta);
+	timekeeper.xtime = timespec_add(timekeeper.xtime, *delta);
 	timekeeper.wall_to_monotonic =
 			timespec_sub(timekeeper.wall_to_monotonic, *delta);
 	timekeeper.total_sleep_time = timespec_add(
@@ -652,8 +652,8 @@ void timekeeping_inject_sleeptime(struct timespec *delta)
 
 	timekeeper.ntp_error = 0;
 	ntp_clear();
-	update_vsyscall(&xtime, &timekeeper.wall_to_monotonic, timekeeper.clock,
-				timekeeper.mult);
+	update_vsyscall(&timekeeper.xtime, &timekeeper.wall_to_monotonic,
+			timekeeper.clock, timekeeper.mult);
 
 	write_sequnlock_irqrestore(&xtime_lock, flags);
 
@@ -716,7 +716,7 @@ static int timekeeping_suspend(void)
 	 * try to compensate so the difference in system time
 	 * and persistent_clock time stays close to constant.
 	 */
-	delta = timespec_sub(xtime, timekeeping_suspend_time);
+	delta = timespec_sub(timekeeper.xtime, timekeeping_suspend_time);
 	delta_delta = timespec_sub(delta, old_delta);
 	if (abs(delta_delta.tv_sec)  >= 2) {
 		/*
@@ -862,7 +862,7 @@ static cycle_t logarithmic_accumulation(cycle_t offset, int shift)
 	timekeeper.xtime_nsec += timekeeper.xtime_interval << shift;
 	while (timekeeper.xtime_nsec >= nsecps) {
 		timekeeper.xtime_nsec -= nsecps;
-		xtime.tv_sec++;
+		timekeeper.xtime.tv_sec++;
 		second_overflow();
 	}
 
@@ -908,7 +908,8 @@ static void update_wall_time(void)
 #else
 	offset = (clock->read(clock) - clock->cycle_last) & clock->mask;
 #endif
-	timekeeper.xtime_nsec = (s64)xtime.tv_nsec << timekeeper.shift;
+	timekeeper.xtime_nsec = (s64)timekeeper.xtime.tv_nsec <<
+						timekeeper.shift;
 
 	/*
 	 * With NO_HZ we may have to accumulate many cycle_intervals
@@ -959,8 +960,10 @@ static void update_wall_time(void)
 	 * Store full nanoseconds into xtime after rounding it up and
 	 * add the remainder to the error difference.
 	 */
-	xtime.tv_nsec =	((s64) timekeeper.xtime_nsec >> timekeeper.shift) + 1;
-	timekeeper.xtime_nsec -= (s64) xtime.tv_nsec << timekeeper.shift;
+	timekeeper.xtime.tv_nsec = ((s64)timekeeper.xtime_nsec >>
+						timekeeper.shift) + 1;
+	timekeeper.xtime_nsec -= (s64)timekeeper.xtime.tv_nsec <<
+						timekeeper.shift;
 	timekeeper.ntp_error +=	timekeeper.xtime_nsec <<
 				timekeeper.ntp_error_shift;
 
@@ -968,15 +971,15 @@ static void update_wall_time(void)
 	 * Finally, make sure that after the rounding
 	 * xtime.tv_nsec isn't larger then NSEC_PER_SEC
 	 */
-	if (unlikely(xtime.tv_nsec >= NSEC_PER_SEC)) {
-		xtime.tv_nsec -= NSEC_PER_SEC;
-		xtime.tv_sec++;
+	if (unlikely(timekeeper.xtime.tv_nsec >= NSEC_PER_SEC)) {
+		timekeeper.xtime.tv_nsec -= NSEC_PER_SEC;
+		timekeeper.xtime.tv_sec++;
 		second_overflow();
 	}
 
 	/* check to see if there is a new clocksource to use */
-	update_vsyscall(&xtime, &timekeeper.wall_to_monotonic, timekeeper.clock,
-				timekeeper.mult);
+	update_vsyscall(&timekeeper.xtime, &timekeeper.wall_to_monotonic,
+			timekeeper.clock, timekeeper.mult);
 }
 
 /**
@@ -1023,7 +1026,7 @@ void get_monotonic_boottime(struct timespec *ts)
 
 	do {
 		seq = read_seqbegin(&xtime_lock);
-		*ts = xtime;
+		*ts = timekeeper.xtime;
 		tomono = timekeeper.wall_to_monotonic;
 		sleep = timekeeper.total_sleep_time;
 		nsecs = timekeeping_get_ns();
@@ -1064,13 +1067,13 @@ EXPORT_SYMBOL_GPL(monotonic_to_bootbased);
 
 unsigned long get_seconds(void)
 {
-	return xtime.tv_sec;
+	return timekeeper.xtime.tv_sec;
 }
 EXPORT_SYMBOL(get_seconds);
 
 struct timespec __current_kernel_time(void)
 {
-	return xtime;
+	return timekeeper.xtime;
 }
 
 struct timespec current_kernel_time(void)
@@ -1081,7 +1084,7 @@ struct timespec current_kernel_time(void)
 	do {
 		seq = read_seqbegin(&xtime_lock);
 
-		now = xtime;
+		now = timekeeper.xtime;
 	} while (read_seqretry(&xtime_lock, seq));
 
 	return now;
@@ -1096,7 +1099,7 @@ struct timespec get_monotonic_coarse(void)
 	do {
 		seq = read_seqbegin(&xtime_lock);
 
-		now = xtime;
+		now = timekeeper.xtime;
 		mono = timekeeper.wall_to_monotonic;
 	} while (read_seqretry(&xtime_lock, seq));
 
@@ -1131,7 +1134,7 @@ void get_xtime_and_monotonic_and_sleep_offset(struct timespec *xtim,
 
 	do {
 		seq = read_seqbegin(&xtime_lock);
-		*xtim = xtime;
+		*xtim = timekeeper.xtime;
 		*wtom = timekeeper.wall_to_monotonic;
 		*sleep = timekeeper.total_sleep_time;
 	} while (read_seqretry(&xtime_lock, seq));
-- 
1.7.3.2.146.gca209



* [PATCH 04/16] time: Move raw_time into timekeeper structure
  2011-11-15  4:03 [PATCH 00/16] Timekeeping cleanups and locking changes John Stultz
                   ` (2 preceding siblings ...)
  2011-11-15  4:03 ` [PATCH 03/16] time: Move xtime into timekeeper structure John Stultz
@ 2011-11-15  4:03 ` John Stultz
  2011-11-15  4:03 ` [PATCH 05/16] time: Cleanup global variables and move them to the top John Stultz
                   ` (11 subsequent siblings)
  15 siblings, 0 replies; 18+ messages in thread
From: John Stultz @ 2011-11-15  4:03 UTC (permalink / raw)
  To: LKML; +Cc: John Stultz, Thomas Gleixner, Eric Dumazet, Richard Cochran

In preparation for locking cleanups, move raw_time into the
timekeeper structure.

CC: Thomas Gleixner <tglx@linutronix.de>
CC: Eric Dumazet <eric.dumazet@gmail.com>
CC: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
 kernel/time/timekeeping.c |   23 ++++++++++-------------
 1 files changed, 10 insertions(+), 13 deletions(-)

diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index 6c64931..19d160a 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -67,7 +67,8 @@ struct timekeeper {
 	struct timespec wall_to_monotonic;
 	/* time spent in suspend */
 	struct timespec total_sleep_time;
-
+	/* The raw monotonic time for the CLOCK_MONOTONIC_RAW posix clock. */
+	struct timespec raw_time;
 };
 
 static struct timekeeper timekeeper;
@@ -164,10 +165,6 @@ __cacheline_aligned_in_smp DEFINE_SEQLOCK(xtime_lock);
 
 
 
-/*
- * The raw monotonic time for the CLOCK_MONOTONIC_RAW posix clock.
- */
-static struct timespec raw_time;
 
 /* flag for if timekeeping is suspended */
 int __read_mostly timekeeping_suspended;
@@ -208,7 +205,7 @@ static void timekeeping_forward_now(void)
 	timespec_add_ns(&timekeeper.xtime, nsec);
 
 	nsec = clocksource_cyc2ns(cycle_delta, clock->mult, clock->shift);
-	timespec_add_ns(&raw_time, nsec);
+	timespec_add_ns(&timekeeper.raw_time, nsec);
 }
 
 /**
@@ -316,7 +313,7 @@ void getnstime_raw_and_real(struct timespec *ts_raw, struct timespec *ts_real)
 
 		seq = read_seqbegin(&xtime_lock);
 
-		*ts_raw = raw_time;
+		*ts_raw = timekeeper.raw_time;
 		*ts_real = timekeeper.xtime;
 
 		nsecs_raw = timekeeping_get_ns_raw();
@@ -495,7 +492,7 @@ void getrawmonotonic(struct timespec *ts)
 	do {
 		seq = read_seqbegin(&xtime_lock);
 		nsecs = timekeeping_get_ns_raw();
-		*ts = raw_time;
+		*ts = timekeeper.raw_time;
 
 	} while (read_seqretry(&xtime_lock, seq));
 
@@ -586,8 +583,8 @@ void __init timekeeping_init(void)
 
 	timekeeper.xtime.tv_sec = now.tv_sec;
 	timekeeper.xtime.tv_nsec = now.tv_nsec;
-	raw_time.tv_sec = 0;
-	raw_time.tv_nsec = 0;
+	timekeeper.raw_time.tv_sec = 0;
+	timekeeper.raw_time.tv_nsec = 0;
 	if (boot.tv_sec == 0 && boot.tv_nsec == 0) {
 		boot.tv_sec = timekeeper.xtime.tv_sec;
 		boot.tv_nsec = timekeeper.xtime.tv_nsec;
@@ -868,13 +865,13 @@ static cycle_t logarithmic_accumulation(cycle_t offset, int shift)
 
 	/* Accumulate raw time */
 	raw_nsecs = timekeeper.raw_interval << shift;
-	raw_nsecs += raw_time.tv_nsec;
+	raw_nsecs += timekeeper.raw_time.tv_nsec;
 	if (raw_nsecs >= NSEC_PER_SEC) {
 		u64 raw_secs = raw_nsecs;
 		raw_nsecs = do_div(raw_secs, NSEC_PER_SEC);
-		raw_time.tv_sec += raw_secs;
+		timekeeper.raw_time.tv_sec += raw_secs;
 	}
-	raw_time.tv_nsec = raw_nsecs;
+	timekeeper.raw_time.tv_nsec = raw_nsecs;
 
 	/* Accumulate error between NTP and clock interval */
 	timekeeper.ntp_error += tick_length << shift;
-- 
1.7.3.2.146.gca209



* [PATCH 05/16] time: Cleanup global variables and move them to the top
  2011-11-15  4:03 [PATCH 00/16] Timekeeping cleanups and locking changes John Stultz
                   ` (3 preceding siblings ...)
  2011-11-15  4:03 ` [PATCH 04/16] time: Move raw_time into timekeeper structure John Stultz
@ 2011-11-15  4:03 ` John Stultz
  2011-11-15  4:03 ` [PATCH 06/16] time: Add timekeeper lock John Stultz
                   ` (10 subsequent siblings)
  15 siblings, 0 replies; 18+ messages in thread
From: John Stultz @ 2011-11-15  4:03 UTC (permalink / raw)
  To: LKML; +Cc: John Stultz, Thomas Gleixner, Eric Dumazet, Richard Cochran

Move the global xtime_lock and timekeeping_suspended variables up
to the top of timekeeping.c.

CC: Thomas Gleixner <tglx@linutronix.de>
CC: Eric Dumazet <eric.dumazet@gmail.com>
CC: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
 kernel/time/timekeeping.c |   24 ++++++++++++------------
 1 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index 19d160a..78872ba 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -73,6 +73,18 @@ struct timekeeper {
 
 static struct timekeeper timekeeper;
 
+/*
+ * This read-write spinlock protects us from races in SMP while
+ * playing with xtime.
+ */
+__cacheline_aligned_in_smp DEFINE_SEQLOCK(xtime_lock);
+
+
+/* flag for if timekeeping is suspended */
+int __read_mostly timekeeping_suspended;
+
+
+
 /**
  * timekeeper_setup_internals - Set up internals to use clocksource clock.
  *
@@ -157,18 +169,6 @@ static inline s64 timekeeping_get_ns_raw(void)
 	return clocksource_cyc2ns(cycle_delta, clock->mult, clock->shift);
 }
 
-/*
- * This read-write spinlock protects us from races in SMP while
- * playing with xtime.
- */
-__cacheline_aligned_in_smp DEFINE_SEQLOCK(xtime_lock);
-
-
-
-
-/* flag for if timekeeping is suspended */
-int __read_mostly timekeeping_suspended;
-
 /* must hold xtime_lock */
 void timekeeping_leap_insert(int leapsecond)
 {
-- 
1.7.3.2.146.gca209



* [PATCH 06/16] time: Add timekeeper lock
  2011-11-15  4:03 [PATCH 00/16] Timekeeping cleanups and locking changes John Stultz
                   ` (4 preceding siblings ...)
  2011-11-15  4:03 ` [PATCH 05/16] time: Cleanup global variables and move them to the top John Stultz
@ 2011-11-15  4:03 ` John Stultz
  2011-11-15  4:03 ` [PATCH 07/16] ntp: Cleanup timex.h John Stultz
                   ` (9 subsequent siblings)
  15 siblings, 0 replies; 18+ messages in thread
From: John Stultz @ 2011-11-15  4:03 UTC (permalink / raw)
  To: LKML; +Cc: John Stultz, Thomas Gleixner, Eric Dumazet, Richard Cochran

Now that all the timekeeping variables are stored in
the timekeeper structure, add a new lock to protect the
structure.

For now, this lock nests under the xtime_lock for writes.

For readers, we don't need to take xtime_lock anymore.
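
Roughly, the discipline this sets up (condensed for illustration from the
hunks below, not new code):

	unsigned long flags1, flags2, seq;
	struct timespec ts;

	/* writers: nest the new lock inside xtime_lock */
	write_seqlock_irqsave(&xtime_lock, flags1);
	write_seqlock_irqsave(&timekeeper.lock, flags2);
	/* ... modify timekeeper fields ... */
	write_sequnlock_irqrestore(&timekeeper.lock, flags2);
	write_sequnlock_irqrestore(&xtime_lock, flags1);

	/* readers: timekeeper.lock alone is enough */
	do {
		seq = read_seqbegin(&timekeeper.lock);
		ts = timekeeper.xtime;
	} while (read_seqretry(&timekeeper.lock, seq));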

CC: Thomas Gleixner <tglx@linutronix.de>
CC: Eric Dumazet <eric.dumazet@gmail.com>
CC: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
 kernel/time/timekeeping.c |  126 +++++++++++++++++++++++++++++----------------
 1 files changed, 82 insertions(+), 44 deletions(-)

diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index 78872ba..8f54200 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -69,6 +69,9 @@ struct timekeeper {
 	struct timespec total_sleep_time;
 	/* The raw monotonic time for the CLOCK_MONOTONIC_RAW posix clock. */
 	struct timespec raw_time;
+
+	/* Seqlock for all timekeeper values */
+	seqlock_t lock;
 };
 
 static struct timekeeper timekeeper;
@@ -172,10 +175,17 @@ static inline s64 timekeeping_get_ns_raw(void)
 /* must hold xtime_lock */
 void timekeeping_leap_insert(int leapsecond)
 {
+	unsigned long flags;
+
+	write_seqlock_irqsave(&timekeeper.lock, flags);
+
 	timekeeper.xtime.tv_sec += leapsecond;
 	timekeeper.wall_to_monotonic.tv_sec -= leapsecond;
 	update_vsyscall(&timekeeper.xtime, &timekeeper.wall_to_monotonic,
 			 timekeeper.clock, timekeeper.mult);
+
+	write_sequnlock_irqrestore(&timekeeper.lock, flags);
+
 }
 
 /**
@@ -222,7 +232,7 @@ void getnstimeofday(struct timespec *ts)
 	WARN_ON(timekeeping_suspended);
 
 	do {
-		seq = read_seqbegin(&xtime_lock);
+		seq = read_seqbegin(&timekeeper.lock);
 
 		*ts = timekeeper.xtime;
 		nsecs = timekeeping_get_ns();
@@ -230,7 +240,7 @@ void getnstimeofday(struct timespec *ts)
 		/* If arch requires, add in gettimeoffset() */
 		nsecs += arch_gettimeoffset();
 
-	} while (read_seqretry(&xtime_lock, seq));
+	} while (read_seqretry(&timekeeper.lock, seq));
 
 	timespec_add_ns(ts, nsecs);
 }
@@ -245,14 +255,14 @@ ktime_t ktime_get(void)
 	WARN_ON(timekeeping_suspended);
 
 	do {
-		seq = read_seqbegin(&xtime_lock);
+		seq = read_seqbegin(&timekeeper.lock);
 		secs = timekeeper.xtime.tv_sec +
 				timekeeper.wall_to_monotonic.tv_sec;
 		nsecs = timekeeper.xtime.tv_nsec +
 				timekeeper.wall_to_monotonic.tv_nsec;
 		nsecs += timekeeping_get_ns();
 
-	} while (read_seqretry(&xtime_lock, seq));
+	} while (read_seqretry(&timekeeper.lock, seq));
 	/*
 	 * Use ktime_set/ktime_add_ns to create a proper ktime on
 	 * 32-bit architectures without CONFIG_KTIME_SCALAR.
@@ -278,12 +288,12 @@ void ktime_get_ts(struct timespec *ts)
 	WARN_ON(timekeeping_suspended);
 
 	do {
-		seq = read_seqbegin(&xtime_lock);
+		seq = read_seqbegin(&timekeeper.lock);
 		*ts = timekeeper.xtime;
 		tomono = timekeeper.wall_to_monotonic;
 		nsecs = timekeeping_get_ns();
 
-	} while (read_seqretry(&xtime_lock, seq));
+	} while (read_seqretry(&timekeeper.lock, seq));
 
 	set_normalized_timespec(ts, ts->tv_sec + tomono.tv_sec,
 				ts->tv_nsec + tomono.tv_nsec + nsecs);
@@ -311,7 +321,7 @@ void getnstime_raw_and_real(struct timespec *ts_raw, struct timespec *ts_real)
 	do {
 		u32 arch_offset;
 
-		seq = read_seqbegin(&xtime_lock);
+		seq = read_seqbegin(&timekeeper.lock);
 
 		*ts_raw = timekeeper.raw_time;
 		*ts_real = timekeeper.xtime;
@@ -324,7 +334,7 @@ void getnstime_raw_and_real(struct timespec *ts_raw, struct timespec *ts_real)
 		nsecs_raw += arch_offset;
 		nsecs_real += arch_offset;
 
-	} while (read_seqretry(&xtime_lock, seq));
+	} while (read_seqretry(&timekeeper.lock, seq));
 
 	timespec_add_ns(ts_raw, nsecs_raw);
 	timespec_add_ns(ts_real, nsecs_real);
@@ -358,12 +368,13 @@ EXPORT_SYMBOL(do_gettimeofday);
 int do_settimeofday(const struct timespec *tv)
 {
 	struct timespec ts_delta;
-	unsigned long flags;
+	unsigned long flags1,flags2;
 
 	if ((unsigned long)tv->tv_nsec >= NSEC_PER_SEC)
 		return -EINVAL;
 
-	write_seqlock_irqsave(&xtime_lock, flags);
+	write_seqlock_irqsave(&xtime_lock, flags1);
+	write_seqlock_irqsave(&timekeeper.lock, flags2);
 
 	timekeeping_forward_now();
 
@@ -380,7 +391,8 @@ int do_settimeofday(const struct timespec *tv)
 	update_vsyscall(&timekeeper.xtime, &timekeeper.wall_to_monotonic,
 			timekeeper.clock, timekeeper.mult);
 
-	write_sequnlock_irqrestore(&xtime_lock, flags);
+	write_sequnlock_irqrestore(&timekeeper.lock, flags2);
+	write_sequnlock_irqrestore(&xtime_lock, flags1);
 
 	/* signal hrtimers about time change */
 	clock_was_set();
@@ -399,12 +411,13 @@ EXPORT_SYMBOL(do_settimeofday);
  */
 int timekeeping_inject_offset(struct timespec *ts)
 {
-	unsigned long flags;
+	unsigned long flags1,flags2;
 
 	if ((unsigned long)ts->tv_nsec >= NSEC_PER_SEC)
 		return -EINVAL;
 
-	write_seqlock_irqsave(&xtime_lock, flags);
+	write_seqlock_irqsave(&xtime_lock, flags1);
+	write_seqlock_irqsave(&timekeeper.lock, flags2);
 
 	timekeeping_forward_now();
 
@@ -418,7 +431,8 @@ int timekeeping_inject_offset(struct timespec *ts)
 	update_vsyscall(&timekeeper.xtime, &timekeeper.wall_to_monotonic,
 			timekeeper.clock, timekeeper.mult);
 
-	write_sequnlock_irqrestore(&xtime_lock, flags);
+	write_sequnlock_irqrestore(&timekeeper.lock, flags2);
+	write_sequnlock_irqrestore(&xtime_lock, flags1);
 
 	/* signal hrtimers about time change */
 	clock_was_set();
@@ -490,11 +504,11 @@ void getrawmonotonic(struct timespec *ts)
 	s64 nsecs;
 
 	do {
-		seq = read_seqbegin(&xtime_lock);
+		seq = read_seqbegin(&timekeeper.lock);
 		nsecs = timekeeping_get_ns_raw();
 		*ts = timekeeper.raw_time;
 
-	} while (read_seqretry(&xtime_lock, seq));
+	} while (read_seqretry(&timekeeper.lock, seq));
 
 	timespec_add_ns(ts, nsecs);
 }
@@ -510,24 +524,30 @@ int timekeeping_valid_for_hres(void)
 	int ret;
 
 	do {
-		seq = read_seqbegin(&xtime_lock);
+		seq = read_seqbegin(&timekeeper.lock);
 
 		ret = timekeeper.clock->flags & CLOCK_SOURCE_VALID_FOR_HRES;
 
-	} while (read_seqretry(&xtime_lock, seq));
+	} while (read_seqretry(&timekeeper.lock, seq));
 
 	return ret;
 }
 
 /**
  * timekeeping_max_deferment - Returns max time the clocksource can be deferred
- *
- * Caller must observe xtime_lock via read_seqbegin/read_seqretry to
- * ensure that the clocksource does not change!
  */
 u64 timekeeping_max_deferment(void)
 {
-	return timekeeper.clock->max_idle_ns;
+	unsigned long seq;
+	u64 ret;
+	do {
+		seq = read_seqbegin(&timekeeper.lock);
+
+		ret = timekeeper.clock->max_idle_ns;
+
+	} while (read_seqretry(&timekeeper.lock, seq));
+
+	return ret;
 }
 
 /**
@@ -572,10 +592,13 @@ void __init timekeeping_init(void)
 	read_persistent_clock(&now);
 	read_boot_clock(&boot);
 
-	write_seqlock_irqsave(&xtime_lock, flags);
+	seqlock_init(&timekeeper.lock);
 
+	write_seqlock_irqsave(&xtime_lock, flags);
 	ntp_init();
+	write_sequnlock_irqrestore(&xtime_lock, flags);
 
+	write_seqlock_irqsave(&timekeeper.lock, flags);
 	clock = clocksource_default_clock();
 	if (clock->enable)
 		clock->enable(clock);
@@ -593,7 +616,7 @@ void __init timekeeping_init(void)
 				-boot.tv_sec, -boot.tv_nsec);
 	timekeeper.total_sleep_time.tv_sec = 0;
 	timekeeper.total_sleep_time.tv_nsec = 0;
-	write_sequnlock_irqrestore(&xtime_lock, flags);
+	write_sequnlock_irqrestore(&timekeeper.lock, flags);
 }
 
 /* time in seconds when suspend began */
@@ -634,7 +657,7 @@ static void __timekeeping_inject_sleeptime(struct timespec *delta)
  */
 void timekeeping_inject_sleeptime(struct timespec *delta)
 {
-	unsigned long flags;
+	unsigned long flags1,flags2;
 	struct timespec ts;
 
 	/* Make sure we don't set the clock twice */
@@ -642,7 +665,9 @@ void timekeeping_inject_sleeptime(struct timespec *delta)
 	if (!(ts.tv_sec == 0 && ts.tv_nsec == 0))
 		return;
 
-	write_seqlock_irqsave(&xtime_lock, flags);
+	write_seqlock_irqsave(&xtime_lock, flags1);
+	write_seqlock_irqsave(&timekeeper.lock, flags2);
+
 	timekeeping_forward_now();
 
 	__timekeeping_inject_sleeptime(delta);
@@ -652,7 +677,8 @@ void timekeeping_inject_sleeptime(struct timespec *delta)
 	update_vsyscall(&timekeeper.xtime, &timekeeper.wall_to_monotonic,
 			timekeeper.clock, timekeeper.mult);
 
-	write_sequnlock_irqrestore(&xtime_lock, flags);
+	write_sequnlock_irqrestore(&timekeeper.lock, flags2);
+	write_sequnlock_irqrestore(&xtime_lock, flags1);
 
 	/* signal hrtimers about time change */
 	clock_was_set();
@@ -668,14 +694,15 @@ void timekeeping_inject_sleeptime(struct timespec *delta)
  */
 static void timekeeping_resume(void)
 {
-	unsigned long flags;
+	unsigned long flags1,flags2;
 	struct timespec ts;
 
 	read_persistent_clock(&ts);
 
 	clocksource_resume();
 
-	write_seqlock_irqsave(&xtime_lock, flags);
+	write_seqlock_irqsave(&xtime_lock, flags1);
+	write_seqlock_irqsave(&timekeeper.lock, flags2);
 
 	if (timespec_compare(&ts, &timekeeping_suspend_time) > 0) {
 		ts = timespec_sub(ts, timekeeping_suspend_time);
@@ -685,7 +712,8 @@ static void timekeeping_resume(void)
 	timekeeper.clock->cycle_last = timekeeper.clock->read(timekeeper.clock);
 	timekeeper.ntp_error = 0;
 	timekeeping_suspended = 0;
-	write_sequnlock_irqrestore(&xtime_lock, flags);
+	write_sequnlock_irqrestore(&timekeeper.lock, flags2);
+	write_sequnlock_irqrestore(&xtime_lock, flags1);
 
 	touch_softlockup_watchdog();
 
@@ -697,13 +725,14 @@ static void timekeeping_resume(void)
 
 static int timekeeping_suspend(void)
 {
-	unsigned long flags;
+	unsigned long flags1,flags2;
 	struct timespec		delta, delta_delta;
 	static struct timespec	old_delta;
 
 	read_persistent_clock(&timekeeping_suspend_time);
 
-	write_seqlock_irqsave(&xtime_lock, flags);
+	write_seqlock_irqsave(&xtime_lock, flags1);
+	write_seqlock_irqsave(&timekeeper.lock, flags2);
 	timekeeping_forward_now();
 	timekeeping_suspended = 1;
 
@@ -726,7 +755,8 @@ static int timekeeping_suspend(void)
 		timekeeping_suspend_time =
 			timespec_add(timekeeping_suspend_time, delta_delta);
 	}
-	write_sequnlock_irqrestore(&xtime_lock, flags);
+	write_sequnlock_irqrestore(&timekeeper.lock, flags2);
+	write_sequnlock_irqrestore(&xtime_lock, flags1);
 
 	clockevents_notify(CLOCK_EVT_NOTIFY_SUSPEND, NULL);
 	clocksource_suspend();
@@ -893,10 +923,13 @@ static void update_wall_time(void)
 	struct clocksource *clock;
 	cycle_t offset;
 	int shift = 0, maxshift;
+	unsigned long flags;
+
+	write_seqlock_irqsave(&timekeeper.lock, flags);
 
 	/* Make sure we're fully resumed: */
 	if (unlikely(timekeeping_suspended))
-		return;
+		goto out;
 
 	clock = timekeeper.clock;
 
@@ -977,6 +1010,10 @@ static void update_wall_time(void)
 	/* check to see if there is a new clocksource to use */
 	update_vsyscall(&timekeeper.xtime, &timekeeper.wall_to_monotonic,
 			timekeeper.clock, timekeeper.mult);
+
+out:
+	write_sequnlock_irqrestore(&timekeeper.lock, flags);
+
 }
 
 /**
@@ -1022,13 +1059,13 @@ void get_monotonic_boottime(struct timespec *ts)
 	WARN_ON(timekeeping_suspended);
 
 	do {
-		seq = read_seqbegin(&xtime_lock);
+		seq = read_seqbegin(&timekeeper.lock);
 		*ts = timekeeper.xtime;
 		tomono = timekeeper.wall_to_monotonic;
 		sleep = timekeeper.total_sleep_time;
 		nsecs = timekeeping_get_ns();
 
-	} while (read_seqretry(&xtime_lock, seq));
+	} while (read_seqretry(&timekeeper.lock, seq));
 
 	set_normalized_timespec(ts, ts->tv_sec + tomono.tv_sec + sleep.tv_sec,
 			ts->tv_nsec + tomono.tv_nsec + sleep.tv_nsec + nsecs);
@@ -1079,10 +1116,10 @@ struct timespec current_kernel_time(void)
 	unsigned long seq;
 
 	do {
-		seq = read_seqbegin(&xtime_lock);
+		seq = read_seqbegin(&timekeeper.lock);
 
 		now = timekeeper.xtime;
-	} while (read_seqretry(&xtime_lock, seq));
+	} while (read_seqretry(&timekeeper.lock, seq));
 
 	return now;
 }
@@ -1094,11 +1131,11 @@ struct timespec get_monotonic_coarse(void)
 	unsigned long seq;
 
 	do {
-		seq = read_seqbegin(&xtime_lock);
+		seq = read_seqbegin(&timekeeper.lock);
 
 		now = timekeeper.xtime;
 		mono = timekeeper.wall_to_monotonic;
-	} while (read_seqretry(&xtime_lock, seq));
+	} while (read_seqretry(&timekeeper.lock, seq));
 
 	set_normalized_timespec(&now, now.tv_sec + mono.tv_sec,
 				now.tv_nsec + mono.tv_nsec);
@@ -1130,11 +1167,11 @@ void get_xtime_and_monotonic_and_sleep_offset(struct timespec *xtim,
 	unsigned long seq;
 
 	do {
-		seq = read_seqbegin(&xtime_lock);
+		seq = read_seqbegin(&timekeeper.lock);
 		*xtim = timekeeper.xtime;
 		*wtom = timekeeper.wall_to_monotonic;
 		*sleep = timekeeper.total_sleep_time;
-	} while (read_seqretry(&xtime_lock, seq));
+	} while (read_seqretry(&timekeeper.lock, seq));
 }
 
 /**
@@ -1146,9 +1183,10 @@ ktime_t ktime_get_monotonic_offset(void)
 	struct timespec wtom;
 
 	do {
-		seq = read_seqbegin(&xtime_lock);
+		seq = read_seqbegin(&timekeeper.lock);
 		wtom = timekeeper.wall_to_monotonic;
-	} while (read_seqretry(&xtime_lock, seq));
+	} while (read_seqretry(&timekeeper.lock, seq));
+
 	return timespec_to_ktime(wtom);
 }
 
-- 
1.7.3.2.146.gca209



* [PATCH 07/16] ntp: Cleanup timex.h
  2011-11-15  4:03 [PATCH 00/16] Timekeeping cleanups and locking changes John Stultz
                   ` (5 preceding siblings ...)
  2011-11-15  4:03 ` [PATCH 06/16] time: Add timekeeper lock John Stultz
@ 2011-11-15  4:03 ` John Stultz
  2011-11-15  4:03 ` [PATCH 08/16] ntp: Access tick_length variable via ntp_tick_length() John Stultz
                   ` (8 subsequent siblings)
  15 siblings, 0 replies; 18+ messages in thread
From: John Stultz @ 2011-11-15  4:03 UTC (permalink / raw)
  To: LKML; +Cc: John Stultz, Thomas Gleixner, Eric Dumazet, Richard Cochran

Move ntp_synced() into ntp.c and mark time_status as static.
Also yank the declaration for the non-existent update_ntp_one_tick().

CC: Thomas Gleixner <tglx@linutronix.de>
CC: Eric Dumazet <eric.dumazet@gmail.com>
CC: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
 include/linux/timex.h |   15 ---------------
 kernel/time/ntp.c     |   13 ++++++++++++-
 2 files changed, 12 insertions(+), 16 deletions(-)

diff --git a/include/linux/timex.h b/include/linux/timex.h
index aa60fe7..92e01fc 100644
--- a/include/linux/timex.h
+++ b/include/linux/timex.h
@@ -234,23 +234,9 @@ struct timex {
 extern unsigned long tick_usec;		/* USER_HZ period (usec) */
 extern unsigned long tick_nsec;		/* ACTHZ          period (nsec) */
 
-/*
- * phase-lock loop variables
- */
-extern int time_status;		/* clock synchronization status bits */
-
 extern void ntp_init(void);
 extern void ntp_clear(void);
 
-/**
- * ntp_synced - Returns 1 if the NTP status is not UNSYNC
- *
- */
-static inline int ntp_synced(void)
-{
-	return !(time_status & STA_UNSYNC);
-}
-
 /* Required to safely shift negative values */
 #define shift_right(x, s) ({	\
 	__typeof__(x) __x = (x);	\
@@ -267,7 +253,6 @@ static inline int ntp_synced(void)
 extern u64 tick_length;
 
 extern void second_overflow(void);
-extern void update_ntp_one_tick(void);
 extern int do_adjtimex(struct timex *);
 extern void hardpps(const struct timespec *, const struct timespec *);
 
diff --git a/kernel/time/ntp.c b/kernel/time/ntp.c
index f6117a4..ae7e136 100644
--- a/kernel/time/ntp.c
+++ b/kernel/time/ntp.c
@@ -49,7 +49,7 @@ static struct hrtimer		leap_timer;
 static int			time_state = TIME_OK;
 
 /* clock status bits:							*/
-int				time_status = STA_UNSYNC;
+static int			time_status = STA_UNSYNC;
 
 /* TAI offset (secs):							*/
 static long			time_tai;
@@ -233,6 +233,17 @@ static inline void pps_fill_timex(struct timex *txc)
 
 #endif /* CONFIG_NTP_PPS */
 
+
+/**
+ * ntp_synced - Returns 1 if the NTP status is not UNSYNC
+ *
+ */
+static inline int ntp_synced(void)
+{
+	return !(time_status & STA_UNSYNC);
+}
+
+
 /*
  * NTP methods:
  */
-- 
1.7.3.2.146.gca209



* [PATCH 08/16] ntp: Access tick_length variable via ntp_tick_length()
  2011-11-15  4:03 [PATCH 00/16] Timekeeping cleanups and locking changes John Stultz
                   ` (6 preceding siblings ...)
  2011-11-15  4:03 ` [PATCH 07/16] ntp: Cleanup timex.h John Stultz
@ 2011-11-15  4:03 ` John Stultz
  2011-11-15  4:03 ` [PATCH 09/16] ntp: Add ntp_lock to replace xtime_locking John Stultz
                   ` (7 subsequent siblings)
  15 siblings, 0 replies; 18+ messages in thread
From: John Stultz @ 2011-11-15  4:03 UTC (permalink / raw)
  To: LKML; +Cc: John Stultz, Thomas Gleixner, Eric Dumazet, Richard Cochran

Currently the NTP-managed tick_length value is accessed globally.
In preparation for locking cleanups, make sure it is only accessed
via a function and mark the variable static.
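
The timekeeping.c side then becomes a mechanical switch from poking the
variable to calling the accessor, e.g. (from the hunk below; the open-coded
64 stands in for 8*sizeof(tick_length) now that the variable is no longer
visible there):

	-	maxshift = (8*sizeof(tick_length) - (ilog2(tick_length)+1)) - 1;
	+	maxshift = (64 - (ilog2(ntp_tick_length())+1)) - 1;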

CC: Thomas Gleixner <tglx@linutronix.de>
CC: Eric Dumazet <eric.dumazet@gmail.com>
CC: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
 include/linux/timex.h     |    2 +-
 kernel/time/ntp.c         |    9 ++++++++-
 kernel/time/timekeeping.c |    6 +++---
 3 files changed, 12 insertions(+), 5 deletions(-)

diff --git a/include/linux/timex.h b/include/linux/timex.h
index 92e01fc..b75e186 100644
--- a/include/linux/timex.h
+++ b/include/linux/timex.h
@@ -250,7 +250,7 @@ extern void ntp_clear(void);
 #define NTP_INTERVAL_LENGTH (NSEC_PER_SEC/NTP_INTERVAL_FREQ)
 
 /* Returns how long ticks are at present, in ns / 2^NTP_SCALE_SHIFT. */
-extern u64 tick_length;
+extern u64 ntp_tick_length(void);
 
 extern void second_overflow(void);
 extern int do_adjtimex(struct timex *);
diff --git a/kernel/time/ntp.c b/kernel/time/ntp.c
index ae7e136..f131ba6 100644
--- a/kernel/time/ntp.c
+++ b/kernel/time/ntp.c
@@ -28,7 +28,7 @@ unsigned long			tick_usec = TICK_USEC;
 /* ACTHZ period (nsecs): */
 unsigned long			tick_nsec;
 
-u64				tick_length;
+static u64			tick_length;
 static u64			tick_length_base;
 
 static struct hrtimer		leap_timer;
@@ -360,6 +360,13 @@ void ntp_clear(void)
 	pps_clear();
 }
 
+
+u64 ntp_tick_length(void)
+{
+	return tick_length;
+}
+
+
 /*
  * Leap second processing. If in leap-insert state at the end of the
  * day, the system clock is set back one second; if in leap-delete
diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index 8f54200..f92e636 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -807,7 +807,7 @@ static __always_inline int timekeeping_bigadjust(s64 error, s64 *interval,
 	 * Now calculate the error in (1 << look_ahead) ticks, but first
 	 * remove the single look ahead already included in the error.
 	 */
-	tick_error = tick_length >> (timekeeper.ntp_error_shift + 1);
+	tick_error = ntp_tick_length() >> (timekeeper.ntp_error_shift + 1);
 	tick_error -= timekeeper.xtime_interval >> 1;
 	error = ((error - tick_error) >> look_ahead) + tick_error;
 
@@ -904,7 +904,7 @@ static cycle_t logarithmic_accumulation(cycle_t offset, int shift)
 	timekeeper.raw_time.tv_nsec = raw_nsecs;
 
 	/* Accumulate error between NTP and clock interval */
-	timekeeper.ntp_error += tick_length << shift;
+	timekeeper.ntp_error += ntp_tick_length() << shift;
 	timekeeper.ntp_error -=
 	    (timekeeper.xtime_interval + timekeeper.xtime_remainder) <<
 				(timekeeper.ntp_error_shift + shift);
@@ -952,7 +952,7 @@ static void update_wall_time(void)
 	shift = ilog2(offset) - ilog2(timekeeper.cycle_interval);
 	shift = max(0, shift);
 	/* Bound shift to one less then what overflows tick_length */
-	maxshift = (8*sizeof(tick_length) - (ilog2(tick_length)+1)) - 1;
+	maxshift = (64 - (ilog2(ntp_tick_length())+1)) - 1;
 	shift = min(shift, maxshift);
 	while (offset >= timekeeper.cycle_interval) {
 		offset = logarithmic_accumulation(offset, shift);
-- 
1.7.3.2.146.gca209



* [PATCH 09/16] ntp: Add ntp_lock to replace xtime_locking
  2011-11-15  4:03 [PATCH 00/16] Timekeeping cleanups and locking changes John Stultz
                   ` (7 preceding siblings ...)
  2011-11-15  4:03 ` [PATCH 08/16] ntp: Access tick_length variable via ntp_tick_length() John Stultz
@ 2011-11-15  4:03 ` John Stultz
  2011-11-15  4:04 ` [PATCH 10/16] time: Remove most of xtime_lock usage in timekeeping.c John Stultz
                   ` (6 subsequent siblings)
  15 siblings, 0 replies; 18+ messages in thread
From: John Stultz @ 2011-11-15  4:03 UTC (permalink / raw)
  To: LKML; +Cc: John Stultz, Thomas Gleixner, Eric Dumazet, Richard Cochran

Use a new ntp_lock seqlock to replace the xtime_lock locking in ntp.c.
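
One structural note: ntp_leap_second() now records the leap decision under
ntp_lock and only applies it after dropping the lock, roughly:

	int leap = 0;

	write_seqlock(&ntp_lock);
	/* ... inspect time_state, set leap to -1 / 1 if needed ... */
	write_sequnlock(&ntp_lock);

	if (leap)
		timekeeping_leap_insert(leap);

timekeeping_leap_insert() takes timekeeper.lock, and update_wall_time()
already calls second_overflow() (which now takes ntp_lock) while holding
timekeeper.lock; calling the leap insert with ntp_lock held would invert
that ordering.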

CC: Thomas Gleixner <tglx@linutronix.de>
CC: Eric Dumazet <eric.dumazet@gmail.com>
CC: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
 kernel/time/ntp.c |   63 +++++++++++++++++++++++++++++++++++++----------------
 1 files changed, 44 insertions(+), 19 deletions(-)

diff --git a/kernel/time/ntp.c b/kernel/time/ntp.c
index f131ba6..a0756b3 100644
--- a/kernel/time/ntp.c
+++ b/kernel/time/ntp.c
@@ -22,6 +22,9 @@
  * NTP timekeeping variables:
  */
 
+DEFINE_SEQLOCK(ntp_lock);
+
+
 /* USER_HZ period (usecs): */
 unsigned long			tick_usec = TICK_USEC;
 
@@ -133,7 +136,7 @@ static inline void pps_reset_freq_interval(void)
 /**
  * pps_clear - Clears the PPS state variables
  *
- * Must be called while holding a write on the xtime_lock
+ * Must be called while holding a write on the ntp_lock
  */
 static inline void pps_clear(void)
 {
@@ -149,7 +152,7 @@ static inline void pps_clear(void)
  * the last PPS signal. When it reaches 0, indicate that PPS signal is
  * missing.
  *
- * Must be called while holding a write on the xtime_lock
+ * Must be called while holding a write on the ntp_lock
  */
 static inline void pps_dec_valid(void)
 {
@@ -341,11 +344,13 @@ static void ntp_update_offset(long offset)
 
 /**
  * ntp_clear - Clears the NTP state variables
- *
- * Must be called while holding a write on the xtime_lock
  */
 void ntp_clear(void)
 {
+	unsigned long flags;
+
+	write_seqlock_irqsave(&ntp_lock, flags);
+
 	time_adjust	= 0;		/* stop active adjtime() */
 	time_status	|= STA_UNSYNC;
 	time_maxerror	= NTP_PHASE_LIMIT;
@@ -358,12 +363,21 @@ void ntp_clear(void)
 
 	/* Clear PPS state variables */
 	pps_clear();
+	write_sequnlock_irqrestore(&ntp_lock, flags);
+
 }
 
 
 u64 ntp_tick_length(void)
 {
-	return tick_length;
+	unsigned long seq;
+	s64 ret;
+
+	do {
+		seq = read_seqbegin(&ntp_lock);
+		ret = tick_length;
+	} while (read_seqretry(&ntp_lock, seq));
+	return ret;
 }
 
 
@@ -375,14 +389,14 @@ u64 ntp_tick_length(void)
 static enum hrtimer_restart ntp_leap_second(struct hrtimer *timer)
 {
 	enum hrtimer_restart res = HRTIMER_NORESTART;
+	int leap = 0;
 
-	write_seqlock(&xtime_lock);
-
+	write_seqlock(&ntp_lock);
 	switch (time_state) {
 	case TIME_OK:
 		break;
 	case TIME_INS:
-		timekeeping_leap_insert(-1);
+		leap = -1;
 		time_state = TIME_OOP;
 		printk(KERN_NOTICE
 			"Clock: inserting leap second 23:59:60 UTC\n");
@@ -390,7 +404,7 @@ static enum hrtimer_restart ntp_leap_second(struct hrtimer *timer)
 		res = HRTIMER_RESTART;
 		break;
 	case TIME_DEL:
-		timekeeping_leap_insert(1);
+		leap = 1;
 		time_tai--;
 		time_state = TIME_WAIT;
 		printk(KERN_NOTICE
@@ -405,8 +419,14 @@ static enum hrtimer_restart ntp_leap_second(struct hrtimer *timer)
 			time_state = TIME_OK;
 		break;
 	}
+	write_sequnlock(&ntp_lock);
 
-	write_sequnlock(&xtime_lock);
+	/*
+	 * We have to call this outside of the ntp_lock to keep
+	 * the proper locking hierarchy
+	 */
+	if (leap)
+		timekeeping_leap_insert(leap);
 
 	return res;
 }
@@ -422,6 +442,9 @@ static enum hrtimer_restart ntp_leap_second(struct hrtimer *timer)
 void second_overflow(void)
 {
 	s64 delta;
+	unsigned long flags;
+
+	write_seqlock_irqsave(&ntp_lock, flags);
 
 	/* Bump the maxerror field */
 	time_maxerror += MAXFREQ / NSEC_PER_USEC;
@@ -441,23 +464,25 @@ void second_overflow(void)
 	pps_dec_valid();
 
 	if (!time_adjust)
-		return;
+		goto out;
 
 	if (time_adjust > MAX_TICKADJ) {
 		time_adjust -= MAX_TICKADJ;
 		tick_length += MAX_TICKADJ_SCALED;
-		return;
+		goto out;
 	}
 
 	if (time_adjust < -MAX_TICKADJ) {
 		time_adjust += MAX_TICKADJ;
 		tick_length -= MAX_TICKADJ_SCALED;
-		return;
+		goto out;
 	}
 
 	tick_length += (s64)(time_adjust * NSEC_PER_USEC / NTP_INTERVAL_FREQ)
 							 << NTP_SCALE_SHIFT;
 	time_adjust = 0;
+out:
+	write_sequnlock_irqrestore(&ntp_lock, flags);
 }
 
 #ifdef CONFIG_GENERIC_CMOS_UPDATE
@@ -681,7 +706,7 @@ int do_adjtimex(struct timex *txc)
 
 	getnstimeofday(&ts);
 
-	write_seqlock_irq(&xtime_lock);
+	write_seqlock_irq(&ntp_lock);
 
 	if (txc->modes & ADJ_ADJTIME) {
 		long save_adjust = time_adjust;
@@ -723,7 +748,7 @@ int do_adjtimex(struct timex *txc)
 	/* fill PPS status fields */
 	pps_fill_timex(txc);
 
-	write_sequnlock_irq(&xtime_lock);
+	write_sequnlock_irq(&ntp_lock);
 
 	txc->time.tv_sec = ts.tv_sec;
 	txc->time.tv_usec = ts.tv_nsec;
@@ -921,7 +946,7 @@ void hardpps(const struct timespec *phase_ts, const struct timespec *raw_ts)
 
 	pts_norm = pps_normalize_ts(*phase_ts);
 
-	write_seqlock_irqsave(&xtime_lock, flags);
+	write_seqlock_irqsave(&ntp_lock, flags);
 
 	/* clear the error bits, they will be set again if needed */
 	time_status &= ~(STA_PPSJITTER | STA_PPSWANDER | STA_PPSERROR);
@@ -934,7 +959,7 @@ void hardpps(const struct timespec *phase_ts, const struct timespec *raw_ts)
 	 * just start the frequency interval */
 	if (unlikely(pps_fbase.tv_sec == 0)) {
 		pps_fbase = *raw_ts;
-		write_sequnlock_irqrestore(&xtime_lock, flags);
+		write_sequnlock_irqrestore(&ntp_lock, flags);
 		return;
 	}
 
@@ -949,7 +974,7 @@ void hardpps(const struct timespec *phase_ts, const struct timespec *raw_ts)
 		time_status |= STA_PPSJITTER;
 		/* restart the frequency calibration interval */
 		pps_fbase = *raw_ts;
-		write_sequnlock_irqrestore(&xtime_lock, flags);
+		write_sequnlock_irqrestore(&ntp_lock, flags);
 		pr_err("hardpps: PPSJITTER: bad pulse\n");
 		return;
 	}
@@ -966,7 +991,7 @@ void hardpps(const struct timespec *phase_ts, const struct timespec *raw_ts)
 
 	hardpps_update_phase(pts_norm.nsec);
 
-	write_sequnlock_irqrestore(&xtime_lock, flags);
+	write_sequnlock_irqrestore(&ntp_lock, flags);
 }
 EXPORT_SYMBOL(hardpps);
 
-- 
1.7.3.2.146.gca209


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 10/16] time: Remove most of xtime_lock usage in timekeeping.c
  2011-11-15  4:03 [PATCH 00/16] Timekeeping cleanups and locking changes John Stultz
                   ` (8 preceding siblings ...)
  2011-11-15  4:03 ` [PATCH 09/16] ntp: Add ntp_lock to replace xtime_locking John Stultz
@ 2011-11-15  4:04 ` John Stultz
  2011-11-15  4:04 ` [PATCH 11/16] time: Reorder so the hot data is together John Stultz
                   ` (5 subsequent siblings)
  15 siblings, 0 replies; 18+ messages in thread
From: John Stultz @ 2011-11-15  4:04 UTC (permalink / raw)
  To: LKML; +Cc: John Stultz, Thomas Gleixner, Eric Dumazet, Richard Cochran

Now that ntp.c's locking is reworked, we can remove most
of the xtime_lock usage in timekeeping.c

The remaining xtime_lock presence is really for jiffies access
and the global load calculation.
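
(For reference, roughly what is still expected to run under xtime_lock
after this change is the periodic tick bookkeeping -- paraphrased from
the tick code of that era, not part of this diff:

	write_seqlock(&xtime_lock);
	do_timer(1);	/* jiffies_64 accounting and calc_global_load() */
	write_sequnlock(&xtime_lock);

i.e. the timekeeper state itself no longer relies on xtime_lock being
held by the caller.)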

CC: Thomas Gleixner <tglx@linutronix.de>
CC: Eric Dumazet <eric.dumazet@gmail.com>
CC: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
 kernel/time/timekeeping.c |   44 +++++++++++++++-----------------------------
 1 files changed, 15 insertions(+), 29 deletions(-)

diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index f92e636..810b974 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -172,7 +172,6 @@ static inline s64 timekeeping_get_ns_raw(void)
 	return clocksource_cyc2ns(cycle_delta, clock->mult, clock->shift);
 }
 
-/* must hold xtime_lock */
 void timekeeping_leap_insert(int leapsecond)
 {
 	unsigned long flags;
@@ -368,13 +367,12 @@ EXPORT_SYMBOL(do_gettimeofday);
 int do_settimeofday(const struct timespec *tv)
 {
 	struct timespec ts_delta;
-	unsigned long flags1,flags2;
+	unsigned long flags;
 
 	if ((unsigned long)tv->tv_nsec >= NSEC_PER_SEC)
 		return -EINVAL;
 
-	write_seqlock_irqsave(&xtime_lock, flags1);
-	write_seqlock_irqsave(&timekeeper.lock, flags2);
+	write_seqlock_irqsave(&timekeeper.lock, flags);
 
 	timekeeping_forward_now();
 
@@ -391,8 +389,7 @@ int do_settimeofday(const struct timespec *tv)
 	update_vsyscall(&timekeeper.xtime, &timekeeper.wall_to_monotonic,
 			timekeeper.clock, timekeeper.mult);
 
-	write_sequnlock_irqrestore(&timekeeper.lock, flags2);
-	write_sequnlock_irqrestore(&xtime_lock, flags1);
+	write_sequnlock_irqrestore(&timekeeper.lock, flags);
 
 	/* signal hrtimers about time change */
 	clock_was_set();
@@ -411,13 +408,12 @@ EXPORT_SYMBOL(do_settimeofday);
  */
 int timekeeping_inject_offset(struct timespec *ts)
 {
-	unsigned long flags1,flags2;
+	unsigned long flags;
 
 	if ((unsigned long)ts->tv_nsec >= NSEC_PER_SEC)
 		return -EINVAL;
 
-	write_seqlock_irqsave(&xtime_lock, flags1);
-	write_seqlock_irqsave(&timekeeper.lock, flags2);
+	write_seqlock_irqsave(&timekeeper.lock, flags);
 
 	timekeeping_forward_now();
 
@@ -431,8 +427,7 @@ int timekeeping_inject_offset(struct timespec *ts)
 	update_vsyscall(&timekeeper.xtime, &timekeeper.wall_to_monotonic,
 			timekeeper.clock, timekeeper.mult);
 
-	write_sequnlock_irqrestore(&timekeeper.lock, flags2);
-	write_sequnlock_irqrestore(&xtime_lock, flags1);
+	write_sequnlock_irqrestore(&timekeeper.lock, flags);
 
 	/* signal hrtimers about time change */
 	clock_was_set();
@@ -594,9 +589,7 @@ void __init timekeeping_init(void)
 
 	seqlock_init(&timekeeper.lock);
 
-	write_seqlock_irqsave(&xtime_lock, flags);
 	ntp_init();
-	write_sequnlock_irqrestore(&xtime_lock, flags);
 
 	write_seqlock_irqsave(&timekeeper.lock, flags);
 	clock = clocksource_default_clock();
@@ -657,7 +650,7 @@ static void __timekeeping_inject_sleeptime(struct timespec *delta)
  */
 void timekeeping_inject_sleeptime(struct timespec *delta)
 {
-	unsigned long flags1,flags2;
+	unsigned long flags;
 	struct timespec ts;
 
 	/* Make sure we don't set the clock twice */
@@ -665,8 +658,7 @@ void timekeeping_inject_sleeptime(struct timespec *delta)
 	if (!(ts.tv_sec == 0 && ts.tv_nsec == 0))
 		return;
 
-	write_seqlock_irqsave(&xtime_lock, flags1);
-	write_seqlock_irqsave(&timekeeper.lock, flags2);
+	write_seqlock_irqsave(&timekeeper.lock, flags);
 
 	timekeeping_forward_now();
 
@@ -677,8 +669,7 @@ void timekeeping_inject_sleeptime(struct timespec *delta)
 	update_vsyscall(&timekeeper.xtime, &timekeeper.wall_to_monotonic,
 			timekeeper.clock, timekeeper.mult);
 
-	write_sequnlock_irqrestore(&timekeeper.lock, flags2);
-	write_sequnlock_irqrestore(&xtime_lock, flags1);
+	write_sequnlock_irqrestore(&timekeeper.lock, flags);
 
 	/* signal hrtimers about time change */
 	clock_was_set();
@@ -694,15 +685,14 @@ void timekeeping_inject_sleeptime(struct timespec *delta)
  */
 static void timekeeping_resume(void)
 {
-	unsigned long flags1,flags2;
+	unsigned long flags;
 	struct timespec ts;
 
 	read_persistent_clock(&ts);
 
 	clocksource_resume();
 
-	write_seqlock_irqsave(&xtime_lock, flags1);
-	write_seqlock_irqsave(&timekeeper.lock, flags2);
+	write_seqlock_irqsave(&timekeeper.lock, flags);
 
 	if (timespec_compare(&ts, &timekeeping_suspend_time) > 0) {
 		ts = timespec_sub(ts, timekeeping_suspend_time);
@@ -712,8 +702,7 @@ static void timekeeping_resume(void)
 	timekeeper.clock->cycle_last = timekeeper.clock->read(timekeeper.clock);
 	timekeeper.ntp_error = 0;
 	timekeeping_suspended = 0;
-	write_sequnlock_irqrestore(&timekeeper.lock, flags2);
-	write_sequnlock_irqrestore(&xtime_lock, flags1);
+	write_sequnlock_irqrestore(&timekeeper.lock, flags);
 
 	touch_softlockup_watchdog();
 
@@ -725,14 +714,13 @@ static void timekeeping_resume(void)
 
 static int timekeeping_suspend(void)
 {
-	unsigned long flags1,flags2;
+	unsigned long flags;
 	struct timespec		delta, delta_delta;
 	static struct timespec	old_delta;
 
 	read_persistent_clock(&timekeeping_suspend_time);
 
-	write_seqlock_irqsave(&xtime_lock, flags1);
-	write_seqlock_irqsave(&timekeeper.lock, flags2);
+	write_seqlock_irqsave(&timekeeper.lock, flags);
 	timekeeping_forward_now();
 	timekeeping_suspended = 1;
 
@@ -755,8 +743,7 @@ static int timekeeping_suspend(void)
 		timekeeping_suspend_time =
 			timespec_add(timekeeping_suspend_time, delta_delta);
 	}
-	write_sequnlock_irqrestore(&timekeeper.lock, flags2);
-	write_sequnlock_irqrestore(&xtime_lock, flags1);
+	write_sequnlock_irqrestore(&timekeeper.lock, flags);
 
 	clockevents_notify(CLOCK_EVT_NOTIFY_SUSPEND, NULL);
 	clocksource_suspend();
@@ -916,7 +903,6 @@ static cycle_t logarithmic_accumulation(cycle_t offset, int shift)
 /**
  * update_wall_time - Uses the current clocksource to increment the wall time
  *
- * Called from the timer interrupt, must hold a write on xtime_lock.
  */
 static void update_wall_time(void)
 {
-- 
1.7.3.2.146.gca209


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 11/16] time: Reorder so the hot data is together
  2011-11-15  4:03 [PATCH 00/16] Timekeeping cleanups and locking changes John Stultz
                   ` (9 preceding siblings ...)
  2011-11-15  4:04 ` [PATCH 10/16] time: Remove most of xtime_lock usage in timekeeping.c John Stultz
@ 2011-11-15  4:04 ` John Stultz
  2011-11-15  4:04 ` [PATCH 12/16] time: Move common updates to a function John Stultz
                   ` (4 subsequent siblings)
  15 siblings, 0 replies; 18+ messages in thread
From: John Stultz @ 2011-11-15  4:04 UTC (permalink / raw)
  To: LKML; +Cc: Thomas Gleixner, Eric Dumazet, Richard Cochran, John Stultz

From: Thomas Gleixner <tglx@linutronix.de>

Keep all the interesting data in a single cache line.
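
(For context, the read hot path touches these fields back to back;
this is roughly what timekeeping_get_ns() looks like at this point in
the series:

	clock = timekeeper.clock;
	cycle_now = clock->read(clock);
	cycle_delta = (cycle_now - clock->cycle_last) & clock->mask;
	return clocksource_cyc2ns(cycle_delta, timekeeper.mult,
				  timekeeper.shift);

so moving mult up next to clock and shift keeps the whole read path in
one cache line.)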

CC: Thomas Gleixner <tglx@linutronix.de>
CC: Eric Dumazet <eric.dumazet@gmail.com>
CC: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
 kernel/time/timekeeping.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index 810b974..b632678 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -25,6 +25,8 @@
 struct timekeeper {
 	/* Current clocksource used for timekeeping. */
 	struct clocksource *clock;
+	/* NTP adjusted clock multiplier */
+	u32	mult;
 	/* The shift value of the current clocksource. */
 	int	shift;
 
@@ -45,8 +47,6 @@ struct timekeeper {
 	/* Shift conversion between clock shifted nano seconds and
 	 * ntp shifted nano seconds. */
 	int	ntp_error_shift;
-	/* NTP adjusted clock multiplier */
-	u32	mult;
 
 	/* The current time */
 	struct timespec xtime;
-- 
1.7.3.2.146.gca209


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 12/16] time: Move common updates to a function
  2011-11-15  4:03 [PATCH 00/16] Timekeeping cleanups and locking changes John Stultz
                   ` (10 preceding siblings ...)
  2011-11-15  4:04 ` [PATCH 11/16] time: Reorder so the hot data is together John Stultz
@ 2011-11-15  4:04 ` John Stultz
  2011-11-15  4:04 ` [PATCH 13/16] time: Condense timekeeper.xtime into xtime_sec John Stultz
                   ` (3 subsequent siblings)
  15 siblings, 0 replies; 18+ messages in thread
From: John Stultz @ 2011-11-15  4:04 UTC (permalink / raw)
  To: LKML; +Cc: Thomas Gleixner, Eric Dumazet, Richard Cochran, John Stultz

From: Thomas Gleixner <tglx@linutronix.de>

CC: Thomas Gleixner <tglx@linutronix.de>
CC: Eric Dumazet <eric.dumazet@gmail.com>
CC: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
 kernel/time/timekeeping.c |   39 +++++++++++++++++----------------------
 1 files changed, 17 insertions(+), 22 deletions(-)

diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index b632678..9416be0 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -172,17 +172,26 @@ static inline s64 timekeeping_get_ns_raw(void)
 	return clocksource_cyc2ns(cycle_delta, clock->mult, clock->shift);
 }
 
+/* must hold write on timekeeper.lock */
+static void timekeeping_update(bool clearntp)
+{
+	if (clearntp) {
+		timekeeper.ntp_error = 0;
+		ntp_clear();
+	}
+	update_vsyscall(&timekeeper.xtime, &timekeeper.wall_to_monotonic,
+			 timekeeper.clock, timekeeper.mult);
+}
+
+
 void timekeeping_leap_insert(int leapsecond)
 {
 	unsigned long flags;
 
 	write_seqlock_irqsave(&timekeeper.lock, flags);
-
 	timekeeper.xtime.tv_sec += leapsecond;
 	timekeeper.wall_to_monotonic.tv_sec -= leapsecond;
-	update_vsyscall(&timekeeper.xtime, &timekeeper.wall_to_monotonic,
-			 timekeeper.clock, timekeeper.mult);
-
+	timekeeping_update(false);
 	write_sequnlock_irqrestore(&timekeeper.lock, flags);
 
 }
@@ -382,12 +391,7 @@ int do_settimeofday(const struct timespec *tv)
 			timespec_sub(timekeeper.wall_to_monotonic, ts_delta);
 
 	timekeeper.xtime = *tv;
-
-	timekeeper.ntp_error = 0;
-	ntp_clear();
-
-	update_vsyscall(&timekeeper.xtime, &timekeeper.wall_to_monotonic,
-			timekeeper.clock, timekeeper.mult);
+	timekeeping_update(true);
 
 	write_sequnlock_irqrestore(&timekeeper.lock, flags);
 
@@ -421,11 +425,7 @@ int timekeeping_inject_offset(struct timespec *ts)
 	timekeeper.wall_to_monotonic =
 				timespec_sub(timekeeper.wall_to_monotonic, *ts);
 
-	timekeeper.ntp_error = 0;
-	ntp_clear();
-
-	update_vsyscall(&timekeeper.xtime, &timekeeper.wall_to_monotonic,
-			timekeeper.clock, timekeeper.mult);
+	timekeeping_update(true);
 
 	write_sequnlock_irqrestore(&timekeeper.lock, flags);
 
@@ -664,10 +664,7 @@ void timekeeping_inject_sleeptime(struct timespec *delta)
 
 	__timekeeping_inject_sleeptime(delta);
 
-	timekeeper.ntp_error = 0;
-	ntp_clear();
-	update_vsyscall(&timekeeper.xtime, &timekeeper.wall_to_monotonic,
-			timekeeper.clock, timekeeper.mult);
+	timekeeping_update(true);
 
 	write_sequnlock_irqrestore(&timekeeper.lock, flags);
 
@@ -993,9 +990,7 @@ static void update_wall_time(void)
 		second_overflow();
 	}
 
-	/* check to see if there is a new clocksource to use */
-	update_vsyscall(&timekeeper.xtime, &timekeeper.wall_to_monotonic,
-			timekeeper.clock, timekeeper.mult);
+	timekeeping_update(false);
 
 out:
 	write_sequnlock_irqrestore(&timekeeper.lock, flags);
-- 
1.7.3.2.146.gca209


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 13/16] time: Condense timekeeper.xtime into xtime_sec
  2011-11-15  4:03 [PATCH 00/16] Timekeeping cleanups and locking changes John Stultz
                   ` (11 preceding siblings ...)
  2011-11-15  4:04 ` [PATCH 12/16] time: Move common updates to a function John Stultz
@ 2011-11-15  4:04 ` John Stultz
  2011-11-15  4:04 ` [PATCH 14/16] time: Rework timekeeping functions to take timekeeper ptr as argument John Stultz
                   ` (2 subsequent siblings)
  15 siblings, 0 replies; 18+ messages in thread
From: John Stultz @ 2011-11-15  4:04 UTC (permalink / raw)
  To: LKML; +Cc: John Stultz, Thomas Gleixner, Eric Dumazet, Richard Cochran

The timekeeper struct has an xtime_nsec field, which keeps the
sub-nanosecond remainder.  This ends up being somewhat duplicative
of the timekeeper.xtime.tv_nsec value, and we have to do extra work
to keep them apart, copying the full nsec portion out and back in
over and over.

This patch simplifies some of the logic by taking the timekeeper
xtime value and splitting it into timekeeper.xtime_sec, reusing
timekeeper.xtime_nsec for the sub-second portion (stored as shifted
nanoseconds for higher resolution).

This simplifies some of the accumulation logic, and will allow for
more accurate timekeeping once the vsyscall code is updated to use
the shifted nanosecond remainder.
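
(A stand-alone toy of the new bookkeeping -- plain userspace C, not
kernel code, with shift = 10 picked arbitrarily for illustration.
tv_nsec is stored shifted up, accumulation happens in the shifted
domain, and carrying whole seconds mirrors timekeeper_normalize_xtime()
in the diff below:

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		const int shift = 10;
		const uint64_t nsecps = 1000000000ULL << shift;
		uint64_t xtime_sec = 0;
		/* 999999000ns, stored as shifted nanoseconds */
		uint64_t xtime_nsec = 999999000ULL << shift;

		xtime_nsec += 2000ULL << shift;	/* accumulate 2000ns */

		while (xtime_nsec >= nsecps) {	/* normalize */
			xtime_nsec -= nsecps;
			xtime_sec++;
		}
		printf("sec=%llu nsec=%llu\n",
		       (unsigned long long)xtime_sec,
		       (unsigned long long)(xtime_nsec >> shift));
		return 0;
	}

which prints "sec=1 nsec=1000".)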

CC: Thomas Gleixner <tglx@linutronix.de>
CC: Eric Dumazet <eric.dumazet@gmail.com>
CC: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
 kernel/time/timekeeping.c |  161 +++++++++++++++++++++++++++-----------------
 1 files changed, 99 insertions(+), 62 deletions(-)

diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index 9416be0..6ca50b5 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -39,8 +39,11 @@ struct timekeeper {
 	/* Raw nano seconds accumulated per NTP interval. */
 	u32	raw_interval;
 
-	/* Clock shifted nano seconds remainder not stored in xtime.tv_nsec. */
+	/* Current CLOCK_REALTIME time in seconds */
+	u64	xtime_sec;
+	/* Clock shifted nano seconds */
 	u64	xtime_nsec;
+
 	/* Difference between accumulated time and NTP time in ntp
 	 * shifted nano seconds. */
 	s64	ntp_error;
@@ -48,8 +51,6 @@ struct timekeeper {
 	 * ntp shifted nano seconds. */
 	int	ntp_error_shift;
 
-	/* The current time */
-	struct timespec xtime;
 	/*
 	 * wall_to_monotonic is what we need to add to xtime (or xtime corrected
 	 * for sub jiffie times) to get to monotonic time.  Monotonic is pegged
@@ -87,6 +88,38 @@ __cacheline_aligned_in_smp DEFINE_SEQLOCK(xtime_lock);
 int __read_mostly timekeeping_suspended;
 
 
+static inline void timekeeper_normalize_xtime(struct timekeeper *tk)
+{
+	while (tk->xtime_nsec >= (NSEC_PER_SEC << tk->shift)) {
+		tk->xtime_nsec -= NSEC_PER_SEC << tk->shift;
+		tk->xtime_sec++;
+	}
+}
+
+static struct timespec timekeeper_xtime(struct timekeeper *tk)
+{
+	struct timespec ts;
+
+	ts.tv_sec = tk->xtime_sec;
+	ts.tv_nsec = (long)(tk->xtime_nsec >> tk->shift);
+	return ts;
+}
+
+static void timekeeper_set_xtime(struct timekeeper *tk,
+					const struct timespec *ts)
+{
+	tk->xtime_sec = ts->tv_sec;
+	tk->xtime_nsec = ts->tv_nsec << tk->shift;
+}
+
+
+static void timekeeper_xtime_add(struct timekeeper *tk,
+					const struct timespec *ts)
+{
+	tk->xtime_sec += ts->tv_sec;
+	tk->xtime_nsec += ts->tv_nsec << tk->shift;
+}
+
 
 /**
  * timekeeper_setup_internals - Set up internals to use clocksource clock.
@@ -143,6 +176,7 @@ static inline s64 timekeeping_get_ns(void)
 {
 	cycle_t cycle_now, cycle_delta;
 	struct clocksource *clock;
+	s64 nsec;
 
 	/* read clocksource: */
 	clock = timekeeper.clock;
@@ -151,9 +185,8 @@ static inline s64 timekeeping_get_ns(void)
 	/* calculate the delta since the last update_wall_time: */
 	cycle_delta = (cycle_now - clock->cycle_last) & clock->mask;
 
-	/* return delta convert to nanoseconds using ntp adjusted mult. */
-	return clocksource_cyc2ns(cycle_delta, timekeeper.mult,
-				  timekeeper.shift);
+	nsec = cycle_delta * timekeeper.mult + timekeeper.xtime_nsec;
+	return nsec >> timekeeper.shift;
 }
 
 static inline s64 timekeeping_get_ns_raw(void)
@@ -175,11 +208,13 @@ static inline s64 timekeeping_get_ns_raw(void)
 /* must hold write on timekeeper.lock */
 static void timekeeping_update(bool clearntp)
 {
+	struct timespec xt;
 	if (clearntp) {
 		timekeeper.ntp_error = 0;
 		ntp_clear();
 	}
-	update_vsyscall(&timekeeper.xtime, &timekeeper.wall_to_monotonic,
+	xt = timekeeper_xtime(&timekeeper);
+	update_vsyscall(&xt, &timekeeper.wall_to_monotonic,
 			 timekeeper.clock, timekeeper.mult);
 }
 
@@ -189,7 +224,7 @@ void timekeeping_leap_insert(int leapsecond)
 	unsigned long flags;
 
 	write_seqlock_irqsave(&timekeeper.lock, flags);
-	timekeeper.xtime.tv_sec += leapsecond;
+	timekeeper.xtime_sec += leapsecond;
 	timekeeper.wall_to_monotonic.tv_sec -= leapsecond;
 	timekeeping_update(false);
 	write_sequnlock_irqrestore(&timekeeper.lock, flags);
@@ -214,13 +249,12 @@ static void timekeeping_forward_now(void)
 	cycle_delta = (cycle_now - clock->cycle_last) & clock->mask;
 	clock->cycle_last = cycle_now;
 
-	nsec = clocksource_cyc2ns(cycle_delta, timekeeper.mult,
-				  timekeeper.shift);
+	timekeeper.xtime_nsec += cycle_delta * timekeeper.mult;
 
 	/* If arch requires, add in gettimeoffset() */
-	nsec += arch_gettimeoffset();
+	timekeeper.xtime_nsec += arch_gettimeoffset() << timekeeper.shift;
 
-	timespec_add_ns(&timekeeper.xtime, nsec);
+	timekeeper_normalize_xtime(&timekeeper);
 
 	nsec = clocksource_cyc2ns(cycle_delta, clock->mult, clock->shift);
 	timespec_add_ns(&timekeeper.raw_time, nsec);
@@ -235,15 +269,15 @@ static void timekeeping_forward_now(void)
 void getnstimeofday(struct timespec *ts)
 {
 	unsigned long seq;
-	s64 nsecs;
+	s64 nsecs = 0;
 
 	WARN_ON(timekeeping_suspended);
 
 	do {
 		seq = read_seqbegin(&timekeeper.lock);
 
-		*ts = timekeeper.xtime;
-		nsecs = timekeeping_get_ns();
+		ts->tv_sec = timekeeper.xtime_sec;
+		ts->tv_nsec = timekeeping_get_ns();
 
 		/* If arch requires, add in gettimeoffset() */
 		nsecs += arch_gettimeoffset();
@@ -264,12 +298,10 @@ ktime_t ktime_get(void)
 
 	do {
 		seq = read_seqbegin(&timekeeper.lock);
-		secs = timekeeper.xtime.tv_sec +
+		secs = timekeeper.xtime_sec +
 				timekeeper.wall_to_monotonic.tv_sec;
-		nsecs = timekeeper.xtime.tv_nsec +
+		nsecs = timekeeping_get_ns() +
 				timekeeper.wall_to_monotonic.tv_nsec;
-		nsecs += timekeeping_get_ns();
-
 	} while (read_seqretry(&timekeeper.lock, seq));
 	/*
 	 * Use ktime_set/ktime_add_ns to create a proper ktime on
@@ -291,20 +323,19 @@ void ktime_get_ts(struct timespec *ts)
 {
 	struct timespec tomono;
 	unsigned int seq;
-	s64 nsecs;
 
 	WARN_ON(timekeeping_suspended);
 
 	do {
 		seq = read_seqbegin(&timekeeper.lock);
-		*ts = timekeeper.xtime;
+		ts->tv_sec = timekeeper.xtime_sec;
+		ts->tv_nsec = timekeeping_get_ns();
 		tomono = timekeeper.wall_to_monotonic;
-		nsecs = timekeeping_get_ns();
 
 	} while (read_seqretry(&timekeeper.lock, seq));
 
 	set_normalized_timespec(ts, ts->tv_sec + tomono.tv_sec,
-				ts->tv_nsec + tomono.tv_nsec + nsecs);
+				ts->tv_nsec + tomono.tv_nsec);
 }
 EXPORT_SYMBOL_GPL(ktime_get_ts);
 
@@ -332,7 +363,8 @@ void getnstime_raw_and_real(struct timespec *ts_raw, struct timespec *ts_real)
 		seq = read_seqbegin(&timekeeper.lock);
 
 		*ts_raw = timekeeper.raw_time;
-		*ts_real = timekeeper.xtime;
+		ts_real->tv_sec = timekeeper.xtime_sec;
+		ts_real->tv_nsec = 0;
 
 		nsecs_raw = timekeeping_get_ns_raw();
 		nsecs_real = timekeeping_get_ns();
@@ -375,7 +407,7 @@ EXPORT_SYMBOL(do_gettimeofday);
  */
 int do_settimeofday(const struct timespec *tv)
 {
-	struct timespec ts_delta;
+	struct timespec ts_delta, xt;
 	unsigned long flags;
 
 	if ((unsigned long)tv->tv_nsec >= NSEC_PER_SEC)
@@ -385,12 +417,15 @@ int do_settimeofday(const struct timespec *tv)
 
 	timekeeping_forward_now();
 
-	ts_delta.tv_sec = tv->tv_sec - timekeeper.xtime.tv_sec;
-	ts_delta.tv_nsec = tv->tv_nsec - timekeeper.xtime.tv_nsec;
+	xt = timekeeper_xtime(&timekeeper);
+	ts_delta.tv_sec = tv->tv_sec - xt.tv_sec;
+	ts_delta.tv_nsec = tv->tv_nsec - xt.tv_nsec;
+
 	timekeeper.wall_to_monotonic =
 			timespec_sub(timekeeper.wall_to_monotonic, ts_delta);
 
-	timekeeper.xtime = *tv;
+	timekeeper_set_xtime(&timekeeper, tv);
+
 	timekeeping_update(true);
 
 	write_sequnlock_irqrestore(&timekeeper.lock, flags);
@@ -421,7 +456,8 @@ int timekeeping_inject_offset(struct timespec *ts)
 
 	timekeeping_forward_now();
 
-	timekeeper.xtime = timespec_add(timekeeper.xtime, *ts);
+
+	timekeeper_xtime_add(&timekeeper, ts);
 	timekeeper.wall_to_monotonic =
 				timespec_sub(timekeeper.wall_to_monotonic, *ts);
 
@@ -597,14 +633,12 @@ void __init timekeeping_init(void)
 		clock->enable(clock);
 	timekeeper_setup_internals(clock);
 
-	timekeeper.xtime.tv_sec = now.tv_sec;
-	timekeeper.xtime.tv_nsec = now.tv_nsec;
+	timekeeper_set_xtime(&timekeeper, &now);
 	timekeeper.raw_time.tv_sec = 0;
 	timekeeper.raw_time.tv_nsec = 0;
-	if (boot.tv_sec == 0 && boot.tv_nsec == 0) {
-		boot.tv_sec = timekeeper.xtime.tv_sec;
-		boot.tv_nsec = timekeeper.xtime.tv_nsec;
-	}
+	if (boot.tv_sec == 0 && boot.tv_nsec == 0)
+		boot = timekeeper_xtime(&timekeeper);
+
 	set_normalized_timespec(&timekeeper.wall_to_monotonic,
 				-boot.tv_sec, -boot.tv_nsec);
 	timekeeper.total_sleep_time.tv_sec = 0;
@@ -630,7 +664,7 @@ static void __timekeeping_inject_sleeptime(struct timespec *delta)
 		return;
 	}
 
-	timekeeper.xtime = timespec_add(timekeeper.xtime, *delta);
+	timekeeper_xtime_add(&timekeeper, delta);
 	timekeeper.wall_to_monotonic =
 			timespec_sub(timekeeper.wall_to_monotonic, *delta);
 	timekeeper.total_sleep_time = timespec_add(
@@ -727,7 +761,8 @@ static int timekeeping_suspend(void)
 	 * try to compensate so the difference in system time
 	 * and persistent_clock time stays close to constant.
 	 */
-	delta = timespec_sub(timekeeper.xtime, timekeeping_suspend_time);
+	delta = timespec_sub(timekeeper_xtime(&timekeeper),
+				timekeeping_suspend_time);
 	delta_delta = timespec_sub(delta, old_delta);
 	if (abs(delta_delta.tv_sec)  >= 2) {
 		/*
@@ -873,7 +908,7 @@ static cycle_t logarithmic_accumulation(cycle_t offset, int shift)
 	timekeeper.xtime_nsec += timekeeper.xtime_interval << shift;
 	while (timekeeper.xtime_nsec >= nsecps) {
 		timekeeper.xtime_nsec -= nsecps;
-		timekeeper.xtime.tv_sec++;
+		timekeeper.xtime_sec++;
 		second_overflow();
 	}
 
@@ -907,6 +942,7 @@ static void update_wall_time(void)
 	cycle_t offset;
 	int shift = 0, maxshift;
 	unsigned long flags;
+	s64 remainder;
 
 	write_seqlock_irqsave(&timekeeper.lock, flags);
 
@@ -921,8 +957,6 @@ static void update_wall_time(void)
 #else
 	offset = (clock->read(clock) - clock->cycle_last) & clock->mask;
 #endif
-	timekeeper.xtime_nsec = (s64)timekeeper.xtime.tv_nsec <<
-						timekeeper.shift;
 
 	/*
 	 * With NO_HZ we may have to accumulate many cycle_intervals
@@ -968,25 +1002,29 @@ static void update_wall_time(void)
 		timekeeper.ntp_error += neg << timekeeper.ntp_error_shift;
 	}
 
-
 	/*
-	 * Store full nanoseconds into xtime after rounding it up and
-	 * add the remainder to the error difference.
-	 */
-	timekeeper.xtime.tv_nsec = ((s64)timekeeper.xtime_nsec >>
-						timekeeper.shift) + 1;
-	timekeeper.xtime_nsec -= (s64)timekeeper.xtime.tv_nsec <<
-						timekeeper.shift;
-	timekeeper.ntp_error +=	timekeeper.xtime_nsec <<
-				timekeeper.ntp_error_shift;
+	* Store only full nanoseconds into xtime_nsec after rounding
+	* it up and add the remainder to the error difference.
+	* XXX - This is necessary to avoid small 1ns inconsistencies caused
+	* by truncating the remainder in vsyscalls. However, it causes
+	* additional work to be done in timekeeping_adjust(). Once
+	* the vsyscall implementations are converted to use xtime_nsec
+	* (shifted nanoseconds), this can be killed.
+	*/
+	remainder = timekeeper.xtime_nsec & ((1<<timekeeper.shift)-1);
+	timekeeper.xtime_nsec -= remainder;
+	timekeeper.xtime_nsec += 1<<timekeeper.shift;
+	timekeeper.ntp_error += remainder <<
+					timekeeper.ntp_error_shift;
 
 	/*
 	 * Finally, make sure that after the rounding
 	 * xtime.tv_nsec isn't larger then NSEC_PER_SEC
 	 */
-	if (unlikely(timekeeper.xtime.tv_nsec >= NSEC_PER_SEC)) {
-		timekeeper.xtime.tv_nsec -= NSEC_PER_SEC;
-		timekeeper.xtime.tv_sec++;
+	if (unlikely(timekeeper.xtime_nsec >=
+			(NSEC_PER_SEC << timekeeper.shift))) {
+		timekeeper.xtime_nsec -= NSEC_PER_SEC << timekeeper.shift;
+		timekeeper.xtime_sec++;
 		second_overflow();
 	}
 
@@ -1035,21 +1073,20 @@ void get_monotonic_boottime(struct timespec *ts)
 {
 	struct timespec tomono, sleep;
 	unsigned int seq;
-	s64 nsecs;
 
 	WARN_ON(timekeeping_suspended);
 
 	do {
 		seq = read_seqbegin(&timekeeper.lock);
-		*ts = timekeeper.xtime;
+		ts->tv_sec = timekeeper.xtime_sec;
+		ts->tv_nsec = timekeeping_get_ns();
 		tomono = timekeeper.wall_to_monotonic;
 		sleep = timekeeper.total_sleep_time;
-		nsecs = timekeeping_get_ns();
 
 	} while (read_seqretry(&timekeeper.lock, seq));
 
 	set_normalized_timespec(ts, ts->tv_sec + tomono.tv_sec + sleep.tv_sec,
-			ts->tv_nsec + tomono.tv_nsec + sleep.tv_nsec + nsecs);
+			ts->tv_nsec + tomono.tv_nsec + sleep.tv_nsec);
 }
 EXPORT_SYMBOL_GPL(get_monotonic_boottime);
 
@@ -1082,13 +1119,13 @@ EXPORT_SYMBOL_GPL(monotonic_to_bootbased);
 
 unsigned long get_seconds(void)
 {
-	return timekeeper.xtime.tv_sec;
+	return timekeeper.xtime_sec;
 }
 EXPORT_SYMBOL(get_seconds);
 
 struct timespec __current_kernel_time(void)
 {
-	return timekeeper.xtime;
+	return timekeeper_xtime(&timekeeper);
 }
 
 struct timespec current_kernel_time(void)
@@ -1099,7 +1136,7 @@ struct timespec current_kernel_time(void)
 	do {
 		seq = read_seqbegin(&timekeeper.lock);
 
-		now = timekeeper.xtime;
+		now = timekeeper_xtime(&timekeeper);
 	} while (read_seqretry(&timekeeper.lock, seq));
 
 	return now;
@@ -1114,7 +1151,7 @@ struct timespec get_monotonic_coarse(void)
 	do {
 		seq = read_seqbegin(&timekeeper.lock);
 
-		now = timekeeper.xtime;
+		now = timekeeper_xtime(&timekeeper);
 		mono = timekeeper.wall_to_monotonic;
 	} while (read_seqretry(&timekeeper.lock, seq));
 
@@ -1149,7 +1186,7 @@ void get_xtime_and_monotonic_and_sleep_offset(struct timespec *xtim,
 
 	do {
 		seq = read_seqbegin(&timekeeper.lock);
-		*xtim = timekeeper.xtime;
+		*xtim = timekeeper_xtime(&timekeeper);
 		*wtom = timekeeper.wall_to_monotonic;
 		*sleep = timekeeper.total_sleep_time;
 	} while (read_seqretry(&timekeeper.lock, seq));
-- 
1.7.3.2.146.gca209


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 14/16] time: Rework timekeeping functions to take timekeeper ptr as argument
  2011-11-15  4:03 [PATCH 00/16] Timekeeping cleanups and locking changes John Stultz
                   ` (12 preceding siblings ...)
  2011-11-15  4:04 ` [PATCH 13/16] time: Condense timekeeper.xtime into xtime_sec John Stultz
@ 2011-11-15  4:04 ` John Stultz
  2011-11-15  4:04 ` [PATCH 15/16] time: Update timekeeper structure using a local shadow John Stultz
  2011-11-15  4:04 ` [PATCH 16/16] time: Rework update_vsyscall to pass timekeeper John Stultz
  15 siblings, 0 replies; 18+ messages in thread
From: John Stultz @ 2011-11-15  4:04 UTC (permalink / raw)
  To: LKML; +Cc: John Stultz, Thomas Gleixner, Eric Dumazet, Richard Cochran

As part of cleaning up the timekeeping code, this patch converts
a number of internal functions to take a timekeeper ptr as an
argument, so that the internal functions don't access the global
timekeeper structure directly. This allows for further optimizations
later.
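
(The payoff shows up in the following patch: once the helpers take a
pointer, they work equally well on the global instance or on a local
copy, e.g. roughly:

	struct timekeeper tk = timekeeper;	/* local shadow copy */

	offset = logarithmic_accumulation(&tk, offset, shift);
	timekeeping_adjust(&tk, offset);

without touching the global timekeeper until the result is written
back.)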

CC: Thomas Gleixner <tglx@linutronix.de>
CC: Eric Dumazet <eric.dumazet@gmail.com>
CC: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
 kernel/time/timekeeping.c |   87 +++++++++++++++++++++++----------------------
 1 files changed, 44 insertions(+), 43 deletions(-)

diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index 6ca50b5..7870a0e 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -206,16 +206,15 @@ static inline s64 timekeeping_get_ns_raw(void)
 }
 
 /* must hold write on timekeeper.lock */
-static void timekeeping_update(bool clearntp)
+static void timekeeping_update(struct timekeeper *tk, bool clearntp)
 {
 	struct timespec xt;
 	if (clearntp) {
-		timekeeper.ntp_error = 0;
+		tk->ntp_error = 0;
 		ntp_clear();
 	}
-	xt = timekeeper_xtime(&timekeeper);
-	update_vsyscall(&xt, &timekeeper.wall_to_monotonic,
-			 timekeeper.clock, timekeeper.mult);
+	xt = timekeeper_xtime(tk);
+	update_vsyscall(&xt, &tk->wall_to_monotonic, tk->clock, tk->mult);
 }
 
 
@@ -226,7 +225,7 @@ void timekeeping_leap_insert(int leapsecond)
 	write_seqlock_irqsave(&timekeeper.lock, flags);
 	timekeeper.xtime_sec += leapsecond;
 	timekeeper.wall_to_monotonic.tv_sec -= leapsecond;
-	timekeeping_update(false);
+	timekeeping_update(&timekeeper, false);
 	write_sequnlock_irqrestore(&timekeeper.lock, flags);
 
 }
@@ -426,7 +425,7 @@ int do_settimeofday(const struct timespec *tv)
 
 	timekeeper_set_xtime(&timekeeper, tv);
 
-	timekeeping_update(true);
+	timekeeping_update(&timekeeper, true);
 
 	write_sequnlock_irqrestore(&timekeeper.lock, flags);
 
@@ -461,7 +460,7 @@ int timekeeping_inject_offset(struct timespec *ts)
 	timekeeper.wall_to_monotonic =
 				timespec_sub(timekeeper.wall_to_monotonic, *ts);
 
-	timekeeping_update(true);
+	timekeeping_update(&timekeeper, true);
 
 	write_sequnlock_irqrestore(&timekeeper.lock, flags);
 
@@ -698,7 +697,7 @@ void timekeeping_inject_sleeptime(struct timespec *delta)
 
 	__timekeeping_inject_sleeptime(delta);
 
-	timekeeping_update(true);
+	timekeeping_update(&timekeeper, true);
 
 	write_sequnlock_irqrestore(&timekeeper.lock, flags);
 
@@ -801,7 +800,8 @@ device_initcall(timekeeping_init_ops);
  * If the error is already larger, we look ahead even further
  * to compensate for late or lost adjustments.
  */
-static __always_inline int timekeeping_bigadjust(s64 error, s64 *interval,
+static __always_inline int timekeeping_bigadjust(struct timekeeper *tk,
+						 s64 error, s64 *interval,
 						 s64 *offset)
 {
 	s64 tick_error, i;
@@ -817,7 +817,7 @@ static __always_inline int timekeeping_bigadjust(s64 error, s64 *interval,
 	 * here.  This is tuned so that an error of about 1 msec is adjusted
 	 * within about 1 sec (or 2^20 nsec in 2^SHIFT_HZ ticks).
 	 */
-	error2 = timekeeper.ntp_error >> (NTP_SCALE_SHIFT + 22 - 2 * SHIFT_HZ);
+	error2 = tk->ntp_error >> (NTP_SCALE_SHIFT + 22 - 2 * SHIFT_HZ);
 	error2 = abs(error2);
 	for (look_ahead = 0; error2 > 0; look_ahead++)
 		error2 >>= 2;
@@ -826,8 +826,8 @@ static __always_inline int timekeeping_bigadjust(s64 error, s64 *interval,
 	 * Now calculate the error in (1 << look_ahead) ticks, but first
 	 * remove the single look ahead already included in the error.
 	 */
-	tick_error = ntp_tick_length() >> (timekeeper.ntp_error_shift + 1);
-	tick_error -= timekeeper.xtime_interval >> 1;
+	tick_error = ntp_tick_length() >> (tk->ntp_error_shift + 1);
+	tick_error -= tk->xtime_interval >> 1;
 	error = ((error - tick_error) >> look_ahead) + tick_error;
 
 	/* Finally calculate the adjustment shift value.  */
@@ -852,18 +852,19 @@ static __always_inline int timekeeping_bigadjust(s64 error, s64 *interval,
  * this is optimized for the most common adjustments of -1,0,1,
  * for other values we can do a bit more work.
  */
-static void timekeeping_adjust(s64 offset)
+static void timekeeping_adjust(struct timekeeper *tk, s64 offset)
 {
-	s64 error, interval = timekeeper.cycle_interval;
+	s64 error, interval = tk->cycle_interval;
 	int adj;
 
-	error = timekeeper.ntp_error >> (timekeeper.ntp_error_shift - 1);
+	error = tk->ntp_error >> (tk->ntp_error_shift - 1);
 	if (error > interval) {
 		error >>= 2;
 		if (likely(error <= interval))
 			adj = 1;
 		else
-			adj = timekeeping_bigadjust(error, &interval, &offset);
+			adj = timekeeping_bigadjust(tk, error, &interval,
+							&offset);
 	} else if (error < -interval) {
 		error >>= 2;
 		if (likely(error >= -interval)) {
@@ -871,15 +872,15 @@ static void timekeeping_adjust(s64 offset)
 			interval = -interval;
 			offset = -offset;
 		} else
-			adj = timekeeping_bigadjust(error, &interval, &offset);
+			adj = timekeeping_bigadjust(tk, error, &interval,
+							&offset);
 	} else
 		return;
 
-	timekeeper.mult += adj;
-	timekeeper.xtime_interval += interval;
-	timekeeper.xtime_nsec -= offset;
-	timekeeper.ntp_error -= (interval - offset) <<
-				timekeeper.ntp_error_shift;
+	tk->mult += adj;
+	tk->xtime_interval += interval;
+	tk->xtime_nsec -= offset;
+	tk->ntp_error -= (interval - offset) << tk->ntp_error_shift;
 }
 
 
@@ -892,41 +893,41 @@ static void timekeeping_adjust(s64 offset)
  *
  * Returns the unconsumed cycles.
  */
-static cycle_t logarithmic_accumulation(cycle_t offset, int shift)
+static cycle_t logarithmic_accumulation(struct timekeeper *tk, cycle_t offset,
+						int shift)
 {
-	u64 nsecps = (u64)NSEC_PER_SEC << timekeeper.shift;
+	u64 nsecps = (u64)NSEC_PER_SEC << tk->shift;
 	u64 raw_nsecs;
 
 	/* If the offset is smaller then a shifted interval, do nothing */
-	if (offset < timekeeper.cycle_interval<<shift)
+	if (offset < tk->cycle_interval<<shift)
 		return offset;
 
 	/* Accumulate one shifted interval */
-	offset -= timekeeper.cycle_interval << shift;
-	timekeeper.clock->cycle_last += timekeeper.cycle_interval << shift;
+	offset -= tk->cycle_interval << shift;
+	tk->clock->cycle_last += tk->cycle_interval << shift;
 
-	timekeeper.xtime_nsec += timekeeper.xtime_interval << shift;
-	while (timekeeper.xtime_nsec >= nsecps) {
-		timekeeper.xtime_nsec -= nsecps;
-		timekeeper.xtime_sec++;
+	tk->xtime_nsec += tk->xtime_interval << shift;
+	while (tk->xtime_nsec >= nsecps) {
+		tk->xtime_nsec -= nsecps;
+		tk->xtime_sec++;
 		second_overflow();
 	}
 
 	/* Accumulate raw time */
-	raw_nsecs = timekeeper.raw_interval << shift;
-	raw_nsecs += timekeeper.raw_time.tv_nsec;
+	raw_nsecs = tk->raw_interval << shift;
+	raw_nsecs += tk->raw_time.tv_nsec;
 	if (raw_nsecs >= NSEC_PER_SEC) {
 		u64 raw_secs = raw_nsecs;
 		raw_nsecs = do_div(raw_secs, NSEC_PER_SEC);
-		timekeeper.raw_time.tv_sec += raw_secs;
+		tk->raw_time.tv_sec += raw_secs;
 	}
-	timekeeper.raw_time.tv_nsec = raw_nsecs;
+	tk->raw_time.tv_nsec = raw_nsecs;
 
 	/* Accumulate error between NTP and clock interval */
-	timekeeper.ntp_error += ntp_tick_length() << shift;
-	timekeeper.ntp_error -=
-	    (timekeeper.xtime_interval + timekeeper.xtime_remainder) <<
-				(timekeeper.ntp_error_shift + shift);
+	tk->ntp_error += ntp_tick_length() << shift;
+	tk->ntp_error -= (tk->xtime_interval + tk->xtime_remainder) <<
+						(tk->ntp_error_shift + shift);
 
 	return offset;
 }
@@ -972,13 +973,13 @@ static void update_wall_time(void)
 	maxshift = (64 - (ilog2(ntp_tick_length())+1)) - 1;
 	shift = min(shift, maxshift);
 	while (offset >= timekeeper.cycle_interval) {
-		offset = logarithmic_accumulation(offset, shift);
+		offset = logarithmic_accumulation(&timekeeper, offset, shift);
 		if(offset < timekeeper.cycle_interval<<shift)
 			shift--;
 	}
 
 	/* correct the clock when NTP error is too big */
-	timekeeping_adjust(offset);
+	timekeeping_adjust(&timekeeper, offset);
 
 	/*
 	 * Since in the loop above, we accumulate any amount of time
@@ -1028,7 +1029,7 @@ static void update_wall_time(void)
 		second_overflow();
 	}
 
-	timekeeping_update(false);
+	timekeeping_update(&timekeeper, false);
 
 out:
 	write_sequnlock_irqrestore(&timekeeper.lock, flags);
-- 
1.7.3.2.146.gca209


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 15/16] time: Update timekeeper structure using a local shadow
  2011-11-15  4:03 [PATCH 00/16] Timekeeping cleanups and locking changes John Stultz
                   ` (13 preceding siblings ...)
  2011-11-15  4:04 ` [PATCH 14/16] time: Rework timekeeping functions to take timekeeper ptr as argument John Stultz
@ 2011-11-15  4:04 ` John Stultz
  2011-11-17 22:03   ` John Stultz
  2011-11-15  4:04 ` [PATCH 16/16] time: Rework update_vsyscall to pass timekeeper John Stultz
  15 siblings, 1 reply; 18+ messages in thread
From: John Stultz @ 2011-11-15  4:04 UTC (permalink / raw)
  To: LKML; +Cc: John Stultz, Thomas Gleixner, Eric Dumazet, Richard Cochran

Uses a local shadow structure to update the timekeeper. This
reduces the timekeeper.lock hold time.

WARNING: This introduces a race, but the window might be provably
so small as to not be observable. This patch needs lots more math
and comments to validate that assumption.
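
(The shape of the change, condensed from the diff below:

	write_seqlock_irqsave(&timekeeper.lock, flags);
	tk = timekeeper;			/* snapshot */
	write_sequnlock_irqrestore(&timekeeper.lock, flags);

	/* accumulate and adjust on the local copy, lock dropped */

	write_seqlock_irqsave(&timekeeper.lock, flags);
	timekeeper = tk;			/* publish */
	timekeeping_update(&timekeeper, false);
	write_sequnlock_irqrestore(&timekeeper.lock, flags);

The race referred to above is any writer, e.g. do_settimeofday(), that
slips in between the snapshot and the publish; its update would be
silently overwritten by the copy-back.)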

CC: Thomas Gleixner <tglx@linutronix.de>
CC: Eric Dumazet <eric.dumazet@gmail.com>
CC: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
 kernel/time/timekeeping.c |   45 +++++++++++++++++++++++++--------------------
 1 files changed, 25 insertions(+), 20 deletions(-)

diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index 7870a0e..ba595a3 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -940,6 +940,7 @@ static cycle_t logarithmic_accumulation(struct timekeeper *tk, cycle_t offset,
 static void update_wall_time(void)
 {
 	struct clocksource *clock;
+	struct timekeeper tk;
 	cycle_t offset;
 	int shift = 0, maxshift;
 	unsigned long flags;
@@ -951,10 +952,13 @@ static void update_wall_time(void)
 	if (unlikely(timekeeping_suspended))
 		goto out;
 
-	clock = timekeeper.clock;
+	tk = timekeeper;
+	write_sequnlock_irqrestore(&timekeeper.lock, flags);
+
+	clock = tk.clock;
 
 #ifdef CONFIG_ARCH_USES_GETTIMEOFFSET
-	offset = timekeeper.cycle_interval;
+	offset = tk.cycle_interval;
 #else
 	offset = (clock->read(clock) - clock->cycle_last) & clock->mask;
 #endif
@@ -967,19 +971,19 @@ static void update_wall_time(void)
 	 * chunk in one go, and then try to consume the next smaller
 	 * doubled multiple.
 	 */
-	shift = ilog2(offset) - ilog2(timekeeper.cycle_interval);
+	shift = ilog2(offset) - ilog2(tk.cycle_interval);
 	shift = max(0, shift);
 	/* Bound shift to one less then what overflows tick_length */
 	maxshift = (64 - (ilog2(ntp_tick_length())+1)) - 1;
 	shift = min(shift, maxshift);
-	while (offset >= timekeeper.cycle_interval) {
-		offset = logarithmic_accumulation(&timekeeper, offset, shift);
-		if(offset < timekeeper.cycle_interval<<shift)
+	while (offset >= tk.cycle_interval) {
+		offset = logarithmic_accumulation(&tk, offset, shift);
+		if(offset < tk.cycle_interval<<shift)
 			shift--;
 	}
 
 	/* correct the clock when NTP error is too big */
-	timekeeping_adjust(&timekeeper, offset);
+	timekeeping_adjust(&tk, offset);
 
 	/*
 	 * Since in the loop above, we accumulate any amount of time
@@ -997,10 +1001,10 @@ static void update_wall_time(void)
 	 * We'll correct this error next time through this function, when
 	 * xtime_nsec is not as small.
 	 */
-	if (unlikely((s64)timekeeper.xtime_nsec < 0)) {
-		s64 neg = -(s64)timekeeper.xtime_nsec;
-		timekeeper.xtime_nsec = 0;
-		timekeeper.ntp_error += neg << timekeeper.ntp_error_shift;
+	if (unlikely((s64)tk.xtime_nsec < 0)) {
+		s64 neg = -(s64)tk.xtime_nsec;
+		tk.xtime_nsec = 0;
+		tk.ntp_error += neg << tk.ntp_error_shift;
 	}
 
 	/*
@@ -1012,23 +1016,24 @@ static void update_wall_time(void)
 	* the vsyscall implementations are converted to use xtime_nsec
 	* (shifted nanoseconds), this can be killed.
 	*/
-	remainder = timekeeper.xtime_nsec & ((1<<timekeeper.shift)-1);
-	timekeeper.xtime_nsec -= remainder;
-	timekeeper.xtime_nsec += 1<<timekeeper.shift;
-	timekeeper.ntp_error += remainder <<
-					timekeeper.ntp_error_shift;
+	remainder = tk.xtime_nsec & ((1<<tk.shift)-1);
+	tk.xtime_nsec -= remainder;
+	tk.xtime_nsec += 1<<tk.shift;
+	tk.ntp_error += remainder << tk.ntp_error_shift;
 
 	/*
 	 * Finally, make sure that after the rounding
 	 * xtime.tv_nsec isn't larger then NSEC_PER_SEC
 	 */
-	if (unlikely(timekeeper.xtime_nsec >=
-			(NSEC_PER_SEC << timekeeper.shift))) {
-		timekeeper.xtime_nsec -= NSEC_PER_SEC << timekeeper.shift;
-		timekeeper.xtime_sec++;
+	if (unlikely(tk.xtime_nsec >= (NSEC_PER_SEC << tk.shift))) {
+		tk.xtime_nsec -= NSEC_PER_SEC << tk.shift;
+		tk.xtime_sec++;
 		second_overflow();
 	}
 
+	write_seqlock_irqsave(&timekeeper.lock, flags);
+
+	timekeeper = tk;
 	timekeeping_update(&timekeeper, false);
 
 out:
-- 
1.7.3.2.146.gca209


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 16/16] time: Rework update_vsyscall to pass timekeeper
  2011-11-15  4:03 [PATCH 00/16] Timekeeping cleanups and locking changes John Stultz
                   ` (14 preceding siblings ...)
  2011-11-15  4:04 ` [PATCH 15/16] time: Update timekeeper structure using a local shadow John Stultz
@ 2011-11-15  4:04 ` John Stultz
  15 siblings, 0 replies; 18+ messages in thread
From: John Stultz @ 2011-11-15  4:04 UTC (permalink / raw)
  To: LKML; +Cc: John Stultz, Thomas Gleixner, Eric Dumazet, Richard Cochran

Rather than trying to pass fragments of values out to the
vsyscall code, pass the entire timekeeper structure.

This will allow vsyscalls to utilize the fractional shifted
nanoseconds.
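
(As a hedged sketch of where this is heading -- the field names below
are made up for illustration and are not part of this patch -- an
architecture's gtod copy could then keep the shifted remainder instead
of a truncated tv_nsec:

	/* hypothetical vsyscall data copy using fractional nanoseconds */
	vsyscall_gtod_data.wall_time_sec   = tk->xtime_sec;
	vsyscall_gtod_data.wall_time_snsec = tk->xtime_nsec;	/* shifted */
	vsyscall_gtod_data.shift           = tk->shift;

and only convert with "snsec >> shift" at read time, which would let
the rounding games in update_wall_time() go away.)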

CC: Thomas Gleixner <tglx@linutronix.de>
CC: Eric Dumazet <eric.dumazet@gmail.com>
CC: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
 arch/ia64/kernel/time.c       |   28 ++++++++--------
 arch/powerpc/kernel/time.c    |   25 +++++++------
 arch/s390/kernel/time.c       |   18 +++++-----
 arch/x86/kernel/vsyscall_64.c |   21 ++++++-----
 include/linux/clocksource.h   |    9 -----
 include/linux/timekeeper.h    |   75 +++++++++++++++++++++++++++++++++++++++++
 kernel/time/timekeeping.c     |   59 +-------------------------------
 7 files changed, 124 insertions(+), 111 deletions(-)
 create mode 100644 include/linux/timekeeper.h

diff --git a/arch/ia64/kernel/time.c b/arch/ia64/kernel/time.c
index 43920de..170e381 100644
--- a/arch/ia64/kernel/time.c
+++ b/arch/ia64/kernel/time.c
@@ -20,6 +20,7 @@
 #include <linux/efi.h>
 #include <linux/timex.h>
 #include <linux/clocksource.h>
+#include <linux/timekeeper.h>
 #include <linux/platform_device.h>
 
 #include <asm/machvec.h>
@@ -457,27 +458,26 @@ void update_vsyscall_tz(void)
 {
 }
 
-void update_vsyscall(struct timespec *wall, struct timespec *wtm,
-			struct clocksource *c, u32 mult)
+void update_vsyscall(struct timekeeper *tk)
 {
         unsigned long flags;
 
         write_seqlock_irqsave(&fsyscall_gtod_data.lock, flags);
 
-        /* copy fsyscall clock data */
-        fsyscall_gtod_data.clk_mask = c->mask;
-        fsyscall_gtod_data.clk_mult = mult;
-        fsyscall_gtod_data.clk_shift = c->shift;
-        fsyscall_gtod_data.clk_fsys_mmio = c->archdata.fsys_mmio;
-        fsyscall_gtod_data.clk_cycle_last = c->cycle_last;
+	/* copy fsyscall clock data */
+	fsyscall_gtod_data.clk_mask = tk->clock->mask;
+	fsyscall_gtod_data.clk_mult = tk->mult;
+	fsyscall_gtod_data.clk_shift = tk->shift;
+	fsyscall_gtod_data.clk_fsys_mmio = tk->clock->archdata.fsys_mmio;
+	fsyscall_gtod_data.clk_cycle_last = tk->clock->cycle_last;
 
 	/* copy kernel time structures */
-        fsyscall_gtod_data.wall_time.tv_sec = wall->tv_sec;
-        fsyscall_gtod_data.wall_time.tv_nsec = wall->tv_nsec;
-	fsyscall_gtod_data.monotonic_time.tv_sec = wtm->tv_sec
-							+ wall->tv_sec;
-	fsyscall_gtod_data.monotonic_time.tv_nsec = wtm->tv_nsec
-							+ wall->tv_nsec;
+	fsyscall_gtod_data.wall_time.tv_sec = tk->xtime_sec;
+	fsyscall_gtod_data.wall_time.tv_nsec = tk->xtime_nsec >> tk->shift;
+	fsyscall_gtod_data.monotonic_time.tv_sec = tk->xtime_sec
+						+ tk->wall_to_monotonic.tv_sec;
+	fsyscall_gtod_data.monotonic_time.tv_nsec =
+		(tk->xtime_nsec >> tk->shift) + tk->wall_to_monotonic.tv_nsec;
 
 	/* normalize */
 	while (fsyscall_gtod_data.monotonic_time.tv_nsec >= NSEC_PER_SEC) {
diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
index 522bb1d..1ffe695 100644
--- a/arch/powerpc/kernel/time.c
+++ b/arch/powerpc/kernel/time.c
@@ -54,6 +54,7 @@
 #include <linux/irq.h>
 #include <linux/delay.h>
 #include <linux/irq_work.h>
+#include <linux/timekeeper.h>
 #include <asm/trace.h>
 
 #include <asm/io.h>
@@ -811,13 +812,12 @@ static cycle_t timebase_read(struct clocksource *cs)
 	return (cycle_t)get_tb();
 }
 
-void update_vsyscall(struct timespec *wall_time, struct timespec *wtm,
-			struct clocksource *clock, u32 mult)
+void update_vsyscall(struct timekeeper *tk)
 {
 	u64 new_tb_to_xs, new_stamp_xsec;
 	u32 frac_sec;
 
-	if (clock != &clocksource_timebase)
+	if (tk->clock != &clocksource_timebase)
 		return;
 
 	/* Make userspace gettimeofday spin until we're done. */
@@ -826,14 +826,14 @@ void update_vsyscall(struct timespec *wall_time, struct timespec *wtm,
 
 	/* XXX this assumes clock->shift == 22 */
 	/* 4611686018 ~= 2^(20+64-22) / 1e9 */
-	new_tb_to_xs = (u64) mult * 4611686018ULL;
-	new_stamp_xsec = (u64) wall_time->tv_nsec * XSEC_PER_SEC;
+	new_tb_to_xs = (u64) tk->mult * 4611686018ULL;
+	new_stamp_xsec = (u64) (tk->xtime_nsec >> tk->shift) * XSEC_PER_SEC;
 	do_div(new_stamp_xsec, 1000000000);
-	new_stamp_xsec += (u64) wall_time->tv_sec * XSEC_PER_SEC;
+	new_stamp_xsec += (u64) tk->xtime_sec * XSEC_PER_SEC;
 
-	BUG_ON(wall_time->tv_nsec >= NSEC_PER_SEC);
+	BUG_ON((tk->xtime_nsec >> tk->shift) >= NSEC_PER_SEC);
 	/* this is tv_nsec / 1e9 as a 0.32 fraction */
-	frac_sec = ((u64) wall_time->tv_nsec * 18446744073ULL) >> 32;
+	frac_sec = ((u64) (tk->xtime_nsec >> tk->shift) * 18446744073ULL) >> 32;
 
 	/*
 	 * tb_update_count is used to allow the userspace gettimeofday code
@@ -846,12 +846,13 @@ void update_vsyscall(struct timespec *wall_time, struct timespec *wtm,
 	 * We expect the caller to have done the first increment of
 	 * vdso_data->tb_update_count already.
 	 */
-	vdso_data->tb_orig_stamp = clock->cycle_last;
+	vdso_data->tb_orig_stamp = tk->clock->cycle_last;
 	vdso_data->stamp_xsec = new_stamp_xsec;
 	vdso_data->tb_to_xs = new_tb_to_xs;
-	vdso_data->wtom_clock_sec = wtm->tv_sec;
-	vdso_data->wtom_clock_nsec = wtm->tv_nsec;
-	vdso_data->stamp_xtime = *wall_time;
+	vdso_data->wtom_clock_sec = tk->wall_to_monotonic.tv_sec;
+	vdso_data->wtom_clock_nsec = tk->wall_to_monotonic.tv_nsec;
+	vdso_data->stamp_xtime.tv_sec = tk->xtime_sec;
+	vdso_data->stamp_xtime.tv_nsec = (tk->xtime_nsec >> tk->shift);
 	vdso_data->stamp_sec_fraction = frac_sec;
 	smp_wmb();
 	++(vdso_data->tb_update_count);
diff --git a/arch/s390/kernel/time.c b/arch/s390/kernel/time.c
index ebbfab3..f624449 100644
--- a/arch/s390/kernel/time.c
+++ b/arch/s390/kernel/time.c
@@ -36,6 +36,7 @@
 #include <linux/timex.h>
 #include <linux/notifier.h>
 #include <linux/clocksource.h>
+#include <linux/timekeeper.h>
 #include <linux/clockchips.h>
 #include <linux/gfp.h>
 #include <linux/kprobes.h>
@@ -217,21 +218,20 @@ struct clocksource * __init clocksource_default_clock(void)
 	return &clocksource_tod;
 }
 
-void update_vsyscall(struct timespec *wall_time, struct timespec *wtm,
-			struct clocksource *clock, u32 mult)
+void update_vsyscall(struct timekeeper *tk)
 {
-	if (clock != &clocksource_tod)
+	if (tk->clock != &clocksource_tod)
 		return;
 
 	/* Make userspace gettimeofday spin until we're done. */
 	++vdso_data->tb_update_count;
 	smp_wmb();
-	vdso_data->xtime_tod_stamp = clock->cycle_last;
-	vdso_data->xtime_clock_sec = wall_time->tv_sec;
-	vdso_data->xtime_clock_nsec = wall_time->tv_nsec;
-	vdso_data->wtom_clock_sec = wtm->tv_sec;
-	vdso_data->wtom_clock_nsec = wtm->tv_nsec;
-	vdso_data->ntp_mult = mult;
+	vdso_data->xtime_tod_stamp = tk->clock->cycle_last;
+	vdso_data->xtime_clock_sec = tk->xtime_sec;
+	vdso_data->xtime_clock_nsec = (tk->xtime_nsec >> tk->shift);
+	vdso_data->wtom_clock_sec = tk->wall_to_monotonic.tv_sec;
+	vdso_data->wtom_clock_nsec = tk->wall_to_monotonic.tv_nsec;
+	vdso_data->ntp_mult = tk->mult;
 	smp_wmb();
 	++vdso_data->tb_update_count;
 }
diff --git a/arch/x86/kernel/vsyscall_64.c b/arch/x86/kernel/vsyscall_64.c
index e4d4a22..f78c6fd 100644
--- a/arch/x86/kernel/vsyscall_64.c
+++ b/arch/x86/kernel/vsyscall_64.c
@@ -27,6 +27,7 @@
 #include <linux/sysctl.h>
 #include <linux/topology.h>
 #include <linux/clocksource.h>
+#include <linux/timekeeper.h>
 #include <linux/getcpu.h>
 #include <linux/cpu.h>
 #include <linux/smp.h>
@@ -88,22 +89,22 @@ void update_vsyscall_tz(void)
 	write_sequnlock_irqrestore(&vsyscall_gtod_data.lock, flags);
 }
 
-void update_vsyscall(struct timespec *wall_time, struct timespec *wtm,
-			struct clocksource *clock, u32 mult)
+void update_vsyscall(struct timekeeper *tk)
 {
 	unsigned long flags;
 
 	write_seqlock_irqsave(&vsyscall_gtod_data.lock, flags);
 
 	/* copy vsyscall data */
-	vsyscall_gtod_data.clock.vclock_mode	= clock->archdata.vclock_mode;
-	vsyscall_gtod_data.clock.cycle_last	= clock->cycle_last;
-	vsyscall_gtod_data.clock.mask		= clock->mask;
-	vsyscall_gtod_data.clock.mult		= mult;
-	vsyscall_gtod_data.clock.shift		= clock->shift;
-	vsyscall_gtod_data.wall_time_sec	= wall_time->tv_sec;
-	vsyscall_gtod_data.wall_time_nsec	= wall_time->tv_nsec;
-	vsyscall_gtod_data.wall_to_monotonic	= *wtm;
+	vsyscall_gtod_data.clock.vclock_mode	=
+						tk->clock->archdata.vclock_mode;
+	vsyscall_gtod_data.clock.cycle_last	= tk->clock->cycle_last;
+	vsyscall_gtod_data.clock.mask		= tk->clock->mask;
+	vsyscall_gtod_data.clock.mult		= tk->mult;
+	vsyscall_gtod_data.clock.shift		= tk->clock->shift;
+	vsyscall_gtod_data.wall_time_sec	= tk->xtime_sec;
+	vsyscall_gtod_data.wall_time_nsec	= tk->xtime_nsec >> tk->shift;
+	vsyscall_gtod_data.wall_to_monotonic	= tk->wall_to_monotonic;
 	vsyscall_gtod_data.wall_time_coarse	= __current_kernel_time();
 
 	write_sequnlock_irqrestore(&vsyscall_gtod_data.lock, flags);
diff --git a/include/linux/clocksource.h b/include/linux/clocksource.h
index 139c4db..d21d5f0 100644
--- a/include/linux/clocksource.h
+++ b/include/linux/clocksource.h
@@ -321,17 +321,8 @@ clocksource_calc_mult_shift(struct clocksource *cs, u32 freq, u32 minsec)
 }
 
 #ifdef CONFIG_GENERIC_TIME_VSYSCALL
-extern void
-update_vsyscall(struct timespec *ts, struct timespec *wtm,
-			struct clocksource *c, u32 mult);
 extern void update_vsyscall_tz(void);
 #else
-static inline void
-update_vsyscall(struct timespec *ts, struct timespec *wtm,
-			struct clocksource *c, u32 mult)
-{
-}
-
 static inline void update_vsyscall_tz(void)
 {
 }
diff --git a/include/linux/timekeeper.h b/include/linux/timekeeper.h
new file mode 100644
index 0000000..89d1b0c
--- /dev/null
+++ b/include/linux/timekeeper.h
@@ -0,0 +1,75 @@
+#ifndef _LINUX_TIMEKEEPER_H
+#define _LINUX_TIMEKEEPER_H
+
+#include <linux/clocksource.h>
+
+/* 
+ * You should not include this unless you are arch 
+ * specific vsyscall code!
+ */
+
+/* Structure holding internal timekeeping values. */
+struct timekeeper {
+	/* Current clocksource used for timekeeping. */
+	struct clocksource *clock;
+	/* NTP adjusted clock multiplier */
+	u32	mult;
+	/* The shift value of the current clocksource. */
+	int	shift;
+
+	/* Number of clock cycles in one NTP interval. */
+	cycle_t cycle_interval;
+	/* Number of clock shifted nano seconds in one NTP interval. */
+	u64	xtime_interval;
+	/* shifted nano seconds left over when rounding cycle_interval */
+	s64	xtime_remainder;
+	/* Raw nano seconds accumulated per NTP interval. */
+	u32	raw_interval;
+
+	/* Current CLOCK_REALTIME time in seconds */
+	u64	xtime_sec;
+	/* Clock shifted nano seconds */
+	u64	xtime_nsec;
+
+	/* Difference between accumulated time and NTP time in ntp
+	 * shifted nano seconds. */
+	s64	ntp_error;
+	/* Shift conversion between clock shifted nano seconds and
+	 * ntp shifted nano seconds. */
+	int	ntp_error_shift;
+
+	/*
+	 * wall_to_monotonic is what we need to add to xtime (or xtime corrected
+	 * for sub jiffie times) to get to monotonic time.  Monotonic is pegged
+	 * at zero at system boot time, so wall_to_monotonic will be negative,
+	 * however, we will ALWAYS keep the tv_nsec part positive so we can use
+	 * the usual normalization.
+	 *
+	 * wall_to_monotonic is moved after resume from suspend for the
+	 * monotonic time not to jump. We need to add total_sleep_time to
+	 * wall_to_monotonic to get the real boot based time offset.
+	 *
+	 * - wall_to_monotonic is no longer the boot time, getboottime must be
+	 * used instead.
+	 */
+	struct timespec wall_to_monotonic;
+	/* time spent in suspend */
+	struct timespec total_sleep_time;
+	/* The raw monotonic time for the CLOCK_MONOTONIC_RAW posix clock. */
+	struct timespec raw_time;
+
+	/* Seqlock for all timekeeper values */
+	seqlock_t lock;
+};
+
+#ifdef CONFIG_GENERIC_TIME_VSYSCALL
+extern void
+update_vsyscall(struct timekeeper *tk);
+#else
+static inline void
+update_vsyscall(struct timekeeper *tk)
+{
+}
+#endif
+
+#endif /* _LINUX_TIMEKEEPER_H */
diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index ba595a3..0f28d36 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -20,60 +20,7 @@
 #include <linux/time.h>
 #include <linux/tick.h>
 #include <linux/stop_machine.h>
-
-/* Structure holding internal timekeeping values. */
-struct timekeeper {
-	/* Current clocksource used for timekeeping. */
-	struct clocksource *clock;
-	/* NTP adjusted clock multiplier */
-	u32	mult;
-	/* The shift value of the current clocksource. */
-	int	shift;
-
-	/* Number of clock cycles in one NTP interval. */
-	cycle_t cycle_interval;
-	/* Number of clock shifted nano seconds in one NTP interval. */
-	u64	xtime_interval;
-	/* shifted nano seconds left over when rounding cycle_interval */
-	s64	xtime_remainder;
-	/* Raw nano seconds accumulated per NTP interval. */
-	u32	raw_interval;
-
-	/* Current CLOCK_REALTIME time in seconds */
-	u64	xtime_sec;
-	/* Clock shifted nano seconds */
-	u64	xtime_nsec;
-
-	/* Difference between accumulated time and NTP time in ntp
-	 * shifted nano seconds. */
-	s64	ntp_error;
-	/* Shift conversion between clock shifted nano seconds and
-	 * ntp shifted nano seconds. */
-	int	ntp_error_shift;
-
-	/*
-	 * wall_to_monotonic is what we need to add to xtime (or xtime corrected
-	 * for sub jiffie times) to get to monotonic time.  Monotonic is pegged
-	 * at zero at system boot time, so wall_to_monotonic will be negative,
-	 * however, we will ALWAYS keep the tv_nsec part positive so we can use
-	 * the usual normalization.
-	 *
-	 * wall_to_monotonic is moved after resume from suspend for the
-	 * monotonic time not to jump. We need to add total_sleep_time to
-	 * wall_to_monotonic to get the real boot based time offset.
-	 *
-	 * - wall_to_monotonic is no longer the boot time, getboottime must be
-	 * used instead.
-	 */
-	struct timespec wall_to_monotonic;
-	/* time spent in suspend */
-	struct timespec total_sleep_time;
-	/* The raw monotonic time for the CLOCK_MONOTONIC_RAW posix clock. */
-	struct timespec raw_time;
-
-	/* Seqlock for all timekeeper values */
-	seqlock_t lock;
-};
+#include <linux/timekeeper.h>
 
 static struct timekeeper timekeeper;
 
@@ -208,13 +155,11 @@ static inline s64 timekeeping_get_ns_raw(void)
 /* must hold write on timekeeper.lock */
 static void timekeeping_update(struct timekeeper *tk, bool clearntp)
 {
-	struct timespec xt;
 	if (clearntp) {
 		tk->ntp_error = 0;
 		ntp_clear();
 	}
-	xt = timekeeper_xtime(tk);
-	update_vsyscall(&xt, &tk->wall_to_monotonic, tk->clock, tk->mult);
+	update_vsyscall(tk);
 }
 
 
-- 
1.7.3.2.146.gca209


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [PATCH 15/16] time: Update timekeeper structure using a local shadow
  2011-11-15  4:04 ` [PATCH 15/16] time: Update timekeeper structure using a local shadow John Stultz
@ 2011-11-17 22:03   ` John Stultz
  0 siblings, 0 replies; 18+ messages in thread
From: John Stultz @ 2011-11-17 22:03 UTC (permalink / raw)
  To: LKML; +Cc: Thomas Gleixner, Eric Dumazet, Richard Cochran

On Mon, 2011-11-14 at 20:04 -0800, John Stultz wrote:
> Uses a local shadow structure to update the timekeeper. This
> reduces the timekeeper.lock hold time.
> 
> WARNING: This introduces a race, but the window might be provably
> so small as to not be observable. This patch needs lots more math
> and comments to validate that assumption.

Bah. After thinking about it, this patch won't work, since it would
possibly lose updates via settimeofday(), etc.

So I'm coming around to Thomas' double lock reader-seq/writer-lock
method.
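
Rough sketch of what I mean, just to have something concrete to poke
at (names made up, this is not code from either series):

	static seqcount_t timekeeper_seq;
	static DEFINE_RAW_SPINLOCK(timekeeper_lock);

	/* writers (settimeofday, update_wall_time, ...) serialize on
	 * the spinlock and only bump the seqcount around the publish */
	raw_spin_lock_irqsave(&timekeeper_lock, flags);
	/* ... update a shadow copy here ... */
	write_seqcount_begin(&timekeeper_seq);
	timekeeper = shadow;
	write_seqcount_end(&timekeeper_seq);
	raw_spin_unlock_irqrestore(&timekeeper_lock, flags);

	/* readers never block, they just retry on the seqcount */
	do {
		seq = read_seqcount_begin(&timekeeper_seq);
		xt = timekeeper_xtime(&timekeeper);
	} while (read_seqcount_retry(&timekeeper_seq, seq));

Since all writers serialize on the spinlock, a concurrent
settimeofday() can't be lost to the copy-back anymore, while the read
side stays as cheap as it is today.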

thanks
-john


^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread

Thread overview: 18+ messages
2011-11-15  4:03 [PATCH 00/16] Timekeeping cleanups and locking changes John Stultz
2011-11-15  4:03 ` [PATCH 01/16] time: Move total_sleep_time into the timekeeper structure John Stultz
2011-11-15  4:03 ` [PATCH 02/16] time: Move wall_to_monotonic " John Stultz
2011-11-15  4:03 ` [PATCH 03/16] time: Move xtime into timekeeper structure John Stultz
2011-11-15  4:03 ` [PATCH 04/16] time: Move raw_time into timekeeper structure John Stultz
2011-11-15  4:03 ` [PATCH 05/16] time: Cleanup global variables and move them to the top John Stultz
2011-11-15  4:03 ` [PATCH 06/16] time: Add timekeeper lock John Stultz
2011-11-15  4:03 ` [PATCH 07/16] ntp: Cleanup timex.h John Stultz
2011-11-15  4:03 ` [PATCH 08/16] ntp: Access tick_length variable via ntp_tick_length() John Stultz
2011-11-15  4:03 ` [PATCH 09/16] ntp: Add ntp_lock to replace xtime_locking John Stultz
2011-11-15  4:04 ` [PATCH 10/16] time: Remove most of xtime_lock usage in timekeeping.c John Stultz
2011-11-15  4:04 ` [PATCH 11/16] time: Reorder so the hot data is together John Stultz
2011-11-15  4:04 ` [PATCH 12/16] time: Move common updates to a function John Stultz
2011-11-15  4:04 ` [PATCH 13/16] time: Condense timekeeper.xtime into xtime_sec John Stultz
2011-11-15  4:04 ` [PATCH 14/16] time: Rework timekeeping functions to take timekeeper ptr as argument John Stultz
2011-11-15  4:04 ` [PATCH 15/16] time: Update timekeeper structure using a local shadow John Stultz
2011-11-17 22:03   ` John Stultz
2011-11-15  4:04 ` [PATCH 16/16] time: Rework update_vsyscall to pass timekeeper John Stultz
