* [patch 01/43] Move div_long_long_rem out of jiffies.h
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
@ 2005-12-01 0:00 ` Thomas Gleixner
2005-12-01 2:06 ` Adrian Bunk
2005-12-01 11:38 ` Christoph Hellwig
2005-12-01 0:02 ` [patch 02/43] Remove duplicate div_long_long_rem implementation Thomas Gleixner
` (41 subsequent siblings)
42 siblings, 2 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:00 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment
(move-div-long-long-rem-out-of-jiffiesh.patch)
- move div_long_long_rem() from jiffies.h into a new calc64.h include file,
  as it is a general math function useful for more than just the jiffy
  code.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
include/linux/calc64.h | 46 ++++++++++++++++++++++++++++++++++++++++++++++
include/linux/jiffies.h | 11 +----------
2 files changed, 47 insertions(+), 10 deletions(-)
Index: linux-2.6.15-rc2-rework/include/linux/calc64.h
===================================================================
--- /dev/null
+++ linux-2.6.15-rc2-rework/include/linux/calc64.h
@@ -0,0 +1,46 @@
+#ifndef _LINUX_CALC64_H
+#define _LINUX_CALC64_H
+
+#include <linux/types.h>
+#include <asm/div64.h>
+
+/*
+ * This is a generic macro which is used when the architecture-
+ * specific div64.h does not provide an optimized one.
+ *
+ * The 64-bit dividend is divided by the divisor (data type long), the
+ * result is returned and the remainder stored in the variable
+ * referenced by remainder (data type long *). In contrast to the
+ * do_div macro the dividend is kept intact.
+ */
+#ifndef div_long_long_rem
+#define div_long_long_rem(dividend,divisor,remainder) \
+({ \
+ u64 result = (dividend); \
+ \
+ *(remainder) = do_div(result, divisor); \
+ result; \
+})
+#endif
+
+/*
+ * Sign-aware variant of the above. On some architectures a
+ * negative dividend leads to a divide overflow exception, which
+ * is avoided by the sign check.
+ */
+static inline long div_long_long_rem_signed(const long long dividend,
+ const long divisor,
+ long *remainder)
+{
+ long res;
+
+ if (unlikely(dividend < 0)) {
+ res = -div_long_long_rem(-dividend, divisor, remainder);
+ *remainder = -(*remainder);
+ } else
+ res = div_long_long_rem(dividend, divisor, remainder);
+
+ return res;
+}
+
+#endif
Index: linux-2.6.15-rc2-rework/include/linux/jiffies.h
===================================================================
--- linux-2.6.15-rc2-rework.orig/include/linux/jiffies.h
+++ linux-2.6.15-rc2-rework/include/linux/jiffies.h
@@ -1,21 +1,12 @@
#ifndef _LINUX_JIFFIES_H
#define _LINUX_JIFFIES_H
+#include <linux/calc64.h>
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/time.h>
#include <linux/timex.h>
#include <asm/param.h> /* for HZ */
-#include <asm/div64.h>
-
-#ifndef div_long_long_rem
-#define div_long_long_rem(dividend,divisor,remainder) \
-({ \
- u64 result = dividend; \
- *remainder = do_div(result,divisor); \
- result; \
-})
-#endif
/*
* The following defines establish the engineering parameters of the PLL
--
^ permalink raw reply [flat|nested] 47+ messages in thread
* [patch 02/43] Remove duplicate div_long_long_rem implementation
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
2005-12-01 0:00 ` [patch 01/43] Move div_long_long_rem out of jiffies.h Thomas Gleixner
@ 2005-12-01 0:02 ` Thomas Gleixner
2005-12-01 0:02 ` [patch 03/43] Deinline mktime and set_normalized_timespec Thomas Gleixner
` (40 subsequent siblings)
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:02 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment
(remove-div-long-long-rem-duplicate.patch)
- make posix-timers.c use the generic calc64.h facility
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
kernel/posix-timers.c | 10 +---------
1 files changed, 1 insertion(+), 9 deletions(-)
Index: linux-2.6.15-rc2-rework/kernel/posix-timers.c
===================================================================
--- linux-2.6.15-rc2-rework.orig/kernel/posix-timers.c
+++ linux-2.6.15-rc2-rework/kernel/posix-timers.c
@@ -35,6 +35,7 @@
#include <linux/interrupt.h>
#include <linux/slab.h>
#include <linux/time.h>
+#include <linux/calc64.h>
#include <asm/uaccess.h>
#include <asm/semaphore.h>
@@ -48,15 +49,6 @@
#include <linux/workqueue.h>
#include <linux/module.h>
-#ifndef div_long_long_rem
-#include <asm/div64.h>
-
-#define div_long_long_rem(dividend,divisor,remainder) ({ \
- u64 result = dividend; \
- *remainder = do_div(result,divisor); \
- result; })
-
-#endif
#define CLOCK_REALTIME_RES TICK_NSEC /* In nano seconds. */
static inline u64 mpy_l_X_l_ll(unsigned long mpy1,unsigned long mpy2)
--
* [patch 03/43] Deinline mktime and set_normalized_timespec
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
2005-12-01 0:00 ` [patch 01/43] Move div_long_long_rem out of jiffies.h Thomas Gleixner
2005-12-01 0:02 ` [patch 02/43] Remove duplicate div_long_long_rem implementation Thomas Gleixner
@ 2005-12-01 0:02 ` Thomas Gleixner
2005-12-01 0:02 ` [patch 04/43] Clean up mktime and add const modifiers Thomas Gleixner
` (39 subsequent siblings)
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:02 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment
(deinline-mktime-set-normalized-timespec.patch)
- mktime() and set_normalized_timespec() are large inline functions used
in many places: deinline them.
From: George Anzinger, off-by-1 bugfix
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
include/linux/time.h | 52 ++++---------------------------------------
kernel/time.c | 61 +++++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 66 insertions(+), 47 deletions(-)
Index: linux-2.6.15-rc2-rework/include/linux/time.h
===================================================================
--- linux-2.6.15-rc2-rework.orig/include/linux/time.h
+++ linux-2.6.15-rc2-rework/include/linux/time.h
@@ -38,38 +38,9 @@ static __inline__ int timespec_equal(str
return (a->tv_sec == b->tv_sec) && (a->tv_nsec == b->tv_nsec);
}
-/* Converts Gregorian date to seconds since 1970-01-01 00:00:00.
- * Assumes input in normal date format, i.e. 1980-12-31 23:59:59
- * => year=1980, mon=12, day=31, hour=23, min=59, sec=59.
- *
- * [For the Julian calendar (which was used in Russia before 1917,
- * Britain & colonies before 1752, anywhere else before 1582,
- * and is still in use by some communities) leave out the
- * -year/100+year/400 terms, and add 10.]
- *
- * This algorithm was first published by Gauss (I think).
- *
- * WARNING: this function will overflow on 2106-02-07 06:28:16 on
- * machines were long is 32-bit! (However, as time_t is signed, we
- * will already get problems at other places on 2038-01-19 03:14:08)
- */
-static inline unsigned long
-mktime (unsigned int year, unsigned int mon,
- unsigned int day, unsigned int hour,
- unsigned int min, unsigned int sec)
-{
- if (0 >= (int) (mon -= 2)) { /* 1..12 -> 11,12,1..10 */
- mon += 12; /* Puts Feb last since it has leap day */
- year -= 1;
- }
-
- return (((
- (unsigned long) (year/4 - year/100 + year/400 + 367*mon/12 + day) +
- year*365 - 719499
- )*24 + hour /* now have hours */
- )*60 + min /* now have minutes */
- )*60 + sec; /* finally seconds */
-}
+extern unsigned long mktime (unsigned int year, unsigned int mon,
+ unsigned int day, unsigned int hour,
+ unsigned int min, unsigned int sec);
extern struct timespec xtime;
extern struct timespec wall_to_monotonic;
@@ -80,6 +51,8 @@ static inline unsigned long get_seconds(
return xtime.tv_sec;
}
+extern void set_normalized_timespec (struct timespec *ts, time_t sec, long nsec);
+
struct timespec current_kernel_time(void);
#define CURRENT_TIME (current_kernel_time())
@@ -98,21 +71,6 @@ extern void getnstimeofday (struct times
extern struct timespec timespec_trunc(struct timespec t, unsigned gran);
-static inline void
-set_normalized_timespec (struct timespec *ts, time_t sec, long nsec)
-{
- while (nsec >= NSEC_PER_SEC) {
- nsec -= NSEC_PER_SEC;
- ++sec;
- }
- while (nsec < 0) {
- nsec += NSEC_PER_SEC;
- --sec;
- }
- ts->tv_sec = sec;
- ts->tv_nsec = nsec;
-}
-
#endif /* __KERNEL__ */
#define NFDBITS __NFDBITS
Index: linux-2.6.15-rc2-rework/kernel/time.c
===================================================================
--- linux-2.6.15-rc2-rework.orig/kernel/time.c
+++ linux-2.6.15-rc2-rework/kernel/time.c
@@ -561,6 +561,67 @@ void getnstimeofday(struct timespec *tv)
EXPORT_SYMBOL_GPL(getnstimeofday);
#endif
+/* Converts Gregorian date to seconds since 1970-01-01 00:00:00.
+ * Assumes input in normal date format, i.e. 1980-12-31 23:59:59
+ * => year=1980, mon=12, day=31, hour=23, min=59, sec=59.
+ *
+ * [For the Julian calendar (which was used in Russia before 1917,
+ * Britain & colonies before 1752, anywhere else before 1582,
+ * and is still in use by some communities) leave out the
+ * -year/100+year/400 terms, and add 10.]
+ *
+ * This algorithm was first published by Gauss (I think).
+ *
+ * WARNING: this function will overflow on 2106-02-07 06:28:16 on
+ * machines where long is 32-bit! (However, as time_t is signed, we
+ * will already get problems at other places on 2038-01-19 03:14:08)
+ */
+unsigned long
+mktime (unsigned int year, unsigned int mon,
+ unsigned int day, unsigned int hour,
+ unsigned int min, unsigned int sec)
+{
+ if (0 >= (int) (mon -= 2)) { /* 1..12 -> 11,12,1..10 */
+ mon += 12; /* Puts Feb last since it has leap day */
+ year -= 1;
+ }
+
+ return ((((unsigned long)
+ (year/4 - year/100 + year/400 + 367*mon/12 + day) +
+ year*365 - 719499
+ )*24 + hour /* now have hours */
+ )*60 + min /* now have minutes */
+ )*60 + sec; /* finally seconds */
+}
+
+/**
+ * set_normalized_timespec - set timespec sec and nsec parts and normalize
+ *
+ * @ts: pointer to timespec variable to be set
+ * @sec: seconds to set
+ * @nsec: nanoseconds to set
+ *
+ * Set seconds and nanoseconds field of a timespec variable and
+ * normalize to the timespec storage format
+ *
+ * Note: The tv_nsec part is always in the range of
+ * 0 <= tv_nsec < NSEC_PER_SEC
+ * For negative values only the tv_sec field is negative !
+ */
+void set_normalized_timespec (struct timespec *ts, time_t sec, long nsec)
+{
+ while (nsec >= NSEC_PER_SEC) {
+ nsec -= NSEC_PER_SEC;
+ ++sec;
+ }
+ while (nsec < 0) {
+ nsec += NSEC_PER_SEC;
+ --sec;
+ }
+ ts->tv_sec = sec;
+ ts->tv_nsec = nsec;
+}
+
#if (BITS_PER_LONG < 64)
u64 get_jiffies_64(void)
{
--
* [patch 04/43] Clean up mktime and add const modifiers
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
` (2 preceding siblings ...)
2005-12-01 0:02 ` [patch 03/43] Deinline mktime and set_normalized_timespec Thomas Gleixner
@ 2005-12-01 0:02 ` Thomas Gleixner
2005-12-01 0:02 ` [patch 05/43] Export deinlined mktime Thomas Gleixner
` (38 subsequent siblings)
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:02 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment
(mktime-set-normalized-timespec-const.patch)
- add 'const' to mktime arguments, and clean it up a bit
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
include/linux/time.h | 10 +++++-----
kernel/time.c | 15 +++++++++------
2 files changed, 14 insertions(+), 11 deletions(-)
Index: linux-2.6.15-rc2-rework/include/linux/time.h
===================================================================
--- linux-2.6.15-rc2-rework.orig/include/linux/time.h
+++ linux-2.6.15-rc2-rework/include/linux/time.h
@@ -38,9 +38,11 @@ static __inline__ int timespec_equal(str
return (a->tv_sec == b->tv_sec) && (a->tv_nsec == b->tv_nsec);
}
-extern unsigned long mktime (unsigned int year, unsigned int mon,
- unsigned int day, unsigned int hour,
- unsigned int min, unsigned int sec);
+extern unsigned long mktime(const unsigned int year, const unsigned int mon,
+ const unsigned int day, const unsigned int hour,
+ const unsigned int min, const unsigned int sec);
+
+extern void set_normalized_timespec(struct timespec *ts, time_t sec, long nsec);
extern struct timespec xtime;
extern struct timespec wall_to_monotonic;
@@ -51,8 +53,6 @@ static inline unsigned long get_seconds(
return xtime.tv_sec;
}
-extern void set_normalized_timespec (struct timespec *ts, time_t sec, long nsec);
-
struct timespec current_kernel_time(void);
#define CURRENT_TIME (current_kernel_time())
Index: linux-2.6.15-rc2-rework/kernel/time.c
===================================================================
--- linux-2.6.15-rc2-rework.orig/kernel/time.c
+++ linux-2.6.15-rc2-rework/kernel/time.c
@@ -577,12 +577,15 @@ EXPORT_SYMBOL_GPL(getnstimeofday);
* will already get problems at other places on 2038-01-19 03:14:08)
*/
unsigned long
-mktime (unsigned int year, unsigned int mon,
- unsigned int day, unsigned int hour,
- unsigned int min, unsigned int sec)
+mktime(const unsigned int year0, const unsigned int mon0,
+ const unsigned int day, const unsigned int hour,
+ const unsigned int min, const unsigned int sec)
{
- if (0 >= (int) (mon -= 2)) { /* 1..12 -> 11,12,1..10 */
- mon += 12; /* Puts Feb last since it has leap day */
+ unsigned int mon = mon0, year = year0;
+
+ /* 1..12 -> 11,12,1..10 */
+ if (0 >= (int) (mon -= 2)) {
+ mon += 12; /* Puts Feb last since it has leap day */
year -= 1;
}
@@ -608,7 +611,7 @@ mktime (unsigned int year, unsigned int
* 0 <= tv_nsec < NSEC_PER_SEC
* For negative values only the tv_sec field is negative !
*/
-void set_normalized_timespec (struct timespec *ts, time_t sec, long nsec)
+void set_normalized_timespec(struct timespec *ts, time_t sec, long nsec)
{
while (nsec >= NSEC_PER_SEC) {
nsec -= NSEC_PER_SEC;
--
* [patch 05/43] Export deinlined mktime
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
` (3 preceding siblings ...)
2005-12-01 0:02 ` [patch 04/43] Clean up mktime and add const modifiers Thomas Gleixner
@ 2005-12-01 0:02 ` Thomas Gleixner
2005-12-01 0:02 ` [patch 06/43] Remove unused clock constants Thomas Gleixner
` (37 subsequent siblings)
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:02 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment (deinline-mktime-export.patch)
From: Andrew Morton <akpm@osdl.org>
This is now uninlined, but some modules use it.
Make it a non-GPL export, since the inlined mktime() was also available that
way.
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
kernel/time.c | 2 ++
1 files changed, 2 insertions(+)
Index: linux-2.6.15-rc2-rework/kernel/time.c
===================================================================
--- linux-2.6.15-rc2-rework.orig/kernel/time.c
+++ linux-2.6.15-rc2-rework/kernel/time.c
@@ -597,6 +597,8 @@ mktime(const unsigned int year0, const u
)*60 + sec; /* finally seconds */
}
+EXPORT_SYMBOL(mktime);
+
/**
* set_normalized_timespec - set timespec sec and nsec parts and normalize
*
--
* [patch 06/43] Remove unused clock constants
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
` (4 preceding siblings ...)
2005-12-01 0:02 ` [patch 05/43] Export deinlined mktime Thomas Gleixner
@ 2005-12-01 0:02 ` Thomas Gleixner
2005-12-01 0:02 ` [patch 07/43] Cleanup clock constants coding style Thomas Gleixner
` (36 subsequent siblings)
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:02 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment
(time-h-remove-unused-clock-constants.patch)
- remove unused CLOCK_ constants from time.h
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
include/linux/time.h | 11 ++++-------
1 files changed, 4 insertions(+), 7 deletions(-)
Index: linux-2.6.15-rc2-rework/include/linux/time.h
===================================================================
--- linux-2.6.15-rc2-rework.orig/include/linux/time.h
+++ linux-2.6.15-rc2-rework/include/linux/time.h
@@ -103,12 +103,10 @@ struct itimerval {
/*
* The IDs of the various system clocks (for POSIX.1b interval timers).
*/
-#define CLOCK_REALTIME 0
-#define CLOCK_MONOTONIC 1
+#define CLOCK_REALTIME 0
+#define CLOCK_MONOTONIC 1
#define CLOCK_PROCESS_CPUTIME_ID 2
#define CLOCK_THREAD_CPUTIME_ID 3
-#define CLOCK_REALTIME_HR 4
-#define CLOCK_MONOTONIC_HR 5
/*
* The IDs of various hardware clocks
@@ -117,9 +115,8 @@ struct itimerval {
#define CLOCK_SGI_CYCLE 10
#define MAX_CLOCKS 16
-#define CLOCKS_MASK (CLOCK_REALTIME | CLOCK_MONOTONIC | \
- CLOCK_REALTIME_HR | CLOCK_MONOTONIC_HR)
-#define CLOCKS_MONO (CLOCK_MONOTONIC & CLOCK_MONOTONIC_HR)
+#define CLOCKS_MASK (CLOCK_REALTIME | CLOCK_MONOTONIC)
+#define CLOCKS_MONO (CLOCK_MONOTONIC)
/*
* The various flags for setting POSIX.1b interval timers.
--
* [patch 07/43] Cleanup clock constants coding style
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
` (5 preceding siblings ...)
2005-12-01 0:02 ` [patch 06/43] Remove unused clock constants Thomas Gleixner
@ 2005-12-01 0:02 ` Thomas Gleixner
2005-12-01 0:03 ` [patch 08/43] Coding style and whitespace cleanup time.h Thomas Gleixner
` (35 subsequent siblings)
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:02 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment (time-h-clean-up-clock-constants.patch)
- clean up the CLOCK_ portions of time.h
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
include/linux/time.h | 23 +++++++++--------------
1 files changed, 9 insertions(+), 14 deletions(-)
Index: linux-2.6.15-rc2-rework/include/linux/time.h
===================================================================
--- linux-2.6.15-rc2-rework.orig/include/linux/time.h
+++ linux-2.6.15-rc2-rework/include/linux/time.h
@@ -99,30 +99,25 @@ struct itimerval {
struct timeval it_value; /* current value */
};
-
/*
* The IDs of the various system clocks (for POSIX.1b interval timers).
*/
-#define CLOCK_REALTIME 0
-#define CLOCK_MONOTONIC 1
-#define CLOCK_PROCESS_CPUTIME_ID 2
-#define CLOCK_THREAD_CPUTIME_ID 3
+#define CLOCK_REALTIME 0
+#define CLOCK_MONOTONIC 1
+#define CLOCK_PROCESS_CPUTIME_ID 2
+#define CLOCK_THREAD_CPUTIME_ID 3
/*
* The IDs of various hardware clocks
*/
-
-
-#define CLOCK_SGI_CYCLE 10
-#define MAX_CLOCKS 16
-#define CLOCKS_MASK (CLOCK_REALTIME | CLOCK_MONOTONIC)
-#define CLOCKS_MONO (CLOCK_MONOTONIC)
+#define CLOCK_SGI_CYCLE 10
+#define MAX_CLOCKS 16
+#define CLOCKS_MASK (CLOCK_REALTIME | CLOCK_MONOTONIC)
+#define CLOCKS_MONO CLOCK_MONOTONIC
/*
* The various flags for setting POSIX.1b interval timers.
*/
-
-#define TIMER_ABSTIME 0x01
-
+#define TIMER_ABSTIME 0x01
#endif
--
* [patch 08/43] Coding style and whitespace cleanup time.h
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
` (6 preceding siblings ...)
2005-12-01 0:02 ` [patch 07/43] Cleanup clock constants coding style Thomas Gleixner
@ 2005-12-01 0:03 ` Thomas Gleixner
2005-12-01 0:03 ` [patch 09/43] Make clock selectors in posix-timers const Thomas Gleixner
` (34 subsequent siblings)
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:03 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment (time-h-clean-up-rest.patch)
- style and whitespace cleanup of the rest of time.h.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
include/linux/time.h | 63 +++++++++++++++++++++++++--------------------------
1 files changed, 32 insertions(+), 31 deletions(-)
Index: linux-2.6.15-rc2-rework/include/linux/time.h
===================================================================
--- linux-2.6.15-rc2-rework.orig/include/linux/time.h
+++ linux-2.6.15-rc2-rework/include/linux/time.h
@@ -4,7 +4,7 @@
#include <linux/types.h>
#ifdef __KERNEL__
-#include <linux/seqlock.h>
+# include <linux/seqlock.h>
#endif
#ifndef _STRUCT_TIMESPEC
@@ -13,7 +13,7 @@ struct timespec {
time_t tv_sec; /* seconds */
long tv_nsec; /* nanoseconds */
};
-#endif /* _STRUCT_TIMESPEC */
+#endif
struct timeval {
time_t tv_sec; /* seconds */
@@ -27,16 +27,16 @@ struct timezone {
#ifdef __KERNEL__
-/* Parameters used to convert the timespec values */
-#define MSEC_PER_SEC (1000L)
-#define USEC_PER_SEC (1000000L)
-#define NSEC_PER_SEC (1000000000L)
-#define NSEC_PER_USEC (1000L)
+/* Parameters used to convert the timespec values: */
+#define MSEC_PER_SEC 1000L
+#define USEC_PER_SEC 1000000L
+#define NSEC_PER_SEC 1000000000L
+#define NSEC_PER_USEC 1000L
-static __inline__ int timespec_equal(struct timespec *a, struct timespec *b)
-{
+static __inline__ int timespec_equal(struct timespec *a, struct timespec *b)
+{
return (a->tv_sec == b->tv_sec) && (a->tv_nsec == b->tv_nsec);
-}
+}
extern unsigned long mktime(const unsigned int year, const unsigned int mon,
const unsigned int day, const unsigned int hour,
@@ -49,25 +49,26 @@ extern struct timespec wall_to_monotonic
extern seqlock_t xtime_lock;
static inline unsigned long get_seconds(void)
-{
+{
return xtime.tv_sec;
}
struct timespec current_kernel_time(void);
-#define CURRENT_TIME (current_kernel_time())
-#define CURRENT_TIME_SEC ((struct timespec) { xtime.tv_sec, 0 })
+#define CURRENT_TIME (current_kernel_time())
+#define CURRENT_TIME_SEC ((struct timespec) { xtime.tv_sec, 0 })
extern void do_gettimeofday(struct timeval *tv);
extern int do_settimeofday(struct timespec *tv);
extern int do_sys_settimeofday(struct timespec *tv, struct timezone *tz);
-extern void clock_was_set(void); // call when ever the clock is set
+extern void clock_was_set(void); // call whenever the clock is set
extern int do_posix_clock_monotonic_gettime(struct timespec *tp);
-extern long do_utimes(char __user * filename, struct timeval * times);
+extern long do_utimes(char __user *filename, struct timeval *times);
struct itimerval;
-extern int do_setitimer(int which, struct itimerval *value, struct itimerval *ovalue);
+extern int do_setitimer(int which, struct itimerval *value,
+ struct itimerval *ovalue);
extern int do_getitimer(int which, struct itimerval *value);
-extern void getnstimeofday (struct timespec *tv);
+extern void getnstimeofday(struct timespec *tv);
extern struct timespec timespec_trunc(struct timespec t, unsigned gran);
@@ -83,24 +84,24 @@ extern struct timespec timespec_trunc(st
/*
* Names of the interval timers, and structure
- * defining a timer setting.
+ * defining a timer setting:
*/
-#define ITIMER_REAL 0
-#define ITIMER_VIRTUAL 1
-#define ITIMER_PROF 2
-
-struct itimerspec {
- struct timespec it_interval; /* timer period */
- struct timespec it_value; /* timer expiration */
+#define ITIMER_REAL 0
+#define ITIMER_VIRTUAL 1
+#define ITIMER_PROF 2
+
+struct itimerspec {
+ struct timespec it_interval; /* timer period */
+ struct timespec it_value; /* timer expiration */
};
-struct itimerval {
- struct timeval it_interval; /* timer interval */
- struct timeval it_value; /* current value */
+struct itimerval {
+ struct timeval it_interval; /* timer interval */
+ struct timeval it_value; /* current value */
};
/*
- * The IDs of the various system clocks (for POSIX.1b interval timers).
+ * The IDs of the various system clocks (for POSIX.1b interval timers):
*/
#define CLOCK_REALTIME 0
#define CLOCK_MONOTONIC 1
@@ -108,7 +109,7 @@ struct itimerval {
#define CLOCK_THREAD_CPUTIME_ID 3
/*
- * The IDs of various hardware clocks
+ * The IDs of various hardware clocks:
*/
#define CLOCK_SGI_CYCLE 10
#define MAX_CLOCKS 16
@@ -116,7 +117,7 @@ struct itimerval {
#define CLOCKS_MONO CLOCK_MONOTONIC
/*
- * The various flags for setting POSIX.1b interval timers.
+ * The various flags for setting POSIX.1b interval timers:
*/
#define TIMER_ABSTIME 0x01
--
* [patch 09/43] Make clock selectors in posix-timers const
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
` (7 preceding siblings ...)
2005-12-01 0:03 ` [patch 08/43] Coding style and whitespace cleanup time.h Thomas Gleixner
@ 2005-12-01 0:03 ` Thomas Gleixner
2005-12-01 0:03 ` [patch 10/43] Coding style and white space cleanup posix-timer.h Thomas Gleixner
` (33 subsequent siblings)
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:03 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment (posix-timer-const-overhaul.patch)
- add const arguments to the posix-timers.h API functions
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
include/linux/posix-timers.h | 22 +++++++++++-----------
kernel/posix-cpu-timers.c | 40 ++++++++++++++++++++++------------------
kernel/posix-timers.c | 38 +++++++++++++++++++++-----------------
3 files changed, 54 insertions(+), 46 deletions(-)
Index: linux-2.6.15-rc2-rework/include/linux/posix-timers.h
===================================================================
--- linux-2.6.15-rc2-rework.orig/include/linux/posix-timers.h
+++ linux-2.6.15-rc2-rework/include/linux/posix-timers.h
@@ -72,12 +72,12 @@ struct k_clock_abs {
};
struct k_clock {
int res; /* in nano seconds */
- int (*clock_getres) (clockid_t which_clock, struct timespec *tp);
+ int (*clock_getres) (const clockid_t which_clock, struct timespec *tp);
struct k_clock_abs *abs_struct;
- int (*clock_set) (clockid_t which_clock, struct timespec * tp);
- int (*clock_get) (clockid_t which_clock, struct timespec * tp);
+ int (*clock_set) (const clockid_t which_clock, struct timespec * tp);
+ int (*clock_get) (const clockid_t which_clock, struct timespec * tp);
int (*timer_create) (struct k_itimer *timer);
- int (*nsleep) (clockid_t which_clock, int flags, struct timespec *);
+ int (*nsleep) (const clockid_t which_clock, int flags, struct timespec *);
int (*timer_set) (struct k_itimer * timr, int flags,
struct itimerspec * new_setting,
struct itimerspec * old_setting);
@@ -87,12 +87,12 @@ struct k_clock {
struct itimerspec * cur_setting);
};
-void register_posix_clock(clockid_t clock_id, struct k_clock *new_clock);
+void register_posix_clock(const clockid_t clock_id, struct k_clock *new_clock);
/* Error handlers for timer_create, nanosleep and settime */
int do_posix_clock_notimer_create(struct k_itimer *timer);
-int do_posix_clock_nonanosleep(clockid_t, int flags, struct timespec *);
-int do_posix_clock_nosettime(clockid_t, struct timespec *tp);
+int do_posix_clock_nonanosleep(const clockid_t, int flags, struct timespec *);
+int do_posix_clock_nosettime(const clockid_t, struct timespec *tp);
/* function to call to trigger timer event */
int posix_timer_event(struct k_itimer *timr, int si_private);
@@ -117,11 +117,11 @@ struct now_struct {
} \
}while (0)
-int posix_cpu_clock_getres(clockid_t which_clock, struct timespec *);
-int posix_cpu_clock_get(clockid_t which_clock, struct timespec *);
-int posix_cpu_clock_set(clockid_t which_clock, const struct timespec *tp);
+int posix_cpu_clock_getres(const clockid_t which_clock, struct timespec *);
+int posix_cpu_clock_get(const clockid_t which_clock, struct timespec *);
+int posix_cpu_clock_set(const clockid_t which_clock, const struct timespec *tp);
int posix_cpu_timer_create(struct k_itimer *);
-int posix_cpu_nsleep(clockid_t, int, struct timespec *);
+int posix_cpu_nsleep(const clockid_t, int, struct timespec *);
int posix_cpu_timer_set(struct k_itimer *, int,
struct itimerspec *, struct itimerspec *);
int posix_cpu_timer_del(struct k_itimer *);
Index: linux-2.6.15-rc2-rework/kernel/posix-cpu-timers.c
===================================================================
--- linux-2.6.15-rc2-rework.orig/kernel/posix-cpu-timers.c
+++ linux-2.6.15-rc2-rework/kernel/posix-cpu-timers.c
@@ -7,7 +7,7 @@
#include <asm/uaccess.h>
#include <linux/errno.h>
-static int check_clock(clockid_t which_clock)
+static int check_clock(const clockid_t which_clock)
{
int error = 0;
struct task_struct *p;
@@ -31,7 +31,7 @@ static int check_clock(clockid_t which_c
}
static inline union cpu_time_count
-timespec_to_sample(clockid_t which_clock, const struct timespec *tp)
+timespec_to_sample(const clockid_t which_clock, const struct timespec *tp)
{
union cpu_time_count ret;
ret.sched = 0; /* high half always zero when .cpu used */
@@ -43,7 +43,7 @@ timespec_to_sample(clockid_t which_clock
return ret;
}
-static void sample_to_timespec(clockid_t which_clock,
+static void sample_to_timespec(const clockid_t which_clock,
union cpu_time_count cpu,
struct timespec *tp)
{
@@ -55,7 +55,7 @@ static void sample_to_timespec(clockid_t
}
}
-static inline int cpu_time_before(clockid_t which_clock,
+static inline int cpu_time_before(const clockid_t which_clock,
union cpu_time_count now,
union cpu_time_count then)
{
@@ -65,7 +65,7 @@ static inline int cpu_time_before(clocki
return cputime_lt(now.cpu, then.cpu);
}
}
-static inline void cpu_time_add(clockid_t which_clock,
+static inline void cpu_time_add(const clockid_t which_clock,
union cpu_time_count *acc,
union cpu_time_count val)
{
@@ -75,7 +75,7 @@ static inline void cpu_time_add(clockid_
acc->cpu = cputime_add(acc->cpu, val.cpu);
}
}
-static inline union cpu_time_count cpu_time_sub(clockid_t which_clock,
+static inline union cpu_time_count cpu_time_sub(const clockid_t which_clock,
union cpu_time_count a,
union cpu_time_count b)
{
@@ -151,7 +151,7 @@ static inline unsigned long long sched_n
return (p == current) ? current_sched_time(p) : p->sched_time;
}
-int posix_cpu_clock_getres(clockid_t which_clock, struct timespec *tp)
+int posix_cpu_clock_getres(const clockid_t which_clock, struct timespec *tp)
{
int error = check_clock(which_clock);
if (!error) {
@@ -169,7 +169,7 @@ int posix_cpu_clock_getres(clockid_t whi
return error;
}
-int posix_cpu_clock_set(clockid_t which_clock, const struct timespec *tp)
+int posix_cpu_clock_set(const clockid_t which_clock, const struct timespec *tp)
{
/*
* You can never reset a CPU clock, but we check for other errors
@@ -186,7 +186,7 @@ int posix_cpu_clock_set(clockid_t which_
/*
* Sample a per-thread clock for the given task.
*/
-static int cpu_clock_sample(clockid_t which_clock, struct task_struct *p,
+static int cpu_clock_sample(const clockid_t which_clock, struct task_struct *p,
union cpu_time_count *cpu)
{
switch (CPUCLOCK_WHICH(which_clock)) {
@@ -259,7 +259,7 @@ static int cpu_clock_sample_group_locked
* Sample a process (thread group) clock for the given group_leader task.
* Must be called with tasklist_lock held for reading.
*/
-static int cpu_clock_sample_group(clockid_t which_clock,
+static int cpu_clock_sample_group(const clockid_t which_clock,
struct task_struct *p,
union cpu_time_count *cpu)
{
@@ -273,7 +273,7 @@ static int cpu_clock_sample_group(clocki
}
-int posix_cpu_clock_get(clockid_t which_clock, struct timespec *tp)
+int posix_cpu_clock_get(const clockid_t which_clock, struct timespec *tp)
{
const pid_t pid = CPUCLOCK_PID(which_clock);
int error = -EINVAL;
@@ -1410,7 +1410,7 @@ void set_process_cpu_timer(struct task_s
static long posix_cpu_clock_nanosleep_restart(struct restart_block *);
-int posix_cpu_nsleep(clockid_t which_clock, int flags,
+int posix_cpu_nsleep(const clockid_t which_clock, int flags,
struct timespec *rqtp)
{
struct restart_block *restart_block =
@@ -1514,11 +1514,13 @@ posix_cpu_clock_nanosleep_restart(struct
#define PROCESS_CLOCK MAKE_PROCESS_CPUCLOCK(0, CPUCLOCK_SCHED)
#define THREAD_CLOCK MAKE_THREAD_CPUCLOCK(0, CPUCLOCK_SCHED)
-static int process_cpu_clock_getres(clockid_t which_clock, struct timespec *tp)
+static int process_cpu_clock_getres(const clockid_t which_clock,
+ struct timespec *tp)
{
return posix_cpu_clock_getres(PROCESS_CLOCK, tp);
}
-static int process_cpu_clock_get(clockid_t which_clock, struct timespec *tp)
+static int process_cpu_clock_get(const clockid_t which_clock,
+ struct timespec *tp)
{
return posix_cpu_clock_get(PROCESS_CLOCK, tp);
}
@@ -1527,16 +1529,18 @@ static int process_cpu_timer_create(stru
timer->it_clock = PROCESS_CLOCK;
return posix_cpu_timer_create(timer);
}
-static int process_cpu_nsleep(clockid_t which_clock, int flags,
+static int process_cpu_nsleep(const clockid_t which_clock, int flags,
struct timespec *rqtp)
{
return posix_cpu_nsleep(PROCESS_CLOCK, flags, rqtp);
}
-static int thread_cpu_clock_getres(clockid_t which_clock, struct timespec *tp)
+static int thread_cpu_clock_getres(const clockid_t which_clock,
+ struct timespec *tp)
{
return posix_cpu_clock_getres(THREAD_CLOCK, tp);
}
-static int thread_cpu_clock_get(clockid_t which_clock, struct timespec *tp)
+static int thread_cpu_clock_get(const clockid_t which_clock,
+ struct timespec *tp)
{
return posix_cpu_clock_get(THREAD_CLOCK, tp);
}
@@ -1545,7 +1549,7 @@ static int thread_cpu_timer_create(struc
timer->it_clock = THREAD_CLOCK;
return posix_cpu_timer_create(timer);
}
-static int thread_cpu_nsleep(clockid_t which_clock, int flags,
+static int thread_cpu_nsleep(const clockid_t which_clock, int flags,
struct timespec *rqtp)
{
return -EINVAL;
Index: linux-2.6.15-rc2-rework/kernel/posix-timers.c
===================================================================
--- linux-2.6.15-rc2-rework.orig/kernel/posix-timers.c
+++ linux-2.6.15-rc2-rework/kernel/posix-timers.c
@@ -151,7 +151,7 @@ static void posix_timer_fn(unsigned long
static u64 do_posix_clock_monotonic_gettime_parts(
struct timespec *tp, struct timespec *mo);
int do_posix_clock_monotonic_gettime(struct timespec *tp);
-static int do_posix_clock_monotonic_get(clockid_t, struct timespec *tp);
+static int do_posix_clock_monotonic_get(const clockid_t, struct timespec *tp);
static struct k_itimer *lock_timer(timer_t timer_id, unsigned long *flags);
@@ -176,7 +176,7 @@ static inline void unlock_timer(struct k
* the function pointer CALL in struct k_clock.
*/
-static inline int common_clock_getres(clockid_t which_clock,
+static inline int common_clock_getres(const clockid_t which_clock,
struct timespec *tp)
{
tp->tv_sec = 0;
@@ -184,13 +184,15 @@ static inline int common_clock_getres(cl
return 0;
}
-static inline int common_clock_get(clockid_t which_clock, struct timespec *tp)
+static inline int common_clock_get(const clockid_t which_clock,
+ struct timespec *tp)
{
getnstimeofday(tp);
return 0;
}
-static inline int common_clock_set(clockid_t which_clock, struct timespec *tp)
+static inline int common_clock_set(const clockid_t which_clock,
+ struct timespec *tp)
{
return do_sys_settimeofday(tp, NULL);
}
@@ -207,7 +209,7 @@ static inline int common_timer_create(st
/*
* These ones are defined below.
*/
-static int common_nsleep(clockid_t, int flags, struct timespec *t);
+static int common_nsleep(const clockid_t, int flags, struct timespec *t);
static void common_timer_get(struct k_itimer *, struct itimerspec *);
static int common_timer_set(struct k_itimer *, int,
struct itimerspec *, struct itimerspec *);
@@ -216,7 +218,7 @@ static int common_timer_del(struct k_iti
/*
* Return nonzero iff we know a priori this clockid_t value is bogus.
*/
-static inline int invalid_clockid(clockid_t which_clock)
+static inline int invalid_clockid(const clockid_t which_clock)
{
if (which_clock < 0) /* CPU clock, posix_cpu_* will check it */
return 0;
@@ -522,7 +524,7 @@ static inline struct task_struct * good_
return rtn;
}
-void register_posix_clock(clockid_t clock_id, struct k_clock *new_clock)
+void register_posix_clock(const clockid_t clock_id, struct k_clock *new_clock)
{
if ((unsigned) clock_id >= MAX_CLOCKS) {
printk("POSIX clock register failed for clock_id %d\n",
@@ -568,7 +570,7 @@ static void release_posix_timer(struct k
/* Create a POSIX.1b interval timer. */
asmlinkage long
-sys_timer_create(clockid_t which_clock,
+sys_timer_create(const clockid_t which_clock,
struct sigevent __user *timer_event_spec,
timer_t __user * created_timer_id)
{
@@ -1195,7 +1197,8 @@ static u64 do_posix_clock_monotonic_gett
return jiff;
}
-static int do_posix_clock_monotonic_get(clockid_t clock, struct timespec *tp)
+static int do_posix_clock_monotonic_get(const clockid_t clock,
+ struct timespec *tp)
{
struct timespec wall_to_mono;
@@ -1212,7 +1215,7 @@ int do_posix_clock_monotonic_gettime(str
return do_posix_clock_monotonic_get(CLOCK_MONOTONIC, tp);
}
-int do_posix_clock_nosettime(clockid_t clockid, struct timespec *tp)
+int do_posix_clock_nosettime(const clockid_t clockid, struct timespec *tp)
{
return -EINVAL;
}
@@ -1224,7 +1227,8 @@ int do_posix_clock_notimer_create(struct
}
EXPORT_SYMBOL_GPL(do_posix_clock_notimer_create);
-int do_posix_clock_nonanosleep(clockid_t clock, int flags, struct timespec *t)
+int do_posix_clock_nonanosleep(const clockid_t clock, int flags,
+ struct timespec *t)
{
#ifndef ENOTSUP
return -EOPNOTSUPP; /* aka ENOTSUP in userland for POSIX */
@@ -1234,8 +1238,8 @@ int do_posix_clock_nonanosleep(clockid_t
}
EXPORT_SYMBOL_GPL(do_posix_clock_nonanosleep);
-asmlinkage long
-sys_clock_settime(clockid_t which_clock, const struct timespec __user *tp)
+asmlinkage long sys_clock_settime(const clockid_t which_clock,
+ const struct timespec __user *tp)
{
struct timespec new_tp;
@@ -1248,7 +1252,7 @@ sys_clock_settime(clockid_t which_clock,
}
asmlinkage long
-sys_clock_gettime(clockid_t which_clock, struct timespec __user *tp)
+sys_clock_gettime(const clockid_t which_clock, struct timespec __user *tp)
{
struct timespec kernel_tp;
int error;
@@ -1265,7 +1269,7 @@ sys_clock_gettime(clockid_t which_clock,
}
asmlinkage long
-sys_clock_getres(clockid_t which_clock, struct timespec __user *tp)
+sys_clock_getres(const clockid_t which_clock, struct timespec __user *tp)
{
struct timespec rtn_tp;
int error;
@@ -1387,7 +1391,7 @@ void clock_was_set(void)
long clock_nanosleep_restart(struct restart_block *restart_block);
asmlinkage long
-sys_clock_nanosleep(clockid_t which_clock, int flags,
+sys_clock_nanosleep(const clockid_t which_clock, int flags,
const struct timespec __user *rqtp,
struct timespec __user *rmtp)
{
@@ -1419,7 +1423,7 @@ sys_clock_nanosleep(clockid_t which_cloc
}
-static int common_nsleep(clockid_t which_clock,
+static int common_nsleep(const clockid_t which_clock,
int flags, struct timespec *tsave)
{
struct timespec t, dum;
--
^ permalink raw reply [flat|nested] 47+ messages in thread
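The patch above mechanically adds const to by-value clockid_t parameters. A minimal standalone sketch of what that buys, using a hypothetical stand-in type rather than the kernel's headers (all names here are illustrative, not kernel API):

```c
#include <assert.h>

/* demo_clockid is a hypothetical stand-in for the kernel's clockid_t.
 * A const by-value parameter changes nothing for callers -- the callee
 * receives a copy either way -- but it prevents the callee from
 * modifying that copy and documents that which_clock is a pure
 * selector, which is what the series relies on when constifying the
 * posix-timers call chain. */
typedef int demo_clockid;

static int demo_clock_index(const demo_clockid which_clock)
{
	/* which_clock = 0;  -- would now fail to compile */
	return which_clock & 0x7;	/* mimics CPUCLOCK_WHICH()-style masking */
}
```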
* [patch 10/43] Coding style and white space cleanup posix-timer.h
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
` (8 preceding siblings ...)
2005-12-01 0:03 ` [patch 09/43] Make clock selectors in posix-timers const Thomas Gleixner
@ 2005-12-01 0:03 ` Thomas Gleixner
2005-12-01 0:03 ` [patch 11/43] Create timespec_valid macro Thomas Gleixner
` (32 subsequent siblings)
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:03 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment (posix-timer-h-cleanup.patch)
- style/whitespace/macro cleanups of posix-timers.h
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
include/linux/posix-timers.h | 78 +++++++++++++++++++++++--------------------
1 files changed, 43 insertions(+), 35 deletions(-)
Index: linux-2.6.15-rc2-rework/include/linux/posix-timers.h
===================================================================
--- linux-2.6.15-rc2-rework.orig/include/linux/posix-timers.h
+++ linux-2.6.15-rc2-rework/include/linux/posix-timers.h
@@ -42,7 +42,7 @@ struct k_itimer {
timer_t it_id; /* timer id */
int it_overrun; /* overrun on pending signal */
int it_overrun_last; /* overrun on last delivered signal */
- int it_requeue_pending; /* waiting to requeue this timer */
+ int it_requeue_pending; /* waiting to requeue this timer */
#define REQUEUE_PENDING 1
int it_sigev_notify; /* notify word of sigevent struct */
int it_sigev_signo; /* signo word of sigevent struct */
@@ -52,8 +52,10 @@ struct k_itimer {
union {
struct {
struct timer_list timer;
- struct list_head abs_timer_entry; /* clock abs_timer_list */
- struct timespec wall_to_prev; /* wall_to_monotonic used when set */
+ /* clock abs_timer_list: */
+ struct list_head abs_timer_entry;
+ /* wall_to_monotonic used when set: */
+ struct timespec wall_to_prev;
unsigned long incr; /* interval in jiffies */
} real;
struct cpu_timer_list cpu;
@@ -70,14 +72,16 @@ struct k_clock_abs {
struct list_head list;
spinlock_t lock;
};
+
struct k_clock {
- int res; /* in nano seconds */
+ int res; /* in nanoseconds */
int (*clock_getres) (const clockid_t which_clock, struct timespec *tp);
struct k_clock_abs *abs_struct;
int (*clock_set) (const clockid_t which_clock, struct timespec * tp);
int (*clock_get) (const clockid_t which_clock, struct timespec * tp);
int (*timer_create) (struct k_itimer *timer);
- int (*nsleep) (const clockid_t which_clock, int flags, struct timespec *);
+ int (*nsleep) (const clockid_t which_clock, int flags,
+ struct timespec *);
int (*timer_set) (struct k_itimer * timr, int flags,
struct itimerspec * new_setting,
struct itimerspec * old_setting);
@@ -89,7 +93,7 @@ struct k_clock {
void register_posix_clock(const clockid_t clock_id, struct k_clock *new_clock);
-/* Error handlers for timer_create, nanosleep and settime */
+/* error handlers for timer_create, nanosleep and settime */
int do_posix_clock_notimer_create(struct k_itimer *timer);
int do_posix_clock_nonanosleep(const clockid_t, int flags, struct timespec *);
int do_posix_clock_nosettime(const clockid_t, struct timespec *tp);
@@ -101,39 +105,43 @@ struct now_struct {
unsigned long jiffies;
};
-#define posix_get_now(now) (now)->jiffies = jiffies;
+#define posix_get_now(now) \
+ do { (now)->jiffies = jiffies; } while (0)
+
#define posix_time_before(timer, now) \
time_before((timer)->expires, (now)->jiffies)
#define posix_bump_timer(timr, now) \
- do { \
- long delta, orun; \
- delta = now.jiffies - (timr)->it.real.timer.expires; \
- if (delta >= 0) { \
- orun = 1 + (delta / (timr)->it.real.incr); \
- (timr)->it.real.timer.expires += \
- orun * (timr)->it.real.incr; \
- (timr)->it_overrun += orun; \
- } \
- }while (0)
-
-int posix_cpu_clock_getres(const clockid_t which_clock, struct timespec *);
-int posix_cpu_clock_get(const clockid_t which_clock, struct timespec *);
-int posix_cpu_clock_set(const clockid_t which_clock, const struct timespec *tp);
-int posix_cpu_timer_create(struct k_itimer *);
-int posix_cpu_nsleep(const clockid_t, int, struct timespec *);
-int posix_cpu_timer_set(struct k_itimer *, int,
- struct itimerspec *, struct itimerspec *);
-int posix_cpu_timer_del(struct k_itimer *);
-void posix_cpu_timer_get(struct k_itimer *, struct itimerspec *);
-
-void posix_cpu_timer_schedule(struct k_itimer *);
-
-void run_posix_cpu_timers(struct task_struct *);
-void posix_cpu_timers_exit(struct task_struct *);
-void posix_cpu_timers_exit_group(struct task_struct *);
+ do { \
+ long delta, orun; \
+ \
+ delta = (now).jiffies - (timr)->it.real.timer.expires; \
+ if (delta >= 0) { \
+ orun = 1 + (delta / (timr)->it.real.incr); \
+ (timr)->it.real.timer.expires += \
+ orun * (timr)->it.real.incr; \
+ (timr)->it_overrun += orun; \
+ } \
+ } while (0)
+
+int posix_cpu_clock_getres(const clockid_t which_clock, struct timespec *ts);
+int posix_cpu_clock_get(const clockid_t which_clock, struct timespec *ts);
+int posix_cpu_clock_set(const clockid_t which_clock, const struct timespec *ts);
+int posix_cpu_timer_create(struct k_itimer *timer);
+int posix_cpu_nsleep(const clockid_t which_clock, int flags,
+ struct timespec *ts);
+int posix_cpu_timer_set(struct k_itimer *timer, int flags,
+ struct itimerspec *new, struct itimerspec *old);
+int posix_cpu_timer_del(struct k_itimer *timer);
+void posix_cpu_timer_get(struct k_itimer *timer, struct itimerspec *itp);
+
+void posix_cpu_timer_schedule(struct k_itimer *timer);
+
+void run_posix_cpu_timers(struct task_struct *task);
+void posix_cpu_timers_exit(struct task_struct *task);
+void posix_cpu_timers_exit_group(struct task_struct *task);
-void set_process_cpu_timer(struct task_struct *, unsigned int,
- cputime_t *, cputime_t *);
+void set_process_cpu_timer(struct task_struct *task, unsigned int clock_idx,
+ cputime_t *newval, cputime_t *oldval);
#endif
--
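The reworked posix_bump_timer() macro above advances an expired periodic timer past the current time and accumulates the number of skipped intervals in it_overrun. A standalone sketch of the same arithmetic, with illustrative names rather than the kernel's types:

```c
#include <assert.h>

/* Sketch of the arithmetic inside posix_bump_timer(): given "now" in
 * jiffies, push an expired periodic timer forward by whole intervals
 * until it lies in the future, counting how many intervals (overruns)
 * were skipped. */
struct demo_timer {
	long expires;	/* corresponds to it.real.timer.expires */
	long incr;	/* corresponds to it.real.incr: interval in jiffies */
	int  overrun;	/* corresponds to it_overrun */
};

static void demo_bump_timer(struct demo_timer *t, long now_jiffies)
{
	long delta = now_jiffies - t->expires;

	if (delta >= 0) {
		/* at least one interval elapsed, plus delta/incr more */
		long orun = 1 + delta / t->incr;

		t->expires += orun * t->incr;
		t->overrun += orun;
	}
}
```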
* [patch 11/43] Create timespec_valid macro
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
` (9 preceding siblings ...)
2005-12-01 0:03 ` [patch 10/43] Coding style and white space cleanup posix-timer.h Thomas Gleixner
@ 2005-12-01 0:03 ` Thomas Gleixner
2005-12-01 0:03 ` [patch 12/43] Check user space timespec in do_sys_settimeofday Thomas Gleixner
` (31 subsequent siblings)
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:03 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment (introduce-timespec-valid.patch)
- add timespec_valid(ts) [returns false if the timespec is denormalized]
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
include/linux/time.h | 6 ++++++
kernel/posix-timers.c | 5 ++---
2 files changed, 8 insertions(+), 3 deletions(-)
Index: linux-2.6.15-rc2-rework/include/linux/time.h
===================================================================
--- linux-2.6.15-rc2-rework.orig/include/linux/time.h
+++ linux-2.6.15-rc2-rework/include/linux/time.h
@@ -44,6 +44,12 @@ extern unsigned long mktime(const unsign
extern void set_normalized_timespec(struct timespec *ts, time_t sec, long nsec);
+/*
+ * Returns true if the timespec is normalized, false if denormalized:
+ */
+#define timespec_valid(ts) \
+ (((ts)->tv_sec >= 0) && (((unsigned) (ts)->tv_nsec) < NSEC_PER_SEC))
+
extern struct timespec xtime;
extern struct timespec wall_to_monotonic;
extern seqlock_t xtime_lock;
Index: linux-2.6.15-rc2-rework/kernel/posix-timers.c
===================================================================
--- linux-2.6.15-rc2-rework.orig/kernel/posix-timers.c
+++ linux-2.6.15-rc2-rework/kernel/posix-timers.c
@@ -712,8 +712,7 @@ out:
*/
static int good_timespec(const struct timespec *ts)
{
- if ((!ts) || (ts->tv_sec < 0) ||
- ((unsigned) ts->tv_nsec >= NSEC_PER_SEC))
+ if ((!ts) || !timespec_valid(ts))
return 0;
return 1;
}
@@ -1406,7 +1405,7 @@ sys_clock_nanosleep(const clockid_t whic
if (copy_from_user(&t, rqtp, sizeof (struct timespec)))
return -EFAULT;
- if ((unsigned) t.tv_nsec >= NSEC_PER_SEC || t.tv_sec < 0)
+ if (!timespec_valid(&t))
return -EINVAL;
/*
--
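The timespec_valid() check introduced above treats a timespec as normalized when tv_sec is non-negative and tv_nsec falls in [0, NSEC_PER_SEC); the unsigned cast makes a negative tv_nsec fail the upper-bound test. A userspace sketch of the same predicate (demo names, not the kernel macro):

```c
#include <assert.h>
#include <time.h>

/* Sketch of the timespec_valid() predicate: the cast to unsigned long
 * folds the "tv_nsec < 0" and "tv_nsec >= NSEC_PER_SEC" checks into a
 * single comparison, exactly as the kernel macro does. */
#define DEMO_NSEC_PER_SEC 1000000000L

static int demo_timespec_valid(const struct timespec *ts)
{
	return ts->tv_sec >= 0 &&
	       (unsigned long)ts->tv_nsec < DEMO_NSEC_PER_SEC;
}
```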
* [patch 12/43] Check user space timespec in do_sys_settimeofday
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
` (10 preceding siblings ...)
2005-12-01 0:03 ` [patch 11/43] Create timespec_valid macro Thomas Gleixner
@ 2005-12-01 0:03 ` Thomas Gleixner
2005-12-01 0:03 ` [patch 13/43] Introduce nsec_t type and conversion functions Thomas Gleixner
` (30 subsequent siblings)
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:03 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment (sys-settimeofday-check-timespec.patch)
- Check if the timespec which is provided from user space is
normalized.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
kernel/time.c | 3 +++
1 files changed, 3 insertions(+)
Index: linux-2.6.15-rc2-rework/kernel/time.c
===================================================================
--- linux-2.6.15-rc2-rework.orig/kernel/time.c
+++ linux-2.6.15-rc2-rework/kernel/time.c
@@ -154,6 +154,9 @@ int do_sys_settimeofday(struct timespec
static int firsttime = 1;
int error = 0;
+ if (!timespec_valid(tv))
+ return -EINVAL;
+
error = security_settime(tv, tz);
if (error)
return error;
--
* [patch 13/43] Introduce nsec_t type and conversion functions
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
` (11 preceding siblings ...)
2005-12-01 0:03 ` [patch 12/43] Check user space timespec in do_sys_settimeofday Thomas Gleixner
@ 2005-12-01 0:03 ` Thomas Gleixner
2005-12-01 0:03 ` [patch 14/43] Introduce ktime_t time format Thomas Gleixner
` (29 subsequent siblings)
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:03 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment (nsec-t.patch)
- introduce the nsec_t type
- basic nsec conversion routines: timespec_to_ns(), timeval_to_ns(),
ns_to_timespec(), ns_to_timeval().
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
include/linux/time.h | 47 +++++++++++++++++++++++++++++++++++++++++++++++
kernel/time.c | 36 ++++++++++++++++++++++++++++++++++++
2 files changed, 83 insertions(+)
Index: linux-2.6.15-rc2-rework/include/linux/time.h
===================================================================
--- linux-2.6.15-rc2-rework.orig/include/linux/time.h
+++ linux-2.6.15-rc2-rework/include/linux/time.h
@@ -50,6 +50,12 @@ extern void set_normalized_timespec(stru
#define timespec_valid(ts) \
(((ts)->tv_sec >= 0) && (((unsigned) (ts)->tv_nsec) < NSEC_PER_SEC))
+/*
+ * 64-bit nanosec type. Large enough to span 292+ years in nanosecond
+ * resolution. Ought to be enough for a while.
+ */
+typedef s64 nsec_t;
+
extern struct timespec xtime;
extern struct timespec wall_to_monotonic;
extern seqlock_t xtime_lock;
@@ -78,6 +84,47 @@ extern void getnstimeofday(struct timesp
extern struct timespec timespec_trunc(struct timespec t, unsigned gran);
+/**
+ * timespec_to_ns - Convert timespec to nanoseconds
+ * @ts: pointer to the timespec variable to be converted
+ *
+ * Returns the scalar nanosecond representation of the timespec
+ * parameter.
+ */
+static inline nsec_t timespec_to_ns(const struct timespec *ts)
+{
+ return ((nsec_t) ts->tv_sec * NSEC_PER_SEC) + ts->tv_nsec;
+}
+
+/**
+ * timeval_to_ns - Convert timeval to nanoseconds
+ * @tv: pointer to the timeval variable to be converted
+ *
+ * Returns the scalar nanosecond representation of the timeval
+ * parameter.
+ */
+static inline nsec_t timeval_to_ns(const struct timeval *tv)
+{
+ return ((nsec_t) tv->tv_sec * NSEC_PER_SEC) +
+ tv->tv_usec * NSEC_PER_USEC;
+}
+
+/**
+ * ns_to_timespec - Convert nanoseconds to timespec
+ * @nsec: the nanoseconds value to be converted
+ *
+ * Returns the timespec representation of the nsec parameter.
+ */
+extern struct timespec ns_to_timespec(const nsec_t nsec);
+
+/**
+ * ns_to_timeval - Convert nanoseconds to timeval
+ * @nsec: the nanoseconds value to be converted
+ *
+ * Returns the timeval representation of the nsec parameter.
+ */
+extern struct timeval ns_to_timeval(const nsec_t nsec);
+
#endif /* __KERNEL__ */
#define NFDBITS __NFDBITS
Index: linux-2.6.15-rc2-rework/kernel/time.c
===================================================================
--- linux-2.6.15-rc2-rework.orig/kernel/time.c
+++ linux-2.6.15-rc2-rework/kernel/time.c
@@ -630,6 +630,42 @@ void set_normalized_timespec(struct time
ts->tv_nsec = nsec;
}
+/**
+ * ns_to_timespec - Convert nanoseconds to timespec
+ * @nsec: the nanoseconds value to be converted
+ *
+ * Returns the timespec representation of the nsec parameter.
+ */
+inline struct timespec ns_to_timespec(const nsec_t nsec)
+{
+ struct timespec ts;
+
+ if (nsec)
+ ts.tv_sec = div_long_long_rem_signed(nsec, NSEC_PER_SEC,
+ &ts.tv_nsec);
+ else
+ ts.tv_sec = ts.tv_nsec = 0;
+
+ return ts;
+}
+
+/**
+ * ns_to_timeval - Convert nanoseconds to timeval
+ * @nsec: the nanoseconds value to be converted
+ *
+ * Returns the timeval representation of the nsec parameter.
+ */
+struct timeval ns_to_timeval(const nsec_t nsec)
+{
+ struct timespec ts = ns_to_timespec(nsec);
+ struct timeval tv;
+
+ tv.tv_sec = ts.tv_sec;
+ tv.tv_usec = (suseconds_t) ts.tv_nsec / 1000;
+
+ return tv;
+}
+
#if (BITS_PER_LONG < 64)
u64 get_jiffies_64(void)
{
--
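Patch 13 above introduces the scalar nanosecond conversions. A hedged userspace sketch of the forward conversion and its reverse, with int64_t standing in for nsec_t and demo names throughout; the kernel's ns_to_timespec() uses div_long_long_rem_signed() and special-cases zero, whereas this sketch uses plain division and is only claimed for non-negative inputs:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of timespec_to_ns()/ns_to_timespec() for non-negative values:
 * quotient of the division becomes the seconds, the remainder the
 * nanoseconds -- mirroring the div_long_long_rem_signed() split. */
#define DEMO_NSEC_PER_SEC 1000000000LL

struct demo_timespec {
	int64_t tv_sec;
	long tv_nsec;
};

static int64_t demo_timespec_to_ns(const struct demo_timespec *ts)
{
	return ts->tv_sec * DEMO_NSEC_PER_SEC + ts->tv_nsec;
}

static struct demo_timespec demo_ns_to_timespec(int64_t nsec)
{
	struct demo_timespec ts;

	ts.tv_sec = nsec / DEMO_NSEC_PER_SEC;
	ts.tv_nsec = (long)(nsec % DEMO_NSEC_PER_SEC);
	return ts;
}
```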
* [patch 14/43] Introduce ktime_t time format
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
` (12 preceding siblings ...)
2005-12-01 0:03 ` [patch 13/43] Introduce nsec_t type and conversion functions Thomas Gleixner
@ 2005-12-01 0:03 ` Thomas Gleixner
2005-12-01 0:03 ` [patch 15/43] ktimer core code Thomas Gleixner
` (28 subsequent siblings)
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:03 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment (ktime-t.patch)
- introduce ktime_t: nanosecond-resolution time format.
- eliminate the plain s64 scalar type, and always use the union.
This simplifies the arithmetic. Idea from Roman Zippel.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
include/linux/ktime.h | 310 ++++++++++++++++++++++++++++++++++++++++++++++++++
1 files changed, 310 insertions(+)
Index: linux-2.6.15-rc2-rework/include/linux/ktime.h
===================================================================
--- /dev/null
+++ linux-2.6.15-rc2-rework/include/linux/ktime.h
@@ -0,0 +1,310 @@
+/*
+ * include/linux/ktime.h
+ *
+ * ktime_t - nanosecond-resolution time format.
+ *
+ * Copyright(C) 2005, Thomas Gleixner <tglx@linutronix.de>
+ * Copyright(C) 2005, Red Hat, Inc., Ingo Molnar
+ *
+ * data type definitions, declarations, prototypes and macros.
+ *
+ * Started by: Thomas Gleixner and Ingo Molnar
+ *
+ * For licencing details see kernel-base/COPYING
+ */
+#ifndef _LINUX_KTIME_H
+#define _LINUX_KTIME_H
+
+#include <linux/time.h>
+#include <linux/jiffies.h>
+
+/*
+ * ktime_t:
+ *
+ * On 64-bit CPUs a single 64-bit variable is used to store the ktimers
+ * internal representation of time values in scalar nanoseconds. The
+ * design plays out best on 64-bit CPUs, where most conversions are
+ * NOPs and most arithmetic ktime_t operations are plain arithmetic
+ * operations.
+ *
+ * On 32-bit CPUs an optimized representation of the timespec structure
+ * is used to avoid expensive conversions from and to timespecs. The
+ * endian-aware order of the tv struct members is chosen to allow
+ * mathematical operations on the tv64 member of the union too, which
+ * for certain operations produces better code.
+ *
+ * For architectures with efficient support for 64/32-bit conversions the
+ * plain scalar nanosecond based representation can be selected by the
+ * config switch CONFIG_KTIME_SCALAR.
+ */
+typedef union {
+ s64 tv64;
+#if BITS_PER_LONG != 64 && !defined(CONFIG_KTIME_SCALAR)
+ struct {
+# ifdef __BIG_ENDIAN
+ s32 sec, nsec;
+# else
+ s32 nsec, sec;
+# endif
+ } tv;
+#endif
+} ktime_t;
+
+#define KTIME_MAX (~((u64)1 << 63))
+
+/*
+ * ktime_t definitions when using the 64-bit scalar representation:
+ */
+
+#if (BITS_PER_LONG == 64) || defined(CONFIG_KTIME_SCALAR)
+
+/* Define a ktime_t variable and initialize it to zero: */
+#define DEFINE_KTIME(kt) ktime_t kt = { .tv64 = 0 }
+
+/**
+ * ktime_set - Set a ktime_t variable from a seconds/nanoseconds value
+ *
+ * @secs: seconds to set
+ * @nsecs: nanoseconds to set
+ *
+ * Return the ktime_t representation of the value
+ */
+static inline ktime_t ktime_set(const long secs, const unsigned long nsecs)
+{
+ return (ktime_t) { .tv64 = (s64)secs * NSEC_PER_SEC + (s64)nsecs };
+}
+
+/*
+ * The following 3 macros are used for the nanosleep restart handling
+ * to store the "low" and "high" part of a 64-bit ktime variable.
+ * (on 32-bit CPUs the restart block has 32-bit fields, so we have to
+ * split the 64-bit value up into two pieces)
+ *
+ * In the scalar representation we have to split up the 64-bit scalar:
+ */
+
+/* Set the "low" and "high" part of a ktime_t variable: */
+static inline ktime_t
+ktime_set_low_high(const unsigned long low, const unsigned long high)
+{
+ return (ktime_t) { .tv64 = (s64)low | ((s64)high << 32) };
+}
+
+/* Get the "low" part of a ktime_t variable: */
+#define ktime_get_low(kt) ((kt).tv64 & 0xFFFFFFFF)
+
+/* Get the "high" part of a ktime_t variable: */
+#define ktime_get_high(kt) ((kt).tv64 >> 32)
+
+/* Subtract two ktime_t variables. res = lhs - rhs: */
+#define ktime_sub(lhs, rhs) \
+ ({ (ktime_t){ .tv64 = (lhs).tv64 - (rhs).tv64 }; })
+
+/* Add two ktime_t variables. res = lhs + rhs: */
+#define ktime_add(lhs, rhs) \
+ ({ (ktime_t){ .tv64 = (lhs).tv64 + (rhs).tv64 }; })
+
+/*
+ * Add a ktime_t variable and a scalar nanosecond value.
+ * res = kt + nsval:
+ */
+#define ktime_add_ns(kt, nsval) \
+ ({ (ktime_t){ .tv64 = (kt).tv64 + (nsval) }; })
+
+/* convert a timespec to ktime_t format: */
+#define timespec_to_ktime(ts) ktime_set((ts).tv_sec, (ts).tv_nsec)
+
+/* convert a timeval to ktime_t format: */
+#define timeval_to_ktime(tv) ktime_set((tv).tv_sec, (tv).tv_usec * 1000)
+
+/* Map the ktime_t to timespec conversion to ns_to_timespec function */
+#define ktime_to_timespec(kt) ns_to_timespec((kt).tv64)
+
+/* Map the ktime_t to timeval conversion to ns_to_timeval function */
+#define ktime_to_timeval(kt) ns_to_timeval((kt).tv64)
+
+/* Map the ktime_t to clock_t conversion to the inline in jiffies.h: */
+#define ktime_to_clock_t(kt) nsec_to_clock_t((kt).tv64)
+
+/* Convert ktime_t to nanoseconds - NOP in the scalar storage format: */
+#define ktime_to_ns(kt) ((kt).tv64)
+
+#else
+
+/*
+ * Helper macros/inlines to get the ktime_t math right in the timespec
+ * representation. The macros are sometimes ugly - their actual use is
+ * pretty okay-ish, given the circumstances. We do all this for
+ * performance reasons. The pure scalar nsec_t based code was nice and
+ * simple, but created too many 64-bit / 32-bit conversions and divisions.
+ *
+ * Be especially aware that negative values are represented in a way
+ * that the tv.sec field is negative and the tv.nsec field is greater
+ * or equal to zero but less than nanoseconds per second. This is the
+ * same representation which is used by timespecs.
+ *
+ * tv.sec < 0 and 0 <= tv.nsec < NSEC_PER_SEC
+ */
+
+/* Define a ktime_t variable and initialize it to zero: */
+#define DEFINE_KTIME(kt) ktime_t kt = { .tv64 = 0 }
+
+/* Set a ktime_t variable to a value in sec/nsec representation: */
+static inline ktime_t ktime_set(const long secs, const unsigned long nsecs)
+{
+ return (ktime_t) { .tv = { .sec = secs, .nsec = nsecs } };
+}
+
+/*
+ * The following 3 macros are used for the nanosleep restart handling
+ * to store the "low" and "high" part of a 64-bit ktime variable.
+ * (on 32-bit CPUs the restart block has 32-bit fields, so we have to
+ * split the 64-bit value up into two pieces)
+ *
+ * In the union type representation this is just storing and restoring
+ * the sec and nsec members of the tv structure:
+ */
+
+/* Set the "low" and "high" part of a ktime_t variable: */
+#define ktime_set_low_high(l, h) ktime_set(h, l)
+
+/* Get the "low" part of a ktime_t variable: */
+#define ktime_get_low(kt) (kt).tv.nsec
+
+/* Get the "high" part of a ktime_t variable: */
+#define ktime_get_high(kt) (kt).tv.sec
+
+/**
+ * ktime_sub - subtract two ktime_t variables
+ *
+ * @lhs: minuend
+ * @rhs: subtrahend
+ *
+ * Returns the result of the subtraction
+ */
+static inline ktime_t ktime_sub(const ktime_t lhs, const ktime_t rhs)
+{
+ ktime_t res;
+
+ res.tv64 = lhs.tv64 - rhs.tv64;
+ if (res.tv.nsec < 0)
+ res.tv.nsec += NSEC_PER_SEC;
+
+ return res;
+}
+
+/**
+ * ktime_add - add two ktime_t variables
+ *
+ * @add1: addend1
+ * @add2: addend2
+ *
+ * Returns the sum of addend1 and addend2
+ */
+static inline ktime_t ktime_add(const ktime_t add1, const ktime_t add2)
+{
+ ktime_t res;
+
+ res.tv64 = add1.tv64 + add2.tv64;
+ /*
+ * performance trick: (u32)-NSEC_PER_SEC zero-extends to
+ * 0x00000000C4653600, so adding it subtracts NSEC_PER_SEC from
+ * the lower 32 bits and carries 1 into the upper 32 bits.
+ *
+ * it's equivalent to:
+ * tv.nsec -= NSEC_PER_SEC;
+ * tv.sec++;
+ */
+ if (res.tv.nsec >= NSEC_PER_SEC)
+ res.tv64 += (u32)-NSEC_PER_SEC;
+
+ return res;
+}
+
+/**
+ * ktime_add_ns - Add a scalar nanoseconds value to a ktime_t variable
+ *
+ * @kt: addend
+ * @nsec: the scalar nsec value to add
+ *
+ * Returns the sum of kt and nsec in ktime_t format
+ */
+extern ktime_t ktime_add_ns(const ktime_t kt, u64 nsec);
+
+/**
+ * timespec_to_ktime - convert a timespec to ktime_t format
+ *
+ * @ts: the timespec variable to convert
+ *
+ * Returns a ktime_t variable with the converted timespec value
+ */
+static inline ktime_t timespec_to_ktime(const struct timespec ts)
+{
+ return (ktime_t) { .tv = { .sec = (s32)ts.tv_sec,
+ .nsec = (s32)ts.tv_nsec } };
+}
+
+/**
+ * timeval_to_ktime - convert a timeval to ktime_t format
+ *
+ * @tv: the timeval variable to convert
+ *
+ * Returns a ktime_t variable with the converted timeval value
+ */
+static inline ktime_t timeval_to_ktime(const struct timeval tv)
+{
+ return (ktime_t) { .tv = { .sec = (s32)tv.tv_sec,
+ .nsec = (s32)tv.tv_usec * 1000 } };
+}
+
+/**
+ * ktime_to_timespec - convert a ktime_t variable to timespec format
+ *
+ * @kt: the ktime_t variable to convert
+ *
+ * Returns the timespec representation of the ktime value
+ */
+static inline struct timespec ktime_to_timespec(const ktime_t kt)
+{
+ return (struct timespec) { .tv_sec = (time_t) kt.tv.sec,
+ .tv_nsec = (long) kt.tv.nsec };
+}
+
+/**
+ * ktime_to_timeval - convert a ktime_t variable to timeval format
+ *
+ * @kt: the ktime_t variable to convert
+ *
+ * Returns the timeval representation of the ktime value
+ */
+static inline struct timeval ktime_to_timeval(const ktime_t kt)
+{
+ return (struct timeval) {
+ .tv_sec = (time_t) kt.tv.sec,
+ .tv_usec = (suseconds_t) (kt.tv.nsec / NSEC_PER_USEC) };
+}
+
+/**
+ * ktime_to_clock_t - convert a ktime_t variable to clock_t format
+ * @kt: the ktime_t variable to convert
+ *
+ * Returns a clock_t variable with the converted value
+ */
+static inline clock_t ktime_to_clock_t(const ktime_t kt)
+{
+ return nsec_to_clock_t( (u64) kt.tv.sec * NSEC_PER_SEC + kt.tv.nsec);
+}
+
+/**
+ * ktime_to_ns - convert a ktime_t variable to scalar nanoseconds
+ * @kt: the ktime_t variable to convert
+ *
+ * Returns the scalar nanoseconds representation of kt
+ */
+static inline u64 ktime_to_ns(const ktime_t kt)
+{
+ return (u64) kt.tv.sec * NSEC_PER_SEC + kt.tv.nsec;
+}
+
+#endif
+
+#endif
--
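The 32-bit representation above stores ktime_t as a sec/nsec pair overlaid on a 64-bit scalar, and ktime_add() normalizes an nsec overflow with a single 64-bit addition. A standalone sketch of that carry trick, assuming a little-endian host (as the union's member order does for that case); demo names, not the kernel's:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the 32-bit ktime_t union (little-endian member order) and
 * the ktime_add() carry trick: when the nsec field reaches
 * NSEC_PER_SEC, adding (uint32_t)-NSEC_PER_SEC (0xC4653600) to the
 * 64-bit view subtracts NSEC_PER_SEC from the low word and, via the
 * unsigned wrap, carries +1 into the high (seconds) word. */
#define DEMO_NSEC_PER_SEC 1000000000

typedef union {
	int64_t tv64;
	struct {
		int32_t nsec, sec;	/* little-endian layout assumed */
	} tv;
} demo_ktime_t;

static demo_ktime_t demo_ktime_add(demo_ktime_t a, demo_ktime_t b)
{
	demo_ktime_t res;

	res.tv64 = a.tv64 + b.tv64;
	if (res.tv.nsec >= DEMO_NSEC_PER_SEC)
		res.tv64 += (uint32_t)-DEMO_NSEC_PER_SEC;
	return res;
}
```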
* [patch 15/43] ktimer core code
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
` (13 preceding siblings ...)
2005-12-01 0:03 ` [patch 14/43] Introduce ktime_t time format Thomas Gleixner
@ 2005-12-01 0:03 ` Thomas Gleixner
2005-12-01 0:03 ` [patch 16/43] ktimer documentation Thomas Gleixner
` (27 subsequent siblings)
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:03 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment (ktimer-core.patch)
- ktimer subsystem core. It is initialized at bootup and expired by the
timer interrupt, but is otherwise not utilized by any other subsystem yet.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
include/linux/ktime.h | 21 +
include/linux/ktimer.h | 170 +++++++++
init/main.c | 3
kernel/Makefile | 3
kernel/ktimer.c | 905 +++++++++++++++++++++++++++++++++++++++++++++++++
kernel/timer.c | 2
6 files changed, 1103 insertions(+), 1 deletion(-)
Index: linux-2.6.15-rc2-rework/include/linux/ktime.h
===================================================================
--- linux-2.6.15-rc2-rework.orig/include/linux/ktime.h
+++ linux-2.6.15-rc2-rework/include/linux/ktime.h
@@ -307,4 +307,25 @@ static inline u64 ktime_to_ns(const ktim
#endif
+/*
+ * The resolution of the clocks. The resolution value is returned in
+ * the clock_getres() system call to give application programmers an
+ * idea of the (in)accuracy of timers. Timer values are rounded up to
+ * this resolution value.
+ */
+#define KTIME_REALTIME_RES (NSEC_PER_SEC/HZ)
+#define KTIME_MONOTONIC_RES (NSEC_PER_SEC/HZ)
+
+/* Get the monotonic time in ktime_t format: */
+extern ktime_t ktime_get(void);
+
+/* Get the real (wall-) time in ktime_t format: */
+extern ktime_t ktime_get_real(void);
+
+/* Get the monotonic time in timespec format: */
+extern void ktime_get_ts(struct timespec *ts);
+
+/* Get the real (wall-) time in timespec format: */
+#define ktime_get_real_ts(ts) getnstimeofday(ts)
+
#endif
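The KTIME_*_RES macros above tie the clock resolution to the tick rate. A tiny standalone check of that arithmetic (plain C; the HZ values below are illustrative, not taken from any particular config):

```c
#include <assert.h>

#define NSEC_PER_SEC 1000000000L

/* clock resolution in nanoseconds for a given tick rate, as in the
 * KTIME_REALTIME_RES / KTIME_MONOTONIC_RES definitions */
static long res_ns(long hz)
{
	return NSEC_PER_SEC / hz;
}
```

So a HZ=250 kernel reports a 4 ms resolution via clock_getres(), and HZ=1000 reports 1 ms.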
Index: linux-2.6.15-rc2-rework/include/linux/ktimer.h
===================================================================
--- /dev/null
+++ linux-2.6.15-rc2-rework/include/linux/ktimer.h
@@ -0,0 +1,170 @@
+/*
+ * include/linux/ktimer.h
+ *
+ * ktimers - high-precision kernel timers
+ *
+ * Copyright(C) 2005, Thomas Gleixner <tglx@linutronix.de>
+ * Copyright(C) 2005, Red Hat, Inc., Ingo Molnar
+ *
+ * data type definitions, declarations, prototypes
+ *
+ * Started by: Thomas Gleixner and Ingo Molnar
+ *
+ * For licencing details see kernel-base/COPYING
+ */
+#ifndef _LINUX_KTIMER_H
+#define _LINUX_KTIMER_H
+
+#include <linux/rbtree.h>
+#include <linux/ktime.h>
+#include <linux/init.h>
+#include <linux/list.h>
+#include <linux/wait.h>
+
+/*
+ * Mode arguments of xxx_ktimer functions:
+ */
+enum ktimer_rearm {
+ KTIMER_ABS = 1, /* Time value is absolute */
+ KTIMER_REL, /* Time value is relative to now */
+ KTIMER_INCR, /* Time value is relative to previous expiry time */
+ KTIMER_FORWARD, /* Timer is rearmed with value. Overruns accounted */
+ KTIMER_REARM, /* Timer is rearmed with interval. Overruns accounted */
+ KTIMER_RESTART, /* Timer is restarted with the stored expiry value */
+
+ /*
+ * Expiry must not be checked when the timer is started:
+ * (can be OR-ed with another above mode flag)
+ */
+ KTIMER_NOCHECK = 0x10000,
+ /*
+	 * Rounding is required when the time is set up. That's an
+	 * optimization for relative timers, as we read the current time
+	 * in the enqueueing code so we do not need to read it twice.
+ */
+ KTIMER_ROUND = 0x20000,
+
+ /* (used internally: no rearming) */
+ KTIMER_NOREARM = 0
+};
+
+/*
+ * Timer states:
+ */
+enum ktimer_state {
+ KTIMER_INACTIVE, /* Timer is inactive */
+ KTIMER_PENDING, /* Timer is pending */
+};
+
+struct ktimer_base;
+
+/**
+ * struct ktimer - the basic ktimer structure
+ *
+ * @node: red black tree node for time ordered insertion
+ * @list: list head for easier access to the time ordered list,
+ * without walking the red black tree.
+ * @expires: the absolute expiry time in the ktimers internal
+ * representation. The time is related to the clock on
+ * which the timer is based.
+ * @expired: the absolute time when the timer expired. Used for
+ * simplifying return path calculations and for debugging
+ * purposes.
+ * @interval: the timer interval for automatic rearming
+ * @overrun: the number of intervals missed when rearming a timer
+ * @state: state of the timer
+ * @function: timer expiry callback function
+ * @data: argument for the callback function
+ * @base: pointer to the timer base (per cpu and per clock)
+ *
+ * The ktimer structure must be initialized by init_ktimer_#CLOCKTYPE()
+ */
+struct ktimer {
+ struct rb_node node;
+ struct list_head list;
+ ktime_t expires;
+ ktime_t expired;
+ ktime_t interval;
+ int overrun;
+ enum ktimer_state state;
+ void (*function)(void *);
+ void *data;
+ struct ktimer_base *base;
+};
+
+/**
+ * struct ktimer_base - the timer base for a specific clock
+ *
+ * @index: clock type index for per_cpu support when moving a timer
+ * to a base on another cpu.
+ * @lock: lock protecting the base and associated timers
+ * @active: red black tree root node for the active timers
+ * @pending: list of pending timers for simple time ordered access
+ * @count: the number of active timers
+ * @resolution: the resolution of the clock, in nanoseconds
+ * @get_time: function to retrieve the current time of the clock
+ * @curr_timer: the timer which is executing a callback right now
+ * @wait: waitqueue to wait for a currently running timer
+ * @name: string identifier of the clock
+ */
+struct ktimer_base {
+ clockid_t index;
+ spinlock_t lock;
+ struct rb_root active;
+ struct list_head pending;
+ int count;
+ unsigned long resolution;
+ ktime_t (*get_time)(void);
+ struct ktimer *curr_timer;
+ wait_queue_head_t wait;
+ char *name;
+};
+
+#define KTIMER_POISON ((void *) 0x00100101)
+
+/* Exported timer functions: */
+
+/* Initialize timers: */
+extern void ktimer_init(struct ktimer *timer);
+extern void ktimer_init_clock(struct ktimer *timer,
+ const clockid_t which_clock);
+
+/* Basic timer operations: */
+extern int ktimer_start(struct ktimer *timer, const ktime_t *tim,
+ const int mode);
+extern int ktimer_restart(struct ktimer *timer, const ktime_t *tim,
+ const int mode);
+extern int ktimer_cancel(struct ktimer *timer);
+extern int ktimer_try_to_cancel(struct ktimer *timer);
+
+/* Query timers: */
+extern ktime_t ktimer_get_remtime(const struct ktimer *timer);
+extern ktime_t ktimer_get_expiry(const struct ktimer *timer, ktime_t *now);
+extern int ktimer_get_res(const clockid_t which_clock, struct timespec *tp);
+extern int ktimer_get_res_clock(const clockid_t which_clock,
+ struct timespec *tp);
+
+static inline int ktimer_active(const struct ktimer *timer)
+{
+ return timer->state != KTIMER_INACTIVE;
+}
+
+/* Convert with rounding based on resolution of timer's clock: */
+extern ktime_t ktimer_round_timeval(const struct ktimer *timer,
+ const struct timeval *tv);
+extern ktime_t ktimer_round_timespec(const struct ktimer *timer,
+ const struct timespec *ts);
+
+#ifdef CONFIG_SMP
+extern void wait_for_ktimer(const struct ktimer *timer);
+#else
+# define wait_for_ktimer(t) do { } while (0)
+#endif
+
+/* Soft interrupt function to run the ktimer queues: */
+extern void ktimer_run_queues(void);
+
+/* Bootup initialization: */
+extern void __init ktimers_init(void);
+
+#endif
Index: linux-2.6.15-rc2-rework/init/main.c
===================================================================
--- linux-2.6.15-rc2-rework.orig/init/main.c
+++ linux-2.6.15-rc2-rework/init/main.c
@@ -47,6 +47,8 @@
#include <linux/rmap.h>
#include <linux/mempolicy.h>
#include <linux/key.h>
+#include <linux/ktimer.h>
+
#include <net/sock.h>
#include <asm/io.h>
@@ -487,6 +489,7 @@ asmlinkage void __init start_kernel(void
init_IRQ();
pidhash_init();
init_timers();
+ ktimers_init();
softirq_init();
time_init();
Index: linux-2.6.15-rc2-rework/kernel/Makefile
===================================================================
--- linux-2.6.15-rc2-rework.orig/kernel/Makefile
+++ linux-2.6.15-rc2-rework/kernel/Makefile
@@ -7,7 +7,8 @@ obj-y = sched.o fork.o exec_domain.o
sysctl.o capability.o ptrace.o timer.o user.o \
signal.o sys.o kmod.o workqueue.o pid.o \
rcupdate.o intermodule.o extable.o params.o posix-timers.o \
- kthread.o wait.o kfifo.o sys_ni.o posix-cpu-timers.o
+ kthread.o wait.o kfifo.o sys_ni.o posix-cpu-timers.o \
+ ktimer.o
obj-$(CONFIG_FUTEX) += futex.o
obj-$(CONFIG_GENERIC_ISA_DMA) += dma.o
Index: linux-2.6.15-rc2-rework/kernel/ktimer.c
===================================================================
--- /dev/null
+++ linux-2.6.15-rc2-rework/kernel/ktimer.c
@@ -0,0 +1,905 @@
+/*
+ * linux/kernel/ktimer.c
+ *
+ * Copyright(C) 2005, Thomas Gleixner <tglx@linutronix.de>
+ * Copyright(C) 2005, Red Hat, Inc., Ingo Molnar
+ *
+ * High-precision kernel timers
+ *
+ * In contrast to the low-resolution timeout API implemented in
+ * kernel/timer.c, ktimers provide finer resolution and accuracy
+ * depending on system configuration and capabilities.
+ *
+ * These timers are currently used for:
+ * - itimers
+ * - POSIX timers
+ * - nanosleep
+ * - precise in-kernel timing
+ *
+ * Started by: Thomas Gleixner and Ingo Molnar
+ *
+ * Credits:
+ * based on kernel/timer.c
+ *
+ * For licencing details see kernel-base/COPYING
+ */
+
+#include <linux/cpu.h>
+#include <linux/ktimer.h>
+#include <linux/module.h>
+#include <linux/percpu.h>
+#include <linux/notifier.h>
+#include <linux/syscalls.h>
+#include <linux/interrupt.h>
+
+#include <asm/uaccess.h>
+
+/*
+ * The timer bases:
+ */
+
+#define MAX_KTIMER_BASES 2
+
+static DEFINE_PER_CPU(struct ktimer_base, ktimer_bases[MAX_KTIMER_BASES]) =
+{
+ {
+ .index = CLOCK_REALTIME,
+ .name = "Realtime",
+ .get_time = &ktime_get_real,
+ .resolution = KTIME_REALTIME_RES,
+ },
+ {
+ .index = CLOCK_MONOTONIC,
+ .name = "Monotonic",
+ .get_time = &ktime_get,
+ .resolution = KTIME_MONOTONIC_RES,
+ },
+};
+
+/**
+ * ktime_get - get the monotonic time in ktime_t format
+ *
+ * returns the time in ktime_t format
+ */
+ktime_t ktime_get(void)
+{
+ struct timespec now;
+
+ ktime_get_ts(&now);
+
+ return timespec_to_ktime(now);
+}
+
+EXPORT_SYMBOL_GPL(ktime_get);
+
+/**
+ * ktime_get_real - get the real (wall-) time in ktime_t format
+ *
+ * returns the time in ktime_t format
+ */
+ktime_t ktime_get_real(void)
+{
+ struct timespec now;
+
+ getnstimeofday(&now);
+
+ return timespec_to_ktime(now);
+}
+
+EXPORT_SYMBOL_GPL(ktime_get_real);
+
+/**
+ * ktime_get_ts - get the monotonic clock in timespec format
+ *
+ * @ts: pointer to timespec variable
+ *
+ * The function calculates the monotonic clock from the realtime
+ * clock and the wall_to_monotonic offset and stores the result
+ * in normalized timespec format in the variable pointed to by ts.
+ */
+void ktime_get_ts(struct timespec *ts)
+{
+ struct timespec tomono;
+ unsigned long seq;
+
+ do {
+ seq = read_seqbegin(&xtime_lock);
+ getnstimeofday(ts);
+ tomono = wall_to_monotonic;
+
+ } while (read_seqretry(&xtime_lock, seq));
+
+ set_normalized_timespec(ts, ts->tv_sec + tomono.tv_sec,
+ ts->tv_nsec + tomono.tv_nsec);
+}
+
+/*
+ * Functions and macros which are different for UP/SMP systems are kept in a
+ * single place
+ */
+#ifdef CONFIG_SMP
+
+#define set_curr_timer(b, t) (b)->curr_timer = (t)
+#define wake_up_timer_waiters(b) wake_up(&(b)->wait)
+
+/**
+ * wait_for_ktimer - Wait for a running ktimer
+ *
+ * @timer: timer to wait for
+ *
+ * If the timer's callback function is currently executing, the
+ * function waits on the waitqueue of the timer base. The waitqueue
+ * is woken up after the timer callback function has finished
+ * execution.
+ */
+void wait_for_ktimer(const struct ktimer *timer)
+{
+ struct ktimer_base *base = timer->base;
+
+ if (base)
+ wait_event(base->wait,
+ base->curr_timer != timer);
+}
+
+/*
+ * We are using hashed locking: holding per_cpu(ktimer_bases)[n].lock
+ * means that all timers which are tied to this base via timer->base are
+ * locked, and the base itself is locked too.
+ *
+ * So __run_timers/migrate_timers can safely modify all timers which could
+ * be found on the lists/queues.
+ *
+ * When the timer's base is locked, and the timer removed from list, it is
+ * possible to set timer->base = NULL and drop the lock: the timer remains
+ * locked.
+ */
+static struct ktimer_base *lock_ktimer_base(const struct ktimer *timer,
+ unsigned long *flags)
+{
+ struct ktimer_base *base;
+
+ for (;;) {
+ base = timer->base;
+ if (likely(base != NULL)) {
+ spin_lock_irqsave(&base->lock, *flags);
+ if (likely(base == timer->base))
+ return base;
+ /* The timer has migrated to another CPU */
+ spin_unlock_irqrestore(&base->lock, *flags);
+ }
+ cpu_relax();
+ }
+}
+
+/*
+ * Switch the timer base to the current CPU when possible.
+ */
+static inline struct ktimer_base *
+switch_ktimer_base(struct ktimer *timer, struct ktimer_base *base)
+{
+ struct ktimer_base *new_base;
+
+ new_base = &__get_cpu_var(ktimer_bases[base->index]);
+
+ if (base != new_base) {
+ /*
+ * We are trying to schedule the timer on the local CPU.
+ * However we can't change timer's base while it is running,
+ * so we keep it on the same CPU. No hassle vs. reprogramming
+ * the event source in the high resolution case. The softirq
+ * code will take care of this when the timer function has
+ * completed. There is no conflict as we hold the lock until
+ * the timer is enqueued.
+ */
+ if (unlikely(base->curr_timer == timer))
+ return base;
+
+ /* See the comment in lock_timer_base() */
+ timer->base = NULL;
+ spin_unlock(&base->lock);
+ spin_lock(&new_base->lock);
+ timer->base = new_base;
+ }
+ return new_base;
+}
+
+/*
+ * Get the timer base unlocked
+ *
+ * Take care of timer->base = NULL in switch_ktimer_base !
+ */
+static inline struct ktimer_base *
+get_ktimer_base_unlocked(const struct ktimer *timer)
+{
+ struct ktimer_base *base;
+
+ while (!(base = timer->base))
+ cpu_relax();
+
+ return base;
+}
+
+#else /* CONFIG_SMP */
+
+#define set_curr_timer(b, t) do { } while (0)
+#define wake_up_timer_waiters(b) do { } while (0)
+
+static inline struct ktimer_base *
+lock_ktimer_base(const struct ktimer *timer, unsigned long *flags)
+{
+ struct ktimer_base *base = timer->base;
+
+ spin_lock_irqsave(&base->lock, *flags);
+
+ return base;
+}
+
+#define switch_ktimer_base(t, b) (b)
+#define get_ktimer_base_unlocked(t) (t)->base
+
+#endif /* !CONFIG_SMP */
+
+/*
+ * Functions for the union type storage format of ktime_t which are
+ * too large for inlining:
+ */
+#if BITS_PER_LONG < 64
+# ifndef CONFIG_KTIME_SCALAR
+/**
+ * ktime_add_ns - Add a scalar nanoseconds value to a ktime_t variable
+ *
+ * @kt: addend
+ * @nsec: the scalar nsec value to add
+ *
+ * Returns the sum of kt and nsec in ktime_t format
+ */
+ktime_t ktime_add_ns(const ktime_t kt, u64 nsec)
+{
+ ktime_t tmp;
+
+ if (likely(nsec < NSEC_PER_SEC)) {
+ tmp.tv64 = nsec;
+ } else {
+ unsigned long rem = do_div(nsec, NSEC_PER_SEC);
+
+ tmp = ktime_set((long)nsec, rem);
+ }
+
+ return ktime_add(kt, tmp);
+}
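ktime_add_ns() above normalizes a large scalar nanosecond count into a (sec, nsec) pair before adding. The same split, sketched with plain 64-bit division in place of the kernel's do_div() (which stores the quotient in its argument and returns the remainder):

```c
#include <assert.h>

#define NSEC_PER_SEC 1000000000ULL

/* whole seconds in a scalar nanosecond value, as computed by the
 * do_div() call in ktime_add_ns() when nsec >= NSEC_PER_SEC */
static unsigned long long ns_sec(unsigned long long nsec)
{
	return nsec / NSEC_PER_SEC;
}

/* the remaining sub-second nanoseconds of the same split */
static unsigned long ns_rem(unsigned long long nsec)
{
	return (unsigned long)(nsec % NSEC_PER_SEC);
}
```

The fast path in ktime_add_ns() skips this division entirely whenever nsec is already below NSEC_PER_SEC.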
+
+/**
+ * ktime_modulo - Calculate ktime_t modulo div
+ *
+ * @kt: dividend
+ * @div: divisor
+ *
+ * Return ktime_t modulo div.
+ *
+ * div must be less than NSEC_PER_SEC and NSEC_PER_SEC % div == 0!
+ */
+#define ktime_modulo(kt, div) ((unsigned long)(kt).tv.nsec % (div))
+
+#else /* CONFIG_KTIME_SCALAR */
+
+static unsigned long ktime_modulo(const ktime_t kt, const unsigned long div)
+{
+ nsec_t nsec = kt.tv64;
+
+ return do_div(nsec, div);
+}
+
+# endif /* !CONFIG_KTIME_SCALAR */
+#else /* BITS_PER_LONG < 64 */
+# define ktime_modulo(kt, div) (unsigned long)((kt).tv64 % (div))
+#endif /* BITS_PER_LONG >= 64 */
+
+/*
+ * Counterpart to lock_timer_base above.
+ */
+static inline
+void unlock_ktimer_base(const struct ktimer *timer, unsigned long *flags)
+{
+ spin_unlock_irqrestore(&timer->base->lock, *flags);
+}
+
+/**
+ * ktimer_round_timespec - convert timespec to ktime_t with resolution
+ * adjustment
+ *
+ * @timer: ktimer to retrieve the base
+ * @ts: pointer to the timespec value to be converted
+ *
+ * Returns the resolution adjusted ktime_t representation of the
+ * timespec.
+ *
+ * Note: We can access base without locking here, as ktimers can
+ * migrate between CPUs but can not be moved from one clock source to
+ * another. The clock source binding is set at init_ktimer_XXX time.
+ */
+ktime_t ktimer_round_timespec(const struct ktimer *timer,
+ const struct timespec *ts)
+{
+ struct ktimer_base *base = get_ktimer_base_unlocked(timer);
+ long rem = ts->tv_nsec % base->resolution;
+ ktime_t t;
+
+ t = ktime_set(ts->tv_sec, ts->tv_nsec);
+
+ /* Check, if the value has to be rounded */
+ if (rem)
+ t = ktime_add_ns(t, base->resolution - rem);
+
+ return t;
+}
+
+/**
+ * ktimer_round_timeval - convert timeval to ktime_t with resolution
+ * adjustment
+ *
+ * @timer: ktimer to retrieve the base
+ * @tv: pointer to the timeval value to be converted
+ *
+ * Returns the resolution adjusted ktime_t representation of the
+ * timeval.
+ */
+ktime_t ktimer_round_timeval(const struct ktimer *timer,
+ const struct timeval *tv)
+{
+ struct timespec ts;
+
+ ts.tv_sec = tv->tv_sec;
+ ts.tv_nsec = tv->tv_usec * NSEC_PER_USEC;
+
+ return ktimer_round_timespec(timer, &ts);
+}
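The rounding performed by ktimer_round_timespec()/ktimer_round_timeval() is a round-up to the next multiple of the clock resolution. A standalone sketch of just that step (plain C; in the kernel the addition goes through ktime_add_ns() rather than plain long arithmetic):

```c
#include <assert.h>

/* round a nanosecond value up to the next multiple of the clock
 * resolution; values already on a resolution boundary are unchanged */
static long round_to_res(long nsec, long resolution)
{
	long rem = nsec % resolution;

	return rem ? nsec + (resolution - rem) : nsec;
}
```

With a 1 ms resolution (HZ=1000), a request for 1.5 ms is thus rounded up to 2 ms, matching the "rounded up to this resolution value" note in ktime.h.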
+
+/*
+ * enqueue_ktimer - internal function to (re)start a timer
+ *
+ * The timer is inserted in expiry order. Insertion into the
+ * red black tree is O(log(n)). Must hold the base lock.
+ */
+static int enqueue_ktimer(struct ktimer *timer, struct ktimer_base *base,
+ const ktime_t *tim, const int mode)
+{
+ struct rb_node **link = &base->active.rb_node;
+ struct list_head *prev = &base->pending;
+ struct rb_node *parent = NULL;
+ struct ktimer *entry;
+ ktime_t now;
+
+ /* Get current time */
+ now = base->get_time();
+
+ /*
+ * Calculate the absolute expiry time based on the
+ * timer expiry mode:
+ */
+ switch (mode & ~(KTIMER_NOCHECK | KTIMER_ROUND)) {
+
+ case KTIMER_ABS:
+ timer->expires = *tim;
+ break;
+
+ case KTIMER_REL:
+ timer->expires = ktime_add(now, *tim);
+ break;
+
+ case KTIMER_INCR:
+ timer->expires = ktime_add(timer->expires, *tim);
+ break;
+
+ case KTIMER_FORWARD:
+ while (timer->expires.tv64 <= now.tv64) {
+ timer->expires = ktime_add(timer->expires, *tim);
+ timer->overrun++;
+ }
+ goto nocheck;
+
+ case KTIMER_REARM:
+ while (timer->expires.tv64 <= now.tv64) {
+ timer->expires = ktime_add(timer->expires,
+ timer->interval);
+ timer->overrun++;
+ }
+ goto nocheck;
+
+ case KTIMER_RESTART:
+ break;
+
+ default:
+ /* illegal mode */
+ BUG();
+ }
+
+ /*
+ * Rounding is requested for one shot timers and the first
+ * event of interval timers. It's done here, so we don't
+ * have to read the current time twice for relative timers.
+ */
+ if (mode & KTIMER_ROUND) {
+ unsigned long rem;
+
+ rem = ktime_modulo(timer->expires, base->resolution);
+ if (rem)
+ timer->expires = ktime_add_ns(timer->expires,
+ base->resolution - rem);
+ }
+
+ /* Expiry time in the past: */
+ if (unlikely(timer->expires.tv64 <= now.tv64)) {
+ timer->expired = now;
+ /* The caller takes care of expiry */
+ if (!(mode & KTIMER_NOCHECK))
+ return -1;
+ }
+ nocheck:
+
+ /*
+ * Find the right place in the rbtree:
+ */
+ while (*link) {
+ parent = *link;
+ entry = rb_entry(parent, struct ktimer, node);
+ /*
+		 * We don't care about collisions. Nodes with
+ * the same expiry time stay together.
+ */
+ if (timer->expires.tv64 < entry->expires.tv64)
+ link = &(*link)->rb_left;
+ else {
+ link = &(*link)->rb_right;
+ prev = &entry->list;
+ }
+ }
+
+ /*
+ * Insert the timer to the rbtree and to the sorted list:
+ */
+ rb_link_node(&timer->node, parent, link);
+ rb_insert_color(&timer->node, &base->active);
+ list_add(&timer->list, prev);
+
+ timer->state = KTIMER_PENDING;
+ base->count++;
+
+ return 0;
+}
+
+/*
+ * __remove_ktimer - internal function to remove a timer
+ *
+ * The function also allows automatic rearming for interval timers.
+ * Must hold the base lock.
+ */
+static void
+__remove_ktimer(struct ktimer *timer, struct ktimer_base *base,
+ enum ktimer_rearm rearm)
+{
+ /*
+ * Remove the timer from the sorted list and from the rbtree:
+ */
+ list_del(&timer->list);
+ rb_erase(&timer->node, &base->active);
+ timer->node.rb_parent = KTIMER_POISON;
+
+ timer->state = KTIMER_INACTIVE;
+ base->count--;
+ BUG_ON(base->count < 0);
+
+ /* Auto rearm the timer ? */
+ if (rearm && (timer->interval.tv64 != 0))
+ enqueue_ktimer(timer, base, NULL, KTIMER_REARM);
+}
+
+/*
+ * remove ktimer, called with base lock held
+ */
+static inline int remove_ktimer(struct ktimer *timer, struct ktimer_base *base)
+{
+ if (ktimer_active(timer)) {
+ __remove_ktimer(timer, base, KTIMER_NOREARM);
+ return 1;
+ }
+ return 0;
+}
+
+/*
+ * Internal function to (re)start a timer.
+ */
+static int internal_restart_ktimer(struct ktimer *timer, const ktime_t *tim,
+ const int mode)
+{
+ struct ktimer_base *base, *new_base;
+ unsigned long flags;
+ int ret;
+
+ BUG_ON(!timer->function);
+
+ base = lock_ktimer_base(timer, &flags);
+
+ /* Remove an active timer from the queue */
+ ret = remove_ktimer(timer, base);
+
+ /* Switch the timer base, if necessary */
+ new_base = switch_ktimer_base(timer, base);
+
+ /*
+ * When the new timer setting is already expired,
+ * let the calling code deal with it.
+ */
+ if (enqueue_ktimer(timer, new_base, tim, mode))
+ ret = -1;
+
+ unlock_ktimer_base(timer, &flags);
+
+ return ret;
+}
+
+/**
+ * ktimer_start - start a timer on the current CPU
+ *
+ * @timer: the timer to be added
+ * @tim: expiry time (optional, if not set in the timer)
+ * @mode: timer setup mode
+ *
+ * Returns:
+ * 0 on success
+ * -1 when the new time setting is already expired
+ */
+int ktimer_start(struct ktimer *timer, const ktime_t *tim, const int mode)
+{
+ BUG_ON(ktimer_active(timer));
+
+ return internal_restart_ktimer(timer, tim, mode);
+}
+
+EXPORT_SYMBOL_GPL(ktimer_start);
+
+/**
+ * ktimer_restart - modify a running timer
+ *
+ * @timer: the timer to be modified
+ * @tim: expiry time (required)
+ * @mode: timer setup mode
+ *
+ * Returns:
+ * 0 when the timer was not active
+ * 1 when the timer was active
+ * -1 when the new time setting is already expired
+ */
+int ktimer_restart(struct ktimer *timer, const ktime_t *tim, const int mode)
+{
+ BUG_ON(!tim);
+
+ return internal_restart_ktimer(timer, tim, mode);
+}
+
+EXPORT_SYMBOL_GPL(ktimer_restart);
+
+/**
+ * ktimer_try_to_cancel - try to deactivate a timer
+ *
+ * @timer: ktimer to stop
+ *
+ * Returns:
+ * 0 when the timer was not active
+ * 1 when the timer was active
+ *  -1 when the timer is currently executing the callback function and
+ *     cannot be stopped
+ */
+int ktimer_try_to_cancel(struct ktimer *timer)
+{
+ struct ktimer_base *base;
+ unsigned long flags;
+ int ret = -1;
+
+ base = lock_ktimer_base(timer, &flags);
+
+ if (base->curr_timer != timer) {
+ ret = remove_ktimer(timer, base);
+ if (ret)
+ timer->expired = base->get_time();
+ }
+
+ unlock_ktimer_base(timer, &flags);
+
+	return ret;
+}
+
+EXPORT_SYMBOL_GPL(ktimer_try_to_cancel);
+
+/**
+ * ktimer_cancel - cancel a timer and wait for the handler to finish.
+ *
+ * @timer: the timer to be cancelled
+ *
+ * Returns:
+ * 0 when the timer was not active
+ * 1 when the timer was active
+ */
+int ktimer_cancel(struct ktimer *timer)
+{
+ for (;;) {
+ int ret = ktimer_try_to_cancel(timer);
+
+ if (ret >= 0)
+ return ret;
+ wait_for_ktimer(timer);
+ }
+}
+
+EXPORT_SYMBOL_GPL(ktimer_cancel);
+
+/**
+ * ktimer_get_remtime - get remaining time for the timer
+ *
+ * @timer: the timer to read
+ *
+ * Returns the delta between the expiry time and now, which can be
+ * less than zero.
+ */
+ktime_t ktimer_get_remtime(const struct ktimer *timer)
+{
+ struct ktimer_base *base;
+ unsigned long flags;
+ ktime_t rem;
+
+ base = lock_ktimer_base(timer, &flags);
+ rem = ktime_sub(timer->expires, base->get_time());
+ unlock_ktimer_base(timer, &flags);
+
+ return rem;
+}
+
+/**
+ * ktimer_get_expiry - get expiry time for the timer
+ *
+ * @timer: the timer to read
+ * @now:	if != NULL, the current time of the timer base is stored in it
+ */
+ktime_t ktimer_get_expiry(const struct ktimer *timer, ktime_t *now)
+{
+ struct ktimer_base *base;
+ unsigned long flags;
+ ktime_t expiry;
+
+ base = lock_ktimer_base(timer, &flags);
+ expiry = timer->expires;
+ if (now)
+ *now = base->get_time();
+ unlock_ktimer_base(timer, &flags);
+
+ return expiry;
+}
+
+/*
+ * Functions related to clock sources
+ */
+
+static inline void ktimer_common_init(struct ktimer *timer)
+{
+ memset(timer, 0, sizeof(struct ktimer));
+ timer->node.rb_parent = KTIMER_POISON;
+}
+
+/**
+ * ktimer_init - initialize a timer to the monotonic clock
+ *
+ * @timer: the timer to be initialized
+ */
+void ktimer_init(struct ktimer *timer)
+{
+ struct ktimer_base *bases;
+
+ ktimer_common_init(timer);
+ bases = per_cpu(ktimer_bases, raw_smp_processor_id());
+ timer->base = &bases[CLOCK_MONOTONIC];
+}
+
+EXPORT_SYMBOL_GPL(ktimer_init);
+
+/**
+ * ktimer_init_clock - initialize a timer to the given clock
+ *
+ * @timer: the timer to be initialized
+ * @clock_id: the clock to be used
+ */
+void ktimer_init_clock(struct ktimer *timer, const clockid_t clock_id)
+{
+ struct ktimer_base *bases;
+
+ ktimer_common_init(timer);
+ bases = per_cpu(ktimer_bases, raw_smp_processor_id());
+ timer->base = &bases[clock_id];
+}
+
+EXPORT_SYMBOL_GPL(ktimer_init_clock);
+
+/**
+ * ktimer_get_res - get the monotonic timer resolution
+ *
+ * @which_clock: unused parameter for compatibility with the posix timer code
+ * @tp: pointer to timespec variable to store the resolution
+ *
+ * Store the resolution of clock monotonic in the variable pointed to
+ * by tp.
+ */
+int ktimer_get_res(const clockid_t which_clock, struct timespec *tp)
+{
+ struct ktimer_base *bases;
+
+ tp->tv_sec = 0;
+ bases = per_cpu(ktimer_bases, raw_smp_processor_id());
+ tp->tv_nsec = bases[CLOCK_MONOTONIC].resolution;
+
+ return 0;
+}
+
+/**
+ * ktimer_get_res_clock - get the timer resolution for a clock
+ *
+ * @which_clock: which clock to query
+ * @tp: pointer to timespec variable to store the resolution
+ *
+ * Store the resolution of clock realtime in the variable pointed to
+ * by tp.
+ */
+int ktimer_get_res_clock(const clockid_t which_clock, struct timespec *tp)
+{
+ struct ktimer_base *bases;
+
+ tp->tv_sec = 0;
+ bases = per_cpu(ktimer_bases, raw_smp_processor_id());
+ tp->tv_nsec = bases[which_clock].resolution;
+
+ return 0;
+}
+
+/*
+ * Expire the per base ktimer-queue:
+ */
+static inline void run_ktimer_queue(struct ktimer_base *base)
+{
+ ktime_t now = base->get_time();
+
+ spin_lock_irq(&base->lock);
+
+ while (!list_empty(&base->pending)) {
+ struct ktimer *timer;
+ void (*fn)(void *);
+ void *data;
+
+ timer = list_entry(base->pending.next, struct ktimer, list);
+ if (now.tv64 <= timer->expires.tv64)
+ break;
+
+ timer->expired = now;
+ fn = timer->function;
+ data = timer->data;
+ set_curr_timer(base, timer);
+ __remove_ktimer(timer, base, KTIMER_REARM);
+ spin_unlock_irq(&base->lock);
+
+ fn(data);
+
+ spin_lock_irq(&base->lock);
+ set_curr_timer(base, NULL);
+ }
+ spin_unlock_irq(&base->lock);
+
+ wake_up_timer_waiters(base);
+}
+
+/*
+ * Called from timer softirq every jiffy, expire ktimers:
+ */
+void ktimer_run_queues(void)
+{
+ struct ktimer_base *base = __get_cpu_var(ktimer_bases);
+ int i;
+
+ for (i = 0; i < MAX_KTIMER_BASES; i++)
+ run_ktimer_queue(&base[i]);
+}
+
+/*
+ * Functions related to boot-time initialization:
+ */
+static void __devinit init_ktimers_cpu(int cpu)
+{
+ struct ktimer_base *base = per_cpu(ktimer_bases, cpu);
+ int i;
+
+ for (i = 0; i < MAX_KTIMER_BASES; i++) {
+ spin_lock_init(&base->lock);
+ INIT_LIST_HEAD(&base->pending);
+ init_waitqueue_head(&base->wait);
+ base++;
+ }
+}
+
+#ifdef CONFIG_HOTPLUG_CPU
+
+static void migrate_ktimer_list(struct ktimer_base *old_base,
+ struct ktimer_base *new_base)
+{
+ struct ktimer *timer;
+ struct rb_node *node;
+
+ while ((node = rb_first(&old_base->active))) {
+ timer = rb_entry(node, struct ktimer, node);
+ remove_ktimer(timer, old_base);
+ timer->base = new_base;
+ enqueue_ktimer(timer, new_base, NULL, KTIMER_RESTART);
+ }
+}
+
+static void migrate_ktimers(int cpu)
+{
+ struct ktimer_base *old_base, *new_base;
+ int i;
+
+ BUG_ON(cpu_online(cpu));
+ old_base = per_cpu(ktimer_bases, cpu);
+ new_base = get_cpu_var(ktimer_bases);
+
+ local_irq_disable();
+
+ for (i = 0; i < MAX_KTIMER_BASES; i++) {
+
+ spin_lock(&new_base->lock);
+ spin_lock(&old_base->lock);
+
+ BUG_ON(old_base->curr_timer);
+
+ migrate_ktimer_list(old_base, new_base);
+
+ spin_unlock(&old_base->lock);
+ spin_unlock(&new_base->lock);
+ old_base++;
+ new_base++;
+ }
+
+ local_irq_enable();
+ put_cpu_var(ktimer_bases);
+}
+#endif /* CONFIG_HOTPLUG_CPU */
+
+static int __devinit ktimer_cpu_notify(struct notifier_block *self,
+ unsigned long action, void *hcpu)
+{
+ long cpu = (long)hcpu;
+
+ switch(action) {
+
+ case CPU_UP_PREPARE:
+ init_ktimers_cpu(cpu);
+ break;
+
+#ifdef CONFIG_HOTPLUG_CPU
+ case CPU_DEAD:
+ migrate_ktimers(cpu);
+ break;
+#endif
+
+ default:
+ break;
+ }
+
+ return NOTIFY_OK;
+}
+
+static struct notifier_block __devinitdata ktimers_nb = {
+ .notifier_call = ktimer_cpu_notify,
+};
+
+void __init ktimers_init(void)
+{
+ ktimer_cpu_notify(&ktimers_nb, (unsigned long)CPU_UP_PREPARE,
+ (void *)(long)smp_processor_id());
+ register_cpu_notifier(&ktimers_nb);
+}
+
Index: linux-2.6.15-rc2-rework/kernel/timer.c
===================================================================
--- linux-2.6.15-rc2-rework.orig/kernel/timer.c
+++ linux-2.6.15-rc2-rework/kernel/timer.c
@@ -30,6 +30,7 @@
#include <linux/thread_info.h>
#include <linux/time.h>
#include <linux/jiffies.h>
+#include <linux/ktimer.h>
#include <linux/posix-timers.h>
#include <linux/cpu.h>
#include <linux/syscalls.h>
@@ -857,6 +858,7 @@ static void run_timer_softirq(struct sof
{
tvec_base_t *base = &__get_cpu_var(tvec_bases);
+ ktimer_run_queues();
if (time_after_eq(jiffies, base->timer_jiffies))
__run_timers(base);
}
--
^ permalink raw reply [flat|nested] 47+ messages in thread
* [patch 16/43] ktimer documentation
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
` (14 preceding siblings ...)
2005-12-01 0:03 ` [patch 15/43] ktimer core code Thomas Gleixner
@ 2005-12-01 0:03 ` Thomas Gleixner
2005-12-01 0:03 ` [patch 17/43] Switch itimers to ktimer Thomas Gleixner
` (26 subsequent siblings)
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:03 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment (ktimer-documentation.patch)
- add ktimer docbook and design document
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Documentation/DocBook/kernel-api.tmpl | 5
Documentation/ktimers.txt | 239 ++++++++++++++++++++++++++++++++++
2 files changed, 244 insertions(+)
Index: linux-2.6.15-rc2-rework/Documentation/DocBook/kernel-api.tmpl
===================================================================
--- linux-2.6.15-rc2-rework.orig/Documentation/DocBook/kernel-api.tmpl
+++ linux-2.6.15-rc2-rework/Documentation/DocBook/kernel-api.tmpl
@@ -54,6 +54,11 @@
!Ekernel/sched.c
!Ekernel/timer.c
</sect1>
+ <sect1><title>High-precision timers</title>
+!Iinclude/linux/ktime.h
+!Iinclude/linux/ktimer.h
+!Ekernel/ktimer.c
+ </sect1>
<sect1><title>Internal Functions</title>
!Ikernel/exit.c
!Ikernel/signal.c
Index: linux-2.6.15-rc2-rework/Documentation/ktimers.txt
===================================================================
--- /dev/null
+++ linux-2.6.15-rc2-rework/Documentation/ktimers.txt
@@ -0,0 +1,239 @@
+
+ktimers - subsystem for high-precision kernel timers
+----------------------------------------------------
+
+This patch introduces a new subsystem for high-precision kernel timers.
+
+Why two timer subsystems? After a lot of back and forth trying to
+integrate high-precision and high-resolution features into the existing
+timer framework, and after testing various such high-resolution timer
+implementations in practice, we came to the conclusion that the timer
+wheel code is fundamentally not suitable for such an approach. We
+initially didn't believe this ('there must be a way to solve this'), and
+we spent a considerable effort trying to integrate things into the timer
+wheel, but we failed. There are several reasons why such integration is
+impossible:
+
+- the forced handling of low-resolution and high-resolution timers in
+ the same way leads to a lot of compromises, macro magic and #ifdef
+ mess. The timers.c code is very "tightly coded" around jiffies and
+ 32-bitness assumptions, and has been honed and micro-optimized for a
+ narrow use case for many years - and thus even small extensions to it
+ frequently break the wheel concept, leading to even worse
+ compromises.
+
+- the unpredictable [O(N)] overhead of cascading leads to delays which
+ necessitate a more complex handling of high-resolution timers, which
+ decreases robustness. Such a design still led to rather large timing
+ inaccuracies. Cascading is a fundamental property of the timer wheel
+ concept; it cannot be 'designed out' without inevitably degrading
+ other portions of the timers.c code in an unacceptable way.
+
+- the implementation of the current posix-timer subsystem on top of
+ the timer wheel has already introduced a quite complex handling of
+ the required readjusting of absolute CLOCK_REALTIME timers at
+ settimeofday or NTP time - showing the rigidity of the timer wheel
+ data structure.
+
+- the timer wheel code is optimized for use cases which can be
+ identified as "timeouts". Such timeouts are usually set up to cover
+ error conditions in various I/O paths, such as networking and block
+ I/O. The vast majority of those timers never expire and are rarely
+ recascaded because the expected correct event arrives in time so they
+ can be removed from the timer wheel before any further processing of
+ them becomes necessary. Thus the users of these timeouts can accept
+ the granularity and precision tradeoffs of the timer wheel, and
+ largely expect the timer subsystem to have near-zero overhead. Timing
+ for them is not a core purpose, it's mostly a necessary evil to
+ guarantee the processing of requests, which should be as cheap and
+ unintrusive as possible.
+
+The primary users of precision timers are user-space applications that
+utilize nanosleep, posix-timers and itimer interfaces. Also, in-kernel
+users like drivers and subsystems with a requirement for precisely timed
+events can benefit from the availability of a separate high-precision
+timer subsystem as well.
+
+The ktimer subsystem is easily extended with high-resolution
+capabilities, and patches for that exist and are maturing quickly. The
+increasing demand for realtime and multimedia applications along with
+other potential users for precise timers gives another reason to
+separate the "timeout" and "precise timer" subsystems.
+
+Another potential benefit is that such separation allows for future
+optimizations of the existing timer wheel implementation for the low
+resolution and low precision use cases - once the precision-sensitive
+APIs are separated from the timer wheel and are migrated over to
+ktimers. E.g. we could decrease the frequency of the timeout subsystem
+from 250 Hz to 100 Hz (or even lower).
+
+ktimer subsystem implementation details
+---------------------------------------
+
+The basic design considerations were:
+
+- simplicity
+- robust, extensible abstractions
+- data structure not bound to jiffies or any other granularity
+- simplification of existing, timing related kernel code
+
+Our previous experience with various approaches to high-resolution
+timers led to another basic requirement: the immediate enqueueing and
+ordering of timers at activation time. After looking at several possible
+solutions such as radix trees and hashes, the red-black tree was chosen
+as the basic data structure. Rbtrees are available as a library in the
+kernel and are used in various performance-critical areas of e.g. memory
+management and file systems. The rbtree is used solely for time-sorted
+ordering, while a separate list gives the expiry code fast access to
+the queued timers without having to walk the rbtree. (This separate
+list is also useful for high-resolution timers, where we need separate
+pending and expired queues while keeping the time order intact.)
+
+The time-ordered enqueueing is not purely for the purposes of the
+high-resolution timers extension though; it also simplifies the handling
+of absolute timers based on CLOCK_REALTIME. The existing implementation
+needed to keep an extra list of all armed absolute CLOCK_REALTIME timers
+along with complex locking. In case of settimeofday and NTP, all the
+timers (!) had to be dequeued, the time-changing code had to fix them up
+one by one, and all of them had to be enqueued again. The time-ordered
+enqueueing and the storage of the expiry time in absolute time units
+removes all this complex and poorly scaling code from the posix-timer
+implementation - the clock can simply be set without having to touch the
+rbtree. This also makes the handling of posix-timers simpler in general.
+
+The locking and per-CPU behavior of ktimers was mostly taken from the
+existing timer wheel code, as it is mature and well suited. Sharing code
+was not really a win, due to the different data structures. Also, the
+ktimer functions now have clearer behavior and clearer names - such as
+ktimer_try_to_cancel() and ktimer_cancel() [which are roughly equivalent
+to del_timer() and del_timer_sync()] - and there's no direct 1:1 mapping
+between them on the algorithmic level.
+
+The internal representation of time values (ktime_t) is implemented via
+macros and inline functions, and can be switched between a "hybrid
+union" type and a plain "scalar" 64bit nanoseconds representation (at
+compile time). The hybrid union type exists to optimize time conversions
+on 32bit CPUs. This build-time-selectable ktime_t storage format was
+implemented to avoid the performance impact of 64-bit multiplications
+and divisions on 32bit CPUs. Such operations are frequently necessary to
+convert between the storage formats provided by kernel and userspace
+interfaces and the internal time format. (See include/linux/ktime.h for
+further details.)
+
+ktimers - rounding of timer values
+----------------------------------
+
+Why do we need rounding at all?
+
+Firstly, the POSIX specification requires rounding to the resolution -
+whatever that means. The specification is quite imprecise about the
+details of rounding though, so a practical interpretation had to be
+found.
+
+The first question is which resolution value should be returned to the
+user by the clock_getres() interface.
+
+The simplest case is when the hardware is capable of 1 nsec resolution:
+in that case we can fulfill all wishes and there is no rounding :-)
+
+Another simple case is when the clock hardware has a limited resolution
+that the kernel wants to fully offer to user-space: in this case that
+limited resolution is returned to userspace.
+
+The hairy case is when the underlying hardware is capable of finer
+grained resolution, but the kernel is not willing to offer that
+resolution. Why would the kernel want to do that? Because e.g. the
+system could easily be DoS-ed with high-frequency timer interrupts. Or
+the kernel might want to cluster high-res timer interrupts into groups
+for performance reasons, so that extremely high interrupt rates are
+avoided. So the kernel needs some leeway in deciding the 'effective'
+resolution that it is willing to expose to userspace.
+
+In this case, the clock_getres() decision is easy: we want to return the
+'effective' resolution, not the 'theoretical' resolution. Thus an
+application programmer gets correct information about what granularity
+and accuracy to expect from the system.
+
+What is much less obvious in both the 'hardware is low-res' and 'kernel
+wants to offer low-res' cases is the actual behavior of timers, and
+where and how to round time values to the 'effective' resolution of the
+clock.
+
+For this we first need to see what types of expiries there exist for
+ktimers, and how rounding affects them. Ktimers have the following
+variants:
+
+- relative one-shot timers
+- absolute one-shot timers
+- relative interval timers
+- absolute interval timers
+
+Interval timers can be reduced to one-shot timers: they are a series of
+one-shot timers with the same interval. Relative one-shot timers can be
+handled identically to absolute one-shot timers after adding the
+relative expiry time to the current time of the respective clock.
+
+We chose to handle two cases of rounding:
+
+- the rounding of the absolute value of the first expiry time
+- the rounding of the timer interval
+
+An alternative implementation would be not to round the interval and
+to round implicitly at every timer event, but it is not clear what the
+advantages of doing that would be. There are a couple of
+disadvantages:
+
+- the technique seems to contradict the standard's requirement that
+ 'time values ... be rounded' (and the interval clearly is one).
+
+- other OSs implement the rounding in the way we implemented it.
+
+- also, there is an application surprise factor: the 'do not round
+ intervals' technique can lead to the following sample sequence of
+ events:
+
+ Interval: 1.7ms
+ Resolution: 1ms
+
+ Event timeline:
+
+ 2ms - 4ms - 6ms - 7ms - 9ms - 11ms - 12ms - 14ms - 16ms - 17ms ...
+
+ this 2,2,1,2,2,1...msec 'unpredictable and uneven' relative distance
+ of events could surprise applications.
+
+(As a sidenote, the current POSIX APIs could be extended with a way
+for periodic timers to maintain an 'average' frequency, with no
+rounding of the interval. No such API exists at the moment.)
+
+ktimers - testing and verification
+----------------------------------
+
+We used the high-resolution timer subsystem on top of ktimers to verify
+the ktimer implementation details in practice, and we also ran the
+posix-timer tests in order to ensure specification compliance.
+
+The ktimer patch converts the following kernel functionality to use
+ktimers:
+
+ - nanosleep
+ - itimers
+ - posix-timers
+
+The conversion of nanosleep and posix-timers enabled the unification of
+nanosleep and clock_nanosleep.
+
+The code was successfully compiled for the following platforms:
+
+ i386, x86_64, ARM, PPC, PPC64, IA64
+
+The code was run-tested on the following platforms:
+
+ i386(UP/SMP), x86_64(UP/SMP), ARM, PPC
+
+ktimers were also integrated into the -rt tree, along with a
+ktimers-based high-resolution timer implementation, so the ktimers code
+got a healthy amount of testing and use in practice.
+
+ Thomas Gleixner, Ingo Molnar
--
^ permalink raw reply [flat|nested] 47+ messages in thread
* [patch 17/43] Switch itimers to ktimer
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
` (15 preceding siblings ...)
2005-12-01 0:03 ` [patch 16/43] ktimer documentation Thomas Gleixner
@ 2005-12-01 0:03 ` Thomas Gleixner
2005-12-01 0:03 ` [patch 18/43] Remove now unnecessary includes Thomas Gleixner
` (25 subsequent siblings)
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:03 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment (ktimer-convert-itimer.patch)
- switch itimers to a ktimers-based implementation
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
fs/exec.c | 7 +--
fs/proc/array.c | 6 +-
include/linux/sched.h | 4 -
include/linux/timer.h | 2
kernel/exit.c | 2
kernel/fork.c | 5 --
kernel/itimer.c | 108 ++++++++++++++++++++++----------------------------
7 files changed, 62 insertions(+), 72 deletions(-)
Index: linux-2.6.15-rc2-rework/fs/exec.c
===================================================================
--- linux-2.6.15-rc2-rework.orig/fs/exec.c
+++ linux-2.6.15-rc2-rework/fs/exec.c
@@ -642,10 +642,11 @@ static inline int de_thread(struct task_
* synchronize with any firing (by calling del_timer_sync)
* before we can safely let the old group leader die.
*/
- sig->real_timer.data = (unsigned long)current;
+ sig->real_timer.data = current;
spin_unlock_irq(lock);
- if (del_timer_sync(&sig->real_timer))
- add_timer(&sig->real_timer);
+ if (ktimer_cancel(&sig->real_timer))
+ ktimer_start(&sig->real_timer, NULL,
+ KTIMER_RESTART|KTIMER_NOCHECK);
spin_lock_irq(lock);
}
while (atomic_read(&sig->count) > count) {
Index: linux-2.6.15-rc2-rework/fs/proc/array.c
===================================================================
--- linux-2.6.15-rc2-rework.orig/fs/proc/array.c
+++ linux-2.6.15-rc2-rework/fs/proc/array.c
@@ -330,7 +330,7 @@ static int do_task_stat(struct task_stru
unsigned long min_flt = 0, maj_flt = 0;
cputime_t cutime, cstime, utime, stime;
unsigned long rsslim = 0;
- unsigned long it_real_value = 0;
+ DEFINE_KTIME(it_real_value);
struct task_struct *t;
char tcomm[sizeof(task->comm)];
@@ -386,7 +386,7 @@ static int do_task_stat(struct task_stru
utime = cputime_add(utime, task->signal->utime);
stime = cputime_add(stime, task->signal->stime);
}
- it_real_value = task->signal->it_real_value;
+ it_real_value = task->signal->real_timer.expires;
}
ppid = pid_alive(task) ? task->group_leader->real_parent->tgid : 0;
read_unlock(&tasklist_lock);
@@ -435,7 +435,7 @@ static int do_task_stat(struct task_stru
priority,
nice,
num_threads,
- jiffies_to_clock_t(it_real_value),
+ (long) ktime_to_clock_t(it_real_value),
start_time,
vsize,
mm ? get_mm_rss(mm) : 0,
Index: linux-2.6.15-rc2-rework/include/linux/sched.h
===================================================================
--- linux-2.6.15-rc2-rework.orig/include/linux/sched.h
+++ linux-2.6.15-rc2-rework/include/linux/sched.h
@@ -104,6 +104,7 @@ extern unsigned long nr_iowait(void);
#include <linux/param.h>
#include <linux/resource.h>
#include <linux/timer.h>
+#include <linux/ktimer.h>
#include <asm/processor.h>
@@ -402,8 +403,7 @@ struct signal_struct {
struct list_head posix_timers;
/* ITIMER_REAL timer for the process */
- struct timer_list real_timer;
- unsigned long it_real_value, it_real_incr;
+ struct ktimer real_timer;
/* ITIMER_PROF and ITIMER_VIRTUAL timers for the process */
cputime_t it_prof_expires, it_virt_expires;
Index: linux-2.6.15-rc2-rework/include/linux/timer.h
===================================================================
--- linux-2.6.15-rc2-rework.orig/include/linux/timer.h
+++ linux-2.6.15-rc2-rework/include/linux/timer.h
@@ -96,6 +96,6 @@ static inline void add_timer(struct time
extern void init_timers(void);
extern void run_local_timers(void);
-extern void it_real_fn(unsigned long);
+extern void it_real_fn(void *);
#endif
Index: linux-2.6.15-rc2-rework/kernel/exit.c
===================================================================
--- linux-2.6.15-rc2-rework.orig/kernel/exit.c
+++ linux-2.6.15-rc2-rework/kernel/exit.c
@@ -842,7 +842,7 @@ fastcall NORET_TYPE void do_exit(long co
}
group_dead = atomic_dec_and_test(&tsk->signal->live);
if (group_dead) {
- del_timer_sync(&tsk->signal->real_timer);
+ ktimer_cancel(&tsk->signal->real_timer);
exit_itimers(tsk->signal);
acct_process(code);
}
Index: linux-2.6.15-rc2-rework/kernel/fork.c
===================================================================
--- linux-2.6.15-rc2-rework.orig/kernel/fork.c
+++ linux-2.6.15-rc2-rework/kernel/fork.c
@@ -793,10 +793,9 @@ static inline int copy_signal(unsigned l
init_sigpending(&sig->shared_pending);
INIT_LIST_HEAD(&sig->posix_timers);
- sig->it_real_value = sig->it_real_incr = 0;
+ ktimer_init(&sig->real_timer);
sig->real_timer.function = it_real_fn;
- sig->real_timer.data = (unsigned long) tsk;
- init_timer(&sig->real_timer);
+ sig->real_timer.data = tsk;
sig->it_virt_expires = cputime_zero;
sig->it_virt_incr = cputime_zero;
Index: linux-2.6.15-rc2-rework/kernel/itimer.c
===================================================================
--- linux-2.6.15-rc2-rework.orig/kernel/itimer.c
+++ linux-2.6.15-rc2-rework/kernel/itimer.c
@@ -12,36 +12,49 @@
#include <linux/syscalls.h>
#include <linux/time.h>
#include <linux/posix-timers.h>
+#include <linux/ktimer.h>
#include <asm/uaccess.h>
-static unsigned long it_real_value(struct signal_struct *sig)
+/**
+ * itimer_get_remtime - get remaining time for the timer
+ *
+ * @timer: the timer to read
+ * @fake: a pending, but expired timer returns fake (itimers kludge)
+ *
+ * Returns the delta between the expiry time and now, which can be
+ * less than zero or the fake value described above.
+ */
+static struct timeval itimer_get_remtime(struct ktimer *timer, long fake)
{
- unsigned long val = 0;
- if (timer_pending(&sig->real_timer)) {
- val = sig->real_timer.expires - jiffies;
-
- /* look out for negative/zero itimer.. */
- if ((long) val <= 0)
- val = 1;
- }
- return val;
+ ktime_t rem = ktimer_get_remtime(timer);
+
+ /*
+ * Racy but safe: if the itimer expires after the above
+ * ktimer_get_remtime() call but before this condition
+ * then we return KTIMER_ZERO - which is correct.
+ */
+ if (ktimer_active(timer)) {
+ if (rem.tv64 <= 0)
+ rem = ktime_set(0, fake);
+ } else
+ rem.tv64 = 0;
+
+ return ktime_to_timeval(rem);
}
int do_getitimer(int which, struct itimerval *value)
{
struct task_struct *tsk = current;
- unsigned long interval, val;
+ ktime_t interval;
cputime_t cinterval, cval;
switch (which) {
case ITIMER_REAL:
- spin_lock_irq(&tsk->sighand->siglock);
- interval = tsk->signal->it_real_incr;
- val = it_real_value(tsk->signal);
- spin_unlock_irq(&tsk->sighand->siglock);
- jiffies_to_timeval(val, &value->it_value);
- jiffies_to_timeval(interval, &value->it_interval);
+ interval = tsk->signal->real_timer.interval;
+ value->it_value = itimer_get_remtime(&tsk->signal->real_timer,
+ NSEC_PER_USEC);
+ value->it_interval = ktime_to_timeval(interval);
break;
case ITIMER_VIRTUAL:
read_lock(&tasklist_lock);
@@ -113,59 +126,36 @@ asmlinkage long sys_getitimer(int which,
}
-void it_real_fn(unsigned long __data)
+/*
+ * The timer is automagically restarted when interval != 0
+ */
+void it_real_fn(void *data)
{
- struct task_struct * p = (struct task_struct *) __data;
- unsigned long inc = p->signal->it_real_incr;
-
- send_group_sig_info(SIGALRM, SEND_SIG_PRIV, p);
-
- /*
- * Now restart the timer if necessary. We don't need any locking
- * here because do_setitimer makes sure we have finished running
- * before it touches anything.
- * Note, we KNOW we are (or should be) at a jiffie edge here so
- * we don't need the +1 stuff. Also, we want to use the prior
- * expire value so as to not "slip" a jiffie if we are late.
- * Deal with requesting a time prior to "now" here rather than
- * in add_timer.
- */
- if (!inc)
- return;
- while (time_before_eq(p->signal->real_timer.expires, jiffies))
- p->signal->real_timer.expires += inc;
- add_timer(&p->signal->real_timer);
+ send_group_sig_info(SIGALRM, SEND_SIG_PRIV, data);
}
int do_setitimer(int which, struct itimerval *value, struct itimerval *ovalue)
{
struct task_struct *tsk = current;
- unsigned long val, interval, expires;
+ struct ktimer *timer;
+ ktime_t expires;
cputime_t cval, cinterval, nval, ninterval;
switch (which) {
case ITIMER_REAL:
-again:
- spin_lock_irq(&tsk->sighand->siglock);
- interval = tsk->signal->it_real_incr;
- val = it_real_value(tsk->signal);
- /* We are sharing ->siglock with it_real_fn() */
- if (try_to_del_timer_sync(&tsk->signal->real_timer) < 0) {
- spin_unlock_irq(&tsk->sighand->siglock);
- goto again;
- }
- tsk->signal->it_real_incr =
- timeval_to_jiffies(&value->it_interval);
- expires = timeval_to_jiffies(&value->it_value);
- if (expires)
- mod_timer(&tsk->signal->real_timer,
- jiffies + 1 + expires);
- spin_unlock_irq(&tsk->sighand->siglock);
+ timer = &tsk->signal->real_timer;
+ ktimer_cancel(timer);
if (ovalue) {
- jiffies_to_timeval(val, &ovalue->it_value);
- jiffies_to_timeval(interval,
- &ovalue->it_interval);
- }
+ ovalue->it_value = itimer_get_remtime(timer,
+ NSEC_PER_USEC);
+ ovalue->it_interval = ktime_to_timeval(timer->interval);
+ }
+ timer->interval = ktimer_round_timeval(timer,
+ &value->it_interval);
+ expires = timeval_to_ktime(value->it_value);
+ if (expires.tv64 != 0)
+ ktimer_restart(timer, &expires,
+ KTIMER_REL | KTIMER_NOCHECK | KTIMER_ROUND);
break;
case ITIMER_VIRTUAL:
nval = timeval_to_cputime(&value->it_value);
--
^ permalink raw reply [flat|nested] 47+ messages in thread
* [patch 18/43] Remove now unnecessary includes
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
` (16 preceding siblings ...)
2005-12-01 0:03 ` [patch 17/43] Switch itimers to ktimer Thomas Gleixner
@ 2005-12-01 0:03 ` Thomas Gleixner
2005-12-01 0:03 ` [patch 19/43] Introduce ktimer_nanosleep APIs Thomas Gleixner
` (24 subsequent siblings)
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:03 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment (ktimer-cleanup-includes.patch)
- remove some now unnecessary includes
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
init/main.c | 2 --
kernel/timer.c | 1 -
2 files changed, 3 deletions(-)
Index: linux-2.6.15-rc2-rework/init/main.c
===================================================================
--- linux-2.6.15-rc2-rework.orig/init/main.c
+++ linux-2.6.15-rc2-rework/init/main.c
@@ -47,8 +47,6 @@
#include <linux/rmap.h>
#include <linux/mempolicy.h>
#include <linux/key.h>
-#include <linux/ktimer.h>
-
#include <net/sock.h>
#include <asm/io.h>
Index: linux-2.6.15-rc2-rework/kernel/timer.c
===================================================================
--- linux-2.6.15-rc2-rework.orig/kernel/timer.c
+++ linux-2.6.15-rc2-rework/kernel/timer.c
@@ -30,7 +30,6 @@
#include <linux/thread_info.h>
#include <linux/time.h>
#include <linux/jiffies.h>
-#include <linux/ktimer.h>
#include <linux/posix-timers.h>
#include <linux/cpu.h>
#include <linux/syscalls.h>
--
^ permalink raw reply [flat|nested] 47+ messages in thread
* [patch 19/43] Introduce ktimer_nanosleep APIs
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
` (17 preceding siblings ...)
2005-12-01 0:03 ` [patch 18/43] Remove now unnecessary includes Thomas Gleixner
@ 2005-12-01 0:03 ` Thomas Gleixner
2005-12-01 0:03 ` [patch 20/43] Convert sys_nanosleep to ktimer_nanosleep Thomas Gleixner
` (23 subsequent siblings)
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:03 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment (ktimer-nanosleep-interface.patch)
- introduce the ktimer_nanosleep() and ktimer_nanosleep_real() APIs.
Not yet used by any code.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
include/linux/ktimer.h | 6 +
kernel/ktimer.c | 164 +++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 170 insertions(+)
Index: linux-2.6.15-rc2-rework/include/linux/ktimer.h
===================================================================
--- linux-2.6.15-rc2-rework.orig/include/linux/ktimer.h
+++ linux-2.6.15-rc2-rework/include/linux/ktimer.h
@@ -155,6 +155,12 @@ extern ktime_t ktimer_round_timeval(cons
extern ktime_t ktimer_round_timespec(const struct ktimer *timer,
const struct timespec *ts);
+/* Precise sleep: */
+extern long ktimer_nanosleep(struct timespec *rqtp,
+ struct timespec __user *rmtp, const int mode);
+extern long ktimer_nanosleep_real(struct timespec *rqtp,
+ struct timespec __user *rmtp, const int mode);
+
#ifdef CONFIG_SMP
extern void wait_for_ktimer(const struct ktimer *timer);
#else
Index: linux-2.6.15-rc2-rework/kernel/ktimer.c
===================================================================
--- linux-2.6.15-rc2-rework.orig/kernel/ktimer.c
+++ linux-2.6.15-rc2-rework/kernel/ktimer.c
@@ -806,6 +806,170 @@ void ktimer_run_queues(void)
}
/*
+ * Sleep related functions:
+ */
+
+/*
+ * Process-wakeup callback:
+ */
+static void ktimer_wake_up(void *data)
+{
+ wake_up_process(data);
+}
+
+/**
+ * schedule_ktimer - sleep until timeout
+ *
+ * @timer: ktimer variable initialized with the correct clock base
+ * @t: timeout value
+ * @mode: timeout value is abs/rel
+ *
+ * Make the current task sleep until @timeout is
+ * elapsed.
+ *
+ * You can set the task state as follows -
+ *
+ * %TASK_UNINTERRUPTIBLE - at least @timeout is guaranteed to
+ * pass before the routine returns. The routine will return 0
+ *
+ * %TASK_INTERRUPTIBLE - the routine may return early if a signal is
+ * delivered to the current task. In this case the remaining time
+ * will be returned
+ *
+ * The current task state is guaranteed to be TASK_RUNNING when this
+ * routine returns.
+ */
+static ktime_t __sched
+schedule_ktimer(struct ktimer *timer, ktime_t *t, const int mode)
+{
+ timer->data = current;
+ timer->function = ktimer_wake_up;
+
+ if (unlikely(ktimer_start(timer, t, mode) < 0)) {
+ __set_current_state(TASK_RUNNING);
+ } else {
+ if (current->state != TASK_RUNNING)
+ schedule();
+ ktimer_cancel(timer);
+ }
+
+ /* Store the absolute expiry time */
+ *t = timer->expires;
+
+ /* Return the remaining time */
+ return ktime_sub(timer->expires, timer->expired);
+}
+
+static ktime_t __sched
+schedule_ktimer_interruptible(struct ktimer *timer, ktime_t *t, const int mode)
+{
+ set_current_state(TASK_INTERRUPTIBLE);
+
+ return schedule_ktimer(timer, t, mode);
+}
+
+static long __sched
+nanosleep_restart(struct ktimer *timer, struct restart_block *restart)
+{
+ void *rfn_save = restart->fn;
+ struct timespec __user *rmtp;
+ struct timespec tu;
+ ktime_t t, rem;
+
+ restart->fn = do_no_restart_syscall;
+
+ t = ktime_set_low_high(restart->arg0, restart->arg1);
+
+ rem = schedule_ktimer_interruptible(timer, &t, KTIMER_ABS);
+
+ if (rem.tv64 <= 0)
+ return 0;
+
+ rmtp = (struct timespec __user *) restart->arg2;
+ tu = ktime_to_timespec(rem);
+ if (rmtp && copy_to_user(rmtp, &tu, sizeof(tu)))
+ return -EFAULT;
+
+ restart->fn = rfn_save;
+
+ /* The other values in restart are already filled in */
+ return -ERESTART_RESTARTBLOCK;
+}
+
+static long __sched nanosleep_restart_mono(struct restart_block *restart)
+{
+ struct ktimer timer;
+
+ ktimer_init(&timer);
+
+ return nanosleep_restart(&timer, restart);
+}
+
+static long __sched nanosleep_restart_real(struct restart_block *restart)
+{
+ struct ktimer timer;
+
+ ktimer_init_clock(&timer, CLOCK_REALTIME);
+
+ return nanosleep_restart(&timer, restart);
+}
+
+static long __ktimer_nanosleep(struct ktimer *timer, struct timespec *rqtp,
+ struct timespec __user *rmtp, const int mode,
+ long (*rfn)(struct restart_block *))
+{
+ struct timespec tu;
+ ktime_t rem, t;
+ struct restart_block *restart;
+
+ t = timespec_to_ktime(*rqtp);
+
+ /* t is updated to absolute expiry time ! */
+ rem = schedule_ktimer_interruptible(timer, &t, mode | KTIMER_ROUND);
+
+ if (rem.tv64 <= 0)
+ return 0;
+
+ /* Absolute timers do not update the rmtp value */
+ if (mode == KTIMER_ABS)
+ return -ERESTARTNOHAND;
+
+ tu = ktime_to_timespec(rem);
+
+ if (rmtp && copy_to_user(rmtp, &tu, sizeof(tu)))
+ return -EFAULT;
+
+ restart = &current_thread_info()->restart_block;
+ restart->fn = rfn;
+ restart->arg0 = ktime_get_low(t);
+ restart->arg1 = ktime_get_high(t);
+ restart->arg2 = (unsigned long) rmtp;
+
+ return -ERESTART_RESTARTBLOCK;
+}
+
+long ktimer_nanosleep(struct timespec *rqtp,
+ struct timespec __user *rmtp, const int mode)
+{
+ struct ktimer timer;
+
+ ktimer_init(&timer);
+
+ return __ktimer_nanosleep(&timer, rqtp, rmtp, mode,
+ nanosleep_restart_mono);
+}
+
+long ktimer_nanosleep_real(struct timespec *rqtp,
+ struct timespec __user *rmtp, const int mode)
+{
+ struct ktimer timer;
+
+ ktimer_init_clock(&timer, CLOCK_REALTIME);
+ return __ktimer_nanosleep(&timer, rqtp, rmtp, mode,
+ nanosleep_restart_real);
+}
+
+/*
* Functions related to boot-time initialization:
*/
static void __devinit init_ktimers_cpu(int cpu)
--
^ permalink raw reply [flat|nested] 47+ messages in thread
* [patch 20/43] Convert sys_nanosleep to ktimer_nanosleep
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
` (18 preceding siblings ...)
2005-12-01 0:03 ` [patch 19/43] Introduce ktimer_nanosleep APIs Thomas Gleixner
@ 2005-12-01 0:03 ` Thomas Gleixner
2005-12-01 0:03 ` [patch 21/43] Switch clock_nanosleep to ktimer nanosleep API Thomas Gleixner
` (22 subsequent siblings)
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:03 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment (ktimer-convert-sys-nanosleep.patch)
- convert sys_nanosleep() to use ktimer_nanosleep()
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
kernel/ktimer.c | 14 ++++++++++++++
kernel/timer.c | 56 --------------------------------------------------------
2 files changed, 14 insertions(+), 56 deletions(-)
Index: linux-2.6.15-rc2-rework/kernel/ktimer.c
===================================================================
--- linux-2.6.15-rc2-rework.orig/kernel/ktimer.c
+++ linux-2.6.15-rc2-rework/kernel/ktimer.c
@@ -969,6 +969,20 @@ long ktimer_nanosleep_real(struct timesp
nanosleep_restart_real);
}
+asmlinkage long
+sys_nanosleep(struct timespec __user *rqtp, struct timespec __user *rmtp)
+{
+ struct timespec tu;
+
+ if (copy_from_user(&tu, rqtp, sizeof(tu)))
+ return -EFAULT;
+
+ if (!timespec_valid(&tu))
+ return -EINVAL;
+
+ return ktimer_nanosleep(&tu, rmtp, KTIMER_REL);
+}
+
/*
* Functions related to boot-time initialization:
*/
Index: linux-2.6.15-rc2-rework/kernel/timer.c
===================================================================
--- linux-2.6.15-rc2-rework.orig/kernel/timer.c
+++ linux-2.6.15-rc2-rework/kernel/timer.c
@@ -1119,62 +1119,6 @@ asmlinkage long sys_gettid(void)
return current->pid;
}
-static long __sched nanosleep_restart(struct restart_block *restart)
-{
- unsigned long expire = restart->arg0, now = jiffies;
- struct timespec __user *rmtp = (struct timespec __user *) restart->arg1;
- long ret;
-
- /* Did it expire while we handled signals? */
- if (!time_after(expire, now))
- return 0;
-
- expire = schedule_timeout_interruptible(expire - now);
-
- ret = 0;
- if (expire) {
- struct timespec t;
- jiffies_to_timespec(expire, &t);
-
- ret = -ERESTART_RESTARTBLOCK;
- if (rmtp && copy_to_user(rmtp, &t, sizeof(t)))
- ret = -EFAULT;
- /* The 'restart' block is already filled in */
- }
- return ret;
-}
-
-asmlinkage long sys_nanosleep(struct timespec __user *rqtp, struct timespec __user *rmtp)
-{
- struct timespec t;
- unsigned long expire;
- long ret;
-
- if (copy_from_user(&t, rqtp, sizeof(t)))
- return -EFAULT;
-
- if ((t.tv_nsec >= 1000000000L) || (t.tv_nsec < 0) || (t.tv_sec < 0))
- return -EINVAL;
-
- expire = timespec_to_jiffies(&t) + (t.tv_sec || t.tv_nsec);
- expire = schedule_timeout_interruptible(expire);
-
- ret = 0;
- if (expire) {
- struct restart_block *restart;
- jiffies_to_timespec(expire, &t);
- if (rmtp && copy_to_user(rmtp, &t, sizeof(t)))
- return -EFAULT;
-
- restart = &current_thread_info()->restart_block;
- restart->fn = nanosleep_restart;
- restart->arg0 = jiffies + expire;
- restart->arg1 = (unsigned long) rmtp;
- ret = -ERESTART_RESTARTBLOCK;
- }
- return ret;
-}
-
/*
* sys_sysinfo - fill in sysinfo struct
*/
--
^ permalink raw reply [flat|nested] 47+ messages in thread
* [patch 21/43] Switch clock_nanosleep to ktimer nanosleep API
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
` (19 preceding siblings ...)
2005-12-01 0:03 ` [patch 20/43] Convert sys_nanosleep to ktimer_nanosleep Thomas Gleixner
@ 2005-12-01 0:03 ` Thomas Gleixner
2005-12-01 0:03 ` [patch 22/43] Convert posix interval timers to use ktimers Thomas Gleixner
` (21 subsequent siblings)
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:03 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment
(ktimer-convert-posix-clock-nanosleep.patch)
- Switch clock_nanosleep to use the new nanosleep functions
in ktimer.c
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
include/linux/posix-timers.h | 7 +-
kernel/posix-cpu-timers.c | 23 ++++--
kernel/posix-timers.c | 150 +++++++------------------------------------
3 files changed, 44 insertions(+), 136 deletions(-)
Index: linux-2.6.15-rc2-rework/include/linux/posix-timers.h
===================================================================
--- linux-2.6.15-rc2-rework.orig/include/linux/posix-timers.h
+++ linux-2.6.15-rc2-rework/include/linux/posix-timers.h
@@ -81,7 +81,7 @@ struct k_clock {
int (*clock_get) (const clockid_t which_clock, struct timespec * tp);
int (*timer_create) (struct k_itimer *timer);
int (*nsleep) (const clockid_t which_clock, int flags,
- struct timespec *);
+ struct timespec *, struct timespec __user *);
int (*timer_set) (struct k_itimer * timr, int flags,
struct itimerspec * new_setting,
struct itimerspec * old_setting);
@@ -95,7 +95,8 @@ void register_posix_clock(const clockid_
/* error handlers for timer_create, nanosleep and settime */
int do_posix_clock_notimer_create(struct k_itimer *timer);
-int do_posix_clock_nonanosleep(const clockid_t, int flags, struct timespec *);
+int do_posix_clock_nonanosleep(const clockid_t, int flags, struct timespec *,
+ struct timespec __user *);
int do_posix_clock_nosettime(const clockid_t, struct timespec *tp);
/* function to call to trigger timer event */
@@ -129,7 +130,7 @@ int posix_cpu_clock_get(const clockid_t
int posix_cpu_clock_set(const clockid_t which_clock, const struct timespec *ts);
int posix_cpu_timer_create(struct k_itimer *timer);
int posix_cpu_nsleep(const clockid_t which_clock, int flags,
- struct timespec *ts);
+ struct timespec *rqtp, struct timespec __user *rmtp);
int posix_cpu_timer_set(struct k_itimer *timer, int flags,
struct itimerspec *new, struct itimerspec *old);
int posix_cpu_timer_del(struct k_itimer *timer);
Index: linux-2.6.15-rc2-rework/kernel/posix-cpu-timers.c
===================================================================
--- linux-2.6.15-rc2-rework.orig/kernel/posix-cpu-timers.c
+++ linux-2.6.15-rc2-rework/kernel/posix-cpu-timers.c
@@ -1411,7 +1411,7 @@ void set_process_cpu_timer(struct task_s
static long posix_cpu_clock_nanosleep_restart(struct restart_block *);
int posix_cpu_nsleep(const clockid_t which_clock, int flags,
- struct timespec *rqtp)
+ struct timespec *rqtp, struct timespec __user *rmtp)
{
struct restart_block *restart_block =
&current_thread_info()->restart_block;
@@ -1436,7 +1436,6 @@ int posix_cpu_nsleep(const clockid_t whi
error = posix_cpu_timer_create(&timer);
timer.it_process = current;
if (!error) {
- struct timespec __user *rmtp;
static struct itimerspec zero_it;
struct itimerspec it = { .it_value = *rqtp,
.it_interval = {} };
@@ -1483,7 +1482,6 @@ int posix_cpu_nsleep(const clockid_t whi
/*
* Report back to the user the time still remaining.
*/
- rmtp = (struct timespec __user *) restart_block->arg1;
if (rmtp != NULL && !(flags & TIMER_ABSTIME) &&
copy_to_user(rmtp, &it.it_value, sizeof *rmtp))
return -EFAULT;
@@ -1491,6 +1489,7 @@ int posix_cpu_nsleep(const clockid_t whi
restart_block->fn = posix_cpu_clock_nanosleep_restart;
/* Caller already set restart_block->arg1 */
restart_block->arg0 = which_clock;
+ restart_block->arg1 = (unsigned long) rmtp;
restart_block->arg2 = rqtp->tv_sec;
restart_block->arg3 = rqtp->tv_nsec;
@@ -1504,10 +1503,15 @@ static long
posix_cpu_clock_nanosleep_restart(struct restart_block *restart_block)
{
clockid_t which_clock = restart_block->arg0;
- struct timespec t = { .tv_sec = restart_block->arg2,
- .tv_nsec = restart_block->arg3 };
+ struct timespec __user *rmtp;
+ struct timespec t;
+
+ rmtp = (struct timespec __user *) restart_block->arg1;
+ t.tv_sec = restart_block->arg2;
+ t.tv_nsec = restart_block->arg3;
+
restart_block->fn = do_no_restart_syscall;
- return posix_cpu_nsleep(which_clock, TIMER_ABSTIME, &t);
+ return posix_cpu_nsleep(which_clock, TIMER_ABSTIME, &t, rmtp);
}
@@ -1530,9 +1534,10 @@ static int process_cpu_timer_create(stru
return posix_cpu_timer_create(timer);
}
static int process_cpu_nsleep(const clockid_t which_clock, int flags,
- struct timespec *rqtp)
+ struct timespec *rqtp,
+ struct timespec __user *rmtp)
{
- return posix_cpu_nsleep(PROCESS_CLOCK, flags, rqtp);
+ return posix_cpu_nsleep(PROCESS_CLOCK, flags, rqtp, rmtp);
}
static int thread_cpu_clock_getres(const clockid_t which_clock,
struct timespec *tp)
@@ -1550,7 +1555,7 @@ static int thread_cpu_timer_create(struc
return posix_cpu_timer_create(timer);
}
static int thread_cpu_nsleep(const clockid_t which_clock, int flags,
- struct timespec *rqtp)
+ struct timespec *rqtp, struct timespec __user *rmtp)
{
return -EINVAL;
}
Index: linux-2.6.15-rc2-rework/kernel/posix-timers.c
===================================================================
--- linux-2.6.15-rc2-rework.orig/kernel/posix-timers.c
+++ linux-2.6.15-rc2-rework/kernel/posix-timers.c
@@ -209,7 +209,8 @@ static inline int common_timer_create(st
/*
* These ones are defined below.
*/
-static int common_nsleep(const clockid_t, int flags, struct timespec *t);
+static int common_nsleep(const clockid_t, int flags, struct timespec *t,
+ struct timespec __user *rmtp);
static void common_timer_get(struct k_itimer *, struct itimerspec *);
static int common_timer_set(struct k_itimer *, int,
struct itimerspec *, struct itimerspec *);
@@ -1227,7 +1228,7 @@ int do_posix_clock_notimer_create(struct
EXPORT_SYMBOL_GPL(do_posix_clock_notimer_create);
int do_posix_clock_nonanosleep(const clockid_t clock, int flags,
- struct timespec *t)
+ struct timespec *t, struct timespec __user *r)
{
#ifndef ENOTSUP
return -EOPNOTSUPP; /* aka ENOTSUP in userland for POSIX */
@@ -1387,7 +1388,27 @@ void clock_was_set(void)
up(&clock_was_set_lock);
}
-long clock_nanosleep_restart(struct restart_block *restart_block);
+/*
+ * nanosleep for monotonic and realtime clocks
+ */
+static int common_nsleep(const clockid_t which_clock, int flags,
+ struct timespec *tsave, struct timespec __user *rmtp)
+{
+ int mode = flags & TIMER_ABSTIME ? KTIMER_ABS : KTIMER_REL;
+
+ switch (which_clock) {
+ case CLOCK_REALTIME:
+ /* Posix madness. Only absolute timers on clock realtime
+ are affected by clock set. */
+ if (mode == KTIMER_ABS)
+ return ktimer_nanosleep_real(tsave, rmtp, mode);
+ case CLOCK_MONOTONIC:
+ return ktimer_nanosleep(tsave, rmtp, mode);
+ default:
+ break;
+ }
+ return -EINVAL;
+}
asmlinkage long
sys_clock_nanosleep(const clockid_t which_clock, int flags,
@@ -1395,9 +1416,6 @@ sys_clock_nanosleep(const clockid_t whic
struct timespec __user *rmtp)
{
struct timespec t;
- struct restart_block *restart_block =
- &(current_thread_info()->restart_block);
- int ret;
if (invalid_clockid(which_clock))
return -EINVAL;
@@ -1408,122 +1426,6 @@ sys_clock_nanosleep(const clockid_t whic
if (!timespec_valid(&t))
return -EINVAL;
- /*
- * Do this here as nsleep function does not have the real address.
- */
- restart_block->arg1 = (unsigned long)rmtp;
-
- ret = CLOCK_DISPATCH(which_clock, nsleep, (which_clock, flags, &t));
-
- if ((ret == -ERESTART_RESTARTBLOCK) && rmtp &&
- copy_to_user(rmtp, &t, sizeof (t)))
- return -EFAULT;
- return ret;
-}
-
-
-static int common_nsleep(const clockid_t which_clock,
- int flags, struct timespec *tsave)
-{
- struct timespec t, dum;
- DECLARE_WAITQUEUE(abs_wqueue, current);
- u64 rq_time = (u64)0;
- s64 left;
- int abs;
- struct restart_block *restart_block =
- &current_thread_info()->restart_block;
-
- abs_wqueue.flags = 0;
- abs = flags & TIMER_ABSTIME;
-
- if (restart_block->fn == clock_nanosleep_restart) {
- /*
- * Interrupted by a non-delivered signal, pick up remaining
- * time and continue. Remaining time is in arg2 & 3.
- */
- restart_block->fn = do_no_restart_syscall;
-
- rq_time = restart_block->arg3;
- rq_time = (rq_time << 32) + restart_block->arg2;
- if (!rq_time)
- return -EINTR;
- left = rq_time - get_jiffies_64();
- if (left <= (s64)0)
- return 0; /* Already passed */
- }
-
- if (abs && (posix_clocks[which_clock].clock_get !=
- posix_clocks[CLOCK_MONOTONIC].clock_get))
- add_wait_queue(&nanosleep_abs_wqueue, &abs_wqueue);
-
- do {
- t = *tsave;
- if (abs || !rq_time) {
- adjust_abs_time(&posix_clocks[which_clock], &t, abs,
- &rq_time, &dum);
- }
-
- left = rq_time - get_jiffies_64();
- if (left >= (s64)MAX_JIFFY_OFFSET)
- left = (s64)MAX_JIFFY_OFFSET;
- if (left < (s64)0)
- break;
-
- schedule_timeout_interruptible(left);
-
- left = rq_time - get_jiffies_64();
- } while (left > (s64)0 && !test_thread_flag(TIF_SIGPENDING));
-
- if (abs_wqueue.task_list.next)
- finish_wait(&nanosleep_abs_wqueue, &abs_wqueue);
-
- if (left > (s64)0) {
-
- /*
- * Always restart abs calls from scratch to pick up any
- * clock shifting that happened while we are away.
- */
- if (abs)
- return -ERESTARTNOHAND;
-
- left *= TICK_NSEC;
- tsave->tv_sec = div_long_long_rem(left,
- NSEC_PER_SEC,
- &tsave->tv_nsec);
- /*
- * Restart works by saving the time remaing in
- * arg2 & 3 (it is 64-bits of jiffies). The other
- * info we need is the clock_id (saved in arg0).
- * The sys_call interface needs the users
- * timespec return address which _it_ saves in arg1.
- * Since we have cast the nanosleep call to a clock_nanosleep
- * both can be restarted with the same code.
- */
- restart_block->fn = clock_nanosleep_restart;
- restart_block->arg0 = which_clock;
- /*
- * Caller sets arg1
- */
- restart_block->arg2 = rq_time & 0xffffffffLL;
- restart_block->arg3 = rq_time >> 32;
-
- return -ERESTART_RESTARTBLOCK;
- }
-
- return 0;
-}
-/*
- * This will restart clock_nanosleep.
- */
-long
-clock_nanosleep_restart(struct restart_block *restart_block)
-{
- struct timespec t;
- int ret = common_nsleep(restart_block->arg0, 0, &t);
-
- if ((ret == -ERESTART_RESTARTBLOCK) && restart_block->arg1 &&
- copy_to_user((struct timespec __user *)(restart_block->arg1), &t,
- sizeof (t)))
- return -EFAULT;
- return ret;
+ return CLOCK_DISPATCH(which_clock, nsleep,
+ (which_clock, flags, &t, rmtp));
}
--
* [patch 22/43] Convert posix interval timers to use ktimers
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
` (20 preceding siblings ...)
2005-12-01 0:03 ` [patch 21/43] Switch clock_nanosleep to ktimer nanosleep API Thomas Gleixner
@ 2005-12-01 0:03 ` Thomas Gleixner
2005-12-01 0:03 ` [patch 23/43] Simplify ktimers rearm code Thomas Gleixner
` (20 subsequent siblings)
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:03 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment (ktimer-convert-posix-timers.patch)
- convert posix-timers.c to use ktimers
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
include/linux/ktimer.h | 7
include/linux/posix-timers.h | 121 +++++--
include/linux/time.h | 3
kernel/posix-timers.c | 690 +++++++++----------------------------------
4 files changed, 246 insertions(+), 575 deletions(-)
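The interesting arithmetic in this patch is the `BITS_PER_LONG < 64` variant of `forward_posix_timer()`, which must compute the overrun count `delta / incr` without a 64/64 division. Here is a user-space sketch of just that technique (the function name and framing are ours; the shift loop mirrors the patch): the divisor is shifted down until it fits in 32 bits, the dividend is shifted by the same amount so the quotient is nearly unchanged, and then a cheap 64/32 division (`do_div()` in the kernel) suffices. The lost low bits can make the estimate off by one, which the kernel corrects afterwards when it verifies that the forwarded expiry really lies in the future.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Estimate how many whole intervals of incr_ns fit into delta_ns,
 * using only a 64/32 division.  Caller must ensure incr_ns != 0
 * (the kernel checks it.real.incr.tv64 before forwarding).
 */
static uint64_t overrun_estimate(uint64_t delta_ns, uint64_t incr_ns)
{
	uint64_t dclc = delta_ns, div = incr_ns;
	int sft = 0;

	/* Make sure the divisor is less than 2^32 */
	while (div >> 32) {
		sft++;
		div >>= 1;
	}
	dclc >>= sft;		/* scale the dividend by the same amount */
	return dclc / div;	/* stands in for do_div(dclc, div) */
}
```

For example, with a 5-second interval (divisor > 2^32 nanoseconds) and a 12-second delta, one shift brings the divisor under 2^32 and the estimate is still the exact quotient, 2.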
Index: linux-2.6.15-rc2-rework/include/linux/ktimer.h
===================================================================
--- linux-2.6.15-rc2-rework.orig/include/linux/ktimer.h
+++ linux-2.6.15-rc2-rework/include/linux/ktimer.h
@@ -122,6 +122,13 @@ struct ktimer_base {
#define KTIMER_POISON ((void *) 0x00100101)
+/*
+ * clock_was_set() is a NOP for non-high-resolution systems. The
+ * time-sorted order guarantees that a timer does not expire early and
+ * is expired in the next softirq when the clock was advanced.
+ */
+#define clock_was_set() do { } while (0)
+
/* Exported timer functions: */
/* Initialize timers: */
Index: linux-2.6.15-rc2-rework/include/linux/posix-timers.h
===================================================================
--- linux-2.6.15-rc2-rework.orig/include/linux/posix-timers.h
+++ linux-2.6.15-rc2-rework/include/linux/posix-timers.h
@@ -51,12 +51,9 @@ struct k_itimer {
struct sigqueue *sigq; /* signal queue entry. */
union {
struct {
- struct timer_list timer;
- /* clock abs_timer_list: */
- struct list_head abs_timer_entry;
- /* wall_to_monotonic used when set: */
- struct timespec wall_to_prev;
- unsigned long incr; /* interval in jiffies */
+ struct ktimer timer;
+ ktime_t incr;
+ int overrun;
} real;
struct cpu_timer_list cpu;
struct {
@@ -68,11 +65,6 @@ struct k_itimer {
} it;
};
-struct k_clock_abs {
- struct list_head list;
- spinlock_t lock;
-};
-
struct k_clock {
int res; /* in nanoseconds */
int (*clock_getres) (const clockid_t which_clock, struct timespec *tp);
@@ -102,28 +94,91 @@ int do_posix_clock_nosettime(const clock
/* function to call to trigger timer event */
int posix_timer_event(struct k_itimer *timr, int si_private);
-struct now_struct {
- unsigned long jiffies;
-};
-
-#define posix_get_now(now) \
- do { (now)->jiffies = jiffies; } while (0)
-
-#define posix_time_before(timer, now) \
- time_before((timer)->expires, (now)->jiffies)
-
-#define posix_bump_timer(timr, now) \
- do { \
- long delta, orun; \
- \
- delta = (now).jiffies - (timr)->it.real.timer.expires; \
- if (delta >= 0) { \
- orun = 1 + (delta / (timr)->it.real.incr); \
- (timr)->it.real.timer.expires += \
- orun * (timr)->it.real.incr; \
- (timr)->it_overrun += orun; \
- } \
- } while (0)
+#if BITS_PER_LONG < 64
+static inline ktime_t forward_posix_timer(struct k_itimer *t, const ktime_t now)
+{
+ ktime_t delta = ktime_sub(now, t->it.real.timer.expires);
+ unsigned long orun = 1;
+
+ if (delta.tv64 < 0)
+ goto out;
+
+ if (unlikely(delta.tv64 > t->it.real.incr.tv64)) {
+
+ int sft = 0;
+ u64 div, dclc, inc, dns;
+
+ dclc = dns = ktime_to_ns(delta);
+ div = inc = ktime_to_ns(t->it.real.incr);
+ /* Make sure the divisor is less than 2^32 */
+ while(div >> 32) {
+ sft++;
+ div >>= 1;
+ }
+ dclc >>= sft;
+ do_div(dclc, (unsigned long) div);
+ orun = (unsigned long) dclc;
+ if (likely(!(inc >> 32)))
+ dclc *= (unsigned long) inc;
+ else
+ dclc *= inc;
+ t->it.real.timer.expires = ktime_add_ns(t->it.real.timer.expires,
+ dclc);
+ } else {
+ t->it.real.timer.expires = ktime_add(t->it.real.timer.expires,
+ t->it.real.incr);
+ }
+ /*
+ * Here is the correction for exact. Also covers delta == incr
+ * which is the else clause above.
+ */
+ if (t->it.real.timer.expires.tv64 <= now.tv64) {
+ t->it.real.timer.expires = ktime_add(t->it.real.timer.expires,
+ t->it.real.incr);
+ orun++;
+ }
+ t->it_overrun += orun;
+
+ out:
+ return ktime_sub(t->it.real.timer.expires, now);
+}
+#else
+static inline ktime_t forward_posix_timer(struct k_itimer *t, const ktime_t now)
+{
+ ktime_t delta = ktime_sub(now, t->it.real.timer.expires);
+ unsigned long orun = 1;
+
+ if (delta.tv64 < 0)
+ goto out;
+
+ if (unlikely(delta.tv64 > t->it.real.incr.tv64)) {
+
+ u64 dns, inc;
+
+ dns = ktime_to_ns(delta);
+ inc = ktime_to_ns(t->it.real.incr);
+
+ orun = dns / inc;
+ t->it.real.timer.expires = ktime_add_ns(t->it.real.timer.expires,
+ orun * inc);
+ } else {
+ t->it.real.timer.expires = ktime_add(t->it.real.timer.expires,
+ t->it.real.incr);
+ }
+ /*
+ * Here is the correction for exact. Also covers delta == incr
+ * which is the else clause above.
+ */
+ if (t->it.real.timer.expires.tv64 <= now.tv64) {
+ t->it.real.timer.expires = ktime_add(t->it.real.timer.expires,
+ t->it.real.incr);
+ orun++;
+ }
+ t->it_overrun += orun;
+ out:
+ return ktime_sub(t->it.real.timer.expires, now);
+}
+#endif
int posix_cpu_clock_getres(const clockid_t which_clock, struct timespec *ts);
int posix_cpu_clock_get(const clockid_t which_clock, struct timespec *ts);
Index: linux-2.6.15-rc2-rework/include/linux/time.h
===================================================================
--- linux-2.6.15-rc2-rework.orig/include/linux/time.h
+++ linux-2.6.15-rc2-rework/include/linux/time.h
@@ -73,8 +73,7 @@ struct timespec current_kernel_time(void
extern void do_gettimeofday(struct timeval *tv);
extern int do_settimeofday(struct timespec *tv);
extern int do_sys_settimeofday(struct timespec *tv, struct timezone *tz);
-extern void clock_was_set(void); // call whenever the clock is set
-extern int do_posix_clock_monotonic_gettime(struct timespec *tp);
+extern void do_posix_clock_monotonic_gettime(struct timespec *ts);
extern long do_utimes(char __user *filename, struct timeval *times);
struct itimerval;
extern int do_setitimer(int which, struct itimerval *value,
Index: linux-2.6.15-rc2-rework/kernel/posix-timers.c
===================================================================
--- linux-2.6.15-rc2-rework.orig/kernel/posix-timers.c
+++ linux-2.6.15-rc2-rework/kernel/posix-timers.c
@@ -35,7 +35,6 @@
#include <linux/interrupt.h>
#include <linux/slab.h>
#include <linux/time.h>
-#include <linux/calc64.h>
#include <asm/uaccess.h>
#include <asm/semaphore.h>
@@ -49,12 +48,6 @@
#include <linux/workqueue.h>
#include <linux/module.h>
-#define CLOCK_REALTIME_RES TICK_NSEC /* In nano seconds. */
-
-static inline u64 mpy_l_X_l_ll(unsigned long mpy1,unsigned long mpy2)
-{
- return (u64)mpy1 * mpy2;
-}
/*
* Management arrays for POSIX timers. Timers are kept in slab memory
* Timer ids are allocated by an external routine that keeps track of the
@@ -140,18 +133,18 @@ static DEFINE_SPINLOCK(idr_lock);
*/
static struct k_clock posix_clocks[MAX_CLOCKS];
+
/*
- * We only have one real clock that can be set so we need only one abs list,
- * even if we should want to have several clocks with differing resolutions.
+ * These ones are defined below.
*/
-static struct k_clock_abs abs_list = {.list = LIST_HEAD_INIT(abs_list.list),
- .lock = SPIN_LOCK_UNLOCKED};
+static int common_nsleep(const clockid_t, int flags, struct timespec *t,
+ struct timespec __user *rmtp);
+static void common_timer_get(struct k_itimer *, struct itimerspec *);
+static int common_timer_set(struct k_itimer *, int,
+ struct itimerspec *, struct itimerspec *);
+static int common_timer_del(struct k_itimer *timer);
-static void posix_timer_fn(unsigned long);
-static u64 do_posix_clock_monotonic_gettime_parts(
- struct timespec *tp, struct timespec *mo);
-int do_posix_clock_monotonic_gettime(struct timespec *tp);
-static int do_posix_clock_monotonic_get(const clockid_t, struct timespec *tp);
+static void posix_timer_fn(void *data);
static struct k_itimer *lock_timer(timer_t timer_id, unsigned long *flags);
@@ -199,22 +192,25 @@ static inline int common_clock_set(const
static inline int common_timer_create(struct k_itimer *new_timer)
{
- INIT_LIST_HEAD(&new_timer->it.real.abs_timer_entry);
- init_timer(&new_timer->it.real.timer);
- new_timer->it.real.timer.data = (unsigned long) new_timer;
+ return -EINVAL;
+}
+
+static int timer_create_mono(struct k_itimer *new_timer)
+{
+ ktimer_init(&new_timer->it.real.timer);
+ new_timer->it.real.timer.data = new_timer;
+ new_timer->it.real.timer.function = posix_timer_fn;
+ return 0;
+}
+
+static int timer_create_real(struct k_itimer *new_timer)
+{
+ ktimer_init_clock(&new_timer->it.real.timer, CLOCK_REALTIME);
+ new_timer->it.real.timer.data = new_timer;
new_timer->it.real.timer.function = posix_timer_fn;
return 0;
}
-/*
- * These ones are defined below.
- */
-static int common_nsleep(const clockid_t, int flags, struct timespec *t,
- struct timespec __user *rmtp);
-static void common_timer_get(struct k_itimer *, struct itimerspec *);
-static int common_timer_set(struct k_itimer *, int,
- struct itimerspec *, struct itimerspec *);
-static int common_timer_del(struct k_itimer *timer);
/*
* Return nonzero iff we know a priori this clockid_t value is bogus.
@@ -234,19 +230,44 @@ static inline int invalid_clockid(const
return 1;
}
+/*
+ * Get real time for posix timers
+ */
+static int posix_ktime_get_real_ts(clockid_t which_clock, struct timespec *tp)
+{
+ ktime_get_real_ts(tp);
+ return 0;
+}
+
+/*
+ * Get monotonic time for posix timers
+ */
+static int posix_ktime_get_ts(clockid_t which_clock, struct timespec *tp)
+{
+ ktime_get_ts(tp);
+ return 0;
+}
+
+void do_posix_clock_monotonic_gettime(struct timespec *ts)
+{
+ ktime_get_ts(ts);
+}
/*
* Initialize everything, well, just everything in Posix clocks/timers ;)
*/
static __init int init_posix_timers(void)
{
- struct k_clock clock_realtime = {.res = CLOCK_REALTIME_RES,
- .abs_struct = &abs_list
+ struct k_clock clock_realtime = {
+ .clock_getres = ktimer_get_res_clock,
+ .clock_get = posix_ktime_get_real_ts,
+ .timer_create = timer_create_real,
};
- struct k_clock clock_monotonic = {.res = CLOCK_REALTIME_RES,
- .abs_struct = NULL,
- .clock_get = do_posix_clock_monotonic_get,
- .clock_set = do_posix_clock_nosettime
+ struct k_clock clock_monotonic = {
+ .clock_getres = ktimer_get_res,
+ .clock_get = posix_ktime_get_ts,
+ .clock_set = do_posix_clock_nosettime,
+ .timer_create = timer_create_mono,
};
register_posix_clock(CLOCK_REALTIME, &clock_realtime);
@@ -260,117 +281,15 @@ static __init int init_posix_timers(void
__initcall(init_posix_timers);
-static void tstojiffie(struct timespec *tp, int res, u64 *jiff)
-{
- long sec = tp->tv_sec;
- long nsec = tp->tv_nsec + res - 1;
-
- if (nsec >= NSEC_PER_SEC) {
- sec++;
- nsec -= NSEC_PER_SEC;
- }
-
- /*
- * The scaling constants are defined in <linux/time.h>
- * The difference between there and here is that we do the
- * res rounding and compute a 64-bit result (well so does that
- * but it then throws away the high bits).
- */
- *jiff = (mpy_l_X_l_ll(sec, SEC_CONVERSION) +
- (mpy_l_X_l_ll(nsec, NSEC_CONVERSION) >>
- (NSEC_JIFFIE_SC - SEC_JIFFIE_SC))) >> SEC_JIFFIE_SC;
-}
-
-/*
- * This function adjusts the timer as needed as a result of the clock
- * being set. It should only be called for absolute timers, and then
- * under the abs_list lock. It computes the time difference and sets
- * the new jiffies value in the timer. It also updates the timers
- * reference wall_to_monotonic value. It is complicated by the fact
- * that tstojiffies() only handles positive times and it needs to work
- * with both positive and negative times. Also, for negative offsets,
- * we need to defeat the res round up.
- *
- * Return is true if there is a new time, else false.
- */
-static long add_clockset_delta(struct k_itimer *timr,
- struct timespec *new_wall_to)
-{
- struct timespec delta;
- int sign = 0;
- u64 exp;
-
- set_normalized_timespec(&delta,
- new_wall_to->tv_sec -
- timr->it.real.wall_to_prev.tv_sec,
- new_wall_to->tv_nsec -
- timr->it.real.wall_to_prev.tv_nsec);
- if (likely(!(delta.tv_sec | delta.tv_nsec)))
- return 0;
- if (delta.tv_sec < 0) {
- set_normalized_timespec(&delta,
- -delta.tv_sec,
- 1 - delta.tv_nsec -
- posix_clocks[timr->it_clock].res);
- sign++;
- }
- tstojiffie(&delta, posix_clocks[timr->it_clock].res, &exp);
- timr->it.real.wall_to_prev = *new_wall_to;
- timr->it.real.timer.expires += (sign ? -exp : exp);
- return 1;
-}
-
-static void remove_from_abslist(struct k_itimer *timr)
-{
- if (!list_empty(&timr->it.real.abs_timer_entry)) {
- spin_lock(&abs_list.lock);
- list_del_init(&timr->it.real.abs_timer_entry);
- spin_unlock(&abs_list.lock);
- }
-}
-
static void schedule_next_timer(struct k_itimer *timr)
{
- struct timespec new_wall_to;
- struct now_struct now;
- unsigned long seq;
-
- /*
- * Set up the timer for the next interval (if there is one).
- * Note: this code uses the abs_timer_lock to protect
- * it.real.wall_to_prev and must hold it until exp is set, not exactly
- * obvious...
-
- * This function is used for CLOCK_REALTIME* and
- * CLOCK_MONOTONIC* timers. If we ever want to handle other
- * CLOCKs, the calling code (do_schedule_next_timer) would need
- * to pull the "clock" info from the timer and dispatch the
- * "other" CLOCKs "next timer" code (which, I suppose should
- * also be added to the k_clock structure).
- */
- if (!timr->it.real.incr)
+ if (timr->it.real.incr.tv64 == 0)
return;
- do {
- seq = read_seqbegin(&xtime_lock);
- new_wall_to = wall_to_monotonic;
- posix_get_now(&now);
- } while (read_seqretry(&xtime_lock, seq));
-
- if (!list_empty(&timr->it.real.abs_timer_entry)) {
- spin_lock(&abs_list.lock);
- add_clockset_delta(timr, &new_wall_to);
-
- posix_bump_timer(timr, now);
-
- spin_unlock(&abs_list.lock);
- } else {
- posix_bump_timer(timr, now);
- }
- timr->it_overrun_last = timr->it_overrun;
- timr->it_overrun = -1;
+ timr->it.real.timer.overrun = -1;
++timr->it_requeue_pending;
- add_timer(&timr->it.real.timer);
+ ktimer_start(&timr->it.real.timer, &timr->it.real.incr, KTIMER_FORWARD);
+ timr->it_overrun_last += timr->it.real.timer.overrun;
}
/*
@@ -394,11 +313,15 @@ void do_schedule_next_timer(struct sigin
if (!timr || timr->it_requeue_pending != info->si_sys_private)
goto exit;
- if (timr->it_clock < 0) /* CPU clock */
+ if (timr->it_clock < 0) {
+ /* CPU clock */
posix_cpu_timer_schedule(timr);
- else
+ info->si_overrun = timr->it_overrun_last;
+ } else {
schedule_next_timer(timr);
- info->si_overrun = timr->it_overrun_last;
+ info->si_overrun = timr->it_overrun_last;
+ timr->it_overrun_last = 0;
+ }
exit:
if (timr)
unlock_timer(timr, flags);
@@ -408,14 +331,7 @@ int posix_timer_event(struct k_itimer *t
{
memset(&timr->sigq->info, 0, sizeof(siginfo_t));
timr->sigq->info.si_sys_private = si_private;
- /*
- * Send signal to the process that owns this timer.
-
- * This code assumes that all the possible abs_lists share the
- * same lock (there is only one list at this time). If this is
- * not the case, the CLOCK info would need to be used to find
- * the proper abs list lock.
- */
+ /* Send signal to the process that owns this timer.*/
timr->sigq->info.si_signo = timr->it_sigev_signo;
timr->sigq->info.si_errno = 0;
@@ -449,65 +365,28 @@ EXPORT_SYMBOL_GPL(posix_timer_event);
* This code is for CLOCK_REALTIME* and CLOCK_MONOTONIC* timers.
*/
-static void posix_timer_fn(unsigned long __data)
+static void posix_timer_fn(void *data)
{
- struct k_itimer *timr = (struct k_itimer *) __data;
+ struct k_itimer *timr = data;
unsigned long flags;
- unsigned long seq;
- struct timespec delta, new_wall_to;
- u64 exp = 0;
- int do_notify = 1;
+ int si_private = 0;
spin_lock_irqsave(&timr->it_lock, flags);
- if (!list_empty(&timr->it.real.abs_timer_entry)) {
- spin_lock(&abs_list.lock);
- do {
- seq = read_seqbegin(&xtime_lock);
- new_wall_to = wall_to_monotonic;
- } while (read_seqretry(&xtime_lock, seq));
- set_normalized_timespec(&delta,
- new_wall_to.tv_sec -
- timr->it.real.wall_to_prev.tv_sec,
- new_wall_to.tv_nsec -
- timr->it.real.wall_to_prev.tv_nsec);
- if (likely((delta.tv_sec | delta.tv_nsec ) == 0)) {
- /* do nothing, timer is on time */
- } else if (delta.tv_sec < 0) {
- /* do nothing, timer is already late */
- } else {
- /* timer is early due to a clock set */
- tstojiffie(&delta,
- posix_clocks[timr->it_clock].res,
- &exp);
- timr->it.real.wall_to_prev = new_wall_to;
- timr->it.real.timer.expires += exp;
- add_timer(&timr->it.real.timer);
- do_notify = 0;
- }
- spin_unlock(&abs_list.lock);
- }
- if (do_notify) {
- int si_private=0;
+ if (timr->it.real.incr.tv64 != 0)
+ si_private = ++timr->it_requeue_pending;
- if (timr->it.real.incr)
- si_private = ++timr->it_requeue_pending;
- else {
- remove_from_abslist(timr);
- }
+ if (posix_timer_event(timr, si_private))
+ /*
+ * The signal was not sent because it is ignored;
+ * we will not get a callback to restart it AND
+ * it should be restarted.
+ */
+ schedule_next_timer(timr);
- if (posix_timer_event(timr, si_private))
- /*
- * signal was not sent because of sig_ignor
- * we will not get a call back to restart it AND
- * it should be restarted.
- */
- schedule_next_timer(timr);
- }
unlock_timer(timr, flags); /* hold thru abs lock to keep irq off */
}
-
static inline struct task_struct * good_sigevent(sigevent_t * event)
{
struct task_struct *rtn = current->group_leader;
@@ -713,7 +592,8 @@ out:
*/
static int good_timespec(const struct timespec *ts)
{
- if ((!ts) || !timespec_valid(ts))
+ if ((!ts) || (ts->tv_sec < 0) ||
+ ((unsigned) ts->tv_nsec >= NSEC_PER_SEC))
return 0;
return 1;
}
@@ -770,39 +650,41 @@ static struct k_itimer * lock_timer(time
static void
common_timer_get(struct k_itimer *timr, struct itimerspec *cur_setting)
{
- unsigned long expires;
- struct now_struct now;
-
- do
- expires = timr->it.real.timer.expires;
- while ((volatile long) (timr->it.real.timer.expires) != expires);
-
- posix_get_now(&now);
-
- if (expires &&
- ((timr->it_sigev_notify & ~SIGEV_THREAD_ID) == SIGEV_NONE) &&
- !timr->it.real.incr &&
- posix_time_before(&timr->it.real.timer, &now))
- timr->it.real.timer.expires = expires = 0;
- if (expires) {
- if (timr->it_requeue_pending & REQUEUE_PENDING ||
- (timr->it_sigev_notify & ~SIGEV_THREAD_ID) == SIGEV_NONE) {
- posix_bump_timer(timr, now);
- expires = timr->it.real.timer.expires;
- }
- else
- if (!timer_pending(&timr->it.real.timer))
- expires = 0;
- if (expires)
- expires -= now.jiffies;
- }
- jiffies_to_timespec(expires, &cur_setting->it_value);
- jiffies_to_timespec(timr->it.real.incr, &cur_setting->it_interval);
+ ktime_t expires, now, remaining;
+ struct ktimer *timer = &timr->it.real.timer;
- if (cur_setting->it_value.tv_sec < 0) {
+ memset(cur_setting, 0, sizeof(struct itimerspec));
+ expires = ktimer_get_expiry(timer, &now);
+ remaining = ktime_sub(expires, now);
+
+ /* Time left ? or timer pending */
+ if (remaining.tv64 > 0 || ktimer_active(timer))
+ goto calci;
+ /* interval timer ? */
+ if (timr->it.real.incr.tv64 == 0)
+ return;
+ /*
+ * When a requeue is pending or this is a SIGEV_NONE timer
+ * move the expiry time forward by intervals, so expiry is >
+ * now.
+ * The active (non SIGEV_NONE) rearm should be done
+ * automatically by the ktimer REARM mode. That's the next
+ * iteration. The REQUEUE_PENDING part will go away!
+ */
+ if (timr->it_requeue_pending & REQUEUE_PENDING ||
+ (timr->it_sigev_notify & ~SIGEV_THREAD_ID) == SIGEV_NONE) {
+ remaining = forward_posix_timer(timr, now);
+ }
+ calci:
+ /* interval timer ? */
+ if (timr->it.real.incr.tv64 != 0)
+ cur_setting->it_interval =
+ ktime_to_timespec(timr->it.real.incr);
+ /* Return 0 only, when the timer is expired and not pending */
+ if (remaining.tv64 <= 0)
cur_setting->it_value.tv_nsec = 1;
- cur_setting->it_value.tv_sec = 0;
- }
+ else
+ cur_setting->it_value = ktime_to_timespec(remaining);
}
/* Get the time remaining on a POSIX.1b interval timer. */
@@ -826,6 +708,7 @@ sys_timer_gettime(timer_t timer_id, stru
return 0;
}
+
/*
* Get the number of overruns of a POSIX.1b interval timer. This is to
* be the overrun of the timer last delivered. At the same time we are
@@ -852,84 +735,6 @@ sys_timer_getoverrun(timer_t timer_id)
return overrun;
}
-/*
- * Adjust for absolute time
- *
- * If absolute time is given and it is not CLOCK_MONOTONIC, we need to
- * adjust for the offset between the timer clock (CLOCK_MONOTONIC) and
- * what ever clock he is using.
- *
- * If it is relative time, we need to add the current (CLOCK_MONOTONIC)
- * time to it to get the proper time for the timer.
- */
-static int adjust_abs_time(struct k_clock *clock, struct timespec *tp,
- int abs, u64 *exp, struct timespec *wall_to)
-{
- struct timespec now;
- struct timespec oc = *tp;
- u64 jiffies_64_f;
- int rtn =0;
-
- if (abs) {
- /*
- * The mask pick up the 4 basic clocks
- */
- if (!((clock - &posix_clocks[0]) & ~CLOCKS_MASK)) {
- jiffies_64_f = do_posix_clock_monotonic_gettime_parts(
- &now, wall_to);
- /*
- * If we are doing a MONOTONIC clock
- */
- if((clock - &posix_clocks[0]) & CLOCKS_MONO){
- now.tv_sec += wall_to->tv_sec;
- now.tv_nsec += wall_to->tv_nsec;
- }
- } else {
- /*
- * Not one of the basic clocks
- */
- clock->clock_get(clock - posix_clocks, &now);
- jiffies_64_f = get_jiffies_64();
- }
- /*
- * Take away now to get delta and normalize
- */
- set_normalized_timespec(&oc, oc.tv_sec - now.tv_sec,
- oc.tv_nsec - now.tv_nsec);
- }else{
- jiffies_64_f = get_jiffies_64();
- }
- /*
- * Check if the requested time is prior to now (if so set now)
- */
- if (oc.tv_sec < 0)
- oc.tv_sec = oc.tv_nsec = 0;
-
- if (oc.tv_sec | oc.tv_nsec)
- set_normalized_timespec(&oc, oc.tv_sec,
- oc.tv_nsec + clock->res);
- tstojiffie(&oc, clock->res, exp);
-
- /*
- * Check if the requested time is more than the timer code
- * can handle (if so we error out but return the value too).
- */
- if (*exp > ((u64)MAX_JIFFY_OFFSET))
- /*
- * This is a considered response, not exactly in
- * line with the standard (in fact it is silent on
- * possible overflows). We assume such a large
- * value is ALMOST always a programming error and
- * try not to compound it by setting a really dumb
- * value.
- */
- rtn = -EINVAL;
- /*
- * return the actual jiffies expire time, full 64 bits
- */
- *exp += jiffies_64_f;
- return rtn;
-}
/* Set a POSIX.1b interval timer. */
/* timr->it_lock is taken. */
@@ -937,68 +742,51 @@ static inline int
common_timer_set(struct k_itimer *timr, int flags,
struct itimerspec *new_setting, struct itimerspec *old_setting)
{
- struct k_clock *clock = &posix_clocks[timr->it_clock];
- u64 expire_64;
+ ktime_t expires;
+ int mode;
if (old_setting)
common_timer_get(timr, old_setting);
/* disable the timer */
- timr->it.real.incr = 0;
+ timr->it.real.incr.tv64 = 0;
/*
* careful here. If smp we could be in the "fire" routine which will
* be spinning as we hold the lock. But this is ONLY an SMP issue.
*/
- if (try_to_del_timer_sync(&timr->it.real.timer) < 0) {
-#ifdef CONFIG_SMP
- /*
- * It can only be active if on an other cpu. Since
- * we have cleared the interval stuff above, it should
- * clear once we release the spin lock. Of course once
- * we do that anything could happen, including the
- * complete melt down of the timer. So return with
- * a "retry" exit status.
- */
+ if (ktimer_try_to_cancel(&timr->it.real.timer) < 0)
return TIMER_RETRY;
-#endif
- }
-
- remove_from_abslist(timr);
timr->it_requeue_pending = (timr->it_requeue_pending + 2) &
~REQUEUE_PENDING;
timr->it_overrun_last = 0;
- timr->it_overrun = -1;
- /*
- *switch off the timer when it_value is zero
- */
- if (!new_setting->it_value.tv_sec && !new_setting->it_value.tv_nsec) {
- timr->it.real.timer.expires = 0;
+
+ /* switch off the timer when it_value is zero */
+ if (!new_setting->it_value.tv_sec && !new_setting->it_value.tv_nsec)
return 0;
- }
- if (adjust_abs_time(clock,
- &new_setting->it_value, flags & TIMER_ABSTIME,
- &expire_64, &(timr->it.real.wall_to_prev))) {
- return -EINVAL;
- }
- timr->it.real.timer.expires = (unsigned long)expire_64;
- tstojiffie(&new_setting->it_interval, clock->res, &expire_64);
- timr->it.real.incr = (unsigned long)expire_64;
+ mode = flags & TIMER_ABSTIME ? KTIMER_ABS : KTIMER_REL;
- /*
- * We do not even queue SIGEV_NONE timers! But we do put them
- * in the abs list so we can do that right.
+ /* Posix madness. Only absolute CLOCK_REALTIME timers
+ * are affected by clock sets. So we must reinitialize
+ * the timer.
*/
+ if (timr->it_clock == CLOCK_REALTIME && mode == KTIMER_ABS)
+ timer_create_real(timr);
+ else
+ timer_create_mono(timr);
+
+ expires = timespec_to_ktime(new_setting->it_value);
+
+ /* Convert and round the interval */
+ timr->it.real.incr = ktimer_round_timespec(&timr->it.real.timer,
+ &new_setting->it_interval);
+
+ /* SIGEV_NONE timers are not queued ! See common_timer_get */
if (((timr->it_sigev_notify & ~SIGEV_THREAD_ID) != SIGEV_NONE))
- add_timer(&timr->it.real.timer);
+ ktimer_start(&timr->it.real.timer, &expires,
+ mode | KTIMER_NOCHECK | KTIMER_ROUND);
- if (flags & TIMER_ABSTIME && clock->abs_struct) {
- spin_lock(&clock->abs_struct->lock);
- list_add_tail(&(timr->it.real.abs_timer_entry),
- &(clock->abs_struct->list));
- spin_unlock(&clock->abs_struct->lock);
- }
return 0;
}
@@ -1033,6 +821,7 @@ retry:
unlock_timer(timr, flag);
if (error == TIMER_RETRY) {
+ wait_for_ktimer(&timr->it.real.timer);
rtn = NULL; // We already got the old time...
goto retry;
}
@@ -1046,24 +835,10 @@ retry:
static inline int common_timer_del(struct k_itimer *timer)
{
- timer->it.real.incr = 0;
+ timer->it.real.incr.tv64 = 0;
- if (try_to_del_timer_sync(&timer->it.real.timer) < 0) {
-#ifdef CONFIG_SMP
- /*
- * It can only be active if on an other cpu. Since
- * we have cleared the interval stuff above, it should
- * clear once we release the spin lock. Of course once
- * we do that anything could happen, including the
- * complete melt down of the timer. So return with
- * a "retry" exit status.
- */
+ if (ktimer_try_to_cancel(&timer->it.real.timer) < 0)
return TIMER_RETRY;
-#endif
- }
-
- remove_from_abslist(timer);
-
return 0;
}
@@ -1079,24 +854,17 @@ sys_timer_delete(timer_t timer_id)
struct k_itimer *timer;
long flags;
-#ifdef CONFIG_SMP
- int error;
retry_delete:
-#endif
timer = lock_timer(timer_id, &flags);
if (!timer)
return -EINVAL;
-#ifdef CONFIG_SMP
- error = timer_delete_hook(timer);
-
- if (error == TIMER_RETRY) {
+ if (timer_delete_hook(timer) == TIMER_RETRY) {
unlock_timer(timer, flags);
+ wait_for_ktimer(&timer->it.real.timer);
goto retry_delete;
}
-#else
- timer_delete_hook(timer);
-#endif
+
spin_lock(&current->sighand->siglock);
list_del(&timer->list);
spin_unlock(&current->sighand->siglock);
@@ -1113,6 +881,7 @@ retry_delete:
release_posix_timer(timer, IT_ID_SET);
return 0;
}
+
/*
* return timer owned by the process, used by exit_itimers
*/
@@ -1120,22 +889,14 @@ static inline void itimer_delete(struct
{
unsigned long flags;
-#ifdef CONFIG_SMP
- int error;
retry_delete:
-#endif
spin_lock_irqsave(&timer->it_lock, flags);
-#ifdef CONFIG_SMP
- error = timer_delete_hook(timer);
-
- if (error == TIMER_RETRY) {
+ if (timer_delete_hook(timer) == TIMER_RETRY) {
unlock_timer(timer, flags);
+ wait_for_ktimer(&timer->it.real.timer);
goto retry_delete;
}
-#else
- timer_delete_hook(timer);
-#endif
list_del(&timer->list);
/*
* This keeps any tasks waiting on the spin lock from thinking
@@ -1164,57 +925,7 @@ void exit_itimers(struct signal_struct *
}
}
-/*
- * And now for the "clock" calls
- *
- * These functions are called both from timer functions (with the timer
- * spin_lock_irq() held and from clock calls with no locking. They must
- * use the save flags versions of locks.
- */
-
-/*
- * We do ticks here to avoid the irq lock ( they take sooo long).
- * The seqlock is great here. Since we a reader, we don't really care
- * if we are interrupted since we don't take lock that will stall us or
- * any other cpu. Voila, no irq lock is needed.
- *
- */
-
-static u64 do_posix_clock_monotonic_gettime_parts(
- struct timespec *tp, struct timespec *mo)
-{
- u64 jiff;
- unsigned int seq;
-
- do {
- seq = read_seqbegin(&xtime_lock);
- getnstimeofday(tp);
- *mo = wall_to_monotonic;
- jiff = jiffies_64;
-
- } while(read_seqretry(&xtime_lock, seq));
-
- return jiff;
-}
-
-static int do_posix_clock_monotonic_get(const clockid_t clock,
- struct timespec *tp)
-{
- struct timespec wall_to_mono;
-
- do_posix_clock_monotonic_gettime_parts(tp, &wall_to_mono);
-
- set_normalized_timespec(tp, tp->tv_sec + wall_to_mono.tv_sec,
- tp->tv_nsec + wall_to_mono.tv_nsec);
-
- return 0;
-}
-
-int do_posix_clock_monotonic_gettime(struct timespec *tp)
-{
- return do_posix_clock_monotonic_get(CLOCK_MONOTONIC, tp);
-}
-
+/* Not available / possible... functions */
int do_posix_clock_nosettime(const clockid_t clockid, struct timespec *tp)
{
return -EINVAL;
@@ -1288,107 +999,6 @@ sys_clock_getres(const clockid_t which_c
}
/*
- * The standard says that an absolute nanosleep call MUST wake up at
- * the requested time in spite of clock settings. Here is what we do:
- * For each nanosleep call that needs it (only absolute and not on
- * CLOCK_MONOTONIC* (as it can not be set)) we thread a little structure
- * into the "nanosleep_abs_list". All we need is the task_struct pointer.
- * When ever the clock is set we just wake up all those tasks. The rest
- * is done by the while loop in clock_nanosleep().
- *
- * On locking, clock_was_set() is called from update_wall_clock which
- * holds (or has held for it) a write_lock_irq( xtime_lock) and is
- * called from the timer bh code. Thus we need the irq save locks.
- *
- * Also, on the call from update_wall_clock, that is done as part of a
- * softirq thing. We don't want to delay the system that much (possibly
- * long list of timers to fix), so we defer that work to keventd.
- */
-
-static DECLARE_WAIT_QUEUE_HEAD(nanosleep_abs_wqueue);
-static DECLARE_WORK(clock_was_set_work, (void(*)(void*))clock_was_set, NULL);
-
-static DECLARE_MUTEX(clock_was_set_lock);
-
-void clock_was_set(void)
-{
- struct k_itimer *timr;
- struct timespec new_wall_to;
- LIST_HEAD(cws_list);
- unsigned long seq;
-
-
- if (unlikely(in_interrupt())) {
- schedule_work(&clock_was_set_work);
- return;
- }
- wake_up_all(&nanosleep_abs_wqueue);
-
- /*
- * Check if there exist TIMER_ABSTIME timers to correct.
- *
- * Notes on locking: This code is run in task context with irq
- * on. We CAN be interrupted! All other usage of the abs list
- * lock is under the timer lock which holds the irq lock as
- * well. We REALLY don't want to scan the whole list with the
- * interrupt system off, AND we would like a sequence lock on
- * this code as well. Since we assume that the clock will not
- * be set often, it seems ok to take and release the irq lock
- * for each timer. In fact add_timer will do this, so this is
- * not an issue. So we know when we are done, we will move the
- * whole list to a new location. Then as we process each entry,
- * we will move it to the actual list again. This way, when our
- * copy is empty, we are done. We are not all that concerned
- * about preemption so we will use a semaphore lock to protect
- * aginst reentry. This way we will not stall another
- * processor. It is possible that this may delay some timers
- * that should have expired, given the new clock, but even this
- * will be minimal as we will always update to the current time,
- * even if it was set by a task that is waiting for entry to
- * this code. Timers that expire too early will be caught by
- * the expire code and restarted.
-
- * Absolute timers that repeat are left in the abs list while
- * waiting for the task to pick up the signal. This means we
- * may find timers that are not in the "add_timer" list, but are
- * in the abs list. We do the same thing for these, save
- * putting them back in the "add_timer" list. (Note, these are
- * left in the abs list mainly to indicate that they are
- * ABSOLUTE timers, a fact that is used by the re-arm code, and
- * for which we have no other flag.)
-
- */
-
- down(&clock_was_set_lock);
- spin_lock_irq(&abs_list.lock);
- list_splice_init(&abs_list.list, &cws_list);
- spin_unlock_irq(&abs_list.lock);
- do {
- do {
- seq = read_seqbegin(&xtime_lock);
- new_wall_to = wall_to_monotonic;
- } while (read_seqretry(&xtime_lock, seq));
-
- spin_lock_irq(&abs_list.lock);
- if (list_empty(&cws_list)) {
- spin_unlock_irq(&abs_list.lock);
- break;
- }
- timr = list_entry(cws_list.next, struct k_itimer,
- it.real.abs_timer_entry);
-
- list_del_init(&timr->it.real.abs_timer_entry);
- if (add_clockset_delta(timr, &new_wall_to) &&
- del_timer(&timr->it.real.timer)) /* timer run yet? */
- add_timer(&timr->it.real.timer);
- list_add(&timr->it.real.abs_timer_entry, &abs_list.list);
- spin_unlock_irq(&abs_list.lock);
- } while (1);
-
- up(&clock_was_set_lock);
-}
-
-/*
* nanosleep for monotonic and realtime clocks
*/
static int common_nsleep(const clockid_t which_clock, int flags,
--
^ permalink raw reply [flat|nested] 47+ messages in thread
* [patch 23/43] Simplify ktimers rearm code
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
` (21 preceding siblings ...)
2005-12-01 0:03 ` [patch 22/43] Convert posix interval timers to use ktimers Thomas Gleixner
@ 2005-12-01 0:03 ` Thomas Gleixner
2005-12-01 0:03 ` [patch 24/43] Split timeout code into kernel/ktimeout.c Thomas Gleixner
` (19 subsequent siblings)
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:03 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment (ktimer-simplify-rearm.patch)
- Simplify the rearming code and expose the functionality so it
can be used instead of forward_posix_timer(). This also allows
the posix-timer struct real to be replaced by a simple ktimer
structure.
The automatic rearming in the expiry code was modified to depend
on the return value of the callback function. This is based on
an idea of Roman Zippel.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
include/linux/ktimer.h | 6 ++
include/linux/posix-timers.h | 92 --------------------------------------
include/linux/timer.h | 2
kernel/itimer.c | 5 +-
kernel/ktimer.c | 103 ++++++++++++++++++++++++++++++++-----------
kernel/posix-timers.c | 66 +++++++++++++++------------
6 files changed, 123 insertions(+), 151 deletions(-)
Index: linux-2.6.15-rc2-rework/include/linux/ktimer.h
===================================================================
--- linux-2.6.15-rc2-rework.orig/include/linux/ktimer.h
+++ linux-2.6.15-rc2-rework/include/linux/ktimer.h
@@ -87,7 +87,7 @@ struct ktimer {
ktime_t interval;
int overrun;
enum ktimer_state state;
- void (*function)(void *);
+ int (*function)(void *);
void *data;
struct ktimer_base *base;
};
@@ -156,6 +156,10 @@ static inline int ktimer_active(const st
return timer->state != KTIMER_INACTIVE;
}
+/* Forward a ktimer so it expires after now */
+extern void ktimer_forward(struct ktimer *timer,
+ const ktime_t interval, const ktime_t now);
+
/* Convert with rounding based on resolution of timer's clock: */
extern ktime_t ktimer_round_timeval(const struct ktimer *timer,
const struct timeval *tv);
Index: linux-2.6.15-rc2-rework/include/linux/posix-timers.h
===================================================================
--- linux-2.6.15-rc2-rework.orig/include/linux/posix-timers.h
+++ linux-2.6.15-rc2-rework/include/linux/posix-timers.h
@@ -50,11 +50,7 @@ struct k_itimer {
struct task_struct *it_process; /* process to send signal to */
struct sigqueue *sigq; /* signal queue entry. */
union {
- struct {
- struct ktimer timer;
- ktime_t incr;
- int overrun;
- } real;
+ struct ktimer real;
struct cpu_timer_list cpu;
struct {
unsigned int clock;
@@ -94,92 +90,6 @@ int do_posix_clock_nosettime(const clock
/* function to call to trigger timer event */
int posix_timer_event(struct k_itimer *timr, int si_private);
-#if BITS_PER_LONG < 64
-static inline ktime_t forward_posix_timer(struct k_itimer *t, const ktime_t now)
-{
- ktime_t delta = ktime_sub(now, t->it.real.timer.expires);
- unsigned long orun = 1;
-
- if (delta.tv64 < 0)
- goto out;
-
- if (unlikely(delta.tv64 > t->it.real.incr.tv64)) {
-
- int sft = 0;
- u64 div, dclc, inc, dns;
-
- dclc = dns = ktime_to_ns(delta);
- div = inc = ktime_to_ns(t->it.real.incr);
- /* Make sure the divisor is less than 2^32 */
- while(div >> 32) {
- sft++;
- div >>= 1;
- }
- dclc >>= sft;
- do_div(dclc, (unsigned long) div);
- orun = (unsigned long) dclc;
- if (likely(!(inc >> 32)))
- dclc *= (unsigned long) inc;
- else
- dclc *= inc;
- t->it.real.timer.expires = ktime_add_ns(t->it.real.timer.expires,
- dclc);
- } else {
- t->it.real.timer.expires = ktime_add(t->it.real.timer.expires,
- t->it.real.incr);
- }
- /*
- * Here is the correction for exact. Also covers delta == incr
- * which is the else clause above.
- */
- if (t->it.real.timer.expires.tv64 <= now.tv64) {
- t->it.real.timer.expires = ktime_add(t->it.real.timer.expires,
- t->it.real.incr);
- orun++;
- }
- t->it_overrun += orun;
-
- out:
- return ktime_sub(t->it.real.timer.expires, now);
-}
-#else
-static inline ktime_t forward_posix_timer(struct k_itimer *t, const ktime_t now)
-{
- ktime_t delta = ktime_sub(now, t->it.real.timer.expires);
- unsigned long orun = 1;
-
- if (delta.tv64 < 0)
- goto out;
-
- if (unlikely(delta.tv64 > t->it.real.incr.tv64)) {
-
- u64 dns, inc;
-
- dns = ktime_to_ns(delta);
- inc = ktime_to_ns(t->it.real.incr);
-
- orun = dns / inc;
- t->it.real.timer.expires = ktime_add_ns(t->it.real.timer.expires,
- orun * inc);
- } else {
- t->it.real.timer.expires = ktime_add(t->it.real.timer.expires,
- t->it.real.incr);
- }
- /*
- * Here is the correction for exact. Also covers delta == incr
- * which is the else clause above.
- */
- if (t->it.real.timer.expires.tv64 <= now.tv64) {
- t->it.real.timer.expires = ktime_add(t->it.real.timer.expires,
- t->it.real.incr);
- orun++;
- }
- t->it_overrun += orun;
- out:
- return ktime_sub(t->it.real.timer.expires, now);
-}
-#endif
-
int posix_cpu_clock_getres(const clockid_t which_clock, struct timespec *ts);
int posix_cpu_clock_get(const clockid_t which_clock, struct timespec *ts);
int posix_cpu_clock_set(const clockid_t which_clock, const struct timespec *ts);
Index: linux-2.6.15-rc2-rework/include/linux/timer.h
===================================================================
--- linux-2.6.15-rc2-rework.orig/include/linux/timer.h
+++ linux-2.6.15-rc2-rework/include/linux/timer.h
@@ -96,6 +96,6 @@ static inline void add_timer(struct time
extern void init_timers(void);
extern void run_local_timers(void);
-extern void it_real_fn(void *);
+extern int it_real_fn(void *);
#endif
Index: linux-2.6.15-rc2-rework/kernel/itimer.c
===================================================================
--- linux-2.6.15-rc2-rework.orig/kernel/itimer.c
+++ linux-2.6.15-rc2-rework/kernel/itimer.c
@@ -129,9 +129,10 @@ asmlinkage long sys_getitimer(int which,
/*
* The timer is automagically restarted, when interval != 0
*/
-void it_real_fn(void *data)
+int it_real_fn(void *data)
{
send_group_sig_info(SIGALRM, SEND_SIG_PRIV, data);
+ return KTIMER_REARM;
}
int do_setitimer(int which, struct itimerval *value, struct itimerval *ovalue)
@@ -151,7 +152,7 @@ int do_setitimer(int which, struct itime
ovalue->it_interval = ktime_to_timeval(timer->interval);
}
timer->interval = ktimer_round_timeval(timer,
- &value->it_interval);
+ &value->it_interval);
expires = timeval_to_ktime(value->it_value);
if (expires.tv64 != 0)
ktimer_restart(timer, &expires,
Index: linux-2.6.15-rc2-rework/kernel/ktimer.c
===================================================================
--- linux-2.6.15-rc2-rework.orig/kernel/ktimer.c
+++ linux-2.6.15-rc2-rework/kernel/ktimer.c
@@ -290,8 +290,32 @@ static unsigned long ktime_modulo(const
}
# endif /* !CONFIG_KTIME_SCALAR */
+
+/*
+ * Divide a ktime value by a nanosecond value
+ */
+static unsigned long ktime_divns(const ktime_t kt, nsec_t div)
+{
+ int sft = 0;
+ u64 dclc, inc, dns;
+
+ dclc = dns = ktime_to_ns(kt);
+ inc = div;
+ /* Make sure the divisor is less than 2^32 */
+ while(div >> 32) {
+ sft++;
+ div >>= 1;
+ }
+ dclc >>= sft;
+ do_div(dclc, (unsigned long) div);
+ return (unsigned long) dclc;
+}
+
#else /* BITS_PER_LONG < 64 */
+
# define ktime_modulo(kt, div) (unsigned long)((kt).tv64 % (div))
+# define ktime_divns(kt, div) (unsigned long)((kt).tv64 / (div))
+
#endif /* BITS_PER_LONG >= 64 */
/*
@@ -304,6 +328,45 @@ void unlock_ktimer_base(const struct kti
}
/**
+ * ktimer_forward - forward the timer expiry
+ *
+ * @timer: ktimer to forward
+ * @interval: the interval to forward
+ * @now: current time
+ *
+ * Forward the timer expiry so it will expire in the future.
+ * The number of overruns is added to the overrun field.
+ */
+void ktimer_forward(struct ktimer *timer,
+ const ktime_t interval, const ktime_t now)
+{
+ ktime_t delta = ktime_sub(now, timer->expires);
+ unsigned long orun = 1;
+
+ if (delta.tv64 < 0)
+ return;
+
+ if (unlikely(delta.tv64 > interval.tv64)) {
+ nsec_t incr = ktime_to_ns(interval);
+
+ orun = ktime_divns(delta, incr);
+ timer->expires = ktime_add_ns(timer->expires, incr * orun);
+ } else {
+ timer->expires = ktime_add(timer->expires, interval);
+ }
+
+ /*
+ * Here is the correction for exact. Also covers delta == incr
+ * which is the else clause above.
+ */
+ if (timer->expires.tv64 <= now.tv64) {
+ orun++;
+ timer->expires = ktime_add(timer->expires, interval);
+ }
+ timer->overrun += orun;
+}
+
+/**
* ktimer_round_timespec - convert timespec to ktime_t with resolution
* adjustment
*
@@ -391,18 +454,7 @@ static int enqueue_ktimer(struct ktimer
break;
case KTIMER_FORWARD:
- while (timer->expires.tv64 <= now.tv64) {
- timer->expires = ktime_add(timer->expires, *tim);
- timer->overrun++;
- }
- goto nocheck;
-
- case KTIMER_REARM:
- while (timer->expires.tv64 <= now.tv64) {
- timer->expires = ktime_add(timer->expires,
- timer->interval);
- timer->overrun++;
- }
+ ktimer_forward(timer, *tim, now);
goto nocheck;
case KTIMER_RESTART:
@@ -470,12 +522,9 @@ static int enqueue_ktimer(struct ktimer
/*
* __remove_ktimer - internal function to remove a timer
*
- * The function also allows automatic rearming for interval timers.
- * Must hold the base lock.
+ * Caller must hold the base lock.
*/
-static void
-__remove_ktimer(struct ktimer *timer, struct ktimer_base *base,
- enum ktimer_rearm rearm)
+static void __remove_ktimer(struct ktimer *timer, struct ktimer_base *base)
{
/*
* Remove the timer from the sorted list and from the rbtree:
@@ -487,10 +536,6 @@ __remove_ktimer(struct ktimer *timer, st
timer->state = KTIMER_INACTIVE;
base->count--;
BUG_ON(base->count < 0);
-
- /* Auto rearm the timer ? */
- if (rearm && (timer->interval.tv64 != 0))
- enqueue_ktimer(timer, base, NULL, KTIMER_REARM);
}
/*
@@ -499,7 +544,7 @@ __remove_ktimer(struct ktimer *timer, st
static inline int remove_ktimer(struct ktimer *timer, struct ktimer_base *base)
{
if (ktimer_active(timer)) {
- __remove_ktimer(timer, base, KTIMER_NOREARM);
+ __remove_ktimer(timer, base);
return 1;
}
return 0;
@@ -769,7 +814,8 @@ static inline void run_ktimer_queue(stru
while (!list_empty(&base->pending)) {
struct ktimer *timer;
- void (*fn)(void *);
+ int rearm;
+ int (*fn)(void *);
void *data;
timer = list_entry(base->pending.next, struct ktimer, list);
@@ -780,13 +826,17 @@ static inline void run_ktimer_queue(stru
fn = timer->function;
data = timer->data;
set_curr_timer(base, timer);
- __remove_ktimer(timer, base, KTIMER_REARM);
+ __remove_ktimer(timer, base);
spin_unlock_irq(&base->lock);
- fn(data);
+ rearm = fn(data);
spin_lock_irq(&base->lock);
set_curr_timer(base, NULL);
+
+ if (rearm && timer->interval.tv64)
+ enqueue_ktimer(timer, base, &timer->interval,
+ KTIMER_FORWARD);
}
spin_unlock_irq(&base->lock);
@@ -812,9 +862,10 @@ void ktimer_run_queues(void)
/*
* Process-wakeup callback:
*/
-static void ktimer_wake_up(void *data)
+static int ktimer_wake_up(void *data)
{
wake_up_process(data);
+ return 0;
}
/**
Index: linux-2.6.15-rc2-rework/kernel/posix-timers.c
===================================================================
--- linux-2.6.15-rc2-rework.orig/kernel/posix-timers.c
+++ linux-2.6.15-rc2-rework/kernel/posix-timers.c
@@ -144,7 +144,7 @@ static int common_timer_set(struct k_iti
struct itimerspec *, struct itimerspec *);
static int common_timer_del(struct k_itimer *timer);
-static void posix_timer_fn(void *data);
+static int posix_timer_fn(void *data);
static struct k_itimer *lock_timer(timer_t timer_id, unsigned long *flags);
@@ -197,17 +197,17 @@ static inline int common_timer_create(st
static int timer_create_mono(struct k_itimer *new_timer)
{
- ktimer_init(&new_timer->it.real.timer);
- new_timer->it.real.timer.data = new_timer;
- new_timer->it.real.timer.function = posix_timer_fn;
+ ktimer_init(&new_timer->it.real);
+ new_timer->it.real.data = new_timer;
+ new_timer->it.real.function = posix_timer_fn;
return 0;
}
static int timer_create_real(struct k_itimer *new_timer)
{
- ktimer_init_clock(&new_timer->it.real.timer, CLOCK_REALTIME);
- new_timer->it.real.timer.data = new_timer;
- new_timer->it.real.timer.function = posix_timer_fn;
+ ktimer_init_clock(&new_timer->it.real, CLOCK_REALTIME);
+ new_timer->it.real.data = new_timer;
+ new_timer->it.real.function = posix_timer_fn;
return 0;
}
@@ -283,13 +283,13 @@ __initcall(init_posix_timers);
static void schedule_next_timer(struct k_itimer *timr)
{
- if (timr->it.real.incr.tv64 == 0)
+ if (timr->it.real.interval.tv64 == 0)
return;
- timr->it.real.timer.overrun = -1;
+ timr->it.real.overrun = -1;
++timr->it_requeue_pending;
- ktimer_start(&timr->it.real.timer, &timr->it.real.incr, KTIMER_FORWARD);
- timr->it_overrun_last += timr->it.real.timer.overrun;
+ ktimer_start(&timr->it.real, &timr->it.real.interval, KTIMER_FORWARD);
+ timr->it_overrun_last += timr->it.real.overrun;
}
/*
@@ -365,26 +365,29 @@ EXPORT_SYMBOL_GPL(posix_timer_event);
* This code is for CLOCK_REALTIME* and CLOCK_MONOTONIC* timers.
*/
-static void posix_timer_fn(void *data)
+static int posix_timer_fn(void *data)
{
struct k_itimer *timr = data;
unsigned long flags;
int si_private = 0;
+ int ret = 0;
spin_lock_irqsave(&timr->it_lock, flags);
- if (timr->it.real.incr.tv64 != 0)
+ if (timr->it.real.interval.tv64 != 0)
si_private = ++timr->it_requeue_pending;
- if (posix_timer_event(timr, si_private))
+ if (posix_timer_event(timr, si_private)) {
/*
* signal was not sent because of sig_ignor
* we will not get a call back to restart it AND
* it should be restarted.
*/
- schedule_next_timer(timr);
+ ret = (timr->it.real.interval.tv64 == 0) ? 0 : KTIMER_REARM;
+ }
unlock_timer(timr, flags); /* hold thru abs lock to keep irq off */
+ return ret;
}
static inline struct task_struct * good_sigevent(sigevent_t * event)
@@ -651,7 +654,7 @@ static void
common_timer_get(struct k_itimer *timr, struct itimerspec *cur_setting)
{
ktime_t expires, now, remaining;
- struct ktimer *timer = &timr->it.real.timer;
+ struct ktimer *timer = &timr->it.real;
memset(cur_setting, 0, sizeof(struct itimerspec));
expires = ktimer_get_expiry(timer, &now);
@@ -661,7 +664,7 @@ common_timer_get(struct k_itimer *timr,
if (remaining.tv64 > 0 || ktimer_active(timer))
goto calci;
/* interval timer ? */
- if (timr->it.real.incr.tv64 == 0)
+ if (timer->interval.tv64 == 0)
return;
/*
* When a requeue is pending or this is a SIGEV_NONE timer
@@ -673,13 +676,16 @@ common_timer_get(struct k_itimer *timr,
*/
if (timr->it_requeue_pending & REQUEUE_PENDING ||
(timr->it_sigev_notify & ~SIGEV_THREAD_ID) == SIGEV_NONE) {
- remaining = forward_posix_timer(timr, now);
+ timer->overrun = 0;
+ ktimer_forward(timer, timer->interval, now);
+ remaining = ktime_sub(now, timer->expires);
+ timr->it_overrun += timer->overrun;
}
calci:
/* interval timer ? */
- if (timr->it.real.incr.tv64 != 0)
+ if (timr->it.real.interval.tv64 != 0)
cur_setting->it_interval =
- ktime_to_timespec(timr->it.real.incr);
+ ktime_to_timespec(timr->it.real.interval);
/* Return 0 only, when the timer is expired and not pending */
if (remaining.tv64 <= 0)
cur_setting->it_value.tv_nsec = 1;
@@ -749,12 +755,12 @@ common_timer_set(struct k_itimer *timr,
common_timer_get(timr, old_setting);
/* disable the timer */
- timr->it.real.incr.tv64 = 0;
+ timr->it.real.interval.tv64 = 0;
/*
* careful here. If smp we could be in the "fire" routine which will
* be spinning as we hold the lock. But this is ONLY an SMP issue.
*/
- if (ktimer_try_to_cancel(&timr->it.real.timer) < 0)
+ if (ktimer_try_to_cancel(&timr->it.real) < 0)
return TIMER_RETRY;
timr->it_requeue_pending = (timr->it_requeue_pending + 2) &
@@ -779,12 +785,12 @@ common_timer_set(struct k_itimer *timr,
expires = timespec_to_ktime(new_setting->it_value);
/* Convert and round the interval */
- timr->it.real.incr = ktimer_round_timespec(&timr->it.real.timer,
- &new_setting->it_interval);
+ timr->it.real.interval = ktimer_round_timespec(&timr->it.real,
+ &new_setting->it_interval);
/* SIGEV_NONE timers are not queued ! See common_timer_get */
if (((timr->it_sigev_notify & ~SIGEV_THREAD_ID) != SIGEV_NONE))
- ktimer_start(&timr->it.real.timer, &expires,
+ ktimer_start(&timr->it.real, &expires,
mode | KTIMER_NOCHECK | KTIMER_ROUND);
return 0;
@@ -821,7 +827,7 @@ retry:
unlock_timer(timr, flag);
if (error == TIMER_RETRY) {
- wait_for_ktimer(&timr->it.real.timer);
+ wait_for_ktimer(&timr->it.real);
rtn = NULL; // We already got the old time...
goto retry;
}
@@ -835,9 +841,9 @@ retry:
static inline int common_timer_del(struct k_itimer *timer)
{
- timer->it.real.incr.tv64 = 0;
+ timer->it.real.interval.tv64 = 0;
- if (ktimer_try_to_cancel(&timer->it.real.timer) < 0)
+ if (ktimer_try_to_cancel(&timer->it.real) < 0)
return TIMER_RETRY;
return 0;
}
@@ -861,7 +867,7 @@ retry_delete:
if (timer_delete_hook(timer) == TIMER_RETRY) {
unlock_timer(timer, flags);
- wait_for_ktimer(&timer->it.real.timer);
+ wait_for_ktimer(&timer->it.real);
goto retry_delete;
}
@@ -894,7 +900,7 @@ retry_delete:
if (timer_delete_hook(timer) == TIMER_RETRY) {
unlock_timer(timer, flags);
- wait_for_ktimer(&timer->it.real.timer);
+ wait_for_ktimer(&timer->it.real);
goto retry_delete;
}
list_del(&timer->list);
--
^ permalink raw reply [flat|nested] 47+ messages in thread
* [patch 24/43] Split timeout code into kernel/ktimeout.c
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
` (22 preceding siblings ...)
2005-12-01 0:03 ` [patch 23/43] Simplify ktimers rearm code Thomas Gleixner
@ 2005-12-01 0:03 ` Thomas Gleixner
2005-12-01 0:03 ` [patch 25/43] Create ktimeout.h and move timer.h code into it Thomas Gleixner
` (18 subsequent siblings)
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:03 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment (ktimeout-c.patch)
- split the timeout implementation from kernel/timer.c, into kernel/ktimeout.c
Signed-off-by: Ingo Molnar <mingo@elte.hu>
kernel/Makefile | 2
kernel/ktimeout.c | 771 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
kernel/timer.c | 747 ----------------------------------------------------
3 files changed, 773 insertions(+), 747 deletions(-)
Index: linux/kernel/Makefile
===================================================================
--- linux.orig/kernel/Makefile
+++ linux/kernel/Makefile
@@ -8,7 +8,7 @@ obj-y = sched.o fork.o exec_domain.o
signal.o sys.o kmod.o workqueue.o pid.o \
rcupdate.o intermodule.o extable.o params.o posix-timers.o \
kthread.o wait.o kfifo.o sys_ni.o posix-cpu-timers.o \
- ktimer.o
+ ktimer.o ktimeout.o
obj-$(CONFIG_FUTEX) += futex.o
obj-$(CONFIG_GENERIC_ISA_DMA) += dma.o
Index: linux/kernel/ktimeout.c
===================================================================
--- /dev/null
+++ linux/kernel/ktimeout.c
@@ -0,0 +1,771 @@
+/*
+ * linux/kernel/ktimeout.c
+ *
+ * Kernel internal timeouts API
+ *
+ * Copyright (C) 1991, 1992 Linus Torvalds
+ *
+ * 1997-01-28 Modified by Finn Arne Gangstad to make timers scale better.
+ * 2000-10-05 Implemented scalable SMP per-CPU timer handling.
+ * Copyright (C) 2000, 2001, 2002 Ingo Molnar
+ * Designed by David S. Miller, Alexey Kuznetsov and Ingo Molnar
+ */
+
+#include <linux/kernel_stat.h>
+#include <linux/module.h>
+#include <linux/interrupt.h>
+#include <linux/percpu.h>
+#include <linux/init.h>
+#include <linux/mm.h>
+#include <linux/swap.h>
+#include <linux/notifier.h>
+#include <linux/thread_info.h>
+#include <linux/time.h>
+#include <linux/jiffies.h>
+#include <linux/posix-timers.h>
+#include <linux/cpu.h>
+#include <linux/syscalls.h>
+
+#include <asm/uaccess.h>
+#include <asm/unistd.h>
+#include <asm/div64.h>
+#include <asm/timex.h>
+#include <asm/io.h>
+
+/*
+ * per-CPU timer vector definitions:
+ */
+
+#define TVN_BITS (CONFIG_BASE_SMALL ? 4 : 6)
+#define TVR_BITS (CONFIG_BASE_SMALL ? 6 : 8)
+#define TVN_SIZE (1 << TVN_BITS)
+#define TVR_SIZE (1 << TVR_BITS)
+#define TVN_MASK (TVN_SIZE - 1)
+#define TVR_MASK (TVR_SIZE - 1)
+
+struct timer_base_s {
+ spinlock_t lock;
+ struct timer_list *running_timer;
+};
+
+typedef struct tvec_s {
+ struct list_head vec[TVN_SIZE];
+} tvec_t;
+
+typedef struct tvec_root_s {
+ struct list_head vec[TVR_SIZE];
+} tvec_root_t;
+
+struct tvec_t_base_s {
+ struct timer_base_s t_base;
+ unsigned long timer_jiffies;
+ tvec_root_t tv1;
+ tvec_t tv2;
+ tvec_t tv3;
+ tvec_t tv4;
+ tvec_t tv5;
+} ____cacheline_aligned_in_smp;
+
+typedef struct tvec_t_base_s tvec_base_t;
+static DEFINE_PER_CPU(tvec_base_t, tvec_bases);
+
+static inline void set_running_timer(tvec_base_t *base,
+ struct timer_list *timer)
+{
+#ifdef CONFIG_SMP
+ base->t_base.running_timer = timer;
+#endif
+}
+
+static void internal_add_timer(tvec_base_t *base, struct timer_list *timer)
+{
+ unsigned long expires = timer->expires;
+ unsigned long idx = expires - base->timer_jiffies;
+ struct list_head *vec;
+
+ if (idx < TVR_SIZE) {
+ int i = expires & TVR_MASK;
+ vec = base->tv1.vec + i;
+ } else if (idx < 1 << (TVR_BITS + TVN_BITS)) {
+ int i = (expires >> TVR_BITS) & TVN_MASK;
+ vec = base->tv2.vec + i;
+ } else if (idx < 1 << (TVR_BITS + 2 * TVN_BITS)) {
+ int i = (expires >> (TVR_BITS + TVN_BITS)) & TVN_MASK;
+ vec = base->tv3.vec + i;
+ } else if (idx < 1 << (TVR_BITS + 3 * TVN_BITS)) {
+ int i = (expires >> (TVR_BITS + 2 * TVN_BITS)) & TVN_MASK;
+ vec = base->tv4.vec + i;
+ } else if ((signed long) idx < 0) {
+ /*
+ * Can happen if you add a timer with expires == jiffies,
+ * or you set a timer to go off in the past
+ */
+ vec = base->tv1.vec + (base->timer_jiffies & TVR_MASK);
+ } else {
+ int i;
+ /* If the timeout is larger than 0xffffffff on 64-bit
+ * architectures then we use the maximum timeout:
+ */
+ if (idx > 0xffffffffUL) {
+ idx = 0xffffffffUL;
+ expires = idx + base->timer_jiffies;
+ }
+ i = (expires >> (TVR_BITS + 3 * TVN_BITS)) & TVN_MASK;
+ vec = base->tv5.vec + i;
+ }
+ /*
+ * Timers are FIFO:
+ */
+ list_add_tail(&timer->entry, vec);
+}
+
+typedef struct timer_base_s timer_base_t;
+/*
+ * Used by TIMER_INITIALIZER; we can't use per_cpu(tvec_bases)
+ * at compile time, and we need timer->base to lock the timer.
+ */
+timer_base_t __init_timer_base
+ ____cacheline_aligned_in_smp = { .lock = SPIN_LOCK_UNLOCKED };
+EXPORT_SYMBOL(__init_timer_base);
+
+/***
+ * init_timer - initialize a timer.
+ * @timer: the timer to be initialized
+ *
+ * init_timer() must be done to a timer prior to calling *any* of the
+ * other timer functions.
+ */
+void fastcall init_timer(struct timer_list *timer)
+{
+ timer->entry.next = NULL;
+ timer->base = &per_cpu(tvec_bases, raw_smp_processor_id()).t_base;
+}
+EXPORT_SYMBOL(init_timer);
+
+static inline void detach_timer(struct timer_list *timer,
+ int clear_pending)
+{
+ struct list_head *entry = &timer->entry;
+
+ __list_del(entry->prev, entry->next);
+ if (clear_pending)
+ entry->next = NULL;
+ entry->prev = LIST_POISON2;
+}
+
+/*
+ * We are using hashed locking: holding per_cpu(tvec_bases).t_base.lock
+ * means that all timers which are tied to this base via timer->base are
+ * locked, and the base itself is locked too.
+ *
+ * So __run_timers/migrate_timers can safely modify all timers which could
+ * be found on ->tvX lists.
+ *
+ * When the timer's base is locked, and the timer removed from list, it is
+ * possible to set timer->base = NULL and drop the lock: the timer remains
+ * locked.
+ */
+static timer_base_t *lock_timer_base(struct timer_list *timer,
+ unsigned long *flags)
+{
+ timer_base_t *base;
+
+ for (;;) {
+ base = timer->base;
+ if (likely(base != NULL)) {
+ spin_lock_irqsave(&base->lock, *flags);
+ if (likely(base == timer->base))
+ return base;
+ /* The timer has migrated to another CPU */
+ spin_unlock_irqrestore(&base->lock, *flags);
+ }
+ cpu_relax();
+ }
+}
+
+int __mod_timer(struct timer_list *timer, unsigned long expires)
+{
+ timer_base_t *base;
+ tvec_base_t *new_base;
+ unsigned long flags;
+ int ret = 0;
+
+ BUG_ON(!timer->function);
+
+ base = lock_timer_base(timer, &flags);
+
+ if (timer_pending(timer)) {
+ detach_timer(timer, 0);
+ ret = 1;
+ }
+
+ new_base = &__get_cpu_var(tvec_bases);
+
+ if (base != &new_base->t_base) {
+ /*
+ * We are trying to schedule the timer on the local CPU.
+ * However we can't change timer's base while it is running,
+ * otherwise del_timer_sync() can't detect that the timer's
+	 * handler has not yet finished. This also guarantees that
+ * the timer is serialized wrt itself.
+ */
+ if (unlikely(base->running_timer == timer)) {
+ /* The timer remains on a former base */
+ new_base = container_of(base, tvec_base_t, t_base);
+ } else {
+ /* See the comment in lock_timer_base() */
+ timer->base = NULL;
+ spin_unlock(&base->lock);
+ spin_lock(&new_base->t_base.lock);
+ timer->base = &new_base->t_base;
+ }
+ }
+
+ timer->expires = expires;
+ internal_add_timer(new_base, timer);
+ spin_unlock_irqrestore(&new_base->t_base.lock, flags);
+
+ return ret;
+}
+
+EXPORT_SYMBOL(__mod_timer);
+
+/***
+ * add_timer_on - start a timer on a particular CPU
+ * @timer: the timer to be added
+ * @cpu: the CPU to start it on
+ *
+ * This is not very scalable on SMP. Double adds are not possible.
+ */
+void add_timer_on(struct timer_list *timer, int cpu)
+{
+ tvec_base_t *base = &per_cpu(tvec_bases, cpu);
+ unsigned long flags;
+
+ BUG_ON(timer_pending(timer) || !timer->function);
+ spin_lock_irqsave(&base->t_base.lock, flags);
+ timer->base = &base->t_base;
+ internal_add_timer(base, timer);
+ spin_unlock_irqrestore(&base->t_base.lock, flags);
+}
+
+
+/***
+ * mod_timer - modify a timer's timeout
+ * @timer: the timer to be modified
+ *
+ * mod_timer is a more efficient way to update the expire field of an
+ * active timer (if the timer is inactive it will be activated)
+ *
+ * mod_timer(timer, expires) is equivalent to:
+ *
+ * del_timer(timer); timer->expires = expires; add_timer(timer);
+ *
+ * Note that if there are multiple unserialized concurrent users of the
+ * same timer, then mod_timer() is the only safe way to modify the timeout,
+ * since add_timer() cannot modify an already running timer.
+ *
+ * The function returns whether it has modified a pending timer or not.
+ * (ie. mod_timer() of an inactive timer returns 0, mod_timer() of an
+ * active timer returns 1.)
+ */
+int mod_timer(struct timer_list *timer, unsigned long expires)
+{
+ BUG_ON(!timer->function);
+
+ /*
+ * This is a common optimization triggered by the
+ * networking code - if the timer is re-modified
+ * to be the same thing then just return:
+ */
+ if (timer->expires == expires && timer_pending(timer))
+ return 1;
+
+ return __mod_timer(timer, expires);
+}
+
+EXPORT_SYMBOL(mod_timer);
+
+/***
+ * del_timer - deactivate a timer.
+ * @timer: the timer to be deactivated
+ *
+ * del_timer() deactivates a timer - this works on both active and inactive
+ * timers.
+ *
+ * The function returns whether it has deactivated a pending timer or not.
+ * (ie. del_timer() of an inactive timer returns 0, del_timer() of an
+ * active timer returns 1.)
+ */
+int del_timer(struct timer_list *timer)
+{
+ timer_base_t *base;
+ unsigned long flags;
+ int ret = 0;
+
+ if (timer_pending(timer)) {
+ base = lock_timer_base(timer, &flags);
+ if (timer_pending(timer)) {
+ detach_timer(timer, 1);
+ ret = 1;
+ }
+ spin_unlock_irqrestore(&base->lock, flags);
+ }
+
+ return ret;
+}
+
+EXPORT_SYMBOL(del_timer);
+
+#ifdef CONFIG_SMP
+/*
+ * This function tries to deactivate a timer. Upon successful (ret >= 0)
+ * exit the timer is not queued and the handler is not running on any CPU.
+ *
+ * It must not be called from interrupt contexts.
+ */
+int try_to_del_timer_sync(struct timer_list *timer)
+{
+ timer_base_t *base;
+ unsigned long flags;
+ int ret = -1;
+
+ base = lock_timer_base(timer, &flags);
+
+ if (base->running_timer == timer)
+ goto out;
+
+ ret = 0;
+ if (timer_pending(timer)) {
+ detach_timer(timer, 1);
+ ret = 1;
+ }
+out:
+ spin_unlock_irqrestore(&base->lock, flags);
+
+ return ret;
+}
+
+/***
+ * del_timer_sync - deactivate a timer and wait for the handler to finish.
+ * @timer: the timer to be deactivated
+ *
+ * This function only differs from del_timer() on SMP: besides deactivating
+ * the timer it also makes sure the handler has finished executing on other
+ * CPUs.
+ *
+ * Synchronization rules: callers must prevent restarting of the timer,
+ * otherwise this function is meaningless. It must not be called from
+ * interrupt contexts. The caller must not hold locks which would prevent
+ * completion of the timer's handler. The timer's handler must not call
+ * add_timer_on(). Upon exit the timer is not queued and the handler is
+ * not running on any CPU.
+ *
+ * The function returns whether it has deactivated a pending timer or not.
+ */
+int del_timer_sync(struct timer_list *timer)
+{
+ for (;;) {
+ int ret = try_to_del_timer_sync(timer);
+ if (ret >= 0)
+ return ret;
+ }
+}
+
+EXPORT_SYMBOL(del_timer_sync);
+#endif
+
+static int cascade(tvec_base_t *base, tvec_t *tv, int index)
+{
+ /* cascade all the timers from tv up one level */
+ struct list_head *head, *curr;
+
+ head = tv->vec + index;
+ curr = head->next;
+ /*
+ * We are removing _all_ timers from the list, so we don't have to
+ * detach them individually, just clear the list afterwards.
+ */
+ while (curr != head) {
+ struct timer_list *tmp;
+
+ tmp = list_entry(curr, struct timer_list, entry);
+ BUG_ON(tmp->base != &base->t_base);
+ curr = curr->next;
+ internal_add_timer(base, tmp);
+ }
+ INIT_LIST_HEAD(head);
+
+ return index;
+}
+
+/***
+ * __run_timers - run all expired timers (if any) on this CPU.
+ * @base: the timer vector to be processed.
+ *
+ * This function cascades all vectors and executes all expired timer
+ * vectors.
+ */
+#define INDEX(N) (base->timer_jiffies >> (TVR_BITS + N * TVN_BITS)) & TVN_MASK
+
+static inline void __run_timers(tvec_base_t *base)
+{
+ struct timer_list *timer;
+
+ spin_lock_irq(&base->t_base.lock);
+ while (time_after_eq(jiffies, base->timer_jiffies)) {
+ struct list_head work_list = LIST_HEAD_INIT(work_list);
+ struct list_head *head = &work_list;
+ int index = base->timer_jiffies & TVR_MASK;
+
+ /*
+ * Cascade timers:
+ */
+ if (!index &&
+ (!cascade(base, &base->tv2, INDEX(0))) &&
+ (!cascade(base, &base->tv3, INDEX(1))) &&
+ !cascade(base, &base->tv4, INDEX(2)))
+ cascade(base, &base->tv5, INDEX(3));
+ ++base->timer_jiffies;
+ list_splice_init(base->tv1.vec + index, &work_list);
+ while (!list_empty(head)) {
+ void (*fn)(unsigned long);
+ unsigned long data;
+
+ timer = list_entry(head->next,struct timer_list,entry);
+ fn = timer->function;
+ data = timer->data;
+
+ set_running_timer(base, timer);
+ detach_timer(timer, 1);
+ spin_unlock_irq(&base->t_base.lock);
+ {
+ int preempt_count = preempt_count();
+ fn(data);
+ if (preempt_count != preempt_count()) {
+ printk(KERN_WARNING "huh, entered %p "
+ "with preempt_count %08x, exited"
+ " with %08x?\n",
+ fn, preempt_count,
+ preempt_count());
+ BUG();
+ }
+ }
+ spin_lock_irq(&base->t_base.lock);
+ }
+ }
+ set_running_timer(base, NULL);
+ spin_unlock_irq(&base->t_base.lock);
+}
+
+#ifdef CONFIG_NO_IDLE_HZ
+/*
+ * Find out when the next timer event is due to happen. This
+ * is used on S/390 to stop all activity when a CPU is idle.
+ * This function must be called with interrupts disabled.
+ */
+unsigned long next_timer_interrupt(void)
+{
+ tvec_base_t *base;
+ struct list_head *list;
+ struct timer_list *nte;
+ unsigned long expires;
+ tvec_t *varray[4];
+ int i, j;
+
+ base = &__get_cpu_var(tvec_bases);
+ spin_lock(&base->t_base.lock);
+ expires = base->timer_jiffies + (LONG_MAX >> 1);
+	list = NULL;
+
+ /* Look for timer events in tv1. */
+ j = base->timer_jiffies & TVR_MASK;
+ do {
+ list_for_each_entry(nte, base->tv1.vec + j, entry) {
+ expires = nte->expires;
+ if (j < (base->timer_jiffies & TVR_MASK))
+ list = base->tv2.vec + (INDEX(0));
+ goto found;
+ }
+ j = (j + 1) & TVR_MASK;
+ } while (j != (base->timer_jiffies & TVR_MASK));
+
+ /* Check tv2-tv5. */
+ varray[0] = &base->tv2;
+ varray[1] = &base->tv3;
+ varray[2] = &base->tv4;
+ varray[3] = &base->tv5;
+ for (i = 0; i < 4; i++) {
+ j = INDEX(i);
+ do {
+ if (list_empty(varray[i]->vec + j)) {
+ j = (j + 1) & TVN_MASK;
+ continue;
+ }
+ list_for_each_entry(nte, varray[i]->vec + j, entry)
+ if (time_before(nte->expires, expires))
+ expires = nte->expires;
+ if (j < (INDEX(i)) && i < 3)
+ list = varray[i + 1]->vec + (INDEX(i + 1));
+ goto found;
+ } while (j != (INDEX(i)));
+ }
+found:
+ if (list) {
+ /*
+ * The search wrapped. We need to look at the next list
+ * from next tv element that would cascade into tv element
+ * where we found the timer element.
+ */
+ list_for_each_entry(nte, list, entry) {
+ if (time_before(nte->expires, expires))
+ expires = nte->expires;
+ }
+ }
+ spin_unlock(&base->t_base.lock);
+ return expires;
+}
+#endif
+
+/*
+ * This function runs timers and the timer-tq in bottom half context.
+ */
+static void run_timer_softirq(struct softirq_action *h)
+{
+ tvec_base_t *base = &__get_cpu_var(tvec_bases);
+
+ ktimer_run_queues();
+ if (time_after_eq(jiffies, base->timer_jiffies))
+ __run_timers(base);
+}
+
+/*
+ * Called by the local, per-CPU timer interrupt on SMP.
+ */
+void run_local_timers(void)
+{
+ raise_softirq(TIMER_SOFTIRQ);
+}
+
+static void process_timeout(unsigned long __data)
+{
+ wake_up_process((task_t *)__data);
+}
+
+/**
+ * schedule_timeout - sleep until timeout
+ * @timeout: timeout value in jiffies
+ *
+ * Make the current task sleep until @timeout jiffies have
+ * elapsed. The routine will return immediately unless
+ * the current task state has been set (see set_current_state()).
+ *
+ * You can set the task state as follows -
+ *
+ * %TASK_UNINTERRUPTIBLE - at least @timeout jiffies are guaranteed to
+ * pass before the routine returns. The routine will return 0.
+ *
+ * %TASK_INTERRUPTIBLE - the routine may return early if a signal is
+ * delivered to the current task. In this case the remaining time
+ * in jiffies will be returned, or 0 if the timer expired in time.
+ *
+ * The current task state is guaranteed to be TASK_RUNNING when this
+ * routine returns.
+ *
+ * Specifying a @timeout value of %MAX_SCHEDULE_TIMEOUT will schedule
+ * the CPU away without a bound on the timeout. In this case the return
+ * value will be %MAX_SCHEDULE_TIMEOUT.
+ *
+ * In all cases the return value is guaranteed to be non-negative.
+ */
+fastcall signed long __sched schedule_timeout(signed long timeout)
+{
+ struct timer_list timer;
+ unsigned long expire;
+
+ switch (timeout)
+ {
+ case MAX_SCHEDULE_TIMEOUT:
+ /*
+		 * This special case is for the caller's convenience.
+		 * Nothing more. We could take MAX_SCHEDULE_TIMEOUT
+		 * from one of the negative values, but I'd like to
+		 * return a valid offset (>=0) to allow the caller to
+		 * do everything it wants with the retval.
+ */
+ schedule();
+ goto out;
+ default:
+ /*
+		 * Another bit of PARANOIA. Note that the retval will be
+		 * 0, since no piece of the kernel is supposed to check
+		 * for a negative retval of schedule_timeout() (it should
+		 * never happen anyway). You just have the printk() that
+		 * will tell you if something has gone wrong and where.
+ */
+ if (timeout < 0)
+ {
+ printk(KERN_ERR "schedule_timeout: wrong timeout "
+ "value %lx from %p\n", timeout,
+ __builtin_return_address(0));
+ current->state = TASK_RUNNING;
+ goto out;
+ }
+ }
+
+ expire = timeout + jiffies;
+
+ setup_timer(&timer, process_timeout, (unsigned long)current);
+ __mod_timer(&timer, expire);
+ schedule();
+ del_singleshot_timer_sync(&timer);
+
+ timeout = expire - jiffies;
+
+ out:
+ return timeout < 0 ? 0 : timeout;
+}
+EXPORT_SYMBOL(schedule_timeout);
+
+/*
+ * We can use __set_current_state() here because schedule_timeout() calls
+ * schedule() unconditionally.
+ */
+signed long __sched schedule_timeout_interruptible(signed long timeout)
+{
+ __set_current_state(TASK_INTERRUPTIBLE);
+ return schedule_timeout(timeout);
+}
+EXPORT_SYMBOL(schedule_timeout_interruptible);
+
+signed long __sched schedule_timeout_uninterruptible(signed long timeout)
+{
+ __set_current_state(TASK_UNINTERRUPTIBLE);
+ return schedule_timeout(timeout);
+}
+EXPORT_SYMBOL(schedule_timeout_uninterruptible);
+
+/**
+ * msleep - sleep safely even with waitqueue interruptions
+ * @msecs: Time in milliseconds to sleep for
+ */
+void msleep(unsigned int msecs)
+{
+ unsigned long timeout = msecs_to_jiffies(msecs) + 1;
+
+ while (timeout)
+ timeout = schedule_timeout_uninterruptible(timeout);
+}
+
+EXPORT_SYMBOL(msleep);
+
+/**
+ * msleep_interruptible - sleep waiting for signals
+ * @msecs: Time in milliseconds to sleep for
+ */
+unsigned long msleep_interruptible(unsigned int msecs)
+{
+ unsigned long timeout = msecs_to_jiffies(msecs) + 1;
+
+ while (timeout && !signal_pending(current))
+ timeout = schedule_timeout_interruptible(timeout);
+ return jiffies_to_msecs(timeout);
+}
+
+EXPORT_SYMBOL(msleep_interruptible);
+
+static void __devinit init_timers_cpu(int cpu)
+{
+ int j;
+ tvec_base_t *base;
+
+ base = &per_cpu(tvec_bases, cpu);
+ spin_lock_init(&base->t_base.lock);
+ for (j = 0; j < TVN_SIZE; j++) {
+ INIT_LIST_HEAD(base->tv5.vec + j);
+ INIT_LIST_HEAD(base->tv4.vec + j);
+ INIT_LIST_HEAD(base->tv3.vec + j);
+ INIT_LIST_HEAD(base->tv2.vec + j);
+ }
+ for (j = 0; j < TVR_SIZE; j++)
+ INIT_LIST_HEAD(base->tv1.vec + j);
+
+ base->timer_jiffies = jiffies;
+}
+
+#ifdef CONFIG_HOTPLUG_CPU
+static void migrate_timer_list(tvec_base_t *new_base, struct list_head *head)
+{
+ struct timer_list *timer;
+
+ while (!list_empty(head)) {
+ timer = list_entry(head->next, struct timer_list, entry);
+ detach_timer(timer, 0);
+ timer->base = &new_base->t_base;
+ internal_add_timer(new_base, timer);
+ }
+}
+
+static void __devinit migrate_timers(int cpu)
+{
+ tvec_base_t *old_base;
+ tvec_base_t *new_base;
+ int i;
+
+ BUG_ON(cpu_online(cpu));
+ old_base = &per_cpu(tvec_bases, cpu);
+ new_base = &get_cpu_var(tvec_bases);
+
+ local_irq_disable();
+ spin_lock(&new_base->t_base.lock);
+ spin_lock(&old_base->t_base.lock);
+
+ if (old_base->t_base.running_timer)
+ BUG();
+ for (i = 0; i < TVR_SIZE; i++)
+ migrate_timer_list(new_base, old_base->tv1.vec + i);
+ for (i = 0; i < TVN_SIZE; i++) {
+ migrate_timer_list(new_base, old_base->tv2.vec + i);
+ migrate_timer_list(new_base, old_base->tv3.vec + i);
+ migrate_timer_list(new_base, old_base->tv4.vec + i);
+ migrate_timer_list(new_base, old_base->tv5.vec + i);
+ }
+
+ spin_unlock(&old_base->t_base.lock);
+ spin_unlock(&new_base->t_base.lock);
+ local_irq_enable();
+ put_cpu_var(tvec_bases);
+}
+#endif /* CONFIG_HOTPLUG_CPU */
+
+static int __devinit timer_cpu_notify(struct notifier_block *self,
+ unsigned long action, void *hcpu)
+{
+ long cpu = (long)hcpu;
+ switch(action) {
+ case CPU_UP_PREPARE:
+ init_timers_cpu(cpu);
+ break;
+#ifdef CONFIG_HOTPLUG_CPU
+ case CPU_DEAD:
+ migrate_timers(cpu);
+ break;
+#endif
+ default:
+ break;
+ }
+ return NOTIFY_OK;
+}
+
+static struct notifier_block __devinitdata timers_nb = {
+ .notifier_call = timer_cpu_notify,
+};
+
+
+void __init init_timers(void)
+{
+ timer_cpu_notify(&timers_nb, (unsigned long)CPU_UP_PREPARE,
+ (void *)(long)smp_processor_id());
+ register_cpu_notifier(&timers_nb);
+ open_softirq(TIMER_SOFTIRQ, run_timer_softirq, NULL);
+}
Index: linux/kernel/timer.c
===================================================================
--- linux.orig/kernel/timer.c
+++ linux/kernel/timer.c
@@ -1,12 +1,10 @@
/*
* linux/kernel/timer.c
*
- * Kernel internal timers, kernel timekeeping, basic process system calls
+ * Kernel timekeeping, basic process system calls
*
* Copyright (C) 1991, 1992 Linus Torvalds
*
- * 1997-01-28 Modified by Finn Arne Gangstad to make timers scale better.
- *
* 1997-09-10 Updated NTP code according to technical memorandum Jan '96
* "A Kernel Model for Precision Timekeeping" by Dave Mills
* 1998-12-24 Fixed a xtime SMP race (we need the xtime_lock rw spinlock to
@@ -14,9 +12,6 @@
* Copyright (C) 1998 Andrea Arcangeli
* 1999-03-10 Improved NTP compatibility by Ulrich Windl
* 2002-05-31 Move sys_sysinfo here and make its locking sane, Robert Love
- * 2000-10-05 Implemented scalable SMP per-CPU timer handling.
- * Copyright (C) 2000, 2001, 2002 Ingo Molnar
- * Designed by David S. Miller, Alexey Kuznetsov and Ingo Molnar
*/
#include <linux/kernel_stat.h>
@@ -51,503 +46,6 @@ u64 jiffies_64 __cacheline_aligned_in_sm
EXPORT_SYMBOL(jiffies_64);
/*
- * per-CPU timer vector definitions:
- */
-
-#define TVN_BITS (CONFIG_BASE_SMALL ? 4 : 6)
-#define TVR_BITS (CONFIG_BASE_SMALL ? 6 : 8)
-#define TVN_SIZE (1 << TVN_BITS)
-#define TVR_SIZE (1 << TVR_BITS)
-#define TVN_MASK (TVN_SIZE - 1)
-#define TVR_MASK (TVR_SIZE - 1)
-
-struct timer_base_s {
- spinlock_t lock;
- struct timer_list *running_timer;
-};
-
-typedef struct tvec_s {
- struct list_head vec[TVN_SIZE];
-} tvec_t;
-
-typedef struct tvec_root_s {
- struct list_head vec[TVR_SIZE];
-} tvec_root_t;
-
-struct tvec_t_base_s {
- struct timer_base_s t_base;
- unsigned long timer_jiffies;
- tvec_root_t tv1;
- tvec_t tv2;
- tvec_t tv3;
- tvec_t tv4;
- tvec_t tv5;
-} ____cacheline_aligned_in_smp;
-
-typedef struct tvec_t_base_s tvec_base_t;
-static DEFINE_PER_CPU(tvec_base_t, tvec_bases);
-
-static inline void set_running_timer(tvec_base_t *base,
- struct timer_list *timer)
-{
-#ifdef CONFIG_SMP
- base->t_base.running_timer = timer;
-#endif
-}
-
-static void internal_add_timer(tvec_base_t *base, struct timer_list *timer)
-{
- unsigned long expires = timer->expires;
- unsigned long idx = expires - base->timer_jiffies;
- struct list_head *vec;
-
- if (idx < TVR_SIZE) {
- int i = expires & TVR_MASK;
- vec = base->tv1.vec + i;
- } else if (idx < 1 << (TVR_BITS + TVN_BITS)) {
- int i = (expires >> TVR_BITS) & TVN_MASK;
- vec = base->tv2.vec + i;
- } else if (idx < 1 << (TVR_BITS + 2 * TVN_BITS)) {
- int i = (expires >> (TVR_BITS + TVN_BITS)) & TVN_MASK;
- vec = base->tv3.vec + i;
- } else if (idx < 1 << (TVR_BITS + 3 * TVN_BITS)) {
- int i = (expires >> (TVR_BITS + 2 * TVN_BITS)) & TVN_MASK;
- vec = base->tv4.vec + i;
- } else if ((signed long) idx < 0) {
- /*
- * Can happen if you add a timer with expires == jiffies,
- * or you set a timer to go off in the past
- */
- vec = base->tv1.vec + (base->timer_jiffies & TVR_MASK);
- } else {
- int i;
- /* If the timeout is larger than 0xffffffff on 64-bit
- * architectures then we use the maximum timeout:
- */
- if (idx > 0xffffffffUL) {
- idx = 0xffffffffUL;
- expires = idx + base->timer_jiffies;
- }
- i = (expires >> (TVR_BITS + 3 * TVN_BITS)) & TVN_MASK;
- vec = base->tv5.vec + i;
- }
- /*
- * Timers are FIFO:
- */
- list_add_tail(&timer->entry, vec);
-}
-
-typedef struct timer_base_s timer_base_t;
-/*
- * Used by TIMER_INITIALIZER, we can't use per_cpu(tvec_bases)
- * at compile time, and we need timer->base to lock the timer.
- */
-timer_base_t __init_timer_base
- ____cacheline_aligned_in_smp = { .lock = SPIN_LOCK_UNLOCKED };
-EXPORT_SYMBOL(__init_timer_base);
-
-/***
- * init_timer - initialize a timer.
- * @timer: the timer to be initialized
- *
- * init_timer() must be done to a timer prior calling *any* of the
- * other timer functions.
- */
-void fastcall init_timer(struct timer_list *timer)
-{
- timer->entry.next = NULL;
- timer->base = &per_cpu(tvec_bases, raw_smp_processor_id()).t_base;
-}
-EXPORT_SYMBOL(init_timer);
-
-static inline void detach_timer(struct timer_list *timer,
- int clear_pending)
-{
- struct list_head *entry = &timer->entry;
-
- __list_del(entry->prev, entry->next);
- if (clear_pending)
- entry->next = NULL;
- entry->prev = LIST_POISON2;
-}
-
-/*
- * We are using hashed locking: holding per_cpu(tvec_bases).t_base.lock
- * means that all timers which are tied to this base via timer->base are
- * locked, and the base itself is locked too.
- *
- * So __run_timers/migrate_timers can safely modify all timers which could
- * be found on ->tvX lists.
- *
- * When the timer's base is locked, and the timer removed from list, it is
- * possible to set timer->base = NULL and drop the lock: the timer remains
- * locked.
- */
-static timer_base_t *lock_timer_base(struct timer_list *timer,
- unsigned long *flags)
-{
- timer_base_t *base;
-
- for (;;) {
- base = timer->base;
- if (likely(base != NULL)) {
- spin_lock_irqsave(&base->lock, *flags);
- if (likely(base == timer->base))
- return base;
- /* The timer has migrated to another CPU */
- spin_unlock_irqrestore(&base->lock, *flags);
- }
- cpu_relax();
- }
-}
-
-int __mod_timer(struct timer_list *timer, unsigned long expires)
-{
- timer_base_t *base;
- tvec_base_t *new_base;
- unsigned long flags;
- int ret = 0;
-
- BUG_ON(!timer->function);
-
- base = lock_timer_base(timer, &flags);
-
- if (timer_pending(timer)) {
- detach_timer(timer, 0);
- ret = 1;
- }
-
- new_base = &__get_cpu_var(tvec_bases);
-
- if (base != &new_base->t_base) {
- /*
- * We are trying to schedule the timer on the local CPU.
- * However we can't change timer's base while it is running,
- * otherwise del_timer_sync() can't detect that the timer's
- * handler yet has not finished. This also guarantees that
- * the timer is serialized wrt itself.
- */
- if (unlikely(base->running_timer == timer)) {
- /* The timer remains on a former base */
- new_base = container_of(base, tvec_base_t, t_base);
- } else {
- /* See the comment in lock_timer_base() */
- timer->base = NULL;
- spin_unlock(&base->lock);
- spin_lock(&new_base->t_base.lock);
- timer->base = &new_base->t_base;
- }
- }
-
- timer->expires = expires;
- internal_add_timer(new_base, timer);
- spin_unlock_irqrestore(&new_base->t_base.lock, flags);
-
- return ret;
-}
-
-EXPORT_SYMBOL(__mod_timer);
-
-/***
- * add_timer_on - start a timer on a particular CPU
- * @timer: the timer to be added
- * @cpu: the CPU to start it on
- *
- * This is not very scalable on SMP. Double adds are not possible.
- */
-void add_timer_on(struct timer_list *timer, int cpu)
-{
- tvec_base_t *base = &per_cpu(tvec_bases, cpu);
- unsigned long flags;
-
- BUG_ON(timer_pending(timer) || !timer->function);
- spin_lock_irqsave(&base->t_base.lock, flags);
- timer->base = &base->t_base;
- internal_add_timer(base, timer);
- spin_unlock_irqrestore(&base->t_base.lock, flags);
-}
-
-
-/***
- * mod_timer - modify a timer's timeout
- * @timer: the timer to be modified
- *
- * mod_timer is a more efficient way to update the expire field of an
- * active timer (if the timer is inactive it will be activated)
- *
- * mod_timer(timer, expires) is equivalent to:
- *
- * del_timer(timer); timer->expires = expires; add_timer(timer);
- *
- * Note that if there are multiple unserialized concurrent users of the
- * same timer, then mod_timer() is the only safe way to modify the timeout,
- * since add_timer() cannot modify an already running timer.
- *
- * The function returns whether it has modified a pending timer or not.
- * (ie. mod_timer() of an inactive timer returns 0, mod_timer() of an
- * active timer returns 1.)
- */
-int mod_timer(struct timer_list *timer, unsigned long expires)
-{
- BUG_ON(!timer->function);
-
- /*
- * This is a common optimization triggered by the
- * networking code - if the timer is re-modified
- * to be the same thing then just return:
- */
- if (timer->expires == expires && timer_pending(timer))
- return 1;
-
- return __mod_timer(timer, expires);
-}
-
-EXPORT_SYMBOL(mod_timer);
-
-/***
- * del_timer - deactive a timer.
- * @timer: the timer to be deactivated
- *
- * del_timer() deactivates a timer - this works on both active and inactive
- * timers.
- *
- * The function returns whether it has deactivated a pending timer or not.
- * (ie. del_timer() of an inactive timer returns 0, del_timer() of an
- * active timer returns 1.)
- */
-int del_timer(struct timer_list *timer)
-{
- timer_base_t *base;
- unsigned long flags;
- int ret = 0;
-
- if (timer_pending(timer)) {
- base = lock_timer_base(timer, &flags);
- if (timer_pending(timer)) {
- detach_timer(timer, 1);
- ret = 1;
- }
- spin_unlock_irqrestore(&base->lock, flags);
- }
-
- return ret;
-}
-
-EXPORT_SYMBOL(del_timer);
-
-#ifdef CONFIG_SMP
-/*
- * This function tries to deactivate a timer. Upon successful (ret >= 0)
- * exit the timer is not queued and the handler is not running on any CPU.
- *
- * It must not be called from interrupt contexts.
- */
-int try_to_del_timer_sync(struct timer_list *timer)
-{
- timer_base_t *base;
- unsigned long flags;
- int ret = -1;
-
- base = lock_timer_base(timer, &flags);
-
- if (base->running_timer == timer)
- goto out;
-
- ret = 0;
- if (timer_pending(timer)) {
- detach_timer(timer, 1);
- ret = 1;
- }
-out:
- spin_unlock_irqrestore(&base->lock, flags);
-
- return ret;
-}
-
-/***
- * del_timer_sync - deactivate a timer and wait for the handler to finish.
- * @timer: the timer to be deactivated
- *
- * This function only differs from del_timer() on SMP: besides deactivating
- * the timer it also makes sure the handler has finished executing on other
- * CPUs.
- *
- * Synchronization rules: callers must prevent restarting of the timer,
- * otherwise this function is meaningless. It must not be called from
- * interrupt contexts. The caller must not hold locks which would prevent
- * completion of the timer's handler. The timer's handler must not call
- * add_timer_on(). Upon exit the timer is not queued and the handler is
- * not running on any CPU.
- *
- * The function returns whether it has deactivated a pending timer or not.
- */
-int del_timer_sync(struct timer_list *timer)
-{
- for (;;) {
- int ret = try_to_del_timer_sync(timer);
- if (ret >= 0)
- return ret;
- }
-}
-
-EXPORT_SYMBOL(del_timer_sync);
-#endif
-
-static int cascade(tvec_base_t *base, tvec_t *tv, int index)
-{
- /* cascade all the timers from tv up one level */
- struct list_head *head, *curr;
-
- head = tv->vec + index;
- curr = head->next;
- /*
- * We are removing _all_ timers from the list, so we don't have to
- * detach them individually, just clear the list afterwards.
- */
- while (curr != head) {
- struct timer_list *tmp;
-
- tmp = list_entry(curr, struct timer_list, entry);
- BUG_ON(tmp->base != &base->t_base);
- curr = curr->next;
- internal_add_timer(base, tmp);
- }
- INIT_LIST_HEAD(head);
-
- return index;
-}
-
-/***
- * __run_timers - run all expired timers (if any) on this CPU.
- * @base: the timer vector to be processed.
- *
- * This function cascades all vectors and executes all expired timer
- * vectors.
- */
-#define INDEX(N) (base->timer_jiffies >> (TVR_BITS + N * TVN_BITS)) & TVN_MASK
-
-static inline void __run_timers(tvec_base_t *base)
-{
- struct timer_list *timer;
-
- spin_lock_irq(&base->t_base.lock);
- while (time_after_eq(jiffies, base->timer_jiffies)) {
- struct list_head work_list = LIST_HEAD_INIT(work_list);
- struct list_head *head = &work_list;
- int index = base->timer_jiffies & TVR_MASK;
-
- /*
- * Cascade timers:
- */
- if (!index &&
- (!cascade(base, &base->tv2, INDEX(0))) &&
- (!cascade(base, &base->tv3, INDEX(1))) &&
- !cascade(base, &base->tv4, INDEX(2)))
- cascade(base, &base->tv5, INDEX(3));
- ++base->timer_jiffies;
- list_splice_init(base->tv1.vec + index, &work_list);
- while (!list_empty(head)) {
- void (*fn)(unsigned long);
- unsigned long data;
-
- timer = list_entry(head->next,struct timer_list,entry);
- fn = timer->function;
- data = timer->data;
-
- set_running_timer(base, timer);
- detach_timer(timer, 1);
- spin_unlock_irq(&base->t_base.lock);
- {
- int preempt_count = preempt_count();
- fn(data);
- if (preempt_count != preempt_count()) {
- printk(KERN_WARNING "huh, entered %p "
- "with preempt_count %08x, exited"
- " with %08x?\n",
- fn, preempt_count,
- preempt_count());
- BUG();
- }
- }
- spin_lock_irq(&base->t_base.lock);
- }
- }
- set_running_timer(base, NULL);
- spin_unlock_irq(&base->t_base.lock);
-}
-
-#ifdef CONFIG_NO_IDLE_HZ
-/*
- * Find out when the next timer event is due to happen. This
- * is used on S/390 to stop all activity when a cpus is idle.
- * This functions needs to be called disabled.
- */
-unsigned long next_timer_interrupt(void)
-{
- tvec_base_t *base;
- struct list_head *list;
- struct timer_list *nte;
- unsigned long expires;
- tvec_t *varray[4];
- int i, j;
-
- base = &__get_cpu_var(tvec_bases);
- spin_lock(&base->t_base.lock);
- expires = base->timer_jiffies + (LONG_MAX >> 1);
- list = 0;
-
- /* Look for timer events in tv1. */
- j = base->timer_jiffies & TVR_MASK;
- do {
- list_for_each_entry(nte, base->tv1.vec + j, entry) {
- expires = nte->expires;
- if (j < (base->timer_jiffies & TVR_MASK))
- list = base->tv2.vec + (INDEX(0));
- goto found;
- }
- j = (j + 1) & TVR_MASK;
- } while (j != (base->timer_jiffies & TVR_MASK));
-
- /* Check tv2-tv5. */
- varray[0] = &base->tv2;
- varray[1] = &base->tv3;
- varray[2] = &base->tv4;
- varray[3] = &base->tv5;
- for (i = 0; i < 4; i++) {
- j = INDEX(i);
- do {
- if (list_empty(varray[i]->vec + j)) {
- j = (j + 1) & TVN_MASK;
- continue;
- }
- list_for_each_entry(nte, varray[i]->vec + j, entry)
- if (time_before(nte->expires, expires))
- expires = nte->expires;
- if (j < (INDEX(i)) && i < 3)
- list = varray[i + 1]->vec + (INDEX(i + 1));
- goto found;
- } while (j != (INDEX(i)));
- }
-found:
- if (list) {
- /*
- * The search wrapped. We need to look at the next list
- * from next tv element that would cascade into tv element
- * where we found the timer element.
- */
- list_for_each_entry(nte, list, entry) {
- if (time_before(nte->expires, expires))
- expires = nte->expires;
- }
- }
- spin_unlock(&base->t_base.lock);
- return expires;
-}
-#endif
-
-/******************************************************************/
-
-/*
* Timekeeping variables
*/
unsigned long tick_usec = TICK_USEC; /* USER_HZ period (usec) */
@@ -851,26 +349,6 @@ EXPORT_SYMBOL(xtime_lock);
#endif
/*
- * This function runs timers and the timer-tq in bottom half context.
- */
-static void run_timer_softirq(struct softirq_action *h)
-{
- tvec_base_t *base = &__get_cpu_var(tvec_bases);
-
- ktimer_run_queues();
- if (time_after_eq(jiffies, base->timer_jiffies))
- __run_timers(base);
-}
-
-/*
- * Called by the local, per-CPU timer interrupt on SMP.
- */
-void run_local_timers(void)
-{
- raise_softirq(TIMER_SOFTIRQ);
-}
-
-/*
* Called by the timer interrupt. xtime_lock must already be taken
* by the timer IRQ!
*/
@@ -1015,104 +493,6 @@ asmlinkage long sys_getegid(void)
#endif
-static void process_timeout(unsigned long __data)
-{
- wake_up_process((task_t *)__data);
-}
-
-/**
- * schedule_timeout - sleep until timeout
- * @timeout: timeout value in jiffies
- *
- * Make the current task sleep until @timeout jiffies have
- * elapsed. The routine will return immediately unless
- * the current task state has been set (see set_current_state()).
- *
- * You can set the task state as follows -
- *
- * %TASK_UNINTERRUPTIBLE - at least @timeout jiffies are guaranteed to
- * pass before the routine returns. The routine will return 0
- *
- * %TASK_INTERRUPTIBLE - the routine may return early if a signal is
- * delivered to the current task. In this case the remaining time
- * in jiffies will be returned, or 0 if the timer expired in time
- *
- * The current task state is guaranteed to be TASK_RUNNING when this
- * routine returns.
- *
- * Specifying a @timeout value of %MAX_SCHEDULE_TIMEOUT will schedule
- * the CPU away without a bound on the timeout. In this case the return
- * value will be %MAX_SCHEDULE_TIMEOUT.
- *
- * In all cases the return value is guaranteed to be non-negative.
- */
-fastcall signed long __sched schedule_timeout(signed long timeout)
-{
- struct timer_list timer;
- unsigned long expire;
-
- switch (timeout)
- {
- case MAX_SCHEDULE_TIMEOUT:
- /*
- * These two special cases are useful to be comfortable
- * in the caller. Nothing more. We could take
- * MAX_SCHEDULE_TIMEOUT from one of the negative values,
- * but I'd like to return a valid offset (>=0) to allow
- * the caller to do everything it wants with the retval.
- */
- schedule();
- goto out;
- default:
- /*
- * Another bit of paranoia. Note that the retval will be
- * 0 since no piece of kernel is supposed to do a check
- * for a negative retval of schedule_timeout() (since it
- * should never happen anyway). You just have the printk()
- * that will tell you if something has gone wrong and where.
- */
- if (timeout < 0)
- {
- printk(KERN_ERR "schedule_timeout: wrong timeout "
- "value %lx from %p\n", timeout,
- __builtin_return_address(0));
- current->state = TASK_RUNNING;
- goto out;
- }
- }
-
- expire = timeout + jiffies;
-
- setup_timer(&timer, process_timeout, (unsigned long)current);
- __mod_timer(&timer, expire);
- schedule();
- del_singleshot_timer_sync(&timer);
-
- timeout = expire - jiffies;
-
- out:
- return timeout < 0 ? 0 : timeout;
-}
-EXPORT_SYMBOL(schedule_timeout);
-
-/*
- * We can use __set_current_state() here because schedule_timeout() calls
- * schedule() unconditionally.
- */
-signed long __sched schedule_timeout_interruptible(signed long timeout)
-{
- __set_current_state(TASK_INTERRUPTIBLE);
- return schedule_timeout(timeout);
-}
-EXPORT_SYMBOL(schedule_timeout_interruptible);
-
-signed long __sched schedule_timeout_uninterruptible(signed long timeout)
-{
- __set_current_state(TASK_UNINTERRUPTIBLE);
- return schedule_timeout(timeout);
-}
-EXPORT_SYMBOL(schedule_timeout_uninterruptible);
-
/* Thread ID - the internal kernel "pid" */
asmlinkage long sys_gettid(void)
{
@@ -1208,102 +588,6 @@ asmlinkage long sys_sysinfo(struct sysin
return 0;
}
-static void __devinit init_timers_cpu(int cpu)
-{
- int j;
- tvec_base_t *base;
-
- base = &per_cpu(tvec_bases, cpu);
- spin_lock_init(&base->t_base.lock);
- for (j = 0; j < TVN_SIZE; j++) {
- INIT_LIST_HEAD(base->tv5.vec + j);
- INIT_LIST_HEAD(base->tv4.vec + j);
- INIT_LIST_HEAD(base->tv3.vec + j);
- INIT_LIST_HEAD(base->tv2.vec + j);
- }
- for (j = 0; j < TVR_SIZE; j++)
- INIT_LIST_HEAD(base->tv1.vec + j);
-
- base->timer_jiffies = jiffies;
-}
-
-#ifdef CONFIG_HOTPLUG_CPU
-static void migrate_timer_list(tvec_base_t *new_base, struct list_head *head)
-{
- struct timer_list *timer;
-
- while (!list_empty(head)) {
- timer = list_entry(head->next, struct timer_list, entry);
- detach_timer(timer, 0);
- timer->base = &new_base->t_base;
- internal_add_timer(new_base, timer);
- }
-}
-
-static void __devinit migrate_timers(int cpu)
-{
- tvec_base_t *old_base;
- tvec_base_t *new_base;
- int i;
-
- BUG_ON(cpu_online(cpu));
- old_base = &per_cpu(tvec_bases, cpu);
- new_base = &get_cpu_var(tvec_bases);
-
- local_irq_disable();
- spin_lock(&new_base->t_base.lock);
- spin_lock(&old_base->t_base.lock);
-
- if (old_base->t_base.running_timer)
- BUG();
- for (i = 0; i < TVR_SIZE; i++)
- migrate_timer_list(new_base, old_base->tv1.vec + i);
- for (i = 0; i < TVN_SIZE; i++) {
- migrate_timer_list(new_base, old_base->tv2.vec + i);
- migrate_timer_list(new_base, old_base->tv3.vec + i);
- migrate_timer_list(new_base, old_base->tv4.vec + i);
- migrate_timer_list(new_base, old_base->tv5.vec + i);
- }
-
- spin_unlock(&old_base->t_base.lock);
- spin_unlock(&new_base->t_base.lock);
- local_irq_enable();
- put_cpu_var(tvec_bases);
-}
-#endif /* CONFIG_HOTPLUG_CPU */
-
-static int __devinit timer_cpu_notify(struct notifier_block *self,
- unsigned long action, void *hcpu)
-{
- long cpu = (long)hcpu;
- switch(action) {
- case CPU_UP_PREPARE:
- init_timers_cpu(cpu);
- break;
-#ifdef CONFIG_HOTPLUG_CPU
- case CPU_DEAD:
- migrate_timers(cpu);
- break;
-#endif
- default:
- break;
- }
- return NOTIFY_OK;
-}
-
-static struct notifier_block __devinitdata timers_nb = {
- .notifier_call = timer_cpu_notify,
-};
-
-
-void __init init_timers(void)
-{
- timer_cpu_notify(&timers_nb, (unsigned long)CPU_UP_PREPARE,
- (void *)(long)smp_processor_id());
- register_cpu_notifier(&timers_nb);
- open_softirq(TIMER_SOFTIRQ, run_timer_softirq, NULL);
-}
-
#ifdef CONFIG_TIME_INTERPOLATION
struct time_interpolator *time_interpolator;
@@ -1492,32 +776,3 @@ unregister_time_interpolator(struct time
spin_unlock(&time_interpolator_lock);
}
#endif /* CONFIG_TIME_INTERPOLATION */
-
-/**
- * msleep - sleep safely even with waitqueue interruptions
- * @msecs: Time in milliseconds to sleep for
- */
-void msleep(unsigned int msecs)
-{
- unsigned long timeout = msecs_to_jiffies(msecs) + 1;
-
- while (timeout)
- timeout = schedule_timeout_uninterruptible(timeout);
-}
-
-EXPORT_SYMBOL(msleep);
-
-/**
- * msleep_interruptible - sleep waiting for signals
- * @msecs: Time in milliseconds to sleep for
- */
-unsigned long msleep_interruptible(unsigned int msecs)
-{
- unsigned long timeout = msecs_to_jiffies(msecs) + 1;
-
- while (timeout && !signal_pending(current))
- timeout = schedule_timeout_interruptible(timeout);
- return jiffies_to_msecs(timeout);
-}
-
-EXPORT_SYMBOL(msleep_interruptible);
--
^ permalink raw reply [flat|nested] 47+ messages in thread
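The timer-wheel code moved by this patch hashes each timer into one of five
vectors (tv1..tv5) based on how far in the future it expires, which is what both
__run_timers() and next_timer_interrupt() above walk. The bucket-selection
thresholds can be sketched in plain userspace C; the constants below assume the
non-CONFIG_BASE_SMALL configuration, and wheel_level() is an illustrative helper,
not a kernel function:

```c
#include <assert.h>

/* Wheel geometry as in kernel/timer.c (non-CONFIG_BASE_SMALL values). */
#define TVN_BITS 6
#define TVR_BITS 8
#define TVN_SIZE (1 << TVN_BITS)	/* 64 slots each in tv2..tv5 */
#define TVR_SIZE (1 << TVR_BITS)	/* 256 slots in tv1 */

/*
 * Return which wheel level (1..5) a timer expiring 'idx' jiffies
 * after base->timer_jiffies would be queued on, mirroring the
 * cascade of range checks in internal_add_timer().
 */
static int wheel_level(unsigned long idx)
{
	if (idx < TVR_SIZE)
		return 1;
	if (idx < 1UL << (TVR_BITS + TVN_BITS))
		return 2;
	if (idx < 1UL << (TVR_BITS + 2 * TVN_BITS))
		return 3;
	if (idx < 1UL << (TVR_BITS + 3 * TVN_BITS))
		return 4;
	return 5;			/* capped at 0xffffffff jiffies */
}
```

Timers on tv1 fire directly; timers on tv2..tv5 cascade down one level each time
the lower wheel wraps, which is why next_timer_interrupt() has to peek at the
list that would cascade into the slot it found.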
* [patch 25/43] Create ktimeout.h and move timer.h code into it
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
` (23 preceding siblings ...)
2005-12-01 0:03 ` [patch 24/43] Split timeout code into kernel/ktimeout.c Thomas Gleixner
@ 2005-12-01 0:03 ` Thomas Gleixner
2005-12-01 2:36 ` Adrian Bunk
2005-12-01 0:03 ` [patch 26/43] Rename struct timer_list to struct ktimeout Thomas Gleixner
` (17 subsequent siblings)
42 siblings, 1 reply; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:03 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment (ktimeout-h.patch)
- introduce ktimeout.h and move the timeout implementation into it, as-is.
- keep timer.h for compatibility
Signed-off-by: Ingo Molnar <mingo@elte.hu>
include/linux/ktimeout.h | 100 +++++++++++++++++++++++++++++++++++++++++++++++
include/linux/timer.h | 96 +--------------------------------------------
2 files changed, 103 insertions(+), 93 deletions(-)
Index: linux/include/linux/ktimeout.h
===================================================================
--- /dev/null
+++ linux/include/linux/ktimeout.h
@@ -0,0 +1,100 @@
+#ifndef _LINUX_KTIMEOUT_H
+#define _LINUX_KTIMEOUT_H
+
+#include <linux/config.h>
+#include <linux/list.h>
+#include <linux/spinlock.h>
+#include <linux/stddef.h>
+
+struct timer_base_s;
+
+struct timer_list {
+ struct list_head entry;
+ unsigned long expires;
+
+ void (*function)(unsigned long);
+ unsigned long data;
+
+ struct timer_base_s *base;
+};
+
+extern struct timer_base_s __init_timer_base;
+
+#define TIMER_INITIALIZER(_function, _expires, _data) { \
+ .function = (_function), \
+ .expires = (_expires), \
+ .data = (_data), \
+ .base = &__init_timer_base, \
+ }
+
+#define DEFINE_TIMER(_name, _function, _expires, _data) \
+ struct timer_list _name = \
+ TIMER_INITIALIZER(_function, _expires, _data)
+
+void fastcall init_timer(struct timer_list * timer);
+
+static inline void setup_timer(struct timer_list * timer,
+ void (*function)(unsigned long),
+ unsigned long data)
+{
+ timer->function = function;
+ timer->data = data;
+ init_timer(timer);
+}
+
+/***
+ * timer_pending - is a timer pending?
+ * @timer: the timer in question
+ *
+ * timer_pending will tell whether a given timer is currently pending,
+ * or not. Callers must ensure serialization wrt. other operations done
+ * to this timer, eg. interrupt contexts, or other CPUs on SMP.
+ *
+ * return value: 1 if the timer is pending, 0 if not.
+ */
+static inline int timer_pending(const struct timer_list * timer)
+{
+ return timer->entry.next != NULL;
+}
+
+extern void add_timer_on(struct timer_list *timer, int cpu);
+extern int del_timer(struct timer_list * timer);
+extern int __mod_timer(struct timer_list *timer, unsigned long expires);
+extern int mod_timer(struct timer_list *timer, unsigned long expires);
+
+extern unsigned long next_timer_interrupt(void);
+
+/***
+ * add_timer - start a timer
+ * @timer: the timer to be added
+ *
+ * The kernel will do a ->function(->data) callback from the
+ * timer interrupt at the ->expires point in the future. The
+ * current time is 'jiffies'.
+ *
+ * The timer's ->expires, ->function (and if the handler uses it, ->data)
+ * fields must be set prior to calling this function.
+ *
+ * Timers with an ->expires field in the past will be executed in the next
+ * timer tick.
+ */
+static inline void add_timer(struct timer_list *timer)
+{
+ BUG_ON(timer_pending(timer));
+ __mod_timer(timer, timer->expires);
+}
+
+#ifdef CONFIG_SMP
+ extern int try_to_del_timer_sync(struct timer_list *timer);
+ extern int del_timer_sync(struct timer_list *timer);
+#else
+# define try_to_del_timer_sync(t) del_timer(t)
+# define del_timer_sync(t) del_timer(t)
+#endif
+
+#define del_singleshot_timer_sync(t) del_timer_sync(t)
+
+extern void init_timers(void);
+extern void run_local_timers(void);
+
+#endif
Index: linux/include/linux/timer.h
===================================================================
--- linux.orig/include/linux/timer.h
+++ linux/include/linux/timer.h
@@ -1,101 +1,11 @@
#ifndef _LINUX_TIMER_H
#define _LINUX_TIMER_H
-#include <linux/config.h>
-#include <linux/list.h>
-#include <linux/spinlock.h>
-#include <linux/stddef.h>
-
-struct timer_base_s;
-
-struct timer_list {
- struct list_head entry;
- unsigned long expires;
-
- void (*function)(unsigned long);
- unsigned long data;
-
- struct timer_base_s *base;
-};
-
-extern struct timer_base_s __init_timer_base;
-
-#define TIMER_INITIALIZER(_function, _expires, _data) { \
- .function = (_function), \
- .expires = (_expires), \
- .data = (_data), \
- .base = &__init_timer_base, \
- }
-
-#define DEFINE_TIMER(_name, _function, _expires, _data) \
- struct timer_list _name = \
- TIMER_INITIALIZER(_function, _expires, _data)
-
-void fastcall init_timer(struct timer_list * timer);
-
-static inline void setup_timer(struct timer_list * timer,
- void (*function)(unsigned long),
- unsigned long data)
-{
- timer->function = function;
- timer->data = data;
- init_timer(timer);
-}
-
-/***
- * timer_pending - is a timer pending?
- * @timer: the timer in question
- *
- * timer_pending will tell whether a given timer is currently pending,
- * or not. Callers must ensure serialization wrt. other operations done
- * to this timer, eg. interrupt contexts, or other CPUs on SMP.
- *
- * return value: 1 if the timer is pending, 0 if not.
- */
-static inline int timer_pending(const struct timer_list * timer)
-{
- return timer->entry.next != NULL;
-}
-
-extern void add_timer_on(struct timer_list *timer, int cpu);
-extern int del_timer(struct timer_list * timer);
-extern int __mod_timer(struct timer_list *timer, unsigned long expires);
-extern int mod_timer(struct timer_list *timer, unsigned long expires);
-
-extern unsigned long next_timer_interrupt(void);
-
-/***
- * add_timer - start a timer
- * @timer: the timer to be added
- *
- * The kernel will do a ->function(->data) callback from the
- * timer interrupt at the ->expired point in the future. The
- * current time is 'jiffies'.
- *
- * The timer's ->expired, ->function (and if the handler uses it, ->data)
- * fields must be set prior calling this function.
- *
- * Timers with an ->expired field in the past will be executed in the next
- * timer tick.
+/*
+ * This file is a compatibility placeholder - it will go away.
*/
-static inline void add_timer(struct timer_list *timer)
-{
- BUG_ON(timer_pending(timer));
- __mod_timer(timer, timer->expires);
-}
-
-#ifdef CONFIG_SMP
- extern int try_to_del_timer_sync(struct timer_list *timer);
- extern int del_timer_sync(struct timer_list *timer);
-#else
-# define try_to_del_timer_sync(t) del_timer(t)
-# define del_timer_sync(t) del_timer(t)
-#endif
-
-#define del_singleshot_timer_sync(t) del_timer_sync(t)
+#include <linux/ktimeout.h>
-extern void init_timers(void);
-extern void run_local_timers(void);
extern int it_real_fn(void *);
#endif
--
^ permalink raw reply [flat|nested] 47+ messages in thread
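Note how timer_pending() in the header above encodes "pending" purely as
entry.next != NULL, which is why init_timer() only has to clear that one pointer
and why detach_timer(timer, 1) clears it again. A minimal userspace sketch of
that invariant (the demo_* names are illustrative, not kernel API):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for the kernel types, for illustration only. */
struct list_head { struct list_head *next, *prev; };

struct ktimeout_demo {
	struct list_head entry;
	unsigned long expires;
};

/* init_timer() in the patch reduces to clearing entry.next. */
static void demo_init(struct ktimeout_demo *t)
{
	t->entry.next = NULL;
}

/* timer_pending(): linked onto some list <=> entry.next != NULL. */
static int demo_pending(const struct ktimeout_demo *t)
{
	return t->entry.next != NULL;
}

/* Enqueue onto a one-element list, making the timer "pending". */
static void demo_enqueue(struct ktimeout_demo *t, struct list_head *head)
{
	t->entry.next = head;
	t->entry.prev = head;
	head->next = &t->entry;
	head->prev = &t->entry;
}

/* Returns 0 before enqueue, 1 after. */
static int demo_roundtrip(void)
{
	struct list_head head;
	struct ktimeout_demo t;

	demo_init(&t);
	if (demo_pending(&t))
		return -1;
	demo_enqueue(&t, &head);
	return demo_pending(&t);
}
```

This is also why callers must serialize against concurrent list operations, as
the timer_pending() comment warns: the check is a bare pointer read.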
* [patch 26/43] Rename struct timer_list to struct ktimeout
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
` (24 preceding siblings ...)
2005-12-01 0:03 ` [patch 25/43] Create ktimeout.h and move timer.h code into it Thomas Gleixner
@ 2005-12-01 0:03 ` Thomas Gleixner
2005-12-01 0:03 ` [patch 27/43] Convert timer_list users to ktimeout Thomas Gleixner
` (16 subsequent siblings)
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:03 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment (ktimeout-struct-base.patch)
- change the main timeout data structure from 'struct timer_list'
to 'struct ktimeout'
- introduce compatibility define for timer_list
Signed-off-by: Ingo Molnar <mingo@elte.hu>
include/linux/ktimeout.h | 2 +-
include/linux/timer.h | 7 ++++++-
2 files changed, 7 insertions(+), 2 deletions(-)
Index: linux/include/linux/ktimeout.h
===================================================================
--- linux.orig/include/linux/ktimeout.h
+++ linux/include/linux/ktimeout.h
@@ -8,7 +8,7 @@
struct timer_base_s;
-struct timer_list {
+struct ktimeout {
struct list_head entry;
unsigned long expires;
Index: linux/include/linux/timer.h
===================================================================
--- linux.orig/include/linux/timer.h
+++ linux/include/linux/timer.h
@@ -1,9 +1,14 @@
+/*
+ * This file is a compatibility placeholder - it will go away.
+ */
#ifndef _LINUX_TIMER_H
#define _LINUX_TIMER_H
/*
- * This file is a compatibility placeholder - it will go away.
+ * Compatibility define to turn 'struct timer_list' into 'struct ktimeout':
*/
+#define timer_list ktimeout
+
#include <linux/ktimeout.h>
extern int it_real_fn(void *);
--
^ permalink raw reply [flat|nested] 47+ messages in thread
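The compatibility define works because `struct timer_list` is two preprocessor
tokens: after `#define timer_list ktimeout`, every old declaration resolves to
the new struct with no source changes. A hedged userspace sketch of the trick
(the struct body here is simplified, not the real layout):

```c
#include <assert.h>

struct ktimeout {
	unsigned long expires;
};

/* The compatibility define from the patch: any 'struct timer_list'
 * in unconverted code now names 'struct ktimeout'. */
#define timer_list ktimeout

/* Unconverted caller, written against the old type name. */
static unsigned long old_style_user(void)
{
	struct timer_list t;	/* preprocesses to 'struct ktimeout t;' */

	t.expires = 42;
	return t.expires;
}
```

Since the define only renames the tag, both spellings refer to one type, so
old and new code can be mixed freely during the transition.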
* [patch 27/43] Convert timer_list users to ktimeout
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
` (25 preceding siblings ...)
2005-12-01 0:03 ` [patch 26/43] Rename struct timer_list to struct ktimeout Thomas Gleixner
@ 2005-12-01 0:03 ` Thomas Gleixner
2005-12-01 0:03 ` [patch 28/43] Convert ktimeout.h and create wrappers Thomas Gleixner
` (15 subsequent siblings)
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:03 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment (ktimeout-struct-more.patch)
- convert all uses of struct timer_list in ktimeout.h over to struct ktimeout
Signed-off-by: Ingo Molnar <mingo@elte.hu>
include/linux/ktimeout.h | 22 +++++++++++-----------
1 files changed, 11 insertions(+), 11 deletions(-)
Index: linux/include/linux/ktimeout.h
===================================================================
--- linux.orig/include/linux/ktimeout.h
+++ linux/include/linux/ktimeout.h
@@ -28,12 +28,12 @@ extern struct timer_base_s __init_timer_
}
#define DEFINE_TIMER(_name, _function, _expires, _data) \
- struct timer_list _name = \
+ struct ktimeout _name = \
TIMER_INITIALIZER(_function, _expires, _data)
-void fastcall init_timer(struct timer_list * timer);
+void fastcall init_timer(struct ktimeout * timer);
-static inline void setup_timer(struct timer_list * timer,
+static inline void setup_timer(struct ktimeout * timer,
void (*function)(unsigned long),
unsigned long data)
{
@@ -52,15 +52,15 @@ static inline void setup_timer(struct ti
*
* return value: 1 if the timer is pending, 0 if not.
*/
-static inline int timer_pending(const struct timer_list * timer)
+static inline int timer_pending(const struct ktimeout * timer)
{
return timer->entry.next != NULL;
}
-extern void add_timer_on(struct timer_list *timer, int cpu);
-extern int del_timer(struct timer_list * timer);
-extern int __mod_timer(struct timer_list *timer, unsigned long expires);
-extern int mod_timer(struct timer_list *timer, unsigned long expires);
+extern void add_timer_on(struct ktimeout *timer, int cpu);
+extern int del_timer(struct ktimeout * timer);
+extern int __mod_timer(struct ktimeout *timer, unsigned long expires);
+extern int mod_timer(struct ktimeout *timer, unsigned long expires);
extern unsigned long next_timer_interrupt(void);
@@ -78,15 +78,15 @@ extern unsigned long next_timer_interrup
* Timers with an ->expired field in the past will be executed in the next
* timer tick.
*/
-static inline void add_timer(struct timer_list *timer)
+static inline void add_timer(struct ktimeout *timer)
{
BUG_ON(timer_pending(timer));
__mod_timer(timer, timer->expires);
}
#ifdef CONFIG_SMP
- extern int try_to_del_timer_sync(struct timer_list *timer);
- extern int del_timer_sync(struct timer_list *timer);
+ extern int try_to_del_timer_sync(struct ktimeout *timer);
+ extern int del_timer_sync(struct ktimeout *timer);
#else
# define try_to_del_timer_sync(t) del_timer(t)
# define del_timer_sync(t) del_timer(t)
--
^ permalink raw reply [flat|nested] 47+ messages in thread
* [patch 28/43] Convert ktimeout.h and create wrappers
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
` (26 preceding siblings ...)
2005-12-01 0:03 ` [patch 27/43] Convert timer_list users to ktimeout Thomas Gleixner
@ 2005-12-01 0:03 ` Thomas Gleixner
2005-12-01 0:03 ` [patch 29/43] Convert ktimeout.c to ktimeout struct and APIs Thomas Gleixner
` (14 subsequent siblings)
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:03 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment (ktimeout-new-apis.patch)
- convert ktimeout.h to use the ktimeout naming
- introduce compatibility wrapper defines in the timer.h code
Signed-off-by: Ingo Molnar <mingo@elte.hu>
include/linux/ktimeout.h | 80 +++++++++++++++++++++++------------------------
include/linux/timer.h | 27 +++++++++++++++
2 files changed, 66 insertions(+), 41 deletions(-)
Index: linux/include/linux/ktimeout.h
===================================================================
--- linux.orig/include/linux/ktimeout.h
+++ linux/include/linux/ktimeout.h
@@ -6,7 +6,7 @@
#include <linux/spinlock.h>
#include <linux/stddef.h>
-struct timer_base_s;
+struct ktimeout_base_s;
struct ktimeout {
struct list_head entry;
@@ -15,86 +15,86 @@ struct ktimeout {
void (*function)(unsigned long);
unsigned long data;
- struct timer_base_s *base;
+ struct ktimeout_base_s *base;
};
-extern struct timer_base_s __init_timer_base;
+extern struct ktimeout_base_s __init_ktimeout_base;
-#define TIMER_INITIALIZER(_function, _expires, _data) { \
+#define KTIMEOUT_INITIALIZER(_function, _expires, _data) { \
.function = (_function), \
.expires = (_expires), \
.data = (_data), \
- .base = &__init_timer_base, \
+ .base = &__init_ktimeout_base, \
}
-#define DEFINE_TIMER(_name, _function, _expires, _data) \
- struct ktimeout _name = \
- TIMER_INITIALIZER(_function, _expires, _data)
+#define DEFINE_KTIMEOUT(_name, _function, _expires, _data) \
+ struct ktimeout _name = \
+ KTIMEOUT_INITIALIZER(_function, _expires, _data)
-void fastcall init_timer(struct ktimeout * timer);
+void fastcall init_ktimeout(struct ktimeout * ktimeout);
-static inline void setup_timer(struct ktimeout * timer,
+static inline void setup_ktimeout(struct ktimeout * ktimeout,
void (*function)(unsigned long),
unsigned long data)
{
- timer->function = function;
- timer->data = data;
- init_timer(timer);
+ ktimeout->function = function;
+ ktimeout->data = data;
+ init_ktimeout(ktimeout);
}
/***
- * timer_pending - is a timer pending?
- * @timer: the timer in question
+ * ktimeout_pending - is a ktimeout pending?
+ * @ktimeout: the ktimeout in question
*
- * timer_pending will tell whether a given timer is currently pending,
+ * ktimeout_pending will tell whether a given ktimeout is currently pending,
* or not. Callers must ensure serialization wrt. other operations done
- * to this timer, eg. interrupt contexts, or other CPUs on SMP.
+ * to this ktimeout, eg. interrupt contexts, or other CPUs on SMP.
*
- * return value: 1 if the timer is pending, 0 if not.
+ * return value: 1 if the ktimeout is pending, 0 if not.
*/
-static inline int timer_pending(const struct ktimeout * timer)
+static inline int ktimeout_pending(const struct ktimeout * ktimeout)
{
- return timer->entry.next != NULL;
+ return ktimeout->entry.next != NULL;
}
-extern void add_timer_on(struct ktimeout *timer, int cpu);
-extern int del_timer(struct ktimeout * timer);
-extern int __mod_timer(struct ktimeout *timer, unsigned long expires);
-extern int mod_timer(struct ktimeout *timer, unsigned long expires);
+extern void add_ktimeout_on(struct ktimeout *ktimeout, int cpu);
+extern int del_ktimeout(struct ktimeout * ktimeout);
+extern int __mod_ktimeout(struct ktimeout *ktimeout, unsigned long expires);
+extern int mod_ktimeout(struct ktimeout *ktimeout, unsigned long expires);
-extern unsigned long next_timer_interrupt(void);
+extern unsigned long next_ktimeout_interrupt(void);
/***
- * add_timer - start a timer
- * @timer: the timer to be added
+ * add_ktimeout - start a ktimeout
+ * @ktimeout: the ktimeout to be added
*
* The kernel will do a ->function(->data) callback from the
- * timer interrupt at the ->expired point in the future. The
+ * ktimeout interrupt at the ->expired point in the future. The
* current time is 'jiffies'.
*
- * The timer's ->expired, ->function (and if the handler uses it, ->data)
+ * The ktimeout's ->expired, ->function (and if the handler uses it, ->data)
* fields must be set prior calling this function.
*
* Timers with an ->expired field in the past will be executed in the next
- * timer tick.
+ * ktimeout tick.
*/
-static inline void add_timer(struct ktimeout *timer)
+static inline void add_ktimeout(struct ktimeout *ktimeout)
{
- BUG_ON(timer_pending(timer));
- __mod_timer(timer, timer->expires);
+ BUG_ON(ktimeout_pending(ktimeout));
+ __mod_ktimeout(ktimeout, ktimeout->expires);
}
#ifdef CONFIG_SMP
- extern int try_to_del_timer_sync(struct ktimeout *timer);
- extern int del_timer_sync(struct ktimeout *timer);
+ extern int try_to_del_ktimeout_sync(struct ktimeout *ktimeout);
+ extern int del_ktimeout_sync(struct ktimeout *ktimeout);
#else
-# define try_to_del_timer_sync(t) del_timer(t)
-# define del_timer_sync(t) del_timer(t)
+# define try_to_del_ktimeout_sync(t) del_ktimeout(t)
+# define del_ktimeout_sync(t) del_ktimeout(t)
#endif
-#define del_singleshot_timer_sync(t) del_timer_sync(t)
+#define del_singleshot_ktimeout_sync(t) del_ktimeout_sync(t)
-extern void init_timers(void);
-extern void run_local_timers(void);
+extern void init_ktimeouts(void);
+extern void run_local_ktimeouts(void);
#endif
Index: linux/include/linux/timer.h
===================================================================
--- linux.orig/include/linux/timer.h
+++ linux/include/linux/timer.h
@@ -7,8 +7,33 @@
/*
* Compatibility define to turn 'struct timer_list' into 'struct ktimeout':
*/
-#define timer_list ktimeout
+#define timer_list ktimeout
+#define timer_base_s ktimeout_base_s
+#define __init_timer_base __init_ktimeout_base
+/*
+ * Compatibility defines for the old timer APIs:
+ */
+#define TIMER_INITIALIZER KTIMEOUT_INITIALIZER
+#define DEFINE_TIMER DEFINE_KTIMEOUT
+#define init_timer init_ktimeout
+#define setup_timer setup_ktimeout
+#define timer_pending ktimeout_pending
+#define add_timer_on add_ktimeout_on
+#define del_timer del_ktimeout
+#define __mod_timer __mod_ktimeout
+#define mod_timer mod_ktimeout
+#define next_timer_interrupt next_ktimeout_interrupt
+#define add_timer add_ktimeout
+#define try_to_del_timer_sync try_to_del_ktimeout_sync
+#define del_timer_sync del_ktimeout_sync
+#define del_singleshot_timer_sync del_singleshot_ktimeout_sync
+#define init_timers init_ktimeouts
+#define run_local_timers run_local_ktimeouts
+
+/*
+ * Pick up the timeout APIs:
+ */
#include <linux/ktimeout.h>
extern int it_real_fn(void *);
--
^ permalink raw reply [flat|nested] 47+ messages in thread
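The wrapper scheme extends the same token-aliasing idea to the function names:
each old API name becomes an object-like macro for its ktimeout counterpart, so
legacy callers compile and link against the renamed implementation unchanged. A
small userspace sketch of the mechanism (simplified types and bodies, for
illustration only):

```c
#include <assert.h>

struct ktimeout {
	unsigned long expires;
	int pending;
};

/* Stand-ins for the renamed API. */
static void init_ktimeout(struct ktimeout *t)
{
	t->pending = 0;
}

static int ktimeout_pending(const struct ktimeout *t)
{
	return t->pending;
}

/* Compatibility wrappers in the style of the patch: the old names
 * become plain token aliases for the new ones. */
#define timer_list	ktimeout
#define init_timer	init_ktimeout
#define timer_pending	ktimeout_pending

/* Legacy caller, written entirely against the old names. */
static int legacy_caller(void)
{
	struct timer_list t;

	init_timer(&t);
	return timer_pending(&t);	/* 0: freshly initialized */
}
```

Because the aliases are macros rather than inline wrappers, there is no extra
call overhead and no duplicate symbols; the defines simply disappear once all
callers are converted and timer.h is removed.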
* [patch 29/43] Convert ktimeout.c to ktimeout struct and APIs
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
` (27 preceding siblings ...)
2005-12-01 0:03 ` [patch 28/43] Convert ktimeout.h and create wrappers Thomas Gleixner
@ 2005-12-01 0:03 ` Thomas Gleixner
2005-12-01 0:04 ` [patch 30/43] ktimeout documentation Thomas Gleixner
` (13 subsequent siblings)
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:03 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment (ktimeout-c-convert.patch)
- convert ktimeout.c to use the new ktimeout structs and APIs
Signed-off-by: Ingo Molnar <mingo@elte.hu>
kernel/ktimeout.c | 360 +++++++++++++++++++++++++++---------------------------
1 files changed, 180 insertions(+), 180 deletions(-)
Index: linux/kernel/ktimeout.c
===================================================================
--- linux.orig/kernel/ktimeout.c
+++ linux/kernel/ktimeout.c
@@ -5,8 +5,8 @@
*
* Copyright (C) 1991, 1992 Linus Torvalds
*
- * 1997-01-28 Modified by Finn Arne Gangstad to make timers scale better.
- * 2000-10-05 Implemented scalable SMP per-CPU timer handling.
+ * 1997-01-28 Modified by Finn Arne Gangstad to make timeouts scale better.
+ * 2000-10-05 Implemented scalable SMP per-CPU timeout handling.
* Copyright (C) 2000, 2001, 2002 Ingo Molnar
* Designed by David S. Miller, Alexey Kuznetsov and Ingo Molnar
*/
@@ -33,7 +33,7 @@
#include <asm/io.h>
/*
- * per-CPU timer vector definitions:
+ * per-CPU ktimeout vector definitions:
*/
#define TVN_BITS (CONFIG_BASE_SMALL ? 4 : 6)
@@ -43,9 +43,9 @@
#define TVN_MASK (TVN_SIZE - 1)
#define TVR_MASK (TVR_SIZE - 1)
-struct timer_base_s {
+struct ktimeout_base_s {
spinlock_t lock;
- struct timer_list *running_timer;
+ struct ktimeout *running_ktimeout;
};
typedef struct tvec_s {
@@ -57,8 +57,8 @@ typedef struct tvec_root_s {
} tvec_root_t;
struct tvec_t_base_s {
- struct timer_base_s t_base;
- unsigned long timer_jiffies;
+ struct ktimeout_base_s t_base;
+ unsigned long ktimeout_jiffies;
tvec_root_t tv1;
tvec_t tv2;
tvec_t tv3;
@@ -69,18 +69,18 @@ struct tvec_t_base_s {
typedef struct tvec_t_base_s tvec_base_t;
static DEFINE_PER_CPU(tvec_base_t, tvec_bases);
-static inline void set_running_timer(tvec_base_t *base,
- struct timer_list *timer)
+static inline void set_running_ktimeout(tvec_base_t *base,
+ struct ktimeout *ktimeout)
{
#ifdef CONFIG_SMP
- base->t_base.running_timer = timer;
+ base->t_base.running_ktimeout = ktimeout;
#endif
}
-static void internal_add_timer(tvec_base_t *base, struct timer_list *timer)
+static void internal_add_ktimeout(tvec_base_t *base, struct ktimeout *ktimeout)
{
- unsigned long expires = timer->expires;
- unsigned long idx = expires - base->timer_jiffies;
+ unsigned long expires = ktimeout->expires;
+ unsigned long idx = expires - base->ktimeout_jiffies;
struct list_head *vec;
if (idx < TVR_SIZE) {
@@ -97,10 +97,10 @@ static void internal_add_timer(tvec_base
vec = base->tv4.vec + i;
} else if ((signed long) idx < 0) {
/*
- * Can happen if you add a timer with expires == jiffies,
- * or you set a timer to go off in the past
+ * Can happen if you add a ktimeout with expires == jiffies,
+ * or you set a ktimeout to go off in the past
*/
- vec = base->tv1.vec + (base->timer_jiffies & TVR_MASK);
+ vec = base->tv1.vec + (base->ktimeout_jiffies & TVR_MASK);
} else {
int i;
/* If the timeout is larger than 0xffffffff on 64-bit
@@ -108,7 +108,7 @@ static void internal_add_timer(tvec_base
*/
if (idx > 0xffffffffUL) {
idx = 0xffffffffUL;
- expires = idx + base->timer_jiffies;
+ expires = idx + base->ktimeout_jiffies;
}
i = (expires >> (TVR_BITS + 3 * TVN_BITS)) & TVN_MASK;
vec = base->tv5.vec + i;
@@ -116,36 +116,36 @@ static void internal_add_timer(tvec_base
/*
* Timers are FIFO:
*/
- list_add_tail(&timer->entry, vec);
+ list_add_tail(&ktimeout->entry, vec);
}
-typedef struct timer_base_s timer_base_t;
+typedef struct ktimeout_base_s ktimeout_base_t;
/*
* Used by TIMER_INITIALIZER, we can't use per_cpu(tvec_bases)
- * at compile time, and we need timer->base to lock the timer.
+ * at compile time, and we need ktimeout->base to lock the ktimeout.
*/
-timer_base_t __init_timer_base
+ktimeout_base_t __init_ktimeout_base
____cacheline_aligned_in_smp = { .lock = SPIN_LOCK_UNLOCKED };
-EXPORT_SYMBOL(__init_timer_base);
+EXPORT_SYMBOL(__init_ktimeout_base);
/***
- * init_timer - initialize a timer.
- * @timer: the timer to be initialized
+ * init_ktimeout - initialize a ktimeout.
+ * @ktimeout: the ktimeout to be initialized
*
- * init_timer() must be done to a timer prior calling *any* of the
- * other timer functions.
+ * init_ktimeout() must be done to a ktimeout prior to calling *any* of the
+ * other ktimeout functions.
*/
-void fastcall init_timer(struct timer_list *timer)
+void fastcall init_ktimeout(struct ktimeout *ktimeout)
{
- timer->entry.next = NULL;
- timer->base = &per_cpu(tvec_bases, raw_smp_processor_id()).t_base;
+ ktimeout->entry.next = NULL;
+ ktimeout->base = &per_cpu(tvec_bases, raw_smp_processor_id()).t_base;
}
-EXPORT_SYMBOL(init_timer);
+EXPORT_SYMBOL(init_ktimeout);
-static inline void detach_timer(struct timer_list *timer,
+static inline void detach_ktimeout(struct ktimeout *ktimeout,
int clear_pending)
{
- struct list_head *entry = &timer->entry;
+ struct list_head *entry = &ktimeout->entry;
__list_del(entry->prev, entry->next);
if (clear_pending)
@@ -155,47 +155,47 @@ static inline void detach_timer(struct t
/*
* We are using hashed locking: holding per_cpu(tvec_bases).t_base.lock
- * means that all timers which are tied to this base via timer->base are
+ * means that all ktimeouts which are tied to this base via ktimeout->base are
* locked, and the base itself is locked too.
*
- * So __run_timers/migrate_timers can safely modify all timers which could
+ * So __run_ktimeouts/migrate_ktimeouts can safely modify all ktimeouts which could
* be found on ->tvX lists.
*
- * When the timer's base is locked, and the timer removed from list, it is
- * possible to set timer->base = NULL and drop the lock: the timer remains
+ * When the ktimeout's base is locked, and the ktimeout removed from list, it is
+ * possible to set ktimeout->base = NULL and drop the lock: the ktimeout remains
* locked.
*/
-static timer_base_t *lock_timer_base(struct timer_list *timer,
+static ktimeout_base_t *lock_ktimeout_base(struct ktimeout *ktimeout,
unsigned long *flags)
{
- timer_base_t *base;
+ ktimeout_base_t *base;
for (;;) {
- base = timer->base;
+ base = ktimeout->base;
if (likely(base != NULL)) {
spin_lock_irqsave(&base->lock, *flags);
- if (likely(base == timer->base))
+ if (likely(base == ktimeout->base))
return base;
- /* The timer has migrated to another CPU */
+ /* The ktimeout has migrated to another CPU */
spin_unlock_irqrestore(&base->lock, *flags);
}
cpu_relax();
}
}
-int __mod_timer(struct timer_list *timer, unsigned long expires)
+int __mod_ktimeout(struct ktimeout *ktimeout, unsigned long expires)
{
- timer_base_t *base;
+ ktimeout_base_t *base;
tvec_base_t *new_base;
unsigned long flags;
int ret = 0;
- BUG_ON(!timer->function);
+ BUG_ON(!ktimeout->function);
- base = lock_timer_base(timer, &flags);
+ base = lock_ktimeout_base(ktimeout, &flags);
- if (timer_pending(timer)) {
- detach_timer(timer, 0);
+ if (ktimeout_pending(ktimeout)) {
+ detach_ktimeout(ktimeout, 0);
ret = 1;
}
@@ -203,110 +203,110 @@ int __mod_timer(struct timer_list *timer
if (base != &new_base->t_base) {
/*
- * We are trying to schedule the timer on the local CPU.
- * However we can't change timer's base while it is running,
- * otherwise del_timer_sync() can't detect that the timer's
+ * We are trying to schedule the ktimeout on the local CPU.
+ * However we can't change ktimeout's base while it is running,
+ * otherwise del_ktimeout_sync() can't detect that the ktimeout's
* handler yet has not finished. This also guarantees that
- * the timer is serialized wrt itself.
+ * the ktimeout is serialized wrt itself.
*/
- if (unlikely(base->running_timer == timer)) {
- /* The timer remains on a former base */
+ if (unlikely(base->running_ktimeout == ktimeout)) {
+ /* The ktimeout remains on a former base */
new_base = container_of(base, tvec_base_t, t_base);
} else {
- /* See the comment in lock_timer_base() */
- timer->base = NULL;
+ /* See the comment in lock_ktimeout_base() */
+ ktimeout->base = NULL;
spin_unlock(&base->lock);
spin_lock(&new_base->t_base.lock);
- timer->base = &new_base->t_base;
+ ktimeout->base = &new_base->t_base;
}
}
- timer->expires = expires;
- internal_add_timer(new_base, timer);
+ ktimeout->expires = expires;
+ internal_add_ktimeout(new_base, ktimeout);
spin_unlock_irqrestore(&new_base->t_base.lock, flags);
return ret;
}
-EXPORT_SYMBOL(__mod_timer);
+EXPORT_SYMBOL(__mod_ktimeout);
/***
- * add_timer_on - start a timer on a particular CPU
- * @timer: the timer to be added
+ * add_ktimeout_on - start a ktimeout on a particular CPU
+ * @ktimeout: the ktimeout to be added
* @cpu: the CPU to start it on
*
* This is not very scalable on SMP. Double adds are not possible.
*/
-void add_timer_on(struct timer_list *timer, int cpu)
+void add_ktimeout_on(struct ktimeout *ktimeout, int cpu)
{
tvec_base_t *base = &per_cpu(tvec_bases, cpu);
unsigned long flags;
- BUG_ON(timer_pending(timer) || !timer->function);
+ BUG_ON(ktimeout_pending(ktimeout) || !ktimeout->function);
spin_lock_irqsave(&base->t_base.lock, flags);
- timer->base = &base->t_base;
- internal_add_timer(base, timer);
+ ktimeout->base = &base->t_base;
+ internal_add_ktimeout(base, ktimeout);
spin_unlock_irqrestore(&base->t_base.lock, flags);
}
/***
- * mod_timer - modify a timer's timeout
- * @timer: the timer to be modified
+ * mod_ktimeout - modify a ktimeout's timeout
+ * @ktimeout: the ktimeout to be modified
*
- * mod_timer is a more efficient way to update the expire field of an
- * active timer (if the timer is inactive it will be activated)
+ * mod_ktimeout is a more efficient way to update the expire field of an
+ * active ktimeout (if the ktimeout is inactive it will be activated)
*
- * mod_timer(timer, expires) is equivalent to:
+ * mod_ktimeout(ktimeout, expires) is equivalent to:
*
- * del_timer(timer); timer->expires = expires; add_timer(timer);
+ * del_ktimeout(ktimeout); ktimeout->expires = expires; add_ktimeout(ktimeout);
*
* Note that if there are multiple unserialized concurrent users of the
- * same timer, then mod_timer() is the only safe way to modify the timeout,
- * since add_timer() cannot modify an already running timer.
+ * same ktimeout, then mod_ktimeout() is the only safe way to modify the timeout,
+ * since add_ktimeout() cannot modify an already running ktimeout.
*
- * The function returns whether it has modified a pending timer or not.
- * (ie. mod_timer() of an inactive timer returns 0, mod_timer() of an
- * active timer returns 1.)
+ * The function returns whether it has modified a pending ktimeout or not.
+ * (ie. mod_ktimeout() of an inactive ktimeout returns 0, mod_ktimeout() of an
+ * active ktimeout returns 1.)
*/
-int mod_timer(struct timer_list *timer, unsigned long expires)
+int mod_ktimeout(struct ktimeout *ktimeout, unsigned long expires)
{
- BUG_ON(!timer->function);
+ BUG_ON(!ktimeout->function);
/*
* This is a common optimization triggered by the
- * networking code - if the timer is re-modified
+ * networking code - if the ktimeout is re-modified
* to be the same thing then just return:
*/
- if (timer->expires == expires && timer_pending(timer))
+ if (ktimeout->expires == expires && ktimeout_pending(ktimeout))
return 1;
- return __mod_timer(timer, expires);
+ return __mod_ktimeout(ktimeout, expires);
}
-EXPORT_SYMBOL(mod_timer);
+EXPORT_SYMBOL(mod_ktimeout);
/***
- * del_timer - deactive a timer.
- * @timer: the timer to be deactivated
+ * del_ktimeout - deactive a ktimeout.
+ * @ktimeout: the ktimeout to be deactivated
*
- * del_timer() deactivates a timer - this works on both active and inactive
- * timers.
+ * del_ktimeout() deactivates a ktimeout - this works on both active and inactive
+ * ktimeouts.
*
- * The function returns whether it has deactivated a pending timer or not.
- * (ie. del_timer() of an inactive timer returns 0, del_timer() of an
- * active timer returns 1.)
+ * The function returns whether it has deactivated a pending ktimeout or not.
+ * (ie. del_ktimeout() of an inactive ktimeout returns 0, del_ktimeout() of an
+ * active ktimeout returns 1.)
*/
-int del_timer(struct timer_list *timer)
+int del_ktimeout(struct ktimeout *ktimeout)
{
- timer_base_t *base;
+ ktimeout_base_t *base;
unsigned long flags;
int ret = 0;
- if (timer_pending(timer)) {
- base = lock_timer_base(timer, &flags);
- if (timer_pending(timer)) {
- detach_timer(timer, 1);
+ if (ktimeout_pending(ktimeout)) {
+ base = lock_ktimeout_base(ktimeout, &flags);
+ if (ktimeout_pending(ktimeout)) {
+ detach_ktimeout(ktimeout, 1);
ret = 1;
}
spin_unlock_irqrestore(&base->lock, flags);
@@ -315,29 +315,29 @@ int del_timer(struct timer_list *timer)
return ret;
}
-EXPORT_SYMBOL(del_timer);
+EXPORT_SYMBOL(del_ktimeout);
#ifdef CONFIG_SMP
/*
- * This function tries to deactivate a timer. Upon successful (ret >= 0)
- * exit the timer is not queued and the handler is not running on any CPU.
+ * This function tries to deactivate a ktimeout. Upon successful (ret >= 0)
+ * exit the ktimeout is not queued and the handler is not running on any CPU.
*
* It must not be called from interrupt contexts.
*/
-int try_to_del_timer_sync(struct timer_list *timer)
+int try_to_del_ktimeout_sync(struct ktimeout *ktimeout)
{
- timer_base_t *base;
+ ktimeout_base_t *base;
unsigned long flags;
int ret = -1;
- base = lock_timer_base(timer, &flags);
+ base = lock_ktimeout_base(ktimeout, &flags);
- if (base->running_timer == timer)
+ if (base->running_ktimeout == ktimeout)
goto out;
ret = 0;
- if (timer_pending(timer)) {
- detach_timer(timer, 1);
+ if (ktimeout_pending(ktimeout)) {
+ detach_ktimeout(ktimeout, 1);
ret = 1;
}
out:
@@ -347,52 +347,52 @@ out:
}
/***
- * del_timer_sync - deactivate a timer and wait for the handler to finish.
- * @timer: the timer to be deactivated
+ * del_ktimeout_sync - deactivate a ktimeout and wait for the handler to finish.
+ * @ktimeout: the ktimeout to be deactivated
*
- * This function only differs from del_timer() on SMP: besides deactivating
- * the timer it also makes sure the handler has finished executing on other
+ * This function only differs from del_ktimeout() on SMP: besides deactivating
+ * the ktimeout it also makes sure the handler has finished executing on other
* CPUs.
*
- * Synchronization rules: callers must prevent restarting of the timer,
+ * Synchronization rules: callers must prevent restarting of the ktimeout,
* otherwise this function is meaningless. It must not be called from
* interrupt contexts. The caller must not hold locks which would prevent
- * completion of the timer's handler. The timer's handler must not call
- * add_timer_on(). Upon exit the timer is not queued and the handler is
+ * completion of the ktimeout's handler. The ktimeout's handler must not call
+ * add_ktimeout_on(). Upon exit the ktimeout is not queued and the handler is
* not running on any CPU.
*
- * The function returns whether it has deactivated a pending timer or not.
+ * The function returns whether it has deactivated a pending ktimeout or not.
*/
-int del_timer_sync(struct timer_list *timer)
+int del_ktimeout_sync(struct ktimeout *ktimeout)
{
for (;;) {
- int ret = try_to_del_timer_sync(timer);
+ int ret = try_to_del_ktimeout_sync(ktimeout);
if (ret >= 0)
return ret;
}
}
-EXPORT_SYMBOL(del_timer_sync);
+EXPORT_SYMBOL(del_ktimeout_sync);
#endif
static int cascade(tvec_base_t *base, tvec_t *tv, int index)
{
- /* cascade all the timers from tv up one level */
+ /* cascade all the ktimeouts from tv up one level */
struct list_head *head, *curr;
head = tv->vec + index;
curr = head->next;
/*
- * We are removing _all_ timers from the list, so we don't have to
+ * We are removing _all_ ktimeouts from the list, so we don't have to
* detach them individually, just clear the list afterwards.
*/
while (curr != head) {
- struct timer_list *tmp;
+ struct ktimeout *tmp;
- tmp = list_entry(curr, struct timer_list, entry);
+ tmp = list_entry(curr, struct ktimeout, entry);
BUG_ON(tmp->base != &base->t_base);
curr = curr->next;
- internal_add_timer(base, tmp);
+ internal_add_ktimeout(base, tmp);
}
INIT_LIST_HEAD(head);
@@ -400,44 +400,44 @@ static int cascade(tvec_base_t *base, tv
}
/***
- * __run_timers - run all expired timers (if any) on this CPU.
- * @base: the timer vector to be processed.
+ * __run_ktimeouts - run all expired ktimeouts (if any) on this CPU.
+ * @base: the ktimeout vector to be processed.
*
- * This function cascades all vectors and executes all expired timer
+ * This function cascades all vectors and executes all expired ktimeout
* vectors.
*/
-#define INDEX(N) (base->timer_jiffies >> (TVR_BITS + N * TVN_BITS)) & TVN_MASK
+#define INDEX(N) (base->ktimeout_jiffies >> (TVR_BITS + N * TVN_BITS)) & TVN_MASK
-static inline void __run_timers(tvec_base_t *base)
+static inline void __run_ktimeouts(tvec_base_t *base)
{
- struct timer_list *timer;
+ struct ktimeout *ktimeout;
spin_lock_irq(&base->t_base.lock);
- while (time_after_eq(jiffies, base->timer_jiffies)) {
+ while (time_after_eq(jiffies, base->ktimeout_jiffies)) {
struct list_head work_list = LIST_HEAD_INIT(work_list);
struct list_head *head = &work_list;
- int index = base->timer_jiffies & TVR_MASK;
+ int index = base->ktimeout_jiffies & TVR_MASK;
/*
- * Cascade timers:
+ * Cascade ktimeouts:
*/
if (!index &&
(!cascade(base, &base->tv2, INDEX(0))) &&
(!cascade(base, &base->tv3, INDEX(1))) &&
!cascade(base, &base->tv4, INDEX(2)))
cascade(base, &base->tv5, INDEX(3));
- ++base->timer_jiffies;
+ ++base->ktimeout_jiffies;
list_splice_init(base->tv1.vec + index, &work_list);
while (!list_empty(head)) {
void (*fn)(unsigned long);
unsigned long data;
- timer = list_entry(head->next,struct timer_list,entry);
- fn = timer->function;
- data = timer->data;
+ ktimeout = list_entry(head->next,struct ktimeout,entry);
+ fn = ktimeout->function;
+ data = ktimeout->data;
- set_running_timer(base, timer);
- detach_timer(timer, 1);
+ set_running_ktimeout(base, ktimeout);
+ detach_ktimeout(ktimeout, 1);
spin_unlock_irq(&base->t_base.lock);
{
int preempt_count = preempt_count();
@@ -454,41 +454,41 @@ static inline void __run_timers(tvec_bas
spin_lock_irq(&base->t_base.lock);
}
}
- set_running_timer(base, NULL);
+ set_running_ktimeout(base, NULL);
spin_unlock_irq(&base->t_base.lock);
}
#ifdef CONFIG_NO_IDLE_HZ
/*
- * Find out when the next timer event is due to happen. This
+ * Find out when the next ktimeout event is due to happen. This
* is used on S/390 to stop all activity when a cpus is idle.
* This functions needs to be called disabled.
*/
-unsigned long next_timer_interrupt(void)
+unsigned long next_ktimeout_interrupt(void)
{
tvec_base_t *base;
struct list_head *list;
- struct timer_list *nte;
+ struct ktimeout *nte;
unsigned long expires;
tvec_t *varray[4];
int i, j;
base = &__get_cpu_var(tvec_bases);
spin_lock(&base->t_base.lock);
- expires = base->timer_jiffies + (LONG_MAX >> 1);
+ expires = base->ktimeout_jiffies + (LONG_MAX >> 1);
list = 0;
- /* Look for timer events in tv1. */
- j = base->timer_jiffies & TVR_MASK;
+ /* Look for ktimeout events in tv1. */
+ j = base->ktimeout_jiffies & TVR_MASK;
do {
list_for_each_entry(nte, base->tv1.vec + j, entry) {
expires = nte->expires;
- if (j < (base->timer_jiffies & TVR_MASK))
+ if (j < (base->ktimeout_jiffies & TVR_MASK))
list = base->tv2.vec + (INDEX(0));
goto found;
}
j = (j + 1) & TVR_MASK;
- } while (j != (base->timer_jiffies & TVR_MASK));
+ } while (j != (base->ktimeout_jiffies & TVR_MASK));
/* Check tv2-tv5. */
varray[0] = &base->tv2;
@@ -515,7 +515,7 @@ found:
/*
* The search wrapped. We need to look at the next list
* from next tv element that would cascade into tv element
- * where we found the timer element.
+ * where we found the ktimeout element.
*/
list_for_each_entry(nte, list, entry) {
if (time_before(nte->expires, expires))
@@ -528,21 +528,21 @@ found:
#endif
/*
- * This function runs timers and the timer-tq in bottom half context.
+ * This function runs ktimeouts and the ktimeout-tq in bottom half context.
*/
-static void run_timer_softirq(struct softirq_action *h)
+static void run_ktimeout_softirq(struct softirq_action *h)
{
tvec_base_t *base = &__get_cpu_var(tvec_bases);
ktimer_run_queues();
- if (time_after_eq(jiffies, base->timer_jiffies))
- __run_timers(base);
+ if (time_after_eq(jiffies, base->ktimeout_jiffies))
+ __run_ktimeouts(base);
}
/*
- * Called by the local, per-CPU timer interrupt on SMP.
+ * Called by the local, per-CPU ktimeout interrupt on SMP.
*/
-void run_local_timers(void)
+void run_local_ktimeouts(void)
{
raise_softirq(TIMER_SOFTIRQ);
}
@@ -567,7 +567,7 @@ static void process_timeout(unsigned lon
*
* %TASK_INTERRUPTIBLE - the routine may return early if a signal is
* delivered to the current task. In this case the remaining time
- * in jiffies will be returned, or 0 if the timer expired in time
+ * in jiffies will be returned, or 0 if the ktimeout expired in time
*
* The current task state is guaranteed to be TASK_RUNNING when this
* routine returns.
@@ -580,7 +580,7 @@ static void process_timeout(unsigned lon
*/
fastcall signed long __sched schedule_timeout(signed long timeout)
{
- struct timer_list timer;
+ struct ktimeout ktimeout;
unsigned long expire;
switch (timeout)
@@ -615,10 +615,10 @@ fastcall signed long __sched schedule_ti
expire = timeout + jiffies;
- setup_timer(&timer, process_timeout, (unsigned long)current);
- __mod_timer(&timer, expire);
+ setup_ktimeout(&ktimeout, process_timeout, (unsigned long)current);
+ __mod_ktimeout(&ktimeout, expire);
schedule();
- del_singleshot_timer_sync(&timer);
+ del_singleshot_ktimeout_sync(&ktimeout);
timeout = expire - jiffies;
@@ -674,7 +674,7 @@ unsigned long msleep_interruptible(unsig
EXPORT_SYMBOL(msleep_interruptible);
-static void __devinit init_timers_cpu(int cpu)
+static void __devinit init_ktimeouts_cpu(int cpu)
{
int j;
tvec_base_t *base;
@@ -690,23 +690,23 @@ static void __devinit init_timers_cpu(in
for (j = 0; j < TVR_SIZE; j++)
INIT_LIST_HEAD(base->tv1.vec + j);
- base->timer_jiffies = jiffies;
+ base->ktimeout_jiffies = jiffies;
}
#ifdef CONFIG_HOTPLUG_CPU
-static void migrate_timer_list(tvec_base_t *new_base, struct list_head *head)
+static void migrate_ktimeout(tvec_base_t *new_base, struct list_head *head)
{
- struct timer_list *timer;
+ struct ktimeout *ktimeout;
while (!list_empty(head)) {
- timer = list_entry(head->next, struct timer_list, entry);
- detach_timer(timer, 0);
- timer->base = &new_base->t_base;
- internal_add_timer(new_base, timer);
+ ktimeout = list_entry(head->next, struct ktimeout, entry);
+ detach_ktimeout(ktimeout, 0);
+ ktimeout->base = &new_base->t_base;
+ internal_add_ktimeout(new_base, ktimeout);
}
}
-static void __devinit migrate_timers(int cpu)
+static void __devinit migrate_ktimeouts(int cpu)
{
tvec_base_t *old_base;
tvec_base_t *new_base;
@@ -720,15 +720,15 @@ static void __devinit migrate_timers(int
spin_lock(&new_base->t_base.lock);
spin_lock(&old_base->t_base.lock);
- if (old_base->t_base.running_timer)
+ if (old_base->t_base.running_ktimeout)
BUG();
for (i = 0; i < TVR_SIZE; i++)
- migrate_timer_list(new_base, old_base->tv1.vec + i);
+ migrate_ktimeout(new_base, old_base->tv1.vec + i);
for (i = 0; i < TVN_SIZE; i++) {
- migrate_timer_list(new_base, old_base->tv2.vec + i);
- migrate_timer_list(new_base, old_base->tv3.vec + i);
- migrate_timer_list(new_base, old_base->tv4.vec + i);
- migrate_timer_list(new_base, old_base->tv5.vec + i);
+ migrate_ktimeout(new_base, old_base->tv2.vec + i);
+ migrate_ktimeout(new_base, old_base->tv3.vec + i);
+ migrate_ktimeout(new_base, old_base->tv4.vec + i);
+ migrate_ktimeout(new_base, old_base->tv5.vec + i);
}
spin_unlock(&old_base->t_base.lock);
@@ -738,17 +738,17 @@ static void __devinit migrate_timers(int
}
#endif /* CONFIG_HOTPLUG_CPU */
-static int __devinit timer_cpu_notify(struct notifier_block *self,
+static int __devinit ktimeout_cpu_notify(struct notifier_block *self,
unsigned long action, void *hcpu)
{
long cpu = (long)hcpu;
switch(action) {
case CPU_UP_PREPARE:
- init_timers_cpu(cpu);
+ init_ktimeouts_cpu(cpu);
break;
#ifdef CONFIG_HOTPLUG_CPU
case CPU_DEAD:
- migrate_timers(cpu);
+ migrate_ktimeouts(cpu);
break;
#endif
default:
@@ -757,15 +757,15 @@ static int __devinit timer_cpu_notify(st
return NOTIFY_OK;
}
-static struct notifier_block __devinitdata timers_nb = {
- .notifier_call = timer_cpu_notify,
+static struct notifier_block __devinitdata ktimeouts_nb = {
+ .notifier_call = ktimeout_cpu_notify,
};
-void __init init_timers(void)
+void __init init_ktimeouts(void)
{
- timer_cpu_notify(&timers_nb, (unsigned long)CPU_UP_PREPARE,
+ ktimeout_cpu_notify(&ktimeouts_nb, (unsigned long)CPU_UP_PREPARE,
(void *)(long)smp_processor_id());
- register_cpu_notifier(&timers_nb);
- open_softirq(TIMER_SOFTIRQ, run_timer_softirq, NULL);
+ register_cpu_notifier(&ktimeouts_nb);
+ open_softirq(TIMER_SOFTIRQ, run_ktimeout_softirq, NULL);
}
--
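The conversion above keeps the classic timer-wheel layout (tv1..tv5, TVR_BITS/TVN_BITS) untouched and only renames it. For readers following the index arithmetic in internal_add_ktimeout() and the INDEX() macro, here is a minimal userspace sketch of how the expiry distance selects a wheel level. This is not kernel code: the function name wheel_level() is invented for illustration, and the constants mirror the non-TVN_BITS-shrunk (non-small-memory) configuration of kernel/timer.c.

```c
#include <assert.h>

/* Wheel geometry as in kernel/timer.c (large-memory variant) */
#define TVN_BITS 6
#define TVR_BITS 8
#define TVN_SIZE (1 << TVN_BITS)
#define TVR_SIZE (1 << TVR_BITS)

/*
 * Which wheel level (0 = tv1 ... 4 = tv5) a timeout lands in, given
 * its expiry and the base's current ktimeout_jiffies.  The checks are
 * in the same order as internal_add_ktimeout(): the signed test comes
 * after tv4 so that an already-expired timeout (idx wrapped negative)
 * is filed in tv1 at the current tick rather than in tv5.
 */
static int wheel_level(unsigned long expires, unsigned long base_jiffies)
{
	unsigned long idx = expires - base_jiffies;

	if (idx < TVR_SIZE)
		return 0;				/* tv1: next 256 ticks */
	if (idx < 1UL << (TVR_BITS + TVN_BITS))
		return 1;				/* tv2 */
	if (idx < 1UL << (TVR_BITS + 2 * TVN_BITS))
		return 2;				/* tv3 */
	if (idx < 1UL << (TVR_BITS + 3 * TVN_BITS))
		return 3;				/* tv4 */
	if ((long) idx < 0)
		return 0;				/* already expired: tv1, current tick */
	return 4;					/* tv5, the overflow level */
}
```

Within a level, the bucket index is then `(expires >> shift) & mask`, exactly as the hunks above compute for tv5 with `(expires >> (TVR_BITS + 3 * TVN_BITS)) & TVN_MASK`.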
* [patch 30/43] ktimeout documentation
From: Thomas Gleixner @ 2005-12-01 0:04 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment (ktimeout-doc.patch)
- document ktimeouts and fix up existing documentation
Signed-off-by: Ingo Molnar <mingo@elte.hu>
include/linux/ktimeout.h | 21 ++++---
kernel/ktimeout.c | 124 ++++++++++++++++++++++++-----------------------
2 files changed, 77 insertions(+), 68 deletions(-)
Index: linux/include/linux/ktimeout.h
===================================================================
--- linux.orig/include/linux/ktimeout.h
+++ linux/include/linux/ktimeout.h
@@ -1,3 +1,6 @@
+/*
+ * Support for kernel-internal timeout events:
+ */
#ifndef _LINUX_KTIMEOUT_H
#define _LINUX_KTIMEOUT_H
@@ -43,14 +46,14 @@ static inline void setup_ktimeout(struct
}
/***
- * ktimeout_pending - is a ktimeout pending?
- * @ktimeout: the ktimeout in question
+ * ktimeout_pending - is a timeout pending?
+ * @ktimeout: the timeout in question
*
- * ktimeout_pending will tell whether a given ktimeout is currently pending,
+ * ktimeout_pending will tell whether a given timeout is currently pending,
* or not. Callers must ensure serialization wrt. other operations done
- * to this ktimeout, eg. interrupt contexts, or other CPUs on SMP.
+ * to this timeout, eg. interrupt contexts, or other CPUs on SMP.
*
- * return value: 1 if the ktimeout is pending, 0 if not.
+ * return value: 1 if the timeout is pending, 0 if not.
*/
static inline int ktimeout_pending(const struct ktimeout * ktimeout)
{
@@ -66,17 +69,17 @@ extern unsigned long next_ktimeout_inter
/***
* add_ktimeout - start a ktimeout
- * @ktimeout: the ktimeout to be added
+ * @ktimeout: the timeout to be added
*
* The kernel will do a ->function(->data) callback from the
- * ktimeout interrupt at the ->expired point in the future. The
+ * timeout interrupt at the ->expired point in the future. The
* current time is 'jiffies'.
*
- * The ktimeout's ->expired, ->function (and if the handler uses it, ->data)
+ * The timeout's ->expired, ->function (and if the handler uses it, ->data)
* fields must be set prior calling this function.
*
* Timers with an ->expired field in the past will be executed in the next
- * ktimeout tick.
+ * timeout tick.
*/
static inline void add_ktimeout(struct ktimeout *ktimeout)
{
Index: linux/kernel/ktimeout.c
===================================================================
--- linux.orig/kernel/ktimeout.c
+++ linux/kernel/ktimeout.c
@@ -1,7 +1,12 @@
/*
* linux/kernel/ktimeout.c
*
- * Kernel internal timeouts API
+ * Kernel internal timeouts
+ *
+ * Timeouts are time events set for the future in where the performance and
+ * scalability of setting and removing a future event is the most important
+ * design factor. The actual events are more of an exception (but are still
+ * handled fast). There is no strict precision or latency guarantee.
*
* Copyright (C) 1991, 1992 Linus Torvalds
*
@@ -97,8 +102,8 @@ static void internal_add_ktimeout(tvec_b
vec = base->tv4.vec + i;
} else if ((signed long) idx < 0) {
/*
- * Can happen if you add a ktimeout with expires == jiffies,
- * or you set a ktimeout to go off in the past
+ * Can happen if you add a timeout with expires == jiffies,
+ * or you set a timeout to go off in the past
*/
vec = base->tv1.vec + (base->ktimeout_jiffies & TVR_MASK);
} else {
@@ -114,14 +119,15 @@ static void internal_add_ktimeout(tvec_b
vec = base->tv5.vec + i;
}
/*
- * Timers are FIFO:
+ * Timeouts are FIFO:
*/
list_add_tail(&ktimeout->entry, vec);
}
typedef struct ktimeout_base_s ktimeout_base_t;
+
/*
- * Used by TIMER_INITIALIZER, we can't use per_cpu(tvec_bases)
+ * Used by KTIMEOUT_INITIALIZER, we can't use per_cpu(tvec_bases)
* at compile time, and we need ktimeout->base to lock the ktimeout.
*/
ktimeout_base_t __init_ktimeout_base
@@ -129,11 +135,11 @@ ktimeout_base_t __init_ktimeout_base
EXPORT_SYMBOL(__init_ktimeout_base);
/***
- * init_ktimeout - initialize a ktimeout.
- * @ktimeout: the ktimeout to be initialized
+ * init_ktimeout - initialize a timeout.
+ * @ktimeout: the timeout to be initialized
*
- * init_ktimeout() must be done to a ktimeout prior calling *any* of the
- * other ktimeout functions.
+ * init_ktimeout() must be done to a timeout prior calling *any* of the
+ * other timeout functions.
*/
void fastcall init_ktimeout(struct ktimeout *ktimeout)
{
@@ -155,14 +161,14 @@ static inline void detach_ktimeout(struc
/*
* We are using hashed locking: holding per_cpu(tvec_bases).t_base.lock
- * means that all ktimeouts which are tied to this base via ktimeout->base are
+ * means that all timeouts which are tied to this base via ktimeout->base are
* locked, and the base itself is locked too.
*
- * So __run_ktimeouts/migrate_ktimeouts can safely modify all ktimeouts which could
- * be found on ->tvX lists.
+ * So __run_ktimeouts/migrate_ktimeouts can safely modify all timeouts which
+ * could be found on ->tvX lists.
*
- * When the ktimeout's base is locked, and the ktimeout removed from list, it is
- * possible to set ktimeout->base = NULL and drop the lock: the ktimeout remains
+ * When the timeout's base is locked, and the timeout removed from list, it is
+ * possible to set ktimeout->base = NULL and drop the lock: the timeout remains
* locked.
*/
static ktimeout_base_t *lock_ktimeout_base(struct ktimeout *ktimeout,
@@ -176,7 +182,7 @@ static ktimeout_base_t *lock_ktimeout_ba
spin_lock_irqsave(&base->lock, *flags);
if (likely(base == ktimeout->base))
return base;
- /* The ktimeout has migrated to another CPU */
+ /* The timeout has migrated to another CPU */
spin_unlock_irqrestore(&base->lock, *flags);
}
cpu_relax();
@@ -203,14 +209,14 @@ int __mod_ktimeout(struct ktimeout *ktim
if (base != &new_base->t_base) {
/*
- * We are trying to schedule the ktimeout on the local CPU.
- * However we can't change ktimeout's base while it is running,
- * otherwise del_ktimeout_sync() can't detect that the ktimeout's
+ * We are trying to schedule the timeout on the local CPU.
+ * However we can't change timeout's base while it is running,
+ * otherwise del_ktimeout_sync() can't detect that the timeout's
* handler yet has not finished. This also guarantees that
- * the ktimeout is serialized wrt itself.
+ * the timeout is serialized wrt itself.
*/
if (unlikely(base->running_ktimeout == ktimeout)) {
- /* The ktimeout remains on a former base */
+ /* The timeout remains on a former base */
new_base = container_of(base, tvec_base_t, t_base);
} else {
/* See the comment in lock_ktimeout_base() */
@@ -231,8 +237,8 @@ int __mod_ktimeout(struct ktimeout *ktim
EXPORT_SYMBOL(__mod_ktimeout);
/***
- * add_ktimeout_on - start a ktimeout on a particular CPU
- * @ktimeout: the ktimeout to be added
+ * add_ktimeout_on - start a timeout on a particular CPU
+ * @ktimeout: the timeout to be added
* @cpu: the CPU to start it on
*
* This is not very scalable on SMP. Double adds are not possible.
@@ -251,23 +257,23 @@ void add_ktimeout_on(struct ktimeout *kt
/***
- * mod_ktimeout - modify a ktimeout's timeout
- * @ktimeout: the ktimeout to be modified
+ * mod_ktimeout - modify a timeout's interval
+ * @ktimeout: the timeout to be modified
*
* mod_ktimeout is a more efficient way to update the expire field of an
- * active ktimeout (if the ktimeout is inactive it will be activated)
+ * active timeout (if the timeout is inactive it will be activated)
*
* mod_ktimeout(ktimeout, expires) is equivalent to:
*
- * del_ktimeout(ktimeout); ktimeout->expires = expires; add_ktimeout(ktimeout);
+ * del_ktimeout(ktimeout); ktimeout->expires = expires; add_ktimeout(ktimeout);
*
- * Note that if there are multiple unserialized concurrent users of the
- * same ktimeout, then mod_ktimeout() is the only safe way to modify the timeout,
+ * Note that if there are multiple unserialized concurrent users of the same
+ * timeout, then mod_ktimeout() is the only safe way to modify the interval,
* since add_ktimeout() cannot modify an already running ktimeout.
*
- * The function returns whether it has modified a pending ktimeout or not.
- * (ie. mod_ktimeout() of an inactive ktimeout returns 0, mod_ktimeout() of an
- * active ktimeout returns 1.)
+ * The function returns whether it has modified a pending timeout or not.
+ * (ie. mod_ktimeout() of an inactive timeout returns 0, mod_ktimeout() of an
+ * active timeout returns 1.)
*/
int mod_ktimeout(struct ktimeout *ktimeout, unsigned long expires)
{
@@ -275,7 +281,7 @@ int mod_ktimeout(struct ktimeout *ktimeo
/*
* This is a common optimization triggered by the
- * networking code - if the ktimeout is re-modified
+ * networking code - if the timeout is re-modified
* to be the same thing then just return:
*/
if (ktimeout->expires == expires && ktimeout_pending(ktimeout))
@@ -287,15 +293,15 @@ int mod_ktimeout(struct ktimeout *ktimeo
EXPORT_SYMBOL(mod_ktimeout);
/***
- * del_ktimeout - deactive a ktimeout.
- * @ktimeout: the ktimeout to be deactivated
+ * del_ktimeout - deactive a timeout.
+ * @ktimeout: the timeout to be deactivated
*
- * del_ktimeout() deactivates a ktimeout - this works on both active and inactive
+ * del_ktimeout() deactivates a timeout - this works on both active and inactive
* ktimeouts.
*
- * The function returns whether it has deactivated a pending ktimeout or not.
- * (ie. del_ktimeout() of an inactive ktimeout returns 0, del_ktimeout() of an
- * active ktimeout returns 1.)
+ * The function returns whether it has deactivated a pending timeout or not.
+ * (ie. del_ktimeout() of an inactive timeout returns 0, del_ktimeout() of an
+ * active timeout returns 1.)
*/
int del_ktimeout(struct ktimeout *ktimeout)
{
@@ -319,8 +325,8 @@ EXPORT_SYMBOL(del_ktimeout);
#ifdef CONFIG_SMP
/*
- * This function tries to deactivate a ktimeout. Upon successful (ret >= 0)
- * exit the ktimeout is not queued and the handler is not running on any CPU.
+ * This function tries to deactivate a timeout. Upon successful (ret >= 0)
+ * exit the timeout is not queued and the handler is not running on any CPU.
*
* It must not be called from interrupt contexts.
*/
@@ -347,21 +353,21 @@ out:
}
/***
- * del_ktimeout_sync - deactivate a ktimeout and wait for the handler to finish.
- * @ktimeout: the ktimeout to be deactivated
+ * del_ktimeout_sync - deactivate a timeout and wait for the handler to finish.
+ * @ktimeout: the timeout to be deactivated
*
* This function only differs from del_ktimeout() on SMP: besides deactivating
- * the ktimeout it also makes sure the handler has finished executing on other
+ * the timeout it also makes sure the handler has finished executing on other
* CPUs.
*
- * Synchronization rules: callers must prevent restarting of the ktimeout,
+ * Synchronization rules: callers must prevent restarting of the timeout,
* otherwise this function is meaningless. It must not be called from
* interrupt contexts. The caller must not hold locks which would prevent
- * completion of the ktimeout's handler. The ktimeout's handler must not call
- * add_ktimeout_on(). Upon exit the ktimeout is not queued and the handler is
+ * completion of the timeout's handler. The timeout's handler must not call
+ * add_ktimeout_on(). Upon exit the timeout is not queued and the handler is
* not running on any CPU.
*
- * The function returns whether it has deactivated a pending ktimeout or not.
+ * The function returns whether it has deactivated a pending timeout or not.
*/
int del_ktimeout_sync(struct ktimeout *ktimeout)
{
@@ -377,13 +383,13 @@ EXPORT_SYMBOL(del_ktimeout_sync);
static int cascade(tvec_base_t *base, tvec_t *tv, int index)
{
- /* cascade all the ktimeouts from tv up one level */
+ /* cascade all the timeouts from tv up one level */
struct list_head *head, *curr;
head = tv->vec + index;
curr = head->next;
/*
- * We are removing _all_ ktimeouts from the list, so we don't have to
+ * We are removing _all_ timeouts from the list, so we don't have to
* detach them individually, just clear the list afterwards.
*/
while (curr != head) {
@@ -400,10 +406,10 @@ static int cascade(tvec_base_t *base, tv
}
/***
- * __run_ktimeouts - run all expired ktimeouts (if any) on this CPU.
- * @base: the ktimeout vector to be processed.
+ * __run_ktimeouts - run all expired timeouts (if any) on this CPU.
+ * @base: the timeout vector to be processed.
*
- * This function cascades all vectors and executes all expired ktimeout
+ * This function cascades all vectors and executes all expired timeout
* vectors.
*/
#define INDEX(N) (base->ktimeout_jiffies >> (TVR_BITS + N * TVN_BITS)) & TVN_MASK
@@ -419,7 +425,7 @@ static inline void __run_ktimeouts(tvec_
int index = base->ktimeout_jiffies & TVR_MASK;
/*
- * Cascade ktimeouts:
+ * Cascade timeouts:
*/
if (!index &&
(!cascade(base, &base->tv2, INDEX(0))) &&
@@ -460,7 +466,7 @@ static inline void __run_ktimeouts(tvec_
#ifdef CONFIG_NO_IDLE_HZ
/*
- * Find out when the next ktimeout event is due to happen. This
+ * Find out when the next timeout event is due to happen. This
* is used on S/390 to stop all activity when a CPU is idle.
* This function needs to be called with interrupts disabled.
*/
@@ -478,7 +484,7 @@ unsigned long next_ktimeout_interrupt(vo
expires = base->ktimeout_jiffies + (LONG_MAX >> 1);
list = 0;
- /* Look for ktimeout events in tv1. */
+ /* Look for timeout events in tv1. */
j = base->ktimeout_jiffies & TVR_MASK;
do {
list_for_each_entry(nte, base->tv1.vec + j, entry) {
@@ -515,7 +521,7 @@ found:
/*
* The search wrapped. We need to look at the next list
* from next tv element that would cascade into tv element
- * where we found the ktimeout element.
+ * where we found the timeout element.
*/
list_for_each_entry(nte, list, entry) {
if (time_before(nte->expires, expires))
@@ -528,7 +534,7 @@ found:
#endif
/*
- * This function runs ktimeouts and the ktimeout-tq in bottom half context.
+ * This function runs ktimers and timeouts in bottom half context.
*/
static void run_ktimeout_softirq(struct softirq_action *h)
{
@@ -540,7 +546,7 @@ static void run_ktimeout_softirq(struct
}
/*
- * Called by the local, per-CPU ktimeout interrupt on SMP.
+ * Called by the local, per-CPU timeout interrupt on SMP.
*/
void run_local_ktimeouts(void)
{
@@ -567,7 +573,7 @@ static void process_timeout(unsigned lon
*
* %TASK_INTERRUPTIBLE - the routine may return early if a signal is
* delivered to the current task. In this case the remaining time
- * in jiffies will be returned, or 0 if the ktimeout expired in time
+ * in jiffies will be returned, or 0 if the timeout expired in time
*
* The current task state is guaranteed to be TASK_RUNNING when this
* routine returns.
--
^ permalink raw reply [flat|nested] 47+ messages in thread
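[Editor's note: the cascade logic in __run_ktimeouts() above hashes each timeout into a wheel level by shifting the jiffies counter. A minimal userland sketch of that slot arithmetic follows; the geometry constants (TVR_BITS = 8, TVN_BITS = 6) are the conventional kernel values and are an assumption here, since the hunks themselves do not show them.]

```c
#include <assert.h>

/* Assumed timer-wheel geometry (not shown in the patch itself). */
#define TVR_BITS 8
#define TVN_BITS 6
#define TVR_MASK ((1UL << TVR_BITS) - 1)
#define TVN_MASK ((1UL << TVN_BITS) - 1)

/* Root-level slot: the low TVR_BITS of the jiffies counter,
 * matching "int index = base->ktimeout_jiffies & TVR_MASK". */
static unsigned long tv1_index(unsigned long jiffies)
{
	return jiffies & TVR_MASK;
}

/* Slot in cascade level n (n == 0 is tv2, n == 1 is tv3, ...), matching
 * INDEX(N) == (jiffies >> (TVR_BITS + N * TVN_BITS)) & TVN_MASK. */
static unsigned long cascade_index(unsigned long jiffies, int n)
{
	return (jiffies >> (TVR_BITS + n * TVN_BITS)) & TVN_MASK;
}
```

A level is cascaded only when all lower indices are zero, i.e. when the counter has wrapped through every lower-level slot.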
* [patch 31/43] rename init_ktimeout() to ktimeout_init()
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
` (29 preceding siblings ...)
2005-12-01 0:04 ` [patch 30/43] ktimeout documentation Thomas Gleixner
@ 2005-12-01 0:04 ` Thomas Gleixner
2005-12-01 0:04 ` [patch 32/43] rename setup_ktimeout() to ktimeout_setup() Thomas Gleixner
` (11 subsequent siblings)
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:04 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment (ktimeout-rename-ktimeout_init.patch)
- rename init_ktimeout() to ktimeout_init()
Signed-off-by: Ingo Molnar <mingo@elte.hu>
include/linux/ktimeout.h | 4 ++--
include/linux/timer.h | 2 +-
kernel/ktimeout.c | 8 ++++----
3 files changed, 7 insertions(+), 7 deletions(-)
Index: linux/include/linux/ktimeout.h
===================================================================
--- linux.orig/include/linux/ktimeout.h
+++ linux/include/linux/ktimeout.h
@@ -34,7 +34,7 @@ extern struct ktimeout_base_s __init_kti
struct ktimeout _name = \
KTIMEOUT_INITIALIZER(_function, _expires, _data)
-void fastcall init_ktimeout(struct ktimeout * ktimeout);
+void fastcall ktimeout_init(struct ktimeout * ktimeout);
static inline void setup_ktimeout(struct ktimeout * ktimeout,
void (*function)(unsigned long),
@@ -42,7 +42,7 @@ static inline void setup_ktimeout(struct
{
ktimeout->function = function;
ktimeout->data = data;
- init_ktimeout(ktimeout);
+ ktimeout_init(ktimeout);
}
/***
Index: linux/include/linux/timer.h
===================================================================
--- linux.orig/include/linux/timer.h
+++ linux/include/linux/timer.h
@@ -16,7 +16,7 @@
*/
#define TIMER_INITIALIZER KTIMEOUT_INITIALIZER
#define DEFINE_TIMER DEFINE_KTIMEOUT
-#define init_timer init_ktimeout
+#define init_timer ktimeout_init
#define setup_timer setup_ktimeout
#define timer_pending ktimeout_pending
#define add_timer_on add_ktimeout_on
Index: linux/kernel/ktimeout.c
===================================================================
--- linux.orig/kernel/ktimeout.c
+++ linux/kernel/ktimeout.c
@@ -135,18 +135,18 @@ ktimeout_base_t __init_ktimeout_base
EXPORT_SYMBOL(__init_ktimeout_base);
/***
- * init_ktimeout - initialize a timeout.
+ * ktimeout_init - initialize a timeout.
* @ktimeout: the timeout to be initialized
*
- * init_ktimeout() must be done to a timeout prior calling *any* of the
+ * ktimeout_init() must be called on a timeout prior to calling *any* of the
* other timeout functions.
*/
-void fastcall init_ktimeout(struct ktimeout *ktimeout)
+void fastcall ktimeout_init(struct ktimeout *ktimeout)
{
ktimeout->entry.next = NULL;
ktimeout->base = &per_cpu(tvec_bases, raw_smp_processor_id()).t_base;
}
-EXPORT_SYMBOL(init_ktimeout);
+EXPORT_SYMBOL(ktimeout_init);
static inline void detach_ktimeout(struct ktimeout *ktimeout,
int clear_pending)
--
^ permalink raw reply [flat|nested] 47+ messages in thread
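[Editor's note: the timer.h hunk above shows the rename strategy used throughout this series: the new name becomes the real symbol while a plain #define keeps old callers compiling. A toy illustration of that pattern follows; the struct and function here are simplified stand-ins, not the kernel signatures.]

```c
#include <assert.h>

struct compat_timeout {
	int initialized;
};

/* New canonical name, analogous to ktimeout_init(). */
static void compat_ktimeout_init(struct compat_timeout *t)
{
	t->initialized = 1;
}

/* Compatibility alias, same shape as "#define init_timer ktimeout_init":
 * legacy call sites need no source changes. */
#define compat_init_timer compat_ktimeout_init

static int compat_legacy_caller(void)
{
	struct compat_timeout t = { 0 };

	compat_init_timer(&t);	/* expands to compat_ktimeout_init(&t) */
	return t.initialized;
}
```

Because the alias is a macro rather than a wrapper function, it adds no call overhead and can be deleted once all callers are converted.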
* [patch 32/43] rename setup_ktimeout() to ktimeout_setup()
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
` (30 preceding siblings ...)
2005-12-01 0:04 ` [patch 31/43] rename init_ktimeout() to ktimeout_init() Thomas Gleixner
@ 2005-12-01 0:04 ` Thomas Gleixner
2005-12-01 0:04 ` [patch 33/43] rename add_ktimeout_on() to ktimeout_add_on() Thomas Gleixner
` (10 subsequent siblings)
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:04 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment (ktimeout-rename-ktimeout_setup.patch)
- rename setup_ktimeout() to ktimeout_setup()
Signed-off-by: Ingo Molnar <mingo@elte.hu>
include/linux/ktimeout.h | 2 +-
include/linux/timer.h | 2 +-
kernel/ktimeout.c | 2 +-
3 files changed, 3 insertions(+), 3 deletions(-)
Index: linux/include/linux/ktimeout.h
===================================================================
--- linux.orig/include/linux/ktimeout.h
+++ linux/include/linux/ktimeout.h
@@ -36,7 +36,7 @@ extern struct ktimeout_base_s __init_kti
void fastcall ktimeout_init(struct ktimeout * ktimeout);
-static inline void setup_ktimeout(struct ktimeout * ktimeout,
+static inline void ktimeout_setup(struct ktimeout * ktimeout,
void (*function)(unsigned long),
unsigned long data)
{
Index: linux/include/linux/timer.h
===================================================================
--- linux.orig/include/linux/timer.h
+++ linux/include/linux/timer.h
@@ -17,7 +17,7 @@
#define TIMER_INITIALIZER KTIMEOUT_INITIALIZER
#define DEFINE_TIMER DEFINE_KTIMEOUT
#define init_timer ktimeout_init
-#define setup_timer setup_ktimeout
+#define setup_timer ktimeout_setup
#define timer_pending ktimeout_pending
#define add_timer_on add_ktimeout_on
#define del_timer del_ktimeout
Index: linux/kernel/ktimeout.c
===================================================================
--- linux.orig/kernel/ktimeout.c
+++ linux/kernel/ktimeout.c
@@ -621,7 +621,7 @@ fastcall signed long __sched schedule_ti
expire = timeout + jiffies;
- setup_ktimeout(&ktimeout, process_timeout, (unsigned long)current);
+ ktimeout_setup(&ktimeout, process_timeout, (unsigned long)current);
__mod_ktimeout(&ktimeout, expire);
schedule();
del_singleshot_ktimeout_sync(&ktimeout);
--
^ permalink raw reply [flat|nested] 47+ messages in thread
* [patch 33/43] rename add_ktimeout_on() to ktimeout_add_on()
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
` (31 preceding siblings ...)
2005-12-01 0:04 ` [patch 32/43] rename setup_ktimeout() to ktimeout_setup() Thomas Gleixner
@ 2005-12-01 0:04 ` Thomas Gleixner
2005-12-01 0:04 ` [patch 34/43] rename del_ktimeout() to ktimeout_del() Thomas Gleixner
` (9 subsequent siblings)
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:04 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment (ktimeout-rename-ktimeout_add_on.patch)
- rename add_ktimeout_on() to ktimeout_add_on()
Signed-off-by: Ingo Molnar <mingo@elte.hu>
include/linux/ktimeout.h | 2 +-
include/linux/timer.h | 2 +-
kernel/ktimeout.c | 6 +++---
3 files changed, 5 insertions(+), 5 deletions(-)
Index: linux/include/linux/ktimeout.h
===================================================================
--- linux.orig/include/linux/ktimeout.h
+++ linux/include/linux/ktimeout.h
@@ -60,7 +60,7 @@ static inline int ktimeout_pending(const
return ktimeout->entry.next != NULL;
}
-extern void add_ktimeout_on(struct ktimeout *ktimeout, int cpu);
+extern void ktimeout_add_on(struct ktimeout *ktimeout, int cpu);
extern int del_ktimeout(struct ktimeout * ktimeout);
extern int __mod_ktimeout(struct ktimeout *ktimeout, unsigned long expires);
extern int mod_ktimeout(struct ktimeout *ktimeout, unsigned long expires);
Index: linux/include/linux/timer.h
===================================================================
--- linux.orig/include/linux/timer.h
+++ linux/include/linux/timer.h
@@ -19,7 +19,7 @@
#define init_timer ktimeout_init
#define setup_timer ktimeout_setup
#define timer_pending ktimeout_pending
-#define add_timer_on add_ktimeout_on
+#define add_timer_on ktimeout_add_on
#define del_timer del_ktimeout
#define __mod_timer __mod_ktimeout
#define mod_timer mod_ktimeout
Index: linux/kernel/ktimeout.c
===================================================================
--- linux.orig/kernel/ktimeout.c
+++ linux/kernel/ktimeout.c
@@ -237,13 +237,13 @@ int __mod_ktimeout(struct ktimeout *ktim
EXPORT_SYMBOL(__mod_ktimeout);
/***
- * add_ktimeout_on - start a timeout on a particular CPU
+ * ktimeout_add_on - start a timeout on a particular CPU
* @ktimeout: the timeout to be added
* @cpu: the CPU to start it on
*
* This is not very scalable on SMP. Double adds are not possible.
*/
-void add_ktimeout_on(struct ktimeout *ktimeout, int cpu)
+void ktimeout_add_on(struct ktimeout *ktimeout, int cpu)
{
tvec_base_t *base = &per_cpu(tvec_bases, cpu);
unsigned long flags;
@@ -364,7 +364,7 @@ out:
* otherwise this function is meaningless. It must not be called from
* interrupt contexts. The caller must not hold locks which would prevent
* completion of the timeout's handler. The timeout's handler must not call
- * add_ktimeout_on(). Upon exit the timeout is not queued and the handler is
+ * ktimeout_add_on(). Upon exit the timeout is not queued and the handler is
* not running on any CPU.
*
* The function returns whether it has deactivated a pending timeout or not.
--
^ permalink raw reply [flat|nested] 47+ messages in thread
* [patch 34/43] rename del_ktimeout() to ktimeout_del()
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
` (32 preceding siblings ...)
2005-12-01 0:04 ` [patch 33/43] rename add_ktimeout_on() to ktimeout_add_on() Thomas Gleixner
@ 2005-12-01 0:04 ` Thomas Gleixner
2005-12-01 0:04 ` [patch 35/43] rename __mod_ktimeout() to __ktimeout_mod() Thomas Gleixner
` (8 subsequent siblings)
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:04 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment (ktimeout-rename-ktimeout_del.patch)
- rename del_ktimeout() to ktimeout_del()
Signed-off-by: Ingo Molnar <mingo@elte.hu>
include/linux/ktimeout.h | 6 +++---
include/linux/timer.h | 2 +-
kernel/ktimeout.c | 14 +++++++-------
3 files changed, 11 insertions(+), 11 deletions(-)
Index: linux/include/linux/ktimeout.h
===================================================================
--- linux.orig/include/linux/ktimeout.h
+++ linux/include/linux/ktimeout.h
@@ -61,7 +61,7 @@ static inline int ktimeout_pending(const
}
extern void ktimeout_add_on(struct ktimeout *ktimeout, int cpu);
-extern int del_ktimeout(struct ktimeout * ktimeout);
+extern int ktimeout_del(struct ktimeout * ktimeout);
extern int __mod_ktimeout(struct ktimeout *ktimeout, unsigned long expires);
extern int mod_ktimeout(struct ktimeout *ktimeout, unsigned long expires);
@@ -91,8 +91,8 @@ static inline void add_ktimeout(struct k
extern int try_to_del_ktimeout_sync(struct ktimeout *ktimeout);
extern int del_ktimeout_sync(struct ktimeout *ktimeout);
#else
-# define try_to_del_ktimeout_sync(t) del_ktimeout(t)
-# define del_ktimeout_sync(t) del_ktimeout(t)
+# define try_to_del_ktimeout_sync(t) ktimeout_del(t)
+# define del_ktimeout_sync(t) ktimeout_del(t)
#endif
#define del_singleshot_ktimeout_sync(t) del_ktimeout_sync(t)
Index: linux/include/linux/timer.h
===================================================================
--- linux.orig/include/linux/timer.h
+++ linux/include/linux/timer.h
@@ -20,7 +20,7 @@
#define setup_timer ktimeout_setup
#define timer_pending ktimeout_pending
#define add_timer_on ktimeout_add_on
-#define del_timer del_ktimeout
+#define del_timer ktimeout_del
#define __mod_timer __mod_ktimeout
#define mod_timer mod_ktimeout
#define next_timer_interrupt next_ktimeout_interrupt
Index: linux/kernel/ktimeout.c
===================================================================
--- linux.orig/kernel/ktimeout.c
+++ linux/kernel/ktimeout.c
@@ -265,7 +265,7 @@ void ktimeout_add_on(struct ktimeout *kt
*
* mod_ktimeout(ktimeout, expires) is equivalent to:
*
- * del_ktimeout(ktimeout); ktimeout->expires = expires; add_ktimeout(ktimeout);
+ * ktimeout_del(ktimeout); ktimeout->expires = expires; add_ktimeout(ktimeout);
*
* Note that if there are multiple unserialized concurrent users of the same
* timeout, then mod_ktimeout() is the only safe way to modify the interval,
@@ -293,17 +293,17 @@ int mod_ktimeout(struct ktimeout *ktimeo
EXPORT_SYMBOL(mod_ktimeout);
/***
- * del_ktimeout - deactive a timeout.
+ * ktimeout_del - deactivate a timeout.
* @ktimeout: the timeout to be deactivated
*
- * del_ktimeout() deactivates a timeout - this works on both active and inactive
+ * ktimeout_del() deactivates a timeout - this works on both active and inactive
* ktimeouts.
*
* The function returns whether it has deactivated a pending timeout or not.
- * (ie. del_ktimeout() of an inactive timeout returns 0, del_ktimeout() of an
+ * (ie. ktimeout_del() of an inactive timeout returns 0, ktimeout_del() of an
* active timeout returns 1.)
*/
-int del_ktimeout(struct ktimeout *ktimeout)
+int ktimeout_del(struct ktimeout *ktimeout)
{
ktimeout_base_t *base;
unsigned long flags;
@@ -321,7 +321,7 @@ int del_ktimeout(struct ktimeout *ktimeo
return ret;
}
-EXPORT_SYMBOL(del_ktimeout);
+EXPORT_SYMBOL(ktimeout_del);
#ifdef CONFIG_SMP
/*
@@ -356,7 +356,7 @@ out:
* del_ktimeout_sync - deactivate a timeout and wait for the handler to finish.
* @ktimeout: the timeout to be deactivated
*
- * This function only differs from del_ktimeout() on SMP: besides deactivating
+ * This function only differs from ktimeout_del() on SMP: besides deactivating
* the timeout it also makes sure the handler has finished executing on other
* CPUs.
*
--
^ permalink raw reply [flat|nested] 47+ messages in thread
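[Editor's note: a minimal model of the ktimeout_del() contract documented above: a timeout is "pending" iff its list entry is linked (entry.next != NULL), and del returns 1 only when it deactivated a pending timeout. Locking and the per-CPU base are deliberately omitted; this sketches the return-value contract only.]

```c
#include <assert.h>
#include <stddef.h>

struct model_timeout {
	void *next;		/* stands in for ktimeout->entry.next */
};

/* Mirrors ktimeout_pending(): linked means pending. */
static int model_pending(const struct model_timeout *t)
{
	return t->next != NULL;
}

static int model_del(struct model_timeout *t)
{
	if (!model_pending(t))
		return 0;	/* inactive: nothing to deactivate */
	t->next = NULL;		/* detach from its list */
	return 1;		/* deactivated a pending timeout */
}

/* Exercise the contract: first del of an active timeout returns 1,
 * a second del of the now-inactive timeout returns 0. */
static int model_del_demo(void)
{
	static int anchor;
	struct model_timeout t = { &anchor };

	return model_del(&t) * 10 + model_del(&t);
}
```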
* [patch 35/43] rename __mod_ktimeout() to __ktimeout_mod()
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
` (33 preceding siblings ...)
2005-12-01 0:04 ` [patch 34/43] rename del_ktimeout() to ktimeout_del() Thomas Gleixner
@ 2005-12-01 0:04 ` Thomas Gleixner
2005-12-01 0:04 ` [patch 36/43] rename mod_ktimeout() to ktimeout_mod() Thomas Gleixner
` (7 subsequent siblings)
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:04 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment (ktimeout-rename-__ktimeout_mod.patch)
- rename __mod_ktimeout() to __ktimeout_mod()
Signed-off-by: Ingo Molnar <mingo@elte.hu>
include/linux/ktimeout.h | 4 ++--
include/linux/timer.h | 2 +-
kernel/ktimeout.c | 8 ++++----
3 files changed, 7 insertions(+), 7 deletions(-)
Index: linux/include/linux/ktimeout.h
===================================================================
--- linux.orig/include/linux/ktimeout.h
+++ linux/include/linux/ktimeout.h
@@ -62,7 +62,7 @@ static inline int ktimeout_pending(const
extern void ktimeout_add_on(struct ktimeout *ktimeout, int cpu);
extern int ktimeout_del(struct ktimeout * ktimeout);
-extern int __mod_ktimeout(struct ktimeout *ktimeout, unsigned long expires);
+extern int __ktimeout_mod(struct ktimeout *ktimeout, unsigned long expires);
extern int mod_ktimeout(struct ktimeout *ktimeout, unsigned long expires);
extern unsigned long next_ktimeout_interrupt(void);
@@ -84,7 +84,7 @@ extern unsigned long next_ktimeout_inter
static inline void add_ktimeout(struct ktimeout *ktimeout)
{
BUG_ON(ktimeout_pending(ktimeout));
- __mod_ktimeout(ktimeout, ktimeout->expires);
+ __ktimeout_mod(ktimeout, ktimeout->expires);
}
#ifdef CONFIG_SMP
Index: linux/include/linux/timer.h
===================================================================
--- linux.orig/include/linux/timer.h
+++ linux/include/linux/timer.h
@@ -21,7 +21,7 @@
#define timer_pending ktimeout_pending
#define add_timer_on ktimeout_add_on
#define del_timer ktimeout_del
-#define __mod_timer __mod_ktimeout
+#define __mod_timer __ktimeout_mod
#define mod_timer mod_ktimeout
#define next_timer_interrupt next_ktimeout_interrupt
#define add_timer add_ktimeout
Index: linux/kernel/ktimeout.c
===================================================================
--- linux.orig/kernel/ktimeout.c
+++ linux/kernel/ktimeout.c
@@ -189,7 +189,7 @@ static ktimeout_base_t *lock_ktimeout_ba
}
}
-int __mod_ktimeout(struct ktimeout *ktimeout, unsigned long expires)
+int __ktimeout_mod(struct ktimeout *ktimeout, unsigned long expires)
{
ktimeout_base_t *base;
tvec_base_t *new_base;
@@ -234,7 +234,7 @@ int __mod_ktimeout(struct ktimeout *ktim
return ret;
}
-EXPORT_SYMBOL(__mod_ktimeout);
+EXPORT_SYMBOL(__ktimeout_mod);
/***
* ktimeout_add_on - start a timeout on a particular CPU
@@ -287,7 +287,7 @@ int mod_ktimeout(struct ktimeout *ktimeo
if (ktimeout->expires == expires && ktimeout_pending(ktimeout))
return 1;
- return __mod_ktimeout(ktimeout, expires);
+ return __ktimeout_mod(ktimeout, expires);
}
EXPORT_SYMBOL(mod_ktimeout);
@@ -622,7 +622,7 @@ fastcall signed long __sched schedule_ti
expire = timeout + jiffies;
ktimeout_setup(&ktimeout, process_timeout, (unsigned long)current);
- __mod_ktimeout(&ktimeout, expire);
+ __ktimeout_mod(&ktimeout, expire);
schedule();
del_singleshot_ktimeout_sync(&ktimeout);
--
^ permalink raw reply [flat|nested] 47+ messages in thread
* [patch 36/43] rename mod_ktimeout() to ktimeout_mod()
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
` (34 preceding siblings ...)
2005-12-01 0:04 ` [patch 35/43] rename __mod_ktimeout() to __ktimeout_mod() Thomas Gleixner
@ 2005-12-01 0:04 ` Thomas Gleixner
2005-12-01 0:04 ` [patch 37/43] rename next_ktimeout_interrupt() to ktimeout_next_interrupt() Thomas Gleixner
` (6 subsequent siblings)
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:04 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment (ktimeout-rename-ktimeout_mod.patch)
- rename mod_ktimeout() to ktimeout_mod()
Signed-off-by: Ingo Molnar <mingo@elte.hu>
include/linux/ktimeout.h | 2 +-
include/linux/timer.h | 2 +-
kernel/ktimeout.c | 14 +++++++-------
3 files changed, 9 insertions(+), 9 deletions(-)
Index: linux/include/linux/ktimeout.h
===================================================================
--- linux.orig/include/linux/ktimeout.h
+++ linux/include/linux/ktimeout.h
@@ -63,7 +63,7 @@ static inline int ktimeout_pending(const
extern void ktimeout_add_on(struct ktimeout *ktimeout, int cpu);
extern int ktimeout_del(struct ktimeout * ktimeout);
extern int __ktimeout_mod(struct ktimeout *ktimeout, unsigned long expires);
-extern int mod_ktimeout(struct ktimeout *ktimeout, unsigned long expires);
+extern int ktimeout_mod(struct ktimeout *ktimeout, unsigned long expires);
extern unsigned long next_ktimeout_interrupt(void);
Index: linux/include/linux/timer.h
===================================================================
--- linux.orig/include/linux/timer.h
+++ linux/include/linux/timer.h
@@ -22,7 +22,7 @@
#define add_timer_on ktimeout_add_on
#define del_timer ktimeout_del
#define __mod_timer __ktimeout_mod
-#define mod_timer mod_ktimeout
+#define mod_timer ktimeout_mod
#define next_timer_interrupt next_ktimeout_interrupt
#define add_timer add_ktimeout
#define try_to_del_timer_sync try_to_del_ktimeout_sync
Index: linux/kernel/ktimeout.c
===================================================================
--- linux.orig/kernel/ktimeout.c
+++ linux/kernel/ktimeout.c
@@ -257,25 +257,25 @@ void ktimeout_add_on(struct ktimeout *kt
/***
- * mod_ktimeout - modify a timeout's interval
+ * ktimeout_mod - modify a timeout's interval
* @ktimeout: the timeout to be modified
*
- * mod_ktimeout is a more efficient way to update the expire field of an
+ * ktimeout_mod is a more efficient way to update the expires field of an
* active timeout (if the timeout is inactive it will be activated)
*
- * mod_ktimeout(ktimeout, expires) is equivalent to:
+ * ktimeout_mod(ktimeout, expires) is equivalent to:
*
* ktimeout_del(ktimeout); ktimeout->expires = expires; add_ktimeout(ktimeout);
*
* Note that if there are multiple unserialized concurrent users of the same
- * timeout, then mod_ktimeout() is the only safe way to modify the interval,
+ * timeout, then ktimeout_mod() is the only safe way to modify the interval,
* since add_ktimeout() cannot modify an already running ktimeout.
*
* The function returns whether it has modified a pending timeout or not.
- * (ie. mod_ktimeout() of an inactive timeout returns 0, mod_ktimeout() of an
+ * (ie. ktimeout_mod() of an inactive timeout returns 0, ktimeout_mod() of an
* active timeout returns 1.)
*/
-int mod_ktimeout(struct ktimeout *ktimeout, unsigned long expires)
+int ktimeout_mod(struct ktimeout *ktimeout, unsigned long expires)
{
BUG_ON(!ktimeout->function);
@@ -290,7 +290,7 @@ int mod_ktimeout(struct ktimeout *ktimeo
return __ktimeout_mod(ktimeout, expires);
}
-EXPORT_SYMBOL(mod_ktimeout);
+EXPORT_SYMBOL(ktimeout_mod);
/***
* ktimeout_del - deactive a timeout.
--
^ permalink raw reply [flat|nested] 47+ messages in thread
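[Editor's note: a model of the ktimeout_mod() semantics described above: return whether a pending timeout was modified, with the fast path that leaves a pending timeout alone when the expiry is unchanged. Queueing and locking are omitted; this is a sketch of the return-value contract, not the kernel implementation.]

```c
#include <assert.h>

struct mod_model {
	int pending;
	unsigned long expires;
};

/* Stand-in for __ktimeout_mod(): (re)queue and report the old state. */
static int model_mod_internal(struct mod_model *t, unsigned long expires)
{
	int was_pending = t->pending;

	t->expires = expires;
	t->pending = 1;
	return was_pending;
}

static int model_mod(struct mod_model *t, unsigned long expires)
{
	if (t->expires == expires && t->pending)
		return 1;	/* already queued for that expiry: no-op */
	return model_mod_internal(t, expires);
}

/* inactive -> 0; unchanged & pending -> 1 (fast path); new expiry -> 1 */
static int model_mod_demo(void)
{
	struct mod_model t = { 0, 0 };
	int a = model_mod(&t, 100);
	int b = model_mod(&t, 100);
	int c = model_mod(&t, 200);

	return a * 100 + b * 10 + c;
}
```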
* [patch 37/43] rename next_ktimeout_interrupt() to ktimeout_next_interrupt()
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
` (35 preceding siblings ...)
2005-12-01 0:04 ` [patch 36/43] rename mod_ktimeout() to ktimeout_mod() Thomas Gleixner
@ 2005-12-01 0:04 ` Thomas Gleixner
2005-12-01 0:04 ` [patch 38/43] rename add_ktimeout() to ktimeout_add() Thomas Gleixner
` (5 subsequent siblings)
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:04 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment
(ktimeout-rename-ktimeout_next_interrupt.patch)
- rename next_ktimeout_interrupt() to ktimeout_next_interrupt()
Signed-off-by: Ingo Molnar <mingo@elte.hu>
include/linux/ktimeout.h | 2 +-
include/linux/timer.h | 2 +-
kernel/ktimeout.c | 2 +-
3 files changed, 3 insertions(+), 3 deletions(-)
Index: linux/include/linux/ktimeout.h
===================================================================
--- linux.orig/include/linux/ktimeout.h
+++ linux/include/linux/ktimeout.h
@@ -65,7 +65,7 @@ extern int ktimeout_del(struct ktimeout
extern int __ktimeout_mod(struct ktimeout *ktimeout, unsigned long expires);
extern int ktimeout_mod(struct ktimeout *ktimeout, unsigned long expires);
-extern unsigned long next_ktimeout_interrupt(void);
+extern unsigned long ktimeout_next_interrupt(void);
/***
* add_ktimeout - start a ktimeout
Index: linux/include/linux/timer.h
===================================================================
--- linux.orig/include/linux/timer.h
+++ linux/include/linux/timer.h
@@ -23,7 +23,7 @@
#define del_timer ktimeout_del
#define __mod_timer __ktimeout_mod
#define mod_timer ktimeout_mod
-#define next_timer_interrupt next_ktimeout_interrupt
+#define next_timer_interrupt ktimeout_next_interrupt
#define add_timer add_ktimeout
#define try_to_del_timer_sync try_to_del_ktimeout_sync
#define del_timer_sync del_ktimeout_sync
Index: linux/kernel/ktimeout.c
===================================================================
--- linux.orig/kernel/ktimeout.c
+++ linux/kernel/ktimeout.c
@@ -470,7 +470,7 @@ static inline void __run_ktimeouts(tvec_
* is used on S/390 to stop all activity when a CPU is idle.
* This function needs to be called with interrupts disabled.
*/
-unsigned long next_ktimeout_interrupt(void)
+unsigned long ktimeout_next_interrupt(void)
{
tvec_base_t *base;
struct list_head *list;
--
^ permalink raw reply [flat|nested] 47+ messages in thread
* [patch 38/43] rename add_ktimeout() to ktimeout_add()
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
` (36 preceding siblings ...)
2005-12-01 0:04 ` [patch 37/43] rename next_ktimeout_interrupt() to ktimeout_next_interrupt() Thomas Gleixner
@ 2005-12-01 0:04 ` Thomas Gleixner
2005-12-01 0:04 ` [patch 39/43] rename try_to_del_ktimeout_sync() to ktimeout_try_to_del_sync() Thomas Gleixner
` (4 subsequent siblings)
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:04 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment (ktimeout-rename-ktimeout_add.patch)
- rename add_ktimeout() to ktimeout_add()
Signed-off-by: Ingo Molnar <mingo@elte.hu>
include/linux/ktimeout.h | 4 ++--
include/linux/timer.h | 2 +-
kernel/ktimeout.c | 14 +++++++-------
3 files changed, 10 insertions(+), 10 deletions(-)
Index: linux/include/linux/ktimeout.h
===================================================================
--- linux.orig/include/linux/ktimeout.h
+++ linux/include/linux/ktimeout.h
@@ -68,7 +68,7 @@ extern int ktimeout_mod(struct ktimeout
extern unsigned long ktimeout_next_interrupt(void);
/***
- * add_ktimeout - start a ktimeout
+ * ktimeout_add - start a ktimeout
* @ktimeout: the timeout to be added
*
* The kernel will do a ->function(->data) callback from the
@@ -81,7 +81,7 @@ extern unsigned long ktimeout_next_inter
* Timeouts with an ->expires field in the past will be executed in the next
* timeout tick.
*/
-static inline void add_ktimeout(struct ktimeout *ktimeout)
+static inline void ktimeout_add(struct ktimeout *ktimeout)
{
BUG_ON(ktimeout_pending(ktimeout));
__ktimeout_mod(ktimeout, ktimeout->expires);
Index: linux/include/linux/timer.h
===================================================================
--- linux.orig/include/linux/timer.h
+++ linux/include/linux/timer.h
@@ -24,7 +24,7 @@
#define __mod_timer __ktimeout_mod
#define mod_timer ktimeout_mod
#define next_timer_interrupt ktimeout_next_interrupt
-#define add_timer add_ktimeout
+#define add_timer ktimeout_add
#define try_to_del_timer_sync try_to_del_ktimeout_sync
#define del_timer_sync del_ktimeout_sync
#define del_singleshot_timer_sync del_singleshot_ktimeout_sync
Index: linux/kernel/ktimeout.c
===================================================================
--- linux.orig/kernel/ktimeout.c
+++ linux/kernel/ktimeout.c
@@ -82,7 +82,7 @@ static inline void set_running_ktimeout(
#endif
}
-static void internal_add_ktimeout(tvec_base_t *base, struct ktimeout *ktimeout)
+static void internal_ktimeout_add(tvec_base_t *base, struct ktimeout *ktimeout)
{
unsigned long expires = ktimeout->expires;
unsigned long idx = expires - base->ktimeout_jiffies;
@@ -228,7 +228,7 @@ int __ktimeout_mod(struct ktimeout *ktim
}
ktimeout->expires = expires;
- internal_add_ktimeout(new_base, ktimeout);
+ internal_ktimeout_add(new_base, ktimeout);
spin_unlock_irqrestore(&new_base->t_base.lock, flags);
return ret;
@@ -251,7 +251,7 @@ void ktimeout_add_on(struct ktimeout *kt
BUG_ON(ktimeout_pending(ktimeout) || !ktimeout->function);
spin_lock_irqsave(&base->t_base.lock, flags);
ktimeout->base = &base->t_base;
- internal_add_ktimeout(base, ktimeout);
+ internal_ktimeout_add(base, ktimeout);
spin_unlock_irqrestore(&base->t_base.lock, flags);
}
@@ -265,11 +265,11 @@ void ktimeout_add_on(struct ktimeout *kt
*
* ktimeout_mod(ktimeout, expires) is equivalent to:
*
- * ktimeout_del(ktimeout); ktimeout->expires = expires; add_ktimeout(ktimeout);
+ * ktimeout_del(ktimeout); ktimeout->expires = expires; ktimeout_add(ktimeout);
*
* Note that if there are multiple unserialized concurrent users of the same
* timeout, then ktimeout_mod() is the only safe way to modify the interval,
- * since add_ktimeout() cannot modify an already running ktimeout.
+ * since ktimeout_add() cannot modify an already running ktimeout.
*
* The function returns whether it has modified a pending timeout or not.
* (ie. ktimeout_mod() of an inactive timeout returns 0, ktimeout_mod() of an
@@ -398,7 +398,7 @@ static int cascade(tvec_base_t *base, tv
tmp = list_entry(curr, struct ktimeout, entry);
BUG_ON(tmp->base != &base->t_base);
curr = curr->next;
- internal_add_ktimeout(base, tmp);
+ internal_ktimeout_add(base, tmp);
}
INIT_LIST_HEAD(head);
@@ -708,7 +708,7 @@ static void migrate_ktimeout(tvec_base_t
ktimeout = list_entry(head->next, struct ktimeout, entry);
detach_ktimeout(ktimeout, 0);
ktimeout->base = &new_base->t_base;
- internal_add_ktimeout(new_base, ktimeout);
+ internal_ktimeout_add(new_base, ktimeout);
}
}
--
^ permalink raw reply [flat|nested] 47+ messages in thread
* [patch 39/43] rename try_to_del_ktimeout_sync() to ktimeout_try_to_del_sync()
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
` (37 preceding siblings ...)
2005-12-01 0:04 ` [patch 38/43] rename add_ktimeout() to ktimeout_add() Thomas Gleixner
@ 2005-12-01 0:04 ` Thomas Gleixner
2005-12-01 0:04 ` [patch 40/43] rename del_ktimeout_sync() to ktimeout_del_sync() Thomas Gleixner
` (3 subsequent siblings)
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:04 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment
(ktimeout-rename-ktimeout_try_to_del_sync.patch)
- rename try_to_del_ktimeout_sync() to ktimeout_try_to_del_sync()
Signed-off-by: Ingo Molnar <mingo@elte.hu>
include/linux/ktimeout.h | 4 ++--
include/linux/timer.h | 2 +-
kernel/ktimeout.c | 4 ++--
3 files changed, 5 insertions(+), 5 deletions(-)
Index: linux/include/linux/ktimeout.h
===================================================================
--- linux.orig/include/linux/ktimeout.h
+++ linux/include/linux/ktimeout.h
@@ -88,10 +88,10 @@ static inline void ktimeout_add(struct k
}
#ifdef CONFIG_SMP
- extern int try_to_del_ktimeout_sync(struct ktimeout *ktimeout);
+ extern int ktimeout_try_to_del_sync(struct ktimeout *ktimeout);
extern int del_ktimeout_sync(struct ktimeout *ktimeout);
#else
-# define try_to_del_ktimeout_sync(t) ktimeout_del(t)
+# define ktimeout_try_to_del_sync(t) ktimeout_del(t)
# define del_ktimeout_sync(t) ktimeout_del(t)
#endif
Index: linux/include/linux/timer.h
===================================================================
--- linux.orig/include/linux/timer.h
+++ linux/include/linux/timer.h
@@ -25,7 +25,7 @@
#define mod_timer ktimeout_mod
#define next_timer_interrupt ktimeout_next_interrupt
#define add_timer ktimeout_add
-#define try_to_del_timer_sync try_to_del_ktimeout_sync
+#define try_to_del_timer_sync ktimeout_try_to_del_sync
#define del_timer_sync del_ktimeout_sync
#define del_singleshot_timer_sync del_singleshot_ktimeout_sync
#define init_timers init_ktimeouts
Index: linux/kernel/ktimeout.c
===================================================================
--- linux.orig/kernel/ktimeout.c
+++ linux/kernel/ktimeout.c
@@ -330,7 +330,7 @@ EXPORT_SYMBOL(ktimeout_del);
*
* It must not be called from interrupt contexts.
*/
-int try_to_del_ktimeout_sync(struct ktimeout *ktimeout)
+int ktimeout_try_to_del_sync(struct ktimeout *ktimeout)
{
ktimeout_base_t *base;
unsigned long flags;
@@ -372,7 +372,7 @@ out:
int del_ktimeout_sync(struct ktimeout *ktimeout)
{
for (;;) {
- int ret = try_to_del_ktimeout_sync(ktimeout);
+ int ret = ktimeout_try_to_del_sync(ktimeout);
if (ret >= 0)
return ret;
}
--
* [patch 40/43] rename del_ktimeout_sync() to ktimeout_del_sync()
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
` (38 preceding siblings ...)
2005-12-01 0:04 ` [patch 39/43] rename try_to_del_ktimeout_sync() to ktimeout_try_to_del_sync() Thomas Gleixner
@ 2005-12-01 0:04 ` Thomas Gleixner
2005-12-01 0:04 ` [patch 41/43] rename del_singleshot_ktimeout_sync() to ktimeout_del_singleshot_sync() Thomas Gleixner
` (2 subsequent siblings)
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:04 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment (ktimeout-rename-ktimeout_del_sync.patch)
- rename del_ktimeout_sync() to ktimeout_del_sync()
Signed-off-by: Ingo Molnar <mingo@elte.hu>
include/linux/ktimeout.h | 6 +++---
include/linux/timer.h | 2 +-
kernel/ktimeout.c | 8 ++++----
3 files changed, 8 insertions(+), 8 deletions(-)
Index: linux/include/linux/ktimeout.h
===================================================================
--- linux.orig/include/linux/ktimeout.h
+++ linux/include/linux/ktimeout.h
@@ -89,13 +89,13 @@ static inline void ktimeout_add(struct k
#ifdef CONFIG_SMP
extern int ktimeout_try_to_del_sync(struct ktimeout *ktimeout);
- extern int del_ktimeout_sync(struct ktimeout *ktimeout);
+ extern int ktimeout_del_sync(struct ktimeout *ktimeout);
#else
# define ktimeout_try_to_del_sync(t) ktimeout_del(t)
-# define del_ktimeout_sync(t) ktimeout_del(t)
+# define ktimeout_del_sync(t) ktimeout_del(t)
#endif
-#define del_singleshot_ktimeout_sync(t) del_ktimeout_sync(t)
+#define del_singleshot_ktimeout_sync(t) ktimeout_del_sync(t)
extern void init_ktimeouts(void);
extern void run_local_ktimeouts(void);
Index: linux/include/linux/timer.h
===================================================================
--- linux.orig/include/linux/timer.h
+++ linux/include/linux/timer.h
@@ -26,7 +26,7 @@
#define next_timer_interrupt ktimeout_next_interrupt
#define add_timer ktimeout_add
#define try_to_del_timer_sync ktimeout_try_to_del_sync
-#define del_timer_sync del_ktimeout_sync
+#define del_timer_sync ktimeout_del_sync
#define del_singleshot_timer_sync del_singleshot_ktimeout_sync
#define init_timers init_ktimeouts
#define run_local_timers run_local_ktimeouts
Index: linux/kernel/ktimeout.c
===================================================================
--- linux.orig/kernel/ktimeout.c
+++ linux/kernel/ktimeout.c
@@ -211,7 +211,7 @@ int __ktimeout_mod(struct ktimeout *ktim
/*
* We are trying to schedule the timeout on the local CPU.
* However we can't change timeout's base while it is running,
- * otherwise del_ktimeout_sync() can't detect that the timeout's
+ * otherwise ktimeout_del_sync() can't detect that the timeout's
 * handler has not yet finished. This also guarantees that
* the timeout is serialized wrt itself.
*/
@@ -353,7 +353,7 @@ out:
}
/***
- * del_ktimeout_sync - deactivate a timeout and wait for the handler to finish.
+ * ktimeout_del_sync - deactivate a timeout and wait for the handler to finish.
* @ktimeout: the timeout to be deactivated
*
* This function only differs from ktimeout_del() on SMP: besides deactivating
@@ -369,7 +369,7 @@ out:
*
* The function returns whether it has deactivated a pending timeout or not.
*/
-int del_ktimeout_sync(struct ktimeout *ktimeout)
+int ktimeout_del_sync(struct ktimeout *ktimeout)
{
for (;;) {
int ret = ktimeout_try_to_del_sync(ktimeout);
@@ -378,7 +378,7 @@ int del_ktimeout_sync(struct ktimeout *k
}
}
-EXPORT_SYMBOL(del_ktimeout_sync);
+EXPORT_SYMBOL(ktimeout_del_sync);
#endif
static int cascade(tvec_base_t *base, tvec_t *tv, int index)
--
* [patch 41/43] rename del_singleshot_ktimeout_sync() to ktimeout_del_singleshot_sync()
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
` (39 preceding siblings ...)
2005-12-01 0:04 ` [patch 40/43] rename del_ktimeout_sync() to ktimeout_del_sync() Thomas Gleixner
@ 2005-12-01 0:04 ` Thomas Gleixner
2005-12-01 0:04 ` [patch 42/43] rename TIMER_SOFTIRQ to TIMEOUT_SOFTIRQ Thomas Gleixner
2005-12-01 0:04 ` [patch 43/43] ktimeout code style cleanups Thomas Gleixner
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:04 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment (ktimeout-rename-ktimeout_del_singleshot_sync.patch)
- rename del_singleshot_ktimeout_sync() to ktimeout_del_singleshot_sync()
Signed-off-by: Ingo Molnar <mingo@elte.hu>
include/linux/ktimeout.h | 2 +-
include/linux/timer.h | 2 +-
kernel/ktimeout.c | 2 +-
3 files changed, 3 insertions(+), 3 deletions(-)
Index: linux/include/linux/ktimeout.h
===================================================================
--- linux.orig/include/linux/ktimeout.h
+++ linux/include/linux/ktimeout.h
@@ -95,7 +95,7 @@ static inline void ktimeout_add(struct k
# define ktimeout_del_sync(t) ktimeout_del(t)
#endif
-#define del_singleshot_ktimeout_sync(t) ktimeout_del_sync(t)
+#define ktimeout_del_singleshot_sync(t) ktimeout_del_sync(t)
extern void init_ktimeouts(void);
extern void run_local_ktimeouts(void);
Index: linux/include/linux/timer.h
===================================================================
--- linux.orig/include/linux/timer.h
+++ linux/include/linux/timer.h
@@ -27,7 +27,7 @@
#define add_timer ktimeout_add
#define try_to_del_timer_sync ktimeout_try_to_del_sync
#define del_timer_sync ktimeout_del_sync
-#define del_singleshot_timer_sync del_singleshot_ktimeout_sync
+#define del_singleshot_timer_sync ktimeout_del_singleshot_sync
#define init_timers init_ktimeouts
#define run_local_timers run_local_ktimeouts
Index: linux/kernel/ktimeout.c
===================================================================
--- linux.orig/kernel/ktimeout.c
+++ linux/kernel/ktimeout.c
@@ -624,7 +624,7 @@ fastcall signed long __sched schedule_ti
ktimeout_setup(&ktimeout, process_timeout, (unsigned long)current);
__ktimeout_mod(&ktimeout, expire);
schedule();
- del_singleshot_ktimeout_sync(&ktimeout);
+ ktimeout_del_singleshot_sync(&ktimeout);
timeout = expire - jiffies;
--
* [patch 42/43] rename TIMER_SOFTIRQ to TIMEOUT_SOFTIRQ
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
` (40 preceding siblings ...)
2005-12-01 0:04 ` [patch 41/43] rename del_singleshot_ktimeout_sync() to ktimeout_del_singleshot_sync() Thomas Gleixner
@ 2005-12-01 0:04 ` Thomas Gleixner
2005-12-01 0:04 ` [patch 43/43] ktimeout code style cleanups Thomas Gleixner
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:04 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment (ktimeout-rename-TIMER_SOFTIRQ.patch)
- rename TIMER_SOFTIRQ to TIMEOUT_SOFTIRQ
Signed-off-by: Ingo Molnar <mingo@elte.hu>
include/linux/interrupt.h | 2 +-
kernel/ktimeout.c | 4 ++--
2 files changed, 3 insertions(+), 3 deletions(-)
Index: linux/include/linux/interrupt.h
===================================================================
--- linux.orig/include/linux/interrupt.h
+++ linux/include/linux/interrupt.h
@@ -109,7 +109,7 @@ extern void local_bh_enable(void);
enum
{
HI_SOFTIRQ=0,
- TIMER_SOFTIRQ,
+ TIMEOUT_SOFTIRQ,
NET_TX_SOFTIRQ,
NET_RX_SOFTIRQ,
SCSI_SOFTIRQ,
Index: linux/kernel/ktimeout.c
===================================================================
--- linux.orig/kernel/ktimeout.c
+++ linux/kernel/ktimeout.c
@@ -550,7 +550,7 @@ static void run_ktimeout_softirq(struct
*/
void run_local_ktimeouts(void)
{
- raise_softirq(TIMER_SOFTIRQ);
+ raise_softirq(TIMEOUT_SOFTIRQ);
}
static void process_timeout(unsigned long __data)
@@ -773,5 +773,5 @@ void __init init_ktimeouts(void)
ktimeout_cpu_notify(&ktimeouts_nb, (unsigned long)CPU_UP_PREPARE,
(void *)(long)smp_processor_id());
register_cpu_notifier(&ktimeouts_nb);
- open_softirq(TIMER_SOFTIRQ, run_ktimeout_softirq, NULL);
+ open_softirq(TIMEOUT_SOFTIRQ, run_ktimeout_softirq, NULL);
}
--
* [patch 43/43] ktimeout code style cleanups
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
` (41 preceding siblings ...)
2005-12-01 0:04 ` [patch 42/43] rename TIMER_SOFTIRQ to TIMEOUT_SOFTIRQ Thomas Gleixner
@ 2005-12-01 0:04 ` Thomas Gleixner
42 siblings, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2005-12-01 0:04 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm, mingo, zippel, george, johnstul
plain text document attachment (ktimeout-tidy.patch)
- code style cleanups
Signed-off-by: Ingo Molnar <mingo@elte.hu>
kernel/ktimeout.c | 41 ++++++++++++++---------------------------
1 files changed, 14 insertions(+), 27 deletions(-)
Index: linux/kernel/ktimeout.c
===================================================================
--- linux.orig/kernel/ktimeout.c
+++ linux/kernel/ktimeout.c
@@ -16,26 +16,12 @@
* Designed by David S. Miller, Alexey Kuznetsov and Ingo Molnar
*/
-#include <linux/kernel_stat.h>
+#include <linux/notifier.h>
#include <linux/module.h>
-#include <linux/interrupt.h>
#include <linux/percpu.h>
#include <linux/init.h>
-#include <linux/mm.h>
-#include <linux/swap.h>
-#include <linux/notifier.h>
-#include <linux/thread_info.h>
-#include <linux/time.h>
-#include <linux/jiffies.h>
-#include <linux/posix-timers.h>
#include <linux/cpu.h>
-#include <linux/syscalls.h>
-
-#include <asm/uaccess.h>
-#include <asm/unistd.h>
-#include <asm/div64.h>
-#include <asm/timex.h>
-#include <asm/io.h>
+#include <linux/mm.h>
/*
* per-CPU ktimeout vector definitions:
@@ -246,9 +232,9 @@ EXPORT_SYMBOL(__ktimeout_mod);
void ktimeout_add_on(struct ktimeout *ktimeout, int cpu)
{
tvec_base_t *base = &per_cpu(tvec_bases, cpu);
- unsigned long flags;
+ unsigned long flags;
- BUG_ON(ktimeout_pending(ktimeout) || !ktimeout->function);
+ BUG_ON(ktimeout_pending(ktimeout) || !ktimeout->function);
spin_lock_irqsave(&base->t_base.lock, flags);
ktimeout->base = &base->t_base;
internal_ktimeout_add(base, ktimeout);
@@ -389,7 +375,7 @@ static int cascade(tvec_base_t *base, tv
head = tv->vec + index;
curr = head->next;
/*
- * We are removing _all_ timeouts from the list, so we don't have to
+ * We are removing _all_ timeouts from the list, so we don't have to
* detach them individually, just clear the list afterwards.
*/
while (curr != head) {
@@ -412,7 +398,8 @@ static int cascade(tvec_base_t *base, tv
* This function cascades all vectors and executes all expired timeout
* vectors.
*/
-#define INDEX(N) (base->ktimeout_jiffies >> (TVR_BITS + N * TVN_BITS)) & TVN_MASK
+#define INDEX(N) \
+ (base->ktimeout_jiffies >> (TVR_BITS + N * TVN_BITS)) & TVN_MASK
static inline void __run_ktimeouts(tvec_base_t *base)
{
@@ -422,8 +409,8 @@ static inline void __run_ktimeouts(tvec_
while (time_after_eq(jiffies, base->ktimeout_jiffies)) {
struct list_head work_list = LIST_HEAD_INIT(work_list);
struct list_head *head = &work_list;
- int index = base->ktimeout_jiffies & TVR_MASK;
-
+ int index = base->ktimeout_jiffies & TVR_MASK;
+
/*
* Cascade timeouts:
*/
@@ -432,15 +419,15 @@ static inline void __run_ktimeouts(tvec_
(!cascade(base, &base->tv3, INDEX(1))) &&
!cascade(base, &base->tv4, INDEX(2)))
cascade(base, &base->tv5, INDEX(3));
- ++base->ktimeout_jiffies;
+ ++base->ktimeout_jiffies;
list_splice_init(base->tv1.vec + index, &work_list);
while (!list_empty(head)) {
void (*fn)(unsigned long);
unsigned long data;
ktimeout = list_entry(head->next,struct ktimeout,entry);
- fn = ktimeout->function;
- data = ktimeout->data;
+ fn = ktimeout->function;
+ data = ktimeout->data;
set_running_ktimeout(base, ktimeout);
detach_ktimeout(ktimeout, 1);
@@ -540,7 +527,7 @@ static void run_ktimeout_softirq(struct
{
tvec_base_t *base = &__get_cpu_var(tvec_bases);
- ktimer_run_queues();
+ ktimer_run_queues();
if (time_after_eq(jiffies, base->ktimeout_jiffies))
__run_ktimeouts(base);
}
@@ -744,7 +731,7 @@ static void __devinit migrate_ktimeouts(
}
#endif /* CONFIG_HOTPLUG_CPU */
-static int __devinit ktimeout_cpu_notify(struct notifier_block *self,
+static int __devinit ktimeout_cpu_notify(struct notifier_block *self,
unsigned long action, void *hcpu)
{
long cpu = (long)hcpu;
--
* Re: [patch 01/43] Move div_long_long_rem out of jiffies.h
2005-12-01 0:00 ` [patch 01/43] Move div_long_long_rem out of jiffies.h Thomas Gleixner
@ 2005-12-01 2:06 ` Adrian Bunk
2005-12-01 11:38 ` Christoph Hellwig
1 sibling, 0 replies; 47+ messages in thread
From: Adrian Bunk @ 2005-12-01 2:06 UTC (permalink / raw)
To: Thomas Gleixner; +Cc: linux-kernel, akpm, mingo, zippel, george, johnstul
On Thu, Dec 01, 2005 at 01:00:54AM +0100, Thomas Gleixner wrote:
> plain text document attachment
> (move-div-long-long-rem-out-of-jiffiesh.patch)
>
> - move div_long_long_rem() from jiffies.h into a new calc64.h include file,
> as it is a general math function useful for other things than the jiffy
> code.
>...
- add a static inline div_long_long_rem_signed() function
This isn't against your patch, but this part of the change wasn't
documented.
And while you are at it, is there a reason against making
div_long_long_rem() a static inline?
cu
Adrian
--
"Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
"Only a promise," Lao Er said.
Pearl S. Buck - Dragon Seed
* Re: [patch 25/43] Create ktimeout.h and move timer.h code into it
2005-12-01 0:03 ` [patch 25/43] Create ktimeout.h and move timer.h code into it Thomas Gleixner
@ 2005-12-01 2:36 ` Adrian Bunk
2005-12-01 2:51 ` Ingo Molnar
0 siblings, 1 reply; 47+ messages in thread
From: Adrian Bunk @ 2005-12-01 2:36 UTC (permalink / raw)
To: Thomas Gleixner; +Cc: linux-kernel, akpm, mingo, zippel, george, johnstul
On Thu, Dec 01, 2005 at 01:03:48AM +0100, Thomas Gleixner wrote:
> plain text document attachment (ktimeout-h.patch)
> - introduce ktimeout.h and move the timeout implementation into it, as-is.
> - keep timer.h for compatibility
>...
If you do this, you should either immediately remove timer.h or add a
#warning to this file.
Both cases imply changing all in-kernel users (which is anyway a good
idea if we really want to rename this header).
cu
Adrian
* Re: [patch 25/43] Create ktimeout.h and move timer.h code into it
2005-12-01 2:36 ` Adrian Bunk
@ 2005-12-01 2:51 ` Ingo Molnar
0 siblings, 0 replies; 47+ messages in thread
From: Ingo Molnar @ 2005-12-01 2:51 UTC (permalink / raw)
To: Adrian Bunk; +Cc: Thomas Gleixner, linux-kernel, akpm, zippel, george, johnstul
* Adrian Bunk <bunk@stusta.de> wrote:
> On Thu, Dec 01, 2005 at 01:03:48AM +0100, Thomas Gleixner wrote:
> > plain text document attachment (ktimeout-h.patch)
> > - introduce ktimeout.h and move the timeout implementation into it, as-is.
> > - keep timer.h for compatibility
> >...
>
> If you do this, you should either immediately remove timer.h or add a
> #warning to this file.
>
> Both cases imply changing all in-kernel users (which is anyway a good
> idea if we really want to rename this header).
agreed, but we didn't want to be this drastic - we just wanted to
demonstrate that a smooth transition (short of an overnight changeover)
is possible as well.
also, we are very interested in suggestions to further improve the
ktimeout APIs. The perfect time is when there are no direct users of it
yet.
e.g. there's an interesting thought that Roman demonstrated in his
ptimer queue: the elimination of the .data field from struct ktimer. An
analogous thing could be done for timeouts as well: we do not actually
need a .data field in a fair number of cases - the position of any
data-context information can be recovered via container_of():
void timer_fn(struct ktimeout *kt)
{
struct my_data *ptr = container_of(kt, struct my_data, timer);
...
}
for compatibility we could provide a "struct ktimeout_standalone" that
embeds a .data field and a struct ktimeout - which would be equivalent
to the current "struct ktimeout".
the advantage would be data-structure size reduction of one word per
embedded ktimeout structure. We'd also have one less word per standalone
timer that needs no data field. For standalone timeouts which do need a
data field there would be no impact.
one downside is that it's not as straightforward to code as the current
.data field.
Ingo
* Re: [patch 01/43] Move div_long_long_rem out of jiffies.h
2005-12-01 0:00 ` [patch 01/43] Move div_long_long_rem out of jiffies.h Thomas Gleixner
2005-12-01 2:06 ` Adrian Bunk
@ 2005-12-01 11:38 ` Christoph Hellwig
1 sibling, 0 replies; 47+ messages in thread
From: Christoph Hellwig @ 2005-12-01 11:38 UTC (permalink / raw)
To: Thomas Gleixner; +Cc: linux-kernel, akpm, mingo, zippel, george, johnstul
On Thu, Dec 01, 2005 at 01:00:54AM +0100, Thomas Gleixner wrote:
> plain text document attachment
> (move-div-long-long-rem-out-of-jiffiesh.patch)
>
> - move div_long_long_rem() from jiffies.h into a new calc64.h include file,
> as it is a general math function useful for other things than the jiffy
> code.
please just kill div_long_long_rem()
end of thread, other threads:[~2005-12-01 11:38 UTC | newest]
Thread overview: 47+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
[not found] <20051130231140.164337000@tglx.tec.linutronix.de>
2005-12-01 0:00 ` [patch 01/43] Move div_long_long_rem out of jiffies.h Thomas Gleixner
2005-12-01 2:06 ` Adrian Bunk
2005-12-01 11:38 ` Christoph Hellwig
2005-12-01 0:02 ` [patch 02/43] Remove duplicate div_long_long_rem implementation Thomas Gleixner
2005-12-01 0:02 ` [patch 03/43] Deinline mktime and set_normalized_timespec Thomas Gleixner
2005-12-01 0:02 ` [patch 04/43] Clean up mktime and add const modifiers Thomas Gleixner
2005-12-01 0:02 ` [patch 05/43] Export deinlined mktime Thomas Gleixner
2005-12-01 0:02 ` [patch 06/43] Remove unused clock constants Thomas Gleixner
2005-12-01 0:02 ` [patch 07/43] Cleanup clock constants coding style Thomas Gleixner
2005-12-01 0:03 ` [patch 08/43] Coding style and whitespace cleanup time.h Thomas Gleixner
2005-12-01 0:03 ` [patch 09/43] Make clock selectors in posix-timers const Thomas Gleixner
2005-12-01 0:03 ` [patch 10/43] Coding style and white space cleanup posix-timer.h Thomas Gleixner
2005-12-01 0:03 ` [patch 11/43] Create timespec_valid macro Thomas Gleixner
2005-12-01 0:03 ` [patch 12/43] Check user space timespec in do_sys_settimeofday Thomas Gleixner
2005-12-01 0:03 ` [patch 13/43] Introduce nsec_t type and conversion functions Thomas Gleixner
2005-12-01 0:03 ` [patch 14/43] Introduce ktime_t time format Thomas Gleixner
2005-12-01 0:03 ` [patch 15/43] ktimer core code Thomas Gleixner
2005-12-01 0:03 ` [patch 16/43] ktimer documentation Thomas Gleixner
2005-12-01 0:03 ` [patch 17/43] Switch itimers to ktimer Thomas Gleixner
2005-12-01 0:03 ` [patch 18/43] Remove now unnecessary includes Thomas Gleixner
2005-12-01 0:03 ` [patch 19/43] Introduce ktimer_nanosleep APIs Thomas Gleixner
2005-12-01 0:03 ` [patch 20/43] Convert sys_nanosleep to ktimer_nanosleep Thomas Gleixner
2005-12-01 0:03 ` [patch 21/43] Switch clock_nanosleep to ktimer nanosleep API Thomas Gleixner
2005-12-01 0:03 ` [patch 22/43] Convert posix interval timers to use ktimers Thomas Gleixner
2005-12-01 0:03 ` [patch 23/43] Simplify ktimers rearm code Thomas Gleixner
2005-12-01 0:03 ` [patch 24/43] Split timeout code into kernel/ktimeout.c Thomas Gleixner
2005-12-01 0:03 ` [patch 25/43] Create ktimeout.h and move timer.h code into it Thomas Gleixner
2005-12-01 2:36 ` Adrian Bunk
2005-12-01 2:51 ` Ingo Molnar
2005-12-01 0:03 ` [patch 26/43] Rename struct timer_list to struct ktimeout Thomas Gleixner
2005-12-01 0:03 ` [patch 27/43] Convert timer_list users to ktimeout Thomas Gleixner
2005-12-01 0:03 ` [patch 28/43] Convert ktimeout.h and create wrappers Thomas Gleixner
2005-12-01 0:03 ` [patch 29/43] Convert ktimeout.c to ktimeout struct and APIs Thomas Gleixner
2005-12-01 0:04 ` [patch 30/43] ktimeout documentation Thomas Gleixner
2005-12-01 0:04 ` [patch 31/43] rename init_ktimeout() to ktimeout_init() Thomas Gleixner
2005-12-01 0:04 ` [patch 32/43] rename setup_ktimeout() to ktimeout_setup() Thomas Gleixner
2005-12-01 0:04 ` [patch 33/43] rename add_ktimeout_on() to ktimeout_add_on() Thomas Gleixner
2005-12-01 0:04 ` [patch 34/43] rename del_ktimeout() to ktimeout_del() Thomas Gleixner
2005-12-01 0:04 ` [patch 35/43] rename __mod_ktimeout() to __ktimeout_mod() Thomas Gleixner
2005-12-01 0:04 ` [patch 36/43] rename mod_ktimeout() to ktimeout_mod() Thomas Gleixner
2005-12-01 0:04 ` [patch 37/43] rename next_ktimeout_interrupt() to ktimeout_next_interrupt() Thomas Gleixner
2005-12-01 0:04 ` [patch 38/43] rename add_ktimeout() to ktimeout_add() Thomas Gleixner
2005-12-01 0:04 ` [patch 39/43] rename try_to_del_ktimeout_sync() to ktimeout_try_to_del_sync() Thomas Gleixner
2005-12-01 0:04 ` [patch 40/43] rename del_ktimeout_sync() to ktimeout_del_sync() Thomas Gleixner
2005-12-01 0:04 ` [patch 41/43] rename del_singleshot_ktimeout_sync() to ktimeout_del_singleshot_sync() Thomas Gleixner
2005-12-01 0:04 ` [patch 42/43] rename TIMER_SOFTIRQ to TIMEOUT_SOFTIRQ Thomas Gleixner
2005-12-01 0:04 ` [patch 43/43] ktimeout code style cleanups Thomas Gleixner