* [pchecks v2 1/2] Subject; percpu: Add raw_cpu_ops [not found] <20130924154159.855373283@linux.com> @ 2013-09-24 15:41 ` Christoph Lameter 2013-09-24 15:41 ` [pchecks v2 2/2] percpu: Add preemption checks to __this_cpu ops Christoph Lameter 1 sibling, 0 replies; 21+ messages in thread From: Christoph Lameter @ 2013-09-24 15:41 UTC (permalink / raw) To: Tejun Heo Cc: akpm, linux-arch, Steven Rostedt, linux-kernel, Ingo Molnar, Peter Zijlstra The following patches will add preemption checks to __this_cpu ops so we need to have an alternative way to use these cpu operations without preemption checks. raw_cpu ops rely on the __this_cpu_xxx_1/2/4/8 operations having no preemption checks. We therefore do not have to duplicate these functions but can afford to only duplicate the higher level macros. Also add raw_cpu_ptr for symmetry's sake. Signed-off-by: Christoph Lameter <cl@linux.com> Index: linux/include/linux/percpu.h =================================================================== --- linux.orig/include/linux/percpu.h 2013-09-23 10:20:05.050535801 -0500 +++ linux/include/linux/percpu.h 2013-09-23 10:20:05.050535801 -0500 @@ -139,6 +139,9 @@ extern int __init pcpu_page_first_chunk( pcpu_fc_populate_pte_fn_t populate_pte_fn); #endif +/* For symmetry's sake. */ +#define raw_cpu_ptr __this_cpu_ptr + /* * Use this to get to a cpu's version of the per-cpu object * dynamically allocated. Non-atomic access to the current CPU's @@ -520,17 +523,7 @@ do { \ /* * Generic percpu operations for context that are safe from preemption/interrupts. - * Either we do not care about races or the caller has the - * responsibility of handling preemption/interrupt issues. Arch code can still - * override these instructions since the arch per cpu code may be more - * efficient and may actually get race freeness for free (that is the - * case for x86 for example). 
- * - * If there is no other protection through preempt disable and/or - * disabling interupts then one of these RMW operations can show unexpected - * behavior because the execution thread was rescheduled on another processor - * or an interrupt occurred and the same percpu variable was modified from - * the interrupt context. + * These functions verify that preemption has been disabled. */ #ifndef __this_cpu_read # ifndef __this_cpu_read_1 @@ -755,4 +748,74 @@ do { \ __pcpu_double_call_return_bool(__this_cpu_cmpxchg_double_, (pcp1), (pcp2), (oval1), (oval2), (nval1), (nval2)) #endif +/* + * Generic percpu operations for context where we do not want to do + * any checks for preemption. + * + * If there is no other protection through preempt disable and/or + * disabling interrupts then one of these RMW operations can show unexpected + * behavior because the execution thread was rescheduled on another processor + * or an interrupt occurred and the same percpu variable was modified from + * the interrupt context. 
+ */ +#ifndef raw_cpu_read +# define raw_cpu_read(pcp) __pcpu_size_call_return(__this_cpu_read_, (pcp)) +#endif + +#ifndef raw_cpu_write +# define raw_cpu_write(pcp, val) __pcpu_size_call(__this_cpu_write_, (pcp), (val)) +#endif + +#ifndef raw_cpu_add +# define raw_cpu_add(pcp, val) __pcpu_size_call(__this_cpu_add_, (pcp), (val)) +#endif + +#ifndef raw_cpu_sub +# define raw_cpu_sub(pcp, val) raw_cpu_add((pcp), -(val)) +#endif + +#ifndef raw_cpu_inc +# define raw_cpu_inc(pcp) raw_cpu_add((pcp), 1) +#endif + +#ifndef raw_cpu_dec +# define raw_cpu_dec(pcp) raw_cpu_sub((pcp), 1) +#endif + +#ifndef raw_cpu_and +# define raw_cpu_and(pcp, val) __pcpu_size_call(__this_cpu_and_, (pcp), (val)) +#endif + +#ifndef raw_cpu_or +# define raw_cpu_or(pcp, val) __pcpu_size_call(__this_cpu_or_, (pcp), (val)) +#endif + +#ifndef raw_cpu_xor +# define raw_cpu_xor(pcp, val) __pcpu_size_call(__this_cpu_xor_, (pcp), (val)) +#endif + +#ifndef raw_cpu_add_return +# define raw_cpu_add_return(pcp, val) \ + __pcpu_size_call_return2(__this_cpu_add_return_, pcp, val) +#endif + +#define raw_cpu_sub_return(pcp, val) raw_cpu_add_return(pcp, -(val)) +#define raw_cpu_inc_return(pcp) raw_cpu_add_return(pcp, 1) +#define raw_cpu_dec_return(pcp) raw_cpu_add_return(pcp, -1) + +#ifndef raw_cpu_xchg +# define raw_cpu_xchg(pcp, nval) \ + __pcpu_size_call_return2(__this_cpu_xchg_, (pcp), nval) +#endif + +#ifndef raw_cpu_cmpxchg +# define raw_cpu_cmpxchg(pcp, oval, nval) \ + __pcpu_size_call_return2(__this_cpu_cmpxchg_, pcp, oval, nval) +#endif + +#ifndef raw_cpu_cmpxchg_double +# define raw_cpu_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2) \ + __pcpu_double_call_return_bool(__this_cpu_cmpxchg_double_, (pcp1), (pcp2), (oval1), (oval2), (nval1), (nval2)) +#endif + #endif /* __LINUX_PERCPU_H */ ^ permalink raw reply [flat|nested] 21+ messages in thread
* [pchecks v2 2/2] percpu: Add preemption checks to __this_cpu ops [not found] <20130924154159.855373283@linux.com> 2013-09-24 15:41 ` [pchecks v2 1/2] Subject; percpu: Add raw_cpu_ops Christoph Lameter @ 2013-09-24 15:41 ` Christoph Lameter 2013-09-24 15:41 ` Christoph Lameter 2013-09-24 17:10 ` Ingo Molnar 1 sibling, 2 replies; 21+ messages in thread From: Christoph Lameter @ 2013-09-24 15:41 UTC (permalink / raw) To: Tejun Heo Cc: akpm, linux-arch, Steven Rostedt, linux-kernel, Ingo Molnar, Peter Zijlstra We define a check function in order to avoid trouble with the include files. Then the higher level __this_cpu macros are modified to invoke the check before __this_cpu operation Signed-off-by: Christoph Lameter <cl@linux.com> Index: linux/include/linux/percpu.h =================================================================== --- linux.orig/include/linux/percpu.h 2013-09-24 10:07:06.529064593 -0500 +++ linux/include/linux/percpu.h 2013-09-24 10:07:06.525064632 -0500 @@ -175,6 +175,12 @@ extern phys_addr_t per_cpu_ptr_to_phys(v extern void __bad_size_call_parameter(void); +#ifdef CONFIG_DEBUG_PREEMPT +extern void __this_cpu_preempt_check(void); +#else +static inline void __this_cpu_preempt_check(void) { } +#endif + #define __pcpu_size_call_return(stem, variable) \ ({ typeof(variable) pscr_ret__; \ __verify_pcpu_ptr(&(variable)); \ @@ -538,7 +544,8 @@ do { \ # ifndef __this_cpu_read_8 # define __this_cpu_read_8(pcp) (*__this_cpu_ptr(&(pcp))) # endif -# define __this_cpu_read(pcp) __pcpu_size_call_return(__this_cpu_read_, (pcp)) +# define __this_cpu_read(pcp) \ + (__this_cpu_preempt_check(),__pcpu_size_call_return(__this_cpu_read_, (pcp))) #endif #define __this_cpu_generic_to_op(pcp, val, op) \ @@ -559,7 +566,12 @@ do { \ # ifndef __this_cpu_write_8 # define __this_cpu_write_8(pcp, val) __this_cpu_generic_to_op((pcp), (val), =) # endif -# define __this_cpu_write(pcp, val) __pcpu_size_call(__this_cpu_write_, (pcp), (val)) + +# define __this_cpu_write(pcp, val) \ 
+do { __this_cpu_preempt_check(); \ + __pcpu_size_call(__this_cpu_write_, (pcp), (val)); \ +} while (0) + #endif #ifndef __this_cpu_add @@ -575,7 +587,12 @@ do { \ # ifndef __this_cpu_add_8 # define __this_cpu_add_8(pcp, val) __this_cpu_generic_to_op((pcp), (val), +=) # endif -# define __this_cpu_add(pcp, val) __pcpu_size_call(__this_cpu_add_, (pcp), (val)) + +# define __this_cpu_add(pcp, val) \ +do { __this_cpu_preempt_check(); \ + __pcpu_size_call(__this_cpu_add_, (pcp), (val)); \ +} while (0) + #endif #ifndef __this_cpu_sub @@ -603,7 +620,12 @@ do { \ # ifndef __this_cpu_and_8 # define __this_cpu_and_8(pcp, val) __this_cpu_generic_to_op((pcp), (val), &=) # endif -# define __this_cpu_and(pcp, val) __pcpu_size_call(__this_cpu_and_, (pcp), (val)) + +# define __this_cpu_and(pcp, val) \ +do { __this_cpu_preempt_check(); \ + __pcpu_size_call(__this_cpu_and_, (pcp), (val)); \ +} while (0) + #endif #ifndef __this_cpu_or @@ -619,7 +641,12 @@ do { \ # ifndef __this_cpu_or_8 # define __this_cpu_or_8(pcp, val) __this_cpu_generic_to_op((pcp), (val), |=) # endif -# define __this_cpu_or(pcp, val) __pcpu_size_call(__this_cpu_or_, (pcp), (val)) + +# define __this_cpu_or(pcp, val) \ +do { __this_cpu_preempt_check(); \ + __pcpu_size_call(__this_cpu_or_, (pcp), (val)); \ +} while (0) + #endif #ifndef __this_cpu_xor @@ -635,7 +662,12 @@ do { \ # ifndef __this_cpu_xor_8 # define __this_cpu_xor_8(pcp, val) __this_cpu_generic_to_op((pcp), (val), ^=) # endif -# define __this_cpu_xor(pcp, val) __pcpu_size_call(__this_cpu_xor_, (pcp), (val)) + +# define __this_cpu_xor(pcp, val) \ +do { __this_cpu_preempt_check(); \ + __pcpu_size_call(__this_cpu_xor_, (pcp), (val)); \ +} while (0) + #endif #define __this_cpu_generic_add_return(pcp, val) \ @@ -658,7 +690,7 @@ do { \ # define __this_cpu_add_return_8(pcp, val) __this_cpu_generic_add_return(pcp, val) # endif # define __this_cpu_add_return(pcp, val) \ - __pcpu_size_call_return2(__this_cpu_add_return_, pcp, val) + 
(__this_cpu_preempt_check(),__pcpu_size_call_return2(__this_cpu_add_return_, pcp, val)) #endif #define __this_cpu_sub_return(pcp, val) __this_cpu_add_return(pcp, -(val)) @@ -686,7 +718,7 @@ do { \ # define __this_cpu_xchg_8(pcp, nval) __this_cpu_generic_xchg(pcp, nval) # endif # define __this_cpu_xchg(pcp, nval) \ - __pcpu_size_call_return2(__this_cpu_xchg_, (pcp), nval) + (__this_cpu_preempt_check(),__pcpu_size_call_return2(__this_cpu_xchg_, (pcp), nval)) #endif #define __this_cpu_generic_cmpxchg(pcp, oval, nval) \ @@ -712,7 +744,7 @@ do { \ # define __this_cpu_cmpxchg_8(pcp, oval, nval) __this_cpu_generic_cmpxchg(pcp, oval, nval) # endif # define __this_cpu_cmpxchg(pcp, oval, nval) \ - __pcpu_size_call_return2(__this_cpu_cmpxchg_, pcp, oval, nval) + (__this_cpu_preempt_check(),__pcpu_size_call_return2(__this_cpu_cmpxchg_, pcp, oval, nval)) #endif #define __this_cpu_generic_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2) \ @@ -745,7 +777,7 @@ do { \ __this_cpu_generic_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2) # endif # define __this_cpu_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2) \ - __pcpu_double_call_return_bool(__this_cpu_cmpxchg_double_, (pcp1), (pcp2), (oval1), (oval2), (nval1), (nval2)) + (__this_cpu_preempt_check(),__pcpu_double_call_return_bool(__this_cpu_cmpxchg_double_, (pcp1), (pcp2), (oval1), (oval2), (nval1), (nval2))) #endif /* Index: linux/lib/smp_processor_id.c =================================================================== --- linux.orig/lib/smp_processor_id.c 2013-09-24 10:06:26.000000000 -0500 +++ linux/lib/smp_processor_id.c 2013-09-24 10:14:19.412825970 -0500 @@ -7,7 +7,7 @@ #include <linux/kallsyms.h> #include <linux/sched.h> -notrace unsigned int debug_smp_processor_id(void) +notrace static unsigned int check_preemption(char *what) { unsigned long preempt_count = preempt_count(); int this_cpu = raw_smp_processor_id(); @@ -39,8 +39,8 @@ notrace unsigned int debug_smp_processor if (!printk_ratelimit()) goto 
out_enable; - printk(KERN_ERR "BUG: using smp_processor_id() in preemptible [%08x] " - "code: %s/%d\n", + printk(KERN_ERR "BUG: using %s in preemptible [%08x] " + "code: %s/%d\n", what, preempt_count() - 1, current->comm, current->pid); print_symbol("caller is %s\n", (long)__builtin_return_address(0)); dump_stack(); @@ -51,5 +51,18 @@ out: return this_cpu; } +notrace unsigned int debug_smp_processor_id(void) +{ + return check_preemption("smp_processor_id()"); + +} EXPORT_SYMBOL(debug_smp_processor_id); +notrace void __this_cpu_preempt_check(void) +{ +#ifdef CONFIG_THIS_CPU_PREEMPTION_CHECK + check_preemption("__this_cpu operation"); +#endif +} +EXPORT_SYMBOL(__this_cpu_preempt_check); + Index: linux/lib/Kconfig.debug =================================================================== --- linux.orig/lib/Kconfig.debug 2013-09-13 10:42:57.032240911 -0500 +++ linux/lib/Kconfig.debug 2013-09-24 10:19:29.969785268 -0500 @@ -797,6 +797,18 @@ config DEBUG_PREEMPT if kernel code uses it in a preemption-unsafe way. Also, the kernel will detect preemption count underflows. +config DEBUG_THIS_CPU_OPERATIONS + bool "Debug __this_cpu operations" + depends on DEBUG_PREEMPT + default n + help + Enables preemption checks for __this_cpu macros. These + are macros to generate single instruction operations on + per cpu data. The option only affects the __this_cpu variant + which is used when fallback to multiple instructions without + other synchronization is possible. A verification is then + done to make sure that the thread cannot be preempted. + menu "Lock Debugging (spinlocks, mutexes, etc...)" config DEBUG_RT_MUTEXES ^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [pchecks v2 2/2] percpu: Add preemption checks to __this_cpu ops 2013-09-24 15:41 ` [pchecks v2 2/2] percpu: Add preemption checks to __this_cpu ops Christoph Lameter 2013-09-24 15:41 ` Christoph Lameter @ 2013-09-24 17:10 ` Ingo Molnar 2013-09-25 16:40 ` Christoph Lameter 1 sibling, 1 reply; 21+ messages in thread From: Ingo Molnar @ 2013-09-24 17:10 UTC (permalink / raw) To: Christoph Lameter Cc: Tejun Heo, akpm, linux-arch, Steven Rostedt, linux-kernel, Peter Zijlstra * Christoph Lameter <cl@linux.com> wrote: > Index: linux/lib/smp_processor_id.c > =================================================================== > --- linux.orig/lib/smp_processor_id.c 2013-09-24 10:06:26.000000000 -0500 > +++ linux/lib/smp_processor_id.c 2013-09-24 10:14:19.412825970 -0500 > @@ -7,7 +7,7 @@ > #include <linux/kallsyms.h> > #include <linux/sched.h> > > -notrace unsigned int debug_smp_processor_id(void) > +notrace static unsigned int check_preemption(char *what) > { > unsigned long preempt_count = preempt_count(); > int this_cpu = raw_smp_processor_id(); > @@ -39,8 +39,8 @@ notrace unsigned int debug_smp_processor > if (!printk_ratelimit()) > goto out_enable; > > - printk(KERN_ERR "BUG: using smp_processor_id() in preemptible [%08x] " > - "code: %s/%d\n", > + printk(KERN_ERR "BUG: using %s in preemptible [%08x] " > + "code: %s/%d\n", what, > preempt_count() - 1, current->comm, current->pid); > print_symbol("caller is %s\n", (long)__builtin_return_address(0)); > dump_stack(); > @@ -51,5 +51,18 @@ out: > return this_cpu; > } > > +notrace unsigned int debug_smp_processor_id(void) > +{ > + return check_preemption("smp_processor_id()"); > + > +} > EXPORT_SYMBOL(debug_smp_processor_id); > > +notrace void __this_cpu_preempt_check(void) > +{ > +#ifdef CONFIG_THIS_CPU_PREEMPTION_CHECK > + check_preemption("__this_cpu operation"); > +#endif > +} > +EXPORT_SYMBOL(__this_cpu_preempt_check); Ok, this variant looks pretty good, modulo testing. 
I'd do: s/check_preemption/check_preemption_disabled but that's a minor detail. > +config DEBUG_THIS_CPU_OPERATIONS > + bool "Debug __this_cpu operations" > + depends on DEBUG_PREEMPT > + default n > + help > + Enables preemption checks for __this_cpu macros. These > + are macros to generate single instruction operations on > + per cpu data. The option only affects the __this_cpu variant > + which is used when fallback to multiple instructions without > + other synchronization is possible. A verification is then > + done to make sure that the thread cannot be preempted. I don't think there's a need to make this a separate preemption debug option: smp_processor_id() is really just a subset of the checks. So please just put it under CONFIG_DEBUG_PREEMPT, there's no need to add yet another x2 to the kernel Kconfig space. Maybe extend the DEBUG_PREEMPT help text to explain that it also checks __this_cpu ops. That will remove the #ifdef from lib/smp_processor_id.c and will simplify debugging a bit as well. Thanks, Ingo ^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [pchecks v2 2/2] percpu: Add preemption checks to __this_cpu ops 2013-09-24 17:10 ` Ingo Molnar @ 2013-09-25 16:40 ` Christoph Lameter 2013-09-25 16:40 ` Christoph Lameter 2013-09-25 18:11 ` Ingo Molnar 0 siblings, 2 replies; 21+ messages in thread From: Christoph Lameter @ 2013-09-25 16:40 UTC (permalink / raw) To: Ingo Molnar Cc: Tejun Heo, akpm, linux-arch, Steven Rostedt, linux-kernel, Peter Zijlstra On Tue, 24 Sep 2013, Ingo Molnar wrote: > Ok, this variant looks pretty good, modulo testing. I'd do: > > s/check_preemption/check_preemption_disabled > > but that's a minor detail. Sure. > > +config DEBUG_THIS_CPU_OPERATIONS > > + bool "Debug __this_cpu operations" > > + depends on DEBUG_PREEMPT > > + default n > > + help > > + Enables preemption checks for __this_cpu macros. These > > + are macros to generate single instruction operations on > > + per cpu data. The option only affects the __this_cpu variant > > + which is used when fallback to multiple instructions without > > + other synchronization is possible. A verification is then > > + done to make sure that the thread cannot be preempted. > > I don't think there's a need to make this a separate preemption debug > option: smp_processor_id() is really just a subset of the checks. I think this is necessary since it seems that the discussions on how to do the raw_cpu conversions may take some time. If we enable it by default then there will be numerous new log messages. That way a developer can enable it for working on this. ^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [pchecks v2 2/2] percpu: Add preemption checks to __this_cpu ops 2013-09-25 16:40 ` Christoph Lameter 2013-09-25 16:40 ` Christoph Lameter @ 2013-09-25 18:11 ` Ingo Molnar 2013-09-27 13:54 ` Christoph Lameter 1 sibling, 1 reply; 21+ messages in thread From: Ingo Molnar @ 2013-09-25 18:11 UTC (permalink / raw) To: Christoph Lameter Cc: Tejun Heo, akpm, linux-arch, Steven Rostedt, linux-kernel, Peter Zijlstra, Thomas Gleixner, Peter Zijlstra * Christoph Lameter <cl@linux.com> wrote: > On Tue, 24 Sep 2013, Ingo Molnar wrote: > > > Ok, this variant looks pretty good, modulo testing. I'd do: > > > > s/check_preemption/check_preemption_disabled > > > > but that's a minor detail. > > Sure. > > > > +config DEBUG_THIS_CPU_OPERATIONS > > > + bool "Debug __this_cpu operations" > > > + depends on DEBUG_PREEMPT > > > + default n > > > + help > > > + Enables preemption checks for __this_cpu macros. These > > > + are macros to generate single instruction operations on > > > + per cpu data. The option only affects the __this_cpu variant > > > + which is used when fallback to multiple instructions without > > > + other synchronization is possible. A verification is then > > > + done to make sure that the thread cannot be preempted. > > > > I don't think there's a need to make this a separate preemption debug > > option: smp_processor_id() is really just a subset of the checks. > > I think this is necessary since it seems that the discussions on how to > do the raw_cpu conversions may take some time. If we enable it by > default then there will be numerous new log messages. That way a > developer can enable it for working on this. Note that for these patches to be eligible for upstream merge any extra warnings that trigger must be fixed, regardless of the default setting. The blind __this_cpu conversions without proper preempt debugging cannot continue without first fixing all the fallout of the missing debug checks to begin with. 
Thanks, Ingo ^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [pchecks v2 2/2] percpu: Add preemption checks to __this_cpu ops 2013-09-25 18:11 ` Ingo Molnar @ 2013-09-27 13:54 ` Christoph Lameter 2013-09-28 8:39 ` Ingo Molnar 2013-09-28 8:44 ` Ingo Molnar 0 siblings, 2 replies; 21+ messages in thread From: Christoph Lameter @ 2013-09-27 13:54 UTC (permalink / raw) To: Ingo Molnar Cc: Tejun Heo, akpm, linux-arch, Steven Rostedt, linux-kernel, Peter Zijlstra, Thomas Gleixner, Peter Zijlstra On Wed, 25 Sep 2013, Ingo Molnar wrote: > > > > I think this is necessary since it seems that the discussions on how to > > do the raw_cpu conversions may take some time. If we enable it by > > default then there will be numerous new log messages. That way a > > developer can enable it for working on this. > > Note that for these patches to be eligible for upstream merge any extra > warnings that trigger must be fixed, regardless of the default setting. That is exactly what this patch does. There will only be warning if the user enabled them. > The blind __this_cpu conversions without proper preempt debugging cannot > continue without first fixing all the fallout of the missing debug checks > to begin with. That will take some time as the feedback from the other patchset suggests. If we merge this patchset then the necessary infrastructure for that work will be present. Here is a revised version of the second patch: Subject: percpu: Add preemption checks to __this_cpu ops We define a check function in order to avoid trouble with the include files. 
Then the higher level __this_cpu macros are modified to invoke the check
before the __this_cpu operation.

Signed-off-by: Christoph Lameter <cl@linux.com>

Index: linux/include/linux/percpu.h
===================================================================
--- linux.orig/include/linux/percpu.h	2013-09-25 11:49:49.542184644 -0500
+++ linux/include/linux/percpu.h	2013-09-25 11:49:49.498185073 -0500
@@ -175,6 +175,12 @@ extern phys_addr_t per_cpu_ptr_to_phys(v
 
 extern void __bad_size_call_parameter(void);
 
+#ifdef CONFIG_DEBUG_PREEMPT
+extern void __this_cpu_preempt_check(void);
+#else
+static inline void __this_cpu_preempt_check(void) { }
+#endif
+
 #define __pcpu_size_call_return(stem, variable)			\
 ({	typeof(variable) pscr_ret__;				\
 	__verify_pcpu_ptr(&(variable));				\
@@ -538,7 +544,8 @@ do {								\
 # ifndef __this_cpu_read_8
 #  define __this_cpu_read_8(pcp)	(*__this_cpu_ptr(&(pcp)))
 # endif
-# define __this_cpu_read(pcp)	__pcpu_size_call_return(__this_cpu_read_, (pcp))
+# define __this_cpu_read(pcp) \
+	(__this_cpu_preempt_check(),__pcpu_size_call_return(__this_cpu_read_, (pcp)))
 #endif
 
 #define __this_cpu_generic_to_op(pcp, val, op)			\
@@ -559,7 +566,12 @@ do {								\
 # ifndef __this_cpu_write_8
 #  define __this_cpu_write_8(pcp, val)	__this_cpu_generic_to_op((pcp), (val), =)
 # endif
-# define __this_cpu_write(pcp, val)	__pcpu_size_call(__this_cpu_write_, (pcp), (val))
+
+# define __this_cpu_write(pcp, val) \
+do { __this_cpu_preempt_check(); \
+	__pcpu_size_call(__this_cpu_write_, (pcp), (val)); \
+} while (0)
+
 #endif
 
 #ifndef __this_cpu_add
@@ -575,7 +587,12 @@ do {								\
 # ifndef __this_cpu_add_8
 #  define __this_cpu_add_8(pcp, val)	__this_cpu_generic_to_op((pcp), (val), +=)
 # endif
-# define __this_cpu_add(pcp, val)	__pcpu_size_call(__this_cpu_add_, (pcp), (val))
+
+# define __this_cpu_add(pcp, val) \
+do { __this_cpu_preempt_check(); \
+	__pcpu_size_call(__this_cpu_add_, (pcp), (val)); \
+} while (0)
+
 #endif
 
 #ifndef __this_cpu_sub
@@ -603,7 +620,12 @@ do {								\
 # ifndef __this_cpu_and_8
 #  define __this_cpu_and_8(pcp, val)	__this_cpu_generic_to_op((pcp), (val), &=)
 # endif
-# define __this_cpu_and(pcp, val)	__pcpu_size_call(__this_cpu_and_, (pcp), (val))
+
+# define __this_cpu_and(pcp, val) \
+do { __this_cpu_preempt_check(); \
+	__pcpu_size_call(__this_cpu_and_, (pcp), (val)); \
+} while (0)
+
 #endif
 
 #ifndef __this_cpu_or
@@ -619,7 +641,12 @@ do {								\
 # ifndef __this_cpu_or_8
 #  define __this_cpu_or_8(pcp, val)	__this_cpu_generic_to_op((pcp), (val), |=)
 # endif
-# define __this_cpu_or(pcp, val)	__pcpu_size_call(__this_cpu_or_, (pcp), (val))
+
+# define __this_cpu_or(pcp, val) \
+do { __this_cpu_preempt_check(); \
+	__pcpu_size_call(__this_cpu_or_, (pcp), (val)); \
+} while (0)
+
 #endif
 
 #ifndef __this_cpu_xor
@@ -635,7 +662,12 @@ do {								\
 # ifndef __this_cpu_xor_8
 #  define __this_cpu_xor_8(pcp, val)	__this_cpu_generic_to_op((pcp), (val), ^=)
 # endif
-# define __this_cpu_xor(pcp, val)	__pcpu_size_call(__this_cpu_xor_, (pcp), (val))
+
+# define __this_cpu_xor(pcp, val) \
+do { __this_cpu_preempt_check(); \
+	__pcpu_size_call(__this_cpu_xor_, (pcp), (val)); \
+} while (0)
+
 #endif
 
 #define __this_cpu_generic_add_return(pcp, val)			\
@@ -658,7 +690,7 @@ do {								\
 #  define __this_cpu_add_return_8(pcp, val)	__this_cpu_generic_add_return(pcp, val)
 # endif
 # define __this_cpu_add_return(pcp, val)	\
-	__pcpu_size_call_return2(__this_cpu_add_return_, pcp, val)
+	(__this_cpu_preempt_check(),__pcpu_size_call_return2(__this_cpu_add_return_, pcp, val))
 #endif
 
 #define __this_cpu_sub_return(pcp, val)	__this_cpu_add_return(pcp, -(val))
@@ -686,7 +718,7 @@ do {								\
 #  define __this_cpu_xchg_8(pcp, nval)	__this_cpu_generic_xchg(pcp, nval)
 # endif
 # define __this_cpu_xchg(pcp, nval)	\
-	__pcpu_size_call_return2(__this_cpu_xchg_, (pcp), nval)
+	(__this_cpu_preempt_check(),__pcpu_size_call_return2(__this_cpu_xchg_, (pcp), nval))
 #endif
 
 #define __this_cpu_generic_cmpxchg(pcp, oval, nval)		\
@@ -712,7 +744,7 @@ do {								\
 #  define __this_cpu_cmpxchg_8(pcp, oval, nval)	__this_cpu_generic_cmpxchg(pcp, oval, nval)
 # endif
 # define __this_cpu_cmpxchg(pcp, oval, nval)	\
-	__pcpu_size_call_return2(__this_cpu_cmpxchg_, pcp, oval, nval)
+	(__this_cpu_preempt_check(),__pcpu_size_call_return2(__this_cpu_cmpxchg_, pcp, oval, nval))
 #endif
 
 #define __this_cpu_generic_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2)	\
@@ -745,7 +777,7 @@ do {								\
 	__this_cpu_generic_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2)
 # endif
 # define __this_cpu_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2)	\
-	__pcpu_double_call_return_bool(__this_cpu_cmpxchg_double_, (pcp1), (pcp2), (oval1), (oval2), (nval1), (nval2))
+	(__this_cpu_preempt_check(),__pcpu_double_call_return_bool(__this_cpu_cmpxchg_double_, (pcp1), (pcp2), (oval1), (oval2), (nval1), (nval2)))
 #endif
 
 /*
Index: linux/lib/smp_processor_id.c
===================================================================
--- linux.orig/lib/smp_processor_id.c	2013-09-25 11:49:49.542184644 -0500
+++ linux/lib/smp_processor_id.c	2013-09-25 11:51:11.461386858 -0500
@@ -7,7 +7,7 @@
 #include <linux/kallsyms.h>
 #include <linux/sched.h>
 
-notrace unsigned int debug_smp_processor_id(void)
+notrace static unsigned int check_preemption_disabled(char *what)
 {
 	unsigned long preempt_count = preempt_count();
 	int this_cpu = raw_smp_processor_id();
@@ -39,8 +39,8 @@ notrace unsigned int debug_smp_processor
 	if (!printk_ratelimit())
 		goto out_enable;
 
-	printk(KERN_ERR "BUG: using smp_processor_id() in preemptible [%08x] "
-			"code: %s/%d\n",
+	printk(KERN_ERR "%s in preemptible [%08x] "
+			"code: %s/%d\n", what,
 		preempt_count() - 1, current->comm, current->pid);
 	print_symbol("caller is %s\n", (long)__builtin_return_address(0));
 	dump_stack();
@@ -51,5 +51,17 @@ out:
 	return this_cpu;
 }
 
+notrace unsigned int debug_smp_processor_id(void)
+{
+	return check_preemption_disabled("BUG: using smp_processor_id()");
+}
 EXPORT_SYMBOL(debug_smp_processor_id);
 
+notrace void __this_cpu_preempt_check(void)
+{
+#ifdef CONFIG_THIS_CPU_PREEMPTION_CHECK
+	check_preemption_disabled("__this_cpu operation");
+#endif
+}
+EXPORT_SYMBOL(__this_cpu_preempt_check);
+
Index: linux/lib/Kconfig.debug
===================================================================
--- linux.orig/lib/Kconfig.debug	2013-09-25 11:49:49.542184644 -0500
+++ linux/lib/Kconfig.debug	2013-09-25 11:49:49.542184644 -0500
@@ -797,6 +797,19 @@ config DEBUG_PREEMPT
 	  if kernel code uses it in a preemption-unsafe way. Also, the kernel
 	  will detect preemption count underflows.
 
+config DEBUG_THIS_CPU_OPERATIONS
+	bool "Preemption checks for __this_cpu operations"
+	depends on DEBUG_PREEMPT
+	default n
+	help
+	  Enables preemption checks for __this_cpu macros. These
+	  are macros to generate single instruction operations on
+	  per cpu data. The option only affects the __this_cpu variant
+	  which is used when fallback to multiple instructions without
+	  other synchronization is possible should the arch not support
+	  these types of instruction. A verification is then
+	  done to make sure that the thread cannot be preempted.
+
 menu "Lock Debugging (spinlocks, mutexes, etc...)"
 
 config DEBUG_RT_MUTEXES

^ permalink raw reply	[flat|nested] 21+ messages in thread
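The mechanism the patch adds — a check helper that complains when a __this_cpu operation runs while preemption is enabled — can be sketched as a small userspace model. All names below (`preempt_count`, `this_cpu_preempt_check()`, `checked_inc()`) are hypothetical stand-ins for the kernel facilities, not the kernel code itself; rate limiting and the stack dump are omitted:

```c
#include <assert.h>
#include <stdio.h>

/* Userspace model: a positive count means preemption is disabled,
 * mirroring how the kernel's preempt_count() behaves. */
static int preempt_count;	/* > 0: preemption disabled */
static int warnings;		/* how many times the check fired */
static int counter;		/* stand-in for a per-cpu counter */

static void preempt_disable(void) { preempt_count++; }
static void preempt_enable(void)  { preempt_count--; }

/* Model of __this_cpu_preempt_check(): warn when the caller is
 * preemptible, i.e. when nothing holds the preempt count up. */
static void this_cpu_preempt_check(const char *what)
{
	if (preempt_count == 0) {
		fprintf(stderr, "BUG: %s in preemptible code\n", what);
		warnings++;
	}
}

/* Model of a checked op such as __this_cpu_add(counter, 1): the
 * debug check runs first, then the (non-atomic) read-modify-write. */
static void checked_inc(void)
{
	this_cpu_preempt_check("__this_cpu operation");
	counter++;
}
```

A call to `checked_inc()` outside a `preempt_disable()`/`preempt_enable()` pair trips the check once; the same call inside the pair stays silent, which is exactly the property the debug option is meant to verify.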
* Re: [pchecks v2 2/2] percpu: Add preemption checks to __this_cpu ops
  2013-09-27 13:54         ` Christoph Lameter
@ 2013-09-28  8:39           ` Ingo Molnar
  2013-10-02 15:11             ` Christoph Lameter
  2013-09-28  8:44           ` Ingo Molnar
  1 sibling, 1 reply; 21+ messages in thread
From: Ingo Molnar @ 2013-09-28 8:39 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Tejun Heo, akpm, linux-arch, Steven Rostedt, linux-kernel,
	Peter Zijlstra, Thomas Gleixner, Peter Zijlstra

* Christoph Lameter <cl@linux.com> wrote:

> On Wed, 25 Sep 2013, Ingo Molnar wrote:
>
> > > I think this is necessary since it seems that the discussions on how
> > > to do the raw_cpu conversions may take some time. If we enable it by
> > > default then there will be numerous new log messages. That way a
> > > developer can enable it for working on this.
> >
> > Note that for these patches to be eligible for upstream merge any
> > extra warnings that trigger must be fixed, regardless of the default
> > setting.
>
> That is exactly what this patch does. There will only be warning if the
> user enabled them.

You didn't understand me apparently: all warnings that trigger with the
debug CONFIG option enabled must be fixed before this can be sent
upstream.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 21+ messages in thread
* Re: [pchecks v2 2/2] percpu: Add preemption checks to __this_cpu ops
  2013-09-28  8:39           ` Ingo Molnar
@ 2013-10-02 15:11             ` Christoph Lameter
  2013-10-03  7:26               ` Ingo Molnar
  0 siblings, 1 reply; 21+ messages in thread
From: Christoph Lameter @ 2013-10-02 15:11 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Tejun Heo, akpm, linux-arch, Steven Rostedt, linux-kernel,
	Peter Zijlstra, Thomas Gleixner, Peter Zijlstra

On Sat, 28 Sep 2013, Ingo Molnar wrote:

>
> * Christoph Lameter <cl@linux.com> wrote:
>
> > On Wed, 25 Sep 2013, Ingo Molnar wrote:
> >
> > > > I think this is necessary since it seems that the discussions on how
> > > > to do the raw_cpu conversions may take some time. If we enable it by
> > > > default then there will be numerous new log messages. That way a
> > > > developer can enable it for working on this.
> > >
> > > Note that for these patches to be eligible for upstream merge any
> > > extra warnings that trigger must be fixed, regardless of the default
> > > setting.
> >
> > That is exactly what this patch does. There will only be warning if the
> > user enabled them.
>
> You didn't understand me apparently: all warnings that trigger with the
> debug CONFIG option enabled must be fixed before this can be sent
> upstream.

This patchset is required to determine the warnings that will be
triggered and to get the work to address these issues done. The feedback
so far indicates that there may be lots of discussions regarding the
warnings that have been discovered so far. It's not realistic to do this
in one go.

^ permalink raw reply	[flat|nested] 21+ messages in thread
* Re: [pchecks v2 2/2] percpu: Add preemption checks to __this_cpu ops
  2013-10-02 15:11             ` Christoph Lameter
@ 2013-10-03  7:26               ` Ingo Molnar
  0 siblings, 0 replies; 21+ messages in thread
From: Ingo Molnar @ 2013-10-03 7:26 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Tejun Heo, akpm, linux-arch, Steven Rostedt, linux-kernel,
	Peter Zijlstra, Thomas Gleixner, Peter Zijlstra

* Christoph Lameter <cl@linux.com> wrote:

> > > That is exactly what this patch does. There will only be warning if
> > > the user enabled them.
> >
> > You didn't understand me apparently: all warnings that trigger with
> > the debug CONFIG option enabled must be fixed before this can be sent
> > upstream.
>
> This patchset is required to determine the warnings that will be
> triggered and to get the work to address these issues done. The feedback
> so far indicates that there may be lots of discussions regarding the
> warnings that have been discovered so far. Its not realistic to do this
> in one go.

Please submit them once you had the time to ensure that they are ready
and produce no warnings on your system(s) with debugging enabled.

To help out I can stage them for you in a branch within the scheduler
tree if you think there are spurious warnings - as long as you fix and
address any bug reports and review feedback received in a timely fashion.
I'll send it upstream once the debug code has settled down and doesn't
produce false warnings.

Also, just to make it clear, as I mentioned in my previous mail: until
the debug code becomes upstream worthy and converges, consider all your
__this_cpu patches NAK-ed by me.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 21+ messages in thread
* Re: [pchecks v2 2/2] percpu: Add preemption checks to __this_cpu ops
  2013-09-27 13:54         ` Christoph Lameter
  2013-09-28  8:39           ` Ingo Molnar
@ 2013-09-28  8:44           ` Ingo Molnar
  2013-10-02 15:08             ` Christoph Lameter
  1 sibling, 1 reply; 21+ messages in thread
From: Ingo Molnar @ 2013-09-28 8:44 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Tejun Heo, akpm, linux-arch, Steven Rostedt, linux-kernel,
	Peter Zijlstra, Thomas Gleixner, Peter Zijlstra

* Christoph Lameter <cl@linux.com> wrote:

> > The blind __this_cpu conversions without proper preempt debugging
> > cannot continue without first fixing all the fallout of the missing
> > debug checks to begin with.
>
> That will take some time as the feedback from the other patchset
> suggests.

That's the reason why we insisted on __this_cpu*() primitives growing
these essential debug checks early on - which you resisted. I had to bring
out NAKs for you to see sense and start fixing the mess already - next
time around I'll probably have to NAK your changes earlier to prevent such
mishaps.

( Note that some false positives were possibly fixed by the use of the
  lib/smp_processor_id.c methods to check for preemptability, so the
  situation might not be as dire as your first series suggests. )

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 21+ messages in thread
* Re: [pchecks v2 2/2] percpu: Add preemption checks to __this_cpu ops
  2013-09-28  8:44           ` Ingo Molnar
@ 2013-10-02 15:08             ` Christoph Lameter
  2013-10-02 15:08               ` Christoph Lameter
  2013-10-03  7:21               ` Ingo Molnar
  0 siblings, 2 replies; 21+ messages in thread
From: Christoph Lameter @ 2013-10-02 15:08 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Tejun Heo, akpm, linux-arch, Steven Rostedt, linux-kernel,
	Peter Zijlstra, Thomas Gleixner, Peter Zijlstra

On Sat, 28 Sep 2013, Ingo Molnar wrote:

>
> * Christoph Lameter <cl@linux.com> wrote:
>
> > > The blind __this_cpu conversions without proper preempt debugging
> > > cannot continue without first fixing all the fallout of the missing
> > > debug checks to begin with.
> >
> > That will take some time as the feedback from the other patchset
> > suggests.
>
> That's the reason why we insisted on __this_cpu*() primitives growing
> these essential debug checks early on - which you resisted. I had to bring
> out NAKs for you to see sense and start fixing the mess already - next
> time around I'll probably have to NAK your changes earlier to prevent such
> mishaps.

I pointed out the issues that would have to be addressed when they brought
up the issue. It seemed that Steven was working on it, I fixed some of the
problems that he mentioned and then waited. Seems that nothing was
happening on the issue then. Guess it was not that important to you. That
was years ago.

^ permalink raw reply	[flat|nested] 21+ messages in thread
* Re: [pchecks v2 2/2] percpu: Add preemption checks to __this_cpu ops
  2013-10-02 15:08             ` Christoph Lameter
  2013-10-02 15:08               ` Christoph Lameter
@ 2013-10-03  7:21               ` Ingo Molnar
  2013-10-03 13:55                 ` Peter Zijlstra
  2013-10-03 14:15                 ` Christoph Lameter
  1 sibling, 2 replies; 21+ messages in thread
From: Ingo Molnar @ 2013-10-03 7:21 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Tejun Heo, akpm, linux-arch, Steven Rostedt, linux-kernel,
	Peter Zijlstra, Thomas Gleixner, Peter Zijlstra

* Christoph Lameter <cl@linux.com> wrote:

> On Sat, 28 Sep 2013, Ingo Molnar wrote:
>
> >
> > * Christoph Lameter <cl@linux.com> wrote:
> >
> > > > The blind __this_cpu conversions without proper preempt debugging
> > > > cannot continue without first fixing all the fallout of the
> > > > missing debug checks to begin with.
> > >
> > > That will take some time as the feedback from the other patchset
> > > suggests.
> >
> > That's the reason why we insisted on __this_cpu*() primitives growing
> > these essential debug checks early on - which you resisted. I had to
> > bring out NAKs for you to see sense and start fixing the mess already
> > - next time around I'll probably have to NAK your changes earlier to
> > prevent such mishaps.
>
> I pointed out the issues that would have to be addressed when the
> brought up the issue. It seemed that Steven was working on it, I fixed
> some of the problems that he mentioned and then waited. Seems that
> nothing was happening on the issue then. Guess it was not that important
> to you. [...]

It was important to me and other maintainers as well back then and today
as well, as me and others complained about it numerous times.

What we didn't do was to outright NAK your naked __this_cpu changes
straight away and thus prevent them from getting upstream.

I can fix that omission easily: consider all your __this_cpu* patches
NAK-ed by me until the (trivial) preemption debug checks are upstream
worthy:

 - tested
 - complete
 - don't produce false warnings when enabled.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 21+ messages in thread
* Re: [pchecks v2 2/2] percpu: Add preemption checks to __this_cpu ops
  2013-10-03  7:21               ` Ingo Molnar
@ 2013-10-03 13:55                 ` Peter Zijlstra
  2013-10-03 14:15                 ` Christoph Lameter
  0 siblings, 0 replies; 21+ messages in thread
From: Peter Zijlstra @ 2013-10-03 13:55 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Christoph Lameter, Tejun Heo, akpm, linux-arch, Steven Rostedt,
	linux-kernel, Thomas Gleixner

On Thu, Oct 03, 2013 at 09:21:00AM +0200, Ingo Molnar wrote:
> It was important to me and other maintainers as well back then and today
> as well, as me and others complained about it out numerous times.

I can testify to that; the lack of these preemption checks has wasted
hours of my and Thomas' time, probably others as well.

^ permalink raw reply	[flat|nested] 21+ messages in thread
* Re: [pchecks v2 2/2] percpu: Add preemption checks to __this_cpu ops
  2013-10-03  7:21               ` Ingo Molnar
  2013-10-03 13:55                 ` Peter Zijlstra
@ 2013-10-03 14:15                 ` Christoph Lameter
  2013-10-03 14:15                   ` Christoph Lameter
  2013-10-03 15:35                   ` Ingo Molnar
  1 sibling, 2 replies; 21+ messages in thread
From: Christoph Lameter @ 2013-10-03 14:15 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Tejun Heo, akpm, linux-arch, Steven Rostedt, linux-kernel,
	Peter Zijlstra, Thomas Gleixner, Peter Zijlstra

On Thu, 3 Oct 2013, Ingo Molnar wrote:

> It was important to me and other maintainers as well back then and today
> as well, as me and others complained about it out numerous times.

Yes there were some complaints and discussions about what to do. I
suggested how this could be addressed. But no patches showed up and there
were always other more pressing things. Especially since this is a minor
issue related to CONFIG_PREEMPT which seems to be not in use in the
kernels that I see in HPC, FIS and the industry at large.

> I can fix that omission easily: consider all your __this_cpu* patches
> NAK-ed by me until the (trivial) preemption debug checks are upstream
> worthy:
>
> - tested
> - complete
> - don't produce false warnings when enabled.

Not sure what tests you would like to see run and if it is even possible
to test all possible kernel runtime configurations. You seem to have some
setup to do some testing along these lines I believe? These two patches
will allow this testing to be done. And I do not see any mention of
technical issues with the code.

Once these patches are merged, I will post an updated series of patches
that use raw_cpu_ops for the places where this patch triggers and we can
then discuss how to address these issues in each of the cases.

Having these two patches merged will mean that others can join in and
reproduce tests as well as discover more instances where the preempt
checks trigger. There is no harm in merging these.

^ permalink raw reply	[flat|nested] 21+ messages in thread
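The conversion plan discussed here — keep the checked __this_cpu ops for code that must have preemption disabled, and switch call sites that are safe for other reasons to raw_cpu ops — can be illustrated with a small userspace model. The macro names and the bare `preempt_count` variable below are hypothetical stand-ins, not the kernel's implementations:

```c
#include <assert.h>

/* Userspace model of the checked/unchecked split: this_cpu_add()
 * verifies the preempt count first, raw_cpu_add() performs the
 * identical operation with no verification. */
static int preempt_count;	/* > 0: preemption disabled */
static int warnings;		/* times the debug check fired */

static void this_cpu_preempt_check(void)
{
	if (preempt_count == 0)
		warnings++;	/* the kernel would printk and dump_stack */
}

/* Checked variant: debug check, then the generic RMW fallback. */
#define this_cpu_add(var, val) \
	do { this_cpu_preempt_check(); (var) += (val); } while (0)

/* Raw variant: the same RMW, no check - for callers protected by
 * other means (e.g. interrupts already disabled). */
#define raw_cpu_add(var, val) \
	do { (var) += (val); } while (0)
```

Converting a flagged call site from `this_cpu_add()` to `raw_cpu_add()` silences the check without changing the operation performed, which is why each conversion has to be argued individually: the raw form asserts that some other protection makes the race impossible.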
* Re: [pchecks v2 2/2] percpu: Add preemption checks to __this_cpu ops
  2013-10-03 14:15                 ` Christoph Lameter
  2013-10-03 14:15                   ` Christoph Lameter
@ 2013-10-03 15:35                   ` Ingo Molnar
  2013-10-03 15:59                     ` Christoph Lameter
  1 sibling, 1 reply; 21+ messages in thread
From: Ingo Molnar @ 2013-10-03 15:35 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Tejun Heo, akpm, linux-arch, Steven Rostedt, linux-kernel,
	Peter Zijlstra, Thomas Gleixner, Peter Zijlstra

* Christoph Lameter <cl@linux.com> wrote:

> On Thu, 3 Oct 2013, Ingo Molnar wrote:
>
> > It was important to me and other maintainers as well back then and today
> > as well, as me and others complained about it out numerous times.
>
> Yes there were some complaints and in discussions about what to do. I
> suggested how this could be addressed. But no patches showed up [...]

_You_ added the facility with broken (== non-existent) preemption
debugging for __this_cpu ops, _you_ caused Peter Zijlstra and others to
waste time due to you ignoring those requests to add debugging. Everyone
rightfully expected _you_ to fix the problem you introduced.

And now you blame the victims of your sloppiness, saying that they should
have fixed the problem you introduced?

> [...] and there were always other more pressing things. Especially since
> this is a minor issue related to CONFIG_PREEMPT which seems to be not in
> use in the kernels that I see in HPC, FIS and the industry at large.

People wasting time and the kernel becoming less robust is not a minor
issue at all.

> > I can fix that omission easily: consider all your __this_cpu* patches
> > NAK-ed by me until the (trivial) preemption debug checks are upstream
> > worthy:
> >
> > - tested
> > - complete
> > - don't produce false warnings when enabled.
>
> Not sure what tests you will like to see run and if it is even possible
> to test all possible kernel runtime configurations. You seem to have
> some setup to do some testing along these lines I believe?

As a starting point it would be fine if you tested it on your own systems
with all relevant debugging enabled...

> These two patches will allow this testing to be done. And I do not see
> any mention of technical issues with the code. [...]

Here's the list of open technical problems:

 - Lack of testing - you have not stated whether any warnings trigger
   with those two patches applied and debugging enabled, on your systems.

 - I pointed out in detail how your last submission was broken in several
   places, which shows a lack of time and care in the patch series.

 - Your statement in the discussion that warnings will trigger with the
   debug option enabled points to an obvious technical problem as well -
   all warnings known to trigger by you should be fixed by you, as part of
   the series.

Please resolve these technical problems and resend a clean, tested,
working series. Until all the problems are addressed my NAK stands and I
suspect Peter Zijlstra's as well.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 21+ messages in thread
* Re: [pchecks v2 2/2] percpu: Add preemption checks to __this_cpu ops
  2013-10-03 15:35                   ` Ingo Molnar
@ 2013-10-03 15:59                     ` Christoph Lameter
  2013-10-03 16:44                       ` Ingo Molnar
  0 siblings, 1 reply; 21+ messages in thread
From: Christoph Lameter @ 2013-10-03 15:59 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Tejun Heo, akpm, linux-arch, Steven Rostedt, linux-kernel,
	Peter Zijlstra, Thomas Gleixner, Peter Zijlstra

On Thu, 3 Oct 2013, Ingo Molnar wrote:

> _You_ added the facility with broken (== non-existent) preemption
> debugging for __this_cpu ops, _you_ caused Peter Zijstra and others to
> waste time due to you ignoring those requests to add debugging. Everyone
> rightfully expected _you_ to fix the problem you introduced.

Oh, what the __this_cpu ops were doing was clearly documented at the time
they were merged.

> And now you blame the victims of your sloppiness, that they should have
> fixed the problem you introduced?

I fix problems that others introduce into my subsystems as well. If there
is a problem then usually someone shows up with patches to address these.

> People wasting time and the kernel becoming less robust is not a minor
> issue at all.

Well then I would have expected someone to show up with patches following
through on what was discussed. I am no expert on preemption.

> As a starting point it would be fine if you tested it on your own systems
> with all relevant debugging enabled...

Ok, done that.

> > These two patches will allow this testing to be done. And I do not see
> > any mention of technical issues with the code. [...]
>
> Here's the list of open technical problems:
>
> - Lack of testing - you have not stated it whether any warnings trigger
>   with those two patches applied and debugging enabled, on your systems.

I have posted an earlier patchset which includes fixes for the warnings
that were triggered. It described the testing that was done.

> - I pointed out in detail how your last submission was broken in several
>   places which show lack of time and care on the patch series.

This is regarding the garbled subject line in your inbox? So you want me
to fix the quilt tool now? I can just make this a single patch if that
helps?

> - Your statement in the discussion that warnings will trigger with the
>   debug option enabled points to an obvious technical problem as well -
>   all warnings known to trigger by you should be fixed by you, as part of
>   the series.

The problem is that you raised more issues related to the fixes that I
posted. I don't think this can be handled in one patchset.

^ permalink raw reply	[flat|nested] 21+ messages in thread
* Re: [pchecks v2 2/2] percpu: Add preemption checks to __this_cpu ops
  2013-10-03 15:59                     ` Christoph Lameter
@ 2013-10-03 16:44                       ` Ingo Molnar
  0 siblings, 0 replies; 21+ messages in thread
From: Ingo Molnar @ 2013-10-03 16:44 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Tejun Heo, akpm, linux-arch, Steven Rostedt, linux-kernel,
	Peter Zijlstra, Thomas Gleixner, Peter Zijlstra

* Christoph Lameter <cl@linux.com> wrote:

> On Thu, 3 Oct 2013, Ingo Molnar wrote:
>
> > _You_ added the facility with broken (== non-existent) preemption
> > debugging for __this_cpu ops, _you_ caused Peter Zijlstra and others to
> > waste time due to you ignoring those requests to add debugging.
> > Everyone rightfully expected _you_ to fix the problem you introduced.
>
> Oh what __this_cpu ops was doing was clearly documented at the time it
> was merged.

In hindsight it was merged prematurely - such things happen. After it was
merged multiple people ran into problems: Thomas Gleixner, Peter Zijlstra
and myself. IIRC at the Kernel Summit Linus agreed as well that not having
__this_cpu() debug ops was a mistake.

If your only argument left is "but it was merged in that inferior form",
and you refuse to fix its shortcomings, then our answer is to learn from
our mistake and not merge patches from you in the future, until the
facility is fixed.

> > And now you blame the victims of your sloppiness, that they should
> > have fixed the problem you introduced?
>
> I fix problems that others introduce into my subsystems as well. If
> there is a problem then usually someone shows up with patches to address
> these.
>
> > People wasting time and the kernel becoming less robust is not a minor
> > issue at all.
>
> Well then I would have expected someone to show up with patches
> following through on what was discussed. I am no expert on preemption.

If you don't understand the impact of your changes and the fragility they
introduce then that's one more reason to not merge more patches from you
until you rectify your omissions.

> > As a starting point it would be fine if you tested it on your own
> > systems with all relevant debugging enabled...
>
> Ok done that.
>
> > > These two patches will allow this testing to be done. And I do not see
> > > any mention of technical issues with the code. [...]
> >
> > Here's the list of open technical problems:
> >
> > - Lack of testing - you have not stated it whether any warnings
> >   trigger with those two patches applied and debugging enabled, on
> >   your systems.
>
> I have posted an earlier patchset which includes fixes for the warnings
> that were triggered. It described the testing that was done.

That 'earlier' patch set was with a different version of the preemption
checks, and it's not at all clear whether the same warnings still trigger
with your latest series.

> > - I pointed out in detail how your last submission was broken in
> >   several places which show lack of time and care on the patch
> >   series.
>
> This is regarding the garbled subject line in your inbox? So you want me
> to fix the quilt tool now? I can just make this a single patch if that
> helps?

Other people are able to submit patch series with non-garbled subject
lines using Quilt and other tools. You should not blame Quilt for your
messed up submission.

> > - Your statement in the discussion that warnings will trigger with
> >   the debug option enabled points to an obvious technical problem as
> >   well - all warnings known to trigger by you should be fixed by you,
> >   as part of the series.
>
> The problem is that you raised more issues related to the fixes that I
> posted. I dont think this can be handled in one patchset.

I disagree - but in any case the patch set is not acceptable for upstream
in its current form and my NAK stands, until this is fixed.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 21+ messages in thread
end of thread, other threads:[~2013-10-03 16:45 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
     [not found] <20130924154159.855373283@linux.com>
2013-09-24 15:41 ` [pchecks v2 1/2] Subject; percpu: Add raw_cpu_ops Christoph Lameter
2013-09-24 15:41 ` [pchecks v2 2/2] percpu: Add preemption checks to __this_cpu ops Christoph Lameter
2013-09-24 15:41   ` Christoph Lameter
2013-09-24 17:10   ` Ingo Molnar
2013-09-25 16:40     ` Christoph Lameter
2013-09-25 16:40       ` Christoph Lameter
2013-09-25 18:11       ` Ingo Molnar
2013-09-27 13:54         ` Christoph Lameter
2013-09-28  8:39           ` Ingo Molnar
2013-10-02 15:11             ` Christoph Lameter
2013-10-03  7:26               ` Ingo Molnar
2013-09-28  8:44           ` Ingo Molnar
2013-10-02 15:08             ` Christoph Lameter
2013-10-02 15:08               ` Christoph Lameter
2013-10-03  7:21               ` Ingo Molnar
2013-10-03 13:55                 ` Peter Zijlstra
2013-10-03 14:15                 ` Christoph Lameter
2013-10-03 14:15                   ` Christoph Lameter
2013-10-03 15:35                   ` Ingo Molnar
2013-10-03 15:59                     ` Christoph Lameter
2013-10-03 16:44                       ` Ingo Molnar