* [pchecks v1 1/4] Subject; percpu: Add raw_cpu_ops [not found] <20130923191256.584672290@linux.com> @ 2013-09-23 19:12 ` Christoph Lameter 2013-09-23 19:12 ` [pchecks v1 2/4] Use raw cpu ops for calls that would trigger with checks Christoph Lameter ` (2 subsequent siblings) 3 siblings, 0 replies; 18+ messages in thread From: Christoph Lameter @ 2013-09-23 19:12 UTC (permalink / raw) To: Tejun Heo; +Cc: akpm, Steven Rostedt, linux-kernel, Ingo Molnar, Peter Zijlstra The following patches will add preemption checks to __this_cpu ops so we need to have an alternative way to use these per cpu operations without preemption checks. raw_cpu ops rely on the __this_cpu_xxx_1/2/4/8 operations having no preemption checks. We therefore do not have to duplicate these functions but can afford to only duplicate the higher level macros. Also add raw_cpu_ptr for symmetry's sake. Signed-off-by: Christoph Lameter <cl@linux.com> Index: linux/include/linux/percpu.h =================================================================== --- linux.orig/include/linux/percpu.h 2013-09-23 10:20:05.050535801 -0500 +++ linux/include/linux/percpu.h 2013-09-23 10:20:05.050535801 -0500 @@ -139,6 +139,9 @@ extern int __init pcpu_page_first_chunk( pcpu_fc_populate_pte_fn_t populate_pte_fn); #endif +/* For symmetry's sake. */ +#define raw_cpu_ptr __this_cpu_ptr + /* * Use this to get to a cpu's version of the per-cpu object * dynamically allocated. Non-atomic access to the current CPU's @@ -520,17 +523,7 @@ do { \ /* * Generic percpu operations for context that are safe from preemption/interrupts. - * Either we do not care about races or the caller has the - * responsibility of handling preemption/interrupt issues. Arch code can still - * override these instructions since the arch per cpu code may be more - * efficient and may actually get race freeness for free (that is the - * case for x86 for example). 
- * - * If there is no other protection through preempt disable and/or - * disabling interupts then one of these RMW operations can show unexpected - * behavior because the execution thread was rescheduled on another processor - * or an interrupt occurred and the same percpu variable was modified from - * the interrupt context. + * These functions verify that preemption has been disabled. */ #ifndef __this_cpu_read # ifndef __this_cpu_read_1 @@ -755,4 +748,74 @@ do { \ __pcpu_double_call_return_bool(__this_cpu_cmpxchg_double_, (pcp1), (pcp2), (oval1), (oval2), (nval1), (nval2)) #endif +/* + * Generic percpu operations for contexts where we do not want to do + * any checks for preemption. + * + * If there is no other protection through preempt disable and/or + * disabling interrupts then one of these RMW operations can show unexpected + * behavior because the execution thread was rescheduled on another processor + * or an interrupt occurred and the same percpu variable was modified from + * the interrupt context. 
+ */ +#ifndef raw_cpu_read +# define raw_cpu_read(pcp) __pcpu_size_call_return(__this_cpu_read_, (pcp)) +#endif + +#ifndef raw_cpu_write +# define raw_cpu_write(pcp, val) __pcpu_size_call(__this_cpu_write_, (pcp), (val)) +#endif + +#ifndef raw_cpu_add +# define raw_cpu_add(pcp, val) __pcpu_size_call(__this_cpu_add_, (pcp), (val)) +#endif + +#ifndef raw_cpu_sub +# define raw_cpu_sub(pcp, val) raw_cpu_add((pcp), -(val)) +#endif + +#ifndef raw_cpu_inc +# define raw_cpu_inc(pcp) raw_cpu_add((pcp), 1) +#endif + +#ifndef raw_cpu_dec +# define raw_cpu_dec(pcp) raw_cpu_sub((pcp), 1) +#endif + +#ifndef raw_cpu_and +# define raw_cpu_and(pcp, val) __pcpu_size_call(__this_cpu_and_, (pcp), (val)) +#endif + +#ifndef raw_cpu_or +# define raw_cpu_or(pcp, val) __pcpu_size_call(__this_cpu_or_, (pcp), (val)) +#endif + +#ifndef raw_cpu_xor +# define raw_cpu_xor(pcp, val) __pcpu_size_call(__this_cpu_xor_, (pcp), (val)) +#endif + +#ifndef raw_cpu_add_return +# define raw_cpu_add_return(pcp, val) \ + __pcpu_size_call_return2(__this_cpu_add_return_, pcp, val) +#endif + +#define raw_cpu_sub_return(pcp, val) raw_cpu_add_return(pcp, -(val)) +#define raw_cpu_inc_return(pcp) raw_cpu_add_return(pcp, 1) +#define raw_cpu_dec_return(pcp) raw_cpu_add_return(pcp, -1) + +#ifndef raw_cpu_xchg +# define raw_cpu_xchg(pcp, nval) \ + __pcpu_size_call_return2(__this_cpu_xchg_, (pcp), nval) +#endif + +#ifndef raw_cpu_cmpxchg +# define raw_cpu_cmpxchg(pcp, oval, nval) \ + __pcpu_size_call_return2(__this_cpu_cmpxchg_, pcp, oval, nval) +#endif + +#ifndef raw_cpu_cmpxchg_double +# define raw_cpu_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2) \ + __pcpu_double_call_return_bool(__this_cpu_cmpxchg_double_, (pcp1), (pcp2), (oval1), (oval2), (nval1), (nval2)) +#endif + #endif /* __LINUX_PERCPU_H */ ^ permalink raw reply [flat|nested] 18+ messages in thread
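The layering this patch relies on — raw_cpu_* macros reusing the unchecked __this_cpu_xxx_1/2/4/8 size-specific operations, while the checked wrappers will add a preemption test on top — can be illustrated with a small standalone C model. This is a hypothetical userspace sketch (per-cpu storage faked with an array, preemption state with a plain counter), not kernel code:

```c
#include <assert.h>

#define NR_CPUS 4

int preempt_count;             /* 0 means the simulated task is preemptible */
int cur_cpu;                   /* CPU the simulated task currently runs on */
long pcpu_counter[NR_CPUS];    /* stand-in for a per-cpu variable */

/* Low-level op with no check, like the __this_cpu_add_1/2/4/8 primitives. */
void model_raw_cpu_add(long *pcp, long val)
{
    pcp[cur_cpu] += val;
}

long model_raw_cpu_read(long *pcp)
{
    return pcp[cur_cpu];
}

/* Checked wrapper: verifies preemption is off, then reuses the raw op. */
void model_this_cpu_add(long *pcp, long val)
{
    assert(preempt_count > 0);    /* would be the new preemption check */
    model_raw_cpu_add(pcp, val);
}
```

Only the wrapper layer differs between the two families, which is why the patch can duplicate just the high-level macros and not the size-specific functions.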
* [pchecks v1 2/4] Use raw cpu ops for calls that would trigger with checks [not found] <20130923191256.584672290@linux.com> 2013-09-23 19:12 ` [pchecks v1 1/4] Subject; percpu: Add raw_cpu_ops Christoph Lameter @ 2013-09-23 19:12 ` Christoph Lameter 2013-09-24 7:28 ` Ingo Molnar ` (2 more replies) 2013-09-23 19:24 ` [pchecks v1 3/4] Use raw_cpu_ops for refresh_cpu_vm_stats() Christoph Lameter 2013-09-23 19:24 ` [pchecks v1 4/4] percpu: Add preemption checks to __this_cpu ops Christoph Lameter 3 siblings, 3 replies; 18+ messages in thread From: Christoph Lameter @ 2013-09-23 19:12 UTC (permalink / raw) To: Tejun Heo; +Cc: akpm, Steven Rostedt, linux-kernel, Ingo Molnar, Peter Zijlstra These location triggered during testing with KVM. These are fetches without preemption off where we judged that to be more performance efficient or where other means of providing synchronization (BH handling) are available. Signed-off-by: Christoph Lameter <cl@linux.com> Index: linux/include/linux/topology.h =================================================================== --- linux.orig/include/linux/topology.h 2013-09-12 13:26:29.216103951 -0500 +++ linux/include/linux/topology.h 2013-09-12 13:41:30.762358687 -0500 @@ -182,7 +182,7 @@ DECLARE_PER_CPU(int, numa_node); /* Returns the number of the current Node. 
*/ static inline int numa_node_id(void) { - return __this_cpu_read(numa_node); + return raw_cpu_read(numa_node); } #endif @@ -239,7 +239,7 @@ static inline void set_numa_mem(int node /* Returns the number of the nearest Node with memory */ static inline int numa_mem_id(void) { - return __this_cpu_read(_numa_mem_); + return raw_cpu_read(_numa_mem_); } #endif Index: linux/include/net/snmp.h =================================================================== --- linux.orig/include/net/snmp.h 2013-09-12 13:26:29.216103951 -0500 +++ linux/include/net/snmp.h 2013-09-12 13:26:29.208104037 -0500 @@ -126,7 +126,7 @@ struct linux_xfrm_mib { extern __typeof__(type) __percpu *name[SNMP_ARRAY_SZ] #define SNMP_INC_STATS_BH(mib, field) \ - __this_cpu_inc(mib[0]->mibs[field]) + raw_cpu_inc(mib[0]->mibs[field]) #define SNMP_INC_STATS_USER(mib, field) \ this_cpu_inc(mib[0]->mibs[field]) @@ -141,7 +141,7 @@ struct linux_xfrm_mib { this_cpu_dec(mib[0]->mibs[field]) #define SNMP_ADD_STATS_BH(mib, field, addend) \ - __this_cpu_add(mib[0]->mibs[field], addend) + raw_cpu_add(mib[0]->mibs[field], addend) #define SNMP_ADD_STATS_USER(mib, field, addend) \ this_cpu_add(mib[0]->mibs[field], addend) Index: linux/kernel/hrtimer.c =================================================================== --- linux.orig/kernel/hrtimer.c 2013-09-12 13:26:29.216103951 -0500 +++ linux/kernel/hrtimer.c 2013-09-12 13:26:29.212103994 -0500 @@ -538,7 +538,7 @@ static inline int hrtimer_is_hres_enable */ static inline int hrtimer_hres_active(void) { - return __this_cpu_read(hrtimer_bases.hres_active); + return raw_cpu_read(hrtimer_bases.hres_active); } /* ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [pchecks v1 2/4] Use raw cpu ops for calls that would trigger with checks 2013-09-23 19:12 ` [pchecks v1 2/4] Use raw cpu ops for calls that would trigger with checks Christoph Lameter @ 2013-09-24 7:28 ` Ingo Molnar 2013-09-24 7:32 ` Ingo Molnar 2013-09-24 7:34 ` Ingo Molnar 2 siblings, 0 replies; 18+ messages in thread From: Ingo Molnar @ 2013-09-24 7:28 UTC (permalink / raw) To: Christoph Lameter Cc: Tejun Heo, akpm, Steven Rostedt, linux-kernel, Peter Zijlstra * Christoph Lameter <cl@linux.com> wrote: > These location triggered during testing with KVM. > > These are fetches without preemption off where we judged that > to be more performance efficient or where other means of > providing synchronization (BH handling) are available. > > Signed-off-by: Christoph Lameter <cl@linux.com> > > Index: linux/include/linux/topology.h > =================================================================== > --- linux.orig/include/linux/topology.h 2013-09-12 13:26:29.216103951 -0500 > +++ linux/include/linux/topology.h 2013-09-12 13:41:30.762358687 -0500 > @@ -182,7 +182,7 @@ DECLARE_PER_CPU(int, numa_node); > /* Returns the number of the current Node. */ > static inline int numa_node_id(void) > { > - return __this_cpu_read(numa_node); > + return raw_cpu_read(numa_node); > } > #endif > > @@ -239,7 +239,7 @@ static inline void set_numa_mem(int node > /* Returns the number of the nearest Node with memory */ > static inline int numa_mem_id(void) > { > - return __this_cpu_read(_numa_mem_); > + return raw_cpu_read(_numa_mem_); > } These are generic primitives used in quite a few places and it can easily be a bug to use numa_node_id() in a preemptible section - and this patch would hide that fact. So the correct way to do it is to have checking in these and to introduce raw_numa_node_id()/raw_numa_mem_id() and change eventual KVM (and any other) preemptible-section use of numa_node_id() to raw_numa_node_id() and explain why it's safe to do it. 
Thanks, Ingo ^ permalink raw reply [flat|nested] 18+ messages in thread
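Ingo's suggested split — keep numa_node_id() checked and add an explicitly raw variant for audited preemptible call sites — could look roughly like the sketch below. The names and the userspace modeling are assumptions for illustration; raw_numa_node_id() did not exist at the time of this thread:

```c
#include <assert.h>

int preempt_count;         /* 0 means preemptible */
int fake_numa_node = 1;    /* stand-in for the per-cpu numa_node variable */

/* Unchecked variant for call sites that have argued why they are safe. */
int model_raw_numa_node_id(void)
{
    return fake_numa_node;    /* would be raw_cpu_read(numa_node) */
}

/* Default variant keeps the debug check so accidental misuse is caught. */
int model_numa_node_id(void)
{
    assert(preempt_count > 0);    /* would trip the preemption check */
    return model_raw_numa_node_id();
}
```

Preemptible callers would then be converted to the raw variant one by one, each with a justification, instead of silently weakening the common helper.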
* Re: [pchecks v1 2/4] Use raw cpu ops for calls that would trigger with checks 2013-09-23 19:12 ` [pchecks v1 2/4] Use raw cpu ops for calls that would trigger with checks Christoph Lameter 2013-09-24 7:28 ` Ingo Molnar @ 2013-09-24 7:32 ` Ingo Molnar 2013-09-24 12:45 ` Eric Dumazet 2013-09-24 7:34 ` Ingo Molnar 2 siblings, 1 reply; 18+ messages in thread From: Ingo Molnar @ 2013-09-24 7:32 UTC (permalink / raw) To: Christoph Lameter Cc: Tejun Heo, akpm, Steven Rostedt, linux-kernel, Peter Zijlstra, netdev (netdev Cc:-ed) * Christoph Lameter <cl@linux.com> wrote: > These location triggered during testing with KVM. > > These are fetches without preemption off where we judged that > to be more performance efficient or where other means of > providing synchronization (BH handling) are available. > Index: linux/include/net/snmp.h > =================================================================== > --- linux.orig/include/net/snmp.h 2013-09-12 13:26:29.216103951 -0500 > +++ linux/include/net/snmp.h 2013-09-12 13:26:29.208104037 -0500 > @@ -126,7 +126,7 @@ struct linux_xfrm_mib { > extern __typeof__(type) __percpu *name[SNMP_ARRAY_SZ] > > #define SNMP_INC_STATS_BH(mib, field) \ > - __this_cpu_inc(mib[0]->mibs[field]) > + raw_cpu_inc(mib[0]->mibs[field]) > > #define SNMP_INC_STATS_USER(mib, field) \ > this_cpu_inc(mib[0]->mibs[field]) > @@ -141,7 +141,7 @@ struct linux_xfrm_mib { > this_cpu_dec(mib[0]->mibs[field]) > > #define SNMP_ADD_STATS_BH(mib, field, addend) \ > - __this_cpu_add(mib[0]->mibs[field], addend) > + raw_cpu_add(mib[0]->mibs[field], addend) Are the networking folks fine with allowing unsafe operations on SNMP stats in preemptible sections, or should the kernel produce an optional warning message if CONFIG_DEBUG_PREEMPT=y and these ops are used in preemptible (non-bh, non-irq-handler, non-irqs-off, etc.) sections? RAW_SNMP_*_STATS() ops could be used to annotate those places where that kind of usage is safe. 
Thanks, Ingo ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [pchecks v1 2/4] Use raw cpu ops for calls that would trigger with checks 2013-09-24 7:32 ` Ingo Molnar @ 2013-09-24 12:45 ` Eric Dumazet 0 siblings, 0 replies; 18+ messages in thread From: Eric Dumazet @ 2013-09-24 12:45 UTC (permalink / raw) To: Ingo Molnar Cc: Christoph Lameter, Tejun Heo, akpm, Steven Rostedt, linux-kernel, Peter Zijlstra, netdev On Tue, 2013-09-24 at 09:32 +0200, Ingo Molnar wrote: > (netdev Cc:-ed) > > * Christoph Lameter <cl@linux.com> wrote: > > > These location triggered during testing with KVM. > > > > These are fetches without preemption off where we judged that > > to be more performance efficient or where other means of > > providing synchronization (BH handling) are available. > > > Index: linux/include/net/snmp.h > > =================================================================== > > --- linux.orig/include/net/snmp.h 2013-09-12 13:26:29.216103951 -0500 > > +++ linux/include/net/snmp.h 2013-09-12 13:26:29.208104037 -0500 > > @@ -126,7 +126,7 @@ struct linux_xfrm_mib { > > extern __typeof__(type) __percpu *name[SNMP_ARRAY_SZ] > > > > #define SNMP_INC_STATS_BH(mib, field) \ > > - __this_cpu_inc(mib[0]->mibs[field]) > > + raw_cpu_inc(mib[0]->mibs[field]) > > > > #define SNMP_INC_STATS_USER(mib, field) \ > > this_cpu_inc(mib[0]->mibs[field]) > > @@ -141,7 +141,7 @@ struct linux_xfrm_mib { > > this_cpu_dec(mib[0]->mibs[field]) > > > > #define SNMP_ADD_STATS_BH(mib, field, addend) \ > > - __this_cpu_add(mib[0]->mibs[field], addend) > > + raw_cpu_add(mib[0]->mibs[field], addend) > > Are the networking folks fine with allowing unafe operations of SNMP stats > in preemptible sections, or should the kernel produce an optional warning > message if CONFIG_PREEMPT_DEBUG=y and these ops are used in preemptible > (non-bh, non-irq-handler, non-irqs-off, etc.) sections? > > RAW_SNMP_*_STATS() ops could be used to annotate those places where that > kind of usage is safe. 
I would rather not use RAW_ prefix in the macro, but add debugging check to make sure we use _BH() variant in the right context. BUG_ON(!in_softirq()) ^ permalink raw reply [flat|nested] 18+ messages in thread
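Eric's alternative — keep the _BH macro name but verify the bottom-half context directly — can be sketched as below. This is a hypothetical userspace model (softirq state is a plain counter), not the actual net/snmp.h code:

```c
#include <assert.h>

int softirq_count;    /* nonzero while "inside" bottom-half context */
long mib_field;       /* stand-in for mib[0]->mibs[field] */

int model_in_softirq(void)
{
    return softirq_count != 0;
}

/* _BH variant that checks its context instead of gaining a RAW_ prefix. */
void model_snmp_inc_stats_bh(long *field)
{
    assert(model_in_softirq());    /* would be BUG_ON(!in_softirq()) */
    (*field)++;                    /* would be raw_cpu_inc() */
}
```

The check costs one comparison per increment in debug builds, but keeps every existing caller unchanged while still catching use outside softirq context.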
* Re: [pchecks v1 2/4] Use raw cpu ops for calls that would trigger with checks 2013-09-23 19:12 ` [pchecks v1 2/4] Use raw cpu ops for calls that would trigger with checks Christoph Lameter 2013-09-24 7:28 ` Ingo Molnar 2013-09-24 7:32 ` Ingo Molnar @ 2013-09-24 7:34 ` Ingo Molnar 2 siblings, 0 replies; 18+ messages in thread From: Ingo Molnar @ 2013-09-24 7:34 UTC (permalink / raw) To: Christoph Lameter, Thomas Gleixner Cc: Tejun Heo, akpm, Steven Rostedt, linux-kernel, Peter Zijlstra * Christoph Lameter <cl@linux.com> wrote: > Index: linux/kernel/hrtimer.c > =================================================================== > --- linux.orig/kernel/hrtimer.c 2013-09-12 13:26:29.216103951 -0500 > +++ linux/kernel/hrtimer.c 2013-09-12 13:26:29.212103994 -0500 > @@ -538,7 +538,7 @@ static inline int hrtimer_is_hres_enable > */ > static inline int hrtimer_hres_active(void) > { > - return __this_cpu_read(hrtimer_bases.hres_active); > + return raw_cpu_read(hrtimer_bases.hres_active); > } If cpu_read() is used, does this check trigger? If yes, what makes ignoring the check safe? Per change explanation is necessary for such annotations. Thanks, Ingo ^ permalink raw reply [flat|nested] 18+ messages in thread
* [pchecks v1 3/4] Use raw_cpu_ops for refresh_cpu_vm_stats() [not found] <20130923191256.584672290@linux.com> 2013-09-23 19:12 ` [pchecks v1 1/4] Subject; percpu: Add raw_cpu_ops Christoph Lameter 2013-09-23 19:12 ` [pchecks v1 2/4] Use raw cpu ops for calls that would trigger with checks Christoph Lameter @ 2013-09-23 19:24 ` Christoph Lameter 2013-09-24 7:43 ` Ingo Molnar 2013-09-23 19:24 ` [pchecks v1 4/4] percpu: Add preemption checks to __this_cpu ops Christoph Lameter 3 siblings, 1 reply; 18+ messages in thread From: Christoph Lameter @ 2013-09-23 19:24 UTC (permalink / raw) To: Tejun Heo; +Cc: akpm, Steven Rostedt, linux-kernel, Ingo Molnar, Peter Zijlstra We do not care about races for the expiration logic in refresh_cpu_vm_stats(). Draining is a rare act after all. No need to create too much overhead for that. Use raw_cpu_ops there. Signed-off-by: Christoph Lameter <cl@linux.com> Index: linux/mm/vmstat.c =================================================================== --- linux.orig/mm/vmstat.c 2013-09-23 10:20:31.742262228 -0500 +++ linux/mm/vmstat.c 2013-09-23 10:20:31.738262268 -0500 @@ -439,6 +439,10 @@ static inline void fold_diff(int *diff) * statistics in the remote zone struct as well as the global cachelines * with the global counters. These could cause remote node cache line * bouncing and will have to be only done when necessary. + * + * Note that we have to use raw_cpu ops here. The thread is pinned + * to a specific processor but the preempt checking logic does not + * know about this. */ static void refresh_cpu_vm_stats(void) { @@ -459,7 +463,7 @@ static void refresh_cpu_vm_stats(void) global_diff[i] += v; #ifdef CONFIG_NUMA /* 3 seconds idle till flush */ - __this_cpu_write(p->expire, 3); + raw_cpu_write(p->expire, 3); #endif } } @@ -472,23 +476,23 @@ static void refresh_cpu_vm_stats(void) * Check if there are pages remaining in this pageset * if not then there is nothing to expire. 
*/ - if (!__this_cpu_read(p->expire) || - !__this_cpu_read(p->pcp.count)) + if (!raw_cpu_read(p->expire) || + !raw_cpu_read(p->pcp.count)) continue; /* * We never drain zones local to this processor. */ if (zone_to_nid(zone) == numa_node_id()) { - __this_cpu_write(p->expire, 0); + raw_cpu_write(p->expire, 0); continue; } - if (__this_cpu_dec_return(p->expire)) + if (raw_cpu_dec_return(p->expire)) continue; - if (__this_cpu_read(p->pcp.count)) + if (raw_cpu_read(p->pcp.count)) drain_zone_pages(zone, __this_cpu_ptr(&p->pcp)); #endif } ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [pchecks v1 3/4] Use raw_cpu_ops for refresh_cpu_vm_stats() 2013-09-23 19:24 ` [pchecks v1 3/4] Use raw_cpu_ops for refresh_cpu_vm_stats() Christoph Lameter @ 2013-09-24 7:43 ` Ingo Molnar 0 siblings, 0 replies; 18+ messages in thread From: Ingo Molnar @ 2013-09-24 7:43 UTC (permalink / raw) To: Christoph Lameter Cc: Tejun Heo, akpm, Steven Rostedt, linux-kernel, Peter Zijlstra * Christoph Lameter <cl@linux.com> wrote: > We do not care about races for the expiration logic in > refresh_cpu_vm_stats(). Draining is a rare act after all. > No need to create too much overhead for that. > > Use raw_cpu_ops there. > > Signed-off-by: Christoph Lameter <cl@linux.com> > > Index: linux/mm/vmstat.c > =================================================================== > --- linux.orig/mm/vmstat.c 2013-09-23 10:20:31.742262228 -0500 > +++ linux/mm/vmstat.c 2013-09-23 10:20:31.738262268 -0500 > @@ -439,6 +439,10 @@ static inline void fold_diff(int *diff) > * statistics in the remote zone struct as well as the global cachelines > * with the global counters. These could cause remote node cache line > * bouncing and will have to be only done when necessary. > + * > + * Note that we have to use raw_cpu ops here. The thread is pinned > + * to a specific processor but the preempt checking logic does not > + * know about this. That's not actually true - debug_smp_processor_id() does a check for the pinning status of the current task: /* * Kernel threads bound to a single CPU can safely use * smp_processor_id(): */ if (cpumask_equal(tsk_cpus_allowed(current), cpumask_of(this_cpu))) goto out; You should factor out those existing debug checks and reuse them, instead of using inferior ones. 
Note that debug_smp_processor_id() can probably be optimized a bit: today we have p->nr_cpus_allowed which tracks the pinning status, so instead of the above line we could write this cheaper form: if (current->nr_cpus_allowed == 1) goto out; (This should help on kernels configured for larger systems where the cpumask is non-trivial.) What we cannot do is to hide the weakness of the debug check you added by adding various workarounds to core code. Thanks, Ingo ^ permalink raw reply [flat|nested] 18+ messages in thread
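The check Ingo describes — treating a task pinned to a single CPU as safe even when preemption is nominally enabled, with p->nr_cpus_allowed as the cheap pinning test — can be modeled like this. It is a hypothetical userspace sketch of the debug_smp_processor_id() logic, not the kernel implementation:

```c
#include <assert.h>

struct model_task {
    int nr_cpus_allowed;    /* number of CPUs the task may run on */
};

int preempt_count;          /* 0 means preemptible */

/* Per-cpu access is safe if preemption is off or the task is pinned. */
int model_pcpu_access_safe(const struct model_task *t)
{
    if (preempt_count > 0)
        return 1;               /* preemption disabled */
    if (t->nr_cpus_allowed == 1)
        return 1;               /* the cheap pinning test Ingo proposes */
    return 0;                   /* preemptible and migratable: unsafe */
}
```

Compared with comparing full cpumasks, the nr_cpus_allowed form is a single integer compare, which matters on kernels configured for many CPUs.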
* [pchecks v1 4/4] percpu: Add preemption checks to __this_cpu ops [not found] <20130923191256.584672290@linux.com> ` (2 preceding siblings ...) 2013-09-23 19:24 ` [pchecks v1 3/4] Use raw_cpu_ops for refresh_cpu_vm_stats() Christoph Lameter @ 2013-09-23 19:24 ` Christoph Lameter 2013-09-24 8:03 ` Ingo Molnar 3 siblings, 1 reply; 18+ messages in thread From: Christoph Lameter @ 2013-09-23 19:24 UTC (permalink / raw) To: Tejun Heo; +Cc: akpm, Steven Rostedt, linux-kernel, Ingo Molnar, Peter Zijlstra We define a check function in order to avoid trouble with the include files. Then the higher level __this_cpu macros are modified to invoke the check before __this_cpu operation Signed-off-by: Christoph Lameter <cl@linux.com> Index: linux/include/linux/percpu.h =================================================================== --- linux.orig/include/linux/percpu.h 2013-09-23 10:24:47.371629684 -0500 +++ linux/include/linux/percpu.h 2013-09-23 10:26:01.314865122 -0500 @@ -175,6 +175,12 @@ extern phys_addr_t per_cpu_ptr_to_phys(v extern void __bad_size_call_parameter(void); +#ifdef CONFIG_DEBUG_PREEMPT +extern void __this_cpu_preempt_check(void); +#else +static inline void __this_cpu_preempt_check(void) { } +#endif + #define __pcpu_size_call_return(stem, variable) \ ({ typeof(variable) pscr_ret__; \ __verify_pcpu_ptr(&(variable)); \ @@ -538,7 +544,8 @@ do { \ # ifndef __this_cpu_read_8 # define __this_cpu_read_8(pcp) (*__this_cpu_ptr(&(pcp))) # endif -# define __this_cpu_read(pcp) __pcpu_size_call_return(__this_cpu_read_, (pcp)) +# define __this_cpu_read(pcp) \ + (__this_cpu_preempt_check(),__pcpu_size_call_return(__this_cpu_read_, (pcp))) #endif #define __this_cpu_generic_to_op(pcp, val, op) \ @@ -559,7 +566,12 @@ do { \ # ifndef __this_cpu_write_8 # define __this_cpu_write_8(pcp, val) __this_cpu_generic_to_op((pcp), (val), =) # endif -# define __this_cpu_write(pcp, val) __pcpu_size_call(__this_cpu_write_, (pcp), (val)) + +# define __this_cpu_write(pcp, val) \ +do { 
__this_cpu_preempt_check(); \ + __pcpu_size_call(__this_cpu_write_, (pcp), (val)); \ +} while (0) + #endif #ifndef __this_cpu_add @@ -575,7 +587,12 @@ do { \ # ifndef __this_cpu_add_8 # define __this_cpu_add_8(pcp, val) __this_cpu_generic_to_op((pcp), (val), +=) # endif -# define __this_cpu_add(pcp, val) __pcpu_size_call(__this_cpu_add_, (pcp), (val)) + +# define __this_cpu_add(pcp, val) \ +do { __this_cpu_preempt_check(); \ + __pcpu_size_call(__this_cpu_add_, (pcp), (val)); \ +} while (0) + #endif #ifndef __this_cpu_sub @@ -603,7 +620,12 @@ do { \ # ifndef __this_cpu_and_8 # define __this_cpu_and_8(pcp, val) __this_cpu_generic_to_op((pcp), (val), &=) # endif -# define __this_cpu_and(pcp, val) __pcpu_size_call(__this_cpu_and_, (pcp), (val)) + +# define __this_cpu_and(pcp, val) \ +do { __this_cpu_preempt_check(); \ + __pcpu_size_call(__this_cpu_and_, (pcp), (val)); \ +} while (0) + #endif #ifndef __this_cpu_or @@ -619,7 +641,12 @@ do { \ # ifndef __this_cpu_or_8 # define __this_cpu_or_8(pcp, val) __this_cpu_generic_to_op((pcp), (val), |=) # endif -# define __this_cpu_or(pcp, val) __pcpu_size_call(__this_cpu_or_, (pcp), (val)) + +# define __this_cpu_or(pcp, val) \ +do { __this_cpu_preempt_check(); \ + __pcpu_size_call(__this_cpu_or_, (pcp), (val)); \ +} while (0) + #endif #ifndef __this_cpu_xor @@ -635,7 +662,12 @@ do { \ # ifndef __this_cpu_xor_8 # define __this_cpu_xor_8(pcp, val) __this_cpu_generic_to_op((pcp), (val), ^=) # endif -# define __this_cpu_xor(pcp, val) __pcpu_size_call(__this_cpu_xor_, (pcp), (val)) + +# define __this_cpu_xor(pcp, val) \ +do { __this_cpu_preempt_check(); \ + __pcpu_size_call(__this_cpu_xor_, (pcp), (val)); \ +} while (0) + #endif #define __this_cpu_generic_add_return(pcp, val) \ @@ -658,7 +690,7 @@ do { \ # define __this_cpu_add_return_8(pcp, val) __this_cpu_generic_add_return(pcp, val) # endif # define __this_cpu_add_return(pcp, val) \ - __pcpu_size_call_return2(__this_cpu_add_return_, pcp, val) + 
(__this_cpu_preempt_check(),__pcpu_size_call_return2(__this_cpu_add_return_, pcp, val)) #endif #define __this_cpu_sub_return(pcp, val) __this_cpu_add_return(pcp, -(val)) @@ -686,7 +718,7 @@ do { \ # define __this_cpu_xchg_8(pcp, nval) __this_cpu_generic_xchg(pcp, nval) # endif # define __this_cpu_xchg(pcp, nval) \ - __pcpu_size_call_return2(__this_cpu_xchg_, (pcp), nval) + (__this_cpu_preempt_check(),__pcpu_size_call_return2(__this_cpu_xchg_, (pcp), nval)) #endif #define __this_cpu_generic_cmpxchg(pcp, oval, nval) \ @@ -712,7 +744,7 @@ do { \ # define __this_cpu_cmpxchg_8(pcp, oval, nval) __this_cpu_generic_cmpxchg(pcp, oval, nval) # endif # define __this_cpu_cmpxchg(pcp, oval, nval) \ - __pcpu_size_call_return2(__this_cpu_cmpxchg_, pcp, oval, nval) + (__this_cpu_preempt_check(),__pcpu_size_call_return2(__this_cpu_cmpxchg_, pcp, oval, nval)) #endif #define __this_cpu_generic_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2) \ @@ -745,7 +777,7 @@ do { \ __this_cpu_generic_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2) # endif # define __this_cpu_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2) \ - __pcpu_double_call_return_bool(__this_cpu_cmpxchg_double_, (pcp1), (pcp2), (oval1), (oval2), (nval1), (nval2)) + (__this_cpu_preempt_check(),__pcpu_double_call_return_bool(__this_cpu_cmpxchg_double_, (pcp1), (pcp2), (oval1), (oval2), (nval1), (nval2))) #endif /* Index: linux/kernel/sched/core.c =================================================================== --- linux.orig/kernel/sched/core.c 2013-09-23 10:24:47.371629684 -0500 +++ linux/kernel/sched/core.c 2013-09-23 10:24:47.371629684 -0500 @@ -2566,6 +2566,29 @@ asmlinkage void __sched preempt_schedule exception_exit(prev_state); } +#ifdef CONFIG_DEBUG_PREEMPT +/* + * This function is called if the kernel is compiled with preempt + * support for each __this_cpu operations. It verifies that + * preemption has been disabled. 
+ * + * The function cannot be a macro due to the low level nature + * of the per cpu header files. + */ +void __this_cpu_preempt_check(void) +{ + int p; + + p = preemptible(); + if (p) { + printk(KERN_ERR "__this_cpu but preemptable." + " preempt_count=%d irqs_disabled=%d\n", + preempt_count(), irqs_disabled()); + dump_stack(); + } + +} +#endif /* CONFIG_DEBUG_PREEMPT */ #endif /* CONFIG_PREEMPT */ int default_wake_function(wait_queue_t *curr, unsigned mode, int wake_flags, ^ permalink raw reply [flat|nested] 18+ messages in thread
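The expression-form hooks in this patch (__this_cpu_read, __this_cpu_add_return, __this_cpu_cmpxchg, etc.) rely on the C comma operator: (check(), expr) evaluates check() for its side effect and yields the value of expr, so the macro still works in value contexts where a do { } while (0) block would not. A minimal standalone illustration, with hypothetical names:

```c
#include <assert.h>

int checks_run;              /* counts how often the hook fired */
int some_counter = 41;       /* stand-in for a per-cpu variable */

void model_preempt_check(void)
{
    checks_run++;            /* the real hook warns if preemption is on */
}

/* Mirrors: #define __this_cpu_read(pcp) (check(), size_call_return(pcp)) */
int model_read_with_check(int *pcp)
{
    return (model_preempt_check(), *pcp);
}
```

The write-style ops in the patch use do { } while (0) instead because they are statements; only the value-returning ops need the comma form.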
* Re: [pchecks v1 4/4] percpu: Add preemption checks to __this_cpu ops 2013-09-23 19:24 ` [pchecks v1 4/4] percpu: Add preemption checks to __this_cpu ops Christoph Lameter @ 2013-09-24 8:03 ` Ingo Molnar 2013-09-24 14:24 ` Christoph Lameter 0 siblings, 1 reply; 18+ messages in thread From: Ingo Molnar @ 2013-09-24 8:03 UTC (permalink / raw) To: Christoph Lameter Cc: Tejun Heo, akpm, Steven Rostedt, linux-kernel, Peter Zijlstra * Christoph Lameter <cl@linux.com> wrote: > --- linux.orig/kernel/sched/core.c 2013-09-23 10:24:47.371629684 -0500 > +++ linux/kernel/sched/core.c 2013-09-23 10:24:47.371629684 -0500 > @@ -2566,6 +2566,29 @@ asmlinkage void __sched preempt_schedule > exception_exit(prev_state); > } > > +#ifdef CONFIG_DEBUG_PREEMPT > +/* > + * This function is called if the kernel is compiled with preempt > + * support for each __this_cpu operations. It verifies that > + * preemption has been disabled. > + * > + * The function cannot be a macro due to the low level nature > + * of the per cpu header files. > + */ > +void __this_cpu_preempt_check(void) > +{ > + int p; > + > + p = preemptible(); > + if (p) { > + printk(KERN_ERR "__this_cpu but preemptable." > + " preempt_count=%d irqs_disabled=%d\n", > + preempt_count(), irqs_disabled()); > + dump_stack(); > + } > + > +} > +#endif /* CONFIG_DEBUG_PREEMPT */ During past review of your series Peter Zijlstra very explicitly told you to reuse (and unify with) the preempt checks in lib/smp_processor_id.c! See debug_smp_processor_id(). The problem isn't just that you are duplicating code and adding unnecessary #ifdefs into the wrong place, the bigger problem is that you are implementing weak checks which creates unnecessary raw_*() pollution all across the kernel. Your lack of cooperation is getting ridiculous! My NAK still stands, obviously. Thanks, Ingo ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [pchecks v1 4/4] percpu: Add preemption checks to __this_cpu ops 2013-09-24 8:03 ` Ingo Molnar @ 2013-09-24 14:24 ` Christoph Lameter 2013-09-24 15:18 ` Ingo Molnar 0 siblings, 1 reply; 18+ messages in thread From: Christoph Lameter @ 2013-09-24 14:24 UTC (permalink / raw) To: Ingo Molnar; +Cc: Tejun Heo, akpm, Steven Rostedt, linux-kernel, Peter Zijlstra On Tue, 24 Sep 2013, Ingo Molnar wrote: > During past review of your series Peter Zijlstra very explicitly told you > to reuse (and unify with) the preempt checks in lib/smp_processor_id.c! > See debug_smp_processor_id(). No he did not. He mentioned something about debug_smp_processor_id() at the end of a post after talking about something else. Given your comments now I see what was meant. That was not really obvious in the first place. > The problem isn't just that you are duplicating code and adding > unnecessary #ifdefs into the wrong place, the bigger problem is that you > are implementing weak checks which creates unnecessary raw_*() pollution > all across the kernel. what kind of idiotic comment is that? I am using a single function preemptible(). How is that duplicating anything? > Your lack of cooperation is getting ridiculous! And this kind of insulting behavior is really discouraging people to do work on the kernel. ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [pchecks v1 4/4] percpu: Add preemption checks to __this_cpu ops 2013-09-24 14:24 ` Christoph Lameter @ 2013-09-24 15:18 ` Ingo Molnar 2013-09-24 15:35 ` Christoph Lameter 0 siblings, 1 reply; 18+ messages in thread From: Ingo Molnar @ 2013-09-24 15:18 UTC (permalink / raw) To: Christoph Lameter Cc: Tejun Heo, akpm, Steven Rostedt, linux-kernel, Peter Zijlstra * Christoph Lameter <cl@linux.com> wrote: > On Tue, 24 Sep 2013, Ingo Molnar wrote: > > > During past review of your series Peter Zijlstra very explicitly told you > > to reuse (and unify with) the preempt checks in lib/smp_processor_id.c! > > See debug_smp_processor_id(). > > No he did not. He mentioned something about debug_smp_processor_id() at > the end of a post after talking about something else. Given your > comments now I see what was meant. That was not really obvious in the > first place. Holy cow, this is what PeterZ wrote to you a week ago: > +#ifdef CONFIG_PREEMPT > +extern void this_cpu_preempt_check(void); > +#else > +static inline void this_cpu_preempt_check(void) { } > +#endif How about re-using debug_smp_processor_id() instead? http://lkml.org/lkml/2013/9/16/137 Firstly, that sentence is as damn obvious as it gets. Secondly, even if it wasn't obvious to you, a 'git grep debug_smp_processor_id' would have told you the story. Thirdly, if it wasn't obvious to you, and if you didn't think of using git grep, how about ... asking? If a reviewer gives you review feedback then you should address _EVERY_ single review feedback and not just ignore it... I get the impression that you are trying to deny your excessive sloppiness in this thread - there's no other way to put it really. > > The problem isn't just that you are duplicating code and adding > > unnecessary #ifdefs into the wrong place, the bigger problem is that > > you are implementing weak checks which creates unnecessary raw_*() > > pollution all across the kernel. > > what kind of idiotic comment is that? 
I am using a single function > preemptible(). How is that duplicating anything? as PeterZ pointed it out, we have well-working preempt checks in debug_smp_processor_id() / lib/smp_processor_id.c. Instead of reusing that you created a new preempt check, plopped your new, 25 lines, duplicated check into an ugly #ifdef section into sched/core.c - see that gem attached further below. > > Your lack of cooperation is getting ridiculous! > > And this kind of insulting behavior is really discouraging people to do > work on the kernel. Pointing out your repeated lack of cooperation in this matter is a statement of facts, not an 'insulting behavior'. Your wasting of other people's time is simply not acceptable. That I called you out on it might be embarrassing to you but there's a really easy solution to that: implement reviewer and maintainer requests and don't send sloppy patches repeatedly. Ingo > Index: linux/kernel/sched/core.c > =================================================================== > --- linux.orig/kernel/sched/core.c 2013-09-23 10:24:47.371629684 -0500 > +++ linux/kernel/sched/core.c 2013-09-23 10:24:47.371629684 -0500 > @@ -2566,6 +2566,29 @@ asmlinkage void __sched preempt_schedule > exception_exit(prev_state); > } > > +#ifdef CONFIG_DEBUG_PREEMPT > +/* > + * This function is called if the kernel is compiled with preempt > + * support for each __this_cpu operations. It verifies that > + * preemption has been disabled. > + * > + * The function cannot be a macro due to the low level nature > + * of the per cpu header files. > + */ > +void __this_cpu_preempt_check(void) > +{ > + int p; > + > + p = preemptible(); > + if (p) { > + printk(KERN_ERR "__this_cpu but preemptable." > + " preempt_count=%d irqs_disabled=%d\n", > + preempt_count(), irqs_disabled()); > + dump_stack(); > + } > + > +} > +#endif /* CONFIG_DEBUG_PREEMPT */ > #endif /* CONFIG_PREEMPT */ > ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [pchecks v1 4/4] percpu: Add preemption checks to __this_cpu ops 2013-09-24 15:18 ` Ingo Molnar @ 2013-09-24 15:35 ` Christoph Lameter 2013-09-24 17:26 ` Ingo Molnar 0 siblings, 1 reply; 18+ messages in thread From: Christoph Lameter @ 2013-09-24 15:35 UTC (permalink / raw) To: Ingo Molnar; +Cc: Tejun Heo, akpm, Steven Rostedt, linux-kernel, Peter Zijlstra On Tue, 24 Sep 2013, Ingo Molnar wrote: > > No he did not. He mentioned something about debug_smp_processor_id() at > > the end of a post after talking about something else. Given your > > comments now I see what was meant. That was not really obvious in the > > first place. > > Holy cow, this is what PeterZ wrote to you a week ago: > > > +#ifdef CONFIG_PREEMPT > > +extern void this_cpu_preempt_check(void); > > +#else > > +static inline void this_cpu_preempt_check(void) { } > > +#endif > > How about re-using debug_smp_processor_id() instead? > > Firstly, that sentence is as damn obvious as it gets. No its not. This is a side comment and did not explain in detail what was intended. There was another issue mentioned there. You did that in your dysfunctional communication. > Pointing out your repeated lack of cooperation in this matter is a > statement of facts, not an 'insulting behavior'. Your wasting of other > people's time is simply not acceptable. > > That I called you out on it might be embarrassing to you but there's a > really easy solution to that: implement reviewer and maintainer requests > and don't send sloppy patches repeatedly. What is embarrasing here is your behavior. Usually I do not respond to this kind of crap because its obvious that it is just that and it needs to stand there for all to see not requiring a response. And the patches were repeatedly send to you as well. You could have said something earlier too when you realized that I did not note Peter's comment. ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [pchecks v1 4/4] percpu: Add preemption checks to __this_cpu ops 2013-09-24 15:35 ` Christoph Lameter @ 2013-09-24 17:26 ` Ingo Molnar 2013-09-25 16:46 ` Christoph Lameter 0 siblings, 1 reply; 18+ messages in thread From: Ingo Molnar @ 2013-09-24 17:26 UTC (permalink / raw) To: Christoph Lameter Cc: Tejun Heo, akpm, Steven Rostedt, linux-kernel, Peter Zijlstra * Christoph Lameter <cl@linux.com> wrote: > On Tue, 24 Sep 2013, Ingo Molnar wrote: > > > > No he did not. He mentioned something about debug_smp_processor_id() at > > > the end of a post after talking about something else. Given your > > > comments now I see what was meant. That was not really obvious in the > > > first place. > > > > Holy cow, this is what PeterZ wrote to you a week ago: > > > > > +#ifdef CONFIG_PREEMPT > > > +extern void this_cpu_preempt_check(void); > > > +#else > > > +static inline void this_cpu_preempt_check(void) { } > > > +#endif > > > > How about re-using debug_smp_processor_id() instead? > > > > Firstly, that sentence is as damn obvious as it gets. > > No its not. This is a side comment and did not explain in detail what > was intended. There was another issue mentioned there. You did that in > your dysfunctional communication. Sorry, if you cannot work it out from that very, clear plain sentence, or if you cannot at least ASK what it is about then you have absolutely ZERO business modifying core kernel code really... > > Pointing out your repeated lack of cooperation in this matter is a > > statement of facts, not an 'insulting behavior'. Your wasting of other > > people's time is simply not acceptable. > > > > That I called you out on it might be embarrassing to you but there's a > > really easy solution to that: implement reviewer and maintainer > > requests and don't send sloppy patches repeatedly. > > What is embarrasing here is your behavior. 
Usually I do not respond to > this kind of crap because its obvious that it is just that and it needs > to stand there for all to see not requiring a response. > > And the patches were repeatedly send to you as well. You could have said > something earlier too when you realized that I did not note Peter's > comment. I complained about it exactly when I noticed it. I usually don't assume that long-time contributors ignore very clear maintainer feedback, so I don't always notice such cases straight away. > > > > Your lack of cooperation is getting ridiculous! > > > > > > And this kind of insulting behavior is really discouraging people to > > > do work on the kernel. You can stop playing the victim card: you are not a newbie anymore by any definition, you've been hacking the Linux kernel for how long, 10+ years, writing hundreds of patches? People expect higher quality series from you and you need to learn to address criticism of your workflow as well. You won't find a _single_ mail in the last 15+ years of lkml where I reacted strongly to a newbie being dense or abusive. Newbies can make all sorts of mistakes, it's part of the learning process - but after 10 years you are not a newbie anymore... Thanks, Ingo ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [pchecks v1 4/4] percpu: Add preemption checks to __this_cpu ops 2013-09-24 17:26 ` Ingo Molnar @ 2013-09-25 16:46 ` Christoph Lameter 2013-09-25 18:26 ` Ingo Molnar 0 siblings, 1 reply; 18+ messages in thread From: Christoph Lameter @ 2013-09-25 16:46 UTC (permalink / raw) To: Ingo Molnar; +Cc: Tejun Heo, akpm, Steven Rostedt, linux-kernel, Peter Zijlstra On Tue, 24 Sep 2013, Ingo Molnar wrote: > > > > And this kind of insulting behavior is really discouraging people to > > > > do work on the kernel. > > You can stop playing the victim card: you are not a newbie anymore by any > definition, you've been hacking the Linux kernel for how long, 10+ years, > writing hundreds of patches? People expect higher quality series from you > and you need to learn to address criticism of your workflow as well. > > You won't find a _single_ mail in the last 15+ years of lkml where I > reacted strongly to a newbie being dense or abusive. Newbies can make all > sorts of mistakes, it's part of the learning process - but after 10 years > you are not a newbie anymore... This has nothing to do with newbieness but with general communication behavior. I am not a full time kernel developer (nor would I want to be because it seems to cause some sort of cabin fever) and need to take time off my other duties in order to work on these patches. Time is limited. And then instead of thanks I get insults sprinkled with some paranoia. ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [pchecks v1 4/4] percpu: Add preemption checks to __this_cpu ops 2013-09-25 16:46 ` Christoph Lameter @ 2013-09-25 18:26 ` Ingo Molnar 2013-09-27 14:36 ` Christoph Lameter 0 siblings, 1 reply; 18+ messages in thread From: Ingo Molnar @ 2013-09-25 18:26 UTC (permalink / raw) To: Christoph Lameter Cc: Tejun Heo, akpm, Steven Rostedt, linux-kernel, Peter Zijlstra * Christoph Lameter <cl@linux.com> wrote: > On Tue, 24 Sep 2013, Ingo Molnar wrote: > > > > > > > > > > > > > Your lack of cooperation is getting ridiculous! > > > > > > > > > > And this kind of insulting behavior is really discouraging > > > > > people to do work on the kernel. > > > > You can stop playing the victim card: you are not a newbie anymore by > > any definition, you've been hacking the Linux kernel for how long, 10+ > > years, writing hundreds of patches? People expect higher quality > > series from you and you need to learn to address criticism of your > > workflow as well. > > > > You won't find a _single_ mail in the last 15+ years of lkml where I > > reacted strongly to a newbie being dense or abusive. Newbies can make > > all sorts of mistakes, it's part of the learning process - but after > > 10 years you are not a newbie anymore... > > This has nothing to do with newbieness but with general communication > behavior. I am not a full time kernel developer (nor would I want to be > because it seems to cause some sort of cabin fever) and need to take > time off my other duties in order to work on these patches. Time is > limited. > > And then instead of thanks I get insults sprinkled with some paranoia. Pointing out your lack of cooperation (such as repeatedly ignoring maintainer feedback) is not an "insult" - it's my duty as a maintainer to protect other submitters who do their homework and it's also my duty as a maintainer to keep crappy patches from entering the kernel. Resisting low-quality patches like yours and pointing out patch submission errors and inefficiencies is my job. 
For example lets just take your latest submission from yesterday to see sloppiness in action: 63175 C Sep 24 Christoph Lamet ( 121) ┬─>[pchecks v2 1/2] Subject; percpu: Add raw_cpu_ops 63176 C Sep 24 Christoph Lamet ( 206) └─>[pchecks v2 2/2] percpu: Add preemption checks to __this_cpu ops 63178 C Sep 24 Christoph Lamet ( 14) [pchecks v2 0/2] percpu v2: Implement Preemption checks for __this_cpu operatio The 0/2 mail arrived before the 1/2 and 2/2 mails, because you did not use git-send-email threading options properly to thread them all together... Furthermore, your first patch's subject line was mangled in a weird way, mentioning 'Subject;' twice: Subject: [pchecks v2 1/2] Subject; percpu: Add raw_cpu_ops Patch submissions are expected to have such a coherent format: 63346 T Sep 25 Arnaldo Carvalh ( 70) [GIT PULL 0/6] perf/urgent fixes 63347 T Sep 25 Arnaldo Carvalh ( 37) ├─>[PATCH 1/6] perf kmem: Make it work again on non NUMA machines 63348 T Sep 25 Arnaldo Carvalh ( 31) ├─>[PATCH 2/6] perf trace: Add mmap2 handler 63349 T Sep 25 Arnaldo Carvalh ( 218) ├─>[PATCH 3/6] perf probe: Fix probing symbols with optimization suffix 63350 T Sep 25 Arnaldo Carvalh ( 27) ├─>[PATCH 4/6] perf tools: Explicitly add libdl dependency 63351 T Sep 25 Arnaldo Carvalh ( 37) ├─>[PATCH 5/6] perf machine: Fix path unpopulated in machine__create_modules() 63352 T Sep 25 Arnaldo Carvalh ( 56) └─>[PATCH 6/6] perf symbols: Demangle cloned functions It's not rocket science - and in fact it takes less time to submit patches properly and consistently. All that can be ignored if the submitter is a newbie who is struggling with his first few submissions - but you with 10+ years of experience and hundreds of patches track record are held to a higher standard. Such kind of trivial quality problems does not give me any confidence at all to consider your patches for inclusion - which modify the core kernel after all. There are tons of part-time developers who get their submissions right. 
If you have limited time to contribute I'd suggest you work on each submission a bit more before sending them, to make sure it has the expected quality, to make sure you've addressed all review feedback, etc. - this will waste less time of everyone involved and will generally result in fewer complaints as well. Thanks, Ingo ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [pchecks v1 4/4] percpu: Add preemption checks to __this_cpu ops 2013-09-25 18:26 ` Ingo Molnar @ 2013-09-27 14:36 ` Christoph Lameter 2013-09-28 8:52 ` Ingo Molnar 0 siblings, 1 reply; 18+ messages in thread From: Christoph Lameter @ 2013-09-27 14:36 UTC (permalink / raw) To: Ingo Molnar; +Cc: Tejun Heo, akpm, Steven Rostedt, linux-kernel, Peter Zijlstra [-- Attachment #1: Type: TEXT/PLAIN, Size: 1853 bytes --] On Wed, 25 Sep 2013, Ingo Molnar wrote: > > And then instead of thanks I get insults sprinkled with some paranoia. > > Pointing out your lack of cooperation (such as repeatedly ignoring > maintainer feedback) is not an "insult" - it's my duty as a maintainer to > protect other submitters who do their homework and it's also my duty as a > maintainer to keep crappy patches from entering the kernel. Resisting > low-quality patches like yours and pointing out patch submission errors > and inefficiencies is my job. Thats paranoia. No lack of cooperation. Feedback to patches during development is normal until they reach proper maturity for merging. Thats why these things are usually versioned. > For example lets just take your latest submission from yesterday to see > sloppiness in action: > > 63175 C Sep 24 Christoph Lamet ( 121) ┬─>[pchecks v2 1/2] Subject; percpu: Add raw_cpu_ops > 63176 C Sep 24 Christoph Lamet ( 206) └─>[pchecks v2 2/2] percpu: Add preemption checks to __this_cpu ops > 63178 C Sep 24 Christoph Lamet ( 14) [pchecks v2 0/2] percpu v2: Implement Preemption checks for __this_cpu operatio > > The 0/2 mail arrived before the 1/2 and 2/2 mails, because you did not use > git-send-email threading options properly to thread them all together... "quilt mail" was used for those which provides proper threading AFAICT. The headers look fine here without the additional "subject: stuff". Wonder why the 0/2 did not get put the right way. 
> If you have limited time to contribute I'd suggest you work on each > submission a bit more before sending them, to make sure it has the > expected quality, to make sure you've addressed all review feedback, etc. > - this will waste less time of everyone involved and will generally result > in fewer complaints as well. Certainly I am trying to address all the issues. ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [pchecks v1 4/4] percpu: Add preemption checks to __this_cpu ops 2013-09-27 14:36 ` Christoph Lameter @ 2013-09-28 8:52 ` Ingo Molnar 0 siblings, 0 replies; 18+ messages in thread From: Ingo Molnar @ 2013-09-28 8:52 UTC (permalink / raw) To: Christoph Lameter Cc: Tejun Heo, akpm, Steven Rostedt, linux-kernel, Peter Zijlstra, Thomas Gleixner, Peter Zijlstra * Christoph Lameter <cl@linux.com> wrote: > On Wed, 25 Sep 2013, Ingo Molnar wrote: > > > > And then instead of thanks I get insults sprinkled with some paranoia. > > > > Pointing out your lack of cooperation (such as repeatedly ignoring > > maintainer feedback) is not an "insult" - it's my duty as a maintainer > > to protect other submitters who do their homework and it's also my > > duty as a maintainer to keep crappy patches from entering the kernel. > > Resisting low-quality patches like yours and pointing out patch > > submission errors and inefficiencies is my job. > > Thats paranoia. [...] Pointing out your track record is not paranoia nor an insult - it's merely embarrassing to you. And it's not just me: I heard similar complaints about you from other maintainers as well and I had to use a heavy NAK here to make you cooperate and listen already... > [...] No lack of cooperation. Feedback to patches during development is > normal until they reach proper maturity for merging. Thats why these > things are usually versioned. What I'm complaining about is you _ignoring_ feedback - such as when you ignored PeterZ's feedback. This is kernel development 101: every new version of a series must address _all_ previously given feedback - or if does not do so it should very prominently explain why it has not done so. If you "don't have time" to do it properly then you need to wait more between posting new versions of a series, to not make other people waste time reviewing the series and discovering that the review they gave against a prior series got ignored by you... 
Thanks, Ingo ^ permalink raw reply [flat|nested] 18+ messages in thread
Thread overview: 18+ messages
[not found] <20130923191256.584672290@linux.com>
2013-09-23 19:12 ` [pchecks v1 1/4] Subject; percpu: Add raw_cpu_ops Christoph Lameter
2013-09-23 19:12 ` [pchecks v1 2/4] Use raw cpu ops for calls that would trigger with checks Christoph Lameter
2013-09-24 7:28 ` Ingo Molnar
2013-09-24 7:32 ` Ingo Molnar
2013-09-24 12:45 ` Eric Dumazet
2013-09-24 7:34 ` Ingo Molnar
2013-09-23 19:24 ` [pchecks v1 3/4] Use raw_cpu_ops for refresh_cpu_vm_stats() Christoph Lameter
2013-09-24 7:43 ` Ingo Molnar
2013-09-23 19:24 ` [pchecks v1 4/4] percpu: Add preemption checks to __this_cpu ops Christoph Lameter
2013-09-24 8:03 ` Ingo Molnar
2013-09-24 14:24 ` Christoph Lameter
2013-09-24 15:18 ` Ingo Molnar
2013-09-24 15:35 ` Christoph Lameter
2013-09-24 17:26 ` Ingo Molnar
2013-09-25 16:46 ` Christoph Lameter
2013-09-25 18:26 ` Ingo Molnar
2013-09-27 14:36 ` Christoph Lameter
2013-09-28 8:52 ` Ingo Molnar