public inbox for linux-kernel@vger.kernel.org
* [PATCH for 2.6.25] Markers - use synchronize_sched()
@ 2008-03-31 13:16 Mathieu Desnoyers
  2008-04-01 20:30 ` Andrew Morton
  0 siblings, 1 reply; 4+ messages in thread
From: Mathieu Desnoyers @ 2008-03-31 13:16 UTC (permalink / raw)
  To: Paul E. McKenney, Andrew Morton, linux-kernel

Use synchronize_sched before calling call_rcu in CONFIG_PREEMPT_RCU until we
have call_rcu_sched and rcu_barrier_sched in mainline. It will slow down the
marker operations in CONFIG_PREEMPT_RCU, but it fixes the current race against
the preempt_disable/enable() protected code paths.

Paul, is this ok? It would be good to get this in for 2.6.25 final.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
CC: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 kernel/marker.c |    9 +++++++++
 1 file changed, 9 insertions(+)

Index: linux-2.6-lttng/kernel/marker.c
===================================================================
--- linux-2.6-lttng.orig/kernel/marker.c	2008-03-31 08:22:24.000000000 -0400
+++ linux-2.6-lttng/kernel/marker.c	2008-03-31 08:43:30.000000000 -0400
@@ -671,6 +671,9 @@ int marker_probe_register(const char *na
 	entry->rcu_pending = 1;
 	/* write rcu_pending before calling the RCU callback */
 	smp_wmb();
+#ifdef CONFIG_PREEMPT_RCU
+	synchronize_sched();	/* Until we have the call_rcu_sched() */
+#endif
 	call_rcu(&entry->rcu, free_old_closure);
 end:
 	mutex_unlock(&markers_mutex);
@@ -714,6 +717,9 @@ int marker_probe_unregister(const char *
 	entry->rcu_pending = 1;
 	/* write rcu_pending before calling the RCU callback */
 	smp_wmb();
+#ifdef CONFIG_PREEMPT_RCU
+	synchronize_sched();	/* Until we have the call_rcu_sched() */
+#endif
 	call_rcu(&entry->rcu, free_old_closure);
 	remove_marker(name);	/* Ignore busy error message */
 	ret = 0;
@@ -792,6 +798,9 @@ int marker_probe_unregister_private_data
 	entry->rcu_pending = 1;
 	/* write rcu_pending before calling the RCU callback */
 	smp_wmb();
+#ifdef CONFIG_PREEMPT_RCU
+	synchronize_sched();	/* Until we have the call_rcu_sched() */
+#endif
 	call_rcu(&entry->rcu, free_old_closure);
 	remove_marker(entry->name);	/* Ignore busy error message */
 end:
-- 
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F  BA06 3F25 A8FE 3BAE 9A68


* Re: [PATCH for 2.6.25] Markers - use synchronize_sched()
  2008-03-31 13:16 [PATCH for 2.6.25] Markers - use synchronize_sched() Mathieu Desnoyers
@ 2008-04-01 20:30 ` Andrew Morton
  2008-04-01 20:45   ` Paul E. McKenney
  2008-04-01 22:25   ` Mathieu Desnoyers
  0 siblings, 2 replies; 4+ messages in thread
From: Andrew Morton @ 2008-04-01 20:30 UTC (permalink / raw)
  To: Mathieu Desnoyers; +Cc: paulmck, linux-kernel

On Mon, 31 Mar 2008 09:16:09 -0400
Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca> wrote:

> Use synchronize_sched before calling call_rcu in CONFIG_PREEMPT_RCU until we
> have call_rcu_sched and rcu_barrier_sched in mainline. It will slow down the
> marker operations in CONFIG_PREEMPT_RCU, but it fixes the current race against
> the preempt_disable/enable() protected code paths.

A better changelog would have described the bug which is being fixed.

> Paul, is this ok ? It would be good to get this in for 2.6.25 final.

Paul seems to have nodded off.  I'll merge it.


* Re: [PATCH for 2.6.25] Markers - use synchronize_sched()
  2008-04-01 20:30 ` Andrew Morton
@ 2008-04-01 20:45   ` Paul E. McKenney
  2008-04-01 22:25   ` Mathieu Desnoyers
  1 sibling, 0 replies; 4+ messages in thread
From: Paul E. McKenney @ 2008-04-01 20:45 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Mathieu Desnoyers, linux-kernel

On Tue, Apr 01, 2008 at 01:30:17PM -0700, Andrew Morton wrote:
> On Mon, 31 Mar 2008 09:16:09 -0400
> Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca> wrote:
> 
> > Use synchronize_sched before calling call_rcu in CONFIG_PREEMPT_RCU until we
> > have call_rcu_sched and rcu_barrier_sched in mainline. It will slow down the
> > marker operations in CONFIG_PREEMPT_RCU, but it fixes the current race against
> > the preempt_disable/enable() protected code paths.
> 
> A better changelog would have described the bug which is being fixed.
> 
> > Paul, is this ok ? It would be good to get this in for 2.6.25 final.
> 
> Paul seems to have nodded off.  I'll merge it.

Paul seems to have fat-fingered the earlier message.  But he does see the
message adding this to -mm, and the patch looks good to me.

							Thanx, Paul


* Re: [PATCH for 2.6.25] Markers - use synchronize_sched()
  2008-04-01 20:30 ` Andrew Morton
  2008-04-01 20:45   ` Paul E. McKenney
@ 2008-04-01 22:25   ` Mathieu Desnoyers
  1 sibling, 0 replies; 4+ messages in thread
From: Mathieu Desnoyers @ 2008-04-01 22:25 UTC (permalink / raw)
  To: Andrew Morton; +Cc: paulmck, linux-kernel

* Andrew Morton (akpm@linux-foundation.org) wrote:
> On Mon, 31 Mar 2008 09:16:09 -0400
> Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca> wrote:
> 
> > Use synchronize_sched before calling call_rcu in CONFIG_PREEMPT_RCU until we
> > have call_rcu_sched and rcu_barrier_sched in mainline. It will slow down the
> > marker operations in CONFIG_PREEMPT_RCU, but it fixes the current race against
> > the preempt_disable/enable() protected code paths.
> 
> A better changelog would have described the bug which is being fixed.
> 

Hi Andrew,

Right, this could be appended to the changelog, then:

Markers do not mix well with CONFIG_PREEMPT_RCU because the marker code uses
preempt_disable/enable() rather than rcu_read_lock/unlock, for minimal
intrusiveness. We would need call_rcu_sched and rcu_barrier_sched primitives.

Currently, the modification (connection and disconnection) of probes
from markers requires changing the data structure in RCU style:
a new data structure is created, the pointer is swapped atomically, a
quiescent state is reached, and then the old data structure is freed.
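The update scheme described above can be sketched as a minimal userspace C analogue. Everything here is a hypothetical stand-in, not the kernel API: `replace_closure` plays the role of the probe-update path in kernel/marker.c, and `fake_synchronize_sched` is a no-op placeholder for the real grace-period wait:

```c
#include <stdlib.h>
#include <string.h>
#include <assert.h>
#include <stdatomic.h>

/* Hypothetical stand-in for the marker closure being swapped. */
struct probe_closure {
	const char *name;
};

static _Atomic(struct probe_closure *) active_closure;

/* Placeholder for synchronize_sched(): in the kernel this blocks until
 * every CPU has passed through a quiescent state, i.e. every
 * preempt_disable()/preempt_enable() region that was running when the
 * swap happened has completed. Here it is a no-op. */
static void fake_synchronize_sched(void) { }

/* RCU-style replacement: publish the new closure, wait out old
 * readers, then free the old closure. */
static void replace_closure(struct probe_closure *new_c)
{
	struct probe_closure *old_c;

	/* 1. Atomically swap in the new data structure. */
	old_c = atomic_exchange(&active_closure, new_c);
	/* 2. Wait until no reader can still hold old_c. */
	fake_synchronize_sched();
	/* 3. Only now is it safe to free the old structure. */
	free(old_c);
}
```

The ordering is the whole point: readers never block, and the updater pays the full cost of waiting for the grace period before freeing.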

The quiescent state is reached once all currently running
preempt_disable regions have completed. We use the call_rcu mechanism
to execute kfree() after such a quiescent state has been reached. However,
the new CONFIG_PREEMPT_RCU version of call_rcu and rcu_barrier does not
guarantee that all preempt_disable code regions have finished, hence the
race.
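The mismatch can be seen by lining up the two sides. This is a compilable userspace skeleton with no-op stubs for the kernel primitives; it illustrates the ordering problem in its comments, it does not reproduce the race:

```c
#include <assert.h>

/* No-op stand-ins so this sketch compiles in userspace; the real
 * semantics are described in the comments below. */
#define preempt_disable()  do { } while (0)
#define preempt_enable()   do { } while (0)

static int old_closure_data = 42;
/* Pointer the updater swaps out from under the readers. */
static int *closure = &old_closure_data;

/* Marker fast path: protected by preempt_disable(), NOT by
 * rcu_read_lock(), to stay minimally intrusive. */
static int marker_fast_path(void)
{
	int val;

	preempt_disable();
	val = *closure;		/* may still be the old closure here */
	preempt_enable();
	return val;
}

/*
 * Classic RCU: a grace period ends only after every CPU has passed
 * through the scheduler, which also ends every preempt_disable()
 * region, so call_rcu(..., free_old_closure) happens to be safe.
 *
 * CONFIG_PREEMPT_RCU: the grace period tracks rcu_read_lock()/
 * rcu_read_unlock() nesting only. A reader inside preempt_disable()
 * is invisible to it, so free_old_closure() can run while
 * marker_fast_path() still holds the old pointer: a use-after-free.
 */
```

Hence the need for a primitive whose grace period is defined by preemption-disabled regions, which is exactly what call_rcu_sched/synchronize_sched provide.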

The "proper" way to do this is to use rcu_read_lock/unlock, but we don't
want to use it, to minimize intrusiveness on the traced system. (We do
not want the marker code to call into much of the OS code, because that
would quickly restrict what can and cannot be instrumented, such as the
scheduler.)

The temporary fix, until we get call_rcu_sched and rcu_barrier_sched in
mainline, is to use synchronize_sched before each call_rcu call, so we
wait for the quiescent state in the system call code path. It will slow
down batch marker enable/disable, but makes sure the race is gone.

Thanks,

Mathieu

> > Paul, is this ok ? It would be good to get this in for 2.6.25 final.
> 
> Paul seems to have nodded off.  I'll merge it.

-- 
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F  BA06 3F25 A8FE 3BAE 9A68

