linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/2] CPU hotplug: Fix the long-standing "IPI to offline CPU" issue
@ 2014-05-06 18:02 Srivatsa S. Bhat
  2014-05-06 18:02 ` [PATCH 1/2] smp: Print more useful debug info upon receiving IPI on an offline CPU Srivatsa S. Bhat
  2014-05-06 18:03 ` [PATCH 2/2] CPU hotplug, stop-machine: Plug race-window that leads to "IPI-to-offline-CPU" Srivatsa S. Bhat
  0 siblings, 2 replies; 12+ messages in thread
From: Srivatsa S. Bhat @ 2014-05-06 18:02 UTC (permalink / raw)
  To: peterz, tglx, mingo, tj, rusty, akpm, fweisbec, hch
  Cc: mgorman, riel, bp, rostedt, mgalbraith, ego, paulmck, oleg, rjw,
	linux-kernel, Srivatsa S. Bhat


Hi,

There is a long-standing problem related to CPU hotplug which causes IPIs to
be delivered to offline CPUs, and the smp-call-function IPI handler code
prints out a warning whenever this is detected. Every once in a while this
(usually harmless) warning gets reported on LKML, but so far it has not been
completely fixed. Usually the solution involves finding out the IPI sender
and fixing it by adding appropriate synchronization with CPU hotplug.

However, while going through one such internal bug report, I found that
there is a significant bug in the receiver side itself (more specifically,
in stop-machine) that can lead to this problem even when the sender code
is perfectly fine. This patchset fixes that synchronization problem in the
CPU hotplug stop-machine code.

Patch 1 adds some additional debug code to the smp-call-function framework,
to help debug such issues easily.

Patch 2 modifies the stop-machine code to ensure that any IPIs that were sent
while the target CPU was online, would be noticed and handled by that CPU
without fail before it goes offline. Thus, this avoids scenarios where IPIs
are received on offline CPUs (as long as the sender uses proper hotplug
synchronization).


In fact, I debugged the problem by using Patch 1, and found that the
payload of the IPI was always the block layer's trigger_softirq() function.
But I was not able to find anything wrong with the block layer code. That's
when I started looking at the stop-machine code and realized that there is
a race-window which makes the IPI _receiver_ the culprit, not the sender.
Patch 2 fixes that race and hence this should put an end to most of the
hard-to-debug IPI-to-offline-CPU issues.


 Srivatsa S. Bhat (2):
      smp: Print more useful debug info upon receiving IPI on an offline CPU
      CPU hotplug, stop-machine: Plug race-window that leads to "IPI-to-offline-CPU"


 kernel/smp.c          |   15 ++++++++++++++-
 kernel/stop_machine.c |   22 +++++++++++++++++++---
 2 files changed, 33 insertions(+), 4 deletions(-)


Thanks,
Srivatsa S. Bhat
IBM Linux Technology Center


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH 1/2] smp: Print more useful debug info upon receiving IPI on an offline CPU
  2014-05-06 18:02 [PATCH 0/2] CPU hotplug: Fix the long-standing "IPI to offline CPU" issue Srivatsa S. Bhat
@ 2014-05-06 18:02 ` Srivatsa S. Bhat
  2014-05-06 20:34   ` Andrew Morton
  2014-05-06 18:03 ` [PATCH 2/2] CPU hotplug, stop-machine: Plug race-window that leads to "IPI-to-offline-CPU" Srivatsa S. Bhat
  1 sibling, 1 reply; 12+ messages in thread
From: Srivatsa S. Bhat @ 2014-05-06 18:02 UTC (permalink / raw)
  To: peterz, tglx, mingo, tj, rusty, akpm, fweisbec, hch
  Cc: mgorman, riel, bp, rostedt, mgalbraith, ego, paulmck, oleg, rjw,
	linux-kernel, Srivatsa S. Bhat

Today the smp-call-function code just prints a warning if we get an IPI on
an offline CPU. This info is sufficient to let us know that something went
wrong, but often it is very hard to debug exactly who sent the IPI and why,
from this info alone.

In most cases, we get the warning about the IPI to an offline CPU, immediately
after the CPU going offline comes out of the stop-machine phase and reenables
interrupts. Since all online CPUs participate in stop-machine, the information
regarding the sender of the IPI is already lost by the time we exit the
stop-machine loop. So even if we dump the stack on each CPU at this point,
we won't find anything useful since all of them will show the stack-trace of
the stopper thread. So we need a better way to figure out who sent the IPI and
why.

To achieve this, when we detect an IPI targeted to an offline CPU, loop through
the call-single-data linked list and print out the payload (i.e., the name
of the function which was supposed to be executed by the target CPU). This
would give us an insight as to who might have sent the IPI and help us debug
this further.

Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 kernel/smp.c |   15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index 06d574e..6cb0e2e 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -185,15 +185,28 @@ void generic_smp_call_function_single_interrupt(void)
 {
 	struct llist_node *entry;
 	struct call_single_data *csd, *csd_next;
+	int warn = 0;
 
 	/*
 	 * Shouldn't receive this interrupt on a cpu that is not yet online.
 	 */
-	WARN_ON_ONCE(!cpu_online(smp_processor_id()));
+	if (unlikely(!cpu_online(smp_processor_id()))) {
+		warn = 1;
+		WARN_ON_ONCE(1);
+	}
 
 	entry = llist_del_all(&__get_cpu_var(call_single_queue));
 	entry = llist_reverse_order(entry);
 
+	if (unlikely(warn)) {
+		/*
+		 * We don't have to use the _safe() variant here
+		 * because we are not invoking the IPI handlers yet.
+		 */
+		llist_for_each_entry(csd, entry, llist)
+			pr_warn("SMP IPI Payload: %pS \n", csd->func);
+	}
+
 	llist_for_each_entry_safe(csd, csd_next, entry, llist) {
 		csd->func(csd->info);
 		csd_unlock(csd);



* [PATCH 2/2] CPU hotplug, stop-machine: Plug race-window that leads to "IPI-to-offline-CPU"
  2014-05-06 18:02 [PATCH 0/2] CPU hotplug: Fix the long-standing "IPI to offline CPU" issue Srivatsa S. Bhat
  2014-05-06 18:02 ` [PATCH 1/2] smp: Print more useful debug info upon receiving IPI on an offline CPU Srivatsa S. Bhat
@ 2014-05-06 18:03 ` Srivatsa S. Bhat
  2014-05-06 20:40   ` Andrew Morton
  1 sibling, 1 reply; 12+ messages in thread
From: Srivatsa S. Bhat @ 2014-05-06 18:03 UTC (permalink / raw)
  To: peterz, tglx, mingo, tj, rusty, akpm, fweisbec, hch
  Cc: mgorman, riel, bp, rostedt, mgalbraith, ego, paulmck, oleg, rjw,
	linux-kernel, Srivatsa S. Bhat

During CPU offline, stop-machine is used to take control over all the online
CPUs (via the per-cpu stopper thread) and then run take_cpu_down() on the CPU
that is to be taken offline.

But stop-machine itself has several stages: _PREPARE, _DISABLE_IRQ, _RUN etc.
The important thing to note here is that the _DISABLE_IRQ stage comes much
later after starting stop-machine, and hence there is a large window where
other CPUs can send IPIs to the CPU going offline. As a result, we can
encounter a scenario as depicted below, which causes IPIs to be sent to the
CPU going offline, and that CPU notices them *after* it has gone offline,
triggering the "IPI-to-offline-CPU" warning from the smp-call-function code.


              CPU 1                                         CPU 2
          (Online CPU)                               (CPU going offline)

       Enter _PREPARE stage                          Enter _PREPARE stage

                                                     Enter _DISABLE_IRQ stage


                                                   =
       Got a device interrupt,                     | Didn't notice the IPI
       and the interrupt handler                   | since interrupts were
       called smp_call_function()                  | disabled on this CPU.
       and sent an IPI to CPU 2.                   |
                                                   =


       Enter _DISABLE_IRQ stage


       Enter _RUN stage                              Enter _RUN stage

                                  =
       Busy loop with interrupts  |                  Invoke take_cpu_down()
       disabled.                  |                  and take CPU 2 offline
                                  =


       Enter _EXIT stage                             Enter _EXIT stage

       Re-enable interrupts                          Re-enable interrupts

                                                     The pending IPI is noted
                                                     immediately, but alas,
                                                     the CPU is offline at
                                                     this point.



So, as we can observe from this scenario, the IPI was sent when CPU 2 was
still online, and hence it was perfectly legal. But unfortunately it was
noted only after CPU 2 went offline, resulting in the warning from the
IPI handling code. In other words, the fault was not at the sender, but
at the receiver side - and if we look closely, the real bug is in the
stop-machine sequence itself.

The problem here is that the CPU going offline disabled its local interrupts
(by entering _DISABLE_IRQ phase) *before* the other CPUs. And that's the
reason why it was not able to respond to the IPI before going offline.

A simple solution to this problem is to ensure that the CPU going offline
*follows* all other CPUs while entering each subsequent phase within
stop-machine. In particular, all other CPUs will enter the _DISABLE_IRQ
phase and disable their local interrupts, and only *then*, the CPU going
offline will follow suit. Since the other CPUs are executing the stop-machine
code with interrupts disabled, they won't send any IPIs at all, at that
point. And by the time stop-machine ends, the CPU would have gone offline
and disappeared from the cpu_online_mask, and hence future invocations of
smp_call_function() and friends will automatically prune that CPU out.
Thus, we can guarantee that no CPU will end up *inadvertently* sending
IPIs to an offline CPU.

We can implement this by introducing a "holding area" for the CPUs marked
as 'active_cpus', and use this infrastructure to let the other CPUs
progress from one stage to the next, before allowing the active_cpus to
do the same thing.

Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 kernel/stop_machine.c |   22 +++++++++++++++++++---
 1 file changed, 19 insertions(+), 3 deletions(-)

diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
index 01fbae5..d65168e 100644
--- a/kernel/stop_machine.c
+++ b/kernel/stop_machine.c
@@ -165,12 +165,21 @@ static void ack_state(struct multi_stop_data *msdata)
 		set_state(msdata, msdata->state + 1);
 }
 
+/* Holding area for active CPUs, to let all the non-active CPUs go first */
+static void hold_active_cpus(struct multi_stop_data *msdata,
+			     int num_active_cpus)
+{
+	/* Wait until all the non-active threads ack the state */
+	while (atomic_read(&msdata->thread_ack) > num_active_cpus)
+		cpu_relax();
+}
+
 /* This is the cpu_stop function which stops the CPU. */
 static int multi_cpu_stop(void *data)
 {
 	struct multi_stop_data *msdata = data;
 	enum multi_stop_state curstate = MULTI_STOP_NONE;
-	int cpu = smp_processor_id(), err = 0;
+	int cpu = smp_processor_id(), num_active_cpus, err = 0;
 	unsigned long flags;
 	bool is_active;
 
@@ -180,15 +189,22 @@ static int multi_cpu_stop(void *data)
 	 */
 	local_save_flags(flags);
 
-	if (!msdata->active_cpus)
+	if (!msdata->active_cpus) {
 		is_active = cpu == cpumask_first(cpu_online_mask);
-	else
+		num_active_cpus = 1;
+	} else {
 		is_active = cpumask_test_cpu(cpu, msdata->active_cpus);
+		num_active_cpus = cpumask_weight(msdata->active_cpus);
+	}
 
 	/* Simple state machine */
 	do {
 		/* Chill out and ensure we re-read multi_stop_state. */
 		cpu_relax();
+
+		if (is_active)
+			hold_active_cpus(msdata, num_active_cpus);
+
 		if (msdata->state != curstate) {
 			curstate = msdata->state;
 			switch (curstate) {



* Re: [PATCH 1/2] smp: Print more useful debug info upon receiving IPI on an offline CPU
  2014-05-06 18:02 ` [PATCH 1/2] smp: Print more useful debug info upon receiving IPI on an offline CPU Srivatsa S. Bhat
@ 2014-05-06 20:34   ` Andrew Morton
  2014-05-06 21:23     ` Srivatsa S. Bhat
  0 siblings, 1 reply; 12+ messages in thread
From: Andrew Morton @ 2014-05-06 20:34 UTC (permalink / raw)
  To: Srivatsa S. Bhat
  Cc: peterz, tglx, mingo, tj, rusty, fweisbec, hch, mgorman, riel, bp,
	rostedt, mgalbraith, ego, paulmck, oleg, rjw, linux-kernel

On Tue, 06 May 2014 23:32:51 +0530 "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com> wrote:

> Today the smp-call-function code just prints a warning if we get an IPI on
> an offline CPU. This info is sufficient to let us know that something went
> wrong, but often it is very hard to debug exactly who sent the IPI and why,
> from this info alone.
> 
> In most cases, we get the warning about the IPI to an offline CPU, immediately
> after the CPU going offline comes out of the stop-machine phase and reenables
> interrupts. Since all online CPUs participate in stop-machine, the information
> regarding the sender of the IPI is already lost by the time we exit the
> stop-machine loop. So even if we dump the stack on each CPU at this point,
> we won't find anything useful since all of them will show the stack-trace of
> the stopper thread. So we need a better way to figure out who sent the IPI and
> why.
> 
> To achieve this, when we detect an IPI targeted to an offline CPU, loop through
> the call-single-data linked list and print out the payload (i.e., the name
> of the function which was supposed to be executed by the target CPU). This
> would give us an insight as to who might have sent the IPI and help us debug
> this further.
> 
> ...
>
> --- a/kernel/smp.c
> +++ b/kernel/smp.c
> @@ -185,15 +185,28 @@ void generic_smp_call_function_single_interrupt(void)
>  {
>  	struct llist_node *entry;
>  	struct call_single_data *csd, *csd_next;
> +	int warn = 0;
>  
>  	/*
>  	 * Shouldn't receive this interrupt on a cpu that is not yet online.
>  	 */
> -	WARN_ON_ONCE(!cpu_online(smp_processor_id()));
> +	if (unlikely(!cpu_online(smp_processor_id()))) {
> +		warn = 1;
> +		WARN_ON_ONCE(1);
> +	}
>  
>  	entry = llist_del_all(&__get_cpu_var(call_single_queue));
>  	entry = llist_reverse_order(entry);
>  
> +	if (unlikely(warn)) {
> +		/*
> +		 * We don't have to use the _safe() variant here
> +		 * because we are not invoking the IPI handlers yet.
> +		 */
> +		llist_for_each_entry(csd, entry, llist)
> +			pr_warn("SMP IPI Payload: %pS \n", csd->func);
> +	}
> +

This will emit the WARN_ON a single time, but will emit the "IPI
Payload" list every time the cpu is found to be offline.  So on the
second and successive occurrences some output will still occur.

Unfortunately WARN_ON_ONCE() returns the value of `condition', not
`__warned', so we have to hand-code things.  Like this?

void generic_smp_call_function_single_interrupt(void)
{
	struct llist_node *entry;
	struct call_single_data *csd, *csd_next;
	static bool warned;

	entry = llist_del_all(&__get_cpu_var(call_single_queue));
	entry = llist_reverse_order(entry);

	/*
	 * Shouldn't receive this interrupt on a cpu that is not yet online.
	 */
	if (unlikely(!cpu_online(smp_processor_id()) && !warned)) {
		warned = true;
		WARN_ON(1);
		/*
		 * We don't have to use the _safe() variant here
		 * because we are not invoking the IPI handlers yet.
		 */
		llist_for_each_entry(csd, entry, llist)
			pr_warn("SMP IPI Payload: %pS \n", csd->func);
	}

	llist_for_each_entry_safe(csd, csd_next, entry, llist) {
		csd->func(csd->info);
		csd_unlock(csd);
	}
}


--- a/kernel/smp.c~smp-print-more-useful-debug-info-upon-receiving-ipi-on-an-offline-cpu-fix
+++ a/kernel/smp.c
@@ -185,20 +185,17 @@ void generic_smp_call_function_single_in
 {
 	struct llist_node *entry;
 	struct call_single_data *csd, *csd_next;
-	int warn = 0;
-
-	/*
-	 * Shouldn't receive this interrupt on a cpu that is not yet online.
-	 */
-	if (unlikely(!cpu_online(smp_processor_id()))) {
-		warn = 1;
-		WARN_ON_ONCE(1);
-	}
+	static bool warned;
 
 	entry = llist_del_all(&__get_cpu_var(call_single_queue));
 	entry = llist_reverse_order(entry);
 
-	if (unlikely(warn)) {
+	/*
+	 * Shouldn't receive this interrupt on a cpu that is not yet online.
+	 */
+	if (unlikely(!cpu_online(smp_processor_id()) && !warned)) {
+		warned = true;
+		WARN_ON(1);
 		/*
 		 * We don't have to use the _safe() variant here
 		 * because we are not invoking the IPI handlers yet.
_



* Re: [PATCH 2/2] CPU hotplug, stop-machine: Plug race-window that leads to "IPI-to-offline-CPU"
  2014-05-06 18:03 ` [PATCH 2/2] CPU hotplug, stop-machine: Plug race-window that leads to "IPI-to-offline-CPU" Srivatsa S. Bhat
@ 2014-05-06 20:40   ` Andrew Morton
  2014-05-06 20:42     ` Tejun Heo
  0 siblings, 1 reply; 12+ messages in thread
From: Andrew Morton @ 2014-05-06 20:40 UTC (permalink / raw)
  To: Srivatsa S. Bhat
  Cc: peterz, tglx, mingo, tj, rusty, fweisbec, hch, mgorman, riel, bp,
	rostedt, mgalbraith, ego, paulmck, oleg, rjw, linux-kernel

On Tue, 06 May 2014 23:33:03 +0530 "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com> wrote:

> --- a/kernel/stop_machine.c
> +++ b/kernel/stop_machine.c
> @@ -165,12 +165,21 @@ static void ack_state(struct multi_stop_data *msdata)
>  		set_state(msdata, msdata->state + 1);
>  }
>  
> +/* Holding area for active CPUs, to let all the non-active CPUs go first */
> +static void hold_active_cpus(struct multi_stop_data *msdata,
> +			     int num_active_cpus)
> +{
> +	/* Wait until all the non-active threads ack the state */
> +	while (atomic_read(&msdata->thread_ack) > num_active_cpus)
> +		cpu_relax();
> +}

The code comments are a bit lame.  Can we do a better job of explaining
the overall dynamic behaviour?  Help readers to understand the problem
which hold_active_cpus() is solving and how it solves it?




* Re: [PATCH 2/2] CPU hotplug, stop-machine: Plug race-window that leads to "IPI-to-offline-CPU"
  2014-05-06 20:40   ` Andrew Morton
@ 2014-05-06 20:42     ` Tejun Heo
  2014-05-06 21:27       ` Srivatsa S. Bhat
  0 siblings, 1 reply; 12+ messages in thread
From: Tejun Heo @ 2014-05-06 20:42 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Srivatsa S. Bhat, peterz, tglx, mingo, rusty, fweisbec, hch,
	mgorman, riel, bp, rostedt, mgalbraith, ego, paulmck, oleg, rjw,
	linux-kernel

On Tue, May 06, 2014 at 01:40:54PM -0700, Andrew Morton wrote:
> On Tue, 06 May 2014 23:33:03 +0530 "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com> wrote:
> 
> > --- a/kernel/stop_machine.c
> > +++ b/kernel/stop_machine.c
> > @@ -165,12 +165,21 @@ static void ack_state(struct multi_stop_data *msdata)
> >  		set_state(msdata, msdata->state + 1);
> >  }
> >  
> > +/* Holding area for active CPUs, to let all the non-active CPUs go first */
> > +static void hold_active_cpus(struct multi_stop_data *msdata,
> > +			     int num_active_cpus)
> > +{
> > +	/* Wait until all the non-active threads ack the state */
> > +	while (atomic_read(&msdata->thread_ack) > num_active_cpus)
> > +		cpu_relax();
> > +}
> 
> The code comments are a bit lame.  Can we do a better job of explaining
> the overall dynamic behaviour?  Help readers to understand the problem
> which hold_active_cpus() is solving and how it solves it?

Does it even need to be a separate function?  I kinda really dislike
trivial helpers which are used only once.  It obfuscates more than
helping anything.  I think proper comment where the actual
synchronization is happening along with open coded wait would be
easier to follow.

Thanks.

-- 
tejun


* Re: [PATCH 1/2] smp: Print more useful debug info upon receiving IPI on an offline CPU
  2014-05-06 20:34   ` Andrew Morton
@ 2014-05-06 21:23     ` Srivatsa S. Bhat
  2014-05-06 22:01       ` [PATCH v2 " Srivatsa S. Bhat
  0 siblings, 1 reply; 12+ messages in thread
From: Srivatsa S. Bhat @ 2014-05-06 21:23 UTC (permalink / raw)
  To: Andrew Morton
  Cc: peterz, tglx, mingo, tj, rusty, fweisbec, hch, mgorman, riel, bp,
	rostedt, mgalbraith, ego, paulmck, oleg, rjw, linux-kernel

On 05/07/2014 02:04 AM, Andrew Morton wrote:
> On Tue, 06 May 2014 23:32:51 +0530 "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com> wrote:
> 
>> Today the smp-call-function code just prints a warning if we get an IPI on
>> an offline CPU. This info is sufficient to let us know that something went
>> wrong, but often it is very hard to debug exactly who sent the IPI and why,
>> from this info alone.
>>
>> In most cases, we get the warning about the IPI to an offline CPU, immediately
>> after the CPU going offline comes out of the stop-machine phase and reenables
>> interrupts. Since all online CPUs participate in stop-machine, the information
>> regarding the sender of the IPI is already lost by the time we exit the
>> stop-machine loop. So even if we dump the stack on each CPU at this point,
>> we won't find anything useful since all of them will show the stack-trace of
>> the stopper thread. So we need a better way to figure out who sent the IPI and
>> why.
>>
>> To achieve this, when we detect an IPI targeted to an offline CPU, loop through
>> the call-single-data linked list and print out the payload (i.e., the name
>> of the function which was supposed to be executed by the target CPU). This
>> would give us an insight as to who might have sent the IPI and help us debug
>> this further.
>>
>> ...
>>
>> --- a/kernel/smp.c
>> +++ b/kernel/smp.c
>> @@ -185,15 +185,28 @@ void generic_smp_call_function_single_interrupt(void)
>>  {
>>  	struct llist_node *entry;
>>  	struct call_single_data *csd, *csd_next;
>> +	int warn = 0;
>>  
>>  	/*
>>  	 * Shouldn't receive this interrupt on a cpu that is not yet online.
>>  	 */
>> -	WARN_ON_ONCE(!cpu_online(smp_processor_id()));
>> +	if (unlikely(!cpu_online(smp_processor_id()))) {
>> +		warn = 1;
>> +		WARN_ON_ONCE(1);
>> +	}
>>  
>>  	entry = llist_del_all(&__get_cpu_var(call_single_queue));
>>  	entry = llist_reverse_order(entry);
>>  
>> +	if (unlikely(warn)) {
>> +		/*
>> +		 * We don't have to use the _safe() variant here
>> +		 * because we are not invoking the IPI handlers yet.
>> +		 */
>> +		llist_for_each_entry(csd, entry, llist)
>> +			pr_warn("SMP IPI Payload: %pS \n", csd->func);
>> +	}
>> +
> 
> This will emit the WARN_ON a single time, but will emit the "IPI
> Payload" list every time the cpu is found to be offline.  So on the
> second and successive occurrences some output will still occur.
> 
> Unfortunately WARN_ON_ONCE() returns the value of `condition', not
> `__warned', so we have to hand-code things.  Like this?
>

Yeah, this version looks better. Sorry for missing this earlier.
I'll incorporate this in my next version of the patchset.

Thanks a lot!

Regards,
Srivatsa S. Bhat
 
> void generic_smp_call_function_single_interrupt(void)
> {
> 	struct llist_node *entry;
> 	struct call_single_data *csd, *csd_next;
> 	static bool warned;
> 
> 	entry = llist_del_all(&__get_cpu_var(call_single_queue));
> 	entry = llist_reverse_order(entry);
> 
> 	/*
> 	 * Shouldn't receive this interrupt on a cpu that is not yet online.
> 	 */
> 	if (unlikely(!cpu_online(smp_processor_id()) && !warned)) {
> 		warned = true;
> 		WARN_ON(1);
> 		/*
> 		 * We don't have to use the _safe() variant here
> 		 * because we are not invoking the IPI handlers yet.
> 		 */
> 		llist_for_each_entry(csd, entry, llist)
> 			pr_warn("SMP IPI Payload: %pS \n", csd->func);
> 	}
> 
> 	llist_for_each_entry_safe(csd, csd_next, entry, llist) {
> 		csd->func(csd->info);
> 		csd_unlock(csd);
> 	}
> }
> 
> 
> --- a/kernel/smp.c~smp-print-more-useful-debug-info-upon-receiving-ipi-on-an-offline-cpu-fix
> +++ a/kernel/smp.c
> @@ -185,20 +185,17 @@ void generic_smp_call_function_single_in
>  {
>  	struct llist_node *entry;
>  	struct call_single_data *csd, *csd_next;
> -	int warn = 0;
> -
> -	/*
> -	 * Shouldn't receive this interrupt on a cpu that is not yet online.
> -	 */
> -	if (unlikely(!cpu_online(smp_processor_id()))) {
> -		warn = 1;
> -		WARN_ON_ONCE(1);
> -	}
> +	static bool warned;
> 
>  	entry = llist_del_all(&__get_cpu_var(call_single_queue));
>  	entry = llist_reverse_order(entry);
> 
> -	if (unlikely(warn)) {
> +	/*
> +	 * Shouldn't receive this interrupt on a cpu that is not yet online.
> +	 */
> +	if (unlikely(!cpu_online(smp_processor_id()) && !warned)) {
> +		warned = true;
> +		WARN_ON(1);
>  		/*
>  		 * We don't have to use the _safe() variant here
>  		 * because we are not invoking the IPI handlers yet.
> _
> 



* Re: [PATCH 2/2] CPU hotplug, stop-machine: Plug race-window that leads to "IPI-to-offline-CPU"
  2014-05-06 20:42     ` Tejun Heo
@ 2014-05-06 21:27       ` Srivatsa S. Bhat
  2014-05-06 22:01         ` [PATCH v2 " Srivatsa S. Bhat
  0 siblings, 1 reply; 12+ messages in thread
From: Srivatsa S. Bhat @ 2014-05-06 21:27 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Andrew Morton, peterz, tglx, mingo, rusty, fweisbec, hch, mgorman,
	riel, bp, rostedt, mgalbraith, ego, paulmck, oleg, rjw,
	linux-kernel

On 05/07/2014 02:12 AM, Tejun Heo wrote:
> On Tue, May 06, 2014 at 01:40:54PM -0700, Andrew Morton wrote:
>> On Tue, 06 May 2014 23:33:03 +0530 "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com> wrote:
>>
>>> --- a/kernel/stop_machine.c
>>> +++ b/kernel/stop_machine.c
>>> @@ -165,12 +165,21 @@ static void ack_state(struct multi_stop_data *msdata)
>>>  		set_state(msdata, msdata->state + 1);
>>>  }
>>>  
>>> +/* Holding area for active CPUs, to let all the non-active CPUs go first */
>>> +static void hold_active_cpus(struct multi_stop_data *msdata,
>>> +			     int num_active_cpus)
>>> +{
>>> +	/* Wait until all the non-active threads ack the state */
>>> +	while (atomic_read(&msdata->thread_ack) > num_active_cpus)
>>> +		cpu_relax();
>>> +}
>>
>> The code comments are a bit lame.  Can we do a better job of explaining
>> the overall dynamic behaviour?  Help readers to understand the problem
>> which hold_active_cpus() is solving and how it solves it?
> 
> Does it even need to be a separate function?  I kinda really dislike
> trivial helpers which are used only once.  It obfuscates more than
> helping anything.  I think proper comment where the actual
> synchronization is happening along with open coded wait would be
> easier to follow.
> 

Ok, I'll open code it and add an appropriate comment explaining the
synchronization.

Thank you!
 
Regards,
Srivatsa S. Bhat



* [PATCH v2 1/2] smp: Print more useful debug info upon receiving IPI on an offline CPU
  2014-05-06 21:23     ` Srivatsa S. Bhat
@ 2014-05-06 22:01       ` Srivatsa S. Bhat
  0 siblings, 0 replies; 12+ messages in thread
From: Srivatsa S. Bhat @ 2014-05-06 22:01 UTC (permalink / raw)
  To: Andrew Morton
  Cc: peterz, tglx, mingo, tj, rusty, fweisbec, hch, mgorman, riel, bp,
	rostedt, mgalbraith, ego, paulmck, oleg, rjw, linux-kernel

[...]
>> This will emit the WARN_ON a single time, but will emit the "IPI
>> Payload" list every time the cpu is found to be offline.  So on the
>> second and successive occurrences some output will still occur.
>>
>> Unfortunately WARN_ON_ONCE() returns the value of `condition', not
>> `__warned', so we have to hand-code things.  Like this?
>>
> 
> Yeah, this version looks better. Sorry for missing this earlier.
> I'll incorporate this in my next version of the patchset.
> 

Here is the updated patch:

-------------------------------------------------------------------------

From: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
[PATCH v2 1/2] smp: Print more useful debug info upon receiving IPI on an offline CPU

Today the smp-call-function code just prints a warning if we get an IPI on
an offline CPU. This info is sufficient to let us know that something went
wrong, but often it is very hard to debug exactly who sent the IPI and why,
from this info alone.

In most cases, we get the warning about the IPI to an offline CPU, immediately
after the CPU going offline comes out of the stop-machine phase and reenables
interrupts. Since all online CPUs participate in stop-machine, the information
regarding the sender of the IPI is already lost by the time we exit the
stop-machine loop. So even if we dump the stack on each CPU at this point,
we won't find anything useful since all of them will show the stack-trace of
the stopper thread. So we need a better way to figure out who sent the IPI and
why.

To achieve this, when we detect an IPI targeted to an offline CPU, loop through
the call-single-data linked list and print out the payload (i.e., the name
of the function which was supposed to be executed by the target CPU). This
would give us an insight as to who might have sent the IPI and help us debug
this further.

Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 kernel/smp.c |   18 ++++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index 06d574e..f864921 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -185,14 +185,24 @@ void generic_smp_call_function_single_interrupt(void)
 {
 	struct llist_node *entry;
 	struct call_single_data *csd, *csd_next;
+	static bool warned;
+
+	entry = llist_del_all(&__get_cpu_var(call_single_queue));
+	entry = llist_reverse_order(entry);
 
 	/*
 	 * Shouldn't receive this interrupt on a cpu that is not yet online.
 	 */
-	WARN_ON_ONCE(!cpu_online(smp_processor_id()));
-
-	entry = llist_del_all(&__get_cpu_var(call_single_queue));
-	entry = llist_reverse_order(entry);
+	if (unlikely(!cpu_online(smp_processor_id()) && !warned)) {
+		warned = true;
+		WARN_ON(1);
+		/*
+		 * We don't have to use the _safe() variant here
+		 * because we are not invoking the IPI handlers yet.
+		 */
+		llist_for_each_entry(csd, entry, llist)
+			pr_warn("SMP IPI Payload: %pS \n", csd->func);
+	}
 
 	llist_for_each_entry_safe(csd, csd_next, entry, llist) {
 		csd->func(csd->info);




* [PATCH v2 2/2] CPU hotplug, stop-machine: Plug race-window that leads to "IPI-to-offline-CPU"
  2014-05-06 21:27       ` Srivatsa S. Bhat
@ 2014-05-06 22:01         ` Srivatsa S. Bhat
  2014-05-10  3:06           ` Tejun Heo
  0 siblings, 1 reply; 12+ messages in thread
From: Srivatsa S. Bhat @ 2014-05-06 22:01 UTC (permalink / raw)
  To: Tejun Heo, Andrew Morton
  Cc: peterz, tglx, mingo, rusty, fweisbec, hch, mgorman, riel, bp,
	rostedt, mgalbraith, ego, paulmck, oleg, rjw, linux-kernel

[...]
>>> The code comments are a bit lame.  Can we do a better job of explaining
>>> the overall dynamic behaviour?  Help readers to understand the problem
>>> which hold_active_cpus() is solving and how it solves it?
>>
>> Does it even need to be a separate function?  I kinda really dislike
>> trivial helpers which are used only once.  It obfuscates more than
>> helping anything.  I think proper comment where the actual
>> synchronization is happening along with open coded wait would be
>> easier to follow.
>>
> 
> Ok, I'll open code it and add an appropriate comment explaining the
> synchronization.
> 

How about this?

---------------------------------------------------------------------------

From: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
[PATCH v2 2/2] CPU hotplug, stop-machine: Plug race-window that leads to "IPI-to-offline-CPU"

During CPU offline, stop-machine is used to take control over all the online
CPUs (via the per-cpu stopper thread) and then run take_cpu_down() on the CPU
that is to be taken offline.

But stop-machine itself has several stages: _PREPARE, _DISABLE_IRQ, _RUN etc.
The important thing to note here is that the _DISABLE_IRQ stage comes well
after stop-machine has started, and hence there is a large window in which
other CPUs can send IPIs to the CPU going offline. As a result, we can
encounter a scenario as depicted below, which causes IPIs to be sent to the
CPU going offline, and that CPU notices them *after* it has gone offline,
triggering the "IPI-to-offline-CPU" warning from the smp-call-function code.


              CPU 1                                         CPU 2
          (Online CPU)                               (CPU going offline)

       Enter _PREPARE stage                          Enter _PREPARE stage

                                                     Enter _DISABLE_IRQ stage


                                                   =
       Got a device interrupt,                     | Didn't notice the IPI
       and the interrupt handler                   | since interrupts were
       called smp_call_function()                  | disabled on this CPU.
       and sent an IPI to CPU 2.                   |
                                                   =


       Enter _DISABLE_IRQ stage


       Enter _RUN stage                              Enter _RUN stage

                                  =
       Busy loop with interrupts  |                  Invoke take_cpu_down()
       disabled.                  |                  and take CPU 2 offline
                                  =


       Enter _EXIT stage                             Enter _EXIT stage

       Re-enable interrupts                          Re-enable interrupts

                                                     The pending IPI is noted
                                                     immediately, but alas,
                                                     the CPU is offline at
                                                     this point.



So, as we can observe from this scenario, the IPI was sent when CPU 2 was
still online, and hence it was perfectly legal. But unfortunately it was
noted only after CPU 2 went offline, resulting in the warning from the
IPI handling code. In other words, the fault was not at the sender, but
at the receiver side - and if we look closely, the real bug is in the
stop-machine sequence itself.

The problem here is that the CPU going offline disabled its local interrupts
(by entering _DISABLE_IRQ phase) *before* the other CPUs. And that's the
reason why it was not able to respond to the IPI before going offline.

A simple solution to this problem is to ensure that the CPU going offline
*follows* all other CPUs while entering each subsequent phase within
stop-machine. In particular, all other CPUs will enter the _DISABLE_IRQ
phase and disable their local interrupts, and only *then* will the CPU
going offline follow suit. Since the other CPUs execute the stop-machine
code with interrupts disabled, they cannot send any IPIs at that point.
And by the time stop-machine ends, the CPU will have gone offline and
disappeared from cpu_online_mask, and hence future invocations of
smp_call_function() and friends will automatically prune that CPU out.
Thus, we can guarantee that no CPU will end up *inadvertently* sending
IPIs to an offline CPU.

We can implement this by introducing a "holding area" for the CPUs marked
as 'active_cpus', and use this infrastructure to let the other CPUs
progress from one stage to the next, before allowing the active_cpus to
do the same thing.

Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 kernel/stop_machine.c |   30 +++++++++++++++++++++++++++---
 1 file changed, 27 insertions(+), 3 deletions(-)

diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
index 01fbae5..7abb361 100644
--- a/kernel/stop_machine.c
+++ b/kernel/stop_machine.c
@@ -165,12 +165,13 @@ static void ack_state(struct multi_stop_data *msdata)
 		set_state(msdata, msdata->state + 1);
 }
 
+
 /* This is the cpu_stop function which stops the CPU. */
 static int multi_cpu_stop(void *data)
 {
 	struct multi_stop_data *msdata = data;
 	enum multi_stop_state curstate = MULTI_STOP_NONE;
-	int cpu = smp_processor_id(), err = 0;
+	int cpu = smp_processor_id(), num_active_cpus, err = 0;
 	unsigned long flags;
 	bool is_active;
 
@@ -180,15 +181,38 @@ static int multi_cpu_stop(void *data)
 	 */
 	local_save_flags(flags);
 
-	if (!msdata->active_cpus)
+	if (!msdata->active_cpus) {
 		is_active = cpu == cpumask_first(cpu_online_mask);
-	else
+		num_active_cpus = 1;
+	} else {
 		is_active = cpumask_test_cpu(cpu, msdata->active_cpus);
+		num_active_cpus = cpumask_weight(msdata->active_cpus);
+	}
 
 	/* Simple state machine */
 	do {
 		/* Chill out and ensure we re-read multi_stop_state. */
 		cpu_relax();
+
+		/*
+		 * In the case of CPU offline, we don't want the other CPUs to
+		 * send IPIs to the active_cpu (the one going offline) after it
+		 * has entered the _DISABLE_IRQ state (because, then it will
+		 * notice the IPIs only after it goes offline). So ensure that
+		 * the active_cpu always follows the others while entering
+		 * each subsequent state in this state-machine.
+		 *
+		 * msdata->thread_ack tracks the number of CPUs that are yet to
+		 * move to the next state, during each transition. So make the
+		 * active_cpu(s) wait until ->thread_ack indicates that the
+		 * active_cpus are the only ones left to complete the transition.
+		 */
+		if (is_active) {
+			/* Wait until all the non-active threads ack the state */
+			while (atomic_read(&msdata->thread_ack) > num_active_cpus)
+				cpu_relax();
+		}
+
 		if (msdata->state != curstate) {
 			curstate = msdata->state;
 			switch (curstate) {




* Re: [PATCH v2 2/2] CPU hotplug, stop-machine: Plug race-window that leads to "IPI-to-offline-CPU"
  2014-05-06 22:01         ` [PATCH v2 " Srivatsa S. Bhat
@ 2014-05-10  3:06           ` Tejun Heo
  2014-05-11 20:07             ` Srivatsa S. Bhat
  0 siblings, 1 reply; 12+ messages in thread
From: Tejun Heo @ 2014-05-10  3:06 UTC (permalink / raw)
  To: Srivatsa S. Bhat
  Cc: Andrew Morton, peterz, tglx, mingo, rusty, fweisbec, hch, mgorman,
	riel, bp, rostedt, mgalbraith, ego, paulmck, oleg, rjw,
	linux-kernel

On Wed, May 07, 2014 at 03:31:51AM +0530, Srivatsa S. Bhat wrote:
> diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
> index 01fbae5..7abb361 100644
> --- a/kernel/stop_machine.c
> +++ b/kernel/stop_machine.c
> @@ -165,12 +165,13 @@ static void ack_state(struct multi_stop_data *msdata)
>  		set_state(msdata, msdata->state + 1);
>  }
>  
> +

Why add a new line here?

>  /* This is the cpu_stop function which stops the CPU. */
>  static int multi_cpu_stop(void *data)
>  {
>  	struct multi_stop_data *msdata = data;
>  	enum multi_stop_state curstate = MULTI_STOP_NONE;
> -	int cpu = smp_processor_id(), err = 0;
> +	int cpu = smp_processor_id(), num_active_cpus, err = 0;

	TYPE var0 = INIT0, var1, var2 = INIT2;

looks kinda weird.  Maybe collect initialized ones to one side or
separate out uninitialized one to a separate declaration?

Also, isn't nr_active_cpus more common way of naming it?

>  	unsigned long flags;
>  	bool is_active;
>  
> @@ -180,15 +181,38 @@ static int multi_cpu_stop(void *data)
>  	 */
>  	local_save_flags(flags);
>  
> -	if (!msdata->active_cpus)
> +	if (!msdata->active_cpus) {
>  		is_active = cpu == cpumask_first(cpu_online_mask);
> -	else
> +		num_active_cpus = 1;
> +	} else {
>  		is_active = cpumask_test_cpu(cpu, msdata->active_cpus);
> +		num_active_cpus = cpumask_weight(msdata->active_cpus);
> +	}
>  
>  	/* Simple state machine */
>  	do {
>  		/* Chill out and ensure we re-read multi_stop_state. */
>  		cpu_relax();
> +
> +		/*
> +		 * In the case of CPU offline, we don't want the other CPUs to
> +		 * send IPIs to the active_cpu (the one going offline) after it
> +		 * has entered the _DISABLE_IRQ state (because, then it will
> +		 * notice the IPIs only after it goes offline). So ensure that
> +		 * the active_cpu always follows the others while entering
> +		 * each subsequent state in this state-machine.
> +		 *
> +		 * msdata->thread_ack tracks the number of CPUs that are yet to
> +		 * move to the next state, during each transition. So make the
> +		 * active_cpu(s) wait until ->thread_ack indicates that the
> +		 * active_cpus are the only ones left to complete the transition.
> +		 */
> +		if (is_active) {
> +			/* Wait until all the non-active threads ack the state */
> +			while (atomic_read(&msdata->thread_ack) > num_active_cpus)
> +				cpu_relax();
> +		}

Wouldn't it be cleaner to separate this out to a separate stage so
that there are two separate DISABLE_IRQ stages - sth like
MULTI_STOP_DISABLE_IRQ_INACTIVE and MULTI_STOP_DISABLE_IRQ_ACTIVE?
The above adds an ad-hoc mechanism on top of the existing mechanism
which is built to sequence similar things anyway.
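
Something along these lines (purely illustrative, not compiled against the
tree; the two new stage names are just placeholders):

```c
/*
 * Illustrative sketch only -- not from the posted patch. Splitting
 * _DISABLE_IRQ into two consecutive stages would let the existing
 * ack_state() sequencing enforce that the non-active CPUs disable
 * interrupts before the CPU going offline does.
 */
enum multi_stop_state {
	MULTI_STOP_NONE,
	MULTI_STOP_PREPARE,
	/* non-active CPUs disable interrupts here ... */
	MULTI_STOP_DISABLE_IRQ_INACTIVE,
	/* ... and only then does the CPU going offline follow */
	MULTI_STOP_DISABLE_IRQ_ACTIVE,
	MULTI_STOP_RUN,
	MULTI_STOP_EXIT,
};
```

ack_state() would then sequence the two stages exactly like any other
transition, with the active CPU acking only the second one.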

Thanks.

-- 
tejun


* Re: [PATCH v2 2/2] CPU hotplug, stop-machine: Plug race-window that leads to "IPI-to-offline-CPU"
  2014-05-10  3:06           ` Tejun Heo
@ 2014-05-11 20:07             ` Srivatsa S. Bhat
  0 siblings, 0 replies; 12+ messages in thread
From: Srivatsa S. Bhat @ 2014-05-11 20:07 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Andrew Morton, peterz, tglx, mingo, rusty, fweisbec, hch, mgorman,
	riel, bp, rostedt, mgalbraith, ego, paulmck, oleg, rjw,
	linux-kernel

On 05/10/2014 08:36 AM, Tejun Heo wrote:
> On Wed, May 07, 2014 at 03:31:51AM +0530, Srivatsa S. Bhat wrote:
>> diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
>> index 01fbae5..7abb361 100644
>> --- a/kernel/stop_machine.c
>> +++ b/kernel/stop_machine.c
>> @@ -165,12 +165,13 @@ static void ack_state(struct multi_stop_data *msdata)
>>  		set_state(msdata, msdata->state + 1);
>>  }
>>  
>> +
> 
> Why add a new line here?

Argh, a stray newline... will remove it.

> 
>>  /* This is the cpu_stop function which stops the CPU. */
>>  static int multi_cpu_stop(void *data)
>>  {
>>  	struct multi_stop_data *msdata = data;
>>  	enum multi_stop_state curstate = MULTI_STOP_NONE;
>> -	int cpu = smp_processor_id(), err = 0;
>> +	int cpu = smp_processor_id(), num_active_cpus, err = 0;
> 
> 	TYPE var0 = INIT0, var1, var2 = INIT2;
> 
> looks kinda weird.  Maybe collect initialized ones to one side or
> separate out uninitialized one to a separate declaration?
>

Yeah, now that you point out, it does look very odd. I don't
remember why I wrote it that way in the first place! :-(
I'll fix this in the next version. Thanks!

> Also, isn't nr_active_cpus more common way of naming it?
> 

Sure, will use this convention.

>>  	unsigned long flags;
>>  	bool is_active;
>>  
>> @@ -180,15 +181,38 @@ static int multi_cpu_stop(void *data)
>>  	 */
>>  	local_save_flags(flags);
>>  
>> -	if (!msdata->active_cpus)
>> +	if (!msdata->active_cpus) {
>>  		is_active = cpu == cpumask_first(cpu_online_mask);
>> -	else
>> +		num_active_cpus = 1;
>> +	} else {
>>  		is_active = cpumask_test_cpu(cpu, msdata->active_cpus);
>> +		num_active_cpus = cpumask_weight(msdata->active_cpus);
>> +	}
>>  
>>  	/* Simple state machine */
>>  	do {
>>  		/* Chill out and ensure we re-read multi_stop_state. */
>>  		cpu_relax();
>> +
>> +		/*
>> +		 * In the case of CPU offline, we don't want the other CPUs to
>> +		 * send IPIs to the active_cpu (the one going offline) after it
>> +		 * has entered the _DISABLE_IRQ state (because, then it will
>> +		 * notice the IPIs only after it goes offline). So ensure that
>> +		 * the active_cpu always follows the others while entering
>> +		 * each subsequent state in this state-machine.
>> +		 *
>> +		 * msdata->thread_ack tracks the number of CPUs that are yet to
>> +		 * move to the next state, during each transition. So make the
>> +		 * active_cpu(s) wait until ->thread_ack indicates that the
>> +		 * active_cpus are the only ones left to complete the transition.
>> +		 */
>> +		if (is_active) {
>> +			/* Wait until all the non-active threads ack the state */
>> +			while (atomic_read(&msdata->thread_ack) > num_active_cpus)
>> +				cpu_relax();
>> +		}
> 
> Wouldn't it be cleaner to separate this out to a separate stage so
> that there are two separate DISABLE_IRQ stages - sth like
> MULTI_STOP_DISABLE_IRQ_INACTIVE and MULTI_STOP_DISABLE_IRQ_ACTIVE?
> The above adds an ad-hoc mechanism on top of the existing mechanism
> which is built to sequence similar things anyway.
>

Indeed, that looks like a much more elegant method! Thanks a lot for the
suggestion Tejun, I'll use that in the next version of the patchset.

Thank you!

Regards,
Srivatsa S. Bhat



end of thread, other threads:[~2014-05-11 20:09 UTC | newest]

Thread overview: 12+ messages
2014-05-06 18:02 [PATCH 0/2] CPU hotplug: Fix the long-standing "IPI to offline CPU" issue Srivatsa S. Bhat
2014-05-06 18:02 ` [PATCH 1/2] smp: Print more useful debug info upon receiving IPI on an offline CPU Srivatsa S. Bhat
2014-05-06 20:34   ` Andrew Morton
2014-05-06 21:23     ` Srivatsa S. Bhat
2014-05-06 22:01       ` [PATCH v2 " Srivatsa S. Bhat
2014-05-06 18:03 ` [PATCH 2/2] CPU hotplug, stop-machine: Plug race-window that leads to "IPI-to-offline-CPU" Srivatsa S. Bhat
2014-05-06 20:40   ` Andrew Morton
2014-05-06 20:42     ` Tejun Heo
2014-05-06 21:27       ` Srivatsa S. Bhat
2014-05-06 22:01         ` [PATCH v2 " Srivatsa S. Bhat
2014-05-10  3:06           ` Tejun Heo
2014-05-11 20:07             ` Srivatsa S. Bhat
