netdev.vger.kernel.org archive mirror
* [PATCH] net: rps: fix data stall after hotplug
@ 2015-03-19 19:54 subashab
  2015-03-19 21:50 ` Eric Dumazet
  0 siblings, 1 reply; 11+ messages in thread
From: subashab @ 2015-03-19 19:54 UTC (permalink / raw)
  To: netdev; +Cc: eric.dumazet, therbert

When RPS is enabled, an IPI is triggered to enqueue the
backlog NAPI onto the remote CPU's poll list. If the CPU is
hotplugged after the NAPI_STATE_SCHED bit is set in
enqueue_to_backlog() but before the IPI is delivered
successfully, the backlog NAPI is never queued on the poll
list. As a consequence, dev_cpu_callback() does not clear
the NAPI_STATE_SCHED bit on hotplug. Since NAPI_STATE_SCHED
remains set even after the cpu comes back up, packets get
enqueued onto the input packet queue but are never
processed, because the IPI will never be triggered again.
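
For context, the enqueue path in question looks roughly like this (an
abridged sketch of enqueue_to_backlog() and rps_trigger_softirq() from
net/core/dev.c of this era, heavily simplified and not the exact code):

static int enqueue_to_backlog(struct sk_buff *skb, int cpu,
			      unsigned int *qtail)
{
	struct softnet_data *sd = &per_cpu(softnet_data, cpu);

	/* Queue the packet on the remote cpu's input_pkt_queue ... */
	__skb_queue_tail(&sd->input_pkt_queue, skb);

	/* ... and, for the first packet, mark the remote backlog NAPI as
	 * scheduled.  For a remote cpu the actual wakeup is deferred to an
	 * IPI (queued here, sent later from net_rps_action_and_irq_enable()).
	 * If that cpu is hotplugged before the IPI lands, NAPI_STATE_SCHED
	 * stays set while the backlog sits on no poll list.
	 */
	if (!__test_and_set_bit(NAPI_STATE_SCHED, &sd->backlog.state)) {
		if (!rps_ipi_queued(sd))
			____napi_schedule(sd, &sd->backlog);
	}
	return NET_RX_SUCCESS;
}

/* IPI handler on the remote cpu: the step that never runs in this race. */
static void rps_trigger_softirq(void *data)
{
	struct softnet_data *sd = data;

	____napi_schedule(sd, &sd->backlog);
	sd->received_rps++;
}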

This patch handles this race by unconditionally resetting
the NAPI state for the backlog NAPI on the offline CPU.

Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
---
 net/core/dev.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index 6f561de..61d9579 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -7119,12 +7119,11 @@ static int dev_cpu_callback(struct notifier_block *nfb,
 							    poll_list);

 		list_del_init(&napi->poll_list);
-		if (napi->poll == process_backlog)
-			napi->state = 0;
-		else
+		if (napi->poll != process_backlog)
 			____napi_schedule(sd, napi);
 	}

+	oldsd->backlog.state = 0;
 	raise_softirq_irqoff(NET_TX_SOFTIRQ);
 	local_irq_enable();

--
The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
 a Linux Foundation Collaborative Project


* Re: [PATCH] net: rps: fix data stall after hotplug
  2015-03-19 19:54 [PATCH] net: rps: fix data stall after hotplug subashab
@ 2015-03-19 21:50 ` Eric Dumazet
  2015-03-20 11:50   ` Eric Dumazet
  0 siblings, 1 reply; 11+ messages in thread
From: Eric Dumazet @ 2015-03-19 21:50 UTC (permalink / raw)
  To: subashab; +Cc: netdev

On Thu, 2015-03-19 at 19:54 +0000, subashab@codeaurora.org wrote:
> When RPS is enabled, IPI is triggered to enqueue the
> backlog NAPI to the poll list. If the CPU is hotplugged
> after the NAPI_STATE_SCHED bit is set on
> enqueue_to_backlog but before the IPI is delivered
> successfully, the poll list does not have the backlog
> NAPI queued. As a consequence of this, dev_cpu_callback
> does not clear the NAPI_STATE_SCHED bit on hotplug.
> Since NAPI_STATE_SCHED is set even after the cpu comes
> back up, packets get enqueued onto the input packet queue
> but are never processed since the IPI will not be triggered.
> 
> This patch handles this race by unconditionally resetting
> the NAPI state for the backlog NAPI on the offline CPU.
> 
> Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
> ---
>  net/core/dev.c | 5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/net/core/dev.c b/net/core/dev.c
> index 6f561de..61d9579 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -7119,12 +7119,11 @@ static int dev_cpu_callback(struct notifier_block
> *nfb,
>  							    poll_list);
> 
>  		list_del_init(&napi->poll_list);
> -		if (napi->poll == process_backlog)
> -			napi->state = 0;
> -		else
> +		if (napi->poll != process_backlog)
>  			____napi_schedule(sd, napi);
>  	}
> 
> +	oldsd->backlog.state = 0;
>  	raise_softirq_irqoff(NET_TX_SOFTIRQ);
>  	local_irq_enable();

Are you seeing this race on x86?

If IPIs are not reliable on your arch, I am guessing you should fix them.

Otherwise, even without hotplug you'll have hangs.


* Re: [PATCH] net: rps: fix data stall after hotplug
  2015-03-19 21:50 ` Eric Dumazet
@ 2015-03-20 11:50   ` Eric Dumazet
  2015-03-20 16:40     ` subashab
  0 siblings, 1 reply; 11+ messages in thread
From: Eric Dumazet @ 2015-03-20 11:50 UTC (permalink / raw)
  To: subashab; +Cc: netdev

On Thu, 2015-03-19 at 14:50 -0700, Eric Dumazet wrote:

> Are you seeing this race on x86 ?
> 
> If IPI are not reliable on your arch, I am guessing you should fix them.
> 
> Otherwise, even without hotplug you'll have hangs.

Please try this patch instead:

diff --git a/net/core/dev.c b/net/core/dev.c
index 5d43e010ef870a6ab92895297fe18d6e6a03593a..baa4bff9a6fbe0d77d7921865c038060cb5efffd 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4320,9 +4320,8 @@ static void net_rps_action_and_irq_enable(struct softnet_data *sd)
 		while (remsd) {
 			struct softnet_data *next = remsd->rps_ipi_next;
 
-			if (cpu_online(remsd->cpu))
-				smp_call_function_single_async(remsd->cpu,
-							   &remsd->csd);
+			smp_call_function_single_async(remsd->cpu,
+						       &remsd->csd);
 			remsd = next;
 		}
 	} else


* Re: [PATCH] net: rps: fix data stall after hotplug
  2015-03-20 11:50   ` Eric Dumazet
@ 2015-03-20 16:40     ` subashab
  2015-03-23 22:16       ` subashab
  0 siblings, 1 reply; 11+ messages in thread
From: subashab @ 2015-03-20 16:40 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: netdev

> On Thu, 2015-03-19 at 14:50 -0700, Eric Dumazet wrote:
>
>> Are you seeing this race on x86 ?
>>
>> If IPI are not reliable on your arch, I am guessing you should fix them.
>>
>> Otherwise, even without hotplug you'll have hangs.
>
> Please try instead this patch :
>
> diff --git a/net/core/dev.c b/net/core/dev.c
> index
> 5d43e010ef870a6ab92895297fe18d6e6a03593a..baa4bff9a6fbe0d77d7921865c038060cb5efffd
> 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -4320,9 +4320,8 @@ static void net_rps_action_and_irq_enable(struct
> softnet_data *sd)
>  		while (remsd) {
>  			struct softnet_data *next = remsd->rps_ipi_next;
>
> -			if (cpu_online(remsd->cpu))
> -				smp_call_function_single_async(remsd->cpu,
> -							   &remsd->csd);
> +			smp_call_function_single_async(remsd->cpu,
> +						       &remsd->csd);
>  			remsd = next;
>  		}
>  	} else
>
Thanks for the patch Eric. We are seeing this race on ARM.
I will try this and update.

--
The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
 a Linux Foundation Collaborative Project


* Re: [PATCH] net: rps: fix data stall after hotplug
  2015-03-20 16:40     ` subashab
@ 2015-03-23 22:16       ` subashab
  2015-03-23 22:29         ` Eric Dumazet
  0 siblings, 1 reply; 11+ messages in thread
From: subashab @ 2015-03-23 22:16 UTC (permalink / raw)
  To: eric.dumazet; +Cc: netdev

>> On Thu, 2015-03-19 at 14:50 -0700, Eric Dumazet wrote:
>>
>>> Are you seeing this race on x86 ?
>>>
>>> If IPI are not reliable on your arch, I am guessing you should fix
>>> them.
>>>
>>> Otherwise, even without hotplug you'll have hangs.
>>
>> Please try instead this patch :
>>
>> diff --git a/net/core/dev.c b/net/core/dev.c
>> index
>> 5d43e010ef870a6ab92895297fe18d6e6a03593a..baa4bff9a6fbe0d77d7921865c038060cb5efffd
>> 100644
>> --- a/net/core/dev.c
>> +++ b/net/core/dev.c
>> @@ -4320,9 +4320,8 @@ static void net_rps_action_and_irq_enable(struct
>> softnet_data *sd)
>>  		while (remsd) {
>>  			struct softnet_data *next = remsd->rps_ipi_next;
>>
>> -			if (cpu_online(remsd->cpu))
>> -				smp_call_function_single_async(remsd->cpu,
>> -							   &remsd->csd);
>> +			smp_call_function_single_async(remsd->cpu,
>> +						       &remsd->csd);
>>  			remsd = next;
>>  		}
>>  	} else
>>
>>
> Thanks for the patch Eric. We are seeing this race on ARM.
> I will try this and update.
>

Unfortunately, I am not able to reproduce the data stall now, with or
without the patch. Could you tell me more about the patch and what issue
you were suspecting?

Based on the code, it looks like we BUG out on our arch if we try to send
an IPI to an offline CPU. Since that BUG_ON is never hit, I suspect that
the IPI did not actually fail.

void smp_send_reschedule(int cpu)
{
        BUG_ON(cpu_is_offline(cpu));
        smp_cross_call_common(cpumask_of(cpu), IPI_RESCHEDULE);
}

--
The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
 a Linux Foundation Collaborative Project


* Re: [PATCH] net: rps: fix data stall after hotplug
  2015-03-23 22:16       ` subashab
@ 2015-03-23 22:29         ` Eric Dumazet
  2015-03-25 18:54           ` subashab
  0 siblings, 1 reply; 11+ messages in thread
From: Eric Dumazet @ 2015-03-23 22:29 UTC (permalink / raw)
  To: subashab; +Cc: netdev

On Mon, 2015-03-23 at 22:16 +0000, subashab@codeaurora.org wrote:
> >> On Thu, 2015-03-19 at 14:50 -0700, Eric Dumazet wrote:
> >>
> >>> Are you seeing this race on x86 ?
> >>>
> >>> If IPI are not reliable on your arch, I am guessing you should fix
> >>> them.
> >>>
> >>> Otherwise, even without hotplug you'll have hangs.
> >>
> >> Please try instead this patch :
> >>
> >> diff --git a/net/core/dev.c b/net/core/dev.c
> >> index
> >> 5d43e010ef870a6ab92895297fe18d6e6a03593a..baa4bff9a6fbe0d77d7921865c038060cb5efffd
> >> 100644
> >> --- a/net/core/dev.c
> >> +++ b/net/core/dev.c
> >> @@ -4320,9 +4320,8 @@ static void net_rps_action_and_irq_enable(struct
> >> softnet_data *sd)
> >>  		while (remsd) {
> >>  			struct softnet_data *next = remsd->rps_ipi_next;
> >>
> >> -			if (cpu_online(remsd->cpu))
> >> -				smp_call_function_single_async(remsd->cpu,
> >> -							   &remsd->csd);
> >> +			smp_call_function_single_async(remsd->cpu,
> >> +						       &remsd->csd);
> >>  			remsd = next;
> >>  		}
> >>  	} else
> >>
> >>
> > Thanks for the patch Eric. We are seeing this race on ARM.
> > I will try this and update.
> >
> 
> Unfortunately, I am not able to reproduce data stall now with or without
> the patch. Could you tell me more about the patch and what issue you were
> suspecting?
> 
> Based on the code, it looks like we BUG out on our arch if we try to call
> an IPI on an offline CPU. Since this condition is never hit, I feel that
> the IPI might not have failed.
> 
> void smp_send_reschedule(int cpu)
> {
>         BUG_ON(cpu_is_offline(cpu));
>         smp_cross_call_common(cpumask_of(cpu), IPI_RESCHEDULE);
> }



The bug I am fixing is the following :


if (cpu_is_online(x))  [1]
    target = x

...

queue packet on queue of cpu x


net_rps_action_and_irq_enable()


if (cpu_is_online(x))  [2]
    smp_call_function_single_async(x, ...)


The problem is that the first test in [1] can succeed, but the second test
in [2] can fail.

But we should still send this IPI.

We run in a softirq, so it is OK to deliver the IPI to the _about to be
offlined_ cpu.

We should test the cpu_is_online(x) once.

Doing this a second time is the bug.
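
In source terms, the two tests correspond roughly to the following (an
abridged sketch of get_rps_cpu() and net_rps_action_and_irq_enable() in
net/core/dev.c, simplified and not the exact code):

/* [1] in get_rps_cpu(): a target cpu from the rps map is chosen only if
 * it looks online at this instant.
 */
	tcpu = map->cpus[reciprocal_scale(hash, map->len)];
	if (cpu_online(tcpu)) {
		cpu = tcpu;
		goto done;
	}

/* The packet is then queued on that cpu's backlog; the cpu can be
 * hotplugged off anywhere in this window.
 */

/* [2] later, in net_rps_action_and_irq_enable(), before the change in the
 * patch quoted above:
 */
	while (remsd) {
		struct softnet_data *next = remsd->rps_ipi_next;

		if (cpu_online(remsd->cpu))	/* second, redundant test */
			smp_call_function_single_async(remsd->cpu, &remsd->csd);
		remsd = next;
	}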


* Re: [PATCH] net: rps: fix data stall after hotplug
  2015-03-23 22:29         ` Eric Dumazet
@ 2015-03-25 18:54           ` subashab
  2015-03-30 23:49             ` subashab
  0 siblings, 1 reply; 11+ messages in thread
From: subashab @ 2015-03-25 18:54 UTC (permalink / raw)
  To: eric.dumazet; +Cc: netdev

> On Mon, 2015-03-23 at 22:16 +0000, subashab@codeaurora.org wrote:
>> >> On Thu, 2015-03-19 at 14:50 -0700, Eric Dumazet wrote:
>> >>
>> >>> Are you seeing this race on x86 ?
>> >>>
>> >>> If IPI are not reliable on your arch, I am guessing you should fix
>> >>> them.
>> >>>
>> >>> Otherwise, even without hotplug you'll have hangs.
>> >>
>> >> Please try instead this patch :
>> >>
>> >> diff --git a/net/core/dev.c b/net/core/dev.c
>> >> index
>> >> 5d43e010ef870a6ab92895297fe18d6e6a03593a..baa4bff9a6fbe0d77d7921865c038060cb5efffd
>> >> 100644
>> >> --- a/net/core/dev.c
>> >> +++ b/net/core/dev.c
>> >> @@ -4320,9 +4320,8 @@ static void
>> net_rps_action_and_irq_enable(struct
>> >> softnet_data *sd)
>> >>  		while (remsd) {
>> >>  			struct softnet_data *next = remsd->rps_ipi_next;
>> >>
>> >> -			if (cpu_online(remsd->cpu))
>> >> -				smp_call_function_single_async(remsd->cpu,
>> >> -							   &remsd->csd);
>> >> +			smp_call_function_single_async(remsd->cpu,
>> >> +						       &remsd->csd);
>> >>  			remsd = next;
>> >>  		}
>> >>  	} else
>> >>
>> >>
>> > Thanks for the patch Eric. We are seeing this race on ARM.
>> > I will try this and update.
>> >
>>
>> Unfortunately, I am not able to reproduce data stall now with or without
>> the patch. Could you tell me more about the patch and what issue you
>> were
>> suspecting?
>>
>> Based on the code, it looks like we BUG out on our arch if we try to
>> call
>> an IPI on an offline CPU. Since this condition is never hit, I feel that
>> the IPI might not have failed.
>>
>> void smp_send_reschedule(int cpu)
>> {
>>         BUG_ON(cpu_is_offline(cpu));
>>         smp_cross_call_common(cpumask_of(cpu), IPI_RESCHEDULE);
>> }
>
>
>
> The bug I am fixing is the following :
>
>
> if (cpu_is_online(x))
>     target = x
>
> ...
>
> queue packet on queue of cpu x
>
>
> net_rps_action_and_irq_enable()
>
>
> if (cpu_is_online(x))  [2]
>     smp_call_function_single_async(x, ...)
>
>
> Problem is that first test in [1] can succeed, but second in [2] can
> fail.
>
> But we should still send this IPI.
>
> We run in a softirq, so it is OK to deliver the IPI to the _about to be
> offlined_ cpu.
>
> We should test the cpu_is_online(x) once.
>
> Doing this a second time is the bug.

Thanks for the explanation. It looks like the issue is related to the way
our driver is designed.

We have a legacy driver which does not use the NAPI framework (and cannot
be modified for reasons beyond my control). It relies on an interrupt
mitigation scheme which disables hardware interrupts after the interrupt
for the first packet, then enters a polling mode for a period and queues
packets up to the network stack using netif_rx().
We have observed that the time between the softirq being raised in worker
thread context and softirq entry in softirq context is around 1-3
milliseconds (locally pending softirqs may only get run on exit from a
hardware irq).
When we used netif_rx_ni() instead, the delay between softirq raise and
entry is on the order of microseconds, since locally pending softirqs are
serviced immediately. We have modified the driver to use netif_rx_ni() now.
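
For context, the difference comes down to roughly this (a sketch of
netif_rx_ni() as it appears in net/core/dev.c of this era, trace points
omitted):

/* Unlike plain netif_rx(), which only raises NET_RX_SOFTIRQ and leaves it
 * pending until the next softirq execution point (e.g. hardirq exit),
 * netif_rx_ni() runs any pending softirqs itself before returning.
 */
int netif_rx_ni(struct sk_buff *skb)
{
	int err;

	preempt_disable();
	err = netif_rx_internal(skb);	/* queue skb and raise NET_RX_SOFTIRQ */
	if (local_softirq_pending())
		do_softirq();		/* ... then run pending softirqs right away */
	preempt_enable();

	return err;
}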

With netif_rx() I was able to reproduce the data stall consistently. Upon
hotplug, I could see that the cpu_is_online(x) check in [1 - get_rps_cpu]
reported the cpu from the rps mask as online, but the check in
[2 - net_rps_action_and_irq_enable] reported the cpu as offline. After I
applied your patch, I hit a crash, since my arch explicitly BUGs out when
sending IPIs to offline CPUs.

With netif_rx_ni(), I have not run into any issue so far, both with and
without your patch, since both [1] and [2] might have been returning the
same value for cpu_is_online(x). However, it is possible that this race
still occurs, just with a much lower probability. I would still see the
data stall if [1] reports the cpu as online while [2] reports it as
offline. Would it be acceptable to reset the NAPI state in [2] in this case?

--
The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
 a Linux Foundation Collaborative Project


* Re: [PATCH] net: rps: fix data stall after hotplug
  2015-03-25 18:54           ` subashab
@ 2015-03-30 23:49             ` subashab
  2015-03-31  4:48               ` Eric Dumazet
  0 siblings, 1 reply; 11+ messages in thread
From: subashab @ 2015-03-30 23:49 UTC (permalink / raw)
  To: eric.dumazet; +Cc: netdev

>>> >> Please try instead this patch :
>>> >>
>>> >> diff --git a/net/core/dev.c b/net/core/dev.c
>>> >> index
>>> >> 5d43e010ef870a6ab92895297fe18d6e6a03593a..baa4bff9a6fbe0d77d7921865c038060cb5efffd
>>> >> 100644
>>> >> --- a/net/core/dev.c
>>> >> +++ b/net/core/dev.c
>>> >> @@ -4320,9 +4320,8 @@ static void
>>> net_rps_action_and_irq_enable(struct
>>> >> softnet_data *sd)
>>> >>  		while (remsd) {
>>> >>  			struct softnet_data *next = remsd->rps_ipi_next;
>>> >>
>>> >> -			if (cpu_online(remsd->cpu))
>>> >> -				smp_call_function_single_async(remsd->cpu,
>>> >> -							   &remsd->csd);
>>> >> +			smp_call_function_single_async(remsd->cpu,
>>> >> +						       &remsd->csd);
>>> >>  			remsd = next;
>>> >>  		}
>>> >>  	} else
>>> >>
>>> >>

Hi Eric

While the original data stall due to the missing IPI is no longer seen
with netif_rx_ni(), the scenario of the rps cpu being online in [1 -
get_rps_cpu] but offline in [2 - net_rps_action_and_irq_enable] could
still occur. With your patch, triggering an IPI to an offline cpu in [2]
leads to a crash on my arch.

I would like to know your thoughts on how to fix this race. Could the
patch which I had initially proposed help here? Alternatively, would it
be correct to reset the NAPI state and increment the sd dropped count if
an offline CPU is detected in [2]?

--
Employee of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux
Foundation Collaborative Project


* Re: [PATCH] net: rps: fix data stall after hotplug
  2015-03-30 23:49             ` subashab
@ 2015-03-31  4:48               ` Eric Dumazet
  2015-03-31 22:02                 ` subashab
  0 siblings, 1 reply; 11+ messages in thread
From: Eric Dumazet @ 2015-03-31  4:48 UTC (permalink / raw)
  To: subashab; +Cc: netdev

On Mon, 2015-03-30 at 23:49 +0000, subashab@codeaurora.org wrote:

> Hi Eric
> 
> While the original issue of data stall due to missing IPI is no longer
> seen with netif_rx_ni(), the scenario of rps cpu online in [1 -
> get_rps_cpus] but offline in [2 - net_rps_action_and_irq_enable] could
> still occur. Using your patch, triggering an IPI on an offline cpu in [2]
> leads to a crash on my arch.
> 
> I would like to know your thoughts on how to fix this race. Could the
> patch which I had initially proposed help here. Alternatively, is it
> correct to reset NAPI state and increment dropped sd count if an offline
> CPU is detected in [2].
> 


Listen, I would rather disable RPS on your arch, instead of messing with
it.

Resetting the NAPI state as you did is in direct violation of the rules.

Only the cpu owning the bit is allowed to reset it.


* Re: [PATCH] net: rps: fix data stall after hotplug
  2015-03-31  4:48               ` Eric Dumazet
@ 2015-03-31 22:02                 ` subashab
  2015-03-31 23:44                   ` Eric Dumazet
  0 siblings, 1 reply; 11+ messages in thread
From: subashab @ 2015-03-31 22:02 UTC (permalink / raw)
  To: eric.dumazet; +Cc: netdev

> Listen, I would rather disable RPS on your arch, instead of messing with
> it.
>
> Reset NAPI state as you did is in direct violation of the rules.
>
> Only cpu owning the bit is allowed to reset it.
>

Perhaps my understanding of the code in dev_cpu_callback() is incorrect?
Please correct me if I am wrong.

The poll list entries are moved from the offline cpu to an online cpu.
Specifically for process_backlog, I was under the impression that
the online cpu tries to reset the offline cpu's backlog NAPI state.
The process_queue and input_pkt_queue are then always requeued on the
online cpu (roughly as in the sketch after the loop below).

while (!list_empty(&oldsd->poll_list)) {
	struct napi_struct *napi = list_first_entry(&oldsd->poll_list,
						    struct napi_struct,
							     poll_list);

	list_del_init(&napi->poll_list);
	if (napi->poll == process_backlog)
		napi->state = 0;
	else
		____napi_schedule(sd, napi);
}
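
The remainder of dev_cpu_callback() then requeues the offline cpu's pending
packets on the local cpu, roughly as follows (an abridged sketch, not the
exact code):

	/* Drain the offline cpu's queues; the skbs re-enter the stack via
	 * the normal netif_rx path on the local (online) cpu.
	 */
	while ((skb = __skb_dequeue(&oldsd->process_queue))) {
		netif_rx_internal(skb);
		input_queue_head_incr(oldsd);
	}
	while ((skb = skb_dequeue(&oldsd->input_pkt_queue))) {
		netif_rx_internal(skb);
		input_queue_head_incr(oldsd);
	}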

My question was why it would be incorrect to clear the offline cpu's
backlog NAPI state unconditionally.


* Re: [PATCH] net: rps: fix data stall after hotplug
  2015-03-31 22:02                 ` subashab
@ 2015-03-31 23:44                   ` Eric Dumazet
  0 siblings, 0 replies; 11+ messages in thread
From: Eric Dumazet @ 2015-03-31 23:44 UTC (permalink / raw)
  To: subashab; +Cc: netdev

On Tue, 2015-03-31 at 22:02 +0000, subashab@codeaurora.org wrote:
> > Listen, I would rather disable RPS on your arch, instead of messing with
> > it.
> >
> > Reset NAPI state as you did is in direct violation of the rules.
> >
> > Only cpu owning the bit is allowed to reset it.
> >
> 
> Perhaps my understanding of the code in dev_cpu_callback() is incorrect?
> Please correct me if I am wrong.
> 
> The poll list is copied from an offline cpu to an online cpu.
> Specifically for process_backlog, I was under the impression that
> the online cpu tries to reset the state of NAPI of the offline cpu.
> The process and input queues are then always copied to the
> online cpu.
> 
> while (!list_empty(&oldsd->poll_list)) {
> 	struct napi_struct *napi = list_first_entry(&oldsd->poll_list,
> 						    struct napi_struct,
> 							     poll_list);
> 
> 	list_del_init(&napi->poll_list);
> 	if (napi->poll == process_backlog)
> 		napi->state = 0;
> 	else
> 		____napi_schedule(sd, napi);
> }
> 
> My request was to know why it would be incorrect to clear the offline cpu
> backlog NAPI state unconditionally.
> 

It is incorrect because the moment we choose to send an IPI to a cpu, we
effectively transfer NAPI bit ownership to that target cpu. Another
cpu cannot take over without risking tricky corruptions.

If your arch fails to send the IPI, we have no way to clear the bit in a
safe way.

Only the target cpu is allowed to clear the bit, by virtue of the following
being called:

/* Called from hardirq (IPI) context */
static void rps_trigger_softirq(void *data)
{
        struct softnet_data *sd = data;

        ____napi_schedule(sd, &sd->backlog);
        sd->received_rps++;
}
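
The clearing then happens on that same cpu once its backlog is drained,
roughly at the tail of process_backlog() (an abridged sketch, not the
exact code):

	if (qlen < quota - work) {
		/* Inline version of __napi_complete(): only the current cpu
		 * owns and manipulates this napi, so a plain write is enough
		 * to clear NAPI_STATE_SCHED.
		 */
		list_del(&napi->poll_list);
		napi->state = 0;
		rps_unlock(sd);
		break;
	}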

If rps_trigger_softirq() is not called in even 0.000001% of the cases, all
bets are off.

We are not going to add yet another atomic op for every packet just to
solve this corner case. Just do not enable RPS on your arch, as it is not
enabled by default.

I am currently on vacation; I will not reply to further inquiries on this
topic.


Thread overview: 11+ messages
2015-03-19 19:54 [PATCH] net: rps: fix data stall after hotplug subashab
2015-03-19 21:50 ` Eric Dumazet
2015-03-20 11:50   ` Eric Dumazet
2015-03-20 16:40     ` subashab
2015-03-23 22:16       ` subashab
2015-03-23 22:29         ` Eric Dumazet
2015-03-25 18:54           ` subashab
2015-03-30 23:49             ` subashab
2015-03-31  4:48               ` Eric Dumazet
2015-03-31 22:02                 ` subashab
2015-03-31 23:44                   ` Eric Dumazet
