* [PATCH net-next-2.6] rps: immediate send IPI in process_backlog()
@ 2010-04-21 21:04 Eric Dumazet
2010-04-22 7:21 ` David Miller
0 siblings, 1 reply; 4+ messages in thread
From: Eric Dumazet @ 2010-04-21 21:04 UTC (permalink / raw)
To: David Miller; +Cc: Tom Herbert, Changli Gao, netdev
If some skbs are queued to our backlog, we delay the IPI sending until
the end of net_rx_action(), increasing latencies. This defeats the
purpose of the queueing, since we want to quickly dispatch packets to
the pool of worker cpus and only then do the deeper processing ourselves.
It's better to send the IPIs from process_backlog(), before handing our
packets to the upper layers.
Also change the _and_irq_disable suffix to _and_irq_enable, since we
enable local irqs in net_rps_action(); sorry for the confusion.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
net/core/dev.c | 76 +++++++++++++++++++++++++----------------------
1 file changed, 42 insertions(+), 34 deletions(-)
diff --git a/net/core/dev.c b/net/core/dev.c
index b31d5d6..9871660 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -3239,11 +3239,48 @@ gro_result_t napi_gro_frags(struct napi_struct *napi)
}
EXPORT_SYMBOL(napi_gro_frags);
+/*
+ * net_rps_action sends any pending IPIs for rps.
+ * Note: called with local irq disabled, but exits with local irq enabled.
+ */
+static void net_rps_action_and_irq_enable(struct softnet_data *sd)
+{
+#ifdef CONFIG_RPS
+ struct softnet_data *remsd = sd->rps_ipi_list;
+
+ if (remsd) {
+ sd->rps_ipi_list = NULL;
+
+ local_irq_enable();
+
+ /* Send pending IPI's to kick RPS processing on remote cpus. */
+ while (remsd) {
+ struct softnet_data *next = remsd->rps_ipi_next;
+
+ if (cpu_online(remsd->cpu))
+ __smp_call_function_single(remsd->cpu,
+ &remsd->csd, 0);
+ remsd = next;
+ }
+ } else
+#endif
+ local_irq_enable();
+}
+
static int process_backlog(struct napi_struct *napi, int quota)
{
int work = 0;
struct softnet_data *sd = &__get_cpu_var(softnet_data);
+#ifdef CONFIG_RPS
+	/* Check if we have pending IPIs: it's better to send them now
+	 * than to wait until the end of net_rx_action().
+	 */
+ if (sd->rps_ipi_list) {
+ local_irq_disable();
+ net_rps_action_and_irq_enable(sd);
+ }
+#endif
napi->weight = weight_p;
do {
struct sk_buff *skb;
@@ -3350,45 +3387,16 @@ void netif_napi_del(struct napi_struct *napi)
}
EXPORT_SYMBOL(netif_napi_del);
-/*
- * net_rps_action sends any pending IPI's for rps.
- * Note: called with local irq disabled, but exits with local irq enabled.
- */
-static void net_rps_action_and_irq_disable(void)
-{
-#ifdef CONFIG_RPS
- struct softnet_data *sd = &__get_cpu_var(softnet_data);
- struct softnet_data *remsd = sd->rps_ipi_list;
-
- if (remsd) {
- sd->rps_ipi_list = NULL;
-
- local_irq_enable();
-
- /* Send pending IPI's to kick RPS processing on remote cpus. */
- while (remsd) {
- struct softnet_data *next = remsd->rps_ipi_next;
-
- if (cpu_online(remsd->cpu))
- __smp_call_function_single(remsd->cpu,
- &remsd->csd, 0);
- remsd = next;
- }
- } else
-#endif
- local_irq_enable();
-}
-
static void net_rx_action(struct softirq_action *h)
{
- struct list_head *list = &__get_cpu_var(softnet_data).poll_list;
+ struct softnet_data *sd = &__get_cpu_var(softnet_data);
unsigned long time_limit = jiffies + 2;
int budget = netdev_budget;
void *have;
local_irq_disable();
- while (!list_empty(list)) {
+ while (!list_empty(&sd->poll_list)) {
struct napi_struct *n;
int work, weight;
@@ -3406,7 +3414,7 @@ static void net_rx_action(struct softirq_action *h)
* entries to the tail of this list, and only ->poll()
* calls can remove this head entry from the list.
*/
- n = list_first_entry(list, struct napi_struct, poll_list);
+ n = list_first_entry(&sd->poll_list, struct napi_struct, poll_list);
have = netpoll_poll_lock(n);
@@ -3441,13 +3449,13 @@ static void net_rx_action(struct softirq_action *h)
napi_complete(n);
local_irq_disable();
} else
- list_move_tail(&n->poll_list, list);
+ list_move_tail(&n->poll_list, &sd->poll_list);
}
netpoll_poll_unlock(have);
}
out:
- net_rps_action_and_irq_disable();
+ net_rps_action_and_irq_enable(sd);
#ifdef CONFIG_NET_DMA
/*
* Re: [PATCH net-next-2.6] rps: immediate send IPI in process_backlog()
From: David Miller @ 2010-04-22 7:21 UTC (permalink / raw)
To: eric.dumazet; +Cc: therbert, xiaosuo, netdev
From: Eric Dumazet <eric.dumazet@gmail.com>
Date: Wed, 21 Apr 2010 23:04:58 +0200
> If some skb are queued to our backlog, we are delaying IPI sending at
> the end of net_rx_action(), increasing latencies. This defeats the
> queueing, since we want to quickly dispatch packets to the pool of
> worker cpus, then eventually deeply process our packets.
>
> It's better to send IPI before processing our packets in upper layers,
> from process_backlog().
>
> Change the _and_disable_irq suffix to _and_enable_irq(), since we enable
> local irq in net_rps_action(), sorry for the confusion.
>
> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Eric, irqs are enabled in process_backlog(), so I don't know how legal
it is to invoke net_rps_action_and_irq_enable() from there.
At least, if you are depending upon a later action to pick up the
pieces if the rps_ipi_list test races, you need to update the comment
above net_rps_action_and_irq_enable() since it states that it is
always invoked with IRQs disabled :-)
* Re: [PATCH net-next-2.6] rps: immediate send IPI in process_backlog()
From: David Miller @ 2010-04-22 7:22 UTC (permalink / raw)
To: eric.dumazet; +Cc: therbert, xiaosuo, netdev
From: David Miller <davem@davemloft.net>
Date: Thu, 22 Apr 2010 00:21:18 -0700 (PDT)
> Eric, irqs are enabled in process_backlog(), so I don't know how legal
> it is to invoke net_rps_action_and_irq_enable() from there.
Never mind, I misread your patch.
Ignore me, I'll apply this.
Thanks!
* Re: [PATCH net-next-2.6] rps: immediate send IPI in process_backlog()
From: Eric Dumazet @ 2010-04-22 7:28 UTC (permalink / raw)
To: David Miller; +Cc: therbert, xiaosuo, netdev
On Thursday, 22 April 2010 at 00:21 -0700, David Miller wrote:
> From: Eric Dumazet <eric.dumazet@gmail.com>
> Date: Wed, 21 Apr 2010 23:04:58 +0200
>
> > If some skb are queued to our backlog, we are delaying IPI sending at
> > the end of net_rx_action(), increasing latencies. This defeats the
> > queueing, since we want to quickly dispatch packets to the pool of
> > worker cpus, then eventually deeply process our packets.
> >
> > It's better to send IPI before processing our packets in upper layers,
> > from process_backlog().
> >
> > Change the _and_disable_irq suffix to _and_enable_irq(), since we enable
> > local irq in net_rps_action(), sorry for the confusion.
> >
> > Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
>
> Eric, irqs are enabled in process_backlog(), so I don't know how legal
> it is to invoke net_rps_action_and_irq_enable() from there.
>
> At least, if you are depending upon a later action to pick up the
> pieces if the rps_ipi_list test races, you need to update the comment
> above net_rps_action_and_irq_enable() since it states that it is
> always invoked with IRQs disabled :-)
> --
But I do disable irqs before calling this function from
process_backlog(), and only if the current pointer is non-NULL.
The pointer is then re-fetched inside net_rps_action_and_irq_enable().
I thought about using xchg(), but that adds an atomic op, so I think
it's better to use local_irq_disable()/local_irq_enable() pairs.
About the comment, it says:
/*
 * net_rps_action sends any pending IPI's for rps.
 * Note: called with local irq disabled, but exits with local irq enabled.
 */
So it documents that this function is called with irqs disabled, and
re-enables them before returning?