netdev.vger.kernel.org archive mirror
* [PATCH v2] net: Performance fix for process_backlog
@ 2014-06-30 16:50 Tom Herbert
  2014-06-30 19:28 ` Eric Dumazet
  2014-07-08  2:25 ` David Miller
  0 siblings, 2 replies; 4+ messages in thread
From: Tom Herbert @ 2014-06-30 16:50 UTC
  To: davem, netdev

In process_backlog, the input_pkt_queue is only checked once for new
packets, and quota is artificially reduced to reflect precisely the
number of packets on the input_pkt_queue so that the loop exits
appropriately.
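
For context, the logic being removed condenses to roughly the following
(paraphrased from the '-' lines in the diff below, using the same
variable names; not a verbatim excerpt):

	/* Pre-patch refill step: sample the queue length once, then clamp
	 * quota so the outer "while (work < quota)" loop terminates after
	 * the packets seen at this instant have been processed.
	 */
	qlen = skb_queue_len(&sd->input_pkt_queue);
	if (qlen)
		skb_queue_splice_tail_init(&sd->input_pkt_queue,
					   &sd->process_queue);

	if (qlen < quota - work) {
		/* ... inline __napi_complete() ... */
		quota = work + qlen;	/* artificially reduce the quota */
	}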

This patch changes the behavior to be more straightforward and less
convoluted: packets are processed until either the quota is met or
there are no more packets to process.
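
With the patch, the loop is shaped roughly as sketched below. This is a
simplified reconstruction based on the diff further down, not verbatim
kernel source; in particular the body of the inner dequeue loop (the
++work quota accounting) is paraphrased from context the diff does not
show:

	local_irq_disable();
	while (1) {
		struct sk_buff *skb;

		/* Drain the packets already claimed on process_queue. */
		while ((skb = __skb_dequeue(&sd->process_queue))) {
			local_irq_enable();
			__netif_receive_skb(skb);
			local_irq_disable();
			input_queue_head_incr(sd);
			if (++work >= quota) {	/* quota met: stop here */
				local_irq_enable();
				return work;
			}
		}

		rps_lock(sd);
		if (skb_queue_empty(&sd->input_pkt_queue)) {
			/* Nothing left: complete the backlog NAPI inline
			 * (plain write to napi->state, see the comment in
			 * the diff) and exit.
			 */
			list_del(&napi->poll_list);
			napi->state = 0;
			rps_unlock(sd);
			break;
		}

		/* More packets arrived meanwhile: claim them and loop. */
		skb_queue_splice_tail_init(&sd->input_pkt_queue,
					   &sd->process_queue);
		rps_unlock(sd);
	}
	local_irq_enable();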

This patch seems to provide a small but noticeable performance
improvement. The improvement comes from staying in the
process_backlog loop longer, which can reduce the number of IPIs.

Performance data using super_netperf TCP_RR with 200 flows:

Before fix:

88.06% CPU utilization
125/190/309 90/95/99% latencies
1.46808e+06 tps
1145382 intrs./sec.

With fix:

87.73% CPU utilization
122/183/296 90/95/99% latencies
1.4921e+06 tps
1021674.30 intrs./sec.

Signed-off-by: Tom Herbert <therbert@google.com>
---
 net/core/dev.c | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index a04b12f..bb88964 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4227,9 +4227,8 @@ static int process_backlog(struct napi_struct *napi, int quota)
 #endif
 	napi->weight = weight_p;
 	local_irq_disable();
-	while (work < quota) {
+	while (1) {
 		struct sk_buff *skb;
-		unsigned int qlen;
 
 		while ((skb = __skb_dequeue(&sd->process_queue))) {
 			local_irq_enable();
@@ -4243,24 +4242,24 @@ static int process_backlog(struct napi_struct *napi, int quota)
 		}
 
 		rps_lock(sd);
-		qlen = skb_queue_len(&sd->input_pkt_queue);
-		if (qlen)
-			skb_queue_splice_tail_init(&sd->input_pkt_queue,
-						   &sd->process_queue);
-
-		if (qlen < quota - work) {
+		if (skb_queue_empty(&sd->input_pkt_queue)) {
 			/*
 			 * Inline a custom version of __napi_complete().
 			 * only current cpu owns and manipulates this napi,
-			 * and NAPI_STATE_SCHED is the only possible flag set on backlog.
-			 * we can use a plain write instead of clear_bit(),
+			 * and NAPI_STATE_SCHED is the only possible flag set
+			 * on backlog.
+			 * We can use a plain write instead of clear_bit(),
 			 * and we dont need an smp_mb() memory barrier.
 			 */
 			list_del(&napi->poll_list);
 			napi->state = 0;
+			rps_unlock(sd);
 
-			quota = work + qlen;
+			break;
 		}
+
+		skb_queue_splice_tail_init(&sd->input_pkt_queue,
+					   &sd->process_queue);
 		rps_unlock(sd);
 	}
 	local_irq_enable();
-- 
2.0.0.526.g5318336

* Re: [PATCH v2] net: Performance fix for process_backlog
  2014-06-30 16:50 [PATCH v2] net: Performance fix for process_backlog Tom Herbert
@ 2014-06-30 19:28 ` Eric Dumazet
  2014-07-08  2:25 ` David Miller
  1 sibling, 0 replies; 4+ messages in thread
From: Eric Dumazet @ 2014-06-30 19:28 UTC
  To: Tom Herbert; +Cc: davem, netdev

On Mon, 2014-06-30 at 09:50 -0700, Tom Herbert wrote:
> In process_backlog, the input_pkt_queue is only checked once for new
> packets, and quota is artificially reduced to reflect precisely the
> number of packets on the input_pkt_queue so that the loop exits
> appropriately.
> 
> This patch changes the behavior to be more straightforward and less
> convoluted: packets are processed until either the quota is met or
> there are no more packets to process.
> 
> This patch seems to provide a small but noticeable performance
> improvement. The improvement comes from staying in the
> process_backlog loop longer, which can reduce the number of IPIs.

Yes, this is likely because napi->state is now cleared at the very end
of the run.
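
Roughly where that saving comes from (a condensed paraphrase of the
backlog enqueue path, not verbatim kernel code): a CPU feeding this
backlog only needs to reschedule it (and, for a remote target CPU,
possibly queue an IPI) if NAPI_STATE_SCHED is not already set:

	__skb_queue_tail(&sd->input_pkt_queue, skb);
	if (!__test_and_set_bit(NAPI_STATE_SCHED, &sd->backlog.state))
		____napi_schedule(sd, &sd->backlog);	/* remote CPU: IPI path */

So the longer napi->state stays set, the more enqueues take the cheap
first branch.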

Acked-by: Eric Dumazet <edumazet@google.com>

Thanks, Tom!

* Re: [PATCH v2] net: Performance fix for process_backlog
  2014-06-30 16:50 [PATCH v2] net: Performance fix for process_backlog Tom Herbert
  2014-06-30 19:28 ` Eric Dumazet
@ 2014-07-08  2:25 ` David Miller
  1 sibling, 0 replies; 4+ messages in thread
From: David Miller @ 2014-07-08  2:25 UTC
  To: therbert; +Cc: netdev

From: Tom Herbert <therbert@google.com>
Date: Mon, 30 Jun 2014 09:50:40 -0700 (PDT)

> In process_backlog, the input_pkt_queue is only checked once for new
> packets, and quota is artificially reduced to reflect precisely the
> number of packets on the input_pkt_queue so that the loop exits
> appropriately.
> 
> This patch changes the behavior to be more straightforward and less
> convoluted: packets are processed until either the quota is met or
> there are no more packets to process.
> 
> This patch seems to provide a small but noticeable performance
> improvement. The improvement comes from staying in the
> process_backlog loop longer, which can reduce the number of IPIs.
> 
> Performance data using super_netperf TCP_RR with 200 flows:
...
> Signed-off-by: Tom Herbert <therbert@google.com>

Applied, thanks Tom.
