From: Paul Durrant <paul.durrant@citrix.com>
To: <netdev@vger.kernel.org>, <xen-devel@lists.xenproject.org>
Cc: David Vrabel <david.vrabel@citrix.com>,
	Paul Durrant <paul.durrant@citrix.com>,
	Wei Liu <wei.liu2@citrix.com>
Subject: [PATCH net-next 5/7] xen-netback: process guest rx packets in batches
Date: Mon, 3 Oct 2016 08:31:10 +0100
Message-ID: <1475479872-23717-6-git-send-email-paul.durrant@citrix.com>
In-Reply-To: <1475479872-23717-1-git-send-email-paul.durrant@citrix.com>

From: David Vrabel <david.vrabel@citrix.com>

Instead of placing only one skb on the guest rx ring at a time, process
a batch of up to 64.  This improves performance by ~10% in some tests.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
[re-based]
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Wei Liu <wei.liu2@citrix.com>
---
 drivers/net/xen-netback/rx.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/drivers/net/xen-netback/rx.c b/drivers/net/xen-netback/rx.c
index 9548709..ae822b8 100644
--- a/drivers/net/xen-netback/rx.c
+++ b/drivers/net/xen-netback/rx.c
@@ -399,7 +399,7 @@ static void xenvif_rx_extra_slot(struct xenvif_queue *queue,
 	BUG();
 }
 
-void xenvif_rx_action(struct xenvif_queue *queue)
+void xenvif_rx_skb(struct xenvif_queue *queue)
 {
 	struct xenvif_pkt_state pkt;
 
@@ -425,6 +425,19 @@ void xenvif_rx_action(struct xenvif_queue *queue)
 	xenvif_rx_complete(queue, &pkt);
 }
 
+#define RX_BATCH_SIZE 64
+
+void xenvif_rx_action(struct xenvif_queue *queue)
+{
+	unsigned int work_done = 0;
+
+	while (xenvif_rx_ring_slots_available(queue) &&
+	       work_done < RX_BATCH_SIZE) {
+		xenvif_rx_skb(queue);
+		work_done++;
+	}
+}
+
 static bool xenvif_rx_queue_stalled(struct xenvif_queue *queue)
 {
 	RING_IDX prod, cons;
-- 
2.1.4
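
For context, xenvif_rx_action() is invoked in a loop by the per-queue
guest rx kernel thread, so bounding each call to RX_BATCH_SIZE packets
lets that thread reschedule between batches instead of holding the CPU
until the ring is drained.  The sketch below is illustrative only, not
the driver's actual kthread: wait_for_rx_work() is a placeholder for
xen-netback's real wait/stall handling, and guest_rx_thread_sketch() is
a hypothetical name.

	#include <linux/kthread.h>
	#include <linux/sched.h>

	struct xenvif_queue;

	/* Placeholder: block until packets are queued and ring slots exist. */
	extern void wait_for_rx_work(struct xenvif_queue *queue);
	/* From the patch above: drains at most RX_BATCH_SIZE skbs per call. */
	extern void xenvif_rx_action(struct xenvif_queue *queue);

	static int guest_rx_thread_sketch(void *data)
	{
		struct xenvif_queue *queue = data;

		while (!kthread_should_stop()) {
			wait_for_rx_work(queue);	/* wait for work (placeholder) */
			xenvif_rx_action(queue);	/* process one bounded batch */
			cond_resched();			/* yield between batches */
		}

		return 0;
	}
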

Thread overview: 10+ messages
2016-10-03  7:31 [PATCH net-next 0/7] xen-netback: guest rx side refactor Paul Durrant
2016-10-03  7:31 ` [PATCH net-next 1/7] xen-netback: separate guest side rx code into separate module Paul Durrant
2016-10-03  7:31 ` [PATCH net-next 2/7] xen-netback: retire guest rx side prefix GSO feature Paul Durrant
2016-10-03  7:31 ` [PATCH net-next 3/7] xen-netback: refactor guest rx Paul Durrant
2016-10-03  7:31 ` [PATCH net-next 4/7] xen-netback: immediately wake tx queue when guest rx queue has space Paul Durrant
2016-10-03  7:31 ` Paul Durrant [this message]
2016-10-03  7:31 ` [PATCH net-next 6/7] xen-netback: batch copies for multiple to-guest rx packets Paul Durrant
2016-10-03  7:31 ` [PATCH net-next 7/7] xen/netback: add fraglist support for to-guest rx Paul Durrant
2016-10-04  4:51 ` [PATCH net-next 0/7] xen-netback: guest rx side refactor David Miller
2016-10-04  8:26   ` Paul Durrant
