netdev.vger.kernel.org archive mirror
From: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
To: davem@davemloft.net
Cc: Alexander Duyck <alexander.h.duyck@intel.com>,
	netdev@vger.kernel.org, gospo@redhat.com, sassmann@redhat.com,
	Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
Subject: [net-next 4/9] ixgbe: Have the CPU take ownership of the buffers sooner
Date: Thu, 16 Aug 2012 15:48:33 -0700	[thread overview]
Message-ID: <1345157318-23731-5-git-send-email-peter.p.waskiewicz.jr@intel.com> (raw)
In-Reply-To: <1345157318-23731-1-git-send-email-peter.p.waskiewicz.jr@intel.com>

From: Alexander Duyck <alexander.h.duyck@intel.com>

This patch ensures that the CPU always has ownership of the buffer by the
time we reach ixgbe_add_rx_frag. This is necessary because I am planning to
add a copy-break to ixgbe_add_rx_frag, and for that to function correctly
the CPU must own the buffer.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
---
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 52 +++++++++++++++++++--------
 1 file changed, 38 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index 9305e9a..b0020fc 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -1458,6 +1458,36 @@ static bool ixgbe_is_non_eop(struct ixgbe_ring *rx_ring,
 }
 
 /**
+ * ixgbe_dma_sync_frag - perform DMA sync for first frag of SKB
+ * @rx_ring: rx descriptor ring packet is being transacted on
+ * @skb: pointer to current skb being updated
+ *
+ * This function provides a basic DMA sync up for the first fragment of an
+ * skb.  The reason for doing this is that the first fragment cannot be
+ * unmapped until we have reached the end of packet descriptor for a buffer
+ * chain.
+ */
+static void ixgbe_dma_sync_frag(struct ixgbe_ring *rx_ring,
+				struct sk_buff *skb)
+{
+	/* if the page was released unmap it, else just sync our portion */
+	if (unlikely(IXGBE_CB(skb)->page_released)) {
+		dma_unmap_page(rx_ring->dev, IXGBE_CB(skb)->dma,
+			       ixgbe_rx_pg_size(rx_ring), DMA_FROM_DEVICE);
+		IXGBE_CB(skb)->page_released = false;
+	} else {
+		struct skb_frag_struct *frag = &skb_shinfo(skb)->frags[0];
+
+		dma_sync_single_range_for_cpu(rx_ring->dev,
+					      IXGBE_CB(skb)->dma,
+					      frag->page_offset,
+					      ixgbe_rx_bufsz(rx_ring),
+					      DMA_FROM_DEVICE);
+	}
+	IXGBE_CB(skb)->dma = 0;
+}
+
+/**
  * ixgbe_cleanup_headers - Correct corrupted or empty headers
  * @rx_ring: rx descriptor ring packet is being transacted on
  * @rx_desc: pointer to the EOP Rx descriptor
@@ -1484,20 +1514,6 @@ static bool ixgbe_cleanup_headers(struct ixgbe_ring *rx_ring,
 	unsigned char *va;
 	unsigned int pull_len;
 
-	/* if the page was released unmap it, else just sync our portion */
-	if (unlikely(IXGBE_CB(skb)->page_released)) {
-		dma_unmap_page(rx_ring->dev, IXGBE_CB(skb)->dma,
-			       ixgbe_rx_pg_size(rx_ring), DMA_FROM_DEVICE);
-		IXGBE_CB(skb)->page_released = false;
-	} else {
-		dma_sync_single_range_for_cpu(rx_ring->dev,
-					      IXGBE_CB(skb)->dma,
-					      frag->page_offset,
-					      ixgbe_rx_bufsz(rx_ring),
-					      DMA_FROM_DEVICE);
-	}
-	IXGBE_CB(skb)->dma = 0;
-
 	/* verify that the packet does not have any known errors */
 	if (unlikely(ixgbe_test_staterr(rx_desc,
 					IXGBE_RXDADV_ERR_FRAME_ERR_MASK) &&
@@ -1742,8 +1758,16 @@ static bool ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
 			 * after the writeback.  Only unmap it when EOP is
 			 * reached
 			 */
+			if (likely(ixgbe_test_staterr(rx_desc,
+						      IXGBE_RXD_STAT_EOP)))
+				goto dma_sync;
+
 			IXGBE_CB(skb)->dma = rx_buffer->dma;
 		} else {
+			if (ixgbe_test_staterr(rx_desc, IXGBE_RXD_STAT_EOP))
+				ixgbe_dma_sync_frag(rx_ring, skb);
+
+dma_sync:
 			/* we are reusing so sync this buffer for CPU use */
 			dma_sync_single_range_for_cpu(rx_ring->dev,
 						      rx_buffer->dma,
-- 
1.7.11.2

Thread overview: 11+ messages
2012-08-16 22:48 [net-next 0/9][pull request] Intel Wired LAN Driver Updates Peter P Waskiewicz Jr
2012-08-16 22:48 ` [net-next 1/9] ixgbe: Remove code that was initializing Rx page offset Peter P Waskiewicz Jr
2012-08-16 22:48 ` [net-next 2/9] ixgbe: combine ixgbe_add_rx_frag and ixgbe_can_reuse_page Peter P Waskiewicz Jr
2012-08-16 22:48 ` [net-next 3/9] ixgbe: Only use double buffering if page size is less than 8K Peter P Waskiewicz Jr
2012-08-16 22:48 ` [net-next 4/9] ixgbe: Have the CPU take ownership of the buffers sooner Peter P Waskiewicz Jr [this message]
2012-08-16 22:48 ` [net-next 5/9] ixgbe: Make pull tail function separate from rest of cleanup_headers Peter P Waskiewicz Jr
2012-08-16 22:48 ` [net-next 6/9] ixgbe: Copybreak sooner to avoid get_page/put_page and offset change overhead Peter P Waskiewicz Jr
2012-08-16 22:48 ` [net-next 7/9] ixgbe: Make allocating skb and placing data in it a separate function Peter P Waskiewicz Jr
2012-08-16 22:48 ` [net-next 8/9] ixgbe: Roll RSC code into non-EOP code Peter P Waskiewicz Jr
2012-08-16 22:48 ` [net-next 9/9] ixgbe: Rewrite code related to configuring IFCS bit in Tx descriptor Peter P Waskiewicz Jr
2012-08-20  9:31 ` [net-next 0/9][pull request] Intel Wired LAN Driver Updates David Miller
