From: Alexander Duyck <alexander.h.duyck@intel.com>
To: netdev@vger.kernel.org, intel-wired-lan@lists.osuosl.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: davem@davemloft.net, brouer@redhat.com
Subject: [net-next PATCH 25/27] igb: Update driver to make use of DMA_ATTR_SKIP_CPU_SYNC
Date: Tue, 25 Oct 2016 11:39:00 -0400
Message-ID: <20161025153900.4815.4927.stgit@ahduyck-blue-test.jf.intel.com>
In-Reply-To: <20161025153220.4815.61239.stgit@ahduyck-blue-test.jf.intel.com>

The ARM architecture provides a mechanism for deferring cache line
invalidation in the case of map/unmap; this series exposes it via the
DMA_ATTR_SKIP_CPU_SYNC attribute.  This patch makes use of that mechanism
to avoid unnecessary synchronization in the igb Rx path.
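
For reference, a minimal sketch of the receive-buffer pattern the driver
moves to (illustrative only, using the dma_{map,unmap}_page_attrs() helpers
added earlier in this series; "dev", "page", "offset" and "len" here are
placeholder names, not the driver's actual variables):

	/* map once, without an implicit sync_for_device */
	dma_addr_t dma = dma_map_page_attrs(dev, page, 0, PAGE_SIZE,
					    DMA_FROM_DEVICE,
					    DMA_ATTR_SKIP_CPU_SYNC);

	/* sync only the region actually handed to the hardware */
	dma_sync_single_range_for_device(dev, dma, offset, len,
					 DMA_FROM_DEVICE);

	/* ... device DMAs the frame into the buffer ... */

	/* sync for CPU before the driver reads or writes the data */
	dma_sync_single_range_for_cpu(dev, dma, offset, len,
				      DMA_FROM_DEVICE);

	/* tear down the mapping without a second, redundant sync */
	dma_unmap_page_attrs(dev, dma, PAGE_SIZE, DMA_FROM_DEVICE,
			     DMA_ATTR_SKIP_CPU_SYNC);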

A secondary effect of this change is that the portion of the page that has
been synchronized for use by the CPU should now be safe to write to and
could therefore be passed up the stack (at least on ARM).

Finally, on architectures where the sync_for_cpu call invalidates cache
lines, we were prefetching the first 128 bytes of the packet and then
immediately invalidating them.  To avoid that I have moved the sync up so
that it happens before the prefetch and the skbuff allocation, allowing
the prefetched data to actually be used.
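
Roughly, the resulting order in the fetch path looks like this (a sketch
only, with names abbreviated from the hunk below):

	/* sync/invalidate first, so the data about to be touched is valid */
	dma_sync_single_range_for_cpu(rx_ring->dev, rx_buffer->dma,
				      rx_buffer->page_offset, size,
				      DMA_FROM_DEVICE);

	/* only now is prefetching the packet headers worthwhile */
	prefetch(page_address(rx_buffer->page) + rx_buffer->page_offset);

	/* ... allocate the skb and copy/attach the data ... */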

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
---
 drivers/net/ethernet/intel/igb/igb_main.c |   53 ++++++++++++++++++-----------
 1 file changed, 33 insertions(+), 20 deletions(-)

diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
index 4feca69..c8c458c 100644
--- a/drivers/net/ethernet/intel/igb/igb_main.c
+++ b/drivers/net/ethernet/intel/igb/igb_main.c
@@ -3947,10 +3947,21 @@ static void igb_clean_rx_ring(struct igb_ring *rx_ring)
 		if (!buffer_info->page)
 			continue;
 
-		dma_unmap_page(rx_ring->dev,
-			       buffer_info->dma,
-			       PAGE_SIZE,
-			       DMA_FROM_DEVICE);
+		/* Invalidate cache lines that may have been written to by
+		 * device so that we avoid corrupting memory.
+		 */
+		dma_sync_single_range_for_cpu(rx_ring->dev,
+					      buffer_info->dma,
+					      buffer_info->page_offset,
+					      IGB_RX_BUFSZ,
+					      DMA_FROM_DEVICE);
+
+		/* free resources associated with mapping */
+		dma_unmap_page_attrs(rx_ring->dev,
+				     buffer_info->dma,
+				     PAGE_SIZE,
+				     DMA_FROM_DEVICE,
+				     DMA_ATTR_SKIP_CPU_SYNC);
 		__free_page(buffer_info->page);
 
 		buffer_info->page = NULL;
@@ -6808,12 +6819,6 @@ static void igb_reuse_rx_page(struct igb_ring *rx_ring,
 
 	/* transfer page from old buffer to new buffer */
 	*new_buff = *old_buff;
-
-	/* sync the buffer for use by the device */
-	dma_sync_single_range_for_device(rx_ring->dev, old_buff->dma,
-					 old_buff->page_offset,
-					 IGB_RX_BUFSZ,
-					 DMA_FROM_DEVICE);
 }
 
 static inline bool igb_page_is_reserved(struct page *page)
@@ -6934,6 +6939,13 @@ static struct sk_buff *igb_fetch_rx_buffer(struct igb_ring *rx_ring,
 	page = rx_buffer->page;
 	prefetchw(page);
 
+	/* we are reusing so sync this buffer for CPU use */
+	dma_sync_single_range_for_cpu(rx_ring->dev,
+				      rx_buffer->dma,
+				      rx_buffer->page_offset,
+				      size,
+				      DMA_FROM_DEVICE);
+
 	if (likely(!skb)) {
 		void *page_addr = page_address(page) +
 				  rx_buffer->page_offset;
@@ -6958,21 +6970,15 @@ static struct sk_buff *igb_fetch_rx_buffer(struct igb_ring *rx_ring,
 		prefetchw(skb->data);
 	}
 
-	/* we are reusing so sync this buffer for CPU use */
-	dma_sync_single_range_for_cpu(rx_ring->dev,
-				      rx_buffer->dma,
-				      rx_buffer->page_offset,
-				      size,
-				      DMA_FROM_DEVICE);
-
 	/* pull page into skb */
 	if (igb_add_rx_frag(rx_ring, rx_buffer, size, rx_desc, skb)) {
 		/* hand second half of page back to the ring */
 		igb_reuse_rx_page(rx_ring, rx_buffer);
 	} else {
 		/* we are not reusing the buffer so unmap it */
-		dma_unmap_page(rx_ring->dev, rx_buffer->dma,
-			       PAGE_SIZE, DMA_FROM_DEVICE);
+		dma_unmap_page_attrs(rx_ring->dev, rx_buffer->dma,
+				     PAGE_SIZE, DMA_FROM_DEVICE,
+				     DMA_ATTR_SKIP_CPU_SYNC);
 	}
 
 	/* clear contents of rx_buffer */
@@ -7230,7 +7236,8 @@ static bool igb_alloc_mapped_page(struct igb_ring *rx_ring,
 	}
 
 	/* map page for use */
-	dma = dma_map_page(rx_ring->dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE);
+	dma = dma_map_page_attrs(rx_ring->dev, page, 0, PAGE_SIZE,
+				 DMA_FROM_DEVICE, DMA_ATTR_SKIP_CPU_SYNC);
 
 	/* if mapping failed free memory back to system since
 	 * there isn't much point in holding memory we can't use
@@ -7271,6 +7278,12 @@ void igb_alloc_rx_buffers(struct igb_ring *rx_ring, u16 cleaned_count)
 		if (!igb_alloc_mapped_page(rx_ring, bi))
 			break;
 
+		/* sync the buffer for use by the device */
+		dma_sync_single_range_for_device(rx_ring->dev, bi->dma,
+						 bi->page_offset,
+						 IGB_RX_BUFSZ,
+						 DMA_FROM_DEVICE);
+
 		/* Refresh the desc even if buffer_addrs didn't change
 		 * because each write-back erases this info.
 		 */

Thread overview: 38+ messages
2016-10-25 15:36 [net-next PATCH 00/27] Add support for DMA writable pages being writable by the network stack Alexander Duyck
2016-10-25 15:36 ` [net-next PATCH 01/27] swiotlb: Drop unused function swiotlb_map_sg Alexander Duyck
2016-10-25 15:36 ` [net-next PATCH 02/27] swiotlb-xen: Enforce return of DMA_ERROR_CODE in mapping function Alexander Duyck
2016-10-28 17:35   ` Konrad Rzeszutek Wilk
2016-10-25 15:37 ` [net-next PATCH 03/27] swiotlb: Add support for DMA_ATTR_SKIP_CPU_SYNC Alexander Duyck
2016-10-28 17:34   ` Konrad Rzeszutek Wilk
2016-10-28 18:09     ` Alexander Duyck
2016-10-25 15:37 ` [net-next PATCH 04/27] arch/arc: Add option to skip sync on DMA mapping Alexander Duyck
2016-10-25 22:00   ` Vineet Gupta
2016-10-25 15:37 ` [net-next PATCH 05/27] arch/arm: Add option to skip sync on DMA map and unmap Alexander Duyck
2016-10-25 15:37 ` [net-next PATCH 06/27] arch/avr32: Add option to skip sync on DMA map Alexander Duyck
2016-10-25 15:37 ` [net-next PATCH 07/27] arch/blackfin: " Alexander Duyck
2016-10-25 15:37 ` [net-next PATCH 08/27] arch/c6x: Add option to skip sync on DMA map and unmap Alexander Duyck
2016-10-25 15:37 ` [net-next PATCH 09/27] arch/frv: Add option to skip sync on DMA map Alexander Duyck
2016-10-25 15:37 ` [net-next PATCH 10/27] arch/hexagon: Add option to skip DMA sync as a part of mapping Alexander Duyck
2016-10-25 15:37 ` [net-next PATCH 11/27] arch/m68k: " Alexander Duyck
2016-10-25 15:37 ` [net-next PATCH 12/27] arch/metag: Add option to skip DMA sync as a part of map and unmap Alexander Duyck
2016-10-25 15:37 ` [net-next PATCH 13/27] arch/microblaze: " Alexander Duyck
2016-10-25 15:38 ` [net-next PATCH 14/27] arch/mips: " Alexander Duyck
2016-10-25 15:38 ` [net-next PATCH 15/27] arch/nios2: " Alexander Duyck
2016-10-25 15:38 ` [net-next PATCH 16/27] arch/openrisc: Add option to skip DMA sync as a part of mapping Alexander Duyck
2016-10-25 15:38 ` [net-next PATCH 17/27] arch/parisc: Add option to skip DMA sync as a part of map and unmap Alexander Duyck
2016-10-25 15:38 ` [net-next PATCH 18/27] arch/powerpc: Add option to skip DMA sync as a part of mapping Alexander Duyck
2016-10-25 15:38 ` [net-next PATCH 19/27] arch/sh: " Alexander Duyck
2016-10-25 15:38 ` [net-next PATCH 20/27] arch/sparc: Add option to skip DMA sync as a part of map and unmap Alexander Duyck
2016-10-25 15:38 ` [net-next PATCH 21/27] arch/tile: " Alexander Duyck
2016-10-25 15:38 ` [net-next PATCH 22/27] arch/xtensa: Add option to skip DMA sync as a part of mapping Alexander Duyck
2016-10-25 15:38 ` [net-next PATCH 23/27] dma: Add calls for dma_map_page_attrs and dma_unmap_page_attrs Alexander Duyck
2016-10-25 15:38 ` [net-next PATCH 24/27] mm: Add support for releasing multiple instances of a page Alexander Duyck
2016-10-25 15:39 ` Alexander Duyck [this message]
2016-10-26 17:21   ` [Intel-wired-lan] [net-next PATCH 25/27] igb: Update driver to make use of DMA_ATTR_SKIP_CPU_SYNC Jeff Kirsher
2016-10-25 15:39 ` [net-next PATCH 26/27] igb: Update code to better handle incrementing page count Alexander Duyck
2016-10-26 17:21   ` [Intel-wired-lan] " Jeff Kirsher
2016-10-25 15:39 ` [net-next PATCH 27/27] igb: Revert "igb: Revert support for build_skb in igb" Alexander Duyck
2016-10-26 17:22   ` [Intel-wired-lan] " Jeff Kirsher
2016-10-26 15:45 ` [net-next PATCH 00/27] Add support for DMA writable pages being writable by the network stack Jesper Dangaard Brouer
2016-10-28 15:48 ` Alexander Duyck
2016-10-28 17:06   ` David Miller
