netdev.vger.kernel.org archive mirror
From: Ilias Apalodimas <ilias.apalodimas@linaro.org>
To: netdev@vger.kernel.org, jaswinder.singh@linaro.org
Cc: ard.biesheuvel@linaro.org, arnd@arndb.de,
	Ilias Apalodimas <ilias.apalodimas@linaro.org>
Subject: [net-next, PATCH, v2] net: netsec: Sync dma for device on buffer allocation
Date: Thu,  4 Jul 2019 17:46:09 +0300	[thread overview]
Message-ID: <1562251569-16506-1-git-send-email-ilias.apalodimas@linaro.org> (raw)

Quoting Arnd,

We have to do a sync_single_for_device /somewhere/ before the
buffer is given to the device. On a non-cache-coherent machine with
a write-back cache, there may be dirty cache lines that get written back
after the device DMAs data into the buffer (e.g. from a previous memset
from before the buffer got freed), so you absolutely need to flush any
dirty cache lines on it first.

Since coherency is configurable in this device, make sure we cover
all configurations by explicitly syncing the allocated buffer for the
device before refilling its descriptors.
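
The resulting refill path can be sketched as follows. This is a
condensed, standalone view of the change in the diff below; the
identifiers match the netsec driver, but the function shown here is
illustrative rather than the literal patched code:

	/* Sketch: any buffer about to be handed to the device for DMA
	 * must first have its dirty cache lines flushed, otherwise a
	 * stale write-back (e.g. from a memset done before the page
	 * was recycled) can clobber data the device DMAs in later.
	 */
	static void *rx_refill_sketch(struct device *dev,
				      struct page_pool *pool,
				      dma_addr_t *dma_handle)
	{
		struct page *page = page_pool_dev_alloc_pages(pool);
		dma_addr_t dma_start;

		if (!page)
			return NULL;

		dma_start = page_pool_get_dma_addr(page);
		*dma_handle = dma_start + NETSEC_RXBUF_HEADROOM;

		/* Sync the whole page: page_pool mapped all of it, and
		 * the direction depends on the pool config (XDP vs not).
		 */
		dma_sync_single_for_device(dev, dma_start, PAGE_SIZE,
					   page_pool_get_dma_dir(pool));
		return page_address(page);
	}

On a cache-coherent configuration the sync is effectively a no-op, so
doing it unconditionally covers both cases.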

Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
---

Changes since V1: 
- Make the code more readable
 
 drivers/net/ethernet/socionext/netsec.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c
index 5544a722543f..ada7626bf3a2 100644
--- a/drivers/net/ethernet/socionext/netsec.c
+++ b/drivers/net/ethernet/socionext/netsec.c
@@ -727,21 +727,26 @@ static void *netsec_alloc_rx_data(struct netsec_priv *priv,
 {
 
 	struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_RX];
+	enum dma_data_direction dma_dir;
+	dma_addr_t dma_start;
 	struct page *page;
 
 	page = page_pool_dev_alloc_pages(dring->page_pool);
 	if (!page)
 		return NULL;
 
+	dma_start = page_pool_get_dma_addr(page);
 	/* We allocate the same buffer length for XDP and non-XDP cases.
 	 * page_pool API will map the whole page, skip what's needed for
 	 * network payloads and/or XDP
 	 */
-	*dma_handle = page_pool_get_dma_addr(page) + NETSEC_RXBUF_HEADROOM;
+	*dma_handle = dma_start + NETSEC_RXBUF_HEADROOM;
 	/* Make sure the incoming payload fits in the page for XDP and non-XDP
 	 * cases and reserve enough space for headroom + skb_shared_info
 	 */
 	*desc_len = PAGE_SIZE - NETSEC_RX_BUF_NON_DATA;
+	dma_dir = page_pool_get_dma_dir(dring->page_pool);
+	dma_sync_single_for_device(priv->dev, dma_start, PAGE_SIZE, dma_dir);
 
 	return page_address(page);
 }
-- 
2.20.1



Thread overview: 6+ messages
2019-07-04 14:46 Ilias Apalodimas [this message]
2019-07-04 15:22 ` [net-next, PATCH, v2] net: netsec: Sync dma for device on buffer allocation Arnd Bergmann
2019-07-04 17:39 ` Jesper Dangaard Brouer
2019-07-04 17:52   ` Ilias Apalodimas
2019-07-04 19:12     ` Ilias Apalodimas
2019-07-05 22:45 ` David Miller
