Intel-Wired-Lan Archive on lore.kernel.org
From: Alexander Lobakin <aleksander.lobakin@intel.com>
To: "David S. Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>
Cc: Paul Menzel <pmenzel@molgen.mpg.de>,
	Maciej Fijalkowski <maciej.fijalkowski@intel.com>,
	Jesper Dangaard Brouer <hawk@kernel.org>,
	Larysa Zaremba <larysa.zaremba@intel.com>,
	netdev@vger.kernel.org, Alexander Duyck <alexanderduyck@fb.com>,
	Ilias Apalodimas <ilias.apalodimas@linaro.org>,
	linux-kernel@vger.kernel.org,
	Alexander Lobakin <aleksander.lobakin@intel.com>,
	Yunsheng Lin <linyunsheng@huawei.com>,
	Michal Kubiak <michal.kubiak@intel.com>,
	intel-wired-lan@lists.osuosl.org,
	David Christensen <drc@linux.vnet.ibm.com>
Subject: [Intel-wired-lan] [PATCH net-next v5 08/14] page_pool: add DMA-sync-for-CPU inline helpers
Date: Fri, 24 Nov 2023 16:47:26 +0100	[thread overview]
Message-ID: <20231124154732.1623518-9-aleksander.lobakin@intel.com> (raw)
In-Reply-To: <20231124154732.1623518-1-aleksander.lobakin@intel.com>

Each driver is responsible for syncing buffers written by HW for the CPU
before accessing them. Almost every PP-enabled driver uses the same
pattern, which can be shorthanded into a static inline to make the
driver code a bit more compact.
Introduce a couple of such helpers. The first one is sort of internal and
performs the DMA sync unconditionally for the size passed from the
driver. The second one first checks whether a sync is needed for this
pool at all and is the main one to be used by drivers.

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
---
 include/net/page_pool/helpers.h | 43 +++++++++++++++++++++++++++++++++
 1 file changed, 43 insertions(+)

diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
index 528a76c66270..797275e75d38 100644
--- a/include/net/page_pool/helpers.h
+++ b/include/net/page_pool/helpers.h
@@ -432,6 +432,49 @@ static inline bool page_pool_dma_addr_need_sync(const struct page *page)
 	return page->dma_addr & PAGE_POOL_DMA_ADDR_NEED_SYNC;
 }
 
+/**
+ * __page_pool_dma_sync_for_cpu - sync Rx page for CPU after it's written by HW
+ * @pool: &page_pool the @page belongs to
+ * @page: page to sync
+ * @offset: offset from page start to "hard" start if using frags
+ * @dma_sync_size: size of the data written to the page
+ *
+ * Can be used as a shorthand to sync Rx pages before accessing them in the
+ * driver. The caller must ensure the pool was created with %PP_FLAG_DMA_MAP.
+ * Note that this version performs DMA sync unconditionally, even if the
+ * associated PP doesn't perform sync-for-device. Consider the non-underscored
+ * version first if unsure.
+ */
+static inline void __page_pool_dma_sync_for_cpu(const struct page_pool *pool,
+						const struct page *page,
+						u32 offset, u32 dma_sync_size)
+{
+	dma_sync_single_range_for_cpu(pool->p.dev,
+				      page_pool_get_dma_addr(page),
+				      offset + pool->p.offset, dma_sync_size,
+				      page_pool_get_dma_dir(pool));
+}
+
+/**
+ * page_pool_dma_sync_for_cpu - sync Rx page for CPU if needed
+ * @pool: &page_pool the @page belongs to
+ * @page: page to sync
+ * @offset: offset from page start to "hard" start if using frags
+ * @dma_sync_size: size of the data written to the page
+ *
+ * Performs DMA sync for CPU, but *only* when both:
+ * 1) page_pool was created with %PP_FLAG_DMA_SYNC_DEV to manage DMA sync;
+ * 2) sync shortcut is not available (IOMMU, swiotlb, non-coherent DMA, ...).
+ */
+static inline void page_pool_dma_sync_for_cpu(const struct page_pool *pool,
+					      const struct page *page,
+					      u32 offset, u32 dma_sync_size)
+{
+	if (page_pool_dma_addr_need_sync(page))
+		__page_pool_dma_sync_for_cpu(pool, page, offset,
+					     dma_sync_size);
+}
+
 static inline bool page_pool_put(struct page_pool *pool)
 {
 	return refcount_dec_and_test(&pool->user_cnt);
-- 
2.42.0

_______________________________________________
Intel-wired-lan mailing list
Intel-wired-lan@osuosl.org
https://lists.osuosl.org/mailman/listinfo/intel-wired-lan

Thread overview: 39+ messages
2023-11-24 15:47 [Intel-wired-lan] [PATCH net-next v5 00/14] net: intel: start The Great Code Dedup + Page Pool for iavf Alexander Lobakin
2023-11-24 15:47 ` [Intel-wired-lan] [PATCH net-next v5 01/14] page_pool: make sure frag API fields don't span between cachelines Alexander Lobakin
2023-11-25 12:29   ` Yunsheng Lin
2023-11-27 14:08     ` Alexander Lobakin
2023-11-29  2:55       ` Yunsheng Lin
2023-11-29 13:12         ` Alexander Lobakin
2023-11-26 22:54   ` Jakub Kicinski
2023-11-27 14:12     ` Alexander Lobakin
2023-11-24 15:47 ` [Intel-wired-lan] [PATCH net-next v5 02/14] page_pool: don't use driver-set flags field directly Alexander Lobakin
2023-11-24 15:47 ` [Intel-wired-lan] [PATCH net-next v5 03/14] page_pool: avoid calling no-op externals when possible Alexander Lobakin
2023-11-25 13:04   ` Yunsheng Lin
2023-11-27 14:32     ` Alexander Lobakin
2023-11-27 18:17       ` Jakub Kicinski
2023-11-28 16:50         ` Alexander Lobakin
2023-11-29  3:17       ` Yunsheng Lin
2023-11-29 13:17         ` Alexander Lobakin
2023-11-30  8:46           ` Yunsheng Lin
2023-11-30 11:58             ` Alexander Lobakin
2023-11-30 12:20               ` Yunsheng Lin
2023-12-01 14:37                 ` Alexander Lobakin
2023-12-12 15:25                 ` Christoph Hellwig
2023-11-24 15:47 ` [Intel-wired-lan] [PATCH net-next v5 04/14] net: intel: introduce Intel Ethernet common library Alexander Lobakin
2023-11-24 15:47 ` [Intel-wired-lan] [PATCH net-next v5 05/14] iavf: kill "legacy-rx" for good Alexander Lobakin
2023-11-24 15:47 ` [Intel-wired-lan] [PATCH net-next v5 06/14] iavf: drop page splitting and recycling Alexander Lobakin
2023-11-24 15:47 ` [Intel-wired-lan] [PATCH net-next v5 07/14] page_pool: constify some read-only function arguments Alexander Lobakin
2023-11-24 15:47 ` Alexander Lobakin [this message]
2023-11-24 15:47 ` [Intel-wired-lan] [PATCH net-next v5 09/14] libie: add Rx buffer management (via Page Pool) Alexander Lobakin
2023-11-24 15:47 ` [Intel-wired-lan] [PATCH net-next v5 10/14] iavf: pack iavf_ring more efficiently Alexander Lobakin
2023-11-24 15:47 ` [Intel-wired-lan] [PATCH net-next v5 11/14] iavf: switch to Page Pool Alexander Lobakin
2023-11-24 15:47 ` [Intel-wired-lan] [PATCH net-next v5 12/14] libie: add common queue stats Alexander Lobakin
2023-11-24 15:47 ` [Intel-wired-lan] [PATCH net-next v5 13/14] libie: add per-queue Page Pool stats Alexander Lobakin
2023-11-29 13:40   ` Alexander Lobakin
2023-11-29 14:29     ` Jakub Kicinski
2023-11-30 16:01       ` Alexander Lobakin
2023-11-30 16:45         ` Alexander Lobakin
2023-12-01  6:55           ` Jakub Kicinski
2023-11-24 15:47 ` [Intel-wired-lan] [PATCH net-next v5 14/14] iavf: switch queue stats to libie Alexander Lobakin
2023-11-27  9:04 ` [Intel-wired-lan] [PATCH net-next v5 00/14] net: intel: start The Great Code Dedup + Page Pool for iavf Jiri Pirko
2023-11-27 10:23   ` Przemek Kitszel
