From: Alexander Lobakin <aleksander.lobakin@intel.com>
To: "David S. Miller" <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>,
Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>
Cc: Alexander Lobakin <aleksander.lobakin@intel.com>,
Christoph Hellwig <hch@lst.de>,
Marek Szyprowski <m.szyprowski@samsung.com>,
Robin Murphy <robin.murphy@arm.com>,
Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
"Rafael J. Wysocki" <rafael@kernel.org>,
Magnus Karlsson <magnus.karlsson@intel.com>,
nex.sw.ncis.osdt.itp.upstreaming@intel.com, bpf@vger.kernel.org,
netdev@vger.kernel.org, iommu@lists.linux.dev,
linux-kernel@vger.kernel.org
Subject: [PATCH net-next v5 6/7] page_pool: check for DMA sync shortcut earlier
Date: Mon, 6 May 2024 11:48:54 +0200
Message-ID: <20240506094855.12944-7-aleksander.lobakin@intel.com>
In-Reply-To: <20240506094855.12944-1-aleksander.lobakin@intel.com>

We can save a couple more function calls in the Page Pool code if we
check for dma_need_sync() earlier, at the same point where we test
pp->p.dma_sync. Move both checks into an inline wrapper and call the
PP wrapper over the generic DMA sync function only when both are true.

Note that the result of dma_need_sync() can't be cached in &page_pool,
as it may change at any time once an SWIOTLB buffer is allocated or
mapped.

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
---
net/core/page_pool.c | 33 +++++++++++++++++++--------------
1 file changed, 19 insertions(+), 14 deletions(-)
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index e680c4af2745..84132c978773 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -399,16 +399,26 @@ static struct page *__page_pool_get_cached(struct page_pool *pool)
return page;
}
-static void page_pool_dma_sync_for_device(const struct page_pool *pool,
- const struct page *page,
- unsigned int dma_sync_size)
+static void __page_pool_dma_sync_for_device(const struct page_pool *pool,
+ const struct page *page,
+ u32 dma_sync_size)
{
+#ifdef CONFIG_DMA_NEED_SYNC
dma_addr_t dma_addr = page_pool_get_dma_addr(page);
dma_sync_size = min(dma_sync_size, pool->p.max_len);
- dma_sync_single_range_for_device(pool->p.dev, dma_addr,
- pool->p.offset, dma_sync_size,
- pool->p.dma_dir);
+ __dma_sync_single_for_device(pool->p.dev, dma_addr + pool->p.offset,
+ dma_sync_size, pool->p.dma_dir);
+#endif
+}
+
+static __always_inline void
+page_pool_dma_sync_for_device(const struct page_pool *pool,
+ const struct page *page,
+ u32 dma_sync_size)
+{
+ if (pool->dma_sync && dma_dev_need_sync(pool->p.dev))
+ __page_pool_dma_sync_for_device(pool, page, dma_sync_size);
}
static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
@@ -430,8 +440,7 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
if (page_pool_set_dma_addr(page, dma))
goto unmap_failed;
- if (pool->dma_sync)
- page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
+ page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
return true;
@@ -701,9 +710,7 @@ __page_pool_put_page(struct page_pool *pool, struct page *page,
if (likely(__page_pool_page_can_be_recycled(page))) {
/* Read barrier done in page_ref_count / READ_ONCE */
- if (pool->dma_sync)
- page_pool_dma_sync_for_device(pool, page,
- dma_sync_size);
+ page_pool_dma_sync_for_device(pool, page, dma_sync_size);
if (allow_direct && page_pool_recycle_in_cache(page, pool))
return NULL;
@@ -842,9 +849,7 @@ static struct page *page_pool_drain_frag(struct page_pool *pool,
return NULL;
if (__page_pool_page_can_be_recycled(page)) {
- if (pool->dma_sync)
- page_pool_dma_sync_for_device(pool, page, -1);
-
+ page_pool_dma_sync_for_device(pool, page, -1);
return page;
}
--
2.45.0