* [PATCH net-next] page_pool: add DMA_ATTR_WEAK_ORDERING on all mappings
@ 2023-04-17 15:28 Jakub Kicinski
From: Jakub Kicinski @ 2023-04-17 15:28 UTC (permalink / raw)
  To: davem
  Cc: netdev, edumazet, pabeni, michael.chan, Jakub Kicinski, hawk,
	ilias.apalodimas

Commit c519fe9a4f0d ("bnxt: add dma mapping attributes") added
DMA_ATTR_WEAK_ORDERING to DMA attrs on bnxt. It has since spread
to a few more drivers (possibly via copy-and-paste).
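
For reference, the driver-side pattern looks roughly like the sketch
below. This is a minimal illustration modeled on what those drivers
do; pdev and page are assumed to come from the surrounding driver
code and are not the exact bnxt fields:

	/* Map an Rx page, allowing weakly ordered device accesses.
	 * Illustrative only; pdev/page come from the driver context.
	 */
	dma_addr_t mapping;

	mapping = dma_map_page_attrs(&pdev->dev, page, 0, PAGE_SIZE,
				     DMA_FROM_DEVICE,
				     DMA_ATTR_WEAK_ORDERING);
	if (dma_mapping_error(&pdev->dev, mapping))
		return -EIO;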

DMA_ATTR_WEAK_ORDERING only seems to matter on Sparc and PowerPC/cell;
the rarity of these platforms is likely why we never bothered adding
the attribute in the page pool, even though it should be safe to add.

To make the page pool migration in drivers which set this flag less
of a risk (of regressing the precious sparc database workloads or
whatever needed this), let's add DMA_ATTR_WEAK_ORDERING on all
page pool DMA mappings.

We could make this a driver opt-in, but frankly I don't think it's
worth complicating the API. I can't think of a reason why device
accesses to packet memory would have to be strongly ordered.
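
For the record, the opt-in would have meant one more flag in struct
page_pool_params, along the lines of the hypothetical sketch below
(PP_FLAG_WEAK_ORDERING is made up for illustration and is not part
of the API):

	/* Hypothetical opt-in; PP_FLAG_WEAK_ORDERING does not exist. */
	struct page_pool_params pp_params = {
		.flags		= PP_FLAG_DMA_MAP | PP_FLAG_WEAK_ORDERING,
		.pool_size	= 1024,
		.dev		= &pdev->dev,
		.dma_dir	= DMA_FROM_DEVICE,
	};
	struct page_pool *pool = page_pool_create(&pp_params);

Setting the attribute unconditionally avoids adding that knob while
leaving existing page pool users unchanged.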

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
CC: hawk@kernel.org
CC: ilias.apalodimas@linaro.org
---
 net/core/page_pool.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 2f6bf422ed30..97f20f7ff4fc 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -316,7 +316,8 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
 	 */
 	dma = dma_map_page_attrs(pool->p.dev, page, 0,
 				 (PAGE_SIZE << pool->p.order),
-				 pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
+				 pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC |
+						  DMA_ATTR_WEAK_ORDERING);
 	if (dma_mapping_error(pool->p.dev, dma))
 		return false;
 
@@ -484,7 +485,7 @@ void page_pool_release_page(struct page_pool *pool, struct page *page)
 	/* When page is unmapped, it cannot be returned to our pool */
 	dma_unmap_page_attrs(pool->p.dev, dma,
 			     PAGE_SIZE << pool->p.order, pool->p.dma_dir,
-			     DMA_ATTR_SKIP_CPU_SYNC);
+			     DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING);
 	page_pool_set_dma_addr(page, 0);
 skip_dma_unmap:
 	page_pool_clear_pp_info(page);
-- 
2.39.2




Thread overview: 8+ messages
2023-04-17 15:28 [PATCH net-next] page_pool: add DMA_ATTR_WEAK_ORDERING on all mappings Jakub Kicinski
2023-04-18  9:32 ` Ilias Apalodimas
2023-04-18 10:19 ` Somnath Kotur
2023-04-18 11:23 ` Jesper Dangaard Brouer
2023-04-19 15:12   ` Jesper Dangaard Brouer
2023-04-19 13:49 ` Alexander Lobakin
2023-04-19 18:24   ` Jakub Kicinski
2023-04-19 21:20 ` patchwork-bot+netdevbpf
