From: Lorenzo Bianconi <lorenzo@kernel.org>
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, ilias.apalodimas@linaro.org,
brouer@redhat.com, lorenzo.bianconi@redhat.com,
mcroce@redhat.com, jonathan.lemon@gmail.com
Subject: [PATCH v5 net-next 0/3] add DMA-sync-for-device capability to page_pool API
Date: Wed, 20 Nov 2019 16:54:16 +0200
Message-ID: <cover.1574261017.git.lorenzo@kernel.org>
Introduce the possibility to sync DMA memory for device in the page_pool API.
This feature allows syncing only the memory area actually touched by the
device instead of always syncing the full buffer
(dma_sync_single_for_device can be very costly).
Please note that DMA-sync-for-CPU remains the device driver's responsibility.
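A minimal sketch of how a driver might opt in to the new behaviour (field and
flag names follow this series; the concrete sizing below is just an
illustrative assumption, not taken from the patches):

	struct page_pool_params pp_params = {
		.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
		.order		= 0,
		.pool_size	= 256,
		.nid		= NUMA_NO_NODE,
		.dev		= dev,			/* device performing the DMA */
		.dma_dir	= DMA_FROM_DEVICE,
		/* sync only the area the device can actually write to */
		.offset		= XDP_PACKET_HEADROOM,
		.max_len	= PAGE_SIZE - XDP_PACKET_HEADROOM,
	};
	struct page_pool *pool = page_pool_create(&pp_params);

With PP_FLAG_DMA_SYNC_DEV set, page_pool performs the DMA-sync-for-device on
recycle, bounded by max_len starting at offset, so the driver no longer has
to sync the full buffer itself.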
Relying on the page_pool DMA sync, the mvneta driver improves its XDP_DROP
rate by about 170Kpps:
- XDP_DROP DMA sync managed by mvneta driver: ~420Kpps
- XDP_DROP DMA sync managed by page_pool API: ~585Kpps
I am not changing the naming convention for the moment since that would touch
other drivers as well; I will address it in a follow-up series.
Changes since v4:
- do not allow the driver to set max_len to 0
- convert PP_FLAG_DMA_MAP/PP_FLAG_DMA_SYNC_DEV to BIT() macro
Changes since v3:
- move dma_sync_for_device before putting the page in the ptr_ring in
  __page_pool_recycle_into_ring since the ptr_ring can be consumed
  concurrently. Simplify the code by moving dma_sync_for_device
  before running __page_pool_recycle_direct/__page_pool_recycle_into_ring
  (see the helper sketch after the changelog)
Changes since v2:
- rely on PP_FLAG_DMA_SYNC_DEV flag instead of dma_sync
Changes since v1:
- rename sync to dma_sync
- set dma_sync_size to 0xFFFFFFFF in page_pool_recycle_direct and
page_pool_put_page routines
- Improve documentation
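For reference, a minimal sketch of the sync-for-device helper described above,
using the naming from this series (the exact code landing in
net/core/page_pool.c may differ):

	static void page_pool_dma_sync_for_device(struct page_pool *pool,
						  struct page *page,
						  unsigned int dma_sync_size)
	{
		/* never sync more than the area the driver declared via max_len */
		dma_sync_size = min(dma_sync_size, pool->p.max_len);
		dma_sync_single_range_for_device(pool->p.dev, page->dma_addr,
						 pool->p.offset, dma_sync_size,
						 pool->p.dma_dir);
	}

It runs in the recycle path before the page reaches the per-CPU cache or the
ptr_ring, so a concurrent consumer of the ring always gets a buffer that is
already synced for the device.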
Lorenzo Bianconi (3):
net: mvneta: rely on page_pool_recycle_direct in mvneta_run_xdp
net: page_pool: add the possibility to sync DMA memory for device
net: mvneta: get rid of huge dma sync in mvneta_rx_refill
drivers/net/ethernet/marvell/mvneta.c | 24 +++++++++++-------
include/net/page_pool.h | 24 +++++++++++++-----
net/core/page_pool.c | 36 +++++++++++++++++++++++++--
3 files changed, 67 insertions(+), 17 deletions(-)
--
2.21.0
Thread overview: 14+ messages
2019-11-20 14:54 Lorenzo Bianconi [this message]
2019-11-20 14:54 ` [PATCH v5 net-next 1/3] net: mvneta: rely on page_pool_recycle_direct in mvneta_run_xdp Lorenzo Bianconi
2019-11-20 15:45 ` Jesper Dangaard Brouer
2019-11-20 14:54 ` [PATCH v5 net-next 2/3] net: page_pool: add the possibility to sync DMA memory for device Lorenzo Bianconi
2019-11-20 17:49 ` Jesper Dangaard Brouer
2019-11-20 18:00 ` Ilias Apalodimas
2019-11-20 18:42 ` Jonathan Lemon
2019-11-20 19:04 ` Jesper Dangaard Brouer
2019-11-20 14:54 ` [PATCH v5 net-next 3/3] net: mvneta: get rid of huge dma sync in mvneta_rx_refill Lorenzo Bianconi
2019-11-20 17:49 ` Jesper Dangaard Brouer
2019-11-20 15:37 ` [PATCH v5 net-next 0/3] add DMA-sync-for-device capability to page_pool API Jesper Dangaard Brouer
2019-11-20 15:45 ` Lorenzo Bianconi
2019-11-20 18:05 ` Ilias Apalodimas
2019-11-20 20:34 ` David Miller