From: Paolo Abeni <pabeni@redhat.com>
To: Jakub Kicinski <kuba@kernel.org>
Cc: netdev@vger.kernel.org, edumazet@google.com,
andrew+netdev@lunn.ch, horms@kernel.org, almasrymina@google.com,
michael.chan@broadcom.com, tariqt@nvidia.com,
dtatulea@nvidia.com, hawk@kernel.org,
ilias.apalodimas@linaro.org, alexanderduyck@fb.com,
sdf@fomichev.me, davem@davemloft.net
Subject: Re: [PATCH net-next 00/15] eth: fbnic: support queue API and zero-copy Rx
Date: Thu, 21 Aug 2025 09:51:55 +0200 [thread overview]
Message-ID: <5bba5969-36f4-4a0a-8c03-aea16e2a40de@redhat.com> (raw)
In-Reply-To: <20250820025704.166248-1-kuba@kernel.org>
On 8/20/25 4:56 AM, Jakub Kicinski wrote:
> Add support for queue API to fbnic, enable zero-copy Rx.
>
> The first patch adds page_pool_get(); I alluded to this
> new helper when discussing commit 64fdaa94bfe0 ("net: page_pool:
> allow enabling recycling late, fix false positive warning").
> For page pool-oriented reviewers another patch of interest
> is patch 11, which adds a helper to test whether an rxq wants
> to create an unreadable page pool. mlx5 already has this
> sort of check; we said we would add a helper when more
> drivers need it (IIRC), so I guess now is the time.
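[Editor's aside: for reviewers not deep in the page_pool side, the sketch
below shows roughly what the two helpers amount to. The pre-check helper's
name and the rxq field access are assumptions for illustration, not code
lifted from the patches; the refcount follows the existing pool->user_cnt
scheme used by page_pool_destroy().]

    /* Sketch only.  page_pool_get() takes an extra user reference on the
     * pool (the counterpart of the put done by page_pool_destroy()), so a
     * driver can keep a pool alive while enabling recycling late.
     */
    #include <linux/netdevice.h>
    #include <linux/refcount.h>
    #include <net/netdev_rx_queue.h>
    #include <net/page_pool/types.h>

    static inline void page_pool_get(struct page_pool *pool)
    {
            refcount_inc(&pool->user_cnt);
    }

    /* Hypothetical pre-check (name made up here): will the page pool for
     * this rx queue be unreadable, i.e. is a memory provider (devmem or
     * io_uring zcrx) bound to the queue?  mlx5 open-codes a similar test.
     */
    static inline bool netif_rxq_wants_unreadable_pp(struct net_device *dev,
                                                     unsigned int rxq_idx)
    {
            return !!__netif_get_rx_queue(dev, rxq_idx)->mp_params.mp_ops;
    }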
>
> Patches 2-4 reshuffle the Rx init/allocation path to better
> align structures and functions which operate on them. Notably
> patch 2 moves the page pool pointer to the queue struct (from
> NAPI).
>
> Patch 5 converts the driver to use netmem_ref. The driver has
> a separate and explicit buffer queue for scatter / payloads,
> so only references to those are converted.
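[Editor's aside: as a rough illustration of what the netmem conversion
means for that payload ring; struct and field names below are hypothetical,
not the actual fbnic code.]

    /* Illustrative only: after the conversion the payload buffer ring
     * stores an opaque netmem_ref instead of a struct page *, since an
     * unreadable pool (devmem or io_uring zcrx backed) never hands out
     * real pages.
     */
    #include <net/netmem.h>
    #include <net/page_pool/helpers.h>

    struct rx_payload_buf {
            netmem_ref netmem;      /* was: struct page *page */
            u32 offset;
    };

    static void rx_payload_buf_free(struct page_pool *pp,
                                    struct rx_payload_buf *buf)
    {
            /* netmem-aware recycling covers both readable pages and
             * unreadable net_iov backed memory
             */
            page_pool_put_full_netmem(pp, buf->netmem, false);
    }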
>
> The next 5 patches are more boring code shifts.
>
> Patch 12 adds unreadable memory support to page pool allocation.
>
> Patch 15 finally adds the support for queue API.
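[Editor's aside: "queue API" here means wiring up netdev_queue_mgmt_ops so
the core can stop and restart an individual Rx queue around memory-provider
binding. The sketch below shows the general shape only; the function and
struct names are placeholders, not necessarily what patch 15 uses.]

    /* Core drives these ops to quiesce and restart a single Rx queue
     * when a memory provider is bound or unbound.
     */
    #include <linux/netdevice.h>
    #include <net/netdev_queues.h>

    struct fbnic_q_mem {            /* hypothetical per-queue restart state */
            void *rx_rings;
    };

    static int fbnic_queue_mem_alloc(struct net_device *dev, void *qmem, int idx)
    {
            /* allocate replacement rings / page pool for queue @idx into @qmem */
            return 0;
    }

    static void fbnic_queue_mem_free(struct net_device *dev, void *qmem)
    {
            /* free whatever fbnic_queue_mem_alloc() set up */
    }

    static int fbnic_queue_stop(struct net_device *dev, void *qmem, int idx)
    {
            /* stop queue @idx, saving its current resources into @qmem */
            return 0;
    }

    static int fbnic_queue_start(struct net_device *dev, void *qmem, int idx)
    {
            /* bring queue @idx back up using the resources in @qmem */
            return 0;
    }

    static const struct netdev_queue_mgmt_ops fbnic_queue_mgmt_ops = {
            .ndo_queue_mem_size     = sizeof(struct fbnic_q_mem),
            .ndo_queue_mem_alloc    = fbnic_queue_mem_alloc,
            .ndo_queue_mem_free     = fbnic_queue_mem_free,
            .ndo_queue_start        = fbnic_queue_start,
            .ndo_queue_stop         = fbnic_queue_stop,
    };

    /* registered at probe time: dev->queue_mgmt_ops = &fbnic_queue_mgmt_ops; */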
>
> $ ./tools/testing/selftests/drivers/net/hw/iou-zcrx.py
> TAP version 13
> 1..3
> ok 1 iou-zcrx.test_zcrx
> ok 2 iou-zcrx.test_zcrx_oneshot
> ok 3 iou-zcrx.test_zcrx_rss
> # Totals: pass:3 fail:0 xfail:0 xpass:0 skip:0 error:0
Blindly noting that this series is apparently causing a few H/W
selftest failures, even though e.g. this one:
# ok 2 ping.test_default_v6
# # Exception| Traceback (most recent call last):
# # Exception|   File "/home/virtme/testing/wt-24/tools/testing/selftests/net/lib/py/ksft.py", line 244, in ksft_run
# # Exception|     case(*args)
# # Exception|   File "/home/virtme/testing/wt-24/tools/testing/selftests/drivers/net/./ping.py", line 173, in test_xdp_generic_sb
# # Exception|     _set_xdp_generic_sb_on(cfg)
# # Exception|   File "/home/virtme/testing/wt-24/tools/testing/selftests/drivers/net/./ping.py", line 72, in _set_xdp_generic_sb_on
# # Exception|     cmd(f"ip link set dev {cfg.ifname} mtu 1500 xdpgeneric obj {prog} sec xdp", shell=True)
# # Exception|   File "/home/virtme/testing/wt-24/tools/testing/selftests/net/lib/py/utils.py", line 71, in __init__
# # Exception|     self.process(terminate=False, fail=fail, timeout=timeout)
# # Exception|   File "/home/virtme/testing/wt-24/tools/testing/selftests/net/lib/py/utils.py", line 91, in process
# # Exception|     raise CmdExitFailure("Command failed: %s\nSTDOUT: %s\nSTDERR: %s" %
# # Exception| net.lib.py.utils.CmdExitFailure: Command failed: ip link set dev enp1s0 mtu 1500 xdpgeneric obj /home/virtme/testing/wt-24/tools/testing/selftests/net/lib/xdp_dummy.bpf.o sec xdp
# # Exception| STDOUT: b''
# # Exception| STDERR: b'Error: unable to install XDP to device using tcp-data-split.\n'
# not ok 3 ping.test_xdp_generic_sb
looks more related to commit 2b30fc01a6c788ed4a799ed8a6f42ed9ac82417f
(but the ping test did not fail back then).
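[Editor's aside: for context, the rejection is coming from the core XDP
attach path rather than the driver. Paraphrased from memory (field names
and error code approximate, not quoted from the kernel), the check looks
roughly like:]

    /* Paraphrase only: the XDP attach path refuses to install a program
     * while TCP header/data split is forced on, because the payload half
     * of the frame may sit in memory XDP cannot read.
     */
    #include <linux/ethtool.h>
    #include <linux/netdevice.h>
    #include <linux/netlink.h>
    #include <net/netdev_queues.h>

    static int xdp_attach_allowed(struct net_device *dev,
                                  struct netlink_ext_ack *extack)
    {
            if (dev->cfg->hds_config == ETHTOOL_TCP_DATA_SPLIT_ENABLED) {
                    NL_SET_ERR_MSG(extack,
                                   "unable to install XDP to device using tcp-data-split");
                    return -EBUSY;
            }
            return 0;
    }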
/P
Thread overview: 33+ messages
2025-08-20 2:56 [PATCH net-next 00/15] eth: fbnic: support queue API and zero-copy Rx Jakub Kicinski
2025-08-20 2:56 ` [PATCH net-next 01/15] net: page_pool: add page_pool_get() Jakub Kicinski
2025-08-20 10:35 ` Jesper Dangaard Brouer
2025-08-20 10:58 ` Dragos Tatulea
2025-08-20 23:11 ` Mina Almasry
2025-08-20 2:56 ` [PATCH net-next 02/15] eth: fbnic: move page pool pointer from NAPI to the ring struct Jakub Kicinski
2025-08-20 2:56 ` [PATCH net-next 03/15] eth: fbnic: move xdp_rxq_info_reg() to resource alloc Jakub Kicinski
2025-08-20 2:56 ` [PATCH net-next 04/15] eth: fbnic: move page pool alloc to fbnic_alloc_rx_qt_resources() Jakub Kicinski
2025-08-20 2:56 ` [PATCH net-next 05/15] eth: fbnic: use netmem_ref where applicable Jakub Kicinski
2025-08-20 23:22 ` Mina Almasry
2025-08-20 2:56 ` [PATCH net-next 06/15] eth: fbnic: request ops lock Jakub Kicinski
2025-08-20 2:56 ` [PATCH net-next 07/15] eth: fbnic: split fbnic_disable() Jakub Kicinski
2025-08-20 2:56 ` [PATCH net-next 08/15] eth: fbnic: split fbnic_flush() Jakub Kicinski
2025-08-20 2:56 ` [PATCH net-next 09/15] eth: fbnic: split fbnic_enable() Jakub Kicinski
2025-08-20 2:56 ` [PATCH net-next 10/15] eth: fbnic: split fbnic_fill() Jakub Kicinski
2025-08-20 2:57 ` [PATCH net-next 11/15] net: page_pool: add helper to pre-check if PP will be unreadable Jakub Kicinski
2025-08-20 11:30 ` Dragos Tatulea
2025-08-20 14:52 ` Jakub Kicinski
2025-08-20 17:45 ` Dragos Tatulea
2025-08-20 2:57 ` [PATCH net-next 12/15] eth: fbnic: allocate unreadable page pool for the payloads Jakub Kicinski
2025-08-20 23:33 ` Mina Almasry
2025-08-21 0:45 ` Jakub Kicinski
2025-08-20 2:57 ` [PATCH net-next 13/15] eth: fbnic: defer page pool recycling activation to queue start Jakub Kicinski
2025-08-20 2:57 ` [PATCH net-next 14/15] eth: fbnic: don't pass NAPI into pp alloc Jakub Kicinski
2025-08-20 2:57 ` [PATCH net-next 15/15] eth: fbnic: support queue ops / zero-copy Rx Jakub Kicinski
2025-08-21 7:51 ` Paolo Abeni [this message]
2025-08-21 14:28 ` [PATCH net-next 00/15] eth: fbnic: support queue API and " Jakub Kicinski
2025-08-21 14:53 ` Taehee Yoo
2025-08-21 15:03 ` Jakub Kicinski
2025-08-21 15:22 ` Mina Almasry
2025-08-21 15:42 ` Jakub Kicinski
2025-08-21 15:02 ` Paolo Abeni
2025-08-21 15:20 ` patchwork-bot+netdevbpf