From: Dave Jiang <dave.jiang@intel.com>
To: Koichiro Den <den@valinux.co.jp>, Jon Mason <jdmason@kudzu.us>,
Allen Hubbe <allenbh@gmail.com>,
Andrew Lunn <andrew+netdev@lunn.ch>,
"David S. Miller" <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>,
Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>
Cc: ntb@lists.linux.dev, netdev@vger.kernel.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH 0/3] net: ntb_netdev: Add Multi-queue support
Date: Tue, 24 Feb 2026 09:20:35 -0700
Message-ID: <eeae611a-d35f-44f1-a100-50a397ae3eb4@intel.com>
In-Reply-To: <20260224152809.1799199-1-den@valinux.co.jp>
On 2/24/26 8:28 AM, Koichiro Den wrote:
> Hi,
>
> ntb_netdev currently hard-codes a single NTB transport queue pair, which
> means the datapath effectively runs as a single-queue netdev regardless
> of available CPUs / parallel flows.
>
> The longer-term motivation here is throughput scale-out: allow
> ntb_netdev to grow beyond the single-QP bottleneck and make it possible
> to spread TX/RX work across multiple queue pairs as link speeds and core
> counts keep increasing.
>
> Multi-queue also unlocks the standard networking knobs on top of it. In
> particular, once the device exposes multiple TX queues, qdisc/tc can
> steer flows/traffic classes into different queues (via
> skb->queue_mapping), enabling per-flow/per-class scheduling and QoS in a
> familiar way.
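
As a sketch of what that steering could look like once multiple TX queues exist, the stock multiq qdisc plus a skbedit action can pin a flow to a queue via skb->queue_mapping (illustration only; the interface name ntb0 and the port match are assumptions, not part of this series):

```shell
# Attach the multiq qdisc so per-TX-queue scheduling is available
tc qdisc add dev ntb0 root handle 1: multiq
# Steer traffic to TCP port 5201 onto TX queue 1 (sets skb->queue_mapping)
tc filter add dev ntb0 parent 1: protocol ip prio 1 u32 \
    match ip dport 5201 0xffff \
    action skbedit queue_mapping 1
```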
>
> This series is a small plumbing step in that direction:
>
> 1) Introduce a per-queue context object (struct ntb_netdev_queue) and
> move queue-pair state out of struct ntb_netdev. Probe creates queue
> pairs in a loop and configures the netdev queue counts to match the
> number that was successfully created.
>
> 2) Expose ntb_num_queues as a module parameter to request multiple
> queue pairs at probe time. The value is clamped to 1..64 and kept
> read-only for now (no runtime reconfiguration).
>
> 3) Report the active queue-pair count via ethtool -l (get_channels),
> so users can confirm the device configuration from user space.
>
> Compatibility:
> - Default remains ntb_num_queues=1, so behaviour is unchanged unless
> the user explicitly requests more queues.
>
> Kernel base:
> - ntb-next latest:
> commit 7b3302c687ca ("ntb_hw_amd: Fix incorrect debug message in link
> disable path")
>
> Usage (example):
> - modprobe ntb_netdev ntb_num_queues=<N> # Patch 2 takes care of it
> - ethtool -l <ifname> # Patch 3 takes care of it
>
> Patch summary:
> 1/3 net: ntb_netdev: Introduce per-queue context
> 2/3 net: ntb_netdev: Make queue pair count configurable
> 3/3 net: ntb_netdev: Expose queue pair count via ethtool -l
>
> Testing / results:
> Environment / command line:
> - 2x R-Car S4 Spider boards
> "Kernel base" (see above) + this series
> - For TCP load:
> [RC] $ sudo iperf3 -s
> [EP] $ sudo iperf3 -Z -c ${SERVER_IP} -l 65480 -w 512M -P 4
> - For UDP load:
> [RC] $ sudo iperf3 -s
> [EP] $ sudo iperf3 -u -b 0 -c ${SERVER_IP} -l 65480 -w 512M -P 4
>
> Before (without this series):
> TCP / UDP : 602 Mbps / 598 Mbps
>
> Before (ntb_num_queues=1):
> TCP / UDP : 588 Mbps / 605 Mbps

What accounts for the dip in TCP performance?

>
> After (ntb_num_queues=2):
> TCP / UDP : 602 Mbps / 598 Mbps
>
> Notes:
> In my current test environment, enabling multiple queue pairs does
> not improve throughput. The receive-side memcpy in ntb_transport is
> the dominant cost and limits scaling at present.
>
> Still, this series lays the groundwork for future scaling, for
> example once a transport backend is introduced that avoids memcpy
> to/from PCI memory space on both ends (see the superseded RFC
> series:
> https://lore.kernel.org/all/20251217151609.3162665-1-den@valinux.co.jp/).
>
>
> Best regards,
> Koichiro
>
> Koichiro Den (3):
> net: ntb_netdev: Introduce per-queue context
> net: ntb_netdev: Make queue pair count configurable
> net: ntb_netdev: Expose queue pair count via ethtool -l
>
> drivers/net/ntb_netdev.c | 326 +++++++++++++++++++++++++++------------
> 1 file changed, 228 insertions(+), 98 deletions(-)
>
for the series
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Thread overview:
2026-02-24 15:28 [PATCH 0/3] net: ntb_netdev: Add Multi-queue support Koichiro Den
2026-02-24 15:28 ` [PATCH 1/3] net: ntb_netdev: Introduce per-queue context Koichiro Den
2026-02-24 15:28 ` [PATCH 2/3] net: ntb_netdev: Make queue pair count configurable Koichiro Den
2026-02-24 15:28 ` [PATCH 3/3] net: ntb_netdev: Expose queue pair count via ethtool -l Koichiro Den
2026-02-24 16:20 ` Dave Jiang [this message]
2026-02-25 3:36 ` [PATCH 0/3] net: ntb_netdev: Add Multi-queue support Koichiro Den
2026-02-25 15:07 ` Dave Jiang
2026-02-26 3:50 ` Jakub Kicinski
2026-02-26 8:01 ` Koichiro Den