* [PATCH 0/3] net: ntb_netdev: Add Multi-queue support
@ 2026-02-24 15:28 Koichiro Den
2026-02-24 15:28 ` [PATCH 1/3] net: ntb_netdev: Introduce per-queue context Koichiro Den
` (4 more replies)
0 siblings, 5 replies; 9+ messages in thread
From: Koichiro Den @ 2026-02-24 15:28 UTC (permalink / raw)
To: Jon Mason, Dave Jiang, Allen Hubbe, Andrew Lunn, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: ntb, netdev, linux-kernel
Hi,
ntb_netdev currently hard-codes a single NTB transport queue pair, which
means the datapath effectively runs as a single-queue netdev regardless
of available CPUs / parallel flows.
The longer-term motivation here is throughput scale-out: allow
ntb_netdev to grow beyond the single-QP bottleneck and make it possible
to spread TX/RX work across multiple queue pairs as link speeds and core
counts keep increasing.
Multi-queue also unlocks the standard networking knobs on top of it. In
particular, once the device exposes multiple TX queues, qdisc/tc can
steer flows/traffic classes into different queues (via
skb->queue_mapping), enabling per-flow/per-class scheduling and QoS in a
familiar way.
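For illustration, once multiple TX queues are exposed, a flow could be pinned
to a specific queue with tc. A hypothetical sketch (the interface name "ethX",
the port number and the queue id are placeholders, and the device must have
been loaded with ntb_num_queues >= 2):

```shell
# Attach a multiq qdisc, then steer TCP traffic to dport 5201 into TX queue 1
# by setting skb->queue_mapping. "ethX" stands in for the ntb_netdev interface.
tc qdisc add dev ethX root handle 1: multiq
tc filter add dev ethX parent 1: protocol ip prio 1 u32 \
    match ip dport 5201 0xffff \
    action skbedit queue_mapping 1
```

This is device configuration rather than tested code; whether multiq or mq is
the better root qdisc depends on the setup.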
This series is a small plumbing step in that direction:
1) Introduce a per-queue context object (struct ntb_netdev_queue) and
move queue-pair state out of struct ntb_netdev. Probe creates queue
pairs in a loop and configures the netdev queue counts to match the
number that was successfully created.
2) Expose ntb_num_queues as a module parameter to request multiple
queue pairs at probe time. The value is clamped to 1..64 and kept
read-only for now (no runtime reconfiguration).
3) Report the active queue-pair count via ethtool -l (get_channels),
so users can confirm the device configuration from user space.
Compatibility:
- Default remains ntb_num_queues=1, so behaviour is unchanged unless
the user explicitly requests more queues.
Kernel base:
- ntb-next latest:
commit 7b3302c687ca ("ntb_hw_amd: Fix incorrect debug message in link
disable path")
Usage (example):
- modprobe ntb_netdev ntb_num_queues=<N> # Patch 2 takes care of it
- ethtool -l <ifname> # Patch 3 takes care of it
Patch summary:
1/3 net: ntb_netdev: Introduce per-queue context
2/3 net: ntb_netdev: Make queue pair count configurable
3/3 net: ntb_netdev: Expose queue pair count via ethtool -l
Testing / results:
Environment / command line:
- 2x R-Car S4 Spider boards
"Kernel base" (see above) + this series
- For TCP load:
[RC] $ sudo iperf3 -s
[EP] $ sudo iperf3 -Z -c ${SERVER_IP} -l 65480 -w 512M -P 4
- For UDP load:
[RC] $ sudo iperf3 -s
[EP] $ sudo iperf3 -ub0 -c ${SERVER_IP} -l 65480 -w 512M -P 4
Before (without this series):
TCP / UDP : 602 Mbps / 598 Mbps
Before (ntb_num_queues=1):
TCP / UDP : 588 Mbps / 605 Mbps
After (ntb_num_queues=2):
TCP / UDP : 602 Mbps / 598 Mbps
Notes:
In my current test environment, enabling multiple queue pairs does
not improve throughput. The receive-side memcpy in ntb_transport is
the dominant cost and limits scaling at present.
Still, this series lays the groundwork for future scaling, for
example once a transport backend is introduced that avoids memcpy
to/from PCI memory space on both ends (see the superseded RFC
series:
https://lore.kernel.org/all/20251217151609.3162665-1-den@valinux.co.jp/).
Best regards,
Koichiro
Koichiro Den (3):
net: ntb_netdev: Introduce per-queue context
net: ntb_netdev: Make queue pair count configurable
net: ntb_netdev: Expose queue pair count via ethtool -l
drivers/net/ntb_netdev.c | 326 +++++++++++++++++++++++++++------------
1 file changed, 228 insertions(+), 98 deletions(-)
--
2.51.0
^ permalink raw reply [flat|nested] 9+ messages in thread
* [PATCH 1/3] net: ntb_netdev: Introduce per-queue context
2026-02-24 15:28 [PATCH 0/3] net: ntb_netdev: Add Multi-queue support Koichiro Den
@ 2026-02-24 15:28 ` Koichiro Den
2026-02-24 15:28 ` [PATCH 2/3] net: ntb_netdev: Make queue pair count configurable Koichiro Den
` (3 subsequent siblings)
4 siblings, 0 replies; 9+ messages in thread
From: Koichiro Den @ 2026-02-24 15:28 UTC (permalink / raw)
To: Jon Mason, Dave Jiang, Allen Hubbe, Andrew Lunn, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: ntb, netdev, linux-kernel
Prepare ntb_netdev for multi-queue operation by moving queue-pair state
out of struct ntb_netdev.
Introduce struct ntb_netdev_queue to carry the ntb_transport_qp pointer,
the per-QP TX timer and queue id. Pass this object as the callback
context and convert the RX/TX handlers and link event path accordingly.
The probe path allocates an array of per-queue objects and creates queue
pairs in a loop, recording the number successfully created in
dev->num_queues. The netdev is configured to match that count and the
driver uses the netdev subqueue helpers for flow control.
With the current default of ntb_num_queues=1, behavior is unchanged.
Signed-off-by: Koichiro Den <den@valinux.co.jp>
---
drivers/net/ntb_netdev.c | 298 ++++++++++++++++++++++++++-------------
1 file changed, 200 insertions(+), 98 deletions(-)
diff --git a/drivers/net/ntb_netdev.c b/drivers/net/ntb_netdev.c
index fbeae05817e9..d8734dfc2eee 100644
--- a/drivers/net/ntb_netdev.c
+++ b/drivers/net/ntb_netdev.c
@@ -53,6 +53,7 @@
#include <linux/pci.h>
#include <linux/ntb.h>
#include <linux/ntb_transport.h>
+#include <linux/slab.h>
#define NTB_NETDEV_VER "0.7"
@@ -70,11 +71,22 @@ static unsigned int tx_start = 10;
/* Number of descriptors still available before stop upper layer tx */
static unsigned int tx_stop = 5;
-struct ntb_netdev {
- struct pci_dev *pdev;
- struct net_device *ndev;
+static unsigned int ntb_num_queues = 1;
+
+struct ntb_netdev;
+
+struct ntb_netdev_queue {
+ struct ntb_netdev *ntdev;
struct ntb_transport_qp *qp;
struct timer_list tx_timer;
+ u16 qid;
+};
+
+struct ntb_netdev {
+ struct pci_dev *pdev;
+ struct net_device *ndev;
+ unsigned int num_queues;
+ struct ntb_netdev_queue *queues;
};
#define NTB_TX_TIMEOUT_MS 1000
@@ -82,14 +94,24 @@ struct ntb_netdev {
static void ntb_netdev_event_handler(void *data, int link_is_up)
{
- struct net_device *ndev = data;
- struct ntb_netdev *dev = netdev_priv(ndev);
+ struct ntb_netdev_queue *q = data;
+ struct ntb_netdev *dev = q->ntdev;
+ struct net_device *ndev = dev->ndev;
+ bool any_up = false;
+ unsigned int i;
- netdev_dbg(ndev, "Event %x, Link %x\n", link_is_up,
- ntb_transport_link_query(dev->qp));
+ netdev_dbg(ndev, "Event %x, Link %x, qp %u\n", link_is_up,
+ ntb_transport_link_query(q->qp), q->qid);
if (link_is_up) {
- if (ntb_transport_link_query(dev->qp))
+ for (i = 0; i < dev->num_queues; i++) {
+ if (ntb_transport_link_query(dev->queues[i].qp)) {
+ any_up = true;
+ break;
+ }
+ }
+
+ if (any_up)
netif_carrier_on(ndev);
} else {
netif_carrier_off(ndev);
@@ -99,7 +121,9 @@ static void ntb_netdev_event_handler(void *data, int link_is_up)
static void ntb_netdev_rx_handler(struct ntb_transport_qp *qp, void *qp_data,
void *data, int len)
{
- struct net_device *ndev = qp_data;
+ struct ntb_netdev_queue *q = qp_data;
+ struct ntb_netdev *dev = q->ntdev;
+ struct net_device *ndev = dev->ndev;
struct sk_buff *skb;
int rc;
@@ -118,6 +142,7 @@ static void ntb_netdev_rx_handler(struct ntb_transport_qp *qp, void *qp_data,
skb_put(skb, len);
skb->protocol = eth_type_trans(skb, ndev);
skb->ip_summed = CHECKSUM_NONE;
+ skb_record_rx_queue(skb, q->qid);
if (netif_rx(skb) == NET_RX_DROP) {
ndev->stats.rx_errors++;
@@ -135,7 +160,8 @@ static void ntb_netdev_rx_handler(struct ntb_transport_qp *qp, void *qp_data,
}
enqueue_again:
- rc = ntb_transport_rx_enqueue(qp, skb, skb->data, ndev->mtu + ETH_HLEN);
+ rc = ntb_transport_rx_enqueue(q->qp, skb, skb->data,
+ ndev->mtu + ETH_HLEN);
if (rc) {
dev_kfree_skb_any(skb);
ndev->stats.rx_errors++;
@@ -143,42 +169,37 @@ static void ntb_netdev_rx_handler(struct ntb_transport_qp *qp, void *qp_data,
}
}
-static int __ntb_netdev_maybe_stop_tx(struct net_device *netdev,
- struct ntb_transport_qp *qp, int size)
+static int ntb_netdev_maybe_stop_tx(struct ntb_netdev_queue *q, int size)
{
- struct ntb_netdev *dev = netdev_priv(netdev);
+ struct net_device *ndev = q->ntdev->ndev;
+
+ if (ntb_transport_tx_free_entry(q->qp) >= size)
+ return 0;
+
+ netif_stop_subqueue(ndev, q->qid);
- netif_stop_queue(netdev);
/* Make sure to see the latest value of ntb_transport_tx_free_entry()
* since the queue was last started.
*/
smp_mb();
- if (likely(ntb_transport_tx_free_entry(qp) < size)) {
- mod_timer(&dev->tx_timer, jiffies + usecs_to_jiffies(tx_time));
+ if (likely(ntb_transport_tx_free_entry(q->qp) < size)) {
+ mod_timer(&q->tx_timer, jiffies + usecs_to_jiffies(tx_time));
return -EBUSY;
}
- netif_start_queue(netdev);
+ netif_wake_subqueue(ndev, q->qid);
+
return 0;
}
-static int ntb_netdev_maybe_stop_tx(struct net_device *ndev,
- struct ntb_transport_qp *qp, int size)
-{
- if (netif_queue_stopped(ndev) ||
- (ntb_transport_tx_free_entry(qp) >= size))
- return 0;
-
- return __ntb_netdev_maybe_stop_tx(ndev, qp, size);
-}
-
static void ntb_netdev_tx_handler(struct ntb_transport_qp *qp, void *qp_data,
void *data, int len)
{
- struct net_device *ndev = qp_data;
+ struct ntb_netdev_queue *q = qp_data;
+ struct ntb_netdev *dev = q->ntdev;
+ struct net_device *ndev = dev->ndev;
struct sk_buff *skb;
- struct ntb_netdev *dev = netdev_priv(ndev);
skb = data;
if (!skb || !ndev)
@@ -194,13 +215,12 @@ static void ntb_netdev_tx_handler(struct ntb_transport_qp *qp, void *qp_data,
dev_kfree_skb_any(skb);
- if (ntb_transport_tx_free_entry(dev->qp) >= tx_start) {
+ if (ntb_transport_tx_free_entry(qp) >= tx_start) {
/* Make sure anybody stopping the queue after this sees the new
* value of ntb_transport_tx_free_entry()
*/
smp_mb();
- if (netif_queue_stopped(ndev))
- netif_wake_queue(ndev);
+ netif_wake_subqueue(ndev, q->qid);
}
}
@@ -208,16 +228,26 @@ static netdev_tx_t ntb_netdev_start_xmit(struct sk_buff *skb,
struct net_device *ndev)
{
struct ntb_netdev *dev = netdev_priv(ndev);
+ u16 qid = skb_get_queue_mapping(skb);
+ struct ntb_netdev_queue *q;
int rc;
- ntb_netdev_maybe_stop_tx(ndev, dev->qp, tx_stop);
+ if (unlikely(!dev->num_queues))
+ goto err;
- rc = ntb_transport_tx_enqueue(dev->qp, skb, skb->data, skb->len);
+ if (unlikely(qid >= dev->num_queues))
+ qid = 0;
+
+ q = &dev->queues[qid];
+
+ ntb_netdev_maybe_stop_tx(q, tx_stop);
+
+ rc = ntb_transport_tx_enqueue(q->qp, skb, skb->data, skb->len);
if (rc)
goto err;
/* check for next submit */
- ntb_netdev_maybe_stop_tx(ndev, dev->qp, tx_stop);
+ ntb_netdev_maybe_stop_tx(q, tx_stop);
return NETDEV_TX_OK;
@@ -229,80 +259,103 @@ static netdev_tx_t ntb_netdev_start_xmit(struct sk_buff *skb,
static void ntb_netdev_tx_timer(struct timer_list *t)
{
- struct ntb_netdev *dev = timer_container_of(dev, t, tx_timer);
+ struct ntb_netdev_queue *q = timer_container_of(q, t, tx_timer);
+ struct ntb_netdev *dev = q->ntdev;
struct net_device *ndev = dev->ndev;
- if (ntb_transport_tx_free_entry(dev->qp) < tx_stop) {
- mod_timer(&dev->tx_timer, jiffies + usecs_to_jiffies(tx_time));
+ if (ntb_transport_tx_free_entry(q->qp) < tx_stop) {
+ mod_timer(&q->tx_timer, jiffies + usecs_to_jiffies(tx_time));
} else {
- /* Make sure anybody stopping the queue after this sees the new
+ /*
+ * Make sure anybody stopping the queue after this sees the new
* value of ntb_transport_tx_free_entry()
*/
smp_mb();
- if (netif_queue_stopped(ndev))
- netif_wake_queue(ndev);
+ netif_wake_subqueue(ndev, q->qid);
}
}
static int ntb_netdev_open(struct net_device *ndev)
{
struct ntb_netdev *dev = netdev_priv(ndev);
+ struct ntb_netdev_queue *queue;
struct sk_buff *skb;
- int rc, i, len;
+ int rc = 0, i, len;
+ unsigned int q;
- /* Add some empty rx bufs */
- for (i = 0; i < NTB_RXQ_SIZE; i++) {
- skb = netdev_alloc_skb(ndev, ndev->mtu + ETH_HLEN);
- if (!skb) {
- rc = -ENOMEM;
- goto err;
- }
+ /* Add some empty rx bufs for each queue */
+ for (q = 0; q < dev->num_queues; q++) {
+ queue = &dev->queues[q];
+
+ for (i = 0; i < NTB_RXQ_SIZE; i++) {
+ skb = netdev_alloc_skb(ndev, ndev->mtu + ETH_HLEN);
+ if (!skb) {
+ rc = -ENOMEM;
+ goto err;
+ }
- rc = ntb_transport_rx_enqueue(dev->qp, skb, skb->data,
- ndev->mtu + ETH_HLEN);
- if (rc) {
- dev_kfree_skb(skb);
- goto err;
+ rc = ntb_transport_rx_enqueue(queue->qp, skb, skb->data,
+ ndev->mtu + ETH_HLEN);
+ if (rc) {
+ dev_kfree_skb(skb);
+ goto err;
+ }
}
+
+ timer_setup(&queue->tx_timer, ntb_netdev_tx_timer, 0);
}
- timer_setup(&dev->tx_timer, ntb_netdev_tx_timer, 0);
-
netif_carrier_off(ndev);
- ntb_transport_link_up(dev->qp);
- netif_start_queue(ndev);
+
+ for (q = 0; q < dev->num_queues; q++)
+ ntb_transport_link_up(dev->queues[q].qp);
+
+ netif_tx_start_all_queues(ndev);
return 0;
err:
- while ((skb = ntb_transport_rx_remove(dev->qp, &len)))
- dev_kfree_skb(skb);
+ for (q = 0; q < dev->num_queues; q++) {
+ queue = &dev->queues[q];
+
+ while ((skb = ntb_transport_rx_remove(queue->qp, &len)))
+ dev_kfree_skb(skb);
+ }
return rc;
}
static int ntb_netdev_close(struct net_device *ndev)
{
struct ntb_netdev *dev = netdev_priv(ndev);
+ struct ntb_netdev_queue *queue;
struct sk_buff *skb;
+ unsigned int q;
int len;
- ntb_transport_link_down(dev->qp);
+ netif_tx_stop_all_queues(ndev);
- while ((skb = ntb_transport_rx_remove(dev->qp, &len)))
- dev_kfree_skb(skb);
+ for (q = 0; q < dev->num_queues; q++) {
+ queue = &dev->queues[q];
- timer_delete_sync(&dev->tx_timer);
+ ntb_transport_link_down(queue->qp);
+ while ((skb = ntb_transport_rx_remove(queue->qp, &len)))
+ dev_kfree_skb(skb);
+
+ timer_delete_sync(&queue->tx_timer);
+ }
return 0;
}
static int ntb_netdev_change_mtu(struct net_device *ndev, int new_mtu)
{
struct ntb_netdev *dev = netdev_priv(ndev);
+ struct ntb_netdev_queue *queue;
struct sk_buff *skb;
- int len, rc;
+ unsigned int q, i;
+ int len, rc = 0;
- if (new_mtu > ntb_transport_max_size(dev->qp) - ETH_HLEN)
+ if (new_mtu > ntb_transport_max_size(dev->queues[0].qp) - ETH_HLEN)
return -EINVAL;
if (!netif_running(ndev)) {
@@ -311,41 +364,54 @@ static int ntb_netdev_change_mtu(struct net_device *ndev, int new_mtu)
}
/* Bring down the link and dispose of posted rx entries */
- ntb_transport_link_down(dev->qp);
+ for (q = 0; q < dev->num_queues; q++)
+ ntb_transport_link_down(dev->queues[q].qp);
if (ndev->mtu < new_mtu) {
- int i;
+ for (q = 0; q < dev->num_queues; q++) {
+ queue = &dev->queues[q];
- for (i = 0; (skb = ntb_transport_rx_remove(dev->qp, &len)); i++)
- dev_kfree_skb(skb);
-
- for (; i; i--) {
- skb = netdev_alloc_skb(ndev, new_mtu + ETH_HLEN);
- if (!skb) {
- rc = -ENOMEM;
- goto err;
- }
-
- rc = ntb_transport_rx_enqueue(dev->qp, skb, skb->data,
- new_mtu + ETH_HLEN);
- if (rc) {
+ for (i = 0;
+ (skb = ntb_transport_rx_remove(queue->qp, &len));
+ i++)
dev_kfree_skb(skb);
- goto err;
+
+ for (; i; i--) {
+ skb = netdev_alloc_skb(ndev,
+ new_mtu + ETH_HLEN);
+ if (!skb) {
+ rc = -ENOMEM;
+ goto err;
+ }
+
+ rc = ntb_transport_rx_enqueue(queue->qp, skb,
+ skb->data,
+ new_mtu +
+ ETH_HLEN);
+ if (rc) {
+ dev_kfree_skb(skb);
+ goto err;
+ }
}
}
}
WRITE_ONCE(ndev->mtu, new_mtu);
- ntb_transport_link_up(dev->qp);
+ for (q = 0; q < dev->num_queues; q++)
+ ntb_transport_link_up(dev->queues[q].qp);
return 0;
err:
- ntb_transport_link_down(dev->qp);
+ for (q = 0; q < dev->num_queues; q++) {
+ struct ntb_netdev_queue *queue = &dev->queues[q];
- while ((skb = ntb_transport_rx_remove(dev->qp, &len)))
- dev_kfree_skb(skb);
+ ntb_transport_link_down(queue->qp);
+
+ while ((skb = ntb_transport_rx_remove(queue->qp, &len)))
+ dev_kfree_skb(skb);
+ }
netdev_err(ndev, "Error changing MTU, device inoperable\n");
return rc;
@@ -404,6 +470,7 @@ static int ntb_netdev_probe(struct device *client_dev)
struct net_device *ndev;
struct pci_dev *pdev;
struct ntb_netdev *dev;
+ unsigned int q;
int rc;
ntb = dev_ntb(client_dev->parent);
@@ -411,7 +478,7 @@ static int ntb_netdev_probe(struct device *client_dev)
if (!pdev)
return -ENODEV;
- ndev = alloc_etherdev(sizeof(*dev));
+ ndev = alloc_etherdev_mq(sizeof(*dev), ntb_num_queues);
if (!ndev)
return -ENOMEM;
@@ -420,6 +487,15 @@ static int ntb_netdev_probe(struct device *client_dev)
dev = netdev_priv(ndev);
dev->ndev = ndev;
dev->pdev = pdev;
+ dev->num_queues = 0;
+
+ dev->queues = kcalloc(ntb_num_queues, sizeof(*dev->queues),
+ GFP_KERNEL);
+ if (!dev->queues) {
+ rc = -ENOMEM;
+ goto err_free_netdev;
+ }
+
ndev->features = NETIF_F_HIGHDMA;
ndev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
@@ -436,26 +512,47 @@ static int ntb_netdev_probe(struct device *client_dev)
ndev->min_mtu = 0;
ndev->max_mtu = ETH_MAX_MTU;
- dev->qp = ntb_transport_create_queue(ndev, client_dev,
- &ntb_netdev_handlers);
- if (!dev->qp) {
+ for (q = 0; q < ntb_num_queues; q++) {
+ struct ntb_netdev_queue *queue = &dev->queues[q];
+
+ queue->ntdev = dev;
+ queue->qid = q;
+ queue->qp = ntb_transport_create_queue(queue, client_dev,
+ &ntb_netdev_handlers);
+ if (!queue->qp)
+ break;
+
+ dev->num_queues++;
+ }
+
+ if (!dev->num_queues) {
rc = -EIO;
- goto err;
+ goto err_free_queues;
}
- ndev->mtu = ntb_transport_max_size(dev->qp) - ETH_HLEN;
+ rc = netif_set_real_num_queues(ndev, dev->num_queues, dev->num_queues);
+ if (rc)
+ goto err_free_qps;
+
+ ndev->mtu = ntb_transport_max_size(dev->queues[0].qp) - ETH_HLEN;
rc = register_netdev(ndev);
if (rc)
- goto err1;
+ goto err_free_qps;
dev_set_drvdata(client_dev, ndev);
- dev_info(&pdev->dev, "%s created\n", ndev->name);
+ dev_info(&pdev->dev, "%s created with %u queue pairs\n",
+ ndev->name, dev->num_queues);
return 0;
-err1:
- ntb_transport_free_queue(dev->qp);
-err:
+err_free_qps:
+ for (q = 0; q < dev->num_queues; q++)
+ ntb_transport_free_queue(dev->queues[q].qp);
+
+err_free_queues:
+ kfree(dev->queues);
+
+err_free_netdev:
free_netdev(ndev);
return rc;
}
@@ -464,9 +561,14 @@ static void ntb_netdev_remove(struct device *client_dev)
{
struct net_device *ndev = dev_get_drvdata(client_dev);
struct ntb_netdev *dev = netdev_priv(ndev);
+ unsigned int q;
+
unregister_netdev(ndev);
- ntb_transport_free_queue(dev->qp);
+ for (q = 0; q < dev->num_queues; q++)
+ ntb_transport_free_queue(dev->queues[q].qp);
+
+ kfree(dev->queues);
free_netdev(ndev);
}
--
2.51.0
* [PATCH 2/3] net: ntb_netdev: Make queue pair count configurable
2026-02-24 15:28 [PATCH 0/3] net: ntb_netdev: Add Multi-queue support Koichiro Den
2026-02-24 15:28 ` [PATCH 1/3] net: ntb_netdev: Introduce per-queue context Koichiro Den
@ 2026-02-24 15:28 ` Koichiro Den
2026-02-24 15:28 ` [PATCH 3/3] net: ntb_netdev: Expose queue pair count via ethtool -l Koichiro Den
` (2 subsequent siblings)
4 siblings, 0 replies; 9+ messages in thread
From: Koichiro Den @ 2026-02-24 15:28 UTC (permalink / raw)
To: Jon Mason, Dave Jiang, Allen Hubbe, Andrew Lunn, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: ntb, netdev, linux-kernel
Expose ntb_num_queues as a module parameter so users can request more
than one NTB transport queue pair.
The value is clamped to a reasonable range (1..64) to avoid unbounded
allocations. The default remains 1. Since we don't support changing the
value at runtime yet, the permission is intentionally set to 0444.
Signed-off-by: Koichiro Den <den@valinux.co.jp>
---
drivers/net/ntb_netdev.c | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)
diff --git a/drivers/net/ntb_netdev.c b/drivers/net/ntb_netdev.c
index d8734dfc2eee..837a2e95f06e 100644
--- a/drivers/net/ntb_netdev.c
+++ b/drivers/net/ntb_netdev.c
@@ -71,7 +71,25 @@ static unsigned int tx_start = 10;
/* Number of descriptors still available before stop upper layer tx */
static unsigned int tx_stop = 5;
+/*
+ * This is an arbitrary safety cap to avoid unbounded allocations.
+ */
+#define NTB_NETDEV_MAX_QUEUES 64
+
+static int ntb_num_queues_set(const char *val, const struct kernel_param *kp)
+{
+ return param_set_uint_minmax(val, kp, 1, NTB_NETDEV_MAX_QUEUES);
+}
+
+static const struct kernel_param_ops ntb_num_queues_ops = {
+ .set = ntb_num_queues_set,
+ .get = param_get_uint,
+};
+
static unsigned int ntb_num_queues = 1;
+module_param_cb(ntb_num_queues, &ntb_num_queues_ops, &ntb_num_queues, 0444);
+MODULE_PARM_DESC(ntb_num_queues,
+ "Number of NTB netdev queue pairs to use (1 by default)");
struct ntb_netdev;
--
2.51.0
* [PATCH 3/3] net: ntb_netdev: Expose queue pair count via ethtool -l
2026-02-24 15:28 [PATCH 0/3] net: ntb_netdev: Add Multi-queue support Koichiro Den
2026-02-24 15:28 ` [PATCH 1/3] net: ntb_netdev: Introduce per-queue context Koichiro Den
2026-02-24 15:28 ` [PATCH 2/3] net: ntb_netdev: Make queue pair count configurable Koichiro Den
@ 2026-02-24 15:28 ` Koichiro Den
2026-02-24 16:20 ` [PATCH 0/3] net: ntb_netdev: Add Multi-queue support Dave Jiang
2026-02-26 3:50 ` Jakub Kicinski
4 siblings, 0 replies; 9+ messages in thread
From: Koichiro Den @ 2026-02-24 15:28 UTC (permalink / raw)
To: Jon Mason, Dave Jiang, Allen Hubbe, Andrew Lunn, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: ntb, netdev, linux-kernel
Implement the read-only get_channels ethtool callback so the active
number of queue pairs is visible via ethtool -l.
Report dev->num_queues as the combined channel count and maximum. The
driver does not support changing the channel count at runtime, so no
set_channels implementation is provided.
Signed-off-by: Koichiro Den <den@valinux.co.jp>
---
drivers/net/ntb_netdev.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/drivers/net/ntb_netdev.c b/drivers/net/ntb_netdev.c
index 837a2e95f06e..185cc9ddfce7 100644
--- a/drivers/net/ntb_netdev.c
+++ b/drivers/net/ntb_netdev.c
@@ -470,10 +470,20 @@ static int ntb_get_link_ksettings(struct net_device *dev,
return 0;
}
+static void ntb_get_channels(struct net_device *ndev,
+ struct ethtool_channels *channels)
+{
+ struct ntb_netdev *dev = netdev_priv(ndev);
+
+ channels->max_combined = dev->num_queues;
+ channels->combined_count = dev->num_queues;
+}
+
static const struct ethtool_ops ntb_ethtool_ops = {
.get_drvinfo = ntb_get_drvinfo,
.get_link = ethtool_op_get_link,
.get_link_ksettings = ntb_get_link_ksettings,
+ .get_channels = ntb_get_channels,
};
static const struct ntb_queue_handlers ntb_netdev_handlers = {
--
2.51.0
* Re: [PATCH 0/3] net: ntb_netdev: Add Multi-queue support
2026-02-24 15:28 [PATCH 0/3] net: ntb_netdev: Add Multi-queue support Koichiro Den
` (2 preceding siblings ...)
2026-02-24 15:28 ` [PATCH 3/3] net: ntb_netdev: Expose queue pair count via ethtool -l Koichiro Den
@ 2026-02-24 16:20 ` Dave Jiang
2026-02-25 3:36 ` Koichiro Den
2026-02-26 3:50 ` Jakub Kicinski
4 siblings, 1 reply; 9+ messages in thread
From: Dave Jiang @ 2026-02-24 16:20 UTC (permalink / raw)
To: Koichiro Den, Jon Mason, Allen Hubbe, Andrew Lunn,
David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: ntb, netdev, linux-kernel
On 2/24/26 8:28 AM, Koichiro Den wrote:
> Hi,
>
> ntb_netdev currently hard-codes a single NTB transport queue pair, which
> means the datapath effectively runs as a single-queue netdev regardless
> of available CPUs / parallel flows.
>
> The longer-term motivation here is throughput scale-out: allow
> ntb_netdev to grow beyond the single-QP bottleneck and make it possible
> to spread TX/RX work across multiple queue pairs as link speeds and core
> counts keep increasing.
>
> Multi-queue also unlocks the standard networking knobs on top of it. In
> particular, once the device exposes multiple TX queues, qdisc/tc can
> steer flows/traffic classes into different queues (via
> skb->queue_mapping), enabling per-flow/per-class scheduling and QoS in a
> familiar way.
>
> This series is a small plumbing step in that direction:
>
> 1) Introduce a per-queue context object (struct ntb_netdev_queue) and
> move queue-pair state out of struct ntb_netdev. Probe creates queue
> pairs in a loop and configures the netdev queue counts to match the
> number that was successfully created.
>
> 2) Expose ntb_num_queues as a module parameter to request multiple
> queue pairs at probe time. The value is clamped to 1..64 and kept
> read-only for now (no runtime reconfiguration).
>
> 3) Report the active queue-pair count via ethtool -l (get_channels),
> so users can confirm the device configuration from user space.
>
> Compatibility:
> - Default remains ntb_num_queues=1, so behaviour is unchanged unless
> the user explicitly requests more queues.
>
> Kernel base:
> - ntb-next latest:
> commit 7b3302c687ca ("ntb_hw_amd: Fix incorrect debug message in link
> disable path")
>
> Usage (example):
> - modprobe ntb_netdev ntb_num_queues=<N> # Patch 2 takes care of it
> - ethtool -l <ifname> # Patch 3 takes care of it
>
> Patch summary:
> 1/3 net: ntb_netdev: Introduce per-queue context
> 2/3 net: ntb_netdev: Make queue pair count configurable
> 3/3 net: ntb_netdev: Expose queue pair count via ethtool -l
>
> Testing / results:
> Environment / command line:
> - 2x R-Car S4 Spider boards
> "Kernel base" (see above) + this series
> - For TCP load:
> [RC] $ sudo iperf3 -s
> [EP] $ sudo iperf3 -Z -c ${SERVER_IP} -l 65480 -w 512M -P 4
> - For UDP load:
> [RC] $ sudo iperf3 -s
> [EP] $ sudo iperf3 -ub0 -c ${SERVER_IP} -l 65480 -w 512M -P 4
>
> Before (without this series):
> TCP / UDP : 602 Mbps / 598 Mbps
>
> Before (ntb_num_queues=1):
> TCP / UDP : 588 Mbps / 605 Mbps
What accounts for the dip in TCP performance?
>
> After (ntb_num_queues=2):
> TCP / UDP : 602 Mbps / 598 Mbps
>
> Notes:
> In my current test environment, enabling multiple queue pairs does
> not improve throughput. The receive-side memcpy in ntb_transport is
> the dominant cost and limits scaling at present.
>
> Still, this series lays the groundwork for future scaling, for
> example once a transport backend is introduced that avoids memcpy
> to/from PCI memory space on both ends (see the superseded RFC
> series:
> https://lore.kernel.org/all/20251217151609.3162665-1-den@valinux.co.jp/).
>
>
> Best regards,
> Koichiro
>
> Koichiro Den (3):
> net: ntb_netdev: Introduce per-queue context
> net: ntb_netdev: Make queue pair count configurable
> net: ntb_netdev: Expose queue pair count via ethtool -l
>
> drivers/net/ntb_netdev.c | 326 +++++++++++++++++++++++++++------------
> 1 file changed, 228 insertions(+), 98 deletions(-)
>
for the series
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
* Re: [PATCH 0/3] net: ntb_netdev: Add Multi-queue support
2026-02-24 16:20 ` [PATCH 0/3] net: ntb_netdev: Add Multi-queue support Dave Jiang
@ 2026-02-25 3:36 ` Koichiro Den
2026-02-25 15:07 ` Dave Jiang
0 siblings, 1 reply; 9+ messages in thread
From: Koichiro Den @ 2026-02-25 3:36 UTC (permalink / raw)
To: Dave Jiang
Cc: Jon Mason, Allen Hubbe, Andrew Lunn, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni, ntb, netdev,
linux-kernel
On Tue, Feb 24, 2026 at 09:20:35AM -0700, Dave Jiang wrote:
>
>
> On 2/24/26 8:28 AM, Koichiro Den wrote:
> > Hi,
> >
> > ntb_netdev currently hard-codes a single NTB transport queue pair, which
> > means the datapath effectively runs as a single-queue netdev regardless
> > of available CPUs / parallel flows.
> >
> > The longer-term motivation here is throughput scale-out: allow
> > ntb_netdev to grow beyond the single-QP bottleneck and make it possible
> > to spread TX/RX work across multiple queue pairs as link speeds and core
> > counts keep increasing.
> >
> > Multi-queue also unlocks the standard networking knobs on top of it. In
> > particular, once the device exposes multiple TX queues, qdisc/tc can
> > steer flows/traffic classes into different queues (via
> > skb->queue_mapping), enabling per-flow/per-class scheduling and QoS in a
> > familiar way.
> >
> > This series is a small plumbing step in that direction:
> >
> > 1) Introduce a per-queue context object (struct ntb_netdev_queue) and
> > move queue-pair state out of struct ntb_netdev. Probe creates queue
> > pairs in a loop and configures the netdev queue counts to match the
> > number that was successfully created.
> >
> > 2) Expose ntb_num_queues as a module parameter to request multiple
> > queue pairs at probe time. The value is clamped to 1..64 and kept
> > read-only for now (no runtime reconfiguration).
> >
> > 3) Report the active queue-pair count via ethtool -l (get_channels),
> > so users can confirm the device configuration from user space.
> >
> > Compatibility:
> > - Default remains ntb_num_queues=1, so behaviour is unchanged unless
> > the user explicitly requests more queues.
> >
> > Kernel base:
> > - ntb-next latest:
> > commit 7b3302c687ca ("ntb_hw_amd: Fix incorrect debug message in link
> > disable path")
> >
> > Usage (example):
> > - modprobe ntb_netdev ntb_num_queues=<N> # Patch 2 takes care of it
> > - ethtool -l <ifname> # Patch 3 takes care of it
> >
> > Patch summary:
> > 1/3 net: ntb_netdev: Introduce per-queue context
> > 2/3 net: ntb_netdev: Make queue pair count configurable
> > 3/3 net: ntb_netdev: Expose queue pair count via ethtool -l
> >
> > Testing / results:
> > Environment / command line:
> > - 2x R-Car S4 Spider boards
> > "Kernel base" (see above) + this series
> > - For TCP load:
> > [RC] $ sudo iperf3 -s
> > [EP] $ sudo iperf3 -Z -c ${SERVER_IP} -l 65480 -w 512M -P 4
> > - For UDP load:
> > [RC] $ sudo iperf3 -s
> > [EP] $ sudo iperf3 -ub0 -c ${SERVER_IP} -l 65480 -w 512M -P 4
> >
> > Before (without this series):
> > TCP / UDP : 602 Mbps / 598 Mbps
> >
> > Before (ntb_num_queues=1):
> > TCP / UDP : 588 Mbps / 605 Mbps
>
> What accounts for the dip in TCP performance?
I believe this is within normal run-to-run variance. To be sure, I repeated the
TCP tests multiple times. The aggregated results are:
+------+----------+------------------+------------------+
| | Baseline | ntb_num_queues=1 | ntb_num_queues=2 |
+------+----------+------------------+------------------+
| Mean | 599.5 | 595.2 (-0.7%) | 600.4 (+0.2%) |
| Min | 590 | 590 (+0.0%) | 593 (+0.5%) |
| Max | 605 | 604 (-0.2%) | 605 (+0.0%) |
| Med | 602 | 593 | 601.5 |
| SD | 5.84 | 6.01 | 4.12 |
+------+----------+------------------+------------------+
On my setup (2x R-Car S4 Spider), I do not observe any statistically meaningful
improvement or degradation. For completeness, here is the raw data:
.----------------------------- Baseline (without this series)
: .----------------- ntb_num_queues=1
: : .---- ntb_num_queues=2
: : :
#1 601 Mbps 604 Mbps 601 Mbps
#2 604 Mbps 604 Mbps 603 Mbps
#3 592 Mbps 590 Mbps 600 Mbps
#4 593 Mbps 593 Mbps 603 Mbps
#5 605 Mbps 591 Mbps 605 Mbps
#6 590 Mbps 603 Mbps 602 Mbps
#7 605 Mbps 590 Mbps 596 Mbps
#8 598 Mbps 594 Mbps 593 Mbps
#9 603 Mbps 590 Mbps 605 Mbps
#10 604 Mbps 593 Mbps 596 Mbps
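As a quick cross-check of the summary table, the mean and sample standard
deviation can be recomputed from the raw samples, e.g. for the baseline column:

```shell
# Recompute mean and sample SD (n-1 denominator) for the baseline column;
# the other two columns can be checked the same way.
printf '%s\n' 601 604 592 593 605 590 605 598 603 604 | awk '
  { s += $1; ss += $1 * $1; n++ }
  END { m = s / n; sd = sqrt((ss - n * m * m) / (n - 1));
        printf "%.1f %.2f\n", m, sd }'
# prints: 599.5 5.84
```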
To see a tangible performance gain, another patch series I submitted yesterday
is also relevant:
[PATCH 00/10] NTB: epf: Enable per-doorbell bit handling while keeping legacy offset
https://lore.kernel.org/all/20260224133459.1741537-1-den@valinux.co.jp/
With that series applied as well, and with irq smp_affinity properly adjusted,
the results become:
After (ntb_num_queues=2 + the other series also applied):
TCP / UDP : 1.15 Gbps / 1.18 Gbps
In that sense, that series is also important groundwork from a performance
perspective. Since that work touches NTB-tree code, I'd appreciate it if you
could also have a look at that series.
Side note: R-Car S4 Spider has limited BAR resources. Although BAR2 is
resizable, ~2 MiB appears to be the practical ceiling for arbitrary mappings in
this setup, so I haven't tested larger ntb_num_queues=<N> values. On platforms
with more BAR space, sufficient CPUs for memcpy, or sufficent DMA channels for
DMA memcpy available to ntb_transport, further scaling with larger <N> values
should be possible.
Thanks,
Koichiro
>
> >
> > After (ntb_num_queues=2):
> > TCP / UDP : 602 Mbps / 598 Mbps
> >
> > Notes:
> > In my current test environment, enabling multiple queue pairs does
> > not improve throughput. The receive-side memcpy in ntb_transport is
> > the dominant cost and limits scaling at present.
> >
> > Still, this series lays the groundwork for future scaling, for
> > example once a transport backend is introduced that avoids memcpy
> > to/from PCI memory space on both ends (see the superseded RFC
> > series:
> > https://lore.kernel.org/all/20251217151609.3162665-1-den@valinux.co.jp/).
> >
> >
> > Best regards,
> > Koichiro
> >
> > Koichiro Den (3):
> > net: ntb_netdev: Introduce per-queue context
> > net: ntb_netdev: Make queue pair count configurable
> > net: ntb_netdev: Expose queue pair count via ethtool -l
> >
> > drivers/net/ntb_netdev.c | 326 +++++++++++++++++++++++++++------------
> > 1 file changed, 228 insertions(+), 98 deletions(-)
> >
>
> for the series
> Reviewed-by: Dave Jiang <dave.jiang@intel.com>
>
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH 0/3] net: ntb_netdev: Add Multi-queue support
2026-02-25 3:36 ` Koichiro Den
@ 2026-02-25 15:07 ` Dave Jiang
0 siblings, 0 replies; 9+ messages in thread
From: Dave Jiang @ 2026-02-25 15:07 UTC (permalink / raw)
To: Koichiro Den
Cc: Jon Mason, Allen Hubbe, Andrew Lunn, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni, ntb, netdev,
linux-kernel
On 2/24/26 8:36 PM, Koichiro Den wrote:
> On Tue, Feb 24, 2026 at 09:20:35AM -0700, Dave Jiang wrote:
>>
>>
>> On 2/24/26 8:28 AM, Koichiro Den wrote:
>>> Hi,
>>>
>>> ntb_netdev currently hard-codes a single NTB transport queue pair, which
>>> means the datapath effectively runs as a single-queue netdev regardless
>>> of available CPUs / parallel flows.
>>>
>>> The longer-term motivation here is throughput scale-out: allow
>>> ntb_netdev to grow beyond the single-QP bottleneck and make it possible
>>> to spread TX/RX work across multiple queue pairs as link speeds and core
>>> counts keep increasing.
>>>
>>> Multi-queue also unlocks the standard networking knobs on top of it. In
>>> particular, once the device exposes multiple TX queues, qdisc/tc can
>>> steer flows/traffic classes into different queues (via
>>> skb->queue_mapping), enabling per-flow/per-class scheduling and QoS in a
>>> familiar way.
>>>
>>> This series is a small plumbing step towards that direction:
>>>
>>> 1) Introduce a per-queue context object (struct ntb_netdev_queue) and
>>> move queue-pair state out of struct ntb_netdev. Probe creates queue
>>> pairs in a loop and configures the netdev queue counts to match the
>>> number that was successfully created.
>>>
>>> 2) Expose ntb_num_queues as a module parameter to request multiple
>>> queue pairs at probe time. The value is clamped to 1..64 and kept
>>> read-only for now (no runtime reconfiguration).
>>>
>>> 3) Report the active queue-pair count via ethtool -l (get_channels),
>>> so users can confirm the device configuration from user space.
>>>
>>> Compatibility:
>>> - Default remains ntb_num_queues=1, so behaviour is unchanged unless
>>> the user explicitly requests more queues.
>>>
>>> Kernel base:
>>> - ntb-next latest:
>>> commit 7b3302c687ca ("ntb_hw_amd: Fix incorrect debug message in link
>>> disable path")
>>>
>>> Usage (example):
>>> - modprobe ntb_netdev ntb_num_queues=<N> # Patch 2 takes care of it
>>> - ethtool -l <ifname> # Patch 3 takes care of it
>>>
>>> Patch summary:
>>> 1/3 net: ntb_netdev: Introduce per-queue context
>>> 2/3 net: ntb_netdev: Make queue pair count configurable
>>> 3/3 net: ntb_netdev: Expose queue pair count via ethtool -l
>>>
>>> Testing / results:
>>> Environment / command line:
>>> - 2x R-Car S4 Spider boards
>>> "Kernel base" (see above) + this series
>>> - For TCP load:
>>> [RC] $ sudo iperf3 -s
>>> [EP] $ sudo iperf3 -Z -c ${SERVER_IP} -l 65480 -w 512M -P 4
>>> - For UDP load:
>>> [RC] $ sudo iperf3 -s
>>> [EP] $ sudo iperf3 -ub0 -c ${SERVER_IP} -l 65480 -w 512M -P 4
>>>
>>> Before (without this series):
>>> TCP / UDP : 602 Mbps / 598 Mbps
>>>
>>> Before (ntb_num_queues=1):
>>> TCP / UDP : 588 Mbps / 605 Mbps
>>
>> What accounts for the dip in TCP performance?
>
> I believe this is within normal run-to-run variance. To be sure, I repeated the
> TCP tests multiple times. The aggregated results are:
>
> +------+----------+------------------+------------------+
> | | Baseline | ntb_num_queues=1 | ntb_num_queues=2 |
> +------+----------+------------------+------------------+
> | Mean | 599.5 | 595.2 (-0.7%) | 600.4 (+0.2%) |
> | Min | 590 | 590 (+0.0%) | 593 (+0.5%) |
> | Max | 605 | 604 (-0.2%) | 605 (+0.0%) |
> | Med | 602 | 593 | 601.5 |
> | SD | 5.84 | 6.01 | 4.12 |
> +------+----------+------------------+------------------+
>
> On my setup (2x R-Car S4 Spider), I do not observe any statistically meaningful
> improvement or degradation. For completeness, here is the raw data:
>
> .----------------------------- Baseline (without this series)
> : .----------------- ntb_num_queues=1
> : : .---- ntb_num_queues=2
> : : :
> #1 601 Mbps 604 Mbps 601 Mbps
> #2 604 Mbps 604 Mbps 603 Mbps
> #3 592 Mbps 590 Mbps 600 Mbps
> #4 593 Mbps 593 Mbps 603 Mbps
> #5 605 Mbps 591 Mbps 605 Mbps
> #6 590 Mbps 603 Mbps 602 Mbps
> #7 605 Mbps 590 Mbps 596 Mbps
> #8 598 Mbps 594 Mbps 593 Mbps
> #9 603 Mbps 590 Mbps 605 Mbps
> #10 604 Mbps 593 Mbps 596 Mbps
>
> For a tangible performance gain, another patch series I submitted yesterday
> is also relevant:
>
> [PATCH 00/10] NTB: epf: Enable per-doorbell bit handling while keeping legacy offset
> https://lore.kernel.org/all/20260224133459.1741537-1-den@valinux.co.jp/
>
> With that series applied as well, and with irq smp_affinity properly adjusted,
> the results become:
>
> After (ntb_num_queues=2 + the other series also applied):
> TCP / UDP : 1.15 Gbps / 1.18 Gbps
>
> In that sense, that series is also important groundwork from a performance
> perspective. Since that work touches NTB-tree code, I'd appreciate it if you
> could also have a look at that series.
>
> Side note: R-Car S4 Spider has limited BAR resources. Although BAR2 is
> resizable, ~2 MiB appears to be the practical ceiling for arbitrary mappings in
> this setup, so I haven't tested larger ntb_num_queues=<N> values. On platforms
> with more BAR space, sufficient CPUs for memcpy, or sufficient DMA channels for
> DMA memcpy available to ntb_transport, further scaling with larger <N> values
> should be possible.
Thanks for the data. I'll take a look at the other series.
>
> Thanks,
> Koichiro
>
>>
>>>
>>> After (ntb_num_queues=2):
>>> TCP / UDP : 602 Mbps / 598 Mbps
>>>
>>> Notes:
>>> In my current test environment, enabling multiple queue pairs does
>>> not improve throughput. The receive-side memcpy in ntb_transport is
>>> the dominant cost and limits scaling at present.
>>>
>>> Still, this series lays the groundwork for future scaling, for
>>> example once a transport backend is introduced that avoids memcpy
>>> to/from PCI memory space on both ends (see the superseded RFC
>>> series:
>>> https://lore.kernel.org/all/20251217151609.3162665-1-den@valinux.co.jp/).
>>>
>>>
>>> Best regards,
>>> Koichiro
>>>
>>> Koichiro Den (3):
>>> net: ntb_netdev: Introduce per-queue context
>>> net: ntb_netdev: Make queue pair count configurable
>>> net: ntb_netdev: Expose queue pair count via ethtool -l
>>>
>>> drivers/net/ntb_netdev.c | 326 +++++++++++++++++++++++++++------------
>>> 1 file changed, 228 insertions(+), 98 deletions(-)
>>>
>>
>> for the series
>> Reviewed-by: Dave Jiang <dave.jiang@intel.com>
>>
* Re: [PATCH 0/3] net: ntb_netdev: Add Multi-queue support
2026-02-24 15:28 [PATCH 0/3] net: ntb_netdev: Add Multi-queue support Koichiro Den
` (3 preceding siblings ...)
2026-02-24 16:20 ` [PATCH 0/3] net: ntb_netdev: Add Multi-queue support Dave Jiang
@ 2026-02-26 3:50 ` Jakub Kicinski
2026-02-26 8:01 ` Koichiro Den
4 siblings, 1 reply; 9+ messages in thread
From: Jakub Kicinski @ 2026-02-26 3:50 UTC (permalink / raw)
To: Koichiro Den
Cc: Jon Mason, Dave Jiang, Allen Hubbe, Andrew Lunn, David S. Miller,
Eric Dumazet, Paolo Abeni, ntb, netdev, linux-kernel
On Wed, 25 Feb 2026 00:28:06 +0900 Koichiro Den wrote:
> Usage (example):
> - modprobe ntb_netdev ntb_num_queues=<N> # Patch 2 takes care of it
> - ethtool -l <ifname> # Patch 3 takes care of it
Module parameters are not a very user friendly choice for uAPI.
You use ethtool -l for GET, what's the challenge with implementing SET
via ethtool -L?
* Re: [PATCH 0/3] net: ntb_netdev: Add Multi-queue support
2026-02-26 3:50 ` Jakub Kicinski
@ 2026-02-26 8:01 ` Koichiro Den
0 siblings, 0 replies; 9+ messages in thread
From: Koichiro Den @ 2026-02-26 8:01 UTC (permalink / raw)
To: Jakub Kicinski
Cc: Jon Mason, Dave Jiang, Allen Hubbe, Andrew Lunn, David S. Miller,
Eric Dumazet, Paolo Abeni, ntb, netdev, linux-kernel
On Wed, Feb 25, 2026 at 07:50:04PM -0800, Jakub Kicinski wrote:
> On Wed, 25 Feb 2026 00:28:06 +0900 Koichiro Den wrote:
> > Usage (example):
> > - modprobe ntb_netdev ntb_num_queues=<N> # Patch 2 takes care of it
> > - ethtool -l <ifname> # Patch 3 takes care of it
>
> Module parameters are not a very user friendly choice for uAPI.
> You use ethtool -l for GET, what's the challenge with implementing SET
> via ethtool -L?
Thanks for the comment, Jakub.
There is no technical limitation. I didn't include SET support simply to keep
the initial series as minimal as possible.
However, you're right, adding such a module parameter would also make it part of
the uAPI and unnecessarily hard to remove later. It's better to implement SET
from the beginning.
Dave, let me respin the series with SET support and drop the module parameter.
Please let me know if you have any objections.
Best regards,
Koichiro
end of thread, other threads:[~2026-02-26 8:01 UTC | newest]
Thread overview: 9+ messages
-- links below jump to the message on this page --
2026-02-24 15:28 [PATCH 0/3] net: ntb_netdev: Add Multi-queue support Koichiro Den
2026-02-24 15:28 ` [PATCH 1/3] net: ntb_netdev: Introduce per-queue context Koichiro Den
2026-02-24 15:28 ` [PATCH 2/3] net: ntb_netdev: Make queue pair count configurable Koichiro Den
2026-02-24 15:28 ` [PATCH 3/3] net: ntb_netdev: Expose queue pair count via ethtool -l Koichiro Den
2026-02-24 16:20 ` [PATCH 0/3] net: ntb_netdev: Add Multi-queue support Dave Jiang
2026-02-25 3:36 ` Koichiro Den
2026-02-25 15:07 ` Dave Jiang
2026-02-26 3:50 ` Jakub Kicinski
2026-02-26 8:01 ` Koichiro Den