* [PATCH net-next 1/4] ibmvnic: Derive NUM_RX_STATS/NUM_TX_STATS dynamically
2025-06-30 23:48 [PATCH net-next 0/4] ibmvnic: Improve queue stats and subcrq indirect handling Mingming Cao
@ 2025-06-30 23:48 ` Mingming Cao
2025-07-02 1:30 ` Jakub Kicinski
2025-06-30 23:48 ` [PATCH net-next 2/4] ibmvnic: Use atomic64_t for queue stats Mingming Cao
` (2 subsequent siblings)
3 siblings, 1 reply; 8+ messages in thread
From: Mingming Cao @ 2025-06-30 23:48 UTC (permalink / raw)
To: netdev; +Cc: bjking1, haren, ricklind, mmc, Dave Marquardt
Replace the hardcoded #define NUM_RX_STATS/NUM_TX_STATS 3
with a sizeof-based calculation so the macros automatically
reflect the number of fields in struct ibmvnic_rx_queue_stats
and struct ibmvnic_tx_queue_stats.
This avoids mismatches and improves maintainability.
Fixes: 2ee73c54a615 ("ibmvnic: Add stat for tx direct vs tx batched")
Signed-off-by: Mingming Cao <mmc@linux.ibm.com>
Reviewed-by: Dave Marquardt <davemarq@linux.ibm.com>
---
drivers/net/ethernet/ibm/ibmvnic.h | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/ibm/ibmvnic.h b/drivers/net/ethernet/ibm/ibmvnic.h
index a189038d88..246ddce753 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.h
+++ b/drivers/net/ethernet/ibm/ibmvnic.h
@@ -211,7 +211,6 @@ struct ibmvnic_statistics {
u8 reserved[72];
} __packed __aligned(8);
-#define NUM_TX_STATS 3
struct ibmvnic_tx_queue_stats {
u64 batched_packets;
u64 direct_packets;
@@ -219,13 +218,18 @@ struct ibmvnic_tx_queue_stats {
u64 dropped_packets;
};
-#define NUM_RX_STATS 3
+#define NUM_TX_STATS \
+ (sizeof(struct ibmvnic_tx_queue_stats) / sizeof(u64))
+
struct ibmvnic_rx_queue_stats {
u64 packets;
u64 bytes;
u64 interrupts;
};
+#define NUM_RX_STATS \
+ (sizeof(struct ibmvnic_rx_queue_stats) / sizeof(u64))
+
struct ibmvnic_acl_buffer {
__be32 len;
__be32 version;
--
2.39.3 (Apple Git-146)
^ permalink raw reply related [flat|nested] 8+ messages in thread

* [PATCH net-next 2/4] ibmvnic: Use atomic64_t for queue stats
2025-06-30 23:48 [PATCH net-next 0/4] ibmvnic: Improve queue stats and subcrq indirect handling Mingming Cao
2025-06-30 23:48 ` [PATCH net-next 1/4] ibmvnic: Derive NUM_RX_STATS/NUM_TX_STATS dynamically Mingming Cao
@ 2025-06-30 23:48 ` Mingming Cao
2025-07-03 2:58 ` kernel test robot
2025-06-30 23:48 ` [PATCH net-next 3/4] ibmvnic: Use ndo_get_stats64 to fix inaccurate SAR reporting Mingming Cao
2025-06-30 23:48 ` [PATCH net-next 4/4] ibmvnic: Make max subcrq indirect entries tunable via module param Mingming Cao
3 siblings, 1 reply; 8+ messages in thread
From: Mingming Cao @ 2025-06-30 23:48 UTC (permalink / raw)
To: netdev; +Cc: bjking1, haren, ricklind, mmc, Dave Marquardt
Convert the ibmvnic per-queue packet and byte counters to
atomic64_t types. This makes updates thread-safe and easier
to manage across multiple cores.
Also update the ethtool statistics path to read the new
counters safely with atomic64_read().
Signed-off-by: Mingming Cao <mmc@linux.ibm.com>
Reviewed-by: Brian King <bjking1@linux.ibm.com>
Reviewed-by: Dave Marquardt <davemarq@linux.ibm.com>
Reviewed-by: Rick Lindsley <ricklind@linux.ibm.com>
---
drivers/net/ethernet/ibm/ibmvnic.c | 34 +++++++++++++++---------------
drivers/net/ethernet/ibm/ibmvnic.h | 14 ++++++------
2 files changed, 24 insertions(+), 24 deletions(-)
diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index 92647e137c..7b2be8eeb5 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -69,6 +69,7 @@
#include <linux/if_vlan.h>
#include <linux/utsname.h>
#include <linux/cpu.h>
+#include <linux/atomic.h>
#include "ibmvnic.h"
@@ -2314,9 +2315,8 @@ static void ibmvnic_tx_scrq_clean_buffer(struct ibmvnic_adapter *adapter,
tx_buff = &tx_pool->tx_buff[index];
adapter->netdev->stats.tx_packets--;
adapter->netdev->stats.tx_bytes -= tx_buff->skb->len;
- adapter->tx_stats_buffers[queue_num].batched_packets--;
- adapter->tx_stats_buffers[queue_num].bytes -=
- tx_buff->skb->len;
+ atomic64_dec(&adapter->tx_stats_buffers[queue_num].batched_packets);
+ atomic64_sub(tx_buff->skb->len, &adapter->tx_stats_buffers[queue_num].bytes);
dev_kfree_skb_any(tx_buff->skb);
tx_buff->skb = NULL;
adapter->netdev->stats.tx_dropped++;
@@ -2652,10 +2652,10 @@ out:
netdev->stats.tx_packets += tx_bpackets + tx_dpackets;
adapter->tx_send_failed += tx_send_failed;
adapter->tx_map_failed += tx_map_failed;
- adapter->tx_stats_buffers[queue_num].batched_packets += tx_bpackets;
- adapter->tx_stats_buffers[queue_num].direct_packets += tx_dpackets;
- adapter->tx_stats_buffers[queue_num].bytes += tx_bytes;
- adapter->tx_stats_buffers[queue_num].dropped_packets += tx_dropped;
+ atomic64_add(tx_bpackets, &adapter->tx_stats_buffers[queue_num].batched_packets);
+ atomic64_add(tx_dpackets, &adapter->tx_stats_buffers[queue_num].direct_packets);
+ atomic64_add(tx_bytes, &adapter->tx_stats_buffers[queue_num].bytes);
+ atomic64_add(tx_dropped, &adapter->tx_stats_buffers[queue_num].dropped_packets);
return ret;
}
@@ -3569,8 +3569,8 @@ restart_poll:
napi_gro_receive(napi, skb); /* send it up */
netdev->stats.rx_packets++;
netdev->stats.rx_bytes += length;
- adapter->rx_stats_buffers[scrq_num].packets++;
- adapter->rx_stats_buffers[scrq_num].bytes += length;
+ atomic64_inc(&adapter->rx_stats_buffers[scrq_num].packets);
+ atomic64_add(length, &adapter->rx_stats_buffers[scrq_num].bytes);
frames_processed++;
}
@@ -3874,22 +3874,22 @@ static void ibmvnic_get_ethtool_stats(struct net_device *dev,
(adapter, ibmvnic_stats[i].offset));
for (j = 0; j < adapter->req_tx_queues; j++) {
- data[i] = adapter->tx_stats_buffers[j].batched_packets;
+ data[i] = atomic64_read(&adapter->tx_stats_buffers[j].batched_packets);
i++;
- data[i] = adapter->tx_stats_buffers[j].direct_packets;
+ data[i] = atomic64_read(&adapter->tx_stats_buffers[j].direct_packets);
i++;
- data[i] = adapter->tx_stats_buffers[j].bytes;
+ data[i] = atomic64_read(&adapter->tx_stats_buffers[j].bytes);
i++;
- data[i] = adapter->tx_stats_buffers[j].dropped_packets;
+ data[i] = atomic64_read(&adapter->tx_stats_buffers[j].dropped_packets);
i++;
}
for (j = 0; j < adapter->req_rx_queues; j++) {
- data[i] = adapter->rx_stats_buffers[j].packets;
+ data[i] = atomic64_read(&adapter->rx_stats_buffers[j].packets);
i++;
- data[i] = adapter->rx_stats_buffers[j].bytes;
+ data[i] = atomic64_read(&adapter->rx_stats_buffers[j].bytes);
i++;
- data[i] = adapter->rx_stats_buffers[j].interrupts;
+ data[i] = atomic64_read(&adapter->rx_stats_buffers[j].interrupts);
i++;
}
}
@@ -4307,7 +4307,7 @@ static irqreturn_t ibmvnic_interrupt_rx(int irq, void *instance)
if (unlikely(adapter->state != VNIC_OPEN))
return IRQ_NONE;
- adapter->rx_stats_buffers[scrq->scrq_num].interrupts++;
+ atomic64_inc(&adapter->rx_stats_buffers[scrq->scrq_num].interrupts);
if (napi_schedule_prep(&adapter->napi[scrq->scrq_num])) {
disable_scrq_irq(adapter, scrq);
diff --git a/drivers/net/ethernet/ibm/ibmvnic.h b/drivers/net/ethernet/ibm/ibmvnic.h
index 246ddce753..1cc6e2d13a 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.h
+++ b/drivers/net/ethernet/ibm/ibmvnic.h
@@ -212,19 +212,19 @@ struct ibmvnic_statistics {
} __packed __aligned(8);
struct ibmvnic_tx_queue_stats {
- u64 batched_packets;
- u64 direct_packets;
- u64 bytes;
- u64 dropped_packets;
+ atomic64_t batched_packets;
+ atomic64_t direct_packets;
+ atomic64_t bytes;
+ atomic64_t dropped_packets;
};
#define NUM_TX_STATS \
(sizeof(struct ibmvnic_tx_queue_stats) / sizeof(u64))
struct ibmvnic_rx_queue_stats {
- u64 packets;
- u64 bytes;
- u64 interrupts;
+ atomic64_t packets;
+ atomic64_t bytes;
+ atomic64_t interrupts;
};
#define NUM_RX_STATS \
--
2.39.3 (Apple Git-146)
^ permalink raw reply related [flat|nested] 8+ messages in thread

* [PATCH net-next 3/4] ibmvnic: Use ndo_get_stats64 to fix inaccurate SAR reporting
2025-06-30 23:48 [PATCH net-next 0/4] ibmvnic: Improve queue stats and subcrq indirect handling Mingming Cao
2025-06-30 23:48 ` [PATCH net-next 1/4] ibmvnic: Derive NUM_RX_STATS/NUM_TX_STATS dynamically Mingming Cao
2025-06-30 23:48 ` [PATCH net-next 2/4] ibmvnic: Use atomic64_t for queue stats Mingming Cao
@ 2025-06-30 23:48 ` Mingming Cao
2025-06-30 23:48 ` [PATCH net-next 4/4] ibmvnic: Make max subcrq indirect entries tunable via module param Mingming Cao
3 siblings, 0 replies; 8+ messages in thread
From: Mingming Cao @ 2025-06-30 23:48 UTC (permalink / raw)
To: netdev; +Cc: bjking1, haren, ricklind, mmc
VNIC testing on multi-core Power systems showed SAR stats drift
and packet rate inconsistencies under load.
Implement ndo_get_stats64 to safely aggregate the queue-level
atomic64 counters into rtnl_link_stats64 for use by tools such as
'ip -s', 'ifconfig', and 'sar'. Switching to ndo_get_stats64 aligns
SAR reporting with the standard kernel interface for retrieving
netdev stats.
This removes redundant per-adapter stat updates, reduces overhead,
eliminates cacheline bouncing from hot-path updates, and improves
the accuracy of reported packet rates.
Signed-off-by: Mingming Cao <mmc@linux.ibm.com>
Reviewed-by: Brian King <bjking1@linux.ibm.com>
Reviewed-by: Dave Marquardt <davemarq@linux.ibm.com>
---
drivers/net/ethernet/ibm/ibmvnic.c | 29 +++++++++++++++++++++--------
1 file changed, 21 insertions(+), 8 deletions(-)
diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index 7b2be8eeb5..8c959d5db2 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -2313,13 +2313,11 @@ static void ibmvnic_tx_scrq_clean_buffer(struct ibmvnic_adapter *adapter,
tx_pool->num_buffers - 1 :
tx_pool->consumer_index - 1;
tx_buff = &tx_pool->tx_buff[index];
- adapter->netdev->stats.tx_packets--;
- adapter->netdev->stats.tx_bytes -= tx_buff->skb->len;
atomic64_dec(&adapter->tx_stats_buffers[queue_num].batched_packets);
atomic64_sub(tx_buff->skb->len, &adapter->tx_stats_buffers[queue_num].bytes);
+ atomic64_inc(&adapter->tx_stats_buffers[queue_num].dropped_packets);
dev_kfree_skb_any(tx_buff->skb);
tx_buff->skb = NULL;
- adapter->netdev->stats.tx_dropped++;
}
ind_bufp->index = 0;
@@ -2647,9 +2645,6 @@ tx_err:
}
out:
rcu_read_unlock();
- netdev->stats.tx_dropped += tx_dropped;
- netdev->stats.tx_bytes += tx_bytes;
- netdev->stats.tx_packets += tx_bpackets + tx_dpackets;
adapter->tx_send_failed += tx_send_failed;
adapter->tx_map_failed += tx_map_failed;
atomic64_add(tx_bpackets, &adapter->tx_stats_buffers[queue_num].batched_packets);
@@ -3452,6 +3447,25 @@ err:
return -ret;
}
+static void ibmvnic_get_stats64(struct net_device *netdev,
+ struct rtnl_link_stats64 *stats)
+{
+ struct ibmvnic_adapter *adapter = netdev_priv(netdev);
+ int i;
+
+ for (i = 0; i < adapter->req_rx_queues; i++) {
+ stats->rx_packets += atomic64_read(&adapter->rx_stats_buffers[i].packets);
+ stats->rx_bytes += atomic64_read(&adapter->rx_stats_buffers[i].bytes);
+ }
+
+ for (i = 0; i < adapter->req_tx_queues; i++) {
+ stats->tx_packets += atomic64_read(&adapter->tx_stats_buffers[i].batched_packets);
+ stats->tx_packets += atomic64_read(&adapter->tx_stats_buffers[i].direct_packets);
+ stats->tx_bytes += atomic64_read(&adapter->tx_stats_buffers[i].bytes);
+ stats->tx_dropped += atomic64_read(&adapter->tx_stats_buffers[i].dropped_packets);
+ }
+}
+
static void ibmvnic_tx_timeout(struct net_device *dev, unsigned int txqueue)
{
struct ibmvnic_adapter *adapter = netdev_priv(dev);
@@ -3567,8 +3581,6 @@ restart_poll:
length = skb->len;
napi_gro_receive(napi, skb); /* send it up */
- netdev->stats.rx_packets++;
- netdev->stats.rx_bytes += length;
atomic64_inc(&adapter->rx_stats_buffers[scrq_num].packets);
atomic64_add(length, &adapter->rx_stats_buffers[scrq_num].bytes);
frames_processed++;
@@ -3678,6 +3690,7 @@ static const struct net_device_ops ibmvnic_netdev_ops = {
.ndo_set_rx_mode = ibmvnic_set_multi,
.ndo_set_mac_address = ibmvnic_set_mac,
.ndo_validate_addr = eth_validate_addr,
+ .ndo_get_stats64 = ibmvnic_get_stats64,
.ndo_tx_timeout = ibmvnic_tx_timeout,
.ndo_change_mtu = ibmvnic_change_mtu,
.ndo_features_check = ibmvnic_features_check,
--
2.39.3 (Apple Git-146)
^ permalink raw reply related [flat|nested] 8+ messages in thread

* [PATCH net-next 4/4] ibmvnic: Make max subcrq indirect entries tunable via module param
2025-06-30 23:48 [PATCH net-next 0/4] ibmvnic: Improve queue stats and subcrq indirect handling Mingming Cao
` (2 preceding siblings ...)
2025-06-30 23:48 ` [PATCH net-next 3/4] ibmvnic: Use ndo_get_stats64 to fix inaccurate SAR reporting Mingming Cao
@ 2025-06-30 23:48 ` Mingming Cao
2025-07-02 1:31 ` Jakub Kicinski
3 siblings, 1 reply; 8+ messages in thread
From: Mingming Cao @ 2025-06-30 23:48 UTC (permalink / raw)
To: netdev; +Cc: bjking1, haren, ricklind, mmc
Increase the default maximum number of subcrq indirect entries,
and introduce a module parameter to tune the maximum number of
indirect subcrq descriptors. The default is now 128, as supported
on P9, allowing better throughput on large system workloads while
retaining the flexibility to fall back to a smaller maximum limit
on P8 or on systems with limited memory resources.
Signed-off-by: Mingming Cao <mmc@linux.ibm.com>
Reviewed-by: Rick Lindsley <ricklind@linux.ibm.com>
Reviewed-by: Dave Marquardt <davemarq@linux.ibm.com>
Reviewed-by: Brian King <bjking1@linux.ibm.com>
---
drivers/net/ethernet/ibm/ibmvnic.c | 29 ++++++++++++++++++++++++-----
drivers/net/ethernet/ibm/ibmvnic.h | 7 +++++--
2 files changed, 29 insertions(+), 7 deletions(-)
diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index 8c959d5db2..a9c313d6c7 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -154,6 +154,11 @@ static const struct ibmvnic_stat ibmvnic_stats[] = {
{"internal_mac_rx_errors", IBMVNIC_STAT_OFF(internal_mac_rx_errors)},
};
+/* Module parameter for max_ind_descs */
+static unsigned int max_ind_descs = IBMVNIC_MAX_IND_DESCS_DEFAULT;
+module_param(max_ind_descs, uint, 0444);
+MODULE_PARM_DESC(max_ind_descs, "Max indirect subcrq descriptors (16 to 128, default 128)");
+
static int send_crq_init_complete(struct ibmvnic_adapter *adapter)
{
union ibmvnic_crq crq;
@@ -844,7 +849,7 @@ static void replenish_rx_pool(struct ibmvnic_adapter *adapter,
sub_crq->rx_add.len = cpu_to_be32(pool->buff_size << shift);
/* if send_subcrq_indirect queue is full, flush to VIOS */
- if (ind_bufp->index == IBMVNIC_MAX_IND_DESCS ||
+ if (ind_bufp->index == max_ind_descs ||
i == count - 1) {
lpar_rc =
send_subcrq_indirect(adapter, handle,
@@ -2590,7 +2595,7 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
tx_crq.v1.n_crq_elem = num_entries;
tx_buff->num_entries = num_entries;
/* flush buffer if current entry can not fit */
- if (num_entries + ind_bufp->index > IBMVNIC_MAX_IND_DESCS) {
+ if (num_entries + ind_bufp->index > max_ind_descs) {
lpar_rc = ibmvnic_tx_scrq_flush(adapter, tx_scrq, true);
if (lpar_rc != H_SUCCESS)
goto tx_flush_err;
@@ -2603,7 +2608,7 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
ind_bufp->index += num_entries;
if (__netdev_tx_sent_queue(txq, skb->len,
netdev_xmit_more() &&
- ind_bufp->index < IBMVNIC_MAX_IND_DESCS)) {
+ ind_bufp->index < max_ind_descs)) {
lpar_rc = ibmvnic_tx_scrq_flush(adapter, tx_scrq, true);
if (lpar_rc != H_SUCCESS)
goto tx_err;
@@ -4006,7 +4011,7 @@ static void release_sub_crq_queue(struct ibmvnic_adapter *adapter,
}
dma_free_coherent(dev,
- IBMVNIC_IND_ARR_SZ,
+ max_ind_descs * IBMVNIC_IND_DESC_SZ,
scrq->ind_buf.indir_arr,
scrq->ind_buf.indir_dma);
@@ -4063,7 +4068,7 @@ static struct ibmvnic_sub_crq_queue *init_sub_crq_queue(struct ibmvnic_adapter
scrq->ind_buf.indir_arr =
dma_alloc_coherent(dev,
- IBMVNIC_IND_ARR_SZ,
+ max_ind_descs * IBMVNIC_IND_DESC_SZ,
&scrq->ind_buf.indir_dma,
GFP_KERNEL);
@@ -6725,6 +6730,20 @@ static int __init ibmvnic_module_init(void)
{
int ret;
+ if (max_ind_descs < IBMVNIC_MAX_IND_DESC_MIN ||
+ max_ind_descs > IBMVNIC_MAX_IND_DESC_MAX) {
+ pr_info("ibmvnic: max_ind_descs=%u, must be between %d and %d. default %u\n",
+ max_ind_descs,
+ IBMVNIC_MAX_IND_DESC_MIN,
+ IBMVNIC_MAX_IND_DESC_MAX,
+ IBMVNIC_MAX_IND_DESCS_DEFAULT);
+
+ pr_info("ibmvnic: resetting max_ind_descs to default\n");
+ max_ind_descs = IBMVNIC_MAX_IND_DESCS_DEFAULT;
+ }
+
+ pr_info("ibmvnic: max_ind_descs set to %u\n", max_ind_descs);
+
ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, "net/ibmvnic:online",
ibmvnic_cpu_online,
ibmvnic_cpu_down_prep);
diff --git a/drivers/net/ethernet/ibm/ibmvnic.h b/drivers/net/ethernet/ibm/ibmvnic.h
index 1cc6e2d13a..56f157cd8a 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.h
+++ b/drivers/net/ethernet/ibm/ibmvnic.h
@@ -29,8 +29,10 @@
#define IBMVNIC_BUFFS_PER_POOL 100
#define IBMVNIC_MAX_QUEUES 16
#define IBMVNIC_MAX_QUEUE_SZ 4096
-#define IBMVNIC_MAX_IND_DESCS 16
-#define IBMVNIC_IND_ARR_SZ (IBMVNIC_MAX_IND_DESCS * 32)
+#define IBMVNIC_IND_DESC_SZ 32
+#define IBMVNIC_MAX_IND_DESCS_DEFAULT 128
+#define IBMVNIC_MAX_IND_DESC_MAX 128
+#define IBMVNIC_MAX_IND_DESC_MIN 16
#define IBMVNIC_TSO_BUF_SZ 65536
#define IBMVNIC_TSO_BUFS 64
@@ -945,6 +947,7 @@ struct ibmvnic_adapter {
int replenish_task_cycles;
int tx_send_failed;
int tx_map_failed;
+ u32 max_ind_descs;
struct ibmvnic_tx_queue_stats *tx_stats_buffers;
struct ibmvnic_rx_queue_stats *rx_stats_buffers;
--
2.39.3 (Apple Git-146)
^ permalink raw reply related [flat|nested] 8+ messages in thread

* Re: [PATCH net-next 4/4] ibmvnic: Make max subcrq indirect entries tunable via module param
2025-06-30 23:48 ` [PATCH net-next 4/4] ibmvnic: Make max subcrq indirect entries tunable via module param Mingming Cao
@ 2025-07-02 1:31 ` Jakub Kicinski
0 siblings, 0 replies; 8+ messages in thread
From: Jakub Kicinski @ 2025-07-02 1:31 UTC (permalink / raw)
To: Mingming Cao; +Cc: netdev, bjking1, haren, ricklind
On Mon, 30 Jun 2025 16:48:06 -0700 Mingming Cao wrote:
> This patch increased the default of max subcrq indirect entries ,
> and introduces a way to tune the maximum number of indirect
> subcrq descriptors via a module parameter. The default now is set to 128,
> as supported on P9, allowing for better throughput performance on
> large system workloads while maintaining flexibility to fall back
> to a smaller maximum limit on P8 or systems with limited memory resources
Module parameters are strongly discouraged. Please provide more details
about what this parameter does; I suppose it should be mapped to one of
the ethtool -g options.
--
pw-bot: cr
^ permalink raw reply [flat|nested] 8+ messages in thread