* [PATCH net-next v6 00/10] bng_en: add link management and statistics support
@ 2026-03-10 14:40 Bhargava Marreddy
From: Bhargava Marreddy @ 2026-03-10 14:40 UTC (permalink / raw)
To: davem, edumazet, kuba, pabeni, andrew+netdev, horms
Cc: netdev, linux-kernel, michael.chan, pavan.chebbi,
vsrama-krishna.nemani, vikas.gupta, Bhargava Marreddy
Hi,
This series enhances the bng_en driver by adding:
1. Link/PHY support
a. Link query
b. Async Link events
c. Ethtool link set/get functionality
2. Hardware statistics reporting via ethtool -S
This version incorporates feedback received prior to splitting the
original series into two parts.
Addressed the following comments from Andrew Lunn:
- Applied reverse-xmas-tree (Rev-xmas) declaration-ordering fixes in several places.
- Corrected the ethtool speed comment to reflect the actual behavior.
https://lore.kernel.org/netdev/49930724-74b8-41fe-8f5c-482afc976b82@lunn.ch/
https://lore.kernel.org/netdev/CANXQDtb5XuLKOOorCMYDUpVz6aFuQgvmQZ4pS6RJGkAgeM8n1A@mail.gmail.com/
Addressed the following comments from Jakub Kicinski:
- Added the missing mutex_init() call.
https://lore.kernel.org/netdev/20251127190855.69779791@kernel.org/#t
https://lore.kernel.org/netdev/CANXQDtYy2JziaaVi=Cqt+gAJNt4NjFFs8NbhrS=RBV61ORwinA@mail.gmail.com/
Addressed the following comments from ALOK TIWARI:
- Added a newline for clarity/formatting.
https://lore.kernel.org/netdev/CANXQDtZbLhdwEtN+kKg1OVLV+uy-gsNfX+rU4MR77QPuO6_y9A@mail.gmail.com/
v6:
- Resending the full series due to an SMTP failure during v5 submission.
- No functional changes from v5.
v5:
Addressed Jakub Kicinski's comments by holding the instance lock across
the entire service task (instead of flag-based locking) and checking
the OPEN bit under the lock to resolve the close-path deadlock.
v4:
Addressed comments from Jakub Kicinski:
- Explicitly size speed mask arrays to remove the need for manual
sentinels in the hardware configuration.
- Fix an off-by-one error in the queue index bound check.
- Standardize ethtool -S per-ring stat naming to "rxq$id_$name"
(e.g., rxq0_errors). Removed spaces and brackets to ensure
compatibility with parsing scripts.
- Exclude ethtool -S global counters (tx_err, tx_fifo_underruns,
tx_total_collisions) already covered by rtnl_link_stats64.
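As an illustration (not part of the series), the "rxq$id_$name" convention
keeps every stat name free of spaces and brackets, so per-ring counters from
a hypothetical `ethtool -S` dump can be aggregated with plain text tools:

```shell
# Hypothetical 'ethtool -S' output using the rxq$id_$name convention
stats='rxq0_errors: 1
rxq1_errors: 2
tx_packets: 100'
# Names contain no spaces or brackets, so a simple awk field split suffices
echo "$stats" | awk -F'[:_ ]+' '/^rxq[0-9]+_errors/ { sum += $NF } END { print sum }'
```

With the old spaced/bracketed names, such one-liners would need extra quoting
and field handling in every parsing script.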
v3:
Addressed comments from Andrew Lunn:
- Fix a comment and refactor bnge_hwrm_set_pause()
to improve code readability.
Addressed comments from Jakub Kicinski:
- Switch to RCU for stats synchronization and drop the
BNGE_STATE_READ_STATS state bit.
Fixed a checkpatch warning by switching to kzalloc_node().
v2:
Addressed comments from Andrew Lunn:
- Add implementation for .get_pauseparam and .set_pauseparam to support
a full flow control configuration, covering global autoneg, independent
pause autoneg, and forced RX/TX pause functionality.
- Move MAC, PHY, Control, Pause, and RMON statistics from
.get_ethtool_stats to their respective structured ethtool
callbacks (e.g., .get_eth_mac_stats, .get_rmon_stats).
Addressed comments from Jakub Kicinski:
- Fix potential out-of-bounds read in speed mask array by adding
the missing sentinel.
- Simplify timer synchronization by removing the redundant intr_sem.
- Replace link_lock with the netdev instance lock, as it is sufficient
to handle the required synchronization.
v1: https://lore.kernel.org/all/20260223163641.755200-1-bhargava.marreddy@broadcom.com/
Bhargava Marreddy (10):
bng_en: add per-PF workqueue, timer, and slow-path task
bng_en: query PHY capabilities and report link status
bng_en: add ethtool link settings, get_link, and nway_reset
bng_en: implement ethtool pauseparam operations
bng_en: add support for link async events
bng_en: add HW stats infra and structured ethtool ops
bng_en: periodically fetch and accumulate hardware statistics
bng_en: implement ndo_get_stats64
bng_en: implement netdev_stat_ops
bng_en: add support for ethtool -S stats display
drivers/net/ethernet/broadcom/bnge/Makefile | 3 +-
drivers/net/ethernet/broadcom/bnge/bnge.h | 12 +
.../net/ethernet/broadcom/bnge/bnge_core.c | 1 +
.../net/ethernet/broadcom/bnge/bnge_ethtool.c | 808 ++++++++++
.../ethernet/broadcom/bnge/bnge_hwrm_lib.c | 388 ++++-
.../ethernet/broadcom/bnge/bnge_hwrm_lib.h | 8 +
.../net/ethernet/broadcom/bnge/bnge_link.c | 1300 +++++++++++++++++
.../net/ethernet/broadcom/bnge/bnge_link.h | 204 +++
.../net/ethernet/broadcom/bnge/bnge_netdev.c | 632 +++++++-
.../net/ethernet/broadcom/bnge/bnge_netdev.h | 119 +-
.../net/ethernet/broadcom/bnge/bnge_txrx.c | 35 +-
11 files changed, 3491 insertions(+), 19 deletions(-)
create mode 100644 drivers/net/ethernet/broadcom/bnge/bnge_link.c
create mode 100644 drivers/net/ethernet/broadcom/bnge/bnge_link.h
--
2.47.3
* [PATCH net-next v6 01/10] bng_en: add per-PF workqueue, timer, and slow-path task
From: Bhargava Marreddy @ 2026-03-10 14:40 UTC (permalink / raw)
To: davem, edumazet, kuba, pabeni, andrew+netdev, horms
Cc: netdev, linux-kernel, michael.chan, pavan.chebbi,
vsrama-krishna.nemani, vikas.gupta, Bhargava Marreddy,
Ajit Kumar Khaparde
Add a dedicated single-threaded workqueue and a timer for each PF
to drive deferred slow-path work such as link event handling and
stats collection. The timer is restarted on open and stopped via
timer_delete_sync() when interrupts are disabled; the open and
close paths now start and drain these resources.
Signed-off-by: Bhargava Marreddy <bhargava.marreddy@broadcom.com>
Reviewed-by: Vikas Gupta <vikas.gupta@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
.../net/ethernet/broadcom/bnge/bnge_netdev.c | 74 ++++++++++++++++++-
.../net/ethernet/broadcom/bnge/bnge_netdev.h | 12 ++-
2 files changed, 84 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
index a20dc3ca640c..edbb42efb70b 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
@@ -101,6 +101,36 @@ static int bnge_alloc_ring_stats(struct bnge_net *bn)
return rc;
}
+static void bnge_timer(struct timer_list *t)
+{
+ struct bnge_net *bn = timer_container_of(bn, t, timer);
+ struct bnge_dev *bd = bn->bd;
+
+ if (!netif_running(bn->netdev) ||
+ !test_bit(BNGE_STATE_OPEN, &bd->state))
+ return;
+
+ /* Periodic work added by later patches */
+
+ mod_timer(&bn->timer, jiffies + bn->current_interval);
+}
+
+static void bnge_sp_task(struct work_struct *work)
+{
+ struct bnge_net *bn = container_of(work, struct bnge_net, sp_task);
+ struct bnge_dev *bd = bn->bd;
+
+ netdev_lock(bn->netdev);
+ if (!test_bit(BNGE_STATE_OPEN, &bd->state)) {
+ netdev_unlock(bn->netdev);
+ return;
+ }
+
+ /* Event handling work added by later patches */
+
+ netdev_unlock(bn->netdev);
+}
+
static void bnge_free_nq_desc_arr(struct bnge_nq_ring_info *nqr)
{
struct bnge_ring_struct *ring = &nqr->ring_struct;
@@ -2507,6 +2537,9 @@ static int bnge_open_core(struct bnge_net *bn)
bnge_enable_int(bn);
bnge_tx_enable(bn);
+
+ mod_timer(&bn->timer, jiffies + bn->current_interval);
+
return 0;
err_free_irq:
@@ -2542,6 +2575,8 @@ static void bnge_close_core(struct bnge_net *bn)
bnge_tx_disable(bn);
clear_bit(BNGE_STATE_OPEN, &bd->state);
+
+ timer_delete_sync(&bn->timer);
bnge_shutdown_nic(bn);
bnge_disable_napi(bn);
bnge_free_all_rings_bufs(bn);
@@ -2690,6 +2725,23 @@ static void bnge_init_ring_params(struct bnge_net *bn)
bn->netdev->cfg->hds_thresh = max(BNGE_DEFAULT_RX_COPYBREAK, rx_size);
}
+static struct workqueue_struct *
+bnge_create_workqueue_thread(struct bnge_dev *bd, const char *thread_name)
+{
+ struct workqueue_struct *wq;
+ char *wq_name;
+
+ wq_name = kasprintf(GFP_KERNEL, "%s-%s", thread_name,
+ dev_name(bd->dev));
+ if (!wq_name)
+ return NULL;
+
+ wq = create_singlethread_workqueue(wq_name);
+ kfree(wq_name);
+
+ return wq;
+}
+
int bnge_netdev_alloc(struct bnge_dev *bd, int max_irqs)
{
struct net_device *netdev;
@@ -2774,6 +2826,17 @@ int bnge_netdev_alloc(struct bnge_dev *bd, int max_irqs)
if (bd->tso_max_segs)
netif_set_tso_max_segs(netdev, bd->tso_max_segs);
+ INIT_WORK(&bn->sp_task, bnge_sp_task);
+ timer_setup(&bn->timer, bnge_timer, 0);
+ bn->current_interval = BNGE_TIMER_INTERVAL;
+
+ bn->bnge_pf_wq = bnge_create_workqueue_thread(bd, "bnge_pf_wq");
+ if (!bn->bnge_pf_wq) {
+ netdev_err(netdev, "Unable to create workqueue.\n");
+ rc = -ENOMEM;
+ goto err_netdev;
+ }
+
bn->rx_ring_size = BNGE_DEFAULT_RX_RING_SIZE;
bn->tx_ring_size = BNGE_DEFAULT_TX_RING_SIZE;
bn->rx_dir = DMA_FROM_DEVICE;
@@ -2789,11 +2852,13 @@ int bnge_netdev_alloc(struct bnge_dev *bd, int max_irqs)
rc = register_netdev(netdev);
if (rc) {
dev_err(bd->dev, "Register netdev failed rc: %d\n", rc);
- goto err_netdev;
+ goto err_free_workq;
}
return 0;
+err_free_workq:
+ destroy_workqueue(bn->bnge_pf_wq);
err_netdev:
free_netdev(netdev);
return rc;
@@ -2802,8 +2867,15 @@ int bnge_netdev_alloc(struct bnge_dev *bd, int max_irqs)
void bnge_netdev_free(struct bnge_dev *bd)
{
struct net_device *netdev = bd->netdev;
+ struct bnge_net *bn;
+
+ bn = netdev_priv(netdev);
unregister_netdev(netdev);
+
+ cancel_work_sync(&bn->sp_task);
+ destroy_workqueue(bn->bnge_pf_wq);
+
free_netdev(netdev);
bd->netdev = NULL;
}
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h
index 70f1a7c24814..d2ccee725454 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h
@@ -224,6 +224,12 @@ struct bnge_tpa_info {
#define BNGE_NQ_HDL_TYPE(hdl) (((hdl) & BNGE_NQ_HDL_TYPE_MASK) >> \
BNGE_NQ_HDL_TYPE_SHIFT)
+enum bnge_net_state {
+ BNGE_STATE_NAPI_DISABLED,
+};
+
+#define BNGE_TIMER_INTERVAL HZ
+
struct bnge_net {
struct bnge_dev *bd;
struct net_device *netdev;
@@ -281,13 +287,17 @@ struct bnge_net {
u32 stats_coal_ticks;
unsigned long state;
-#define BNGE_STATE_NAPI_DISABLED 0
u32 msg_enable;
u16 max_tpa;
__be16 vxlan_port;
__be16 nge_port;
__be16 vxlan_gpe_port;
+
+ unsigned int current_interval;
+ struct timer_list timer;
+ struct workqueue_struct *bnge_pf_wq;
+ struct work_struct sp_task;
};
#define BNGE_DEFAULT_RX_RING_SIZE 511
--
2.47.3
* [PATCH net-next v6 02/10] bng_en: query PHY capabilities and report link status
From: Bhargava Marreddy @ 2026-03-10 14:40 UTC (permalink / raw)
To: davem, edumazet, kuba, pabeni, andrew+netdev, horms
Cc: netdev, linux-kernel, michael.chan, pavan.chebbi,
vsrama-krishna.nemani, vikas.gupta, Bhargava Marreddy,
Rajashekar Hudumula, Ajit Kumar Khaparde
Query PHY capabilities and supported speeds from firmware,
retrieve the current link state (speed, duplex, pause, FEC),
and log the information. Seed the initial link state during probe.
Signed-off-by: Bhargava Marreddy <bhargava.marreddy@broadcom.com>
Reviewed-by: Vikas Gupta <vikas.gupta@broadcom.com>
Reviewed-by: Rajashekar Hudumula <rajashekar.hudumula@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
drivers/net/ethernet/broadcom/bnge/Makefile | 3 +-
drivers/net/ethernet/broadcom/bnge/bnge.h | 10 +
.../ethernet/broadcom/bnge/bnge_hwrm_lib.c | 229 +++++++++
.../ethernet/broadcom/bnge/bnge_hwrm_lib.h | 5 +
.../net/ethernet/broadcom/bnge/bnge_link.c | 460 ++++++++++++++++++
.../net/ethernet/broadcom/bnge/bnge_link.h | 193 ++++++++
.../net/ethernet/broadcom/bnge/bnge_netdev.c | 50 +-
.../net/ethernet/broadcom/bnge/bnge_netdev.h | 12 +
8 files changed, 959 insertions(+), 3 deletions(-)
create mode 100644 drivers/net/ethernet/broadcom/bnge/bnge_link.c
create mode 100644 drivers/net/ethernet/broadcom/bnge/bnge_link.h
diff --git a/drivers/net/ethernet/broadcom/bnge/Makefile b/drivers/net/ethernet/broadcom/bnge/Makefile
index fa604ee20264..8e07cb307d21 100644
--- a/drivers/net/ethernet/broadcom/bnge/Makefile
+++ b/drivers/net/ethernet/broadcom/bnge/Makefile
@@ -11,4 +11,5 @@ bng_en-y := bnge_core.o \
bnge_netdev.o \
bnge_ethtool.o \
bnge_auxr.o \
- bnge_txrx.o
+ bnge_txrx.o \
+ bnge_link.o
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge.h b/drivers/net/ethernet/broadcom/bnge/bnge.h
index f376913aa321..83ee4749cc70 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge.h
+++ b/drivers/net/ethernet/broadcom/bnge/bnge.h
@@ -94,6 +94,11 @@ struct bnge_queue_info {
u8 queue_profile;
};
+#define BNGE_PHY_FLAGS2_SHIFT 8
+#define BNGE_PHY_FL_NO_FCS PORT_PHY_QCAPS_RESP_FLAGS_NO_FCS
+#define BNGE_PHY_FL_SPEEDS2 \
+ (PORT_PHY_QCAPS_RESP_FLAGS2_SPEEDS2_SUPPORTED << 8)
+
struct bnge_dev {
struct device *dev;
struct pci_dev *pdev;
@@ -207,6 +212,11 @@ struct bnge_dev {
struct bnge_auxr_priv *aux_priv;
struct bnge_auxr_dev *auxr_dev;
+
+ struct bnge_link_info link_info;
+
+ /* Copied from flags and flags2 in hwrm_port_phy_qcaps_output */
+ u32 phy_flags;
};
static inline bool bnge_is_roce_en(struct bnge_dev *bd)
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.c b/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.c
index c46da3413417..f7570eb89b85 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.c
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.c
@@ -981,6 +981,235 @@ void bnge_hwrm_vnic_ctx_free_one(struct bnge_dev *bd,
vnic->fw_rss_cos_lb_ctx[ctx_idx] = INVALID_HW_RING_ID;
}
+static bool bnge_phy_qcaps_no_speed(struct hwrm_port_phy_qcaps_output *resp)
+{
+ if (!resp->supported_speeds_auto_mode &&
+ !resp->supported_speeds_force_mode &&
+ !resp->supported_pam4_speeds_auto_mode &&
+ !resp->supported_pam4_speeds_force_mode &&
+ !resp->supported_speeds2_auto_mode &&
+ !resp->supported_speeds2_force_mode)
+ return true;
+ return false;
+}
+
+int bnge_hwrm_phy_qcaps(struct bnge_dev *bd)
+{
+ struct bnge_link_info *link_info = &bd->link_info;
+ struct hwrm_port_phy_qcaps_output *resp;
+ struct hwrm_port_phy_qcaps_input *req;
+ int rc;
+
+ rc = bnge_hwrm_req_init(bd, req, HWRM_PORT_PHY_QCAPS);
+ if (rc)
+ return rc;
+
+ resp = bnge_hwrm_req_hold(bd, req);
+ rc = bnge_hwrm_req_send(bd, req);
+ if (rc)
+ goto hwrm_phy_qcaps_exit;
+
+ bd->phy_flags = resp->flags |
+ (le16_to_cpu(resp->flags2) << BNGE_PHY_FLAGS2_SHIFT);
+
+ if (bnge_phy_qcaps_no_speed(resp)) {
+ link_info->phy_enabled = false;
+ netdev_warn(bd->netdev, "Ethernet link disabled\n");
+ } else if (!link_info->phy_enabled) {
+ link_info->phy_enabled = true;
+ netdev_info(bd->netdev, "Ethernet link enabled\n");
+ /* Phy re-enabled, reprobe the speeds */
+ link_info->support_auto_speeds = 0;
+ link_info->support_pam4_auto_speeds = 0;
+ link_info->support_auto_speeds2 = 0;
+ }
+
+ if (resp->supported_speeds_auto_mode)
+ link_info->support_auto_speeds =
+ le16_to_cpu(resp->supported_speeds_auto_mode);
+ if (resp->supported_pam4_speeds_auto_mode)
+ link_info->support_pam4_auto_speeds =
+ le16_to_cpu(resp->supported_pam4_speeds_auto_mode);
+ if (resp->supported_speeds2_auto_mode)
+ link_info->support_auto_speeds2 =
+ le16_to_cpu(resp->supported_speeds2_auto_mode);
+
+ bd->port_count = resp->port_cnt;
+
+hwrm_phy_qcaps_exit:
+ bnge_hwrm_req_drop(bd, req);
+ return rc;
+}
+
+int bnge_hwrm_set_link_setting(struct bnge_net *bn, bool set_pause)
+{
+ struct hwrm_port_phy_cfg_input *req;
+ struct bnge_dev *bd = bn->bd;
+ int rc;
+
+ rc = bnge_hwrm_req_init(bd, req, HWRM_PORT_PHY_CFG);
+ if (rc)
+ return rc;
+
+ if (set_pause)
+ bnge_hwrm_set_pause_common(bn, req);
+
+ bnge_hwrm_set_link_common(bn, req);
+
+ return bnge_hwrm_req_send(bd, req);
+}
+
+int bnge_update_link(struct bnge_net *bn, bool chng_link_state)
+{
+ struct hwrm_port_phy_qcfg_output *resp;
+ struct hwrm_port_phy_qcfg_input *req;
+ struct bnge_link_info *link_info;
+ struct bnge_dev *bd = bn->bd;
+ bool support_changed;
+ u8 link_state;
+ int rc;
+
+ link_info = &bd->link_info;
+ link_state = link_info->link_state;
+
+ rc = bnge_hwrm_req_init(bd, req, HWRM_PORT_PHY_QCFG);
+ if (rc)
+ return rc;
+
+ resp = bnge_hwrm_req_hold(bd, req);
+ rc = bnge_hwrm_req_send(bd, req);
+ if (rc) {
+ bnge_hwrm_req_drop(bd, req);
+ return rc;
+ }
+
+ memcpy(&link_info->phy_qcfg_resp, resp, sizeof(*resp));
+ link_info->phy_link_status = resp->link;
+ link_info->duplex = resp->duplex_state;
+ link_info->pause = resp->pause;
+ link_info->auto_mode = resp->auto_mode;
+ link_info->auto_pause_setting = resp->auto_pause;
+ link_info->lp_pause = resp->link_partner_adv_pause;
+ link_info->force_pause_setting = resp->force_pause;
+ link_info->duplex_setting = resp->duplex_cfg;
+ if (link_info->phy_link_status == BNGE_LINK_LINK) {
+ link_info->link_speed = le16_to_cpu(resp->link_speed);
+ if (bd->phy_flags & BNGE_PHY_FL_SPEEDS2)
+ link_info->active_lanes = resp->active_lanes;
+ } else {
+ link_info->link_speed = 0;
+ link_info->active_lanes = 0;
+ }
+ link_info->force_link_speed = le16_to_cpu(resp->force_link_speed);
+ link_info->force_pam4_link_speed =
+ le16_to_cpu(resp->force_pam4_link_speed);
+ link_info->force_link_speed2 = le16_to_cpu(resp->force_link_speeds2);
+ link_info->support_speeds = le16_to_cpu(resp->support_speeds);
+ link_info->support_pam4_speeds = le16_to_cpu(resp->support_pam4_speeds);
+ link_info->support_speeds2 = le16_to_cpu(resp->support_speeds2);
+ link_info->auto_link_speeds = le16_to_cpu(resp->auto_link_speed_mask);
+ link_info->auto_pam4_link_speeds =
+ le16_to_cpu(resp->auto_pam4_link_speed_mask);
+ link_info->auto_link_speeds2 = le16_to_cpu(resp->auto_link_speeds2);
+ link_info->lp_auto_link_speeds =
+ le16_to_cpu(resp->link_partner_adv_speeds);
+ link_info->lp_auto_pam4_link_speeds =
+ resp->link_partner_pam4_adv_speeds;
+ link_info->media_type = resp->media_type;
+ link_info->phy_type = resp->phy_type;
+ link_info->phy_addr = resp->eee_config_phy_addr &
+ PORT_PHY_QCFG_RESP_PHY_ADDR_MASK;
+ link_info->module_status = resp->module_status;
+
+ link_info->fec_cfg = le16_to_cpu(resp->fec_cfg);
+ link_info->active_fec_sig_mode = resp->active_fec_signal_mode;
+
+ if (chng_link_state) {
+ if (link_info->phy_link_status == BNGE_LINK_LINK)
+ link_info->link_state = BNGE_LINK_STATE_UP;
+ else
+ link_info->link_state = BNGE_LINK_STATE_DOWN;
+ if (link_state != link_info->link_state)
+ bnge_report_link(bd);
+ } else {
+ /* always link down if not required to update link state */
+ link_info->link_state = BNGE_LINK_STATE_DOWN;
+ }
+ bnge_hwrm_req_drop(bd, req);
+
+ if (!BNGE_PHY_CFG_ABLE(bd))
+ return 0;
+
+ support_changed = bnge_support_speed_dropped(bn);
+ if (support_changed && (bn->eth_link_info.autoneg & BNGE_AUTONEG_SPEED))
+ rc = bnge_hwrm_set_link_setting(bn, true);
+ return rc;
+}
+
+int bnge_hwrm_set_pause(struct bnge_net *bn)
+{
+ struct bnge_ethtool_link_info *elink_info = &bn->eth_link_info;
+ struct hwrm_port_phy_cfg_input *req;
+ struct bnge_dev *bd = bn->bd;
+ bool pause_autoneg;
+ int rc;
+
+ rc = bnge_hwrm_req_init(bd, req, HWRM_PORT_PHY_CFG);
+ if (rc)
+ return rc;
+
+ pause_autoneg = !!(elink_info->autoneg & BNGE_AUTONEG_FLOW_CTRL);
+
+ /* Prepare PHY pause-advertisement or forced-pause settings. */
+ bnge_hwrm_set_pause_common(bn, req);
+
+ /* Prepare speed/autoneg settings */
+ if (pause_autoneg || elink_info->force_link_chng)
+ bnge_hwrm_set_link_common(bn, req);
+
+ rc = bnge_hwrm_req_send(bd, req);
+ if (!rc && !pause_autoneg) {
+ /* Since changing of pause setting, with pause autoneg off,
+ * doesn't trigger any link change event, the driver needs to
+ * update the current MAC pause upon successful return of the
+ * phy_cfg command.
+ */
+ bd->link_info.force_pause_setting =
+ bd->link_info.pause = elink_info->req_flow_ctrl;
+ bd->link_info.auto_pause_setting = 0;
+ if (!elink_info->force_link_chng)
+ bnge_report_link(bd);
+ }
+ elink_info->force_link_chng = false;
+
+ return rc;
+}
+
+int bnge_hwrm_shutdown_link(struct bnge_dev *bd)
+{
+ struct hwrm_port_phy_cfg_input *req;
+ int rc;
+
+ if (!BNGE_PHY_CFG_ABLE(bd))
+ return 0;
+
+ rc = bnge_hwrm_req_init(bd, req, HWRM_PORT_PHY_CFG);
+ if (rc)
+ return rc;
+
+ req->flags = cpu_to_le32(PORT_PHY_CFG_REQ_FLAGS_FORCE_LINK_DWN);
+ rc = bnge_hwrm_req_send(bd, req);
+ if (!rc) {
+ /* Device is not obliged to link down in certain scenarios,
+ * even when forced. Setting the state unknown is consistent
+ * with driver startup and will force link state to be
+ * reported during subsequent open based on PORT_PHY_QCFG.
+ */
+ bd->link_info.link_state = BNGE_LINK_STATE_UNKNOWN;
+ }
+ return rc;
+}
+
void bnge_hwrm_stat_ctx_free(struct bnge_net *bn)
{
struct hwrm_stat_ctx_free_input *req;
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.h b/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.h
index 38b046237feb..86ca3ac2244b 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.h
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.h
@@ -57,4 +57,9 @@ int hwrm_ring_alloc_send_msg(struct bnge_net *bn,
int bnge_hwrm_set_async_event_cr(struct bnge_dev *bd, int idx);
int bnge_hwrm_vnic_set_tpa(struct bnge_dev *bd, struct bnge_vnic_info *vnic,
u32 tpa_flags);
+int bnge_update_link(struct bnge_net *bn, bool chng_link_state);
+int bnge_hwrm_phy_qcaps(struct bnge_dev *bd);
+int bnge_hwrm_set_link_setting(struct bnge_net *bn, bool set_pause);
+int bnge_hwrm_set_pause(struct bnge_net *bn);
+int bnge_hwrm_shutdown_link(struct bnge_dev *bd);
#endif /* _BNGE_HWRM_LIB_H_ */
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_link.c b/drivers/net/ethernet/broadcom/bnge/bnge_link.c
new file mode 100644
index 000000000000..0c76cbc794f7
--- /dev/null
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_link.c
@@ -0,0 +1,460 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2026 Broadcom.
+
+#include <linux/linkmode.h>
+
+#include "bnge.h"
+#include "bnge_link.h"
+#include "bnge_hwrm_lib.h"
+
+static u32 bnge_fw_to_ethtool_speed(u16 fw_link_speed)
+{
+ switch (fw_link_speed) {
+ case BNGE_LINK_SPEED_50GB:
+ case BNGE_LINK_SPEED_50GB_PAM4:
+ return SPEED_50000;
+ case BNGE_LINK_SPEED_100GB:
+ case BNGE_LINK_SPEED_100GB_PAM4:
+ case BNGE_LINK_SPEED_100GB_PAM4_112:
+ return SPEED_100000;
+ case BNGE_LINK_SPEED_200GB:
+ case BNGE_LINK_SPEED_200GB_PAM4:
+ case BNGE_LINK_SPEED_200GB_PAM4_112:
+ return SPEED_200000;
+ case BNGE_LINK_SPEED_400GB:
+ case BNGE_LINK_SPEED_400GB_PAM4:
+ case BNGE_LINK_SPEED_400GB_PAM4_112:
+ return SPEED_400000;
+ case BNGE_LINK_SPEED_800GB:
+ case BNGE_LINK_SPEED_800GB_PAM4_112:
+ return SPEED_800000;
+ default:
+ return SPEED_UNKNOWN;
+ }
+}
+
+static void bnge_set_auto_speed(struct bnge_net *bn)
+{
+ struct bnge_ethtool_link_info *elink_info = &bn->eth_link_info;
+ struct bnge_link_info *link_info;
+ struct bnge_dev *bd = bn->bd;
+
+ link_info = &bd->link_info;
+
+ if (bd->phy_flags & BNGE_PHY_FL_SPEEDS2) {
+ elink_info->advertising = link_info->auto_link_speeds2;
+ return;
+ }
+ elink_info->advertising = link_info->auto_link_speeds;
+ elink_info->advertising_pam4 = link_info->auto_pam4_link_speeds;
+}
+
+static void bnge_set_force_speed(struct bnge_net *bn)
+{
+ struct bnge_ethtool_link_info *elink_info = &bn->eth_link_info;
+ struct bnge_link_info *link_info;
+ struct bnge_dev *bd = bn->bd;
+
+ link_info = &bd->link_info;
+
+ if (bd->phy_flags & BNGE_PHY_FL_SPEEDS2) {
+ elink_info->req_link_speed = link_info->force_link_speed2;
+ switch (elink_info->req_link_speed) {
+ case BNGE_LINK_SPEED_50GB_PAM4:
+ case BNGE_LINK_SPEED_100GB_PAM4:
+ case BNGE_LINK_SPEED_200GB_PAM4:
+ case BNGE_LINK_SPEED_400GB_PAM4:
+ elink_info->req_signal_mode = BNGE_SIG_MODE_PAM4;
+ break;
+ case BNGE_LINK_SPEED_100GB_PAM4_112:
+ case BNGE_LINK_SPEED_200GB_PAM4_112:
+ case BNGE_LINK_SPEED_400GB_PAM4_112:
+ case BNGE_LINK_SPEED_800GB_PAM4_112:
+ elink_info->req_signal_mode = BNGE_SIG_MODE_PAM4_112;
+ break;
+ default:
+ elink_info->req_signal_mode = BNGE_SIG_MODE_NRZ;
+ break;
+ }
+ } else if (link_info->force_pam4_link_speed) {
+ elink_info->req_link_speed = link_info->force_pam4_link_speed;
+ elink_info->req_signal_mode = BNGE_SIG_MODE_PAM4;
+ } else {
+ elink_info->req_link_speed = link_info->force_link_speed;
+ elink_info->req_signal_mode = BNGE_SIG_MODE_NRZ;
+ }
+}
+
+void bnge_init_ethtool_link_settings(struct bnge_net *bn)
+{
+ struct bnge_ethtool_link_info *elink_info = &bn->eth_link_info;
+ struct bnge_link_info *link_info;
+ struct bnge_dev *bd = bn->bd;
+
+ link_info = &bd->link_info;
+
+ if (BNGE_AUTO_MODE(link_info->auto_mode)) {
+ elink_info->autoneg = BNGE_AUTONEG_SPEED;
+ if (link_info->auto_pause_setting &
+ PORT_PHY_CFG_REQ_AUTO_PAUSE_AUTONEG_PAUSE)
+ elink_info->autoneg |= BNGE_AUTONEG_FLOW_CTRL;
+ bnge_set_auto_speed(bn);
+ } else {
+ bnge_set_force_speed(bn);
+ elink_info->req_duplex = link_info->duplex_setting;
+ }
+ if (elink_info->autoneg & BNGE_AUTONEG_FLOW_CTRL)
+ elink_info->req_flow_ctrl =
+ link_info->auto_pause_setting & BNGE_LINK_PAUSE_BOTH;
+ else
+ elink_info->req_flow_ctrl = link_info->force_pause_setting;
+}
+
+int bnge_probe_phy(struct bnge_net *bn, bool fw_dflt)
+{
+ struct bnge_dev *bd = bn->bd;
+ int rc;
+
+ bd->phy_flags = 0;
+ rc = bnge_hwrm_phy_qcaps(bd);
+ if (rc) {
+ netdev_err(bn->netdev,
+ "Probe PHY can't get PHY qcaps (rc: %d)\n", rc);
+ return rc;
+ }
+ if (bd->phy_flags & BNGE_PHY_FL_NO_FCS)
+ bn->netdev->priv_flags |= IFF_SUPP_NOFCS;
+ else
+ bn->netdev->priv_flags &= ~IFF_SUPP_NOFCS;
+ if (!fw_dflt)
+ return 0;
+
+ rc = bnge_update_link(bn, false);
+ if (rc) {
+ netdev_err(bn->netdev, "Probe PHY can't update link (rc: %d)\n",
+ rc);
+ return rc;
+ }
+ bnge_init_ethtool_link_settings(bn);
+
+ return 0;
+}
+
+void bnge_hwrm_set_link_common(struct bnge_net *bn,
+ struct hwrm_port_phy_cfg_input *req)
+{
+ struct bnge_ethtool_link_info *elink_info = &bn->eth_link_info;
+ struct bnge_dev *bd = bn->bd;
+
+ if (elink_info->autoneg & BNGE_AUTONEG_SPEED) {
+ req->auto_mode |= PORT_PHY_CFG_REQ_AUTO_MODE_SPEED_MASK;
+ if (bd->phy_flags & BNGE_PHY_FL_SPEEDS2) {
+ req->enables |= cpu_to_le32(BNGE_PHY_AUTO_SPEEDS2_MASK);
+ req->auto_link_speeds2_mask =
+ cpu_to_le16(elink_info->advertising);
+ } else if (elink_info->advertising) {
+ req->enables |= cpu_to_le32(BNGE_PHY_AUTO_SPEED_MASK);
+ req->auto_link_speed_mask =
+ cpu_to_le16(elink_info->advertising);
+ }
+ if (elink_info->advertising_pam4) {
+ req->enables |=
+ cpu_to_le32(BNGE_PHY_AUTO_PAM4_SPEED_MASK);
+ req->auto_link_pam4_speed_mask =
+ cpu_to_le16(elink_info->advertising_pam4);
+ }
+ req->enables |= cpu_to_le32(PORT_PHY_CFG_REQ_ENABLES_AUTO_MODE);
+ req->flags |= cpu_to_le32(BNGE_PHY_FLAGS_RESTART_AUTO);
+ } else {
+ req->flags |= cpu_to_le32(PORT_PHY_CFG_REQ_FLAGS_FORCE);
+ if (bd->phy_flags & BNGE_PHY_FL_SPEEDS2) {
+ req->force_link_speeds2 =
+ cpu_to_le16(elink_info->req_link_speed);
+ req->enables |=
+ cpu_to_le32(BNGE_PHY_FLAGS_ENA_FORCE_SPEEDS2);
+ netif_info(bn, link, bn->netdev,
+ "Forcing FW speed2: %d\n",
+ (u32)elink_info->req_link_speed);
+ } else if (elink_info->req_signal_mode == BNGE_SIG_MODE_PAM4) {
+ req->force_pam4_link_speed =
+ cpu_to_le16(elink_info->req_link_speed);
+ req->enables |=
+ cpu_to_le32(BNGE_PHY_FLAGS_ENA_FORCE_PM4_SPEED);
+ } else {
+ req->force_link_speed =
+ cpu_to_le16(elink_info->req_link_speed);
+ }
+ }
+
+ /* tell FW that the setting takes effect immediately */
+ req->flags |= cpu_to_le32(PORT_PHY_CFG_REQ_FLAGS_RESET_PHY);
+}
+
+static bool bnge_auto_speed_updated(struct bnge_net *bn)
+{
+ struct bnge_ethtool_link_info *elink_info = &bn->eth_link_info;
+ struct bnge_link_info *link_info;
+ struct bnge_dev *bd = bn->bd;
+
+ link_info = &bd->link_info;
+
+ if (bd->phy_flags & BNGE_PHY_FL_SPEEDS2)
+ return elink_info->advertising != link_info->auto_link_speeds2;
+
+ return elink_info->advertising != link_info->auto_link_speeds ||
+ elink_info->advertising_pam4 != link_info->auto_pam4_link_speeds;
+}
+
+void bnge_hwrm_set_pause_common(struct bnge_net *bn,
+ struct hwrm_port_phy_cfg_input *req)
+{
+ if (bn->eth_link_info.autoneg & BNGE_AUTONEG_FLOW_CTRL) {
+ req->auto_pause = PORT_PHY_CFG_REQ_AUTO_PAUSE_AUTONEG_PAUSE;
+ if (bn->eth_link_info.req_flow_ctrl & BNGE_LINK_PAUSE_RX)
+ req->auto_pause |= PORT_PHY_CFG_REQ_AUTO_PAUSE_RX;
+ if (bn->eth_link_info.req_flow_ctrl & BNGE_LINK_PAUSE_TX)
+ req->auto_pause |= PORT_PHY_CFG_REQ_AUTO_PAUSE_TX;
+ req->enables |=
+ cpu_to_le32(PORT_PHY_CFG_REQ_ENABLES_AUTO_PAUSE);
+ } else {
+ if (bn->eth_link_info.req_flow_ctrl & BNGE_LINK_PAUSE_RX)
+ req->force_pause |= PORT_PHY_CFG_REQ_FORCE_PAUSE_RX;
+ if (bn->eth_link_info.req_flow_ctrl & BNGE_LINK_PAUSE_TX)
+ req->force_pause |= PORT_PHY_CFG_REQ_FORCE_PAUSE_TX;
+ req->enables |=
+ cpu_to_le32(PORT_PHY_CFG_REQ_ENABLES_FORCE_PAUSE);
+ req->auto_pause = req->force_pause;
+ req->enables |=
+ cpu_to_le32(PORT_PHY_CFG_REQ_ENABLES_AUTO_PAUSE);
+ }
+}
+
+static bool bnge_force_speed_updated(struct bnge_net *bn)
+{
+ struct bnge_ethtool_link_info *elink_info = &bn->eth_link_info;
+ struct bnge_link_info *link_info;
+ struct bnge_dev *bd = bn->bd;
+
+ link_info = &bd->link_info;
+
+ if (bd->phy_flags & BNGE_PHY_FL_SPEEDS2)
+ return elink_info->req_link_speed != link_info->force_link_speed2;
+
+ if (elink_info->req_signal_mode == BNGE_SIG_MODE_NRZ)
+ return elink_info->req_link_speed != link_info->force_link_speed;
+
+ return elink_info->req_signal_mode == BNGE_SIG_MODE_PAM4 &&
+ elink_info->req_link_speed != link_info->force_pam4_link_speed;
+}
+
+int bnge_update_phy_setting(struct bnge_net *bn)
+{
+ struct bnge_ethtool_link_info *elink_info;
+ struct bnge_link_info *link_info;
+ struct bnge_dev *bd = bn->bd;
+ bool update_pause = false;
+ bool update_link = false;
+ int rc;
+
+ link_info = &bd->link_info;
+ elink_info = &bn->eth_link_info;
+ rc = bnge_update_link(bn, true);
+ if (rc) {
+ netdev_err(bn->netdev, "failed to update link (rc: %d)\n",
+ rc);
+ return rc;
+ }
+
+ if ((elink_info->autoneg & BNGE_AUTONEG_FLOW_CTRL) &&
+ (link_info->auto_pause_setting & BNGE_LINK_PAUSE_BOTH) !=
+ elink_info->req_flow_ctrl)
+ update_pause = true;
+ if (!(elink_info->autoneg & BNGE_AUTONEG_FLOW_CTRL) &&
+ link_info->force_pause_setting != elink_info->req_flow_ctrl)
+ update_pause = true;
+ if (!(elink_info->autoneg & BNGE_AUTONEG_SPEED)) {
+ if (BNGE_AUTO_MODE(link_info->auto_mode))
+ update_link = true;
+ if (bnge_force_speed_updated(bn))
+ update_link = true;
+ if (elink_info->req_duplex != link_info->duplex_setting)
+ update_link = true;
+ } else {
+ if (link_info->auto_mode == BNGE_LINK_AUTO_NONE)
+ update_link = true;
+ if (bnge_auto_speed_updated(bn))
+ update_link = true;
+ }
+
+ /* The last close may have shut down the link, so need to call
+ * PHY_CFG to bring it back up.
+ */
+ if (!BNGE_LINK_IS_UP(bd))
+ update_link = true;
+
+ if (update_link)
+ rc = bnge_hwrm_set_link_setting(bn, update_pause);
+ else if (update_pause)
+ rc = bnge_hwrm_set_pause(bn);
+
+ if (rc) {
+ netdev_err(bn->netdev,
+ "failed to update PHY setting (rc: %d)\n", rc);
+ return rc;
+ }
+
+ return 0;
+}
+
+void bnge_get_port_module_status(struct bnge_net *bn)
+{
+ struct hwrm_port_phy_qcfg_output *resp;
+ struct bnge_link_info *link_info;
+ struct bnge_dev *bd = bn->bd;
+ u8 module_status;
+
+ link_info = &bd->link_info;
+ resp = &link_info->phy_qcfg_resp;
+
+ if (bnge_update_link(bn, true))
+ return;
+
+ module_status = link_info->module_status;
+ switch (module_status) {
+ case PORT_PHY_QCFG_RESP_MODULE_STATUS_DISABLETX:
+ case PORT_PHY_QCFG_RESP_MODULE_STATUS_PWRDOWN:
+ case PORT_PHY_QCFG_RESP_MODULE_STATUS_WARNINGMSG:
+ netdev_warn(bn->netdev,
+ "Unqualified SFP+ module detected on port %d\n",
+ bd->pf.port_id);
+ netdev_warn(bn->netdev, "Module part number %s\n",
+ resp->phy_vendor_partnumber);
+ if (module_status == PORT_PHY_QCFG_RESP_MODULE_STATUS_DISABLETX)
+ netdev_warn(bn->netdev, "TX is disabled\n");
+ if (module_status == PORT_PHY_QCFG_RESP_MODULE_STATUS_PWRDOWN)
+ netdev_warn(bn->netdev, "SFP+ module is shut down\n");
+ break;
+ }
+}
+
+static bool bnge_support_dropped(u16 advertising, u16 supported)
+{
+ return (advertising & ~supported) != 0;
+}
+
+bool bnge_support_speed_dropped(struct bnge_net *bn)
+{
+ struct bnge_ethtool_link_info *elink_info = &bn->eth_link_info;
+ struct bnge_link_info *link_info;
+ struct bnge_dev *bd = bn->bd;
+
+ link_info = &bd->link_info;
+
+ /* Check if any advertised speeds are no longer supported. The caller
+ * holds the netdev instance lock, so we can modify link_info settings.
+ */
+ if (bd->phy_flags & BNGE_PHY_FL_SPEEDS2) {
+ if (bnge_support_dropped(elink_info->advertising,
+ link_info->support_auto_speeds2)) {
+ elink_info->advertising =
+ link_info->support_auto_speeds2;
+ return true;
+ }
+ return false;
+ }
+ if (bnge_support_dropped(elink_info->advertising,
+ link_info->support_auto_speeds)) {
+ elink_info->advertising = link_info->support_auto_speeds;
+ return true;
+ }
+ if (bnge_support_dropped(elink_info->advertising_pam4,
+ link_info->support_pam4_auto_speeds)) {
+ elink_info->advertising_pam4 =
+ link_info->support_pam4_auto_speeds;
+ return true;
+ }
+ return false;
+}
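[The bnge_support_dropped()/clamp pattern above reduces to a small pure function. A sketch with an illustrative sk_ prefix, assuming 16-bit speed masks as in the driver:]

```c
#include <stdint.h>

/* If the advertised mask contains bits the PHY no longer supports,
 * fall back to advertising exactly the supported set; otherwise keep
 * the user's advertising mask unchanged. */
static uint16_t sk_clamp_advertising(uint16_t advertising, uint16_t supported)
{
	if (advertising & ~supported)	/* an advertised speed was dropped */
		return supported;
	return advertising;
}
```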
+
+static char *bnge_report_fec(struct bnge_link_info *link_info)
+{
+ u8 active_fec = link_info->active_fec_sig_mode &
+ PORT_PHY_QCFG_RESP_ACTIVE_FEC_MASK;
+
+ switch (active_fec) {
+ default:
+ case PORT_PHY_QCFG_RESP_ACTIVE_FEC_FEC_NONE_ACTIVE:
+ return "None";
+ case PORT_PHY_QCFG_RESP_ACTIVE_FEC_FEC_CLAUSE74_ACTIVE:
+ return "Clause 74 BaseR";
+ case PORT_PHY_QCFG_RESP_ACTIVE_FEC_FEC_CLAUSE91_ACTIVE:
+ return "Clause 91 RS(528,514)";
+ case PORT_PHY_QCFG_RESP_ACTIVE_FEC_FEC_RS544_1XN_ACTIVE:
+ return "Clause 91 RS544_1XN";
+ case PORT_PHY_QCFG_RESP_ACTIVE_FEC_FEC_RS544_IEEE_ACTIVE:
+ return "Clause 91 RS(544,514)";
+ case PORT_PHY_QCFG_RESP_ACTIVE_FEC_FEC_RS272_1XN_ACTIVE:
+ return "Clause 91 RS272_1XN";
+ case PORT_PHY_QCFG_RESP_ACTIVE_FEC_FEC_RS272_IEEE_ACTIVE:
+ return "Clause 91 RS(272,257)";
+ }
+}
+
+void bnge_report_link(struct bnge_dev *bd)
+{
+ if (BNGE_LINK_IS_UP(bd)) {
+ const char *signal = "";
+ const char *flow_ctrl;
+ const char *duplex;
+ u32 speed;
+ u16 fec;
+
+ netif_carrier_on(bd->netdev);
+ speed = bnge_fw_to_ethtool_speed(bd->link_info.link_speed);
+ if (speed == SPEED_UNKNOWN) {
+ netdev_info(bd->netdev,
+ "NIC Link is Up, speed unknown\n");
+ return;
+ }
+ if (bd->link_info.duplex == BNGE_LINK_DUPLEX_FULL)
+ duplex = "full";
+ else
+ duplex = "half";
+ if (bd->link_info.pause == BNGE_LINK_PAUSE_BOTH)
+ flow_ctrl = "ON - receive & transmit";
+ else if (bd->link_info.pause == BNGE_LINK_PAUSE_TX)
+ flow_ctrl = "ON - transmit";
+ else if (bd->link_info.pause == BNGE_LINK_PAUSE_RX)
+ flow_ctrl = "ON - receive";
+ else
+ flow_ctrl = "none";
+ if (bd->link_info.phy_qcfg_resp.option_flags &
+ PORT_PHY_QCFG_RESP_OPTION_FLAGS_SIGNAL_MODE_KNOWN) {
+ u8 sig_mode = bd->link_info.active_fec_sig_mode &
+ PORT_PHY_QCFG_RESP_SIGNAL_MODE_MASK;
+ switch (sig_mode) {
+ case PORT_PHY_QCFG_RESP_SIGNAL_MODE_NRZ:
+ signal = "(NRZ) ";
+ break;
+ case PORT_PHY_QCFG_RESP_SIGNAL_MODE_PAM4:
+ signal = "(PAM4 56Gbps) ";
+ break;
+ case PORT_PHY_QCFG_RESP_SIGNAL_MODE_PAM4_112:
+ signal = "(PAM4 112Gbps) ";
+ break;
+ default:
+ break;
+ }
+ }
+ netdev_info(bd->netdev, "NIC Link is Up, %u Mbps %s%s duplex, Flow control: %s\n",
+ speed, signal, duplex, flow_ctrl);
+ fec = bd->link_info.fec_cfg;
+ if (!(fec & PORT_PHY_QCFG_RESP_FEC_CFG_FEC_NONE_SUPPORTED))
+ netdev_info(bd->netdev, "FEC autoneg %s encoding: %s\n",
+ (fec & BNGE_FEC_AUTONEG) ? "on" : "off",
+ bnge_report_fec(&bd->link_info));
+ } else {
+ netif_carrier_off(bd->netdev);
+ netdev_err(bd->netdev, "NIC Link is Down\n");
+ }
+}
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_link.h b/drivers/net/ethernet/broadcom/bnge/bnge_link.h
new file mode 100644
index 000000000000..f22dec2899e3
--- /dev/null
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_link.h
@@ -0,0 +1,193 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) 2026 Broadcom */
+
+#ifndef _BNGE_LINK_H_
+#define _BNGE_LINK_H_
+
+#define BNGE_PHY_CFG_ABLE(bd) \
+ ((bd)->link_info.phy_enabled)
+
+#define BNGE_PHY_AUTO_SPEEDS2_MASK \
+ PORT_PHY_CFG_REQ_ENABLES_AUTO_LINK_SPEEDS2_MASK
+#define BNGE_PHY_AUTO_SPEED_MASK \
+ PORT_PHY_CFG_REQ_ENABLES_AUTO_LINK_SPEED_MASK
+#define BNGE_PHY_AUTO_PAM4_SPEED_MASK \
+ PORT_PHY_CFG_REQ_ENABLES_AUTO_PAM4_LINK_SPEED_MASK
+#define BNGE_PHY_FLAGS_RESTART_AUTO \
+ PORT_PHY_CFG_REQ_FLAGS_RESTART_AUTONEG
+#define BNGE_PHY_FLAGS_ENA_FORCE_SPEEDS2 \
+ PORT_PHY_CFG_REQ_ENABLES_FORCE_LINK_SPEEDS2
+#define BNGE_PHY_FLAGS_ENA_FORCE_PM4_SPEED \
+ PORT_PHY_CFG_REQ_ENABLES_FORCE_PAM4_LINK_SPEED
+
+#define BNGE_LINK_LINK PORT_PHY_QCFG_RESP_LINK_LINK
+
+enum bnge_link_state {
+ BNGE_LINK_STATE_UNKNOWN,
+ BNGE_LINK_STATE_DOWN,
+ BNGE_LINK_STATE_UP,
+};
+
+#define BNGE_LINK_IS_UP(bd) \
+ ((bd)->link_info.link_state == BNGE_LINK_STATE_UP)
+
+#define BNGE_LINK_DUPLEX_FULL PORT_PHY_QCFG_RESP_DUPLEX_STATE_FULL
+
+#define BNGE_LINK_PAUSE_TX PORT_PHY_QCFG_RESP_PAUSE_TX
+#define BNGE_LINK_PAUSE_RX PORT_PHY_QCFG_RESP_PAUSE_RX
+#define BNGE_LINK_PAUSE_BOTH (PORT_PHY_QCFG_RESP_PAUSE_RX | \
+ PORT_PHY_QCFG_RESP_PAUSE_TX)
+
+#define BNGE_LINK_AUTO_NONE PORT_PHY_QCFG_RESP_AUTO_MODE_NONE
+#define BNGE_LINK_AUTO_MSK PORT_PHY_QCFG_RESP_AUTO_MODE_SPEED_MASK
+#define BNGE_AUTO_MODE(mode) ((mode) > BNGE_LINK_AUTO_NONE && \
+ (mode) <= BNGE_LINK_AUTO_MSK)
+
+#define BNGE_LINK_SPEED_50GB PORT_PHY_QCFG_RESP_LINK_SPEED_50GB
+#define BNGE_LINK_SPEED_100GB PORT_PHY_QCFG_RESP_LINK_SPEED_100GB
+#define BNGE_LINK_SPEED_200GB PORT_PHY_QCFG_RESP_LINK_SPEED_200GB
+#define BNGE_LINK_SPEED_400GB PORT_PHY_QCFG_RESP_LINK_SPEED_400GB
+#define BNGE_LINK_SPEED_800GB PORT_PHY_QCFG_RESP_LINK_SPEED_800GB
+
+#define BNGE_LINK_SPEED_MSK_50GB PORT_PHY_QCFG_RESP_SUPPORT_SPEEDS_50GB
+#define BNGE_LINK_SPEED_MSK_100GB PORT_PHY_QCFG_RESP_SUPPORT_SPEEDS_100GB
+
+#define BNGE_LINK_PAM4_SPEED_MSK_50GB PORT_PHY_QCFG_RESP_SUPPORT_PAM4_SPEEDS_50G
+#define BNGE_LINK_PAM4_SPEED_MSK_100GB \
+ PORT_PHY_QCFG_RESP_SUPPORT_PAM4_SPEEDS_100G
+#define BNGE_LINK_PAM4_SPEED_MSK_200GB \
+ PORT_PHY_QCFG_RESP_SUPPORT_PAM4_SPEEDS_200G
+
+#define BNGE_LINK_SPEEDS2_MSK_50GB PORT_PHY_QCFG_RESP_SUPPORT_SPEEDS2_50GB
+#define BNGE_LINK_SPEEDS2_MSK_100GB PORT_PHY_QCFG_RESP_SUPPORT_SPEEDS2_100GB
+#define BNGE_LINK_SPEEDS2_MSK_50GB_PAM4 \
+ PORT_PHY_QCFG_RESP_SUPPORT_SPEEDS2_50GB_PAM4_56
+#define BNGE_LINK_SPEEDS2_MSK_100GB_PAM4 \
+ PORT_PHY_QCFG_RESP_SUPPORT_SPEEDS2_100GB_PAM4_56
+#define BNGE_LINK_SPEEDS2_MSK_200GB_PAM4 \
+ PORT_PHY_QCFG_RESP_SUPPORT_SPEEDS2_200GB_PAM4_56
+#define BNGE_LINK_SPEEDS2_MSK_400GB_PAM4 \
+ PORT_PHY_QCFG_RESP_SUPPORT_SPEEDS2_400GB_PAM4_56
+#define BNGE_LINK_SPEEDS2_MSK_100GB_PAM4_112 \
+ PORT_PHY_QCFG_RESP_SUPPORT_SPEEDS2_100GB_PAM4_112
+#define BNGE_LINK_SPEEDS2_MSK_200GB_PAM4_112 \
+ PORT_PHY_QCFG_RESP_SUPPORT_SPEEDS2_200GB_PAM4_112
+#define BNGE_LINK_SPEEDS2_MSK_400GB_PAM4_112 \
+ PORT_PHY_QCFG_RESP_SUPPORT_SPEEDS2_400GB_PAM4_112
+#define BNGE_LINK_SPEEDS2_MSK_800GB_PAM4_112 \
+ PORT_PHY_QCFG_RESP_SUPPORT_SPEEDS2_800GB_PAM4_112
+
+#define BNGE_LINK_SPEED_50GB_PAM4 \
+ PORT_PHY_CFG_REQ_FORCE_LINK_SPEEDS2_50GB_PAM4_56
+#define BNGE_LINK_SPEED_100GB_PAM4 \
+ PORT_PHY_CFG_REQ_FORCE_LINK_SPEEDS2_100GB_PAM4_56
+#define BNGE_LINK_SPEED_200GB_PAM4 \
+ PORT_PHY_CFG_REQ_FORCE_LINK_SPEEDS2_200GB_PAM4_56
+#define BNGE_LINK_SPEED_400GB_PAM4 \
+ PORT_PHY_CFG_REQ_FORCE_LINK_SPEEDS2_400GB_PAM4_56
+#define BNGE_LINK_SPEED_100GB_PAM4_112 \
+ PORT_PHY_CFG_REQ_FORCE_LINK_SPEEDS2_100GB_PAM4_112
+#define BNGE_LINK_SPEED_200GB_PAM4_112 \
+ PORT_PHY_CFG_REQ_FORCE_LINK_SPEEDS2_200GB_PAM4_112
+#define BNGE_LINK_SPEED_400GB_PAM4_112 \
+ PORT_PHY_CFG_REQ_FORCE_LINK_SPEEDS2_400GB_PAM4_112
+#define BNGE_LINK_SPEED_800GB_PAM4_112 \
+ PORT_PHY_CFG_REQ_FORCE_LINK_SPEEDS2_800GB_PAM4_112
+
+#define BNGE_FEC_NONE PORT_PHY_QCFG_RESP_FEC_CFG_FEC_NONE_SUPPORTED
+#define BNGE_FEC_AUTONEG PORT_PHY_QCFG_RESP_FEC_CFG_FEC_AUTONEG_ENABLED
+#define BNGE_FEC_ENC_BASE_R_CAP \
+ PORT_PHY_QCFG_RESP_FEC_CFG_FEC_CLAUSE74_SUPPORTED
+#define BNGE_FEC_ENC_BASE_R PORT_PHY_QCFG_RESP_FEC_CFG_FEC_CLAUSE74_ENABLED
+#define BNGE_FEC_ENC_RS_CAP \
+ PORT_PHY_QCFG_RESP_FEC_CFG_FEC_CLAUSE91_SUPPORTED
+#define BNGE_FEC_ENC_LLRS_CAP \
+ (PORT_PHY_QCFG_RESP_FEC_CFG_FEC_RS272_1XN_SUPPORTED | \
+ PORT_PHY_QCFG_RESP_FEC_CFG_FEC_RS272_IEEE_SUPPORTED)
+#define BNGE_FEC_ENC_RS \
+ (PORT_PHY_QCFG_RESP_FEC_CFG_FEC_CLAUSE91_ENABLED | \
+ PORT_PHY_QCFG_RESP_FEC_CFG_FEC_RS544_1XN_ENABLED | \
+ PORT_PHY_QCFG_RESP_FEC_CFG_FEC_RS544_IEEE_ENABLED)
+#define BNGE_FEC_ENC_LLRS \
+ (PORT_PHY_QCFG_RESP_FEC_CFG_FEC_RS272_1XN_ENABLED | \
+ PORT_PHY_QCFG_RESP_FEC_CFG_FEC_RS272_IEEE_ENABLED)
+
+struct bnge_link_info {
+ u8 phy_type;
+ u8 media_type;
+ u8 phy_addr;
+ u8 phy_link_status;
+ bool phy_enabled;
+
+ u8 link_state;
+ u8 active_lanes;
+ u8 duplex;
+ u8 pause;
+ u8 lp_pause;
+ u8 auto_pause_setting;
+ u8 force_pause_setting;
+ u8 duplex_setting;
+ u8 auto_mode;
+ u16 link_speed;
+ u16 support_speeds;
+ u16 support_pam4_speeds;
+ u16 support_speeds2;
+
+ u16 auto_link_speeds; /* fw adv setting */
+ u16 auto_pam4_link_speeds;
+ u16 auto_link_speeds2;
+
+ u16 support_auto_speeds;
+ u16 support_pam4_auto_speeds;
+ u16 support_auto_speeds2;
+
+ u16 lp_auto_link_speeds;
+ u16 lp_auto_pam4_link_speeds;
+ u16 force_link_speed;
+ u16 force_pam4_link_speed;
+ u16 force_link_speed2;
+
+ u8 module_status;
+ u8 active_fec_sig_mode;
+ u16 fec_cfg;
+
+ /* A copy of phy_qcfg output used to report link
+ * info to VF
+ */
+ struct hwrm_port_phy_qcfg_output phy_qcfg_resp;
+
+ bool phy_retry;
+ unsigned long phy_retry_expires;
+};
+
+#define BNGE_AUTONEG_SPEED 1
+#define BNGE_AUTONEG_FLOW_CTRL 2
+
+#define BNGE_SIG_MODE_NRZ PORT_PHY_QCFG_RESP_SIGNAL_MODE_NRZ
+#define BNGE_SIG_MODE_PAM4 PORT_PHY_QCFG_RESP_SIGNAL_MODE_PAM4
+#define BNGE_SIG_MODE_PAM4_112 PORT_PHY_QCFG_RESP_SIGNAL_MODE_PAM4_112
+#define BNGE_SIG_MODE_MAX (PORT_PHY_QCFG_RESP_SIGNAL_MODE_LAST + 1)
+
+struct bnge_ethtool_link_info {
+ /* copy of requested setting from ethtool cmd */
+ u8 autoneg;
+ u8 req_signal_mode;
+ u8 req_duplex;
+ u8 req_flow_ctrl;
+ u16 req_link_speed;
+ u16 advertising; /* user adv setting */
+ u16 advertising_pam4;
+ bool force_link_chng;
+};
+
+void bnge_hwrm_set_link_common(struct bnge_net *bn,
+ struct hwrm_port_phy_cfg_input *req);
+void bnge_hwrm_set_pause_common(struct bnge_net *bn,
+ struct hwrm_port_phy_cfg_input *req);
+int bnge_update_phy_setting(struct bnge_net *bn);
+void bnge_get_port_module_status(struct bnge_net *bn);
+void bnge_report_link(struct bnge_dev *bd);
+bool bnge_support_speed_dropped(struct bnge_net *bn);
+void bnge_init_ethtool_link_settings(struct bnge_net *bn);
+int bnge_probe_phy(struct bnge_net *bn, bool fw_dflt);
+#endif /* _BNGE_LINK_H_ */
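[The BNGE_AUTO_MODE() macro in the header above is a half-open/closed range check over firmware mode values. The shape in isolation, with illustrative constants (the real values come from the firmware ABI, not these):]

```c
/* Illustrative stand-ins for the firmware auto-mode constants. */
#define SK_AUTO_MODE_NONE	0
#define SK_AUTO_MODE_LAST	4	/* highest defined auto mode */

/* A mode is a valid autoneg mode only when it lies strictly above
 * NONE and at or below the last defined mode. */
#define SK_AUTO_MODE_VALID(mode) \
	((mode) > SK_AUTO_MODE_NONE && (mode) <= SK_AUTO_MODE_LAST)
```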
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
index edbb42efb70b..db3a19bbf75a 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
@@ -101,6 +101,17 @@ static int bnge_alloc_ring_stats(struct bnge_net *bn)
return rc;
}
+void __bnge_queue_sp_work(struct bnge_net *bn)
+{
+ queue_work(bn->bnge_pf_wq, &bn->sp_task);
+}
+
+static void bnge_queue_sp_work(struct bnge_net *bn, unsigned int event)
+{
+ set_bit(event, &bn->sp_event);
+ __bnge_queue_sp_work(bn);
+}
+
static void bnge_timer(struct timer_list *t)
{
struct bnge_net *bn = timer_container_of(bn, t, timer);
@@ -110,7 +121,14 @@ static void bnge_timer(struct timer_list *t)
!test_bit(BNGE_STATE_OPEN, &bd->state))
return;
- /* Periodic work added by later patches */
+ if (bd->link_info.phy_retry) {
+ if (time_after(jiffies, bd->link_info.phy_retry_expires)) {
+ bd->link_info.phy_retry = false;
+ netdev_warn(bn->netdev, "failed to update PHY settings after maximum retries.\n");
+ } else {
+ bnge_queue_sp_work(bn, BNGE_UPDATE_PHY_SP_EVENT);
+ }
+ }
mod_timer(&bn->timer, jiffies + bn->current_interval);
}
@@ -126,7 +144,17 @@ static void bnge_sp_task(struct work_struct *work)
return;
}
- /* Event handling work added by later patches */
+ if (test_and_clear_bit(BNGE_UPDATE_PHY_SP_EVENT, &bn->sp_event)) {
+ int rc;
+
+ rc = bnge_update_phy_setting(bn);
+ if (rc) {
+ netdev_warn(bn->netdev, "update PHY settings retry failed\n");
+ } else {
+ bd->link_info.phy_retry = false;
+ netdev_info(bn->netdev, "update PHY settings retry succeeded\n");
+ }
+ }
netdev_unlock(bn->netdev);
}
@@ -2496,6 +2524,8 @@ static void bnge_tx_enable(struct bnge_net *bn)
/* Make sure napi polls see @dev_state change */
synchronize_net();
netif_tx_wake_all_queues(bn->netdev);
+ if (BNGE_LINK_IS_UP(bn->bd))
+ netif_carrier_on(bn->netdev);
}
static int bnge_open_core(struct bnge_net *bn)
@@ -2532,6 +2562,14 @@ static int bnge_open_core(struct bnge_net *bn)
bnge_enable_napi(bn);
+ rc = bnge_update_phy_setting(bn);
+ if (rc) {
+ netdev_warn(bn->netdev, "failed to update PHY settings (rc: %d)\n",
+ rc);
+ bd->link_info.phy_retry = true;
+ bd->link_info.phy_retry_expires = jiffies + 5 * HZ;
+ }
+
set_bit(BNGE_STATE_OPEN, &bd->state);
bnge_enable_int(bn);
@@ -2540,6 +2578,9 @@ static int bnge_open_core(struct bnge_net *bn)
mod_timer(&bn->timer, jiffies + bn->current_interval);
+ /* Poll link status and check for SFP+ module status */
+ bnge_get_port_module_status(bn);
+
return 0;
err_free_irq:
@@ -2591,6 +2632,7 @@ static int bnge_close(struct net_device *dev)
struct bnge_net *bn = netdev_priv(dev);
bnge_close_core(bn);
+ bnge_hwrm_shutdown_link(bn->bd);
return 0;
}
@@ -2848,6 +2890,10 @@ int bnge_netdev_alloc(struct bnge_dev *bd, int max_irqs)
bnge_init_l2_fltr_tbl(bn);
bnge_init_mac_addr(bd);
+ rc = bnge_probe_phy(bn, true);
+ if (rc)
+ goto err_free_workq;
+
netdev->request_ops_lock = true;
rc = register_netdev(netdev);
if (rc) {
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h
index d2ccee725454..5636eb371e24 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h
@@ -9,6 +9,7 @@
#include <linux/refcount.h>
#include "bnge_db.h"
#include "bnge_hw_def.h"
+#include "bnge_link.h"
struct tx_bd {
__le32 tx_bd_len_flags_type;
@@ -230,6 +231,13 @@ enum bnge_net_state {
#define BNGE_TIMER_INTERVAL HZ
+enum bnge_sp_event {
+ BNGE_LINK_CHNG_SP_EVENT,
+ BNGE_LINK_SPEED_CHNG_SP_EVENT,
+ BNGE_LINK_CFG_CHANGE_SP_EVENT,
+ BNGE_UPDATE_PHY_SP_EVENT,
+};
+
struct bnge_net {
struct bnge_dev *bd;
struct net_device *netdev;
@@ -298,6 +306,9 @@ struct bnge_net {
struct timer_list timer;
struct workqueue_struct *bnge_pf_wq;
struct work_struct sp_task;
+ unsigned long sp_event;
+
+ struct bnge_ethtool_link_info eth_link_info;
};
#define BNGE_DEFAULT_RX_RING_SIZE 511
@@ -576,4 +587,5 @@ u8 *__bnge_alloc_rx_frag(struct bnge_net *bn, dma_addr_t *mapping,
struct bnge_rx_ring_info *rxr, gfp_t gfp);
int bnge_alloc_rx_netmem(struct bnge_net *bn, struct bnge_rx_ring_info *rxr,
u16 prod, gfp_t gfp);
+void __bnge_queue_sp_work(struct bnge_net *bn);
#endif /* _BNGE_NETDEV_H_ */
--
2.47.3
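[The phy_retry path in bnge_timer() above compares jiffies against phy_retry_expires with time_after(), which stays correct across counter wraparound. A standalone sketch of that signed-difference trick (sk_time_after is an illustrative name; the kernel's time_after() in include/linux/jiffies.h uses the same idea on unsigned long):]

```c
#include <stdint.h>
#include <stdbool.h>

/* Wraparound-safe "a is after b": the unsigned subtraction wraps, and
 * reinterpreting the difference as signed recovers the ordering as
 * long as the two timestamps are less than half the range apart. */
static bool sk_time_after(uint32_t a, uint32_t b)
{
	return (int32_t)(b - a) < 0;
}
```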
* [PATCH net-next v6 03/10] bng_en: add ethtool link settings, get_link, and nway_reset
2026-03-10 14:40 [PATCH net-next v6 00/10] bng_en: add link management and statistics support Bhargava Marreddy
2026-03-10 14:40 ` [PATCH net-next v6 01/10] bng_en: add per-PF workqueue, timer, and slow-path task Bhargava Marreddy
2026-03-10 14:40 ` [PATCH net-next v6 02/10] bng_en: query PHY capabilities and report link status Bhargava Marreddy
@ 2026-03-10 14:40 ` Bhargava Marreddy
2026-03-10 14:40 ` [PATCH net-next v6 04/10] bng_en: implement ethtool pauseparam operations Bhargava Marreddy
` (6 subsequent siblings)
9 siblings, 0 replies; 16+ messages in thread
From: Bhargava Marreddy @ 2026-03-10 14:40 UTC (permalink / raw)
To: davem, edumazet, kuba, pabeni, andrew+netdev, horms
Cc: netdev, linux-kernel, michael.chan, pavan.chebbi,
vsrama-krishna.nemani, vikas.gupta, Bhargava Marreddy,
Rajashekar Hudumula, Ajit Kumar Khaparde
Add get/set_link_ksettings, get_link, and nway_reset support.
Report supported, advertised, and link-partner speeds across NRZ,
PAM4, and PAM4-112 signaling modes. Enable lane count reporting.
Signed-off-by: Bhargava Marreddy <bhargava.marreddy@broadcom.com>
Reviewed-by: Vikas Gupta <vikas.gupta@broadcom.com>
Reviewed-by: Rajashekar Hudumula <rajashekar.hudumula@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
drivers/net/ethernet/broadcom/bnge/bnge.h | 2 +
.../net/ethernet/broadcom/bnge/bnge_core.c | 1 +
.../net/ethernet/broadcom/bnge/bnge_ethtool.c | 24 +
.../net/ethernet/broadcom/bnge/bnge_link.c | 793 ++++++++++++++++++
.../net/ethernet/broadcom/bnge/bnge_link.h | 7 +
5 files changed, 827 insertions(+)
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge.h b/drivers/net/ethernet/broadcom/bnge/bnge.h
index 83ee4749cc70..e9b83c0c64b6 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge.h
+++ b/drivers/net/ethernet/broadcom/bnge/bnge.h
@@ -98,6 +98,8 @@ struct bnge_queue_info {
#define BNGE_PHY_FL_NO_FCS PORT_PHY_QCAPS_RESP_FLAGS_NO_FCS
#define BNGE_PHY_FL_SPEEDS2 \
(PORT_PHY_QCAPS_RESP_FLAGS2_SPEEDS2_SUPPORTED << 8)
+#define BNGE_PHY_FL_NO_PAUSE \
+ (PORT_PHY_QCAPS_RESP_FLAGS2_PAUSE_UNSUPPORTED << 8)
struct bnge_dev {
struct device *dev;
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_core.c b/drivers/net/ethernet/broadcom/bnge/bnge_core.c
index b4090283df0f..1c14c5fe8d61 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_core.c
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_core.c
@@ -10,6 +10,7 @@
#include "bnge_devlink.h"
#include "bnge_hwrm.h"
#include "bnge_hwrm_lib.h"
+#include "bnge_link.h"
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION(DRV_SUMMARY);
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_ethtool.c b/drivers/net/ethernet/broadcom/bnge/bnge_ethtool.c
index 569371c1b4f2..e4d3041db0af 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_ethtool.c
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_ethtool.c
@@ -11,6 +11,25 @@
#include "bnge.h"
#include "bnge_ethtool.h"
+#include "bnge_hwrm_lib.h"
+
+static int bnge_nway_reset(struct net_device *dev)
+{
+ struct bnge_net *bn = netdev_priv(dev);
+ struct bnge_dev *bd = bn->bd;
+ int rc = 0;
+
+ if (!BNGE_PHY_CFG_ABLE(bd))
+ return -EOPNOTSUPP;
+
+ if (!(bn->eth_link_info.autoneg & BNGE_AUTONEG_SPEED))
+ return -EINVAL;
+
+ if (netif_running(dev))
+ rc = bnge_hwrm_set_link_setting(bn, true);
+
+ return rc;
+}
static void bnge_get_drvinfo(struct net_device *dev,
struct ethtool_drvinfo *info)
@@ -24,7 +43,12 @@ static void bnge_get_drvinfo(struct net_device *dev,
}
static const struct ethtool_ops bnge_ethtool_ops = {
+ .cap_link_lanes_supported = 1,
+ .get_link_ksettings = bnge_get_link_ksettings,
+ .set_link_ksettings = bnge_set_link_ksettings,
.get_drvinfo = bnge_get_drvinfo,
+ .get_link = bnge_get_link,
+ .nway_reset = bnge_nway_reset,
};
void bnge_set_ethtool_ops(struct net_device *dev)
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_link.c b/drivers/net/ethernet/broadcom/bnge/bnge_link.c
index 0c76cbc794f7..e3d7ff9dfc83 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_link.c
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_link.c
@@ -7,6 +7,51 @@
#include "bnge_link.h"
#include "bnge_hwrm_lib.h"
+enum bnge_media_type {
+ BNGE_MEDIA_UNKNOWN = 0,
+ BNGE_MEDIA_CR,
+ BNGE_MEDIA_SR,
+ BNGE_MEDIA_LR_ER_FR,
+ BNGE_MEDIA_KR,
+ __BNGE_MEDIA_END,
+};
+
+static const enum bnge_media_type bnge_phy_types[] = {
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_100G_BASECR4] = BNGE_MEDIA_CR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_100G_BASESR4] = BNGE_MEDIA_SR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_100G_BASELR4] = BNGE_MEDIA_LR_ER_FR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_100G_BASEER4] = BNGE_MEDIA_LR_ER_FR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_100G_BASESR10] = BNGE_MEDIA_SR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_200G_BASECR4] = BNGE_MEDIA_CR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_200G_BASESR4] = BNGE_MEDIA_SR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_200G_BASELR4] = BNGE_MEDIA_LR_ER_FR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_200G_BASEER4] = BNGE_MEDIA_LR_ER_FR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_50G_BASECR] = BNGE_MEDIA_CR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_50G_BASESR] = BNGE_MEDIA_SR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_50G_BASELR] = BNGE_MEDIA_LR_ER_FR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_50G_BASEER] = BNGE_MEDIA_LR_ER_FR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_100G_BASECR2] = BNGE_MEDIA_CR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_100G_BASESR2] = BNGE_MEDIA_SR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_100G_BASELR2] = BNGE_MEDIA_LR_ER_FR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_100G_BASEER2] = BNGE_MEDIA_LR_ER_FR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_100G_BASECR] = BNGE_MEDIA_CR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_100G_BASESR] = BNGE_MEDIA_SR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_100G_BASELR] = BNGE_MEDIA_LR_ER_FR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_100G_BASEER] = BNGE_MEDIA_LR_ER_FR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_200G_BASECR2] = BNGE_MEDIA_CR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_200G_BASESR2] = BNGE_MEDIA_SR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_200G_BASELR2] = BNGE_MEDIA_LR_ER_FR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_200G_BASEER2] = BNGE_MEDIA_LR_ER_FR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_400G_BASECR8] = BNGE_MEDIA_CR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_400G_BASESR8] = BNGE_MEDIA_SR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_400G_BASELR8] = BNGE_MEDIA_LR_ER_FR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_400G_BASEER8] = BNGE_MEDIA_LR_ER_FR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_400G_BASECR4] = BNGE_MEDIA_CR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_400G_BASESR4] = BNGE_MEDIA_SR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_400G_BASELR4] = BNGE_MEDIA_LR_ER_FR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_400G_BASEER4] = BNGE_MEDIA_LR_ER_FR,
+};
+
static u32 bnge_fw_to_ethtool_speed(u16 fw_link_speed)
{
switch (fw_link_speed) {
@@ -458,3 +503,751 @@ void bnge_report_link(struct bnge_dev *bd)
netdev_err(bd->netdev, "NIC Link is Down\n");
}
}
+
+static void bnge_get_ethtool_modes(struct bnge_net *bn,
+ struct ethtool_link_ksettings *lk_ksettings)
+{
+ struct bnge_ethtool_link_info *elink_info;
+ struct bnge_link_info *link_info;
+ struct bnge_dev *bd = bn->bd;
+
+ elink_info = &bn->eth_link_info;
+ link_info = &bd->link_info;
+
+ if (!(bd->phy_flags & BNGE_PHY_FL_NO_PAUSE)) {
+ linkmode_set_bit(ETHTOOL_LINK_MODE_Pause_BIT,
+ lk_ksettings->link_modes.supported);
+ linkmode_set_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,
+ lk_ksettings->link_modes.supported);
+ }
+
+ if (link_info->support_auto_speeds || link_info->support_auto_speeds2 ||
+ link_info->support_pam4_auto_speeds)
+ linkmode_set_bit(ETHTOOL_LINK_MODE_Autoneg_BIT,
+ lk_ksettings->link_modes.supported);
+
+ if (~elink_info->autoneg & BNGE_AUTONEG_FLOW_CTRL)
+ return;
+
+ if (link_info->auto_pause_setting & BNGE_LINK_PAUSE_RX)
+ linkmode_set_bit(ETHTOOL_LINK_MODE_Pause_BIT,
+ lk_ksettings->link_modes.advertising);
+ if (hweight8(link_info->auto_pause_setting & BNGE_LINK_PAUSE_BOTH) == 1)
+ linkmode_set_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,
+ lk_ksettings->link_modes.advertising);
+ if (link_info->lp_pause & BNGE_LINK_PAUSE_RX)
+ linkmode_set_bit(ETHTOOL_LINK_MODE_Pause_BIT,
+ lk_ksettings->link_modes.lp_advertising);
+ if (hweight8(link_info->lp_pause & BNGE_LINK_PAUSE_BOTH) == 1)
+ linkmode_set_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,
+ lk_ksettings->link_modes.lp_advertising);
+}
+
+u32 bnge_get_link(struct net_device *dev)
+{
+ struct bnge_net *bn = netdev_priv(dev);
+
+ return BNGE_LINK_IS_UP(bn->bd);
+}
+
+static enum bnge_media_type
+bnge_get_media(struct bnge_link_info *link_info)
+{
+ switch (link_info->media_type) {
+ case PORT_PHY_QCFG_RESP_MEDIA_TYPE_DAC:
+ return BNGE_MEDIA_CR;
+ default:
+ if (link_info->phy_type < ARRAY_SIZE(bnge_phy_types))
+ return bnge_phy_types[link_info->phy_type];
+ return BNGE_MEDIA_UNKNOWN;
+ }
+}
+
+enum bnge_link_speed_indices {
+ BNGE_LINK_SPEED_UNKNOWN = 0,
+ BNGE_LINK_SPEED_50GB_IDX,
+ BNGE_LINK_SPEED_100GB_IDX,
+ BNGE_LINK_SPEED_200GB_IDX,
+ BNGE_LINK_SPEED_400GB_IDX,
+ BNGE_LINK_SPEED_800GB_IDX,
+ __BNGE_LINK_SPEED_END
+};
+
+static enum bnge_link_speed_indices bnge_fw_speed_idx(u16 speed)
+{
+ switch (speed) {
+ case BNGE_LINK_SPEED_50GB:
+ case BNGE_LINK_SPEED_50GB_PAM4:
+ return BNGE_LINK_SPEED_50GB_IDX;
+ case BNGE_LINK_SPEED_100GB:
+ case BNGE_LINK_SPEED_100GB_PAM4:
+ case BNGE_LINK_SPEED_100GB_PAM4_112:
+ return BNGE_LINK_SPEED_100GB_IDX;
+ case BNGE_LINK_SPEED_200GB:
+ case BNGE_LINK_SPEED_200GB_PAM4:
+ case BNGE_LINK_SPEED_200GB_PAM4_112:
+ return BNGE_LINK_SPEED_200GB_IDX;
+ case BNGE_LINK_SPEED_400GB:
+ case BNGE_LINK_SPEED_400GB_PAM4:
+ case BNGE_LINK_SPEED_400GB_PAM4_112:
+ return BNGE_LINK_SPEED_400GB_IDX;
+ case BNGE_LINK_SPEED_800GB:
+ case BNGE_LINK_SPEED_800GB_PAM4_112:
+ return BNGE_LINK_SPEED_800GB_IDX;
+ default:
+ return BNGE_LINK_SPEED_UNKNOWN;
+ }
+}
+
+/* Compile-time link mode mapping table.
+ * Indexed by [speed_idx][sig_mode][media].
+ */
+#define BNGE_LINK_M(speed, sig, media, lm) \
+ [BNGE_LINK_SPEED_##speed##_IDX] \
+ [BNGE_SIG_MODE_##sig] \
+ [BNGE_MEDIA_##media] = ETHTOOL_LINK_MODE_##lm##_Full_BIT
+
+static const enum ethtool_link_mode_bit_indices
+bnge_link_modes[__BNGE_LINK_SPEED_END]
+ [BNGE_SIG_MODE_MAX]
+ [__BNGE_MEDIA_END] = {
+ /* 50GB PAM4 */
+ BNGE_LINK_M(50GB, PAM4, CR, 50000baseCR),
+ BNGE_LINK_M(50GB, PAM4, SR, 50000baseSR),
+ BNGE_LINK_M(50GB, PAM4, LR_ER_FR, 50000baseLR_ER_FR),
+ BNGE_LINK_M(50GB, PAM4, KR, 50000baseKR),
+
+ /* 100GB NRZ */
+ BNGE_LINK_M(100GB, NRZ, CR, 100000baseCR4),
+ BNGE_LINK_M(100GB, NRZ, SR, 100000baseSR4),
+ BNGE_LINK_M(100GB, NRZ, LR_ER_FR, 100000baseLR4_ER4),
+ BNGE_LINK_M(100GB, NRZ, KR, 100000baseKR4),
+
+ /* 100GB PAM4 */
+ BNGE_LINK_M(100GB, PAM4, CR, 100000baseCR2),
+ BNGE_LINK_M(100GB, PAM4, SR, 100000baseSR2),
+ BNGE_LINK_M(100GB, PAM4, LR_ER_FR, 100000baseLR2_ER2_FR2),
+ BNGE_LINK_M(100GB, PAM4, KR, 100000baseKR2),
+
+ /* 100GB PAM4_112 */
+ BNGE_LINK_M(100GB, PAM4_112, CR, 100000baseCR),
+ BNGE_LINK_M(100GB, PAM4_112, SR, 100000baseSR),
+ BNGE_LINK_M(100GB, PAM4_112, LR_ER_FR, 100000baseLR_ER_FR),
+ BNGE_LINK_M(100GB, PAM4_112, KR, 100000baseKR),
+
+ /* 200GB PAM4 */
+ BNGE_LINK_M(200GB, PAM4, CR, 200000baseCR4),
+ BNGE_LINK_M(200GB, PAM4, SR, 200000baseSR4),
+ BNGE_LINK_M(200GB, PAM4, LR_ER_FR, 200000baseLR4_ER4_FR4),
+ BNGE_LINK_M(200GB, PAM4, KR, 200000baseKR4),
+
+ /* 200GB PAM4_112 */
+ BNGE_LINK_M(200GB, PAM4_112, CR, 200000baseCR2),
+ BNGE_LINK_M(200GB, PAM4_112, SR, 200000baseSR2),
+ BNGE_LINK_M(200GB, PAM4_112, LR_ER_FR, 200000baseLR2_ER2_FR2),
+ BNGE_LINK_M(200GB, PAM4_112, KR, 200000baseKR2),
+
+ /* 400GB PAM4 */
+ BNGE_LINK_M(400GB, PAM4, CR, 400000baseCR8),
+ BNGE_LINK_M(400GB, PAM4, SR, 400000baseSR8),
+ BNGE_LINK_M(400GB, PAM4, LR_ER_FR, 400000baseLR8_ER8_FR8),
+ BNGE_LINK_M(400GB, PAM4, KR, 400000baseKR8),
+
+ /* 400GB PAM4_112 */
+ BNGE_LINK_M(400GB, PAM4_112, CR, 400000baseCR4),
+ BNGE_LINK_M(400GB, PAM4_112, SR, 400000baseSR4),
+ BNGE_LINK_M(400GB, PAM4_112, LR_ER_FR, 400000baseLR4_ER4_FR4),
+ BNGE_LINK_M(400GB, PAM4_112, KR, 400000baseKR4),
+
+ /* 800GB PAM4_112 */
+ BNGE_LINK_M(800GB, PAM4_112, CR, 800000baseCR8),
+ BNGE_LINK_M(800GB, PAM4_112, SR, 800000baseSR8),
+ BNGE_LINK_M(800GB, PAM4_112, KR, 800000baseKR8),
+};
+
+#define BNGE_LINK_MODE_UNKNOWN -1
+
+static enum ethtool_link_mode_bit_indices
+bnge_get_link_mode(struct bnge_net *bn)
+{
+ enum ethtool_link_mode_bit_indices link_mode;
+ struct bnge_ethtool_link_info *elink_info;
+ enum bnge_link_speed_indices speed;
+ struct bnge_link_info *link_info;
+ struct bnge_dev *bd = bn->bd;
+ enum bnge_media_type media;
+ u8 sig_mode;
+
+ elink_info = &bn->eth_link_info;
+ link_info = &bd->link_info;
+
+ if (link_info->phy_link_status != BNGE_LINK_LINK)
+ return BNGE_LINK_MODE_UNKNOWN;
+
+ media = bnge_get_media(link_info);
+ if (BNGE_AUTO_MODE(link_info->auto_mode)) {
+ speed = bnge_fw_speed_idx(link_info->link_speed);
+ sig_mode = link_info->active_fec_sig_mode &
+ PORT_PHY_QCFG_RESP_SIGNAL_MODE_MASK;
+ } else {
+ speed = bnge_fw_speed_idx(elink_info->req_link_speed);
+ sig_mode = elink_info->req_signal_mode;
+ }
+ if (sig_mode >= BNGE_SIG_MODE_MAX)
+ return BNGE_LINK_MODE_UNKNOWN;
+
+ /* Since ETHTOOL_LINK_MODE_10baseT_Half_BIT is defined as 0 and
+ * not actually supported, the zeroes in this map can be safely
+ * used to represent unknown link modes.
+ */
+ link_mode = bnge_link_modes[speed][sig_mode][media];
+ if (!link_mode)
+ return BNGE_LINK_MODE_UNKNOWN;
+
+ return link_mode;
+}
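[The bnge_link_modes[] table plus bnge_get_link_mode() above rely on two C idioms: sparse designated initializers, and the fact that unset array entries are zero so 0 can double as "unknown". A miniature 2-D version of the same technique (the driver's table is 3-D; the sk_ enums and values here are illustrative only):]

```c
/* Sparse lookup table: only known (speed, media) pairs are filled in,
 * every other entry is implicitly 0, which we treat as "no mode". */
enum sk_speed { SK_SPD_UNKNOWN, SK_SPD_100G, SK_SPD_200G, __SK_SPD_END };
enum sk_media { SK_MED_UNKNOWN, SK_MED_CR, SK_MED_SR, __SK_MED_END };

static const int sk_mode_tbl[__SK_SPD_END][__SK_MED_END] = {
	[SK_SPD_100G][SK_MED_CR] = 101,
	[SK_SPD_100G][SK_MED_SR] = 102,
	[SK_SPD_200G][SK_MED_CR] = 201,
};

static int sk_get_mode(enum sk_speed s, enum sk_media m)
{
	int mode = sk_mode_tbl[s][m];

	return mode ? mode : -1;	/* 0 means no entry / unknown */
}
```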
+
+static const u16 bnge_nrz_speed_masks[__BNGE_LINK_SPEED_END] = {
+ [BNGE_LINK_SPEED_100GB_IDX] = BNGE_LINK_SPEED_MSK_100GB,
+};
+
+static const u16 bnge_pam4_speed_masks[__BNGE_LINK_SPEED_END] = {
+ [BNGE_LINK_SPEED_50GB_IDX] = BNGE_LINK_PAM4_SPEED_MSK_50GB,
+ [BNGE_LINK_SPEED_100GB_IDX] = BNGE_LINK_PAM4_SPEED_MSK_100GB,
+ [BNGE_LINK_SPEED_200GB_IDX] = BNGE_LINK_PAM4_SPEED_MSK_200GB,
+};
+
+static const u16 bnge_nrz_speeds2_masks[__BNGE_LINK_SPEED_END] = {
+ [BNGE_LINK_SPEED_100GB_IDX] = BNGE_LINK_SPEEDS2_MSK_100GB,
+};
+
+static const u16 bnge_pam4_speeds2_masks[__BNGE_LINK_SPEED_END] = {
+ [BNGE_LINK_SPEED_50GB_IDX] = BNGE_LINK_SPEEDS2_MSK_50GB_PAM4,
+ [BNGE_LINK_SPEED_100GB_IDX] = BNGE_LINK_SPEEDS2_MSK_100GB_PAM4,
+ [BNGE_LINK_SPEED_200GB_IDX] = BNGE_LINK_SPEEDS2_MSK_200GB_PAM4,
+ [BNGE_LINK_SPEED_400GB_IDX] = BNGE_LINK_SPEEDS2_MSK_400GB_PAM4,
+};
+
+static const u16 bnge_pam4_112_speeds2_masks[__BNGE_LINK_SPEED_END] = {
+ [BNGE_LINK_SPEED_100GB_IDX] = BNGE_LINK_SPEEDS2_MSK_100GB_PAM4_112,
+ [BNGE_LINK_SPEED_200GB_IDX] = BNGE_LINK_SPEEDS2_MSK_200GB_PAM4_112,
+ [BNGE_LINK_SPEED_400GB_IDX] = BNGE_LINK_SPEEDS2_MSK_400GB_PAM4_112,
+ [BNGE_LINK_SPEED_800GB_IDX] = BNGE_LINK_SPEEDS2_MSK_800GB_PAM4_112,
+};
+
+static enum bnge_link_speed_indices
+bnge_encoding_speed_idx(u8 sig_mode, u32 phy_flags, u16 speed_msk)
+{
+ const u16 *speeds;
+ int idx, len;
+
+ switch (sig_mode) {
+ case BNGE_SIG_MODE_NRZ:
+ if (phy_flags & BNGE_PHY_FL_SPEEDS2) {
+ speeds = bnge_nrz_speeds2_masks;
+ len = ARRAY_SIZE(bnge_nrz_speeds2_masks);
+ } else {
+ speeds = bnge_nrz_speed_masks;
+ len = ARRAY_SIZE(bnge_nrz_speed_masks);
+ }
+ break;
+ case BNGE_SIG_MODE_PAM4:
+ if (phy_flags & BNGE_PHY_FL_SPEEDS2) {
+ speeds = bnge_pam4_speeds2_masks;
+ len = ARRAY_SIZE(bnge_pam4_speeds2_masks);
+ } else {
+ speeds = bnge_pam4_speed_masks;
+ len = ARRAY_SIZE(bnge_pam4_speed_masks);
+ }
+ break;
+ case BNGE_SIG_MODE_PAM4_112:
+ speeds = bnge_pam4_112_speeds2_masks;
+ len = ARRAY_SIZE(bnge_pam4_112_speeds2_masks);
+ break;
+ default:
+ return BNGE_LINK_SPEED_UNKNOWN;
+ }
+
+ for (idx = 0; idx < len; idx++) {
+ if (speeds[idx] == speed_msk)
+ return idx;
+ }
+
+ return BNGE_LINK_SPEED_UNKNOWN;
+}
+
+#define BNGE_FW_SPEED_MSK_BITS 16
+
+static void
+__bnge_get_ethtool_speeds(unsigned long fw_mask, enum bnge_media_type media,
+ u8 sig_mode, u32 phy_flags, unsigned long *et_mask)
+{
+ enum ethtool_link_mode_bit_indices link_mode;
+ enum bnge_link_speed_indices speed;
+ u8 bit;
+
+ for_each_set_bit(bit, &fw_mask, BNGE_FW_SPEED_MSK_BITS) {
+ speed = bnge_encoding_speed_idx(sig_mode, phy_flags, 1 << bit);
+ if (!speed)
+ continue;
+
+ link_mode = bnge_link_modes[speed][sig_mode][media];
+ if (!link_mode)
+ continue;
+
+ linkmode_set_bit(link_mode, et_mask);
+ }
+}
+
+static void
+bnge_get_ethtool_speeds(unsigned long fw_mask, enum bnge_media_type media,
+ u8 sig_mode, u32 phy_flags, unsigned long *et_mask)
+{
+ if (media) {
+ __bnge_get_ethtool_speeds(fw_mask, media, sig_mode, phy_flags,
+ et_mask);
+ return;
+ }
+
+ /* list speeds for all media if unknown */
+ for (media = 1; media < __BNGE_MEDIA_END; media++)
+ __bnge_get_ethtool_speeds(fw_mask, media, sig_mode, phy_flags,
+ et_mask);
+}
+
+static void
+bnge_get_all_ethtool_support_speeds(struct bnge_dev *bd,
+ enum bnge_media_type media,
+ struct ethtool_link_ksettings *lk_ksettings)
+{
+ struct bnge_link_info *link_info = &bd->link_info;
+ u16 sp_nrz, sp_pam4, sp_pam4_112 = 0;
+ u32 phy_flags = bd->phy_flags;
+
+ if (phy_flags & BNGE_PHY_FL_SPEEDS2) {
+ sp_nrz = link_info->support_speeds2;
+ sp_pam4 = link_info->support_speeds2;
+ sp_pam4_112 = link_info->support_speeds2;
+ } else {
+ sp_nrz = link_info->support_speeds;
+ sp_pam4 = link_info->support_pam4_speeds;
+ }
+ bnge_get_ethtool_speeds(sp_nrz, media, BNGE_SIG_MODE_NRZ, phy_flags,
+ lk_ksettings->link_modes.supported);
+ bnge_get_ethtool_speeds(sp_pam4, media, BNGE_SIG_MODE_PAM4, phy_flags,
+ lk_ksettings->link_modes.supported);
+ bnge_get_ethtool_speeds(sp_pam4_112, media, BNGE_SIG_MODE_PAM4_112,
+ phy_flags, lk_ksettings->link_modes.supported);
+}
+
+static void
+bnge_get_all_ethtool_adv_speeds(struct bnge_net *bn,
+ enum bnge_media_type media,
+ struct ethtool_link_ksettings *lk_ksettings)
+{
+ struct bnge_ethtool_link_info *elink_info = &bn->eth_link_info;
+ u16 sp_nrz, sp_pam4, sp_pam4_112 = 0;
+ struct bnge_dev *bd = bn->bd;
+ u32 phy_flags;
+
+ phy_flags = bd->phy_flags;
+ sp_nrz = elink_info->advertising;
+
+ if (phy_flags & BNGE_PHY_FL_SPEEDS2) {
+ sp_pam4 = elink_info->advertising;
+ sp_pam4_112 = elink_info->advertising;
+ } else {
+ sp_pam4 = elink_info->advertising_pam4;
+ }
+ bnge_get_ethtool_speeds(sp_nrz, media, BNGE_SIG_MODE_NRZ, phy_flags,
+ lk_ksettings->link_modes.advertising);
+ bnge_get_ethtool_speeds(sp_pam4, media, BNGE_SIG_MODE_PAM4, phy_flags,
+ lk_ksettings->link_modes.advertising);
+ bnge_get_ethtool_speeds(sp_pam4_112, media, BNGE_SIG_MODE_PAM4_112,
+ phy_flags,
+ lk_ksettings->link_modes.advertising);
+}
+
+static void
+bnge_get_all_ethtool_lp_speeds(struct bnge_dev *bd,
+ enum bnge_media_type media,
+ struct ethtool_link_ksettings *lk_ksettings)
+{
+ struct bnge_link_info *link_info = &bd->link_info;
+ u32 phy_flags = bd->phy_flags;
+
+ bnge_get_ethtool_speeds(link_info->lp_auto_link_speeds, media,
+ BNGE_SIG_MODE_NRZ, phy_flags,
+ lk_ksettings->link_modes.lp_advertising);
+ bnge_get_ethtool_speeds(link_info->lp_auto_pam4_link_speeds, media,
+ BNGE_SIG_MODE_PAM4, phy_flags,
+ lk_ksettings->link_modes.lp_advertising);
+}
+
+static void bnge_update_speed(u32 *delta, bool installed_media, u16 *speeds,
+ u16 speed_msk, const unsigned long *et_mask,
+ enum ethtool_link_mode_bit_indices mode)
+{
+ bool mode_desired = linkmode_test_bit(mode, et_mask);
+
+ if (!mode)
+ return;
+
+ /* enabled speeds for installed media should override */
+ if (installed_media && mode_desired) {
+ *speeds |= speed_msk;
+ *delta |= speed_msk;
+ return;
+ }
+
+ /* many to one mapping, only allow one change per fw_speed bit */
+ if (!(*delta & speed_msk) && (mode_desired == !(*speeds & speed_msk))) {
+ *speeds ^= speed_msk;
+ *delta |= speed_msk;
+ }
+}
+
+static void bnge_set_ethtool_speeds(struct bnge_net *bn,
+ const unsigned long *et_mask)
+{
+ struct bnge_ethtool_link_info *elink_info = &bn->eth_link_info;
+ u16 const *sp_msks, *sp_pam4_msks, *sp_pam4_112_msks = NULL;
+ u16 *adv, *adv_pam4, *adv_pam4_112 = NULL;
+ struct bnge_link_info *link_info;
+ struct bnge_dev *bd = bn->bd;
+ enum bnge_media_type media;
+ u32 delta_pam4_112 = 0;
+ u32 delta_pam4 = 0;
+ u32 delta_nrz = 0;
+ int i, m;
+
+ link_info = &bd->link_info;
+ media = bnge_get_media(link_info);
+ adv = &elink_info->advertising;
+
+ if (bd->phy_flags & BNGE_PHY_FL_SPEEDS2) {
+ adv_pam4 = &elink_info->advertising;
+ adv_pam4_112 = &elink_info->advertising;
+ sp_msks = bnge_nrz_speeds2_masks;
+ sp_pam4_msks = bnge_pam4_speeds2_masks;
+ sp_pam4_112_msks = bnge_pam4_112_speeds2_masks;
+ } else {
+ adv_pam4 = &elink_info->advertising_pam4;
+ sp_msks = bnge_nrz_speed_masks;
+ sp_pam4_msks = bnge_pam4_speed_masks;
+ }
+ for (i = 1; i < __BNGE_LINK_SPEED_END; i++) {
+ /* accept any legal media from user */
+ for (m = 1; m < __BNGE_MEDIA_END; m++) {
+ bnge_update_speed(&delta_nrz, m == media,
+ adv, sp_msks[i], et_mask,
+ bnge_link_modes[i][BNGE_SIG_MODE_NRZ][m]);
+ bnge_update_speed(&delta_pam4, m == media,
+ adv_pam4, sp_pam4_msks[i], et_mask,
+ bnge_link_modes[i][BNGE_SIG_MODE_PAM4][m]);
+ if (!adv_pam4_112)
+ continue;
+
+ bnge_update_speed(&delta_pam4_112, m == media,
+ adv_pam4_112, sp_pam4_112_msks[i],
+ et_mask,
+ bnge_link_modes[i][BNGE_SIG_MODE_PAM4_112][m]);
+ }
+ }
+}
+
+static void
+bnge_fw_to_ethtool_advertised_fec(struct bnge_link_info *link_info,
+ struct ethtool_link_ksettings *lk_ksettings)
+{
+ u16 fec_cfg = link_info->fec_cfg;
+
+ if ((fec_cfg & BNGE_FEC_NONE) || !(fec_cfg & BNGE_FEC_AUTONEG)) {
+ linkmode_set_bit(ETHTOOL_LINK_MODE_FEC_NONE_BIT,
+ lk_ksettings->link_modes.advertising);
+ return;
+ }
+ if (fec_cfg & BNGE_FEC_ENC_BASE_R)
+ linkmode_set_bit(ETHTOOL_LINK_MODE_FEC_BASER_BIT,
+ lk_ksettings->link_modes.advertising);
+ if (fec_cfg & BNGE_FEC_ENC_RS)
+ linkmode_set_bit(ETHTOOL_LINK_MODE_FEC_RS_BIT,
+ lk_ksettings->link_modes.advertising);
+ if (fec_cfg & BNGE_FEC_ENC_LLRS)
+ linkmode_set_bit(ETHTOOL_LINK_MODE_FEC_LLRS_BIT,
+ lk_ksettings->link_modes.advertising);
+}
+
+static void
+bnge_fw_to_ethtool_support_fec(struct bnge_link_info *link_info,
+ struct ethtool_link_ksettings *lk_ksettings)
+{
+ u16 fec_cfg = link_info->fec_cfg;
+
+ if (fec_cfg & BNGE_FEC_NONE) {
+ linkmode_set_bit(ETHTOOL_LINK_MODE_FEC_NONE_BIT,
+ lk_ksettings->link_modes.supported);
+ return;
+ }
+ if (fec_cfg & BNGE_FEC_ENC_BASE_R_CAP)
+ linkmode_set_bit(ETHTOOL_LINK_MODE_FEC_BASER_BIT,
+ lk_ksettings->link_modes.supported);
+ if (fec_cfg & BNGE_FEC_ENC_RS_CAP)
+ linkmode_set_bit(ETHTOOL_LINK_MODE_FEC_RS_BIT,
+ lk_ksettings->link_modes.supported);
+ if (fec_cfg & BNGE_FEC_ENC_LLRS_CAP)
+ linkmode_set_bit(ETHTOOL_LINK_MODE_FEC_LLRS_BIT,
+ lk_ksettings->link_modes.supported);
+}
+
+static void bnge_get_default_speeds(struct bnge_net *bn,
+ struct ethtool_link_ksettings *lk_ksettings)
+{
+ struct bnge_ethtool_link_info *elink_info = &bn->eth_link_info;
+ struct ethtool_link_settings *base = &lk_ksettings->base;
+ struct bnge_link_info *link_info;
+ struct bnge_dev *bd = bn->bd;
+
+ link_info = &bd->link_info;
+
+ if (link_info->link_state == BNGE_LINK_STATE_UP) {
+ base->speed = bnge_fw_to_ethtool_speed(link_info->link_speed);
+ base->duplex = DUPLEX_HALF;
+ if (link_info->duplex & BNGE_LINK_DUPLEX_FULL)
+ base->duplex = DUPLEX_FULL;
+ lk_ksettings->lanes = link_info->active_lanes;
+ } else if (!elink_info->autoneg) {
+ base->speed =
+ bnge_fw_to_ethtool_speed(elink_info->req_link_speed);
+ base->duplex = DUPLEX_HALF;
+ if (elink_info->req_duplex == BNGE_LINK_DUPLEX_FULL)
+ base->duplex = DUPLEX_FULL;
+ }
+}
+
+int bnge_get_link_ksettings(struct net_device *dev,
+ struct ethtool_link_ksettings *lk_ksettings)
+{
+ struct ethtool_link_settings *base = &lk_ksettings->base;
+ enum ethtool_link_mode_bit_indices link_mode;
+ struct bnge_net *bn = netdev_priv(dev);
+ struct bnge_link_info *link_info;
+ struct bnge_dev *bd = bn->bd;
+ enum bnge_media_type media;
+
+ ethtool_link_ksettings_zero_link_mode(lk_ksettings, lp_advertising);
+ ethtool_link_ksettings_zero_link_mode(lk_ksettings, advertising);
+ ethtool_link_ksettings_zero_link_mode(lk_ksettings, supported);
+ base->duplex = DUPLEX_UNKNOWN;
+ base->speed = SPEED_UNKNOWN;
+ link_info = &bd->link_info;
+
+ bnge_get_ethtool_modes(bn, lk_ksettings);
+ media = bnge_get_media(link_info);
+ bnge_get_all_ethtool_support_speeds(bd, media, lk_ksettings);
+ bnge_fw_to_ethtool_support_fec(link_info, lk_ksettings);
+ link_mode = bnge_get_link_mode(bn);
+ if (link_mode != BNGE_LINK_MODE_UNKNOWN)
+ ethtool_params_from_link_mode(lk_ksettings, link_mode);
+ else
+ bnge_get_default_speeds(bn, lk_ksettings);
+
+ if (bn->eth_link_info.autoneg) {
+ bnge_fw_to_ethtool_advertised_fec(link_info, lk_ksettings);
+ linkmode_set_bit(ETHTOOL_LINK_MODE_Autoneg_BIT,
+ lk_ksettings->link_modes.advertising);
+ base->autoneg = AUTONEG_ENABLE;
+ bnge_get_all_ethtool_adv_speeds(bn, media, lk_ksettings);
+ if (link_info->phy_link_status == BNGE_LINK_LINK)
+ bnge_get_all_ethtool_lp_speeds(bd, media, lk_ksettings);
+ } else {
+ base->autoneg = AUTONEG_DISABLE;
+ }
+
+ linkmode_set_bit(ETHTOOL_LINK_MODE_FIBRE_BIT,
+ lk_ksettings->link_modes.supported);
+ linkmode_set_bit(ETHTOOL_LINK_MODE_FIBRE_BIT,
+ lk_ksettings->link_modes.advertising);
+
+ if (link_info->media_type == PORT_PHY_QCFG_RESP_MEDIA_TYPE_DAC)
+ base->port = PORT_DA;
+ else
+ base->port = PORT_FIBRE;
+ base->phy_address = link_info->phy_addr;
+
+ return 0;
+}
+
+static int
+bnge_force_link_speed(struct net_device *dev, u32 ethtool_speed, u32 lanes)
+{
+ u16 support_pam4_spds, support_spds2, support_spds;
+ struct bnge_ethtool_link_info *elink_info;
+ struct bnge_net *bn = netdev_priv(dev);
+ struct bnge_link_info *link_info;
+ u8 sig_mode = BNGE_SIG_MODE_NRZ;
+ struct bnge_dev *bd = bn->bd;
+ u32 lanes_needed = 1;
+ u16 fw_speed = 0;
+
+ elink_info = &bn->eth_link_info;
+ link_info = &bd->link_info;
+ support_pam4_spds = link_info->support_pam4_speeds;
+ support_spds2 = link_info->support_speeds2;
+ support_spds = link_info->support_speeds;
+
+ switch (ethtool_speed) {
+ case SPEED_50000:
+ if (((support_spds & BNGE_LINK_SPEED_MSK_50GB) ||
+ (support_spds2 & BNGE_LINK_SPEEDS2_MSK_50GB)) &&
+ lanes != 1) {
+ fw_speed = PORT_PHY_CFG_REQ_FORCE_LINK_SPEED_50GB;
+ lanes_needed = 2;
+ } else if (support_pam4_spds & BNGE_LINK_PAM4_SPEED_MSK_50GB) {
+ fw_speed = PORT_PHY_CFG_REQ_FORCE_PAM4_LINK_SPEED_50GB;
+ sig_mode = BNGE_SIG_MODE_PAM4;
+ } else if (support_spds2 & BNGE_LINK_SPEEDS2_MSK_50GB_PAM4) {
+ fw_speed = BNGE_LINK_SPEED_50GB_PAM4;
+ sig_mode = BNGE_SIG_MODE_PAM4;
+ }
+ break;
+ case SPEED_100000:
+ if (((support_spds & BNGE_LINK_SPEED_MSK_100GB) ||
+ (support_spds2 & BNGE_LINK_SPEEDS2_MSK_100GB)) &&
+ lanes != 2 && lanes != 1) {
+ fw_speed = PORT_PHY_CFG_REQ_FORCE_LINK_SPEED_100GB;
+ lanes_needed = 4;
+ } else if (support_pam4_spds & BNGE_LINK_PAM4_SPEED_MSK_100GB) {
+ fw_speed = PORT_PHY_CFG_REQ_FORCE_PAM4_LINK_SPEED_100GB;
+ sig_mode = BNGE_SIG_MODE_PAM4;
+ lanes_needed = 2;
+ } else if ((support_spds2 & BNGE_LINK_SPEEDS2_MSK_100GB_PAM4) &&
+ lanes != 1) {
+ fw_speed = BNGE_LINK_SPEED_100GB_PAM4;
+ sig_mode = BNGE_SIG_MODE_PAM4;
+ lanes_needed = 2;
+ } else if (support_spds2 &
+ BNGE_LINK_SPEEDS2_MSK_100GB_PAM4_112) {
+ fw_speed = BNGE_LINK_SPEED_100GB_PAM4_112;
+ sig_mode = BNGE_SIG_MODE_PAM4_112;
+ }
+ break;
+ case SPEED_200000:
+ if (support_pam4_spds & BNGE_LINK_PAM4_SPEED_MSK_200GB) {
+ fw_speed = PORT_PHY_CFG_REQ_FORCE_PAM4_LINK_SPEED_200GB;
+ sig_mode = BNGE_SIG_MODE_PAM4;
+ lanes_needed = 4;
+ } else if ((support_spds2 & BNGE_LINK_SPEEDS2_MSK_200GB_PAM4) &&
+ lanes != 2) {
+ fw_speed = BNGE_LINK_SPEED_200GB_PAM4;
+ sig_mode = BNGE_SIG_MODE_PAM4;
+ lanes_needed = 4;
+ } else if (support_spds2 &
+ BNGE_LINK_SPEEDS2_MSK_200GB_PAM4_112) {
+ fw_speed = BNGE_LINK_SPEED_200GB_PAM4_112;
+ sig_mode = BNGE_SIG_MODE_PAM4_112;
+ lanes_needed = 2;
+ }
+ break;
+ case SPEED_400000:
+ if ((support_spds2 & BNGE_LINK_SPEEDS2_MSK_400GB_PAM4) &&
+ lanes != 4) {
+ fw_speed = BNGE_LINK_SPEED_400GB_PAM4;
+ sig_mode = BNGE_SIG_MODE_PAM4;
+ lanes_needed = 8;
+ } else if (support_spds2 &
+ BNGE_LINK_SPEEDS2_MSK_400GB_PAM4_112) {
+ fw_speed = BNGE_LINK_SPEED_400GB_PAM4_112;
+ sig_mode = BNGE_SIG_MODE_PAM4_112;
+ lanes_needed = 4;
+ }
+ break;
+ case SPEED_800000:
+ if (support_spds2 & BNGE_LINK_SPEEDS2_MSK_800GB_PAM4_112) {
+ fw_speed = BNGE_LINK_SPEED_800GB_PAM4_112;
+ sig_mode = BNGE_SIG_MODE_PAM4_112;
+ lanes_needed = 8;
+ }
+ break;
+ default:
+ break;
+ }
+
+ if (!fw_speed) {
+ netdev_err(dev, "unsupported speed!\n");
+ return -EINVAL;
+ }
+
+ if (lanes && lanes != lanes_needed) {
+ netdev_err(dev, "unsupported number of lanes for speed\n");
+ return -EINVAL;
+ }
+
+ if (elink_info->req_link_speed == fw_speed &&
+ elink_info->req_signal_mode == sig_mode &&
+ elink_info->autoneg == 0)
+ return -EALREADY;
+
+ elink_info->req_link_speed = fw_speed;
+ elink_info->req_signal_mode = sig_mode;
+ elink_info->req_duplex = BNGE_LINK_DUPLEX_FULL;
+ elink_info->autoneg = 0;
+ elink_info->advertising = 0;
+ elink_info->advertising_pam4 = 0;
+
+ return 0;
+}
+
+int bnge_set_link_ksettings(struct net_device *dev,
+ const struct ethtool_link_ksettings *lk_ksettings)
+{
+ const struct ethtool_link_settings *base = &lk_ksettings->base;
+ struct bnge_ethtool_link_info *elink_info;
+ struct bnge_net *bn = netdev_priv(dev);
+ struct bnge_link_info *link_info;
+ struct bnge_dev *bd = bn->bd;
+ bool set_pause = false;
+ int rc = 0;
+
+ elink_info = &bn->eth_link_info;
+ link_info = &bd->link_info;
+
+ if (!BNGE_PHY_CFG_ABLE(bd))
+ return -EOPNOTSUPP;
+
+ if (base->autoneg == AUTONEG_ENABLE) {
+ bnge_set_ethtool_speeds(bn,
+ lk_ksettings->link_modes.advertising);
+ elink_info->autoneg |= BNGE_AUTONEG_SPEED;
+ if (!elink_info->advertising && !elink_info->advertising_pam4) {
+ elink_info->advertising =
+ link_info->support_auto_speeds;
+ elink_info->advertising_pam4 =
+ link_info->support_pam4_auto_speeds;
+ }
+ /* any change to autoneg will cause link change, therefore the
+ * driver should put back the original pause setting in autoneg
+ */
+ if (!(bd->phy_flags & BNGE_PHY_FL_NO_PAUSE))
+ set_pause = true;
+ } else {
+ if (base->duplex == DUPLEX_HALF) {
+ netdev_err(dev, "HALF DUPLEX is not supported!\n");
+ rc = -EINVAL;
+ goto set_setting_exit;
+ }
+ rc = bnge_force_link_speed(dev, base->speed,
+ lk_ksettings->lanes);
+ if (rc) {
+ if (rc == -EALREADY)
+ rc = 0;
+ goto set_setting_exit;
+ }
+ }
+
+ if (netif_running(dev))
+ rc = bnge_hwrm_set_link_setting(bn, set_pause);
+
+set_setting_exit:
+ return rc;
+}
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_link.h b/drivers/net/ethernet/broadcom/bnge/bnge_link.h
index f22dec2899e3..70d0901b9a32 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_link.h
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_link.h
@@ -4,6 +4,8 @@
#ifndef _BNGE_LINK_H_
#define _BNGE_LINK_H_
+#include <linux/ethtool.h>
+
#define BNGE_PHY_CFG_ABLE(bd) \
((bd)->link_info.phy_enabled)
@@ -190,4 +192,9 @@ void bnge_report_link(struct bnge_dev *bd);
bool bnge_support_speed_dropped(struct bnge_net *bn);
void bnge_init_ethtool_link_settings(struct bnge_net *bn);
int bnge_probe_phy(struct bnge_net *bn, bool fw_dflt);
+int bnge_set_link_ksettings(struct net_device *dev,
+ const struct ethtool_link_ksettings *lk_ksettings);
+int bnge_get_link_ksettings(struct net_device *dev,
+ struct ethtool_link_ksettings *lk_ksettings);
+u32 bnge_get_link(struct net_device *dev);
#endif /* _BNGE_LINK_H_ */
--
2.47.3
^ permalink raw reply related [flat|nested] 16+ messages in thread
* [PATCH net-next v6 04/10] bng_en: implement ethtool pauseparam operations
2026-03-10 14:40 [PATCH net-next v6 00/10] bng_en: add link management and statistics support Bhargava Marreddy
` (2 preceding siblings ...)
2026-03-10 14:40 ` [PATCH net-next v6 03/10] bng_en: add ethtool link settings, get_link, and nway_reset Bhargava Marreddy
@ 2026-03-10 14:40 ` Bhargava Marreddy
2026-03-10 14:40 ` [PATCH net-next v6 05/10] bng_en: add support for link async events Bhargava Marreddy
` (5 subsequent siblings)
9 siblings, 0 replies; 16+ messages in thread
From: Bhargava Marreddy @ 2026-03-10 14:40 UTC (permalink / raw)
To: davem, edumazet, kuba, pabeni, andrew+netdev, horms
Cc: netdev, linux-kernel, michael.chan, pavan.chebbi,
vsrama-krishna.nemani, vikas.gupta, Bhargava Marreddy
Implement .get_pauseparam and .set_pauseparam to support flow control
configuration. This allows reporting and setting of autoneg, RX pause,
and TX pause states.
Signed-off-by: Bhargava Marreddy <bhargava.marreddy@broadcom.com>
Reviewed-by: Vikas Gupta <vikas.gupta@broadcom.com>
---
.../net/ethernet/broadcom/bnge/bnge_ethtool.c | 58 +++++++++++++++++++
1 file changed, 58 insertions(+)
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_ethtool.c b/drivers/net/ethernet/broadcom/bnge/bnge_ethtool.c
index e4d3041db0af..01382ad784e3 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_ethtool.c
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_ethtool.c
@@ -42,6 +42,62 @@ static void bnge_get_drvinfo(struct net_device *dev,
strscpy(info->bus_info, pci_name(bd->pdev), sizeof(info->bus_info));
}
+static void bnge_get_pauseparam(struct net_device *dev,
+ struct ethtool_pauseparam *epause)
+{
+ struct bnge_net *bn = netdev_priv(dev);
+ struct bnge_dev *bd = bn->bd;
+
+ if (bd->phy_flags & BNGE_PHY_FL_NO_PAUSE) {
+ epause->autoneg = 0;
+ epause->rx_pause = 0;
+ epause->tx_pause = 0;
+ return;
+ }
+
+ epause->autoneg = !!(bn->eth_link_info.autoneg &
+ BNGE_AUTONEG_FLOW_CTRL);
+ epause->rx_pause = !!(bn->eth_link_info.req_flow_ctrl &
+ BNGE_LINK_PAUSE_RX);
+ epause->tx_pause = !!(bn->eth_link_info.req_flow_ctrl &
+ BNGE_LINK_PAUSE_TX);
+}
+
+static int bnge_set_pauseparam(struct net_device *dev,
+ struct ethtool_pauseparam *epause)
+{
+ struct bnge_ethtool_link_info *elink_info;
+ struct bnge_net *bn = netdev_priv(dev);
+ struct bnge_dev *bd = bn->bd;
+
+ if (!BNGE_PHY_CFG_ABLE(bd) || (bd->phy_flags & BNGE_PHY_FL_NO_PAUSE))
+ return -EOPNOTSUPP;
+
+ elink_info = &bn->eth_link_info;
+
+ if (epause->autoneg) {
+ if (!(elink_info->autoneg & BNGE_AUTONEG_SPEED))
+ return -EINVAL;
+
+ elink_info->autoneg |= BNGE_AUTONEG_FLOW_CTRL;
+ } else {
+ if (elink_info->autoneg & BNGE_AUTONEG_FLOW_CTRL)
+ elink_info->force_link_chng = true;
+ elink_info->autoneg &= ~BNGE_AUTONEG_FLOW_CTRL;
+ }
+
+ elink_info->req_flow_ctrl = 0;
+ if (epause->rx_pause)
+ elink_info->req_flow_ctrl |= BNGE_LINK_PAUSE_RX;
+ if (epause->tx_pause)
+ elink_info->req_flow_ctrl |= BNGE_LINK_PAUSE_TX;
+
+ if (netif_running(dev))
+ return bnge_hwrm_set_pause(bn);
+
+ return 0;
+}
+
static const struct ethtool_ops bnge_ethtool_ops = {
.cap_link_lanes_supported = 1,
.get_link_ksettings = bnge_get_link_ksettings,
@@ -49,6 +105,8 @@ static const struct ethtool_ops bnge_ethtool_ops = {
.get_drvinfo = bnge_get_drvinfo,
.get_link = bnge_get_link,
.nway_reset = bnge_nway_reset,
+ .get_pauseparam = bnge_get_pauseparam,
+ .set_pauseparam = bnge_set_pauseparam,
};
void bnge_set_ethtool_ops(struct net_device *dev)
--
2.47.3
* [PATCH net-next v6 05/10] bng_en: add support for link async events
2026-03-10 14:40 [PATCH net-next v6 00/10] bng_en: add link management and statistics support Bhargava Marreddy
` (3 preceding siblings ...)
2026-03-10 14:40 ` [PATCH net-next v6 04/10] bng_en: implement ethtool pauseparam operations Bhargava Marreddy
@ 2026-03-10 14:40 ` Bhargava Marreddy
2026-03-10 14:40 ` [PATCH net-next v6 06/10] bng_en: add HW stats infra and structured ethtool ops Bhargava Marreddy
` (4 subsequent siblings)
9 siblings, 0 replies; 16+ messages in thread
From: Bhargava Marreddy @ 2026-03-10 14:40 UTC (permalink / raw)
To: davem, edumazet, kuba, pabeni, andrew+netdev, horms
Cc: netdev, linux-kernel, michael.chan, pavan.chebbi,
vsrama-krishna.nemani, vikas.gupta, Bhargava Marreddy,
Rajashekar Hudumula, Ajit Kumar Khaparde
Register for firmware asynchronous events, including link-status,
link-speed, and PHY configuration changes. Upon event reception,
re-query the PHY and update ethtool settings accordingly.
Signed-off-by: Bhargava Marreddy <bhargava.marreddy@broadcom.com>
Reviewed-by: Vikas Gupta <vikas.gupta@broadcom.com>
Reviewed-by: Rajashekar Hudumula <rajashekar.hudumula@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
.../ethernet/broadcom/bnge/bnge_hwrm_lib.c | 17 ++++++-
.../net/ethernet/broadcom/bnge/bnge_link.c | 47 +++++++++++++++++++
.../net/ethernet/broadcom/bnge/bnge_link.h | 4 ++
.../net/ethernet/broadcom/bnge/bnge_netdev.c | 17 +++++++
.../net/ethernet/broadcom/bnge/bnge_txrx.c | 35 ++++++++++++--
5 files changed, 115 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.c b/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.c
index f7570eb89b85..040b73ee4097 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.c
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.c
@@ -15,6 +15,13 @@
#include "bnge_rmem.h"
#include "bnge_resc.h"
+static const u16 bnge_async_events_arr[] = {
+ ASYNC_EVENT_CMPL_EVENT_ID_LINK_STATUS_CHANGE,
+ ASYNC_EVENT_CMPL_EVENT_ID_LINK_SPEED_CHANGE,
+ ASYNC_EVENT_CMPL_EVENT_ID_LINK_SPEED_CFG_CHANGE,
+ ASYNC_EVENT_CMPL_EVENT_ID_PORT_PHY_CFG_CHANGE,
+};
+
int bnge_hwrm_ver_get(struct bnge_dev *bd)
{
u32 dev_caps_cfg, hwrm_ver, hwrm_spec_code;
@@ -166,10 +173,12 @@ int bnge_hwrm_fw_set_time(struct bnge_dev *bd)
int bnge_hwrm_func_drv_rgtr(struct bnge_dev *bd)
{
+ DECLARE_BITMAP(async_events_bmap, 256);
+ u32 *events = (u32 *)async_events_bmap;
struct hwrm_func_drv_rgtr_output *resp;
struct hwrm_func_drv_rgtr_input *req;
u32 flags;
- int rc;
+ int rc, i;
rc = bnge_hwrm_req_init(bd, req, HWRM_FUNC_DRV_RGTR);
if (rc)
@@ -190,6 +199,12 @@ int bnge_hwrm_func_drv_rgtr(struct bnge_dev *bd)
req->ver_min = cpu_to_le16(DRV_VER_MIN);
req->ver_upd = cpu_to_le16(DRV_VER_UPD);
+ memset(async_events_bmap, 0, sizeof(async_events_bmap));
+ for (i = 0; i < ARRAY_SIZE(bnge_async_events_arr); i++)
+ __set_bit(bnge_async_events_arr[i], async_events_bmap);
+ for (i = 0; i < ARRAY_SIZE(req->async_event_fwd); i++)
+ req->async_event_fwd[i] |= cpu_to_le32(events[i]);
+
resp = bnge_hwrm_req_hold(bd, req);
rc = bnge_hwrm_req_send(bd, req);
if (!rc) {
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_link.c b/drivers/net/ethernet/broadcom/bnge/bnge_link.c
index e3d7ff9dfc83..a032696b0777 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_link.c
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_link.c
@@ -94,6 +94,21 @@ static void bnge_set_auto_speed(struct bnge_net *bn)
elink_info->advertising_pam4 = link_info->auto_pam4_link_speeds;
}
+static u16 bnge_get_force_speed(struct bnge_net *bn)
+{
+ struct bnge_ethtool_link_info *elink_info = &bn->eth_link_info;
+ struct bnge_link_info *link_info;
+ struct bnge_dev *bd = bn->bd;
+
+ link_info = &bd->link_info;
+
+ if (bd->phy_flags & BNGE_PHY_FL_SPEEDS2)
+ return link_info->force_link_speed2;
+ if (elink_info->req_signal_mode == BNGE_SIG_MODE_PAM4)
+ return link_info->force_pam4_link_speed;
+ return link_info->force_link_speed;
+}
+
static void bnge_set_force_speed(struct bnge_net *bn)
{
struct bnge_ethtool_link_info *elink_info = &bn->eth_link_info;
@@ -1251,3 +1266,35 @@ int bnge_set_link_ksettings(struct net_device *dev,
set_setting_exit:
return rc;
}
+
+void bnge_link_async_event_process(struct bnge_net *bn,
+ u16 event_id, u32 evt_data)
+{
+ switch (event_id) {
+ case ASYNC_EVENT_CMPL_EVENT_ID_LINK_SPEED_CFG_CHANGE: {
+ struct bnge_ethtool_link_info *elink_info = &bn->eth_link_info;
+
+ /* print unsupported speed warning in forced speed mode only */
+ if (!(elink_info->autoneg & BNGE_AUTONEG_SPEED) &&
+ (evt_data & BNGE_SPEED_CFG_CHANGE_SP_NOT_SUPP)) {
+ u16 fw_speed = bnge_get_force_speed(bn);
+ u32 speed = bnge_fw_to_ethtool_speed(fw_speed);
+
+ if (speed != SPEED_UNKNOWN)
+ netdev_warn(bn->netdev, "Link speed %d no longer supported\n",
+ speed);
+ }
+ set_bit(BNGE_LINK_SPEED_CHNG_SP_EVENT, &bn->sp_event);
+ fallthrough;
+ }
+ case ASYNC_EVENT_CMPL_EVENT_ID_LINK_SPEED_CHANGE:
+ case ASYNC_EVENT_CMPL_EVENT_ID_PORT_PHY_CFG_CHANGE:
+ set_bit(BNGE_LINK_CFG_CHANGE_SP_EVENT, &bn->sp_event);
+ fallthrough;
+ case ASYNC_EVENT_CMPL_EVENT_ID_LINK_STATUS_CHANGE:
+ set_bit(BNGE_LINK_CHNG_SP_EVENT, &bn->sp_event);
+ break;
+ default:
+ break;
+ }
+}
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_link.h b/drivers/net/ethernet/broadcom/bnge/bnge_link.h
index 70d0901b9a32..cadc26cfbcf5 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_link.h
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_link.h
@@ -197,4 +197,8 @@ int bnge_set_link_ksettings(struct net_device *dev,
int bnge_get_link_ksettings(struct net_device *dev,
struct ethtool_link_ksettings *lk_ksettings);
u32 bnge_get_link(struct net_device *dev);
+#define BNGE_SPEED_CFG_CHANGE_SP_NOT_SUPP BIT(17)
+
+void bnge_link_async_event_process(struct bnge_net *bn,
+ u16 event_id, u32 evt_data);
#endif /* _BNGE_LINK_H_ */
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
index db3a19bbf75a..f51d5eee0f6c 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
@@ -156,6 +156,23 @@ static void bnge_sp_task(struct work_struct *work)
}
}
+ if (test_and_clear_bit(BNGE_LINK_CHNG_SP_EVENT, &bn->sp_event)) {
+ int rc;
+
+ if (test_and_clear_bit(BNGE_LINK_SPEED_CHNG_SP_EVENT,
+ &bn->sp_event))
+ bnge_hwrm_phy_qcaps(bd);
+
+ rc = bnge_update_link(bn, true);
+ if (rc)
+ netdev_err(bn->netdev, "SP task cannot update link (rc: %d)\n",
+ rc);
+
+ if (test_and_clear_bit(BNGE_LINK_CFG_CHANGE_SP_EVENT,
+ &bn->sp_event))
+ bnge_init_ethtool_link_settings(bn);
+ }
+
netdev_unlock(bn->netdev);
}
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_txrx.c b/drivers/net/ethernet/broadcom/bnge/bnge_txrx.c
index a2616f037557..88f9cf1db308 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_txrx.c
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_txrx.c
@@ -1128,6 +1128,29 @@ static void __bnge_poll_work_done(struct bnge_net *bn, struct bnge_napi *bnapi,
}
}
+static void bnge_async_event_process(struct bnge_net *bn,
+ struct hwrm_async_event_cmpl *cmpl)
+{
+ u16 event_id = le16_to_cpu(cmpl->event_id);
+ u32 data1 = le32_to_cpu(cmpl->event_data1);
+ u32 data2 = le32_to_cpu(cmpl->event_data2);
+
+ netdev_dbg(bn->netdev, "hwrm event 0x%x {0x%x, 0x%x}\n",
+ event_id, data1, data2);
+
+ switch (event_id) {
+ case ASYNC_EVENT_CMPL_EVENT_ID_LINK_SPEED_CFG_CHANGE:
+ case ASYNC_EVENT_CMPL_EVENT_ID_LINK_SPEED_CHANGE:
+ case ASYNC_EVENT_CMPL_EVENT_ID_PORT_PHY_CFG_CHANGE:
+ case ASYNC_EVENT_CMPL_EVENT_ID_LINK_STATUS_CHANGE:
+ bnge_link_async_event_process(bn, event_id, data1);
+ break;
+ default:
+ return;
+ }
+ __bnge_queue_sp_work(bn);
+}
+
static void
bnge_hwrm_update_token(struct bnge_dev *bd, u16 seq_id,
enum bnge_hwrm_wait_state state)
@@ -1146,7 +1169,7 @@ bnge_hwrm_update_token(struct bnge_dev *bd, u16 seq_id,
dev_err(bd->dev, "Invalid hwrm seq id %d\n", seq_id);
}
-static int bnge_hwrm_handler(struct bnge_dev *bd, struct tx_cmp *txcmp)
+static int bnge_hwrm_handler(struct bnge_net *bn, struct tx_cmp *txcmp)
{
struct hwrm_cmpl *h_cmpl = (struct hwrm_cmpl *)txcmp;
u16 cmpl_type = TX_CMP_TYPE(txcmp), seq_id;
@@ -1154,10 +1177,14 @@ static int bnge_hwrm_handler(struct bnge_dev *bd, struct tx_cmp *txcmp)
switch (cmpl_type) {
case CMPL_BASE_TYPE_HWRM_DONE:
seq_id = le16_to_cpu(h_cmpl->sequence_id);
- bnge_hwrm_update_token(bd, seq_id, BNGE_HWRM_COMPLETE);
+ bnge_hwrm_update_token(bn->bd, seq_id, BNGE_HWRM_COMPLETE);
break;
case CMPL_BASE_TYPE_HWRM_ASYNC_EVENT:
+ bnge_async_event_process(bn,
+ (struct hwrm_async_event_cmpl *)txcmp);
+ break;
+
default:
break;
}
@@ -1235,7 +1262,7 @@ static int __bnge_poll_work(struct bnge_net *bn, struct bnge_cp_ring_info *cpr,
} else if (unlikely(cmp_type == CMPL_BASE_TYPE_HWRM_DONE ||
cmp_type == CMPL_BASE_TYPE_HWRM_FWD_REQ ||
cmp_type == CMPL_BA_TY_HWRM_ASY_EVT)) {
- bnge_hwrm_handler(bn->bd, txcmp);
+ bnge_hwrm_handler(bn, txcmp);
}
raw_cons = NEXT_RAW_CMP(raw_cons);
@@ -1355,7 +1382,7 @@ int bnge_napi_poll(struct napi_struct *napi, int budget)
budget - work_done);
nqr->has_more_work |= cpr->has_more_work;
} else {
- bnge_hwrm_handler(bn->bd, (struct tx_cmp *)nqcmp);
+ bnge_hwrm_handler(bn, (struct tx_cmp *)nqcmp);
}
raw_cons = NEXT_RAW_CMP(raw_cons);
}
--
2.47.3

* [PATCH net-next v6 06/10] bng_en: add HW stats infra and structured ethtool ops
2026-03-10 14:40 [PATCH net-next v6 00/10] bng_en: add link management and statistics support Bhargava Marreddy
` (4 preceding siblings ...)
2026-03-10 14:40 ` [PATCH net-next v6 05/10] bng_en: add support for link async events Bhargava Marreddy
@ 2026-03-10 14:40 ` Bhargava Marreddy
2026-03-18 7:47 ` Dan Carpenter
2026-03-10 14:40 ` [PATCH net-next v6 07/10] bng_en: periodically fetch and accumulate hardware statistics Bhargava Marreddy
` (3 subsequent siblings)
9 siblings, 1 reply; 16+ messages in thread
From: Bhargava Marreddy @ 2026-03-10 14:40 UTC (permalink / raw)
To: davem, edumazet, kuba, pabeni, andrew+netdev, horms
Cc: netdev, linux-kernel, michael.chan, pavan.chebbi,
vsrama-krishna.nemani, vikas.gupta, Bhargava Marreddy,
Rahul Gupta, Ajit Kumar Khaparde
Implement the hardware-level statistics foundation and modern structured
ethtool operations.
1. Infrastructure: Add HWRM firmware wrappers (FUNC_QSTATS_EXT,
PORT_QSTATS_EXT, and PORT_QSTATS) to query ring and port counters.
2. Structured ops: Implement .get_eth_phy_stats, .get_eth_mac_stats,
.get_eth_ctrl_stats, .get_pause_stats, and .get_rmon_stats.
Stats are initially reported as 0; accumulation logic is added
in a subsequent patch.
Signed-off-by: Bhargava Marreddy <bhargava.marreddy@broadcom.com>
Reviewed-by: Vikas Gupta <vikas.gupta@broadcom.com>
Reviewed-by: Rahul Gupta <rahul-rg.gupta@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
.../net/ethernet/broadcom/bnge/bnge_ethtool.c | 160 ++++++++++++
.../ethernet/broadcom/bnge/bnge_hwrm_lib.c | 142 +++++++++++
.../ethernet/broadcom/bnge/bnge_hwrm_lib.h | 3 +
.../net/ethernet/broadcom/bnge/bnge_netdev.c | 229 +++++++++++++++++-
.../net/ethernet/broadcom/bnge/bnge_netdev.h | 93 ++++++-
5 files changed, 616 insertions(+), 11 deletions(-)
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_ethtool.c b/drivers/net/ethernet/broadcom/bnge/bnge_ethtool.c
index 01382ad784e3..2ae13f18e2d7 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_ethtool.c
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_ethtool.c
@@ -42,6 +42,161 @@ static void bnge_get_drvinfo(struct net_device *dev,
strscpy(info->bus_info, pci_name(bd->pdev), sizeof(info->bus_info));
}
+static void bnge_get_eth_phy_stats(struct net_device *dev,
+ struct ethtool_eth_phy_stats *phy_stats)
+{
+ struct bnge_net *bn = netdev_priv(dev);
+ u64 *rx;
+
+ if (!(bn->flags & BNGE_FLAG_PORT_STATS_EXT))
+ return;
+
+ rx = bn->rx_port_stats_ext.sw_stats;
+ phy_stats->SymbolErrorDuringCarrier =
+ *(rx + BNGE_RX_STATS_EXT_OFFSET(rx_pcs_symbol_err));
+}
+
+static void bnge_get_eth_mac_stats(struct net_device *dev,
+ struct ethtool_eth_mac_stats *mac_stats)
+{
+ struct bnge_net *bn = netdev_priv(dev);
+ u64 *rx, *tx;
+
+ if (!(bn->flags & BNGE_FLAG_PORT_STATS))
+ return;
+
+ rx = bn->port_stats.sw_stats;
+ tx = bn->port_stats.sw_stats + BNGE_TX_PORT_STATS_BYTE_OFFSET / 8;
+
+ mac_stats->FramesReceivedOK =
+ BNGE_GET_RX_PORT_STATS64(rx, rx_good_frames);
+ mac_stats->FramesTransmittedOK =
+ BNGE_GET_TX_PORT_STATS64(tx, tx_good_frames);
+ mac_stats->FrameCheckSequenceErrors =
+ BNGE_GET_RX_PORT_STATS64(rx, rx_fcs_err_frames);
+ mac_stats->AlignmentErrors =
+ BNGE_GET_RX_PORT_STATS64(rx, rx_align_err_frames);
+ mac_stats->OutOfRangeLengthField =
+ BNGE_GET_RX_PORT_STATS64(rx, rx_oor_len_frames);
+ mac_stats->OctetsReceivedOK = BNGE_GET_RX_PORT_STATS64(rx, rx_bytes);
+ mac_stats->OctetsTransmittedOK = BNGE_GET_TX_PORT_STATS64(tx, tx_bytes);
+ mac_stats->MulticastFramesReceivedOK =
+ BNGE_GET_RX_PORT_STATS64(rx, rx_mcast_frames);
+ mac_stats->BroadcastFramesReceivedOK =
+ BNGE_GET_RX_PORT_STATS64(rx, rx_bcast_frames);
+ mac_stats->MulticastFramesXmittedOK =
+ BNGE_GET_TX_PORT_STATS64(tx, tx_mcast_frames);
+ mac_stats->BroadcastFramesXmittedOK =
+ BNGE_GET_TX_PORT_STATS64(tx, tx_bcast_frames);
+ mac_stats->FrameTooLongErrors =
+ BNGE_GET_RX_PORT_STATS64(rx, rx_ovrsz_frames);
+}
+
+static void bnge_get_eth_ctrl_stats(struct net_device *dev,
+ struct ethtool_eth_ctrl_stats *ctrl_stats)
+{
+ struct bnge_net *bn = netdev_priv(dev);
+ u64 *rx;
+
+ if (!(bn->flags & BNGE_FLAG_PORT_STATS))
+ return;
+
+ rx = bn->port_stats.sw_stats;
+ ctrl_stats->MACControlFramesReceived =
+ BNGE_GET_RX_PORT_STATS64(rx, rx_ctrl_frames);
+}
+
+static void bnge_get_pause_stats(struct net_device *dev,
+ struct ethtool_pause_stats *pause_stats)
+{
+ struct bnge_net *bn = netdev_priv(dev);
+ u64 *rx, *tx;
+
+ if (!(bn->flags & BNGE_FLAG_PORT_STATS))
+ return;
+
+ rx = bn->port_stats.sw_stats;
+ tx = bn->port_stats.sw_stats + BNGE_TX_PORT_STATS_BYTE_OFFSET / 8;
+
+ pause_stats->rx_pause_frames =
+ BNGE_GET_RX_PORT_STATS64(rx, rx_pause_frames);
+ pause_stats->tx_pause_frames =
+ BNGE_GET_TX_PORT_STATS64(tx, tx_pause_frames);
+}
+
+static const struct ethtool_rmon_hist_range bnge_rmon_ranges[] = {
+ { 0, 64 },
+ { 65, 127 },
+ { 128, 255 },
+ { 256, 511 },
+ { 512, 1023 },
+ { 1024, 1518 },
+ { 1519, 2047 },
+ { 2048, 4095 },
+ { 4096, 9216 },
+ { 9217, 16383 },
+ {}
+};
+
+static void bnge_get_rmon_stats(struct net_device *dev,
+ struct ethtool_rmon_stats *rmon_stats,
+ const struct ethtool_rmon_hist_range **ranges)
+{
+ struct bnge_net *bn = netdev_priv(dev);
+ u64 *rx, *tx;
+
+ if (!(bn->flags & BNGE_FLAG_PORT_STATS))
+ return;
+
+ rx = bn->port_stats.sw_stats;
+ tx = bn->port_stats.sw_stats + BNGE_TX_PORT_STATS_BYTE_OFFSET / 8;
+
+ rmon_stats->jabbers = BNGE_GET_RX_PORT_STATS64(rx, rx_jbr_frames);
+ rmon_stats->oversize_pkts =
+ BNGE_GET_RX_PORT_STATS64(rx, rx_ovrsz_frames);
+ rmon_stats->undersize_pkts =
+ BNGE_GET_RX_PORT_STATS64(rx, rx_undrsz_frames);
+
+ rmon_stats->hist[0] = BNGE_GET_RX_PORT_STATS64(rx, rx_64b_frames);
+ rmon_stats->hist[1] = BNGE_GET_RX_PORT_STATS64(rx, rx_65b_127b_frames);
+ rmon_stats->hist[2] = BNGE_GET_RX_PORT_STATS64(rx, rx_128b_255b_frames);
+ rmon_stats->hist[3] = BNGE_GET_RX_PORT_STATS64(rx, rx_256b_511b_frames);
+ rmon_stats->hist[4] =
+ BNGE_GET_RX_PORT_STATS64(rx, rx_512b_1023b_frames);
+ rmon_stats->hist[5] =
+ BNGE_GET_RX_PORT_STATS64(rx, rx_1024b_1518b_frames);
+ rmon_stats->hist[6] =
+ BNGE_GET_RX_PORT_STATS64(rx, rx_1519b_2047b_frames);
+ rmon_stats->hist[7] =
+ BNGE_GET_RX_PORT_STATS64(rx, rx_2048b_4095b_frames);
+ rmon_stats->hist[8] =
+ BNGE_GET_RX_PORT_STATS64(rx, rx_4096b_9216b_frames);
+ rmon_stats->hist[9] =
+ BNGE_GET_RX_PORT_STATS64(rx, rx_9217b_16383b_frames);
+
+ rmon_stats->hist_tx[0] = BNGE_GET_TX_PORT_STATS64(tx, tx_64b_frames);
+ rmon_stats->hist_tx[1] =
+ BNGE_GET_TX_PORT_STATS64(tx, tx_65b_127b_frames);
+ rmon_stats->hist_tx[2] =
+ BNGE_GET_TX_PORT_STATS64(tx, tx_128b_255b_frames);
+ rmon_stats->hist_tx[3] =
+ BNGE_GET_TX_PORT_STATS64(tx, tx_256b_511b_frames);
+ rmon_stats->hist_tx[4] =
+ BNGE_GET_TX_PORT_STATS64(tx, tx_512b_1023b_frames);
+ rmon_stats->hist_tx[5] =
+ BNGE_GET_TX_PORT_STATS64(tx, tx_1024b_1518b_frames);
+ rmon_stats->hist_tx[6] =
+ BNGE_GET_TX_PORT_STATS64(tx, tx_1519b_2047b_frames);
+ rmon_stats->hist_tx[7] =
+ BNGE_GET_TX_PORT_STATS64(tx, tx_2048b_4095b_frames);
+ rmon_stats->hist_tx[8] =
+ BNGE_GET_TX_PORT_STATS64(tx, tx_4096b_9216b_frames);
+ rmon_stats->hist_tx[9] =
+ BNGE_GET_TX_PORT_STATS64(tx, tx_9217b_16383b_frames);
+
+ *ranges = bnge_rmon_ranges;
+}
+
static void bnge_get_pauseparam(struct net_device *dev,
struct ethtool_pauseparam *epause)
{
@@ -107,6 +262,11 @@ static const struct ethtool_ops bnge_ethtool_ops = {
.nway_reset = bnge_nway_reset,
.get_pauseparam = bnge_get_pauseparam,
.set_pauseparam = bnge_set_pauseparam,
+ .get_eth_phy_stats = bnge_get_eth_phy_stats,
+ .get_eth_mac_stats = bnge_get_eth_mac_stats,
+ .get_eth_ctrl_stats = bnge_get_eth_ctrl_stats,
+ .get_pause_stats = bnge_get_pause_stats,
+ .get_rmon_stats = bnge_get_rmon_stats,
};
void bnge_set_ethtool_ops(struct net_device *dev)
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.c b/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.c
index 040b73ee4097..ab75acf31db9 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.c
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.c
@@ -14,6 +14,7 @@
#include "bnge_hwrm_lib.h"
#include "bnge_rmem.h"
#include "bnge_resc.h"
+#include "bnge_netdev.h"
static const u16 bnge_async_events_arr[] = {
ASYNC_EVENT_CMPL_EVENT_ID_LINK_STATUS_CHANGE,
@@ -610,6 +611,8 @@ int bnge_hwrm_func_qcaps(struct bnge_dev *bd)
bd->flags |= BNGE_EN_ROCE_V1;
if (flags & FUNC_QCAPS_RESP_FLAGS_ROCE_V2_SUPPORTED)
bd->flags |= BNGE_EN_ROCE_V2;
+ if (flags & FUNC_QCAPS_RESP_FLAGS_EXT_STATS_SUPPORTED)
+ bd->fw_cap |= BNGE_FW_CAP_EXT_STATS_SUPPORTED;
pf->fw_fid = le16_to_cpu(resp->fid);
pf->port_id = le16_to_cpu(resp->port_id);
@@ -1492,3 +1495,142 @@ int bnge_hwrm_vnic_set_tpa(struct bnge_dev *bd, struct bnge_vnic_info *vnic,
return bnge_hwrm_req_send(bd, req);
}
+
+int bnge_hwrm_func_qstat_ext(struct bnge_dev *bd, struct bnge_stats_mem *stats)
+{
+ struct hwrm_func_qstats_ext_output *resp;
+ struct hwrm_func_qstats_ext_input *req;
+ __le64 *hw_masks;
+ int rc;
+
+ rc = bnge_hwrm_req_init(bd, req, HWRM_FUNC_QSTATS_EXT);
+ if (rc)
+ return rc;
+
+ req->fid = cpu_to_le16(0xffff);
+ req->flags = FUNC_QSTATS_EXT_REQ_FLAGS_COUNTER_MASK;
+
+ resp = bnge_hwrm_req_hold(bd, req);
+ rc = bnge_hwrm_req_send(bd, req);
+ if (!rc) {
+ hw_masks = &resp->rx_ucast_pkts;
+ bnge_copy_hw_masks(stats->hw_masks, hw_masks, stats->len / 8);
+ }
+ bnge_hwrm_req_drop(bd, req);
+ return rc;
+}
+
+int bnge_hwrm_port_qstats_ext(struct bnge_dev *bd, u8 flags)
+{
+ struct hwrm_queue_pri2cos_qcfg_output *resp_qc;
+ struct bnge_net *bn = netdev_priv(bd->netdev);
+ struct hwrm_queue_pri2cos_qcfg_input *req_qc;
+ struct hwrm_port_qstats_ext_output *resp_qs;
+ struct hwrm_port_qstats_ext_input *req_qs;
+ struct bnge_pf_info *pf = &bd->pf;
+ u32 tx_stat_size;
+ int rc;
+
+ if (!(bn->flags & BNGE_FLAG_PORT_STATS_EXT))
+ return 0;
+
+ if (flags && !(bd->fw_cap & BNGE_FW_CAP_EXT_HW_STATS_SUPPORTED))
+ return -EOPNOTSUPP;
+
+ rc = bnge_hwrm_req_init(bd, req_qs, HWRM_PORT_QSTATS_EXT);
+ if (rc)
+ return rc;
+
+ req_qs->flags = flags;
+ req_qs->port_id = cpu_to_le16(pf->port_id);
+ req_qs->rx_stat_size = cpu_to_le16(sizeof(struct rx_port_stats_ext));
+ req_qs->rx_stat_host_addr =
+ cpu_to_le64(bn->rx_port_stats_ext.hw_stats_map);
+ tx_stat_size = bn->tx_port_stats_ext.hw_stats ?
+ sizeof(struct tx_port_stats_ext) : 0;
+ req_qs->tx_stat_size = cpu_to_le16(tx_stat_size);
+ req_qs->tx_stat_host_addr =
+ cpu_to_le64(bn->tx_port_stats_ext.hw_stats_map);
+ resp_qs = bnge_hwrm_req_hold(bd, req_qs);
+ rc = bnge_hwrm_req_send(bd, req_qs);
+ if (!rc) {
+ bn->fw_rx_stats_ext_size =
+ le16_to_cpu(resp_qs->rx_stat_size) / 8;
+ bn->fw_tx_stats_ext_size = tx_stat_size ?
+ le16_to_cpu(resp_qs->tx_stat_size) / 8 : 0;
+ } else {
+ bn->fw_rx_stats_ext_size = 0;
+ bn->fw_tx_stats_ext_size = 0;
+ }
+ bnge_hwrm_req_drop(bd, req_qs);
+
+ if (flags)
+ return rc;
+
+ if (bn->fw_tx_stats_ext_size <=
+ offsetof(struct tx_port_stats_ext, pfc_pri0_tx_duration_us) / 8) {
+ bn->pri2cos_valid = false;
+ return rc;
+ }
+
+ rc = bnge_hwrm_req_init(bd, req_qc, HWRM_QUEUE_PRI2COS_QCFG);
+ if (rc)
+ return rc;
+
+ req_qc->flags = cpu_to_le32(QUEUE_PRI2COS_QCFG_REQ_FLAGS_IVLAN);
+
+ resp_qc = bnge_hwrm_req_hold(bd, req_qc);
+ rc = bnge_hwrm_req_send(bd, req_qc);
+ if (!rc) {
+ u8 *pri2cos;
+ int i, j;
+
+ pri2cos = &resp_qc->pri0_cos_queue_id;
+ for (i = 0; i < 8; i++) {
+ u8 queue_id = pri2cos[i];
+ u8 queue_idx;
+
+ /* Per port queue IDs start from 0, 10, 20, etc */
+ queue_idx = queue_id % 10;
+ if (queue_idx >= BNGE_MAX_QUEUE) {
+ bn->pri2cos_valid = false;
+ bnge_hwrm_req_drop(bd, req_qc);
+ return rc;
+ }
+ for (j = 0; j < bd->max_q; j++) {
+ if (bd->q_ids[j] == queue_id)
+ bn->pri2cos_idx[i] = queue_idx;
+ }
+ }
+ bn->pri2cos_valid = true;
+ }
+ bnge_hwrm_req_drop(bd, req_qc);
+
+ return rc;
+}
+
+int bnge_hwrm_port_qstats(struct bnge_dev *bd, u8 flags)
+{
+ struct bnge_net *bn = netdev_priv(bd->netdev);
+ struct hwrm_port_qstats_input *req;
+ struct bnge_pf_info *pf = &bd->pf;
+ int rc;
+
+ if (!(bn->flags & BNGE_FLAG_PORT_STATS))
+ return 0;
+
+ if (flags && !(bd->fw_cap & BNGE_FW_CAP_EXT_HW_STATS_SUPPORTED))
+ return -EOPNOTSUPP;
+
+ rc = bnge_hwrm_req_init(bd, req, HWRM_PORT_QSTATS);
+ if (rc)
+ return rc;
+
+ req->flags = flags;
+ req->port_id = cpu_to_le16(pf->port_id);
+ req->tx_stat_host_addr = cpu_to_le64(bn->port_stats.hw_stats_map +
+ BNGE_TX_PORT_STATS_BYTE_OFFSET);
+ req->rx_stat_host_addr = cpu_to_le64(bn->port_stats.hw_stats_map);
+
+ return bnge_hwrm_req_send(bd, req);
+}
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.h b/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.h
index 86ca3ac2244b..3501de7a89b9 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.h
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.h
@@ -62,4 +62,7 @@ int bnge_hwrm_phy_qcaps(struct bnge_dev *bd);
int bnge_hwrm_set_link_setting(struct bnge_net *bn, bool set_pause);
int bnge_hwrm_set_pause(struct bnge_net *bn);
int bnge_hwrm_shutdown_link(struct bnge_dev *bd);
+int bnge_hwrm_port_qstats(struct bnge_dev *bd, u8 flags);
+int bnge_hwrm_port_qstats_ext(struct bnge_dev *bd, u8 flags);
+int bnge_hwrm_func_qstat_ext(struct bnge_dev *bd, struct bnge_stats_mem *stats);
#endif /* _BNGE_HWRM_LIB_H_ */
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
index f51d5eee0f6c..0b8131d9df46 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
@@ -39,6 +39,10 @@ static void bnge_free_stats_mem(struct bnge_net *bn,
{
struct bnge_dev *bd = bn->bd;
+ kfree(stats->hw_masks);
+ stats->hw_masks = NULL;
+ kfree(stats->sw_stats);
+ stats->sw_stats = NULL;
if (stats->hw_stats) {
dma_free_coherent(bd->dev, stats->len, stats->hw_stats,
stats->hw_stats_map);
@@ -47,7 +51,7 @@ static void bnge_free_stats_mem(struct bnge_net *bn,
}
static int bnge_alloc_stats_mem(struct bnge_net *bn,
- struct bnge_stats_mem *stats)
+ struct bnge_stats_mem *stats, bool alloc_masks)
{
struct bnge_dev *bd = bn->bd;
@@ -56,7 +60,20 @@ static int bnge_alloc_stats_mem(struct bnge_net *bn,
if (!stats->hw_stats)
return -ENOMEM;
+ stats->sw_stats = kzalloc(stats->len, GFP_KERNEL);
+ if (!stats->sw_stats)
+ goto stats_mem_err;
+
+ if (alloc_masks) {
+ stats->hw_masks = kzalloc(stats->len, GFP_KERNEL);
+ if (!stats->hw_masks)
+ goto stats_mem_err;
+ }
return 0;
+
+stats_mem_err:
+ bnge_free_stats_mem(bn, stats);
+ return -ENOMEM;
}
static void bnge_free_ring_stats(struct bnge_net *bn)
@@ -72,7 +89,111 @@ static void bnge_free_ring_stats(struct bnge_net *bn)
struct bnge_nq_ring_info *nqr = &bnapi->nq_ring;
bnge_free_stats_mem(bn, &nqr->stats);
+
+ kfree(nqr->sw_stats);
+ nqr->sw_stats = NULL;
+ }
+}
+
+static void bnge_fill_masks(u64 *mask_arr, u64 mask, int count)
+{
+ int i;
+
+ for (i = 0; i < count; i++)
+ mask_arr[i] = mask;
+}
+
+void bnge_copy_hw_masks(u64 *mask_arr, __le64 *hw_mask_arr, int count)
+{
+ int i;
+
+ for (i = 0; i < count; i++)
+ mask_arr[i] = le64_to_cpu(hw_mask_arr[i]);
+}
+
+static void bnge_init_stats(struct bnge_net *bn)
+{
+ struct bnge_napi *bnapi = bn->bnapi[0];
+ struct bnge_nq_ring_info *nqr;
+ struct bnge_stats_mem *stats;
+ struct bnge_dev *bd = bn->bd;
+ __le64 *rx_stats, *tx_stats;
+ int rc, rx_count, tx_count;
+ u64 *rx_masks, *tx_masks;
+ u8 flags;
+
+ nqr = &bnapi->nq_ring;
+ stats = &nqr->stats;
+ rc = bnge_hwrm_func_qstat_ext(bd, stats);
+ if (rc) {
+ u64 mask = (1ULL << 48) - 1;
+
+ bnge_fill_masks(stats->hw_masks, mask, stats->len / 8);
}
+
+ if (bn->flags & BNGE_FLAG_PORT_STATS) {
+ stats = &bn->port_stats;
+ rx_stats = stats->hw_stats;
+ rx_masks = stats->hw_masks;
+ rx_count = sizeof(struct rx_port_stats) / 8;
+ tx_stats = rx_stats + BNGE_TX_PORT_STATS_BYTE_OFFSET / 8;
+ tx_masks = rx_masks + BNGE_TX_PORT_STATS_BYTE_OFFSET / 8;
+ tx_count = sizeof(struct tx_port_stats) / 8;
+
+ flags = PORT_QSTATS_REQ_FLAGS_COUNTER_MASK;
+ rc = bnge_hwrm_port_qstats(bd, flags);
+ if (rc) {
+ u64 mask = (1ULL << 40) - 1;
+
+ bnge_fill_masks(rx_masks, mask, rx_count);
+ bnge_fill_masks(tx_masks, mask, tx_count);
+ } else {
+ bnge_copy_hw_masks(rx_masks, rx_stats, rx_count);
+ bnge_copy_hw_masks(tx_masks, tx_stats, tx_count);
+ bnge_hwrm_port_qstats(bd, 0);
+ }
+ }
+
+ if (bn->flags & BNGE_FLAG_PORT_STATS_EXT) {
+ stats = &bn->rx_port_stats_ext;
+ rx_stats = stats->hw_stats;
+ rx_masks = stats->hw_masks;
+ rx_count = sizeof(struct rx_port_stats_ext) / 8;
+ stats = &bn->tx_port_stats_ext;
+ tx_stats = stats->hw_stats;
+ tx_masks = stats->hw_masks;
+ tx_count = sizeof(struct tx_port_stats_ext) / 8;
+
+ flags = PORT_QSTATS_EXT_REQ_FLAGS_COUNTER_MASK;
+ rc = bnge_hwrm_port_qstats_ext(bd, flags);
+ if (rc) {
+ u64 mask = (1ULL << 40) - 1;
+
+ bnge_fill_masks(rx_masks, mask, rx_count);
+ if (tx_stats)
+ bnge_fill_masks(tx_masks, mask, tx_count);
+ } else {
+ bnge_copy_hw_masks(rx_masks, rx_stats, rx_count);
+ if (tx_stats)
+ bnge_copy_hw_masks(tx_masks, tx_stats,
+ tx_count);
+ bnge_hwrm_port_qstats_ext(bd, 0);
+ }
+ }
+}
+
+static void bnge_free_port_ext_stats(struct bnge_net *bn)
+{
+ bn->flags &= ~BNGE_FLAG_PORT_STATS_EXT;
+ bnge_free_stats_mem(bn, &bn->rx_port_stats_ext);
+ bnge_free_stats_mem(bn, &bn->tx_port_stats_ext);
+}
+
+static void bnge_free_port_stats(struct bnge_net *bn)
+{
+ bn->flags &= ~BNGE_FLAG_PORT_STATS;
+ bnge_free_stats_mem(bn, &bn->port_stats);
+ bnge_free_port_ext_stats(bn);
}
static int bnge_alloc_ring_stats(struct bnge_net *bn)
@@ -87,13 +208,84 @@ static int bnge_alloc_ring_stats(struct bnge_net *bn)
struct bnge_napi *bnapi = bn->bnapi[i];
struct bnge_nq_ring_info *nqr = &bnapi->nq_ring;
+ nqr->sw_stats = kzalloc_obj(*nqr->sw_stats, GFP_KERNEL);
+ if (!nqr->sw_stats) {
+ rc = -ENOMEM;
+ goto err_free_ring_stats;
+ }
+
nqr->stats.len = size;
- rc = bnge_alloc_stats_mem(bn, &nqr->stats);
+ rc = bnge_alloc_stats_mem(bn, &nqr->stats, !i);
if (rc)
goto err_free_ring_stats;
nqr->hw_stats_ctx_id = INVALID_STATS_CTX_ID;
}
+
+ return 0;
+
+err_free_ring_stats:
+ bnge_free_ring_stats(bn);
+ return rc;
+}
+
+static void bnge_alloc_port_ext_stats(struct bnge_net *bn)
+{
+ struct bnge_dev *bd = bn->bd;
+ int rc;
+
+ if (!(bd->fw_cap & BNGE_FW_CAP_EXT_STATS_SUPPORTED))
+ return;
+
+ if (!bn->rx_port_stats_ext.hw_stats) {
+ bn->rx_port_stats_ext.len = sizeof(struct rx_port_stats_ext);
+ rc = bnge_alloc_stats_mem(bn, &bn->rx_port_stats_ext, true);
+ /* Extended stats are optional */
+ if (rc)
+ return;
+ }
+
+ if (!bn->tx_port_stats_ext.hw_stats) {
+ bn->tx_port_stats_ext.len = sizeof(struct tx_port_stats_ext);
+ rc = bnge_alloc_stats_mem(bn, &bn->tx_port_stats_ext, true);
+ /* Extended stats are optional */
+ if (rc) {
+ bnge_free_port_ext_stats(bn);
+ return;
+ }
+ }
+ bn->flags |= BNGE_FLAG_PORT_STATS_EXT;
+}
+
+static int bnge_alloc_port_stats(struct bnge_net *bn)
+{
+ int rc;
+
+ if (!bn->port_stats.hw_stats) {
+ bn->port_stats.len = BNGE_PORT_STATS_SIZE;
+ rc = bnge_alloc_stats_mem(bn, &bn->port_stats, true);
+ if (rc)
+ return rc;
+
+ bn->flags |= BNGE_FLAG_PORT_STATS;
+ }
+
+ bnge_alloc_port_ext_stats(bn);
+ return 0;
+}
+
+static int bnge_alloc_stats(struct bnge_net *bn)
+{
+ int rc;
+
+ rc = bnge_alloc_ring_stats(bn);
+ if (rc)
+ return rc;
+
+ rc = bnge_alloc_port_stats(bn);
+ if (rc)
+ goto err_free_ring_stats;
+
return 0;
err_free_ring_stats:
@@ -934,6 +1126,7 @@ static void bnge_free_core(struct bnge_net *bn)
bnge_free_nq_tree(bn);
bnge_free_nq_arrays(bn);
bnge_free_ring_stats(bn);
+ bnge_free_port_stats(bn);
bnge_free_ring_grps(bn);
bnge_free_vnics(bn);
kfree(bn->tx_ring_map);
@@ -1018,10 +1211,12 @@ static int bnge_alloc_core(struct bnge_net *bn)
txr->bnapi = bnapi2;
}
- rc = bnge_alloc_ring_stats(bn);
+ rc = bnge_alloc_stats(bn);
if (rc)
goto err_free_core;
+ bnge_init_stats(bn);
+
rc = bnge_alloc_vnics(bn);
if (rc)
goto err_free_core;
@@ -2654,6 +2849,34 @@ static int bnge_close(struct net_device *dev)
return 0;
}
+static void bnge_get_one_ring_err_stats(struct bnge_total_ring_err_stats *stats,
+ struct bnge_nq_ring_info *nqr)
+{
+ struct bnge_sw_stats *sw_stats = nqr->sw_stats;
+ u64 *hw_stats = nqr->stats.sw_stats;
+
+ stats->rx_total_l4_csum_errors += sw_stats->rx.rx_l4_csum_errors;
+ stats->rx_total_resets += sw_stats->rx.rx_resets;
+ stats->rx_total_buf_errors += sw_stats->rx.rx_buf_errors;
+ stats->rx_total_oom_discards += sw_stats->rx.rx_oom_discards;
+ stats->rx_total_netpoll_discards += sw_stats->rx.rx_netpoll_discards;
+ stats->rx_total_ring_discards +=
+ BNGE_GET_RING_STATS64(hw_stats, rx_discard_pkts);
+ stats->tx_total_resets += sw_stats->tx.tx_resets;
+ stats->tx_total_ring_discards +=
+ BNGE_GET_RING_STATS64(hw_stats, tx_discard_pkts);
+}
+
+void bnge_get_ring_err_stats(struct bnge_net *bn,
+ struct bnge_total_ring_err_stats *stats)
+{
+ int i;
+
+ for (i = 0; i < bn->bd->nq_nr_rings; i++)
+ bnge_get_one_ring_err_stats(stats,
+ &bn->bnapi[i]->nq_ring);
+}
+
static const struct net_device_ops bnge_netdev_ops = {
.ndo_open = bnge_open,
.ndo_stop = bnge_close,
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h
index 5636eb371e24..10ecd5d64a87 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h
@@ -224,6 +224,69 @@ struct bnge_tpa_info {
#define BNGE_NQ_HDL_IDX(hdl) ((hdl) & BNGE_NQ_HDL_IDX_MASK)
#define BNGE_NQ_HDL_TYPE(hdl) (((hdl) & BNGE_NQ_HDL_TYPE_MASK) >> \
BNGE_NQ_HDL_TYPE_SHIFT)
+#define BNGE_GET_RING_STATS64(sw, counter) \
+ (*((sw) + offsetof(struct ctx_hw_stats, counter) / 8))
+
+#define BNGE_GET_RX_PORT_STATS64(sw, counter) \
+ (*((sw) + offsetof(struct rx_port_stats, counter) / 8))
+
+#define BNGE_GET_TX_PORT_STATS64(sw, counter) \
+ (*((sw) + offsetof(struct tx_port_stats, counter) / 8))
+
+#define BNGE_PORT_STATS_SIZE \
+ (sizeof(struct rx_port_stats) + sizeof(struct tx_port_stats) + 1024)
+
+#define BNGE_TX_PORT_STATS_BYTE_OFFSET \
+ (sizeof(struct rx_port_stats) + 512)
+
+#define BNGE_RX_STATS_OFFSET(counter) \
+ (offsetof(struct rx_port_stats, counter) / 8)
+
+#define BNGE_TX_STATS_OFFSET(counter) \
+ ((offsetof(struct tx_port_stats, counter) + \
+ BNGE_TX_PORT_STATS_BYTE_OFFSET) / 8)
+
+#define BNGE_RX_STATS_EXT_OFFSET(counter) \
+ (offsetof(struct rx_port_stats_ext, counter) / 8)
+
+#define BNGE_TX_STATS_EXT_OFFSET(counter) \
+ (offsetof(struct tx_port_stats_ext, counter) / 8)
+
+struct bnge_total_ring_err_stats {
+ u64 rx_total_l4_csum_errors;
+ u64 rx_total_resets;
+ u64 rx_total_buf_errors;
+ u64 rx_total_oom_discards;
+ u64 rx_total_netpoll_discards;
+ u64 rx_total_ring_discards;
+ u64 tx_total_resets;
+ u64 tx_total_ring_discards;
+};
+
+struct bnge_rx_sw_stats {
+ u64 rx_l4_csum_errors;
+ u64 rx_resets;
+ u64 rx_buf_errors;
+ u64 rx_oom_discards;
+ u64 rx_netpoll_discards;
+};
+
+struct bnge_tx_sw_stats {
+ u64 tx_resets;
+};
+
+struct bnge_stats_mem {
+ u64 *sw_stats;
+ u64 *hw_masks;
+ void *hw_stats;
+ dma_addr_t hw_stats_map;
+ u32 len;
+};
+
+struct bnge_sw_stats {
+ struct bnge_rx_sw_stats rx;
+ struct bnge_tx_sw_stats tx;
+};
enum bnge_net_state {
BNGE_STATE_NAPI_DISABLED,
@@ -231,6 +294,11 @@ enum bnge_net_state {
#define BNGE_TIMER_INTERVAL HZ
+enum bnge_net_flag {
+ BNGE_FLAG_PORT_STATS = BIT(0),
+ BNGE_FLAG_PORT_STATS_EXT = BIT(1),
+};
+
enum bnge_sp_event {
BNGE_LINK_CHNG_SP_EVENT,
BNGE_LINK_SPEED_CHNG_SP_EVENT,
@@ -309,6 +377,19 @@ struct bnge_net {
unsigned long sp_event;
struct bnge_ethtool_link_info eth_link_info;
+
+ u64 flags;
+
+ struct bnge_total_ring_err_stats ring_err_stats_prev;
+
+ struct bnge_stats_mem port_stats;
+ struct bnge_stats_mem rx_port_stats_ext;
+ struct bnge_stats_mem tx_port_stats_ext;
+ u16 fw_rx_stats_ext_size;
+ u16 fw_tx_stats_ext_size;
+
+ u8 pri2cos_idx[8];
+ bool pri2cos_valid;
};
#define BNGE_DEFAULT_RX_RING_SIZE 511
@@ -374,14 +455,6 @@ void bnge_set_ring_params(struct bnge_dev *bd);
bnge_writeq(bd, (db)->db_key64 | DBR_TYPE_NQ_ARM | \
DB_RING_IDX(db, idx), (db)->doorbell)
-struct bnge_stats_mem {
- u64 *sw_stats;
- u64 *hw_masks;
- void *hw_stats;
- dma_addr_t hw_stats_map;
- int len;
-};
-
struct nqe_cn {
__le16 type;
#define NQ_CN_TYPE_MASK 0x3fUL
@@ -425,6 +498,7 @@ struct bnge_nq_ring_info {
struct bnge_db_info nq_db;
struct bnge_stats_mem stats;
+ struct bnge_sw_stats *sw_stats;
u32 hw_stats_ctx_id;
bool has_more_work;
@@ -588,4 +662,7 @@ u8 *__bnge_alloc_rx_frag(struct bnge_net *bn, dma_addr_t *mapping,
int bnge_alloc_rx_netmem(struct bnge_net *bn, struct bnge_rx_ring_info *rxr,
u16 prod, gfp_t gfp);
void __bnge_queue_sp_work(struct bnge_net *bn);
+void bnge_get_ring_err_stats(struct bnge_net *bn,
+ struct bnge_total_ring_err_stats *stats);
+void bnge_copy_hw_masks(u64 *mask_arr, __le64 *hw_mask_arr, int count);
#endif /* _BNGE_NETDEV_H_ */
--
2.47.3
^ permalink raw reply related [flat|nested] 16+ messages in thread
* [PATCH net-next v6 07/10] bng_en: periodically fetch and accumulate hardware statistics
2026-03-10 14:40 [PATCH net-next v6 00/10] bng_en: add link management and statistics support Bhargava Marreddy
` (5 preceding siblings ...)
2026-03-10 14:40 ` [PATCH net-next v6 06/10] bng_en: add HW stats infra and structured ethtool ops Bhargava Marreddy
@ 2026-03-10 14:40 ` Bhargava Marreddy
2026-03-10 14:40 ` [PATCH net-next v6 08/10] bng_en: implement ndo_get_stats64 Bhargava Marreddy
` (2 subsequent siblings)
9 siblings, 0 replies; 16+ messages in thread
From: Bhargava Marreddy @ 2026-03-10 14:40 UTC (permalink / raw)
To: davem, edumazet, kuba, pabeni, andrew+netdev, horms
Cc: netdev, linux-kernel, michael.chan, pavan.chebbi,
vsrama-krishna.nemani, vikas.gupta, Bhargava Marreddy,
Rahul Gupta, Ajit Kumar Khaparde
Use the timer to schedule periodic stats collection via
the workqueue when the link is up. Fetch fresh counters from
hardware via DMA and accumulate them into 64-bit software
shadows, handling wrap-around for counters narrower than
64 bits.
Signed-off-by: Bhargava Marreddy <bhargava.marreddy@broadcom.com>
Reviewed-by: Vikas Gupta <vikas.gupta@broadcom.com>
Reviewed-by: Rahul Gupta <rahul-rg.gupta@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
.../net/ethernet/broadcom/bnge/bnge_netdev.c | 90 +++++++++++++++++++
.../net/ethernet/broadcom/bnge/bnge_netdev.h | 1 +
2 files changed, 91 insertions(+)
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
index 0b8131d9df46..43446f3f5bd2 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
@@ -322,9 +322,93 @@ static void bnge_timer(struct timer_list *t)
}
}
+ if (BNGE_LINK_IS_UP(bd) && bn->stats_coal_ticks)
+ bnge_queue_sp_work(bn, BNGE_PERIODIC_STATS_SP_EVENT);
+
mod_timer(&bn->timer, jiffies + bn->current_interval);
}
+static void bnge_add_one_ctr(u64 hw, u64 *sw, u64 mask)
+{
+ u64 sw_tmp, sw_val;
+
+ hw &= mask;
+ sw_val = READ_ONCE(*sw);
+ sw_tmp = (sw_val & ~mask) | hw;
+ if (hw < (sw_val & mask))
+ sw_tmp += mask + 1;
+ WRITE_ONCE(*sw, sw_tmp);
+}
+
+static void __bnge_accumulate_stats(__le64 *hw_stats, u64 *sw_stats, u64 *masks,
+ int count)
+{
+ int i;
+
+ for (i = 0; i < count; i++) {
+ u64 hw = le64_to_cpu(READ_ONCE(hw_stats[i]));
+
+ if (masks[i] == -1ULL)
+ sw_stats[i] = hw;
+ else
+ bnge_add_one_ctr(hw, &sw_stats[i], masks[i]);
+ }
+}
+
+static void bnge_accumulate_stats(struct bnge_stats_mem *stats)
+{
+ if (!stats->hw_stats)
+ return;
+
+ __bnge_accumulate_stats(stats->hw_stats, stats->sw_stats,
+ stats->hw_masks, stats->len / 8);
+}
+
+static void bnge_accumulate_all_stats(struct bnge_dev *bd)
+{
+ struct bnge_net *bn = netdev_priv(bd->netdev);
+ struct bnge_stats_mem *ring0_stats = NULL;
+ int i;
+
+ for (i = 0; i < bd->nq_nr_rings; i++) {
+ struct bnge_napi *bnapi = bn->bnapi[i];
+ struct bnge_nq_ring_info *nqr;
+ struct bnge_stats_mem *stats;
+
+ nqr = &bnapi->nq_ring;
+ stats = &nqr->stats;
+
+ if (!ring0_stats)
+ ring0_stats = &bn->bnapi[0]->nq_ring.stats;
+
+ __bnge_accumulate_stats(stats->hw_stats, stats->sw_stats,
+ ring0_stats->hw_masks,
+ ring0_stats->len / 8);
+ }
+
+ if (bn->flags & BNGE_FLAG_PORT_STATS) {
+ struct bnge_stats_mem *stats = &bn->port_stats;
+ __le64 *hw_stats = stats->hw_stats;
+ u64 *sw_stats = stats->sw_stats;
+ u64 *masks = stats->hw_masks;
+ u16 cnt;
+
+ cnt = sizeof(struct rx_port_stats) / 8;
+ __bnge_accumulate_stats(hw_stats, sw_stats, masks, cnt);
+
+ hw_stats += BNGE_TX_PORT_STATS_BYTE_OFFSET / 8;
+ sw_stats += BNGE_TX_PORT_STATS_BYTE_OFFSET / 8;
+ masks += BNGE_TX_PORT_STATS_BYTE_OFFSET / 8;
+ cnt = sizeof(struct tx_port_stats) / 8;
+ __bnge_accumulate_stats(hw_stats, sw_stats, masks, cnt);
+ }
+
+ if (bn->flags & BNGE_FLAG_PORT_STATS_EXT) {
+ bnge_accumulate_stats(&bn->rx_port_stats_ext);
+ bnge_accumulate_stats(&bn->tx_port_stats_ext);
+ }
+}
+
static void bnge_sp_task(struct work_struct *work)
{
struct bnge_net *bn = container_of(work, struct bnge_net, sp_task);
@@ -336,6 +420,12 @@ static void bnge_sp_task(struct work_struct *work)
return;
}
+ if (test_and_clear_bit(BNGE_PERIODIC_STATS_SP_EVENT, &bn->sp_event)) {
+ bnge_hwrm_port_qstats(bd, 0);
+ bnge_hwrm_port_qstats_ext(bd, 0);
+ bnge_accumulate_all_stats(bd);
+ }
+
if (test_and_clear_bit(BNGE_UPDATE_PHY_SP_EVENT, &bn->sp_event)) {
int rc;
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h
index 10ecd5d64a87..ca1977b58964 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h
@@ -304,6 +304,7 @@ enum bnge_sp_event {
BNGE_LINK_SPEED_CHNG_SP_EVENT,
BNGE_LINK_CFG_CHANGE_SP_EVENT,
BNGE_UPDATE_PHY_SP_EVENT,
+ BNGE_PERIODIC_STATS_SP_EVENT,
};
struct bnge_net {
--
2.47.3
* [PATCH net-next v6 08/10] bng_en: implement ndo_get_stats64
2026-03-10 14:40 [PATCH net-next v6 00/10] bng_en: add link management and statistics support Bhargava Marreddy
` (6 preceding siblings ...)
2026-03-10 14:40 ` [PATCH net-next v6 07/10] bng_en: periodically fetch and accumulate hardware statistics Bhargava Marreddy
@ 2026-03-10 14:40 ` Bhargava Marreddy
2026-03-10 14:40 ` [PATCH net-next v6 09/10] bng_en: implement netdev_stat_ops Bhargava Marreddy
2026-03-10 14:40 ` [PATCH net-next v6 10/10] bng_en: add support for ethtool -S stats display Bhargava Marreddy
9 siblings, 0 replies; 16+ messages in thread
From: Bhargava Marreddy @ 2026-03-10 14:40 UTC (permalink / raw)
To: davem, edumazet, kuba, pabeni, andrew+netdev, horms
Cc: netdev, linux-kernel, michael.chan, pavan.chebbi,
vsrama-krishna.nemani, vikas.gupta, Bhargava Marreddy,
Ajit Kumar Khaparde
Implement the ndo_get_stats64 callback to report aggregate network
statistics. The driver gathers these by accumulating the per-ring
counters into the provided rtnl_link_stats64 structure.
Signed-off-by: Bhargava Marreddy <bhargava.marreddy@broadcom.com>
Reviewed-by: Vikas Gupta <vikas.gupta@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
.../net/ethernet/broadcom/bnge/bnge_netdev.c | 105 ++++++++++++++++++
.../net/ethernet/broadcom/bnge/bnge_netdev.h | 1 +
2 files changed, 106 insertions(+)
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
index 43446f3f5bd2..03872bde4cd0 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
@@ -2911,6 +2911,102 @@ static int bnge_shutdown_nic(struct bnge_net *bn)
return 0;
}
+static void bnge_get_ring_stats(struct bnge_dev *bd,
+ struct rtnl_link_stats64 *stats)
+{
+ struct bnge_net *bn = netdev_priv(bd->netdev);
+ int i;
+
+ for (i = 0; i < bd->nq_nr_rings; i++) {
+ struct bnge_napi *bnapi = bn->bnapi[i];
+ struct bnge_nq_ring_info *nqr = &bnapi->nq_ring;
+ u64 *sw = nqr->stats.sw_stats;
+
+ stats->rx_packets += BNGE_GET_RING_STATS64(sw, rx_ucast_pkts);
+ stats->rx_packets += BNGE_GET_RING_STATS64(sw, rx_mcast_pkts);
+ stats->rx_packets += BNGE_GET_RING_STATS64(sw, rx_bcast_pkts);
+
+ stats->tx_packets += BNGE_GET_RING_STATS64(sw, tx_ucast_pkts);
+ stats->tx_packets += BNGE_GET_RING_STATS64(sw, tx_mcast_pkts);
+ stats->tx_packets += BNGE_GET_RING_STATS64(sw, tx_bcast_pkts);
+
+ stats->rx_bytes += BNGE_GET_RING_STATS64(sw, rx_ucast_bytes);
+ stats->rx_bytes += BNGE_GET_RING_STATS64(sw, rx_mcast_bytes);
+ stats->rx_bytes += BNGE_GET_RING_STATS64(sw, rx_bcast_bytes);
+
+ stats->tx_bytes += BNGE_GET_RING_STATS64(sw, tx_ucast_bytes);
+ stats->tx_bytes += BNGE_GET_RING_STATS64(sw, tx_mcast_bytes);
+ stats->tx_bytes += BNGE_GET_RING_STATS64(sw, tx_bcast_bytes);
+
+ stats->rx_missed_errors +=
+ BNGE_GET_RING_STATS64(sw, rx_discard_pkts);
+
+ stats->multicast += BNGE_GET_RING_STATS64(sw, rx_mcast_pkts);
+
+ stats->tx_dropped += BNGE_GET_RING_STATS64(sw, tx_error_pkts);
+
+ stats->rx_dropped +=
+ nqr->sw_stats->rx.rx_netpoll_discards +
+ nqr->sw_stats->rx.rx_oom_discards;
+ }
+}
+
+static void bnge_add_prev_stats(struct bnge_net *bn,
+ struct rtnl_link_stats64 *stats)
+{
+ struct rtnl_link_stats64 *prev_stats = &bn->net_stats_prev;
+
+ stats->rx_packets += prev_stats->rx_packets;
+ stats->tx_packets += prev_stats->tx_packets;
+ stats->rx_bytes += prev_stats->rx_bytes;
+ stats->tx_bytes += prev_stats->tx_bytes;
+ stats->rx_missed_errors += prev_stats->rx_missed_errors;
+ stats->multicast += prev_stats->multicast;
+ stats->rx_dropped += prev_stats->rx_dropped;
+ stats->tx_dropped += prev_stats->tx_dropped;
+}
+
+static void bnge_get_stats64(struct net_device *dev,
+ struct rtnl_link_stats64 *stats)
+{
+ struct bnge_net *bn = netdev_priv(dev);
+ struct bnge_dev *bd = bn->bd;
+
+ rcu_read_lock();
+ if (!test_bit(BNGE_STATE_OPEN, &bd->state)) {
+ rcu_read_unlock();
+ *stats = bn->net_stats_prev;
+ return;
+ }
+
+ bnge_get_ring_stats(bd, stats);
+ bnge_add_prev_stats(bn, stats);
+
+ if (bn->flags & BNGE_FLAG_PORT_STATS) {
+ u64 *rx = bn->port_stats.sw_stats;
+ u64 *tx = bn->port_stats.sw_stats +
+ BNGE_TX_PORT_STATS_BYTE_OFFSET / 8;
+
+ stats->rx_crc_errors =
+ BNGE_GET_RX_PORT_STATS64(rx, rx_fcs_err_frames);
+ stats->rx_frame_errors =
+ BNGE_GET_RX_PORT_STATS64(rx, rx_align_err_frames);
+ stats->rx_length_errors =
+ BNGE_GET_RX_PORT_STATS64(rx, rx_undrsz_frames) +
+ BNGE_GET_RX_PORT_STATS64(rx, rx_ovrsz_frames) +
+ BNGE_GET_RX_PORT_STATS64(rx, rx_runt_frames);
+ stats->rx_errors =
+ BNGE_GET_RX_PORT_STATS64(rx, rx_false_carrier_frames) +
+ BNGE_GET_RX_PORT_STATS64(rx, rx_jbr_frames);
+ stats->collisions =
+ BNGE_GET_TX_PORT_STATS64(tx, tx_total_collisions);
+ stats->tx_fifo_errors =
+ BNGE_GET_TX_PORT_STATS64(tx, tx_fifo_underruns);
+ stats->tx_errors = BNGE_GET_TX_PORT_STATS64(tx, tx_err);
+ }
+ rcu_read_unlock();
+}
+
static void bnge_close_core(struct bnge_net *bn)
{
struct bnge_dev *bd = bn->bd;
@@ -2918,10 +3014,18 @@ static void bnge_close_core(struct bnge_net *bn)
bnge_tx_disable(bn);
clear_bit(BNGE_STATE_OPEN, &bd->state);
+ synchronize_rcu();
timer_delete_sync(&bn->timer);
bnge_shutdown_nic(bn);
bnge_disable_napi(bn);
+
+ /* Save ring stats before shutdown */
+ if (bn->bnapi) {
+ bnge_get_ring_stats(bd, &bn->net_stats_prev);
+ bnge_get_ring_err_stats(bn, &bn->ring_err_stats_prev);
+ }
+
bnge_free_all_rings_bufs(bn);
bnge_free_irq(bn);
bnge_del_napi(bn);
@@ -2971,6 +3075,7 @@ static const struct net_device_ops bnge_netdev_ops = {
.ndo_open = bnge_open,
.ndo_stop = bnge_close,
.ndo_start_xmit = bnge_start_xmit,
+ .ndo_get_stats64 = bnge_get_stats64,
.ndo_features_check = bnge_features_check,
};
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h
index ca1977b58964..6be7dbe44be8 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.h
@@ -381,6 +381,7 @@ struct bnge_net {
u64 flags;
+ struct rtnl_link_stats64 net_stats_prev;
struct bnge_total_ring_err_stats ring_err_stats_prev;
struct bnge_stats_mem port_stats;
--
2.47.3
* [PATCH net-next v6 09/10] bng_en: implement netdev_stat_ops
2026-03-10 14:40 [PATCH net-next v6 00/10] bng_en: add link management and statistics support Bhargava Marreddy
` (7 preceding siblings ...)
2026-03-10 14:40 ` [PATCH net-next v6 08/10] bng_en: implement ndo_get_stats64 Bhargava Marreddy
@ 2026-03-10 14:40 ` Bhargava Marreddy
2026-03-10 14:40 ` [PATCH net-next v6 10/10] bng_en: add support for ethtool -S stats display Bhargava Marreddy
9 siblings, 0 replies; 16+ messages in thread
From: Bhargava Marreddy @ 2026-03-10 14:40 UTC (permalink / raw)
To: davem, edumazet, kuba, pabeni, andrew+netdev, horms
Cc: netdev, linux-kernel, michael.chan, pavan.chebbi,
vsrama-krishna.nemani, vikas.gupta, Bhargava Marreddy
Implement netdev_stat_ops to provide standardized per-queue
statistics via the Netlink API.
The implementation was verified using the ynl tool:
$ ./cli.py --spec netlink/specs/netdev.yaml --dump \
qstats-get --json '{"ifindex":5, "scope":"queue"}'
[{'ifindex': 5, 'queue-id': 0, 'queue-type': 'rx',
'rx-alloc-fail': 0, 'rx-bytes': 1024, 'rx-packets': 10},
{'ifindex': 5, 'queue-id': 1, 'queue-type': 'rx',
'rx-alloc-fail': 0, 'rx-bytes': 2048, 'rx-packets': 20},
{'ifindex': 5, 'queue-id': 0, 'queue-type': 'tx',
'tx-bytes': 512, 'tx-packets': 5},
{'ifindex': 5, 'queue-id': 1, 'queue-type': 'tx',
'tx-bytes': 1536, 'tx-packets': 15}]
Signed-off-by: Bhargava Marreddy <bhargava.marreddy@broadcom.com>
Reviewed-by: Vikas Gupta <vikas.gupta@broadcom.com>
---
.../net/ethernet/broadcom/bnge/bnge_netdev.c | 71 +++++++++++++++++++
1 file changed, 71 insertions(+)
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
index 03872bde4cd0..4925d31fc152 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
@@ -3071,6 +3071,76 @@ void bnge_get_ring_err_stats(struct bnge_net *bn,
&bn->bnapi[i]->nq_ring);
}
+static void bnge_get_queue_stats_rx(struct net_device *dev, int i,
+ struct netdev_queue_stats_rx *stats)
+{
+ struct bnge_net *bn = netdev_priv(dev);
+ struct bnge_nq_ring_info *nqr;
+ u64 *sw;
+
+ if (!bn->bnapi)
+ return;
+
+ nqr = &bn->bnapi[i]->nq_ring;
+ sw = nqr->stats.sw_stats;
+
+ stats->packets = 0;
+ stats->packets += BNGE_GET_RING_STATS64(sw, rx_ucast_pkts);
+ stats->packets += BNGE_GET_RING_STATS64(sw, rx_mcast_pkts);
+ stats->packets += BNGE_GET_RING_STATS64(sw, rx_bcast_pkts);
+
+ stats->bytes = 0;
+ stats->bytes += BNGE_GET_RING_STATS64(sw, rx_ucast_bytes);
+ stats->bytes += BNGE_GET_RING_STATS64(sw, rx_mcast_bytes);
+ stats->bytes += BNGE_GET_RING_STATS64(sw, rx_bcast_bytes);
+
+ stats->alloc_fail = nqr->sw_stats->rx.rx_oom_discards;
+}
+
+static void bnge_get_queue_stats_tx(struct net_device *dev, int i,
+ struct netdev_queue_stats_tx *stats)
+{
+ struct bnge_net *bn = netdev_priv(dev);
+ struct bnge_napi *bnapi;
+ u64 *sw;
+
+ if (!bn->tx_ring)
+ return;
+
+ bnapi = bn->tx_ring[bn->tx_ring_map[i]].bnapi;
+ sw = bnapi->nq_ring.stats.sw_stats;
+
+ stats->packets = 0;
+ stats->packets += BNGE_GET_RING_STATS64(sw, tx_ucast_pkts);
+ stats->packets += BNGE_GET_RING_STATS64(sw, tx_mcast_pkts);
+ stats->packets += BNGE_GET_RING_STATS64(sw, tx_bcast_pkts);
+
+ stats->bytes = 0;
+ stats->bytes += BNGE_GET_RING_STATS64(sw, tx_ucast_bytes);
+ stats->bytes += BNGE_GET_RING_STATS64(sw, tx_mcast_bytes);
+ stats->bytes += BNGE_GET_RING_STATS64(sw, tx_bcast_bytes);
+}
+
+static void bnge_get_base_stats(struct net_device *dev,
+ struct netdev_queue_stats_rx *rx,
+ struct netdev_queue_stats_tx *tx)
+{
+ struct bnge_net *bn = netdev_priv(dev);
+
+ rx->packets = bn->net_stats_prev.rx_packets;
+ rx->bytes = bn->net_stats_prev.rx_bytes;
+ rx->alloc_fail = bn->ring_err_stats_prev.rx_total_oom_discards;
+
+ tx->packets = bn->net_stats_prev.tx_packets;
+ tx->bytes = bn->net_stats_prev.tx_bytes;
+}
+
+static const struct netdev_stat_ops bnge_stat_ops = {
+ .get_queue_stats_rx = bnge_get_queue_stats_rx,
+ .get_queue_stats_tx = bnge_get_queue_stats_tx,
+ .get_base_stats = bnge_get_base_stats,
+};
+
static const struct net_device_ops bnge_netdev_ops = {
.ndo_open = bnge_open,
.ndo_stop = bnge_close,
@@ -3234,6 +3304,7 @@ int bnge_netdev_alloc(struct bnge_dev *bd, int max_irqs)
bd->netdev = netdev;
netdev->netdev_ops = &bnge_netdev_ops;
+ netdev->stat_ops = &bnge_stat_ops;
bnge_set_ethtool_ops(netdev);
--
2.47.3
* [PATCH net-next v6 10/10] bng_en: add support for ethtool -S stats display
2026-03-10 14:40 [PATCH net-next v6 00/10] bng_en: add link management and statistics support Bhargava Marreddy
` (8 preceding siblings ...)
2026-03-10 14:40 ` [PATCH net-next v6 09/10] bng_en: implement netdev_stat_ops Bhargava Marreddy
@ 2026-03-10 14:40 ` Bhargava Marreddy
2026-03-12 7:01 ` ALOK TIWARI
2026-03-13 2:22 ` Jakub Kicinski
9 siblings, 2 replies; 16+ messages in thread
From: Bhargava Marreddy @ 2026-03-10 14:40 UTC (permalink / raw)
To: davem, edumazet, kuba, pabeni, andrew+netdev, horms
Cc: netdev, linux-kernel, michael.chan, pavan.chebbi,
vsrama-krishna.nemani, vikas.gupta, Bhargava Marreddy
Implement the legacy ethtool statistics interface (.get_sset_count,
.get_strings, and .get_ethtool_stats) to provide per-queue and
port-level visibility.
A. Exclusions:
To comply with the "no duplication" policy, exclude all stats already
covered by rtnl_link_stats64.
For example:
1. tx_err
2. tx_fifo_underruns
3. tx_total_collisions
B. Retained Stats Analysis:
Retain the counters that provide unique per-queue granularity or
represent different hardware events:
1. rxqN_errors
2. txqN_errors
These provide per-ring visibility for debugging.
While they look similar to ndo_get_stats64's [rx/tx]_errors counters,
they map to different hardware sources:
For example:
- ethtool rxqN_errors: ctx_hw_stats->rx_error_pkts (per-ring)
- ndo_get_stats64 rx_errors: (rx_port_stats->rx_false_carrier_frames +
rx_jbr_frames) (per-port)
Signed-off-by: Bhargava Marreddy <bhargava.marreddy@broadcom.com>
Reviewed-by: Vikas Gupta <vikas.gupta@broadcom.com>
---
.../net/ethernet/broadcom/bnge/bnge_ethtool.c | 566 ++++++++++++++++++
1 file changed, 566 insertions(+)
diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_ethtool.c b/drivers/net/ethernet/broadcom/bnge/bnge_ethtool.c
index 2ae13f18e2d7..1a15c3ff0149 100644
--- a/drivers/net/ethernet/broadcom/bnge/bnge_ethtool.c
+++ b/drivers/net/ethernet/broadcom/bnge/bnge_ethtool.c
@@ -31,6 +31,328 @@ static int bnge_nway_reset(struct net_device *dev)
return rc;
}
+static const char * const bnge_ring_rx_stats_str[] = {
+ "ucast_packets",
+ "mcast_packets",
+ "bcast_packets",
+ "discards",
+ "errors",
+ "ucast_bytes",
+ "mcast_bytes",
+ "bcast_bytes",
+};
+
+static const char * const bnge_ring_tx_stats_str[] = {
+ "ucast_packets",
+ "mcast_packets",
+ "bcast_packets",
+ "errors",
+ "discards",
+ "ucast_bytes",
+ "mcast_bytes",
+ "bcast_bytes",
+};
+
+static const char * const bnge_ring_tpa_stats_str[] = {
+ "tpa_packets",
+ "tpa_bytes",
+ "tpa_events",
+ "tpa_aborts",
+};
+
+static const char * const bnge_ring_tpa2_stats_str[] = {
+ "tpa_eligible_pkt",
+ "tpa_eligible_bytes",
+ "tpa_pkt",
+ "tpa_bytes",
+ "tpa_errors",
+ "tpa_events",
+};
+
+static const char * const bnge_rx_sw_stats_str[] = {
+ "l4_csum_errors",
+ "resets",
+ "buf_errors",
+};
+
+#define BNGE_RX_STATS_ENTRY(counter) \
+ { BNGE_RX_STATS_OFFSET(counter), __stringify(counter) }
+
+#define BNGE_TX_STATS_ENTRY(counter) \
+ { BNGE_TX_STATS_OFFSET(counter), __stringify(counter) }
+
+#define BNGE_RX_STATS_EXT_ENTRY(counter) \
+ { BNGE_RX_STATS_EXT_OFFSET(counter), __stringify(counter) }
+
+#define BNGE_TX_STATS_EXT_ENTRY(counter) \
+ { BNGE_TX_STATS_EXT_OFFSET(counter), __stringify(counter) }
+
+#define BNGE_RX_STATS_EXT_PFC_ENTRY(n) \
+ BNGE_RX_STATS_EXT_ENTRY(pfc_pri##n##_rx_duration_us), \
+ BNGE_RX_STATS_EXT_ENTRY(pfc_pri##n##_rx_transitions)
+
+#define BNGE_TX_STATS_EXT_PFC_ENTRY(n) \
+ BNGE_TX_STATS_EXT_ENTRY(pfc_pri##n##_tx_duration_us), \
+ BNGE_TX_STATS_EXT_ENTRY(pfc_pri##n##_tx_transitions)
+
+#define BNGE_RX_STATS_EXT_PFC_ENTRIES \
+ BNGE_RX_STATS_EXT_PFC_ENTRY(0), \
+ BNGE_RX_STATS_EXT_PFC_ENTRY(1), \
+ BNGE_RX_STATS_EXT_PFC_ENTRY(2), \
+ BNGE_RX_STATS_EXT_PFC_ENTRY(3), \
+ BNGE_RX_STATS_EXT_PFC_ENTRY(4), \
+ BNGE_RX_STATS_EXT_PFC_ENTRY(5), \
+ BNGE_RX_STATS_EXT_PFC_ENTRY(6), \
+ BNGE_RX_STATS_EXT_PFC_ENTRY(7)
+
+#define BNGE_TX_STATS_EXT_PFC_ENTRIES \
+ BNGE_TX_STATS_EXT_PFC_ENTRY(0), \
+ BNGE_TX_STATS_EXT_PFC_ENTRY(1), \
+ BNGE_TX_STATS_EXT_PFC_ENTRY(2), \
+ BNGE_TX_STATS_EXT_PFC_ENTRY(3), \
+ BNGE_TX_STATS_EXT_PFC_ENTRY(4), \
+ BNGE_TX_STATS_EXT_PFC_ENTRY(5), \
+ BNGE_TX_STATS_EXT_PFC_ENTRY(6), \
+ BNGE_TX_STATS_EXT_PFC_ENTRY(7)
+
+#define BNGE_RX_STATS_EXT_COS_ENTRY(n) \
+ BNGE_RX_STATS_EXT_ENTRY(rx_bytes_cos##n), \
+ BNGE_RX_STATS_EXT_ENTRY(rx_packets_cos##n)
+
+#define BNGE_TX_STATS_EXT_COS_ENTRY(n) \
+ BNGE_TX_STATS_EXT_ENTRY(tx_bytes_cos##n), \
+ BNGE_TX_STATS_EXT_ENTRY(tx_packets_cos##n)
+
+#define BNGE_RX_STATS_EXT_COS_ENTRIES \
+ BNGE_RX_STATS_EXT_COS_ENTRY(0), \
+ BNGE_RX_STATS_EXT_COS_ENTRY(1), \
+ BNGE_RX_STATS_EXT_COS_ENTRY(2), \
+ BNGE_RX_STATS_EXT_COS_ENTRY(3), \
+ BNGE_RX_STATS_EXT_COS_ENTRY(4), \
+ BNGE_RX_STATS_EXT_COS_ENTRY(5), \
+ BNGE_RX_STATS_EXT_COS_ENTRY(6), \
+ BNGE_RX_STATS_EXT_COS_ENTRY(7)
+
+#define BNGE_TX_STATS_EXT_COS_ENTRIES \
+ BNGE_TX_STATS_EXT_COS_ENTRY(0), \
+ BNGE_TX_STATS_EXT_COS_ENTRY(1), \
+ BNGE_TX_STATS_EXT_COS_ENTRY(2), \
+ BNGE_TX_STATS_EXT_COS_ENTRY(3), \
+ BNGE_TX_STATS_EXT_COS_ENTRY(4), \
+ BNGE_TX_STATS_EXT_COS_ENTRY(5), \
+ BNGE_TX_STATS_EXT_COS_ENTRY(6), \
+ BNGE_TX_STATS_EXT_COS_ENTRY(7)
+
+#define BNGE_RX_STATS_EXT_DISCARD_COS_ENTRY(n) \
+ BNGE_RX_STATS_EXT_ENTRY(rx_discard_bytes_cos##n), \
+ BNGE_RX_STATS_EXT_ENTRY(rx_discard_packets_cos##n)
+
+#define BNGE_RX_STATS_EXT_DISCARD_COS_ENTRIES \
+ BNGE_RX_STATS_EXT_DISCARD_COS_ENTRY(0), \
+ BNGE_RX_STATS_EXT_DISCARD_COS_ENTRY(1), \
+ BNGE_RX_STATS_EXT_DISCARD_COS_ENTRY(2), \
+ BNGE_RX_STATS_EXT_DISCARD_COS_ENTRY(3), \
+ BNGE_RX_STATS_EXT_DISCARD_COS_ENTRY(4), \
+ BNGE_RX_STATS_EXT_DISCARD_COS_ENTRY(5), \
+ BNGE_RX_STATS_EXT_DISCARD_COS_ENTRY(6), \
+ BNGE_RX_STATS_EXT_DISCARD_COS_ENTRY(7)
+
+#define BNGE_RX_STATS_PRI_ENTRY(counter, n) \
+ { BNGE_RX_STATS_EXT_OFFSET(counter##_cos0), \
+ __stringify(counter##_pri##n) }
+
+#define BNGE_TX_STATS_PRI_ENTRY(counter, n) \
+ { BNGE_TX_STATS_EXT_OFFSET(counter##_cos0), \
+ __stringify(counter##_pri##n) }
+
+#define BNGE_RX_STATS_PRI_ENTRIES(counter) \
+ BNGE_RX_STATS_PRI_ENTRY(counter, 0), \
+ BNGE_RX_STATS_PRI_ENTRY(counter, 1), \
+ BNGE_RX_STATS_PRI_ENTRY(counter, 2), \
+ BNGE_RX_STATS_PRI_ENTRY(counter, 3), \
+ BNGE_RX_STATS_PRI_ENTRY(counter, 4), \
+ BNGE_RX_STATS_PRI_ENTRY(counter, 5), \
+ BNGE_RX_STATS_PRI_ENTRY(counter, 6), \
+ BNGE_RX_STATS_PRI_ENTRY(counter, 7)
+
+#define BNGE_TX_STATS_PRI_ENTRIES(counter) \
+ BNGE_TX_STATS_PRI_ENTRY(counter, 0), \
+ BNGE_TX_STATS_PRI_ENTRY(counter, 1), \
+ BNGE_TX_STATS_PRI_ENTRY(counter, 2), \
+ BNGE_TX_STATS_PRI_ENTRY(counter, 3), \
+ BNGE_TX_STATS_PRI_ENTRY(counter, 4), \
+ BNGE_TX_STATS_PRI_ENTRY(counter, 5), \
+ BNGE_TX_STATS_PRI_ENTRY(counter, 6), \
+ BNGE_TX_STATS_PRI_ENTRY(counter, 7)
+
+static const char *const bnge_ring_err_stats_arr[] = {
+ "rx_total_l4_csum_errors",
+ "rx_total_resets",
+ "rx_total_buf_errors",
+ "rx_total_oom_discards",
+ "rx_total_netpoll_discards",
+ "rx_total_ring_discards",
+ "tx_total_resets",
+ "tx_total_ring_discards",
+};
+
+#define NUM_RING_RX_SW_STATS ARRAY_SIZE(bnge_rx_sw_stats_str)
+#define NUM_RING_RX_HW_STATS ARRAY_SIZE(bnge_ring_rx_stats_str)
+#define NUM_RING_TX_HW_STATS ARRAY_SIZE(bnge_ring_tx_stats_str)
+
+static const struct {
+ long offset;
+ char string[ETH_GSTRING_LEN];
+} bnge_tx_port_stats_ext_arr[] = {
+ BNGE_TX_STATS_EXT_COS_ENTRIES,
+ BNGE_TX_STATS_EXT_PFC_ENTRIES,
+};
+
+static const struct {
+ long base_off;
+ char string[ETH_GSTRING_LEN];
+} bnge_rx_bytes_pri_arr[] = {
+ BNGE_RX_STATS_PRI_ENTRIES(rx_bytes),
+};
+
+static const struct {
+ long base_off;
+ char string[ETH_GSTRING_LEN];
+} bnge_rx_pkts_pri_arr[] = {
+ BNGE_RX_STATS_PRI_ENTRIES(rx_packets),
+};
+
+static const struct {
+ long base_off;
+ char string[ETH_GSTRING_LEN];
+} bnge_tx_bytes_pri_arr[] = {
+ BNGE_TX_STATS_PRI_ENTRIES(tx_bytes),
+};
+
+static const struct {
+ long base_off;
+ char string[ETH_GSTRING_LEN];
+} bnge_tx_pkts_pri_arr[] = {
+ BNGE_TX_STATS_PRI_ENTRIES(tx_packets),
+};
+
+static const struct {
+ long offset;
+ char string[ETH_GSTRING_LEN];
+} bnge_port_stats_arr[] = {
+ BNGE_RX_STATS_ENTRY(rx_good_vlan_frames),
+ BNGE_RX_STATS_ENTRY(rx_total_frames),
+ BNGE_RX_STATS_ENTRY(rx_ucast_frames),
+ BNGE_RX_STATS_ENTRY(rx_pfc_frames),
+ BNGE_RX_STATS_ENTRY(rx_mtu_err_frames),
+ BNGE_RX_STATS_ENTRY(rx_tagged_frames),
+ BNGE_RX_STATS_ENTRY(rx_double_tagged_frames),
+ BNGE_RX_STATS_ENTRY(rx_pfc_ena_frames_pri0),
+ BNGE_RX_STATS_ENTRY(rx_pfc_ena_frames_pri1),
+ BNGE_RX_STATS_ENTRY(rx_pfc_ena_frames_pri2),
+ BNGE_RX_STATS_ENTRY(rx_pfc_ena_frames_pri3),
+ BNGE_RX_STATS_ENTRY(rx_pfc_ena_frames_pri4),
+ BNGE_RX_STATS_ENTRY(rx_pfc_ena_frames_pri5),
+ BNGE_RX_STATS_ENTRY(rx_pfc_ena_frames_pri6),
+ BNGE_RX_STATS_ENTRY(rx_pfc_ena_frames_pri7),
+ BNGE_RX_STATS_ENTRY(rx_eee_lpi_events),
+ BNGE_RX_STATS_ENTRY(rx_eee_lpi_duration),
+ BNGE_RX_STATS_ENTRY(rx_runt_bytes),
+ BNGE_RX_STATS_ENTRY(rx_runt_frames),
+ BNGE_RX_STATS_ENTRY(rx_stat_discard),
+ BNGE_RX_STATS_ENTRY(rx_stat_err),
+
+ BNGE_TX_STATS_ENTRY(tx_good_vlan_frames),
+ BNGE_TX_STATS_ENTRY(tx_total_frames),
+ BNGE_TX_STATS_ENTRY(tx_ucast_frames),
+ BNGE_TX_STATS_ENTRY(tx_pfc_frames),
+ BNGE_TX_STATS_ENTRY(tx_jabber_frames),
+ BNGE_TX_STATS_ENTRY(tx_fcs_err_frames),
+ BNGE_TX_STATS_ENTRY(tx_pfc_ena_frames_pri0),
+ BNGE_TX_STATS_ENTRY(tx_pfc_ena_frames_pri1),
+ BNGE_TX_STATS_ENTRY(tx_pfc_ena_frames_pri2),
+ BNGE_TX_STATS_ENTRY(tx_pfc_ena_frames_pri3),
+ BNGE_TX_STATS_ENTRY(tx_pfc_ena_frames_pri4),
+ BNGE_TX_STATS_ENTRY(tx_pfc_ena_frames_pri5),
+ BNGE_TX_STATS_ENTRY(tx_pfc_ena_frames_pri6),
+ BNGE_TX_STATS_ENTRY(tx_pfc_ena_frames_pri7),
+ BNGE_TX_STATS_ENTRY(tx_eee_lpi_events),
+ BNGE_TX_STATS_ENTRY(tx_eee_lpi_duration),
+ BNGE_TX_STATS_ENTRY(tx_xthol_frames),
+ BNGE_TX_STATS_ENTRY(tx_stat_discard),
+ BNGE_TX_STATS_ENTRY(tx_stat_error),
+};
+
+static const struct {
+ long offset;
+ char string[ETH_GSTRING_LEN];
+} bnge_port_stats_ext_arr[] = {
+ BNGE_RX_STATS_EXT_ENTRY(link_down_events),
+ BNGE_RX_STATS_EXT_ENTRY(continuous_pause_events),
+ BNGE_RX_STATS_EXT_ENTRY(resume_pause_events),
+ BNGE_RX_STATS_EXT_ENTRY(continuous_roce_pause_events),
+ BNGE_RX_STATS_EXT_ENTRY(resume_roce_pause_events),
+ BNGE_RX_STATS_EXT_COS_ENTRIES,
+ BNGE_RX_STATS_EXT_PFC_ENTRIES,
+ BNGE_RX_STATS_EXT_ENTRY(rx_bits),
+ BNGE_RX_STATS_EXT_ENTRY(rx_buffer_passed_threshold),
+ BNGE_RX_STATS_EXT_ENTRY(rx_corrected_bits),
+ BNGE_RX_STATS_EXT_DISCARD_COS_ENTRIES,
+ BNGE_RX_STATS_EXT_ENTRY(rx_fec_corrected_blocks),
+ BNGE_RX_STATS_EXT_ENTRY(rx_fec_uncorrectable_blocks),
+ BNGE_RX_STATS_EXT_ENTRY(rx_filter_miss),
+};
+
+static int bnge_get_num_tpa_ring_stats(struct bnge_dev *bd)
+{
+ if (BNGE_SUPPORTS_TPA(bd))
+ return BNGE_NUM_TPA_RING_STATS;
+ return 0;
+}
+
+#define BNGE_NUM_RING_ERR_STATS ARRAY_SIZE(bnge_ring_err_stats_arr)
+#define BNGE_NUM_PORT_STATS ARRAY_SIZE(bnge_port_stats_arr)
+#define BNGE_NUM_STATS_PRI \
+ (ARRAY_SIZE(bnge_rx_bytes_pri_arr) + \
+ ARRAY_SIZE(bnge_rx_pkts_pri_arr) + \
+ ARRAY_SIZE(bnge_tx_bytes_pri_arr) + \
+ ARRAY_SIZE(bnge_tx_pkts_pri_arr))
+
+static int bnge_get_num_ring_stats(struct bnge_dev *bd)
+{
+ int rx, tx;
+
+ rx = NUM_RING_RX_HW_STATS + NUM_RING_RX_SW_STATS +
+ bnge_get_num_tpa_ring_stats(bd);
+ tx = NUM_RING_TX_HW_STATS;
+ return rx * bd->rx_nr_rings +
+ tx * bd->tx_nr_rings_per_tc;
+}
+
+static u32 bnge_get_num_stats(struct bnge_net *bn)
+{
+ u32 num_stats = bnge_get_num_ring_stats(bn->bd);
+ u32 len;
+
+ num_stats += BNGE_NUM_RING_ERR_STATS;
+
+ if (bn->flags & BNGE_FLAG_PORT_STATS)
+ num_stats += BNGE_NUM_PORT_STATS;
+
+ if (bn->flags & BNGE_FLAG_PORT_STATS_EXT) {
+ len = min_t(int, bn->fw_rx_stats_ext_size,
+ ARRAY_SIZE(bnge_port_stats_ext_arr));
+ num_stats += len;
+ len = min_t(int, bn->fw_tx_stats_ext_size,
+ ARRAY_SIZE(bnge_tx_port_stats_ext_arr));
+ num_stats += len;
+ if (bn->pri2cos_valid)
+ num_stats += BNGE_NUM_STATS_PRI;
+ }
+
+ return num_stats;
+}
+
static void bnge_get_drvinfo(struct net_device *dev,
struct ethtool_drvinfo *info)
{
@@ -40,6 +362,247 @@ static void bnge_get_drvinfo(struct net_device *dev,
strscpy(info->driver, DRV_NAME, sizeof(info->driver));
strscpy(info->fw_version, bd->fw_ver_str, sizeof(info->fw_version));
strscpy(info->bus_info, pci_name(bd->pdev), sizeof(info->bus_info));
+ info->n_stats = bnge_get_num_stats(bn);
+}
+
+static int bnge_get_sset_count(struct net_device *dev, int sset)
+{
+ struct bnge_net *bn = netdev_priv(dev);
+
+ switch (sset) {
+ case ETH_SS_STATS:
+ return bnge_get_num_stats(bn);
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
+static bool is_rx_ring(struct bnge_dev *bd, u16 ring_num)
+{
+ return ring_num < bd->rx_nr_rings;
+}
+
+static bool is_tx_ring(struct bnge_dev *bd, u16 ring_num)
+{
+ u16 tx_base = 0;
+
+ if (!(bd->flags & BNGE_EN_SHARED_CHNL))
+ tx_base = bd->rx_nr_rings;
+
+ if (ring_num >= tx_base && ring_num < (tx_base + bd->tx_nr_rings))
+ return true;
+ return false;
+}
+
+static void bnge_get_ethtool_stats(struct net_device *dev,
+ struct ethtool_stats *stats, u64 *buf)
+{
+ struct bnge_total_ring_err_stats ring_err_stats = {};
+ struct bnge_net *bn = netdev_priv(dev);
+ struct bnge_dev *bd = bn->bd;
+ u64 *curr, *prev;
+ u32 tpa_stats;
+ u32 i, j = 0;
+
+ if (!bn->bnapi) {
+ j += bnge_get_num_ring_stats(bd);
+ goto skip_ring_stats;
+ }
+
+ tpa_stats = bnge_get_num_tpa_ring_stats(bd);
+ for (i = 0; i < bd->nq_nr_rings; i++) {
+ struct bnge_napi *bnapi = bn->bnapi[i];
+ struct bnge_nq_ring_info *nqr = &bnapi->nq_ring;
+ u64 *sw_stats = nqr->stats.sw_stats;
+ u64 *sw;
+ int k;
+
+ if (is_rx_ring(bd, i)) {
+ for (k = 0; k < NUM_RING_RX_HW_STATS; j++, k++)
+ buf[j] = sw_stats[k];
+ }
+ if (is_tx_ring(bd, i)) {
+ k = NUM_RING_RX_HW_STATS;
+ for (; k < NUM_RING_RX_HW_STATS + NUM_RING_TX_HW_STATS;
+ j++, k++)
+ buf[j] = sw_stats[k];
+ }
+ if (!tpa_stats || !is_rx_ring(bd, i))
+ goto skip_tpa_ring_stats;
+
+ k = NUM_RING_RX_HW_STATS + NUM_RING_TX_HW_STATS;
+ for (; k < NUM_RING_RX_HW_STATS + NUM_RING_TX_HW_STATS +
+ tpa_stats; j++, k++)
+ buf[j] = sw_stats[k];
+
+skip_tpa_ring_stats:
+ sw = (u64 *)&nqr->sw_stats->rx;
+ if (is_rx_ring(bd, i)) {
+ for (k = 0; k < NUM_RING_RX_SW_STATS; j++, k++)
+ buf[j] = sw[k];
+ }
+ }
+
+ bnge_get_ring_err_stats(bn, &ring_err_stats);
+
+skip_ring_stats:
+ curr = &ring_err_stats.rx_total_l4_csum_errors;
+ prev = &bn->ring_err_stats_prev.rx_total_l4_csum_errors;
+ for (i = 0; i < BNGE_NUM_RING_ERR_STATS; i++, j++, curr++, prev++)
+ buf[j] = *curr + *prev;
+
+ if (bn->flags & BNGE_FLAG_PORT_STATS) {
+ u64 *port_stats = bn->port_stats.sw_stats;
+
+ for (i = 0; i < BNGE_NUM_PORT_STATS; i++, j++)
+ buf[j] = *(port_stats + bnge_port_stats_arr[i].offset);
+ }
+ if (bn->flags & BNGE_FLAG_PORT_STATS_EXT) {
+ u64 *rx_port_stats_ext = bn->rx_port_stats_ext.sw_stats;
+ u64 *tx_port_stats_ext = bn->tx_port_stats_ext.sw_stats;
+ u32 len;
+
+ len = min_t(u32, bn->fw_rx_stats_ext_size,
+ ARRAY_SIZE(bnge_port_stats_ext_arr));
+ for (i = 0; i < len; i++, j++) {
+ buf[j] = *(rx_port_stats_ext +
+ bnge_port_stats_ext_arr[i].offset);
+ }
+ len = min_t(u32, bn->fw_tx_stats_ext_size,
+ ARRAY_SIZE(bnge_tx_port_stats_ext_arr));
+ for (i = 0; i < len; i++, j++) {
+ buf[j] = *(tx_port_stats_ext +
+ bnge_tx_port_stats_ext_arr[i].offset);
+ }
+ if (bn->pri2cos_valid) {
+ for (i = 0; i < 8; i++, j++) {
+ long n = bnge_rx_bytes_pri_arr[i].base_off +
+ bn->pri2cos_idx[i];
+
+ buf[j] = *(rx_port_stats_ext + n);
+ }
+ for (i = 0; i < 8; i++, j++) {
+ long n = bnge_rx_pkts_pri_arr[i].base_off +
+ bn->pri2cos_idx[i];
+
+ buf[j] = *(rx_port_stats_ext + n);
+ }
+ for (i = 0; i < 8; i++, j++) {
+ long n = bnge_tx_bytes_pri_arr[i].base_off +
+ bn->pri2cos_idx[i];
+
+ buf[j] = *(tx_port_stats_ext + n);
+ }
+ for (i = 0; i < 8; i++, j++) {
+ long n = bnge_tx_pkts_pri_arr[i].base_off +
+ bn->pri2cos_idx[i];
+
+ buf[j] = *(tx_port_stats_ext + n);
+ }
+ }
+ }
+}
+
+static void bnge_get_strings(struct net_device *dev, u32 stringset, u8 *buf)
+{
+ struct bnge_net *bn = netdev_priv(dev);
+ struct bnge_dev *bd = bn->bd;
+ u32 i, j, num_str;
+ const char *str;
+
+ switch (stringset) {
+ case ETH_SS_STATS:
+ for (i = 0; i < bd->nq_nr_rings; i++) {
+ if (is_rx_ring(bd, i))
+ for (j = 0; j < NUM_RING_RX_HW_STATS; j++) {
+ str = bnge_ring_rx_stats_str[j];
+ ethtool_sprintf(&buf, "rxq%d_%s", i,
+ str);
+ }
+ if (is_tx_ring(bd, i))
+ for (j = 0; j < NUM_RING_TX_HW_STATS; j++) {
+ str = bnge_ring_tx_stats_str[j];
+ ethtool_sprintf(&buf, "txq%d_%s", i,
+ str);
+ }
+ num_str = bnge_get_num_tpa_ring_stats(bd);
+ if (!num_str || !is_rx_ring(bd, i))
+ goto skip_tpa_stats;
+
+ if (bd->max_tpa_v2)
+ for (j = 0; j < num_str; j++) {
+ str = bnge_ring_tpa2_stats_str[j];
+ ethtool_sprintf(&buf, "rxq%d_%s", i,
+ str);
+ }
+ else
+ for (j = 0; j < num_str; j++) {
+ str = bnge_ring_tpa_stats_str[j];
+ ethtool_sprintf(&buf, "rxq%d_%s", i,
+ str);
+ }
+skip_tpa_stats:
+ if (is_rx_ring(bd, i))
+ for (j = 0; j < NUM_RING_RX_SW_STATS; j++) {
+ str = bnge_rx_sw_stats_str[j];
+ ethtool_sprintf(&buf, "rxq%d_%s", i,
+ str);
+ }
+ }
+ for (i = 0; i < BNGE_NUM_RING_ERR_STATS; i++)
+ ethtool_puts(&buf, bnge_ring_err_stats_arr[i]);
+
+ if (bn->flags & BNGE_FLAG_PORT_STATS)
+ for (i = 0; i < BNGE_NUM_PORT_STATS; i++) {
+ str = bnge_port_stats_arr[i].string;
+ ethtool_puts(&buf, str);
+ }
+
+ if (bn->flags & BNGE_FLAG_PORT_STATS_EXT) {
+ u32 len;
+
+ len = min_t(u32, bn->fw_rx_stats_ext_size,
+ ARRAY_SIZE(bnge_port_stats_ext_arr));
+ for (i = 0; i < len; i++) {
+ str = bnge_port_stats_ext_arr[i].string;
+ ethtool_puts(&buf, str);
+ }
+
+ len = min_t(u32, bn->fw_tx_stats_ext_size,
+ ARRAY_SIZE(bnge_tx_port_stats_ext_arr));
+ for (i = 0; i < len; i++) {
+ str = bnge_tx_port_stats_ext_arr[i].string;
+ ethtool_puts(&buf, str);
+ }
+
+ if (bn->pri2cos_valid) {
+ for (i = 0; i < 8; i++) {
+ str = bnge_rx_bytes_pri_arr[i].string;
+ ethtool_puts(&buf, str);
+ }
+
+ for (i = 0; i < 8; i++) {
+ str = bnge_rx_pkts_pri_arr[i].string;
+ ethtool_puts(&buf, str);
+ }
+
+ for (i = 0; i < 8; i++) {
+ str = bnge_tx_bytes_pri_arr[i].string;
+ ethtool_puts(&buf, str);
+ }
+
+ for (i = 0; i < 8; i++) {
+ str = bnge_tx_pkts_pri_arr[i].string;
+ ethtool_puts(&buf, str);
+ }
+ }
+ }
+ break;
+ default:
+ netdev_err(bd->netdev, "%s invalid request %x\n",
+ __func__, stringset);
+ break;
+ }
}
static void bnge_get_eth_phy_stats(struct net_device *dev,
@@ -262,6 +825,9 @@ static const struct ethtool_ops bnge_ethtool_ops = {
.nway_reset = bnge_nway_reset,
.get_pauseparam = bnge_get_pauseparam,
.set_pauseparam = bnge_set_pauseparam,
+ .get_sset_count = bnge_get_sset_count,
+ .get_strings = bnge_get_strings,
+ .get_ethtool_stats = bnge_get_ethtool_stats,
.get_eth_phy_stats = bnge_get_eth_phy_stats,
.get_eth_mac_stats = bnge_get_eth_mac_stats,
.get_eth_ctrl_stats = bnge_get_eth_ctrl_stats,
--
2.47.3
* Re: [PATCH net-next v6 02/10] bng_en: query PHY capabilities and report link status
2026-03-10 14:40 ` [PATCH net-next v6 02/10] bng_en: query PHY capabilities and report link status Bhargava Marreddy
@ 2026-03-11 15:44 ` ALOK TIWARI
2026-03-13 2:18 ` [net-next,v6,02/10] " Jakub Kicinski
1 sibling, 0 replies; 16+ messages in thread
From: ALOK TIWARI @ 2026-03-11 15:44 UTC (permalink / raw)
To: Bhargava Marreddy, davem, edumazet, kuba, pabeni, andrew+netdev,
horms
Cc: netdev, linux-kernel, michael.chan, pavan.chebbi,
vsrama-krishna.nemani, vikas.gupta, Rajashekar Hudumula,
Ajit Kumar Khaparde
On 3/10/2026 8:10 PM, Bhargava Marreddy wrote:
> +void bnge_report_link(struct bnge_dev *bd)
> +{
> + if (BNGE_LINK_IS_UP(bd)) {
> + const char *signal = "";
> + const char *flow_ctrl;
> + const char *duplex;
> + u32 speed;
> + u16 fec;
> +
> + netif_carrier_on(bd->netdev);
> + speed = bnge_fw_to_ethtool_speed(bd->link_info.link_speed);
> + if (speed == SPEED_UNKNOWN) {
> + netdev_info(bd->netdev,
> + "NIC Link is Up, speed unknown\n");
> + return;
> + }
> + if (bd->link_info.duplex == BNGE_LINK_DUPLEX_FULL)
> + duplex = "full";
> + else
> + duplex = "half";
> + if (bd->link_info.pause == BNGE_LINK_PAUSE_BOTH)
> + flow_ctrl = "ON - receive & transmit";
> + else if (bd->link_info.pause == BNGE_LINK_PAUSE_TX)
> + flow_ctrl = "ON - transmit";
> + else if (bd->link_info.pause == BNGE_LINK_PAUSE_RX)
> + flow_ctrl = "ON - receive";
> + else
> + flow_ctrl = "none";
> + if (bd->link_info.phy_qcfg_resp.option_flags &
> + PORT_PHY_QCFG_RESP_OPTION_FLAGS_SIGNAL_MODE_KNOWN) {
> + u8 sig_mode = bd->link_info.active_fec_sig_mode &
> + PORT_PHY_QCFG_RESP_SIGNAL_MODE_MASK;
> + switch (sig_mode) {
> + case PORT_PHY_QCFG_RESP_SIGNAL_MODE_NRZ:
> + signal = "(NRZ) ";
> + break;
> + case PORT_PHY_QCFG_RESP_SIGNAL_MODE_PAM4:
> + signal = "(PAM4 56Gbps) ";
> + break;
> + case PORT_PHY_QCFG_RESP_SIGNAL_MODE_PAM4_112:
> + signal = "(PAM4 112Gbps) ";
> + break;
> + default:
> + break;
> + }
> + }
> + netdev_info(bd->netdev, "NIC Link is Up, %u Mbps %s%s duplex, Flow control: %s\n",
> + speed, signal, duplex, flow_ctrl);
> + fec = bd->link_info.fec_cfg;
> + if (!(fec & PORT_PHY_QCFG_RESP_FEC_CFG_FEC_NONE_SUPPORTED))
> + netdev_info(bd->netdev, "FEC autoneg %s encoding: %s\n",
> + (fec & BNGE_FEC_AUTONEG) ? "on" : "off",
> + bnge_report_fec(&bd->link_info));
> + } else {
> + netif_carrier_off(bd->netdev);
> + netdev_err(bd->netdev, "NIC Link is Down\n");
Is netdev_err() appropriate here? If this is not a hardware fault,
netdev_info() might be more appropriate for a normal link-down
event.
> + }
> +}
Reviewed-by: Alok Tiwari <alok.a.tiwari@oracle.com>
Thanks,
Alok
* Re: [PATCH net-next v6 10/10] bng_en: add support for ethtool -S stats display
2026-03-10 14:40 ` [PATCH net-next v6 10/10] bng_en: add support for ethtool -S stats display Bhargava Marreddy
@ 2026-03-12 7:01 ` ALOK TIWARI
2026-03-13 2:22 ` Jakub Kicinski
1 sibling, 0 replies; 16+ messages in thread
From: ALOK TIWARI @ 2026-03-12 7:01 UTC (permalink / raw)
To: Bhargava Marreddy, davem, edumazet, kuba, pabeni, andrew+netdev,
horms
Cc: netdev, linux-kernel, michael.chan, pavan.chebbi,
vsrama-krishna.nemani, vikas.gupta
On 3/10/2026 8:10 PM, Bhargava Marreddy wrote:
> +static bool is_tx_ring(struct bnge_dev *bd, u16 ring_num)
> +{
> + u16 tx_base = 0;
> +
> + if (!(bd->flags & BNGE_EN_SHARED_CHNL))
> + tx_base = bd->rx_nr_rings;
> +
> + if (ring_num >= tx_base && ring_num < (tx_base + bd->tx_nr_rings))
> + return true;
> + return false;
What about just:
return (ring_num >= tx_base && ring_num < (tx_base + bd->tx_nr_rings));
> +}
Reviewed-by: Alok Tiwari <alok.a.tiwari@oracle.com>
Thanks,
Alok
* Re: [net-next,v6,02/10] bng_en: query PHY capabilities and report link status
2026-03-10 14:40 ` [PATCH net-next v6 02/10] bng_en: query PHY capabilities and report link status Bhargava Marreddy
2026-03-11 15:44 ` ALOK TIWARI
@ 2026-03-13 2:18 ` Jakub Kicinski
1 sibling, 0 replies; 16+ messages in thread
From: Jakub Kicinski @ 2026-03-13 2:18 UTC (permalink / raw)
To: bhargava.marreddy
Cc: Jakub Kicinski, andrew+netdev, ajit.khaparde, pabeni, netdev,
rajashekar.hudumula, linux-kernel, davem, horms, rahul-rg.gupta,
edumazet, michael.chan, vikas.gupta, vsrama-krishna.nemani,
pavan.chebbi
This is an AI-generated review of your patch. The human sending this
email has considered the AI review valid, or at least plausible.
For local reproduction steps see https://netdev-ai.bots.linux.dev/ai-local.html
---
bng_en: query PHY capabilities and report link status
This commit queries PHY capabilities and supported speeds from firmware,
retrieves current link state (speed, duplex, pause, FEC), logs the
information, and seeds initial link state during probe.
> diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_link.c b/drivers/net/ethernet/broadcom/bnge/bnge_link.c
> new file mode 100644
> index 000000000000..0c76cbc794f7
> --- /dev/null
> +++ b/drivers/net/ethernet/broadcom/bnge/bnge_link.c
[ ... ]
> +void bnge_init_ethtool_link_settings(struct bnge_net *bn)
> +{
> + struct bnge_ethtool_link_info *elink_info = &bn->eth_link_info;
> + struct bnge_link_info *link_info;
> + struct bnge_dev *bd = bn->bd;
> +
> + link_info = &bd->link_info;
> +
> + if (BNGE_AUTO_MODE(link_info->auto_mode)) {
> + elink_info->autoneg = BNGE_AUTONEG_SPEED;
> + if (link_info->auto_pause_setting &
> + PORT_PHY_CFG_REQ_AUTO_PAUSE_AUTONEG_PAUSE)
> + elink_info->autoneg |= BNGE_AUTONEG_FLOW_CTRL;
> + bnge_set_auto_speed(bn);
> + } else {
> + bnge_set_force_speed(bn);
> + elink_info->req_duplex = link_info->duplex_setting;
> + }
Does the forced-speed (else) branch need to clear elink_info->autoneg?
When called only at probe time on a zero-initialized struct, leaving
autoneg uninitialized is harmless. However, commit f833dbb3f7ab
('bng_en: add support for link async events') in this same series adds
a re-init call path from bnge_sp_task() in response to
BNGE_LINK_CFG_CHANGE_SP_EVENT async firmware events.
If the PHY transitions from auto-negotiation to forced-speed mode (e.g.,
via BMC/management configuration change), the stale BNGE_AUTONEG_SPEED
and BNGE_AUTONEG_FLOW_CTRL bits in elink_info->autoneg persist. This
causes:
bnge_hwrm_set_link_common() to send auto-negotiation configuration to
firmware instead of forced-speed:
if (elink_info->autoneg & BNGE_AUTONEG_SPEED) {
req->auto_mode |= PORT_PHY_CFG_REQ_AUTO_MODE_SPEED_MASK;
...
bnge_hwrm_set_pause_common() to configure auto-negotiation pause
instead of forced pause:
if (bn->eth_link_info.autoneg & BNGE_AUTONEG_FLOW_CTRL) {
req->auto_pause = PORT_PHY_CFG_REQ_AUTO_PAUSE_AUTONEG_PAUSE;
...
bnge_update_phy_setting() to make incorrect speed-change decisions:
if (!(elink_info->autoneg & BNGE_AUTONEG_SPEED)) {
if (BNGE_AUTO_MODE(link_info->auto_mode))
update_link = true;
The bnxt reference driver (bnxt_init_ethtool_link_settings) has the same
pattern, but this doesn't excuse introducing it in a new driver.
No fix was found later in the series (checked through 2d7643a78ddb).
> + if (elink_info->autoneg & BNGE_AUTONEG_FLOW_CTRL)
> + elink_info->req_flow_ctrl =
> + link_info->auto_pause_setting & BNGE_LINK_PAUSE_BOTH;
> + else
> + elink_info->req_flow_ctrl = link_info->force_pause_setting;
> +}
[ ... ]
> diff --git a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
> index edbb42efb70b..db3a19bbf75a 100644
> --- a/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
> +++ b/drivers/net/ethernet/broadcom/bnge/bnge_netdev.c
[ ... ]
> @@ -2532,6 +2562,14 @@ static int bnge_open_core(struct bnge_net *bn)
>
> bnge_enable_napi(bn);
>
> + rc = bnge_update_phy_setting(bn);
> + if (rc) {
> + netdev_warn(bn->netdev, "failed to update PHY settings (rc: %d)\n",
> + rc);
> + bd->link_info.phy_retry = true;
> + bd->link_info.phy_retry_expires = jiffies + 5 * HZ;
> + }
> +
Does stale phy_retry state from a previous open/close cycle cause spurious
warnings?
When bnge_update_phy_setting() fails during bnge_open_core(),
bd->link_info.phy_retry is set to true and phy_retry_expires is set to
jiffies + 5*HZ. Neither bnge_close_core() nor bnge_close() clears
phy_retry.
If the interface is closed and then reopened with a successful
bnge_update_phy_setting() call (rc=0), the stale phy_retry remains true
with an expired phy_retry_expires. On the next timer tick, bnge_timer()
sees:
if (bd->link_info.phy_retry) {
if (time_after(jiffies, bd->link_info.phy_retry_expires)) {
bd->link_info.phy_retry = false;
netdev_warn(bn->netdev, "failed to update PHY settings after maximum retries.\n");
and logs a spurious warning even though the PHY update succeeded on reopen.
No later fix was found in the series (searched git range
f342d5af0695..2d7643a78ddb).
> set_bit(BNGE_STATE_OPEN, &bd->state);
>
> bnge_enable_int(bn);
[ ... ]
--
pw-bot: cr
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH net-next v6 10/10] bng_en: add support for ethtool -S stats display
2026-03-10 14:40 ` [PATCH net-next v6 10/10] bng_en: add support for ethtool -S stats display Bhargava Marreddy
2026-03-12 7:01 ` ALOK TIWARI
@ 2026-03-13 2:22 ` Jakub Kicinski
1 sibling, 0 replies; 16+ messages in thread
From: Jakub Kicinski @ 2026-03-13 2:22 UTC (permalink / raw)
To: Bhargava Marreddy
Cc: davem, edumazet, pabeni, andrew+netdev, horms, netdev,
linux-kernel, michael.chan, pavan.chebbi, vsrama-krishna.nemani,
vikas.gupta
On Tue, 10 Mar 2026 20:10:44 +0530 Bhargava Marreddy wrote:
> Implement the legacy ethtool statistics interface (.get_sset_count,
> .get_strings, and .get_ethtool_stats) to provide per-queue and
> port-level visibility.
>
> A. Exclusions:
> To comply with the "no duplication" policy, excluded all stats already
> covered by rtnl_link_stats64.
> For example:

> 1. tx_err
> 2. tx_fifo_underruns
> 3. tx_total_collisions
>
> B. Retained Stats Analysis:
> Retained the counters which provide unique per-queue granularity or
> represent different hardware events:
>
> 1. rxqN_errors
> 2. txqN_errors
>
> These provide per-ring visibility for debugging.
> While they look similar to ndo_get_stats64's [rx/tx]_errors counters,
> they map to different hardware sources:
> For example:
> - ethtool rxqN_errors: ctx_hw_stats->rx_error_pkts (per-ring)
> - ndo_get_stats64 rx_errors: (rx_port_stats->rx_false_carrier_frames +
> rx_jbr_frames) (per-port)
Still a lot of stuff here that has standard APIs. Here are a random 4:
- oom_discards
- rx_total_frames
- link_down_event
- rx_fec_corrected_blocks
Please explain what the per-ring errors and discards are exactly.
* Re: [PATCH net-next v6 06/10] bng_en: add HW stats infra and structured ethtool ops
2026-03-10 14:40 ` [PATCH net-next v6 06/10] bng_en: add HW stats infra and structured ethtool ops Bhargava Marreddy
@ 2026-03-18 7:47 ` Dan Carpenter
0 siblings, 0 replies; 16+ messages in thread
From: Dan Carpenter @ 2026-03-18 7:47 UTC (permalink / raw)
To: oe-kbuild, Bhargava Marreddy, davem, edumazet, kuba, pabeni,
andrew+netdev, horms
Cc: lkp, oe-kbuild-all, netdev, linux-kernel, michael.chan,
pavan.chebbi, vsrama-krishna.nemani, vikas.gupta,
Bhargava Marreddy, Rahul Gupta, Ajit Kumar Khaparde
Hi Bhargava,
kernel test robot noticed the following build warnings:
url: https://github.com/intel-lab-lkp/linux/commits/Bhargava-Marreddy/bng_en-add-per-PF-workqueue-timer-and-slow-path-task/20260310-231948
base: net-next/main
patch link: https://lore.kernel.org/r/20260310144044.21672-7-bhargava.marreddy%40broadcom.com
patch subject: [PATCH net-next v6 06/10] bng_en: add HW stats infra and structured ethtool ops
config: arm64-randconfig-r071-20260317 (https://download.01.org/0day-ci/archive/20260318/202603180509.nOQGZrG0-lkp@intel.com/config)
compiler: aarch64-linux-gcc (GCC) 8.5.0
smatch: v0.5.0-9004-gb810ac53
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
| Closes: https://lore.kernel.org/r/202603180509.nOQGZrG0-lkp@intel.com/
smatch warnings:
drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.c:1598 bnge_hwrm_port_qstats_ext() warn: missing error code? 'rc'
vim +/rc +1598 drivers/net/ethernet/broadcom/bnge/bnge_hwrm_lib.c
598cdbbcf8f4c1 Bhargava Marreddy 2026-03-10 1576 rc = bnge_hwrm_req_init(bd, req_qc, HWRM_QUEUE_PRI2COS_QCFG);
598cdbbcf8f4c1 Bhargava Marreddy 2026-03-10 1577 if (rc)
598cdbbcf8f4c1 Bhargava Marreddy 2026-03-10 1578 return rc;
598cdbbcf8f4c1 Bhargava Marreddy 2026-03-10 1579
598cdbbcf8f4c1 Bhargava Marreddy 2026-03-10 1580 req_qc->flags = cpu_to_le32(QUEUE_PRI2COS_QCFG_REQ_FLAGS_IVLAN);
598cdbbcf8f4c1 Bhargava Marreddy 2026-03-10 1581
598cdbbcf8f4c1 Bhargava Marreddy 2026-03-10 1582 resp_qc = bnge_hwrm_req_hold(bd, req_qc);
598cdbbcf8f4c1 Bhargava Marreddy 2026-03-10 1583 rc = bnge_hwrm_req_send(bd, req_qc);
598cdbbcf8f4c1 Bhargava Marreddy 2026-03-10 1584 if (!rc) {
598cdbbcf8f4c1 Bhargava Marreddy 2026-03-10 1585 u8 *pri2cos;
598cdbbcf8f4c1 Bhargava Marreddy 2026-03-10 1586 int i, j;
598cdbbcf8f4c1 Bhargava Marreddy 2026-03-10 1587
598cdbbcf8f4c1 Bhargava Marreddy 2026-03-10 1588 pri2cos = &resp_qc->pri0_cos_queue_id;
598cdbbcf8f4c1 Bhargava Marreddy 2026-03-10 1589 for (i = 0; i < 8; i++) {
598cdbbcf8f4c1 Bhargava Marreddy 2026-03-10 1590 u8 queue_id = pri2cos[i];
598cdbbcf8f4c1 Bhargava Marreddy 2026-03-10 1591 u8 queue_idx;
598cdbbcf8f4c1 Bhargava Marreddy 2026-03-10 1592
598cdbbcf8f4c1 Bhargava Marreddy 2026-03-10 1593 /* Per port queue IDs start from 0, 10, 20, etc */
598cdbbcf8f4c1 Bhargava Marreddy 2026-03-10 1594 queue_idx = queue_id % 10;
598cdbbcf8f4c1 Bhargava Marreddy 2026-03-10 1595 if (queue_idx >= BNGE_MAX_QUEUE) {
598cdbbcf8f4c1 Bhargava Marreddy 2026-03-10 1596 bn->pri2cos_valid = false;
598cdbbcf8f4c1 Bhargava Marreddy 2026-03-10 1597 bnge_hwrm_req_drop(bd, req_qc);
598cdbbcf8f4c1 Bhargava Marreddy 2026-03-10 @1598 return rc;
598cdbbcf8f4c1 Bhargava Marreddy 2026-03-10 1599 }
598cdbbcf8f4c1 Bhargava Marreddy 2026-03-10 1600 for (j = 0; j < bd->max_q; j++) {
598cdbbcf8f4c1 Bhargava Marreddy 2026-03-10 1601 if (bd->q_ids[j] == queue_id)
598cdbbcf8f4c1 Bhargava Marreddy 2026-03-10 1602 bn->pri2cos_idx[i] = queue_idx;
598cdbbcf8f4c1 Bhargava Marreddy 2026-03-10 1603 }
598cdbbcf8f4c1 Bhargava Marreddy 2026-03-10 1604 }
598cdbbcf8f4c1 Bhargava Marreddy 2026-03-10 1605 bn->pri2cos_valid = true;
598cdbbcf8f4c1 Bhargava Marreddy 2026-03-10 1606 }
598cdbbcf8f4c1 Bhargava Marreddy 2026-03-10 1607 bnge_hwrm_req_drop(bd, req_qc);
598cdbbcf8f4c1 Bhargava Marreddy 2026-03-10 1608
598cdbbcf8f4c1 Bhargava Marreddy 2026-03-10 1609 return rc;
598cdbbcf8f4c1 Bhargava Marreddy 2026-03-10 1610 }
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
end of thread, other threads:[~2026-03-18 7:47 UTC | newest]
Thread overview: 16+ messages
2026-03-10 14:40 [PATCH net-next v6 00/10] bng_en: add link management and statistics support Bhargava Marreddy
2026-03-10 14:40 ` [PATCH net-next v6 01/10] bng_en: add per-PF workqueue, timer, and slow-path task Bhargava Marreddy
2026-03-10 14:40 ` [PATCH net-next v6 02/10] bng_en: query PHY capabilities and report link status Bhargava Marreddy
2026-03-11 15:44 ` ALOK TIWARI
2026-03-13 2:18 ` [net-next,v6,02/10] " Jakub Kicinski
2026-03-10 14:40 ` [PATCH net-next v6 03/10] bng_en: add ethtool link settings, get_link, and nway_reset Bhargava Marreddy
2026-03-10 14:40 ` [PATCH net-next v6 04/10] bng_en: implement ethtool pauseparam operations Bhargava Marreddy
2026-03-10 14:40 ` [PATCH net-next v6 05/10] bng_en: add support for link async events Bhargava Marreddy
2026-03-10 14:40 ` [PATCH net-next v6 06/10] bng_en: add HW stats infra and structured ethtool ops Bhargava Marreddy
2026-03-18 7:47 ` Dan Carpenter
2026-03-10 14:40 ` [PATCH net-next v6 07/10] bng_en: periodically fetch and accumulate hardware statistics Bhargava Marreddy
2026-03-10 14:40 ` [PATCH net-next v6 08/10] bng_en: implement ndo_get_stats64 Bhargava Marreddy
2026-03-10 14:40 ` [PATCH net-next v6 09/10] bng_en: implement netdev_stat_ops Bhargava Marreddy
2026-03-10 14:40 ` [PATCH net-next v6 10/10] bng_en: add support for ethtool -S stats display Bhargava Marreddy
2026-03-12 7:01 ` ALOK TIWARI
2026-03-13 2:22 ` Jakub Kicinski